Can we trust AI as a causal mapping assistant? Our new validation study

Dec 9, 2025
Our new paper, "AI-assisted causal mapping: a validation study" (by Steve and Gabriele), has just been published in the International Journal of Social Research Methodology.
Identifying causal claims in interview transcripts used to be incredibly time-consuming. But we do NOT want to build a "black box" that models the whole system for you. Instead, we wanted to find out whether we could use LLMs as low-level assistants to speed this up without sacrificing rigour or transparency.
So in this paper we ask: is the ability of current AI models to identify causal claims within texts of sufficient quality to be useful?
The results are encouraging: we found that it is possible to use these tools to exhaustively and transparently identify and summarise causal claims.
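To make "low-level assistant" concrete: the core step is simply asking an LLM, chunk by chunk, to list the cause-and-effect claims a speaker actually makes, each tied to a verbatim quote. Here is a minimal sketch of that step. It is purely illustrative, not the pipeline from the paper: the OpenAI-style client, the model name, the prompt wording, and the JSON output format are all assumptions for the sake of the example.

```python
# Purely illustrative sketch; not the code or prompts used in the study.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The model name and JSON schema
# are hypothetical stand-ins.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "List every causal claim the speaker makes in the text below. "
    "Return a JSON array of objects with keys 'cause', 'effect', and "
    "'quote' (the verbatim sentence supporting the claim). "
    "Do not infer links the speaker does not actually assert.\n\n"
    "Text:\n{chunk}"
)

def extract_causal_claims(chunk: str, model: str = "gpt-4o-mini") -> list[dict]:
    """Ask the LLM for cause -> effect claims, each tied to a quote."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(chunk=chunk)}],
        temperature=0,  # favour reproducibility over creativity
    )
    # A real pipeline would validate this output; a sketch just parses it.
    return json.loads(response.choices[0].message.content)

# One interview snippet in, a list of evidenced causal links out.
for claim in extract_causal_claims(
    "Because the training was cancelled, staff felt less confident, "
    "and morale dropped as a result."
):
    print(f"{claim['cause']} -> {claim['effect']} (evidence: {claim['quote']!r})")
```

The detail that matters is the verbatim quote attached to each extracted link: because every claim points back to the text that supports it, the process stays transparent and auditable rather than becoming a black box.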

Abstract

People doing causal systems mapping are often interested in harvesting claims about causal links from text sources, for example from interview transcripts with experts: a task which used to be time-consuming. In this paper, we show how to use generative AI as a low-level assistant to exhaustively and transparently identify and then summarise causal claims. We use techniques from causal mapping. We do not try to model the system or assess the strength of causal links, but rather to assess the strength of the evidence for each causal link or pathway: an approach which is comparatively easy to automate. We ask: Is the ability of LLMs to identify causal claims within texts of sufficient quality to be useful, and what can we say about reliability or validity? The results are encouraging. We conclude by discussing risks and ethical issues, as well as suggesting some areas for further research.
Keywords: Causal mapping; generative AI; LLM; validation