Responding to our one-page description of causal mapping, Julian King says that the elephant in the room with causal mapping is this: can causal mapping really help you get from causal opinions to causal inference?
The short answer is: sure, it can help you, the evaluator, make that leap. But causal mapping does not give out free passes.
In more detail, here are four responses to the elephant.
1: Causal mapping ignores the elephant. On its own, causal mapping doesn’t even try to warrant that kind of step: it humbly assembles and organises heaps of assorted evidence so that the evaluator can make the final evaluative judgement. Unlike evidence from an interview, or a conclusion from process tracing or from a randomised experiment, causal mapping evidence isn’t a kind of evidence in its own right; it’s an assemblage of those other kinds of evidence. It certainly isn’t a shortcut to cheap answers to high-stakes questions on the basis of a few interviews with bystanders.
If you have to answer high-stakes causal questions like “did X cause Y?” or “how much did X contribute to Y?” and have just a handful of pieces of evidence, there isn’t much point in using causal mapping. Causal mapping is most useful for larger heaps of evidence, especially evidence from mixed sources and of mixed quality; it gives you a whole range of ways of sorting and summarising that information, on which you can base your evaluative judgements. What it doesn’t give you is a free pass to any evaluation conclusions, and especially not to the high-stakes ones which occupy so much of our attention when we think and write about evaluation.
2: In most actual causal mapping studies, the elephant doesn’t even enter the room. Usually, we aren’t dealing with monolithic, high-stakes questions. Most causal mappers are looking for (and finding) answers to questions like these:
In which districts was our intervention mentioned most often?
Do children see things differently?
How much evidence is there linking our intervention to this outcome?
Does our project plan see the world in the same way as our stakeholders?
All of these are relevant questions for evaluations. Some of them might feed into judgements about relevance, or about effectiveness or impact, and so on. We might notice, for example, that there is some evidence for a direct link from an intervention to an outcome, much more evidence for indirect links, and that some of those paths remain even when we remove less reliable sources. We can compare the quantity of evidence for one causal pathway with the quantity of evidence for another. We can ask how many sources mention the entirety of a particular pathway, or which pathways have to be stitched together out of evidence from different sources. (On the other hand, we don’t make the mistake of inferring, from the fact that there is a lot of evidence for a particular causal link, that the link is a strong one.)
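To make this concrete, here is a minimal sketch in R of how questions like these can be answered from a simple table of causal claims. The table, its column names and the example data are all invented for illustration; this is not the actual Causal Map data format or API.

```r
library(dplyr)
library(tibble)

# Each row is one causal claim coded from the evidence:
# `source_id` said that `cause` influences `effect`.
# (Hypothetical columns and data, for illustration only.)
claims <- tribble(
  ~source_id, ~district, ~reliability, ~cause,         ~effect,
  "S01",      "North",   "high",       "intervention", "income",
  "S02",      "North",   "low",        "intervention", "training",
  "S02",      "North",   "low",        "training",     "income",
  "S03",      "South",   "high",       "intervention", "training",
  "S03",      "South",   "high",       "training",     "income"
)

# In which districts was our intervention mentioned most often?
claims |>
  filter(cause == "intervention" | effect == "intervention") |>
  count(district, sort = TRUE)

# How much evidence is there for each causal link?
claims |>
  count(cause, effect, sort = TRUE)

# Which links survive when we drop the less reliable sources?
claims |>
  filter(reliability == "high") |>
  count(cause, effect)
```

Note that these are all counts of pieces of evidence, not measures of causal strength; that distinction is exactly the point made above.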
All of this is bread and butter for evaluators, even though it doesn’t answer those elephant questions.
3: Causal mapping pushes back against the elephant.
In every evaluation, the evaluator assembles some evidence and makes an evaluative judgement on the basis of it. All evaluation involves causal mapping in this sense. Occasionally there is, or seems to be, only a single piece of evidence in the heap – perhaps evidence from a controlled experiment. But the final judgement is the evaluator’s responsibility, and (perhaps implicitly) it must take other factors into account: “this is a controlled experiment, it was carried out by a reputable team … but wait, their most recent study was criticised for allowing too much contamination … but then again, the effect sizes were calculated with the latest method, and controlled experiments seem to be a good way of reaching causal conclusions …”, and so on. An essential part of the evaluative process is also careful consideration of how exactly to formulate a conclusion, bearing in mind the context, the audience, and how the conclusion will be generalised and applied. So, in practice, there is always a heap of factors to consider, often involving different parts of more than one causal pathway, even when the heap seems to be dominated by one or two elephants.
4: Causal mapping embraces the elephant. In most causal mapping studies, we do not simply assemble the evidence we already have; we actively and systematically gather it. A good example is QuIP, the Qualitative Impact Assessment Protocol. The evidence is “only” the considered opinions of carefully selected individual stakeholders, but it is gathered using blindfolding techniques to minimise bias, so that, once assembled and organised with causal mapping, the evaluative leap from opinions about causality to conclusions about causality can be made with more confidence, transparently and with appropriate caveats. Still, it isn’t the causal mapping itself which makes or warrants the leap; it’s the evaluator, using evaluative judgement.
Over the last three years we have developed a whole range of algorithms for causal mapping, published open-source as functions for the programming language R.
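As a hedged illustration of the kind of operation those functions perform (not their actual API), here is how pathway tracing might look using the generic igraph package and the hypothetical `claims` table from the earlier sketch: for each pathway from intervention to outcome, we ask whether any single source asserts every link in it, or whether it has to be stitched together from different sources.

```r
library(igraph)

# Build a directed multigraph from the `claims` table above,
# keeping the source of each claim as an edge attribute.
g <- graph_from_data_frame(
  claims[, c("cause", "effect", "source_id")],
  directed = TRUE
)

# Every simple pathway from the intervention to the outcome.
paths <- all_simple_paths(g, from = "intervention", to = "income")

for (p in paths) {
  steps <- names(p)
  links <- tibble(cause = head(steps, -1), effect = tail(steps, -1))
  # Sources that assert every link on this pathway.
  whole_path_sources <- claims |>
    distinct(source_id, cause, effect) |>
    inner_join(links, by = c("cause", "effect")) |>
    count(source_id) |>
    filter(n == nrow(links)) |>
    pull(source_id)
  cat(paste(steps, collapse = " -> "), "|",
      if (length(whole_path_sources) > 0)
        paste("whole-path sources:",
              paste(whole_path_sources, collapse = ", "))
      else "stitched together from different sources", "\n")
}
```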