Causality — a crash course
19 Oct, 2025 at 18:35 | Posted in Theory of Science & Methodology
Yours truly has been offering a crash course on causality to fellow researchers at Malmö University over the past couple of years. If you’re curious, the course PowerPoint is available here: Causality – a crash course.
Many research questions in the social sciences today are fundamentally about issues of causality. What is behind the rise in unemployment? What effects do independent schools have on pupils’ grades?
But questions of cause and effect are also prevalent in politics. What measures are needed to tackle rising inflation? What consequences has the COVID-19 pandemic had on public health? Answering these kinds of questions presupposes causal reasoning.
Since the mid-20th century, causality has been increasingly discussed in terms of probability. With traditional statistical methods, we can uncover correlations and descriptive measures of their nature (statistical significance, robustness, sensitivity, etc.). However, we as researchers (and policymakers) often wish to go further than that and also try to identify causal mechanisms and relationships that can explain existing correlations. This makes it possible, for instance, through the active use of manipulations/interventions, to ascertain the causal effects of various policy packages and measures.
In the research world, it has now become extremely important — not least for securing research funding — to use research designs with a high ‘evidence value’. Throughout, evidence hierarchies are ranked according to the degree of causal knowledge that different research strategies yield (here, the debate around Randomised Controlled Trials (RCTs) and meta-analyses has played a significant role).
The course presents fundamental theories and models for causal inference and describes and demonstrates how to use the most common and popular methods for causal inference, primarily within the social sciences.
Part 1 presents the various attempts made over time to define causality and to track the effects of interventions in different fields. It also discusses a number of currently more or less popular theories of causality that take their starting point in concepts and models of probability, mechanisms, counterfactuals, manipulability, etc.
In Part 2, the focus is on the link between causality and probability. A common idea in several of the presented theories is that causality involves one event (a cause) changing the probability of another event (an effect). The ability of traditional regression analysis to deliver causal relationships and explanations has been increasingly called into question. Graph theory (DAGs), Neyman-Rubin’s ‘potential outcomes’ theory, and Judea Pearl’s ‘do-calculus’ are some examples of theories now presented as superior alternatives. The lecture includes a critical discussion of whether, and to what extent, this is actually the case.
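To make the difference between conditioning and intervening concrete, here is a minimal Monte-Carlo sketch of Pearl’s do-operator on an invented three-variable model (Z confounds X and Y; all probabilities are made up for illustration, not taken from any real study):

```python
import random

random.seed(0)

def sample(intervene_x=None):
    """One draw from a toy structural causal model Z -> X, Z -> Y, X -> Y.
    do(X=x) replaces the X-equation, cutting the Z -> X arrow."""
    z = random.random() < 0.5                                        # confounder
    x = intervene_x if intervene_x is not None else (random.random() < (0.8 if z else 0.2))
    y = random.random() < (0.1 + 0.3 * x + 0.5 * z)                  # Y depends on X and Z
    return z, x, y

n = 100_000
draws = [sample() for _ in range(n)]

# Conditioning: P(Y=1 | X=1) -- mixes the causal effect with confounding by Z
obs = [y for z, x, y in draws if x]
p_cond = sum(obs) / len(obs)

# Intervening: P(Y=1 | do(X=1)) -- simulate the mutilated model directly
p_do = sum(y for z, x, y in (sample(intervene_x=True) for _ in range(n))) / n

print(f"P(Y|X=1)     = {p_cond:.2f}")   # roughly 0.80, inflated by confounding
print(f"P(Y|do(X=1)) = {p_do:.2f}")     # roughly 0.65, the interventional quantity
```

The gap between the two numbers is exactly what the do-calculus is meant to formalise: conditioning picks up the confounder’s influence, whereas intervening severs it. Whether such ‘surgical’ model mutilation is legitimate for real social systems is, of course, the question at issue in what follows.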
In Part 3, recent advancements in Machine Learning, Big Data, and AI are discussed. It has been claimed in recent years that these make it possible not only to ascertain causality through experimental randomised studies as before, but also to detect and establish causal chains from observational data, based on more or less reasonable assumptions. Here, the limitations of these evidence-based methods are also, of course, open for discussion. Even if one does not use experimental and randomisation strategies — now considered the obvious ‘gold standard’ — one can certainly argue that good grounds can be presented for the knowledge claims we make with other methods (field studies, case studies, natural experiments, quasi-experiments, etc.). Not least from the perspective of external validity and generalisability, it is important to critically evaluate the evidence hierarchies that prevail in the research world today.
Framing all causal questions as questions of manipulation or intervention leads to many challenges, particularly when we entertain ‘hypothetical’ and ‘symbolic’ interventions. Humans have few barriers to imagination, but this can make it difficult to assess the relevance of the proposed thought experiments. Performing ‘well-defined’ interventions is one thing, but if we don’t want to give up searching for answers to the questions we’re genuinely interested in, and instead only focus on answerable questions, interventionist studies become of limited applicability and value. Intervention effects in thought experiments do not automatically represent the causal effects we are seeking. Identifying the causes of effects (reverse causal inference) and measuring the effects of causes (forward causal inference) are not the same thing. In the social sciences, such as economics, we typically begin by identifying the problem and understanding why it occurred, and only afterwards do we look at the effects of the causes.
Relying on the interventionist approach often means that, instead of posing interesting questions on a social level, the focus shifts to individuals. Rather than asking about the structural socio-economic factors behind, for instance, gender or racial discrimination, the emphasis is placed on the choices made by individuals (which — as yours truly argues in his book Ekonomisk teori och metod — tends to result in explanations that are inadequately ‘deep’). Within the manipulation/intervention approach to causality, only a specific type of restricted causal question can be posed. A typical example of the pitfalls of this limiting approach is the work of ‘Nobel prize’ winner Esther Duflo, who argues that economics should be based on evidence from randomised experiments and field studies. Duflo and her colleagues wish to move away from ‘big ideas’ like political economy and institutional reform, opting instead to address more manageable problems, much like plumbers do. Yours truly remains unconvinced that this is the right direction for advancing economics and making it a relevant and realistic science. A plumber may fix minor leaks in your system, but if the entire system is failing, something more than just good old-fashioned plumbing is required. The grand social and economic problems we face today will not be solved by plumbers performing interventions or manipulations in the form of randomised controlled trials.
Although, of course, it is possible (to varying degrees, depending on context) to fit causal questions into a manipulation/intervention framework, before we can do so, we must first agree that we’ve identified the causal problem we aim to address when recommending different policies. Before we can calculate causal effects, we must identify the causes — and this is perhaps not always best done within a manipulation/intervention framework. One issue is that this approach, when applied to broader social and economic contexts, requires a reframing of the questions we pose, often resulting in ‘well-defined’ causal answers, but not necessarily answers to the questions we are truly interested in. The manipulation/intervention framework is just one way of conducting causal analysis — but it is not the way. As an advocate of ‘inference to the best explanation,’ I believe we must also carefully consider explanatory factors when estimating and identifying causal relations.
The potential outcomes and interventionist accounts of causality are not identical. But despite their differences in emphasis and formalism, the two accounts are strongly connected and conceptually intertwined.
While I don’t think we should discard all (causal) theories based on ‘modular’ interventions, stability, invariance, etc., we must acknowledge that, outside of systems that possibly satisfy these assumptions, such theories are of questionable substantive value.
Modularity refers to the possibility of independently manipulating causal relationships within a system. Most economists today — especially when conducting experiments — assume some form of invariance or modularity, meaning that one can intervene on a part of a model without altering other dependencies within that model.
Modularity makes causal inferences drawn from ‘interventions’ stable. But while causal inferences are not possible without making some assumptions, you must always justify why these assumptions are reasonable. In the case of modularity, this means demonstrating that, for the target system you are analysing — society — it is possible to make ‘surgical interventions,’ ‘wiggle,’ or ‘manipulate’ parts of the system without affecting other parts. Since societies are inherently interactionally complex, open systems, it is practically difficult to find causes that are independently manipulable and exhibit such invariance under intervention. Most social mechanisms and relations are not modular. Extraordinary claims require extraordinary evidence. So, if researchers wish to continue using models that assume modularity, they must begin to justify its reasonableness. As scientists, we should not merely accept what is conventionally assumed. When is modularity a reasonable assumption, and when is it not? That modularity allows us to identify causality in ‘epistemically convenient systems’ is no argument for assuming that it applies to real-world societies.
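What a modularity assumption amounts to can be sketched in a toy two-equation system (the functional forms and parameter values here are entirely invented for illustration). The point of the sketch is only to show what the assumption buys: that one equation can be surgically replaced while the other is presumed to stay exactly as it was.

```python
def solve(a=2.0, b=0.5, base=10.0, k=0.4, intervene_c=None):
    """Toy 'economy' with two structural equations (invented numbers):
        consumption:  C = a + b * Y
        income:       Y = base + k * C
    Modularity assumes we may replace the consumption equation with C = c0
    while the income equation remains untouched."""
    if intervene_c is not None:
        c = intervene_c                       # surgical replacement of ONE equation
        y = base + k * c                      # the OTHER equation assumed invariant
    else:
        # Solve C = a + b*(base + k*C)  =>  C = (a + b*base) / (1 - b*k)
        c = (a + b * base) / (1 - b * k)
        y = base + k * c
    return c, y

print(solve())                  # equilibrium with both equations intact
print(solve(intervene_c=5.0))   # 'wiggle' C, keeping the income equation fixed
```

The criticism in the text is precisely that this last step is a substantive empirical claim, not a free move: in a non-modular social system, fixing consumption (say, by rationing) would itself change behaviour and hence the parameters of the remaining equation, so the post-intervention system is not the old system with one equation swapped out.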
Modularity does not hold for all causal systems, and attempting to ‘save’ it by suggesting (as Judea Pearl does) that a ‘symbolic intervention’ is sufficient, is unconvincing. Developing models that show how things might possibly be explained is not the goal. It’s not enough. Effective explanations in the social sciences must go beyond mere theoretical possibilities and accurately account for observable realities. We need models built on assumptions that do not conflict with known facts, and which show how things are actually to be explained.
In the potential outcomes approach to causality, sex and race are often not considered causes, since they do not fit within this counterfactual manipulation/intervention framework of causal inference. Sex and race cannot be directly manipulated or intervened upon, which makes it difficult to conceptualise what the ‘potential outcomes’ would be for individuals if they were of a different sex or race. Most social scientists would arguably consider this a serious drawback to the real-world applicability of the approach.
Limiting causes to those that are manipulable in the potential outcomes framework excludes important factors like sex and race. Arguing that only well-specified interventions should count as causes sounds utterly contorted to most social scientists. Causal inference frameworks should be flexible enough to account for causes like sex and race, even if they do not neatly fit into the traditional experimental model.
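The bookkeeping behind the potential outcomes framework, and why attributes like sex and race fall outside it, can be shown in a few lines (the outcome numbers below are made up purely for illustration):

```python
# Neyman-Rubin style potential-outcomes table: each unit carries two potential
# outcomes, y1 if treated and y0 if untreated (here, unrealistically, both are known).
units = [
    {"y0": 3, "y1": 5},
    {"y0": 2, "y1": 2},
    {"y0": 4, "y1": 7},
    {"y0": 1, "y1": 3},
]

# The average treatment effect is the mean of the unit-level differences.
ate = sum(u["y1"] - u["y0"] for u in units) / len(units)
print(ate)  # 1.75

# The scheme presupposes a well-defined manipulation that switches a unit from
# y0 to y1. For an attribute such as sex or race there is no such switch, so the
# pair (y0, y1) -- and hence the 'effect' -- is not even well defined here.
```

This is the formal root of the ‘no causation without manipulation’ dictum criticised above: the framework’s very definition of an effect requires a treatment one could, at least in principle, administer or withhold.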
Our aspirations must be more far-reaching than simply constructing coherent and ‘credible’ models about ‘possible worlds.’ We want to understand and explain ‘difference-making’ in the real world. No matter how many mechanisms or coherent relations you represent in your model, you still need to show that these mechanisms and relations are at work and exist in society if we are to conduct real science. Science must be something more than just realistic storytelling or ‘explanatory fictionalism.’ You need to provide decisive empirical evidence that what you infer from your model helps us uncover what is actually happening in the real world. It is not enough to present epistemically informative insights about logically possible models.
Moreover, and more importantly, you must present a world-linking argument and show how those models explain or teach us something about real-world societies. If you fail to support your models in that way, why should we care about them? And if you do not clarify what the real-world target systems of your models are, how are we supposed to evaluate or test them? Without that information, it’s impossible for us to check whether the ‘possible world’ models you propose hold true for the one world we care about — the real world.
The interventionist approach to causal inference has grown increasingly popular in the social sciences over the past three decades. However, most social systems are complex, evolving, contingent, dynamic, emergent, and genuinely uncertain. The theories and methods based on the interventionist approach are not suitable for those systems. As an unsubstantiated general assumption guiding causal analysis in the social sciences, modularity should be abandoned. Other, more pluralistic methods and theories of causal inference and explanation are required.

