Causal Inference is More than Fitting the Data Well

This post first appeared on February 1, 2021, on causalscience.org.

Causal inference is becoming an increasingly important topic in industry. Several big players have already taken notice and started to invest in the causal data science skills of their people. One piece of evidence was surely the huge success of the first Causal Data Science Meeting last year. Our own research further supports this point. Over the course of the last year, we have talked to many data scientists working in the tech sector and related industries, and all of them reported to us that interest in causality, and frustration about the limits of classical machine learning, are rising. Especially when you tackle complex problems related to the strategic direction of your company, the ability to forecast the effects of your actions, and thus causal inference, becomes crucial.

Yet, we also learned that applying causal inference methods poses a number of significant challenges for practitioners. Not only is there an educational gap (many data scientists still do not have much experience with these tools), but cleanly identifying the root causes behind relationships in your data and ruling out alternative explanations can also be time-consuming. Data science teams often simply do not have the time to run an elaborate study because of the pressure to bring models to production quickly.

Need for a cultural change

Another important bottleneck we have encountered in our research, though, is cultural. Classical machine learning is all about minimizing prediction error. The more accurately your model can classify x-ray images or forecast future stock market prices, for example, the better. This simple target gives you an objective standard of evaluation that is easy for everyone to understand. ML research has made great progress in the past by running competitions on which methods and algorithms provide the best out-of-sample fit in problem domains ranging from image recognition to natural language processing. Such an objective and simple evaluation criterion is missing in causal inference.

Causal inference is much harder than simply optimizing a loss function, and context-specific domain knowledge plays a crucial role. Unless you can benchmark your model predictions against actual experiments (which is pretty rare in practice, and even then you will only be able to tell how well you did ex post), there is no simple criterion for judging the accuracy of a particular estimate. The quality of causal inferences depends on several crucial assumptions, which are not easily testable with the data at hand. This forces people to completely rethink the way they approach their data science and ML problems.

In fact, there is an important theoretical reason why causal data science is challenging in that regard. It is called the Pearl causal hierarchy (PCH). The PCH, also known as the ladder of causation, states that any data analysis can be mapped to one of three distinct layers of an information hierarchy. At the lowest rung there are associations, which refer to simple conditional probability statements between variables in the data. They remain purely correlational (“how does X relate to Y?”) and therefore do not have any causal meaning. The second rung relates to interventions (“what happens to Y if I manipulate X?”), and here we already enter the world of causality. On the third rung we finally have counterfactuals (“what would Y have been had X been x?”), which represent the highest form of causal reasoning.
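
Stated a bit more formally, the three rungs correspond to three different kinds of queries. The notation below is only a schematic summary in standard do-calculus style, not a quotation from a specific paper:

```latex
% The three rungs of the Pearl causal hierarchy as query types (schematic).

% Rung 1 -- Association: how does observing X = x change our belief about Y?
P(Y = y \mid X = x)

% Rung 2 -- Intervention: what happens to Y if we actively set X to x?
P(Y = y \mid \mathrm{do}(X = x))

% Rung 3 -- Counterfactual: given that we actually observed X = x and Y = y,
% what would Y have been had X been x'?
P(Y_{x'} = y' \mid X = x, Y = y)
```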

Causal inference cannot be purely data-driven

The PCH tells us that to climb the ladder of causation and be able to infer causal effects from data, we need to be willing to make at least some causal assumptions in the first place. “No causes in, no causes out”! This fact can be proven mathematically. There is no CI method that is entirely data-driven. You always need that extra ingredient in the form of specific domain knowledge, which is introduced to the problem and can only be judged based on experience and theoretical reasoning. This is how causal diagrams work, for example, but other causal assumptions, such as conditional independence, instrument validity, or parallel trends in difference-in-differences, fall into the same category.
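
To make the “causes in, causes out” idea concrete, here is a minimal sketch using Microsoft's open-source DoWhy library for Python (mentioned further down this page). The data, variable names, and coefficients are made up for illustration, and we assume a DoWhy version that accepts a DOT-format graph string. The key point is that the graph we pass in is the causal assumption; without it, no adjustment formula could be derived from the data alone.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel  # assumes the dowhy package is installed

# Toy data: W confounds the effect of X on Y (made-up data-generating process).
rng = np.random.default_rng(0)
n = 1000
W = rng.normal(size=n)
X = (W + rng.normal(size=n) > 0).astype(int)  # treatment depends on W
Y = 2.0 * X + 1.5 * W + rng.normal(size=n)    # outcome depends on X and W
df = pd.DataFrame({"W": W, "X": X, "Y": Y})

# The graph is the extra, non-data ingredient: W -> X, W -> Y, X -> Y.
# Nothing in the data by itself forces this structure on us.
model = CausalModel(
    data=df,
    treatment="X",
    outcome="Y",
    graph="digraph {W -> X; W -> Y; X -> Y;}",
)

# Only given the assumed graph can a backdoor adjustment set ({W}) be identified.
identified_estimand = model.identify_effect()
print(identified_estimand)
```

Whether {W} really blocks all backdoor paths is exactly the kind of question the data cannot answer for us; it has to come from domain knowledge.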

Because these causal assumptions are necessarily context-specific, they are more complex and multidimensional than a simple fit criterion based on squared loss. That does not mean they are in any way arbitrary, though. The theoretical requirements for causal inference imposed by the PCH call for an entirely new way of thinking about data science, which also introduces non-trivial organizational challenges. We need to put domain experts, such as clients, engineers, and sales partners, in the loop; they can tell us whether our assumptions make sense and whether the way we model a certain problem is accurate. This will lead to a much more holistic approach to data science and to the way teams are structured. Some first steps in that direction are described here in a post by Patrick Doupe, principal economist at Zalando. In the coming months we plan to publish more content of that sort, creating a dialogue between industry and academia on how to advance causal inference applications in industry practice.

Mapping Unchartered Territory

A frequent point of criticism against Directed Acyclic Graphs is that writing them down for a real-world problem can be a difficult task. There are numerous possible variables to consider and it’s not clear how we can determine all the causal relationships between them. We recently had a Twitter discussion where exactly this argument popped up again.

Continue reading Mapping Unchartered Territory

PO vs. DAGs – Comments on Guido Imbens’ New Paper

Guido Imbens published a new working paper in which he develops a detailed comparison of the potential outcomes framework (PO) and directed acyclic graphs (DAGs) for causal inference in econometrics. I really appreciate this paper because it introduces a broader audience in economics to DAGs and highlights the complementarity of both approaches for applied econometric work. Continue reading PO vs. DAGs – Comments on Guido Imbens’ New Paper

Causal Data Science in Business

A while back I was posting about Facebook’s causal inference group and how causal data science tools slowly find their way from academia into business. Since then I have come across many more examples of well-known companies investing in their causal inference (CI) capabilities: Microsoft released its DoWhy library for Python, providing CI tools based on Directed Acyclic Graphs (DAGs); I recently met people from IBM Research interested in the topic; Zalando is constantly looking for people to join their CI/ML team; and Lufthansa, Uber, and Lyft have research units working on causal AI applications too. Continue reading Causal Data Science in Business

Don’t Put Too Much Meaning Into Control Variables

Update: The success of this blog post motivated us to formulate our point in a bit more detail in this paper, which is available on arXiv. Check it out if you need a citable version of the argument below.


I’m currently reading this great paper by Carlos Cinelli and Chad Hazlett: “Making Sense of Sensitivity: Extending Omitted Variable Bias”. They develop a full suite of sensitivity analysis tools for the omitted variable problem in linear regression, which everyone interested in causal inference should have a look at. Although it is somewhat of a side topic, they make an important point on page 6 (footnote 6): Continue reading Don’t Put Too Much Meaning Into Control Variables

Beyond Curve Fitting

Last week I attended the AAAI spring symposium on “Beyond Curve Fitting: Causation, Counterfactuals, and Imagination-based AI”, held at Stanford University. Since Judea Pearl and Dana Mackenzie published “The Book of Why”, the topic of causal inference has been gaining momentum in the machine learning and artificial intelligence community. If we want to build truly intelligent machines that are able to interact with us in a meaningful way, we have to teach them the concept of causality. Otherwise, our future robots will never be able to understand that forcing the rooster to crow at 3 am won’t make the sun appear. Continue reading Beyond Curve Fitting

Why so much hate against propensity score matching?

I’ve seen several variants of this meme on Twitter recently.

This is just one example, so nothing against @HallaMartin. But his tweet got me thinking. Apparently, in the year 2019 it’s not possible anymore to convince people in an econ seminar with a propensity score matching (or any other matching on observables, for that matter). But why is that?

Here’s what I think. The typical matching setup looks somewhat like this:

You’re interested in estimating the causal effect of X on Y. But in order to do so, you will need to adjust for the confounders W; otherwise you’ll end up with biased results. If you’re able to measure W, this adjustment can be done via propensity score matching, which is actually an efficient way of dealing with a large set of covariates.
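
For illustration, here is a minimal, hand-rolled sketch of the procedure on simulated data. The data-generating process and coefficients are invented for this example; in practice you would use a dedicated package and run balance diagnostics on the matched sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Toy data: three observed confounders W affect both treatment X and outcome Y.
rng = np.random.default_rng(1)
n = 5000
W = rng.normal(size=(n, 3))
p = 1 / (1 + np.exp(-(W @ np.array([0.8, -0.5, 0.3]))))  # true propensity score
X = rng.binomial(1, p)                                    # treatment assignment
Y = 2.0 * X + W @ np.array([1.0, 1.0, -1.0]) + rng.normal(size=n)

# Step 1: estimate the propensity score P(X = 1 | W).
ps = LogisticRegression().fit(W, X).predict_proba(W)[:, 1]

# Step 2: match each treated unit to the nearest control unit on the score.
treated = np.where(X == 1)[0]
control = np.where(X == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = control[idx.ravel()]

# Step 3: average the outcome differences across matched pairs (ATT estimate).
att = np.mean(Y[treated] - Y[matched_controls])
print(f"ATT estimate: {att:.2f} (the simulated true effect is 2.0)")
```

The matching only removes the bias here because W happens to contain all confounders in this simulation, which is exactly the assumption discussed next.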

The problem, though, is being sure that you’ve adjusted for all possible confounding factors. How can you be certain that there are no unobserved variables left that affect both X and Y? Because if the picture looks like the one below (where the unobserved confounders are depicted by the dashed bidirected arc), matching will give you biased estimates of the causal effect you’re after.

Presumably, the Twitter meme is alluding to exactly this problem. And I agree that it’s hard to claim that you’ve accounted for all confounding factors in a matching. But how does that compare with economists’ most preferred alternative, the instrumental variable (IV) estimator? Here the setup looks like this:

Now, unobserved confounders between X and Y are allowed, as long as you’re able to find an instrument Z that affects X but doesn’t affect Y directly (other than through X). In that case, Z creates exogenous variation in X that can be leveraged to estimate X’s causal effect. (Because of the exogenous variation in X induced by Z, we also call this IV setup a surrogate experiment, by the way.)
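
Mechanically, this is what two-stage least squares (2SLS) does. Below is a minimal sketch on simulated data with made-up coefficients: an unobserved confounder U biases the naive regression, while the instrument-based estimate recovers the true effect.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: U is an unobserved confounder of X and Y, Z is a valid instrument.
rng = np.random.default_rng(2)
n = 10_000
U = rng.normal(size=n)                        # unobserved confounder
Z = rng.normal(size=n)                        # instrument: affects X, not Y directly
X = 0.7 * Z + U + rng.normal(size=n)
Y = 2.0 * X + 1.5 * U + rng.normal(size=n)    # true causal effect of X on Y is 2.0

# Naive OLS of Y on X is biased upwards because of U.
ols = LinearRegression().fit(X.reshape(-1, 1), Y).coef_[0]

# 2SLS: first stage predicts X from Z, second stage regresses Y on that prediction.
x_hat = LinearRegression().fit(Z.reshape(-1, 1), X).predict(Z.reshape(-1, 1))
iv = LinearRegression().fit(x_hat.reshape(-1, 1), Y).coef_[0]

print(f"OLS: {ols:.2f} (biased), 2SLS: {iv:.2f} (close to the true 2.0)")
```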

Great, so we have found a way forward if we’re not 100% sure that we’ve accounted for all unobserved confounders. Instead of a propensity score matching, we can simply resort to an IV estimator.

But if you think about this a bit more, you’ll realize that we face a very similar situation here. The whole IV strategy breaks down if there are unobserved confounders between Z and Y (see again the dashed arc below). How can we be sure to rule out all influence factors that jointly affect the instrument and the outcome? It’s the same problem all over again.
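
You can see this directly by rerunning the simulation from above with an instrument that shares an unobserved cause with the outcome (again with invented numbers); the 2SLS estimate is then biased as well.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Same setup as before, except the instrument Z now shares an unobserved
# cause V with the outcome Y, violating the IV independence assumption.
rng = np.random.default_rng(3)
n = 10_000
U = rng.normal(size=n)
V = rng.normal(size=n)                         # unobserved cause of both Z and Y
Z = 0.8 * V + rng.normal(size=n)               # Z <-- V --> Y
X = 0.7 * Z + U + rng.normal(size=n)
Y = 2.0 * X + 1.5 * U + 1.5 * V + rng.normal(size=n)

x_hat = LinearRegression().fit(Z.reshape(-1, 1), X).predict(Z.reshape(-1, 1))
iv = LinearRegression().fit(x_hat.reshape(-1, 1), Y).coef_[0]
print(f"2SLS with a confounded instrument: {iv:.2f} (true effect is 2.0)")
```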

So in that sense, matching and IV are not very different. In both cases we need to carefully justify our identifying assumptions based on the domain knowledge we have. Whether ruling out Z \dashleftarrow\dashrightarrow Y is more plausible than X \dashleftarrow\dashrightarrow Y depends on the specific context under study. But on theoretical grounds, there’s no difference in strength or quality between the two assumptions. So I don’t really get why—as a rule—economists shouldn’t trust a propensity score matching, but an IV approach is fine.

Now you might say that this is just Twitter babble. But my impression is that most economists nowadays would indeed be very suspicious of “selection on observables”-type identification strategies.* Even though there’s nothing inherently implausible about them.

In my view, the opaqueness of the potential outcome (PO) framework is partly to blame for this. Let me explain. In PO your starting point is to assume unconfoundedness of the treatment variable

(Y^1, Y^0) \perp X | W.

This assumption requires that the treatment X be independent of the potential outcomes of Y, conditional on a vector of covariates W (as in the first picture above). But what is this magic vector W that can make all your causal effect estimation dreams come true? Nobody will tell you.

And if the context you’re studying is a bit more complicated than in the graphs I’ve shown you (with several causally connected variables in a model), it becomes very complex to even think this through properly. So in the end, deciding whether unconfoundedness holds becomes more of a guessing game.

My hunch is that after having seen too many failed attempts at dealing with this sort of complexity, people have developed a general mistrust of unconfoundedness and strong-exogeneity-type assumptions. But we still don’t want to give up on causal inference altogether. So we move on to the next best thing: IV, RDD, Diff-in-Diff, you name it.

It’s not that these methods have weaker requirements. They all rely on untestable assumptions about unobservables. But maybe they seem more credible because you’ve jumped through more hoops with them?

I don’t know. And I don’t want to get too much into kitchen sink psychology here. I just know that the PO framework makes it incredibly hard to justify crucial identification assumptions, because it’s so much of a black box. And I think there are better alternatives out there, based on the causal graphs I used in this post (see also here). Who knows, maybe by adopting them we might one day be able to appreciate a well carried out propensity score matching again.


* Interestingly though, this only seems to be the case for reduced-form analyses. Structural folks mostly get away with controlling for observables, presumably because structural models make causal assumptions much more explicit than the potential outcome framework does.

Causal Inference for Policymaking

I just submitted an extended abstract of an upcoming paper to a conference that will discuss new analytical tools and techniques for policymaking. The abstract contains a brief discussion about the importance of causal inference for taking informed policy decisions. And I would like to share these thoughts here. Continue reading Causal Inference for Policymaking

Graphs and Occam’s Razor

One argument, or point of criticism, I often hear from people who start exploring Directed Acyclic Graphs (DAGs) is that graphical models can quickly become very complex. When you read about the methodology for the first time, you get walked through all these toy models – small, well-behaved examples with nice properties, in which causal inference works like a charm.

Continue reading Graphs and Occam’s Razor