[This is the second part of a fair copy of a recent Twitter thread of mine. I suggest you have a look at part 1 about nonlinear mediation analysis first. Otherwise, it might be hard to follow this post.]
Understanding causal effects is tough, but understanding causal mechanisms is even tougher. When we try to understand mechanisms, we move beyond the question of whether a certain causal effect exists and instead ask how an effect comes about. For example, we would like to know whether the gender pay gap—currently about 22% in Germany—is driven by workplace discrimination, leisure preferences, or human capital differentials, because the appropriate policy responses would differ drastically in each case. To answer such questions, we have to conduct a mediation analysis, which is able to tease out the different mechanisms at play. Unfortunately, however, mediation analysis relies on a set of quite strong assumptions. Probably the most important one, sequential ignorability (SI), essentially precludes any causal dependencies between the mechanisms under study.¹
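To make this concrete, here is a minimal simulation sketch in Python of an SI violation (the graph and all structural coefficients are invented for illustration): the treatment T affects a post-treatment confounder L, which in turn influences both the mediator of interest M and the outcome Y. The standard product-of-coefficients estimate of the indirect effect through M is then biased, and adjusting for L doesn’t rescue it either, because that blocks one of the causal paths into M.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical linear structural model (all coefficients made up):
#   T -> L, T -> M, L -> M, T -> Y, M -> Y, L -> Y
a, b, c, d, f, g = 0.5, 1.0, 0.8, 0.3, 0.7, 0.6

T = rng.binomial(1, 0.5, n).astype(float)       # randomized treatment
L = a * T + rng.normal(size=n)                  # post-treatment confounder
M = b * T + c * L + rng.normal(size=n)          # mediator of interest
Y = d * T + f * M + g * L + rng.normal(size=n)  # outcome

def ols(y, *xs):
    """OLS coefficients (intercept dropped)."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Standard product-of-coefficients estimate, which assumes SI:
alpha_hat = ols(M, T)[0]     # total effect of T on M
beta_hat = ols(Y, T, M)[1]   # effect of M on Y, adjusting for T only
print("naive estimate:", alpha_hat * beta_hat)  # ~1.39, biased

# True indirect effect through M in this linear model: f * (b + a*c)
print("true value:    ", f * (b + a * c))       # 0.98

# Adjusting for L unbiases the M -> Y coefficient, but blocks the
# T -> L -> M path, so the product now misses that part of the effect:
alpha_adj = ols(M, T, L)[0]
beta_adj = ols(Y, T, M, L)[1]
print("L-adjusted:    ", alpha_adj * beta_adj)  # ~0.70, still off
```

Note that everything in this toy model is linear; the naive estimate fails because SI is violated, not because of any nonlinearity.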
The colleague with whom I had this discussion wasn’t too comfortable with this assumption. How can we ever have faith in such strong assumptions? Don’t we know that the world is hella complex? Isn’t it natural to believe that multiple causal mechanisms will interfere with each other and violate SI? My reply was essentially: yes and no. If theory tells you that your mediator of interest depends on another mediating influence, then I’m afraid there is not much you can do. You won’t be able to empirically estimate the quantities you’re after. And as I mentioned in the previous post, a mediation analysis always requires SI, even in linear models. Piecemeal randomized controlled trials won’t solve the problem either, if there is effect heterogeneity in your population.²
On the other hand (and now comes the no part of my reply), I think we sometimes need to take a step back and reflect on whether the world really is as complex as we believe it to be. I have the impression that social science theorists can often be rather quick to postulate all sorts of causal dependencies, partly because the incentives to make theoretical contributions, or to find refinements and boundary conditions of existing theories, are so large. Theorists seem to do this without necessarily being aware of the consequences for empirical testing, though. Causal inference relies on the absence of causal relationships between certain variables in your model. In fact, if everything causes everything—in a nearly complete causal graph—it will be virtually impossible to recover any causal effect from observational data.
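As a toy illustration of this last point (the graph and numbers below are invented, not taken from any real study): once an unobserved variable U causes both the treatment T and the outcome Y, and the only observed covariate X is itself a descendant of T and U, there simply is no adjustment strategy that identifies the effect of T on Y.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Dense toy graph with made-up coefficients: U is an unobserved
# common cause, X a post-treatment covariate (a "bad control").
U = rng.normal(size=n)
T = 0.8 * U + rng.normal(size=n)
X = 0.5 * T + 0.5 * U + rng.normal(size=n)
Y = 0.4 * T + 0.9 * U + rng.normal(size=n)  # true effect of T on Y is 0.4

def ols(y, *xs):
    """OLS coefficients (intercept dropped)."""
    Xmat = np.column_stack([np.ones_like(y)] + list(xs))
    return np.linalg.lstsq(Xmat, y, rcond=None)[0][1:]

print("no adjustment:  ", ols(Y, T)[0])     # ~0.84, confounded by U
print("adjusting for X:", ols(Y, T, X)[0])  # ~0.66, still biased
# No adjustment set built from the observed variables recovers 0.4 here.
```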
Please don’t get me wrong here. My point is not to promote a “don’t ask, don’t tell” policy. If there is a clear indication that a certain causal link is present, we shouldn’t pretend otherwise. I’m just saying that we need to be aware of the potential costs that ever-further refinements of theories entail. I believe my position is similar to what Kieran Healy recently summed up with the memorable slogan: fuck nuance! Adapting theories to capture more and more particularities, and to account for an ever larger set of dependencies, might be superficially attractive. In reality, however, adding nuance does not just reduce the predictive power of a theory (a machine learning expert would say “overfitting”), but also lowers its chances of ever being brought to the data.
I think we fare much better with simple, generalizable, and robust theories. Therefore, instead of incentivizing ever more theoretical contributions (I observe this to be particularly pervasive in the management sciences, but apparently it’s similar in sociology), we should encourage more re-testing of the old and boring stuff. At a minimum, this will give us solid evidence about the fundamental causal hypotheses of our fields. Theory-wise, I’m a proponent of keeping it (relatively) simple. After all, isn’t it our job as scientists to reduce complexity? Personally, I’m more comfortable with a theory that is restricted to a core set of well-tested relationships than with a nuanced description of the world that will never be subject to any empirical scrutiny.
¹ More precisely, any post-treatment confounder, i.e., a variable that is affected by the treatment and exerts a causal influence on both the mediator of interest and the outcome, will violate sequential ignorability. See the third graph in my previous post. It doesn’t matter, by the way, whether the post-treatment confounder is observed or not.
² I don’t have time to get into this last point in detail. But this paper describes it quite nicely.