In my class we recently discussed a paper by Higgins and Rodriguez (2006)—published in the Journal of Financial Economics—that contains an important lesson for researchers who want to apply the difference-in-differences (DiD) method in competition analysis and merger control. Continue reading Becoming More Different Over Time
This is a fair copy of a recent Twitter thread of mine. I thought it might be interesting to develop my arguments in a bit more detail and preserve them for later use.
[This post requires some knowledge of directed acyclic graphs (DAGs) and causal inference. An introduction to the topic goes beyond the scope of this blog, but you can have a look at a recent paper of mine in which I describe the method in more detail.]
Graphical models of causation, most notably associated with the name of computer scientist Judea Pearl, received a lot of pushback from the grandees of econometrics. Heckman had his famous debate with Pearl, arguing that economics looks back on its own tradition of causal inference, going back to Haavelmo, and that we don’t need DAGs. Continue reading Econometrics and the “not invented here” syndrome: suggestive evidence from the causal graph literature
An interesting paper by Daniel Bradley, Incheol Kim, and Xuan Tian was recently published in Management Science (link to the SSRN version): Continue reading Labor unions may affect innovation negatively
We provide evidence on the value of patents to startups by leveraging the random assignment of applications to examiners with different propensities to grant patents. Using unique data on all first-time applications filed at the U.S. Patent Office since 2001, we find that startups that win the patent “lottery” by drawing lenient examiners have, on average, 55% higher employment growth and 80% higher sales growth five years later. Patent winners also pursue more, and higher quality, follow-on innovation. Winning a first patent boosts a startup’s subsequent growth and innovation by facilitating access to funding from VCs, banks, and public investors.
Today, Judea Pearl commented on a new NBER working paper by Josh Angrist and Jörn-Steffen Pischke in a mail for subscribers to the UCLA Causality Blog. I think the text is too good to hide it in a mailing list though. That’s why I will quote it here:
Overturning Econometrics Education
(or, do we need a “causal interpretation”?)
My attention was called to a recent paper by Josh Angrist and Jörn-Steffen Pischke titled “Undergraduate econometrics instruction” (an NBER working paper).
This paper advocates a pedagogical paradigm shift that has methodological ramifications beyond econometrics instruction. As I understand it, the shift stands contrary to the traditional teachings of causal inference, as defined by Sewall Wright (1920), Haavelmo (1943), Marschak (1950), Wold (1960), and other founding fathers of econometrics methodology.
In a nutshell, Angrist and Pischke start with a set of favorite statistical routines, such as IV, regression, and differences-in-differences, and then search for “a set of control variables needed to insure that the regression-estimated effect of the variable of interest has a causal interpretation.” Traditional causal inference (including economics) teaches us that asking whether the output of a statistical routine “has a causal interpretation” is the wrong question to ask, for it misses the direction of the analysis. Instead, one should start with the target causal parameter itself, and ask whether it is ESTIMABLE (and if so how), be it by IV, regression, differences-in-differences, or perhaps by some new routine that is yet to be discovered and ordained by name. Clearly, no “causal interpretation” is needed for parameters that are intrinsically causal; for example, “causal effect,” “path coefficient,” “direct effect,” “effect of treatment on the treated,” or “probability of causation.”
In practical terms, the difference between the two paradigms is that estimability requires a substantive model while interpretability appears to be model-free.
A model exposes its assumptions explicitly, while statistical routines give the deceptive impression that they run assumption-free (hence their popular appeal). The former lends itself to judgmental and statistical tests; the latter escapes such scrutiny.
In conclusion, if an educator needs to choose between the “interpretability” and “estimability” paradigms, I would go for the latter. If traditional econometrics education is tailored to support the estimability track, I do not believe a paradigm shift is warranted towards an “interpretation seeking” paradigm like the one proposed by Angrist and Pischke.
I would gladly open this blog for additional discussion on this topic.
I tried to post a comment on NBER (National Bureau of Economic Research), but was rejected for not being an approved “NBER family member”. If any of our readers is an “NBER family member”, feel free to post the above.
Note: “NBER working papers are circulated for discussion and comment purposes.” (page 1).
Update: By now, the text has been published on the causality blog.
Here is a great introductory lecture on causal inference and the power of directed acyclic graphs / Bayesian networks. It repeats a point I made earlier on this blog: big data alone, without a causal model (i.e., theory) to support it, is simply not sufficient for making causal claims. Continue reading Causality for Policy Assessment and Impact Analysis
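To see why data alone cannot settle causal questions, here is a minimal simulation of my own (not taken from the lecture), assuming the simple linear DAG Z → X, Z → Y, X → Y. However much data we collect, the naive regression of Y on X stays biased; only the causal model tells us that the confounder Z must be adjusted for.

```python
# Illustrative sketch: a confounder Z drives both X and Y.
# Assumed DAG: Z -> X, Z -> Y, X -> Y, with true causal effect of X on Y = 1.0.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # unobserved-in-the-naive-analysis confounder
x = 2.0 * z + rng.normal(size=n)
y = 1.0 * x + 3.0 * z + rng.normal(size=n)    # true effect of X on Y is 1.0

# Naive regression of Y on X ignores the back-door path X <- Z -> Y.
naive = np.linalg.lstsq(np.column_stack([x, np.ones(n)]), y, rcond=None)[0][0]

# Adjusting for Z (the back-door criterion in this DAG) recovers the effect.
adjusted = np.linalg.lstsq(np.column_stack([x, z, np.ones(n)]), y, rcond=None)[0][0]

print(f"naive:    {naive:.2f}")     # biased: about 2.2 instead of 1.0
print(f"adjusted: {adjusted:.2f}")  # close to the true effect 1.0
```

The bias in the naive estimate (Cov(X,Y)/Var(X) = 11/5 = 2.2 in this setup) does not shrink with more observations, which is exactly the point: which regression is the right one is decided by the causal model, not by the data.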