What’s wrong with policy advice?

John Cochrane recently published a blog post about the gap between academic research and policy advice in macroeconomics. He criticizes that policy consulting is most often based on methodology such as static IS-LM/AS-AD models, tools that date back 40 years and have largely disappeared from the academic landscape.

This state of affairs is puzzling. At least one of the reasons we do economic research is to guide policy makers toward more sensible interventions in the economy. So either “academic research ran off the rails for 40 years producing nothing of value” or we hold back the diamonds for our little elite circle in the ivory tower. I have seen quite a bit of policy-related work myself, so I want to report from my own field: innovation economics.

(Warning: because of the econometric vocabulary, this article might come across as a little technical.)

Figure: Liquidity trap in a simple IS-LM model. Source: commons.wikimedia.org

Between 2007 and 2013, the EU spent a total budget of 50 billion euros on the 7th Framework Programme for Research and Technological Development (FP7). These Framework Programmes have existed since the 1980s and are designed to fund policy initiatives within the EU that foster innovation and economic growth. An important part of FP7 (and its successor Horizon 2020) is its subsidy programs for the research and development activities of firms. Subsidies are granted to correct various well-known market failures that small firms in particular face when they want to conduct R&D. Ex-post evaluation is mandatory for every public policy intervention by the EU. If you invest 50 billion euros, you had better know whether the money is well spent.

As the EU’s own evaluation guidelines put it, a goal of policy evaluation within the EU should be

[…] a critical, evidence-based judgement of whether EU action(s) has met the needs it aimed to satisfy and actually achieved its expected effects. It will go beyond an assessment of whether something happened or not, and look at causality – whether the action taken by a given party altered behaviours and led to the expected changes and/or any other unintended changes.

Causal inference is a big topic in econometrics these days. People are increasingly aware that a statistically significant relationship between two variables does not per se justify a causal interpretation. And because causal inference is at the frontier of current academic research, I’m pretty sure that a paper which carefully identifies the causal effect of an economically significant policy measure, such as a large-scale EU subsidy program, should have no problem getting published.
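
To see why significance alone is not enough, consider a minimal simulation (all names and numbers are hypothetical): an unobserved “firm quality” drives both subsidy receipt and R&D spending, so a naive regression finds a strongly significant subsidy effect even though the true effect is zero by construction.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
quality = rng.normal(size=n)                                # unobserved firm quality
subsidy = (quality + rng.normal(size=n) > 0).astype(float)  # "better" firms get funded
rnd = 2.0 * quality + rng.normal(size=n)                    # true subsidy effect: zero

# Naive OLS: the subsidy coefficient is large and highly "significant"...
naive = sm.OLS(rnd, sm.add_constant(subsidy)).fit()
print(naive.params, naive.pvalues)

# ...but once the confounder is controlled for, the effect vanishes.
controlled = sm.OLS(rnd, sm.add_constant(np.column_stack([subsidy, quality]))).fit()
print(controlled.params)
```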

But as usual, things are not that easy. In 2000, David et al. published an influential survey of the then-current empirical evidence on whether public R&D support has an effect on overall R&D expenditure. They conclude that many of the empirical studies suffer from potential endogeneity and do not recover a causal link between R&D subsidies and increased firm-level R&D.

Most of the papers under review use parametric approaches such as OLS, GLS or fixed-effects panel data models. Interestingly, in the aftermath of this pessimistic assessment of the literature, non-parametric approaches such as propensity score matching became more popular. This is probably a consequence of the easier access to these methods once they were implemented in software packages like Stata. But matching only balances what we can see: no matching procedure can deal with selection on unobservables, i.e. with confounders that are simply not in the data.
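
Here is a bare-bones propensity score matching sketch on simulated data (again, all numbers are hypothetical) that illustrates the point. The score is estimated from the observed covariate only, so the unobserved confounder survives the matching and the estimate stays biased, even though the true effect is again zero:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 10_000
size = rng.normal(size=n)                              # observed covariate
quality = rng.normal(size=n)                           # unobserved confounder
treated = (0.5 * size + quality + rng.normal(size=n)) > 0
rnd = 0.5 * size + 2.0 * quality + rng.normal(size=n)  # true treatment effect: zero

# 1) Estimate propensity scores from observables only.
x = size.reshape(-1, 1)
score = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# 2) Match each treated firm to the nearest control on the score.
nn = NearestNeighbors(n_neighbors=1).fit(score[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(score[treated].reshape(-1, 1))

att = (rnd[treated] - rnd[~treated][idx.ravel()]).mean()
print(f"matched ATT: {att:.2f}  (true effect: 0)")
```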

Although matching estimators and other “selection on observables” methods still seem to be the workhorse of policy advice today, one can observe an adoption of more state-of-the-art methodology. Methods like difference-in-differences (DiD), regression discontinuity designs (RDD) or instrumental variable (IV) estimation are employed much more often. This development is very welcome, since causal interpretations are more credible under these research designs. And especially the first two designs arise naturally in many settings of policy evaluation.
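
To show how little machinery a basic DiD needs, here is a minimal sketch on hypothetical two-group, two-period data; the coefficient on the interaction term is the DiD estimate:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),    # treated-group indicator
    "post": rng.integers(0, 2, n),       # post-treatment period indicator
})
df["y"] = (0.8 * df["treated"]                  # group-level difference
           + 0.4 * df["post"]                   # common time trend
           + 1.5 * df["treated"] * df["post"]   # true treatment effect
           + rng.normal(size=n))

did = smf.ols("y ~ treated * post", data=df).fit()
print(did.params["treated:post"])               # close to 1.5
```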

Still, I don’t think many of these studies would be publishable. A good academic paper needs a careful discussion of which parameters these quasi-experimental designs actually identify. This is beyond the scope of most policy evaluation reports, which are written for policy makers who probably did not enjoy a graduate-level education in economics. Or if they did, it was 20 years ago.
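
IV estimation makes this point concrete. In the following Wald/LATE simulation (hypothetical numbers), treatment effects are heterogeneous, and the estimator recovers the effect for compliers only, not the population average; exactly the kind of caveat a careful report would have to spell out:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
z = rng.integers(0, 2, n)                         # randomized encouragement (instrument)
complier = rng.random(n) < 0.4                    # 40% follow the encouragement
d = np.where(complier, z, rng.integers(0, 2, n))  # actual programme participation
effect = np.where(complier, 2.0, 0.5)             # heterogeneous treatment effects
y = effect * d + rng.normal(size=n)

# Wald estimator: reduced form divided by first stage.
late = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(f"Wald/LATE estimate: {late:.2f}")
# Close to 2.0 (the complier effect), not the population average of 1.1.
```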

I also see hardly any structural work in innovation economics being used to guide policy decisions (this might be different in the field of Industrial Organization, though). Developing a structural model of R&D knowledge spillovers (a key input for justifying policy interventions in R&D), however, is far from trivial. It is probably easier, and cheaper, to run some regressions than to develop a full-blown theoretical model. For many departments, contract research such as policy evaluation is a source of financing that also cross-subsidizes academic research. Researchers definitely face a trade-off here.
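
For a sense of what “some regressions” means in practice, here is a reduced-form, Jaffe-style spillover regression on simulated data (the data-generating process and all coefficients are hypothetical): log output is regressed on own R&D and the R&D pool of the other firms in the same industry.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, industries = 1_000, 10
scale = rng.normal(size=industries)   # industries differ in their R&D scale
df = pd.DataFrame({"industry": np.repeat(np.arange(industries), n // industries)})
df["own_rd"] = np.exp(scale[df["industry"]] + rng.normal(size=n))

# Spillover pool: total R&D of the *other* firms in the same industry.
df["pool"] = df.groupby("industry")["own_rd"].transform("sum") - df["own_rd"]
df["log_output"] = (0.3 * np.log(df["own_rd"])
                    + 0.1 * np.log(df["pool"])   # "true" spillover elasticity
                    + rng.normal(size=n))

fit = smf.ols("log_output ~ np.log(own_rd) + np.log(pool)", data=df).fit()
print(fit.params)
```

Of course, nothing in such a regression tells us why knowledge spills over, which is precisely what a structural model would have to deliver.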

To conclude, I guess there is not much wrong with macroeconomics in particular; policy advice in other disciplines has the same problems. In my opinion, this stems from two main sources. First, sometimes policy makers want answers to questions on which the academic literature is more or less silent. The exact nature of knowledge spillovers, for example, is still far from understood. In most of these cases, policy advice then relies on simple storytelling and descriptive analyses. Second, economists could do a better job of explaining why sophisticated state-of-the-art methodology is actually necessary for good policy advice. Not every policy maker needs to understand the exact workings of a LATE estimator, as long as we carefully explain how results should be interpreted and what the limitations of the methods are.

References

David, P. A., B. H. Hall, and A. A. Toole (2000): “Is public R&D a complement or substitute for private R&D? A review of the econometric evidence,” Research Policy 29, 497-529.
