# Blog

## A Global Decline in Research Productivity?

My coauthor Philipp Böing and I just released a new discussion paper:

“A Global Decline in Research Productivity? Evidence from China and Germany”

Abstract: In a recent paper, Bloom et al. (2020) find evidence for a substantial decline in research productivity in the U.S. economy during the last 40 years. In this paper, we replicate their findings for China and Germany, using detailed firm-level data spanning three decades. Our results indicate that diminishing returns in idea production are a global phenomenon, not just confined to the U.S.

## Innovation step by step

My coauthor Petra Andries (Ghent University) and I just published a new paper in Research Policy: “Firm-level effects of staged investments in innovation: The moderating role of resource availability” (preprint available on this website).

## Control variables in regressions — better don’t report them!

A while ago I wrote a short blog post with a pretty simple message: “Don’t Put Too Much Meaning Into Control Variables”. And I must say I was surprised by the many positive responses it received. The accompanying tweet got more than 1,000 likes and nearly 400 retweets, and the blog post was even mentioned in an internal World Bank newsletter. So there clearly seems to be demand for the topic. That’s why my coauthor Beyers Louw (PhD student at Maastricht University) and I decided to turn it into a citable research note, which is now available on arXiv:

“On the Nuisance of Control Variables in Regression Analysis”

Abstract: Control variables are included in regression analyses to estimate the causal effect of a treatment variable of interest on an outcome. In this note, we argue that control variables are unlikely to have a causal interpretation themselves. We therefore suggest refraining from discussing their marginal effects in the results sections of empirical research papers.

Please use it and save yourself a paragraph or two in your next research paper! :)
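To see the point in action, here is a minimal simulation (my own construction, not taken from the note): an unobserved factor U drives both a control Z and the outcome Y, with Z blocking the back-door path from the treatment D to Y. The regression recovers the causal effect of D, but the coefficient on Z is neither zero nor Z’s true total effect.

```python
import numpy as np

# Hypothetical data-generating process: U unobserved, Z = U + noise,
# D = Z + noise, Y = 2*D + U + noise. Z blocks the back-door path
# D <- Z <- U -> Y, so it is a valid control for estimating D's effect.
rng = np.random.default_rng(0)
n = 200_000

u = rng.normal(size=n)                 # unobserved confounder
z = u + rng.normal(size=n)             # control variable
d = z + rng.normal(size=n)             # treatment
y = 2.0 * d + u + rng.normal(size=n)   # true causal effect of D on Y is 2

# OLS of Y on a constant, D, and Z
X = np.column_stack([np.ones(n), d, z])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"coef on D: {beta[1]:.2f}")  # close to 2.0 — the causal effect
print(f"coef on Z: {beta[2]:.2f}")  # close to 0.5 — no causal meaning
# Z's total causal effect on Y (running through D) is 2 per unit, yet
# its regression coefficient is about 0.5: it partly proxies for U.
```

The coefficient on Z here is a mix of Z’s associations with the unobserved U, exactly the kind of “spurious association” the note warns against interpreting.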

## Public procurement as a policy instrument for innovation

We have a new paper, “Public Procurement of Innovation: Evidence from a German Legislative Reform”, out at IJIO (preprint available without paywall here under “Research”), and I’ve briefly summarized the content in a Twitter thread (apparently that’s where these things happen these days; blogs are so 2012…).

## Causal Inference in Business Practice – Survey

My colleagues and I are currently looking for data scientists to take part in a short survey (5–10 min) on causal inference in business practice. Is data-driven decision making important in your job? Then we’d love to hear your perspective: maastrichtuniversity.eu.qualtrics.com/jfe/form/SV_af

## Mapping Uncharted Territory

A frequent point of criticism against Directed Acyclic Graphs is that writing them down for a real-world problem can be a difficult task. There are numerous possible variables to consider and it’s not clear how we can determine all the causal relationships between them. We recently had a Twitter discussion where exactly this argument popped up again.

## PO vs. DAGs – Comments on Guido Imbens’ New Paper

Guido Imbens published a new working paper in which he develops a detailed comparison of the potential outcomes framework (PO) and directed acyclic graphs (DAG) for causal inference in econometrics. I really appreciate this paper, because it introduces a broader audience in economics to DAGs and highlights the complementarity of both approaches for applied econometric work.

## Causal Data Science in Business

A while back I posted about Facebook’s causal inference group and how causal data science tools are slowly finding their way from academia into business. Since then, I’ve come across many more examples of well-known companies investing in their causal inference (CI) capabilities: Microsoft released its DoWhy library for Python, providing CI tools based on Directed Acyclic Graphs (DAGs); I recently met people from IBM Research interested in the topic; Zalando is constantly looking for people to join their CI/ML team; and Lufthansa, Uber, and Lyft have research units working on causal AI applications too.

## Don’t Put Too Much Meaning Into Control Variables

I’m currently reading this great paper by Carlos Cinelli and Chad Hazlett: “Making Sense of Sensitivity: Extending Omitted Variable Bias”. They develop a full suite of sensitivity analysis tools for the omitted variable problem in linear regression, which everyone interested in causal inference should have a look at. While kind of a side topic, they make an important point on page 6 (footnote 6):

> […] since the researcher’s goal is to estimate the causal effect of D on Y, usually Z is required only to, along with X, block the back-door paths from D to Y (Pearl 2009), or equivalently, make the treatment assignment conditionally ignorable. In this case, $\hat{\gamma}$ could reflect not only its causal effect on Y, if any, but also other spurious associations not eliminated by standard assumptions.
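The back-door logic in that footnote can be illustrated with a small simulation (my own sketch, not from Cinelli and Hazlett): U confounds D and Y through the path D ← Z ← U → Y. Omitting Z leaves that path open and biases the coefficient on D; including Z closes it and recovers the causal effect, even though U itself stays unobserved.

```python
import numpy as np

# Illustrative DGP: U unobserved, Z on the back-door path D <- Z <- U -> Y.
rng = np.random.default_rng(1)
n = 200_000

u = rng.normal(size=n)                 # unobserved confounder
z = u + rng.normal(size=n)             # observed control
d = z + rng.normal(size=n)             # treatment
y = 2.0 * d + u + rng.normal(size=n)   # true effect of D on Y: 2

def ols(y, *cols):
    """OLS with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(y, d)        # back-door path open -> biased
adjusted = ols(y, d, z)  # back-door path blocked by Z

print(f"without Z: {naive[1]:.2f}")     # roughly 2.33, biased upward
print(f"with Z:    {adjusted[1]:.2f}")  # roughly 2.00, effect recovered
```

The control Z earns its keep purely by blocking the back-door path for D; as argued above, that job says nothing about how to interpret Z’s own coefficient.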

## Beyond Curve Fitting

Last week I attended the AAAI spring symposium on “Beyond Curve Fitting: Causation, Counterfactuals, and Imagination-based AI”, held at Stanford University. Since Judea Pearl and Dana Mackenzie published “The Book of Why”, the topic of causal inference has been gaining momentum in the machine learning and artificial intelligence community. If we want to build truly intelligent machines that can interact with us in a meaningful way, we have to teach them the concept of causality. Otherwise, our future robots will never understand that forcing the rooster to crow at 3 a.m. won’t make the sun rise.