The second day of the Annual Congress of the European Economic Association had some exciting papers to offer:
“Does Gender Matter for Academic Promotion? Evidence from Two Large-Scale Randomized Natural Experiments” by Natalia Zinovyeva and Manuel Bagues
The authors ask whether female scientists fare better in an evaluation process when other female scientists sit on the evaluation committee. There is plenty of suggestive evidence for a gender bias in academia. The most prominent theory is probably the “old boys’ club”: science is dominated by men who, consciously or unconsciously, discriminate against women.
The authors exploit two settings in Italy and Spain where female scientists applied for job promotions (assistant or associate professorships) and evaluation committees were randomly assigned. This is very neat from the econometrician’s point of view because it allows them to estimate the causal effect of having women on the committee.
Surprisingly, it turns out that all-male committees treat women more favorably. Digging deeper, however, the authors show that female evaluators do in fact give better grades to female candidates. The negative effect comes from the fact that, once a woman is on board, the male evaluators give substantially worse grades. A possible interpretation of these results is that men are aware of the discrimination debate and thus shy away from giving bad evaluations when they are alone on a committee. Whether these bad evaluations would be deserved, i.e., represent a fair judgement of a female candidate’s quality, or are the result of discrimination is not clear though.
“The Finite Sample Performance of Semi- and Nonparametric Estimators for Treatment Effects and Policy Evaluation” by Markus Frölich, Martin Huber and Manuel Wiesenfarth
Treatment effect estimators are very popular in applied research. Because of the curse of dimensionality you face, for example, in nearest-neighbor matching directly on the covariates, applied researchers often rely on propensity score matching methods. Rosenbaum and Rubin (1983) have shown that conditional on the propensity score P(X), i.e., the probability of receiving treatment as a function of the covariates, the covariates are balanced between the treated and non-treated groups. The propensity score is one-dimensional and thus solves the dimensionality problem.
Rosenbaum and Rubin, however, assume the propensity score to be known, which is hardly ever the case in non-experimental settings. In practice you have to estimate it, and the final treatment effect estimate will be very sensitive to a poorly estimated propensity score, in particular when the functional form is misspecified. You could avoid that by estimating the propensity score nonparametrically, without invoking functional form assumptions, but this reintroduces the curse of dimensionality through the back door.
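To make the two-step logic concrete, here is a minimal sketch of the standard applied workflow on simulated data: a parametric first stage for the propensity score, followed by one-to-one nearest-neighbor matching on it. All variable names (d, X, y) and the data-generating process are purely illustrative, and I use a logit first stage for convenience where the paper’s parametric benchmark is a probit.

```python
# Sketch: parametric propensity score + one-to-one nearest-neighbor matching.
# Data-generating process and names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                                  # covariates
p_true = 1 / (1 + np.exp(-(X @ [0.5, -0.3, 0.2])))           # true selection probability
d = rng.binomial(1, p_true)                                   # treatment indicator
y = 1.0 * d + X @ [1.0, 1.0, -0.5] + rng.normal(size=n)       # outcome, true effect = 1

# First stage: estimate the propensity score P(D = 1 | X) with a logit
pscore = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]

# Second stage: match each treated unit to the control with the closest p-score
treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
matched = controls[np.abs(pscore[controls][None, :] - pscore[treated][:, None]).argmin(axis=1)]

att = np.mean(y[treated] - y[matched])
print(f"ATT estimate from p-score matching: {att:.2f}")
```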
The authors run an extensive simulation exercise in which they compare the performance of various estimation approaches, paying particular attention to the first-stage propensity score estimation. It turns out that methods with a nonparametric first stage perform considerably better than parametric approaches (e.g., estimating the p-score by probit), which is what most applied researchers use. Somewhat surprisingly, matching directly on the covariates (with an additional bias adjustment) also gives better results than the parametric approaches. Therefore, if you don’t have too many covariates in your analysis and the computational burden is thus not too high, you should think twice before imposing restrictive functional form assumptions.
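For comparison, here is an equally stripped-down sketch of direct covariate matching with a regression-based bias adjustment, in the spirit of Abadie and Imbens. Again, the simulated data and variable names are only for illustration; the estimators studied in the paper are more elaborate.

```python
# Sketch: nearest-neighbor matching on covariates with regression bias adjustment.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
d = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.5, -0.3, 0.2]))))
y = 1.0 * d + X @ [1.0, 1.0, -0.5] + rng.normal(size=n)       # true effect = 1

treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]

# Match each treated unit to its nearest control in covariate space
_, idx = cKDTree(X[controls]).query(X[treated], k=1)
matched = controls[idx]

# Bias adjustment: correct for the remaining covariate gap between each treated
# unit and its match, using an outcome regression fitted on the controls
mu0 = LinearRegression().fit(X[controls], y[controls])
adjustment = mu0.predict(X[treated]) - mu0.predict(X[matched])

att = np.mean(y[treated] - y[matched] - adjustment)
print(f"Bias-adjusted matching estimate of the ATT: {att:.2f}")
```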
“Direct and Indirect Treatment Effects: Causal Chains and Mediation Analysis with Instrumental Variables” by Markus Frölich and Martin Huber
Another useful paper for applied researchers, by the same authors as before. In many settings the effect of a variable D on Y runs partly through another variable M, a so-called mediator. Think, for example, of the effect of education on smoking habits, which is mediated by income: higher education might make you more aware of the bad effects of smoking on your health (the direct effect), but it also increases your income so that you can afford cigarettes more easily (the indirect effect). Whether you are interested in the direct effect or in the total effect, which also incorporates the part running through the mediator, depends on the theoretical question at hand.
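To fix ideas, the workhorse linear case boils down to the familiar product-of-coefficients decomposition: regress the mediator on the treatment, regress the outcome on both, and read off the direct and indirect effects. A toy sketch with simulated data (all numbers and variable names are made up to mimic the education/income/smoking example):

```python
# Toy linear mediation: direct effect plus (effect of D on M) x (effect of M on Y).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
d = rng.normal(size=n)                          # "education"
m = 0.8 * d + rng.normal(size=n)                # mediator, e.g. income
y = -0.5 * d + 0.3 * m + rng.normal(size=n)     # outcome, e.g. smoking

a = np.polyfit(d, m, 1)[0]                      # effect of D on M
b, c_direct = np.linalg.lstsq(np.column_stack([m, d, np.ones(n)]), y, rcond=None)[0][:2]

print(f"direct effect:   {c_direct:.2f}")       # about -0.5
print(f"indirect effect: {a * b:.2f}")          # about 0.8 * 0.3 = 0.24
print(f"total effect:    {c_direct + a * b:.2f}")
```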
This kind of analysis is very popular in the social sciences, and there is a long literature in sociology and related fields that uses linear models. In linear models, estimation is quite easy, but this changes dramatically once you move to the nonparametric case. As the authors show, you suddenly need two instruments, one for the treatment variable and one for the mediator. In some settings these instruments even have to be independent of each other in order to identify the quantities of interest. In addition, in many settings you need monotonicity assumptions on how the treatment and the mediator respond to their instruments.
In a linear model, such monotonicity assumptions are automatically satisfied, which is why estimation is comparatively easy under linearity. The paper shows what can go wrong when linearity is questionable. In particular, simply conditioning on the mediator variable, as many applied people like to do, does not work in a nonparametric model and will give you biased estimates.
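A textbook way to see the danger of naively controlling for the mediator (not the paper’s setup, just a toy illustration): let an unobserved factor drive both the mediator and the outcome. Even with a randomly assigned treatment and no true direct effect, the regression that conditions on M reports a spurious “direct” effect.

```python
# Toy illustration: controlling for a confounded mediator biases the "direct" effect.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
d = rng.binomial(1, 0.5, size=n).astype(float)   # randomized treatment
u = rng.normal(size=n)                           # unobserved confounder of M and Y
m = 1.0 * d + u + rng.normal(size=n)             # mediator
y = 1.0 * m + u + rng.normal(size=n)             # all of D's effect runs through M

# "Controlling for the mediator": regress y on d and m
coef = np.linalg.lstsq(np.column_stack([d, m, np.ones(n)]), y, rcond=None)[0]
print(f"estimated 'direct' effect of d: {coef[0]:.2f}")  # true value 0, comes out around -0.5
```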