  • *I would like to thank the editor, an anonymous referee, and Dan Ackerberg, Stephen Bond, Garth Frazer, Mel Fuss, Robert Gagné, Robert McMillan, Marc Melitz, Ariel Pakes, Peter Reiss, Chad Syverson, Frank Wolak, and participants at CIRANO, the NBER Summer Institute, SITE, and the Econometric Society World Congress for comments. All remaining errors are my own. Financial support from SSHRC and the Connaught Fund is gratefully acknowledged.


Researchers interested in estimating productivity can choose from an array of methodologies, each with its strengths and weaknesses. We compare the robustness of five widely used techniques, two non-parametric and three parametric: (a) index numbers, (b) data envelopment analysis (DEA), (c) stochastic frontiers, (d) instrumental variables (GMM), and (e) semiparametric estimation. Using simulated samples of firms, we analyze the sensitivity of these methods to the way randomness is introduced into the data generating process. Three experiments are considered, introducing randomness via factor price heterogeneity, measurement error, and differences in production technology, respectively. When measurement error is small, index numbers are excellent for estimating productivity growth and are among the best for estimating productivity levels. DEA excels when technology is heterogeneous and returns to scale are not constant. When measurement or optimization errors are nonnegligible, parametric approaches are preferred: ranked by the persistence of the productivity differentials between firms (in decreasing order), one should prefer stochastic frontiers, GMM, or semiparametric estimation. The practical relevance of each experiment for applied researchers is discussed explicitly.
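As a rough illustration of the first technique, the index-number approach typically measures total factor productivity (TFP) growth as output growth minus a cost-share-weighted average of input growth (a Törnqvist-style index). The sketch below is an assumption-laden illustration, not the paper's implementation; all function names, data, and cost shares are hypothetical.

```python
# Minimal sketch of a Törnqvist TFP growth index (method (a) in the
# abstract). All names and numbers here are illustrative assumptions.
import math

def tornqvist_tfp_growth(y0, y1, x0, x1, s0, s1):
    """Log TFP growth between periods 0 and 1.

    y0, y1: output in each period
    x0, x1: dicts of input quantities by input name
    s0, s1: dicts of cost shares by input name (each summing to 1)

    ln(TFP1/TFP0) = ln(y1/y0) - sum_i 0.5*(s0_i + s1_i)*ln(x1_i/x0_i)
    """
    input_growth = sum(
        0.5 * (s0[i] + s1[i]) * math.log(x1[i] / x0[i]) for i in x0
    )
    return math.log(y1 / y0) - input_growth

# Hypothetical firm: output grows 10%, labour 5%, capital 2%,
# with cost shares of 0.6 (labour) and 0.4 (capital) in both periods.
g = tornqvist_tfp_growth(
    y0=100.0, y1=110.0,
    x0={"L": 100.0, "K": 50.0}, x1={"L": 105.0, "K": 51.0},
    s0={"L": 0.6, "K": 0.4}, s1={"L": 0.6, "K": 0.4},
)
```

Because the index requires no estimation, it is exact for each firm when inputs are measured without error and firms are cost-minimizing, which is consistent with the abstract's finding that index numbers do well when measurement error is small.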
