We are grateful to Macartan Humphreys for generously providing data and constructive comments. We thank Chris Achen, Larry Bartels, Jake Bowers, Josh Clinton, Tasos Kalandrakis, Adam Meirowitz, Jas Sekhon, Curt Signorino, and Dustin Tingley for helpful comments, and seminar participants at Northwestern, Princeton, and Rochester for stimulating discussions. Earlier versions of this article were presented at the 2008 annual meeting of the Midwest Political Science Association and the 2008 Summer Political Methodology Meeting. Financial support from the National Science Foundation (SES-0752050 and SES-0849715) and the Princeton University Committee on Research in the Humanities and Social Sciences is acknowledged. Replication materials for the empirical results in this article are available as Imai and Yamamoto (2010) on the Dataverse Network.
Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analysis
Version of Record online: 9 APR 2010
©2010, Midwest Political Science Association
American Journal of Political Science
Volume 54, Issue 2, pages 543–560, April 2010
How to Cite
Imai, K. and Yamamoto, T. (2010), Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analysis. American Journal of Political Science, 54: 543–560. doi: 10.1111/j.1540-5907.2010.00446.x
- Issue online: 9 APR 2010
Political scientists have long been concerned about the validity of survey measurements. Although many have studied classical measurement error in linear regression models where the error is assumed to arise completely at random, in a number of situations the error may be correlated with the outcome. We analyze the impact of differential measurement error on causal estimation. The proposed nonparametric identification analysis avoids arbitrary modeling decisions and formally characterizes the roles of different assumptions. We show the serious consequences of differential misclassification and offer a new sensitivity analysis that allows researchers to evaluate the robustness of their conclusions. Our methods are motivated by a field experiment on democratic deliberations, in which one set of estimates potentially suffers from differential misclassification. We show that an analysis ignoring differential measurement error may considerably overestimate the causal effects. This finding contrasts with the case of classical measurement error, which always yields attenuation bias.