1. Chakrabarti: Economist, Federal Reserve Bank of New York, 33 Liberty Street, New York, NY 10045. Phone 212-720-6415, Fax 212-720-1844, E-mail Rajashri.Chakrabarti@ny.frb.org
    • I thank Steve Coate, Sue Dynarski, Ron Ehrenberg, David Figlio, Ed Glaeser, Caroline Hoxby, Brian Jacob, Bridget Long, Paul Peterson, Miguel Urquiola, and seminar participants at Duke University, the University of Florida, Harvard University, the University of Maryland, MIT, Northwestern University, the Econometric Society Conference, the Association for Public Policy Analysis and Management Conference, and the Society of Labor Economists Conference for helpful discussions; the Florida Department of Education for the data used in this analysis; and the Program on Education Policy and Governance at Harvard University for its postdoctoral support. Dan Greenwald provided excellent research assistance. The views expressed in this paper are those of the author and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. All errors are my own.


This paper analyzes the incentives and responses of public schools in the context of an educational reform. Much of the literature studying the effect of voucher programs on public schools has focused on student and mean school scores. This paper goes inside the black box to investigate some of the ways in which schools facing the Florida accountability-tied voucher program behaved. Schools receiving an “F” grade for the first time were exposed to the threat of vouchers, but did not face vouchers unless and until they received a second “F” within the next three years. In addition, because “F” was the lowest grade, it exposed the threatened schools to stigma. Exploiting the institutional details of this program, I analyze the incentives built into the system and investigate the behavior of the threatened public schools facing these incentives. There is strong evidence that they responded to the incentives. Using highly disaggregated school-level data, a difference-in-differences estimation strategy, and a regression discontinuity (RD) analysis, I find, first, that the threatened schools tended to focus more on students below the minimum criteria cutoffs rather than equally on all students. Second, consistent with incentives, the threatened schools' improvements were by far the largest in writing. These results are robust to controlling for differential preprogram trends, changes in demographic composition, mean reversion, and sorting. These findings have important policy implications. (JEL H4, I21, I28)