TESTING BAYESIAN UPDATING WITH THE ASSOCIATED PRESS TOP 25

Authors

  • DANIEL F. STONE

    1. Stone: Assistant Professor, Department of Economics, Oregon State University, Corvallis, OR 97331. Phone: 541-737-1477; Fax: 541-737-5917; E-mail: dan.stone@oregonstate.edu
    • I thank Shan Zhou for excellent research assistance; Paul Montella of the Associated Press for providing me with the 2006 ballots and helpful discussion; Andrew Nutting for sharing data and discussion; and Edi Karni, Matt Shum, Joe Aldy, Tumenjargal Enkhbayar, Liz Schroeder, Carol Horton Tremblay, Stephen Shore, Peyton Young, and Basit Zafar, as well as seminar participants at the Econometric Society 2009 North American Summer Meeting and the 2009 IAREP/SABE joint meeting, for helpful comments. Two referees and the coeditor (especially) also provided very helpful feedback.
    • I thank Andrew Nutting for providing the data set used for this analysis. The data sets used for the article's main analysis do not include each team's rank for every week of the season; they cover only the first half of each season and the final ranks. I do not find evidence of precision increasing substantially over just the first half of a season. I thank an anonymous referee for suggesting this check.
    • The Sagarin rankings are a component of the Bowl Championship Series (BCS) rankings, along with other computer rankings. I cannot use the BCS rankings because they are not computed after the bowl games. I use the Sagarin ratings because they were easily obtainable, and I expect other computer rankings would yield similar results.


Abstract

Most studies of Bayesian updating use experimental data. This article uses a non-experimental data source—the voter ballots of the Associated Press college football poll, a weekly subjective ranking of the top 25 teams—to test Bayes' rule as a descriptive model. I find that voters sometimes underreact to new information, sometimes overreact, and at other times their behavior is consistent with estimated Bayesian updating. A unifying explanation for the disparate results is that voters are more responsive to information that is more salient (i.e., noticeable). In particular, voters respond in a “more Bayesian” way to losses and wins over ranked teams, as compared to wins over unranked teams, and voters seem unaware of subtle variation in the precision of priors. (JEL D80, D83, D84)
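
For a concrete sense of the Bayesian benchmark being tested, consider the standard normal-normal updating rule (an illustrative sketch only; the symbols below are assumptions for this note, not the article's own notation or model). Suppose a voter's prior belief about a team's strength θ is normal with mean μ₀ and precision τ₀, and a game outcome provides a noisy signal s = θ + ε, where ε is normal with mean zero and precision τₛ. Bayes' rule then implies

    % Illustrative normal-normal Bayesian updating; mu_0, tau_0, s, tau_s are
    % hypothetical symbols for this sketch, not taken from the article.
    \mu_1 = \frac{\tau_0 \mu_0 + \tau_s s}{\tau_0 + \tau_s},
    \qquad
    \tau_1 = \tau_0 + \tau_s

Under this benchmark, the weight a Bayesian places on the new signal, τₛ/(τ₀ + τₛ), shrinks as the prior precision τ₀ grows. Underreaction (overreaction) then corresponds to putting less (more) weight on s than this ratio, and being "unaware of subtle variation in the precision of priors" means failing to adjust that weight as τ₀ varies across teams or weeks.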
