What Happens in the Field Stays in the Field: Exploring Whether Professionals Play Minimax in Laboratory Experiments


  • Steven D. Levitt
    Dept. of Economics, University of Chicago, 1126 East 59th Street, Chicago, IL 60637, U.S.A.; slevitt@uchicago.edu
  • John A. List
    Dept. of Economics, University of Chicago, 1126 East 59th Street, Chicago, IL 60637, U.S.A.; jlist@uchicago.edu
  • David H. Reiley
    Dept. of Economics, University of Arizona, 401 McClelland Hall, Tucson, AZ 85721, U.S.A.; reily@eller.arizona.edu
    • We would like to thank the editor and three anonymous referees for valuable comments. Ignacio Palacios-Huerta and Jesse Shapiro provided helpful conversations. Phil Gordon was instrumental in our efforts to recruit world-class poker players. Omar Al-Ubaydli, David Caballero, Dwyer Gunn, Bill Hessert, Ryan Johnson, Min Lee, Randall Lewis, Andrew Sherman, Alec Smith, Brittany Smith, Dean Strachan, and, especially, Lisandra Rickards provided fantastic research assistance.


The minimax argument represents game theory in its most elegant form: simple, but with stark predictions. Although some of these predictions have met with reasonable success in the field, experimental data have generally not produced results close to the theoretical predictions. In a striking study, Palacios-Huerta and Volij (2008) presented evidence that potentially resolves this puzzle: both amateur and professional soccer players play nearly exact minimax strategies in laboratory experiments. In this paper, we establish important bounds on those results by examining the behavior of four distinct subject pools: college students, bridge professionals, world-class poker players—who have vast experience with high-stakes randomization in card games—and American professional soccer players. In contrast to Palacios-Huerta and Volij's results, we find little evidence that real-world experience transfers to the lab in these games; indeed, consistent with previous experimental results, all four subject pools make choices that are generally not close to the minimax predictions. We use two additional pieces of evidence to explore why professionals do not perform well in the lab: (i) complementary experimental treatments that pit professionals against preprogrammed computers and (ii) post-experiment questionnaires. The most likely explanation is that these professionals are unable to transfer their skill at randomization from the familiar context of the field to the unfamiliar context of the lab.