Keywords:

  • ACT-R;
  • equivalent number of observations (ENO);
  • explorative sampler;
  • fitting;
  • generalization criteria;
  • prospect theory;
  • reinforcement learning;
  • the 1–800 critique

Abstract

Erev, Ert, and Roth organized three choice prediction competitions focused on three related choice tasks: one-shot decisions from description (decisions under risk), one-shot decisions from experience, and repeated decisions from experience. Each competition was based on two experimental datasets: an estimation dataset and a competition dataset. The studies that generated the two datasets used the same methods and subject pool, and examined decision problems randomly selected from the same distribution. After collecting the experimental data to be used for estimation, the organizers posted them on the Web, together with their fit with several baseline models, and challenged other researchers to compete to predict the results of the second (competition) set of experimental sessions. Fourteen teams responded to the challenge; the last seven authors of this paper are members of the winning teams. The results highlight the robustness of the difference between decisions from description and decisions from experience. The best predictions of decisions from description were obtained with a stochastic variant of prospect theory that assumes the sensitivity to the weighted values decreases with the distance between the cumulative payoff functions. The best predictions of decisions from experience were obtained with models that assume reliance on small samples. Merits and limitations of the competition method are discussed. Copyright © 2009 John Wiley & Sons, Ltd.
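The "reliance on small samples" idea mentioned above can be illustrated with a minimal sketch (this is an illustrative toy model, not any team's actual competition entry): the decision maker draws a small random sample of k past outcomes for each option and chooses the option with the higher sample mean. The function name and the parameter k are hypothetical; the key behavioral consequence is that rare outcomes are often absent from small samples and are therefore effectively underweighted.

```python
import random

def small_sample_choice(history_a, history_b, k=5, rng=None):
    """Choose between options 'A' and 'B' by comparing the means of
    small samples (size k, drawn with replacement) from each option's
    observed payoff history. Ties are broken at random."""
    rng = rng or random.Random()
    sample_a = [rng.choice(history_a) for _ in range(k)]
    sample_b = [rng.choice(history_b) for _ in range(k)]
    mean_a = sum(sample_a) / k
    mean_b = sum(sample_b) / k
    if mean_a == mean_b:
        return rng.choice(["A", "B"])
    return "A" if mean_a > mean_b else "B"

# Illustration: a safe option (always pays 1) versus a risky option
# (pays 32 with probability 0.1, else 0; expected value 3.2).
# Because a sample of 5 often misses the rare gain, the risky option
# is chosen less often than its higher expected value would suggest.
rng = random.Random(0)
safe = [1] * 100
risky = [0] * 90 + [32] * 10
n = 1000
risky_rate = sum(
    small_sample_choice(safe, risky, k=5, rng=rng) == "B" for _ in range(n)
) / n
print(f"risky option chosen on {risky_rate:.0%} of trials")
```

With k = 5 the risky option is sampled as better only when the rare gain appears in the sample (probability about 1 - 0.9^5 ≈ 0.41), reproducing the underweighting of rare events that characterizes decisions from experience.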