
  • Authors' Note. This paper is based in part on the master's thesis of Michael S. Christian, which was chaired by Bryan D. Edwards. An earlier version of this paper was presented at the 2007 Annual Conference of the Society for Industrial and Organizational Psychology.

  • We are grateful to Winfred Arthur, Jr., Ronald Landis, Michael Burke, and Filip Lievens for reviewing previous drafts of this article and providing valuable suggestions. We also thank Michael McDaniel, Phillip Bobko, and Edgar Kausel for their helpful comments and suggestions on this project. Finally, we acknowledge the work of Jessica Siegel, Helen Terry, and Adela Garza.

Correspondence and requests for reprints should be addressed to Michael S. Christian, Eller College of Management, University of Arizona, Department of Management and Organizations, McClelland Hall, PO Box 210108, Tucson, AZ 85721-0108.


Situational judgment tests (SJTs) are a measurement method that can be designed to assess a variety of constructs. Nevertheless, many studies in the extant literature fail to report the constructs measured by their situational judgment tests. Consequently, a construct-level focus is lacking in the situational judgment test literature, and researchers and practitioners know little about the specific constructs typically measured. Our objective was to extend the efforts of previous researchers (e.g., McDaniel, Hartman, Whetzel, & Grubb, 2007; McDaniel & Nguyen, 2001; Schmitt & Chan, 2006) by highlighting the need for a construct focus in situational judgment test research. We identified the construct domains assessed by situational judgment tests in the literature and classified them into a content-based typology. We then conducted a meta-analysis to determine the criterion-related validity of each construct domain and to test for moderators. We found that situational judgment tests most often assess leadership and interpersonal skills and that situational judgment tests measuring teamwork skills and leadership have relatively high validities for overall job performance. Although based on a small number of studies, we found evidence that (a) matching the predictor constructs with criterion facets improved criterion-related validity and (b) video-based situational judgment tests tended to have stronger criterion-related validity than pencil-and-paper situational judgment tests, holding constructs constant. Implications for practice and research are discussed.