Why Assessment Centers Do Not Work the Way They Are Supposed To
Charles E. Lance, Department of Psychology, The University of Georgia
Article first published online: 29 FEB 2008
2008 Society for Industrial and Organizational Psychology
Industrial and Organizational Psychology
Volume 1, Issue 1, pages 84–97, March 2008
How to Cite
LANCE, C. E. (2008), Why Assessment Centers Do Not Work the Way They Are Supposed To. Industrial and Organizational Psychology, 1: 84–97. doi: 10.1111/j.1754-9434.2007.00017.x
This work was supported in part by National Institute on Aging Grant 5R01AG15321-10 (Gail M. Williamson, principal investigator). Thanks go to Filip Lievens, Duncan Jackson, Brian Hoffman, and Lillian Eby for comments on an earlier version of this paper.
- Issue published online: 29 FEB 2008
Assessment centers (ACs) are typically designed to measure a number of dimensions as candidates are assessed across various exercises. After some 25 years of research, however, it is now clear that AC ratings completed at the end of each exercise (commonly known as postexercise dimension ratings) substantially reflect the exercises in which they were completed, not the dimensions they were designed to measure. This is the crux of the long-standing “construct validity problem” for AC ratings. I review the existing research on AC construct validity and conclude that (a) contrary to previous notions, AC candidate behavior is inherently cross-situationally (i.e., cross-exercise) specific, not cross-situationally consistent as was once thought; (b) assessors assess candidate behavior rather accurately; and (c) these facts should be recognized by redesigning ACs as task- or role-based ACs rather than as traditional dimension-based ACs.