Why Assessment Centers Do Not Work the Way They Are Supposed To

Authors


  • Charles E. Lance, Department of Psychology, The University of Georgia.

    This work was supported in part by National Institute on Aging Grant 5R01AG15321-10 (Gail M. Williamson, principal investigator). Thanks go to Filip Lievens, Duncan Jackson, Brian Hoffman, and Lillian Eby for comments on an earlier version of this paper.

Charles E. Lance. E-mail: clance@uga.edu
Address: Department of Psychology, The University of Georgia, Athens, Georgia 30602-3013

Abstract

Assessment centers (ACs) are often designed with the intent of measuring a number of dimensions as candidates are assessed across various exercises, but after 25 years of research it is now clear that ratings completed at the end of each exercise (commonly known as postexercise dimension ratings) largely reflect the exercises in which they were completed, not the dimensions they were designed to measure. This is the crux of the long-standing “construct validity problem” for AC ratings. I review the existing research on AC construct validity and conclude that (a) contrary to previous notions, AC candidate behavior is inherently cross-situationally (i.e., cross-exercise) specific, not cross-situationally consistent as was once thought; (b) assessors assess candidate behavior rather accurately; and (c) these findings should be recognized by redesigning ACs as task- or role-based ACs rather than as traditional dimension-based ACs.