The authors thank Lee J. Cronbach, Rebecca Zwick, and three anonymous reviewers for their useful comments about this manuscript.
Components of Rater Error in a Complex Performance Assessment
Article first published online: 12 SEP 2005
Journal of Educational Measurement
Volume 36, Issue 1, pages 29–45, March 1999
How to Cite
Clauser, B. E., Clyman, S. G. and Swanson, D. B. (1999), Components of Rater Error in a Complex Performance Assessment. Journal of Educational Measurement, 36: 29–45. doi: 10.1111/j.1745-3984.1999.tb00544.x
Numerous studies have examined performance assessment data using generalizability theory. Typically, these studies have treated raters as randomly sampled from a population, with each rater judging a given performance on a single occasion. This paper presents two studies that focus on aspects of the rating process that are not explicitly accounted for in this typical design. The first study makes explicit the “committee” facet, acknowledging that raters often work within groups. The second study makes explicit the “rating-occasion” facet by having each rater judge each performance on two separate occasions. The results of the first study highlight the importance of clearly specifying the relevant facets of the universe of interest. Failing to include the committee facet led to an overly optimistic estimate of the precision of the measurement procedure. By contrast, failing to include the rating-occasion facet in the second study had minimal impact on the estimated error variance.
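To make the “typical design” referenced in the abstract concrete, the following is a minimal sketch of a G-study for a fully crossed persons × raters (p × r) random-effects design: variance components are solved from the ANOVA expected mean squares, and a generalizability coefficient is computed for a given number of raters. This is a generic illustration, not the authors' actual analysis; the data and function names are invented, and the committee and rating-occasion facets discussed in the paper would enter as additional (nested or crossed) effects.

```python
# Hypothetical sketch of a persons x raters (p x r) G-study.
# scores[p][r] = score assigned to person p by rater r (invented data).

def g_study(scores):
    """Estimate variance components for a fully crossed p x r design
    via the random-effects expected-mean-square equations."""
    n_p = len(scores)
    n_r = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(scores[p][r] for p in range(n_p)) / n_p
               for r in range(n_r)]

    # Sums of squares: person effect, rater effect, and p x r residual.
    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((scores[p][r] - grand) ** 2
                 for p in range(n_p) for r in range(n_r))
    ss_pr = ss_tot - ss_p - ss_r

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    # Expected-mean-square solutions (negative estimates truncated to 0).
    var_pr = ms_pr
    var_p = max((ms_p - ms_pr) / n_r, 0.0)
    var_r = max((ms_r - ms_pr) / n_p, 0.0)
    return var_p, var_r, var_pr


def g_coefficient(var_p, var_pr, n_r):
    """Generalizability coefficient for relative decisions,
    averaging over n_r raters in the decision study."""
    return var_p / (var_p + var_pr / n_r)
```

Under this model, omitting a real facet (such as committee) folds its variance into other components, which is one way the overly optimistic precision estimates described in the first study can arise.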