The authors would like to extend their gratitude to the editor and two anonymous reviewers for their insightful and constructive commentaries on earlier drafts of this paper. Any remaining deficiencies are, of course, solely the authors'. The authors would also like to thank Clark Chalifour and Dick DeVore for their assistance in conducting this study.
‘Mental Model’ Comparison of Automated and Human Scoring
Version of Record online: 12 SEP 2005
Journal of Educational Measurement
Volume 36, Issue 2, pages 158–184, June 1999
How to Cite
Williamson, D. M., Bejar, I. I. and Hone, A. S. (1999), ‘Mental Model’ Comparison of Automated and Human Scoring. Journal of Educational Measurement, 36: 158–184. doi: 10.1111/j.1745-3984.1999.tb00552.x
Correspondence concerning this article should be addressed to David M. Williamson, The Chauncey Group International, 664 Rosedale Road, Princeton, New Jersey 08540.
- Issue online: 12 SEP 2005
‘Mental models’ used by automated scoring for the simulation divisions of the computerized Architect Registration Examination are contrasted with those used by experienced human graders. Candidate solutions (N = 3613) received both automated and human holistic scores. Quantitative analyses suggest high correspondence between automated and human scores, thereby suggesting that similar mental models are implemented. Solutions with discrepancies between automated and human scores were selected for qualitative analysis. The human graders were reconvened to review the human scores and to investigate the source of score discrepancies in light of rationales provided by the automated scoring process. After review, slightly more than half of the score discrepancies were reduced or eliminated. Six sources of discrepancy between original human scores and automated scores were identified: subjective criteria; objective criteria; tolerances/weighting; details; examinee task interpretation; and unjustified. The tendency of the human graders to be persuaded by automated score rationales varied with the nature of the original score discrepancy. We determine that, while the automated scores are based on a mental model consistent with that of expert graders, some important differences remain, both intentional and incidental, which distinguish human from automated scoring. We conclude that automated scoring has the potential to enhance the validity evidence of scores in addition to improving efficiency.