Assessing professionalism in the context of an objective structured clinical examination: an in-depth study of the rating process
Article first published online: 14 MAR 2007
Volume 41, Issue 4, pages 331–340, April 2007
How to Cite
Mazor, K. M., Zanetti, M. L., Alper, E. J., Hatem, D., Barrett, S. V., Meterko, V., Gammon, W. and Pugnaire, M. P. (2007), Assessing professionalism in the context of an objective structured clinical examination: an in-depth study of the rating process. Medical Education, 41: 331–340. doi: 10.1111/j.1365-2929.2006.02692.x
- Issue published online: 14 MAR 2007
- Received 22 May 2006; editorial comments to authors 18 August 2006; accepted for publication 31 October 2006
- Keywords: professional practice; clinical competence/standards
Introduction Professionalism is fundamental to the practice of medicine. Objective structured clinical examinations (OSCEs) have been proposed as appropriate for assessing some aspects of professionalism. This study investigated how raters assign professionalism ratings to medical students' performances in OSCE encounters.
Methods Three standardised patients, 3 doctor preceptors, and 3 lay people viewed and rated 20 videotaped encounters between 3rd-year medical students and standardised patients. Raters recorded their thoughts while rating. Qualitative and quantitative analyses were conducted. Comments about observable behaviours were coded, and relative frequencies were computed. Correlations between counts of categorised comments and overall professionalism ratings were also computed.
Results Raters varied in which behaviours they attended to and in how they evaluated those behaviours. This was true both within and between rater types. Raters also differed in the behaviours they considered when providing global evaluations of professionalism.
Conclusions This study highlights the complexity of the processes involved in assigning ratings to doctor–patient encounters. Greater emphasis on behavioural definitions of specific behaviours may not be a sufficient solution, as raters appear to vary both in which behaviours they attend to and in how they evaluate them. Reliance on global ratings is also problematic for similar reasons, especially if relatively few raters are used. We propose a model highlighting the multiple points at which raters viewing the same encounter may diverge, resulting in different ratings of the same performance. Progress in the assessment of professionalism will require further dialogue about what constitutes professional behaviour in the medical encounter, with input from multiple constituencies and multiple representatives within each constituency.