Introduction Professionalism is fundamental to the practice of medicine. Objective structured clinical examinations (OSCEs) have been proposed as appropriate for assessing some aspects of professionalism. This study investigated how raters assign professionalism ratings to medical students' performances in OSCE encounters.
Methods Three standardised patients, three doctor preceptors, and three lay people viewed and rated 20 videotaped encounters between third-year medical students and standardised patients. Raters recorded their thoughts aloud while rating. Qualitative and quantitative analyses were conducted. Comments about observable behaviours were coded, and relative frequencies were computed. Correlations between counts of categorised comments and overall professionalism ratings were also computed.
Results Raters varied in which behaviours they attended to and in how they evaluated those behaviours, both within and between rater types. Raters also differed in which behaviours they considered when providing global evaluations of professionalism.
Conclusions This study highlights the complexity of the processes involved in assigning ratings to doctor–patient encounters. Greater emphasis on precise definitions of specific behaviours may not be a sufficient solution, as raters appear to vary in both their attention to and their evaluation of those behaviours. For similar reasons, reliance on global ratings is also problematic, especially when relatively few raters are used. We propose a model highlighting the multiple points at which raters viewing the same encounter may diverge, resulting in different ratings of the same performance. Progress in the assessment of professionalism will require further dialogue about what constitutes professional behaviour in the medical encounter, with input from multiple constituencies and multiple representatives within each constituency.