Opening the black box of clinical skills assessment via observation: a conceptual model
Version of Record online: 14 SEP 2011
© Blackwell Publishing Ltd 2011
Volume 45, Issue 10, pages 1048–1060, October 2011
Kogan, J. R., Conforti, L., Bernabeo, E., Iobst, W. and Holmboe, E. (2011), Opening the black box of clinical skills assessment via observation: a conceptual model. Medical Education, 45: 1048–1060. doi: 10.1111/j.1365-2923.2011.04025.x
- Issue online: 14 SEP 2011
- Received 6 December 2010; editorial comments to authors 25 January 2011; accepted for publication 21 March 2011
Objectives This study aimed to develop a conceptual framework of the factors that influence faculty members’ judgements and ratings of resident doctors (residents) after direct observation of encounters with patients.
Methods In 2009, 44 general internal medicine faculty members responsible for out-patient resident teaching in 16 internal medicine residency programmes in a large urban area in the eastern USA watched four videotaped scenarios and two live scenarios in which standardised residents engaged in clinical encounters with standardised patients. After each scenario, faculty members rated the resident using a mini-clinical evaluation exercise and were then individually interviewed using a semi-structured interview. Interviews were videotaped, transcribed and analysed using grounded theory methods.
Results Four primary themes that provide insights into the variability of faculty assessments of residents’ performance were identified: (i) the frames of reference used by faculty members when translating observations into judgements and ratings are variable; (ii) high levels of inference are used during the direct observation process; (iii) the methods by which judgements are synthesised into numerical ratings are variable; and (iv) factors external to resident performance influence ratings. From these themes, a conceptual model was developed to describe the process of observation, interpretation, synthesis and rating.
Conclusions It is likely that multiple factors account for the variability in faculty ratings of residents. Understanding these factors informs potential new approaches to faculty development to improve the accuracy, reliability and utility of clinical skills assessment.