Assessing the quality of clinical teaching: a preliminary study
Article first published online: 19 MAR 2010
© Blackwell Publishing Ltd 2010
Volume 44, Issue 4, pages 379–386, April 2010
How to Cite
Conigliaro, R. L. and Stratton, T. D. (2010), Assessing the quality of clinical teaching: a preliminary study. Medical Education, 44: 379–386. doi: 10.1111/j.1365-2923.2009.03612.x
- Issue published online: 19 MAR 2010
- Received 12 May 2009; editorial comments to authors 17 June 2009, 15 October 2009; accepted for publication 23 November 2009
Objectives Evaluations in the clinical arena are fraught with problems. Current assessments of clinical teaching typically measure attributes of clinical teachers in overly broad terms, are often subjective and frequently succumb to the halo effect. This is in contradistinction to measurements of lectures, workshops or online educational content, which can more readily be assessed using objective criteria. As a result, clinical evaluations are often insufficient to provide focused feedback, to guide faculty development or to identify specific areas in which clinical teachers might implement change and improvement. The aim of our study was to offset these limitations.
Methods We developed a structured, 15-item objective structured clinical examination (OSCE)-type checklist of discrete teaching behaviours intended to be: (i) observable; (ii) applicable to multiple disciplines, and (iii) reliably identifiable. Our goal was to test and utilise this checklist as an objective assessment of clinical teaching across a range of in-patient teaching rounds. During 2007–2008, pairs of external raters observed nine attending physicians on two separate occasions during actual in-patient paediatrics and internal medicine ward rounds at a large academic medical centre. Observers documented the extent to which specific teaching behaviours did or did not occur.
Results The internal consistency of the 15-item checklist was good (α = 0.85). A two-facet, partially nested G study found the generalisability of ratings to be generally acceptable, but inter-rater reliability varied greatly between occasions and across individual checklist items.
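The internal consistency figure reported above (Cronbach's α) is computed from the variances of the individual checklist items relative to the variance of the total score. The sketch below shows the standard formula applied to a small, entirely hypothetical matrix of dichotomous checklist ratings; it is illustrative only and does not use the study's data.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a set of observations.

    scores: list of observation rows, each a list of k item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = len(scores[0])

    def var(xs):  # population variance; the item/total ratio is
        m = sum(xs) / len(xs)  # unchanged if sample variance is used
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: rows = observed rounds, columns = 5 checklist
# items scored 1 (behaviour occurred) or 0 (did not occur).
ratings = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.79
```

With dichotomous items such as a did/did-not-occur checklist, this formula reduces to the Kuder–Richardson (KR-20) coefficient.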
Conclusions Despite attempts to identify discrete and observable target behaviours, placing observers on rounds to detect these behaviours may not be as straightforward as it would seem. Clinical teaching may be an inherently subjective process, reflecting the differing teaching styles of faculty staff. However, a set of objective checklist items completed by trained observers on teaching rounds holds promise as a viable means of identifying strengths and weaknesses of clinical instruction. Further research is needed to define what constitutes quality clinical teaching, as well as the most reliable method for assessing it.