Traditional finals and OSCEs in predicting consultant and self-reported clinical skills of PRHOs: a pilot study
Probert, C. S., Cahill, D. J., McCann, G. L. and Ben-Shlomo, Y.
Medical Education, Volume 37, Issue 7, pages 597–602, July 2003
doi: 10.1046/j.1365-2923.2003.01557.x
Article first published online: 27 JUN 2003
Received 17 July 2002; editorial comments to authors 16 October 2002; accepted for publication 12 December 2002
Keywords (MeSH): education, medical, undergraduate/standards; educational measurement/standards; clinical competence
Introduction As we move from standard ‘long case’ final examinations to new objective structured formats, we need to ensure that the new format is at least as good as the old. Furthermore, knowledge of which examination format best predicts medical students’ progression and clinical skills development would be of value.
Methods To introduce the objective structured clinical examination (OSCE) to our medical school for final MB, a group of medical students sat both the standard long case examination and the new OSCE. At the end of their pre-registration year, these doctors and their supervising consultants completed performance evaluation questionnaires.
Results Thirty medical students sat both examinations, and 20 returned evaluation questionnaires. Of the 72 consultants approached, 60 (83%) returned completed questionnaires. There was no correlation between self-reported and consultant-reported performance. Performance in the traditional finals examination was inversely associated with consultant assessment: better-performing students were not rated as better doctors. The OSCE and its components were more consistent, showing positive associations with consultant ratings across the board.
Discussion Major discrepancies exist between the 2 examination formats in data interpretation and practical skills, which are explicitly tested in OSCEs but less so in traditional finals. Standardised marking schemes may reduce examiner variability and discretion, which would weaken correlations between the 2 examinations. This pilot provides empirical evidence that OSCEs assess different clinical domains than do traditional finals, and that OSCEs improve prediction of clinical performance as assessed by independent consultants.
Conclusion Traditional finals and OSCEs correlate poorly with one another. Objective structured clinical examinations appear to correlate well with consultant assessment at the end of the pre-registration house officer year.