Predictive validity and estimated cut score of an objective structured clinical examination (OSCE) used as an assessment of clinical skills at the end of the first clinical year


Iain G Martin Professor of Surgery, South Auckland Clinical School, Middlemore Hospital, Private Bag 93 311, Otahuhu, Auckland 6, New Zealand. Tel.: 00 64 9 276 00 44 x 7804; Fax: 00 64 9 276 00 66; E-mail:



Background: Assessment plays a key role in the learning process, and the validity of any assessment tool should ideally be established. If an assessment is to guide future teaching and learning, its predictive validity in particular must be demonstrated.


Objectives: To assess the ability of an objective structured clinical examination (OSCE), taken at the end of the first clinical year of an undergraduate medical degree, to predict later performance in clinical examinations.


Methods: The performance of two consecutive cohorts of year 3 medical undergraduates (n=138 and n=128) in a 23-station OSCE was compared with their performance in five subsequent clinical examinations in years 4 and 5 of the course.


Results: Poor performance in the OSCE was strongly associated with later poor performance in other clinical examinations: students in the lowest three deciles of OSCE performance were six times more likely than their peers to fail a subsequent clinical examination. Receiver operating characteristic (ROC) curves were constructed as a method of criterion-referencing the cut score for future examinations.


Conclusions: Performance in an OSCE taken early in the clinical course strongly predicts later clinical performance. Tracking subsequent student performance is a powerful means of evaluating examination validity, and ROC curves offer a novel method for determining criterion-referenced cut scores for future examinations.
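The ROC-based approach to setting a cut score can be sketched as follows. This is a minimal illustration, not the study's analysis: the scores, outcomes, and candidate cut points below are invented, and the rule used to pick the cut (maximising Youden's J, i.e. sensitivity minus the false-positive rate) is one common choice among several.

```python
def roc_points(scores, failed_later, cuts):
    """For each candidate cut score, flag students scoring below it and
    compute sensitivity (proportion of eventual failers flagged) and the
    false-positive rate (proportion of eventual passers flagged)."""
    n_fail = sum(failed_later)
    n_pass = len(scores) - n_fail
    points = []
    for cut in cuts:
        flagged = [s < cut for s in scores]
        tp = sum(f and fl for f, fl in zip(failed_later, flagged))
        fp = sum((not f) and fl for f, fl in zip(failed_later, flagged))
        points.append((cut, tp / n_fail, fp / n_pass))
    return points

def best_cut(points):
    """Choose the cut maximising Youden's J = sensitivity - FPR."""
    return max(points, key=lambda p: p[1] - p[2])[0]

# Hypothetical data: lower OSCE scorers tend to fail a later examination
# (1 = failed a subsequent clinical examination, 0 = passed).
scores       = [42, 48, 51, 55, 58, 60, 63, 67, 72, 78]
failed_later = [ 1,  1,  1,  0,  1,  0,  0,  0,  0,  0]
cuts = range(40, 80, 5)

pts = roc_points(scores, failed_later, cuts)
print(best_cut(pts))  # the cut score best separating failers from passers
```

Plotting sensitivity against the false-positive rate across all candidate cuts gives the ROC curve itself; the chosen point trades the benefit of identifying students at risk of later failure against the cost of wrongly failing students who would have gone on to pass.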