Disclosure: All authors report that this study was supported by the Medical Council of Canada.
Poorly performing physicians: Does the script concordance test detect bad clinical reasoning?†
Article first published online: 22 SEP 2010
Copyright © 2010 The Alliance for Continuing Medical Education, the Society for Academic Continuing Medical Education, and the Council on CME, Association for Hospital Medical Education
Journal of Continuing Education in the Health Professions
Volume 30, Issue 3, pages 161–166, Summer 2010
How to Cite
Goulet, F., Jacques, A., Gagnon, R., Charlin, B. and Shabah, A. (2010), Poorly performing physicians: Does the script concordance test detect bad clinical reasoning?. J. Contin. Educ. Health Prof., 30: 161–166. doi: 10.1002/chp.20076
- Issue published online: 22 SEP 2010
Keywords: clinical reasoning; clinical competence; Script Concordance Test; poorly performing physicians
Evaluation of poorly performing physicians is a worldwide concern for licensing bodies. The Collège des Médecins du Québec currently assesses the clinical competence of physicians previously identified as having potential clinical competence difficulties through a day-long procedure called the Structured Oral Interview (SOI), after which two peer physicians produce a qualitative report. In view of remediation activities and the potential for legal consequences, more information on the clinical reasoning process (CRP), along with quantitative data on the quality of that process, is needed. This study examines the Script Concordance Test (SCT), a tool that provides a standardized and objective measure of a specific dimension of the CRP, clinical data interpretation (CDI), to determine whether it could be useful in that endeavor.
Over a 2-year period, 20 family physicians took, in addition to the SOI, a 1-hour paper-and-pencil SCT. Three evaluators, blinded to the purpose of the experiment, retrospectively reviewed the SOI reports and were asked to estimate clinical reasoning quality. Subjects were classified into 2 groups (below and above the median of the score distribution) for each of the 2 assessment methods. Agreement between the classifications was estimated using the kappa coefficient.
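For readers unfamiliar with the statistic, the kappa coefficient used here corrects raw agreement between two classifications for the agreement expected by chance alone. A minimal sketch follows; the labels in the usage note are hypothetical and are not the study's actual classifications.

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two categorical classifications of the same subjects.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from the marginal
    label frequencies of each rater/method.
    """
    assert len(a) == len(b), "both classifications must cover the same subjects"
    n = len(a)
    labels = set(a) | set(b)
    # Observed proportion of subjects classified identically by both methods
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each method's marginal frequency per label
    p_e = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)
```

For example, two methods labeling four subjects as `[0, 0, 1, 1]` and `[0, 1, 1, 1]` agree on 3 of 4 (p_o = 0.75) against a chance expectation of 0.5, giving kappa = 0.5.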
The intraclass correlation for the SOI was 0.89, and the Cronbach alpha coefficient for the SCT was 0.90. The two methods agreed for 13 participants (kappa: 0.30, P = 0.18), but 7 of the 20 participants were classified differently by the two methods. All participants but 1 had SCT scores more than 2 SD below the panel mean, indicating serious deficiencies in CDI.
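The "2 SD below the panel mean" criterion above is a simple standardized cutoff. A minimal sketch of that flagging rule, with illustrative scores rather than the study's data:

```python
from statistics import mean, stdev

def flag_deficient(examinee_scores, panel_scores, n_sd=2.0):
    """Flag SCT scores falling more than n_sd standard deviations
    below a reference panel's mean score (illustrative rule only)."""
    m, s = mean(panel_scores), stdev(panel_scores)
    cutoff = m - n_sd * s
    return [score < cutoff for score in examinee_scores]
```

With a hypothetical panel scoring `[80, 82, 78, 81, 79]` (mean 80, SD about 1.58), the cutoff is roughly 76.8, so an examinee scoring 70 is flagged while one scoring 79 is not.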
The finding that the majority of the referred group performed so poorly on CDI tasks is of great interest for assessment as well as for remediation. Adding the SCT to the SOI is useful for assessing clinical reasoning in poorly performing physicians. The SOI could be improved through more precise reporting by those who assess examinees' clinical reasoning, and caution is recommended in interpreting SCT scores, which reflect only part of the reasoning process.