Assessing the quality of supervisors’ completed clinical evaluation reports


Nancy L Dudek, Rehabilitation Centre, 505 Smyth Road, Room 1105D, Ottawa, Ontario K1H 8M2, Canada.
Tel: 00 1 613 737 7350 (ext 75596); Fax: 00 1 613 737 9638; E-mail:


Context  Although concern has been raised about the value of clinical evaluation reports for discriminating among trainees, there have been few efforts to formalise the dimensions and qualities that distinguish effective from less useful styles of form completion.

Methods  Using brainstorming and a modified Delphi technique, a focus group determined the key features of high-quality completed evaluation reports. These features were used to create a rating scale to evaluate the quality of completed reports. The scale was pilot-tested locally; the results were psychometrically analysed and used to modify the scale. The scale was then tested on a national level. Psychometric analysis and final modification of the scale were completed.

Results  Sixteen features of high-quality reports were identified and used to develop a rating scale: the Completed Clinical Evaluation Report Rating (CCERR). The reliability of the scale after a national field test with 55 raters assessing 18 in-training evaluation reports (ITERs) was 0.82. Further revisions were made; the final version of the CCERR contains nine items rated on a 5-point scale. With this version, the mean ratings of three groups of ‘gold-standard’ ITERs (previously judged to be of high, average and poor quality) differed significantly (P < 0.05).

Discussion  The CCERR is a validated scale that can be used to assess the quality of completed evaluation reports and to help train supervisors in completing them.