OBJECTIVE: Existing systems of in-training evaluation (ITE) have been criticized as being unreliable and invalid methods for assessing student performance during clinical education. The purpose of this study was to assess the feasibility, reliability, and validity of a clinical work sampling (CWS) approach to ITE. This approach focused on the following: (1) basing performance data on observed behaviors, (2) using multiple observers and occasions, (3) recording data at the time of performance, and (4) allowing for a feasible system to receive feedback.
PARTICIPANTS: Sixty-two third-year University of Ottawa students were assessed during their 8-week internal medicine inpatient experience.
MEASUREMENTS AND MAIN RESULTS: Four performance rating forms (Admission Rating Form, Ward Rating Form, Multidisciplinary Team Rating Form, and Patient's Rating Form) were introduced to document student performance. Voluntary participation rates were variable (12%–64%), and patients were excluded from the analysis because of their low response rate (12%). The mean number of evaluations per student per rotation (19) exceeded the number of evaluations needed to achieve sufficient reliability. Reliability coefficients were high for the Ward Form (.86) and the Admission Form (.73) but not for the Multidisciplinary Team Form (.22). There was an examiner effect (rater leniency), but this was small relative to real differences between students. The correlation between the Ward Form and the Admission Form was high (.47), while their correlations with the Multidisciplinary Team Form were lower (.37 and .26, respectively). The CWS approach to ITE was considered content valid by expert judges.
CONCLUSIONS: The collection of ongoing performance data was reasonably feasible, reliable, and valid.
The principal goal of undergraduate medical education is to prepare medical graduates to competently care for patients. An important part of this process is the observation, assessment, and documentation of performance in the care of patients through in-training evaluation (ITE). While other objective measures of competency have been utilized during training programs, ITE has the potential to measure actual practice performance. ITE fulfills dual accountabilities: from the learner perspective, it provides a focus for improving skills and knowledge; from the societal perspective, evaluation discharges the institution's responsibility to ensure that the student has met or exceeded an expected performance level.1,2 To ensure that graduates are ready to move on to residency, medical schools need effective systems to monitor students' progress through their clinical clerkship experiences.3,4
Too frequently, the “system” of evaluation in a clinical clerkship consists of a single global rating scale completed at infrequent intervals by a supervisor who may have had minimal contact with the student. Not surprisingly, research has shown that this approach has serious deficiencies. The reliability of scores generated through current approaches to ITE is close to zero5,6 because students are generally rated “above average,” leaving little real variation between students, and because of the presence of random and systematic rater biases.5,7–9 Additionally, there is an insufficient level of structured, documented feedback from supervisors to students during the rotation, possibly related to the limited direct observation of student performance by evaluators.10,11 Finally, the current approach to in-training evaluation is costly in terms of the time and effort required for the minimal utility of the information.12
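The mechanism behind the near-zero reliability can be sketched in classical-test-theory terms: reliability is the proportion of observed-score variance attributable to real differences between students. The variance figures below are hypothetical illustrations, not values from the study.

```python
def reliability(true_var: float, error_var: float) -> float:
    """Classical-test-theory reliability: the share of observed-score
    variance due to real between-student differences (true variance),
    as opposed to rater leniency and random noise (error variance)."""
    return true_var / (true_var + error_var)

# When nearly every student is rated "above average," the real
# between-student variance is tiny relative to the error variance,
# so reliability collapses toward zero. (Illustrative numbers only.)
compressed = reliability(true_var=0.04, error_var=0.36)  # 0.10
spread = reliability(true_var=0.25, error_var=0.25)      # 0.50
print(f"compressed ratings: {compressed:.2f}")
print(f"realistic spread:   {spread:.2f}")
```

The same error variance is far more damaging when rating compression has squeezed out most of the true-score variance, which is why "everyone above average" ratings carry so little information.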
From a psychological perspective, a central issue is that the task of assessing the average performance of an individual student over a period of weeks or months places extreme demands on memory. “In such instances, subjects must not only recall multiple events, but must also summarize them into a presumed ‘average state,’ introducing additional opportunities for bias.”13
With existing approaches to ITE, we cannot limit the bias in evaluations, or even be assured that the assessor had any opportunity for observation on which to base them. As a result of these difficulties of feasibility, reliability, and validity, existing approaches to ITE are neither effective, accountable, nor educational.
A frequent response to the identification of these problems with current ITE approaches is to blame the evaluation form, and to revise it. The ongoing difficulties experienced with ITE suggest that improvements must come, not from a revision of the procedures of global summative ratings, but from a major reconceptualization of the evaluation problem.
The Clinical Work Sampling (CWS) Strategy
It is evident that an effective system must, as a central feature, overcome the primary obstacle of retrospective recall: it must capture evaluation information as the opportunity arises, yielding multiple observations, from multiple observers, in real time.10,11 Many opportunities for informal evaluation of students occur in the daily interactions among health professionals in all settings; however, most of these interactions go unreported. The challenge is to devise a system that can capture a reasonable number of these clinical encounters while imposing a minimal additional administrative load on the individuals involved.
One model for this is the work sampling approach used in industry, where observers monitor and record activities at regular intervals (for example, every 15 minutes).13 While logistically impossible to maintain on an ongoing basis without massive resources, the basic idea of obtaining multiple samples with minimal information from each is attractive. We applied these concepts of CWS to the ITE of clinical clerks during an inpatient internal medicine unit rotation. Forms were developed that evaluators could use easily, capturing performance data in a standardized fashion during regular encounters over the course of the work day. In this manner, we hoped to avoid the memory decay and subjective averaging that result from summarizing at the end of the rotation.
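The payoff of pooling many brief observations can be illustrated with the Spearman–Brown prophecy formula, which projects the reliability of the mean of n parallel observations from the reliability of a single one. The single-encounter reliability used below is an assumed value for illustration, not a figure reported in the study.

```python
def spearman_brown(single_obs_reliability: float, n_observations: int) -> float:
    """Projected reliability of the average of n parallel observations,
    per the Spearman-Brown prophecy formula:
        r_n = n*r / (1 + (n - 1)*r)"""
    r = single_obs_reliability
    return (n_observations * r) / (1 + (n_observations - 1) * r)

# Assumed reliability of one brief clinical encounter rating (hypothetical):
r1 = 0.25
for n in (1, 5, 10, 19):
    print(f"n = {n:2d} observations -> projected reliability {spearman_brown(r1, n):.2f}")
```

Even when any single encounter rating is a weak measure, averaging over many of them drives the reliability of the composite score up sharply, which is the statistical rationale for sampling work repeatedly rather than relying on one end-of-rotation judgment.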
The present study was initiated to establish the effectiveness of a CWS approach to ITE. We addressed three specific research questions: (1) Is the CWS approach to ITE feasible? (2) Is this approach a reliable measure of student performance on the ward? and (3) Is the CWS approach to ITE a valid measure of student performance?