Overview of Assessment Methods Identified
Multiple methods have been used to assess competence in patient care, including direct observation, simulation-based assessments, objective structured clinical examinations (OSCEs), global faculty evaluations, 360-degree evaluations, portfolios, self-reflection, clinical performance metrics, and procedure logs. Demonstrating competency in patient care at successive levels of training and across multiple clinical scenarios requires several overlapping methods to ensure validity of assessment; a survey of Canadian residency programs demonstrated an average of 1.75 assessment methods per program. Of equal importance are the frequency and consistency of formative assessment, its integration into the educational curriculum, and the “catalytic effect” of assessment results and feedback on improving individual performance. The selection of assessment methods depends at least in part on the financial, faculty, and learning resources available within a residency. Each method and its supporting evidence for validity and reliability is discussed individually below.
Direct Observation
Direct observation allows faculty to watch the learner in the actual clinical setting and to provide formative feedback in real time,[8-11] and it tends to generate more specific feedback and constructive comments than global assessments.[12, 13] At least 55 direct observation tools have been developed, but only a few have demonstrated reliability or validity, or have had educational outcomes measured.
Faculty training in the use of any direct observation tool is important given the potential for variable interpretation of both the clinical encounter and the tool’s language, yet few studies describe more than cursory observer training. There is evidence, however, that even without extensive training, certain tools have good to excellent reliability.[10, 15] The correlation between direct observation and other measures of competency, such as written test scores[16-25] and OSCE or standardized patient assessments,[18-21, 25, 26] has been studied in a number of specialties; the modest correlations found support the validity of certain direct observation methods. Internal medicine has produced many studies of direct observation, the strongest involving the mini–clinical evaluation exercise (mini-CEX), which has robust evidence for its validity and reliability.[11, 15] Other specialties such as physical medicine and rehabilitation have developed similar tools for clinical assessment. The EM Standardized Direct Observation Assessment Tool (SDOT) has shown good inter-rater reliability when residents were observed via videotaped interactions and, if liberal agreement criteria were used, in real-time clinical practice. Learners report improved satisfaction with direct observation assessment and perceive a positive effect on their clinical care. Demonstration of a change in the delivery or quality of patient care is rare in direct observation studies; more often, improvements are in learner or observer self-assessed attitudes, knowledge, or skills.
Although faculty generally like direct observation as an assessment method,[8, 9, 27, 29] adding this responsibility to existing faculty obligations of direct patient care, supervision, and bedside teaching may seem burdensome. A few EM residency programs have used nonclinical faculty to perform direct observation[8, 29] with reported success; however, this may not be financially practical for many programs. Another concern is that certain patient care encounters, such as resuscitations, require faculty supervision and direct participation, limiting the opportunity for direct observation. One solution is to videotape resuscitations for delayed review and debriefing, although technical and legal (HIPAA) barriers exist.
Simulation
Simulation has the advantage of using standardized scenarios that can be designed to assess specific skills and global patient care without risk to patients. When paired with directed feedback, simulation assessments have demonstrated retention of certain skills for as long as 1.5 years. Scenarios and their assessment rubrics must be designed in a standardized format that permits dissemination and must be tested for reliability and evidence of validity.[32, 33] When high-fidelity simulation (HFS) and the core competencies were first introduced, assessment tools were unvalidated and considered too blunt to provide more than formative assessment. As assessment design becomes increasingly reliable and valid, using simulation-based assessment (SBA) as a summative, or high-stakes, measurement of competency is an important area for further research.
Learners can be assessed with both checklists (e.g., time to action, critical actions performed) and global performance ratings. Different information is gleaned from each, both can have good discriminatory power,[30, 35, 36] and a combination is most useful. Because patient care requires a broad skill set and knowledge base, multiple scenarios are needed to provide a valid assessment of overall patient care competency and to distinguish performance at different levels of training. Murray et al. demonstrated that 12 scenarios were needed in a study of anesthesia residents and attending physicians, and six in another study comparing student certified registered nurse anesthetists to junior and senior anesthesiologists.
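The pairing of a critical-actions checklist with a global rating can be illustrated with a minimal scoring sketch. The action names, scales, and equal weighting below are hypothetical illustrations, not items from any validated instrument; programs set their own rubrics and weights.

```python
# Hypothetical simulation scoring that combines a critical-actions
# checklist with a holistic global rating (both scales are assumptions).
critical_actions = {
    "orders_cardiac_monitor": True,
    "recognizes_vfib": True,
    "defibrillates_within_2_min": False,
    "orders_epinephrine": True,
}
global_rating = 5  # anchored 1-7 global performance scale (assumed)

# Fraction of critical actions completed
checklist_score = sum(critical_actions.values()) / len(critical_actions)

# Equal weighting of the two components is illustrative only
combined = 0.5 * checklist_score + 0.5 * (global_rating / 7)
print(round(combined, 3))
```

The checklist captures whether specific actions occurred, while the global rating captures overall fluency; reporting both (or a weighted combination) preserves the distinct information each provides.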
An HFS assessment rating tool should demonstrate both interobserver reliability and evidence of validity, the latter shown by improved performance at higher levels of training. Assessments have demonstrated such validity for both medical students[39, 40] and residents.[39-43] A set of four pediatric advanced life support scenarios showed good inter-rater reliability and higher scores for more senior pediatric residents, but suggested that multiple scenarios are needed for a valid assessment. Improved clinical performance has been demonstrated in advanced cardiac life support (ACLS) using a checklist SBA with high reliability and internal consistency in an internal medicine residency program. One study of interns from multiple specialties managing two cardiac scenarios showed a surprising decrease in scores after the clinical experience of intern year, raising questions about that assessment's validity. This highlights the importance of rubrics that reflect the clinical skills and cognition of real-world competent patient care, rather than rubrics aimed at stratifying learner performance with items that penalize experienced learners who skip steps. Because experts often use shortcuts to arrive at diagnostic conclusions, rubrics must be designed carefully so that more advanced levels of performance are not overlooked.
The evidence for validity of HFS assessments compared with other forms of assessment is limited. Gordon et al. demonstrated validity of HFS assessment compared with an OSCE. One EM residency program designed a well-received simulation curriculum that found most learners to be competent, but this did not translate into higher written test scores, highlighting the need to design HFS and other assessments around the educational outcomes they are intended to measure.
The ultimate evidence of validity is comparison to actual patient outcomes or subsequent improvements in patient care, but this has been measured infrequently. Internal medicine residents who received simulation-based ACLS training performed better, on chart review of their resuscitations, than more senior residents with traditional training, albeit with the limitation that the intervention group was assessed closer in time to its ACLS training. Internal medicine residents demonstrated improved airway management skills, both in the simulation laboratory and at the patient's bedside, when scored by checklist after HFS training; this benefit was achieved whether PGY-1 residents were trained by senior residents or by faculty. Pediatric EM and gastroenterology attending physicians performed better on a procedural sedation checklist after HFS training and assessment, demonstrating that this effect is not limited to novices.
Simulation-based team training (SBTT) research is limited but shows promise in enhancing the more complex skills of team management and crisis resource management, as well as improving outcomes in simulated scenarios. When added to traditional didactic teaching, simulation training has been shown to improve teamwork among members of emergency department (ED) staff. Scoring systems such as the Ottawa Global Rating Scale demonstrate reliability and validity for assessing leadership, communication, and resource management.[56, 57]
High-fidelity simulation is resource-intensive, historically requiring a faculty observer to be present to assess individual learners during sessions; this workload has limited the widespread use of simulation-based assessment. Video recording would allow multiple assessments of one learner's performance without requiring all faculty members to be present during the simulation session, and Williams et al. have demonstrated that assessment of videotaped sessions has inter-rater reliability comparable to real-time assessment.
Objective Structured Clinical Examinations
Objective structured clinical examinations are routinely used to evaluate multiple ACGME core competencies and are particularly useful for those involving direct patient contact (data gathering, assimilation of data, and patient management). Published data indicate that EM educators have used OSCEs to assess multiple patient care competencies across a variety of clinical scenarios. The American Board of Emergency Medicine (ABEM) oral examination format has been adapted to incorporate assessment of the core competencies into the critical actions of oral examination cases, based on changes to the Model of the Clinical Practice of Emergency Medicine.
OSCEs have also been used to assess specific patient care tasks within EM, such as death disclosure and intimate partner violence counseling. While OSCEs have limited use in procedural training, standardized patients have been used for training and assessment of noninvasive, nonpainful procedures such as ultrasound. OSCEs have been used to evaluate ultrasonography of the abdominal aorta, as well as completion of the Focused Assessment with Sonography in Trauma examination. In many of these circumstances the OSCE is used to evaluate the effectiveness of an educational intervention, either through comparison of pre- and posttesting or through comparison of study and control groups.
The reliability of OSCE assessment has been demonstrated through interobserver agreement[64, 65] and internal consistency. Quest et al. demonstrated good correlation between faculty and standardized patient ratings of resident performance; however, resident self-assessment correlated poorly with both, raising questions about the reliability of self-assessment in an OSCE format. The oral examination format used by ABEM has demonstrated interexaminer agreement of 97% on critical actions and 95% on performance ratings.
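Interobserver agreement figures like those above are typically raw percent agreement; chance-corrected statistics such as Cohen's kappa are often reported alongside, since raw agreement is inflated when one rating dominates. A minimal sketch with hypothetical pass/fail ratings (the ratings and item count are invented for illustration):

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Raw proportion of items on which two raters agree."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters."""
    n = len(r1)
    po = percent_agreement(r1, r2)                      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical pass/fail ratings of 10 critical actions by two examiners
rater1 = ["pass", "pass", "fail", "pass", "pass",
          "pass", "fail", "pass", "pass", "pass"]
rater2 = ["pass", "pass", "fail", "pass", "fail",
          "pass", "fail", "pass", "pass", "pass"]

print(percent_agreement(rater1, rater2))  # 0.9
print(cohens_kappa(rater1, rater2))
```

Here the raters agree on 9 of 10 items (90%), but kappa is noticeably lower because most items are "pass" and some agreement is expected by chance alone.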
Validity evidence includes improvement in OSCE scores with increasing levels of training[66-71] and correlation with other measures such as the mini-CEX, global evaluations,[72, 73] in-training examination scores, and core competency–based evaluations of patient care, medical knowledge, and practice-based learning. Wallenstein et al. demonstrated that scores on an acute care OSCE for PGY-1 residents correlated with global ratings of patient care and overall clinical performance at 18 months of training.
Global Assessments
Global assessments have been the most common method of meeting the ACGME requirement of semiannual resident performance review,[2, 75] anchored by specific terminology derived from the core competencies and, most recently, the EM milestones. Global assessments are subject to recall bias, response bias, and the subjectivity of nonclinical factors such as the halo or millstone effects. Faculty vary in their performance assessments, even when observing the same clinical encounter. When anchored to specific criteria such as the core competencies, however, global assessments demonstrate reasonable reliability and evidence of validity[24, 78] and have shown correlation with other measures of competence such as surgical in-training examination scores. Inclusion of specific assessment items that delineate the desired behaviors, skills, and actions is therefore essential to reducing subjectivity[22, 78] and increasing internal consistency.
The reporter-interpreter-manager-educator (RIME) framework used in internal medicine clerkships has demonstrated excellent reliability and validity compared with other measures such as U.S. Medical Licensing Examination (USMLE) scores and medical school grade point average. Ander et al. have demonstrated the validity of the RIME assessment tool for medical students compared with standard multi-item global evaluations. One anesthesia residency program developed a global assessment system completed on a biweekly basis throughout training. Over 2 years, 14,000 evaluations were collected, yielding data that could be normalized across individual faculty raters into a “z-score” that was highly reliable and valid in predicting resident performance and the need for remediation.
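The z-score normalization described above can be sketched in a few lines: each raw rating is expressed relative to the mean and spread of that particular rater's own scores, removing systematic "hawk/dove" differences between faculty before averaging across raters. The rater names, scale, and scores below are hypothetical, and the published system's exact method may differ.

```python
from statistics import mean, stdev

# Hypothetical evaluations: (faculty_rater, resident, raw score on a 1-9 scale)
evals = [
    ("dr_a", "res_1", 7), ("dr_a", "res_2", 8), ("dr_a", "res_3", 9),
    ("dr_b", "res_1", 4), ("dr_b", "res_2", 5), ("dr_b", "res_3", 6),
]

# Each rater's own score distribution (dr_a is a "dove", dr_b a "hawk")
by_rater = {}
for rater, _, score in evals:
    by_rater.setdefault(rater, []).append(score)
stats = {r: (mean(s), stdev(s)) for r, s in by_rater.items()}

# Convert each raw score to a z-score within its rater's distribution
z_scores = {}
for rater, resident, score in evals:
    mu, sd = stats[rater]
    z_scores.setdefault(resident, []).append((score - mu) / sd)

# A resident's normalized standing is the mean z-score across raters
summary = {res: mean(zs) for res, zs in z_scores.items()}
print(summary)
```

Although dr_a and dr_b give very different raw numbers, the normalized scores agree on each resident's relative standing, which is what makes large pools of routine evaluations usable for identifying outliers and remediation needs.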
360-Degree Evaluations
Although the 360-degree evaluation can involve anyone the learner contacts during professional duties, it has most commonly been studied with nursing assessments[82, 83] and patient assessments.[84, 85] Resident professionalism and interactions with nurses improved in an EM residency after nursing evaluation of residents was instituted. A study of practicing internists found nursing evaluations to be a useful measure of nonclinical skills. When measuring clinical skills, the same group found that ratings from at least 11 peers were required for accuracy. Individual practice improvement after receiving 360-degree evaluation feedback varies with both environmental factors, such as clinical workload and hospital management culture, and individual factors, such as self-efficacy and motivation. This suggests that awareness of 360-degree data alone may not be enough to change behavior and improve outcomes in the patient care competency. Although patients value the clinical skills of residents involved in their care, they may rate clinical skills less favorably when dissatisfied with resident care, regardless of the actual quality of care provided. Given the limited definition of patient care previously set out by King et al., patient assessments appear more applicable to the assessment of other core competencies.[84, 85, 88, 89]
Portfolios
To date there are no published studies on the reliability and validity of resident portfolios for assessing patient care competency in EM. While resident satisfaction with a learning portfolio in a general surgery training program was high, interobserver agreement on the quality of portfolio entries was poor. The authors do not describe the submitted entries in detail, but the template focuses more on differential diagnosis, diagnostic studies, and management options than on the details of operative procedures. Chart review can yield potentially valuable data on patient care but may suffer from the confounding effects of faculty collaboration in creating the chart. O'Sullivan et al. present a model of chart review that includes the appropriateness of history and physical documentation and orders, along with supporting materials such as supervising physicians' assessments of the case presentation and of resident efficiency in the ED. A subsequent study by the same primary author in psychiatry demonstrated the reliability of portfolio review with two to three reviewers; validity was shown with respect to medical knowledge and level of training, but surprisingly not clinical performance.
A Best Evidence Medical Education (BEME) systematic review of the educational effects of portfolios on undergraduate student learning was conducted in 2009. Of the 69 studies analyzed, only about a quarter met the minimum selected quality indicators, and only 13% reported changes in student skills and attitudes. Although study quality is improving in more recent analyses, the evidence for the educational effects of portfolios remains limited mostly to learner participation rather than a measurable educational effect, and the effects reported center on self-reflection, self-awareness, and medical knowledge rather than the patient care competency as previously defined.
Reflection and Self-assessment
While self-assessment shows limited reliability and evidence of validity for professionalism and communication skills, there is a lack of evidence to support its use in the high-stakes realm of physician competence in patient care. A 2006 systematic review identified 17 studies comparing self-assessment to one or more external objective measures such as OSCEs, simulation, examination performance, and supervisor evaluation (three studies used two external measures, for a total of 20 comparisons). Of the 20 comparisons, 13 demonstrated little, no, or an inverse relationship between self-assessment and objective external assessment; among the remaining seven, which showed an overall positive association, wide variability or methodologic errors were identified.
More recent analyses have also failed to demonstrate a strong correlation between self-assessment and independent assessors. A general surgery training program compared resident self-assessment to external evaluation by peers, nurses, and attending physicians. In all comparisons, residents overestimated their global performance regardless of their specific performance level, while underestimating their performance in specific competencies, including patient care. Residents in the upper quartile of performance underestimated their performance in additional specific competencies, whereas residents in the lowest quartile overestimated their professionalism skills. A similar study of anesthesia residents reviewing their own performance on three emergency HFS scenarios demonstrated moderate correlation between self- and observer assessments; this correlation was poorer at lower levels of performance, further evidence of the unreliability of self-assessment for patient care competence.
Clinical Performance Metrics
Clinical metrics derived from chart review or patient care information systems can be useful in assessing an individual's performance as measured by patients per hour, relative value units (RVUs), or other clinical care measures (e.g., patient acuity, resource utilization).[6, 98] When linked to systematic and ongoing feedback, assessment of clinical metrics can lead to long-term change in clinical practice. While there is evidence that certain measures such as RVUs per hour correlate with individual cognitive assessments of multitasking ability, these metrics potentially lack specificity: because a resident cannot practice independently in his or her supervised role, the measure reflects the combined performance of the resident and the supervising faculty rather than the resident in isolation. Rather than assessing the quality of an individual patient care encounter, metrics are better suited to assessing a resident's ability, on average across multiple encounters, to complete management plans and disposition patients expediently.
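The point about averaging across encounters rather than judging single ones can be made concrete with a small aggregation sketch. The shift log values below are hypothetical, and real systems would draw these fields from the ED information system.

```python
# Hypothetical shift log for one resident: (hours_worked, patients_seen, total_rvus)
shifts = [
    (8, 14, 28.4),
    (8, 11, 24.0),
    (9, 16, 33.1),
]

# Aggregate across all shifts; single-shift rates are too noisy to interpret
total_hours = sum(h for h, _, _ in shifts)
patients_per_hour = sum(p for _, p, _ in shifts) / total_hours
rvus_per_hour = sum(r for _, _, r in shifts) / total_hours
print(f"{patients_per_hour:.2f} pts/hr, {rvus_per_hour:.2f} RVU/hr")
```

Pooling hours and counts before dividing (rather than averaging per-shift rates) weights each hour equally, so a short shift does not distort the estimate; such aggregates are best interpreted as trends over many encounters, with the supervised-practice caveat noted above.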
Procedure Performance Assessment
Invasive procedural skills are an essential component of resident training. There is ample evidence of significant gaps in medical student and resident procedural competence,[100-103] as well as variability among residents in the correct and safe performance of procedures on patients. Strong evidence supports the need for audit and feedback after teaching procedural skills such as central venous catheter insertion to ensure a prolonged and profound behavioral change. That explicit assessment of technical skills occurs in as few as 15% of some procedure-oriented residencies highlights the need for improved training and structured assessment before direct patient care.
While paper or electronic procedure logs track a resident's cumulative experience, they do not involve direct observation of, and feedback on, specific psychomotor skills by faculty or other certified trainers. Procedural competence has been assessed using multiple methods, including direct observation during patient care,[30, 106] cadaveric models,[107, 108] animal models, simulated environments, simulated task trainers,[41, 48, 111-115] objective structured assessments, and procedure logs. A recent meta-analysis of simulation-based medical education demonstrated consistent results favoring simulation over traditional clinical educational methods. The validity evidence for simulation-based procedural training is very strong, as demonstrated by the real-world clinical effect of reduced infections and complications related to central venous line placement after simulation training,[118, 119] supporting the use of simulation methods for assessing procedural skill competency. As with direct observation and HFS assessments, rubrics with demonstrated evidence of validity and inter-rater reliability are essential to ensuring the quality of these assessments.[106, 113, 120] Once validated, these rubrics can be used by nonclinical raters, decreasing the resource intensity of the assessment.