Abstract

There is an established expectation that physicians in training demonstrate competence in all aspects of clinical care prior to entering professional practice. Multiple methods have been used to assess competence in patient care, including direct observation, simulation-based assessments, objective structured clinical examinations (OSCEs), global faculty evaluations, 360-degree evaluations, portfolios, self-reflection, clinical performance metrics, and procedure logs. A thorough assessment of competence in patient care requires a mixture of methods, taking into account each method's costs, benefits, and current level of evidence. At the 2012 Academic Emergency Medicine (AEM) consensus conference on educational research, one breakout group reviewed and discussed the evidence supporting various methods of assessing patient care and defined a research agenda for the continued development of specific assessment methods based on current best practices. In this article, the authors review each method's supporting reliability and validity evidence and make specific recommendations for future educational research.

In 2001, the Accreditation Council for Graduate Medical Education (ACGME) introduced a timeline for the implementation of training and assessment in six core competencies that form the foundation of clinical competence. Introduced in 1996, the Canadian CanMEDS manager competency correlates to the ACGME patient care competency and is broadly defined as “the active engagement in decision-making in the operation of the healthcare system.”[1] The patient care competency for emergency medicine (EM) was defined by a previous Academic Emergency Medicine (AEM) consensus conference[2] and has since been further elaborated by the milestones in training[3] as the ability to efficiently gather and synthesize medical and diagnostic information, prioritize tasks, and implement management plans for multiple patients, as well as to perform essential invasive procedures competently.

There is an explicit expectation that physicians in training demonstrate competence in various aspects of clinical care prior to graduation and professional practice.[4] While this accountability falls squarely on the shoulders of residency training programs, it is mirrored by commensurate expectations of maintenance of competency during ongoing professional practice.

The goals of the 2012 AEM consensus conference patient care working group were to describe the current state of evidence for assessment of competence in patient care and define a research agenda for the further development of specific assessment methods based on current best practices.

Methods

A search of MEDLINE (1996 to present) was conducted using the key word search terms “assessment,” “patient care,” “competency,” “competence,” “assess*,” “emergency,” and “education,” limited to humans and the English language [Boolean search: ((assessment AND patient care AND (competency OR competence)) OR (assess* AND emergency AND education)), yielding 3,493 results; (patient care AND competency) AND assessment, yielding 282 references]. These searches were combined with the additional search terms “resident* OR medical student*” (58,880 results), yielding 414 and 267 final results, respectively. After review for relevance, 76 articles remained. Additional references were identified from review of these results and are included where relevant. These articles served as the foundation for the breakout group's discussion.
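For readers who wish to approximate this strategy programmatically, the following is a minimal sketch using Biopython's Entrez interface to query PubMed; the query string, date limits, and contact e-mail address are illustrative assumptions and will not exactly reproduce the counts reported above.

    # Approximate, programmatic version of the MEDLINE search described above.
    # Minimal sketch using Biopython's Entrez E-utilities wrapper; the query
    # string, date limits, and e-mail address are illustrative assumptions and
    # will not exactly reproduce the hit counts reported in the text.
    from Bio import Entrez

    Entrez.email = "editor@example.org"  # placeholder; NCBI requires a contact address

    query = (
        "((assessment AND patient care AND (competency OR competence)) "
        "OR (assess* AND emergency AND education)) "
        "AND (resident* OR medical student*) "
        "AND humans[MeSH Terms] AND english[Language]"
    )

    handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                            mindate="1996", maxdate="2012", retmax=100)
    record = Entrez.read(handle)
    handle.close()

    print("Total hits:", record["Count"])
    print("First PMIDs:", record["IdList"][:10])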

Results

Overview of Assessment Methods Identified

Multiple methods have been used to assess competence in patient care, including direct observation, simulation-based assessments, objective structured clinical examinations (OSCEs), global faculty evaluations, 360-degree evaluations, portfolios, self-reflection, clinical performance metrics, and procedure logs. Demonstration of competency in patient care at successive levels of training and across multiple clinical scenarios requires several overlapping methods to ensure validity of assessment. A survey of Canadian residency programs demonstrated an average of 1.75 assessment methods.[5] Of equal importance is the frequency and consistency of formative assessment,[6] its integration into the educational curriculum, and the “catalytic effect”[7] of assessment results and feedback on improving individual performance. The selection of assessment methods will be based at least in part on the availability of financial, faculty, and learning resources within a residency. Each method and its supporting evidence for validity and reliability will be discussed individually.

Direct Observation

Direct observation assesses the learner during actual patient care in the clinical setting. It allows faculty to provide formative feedback to the learner in real time[8-11] and tends to generate more specific feedback and constructive comments compared to global assessments.[12, 13] At least 55 direct observation tools have been developed, but only a few have demonstrated reliability or validity or have had educational outcomes measured.[14]

Faculty training on the use of any direct observation tool is important, given the potential for variable interpretation of both the clinical encounter and the tool's language, yet few studies have demonstrated more than cursory observer training.[14] There is evidence, however, that even without extensive training, certain tools have good to excellent reliability.[10, 15] The correlation between direct observation and other measures of competency, such as written test scores[16-25] and OSCE or standardized patient assessments,[18-21, 25, 26] has been studied in a number of specialties, with modest correlations supporting the validity of certain direct observation methods. Internal medicine has produced many studies of direct observation; the tool with the strongest evidence is the mini–clinical evaluation exercise (mini-CEX), which has robust support for its validity and reliability.[11, 15] Other specialties, such as physical medicine and rehabilitation, have developed similar tools for clinical assessment.[27] The EM Standardized Direct Observation Assessment Tool (SDOT) has shown good inter-rater reliability when residents were observed via videotaped interactions[10] and, when liberal agreement criteria were used, in real-time clinical practice.[28] Learners report improved satisfaction[9] and perceive a positive effect on their clinical care with direct observation assessment.[14] Demonstration of a change in the delivery or quality of patient care is rare in direct observation studies; more often, improvements are in learner or observer self-assessed modification of attitudes, knowledge, or skills.[14]

Although faculty generally like direct observation as an assessment method,[8, 9, 27, 29] adding this responsibility to existing faculty requirements of direct patient care, supervision, and bedside teaching may seem burdensome. A few EM residency programs have used nonclinical faculty to perform direct observation with reported success[8, 29]; however, this may not be financially practical for many programs. Another concern is that certain patient care encounters, such as resuscitations, require faculty supervision and direct participation, limiting the ability to perform direct observation. One solution is to videotape resuscitations for delayed review and debriefing,[30] although technical and legal (HIPAA) barriers exist.

Simulation

Simulation has the advantage of using standardized scenarios that can be designed to assess specific skills and global patient care without risk to patients. When paired with directed feedback, simulation assessments have demonstrated long-term retention of certain skills at 1.5 years.[31] Scenarios and their assessment rubrics must be both designed in a standardized format that permits dissemination and tested for their reliability and evidence of validity.[32, 33] When high-fidelity simulation (HFS) and the core competencies were first introduced, assessment tools were unvalidated and considered too blunt to provide more than formative assessment[34]; as assessment design becomes increasingly reliable and valid, using simulation-based assessment (SBA) as a summative, or high-stakes, measurement of competency is an important area for further research.[35]

Learners can be assessed with both checklists (e.g., time to action, critical actions performed) and global performance ratings; different information is gleaned from each, both can have good discriminatory power,[30, 35, 36] and a combination is most useful.[37] Because patient care requires a broad skill set and knowledge base, multiple scenarios are needed to provide a valid assessment of overall patient care competency and to distinguish between performance at different levels of training. Murray et al. demonstrated that 12 scenarios were needed in a study of anesthesia residents and attending physicians[38] and six in another study comparing student certified registered nurse anesthetists to senior and junior anesthesiologists.[36]

An HFS assessment rating tool should demonstrate both interobserver reliability and evidence of validity, such as improved performance at higher levels of training. Assessments have demonstrated validity for both medical students[39, 40] and residents.[39-43] A set of four pediatric advanced life support scenarios demonstrated good inter-rater reliability and higher scores for more senior pediatric residents, but suggested that multiple scenarios are needed to provide a valid assessment.[44] Improved clinical performance has been demonstrated in advanced cardiac life support (ACLS) using a checklist SBA with high reliability and internal consistency in an internal medicine residency program.[45] One study assessing interns from multiple specialties managing two cardiac scenarios showed a surprising decrease in scores after the clinical experience of intern year, raising questions regarding the assessment's validity.[46] This highlights the importance of assessment rubrics that reflect the clinical skills and cognition mapping to real-world competent patient care, rather than rubrics aimed at stratifying learner performance with items that penalize more experienced learners who may skip steps.[39] Because experts often use shortcuts to arrive at diagnostic conclusions,[47] rubrics must be designed carefully so that they do not overlook more advanced levels of performance.

The evidence for validity of HFS assessments compared with other forms of assessment is limited. Gordon et al.[42] demonstrated validity of HFS assessment when compared to an OSCE. One EM residency program designed a well-received simulation curriculum that found most learners to be competent but that did not translate into an increase in written test scores,[48] highlighting the need to design HFS and other methods of assessment around the educational outcomes the assessment is intended to measure.[37]

The ultimate evidence of validity is comparison to actual patient outcomes or subsequent improvements in patient care, but this has been infrequently measured.[49] Based on chart reviews of their resuscitations, internal medicine residents who received simulation-based ACLS training performed better than more senior residents who had traditional training,[50] albeit with the limitation that the intervention group was assessed closer in time to its ACLS training. Internal medicine residents demonstrated improved airway management skills, both in the simulation laboratory and at the patient's bedside, when scored by checklist after HFS training.[51] This training was achievable whether senior residents or faculty were training PGY-1 residents.[52] Pediatric EM and gastroenterology attending physicians performed better on a procedural sedation checklist after HFS training and assessment,[53] demonstrating that this effect is not limited to novices.

Simulation-based team training (SBTT) research is limited but shows promise in enhancing the more complex skills of team management and crisis resource management, as well as improving outcomes in simulated scenarios.[54] When added to traditional didactic teaching, simulation training has been shown to improve teamwork among members of emergency department (ED) staff.[55] Scoring systems such as the Ottawa Global Rating Scale demonstrate reliability and validity for assessing leadership, communication, and resource management.[56, 57]

High-fidelity simulation is resource-intensive, historically requiring a faculty observer to be present to assess individual learners during sessions. This workload has limited the widespread use of simulation-based assessment.[35] Video assessment would allow multiple assessments of one learner's performance without requiring all faculty members to be present during the simulation session. Williams et al.[58] demonstrated that assessment of videotaped sessions has inter-rater reliability comparable to that of real-time assessment.

OSCEs

Objective structured clinical examinations are routinely used to evaluate multiple ACGME core competencies and are particularly useful for those that involve direct patient contact (data gathering, assimilation of data, and patient management). Published data indicate that EM educators have used OSCEs to assess multiple patient care competencies using a variety of clinical scenarios.[59] The American Board of Emergency Medicine (ABEM) oral examination format has been adapted to incorporate assessment of the core competencies into the critical actions of oral examination cases, based on changes to the Model of the Clinical Practice of Emergency Medicine.[60]

OSCEs have also been used to assess specific patient care tasks within EM, such as death disclosure[61] and intimate partner violence counseling.[62] While OSCEs have limited use in procedural training, standardized patients have been used for training and assessment of noninvasive, nonpainful procedures such as ultrasound. OSCEs have been used to evaluate ultrasonography of the abdominal aorta,[63] as well as completion of the Focused Assessment with Sonography in Trauma examination.[59] In many of these circumstances the OSCE is used to evaluate the effectiveness of an educational intervention, either through comparison of pre- and posttesting or through comparison of study and control groups.

The reliability of OSCE assessment has been demonstrated through interobserver agreement[64, 65] and internal consistency.[66] Quest et al.[61] demonstrated good correlation of faculty and standardized patient ratings of resident performance; however, there was poor correlation between resident self-assessment and both faculty and standardized patient ratings, raising the question of the reliability of self-assessment using an OSCE format. The oral examination format used by ABEM has demonstrated an interexaminer agreement of 97% on critical actions and 95% on performance ratings.[65]
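Interobserver agreement figures such as those above are often summarized with a chance-corrected statistic. The sketch below computes Cohen's kappa for two examiners scoring the same set of critical actions; the choice of statistic and the ratings are purely illustrative assumptions and are not drawn from the cited studies.

    # Chance-corrected interobserver agreement for two raters scoring the same
    # OSCE critical actions. The ratings below are hypothetical; Cohen's kappa
    # is one common statistic, not necessarily the one used in the cited work.
    from sklearn.metrics import cohen_kappa_score

    # 1 = critical action performed, 0 = not performed, for 12 stations
    examiner_a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1]
    examiner_b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1]

    raw_agreement = sum(a == b for a, b in zip(examiner_a, examiner_b)) / len(examiner_a)
    kappa = cohen_kappa_score(examiner_a, examiner_b)

    print(f"Raw agreement: {raw_agreement:.2f}")   # proportion of stations rated identically
    print(f"Cohen's kappa: {kappa:.2f}")           # agreement corrected for chance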

Validity evidence has been demonstrated through improvement with increasing levels of training[66-71] and through comparison to other measures, such as the mini-CEX,[26] global evaluations,[72, 73] in-training examination scores,[74] and core competency-based evaluations of patient care, medical knowledge, and practice-based learning.[73] Wallenstein et al.[73] demonstrated that scores on an acute-care OSCE for PGY-1 residents correlated with global ratings of patient care and overall clinical performance at 18 months of training.

Global Assessment

Global assessments have been the most commonly used method to meet the ACGME requirement of biannual resident performance review,[2, 75] anchored by specific terminology derived from the core competencies[76] and most recently the EM milestones.[3] Global assessments are subject to recall bias, response bias, and the subjectivity of non-clinical factors such as the halo or millstone effects.[2] Faculty vary in their performance assessments, even when observing the same clinical encounter.[77] When anchored to specific criteria such as the core competencies, global assessments demonstrate reasonable reliability and evidence of validity.[24, 78] They have shown correlation with other measures of competence such as surgical in-training examination scores.[25] Thus, inclusion of specific assessment items that delineate the desired behaviors, skills, and actions is essential to reducing subjectivity[22, 78] and increasing internal consistency.

The reporter-interpreter-manager-educator (RIME) framework used in internal medicine clerkships is an assessment tool that has demonstrated excellent reliability and validity when compared to other measures such as U.S. Medical Licensing Examination (USMLE) scores and medical school grade point average.[79] Ander et al.[80] have demonstrated the validity of the RIME assessment tool for medical students when compared to standard multi-item global evaluations. One anesthesia residency program has developed a global assessment system that is completed on a biweekly basis throughout training. Over a period of 2 years, 14,000 evaluations were collected, yielding data that could be normalized across individual faculty raters. The resulting “z-score” demonstrated a very high degree of reliability and validity in predicting resident performance and the need for remediation.[78]
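One standard way to express this kind of rater normalization (a minimal sketch; the exact computation used in the cited program[78] may differ) is to convert each rating to a z-score relative to the issuing rater's own rating distribution,

    z_{ij} = \frac{x_{ij} - \bar{x}_j}{s_j},

where $x_{ij}$ is rater $j$'s rating of resident $i$, and $\bar{x}_j$ and $s_j$ are rater $j$'s mean rating and standard deviation across all residents rated; a resident's overall normalized score is then the mean of his or her $z_{ij}$ values across raters.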

360-degree Evaluations

Although the 360-degree evaluation can involve anyone the learner comes in contact with during his or her professional duties,[81] it has most commonly been studied with nursing assessments[82, 83] and patient assessments.[84, 85] Resident professionalism and interactions with nurses improved in an EM residency after instituting nursing evaluation of the residents.[82] A study of practicing internists found nursing evaluations to be a useful measure of nonclinical skills.[83] When measuring clinical skills, the same group found that peer ratings required at least 11 items to be accurate.[86] Individual practice improvement after receiving 360-degree evaluation feedback varies with environmental factors, such as clinical workload and hospital management culture, and with individual factors, such as self-efficacy and motivation.[87] This suggests that awareness of 360-degree data may not be enough to influence behavioral change and improve outcomes in the patient care competency. Although patients value the clinical skills of residents involved in their care,[84] they may view clinical skills less favorably when not satisfied with resident care,[88] regardless of the actual quality of care provided. Given the limited definition of patient care as previously defined by King et al.,[2] patient assessments would appear more applicable to the assessment of other core competencies.[84, 85, 88, 89]

Portfolios

To date there are no published studies on the reliability and validity of resident portfolios in EM for assessing the patient care competency. While resident satisfaction with the use of a learning portfolio in a general surgery training program was high, there was poor interobserver agreement on the assessment of the quality of portfolio entries.[90] Although the authors do not describe the submitted portfolio entries in detail, the template focuses more generally on differential diagnosis, diagnostic studies, and management options rather than on the details of operative procedures. Chart review can yield potentially valuable data on patient care, but may suffer from the confounding effects of collaboration with faculty as the chart is created. O'Sullivan et al.[91] present a model of chart review that includes the appropriateness of history and physical documentation, orders, and additional supporting materials such as assessments by supervising physicians regarding the case presentation and resident efficiency in the ED. A subsequent study by the same primary author in psychiatry demonstrated the reliability of portfolio reviews when assessed by two to three reviewers. Validity was shown with respect to medical knowledge and level of training, but surprisingly not clinical performance.[92]

A Best Evidence Medical Education (BEME) systematic review of the educational effects of portfolios on undergraduate student learning was conducted in 2009.[93] Of the 69 studies analyzed, only about a quarter met the minimum selected quality indicators, and only 13% reported changes in student skills and attitudes. Although the review noted a trend toward improving study quality in more recent work, the evidence for the educational effects of portfolios is limited mostly to learner participation rather than a measurable educational effect. These effects center on self-reflection, self-awareness, and medical knowledge,[93] rather than the patient care competency as previously defined.[2]

Reflection and Self-assessment

While self-assessment shows limited reliability and evidence of validity for professionalism and communication skills,[94] there is a lack of evidence to support its use in the high-stakes realm of physician competence in patient care. A systematic review in 2006 identified 17 studies comparing self-assessment to one or more external objective measures, such as OSCEs, simulation, examination performance, and supervisor evaluation (three studies used two external measures for a total of 20 comparisons).[95] Of the 20 comparisons, 13 demonstrated little, no, or an inverse relationship between self-assessment and objective external assessment. Among the remaining seven demonstrating an overall positive association, wide variability or methodologic errors were identified.[95]

More recent analyses have also failed to demonstrate a strong correlation between self-assessment and independent assessors. A general surgery training program compared resident self-assessment to external evaluation by peers, nurses, and attending physicians. In all comparisons, residents overestimated their global performance regardless of their specific performance level.[96] Residents underestimated their performance in specific competencies including patient care. Residents in the upper quartile of performance underestimated their performance in additional specific competencies, whereas residents in the lowest performance quartile overestimated professionalism skills. A similar study in anesthesia residents demonstrated moderate correlation between self- and observer assessments when reviewing their performance on three emergency HFS scenarios; however, this correlation was poorer at the lower levels of performance,[97] further supporting the unreliability of self-assessment for patient care competence.

Metrics

Clinical metrics derived from chart review or patient care information systems can be useful in assessing an individual's performance as measured by patients per hour, relative value units (RVUs), or other clinical care measures (e.g., patient acuity, resource utilization).[6, 98] When linked to systematic and ongoing feedback, assessment of clinical metrics can lead to long-term clinical practice change.[6] While there is evidence that certain measures such as RVUs/hour correlate with individual cognitive assessments of multitasking ability,[99] such metrics potentially suffer from a lack of specificity, given the resident's inherent inability to practice independently in his or her supervised role. The measure is more a reflection of the combined performance of the resident and supervising faculty than of the resident in isolation. Rather than assessing the quality of an individual patient care encounter, metrics are better suited to assessing a resident's ability, on average across multiple encounters, to complete management plans and disposition patients expediently.
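As an illustration of how such metrics might be derived from an ED information system export, the sketch below computes patients per hour and RVUs per hour from hypothetical shift-level data; the field names and values are assumptions rather than a description of any cited system.

    # Hypothetical shift-level data pulled from an ED information system export.
    # Field names and values are illustrative assumptions only.
    shifts = [
        {"hours": 8, "patients_seen": 14, "total_rvus": 28.5},
        {"hours": 9, "patients_seen": 18, "total_rvus": 35.0},
        {"hours": 8, "patients_seen": 12, "total_rvus": 24.7},
    ]

    total_hours = sum(s["hours"] for s in shifts)
    patients_per_hour = sum(s["patients_seen"] for s in shifts) / total_hours
    rvus_per_hour = sum(s["total_rvus"] for s in shifts) / total_hours

    # Averaging across many shifts smooths single-encounter variation, which is
    # why metrics are better suited to performance "on average" than to judging
    # any individual patient care encounter.
    print(f"Patients/hour: {patients_per_hour:.2f}")
    print(f"RVUs/hour: {rvus_per_hour:.2f}")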

Procedure Performance Assessment

Invasive procedural skills are an essential component of resident training. There is ample evidence of significant gaps in medical student and resident procedural competence,[100-103] as well as variability in residents' correct and safe performance of procedures on patients.[104] There is strong evidence supporting the need for audit and feedback after teaching procedural skills such as central venous catheter insertion to ensure a prolonged and profound behavioral change.[105] The fact that explicit assessment of technical skills occurs in as few as 15% of programs in some procedure-oriented specialties[75] highlights the need for improved training and structured assessment prior to direct patient care.

While paper or electronic procedure logs may track a resident's cumulative experience, they do not involve direct observation of, and feedback on, specific psychomotor skills by faculty or other certified trainers. Procedural competence has been assessed using multiple methods, including direct observation during patient care,[30, 106] cadaveric models,[107, 108] animal models,[109] simulated environments,[110] simulated task trainers,[41, 48, 111-115] objective structured assessments,[74] and procedure logs.[116] A recent meta-analysis of simulation-based medical education demonstrated a consistency of results favoring simulation over traditional clinical educational methods.[117] The validity evidence for simulation-based procedural training is strong, as demonstrated by the real-world clinical effect of reduced infections[118] and complications[119] related to central venous line placement after such training, supporting the use of simulation methods for procedural skill competency assessment. As with direct observation and HFS assessments, rubrics with demonstrated evidence of validity and inter-rater reliability are essential to ensuring the quality of these assessments.[106, 113, 120] Once validated, these rubrics can be used by nonclinical raters, decreasing the resource intensity of the assessment.[121]

Summary

Consensus Recommendations

A holistic assessment of competence in patient care requires a mixture of methods rather than any single method, taking into account each method's costs, benefits, and current level of evidence (see Table 1). Assessments should focus on specific behaviors, tasks, and skills, with opportunities for repeated performance and formative feedback that drives learner growth.[47, 122] The assessment rubric should undergo rigorous testing of its reliability and evidence of validity by comparing its results to actual patient care and patient outcomes. Follow-up assessment is important to ensure durability of competence and can influence curricular changes in the timing, structure, or repetition of educational interventions throughout residency training (see Figure 1). A variety of assessment methods is necessary to accommodate local variations in access to high-cost technologies such as HFS.

Table 1. Summary of Methods

Columns: Assessment Method | Strengths | Weaknesses | Relative Cost (Excluding Faculty Time) | Highest Level of Evidence of Outcomes (a)

  • Simulation | No risk; wide range of scenarios/resuscitations | Cost; suspension of disbelief | $$$–$$$$ | 4
  • Direct observation | Actual patient care | Variable scenarios/resuscitations | $–$$ | 2–3
  • 360-degree evaluations | Multiple sources for observations | Potential for participation bias, halo and millstone effects | $ | 1
  • OSCE, oral examination, standardized patients | No risk; high fidelity; standardized | Smaller range of scenarios/resuscitations | $–$$$ | 2
  • Portfolios | Learner-driven | Collection of reflections and work outputs rather than actual patient care | | 1
  • Self-reflection | Reflective practice | Poor correlation at lower performance levels | | 1
  • Global assessment | Moderate validity when anchored | Potential for participation bias, halo and millstone effects | $ | NA
  • Metrics | Reflect actual measure of clinical practice | Limited by dependence on supervising faculty | $–$$ | 1

OSCE = objective structured clinical examination.

(a) Outcomes were rated using a modified Kirkpatrick hierarchy wherein levels of impact are as follows: 1 = participation (learners’ or observers’ views on the tool or its implementation); 2 = learner or observer self-assessed modification of attitudes, knowledge, or skills; 3 = transfer of learning (objectively measured change in learner or observer knowledge or skills); and 4 = results (change in organizational delivery or quality of patient care).
Figure 1. Agenda for developing, validating, and implementing assessments. PGY = postgraduate year.

Direct observation, OSCE, and HFS have the strongest evidence as valid and reliable assessment methods. Global assessments and 360-degree evaluations require specific behavioral anchors to increase their validity and large response rates to control for confounders such as the halo/millstone effects and individual rater variability. Metrics can provide valuable performance data for residents in their more senior years, since these measures can be directly compared to attending physician performance standards. Portfolios and self-reflection lack evidence to support their use as stand-alone assessments of patient care, but have the benefit of encouraging the reflective and learner-directed practice that forms the basis of continuing medical education.

Research Agenda

  • Determine the number of direct observation assessments and types of patient encounters (e.g., critical diagnoses, chief complaints, diagnostic complexity) that are needed to provide a valid reflection of patient care competence for an individual resident.
  • Design and codify a process to create reliable and valid simulation, objective structured clinical, and oral examination assessments that use checklists (time to event or critical action) and global ratings to assess competence in ways that reflect expert clinical practice (which may use shortcuts) rather than simply the accomplishment of basic task lists.
  • Determine the number of global assessments needed to compose a valid assessment of a resident's patient care competence accounting for the known biases of this method.
  • Assess the validity and relevance of nonclinician evaluations in patient care competence given the influence of potential confounders.
  • Determine the validity of clinical metrics relative to other more-studied forms of assessment with good reliability and validity such as direct observation, OSCE, and simulation.
  • Develop standardized training programs and assessments for procedural skill acquisition (such as those for central line insertion), starting with no-risk methods such as simulated, cadaveric, or OSCE experiences and concluding with direct observation assessment during actual patient care and correlation to complications and patient outcomes.

References

  • 1
    Frank J JM, Tugwell P. Skills for the new millennium: report of the Societal Needs Working Group, CanMEDS 2000 Project. Ann Roy Coll Phys Surg Can. 1996; 29:20616.
  • 2
    King RW, Schiavone F, Counselman FL, Panacek EA. Patient care competency in emergency medicine graduate medical education: results of a consensus group on patient care. Acad Emerg Med. 2002; 9:122735.
  • 3
    Council of Emergency Medicine Residency Directors. Milestones in Training: More than Just Measurements! Council of Residency Directors in Emergency Medicine Annual Meeting. April 2, 2012, Atlanta, GA.
  • 4
    Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002; 287:22635.
  • 5
    Chou S, Cole G, McLaughlin K, Lockyer J. CanMEDS evaluation in Canadian postgraduate training programmes: tools used and programme director satisfaction. Med Educ. 2008; 42:87986.
  • 6
    Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME Guide No. 7. Med Teach. 2006; 28:11728.
  • 7
    Norcini J, Anderson B, Bollela V, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011; 33:20614.
  • 8
    Cydulka RK, Emerman CL, Jouriles NJ. Evaluation of resident performance and intensive bedside teaching during direct observation. Acad Emerg Med. 1996; 3:34551.
  • 9
    Lane JL, Gottlieb RP. Structured clinical observations: a method to teach clinical skills with limited time and financial resources. Pediatrics. 2000; 105(4 Pt 2):9737.
  • 10
    Shayne P, Gallahue F, Rinnert S, Anderson CL, Hern G, Katz E. Reliability of a core competency checklist assessment in the emergency department: the Standardized Direct Observation Assessment Tool. Acad Emerg Med. 2006; 13:72732.
  • 11
    Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: a method for assessing clinical skills. Ann Intern Med. 2003; 138:47681.
  • 12
    Young JQ, Lieu S, O'Sullivan P, Tong L. Development and initial testing of a structured clinical observation tool to assess pharmacotherapy competence. Acad Psychiatry. 2011; 35:2734.
  • 13
    Hamburger EK, Cuzzi S, Coddington DA, et al. Observation of resident clinical skills: outcomes of a program of direct observation in the continuity clinic setting. Acad Pediatr. 2011; 11:394402.
  • 14
    Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009; 302:131626.
  • 15
    Norcini JJ, Blank LL, Arnold GK, Kimball HR. Examiner differences in the mini-CEX. Adv Health Sci Educ Theory Pract. 1997; 2:2733.
  • 16
    Meagher FM, Butler MW, Miller SD, Costello RW, Conroy RM, McElvaney NG. Predictive validity of measurements of clinical competence using the team objective structured bedside assessment (TOSBA): assessing the clinical competence of final year medical students. Med Teach. 2009; 31:e54550.
  • 17
    Clay AS, Que L, Petrusa ER, Sebastian M, Govert J. Debriefing in the intensive care unit: a feedback tool to facilitate bedside teaching. Crit Care Med. 2007; 35:73854.
  • 18
    Richards ML, Paukert JL, Downing SM, Bordage G. Reliability and usefulness of clinical encounter cards for a third-year surgical clerkship. J Surg Res. 2007; 140:13948.
  • 19
    Hatala R, Ainslie M, Kassen BO, Mackie I, Roberts JM. Assessing the mini-clinical evaluation exercise in comparison to a national specialty examination. Med Educ. 2006; 40:9506.
  • 20
    Brennan BG, Norman GR. Use of encounter cards for evaluation of residents in obstetrics. Acad Med. 1997; 72(10 Suppl 1):S434.
  • 21
    Hamdy H, Prasad K, Williams R, Salih FA. Reliability and validity of the direct observation clinical encounter examination (DOCEE). Med Educ. 2003; 37:20512.
  • 22
    Dunnington GL, Wright K, Hoffman K. A pilot experience with competency-based clinical skills assessment in a surgical clerkship. Am J Surg. 1994; 167:6046.
  • 23
    Price J, Byrne JA. The direct clinical examination: an alternative method for the assessment of clinical psychiatry skills in undergraduate medical students. Med Educ. 1994; 28:1205.
  • 24
    Durning SJ, Cation LJ, Markert RJ, Pangaro LN. Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Acad Med. 2002; 77:9004.
  • 25
    Nuovo J, Bertakis KD, Azari R. Assessing resident's knowledge and communication skills using four different evaluation tools. Med Educ. 2006; 40:6306.
  • 26
    Boulet JR, McKinley DW, Norcini JJ, Whelan GP. Assessing the comparability of standardized patient and physician evaluations of clinical skills. Adv Health Sci Educ Theory Pract. 2002; 7:8597.
  • 27
    Musick DW, Bockenek WL, Massagli TL, et al. Reliability of the physical medicine and rehabilitation resident observation and competency assessment tool: a multi-institution study. Am J Phys Med Rehabil. 2010; 89:23544.
  • 28
    LaMantia J, Kane B, Yarris L, et al. Real-time inter-rater reliability of the Council of Emergency Medicine Residency Directors standardized direct observation assessment tool. Acad Emerg Med. 2009; 16(Suppl 2):S517.
  • 29
    Dorfsman ML, Wolfson AB. Direct observation of residents in the emergency department: a structured educational program. Acad Emerg Med. 2009; 16:34351.
  • 30
    Santora TA, Trooskin SZ, Blank CA, Clarke JR, Schinco MA. Video assessment of trauma response: adherence to ATLS protocols. Am J Emerg Med. 1996; 14:5649.
  • 31
    Ander DS, Heilpern K, Goertz F, Click L, Kahn S. Effectiveness of a simulation-based medical student course on managing life-threatening medical conditions. Simul Healthc. 2009; 4:20711.
  • 32
    Bond WF, Spillane L. The use of simulation for emergency medicine resident assessment. Acad Emerg Med. 2002; 9:12959.
  • 33
    Boulet JR, Murray DJ. Simulation-based assessment in anesthesiology: requirements for practical implementation. Anesthesiology. 2010; 112:104152.
  • 34
    McLaughlin SA, Doezema D, Sklar DP. Human simulation in emergency medicine training: a model curriculum. Acad Emerg Med. 2002; 9:13108.
  • 35
    Spillane L, Hayden E, Fernandez R, et al. The assessment of individual cognitive expertise and clinical competency: a research agenda. Acad Emerg Med. 2008; 15:10718.
  • 36
    Murray DJ, Boulet JR, Kras JF, McAllister JD, Cox TE. A simulation-based acute skills performance assessment for anesthesia training. Anesth Analg. 2005; 101:112734.
  • 37
    McLaughlin S, Fitch MT, Goyal DG, et al. Simulation in graduate medical education 2008: a review for emergency medicine. Acad Emerg Med. 2008; 15:111729.
  • 38
    Murray DJ, Boulet JR, Avidan M, et al. Performance of residents and anesthesiologists in a simulation-based skill assessment. Anesthesiology. 2007; 107:70513.
  • 39
    Murray D, Boulet J, Ziv A, Woodhouse J, Kras J, McAllister J. An acute care skills evaluation for graduating medical students: a pilot study using clinical simulation. Med Educ. 2002; 36:83341.
  • 40
    Boulet JR, Murray D, Kras J, Woodhouse J, McAllister J, Ziv A. Reliability and validity of a simulation-based acute care skills assessment for medical students and residents. Anesthesiology. 2003; 99:127080.
  • 41
    Girzadas DV Jr, Clay L, Caris J, Rzechula K, Harwood R. High fidelity simulation can discriminate between novice and experienced residents when assessing competency in patient care. Med Teach. 2007; 29:4726.
  • 42
    Gordon JA, Tancredi DN, Binder WD, Wilkerson WM, Shaffer DW. Assessment of a clinical performance evaluation tool for use in a simulator-based testing environment: a pilot study. Acad Med. 2003; 78(10 Suppl):S457.
  • 43
    Brett-Fleegler MB, Vinci RJ, Weiner DL, Harris SK, Shih MC, Kleinman ME. A simulator-based tool that assesses pediatric resident resuscitation competency. Pediatrics. 2008; 121:e597603.
  • 44
    Donoghue A, Nishisaki A, Sutton R, Hales R, Boulet J. Reliability and validity of a scoring instrument for clinical performance during Pediatric Advanced Life Support simulation scenarios. Resuscitation. 2010; 81:3316.
  • 45
    Wayne DB, Butter J, Siddall VJ, et al. Simulation-based training of internal medicine residents in advanced cardiac life support protocols: a randomized trial. Teach Learn Med. 2005; 17:2106.
  • 46
    Opar SP, Short MW, Jorgensen JE, Blankenship RB, Roth BJ. Acute coronary syndrome and cardiac arrest: using simulation to assess resident performance and program outcomes. J Grad Med Educ. 2010; 2:4049.
  • 47
    Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med. 2008; 15:98894.
  • 48
    Noeller TP, Smith MD, Holmes L, et al. A theme-based hybrid simulation model to train and evaluate emergency medicine residents. Acad Emerg Med. 2008; 15:1199206.
  • 49
    Kirkpatrick D, ed. Evaluation of Training. New York, NY: McGraw-Hill, 1967.
  • 50
    Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest. 2008; 133:5661.
  • 51
    Mayo PH, Hackney JE, Mueck JT, Ribaudo V, Schneider RF. Achieving house staff competence in emergency airway management: results of a teaching program using a computerized patient simulator. Crit Care Med. 2004; 32:24227.
  • 52
    Rosenthal ME, Adachi M, Ribaudo V, Mueck JT, Schneider RF, Mayo PH. Achieving housestaff competence in emergency airway management using scenario based simulation training: comparison of attending vs housestaff trainers. Chest. 2006; 129:14538.
  • 53
    Shavit I, Keidan I, Hoffmann Y, et al. Enhancing patient safety during pediatric sedation: the impact of simulation-based training of nonanesthesiologists. Arch Pediatr Adolesc Med. 2007; 161:7403.
  • 54
    Weaver SJ, Salas E, Lyons R, et al. Simulation-based team training at the sharp end: a qualitative study of simulation-based team training design, implementation, and evaluation in healthcare. J Emerg Trauma Shock. 2010; 3:36977.
  • 55
    Shapiro MJ, Morey JC, Small SD, et al. Simulation based teamwork training for emergency department staff: does it improve clinical team performance when added to an existing didactic teamwork curriculum? Qual Saf Health Care. 2004; 13:41721.
  • 56
    Kim J, Neilipovitz D, Cardinal P, Chiu M, Clinch J. A pilot study using high-fidelity simulation to formally evaluate performance in the resuscitation of critically ill patients: the University of Ottawa Critical Care Medicine, High-Fidelity Simulation, and Crisis Resource Management I Study. Crit Care Med. 2006; 34:216774.
  • 57
    Kim J, Neilipovitz D, Cardinal P, Chiu M. A comparison of global rating scale and checklist scores in the validation of an evaluation tool to assess performance in the resuscitation of critically ill patients during simulated emergencies (abbreviated as “CRM simulator study IB”). Simul Healthc. 2009; 4:616.
  • 58
    Williams JB, McDonough MA, Hilliard MW, Williams AL, Cuniowski PC, Gonzalez MG. Intermethod reliability of real-time versus delayed videotaped evaluation of a high-fidelity medical simulation septic shock scenario. Acad Emerg Med. 2009; 16:88793.
  • 59
    Gogalniceanu P, Sheena Y, Kashef E, Purkayastha S, Darzi A, Paraskeva P. Is basic emergency ultrasound training feasible as part of standard undergraduate medical education? J Surg Educ. 2010; 67:1526.
  • 60
    Hayes OW, Reisdorff EJ, Walker GL, Carlson DJ, Reinoehl B. Using standardized oral examinations to evaluate general competencies. Acad Emerg Med. 2002; 9:13347.
  • 61
    Quest TE, Otsuki JA, Banja J, Ratcliff JJ, Heron SL, Kaslow NJ. The use of standardized patients within a procedural competency model to teach death disclosure. Acad Emerg Med. 2002; 9:132633.
  • 62
    Heron SL, Hassani DM, Houry D, Quest T, Ander DS. Standardized patients to teach medical students about intimate partner violence. West J Emerg Med. 2010; 11:5005.
  • 63
    Wong I, Jayatilleke T, Kendall R, Atkinson P. Feasibility of a focused ultrasound training programme for medical undergraduate students. Clin Teach. 2011; 8:37.
  • 64
    Ruesseler M, Weinlich M, Byhahn C, et al. Increased authenticity in practical assessment using emergency case OSCE stations. Adv Health Sci Educ Theory Pract. 2010; 15:8195.
  • 65
    Bianchi L, Gallagher EJ, Korte R, Ham HP. Interexaminer agreement on the American Board of Emergency Medicine oral certification examination. Ann Emerg Med. 2003; 41:85964.
  • 66
    McNiel DE, Hung EK, Cramer RJ, Hall SE, Binder RL. An approach to evaluating competence in assessing and managing violence risk. Psychiatr Serv. 2011; 62:902.
  • 67
    Davis D, Lee G. The use of standardized patients in the plastic surgery residency curriculum: teaching core competencies with objective structured clinical examinations. Plast Reconstr Surg. 2011; 128:2918.
  • 68
    Franzese CB. Pilot study of an Objective Structured Clinical Examination (“the Six Pack”) for evaluating clinical competencies. Otolaryngol Head Neck Surg. 2008; 138:1438.
  • 69
    Hawkins R, MacKrell Gaglione M, LaDuca T, et al. Assessment of patient management skills and clinical skills of practising doctors using computer-based case simulations and standardised patients. Med Educ. 2004; 38:95868.
  • 70
    Ozuah PO, Reznik M. Using unannounced standardized patients to assess residents’ competency in asthma severity classification. Ambul Pediatr. 2008; 8:13942.
  • 71
    Short MW, Jorgensen JE, Edwards JA, Blankenship RB, Roth BJ. Assessing intern core competencies with an objective structured clinical examination. J Grad Med Educ. 2009; 1:306.
  • 72
    Jefferies A, Simmons B, Tabak D, et al. Using an objective structured clinical examination (OSCE) to assess multiple physician competencies in postgraduate training. Med Teach. 2007; 29:18391.
  • 73
    Wallenstein J, Heron S, Santen S, Shayne P, Ander D. A core competency-based objective structured clinical examination (OSCE) can predict future resident performance. Acad Emerg Med. 2010; 17(Suppl 2):S6771.
  • 74
    Maker VK, Bonne S. Novel hybrid objective structured assessment of technical skills/objective structured clinical examinations in comprehensive perioperative breast care: a three-year analysis of outcomes. J Surg Educ. 2009; 66:34451.
  • 75
    Brown DJ, Thompson RE, Bhatti NI. Assessment of operative competency in otolaryngology residency: survey of U.S. program directors. Laryngoscope. 2008; 118:17614.
  • 76
    Wang E. Global assessment tool for emergency medicine-specific core competency evaluation [letter]. Acad Emerg Med. 2004; 11:1370.
  • 77
    Herbers JE Jr, Noel GL, Cooper GS, Harvey J, Pangaro LN, Weaver MJ. How accurate are faculty evaluations of clinical competence? J Gen Intern Med. 1989; 4:2028.
  • 78
    Baker K. Determining resident clinical performance: getting beyond the noise. Anesthesiology. 2011; 115:86278.
  • 79
    Durning SJ, Pangaro LN, Lawrence LL, Waechter D, McManigle J, Jackson JL. The feasibility, reliability, and validity of a program director's (supervisor's) evaluation form for medical school graduates. Acad Med. 2005; 80:9648.
  • 80
    Ander DS, Wallenstein J, Abramson JL, Click L, Shayne P. Reporter-interpreter-manager-educator (RIME) descriptive ratings as an evaluation tool in an emergency medicine clerkship. J Emerg Med. 2011; 43:7207. http://dx.doi.org/10.1016/j.jemermed.2011.05.069.
  • 81
    Rodgers KG, Manifold C. 360-degree feedback: possibilities for assessment of the ACGME core competencies for emergency medicine residents. Acad Emerg Med. 2002; 9:13004.
  • 82
    Tintinalli JE. Evaluation of emergency medicine residents by nurses. Acad Med. 1989; 64:4950.
  • 83
    Wenrich MD, Carline JD, Giles LM, Ramsey PG. Ratings of the performances of practicing internists by hospital-based registered nurses. Acad Med. 1993; 68:6807.
  • 84
    Matthews DA, Sledge WH, Lieberman PB. Evaluation of intern performance by medical inpatients. Am J Med. 1987; 83:93844.
  • 85
    Matthews DA, Feinstein AR. A new instrument for patients’ ratings of physician performance in the hospital setting. J Gen Intern Med. 1989; 4:1422.
  • 86
    Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA. 1993; 269:165560.
  • 87
    Overeem K, Wollersheim H, Driessen E, et al. Doctors’ perceptions of why 360-degree feedback does (not) work: a qualitative study. Med Educ. 2009; 43:87482.
  • 88
    Boutin-Foster C, Charlson ME. Problematic resident-patient relationships: the patient’s perspective. J Gen Intern Med. 2001; 16:7504.
  • 89
    Johnson G, Booth J, Crossley J, Wade W. Assessing trainees in the workplace: results of a pilot study. Clin Med. 2011; 11:4853.
  • 90
    Webb TP, Merkley TR. An evaluation of the success of a surgical resident learning portfolio. J Surg Educ. 2012; 69:17.
  • 91
    O'Sullivan P, Greene C. Portfolios: possibilities for addressing emergency medicine resident competencies. Acad Emerg Med. 2002; 9:13059.
  • 92
    O'Sullivan PS, Reckase MD, McClain T, Savidge MA, Clardy JA. Demonstration of portfolios to assess competency of residents. Adv Health Sci Educ Theory Pract. 2004; 9:30923.
  • 93
    Buckley S, Coleman J, Davison I, et al. The educational effects of portfolios on undergraduate student learning: a Best Evidence Medical Education (BEME) systematic review. BEME Guide No. 11. Med Teach. 2009; 31:28298.
  • 94
    Learman LA, Autry AM, O'Sullivan P. Reliability and validity of reflection exercises for obstetrics and gynecology residents. Am J Obstet Gynecol. 2008; 198:461.
  • 95
    Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006; 296:1094102.
  • 96
    Lipsett PA, Harris I, Downing S. Resident self-other assessor agreement: influence of assessor, competency, and performance level. Arch Surg. 2011; 146:9016.
  • 97
    Weller JM, Robinson BJ, Jolly B, et al. Psychometric characteristics of simulation-based assessment in anaesthesia and accuracy of self-assessed scores. Anaesthesia. 2005; 60:24550.
  • 98
    Brennan DF, Silvestri S, Sun JY, Papa L. Progression of emergency medicine resident productivity. Acad Emerg Med. 2007; 14:7904.
  • 99
    Ledrick D, Fisher S, Thompson J, Sniadanko M. An assessment of emergency medicine residents’ ability to perform in a multitasking environment. Acad Med. 2009; 84:128994.
  • 100
    Promes SB, Chudgar SM, Grochowski CO, et al. Gaps in procedural experience and competency in medical school graduates. Acad Emerg Med. 2009; 16(Suppl 2):S5862.
  • 101
    Overly FL, Sudikoff SN, Shapiro MJ. High-fidelity medical simulation as an assessment tool for pediatric residents’ airway management skills. Pediatr Emerg Care. 2007; 23:115.
  • 102
    Al-Eissa M, Chu S, Lynch T, et al. Self-reported experience and competence in core procedures among Canadian pediatric emergency medicine fellowship trainees. CJEM. 2008; 10:5338.
  • 103
    Jensen ML, Hesselfeldt R, Rasmussen MB, et al. Newly graduated doctors’ competence in managing cardiopulmonary arrests assessed using a standardized Advanced Life Support (ALS) assessment. Resuscitation. 2008; 77:638.
  • 104
    Ball CG, Lord J, Laupland KB, et al. Chest tube complications: how well are we training our residents? Can J Surg. 2007; 50:4508.
  • 105
    Cherry MG, Brown JM, Neal T, Ben Shaw N. What features of educational interventions lead to competence in aseptic insertion and maintenance of CV catheters in acute care? BEME Guide No. 15. Med Teach. 2010; 32:198218.
  • 106
    Friedman Z, Katznelson R, Devito I, Siddiqui M, Chan V. Objective assessment of manual skills and proficiency in performing epidural anesthesia–video-assisted validation. Reg Anesth Pain Med. 2006; 31:30410.
  • 107
    Laeeq K, Bhatti NI, Carey JP, et al. Pilot testing of an assessment tool for competency in mastoidectomy. Laryngoscope. 2009; 119:240210.
  • 108
    Laeeq K, Infusino S, Lin SY, Reh DD, Ishii M, Kim J, et al. Video-based assessment of operative competency in endoscopic sinus surgery. Am J Rhinol Allergy. 2010; 24:2347.
  • 109
    Chapman DM, Rhee KJ, Marx JA, et al. Open thoracotomy procedural competency: validity study of teaching and assessment modalities. Ann Emerg Med. 1996; 28:6417.
  • 110
    Black SA, Nestel DF, Kneebone RL, Wolfe JH. Assessment of surgical competence at carotid endarterectomy under local anaesthesia in a simulated operating theatre. Br J Surg. 2010; 97:5116.
  • 111
    Ander DS, Hanson A, Pitts S. Assessing resident skills in the use of rescue airway devices. Ann Emerg Med. 2004; 44:3149.
  • 112
    Binstadt E, Donner S, Nelson J, Flottemesch T, Hegarty C. Simulator training improves fiber-optic intubation proficiency among emergency medicine residents. Acad Emerg Med. 2008; 15:12114.
  • 113
    Ishman SL, Brown DJ, Boss EF, et al. Development and pilot testing of an operative competency assessment tool for pediatric direct laryngoscopy and rigid bronchoscopy. Laryngoscope. 2010; 120:2294300.
  • 114
    Langhan TS, Rigby IJ, Walker IW, Howes D, Donnon T, Lord JA. Simulation-based training in critical resuscitation procedures improves residents’ competence. CJEM. 2009; 11:5359.
  • 115
    Zheng B, Hur HC, Johnson S, Swanstrom LL. Validity of using Fundamentals of Laparoscopic Surgery (FLS) program to assess laparoscopic competence for gynecologists. Surg Endosc. 2010; 24:15260.
  • 116
    Lim TO, Soraya A, Ding LM, Morad Z. Assessing doctors’ competence: application of CUSUM technique in monitoring doctors’ performance. Int J Qual Health Care. 2002; 14:2518.
  • 117
    McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. 2011; 86:70611.
  • 118
    Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med. 2009; 169:14203.
  • 119
    Barsuk JH, McGaghie WC, Cohen ER, O'Leary KJ, Wayne DB. Simulation-based mastery learning reduces complications during central venous catheter insertion in a medical intensive care unit. Crit Care Med. 2009; 37:2697701.
  • 120
    Abramoff MD, Folk JC, Lee AG, Beaver HA, Boldt HC. Teaching and assessing competency in retinal lasers in ophthalmology residency. Ophthalmic Surg Lasers Imag. 2008; 39:27080.
  • 121
    Evans LV, Morse JL, Hamann CJ, Osborne M, Lin Z, D'Onofrio G. The development of an independent rater system to assess residents’ competence in invasive procedures. Acad Med. 2009; 84:113543.
  • 122
    Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: from Flexner to competencies. Acad Med. 2002; 77:3617.