Keywords:

  • outcome measures;
  • assessment;
  • competency;
  • resident training;
  • ACGME Outcomes Project

Abstract


This article is designed to serve as a guide for emergency medicine (EM) educators seeking to comply with the measurement and reporting requirements for Phase 3 of the Accreditation Council for Graduate Medical Education (ACGME) Outcome Project. A consensus workshop held during the 2006 Council of Emergency Medicine Residency Directors (CORD) “Best Practices” conference identified specific measures for five of the six EM competencies—interpersonal communication skills, patient care, practice-based learning, professionalism, and systems-based practice (medical knowledge was excluded). The suggested measures described herein should ease data collection and apply to multiple core competencies as program directors incorporate core competency outcome measurement into their EM residency training programs.

The objective of this article is to report the results of a consensus workgroup held during the 2006 Council of Emergency Medicine Residency Directors (CORD) “Best Practices” conference. The specific goals of the consensus workgroup were to gather stakeholders to brainstorm potential emergency medicine (EM)-specific resident outcome measures that meet the criteria defined by Phase 3 of the Accreditation Council for Graduate Medical Education (ACGME) Outcome Project. The process and the measures developed were not intended to be all-inclusive, but to provide a starting place for program directors to begin addressing the EM-specific, competency-derived outcome measures that integrate with our unique learning environment and clinical care requirements. The measures are also designed to assist residency program directors in meeting all components of Phase 3 of the ACGME Outcome Project.

Background


The ACGME Outcome Project,1 initiated in 1999, defined a new conceptual framework for graduate medical education in the United States. The Outcome Project utilizes a set of six core competencies: interpersonal communication skills, medical knowledge, patient care, practice-based learning, professionalism, and systems-based practice. The goal of the core competencies is to focus resident education on high-quality patient care as defined by the Institute of Medicine (IOM) in “Crossing the Quality Chasm.”2 These STEEEP goals (Safe, Timely, Effective, Efficient, Equitable, and Patient Centered) underpin the specialty-specific definition of the unique knowledge, skills, and attitudes resulting from specific education in a particular discipline.

Implementation of the Outcome Project has been divided by the ACGME into three discrete phases: Phase 1 (July 2001–June 2002) focused on defining specialty-specific competencies. The EM competencies were developed during a 2002 EM educator consensus conference and were disseminated that year in a series of six articles.3–8 Residencies were expected to incorporate the teaching and learning of these EM-specific competencies into their didactic and clinical curriculum.

Phase 2 (July 2002–June 2006) of the project sharpened the focus of the competency definition by linking competencies to assessment tools. The goal was to move beyond simply counting the number of cases the resident was involved in and the procedures performed and toward a discrete assessment of the components of competency—namely, the knowledge, skills, and attitudes needed to competently practice medicine. A number of assessment methods were developed, including record review, checklists, chart-stimulated recall (CSR) oral examination, objective structured clinical examination (OSCE), simulations and models, portfolios, written examination, and 360-degree evaluations (see the appendix, available as an online Data Supplement, for an overview).

Using the guidelines and methods provided in the ACGME toolbox of assessment measures,9 CORD’s Standardized Evaluation Group then developed and deployed specific measures of resident performance. An example of an EM-specific tool developed during this time is the Structured Direct Observation Tool (SDOT). Many of these are available for CORD members on the Sharepoint Web site (http://cord.sharepointsite.com).

The goal of the current Phase 3 (July 2006–June 2011) is the full integration of competencies and their assessment with learning and clinical care. The focus is on the development of observable outcome measures that allow assessment of individual and collective resident performance, the use of these metrics as the basis for improving both individual physicians and residency programs, and the provision of documentation for accreditation review.

How Can Measuring Outcomes Shape the Learning Environment?

The learning environment is more productive when students and faculty agree upon aligned and explicit goals, instruction, and desired outcomes. Criteria-driven outcomes reduce rater subjectivity and increase the likelihood that measurement will be consistent. Learner accountability leads to development of self-assessment and promotes an environment in which feedback is expected and valued. Objective measures provide a consistent set of data by which both residents and faculty can measure progress toward a stated goal. Real-life clinical experiences provide the resident with the necessary contextual relevance of the measure, which, in turn, promotes interest in the material and retention of the teaching. Frequent reflection allows residents to become better at self-assessment. Independent study supplements the general curriculum. By accounting for the fact that residents enter residency with differing backgrounds, skill levels, knowledge bases, aptitudes, abilities, and learning styles, independent study allows for individual focus on areas of perceived need. Characteristics of competency-based teaching and learning are summarized in Table 1.10

Table 1. Five Important Characteristics of Competency-based Teaching and Learning10
Learning is explicit and clearly aligned with expected competencies.
Teaching is criteria-driven, focusing on accountability in reaching benchmarks and, ultimately, competence.
Content is grounded in “real-life” experiences.
Reflection is focused on fostering the learner’s ability to self-assess.
Curriculum is individualized, providing more opportunities for independent study.

Workgroup Methods for Identifying Outcome Measures

All workgroup leaders were selected prior to the conference, assigned a topic area, and briefed on the goals of the project. Each workgroup leader agreed to utilize the framework developed by the 2002 CORD Consensus Conference, in which the specifics of EM competency were defined.3–8 Five of the six competencies were selected for group work. The medical knowledge competency was not addressed because it is defined as a knowledge-based competency, and two excellent outcome measures, the board examination and the in-service training examination, already exist and are used extensively by program directors. We focused on the competencies of communication, systems-based practice, patient care, professionalism, and practice-based learning and improvement. All conference participants were invited to participate in the workgroup sessions. All participants attended a brief didactic session given by one of the authors (CH), an educator with expertise in assessment and outcome measure development. The session provided background on the ACGME Outcome Project, the specific goals of Phase 3, and the specific tasks to be completed during the small-group work. At the conclusion of the didactic session, participants were divided into five working groups by counting off 1 through 5. Each numbered group then reconvened in a small group room, joined by its specific leader. Each group focused on one competency with the task of identifying characteristics of the competency that were both important and measurable as outcomes.

All workgroups were provided with copies of the publications defining their specific competency3–8 to utilize as an on-site resource for their work. Workgroups were encouraged to first identify existing measures that could be adapted to measurement of EM residency training outcomes. In the absence of such measures, they were asked to brainstorm measures of EM competency-based learning that were felt to be reliable and generalizable and that could be easily implemented. Specific measures could be defined in such a way as to focus on individual resident performance, on the aggregate performance of the entire group (e.g., at the residency program level), or on characteristics of the training environment that impact clinical care (including adequate resources, overcrowding, attending decision-making, etc.). Workgroups arrived at their products by consensus, and disagreements were resolved by individual leaders as part of the group process. The group noted that these new measures would substantially expand the roles of the program director and program coordinator, roles that have already grown dramatically in the past few years. Thus, a main consideration in the development of these measures was that the process itself not be too burdensome.

To improve the success of implementation, the group used the following criteria to assess the viability of a potential measure: 1) the measure should provide meaningful feedback for both residents and programs, 2) results of the measure must be reliable, 3) data for the numerator and denominator must be easily attainable, and 4) measurements must be limited in scope.

Results of the Consensus Workgroup

All workgroups successfully developed outcome measures that met the criteria defined by the ACGME and fell within the scope of EM practice. Most participants struggled with the differences between the functional uses of assessment and outcome measures. The ACGME Outcome Project defines assessment as the “process of collecting, synthesizing, and interpreting information to aid decision-making.”1 The results of assessments allow educators to make informed decisions about learner knowledge, beliefs, and attitudes. Outcomes are defined as the immediate, short-term, delayed, and long-term results, demonstrating that learning goals and objectives have been accomplished.1 The group concluded that assessment tools are too often used as outcome measures and that the two are frequently confused. Despite this confusion, because measuring outcomes for a particular characteristic or skill is so important, it is often necessary to blur the lines and use what little is available. Our recommendations for outcome measurements may occasionally reflect this reality.

The measures presented here are intended only as a guide. They are not intended to be prescriptive, and they do not represent the only options from which an individual residency director can choose when designing a program’s approach to competency measurement. Individual programs may wish to adopt some, all, or none of these measures when developing their own institution-specific outcome measures program. At a minimum, we hope these suggestions will assist residency program directors as they begin to form outcome measurement “toolboxes” that can be modified and refined with advances in our clinical specialty. How these measures must be utilized for a particular program to be in compliance (e.g., the number of measures assessed for each category, the total number of measures applied, or the cycle time for repeat measurement), and how the measures should change to reflect increasing learner competency with a condition, remain unanswered questions for which the group had no definitive solutions.

Discussion


Communication Competency

The unique communication skills required of competent emergency physicians (EPs) have been previously defined.3 Building on this previous work, we focused on the outcomes anticipated from practitioners who excel in this specific competency. We also identified high-leverage areas for data collection, possible methods for enhancing the face validity of our measures, and practical tips for implementing data collection. A summary of the measures developed is presented in Table 2.11–14

Table 2. Emergency Medicine Relevant Communication Competencies
(Measure | Data Collection Method or Assessment Technique, grouped by condition)

Therapeutic relationship:
- Establishment of a therapeutic relationship | Validated patient interpersonal skill inventories,11–13 SP, PR, 360

Effective communication of care processes:
- Press Ganey14 scores; physician satisfaction scores (“My physician was excellent at informing me about the outcomes of my care”) | Press Ganey survey, patient satisfaction surveys

AMA:
- Number of AMA | CR
- Physician invites AMA patients to return for recommended treatment | CR, OSCE, SP, S

Death notification:
- Family satisfaction with resident interpersonal communication skills (may use any validated interpersonal skills inventory) | SP, S

Written communication skills—chart documentation:
- Physician documentation supports correct level of billing | CR

Leadership of critical care resuscitation team:
- Physician leadership inventory | S, DO, CRT

AMA = patient leaving against medical advice; CR = chart review; CRT = crew resource training; DO = direct observation; OSCE = objective structured clinical examination; PR = peer evaluation; RR = record review; S = simulation and models; SP = standardized patient assessment; 360 = 360-degree evaluations.

The most critical communication skill required of all EPs is the ability to rapidly develop a therapeutic relationship with their patients. Outcome measures unique to this skill are numerous, primarily focusing on the patient’s perception of the individual physician’s communication skills. The group endorsed the concept of using previously validated patient interpersonal communication inventories to measure the success of individual residents at the outset of the therapeutic relationship. These measures include, but are not limited to, the Calgary Communication Inventory,11 the interpersonal skills and communication instrument of Schnabl et al.,12 and the longitudinal communication skills initiative of Rucker and Morrison.13

Data collection methods for these instruments will vary depending upon individual residency program and departmental logistics. Suggested methods, beyond those listed in Table 2, include faculty interview of patients following an assessment using the SDOT, resident-directed patient sampling, and exit-interview sampling of a random selection of patients at the conclusion of their emergency department (ED) stay. Regardless of the method chosen, care must be taken to mitigate potential sample bias, which can be introduced in a variety of ways, particularly by resident-directed sampling, patient illiteracy, or patient lack of English language proficiency.
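As one way to operationalize the exit-interview approach while avoiding resident-directed selection, the sketch below draws a simple random sample of discharged patients for survey administration. This is a minimal Python illustration; the patient list, sample size, and identifiers are hypothetical and are not part of the workgroup’s recommendations.

```python
# Illustrative sketch: random selection of discharged ED patients for
# exit-interview administration of a validated communication inventory,
# so that surveyed patients are not chosen by the resident being evaluated.
import random

def select_exit_interviews(discharged_patients, n_interviews, seed=None):
    """Return a simple random sample of patients to approach for exit interviews."""
    rng = random.Random(seed)
    n = min(n_interviews, len(discharged_patients))
    return rng.sample(discharged_patients, n)

# Hypothetical daily discharge list (medical record numbers).
todays_discharges = ["MRN1001", "MRN1002", "MRN1003", "MRN1004", "MRN1005"]
print(select_exit_interviews(todays_discharges, n_interviews=2, seed=42))
```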

Other communication skills important to measure surround the areas of high-risk communications: patients leaving against medical advice (AMA), death notification, and refusal of resuscitation (do not attempt to resuscitate/do not intubate) orders. Although the group easily achieved consensus on the skills that constituted excellence in this competency, difficulty arose in determining practical measurement methods. For example, for patients leaving AMA, some would argue that the best outcome and most desirable communication skill is the ability to effectively convince the person to remain in the ED and continue treatment. Others would state that this outcome is paternalistic, and the only valuable measure is whether the patient received an unbiased communication of the risks and benefits of his or her medical decision. In this scenario, sampling difficulty arises for both the numerator and the denominator. For the numerator, if one selects the percentage of patients who originally planned to leave AMA, but declined following communication with their provider, one would miss all those patients who ultimately decided to depart but were adequately informed of the risks and benefits of their decision. The construction of the denominator for the measure is equally difficult, as most of the discussions that providers have about leaving AMA with patients who then ultimately remain in the department are not captured by standard charting methods. In other words, the number of patients who depart AMA (numerator) is known, but the total number of patients who had discussed this option with their providers (denominator) is not.

The group discussed another potential measure of best practice in the case of the patient who desires to leave AMA, namely, whether the resident encouraged the patient to return if the condition were to worsen or if the patient were to have a change of mind about seeking treatment. For this measure, written documentation of an invitation to return should be noted on the chart and discoverable by review. It was felt that this measure could easily be collected during standard review of all AMA patient charts.
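Because every AMA chart is already reviewed, the invitation-to-return measure reduces to a simple documentation rate. The sketch below (Python) shows one way such a rate might be tallied per resident; the chart fields and resident labels are illustrative assumptions, not a prescribed data format.

```python
# Hypothetical sketch: per-resident rate of AMA charts documenting an
# invitation to return for recommended treatment, from routine chart review.
from collections import defaultdict

def invitation_documentation_rate(ama_charts):
    """Return the fraction of reviewed AMA charts, per resident, that
    document an invitation to return."""
    counts = defaultdict(lambda: {"documented": 0, "total": 0})
    for chart in ama_charts:
        tally = counts[chart["resident"]]
        tally["total"] += 1
        if chart["invite_to_return_documented"]:
            tally["documented"] += 1
    return {res: t["documented"] / t["total"] for res, t in counts.items()}

# Illustrative chart-review results.
charts = [
    {"resident": "PGY2-A", "invite_to_return_documented": True},
    {"resident": "PGY2-A", "invite_to_return_documented": False},
    {"resident": "PGY3-B", "invite_to_return_documented": True},
]
print(invitation_documentation_rate(charts))  # {'PGY2-A': 0.5, 'PGY3-B': 1.0}
```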

Death notification is another area of significant risk for all EPs. Assessment tools exist to measure resident competency in this difficult communication encounter.15 Measurement of family member satisfaction with the physician communicating this information can be obtained via a telephone survey call-back after an appropriate time interval, or a mail survey.

Another key communication skill for EPs is the ability to communicate effectively in writing, particularly through chart documentation. Components of a well-documented chart include a clear, concise description of medical decision-making, as well as an adequate number of history, review of systems, and physical examination items to support correct billing levels. Data for these measures are supported by chart review.

Physician leadership and conflict resolution skills should also be measured. No known validated instruments exist to measure specific leadership skills of EPs. Measures may exist in aviation, anesthesia, or crew resource training for components of these skills, but they have yet to be adapted to EM.

Patient Care

Residents need to be evaluated not only for their ability to pick the right intervention for a particular patient complaint, but also for their ability to carry out the appropriate therapeutic intervention. Proposed outcome measures surrounding patient care are highlighted in Table 3.

Table 3. Emergency Medicine Relevant Patient Care Competencies
(Measure | Data Collection Method or Assessment Technique, grouped by category)

Knowledge of proper procedure as defined by preexisting quality assurance programs (e.g., JCAHO, CMS):
- Compliance with medication administration, e.g., aspirin and beta-blockers in patients with ACS | RR, S, DO
- Electrocardiogram ordered and interpreted within 30 min of patient arrival | RR

Knowledge of critical components of timely, appropriate diagnosis and management as specified, e.g., in The Clinical Practice of Emergency Medicine or national data on chief complaints:
- Documentation of pulse oximeter reading for patients presenting with shortness of breath | RR
- Administration of oxygen for patients with abnormal pulse oximeter readings | RR, S
- Chest radiograph ordered and properly interpreted in patients with shortness of breath or symptoms consistent with pneumonia | RR, S, CSR
- Urinalysis ordered for patients with lower abdominal or flank pain | RR, S, CSR
- Pregnancy test ordered for all women of childbearing age with abdominal pain | RR, S, CSR
- Vital signs recorded and addressed/treated if abnormal | RR, S
- Serial abdominal examinations performed and documented during a prolonged ED stay for patients with an abdominal pain chief complaint | RR
- Pain documented and treated when present | RR
- Presence or absence of peritoneal signs documented in patients with abdominal pain | RR
- Imaging considered for elderly patients with abdominal pain; if performed, results documented | RR, S, CSR

Universally accepted procedural competencies:
- Endotracheal intubation (documentation that endotracheal tube placement was confirmed by at least two measures; number of attempts and success rate) | RR, S, CSR

ACS = acute coronary syndromes; CMS = Centers for Medicare and Medicaid Services; CR = chart review; CSR = chart-stimulated recall; DO = direct observation; JCAHO = Joint Commission on Accreditation of Healthcare Organizations; RR = record review; S = simulation and models.

Considering the limited time, personnel, and financial support of residency programs, outcome data should parallel or dovetail with the information already required by ongoing reporting systems for regulatory agencies such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and the Centers for Medicare and Medicaid Services (CMS). Individual resident data as well as collective residency data documenting compliance with accepted therapeutic standards can be expressed in percentage metrics. For example, if a patient presents with the chief complaint of chest pain, the type of metrics that could be documented are compliance with administration of aspirin and beta-blockers, as well as the rapid ordering and interpretation of electrocardiograms.
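To make the percentage-metric idea concrete, the following sketch tallies per-resident and program-wide compliance from boolean chart-review findings for a chest pain complaint. It is only an illustration in Python; the record fields, metric names, and example data are assumptions rather than a required reporting format.

```python
# Illustrative sketch: expressing chart-review findings as percentage
# compliance metrics, per resident and for the residency program in aggregate.
def compliance_summary(records, metrics):
    """records: dicts with a 'resident' key and boolean metric keys.
    Returns (per-resident percentages, program-wide percentages)."""
    def pct(counts):
        met, total = counts
        return round(100 * met / total, 1) if total else None

    per_resident = {}
    aggregate = {m: [0, 0] for m in metrics}
    for rec in records:
        tallies = per_resident.setdefault(rec["resident"], {m: [0, 0] for m in metrics})
        for m in metrics:
            for counts in (tallies[m], aggregate[m]):
                counts[1] += 1
                if rec.get(m):
                    counts[0] += 1
    by_resident = {r: {m: pct(c) for m, c in t.items()} for r, t in per_resident.items()}
    program_wide = {m: pct(c) for m, c in aggregate.items()}
    return by_resident, program_wide

chest_pain_metrics = ["aspirin_given", "beta_blocker_given", "ecg_within_30_min"]
records = [  # hypothetical chart-review results
    {"resident": "PGY1-C", "aspirin_given": True, "beta_blocker_given": True, "ecg_within_30_min": False},
    {"resident": "PGY1-C", "aspirin_given": True, "beta_blocker_given": False, "ecg_within_30_min": True},
]
by_resident, program_wide = compliance_summary(records, chest_pain_metrics)
print(by_resident)
print(program_wide)
```

The same tally, repeated over time, gives both the individual feedback and the program-level aggregate described above.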

Residency programs should identify common EM chief complaints using sources such as The Clinical Practice of Emergency Medicine,16 published clinical policies,17 or national data on chief complaints, most of which, unfortunately, are limited. Steps critical to timely and appropriate diagnosis and management could be identified as metrics for evaluating individual and program-specific outcomes. Metrics must be objective and universally accepted (i.e., not site-specific). Outcome measures should not be life-or-death dichotomies, but rather should assess whether the resident’s patient care was appropriate and within acceptable norms for EM.

An example of a common EM chief complaint would be shortness of breath. Measures of appropriate care would include whether the resident obtained, documented, and properly interpreted a pulse oximeter reading and chest radiograph and whether he or she acted upon abnormal results. Another example of a chief complaint, abdominal pain, and its associated appropriate care metrics are elaborated on in Table 3.

Compliance with these metrics could also be assessed using simulated patient encounters, computer-based simulations, or an oral boards-type setting, many of which exist in residency programs. Competency in patient care could be assessed retrospectively by residents performing chart audits using predefined criteria for specific chief complaints. Residents could add this to their portfolios along with self-reflection comments, thus enhancing individual academic growth. The program director could gather the data from residents to assess how well the program as a whole teaches patient care related to various chief complaints and make directed educational interventions to correct deficiencies.

In addition to assessing residents on their ability to choose the correct procedure for a particular chief complaint, program directors should also assess residents on their ability to carry out procedures competently. It is not enough to simply attain a count of completed procedures and to document that number in each resident’s semiannual evaluation. Instead, metrics for key procedures should be identified and residents should be assessed on compliance and complication rates (an example of metrics related to endotracheal intubation that programs might consider can be found in Table 3). Functionally, assessing resident competency with key procedures can be accomplished through a variety of means. Some programs dedicate a day to procedural competency, during which residents are assessed in their ability to perform procedures in a simulated setting. Other programs use checklists to document competency. It is important, regardless of the method used, that key metrics are identified in advance and that they are communicated to the learner and to the faculty assessing the procedural skills.

Practice-Based Learning and Improvement

Practice-based learning refers to the ability to appropriately modify practice based on new literature and patient outcomes and to teach others current medical knowledge and standards. These skills, along with the workgroup’s proposed outcome measures, are listed in Table 4.

Table 4. Emergency Medicine Relevant Practice-based Learning and Improvement Competency
(Measure | Data Collection Method or Assessment Technique, grouped by physician task)

Analyze and assess practice experience, perform practice-based improvement:
- Impact of PI program | Depends on project goal
- Learner ability to self-reflect, identify deficits, and improve | RR, CSR, P, SP

Locate, appraise, and utilize scientific evidence related to patient health problems:
- Ability to find a specific piece of information | Appraisal of search strategy
- Adherence to evidence-based recommendations from the Cochrane Collaboration and the Agency for Healthcare Research and Quality | RR, CSR

Competency in applying knowledge of study design and statistical methods to appraise medical literature:
- Adherence to the appraisal process, such as described in the JAMA Guides to Medical Literature series | Topic appraisal using EBM techniques

Utilize information technology to enhance learning and improve patient care:
- Number of quantified prescription or order-entry errors | RR

Skilled in facilitating the learning of emergency medicine principles and practice by others:
- Impact of teaching on other practitioners | Teaching evaluations

EBM = evidence-based medicine; CSR = chart-stimulated recall; P = portfolio; PI = performance improvement; RR = record review; SP = standardized patient assessment.

Competence in practice-based learning signifies that one is able to analyze and assess practice experience, reflect upon it, and identify and implement means by which to improve that practice.6 Accurate self-assessment is a critical component of this competency and can be measured by determining a learner’s ability to review the care he or she delivered and to identify future improvements for components of care. For instance, through the performance of follow-up to identify missed diagnoses, record review to assess adherence to national and local standards, and self-reflection of individual patient encounters via portfolios, a learner’s ability to identify and correct suboptimal practice patterns can be assessed. Outcome measures include, but are not limited to, improvements in the metrics outlined in other sections of this article.

Current Residency Review Committee for EM requirements stipulate that “Each resident must actively participate in emergency department continuous performance quality improvement (PI) programs.”18 A natural extension of this requirement would be the design of outcome measures that evaluate the impact of such a program. Learners at all levels, from medical students to residents, have been found to have an influence on PI initiatives.19 By measuring this influence, one can accurately determine a learner’s ability to identify a problem and implement a plan for improvements. One case series describes a cohort of internal medicine residents that identified an overuse of intravenous catheters and then developed an intervention that decreased use from 43% to 27%.20 Because PI projects often impact outcomes involving multiple competencies, measures may generate results that can be applied across many domains of resident competency acquisition.

Residents must also be able to locate, appraise, and utilize scientific evidence related to patient health problems and to the larger population from which they are drawn. The ability to find pertinent information, to appropriately assess its validity, and to thoughtfully implement it into practice is critical to a practitioner’s growth. Outcomes for this skill are tied to the assessment methods used. For example, in assessing one’s ability to use tools to find evidence, one could determine the practitioner’s ability to find a specific piece of information; the search strategy itself could also be appraised. Objective assessment of appraisal and implementation of this evidence is problematic due to the inherent controversies in determining the “gold standard.” However, by using objective evidence-based recommendations, such as those collected by the Cochrane Collaboration and the Agency for Healthcare Research and Quality (AHRQ), one can determine the frequency with which a practitioner deviates from the standard of care for specific diagnoses.

Residents must show competency in applying knowledge of study design and statistical methods to critically appraise medical literature. Numerous guides exist for systematically using evidence-based medicine techniques. The inherent subjectivity of the outcomes could be minimized by focusing on the appraisal process rather than on the conclusion. One method of structured appraisal, described in depth, is published in the Users’ Guides to the Medical Literature series in the Journal of the American Medical Association (JAMA).21,22 Interpretation of rudimentary statistical tests is included in the board certification process.

Another skill is the ability to utilize information technology to enhance learning and improve patient care. Presumably, the use of information technology should decrease errors. Practitioners must be able to find and use information pertinent to positively impacting patient care. Assessment tools include 360-degree evaluations and practical examinations, which measure the ability to rapidly access pertinent information to guide care.6 Other surrogates for gauging the accuracy of information retrieval could include the examination of errors in prescription writing or order-entry errors, both of which can be quantified.

Finally, practice-based learning and improvement means that residents are skilled in facilitating the learning of EM principles and practice by students, colleagues, and other health care professionals. Standard evaluation forms can be used to assess the ability of a practitioner to teach others. To better assess outcomes, however, one would need to determine the impact of the teaching on the other practitioners of the health care team. This can be done in simulated settings using either global or checklist evaluations. Due to the specific skills required, several different outcomes measures are likely needed to determine the efficiency and accuracy with which one can find and appraise information, apply it to one’s practice to maintain the highest standard of care, and disseminate the knowledge to other health care providers.

Professionalism

The workgroup segmented model behaviors of professionalism into those considered most important to patients and their families and those deemed most important to employers and colleagues of EPs. Table 5 highlights the consensus group’s proposed measures. The skills falling under the category of “sensitivity and respect for patients” are: 1) treating patients and family with respect; 2) demonstrating sensitivity to the patient’s pain, emotional state, and gender and ethnicity issues; 3) shaking hands with the patient and introducing oneself to the patient and family; 4) showing unconditional positive regard for the patient and family; and 5) being open and responsive to input or feedback from patients and their families. The group agreed that the best assessment methods to evaluate the skills surrounding sensitivity and respect for patients would be the 360-degree evaluation, the Press Ganey Patient Satisfaction survey,14 the SDOT, and any of a number of means to record patient complaints.

Table 5. Emergency Medicine Relevant Professionalism Competency
(Measure | Data Collection Method or Assessment Technique, grouped by physician task)

Exhibits professional behaviors toward patients and families:
- Demonstrates sensitivity to patient’s pain, emotional state, and gender/ethnicity issues | 360, patient satisfaction surveys, PR
- Shakes hands with patient and introduces himself to patient and family | 360, DO
- Shows unconditional positive regard for patients and families | 360, patient satisfaction surveys, PR
- Remains open/responsive to input/feedback of patients and families | 360, patient satisfaction surveys

Exhibits professional behaviors toward employers and colleagues:
- Honesty | 360, patient complaint
- Arriving to work on time | Time sheets, PR
- Willingly seeing patients throughout entire shift | Chart audit of patients seen in last hour of shift, PR
- Conducting appropriate sign-outs | PR
- Punctually completing medical records (“total instances of delinquent charting”) | Chart completion audit
- Attending mandatory meetings and conferences | Conference attendance roster audit
- Lack of substance abuse | PR

360 = 360-degree evaluations; DO = direct observation; PR = peer evaluation.

The following aspects of professionalism were considered to be important by employers and colleagues: honesty, timely compliance with scheduled requirements, and lack of substance abuse. The group decided that with regard to honesty, outcome measures could include the 360-degree evaluation, the SDOT, patient complaints, and any episode of falsification of medical records. The group noted that lying on the part of physicians is often very difficult to measure.

A number of professional skills fall under “compliance with scheduled requirements,” including arriving on time, prepared for work; willingly seeing patients throughout the entire shift; conducting appropriate sign-outs; and punctually completing medical records. The best outcome measures for this skill set are tracking punctuality through time cards or sign-in sheets, reviewing medical records to obtain the number of patients seen per shift or to uncover any instances of delinquent charting, and conducting peer evaluations related to sign-outs. Attendance at mandatory meetings and conferences is also an easy outcome to measure by means of a sign-in sheet or roll.

Appropriate outcome measures regarding “substance abuse” could be any reported violation of the ED’s substance abuse policy and failure to seek treatment when a problem has been identified. Because physician impairment policies vary by state, the standards of each state medical board will dictate specific outcome measures.

The difficulty in measuring certain aspects of professionalism raises the question of whether these aspects should be measured at all. Assessment and outcome measurement of professionalism are fraught with subjectivity and bias. The group had difficulty not only in determining which elements of professionalism were most important to measure, but also in deciding which were even possible to measure. For example, it was noted that it is extremely difficult, if not impossible, to measure skills such as recognizing the influence of marketing and advertising, using humor and language appropriately, or properly administering symptomatic care.

Systems-Based Practice (SBP)

Emergency medicine educators can incorporate several measures into their curricula to document progressive improvement with respect to the SBP competency. The proposed measures can be found in Table 6.23–30

Table 6. Emergency Medicine Relevant Systems-based Practice Physician Tasks
(Measure | Data Collection Method or Assessment Technique, grouped by physician task)

Out-of-hospital care:
- Resident discusses relevant information with out-of-hospital providers | RR
- Resident reviews out-of-hospital run sheet | CSR, S, RR
- Documentation of out-of-hospital care (i.e., aspirin and nitroglycerin given in the field) | RR

Modifying factors:
- Resource utilization | CSR, S, 360
- Consultation of interpreter for language barrier | RR

Legal/professional issues:
- Explanation of AMA indications, risks, and benefits | RR, DO
- Explanation of alternative treatments and options | RR, CL, P, S, CSR
- Documentation of patient capacity for decision-making | RR, CL, P, S, CSR
- Documentation of invitation to return for recommended treatment | RR, CL, P, S, CSR
- Documentation of patient handoff at change of shift | RR

Diagnostic studies:
- Consideration of evidence-based decision rules (examples include the NEXUS C-spine rules,27 Ottawa ankle rules,28 Ottawa knee rules,29 and Canadian Head CT rules30) | RR, CL, P, S, CSR
- Documentation of deviation from decision rules | RR, CL, P, S, CSR
- Documentation of procedures | RR, CL, P, S, CSR

Consultation and disposition:
- Timely notification of cardiac catheterization team for AMI | RR, CL, P, S, CSR, CR
- Timely notification of stroke team for acute CVA | RR, CL, P, S, CSR, CR
- Utilization of PSI31 or PORT32 score in CAP for disposition | RR, CL, P, S, CSR, CR
- CIWA33 score for alcohol withdrawal | RR, CL, P, S, CSR, CR

Consultant interactions:
- Appropriateness of consultation | RR, S
- Documentation of indications for consultation | RR
- Timely disposition (admission or discharge) | RR, S

Prevention and education:
- Appropriate discharge instructions written for understandability at the patient’s level | RR, CL, CSR, S, OSCE, 360
- Discharge instructions document a follow-up provider | RR
- Discharge instructions provide an explanation of medications | RR, S
- Reasons to return for further care | RR, S
- Appropriate discharge medications provided for key medical conditions, e.g., steroids/MDI in asthma, antibiotic choice for indication | RR, CL, CSR, S

Multitasking and team management:
- JCAHO ORYX Core measures34: AMI (administration of aspirin and beta-blockers, PTCA within 90 min of arrival, thrombolysis within 30 min of arrival); community-acquired pneumonia (oxygen assessment, blood cultures, initial antibiotic administration <4 hr, initial antibiotic choice for ICU and non-ICU patients) | RR, CL, CSR, S, 360, WE
- Time to administration of pain medications | RR, S
- Anticoagulation in atrial fibrillation | RR, S
- Nursing, staff, housestaff interactions; appropriate role assignment and direction of team by the resident for a medical or trauma resuscitation | DO, 360, PR, S

360 = 360-degree evaluations; CL = checklist; CR = chart review; CSR = chart-stimulated recall; CT = computed tomography; CVA = cerebral vascular accident; DO = direct observation; ICU = intensive care unit; JCAHO = Joint Commission on Accreditation of Healthcare Organizations; MDI = metered dose inhaler; OSCE = objective structured clinical examination; P = portfolio; PR = peer evaluation; PTCA = percutaneous transluminal coronary angioplasty; RR = record review; S = simulation and models; SP = standardized patient assessment; WE = written exam.

Successful outcomes assessment will require the employment of multiple measurement tools and will necessarily vary by institution depending on the relative strengths of each program. The consensus group chose specific criteria for each physician task based on generalizability across programs, acceptance as performance standards based on current guidelines (e.g., AHRQ standards), reliability, validity, and ease of implementation. The group also identified existing resources that support outcome measures for SBP.

Standards of care are available for more than 1,600 diseases on the AHRQ Web site (http://www.ahrq.gov/). Embedded within the site is a link to the National Guideline Clearinghouse (http://www.guideline.gov/), which provides more than 1,800 listings of practice guidelines based on disease, treatment, or quality assessment tools. The AHRQ also has a Web page entirely focused on outcomes and effectiveness (http://www.ahrq.gov/clinic/outcomix.htm).

The Joint Commission on Accreditation of Healthcare Organizations, recently renamed “The Joint Commission,” introduced the ORYX31 initiative in February 1997 to integrate outcomes and other performance measurement data into the accreditation process. In addition, ORYX measurement requirements are intended to support Joint Commission–accredited organizations in their quality improvement efforts. In July 2002, accredited hospitals began to collect data on standardized, or “core,” performance measures.31 The Hospital Quality Measures currently utilized by the Joint Commission and CMS are acute myocardial infarction (AMI), heart failure, pneumonia, and surgical infection prevention. With respect to EM, the relevant outcomes to be measured for AMI include administration of aspirin and beta-blockers, percutaneous transluminal coronary angioplasty within 90 minutes of arrival, or thrombolysis within 30 minutes of arrival. For pneumonia, they include oxygen assessment, blood cultures, antibiotic administration within 4 hours of arrival, and antibiotic choice for intensive care unit (ICU) and non-ICU patients. One caveat with respect to these measures is that residents cannot control certain aspects of the time-critical events. For instance, time to electrocardiogram (ECG) is institution-dependent, and time to needle from the time of notification is entirely dependent on the invasive cardiologist and the cardiology team framework; therefore, residents can only be assessed on timely notification of cardiology.

Using the 3-hour window for stroke team activation for tissue plasminogen activator administration, or door-to-needle times for AMI as examples, a resident’s records can be reviewed for timing or documentation of notification of the stroke team after interpretation of the initial head computed tomography (CT) or notification of the catheterization team after interpretation of the initial ECG. However, door-to-needle time as a whole encompasses other institutional factors, such as time to initial ECG and time for arrival of the consulting service. Each of these metrics is beyond resident control; however, some would argue that these measures could be used as institutional metrics, providing an indicator of appropriateness of the training environment for graduate medical education.
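One way to separate the resident-attributable interval from the institution-dependent door-to-needle time is to compute both directly from timestamps abstracted during record review. The sketch below is a minimal Python illustration; the field names and times are hypothetical, and programs would substitute whatever timestamps their records actually capture.

```python
# Minimal sketch: resident-attributable interval (study interpretation to
# consultant-team notification) versus overall door-to-needle time, which
# also reflects institutional factors outside the resident's control.
from datetime import datetime

def minutes_between(start_iso, end_iso):
    return (datetime.fromisoformat(end_iso) - datetime.fromisoformat(start_iso)).total_seconds() / 60

def notification_intervals(case):
    return {
        "interpretation_to_notification_min": minutes_between(
            case["study_interpreted_at"], case["team_notified_at"]),
        "door_to_needle_min": minutes_between(case["arrival_at"], case["needle_at"]),
    }

ami_case = {  # hypothetical timestamps abstracted from one record
    "arrival_at": "2007-01-15T21:04",
    "study_interpreted_at": "2007-01-15T21:12",  # initial ECG read by the resident
    "team_notified_at": "2007-01-15T21:15",      # catheterization team paged
    "needle_at": "2007-01-15T22:20",
}
print(notification_intervals(ami_case))
```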

Other outcomes that easily could be evaluated using record review and checklists in the case of AMI, for example, include documentation of aspirin and beta-blocker administration. Residents can be evaluated based on their documentation of medication administration in the ED or by out-of-hospital caregivers. If medications were not administered, resident evaluation should be based on documentation of appropriate contraindications. The checklist format allows for items to be scored as either binary (“Yes” or “No”) or by level of compliance using a Likert-type measurement (total, partial, or incorrect) for each individual parameter. The individual items can then either be scored as a composite (percentage of items performed) or an all-or-none measurement.32 Missing or incomplete documentation of care is interpreted as not having met the accepted standard.
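The two scoring approaches described above can be illustrated with a short sketch. The checklist items below are hypothetical; the point is the difference between a composite score (percentage of items met) and all-or-none scoring, with undocumented items counted as not meeting the standard.

```python
# Sketch of checklist scoring: composite (percentage of items met) versus
# all-or-none. Items recorded as None (missing documentation) are treated
# as not having met the accepted standard, per the approach described above.
def composite_score(checklist):
    met = sum(1 for v in checklist.values() if v is True)
    return 100 * met / len(checklist)

def all_or_none_score(checklist):
    return all(v is True for v in checklist.values())

ami_checklist = {  # hypothetical items for one AMI chart
    "aspirin_given_or_contraindication_documented": True,
    "beta_blocker_given_or_contraindication_documented": None,  # not documented
    "ecg_within_30_min": True,
}
print(composite_score(ami_checklist))    # 66.66... (two of three items met)
print(all_or_none_score(ami_checklist))  # False
```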

Chart-stimulated recall oral exam cases can be tailored to assess resident understanding of specific systems-based issues. Areas of assessment might include the resident’s use of clinical decision rules for utilization of diagnostic studies (e.g., NEXUS23 criteria for c-spine clearance) or disposition (e.g., PORT28 score for pneumonia or CIWA29 score for alcohol withdrawal).

One outcome measure for the requisite physician skill of multitasking and team management would be time to administration of pain medications. Core measures for JCAHO and ORYX specify guidelines for performance and outline the way in which quality is to be assessed.31 Using these metrics, a program director also can measure individual resident performance and can determine the aggregate performance of the program. The information will yield formative feedback at both the individual and the program levels. Repeat measurement will allow systematic improvement and will provide ample documentation of a systematic approach to improvement for accreditation agencies.

An EM-specific simulation curriculum has been designed to address SBP topics.33 One case involves a patient with a language barrier who suffers from an AMI and who wishes to leave AMA. Another case involves an intoxicated patient with a Level 1 pelvic trauma requiring transport to a specialized facility. SBP issues pertinent to the case include transport protocols, understanding of the Emergency Medicine Treatment and Active Labor Act (EMTALA), and knowledge of local regulations regarding disclosure of driving under the influence of alcohol. Another innovative assessment method for SBP involves the use of simulation for presenting morbidity and mortality conferences. In this scenario, the resident must confront significant issues with patient advocacy, consultation and disposition, and team management.34 OSCEs may also have a role in assessing items such as modifying factors (cultural issues), legal/professional issues (AMA), prevention, and education.

Portfolios may also provide an opportunity for educators to gather data to measure systems-based practice outcomes. An example of an SBP-specific portfolio entry would be a resident quality assurance project to determine institutional performance with respect to measures such as aspirin and beta-blocker administration in patients with AMI. Outcome measurement would use these results (before and after) to evaluate the impact of the SBP project.

Data collected from 360-degree assessments could also potentially be used for SBP measures. These could include rating a resident’s ability to provide appropriate discharge instructions or to converse with a patient about leaving AMA. For example, was the resident discussing the instructions at the patient’s level of understanding? Did the resident provide a follow-up provider and appropriate time interval for follow-up? Did the resident indicate specific criteria (e.g., worsening signs or symptoms) for which medical attention should be sought immediately? Were appropriate medications provided, and were they explained to the patient or caregiver?

Conclusions


This article provides EM educators with multiple outcome measures for evaluating five of the six core competencies. These proposed measures are intended to be used as a guide for individual programs as they seek to comply with ACGME requirements for Phase 3 of the Outcome Project. It was clear from the workgroup discussions that most educators feel that using core measures is a good starting point, because these metrics have relevance outside of graduate medical education. An added benefit of using these measures is that data may be more easily collected and may be applicable to multiple core competencies.

Emergency medicine training provides ample opportunity for outcome measurement for each of the core competencies. The best methods with which each can be addressed depend on the individual program’s assets, faculty, and hospital infrastructure.

Acknowledgments


The authors are grateful to Paul B. Miller, JD, who offered valuable comments on the previous versions of this article. The authors thank program directors who participated in the Outcome Project focus group during the 2006 CORD Best Practices Conference and the CORD Board of Directors for their support of this project. We also thank Patricia Kinneer for her expert editorial assistance.


Supporting Information

Filename: ACEM_046_sm_AppendixS1.doc; Size: 35K; Description: Supporting info item.
