ACADEMIC EMERGENCY MEDICINE 2011; 18:519–526 © 2011 by the Society for Academic Emergency Medicine
Objectives: The Institute of Medicine (IOM) has recommended the development of national standards for the measurement of emergency care performance. The authors undertook this study with the goals of enumerating and categorizing existing performance measures relevant to pediatric emergency care.
Methods: Potential performance measures were identified through a survey of 1) the peer-reviewed literature, 2) websites of organizations and societies pertaining to quality improvement, and 3) emergency department (ED) directors. Performance measures were enumerated and categorized, using consensus methods, on three dimensions: 1) the IOM quality domains; 2) Donabedian’s structure/process/outcome framework; and 3) general, cross-cutting, or disease-specific measures.
Results: A total of 405 performance measures were found for potential use for pediatric emergency care. When categorized by IOM domain, nearly half of the measures were related to effectiveness, while only 6% of measures addressed patient-centeredness. In the Donabedian dimension, 67% of measures were categorized as process measures, with 29% outcome and 4% structure measures. Finally, 31% of measures were general measures relevant to every ED visit. Although 225 measures (55%) were disease-specific, the majority (56%) of these measures related to only five common conditions.
Conclusions: A wide range of performance measures relevant to pediatric emergency care is available. However, existing measures do not constitute a systematic and comprehensive approach to evaluating the quality of care provided.
Hospital emergency departments (EDs) are experiencing increased utilization while the number of EDs is decreasing, widening the gap between expectations and the reality of the quality of care delivered. To achieve system accountability, the 2006 Institute of Medicine (IOM) report, The Future of Emergency Care, recommends “convening a panel with emergency care expertise to develop evidence-based indicators of emergency care system performance.”1 Early efforts to identify performance measures for emergency care have focused on adult-centric clinical conditions,2 as well as performance measures relevant to patients regardless of their age, such as census, throughput, unscheduled return rates, and patient satisfaction.3–9
Although children under the age of 19 years represent 27% of the 114 million ED visits annually,10 there is no set of widely used and accepted pediatric-specific performance measures. The Emergency Care for Children: Growing Pains component of the 2006 IOM report recommends that pediatric emergency medical systems must specifically support the development of national standards for emergency care performance measurement. In one study, Guttmann et al.11 performed an extensive literature review to identify quality of care and outcome pairs. Their work identified 68 quality indicators for 12 clinical conditions that accounted for 23% of all pediatric ED visits. This and other studies mark early approaches to defining performance measures, but highlight the difficulties in developing a set of measures that comprehensively reflects the quality of pediatric emergency care for all children.
To date, the body of research regarding pediatric emergency care performance measurement has not been aggregated and organized within a logical conceptual framework. Delineating the current state of pediatric emergency care performance measurement is critical to understanding measurement gaps and prioritizing measures for both reporting the current state of quality and driving improvement efforts. One critical dimension must incorporate the widely accepted and disseminated IOM quality domains from Crossing the Quality Chasm: health care should be effective, safe, efficient, timely, equitable, and patient-centered.12 Another important dimension includes Donabedian’s structure/process/outcome formulation that has established the framework for most contemporary quality measurement and improvement activity.13,14 Finally, quality measures should reflect the broad spectrum of diseases for which children receive emergency care. Therefore, we performed this study with the goals of aggregating existing performance measures relevant to pediatric emergency care and categorizing measures across three dimensions: the six IOM quality domains,12 Donabedian’s structure/process/outcome formulation,13,14 and type of illness or injury.15
Three sources were used to capture potentially relevant measures. First, we surveyed the peer-reviewed literature to identify relevant quality measures reflecting emergency or pediatric care. We searched Medline for the years 1950 through 2008 and updated our search regularly until June 2010 using the following terms: emergency department, emergency care, quality, children, pediatric, safety, effectiveness, efficiency, timeliness, quality indicator, performance measure, and metric. Search strings also included combinations of these terms such as emergency department quality, emergency department metric, pediatric emergency care quality, and pediatric quality indicator. The results of these searches were combined with related article searches and a bibliographic review until searches returned overlapping results. To maximize inclusivity, potential measures were captured regardless of patient population (adult, pediatric), setting (ED, inpatient, etc.), or study type, and regardless of whether measures were simply described, piloted, or validated. At this stage, we did not require measures to have delineated specifications or technical requirements.
Second, we searched the websites of organizations involved specifically with quality and quality improvement (Table 1). We identified these organizations based on our experience as ED directors (JMC, MHG, RMR, KNS) and investigators of quality assessment and improvement (all authors). Finally, to capture additional potential measures that have not been published or widely endorsed by quality organizations, we corresponded with a convenience sample of ED directors in the Pediatric Emergency Care Applied Research Network (PECARN) to identify measures in use at these institutions. PECARN is a network of 22 geographically diverse EDs, of which 17 have a physically separate pediatric ED, four are pediatric EDs within general EDs, and one is a general ED.16–18
All identified performance measures were enumerated and are listed in Data Supplement S1 (available as supporting information in the online version of this paper) with their measure definitions and source(s). We list the number of times each measure was reported from any of our three data sources. Measures were reviewed, and those with extremely low frequency in pediatric populations were removed from consideration.15 Excluded measures pertained primarily to conditions such as myocardial infarction and heart failure. Measures applying to general ED care were retained, including measures for patient flow, infrastructure and personnel, measures of patient satisfaction, and general complication and error measures.
Next, each enumerated measure was categorized by IOM domain, Donabedian framework, and disease category. All categorizations were made by one investigator (EAA) and reviewed for agreement by three investigators (ERA, JMC, MHG). Disagreements were resolved by consensus discussion. Figure 1 depicts our performance measure framework and the three dimensions represented along with sample measures representing individual cells within the framework. Each measure was categorized into one or more of the six IOM domains. Measures regarding health care that provides services based on scientific knowledge to all who could benefit, and refrains from providing services to those not likely to benefit, were assigned to the effectiveness domain. Safety measures address care that avoids injuries to patients from the care that is intended to help them. Health care efficiency measures are those that deal with avoiding waste, including waste of equipment, supplies, ideas, and energy. Patient-centered measures connote care that is respectful of and responsive to individual patient preferences, needs, and values, and ensures that patient values guide all clinical decisions. Measures of timeliness of health care are those that address reducing waits and sometimes harmful delays for both those who receive and those who give care. Measures of equity assess care that does not vary because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status.
Measures were also assigned to one of Donabedian’s structure/process/outcome categories.13,14 Structural elements provide indirect quality-of-care measures related to a physical setting and resources. Process indicators provide a measure of quality of care and services by evaluating the method or process by which care is delivered. Outcome elements describe valued results related to lengthening life, relieving pain, reducing disabilities, and satisfying the consumer. Each measure was further categorized as 1) a general measure relevant to every ED visit, 2) a cross-cutting measure (those that apply across many clinical conditions, such as pain or diagnostic testing), or 3) a disease-specific measure. Disease-specific measures were organized using the diagnosis grouping system,15 whereby each measure received a major group and subgroup designation. Major groups represent broad disease categories (e.g., respiratory diseases) while subgroups provide further specifications within major groups (e.g., asthma).
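The three-dimensional categorization described above can be sketched as a simple data record. The sketch below is illustrative only (it is not part of the study); all class and field names are our own hypothetical choices. It encodes the framework's constraints: each measure receives one or more IOM domains, exactly one Donabedian category, and exactly one scope, with Diagnosis Grouping System designations reserved for disease-specific measures.

```python
from dataclasses import dataclass

# Controlled vocabularies drawn from the framework described in the text.
IOM_DOMAINS = {"effective", "safe", "efficient", "timely", "equitable", "patient-centered"}
DONABEDIAN = {"structure", "process", "outcome"}
SCOPES = {"general", "cross-cutting", "disease-specific"}

@dataclass
class Measure:
    name: str
    iom_domains: set           # one or more IOM quality domains
    donabedian: str            # exactly one of structure/process/outcome
    scope: str                 # general, cross-cutting, or disease-specific
    dgs_major_group: str = ""  # used only for disease-specific measures
    dgs_subgroup: str = ""

    def __post_init__(self):
        # Enforce the framework's one-or-more / exactly-one constraints.
        assert self.iom_domains and self.iom_domains <= IOM_DOMAINS
        assert self.donabedian in DONABEDIAN
        assert self.scope in SCOPES

# Example categorization using a measure named in the text: time to
# reliever medication for patients with asthma exacerbations.
m = Measure(
    name="Time to reliever medication for asthma exacerbation",
    iom_domains={"timely"},
    donabedian="process",
    scope="disease-specific",
    dgs_major_group="Respiratory diseases",
    dgs_subgroup="Asthma",
)
print(m.donabedian)  # process
```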
We reviewed 215 published manuscripts, investigated 17 organization websites, and queried 22 ED directors. This review yielded 405 relevant performance measures from emergency medicine (EM) and pediatrics for potential use in a pediatric EM setting. Figure 2 depicts the distribution of measures by IOM quality domain. Because some measures represent more than one IOM domain (e.g., documenting weight in kilograms is a measure of effectiveness and safety), 525 individual IOM domain designations were assigned to these 405 measures. Just over half (n = 267, 51%) of these 525 designations represented the IOM domain of effectiveness. Designations of patient-centeredness were represented in 37 measures (7%) and encompassed measures primarily related to patient satisfaction, education, and pain. Designations of timeliness (n = 55, 10%) were mainly represented by traditional ED throughput measures such as ED length of stay and ED arrival-to-triage time, as well as time to therapeutic interventions such as time-to-reliever medications for patients with asthma exacerbations. Many of the measures classified under efficiency (n = 72, 13.7%) involved quantifying resource use (e.g., proportion of patients receiving a computed tomography scan for head trauma). Safety designations (n = 91, 17%) were largely defined by the category of metrics pertaining to medical errors and complications (e.g., radiograph misinterpretation rate). Equity measures constituted the smallest proportion of IOM categorizations (n = 3, 0.5%); however, 14 of the 405 measures did also specify stratification by race, ethnicity, or payer to assess equity.
Each of the 405 measures was assigned to only one Donabedian category. Two-thirds (271 of 405) of identified measures were categorized as process measures. Nearly one-third (n = 119) were noted to be outcome measures. Structure measures accounted for only 15 (3.7%) measures (e.g., staffing hours per patient visit).
Each of the 405 measures was also assigned to only one general/disease-specific category. Thirty-one percent (124 of 405) of measures were categorized as general measures, applicable to all ED visits (e.g., left without being seen rate). Fifty-three (13%) measures were categorized as cross-cutting, many of which related to pain and procedural sedation (n = 12), diagnostic test utilization (n = 9), and medication errors (n = 12). A total of 228 measures (56%) were disease-specific. Using the Diagnosis Grouping System,15 disease-specific measures included 16 of 21 major groups and 32 of 77 subgroups. Respiratory diseases (67 measures) and trauma (32 measures) represented the largest major groups; asthma (44 measures) and fever (29 measures) accounted for the largest subgroups. A few common pediatric conditions (asthma, fever, urinary tract infection, otitis media, and pneumonia) accounted for 125 distinct measures, representing more than one-half of all disease-associated measures. Table 2 presents the distribution of measures across the major and minor groups of the Diagnosis Grouping System. Figure 3 is a summary of our measurement framework and includes the total number of measures contained within each cell of the matrix.
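The reported counts in the three results paragraphs above are internally consistent; the brief check below (ours, for illustration, not part of the study) tallies the published figures. IOM designations sum to 525 because a measure may carry more than one domain, whereas the Donabedian and scope categories each partition the 405 measures exactly.

```python
# Reported counts from the results: IOM domain designations (a measure may
# carry more than one), Donabedian categories, and general/disease-specific scope.
iom_designations = {"effectiveness": 267, "safety": 91, "efficiency": 72,
                    "timeliness": 55, "patient-centeredness": 37, "equity": 3}
donabedian = {"process": 271, "outcome": 119, "structure": 15}
scope = {"general": 124, "cross-cutting": 53, "disease-specific": 228}

assert sum(iom_designations.values()) == 525  # designations exceed the 405 measures
assert sum(donabedian.values()) == 405        # exactly one category per measure
assert sum(scope.values()) == 405             # exactly one scope per measure
print(round(100 * donabedian["process"] / 405))  # 67
```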
Table 2. Number of Measures by Diagnosis Grouping System Major Group and Subgroup

Allergic, Immunologic, and Rheumatologic Diseases (4)
Circulatory and Cardiovascular Diseases (2)
    Devices and complications of the circulatory system (2)
Endocrine, Metabolic, and Nutritional Diseases (5)
    Diabetes mellitus (5)
ENT, Dental, and Mouth Diseases (23)
    Infectious ear disorders (18)
    Infectious mouth and throat disorders (1)
    Infectious nose and sinus disorders, including URI (4)
Fluid and Electrolyte Disorders (4)
    Dehydration (3)
    Other fluid and electrolyte disorders (1)
Gastrointestinal Diseases (19)
    Abdominal pain (4)
Genital and Reproductive Diseases (1)
Hematologic Diseases (1)
    Sickle cell anemia (1)
Neurologic Diseases (3)
    Infectious neurologic diseases (2)
Screening exams (6)
Other devices and complications (12)
Other neonatal disorders (2)
Psychiatric and Behavioral Diseases and Substance Abuse (2)
Respiratory Diseases (67)
    Asthma (44)
    Infectious respiratory diseases (19)
    Devices and complications of the respiratory system (3)
    Allergic, immunologic, and rheumatologic diseases (1)
Systemic States (29)
    Fever (29)
Trauma (32)
    Abdominal trauma (1)
    Brain and skull trauma (9)
    Chest trauma (1)
    Complications of trauma (1)
    Fractures and dislocations (8)
    Lacerations, amputations (3)
    Other trauma (8)
    Strains and sprains (1)
Urinary Tract Disease (16)
    Infectious urinary tract disease (16)
The 405 measures came from 643 sources. A total of 410 (64%) sources came from the medical literature, 90 (14%) from organization websites, and 143 (22%) from personal communications with PECARN ED medical directors. Disease-specific measures with the most sources included asthma patients discharged with a prescription for steroids (n = 5) and central line complication rates (n = 6). General metrics, including patient arrival to physician evaluation time (11 sources), length of stay in the ED after a decision to admit a patient (9), overall length of stay in the ED (16), and the rates of patients leaving without being seen by a physician or without treatment (10) had the most reported sources.
Recognizing that national efforts have largely ignored the quality of pediatric emergency care, the IOM has recommended the development of national performance measurement standards. This study represents an important early step toward this goal by comprehensively identifying and enumerating performance measures for pediatric emergency care and categorizing them across several dimensions. This study provides useful information for those interested in reporting on and improving the quality of care for pediatric emergency patients by providing a comprehensive multidimensional review.
Performance measures currently reported in the literature predominantly fall into the IOM domain of effectiveness and the Donabedian category of process. More than one-third of all process measures reflect traditional resource utilization and operational management measures, such as ED length of stay and triage-to-provider time, with the remaining two-thirds of these measures representing a disease-specific group. Disease-specific performance measures mostly address a few common pediatric conditions, including asthma, fever, urinary tract infection, otitis media, and pneumonia. These conditions reflect the natural frequency of disease, but do not represent an adequate cross-section of illness severity. Future measure development should consider illness severity because EDs are expected to provide quality care for the sickest children and must understand and measure their performance in this arena. Given the recent focus on health care reform, enhancing the value of care, the role of the consumer, and eliminating disparities, our belief is that measures of efficiency, equity, and patient-centeredness are underrepresented. Important outcome measures that focus on lengthening life, relieving pain, reducing disabilities, and satisfying the consumer are uncommon as well. The paucity of true ED outcome measures may be related to the difficulty in measuring the contribution of an ED visit to long-term outcomes such as lengthening life and reducing disabilities.
It is interesting to note that only a small proportion of quality measures were in the Donabedian category of structure, the category often easiest to measure. Although some structural measures, such as the presence of a dedicated pediatric ED coordinator, are very important to assuring quality of care and are supported by evidence,19 most existing measures focus on the ways we deliver care (process) and the health outcomes we produce for our patients. Process measures outnumber outcome measures, perhaps because they are easier to measure and are often within the locus of control of the ED. However, valid process measures should also be linked to improving outcomes. Much work remains to examine and confirm many structure/outcome and process/outcome relationships.
Mandatory public reporting is emerging in some sectors of health care (e.g., The Joint Commission sentinel events), so serious consideration must be given to the future of defining, reporting, and disseminating performance measures for pediatric emergency care. Stelfox et al.20 recently published a systematic review of quality indicators for pediatric trauma care. They found 120 indicators currently in use, most of which measured process and outcomes of prehospital and ED care for injuries. The quality of reporting in most studies was insufficient to support the identification of a core set of quality measures, and there were no measures of rehabilitation care. Another study developed performance measures for 12 distinct conditions, accounting for 23% of presentations to pediatric EDs.11 This underscores the need for general measures relevant to every ED visit to complement disease-specific measures, because EDs should not be evaluated on only a fraction of children for whom they care. Relying solely on condition-specific measures provides an incomplete picture of ED quality of care.
We acknowledge that our approach has several limitations. First, although our inclusion criteria for prospective measures were broad, there may be measures not reported by our sources that are applicable to emergency care for children. Second, methods of classification of the array of measures listed here may be viewed as subjective. For this reason, we chose to use widely accepted domains cited and used in quality improvement work, assessing consensus among a group of knowledgeable investigators. While we attempted to provide comprehensiveness in the enumeration and reporting of measures, the goal to retain measure definitions as originally reported inevitably led to duplication. We felt that the benefits of including measures in their published form outweighed the limitation of potential duplication that would exist in the final list. Finally, there is little published information about the validity, reliability, and operational definitions for most of the measures reported here. This information would be helpful to develop a broader consensus about the utility of these measures and reduce this list to a manageable number of measures for use in clinical practice.
To adequately measure quality, a battery of performance measures reflective of all dimensions of emergency care is a key component to advance improvement in the ED setting. This study was performed to elucidate the current state of performance measurement in pediatric emergency medicine; however, many of our findings are relevant to general emergency care and could be replicated for adult-focused diseases. We are encouraged that we were able to find a wide range of relevant measures. However, existing measures do not systematically and comprehensively cover disease frequency and severity, are heavily process- rather than outcome-oriented, and the most highly referenced measures focus on a single aspect of quality, namely timeliness. The work accomplished, especially in the past decade, is strong, but the state of quality pediatric emergency care, and the measures and benchmarks associated with it, are still developing and require further assessment and more complete validation.
Participating centers and site investigators are listed below in alphabetical order: Children’s Hospital of Philadelphia (Alpern, Shaw); Children’s National Medical Center (Chamberlain); Cincinnati Children’s Hospital Medical Center (Alessandrini, Ruddy, Varadarajan); Medical College of Wisconsin/Children’s Hospital of Wisconsin (Gorelick)
PECARN Steering Committee: N. Kuppermann, Chair; E. Alpern, D. Borgialli, J. Callahan, J. Chamberlain, L. Cimpello, P. Dayan, J. M. Dean, M. Gorelick, D. Jaffe, R. Lichenstein, K. Lillis, R. Maio, F. Moler, D. Monroe, L. Nigrovic, E. Powell, R. Ruddy, R. Stanley, M. Tunik, A. Walker
MCHB/EMSC liaisons: D. Kavanaugh, H. Park
Central Data Management and Coordinating Center (CDMCC): M. Dean, R. Holubkov, S. Knight, A. Donaldson, S. Zuspan
Feasibility and Budget Subcommittee (FABS): T. Singh, Chair; S. Goldfarb, E. Kim, S. Krug, D. Monroe, D. Nelson, H. Rincon, S. Zuspan
Grants and Publications Subcommittee (GAPS): M. Gorelick, Chair; E. Alpern, D. Borgialli, L. Cimpello, A. Donaldson, G. Foltin, S. Knight, F. Moler, L. Nigrovic, S. Teach
Protocol Concept Review and Development Subcommittee (PCRADS): D. Jaffe, Chair; K. Brown, J. Chamberlain, P. Dayan, M. Dean, R. Holubkov, P. Mahajan, R. Maio, K. Shaw, M. Tunik
Quality Assurance Subcommittee (QAS): R. Stanley, Chair; E. Alessandrini, R. Enriquez, R. Gerard, R. Lichenstein, K. Lillis, M. Pusic, R. Ruddy, A. Walker
Safety and Regulatory Affairs Subcommittee (SRAS): W. Schalick, J. Callahan, Co-Chairs; S. Atabaki, J. Burr, K. Call, J. Hoyle, E. Powell, R. Ruddy