ACADEMIC EMERGENCY MEDICINE 2011; 18:519–526 © 2011 by the Society for Academic Emergency Medicine

Abstract

Objectives:  The Institute of Medicine (IOM) has recommended the development of national standards for the measurement of emergency care performance. The authors undertook this study with the goals of enumerating and categorizing existing performance measures relevant to pediatric emergency care.

Methods:  Potential performance measures were identified through a survey of 1) the peer-reviewed literature, 2) websites of organizations and societies pertaining to quality improvement, and 3) emergency department (ED) directors. Performance measures were enumerated and categorized, using consensus methods, on three dimensions: 1) the IOM quality domains; 2) Donabedian’s structure/process/outcome framework; and 3) general, cross-cutting, or disease-specific measures.

Results:  A total of 405 performance measures were identified for potential use in pediatric emergency care. When categorized by IOM domain, just over half of the measure designations related to effectiveness, while only 7% addressed patient-centeredness. In the Donabedian dimension, 67% of measures were categorized as process measures, with 29% outcome and 4% structure measures. Finally, 31% of measures were general measures relevant to every ED visit. Although 228 measures (56%) were disease-specific, more than half of these related to only five common conditions.

Conclusions:  A wide range of performance measures relevant to pediatric emergency care is available. However, existing measures lack a systematic and comprehensive approach to evaluating the quality of care provided.

Hospital emergency departments (EDs) are experiencing increased utilization even as the number of EDs declines, widening the gap between expectations and the reality of the quality of care delivered. To achieve system accountability, the 2006 Institute of Medicine (IOM) report, The Future of Emergency Care, recommends “convening a panel with emergency care expertise to develop evidence-based indicators of emergency care system performance.”1 Early efforts to identify performance measures for emergency care have focused on adult-centric clinical conditions,2 as well as on performance measures relevant to patients regardless of age, such as census, throughput, unscheduled return rates, and patient satisfaction.3–9

Although children under the age of 19 years account for 27% of the 114 million annual ED visits,10 there is no widely used and accepted set of pediatric-specific performance measures. The Emergency Care for Children: Growing Pains component of the 2006 IOM report recommends that pediatric emergency medical systems specifically support the development of national standards for emergency care performance measurement. In one study, Guttmann et al.11 performed an extensive literature review to identify pairs of quality-of-care indicators and outcomes. Their work identified 68 quality indicators for 12 clinical conditions that together accounted for 23% of all pediatric ED visits. This and other studies mark early approaches to defining performance measures, but they also highlight the difficulty of developing a set of measures that comprehensively reflects the quality of pediatric emergency care for all children.

To date, the body of research on pediatric emergency care performance measurement has not been aggregated and organized within a logical conceptual framework. Delineating the current state of pediatric emergency care performance measurement is critical to understanding measurement gaps and to prioritizing measures both for reporting on quality and for driving improvement efforts. One critical dimension must incorporate the widely accepted and disseminated IOM quality domains from Crossing the Quality Chasm: health care should be effective, safe, efficient, timely, equitable, and patient-centered.12 Another important dimension is Donabedian’s structure/process/outcome formulation, which has established the framework for most contemporary quality measurement and improvement activity.13,14 Finally, quality measures should reflect the broad spectrum of diseases for which children receive emergency care. We therefore performed this study with the goals of aggregating existing performance measures relevant to pediatric emergency care and categorizing them across three dimensions: the six IOM quality domains,12 Donabedian’s structure/process/outcome formulation,13,14 and type of illness or injury.15

Methods

Relevant Measures

Three sources were used to capture potentially relevant measures. First, we surveyed the peer-reviewed literature to identify quality measures reflecting emergency or pediatric care. We searched Medline for the years 1950 through 2008, updating our search regularly until June 2010, using the following terms: emergency department, emergency care, quality, children, pediatric, safety, effectiveness, efficiency, timeliness, quality indicator, performance measure, and metric. Search strings also included combinations of these terms, such as emergency department quality, emergency department metric, pediatric emergency care quality, and pediatric quality indicator. The results of these searches were combined with related-article searches and a bibliographic review until searches returned only overlapping results. To maximize inclusivity, potential measures were captured regardless of patient population (adult, pediatric), setting (ED, inpatient, etc.), study type, and whether measures were simply described, piloted, or validated. At this stage, we did not require measures to have delineated specifications or technical requirements.
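
For readers who want to reproduce or extend this search, the following minimal sketch (Python) generates combined query strings of the kind described above. The pairing scheme is our illustrative assumption; the exact strings submitted to Medline may have differed.

    # Illustrative sketch: generating combined Medline search strings from
    # the base terms listed above. The pairing scheme is an assumption for
    # illustration, not the study's exact search strategy.
    from itertools import product

    SETTING_TERMS = ["emergency department", "emergency care"]
    POPULATION_TERMS = ["children", "pediatric"]
    QUALITY_TERMS = ["quality", "safety", "effectiveness", "efficiency",
                     "timeliness", "quality indicator",
                     "performance measure", "metric"]

    def build_queries():
        # Start with the single terms, then add two-term combinations such
        # as "emergency department quality" or "pediatric quality indicator."
        queries = set(SETTING_TERMS + POPULATION_TERMS + QUALITY_TERMS)
        for prefix, quality in product(SETTING_TERMS + POPULATION_TERMS,
                                       QUALITY_TERMS):
            queries.add(f"{prefix} {quality}")
        return sorted(queries)

    if __name__ == "__main__":
        for query in build_queries():
            print(query)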

Second, we searched the websites of organizations involved specifically with quality and quality improvement (Table 1). We identified these organizations based on our experience as ED directors (JMC, MHG, RMR, KNS) and investigators of quality assessment and improvement (all authors). Finally, to capture additional potential measures that have not been published or widely endorsed by quality organizations, we corresponded with a convenience sample of ED directors in the Pediatric Emergency Care Applied Research Network (PECARN) to identify measures in use at these institutions. PECARN is a network of 22 geographically diverse EDs, of which 17 have a physically separate pediatric ED, four are pediatric EDs within general EDs, and one is a general ED.16–18

Table 1.    Organization and Society Websites Reviewed for Performance Measures
The Joint Commission: http://www.jointcommission.org/performance_measurement.aspx
Agency for Healthcare Research and Quality (AHRQ): http://www.qualityindicators.ahrq.gov/pdi_overview.htm
Hospital Quality Alliance (HQA): http://www.hospitalqualityalliance.org/hospitalqualityalliance/qualitymeasures/qualitymeasures.html
National Quality Measures Clearinghouse (NQMC): http://www.qualitymeasures.ahrq.gov/
National Association of Children’s Hospitals and Related Institutions (NACHRI): http://www.childrenshospitals.net/AM/Template.cfm?Section=Quality2&CONTENTID=22925&TEMPLATE=CMHTMLDisplay.cfm
RAND: http://www.rand.org/health/pubs_search.html
Healthcare Effectiveness Data and Information Set (HEDIS): http://web.ncqa.org/tabid/536/Default.aspx
Physician Quality Reporting Initiative (PQRI): https://www.cms.gov/PQRI/15_MeasuresCodes.asp#TopOfPage
Urgent Matters: http://www.urgentmatters.org/
American Academy of Pediatrics (AAP): http://www.aap.org/
Centers for Medicare and Medicaid Services (CMS): http://www.cms.gov/home/rsds.asp
National Quality Forum (NQF): http://www.qualityforum.org/Measures_List.aspx
National Committee for Quality Assurance (NCQA): http://www.ncqa.org/tabid/59/Default.aspx
Ambulatory Care Quality Alliance (AQA): http://www.aqaalliance.org/performancewg.htm
American Board of Pediatrics: https://www.abp.org/ABPWebStatic/
Institute for Healthcare Improvement (IHI): http://www.ihi.org/ihi
Maternal and Child Health Bureau: https://perfdata.hrsa.gov/mchb/TVISReports/MeasurementData/MeasurementDataMenu.aspx

All identified performance measures were enumerated and are listed in Data Supplement S1 (available as supporting information in the online version of this paper) with their measure definitions and source(s). We list the number of times each measure was reported from any of our three data sources. Measures were reviewed, and those with extremely low frequency in pediatric populations were removed from consideration.15 Excluded measures pertained primarily to conditions such as myocardial infarction and heart failure. Measures applying to general ED care were retained, including measures for patient flow, infrastructure and personnel, measures of patient satisfaction, and general complication and error measures.

Next, each enumerated measure was categorized by IOM domain, Donabedian framework, and disease category. All categorizations were made by one investigator (EAA) and reviewed for agreement by three investigators (ERA, JMC, MHG). Disagreements were resolved by consensus discussion. Figure 1 depicts our performance measure framework and the three dimensions represented along with sample measures representing individual cells within the framework. Each measure was categorized into one or more of the six IOM domains. Measures regarding health care that provides services based on scientific knowledge to all who could benefit, and refrains from providing services to those not likely to benefit, were assigned to the effectiveness domain. Safety measures address care that avoids injuries to patients from the care that is intended to help them. Health care efficiency measures are those that deal with avoiding waste, including waste of equipment, supplies, ideas, and energy. Patient-centered measures connote care that is respectful of and responsive to individual patient preferences, needs, and values, and ensures that patient values guide all clinical decisions. Measures of timeliness of health care are those that address reducing waits and sometimes harmful delays for both those who receive and those who give care. Measures of equity assess care that does not vary because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status.
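
As a concrete illustration of this categorization step, the sketch below (Python) models a single categorized measure record. The field names, validation rules, and the Donabedian/scope labels for the example are our assumptions for illustration, not the study’s actual data structure.

    # Minimal sketch of a categorized measure record. Field names and the
    # example's process/general labels are illustrative assumptions.
    from dataclasses import dataclass

    IOM_DOMAINS = {"effectiveness", "safety", "efficiency", "timeliness",
                   "equity", "patient-centeredness"}
    DONABEDIAN = {"structure", "process", "outcome"}
    SCOPES = {"general", "cross-cutting", "disease-specific"}

    @dataclass(frozen=True)
    class Measure:
        name: str
        iom_domains: frozenset  # one or more of the six IOM domains
        donabedian: str         # exactly one Donabedian category
        scope: str              # general, cross-cutting, or disease-specific

        def __post_init__(self):
            assert self.iom_domains and self.iom_domains <= IOM_DOMAINS
            assert self.donabedian in DONABEDIAN
            assert self.scope in SCOPES

    # The text's example: documenting weight in kilograms was assigned to
    # both effectiveness and safety. (Its process/general labels here are
    # our assumption.)
    weight_in_kg = Measure(
        name="Weight documented in kilograms",
        iom_domains=frozenset({"effectiveness", "safety"}),
        donabedian="process",
        scope="general",
    )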

Figure 1.  Framework of performance measures by IOM quality domain (rows), Donabedian structure/process/outcome categories (columns), and general/disease-specific categories. Examples of measures meeting these criteria are contained within the framework cells. ICU = intensive care unit; IOM = Institute of Medicine.

Measures were also assigned to one of Donabedian’s structure/process/outcome categories.13,14 Structural elements provide indirect quality-of-care measures related to the physical setting and resources. Process indicators measure the quality of care and services by evaluating the method or process by which care is delivered. Outcome elements describe valued results related to lengthening life, relieving pain, reducing disabilities, and satisfying the consumer. Each measure was further categorized as 1) a general measure relevant to every ED visit, 2) a cross-cutting measure (one that applies across many clinical conditions, such as pain or diagnostic testing), or 3) a disease-specific measure. Disease-specific measures were organized using the Diagnosis Grouping System,15 whereby each measure received a major group and subgroup designation. Major groups represent broad disease categories (e.g., respiratory diseases), while subgroups provide further specification within major groups (e.g., asthma).
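
Continuing the sketch above, a disease-specific measure additionally carries Diagnosis Grouping System labels. The example below echoes the asthma timeliness measure mentioned in the Results; the representation itself is our assumption.

    # Illustrative continuation: attaching Diagnosis Grouping System (DGS)
    # major group and subgroup labels to a disease-specific measure. The
    # representation is an assumption for illustration.
    dgs_labeled_measure = {
        "name": "Time to reliever medication for asthma exacerbation",
        "iom_domains": {"timeliness"},
        "donabedian": "process",
        "scope": "disease-specific",
        "dgs_major_group": "Respiratory diseases",
        "dgs_subgroup": "Asthma",
    }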

Results

We reviewed 215 published manuscripts, investigated 17 organization websites, and queried 22 ED directors. This review yielded 405 relevant performance measures from emergency medicine (EM) and pediatrics for potential use in a pediatric EM setting. Figure 2 depicts the distribution of measures by IOM quality domain. Because some measures represent more than one IOM domain (e.g., documenting weight in kilograms is a measure of both effectiveness and safety), 525 individual IOM domain designations were assigned to these 405 measures. Just over half (n = 267, 51%) of these 525 designations represented the IOM domain of effectiveness. Patient-centeredness was represented in 37 measures (7%), primarily measures related to patient satisfaction, education, and pain. Designations of timeliness (n = 55, 10%) mainly comprised traditional ED throughput measures, such as ED length of stay and ED arrival-to-triage time, as well as time to therapeutic interventions, such as time to reliever medications for patients with asthma exacerbations. Many of the measures classified under efficiency (n = 72, 14%) involved quantifying resource use (e.g., the proportion of patients receiving a computed tomography scan for head trauma). Safety designations (n = 91, 17%) were largely defined by metrics pertaining to medical errors and complications (e.g., radiograph misinterpretation rate). Equity measures constituted the smallest proportion of IOM categorizations (n = 3, 0.6%); however, 14 of the 405 measures also specified stratification by race, ethnicity, or payer to assess equity.
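
The percentages above follow directly from the designation counts reported in the text; a short sketch (Python) reproduces the arithmetic behind Figure 2:

    # IOM domain designation counts from the text (525 designations
    # across the 405 measures).
    designations = {
        "effectiveness": 267,
        "safety": 91,
        "efficiency": 72,
        "timeliness": 55,
        "patient-centeredness": 37,
        "equity": 3,
    }
    total = sum(designations.values())
    assert total == 525
    for domain, n in sorted(designations.items(), key=lambda kv: -kv[1]):
        print(f"{domain}: {n}/{total} = {100 * n / total:.1f}%")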

Figure 2.  Distribution of performance measures by IOM quality domain. Measures may apply to more than one domain (the 405 measures received 525 IOM domain designations). IOM = Institute of Medicine.

Each of the 405 measures was assigned to exactly one Donabedian category. Two-thirds (271 of 405, 67%) were categorized as process measures, and nearly one-third (n = 119, 29%) were outcome measures. Structure measures accounted for only 15 measures (3.7%; e.g., staffing hours per patient visit).

Each of the 405 measures was also assigned to exactly one general/disease-specific category. Thirty-one percent (124 of 405) were categorized as general measures applicable to all ED visits (e.g., the rate of patients who left without being seen). Fifty-three measures (13%) were categorized as cross-cutting, many of which related to pain and procedural sedation (n = 12), medication errors (n = 12), and diagnostic test utilization (n = 9). A total of 228 measures (56%) were disease-specific. Using the Diagnosis Grouping System,15 disease-specific measures spanned 16 of 21 major groups and 32 of 77 subgroups. Respiratory diseases (67 measures) and trauma (32 measures) were the largest major groups; asthma (44 measures) and fever (29 measures) were the largest subgroups. Five common pediatric conditions (asthma, fever, urinary tract infection, otitis media, and pneumonia) accounted for 125 distinct measures, more than half of all disease-specific measures. Table 2 presents the distribution of measures across the major groups and subgroups of the Diagnosis Grouping System. Figure 3 summarizes our measurement framework and includes the total number of measures contained within each cell of the matrix.
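
These two distributions can be checked with the same kind of tally; a minimal sketch (Python), using the counts reported above (each dimension partitions the 405 measures):

    # Donabedian and general/disease-specific distributions from the text.
    donabedian_counts = {"process": 271, "outcome": 119, "structure": 15}
    scope_counts = {"general": 124, "cross-cutting": 53,
                    "disease-specific": 228}
    for counts in (donabedian_counts, scope_counts):
        assert sum(counts.values()) == 405
        for label, n in counts.items():
            print(f"{label}: {n} ({100 * n / 405:.0f}%)")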

Table 2.  Distribution of Disease-specific Measures Across Diagnosis Major Groups and Subgroups of the Diagnosis Grouping System15 (n = 228 disease-specific measures). The number of applicable measures is indicated in parentheses; subgroup measures sum to the total for their major group.

Allergic, immunologic, and rheumatologic diseases (4)
Circulatory and cardiovascular diseases (2)
  Devices and complications of the circulatory system (2)
Endocrine, metabolic, and nutritional diseases (5)
  Diabetes mellitus (5)
ENT, dental, and mouth diseases (23)
  Infectious ear disorders (18)
  Infectious mouth and throat disorders (1)
  Infectious nose and sinus disorders, including URI (4)
Fluid and electrolyte disorders (4)
  Dehydration (3)
  Other fluid and electrolyte disorders (1)
Gastrointestinal diseases (19)
  Abdominal pain (4)
  Appendicitis (3)
  Gastroenteritis (11)
  General (1)
Genital and reproductive diseases (1)
  Pregnancy (1)
Hematologic diseases (1)
  Sickle cell anemia (1)
Neurologic diseases (3)
  Infectious neurologic diseases (2)
  Seizures (1)
Other (20)
  Screening exams (6)
  Other devices and complications (12)
  Other neonatal disorders (2)
Psychiatric and behavioral diseases and substance abuse (2)
Respiratory diseases (67)
  Devices and complications of the respiratory system (3)
  Infectious respiratory diseases (19)
  Asthma (44)
  Allergic, immunologic, and rheumatologic diseases (1)
Systemic states (29)
  Fever (29)
Trauma (32)
  Abdominal trauma (1)
  Brain and skull trauma (9)
  Chest trauma (1)
  Complications of trauma (1)
  Fractures and dislocations (8)
  Lacerations, amputations (3)
  Other trauma (8)
  Strains and sprains (1)
Urinary tract diseases (16)
  Infectious urinary tract disease (16)

Figure 3.  Enumeration of performance measures by IOM quality domain (rows), Donabedian structure/process/outcome categories (columns), and general/disease-specific categories. Measures can be classified into more than one IOM domain, but only one Donabedian category and one general or disease-specific category. The total number of measures meeting these criteria is contained within the framework cells. IOM = Institute of Medicine.

The 405 measures came from 643 sources: 410 (64%) from the medical literature, 90 (14%) from organization websites, and 143 (22%) from personal communications with PECARN ED medical directors. The disease-specific measures reported by the most sources were asthma patients discharged with a prescription for steroids (five sources) and central line complication rates (six sources). Among general metrics, overall ED length of stay (16 sources), patient arrival to physician evaluation time (11 sources), the rate of patients leaving without being seen by a physician or without treatment (10 sources), and ED length of stay after a decision to admit (9 sources) had the most reported sources.

Discussion

Recognizing that national efforts have largely ignored the quality of pediatric emergency care, the IOM has recommended the development of national performance measurement standards. This study represents an important early step toward this goal by comprehensively identifying and enumerating performance measures for pediatric emergency care and categorizing them across several dimensions. Its comprehensive, multidimensional review provides useful information for those interested in reporting on and improving the quality of care for pediatric emergency patients.

Performance measures currently reported in the literature predominantly fall into the IOM domain of effectiveness and the Donabedian category of process. More than one-third of all process measures reflect traditional resource utilization and operational management, such as ED length of stay and triage-to-provider time; the remainder are largely disease-specific. Disease-specific performance measures mostly address a few common pediatric conditions, including asthma, fever, urinary tract infection, otitis media, and pneumonia. These conditions reflect the natural frequency of disease, but they do not represent an adequate cross-section of illness severity. Future measure development should consider illness severity, because EDs are expected to provide quality care for the sickest children and must understand and measure their performance in this arena. Given the recent focus on health care reform, enhancing the value of care, the role of the consumer, and eliminating disparities, we believe that measures of efficiency, equity, and patient-centeredness are underrepresented. Important outcome measures that focus on lengthening life, relieving pain, reducing disabilities, and satisfying the consumer are likewise uncommon. The paucity of true ED outcome measures may be related to the difficulty of measuring the contribution of an ED visit to long-term outcomes such as lengthening life and reducing disabilities.

It is notable that only a small proportion of quality measures fell into the Donabedian category of structure, often the easiest category to measure. Although some structural measures, such as the presence of a dedicated pediatric ED coordinator, are very important to assuring quality of care and are supported by evidence,19 most existing measures focus on the ways we deliver care (process) and the health outcomes we produce for our patients. Process measures outnumber outcome measures, perhaps because they are easier to measure and are often within the locus of control of the ED. However, valid process measures should also be linked to improved outcomes. Much work remains to examine and confirm many structure/outcome and process/outcome relationships.

Mandatory public reporting is emerging in some sectors of health care (e.g., The Joint Commission sentinel events), so serious consideration must be given to the future of defining, reporting, and disseminating performance measures for pediatric emergency care. Stelfox et al.20 recently published a systematic review of quality indicators for pediatric trauma care. They found 120 indicators currently in use, most of which measured process and outcomes of prehospital and ED care for injuries. The quality of reporting in most studies was insufficient to support the identification of a core set of quality measures, and there were no measures of rehabilitation care. Another study developed performance measures for 12 distinct conditions, accounting for 23% of presentations to pediatric EDs.11 This underscores the need for general measures relevant to every ED visit to complement disease-specific measures, because EDs should not be evaluated on only a fraction of children for whom they care. Relying solely on condition-specific measures provides an incomplete picture of ED quality of care.

Limitations

We acknowledge that our approach has several limitations. First, although our inclusion criteria for prospective measures were broad, there may be measures not reported by our sources that are applicable to emergency care for children. Second, our classification of this array of measures may be viewed as subjective. For this reason, we chose widely accepted domains cited and used in quality improvement work and assessed consensus among a group of knowledgeable investigators. Third, while we aimed for a comprehensive enumeration and reporting of measures, our goal of retaining measure definitions as originally reported inevitably led to duplication. We felt that the benefits of including measures in their published form outweighed the limitation of potential duplication in the final list. Finally, there is little published information about the validity, reliability, and operational definitions of most of the measures reported here. Such information would help develop a broader consensus about the utility of these measures and reduce this list to a manageable number for use in clinical practice.

Conclusions

To adequately measure quality, a battery of performance measures reflecting all dimensions of emergency care is a key component of advancing improvement in the ED setting. This study was performed to elucidate the current state of performance measurement in pediatric emergency medicine; however, many of our findings are relevant to general emergency care, and our approach could be replicated for adult-focused diseases. We are encouraged that we were able to find a wide range of relevant measures. However, existing measures lack a systematic and comprehensive approach with respect to disease frequency and severity, are heavily process- rather than outcome-oriented, and the most highly referenced measures focus on a single aspect of quality, namely timeliness. The work accomplished, especially in the past decade, is strong, but the state of quality in pediatric emergency care, and the measures and benchmarks associated with it, is still developing and requires further assessment and more complete validation.

Appendix A

Participating centers and site investigators are listed below in alphabetical order: Children’s Hospital of Philadelphia (Alpern, Shaw); Children’s National Medical Center (Chamberlain); Cincinnati Children’s Hospital Medical Center (Alessandrini, Ruddy, Varadarajan); Medical College of Wisconsin/Children’s Hospital of Wisconsin (Gorelick)

PECARN Steering Committee: N. Kuppermann, Chair; E. Alpern, D. Borgialli, J. Callahan, J. Chamberlain, L. Cimpello, P. Dayan, J. M. Dean, M. Gorelick, D. Jaffe, R. Lichenstein, K. Lillis, R. Maio, F. Moler, D. Monroe, L. Nigrovic, E. Powell, R. Ruddy, R. Stanley, M. Tunik, A. Walker

MCHB/EMSC liaisons: D. Kavanaugh, H. Park

Central Data Management and Coordinating Center (CDMCC): M. Dean, R. Holubkov, S. Knight, A. Donaldson, S. Zuspan

Feasibility and Budget Subcommittee (FABS): T. Singh, Chair; S. Goldfarb, E. Kim, S. Krug, D. Monroe, D. Nelson, H. Rincon, S. Zuspan

Grants and Publications Subcommittee (GAPS): M. Gorelick, Chair; E. Alpern, D. Borgialli, L. Cimpello, A. Donaldson, G. Foltin, S. Knight, F. Moler, L. Nigrovic, S. Teach

Protocol Concept Review and Development Subcommittee (PCRADS): D. Jaffe, Chair; K. Brown, J. Chamberlain, P. Dayan, M. Dean, R. Holubkov, P. Mahajan, R. Maio, K. Shaw, M. Tunik

Quality Assurance Subcommittee (QAS): R. Stanley, Chair; E. Alessandrini, R. Enriquez, R. Gerard, R. Lichenstein, K. Lillis, M. Pusic, R. Ruddy, A. Walker

Safety and Regulatory Affairs Subcommittee (SRAS): W. Schalick, J. Callahan, Co-Chairs; S. Atabaki, J. Burr, K. Call, J. Hoyle, E. Powell, R. Ruddy

References

1. Institute of Medicine. The Future of Emergency Care in the United States Health System. Washington, DC: National Academies Press, 2006.
2. Lindsay P, Schull M, Bronskill S, Anderson G. The development of indicators to measure the quality of clinical care in emergency departments following a modified-Delphi approach. Acad Emerg Med. 2002; 9:1131–9.
3. Gausche M, Rutherford M, Lewis RL. Emergency department quality assurance/improvement practices for the pediatric patient. Ann Emerg Med. 1995; 25:804–8.
4. Mangione-Smith R, McGlynn EA. Assessing the quality of healthcare provided to children. Health Serv Res. 1998; 33:1059–90.
5. Kyriacou DN, Ricketts V, Dyne PL, McCollough MD, Talan DA. A 5-year time study analysis of emergency department patient care efficiency. Ann Emerg Med. 1999; 34:326–35.
6. Moody-Williams JD, Dawson D, Miller DR, Schafermeyer RW, Wright J, Athey J. Quality and accountability: children’s emergency services in a managed care environment. Ann Emerg Med. 1999; 34:753–60.
7. DePiero AD, Ochsenschlager DW, Chamberlain JM. Analysis of pediatric hospitalizations after emergency department release as a quality improvement tool. Ann Emerg Med. 2002; 39:159–63.
8. Welch S, Augustine J, Camargo CA Jr, Reese C. Emergency department performance measures and benchmarking summit. Acad Emerg Med. 2006; 13:1074–80.
9. Hung GR, Chalut D. A consensus established set of important indicators of pediatric emergency department performance. Pediatr Emerg Care. 2008; 24:9–15.
10. Nawar EW, Niska RW, Xu J. National Hospital Ambulatory Medical Care Survey: 2005 emergency department summary. Adv Data. 2007; 386:1–32.
11. Guttmann A, Razzaq A, Lindsay P, Zagorski B, Anderson GM. Development of measures of the quality of emergency department care for children using a structured panel process. Pediatrics. 2006; 118:114–23.
12. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press, 2001.
13. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966; 44:166–203.
14. Donabedian A. The quality of care. How can it be assessed? Arch Pathol Lab Med. 1997; 121:1145–50.
15. Alessandrini EA, Alpern ER, Chamberlain JM, Shea JA, Gorelick MH. A new diagnosis grouping system for child emergency department visits. Acad Emerg Med. 2010; 17:204–13.
16. Alpern ER, Stanley RM, Gorelick MH, et al. Epidemiology of a pediatric emergency medicine research network: the Pediatric Emergency Care Applied Research Network core data project. Pediatr Emerg Care. 2006; 22:689–99.
17. The Pediatric Emergency Care Applied Research Network (PECARN). Rationale, development, and first steps. Pediatr Emerg Care. 2003; 19:185–93.
18. The Pediatric Emergency Care Applied Research Network (PECARN). Rationale, development, and first steps. Acad Emerg Med. 2003; 10:661–8.
19. American Academy of Pediatrics Committee on Pediatric Emergency Medicine, American College of Emergency Physicians Pediatric Committee, and Emergency Nurses Association Pediatric Committee. Joint policy statement: guidelines for care of children in the emergency department. Pediatrics. 2009; 124:1233–43.
20. Stelfox HT, Bobranska-Artiuch B, Nathens A, Straus SE. A systematic review of quality indicators for evaluating pediatric trauma care. Crit Care Med. 2010; 38:1187–96.

Supporting Information

Data Supplement S1. Performance measures relevant to pediatric emergency care.

ACEM_1057_sm_DataSupplementS1.pdf (PDF, 306 KB): supporting information.
