Emergency Department Information System Adoption in the United States

Authors

  • Adam B. Landman MD, MS, MIS
  • Steven L. Bernstein MD
  • Allen L. Hsiao MD
  • Rani A. Desai PhD, MPH

From the Robert Wood Johnson Foundation Clinical Scholars Program (ABL, RAD), Department of Emergency Medicine (ABL, SLB), Department of Pediatric Emergency Medicine (ALH), and School of Public Health (RAD), Yale University, New Haven, CT; and the U.S. Department of Veterans Affairs (ABL, RAD), West Haven, CT.

  • Presented as an abstract at the American College of Emergency Physicians (ACEP) Scientific Assembly, Boston, MA, October 6, 2009; and the Connecticut College of Emergency Physicians (CCEP) Scientific Assembly, Rocky Hill, CT, November 4, 2009.

  • Dr. Landman is a Robert Wood Johnson Foundation Clinical Scholar at Yale University, supported by the U.S. Department of Veterans Affairs and the Robert Wood Johnson Foundation.

  • None of the authors have conflicts of interest to report.

  • A related commentary appears on page 524.

Address for correspondence and reprints: Adam Landman, MD, MS, MIS; e-mail: adam.landman@yale.edu

Abstract

Objectives:  The American Recovery and Reinvestment Act of 2009 incentivizes adoption of health care information technology (HIT) based on support for specific standards, policies, and features. Limited data have been published on national emergency department information systems (EDIS) adoption, and to our knowledge, no prior studies have considered functionality measures. This study determined current national estimates of EDIS adoption using both single-response rates of EDIS adoption and a novel feature-based definition and also identified emergency department (ED) characteristics associated with EDIS use.

Methods:  The 2006 National Hospital Ambulatory Medical Care Survey, a nationally representative sample of ED visits that also surveyed participating EDs on EDIS, was used to estimate EDIS adoption. EDIS adoption rates were calculated using two definitions: 1) single-response—response to a single survey question as to whether the EDIS was complete, partial, or none; and 2) feature-based—based on the reported features supported by the EDIS, systems were categorized as fully functional, basic, other, or none. The relationship of EDIS adoption to specific ED characteristics such as facility type and location was also examined.

Results:  Using the single-response classification, 16.1% of EDs had a complete EDIS, while 30.4% had a partial EDIS, and 53.5% had none. In contrast, using a feature-based categorization, 1.7% of EDs had a fully functional EDIS, 12.3% had basic, 32.1% had other, and 53.9% had none. In multivariable analysis, urban EDs were significantly more likely to have a fully functional or basic EDIS than were rural EDs. Pediatric EDs were significantly more likely than general EDs to have other EDIS.

Conclusions:  Despite more optimistic single-response estimates, fewer than 2% of our nation’s EDs have a fully functional EDIS. EDs in urban areas and those specializing in the care of pediatric patients are more likely to support EDIS. Accurate and consistent EDIS adoption estimates are dependent on whether there are standardized EDIS definitions and classifications of features. To realize the potential value of EDIS for improved emergency care, we need to better understand the extent and correlates of the diffusion of this technology and increase emergency medicine engagement in national HIT policy-making.

Academic Emergency Medicine 2010; 17:536–544 © 2010 by the Society for Academic Emergency Medicine

Health information technology (HIT) has the potential to improve health care.1,2 The American Recovery and Reinvestment Act (ARRA) of 2009 prioritizes and incentivizes the development of a national, interoperable health information system.3,4 HIT may be particularly beneficial in the emergency department (ED), where clinicians continuously care for new patients, many with complicated medical histories or gaps in the information available about them.5 Recent studies have shown only 17% of physicians and 10% of hospitals have basic electronic medical records (EMRs); however, limited data have been published on national emergency department information system (EDIS) adoption.6–12

Emergency department information systems may help provide ED clinicians with accurate and complete patient histories, automate patient flow, provide physician computerized order entry, and enable sophisticated clinical decision support systems.13–16 In their 2007 report, Hospital-based Emergency Care: At the Breaking Point, the Institute of Medicine recommended that “hospitals adopt robust information and communications systems to improve the safety and quality of emergency care and enhance hospital efficiency.”17 To realize the potential gains in efficiency, cost savings, and improved quality of care with EDIS, widespread adoption of EDIS is required. Accurate EDIS adoption measurement is needed to understand our emergency care system’s current EDIS capacity and future needs, as a baseline to evaluate ARRA programs, and as a possible future measure of quality and performance-related payments.9

In 2002, the National Hospital Ambulatory Medical Care Survey (NHAMCS) estimated that 31% of U.S. EDs had some kind of EDIS.6 However, this estimate was not sensitive to the specific features supported in the systems. Because the ARRA will base incentive payments for HIT adoption on “meaningful use” of HIT, such as support for specific standards, policies, and features, it is important to understand the features supported by EDIS.3 In this article we provide current and more precise national estimates of EDIS adoption using both single-response rates of EDIS adoption and a novel feature-based definition. We also determine ED characteristics associated with feature-based EDIS use.

Methods

Study Design

This was a secondary analysis of the NHAMCS, a national sample of visits to EDs, conducted by the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention (CDC). The Yale Human Investigation Committee exempted this study from review because data are de-identified and publicly available.

Study Setting and Population

We combined patient-level data with ED-level data on EDIS use obtained during hospital interviews. Cross-sectional survey data were analyzed from the 2006 NHAMCS, the most recent available. The NHAMCS used a four-stage probability sampling strategy to identify a nationally representative sample of U.S. EDs located in the 50 states and District of Columbia, excluding federal, military, and Veterans Administration hospitals.18

Study Protocol

During randomly assigned 4-week periods, ED staff abstracted patient visit data to a standardized NHAMCS patient record form. Data were then processed and coded by the Constella Group (Durham, NC). The NCHS calculated sampling weights that can be used to produce unbiased, national estimates.

Prior to participating in NHAMCS, an introductory letter was sent and a screening telephone call was made to each hospital’s administrator to verify eligibility. If the hospital agreed to participate, an in-person induction meeting was arranged, where NHAMCS interviewers further verified eligibility, explained the survey, and collected basic hospital information. EDIS usage data were collected during these induction interviews.6 Because the hospital administrator was the initial contact, he or she was given the option of completing the survey or delegating it to one or more other persons. NHAMCS recorded the position of the hospital administrator contact, but not of other individuals who may have responded to the EDIS questions, and this information was not publicly available. The administrator and survey respondents did not have access to the questions ahead of time (personal communication with Esther S. Hing from the CDC NCHS to clarify the methodology for EDIS data collection, 2009).

Survey respondents were initially asked whether their ED used EMRs. If they responded yes, they were asked whether their EDIS supported a set of features, including demographics, computerized physician order entry, laboratory and imaging, and clinical documentation. Table 1 lists all EDIS features surveyed in NHAMCS. Hospitals that did not respond or responded unknown to the EDIS use question were excluded from the analysis. In addition, a single ED had missing data for length of visit and was therefore excluded from unadjusted length of visit statistics as well as the adjusted analysis.

Table 1. 
Feature-based Classification of EDIS and Support for Features
Feature | Fully Functional EDIS | Basic EDIS | Systems With Feature* | Weighted %† | Missing‡

General
 Patient demographics§ | X | X | 212 | 95.20 | 3
 Medication order entry§ | X | X | 126 | 49.02 | 8
 Test order entry§ | X |  | 181 | 82.17 | 10
 Laboratory results§ | X | X | 203 | 89.29 | 12
 Imaging results | X | X | 173 | 73.37 | 24
 Clinical notes | X | X | 147 | 63.78 | 16
 Public health reports§ |  |  | 71 | 25.42 | 48
Interoperability
 Electronic transmission to pharmacy | X |  | 38 | 15.22 | 20
 Electronic transmission of test orders | X |  | 141 | 60.34 | 27
 Direct access to electronic images | X |  | 111 | 42.09 | 40
 Medical history and follow-up notes included | X |  | 120 | 49.67 | 15
 Electronic transmission of public health notifications§ |  |  | 35 | 10.29 | 16
Decision support
 Warnings for medication interactions/contraindications§ | X |  | 78 | 32.96 | 24
 Abnormal laboratory results highlighted | X |  | 149 | 66.20 | 39
 Reminders for guideline-based interventions | X |  | 69 | 31.84 | 30

EDIS = emergency department information system.
*Number of systems supporting feature (unweighted).
†Number of EDIS with feature out of total EDIS, calculated using weighted, national estimates.
‡Missing data are assumed to not support that feature.
§Proposed features for “meaningful use” of electronic health record by 2011.25

Measures

Prior estimates of EDIS adoption using NHAMCS were based on the hospital representative’s response to the question: “Does your ED use electronic medical records (not including billing records)?”6,19 Respondents classified EDIS as 1) complete, if they used all electronic systems; 2) partial, if they used part electronic and part paper; or 3) none, if they did not use an electronic system. To compare our results with prior estimates and illustrate the limitations of this measurement, we also calculated single-response EDIS prevalence, classifying each ED as complete, partial, or none based on its response to this survey question.

Health information technology adoption estimates based on the response to a single question are limited because they do not consider specific functionality of the systems. As a refinement of the response to a single survey question, we classified EDIS systems as: 1) fully functional, 2) basic, 3) other, or 4) none based on the features supported (Table 1). (Italics are used here to differentiate the feature-based system from the single-question system.) EDIS were classified as other if they had one or more features but did not meet the criteria for basic or fully functional systems indicated below. Systems with no EDIS features were classified as not having an EDIS (none).

Emergency department information systems have been defined broadly as “electronic health record systems designed specifically to manage data and workflow in support of ED patient care and operations.”20 A detailed functional profile for EDIS outlines hundreds of essential functions of an EDIS, including registration, patient tracking, clinical workflow, orders, clinical documentation, discharge management, and administrative support.21 However, no standardized definitions or required features have been established for EDIS. Given the lack of current consensus, we created the feature-based classification above based on our own experience, combined with a literature review that identified important features of EDIS from perspective articles, white papers, other EMR adoption studies, and draft guidelines for “meaningful use” of HIT.11,12,15,21–25 We were ultimately limited by the features assessed in NHAMCS. As described in Figure 1, fully functional EDIS systems included all features surveyed in NHAMCS with the exception of public health reporting, because this feature was supported by a small percentage of EDIS in this survey (Table 1). Basic EDIS systems included five core features of EDIS: patient demographics, medication order entry, laboratory results, imaging results, and clinical notes. Our feature-based classifications are similar to feature sets that other HIT researchers have determined are important in basic and comprehensive EMR systems in outpatient and hospital settings.11,12,24
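The classification rules above can be sketched as simple set logic. This is an illustrative sketch only: the feature keys are invented stand-ins for the NHAMCS survey items, and only the grouping logic (basic as a five-feature subset, fully functional as all surveyed features except public health reporting) follows the text.

```python
# Hypothetical feature keys standing in for the NHAMCS survey items.
# The five core features that define a *basic* EDIS.
BASIC_FEATURES = {
    "patient_demographics",
    "medication_order_entry",
    "laboratory_results",
    "imaging_results",
    "clinical_notes",
}

# All 15 surveyed features.
ALL_SURVEYED = BASIC_FEATURES | {
    "test_order_entry",
    "public_health_reports",
    "rx_transmission_to_pharmacy",
    "test_order_transmission",
    "direct_image_access",
    "history_and_followup_notes",
    "public_health_transmission",
    "medication_interaction_warnings",
    "abnormal_lab_highlighting",
    "guideline_reminders",
}

# *Fully functional* excludes the two public health reporting items.
FULLY_FUNCTIONAL_FEATURES = ALL_SURVEYED - {
    "public_health_reports",
    "public_health_transmission",
}

def classify_edis(supported):
    """Classify one ED's system from the set of features it supports.

    Missing responses are assumed not to support the feature, as in
    the paper's methods."""
    supported = set(supported)
    if not supported:
        return "none"
    if FULLY_FUNCTIONAL_FEATURES <= supported:
        return "fully functional"
    if BASIC_FEATURES <= supported:
        return "basic"
    return "other"
```

Because the checks run from most to least restrictive, a system with one or more features that misses either threshold falls through to other, matching the paper's definition.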

Figure 1.

 EDIS adoption estimates based on single-response and feature-based EDIS classifications. EDIS = emergency department information system.

Independent Variables.  We also explored the influence of ED characteristics on adoption of EDIS. These characteristics included ownership type, geographic region, urban location, teaching status, pediatric ED, patient race and ethnicity, payment type, immediacy of care, percentage of patients admitted, and average length of visit.

The NHAMCS includes data on ownership type, geographic region, and location in a metropolitan statistical area (MSA) at the ED level. All other variables were aggregated from patient-level visit data to characterize EDs.26 For instance, the length of visit for each patient in a particular ED was averaged to produce the mean length of visit for each ED. Continuous variables at the patient level remain continuous variables at the ED level, but reflect the average value across all patients at that ED. Categorical variables at the patient level were transformed to multiple continuous variables at the ED level and reflect the percentage of patients with that characteristic.
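The aggregation described above can be sketched as follows; the field names (`ed_id`, `length_of_visit`, `payment_type`) are hypothetical placeholders for the NHAMCS visit variables, and the logic simply illustrates the continuous-mean and categorical-percentage rollup.

```python
from collections import defaultdict

def aggregate_to_ed_level(visits):
    """Roll patient-level visit records up to ED-level characteristics.

    visits: iterable of dicts with hypothetical keys 'ed_id',
    'length_of_visit' (minutes), and 'payment_type' (category).
    Continuous variables become per-ED means; categorical variables
    become the percentage of that ED's visits in each category."""
    by_ed = defaultdict(list)
    for v in visits:
        by_ed[v["ed_id"]].append(v)

    eds = {}
    for ed_id, vs in by_ed.items():
        n = len(vs)
        # Continuous: average across all sampled visits at this ED.
        mean_lov = sum(v["length_of_visit"] for v in vs) / n
        # Categorical: one percentage per observed category.
        pct = defaultdict(float)
        for v in vs:
            pct[v["payment_type"]] += 100.0 / n
        eds[ed_id] = {"mean_length_of_visit": mean_lov,
                      "payment_pct": dict(pct)}
    return eds
```

For example, an ED with two sampled visits, one private pay and one Medicare, would be characterized as 50% private pay and 50% Medicare, with the mean of the two visit lengths.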

Because NHAMCS does not make teaching status or pediatric ED status publicly available, we classified EDs as teaching hospitals if more than 10% of patients were seen by an intern or resident. Similarly, EDs were classified as pediatric EDs if the average patient age was less than 18 years. EDs in an MSA were considered urban, while those not in an MSA were classified as rural.
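The proxy definitions above reduce to three threshold checks, sketched below with hypothetical input summaries for a single ED (the function and argument names are not from NHAMCS):

```python
def derive_ed_flags(pct_resident_visits, mean_patient_age, in_msa):
    """Apply the paper's proxy definitions for ED type."""
    return {
        # Teaching: >10% of visits seen by an intern or resident.
        "teaching": pct_resident_visits > 10.0,
        # Pediatric: average patient age under 18 years.
        "pediatric": mean_patient_age < 18.0,
        # Urban vs. rural by metropolitan statistical area (MSA) status.
        "urban": bool(in_msa),
    }
```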

Data Analysis

We determined adoption rates of EDIS in 2006 using the single-response EDIS classification of complete, partial, or none. We then looked more closely at the features supported by single-response system type. We calculated the number and percentage of EDIS supporting each feature as well as the total number of features (range 0 to 15) supported by complete and partial systems. The mean numbers of features in complete and partial systems were compared using a t-test. Missing responses for specific features were assumed not to support that feature.

We subsequently calculated the prevalence of EDIS adoption using our feature-based classification of fully functional, basic, other, or none. We compared categorization differences between the single-response and feature-based EDIS classifications using a matrix.

We then performed an exploratory analysis of the association between feature-based EDIS adoption and ED characteristics using both bivariate and multivariable analyses. Since there were only ten EDs with fully functional EDIS, we combined the fully functional and basic categories, resulting in a three-level feature-based classification for these analyses: 1) fully functional/basic; 2) other; or 3) none (no EDIS).

We determined bivariate (unadjusted) relationships of feature-based system adoption with ED characteristics, including ownership, geographic region, urban location, teaching hospital, pediatric ED, race/ethnicity, payment type, immediacy of care, admitted patients, and length of visit. EDs with fully functional/basic systems were compared to EDs with other and no EDIS. For these unadjusted analyses comparing three groups, we used chi-square tests for categorical variables and analysis of variance (ANOVA) for continuous variables.

We subsequently built a multinomial regression model to evaluate the association between adoption of EDIS system type and ED characteristics. A direct regression approach was taken, including all independent variables in the model. Two comparisons were made in this multinomial analysis: 1) EDs with fully functional/basic EDIS were compared to EDs with no EDIS; and 2) EDs with other EDIS were compared to EDs with no EDIS. Odds ratios (ORs) and associated 95% confidence intervals (CIs) were calculated.

Data management was performed using SAS version 9.1 (SAS Institute, Cary, NC). To account for the complex survey sampling methodology and national estimation weights, all statistical analyses used SUDAAN 9.03 (Research Triangle Institute, Research Triangle Park, NC). Sample sizes and counts are presented unweighted, while percentages represent weighted national estimates. Type I error rate was set at 0.05 for all analyses.
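The distinction between unweighted counts and weighted national percentages can be illustrated with a minimal sketch; the weights below are invented, standing in for the NHAMCS sampling weights (the number of U.S. EDs each sampled ED represents).

```python
def weighted_percentage(records, predicate):
    """Percent of the weighted national total satisfying predicate."""
    total = sum(r["weight"] for r in records)
    hits = sum(r["weight"] for r in records if predicate(r))
    return 100.0 * hits / total

# Hypothetical sample: two of four sampled EDs have an EDIS (50%
# unweighted), but they represent few national EDs, so the weighted
# national adoption estimate is far lower.
sample = [
    {"weight": 5.0, "has_edis": True},
    {"weight": 5.0, "has_edis": True},
    {"weight": 40.0, "has_edis": False},
    {"weight": 50.0, "has_edis": False},
]
rate = weighted_percentage(sample, lambda r: r["has_edis"])  # 10.0
```

This is why the paper reports raw sample sizes unweighted while all percentages are weighted national estimates (full variance estimation additionally requires the survey design, hence SUDAAN).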

Results

In the 2006 NHAMCS, 364 EDs participated in the study, representing 4,654 EDs nationwide. The single ED that did not respond to the EDIS use question, along with the seven EDs that responded unknown to the EDIS use question, were excluded from this analysis, yielding a total sample of 356 EDs representing 4,622 U.S. EDs. A single ED did not provide length of visit data and was therefore excluded from calculations involving length of visit.

Single-response EDIS Adoption

Using the single-response EDIS classification, 62 of the EDs (16.1%) had a complete EDIS, 160 had a partial EDIS (30.4%), and 134 EDs had no EDIS (53.5%; Figure 1).

Overall, more than 80% of EDIS supported patient demographics, test order entry, and laboratory results (Table 1). Interoperable electronic transmission was not well supported, with fewer than half of the systems able to share images or history and follow-up notes or transmit prescriptions to pharmacies. The majority of systems highlighted abnormal laboratory results, but few systems supported more advanced decision support, such as medication interaction or contraindication warnings or guideline-based reminders.

The limitations of a single-response EDIS classification become apparent when the number of features supported by each EDIS is plotted by system type (Figure 2). Possible misclassifications include one complete system with zero features and four partial systems with the maximum of 15 features. On average, complete systems had 9.05 (95% CI = 8.11 to 9.99) features, and partial systems had 7.24 (95% CI = 6.55 to 7.94) features (t = 3.1, p = 0.0026).

Figure 2.

 Number of features supported by single-response EDIS classification (complete or partial EDIS). Mean features (complete) = 9.05 (95% CI = 8.11 to 9.99). Mean features (partial) = 7.24 (95% CI = 6.55 to 7.94). EDIS = emergency department information system.

Feature-based EDIS Adoption

Using the feature-based EDIS classification, 10 of the EDs (1.7%) had a fully functional EDIS, 69 had a basic EDIS (12.3%), 140 EDs had an other EDIS (32.1%), and 137 had none (53.9%) (Figure 1).

The matrix in Table 2 compares the categorization of EDIS systems by single-response and feature-based classifications. Only four (2.6%) of the 62 single-response complete systems were found to be fully functional systems. The majority of complete systems did not meet the feature requirements for fully functional systems and were classified as basic (46%) or other (51.2%) systems. Similarly, 16% of single-response partial systems met the feature requirements for basic EDIS, while the majority (78%) did not meet these requirements and were classified as other EDIS. Two systems that were classified as partial and one system that was classified as complete had no features and were therefore reclassified as none.

Table 2. 
Comparison of Single-response and Feature-based EDIS Classifications
Single-response Classification | No EDIS (n = 137) | Other (n = 140) | Basic (n = 69) | Fully Functional (n = 10)
None (n = 134) | 134 (100%) | 0 (0%) | 0 (0%) | 0 (0%)
Partial (n = 160) | 2 (1.2%) | 116 (78.5%) | 36 (16%) | 6 (4.3%)
Complete (n = 62) | 1 (0.3%) | 24 (51.2%) | 33 (46%) | 4 (2.6%)

Numbers are based on unweighted sample size; % is row percentage calculated using weighted national estimates.
EDIS = emergency department information system.

ED Characteristics Associated With EDIS Adoption

We identified differences in EDIS adoption based on ED ownership type, urban location, pediatric ED, and percentage of private pay and Medicare or Medicaid payers (Table 3). Nonprofit, urban, and pediatric EDs, as well as EDs with a higher percentage of private pay and lower percentage of Medicare or Medicaid patients, were significantly associated with adoption of fully functional or basic EDIS. EDIS system adoption did not differ based on the ED’s geographic region, teaching status, percentage of black or African American patients, percentage of Hispanic patients, percentage of admitted patients, immediacy of care, or percentage of patients with other insurance or no insurance. However, after adjustment in multivariable analyses, ED ownership type and percentage of private pay and Medicare/Medicaid payers were no longer statistically significant (Table 4). Fully functional/basic EDIS were more prevalent in urban EDs than rural EDs. Other EDIS were more prevalent in pediatric EDs than general EDs.

Table 3. 
Unadjusted Analysis of Feature-based EDIS Classification by ED Characteristics

Bivariate Analysis | Fully Functional/Basic (n = 79), % (SE) | Other EDIS (n = 140), % (SE) | No EDIS (n = 137), % (SE) | p-value*

Ownership |  |  |  | 0.048
 Voluntary, nonprofit | 17 (3.2) | 35 (5.1) | 48 (5.6) |
 Government, nonfederal | 9 (4.6) | 18 (5.7) | 73 (7.4) |
 Proprietary | 7 (3.8) | 40 (12.1) | 52 (12) |
Geographic region |  |  |  | 0.077
 Northeast | 21 (5.3) | 40 (7.6) | 39 (6.4) |
 Midwest | 7 (3.6) | 29 (9) | 63 (9.8) |
 South | 12 (3.9) | 27 (5.7) | 61 (6.7) |
 West | 23 (7.2) | 41 (10.1) | 36 (10.6) |
MSA |  |  |  | 0.0017
 Urban | 20 (3.3) | 37 (4.4) | 43 (4.3) |
 Rural | 2 (1.8) | 23 (7.5) | 75 (7.7) |
Teaching hospital |  |  |  | 0.32
 Yes | 22 (5.9) | 35 (8.9) | 43 (10.7) |
 No | 13 (2.7) | 32 (4.4) | 55 (5.1) |
Pediatric ED |  |  |  | 0.022
 Yes | 29 (10.9) | 54 (11.8) | 18 (9.4) |
 No | 14 (2.5) | 32 (4.4) | 55 (5.1) |
Ethnicity/race†
 Hispanic | 13 (0.02) | 12 (0.02) | 9 (0.02) | 0.27
 Black or African American | 19 (0.03) | 17 (0.02) | 18 (0.03) | 0.88
Payment type†
 Missing | 4 (0.01) | 4 (0.01) | 4 (0.01) | 0.91
 Private pay | 40 (0.02) | 35 (0.02) | 32 (0.01) | 0.01
 Medicare/Medicaid | 35 (0.02) | 42 (0.01) | 44 (0.02) | 0.0013
 Other pay | 4 (0.01) | 4 (0.01) | 4 (0.01) | 0.85
 No insurance | 17 (0.02) | 16 (0.01) | 15 (0.01) | 0.75
Immediacy of care†
 Missing | 15 (0.05) | 15 (0.04) | 20 (0.05) | 0.74
 <15 min | 16 (0.03) | 16 (0.02) | 17 (0.02) | 0.96
 15 min–59 min | 33 (0.03) | 35 (0.03) | 35 (0.03) | 0.85
 60 min–2 hr | 25 (0.04) | 21 (0.03) | 17 (0.02) | 0.22
 >2 hr | 10 (0.02) | 13 (0.03) | 11 (0.02) | 0.81
Admitted patients† | 14 (0.01) | 14 (0.01) | 11 (0.01) | 0.26
Average length of visit (min)† | 183.34 (8.2) | 179.23 (7.7)‡ | 150.34 (14) | 0.11

ANOVA = analysis of variance; EDIS = emergency department information system; MSA = metropolitan statistical area.
*p-values are reported for chi-square test and ANOVA, respectively.
†Mean % (SE): patient-level variables averaged to ED level and then averaged across feature-based EDIS classification. ANOVA used to test for differences among these continuous variables.
‡n = 139 for this cell. Missing one observation for average length of visit for an other EDIS.
Table 4. 
Multinomial Logistic Regression With Two Comparisons: 1) Fully Functional/Basic EDIS Compared With No EDIS; and 2) Other EDIS Compared With No EDIS
Multinomial Logistic Regression | Fully Functional/Basic (n = 79), OR (95% CI) | Other EDIS (n = 139)*, OR (95% CI)

Ownership
 Voluntary, nonprofit | 3.98 (0.79–19.96) | 1.04 (0.29–3.78)
 Government, nonfederal | 1.76 (0.24–13.07) | 0.45 (0.11–1.93)
 Proprietary | 1 (Ref) | 1 (Ref)
Geographic region
 Northeast | 0.78 (0.26–2.35) | 0.69 (0.19–2.59)
 Midwest | 0.39 (0.09–1.75) | 0.49 (0.11–2.12)
 South | 0.57 (0.16–2.02) | 0.36 (0.10–1.33)
 West | 1 (Ref) | 1 (Ref)
MSA
 Urban | 9.84 (1.15–83.85)† | 1.8 (0.62–5.20)
 Rural | 1 (Ref) | 1 (Ref)
Teaching hospital
 Yes | 1.44 (0.44–4.71) | 0.64 (0.24–1.73)
 No | 1 (Ref) | 1 (Ref)
Pediatric ED
 Yes | 4.32 (0.62–30.34) | 6.61 (1.59–27.52)†
 No | 1 (Ref) | 1 (Ref)
Ethnicity/race
 Hispanic | 3.43 (0.12–99.39) | 1.06 (0.11–10.25)
 Black or African American | 0.77 (0.14–4.16) | 0.66 (0.13–3.20)
Payment type
 Missing | 0.16 (0–12.42) | 0.12 (0–6.63)
 Private pay | 2.7 (0.03–255.18) | 0.23 (0–13.8)
 Medicare/Medicaid | 0.02 (0–3.15) | 0.13 (0–7.74)
 Other pay | 0.46 (0–279.23) | 0.03 (0–48.21)
 No insurance | 1 (Ref) | 1 (Ref)
Immediacy of care
 Missing | 1.29 (0.12–13.94) | 0.46 (0.03–6.74)
 <15 min | 1.87 (0.10–34.99) | 0.67 (0.04–11.08)
 15 min–59 min | 0.77 (0.07–8.34) | 0.66 (0.06–6.97)
 60 min–2 hr | 3.49 (0.13–93) | 0.62 (0.04–10.69)
 >2 hr | 1 (Ref) | 1 (Ref)
Admitted patients | 5.93 (0.04–997.26) | 9.73 (0.10–919.67)
Average length of visit (min) | 1 (0.99–1.01) | 1 (1.00–1.01)

EDIS = emergency department information system; MSA = metropolitan statistical area.
*Missing one observation for average length of visit for an other EDIS. Total n = 355 for multinomial regression.
†Statistically significant at α = 0.05.

Discussion

To the best of our knowledge, this is the first study to examine national rates of EDIS adoption using functionality measures and the first to examine correlates of EDIS adoption. We found a great deal of disagreement between rates of EDIS adoption using the two measures, with significantly fewer fully functional or even basic EDIS systems than the single-response adoption question would suggest. In addition, we found that EDs in urban areas were most likely to have a fully functional or basic EDIS, and pediatric EDs were more likely to have other EDIS.

When using single-response EDIS estimates, 46.1% of U.S. EDs had either a complete or a partial EDIS system in 2006. This seemingly high market penetration increased from 31% in 2002.6 However, these estimates were based on survey respondents’ interpretation of complete or partial EDIS, with no standard definitions of these system types. Misclassifications of systems that had no features as partial or complete systems suggest respondents were unable to make clear distinctions using a single-response question for EDIS system definitions.

When specific features of EDIS were analyzed, most were found to support patient demographics, test order entry, and viewing of laboratory results. High support for laboratory results and imaging results was also found in a recent Massachusetts ED survey.27 As support for the general features of EDIS improves, additional systems will begin to support interoperability between departments and outside health care providers, as well as advanced decision support.

The 2009 H1N1 influenza pandemic has led to renewed attention to ED electronic public health reporting, which is also a proposed requirement for 2011 meaningful use of HIT. In this study, we found that 25% of EDIS support public health reporting, and 10% of EDIS support electronic reporting and transmission of public health reports. A recent survey of state epidemiologists reported that 33/41 states (80%) had at least one syndromic surveillance system; however, the study did not report on the representativeness or sampling strategies of these systems within the states or whether the systems were entirely electronic or required manual data entry.28 Some states, like North Carolina, have a very comprehensive system of public health surveillance, collecting data from 93% of their EDs electronically.29 Our results suggest there may be heterogeneity in support for electronic public health reporting within states at the individual ED level. Future versions of NHAMCS have the potential to add even more to our understanding of ED public health reporting if survey questions are added on 1) type of public health data reported (ranging from dog bites to influenza-like illness surveillance); 2) mechanism of data reporting (fax, Web-based data entry, direct electronic transmission); and 3) frequency of data reporting.

After reclassifying systems based on the features they supported, we found only 1.7% of EDs had fully functional systems and another 12.3% had basic systems. These adoption rates are comparable to a recent national survey showing that 17% of physicians had an EMR available, but only 4% had a fully functional EMR.11 A recent survey of EDs suggested barriers to adoption and implementation of EDIS include expense, difficulty of use, lack of staff acceptance, and fear of investing in equipment that becomes outdated rapidly.27 There may also be debate over who should pay for EDIS adoption and implementation: the hospital or the emergency physician group, which are often different entities. Fully integrated EDIS may be hindered by slow support for interoperability standards and high cost for developing interfaces for the exchange of information between other hospital information systems (e.g., laboratory, imaging, pharmacy) and EDIS. EDIS implementation may be further complicated because EDIS may be implemented as a stand-alone system or part of a larger systemwide electronic health record.

EDIS adoption estimates are dependent on the definitions used. This study is a step forward in obtaining more precise estimates of national EDIS adoption, beyond single-question estimates. The results cast doubt on prior estimates of widespread EDIS adoption and suggest that the majority of EDIS systems in use do not have fully functional capabilities. Although likely to represent a more accurate estimate of EDIS adoption, our feature-based EDIS classification was limited to the features assessed in NHAMCS. Additional features are important in an EDIS and should be considered in future assessments of EDIS functionality, including patient tracking, integration with internal and external hospital and outpatient EMRs, integration of emergency medical services reports, and an electrocardiogram (ECG) repository.

Researchers studying adoption of outpatient and hospital EMR systems have already developed standard system definitions and survey questions using modified Delphi processes with a wide range of experts.11 A similar process could be used to develop standard EDIS definitions, including specific feature sets to measure the unique features of EDIS through a multidisciplinary expert panel of emergency physicians, nurses, staff, informaticians, technologists, and industry representatives. Survey instruments, like NHAMCS, could then be redesigned and customized to more accurately measure EDIS adoption so that adoption rates can be tracked over time. Furthermore, consensus on important features of EDIS would inform vendor product development as well as national HIT policy.

A recent national study found that hospital EMR adoption was more likely in major teaching hospitals, hospitals that are part of larger health systems, and urban areas.12 In multivariable analysis, we found that the most significant factor associated with fully functional or basic EDIS adoption was location in an urban area. Although the analysis was performed on a relatively small sample of EDs, and at the risk of overinterpretation, the multivariable model suggests that hospital location explains the unadjusted associations between EDIS adoption and ED ownership and payment type, because these associations become nonsignificant after adjustment. Pediatric EDs remain significantly associated with increased prevalence of other EDIS after adjustment for ownership type, geographic location, teaching status, and payer mix. This association needs additional investigation, but may be related to children’s hospitals often having large endowments enabling investment in quality tools like an EDIS.

It is interesting that ED ownership, geographic region, teaching status, racial/ethnic patient composition, payer mix, patient acuity, admission rate, and average length of visit were not associated with EDIS adoption. Because most hospitals in this country currently have limited adoption of EDIS, there may be few characteristics that can predict EDIS adoption. ED characteristics associated with EDIS adoption should be reassessed in the future. High adopters could be studied to identify best practices, while additional resources could be provided for low adopters.30

Policy Implications

In response to the 2009 ARRA, the Office of the National Coordinator for Health Information Technology (ONCHIT) released draft guidelines for “meaningful use” of electronic health records in June and August 2009.23,25 This draft guidance provides details on what features electronic health record systems must possess (and in what time frame) as well as the associated outcomes in patient engagement, care coordination, and population health that will be required for hospitals to qualify for ARRA incentive payments. While this is a major step toward providing an interoperable EMR to improve health for every American, these proposed guidelines do not offer EDIS-specific recommendations, such as when or how EDIS must support patient tracking or integration of ECG tracings. More broadly, this initial draft does not provide guidance for specialty areas of the hospital, except to note that “new measures under development, by NQF [National Quality Forum], and other recognized organizations will also address the work of specialists.”23 ONCHIT has recognized that the initial meaningful use guidelines focus on primary care providers, and a Health IT Policy Committee hearing was held in October 2009 seeking input from specific groups that may not have been adequately addressed by the proposed measures, including emergency physicians.31

Current national HIT policy strongly encourages hospitals and physicians’ offices to prioritize adoption of inpatient and outpatient electronic health records. However, there may be limited incentive for adoption of EDIS, since emergency physicians are not eligible for ARRA incentive payments as “hospital-based physicians who substantially furnish their services in a hospital setting.”3 Emergency medicine leaders should become more engaged in national HIT policy-making before meaningful use guidelines are finalized. In addition, the recently established Emergency Care Coordination Center in the Office of the Assistant Secretary for Preparedness and Response of the Department of Health and Human Services32 could liaise with ONCHIT to help ensure that emergency services information technology needs are met in national HIT policy initiatives.

Limitations

Accurate answers to technical EDIS survey questions require specialized information technology knowledge from appropriately qualified individuals. NHAMCS allows hospital administrators to respond to the EDIS survey questions or delegate this responsibility to other individuals. However, no publicly available information was provided on the individuals who completed the survey questions on EDIS use, their experience with these systems, or their technical knowledge of EDIS. Single-response EDIS adoption rates, as well as feature support, may also have been biased by a lack of standard definitions, lack of homogeneity of responses, or overreporting. In addition, survey respondents who judged that they did not have an EDIS were not asked questions about specific EDIS features. This skip pattern likely caused us to miss some EDs with basic or fully functional EDIS, leading to misclassification. It is unlikely that an ED in 2006 would not have computer systems at least for demographics or laboratory results; however, many EDs in this study reported lacking these features, suggesting underreporting. Missing values may also have contributed to underreporting of feature support. We were unable to adjust for ED size (or number of beds) because this information is not publicly available in NHAMCS. While these data are the latest available, they are from 2006. EDIS have been on the market for over 10 years, so there has been ample time for adoption. These results therefore provide baseline adoption benchmarks prior to enactment of government incentives.

Conclusions

Our results suggest that ED information system adoption is less widespread than previously reported.6 U.S. EDs have low rates of ED information system adoption, except those in urban areas and those specializing in the care of children. Consensus definitions of what constitutes an ED information system, and of the features required for meaningful ED information system use, need to be established. In addition, more specific mechanisms for incentivizing the adoption and implementation of ED information systems should be formulated. Future work should also study the impact of ED information systems on ED costs, quality, and outcomes. The first step to realizing the potential value of ED information systems for improved emergency care is to better define, understand, and promote ED information system use.

Acknowledgments

The authors thank Esther Hing of the CDC National Center for Health Statistics for her assistance clarifying the survey methodology used in the National Hospital Ambulatory Medical Care Survey.