Meaningful Use of Electronic Health Record Systems and Process Quality of Care: Evidence from a Panel Data Analysis of U.S. Acute-Care Hospitals

Authors


Address correspondence to Ajit Appari, Ph.D., Center for Digital Strategies, Tuck School of Business, Dartmouth College, 100 Tuck Hall, Hanover, NH 03755; e-mail: Ajit.Appari@Dartmouth.Edu

Abstract

Objective

To estimate the incremental effects of transitions in electronic health record (EHR) system capabilities on hospital process quality.

Data Source

Hospital Compare (process quality), Health Information and Management Systems Society Analytics (EHR use), and Inpatient Prospective Payment System (hospital characteristics) for 2006–2010.

Study Setting

Hospital EHR systems were categorized into five levels (Level_0 to Level_4) based on use of eight clinical applications. Level_3 systems can meet 2011 EHR “meaningful use” objectives. Process quality was measured as composite scores on a 100-point scale for heart attack, heart failure, pneumonia, and surgical care infection prevention. Statistical analyses were conducted using fixed-effects linear panel regression models for all hospitals, for hospitals stratified on condition-specific baseline quality, and for large hospitals.

Principal Findings

Among all hospitals, implementing Level_3 systems yielded an incremental 0.35–0.49 percentage point increase in quality (over Level_2) across three conditions. Hospitals in the bottom quartile of baseline quality gained 1.16–1.61 percentage points across three conditions upon reaching Level_3. However, transitioning to Level_4 yielded an incremental decrease of 0.90–1.0 percentage points across three conditions among all hospitals and 0.65–1.78 points among bottom quartile hospitals.

Conclusions

Hospitals transitioning to EHR systems capable of meeting 2011 meaningful use objectives improved process quality, and lower quality hospitals experienced even larger gains. However, hospitals that transitioned to more advanced systems saw quality declines.

Electronic health records (EHRs) are expected to play a key role in improving the quality of U.S. health care (IOM 2001, 2003; Blumenthal 2010; Buntin, Jain, and Blumenthal 2010). EHRs can improve quality of care delivery in numerous ways, such as providing accurate and up-to-date patient information and medical knowledge, rapid retrieval of health information, the ability to exchange health information with all authorized participants within or across organizations, automated clinical reminders, improved adherence to treatment guidelines, and accumulation of data for quality monitoring and improvement (IOM 2003; Millery and Kukafka 2010). As part of the Health Information Technology for Economic and Clinical Health (HITECH) Act, the Obama Administration has committed $27 billion to fund the implementation of EHRs through an incentive-based program for organizations that demonstrate the “meaningful use” of certified EHR technology as measured by a set of objectives, including breadth of use (i.e., the spread of EHR use among medical staff), extent of use (i.e., level/frequency of EHR use in organization-wide clinical decision making and nursing workflow), and quality improvement (Blumenthal and Tavenner 2010; CMS 2010). Achievement of these meaningful use (MU) objectives by 2015 is expected to occur in three stages. Initial objectives to be achieved by 2011 include the basic tasks of creating and maintaining medical records in electronic form (e.g., patient demographics, medication lists), as well as using EHR features such as drug–drug and drug–allergy interaction checks and clinical decision rules (CMS 2010).

A focus on meaningful use “to achieve significant improvements in care” is essential given the varied evidence of benefits of health information technology (IT). However, to the best of our knowledge, limited empirical research has focused on the benefits of meaningful use. Recently, Jones et al. (2011) showed that the use of computerized physician order entry systems (CPOE) for electronic medication orders satisfying post-2011 meaningful use criteria was associated with lower mortality rates for cardiovascular conditions. Of course, there is an active body of research on the benefits of health IT. Five systematic syntheses of prior work spanning two decades of research found mixed evidence of benefits, though positive findings are on the rise (Garg et al. 2005; Chaudhry et al. 2006; Goldzweig et al. 2009; Millery and Kukafka 2010; Buntin et al. 2011). In particular, findings from studies using national cross-sectional and panel data suggest that the use of health IT was modestly associated with better hospital quality (Kazley and Ozcan 2008; Yu et al. 2009; DesRoches et al. 2010; Himmelstein, Wright, and Woolhandler 2010; Jones et al. 2010; McCullough et al. 2010; McCullough, Parente, and Town 2011; Miller and Tucker 2011). For example, Appari et al. (2012) recently showed a positive link between electronic medication management systems and adherence to medication guidelines. However, other recent studies still find negative effects of advanced EHR implementations (e.g., Jones et al. 2010).

Overall, the inconsistent findings and a lack of strong positive evidence raise concerns among prospective EHR adopters (CMIO 2010). More important, teasing out the impact on quality is particularly challenging because, just like adoption of IT, care quality is associated with organizational characteristics and market factors (e.g., Hearld et al. 2008; Scanlon et al. 2008; Lehrman et al. 2010; Werner et al. 2011). Prior research has used different empirical strategies to attempt to deal with the endogeneity of IT adoption and hospital characteristics, including the use of propensity score adjustments, instrumental variables, stratification on hospital size, difference-in-difference methods, and longitudinal analysis (e.g., Furukawa et al. 2008; Kazley and Ozcan 2009; Furukawa, Raghu, and Shao 2010, 2011; Himmelstein, Wright, and Woolhandler 2010; Jones et al. 2010; Miller and Tucker 2011; Appari et al. 2012). Most longitudinal studies have considered only a limited set of health IT applications as markers for EHR implementation, such as clinical data repository, clinical decision support, and CPOE (e.g., Jones et al. 2010; McCullough et al. 2010; Miller and Tucker 2011), as well as ancillary information systems and medication management technologies (e.g., Furukawa, Raghu, and Shao 2010, 2011).

In this study, we sought to examine the effects of changes in EHR system capabilities on changes in process quality performance, with specific attention to the transition to systems capable of meeting the 2011 MU objectives. We posit two hypotheses. First, hospitals that transition to EHR systems capable of meeting 2011 MU objectives in a given year will show gains in process quality in the subsequent period, controlling for hospital and market characteristics. Second, quality improvement follows the law of diminishing returns (Donabedian, Wheeler, and Wyszewianski 1982; Levin 2000): larger gains are available when extant quality performance is low, while incremental gains diminish and become more expensive when quality is already high (Cole 1990; Benson, Saraph, and Schroeder 1991; Levin 2000). Grounded in this premise, we hypothesize that hospitals with lower baseline quality will see larger quality gains from transitioning to an EHR system capable of meeting 2011 MU objectives.

The primary contribution of this study lies in conceptualizing EHR system capabilities vis-à-vis progression toward 2011 MU objectives and analyzing how such transitions are associated with changes in quality in subsequent periods. Using an extensive 5-year panel dataset of U.S. hospitals, we demonstrate the effects of changes in EHR capability on changes in process quality across multiple conditions for all hospitals, as well as the differential effects in hospitals stratified by condition-specific baseline quality.

Methods

Data Sources

Our analytic sample comprises 3,921 nonfederal acute-care U.S. hospitals spanning 2006–2010, with 16,650 hospital-year observations. Data were drawn from three sources. Data on EHR systems came from the 2005 to 2009 Health Information and Management Systems Society (HIMSS) Analytics Databases, which include hospital characteristics and the operational status of clinical health IT applications.1 HIMSS Analytics “is the most comprehensive database of hospital IT adoption decisions” (McCullough 2008; Jones et al. 2010) and has been extensively used in health IT research (e.g., Fonkych and Taylor 2005; Kazley and Ozcan 2008, 2009; Yu et al. 2009; Furukawa, Raghu, and Shao 2010; Jones et al. 2010; McCullough et al. 2010; Miller and Tucker 2011; Appari et al. 2012). HIMSS follows a rigorous annual process to update the database: initial descriptive organizational data are gathered by phone, followed by an in-depth health IT inventory survey completed by hospital administrators. HIMSS provides benchmarking reports to respondents as an incentive for participation. Some researchers have pointed to inconsistencies and low response rates for a different HIMSS survey of about 200 hospital chief information officers (e.g., Fonkych and Taylor 2005; Jha et al. 2009); we do not use such survey data.

The EHR data were merged with data on process quality for inpatients obtained from the Centers for Medicare and Medicaid Services (CMS) Hospital Compare website, a publicly available source of data on hospital performance on select quality measures developed by the Hospital Quality Alliance. In particular, we used the third-quarter releases from 2007 to 2011, which provide hospital quality data for the prior calendar years (January–December, 2006–2010). In addition, data on hospitals' structural characteristics, used as control variables in our analyses, were obtained from the CMS Acute Inpatient Prospective Payment System Impact files for the respective years, available from the CMS website. All model variables and their respective data sources are described in Appendix SA2.

We matched EHR systems operational in a given year with quality performance data in the subsequent period (a minimum of 12 months after the EHR system transition, and up to 18 months posttransition). This strategy of lagging technology data against performance, also used in prior research (e.g., Furukawa, Raghu, and Shao 2010; Jones et al. 2010; Miller and Tucker 2011; Appari et al. 2012), avoids an overlap of the quality measurement period with the initial adoption and deployment of new technology.
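The lagging scheme can be sketched as follows. This is an illustrative toy, not the actual HIMSS or Hospital Compare layouts: the identifiers, years, and scores are made up, and the real merge keys on the Medicare provider number.

```python
# Toy hospital-year data keyed by (provider id, year); values are hypothetical.
ehr = {(1, 2005): 2, (1, 2006): 3, (2, 2005): 1}               # -> EHR level in year t
quality = {(1, 2006): 91.2, (1, 2007): 93.5, (2, 2006): 88.0}  # -> composite score in year t

# Pair each EHR observation in year t with quality measured over year t+1,
# so the quality window begins at least 12 months after the system was in place.
panel = [
    {"provider": pid, "ehr_year": yr, "ehr_level": lvl,
     "quality_year": yr + 1, "quality": quality[(pid, yr + 1)]}
    for (pid, yr), lvl in ehr.items()
    if (pid, yr + 1) in quality                # drop hospital-years without a match
]
```

The one-year offset is what keeps the quality measurement period from overlapping the initial deployment of the new technology.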

Measurement of EHR Levels

The 2011 MU objectives require adoption and use of several EHR functionalities (or modular applications) (DHHS 2010).2 Building on prior research (e.g., Furukawa, Raghu, and Shao 2010) and the HIMSS report mapping clinical IT applications to the 2011 MU objectives (HIMSS 2010), we classified hospitals into five levels of cumulative EHR system capability based on eight major clinical IT applications: Level_0 hospitals (primitive EHR capability), used as the reference group, typically have some clinical systems in place but fall short of even the rudimentary Level_1 on our scale; Level_1 includes three ancillary IT systems—laboratory, pharmacy, and radiology; Level_2 additionally includes clinical data repository and clinical decision support; Level_3 further includes nursing documentation and electronic medication administration record; and finally, Level_4 includes CPOE and all preceding applications. Hospitals at EHR Level_3 have the system capabilities required to meet 2011 MU objectives; hospitals at Level_4, with full implementation of CPOE and other optional applications, have the capabilities to meet post-2011 MU objectives. While complete satisfaction of the 2011 MU objectives requires fulfilling clinical and administrative activities using EHR systems, here we measure only whether a hospital's system has the functional capabilities to meet the objectives, as we have no data on whether the activities were actually accomplished.
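As an illustration, the cumulative classification can be expressed as a small function. The application names below are paraphrases of the eight systems listed above, not actual HIMSS field names:

```python
# Hypothetical re-implementation of the cumulative EHR level scale.
ANCILLARY = {"laboratory", "pharmacy", "radiology"}
LEVEL_2_EXTRA = {"clinical_data_repository", "clinical_decision_support"}
LEVEL_3_EXTRA = {"nursing_documentation", "emar"}  # emar = electronic medication admin. record
LEVEL_4_EXTRA = {"cpoe"}                           # computerized physician order entry

def ehr_level(live_apps):
    """Return the cumulative EHR capability level (0-4) for a hospital,
    given the set of clinical applications reported as operational."""
    level = 0
    for required in (ANCILLARY, LEVEL_2_EXTRA, LEVEL_3_EXTRA, LEVEL_4_EXTRA):
        if required <= live_apps:   # cumulative: all lower-level apps must be live
            level += 1
        else:
            break                   # a missing application caps the level
    return level
```

Because the scale is cumulative, a hospital running CPOE but lacking, say, nursing documentation is still classified at Level_2, not Level_4.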

Measurement of Process Quality

To measure quality of care, we estimated composite scores using raw data on several process quality indicators for four conditions from Hospital Compare: acute myocardial infarction (AMI or heart attack), heart failure (HF), pneumonia (PN), and surgical care infection prevention (SCIP). Each composite score represents the proportion of eligible patient cases (with no contraindication) for whom evidence-based clinical guidelines were adhered to across all indicators for a given condition during the observation year. A list of the constituent indicators for each condition is provided in Appendix SA4. These composite scores were estimated using the “denominator-based weights” approach (Shwartz et al. 2008; Jones et al. 2010; Lehrman et al. 2010). To ensure adequate reliability of these quality measures, composite scores were estimated only for hospitals reporting at least 30 eligible patient cases across the constituent indicators (Jha et al. 2005; Shwartz et al. 2008; Wennberg et al. 2008). Finally, the process quality scores were scaled from 0 to 100 so that the technology coefficients can be interpreted as the percentage point change in quality for an incremental advancement of the EHR system.
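A minimal sketch of the denominator-based-weights computation, following our reading of the Shwartz et al. (2008) approach; the numbers in the usage below are made up:

```python
def composite_score(indicators, min_cases=30):
    """Denominator-based-weights composite: pool adherent and eligible cases
    across a condition's indicators, so each indicator is implicitly weighted
    by its share of eligible cases. Returns a 0-100 score, or None when the
    hospital reports fewer than `min_cases` eligible cases (reliability screen).
    `indicators` is a list of (adherent_cases, eligible_cases) pairs."""
    eligible = sum(d for _, d in indicators)
    if eligible < min_cases:
        return None
    adherent = sum(n for n, _ in indicators)
    return 100.0 * adherent / eligible
```

Pooling cases this way means high-volume indicators dominate the score, which is the defining property of denominator-based weights. For example, `composite_score([(18, 20), (9, 10), (5, 5)])` pools 32 adherent out of 35 eligible cases for a score of about 91.4.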

Measurement of Hospital and Market Characteristics

Quality of care delivered at hospitals is influenced by their structural characteristics and the environment in which they operate (Ayanian and Weissman 2002; Donabedian 2003; Jha et al. 2005; Hearld et al. 2008; Scanlon et al. 2008; Armstrong, Laschinger, and Wong 2009; Lehrman et al. 2010; Werner et al. 2011). Consistent with this body of research and prior research in health IT, we used a comprehensive set of control variables to account for potential confounding effects. The hospital characteristics included teaching status, profit status, membership in a multihospital integrated delivery system, magnet status for nursing excellence, presence of cardiac intensive care unit, staffed bed size, transfer adjusted case mix index, rural location, and whether the hospital qualified for Medicare disproportionate share adjustments (Furukawa et al. 2008; Kazley and Ozcan 2008; Jha et al. 2009; Yu et al. 2009; Furukawa, Raghu, and Shao 2010; Himmelstein, Wright, and Woolhandler 2010; Jha et al. 2010; Appari et al. 2012). Further, we controlled for market competition intensity using the Herfindahl–Hirschman index computed from staffed-bed shares (Levitt 1994; Kessler and Geppert 2005; Weiner et al. 2006; Scanlon et al. 2008; Werner et al. 2011).
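For concreteness, the bed-size-based HHI for one market can be computed as follows (a sketch; the paper's exact share definition within a hospital referral region may differ):

```python
def herfindahl(bed_counts):
    """Herfindahl-Hirschman index for one market (here, a hospital referral
    region): the sum of squared market shares, using each hospital's share
    of staffed beds as a proxy for market share. Ranges from near 0
    (fragmented market) to 1.0 (single-hospital monopoly)."""
    total = sum(bed_counts)
    return sum((beds / total) ** 2 for beds in bed_counts)
```

On this scale, the mean HHI of 0.12 reported in Table 2 corresponds to markets roughly as concentrated as eight equal-sized hospitals (1/8 = 0.125).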

Sample

We constructed a national panel data sample of nonfederal U.S. acute-care hospitals by merging hospital-level data from these three sources using the Medicare provider number as the common identifier (confirming matches based on facility names and location). The process of panel data construction is described in Appendix SA3. To facilitate stratified analyses, hospitals were grouped into bottom quartile, interquartile, and top quartile groups based on their composite score for each condition individually in 2005, 1 year prior to our analysis period. Hospitals without 2005 quality scores were stratified using their first appearance in the panel period 2006–2010. We created multiple analytic datasets: an unbalanced panel of 3,921 hospitals spanning 2006–2010 and comprising 16,650 hospital-year observations, and a series of balanced panels for each condition (the number of hospitals and observations vary by condition; see Table 4).
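The quartile stratification can be sketched as follows; the cutoff conventions (strict versus inclusive inequalities, handling of ties) are our assumptions, since the text does not specify them:

```python
def stratify(baseline):
    """Assign hospitals to baseline-quality strata for one condition.
    `baseline` maps hospital id -> 2005 composite score (or the first
    available score in the panel). Returns hospital id -> 'bottom',
    'inter', or 'top' using simple empirical quartile cutoffs."""
    scores = sorted(baseline.values())
    n = len(scores)
    q1 = scores[n // 4]            # assumed lower-quartile cutoff
    q3 = scores[(3 * n) // 4]      # assumed upper-quartile cutoff
    return {h: ("bottom" if s < q1 else "top" if s >= q3 else "inter")
            for h, s in baseline.items()}
```

Because stratification is done for each condition individually, a hospital can sit in the bottom quartile for pneumonia while being in the top quartile for AMI.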

Panel Data Analysis

Our data form a short, wide unbalanced panel, that is, a large number of hospitals observed over relatively few time periods. To test our primary hypothesis of incremental effects of EHR system transitions on process quality, we estimated a linear panel regression model for each condition (with the composite process quality scores for acute myocardial infarction, heart failure, pneumonia, and surgical infection prevention as dependent variables) separately—first using the full unbalanced panel (all hospitals with at least 2 years of data), and then using a balanced panel (only hospitals with all 5 years of data). Subsequently, to test our secondary hypothesis, these analyses for unbalanced and balanced panels were repeated for hospitals stratified into the bottom quartile, interquartile range, and top quartile of baseline quality for each condition individually.

The statistical analysis for each condition was conducted by estimating linear panel regression models with both hospital-specific individual and year-specific time fixed effects (in STATA 12.0). Our initial analysis indicated both cross-sectional and temporal dependencies in our panel data (Wooldridge's 2002 test on full panel confirmed serial correlation and Pesaran's 2004 test on a subsample confirmed cross-sectional correlation; additionally, both the standard Hausman test and its alternative robust formulation as suggested in Wooldridge 2002 rejected the random effects model—an alternative approach to account for individual heterogeneity; test results not reported here). On the basis of these results, we applied a variant of Driscoll and Kraay's (1998) method of producing heteroscedasticity, autocorrelation, and spatial correlation consistent robust standard errors in linear panel regressions using xtscc—a STATA program by Hoechle (2007). This program implements an extension of Driscoll and Kraay's method (initially proposed for balanced panels using Newey–West type corrections) that is applicable for unbalanced panels and has been shown to perform better than conventional linear panel models that do not account for cross-sectional dependence, especially across large panels (Hoechle 2007). Using a fixed-effects (within) model, the xtscc procedure first transforms all model variables at an individual cluster level (in our case for each hospital). Then, estimates are generated using a pooled ordinary least-square regression on the within-transformed data panel. The coefficients and their standard errors are robust to very general forms of serial correlation and cross-sectional dependence.
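The core of this estimator, the within transformation followed by pooled OLS, can be sketched on simulated data. This is a minimal single-regressor illustration under assumed parameter values, not the paper's specification; the Driscoll–Kraay standard-error step itself is omitted (xtscc computes it from cross-sectional averages of the OLS moment conditions):

```python
import numpy as np

# Simulate a short, wide panel: 200 hospitals over 5 years, with a true
# slope of 2.0 and hospital-specific fixed effects (alpha).
rng = np.random.default_rng(0)
n_hosp, n_years = 200, 5
hosp = np.repeat(np.arange(n_hosp), n_years)        # hospital id per observation
x = rng.normal(size=n_hosp * n_years)
alpha = rng.normal(size=n_hosp)                     # unobserved hospital effects
y = 2.0 * x + alpha[hosp] + rng.normal(scale=0.1, size=x.size)

def within(v, groups):
    """Within transformation: subtract each group's (hospital's) mean."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

# Demeaning removes alpha exactly, so pooled OLS on the transformed data
# recovers the slope; robust (Driscoll-Kraay) SEs would be computed next.
y_w, x_w = within(y, hosp), within(x, hosp)
beta = (x_w @ y_w) / (x_w @ x_w)
```

With roughly 1,000 observations and little noise, `beta` lands very close to the true slope of 2.0, illustrating why the within estimator is unaffected by time-invariant hospital heterogeneity.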

In all regressions, we used year dummies to account for unobserved period-specific fixed effects, and time variant hospital and market characteristics to account for hospital and environment changes over time. Finally, we also performed an additional robustness check by analyzing panel data on large hospitals (with at least 100 beds) that are most likely to have advanced EHR systems (results are reported in online appendix Tables B and C).

Results

Descriptive Statistics

The descriptive statistics of hospital EHR system levels and process quality performance are reported in Table 1. For the pooled data on all hospitals, about 27 percent of hospitals were at EHR Level_1, 35 percent at Level_2, 15 percent at Level_3, 13 percent at Level_4, and the remainder with primitive EHR capability (Level_0). Over time, hospitals have invested in advanced health IT, including nursing documentation, electronic medication administration record, and CPOE. As can be seen in Table 1, the proportion of hospitals in Levels_3 and 4 increased from less than 10 percent in 2006 to almost 50 percent in 2010.

Table 1. Description of Electronic Health Record (EHR) Capability and Process Quality for 3,921 U.S. Acute-Care Hospitals (All Hospitals; 2006–2010)

EHR Capability Levels over Time (in Percentages)

| EHR Capability Level | All (N = 16,650) | 2006 (n = 2,959) | 2007 (n = 3,047) | 2008 (n = 3,544) | 2009 (n = 3,459) | 2010 (n = 3,646) |
|---|---|---|---|---|---|---|
| EHR Level_0 | 9.04 | 14.43 | 9.45 | 10.38 | 6.39 | 5.51 |
| EHR Level_1 | 27.26 | 38.29 | 34.85 | 28.89 | 23.39 | 14.04 |
| EHR Level_2 | 35.32 | 38.76 | 37.25 | 35.78 | 33.28 | 32.42 |
| EHR Level_3 | 15.08 | 5.27 | 10.96 | 13.80 | 18.70 | 24.30 |
| EHR Level_4 | 13.30 | 3.24 | 7.48 | 11.15 | 18.24 | 23.72 |

Process Quality Performance over Time: Mean (Standard Deviation) Estimates of Composite Process Quality (in Percentages)

| Quality Measure | Overall | 2006 | 2007 | 2008 | 2009 | 2010 |
|---|---|---|---|---|---|---|
| AMI: Acute myocardial infarction (N = 12,264) | 94.82 (5.94) | 91.90 (7.15) | 93.36 (6.23) | 95.11 (5.23) | 96.63 (4.51) | 97.34 (4.12) |
| HF: Heart failure (N = 15,554) | 87.30 (12.72) | 80.35 (13.19) | 84.85 (11.92) | 87.24 (12.65) | 90.55 (11.65) | 92.39 (10.58) |
| PN: Pneumonia (N = 16,486) | 90.35 (7.59) | 85.32 (6.83) | 89.70 (6.81) | 90.42 (7.35) | 92.10 (7.25) | 93.27 (7.20) |
| SCIP: Surgical care infection prevention (N = 14,692) | 88.58 (10.84) | 77.80 (12.16) | 83.80 (10.29) | 90.03 (8.57) | 94.25 (6.14) | 95.01 (5.53) |

Notes. EHR Level_0, not all ancillary systems; EHR Level_1 includes three ancillary systems—laboratory, pharmacy, radiology information systems; EHR Level_2 includes Level_1 + clinical data repository and clinical decision support; EHR Level_3 includes Level_2 + nursing documentation and electronic medication administration record systems; EHR Level_4 includes Level_3 + computerized physician order entry system.

The estimates of process quality measures are based on a minimum of 30 total eligible patient cases for a given condition. The number of hospitals included in the estimations of process quality (only overall numbers shown here) varies substantially from the number of hospitals observed for EHR capability, owing to the inclusion criterion of a minimum of 30 eligible cases and because hospitals may not report all measures every year. The constituent indicators of each composite score are described in Appendix SA4.

Among all hospitals in the panel, the mean (composite) process quality across the four conditions ranged from 87.3 to 94.8 percent. In the 5-year period, mean (composite) process quality for acute myocardial infarction increased moderately by about 5 percentage points (from 91.9 to 97.3 percent). The improvement over that time for heart failure, pneumonia, and surgical care infection prevention was approximately 12–17 percentage points. Descriptive statistics for hospital characteristics are reported in Table 2.

Table 2. Description of Organizational and Market Characteristics for 3,921 U.S. Acute-Care Hospitals (All Hospitals; 2006–2010; 16,650 Hospital-Years)a

| Characteristic | All | 2006 | 2007 | 2008 | 2009 | 2010 |
|---|---|---|---|---|---|---|
| For profit (%) | 17.12 | 18.82 | 18.12 | 16.28 | 16.36 | 16.46 |
| Teaching (%) | 29.28 | 33.66 | 32.46 | 27.45 | 27.58 | 26.44 |
| Rural (%) | 38.94 | 32.54 | 34.33 | 41.67 | 41.25 | 43.13 |
| Multihospital system membership (%) | 62.74 | 68.13 | 67.57 | 59.65 | 60.57 | 59.38 |
| Magnet certification for nursing excellence (%) | 6.96 | 6.02 | 6.83 | 5.87 | 8.12 | 7.79 |
| Cardiac registry participation (%) | 29.49 | 32.82 | 32.23 | 27.82 | 28.39 | 27.15 |
| Not qualified for DSH payments (%)b | 53.09 | 48.75 | 50.66 | 54.12 | 55.85 | 55.51 |
| Transfer adjusted case mix indexb | 1.38 (0.27) | 1.38 (0.25) | 1.39 (0.26) | 1.37 (0.27) | 1.37 (0.27) | 1.40 (0.28) |
| Market competition intensity (HHI) | 0.12 (0.10) | 0.13 (0.11) | 0.12 (0.10) | 0.12 (0.10) | 0.12 (0.10) | 0.12 (0.10) |
| Staffed bed size: Ln(bed size) | 4.82 (1.01) | 5.02 (0.89) | 4.95 (0.95) | 4.74 (1.04) | 4.76 (1.03) | 4.68 (1.06) |
| 6–99 beds (%) | 37.36 | 28.39 | 31.18 | 41.02 | 40.5 | 43.29 |
| 100–199 beds (%) | 25.38 | 29.1 | 27.67 | 23.97 | 24.1 | 23.02 |
| 200–299 beds (%) | 16.24 | 18.18 | 17.92 | 15.33 | 15.5 | 14.84 |
| 300–399 beds (%) | 9.56 | 11.69 | 10.8 | 8.87 | 8.74 | 8.23 |
| 400 plus beds (%) | 11.47 | 12.64 | 12.44 | 10.81 | 11.17 | 10.62 |

Notes. HHI: Herfindahl–Hirschman index estimated using proportion of staffed bed size as proxy for market share in local market (defined by hospital referral region).
a For all indicator variables, proportion estimates are reported; for continuous variables, mean (standard deviation) is reported.
b Number of hospital-years for these variables is 14,714.

Effect of EHR System Transitions on Hospital Process Quality

The incremental effects of EHR system transitions on process quality for acute myocardial infarction, heart failure, pneumonia, and surgical infection prevention are reported in Table 3 (unbalanced panel) and Table 4 (balanced panel). In the context of delivering care to acute myocardial infarction patients in all hospitals (column one in Table 3), EHR system transitions from Level_2 to Level_3, capable of meeting 2011 MU objectives, were associated with an increase in quality of 0.35 percentage points, after controlling for hospital characteristics, unobserved period effects, and time-variant hospital and market characteristics. The transition from Level_3 to Level_4, however, was associated with a decrease in quality of 0.90 percentage points. In the stratified analysis, EHR system transitions had similar effects among the lowest quality hospitals (bottom quartile), with a statistically significant increase of 1.16 percentage points for the transition from Level_2 to Level_3 systems capable of meeting the MU objectives. In other words, in low-quality hospitals that implemented Level_3 systems, patients received the recommended AMI treatment in about 1.2 percent of additional eligible cases on average, all else equal. However, bottom quartile hospitals also had a statistically significant decline of 0.65 percentage points associated with the transition from Level_3 to Level_4 systems. Among hospitals in the interquartile range of baseline quality, transitions in EHR systems had little significant effect, except for a slight decline of 0.18 percentage points for transitions from Level_0 to Level_1. Among high-quality hospitals in the top quartile, hospitals transitioning from Level_1 to Level_2 saw a statistically significant improvement of 0.32 percentage points, no effect for transitions to Level_3 systems, but a statistically significant decline in quality of 0.36 points for transitions to Level_4 systems.

Table 3. Estimation of Incremental Effects of Electronic Health Record (EHR) Capability on Inpatient Process Quality at U.S. Acute-Care Hospitals (All Hospitals; Unbalanced Panel; 2006–2010)

(A) Acute myocardial infarction: Composite process quality

| EHR Capability Transition | All Hospitals | Bottom Quartile | Interquartile | Top Quartile |
|---|---|---|---|---|
| Level_0 to Level_1 | 0.29 (0.25) | 1.34 (0.78) | −0.18 (0.08)* | −0.17 (0.21) |
| Level_1 to Level_2 | −0.08 (0.18) | −0.18 (0.41) | −0.23 (0.12) | 0.32 (0.12)* |
| Level_2 to Level_3 | 0.35 (0.13)* | 1.16 (0.28)*** | 0.18 (0.13) | −0.01 (0.09) |
| Level_3 to Level_4 | −0.90 (0.12)*** | −0.65 (0.20)** | −0.12 (0.07) | −0.36 (0.06)*** |
| #Observations | 12,181 | 2,423 | 6,528 | 3,230 |
| #Hospitals | 2,809 | 660 | 1,436 | 713 |

(B) Heart failure: Composite process quality

| EHR Capability Transition | All Hospitals | Bottom Quartile | Interquartile | Top Quartile |
|---|---|---|---|---|
| Level_0 to Level_1 | 1.13 (0.15)*** | 1.15 (0.63) | −0.42 (0.16)* | −0.03 (0.18) |
| Level_1 to Level_2 | −0.19 (0.18) | 0.30 (0.47) | −0.07 (0.04) | 0.15 (0.11) |
| Level_2 to Level_3 | 0.37 (0.13)** | 1.32 (0.46)** | 0.65 (0.17)** | −0.31 (0.10)** |
| Level_3 to Level_4 | −0.92 (0.21)*** | −1.31 (0.29)*** | −0.17 (0.16) | 0.19 (0.16) |
| #Observations | 14,471 | 3,032 | 7,850 | 3,589 |
| #Hospitals | 3,181 | 693 | 1,678 | 810 |

(C) Community acquired pneumonia: Composite process quality

| EHR Capability Transition | All Hospitals | Bottom Quartile | Interquartile | Top Quartile |
|---|---|---|---|---|
| Level_0 to Level_1 | −0.15 (0.13) | 0.96 (0.44)* | −0.30 (0.09)** | 0.54 (0.18)** |
| Level_1 to Level_2 | −0.19 (0.10) | −0.20 (0.18) | −0.17 (0.16) | 0.22 (0.10)* |
| Level_2 to Level_3 | 0.49 (0.10)*** | 1.61 (0.20)*** | 0.19 (0.11) | 0.16 (0.10) |
| Level_3 to Level_4 | 0.13 (0.06)* | −0.23 (0.15) | 0.52 (0.07)*** | −0.17 (0.16) |
| #Observations | 14,571 | 3,767 | 7,847 | 2,957 |
| #Hospitals | 3,201 | 816 | 1,669 | 716 |

(D) Surgical infection prevention: Composite process quality

| EHR Capability Transition | All Hospitals | Bottom Quartile | Interquartile | Top Quartile |
|---|---|---|---|---|
| Level_0 to Level_1 | 0.36 (0.31) | 0.75 (0.63) | 0.49 (0.47) | −0.73 (0.19)** |
| Level_1 to Level_2 | −0.35 (0.27) | −0.04 (0.27) | −0.26 (0.11)* | 0.02 (0.18) |
| Level_2 to Level_3 | −0.09 (0.19) | 0.17 (0.22) | 0.11 (0.15) | −0.51 (0.11)*** |
| Level_3 to Level_4 | −1.01 (0.14)*** | −1.78 (0.20)*** | −0.02 (0.23) | 0.45 (0.17)* |
| #Observations | 14,064 | 3,625 | 7,460 | 2,979 |
| #Hospitals | 3,112 | 802 | 1,595 | 715 |

Note. *** p ≤ .01; ** p ≤ .05; and * p ≤ .10. Driscoll and Kraay robust standard errors are in parentheses. The incremental effects of EHR capability were estimated using fixed effects (within) linear panel regression model with Driscoll and Kraay standard errors accounting for serial and cross-sectional correlation after adjusting for unobserved period fixed effects (year dummies), and hospital characteristics and market competition intensity (measured by Herfindahl–Hirschman index at hospital referral region). All process quality measures (i.e., dependent variables) are composite scores (in percentage) with minimum of 30 total eligible cases for each condition. The organizational characteristics include for-profit status, teaching status, rural location, member of multihospital system, presence of cardiac intensive care unit, transfer adjusted case mix index, qualification for disproportionate share payment, staffed bed capacity, and magnet status. Hospital stratification is based on observed quality in 2005 or first occurrences during 2006–2010.
Table 4. Estimation of Incremental Effects of Electronic Health Record (EHR) Capability on Inpatient Process Quality at U.S. Acute-Care Hospitals (All Hospitals; Balanced Panel; 2006–2010)

(A) Acute myocardial infarction: Composite process quality

| EHR Capability Transition | All Hospitals | Bottom Quartile | Interquartile | Top Quartile |
|---|---|---|---|---|
| Level_0 to Level_1 | 0.12 (0.34) | 1.72 (1.28) | −0.23 (0.17) | −0.27 (0.20) |
| Level_1 to Level_2 | −0.04 (0.13) | 0.09 (0.37) | −0.31 (0.08)** | 0.31 (0.12)* |
| Level_2 to Level_3 | 0.38 (0.14)** | 1.31 (0.22)*** | 0.21 (0.18) | 0.01 (0.09) |
| Level_3 to Level_4 | −0.81 (0.15)*** | −0.67 (0.26)* | −0.06 (0.08) | −0.39 (0.05)*** |
| #Observations | 10,405 | 1,660 | 5,840 | 2,905 |
| #Hospitals | 2,081 | 332 | 1,168 | 581 |

(B) Heart failure: Composite process quality

| EHR Capability Transition | All Hospitals | Bottom Quartile | Interquartile | Top Quartile |
|---|---|---|---|---|
| Level_0 to Level_1 | 0.93 (0.33)** | 1.03 (1.03) | −0.93 (0.13)*** | 0.07 (0.23) |
| Level_1 to Level_2 | −0.29 (0.10)** | −0.58 (0.24)* | 0.09 (0.08) | 0.20 (0.13) |
| Level_2 to Level_3 | 0.44 (0.10)*** | 1.70 (0.45)** | 0.58 (0.14)*** | −0.32 (0.11)** |
| Level_3 to Level_4 | −0.89 (0.21)*** | −1.16 (0.28)*** | −0.16 (0.16) | 0.14 (0.15) |
| #Observations | 12,805 | 2,540 | 7,125 | 3,140 |
| #Hospitals | 2,561 | 508 | 1,425 | 628 |

(C) Community acquired pneumonia: Composite process quality

| EHR Capability Transition | All Hospitals | Bottom Quartile | Interquartile | Top Quartile |
|---|---|---|---|---|
| Level_0 to Level_1 | −0.36 (0.11)** | 0.61 (0.43) | −0.42 (0.11)** | 0.47 (0.17)** |
| Level_1 to Level_2 | −0.19 (0.09)* | −0.16 (0.18) | −0.19 (0.13) | 0.13 (0.10) |
| Level_2 to Level_3 | 0.45 (0.13)** | 1.47 (0.19)*** | 0.20 (0.13) | 0.11 (0.09) |
| Level_3 to Level_4 | 0.18 (0.06)** | 0.01 (0.12) | 0.53 (0.08)*** | −0.31 (0.10)** |
| #Observations | 12,940 | 3,400 | 7,235 | 2,305 |
| #Hospitals | 2,588 | 680 | 1,447 | 461 |

(D) Surgical infection prevention: Composite process quality

| EHR Capability Transition | All Hospitals | Bottom Quartile | Interquartile | Top Quartile |
|---|---|---|---|---|
| Level_0 to Level_1 | −0.41 (0.31) | −1.37 (0.69) | 0.53 (0.35) | −0.09 (0.46) |
| Level_1 to Level_2 | 0.00 (0.08) | 0.68 (0.67) | −0.52 (0.24)* | −0.34 (0.36) |
| Level_2 to Level_3 | 0.53 (0.35) | 1.41 (0.55)* | 0.63 (0.25)* | −0.28 (0.25) |
| Level_3 to Level_4 | −2.02 (0.21)*** | −2.71 (0.42)*** | −1.08 (0.42)* | −0.11 (0.18) |
| #Observations | 5,365 | 1,675 | 2,670 | 1,020 |
| #Hospitals | 1,073 | 335 | 534 | 204 |

Note. *** p ≤ .01; ** p ≤ .05; and * p ≤ .10. Driscoll and Kraay robust standard errors are in parentheses. The incremental effects of EHR capability were estimated using fixed effects (within) linear panel regression model with Driscoll and Kraay standard errors accounting for serial and cross-sectional correlation after adjusting for unobserved period fixed effects (year dummies), and hospital characteristics and market competition intensity (measured by Herfindahl–Hirschman index at hospital referral region). All process quality measures (i.e., dependent variables) are composite scores (in percentage) with minimum of 30 total eligible cases for each condition. The organizational characteristics include for-profit status, teaching status, rural location, member of multihospital system, presence of cardiac intensive care unit, transfer adjusted case mix index, qualification for disproportionate share payment, staffed bed capacity, and magnet status. Hospital stratification is based on observed quality in 2005 or first occurrences during 2006–2010.

For process quality in delivering care to heart failure patients, across all hospitals EHR system transitions to Level_3 were associated with a small but statistically significant increase in quality of 0.37 percentage points. In addition, transitions from primitive EHR (Level_0) to Level_1 were associated with a 1.13 percentage point increase in quality, with no significant change for transitions to Level_2, but a decline of 0.92 percentage points for the transition to Level_4. These effects were amplified among bottom quartile hospitals, in which transitions to Level_3 systems were associated with increased quality of 1.32 percentage points, whereas transitions to Level_4 systems were associated with decreased quality of 1.31 percentage points, essentially wiping out the gains achieved with Level_3 systems. Among hospitals in the interquartile range of quality, there was a statistically significant increase in quality of 0.65 percentage points for the transition to Level_3. Among top quartile hospitals, only the transition to Level_3 was statistically significant, showing a decline in quality of 0.31 percentage points.

Significant quality changes in pneumonia care associated with EHR system transitions were mostly positive. Among all hospitals, transitions to Level_3 and to Level_4 were associated with modest but statistically significant quality increases of 0.49 and 0.13 percentage points, respectively. For low-quality hospitals, transitions to Level_1 and to Level_3 were associated with statistically significant increases of 0.96 and 1.61 percentage points, respectively. Hospitals in the interquartile range saw a statistically significant increase of 0.52 percentage points for the transition to Level_3, but a modest, statistically significant decline of 0.30 percentage points for the transition to Level_1. Top quartile hospitals saw quality increases only for transitions to Level_1 (0.54) and to Level_2 (0.22).

Finally, for process quality in surgical infection prevention among all hospitals, EHR system transitions had no significant effects except the transition to Level_4, for which quality declined by 1.01 percentage points. Similarly, among bottom quartile hospitals, only the transition to Level_4 systems was associated with a statistically significant decline, of 1.78 percentage points. EHR transitions had little significant effect for interquartile hospitals, while top quartile hospitals saw quality declines for transitions to Level_1 (0.73) and to Level_3 (0.51), but a modest improvement (0.45) for the transition to Level_4.

Discussion

This study examined the relationship between EHR use and process quality of care at nonfederal U.S. acute-care hospitals. We employed a panel dataset for 2006–2010 to examine how transitions in EHR systems were associated with changes in process quality in subsequent periods and, more specifically, whether transitioning to an EHR system capable of achieving the 2011 MU objectives was associated with improved process quality. Little research has examined the impact of changing EHR systems, or of systems that satisfy specific policy standards such as the MU objectives. Moreover, apart from a handful of studies, most prior work has used cross-sectional designs and considered only selected clinical IT applications. This study contributes to our understanding of emerging health IT in two important ways. First, to our knowledge, it is among the first to quantify, using panel data, the association of EHR system transitions with changes in process quality, and specifically in the context of the MU standards. Variation in the effects of EHR system transitions was examined for hospitals stratified by baseline quality performance. Furthermore, the choice of 2006–2010 as the study period avoids potential bias from policy regime changes under the provisions of the HITECH Act, as all technology changes studied occurred through 2009. Second, unlike prior longitudinal studies, we account for both potential serial and cross-sectional correlation in quality performance, in addition to the common practice of adjusting for quality improvement common to all hospitals with year-specific fixed effects and for heterogeneity in hospital practices with hospital-specific (within) fixed effects.
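The fixed effects specification described here can be sketched as follows. This is an illustrative reconstruction under simplified assumptions of our own (a single hypothetical EHR transition indicator, no controls, and ordinary rather than autocorrelation- and cluster-robust standard errors), not the authors' estimation code:

```python
import numpy as np

def fe_estimate(y, x, hospital, year):
    """Least-squares dummy variable (LSDV) form of the within estimator:
    hospital dummies absorb hospital-specific fixed effects, year dummies
    absorb quality trends common to all hospitals."""
    y = np.asarray(y, dtype=float)
    hospital = np.asarray(hospital)
    year = np.asarray(year)
    cols = [np.asarray(x, dtype=float)]                                # EHR transition indicator
    cols += [(hospital == h).astype(float) for h in np.unique(hospital)]
    cols += [(year == t).astype(float) for t in np.unique(year)[1:]]   # drop base year
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]                                                     # coefficient on the transition

# Toy balanced panel: two hospitals, three years; quality rises 0.5 points
# once the (hypothetical) EHR indicator switches on.
y = [80.0, 80.5, 80.5, 90.0, 90.0, 90.5]
x = [0, 1, 1, 0, 0, 1]
hospital = [1, 1, 1, 2, 2, 2]
year = [2006, 2007, 2008, 2006, 2007, 2008]
print(round(fe_estimate(y, x, hospital, year), 3))  # 0.5
```

The within estimator identifies the effect purely from changes inside each hospital over time, so time-invariant hospital differences (the 80 vs. 90 baselines above) cannot confound the estimate.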

Overall, we found support for our primary hypothesis among all hospitals that transitioning to a Level_3 EHR system capable of meeting the 2011 MU objectives was associated with statistically significant, but clinically modest, incremental gains in process quality of about 0.35–0.49 percentage points for AMI, heart failure, and pneumonia care, but not for surgical infection prevention. Furthermore, the stratified analysis of hospitals grouped on baseline quality performance supported the secondary hypothesis that low-quality hospitals (those in the bottom quartile) saw statistically significant quality increases from transitions to Level_3 systems; for these hospitals, the incremental improvements ranged from 1.16 to 1.61 percentage points. Although seemingly small in clinical terms, these improvements represent, on average, 3.5–5.0 additional cases of adherence to recommended treatment guidelines (per 1,000 eligible cases without contraindication, across condition-specific indicators) at hospitals with Level_3 systems, regardless of baseline quality. For hospitals with low baseline quality that transition to Level_3 systems, the corresponding increase is, on average, 11.6–16.1 adherent cases per 1,000 eligible cases. Given the time-variant and time-invariant controls and the autocorrelation structure used in the statistical models, these estimates are robust. Moreover, these effect sizes are consistent with prior studies examining relationships between health IT and process quality (e.g., DesRoches et al. 2010; Jones et al. 2010; McCullough et al. 2010; Appari et al. 2012).
Also recall that the mean quality scores fall within a range of 78–97 on the 100-point scale, so there is a potential “ceiling” effect in process quality, particularly for advanced hospitals with high quality scores (Jones et al. 2010). Finally, even modest improvements in process quality can have an important impact in health care (Shih and Schoenbaum 2007).
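The conversion between percentage points and adherent cases used above is direct arithmetic: because the composite score is a percentage, a change of p percentage points equals 10·p additional adherent cases per 1,000 eligible opportunities. A small check:

```python
def cases_per_1000(percentage_points):
    """Convert a composite-score change (percentage points) to additional
    adherent cases per 1,000 eligible cases without contraindication."""
    return percentage_points / 100.0 * 1000.0  # equivalently, 10 * pp

# The ranges reported above for all hospitals and for bottom quartile hospitals:
print(round(cases_per_1000(0.35), 1), round(cases_per_1000(0.49), 1))  # 3.5 4.9
print(round(cases_per_1000(1.16), 1), round(cases_per_1000(1.61), 1))  # 11.6 16.1
```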

Our findings, however, are tempered by an unexpected, and less promising, finding that transitions from Level_3 to Level_4 systems (capable of meeting post-2011 MU objectives) were associated with statistically significant declines in quality for AMI, heart failure, and surgical infection prevention care. Other research has documented that EHR interventions sometimes introduce undesired operational failures (Tucker and Spear 2006; Harrison, Koppel, and Bar-Lev 2007). These observations, coupled with our findings, suggest that not all IT implementations yield additional quality gains. Some system transitions may simply take longer to produce benefits than our study window allowed us to observe; with more time, organizational learning and technology improvement may eventually yield positive results.

Given the significant variation in hospital IT use by hospital size (Jha et al. 2009, 2010), and that size is also associated with quality, we conducted additional analyses modeling the effects of EHR system transitions on quality among medium and large hospitals (bed size of at least 100; results shown in online Appendix Tables B and C). The findings are consistent with those for the whole sample, supporting the hypothesis that transitions to EHR systems capable of meeting the 2011 MU objectives are associated with quality improvements, and that these transitions matter most for low-quality hospitals. The findings also show that transitions to the most advanced systems are associated with quality declines. Finally, in light of the varying effects of EHR capability transitions on hospital quality, our findings suggest a possibly nonlinear relationship between EHR use and hospital quality performance.

Implications for Policy and Practice

With the launch of the HITECH Act and its financial incentive programs to promote effective use of EHRs, the Obama administration has argued that meaningful use of health IT will lead to improved quality of health care delivery (Blumenthal 2010). As of January 2012, nearly 2,000 hospitals and over 41,000 providers had received $3.1 billion under the incentive program (which will remain active until 2021; penalties for not demonstrating meaningful use begin after 2015). Our study has important implications for the policy debate over the expected benefits of EHR incentive programs. We find evidence of promising improvements in process quality across a variety of conditions as hospitals implement systems capable of meeting the 2011 meaningful use objectives. Our results also indicate that the benefits are likely to be more prominent, though clinically modest, for hospitals with lower baseline quality. However, we also find evidence that transitions to the most advanced EHR systems may erode the gains from implementing Level_3 systems, suggesting some caution in the promotion of EHR systems.

Our findings also have significant implications for practice: hospital leadership should be cautious in setting expectations for organization-wide EHR implementations, and, more important, technology implementation alone is unlikely to be sufficient to produce quality improvements. Lastly, for hospitals already performing well on process quality measures, the tangible gains from EHRs may lie not in quality improvement but in sustaining quality (and realizing long-term gains may require focused investment in complementary organizational innovations).

Limitations

This study has some limitations. First, meaningful use of EHRs is defined here in terms of EHR system capabilities as a function of clinical IT applications. This definition is a limited proxy for “real” meaningful use, which requires demonstrating activities through technology use. Moreover, we do not distinguish the functionality and usability of different systems; components from different vendors are certainly not equally effective. Another potential limitation stems from time-varying omitted variables contemporaneous with EHR adoption, such as quality improvement initiatives or organizational restructuring.

Second, the coverage of hospitals in the HIMSS surveys for the early years of the study period was relatively thin due to lower coverage of rural and critical access hospitals. In addition, the Hospital Compare databases did not include all process measures for every hospital, and the denominators for quality scores did not always meet the reliability threshold. We addressed these issues in two ways. First, we analyzed an unbalanced panel, including hospitals with data for later but not earlier years. Second, we also constructed a balanced panel to analyze only hospitals with data across all years.
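The balanced-panel construction described above can be sketched as follows; this is illustrative logic under our own assumptions (simple record dicts, hypothetical hospital IDs), not the authors' data-processing code:

```python
def balance_panel(records, years):
    """Keep only hospitals observed in every required study year.
    records: list of dicts each containing 'hospital' and 'year' keys."""
    seen = {}
    for r in records:
        seen.setdefault(r["hospital"], set()).add(r["year"])
    # hospitals whose observed years cover the full study period
    keep = {h for h, ys in seen.items() if set(years) <= ys}
    return [r for r in records if r["hospital"] in keep]

# Hospital "A" has all three years; hospital "B" is missing 2006 and is dropped.
rows = [{"hospital": "A", "year": y} for y in (2006, 2007, 2008)] + \
       [{"hospital": "B", "year": y} for y in (2007, 2008)]
balanced = balance_panel(rows, (2006, 2007, 2008))
print(sorted({r["hospital"] for r in balanced}))  # ['A']
```

Comparing estimates from the unbalanced panel (more hospitals, uneven coverage) against the balanced panel (fewer hospitals, full coverage) is a common robustness check against selective survey participation.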

Finally, we did not study all the potential benefits of EHR use but focused specifically on well-established process measures of health care delivery quality.

Conclusion

We found that transitions to EHR systems capable of meeting the 2011 standards for meaningful use were associated with higher process quality related to conditions of acute myocardial infarction, heart failure, and pneumonia at acute-care hospitals. The effects varied depending on baseline quality performance, with low-quality hospitals seeing the largest improvements in quality. However, we found troubling declines in quality associated with transitions to the most advanced EHR systems.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: This research was partially funded by the National Science Foundation (NSF-CNS-0910842). The data on hospital health information technologies were licensed from HIMSS Analytics. Neither the National Science Foundation nor HIMSS had any role in the study design, management, analysis, interpretation, or approval. We are grateful to two anonymous referees and Ivan Png for helpful comments on earlier drafts of this manuscript.

Disclosures: None.

Disclaimers: None.

Notes

  1.

    The 2006–2010 HIMSS databases of nonfederal hospitals include most acute-care hospitals in the United States, except that coverage of smaller hospitals (<100 beds) was relatively low in year 2006.

  2.

    For example, clinical decision support implemented by hospitals should support basic drug–drug, drug–allergy, and drug-formulary checks.
