The U.S. Department of Health and Human Services' http://www.medicare.gov/hospitalcompare website makes quality-of-care data from over 4,000 U.S. hospitals available to the public. The website is designed so that patients can make side-by-side comparisons of hospitals that are geographically close to one another, with the stated purpose of "help[ing] you make decisions about where you get your health care" (DHHS 2013a). As such, the website fits into an overarching goal of health care reform for the Centers for Medicare and Medicaid Services (CMS): to increase the patient-centeredness of health care by providing patients with data so that they can participate intelligently in decisions about their care.
Among the measures that CMS reports on the website is a group of surgical process-of-care measures called the Surgical Care Improvement Project (SCIP; Bratzler and Houck 2004). Publicly reported quality measures in surgery may be particularly impactful given that many surgeries occur on an elective or semielective basis, allowing patients time to review data on hospital quality in their search for a provider. CMS hopes that patients will be able to choose among surgical providers using SCIP data as a guidepost for differentiating higher quality centers (DHHS 2013b). Despite the admirable motivation behind such a consumer-oriented tool, few studies have examined whether SCIP measures, as reported on http://www.medicare.gov/hospitalcompare, succeed at making meaningful distinctions between higher and lower quality hospitals among the groups of hospitals that patients are likely to choose from. An inability of the SCIP measures to demonstrate quality differences among hospitals would highlight an urgent need to transform publicly reported surgical quality measures to provide greater value to patients. If SCIP fails to add value to patient decisions, CMS and hospitals may be particularly concerned given the considerable energy and resources invested in the aspects of surgical site infection (SSI) prevention outlined by SCIP.
Our aim was to describe the distribution of reported adherence to SCIP surgical site infection (SSI) quality measures (SCIP-1, SCIP-2, and SCIP-3) among hospitals that a patient would be likely to choose from. Performance on these SCIP measures reflects a hospital's ability to administer an SSI-preventing antibiotic in a timely manner prior to surgery (SCIP-1), to choose the appropriate type of antibiotic (SCIP-2), and to discontinue it within a given time window after the surgical encounter (SCIP-3; Bratzler and Houck 2005).
To define groups of hospitals from which a patient would be likely to choose, we used publicly available data on hospital referral regions (HRRs), as previously defined by the Dartmouth Atlas of Health Care. For each HRR, we determined the median SCIP score among hospitals and described the distribution of hospitals' SCIP performance around the median. The primary objective of this study was to determine the extent to which the SCIP SSI data available at http://www.medicare.gov/hospitalcompare fulfill their mission of providing a basis on which patients can choose among local hospitals for their own nonemergent surgical care.
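The per-HRR summary described above can be sketched in a few lines. The hospital scores, HRR names, and the quartile convention here are illustrative assumptions, not the study's actual data or methods:

```python
from statistics import median, quantiles

# Illustrative hospital-level SCIP-1 adherence scores (percent), keyed by HRR.
# Real data would come from Hospital Compare joined to the Dartmouth Atlas
# hospital-to-HRR assignments.
scores_by_hrr = {
    "Region A": [98.0, 97.0, 99.0, 96.0, 85.0],
    "Region B": [99.0, 98.0, 97.0],
}

def summarize_hrr(scores):
    """Return the median and interquartile range (IQR) of scores in one HRR."""
    q1, _, q3 = quantiles(scores, n=4)  # quartiles (default 'exclusive' method)
    return {"median": median(scores), "iqr": q3 - q1}

for hrr, scores in scores_by_hrr.items():
    print(hrr, summarize_hrr(scores))
```

Note that `statistics.quantiles` supports both `'exclusive'` and `'inclusive'` quartile conventions, which can differ noticeably for the small hospital counts typical of an HRR.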
The total number of hospitals included in the analysis was 2,953 for SCIP-1, 2,956 for SCIP-2, and 2,942 for SCIP-3. These hospitals were located within 305 unique HRRs. The median number of hospitals within an HRR was 6 and the mean number of hospitals was 9.7.
Figure 1 shows the median and interquartile range (IQR) of SCIP-1 scores across HRRs. Overall, the median SCIP-1 score among all hospitals was 98.0 percent (IQR: 96.0 percent, 99.0 percent). Hospitals reported high rates of adherence for SCIP-1, with 97.0 percent of HRRs having a median hospital adherence rate of at least 95 percent (Figure 1A and B). More than half of HRRs had an IQR of SCIP-1 scores of 2 percentage points or less (Figure 1C), and 93 percent of HRRs had an IQR of 5 percentage points or less. Hospital-reported SCIP-2 and SCIP-3 adherence rates demonstrated similar patterns across HRRs, with high median rates of adherence and tight IQRs (Appendix 1).
Figure 1. Medians and IQRs of Reported SCIP-1 Adherence across HRRs. (A) Map of median reported adherence rates among hospitals within each HRR across the United States. (B) Distribution of median hospital reported adherence within HRRs. (C) IQRs of reported adherence for hospitals within the same HRR. HRR, hospital referral region; IQR, interquartile range; SCIP, Surgical Care Improvement Project
Figure 2 shows the skewness of hospital scores and the presence of negative and positive outliers for HRRs with five or more hospitals. Sixty-two percent of HRRs demonstrated highly negative skewness (≤−1.0), and 17.4 percent demonstrated moderately negative skewness (−1.0 to −0.5; Figure 2A and B); 16.4 percent of HRRs demonstrated an approximately symmetric (−0.5 to 0.5) distribution of hospitals; only 2.8 percent demonstrated moderately positive skewness (0.5–1.0); and 4.2 percent demonstrated highly positive skewness (≥1.0). Moreover, 42.0 percent of HRRs had no outliers, while 37.6 percent had a single negative outlier (Figure 2C). Only 1.0 percent of HRRs had a positive outlier. Hospital-reported SCIP-2 and SCIP-3 adherence rates demonstrated similar patterns across HRRs, with the majority demonstrating highly negatively skewed data and either zero or a single negative outlier hospital (Appendix 2).
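The skewness bands and outlier counts reported above can be reproduced with a short sketch like the following. The article does not specify its skewness estimator or outlier rule, so this sketch assumes the adjusted Fisher–Pearson sample skewness and the conventional 1.5 × IQR fences with inclusive quartiles; the score list is illustrative:

```python
from statistics import mean, quantiles, stdev

def sample_skewness(xs):
    """Adjusted Fisher-Pearson sample skewness (an assumed estimator)."""
    n = len(xs)
    m, s = mean(xs), stdev(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in xs)

def classify_skewness(g1):
    """Map a skewness value to the bands used in the text."""
    if g1 <= -1.0:
        return "highly negative"
    if g1 < -0.5:
        return "moderately negative"
    if g1 <= 0.5:
        return "approximately symmetric"
    if g1 < 1.0:
        return "moderately positive"
    return "highly positive"

def count_outliers(xs):
    """Count hospitals outside the 1.5*IQR fences (an assumed rule)."""
    q1, _, q3 = quantiles(xs, n=4, method="inclusive")
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return sum(x < lo for x in xs), sum(x > hi for x in xs)

# One low-performing hospital drags the distribution left.
hrr_scores = [98.0, 97.0, 99.0, 96.0, 80.0]
g1 = sample_skewness(hrr_scores)
print(classify_skewness(g1), count_outliers(hrr_scores))
# -> highly negative (1, 0): one negative outlier, no positive outliers
```

This mirrors the dominant pattern in the results: uniformly high scores with a single laggard yield strong negative skew and, at most, one negative outlier.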
Figure 2. Skewness and Outliers of Reported SCIP-1 Adherence across HRRs. (A) Map of skewness of hospital reported adherence rates within each HRR across the United States. (B) Distribution of skewness of hospital reported adherence within HRRs. (C) Percentage of HRRs that contain zero or more positive and negative outliers. HRR, hospital referral region; SCIP, Surgical Care Improvement Project
Boxplots demonstrating reported SCIP-1, SCIP-2, and SCIP-3 adherence rates for each HRR can be found in Appendix 3.
Across more than 2,900 hospitals that publicly report their performance on surgical quality-of-care measures, the hospitals that patients are likely to compare when selecting a surgical provider generally performed very well and showed little variation. Moreover, hospital performance within HRRs tended to be highly negatively skewed, with the majority of HRRs containing either zero outliers or a single negative outlier hospital. Thus, publicly reported surgical quality-of-care measures, which CMS hopes patients will use to identify a preferred destination for their surgical care, do not differentiate the vast majority of hospitals that a patient is likely to choose from when selecting a provider. The SCIP measures that we studied are more successful, however, at singling out a poorly performing hospital and thus may be of some use as a patient-oriented tool for making health care decisions.
To our knowledge, this is the first study to examine what patients view when using publicly reported surgical quality measures. From the patient's perspective, the usefulness of the SCIP SSI prevention measures depends strongly on whether the hospitals that the patient will compare are distinguished from one another on the basis of their performance. Unfortunately, studies on the ability of public reporting to affect patient decision making have not kept pace with the rapid growth of publicly reported measures. One of the few studies that have examined the impact of public reporting demonstrated that since Medicare's initiation of public reporting, patients have not migrated to hospitals that performed better on quality measures (Ryan, Nallamothu, and Dimick 2012). Our study suggests one explanation for this trend: public reporting does not successfully distinguish among the top-performing hospitals that a patient may realistically be expected to choose from. Other studies have shown that the consistency and methodological quality of different public reporting websites are poor (Leonardi, McGory, and Ko 2007). As a result, greater attention must be paid to improving public reporting so that it appropriately empowers patients to find high-quality care.
With SSI rates largely unaltered in recent years despite steady improvement in SCIP compliance, there is an urgent opportunity for CMS to transform its SSI prevention measures so that public reporting is more relevant to patients (Hawn et al. 2011; Stulberg et al. 2010). There may be other aspects of surgical care beyond antibiotic prophylaxis that impact SSIs and, when measured and publicly reported, truly differentiate hospitals (Bratzler 2006). Such a distinction among hospitals would allow patients to learn which hospitals have higher quality practices and allow hospitals to learn where improvements are needed. For example, patients and providers may have an interest in how hospitals are performing in the organization and safety of the operating environment or the attention paid to postoperative wound care.
Another opportunity may lie in directly reporting hospital outcomes, such as SSI rates, rather than performance on processes of care intended to prevent SSIs, such as preoperative antibiotic timing. Studies have shown that outcomes have not improved with SCIP adherence, making the reporting of surrogate process measures less useful to patients (Hawn et al. 2011; Ingraham et al. 2010). In fact, the National Quality Forum has endorsed a risk-adjusted, nationally benchmarked SSI outcome measure (ACS 2012). Compared with the current SCIP measures, the SSI outcome measure may be better able to distinguish hospital quality directly, in a way that is easier for patients to understand. Hospitals would likely be incentivized toward a more pragmatic, institution-wide effort at reducing SSIs, rather than narrowly focusing on adherence to process-of-care measures without regard to clinical context (e.g., inappropriately redosing antibiotics at minute 61, without apparent clinical indication, purely for the sake of SCIP-1 compliance). The efficacy of a systematic redesign of the care process around outcome-oriented efforts has been demonstrated in the literature (Wick et al. 2012).
An even more readily attainable goal may lie in publicly reporting an all-or-none composite measure that combines the individual SCIP measures. Composite measures have been shown to be associated with reduced SSI rates and may simultaneously help better differentiate hospitals' performance for patients (Stulberg et al. 2010). Reporting a single composite score may also improve the readability of the surgical quality measures on Hospital Compare. Currently, there are more than 11 different measures, each with its own accompanying explanation that the patient must read and consider. It is important to note, however, that studies have demonstrated that bundling together multiple care practices that have been individually shown to improve outcomes may not actually lead to better outcomes (Anthony et al. 2011).
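An all-or-none composite of the kind described above credits a surgical case only when every applicable measure is met. A minimal sketch, in which the measure names, the pass/fail encoding, and the use of `None` for "not applicable" are all hypothetical choices:

```python
# Each record flags whether an individual SCIP SSI measure was met for one
# surgical case; None marks a measure that did not apply to that case.
cases = [
    {"SCIP-1": True,  "SCIP-2": True,  "SCIP-3": True},
    {"SCIP-1": True,  "SCIP-2": False, "SCIP-3": True},
    {"SCIP-1": True,  "SCIP-2": True,  "SCIP-3": None},  # SCIP-3 not applicable
]

def all_or_none_rate(cases):
    """Share of cases in which every applicable measure was met."""
    passed = sum(
        all(v for v in case.values() if v is not None) for case in cases
    )
    return passed / len(cases)

print(all_or_none_rate(cases))  # 2 of 3 cases meet all applicable measures
```

Because a single missed measure fails the whole case, an all-or-none score is strictly harder to max out than any individual measure, which is why it can spread hospitals apart even when each component measure is near the ceiling.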
Our study is subject to several limitations. First, we studied only the subset of SCIP measures related to surgical site infections. Second, for HRRs with fewer than five hospitals, we did not determine the IQR, skewness, or presence of outliers, because these statistics cannot be calculated reliably from so few observations. We did report these statistics for HRRs with five or more hospitals, which comprised more than two-thirds of all HRRs, and trends in reported SCIP adherence were consistent across all of the SCIP measures that we studied; we have no reason to believe that trends would differ substantially in the HRRs with fewer than five hospitals. Another potential limitation is that HRRs do not perfectly delineate the groups of hospitals from which patients choose when selecting a care provider. Patients may choose among hospitals in neighboring or even more geographically distant HRRs; particularly for very complex procedures offered at only a few centers of excellence, patients' willingness to travel farther and to cross HRR boundaries may increase. We believe, however, that HRRs delineate the groups of hospitals that patients are likely to choose from more reliably than regions defined by city, county, or state borders, because HRRs are based on established historical patterns of migration for tertiary care. Again, given the high median scores and tight IQRs of the majority of HRRs, there is no reason to believe that these patterns would not hold when considering groups spanning multiple HRRs.
In conclusion, publicly reported surgical quality measures for SSI prevention do not meaningfully distinguish the majority of hospitals that patients are likely to choose from. With public reporting requirements likely to grow in the coming years, more studies are needed to describe and improve the ability of such reporting to positively impact patient decision making.