
Keywords:

  • Public reporting;
  • Medicare;
  • quality;
  • hospitals;
  • surgery

Abstract


Objective

To determine whether surgical quality measures that Medicare publicly reports provide a basis for patients to choose a hospital from within their geographic region.

Data Source

The Department of Health and Human Services' public reporting website, http://www.medicare.gov/hospitalcompare.

Study Design

We identified hospitals (n = 2,953) reporting adherence rates for the quality measures intended to reduce surgical site infections (Surgical Care Improvement Project [SCIP] measures 1–3) in 2012. We defined the regions within which patients are likely to compare hospitals using the hospital referral regions (HRRs) from the Dartmouth Atlas of Health Care Project. We described the distribution of reported SCIP adherence within each HRR, including medians, interquartile ranges (IQRs), skewness, and outliers.

Principal Findings

Ninety-seven percent of HRRs had median SCIP-1 scores ≥95 percent. In 93 percent of HRRs, half of the hospitals were within 5 percent of the median hospital's score. In 62 percent of HRRs, the distribution of scores was skewed toward the higher rates (negative skewness); 7 percent of HRRs demonstrated positive skewness. Only 1 percent had a positive outlier. SCIP-2 and SCIP-3 demonstrated similar distributions.

Conclusions

Publicly reported quality measures for surgical site infection prevention do not distinguish the majority of hospitals that patients are likely to choose from when selecting a surgical provider. More studies are needed to improve public reporting's ability to positively impact patient decision making.

The U.S. Department of Health and Human Services' website http://www.medicare.gov/hospitalcompare makes quality-of-care data from over 4,000 U.S. hospitals available to the public. The website is designed so that patients can make side-by-side comparisons of hospitals that are geographically close to one another, with the stated purpose to “help you make decisions about where you get your health care” (DHHS 2013a). As such, the website fits into an overarching goal of health care reform for the Centers for Medicare and Medicaid Services (CMS): to increase the patient-centeredness of health care by providing patients with data so that they can participate intelligently in their own care decisions.

Among the measures that CMS reports on the website is a group of surgical process-of-care measures from the Surgical Care Improvement Project (SCIP; Bratzler and Houck 2004). Publicly reported quality measures in surgery may be particularly impactful given that many operations occur on an elective or semielective basis, allowing patients time to review data on hospital quality in their search for a provider. CMS hopes that patients will be able to choose among surgical providers using SCIP data as a guidepost for identifying higher quality centers (DHHS 2013b). Despite the admirable motivation behind such a consumer-oriented tool, few studies have examined whether the SCIP measures, as reported on http://www.medicare.gov/hospitalcompare, make meaningful distinctions between higher and lower quality hospitals within the groups of hospitals that patients are likely to choose from. An inability of the SCIP measures to demonstrate quality differences among hospitals would highlight an urgent need to transform publicly reported surgical quality measures so that they provide greater value to patients. If SCIP fails to add value to patient decisions, CMS and hospitals may be particularly concerned given their considerable investment of energy and resources in the aspects of surgical site infection (SSI) prevention that SCIP addresses.

Our aim was to describe the distribution of reported adherence to the SCIP SSI prevention quality measures (SCIP-1, SCIP-2, and SCIP-3) among hospitals that a patient would be likely to choose from. Performance on these measures reflects a hospital's ability to administer a prophylactic antibiotic intended to prevent SSIs in a timely manner prior to surgery (SCIP-1), to choose the appropriate type of antibiotic (SCIP-2), and to discontinue it within a given time window after surgery (SCIP-3; Bratzler and Houck 2005).

To define the groups of hospitals from which a patient would be likely to choose, we used publicly available data on hospital referral regions (HRRs), as defined by the Dartmouth Atlas of Health Care. For each HRR, we determined the median SCIP score among its hospitals and described the distribution of hospitals' SCIP performance around that median. The primary endpoint of this study was the extent to which the SCIP SSI data available at http://www.medicare.gov/hospitalcompare fulfill their stated purpose of providing a basis on which patients can choose among local hospitals for their own nonemergent surgical care.

Materials and Methods


Dataset

We obtained publicly reported adherence rates for SCIP-1, SCIP-2, and SCIP-3 for the first quarter of 2012 from the Department of Health and Human Services' Hospital Compare website, using the Hospital Compare data file available at http://data.Medicare.gov. We chose to focus on SCIP 1–3 because health information technologies have made electronic medication data entry nearly universal, so these measures offer hospitals a more easily measured and traceable data series. In addition, the SCIP 1–3 measures apply to all surgical patients, whereas many of the other SCIP measures apply to only a subset of patients.
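
For readers who want to work with the same flat file, the following is a minimal sketch of pulling the SCIP rows out of a Hospital Compare download in Python (the published analysis used SAS). The filename, column names, and measure IDs shown here are illustrative assumptions; the actual layout varies across releases of the data.medicare.gov extract.

```python
# Hedged sketch: load a Hospital Compare process-of-care extract and keep
# the three SCIP SSI prevention measures. File and column names are
# assumptions, not the exact layout of any particular release.
import pandas as pd

df = pd.read_csv("hospital_compare_process_of_care.csv",
                 dtype={"provider_id": str, "zip": str})

scip_ids = ["SCIP-INF-1", "SCIP-INF-2", "SCIP-INF-3"]  # assumed measure IDs
scip = df[df["measure_id"].isin(scip_ids)].copy()

# Suppressed rates arrive as text (e.g., "Not Available"); coerce to
# numeric and drop them.
scip["score"] = pd.to_numeric(scip["score"], errors="coerce")
scip = scip.dropna(subset=["score"])
```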

Hospitals submit data to CMS on a voluntary basis, though CMS reduces Medicare reimbursements by 2 percent for hospitals that fail to report (DHHS). For each measure, the reported adherence rate is based on the total number of qualifying cases treated by the hospital; for hospitals with a large caseload, the rate may be based on a random sample of qualifying cases. Adherence rates for hospitals with fewer than 25 total cases are not publicly reported because CMS believes that the calculated rate may not accurately predict the hospital's future performance. Each quarter, CMS verifies the data submitted by a selected group of hospitals (QualityNet). Each SCIP measure is reported individually on the http://www.medicare.gov/hospitalcompare website; no composite performance score across measures is available to the website's users.
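
The small-denominator suppression rule is straightforward to mirror when preparing the data. In the public file these rows already arrive suppressed, but a sketch of the check, assuming a hypothetical `denominator` column holding the number of qualifying cases, looks like this:

```python
# CMS does not report rates built on fewer than 25 qualifying cases.
# "denominator" is an assumed column name; in the published file these
# rows simply appear as "Not Available".
MIN_CASES = 25
reportable = scip[scip["denominator"] >= MIN_CASES]
```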

Hospital Referral Regions

To determine groups of hospitals from which patients are likely to choose, we used the 2010 HRRs developed by the Dartmouth Atlas of Health Care Project (The Dartmouth Institute for Health Policy and Clinical Practice). The HRRs defined by the Dartmouth Atlas have been used in a wide array of studies on health care quality and resource utilization (Fisher et al. 2003a,b; Song et al. 2010; Donohue et al. 2012; Landon et al. 2012). HRRs represent health care markets for tertiary care and are defined based upon historical Medicare billing data from patients living in a geographical area. Each HRR contains one or more tertiary care medical centers as well as several referral hospitals from the surrounding area.
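
The Dartmouth Atlas distributes crosswalk files that map ZIP codes to hospital service areas and HRRs. A hedged sketch of assigning each hospital to an HRR with such a crosswalk follows; the file and column names ("ziphsahrr.csv", "zipcode", "hrrnum") are assumptions about one release of the crosswalk.

```python
# Hedged sketch: attach an HRR to each hospital via a Dartmouth Atlas
# ZIP-to-HRR crosswalk, joining on the hospital's 5-digit ZIP code.
import pandas as pd

crosswalk = pd.read_csv("ziphsahrr.csv", dtype={"zipcode": str})
scip["zip"] = scip["zip"].str.zfill(5)  # normalize to 5-digit ZIPs
scip = scip.merge(crosswalk[["zipcode", "hrrnum"]],
                  left_on="zip", right_on="zipcode", how="inner")
```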

Graphical Representations

For graphical representations of HRRs and the hospitals within them, we used ESRI ArcGIS geographic information system (GIS) software together with the HRR boundary definitions from the Dartmouth Atlas of Health Care. Hospitals located in the same HRR were compared with one another in terms of reported SCIP-1, SCIP-2, and SCIP-3 adherence.
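
For readers without an ArcGIS license, a rough open-source equivalent of the Figure 1A choropleth can be sketched with geopandas. This assumes an HRR boundary shapefile keyed by the same `hrrnum` used above; the boundary filename and key are assumptions about how the Dartmouth Atlas boundary file is organized.

```python
# Hedged sketch of a Figure 1A-style choropleth using geopandas rather
# than ArcGIS, shading each HRR by its median reported SCIP-1 adherence.
import geopandas as gpd
import matplotlib.pyplot as plt

hrr_shapes = gpd.read_file("hrr_boundaries.shp")  # assumed boundary file
medians = (scip[scip["measure_id"] == "SCIP-INF-1"]
           .groupby("hrrnum")["score"].median()
           .rename("median_scip1").reset_index())

ax = hrr_shapes.merge(medians, on="hrrnum").plot(
    column="median_scip1", legend=True, figsize=(10, 6))
ax.set_title("Median reported SCIP-1 adherence by HRR")
plt.show()
```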

Statistical Analysis

We first counted the number of hospitals located within each HRR. For all hospitals within an HRR, we calculated the median and mean reported score. In addition, for HRRs with five or more hospitals, we characterized the distribution of hospitals' performance around the median and mean by calculating the interquartile range (IQR) and standard deviation of reported adherence rates for SCIP-1, SCIP-2, and SCIP-3. For those regions with five or more hospitals, we also calculated skewness as a measure of the symmetry of the distribution of SCIP scores within each HRR, to describe whether the majority of hospitals in each region clustered toward the higher or lower performing end. We calculated skewness using the adjusted Fisher-Pearson standardized moment coefficient and divided HRRs into categories of skewness based on commonly used cutoffs. Finally, for HRRs containing five or more hospitals, we determined the number of positive outlier hospitals (i.e., unusually excellent performers) and negative outlier hospitals (i.e., unusually poor performers) within each HRR to describe SCIP's ability to clearly delineate top- and bottom-performing hospitals for patients. Outliers were defined as hospitals whose reported adherence rate fell more than 1.5 times the IQR below the first quartile or above the third quartile. Statistical analysis was performed with SAS v9.2 (SAS Institute, Cary, NC).
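
As a concrete illustration, the per-HRR summaries described above can be reproduced in a few lines of Python; this is a minimal sketch assuming the tidy `scip` frame built earlier, with one row per hospital and columns `hrrnum` and `score`. pandas' `Series.skew()` implements the adjusted Fisher-Pearson standardized moment coefficient used here.

```python
# Sketch of the per-HRR statistics: median, mean, IQR, adjusted
# Fisher-Pearson skewness, and 1.5 * IQR (Tukey-fence) outlier counts.
# Shape statistics are computed only for HRRs with >= 5 hospitals.
import pandas as pd

def summarize_hrr(scores: pd.Series) -> pd.Series:
    q1, med, q3 = scores.quantile([0.25, 0.5, 0.75])
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # Tukey fences
    enough = scores.size >= 5
    return pd.Series({
        "n_hospitals": scores.size,
        "mean": scores.mean(),
        "median": med,
        "iqr": iqr if enough else float("nan"),
        "skewness": scores.skew() if enough else float("nan"),
        "n_neg_outliers": int((scores < low).sum()) if enough else 0,
        "n_pos_outliers": int((scores > high).sum()) if enough else 0,
    })

scip1 = scip[scip["measure_id"] == "SCIP-INF-1"]  # one measure at a time
hrr_stats = scip1.groupby("hrrnum")["score"].apply(summarize_hrr).unstack()

# Skewness categories at the commonly used cutoffs named in the text.
cuts = [-float("inf"), -1.0, -0.5, 0.5, 1.0, float("inf")]
labels = ["highly negative", "moderately negative", "approx. symmetric",
          "moderately positive", "highly positive"]
hrr_stats["skew_category"] = pd.cut(hrr_stats["skewness"],
                                    bins=cuts, labels=labels)
```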

Results


The total number of hospitals included in the analysis was 2,953 for SCIP-1, 2,956 for SCIP-2, and 2,942 for SCIP-3. These hospitals were located within 305 unique HRRs. The median number of hospitals per HRR was 6, and the mean was 9.7.

Figure 1 demonstrates the median and IQR of SCIP-1 scores across HRRs. Overall, the median SCIP-1 score among all hospitals was 98.0 percent (IQR: 96.0 percent, 99.0 percent). Hospitals reported high rates of adherence for SCIP-1 with 97.0 percent of HRRs having a median hospital adherence rate of at least 95 percent (Figure 1A and B). More than half of HRRs had an IQR of SCIP-1 scores that was 2 percent or less (Figure 1C). Ninety-three percent of HRRs had an IQR of 5 percent or less. Hospital-reported SCIP-2 and SCIP-3 adherence rates demonstrated similar patterns across HRRs with high median rates of adherence and tight IQRs (Appendix 1).
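
As a usage note, the headline shares in this paragraph can be checked directly against the hypothetical `hrr_stats` frame from the Methods sketch (again, an illustrative pipeline, not the published SAS code):

```python
# Recompute the headline shares from the hedged hrr_stats frame built in
# the Methods sketch. Thresholds mirror those reported in the text.
pct_median_ge_95 = (hrr_stats["median"] >= 95).mean() * 100
five_plus = hrr_stats[hrr_stats["n_hospitals"] >= 5]
pct_iqr_le_5 = (five_plus["iqr"] <= 5).mean() * 100
print(f"HRRs with median SCIP-1 >= 95%: {pct_median_ge_95:.1f}%")
print(f"HRRs (5+ hospitals) with IQR <= 5 points: {pct_iqr_le_5:.1f}%")
```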

Figure 1. Medians and IQRs of Reported SCIP-1 Adherence across HRRs. (A) Map of median reported adherence rates among hospitals within each HRR across the United States. (B) Distribution of median hospital reported adherence within HRRs. (C) IQRs of reported adherence for hospitals within the same HRR. HRR, hospital referral region; IQR, interquartile range; SCIP, Surgical Care Improvement Project


Figure 2 demonstrates skewness and the presence of negative and positive outliers for HRRs with five or more hospitals. Sixty-two percent of HRRs demonstrated highly negative skewness (≤−1.0), and 17.4 percent demonstrated moderately negative skewness (−0.5 to −1.0; Figure 2A and B); 16.4 percent of HRRs demonstrated an approximately symmetric distribution (−0.5 to 0.5); only 2.8 percent demonstrated moderately positive skewness (0.5–1.0), and 4.2 percent demonstrated highly positive skewness (≥1.0). Moreover, 42.0 percent of HRRs had no outliers at all, while 37.6 percent had a single negative outlier (Figure 2C). Only 1.0 percent of HRRs had a positive outlier. Hospital-reported SCIP-2 and SCIP-3 adherence rates demonstrated similar patterns across HRRs, with the majority demonstrating highly negatively skewed distributions and either zero or a single negative outlier hospital (Appendix 2).

Figure 2. Skewness and Outliers of Reported SCIP-1 Adherence across HRRs. (A) Map of skewness of hospital reported adherence rates within each HRR across the United States. (B) Distribution of skewness of hospital reported adherence within HRRs. (C) Percentage of HRRs that contain zero or more positive and negative outliers. HRR, hospital referral region; SCIP, Surgical Care Improvement Project


Boxplots demonstrating reported SCIP-1, SCIP-2, and SCIP-3 adherence rates for each HRR can be found in Appendix 3.

Discussion


Among the more than 2,900 hospitals that publicly report their performance on surgical quality-of-care measures, hospitals that patients are likely to compare when selecting a surgical provider generally performed very well and demonstrated little variation. Moreover, hospital performance within HRRs tended to be highly negatively skewed, with the majority of HRRs containing zero outliers or a single negative outlier hospital. Thus, publicly reported surgical quality-of-care measures, which CMS hopes patients will use to identify a preferred destination for their surgical care, do not differentiate the vast majority of hospitals that a patient is likely to choose from when selecting a provider. The SCIP measures that we studied are more successful, however, at singling out a poorly performing hospital and thus may be of some use as a patient-oriented tool for making health care decisions.

To our knowledge, this is the first study to examine what patients actually view when using publicly reported surgical quality measures. From the patient's perspective, the usefulness of the SCIP SSI prevention measures depends strongly on whether the hospitals that the patient will compare are distinguished from one another on the basis of their performance. Unfortunately, studies on the ability of public reporting to affect patient decision making have not kept pace with the rapid growth of publicly reported measures. One of the few studies that have examined the impact of public reporting demonstrated that, since Medicare's initiation of public reporting, patients have not migrated to hospitals that performed better on quality measures (Ryan, Nallamothu, and Dimick 2012). Our study suggests one explanation for this trend: public reporting does not successfully distinguish among the top-performing hospitals that a patient may realistically be expected to choose from. Other studies have shown that the consistency and methodological quality of different public reporting websites are poor (Leonardi, McGory, and Ko 2007). As a result, greater attention must be paid to improving public reporting so that it appropriately empowers patients to find high-quality care.

With SSI rates largely unaltered in recent years despite steady improvement in SCIP compliance, there is an urgent opportunity for CMS to transform its SSI prevention measures so that public reporting is more relevant to patients (Hawn et al. 2011; Stulberg et al. 2010). There may be other aspects of surgical care beyond antibiotic prophylaxis that impact SSIs and, when measured and publicly reported, truly differentiate hospitals (Bratzler 2006). Such a distinction among hospitals would allow patients to learn which hospitals have higher quality practices and allow hospitals to learn where improvements are needed. For example, patients and providers may have an interest in how hospitals are performing in the organization and safety of the operating environment or the attention paid to postoperative wound care.

Another opportunity may lie in directly reporting hospital outcomes, such as SSI rates, rather than performance on the processes of care intended to prevent SSIs, such as preoperative antibiotic timing. Studies have shown that outcomes have not been affected by SCIP adherence, making the reporting of surrogate process measures less useful to patients (Hawn et al. 2011; Ingraham et al. 2010). In fact, the National Quality Forum has endorsed a risk-adjusted, nationally benchmarked SSI outcome measure (ACS 2012). Compared with the current SCIP measures, an SSI outcome measure may be better able to distinguish hospital quality directly, in a way that is easier for patients to understand. Hospitals would likely be incentivized toward a more pragmatic, institution-wide effort at reducing SSIs, rather than narrowly focusing on adherence to process-of-care measures without regard to clinical context (e.g., redosing antibiotics at minute 61 without apparent clinical indication purely for the sake of SCIP-1 compliance). The efficacy of a systematic redesign of the care process around outcome-oriented efforts has been demonstrated in the literature (Wick et al. 2012).
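
One common construction of such a risk-adjusted outcome measure (not necessarily the one NQF endorsed) is an indirectly standardized observed-to-expected ratio: fit a patient-level risk model, sum each hospital's predicted SSIs, and divide observed by expected. Below is a self-contained sketch on synthetic data; the covariates and coefficients are entirely hypothetical and serve only to illustrate the construction.

```python
# Hedged sketch: an indirectly standardized SSI ratio (observed/expected)
# on synthetic data. All covariates and coefficients are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
patients = pd.DataFrame({
    "hospital_id": rng.integers(0, 20, n),
    "age": rng.normal(60, 12, n),
    "asa_class": rng.integers(1, 5, n),  # ASA physical status 1-4
})
# Synthetic SSI outcome with a mild risk gradient in age and ASA class.
logit = -4.0 + 0.02 * patients["age"] + 0.3 * patients["asa_class"]
patients["ssi"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Patient-level risk model -> expected probability of SSI per case.
X = sm.add_constant(patients[["age", "asa_class"]])
fit = sm.Logit(patients["ssi"], X).fit(disp=0)
patients["expected"] = fit.predict(X)

# Standardized infection ratio per hospital: observed / expected SSIs.
# A ratio above 1 flags more infections than the case mix predicts.
sir = patients.groupby("hospital_id").apply(
    lambda g: g["ssi"].sum() / g["expected"].sum()
)
print(sir.sort_values().round(2))
```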

An even more readily attainable goal may lie in the public reporting of all-or-none composite measures that combine the individual SCIP measures. Composite measures have been shown to be associated with reduced SSI rates and may simultaneously help differentiate hospitals' performance for patients (Stulberg et al. 2010). Reporting a single composite score may also improve the readability of the surgical quality measures on Hospital Compare; currently, there are more than 11 different measures, each with its own accompanying explanation that the patient must read and consider. It is important to note, however, that studies have demonstrated that bundling together multiple care practices that have individually been shown to improve outcomes may not itself lead to better outcomes (Anthony et al. 2011).
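
To make the all-or-none idea concrete, here is a minimal sketch with toy case-level data (real SCIP chart-abstraction data are not public at the case level): a case counts toward the composite numerator only when every individual measure is met.

```python
# Hedged sketch of an all-or-none composite: the hospital's score is the
# percentage of cases in which all three SCIP measures were satisfied.
# Column names and values are toy examples.
import pandas as pd

cases = pd.DataFrame({
    "hospital_id": ["A", "A", "B", "B", "B"],
    "scip1_met": [1, 1, 1, 0, 1],
    "scip2_met": [1, 0, 1, 1, 1],
    "scip3_met": [1, 1, 1, 1, 1],
})
cases["all_met"] = cases[["scip1_met", "scip2_met", "scip3_met"]].all(axis=1)
composite = cases.groupby("hospital_id")["all_met"].mean() * 100
print(composite)  # percent of cases with every measure met, per hospital
```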

Our study is subject to several limitations. First, we studied only the subset of SCIP measures concerned with surgical site infections. In addition, for HRRs with fewer than five hospitals, we did not determine the IQR, skewness, or presence of outliers, because we did not believe these statistics could be calculated reliably with fewer than five hospitals. We did report these statistics for HRRs with five or more hospitals, which comprised more than two-thirds of all HRRs, and trends in reported SCIP adherence were consistent across all of the SCIP measures that we studied; we have no reason to believe that trends would be substantially different in the HRRs with fewer than five hospitals. Another potential limitation of our study is that HRRs do not perfectly delineate the groups of hospitals from which patients choose when selecting a care provider. It is possible that patients choose among hospitals in neighboring or even more geographically disparate HRRs; particularly for very complex procedures offered at only a few centers of excellence, patients' willingness to travel farther and to cross HRR boundaries may increase. We believe, however, that HRRs delineate the groups of hospitals that patients are likely to choose from more reliably than do regions defined by city, county, or state borders, because HRRs are based on established historical patterns of migration for tertiary care. Moreover, given the high median scores and tight IQRs in the majority of HRRs, there is no reason to believe that our findings would change if patients' choice sets spanned multiple HRRs.

In conclusion, publicly reported surgical quality measures for SSI prevention do not meaningfully distinguish the majority of hospitals that patients are likely to choose from. With public reporting requirements likely to grow in the coming years, more studies are needed to describe and improve the ability of such reporting to positively impact patient decision making.

Acknowledgments


Joint Acknowledgment/Disclosure Statement: This research was funded in part by the following National Institutes of Health grants: T32 GM086287-01 (Niklason) from NIGMS, 1K23HL116641-01A1 (Schonberger) from NHLBI, and CTSA grant UL1 RR024139 from NCRR and NCATS. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the National Institutes of Health or the U.S. government.

Disclosures: None.

Disclaimers: None.

References

  • ACS. 2012. “National Quality Forum Endorses Two American College of Surgeons NSQIP Measures” [accessed on May 5, 2013]. Available at http://site.acsnsqip.org/news/national-quality-forum-endorses-two-american-college-of-surgeons-nsqip-measures/
  • Anthony, T., B. W. Murray, J. T. Sum-Ping, F. Lenkovsky, V. D. Vornik, B. J. Parker, J. E. McFarlin, K. Hartless, and S. Huerta. 2011. “Evaluating an Evidence-Based Bundle for Preventing Surgical Site Infection: A Randomized Trial.” Archives of Surgery 146 (3): 263–9.
  • Bratzler, D. W. 2006. “The Surgical Infection Prevention and Surgical Care Improvement Projects: Promises and Pitfalls.” American Surgeon 72 (11): 1010–6; discussion 1021–30, 1133–48.
  • Bratzler, D. W., and P. M. Houck. 2004. “Antimicrobial Prophylaxis for Surgery: An Advisory Statement from the National Surgical Infection Prevention Project.” Clinical Infectious Diseases 38 (12): 1706–15.
  • Bratzler, D. W., and P. M. Houck. 2005. “Antimicrobial Prophylaxis for Surgery: An Advisory Statement from the National Surgical Infection Prevention Project.” American Journal of Surgery 189 (4): 395–404.
  • The Dartmouth Institute for Health Policy and Clinical Practice. 2013. “The Dartmouth Atlas of Health Care” [accessed on May 5, 2013]. Available at http://www.dartmouthatlas.org/data/region
  • DHHS. 2013a. “What is Hospital Compare?” [accessed on September 21, 2013]. Available at http://hospitalcompare.hhs.gov/About/WhatIs/What-Is-HOS.aspx
  • DHHS. 2013b. “Report to Congress: National Strategy for Quality Improvement in Health Care” [accessed on May 8, 2013]. Available at http://www.healthcare.gov/news/reports/quality03212011a.html
  • Donohue, J. M., N. E. Morden, W. F. Gellad, J. P. Bynum, W. Zhou, J. T. Hanlon, and J. Skinner. 2012. “Sources of Regional Variation in Medicare Part D Drug Spending.” New England Journal of Medicine 366 (6): 530–8.
  • Fisher, E. S., D. E. Wennberg, T. A. Stukel, D. J. Gottlieb, F. L. Lucas, and E. L. Pinder. 2003a. “The Implications of Regional Variations in Medicare Spending. Part 1: The Content, Quality, and Accessibility of Care.” Annals of Internal Medicine 138 (4): 273–87.
  • Fisher, E. S., D. E. Wennberg, T. A. Stukel, D. J. Gottlieb, F. L. Lucas, and E. L. Pinder. 2003b. “The Implications of Regional Variations in Medicare Spending. Part 2: Health Outcomes and Satisfaction with Care.” Annals of Internal Medicine 138 (4): 288–98.
  • Hawn, M. T., C. C. Vick, J. Richman, W. Holman, R. J. Deierhoi, L. A. Graham, W. G. Henderson, and K. M. Itani. 2011. “Surgical Site Infection Prevention: Time to Move beyond the Surgical Care Improvement Program.” Annals of Surgery 254 (3): 494–9; discussion 499–501.
  • Ingraham, A. M., M. E. Cohen, K. Y. Bilimoria, J. B. Dimick, K. E. Richards, M. V. Raval, L. A. Fleisher, B. L. Hall, and C. Y. Ko. 2010. “Association of Surgical Care Improvement Project Infection-Related Process Measure Compliance with Risk-Adjusted Outcomes: Implications for Quality Measurement.” Journal of the American College of Surgeons 211 (6): 705–14.
  • Landon, B. E., N. L. Keating, M. L. Barnett, J. P. Onnela, S. Paul, A. J. O'Malley, T. Keegan, and N. A. Christakis. 2012. “Variation in Patient-Sharing Networks of Physicians across the United States.” Journal of the American Medical Association 308 (3): 265–73.
  • Leonardi, M. J., M. L. McGory, and C. Y. Ko. 2007. “Publicly Available Hospital Comparison Web Sites: Determination of Useful, Valid, and Appropriate Information for Comparing Surgical Quality.” Archives of Surgery 142 (9): 863–8; discussion 868–9.
  • QualityNet. 2013. “Data Validation Overview” [accessed on May 8, 2013]. Available at http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier2&cid=1140537255912
  • Ryan, A. M., B. K. Nallamothu, and J. B. Dimick. 2012. “Medicare's Public Reporting Initiative on Hospital Quality Had Modest or No Impact on Mortality from Three Key Conditions.” Health Affairs 31 (3): 585–92.
  • Song, Y., J. Skinner, J. Bynum, J. Sutherland, J. E. Wennberg, and E. S. Fisher. 2010. “Regional Variations in Diagnostic Practices.” New England Journal of Medicine 363 (1): 45–53.
  • Stulberg, J. J., C. P. Delaney, D. V. Neuhauser, D. C. Aron, P. Fu, and S. M. Koroukian. 2010. “Adherence to Surgical Care Improvement Project Measures and the Association with Postoperative Infections.” Journal of the American Medical Association 303 (24): 2479–85.
  • Wick, E. C., D. B. Hobson, J. L. Bennett, R. Demski, L. Maragakis, S. L. Gearhart, J. Efron, S. M. Berenholtz, and M. A. Makary. 2012. “Implementation of a Surgical Comprehensive Unit-Based Safety Program to Reduce Surgical Site Infections.” Journal of the American College of Surgeons 215 (2): 193–200.

Supporting Information

hesr12164-sup-0001-Appendix1-3.docx (Word document, 2455K), containing:

Appendix 1: SCIP-2 Adherence Rates within HRRs across the United States.

Appendix 2: SCIP-3 Adherence Rates within HRRs across the United States.

Appendix 3: SCIP-1, SCIP-2, and SCIP-3 Adherence Rates within HRRs across the United States.

hesr12164-sup-0002-AuthorMatrix.pdf (PDF, 1179K): Appendix SA1: Author Matrix.

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.