Keywords:

  • program evaluation;
  • public health;
  • government programs;
  • early detection of cancer

Abstract

BACKGROUND: Little empirical evidence exists about the effectiveness of performance management systems in government. This study assessed the effectiveness of the performance management system of the National Breast and Cervical Cancer Early Detection Program (NBCCEDP) and explored why it works. METHODS: Generalized estimating equation models were used to assess change in program performance after the implementation of a performance management system. In addition, qualitative case study data including observations, interviews, and document review were analyzed using inductive methods. RESULTS: Five of the 7 indicators tested had statistically significant increases in performance postimplementation. Case study results suggest that the system is characterized by high-quality data, measures viewed by grantees as meaningful and fair, and institutionalized data use. CONCLUSIONS: Several factors help to explain the system's effectiveness including characteristics of the NBCCEDP program (eg, service delivery program), qualities of the indicators (eg, process level), financial investment in the system, and a culture of data use. Cancer 2014;120(16 suppl):2566-74. © 2014 American Cancer Society.


INTRODUCTION

Performance management is one of the most widely adopted public management reforms in decades.[1] The approach involves the continuous practice of several independent processes related to planning, measurement, analysis, and data use to strengthen accountability, improve program effectiveness, and support policy-related decision making.[2] Carolyn Heinrich,[3] who has written extensively about performance management, addresses its adoption in the public sector: “The rise of the development of performance management systems and practices has been nothing short of meteoric; both nationally and locally, performance management is now a goal or function of most governmental and nongovernmental organizations, and in many countries, legislation and cabinet-level entities have been created to support it” (p.256). Performance management is widely practiced in government, in part because of the 1993 Government Performance and Results Act (GPRA)[4] and former President George W. Bush's President's Management Agenda, which included the Program Assessment Rating Tool (PART) process.[5] GPRA and PART reflect an explicit emphasis on outcomes rather than outputs or processes as a means to improve government accountability and transparency to the public.

Central to performance management is performance measurement, “the process of defining, monitoring, and using objective indicators of the performance of organizations and programs on a regular basis.”[6] Performance measurement is not without its critics, however. The empirical basis for the practice is not well established,[7, 8] and its underlying assumptions have been questioned.[9-11] Radin[11] identified a number of suppositions for measurement that are often not realized, including that program goals can be clearly defined; that outcomes can be specified, quantified, and measured; that outcomes are controllable; and that data are available and valid. Several studies of federal agencies' experience with GPRA and other performance measurement systems have confirmed these concerns.[9, 12, 13]

Nevertheless, there is extensive literature describing the benefits and purposes of performance management and approaches to developing performance measurement systems, including the construction of performance measures.[6, 14, 15] However, no studies have evaluated the effectiveness of performance management in federal-level public health programs in the United States. The National Breast and Cervical Cancer Early Detection Program (NBCCEDP) introduced a comprehensive performance management system in 2006, including a set of 11 performance measures, to improve program management and grantee performance. The NBCCEDP is described in depth elsewhere,[16] including in this supplemental issue of Cancer.[17] Our study addressed 2 research questions:

  1. Is the NBCCEDP performance management system effective in improving program performance?
  2. What are the key characteristics of the NBCCEDP performance management system that might explain why the system is or is not effective?

Background

The NBCCEDP was established in 1991 and has grown over time.[18] In fiscal year 2010, the Centers for Disease Control and Prevention (CDC) awarded more than $161 million to 68 state, tribal, and territorial grantees, with awards ranging from $86,179 to $9,031,859 (median award of $2,007,950). The goal of the NBCCEDP is to reduce morbidity and mortality related to breast and cervical cancer among low-income, underinsured women through the provision of breast and cervical cancer screening. The NBCCEDP is carried out through its network of grantees and the more than 10,000 local health care clinics or systems with which grantees subcontract.

Since the program's inception, CDC has required grantees to report a client-level record of services provided using a set of approximately 100 standardized data elements, the minimum data elements (MDEs), necessary to monitor client demographics for program eligibility and clinical outcomes of women screened with NBCCEDP funds. CDC requires grantees to support a data management system, including personnel, for the collection and reporting of MDE data. A CDC-funded contractor aggregates the data and maintains a team of technical consultants who assist grantees with data management-related issues. Technical consultants work closely with CDC staff, participating in regular grantee conference calls and site visits.[19]

In 1994, CDC developed a report based on these data called the Data Quality Indicator Guide (DQIG) that includes 27 indicators with predetermined benchmarks to evaluate both data quality and quality of care delivered to women screened in the program. A summary of the DQIG measurement categories is provided in Table 1. The DQIG, along with other reports, is provided to grantees semiannually as part of a standard data review process led by CDC.

Table 1. Data Quality Indicator Guide for the National Breast and Cervical Cancer Early Detection Program — Categories of Indicators

Patient Demographics   | Cervical Screening Data    | Breast Screening Data
County of screening    | Previous screening         | Previous screening
Zip code of residence  | Test result                | Test result
Birth year             | Diagnostics                | Diagnostics
Race/ethnicity         | Completeness of follow-up  | Completeness of follow-up
Hispanic origin        | Final diagnosis            | Final diagnosis
                       | Cancer treatment status    | Cancer treatment status

Beginning in 2005, CDC initiated efforts to implement a performance management system based on the MDE data collected for program monitoring. That year, CDC identified 11 of the DQIG indicators as priority performance measures in an effort to bring increased attention to specific areas of clinical service delivery. A summary report of the performance measures was developed and provided to grantees as part of the twice-a-year data review. In that same year, CDC notified grantees that 9 of the 11 performance measures would be used in a performance-based funding process to inform annual awards and provided grantees the algorithms used to calculate the measures. The performance-based funding process incorporates a measure of need, based on the size of the NBCCEDP-eligible population for each grantee, along with measures of performance, including the number of performance measures met for the most recent year for which data are available. The approach is intended to support an equitable distribution of funds and to incentivize performance. Full implementation of the performance management system was achieved in early 2006, when CDC provided grantees with an edit software program that allows them to produce performance measurement reports at varied levels (eg, state, provider, county). Using this program, grantees can identify the individual patient records contributing to poor performance and address those issues proactively, independent of CDC.
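As an illustration of the kind of reporting this tooling enables (a minimal sketch, not the CDC edit program itself; the column names, values, and helper function below are hypothetical), a client-level extract can be rolled up into a performance report at any aggregation level, and the records pulling a measure below its standard can be listed:

import pandas as pd

# Hypothetical client-level screening records; these column names are illustrative
# and are not actual MDE element names.
records = pd.DataFrame({
    "client_id":   [1, 2, 3, 4, 5, 6],
    "county":      ["A", "A", "A", "B", "B", "B"],
    "provider_id": [10, 10, 12, 11, 11, 12],
    "age":         [52, 47, 66, 63, 58, 41],
})

STANDARD = 0.75  # Table 2: >=75% of screening mammograms provided to women >=50 years of age

def priority_population_measure(df, level):
    """Percentage of mammograms provided to women >=50, aggregated at the given level."""
    pct = df.assign(age_50_plus=df["age"] >= 50).groupby(level)["age_50_plus"].mean()
    report = pct.rename("pct_women_50_plus").to_frame()
    report["met_standard"] = report["pct_women_50_plus"] >= STANDARD
    return report

print(priority_population_measure(records, "county"))        # county-level report
print(priority_population_measure(records, "provider_id"))   # provider-level report

# Drill down to the individual records that pull the measure below the standard.
print(records[records["age"] < 50])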

The 11 NBCCEDP performance measures and related standards are summarized in Table 2. The measures include process-level indicators, short-term clinical outcomes, and intermediate clinical outcomes.

Table 2. The National Breast and Cervical Cancer Early Detection Program Core Program Performance Measures

Indicator Type | Performance Measure Description | Measure Type | Performance Measure Calculation | CDC Standard
Screening priority population measures | Cervical cancer screenings provided to priority population(a) | Process measure | Percentage of initial program Papanicolaou tests that are conducted among rarely or never-screened women | ≥20%
Screening priority population measures | Breast cancer screenings provided to priority population(a) | Process measure | Percentage of screening mammograms provided to women ≥50 years of age | ≥75%
Cervical cancer diagnostic measures | Complete diagnostic evaluation of abnormal cervical screenings(a) | Short-term outcome | Percentage of abnormal screening results with complete diagnostic follow-up | ≥90%
Cervical cancer diagnostic measures | Timely diagnostic evaluation of abnormal cervical screenings(a) | Short-term outcome | Percentage of abnormal screening results with time from screening test result to final diagnosis >60 days | ≤25%
Cervical cancer diagnostic measures | Treatment initiated for cervical cancers and precancerous cervical lesions | Intermediate-level outcome | Percentage of women diagnosed with HSIL, CIN2, CIN3, CIS, or invasive carcinoma with treatment started | ≥90%
Cervical cancer diagnostic measures | Timely treatment initiated for precancerous cervical lesions(a) | Intermediate-level outcome | Percentage of women diagnosed with HSIL, CIN2, CIN3, CIS with time from date of diagnosis to treatment started >90 days | ≤20%
Cervical cancer diagnostic measures | Timely treatment initiated for invasive cervical cancers(a) | Intermediate-level outcome | Percentage of women diagnosed with invasive carcinoma with time from date of diagnosis to treatment started >60 days | ≤20%
Breast cancer diagnostic measures | Complete diagnostic evaluation of abnormal breast screens(a) | Short-term outcome | Percentage of abnormal screening results with complete diagnostic follow-up | ≥90%
Breast cancer diagnostic measures | Timely diagnostic evaluation of abnormal breast screens(a) | Short-term outcome | Percentage of abnormal screening results with time from screening test result to final diagnosis >60 days | ≤25%
Breast cancer diagnostic measures | Treatment initiated for breast cancers | Intermediate-level outcome | Percentage of women diagnosed with breast cancer with treatment started | ≥90%
Breast cancer diagnostic measures | Timely treatment initiated for breast cancers(a) | Intermediate-level outcome | Percentage of women diagnosed with breast cancer with time from date of diagnosis to treatment started >60 days | ≤20%

(a) Performance measures used in performance-based funding and included in generalized estimating equation models.

MATERIALS AND METHODS

We used a parallel, mixed-methods approach[20] involving both quantitative and qualitative methods to address our 2 research questions. To determine the effectiveness of the NBCCEDP performance management system, generalized estimating equation (GEE) models for binary data[21] were fitted using MDE data to compare performance on key measures before and after the formal introduction of the NBCCEDP performance management system. For our qualitative component, previously collected case study data were analyzed to identify characteristics of the performance management system. As recommended by Teddlie and Tashakkori,[20] data for the quantitative and qualitative components were collected and analyzed independently. Below, we describe the quantitative methods followed by the qualitative methods.

Quantitative Methods

A quantitative analysis was conducted to measure changes in program performance before and after implementation of the performance management system (ie, priority measures identified, inclusion in a semiannual data review process, edit program released to grantees, use in performance-based funding). The 9 performance measures used in the performance-based funding process were included in the analysis (see Table 2). The first period spanned 7 MDE submissions, beginning in July 2002 and ending in October 2005, before full implementation of the performance management system. The second period spanned 7 MDE submissions, beginning in April 2006, after full implementation of the performance management system, and ending in April 2009. Sixty-six of the 68 grantees were included in the analysis; 2 grantees were excluded because they had not conducted screening for all relevant periods.

We assessed individual grantee performance for the 9 performance measures used in the performance-based funding process for each of the 14 MDE data submissions. Next, we defined a status indicator (met/not met) to reflect whether grantee performance met the established standard. The total sample size was 924 (66 grantees × 14 time periods) for the pre- and postimplementation periods combined, and this sample was used in the analysis of each performance indicator.
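A minimal sketch of this data construction step follows, using simulated values (the grantee identifiers and performance values are invented; the 75% threshold is the breast screening priority standard from Table 2, used here purely for illustration):

import numpy as np
import pandas as pd

# Simulated long-format panel: one row per grantee per MDE submission (66 x 14 = 924).
rng = np.random.default_rng(0)
grantees = [f"grantee_{i:02d}" for i in range(66)]

panel = pd.DataFrame(
    [(g, t) for g in grantees for t in range(14)],   # submissions 0-6 pre, 7-13 post
    columns=["grantee", "submission"],
)
panel["measure_value"] = rng.uniform(0.55, 0.95, len(panel))   # simulated grantee performance
panel["met"] = (panel["measure_value"] >= 0.75).astype(int)    # met/not met status indicator
panel["post"] = (panel["submission"] >= 7).astype(int)         # postimplementation covariate

assert len(panel) == 924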

For each of the 9 performance measures, we fitted a GEE model for binary data with a logit link function (SAS Institute Inc., Cary, NC) to model grantee performance status and evaluate the effect of implementation of the NBCCEDP performance management system. Each GEE model accounted for the correlation among the 14 performance status response variables (met/not met) from a grantee, and as recommended by Agresti,[22] an exchangeable working correlation structure was used. We defined a preimplementation and postimplementation time indicator as the model covariate, with the preimplementation period as the referent. For each GEE model, we calculated a 95% confidence interval for the odds ratio using the robust or empirical standard error.[21-23] Two performance measures were removed from the analysis given the lack of variability in the outcome (timely treatment initiated for invasive cervical cancers and for breast cancers); in both cases, all grantees met the standard both before and after performance management was implemented. All MDE data were approved for grantee collection and reporting by the CDC's institutional review board.
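The analysis itself was run in SAS; the sketch below shows an equivalent fit in Python's statsmodels, continuing the simulated panel from the previous sketch, with a binomial family (logit link), exchangeable working correlation, and the robust (empirical) covariance that statsmodels uses by default for GEE:

import numpy as np
import statsmodels.api as sm

# Binary GEE clustered on grantee, with the pre/post indicator as the only covariate.
exog = sm.add_constant(panel[["post"]])
model = sm.GEE(
    panel["met"],
    exog,
    groups=panel["grantee"],
    family=sm.families.Binomial(),            # logit link is the Binomial default
    cov_struct=sm.cov_struct.Exchangeable(),  # exchangeable working correlation
)
result = model.fit()

# Odds ratio and 95% Wald confidence interval for the postimplementation effect,
# computed from the fitted coefficient and its robust standard error.
beta = result.params["post"]
lo, hi = result.conf_int().loc["post"]
print(f"OR = {np.exp(beta):.2f}, 95% CI = ({np.exp(lo):.2f}, {np.exp(hi):.2f})")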

Qualitative Methods

Secondary data analysis was conducted using qualitative data originally collected in 2007-2008 as part of a multiple case study exploring the effects of networked governance on performance measurement. The original study represented unpublished dissertation research of the lead author and included the NBCCEDP as a “case.”[12]

The data used for the qualitative analysis were collected from 3 sources: in-depth, semistructured interviews, observations, and document review. Purposeful sampling was used to select interviewees who had the most extensive experience with the performance measurement system. In contrast to quantitative research, which often uses random sampling techniques, qualitative research typically involves purposeful or judgment sampling, with the principal aim of selecting information-rich participants who lead to the greatest understanding.[24] Twelve interviews, averaging 70 minutes each, were conducted with persons in varied roles working with the NBCCEDP (eg, CDC managers and program consultants who work with grantees, CDC staff involved in developing the performance management system, grantee program directors). The interview guide included questions in 5 areas: 1) goals of the NBCCEDP, 2) the implementation structure for the program, 3) the development of the performance management system, 4) the design of the system, and 5) the use of the performance data. Saturation was reached on the basis of the 12 interviews. Three formal observations were made, including one of a performance work-group meeting and another of a US Congressional hearing at which performance data were presented. Detailed field notes were completed for all observations. Fourteen documents were systematically reviewed using a standard abstraction form. Documents included the NBCCEDP policies and procedures manual for grantees, standard feedback reports of program data, and a report of an MDE data validation study.

For analysis, all interviews were audio-recorded with permission from participants and transcribed verbatim by a professional transcriptionist. Inductive techniques adopted from grounded theory were used to develop a detailed operational codebook, and all data were systematically coded[25] in Atlas.ti (Atlas.ti Scientific Software Development GmbH, Berlin, Germany). Several strategies were used to improve the validity and reliability of the study, including the researcher's immersion in the field, data and methodological triangulation, member checking, searching for disconfirming evidence, peer review, and maintenance of a detailed audit trail. The research protocol for the original study was approved by institutional review boards at both CDC and Georgia State University.

RESULTS

Quantitative Results

Table 3 shows the percentage of grantee submissions that met the performance measures during each of the 2 periods (ie, pre- and postimplementation of the performance management system), odds ratios, 95% confidence intervals, and Wald P values for the 2-sided hypothesis test assessing impact on program performance.

Table 3. Generalized Estimating Equation Analysis of the National Breast and Cervical Cancer Early Detection Program Performance Before and After Full Implementation of Performance Management (PM) System(a)

Performance Measure | % Met Pre-PM (n = 462) | % Met Post-PM (n = 462) | Post-PM Odds Ratio | Post-PM 95% CI | Post-PM Wald P Value
Breast cancer screenings provided to priority population | 62.8 | 76.8 | 1.97 | 1.34-2.88 | .0005
Cervical cancer screenings provided to priority population | 67.1 | 86.6 | 3.16 | 2.06-4.85 | <.0001
Complete diagnostic evaluation of abnormal cervical screenings | 77.3 | 87 | 1.97 | 1.22-3.17 | .0053
Timely diagnostic evaluation of abnormal cervical screenings | 58.7 | 75.1 | 2.13 | 1.59-2.84 | <.0001
Timely treatment initiated for precancerous cervical lesions | 99.1 | 98.5 | 0.57 | 0.13-2.53 | .458
Complete diagnostic evaluation of abnormal breast screens | 78.1 | 95.5 | 5.88 | 3.33-10.36 | <.0001
Timely diagnostic evaluation of abnormal breast screens | 94.4 | 96.8 | 1.78 | 0.87-3.62 | .1138

(a) The preimplementation period (July 2002-October 2005) served as the referent period.

Results showed statistically significant improvements in performance for 5 of the 7 indicators. For each of these 5 indicators, the estimated odds ratio (OR) comparing the postimplementation period with the preimplementation period and the 95% Wald confidence limits for the odds ratio were above the null hypothesis value of 1. Improvement was greatest for complete diagnostic evaluation of abnormal breast screens, for which the odds of meeting the standard increased most after the introduction of the performance management system. For this indicator, the estimated OR comparing the postimplementation period with the preimplementation period was 5.88, with a 95% Wald confidence interval of 3.33-10.36. Improvements were also realized in reaching the priority populations for both breast cancer screening (women ages 50-64 years; OR, 1.97 [1.34-2.88]) and cervical cancer screening (women rarely or never screened; OR, 3.16 [2.06-4.85]). In addition, performance improved for complete diagnostic evaluation of abnormal cervical cancer screenings (OR, 1.97 [1.22-3.17]) and timely diagnostic evaluation of abnormal cervical cancer screenings (OR, 2.13 [1.59-2.84]).
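For readers who want to trace these values, each odds ratio and Wald interval follows directly from the fitted GEE coefficient and its robust standard error:

$$\widehat{OR} = e^{\hat\beta}, \qquad 95\%\ \mathrm{CI} = \left(e^{\hat\beta - 1.96\,\widehat{SE}},\ e^{\hat\beta + 1.96\,\widehat{SE}}\right).$$

Taking the complete-breast-evaluation indicator as a worked example (values rounded), the reported interval implies

$$\hat\beta = \ln(5.88) \approx 1.77, \qquad \widehat{SE} \approx \frac{\ln(10.36) - \ln(3.33)}{2 \times 1.96} \approx 0.29, \qquad e^{1.77 \pm 1.96 \times 0.29} \approx (3.3,\ 10.4),$$

which is consistent with the 3.33-10.36 interval reported in Table 3.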

Qualitative Results

We identified 4 characteristics of the NBCCEDP performance management system: 1) measures viewed as meaningful, 2) measures that grantees perceive as fair, 3) high-quality data, and 4) institutionalized use of performance measurement data. Each of these is described below.

Measures viewed as meaningful

Rather than assuming a broad population-based perspective, the NBCCEDP program goals emphasize accountability for the women served through the program. Data suggest that stakeholders share a deep commitment to those goals and feel a unique responsibility to the NBCCEDP patients. One participant said, “Our program has a direct impact on the lives of people, so that elevates it to a higher level of importance in my view.”

The performance measures mirror the program goals, directing focus toward screening women at highest risk and providing complete and timely clinical follow-up. That alignment translates to measures that are perceived by stakeholders as especially meaningful and appropriately reflecting program priorities. In addition, the performance measures have ethical relevance for stakeholders given that the NBCCEDP addresses breast and cervical cancer screening. One participant said, “When you're screening for a disease that's life threatening, you just have to do this [monitor program performance]. I mean, you can't be lax!” Consequently, the performance measures are viewed as offering evidence that grantees are meeting their obligation to women served by the NBCCEDP — an attribute valued by CDC and grantees alike.

Measures perceived as fair

In contrast to many other public health programs, the NBCCEDP is more easily monitored and evaluated given its focus on the women served in the program (rather than all women in the US population) and the ability to measure clinical service provision and related outcomes. Therefore, despite the inevitable variability across 68 different programs, CDC has identified a common set of performance measures appropriate and relevant for all. One participant stated, “This program [NBCCEDP] certainly has advantages in that we're able to quantify things in a way that other people [in public health] can't. But that's purely because we are a direct service delivery program.”

The NBCCEDP measures are also ones that grantees believe they can genuinely affect, that is, measures over which grantees feel a sense of control. The measures include process indicators as well as short- and intermediate-level clinical outcomes that are closely tied to the work of the grantees and their providers. Consequently, the measures are actionable. The sense of control that grantees feel over performance is also enhanced by the relatively strong authority relationships that exist between CDC and the grantees and between the grantees and their providers, where screening services are ultimately delivered. These relationships are typically codified through grants and contracts, formal mechanisms that ensure influence within a decentralized system that is otherwise prone to compromised authority. These funding mechanisms can therefore be used by CDC to enhance grantee accountability and by grantees to strengthen provider accountability.

Ultimately, this sense of control translates to a perception of the measures by grantees as “fair.” Grantees have, in fact, used the issue of fairness to argue for changes in measures. In one instance, CDC responded by modifying the calculation of a measure related to the timeliness of diagnostic follow-up so that it more closely reflected performance over which the grantees have direct control.

High-quality data

A central challenge to MDE data quality is the variability in data systems and data entry approaches used by the 68 NBCCEDP grantees. However, CDC supports a rigorous data quality system that incorporates detailed audit reports including summaries of data validity checks that are produced and reviewed together by CDC staff, the technical consultants, and the grantees. Following the review, the technical consultant prepares a list of action items that require investigation and response by the grantee. This cycle is an important component of CDC's data quality process. One person said of this review,

We look at a fairly current period of time and look at performance, identify any problems, [including] trend type problems, and have the grantee address it either as a program issue that CDC program consultant would deal with them on or as a data collection/reporting problem that our contractors could give advice on.

Institutionalized use of performance measurement data

NBCCEDP performance data are used extensively and in ways consistent with their intended purposes of accountability, program improvement, and budgeting. One person said,

I've been extraordinarily impressed with how its [data system] has become a very sophisticated MDE system, number 1. And I've been extraordinarily impressed with the fact that not only do we collect the data but we actually use it.

Data suggest that the collection, reporting, and use of the performance measures (and MDE data, overall) are deeply rooted within the NBCCEDP program culture. Several factors contribute to their integration in NBCCEDP management. First, the MDEs have been collected since the program's inception in 1991; hence, data collection and reporting have always been an important and valued program activity.

Second, the data component is well funded. Grantees are provided resources through their cooperative agreement award to support data collection and reporting (including a data manager), CDC provides direct technical support to grantees through its data management contract, and CDC staff include a senior data manager, programmer, and policy analyst who work with the data contractor to manage and analyze the MDE data, including the performance measures.

Third, the use of performance data has been promoted through CDC's provision of important tools to grantees. In 2005, the coding algorithms for the 11 performance measures were shared with grantees, followed in 2006 with an edit program that allows all grantees to produce performance reports in real time and share them with local-level administrators or providers. These tools have strengthened accountability throughout the decentralized implementation structure that comprises the NBCCEDP.

Finally, the introduction of performance-based funding in 2005 further institutionalized the use of these data into management practice, reinforcing the importance of program quality to grantees. Several interviewees suggested that grantee performance had been positively affected by the integration of performance data into the funding process.

When we started tying it [performance] to funding, performance improved on all the indicators. It just speaks to the value of incentives in this country — when you provide an incentive that you could get more money, or the incentive that you could lose some money if you don't meet these performance measures, then people take the extra step of making sure that their data itself is in good quality and that they are actually monitoring the quality of care that they're providing through this program.

All of these factors contribute to a “data-driven” program culture in which an expectation prevails that these data are an integral component of program operations and management. One person remarked,

If MDE data were collected and nobody touched it, nobody put useful performance reports together — well, all we would have is a database with numbers, you know what I mean? But the data are used at this level, at CDC and down at the program levels.

Limitations

Several study limitations must be acknowledged. First, CDC does not attempt to statistically assess grantee compliance with performance measures when data include fewer than 10 cases. Instead, CDC assumes the standard to be met. To assess the impact of this assumption, we performed a sensitivity analysis. We first deleted all data for grantee submissions that had a sample size of fewer than 10. Next, we performed another GEE analysis with the reduced data set. This analysis revealed larger odds ratios compared with those in our reported results. Therefore, our sensitivity analysis demonstrated that CDC's assumption for meeting the standard is not responsible for the reported odds ratio values significantly greater than 1.
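A rough sketch of that sensitivity analysis, continuing the simulated panel and GEE setup from the Methods sketches (the case-count column is hypothetical; in the actual MDE data it would be the number of records contributing to the measure for a given submission):

import numpy as np
import statsmodels.api as sm

# Drop grantee submissions with fewer than 10 contributing cases (for which CDC
# assumes the standard is met), then refit the same GEE model on the reduced panel.
panel["n_cases"] = rng.integers(3, 200, size=len(panel))   # hypothetical case counts
reduced = panel[panel["n_cases"] >= 10]

sensitivity = sm.GEE(
    reduced["met"],
    sm.add_constant(reduced[["post"]]),
    groups=reduced["grantee"],
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()

# Compare with the full-data odds ratio to check whether the <10-case assumption drives the result.
print(np.exp(sensitivity.params["post"]))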

Second, our study results cannot be generalized beyond the NBCCEDP. The qualitative findings, however, provide insights about performance management that may have relevance for others. Third, the case study data were originally collected on the basis of research questions different from the ones posed for this study.

DISCUSSION

The purpose of this study was to assess the effectiveness of the NBCCEDP performance management system and describe characteristics of that system that may explain why it did or did not have an effect. Although CDC has monitored NBCCEDP clinical service delivery since its inception in 1991, the 2006 implementation of a performance management system, including priority indicators used to inform program improvement and performance-based funding, represented a new management strategy. Results from our study suggest that this system has been effective in driving program improvements. The quantitative analyses presented here indicate statistically significant improvement in performance on 5 of the 7 indicators analyzed. Qualitative findings illuminate characteristics of the system including measures viewed as meaningful and perceived as fair, data of high quality, and an overall NBCCEDP program with institutionalized data use. Below, we integrate the findings from our mixed-methods study, using our qualitative findings to explain our quantitative results.[20]

Our qualitative results suggest several factors that may explain the apparent success of the NBCCEDP performance management system including 1) characteristics of the program itself, 2) qualities of the indicators used, 3) investments made in the system and the resulting capacity that has been developed over time at CDC and among grantees, and 4) a culture of performance data use that drives program improvement and program management.

First, because the NBCCEDP is a clinical service delivery program, patient-level outcomes have been relatively easy to define, quantify, and measure. This alone has given the NBCCEDP an edge in developing a performance management system compared with other programs that struggle to define indicators that can be quantified or for which valid data are available.[13] In addition, because the NBCCEDP is a service delivery program, common indicators could be identified despite extensive variability across the 68 grantees, a challenge that is typical for other public health programs.[26]

A second factor that may explain the effectiveness of the NBCCEDP performance management system involves qualities of the performance indicators themselves. Results highlight the importance of measures perceived by grantees as meaningful and fair. Most experts agree that performance measures should be developed with stakeholder involvement to engender support and ensure their utility.[6, 26] But previous studies of federal programs suggest this is not easily accomplished.[28] With a multitude of stakeholders at local, state, and national levels, the identification of measures that everyone can embrace is critical but often challenging. That the performance measures are perceived as meaningful and fair, closely reflecting NBCCEDP program goals, has likely contributed to their utilization by CDC and the grantees. In addition, CDC has selected measure types (ie, process, short-term clinical outcomes, intermediate clinical outcomes) that are actionable — that is, grantees are able to intervene quickly and make programmatic adjustments in response to performance reports.

Another factor contributing to the effectiveness of the NBCCEDP's performance management system is the financial investment made by CDC and the grantees in the system. CDC has developed a strong data management system and built staff capacity and related infrastructure through a variety of efforts including securing a data contractor, providing funding to grantees for data management staff and software systems, and ensuring ongoing technical assistance, training, and tools. This investment has contributed to ensuring high-quality data — an essential element for an effective system.[6]

Finally, over time, CDC has built a culture of data use, starting with the extensive data-monitoring system that has been in place since the NBCCEDP's earliest years. The system of MDE data collection and reporting is an important factor that has facilitated the implementation and success of performance management. Further, management practices such as the semiannual data review and the performance-based funding process have institutionalized data use in program operations. This institutionalized use probably serves as a significant incentive for the observed improvements in performance. Also, as Berman and Wang[29] have demonstrated, the use of performance data supports satisfaction with its impact, which in turn reinforces action.

Conclusions

Although the NBCCEDP may be a unique public health program, important lessons can be derived from the success of CDC's performance management system. First, other programs would benefit from building a data management system and related collection and reporting requirements before implementing performance management. Second, a high-functioning data monitoring and performance management system requires significant investment of time and financial resources. Third, stakeholder involvement in developing performance measures is essential to ensure that indicators are meaningful to all involved. Fourth, measures are more likely to be perceived as fair by grantees if they reflect performance over which grantees have direct control. Finally, institutionalizing the use of performance measures through practices such as performance-based budgeting contributes to building a data-driven culture that supports improved program management and practice.

FUNDING SUPPORT

This Supplement edition of Cancer has been sponsored by the U.S. Centers for Disease Control and Prevention (CDC), an agency of the Department of Health and Human Services, under Contract #200-2012-M-52408 00002. This study was supported by grants from the National Institutes of Health (234567 and 7650432).

REFERENCES

  • 1
Moynihan DP. The Dynamics of Performance Management: Constructing Information and Reform. Washington, DC: Georgetown University Press; 2008.
  • 2
Landrum LB, Baker SL. Managing complex systems: performance management in public health. J Public Health Manag Pract. 2004;10:13-18.
  • 3
    Heinrich CJ. Evidence-based policy and performance management: challenges and prospects in two parallel movements. Am Rev Public Adm. 2007;37:255-277.
  • 4
Government Performance and Results Act of 1993. Public Law 103-62.
  • 5
    U.S. Government Accountability Office. OMB's PART reviews increased agencies' attention to improving evidence of program results. Washington, DC: U.S. Government Accountability Office, GAO-06-67; 2005.
  • 6
Poister TH. Measuring Performance in Public and Nonprofit Organizations. San Francisco: Jossey-Bass; 2003.
  • 7
    Goddard M, Mannion R. The role of horizontal and vertical approaches to performance measurement and improvement in the U.K. public sector. Public Perform Manag Rev. 2004;28:75-95.
  • 8
    Jennings ET, Haist MP. Putting performance measurement in context. In: Ingraham PA, Lynn LE, eds. The Art of Governance: Analyzing Management and Administration. Washington, DC: Georgetown University Press; 2004:173-194.
  • 9
    Frederickson DG, Frederickson GH. Measuring the Performance of the Hollow State. Washington, DC: Georgetown University Press; 2006.
  • 10
    Mandell M, Keast R. Evaluating Network Arrangements: Toward Revised Performance Measures. Leuven, Belgium; 2006.
  • 11
    Radin BA. Challenging the Performance Movement. Washington, DC: Georgetown University Press; 2006.
  • 12
DeGroff A. New public management and governance collide: federal-level performance measurement in networked public management environments. Dissertation Abstracts International (UMI No. 3376270); 2009.
  • 13
U.S. General Accounting Office. Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. Washington, DC: U.S. General Accounting Office, GAO-04-38; 2004.
  • 14
    Newcomer KE. Using performance measurement to improve programs. New Directions for Evaluation. 1997;75:5-14.
  • 15
    Wholey JS. Making results count in public and nonprofit organizations: balancing performance with other values. In: Newcomer K, Jennings ET, Broom C, Lomax A, eds. Meeting the Challenges of Performance Oriented Government. Washington, DC: American Society for Public Administration, Center for Accountability and Performance; 2002:13-36.
  • 16
Henson RM, Wyatt SW, Lee NC. The National Breast and Cervical Cancer Early Detection Program: a comprehensive public health response to two major health issues for women. J Public Health Manag Pract. 1996;2:36-47.
  • 17
    Lee NC, Wong FL, Jamison PM, et al. Implementation of the National Breast and Cervical Cancer Early Detection Program: the beginning. Cancer. 2014;120(16 Suppl.):2540-2548.
  • 18
    National Breast and Cervical Cancer Early Detection Program. 2010. Available at: http://www.cdc.gov/cancer/nbccedp. Accessed November 19, 2010.
  • 19
    Yancy B, DeGroff A, Royalty J, Marroulis S, Mattingly C, Benard VB. Using data to effectively manage a national screening program. Cancer. 2014;120(16 Suppl.):2575-2583.
  • 20
    Teddlie C, Tashakkori A. Foundations of Mixed Methods Research. Thousand Oaks, CA: Sage; 2009.
  • 21
    Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73:13-22.
  • 22
    Agresti A. Categorical Data Analysis. 2nd ed. New York: John Wiley & Sons; 2002.
  • 23
Diggle PJ, Heagerty P, Liang KY, Zeger SL. Analysis of Longitudinal Data. 2nd ed. New York: Oxford University Press; 2002.
  • 24
Patton MQ. Qualitative Research and Evaluation Methods. Thousand Oaks, CA: Sage; 2002.
  • 25
    Charmaz K. Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. Thousand Oaks, CA: Sage; 2006.
  • 26
    DeGroff A, Schooley M, Chapel T, Poister TH. Challenges and strategies in applying performance measurement to federal public health programs. Eval Program Plann. 2010;33:365-372.
  • 27
    Hatry HP. Performance Measurement: Getting Results. Washington, DC: Urban Institute Press; 1999.
  • 28
    U.S. General Accounting Office. Managing for Results: Measuring Program Results That Are Under Limited Federal Control. Washington, DC: U.S. General Accounting Office, GAO/GGD-99-16; 1998.
  • 29
Berman E, Wang X. Performance measurement in U.S. counties: capacity for reform. Public Adm Rev. 2000;60:409-420.