Simulation Methods in Health Services Research: Applications for Policy, Management, and Practice
Profiling Provider Outcome Quality for Pay-for-Performance in the Presence of Missing Data: A Simulation Approach
Article first published online: 12 FEB 2013
© Health Research and Educational Trust
Health Services Research
Volume 48, Issue 2pt2, pages 810–825, April 2013
How to Cite
Ryan, A. M. and Bao, Y. (2013), Profiling Provider Outcome Quality for Pay-for-Performance in the Presence of Missing Data: A Simulation Approach. Health Services Research, 48: 810–825. doi: 10.1111/1475-6773.12038
- Issue published online: 8 MAR 2013
- Agency for Healthcare Research and Quality. Grant Number: K01 HS018546-01
- National Institute of Mental Health. Grant Number: K01 MH090087
Keywords: mental health; quality of care/patient safety (measurement); missing data
Provider profiling of outcome performance has become increasingly common in pay-for-performance programs. For chronic conditions, a substantial proportion of patients eligible for outcome measures may be lost to follow-up, potentially compromising outcome profiling. In the context of primary care depression treatment, we assess the implications of missing data for the accuracy of alternative approaches to provider outcome profiling.
We used data from the Improving Mood-Promoting Access to Collaborative Treatment trial and the Depression Improvement across Minnesota, Offering a New Direction initiative to generate parameters for a Monte Carlo simulation experiment.
The patient outcome of interest is the rate of remission of depressive symptoms at 6 months among a panel of patients with major depression at baseline. We considered two alternative approaches to profiling this outcome: (1) a relative, or tournament-style, threshold, set at the 80th percentile of remission rates among all providers, and (2) an absolute threshold, evaluating whether a provider's remission rate exceeds a specified value (30 percent). We performed a Monte Carlo simulation experiment to evaluate the total error rate (the proportion of providers who were incorrectly classified) under each profiling approach. The total error rate was partitioned into error from random sampling variability and error resulting from missing data. We then evaluated the accuracy of alternative profiling approaches under different assumptions about the relationship between missing data and depression remission.
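The simulation design described above can be sketched in miniature as follows. This is an illustrative reconstruction, not the authors' calibrated model: the panel size, number of providers, missingness rate, and the Beta distribution for latent provider quality are all assumptions chosen for demonstration. Providers are classified against their true (latent) status under each profiling rule, and misclassifications are averaged over replications to estimate a total error rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters -- illustrative assumptions, not the paper's values.
N_PROVIDERS = 100      # providers being profiled
PANEL_SIZE = 50        # depressed patients per provider panel
N_REPS = 2000          # Monte Carlo replications
MISS_RATE = 0.30       # probability a patient is lost to follow-up
ABS_THRESHOLD = 0.30   # absolute remission-rate threshold

# Latent provider quality: true 6-month remission probabilities.
true_p = rng.beta(3, 7, size=N_PROVIDERS)       # mean around 0.30
rel_cut = np.quantile(true_p, 0.80)             # 80th-percentile cutoff
true_rel = true_p >= rel_cut                    # true top-20% providers
true_abs = true_p >= ABS_THRESHOLD              # true absolute-threshold passers

err_rel = err_abs = 0.0
for _ in range(N_REPS):
    # Simulate remission outcomes and non-informative (random) missingness.
    remit = rng.random((N_PROVIDERS, PANEL_SIZE)) < true_p[:, None]
    seen = rng.random((N_PROVIDERS, PANEL_SIZE)) >= MISS_RATE
    n_obs = seen.sum(axis=1)
    obs_rate = (remit & seen).sum(axis=1) / np.maximum(n_obs, 1)

    # Classify providers under each profiling approach.
    flag_rel = obs_rate >= np.quantile(obs_rate, 0.80)
    flag_abs = obs_rate >= ABS_THRESHOLD
    err_rel += np.mean(flag_rel != true_rel)
    err_abs += np.mean(flag_abs != true_abs)

print(f"relative profiling total error: {err_rel / N_REPS:.3f}")
print(f"absolute profiling total error: {err_abs / N_REPS:.3f}")
```

To partition total error in the spirit of the paper, the same simulation can be rerun with `MISS_RATE = 0`; the difference between the two total error rates is the component attributable to missing data, with the remainder attributable to random sampling variability.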
Over a range of scenarios, relative profiling approaches had total error rates that were approximately 20 percent lower than absolute profiling approaches, and error due to missing data was approximately 50 percent lower for relative profiling. Most of the profiling error in the simulations was a result of random sampling variability, not missing data: between 11 and 21 percent of total error was attributable to missing data for relative profiling, while between 16 and 33 percent of total error was attributable to missing data for absolute profiling. Finally, compared with relative profiling, absolute profiling was much more sensitive to missing data that was correlated with the remission outcome.
Relative profiling approaches for pay-for-performance were more accurate and more robust to missing data than absolute profiling approaches.