Patterns of Unit and Item Nonresponse in the CAHPS® Hospital Survey

Authors

  • Marc N. Elliott
  • Carol Edwards
  • January Angeles
  • Katrin Hambarsoomians
  • Ron D. Hays

Address correspondence to Marc N. Elliott, Ph.D., PO Box 2138, 1776 Main Street, Santa Monica, CA 90401. Dr. Elliott, Carol Edwards, B.A., Katrin Hambarsoomians, M.S., and Ron D. Hays, Ph.D., are also with RAND at Santa Monica, CA. January Angeles, M.P.P., is with the American Institutes for Research (AIR), Washington, DC.


Abstract

Objective. To examine the predictors of unit and item nonresponse, the magnitude of nonresponse bias, and the need for nonresponse weights in the Consumer Assessment of Health Care Providers and Systems (CAHPS®) Hospital Survey.

Methods. A common set of 11 administrative variables (41 degrees of freedom) was used to predict unit nonresponse and the rate of item nonresponse in multivariate models. Descriptive statistics were used to examine the impact of nonresponse on CAHPS Hospital Survey ratings and reports.
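
To make the weighting step concrete, the following is a minimal sketch of how nonresponse weights can be built from a response-propensity model fit to administrative variables. It is not the article's actual model or data: the variable names (age_group, service_line, self_pay), the simulated response pattern, and the use of a plain logistic regression are illustrative assumptions only.

```python
# Illustrative sketch (not the article's model): fit a logistic response-
# propensity model on administrative predictors and form inverse-propensity
# nonresponse weights for respondents. All field names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
frame = pd.DataFrame({
    "age_group": rng.integers(0, 6, n),     # hypothetical age bands
    "service_line": rng.integers(0, 3, n),  # e.g., medical/surgical/maternity
    "self_pay": rng.integers(0, 2, n),
})
# Simulated response indicator: younger patients respond less often here,
# loosely mirroring the pattern reported in the Results.
p_respond = 1 / (1 + np.exp(-(-0.5 + 0.3 * frame["age_group"])))
frame["responded"] = rng.binomial(1, p_respond)

# Dummy-code the administrative predictors and fit the propensity model.
X = pd.get_dummies(
    frame[["age_group", "service_line", "self_pay"]].astype("category"),
    drop_first=True,
)
model = LogisticRegression(max_iter=1000).fit(X, frame["responded"])

# Nonresponse weight for each respondent = 1 / predicted response propensity.
phat = model.predict_proba(X)[:, 1]
is_respondent = frame["responded"].to_numpy() == 1
weights = 1.0 / phat[is_respondent]
print(f"weight range {weights.min():.2f}-{weights.max():.2f}, mean {weights.mean():.2f}")
```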

Results. Unit nonresponse was highest for younger patients and patients other than non-Hispanic whites (p<.001); item nonresponse increased steadily with age (p<.001). Fourteen of 20 reports and ratings of care had significant (p<.05) but small negative correlations with nonresponse weights (median −0.06; maximum −0.09). Nonresponse weights do not improve overall precision below sample sizes of 300–1,000, and are unlikely to improve the precision of hospital comparisons. In some contexts, case-mix adjustment eliminates most observed nonresponse bias.
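
One way to see why weighting may not improve overall precision at small sample sizes is the variance inflation that unequal weights introduce. The sketch below is not taken from the article; it uses Kish's standard approximate design effect, deff = n·Σw² / (Σw)², to compute the effective sample size implied by a hypothetical set of nonresponse weights, so the bias reduction from weighting can be compared against this loss of precision.

```python
# Illustrative calculation (not from the article): Kish's approximate design
# effect for unequal weights and the resulting effective sample size.
import numpy as np

def kish_design_effect(weights):
    """Kish's approximation: deff = n * sum(w^2) / (sum(w))^2; n_eff = n / deff."""
    w = np.asarray(weights, dtype=float)
    n = w.size
    deff = n * np.sum(w ** 2) / np.sum(w) ** 2
    return deff, n / deff

# Hypothetical respondent weights (inverse response propensities between 0.3 and 0.8).
rng = np.random.default_rng(1)
weights = 1.0 / rng.uniform(0.3, 0.8, size=400)
deff, n_eff = kish_design_effect(weights)
print(f"design effect ~ {deff:.2f}; effective n ~ {n_eff:.0f} of {weights.size}")
```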

Conclusions. Nonresponse weights should not be used for between-hospital comparisons of the CAHPS Hospital Survey, but may make small contributions to overall estimates or demographic comparisons, especially in the absence of case-mix adjustment.
