Article first published online: 30 JUL 2012
Copyright © 2012 John Wiley & Sons, Ltd.
Statistics in Medicine
Volume 32, Issue 2, pages 282–289, 30 January 2013
How to Cite
Dalton, J. E. (2013), Flexible recalibration of binary clinical prediction models. Statist. Med., 32: 282–289. doi: 10.1002/sim.5544
- Issue published online: 17 DEC 2012
- Manuscript Accepted: 2 JUL 2012
- Manuscript Revised: 26 JUN 2012
- Manuscript Received: 30 NOV 2011
Keywords: binary outcomes; logistic regression; prediction accuracy
Calibration in binary prediction models, that is, the agreement between model predictions and observed outcomes, is an important aspect of assessing a model's utility for characterizing risk in future data. A popular technique for assessing model calibration, first proposed by D. R. Cox in 1958, involves fitting a logistic model incorporating an intercept and a slope coefficient for the logit of the estimated probability of the outcome; good calibration is evident if these parameters do not appreciably differ from 0 and 1, respectively. However, in practice, the form of miscalibration may sometimes be more complicated. In this article, we expand the Cox calibration model to allow for more general parameterizations and derive a relative measure of miscalibration between two competing models from this more flexible model. We present an example implementation using data from the US Agency for Healthcare Research and Quality.
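To make the Cox calibration model concrete, the following is a minimal sketch (not the article's implementation): given predicted probabilities and observed binary outcomes, it fits logit P(Y = 1) = a + b·logit(p̂) by maximum likelihood and returns (a, b), which should be close to (0, 1) for a well-calibrated model. The hand-rolled Newton-Raphson fit on simulated data is an illustrative stand-in for any standard logistic-regression routine.

```python
import numpy as np

def cox_calibration(y, p_hat, n_iter=25):
    """Fit Cox's (1958) calibration model: logit P(Y=1) = a + b * logit(p_hat).

    Perfect calibration corresponds to a ~ 0 (intercept) and b ~ 1 (slope).
    """
    x = np.log(p_hat / (1 - p_hat))            # logit of the predicted probabilities
    X = np.column_stack([np.ones_like(x), x])  # design matrix: intercept + logit term
    beta = np.zeros(2)
    for _ in range(n_iter):                    # Newton-Raphson for the logistic MLE
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        grad = X.T @ (y - mu)                  # score vector
        hess = X.T @ (X * (mu * (1 - mu))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta[0], beta[1]

# Simulated example: outcomes drawn from the predicted probabilities themselves,
# so the "model" is perfectly calibrated by construction.
rng = np.random.default_rng(0)
p_hat = rng.uniform(0.05, 0.95, size=20000)
y = rng.binomial(1, p_hat)
a, b = cox_calibration(y, p_hat)
```

With well-calibrated simulated data of this size, the estimated intercept lands near 0 and the slope near 1; systematic over- or under-prediction would instead pull these parameters away from (0, 1), which is what the calibration test detects.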