Flexible recalibration of binary clinical prediction models


Jarrod E. Dalton (corresponding author)
Departments of Quantitative Health Sciences and Outcomes Research, Cleveland Clinic; Department of Epidemiology and Biostatistics, Case Western Reserve University

Correspondence: Jarrod E. Dalton, 9500 Euclid Avenue, Mail Code P-77, Cleveland, OH 44195, U.S.A.



Calibration in binary prediction models, that is, the agreement between model predictions and observed outcomes, is an important aspect of assessing a model's utility for characterizing risk in future data. A popular technique for assessing model calibration, first proposed by D. R. Cox in 1958, involves fitting a logistic model with an intercept and a slope coefficient for the logit of the estimated probability of the outcome; good calibration is evident if these parameters do not appreciably differ from 0 and 1, respectively. In practice, however, the form of miscalibration may be more complicated. In this article, we extend the Cox calibration model to allow for more general parameterizations and, from this more flexible model, derive a relative measure of miscalibration between two competing models. We present an example implementation using data from the US Agency for Healthcare Research and Quality. Copyright © 2012 John Wiley & Sons, Ltd.
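To make the Cox calibration model concrete: it fits logit Pr(Y = 1) = alpha + beta * logit(p-hat), where p-hat is the estimated probability from the prediction model, and judges calibration by how far the fitted (alpha, beta) fall from (0, 1). The following is a minimal sketch of that fit, assuming Python with numpy and statsmodels; the function name cox_calibration and the simulated data are illustrative and are not taken from the article.

```python
# A minimal sketch of the Cox (1958) calibration model, assuming Python with
# numpy and statsmodels; the function name and simulated data are illustrative.
import numpy as np
import statsmodels.api as sm


def cox_calibration(y, p_hat):
    """Fit logit Pr(Y = 1) = alpha + beta * logit(p_hat).

    Good calibration corresponds to alpha near 0 and beta near 1.
    """
    lp = np.log(p_hat / (1.0 - p_hat))  # logit of the predicted probabilities
    X = sm.add_constant(lp)             # columns: intercept (alpha), slope (beta)
    fit = sm.Logit(y, X).fit(disp=False)
    return fit.params                   # array([alpha, beta])


# Illustrative check on simulated data: predictions that are too extreme
rng = np.random.default_rng(0)
p_true = rng.uniform(0.05, 0.95, size=5000)
y = rng.binomial(1, p_true)
logit_true = np.log(p_true / (1 - p_true))
p_hat = 1 / (1 + np.exp(-2 * logit_true))  # overconfident: doubled logits
print(cox_calibration(y, p_hat))           # expect alpha near 0, beta near 0.5
```

In this simulated example the predicted logits are twice the true logits, so the fitted slope should be close to 0.5 with an intercept near 0, the signature of overconfident predictions that a recalibration model would shrink back toward the mean.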