Abstract

Objective

To assess the impact of approaches to handling missing data on the statistical power and the bias of the treatment-effect estimate in randomized controlled trials of rheumatoid arthritis with radiographic outcomes.

Methods

We performed a simulation study. The missingness mechanisms we investigated mimicked the process of withdrawal from trials due to lack of efficacy. We compared 3 methods of handling missing data: all available data (case-complete), last observation carried forward (LOCF), and multiple imputation. Data were then analyzed by a classic t-test (comparing the mean absolute change between baseline and final visit) or an F test (estimation of the treatment effect from repeated measurements with a linear mixed-effects model).
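The three missing-data methods compared above can be illustrated with a minimal sketch. This is not the authors' simulation code; the data, function names, and the simplified imputation scheme (drawing imputed values from the observed values at the same visit, then pooling per-visit means across imputed data sets) are assumptions for illustration only.

```python
import random
import statistics

def locf(series):
    """Last observation carried forward: fill each missing (None)
    value with the most recent observed value."""
    filled, last = [], None
    for v in series:
        if v is not None:
            last = v
        filled.append(last)
    return filled

def complete_case(subjects):
    """Case-complete analysis: keep only subjects with no missing visits."""
    return [s for s in subjects if None not in s]

def multiple_imputation_means(subjects, m=5, seed=0):
    """Simplified multiple imputation: build m completed data sets by
    replacing each missing value with a random draw from the observed
    values at the same visit, then pool the per-visit means across the
    m data sets (for point estimates, Rubin's rules reduce to averaging)."""
    rng = random.Random(seed)
    n_visits = len(subjects[0])
    observed = [[s[t] for s in subjects if s[t] is not None]
                for t in range(n_visits)]
    pooled = []
    for _ in range(m):
        imputed = [[v if v is not None else rng.choice(observed[t])
                    for t, v in enumerate(s)] for s in subjects]
        pooled.append([statistics.mean(col) for col in zip(*imputed)])
    return [statistics.mean(vals) for vals in zip(*pooled)]

# Toy longitudinal data: 4 subjects x 3 visits; None marks a value
# missing after withdrawal (monotone dropout, as in lack of efficacy).
data = [[0.0, 1.0, 2.0],
        [0.0, 1.5, None],
        [0.0, None, None],
        [0.0, 2.0, 4.0]]
```

In a real trial analysis, the imputation model would condition on treatment arm and covariates, and the pooled analysis would also combine variances across imputations; the sketch shows only the structural difference between the three approaches.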

Results

With a missing-data rate close to 15%, the treatment effect was underestimated by 18% when estimated by a linear mixed-effects model with multiple imputation of missing data. This bias was smaller than that obtained with the case-complete approach (−25%) or the LOCF approach (−35%). The combination of multiple imputation and mixed-effects analysis was moreover associated with a power of 70% (for a 90% nominal level), whereas LOCF yielded a power of 55% and the case-complete approach 58%. Analysis with the t-test gave qualitatively equivalent but poorer results, except when multiple imputation was applied.

Conclusion

Our simulation study demonstrated that multiple imputation offered the smallest bias in the treatment effect and the highest power. These results can help in planning trials, especially in choosing methods of imputation and data analysis.