• Missing Data;
  • Evidence-Based Statistics;
  • Implementation Barriers


This commentary discusses the paper by Hallgren and Witkiewitz (2013), which evaluated five methods for addressing missing data in clinical trials of interventions for alcohol use disorders. The authors conclude that commonly used methods (e.g., complete case analysis, single imputation methods) can produce misleading results and that better alternatives exist (e.g., multiple imputation [MI]). The problems with inferior approaches are well known and well illustrated by the analysis in this paper, which serves as an educational reminder to use more statistically justified practices.


The findings of this paper are placed in the context of the broader statistical literature, and strategies to promote routine use of superior missing data methods are discussed.


Solving the poor uptake of statistically justified missing data methods will require a multilevel diagnosis of the problem and likely a multifaceted response, perhaps including the establishment, publication, and enforcement of standards by scientific funding and regulatory agencies, scientific journals, and graduate program accreditation bodies.


Little disagreement exists regarding the importance of addressing missing data in a statistically justified manner (e.g., with MI or maximum likelihood methods). However, as with the implementation of other evidence-based practices, knowing what should be done does not alone make it happen.
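To make the contrast between the methods named above concrete, the sketch below simulates a trial outcome that is missing at random (dropout depends on an observed baseline covariate) and compares a complete-case estimate of the outcome mean with a pooled multiple-imputation estimate. This is a minimal illustrative simulation, not the analysis from Hallgren and Witkiewitz (2013); all variable names and data are hypothetical, and for brevity the imputation model does not redraw the regression parameters from their posterior, as a full Rubin's-rules MI would.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trial: outcome y depends on a fully observed baseline x.
n = 500
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)

# Missing at random: dropout probability increases with observed x,
# so the observed cases are not representative of the full sample.
miss = rng.random(n) < 1.0 / (1.0 + np.exp(-x))
y_obs = y.copy()
y_obs[miss] = np.nan

# Complete-case estimate of the mean of y: biased under this mechanism,
# because high-x (hence high-y) participants are disproportionately missing.
cc_mean = float(np.nanmean(y_obs))

# Multiple imputation (simplified): fit y ~ x among completers, then draw
# m imputed datasets with residual noise and average the m estimates.
obs = ~np.isnan(y_obs)
X_obs = np.column_stack([np.ones(obs.sum()), x[obs]])
beta, *_ = np.linalg.lstsq(X_obs, y_obs[obs], rcond=None)
resid = y_obs[obs] - X_obs @ beta
sigma = resid.std(ddof=2)

m = 20
estimates = []
for _ in range(m):
    y_imp = y_obs.copy()
    X_mis = np.column_stack([np.ones((~obs).sum()), x[~obs]])
    y_imp[~obs] = X_mis @ beta + rng.normal(scale=sigma, size=(~obs).sum())
    estimates.append(y_imp.mean())
mi_mean = float(np.mean(estimates))  # pooled point estimate

print(f"true mean = {y.mean():.2f}, complete-case = {cc_mean:.2f}, MI = {mi_mean:.2f}")
```

Because the covariate driving dropout is observed, the imputation model recovers the information the complete-case analysis discards, and the MI estimate lands much closer to the true mean.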