The research on which this article is based received support from the Institute of Education Sciences in the U.S. Department of Education, the Judith Gueron Fund at MDRC, and the William T. Grant Foundation. The authors thank Larry Hedges for his helpful input.
Empirical Benchmarks for Interpreting Effect Sizes in Research
Article first published online: 24 NOV 2008
© 2008, Copyright the Author(s); Journal Compilation © 2008, Society for Research in Child Development with Exclusive License to Print by MDRC
Child Development Perspectives
Volume 2, Issue 3, pages 172–177, December 2008
How to Cite
Hill, C. J., Bloom, H. S., Black, A. R. and Lipsey, M. W. (2008), Empirical Benchmarks for Interpreting Effect Sizes in Research. Child Development Perspectives, 2: 172–177. doi: 10.1111/j.1750-8606.2008.00061.x
- Issue published online: 24 NOV 2008
Keywords: effect size; student performance; educational evaluation
ABSTRACT—There is no universal guideline or rule of thumb for judging the practical importance or substantive significance of a standardized effect size estimate for an intervention. Instead, one must develop empirical benchmarks of comparison that reflect the nature of the intervention being evaluated, its target population, and the outcome measure or measures being used. This approach is applied to the assessment of effect size measures for educational interventions designed to improve student academic achievement. Three types of empirical benchmarks are illustrated: (a) normative expectations for growth over time in student achievement, (b) policy-relevant gaps in student achievement by demographic group or school performance, and (c) effect size results from past research for similar interventions and target populations. The findings can be used to help assess educational interventions, and the process of doing so can provide guidelines for how to develop and use such benchmarks in other fields.