Standardized or simple effect size: What should be reported?
© 2009 The British Psychological Society
British Journal of Psychology
Volume 100, Issue 3, pages 603–617, August 2009
How to Cite
Baguley, T. (2009). Standardized or simple effect size: What should be reported? British Journal of Psychology, 100, 603–617. doi: 10.1348/000712608X377117
- Issue published online: 1 FEB 2011
- Article first published online: 1 FEB 2011
- Received 1 May 2007; revised version received 26 August 2008
It is regarded as best practice for psychologists to report effect size when disseminating quantitative research findings. Reporting of effect size in the psychological literature is patchy – though this may be changing – and when reported it is far from clear that appropriate effect size statistics are employed. This paper considers the practice of reporting point estimates of standardized effect size and explores factors such as reliability, range restriction and differences in design that distort standardized effect size unless suitable corrections are employed. For most purposes simple (unstandardized) effect size is more robust and versatile than standardized effect size. Guidelines for deciding what effect size metric to use and how to report it are outlined. Foremost among these are: (i) a preference for simple effect size over standardized effect size, and (ii) the use of confidence intervals to indicate a plausible range of values the effect might take. Deciding on the appropriate effect size statistic to report always requires careful thought and should be influenced by the goals of the researcher, the context of the research and the potential needs of readers.
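The distinction the abstract draws can be made concrete. As a minimal sketch (not from the article, and using hypothetical data), the snippet below computes both a simple effect size — the raw mean difference in the outcome's original units — and a standardized effect size (Cohen's d, the difference divided by the pooled standard deviation), along with an approximate confidence interval for the raw difference, as the abstract's second recommendation suggests:

```python
# Illustrative sketch: simple vs. standardized effect size for two
# independent groups. Data and function name are hypothetical.
from statistics import mean, stdev
from math import sqrt

def effect_sizes(group_a, group_b):
    """Return (raw mean difference, Cohen's d, approx. 95% CI for the
    raw difference)."""
    n_a, n_b = len(group_a), len(group_b)
    diff = mean(group_a) - mean(group_b)      # simple effect size
    s_a, s_b = stdev(group_a), stdev(group_b)
    pooled_sd = sqrt(((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2)
                     / (n_a + n_b - 2))
    d = diff / pooled_sd                      # standardized effect size
    # Approximate 95% CI using the normal critical value 1.96;
    # a t critical value would be more accurate for small samples.
    se = pooled_sd * sqrt(1 / n_a + 1 / n_b)
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    return diff, d, ci

# Hypothetical reaction times (ms) in two conditions.
a = [512.0, 480.0, 495.0, 530.0, 501.0]
b = [470.0, 455.0, 490.0, 465.0, 480.0]
diff, d, ci = effect_sizes(a, b)
```

Note how the simple effect size (`diff`) stays in milliseconds and is directly interpretable, whereas `d` depends on the sample standard deviations and is therefore sensitive to the reliability and range-restriction distortions the abstract mentions.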