Special issue article: Methods and statistics in social psychology: Refinements and new developments

Performing high‐powered studies efficiently with sequential analyses

Daniël Lakens

Corresponding Author

Human Technology Interaction Group, Eindhoven University of Technology, Eindhoven, The Netherlands

Correspondence to: Daniël Lakens, Human Technology Interaction Group, Eindhoven University of Technology, IPO 1.33, PO Box 513, 5600 MB Eindhoven, The Netherlands. E‐mail: D.Lakens@tue.nl
First published: 15 August 2014
Cited by: 27

Abstract

Designing experiments with high statistical power is a practical challenge, because effect size estimates in psychology are often inaccurate. This challenge can be addressed by performing sequential analyses while data collection is still in progress. At an interim analysis, data collection can be stopped when the results are convincing enough to conclude that an effect is present, more data can be collected, or the study can be terminated when it is extremely unlikely that the predicted effect would be observed if data collection were continued. Such interim analyses can be performed while controlling the Type 1 error rate. Sequential analyses can greatly improve the efficiency with which data are collected. Additional flexibility is provided by adaptive designs, in which sample sizes are increased on the basis of the observed effect size. The need for pre‐registration, ways to prevent experimenter bias, and a comparison between Bayesian approaches and null‐hypothesis significance testing (NHST) are discussed. Sequential analyses, which are widely used in large‐scale medical trials, provide an efficient way to perform high‐powered, informative experiments. I hope this introduction will serve as a practical primer that allows researchers to incorporate sequential analyses in their research. Copyright © 2014 John Wiley & Sons, Ltd.
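As a rough illustration of how the Type 1 error rate can be controlled across interim analyses, the sketch below computes the cumulative alpha "spent" at each look using a Pocock-type alpha-spending function. This example and its parameter choices (two looks, at 50% and 100% of the planned sample) are my own, not taken from the article; note that converting spent alpha into nominal p-value boundaries additionally requires the correlation between looks, which dedicated group-sequential software handles.

```python
import math


def pocock_spending(t, alpha=0.05):
    """Cumulative Type 1 error to be 'spent' by information fraction t
    (0 < t <= 1), using the Pocock-type spending function:
    alpha * ln(1 + (e - 1) * t). At t = 1 this equals alpha exactly."""
    return alpha * math.log(1 + (math.e - 1) * t)


# Two planned analyses: an interim look at 50% of the sample, final at 100%.
looks = [0.5, 1.0]
cumulative = [pocock_spending(t) for t in looks]

# Alpha newly spent at each look is the increment of the cumulative spend;
# the increments sum to the overall alpha of 0.05.
increments = [cumulative[0]] + [
    later - earlier for earlier, later in zip(cumulative, cumulative[1:])
]
```

Spending functions like this one are attractive in practice because, unlike classical Pocock or O'Brien-Fleming boundaries, they do not require the number and timing of looks to be fixed in advance.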
