Implementing false discovery rate control: increasing your power


Koen J.F. Verhoeven, Katy L. Simonsen and Lauren M. McIntyre

K.J.F. Verhoeven and L.M. McIntyre, Computational Genomics, Dept of Agronomy, Purdue Univ., Lilly Hall of Life Sciences, 915 W. State Street, West Lafayette, IN 47907-2054, USA. – K.L. Simonsen, Dept of Statistics, Purdue Univ., 150 N. University Ave., West Lafayette, IN 47907-2068, USA.


Popular procedures to control the chance of making type I errors when multiple statistical tests are performed come at a high cost: a reduction in power. As the number of tests increases, power for an individual test may become unacceptably low. This is a consequence of minimizing the chance of making even a single type I error, which is the aim of, for instance, the Bonferroni and sequential Bonferroni procedures. An alternative approach, control of the false discovery rate (FDR), has recently been advocated for ecological studies. This approach aims at controlling the proportion of significant results that are in fact type I errors. Keeping the proportion of type I errors low among all significant results is a sensible, powerful, and easy-to-interpret way of addressing the multiple testing issue. To encourage practical use of the approach, in this note we illustrate how the proposed procedure works, we compare it to more traditional methods that control the familywise error rate, and we discuss some recent useful developments in FDR control.
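The abstract itself does not spell out the FDR procedure, but the approach it advocates is commonly implemented as the Benjamini–Hochberg step-up procedure. The following is a minimal illustrative sketch (not the authors' own code) comparing it with the single-step Bonferroni correction mentioned above; function names and the example p-values are ours:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean list (True = reject the null hypothesis) that
    controls the false discovery rate -- the expected proportion of
    type I errors among all rejections -- at level q.
    """
    m = len(p_values)
    # Rank the p-values from smallest to largest, keeping original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject


def bonferroni(p_values, alpha=0.05):
    """Single-step Bonferroni correction: controls the familywise
    error rate by testing each p-value against alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]
```

With four hypothetical p-values, the power difference is visible: `benjamini_hochberg([0.01, 0.02, 0.03, 0.5])` rejects the first three hypotheses, while `bonferroni([0.01, 0.02, 0.03, 0.5])` rejects only the first, because Bonferroni guards against making even a single type I error.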