### Abstract

**Summary.** The Neyman–Pearson lemma provides a simple procedure for optimally testing a single hypothesis when the null and alternative distributions are known. This result has played a major role in the development of significance testing strategies that are used in practice. Most of the work extending single-testing strategies to multiple tests has focused on formulating and estimating new types of significance measures, such as the false discovery rate. These methods tend to be based on *p*-values that are calculated from each test individually, ignoring information from the other tests. I show here that one can improve the overall performance of multiple significance tests by borrowing information across all the tests when assessing the relative significance of each one, rather than calculating *p*-values for each test individually. The ‘optimal discovery procedure’ is introduced, which shows how to maximize the number of expected true positive results for each fixed number of expected false positive results. The optimality that is achieved by this procedure is shown to be closely related to optimality in terms of the false discovery rate. The optimal discovery procedure motivates a new approach to testing multiple hypotheses, especially when the tests are related. As a simple example, a new simultaneous procedure for testing several normal means is defined; surprisingly, this is demonstrated to outperform the optimal single-test procedure, showing that a method which is optimal for single tests may no longer be optimal for multiple tests. Connections to other concepts in statistics are discussed, including Stein's paradox, shrinkage estimation and the Bayesian approach to hypothesis testing.
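The idea of borrowing information across tests can be illustrated with a minimal simulation sketch. This is not the paper's estimator, only a hypothetical toy: each of `m` tests observes $x_i \sim N(\mu_i, 1)$ with null $\mu_i = 0$, and a crude ODP-style statistic evaluates each observation against the estimated likelihoods of *all* tests (here each $\mu_j$ is naively estimated by $x_j$ itself), divided by the pooled null likelihood. Tests are then ranked by this statistic rather than by their individual *p*-values. All variable names and the simulation design below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not from the paper): m tests of H0: mu_i = 0,
# with half the means truly 0 and the rest sharing mu = 2.
m = 2000
mu = np.zeros(m)
mu[m // 2:] = 2.0
x = rng.normal(mu, 1.0)

def normal_pdf(z, mean):
    """Density of N(mean, 1) evaluated at z."""
    return np.exp(-0.5 * (z - mean) ** 2) / np.sqrt(2.0 * np.pi)

# ODP-style statistic: each x_i is scored against the (naively estimated)
# alternative densities of ALL tests -- information is borrowed across
# tests -- relative to the common null density.
num = normal_pdf(x[:, None], x[None, :]).sum(axis=1)  # sum_j f_hat_j(x_i)
den = m * normal_pdf(x, 0.0)                          # sum_j f_0(x_i)
odp_stat = num / den

# Single-test benchmark: |x_i|, the statistic behind the two-sided
# per-test p-value, which ignores the other tests entirely.
single_stat = np.abs(x)

def true_positives(stat, n_calls):
    """Call the n_calls most significant tests; count true alternatives."""
    called = np.argsort(stat)[::-1][:n_calls]
    return int(np.sum(mu[called] != 0))

tp_odp = true_positives(odp_stat, 500)
tp_single = true_positives(single_stat, 500)
print(tp_odp, tp_single)
```

Because all true alternatives here share a positive mean, the pooled statistic effectively learns the direction of the signal from the ensemble of tests, whereas the two-sided single-test statistic spends significance on the wrong tail. Comparing `tp_odp` with `tp_single` at a fixed number of discoveries is a crude proxy for the abstract's criterion of expected true positives at a fixed number of expected false positives.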