A false discovery occurs when a researcher concludes that a marker is involved in the etiology of a disease when in reality it is not. In genetic studies the risk of false discoveries is very high because only a few of the many markers that can be tested will have an effect on the disease. In this article, we argue that it may be best to use methods for controlling false discoveries that introduce the same proportion of false discoveries among all rejected tests into the literature, regardless of systematic differences between studies. After a brief discussion of traditional “multiple testing” methods, we show that methods that control the false discovery rate (FDR) may be more suitable for achieving this goal. These FDR methods are therefore discussed in more detail. Instead of merely testing for main effects, it may be important to search for gene–environment/covariate interactions, gene–gene interactions, or genetic variants affecting disease subtypes. In the second section, we point out the challenges involved in controlling false discoveries in such searches. The final section discusses the role of replication studies in eliminating false discoveries and the complexities associated with the definition of what constitutes a replication and the design of such studies. © 2007 Wiley-Liss, Inc.
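The FDR criterion described above — the expected proportion of false discoveries among all rejected tests — is most often controlled with the Benjamini–Hochberg step-up procedure. A minimal sketch in Python; the p-values in the usage example are hypothetical marker tests, not data from any study discussed in the article:

```python
# Benjamini-Hochberg step-up procedure: controls the expected
# proportion of false discoveries among all rejected tests at level q.
# Illustrative sketch only; the input p-values are hypothetical.

def benjamini_hochberg(p_values, q=0.05):
    """Return the (0-based) indices of hypotheses rejected at FDR level q."""
    m = len(p_values)
    # Sort p-values in ascending order, remembering original indices.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

if __name__ == "__main__":
    # Ten hypothetical marker p-values, already sorted for readability.
    pvals = [0.001, 0.008, 0.039, 0.041, 0.042,
             0.060, 0.074, 0.205, 0.212, 0.216]
    print(benjamini_hochberg(pvals, q=0.05))  # prints [0, 1]
```

Note that the step-up rule can reject an individual p-value that exceeds its own threshold, as long as some larger p-value falls below its threshold; this is what distinguishes FDR control from per-test (e.g., Bonferroni-adjusted) cutoffs.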