The past 25 years have seen an explosion in the number of genetic markers that can be measured on DNA samples at an ever-decreasing cost. Basic statistical methods for analysing such data, gathered on samples of either independent individuals or family members one or two markers at a time, were already well developed before this explosion occurred; nevertheless, there has been a corresponding burst of activity to develop multiple-marker models for finding disease-causing gene variants, capitalizing on the newly available data to increase the power of such methods. This has required the concomitant development of faster algorithms to speed up the computation of the various likelihoods involved. For linkage analysis, which yields approximate locations for genes of interest, Mendelian segregation models have been made more realistic, and statistical models that do not assume specific modes of inheritance have been extended to allow the analysis of larger pedigree structures. For association analysis, which yields more precise locations for genes of interest, the recent completion of the first stage of the HapMap project has spurred the development, still underway, of novel experimental designs and analytical methods to combat the curse of dimensionality and the resulting multiple-testing problem. Perhaps the greatest current challenge is how best to gather and synthesize the many possible lines of evidence in order to discover the genetic determinants underlying complex diseases. Copyright © 2006 John Wiley & Sons, Ltd.