Established principles of population genetics are being augmented by new ideas and techniques. Especially interesting are new strategies for using ‘signals of selection’ to determine which genes have been strongly selected in the past few thousand generations. Just a few years ago, this approach offered a few methods and a few examples (Olson 2002). Now, many new methods are applied to genome scan data to identify loci subject to directional and balancing selection as revealed by the homogeneity of the DNA sequences surrounding the loci in question (Vallender and Lahn 2004; Sabeti et al. 2006; Voight et al. 2006), and early overestimates of the number of loci of interest are now being corrected (Thornton and Jensen 2007).
These methods provide answers to long-standing questions, such as the origins of genes for lactase persistence (Bersaglieri et al. 2004). Most adult humans cannot digest milk because the enzyme that breaks down milk sugar is not made in adulthood. Recent studies showed that genes that allow adults to digest milk have evolved separately several times, almost always in dairying cultures (Holden and Mace 1997; Ingram et al. 2007; Tishkoff et al. 2007). Similarly, there has been speculation for decades about whether the genetic tendency to feel sick immediately after drinking alcohol could be common in people from Asia because it protects against alcoholism in a culture where alcohol has long been available. Evidence has been sparse until now. The case has been bolstered by finding a strong signal of selection in Asians at the site of the gene (Voight et al. 2006).
The evolutionary backgrounds of alleles that predispose to disease can now be examined. Of particular interest is the gene for apolipoprotein E, a substance that binds and transports lipids. Individuals with the ApoE4 subtype have a much higher risk of developing atherosclerosis and Alzheimer’s disease. This allele is universal in other primates. In humans, especially those living in cold climates, selection has increased the frequency of the ApoE3 allele (Sapolsky and Finch 2000). This may be a case of selection caught in action, perhaps for genes that prevent health problems for meat eaters (Finch and Stanford 2004).
Selection has also been proposed as an explanation for cystic fibrosis, given the scores of mutations that can cause it and its systematic variation with latitude. Mice heterozygous for the CF allele lose less fluid when exposed to cholera toxin (Gabriel et al. 1994), but the chloride channel is not the rate-limiting step for fluid loss in humans (Hogenauer et al. 2000). The CF allele also reduces entry of Salmonella typhi into gastrointestinal mucosal cells (Pier et al. 1998). However, cystic fibrosis is more common in climates where diarrheal diseases are less common, and although remarkably prevalent, it remains a rare allele. Cystic fibrosis offers a fine example of creative tests of interesting hypotheses, and of how hard it can be to reach a firm conclusion about the adaptive significance of a genetic variation.
Until recently, agreement on how to assess the role of selection on vulnerability genes was elusive (Chadwick and Cardew 1996). That is changing fast. We now have systematic reviews of the role of selection in maintaining the prevalence of genes that increase risk for infectious disease (Dean et al. 2002), and progress in related areas is on the way.
Naïve ideas that genes always exist for the good of the individual and the species remain common in medicine, even though biologists abandoned them in the 1970s. The importance of gene-level selection is highlighted in the work of Trivers and others on selfish genetic elements that facilitate their own transmission at the expense of the individual (Burt and Trivers 2006). These are Dawkins’ selfish genes with a vengeance (Dawkins 1976). The best known examples are the t-allele in mice and Segregation Distorter in fruit flies. The role of selfish genetic elements in human disease, including cancer (Crespi and Summers 2005), is an especially exciting area that is starting to be elucidated.
These advances also suggested looking for conflicts that arise between genes transmitted through males versus those transmitted through females. Following the lead of Trivers (1974), Haig (1993) pointed out that in pregnancy, the interests of genes from the male differ from those from the female. Genes derived from the male benefit if they somehow induce the female to make more or larger offspring. Energy reserves that a female mouse does not invest in the current litter will not benefit the male unless he happens to mate with her again. Conversely, genes from the female benefit by reserving fat stores for future reproduction. The size of offspring that maximally benefits the male is only slightly different from the size optimal for the female, but this small difference may have shaped a complex system. This evolutionary hypothesis is supported by the details of a remarkable proximate mechanism.
Studies of genetically engineered mice show that the unopposed expression of a gene called insulin-like growth factor 2 (IGF2) results in a large placenta and large but otherwise normal offspring; this outcome benefits paternal genes. When transmitted through the mother, this gene is inactivated by a process called imprinting, making the offspring smaller. IGF2r is a gene with opposite effects; it degrades IGF2. Its effect is decreased by imprinting from passage through the father (Haig 1993). Loss of IGF2 imprinting causes Beckwith–Wiedemann syndrome, characterized by large babies with very large internal organs. It may be more likely in offspring conceived using artificial reproductive technologies (Maher et al. 2003). This area of research vividly illustrates the clinical implications of studies that would never be considered without sophisticated applications of evolutionary theory (Wilkins and Haig 2003).
Addressing a much broader issue, it is worth noting that ‘knock-out’ studies are about the evolutionary functions of a gene. They are modern equivalents of the old physiological method of extirpation. Taking out an organ or a gene and looking to see what goes wrong can generate hypotheses about how an organ or gene is useful. Often, no abnormality is observed. Of course, this does not mean that the gene is useless, only that its effects are covered by redundant systems, that its benefits are manifest only in special situations, or that the benefit is just too small to be observed in a laboratory setting. For instance, genes involved in the capacity for shivering might well appear to be harmful, unless one happened to look at their effects in extreme cold. Similarly, some genetic variations associated with faster aging are likely to have compensating advantages, otherwise they would have been eliminated. As we gain technologies to manipulate genes, evolutionary thinking about their origins and functions becomes more crucial than ever.
Aging research shows how evolutionary thinking can transform a field. Many doctors still view aging as an inevitable result of body parts wearing out. This knowledge gap is unfortunate for a trait so important to medicine. Half a century ago, Medawar (1952) saw that selection weakens with age because the number of surviving individuals declines, even in the absence of senescence. Then Williams (1957) had the insight that pleiotropic genes that cause aging and death can nonetheless be selected for if they also give benefits early in life when selection is stronger. He gave a vivid hypothetical example of a gene that makes bones heal faster in childhood, but that also slowly deposits calcium in the coronary arteries. Hamilton (1966) provided mathematical models for the process.
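Medawar’s insight can be made concrete with a toy life-history calculation. In the sketch below, lifetime reproductive success (R0) stands in for fitness, and the survival and fertility schedules are invented for illustration; the point is that a deleterious effect of fixed size costs less and less fitness the later in life it strikes.

```python
# Toy illustration of the declining force of selection (Medawar 1952;
# Hamilton 1966). R0, expected lifetime reproduction, is the fitness
# proxy; the survival and fertility schedules are assumptions.

def r0(survival, fertility):
    """Sum over ages of l(x) * m(x): survivorship times fertility."""
    total, l = 0.0, 1.0
    for s, m in zip(survival, fertility):
        total += l * m   # reproduction at this age, weighted by survivorship
        l *= s           # survive to the next age
    return total

survival = [0.95] * 10            # hypothetical survival probabilities
fertility = [0, 0] + [1.0] * 8    # reproduction begins at age 2

baseline = r0(survival, fertility)

# Fitness cost of 10% extra mortality imposed at each single age in turn.
for age in range(10):
    perturbed = list(survival)
    perturbed[age] *= 0.9
    print(age, round(baseline - r0(perturbed, fertility), 4))
```

The printed cost shrinks with age and reaches zero after the last reproductive age, which is why a pleiotropic gene with early benefits and late costs, like Williams’s hypothetical bone-healing gene, can still be favored by selection.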
These evolutionary insights transformed aging research (Finch 1991, 2007). Instead of looking only for proximate explanations for aging, the field now also seeks evolutionary explanations for why aging mechanisms exist at all. Laboratory (Rose 1991; Stearns et al. 2000) and field evidence (Austad 2005) soon showed that aging was a life history trait shaped by natural selection (Stearns 1992, 2000). For many species, senescence in the wild is a deleterious trait with heritable variation, but life spans do not increase, presumably because the reproductive benefits of longer lives would be balanced by costs that decrease reproduction earlier in the life span (Nesse 1988; Austad 2005; Williams et al. 2006).
The big news in aging research is the discovery of remarkably strong effects of single genes that influence oxidative metabolism (Guarente and Kenyon 2000; Austad 2005). These surprising findings are now being interpreted in evolutionary terms (Partridge and Gems 2006; Ackermann and Pletcher 2007; McElwee et al. 2007). They suggest that mechanisms that protect against oxidative damage are limited by their reproductive costs or simply by a lack of selection. They also show how selection can shape special states of reduced metabolism that allow some species to survive periods of privation. These states slow aging dramatically, but they are special states precisely because they also so dramatically reduce reproduction. The ancient dream of extending lifespan no longer seems like just a dream, but do not buy beach property on Hudson Bay just yet.
When it comes to aging, males are the weaker sex. It has long been known that men die younger than women, but this has rarely been interpreted in a life-history framework. A recent report on higher mortality rates for male than female mammals attributes them to both external causes and faster aging. The faster rates of aging for males are found mainly in polygynous species because a shortened reproductive span decreases the force of selection for older males (Clutton-Brock and Isvaran 2007). An evolutionary view of humans suggested looking at the ratio of male to female mortality across the lifespan in different cultures, with surprising results (Kruger and Nesse 2004). In every culture at every age through late adulthood, mortality rates are higher for men. In modern societies, for every woman who dies at reproductive maturity, three men die. The pattern is consistent across the 20 cultures studied. Further work on the proximate causes of sex differences in mortality rates finds that they result not only from accidents and violence, but also from the full range of causes of mortality.
Applications of evolutionary biology to infectious disease are also very direct. Pathogens evolve fast, right under (and in!) our noses. Antibiotic resistance is the classic example. Individual bacteria and viruses vary in their susceptibility to antimicrobial agents; those with even slight resistance replicate faster and their genotypes become more common. Just a few years after Alexander Fleming discovered penicillin, he also discovered antibiotic resistance. The basic phenomenon is very simple. Antibiotics are selection agents that quickly increase the proportion of organisms that can resist them (Bergstrom and Feldgarden 2007).
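The population-genetic core of this process fits in a few lines. A minimal sketch, with an invented starting frequency and invented relative fitnesses under antibiotic exposure, shows how quickly even a very rare resistant genotype sweeps:

```python
# Sketch of an antibiotic as a selective agent: two bacterial lineages
# differ only in relative fitness under treatment. All numbers are
# illustrative assumptions, not measurements.

def resistant_fraction(generations, p0=0.001,
                       w_resistant=1.0, w_susceptible=0.2):
    """Frequency of the resistant genotype after repeated selection."""
    p = p0
    for _ in range(generations):
        mean_w = p * w_resistant + (1 - p) * w_susceptible
        p = p * w_resistant / mean_w   # standard haploid selection update
    return p

for g in (0, 5, 10):
    print(g, round(resistant_fraction(g), 3))
```

With a fivefold fitness advantage under treatment, the resistant genotype’s odds grow fivefold per generation, so a frequency of 0.001 reaches near fixation within about ten generations of exposure.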
Shortly after the US Surgeon General declared in the mid-1950s that the war on infectious disease was over, antibiotic resistance became a serious problem. Staphylococcus quickly became resistant to penicillin, and nearly all other bacteria followed. Antibiotic resistance is an arms race; we invent new defenses, the enemy quickly finds ways around them, and we try to find new defenses. We are now faced with many organisms that resist every available antibiotic; some wonder if the war on infectious disease may be lost (Normark and Normark 2002; Levy and Marshall 2004). Nearly 10% of Staphylococcus aureus are now resistant even to methicillin; infections caused by this resistant organism now cause 18 650 deaths per year, more than the 12 500 caused by AIDS (Klevens et al. 2007). The economic burden of antibiotic resistance is estimated at about $80 billion annually in the USA.
Recognition of antibiotic resistance as an example of natural selection is often missing in medical articles on the topic. In biology journals the phrase ‘natural selection’ or another direct reference to evolution is used 79.1% of the time to describe antibiotic resistance, but in biomedical journals only 17.8% of the time. Instead, medical journals use ‘emergence’ or some other circumlocution to avoid the ‘E-word’ (Antonovics et al. 2007).
Many doctors view antibiotics as human discoveries, but most are products of selection acting over millions of years in the deadly interactions of bacteria and fungi with each other. The average bacterium isolated from soil is resistant to seven antibiotics (D’Costa et al. 2006). This is not because of exposure to human-produced chemicals, but because the long co-evolution of bacteria and fungi has shaped toxins, defenses and new toxins (Ewald 1994). Bacteria and fungi have been developing and testing the effectiveness of antibiotics for millions of years!
Another important aspect of resistance is whether it has costs to the resistant bacteria that will select against the resistance if antibiotics are withdrawn. The answer is sometimes yes, but often the costs seem to be so low that resistance persists, an ecological insight of huge importance for controlling antibiotic resistance (Andersson and Levin 1999). Continuous application of antibiotics also produces selection to reduce their costs, yielding resistant strains that persist after the antibiotics are withdrawn (Schrag and Perrot 1996). However, restriction of antibiotic use in Danish farm animals resulted in decreased resistance (Aarestrup et al. 2001). More work on these evolutionary responses is of great importance.
Selection on pathogens is, of course, not a one-way street. Hosts evolve too, creating co-evolutionary cycles of deception and ability to detect deception of vast complexity (Ewald 1994; Knodler et al. 2001; Frank 2002). The genes of vulnerable individuals become less common, and host resistance evolves, but very slowly compared with the rate of pathogen evolution.
Some of the resulting genetic change is in mechanisms close to the sites of infectivity. For instance, malaria uses the Duffy antigen to enter red blood cells. Individuals without the Duffy antigen are less susceptible to malaria and have a selective advantage where malaria is common (Hamblin and Di Rienzo 2000). This is why the Duffy antigen is absent in most Africans.
The CCR-5 receptor on white blood cells allows HIV to enter. The receptor is absent in about 1% of Europeans; they do not get AIDS even when infected with HIV (Samson et al. 1996). Some geographical evidence suggested that this genetic difference could result from selection by the plague epidemic in the 14th century, but in a nice example of hypothesis testing, more careful examination shows the patterns do not match (Cohn and Weaver 2006). Would we all be better off without the CCR-5 receptor? With the advent of HIV the answer may be yes, but this receptor is not useless; at the very least it appears to protect against West Nile virus infection (Lim et al. 2006).
When a parasite such as malaria deals with both a mosquito and a mammal host, the complexity of its evolution is magnified (Mackinnon and Read 2004; Grech et al. 2006). Here host–parasite manipulations can be studied in detail, and their complexity is more than intriguing. Doctors learn about the complexity of parasite life cycles, but rarely do they have an opportunity to consider their evolutionary origins. Nor do they have the evolutionary principles that would allow them to evaluate proposals to drive genetically engineered strains of mosquitoes into wild populations. Such proposals rarely take into account how the introduced strains will evolve in interaction with the wild ones.
Changes in the phenotype also exert selection forces on pathogens. Vaccination of large populations fundamentally changes the environment for a pathogen. For instance, steady pertussis vaccination for 40 years may have selected for more virulent strains of the whooping cough bacterium (Diavatopoulos et al. 2005), although decreased vaccination may be responsible for the increased incidence. Imperfect vaccines can create selection pressures for increased virulence (Gandon et al. 2001). This disturbing possibility has been documented for Marek’s disease in chickens (Davison and Nair 2005). However, when a vaccine targets a toxin, selection can decrease virulence. This has happened for diphtheria, where lines that do not produce toxin have largely displaced the dangerous forms (Soubeyrand and Plotkin 2002). These findings have obvious major public health implications, but the complex realities of host–pathogen interactions make confident prediction difficult (Ebert and Bull 2003).
Intuitive models for antibiotic resistance are often incorrect (Normark and Normark 2002). For instance, some hospitals have tried rotating the antibiotic of choice over a period of a few months, with the idea that exposing bacteria to a changing selective regime will prevent antibiotic resistance. But when the process is modeled, it turns out that antibiotic rotation is ineffective at creating a more heterogeneous suite of selective conditions. At least in principle, hospitals would do better to use a mix of different drugs on different patients simultaneously, rather than to cycle through these different drugs over time (Bergstrom et al. 2004).
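The intuition behind that modeling result can be caricatured in a few lines. In this sketch, loosely inspired by the logic of Bergstrom et al. (2004) but with invented parameters, a lineage resistant only to drug A must survive a chain of patient-to-patient transmissions; what matters is the chance that each new host happens to be taking the mismatched drug.

```python
def chain_survival(steps, p_on_a, kill_if_mismatched=0.9):
    """Probability that a lineage resistant only to drug A survives a
    transmission chain. Each new host is on drug A with probability
    p_on_a; landing in a drug-B host kills the lineage with probability
    kill_if_mismatched. All parameters are illustrative assumptions."""
    per_step = p_on_a + (1 - p_on_a) * (1 - kill_if_mismatched)
    return per_step ** steps

# Cycling: during the long drug-A phase, every host is on drug A,
# so the A-resistant lineage spreads unchecked.
print(chain_survival(20, p_on_a=1.0))

# Mixing: each host is independently on A or B, so almost every
# chain soon hits a mismatched host and dies out.
print(chain_survival(20, p_on_a=0.5))
```

Mixing confronts the lineage with a heterogeneous environment at every single transmission, whereas cycling changes the environment only between long homogeneous phases, within which a matched resistant strain can sweep.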
Perhaps equally important are more general but less-recognized selection forces from infectious agents. We have a wide variety of protective bodily responses, such as fever, cough and vomiting, that are held in reserve until released by a mechanism that detects the presence of pathogens (Ewald 1994). Mechanisms that regulate expression of these defenses are under constant selection (Nesse 2005c). Individuals vary in how high fever rises during infection, how quickly immune cells are activated, and how much diarrhea is produced for a given level of infection. Most symptoms of infectious disease are not caused directly by the pathogens: they result from these useful defenses. Some are aspects of the inflammation and immune systems that attack pathogens. Others, such as cough, diarrhea and vomiting, extrude pathogens. For all such defenses, one might think that selection would shape regulation mechanisms to be close to optimal.
But what is optimal? The answer is surprising. When the cost of a false alarm is low relative to the possible costs of not expressing a sufficient defense when it is needed, selection shapes regulation mechanisms that express the defense more readily or more intensely than seems sensible. We put up with smoke detectors that sometimes wail when we make toast because we want to be sure they warn us about any real fire. The ‘smoke detector principle’ applies signal detection theory to yield quantitative predictions about how selection shaped defense regulation mechanisms (Nesse 2005c). It has clinical relevance because so much everyday medicine involves prescribing medications that block defenses such as fever, pain and cough. This tends to be safe because the body has redundant defense mechanisms and because the thresholds for defense expression are set by the smoke detector principle. Sometimes, however, it is fatal.
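The quantitative heart of the principle is ordinary signal detection theory: express the defense whenever the expected harm avoided exceeds the defense’s own cost. A minimal sketch, with arbitrary illustrative costs in fitness units:

```python
def should_defend(p_infection, cost_defense, cost_undefended_harm):
    """Express a defense iff its expected benefit exceeds its cost."""
    return p_infection * cost_undefended_harm > cost_defense

cost_fever = 1.0       # illustrative cost of mounting a fever
cost_sepsis = 200.0    # illustrative cost of an undefended infection

# Optimal threshold: defend whenever P(infection) exceeds the cost ratio.
threshold = cost_fever / cost_sepsis
print(threshold)                                      # 0.005

print(should_defend(0.01, cost_fever, cost_sepsis))   # True
print(should_defend(0.001, cost_fever, cost_sepsis))  # False
```

With these invented numbers, the optimal system expresses fever whenever the probability of infection exceeds 0.005, so the great majority of responses can be false alarms while the regulation remains optimal; this is why blocking a defense is usually, but not always, safe.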
Far from suggesting that doctors should let nature take its course, an evolutionary perspective suggests that many defensive reactions are excessive or entirely unnecessary. It also suggests that we have only begun to study a crucial set of principles at the core of general medicine. General practice could have a stronger foundation in science if practitioners had tools for thinking about how selection shaped defense regulation. Most already know that using codeine to block cough after surgery is likely to result in pneumonia, and an increasing number recognize the utility of fever. However, only a few are thinking about how natural selection shaped the mechanisms that regulate defenses. Such thinking will lead to new studies that provide the evidence we need to make better clinical decisions. In one particularly important example, a debate is now underway about whether influenza kills people directly or via the effects of released inflammatory agents (Salomon et al. 2007). If the former is true, anti-inflammatory drugs will increase death rates; if the latter is true, they will decrease them.
The central defense against pathogens is, of course, the immune system. The costs as well as the benefits of immune responses need to be analyzed in evolutionary perspective as a life-history trait (Zuk et al. 1996; Lochmiller and Deerenberg 2000; Zuk and Stoehr 2002; Schmid-Hempel and Ebert 2003). In addition to energetic costs, there is tissue damage from immune surveillance, reproductive costs, mate display costs, and others. Of particular interest is variation in immune response, either because of limited resources or facultative systems that adapt the response to the current inner and outer situation (Schmid-Hempel 2003).
The study of pathogen virulence offers another example of how an evolutionary perspective can transform a field. Just a decade ago, many physicians were taught that natural selection tended to shape pathogens and hosts to a benign mutual co-existence. After all, why kill the host that feeds you? Rigorous evolutionary analysis revealed that this view is fundamentally incorrect (Anderson and May 1979; May and Anderson 1979; Ewald 1994; Frank 1996; Ebert 1998).
The most important factor shaping virulence is its influence on the probability of transmission to a new host; virulence is shaped to whatever level maximizes transmission. For instance, prior to modern sanitation, bedridden patients with cholera could infect others and the organisms causing the most diarrhea were transmitted the most. The result is often fatal, but such traits are nonetheless selected for if they maximize transmission. This could have major implications for public health. Good water purification systems prevent infection from bedridden patients, thus shifting the advantage to less virulent organisms whose victims can be up and around to spread them.
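This trade-off argument is easy to make explicit. In the sketch below, a standard fitness proxy for the pathogen is R0 proportional to beta(v)/(mu + v), where transmission beta rises with virulence v at a diminishing rate and v adds to the host death rate mu; the functional form and parameter values are invented for illustration, not taken from any particular study.

```python
# Virulence trade-off sketch: transmission rises with virulence with
# diminishing returns (beta(v) = sqrt(v), an assumed form), while
# virulence adds to the baseline host death rate mu.

def pathogen_r0(v, mu=0.1):
    return v ** 0.5 / (mu + v)

# Grid search for the virulence level that maximizes transmission success.
grid = [i / 1000 for i in range(1, 2000)]
best = max(grid, key=pathogen_r0)
print(round(best, 3))   # an intermediate optimum, neither zero nor maximal
```

Selection favors neither maximal nor zero virulence but whatever intermediate level maximizes lifetime transmission; an intervention like water purification changes the shape of the transmission function and so shifts the optimum toward lower virulence.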
Virulence levels can also be influenced when several genetically different pathogen strains compete within a host. This should select for increased virulence. Studies of trypanosomiasis (sleeping sickness) suggest multiple infections may be much more common than previously suspected (Balmer and Tostado 2006).