Increased disease risk can emerge because the individual has been exposed to an environment that is entirely novel, that poses a challenge, or that is otherwise beyond their evolved capacity to adapt. At its simplest level, diabetes mellitus type 2 can be envisaged as the response of the individual to a nutritional environment that imposes a metabolic load beyond their capacity to cope. While there are developmental and genetic factors that influence the adaptive metabolic capacity of an individual, ultimately it is the exposure to high-glycemic foods and a very different mix of macronutrient intakes that is thought to be the basis of the diabetes epidemic. Even in populations such as the Pima Indians, for whom it has been argued that genetic factors are critical to the high incidence of diabetes mellitus type 2, maintenance of higher energy expenditure and more traditional nutrition in those villages that keep a traditional subsistence lifestyle is associated with a lower incidence of diabetes (Schulz et al. 2006).
Scurvy can be considered as another example of mismatch. Only some primates, including humans, have lost the capacity to synthesize vitamin C (Chatterjee et al. 1975). It is assumed that the gene encoding the enzyme responsible for its synthesis, L-gulonolactone oxidase, underwent neutral mutations in a frugivorous ancestor and that it was only with exposure to environments without access to fresh fruits—such as extreme famine and sailing ships—that our inability to make vitamin C became apparent.
Myopia, or short-sightedness, is caused by inappropriate growth of the eyeball in its sagittal dimension, so that light is focused in front of the retina. Eyeball growth occurs in childhood and is regulated by growth factors induced by light exposure, so that growth can be affected by the dominant focal length of vision. Close-range indoor activities such as reading may lead the growing child’s eyes to focus only at the distance of a page, and indeed an association between the incidence of myopia and increased education has been noted (Milinski 1999). While there may be a genetic predisposition to myopia in some populations, exposure of children in those populations to the outdoors leads to a lower incidence of this condition (Dirani et al. 2009). Thus, myopia can be seen as a mismatch between the environment in which we evolved—outdoors in natural light—and the modern-day largely indoor life.
Robin Dunbar proposed, from the association between neocortical size and group size across different species of primate, that humans evolved to live in social groups of 100–150 (Dunbar 2003). There is indeed much evidence in support of that proposition. But humans now live in much larger groups than in the Paleolithic—groups that rely predominantly on verbal or even electronic communication, with less emphasis on the bonding effect of body language. If we add to that the complexity of modern society and its structures compared to those of the Paleolithic or even the modern hunter-gatherer social organizations, it is reasonable to speculate that some forms of mental illness simply reflect individuals living in a social environment beyond their evolved capacity to cope. This is a fertile area for research (Brüne 2008).
With the development of animal husbandry and agriculture, and the associated shift to a more concentrated way of living, humans became much more exposed to parasitic loads from each other and from proximity to animals. Pandemic influenza outbreaks generally arise from this association. Other infectious patterns reflect the changing environments: the historical distribution of malaria is directly linked to patterns of swamplands and land use. Similarly, increased irrigation following the development of canals in Africa led to a considerable increase in schistosomiasis (Steinmann et al. 2006). The implications of the development of antibiotics are discussed later.
Life history factors
This category combines several related evolutionary concepts that account for how the evolved human life course strategy and changed way of living have led to increased susceptibility to disease. There is necessarily some overlap with the other pathways discussed in this paper, and it includes multiple possible mechanisms such as life history trade-offs and antagonistic pleiotropy; however, we find it a useful heuristic for considering a number of evolutionary explanations.
In life history, there are two basic kinds of trade-off that may arise as a result of adaptive developmental responses to environmental influences. The first occurs when such responses confer immediate advantage, such as the early metamorphosis of the spadefoot toad tadpole in response to pond desiccation, which promotes immediate survival but results in a smaller adult that is more susceptible to predation. The second type of trade-off arises from responses that confer an advantage manifest later, such as the presence of predators inducing the young of the water flea to develop defensive armor in adulthood, the trade-off being a decrease in resources for reproduction. In humans, while intrauterine growth restriction may be viewed as an immediate adaptive response of the fetus for surviving maternal ill-health or placental dysfunction, the fetus may also make anticipatory responses to more subtle nutritional or hormonal cues, adjusting its developmental trajectory to the type of environment in which, according to its prediction, it will live postnatally. These ideas, and the adaptive nature of developmental plasticity, have been expounded extensively (Gluckman et al. 2005a,b, 2007, 2010).
Anticipation is common across taxa, but becomes more obvious in a long-lived species such as the human. Whereas the strategy of bet-hedging is used by species with very high reproductive outputs (Beaumont et al. 2009), mammals with their relatively low reproductive outputs and high maternal investment rely on predictive adaptation to enhance offspring fitness. Situations when different strategies between mother and offspring will emerge have been modeled (Marshall and Uller 2007). Humans are at one extreme, and the situations in which maternal fitness will dominate as in some other species do not occur in humans. Even in famine, fecundity is maintained to a degree. Prediction need not be accurate to be selected (Lachmann and Jablonka 1996), and biases may exist in prediction. Because the consequences of predicting a high-nutrition environment and ending up in a low-nutrition environment are worse than the converse, there is a bias towards predicting a lower nutrition environment and, consequently, towards human susceptibility to disease in modern obesogenic environments. This argument is supported by the observation that under conditions of severe undernutrition, children of lower birth weight are more likely to develop the more benign syndrome of marasmus than those of higher birth weight, who develop kwashiorkor (Jahoor et al. 2006). We argue that the marasmic children are better adapted to low nutrition by virtue of their lower birth weight and thus tolerate undernutrition better. This hypothesis is supported by the finding that the marasmic children as adults have a bias in their appetite towards carbohydrate and possibly fat consumption (T. Forrester, unpublished data), analogous to the preference observed in rats that have been prenatally undernourished.
In considering life course factors, it is important to recognize that a cue acting in early life may have different effects from cues acting later. For example, in rats, prenatal undernutrition shortens life while postnatal undernutrition prolongs life (Jennings et al. 1999). Similar biphasic effects are seen for the influence of nutrition and possibly stress on the age of puberty (Sloboda et al. 2009).
There is increasing evidence for the role of developmental plasticity in influencing the susceptibility to developing disease in a particular environment. It has been shown that longevity was affected by the season of birth in the Gambia, an environment in which the weight gain of pregnant women drops from 1500 g/month in the harvest season to just 400 g/month in the hungry season (Moore et al. 1999). Offspring born in the hungry season had the same infant and juvenile mortalities as the children born in times of plenty, but after the age of 20 they started to show an increase in mortality such that their average life expectancy was 15 years shorter. David Barker (Hales and Barker 1992) and many others showed that size at birth, which can be taken as a proxy measure of intrauterine conditions, was associated with altered risks of metabolic and cardiovascular disease, mood disorders, and osteoporosis in later life.
Elsewhere, we have extensively reviewed this area of research, known as the ‘developmental origins of health and disease’, or DOHaD (Gluckman et al. 2010). We view this phenomenon as a classic example of developmental plasticity operating to ensure survival to reproduce but resulting in antagonistic pleiotropic disadvantages in later life. It is argued that constraint of fetal growth, lower maternal nutrition (Gale et al. 2006), or maternal stress (Meaney 2001) signal to the fetus that the postnatal world will be threatening. The developmentally plastic fetus may make responses incurring either immediate or delayed trade-offs and adjust its physiological development accordingly. A threatening world implies less nutritional security, and thus, an appropriate phenotype is based on a nutritional adaptive capacity to a plane that is lower than that of fetuses who anticipate a more benign world. Thus, the fetus exposed to a low-nutrition environment may or may not be smaller (depending on the severity of the limitation), but either way as an adult it may reach the threshold of metabolic load to which it can respond healthily, leading to diabetes and other metabolic conditions at a lower nutritional level than an individual who, early in life, shifted to a developmental trajectory more appropriate for a higher nutrition environment (Gluckman et al. 2010). Evidence to support this hypothesis includes epidemiological studies on humans prenatally exposed to famine, who have a higher risk of coronary heart disease and obesity in adulthood (Painter et al. 2005). Experimental studies have also shown that rats that experienced fetal undernutrition have higher body fat and are more sedentary compared to their counterparts that received adequate fetal nutrition (Vickers et al. 2000, 2003). 
They subsequently develop a constellation of symptoms similar to the human metabolic syndrome, such as obesity and hypertension, in adulthood, and these effects are exacerbated by a high-fat postnatal diet. However, if leptin, a satiety hormone made by fat, is administered to these rats neonatally, thus artificially shifting their perception of their environment from low to high nutrition, then neonatal weight gain, caloric intake, locomotor activity, and fat mass are normalized for the rest of their lives despite exposure to a high-fat diet (Vickers et al. 2005).
Pleiotropy describes how a single gene can influence several different physiological and phenotypic characteristics. Antagonistic pleiotropy refers to genes that confer an advantage in early life, but that result in ill effects later in life. We find utility in employing this term to encompass phenotypic traits that involve life course-associated trade-offs; for example, because human fitness depends primarily on survival to reproductive age (Jones 2009), a potential adaptive advantage in early life may become disadvantageous later on and manifest as obesity, diabetes, and cardiovascular disease in middle age. High levels of insulin-like growth factor-1 (IGF-1) promote infant and childhood growth and presumably were selected for their consequent fitness advantage, but in later life are associated with higher rates of prostate and breast cancer.
Importantly, these mechanisms operate in all pregnancies and are a reflection of the role of developmental plasticity in ensuring adaptability to a changing environment on a timescale of change between that of selection (many generations) and homeostasis (minutes–days). There is a growing body of experimental and clinical data showing that epigenetic processes are involved. Cues that induce plastic responses must be distinguished from those that disrupt the developmental program: clearly teratogens, such as thalidomide or the rubella virus, operate through the latter. For this reason, we would suggest that terms such as metabolic teratogenesis (Freinkel 1980) are not particularly helpful.
The human pregnancy is a co-adaptive compromise. The human fetus is born in a more altricial state than other closely related primates, because the human upright posture requires that the fetus pass through a pelvic canal that is narrower than in other primates (Rosenberg and Trevathan 1995). Brain growth must continue for a long period after birth to reach the disproportionately larger brain size of the hominin clade. Fetal growth in mammals is not solely genetically controlled; otherwise, obstructed labor would result in every case where pregnancy followed a female mating with a larger male. Indeed, human fetal growth can be shown to be largely determined by the maternal environment (Gluckman and Hanson 2004). In pregnancies where the egg has been donated, birth size is more closely related to the size of the recipient than of the donor (Brooks et al. 1995). The constraining mechanism on fetal growth is likely primarily a consequence of the utero-placental anatomy of the mother and her ability to deliver nutrients to the placental bed. Further, the placenta, at least in sheep, is able to clear excessive concentrations of growth factors such as IGF-1 from the fetal circulation. Other studies, primarily in mice, raise the possibility of a role for parentally imprinted genes in regulating fetal growth. From studies of the IGF-2 system in mice, David Haig has developed the concept of maternal-fetal conflict to explain the evolution of imprinting (Haig 2010). However, imprinting appears in marsupials and possibly monotremes, and Eric Keverne and colleagues have made a good case for considering imprinting in terms of maternal-fetal co-adaptation rather than conflict (Curley et al. 2004).
Given the long life course of our species, this emergent field of developmental plasticity will become a major part of clinical medicine. As our understanding of epigenetic mechanisms including DNA methylation, histone modifications, and small noncoding RNAs grows, this area is likely to play a major role in clarifying disease causation and treatment. A major challenge for studies in contemporary evolution is the role of epigenetic inheritance. While epigenetic marks have long been established to transfer across mitosis, there is increasing evidence that some epigenetic marks transfer across meiosis. The most well-demonstrated mechanisms are via small RNAs in sperm that can transfer between generations inducing phenotypic effects on pigmentation and heart development in mammals (reviewed in Nadeau 2009). Transgenerational genetic effects on body weight and food intake have also been shown to be passed through the mouse paternal germline for at least two generations (Yazbek et al. in press), again implying the involvement of sperm in the molecular basis for such effects. There is inferential evidence of environmentally induced epigenetic inheritance in experimental animals. For example, the effects of glucocorticoid exposure in pregnant rats on their offspring’s metabolic control extend to the F2 generation even when the intermediate F1-exposed fetus is male (Drake et al. 2005). Similarly, there is some inferential evidence in humans of male line-mediated environmental influences (Hitchins et al. 2007).
In addition to direct epigenetic inheritance, epigenetic marks may be induced in the F1 generation as a result of maternal effects as discussed in the DOHaD example earlier, or via grand-parental effects where the F1 generation is female. This is because the oocyte that will contribute genetic material to the F2 offspring is formed by the F1 female fetus while in the uterus of the F0 generation and is therefore exposed indirectly to the F0 environment. Similarly, male-line germ cells that will form spermatogonia are sequestered in the testis when the male is itself a fetus. Indeed, in the grandchildren of women who became pregnant in the severe Dutch famine of 1944–1945, where the exposed fetus was female, their children are more likely to be obese (Painter et al. 2005). A further form of indirect epigenetic inheritance may be seen in those cases where the environmental niche inducing the epigenetic change leading to the phenotype is recreated in each generation. The best demonstration is in rodents, where altered maternal care has been shown to induce epigenetic changes in the brain, resulting in behavioral changes and, in the next generation, the same pattern of maternal care (Weaver et al. 2004). Cross-fostering and pharmacological agents both reverse the epigenetic change and associated phenotype. The potential implications of direct and epigenetic inheritance, as well as maternal and grand-parental effects, are likely to be particularly important in human medicine, where we must focus on a single generation. This has theoretical implications for the use of traditional genotype–phenotype interactive models. Contemporary evolutionary studies need to develop models that focus on phenotype–environment interaction. 
In these models, the phenotype at any point in time should be seen as a consequence of the cumulative effects of early environmental influences inducing epigenetic change, extending back to conception where the phenotype is determined by inherited genetic and epigenetic information.
Demographic change, acting through these developmental processes, may also play a role in the changing patterns of disease. First-born children are smaller because of the processes of maternal constraint (Gluckman and Hanson 2004), and they have a higher risk of obesity (Reynolds et al. in press). Their smaller size reflects greater maternal constraint and has also been interpreted in life history terms (Metcalfe and Monaghan 2001). We have shown that first-borns have a very different pattern of DNA methylation at birth (McLean et al. 2009), and falling family size may be a factor in changing patterns of chronic disease.
There are other dimensions to life course pathways to disease. The progressive loss of oocytes from the ovary is an inherent property and explains the decline in fertility in women from the beginning of the fourth decade of life. However, cultural changes mean that women can and do delay their pregnancies, and then, because of lower fertility in their later reproductive years, have a much greater requirement for medical intervention to treat infertility. Here is another example of how cultural developments have impacted on human biology; this phenomenon has arisen because of the interaction between prolongation of life course resulting from technological developments in medicine and public health, and shifting of reproductive timing caused by the social changes associated with the development of contraceptive technologies.
Adolescence is an illustrative example of the changing nature of the human life course and its interaction with a changing social context. The age at menarche, the best documented sign of reproductive maturation, was probably around 7–13 years in Paleolithic times (Gluckman and Hanson 2006); full reproductive competence would have been achieved in concert with the psychosocial maturation required to function as an adult within society. The subsequent advent of agriculture and settlement, and the attendant negative outcomes of childhood disease and postnatal undernutrition, delayed the onset of puberty, but again this would have been matched to the increased complexity of society. However, the age at menarche has fallen in Europe from a mean of 17 years around 1800 to about 12 years now (Gluckman and Hanson 2006). This decline can be attributed to better maternal and child health following Enlightenment support for population growth, improved sanitation and access to food in the postindustrial era, and public health and medicine from the late nineteenth century on. But whereas the age of puberty has fallen, the age at which an individual is treated as an adult appears to have risen dramatically in modern Western society. While in the nineteenth century individuals in their late teens were accepted as adults, this is now less likely. If the term adolescence is restricted to the period between the completion of biological maturation and acceptance as an adult in society, then adolescence has probably extended from one to three years in the nineteenth century to over a decade in the twenty-first century. Indeed, modern neuro-imaging techniques demonstrate that the brain shows ongoing maturation until well into the third decade, with the pathways influencing impulse control and higher levels of executive function being the last to mature (Lebel et al. 2008).
There is, thus, a mismatch between biological and psychosocial maturation, reflected in far greater morbidity among children who undergo earlier biological maturation, because of acting-out behaviors and emotional disorders, including suicide attempts (Michaud et al. 2006).
These observations raise several hypotheses. Is the delayed maturation of the brain evolutionarily ancient, having become significant only recently because higher executive functions are needed to cope with the complexities of modern society? Have the complexities of modern society induced a longer period for skills to be learnt and the brain to mature, as has been suggested in arguments about the origins of the juvenile period in children? These two hypotheses could be tested by studies of brain maturation across different cultures. Or does the way in which we now rear children change the pattern of brain maturation? In most Western societies, we now control children’s environments much more rigorously than ever before, and the effect of this could be assessed by comparisons between different educational systems.
Co-evolutionary considerations and the evolutionary arms race
Humans live in symbiotic relationships with a large population of bacteria, particularly in their gastrointestinal tract. Increasingly, it is recognized that this extended symbiome needs to be considered in understanding human health. Alterations in the gut flora are associated not only with acute gastroenteritis but also with chronic disease. For example, there is growing evidence that the gut microbiome plays a role in determining metabolic homeostasis and the risk of diabetes mellitus type 2 and obesity (Tschöp et al. 2009). It is not clear whether the significance of the gut microbiome arises simply from its role in predigestion, from the potential it has to release inflammatory cytokines, or whether it might induce epigenetic changes in the human host.
A key to understanding the consequences of our relationships with the microbial world is in their fast generation times, leading to an evolution much more rapid than that of humans. This is best illustrated by antibiotic resistance. The interval between the commercial introduction of antibiotics and the appearance of resistance in human commensals and pathogens is often frighteningly short, on the order of 1–2 years. Broad use of antibiotics leads to rapid spread and high frequency of resistant strains, particularly in hospital and long-term care settings where rates of antibiotic use are the highest. Moreover, it can be difficult to get rid of resistance once it evolves. Compensatory mutations ameliorate the costs of resistance for bacteria (Schrag and Perrot 1996) and can create fitness valleys that prevent reversion to drug-sensitivity even after drug use is discontinued (Levin et al. 2000). The challenge for medicine is similar to that faced in agriculture, where insecticide use leads to insecticide resistance and herbicide use leads to herbicide resistance. Evolutionary theory has proven useful for suggesting approaches for more effectively deploying our antibiotic resources in ways that will minimize resistance evolution (Lipsitch et al. 2000). For example, despite early enthusiasm, results from trials of antibiotic cycling have been somewhat disappointing (Brown and Nathwani 2005). Evolutionary theory explains why (Bergstrom et al. 2004) and suggests alternative approaches that may be more effective.
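The speed of resistance spread follows directly from standard selection arithmetic compounded over short bacterial generation times. The sketch below is illustrative only—the initial frequency, selective advantage, and generation time are assumed round numbers, not empirical estimates—but it shows why the interval to clinically visible resistance can be so short:

```python
# Minimal sketch (illustrative parameters, not fitted to data): how a
# modest per-generation selective advantage, compounded over short
# bacterial generation times, carries a rare resistant strain to high
# frequency within a clinically short time.

def generations_to_frequency(p0, s, target):
    """Generations needed for a resistant strain at initial frequency p0,
    with per-generation selective advantage s under drug pressure, to
    reach `target` frequency (standard haploid selection recursion)."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + p * s)  # frequency after one generation
        gens += 1
    return gens

# A strain starting at one in a million with a 10% advantage per
# generation under treatment:
gens = generations_to_frequency(p0=1e-6, s=0.10, target=0.5)
print(gens)  # roughly 145 generations
```

At bacterial generation times of under an hour in vitro (and still short in vivo), ~145 generations is days to weeks of evolutionary time, consistent with the observation that resistance often appears within a year or two of a drug's commercial introduction. The recursion also makes the reversion problem visible: once compensatory mutations remove the fitness cost, the selective coefficient against resistance in drug-free conditions approaches zero and the frequency barely declines.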
Similarly, evolutionary models allow us to understand the process by which viral threats emerge. Phylogenetic analysis has helped us reconstruct the early spread of the human immunodeficiency virus around the globe (Korber et al. 2000), and the genetic origins of the H1N1 influenza pandemic (Smith et al. 2009). Models of sequence evolution can inform the process of designing each year’s influenza vaccine (Russell et al. 2008). Mathematical models of disease emergence have likewise been useful in developing mitigation plans for potential pandemic strains of influenza (Ferguson et al. 2005).
Infections can also shape human evolution. While much in the historical record remains speculative and inferential, there are some contemporary, well-recorded examples. For example, kuru is a prion-caused neurodegenerative disease transmitted by cannibalistic funeral rites in New Guinea. Some mutations in the prion protein gene confer partial or even strong resistance to the disease. There is now evidence that these resistance alleles emerged only in recent generations, from a common ancestor some 10 generations ago, and that the resistance gene is now well spread throughout the population at risk. This may in part explain the recent reduction in the incidence of kuru (Mead et al. 2009).
In population genetics, the examples of sickle cell anemia, the thalassemias, and glucose-6-phosphate dehydrogenase deficiency have all been explained in terms of the heterozygote advantage providing resistance against malaria, whereas the homozygous form is associated with more severe disease (Luzzatto 2004). Recently, the possession of two variants in the APOL1 gene—a characteristic common in Africans but absent in Europeans—was shown to be associated with an increased risk of renal disease (Genovese et al. 2010). The protein produced by these variants showed lytic activity against the trypanosome parasite that causes sleeping sickness, suggesting that the risk alleles were maintained to help confer a protective effect. The association of the variants with protection was dominant, while that with renal disease was recessive, pointing towards a heterozygote advantage model.
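The population-genetic logic of heterozygote advantage can be stated compactly. The sketch below uses the textbook balancing-selection result; the selection coefficients are hypothetical values chosen for illustration, not measured figures for sickle cell or malaria:

```python
# Illustrative sketch (hypothetical selection coefficients): the classic
# balancing-selection result that heterozygote advantage maintains both
# alleles at a stable intermediate frequency, as invoked for sickle cell
# trait and malaria.

def equilibrium_frequency(s_AA, s_aa):
    """Stable frequency of allele a when genotype fitnesses are
    1 - s_AA : 1 : 1 - s_aa for AA : Aa : aa.
    Standard result: p_hat(a) = s_AA / (s_AA + s_aa)."""
    return s_AA / (s_AA + s_aa)

# e.g. suppose homozygous normal (AA) loses 15% fitness to malaria and
# homozygous sickle (aa) loses 80% to anemia (illustrative numbers only):
p_sickle = equilibrium_frequency(s_AA=0.15, s_aa=0.80)
print(round(p_sickle, 3))  # ≈ 0.158
```

The key qualitative point is that neither allele is eliminated: as the deleterious homozygote becomes rarer, selection against its allele weakens relative to the heterozygote's protection, so the allele frequency settles at the ratio of the two selection coefficients.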
Speculation persists about other common alleles that are in apparent equilibrium within populations. For example, in European populations, the most common recessive disease is cystic fibrosis, a disorder of the chloride-secreting channel in epithelia such as the lung, associated with excessively viscous secretions and subsequent wheezing and infections; a carrier frequency of one in 25 has been seen in some populations (Massie et al. 2005). It has been suggested that this frequency could not persist unless there were an advantage to being a heterozygote. Possible past selective pressures include typhoid, cholera and other diarrheal diseases, or perhaps tuberculosis, but no firm data exist. A recent study analyzing the genome in two human populations identified genes associated with various functions, such as immunity and keratin production, that showed strong evidence of long-term balancing selection (Andres et al. 2009); such studies are a step towards finding functional variants that may be of phenotypic and medical relevance.
Balancing selection has also been used to explain differences between allelic forms that confer different behaviors. For example, there are alternative alleles of the promoter of the vasopressin receptor gene, which is associated with pair bonding, with one form being more common in individuals who have less stable relationships (Walum et al. 2008). While at the moment such observations are speculative and premature, as human genomic information becomes more widely incorporated into the understanding of human biology and behavior, such inferences and associations will become more frequent; they raise ethical issues that will need to be confronted.