Geographic distribution of plant pathogens in response to climate change




Geographic distributions of pathogens are the outcome of dynamic processes involving host availability, susceptibility and abundance, suitability of climatic conditions, and historical contingency, including evolutionary change. Distributions have changed rapidly in the past and continue to change rapidly in response to many factors, including climatic change. The response time of arable agriculture is intrinsically fast, but perennial crops and especially forests are unlikely to adapt easily. Projections of many of the variables needed to predict changes in pathogen range are still rather uncertain, and their effects will be profoundly modified by changes elsewhere in the agricultural system, including both economic changes affecting growing systems and hosts and evolutionary changes in pathogens and hosts. Tools to predict changes based on environmental correlations depend on good primary data, which are often absent, and need to be checked against the historical record, which remains very poor for almost all pathogens. We argue that at present the uncertainty in predictions of change is so great that the important adaptive response is to monitor changes and to retain the capacity to innovate, both by access to economic capital with reasonably long-term rates of return and by retaining wide scientific expertise, including currently less fashionable specialisms.


What can be usefully said about changes in geographic distribution as climate changes? In this paper, we argue that many pathogen distributions are already far from an equilibrium and may change rapidly in any case. Although climate change will produce additional changes in distribution, these are hard to predict and invasions and adjustments to host density and distribution will at the same time cause large and only partly predictable changes. This means that specific predictions about the effects of climate change are likely to be unreliable, so the useful advice which can be given to policy-makers and managers of farms and the landscape is of a fairly general nature.

We develop our argument in three phases. The first phase begins by considering the population dynamical processes which set the geographic range of a pathogen. We continue by comparing the time-scales involved in practical decision making and in climate-induced changes in the geographic range of pathogens. We then discuss how adequate the available information about current pathogen distributions is, and what limits those distributions. In the second phase, we discuss methods for deducing how a pathogen’s geographic range depends on environmental variables. We then introduce relevant aspects of current climate modelling capability, stressing the extent of between-model uncertainty. This serves as an introduction to discussion of some examples of detailed projection exercises which have been published. Finally, we try to answer the question posed above, showing how the answer may suggest specific policy options.

Population processes setting pathogen ranges

Pathogen distributions are set by host distribution and susceptibility levels, crop management and density, vector distribution if they require a vector, and the environment. Pathogen distributions are necessarily subsets of their host and vector distributions, so changes in host or vector distribution will be a dominant influence on pathogen distributions. However, many pathogens persist as rarities, or frequent but of very low severity, in areas where they do not cause disruption to growers. We therefore need to distinguish two types of range: the range within which a pathogen occurs, and the range within which adjustments to management systems in response to it are needed in order for farming to be successful. Both ranges may change with environment, but management and host change have historically been responsible for the larger problems. For example, in the early 1980s, net blotch caused by Pyrenophora teres became for a while the major problem in growing winter barley in southern England (Jordan, 1981). This coincided with a simultaneous increase in frequency of minimum tillage, leading to much more efficient transfer from one season’s crops to the next, and to the release of the cultivar Sonja, which was much more susceptible to the pathogen than previous cultivars (Jordan & Allen, 1984). Similar examples could be given from all over the world: for example, the rise in coffee berry disease (Colletotrichum kahawae) in the mid-20th century in East Africa (Griffiths & Waller, 1971), or the increased importance of maize streak disease in Zambia and Zimbabwe as irrigation led to year-round cultivation of maize (Rose, 1978). 
The overwhelming importance of management and cultivar means that direct effects of climate change on components of the pathogen life-cycle will in fact be one of the more minor factors altering pathogen distributions on crops in the future; indirect effects on host susceptibility through changes in cultivar, management of area grown and types of land used are likely to be much more important (cf. Burdon et al., 2006). However, it remains essential to understand how changes in the physical environment compare and interact with other causes of change.

At equilibrium, the geographic range of a pathogen within a host’s distribution depends on the life-cycle at the landscape scale, rather than at the scale of individuals. Epidemiological parameters are usually measured at experimentally accessible scales which deal with individual hosts, but the longer-term ecology may depend on parameters not usually measured. For example, consider a specialist disease of an arable crop, such as yellow rust (Puccinia striiformis) on wheat. Much of the published modelling and measurement concerns the basic multiplication cycle within an infected field: a spore on a leaf has a certain probability of infecting; after the latent period is past new spores start to be produced in a characteristic temporal pattern; these are captured by air-currents, lofted and a proportion are re-deposited (e.g. van den Berg & van den Bosch, 2007). Each phase is affected in quite subtle ways by the state and growth stage of the crop and by environmental variables including radiation, rain, wind, dew and temperature. The overall effect of these relationships can be modelled to determine whether the pathogen can spread in the crop or vegetation patch, and how severe it is likely to become. It is possible to measure the population of a pathogen by counting infected units at any scale, either plants or patches. Whether a disease will persist in a population of the chosen units depends on the parameter R0, defined as the number of infected units produced over the lifetime of an infected unit in the absence of any other infection (Segarra et al., 2001). However, the conditions can be quite favourable, with R0≫1 in-field, and yet the disease not be endemic in a region, if the occurrence of the host in the landscape and the conditions in the host-free season are unfavourable, so that R0 < 1 at the landscape scale.
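The distinction between in-field and landscape-scale R0 can be sketched numerically. A minimal illustration follows; every parameter value and functional form is hypothetical, chosen only to show how a pathogen can multiply explosively within a field yet fail to persist regionally.

```python
# Illustrative sketch (all parameter values are hypothetical): within-field
# multiplication versus season-to-season persistence at the landscape scale.

def r0_in_field(infection_prob, spores_per_lesion, deposition_frac):
    """Expected new lesions produced per lesion within a growing crop."""
    return infection_prob * spores_per_lesion * deposition_frac

def r0_landscape(end_of_season_foci, overseason_survival, reach_new_field):
    """Expected newly infected fields founded per infected field,
    from one season to the next."""
    return end_of_season_foci * overseason_survival * reach_new_field

field = r0_in_field(infection_prob=0.05, spores_per_lesion=2000,
                    deposition_frac=0.05)
landscape = r0_landscape(end_of_season_foci=200, overseason_survival=0.001,
                         reach_new_field=0.9)

print(field)      # well above 1: epidemics build up within a season
print(landscape)  # below 1: the disease is nevertheless not endemic regionally
```

The point of the sketch is that the two thresholds are set by different parameters: the landscape R0 is dominated by over-seasoning survival and the chance of reaching the next season's host fields, quantities rarely measured in field experiments.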
The process determining endemicity in a region depends on the probability of a successful initial infection, how late in the season this happens, the resulting number of live pathogen individuals in an over-seasoning site, and the location of these relative to the following year’s fields. The key idea is related to the simple observation of Carter & Prince (1981), showing how a sharp boundary in distribution can arise from a very gradual response to environment: a metapopulation will persist where the birth rate of populations exceeds the death rate (Fig. 1). Although modulated by environment, this balance is strongly dependent on details of host population dynamics, and the boundary resulting from this interaction may be unexpected (Fig. 1) and complex (Soubeyrand et al., 2009).

Figure 1.

 Interaction of host and pathogen patch population dynamics and a north–south environmental gradient to produce a pathogen distribution. The pathogen persists where the period between host plantings in the same locality and the combination of build-up in season and decay between seasons is such that it tends to increase when rare. The maps are produced from a simple model which places fields at random in the mapped area. Fields are coloured black if they are in the region where R0 is >1 and the disease will persist. (a) A north–south gradient of environmental favourability, and a west–east gradient of cultivation intensity, so that in the intensively cultivated area the time between successive plantings in the same locality is reduced. (b) Environmental gradient is identical to (a), but cultivation intensity is uniform and intermediate across the whole mapped area. Note the very different distributions which result. (mathematica code used to generate this is available from M. W. Shaw.)
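The kind of model behind Fig. 1 can be re-sketched briefly in Python (the original is Mathematica code available from M. W. Shaw; the gradients and parameter values below are hypothetical stand-ins, not his). Fields are scattered at random; each field's local R0 combines a north–south favourability gradient with a west–east gradient in the interval between successive plantings.

```python
# Minimal re-sketch of the Fig. 1 model: fields persist where local R0 > 1.
# All functional forms and parameters are hypothetical illustrations.
import random

random.seed(1)

def local_r0(x, y):
    favourability = 1.5 * y            # north-south gradient: better towards y = 1
    interval = 1.0 + 2.0 * x           # west-east gradient: longer gaps towards x = 1
    survival = 0.9 ** interval         # inoculum decays between plantings
    return favourability * 3.0 * survival  # build-up in season x decay between seasons

fields = [(random.random(), random.random()) for _ in range(500)]
persistent = [(x, y) for x, y in fields if local_r0(x, y) > 1.0]

print(len(persistent), "of", len(fields), "fields lie where the disease persists")
```

Even in this toy version, the boundary of the persistent region depends jointly on the environmental gradient and on cultivation intensity, so changing either shifts the mapped distribution, which is the qualitative point of Fig. 1.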

Thus, to take a classic example, black stem rust of wheat is not now endemic in English wheat, though that crop occupies over 10% of the land surface of the country (wheat area in 2008, 19 000 km2: Anonymous, 2009) for 9 months of the year. The alternate host Berberis vulgaris, whilst uncommon, is not extinct, and is often infected with aecia of (presumably) other graminaceous formae speciales of Puccinia graminis, which is fairly common on grasses such as Dactylis glomerata (Kajamuhan & Shaw, 2007). However, P. graminis f. sp. tritici has not been recorded from B. vulgaris in the UK (Ogilvie & Thorpe, 1961) since eradication took place in the late 19th and early 20th centuries. The summer climate is rather cool for the pathogen, but in the 19th century and earlier, with a somewhat cooler climate than at present, the disease was a serious problem. This was presumably because the greater abundance of B. vulgaris led to a greater proportion of early infections, which could then jump from field to field, infect a greater proportion of B. vulgaris and volunteers with both mating types, and so sustain a stable cycle (Stakman, 1923). Such infections as now occur are believed to result from airborne transport of spores from north Africa and Spain (Smith, 1967). In this disease system, in-season multiplication is important, but only as the mechanism which produces a non-linear relation between one year’s infection and the next. Thus, the geographic distribution of black stem rust is neither the same as that of either host, nor limited directly by the environmental tolerance of the pathogen measured in controlled-environment conditions.

Host distributions have changed fast in response to economic and demographic change during the past few hundred years, and it is reasonable to consider the extent to which pathogens approximate to their equilibria. If they are often far from equilibrium, climate change is likely to be a minor contributor to future changes in range. Pathogen distributions can change rapidly, as observed with invasions of biocontrol agents and of undesired pathogens; recent spectacular examples include Phragmidium violaceum as a biocontrol agent in Australia and New Zealand (Evans et al., 2005), or, in North America, Phakopsora pachyrhizi (Fabiszewski et al., 2010) and Phytophthora ramorum (Prospero et al., 2009) as agents damaging the soybean crop and natural vegetation, respectively. However, other pathogens, perhaps a majority, invade native host populations slowly, for example because they or their vectors are soilborne, or because hosts are scattered and spores do not survive long-distance transport easily.


In thinking about change, it is important to have a conceptual vocabulary that allows diverse processes to be compared. For processes where there is no equilibrium, a speed or rate – so much change per year – is helpful. The inverse of a rate is a time-scale: either may be the more natural way to describe a particular process. For processes where the rate of change depends on how far the state of the process is from a desired or equilibrium state, both rate and time to reach a specified state depend on the current state. In this case, the time-scale of the process can be conceptualized as the ‘relaxation time’, a term drawn from physics. Relaxation time specifies the time-scale on which a system moves back to equilibrium after a disturbance. Since such movement is often slower the closer the system is to its equilibrium, a relaxation time is typically specified as the time taken to move halfway towards the equilibrium, or until only a fraction 1/e (about 37%) of the initial displacement remains. In an ecological context, this might mean the time taken from removal of all vegetation from a patch to move to a state with half the original α-diversity.
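For a simple exponential return to equilibrium, the relationship between the relaxation time and the corresponding half-time can be checked directly; the value τ = 10 years below is an arbitrary illustration, not drawn from any system in the text.

```python
# Exponential relaxation: x(t) - x_eq = (x0 - x_eq) * exp(-t / tau).
import math

def distance_remaining(t, tau):
    """Fraction of the initial displacement from equilibrium
    still remaining after time t."""
    return math.exp(-t / tau)

tau = 10.0  # relaxation time in years (hypothetical)
print(distance_remaining(tau, tau))  # after one relaxation time, 1/e (~37%) of the gap remains
print(tau * math.log(2))             # the equivalent half-time, ~6.9 years
```

The two conventions (half-time versus 1/e-time) differ only by the constant factor ln 2 ≈ 0·69, so either can be used so long as it is stated which is meant.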

For arable crops and most perennials, the rate at which industry and breeders respond to observed trends is generally greater than the rate of relevant climate change. Equivalently, the time-scale over which significant climate change is expected (several decades) is longer than the time-scale over which cultivars are replaced by breeding and movement between growing areas (decades). By contrast, for natural vegetation the time-scales involved in climate change are much shorter than the time-scales required to rebuild complex communities (shown, for example, by the differences in composition between woodlands a few decades old and those a millennium or more old; Wulf, 1997; Rackham, 2005). Large changes may be produced in natural vegetation by diseases responding to climatic change and stressed hosts.

A range of decisions could be influenced by information about future changes in pathogen distribution. These include decisions about breeding, land acquisition and infrastructure developments, pesticide and biocontrol discovery and development investments, and the capacity to respond to unexpected developments through monitoring and research, including near-market research (with results intended for immediate use by growers). Some sorts of generic statements may be useful for government policy-making in both the farming and environment arenas. Except in government policy, an investment/decision horizon of 20–30 years is the maximum likely to be considered, and much is determined over a 5- to 15-year period, the half-time of investment funds at 5–10% discount rates (Samuelson & Nordhaus, 1997). On any time-scale shorter than about 20 years, interannual variation dominates effects on cropping and agronomic decisions, and will continue to do so. For example, the standard deviation of annual average July and August temperatures at Rothamsted in central England is 1·15°C; under the IPCC Fourth Assessment (AR4) scenarios the projected rate of change of mean summer temperature for this region is 0·02–0·05°C per year (Christensen et al., 2007).
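The comparison between trend and interannual noise in the figures just quoted can be made explicit: at these projected rates, the accumulated warming takes roughly two to six decades to match a single standard deviation of year-to-year variation.

```python
# Years needed before the accumulated warming trend equals one standard
# deviation of interannual summer temperature (figures from the text).
sd_interannual = 1.15          # degrees C, summer temperature SD at Rothamsted
for trend in (0.02, 0.05):     # degrees C per year, projected AR4 range
    years = sd_interannual / trend
    print(f"trend {trend} C/yr: ~{years:.0f} years to match one interannual SD")
```

This is why, on decision horizons of 5–15 years, year-to-year variability rather than the climate trend dominates agronomic risk.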

Most of the decisions on investment in land influenced by predictions about climate change concern gross effects on productivity and movement of host growing areas rather than disease problems in themselves. However, there will doubtless be specific crops and regions where disease prospects modify the agronomic assessment (Gregory et al., 2009). For example, Champagne grape growers from France buying land in southern England are unlikely to be swayed in this bet-hedging investment by arguments about an altered disease spectrum. Similarly, whether external investment in, for example, Ukrainian arable land was likely to be profitable could be judged on the basis of agreement among models that, taking water supply and temperature together, predict conditions for cropping in 2050 will be broadly worse or broadly better than now. The exact mix of crops or the disease problems faced are unlikely to bother investors decades ahead. Perennial fruit crops are most affected by decadal predictions, since the capital investment per hectare is typically much larger and cash-flows negative for longer. Annual crops can adjust planting and variety multiplication very rapidly, well within 5-year time-scales [as the US response to southern corn leaf blight in the early 1970s demonstrates (Ullstrup, 1972)].

At the fast end, strobilurin fungicide resistance in Mycosphaerella graminicola probably arose four times in Europe after 2002 (Torriani et al., 2009), becoming a predominant genotype continent-wide in about 3 years as a result of the very widespread use of strobilurin fungicides and the airborne spread of ascospores between crops; the same resistance in M. fijiensis on banana in Costa Rica also had a relaxation time of the order of 1 year (Amil et al., 2007). By contrast, since the expansion of oilseed rape cultivation in the UK during the 1970s it has taken approximately 20 years for Sclerotinia sclerotiorum to become a major concern to growers (Gladders et al., 2008). The change occurred because shorter rotations and increased numbers of fields of the crop led to fields being closer on average to a discharging apothecium in a previously cropped field and hence to a large change in probability of over-seasoning. This is perhaps more typical of the time-scales over which pathogens will track host distributions.

In a study of invasive plants, Hawkes (2007) concluded that early invaders showed evidence of release from regulation by pests and pathogens, but by 50–200 years after the initial invasion began this evidence disappeared. Thus, for unmanaged vegetation, the evidence also points to relaxation times of a few decades at most. This is far faster than the recolonization time of many wild plant species (Pearson, 2006). As with crops, therefore, pathogens are likely to remain limited by their host distributions, and not become disconnected from them. However, there is serious concern that climate zones will move faster than it is possible for plant populations to track them; this is especially a concern for local endemic species, which are likely to suffer disproportionate extinction (Loarie et al., 2009). Forests, perennials and annual plants will have different relaxation times. Diseases of each do not necessarily match. Forests are very inefficiently selected for resistance to foliar pathogens or pest-transmitted viruses because the generation time of trees is much longer than the time-scale on which such pathogens or vectors can spread and reach large numbers. The genetic resistance structure of forests presumably reflects survivors of past disease expansions as well as continuing diversifying selection (Bergelson et al., 2001). Linkage disequilibrium with other traits will be strong for some resistance genes because pathogens and pests exert very intense selection over few generations. Hence, forest trees are likely to have poor matching to their environment and slow responses to disease, with above-ground pathogens moving into new host range as fast as it is colonized. By contrast, some fast-maturing plant species with high rates of long-distance dispersal may change their distribution faster than their pathogens can track.

Factors involved in determining R0 and so the equilibrium distribution of a pathogen include host presence, phenology, resistance and agronomy, especially planting date in relation to season; chance; community structure, including the abundance and types of competitors, predators and alternative or alternate hosts; and finally climate – both frequency of conditions permitting infection, dispersal and rate of growth once in the host, and the severity of conditions in the off-season when the pathogen population is decreasing. One class of chance events needs special mention. Any species occupies a range of niches. If a small number of genetic changes occur, the niche may be somewhat altered. For example, before the introduction of strobilurin fungicides, niches including these chemicals could not initially be occupied by, for example, Mycosphaerella fijiensis on banana, but evolution of resistance was very rapid, because the altered niche was a single genetic change away (Amil et al., 2007). Other niches may be further away and less easily reached; the vast majority of niches are simply inaccessible from the current genome of an organism (a human cannot become a tree, even in a sunny country). Chance plays a big role in niche adaptation, because the accessibility of a niche may depend on the availability of an intermediate niche, or on preliminary changes to the population as a result of genetic drift. To model all this is challenging or impossible, but the approach does make it clear that the mix and density of host cultivars grown matters and that responses may be non-linear (Shaw, 2008). Thus, specific prediction of geographic change would require not only understanding of pathogen dynamics in one host setting, but how host distributions and qualities will change. This would include innovation and all the economic changes coming from price and demand changes, as well as technical changes.
As an example of how fast this happens, the ratio of barley to wheat cultivation area in the UK was 1·5 in 1980 (Anonymous, 1981), but is now 0·3 (Anonymous, 2008).

Current geographic distributions

Maps of pathogen occurrence are regularly published, with the CAB International series especially extensive. For the most part these are necessarily based on published reports or the provenance of herbarium specimens. The sampling giving rise to these reports is therefore far from uniform and statistically highly biased by many factors, and negatives are unlikely to be explicitly represented. Furthermore, large countries covering many climatic zones may be represented as one entity. For known, important diseases the maps probably represent incidence correctly, but maps of severity are unusual, rarely global, and often specific to a particular year. For diseases of less interest, the reported ‘occurrence’ is likely to depend in part on where specialists work and on whether the disease currently matters.

Shifts in the geography of both wild and cultivated plant species are likely to have the most dramatic effects on plant disease, because the new associations greatly increase the probability that new pathogen–host associations will form; some of these may be serious. The obvious examples are the new associations formed when crops are moved to new continents or countries: when cacao or cassava, native to South America, were cultivated in Africa, the respective diseases caused by cocoa swollen shoot badnavirus and cassava mosaic geminivirus became major problems (Thresh, 1991). Because wild hosts must rely on natural dispersal mechanisms across a cropped landscape with many unsuitable areas, some species will not be able to invade new areas as fast as climatic zones shift, and new vegetation communities are likely to form. In the margins of surviving areas hosts are likely to be stressed and new pathogens may be able to attack, and existing attacks may become more severe (Desprez-Loustau et al., 2007; Wiedermann et al., 2007). By contrast, most foliar diseases will track their host distributions, because dispersal mechanisms of plant pathogens are at least as effective as those of their hosts. Soilborne diseases are less likely to track their hosts closely. Although there have recently been many studies of disease in wild plant communities, knowledge of its consequences and of detailed geographic distribution of disease is not good enough to make generalizations about distributions, let alone predictions of geographic change in the relative distributions of pathogen severity and host density.

Looking at distribution maps, it is clear that there are three types: all host range full [e.g. M. graminicola (CAB International, 1986); Venturia inaequalis (CAB International, 1978); Phytophthora infestans (CAB International, 1996)]; continental divisions, so that for example Africa has the disease, but the Americas do not [e.g. cassava mosaic geminivirus (CAB International, 1998)]; and, perhaps the commonest, patchy with zones of occurrence smaller or much smaller than the continental scale [e.g. barley stripe mosaic hordeivirus (CAB International, 1999); Didymella bryoniae (CAB International, 1980)]. The first case clearly suggests that wherever the host can grow, the pathogen can also be. However, the occurrence of a pathogen does not mean it causes a problem. For example, M. graminicola has a distribution more or less coincident with that of wheat (Eyal, 1999). It was present in England in the mid-1950s, but rarely spread to the upper parts of the plant (Brooks, 1953). Although it has now become the most serious pathogen in UK wheat production (FERA, 2007) the distribution map is unaltered. Similarly, potato blight occurs wherever potato is grown, but is problematic in a substantially smaller geographic area (Hijmans et al., 2000). The second case, where the distribution is limited to one or more large regions, could reflect large-scale environmental differences, but tends to suggest a situation far from equilibrium. For example, cocoa swollen shoot badnavirus is restricted to Africa (Dzahini-Obiatey et al., 2010), and Moniliophthora perniciosa and M. roreri are restricted to the Americas (Griffith et al., 2003). It seems certain that this represents a metastable position dependent on restricted dispersal of the pathogens; quarantine regulations aim to keep this in place as long as possible. Finally, the patchy distributions may reflect genuine ecological constraints.
Equally often, they may reflect historical accidents of host jumping by a pathogen, accidental use of susceptible genotypes of host, rare chance movements (when the pathogen is soilborne or has otherwise very restricted dispersal ability), or, unfortunately, incomplete recording.

Of course, the expansion of trade means that many new linkage routes now exist, and one prediction that can be made with reasonable assurance is that invasions by diseases new to a region will remain common or become more frequent (MacLeod et al., 2010).

Relation of pathogen distributions to environmental variables

At equilibrium, a first approximation to where a pathogen would be present would be the set of localities in which the environment was suitable for it to persist and the host or hosts were present at a high density. The actual distribution may be wider than this because of dispersal from areas in which it can persist, or smaller because of the host area and rotational or micro-successional structure. To forecast how geographic distribution will alter given climate change, it is necessary to understand how the distribution depends on environmental factors. Such understanding may come from an independent study of the relationships between infection and the environment, or from an empirical study of the environments in which it currently exists. The first has the advantage of needing no assumptions about the extent to which environment is the main constraint on the current distribution; the second has the advantage of needing no assumptions about having correctly identified all factors – on any scale – in the life of the pathogen which may limit its population. It is hard to test whether direct study of the relationship between infection and environment has identified all limiting factors; but long time series or detailed spatial data can provide the basis for tests. Conversely, deductions about limiting factors in the environment are in danger of identifying non-causal correlations as causal, and need experimental tests of the mechanisms by which the correlations may work. Any method using only the information available about sites where the organism occurs is liable to give quite misleading results if the points sampled are not random with respect to the environmental variables measured – for example, because recording history is concentrated in particular government areas. Since this is true of almost all datasets, the robustness of conclusions in face of the bias needs careful examination.

There are two major conceptual categories of method to estimate where the environment is suitable for an organism to live. First, top-down approaches take a map of positive occurrences of disease and relate this to measurements of environmental factors and host occurrence. This requires the assumption that the distribution of an organism over a region of interest is at equilibrium with respect to environmental factors and host occurrence. Secondly, bottom-up approaches use process-based population models and model the experimentally observed relationships between environments and population processes to estimate where the most favourable conditions for disease are.

For the top-down approach, there are several statistical methods for estimating the ‘best’ function which separates places where the organism might live and does, and might live but does not. These include simple linear discriminant analysis, which finds the linear combination of variables that maximizes the differences between places with and without the pathogen, relative to the variance within each category (te Beest et al., 2009); various generalized regression models, which find the best linear predictor of presence or absence, allowing for a rescaling step and proper treatment of error distributions (Yuen et al., 1996); support vector estimation, which looks for the (generalized) surface which best separates the difficult cases and allows, to an extent, for this surface being a complicated curve (Drake et al., 2006); and a number of methods based on constructing and averaging many distinct classification trees, such as ‘boosted regression trees’ (Elith et al., 2008). The problem with these methods is that the list of locations where a pathogen might live, but does not, is uncertain. It is hard to tell whether the absence is because no-one has looked, or bothered to record the fact, or whether it is because the pathogen is really absent. The ‘absence’ records frequently have to be chosen as random locations with no record and where conditions are not obviously inappropriate (for example, because the host does not occur there) (VanDerWal et al., 2008); clearly the subjective element here is dangerous. The need for a host makes this less of a problem when dealing with pathogens than with plant species; but the distributions of crop plants are relatively well known.
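The first of these methods can be illustrated with a toy Fisher linear discriminant; the sites and environmental values below are invented for illustration, with the two axes imagined as mean summer temperature (°C) and annual rainfall (mm).

```python
# Toy Fisher linear discriminant (pure Python, invented data): separating
# surveyed sites with and without a pathogen on two environmental axes.

def mean(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def scatter(rows, m):
    """Within-class scatter matrix for two variables."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - m[0], r[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

present = [[17.0, 700.0], [18.0, 650.0], [16.5, 720.0]]   # hypothetical presences
absent  = [[13.0, 500.0], [12.5, 560.0], [14.0, 520.0]]   # hypothetical absences

m1, m0 = mean(present), mean(absent)
sw = [[a + b for a, b in zip(r1, r0)]
      for r1, r0 in zip(scatter(present, m1), scatter(absent, m0))]
det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
inv = [[sw[1][1] / det, -sw[0][1] / det],
       [-sw[1][0] / det, sw[0][0] / det]]
diff = [m1[0] - m0[0], m1[1] - m0[1]]
w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
     inv[1][0] * diff[0] + inv[1][1] * diff[1]]   # discriminant direction

def score(site):
    return w[0] * site[0] + w[1] * site[1]

threshold = (score(m1) + score(m0)) / 2
print(score([17.5, 680.0]) > threshold)   # a warm, wet site classifies as suitable
```

The weakness discussed in the text is visible even here: the result depends entirely on how the ‘absent’ rows were chosen, and nothing in the fitting procedure can distinguish true absence from absence of recording.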

The maximum entropy technique, among others, attempts to avoid the need for ‘absent’ sites. It assigns a weight to each sample point in the set of points with confirmed presence of the pathogen (Phillips et al., 2006). This is done so that the set of weights attached to each sample point reproduces the mean, variance and covariances of the environmental measurements over the sample points, but otherwise is as uninformative (of maximum entropy) as possible. If sampling is unbiased and uniform, this set of weights summarizes what is known of the environments in which the pathogen occurs. If, furthermore, the distribution of the organism is at equilibrium, this information allows assessment of whether it will occur in future at a new location whose environment is known, or at the same location if its environment changes, by using the environmental observations to deduce the associated weight, interpreted as proportional to the probability of occurrence.

For some pathogens, detailed process-based quantitative population models are available, usually developed for forecasting purposes. The best of these can be used successfully to estimate pathogen distributions (Hijmans et al., 2000), but they often require measurements which are unavailable, and tend to be validated – reasonably enough – only in the areas where they were developed. The climex software is to some extent a hybrid of the two approaches. It includes a simple model of life-history and phenology from the start. This is parameterized based on the environmental variables so as to best fit areas where the pathogen (or pest, since that has been the main application of climex) is known to occur. The extra population biology included is appealing and the method has been widely used in quarantine assessments (e.g. Baker et al., 2000) and in predictions of the eventual spread of invasives such as P. ramorum (Venette & Cohen, 2006). However, formal tests of its decision-making quality by omitting parts of datasets and looking at the power of the method to predict them have not suggested that it is routinely more reliable than other methods, and the software available has some limitations (Kim & Beresford, 2009).

All these methods depend on empirical relations with the minimum number of limiting factors, and are derived under current conditions. If other things are not equal, the model will change. Two striking recent examples illustrate the problems of longer-term predictions based on limited knowledge. First, empirical models of the wheat pathogen M. graminicola, referred to earlier, confirm that, as well as cultivar, spring rain, early sowing and cold winters influence its severity, and there are plausible biological reasons why each factor should be important. In the period 1940–1970 it was of negligible severity on the upper parts of the wheat crop (Brooks, 1953; Bearchell et al., 2005), yet over 1983–2010 it steadily became more prevalent, with estimated losses scarcely decreasing despite persistent breeding effort (Bearchell et al., 2005; FERA, 2007). This increase is correlated very closely with decreases in national SO2 emissions, and a 19th-century decrease in its prevalence was correlated with rising SO2 emissions [possibly through S fertilization effects on plant defence, coupled with competition with Phaeosphaeria nodorum (Chandramohan, 2010; Fitt et al., 2011)]. Without the exceptional circumstance of a very long run of data, this correlation would have been missed. The second example is that of yellow rust, Puccinia striiformis, which, until the early 2000s, caused severe disease mainly in regions with low summer night temperatures (Chen, 2005). The origin of the disease is unknown, but P. striiformis is a plurivorous rust, and it may be speculated that a host jump in a cooler region than the centre of origin of wheat was involved. A variant form capable of withstanding higher temperatures has now spread worldwide and is currently the most serious disease of wheat (Milus et al., 2009); given that the cultivation of wheat began roughly 10 000 years ago, the events giving rise to this variant apparently have a recurrence time of the order of millennia.
This change is at once opposite to, more serious than and less predictable than the expected change caused by warming. In general terms, the prediction would have been an increased severity in regions where over-wintering on winter wheat improved, offset by an earlier end to the epidemic in the summer: the balance between these two would have been delicate, but the change would probably not have been dramatic, and would have happened on a decadal rather than an annual time-scale.

Climate modelling and its limitations

General circulation models (GCMs) are models of the energy exchange processes between space, the atmosphere, the oceans and the land. These models include specifications of geography, chemical composition of the atmosphere and ocean, and many other conditions; the effect of altering one or more of these – such as increasing the rate at which carbon dioxide is added to the atmosphere – gives an insight into how climate is likely to respond to that alteration. Although the basic physics of the small-scale processes involved is well understood, it is not possible to model the entire system in fine detail, and the models involve judgements and hypotheses about the larger-scale processes emerging from the small-scale physics and the effects of the mesh of large uniform cells that the models use to approximate the real world (Randall et al., 2007). This means that models built by different groups may make differing predictions about the outcome of the same inputs. Because there is relatively good knowledge about past climates, derived both from instruments and from proxy measurements correlated with various climatic features, the accepted models are known to make reasonable predictions for global average properties. They therefore provide the best guide as to how climate will change in response to anthropogenic forcing, such as fossil-fuel burning. In turn, where host distributions have been related to climatic variables, the models can predict changes in crop yields and phenology if economic responses to the continuing change are ignored (Fig. 2). These changes in crop growth and yield are expected to force important changes to agricultural systems (Lane & Jarvis, 2007).

Figure 2.

 Change in climatic suitability of geographic grid cells for maize between present-day (1961–1990) and future climate at 2050 under the A1B SRES emissions scenario (Nakicenovic & Swart, 2000) derived from two different general circulation climate models, ccsm30 (Community Climate System Model version 3.0) and echam5 (echam General Circulation Model 5th generation). Future climate on smaller scales is calculated based on a pattern-scaling approach of Osborn (2009) relating the local climate variable of interest (e.g. near-surface temperature) to global mean temperature. Suitability is determined year by year using the crop phenology routine of glam (Challinor & Wheeler, 2008) and rainfall requirements for 30 simulated years. A cell is deemed suitable when a growing season is simulated in each of the 30 years. Note the northward shift in distribution but the substantial differences between the two models’ projections of change in more southerly regions as a result of reduced rainfall.
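The pattern-scaling step used to derive local future climates can be sketched as follows. The grid, the sensitivity pattern and the global warming pathway below are invented numbers, not ClimGen output, but they show the core idea: one fitted pattern per grid cell lets any global-mean temperature trajectory be converted into local anomaly fields.

```python
import numpy as np

# Minimal sketch of pattern scaling (in the spirit of Osborn, 2009): a GCM's
# local change in a variable is assumed proportional to global mean warming,
# so a single fitted "pattern" per cell maps any global pathway onto a local
# field. All numbers below are illustrative inventions.

years = np.arange(1961, 2100)
global_dT = 0.02 * (years - 1990).clip(min=0)   # idealized global warming (degrees C)

# Local sensitivity: degrees C of local change per degree C of global change,
# for a toy 2 x 2 grid (land cells often warm faster than the global mean).
pattern = np.array([[1.4, 1.1],
                    [0.9, 0.7]])

# Local anomaly field for a given year = pattern * global mean anomaly that year
local_anomaly_2050 = pattern * global_dT[years == 2050]
```

In practice the pattern is estimated per variable and per season from GCM output, which is why near-surface temperature scales much more reliably than rainfall.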

Because the models have coarse geographic resolution and simplify simulations in different ways, they tend to disagree more in detail than on a global average. This means that the predicted results of particular patterns of climatic forcing may differ substantially between models at the scales at which political and economic decisions need to be made. There is also intrinsic variability in the outputs of the models for the same input forcing, but this is an inherent characteristic of the models. It is relatively easy to characterize this uncertainty by running models many times to produce an ‘ensemble’ of possible futures. The variations between models in their projections of average climate and its variability are more serious (Randall et al., 2007). Even ignoring the uncertainty arising from the inevitably incomplete understanding incorporated into the models, such differences cause severe uncertainties in any projections of the effects of changing forcing factors, such as CO2 emissions, on plant disease distributions. Whilst climate model simulations of current seasonal rainfall patterns are realistic, their simulation of daily rainfall variability is poor. Models commonly simulate too many days of light rainfall, and too few heavy events (Sun et al., 2006). In future climate projections of changes to wet-day frequency, some trends, such as the drying of Iberia, are evident (Fig. 3). However, changes are not predicted very consistently by current models in many important production regions (Fig. 4). This is an especial problem for pathogen projections, because both fungal pathogens and insect vectors are commonly linked to wetness variables.

Figure 3.

 Absolute change in the probability of a wet day for four seasons (DJF: northern winter, December, January, February; MAM: northern spring, March, April, May; JJA: northern summer, June, July, August; SON: northern autumn, September, October, November), averaged over 15 climate models, between baseline (1961–1990) and future climate (2050s) under the A1B SRES emissions scenario (Nakicenovic & Swart, 2000). Future climates were derived using the ClimGen software of Osborn (2009). Note the drying in northeast South America and the Mediterranean.

Figure 4.

 Signal to noise ratio (mean/standard deviation) of change in seasonal wet-day probability for four seasons (DJF: northern winter, December, January, February; MAM: northern spring, March, April, May; JJA: northern summer, June, July, August; SON: northern autumn, September, October, November), between baseline (1961–90) and future climate (2050s) under the A1B SRES emissions scenario (Nakicenovic & Swart, 2000). Future climates were derived for 15 climate models using the ClimGen software of Osborn (2009). Note the substantial areas (lighter colours) in which the variation between models is much larger than the predicted change.
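The signal-to-noise statistic mapped in this figure is simple to compute. The sketch below uses random numbers in place of the real 15-model ensemble of wet-day probability changes, purely to show the calculation: the ensemble-mean change per grid cell divided by the between-model standard deviation, with |S/N| well below 1 marking cells where models disagree more than they agree.

```python
import numpy as np

# Synthetic stand-in for an ensemble of projected changes in wet-day
# probability: 15 models on a small 4 x 6 grid. Values are invented.
rng = np.random.default_rng(1)
n_models, ny, nx = 15, 4, 6
delta_wetday = rng.normal(loc=0.0, scale=0.05, size=(n_models, ny, nx))
delta_wetday[:, :, :2] += 0.10   # pretend the models agree on a change in the west

signal = delta_wetday.mean(axis=0)            # ensemble-mean change per cell
noise = delta_wetday.std(axis=0, ddof=1)      # between-model spread per cell
snr = signal / noise

robust = np.abs(snr) > 1.0   # cells where the mean change exceeds the model spread
```

In the figure, the large lightly coloured areas correspond to cells where `robust` would be false: the predicted change is smaller than the disagreement between models.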

Specific predictions

If it is reasonable to think of some areas as having more disease than others (Forbes & Simon, 2007), then it makes sense to think how climate change may alter this. Sales of fungicides in Northern Europe, a damp region, are very high, mainly because almost all arable crops receive one or more sprays. In many other parts of the world, fungicide use on arable crops is much less common. However, this difference is not caused solely or even primarily by climate, but arises because the wet climate and high investment lead to high potential yields per hectare, which make fungicide use economic; where similar circumstances combine, as in vegetable or commercial banana production in the tropics, fungicide use is very intensive. Certainly some groups of pathogens are predominant in certain climatic zones, but this may involve many factors besides climatic preference, including, for example, host and vector distributions. An example of how a simplistic prediction based on a single aspect of pathogen epidemiology can fail comes from eyespot of wheat caused by Oculimacula yallundae, which over-seasons on straw debris. It was common in the UK in the 1980s to burn wheat stubbles as a hygiene measure. The practice was banned because of public opposition. This was widely expected to lead to increased levels of damage to wheat by eyespot because of increasing inoculum, just as a warmer winter might be. In fact, increased levels of antagonistic microorganisms led to faster degradation of straw and a net decrease in infection (Jalaluddin & Jenkyn, 1996). So statements like ‘warmer winters will mean more disease’ have very low confidence indeed, except in specific cases where the biology is very well understood.

What is certain is that future climate, and the distributions of climatic regions, will change, and that the average temperature of the Earth will rise. Detailed climatic changes are harder to predict (Fig. 4), and the variables most relevant to many pathogens and pathogen vectors are among the most difficult to predict. In some regions, some aspects of climate change are predicted by most models, and these can be taken as worth exploring further; in other cases the signal-to-noise ratio suggests that ignorance should be admitted, and that advice and planning should aim for flexibility and resilience, without pretending to be able to make specific predictions for particular areas. In what follows, we attempt to illustrate the methods and purposes behind published predictions of altered geographic distribution of plant disease, using selected examples which seem useful.

The methods used in such predictions vary. Probably the commonest is to look for where the models predict climatic niches to move to. The niches are either deduced from climex-type analyses as discussed above, or from laboratory studies of the environmental relations of the pathogens. Examples of these include estimates of the area in Brazil suitable for rapid multiplication of Black sigatoka (M. fijiensis) – predicted to decline (Ghini et al., 2007) because of drier conditions (Fig. 3) – and of coffee nematodes (Meloidogyne incognita), predicted to increase because warmer conditions give shorter generation times in Brazil (Ghini et al., 2008). However, the predicted climatic shifts in these studies are large, so economic and agronomic factors would be likely to alter the host ranges as well. Similarly, Desprez-Loustau et al. (2007) looked at likely changes to a wide range of forest diseases in France, using a down-scaling method to reduce the area over which predictions were uniform; most were expected to be favoured by better over-wintering in a warmer climate, or because their hosts were expected to suffer more water stress in the summer (Fig. 3).

A completely process-based approach has been attempted with Leptosphaeria maculans, which causes canker of oilseed rape (Evans et al., 2008; Butterworth et al., 2010). The pathogen is monocyclic, with distinctly defined weather relationships in each stage of the life cycle. These have been determined empirically and account for a high proportion (30–70%) of the variance in population at each stage of the life cycle. Stochastic weather generators were used to down-scale in time the predictions of the GCMs to provide the data needed to run the predictive model under altered climatic conditions. If the model correctly represents the limiting factors to abundance, this seems the most logically unassailable approach; however, although the component models are usefully predictive, they are not perfect, and other limiting factors may have been missed, especially biotic ones. Broadly speaking, using the ensembles produced by a suite of GCMs, their models projected greatly increased severity of canker in southern Britain. In the northern part of the island, disease was projected to increase slightly, but its effect on the crop to be more than offset by improved growing conditions. The host distribution was not included, because to do so would require fully effective long-term economic models. Nonetheless, the results suggest, for example, that breeders for the English market should pay attention to increasing the levels of canker resistance and look for novel resistance sources.
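Stochastic weather generation itself is straightforward to sketch. The two-state Markov chain below uses invented transition probabilities and gamma rainfall parameters, not those behind the Butterworth et al. (2010) study, but it shows the principle: fit a few parameters to observed or GCM-adjusted statistics, then draw as many synthetic daily seasons as the disease model needs.

```python
import numpy as np

# Toy daily rainfall generator: wet/dry occurrence follows a two-state Markov
# chain (rainfall is persistent, so a wet day makes the next day more likely
# to be wet), and wet-day amounts come from a skewed gamma distribution.
# All parameter values are illustrative inventions.

rng = np.random.default_rng(2)

p_wet_given_dry = 0.25   # P(rain tomorrow | dry today)
p_wet_given_wet = 0.60   # P(rain tomorrow | wet today)

def generate_season(n_days=90):
    wet = np.zeros(n_days, dtype=bool)
    rain = np.zeros(n_days)
    for t in range(1, n_days):
        p = p_wet_given_wet if wet[t - 1] else p_wet_given_dry
        wet[t] = rng.random() < p
        if wet[t]:
            rain[t] = rng.gamma(shape=0.8, scale=6.0)   # mm on wet days
    return rain

# Repeated draws give an ensemble of plausible daily seasons, which is what a
# weather-driven disease model needs and what monthly GCM output cannot supply.
seasons = [generate_season() for _ in range(100)]
```

The value of the generator for down-scaling in time is exactly this: a GCM supplies changed monthly statistics, the generator's parameters are adjusted to match them, and the disease model is then run on many synthetic daily realizations.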

This approach may be applicable to many other well-modelled diseases, although it may not be easy to apply to pathogens with less discrete life stages and polycyclic phases in the life cycle. It has the obvious attraction of a causal foundation for the links between population dynamics and weather, and can potentially capture non-linearities effectively. However, many of the published models are intended mainly to improve spray guidance or explore the factors leading to severe disease in a single cropping season, and were never intended to capture all features of the multi-year, landscape-scale population dynamics. Blitecast (Raposo et al., 1993), for example, performs excellently the tasks for which it was designed, and can be an effective aid to fungicide scheduling against the potato late blight pathogen Phytophthora infestans within a season. It is quite silent about season–season transfer, and its predictions are not useful in years when little blight occurs (Taylor et al., 2003); it has also needed re-parameterizing to allow for the recent and continuing increase in aggressiveness of P. infestans, which probably has little to do with climate change. The problem with modelling season–season transfer is that it is a process which occurs on a regional scale – so experiments are very hard and deductions must be made from natural variation – and has strong stochastic elements, but only one replicate per year. Nonetheless, progress is being made (Skelsey et al., 2009).

The philosophical nature of these projections needs to be clearly specified. The climatic projections are clearly uncertain to some extent, and divergence between models from different groups is only a partial measure of that uncertainty. The statements about possible futures are in principle untestable and therefore in danger of being reassuring but meaningless. The models can, however, be tested against the historical record. There are very few long-term plant disease datasets against which long-term disease models can be tested; the longest term datasets are probably those arising out of long-term experiments such as the Rothamsted classical experiments (Bearchell et al., 2005). It would be highly desirable to collate and quantify the data that do exist from breeding stations, crop surveys and pesticide development.


To conclude, we return to the questions with which we opened. An area in which landscape epidemiology has a valid and important contribution to forming policy is in responding to specific cases of likely cropping changes. For example, if a government decides to make major irrigation investment on the basis of expected climate change, the resulting economically favoured cropping patterns may alter known disease patterns, and epidemiologists need the tools to predict how known diseases will respond. To pick a simple example mentioned above, the spread of irrigated maize in southeast Africa led to year-round cultivation, increased and more consistent vector populations and greatly increased pressure from Maize streak virus, especially on rain-fed crops (Rose, 1978). Note that this huge change in growing system need not be driven by climate change; we should perhaps focus on the landscape consequences of change in growing systems generally, rather than singling out climate change and asking for changes specifically attributable to it.

Breeding is a special case: selection and crossing pipelines are at the very least 3–5 years long, and decades for trees. Breeders need to have available source germplasm characterized for, and variable in, the traits that will be needed. So, if climatic (or other) trends suggest increasing problems from certain diseases, that information should be useful in pre-breeding programmes; it is unlikely to be directly relevant to finished variety development for release to growers. However, at the pre-breeding level, even the statement that disease pressures are likely to change unpredictably is useful: it provides evidence to support the maintenance of publicly funded pre-breeding and research programmes, and evidence that these need to maintain above all a wide genetic base, not just traits known to be useful.

What can be said, as has been argued above, is that climate change will bring, above all, surprises. The most important and obvious policy recommendation is that research capacity and knowledge bases need to be at least maintained, so that when a surprise happens, people who can rapidly understand it and respond are available. It is hard to see how this can be done without maintaining a diverse scientific base, including, for example, specialists in the ecology or taxonomy of specific groups (House of Lords Science & Technology Committee, 2008), and in-field epidemiology, as well as molecular pathology. In turn, funders might consider how they could avoid otherwise useful short-term metrics distorting the subject base of science or driving talented individuals away from an unstable career (Macilwain, 2010). This may seem hardly a scientific conclusion, but if the main prediction is surprises, the capacity to deal with these is the correct scientific advice.

The other side of this advice is the need for pathologists to ensure that the outcomes of their research are linked to existing knowledge, economic forces and common understanding, so as to be honest contributions to policy. This is easy to say, but not easy to do. The clearest messages to industry or government tend to be the simplest, but the simplest messages may not be the best interpretation of what we should learn from research. Communicating messages derived from complex reasoning and experimentation to busy people with a different way of thinking is hard. Unfortunately, the alternative is to communicate messages which are wrongly understood, which is dangerous.


We are grateful to R. J. Hijmans for helpful comments on the original version.