Dutch elm disease revisited: past, present and future management in Great Britain


  • T. D. Harwood (corresponding author)1,2
  • I. Tomlinson2
  • C. A. Potter2
  • J. D. Knight2

    1. CSIRO Ecosystem Sciences, PO Box 1700, Canberra 2601, Australia
    2. Centre for Environmental Policy, Imperial College London, Mechanical Engineering Building, 3rd Floor, Exhibition Road, South Kensington, London SW7 2AZ; and Silwood Park Campus, Ascot, Berkshire, SL5 7PY, UK

E-mail: tom.harwood@csiro.au


The arrival in Great Britain in the 1960s of a new species of the fungus that causes Dutch elm disease caused widespread elm death and continues to be problematic following elm regeneration. Attempts at managing the disease have been largely unsuccessful. Forty years after the outbreak, however, researchers continue to be interested both in the underlying biology of such a severe and dramatic disease event and in the policy lessons that can be drawn from it. We develop a spatial model at a 1-km2 resolution. Following parameterization to replay the historical epidemic, the model is used to explore previously proposed counterfactual management strategies. A new introduction date of late 1962 is estimated. We show that, even had there been high intervention at a national level in terms of disease management early in the epidemic, there would have been little long-term effect on elm numbers. In Brighton, a local pocket of elm which survived the peak of the initial epidemic has been successfully managed. However, Brighton and similar locations are subject to repeated waves of the disease at 15- to 20-year intervals following regeneration and reinfection of the surrounding areas, during which much more intensive management is required.


Throughout the 1970s an epidemic of a newly introduced Dutch elm disease fungus swept through Britain, killing the majority of mature elm trees (Gibbs, 1978a; Jones, 1981). The epidemic was caused by the scolytid bark beetle-borne fungus Ophiostoma novo-ulmi (Gibbs & Brasier, 1973; Brasier & Kirk, 2001), a new species which had not been identified at the time of introduction. This fungus had been causing widespread elm death in North America, in conjunction with the original Dutch elm disease fungus O. ulmi, which had been introduced from Europe (Gibbs, 1978b). In North America, the native elm species were very susceptible to both pathogens, and a distinction had not been made between them, although a change of disease intensity was identified (Pomerleau, 1961).

In Europe, the original fungus had been causing elm death since at least the 1920s, when it was first identified. In Britain, the official line (Peace, 1960) was that the disease did not justify significant government intervention since a degree of recovery could be expected, and epidemic flareups were localized (provided that the disease did not change its character). Following the introduction of the new fungus on infected timber from Canada (Brasier & Gibbs, 1973), there was a resurgence of the disease. In a recent analysis of the science and policy of the epidemic, Tomlinson & Potter (2010) record how the received wisdom of Peace, widely disseminated and largely accepted within plant pathology circles, had the effect of delaying a response, as the resurgence was understandably perceived as a local flareup of O. ulmi. Hindsight suggests that the potential existed for the new fungal species to be identified outside Britain, and that management could have been better coordinated and managed nationally (Tomlinson & Potter, 2010). Here we use a dynamic spatial simulation model to first offer a reconstruction of the original epidemic, drawing on archive research, interviews with key players, previously published accounts and available datasets describing the national distribution of elm and surveyed disease progress. We go on to test the impact that proposed counterfactual policies of control might have had at a national scale.



The first step in the spatial modelling of the epidemic was to establish the location of susceptible elms in Great Britain. The taxonomy of British elms is complicated and controversial: this is in part because of hybridization that has occurred between the native wych elm (Ulmus glabra) and the various forms of smooth-leaved elm (U. minor) which have been introduced since pre-Roman times. In designing their disease surveys, Gibbs & Howell (1972) decided that the only practical option was to recognize just three components in the population: wych elm, English elm (a very common and highly recognizable clone of U. minor which has long gone under the name of U. procera) and a heterogeneous entity that they called smooth-leaved elm which included all the other forms of U. minor and the putative U. minor/U. glabra hybrids. In analysing a number of years’ data, Gibbs (1978a) showed a higher rate of disease progress in English elm than in smooth-leaved elm or wych elm. However, over the years the majority of the smooth-leaved elms in their strongholds in East Anglia and Cornwall succumbed to the disease (Brasier, 1996). Whilst the wych elm, U. glabra, is possibly more susceptible than U. procera to the fungus, it is not favoured by the elm bark beetles (Webber, 2000), so disease progress has been slower in the northwest of England and Scotland (Gibbs, 1978a), where this elm predominates. Climatic factors are also less conducive to the disease in this area.

In the model, all elm species were grouped, and differences in species composition and susceptibility summarized in the habitat suitability index described below. A map of the distribution of elm for 1965 (Fig. 1a) was generated from a number of sources. The baseline distribution of elm at a 1-km resolution was taken from the following Countryside Survey 2000 v1·0 Land Class datasets: Broad-leaved/mixed and yew woodland 1998, Hedge stock 1990, Remnant hedge stock 1990 and Built-up and gardens 1998. It was assumed that the amount of elm in a 1-km square was uniformly proportional to the coverage in the Countryside Survey, and estimates of elm numbers for an area were scaled proportionally. This will lead to an overly homogeneous landscape, but there was no basis for further fragmentation. One-kilometre Department of Transport, Local Government and Regions (DTLR) local authority data files for post-1974/75 administrative boundaries from the Countryside Information System v8 were used for both the management model and the derivation of the baseline elm distribution, aggregating authorities where appropriate.

Figure 1.

 Starting point for the British Dutch elm disease epidemic. (a) Distribution of all types of elm prior to the arrival of Ophiostoma novo-ulmi. (b) Distribution of habitat suitability index (ω) used in the simulations, based on the proportions of different elm species and latitude, ranging from 0·6 for wych elm in the north of Scotland to 0·98 in Warwickshire. Index values were allocated on a 100 km square resolution outside the main Forestry Commission survey area.


Data from the Forestry Commission (FC) Census of Woodlands 1965 (Locke, 1970) were used to scale the base map by FC conservancy. This was re-scaled by county using the 1971 and 1972 surveys (Gibbs & Howell, 1972, 1974). The unsurveyed remainder was scaled to give a total British elm population of 30 million (Brasier, 1996), assuming that the proportion of woodland to rural and non-rural elm was 58%, as recorded for the north of England in the 1973 survey.

Rural and non-rural elm (i.e. hedgerow and urban)

The base map for hedgerow elm was generated by scaling the combined Hedge and Remnant hedge classes to a nominal 20 million trees in proportion to hedge length. The urban base map was generated by scaling the built-up and gardens data to give a ratio of 0·178:0·882 based on the 1971 survey. The two layers were then merged. This map was then scaled on a county-by-county basis according to Gibbs & Howell (1972, 1974). Of the unsurveyed area, 42% was assigned as rural and non-rural based on the 1973 survey of northern England, giving a total of 4 737 600 non-woodland elms outside the survey area.

Spatial epidemiological model

The fungus is spread via the bark beetle vectors Scolytus scolytus and S. multistriatus (Webber & Brasier, 1984; Webber, 1990) in Great Britain. Beetles lay their eggs in weakened trees or felled timber, overwintering as larvae and emerging in the spring to feed to maturation on both healthy and weakened trees. Spores from infected feeding galleries are carried by the young beetles as they disperse to new feeding material. Beetle dispersal takes place during the period April to October. Saplings (<15 years old) or regenerating suckers with insufficient inner bark for breeding are unlikely to become infected (Greig, 1994).

Additionally, the structure of hedgerow elm populations in southern England was conducive to rapid linear spread of the disease via a common root system. Brasier & Gibbs (1978) estimate that up to 70% of hedgerow U. procera may have been infected via transmission through a common root system. However, in this model, fungus and vector were treated as a single pathogen, on the assumption that local numbers of infectious beetles are closely correlated with the number of infectious trees (Gadgil et al., 2000). Were simulated management to include explicit beetle control this assumption would be invalid.

Following a short latent period of around 6 weeks, live elms become potentially infectious. Mortality usually follows about a year later, and dead trees become more infectious as they become centres of beetle breeding activity. After a further year the long-dead tree becomes unsuitable for breeding and ceases to be infectious.

The spatial modelling approach of Harwood et al. (2009) was adapted to the specific pathosystem. Within a 1-km grid square, all elms present were treated as a single homogeneous population with a uniform probability of transmission. Four main compartments were simulated (SEID) for the potentially susceptible population, with a further compartment J containing an age-cohort model to represent regeneration.

For a population of fixed size N individuals, N = S + E + I + D + J, where S is the number of uninfected susceptible individuals, E is the number of exposed (infected but not infectious) individuals, I is the number of infectious individuals, D is the number of dead individuals and J is the number of juveniles. An additional compartment L was used to store the numbers of long-dead non-infectious trees.

The model is governed by the following differential equations:

dS/dt = −S(βL I + βD D)
dE/dt = S(βL I + βD D) − kE
dI/dt = kE − μI
dD/dt = μI − cD
dL/dt = cD
where βL and βD are the total number of infections that a single infectious individual would make per susceptible individual per day in ideal conditions for live and dead trees respectively, k is the rate at which individuals leave the latent class by becoming infectious (1/latent period in days), μ is the per-capita mortality rate and c is the rate of deterioration of dead trees (1/post-mortality infectious period in days). These equations were implemented discretely at a daily time step.

Mortality resulting from causes other than the fungus was assumed to be insignificant. When dead trees become unsuitable for beetle breeding, they are removed from the local population, although a record is kept for long-dead trees, since management may still be required. To maintain a constant population size, as each dead tree becomes uninfectious, a regenerating elm is added to the year 0 class of the juvenile age-cohort model, for which the total size is J. This increments annually, such that at age 15 years the bark becomes thick enough to sustain beetle breeding and the individuals from the last cohort are added to the susceptible class, making way for any new suckers.
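As a concrete illustration, the compartment updates can be sketched as a daily Euler step for a single grid square. This is a minimal sketch only: the function and variable names are ours, not from the original implementation, and the juvenile age-cohort bookkeeping is omitted for brevity.

```python
def seid_step(S, E, I, D, L, beta_L, beta_D, k=1/50, mu=1/400, c=1/365):
    """One daily Euler step of the within-cell SEID model, plus the
    long-dead store L.

    beta_L, beta_D: daily infections per infectious live/dead tree,
    per susceptible tree; k, mu, c as defined in the text.
    """
    new_inf = (beta_L * I + beta_D * D) * S  # new exposures today
    dS = -new_inf
    dE = new_inf - k * E                     # latent trees become infectious
    dI = k * E - mu * I                      # infectious trees die
    dD = mu * I - c * D                      # dead trees deteriorate
    dL = c * D                               # long-dead, non-infectious
    return S + dS, E + dE, I + dI, D + dD, L + dL
```

Because every loss from one compartment is a gain to the next, the total S + E + I + D + L is conserved at each step, matching the fixed-population assumption.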

Geographical variation in β was used to capture variation in species composition and habitat suitability for vector and pathogen. A value ω ranging from 0 (unsuitable) to 1 (ideal conditions) was allocated to each 1-km grid square. This is applied as a simple multiplier on the value of β for ideal conditions such that

β = ω βideal

where βideal is the value of β under ideal conditions (ω = 1).
The spatial distribution of ω was mapped as a combination of relative species distributions and susceptibility and latitudinally driven reduction in beetle breeding season. BSBI survey maps (Botanical Society of the British Isles, 1930–2000) were used to plot the distributions of the three elm categories. For the surveyed counties, specific proportions of each species were known. For the remaining areas, proportions were estimated from the distribution maps. An index of species susceptibility, combining fungal pathogenicity and beetle feeding preference of 1 for U. procera and 0·75 for U. minor and U. glabra, consistent with observations in Gibbs & Howell (1974), was applied. The effects of a shorter beetle breeding season at higher latitudes was approximated by a 0·03 reduction per 100-km square for the most northerly part of Britain, giving a minimum value of 0·6 for U. glabra on the north coast, and a maximum of 0·99 for Worcestershire with 99%U. procera. The distribution of the resulting index ω is shown in Figure 1b.
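A minimal sketch of how such an index might be computed for one grid square, assuming the susceptibility weights above and a linear latitudinal penalty of 0·03 per 100 km of northing; the reference point from which the penalty is measured is our assumption, as the text specifies only the endpoints (0·6 for pure U. glabra at the north coast).

```python
# Species susceptibility weights from the text (fungal pathogenicity
# combined with beetle feeding preference).
SUSCEPTIBILITY = {"procera": 1.0, "minor": 0.75, "glabra": 0.75}

def suitability(proportions, northing_km):
    """Habitat suitability index omega for one 1-km square.

    proportions: dict mapping species name to its local proportion.
    northing_km: distance north of an assumed zero-penalty reference,
                 penalised at 0.03 per 100 km (assumption).
    """
    species_term = sum(SUSCEPTIBILITY[sp] * p for sp, p in proportions.items())
    lat_penalty = 0.03 * (northing_km / 100.0)
    return max(0.0, species_term - lat_penalty)
```

For example, a pure U. glabra square 500 km north of the reference gives 0·75 − 0·15 = 0·6, the stated minimum.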

The model was implemented spatially by stochastically apportioning the total number of new infections (R) between the source cell and surrounding grid squares within a 50-km radius by integrating a point-to-point source dispersal function over the area of each source and sink cell. This calculation was performed in advance of simulations and stored as a sorted list of probability of infection for each cell referenced by relative position. The relationship between R and β, which represents the within-cell infection rate, is given by

β = ψR/Δ
where ψ is the proportion of all infections R which occur within the source cell for a given dispersal function, and Δ is the maximum number of trees per km square, set at 10000, i.e. 100 trees per hectare. The probability of infection for the source cell is βSI, and for all other grid cells is given by pcellβScellIcell, where Scell and Icell are the local values of S and I, and pcell is the relative probability of infection of the cell if it were fully occupied (Scell = Δ), derived from the dispersal function.
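The precomputed probability list can be sketched as follows. Evaluating the kernel at cell centres is a crude stand-in for the full point-to-point area integration described above, and the kernel form shown (an inverse-square law with a 150-m scale) is illustrative only.

```python
import math

def cell_probabilities(kernel, radius_km=50.0, cell_km=1.0):
    """Precompute relative infection probabilities for cells around a
    source by evaluating a point kernel at each cell centre and
    normalising. Returns a list of ((dx, dy), p) sorted by decreasing p,
    mirroring the sorted lookup table described in the text."""
    cells = []
    n = int(radius_km / cell_km)
    for dx in range(-n, n + 1):
        for dy in range(-n, n + 1):
            r = math.hypot(dx, dy) * cell_km
            if r <= radius_km:
                cells.append(((dx, dy), kernel(r)))
    total = sum(p for _, p in cells)
    return sorted(((c, p / total) for c, p in cells), key=lambda t: -t[1])

# Illustrative kernel: inverse-square decline with a 0.15-km scale.
kernel = lambda r: 1.0 / (1.0 + (r / 0.15) ** 2)
table = cell_probabilities(kernel)
```

The normalised entry for (0, 0) corresponds to ψ, the within-cell proportion, from which β = ψR/Δ follows directly.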


Accurate estimation of the dispersal kernel describing the probability of transmission as a function of distance is critical to the spatial dynamics of the system. Unfortunately, empirical measurements of this function have not been made. Since control measures should be based on empirical evidence (Carrasco et al., 2010), this is a remarkable oversight, suggesting that control policies have been based on estimated beetle dispersal distances. Progress of the invasion front during the early stages of the epidemic was in the region of 50 km per year (Jones, 1981) where no specific timber movements are documented. Preliminary analysis of the invasion front from 1972 survey data at a 10-km resolution by Swinton & Gilligan (2000) indicates a negative exponential kernel of scale around 20 km (15–40 km). However, it should be noted that this deals exclusively with the appearance of new infections, which is likely to be dominated by rare long-distance events (Neubert & Caswell, 2000) and has no fitting in the biologically significant range of 0–10 km, such that all trees within this range would have a probability of infection of a similar order of magnitude if applied directly. These estimates therefore represent the scale of the observed wave front which should be reproduced.

A typical expert estimate of maximum beetle dispersal distance (R. Strong & S. Derwent, Save the Elms Campaign, Brighton & Hove; M. Parker, East Sussex County Council, personal communication) is no further than 8 miles (12·88 km). Both beetle species have no special adaptation for the carrying of spores (Webber & Brasier, 1984) so continuous spore loss and desiccation would be expected for longer flights. Given this, it is unlikely that viable spores would be present on beetles after a break for maturation feeding, so the assumption was made that effective dispersal was made in a single flight. Whilst the dispersal of scolytid beetles has not been measured, other estimates of the median dispersal distance of insects are available: Ceratitis capitata, 114 m (Plant & Cunningham, 1991); Lucilia sericata, 103–150 m (Smith & Wall, 1998); Diabrotica virgifera, 117 and 188 m (Carrasco et al., 2010). These distances seem appropriate given the localized clustering of new infections around infected trees. A modelling study (Byers, 2000) has shown that dispersal distances of several kilometres per season are feasible for bark beetles. Most dispersal is within 500 m of the host tree, but some can reach altitudes of 1500 m and consequently travel a great deal further. A median dispersal distance of 150 m was chosen for a negative square power law function incorporating radial dispersal (e.g. Shaw et al., 2006; Carrasco et al., 2010) to summarize beetle dispersal, giving a probability of 0·002 of dispersal beyond 12·88 km when integrated across the landscape. Spread through common hedgerow root systems was not specifically modelled, since this was considered to be represented by the high central peak of the beetle kernel, which should ensure neighbouring trees are infected.

The movement of firewood (as distinct from the movement of timber) is an important component of the transmission of Dutch elm disease. The informal movement of firewood was simulated as a local process, and was combined with the estimated beetle dispersal kernel. Assuming that firewood transport declines linearly with distance, and that no firewood movement occurred beyond 50 km, a triangular distribution was chosen.

The behaviour of the two kernels was examined visually in terms of epidemic progress across the landscape within the model for reasonable behaviour. In isolation, the beetle dispersal kernel severely underestimated disease progress in line with the findings of Sarre (1978), whilst the firewood dispersal kernel in isolation achieved an appropriate rate of spatial expansion, but with an excessive rise in the numbers and local density of infected elms as a result of greater contact between individuals. Weighted combinations of the two kernels were investigated, and a 3:1 ratio of beetle:firewood was found to yield a reasonable spatiotemporal pattern for the early stages of the epidemic for a range of infectiousness. The kernel used consequently represents a best estimate combining biological and social knowledge, reproducing the fundamental characteristics of the non-trade spatial spread, rather than a fully fitted model.
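Assuming simple illustrative forms for the two components (the exact functional forms are not fully specified here), the 3:1 weighted beetle:firewood combination could be sketched as:

```python
def triangular(r, cutoff=50.0):
    """Firewood kernel: linear decline to zero at the 50-km cutoff."""
    return max(0.0, 1.0 - r / cutoff)

def beetle(r, scale=0.15):
    """Beetle kernel: inverse-square decline, 0.15-km scale (assumed form)."""
    return 1.0 / (1.0 + (r / scale) ** 2)

def combined(r, w_beetle=0.75):
    """3:1 beetle:firewood weighting as in the text. In a full model each
    kernel would be normalised over the landscape before weighting."""
    return w_beetle * beetle(r) + (1.0 - w_beetle) * triangular(r)
```

The combined kernel keeps the sharp central peak of the beetle component (capturing neighbour-to-neighbour and root-system spread) while the triangular tail carries disease across tens of kilometres.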

Scheduled timber movements and introductions

The epidemic developed from a number of introduction points (Gibbs, 1978a) and spread through the subsequent movement of timber to shipyards and timber merchants (Jones, 1981). These are described, with estimated times of introduction (Table 1). Documented introductions were converted into model introductions, assuming a delay between the actual introduction and time of detection in line with model behaviour. Whilst it is known that timber (often with intact bark) was widely moved northwards for economic reasons during the epidemic, only those movements which were recorded have been included in the model, allowing an estimate of the extent of undocumented movements. For each movement, 100 dead elms with bark intact are introduced at the specified grid reference.

Table 1.   Scheduled imports and movements of timber used in the simulations of the Dutch elm disease epidemic. The sources were used to provide approximate guidance only; specific locations and actual movement dates are unknown
Location(a) | Simulated grid reference | Simulated date | Based on | Source
Avonmouth (p) | ST 515782 | 01/12/1962(b) | Mid-1960s | Gibbs (1978b)
London (p) | TQ 395813 | 01/12/1962(b) | Mid-1960s | Gibbs (1978b)
Chatham (s) | TQ 774697 | 01/01/1964 | Mid-1960s | Gibbs (1978b)
Gloucester (t) | SO 750400 | 01/07/1964 | 1968 | Jones (1981)
Southampton (p) | SU 425113 | 01/01/1966 | 1971 | Gibbs (1978b)
Portsmouth (s) | SU 640068 | 01/07/1966 | | Gibbs (1978b)
Plymouth (s) | SX 477544 | 01/07/1967 | | Gibbs (1978b)
Liverpool (p) | SJ 351908 | 01/07/1970 | | Gibbs (1978b)
Northumberland (t) | NU 185131 | 01/07/1971 | | Jones (1981)
West Yorkshire (t) | SE 313241 | 01/01/1972 | | Jones (1981)
North Yorkshire (t) | SE 687750 | 01/07/1972 | | Jones (1981)
Glasgow (t) | NS 457722 | 01/07/1974 | | Redfern (1977)
Edinburgh (t) | NS 876791 | 01/07/1975 | | Jones (1981), Redfern (1977)

(a) (p) indicates port of entry, (s) shipyard and (t) timber yard/sawmill.
(b) Values estimated by least-squares regression from survey data.


The epidemic developed under a spatiotemporally variable management regime which, since removal of infectious individuals directly affects epidemic development, cannot reasonably be ignored. In line with our documented understanding of the role played by local authorities as the frontline managers of the outbreak (Tomlinson & Potter, 2010), management is assumed to be uniform within each local authority as it responds to the presence of infectious elms in its jurisdiction. Based on discussions with managers, dead elms are detected at a daily probability per tree of 0·01 (on average every 100 days) and infectious live elms at half this rate. Detection of disease on live trees is only possible when trees are in leaf, set between day 75 and day 300 of each year, resulting in an annual peak level of infection in mid-summer before management takes effect. Three levels of management are implemented: pre-epidemic level (a), control level (b) and sanitation level (c). For management levels b and c, the rate of removal of live trees is assumed to be 1/3 that of infectious dead trees, and 1/4 for the pre-epidemic level a, where the removal of dangerous dead trees is assumed to take priority. Felling of live trees is applied to all live tree classes (uninfected, exposed and infectious) at the same level, consistent with observed management in East Sussex, since trees adjacent to infected trees are likely to be infected. All long-dead trees are removed when action is taken, although long-dead trees still accumulate. The sanitation level (c) represents the sanitation felling phase of policy officially introduced in October 1971, discontinued in December 1972 and reintroduced for the north in May 1974 (Tomlinson & Potter, 2010). During model sanitation felling, 90% of dead and 30% of live trees are cut down. Practical removal of more than this is unlikely, because of limitations of manpower and problems locating all infectious trees, particularly in woodland. Sanitation felling is carried out for a maximum of 3 years for any authority, since after this point most trees will be dead, the main effort expended, and funds are likely to be limiting. As this is a dynamic process, the focus of sanitation felling moves with the invasion front of the epidemic during the 1970s. The sanitation level of management is not permitted before 1970, on the principle that before this point the epidemic was treated as O. ulmi, but some felling would have been carried out prior to the official endorsement in 1971.

All authorities start the simulation in a pre-epidemic state (a). When infection is detected in a grid square, removal is first carried out according to the current management regime (30% dead, 10% live infectious and 5% live non-infectious trees) and the grid square is added to an inspection list for re-inspection every 90 days with a detection probability of 1. The management level for the whole authority is then increased to the control level (b) (45% dead, 15% live infectious and 7·5% live non-infectious trees) and management remains at this level until a further infection is found, at which point (if after 1 January 1970) management moves to sanitation felling (c). When sanitation management is implemented, the start date is recorded and, after 3 years, management reverts to the control level. This approach allows a small delay between detection and maximum effort, which is consistent with the epidemic development.
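The escalation logic for a single authority can be sketched as a small state machine, using the removal fractions quoted above; the structure and names are ours, not the original implementation.

```python
LEVELS = {  # (dead, live infectious, live non-infectious) removal fractions
    "a": (0.30, 0.10, 0.05),   # pre-epidemic
    "b": (0.45, 0.15, 0.075),  # control
    "c": (0.90, 0.30, 0.30),   # sanitation felling
}

def escalate(level, year, sanitation_started=None):
    """Advance an authority's management level on a new detection.
    Sanitation (c) is only permitted from 1970, and reverts to control
    (b) after 3 years. Returns (level, sanitation start year or None)."""
    if level == "a":
        return "b", sanitation_started
    if level == "b" and year >= 1970:
        return "c", year
    if level == "c" and sanitation_started is not None \
            and year - sanitation_started >= 3:
        return "b", None
    return level, sanitation_started
```

In the full model the reversion after 3 years happens on a timer rather than on detection, but the permitted transitions are the same.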


Precise validation of managed epidemics is rarely practical. Whilst it is possible to demonstrate that the behaviour of the model is in line with observations, there are a number of factors which render precise parameterization inappropriate. First, since infection is a stochastic process acting over a spatially heterogeneous landscape, a wide range of outcomes can be expected for the same system in both real and simulated epidemics. Secondly, a degree of management was applied from an early stage, including the removal of dead and infected trees and some precautionary felling. This was locally variable, depending on the behaviour of local authorities, landowners and individuals, and is usually undocumented. Thirdly, the rate of epidemic progress cannot be reliably recorded because of problems with reporting. One cannot be sure whether a new case is the result of the recent local introduction of the disease, a symptom of undetected existing local infection, or the result of improved local surveying. As such, an infection may be locally present for several years before reporting. This is particularly true for more remote or inaccessible locations, and indeed the FC surveys abandoned the survey of woodland elms because of observational difficulties (Gibbs & Howell, 1974).

Fitting was carried out against the FC survey data from 1972 to 1976, taking data for the total elm population of rural and non-rural elms in the main survey area from Table 3 in Swinton & Gilligan (1996), independently verified from the original references, and following a similar methodology. After exploratory simulations, model parameterization was carried out by estimating most parameter values from the literature (Table 2), and performing a least-squares fit to the total elm population for rural and non-rural elm in the absence of management using 20 simulations for each parameterization, taking increments of 0·01 for Rlive over the range 0·73–0·78 and varying the initial introduction time from 1962 to 1964 at intervals of 0·1 years. Low variation between simulations, because of the high rate of infection, allowed a reasonable degree of confidence in results. The best fit was obtained with Rdead = 1·5 and Rlive = 0·75 against the total elm population with an introduction time of September 1962. However, the numbers of elm in categories DE (seriously infected and dying trees) and F (long-dead trees), fitted against the compartments D and L respectively, showed a poor fit in the absence of management. Introducing management allowed the disease curves for DE and F to be better represented, with the maximum decline in elm (1972–1975) being determined by R, but the tail of the curve (1975 onwards) and the temporal pattern of DE and F being responsive to management. The three management scenarios were obtained by manual calibration against all three curves simultaneously.
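The parameter sweep amounts to a least-squares grid search over Rlive and the introduction time. A generic sketch follows, in which the simulate function stands in for the full spatial model and all names are illustrative.

```python
import itertools

def grid_search(simulate, survey, r_values, t_values, n_reps=20):
    """Least-squares grid search over R_live and introduction time.

    simulate(r, t) returns one stochastic realisation of the modelled
    totals, aligned year-by-year with the `survey` series. Each
    parameter pair is scored by the sum of squared errors of the mean
    of n_reps replicates. Returns (sse, best_r, best_t)."""
    best = None
    for r, t in itertools.product(r_values, t_values):
        preds = [simulate(r, t) for _ in range(n_reps)]
        mean = [sum(col) / n_reps for col in zip(*preds)]
        sse = sum((m - s) ** 2 for m, s in zip(mean, survey))
        if best is None or sse < best[0]:
            best = (sse, r, t)
    return best
```

Averaging replicates before scoring reflects the low between-simulation variation reported in the text, which makes 20 replicates per parameterization sufficient.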

Table 2.   Parameters used in the simulations of the Dutch elm disease epidemic
Parameter | Value
RL maximum daily infections per infectious live elm | 0·75(a)
RD maximum daily infections per infectious dead elm | 1·5(a)
Δ maximum density of elms | 100 ha−1
Time to reach maturity | 15 years
k (1/latent period in days) | 1/50
c (1/post-mortality infectious period in days) | 1/365
μ mortality rate (1/time from infection to death) | 1/400
Daily probability of detection of infectious live elm | 0·05
Daily probability of detection of infectious dead elm | 0·01
Efficiency of single felling of live elms per km2 for management levels a, b, c | 0·05, 0·15, 0·3(b)
Efficiency of single felling of dead elms per km2 for management levels a, b, c | 0·2, 0·45, 0·9(b)
Maximum duration of management level c | 3 years

(a) Fitted by least-squares regression.
(b) Management levels: (a) pre-epidemic; (b) control; (c) sanitation felling. Parameters fitted by manual calibration.

Disease progress was compared with the maps in Jones (1981) to ensure that spatial spread occurred at an appropriate rate during the early stages of the epidemic. Spread in the main infection area was appropriate, although Cornwall remained largely uninfected until 1980, whereas the disease historically arrived here in 1970. This probably indicates a body of timber movement at this time, but in the absence of a documented source this was not added to the model.

Figure 2 shows the performance of the model plotted against the total elm population, the numbers of infectious trees (live and dead) and the numbers of long-dead trees over time for rural and non-rural elms (excluding woodland) within the FC survey area only. Data points are shown from the FC survey data for 1971–1978, with the 1971 data (not used for parameterization) adjusted to correct for inaccuracies in the first survey. The results show the mean result for 20 simulations. Since a low coefficient of variation was observed (average 0·039, maximum 0·088 for the total) further simulations were deemed unnecessary. A good fit to the overall decline can be seen, but the numbers of infectious elms show some deviation. Whilst the model curve mirrors the broad trend up to 1976, there is an apparent levelling in the historical data in 1974, which could not be reproduced by the model. Since the survey numbers almost double in the following season, this may be a result of sampling error or timing. A further deviation in the proportion of infectious elms in 1978 is less easily dismissed. By this point, the surviving model elm population is widely dispersed, so disease transmission slows down, in common with normal epidemiological behaviour. The survey indicates that at this point 72% of the remaining population was infected, compared with a model estimate of 38%. However, inclusion of the 1·4 million Cornish elms, as mentioned above, would address this shortfall.

Figure 2.

 Size of the total live elm population, numbers of severely affected and dead elms (DE) and numbers of long dead elms (F) from the Forestry Commission (FC) surveys of 1971–1978 (point series) plotted against the mean Dutch elm disease model output (black line series) with 99% confidence limits (grey line series) for 25 simulations using the parameters in Table 2. Numbers relate solely to the Rural and non-rural elm category in the FC survey area. Data points for the 1971 survey are corrected and were not used in the parameterization.

The spatial pattern of the epidemic is shown in Figure 3, with gaps in the elm population appearing around the initial introduction points by the 1980s. Whilst the model describes the rate of disease progress through the main infection area, this is largely driven by the initial introduction points. Subsequent northward spread in the mid-1970s, as described by Jones (1981), occurred at a rate of more than 50 km per year. This rate cannot be explained by spread through common root systems, beetle dispersal or local firewood distribution, and is consequently not achieved by the simulations except where scheduled movements occur. This is in common with the findings of Sarre (1978), indicating that the movement of timber was considerably greater than officially documented.

Figure 3.

 Simulated Dutch elm disease epidemic of 1963–1983, showing its spatial development in terms of infectious live trees and infectious dead trees (pale maps, key a) and the total elm distribution before (1963e) and after (1983e) this period (dark maps, key b). After 1979, gaps begin to appear in the elm distribution, following total population loss. The squares are 100-km grid squares on the UK National Grid.


Introduction phase

When fitted against all three metrics with management included, the model indicates an initial introduction date of late November 1962. We examined a counterfactual scenario in which the threat was identified in advance and a much more robust, indefinite sanitation-felling policy (efficiencies of 0·3 live, 0·9 dead for all levels) was in place prior to the arrival of the new species. The potential effect on the early development of the epidemic is shown in Figure 4, plotted against the simulated historical spread. The counterfactual epidemic continues to spread, occupying an area of nearly 100 km2 by the late 1960s. The implication is that the epidemic was essentially unstoppable once it had arrived in the country.
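A single season of the counterfactual sanitation policy can be sketched as a stochastic felling step applied to each 1-km2 cell. The cell representation and function name are our illustration, not the model's actual implementation; only the efficiency values (0·3 for live infectious and 0·9 for dead infectious trees) come from the scenario described above:

```python
import random

def sanitation_step(cell, p_live=0.3, p_dead=0.9, rng=random):
    """Apply one season of sanitation felling to a 1-km^2 cell.

    cell: dict with counts of 'live_infectious' and 'dead_infectious' trees.
    Each tree is independently detected and felled with the stated
    efficiency; felled trees stop contributing to beetle breeding and
    further spread. Returns the number of trees felled.
    """
    felled_live = sum(rng.random() < p_live for _ in range(cell['live_infectious']))
    felled_dead = sum(rng.random() < p_dead for _ in range(cell['dead_infectious']))
    cell['live_infectious'] -= felled_live
    cell['dead_infectious'] -= felled_dead
    return felled_live + felled_dead

rng = random.Random(42)  # seeded for reproducibility
cell = {'live_infectious': 100, 'dead_infectious': 50}
felled = sanitation_step(cell, rng=rng)
```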

Figure 4.

 Effect of a counterfactual scenario with sanitation-level felling and a timber movement ban on the early stages of the Dutch elm disease epidemic around Avonmouth (b) compared with the simulated historical dispersal (a). The squares are 100-km grid squares on the UK National Grid.

An epidemic out of control

Criticism has been made that there was a significant delay in moving from local to national responsibility and to something approaching a fully integrated and properly co-ordinated sanitation policy (Tomlinson & Potter, 2010). However, Figure 5, which compares the historical simulation, a simulation without any management and a high-intervention continuous-sanitation programme as described above, shows that the longer-term effects of all three scenarios are similar. The historical programme, lagging behind the epidemic, had little effect on its outcome other than reducing the intensity of the population fluctuations. Whilst the continuous-sanitation programme slows the initial decline in the population, it reaches a similar level to the other approaches by the present day.

Figure 5.

 Effects of simulated management on the British Dutch elm disease epidemic. At the historical level of management the effect on the unmanaged epidemic curve is minimal: the population declines until regeneration outside the main disease areas in the 1990s. If sanitation-level felling is applied, the loss is delayed, but the effects of regeneration are suppressed.

Character of the ongoing epidemic

Figure 6 shows the simulated epidemic and elm population for the first 50 years of the epidemic, with areas wiped out by a disease wave regenerating and becoming reinfected on a roughly 20-year cycle (Fig. 7). As time passes, successive disease fronts build up and spread out, resulting in less dramatic fluctuations and progress towards equilibrium. In terms of management of elm reserves such as Brighton, recognition of the cyclical nature of the threat and appropriate temporal allocation of resources become critical (R. Strong & S. Derwent, Save the Elms Campaign, Brighton & Hove; M. Parker, East Sussex County Council, personal communication).

Figure 6.

 Long-term epidemiology of the British Dutch elm disease epidemic. Simulated epidemic 1965–2015, showing the spatial development in terms of infectious live trees and infectious dead trees (top row, key a) and the parallel total elm distribution (bottom row, key b, superscript e). Local cycles of disease, driven by elm regeneration, are evident. The squares are 100-km grid squares on the UK National Grid.

Figure 7.

 Long-term prognosis for the British elm population. Twenty-year cycles of Dutch elm disease and regeneration move towards equilibrium at less than a third of the original population size. Given the time scale of this projection relative to the fitting period, the precise timing of the disease cycles is likely to be unreliable.
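The damped cycles of Figures 6 and 7 can be reproduced qualitatively with a much-reduced, non-spatial caricature of the model, in which susceptible hosts regrow logistically while infectious trees both transmit infection and die. All names and parameter values below are illustrative assumptions, not the paper's parameterization:

```python
def host_disease_series(years=200, K=100.0, r=0.2, beta=0.8, mu=0.3):
    """Toy host-pathogen dynamics: susceptible hosts S regrow logistically
    towards carrying capacity K, infection spreads in proportion to
    S*I contact, and infectious trees I are removed at rate mu.
    Returns the yearly susceptible-host series."""
    S, I = K - 1.0, 1.0
    series = [S]
    for _ in range(years):
        new_infections = beta * S * I / K
        S = max(0.0, S + r * S * (1.0 - (S + I) / K) - new_infections)
        I = I * (1.0 - mu) + new_infections
        series.append(S)
    return series

series = host_disease_series()
# With these parameters the outbreak/regeneration cycles damp towards an
# equilibrium well below the disease-free host population, echoing the
# qualitative behaviour of Figure 7 (the cycle period here is arbitrary).
```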


There has been much discussion over the last 40 years as to whether more could have been done to minimize the impacts of O. novo-ulmi. Tomlinson & Potter (2010) summarize this debate and highlight specific areas where there was potential for improvement: a more rapid initial response, and subsequent more intensive national-scale management. In this study we tested counterfactual management approaches in order to critically examine this debate.

With regard to a more rapid response, there was a failure firstly by the scientific community worldwide to identify the new fungal species abroad after the observed change in disease character in N. America (Pomerleau, 1961) and Eastern Europe (Brasier, 1983), and secondly in the UK, where the history of O. ulmi infection masked the new epidemic. Whilst it is not necessarily reasonable to expect more of the pathologists, particularly in the UK, who were working with the best available information and techniques, it is interesting to test whether an earlier response would have made any real difference.

We derive a new estimate of late 1962 for the date of introduction of the new fungal species. This is 2 years earlier than the previous estimate (‘1964·95’) of Swinton & Gilligan (1996). The most likely reason for the deviation is that the spatial model requires a longer lag period to allow the disease to develop a sufficient contact surface with the susceptible trees, which is intuitively correct. Traditionally assumed introduction dates (e.g. Brasier, 1996), based purely on observations, tend towards the mid- to late 1960s, which gives much less time for a potential response. The earlier date and spatial development are consistent with the pattern of reports of Dutch elm disease arriving at the Forestry Commission advisory service in the period 1962–1971 (0, 0, 0, 1, 2, 2, 11, 18, 28 and 114 reports in each respective year), if we assume that these reports all relate to the new species. An earlier introduction would have provided more time for the correct assessment of the new epidemic. In our simulations, by 1967 <16 000 elms were infectious (DE), spread over approximately 1500 km2. Whilst it may be tempting to suggest that the epidemic could have been noticed earlier, the diffuse nature of the early epidemic makes this unlikely, except in local hotspots. The simulations of early disease spread under higher management indicate that, even had a strong national control system been in place before the pathogen arrived, unavoidable problems in the detection of infection would still have allowed the epidemic to spread until it was economically unfeasible to control.
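The compatibility of the advisory-service report series with an early introduction can be illustrated with a crude log-linear extrapolation. This is our back-of-envelope check, not the fitting procedure used for the model; only the report counts themselves are taken from the text above:

```python
import math

# Forestry Commission advisory reports for 1965-1971 (the non-zero years
# of the 1962-1971 series quoted in the text).
years = [1965, 1966, 1967, 1968, 1969, 1970, 1971]
counts = [1, 2, 2, 11, 18, 28, 114]

# Ordinary least squares of log(count) against year.
n = len(years)
xbar = sum(years) / n
ybar = sum(math.log(c) for c in counts) / n
slope = (sum((x - xbar) * (math.log(c) - ybar) for x, c in zip(years, counts))
         / sum((x - xbar) ** 2 for x in years))
growth_per_year = math.exp(slope)  # roughly two-fold annual growth in reports
crossing = xbar - ybar / slope     # year at which expected reports reach one

# Expected reports only reach one around the mid-1960s, so several years of
# undetected build-up from a late-1962 introduction are at least plausible.
```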

Whilst there was the potential for a more coordinated national response, our longer-term simulations of management (Fig. 5) question the validity of any large-scale management approaches. Mature elms could have been preserved in the landscape for a longer period, perhaps lessening the intensity of the public sense of loss, but at a prohibitive cost. We tested other proposed management approaches, including stricter timber movement and a cordon sanitaire, which we do not present here, since these were also ineffective. The scale and density of the elm population in rural Britain ensured that when the infection became widely established, spreading from mature trees, ongoing incoming infection eventually overwhelmed most areas, even if local eradication was achieved.

Nevertheless, there is the potential for local control in certain specific cases where a degree of isolation is present, as in the Channel Islands, or where the overall density of elms is lower, as in the Netherlands, although long-term eradication is unlikely. In the region of Brighton & Hove, which is still host to the National Elm Collection comprising thousands of mature elms, an ongoing management campaign has been able to limit damage to date, although there has been continuous loss of mature trees. This is the result of both favourable local geography, which limits the flow of beetles into the area to several narrow and manageable pathways, and a surrounding buffer area of management. The local cycle of disease prevalence, driven by regeneration of elm (Fig. 6), means that such a management campaign has to defend itself against periodic high-level attack, during which far greater resources are required than during the lull in epidemic pressure in the intervening period. The model indicates that the period between disease peaks is around 20 years, depending on local geography, in common with the predictions of Brasier (1983). Such an interval may allow changes in the management team and an altered perception of disease severity, potentially slowing the response to future attacks. At the time of publication the disease is very active on the outskirts of Brighton, and the costs of management are rising beyond the normal budget. This is partly because the last epidemic peak was rather weak, or well managed, leaving a proportion of surviving elms which are now 30–40 years old and capable of sustaining higher beetle populations. The implication is that effective control may only increase the severity of future local outbreaks, which at some point may be expected to overwhelm a local management programme.

It is also worth noting that with improved management approaches, including chemotherapy and biocontrol (Stipes, 2000), localized control has been achieved in several cases, particularly in more isolated urban areas (in the absence of S. scolytus) where detection is more straightforward. However, a failure to eradicate the disease commits the responsible authority to a long-term programme of management. The problems of this are well illustrated by the successful campaign in New Zealand (Gadgil et al., 2000), where national funding has recently been withdrawn, although eradication seemed possible.

The epidemiological model is complex and difficult to fully parameterize because of a limited understanding of, and uncertainty in, aspects of the system. A degree of manual calibration was required to ascertain the most appropriate weighting for the different dispersal kernels and values for the management efficiencies. This represents a departure from ideal modelling, in which information on disease spread and epidemic development is complete. Two earlier models of the British Dutch elm disease epidemic exist for comparison. The first is the diffusion model of Sarre (1978). Unfortunately, the sources of parameterization and validation are absent from that paper, preventing critical assessment of the model’s validity.

The second is the model of Swinton & Gilligan (1996, 1999, 2000), which was subjected to intensive scrutiny during the development of our model. Its basic structure is somewhat different from ours: it ignores the contribution of live infectious trees to the spread of infection, and any contribution of management, but includes both species of the causal fungus. Excluding live infectious trees delays the onset of transmission, and the effects of actual management were significant and cannot reasonably be ignored. A further aspect is the treatment of hedgerow elm as a single non-spatial pool, unaffected by the presence of contiguous woodland elm. This is highlighted as a shortcoming and an area for future development in the 1996 paper, and has the effect of removing the capacity for the local oscillations which drive national oscillations. Consequently, the future projections of the model were already deviating from the observed epidemic by the time of publication. We note further that the parameterization and validation of the model are somewhat confusing, since the actual elm numbers in the validation Figure 2 do not match the values in Table 3. This error is repeated in the 2000 publication. The systematic deviation of the plotted model curve from not only the erroneous validation data presented in the figure but also the correct data presented in the table seems unlikely for a least-squares fit, and, if representative, casts doubt on the validity of the conclusions.

The various sources of uncertainty in our model in most cases relate to properties of the real system which were ignored in the previous studies. This raises a key question regarding modelling with incomplete information. In most epidemiological systems, information will be incomplete. Where scientific studies are undertaken, these will lag behind the requirement for prediction and possibly, as for Dutch elm disease, become fewer in number as the peak of the epidemic passes, with the result that some research questions (such as how far the vector travels) remain unanswered. For human behaviour, which is inextricably linked to many pathosystems, information is harder still to obtain: records of management and trade movements are rarely obtainable at the required scale. We are left, then, with a choice either to assume that all uncertain factors are irrelevant, or to attempt to simulate their behaviour to the best of our ability, which is open to criticism on grounds of rigour. In this paper we take the latter approach, and have captured the system behaviour both in terms of total numbers of trees in different disease categories and in the spatial pattern of spread. Nevertheless, the two areas of manual calibration, i.e. the contribution of different dispersal kernels and the intensity of management over time, should be taken into account when interpreting our results.
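The dispersal-kernel weighting that required manual calibration can be pictured as a mixture of a short-range beetle-flight component and a rare, heavy long-range component representing timber movement. The functional forms and parameter values below are purely illustrative assumptions, not those of the model:

```python
import math

def mixed_kernel(d_km, w_local=0.95, alpha=1.0, beta=50.0):
    """Illustrative 1-D dispersal density at distance d_km: a weighted
    mixture of a short-range exponential (beetle flight, mean 1/alpha km)
    and a long-range exponential (timber movement, mean beta km)."""
    local = alpha * math.exp(-alpha * d_km)
    distant = (1.0 / beta) * math.exp(-d_km / beta)
    return w_local * local + (1.0 - w_local) * distant

# At short range the beetle component dominates; beyond a few tens of
# kilometres only the rare timber-movement tail remains, which is how
# occasional long-distance movements outrun local dispersal.
near = mixed_kernel(1.0)
far = mixed_kernel(60.0)
```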

Examining the long-term regeneration predictions more critically, three simplifications arise. First, a constant population size is maintained, whereas in many cases management will result in the loss of root stock, so a change is to be expected (Peterken & Mountford, 1998). Secondly, whilst U. procera regenerates consistently, regeneration from the other elm species is likely to be much lower, so patterns of regeneration outside the south of England are likely to be optimistic, although Peterken & Mountford indicate a strong role for the seedbank in U. glabra. Thirdly, the capacity of elm trees to propagate infection is proportional to their size. Single large mature elms have been replaced by greater numbers of smaller elms, which are unlikely to be able to support the same numbers of beetles. If, as described above, these smaller trees survive a local epidemic peak, they will become larger, although their numbers may decrease as a result of self-thinning. This process is simplified in the model, as no quantification of these processes has been made.

It is interesting to note that Figure 7 confirms the unscaled graphical predictions of Brasier (1983, 1996), although our vertical axis is in elm numbers rather than volume, so the equilibrium point is further from the x axis. In the future we can expect a continued elm presence in Britain, comprising a mix of live and dead trees. This will sustain both a population of bark beetles and the fungal pathogen. This study indicates that, even with a more rapid and intensive management response, the decline of the elm was inevitable once O. novo-ulmi arrived in the country. This is borne out by the continued progress of the fungus in other countries. Future management should therefore focus on minimizing the danger of dead trees, rather than any attempt to control the epidemic. In some exceptional circumstances, such as the Brighton area, continued management may be effective, but repeated disease cycles will erode mature tree numbers over time, and are likely to overcome management eventually.

Given that the detection of the early spread of many tree diseases remains difficult, the best policy appears to be to adopt a precautionary approach, taking steps at national borders to ensure that diseases similar to Dutch elm disease do not enter the country in the first place (see Brasier, 2008). However, whilst increasing quarantine measures or rates of inspection will certainly help in preventing the entry of known pests and diseases, this has to be founded on a sound knowledge of all potential invasive organisms, and such knowledge can never be complete. Clearly there will be introductions of pests and diseases that are unknown, or of unknown threat, for which the development of management plans (e.g. whether or not to control) will benefit from modelling such as this as soon as sufficient data are available to build a reliable model. Indeed, the current Dutch elm disease epidemic on the Isle of Man is being investigated using a fine-scale spatial agent-based model to prioritize management effort (Mitchell et al., 2009).