Field-based experimental evolution is a research approach in which study species are allowed to evolve across several generations under well-defined field conditions. Field-based experiments in pathogen evolution became feasible with advances in molecular markers and computing technologies. Critical to the success of these experiments is the choice of parental genotypes, molecular markers, experimental sites and field plot design. The current study used field-based experimental evolution based on a mark–release–recapture strategy to analyse the dynamics of interstrain competition and host specialization in the cereal pathogens Zymoseptoria tritici, Phaeosphaeria nodorum and Rhynchosporium commune. In all three pathogen–plant interactions, compelling evidence was found indicating that increasing host heterogeneity by growing cultivar mixtures slowed down the evolution of the corresponding pathogen populations. Evidence for differential selection between parasitic and saprophytic phases of the life cycle in P. nodorum and R. commune was also found. The effect of partial resistance on the evolution of the experimental pathogen populations was mixed. Pathogen populations sampled from partially resistant hosts showed a decreased rate of evolution in Z. tritici and P. nodorum but not in R. commune. The findings indicate that field-based experimental evolution offers a powerful approach to test hypotheses associated with the evolution of plant pathogens.
Pathogen populations in agroecosystems are usually composed of many genotypes that vary in fitness and aggressiveness. Pathogen population genetic structure emerges as a result of interactions among different evolutionary processes (e.g. Savile, 1964; Aldaoud et al., 1989; Zhan & McDonald, 2004; Saleh et al., 2012). Some genotypes in a population will be locally adapted residents that were selected by the local biotic and abiotic environments over many generations. Other genotypes will be new mutants, immigrants or recombinants that carry novel alleles or allele combinations that can overcome host resistance or pesticides used for disease control. All pathogen genotypes compete continuously within and among individual host plants for nutrients and living space. Over the course of a growing season, novel genotypes can be introduced regularly as a result of mutation, recombination and immigration. Selection can drive rapid changes in the composition of a pathogen population during a growing season as it adapts to the local environment (e.g. Zhan et al., 2002; Montarry et al., 2006). A better understanding of how these processes affect the evolution of plant pathogens can be useful for implementing more sustainable disease management strategies (e.g. McDonald & Linde, 2002a,b). In plant pathology, researchers are particularly interested in knowing how pathogen populations will evolve under different agricultural practices, such as crop rotations or deployment of resistant cultivars. It is also useful to know whether there are trade-offs between different stages of the pathogen life cycle, what type of selection operates on pathogen populations in monocultures or cultivar mixtures and how host resistance may affect the balance between sexual and asexual reproduction in a pathogen population.
Current understanding of pathogen evolution is based mainly on theoretical models or retrospective analyses of evolutionary processes imprinted in populations based on molecular and phenotypic surveys (e.g. Stakman et al., 1943; Vallega, 1955; Savile, 1964; Shattock et al., 1977; Barrett, 1980; Aldaoud et al., 1989; Burdon & Jarosz, 1992; Linders, 1996; Roslin et al., 2007; Grünwald & Goss, 2011; Singh et al., 2011; Zhan & McDonald, 2013). Empirical tests of evolutionary hypotheses formulated from models or surveys remain rare. Experimental evolution provides a powerful tool to test and validate fundamental theories and assumptions underlying the evolution of species (Marchetti et al., 2010; López-Villavicencio et al., 2011; Burgess, 2013). Experimental evolution with agricultural pathogens has been conducted mainly in the laboratory because of the difficulty of establishing and tracking pathogen populations in the field. However, advances in molecular and computing technologies coupled with informative experimental designs have enabled field experiments that can test evolutionary hypotheses relevant to plant pathology. This paper describes how a mark–release–recapture experimental design was used to infer evolutionary processes affecting three cereal fungal pathogens and summarizes the main findings from these experiments.
Field-based experimental evolution was conducted with Zymoseptoria tritici (earlier name Mycosphaerella graminicola), the causal agent of septoria tritici leaf blotch on wheat, Phaeosphaeria nodorum (also called Stagonospora nodorum), the causal agent of stagonospora nodorum leaf and glume blotch on wheat, and Rhynchosporium commune (earlier name Rhynchosporium secalis), the causal agent of barley scald. All three pathogens have a global distribution, cause significant yield losses in many areas of the world and are stubble-borne necrotrophs or hemibiotrophs that produce splash-dispersed asexual spores (Eyal, 1999; Zhan et al., 2008; Oliver et al., 2012). Zymoseptoria tritici and P. nodorum are known to produce airborne sexual spores that can be disseminated over long distances (Zhan et al., 1998; Hunter et al., 1999; Sommerhalder et al., 2010). Population genetic analyses indicate that R. commune also has a sexual stage that produces air-dispersed ascospores, but the teleomorph has not yet been found (McDonald et al., 1999; Salamati et al., 2000; Zhan et al., 2008). Host resistance to all three pathogens is mediated both by major resistance genes and quantitative resistance genes exhibiting smaller and generally additive effects (Nicholson et al., 1993; Feng et al., 2004; Goodwin, 2007; Ghaffary et al., 2012; Looseley et al., 2012; Oliver et al., 2012; Francki, 2013). The expression of host resistance can be affected by the stage of crop growth and plant or canopy architecture. All three pathogens have shown a capacity to evolve quickly to overcome major resistance genes (Cowger et al., 2000; Xi et al., 2003; Zhan et al., 2012) reflecting the high evolutionary potential predicted for these pathogens (McDonald & Linde, 2002a).
All three diseases are polycyclic, with several generations of pathogen reproduction during a growing season. All three pathogens produce discrete and highly visible leaf lesions where asexual conidia are formed before being dispersed by rain splash (e.g. Gough, 1978; Brennan et al., 1985; Fitt et al., 1988; Davis & Fitt, 1992; Bannon & Cooke, 1998). Because the population genetics of all three pathogens has been well studied over many years (e.g. Zhan et al., 2003, 2005; Bennett et al., 2005; Sommerhalder et al., 2006, 2007; Adhikari et al., 2008; Zhan & McDonald, 2011; McDonald et al., 2012; Stefansson et al., 2012; Yang et al., 2013), the selectively neutral and highly polymorphic molecular markers needed to tag and track pathogen genotypes across asexual generations in a mark–release–recapture experiment were already developed and validated. All of these properties combine to make these pathogens good model systems for field-based experimental evolution studies that address fundamental questions associated with evolution of plant pathogens.
Choice of molecular markers
It is critical to choose genetic markers that can accurately differentiate and track the pathogen isolates used in the experiment. Marker choice depends on the questions to be addressed and the biology of the pathogen. If the primary aim of the experiment is to measure competition among known genotypes, multilocus markers that generate distinct DNA fingerprints (e.g. RFLP or AFLP or rep-PCR fingerprints) for each genotype are appropriate. Fluorescent proteins could also be used if the pathogens are strictly asexual. If the experiment includes other objectives such as evaluating the relative contributions of immigration and recombination to the genetic structure of the experimental populations, multilocus markers should be complemented with single-locus markers (e.g. SSRs, SNPs). Regardless of the research objectives and reproductive mode of the pathogen, all markers should be reproducible. Ideally, the chosen molecular markers should also be selectively neutral. The neutrality of the markers can be tested using various statistical approaches (e.g. Ewens, 1972; Watterson, 1978; Tajima, 1989; Fu & Li, 1993) based on the frequency distribution of alleles in the population.
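As an illustration of the allele-frequency-based neutrality tests mentioned above, the Ewens–Watterson homozygosity test (Ewens, 1972; Watterson, 1978) can be sketched in a few lines of Python. This is a simplified illustration, not the code used in the studies reviewed here: it compares the observed homozygosity at one locus with its distribution in neutral samples simulated from the Ewens sampling formula via a Chinese-restaurant process.

```python
import random

def expected_k(theta, n):
    """Expected number of distinct alleles in a neutral sample of size n."""
    return sum(theta / (theta + i) for i in range(n))

def ewens_watterson_test(counts, reps=2000, seed=1):
    """Ewens-Watterson homozygosity test of neutrality for one locus.
    `counts` are the observed allele counts.  Neutral samples are drawn
    from the Ewens sampling formula (via a Chinese-restaurant process)
    conditioned on the observed number of alleles k; the conditional
    configuration is theta-free, so theta only affects efficiency.
    Returns the observed homozygosity F = sum(p_i^2) and P(F_sim <= F_obs);
    small values indicate allele frequencies more even than expected
    under neutrality (suggestive of balancing selection)."""
    rng = random.Random(seed)
    n, k = sum(counts), len(counts)
    f_obs = sum((c / n) ** 2 for c in counts)
    # choose theta with E[K] close to k by bisection (efficiency only)
    lo, hi = 1e-6, 10.0 * n
    for _ in range(60):
        theta = (lo + hi) / 2
        lo, hi = (theta, hi) if expected_k(theta, n) < k else (lo, theta)
    below = total = 0
    while total < reps:
        tables = []                # allele counts grown one isolate at a time
        for i in range(n):
            if rng.random() < theta / (theta + i):
                tables.append(1)   # novel allele
            else:                  # copy an existing allele, weighted by count
                j = rng.choices(range(len(tables)), weights=tables)[0]
                tables[j] += 1
        if len(tables) != k:
            continue               # condition on the observed number of alleles
        total += 1
        if sum((c / n) ** 2 for c in tables) <= f_obs:
            below += 1
    return f_obs, below / reps
```

For multilocus data the test is applied locus by locus; dedicated population-genetics software implements the full battery of tests, including the sequence-based statistics of Tajima (1989) and Fu & Li (1993).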
Field designs for experimental evolution of plant pathogens
Criteria for choosing pathogen isolates to include in experimental populations
Three criteria should be considered when choosing isolates to include in the experiment, namely geographical origin, diversity and genotype. The experimental isolates should be locally adapted to the climate and ecosystem where the experiment will be conducted. Enough isolates should be used to adequately represent the biological and ecological diversity of a local population, but including too many isolates in the experimental population will reduce the expected frequency of each isolate and significantly increase the minimum sample size required to recapture each genotype (Table 1). If possible, experimental isolates should possess a unique genetic or morphological marker (e.g. a distinct DNA fingerprint profile, a private SSR allele or a unique spore colour) to increase the power of tracking them (see next section). In the authors' field experiments, 8–10 local isolates differing from each other in DNA fingerprint profiles or multilocus RFLP or SSR haplotypes were used. In addition to having distinct fingerprint profiles, isolates should ideally carry at least one allele that is rare or absent in the location where the experiment is conducted. If isolates with rare alleles are not available, the use of collections of isolates with allele frequencies that differ significantly from the local population is recommended. Because the relative proportion of pathogen strains in a mixed infection can affect within-host competition (Fellous & Koella, 2009), spores from each isolate were mixed in equal proportion prior to inoculating host seedlings at the beginning of the experiment. For the field experiments with P. nodorum and R. commune that were conducted across two growing seasons, infected plant material saved at the end of the first growing season was used to start the epidemic in the second growing season.
In these cases, the pathogen populations initiating infection in the second year were descendants of the pathogen population from the first year of the experiment and the competition occurring among pathogen strains during the first season could be continued into a second year.
Table 1. Minimum sample sizes required from each plot to recover an inoculated genotype with 95% or 99% confidence after inoculation with 4–20 inoculants applied in equal proportions
No. inoculated genotypes    Minimum sample size
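The sample sizes in Table 1 follow from a simple binomial argument: if k inoculants are released in equal proportions and isolates are sampled at random, the probability of missing a given inoculant in a sample of n isolates is (1 − 1/k)^n. The sketch below is a reconstruction of that calculation under this simplifying assumption; the original table may have been derived somewhat differently.

```python
import math

def min_sample_size(k, confidence=0.95):
    """Smallest sample size n such that a genotype present at frequency
    1/k (k inoculants in equal proportions) is recovered at least once
    with the given confidence, i.e. 1 - (1 - 1/k)**n >= confidence."""
    miss = 1.0 - 1.0 / k   # probability a single sampled isolate is not this genotype
    return math.ceil(math.log(1.0 - confidence) / math.log(miss))

# e.g. min_sample_size(4) -> 11 isolates per plot at 95% confidence
```

The same inequality with 1/k replaced by a detection threshold p gives the minimum sample needed to recover any genotype present above frequency p; for p = 0·05 at 95% confidence this yields n = 59, in line with the target sample size of 70 isolates per plot discussed below.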
Criteria for choosing experimental sites
Many pathogens have significant saprophytic abilities that allow them to persist in the field for several years after their host was harvested. Carryover of these pathogen strains at experimental sites can introduce background contamination that will complicate the main experiment. This background contamination can be minimized by placing the experiment in fields that were not planted with known pathogen hosts for several years. This requirement can be relaxed if the only aim of the experiment is to measure competition among the inoculated genotypes and the molecular markers can distinguish unequivocally among the inoculated strains. But the absence of carryover inoculum is critical if the aim of the experiment is to understand the contribution of different sources of inoculum (e.g. migrants originating from outside of the experimental site versus recombinants originating from inside the site) to the genetic structure of experimental populations. The time needed to eliminate carryover inoculum can vary considerably across pathogens. For pathogens that form long-lived survival structures such as sclerotia, experimental sites may require absence of the host for 10 or more years before conducting the experiment. The authors selected field sites that had not been planted with barley or wheat for at least 3 years. To minimize the impact of immigration when using pathogens such as mildews and rusts that produce large quantities of airborne spores that travel long distances, it would be ideal to choose experimental sites that are located at least several kilometres distant from other fields planted to the same crop. This latter recommendation may not be possible for many pathosystems.
Field experimental designs
The field layouts used for experimental evolution are similar to those used for many other field experiments in agriculture. The goal of the design is to minimize the movement of pathogen populations among experimental plots so that each experimental plot maintains an independent epidemic. Plot sizes must be large enough to allow an epidemic to progress naturally while minimizing edge effects. Experimental plots are usually isolated by surrounding them with large border areas planted to a non-host. Treatments should be replicated at least three times to allow a rigorous statistical analysis. As a result, the number of treatments that can be included in an experiment will be limited unless one or more hectares are available. Uninoculated control plots will also be needed to estimate the contribution of immigration to the population genetic dynamics in each plot (see next section). The control plots can be embedded into the main experiment using a split-plot design with host treatment as subplot and the type of inoculation as main plot. If space is limited, the control plots can be arranged as a separate block planted next to the main experiment. In the authors' experiments, the field plots were arranged in a randomized complete block design with four replications. The experimental host populations were planted in a checkerboard pattern (Fig. 1), with plots of non-host separating each host plot. Three of the four replications were inoculated with the experimental pathogen populations. The fourth replication served as the uninoculated control. In this design, epidemics in the control plots resulted from natural airborne inoculum and isolates recovered from these plots were used to calculate the probability that an isolate was an immigrant.
Recovering the pathogen population from experimental treatments
Because experimental evolution is based on measuring how a population changes over time, a minimum of two pathogen collections is required for most experiments. Additional collections can provide a more nuanced view of the evolutionary process, but also require genotyping many more isolates. The evolution of each pathogen population is evaluated by measuring how its population genetic structure changes across generations. To measure competitive fitness among pathogen strains at the field scale, it is recommended to sample from only one infected leaf per plant and make only one isolation from each infected leaf. This will minimize the tendency to over-sample new infections that result from autoinfection as a result of earlier infections on the same plant (e.g. in many cases different lesions on the same leaf will be the same genotype). It is also recommended to collect samples from new but fully expanded leaves in the uppermost region of the crop canopy to maximize the likelihood that isolates collected at the same point in time are sampled from the same leaf layer in the canopy and represent the same infection cycle. With this sampling strategy, the pathogen populations sampled at different points in time will represent different pathogen generations. The minimum number of isolates to sample in each treatment at each time point depends on the number of inoculants used to initiate the experiment. The minimum number of isolates to sample from each plot will increase with the number of pathogen genotypes included in the experiment as a result of the lower expected frequency for each genotype (Table 1).
The authors made three collections during a single growing season in experiments with Z. tritici (Zhan et al., 1998, 2002, 2007). In the other experiments the pathogen populations were sampled at four or five time points distributed across two growing seasons (Abang et al., 2006; Sommerhalder et al., 2010, 2011). The additional collections and seasons allowed the evaluation of whether selection varies across the host-growing season and whether there is a trade-off between parasitic and saprophytic phases of the pathogen life cycle. In the majority of cases, only one infected leaf was collected from each plant and one strain was isolated from each infected leaf. However, in some plots with very low levels of infection, two isolations were made from one infected leaf to increase the sample size. In these cases, isolations were made from clearly separated lesions to minimize the probability of sampling isolates originating from the same infection. The original plan was to collect at least 70 isolates from each plot at each time point. This sample size provided a 95% probability of sampling all isolates present in the plot at a frequency higher than 0·05 (Zhan et al., 2001). However, because many plots had low levels of infection and the pathogen was not isolated from many of the collected leaves, the total number of isolates included in the analyses was lower than 70 for many plots.
Differentiating inoculant and non-inoculant genotypes
Isolates recovered from the field experiments are expected to be composed of both inoculant and non-inoculant genotypes. Inoculant genotypes are the asexual progeny of the strains released into the plots at the beginning of the experiment and can be separated from non-inoculant genotypes relatively easily by using molecular genetic markers alone or in combination with a statistical analysis. Any molecular markers can be used for this purpose if they can unequivocally differentiate the inoculant and non-inoculant genotypes. If the molecular markers cannot unequivocally distinguish between the two sources of inoculum, or if the parental genotypes used in the experiment are common in the local population, Bayesian statistics or maximum likelihood approaches may be required to distinguish inoculant genotypes from non-inoculant genotypes (Brown, 2000; Zhan et al., 2000).
If the pathogen uses both sexual and asexual reproduction, the non-inoculant genotypes can originate from four sources representing different evolutionary processes: (i) immigration of non-inoculant airborne spores from outside of the experimental plots; (ii) immigration of non-inoculants from within the experimental plots (background contamination) as a result of carryover of the pathogen from earlier seasons or infected plant materials; (iii) recombination among the inoculant genotypes, immigrants or between inoculant genotypes and immigrants within the experimental plots; and (iv) mutations occurring within the inoculant asexual lineages. The contribution of mutation to the generation of new genotypes is expected to be trivial within the time scale of these experiments, although one mutation event was identified in the experiment with Z. tritici that allowed the calculation of a mutation rate for the associated RFLP markers (Zhan & McDonald, 2004). Background contamination can be minimized or prevented by choosing an appropriate experimental site as described earlier and by using pathogen-free seeds. Thus the main sources of non-inoculant genotypes are expected to be recombination and immigration from outside the experimental plots.
The percentage of non-inoculant genotypes originating from sexual recombination or immigration can be estimated using a maximum likelihood approach (Brown, 2000; Zhan et al., 2000), allowing inference of rates of immigration and recombination for the pathogen (Zhan et al., 1998, 2007; Zhan & McDonald, 2004; Sommerhalder et al., 2011). The origin of each non-inoculant genotype can also be assigned by calculating its posterior probability of being a recombinant or immigrant with a Bayesian approach by using the estimated recombination or immigration rate as a prior. In this case, each non-inoculant genotype can be assigned to the category ‘immigrant’, ‘recombinant’ or ‘uncertain’ based on a threshold value. In the authors’ study, 90% probability was used as the threshold value, with any genotype with a 90% or higher posterior probability of being a recombinant categorized as a recombinant. The same metric was used to identify immigrants. Genotypes with posterior probabilities between 10% and 90% were categorized as uncertain, meaning they could have originated from either recombination or immigration.
Both Bayesian and maximum likelihood approaches require known allele frequencies in the source populations of immigrants and recombinants. In the authors' experiments, allele frequencies measured in earlier collections of inoculated plots were used to estimate prior probabilities for recombinants and allele frequencies measured in naturally infected control plots were used to estimate prior probabilities of immigrants. When there is no spatial or temporal population genetic structure in a pathogen, allele frequencies measured in populations collected from nearby locations or in recent years can be used to calculate the prior probabilities for immigrants (Sommerhalder et al., 2011; Zhan & McDonald, 2013).
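The Bayesian assignment described above can be sketched as follows. This is a simplified illustration with hypothetical marker names and allele frequencies, not the exact procedure of Brown (2000) or Zhan et al. (2000); in particular it assumes loci are independent (linkage equilibrium), so the multilocus likelihood under each origin is the product of single-locus allele frequencies.

```python
def assign_origin(genotype, recomb_freqs, immig_freqs,
                  p_recomb=0.5, threshold=0.90):
    """Posterior probability that a non-inoculant multilocus genotype is a
    recombinant rather than an immigrant, with its category under the 90%
    threshold used in the authors' study.  `genotype` maps locus -> allele;
    `recomb_freqs` and `immig_freqs` map locus -> {allele: frequency},
    estimated e.g. from earlier collections in inoculated plots and from
    uninoculated control plots respectively.  `p_recomb` is the prior
    probability of the recombinant origin."""
    like_r = like_i = 1.0
    for locus, allele in genotype.items():
        like_r *= recomb_freqs[locus].get(allele, 1e-6)  # floor for unseen alleles
        like_i *= immig_freqs[locus].get(allele, 1e-6)
    post_r = like_r * p_recomb / (like_r * p_recomb + like_i * (1 - p_recomb))
    if post_r >= threshold:
        return post_r, "recombinant"
    if post_r <= 1 - threshold:
        return post_r, "immigrant"
    return post_r, "uncertain"

# hypothetical two-locus example (marker names and frequencies invented)
rf = {"SSR1": {"a": 0.6, "b": 0.4}, "SSR2": {"x": 0.7, "y": 0.3}}
imf = {"SSR1": {"a": 0.1, "b": 0.9}, "SSR2": {"x": 0.1, "y": 0.9}}
post, category = assign_origin({"SSR1": "a", "SSR2": "x"}, rf, imf)
```

Genotypes whose posterior falls between 10% and 90% land in the 'uncertain' category, exactly as described above.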
Main results from field-based experimental evolution of three cereal pathogens
Rapid directional selection for host specialization in host monocultures
Pathogens are thought to evolve more rapidly in agro-ecosystems than in natural ecosystems (Stukenbrock et al., 2007; Brunner et al., 2009) and monoculture is thought to be the main reason for this difference. In agro-ecosystems, it is common to grow a single host genotype over many thousands of hectares, especially in countries with industrial agriculture in the Americas, Europe and Australia. These host monocultures impose strong directional selection on the corresponding pathogen populations, leading to the rapid emergence of mutants or recombinants that can overcome resistant cultivars.
The hypothesis of more rapid pathogen evolution under monoculture was supported in all three of the field experiments. The frequencies of many inoculant strains changed significantly during a single growing season in treatments planted to a single host, with the same strains favoured in all three replicates of the same host (Zhan et al., 2002). This result is consistent with rapid directional selection favouring pathogen strains that were specifically adapted to the different host cultivars planted in the host monoculture treatments. This finding lies in stark contrast to what was observed in treatments composed of host mixtures.
Diversifying selection in host mixtures mitigates directional selection for host specialization
Epidemiological and evolutionary theories suggest that increasing host heterogeneity by mixing cultivars with different phenotypes and/or genotypes can reduce the occurrence and development of plant disease in the short term and retard the evolution of pathogen populations over the long term (Lannou & Mundt, 1996; Marshall et al., 2009). Increasing host heterogeneity results in complex changes in the demography and micro-ecology of the associated pathogen populations (Burdon, 1987) that negatively affect the ability of the pathogen to become locally adapted to a single host and evolve towards host specialization. The application of host heterogeneity to control plant diseases has been hampered by a lack of willingness to adopt simple solutions such as mixtures of cultivars that share similar agronomic characters such as maturity, height and grain quality (Wolfe, 1985; Stuke & Fehrmann, 1988; Mundt, 2002). With the maturation of marker-assisted breeding technologies (Collard & Mackill, 2008; Miedaner & Korzun, 2012), it has become feasible to create isogenic multiline cultivars that differ in their composition of resistance genes but are practically identical for other agronomic characters (e.g. Takeuchi et al., 2006; Brunner et al., 2012). Such multilines offer a promising approach to control plant disease.
The results from the authors' three field experiments conducted on three continents provided compelling evidence that increasing host heterogeneity by using cultivar mixtures prevented directional selection for host specialization. Pathogen populations on host treatments composed of a mixture of only two cultivars remained stable over time for all three pathogens. Selection coefficients (a measure of relative pathogen fitness) were not statistically different among inoculants competing on cultivar mixtures and the average selection coefficients were significantly smaller compared to the pure stands. In contrast, selection coefficients among inoculants always differed in pure line monocultures.
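As an illustration of how the selection coefficients referred to above can be estimated from recaptured genotype frequencies, the sketch below assumes a haploid, discrete-generation model in which the frequency ratio of two competing strains changes by a constant factor w = 1 + s per generation; the cited studies describe their own estimators, so this is a back-of-envelope version only.

```python
import math

def selection_coefficient(p0, pt, q0, qt, generations):
    """Selection coefficient s of a focal strain relative to a reference
    strain under a haploid discrete-generation model.  The log of the
    frequency ratio changes linearly with time:
        ln(pt/qt) - ln(p0/q0) = generations * ln(1 + s).
    p0, pt: focal-strain frequencies at the first and last collection;
    q0, qt: reference-strain frequencies; s = 0 means equal fitness."""
    w = math.exp((math.log(pt / qt) - math.log(p0 / q0)) / generations)
    return w - 1.0
```

For example, a strain rising from 0·125 to 0·25 over three pathogen generations while the reference strain declines from 0·125 to 0·10 has s of roughly 0·36; two strains whose frequency ratio is unchanged have s = 0, the pattern observed in the cultivar-mixture treatments.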
Mixed effects of partial resistance on the evolution of plant pathogens
Partial resistance does not prevent the development of plant disease but can limit disease severity and epidemic development. It may also prove useful to slow the development of virulence against major gene resistance (e.g. Parlevliet & Zadoks, 1977; Kiyosawa, 1982; Pietravalle et al., 2006; Palloix et al., 2009; Brun et al., 2010). It is hypothesized that the majority of partial resistance is non-specific so that all pathogen genotypes will be able to establish, survive and reproduce on partially resistant hosts (Van der Plank, 1968; Simons, 1972; Alexander et al., 1993; Lê Van et al., 2013). As a result, pathogen selection on plants with partial resistance is expected to be weaker than on plants with major gene resistance. This hypothesis has support from some theoretical studies showing that pathogen virulence will evolve slowly on partially resistant hosts (e.g. Bahri et al., 2009).
This hypothesis was tested in the experiments described in this paper and mixed results were obtained. For the wheat pathogens Z. tritici and P. nodorum, fungal populations on hosts with partial resistance maintained the highest genotype diversity, displayed the smallest change in genotype frequency over the course of the experimental evolution and had smaller selection coefficients distributed over a narrower range (Zhan et al., 2002; Sommerhalder et al., 2011), consistent with the hypothesis that partial resistance can slow the rate of evolution in pathogen populations (Van der Plank, 1968). But in R. commune, the opposite pattern of pathogen evolution was observed on partially resistant hosts. Evolution was fastest on the partially resistant landrace Arabi Aswad, with inoculant strains having significantly higher selection coefficients on this landrace compared to all other hosts included in the experiment (Abang et al., 2006).
Partial resistance also selected for higher recombination, aggressiveness and basal resistance to fungicides in Z. tritici. On average, the pathogen population sampled from Madsen, a cultivar with partial resistance, produced more disease (Yang et al., 2013), demonstrated higher tolerance to a DMI (demethylation inhibitor) fungicide (Zhan et al., 2006) and contained more non-inoculant genotypes derived from sexual recombination among the inoculant strains (Zhan et al., 2007) compared to the pathogen population sampled from the susceptible cultivar Stephens grown in the same experiment. The discovery of a significant positive correlation between pathogen aggressiveness and fungicide resistance is worrisome and merits further research in this and other pathogens to determine the generality of the finding. Although preliminary, this finding suggests that one unforeseen consequence of widespread deployment of partial resistance might be to select for pathogen populations with higher basal levels of antimicrobial resistance and enhanced aggressiveness.
Differential selection between parasitic and saprophytic phases in the pathogen life cycle
Understanding how selection operates during different stages in the pathogen life cycle can be very important for managing plant diseases because of trade-offs between pathogenic and saprophytic fitness. The hypothesis that fitness costs are associated with virulence, here defined as quantitative variation in pathogenicity on susceptible hosts (Lannou, 2012), is an underlying assumption of many theories related to pathogen evolution (e.g. Van der Plank, 1968; Frank, 1992; Gandon & Michalakis, 2000). One aspect of this hypothesis is the possibility of differential selection between parasitic and saprophytic phases of the pathogen life cycle. Pathogen genotypes with a high capacity to exploit the host during the parasitic phase may have a selective disadvantage during the saprophytic phase of the life cycle when living hosts are not available (Kiyosawa, 1982). Surprisingly few studies have been conducted to test this hypothesis, although an experiment with Phytophthora infestans isolates that were resistant or sensitive to metalaxyl illustrated that parasitic fitness could trade-off with saprophytic fitness on stored tubers (Kadish & Cohen, 1992).
This hypothesis was evaluated in experiments with P. nodorum and R. commune. For these two pathogens, experimental evolution was conducted over two consecutive years with a saprophytic phase of several months separating two different cycles of parasitic development. Evidence of differential selection between parasitic and saprophytic stages of the life cycle in both pathogens was found. In P. nodorum, isolate SN99CH2.04 increased in frequency during the parasitic phase but decreased in frequency during the saprophytic phase. The pattern was consistent on all host treatments except the cultivar mixture (Sommerhalder et al., 2011). In R. commune, differential selection was also found on some genotypes (Abang et al., 2006). For example, isolate Sy21 decreased in frequency on Rihane and WI2291 during the parasitic phase but increased in frequency during the saprophytic phase, while isolate Sy15 increased in frequency during the parasitic phase on Arabi Aswad, Rihane and WI2291, but decreased in frequency on all three hosts during the saprophytic phase.
Differential selection between parasitic and saprophytic phases of the pathogen life cycle suggests that mutants able to infect resistant cultivars may emerge at a slower pace than expected because an increase in frequency during the parasitic phase can be offset by a decrease in frequency during the saprophytic phase. These sorts of trade-offs may explain in part the rapid reduction in diversity for virulence alleles seen in a recent field experiment conducted with Leptosphaeria maculans (Daverdin et al., 2012). Multicropping, whereby the same crop is grown year-round in the same region, is a common practice in modern agriculture. This agricultural practice may accelerate the evolution of plant pathogens because pathogen genotypes with a high parasitic fitness can steadily increase in frequency as a result of the year-round availability of the living host without the need to compete or experience trade-offs during the saprophytic part of its life cycle.
Empirical tests of pathogen evolution in the field using approaches of experimental evolution may provide more realistic results regarding how natural selection operates on pathogens and how competition occurs among pathogen genotypes in agro-ecosystems. In this review, 20 years of experience in using this approach to study the evolution of three fungal pathogens has been summarized. The authors hope this mini-review will draw the reader's attention to this powerful approach and catalyse further experiments using experimental evolution and mark–release–recapture approaches to understand how pathogens evolve in agro-ecosystems.
The project was supported by Chinese National Science Foundation grant no. 31071655 and ETH Zurich.
Conflicts of interest
The authors have no conflicts of interest to declare.