The use of chronosequences in studies of ecological succession and soil development

Authors


Correspondence author. E-mail: walker@unlv.nevada.edu

Summary

1. Chronosequences and associated space-for-time substitutions are an important and often necessary tool for studying temporal dynamics of plant communities and soil development across multiple time-scales. However, they are often used inappropriately, leading to false conclusions about ecological patterns and processes, which has prompted recent strong criticism of the approach. Here, we evaluate when chronosequences may or may not be appropriate for studying community and ecosystem development.

2. Chronosequences are appropriate to study plant succession at decadal to millennial time-scales when there is evidence that sites of different ages are following the same trajectory. They can also be reliably used to study aspects of soil development that occur between temporally linked sites over time-scales of centuries to millennia, sometimes independently of their application to shorter-term plant and soil biological communities.

3. Some characteristics of changing plant and soil biological communities (e.g. species richness, plant cover, vegetation structure, soil organic matter accumulation) are more likely to be related in a predictable and temporally linear manner than are other characteristics (e.g. species composition and abundance) and are therefore more reliably studied using a chronosequence approach.

4. Chronosequences are most appropriate for studying communities that are following convergent successional trajectories and have low biodiversity, rapid species turnover and low frequency and severity of disturbance. Chronosequences are least suitable for studying successional trajectories that are divergent, species-rich, highly disturbed or arrested in time because then there are often major difficulties in determining temporal linkages between stages.

5. Synthesis. We conclude that, when successional trajectories exceed the life span of investigators and the experimental and observational studies that they perform, temporal change can be successfully explored through the judicious use of chronosequences.

Introduction

Ecologists who study temporal change are challenged by how to study successional and soil developmental processes that span centuries to millennia. Direct, repeated observations (e.g. through historical photography or long-term plot studies; del Moral 2007) began formally with studies of dunes in Denmark (Warming 1895) and Michigan (USA; Cowles 1899), and such observations provide the best source of evidence about temporal changes in plant and soil biological communities over years to decades. However, few studies extend beyond several decades in duration (but see Chapin et al. 1994; Webb 1996; Whittaker, Partomihardjo & Jones 1999; Walker et al. 2001; Silvertown et al. 2002; Meiners, Cadenasso & Pickett 2007), so indirect measures are needed to determine the age of successional stages and reconstruct historical vegetation or soil conditions over longer time-scales. The most frequently used indirect approach for measuring temporal dynamics involves the use of chronosequences and associated space-for-time substitution which represents a type of ‘natural experiment’ (Pickett 1989; Fukami & Wardle 2005). However, chronosequences may not always be correctly used, and this can lead to misinterpretations about temporal dynamics (Pickett 1989; Fastie 1995; Johnson & Miyanishi 2008), particularly when mechanisms are inferred from the descriptive patterns that chronosequences supply. In Glacier Bay, Alaska, USA, for example, erroneous assumptions about temporal linkages between sites dominated by Alnus and Picea trees led to incorrect generalizations that Alnus facilitated Picea growth through fixation of atmospheric nitrogen (Fastie 1995). Extrapolations about the role of facilitation to other studies were then made without appropriate caveats (Walker 1995; Walker & del Moral 2003). In this example and others (Johnson & Miyanishi 2008), a chronosequence approach has led to more confusion than clarity about temporal change.

Johnson & Miyanishi (2008) highlighted the misuse of the chronosequence concept for studying vegetation succession and suggested that the problems they identified also applied to the use of chronosequences for studying ‘temporal changes in biodiversity, productivity, nutrient cycling, etc.’. We maintain that there are many instances in which the chronosequence approach may usefully clarify ecological processes in a manner that cannot be achieved in any other way, and that the wholesale dismissal of the chronosequence approach is likely to impede, rather than advance, understanding of long-term ecological processes. In this light, we first address the concept of a chronosequence, how to measure it and its links to succession, soil development and temporal scales. Then, we evaluate under which circumstances chronosequence use is most or least appropriate. Finally, we discuss how the use of chronosequences can be improved. Our overarching goal in addressing these issues is to clarify when chronosequences are essential tools to understanding temporal change and when they should not be used in order to avoid misinterpretations of that change.

Concepts and approaches

Ambiguity about the meanings of commonly used terms could be contributing to confusion about the applicability of chronosequences. We therefore provide some definitions of relevant concepts (Table 1) and explore several critical assumptions and concerns involving these concepts. A fundamental assumption about chronosequences is that the communities and ecosystems of the younger sites are currently developing in a temporal pattern that resembles how the older sites developed (termed a space-for-time substitution). When the date of the initial disturbance and subsequent history of the site are known, chronosequences provide the opportunity to study ecological processes over time periods that are longer than direct observation would permit. Concerns about using chronosequences include whether there is any predictable link between young and old sites, whether the chronology is readily interpretable, whether and at what rate characteristics actually change over time and whether landscape context and chance may confound chronosequence assumptions (del Moral 2007). Various lines of independent evidence are essential to justify the space-for-time assumption before applying the chronosequence approach to studies of temporal dynamics.

Table 1. Definitions of conceptual terms as used throughout this article

Chronosequence: A set of sites formed from the same parent material or substrate that differ in the time since they were formed.
Ecological succession: The change in species composition and/or structure over time following either a severe disturbance that removes most organic matter (primary succession) or a less severe disturbance where some biological legacy remains (secondary succession). Biomass, nutrient availability and vegetation stature can either increase (progressive succession) or decrease (retrogressive succession; Walker et al. 2001; Wardle, Walker & Bardgett 2004).
Soil development: All temporal change in both the abiotic and biotic aspects of soil, including nutrient and water availability, structure, texture and biota (Bardgett 2005). Often tightly coupled to above-ground changes and subject to the same array of potential trajectories as ecological succession (Wardle 2002).
Disturbance: The relatively abrupt loss of biomass or structure from an ecosystem that creates opportunities for establishment through alteration of resources or the physical environment (Sousa 1984; White & Pickett 1985; Walker 1999). Disturbances both initiate and modify succession, and organisms have complex responses to disturbance that affect biodiversity.
Temporal scale: Influences the interpretation of the previous concepts. Succession is typically studied on a temporal scale that represents 1–10 times the life span of the dominant species (Walker & del Moral 2003).

Chronosequences imply the presence of ecological succession. Therefore, important concerns about ecological succession impact chronosequence studies. These include the balance of deterministic and stochastic elements, whether a sere (successional sequence) is directional (i.e. encompassing a linear replacement of plant communities to a defined endpoint), whether trajectories converge or diverge and whether many trajectories form a network from a single or several starting points (Lepš & Rejmánek 1991; Samuels & Drake 1997). Trajectories (Fig. 1) can also be parallel, deflected, cyclical, arrested (stalled) or simply involve direct replacement of a former dominant species (Walker & del Moral 2003). As with chronosequences, it is important to discern what characteristics change at what rates over time.

Figure 1.

The most common trajectories of successional development, representing several stages of development from left to right (modified from Walker & del Moral 2003). The left column includes those trajectories most appropriate for chronosequence interpretation; the right column includes those least appropriate for chronosequence interpretation. Within each column, appropriateness decreases from top to bottom, so, for example, trajectories showing initial convergence or forming networks need more intensive sampling than those at the top of the same column. Dotted lines indicate how connections between stages can be erroneously presumed when direct evidence is unavailable because of incomplete field sampling. For example, the upper line under Continuous divergence may actually represent a trajectory that had a separate origin. The vertical downward arrow represents a disturbance that diverts a successional trajectory.

Temporal scales used to study chronosequences depend on the factor or process of interest and on the life span of the dominant organisms or the organisms of interest. For example, microbial succession in soil can be studied over periods of just several days or weeks, whereas heterotrophic succession (e.g. of decomposers on rotting logs or carcasses) encompasses weeks to years (Bardgett et al. 2005). Secondary plant succession (e.g. colonization of abandoned agricultural fields) is normally examined at decadal scales (Meiners, Cadenasso & Pickett 2007). Primary plant succession (e.g. on lava or dune surfaces) can involve centuries to thousands of years (Walker et al. 1981), while soil development, or pedogenesis, can encompass periods of up to millions of years (Crews et al. 1995). Therefore, details about chronosequences that matter at shorter time intervals (e.g. availability of labile nutrients, species interactions) become less relevant as temporal scales expand and the focus shifts to processes such as the formation of humus, accumulation of soil carbon and phosphorus loss or occlusion. Many processes such as primary productivity, decomposition and nutrient immobilization can be addressed at several temporal scales.

The presence of a more or less linear temporal relationship among sites can be established in a variety of ways. Techniques include investigating oral and historical records (Engstrom 1995), repeat photography (Webb 1996), tree ring analysis (Fastie 1995), lichenometry (Calkin & Ellis 1980), use of micro- and macro-fossils (Bhiry & Filion 1996; Clarkson, Schipper & Lehmann 2004), palynology (Birks 1980), determining carbon isotope ratios (Kume et al. 2003), thermoluminescence dating (Tejan-Kella et al. 1990), potassium–argon dating (Funkhouser, Barnes & Naughton 2007), analysing podzol development (Thompson 1981; Walker et al. 1981) or studying soil depth (Poli Marchese & Grillo 2000). Temporal change on inferred chronosequences can be measured with simple, one-time surveys of vegetation and soils that facilitate conclusions about succession, or with repeated measurements when these are logistically feasible. Little effort has been made to design the ideal chronosequence study (e.g. number and temporal spacing of sites, number of replicates within each age group) or its ideal duration (i.e. the longest time span over which a chronosequence can still maintain valid linkages among stages) (Thompson & Moore 1984; Myster & Malahy 2008). Ultimately, chronosequence measurements should be determined by the parameters of interest, their rate of change and the degree of spatial heterogeneity within chronosequence stages. In the following sections, we review the conditions under which the use of chronosequences is most and least appropriate.
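Before turning to those conditions, the sampling trade-off described above can be made concrete. The following minimal sketch (ours, not drawn from any of the studies cited) estimates how many replicate plots per stage would be needed to separate two adjacent stages of a hypothetical chronosequence, given an assumed rate of change of the parameter of interest and an assumed level of within-stage spatial heterogeneity. The function name, the normal-approximation power calculation and all numerical values are illustrative assumptions, not recommendations.

```python
import math
from statistics import NormalDist

def replicates_per_stage(rate_per_year, stage_gap_years, within_stage_sd,
                         alpha=0.05, power=0.80):
    """Approximate number of replicate plots per stage needed to detect the
    change expected between two stages separated by `stage_gap_years`,
    using a normal-approximation two-sample comparison of stage means."""
    expected_difference = rate_per_year * stage_gap_years
    effect_size = expected_difference / within_stage_sd
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return math.ceil(n)

# Hypothetical example: soil organic matter accumulating at c. 0.05% per year,
# stages 50 years apart, within-stage standard deviation of 2%.
print(replicates_per_stage(rate_per_year=0.05, stage_gap_years=50,
                           within_stage_sd=2.0))
```

The point of the sketch is simply that slower rates of change, shorter gaps between stages or greater within-stage heterogeneity all inflate the sampling effort required before age-related differences can be distinguished from spatial noise.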

Where chronosequences are most appropriate

Chronosequences are multi-faceted as they can be used to track many ecosystem patterns and processes in developing communities through time, some of which may develop independently of each other. For example, Myster & Malahy (2008) found a convergence of species richness and total plant cover on pastures in Puerto Rico over time, but no such directionality for species composition and abundance. These results reflect a more rapid and deterministic recovery of structural components of vegetation than parameters based on species composition, a result applicable to both primary (Walker & del Moral 2003) and secondary (Guariguata & Ostertag 2001; Chazdon et al. 2007) succession. We discuss several general situations where chronosequences are appropriate and provide examples from the ecological literature.

Short-term seres

When there are demonstrable linkages between stages (i.e. the successional trajectory is predictable), chronosequences provide a useful approach to studies of short-term temporal change with time frames of c. 1–100 years, or shorter when organisms with very short life spans, such as soil microbes, are involved. Such links come from direct observation of relatively short-term change in permanent vegetation plots or soil microbial and faunal communities, physical remains of previous stages (e.g. tree stumps) or indirect but robust corresponding observations such as overlapping patterns in tree rings. Many studies use chronosequences of this kind and thereby extend our knowledge of successional dynamics. For example, short-term chronosequences such as those on sand dunes have long been used to demonstrate that soil and plant communities change in tandem during succession (Brown 1958). Such studies have subsequently led to further inferences, for example that the build-up of species-specific pathogens in the root zone can accelerate species replacement and hence vegetation change (Van der Putten, Van Dijk & Peters 1993). More recent studies of abandoned fields differing in known time since abandonment have led to significant insights about how below-ground communities and plant–soil feedbacks serve as drivers of species replacement and vegetation successional development (e.g. De Deyn et al. 2003; Kardol, Bezemer & van der Putten 2006). Our understanding of the successional development of soil biotic communities has also advanced through studies of recently exposed glacial substrates. These substrates initially support simple communities of heterotrophic microbes (Bardgett & Walker 2004; Bardgett et al. 2007) and photosynthetic and nitrogen-fixing bacteria (Schmidt et al. 2008) that over time develop into more complex, fungal-based food webs (Ohtonen et al. 1999; Bardgett et al. 2007). Also, advances have emerged from applying the chronosequence approach to substrates of differing decay stage and therefore age, such as fungal communities on decaying leaves (Frankland 1998) and microarthropod communities on decaying tree stumps (Setälä & Marshall 1994).

Convergence of seres and vegetative structure

There are multiple potential trajectories for succession, including single or multiple pathways that can be parallel, convergent or divergent, but that also can be cyclic or form complex networks (Fig. 1; Walker & del Moral 2003). Single and cyclic pathways are the most easily adapted to space-for-time inferences because they typically have few dominant species and few stages (Watt 1947). Chronosequences can also be useful for the study of convergent seres, particularly when convergence occurs early in succession. Whenever multiple pathways are present along a chronosequence, sufficient within-stage sampling is required to detect the pathways and avoid erroneous inferences about non-existent pathways (Fig. 1). In the case of an incomplete chronosequence (missing stages), additional historical, retrospective, observational or experimental data are critical before robust inferences can be made about the missing links.

Convergence occurs as a reduction in heterogeneity of species composition among sites over time or as a growing resemblance among different trajectories (Christensen & Peet 1984; del Moral 2007). Convergence is most likely where there is some biological legacy from the initial disturbance, where a deterministic sequence of species or life-forms is driven by biological processes or where environmental conditions are predictable (Nilsson & Wilson 1991; Inouye & Tilman 1995; Wilson, Allen & Lee 1995). Decreasing beta-diversity is one way to measure convergence along a sere (del Moral & Jones 2002). Convergence to a dominant growth form such as tussock grasses, dense shrublands or trees can potentially reduce the typically stochastic processes of dispersal and establishment, and distinctly alter ecosystem properties and environmental conditions (Walker & del Moral 2003). For example, where succession proceeds from relatively open vegetation to closed forest canopy, one might expect a convergence (reduction of variation) among stands in plant traits such as specific leaf area and root : shoot ratios, soil microbiological traits such as the relative biomass of bacteria and fungi or environmental changes such as amount of understorey light and soil and air temperatures. Despite some evidence of predictable directional shifts in these variables (Tilman 1988; Wood & Morris 1990; Chapin et al. 1994; Llambí et al. 2003; Bardgett & Walker 2004), more needs to be done to investigate convergence among stands along the lines of the study by Fukami et al. (2005) on the convergence of plant functional traits during secondary succession. Trait convergence is also complicated by spatial heterogeneity in most plant (Armesto, Pickett & McDonnell 1991) and soil (Boerner, DeMars & Leicht 1996) communities, and a lack of uniformity in the effects of similar structures such as trees on the environment (Binkley & Giardina 1998). Such spatial variability compounds the difficulty of interpreting temporal variability within sites and suggests the need for caution in interpreting chronosequences, even those based on convergence of vegetative structure. Although convergence (especially of life- and growth forms) is a common phenomenon in some long-term seres (Rydin & Borgegård 1991; Poli Marchese & Grillo 2000), other seres show increased heterogeneity of life-forms, as we discuss later.
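As an illustration of the beta-diversity measure mentioned above, the following minimal sketch computes mean pairwise Sørensen dissimilarity among replicate plots within each stage of a hypothetical chronosequence; a decline in this value with stage age is consistent with compositional convergence. The species lists, stage ages and the choice of Sørensen dissimilarity are illustrative assumptions rather than the methods of the cited studies.

```python
from itertools import combinations

def sorensen_dissimilarity(plot_a, plot_b):
    """1 minus the Sorensen similarity of two plots' species sets."""
    a, b = set(plot_a), set(plot_b)
    if not a and not b:
        return 0.0
    return 1.0 - 2.0 * len(a & b) / (len(a) + len(b))

def mean_within_stage_dissimilarity(plots):
    """Mean pairwise dissimilarity among the replicate plots of one stage."""
    pairs = list(combinations(plots, 2))
    return sum(sorensen_dissimilarity(p, q) for p, q in pairs) / len(pairs)

# Hypothetical presence lists for three replicate plots in each of three stages
# (stage ages in years); genus names are placeholders only.
stages = {
    10: [{"Epilobium", "Salix", "Dryas"}, {"Epilobium", "Poa"}, {"Salix", "Dryas", "Alnus"}],
    60: [{"Alnus", "Salix", "Picea"}, {"Alnus", "Picea"}, {"Alnus", "Salix", "Picea", "Poa"}],
    200: [{"Picea", "Alnus"}, {"Picea", "Alnus"}, {"Picea", "Alnus", "Vaccinium"}],
}

for age in sorted(stages):
    print(age, round(mean_within_stage_dissimilarity(stages[age]), 2))
# A decline in the printed values with stage age indicates decreasing
# among-plot heterogeneity, i.e. compositional convergence.
```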

Glacier Bay, Alaska, USA, is a well-studied sere that illustrates many of the points we make about convergence, including the need for multiple sources of information, intense sampling and an understanding of the role of the dominant plant species. The retreating glaciers at Glacier Bay have exposed moraines that have been dated by geological records, direct observation and repeat photography (Vancouver 1798; Field 1947; Goldthwait 1966). A chronosequence of early successional plants has been validated through permanent plots initiated by Cooper (1923) and several additional observational and experimental studies (summarized in Chapin et al. 1994). However, links to the next stage are less well established. Detailed sampling determined that the early successional plants (notably the nitrogen-fixing Alnus) do not always precede stands of Picea (Fastie 1995), the dominant tree species on moraines > 200 years old, as previously assumed. Picea forests contribute greatly to soil acidification (Alban 1982) and promote a retrogressive stage (Wardle, Walker & Bardgett 2004) when Picea stands degenerate after about 10 000 years (Ugolini & Mann 1979; Noble, Lawrence & Streveler 1984) and understorey diversity increases. Assuming that early successional stages converge to Picea forests (a likely, although not directly observed linkage), concerns about Alnus–Picea sequences during the first 200 years become less critical when addressing longer time-scales where Picea and its accompanying ecosystem-level effects predominate. Therefore, for measures of soil biota, soil fertility and plant physiognomy encompassing several millennia, the exact replacement sequence for plant species at hundred-year scales is of marginal importance, if the processes of interest have converged. More important at the longer time-scales are the frequency, intensity and spatial distributions of fire, insect outbreaks, logging and other disturbances that destroy forests and initiate secondary succession, because of the presence of residual forest soil following such disturbances (Walker & del Moral 2003).

Long-term and retrogressive seres

Over time frames encompassing thousands to millions of years, dramatic shifts can occur in soil properties and accompanying plant, animal and microbial communities. These changes negate the previously held assumption that plant communities reach a stable and self-replacing climax (Whittaker 1953). At such temporal scales, chronosequences are usually the only tool available to interpret changes in ecosystem processes, such as net primary productivity and rates of decomposition, nutrient mineralization and nutrient immobilization (Vitousek 2004; Wardle, Walker & Bardgett 2004; Wardle et al. 2008). Long-term chronosequences have also long been recognized as valuable for understanding processes of soil formation and development over time (Walker & Syers 1976), often independently of their application to plant and soil biological communities. However, the linkages between long-term soil development, shorter-term changes in microbial and faunal communities and vegetation development are relatively predictable (Wardle 2002; Bardgett et al. 2005), making the chronosequence approach a reasonable template for interpretation of change at many temporal scales.

Predictable shifts during stages of progressive succession include increasing plant and soil microbial biomass, nutrient availability and rates of nutrient cycling (Chapin, Matson & Mooney 2003). While such increases can continue for thousands of years (Vitousek 2004; Walker & Reddell 2007), ecosystem retrogression can eventually occur in the absence of catastrophic disturbances that reset the system; retrogression involves a marked decrease in nutrient availability, often accompanied by reductions in plant biomass (Walker et al. 2001; Wardle, Walker & Bardgett 2004). This pattern has been widely documented in many climates and vegetation types, with the possible exceptions of arid systems (Lajtha & Schlesinger 1988; but see Selmants & Hart 2008) and tropical lowland rain forests (Ashton 1985; Kitayama 2005). Retrogression is typically driven by conversion of soil nutrients and especially phosphorus to less available forms, and in some cases leaching of nutrients below the rooting zone or the development of impermeable soil pans leading to water-logging (Walker & Syers 1976; Vitousek 2004; Coomes et al. 2005; Peltzer et al. in press). Long-term (millennial scale) changes in soil processes track, and are impacted by, mid-term (100–1000 years) to short-term (1–100 years) decreases in litter quality, decomposition rates, nutrient use efficiency and nutrient accumulation in plants (Cordell et al. 2001; Richardson et al. 2005; Wardle et al. 2009) and very short-term (days to months) alterations in soil microbial and animal populations (Wardle, Walker & Bardgett 2004; Bardgett et al. 2005; Doblas-Miranda et al. 2008). Therefore, retrogression does not simply involve shifts in community- and ecosystem-level properties at longer time-scales, but an integration of short- to long-term processes that are distinct from progressive succession. To the extent that plant and soil characteristics of interest are predictable across stages of retrogression, chronosequences remain a valid tool. We use two examples to illustrate the benefits of applying the chronosequence approach to long-term seres. Each has a relatively short progressive phase followed by a much longer retrogressive phase.

The current Hawaiian Islands represent an excellent, > 7-Myr chronosequence, because the ecological consequences of their sequential development over an oceanic hotspot are well-documented (Vitousek 2004), making them ideal for between-island comparisons (Mueller-Dombois & Fosberg 1997). Both progressive (Mueller-Dombois 1987) and retrogressive (Wardle, Walker & Bardgett 2004) succession have been documented in this system, with progressive succession dominant on the younger Island of Hawaii (0–0.43 Myr) and retrogressive succession more widespread on older islands such as Maui (0.8–1.3 Myr) and Kauai (5.1 Myr). Within-island chronosequences have also been characterized on the reliably dated and mapped series of volcanic surfaces on the Island of Hawaii that range from 1 year to > 4000 years old (Drake & Mueller-Dombois 1993; Aplet & Vitousek 1994; Kitayama, Mueller-Dombois & Vitousek 1995). For example, one can compare succession and soil development on several surfaces (a‘a lava, pahoehoe lava) across a wide range of elevations (900 to > 3000 m a.s.l.), spatial scales (local to > 500 km2) and climates. Under such conditions, chronosequence studies can be designed to meet various assumptions, variation can be quantified through replication within categories, and multivariate approaches can correct for incomplete designs where chronosequence assumptions are not met. Domination of the Hawaiian forests by a single tree species (Metrosideros polymorpha), albeit with several ecotypes, further facilitates comparisons of plant morphology or soil development between stages during both the progressive and retrogressive phases of succession. However, given the numerous climatic changes and variable allochthonous inputs, such as phosphorus from Asian dust, that have occurred during the long history of the current Hawaiian Islands (Chadwick et al. 1999), interpretations of age-specific processes necessarily become less precise (Vitousek 2004).

The Cooloola Dune sequence in eastern Australia is another example of a long-term sere with a retrogressive phase where a chronosequence approach has been useful. The progressive phase lasted for c. 250 000 years as soil carbon, nutrients and forest biomass accumulated, and was followed by c. 350 000 years of retrogression as podzolic soils developed, leaching occurred to 20-m depth and forest productivity declined (Thompson 1981; Walker et al. 1981, 2000; Wardle, Walker & Bardgett 2004). The oldest soils support a diverse understorey plant community (Wardle et al. 2008) adapted to extreme infertility. As in Hawaii, other disruptions inevitably occur over such long time spans (fire is a recurring phenomenon in Australia), but the chronosequence as a soil-age gradient remains robust. In both Hawaii and Australia, the research questions best answered in studies of the older stages concern the effects of soil age on community and ecosystem processes, rather than the generation of hypotheses about mechanisms of succession and species replacement, which are better addressed in younger seres.

Chronosequences as null models and predictive tools

The assumption that a chronosequence exists across various sites with certain patterns of changing traits provides a useful null model that can be verified or refuted with further observation and experimentation. With this approach, useful lessons can be learned even when erroneous assumptions about the chronosequence have been made. For example, studies on sand dunes (Olson 1958; Boerner 1985) that initially assumed a linear successional trajectory have led to the discovery of nonlinear successional networks. Similarly, assumptions about the progressive nature of successional properties have been modified by the recognition of the retrogressive phase of long-term chronosequences (Walker & Reddell 2007). The development of predictive models of successional trajectories is difficult because of our poor understanding about how complex processes such as dispersal, colonization and competition unfold in space and time (Pickett, Cadenasso & Meiners 2009). Lessons learned from chronosequence studies about convergence, deterministic consequences of certain dominant life-forms, or patterns of retrogression can become inputs into a chronosequence function of a general model of succession (Fig. 2; Walker & del Moral 2003). Clarifying such variables can help interpret successional pathways through either interpolation between data on stages of known ages or extrapolation beyond known data to future pathways (completing the dotted lines – particularly for the trajectories shown on the left side of Fig. 1). For example, if short-term chronosequence observations (years to decades) on landslides suggest initial convergence within a progressive succession caused by biotic colonization processes (Guariguata 1990; Walker et al. 1996) and soil development (Zarin & Johnson 1995), extrapolation to longer time periods will be robust and interpolations can be made about intermediate stages. If restoration of a landslide is desired, manipulations improve when trajectories are understood (Walker, Velázquez & Shiels 2009). Chronosequences become essential predictive tools when considering trajectories of community and ecosystem processes at long-term (millennial) scales (Walker et al. 2000). Any such model must account, of course, for the often nonlinear nature of vegetation change by allowing for both deterministic and stochastic aspects of temporal dynamics (Cramer 2007).
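To illustrate the interpolation and extrapolation described above, the following minimal sketch fits a simple log-linear trend of a hypothetical soil property against known stage ages and then predicts values for an unsampled intermediate age and for an age beyond the sampled record. The data, the log-linear form and the least-squares fit are illustrative assumptions only; a predictive model of the kind discussed here would need to accommodate nonlinear, stochastic and divergent dynamics (Cramer 2007) rather than a single trend line.

```python
import math

# Hypothetical stage ages (years) and mean soil organic carbon (%) per stage.
ages = [5, 25, 80, 250, 1000]
soil_c = [0.4, 1.1, 2.0, 2.9, 3.8]

# Ordinary least-squares fit of soil_c = a + b * ln(age).
x = [math.log(t) for t in ages]
n = len(x)
mean_x = sum(x) / n
mean_y = sum(soil_c) / n
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, soil_c))
sxx = sum((xi - mean_x) ** 2 for xi in x)
b = sxy / sxx
a = mean_y - b * mean_x

def predict_soil_c(age_years):
    """Predicted soil organic carbon (%) at a given surface age."""
    return a + b * math.log(age_years)

print(round(predict_soil_c(150), 2))   # interpolation between sampled stages
print(round(predict_soil_c(5000), 2))  # extrapolation beyond the oldest stage; treat with caution
```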

Figure 2.

Elements for a chronosequence function of a general successional model. Following a disturbance, changes in vegetation or soil occur and the chronosequence approach can be used to determine the duration, characteristics and trajectory pattern. In addition, critical abiotic and biotic influences can be determined and characterized. The more extensive the description and quantification of an ecosystem, the better successional patterns can be interpreted via interpolation within and extrapolation beyond the available data sets. Ultimately, chronosequence tools can aid management by improving the prediction of successional change and its manipulation through such efforts as conservation or restoration.

Where chronosequences are least appropriate

The assumption of many ecologists in the early 20th century was that the present repeats the past (McIntosh 1985), so chronosequences were widely used to interpret temporal patterns. The subsequent shift to a more reductionistic perspective and decades of experimental manipulations indicate that succession is often not deterministic (Glenn-Lewin, Peet & Veblen 1992). Therefore, we assert that chronosequences should not be used to infer short- and mid-term successional dynamics when the sites are not temporally related in a linear fashion or when they have different vegetation histories due to climatic, landscape or stochastic factors (Walker & del Moral 2003). One such example involves toposequences, where differences in plant communities are influenced by their position on the landscape (Matthews & Whittaker 1987; Avis & Lubke 1996) more than by temporal dynamics. Other conditions where chronosequences are least appropriate include divergent trajectories, highly disturbed seres, or seres with slow rates of turnover, which we now discuss in turn.

Divergent and nonlinear seres

When successional trajectories are divergent or are configured as nonlinear networks, the chronosequence approach is less useful and may require more intensive sampling than for parallel or convergent seres (Fig. 1). Divergence is common due to priority effects (i.e. sequence of species arrivals), sensitivity to minor differences in initial conditions, stochastic effects and initial site heterogeneity (Matthews & Whittaker 1987). Early successional communities may more closely resemble each other, particularly in severely disturbed habitats with few successful colonists, while later successional stages with higher biodiversity diverge. High regional biodiversity can contribute to high within-stand diversity and therefore also increase the likelihood of divergence. Local convergence may occur where certain successful species dominate, but divergence may exist at larger spatial scales (Lepš & Rejmánek 1991). Networks occur when there are multiple stages that arise from a single stage, resulting in alternative pathways to a convergent endpoint or continued divergence. Causes of networks include different initial site conditions or stochastic dispersal that results in different pioneer communities, leading to independent and sometimes parallel trajectories (Walker & del Moral 2003). Each additional layer of complexity challenges the assumptions of connectivity that underpin interpolation across missing data sets, and makes the chronosequence approach more difficult to apply.

Disturbed seres

When severe or frequent disturbances reset a sere, succession may be deflected, thus reducing the value of the chronosequence approach. Deflections occur in a variety of ways due to the differential responses of organisms over time and the nature of the repeat disturbances such as moving dunes (Castillo, Popma & Moreno-Casasola 1991) or repeated floods (Baker & Walford 1995). Alternatively, subsequent disturbances may not reset a general successional trend, even if they are relatively severe, as found for early succession on Puerto Rican landslides (Walker & Shiels 2008) or in fire-driven ecosystems in northern Sweden (Wardle et al. 1997). Deflected seres are typically caused by allogenic disturbances (e.g. flood, invasive species) but can be reinforced through autogenic processes (e.g. grazing), especially those leading to retrogression (Walker & del Moral 2009). When the timing or severity of the disturbance is unknown (e.g. historic dune migrations), there is no historic baseline and chronosequences are hard to apply. Conversely, with well-documented disturbances (e.g. abandonment of agricultural fields; Cramer & Hobbs 2007) or artificial events (e.g. experimental blowdowns of trees; Cooper-Ellis et al. 1999), details about the timing and severity of the disturbance can help to clarify subsequent trajectories and improve the application of the chronosequence approach.

Slow or arrested seres

Rates of plant succession vary from rapid change to almost no change at all. Chronosequences are most applicable to the former; however, changes in ecosystem processes can occur even when all stages are dominated by the same plant species, such as in monospecific New Zealand mountain beech (Nothofagus solandri) stands (Clinton, Allen & Davis 2002). Succession can be arrested due to abiotic constraints (e.g. nutrient limitation), limitations in the size of the regional species pool, or resource-use domination by a species leading to competitive inhibition of other species, at least until the dominant species senesces (Walker & del Moral 2003). Both native and invasive species can dominate a successional stage, typically by monopolizing light, water and nutrients through the formation of mats or thickets composed of algae (Benedetti-Cecchi & Cinelli 1996), mosses (Cutler, Belyea & Dugmore 2008), cryptogamic crusts (Kaltenecker, Wicklow-Howard & Pellant 1999), grasses (Nakamura, Yajima & Kikuchi 1997), vines (Melick & Ashton 1991), ferns (Russell, Raich & Vitousek 1998), shrubs (Young, Shao & Porter 1995) or trees (Dickson & Crocker 1953). Early recognition of arrested states will allow examination of the cause and potentially lead to the discovery of other controlling variables, but the chronosequence approach is not easily applied to such situations.

How to improve the use of chronosequences

Categorical generalizations about when it is appropriate or inappropriate to use chronosequences to study succession or soil development are not possible, because successional trajectories can be complex and difficult to predict (Walker & del Moral 2003). However, the relative merits of applying chronosequences can be compared for different trajectories and community characteristics (Table 2). We suggest that chronosequences work better with predictable than unpredictable seres, but unpredictable, convergent seres can often be analysed with some reliability. These relationships apply to either progressive or retrogressive seres. In contrast, we propose that the effects of local community biodiversity and disturbance on the usefulness of chronosequences for studying plant succession under conditions of high disturbance differ between progressive and retrogressive seres. High plant species diversity in the regional species pool can make chronosequence approaches difficult because of the greater potential for colonization of different sites at the same stage by different species, leading to alternative trajectories (Matthews 1992; Prach 1994), especially in highly disturbed habitats (MacDougall, Wilson & Bakker 2008). Soil development is less affected than plant succession by plant species diversity, but it is still less likely to be amenable to study by chronosequence approaches when diversity is high and when there is high disturbance. In retrogressive seres, chronosequences can also sometimes be difficult to apply (especially for plant succession), even at low levels of biodiversity, due to the larger potential for divergence (Table 2). Again, soil development is somewhat buffered from these problems.

Table 2. Relative appropriateness of the chronosequence approach varies depending on (a) predictability and trajectory type (divergent or convergent) and (b) plant biodiversity and disturbance impact (frequency plus severity). ++ very useful, + useful, − not useful, −− potentially misleading

(a) Predictability and trajectory type
Predictable seres: divergent – plant succession +, soil development +; convergent – plant succession ++, soil development ++
Unpredictable seres: convergent – plant succession +, soil development +

(b) Plant biodiversity and disturbance impact
High biodiversity: low disturbance – soil development +; high disturbance – plant succession −−
Low biodiversity: low disturbance – plant succession +, soil development +; high disturbance – plant succession +* or −†, soil development +

*Progressive succession. †Retrogressive succession.

The process of soil development encompasses a time span of centuries to millennia and is arguably more deterministic than succession once the roles of climate and parent material are clarified (Jenny 1980). Chronosequences are thus interpreted as a series of soils of different ages that formed on the same parent material, and can be highly appropriate for addressing questions about soil development and its effects on community and ecosystem properties. Such uses of chronosequences have significantly advanced our understanding of how soil nutrients change during pedogenesis (Walker & Syers 1976; Vitousek 2004) and the impact of changes in soil nutrient availability on plants (Wardle et al. 2008), decomposers (Williamson, Wardle & Yeates 2005; Doblas-Miranda et al. 2008), foliar herbivores (Gruner 2007) and above-ground and below-ground ecosystem processes (Crews et al. 1995; Wardle, Walker & Bardgett 2004; Whitehead et al. 2005). Chronosequences can be used in this way to clarify the effects of soil age on current plant community attributes (Wardle et al. 2008), even when they do not generate insights about patterns of plant succession.

When observations of long-term chronosequences are combined with experiments (Fukami & Wardle 2005), further insights are gained about the mechanistic basis of community and ecosystem change. For example, controlled fertilizer experiments performed along both the progressive and retrogressive stages of the Hawaiian chronosequence (Vitousek 2004) have greatly enhanced our understanding of how the relative importance of nitrogen and phosphorus limitation influences ecosystem development both above and below ground. Similarly, plant removal experiments along a 6000-year, fire-driven chronosequence in northern Sweden (Wardle & Zackrisson 2005; Gundale, Wardle & Nilsson in press) have clarified the shifting linkages between plant community composition and soil biogeochemical processes during succession. Although few manipulative experiments have been performed across successional gradients, such studies offer tremendous potential for better understanding the role of both biotic and abiotic factors in driving community and ecosystem change during succession.

The appropriate use of chronosequences relies on addressing at least five site-specific issues that otherwise act as limitations (Table 3). First, chronosequences are most useful when there is a clear pattern of temporal change between multiple stages. Secondly, there should be several lines of evidence about the history of the site. For short-term chronosequences, such evidence might include oral histories, tree rings or historical maps, whereas for long-term chronosequences, these data might include good geographical or stratigraphic dating or biological indicators such as micro- and macro-fossils. If such independent verification of a time series is present, the chronosequence approach is more likely to be justified. Thirdly, locating replicate plots randomly within each stage of the chronosequence (not just the progressive phase), when possible, can help characterize the (non-age-related) variation among chronosequence stages. Fourthly, if there are previously established plots that can be relocated, then earlier measurements can be repeated in order to observe directly any subsequent changes and verify chronosequence assumptions (e.g. Clarkson 1997). Finally, site-specific measurements must be made to record relevant changes, but if these measurements do not employ standardized methodology, extrapolation to other studies can be difficult.

Table 3. Guidelines for developing appropriate chronosequence studies in terms of the elements needed and potential limitations of studies when these elements are missing

Element needed – Potential limitation if element is missing
Two or more stages (duration of time series depends on parameter of interest) – Chronosequence study of ecosystem parameters only
Multiple stand characteristics that vary across stages – Reduced ability to interpret temporal dynamics
At least one independent verification of time series – Faulty assumptions about temporal linkages
Replication within stages (number and spacing depends on spatial heterogeneity) – Misrepresentation of stage characteristics
Sampling intervals within life span of every dominant species of interest or duration of process of interest – Missed stages, inaccurate trajectories
Multiple visits to study plots – Missing verification of short-term dynamics
Sere-appropriate measurements – Failure to record relevant changes
Standardized measurements – Lack of ability to extrapolate to other studies

Conclusions

We agree with recent concerns that the misuse of chronosequences can mislead ecologists, particularly in relation to understanding vegetation successional pathways (Johnson & Miyanishi 2008). However, we do not believe that these problems are sufficiently universal or severe to invalidate the use of chronosequences for addressing questions about certain types of ecosystem change. The judicious use of chronosequence studies has greatly advanced our understanding of short-term vegetation change where temporal connections have been confirmed (Foster & Tilman 2000; Meiners, Cadenasso & Pickett 2007). Chronosequences have also significantly aided our understanding of long-term landscape processes (Milner et al. 2007) and soil development (Walker & Syers 1976) and associated functional changes in above-ground and below-ground processes and organisms (Vitousek 2004; Wardle, Walker & Bardgett 2004; Bardgett et al. 2005), even when the plant successional trajectories do not exactly parallel changes in soil development. Chronosequences are most suited for measuring plant and soil community characteristics that change in a relatively predictable, linear fashion over time, such as plant cover and species richness, pedogenesis, soil organic matter accumulation and rates of ecosystem processes, and least suited for those traits that are more diffuse and less predictable, such as species composition and abundance. Further, chronosequences work better for studying successional trajectories that are convergent, have low diversity and are infrequently disturbed than for trajectories that are divergent, more diverse and frequently disturbed. Finally, chronosequences can often provide information critical to manipulating successional processes for restoration, even where there is an imperfect understanding of the ecosystem (Hobbs, Walker & Walker 2007). We maintain that when appropriately applied, the chronosequence approach offers invaluable insights into temporal dynamics of vegetation change and soil development that cannot be achieved in any other way and that wholesale dismissal of this approach is more likely to impede than to stimulate understanding of these topics.

Acknowledgements

We thank Roger del Moral, Duane Peltzer, Elizabeth Powell and particularly Chris Fastie for stimulating discussions about this topic and Peter Bellingham, Roger del Moral, Joe Walker and several anonymous referees for insightful comments on the manuscript. Figure 1 is modified from contributions by Chris Fastie. L.R.W. was supported by the Department of Botany at the University of Hawaii at Manoa through the Wilder Chair Program and by the Luquillo Experimental Forest Long-Term Ecological Research Program (NSF grant DEB-0620910).
