Decomposition trajectories of diverse litter types: a model selection analysis

Authors

  • William K. Cornwell

    Corresponding author
    1. Department of Systems Ecology, Institute of Ecological Science, Amsterdam, The Netherlands
    2. School of Biological, Earth and Environmental Sciences, Evolution & Ecology Research Centre, University of New South Wales, Sydney, NSW, Australia
  • James T. Weedon

    1. Department of Biology, Research Group of Plant and Vegetation Ecology, University of Antwerp, Wilrijk, Belgium

Summary

  1. Decomposition of plant litter is a key process in the global carbon cycle, and its realistic representation is essential for accurate prediction of ecosystem carbon and nutrient dynamics. Traditionally, the exponential decay model, which assumes a constant relative decomposition rate through time, has been widely used for both the analysis of empirical data and large-scale modelling. While this model may be adequate in some contexts, there are also many empirical observations for which it does not fit the data well.
  2. Using a number of previously published leaf and wood decomposition data sets, comprising a diverse selection of litters, we compared the performance of five simple decay models varying in complexity and functional form. To accomplish this, we used a combination of model selection criteria and out-of-sample extrapolation analyses.
  3. Model selection criteria favoured an optimal balance between parsimony and flexibility: a rarely used model built on the Weibull distribution proved to have sufficient flexibility to fit the full range of decomposition trajectories, including litters that show an initial lag phase. Using the Weibull fits, we further showed that litter N content affects the shape of the decomposition trajectory, a novel biological result. Out-of-sample extrapolation showed biased predictions for all models, and estimates of steady-state stocks were often highly sensitive to model choice.
  4. As the field moves beyond the single-pool exponential framework, it is useful to confront alternative models with data from a wide array of litter types. This analysis demonstrates that model flexibility is important for representing the wide array of possible decomposition dynamics encountered in nature, with lag-phase dynamics more widespread than previously thought. Out-of-sample extrapolation, and derived parameters that rely on it, remain problematic with all tested models. Progress in understanding the drivers and consequences of litter decomposition will come from combining empirical approaches with more focused process-based modelling that allows for the assessment of alternative model assumptions.

Introduction

On biological time-scales, a key control over the size of the atmospheric carbon pool is the balance of fluxes between atmosphere and biosphere. This flux is dominated by the uptake of CO2 through primary production by autotrophic organisms and subsequent emission through auto- and heterotrophic respiration. Because of the quantities involved, even small changes in the magnitude of either flux can have large effects on the size of the atmospheric pool and consequently global climate (IPCC 2007; Chapin, Matson & Mooney 2009). It is therefore crucial that models of ecosystem dynamics are able to accurately represent these processes. Decomposition of dead organic matter, particularly plant litter, by heterotrophic micro-organisms is a large component of the flux of carbon from biosphere to atmosphere (Schimel 1995). The rate of decomposition influences important ecosystem processes such as net ecosystem exchange, carbon storage and nutrient cycling. Understanding how the rate and extent of decomposition is controlled by abiotic and biotic factors, and how these controls influence ecosystem energy, carbon and nutrient balances, is thus an important theme in ecosystem research.

More than half a century of research has led to extensive knowledge of the mechanisms that influence decomposition dynamics (Swift, Heal & Anderson 1979; Cadisch & Giller 1997; Prescott 2010). Important controlling factors include local macro- and microclimate, the chemical composition of the organic matter, the composition and activity of the decomposer community, the activity of extracellular enzymes, and external sources of nutrients (Sinsabaugh, Moorhead & Linkins 1994; Boddy & Watkinson 1995; Aerts 1997; Davidson & Janssens 2006; Manzoni et al. 2012b).

Although the process of ‘decomposition’ encompasses a wide variety of chemical transformations with multiple potential end products, there is often a focus on a single emergent outcome: the temporal course of mass loss. This focus reflects the fact that the primary use of decomposition data is describing and understanding the mechanisms controlling the fluxes of carbon between the biosphere and the atmosphere. As such, many studies of litter decomposition track mass loss from a single cohort of litter (‘litter bag’ studies) and summarize these data with the parameters of a fitted decomposition model. A large array of different models exist, ranging in complexity from single-parameter descriptive models that make no attempt to capture biological mechanisms, to sophisticated multiparameter models in which specific (albeit idealized) biological processes are explicitly modelled (Ågren & Bosatta 1996b; Berg & McClaugherty 2008; Manzoni et al. 2012b). As well as being used to summarize empirical data, the simpler versions of these models are often applied as components of global scale models of the carbon cycle (see table 3 in Cornwell et al. 2009). Using a similar approach in empirical work and large-scale models has the advantage that a simple model parameterized with empirical data can be directly used within the global models, thereby lending a strong empirical basis to global models of the carbon cycle (Brovkin et al. 2012).

The earliest systematic observations of leaf litter mass loss during decay suggested that decomposition does not progress at a constant absolute rate (Olson 1963; Swift, Heal & Anderson 1979). Instead, litter tends to lose mass rapidly (in absolute terms) in the initial phases of decomposition; then, this rate progressively diminishes, often (although not always) producing a convex curve of mass remaining versus time. A straightforward way of modelling this type of dynamics involves an assumption of first-order kinetics; and indeed, there is a long tradition of modelling decomposition as a negative exponential decline analogous to the decay of a radioactive isotope (Olson 1963).

This approach has been criticized as inadequate many times (Wieder & Lang 1982; Cheshire et al. 1988; Prescott 2005; Adair et al. 2008), both on conceptual, or a priori, grounds and because of a lack of correspondence with empirical data. The problems arise from the central assumption of the negative exponential model (eqn 5 in Table 1). A constant decay constant k effectively requires that the decaying material can be treated as a homogeneous mass, with each molecule having an equal chance of being mineralized or leached from the litter in any given time interval. In reality, organic matter is a heterogeneous collection of different molecules in different amounts and complexes, each with its own thermodynamic properties and therefore relative decomposability (Davidson & Janssens 2006). Moreover, even individual compounds do not degrade according to negative exponential dynamics (Cheshire et al. 1988).

Table 1. Equations used for the model selection analysis and corresponding solutions for long-term, steady-state standing stocks (proportional to inputs)
Model | $x(t)/x_0$ | Steady state | References
(1) Discrete series | $e^{-k_1 t} + \frac{r k_1}{k_2 - k_1}\left(e^{-k_1 t} - e^{-k_2 t}\right)$ | $1/k_1 + r/k_2$ | Manzoni et al. (2012a)
(2) Discrete parallel | $\alpha e^{-k_1 t} + (1-\alpha)\, e^{-k_2 t}$ | $\alpha/k_1 + (1-\alpha)/k_2$ | Manzoni et al. (2012a)
(3) Continuous exponential | $(1 + t/b)^{-a}$ | $b/(a-1)$ (for $a > 1$) | Ågren & Bosatta (1996a)
(4) Weibull residence time | $e^{-(t/\beta)^{\alpha}}$ | $\beta\,\Gamma(1 + 1/\alpha)$ | Fréchet (1927)
(5) Negative exponential | $e^{-kt}$ | $1/k$ | Olson (1963)

Empirical evidence further underlines the inadequacy of the exponential model. In particular, mass loss dynamics from decaying coarse woody debris often deviate strongly from exponential decay: there is often an initial ‘lag’ phase in which decay proceeds very slowly for several years before the rate increases and approximately exponential decay proceeds (Edmonds 1991; Harmon, Krankina & Sexton 2000; Freschet et al. 2012). This is most likely due to the inhibition of initial decomposition by a combination of spatial nutrient limitation, priority effects among decomposers, allelopathy between decomposers, and the relatively high concentrations of recalcitrant phenolic compounds in woody tissue (Cornwell et al. 2009; Fukami et al. 2010; Dickie et al. 2012).

The general consensus that the single-pool exponential decay model is often inadequate suggests that an alternative model may be necessary. A large array of alternative formulations have been proposed, but there have been few studies that systematically seek to determine which of these formulations is most flexible, general and sufficiently tractable to be used to characterize and understand the drivers of different litter decomposition trajectories. Recent work (Manzoni et al. 2012a) has summarized the assumptions and mathematical properties of a large number of commonly used models for litter decomposition. Here we propose to build on that work, using a model selection approach to compare five of these models in terms of their ability to fit an array of different decomposition trajectories observed both across tissue types as well as among a wide variety of species. We also analyse the models in terms of their out-of-sample prediction performance, given that decomposition models are commonly used for extrapolation and projection of, for example, long-term steady-state stocks.

Materials and methods

Models

A model of soil carbon dynamics that mechanistically represents all the relevant biotic and abiotic processes requires considerable complexity (Ågren & Bosatta 1996b; Allison, Wallenstein & Bradford 2010; Moorhead, Lashermes & Sinsabaugh 2012; Todd-Brown et al. 2012). Several such models exist, usually beginning from a priori formulations of mass balance, stoichiometry and decomposer efficiency (e.g. Ågren & Bosatta 1996b) or enzyme economics and microbial physiology (e.g. Allison, Wallenstein & Bradford 2010; Moorhead, Lashermes & Sinsabaugh 2012), which are then used to derive expressions for litter mass loss over time. The advantage of this approach is that the models can generate testable mechanistic hypotheses and, if valid, can be generalized to new contexts and extended to include more processes (e.g. Hyvönen & Ågren 2001). However, the level of mechanistic detail also implies a large number of model parameters, limiting their applicability for the analysis of most empirical decomposition data sets. Our strategy is therefore to examine a range of simpler model formulations (some of which can be derived as special cases of more general, process-based models). Although these models are essentially phenomenological, in that we use them to model empirical data without explicitly modelling the underlying biological processes, we propose that their relative simplicity makes them well suited for describing empirical data from litter bag experiments. Our goal in this study is to determine which model formulation is most appropriate for describing the full range of decomposition trajectories encountered among the wide array of litter types found in nature.

The first assessment of the decomposition models involves a model selection approach (Burnham & Anderson 2002; Kuha 2004). This is a statistical framework that compares alternative models for data by striking a balance between goodness-of-fit and model parsimony (in terms of the number of free model parameters). We examine four models in addition to the traditional negative exponential model (Table 1). Each of these models uses a different conceptualization of the decomposition process to extend the simple negative exponential model (for more background, see section 6.1 in Data S1 and Manzoni et al. 2012a).

The first two models we consider (eqns 1 and 2 in Table 1) extend the exponential model by dividing the litter material into two ‘pools’, each with its own exponential decay constant, representing labile and recalcitrant fractions. In eqn 1, ‘discrete series’, all of the litter is assumed to begin in the first pool and is either lost from the system or converted into the second pool in a fixed proportion. In eqn 2, ‘discrete parallel’, the initial distribution of mass between the two pools is fixed, and during decomposition, the two pools lose mass independently. Note that extension of this discrete pools approach to three or more fractions is sometimes advocated (Adair et al. 2008), but in the interest of model parsimony, we only consider two-pool models in this analysis. Although the parameters within ‘discrete series’ and ‘discrete parallel’ can be expressed as functions of each other (see eqn 10 within Manzoni et al. 2012a), in practice, due to model formulations and the logical bounds on parameters (i.e. decomposition rates must be positive and r and α must be between 0 and 1), the ‘discrete parallel’ model is slightly less flexible.
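To make the contrast between the two discrete-pool formulations concrete, the following R sketch computes the proportion of mass remaining under each, using the parameterization described above (k1 and k2 as pool-specific decay rates, r the fraction of pool-1 losses transferred to pool 2, and alpha the initial fraction in pool 1). The helper names are ours, and the closed forms follow the reconstruction in Table 1; they should be checked against Manzoni et al. (2012a) before reuse.

```r
# Proportion of initial mass remaining under the two discrete-pool models.
# Parameter names (k1, k2, r, alpha) follow the descriptions in the text.

mass_series <- function(t, k1, k2, r) {
  # all mass starts in pool 1; a fraction r of pool-1 losses enters pool 2
  exp(-k1 * t) + (r * k1 / (k2 - k1)) * (exp(-k1 * t) - exp(-k2 * t))
}

mass_parallel <- function(t, k1, k2, alpha) {
  # a fixed fraction alpha starts in pool 1, the remainder in pool 2;
  # the two pools lose mass independently
  alpha * exp(-k1 * t) + (1 - alpha) * exp(-k2 * t)
}

t <- seq(0, 10, by = 0.1)
plot(t, mass_series(t, k1 = 0.2, k2 = 1, r = 0.8), type = "l",
     xlab = "Time (years)", ylab = "Proportion mass remaining")
lines(t, mass_parallel(t, k1 = 1, k2 = 0.05, alpha = 0.7), lty = 2)
```

With k2 > k1 in the series model, the apparent decomposition rate rises from (1 − r)·k1 towards k1, which is how this formulation can mimic a lag phase (see ‘Model Selection Results’).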

The third model (‘continuous exponential’, eqn 3 in Table 1) is the continuous analogue to the discrete pools approach. Plant litter decomposability is described by a continuous quality distribution (equivalent to an infinite number of litter ‘pools’ of varying recalcitrance to decay). This model form can be derived in several ways, most notably under certain parameterizations of the more general Q-model of Ågren & Bosatta (1996b), but also by modelling the distribution of decay rates within heterogeneous litter with a two-parameter gamma distribution (Bolker, Pacala & Parton 1998; Manzoni et al. 2012a).

The fourth model we consider (Weibull residence times, eqn 4 in Table 1) has not been widely used (see its derivation in section 6.1 in Data S1). In this model, a heterogeneous substrate is represented as a distribution of ‘survival times’ with parameters α, which controls the shape of the decomposition trajectory, and β, a scaling parameter that controls the rate of decomposition (see interactive code in section 6.3.1 in Data S1). If α = 1, then the decomposition rate is constant through time, a special case equivalent to single-pool exponential decay. If α < 1, then the decomposition rate declines with time; conversely, when α > 1, the decomposition rate increases with time, equivalent to a ‘lag’ phase in the time-course of mass loss. This approach is common in materials science, where eqn 4 within section 6.1 in Data S1 is widely used to describe the temporal change in the ‘failure’ rate of a material with time (Weibull 1951).
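A minimal sketch of how the shape parameter controls the trajectory, assuming the standard two-parameter Weibull survival function exp(−(t/β)^α): curves with α below, equal to and above 1 illustrate decelerating decay, single-pool exponential decay and a lag phase, respectively.

```r
# Weibull residence-time model: proportion of mass remaining through time.
# alpha is the shape parameter, beta the scale (time) parameter.
mass_weibull <- function(t, alpha, beta) exp(-(t / beta)^alpha)

t <- seq(0, 6, by = 0.05)
plot(t, mass_weibull(t, alpha = 0.6, beta = 2), type = "l",
     xlab = "Time (years)", ylab = "Proportion mass remaining")  # decelerating
lines(t, mass_weibull(t, alpha = 1.0, beta = 2), lty = 2)         # exponential
lines(t, mass_weibull(t, alpha = 2.0, beta = 2), lty = 3)         # lag phase
```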

For comparison with the traditional approach, the last model we consider is the conventional, one-parameter negative exponential model (eqn 5 in Table 1). With the one-parameter negative exponential model, the rate of decay (measured in mass × mass−1 × time−1) is assumed to be constant through time. All four other models considered here allow the rate of decay (defined as the apparent decomposition rate, kapp, by Manzoni et al. 2012a) to change through time, although the degree of flexibility in the shape of the kapp–time relationship varies among the models.
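The apparent decomposition rate can be made concrete by differentiating the log of mass remaining; the sketch below (a generic numerical approximation, reusing the hypothetical mass_weibull() helper from the previous sketch) shows that kapp is constant for the negative exponential but time-varying for the Weibull model.

```r
# Apparent decomposition rate kapp(t) = -d(log x(t))/dt, approximated by
# finite differences; works for any vectorized mass-remaining function.
kapp <- function(mass_fun, t, dt = 1e-4, ...) {
  -(log(mass_fun(t + dt, ...)) - log(mass_fun(t, ...))) / dt
}

t <- seq(0.1, 6, by = 0.1)
kapp(function(t, k) exp(-k * t), t, k = 0.5)   # constant: negative exponential
kapp(mass_weibull, t, alpha = 2, beta = 2)     # increasing with time: lag phase
```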

Model Selection

We performed the model selection using decomposition data obtained from a global data set of multispecies leaf decomposition experiments (Assembly of Research on Traits and Decomposition; ART-Deco; Cornwell et al. 2008); this data set is available via TRY (Kattge et al. 2011). We used a subset of data from the ART-Deco database, namely the experiments with more than three harvests, a duration greater than 2 years and mass loss of more than 50% (60 species–site combinations). This data set includes species from tropical, temperate and boreal biomes, along with initial nitrogen data for those litters. We used the nitrogen data (log-transformed) combined with the model fits to explore how initial chemistry affects decomposition trajectory. Detailed, long-term data sets for wood decomposition are less common, preventing a comparable analysis for wood decay. However, for comparative purposes, we also consider an especially thorough 14-year wood decomposition experiment comprising data from three species (Laiho & Prescott 1999).

Model fitting was carried out using constrained optimization with the ‘L-BFGS-B’ algorithm (Byrd, Lu & Nocedal 1995) implemented in the optim function in base R (R Core Team 2013). Likelihood, assuming constant-variance normally distributed error around the predicted mass loss, was used as the objective function, and parameter constraints were imposed according to the specific model (e.g. 0 < α < 1 for the ‘Discrete parallel’ model in Table 1). Some model–data combinations converged more reliably than others; to ensure convergence on global optima, we performed 100 runs of each model fit using randomized initial parameter values. The parameter combination with the highest likelihood from the 100 runs was selected, and this randomized-restart approach generated reproducible fits for all models in all cases. To compare the fits of the five models, we used standard model selection approaches (Burnham & Anderson 2002), which balance parsimony against explanatory power. There are two widely used alternatives, AICc and BIC, both of which account for sample size but derive their sample-size adjustment and their parsimony-fit balance from different theoretical traditions (Kuha 2004). We use both approaches here. The choice of model selection criterion only affects the results when comparing models with different numbers of parameters: among models with the same number of parameters, AICc, BIC and likelihood all produce equivalent rankings. We present summaries of AICc, BIC and likelihood calculated for each model × data set combination, as well as an overall AICc- and BIC-based weight for each model, treating the combined fits to all 60 data sets as a single model by summing the log-likelihoods of the individual fits.
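A minimal sketch of this fitting procedure for the Weibull model, assuming vectors time and mass (proportion of initial mass remaining; names are hypothetical): the negative log-likelihood under constant-variance normal errors is minimized with L-BFGS-B from multiple random starting points, and AICc and BIC are computed from the best fit. Note that the error standard deviation (sigma) is fitted here alongside the two Weibull parameters; this is an illustrative choice, not necessarily the authors' exact likelihood formulation.

```r
# Fit the Weibull model to one decomposition trajectory by maximum likelihood,
# with randomized restarts to reduce the risk of local optima.
fit_weibull <- function(time, mass, n_restarts = 100) {
  nll <- function(par) {
    pred <- exp(-(time / par["beta"])^par["alpha"])
    -sum(dnorm(mass, mean = pred, sd = par["sigma"], log = TRUE))
  }
  best <- NULL
  for (i in seq_len(n_restarts)) {
    start <- c(alpha = runif(1, 0.2, 3), beta = runif(1, 0.1, 10),
               sigma = runif(1, 0.01, 0.2))
    fit <- try(optim(start, nll, method = "L-BFGS-B",
                     lower = c(1e-3, 1e-3, 1e-4)), silent = TRUE)
    if (!inherits(fit, "try-error") &&
        (is.null(best) || fit$value < best$value)) best <- fit
  }
  k <- length(best$par)      # number of fitted parameters (including sigma)
  n <- length(mass)          # number of harvests
  ll <- -best$value          # maximized log-likelihood
  list(par = best$par,
       logLik = ll,
       AICc = -2 * ll + 2 * k + 2 * k * (k + 1) / (n - k - 1),
       BIC  = -2 * ll + k * log(n))
}
```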

Extrapolation Analysis

By penalizing extra parameters when comparing model fits, model selection analysis seeks to avoid the problem of overfitting. Nevertheless, there remains a risk that more flexible model formulations will perform poorly when predicting out of sample, due to the inherent trade-off between bias and variance in statistical estimation (Hastie, Tibshirani & Friedman 2009). We therefore performed a simple out-of-sample prediction analysis to determine the relative robustness of the different model formulations to over-fitting issues. The relatively small data sets involved made full cross-validation analyses impractical. Instead, we refit the models to truncated data sets in which the last (i.e. longest time after beginning of incubation) observation was removed and calculated the prediction error of the resulting model fits to this final observation. This was repeated for all models, and all 60 leaf decomposition data sets.
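A sketch of this leave-last-out check, reusing the hypothetical fit_weibull() helper above: the model is refit without the final harvest and the prediction error at that time point is recorded, with the sign convention of Fig. 3 (observed minus predicted mass remaining).

```r
# Out-of-sample check: drop the last (longest-duration) harvest, refit,
# and compute the prediction error at that time point.
leave_last_out <- function(time, mass) {
  last <- which.max(time)
  fit  <- fit_weibull(time[-last], mass[-last])
  pred <- exp(-(time[last] / fit$par["beta"])^fit$par["alpha"])
  # negative values mean the model overestimated mass loss at the final harvest
  unname(mass[last] - pred)
}
```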

As well as influencing goodness-of-fit and out-of-sample prediction performance, the different model formulations also lead to different projections of the full course of mass loss and therefore steady-state litter stocks. The third column in Table 1 gives formulae for calculating steady-state litter stocks, assuming a constant, unit litter input. We calculated these values for each model–data set combination and estimated the associated errors using parametric bootstrap methods (Efron & Tibshirani 1993). Briefly, for each data set–model combination, we created 100 bootstrap replicate data sets by generating simulated observations using the best-fit model parameters and assuming normally distributed errors. We refitted the model to each of these bootstrap data sets and thereby obtained bootstrap samples of each of the model parameters, the values of which could then be used to calculate corresponding bootstrap distributions of the steady-state stock estimates. In some cases, due to the very large variance in some of these bootstrap distributions (see Fig. 4), we express the variability as median absolute deviations: the median of the absolute differences between each bootstrap observation and the samplewide median.
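The parametric bootstrap of the steady-state estimates can be sketched as follows, again building on the hypothetical fit_weibull() helper and the Weibull steady-state expression reconstructed in Table 1 (β·Γ(1 + 1/α), in units of annual input); simulated data sets are generated from the best-fit curve with normal errors, refit, and summarized with the median absolute deviation.

```r
# Parametric bootstrap of the Weibull steady-state stock estimate.
boot_steady_state <- function(time, mass, n_boot = 100) {
  fit  <- fit_weibull(time, mass)
  pred <- exp(-(time / fit$par["beta"])^fit$par["alpha"])
  boots <- replicate(n_boot, {
    # simulate a replicate data set from the best-fit curve and refit
    sim <- rnorm(length(time), mean = pred, sd = fit$par["sigma"])
    b   <- fit_weibull(time, sim)
    unname(b$par["beta"] * gamma(1 + 1 / b$par["alpha"]))
  })
  c(estimate = unname(fit$par["beta"] * gamma(1 + 1 / fit$par["alpha"])),
    mad = median(abs(boots - median(boots))))   # median absolute deviation
}
```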

Results and discussion

Model Selection Results

Litter is an array of chemical compounds, and soil processes are complex, with interactions among microbes, enzymes and spatial dynamics combining to create decomposition trajectories that the traditional single-pool exponential model fails to adequately describe. In most cases, these complex trajectories justify adding model complexity to the traditional representation of decomposition. However, considering species–site combinations separately, there was no clear best-choice model for introducing this added complexity. Each of the five models we considered was selected as the best model in certain situations, and in the case of BIC, no single model was selected for more than 16 out of 60 species–site combinations (Fig. 1). This is logical: given the range of ecosystems and litter types within the ART-Deco data set, each model has its particular strengths and weaknesses and performed well in some contexts. Note that because the negative exponential is a special case of three of the four other models, it is not possible for the likelihood of the negative exponential to be the highest (Fig. 1). The negative exponential was only selected as the best-fit model for individual cases when a penalty for model complexity was included, as in the case of AICc and BIC (Fig. 1).

Figure 1.

Model selection for 60 species–site combinations from the ART-Deco database (Cornwell et al. 2008). In the left panel, each individual species–site combination is treated as a separate model fit with each of the five models competing. LL, log-likelihood; AICc, the small-sample-size corrected version of the Akaike Information Criterion; BIC, Bayesian Information Criterion. In the right panel, we treated the entire data set as one fit with all curves forced to take the same functional form. In the cumulative fit, there are 60 × n parameters, where n is the number of parameters per curve (see Table 1).

In contrast, in terms of the fit to all of the species–site combinations, one model, the Weibull residence time model, stood out strongly. Evidence for one model relative to another is strongest when AICc and BIC point in the same direction (Kuha 2004), which is the case here: considering the data set as a whole, BIC and AICc both favoured a Weibull model over the alternatives. The second-place model was the continuous exponential model in the case of AICc, and for BIC, which has a stronger penalty for model complexity, the second-place model was the single-parameter negative exponential. In both cases, the Weibull model has the strongest support from the data: 67 AICc units and 25 BIC units better than the next best-performing model. Using the standard formulation for AIC and BIC weights, the Weibull model weight was greater than 0·999 for both AICc and BIC (Fig. 1). Despite the fact that it was the best model for only 13 (AICc) or 15 (BIC) out of 60 individual species–site cases, Weibull performed well for a class of trajectories where many models performed badly: the cases where there was lag at the onset of decomposition (Fig. 2 and Fig. S3). These ‘lag’ cases have a kapp that increases through the initial decomposition phase and a Weibull α > 1.

Figure 2.

An example of the divergent fits between models with the capacity for modelling lag phase dynamics (discrete series and Weibull) and those without it. Note that the three exponential-based models fit almost the same curve in this particular example. Data from Hobbie et al. (2006). The right panel shows how the apparent rate of decomposition changes through time.

One of the other models did provide a good fit to empirical data that showed an initial lag phase: the discrete series model (Fig. 2). This is because, as with the Weibull model (and unlike the continuous exponential and discrete parallel models), certain parameter combinations (in this case when k2 > k1) can lead to values of kapp that increase through time. The likelihood of the discrete series model fits was often comparable to the Weibull model, and they often led to very similar best-fit decomposition trajectories (e.g. Fig. 2). However, from a model selection perspective, given very similar model performance, both AICc and BIC strongly favoured the simplicity of the two-parameter Weibull model over the three-parameter discrete series model (Fig. 1).

Extrapolation Results

The five models showed broadly comparable patterns of prediction error when used to predict out of sample based on truncated data sets, but nevertheless, there were systematic differences in the size of the error. Most notably, all five models tended to overestimate mass loss for the final observation when this observation was excluded from the training data set (black boxes in Fig. 3), although this bias was also evident even when the final point was included in the training data set (grey boxes in Fig. 3). This suggests that the restrictions on decomposition trajectory resulting from the assumptions of all five models lead to poor model performance at late stages of decomposition. Interestingly, the discrete parallel and continuous exponential models showed less prediction bias (median prediction errors of 1·2% and 1·6% of initial mass, respectively) than the discrete series, Weibull and negative exponential models (3·0%, 4·8% and 6·5%, respectively).

Figure 3.

Distributions of prediction error of models fitted to full (grey boxes) or truncated (black boxes) leaf decomposition data sets (n = 60). Prediction error is calculated as the difference between last observed mass remaining and the corresponding prediction for the relevant time point. Errors are shown as proportion of initial mass (0·1 = 10% of initial mass). Central stripe represents the median per model, boxes cover the interquartile range and whiskers indicate the full data range, except for outliers represented by points.

When the best-fit model parameters are used to project the long-term equilibrium litter accumulation under steady-state assumptions, the differences in model form lead to what are at times highly divergent estimates of litter stocks and associated uncertainties (Fig. 4). Approximately half of the species–site combinations lead to steady-state stock estimates that are insensitive to model choice (the points in Fig. 4 lie in the same position across the five models). However, for a large minority of species–site combinations, model choice led to widely diverging steady-state estimates. In most of these cases, the discrete parallel, discrete series and continuous exponential models all led to estimates that were at least one and up to three orders of magnitude higher than the corresponding estimates using the negative exponential or Weibull models (missing points in lower three panels of Fig. 4). These diverging estimates are due to a series of late-stage data points with very little difference in mass remaining. The corresponding best-fit parameters for the multipool and continuous exponential models thus fit a practically inert and persistent litter fraction, leading to extremely high steady-state estimates.

Figure 4.

Steady-state estimates of litter stocks (in units of annual input) derived from best-fit parameters for each species–site combination using the equations in Table 1. Points represent best-fit estimates, and error bars show bootstrap median absolute deviations (see section 'Extrapolation Analysis'). Species–site combinations are ordered by the estimates for the negative exponential fit, and the order of the observations is maintained through the lower panels. Missing points indicate that steady-state estimates were outside the range 0–10. This occurred most often in cases where the best-fit model led to an essentially inert litter fraction, which drastically inflates the steady-state litter stock estimate, in some cases by orders of magnitude. Extremely large error bars reflect the parameter sensitivity of certain model–data combinations (see section 'The Difficult Problem of Extrapolation' for further explanation).

A New Driver of Variation in Decomposition Trajectory

Based on the analysis of the ART-Deco data set, there is a great deal of variation in the decomposition trajectory among different litter types (Fig. 5c and Fig. S2). Another large pool of C that is important to consider is coarse woody debris. In this section, we first consider the variation in trajectory for leaf litter and then the common trajectory for woody debris.

Figure 5.

The two ends of the decomposition trajectory spectrum are shown in panels (a) and (b); data for both are originally from Hobbie et al. (2006). Panel (a) shows data for Acer saccharum, which initially decomposed very quickly, after which decomposition slowed. Panel (b) shows decomposition for Pinus strobus, in which the opposite temporal pattern is observed. The estimated half-lives are shown in blue; the estimated mean residence times are shown in red. The standard errors of both estimates are shown with transparent rectangles. Panel (c) shows the distribution of Weibull α values from the ART-Deco database (Cornwell et al. 2008); within these data, there was a wide range of decomposition patterns. The vertical red line shows α = 1, which indicates a trajectory consistent with the single-pool exponential model; to the left of that line, initial decomposition was faster than the single-pool model; to the right of the red line, initial decomposition was slower than the single-pool model, indicating a lag phase. Panel (d) shows the relationship between the fitted Weibull α and initial litter nitrogen concentration.

The ‘median’ leaf litter exhibits initial decomposition that is rapid relative to the later stages, leading to a mass loss curve more concave than can be modelled with the single-pool exponential model. This is reflected in the data set-wide median Weibull α value of 0·86, implying that decomposition slows with time and that the mean residence time is longer than the half-life. A similar pattern is apparent in Fig. 5a. This case is fit well by the multipool exponential models (Cheshire et al. 1988; Bolker, Pacala & Parton 1998; Adair et al. 2008; Manzoni et al. 2012a). However, the median trajectory does not represent all trajectories: the extreme trajectories for leaf litter are shown in Fig. 5a,b, with the remainder of the species–site combinations lying on a continuum between these two extremes.
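The claim that the mean residence time exceeds the half-life when α < 1 follows directly from the standard Weibull forms (half-life = β·ln(2)^(1/α); mean residence time = β·Γ(1 + 1/α)); a small numeric check using the data set-wide median α = 0·86 and a unit scale parameter is given below.

```r
# Half-life versus mean residence time for the Weibull model.
alpha <- 0.86   # data set-wide median shape parameter
beta  <- 1      # unit scale parameter, for illustration
half_life <- beta * log(2)^(1 / alpha)
mrt       <- beta * gamma(1 + 1 / alpha)
c(half_life = half_life, mean_residence_time = mrt)  # MRT exceeds the half-life
```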

The large variation in decomposition trajectory raises the interesting question of what drives the observed differences; in other words, why some species–site combinations show patterns like Fig. 5a while others are closer to Fig. 5b. Possible mechanisms include the composition of the litter, the dynamics of the decomposers and the spatial structure of the decomposition environment. As an exploration of this pattern, we show that part of the mechanism is the chemistry of the leaves: litter with higher nitrogen has lower α values (Fig. 5d), with the relationship explaining 24% of the total variation. This suggests that the stoichiometry of the litter is one of the important drivers of the shape of the decomposition trajectory. Essentially, the low-N leaves had a decomposition trajectory similar to the more extreme ‘lag’ pattern commonly observed for coarse woody debris. It is well established that litter initial N content is an important driver of decomposition rate (Swift, Heal & Anderson 1979) and of patterns of N immobilization/mineralization (Parton et al. 2007; Manzoni et al. 2008). This is the first evidence that litter N also affects the shape of the general mass loss trajectory.
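A sketch of the exploratory regression behind Fig. 5d, assuming a data frame fits with one row per species–site combination and hypothetical column names alpha (fitted Weibull shape parameter) and litter_N (initial litter N concentration); the R² corresponds to the proportion of variation explained.

```r
# Relate the fitted Weibull shape parameter to log-transformed initial litter N.
# 'fits', 'alpha' and 'litter_N' are hypothetical names for illustration.
m <- lm(alpha ~ log(litter_N), data = fits)
summary(m)$r.squared   # proportion of variation in alpha explained by litter N
```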

Because of the slow nature of the process, there are far fewer wood decomposition data available than leaf data (Weedon et al. 2008). However, there are a few very good long-term data sets, including experiments that lasted as long as 14 years (Laiho & Prescott 1999). We fit our model to data for two species from this experiment; both species have Weibull α > 1, indicating lag dynamics and a rate of decomposition that increases with time (Fig. 6). Note that although the shape of the relationship is similar to that of low-N leaves (comparing Fig. 5 with Fig. 6), the lag phase for wood has the potential to be much longer. Nitrogen concentration may also be an important driver of the trajectory of wood decay. Compared to leaves, wood generally has very low N, and gymnosperm wood is especially low (Weedon et al. 2008). As such, the observation of a very long lag phase for gymnosperm wood and overall rates of decomposition that increase with time is consistent with the pattern shown for leaves in Fig. 5d.

Figure 6.

Wood decomposition data from a 14-year wood decomposition experiment (Laiho & Prescott 1999). Panel (a) shows the mean and standard error (N = 5 for each harvest) for two gymnosperm species in Canada. The Weibull fit finds a ‘lag phase’ (i.e. α > 1) for both species: for Picea, α = 1·91 ± 0·70; for Pinus, α = 1·23 ± 0·10. Panel (b) shows the distribution of residence times estimated using the Weibull approach for the two species’ woody debris. Panel (c) shows the estimated rate of decay (= kapp) for both species. Note that for both species the rate of decay is still increasing after 14 years. Also note that in the case of Pinus, the absolute rate of mass loss remains close to constant [panel (a)], which implies that the relative rate of mass loss is still increasing [panel (c)].

The Difficult Problem of Extrapolation

One of the general goals of the field is to connect decomposition trajectories to carbon stocks. Given the limited time frame of most studies, modelling efforts often require out-of-sample extrapolation, especially for slow-decomposing woody debris. It is worth considering how models differ in this regard. In some cases, there was agreement among models for a particular species–site combination in terms of steady-state stock estimates. However, this agreement was by no means universal, and in many cases, the steady-state estimates differed wildly among the models (Figs 3 and 4). On closer examination, most of these diverging cases were trajectories in which kapp slowed through time (i.e. Weibull α < 1). For the discrete parallel, discrete series and continuous exponential models, this led to best-fit parameters that model an exceedingly slowly decomposing pool (i.e. k2 << k1). Although this parameter combination may lead to the best fit in terms of likelihood, the corresponding steady-state estimates (e.g. >4000 times the annual litter input in one case) are several orders of magnitude beyond observed values (see Prescott 2005).

In contrast, the Weibull and negative exponential models, as a consequence of their formulation, are not able to fit an extremely slow-decomposing pool. Because data on the actual steady-state stocks associated with the data sets we used are not available and because of artefacts of litterbag experiments (Wieder & Lang 1982; Prescott 2010), it is not clear which of the estimates is more accurate.

The bootstrap errors associated with the steady-state estimates are often wider for the discrete parallel, discrete series and continuous exponential models (Fig. 4). This is again a consequence of the formulation and parameterization of these models: in the case of the discrete parallel, discrete series and continuous exponential models, the parameters to be fitted are practically (if not structurally) non-identifiable (sensu Raue et al. 2009). In other words, given limited data, there are a range of parameter combinations that produce fits with very similar likelihoods. This means that small shifts in the data (e.g. during the bootstrap resampling procedure) can lead to drastically different best-fit values of the model parameters and consequently highly divergent steady-state estimates (Fig. 4).

In contrast, the two parameters in the Weibull model describe different aspects of the decomposition trajectory (see section 'Models') and are thus more readily identifiable, even with limited data points. This generally translates into less variation in the bootstrapped steady-state estimates. This property also led to more reliable convergence during model fitting (data not shown) and suggests that estimates of litter steady states derived from Weibull model fits may be more robust to sources of error compared to the other models. However, this robustness needs to be balanced against the potential bias in steady-state predictions which are a result of an inability to fit a slow-decomposing litter fraction, as discussed above.

The divergence in steady-state estimates shows one limitation of solely using curve fitting to evaluate alternative models. Collecting data on a sufficiently long time scale to accurately constrain long-term litter stock estimates is practically difficult, especially for coarse wood, and projections to time-scales beyond the data are strongly influenced by the choice of model. In other words, the further the extrapolation from measurements, the less the conclusions are influenced by the data itself, and the greater the influence of assumptions embedded in the model formulation. Although the large numbers of parameters within process-based models make them inappropriate for the type of multispecies, multi-data-source analysis we have presented here (which reflects the reality of most decomposition experiments), their value lies precisely in the fact that they are the best tools for exploring the consequences of different assumptions about mechanisms. For example, the Q-model (Ågren & Bosatta 1996b) allows the analytical separation of decomposition drivers into environment-specific (climate and decomposer efficiency) and substrate-specific (litter quality distribution) components. Moreover, having an explicit process-based formulation also allows for tests of whether fit, steady-state prediction or both are improved by the representation of additional mechanistic processes.

Conclusion

Moving beyond the single-pool exponential model leads to improved fits of models to data, but also presents several complications. Empirical data show that the diversity of decomposition trajectories is wider than previously considered, placing a premium on model flexibility. However, the initial phase of decomposition is only part of the problem, as another important goal of decomposition studies is understanding the steady-state stocks, which for all but the fastest decomposing material may involve extrapolating beyond the time-scale of data collection. This extrapolation can lead to conclusions that are dependent on the choice of model formulation. Different models, which may give similarly close fits to the data in hand, may lead to widely divergent long-term projections. This issue suggests that progress will come from a model synthesis such that parameter estimates from empirical investigations using low-parameter models can be more clearly utilized in more complex ecosystem models.

With respect to the initial phases of decay, we have shown that one important part of this diversity is the existence of a ‘lag phase’ in a large minority of litter decomposition trajectories, especially those of litters with low N concentrations. The Weibull model shows some advantageous behaviour in capturing this aspect of the litter decomposition trajectory. As long-term data sets (e.g. Cornelissen et al. 2012) with intense sampling become available, we suggest that flexible models that allow for fitting a lag phase will prove useful in understanding the full range of ways in which different types of litter decompose.

Acknowledgements

Financial support came from the Netherlands Organization for Scientific Research (NWO) through its Open Competition Program of the section Earth and Life Sciences (ALW), Grant Number 820.01.016, and from the University of Antwerp Research Centre of Excellence - ECO. The manuscript also benefited greatly from the insightful comments of two anonymous reviewers.
