## Introduction

Ecologists frequently use models to detect and describe patterns, or to predict to new situations. In particular, regression models are often used as tools for quantifying the relationship between one variable and others upon which it depends. Whether analysing the body weight of birds in relation to their age, sex and guild; the abundance of squirrels as it varies with temperature, food and shelter; or vegetation type in relation to aspect, rainfall and soil nutrients, models can be used to identify variables with the most explanatory power, indicate optimal conditions and predict to new cases.

The past 20 years have seen a growing sophistication in the types of statistical model applied in ecology, with impetus from substantial advances in both statistics and computing. Early linear regression models were attractively straightforward, but too simplistic for many real-life situations. In the 1980s and 1990s, generalized linear models (GLM; McCullagh & Nelder 1989) and generalized additive models (GAM; Hastie & Tibshirani 1990) increased our capacity to analyse data with non-normally distributed errors (presence–absence and count data), and to model nonlinear relationships. These models are now widely used in ecology, for example for analysis of morphological relationships (Clarke & Johnston 1999) and population trends (Fewster *et al*. 2000), and for predicting the distributions of species (Buckland & Elston 1993).

Over the same period, computer scientists developed a wide variety of algorithms particularly suited to prediction, including neural nets, ensembles of trees and support vector machines. These machine learning (ML) methods are used less frequently than regression methods in ecology, perhaps partly because they are considered less interpretable and therefore less open to scrutiny. It may also be that ecologists are less familiar with the modelling paradigm of ML, which differs from that of statistics. Statistical approaches to model fitting start by assuming an appropriate data model, and parameters for this model are then estimated from the data. By contrast, ML avoids starting with a data model and rather uses an algorithm to learn the relationship between the response and its predictors (Breiman 2001). The statistical approach focuses on questions such as what model will be postulated (e.g. are the effects additive, or are there interactions?), how the response is distributed, and whether observations are independent. By contrast, the ML approach assumes that the data-generating process (in the case of ecology, nature) is complex and unknown, and tries to learn the response by observing inputs and responses and finding dominant patterns. This places the emphasis on a model's ability to predict well, and focuses on what is being predicted and how prediction success should be measured.

In this paper we discuss a relatively new technique, boosted regression trees (BRT), which draws on insights and techniques from both statistical and ML traditions. The BRT approach differs fundamentally from traditional regression methods that produce a single ‘best’ model, instead using the technique of boosting to combine large numbers of relatively simple tree models adaptively, to optimize predictive performance (e.g. Elith *et al*. 2006; Leathwick *et al*. 2006, 2008). The boosting approach used in BRT has its origins in ML (Schapire 2003), but subsequent developments in the statistical community reinterpret it as an advanced form of regression (Friedman, Hastie & Tibshirani 2000).

Despite clear evidence of strong predictive performance and reliable identification of relevant variables and interactions, BRT has rarely been used in ecology (although see Moisen *et al*. 2006; De’ath 2007). In this paper we aim to facilitate the wider use of BRT by ecologists, demonstrating its use in an analysis of relationships between frequency of capture of short-finned eels (*Anguilla australis* Richardson), and a set of predictors describing river environments in New Zealand. We first explain what BRT models are, and then show how to develop, explore and interpret an optimal model. Supporting software and a tutorial are provided as Supplementary material.

### Explanation of boosted regression trees

BRT is one of several techniques that aim to improve the performance of a single model by fitting many models and combining them for prediction. BRT uses two algorithms: regression trees are from the classification and regression tree (decision tree) group of models, and boosting builds and combines a collection of models. We deal with each of these components in turn.

### Decision trees

Modern decision trees are described statistically by Breiman *et al*. (1984) and Hastie, Tibshirani & Friedman (2001), and for ecological applications by De’ath & Fabricius (2000). Tree-based models partition the predictor space into rectangles, using a series of rules to identify regions having the most homogeneous responses to predictors. They then fit a constant to each region (Fig. 1), with classification trees fitting the most probable class as the constant, and regression trees fitting the mean response for observations in that region, assuming normally distributed errors. For example, in Fig. 1 the two predictor variables *X*_{1} and *X*_{2} could be temperature and rainfall, and the response *Y*, the mean adult weight of a species. Regions *Y*_{1}, *Y*_{2}, etc. are terminal nodes or leaves, and *t*_{1}, *t*_{2}, etc. are split points. Predictors and split points are chosen to minimize prediction errors. Growing a tree involves recursive binary splits: a binary split is repeatedly applied to its own output until some stopping criterion is reached. An effective strategy for fitting a single decision tree is to grow a large tree, then prune it by collapsing the weakest links identified through cross-validation (CV) (Hastie *et al*. 2001).
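The mechanics described above can be sketched in a few lines. This is an illustrative Python analogue using scikit-learn (the paper's own software is R); the variable names and toy data are ours, not from the case study.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy analogue of Fig. 1 (names are illustrative): two predictors, e.g.
# temperature (X1) and rainfall (X2), and a response Y (mean adult weight).
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
Y = np.where(X[:, 0] < 0.5, 10.0, 20.0) + np.where(X[:, 1] < 0.5, 0.0, 5.0)

# Each split chooses a predictor and a split point t to minimise squared
# prediction error; growing the tree applies binary splits recursively.
tree = DecisionTreeRegressor(max_depth=2).fit(X, Y)

# Each terminal node (leaf) predicts a constant: the mean response of the
# observations falling in that rectangular region.
leaves = tree.apply(X)
for leaf in np.unique(leaves):
    region = Y[leaves == leaf]
    assert np.allclose(tree.predict(X[leaves == leaf]), region.mean())
```

Because the toy response is an exact step function of the two predictors, a depth-2 tree recovers the four rectangular regions exactly.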

Decision trees are popular because they represent information in a way that is intuitive and easy to visualize, and have several other advantageous properties. Preparation of candidate predictors is simplified because predictor variables can be of any type (numeric, binary, categorical, etc.), model outcomes are unaffected by monotone transformations and differing scales of measurement among predictors, and irrelevant predictors are seldom selected. Trees are insensitive to outliers, and can accommodate missing data in predictor variables by using surrogates (Breiman *et al*. 1984). The hierarchical structure of a tree means that the response to one input variable depends on values of inputs higher in the tree, so interactions between predictors are automatically modelled. Despite these benefits, trees are not usually as accurate as other methods, such as GLM and GAM. They have difficulty in modelling smooth functions, even ones as simple as a straight-line response at 45° to two input axes. Also, the tree structure depends on the sample of data, and small changes in training data can result in very different series of splits (Hastie *et al*. 2001). These factors detract from the advantages of trees, introducing uncertainty into their interpretation and limiting their predictive performance.

### Boosting

Boosting is a method for improving model accuracy, based on the idea that it is easier to find and average many rough rules of thumb than to find a single, highly accurate prediction rule (Schapire 2003). Related techniques – including bagging, stacking and model averaging – also build, then merge results from multiple models, but boosting is unique because it is sequential: it is a forward, stagewise procedure. In boosting, models (e.g. decision trees) are fitted iteratively to the training data, using appropriate methods to gradually increase emphasis on observations modelled poorly by the existing collection of trees. Boosting algorithms vary in how they quantify lack of fit and select settings for the next iteration. The original boosting algorithms such as AdaBoost (Freund & Schapire 1996) were developed for two-class classification problems. They apply weights to the observations, emphasizing poorly modelled ones, so the ML literature tends to discuss boosting in terms of changing weights.

Here, though, we focus on regression trees (including logistic regression trees), and the intuition is different. For regression problems, boosting is a form of ‘functional gradient descent’. Consider a loss function – in this case, a measure (such as deviance) that represents the loss in predictive performance due to a suboptimal model. Boosting is a numerical optimization technique for minimizing the loss function by adding, at each step, a new tree that best reduces (steps down the gradient of) the loss function. For BRT, the first regression tree is the one that, for the selected tree size, maximally reduces the loss function. For each following step, the focus is on the residuals: on variation in the response that is not so far explained by the model. [Technical aside: for ordinary regression and squared-error loss, standard residuals are used. For more general loss, the analogue of the residual vector is the vector of negative gradients. Deviance is used as the loss function in the software we use. The negative gradient of the deviance in a logistic regression BRT model or a Poisson BRT model is the residual *y* – *p*, where *y* is the response and *p* the fitted probability or fitted Poisson mean. These are fitted by a tree, and the fitted values are added to the current logit(*p*) or log(*p*).] For example, at the second step, a tree is fitted to the residuals of the first tree, and that second tree could contain quite different variables and split points compared with the first. The model is then updated to contain two trees (two terms), and the residuals from this two-term model are calculated, and so on. The process is stagewise (not stepwise), meaning that existing trees are left unchanged as the model is enlarged; only the fitted value for each observation is re-estimated at each step to reflect the contribution of the newly added tree.
The final BRT model is a linear combination of many trees (usually hundreds to thousands) that can be thought of as a regression model where each term is a tree. We illustrate the way in which the trees combine and contribute to the final fitted model in a later section, ‘How multiple trees produce curvilinear functions’. The model-building process performs best if it moves slowly down the gradient, so the contribution of each tree is usually shrunk by a learning rate that is substantially less than one. Fitted values in the final model are computed as the sum of all trees multiplied by the learning rate, and are much more stable and accurate than those from a single decision tree model.
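The stagewise procedure is compact enough to write out in full for the simplest case (squared-error loss). The sketch below is an illustrative Python implementation, not the gbm algorithm itself; the toy data and all parameter values are ours.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)

lr, n_trees = 0.1, 200          # learning rate and number of trees (nt)
f0 = y.mean()                   # initial fit
pred = np.full(len(y), f0)
trees = []

# Stagewise fitting for squared-error loss: at each step, fit a small tree
# to the current residuals, then add a shrunken copy of it. Existing trees
# are left unchanged; only the fitted values are updated.
for _ in range(n_trees):
    resid = y - pred                            # variation not yet explained
    t = DecisionTreeRegressor(max_depth=2).fit(X, resid)
    pred += lr * t.predict(X)
    trees.append(t)

# The final model is the initial value plus the learning-rate-weighted
# sum of all the trees.
final = f0 + lr * sum(t.predict(X) for t in trees)
assert np.allclose(final, pred)
```

Each tree contributes only a small, shrunken step, which is why hundreds to thousands of trees are typically needed.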

Similarly to GLM, BRT models can be fitted to a variety of response types (Gaussian, Poisson, binomial, etc.) by specifying the error distribution and the link. Ridgeway (2006) provides mathematical details for available distributions in the software we use here, including calculations for deviance (the loss function), initial values, gradients, and the constants predicted in each terminal node. Some loss functions are more robust to noisy data than others (Hastie *et al*. 2001). For example, binomial data can be modelled in BRTs with several loss functions: exponential loss makes them similar to boosted classification trees such as AdaBoost, but binomial deviance is more robust, and likely to perform better in data where classes may be mislabelled (e.g. false negative observations).
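The claim in the technical aside, that the negative gradient of the deviance is the residual *y* – *p*, can be checked numerically. The check below uses plain Python/NumPy and is illustrative only; the constant factor 2 comes from defining the deviance as twice the negative log-likelihood and is absorbed into the step size.

```python
import numpy as np

# Binomial deviance for one observation as a function of the logit f,
# with p = 1 / (1 + exp(-f)).
def deviance(y, f):
    p = 1.0 / (1.0 + np.exp(-f))
    return -2.0 * (y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

y_obs, f = 1.0, 0.3
p = 1.0 / (1.0 + np.exp(-f))

# Numerical derivative of the deviance with respect to f.
eps = 1e-6
grad = (deviance(y_obs, f + eps) - deviance(y_obs, f - eps)) / (2 * eps)

# The negative gradient is 2 * (y - p): the residual y - p up to a
# constant factor.
assert abs(-grad - 2.0 * (y_obs - p)) < 1e-5
```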

From a user's point of view, important features of BRT as applied in this paper are as follows. First, the process is stochastic – it includes a random or probabilistic component. The stochasticity improves predictive performance, reducing the variance of the final model, by using only a random subset of data to fit each new tree (Friedman 2002). This means that, unless a random seed is set initially, final models will be subtly different each time they are run. Second, the sequential model-fitting process builds on trees fitted previously, and increasingly focuses on the observations that are hardest to predict. This distinguishes the process from one where a single large tree is fitted to the data set. However, if the perfect fit were a single tree, in a boosted model it would probably be fitted by a sum of identical shrunken versions of itself. Third, values must be provided for two important parameters. The learning rate (*lr*), also known as the shrinkage parameter, determines the contribution of each tree to the growing model, and the tree complexity (*tc*) controls whether interactions are fitted: a *tc* of 1 (single decision stump; two terminal nodes) fits an additive model, a *tc* of 2 fits a model with up to two-way interactions, and so on. These two parameters then determine the number of trees (*nt*) required for optimal prediction. Finally, prediction from a BRT model is straightforward, but interpretation requires tools for identifying which variables and interactions are important, and for visualizing fitted functions. In the following sections, we use a case study to show how to manage these features of BRT in a typical ecological setting.
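These settings have direct analogues in scikit-learn's gradient boosting, sketched below for orientation (the paper itself uses the R gbm package; `max_depth` is only a rough analogue of *tc*, and the data and values here are purely illustrative).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic two-class data standing in for presence-absence records.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

model = GradientBoostingClassifier(
    learning_rate=0.01,  # lr: shrinks each tree's contribution
    max_depth=2,         # rough analogue of tc; depth 1 gives stumps (additive model)
    n_estimators=1000,   # nt: number of trees
    subsample=0.5,       # bag fraction: each tree sees a random half of the data
    random_state=42,     # fixing the seed makes the stochastic fit repeatable
).fit(X, y)

assert len(model.estimators_) == 1000
```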

### The case study

We demonstrate use of BRT with data describing the distribution of, and environments occupied by, the short-finned eel (*Anguilla australis*) in New Zealand. We aim to produce a model that not only identifies major environmental determinants of *A. australis* distribution, but also can be used to predict and map its occurrence in unsampled rivers. The model will be a form of logistic regression that models the probability that a species occurs, *y* = 1, at a location with covariates **X**, *P*(*y* = 1 | **X**). This probability will be modelled via a logit link: logit *P*(*y* = 1 | **X**) = *f*(**X**).

*Anguilla australis* is a freshwater eel native to south-eastern Australia, New Zealand and western Pacific islands. Within New Zealand it is a common freshwater species, frequenting lowland lakes, swamps, and sluggish streams and rivers in pastoral areas, and forming a valuable traditional and commercial fishery. Short-finned eels take 10–20 years to mature, then migrate – perhaps in response to rainfall or flow triggers – to the sea to spawn. The eels spawn at considerable depth, then larvae are brought back to the coast on ocean currents and metamorphose into glass eels. After entering freshwater, they become pigmented and migrate upstream. They tend not to penetrate as far upstream as long-finned eels (*Anguilla dieffenbachii*), probably because there is little suitable habitat further inland rather than because they are unable to do so (McDowall 1993).

The data set, developed for research and conservation planning in New Zealand, is described in detail by Leathwick *et al*. (2008). Briefly, species data were records of species caught from 13 369 sites spanning the major environmental gradients in New Zealand's rivers. *Anguilla australis* was caught at 20% of sites. Because this is a much larger data set than is often available in ecology, here we subsample the 13 369 sites, usually partitioning off 1000 records for modelling and keeping the remainder for independent evaluation.

The explanatory variables were a set of 11 functionally relevant environmental predictors (Table 1) that summarize conditions over several spatial scales: local (segment and reach) scale, upstream catchment scale, and downstream to the sea. Most were available as GIS data for the full river system of New Zealand, enabling prediction to all rivers. The exception was one variable describing local substrate conditions (LocSed) that had records at only 82% of sites. The 12th variable was categorical, and described fishing method (Table 1). Given these records and covariates, the logistic regression will be modelling the joint probability of occurrence and capture of *A. australis*.

| Variable | Description | Mean and range |
|---|---|---|
| **Reach scale predictor** | | |
| LocSed | Weighted average of proportional cover of bed sediment: 1 = mud; 2 = sand; 3 = fine gravel; 4 = coarse gravel; 5 = cobble; 6 = boulder; 7 = bedrock | 3·77, 1–7 |
| **Segment scale predictors** | | |
| SegSumT | Summer air temperature (°C) | 16·3, 8·9–19·8 |
| SegTSeas | Winter air temperature (°C), normalized with respect to SegJanT | 0·36, −4·2 to 4·1 |
| SegLowFlow | Segment low flow (m^{3} s^{−1}), fourth root transformed | 1·092, 1·0–4·09 |
| **Downstream predictors** | | |
| DSDist | Distance to coast (km) | 74, 0·03–433·4 |
| DSDam | Presence of known downstream obstructions, mostly dams | 0·18, 0 or 1 |
| DSMaxSlope | Maximum downstream slope (°) | 3·1, 0–29·7 |
| **Upstream/catchment scale predictors** | | |
| USAvgT | Average temperature in catchment (°C) compared with segment, normalized with respect to SegJanT | −0·38, −7·7 to 2·2 |
| USRainDays | Days per month with rain >25 mm | 1·22, 0·21–3·30 |
| USSlope | Average slope in the upstream catchment (°) | 14·3, 0–41·0 |
| USNative | Area with indigenous forest (proportion) | 0·57, 0–1 |
| **Fishing method** | | |
| Method | Fishing method in five classes: electric, net, spot, trap, mixture | NA |

### Software and modelling

All models were fitted in R (R Development Core Team 2006) version 2.3-1, using the gbm package version 1.5-7 (Ridgeway 2006) plus custom code written by J.L. and J.E. Our code is available with a tutorial (Supplementary material). There is also a growing range of alternative implementations for boosted trees, but we do not address those here. The following two sections explain how to fit, evaluate and interpret a BRT model, highlighting features that make BRT particularly useful in ecology. For all settings other than those mentioned, we used the defaults in gbm.

### Optimizing the model with ecological data

Model development in BRT is best understood in the context of other model-fitting practices. For all prediction problems, overfitting models to training data reduces their generality, so regularization methods are used to constrain the fitting procedure so that it balances model fit and predictive performance (Hastie *et al*. 2001). Regularization is particularly important for BRT because its sequential model fitting allows trees to be added until the data are completely overfitted. For most modelling methods, model simplification is achieved by controlling the number of terms. The number of terms is defined by the number of predictor variables and the complexity of fitted functions, and is often determined using stepwise procedures (for a critique of these see Whittingham *et al*. 2006) or by building several models and comparing them with information theoretical measures such as Akaike's information criterion (Burnham & Anderson 2002). Controlling the number of terms implies a prior belief that parsimonious models (fewer terms) provide better prediction. Alternatively, more terms can be fitted and their contributions downweighted using shrinkage (Friedman 2001). In conventional regression, this is applied as global shrinkage (direct, proportional shrinkage on the full model) using ridge or lasso methods (Hastie *et al*. 2001; Reineking & Schröder 2006). Shrinkage in BRT is similar, but is incremental, and is applied to each new tree as it is fitted. Analytically, BRT regularization involves jointly optimizing the number of trees (*nt*), learning rate (*lr*), and tree complexity (*tc*). We focus on trade-offs between these elements in the following sections, after explaining the role of stochasticity.

### Boosting with stochasticity

Introducing some randomness into a boosted model usually improves accuracy and speed and reduces overfitting (Friedman 2002), but it does introduce variance in fitted values and predictions between runs (Appendix S1, see Supplementary material). In gbm, stochasticity is controlled through a ‘bag fraction’ that specifies the proportion of data to be selected at each step. The default bag fraction is 0·5, meaning that, at each iteration, 50% of the data are drawn at random, without replacement, from the full training set. Optimal bag fractions can be established by comparing predictive performance and model-to-model variability under different bag fractions. In our experience, stochasticity improves model performance, and fractions in the range 0·5–0·75 have given best results for presence–absence responses. From here on we use a bag fraction of 0·5, but with new data it is worth exploration.
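The run-to-run variability that stochasticity introduces is easy to demonstrate. Below is a Python/scikit-learn sketch (the paper's analysis uses R/gbm); `subsample` plays the role of gbm's bag fraction, and the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=400, n_features=5, random_state=0)

def fitted_values(seed):
    # Bag fraction 0.5: each tree is fitted to a random half of the data.
    m = GradientBoostingClassifier(subsample=0.5, n_estimators=100,
                                   random_state=seed).fit(X, y)
    return m.predict_proba(X)[:, 1]

# Same seed: identical fit. Different seed: subtly different fitted values.
assert np.allclose(fitted_values(1), fitted_values(1))
assert not np.allclose(fitted_values(1), fitted_values(2))
```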

### Number of trees vs. learning rate

The *lr* is used to shrink the contribution of each tree as it is added to the model. Decreasing (slowing) *lr* increases the number of trees required, and in general a smaller *lr* (and larger *nt*) are preferable, conditional on the number of observations and time available for computation. The usual approach is to estimate optimal *nt* and *lr* with an independent test set or with CV, using deviance reduction as the measure of success. The following analysis demonstrates how performance varies with these parameters using a subsample of the data set for model fitting, and the remaining data for independent evaluation.

Using a set of 1000 sites and 12 predictor variables, we fitted BRT models with varying values for *nt* (100–20 000) and *lr* (0·1–0·0001), and evaluated them on 12 369 excluded sites. Example code is given in the online tutorial (see Supplementary material). Results for up to 10 000 trees for *tc* values of 1 and 5 are shown in Fig. 2. Our aim here is to find the combination of parameters (*lr*, *tc* and *nt*) that achieves minimum predictive error (minimum error for predictions to independent samples). A value of 0·1 for *lr* (not plotted) was too fast for both *tc* values, and at each addition of trees above the minimum 100 trees, predictive deviance increased, indicating that overfitting occurred almost immediately. The fastest feasible *lr* (0·05) fitted relatively few trees, did not achieve minimum error for *tc* = 1 (see horizontal dashed line) or *tc* = 5, and in both cases predicted poorly as more trees were added (the curves rise steeply after they have reached a minimum, indicating overfitting). In contrast, the smallest values for *lr* approached best predictive performance slowly, and required thousands of trees to reach minimum error. There was little gain in predictive power once more than 500 or so trees were fitted. However, slower *lr* values are generally preferable to faster ones, because they shrink the contribution of each tree more, and help the final model to reliably estimate the response. We explain this further in Appendix S1, and as a rule of thumb recommend fitting models with at least 1000 trees.
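The search for the error-minimising *nt* can be sketched with scikit-learn's staged predictions. This is an illustrative Python analogue of the analysis above, using synthetic data rather than the eel records, with arbitrary parameter values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Train on a small subset; evaluate on the much larger remainder.
X, y = make_classification(n_samples=3000, n_features=8, flip_y=0.2,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=500, random_state=0)

model = GradientBoostingClassifier(learning_rate=0.01, max_depth=2,
                                   n_estimators=2000, subsample=0.5,
                                   random_state=0).fit(X_tr, y_tr)

# Predictive deviance (2 x mean negative log-likelihood) on the held-out
# sites after each tree is added; its minimum identifies the optimal nt.
dev = [2 * log_loss(y_te, p) for p in model.staged_predict_proba(X_te)]
best_nt = int(np.argmin(dev)) + 1
assert dev[best_nt - 1] <= dev[0]
```

Plotting `dev` against tree number reproduces the characteristic curves of Fig. 2: a steep initial drop, a minimum, and a rise once overfitting sets in.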

### Tree complexity

Tree complexity – the number of nodes in a tree – also affects the optimal *nt*. For a given *lr*, fitting more complex trees leads to fewer trees being required for minimum error. So, as *tc* is increased, *lr* must be decreased if sufficient trees are to be fitted (*tc*= 5, Fig. 2b). Theoretically, the *tc* should reflect the true interaction order in the response being modelled (Friedman 2001), but as this is almost always unknown, *tc* is best set with independent data.

Sample size influences optimal settings for *lr* and *tc*, as shown in Fig. 3. For this analysis, the full data set was split into training sets of various sizes (6000, 2000, 1000, 500 and 250 sites), plus an independent test set (7369 sites). BRT models of 30 000 trees were then fitted over a range of values for *tc* (1, 2, 3, 5, 7, 10) and *lr* (0·1, 0·05, 0·01, 0·005, 0·001, 0·0005). We identified, for each parameter combination, the *nt* that achieved minimum prediction error, and summarized results as averages across *tc* (Fig. 3a) and *lr* (Fig. 3b). If the minimum was not reached by 30 000 trees, that parameter combination was excluded.

Predictive performance was influenced most strongly by sample size and, as expected, large samples gave models with lower predictive error. Gains from increased *tc* were greater with larger data sets, presumably because more data provided more detailed information about the full range of sites in which the species occurs, and the complexity in that information could be modelled better using more complex trees. Decision stumps (*tc* 1) were never best (they always had higher predictive deviance), but for small samples there was no advantage – but also little penalty – for using large (higher-*tc*) trees. The reason for not using the highest *tc*, though, is that the model would have to be learnt very slowly to achieve enough trees for reliable estimates. So, small samples here (e.g. 250 sites) would be best modelled with simple trees (*tc* 2 or 3) and a slow enough *lr* to allow at least 1000 trees.

As a general guide, *lr* needs to be decreased as *tc* increases, usually inversely: doubling *tc* should be matched with halving *lr* to give approximately the same *nt*. While the results here suggest that using higher *tc* and very slow *lr* is the best strategy (for samples >500 sites the curves keep descending), the other trade-off is computing time. For example, fitting BRT models on the 1000-site data set on a modern laptop and using our online code took 0·98 min for *tc* 1 and *lr* 0·05 (500 trees), but 3·85 min for *tc* 1 and *lr* 0·01 (2500 trees), 2·36 min for *tc* 5 and *lr* 0·01 (850 trees), and 7·49 min for *tc* 1 and *lr* 0·005 (4600 trees). Where many species are modelled, or many models are required for other reasons (e.g. bootstrapping), using the fastest *lr* that achieves more than, say, 1000 trees is a good strategy. We note, too, that for presence–absence data such as these, optimal settings also vary with prevalence of the species. A very rare or very common species provides less information to model given the same total number of sites, and will generally require slower learning rates.

### Identifying the optimal settings

In many situations, large amounts of data are not available, so techniques such as CV are used for model development and/or evaluation. Cross-validation provides a means for testing the model on withheld portions of data, while still using all data at some stage to fit the model. Use of CV for selecting optimal settings is becoming increasingly common (Hastie *et al*. 2001), led by the ML focus on predictive success. Here we demonstrate a CV implementation that first determines the optimal *nt*, then fits a final model to all the data. The CV process is detailed in Fig. 4, and code is available (function gbm.step) in the Supplementary material.

We use a data set of 1000 sites to develop and test a model via CV, also evaluating it on the withheld 12 369 sites. Our selected settings are *lr* of 0·005, *tc* of 5 and bag fraction of 0·5; note that all 1000 sites can be used despite missing data for LocSed at 222 sites. As trees are added, there is an initial steep decline in prediction error followed by a more gradual approach to the minimum (Fig. 5, solid circles). With a slow enough *lr*, the CV estimates of *nt* are reliable and close to those from independent data (Fig. 5).
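The two-stage procedure (determine the optimal *nt* by CV, then fit a final model to all the data) can be sketched as follows. This is an illustrative Python analogue on synthetic data, not the gbm.step function itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=1000, n_features=8, flip_y=0.2,
                           random_state=0)
max_trees = 1000
params = dict(learning_rate=0.01, max_depth=2, subsample=0.5, random_state=0)

# Step 1: average held-out deviance across CV folds for every tree count.
cv_dev = np.zeros(max_trees)
folds = KFold(n_splits=5, shuffle=True, random_state=0)
for tr, te in folds.split(X):
    m = GradientBoostingClassifier(n_estimators=max_trees, **params)
    m.fit(X[tr], y[tr])
    cv_dev += [2 * log_loss(y[te], p) for p in m.staged_predict_proba(X[te])]
cv_dev /= 5

# Step 2: refit on all the data using the CV-optimal number of trees.
best_nt = int(np.argmin(cv_dev)) + 1
final = GradientBoostingClassifier(n_estimators=best_nt, **params).fit(X, y)
```

This mirrors Fig. 4: all the data are used at some stage to fit the model, while each fold's held-out portion provides an honest estimate of predictive deviance.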

### Simplifying the predictor set

Variable selection in BRT is achieved because the model largely ignores non-informative predictors when fitting trees. This works reasonably well because measures of relative influence quantify the importance of predictors, and irrelevant ones have a minimal effect on prediction. However, unimportant variables can be dropped using methods analogous to backward selection in regression (Miller 1990); these are sometimes referred to as recursive feature elimination. Such simplification is most useful for small data sets where redundant predictors may degrade performance by increasing variance. It is also useful if users are uncomfortable with inclusion of unimportant variables in the model. We detail our methods for simplification in Appendix S2 (see Supplementary material).
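A backward-elimination loop of this kind can be sketched as below (an illustrative Python analogue on synthetic data; our actual simplification routine is detailed in Appendix S2). The least influential predictor is dropped repeatedly while cross-validated performance holds up; the 0·01 tolerance is an arbitrary choice for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Three informative predictors among eight; the rest are noise.
X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           n_redundant=0, random_state=0)

def cv_auc(cols):
    m = GradientBoostingClassifier(n_estimators=200, max_depth=2,
                                   random_state=0)
    return cross_val_score(m, X[:, cols], y, cv=3, scoring="roc_auc").mean()

keep = list(range(X.shape[1]))
base = cv_auc(keep)
while len(keep) > 1:
    m = GradientBoostingClassifier(n_estimators=200, max_depth=2,
                                   random_state=0).fit(X[:, keep], y)
    # Drop the predictor with the lowest relative influence ...
    weakest = keep[int(np.argmin(m.feature_importances_))]
    trial = [c for c in keep if c != weakest]
    # ... unless doing so degrades cross-validated performance.
    if cv_auc(trial) < base - 0.01:
        break
    keep = trial
```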

### Understanding and interpreting the model

A recognized advantage of individual decision trees is their simplicity, but boosting produces a model with hundreds to thousands of trees, presenting a challenge for understanding the final model. Nevertheless, BRT does not have to be treated like a black box, and we show here how the models can be summarized, evaluated and interpreted similarly to conventional regression models.

### Relative importance of predictor variables

Formulae developed by Friedman (2001) and implemented in the gbm library estimate the relative influence of predictor variables. The measures are based on the number of times a variable is selected for splitting, weighted by the squared improvement to the model as a result of each split, and averaged over all trees (Friedman & Meulman 2003). The relative influence (or contribution) of each variable is scaled so that the sum adds to 100, with higher numbers indicating stronger influence on the response.
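scikit-learn's impurity-based importances follow the same idea (split improvements, averaged over all trees); rescaling them to sum to 100 reproduces gbm-style relative contributions. An illustrative sketch on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=2,
                           random_state=0)
model = GradientBoostingClassifier(n_estimators=300, max_depth=2,
                                   random_state=0).fit(X, y)

# Split improvements averaged over trees, rescaled to sum to 100 so that
# higher numbers indicate stronger influence on the response.
rel_inf = 100 * model.feature_importances_ / model.feature_importances_.sum()
ranking = np.argsort(rel_inf)[::-1]      # most influential predictor first
assert np.isclose(rel_inf.sum(), 100)
```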

For the *A. australis* model developed on 1000 sites through CV, the six most important variables reflected reach, segment, upstream and downstream conditions, plus fishing method (Table 2).

| Predictor | Relative contribution (%) |
|---|---|
| SegSumT | 24·7 |
| USNative | 11·3 |
| Method | 11·1 |
| DSDist | 9·7 |
| LocSed | 8·0 |
| DSMaxSlope | 7·3 |
| USSlope | 6·9 |
| USRainDays | 6·5 |
| USAvgT | 5·7 |
| SegTSeas | 5·7 |
| SegLowFlow | 2·9 |
| DSDam | 0·1 |

### Partial dependence plots

Visualization of fitted functions in a BRT model is easily achieved using partial dependence functions that show the effect of a variable on the response after accounting for the average effects of all other variables in the model. While these graphs are not a perfect representation of the effects of each variable, particularly if there are strong interactions in the data or predictors are strongly correlated, they provide a useful basis for interpretation (Friedman 2001; Friedman & Meulman 2003). The partial responses for *A. australis* for the six most influential variables (Fig. 6) indicate a species occurring in warm, lowland rivers that have gentle downstream slopes and substantial clearing of upstream native vegetation. They demonstrate that short-finned eels often occur close to the coast, but are able to penetrate some distance inland, and prefer reaches with fine sediments. The species is most commonly caught using electric fishing, with lower success from nets, spotlighting and traps.

### Identifying important interactions

Even if a decision tree has several nodes, it may not be modelling interactions between predictors because they will be fitted only if supported by the data. In the absence of interactions, in a multinode tree the same response would be fitted to each side of splits below the first node. In effect, *tc* controls the maximum level of interaction that can be quantified, but no information is provided automatically on the nature and magnitude of fitted interaction effects. To quantify these, we use a function that creates, for each possible pair of predictors, a temporary grid of variables representing combinations of values at fixed intervals along each of their ranges. We then form predictions on the linear predictor scale for this grid, while setting values for all other variables to their respective means. We use a linear model to relate these temporary predictions to the two marginal predictors, fitting the latter as factors. The residual variance in this linear model indicates the relative strength of interaction fitted by BRT, with a residual variance of zero indicating that no interaction effects are fitted. Code and examples are available in the Supplementary material.
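The grid-and-residual procedure just described can be sketched as follows. This is an illustrative Python/scikit-learn version on synthetic data, not our gbm code; the one-hot design matrix fits the two marginal predictors as factors, so the residual variance measures departures from additivity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression

X, y = make_classification(n_samples=600, n_features=4, random_state=0)
model = GradientBoostingClassifier(n_estimators=300, max_depth=3,
                                   random_state=0).fit(X, y)

def interaction_strength(i, j, n_grid=20):
    # Temporary grid over predictors i and j at fixed intervals along
    # their ranges; all other variables held at their means.
    gi = np.linspace(X[:, i].min(), X[:, i].max(), n_grid)
    gj = np.linspace(X[:, j].min(), X[:, j].max(), n_grid)
    grid = np.tile(X.mean(axis=0), (n_grid * n_grid, 1))
    mi, mj = np.meshgrid(gi, gj)
    grid[:, i], grid[:, j] = mi.ravel(), mj.ravel()
    # Predictions on the linear predictor (logit) scale.
    f = model.decision_function(grid)
    # Linear model with the two marginal predictors fitted as factors
    # (one dummy column per grid level), i.e. main effects only.
    design = np.column_stack([(grid[:, i][:, None] == gi).astype(float),
                              (grid[:, j][:, None] == gj).astype(float)])
    resid = f - LinearRegression().fit(design, f).predict(design)
    # Residual variance: 0 means no interaction fitted between i and j.
    return float(resid.var())

strength = interaction_strength(0, 1)
assert strength >= 0
```

Ranking `interaction_strength(i, j)` over all predictor pairs identifies the strongest fitted interactions, which can then be drawn as joint partial dependence plots.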

For *A. australis*, six of the seven most important pairwise interactions included the most influential predictor, SegSumT. Once identified, interactions can be visualized with joint partial dependence plots. The most important interaction for *A. australis* is shown in Fig. 7 (top panel), compared to the response predicted if interactions were not allowed (*tc* = 1). In this case, allowing interactions reinforces the suitability of environments that combine warm temperatures with low frequency of floods caused by high-intensity rain events in the upstream catchment. With interactions modelled, fitted values for such environments are more than twice those fitted by a model in which no interaction effects are allowed.

### predictive performance

BRT models can be used for prediction in the same way as any regression model, but without additional programming their complexity requires predictions to be made within the modelling software (in this paper, R) rather than in a GIS. Where predictions are to be mapped over many points (e.g. millions), scripts can be used to manage the process; examples are given in the Supplementary material. Prediction to any given site uses the final model, and consists of the sum of predictions from all trees, each scaled by the learning rate. Standard errors can be estimated with bootstrap procedures, as demonstrated by Leathwick *et al*. (2006).
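The prediction step itself is simple to sketch. In this hypothetical Python illustration, each "tree" is reduced to a single-split stump described by a tuple (split point, left value, right value); the prediction for a site is the sum of all tree outputs, each scaled by the learning rate. The split values echo those discussed later for summer temperature but are purely illustrative.

```python
import numpy as np

def brt_predict(trees, x, learning_rate, init=0.0):
    """Boosted prediction on the linear predictor scale: the initial
    value plus the learning-rate-scaled output of every tree.
    Each tree here is a stump: (split, left value, right value)."""
    pred = np.full(len(x), init, dtype=float)
    for split, left, right in trees:
        pred += learning_rate * np.where(x < split, left, right)
    return pred

# Two illustrative stumps splitting on (say) summer temperature
trees = [(16.65, -1.0, 1.0), (16.85, -0.8, 1.2)]
x = np.array([15.0, 16.7, 17.0])
print(brt_predict(trees, x, learning_rate=0.005))
```

Because every tree's contribution is shrunk by the learning rate, each individual step is tiny; only the accumulated sum over hundreds or thousands of trees produces fitted values of practically useful magnitude.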

Where BRT models are developed with CV, statistics on predictive performance can be estimated from the subsets of data excluded from model fitting (see Fig. 4 and Supplementary material). For the model presented previously, the CV estimate of prediction error was close to that on independent data, although slightly overoptimistic (Fig. 5, compare solid and open circles; Table 3, see estimates on independent data compared with CV). This is a typical result, although the ability of CV to estimate true performance varies with data set and species prevalence. In small data sets, CV estimates of predictive performance may be erratic, and repeated and/or stratified cross-validation can help stabilize them (Kohavi 1995).
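The mechanics of a CV estimate of predictive performance can be sketched briefly: data are partitioned into folds, the model is fitted with each fold withheld in turn, and performance statistics are computed only on the withheld observations. In this minimal Python sketch the "model" is just the training-fold mean and the score is mean squared error; a real BRT fit and deviance-based statistics would take their places.

```python
import numpy as np

def cv_score(y, n_folds=10, seed=1):
    """k-fold CV: score each held-out fold with a model fitted to
    the remaining folds; return the mean score and its standard
    error across folds (cf. the SEs reported in Table 3)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    scores = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        pred = y[train].mean()                    # stand-in for fitting a BRT
        scores.append(((y[test] - pred) ** 2).mean())
    scores = np.array(scores)
    return scores.mean(), scores.std(ddof=1) / np.sqrt(n_folds)

y = np.random.default_rng(0).normal(size=200)
print(cv_score(y))
```

Repeating this procedure with several random fold assignments, or stratifying the folds by prevalence, is the stabilization for small data sets referred to above.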

| | Independent (12 369 sites) | Cross-validation* (1000 sites) | Train (1000 sites) |
|---|---|---|---|
| Percentage deviance explained | 28·3 | 31·3 (0·96) | 52·6 |
| Area under the receiver operating characteristic curve | 0·858 | 0·869 (0·015) | 0·958 |

*Mean and SE estimated within model building.

Predictive performance should not be estimated on training data, but results are provided in Table 3 to show that BRT overfits the training data despite careful model development (compare the estimates on training and independent data). While overfitting is often seen as a problem in statistical modelling, our experience with BRT is that prediction to independent data is not compromised – indeed, it is generally superior to that of other methods (see e.g. comparisons with GLM, GAM and multivariate adaptive regression splines, Elith *et al*. 2006; Leathwick *et al*. 2006). The flexibility in the modelling that allows overfitting also enables an accurate description of the relationships in the data, provided that overfitting is appropriately controlled.

### how multiple trees produce curvilinear functions

Finally, having explored important features of a BRT model, we return to the question of how multiple shrunken trees can, in combination, fit a nonlinear function. In gbm it is possible to view the structure of each tree in a BRT model, and to plot the partial response to any variable over any chosen number of shrunken trees (Fig. 8). Our final CV model contained 1050 trees. The first two trees had four out of five variables in common, with the first split in both trees on the same variable but at slightly different values (Fig. 8a). The first tree split on summer temperature at 16·65 °C, which appears as a tiny step in the partial plot constructed only from the first tree (Fig. 8b, top left). The step is small in comparison with the final amplitude of the response because the contribution of each tree in the boosted model is shrunk by the learning rate. Adding information from the second tree (Fig. 8a, right) adds a second step at 16·85 °C (Fig. 8b, top right). Summer temperature was the most influential variable in this model, and occurred in 523 of the trees. Gradually, as more trees are included in the partial plot, the response to summer temperature becomes more complex and curvilinear (Fig. 8b, bottom row).
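This accumulation of steps can be demonstrated end to end with a toy boosting loop. The sketch below (in Python, with least-squares stumps standing in for gbm's trees, and a hypothetical sine-shaped target) fits each stump to the current residuals and adds its shrunken step to the fit; after a few hundred iterations the many small steps at different split points trace out a curvilinear response.

```python
import numpy as np

def fit_stump(x, r):
    """Least-squares single-split stump fitted to residuals r:
    returns (split, left value, right value) minimizing SSE."""
    best = None
    for s in np.unique(x)[1:]:
        left, right = r[x < s].mean(), r[x >= s].mean()
        sse = ((r - np.where(x < s, left, right)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, left, right)
    return best[1:]

def boost(x, y, lr=0.1, n_trees=200):
    """Boosting for squared error: repeatedly fit a stump to the
    residuals and add its learning-rate-shrunk step to the fit."""
    fit = np.full_like(y, y.mean())
    for _ in range(n_trees):
        s, left, right = fit_stump(x, y - fit)     # fit the residuals
        fit += lr * np.where(x < s, left, right)   # add a tiny step
    return fit

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(3 * x)                 # hypothetical smooth target
fit = boost(x, y)
print(((fit - y) ** 2).mean())    # step-wise fit tracks the curve closely
```

Plotting `fit` after 1, 2, 50 and 200 trees would reproduce the qualitative progression of Fig. 8b: a single tiny step, then two, then an increasingly smooth-looking curve.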