Keywords: bioclimatic modelling; context; correlation; extrapolation; mechanistic; modelling; niche; paradigm; review; species distribution



A recent review by Dormann et al. (2012, Journal of Biogeography, 39, 2119–2131) has proposed that methods for the modelling of species distributions be considered as a continuum. We disagree with this thesis, and contend that attempting to present the diverse range of methods as a continuum is unhelpful and ultimately not convincing. It adds to the confusion about the strengths and weaknesses of the diversity of available modelling methods, what exactly it is that they model, and the most appropriate applications. We highlight variation within and between modelling methods that is obscured by the continuum framework and propose that context of application and clarity of method are critical elements for future discourse on the topic.

The broad field of ‘species distribution modelling’ has exploded in popularity recently. Some remarkable progress has been made in developing new tools and methods, but the field has become mired in confusion over the underlying concepts, and is apparently ignorant of the strengths and weaknesses of different methods and how they are best suited to solving different problems. These issues are reflected in research publications that apply models inappropriately (e.g. Pyron et al., 2008 as critiqued by Rodda et al., 2011), incorrectly attribute causality to model extrapolation artefacts (Thomas et al., 2004), overlook significant, inconvenient methodological caveats (e.g. overfitting models; Evans et al., 2009), extrapolate regression models without at least identifying the extrapolation space (e.g. Wiens et al., 2009), or fail to reconcile models with ecological theory (e.g. Warren et al., 2008). Applying ensemble methods to models suffering such shortcomings (e.g. Bradley, 2009) only compounds errors and further obscures underlying problems. We should be mindful that public policy and expenditure on critical issues such as the conservation of biodiversity, biosecurity, and climate change adaptation may be misinformed by the inappropriate application of models or the misrepresentation of their reliability.
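The minimal step of at least identifying the extrapolation space, called for above, can be sketched very simply: flag any projection point whose covariates fall outside the range seen in the training data, the crudest form of extrapolation detection in the spirit of MESS-style maps (Elith et al., 2010). The covariate names and values below are hypothetical, chosen only for illustration.

```python
def extrapolation_mask(train, projection):
    """Flag projection points lying outside the training range of
    any covariate (the simplest definition of extrapolation).

    train, projection: lists of dicts mapping covariate name -> value.
    Returns, for each projection point, the set of covariate names
    that extrapolate (an empty set means interpolation).
    """
    names = list(train[0].keys())
    lo = {n: min(p[n] for p in train) for n in names}
    hi = {n: max(p[n] for p in train) for n in names}
    return [
        {n for n in names if not lo[n] <= p[n] <= hi[n]}
        for p in projection
    ]

# Hypothetical training and projection climates:
train = [{"tmax": 28.0, "rain": 600}, {"tmax": 31.0, "rain": 450},
         {"tmax": 25.5, "rain": 900}]
proj = [{"tmax": 27.0, "rain": 700},   # within the training envelope
        {"tmax": 34.0, "rain": 300}]   # novel climate on both covariates
print(extrapolation_mask(train, proj))
```

A real analysis would go further (e.g. flagging novel covariate combinations, not just univariate range violations), but even this check separates projections that can be interpreted with some confidence from those that are pure extrapolation.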

As in many areas of science, proponents of species distribution models [SDMs; for clarity of critique we use the terminology of Dormann et al. (2012) while recognizing that it has limitations] have promoted them vigorously to the scientific community, leading occasionally to acrimonious exchanges (e.g. Peterson et al., 2007; Phillips, 2008). In many cases the abilities of modelling methods have been exaggerated. We firmly agree with Dormann et al. (2012), and many previous authors, that the best way to improve modelling methods is an open, honest discussion of their different strengths and weaknesses. We also agree that the best way forward is not for proponents of different methods and models to keep asserting that their chosen modelling paradigm, method or model is unqualifiedly superior to others, or even universally adequate. Yet we are concerned that without some agreement about essential elements in the scientific discourse concerning species distribution modelling, advances may be stymied, effort could be wasted on reinventing previous discoveries, and public policy could be misinformed by poorly framed or executed modelling activities. In this correspondence, we aim to bring greater clarity and add appropriate context to four main issues.

Firstly, in their recent review, Dormann et al. (2012) propose that the apparent dichotomy between correlative and process-based SDMs is better understood as a continuum that bridges dichotomies. It is misleading to refer to the range of methods as a continuum when there are marked and important distinctions between the available methods and models that point to their respective strengths, weaknesses and appropriate uses. We suggest that the continuum concept hides the rich variety of methods and models, and that a constellation spread along multiple axes may be a better representation of the diversity of methods currently available. Even so, we believe that this constellation can be usefully divided into three broad clusters: (1) those that fit response functions based on observed ecophysiological responses to environmental drivers/factors/variables, sometimes termed process-based, ecophysiological, or mechanistic (Kearney et al., 2008); (2) those that derive functional responses from environmental data for locations where the species has been observed and sometimes where it is (assumed to be) absent; these can be loosely grouped as correlative (Guisan & Zimmermann, 2000; Elith et al., 2006); and (3) methods that combine elements from the first two clusters (Sutherst, 2003), which are usually grouped with mechanistic models and termed fitted process-based by Dormann et al. (2012). As observed by Jeschke & Strayer (2008), most modelling comparisons have been within the second group, with far fewer comparisons between groups (but note Kriticos & Randall, 2001; Kearney et al., 2008; Sutherst & Bourne, 2009; Webber et al., 2011). Further evidence that a continuum framework is not helpful is the positioning of hybrid models alongside fitted process-based models (Fig. 1 in Dormann et al., 2012). This grouping obscures the extra insights that hybrid methods can bring by incorporating other components of a species' potential range (e.g. dispersal; Soberón, 2007).

Secondly, Dormann et al. (2012) present all correlative models as a single group. By failing to draw attention to the significant differences between correlative modelling methods, they miss the opportunity to show modellers that such differences may be critical to the success of their study. As Guisan & Zimmermann (2000), Kriticos & Randall (2001), Elith et al. (2006) and others make clear, correlative modelling methods constitute an extremely diverse group, including neural networks, machine learning, genetic algorithms and more traditional statistical methods such as regression trees and generalized linear models, as well as Bayesian methods. Although correlative methods are all underpinned by the process of identifying relationships between distribution data and linked environmental covariates, they differ in the extent to which they are affected by spatial autocorrelation in the covariate data and by interactions between input variables. Methods can also be grouped into those that are discriminative, requiring presence and absence data (whether true absences or pseudo-absences drawn from a defined area or background), and those requiring only presence data. This distinction has important implications for the nature and quality of the input data, for what the models actually estimate, and for how their explanatory power should be assessed (Brotons et al., 2004; Lobo et al., 2008; Webber et al., 2011). The strengths of correlative methods are that producing projections requires minimal knowledge of the ecology or physiology of the organism, and that the models can be configured and run rapidly. These traits are also a key weakness, because there is a strong temptation to accept any highly correlated environmental variables and response functions. Such decisions are unwise, and can lead to flawed interpretations of causality and of a model's capacity for transferability. The instability of the set of covariates identified as important within a single model (Rodda et al., 2011) and between different correlative models (Syphard & Franklin, 2009) sounds a strong warning in this regard. A key challenge is to select and relate covariates to distribution data using causally meaningful linking or response functions (Austin, 2002, 2007).
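The covariate-instability warning can be made concrete with a minimal sketch: quantify the agreement between the covariate sets that two fitted models rank as important. The variable names and selections below are invented for illustration; they do not come from any published comparison.

```python
def covariate_overlap(selected_a, selected_b):
    """Jaccard similarity between the covariate sets two fitted models
    ranked as important; values near 0 signal the instability warned
    about above, values near 1 signal stable covariate selection."""
    a, b = set(selected_a), set(selected_b)
    return len(a & b) / len(a | b)

# Hypothetical selections from two correlative methods fitted to the
# same occurrence data:
glm_vars = ["tmin", "rain_dry_qtr", "soil_ph"]
other_vars = ["tmax", "rain_dry_qtr", "ndvi"]
print(covariate_overlap(glm_vars, other_vars))  # 1 shared of 5 -> 0.2
```

A low overlap between methods (or between resampled fits of the same method) suggests the selected covariates are statistical conveniences rather than causal drivers, and that transferability claims should be treated sceptically.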

The very brief overview presented in these first two points should make it clear that these methods do not fall into a continuum. Their suitability for a given case study needs to be assessed carefully, and they need to be applied and interpreted carefully, taking into account the sound advice and cautions contained in Sutherst (2003), Guisan & Thuiller (2005), Heikkinen et al. (2006), Elith & Leathwick (2009), Franklin (2009), Rodda et al. (2011), Webber et al. (2011) and others.

Thirdly, the Dormann et al. (2012) review misrepresents the methodology and conceptual underpinnings of so-called ‘fitted process-based models’. At one point these methods are posited as lying between the extremes of correlative methods on the one hand and so-called ‘forward process-based’ methods on the other. Rather confusingly, some of the examples provided (LPJ, LPJ-GUESS and ORCHIDEE) are models of plant functional types (not SDMs), all of which are very closely related models derived from BIOME (Prentice et al., 1992). Throughout the rest of the review, Dormann et al. (2012) also mistakenly treat fitted process-based methods as having the same set of traits as correlative methods. Fitted process-based models such as CLIMEX and STASH (Sykes et al., 1996; Sutherst et al., 2007) are able to draw on the strengths of both correlative and mechanistic modelling paradigms. They allow the modeller to inductively fit ecologically relevant range-limiting functions to species distribution data in a similar manner to many correlative methods. However, it is also possible to incorporate information from phenological observations and direct physiological observations into causally meaningful response functions in a deductive manner similar to that of Kearney et al. (2008). In these models, the growth and range-limiting functions are combined using sound ecological principles (Venette et al., 2010). The flexibility of these models comes at the price of relatively slow model development, and the requirement for the modeller to exercise a considerable degree of ecological nous as they test competing hypotheses and confront conflicting information from different knowledge domains.

Lastly, the literature presented in the Dormann et al. (2012) review and their Appendix S1 is unfortunately incomplete and biased, and lacks any cross-paradigm material. Key manuscripts presenting critical insights into the field, particularly for fitted process-based models, are missing. For example, the CLIMEX version 3 manual alone lists 160 published examples of models involving transferability, and a recent search of the ISI Web of Knowledge database listed 298 CLIMEX publications.

So, if the continuum framework proposed by Dormann et al. (2012) is not the way to advance our understanding of, and develop further insight into, species distribution modelling, what is the answer? We do not pretend to have a comprehensive answer to this question. However, we argue that context of application and clarity of method are critical elements to include in any discourse on the topic. Regarding context, all SDMs need to be evaluated in the context of their purpose; the principle of horses for courses is clearly relevant. It should influence our selection of models, how we evaluate them, and how we apply their results. The purpose of a modelling exercise strongly influences the desired model qualities because the costs of different errors depend on the questions being addressed (Peterson, 2006; Lobo et al., 2008). For example, a modeller working with novel ecosystems (e.g. with invasive species or in future climate scenarios) should value model sensitivity over specificity because of the consequences of underestimating risks. Such applications also favour generality and transferability over a close fit to the training data. Dormann et al. (2012) gloss over the significant distinctions between the modelling paradigms and methods and provide a compare-and-contrast summary of model traits with scant regard for the purpose for which different models were developed, and for which they remain best-suited. The end result is that many of the assertions made by Dormann et al. (2012) regarding the strengths and weaknesses of different modelling methods are too general and misleading.
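The sensitivity/specificity trade-off invoked above can be illustrated with a small sketch (the scores, observations and thresholds are invented): lowering the decision threshold of any score-producing model raises sensitivity, which is what a risk-averse biosecurity application wants, at the cost of specificity.

```python
def sensitivity_specificity(pred, obs):
    """pred, obs: sequences of 0/1 (predicted / observed presence).
    Returns (sensitivity, specificity)."""
    tp = sum(1 for p, o in zip(pred, obs) if p and o)
    tn = sum(1 for p, o in zip(pred, obs) if not p and not o)
    fp = sum(1 for p, o in zip(pred, obs) if p and not o)
    fn = sum(1 for p, o in zip(pred, obs) if not p and o)
    return tp / (tp + fn), tn / (tn + fp)

def classify(scores, threshold):
    """Convert continuous suitability scores to presence/absence."""
    return [int(s >= threshold) for s in scores]

# Hypothetical observations and model suitability scores:
obs    = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1, 0.1]

# A lenient threshold misses no occurrences (high sensitivity) but
# admits more false alarms (lower specificity); a strict threshold
# does the opposite.
for t in (0.35, 0.55):
    sens, spec = sensitivity_specificity(classify(scores, t), obs)
    print(t, round(sens, 3), round(spec, 3))
```

The point is not the arithmetic but that the "right" threshold, and the "best" model, depend on which error is costlier for the question at hand.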

Regarding clarity, an effective discourse on modelling methods requires the methods to be described in sufficient detail for a reader to understand how the results were derived and interpreted. Some SDMs are structured in a manner that is constrained and transparent (e.g. CLIMEX, NAPPFAST, STASH). In these models the parameters of ecologically informed response functions are fitted to the data; understanding their general structure makes it possible to focus critical attention on the fitting of the parameter set for each species model. In most correlative models the structure and functions, as well as their parameters, result from the model-fitting process. Interpreting these models requires an understanding of the fitted response functions, as well as their parameters. The recently developed tools in MaxEnt to display fitted functions and to map areas of model extrapolation can facilitate model interpretation (Elith et al., 2010; Webber et al., 2011). They are a welcome addition, although few authors have included these figures in model publications. Presentation of these additional outputs should be an essential component of assessing the strengths and limitations of such models. For example, an ecologically inconsistent response function should serve as a warning that there may be a problem with the model (e.g. a multimodal response function may indicate over-fitting or biased sampling).
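One simple screen for the ecological inconsistency just described (with invented response values) is to count the modes of a fitted response function sampled along an environmental gradient; more than one peak in a suitability response warrants scrutiny for over-fitting or biased sampling.

```python
def n_modes(response):
    """Count interior local maxima in a response curve sampled at
    evenly spaced points along an environmental gradient.  More than
    one mode in a fitted suitability response is the kind of warning
    sign discussed above (boundary peaks and plateaus would need a
    more careful treatment than this sketch gives them)."""
    modes = 0
    for i in range(1, len(response) - 1):
        if response[i - 1] < response[i] >= response[i + 1]:
            modes += 1
    return modes

# Hypothetical fitted responses sampled along a temperature gradient:
unimodal   = [0.0, 0.2, 0.6, 0.9, 0.7, 0.3, 0.1]   # ecologically plausible
multimodal = [0.1, 0.7, 0.2, 0.1, 0.8, 0.6, 0.2]   # warrants scrutiny
print(n_modes(unimodal), n_modes(multimodal))
```

Checks of this kind only become possible when fitted response functions are actually displayed, which is precisely why their presentation should be a routine part of model publication.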

As ecologists and modellers concerned with ensuring that species distribution modelling maintains the highest possible standards, we hope that efforts to improve modelling methods will be fruitful. These improvements could involve combining novel techniques with promising features drawn from the full suite of existing models, and applying them to appropriate questions using carefully considered data sets. Achieving this goal requires us to remain open to new ideas and to the revision of old ones. We should seek to learn more about the strengths and weaknesses of models that we use regularly (as well as those that are not part of our preferred tool kit) and never forget to reconcile our model results with ecological theory, and, where possible, with empirical data.



We thank Helen Murphy and John Scott for their useful comments on earlier drafts.

