Species composition in forest patches is influenced by distance-to-edge within an average range of 748 m. Pellissier et al., in this issue, assess the significance of this pattern using a very large data set, a well-defined target population and comprehensive logistic models. This is a recipe for overcoming the lack of general validity frequently observed in vegetation science.
Vegetation ecology, as McGill et al. (2006) complain, frequently generates results of only local relevance, culminating in the question of whether the discipline will ever reveal ‘general principles’. There is precedent for their view: Lawton (1999) suggested that community ecology was a ‘mess’, while Simberloff (2004) argued that no general rules will ever be found, owing to the complexity of interactions. In view of the number of citations of McGill et al. (2006) (431 in the Web of Science at the time of writing this Commentary), we may conclude that the discomfort is evidently widespread. The proposed solution, a return to the distinction between the fundamental and the realized niche combined with a focus on species traits, is widely practiced but still awaits its big breakthrough.
An example of a process-related pattern that is presumed to be generally valid is the dependence of understorey plant species composition in forests on forest patch size and, above all, distance-to-edge. Pellissier et al. (2013, this issue) present such an investigation from the forests in the northern half of France. Whereas previous papers assumed an influence of up to 150 m from the forest edge (Bossuyt et al. 1999; Hermy et al. 1999; Honnay et al. 2002; Harper et al. 2005, among many others), the authors now find a significant pattern – independent of locally changing site conditions – within a range of 748 m on average. Designing an investigation in which ‘significance’ can be achieved is not easy in view of the tremendous, millennia-old human impact on fragmented landscapes such as those in the northern half of France. Furthermore, spatial dependence and spatial autocorrelation are omnipresent in field surveys (Legendre & Legendre 1998), hampering, for example, the explanatory power of the widely practiced transect studies.
Sampling design matters
Whenever investigations are based on field surveys, sampling design comes into play. Pellissier et al. (2013) use the most reliable sampling design for the selection of forest patches: the full set of all 1830 patches found in their database. These patches were sampled using a systematic grid, stratified according to French ‘ecoregions’. The resulting sample size at the level of vegetation relevés, n = 19 989, is exceptional, and the fieldwork required to achieve it was immense. In this design, distance-to-edge is not used in the sampling plan, as it would be in transect studies; it can only be determined a posteriori using a geographic information system (GIS). The investment required in field and laboratory work was possible only because data acquisition took place in the course of the National Forest Inventory (NFI) of France, with resources exceeding those of an ordinary PhD thesis by orders of magnitude. No single person or research project alone would ever have the chance to obtain a vegetation sample of the size Pellissier et al. (2013) were able to use.
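As a heavily simplified illustration of the a posteriori GIS step, the sketch below computes distance-to-edge for a plot lying inside a hypothetical axis-aligned rectangular patch; real forest patches are arbitrary polygons and the NFI relies on a full GIS, so the function and geometry here are assumptions for illustration only:

```python
def distance_to_edge(x, y, xmin, ymin, xmax, ymax):
    """Distance (m) from an interior plot (x, y) to the nearest
    boundary of an axis-aligned rectangular patch. A negative
    result would indicate a plot outside the patch."""
    return min(x - xmin, xmax - x, y - ymin, ymax - y)

# Hypothetical 1 km x 2 km patch; a plot 120 m from its western edge
d = distance_to_edge(120, 500, 0, 0, 1000, 2000)
print(d)  # 120
```

The point is that the edge distance is a derived quantity, computed from plot coordinates and patch geometry after sampling, rather than a design variable fixed in advance as in a transect study.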
In search of the sampling universe
Statistics is a rather formal discipline, sometimes failing to consider real-world circumstances. Nevertheless, it may succeed in devising ‘general rules’ that are also valid in plant science. One of my favourites is the formula for assessing the variance of the mean of a sample of size n drawn from a finite population, N (Cochran 1977):
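The formula in question is presumably Cochran's (1977) classical expression for the variance of the sample mean under simple random sampling without replacement from a finite population:

```latex
\operatorname{Var}(\bar{x}) = \frac{S^2}{n}\left(1 - \frac{n}{N}\right)
```

where S^2 denotes the population variance, n the sample size and N the population size; the term in brackets, (1 - n/N), is the finite population correction.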
Statisticians usually point to the fact that the standard deviation of the sample mean decreases with the square root of the sample size, n, demonstrating the benefit of large n. The term in brackets is relevant only when the population, N, is finite. This is a crucial point in vegetation ecology, because N is our target of investigation (in statistics sometimes referred to as the sampling universe), the one for which results are hoped to be ‘generally valid’. The forests investigated by Pellissier et al. (2013) belong to the temperate biome, which could be the target in this case. Clearly, temperate forests occur in many places around the globe (cf. Wulf & Naaf 2009; Plue et al. 2010; Li et al. 2012). Upon closer inspection, however, papers addressing forest types hardly ever promise that their results are valid for the entire biome, and the question of what constitutes the statistical population, N, frequently remains open. Pellissier et al. (2013), in contrast, show a map of their forest patches (Fig. 1), and these in fact constitute their sampling universe.
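A minimal numerical sketch of both effects follows; only the sample size n = 19 989 comes from the study, while the population variance and the finite universe of 50 000 notional plot locations are invented for illustration:

```python
import math

def se_mean(s2, n, N=None):
    """Standard error of the sample mean; applies the finite
    population correction (1 - n/N) when a finite N is given."""
    var = s2 / n
    if N is not None:
        var *= (1 - n / N)   # the bracketed term from Cochran (1977)
    return math.sqrt(var)

# Illustrative numbers: s2 = 4.0 is an assumed population variance
se_inf = se_mean(4.0, 19989)           # infinite-population SE
se_fin = se_mean(4.0, 19989, 50000)    # SE with finite population correction
print(se_inf, se_fin)
```

As the sample exhausts a larger fraction of a finite universe, the correction shrinks the standard error below the familiar s/sqrt(n) value.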
Factorial experimental designs allow factors to be analysed independently; in field surveys, in contrast, correlation between explanatory factors cannot be avoided. Pellissier et al. (2013) found an elegant way to overcome this problem in their database. They eliminated the correlation between the two factors intended to be evaluated separately – forest patch size and distance-to-edge – by appropriate subsampling: they took a subsample in which the two explanatory factors are no longer correlated, as illustrated in their fig. 2. The result is a kind of quasi-experimental design (Bråthen et al. 2007) in which part of the sample had to be sacrificed, but the sample size remained sufficiently large. In other words, the sampling is the result of data analysis or, in the terms used by Edwards et al. (2005), model-based sampling. Again, this is possible only when the sample size is large.
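The decorrelating subsample can be illustrated with a deliberately simple toy: two binary factors (a patch-size class and a distance class) whose unbalanced cell counts induce correlation, which vanishes once equal numbers of plots are kept per cell. The data and the per-cell thinning rule are hypothetical and stand in for, rather than reproduce, the authors' actual procedure:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def balanced_subsample(plots, key):
    """Keep the same number of plots in every (size class, distance
    class) cell, flattening the joint distribution of both factors."""
    cells = {}
    for p in plots:
        cells.setdefault(key(p), []).append(p)
    k = min(len(v) for v in cells.values())
    return [p for v in cells.values() for p in v[:k]]

# Toy data: (size class, distance class) with deliberately unbalanced
# cell counts - large patches are over-represented far from the edge
counts = {(0, 0): 30, (0, 1): 10, (1, 0): 10, (1, 1): 30}
plots = [(s, d) for (s, d), c in counts.items() for _ in range(c)]

before = pearson([s for s, d in plots], [d for s, d in plots])
sub = balanced_subsample(plots, key=lambda p: p)
after = pearson([s for s, d in sub], [d for s, d in sub])
print(round(before, 2), round(after, 2))  # 0.5 0.0
```

Part of the sample is sacrificed (80 plots shrink to 40 here), but the two factors can afterwards be evaluated separately.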
Potential natural vegetation: a formalism not a belief
When looking at Pellissier et al. (2013) more closely, we see that they are not just asking whether there is a distance-to-edge relation. They actually ask whether there is such a relationship that cannot be explained by the measured site factors. Explaining vegetation by site factors is the basis of the potential natural vegetation (PNV) approach (see Chiarucci et al. 2010 for a critical review). PNV assumes that, given all site factors, the vegetation establishing in the course of succession can be assessed. Pellissier et al. (2013), like others modelling vegetation–site interactions with correlation-based models, make use of this idea in a formal way. Unlike PNV, their models not only reveal the role of site factors but also quantify the variation that remains unexplained (Bittner et al. 2011). The use of models is mandatory because distance-to-edge can only be a weak factor, and the aim of the analysis is to identify its individual explanatory power, independently of other, more powerful site factors. In Fig. 1, I provide an example of how this can be visualized in logistic regression. The graph depicts the expectation of a univariate linear model in which the presence and absence of a species is explained by pH. This yields the well-known S-shaped curve (or a unimodal curve when a squared term is added), and it can be seen that the expectations for all plots conform to it. Then there is a bivariate case, still with pH as the best-explaining factor, but with the minimum water table additionally included as a minor factor. While the S-shaped response is still present in the expectations, we are actually interested in the deviations from the univariate response. Deviance, i.e. lack of model fit, is also visible, as it is computed directly from the differences between expectations and presence–absence observations (Wildi 2013). Pellissier et al. (2013), of course, try to include all the site factors needed to maximize the performance of their models (12 factors on average), thereby minimizing deviance.
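The logic of comparing a fitted logistic expectation with presence–absence observations via deviance can be sketched as follows; the data, the single pH predictor and the plain gradient-descent fit are entirely hypothetical and only stand in for the comprehensive models used by Pellissier et al. (2013):

```python
import math

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Minimal logistic regression fitted by gradient descent.
    X is a list of feature lists (no intercept column), y a 0/1 list."""
    n, p = len(X), len(X[0])
    b0, b = 0.0, [0.0] * p
    for _ in range(steps):
        probs = [1 / (1 + math.exp(-(b0 + sum(bj * xj for bj, xj in zip(b, x)))))
                 for x in X]
        errs = [pi - yi for pi, yi in zip(probs, y)]
        b0 -= lr * sum(errs) / n
        for j in range(p):
            b[j] -= lr * sum(e * x[j] for e, x in zip(errs, X)) / n
    return b0, b

def deviance(X, y, b0, b):
    """-2 log-likelihood, computed from expectations vs observations."""
    dev = 0.0
    for x, yi in zip(X, y):
        p = 1 / (1 + math.exp(-(b0 + sum(bj * xj for bj, xj in zip(b, x)))))
        dev += -2 * (yi * math.log(p) + (1 - yi) * math.log(1 - p))
    return dev

# Hypothetical plots: centred pH as predictor; presence becomes
# more likely at higher pH, giving the S-shaped expectation
pH = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
y  = [0, 0, 0, 0, 1, 0, 1, 1, 1]
X  = [[v] for v in pH]

b0, b = fit_logistic(X, y)
null_dev = deviance(X, y, 0.0, [0.0])  # reference: all coefficients zero, p = 0.5
fit_dev = deviance(X, y, b0, b)
print(fit_dev < null_dev)  # the fitted S-curve reduces the deviance
```

Adding further site factors as extra columns of X can only lower the fitted deviance further, which is exactly why the residual deviance of a comprehensive model isolates what site factors cannot explain.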
In conclusion, the paper by Pellissier et al. (2013) is an example of how ‘general validity’ of results can be achieved using large data sets. The authors explicitly define the target of their investigation, yet in their conclusions they speculate about an even wider validity of their results: ‘Because flora and long-term forest historical patterns are similar in western temperate Europe (Mather et al. 1998), the same patterns are likely to be observed in other European forests.’ We also learn that the hope for increased ‘general validity’ is tied to large data sets, a well-defined sampling design and target population, and the use of comprehensive models.