The n-dimensional hypervolume

Authors

  • Benjamin Blonder,

    1. Department of Ecology and Evolutionary Biology, University of Arizona, Tucson, AZ, USA
    2. Rocky Mountain Biological Laboratory, Crested Butte, CO, USA
    3. Center for Macroecology, Evolution, and Climate, University of Copenhagen, Copenhagen, Denmark
    • Correspondence: Benjamin Blonder, Department of Ecology and Evolutionary Biology, University of Arizona, 1041 E Lowell Street, Tucson, AZ 85721, USA.

      E-mail: bblonder@email.arizona.edu

  • Christine Lamanna,

    1. Rocky Mountain Biological Laboratory, Crested Butte, CO, USA
    2. Sustainability Solutions Initiative, University of Maine, Orono, ME, USA
  • Cyrille Violle,

    1. Centre d'Ecologie Fonctionnelle et Evolutive-UMR 5175, CNRS, Montpellier, France
  • Brian J. Enquist

    1. Department of Ecology and Evolutionary Biology, University of Arizona, Tucson, AZ, USA
    2. Rocky Mountain Biological Laboratory, Crested Butte, CO, USA
    3. The Santa Fe Institute, Santa Fe, NM, USA

  • Editor: José Alexandre Diniz-Filho

Abstract

Aim

The Hutchinsonian hypervolume is the conceptual foundation for many lines of ecological and evolutionary inquiry, including functional morphology, comparative biology, community ecology and niche theory. However, extant methods to sample from hypervolumes or measure their geometry perform poorly on high-dimensional or holey datasets.

Innovation

We first highlight the conceptual and computational issues that have prevented a more direct approach to measuring hypervolumes. Next, we present a new multivariate kernel density estimation method that resolves many of these problems in an arbitrary number of dimensions.

Main conclusions

We show that our method (implemented as the ‘hypervolume’ R package) can match several extant methods for hypervolume geometry and species distribution modelling. Tools to quantify high-dimensional ecological hypervolumes will enable a wide range of fundamental descriptive, inferential and comparative questions to be addressed.

Introduction

Hutchinson first proposed the n-dimensional hypervolume to quantify species niches (Hutchinson, 1957). In this approach, n variables representing biologically important and independent axes are identified, and the hypervolume is defined as the set of points within this n-dimensional space that reflect suitable values of the variables (e.g. temperature or food size). The hypervolume concept of the niche is widely used in comparative biology (Pigliucci, 2007) and evolutionary biology (e.g. fitness landscapes; Gavrilets, 2004). Within ecology it can be applied beyond the quantification of species niches (Violle & Jiang, 2009), for instance to quantify the multivariate space of a community or a regional pool (Ricklefs & O'Rourke, 1975; Foote, 1997), to measure morphology (Raup & Michelson, 1965) or to test functional ecology hypotheses (Albert et al., 2010; Baraloto et al., 2012; Boucher et al., 2013).

The use of hypervolumes in biology arises through the resolution of three related mathematical questions that are independent of scale and axis choice. The first question is about the geometry of the hypervolume. Given a set of observations, what can be inferred about the overall shape of the hypervolume, its total volume and the presence or absence of holes? This question is relevant to topics including environmental or trait filtering in community assembly (Whittaker & Niering, 1965), forbidden trait combinations in physiological ecology and evolutionary biology (Wright, 1932; Maynard-Smith et al., 1985) and climate breadths in invasion ecology (Petitpierre et al., 2012). A second question about set operations can then be addressed for multiple hypervolumes whose geometry is known. How much do hypervolumes overlap, and what portion of each is unique? These questions are relevant to topics including competitive exclusion (May & MacArthur, 1972; Tilman, 1982; Abrams, 1983), species packing (Findley, 1973; Pacala & Roughgarden, 1982; Ricklefs & Miles, 1994; Tilman et al., 1997) and functional redundancy within communities (Petchey et al., 2007). The third question is about sampling from the n-dimensional space. Is a candidate point in or out of the hypervolume? Sampling questions are equivalent to species distribution modelling (Elith & Leathwick, 2009; Peterson et al., 2011), an approach in which a set of geographic points are projected into hyperspace, those points are determined to be in or out of the hypervolume, and are then back-projected into geographic space as range maps.

While these three mathematical questions integrate a wide range of topics, they have not traditionally been considered in a unified framework. Indeed, independent methods have been developed for each of the above questions. For example, the geometry question has typically been addressed using volume-estimation methods such as a convex hull (Cornwell et al., 2006). The set operations question has been primarily addressed using a range of overlap indices (Mouillot et al., 2005; Villéger et al., 2008; Warren et al., 2008). The sampling question has mostly been addressed using predictive modelling techniques [e.g. generalized linear models (GLM), generalized boosted regression models (GBM), or MaxEnt (Elith et al., 2006; Wisz et al., 2008)]. However, methods that are successful for one question may not be directly transferable to the others. For example, the sampling question can be resolved without delineating all the boundaries of a hypervolume (i.e. without sampling the entire hyperspace). Conversely, resolving the geometry and set operations questions would effectively resolve the sampling question, but existing approaches to doing so have been limited.

Here we argue that these three major questions can be addressed with a unified approach to infer hypervolumes from observations. Further, we highlight the key conceptual and dimensional issues that have previously limited the development of such approaches. We then propose a new method – a thresholded multivariate kernel density method – that can simultaneously address each of these questions. We show that our method matches extant methods for all three questions but can also be applied in high dimensions. We demonstrate the method with a simulation analysis and with two examples: the morphological hypervolume overlap of Darwin's finches, and climate hypervolume and geographic range projections of two Quercus species.

Extant methods have conceptual limitations

The general mathematical problem is how to best estimate a hypervolume from a set of observations. Ideally, an estimation procedure should: (1) directly delineate the boundaries of the hypervolume; (2) not assume a fixed distribution of observations; (3) account for disjunctions or holes; (4) not be sensitive to outlier points; and (5) produce a bounded result (i.e. not predict infinite volumes).

Using these criteria, many extant methods fall short (Fig. 1). For example, principal component analysis (e.g. Ricklefs & Travis, 1980), although intuitively appealing, assumes that the hypervolume is multivariate normal, violating criterion (2) above. Many empirical datasets indicate that even single-dimensional responses often deviate strongly from normality (Austin et al., 1984) via high skewness or multiple modes that cannot be removed by transformation. Multivariate range boxes (e.g. Hutchinson, 1957) are also inappropriate because they assume that the hypervolume is multivariate uniform, with box axes aligned to the coordinate axes, again violating criterion (2). Other ordination approaches (e.g. the outlying mean index (OMI); Doledec et al., 2000) have similar distributional limitations or may be better suited for discrimination than for geometric applications (Green, 1971). While a convex hull (Cornwell et al., 2006) and other envelope methods (Nix, 1986) are distribution-free approaches that can provide a closer measurement of the hypervolume, they are sensitive to outlier points. As a result, convex hull estimates of the hypervolume can yield errors in measurements of both volume and shape. More importantly, none of these three methods can model disjunctions or holes in the hypervolume, complicating the assessment of hypervolume overlap (see discussion below). A potentially more robust approach is to fit different functions to each hypervolume dimension (e.g. Gaussian mixture models; Laughlin et al., 2012); however, this method requires choices to be made about the nature of the fitting function and results in an estimated hypervolume that may not include interactions or covariation between dimensions.

Figure 1.

A robust operational definition of the hypervolume is important for making correct inferences. Here we show an example of three poor definitions (b–d) and one accurate definition (e). (a) Consider a two-dimensional dataset describing a hypothetical ‘Swiss cheese’ hypervolume. (b) A two-dimensional range box fails to capture the rotation and holes in the hypervolume. (c) A principal components analysis has difficulties with non-normal data and does not account for the holes. (d) A convex hull also does not account for holes and is very sensitive to outlying points. (e) The best solution is to take the area enclosed by a contour of a kernel density estimate, which can account for non-normal, rotated and holey data including outliers. Some species distribution modelling approaches (e.g. generalized boosted regression models) can also approximate this shape.

While species distribution models (SDMs) provide multiple algorithms for sampling that can capture many of these nonlinearities, none of these methods permit delineation of the entire hypervolume. This is because SDMs are intended primarily for sampling points from the entire environmental space (i.e. transformed geographic coordinates), which is computationally simpler than delineating boundaries of the environmental space. Below we discuss the sampling versus delineation problem in more depth. Additionally, SDMs may generate environmental hypervolumes with unbounded volumes, because they may predict that all values along an axis greater/smaller than some threshold value are within the hypervolume (Peterson et al., 2011).

Lastly, there are extant metrics and indices for different properties of the hypervolume, including breadth and overlap (Maguire, 1967; Colwell & Futuyma, 1971; Hurlbert, 1978; Abrams, 1980). However, these approaches do not give direct insight into the geometry and topology of the hypervolume that are needed for many current research questions. As a result, more powerful methods to measure and compare hypervolumes in biology are needed.

High dimensions are different (and harder)

The geometry of a high-dimensional hypervolume may differ qualitatively from a low-dimensional hypervolume in ways that have not been adequately considered, because human intuition is best suited for low-dimensional (n = 1–3) systems. High-dimensional biological hypervolumes are probably not smooth continuous shapes but rather rugged or filled with gaps or holes (Colwell & Futuyma, 1971; May & MacArthur, 1972; Hurlbert, 1978; Abrams, 1980; Jackson & Overpeck, 2000). For example, the recognition that most high-dimensional fitness landscapes are ‘holey’ has provided many advances in understanding evolutionary dynamics (Gavrilets, 1997; Salazar-Ciudad & Marin-Riera, 2013). While fundamental niches are often thought to have simpler and less holey geometry than realized niches (Colwell & Rangel, 2009; Araujo & Peterson, 2012), data limitations have precluded robust tests of this idea. Moreover, all distance-based SDMs (e.g. DOMAIN) are limited by the assumption of no holes. We argue that there should be no a priori reason to assume that a hypervolume (or niche) should be normally or uniformly distributed in multiple dimensions.

Many geometric questions have been pursued exclusively with low-dimensionality analyses (Broennimann et al., 2012; Petitpierre et al., 2012). However, some hypervolumes may be better analysed in higher dimensions. Though the number of axes necessary to describe any system can be debated, there is no reason to believe that two or three dimensions are sufficient for most systems.

In community ecology, trait axes are often implicitly used as proxies for niche axes (Westoby et al., 2002). A focus on single traits as metrics of position along a niche axis is currently widespread in trait-based ecology and evolution; the goal of such work is to track assembly processes by analysing the distribution of a single trait (e.g. body size) within and between ecological communities. However, evidence suggests that community assembly is driven by integrated phenotypes rather than by single traits (Bonser, 2006), meaning that a hypervolume approach to community assembly may be more relevant.

Delineating high-dimensional hypervolumes using existing approaches is difficult. For example, quantifying a hypervolume with arbitrarily complex geometry requires that the entire hyperspace must be sampled. In low dimensions, this is simple. The geometry of a one-dimensional hypervolume can be defined by determining if each of g regularly spaced points in an interval is in or out. However, for increasingly higher-dimensional hypervolumes this procedure must be repeated independently in each dimension, requiring gⁿ evaluations. For example, characterizing a hypervolume where g = 500 and n = 3 requires more than 10⁸ evaluations; for n = 10 dimensions, it is more than 10²⁶ evaluations. As a result, exhaustive sampling approaches are too computationally demanding to be practical. Thus, robust methods from species distribution modelling have not been useful for delineating hypervolumes. Developing new methods that can handle high-dimensional datasets will remove the limitations of these extant estimation procedures.
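
The scaling can be checked in one line of R (grid resolution g = 500 taken from the example above):

    # Exhaustive grid sampling requires g^n evaluations (values from the text)
    g <- 500
    g^3    # 1.25e+08  (> 10^8)
    g^10   # ~9.77e+26 (> 10^26)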

Methods

Measuring the hypervolume is now possible

Direct estimation of the n-dimensional hypervolume from a set of observations w can be achieved by a multidimensional kernel density estimation (KDE) procedure. While kernel density approaches for hypervolume delineation have been successfully used in low-dimensional systems (Broennimann et al., 2012; Petitpierre et al., 2012), they had not previously been computationally feasible in high dimensions. Similarly, other fast kernel-based approaches such as support vector machines (Guo et al., 2005) work in transformed high-dimensional spaces but had not been directly applied to geometry questions, as we show here. We now outline the KDE approach and propose a method that can resolve the sampling problem in high dimensions.

We formally define the hypervolume, z, as a set of points within an n-dimensional real-valued continuous space that encloses a set of m observations, w. The problem is to infer z from w. We start by assuming that w is a sample of some distribution Z, of which z is a uniformly random sample. Next, we compute a kernel density estimate of Z, denoted Ẑ, using the observations w and a bandwidth vector h = (h1, … , hn). Lastly, we choose a quantile threshold parameter τ ∈ [0,1]. As a result, z can be defined as the set of points enclosed by a contour of Ẑ containing a fraction 1 − τ of the total probability density. We illustrate this procedure graphically in Fig. 2.

Figure 2.

Illustration of the hypervolume estimation procedure. Consider a one-dimensional set of observations, assumed to be a sample from a probability distribution. Estimate this distribution with a kernel density estimate. Slice (subset) the distribution at different probability levels until at least the chosen fraction 1 − τ of the probability density is enclosed. The estimated hypervolume is then defined by the axis values of this subset (e.g. Fig. 1e; here shown in black for several values of τ; hypervolumes for each probability fraction are colour-coded, with hotter colours including cooler ones). The kernel density estimation and slicing process can naturally be extended to multiple dimensions using the importance-sampling Monte Carlo methods described conceptually in Box 1 and in detail in Box 2.
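
The one-dimensional case of Fig. 2 can be made concrete with a short base-R sketch: estimate a kernel density with density(), find the density level that encloses approximately a fraction 1 − τ of the total probability mass, and report the total width of the axis values above that level. This is a conceptual illustration only (a Gaussian kernel and grid-based slicing are assumptions for brevity), not the box-kernel importance-sampling algorithm described below in Boxes 1 and 2.

    # One-dimensional illustration of the thresholded KDE procedure (Fig. 2).
    set.seed(1)
    w   <- c(rnorm(50, mean = 0), rnorm(30, mean = 4))   # bimodal sample
    tau <- 0.05                                          # quantile threshold

    kde <- density(w)                  # kernel density estimate on a regular grid
    dx  <- diff(kde$x[1:2])            # grid spacing
    p   <- kde$y * dx / sum(kde$y * dx)  # probability mass of each grid cell

    # Keep the highest-density cells until ~(1 - tau) of the mass is enclosed
    ord    <- order(kde$y, decreasing = TRUE)
    keep   <- ord[cumsum(p[ord]) <= (1 - tau)]
    inside <- kde$x[keep]              # axis values forming the (possibly disjoint) hypervolume

    volume <- length(keep) * dx        # total length of the 1-D hypervolume
    volume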

Box 1. A cartoon guide to the hypervolume algorithms

We present an example of hypervolume creation and set operations to develop the reader's conceptual understanding of how the algorithms are implemented. For clarity, the example is drawn in n = 2 dimensions but the algorithms generalize to an arbitrary number of dimensions.

Creation

The algorithm proceeds by (a) computing an n-dimensional kernel density estimate by overlaying hyperbox kernels (gray boxes) around each observation (black dots), (b) sampling from these kernels randomly (gray dots), (c) importance-sampling the space using these boxes and performing range tests on random points using a recursive partitioning tree (rainbow colours are proportional to kernel density), and then (d) applying a threshold that encloses a specified quantile of the total probability mass, retaining only points within the resulting volume and then using combined properties of the kernel and the importance sample to subsample the random points to a uniform point density (purple dots). These uniform-density points, along with the known point density and volume, constitute the full stochastic description of the hypervolume. The key advance of the algorithm is to develop efficient approaches for importance-sampling high-dimensional spaces using box kernels and recursive partitioning trees, as described in depth in Box 2.

Set operations

Uniformly random points in an n-dimensional space are likely to be separated by a characteristic distance. The algorithm uses an n-ball test with this distance on the candidate point relative to the hypervolume's random points. If at least one random point in the hypervolume is within the characteristic distance of the candidate point, then the point satisfies the inclusion test. An example is shown here as a zoom from the full hypervolume intersection. The algorithm uses this inclusion test to determine which random points in the first hypervolume are and are not enclosed within the second hypervolume, and vice versa. The intersection is inferred to include the points that satisfy both inclusion tests. The unique component of the first hypervolume is inferred to include the points that do not satisfy the first inclusion test, and vice versa for the unique component of the second hypervolume. The union is inferred to include the unique components of both hypervolumes and the intersection (as calculated above). In all cases the resulting random points are resampled to uniform density and used to infer a new point density and volume. (e) An example is shown of overlap between two hypervolumes. Each hypervolume's random uniformly sampled points are coloured as green or purple, and a ball of the appropriate radius is drawn around each point. Points that have overlapping balls (coloured blue) are inferred to constitute the intersection. (f) A zoom of the boxed region in (e).


Box 2. Mathematical description of the hypervolume algorithms

Hypervolume construction

For a hypervolume z and measurements w, we perform the kernel density estimation and volume measurement using a Monte Carlo importance sampling approach. Suppose we are given a set of m points w = {w1, … , wm} ∈ ℜn drawn from an unknown probability distribution Z, a kernel k, a threshold τ, and an r-fold replication parameter corresponding to the number of Monte Carlo samples. We wish to find the volume of z and return a set of uniformly random points Π with point density π from within z.

The idea is to choose a space P (such that Z ⊂ P ⊂ ℜn) and randomly sample points Π = {p1, …, pr} ⊂ P. At each point pq, evaluate the kernel density estimate

Ẑ(pq) = (1 / (mα)) Σj=1..m k((pq − wj) / h)

where vector division indicates division in each dimension independently and α is a normalization constant. Now flag the set of indices {q*} for the q that satisfy

Ẑ(pq) ≥ c*,  where c* = max{ c ≥ 0 : Σ{q : Ẑ(pq) ≥ c} Ẑ(pq) ≥ (1 − τ) Σq=1..r Ẑ(pq) }

If the volume of z is |z| and the volume of P is |P|, then

|z| / |P| ≈ r* / r

That is, the fraction of sampled points that are flagged (r*, the number of indices in {q*}) converges to the ratio of the true volume to the volume of the sampled space.

We assume an axis-aligned box kernel,

k(u) = 1 if |u · êi| ≤ 1/2 for every i ∈ {1, … , n}, and k(u) = 0 otherwise

where êi is the ith Euclidean unit vector. The normalization constant α is then

α = ∏i=1..n hi

The kernel bandwidth vector h = (h1, … , hn) can be specified by the investigator. Alternatively, it can be chosen quasi-optimally using a Silverman bandwidth estimator for one-dimensional normal data as

hi = 1.06 σi m^(−1/5)

where σi is the standard deviation of the points in w in the ith dimension:

σi = √( (1 / (m − 1)) Σj=1..m (wj · êi − μi)² ),  where μi is the mean of wj · êi over all j

We choose this kernel representation because (1) it reaches zero in a finite distance and (2) has constant non-zero value, enabling the evaluation of the kernel density estimate to be reduced to a counting problem.
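
Because the box kernel is an indicator function, evaluating Ẑ at a query point reduces to counting how many observations lie within half a bandwidth of it in every dimension. A minimal base-R sketch (the half-width convention and all names are illustrative assumptions, not the package's implementation):

    # Box-kernel density evaluation as a counting problem.
    # w: m x n matrix of observations; h: bandwidth vector; p: query point.
    count_overlapping_kernels <- function(p, w, h) {
      diffs  <- abs(sweep(w, 2, p, "-"))            # |p_i - w_ji| for each obs/dim
      inside <- sweep(diffs, 2, h / 2, "<=")        # within the kernel box?
      sum(apply(inside, 1, all))                    # number of boxes containing p
    }

    # Example: 100 observations in 3 dimensions
    set.seed(1)
    w <- matrix(rnorm(300), ncol = 3)
    count_overlapping_kernels(p = c(0, 0, 0), w = w, h = rep(0.5, 3))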

In practice, random sampling of P is impractical because most regions of a high-dimensional space are likely to be empty. Instead, we proceed by importance sampling. Because of the choice of k, we know that Ẑ has non-zero probability density only within regions that are within an axis-aligned box (with widths given by hi) surrounding each point wj, i.e.

Ẑ(p) > 0 if and only if there exists j ∈ {1, … , m} such that |(p − wj) · êi| ≤ hi / 2 for all i ∈ {1, … , n}

We therefore generate a uniformly random set of points drawn only from axis-aligned boxes centred around each wj, each of which has point density

r / ∏i=1..n hi

This process yields

pj,k = wj + ( U(−h1/2, h1/2), … , U(−hn/2, hn/2) ),  for j = 1, … , m and k = 1, … , r

where U(a, b) represents a single draw from the uniform distribution scaled to the interval (a, b).

However, each axis-aligned box may intersect multiple other axis-aligned boxes, so Π is not uniformly random. Our next step is therefore to determine which random points pq fall within regions of P with higher probability density and to correct for their oversampling. Because of the choice of k, we know that each pq has a kernel density estimate Ẑ(pq) proportional to the number of data points whose kernels (i.e. axis-aligned boxes) contain pq. We build an n-dimensional recursive partitioning tree T from the data points w. Then for each pq, we perform a range-box query on T, where the range is chosen to be the box of widths (h1, … , hn) centred on pq, and count the number of data points returned, which is proportional to Ẑ(pq).

Each point is now over-sampled by a factor proportional to its kernel density estimate Ẑ(pq) (specifically, by the number of kernel boxes that contain it, mαẐ(pq)); summing the reciprocals of these factors over the retained points yields an effective number of sampled points given by

reff = Σq 1 / (mαẐ(pq))

The total volume of z is therefore the effective number of points divided by the original per-box point density, or

|z| = reff / (r / ∏i=1..n hi) = (reff / r) ∏i=1..n hi

Finally, we obtain a uniformly random sample of points from z, Π*, by sampling σ points from Π, weighting each point by the inverse of its oversampling factor, 1 / (mαẐ(pq)), and retaining only the ρ* unique points Π*, where σ = π|z| is the number of points expected at the requested point density π. We then calculate the final point density π* as π* = ρ*/|z|.
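
The construction above can be sketched compactly in R. The code below is a simplified, brute-force illustration of the importance-sampling logic only (box kernels of full width h[i] in each dimension, r samples per observation, no recursive partitioning tree and no quantile threshold); it is not the implementation used in the 'hypervolume' package, and all names are illustrative.

    # Simplified sketch of the importance-sampling volume estimate.
    estimate_volume <- function(w, h, r = 100) {
      m <- nrow(w)
      n <- ncol(w)
      # (1) r uniform random points inside the kernel box around each observation
      offsets <- sweep(matrix(runif(m * r * n, -0.5, 0.5), ncol = n), 2, h, "*")
      pts     <- w[rep(seq_len(m), each = r), , drop = FALSE] + offsets
      # (2) oversampling factor: number of observation boxes containing each point
      count_boxes <- function(p) {
        sum(apply(sweep(abs(sweep(w, 2, p, "-")), 2, h / 2, "<="), 1, all))
      }
      cts <- apply(pts, 1, count_boxes)
      # (3) effective number of points after correcting for overlapping kernels
      r_eff <- sum(1 / cts)
      # (4) volume = effective number of points / per-box point density (r / prod(h))
      r_eff * prod(h) / r
    }

    # Example: 200 points from the unit square; the union of kernel boxes has
    # volume close to 1, so the estimate should be near that value.
    set.seed(1)
    w <- matrix(runif(400), ncol = 2)
    estimate_volume(w, h = c(0.1, 0.1))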

Inclusion test

Consider a set of n-dimensional points Π with point density π. We wish to determine whether a candidate point x is within the volume sampled by Π. The characteristic distance between uniformly random points at density π is π^(−1/n); this means that x is likely to be within the sampled volume if its distance to the nearest point in Π is less than π^(−1/n). This is implemented as a ball test using an n-dimensional recursive partitioning tree built from the points in Π.
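
A minimal version of this inclusion test in base R (brute-force nearest-neighbour distance in place of the tree; names and the example densities are illustrative):

    # Ball (inclusion) test: is candidate point x inside the volume described
    # by the uniformly random points rand_pts (one row each) at density dens?
    inclusion_test <- function(x, rand_pts, dens) {
      n         <- ncol(rand_pts)
      char_dist <- dens^(-1 / n)                            # characteristic spacing
      d         <- sqrt(rowSums(sweep(rand_pts, 2, x, "-")^2))
      min(d) <= char_dist                                   # within one ball radius?
    }

    # Example: 1000 points filling the unit square (density 1000 per unit area)
    set.seed(1)
    rand_pts <- matrix(runif(2000), ncol = 2)
    inclusion_test(c(0.5, 0.5), rand_pts, dens = 1000)      # expected TRUE
    inclusion_test(c(3.0, 3.0), rand_pts, dens = 1000)      # expected FALSE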

Hypervolume set operations (intersection, union, unique subset)

Consider two hypervolumes za and zb described by volumes |za| and |zb|, uniformly random point samples Πa and Πb, and point densities πa and πb. We wish to find zc = za ∩ zb as described by |zc|, Πc and πc. First, both za and zb are uniformly randomly subsampled to a common point density min(πa, πb, ρ), where ρ is a user-specified maximum value (using high point densities can be computationally costly but not necessarily significantly more accurate), yielding ma and mb random points respectively in each hypervolume. Then we use the inclusion test (described above) to find the set of points contained in both za and zb, identifying the rab random points in za that are enclosed by zb (Πab) and the rba random points in zb that are enclosed by za (Πba). We calculate the final volume conservatively as the number of points divided by the point density,

|zc| = min(rab, rba) / min(πa, πb, ρ)

The uniformly random sample of points in zc is then Πc = Πab ∪ Πba and the final point density is πc = (rab + rba) / |zc|.

We also wish to characterize the union and unique components of these hypervolumes, zunion, zun a and zun b. We apply the intersection algorithm (described above) to subsample Πa and Πb to the same point density, to find the intersection hypervolume zint, and also to flag the points found in one hypervolume but not the other, Πa\b (points of za not enclosed by zb) and Πb\a (points of zb not enclosed by za). We then determine the final volume as |zunion| = |za| + |zb| − |zint|. The random sample of points is Πunion = Πa\b ∪ Πb\a ∪ Πint and the final point density is the number of points in Πunion divided by |zunion|.

We take a similar approach using the flagged points in one hypervolume and not the other to determine the unique components of each hypervolume.
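
As an illustration of the mutual inclusion tests, the following self-contained sketch estimates an intersection volume from two random point clouds (brute-force distances; subsampling to a common density is omitted for brevity; names are illustrative and do not correspond to the package's interface):

    # Intersection of two hypervolumes described by random point clouds
    # pts_a and pts_b (one point per row) with point densities dens_a, dens_b.
    intersect_volume <- function(pts_a, dens_a, pts_b, dens_b) {
      n    <- ncol(pts_a)
      dens <- min(dens_a, dens_b)                 # common (conservative) density
      ball <- dens^(-1 / n)                       # characteristic distance
      near <- function(x, pts) {                  # is x within one ball of pts?
        min(sqrt(rowSums(sweep(pts, 2, x, "-")^2))) <= ball
      }
      in_b <- apply(pts_a, 1, near, pts = pts_b)  # points of za enclosed by zb
      in_a <- apply(pts_b, 1, near, pts = pts_a)  # points of zb enclosed by za
      list(volume = min(sum(in_b), sum(in_a)) / dens,      # conservative estimate
           points = rbind(pts_a[in_b, , drop = FALSE],
                          pts_b[in_a, , drop = FALSE]))
    }

    # Example: two unit squares offset by 0.5 in x; true intersection area is 0.5
    set.seed(1)
    A <- cbind(runif(2000), runif(2000))          # ~2000 points per unit area
    B <- cbind(runif(2000) + 0.5, runif(2000))
    intersect_volume(A, 2000, B, 2000)$volume     # roughly 0.5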

We describe methods to perform this kernel density estimation of a hypervolume z for both large n and m. The computational problems associated with scaling up this method can be solved with importance-sampling Monte Carlo integration. The resulting algorithms can determine the shape, volume, intersection (overlap), union and set difference of hypervolumes. They can also perform sampling (i.e. species distribution modelling) via inclusion tests in order to determine if a given n-dimensional point is enclosed within a hypervolume or not (Fig. 3). Together, these tools make it possible to directly address our three major questions and move beyond metrics that provide incomplete descriptions of hypervolumes in high dimensions. We describe the algorithms conceptually in Box 1 and in full mathematical depth in Box 2. The software is freely available as an R package (‘hypervolume’), with full documentation and several example analyses, including those presented in this paper.

Figure 3.

Hypervolume geometry operations. From two sets of observations (red and blue) (a), hypervolumes can be created (b), enabling measurement of shape and volume. (c) The total volume occupied by two hypervolumes can be determined as the union of both hypervolumes. (d) Overlap can be measured after finding the intersection between two hypervolumes. (e) The components of each hypervolume that are unique can be identified by set difference operations. (f) An inclusion test can determine if a given point is found within a hypervolume, enabling sampling applications such as species distribution modelling.

Usage guidelines and caveats

To ensure that hypervolume analyses are replicable, we recommend reporting the chosen bandwidth math formula (or the algorithm used to choose it) and the quantile τ obtained by the algorithm (which may differ slightly from the specified quantile due to some discrete approximations in the algorithm; see Box 2).

There are several issues that should be considered before an investigator applies this hypervolume method. First, our approach is best suited for continuous variables. Categorical variables are problematic because a volume is not well defined when the same distance function cannot be defined for all axes. If it is necessary to use categorical variables, the data can first be ordinated into fewer dimensions using other approaches (e.g. the Gower distance; Gower, 1971). We acknowledge that categorical variables are often biologically relevant, and wish to highlight this issue as a major unavoidable limitation of the hypervolume concept that also applies to the other methods discussed.

Missing data may restrict the dimensionality of the analysis. Any observation with at least one missing variable cannot be used for hypervolume estimation because an n-dimensional object is not well defined in fewer than n dimensions. In these situations it will be necessary to remove observations with missing values, reduce the dimensionality of the analysis or fill in missing data values via some other approach, for example multiple imputation (Rubin, 1996) or hierarchical probabilistic matrix factorization (Shan et al., 2012).

Choosing comparable units for each axis is critical. Because volume scales with the product of the units of all dimensions, it can be difficult to ascribe a change in volume to one axis if units are not comparable. Similarly, non-comparable dimensions mean that results would not be invariant to changes of units or scale (e.g. redefining an axis in millimetres instead of metres). Thus the observational data must be normalized (e.g. using z-scores or log-transformation) before the hypervolume method can be applied. The units of the output hypervolume will therefore be the product of the axis units [e.g. in powers of standard deviations (SDs) or logarithmic units].
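
For example, z-scoring each axis with the built-in scale() function (the object name traits is hypothetical):

    # Normalize each axis to zero mean and unit standard deviation (z-scores),
    # so that hypervolume units become comparable powers of SD.
    traits_z <- scale(traits)
    # For strictly positive, skewed variables, log-transform before scaling:
    # traits_z <- scale(log10(traits))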

It is important to clearly identify appropriate and biologically relevant axes. For example, it may be unclear what variables should and should not be included in an analysis, and how sensitive a given result may be to this choice (Petchey & Gaston, 2006; Bernhardt-Römermann et al., 2008). Inclusion of dimensions with limited or highly correlated variation will produce degenerate results, such that the hypervolume is effectively constrained to a hyperplane (Van Valen, 1974). These problems can be identified by high variance in the SDs of each dimension or by high Pearson correlation coefficients between each pair of dimensions. Additionally, choice of the number of dimensions to include may also influence results. For example, hypervolumes that appear to overlap in low dimensions may not overlap if more dimensions are added, and conversely, with the addition of extra redundant dimensions, estimates of overlap may be falsely inflated. Nonetheless, we do expect that hypervolume metrics should be comparable across datasets with identical axes.
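
The screening described above (per-dimension spread and pairwise Pearson correlations) is straightforward to compute; a minimal sketch with simulated data (the 0.9 cut-off is an arbitrary illustration):

    # Screen candidate axes for degenerate (redundant or low-variation) dimensions.
    set.seed(1)
    traits <- matrix(rnorm(500), ncol = 5)                 # 100 observations, 5 axes
    traits[, 5] <- traits[, 4] + rnorm(100, sd = 0.01)     # a nearly redundant axis
    apply(traits, 2, sd)                                   # spread of each dimension
    cc <- cor(traits, method = "pearson")                  # pairwise correlations
    which(abs(cc) > 0.9 & upper.tri(cc), arr.ind = TRUE)   # flags the redundant pair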

Sampling issues also deserve careful consideration. The hypervolume approach assumes that the input set of observations is an unbiased sample of the actual distribution. Meeting this assumption may be difficult, depending on the methodology used for data collection. Unavoidable spatial and taxonomic biases can conflate the occurrence and observation processes in real-world datasets. For example, realized climate niches of species may be oversampled in easily accessed regions and undersampled in regions that are more difficult to access. This will lead to incorrect inference of holes. While these biases are common to all niche modelling algorithms (Phillips et al., 2009), the kernel density approach used in our method may be more prone to overfitting the data.

Our method can be used with any number of observations regardless of dimensionality. However, analyses with few observations (m/n < 10, as a rough guideline) will be very sensitive to the choice of bandwidth and are not recommended. For example, the volume inferred for a single point is necessarily equal to the product of the kernel bandwidths along each axis and is not biologically relevant. In general, choosing a smaller bandwidth (or a larger threshold) will lead to a smaller hypervolume, with each observation appearing disjoint from others, while choosing a larger bandwidth (or a smaller threshold) for the same dataset will lead to a larger volume with more observations appearing to be connected. The investigator must thus carefully consider and potentially standardize the choice of bandwidth and threshold for the hypervolume construction process. A bandwidth can be estimated from the data using a quasi-optimal approach (e.g. a Silverman estimator; Silverman, 1992) that pads each observation by an amount that depends on the number of available observations and the total range of variation between observations (reflecting an increasing level of confidence that the observations have sampled the extreme boundaries of the hypervolume).
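
A per-dimension Silverman rule-of-thumb bandwidth can be computed directly; this sketch uses the textbook constant 1.06 for one-dimensional normal data, which may differ from the estimator used by the package:

    # Silverman rule-of-thumb bandwidth for each dimension of a data matrix x.
    silverman_bandwidth <- function(x) {
      m <- nrow(x)
      1.06 * apply(x, 2, sd) * m^(-1/5)
    }

    # Example: bandwidths shrink (less padding) as the number of observations grows
    set.seed(1)
    silverman_bandwidth(matrix(rnorm(200), ncol = 2))      # m = 100
    silverman_bandwidth(matrix(rnorm(2000), ncol = 2))     # m = 1000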

Results

Application to simulated data

Dataset choice

We next compared our approach with other extant methods using simulated data of a variety of complexities, dimensionalities (n), and numbers of unique observations (m). We constructed two test datasets of varying complexity. The first dataset, TC, is defined by m samples from a single n-dimensional hypercube (Fig. 4a):

TC = {x1, … , xm},  where xj ~ H((0, … , 0)) for j = 1, … , m

where H is the hypercube function

H(c) is the uniform distribution over the axis-aligned unit hypercube centred at c, i.e. over [c1 − 1/2, c1 + 1/2] × … × [cn − 1/2, cn + 1/2]
Figure 4.

Hypervolume geometric analysis of simulated data. (a) Data sampled from a hypothetical single hypercube (TC) dataset. (b) Data sampled from a hypothetical double hypercube (TDC) dataset, with each hypercube offset by two units from the origin in all axes. In (a) and (b), data are shown at the same scale in two dimensions for clarity but were simulated in up to eight dimensions for the analyses. (c) Comparison of volumes estimated by different methods (hypervolume, minimum volume ellipsoid, range box, convex hull) for the TC dataset. Each boxplot represents the distribution of volumes inferred from 10 independent samples of m points from an n-dimensional dataset. Boxes that are closer to the black line (the true volume) indicate better methods. The y-axis is log-transformed and normalized by n to reflect the geometric scaling of volume with dimension. (d) Comparison of volumes for the TDC dataset.

The second dataset, TDC, is defined by m samples from double n-dimensional hypercubes, each offset from the origin by two units (Fig. 4b):

TDC = {x1, … , xm},  where xj ~ (1/2) H((2, … , 2)) + (1/2) H((−2, … , −2)) for j = 1, … , m

In the first example, TC (of volume 1) is intended to represent a simple hypervolume that other methods should easily estimate, while in the second example, TDC (of total volume 2) is intended to represent a complex disjoint hypervolume that may challenge extant methods. For each example, we generated 10 independently sampled test datasets for each parameter combination of m = 10, 100 and 1000 observations and n = 2, 4, 6 and 8 dimensions.
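
The two test datasets can be simulated in a few lines of R. In this sketch the TDC hypercubes are centred at +2 and −2 on every axis, which is our reading of the description above rather than the exact specification used for Fig. 4:

    # m samples from a single unit hypercube (TC) or from a pair of unit
    # hypercubes offset by two units from the origin in all axes (TDC).
    sample_TC <- function(m, n) {
      matrix(runif(m * n, -0.5, 0.5), ncol = n)            # unit cube at the origin
    }
    sample_TDC <- function(m, n) {
      centre <- sample(c(-2, 2), m, replace = TRUE)        # cube centre, per point
      sweep(matrix(runif(m * n, -0.5, 0.5), ncol = n), 1, centre, "+")
    }

    # Example: 1000 points in 4 dimensions; true volumes are 1 (TC) and 2 (TDC)
    x_tc  <- sample_TC(1000, 4)
    x_tdc <- sample_TDC(1000, 4)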

Geometric application

We estimated the volume of each dataset using our method and a range of alternatives: range boxes, minimum volume ellipsoids (similar to a principal components analysis) and a convex hull, and compared results with the known volume of each dataset. Hypervolumes were inferred using a Silverman bandwidth estimator and a quantile threshold of 0.5.

We found that, for the TC dataset, the range box, convex hull and hypervolume methods consistently performed well, but the minimum volume ellipsoid method consistently overestimated volumes (Fig. 4c). This result indicates that the hypervolume method performs well in comparison with extant volume estimation methods for simple datasets. However, for the disjoint TDC dataset we found that the minimum volume ellipsoid and range box consistently overestimated volumes. The convex hull performed best and the hypervolume method performed second best when the sampling effort was high (large m) (Fig. 4d). Nevertheless, unlike the convex hull and minimum volume ellipsoid methods, the results of our hypervolume method were consistent regardless of dimension. This result indicates that our approach provides a viable and consistent tool for estimating the volume of complex hypervolumes. The slight overestimation of volume by our method is not necessarily a problem; it arises because (as previously discussed) the estimated hypervolume includes padding around each observation whose extent is set by the kernel bandwidth specified by the researcher.

Sampling application

We next tested the ability of multiple methods to predict presences and absences, using the same test datasets and combinations of dimensions and observations as described above. We compared the hypervolume method with two common and high-performing species distribution modelling algorithms: GLM and GBM (Wisz et al., 2008). We did not evaluate MaxEnt because it is formally equivalent to a GLM (Renner & Warton, 2013). For both GBM and GLM we used fixed thresholds of 0.5 to convert predictions to binary presence/absences, mirroring the threshold used for the hypervolume method. Although more robust approaches are available for determining threshold values (Peterson et al., 2011), this simple approach facilitates comparisons among all of the methods under equally challenging conditions.

The hypervolume method works using only presence data. For the GLM/GBM approaches, we built models using pseudo-absences obtained by sampling from a hyperspace consisting of the region (−3, 3)ⁿ, i.e. the hypercube spanning (−3, 3) in each axis. We then generated a set of n-dimensional test points, half of which were known to be in the analytically defined hypervolume and half of which were known to be outside it. Next, we used each model to make predictions for these points, and then computed two metrics to compare the performance of each model: sensitivity, which measures the true positive rate for predictions, and specificity, which measures the true negative rate for predictions. Better-performing methods have sensitivities and specificities closer to one.
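
Both statistics reduce to simple counts over binary predictions; a minimal sketch (vector names hypothetical):

    # Sensitivity (true positive rate) and specificity (true negative rate)
    # from binary predictions and known binary truth (0/1 vectors).
    sensitivity <- function(pred, truth) sum(pred == 1 & truth == 1) / sum(truth == 1)
    specificity <- function(pred, truth) sum(pred == 0 & truth == 0) / sum(truth == 0)

    # Example
    truth <- c(1, 1, 1, 0, 0, 0)
    pred  <- c(1, 1, 0, 0, 0, 1)
    sensitivity(pred, truth)    # 0.67: two of three presences predicted correctly
    specificity(pred, truth)    # 0.67: two of three absences predicted correctly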

We found that for the simpler TC dataset, the hypervolume and GBM methods had equivalent perfect sensitivity, with the GLM method showing lower sensitivity for large values of m (Fig. 5a). The hypervolume and GBM methods showed higher specificities than the GLM method, with the hypervolume method performing best for smaller values of m (Fig. 5b). For the more challenging TDC dataset, the hypervolume method had similar sensitivity to the GBM model regardless of m (Fig. 5c) but clearly outperformed the GBM model in high dimensions. Additionally, the hypervolume method had consistently higher specificity than the other approaches regardless of m or n (Fig. 5d).

Figure 5.

Hypervolume sampling analysis of simulated data, reflecting a species distribution modelling application. We assessed the ability of multiple methods – hypervolume, generalized linear model (GLM) and generalized boosted regression model (GBM) – to correctly predict sampled points as being in or out of the single (TC; Fig. 4a) or double (TDC; Fig. 4b) hypercubes. Each boxplot represents the prediction statistic calculated from 10 independent samples of m points from each n-dimensional dataset. (a) Sensitivity (true positive rate) and (b) specificity (true negative rate) statistics for the single hypercube dataset. (c) Sensitivity and (d) specificity statistics for the double hypercube dataset. For all panels, boxes that are closer to one indicate better methods.

Together, these results indicate that the hypervolume method not only compares favourably with other species distribution modelling methods for simple geometries, but can outperform other methods when the dataset being measured has a complex geometry (e.g. high specificity). While these preliminary results are limited in their scope, they do suggest that the hypervolume method also should be considered as a viable candidate for predicting species distributions.

Application to real-world data

We next show two applications of our approach using real data. Code and data to duplicate these analyses are included as demonstrations within the R package.

First, we present a demonstration analysis of the nine-dimensional morphological hypervolumes of five species of Darwin's finches (Box 3). A prominent hypothesis for these birds, stemming from Darwin's original observations, is that species co-occurring on the same islands would have experienced strong resource competition and character displacement, and therefore should have evolved to occupy non-overlapping regions of morphospace (Brown & Wilson, 1956). We tested this hypothesis on Isabela Island with data from the Snodgrass–Heller expedition (Snodgrass & Heller, 1904). We used log10-transformed nine-dimensional data to construct hypervolumes for the five species with at least 10 complete observations. Hypervolumes were constructed using a Silverman bandwidth estimator and a quantile threshold of 0%. We found that of the 10 possible pairwise overlaps, only two species pairs had non-zero fractional overlaps: less than 1% between Geospiza fuliginosa parvula and Geospiza prosthemelas prosthemelas, and 11% between Geospiza fortis fortis and Geospiza fortis platyrhyncha. Thus, the original hypothesis of character displacement cannot be rejected except perhaps in two cases of weak overlap, of which one case applied to two very closely related subspecies. In contrast, a single-trait analysis would reject the hypothesis, leading to very different biological inferences. Note also that this hypothesis could be tested at the community level, comparing the union of all morphological hypervolumes of all species on each island.

Box 3. Morphological hypervolumes of five species of Darwin's finches co-occurring on Isabela Island

Estimated nine-dimensional hypervolumes for the five species listed in the bottom left inset are shown as pair plots. Variables have original units of mm but have been log10-transformed: BodyL, body length; WingL, wing length; TailL, tail length; BeakW, beak width; BeakH, beak height; LBeakL, lower beak length; UBeakL, upper beak length; N-UBkL, nostril–upper beak length; TarsusL, tarsus length. The coloured points for each species reflect the stochastic description of each hypervolume, i.e. random points sampled from the inferred hypervolume rather than original observations. The inset shows all possible pairwise overlaps between species pairs (2 × shared volume/summed volume). Only two species pairs had non-zero overlap despite apparent overlap in each pair plot. These analyses can be replicated by running the demo ‘finch’ within the R package.


Second, we present a four-dimensional analysis of the climate hypervolumes of two closely related oak species that are common in the eastern United States, Quercus alba and Quercus rubra (Box 4). We tested the hypothesis that the species occupying a larger region of climate space would also have a larger geographic range. We obtained occurrence data for each species from the BIEN database (http://bien.nceas.ucsb.edu/bien/) and transformed these to climate values using four main WorldClim layers: BIO1, mean annual temperature; BIO4, temperature seasonality; BIO12, mean annual precipitation; and BIO15, precipitation seasonality (Hijmans et al., 2005). Each climate layer was transformed relative to its global mean and SD before analysis; hypervolumes were then constructed using a Silverman bandwidth estimator and a 0% quantile threshold. We found that Q. alba had a smaller volume [0.13 standard deviations (SD⁴)] compared with Q. rubra (0.15 SD⁴). We then used the hypervolume method for a sampling application, projecting each of these climate hypervolumes into geographic space. We found that Q. alba also had a smaller range (9097 vs. 11,343 10-arcmin pixels), supporting the original hypothesis and consistent with expert-drawn range maps (Little, 1977). Returning to the climate hyperspace, we then identified the unique (and sometimes disjoint) regions of the Q. rubra climate hypervolume that contributed to this larger volume. In fact, Q. rubra contributed more than twice as much unique volume as Q. alba (0.04 vs 0.02 SD⁴) to the combined hypervolumes of both species.

Box 4. Climate hypervolumes of two oak species

Occurrence data (shown in the inset) were mapped to climate data and used to infer hypervolumes. Hypervolumes are shown as pair plots (Quercus rubra, red; Quercus alba, blue; four-dimensional climate-space overlap, purple). The coloured points for each species reflect the stochastic description of each hypervolume, i.e. random points sampled from the inferred hypervolume rather than original observations. Quercus rubra had a larger volume than Q. alba. We also projected these hypervolumes into geographic space using the inclusion test for sampling (geographic ranges are shown in inset using the same colour scheme; purple indicates geographic range overlap) and showed that Q. rubra also had a larger range. These analyses can be replicated by running the demo ‘quercus’ within the R package.


Discussion

The future of the hypervolume

Our approach can unify several previously separate lines of ecological inquiry through direct measurement of hypervolumes. Our demonstration analyses provide preliminary evidence that this new approach can perform as well as several existing approaches, and can also enable new types of analyses.

The application of this method to species distribution modelling, while exciting, is preliminary. An advantage of our hypervolume approach is that it is conceptually simple, does not require absence or pseudo-absence data and enables hypervolume geometry to be simultaneously measured. Moreover, the method performed well in our initial tests. We therefore suggest that the method warrants further comparison with other approaches (e.g. Elith et al., 2006).

The development of our method also highlights several issues that are relevant to hypervolume-related inquiry. First, there has so far been limited understanding of the properties of real high-dimensional biological hypervolumes. The generality and prevalence of holes in genotypic, phenotypic and climatic hypervolumes is under-studied (Austin et al., 1990; Jackson & Overpeck, 2000; Soberón & Nakamura, 2009). Our method can detect holes by calculating the difference between hypervolumes constructed with larger and smaller quantile thresholds. Additionally, little is known about how hypervolumes change over time, for example for climate niche evolution (Peterson, 1999; Jackson & Overpeck, 2000). With appropriate palaeo-data, our method can detect changes in hypervolume and overlap across time periods. Finally, our method is relevant to studies of community assembly, where multivariable analyses of trait hypervolume may produce fundamentally different (and more realistic) insights than single-variable analyses. Such approaches will require robust null modelling approaches (Lessard et al., 2012) to compare observed hypervolume geometry and overlap with expectations under different null hypotheses. The low runtime of our methods now makes this application computationally tractable.

Data are rapidly becoming available to extend hypervolume analyses to higher dimensions. For example, global climate layers (e.g. WorldClim; Hijmans et al., 2005) provide data for measuring climate hypervolumes, while trait databases such as TRY for plants (Kattge et al., 2011) and MammalDB for birds, mammals and reptiles (Baldridge et al., 2012) provide data for measuring functional hypervolumes, both along tens of axes. While obtaining data in other contexts can still be difficult, our tools will permit high-dimensional comparative analyses.

In sum, hypervolumes are relevant to individuals, genotypes, communities, biomes and clades, and can be constructed from a wide range of variables including climate, edaphic variables, functional traits and morphology. Although hypervolumes are a central though controversial concept in biology, they have not been adequately measured in enough real systems in high dimensions. We have provided the computational tools to make this concept operational, usable and tractable.

Acknowledgements

We thank Robert Colwell, Miguel Araújo, David Nogués-Bravo, John Harte, Simon Stump and Stuart Evans for their thoughts. B.B. was supported by an NSF predoctoral fellowship and an NSF Nordic Research Opportunity award. C.L. was supported by the University of Maine's Sustainability Solutions Initiative. C.V. was supported by a Marie Curie International Outgoing Fellowship within the 7th European Community Framework Program. B.J.E. was funded by a National Science Foundation Macrosystems award.

Benjamin Blonder is a PhD student. He is interested in science education and the response of communities and ecosystems to changing climate.
