1. A common procedure in the regression analysis of interspecies data is to first test the independent and dependent variables X and Y for phylogenetic signal, and then use the presence of signal in one or both traits to justify regression analysis using phylogenetic methods such as independent contrasts or phylogenetic generalized least squares.
2. This is incorrect, because phylogenetic regression assumes that the residual error in the regression model (not in the original traits) is distributed according to a multivariate normal distribution with variances and covariances proportional to the historical relations of the species in the sample.
3. Here, I examine the consequences of justifying and applying the phylogenetic regression incorrectly. I find that when used improperly the phylogenetic regression can have poor statistical performance, even under some circumstances in which the type I error rate of the method is not inflated over its nominal level.
4. I also find, however, that when tests of phylogenetic signal in phylogenetic regression are applied properly, and in particular when phylogenetic signal in the residual error is simultaneously estimated with the regression parameters, the phylogenetic regression outperforms equivalent non-phylogenetic procedures.
Typical linear regression analysis is of the form: y = Xβ + ɛ, with the ordinary least squares (OLS) solution: β̂ = (XᵀX)⁻¹Xᵀy, in which y is an n × 1 vector (for n species) containing values for the dependent variable, Y; X is an n × (m + 1) matrix containing 1·0s in the first column and the m independent (explanatory) variables of the model in columns two through m + 1; and β̂ is a vector containing the parameter estimates (including the intercept) of the fitted univariate or multivariate linear regression model (Rencher & Schaalje 2008). ɛ is an n × 1 vector containing the residual error in the model, and under OLS it is assumed that ɛ is multivariate normally distributed with a variance–covariance matrix given by σ²I. Here, I is the identity matrix (an n × n matrix containing 1·0s on the diagonal and zeroes elsewhere), and σ² is the residual variance of the model (i.e. the variability in Y not explained by the regressors).
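The OLS estimator above can be sketched in a few lines of numpy (an illustrative sketch, not code from the original study; the toy data are hypothetical):

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares: beta_hat = (X'X)^{-1} X'y.

    X is n x (m + 1) with a leading column of 1.0s; y has length n.
    Solving the normal equations directly is more stable than forming
    the explicit inverse of X'X."""
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    residuals = y - X @ beta_hat
    return beta_hat, residuals

# Noise-free toy data with y = 2 + 3x, so the fit recovers the
# intercept and slope exactly
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x
beta_hat, residuals = ols_fit(X, y)  # beta_hat ~ [2.0, 3.0]
```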
If the residuals in ɛ are not distributed according to σ²I, but instead according to σ²C, in which C is known and is not proportional to I (i.e. C ≠ kI for k ∈ ℝ and k > 0·0), then fitting the regression model becomes a generalized (instead of ordinary) least squares problem (Rohlf 2001; Kariya & Kurata 2004; Rencher & Schaalje 2008). For non-phylogenetic data, C ≠ kI might be true, for example, if the sampling variance of Y is uneven across data points (i.e. if our data for Y have been collected with varying amounts of error). In this situation, C would be a diagonal matrix containing the n sampling variances of each of the observations for Y. Here, the generalized least squares regression would be the same as a weighted regression in which the weights are proportional to the inverse of the sampling variances for each observation of Y. In the phylogenetic case, the problem is not usually that the diagonal of C is uneven – all extant taxa in a phylogeny are temporally equidistant from the root of the tree (by definition) so they are frequently assumed to have equivalent variances (given that they are all extant and have been measured with comparable accuracy; but see Ives, Midford & Garland 2007). Rather, in the phylogenetic case, it is that the off-diagonals of C are non-zero due to the correlated histories of related species (Butler, Schoener & Losos 2000; Garland & Ives 2000).
To solve this problem, we can find the minimum variance regression slope and intercept using the generalized least squares estimating equation (or Gauss–Markov estimator; Kariya & Kurata 2004): β̂ = (XᵀC⁻¹X)⁻¹XᵀC⁻¹y.
This approach to the regression of interspecies data was first suggested by Grafen (1989), and has since been shown to be exactly equivalent to regression estimated using the contrasts method of Felsenstein (1985; Garland & Ives 2000; Rohlf 2001). The generalized least squares estimating equation is similar to the OLS estimator (given above), except that now we have down-weighted each observation for Y (and corresponding row of X) depending on the correlation of its residual error with the other observations in our set.
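A minimal numpy sketch of the Gauss–Markov estimator, assuming C is known and positive definite (illustrative only; the data below are made up). With C = I it reduces to OLS, which makes a convenient sanity check:

```python
import numpy as np

def gls_fit(X, y, C):
    """Gauss-Markov (generalized least squares) estimator:
    beta_hat = (X' C^{-1} X)^{-1} X' C^{-1} y,
    where C is the assumed covariance structure of the residuals."""
    Cinv = np.linalg.inv(C)
    return np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)

# With C = I the down-weighting disappears and GLS equals OLS
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([2.1, 4.9, 8.2, 10.8])
b_gls = gls_fit(X, y, np.eye(4))
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
```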
Under a simple Brownian motion model for evolutionary change in Y and the Xs (Cavalli-Sforza & Edwards 1967; Felsenstein 1985, 2004), y (or any column of X, barring the first) is expected to be distributed as a multivariate normal with variance–covariance matrix given by σy²C (or σx²C) in which C contains the height of each of the n tips of the tree on its diagonal, as well as the heights of the most recent common ancestor of each species pair i and j in each i,jth off-diagonal position (Felsenstein 1973; O’Meara et al. 2006). σy² (or σx²) gives the phylogenetic variance or ‘evolutionary rate’ for Y (or X; O’Meara et al. 2006; Revell 2008). More importantly, however, ɛ = y − Xβ will also be distributed according to a multivariate normal with variance–covariance matrix given by σ²C under this evolutionary scenario. Figure 1(b) shows the computation of C from a simplified five taxon tree given in Fig. 1(a).
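The construction of C can be illustrated with a small hypothetical ultrametric tree (the tree of Fig. 1 is not reproduced here, so the four-taxon tree and its node heights below are assumptions made for illustration; in practice a phylogenetics library would compute this matrix from the tree):

```python
import numpy as np

# Hypothetical ultrametric 4-taxon tree of total height 1.0:
# ((A:0.5,B:0.5):0.5,(C:0.7,D:0.7):0.3);
taxa = ["A", "B", "C", "D"]
tree_height = 1.0
mrca_height = {("A", "B"): 0.5, ("C", "D"): 0.3}  # MRCA heights above the root

def brownian_C(taxa, tree_height, mrca_height):
    """C[i,i] = height of tip i (the tree height for an ultrametric tree);
    C[i,j] = height of the MRCA of tips i and j (0.0 for pairs whose
    lineages split at the root of this particular tree)."""
    n = len(taxa)
    C = np.zeros((n, n))
    for i in range(n):
        C[i, i] = tree_height
        for j in range(i + 1, n):
            h = mrca_height.get((taxa[i], taxa[j]),
                                mrca_height.get((taxa[j], taxa[i]), 0.0))
            C[i, j] = C[j, i] = h
    return C

C = brownian_C(taxa, tree_height, mrca_height)
```

The off-diagonal entries are exactly the shared branch lengths of each pair of species, which is why more closely related species have more strongly correlated residuals.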
However, it does not follow that if phylogenetic signal for X and/or Y is relatively high then ɛ = y − Xβ will necessarily be distributed according to σ²C. Furthermore, it is also possible that even if phylogenetic signal is very low, ɛ = y − Xβ may still be distributed with variance–covariance matrix σ²C. Thus, the appropriate test for phylogenetic signal is actually on the residual variability in Y given our regression model – a test which is relatively infrequently applied. In this study, I simulate scenarios in which X and/or Y have relatively high phylogenetic signal, but in which ɛ = y − Xβ is non-phylogenetic and thus the phylogenetic regression is inappropriate. I show that using a phylogenetic regression here will induce increased variance on the regression estimator. I also examine the possibility that X and/or Y are non-phylogenetic, but that ɛ = y − Xβ is distributed according to σ²C. In this case, the phylogenetic regression is appropriate; however, standard diagnostic tests on X and Y might be taken to imply that ‘phylogenetic correction’ of the regression is unnecessary. I show that ignoring phylogeny in this case can lead to poor statistical performance of the regression. Finally, I apply a maximum likelihood procedure using the λ statistic of Pagel (1999) in which we simultaneously estimate phylogenetic signal and the regression parameters (e.g. Revell 2009), thus obviating the need for a priori estimation of phylogenetic signal in the regression variables.
Materials and methods
To illustrate the case in point, I conducted four sets of numerical simulations, each under various conditions. First, I simulated X with phylogenetic signal, but in which the residual error in ɛ = y − Xβ was distributed according to σ²I; i.e. it was non-phylogenetic. Depending on the relationship between X and Y (i.e. β), as well as the size of the residual variance σ², Y may or may not also show phylogenetic signal in this model. Second, I simulated X without phylogenetic signal, but in which the residual variation in Y (ɛ = y − Xβ) was distributed according to σ²C; i.e. it was phylogenetic. Again, depending on β and σ², Y may or may not have signal in this model. Third, I simulated X and the residual variability in Y both with phylogenetic signal. This is the traditional Brownian motion model (Felsenstein 1985, 2004). Fourth, I simulated neither X nor Y with phylogenetic signal. I conducted all four of these simulation scenarios across several conditions for the magnitude of residual variability, σ², and for the relationship between X and Y, β. In particular, for fixed variance of X (σx² = 1·0), I simulated low, medium and high residual variation in Y (σ² = 0·01, 0·1, and 1·0). For each value of residual variability in Y, I simulated a low, moderate and high regression relationship between X and Y (in which ‘low’ means close to zero in this case, not negative). I used the regression slopes of β1 = 0·00, 0·75 and 0·90 for these conditions.
Generating data according to these models is quite easy. First, I simulated 1000 stochastic pure-birth phylogenetic trees, each containing a fixed number of species (n = 100). I arbitrarily rescaled these trees to have a total length from the root to any tip of 1·0. To simulate data on the trees in which X has signal, but in which the residual variation in Y given X is uncorrelated with the tree, I simply generated my data vector for X as x = Dᵀu, in which D denotes the upper triangular Cholesky decomposition of C times the rate of evolution in X (i.e. DᵀD = σx²C), and u is a vector of values sampled randomly from the standard normal distribution. I then generated a similar vector for the residual error of my regression model, but in this case I simulated the error to be uncorrelated, i.e. ɛ = σv. v is a vector of uncorrelated random standard normal deviates, as before. I then simply computed y = xβ1 + ɛ, where β1 is the desired slope of the regression relationship between X and Y and the intercept of the model is (arbitrarily) set to zero.
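This first simulation recipe can be sketched as follows. Numpy's Cholesky routine returns the lower triangular factor L (so L = Dᵀ in the notation above, and x = Lu plays the role of x = Dᵀu); the small covariance matrix used here is a hypothetical stand-in for one computed from a simulated tree:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Hypothetical stand-in for a Brownian covariance matrix from a tree
C = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.3, 1.0]])

# Scenario 1: phylogenetic X (sigma_x^2 = 1.0), uncorrelated residual
# error (sigma^2 = 0.1), and regression slope beta1 = 0.75
L = np.linalg.cholesky(C)                     # lower factor: cov(L u) = C
u = rng.standard_normal(n)
x = 1.0 * (L @ u)                             # x ~ MVN(0, sigma_x^2 * C)
eps = np.sqrt(0.1) * rng.standard_normal(n)   # eps ~ MVN(0, sigma^2 * I)
y = 0.75 * x + eps                            # intercept set to zero
```

The other three generating conditions follow by swapping which of x and eps is premultiplied by the Cholesky factor.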
Generating data for X that is uncorrelated with the tree, but in which residual variability in the model y = xβ1 + ɛ is phylogenetic, was equally straightforward. Here, I just simulated X as x = σx u, residual error ɛ as ɛ = Dᵀv (in which DᵀD = σ²C), and computed y = xβ1 + ɛ, as before.
I also simulated data for X and residual error in Y that were both correlated with the tree. To accomplish this, I just calculated x = Dᵀu and ɛ = Eᵀv (in which DᵀD = σx²C and EᵀE = σ²C), and y = xβ1 + ɛ, for vectors of random standard normal variates u and v.
Finally, I simulated data for X and residual error in Y that were uncorrelated with the phylogeny. In this case, I just generated the random vectors for x and the residual error as x = σx u and ɛ = σv, respectively, for standard normal vectors u and v, as before. Then, I computed y = xβ1 + ɛ.
The first two generating conditions are much more rarely considered. I can suggest a couple of scenarios to which they might well apply; however, I am sure that biologically savvy readers of this article will come up with others. For example, phylogenetic signal in X but no phylogenetic signal in Y given X would be expected if X evolved by Brownian motion, and Y was determined completely by X, but was measured or phenotypically expressed with error. As long as the measurement or expression error was not phylogenetically correlated, this would represent an example of the first generating model. In the second generating model, there is no phylogenetic signal in X, but residual values for Y given X have signal. We might expect this pattern of interspecific variation if Y represented a phenotypically plastic response to a random (non-phylogenetic) environment X, but the magnitude of the plasticity was phylogenetically autocorrelated. Other biological processes that could result in generating conditions one or two are certainly possible.
For each simulation model and set of parameter conditions, I fit three different linear regression models for the relationship between X and Y. First, I fit an OLS regression, for which the estimating equation is given above, using X = [1 x], where 1 is a column vector of 1·0s. To test the null hypothesis that the regression slope is β1 = 0·0, we need to calculate the variance–covariance matrix of our estimator, V, which is given by: V = σ̂²(XᵀX)⁻¹, in which σ̂² = ɛ̂ᵀɛ̂/(n − 2) is the estimated residual variance.
We can then compute t = β̂1/√V11, which should be distributed as a t-statistic with n − 2 = 98 d.f. (Rencher & Schaalje 2008). With the term V11, I am referring to the estimation variance component corresponding to β1 (not β0, the intercept of the model), which is actually in the second row and column of V.
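The slope test can be sketched as below (an illustration with made-up data; the returned statistic would be compared against a t-distribution with n − 2 d.f.):

```python
import numpy as np

def ols_slope_t(x, y):
    """t statistic for H0: beta1 = 0 under OLS.

    V = s2 * (X'X)^{-1}, with s2 the residual variance on n - 2 d.f.;
    the slope's estimation variance sits in the second row and column
    of V (index [1, 1] with zero-based indexing)."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ beta
    s2 = e @ e / (n - 2)
    V = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(V[1, 1])

t = ols_slope_t(np.array([1.0, 2.0, 3.0, 4.0]),
                np.array([1.0, 2.0, 3.0, 5.0]))
```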
Second, I fit the phylogenetic generalized least squares model (PGLS), in which the error structure of the residual vector, ɛ = y − Xβ, is assumed to be given by σ²C. As noted earlier, this model will yield exactly the same regression slope estimate as the procedure of independent contrasts followed by regression through the origin (Felsenstein 1985; Garland, Harvey & Ives 1992; Garland & Ives 2000; Rohlf 2001). Here, C is a matrix with the tree height in every diagonal position, and the heights of the most recent common ancestor of species i and j in each i,jth off-diagonal position (Fig. 1b). The typical generalized least squares estimating equation, β̂ = (XᵀC⁻¹X)⁻¹XᵀC⁻¹y, is also given above. To conduct a test of the hypothesis that β1 = 0·0, we again compute t = β̂1/√V11, but this time using V = σ̂²(XᵀC⁻¹X)⁻¹, in which σ̂² = ɛ̂ᵀC⁻¹ɛ̂/(n − 2).
Third, I fit a regression model in which phylogenetic signal in the residual error was estimated simultaneously with the regression parameters using the λ statistic of Pagel (1999; hereafter, PGLSλ). Here, Cλ is the variance–covariance matrix, C, to which the λ transformation has been applied (Fig. 1c; Pagel 1999; Freckleton, Harvey & Pagel 2002). We do not have an analytic solution for the likelihood equation for λ, so it must be optimized numerically; however, the difficulty of this optimization is alleviated considerably by the fact that for any given value of λ (and thus Cλ), our conditional maximum likelihood estimates for β and σ² can be obtained as follows (Rencher & Schaalje 2008): β̂ = (XᵀCλ⁻¹X)⁻¹XᵀCλ⁻¹y and σ̂² = (y − Xβ̂)ᵀCλ⁻¹(y − Xβ̂)/n.
By substituting Cλ for C, we can conduct our hypothesis test of β1 = 0·0 using the same calculations as for PGLS, above; however, we should test our t-statistic against a t-distribution with n − 3 d.f. due to the one additional parameter (λ) estimated in the PGLSλ model. In this study, I limited estimation of λ to the interval 0 ≤ λ ≤ 1, because most values of λ outside this interval result in a likelihood equation that is not defined; however, in theory λ < 0 or λ > 1 are possible (Freckleton, Harvey & Pagel 2002).
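A sketch of the λ transformation (which rescales the off-diagonal elements of C by λ, leaving the diagonal unchanged) and of the profile log-likelihood that is optimized numerically over 0 ≤ λ ≤ 1. This is illustrative only: the grid search stands in for a proper one-dimensional optimizer, and the data are made up:

```python
import numpy as np

def lambda_transform(C, lam):
    """Pagel's lambda: multiply the off-diagonal elements of C by lambda,
    leaving the diagonal unchanged."""
    C_lam = lam * C
    np.fill_diagonal(C_lam, np.diag(C))
    return C_lam

def profile_loglik(lam, X, y, C):
    """Log-likelihood already maximized over beta and sigma^2 for a fixed
    lambda; the one-dimensional optimum over [0, 1] gives the PGLS-lambda fit."""
    Cl = lambda_transform(C, lam)
    Cinv = np.linalg.inv(Cl)
    beta = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)
    e = y - X @ beta
    n = len(y)
    s2 = e @ Cinv @ e / n                 # conditional ML residual variance
    _, logdet = np.linalg.slogdet(Cl)
    return -0.5 * (n * np.log(2.0 * np.pi * s2) + logdet + n)

# Toy demonstration with made-up data and a stand-in C
C = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.3, 1.0]])
X = np.column_stack([np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])])
y = np.array([0.2, 1.1, 1.9, 3.2])
grid = np.linspace(0.0, 1.0, 101)
lam_hat = grid[np.argmax([profile_loglik(l, X, y, C) for l in grid])]
```

Because Cλ is a convex combination of C and its diagonal, it remains positive definite everywhere on [0, 1], so the profile likelihood is well defined across the whole search interval.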
For each simulated data set, I compared the performance of OLS and PGLS by determining which estimation procedure produced an estimated regression slope, β1, that was closest to its generating value. I also counted the number of significant regressions of each type to estimate the type I error (when the generating regression slope was β1 = 0·00) or power (when β1 > 0·00) of each procedure. Finally, I determined the bias of each estimating procedure by computing the mean parameter estimate across all the simulated data sets for each of the three estimators. Several studies have shown that OLS and GLS are unbiased even if the structure of the error term is specified incorrectly (e.g. Pagel 1993; Rohlf 2006; Rencher & Schaalje 2008; Revell 2009), so I did not expect bias to be significant for any of my estimation methods.
Finally, for each simulated data set I also computed a suite of phylogenetic diagnostics on the variables X and Y. I estimated λ (Pagel 1999; Freckleton, Harvey & Pagel 2002), now for each character separately; I computed K, a measure of phylogenetic signal developed by Blomberg, Garland & Ives (2003); and, finally, I also calculated independent contrasts (Felsenstein 1985), and computed both the Pearson correlation (r) and Spearman rank correlation (ρ) between the absolute values of the standardized contrasts and their expected standard deviations prior to standardization (Garland, Harvey & Ives 1992).
For a single character, contained in (say) character vector y, λ is optimized by maximizing the following likelihood equation: log L(λ, â, σ̂² | y, C) = −[(y − â1)ᵀ(σ̂²Cλ)⁻¹(y − â1) + n log(2π) + log|σ̂²Cλ|]/2,
in which 1 is a vector of 1·0s, as before; and the conditional maximum likelihood estimates of â and σ̂² are given by â = (1ᵀCλ⁻¹1)⁻¹1ᵀCλ⁻¹y and σ̂² = (y − â1)ᵀCλ⁻¹(y − â1)/n, respectively (Freckleton, Harvey & Pagel 2002). This equation is maximized using numerical methods. As before, I limited estimation of λ to the interval 0 ≤ λ ≤ 1.
Blomberg, Garland & Ives (2003) proposed an alternative measure of phylogenetic signal that has seen wide use. Their measure, K, can be computed as follows: K = [(y − â1)ᵀ(y − â1)/(y − â1)ᵀC⁻¹(y − â1)] × (n − 1)/[tr(C) − n(1ᵀC⁻¹1)⁻¹].
Here, â = (1ᵀC⁻¹1)⁻¹1ᵀC⁻¹y is the phylogenetic mean; tr(C) indicates that the trace of C has been calculated; and all other terms have been previously defined (Revell, Harmon & Collar 2008).
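The K statistic can be sketched directly from this formulation (an illustrative sketch following Revell, Harmon & Collar 2008, not code from the original study). When C = I, the observed and expected mean-squared-error ratios both equal 1·0, so K = 1:

```python
import numpy as np

def blomberg_K(y, C):
    """Blomberg et al.'s K: the observed ratio MSE0/MSE divided by its
    expectation under Brownian motion on the tree described by C."""
    n = len(y)
    Cinv = np.linalg.inv(C)
    one = np.ones(n)
    a_hat = (one @ Cinv @ y) / (one @ Cinv @ one)   # phylogenetic mean
    d = y - a_hat
    observed = (d @ d) / (d @ Cinv @ d)             # MSE0 / MSE
    expected = (np.trace(C) - n / (one @ Cinv @ one)) / (n - 1)
    return observed / expected

K_star = blomberg_K(np.array([1.0, 2.0, 4.0, 8.0]), np.eye(4))  # -> 1.0 when C = I
```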
As noted above, I computed the Pearson and Spearman correlations between the absolute values of the standardized independent contrasts of Felsenstein (1985) and the square roots of their expected variances (Garland, Harvey & Ives 1992). This method is intended to measure whether contrasts have been standardized appropriately – which is not expected to have been the case if Brownian motion is an inappropriate model for character evolution in our phylogeny. Thus, a non-significant relationship between the contrasts and their standard deviations is often taken as evidence that phylogenetic methods for regression are appropriate (Garland, Harvey & Ives 1992; Nunn & Barton 2000; Fisher, Blomberg & Owens 2003).
Table 1 shows the mean parameter estimates, type I error rates, and power for each estimating procedure (OLS, PGLS and PGLSλ), for data generated with phylogenetic signal in the independent variable, X, but no phylogenetic signal in the model residuals. As expected, both OLS and PGLS were unbiased. The most obvious features of Table 1 are fourfold. First, estimation accuracy is substantially higher for OLS than PGLS. Averaged across simulation conditions, in 84·2% of simulations OLS produced a better (i.e. closer to its generating value) estimate of the regression slope than PGLS (Table 1; Fig. 2). Second, variance among estimated values of β1 was much higher (on average 32·9 times higher) for PGLS than for OLS. Third (and in spite of points one and two), type I error was not elevated relative to its nominal level for PGLS. Finally, in simulations with β1 ≠ 0·00, the power of the PGLS estimator was reduced under conditions of high residual error, σ².
Table 1. Parameter estimation, type I error and power for OLS, PGLS and PGLSλ for data generated with phylogenetic signal in X, but no phylogenetic signal for the model residuals
β1 and σ² indicate the generating regression slope and residual error, respectively. β̂1(OLS) indicates the mean parameter estimate by OLS and var[β̂1(OLS)], the variation among simulations in the estimated slope. β̂1(PGLS) and var[β̂1(PGLS)] are likewise interpreted. OLS : PGLS indicates the fraction of simulations for which OLS provided a better estimate of the regression slope (i.e. one closer to its generating value) compared with the fraction for which PGLS provided a better estimate. β̂1(PGLSλ) indicates the mean parameter estimate for the regression slope when the regression model was estimated simultaneously with λ, and var[β̂1(PGLSλ)], the variation among simulations. Finally, error/power indicates the type I error rate (if β1 = 0·00) or power of each estimation procedure.
Table 2 shows phylogenetic diagnostics of X, Y, and the bivariate regression model including λ. Estimated phylogenetic signal was invariably high for X; however, it was also quite high for Y when the generating value of β1 was high and σ² was relatively low. I also computed two different independent contrasts diagnostics: the Pearson and Spearman rank correlation coefficients for the correlation between the absolute values of the standardized contrasts and the square root of their expected variances prior to standardization. In general, these correlations were near zero for X (as expected), and negative for Y, although for increasing β1 and sufficiently low σ² the strength of the correlations decreased along with my power to detect a significant relationship (Table 2).
Table 2. Phylogenetic diagnostics for X, Y and the bivariate regression model, where the data have been simulated with phylogenetic signal in X, but no signal in the residual variability of Y given X
β1 and σ² are as in Table 1. λ(X) and λ(Y) indicate the mean value of phylogenetic signal estimated using the λ method for each character, X and Y, separately. λ(PGLSλ) indicates the mean fitted λ, where λ was estimated simultaneously with the regression model. K(X) and K(Y) indicate the mean value of phylogenetic signal in X and Y, respectively, estimated using the K method. r(X) and r(Y) indicate the mean Pearson product–moment correlation between the absolute values of the independent contrasts and the square roots of their expected variances prior to standardization. ρ(X) and ρ(Y) are the mean values of the corresponding Spearman rank correlations. Error/power indicates the type I error or power of each contrasts-based diagnostic.
My results for data generated with no phylogenetic signal in the independent variable, X, but phylogenetically correlated residual variation in Y are given in Tables 3 and 4. Here, the PGLS estimator yielded a better estimate of the regression slope than OLS in 84·5% of simulations on average (Fig. 2). However, OLS did not suffer from increased type I error when the generating regression slope was β1 = 0·00 (Table 3). Phylogenetic signal was invariably low for X (Table 4). Phylogenetic signal was also generally quite low for Y, except when β1 = 0·00. Diagnostic statistics on the contrasts for both X and Y overwhelmingly indicated inadequate standardization, again except when β1 = 0·00, in which case they indicated that Y, but not X, had been adequately standardized in the computation of independent contrasts (Table 4).
Table 3. Parameter estimation, type I error and power for OLS, PGLS and PGLSλ for data generated with no phylogenetic signal in X, but phylogenetic signal for the model residuals
Table 4. Phylogenetic diagnostics for X, Y and the bivariate regression model, where the data have been simulated with no phylogenetic signal in the independent variable, X, but signal in the residual error
When I generated data with phylogenetic signal in both the independent variable and in the model residuals, I found that PGLS vastly outperformed OLS. In this case, the PGLS estimator was better on average 77·8% of the time (Table 5; Fig. 2). I also found that type I error was substantially increased for OLS, unlike in Table 3 when only the residual error was simulated with phylogenetic signal and where OLS had type I error near the nominal level. All measures of phylogenetic signal indicated high phylogenetic signal for both X and Y; and no independent contrasts diagnostic suggested that the contrasts for X or Y had been inadequately standardized (Table 6).
Table 5. Parameter estimation, type I error and power for OLS, PGLS and PGLSλ for data generated with phylogenetic signal in X and the model residuals
Finally, when I generated data with no phylogenetic signal for X and no phylogenetic signal for the residual variability in Y, OLS outperformed PGLS, as expected. Here, the OLS estimator was better on average 79·9% of the time (Table 7; Fig. 2). I also found that the type I error rate when β1 = 0·00 was elevated for PGLS relative to its nominal level (Table 7), unlike the situation in Table 1. All measures of phylogenetic signal indicated low signal, and furthermore all contrasts-based diagnostics indicated inadequate standardization of X and Y (Table 8).
Table 7. Parameter estimation, type I error and power for OLS, PGLS and PGLSλ for data generated with no phylogenetic signal in X nor in the model residuals
Notably, PGLSλ, in which phylogenetic signal is estimated simultaneously with our regression model, effectively recovers the performance of the best model (either OLS or PGLS) under all of the simulation conditions of the study. This is evidenced by the very low estimation variance of β1 under PGLSλ, regardless of the generating conditions (Tables 1, 3, 5, 7). Because fitting the PGLSλ model requires the estimation of one additional parameter relative to OLS or PGLS, the accuracy of PGLSλ is slightly decreased relative to OLS when the assumptions of OLS hold, or to PGLS when the assumptions of PGLS hold. However, the advantage of PGLSλ is that it very nearly recovers the performance of the best model when no particular level of phylogenetic signal in the residual error can be safely assumed a priori (as will most often be the case for empirical studies).
Over the past 20 years, the phylogenetic regression has become one of the most commonly applied methods in comparative biology (Felsenstein 1985; Grafen 1989). However, its assumptions are still not widely understood (Rohlf 2006). To examine how and when it should be applied, I will review, in turn, each of the simulated scenarios of this study.
Phylogenetic signal in the independent variable
When the generating model was one in which I simulated phylogenetic signal in the independent variable, but uncorrelated residual error in Y, a phylogenetic regression is not necessary. In this case, the assumption of independent errors holds and OLS is a perfectly appropriate method for fitting the regression model.
Consistent with this assertion, I found that OLS overwhelmingly provided a better regression slope estimate (84·2% of the time, averaged across simulation conditions; Fig. 2) than PGLS. Furthermore, the variance among simulations in the regression estimator was much higher for PGLS than OLS (on average 32·9 times higher; Table 1). However, I also found that type I error was not elevated for PGLS when the generating regression slope was β1 = 0·00 (Table 1). This interesting result will be discussed further below. Performance of the phylogenetic regression was fully recovered by simultaneously estimating the λ parameter of Pagel (1999; Revell 2009), as discussed in the ‘Materials and methods’. In general, the fitted value of λ for the regression model was very low (close to zero; Table 2) in this case, which makes PGLS nearly equivalent to OLS thus explaining its good statistical performance here.
No phylogenetic signal in the independent variable
When I used a generating model for simulation with no phylogenetic signal in the independent variable, but phylogenetic signal in the residual error for the dependent variable, a phylogenetic regression is appropriate. Here, the standard OLS assumption of independent errors is not true, and thus OLS is not an appropriate method to fit our bivariate regression model and PGLS should be used. As such, I was not surprised to find that PGLS regression yielded a better estimate of the generating regression slope for the vast majority (84·5%; Fig. 2) of simulated data sets across all simulation conditions.
In these simulations, however, I also found that oftentimes phylogenetic signal for X and Y was low by both of our chosen metrics (the λ statistic of Pagel 1999; and the K statistic of Blomberg, Garland & Ives 2003; Table 4). Furthermore, for all generating conditions except β1 = 0·00, Pearson and Spearman correlation-based diagnostic analysis of the independent contrasts suggested that contrasts had been inadequately standardized. Thus, for data generated under these conditions, standard diagnostics computed for the dependent and independent variables separately might be interpreted to suggest that standard PGLS or contrasts are inappropriate, even though they are called for in this case. As before, type I error of OLS was not inflated over its nominal level, a result which I will discuss at greater length below.
Phylogenetic signal in the independent variable, and the residuals
This is the traditional Brownian motion model for character evolution in X and Y. In this case, it should be no surprise that PGLS outperforms OLS, producing a parameter estimate for the regression slope that is closer to its generating value in 77·8% of simulations, averaged across conditions (Table 5; Fig. 2). I also found that type I error was considerably inflated if OLS was used, which is also consistent with earlier studies (e.g. Rohlf 2006; Revell 2009). Under these simulation conditions, all phylogenetic diagnostics (phylogenetic signal, Pearson and Spearman correlation diagnostics on the contrasts) suggested that Brownian motion is an appropriate model for evolution and that the phylogenetic regression is indicated.
Non-phylogenetic independent variable and error
When I simulated data for X and residual error in Y that were entirely non-phylogenetic, I found that OLS was vastly superior to PGLS in both estimation and hypothesis testing, with PGLS showing severely inflated type I error for β1 = 0·00 (Table 7). Unsurprisingly, estimated phylogenetic signal was invariably low and significant negative Pearson and Spearman rank correlations between the absolute values of the independent contrasts and the square roots of their expected variances suggested inadequate standardization in the contrasts procedure (Table 8).
Diagnostics and the phylogenetic regression
I found mixed results with regard to diagnostics and the phylogenetic regression. Certainly, as in Table 6, when all diagnostics indicated high phylogenetic signal for the independent and dependent variables considered separately, as well as no relation between the absolute value of independent contrasts and their expected variances, I found that the phylogenetic regression as traditionally applied performed extremely well. However, I also identified conditions in which phylogenetic signal for X and/or Y was high, and in which correlation-based diagnostics generally indicated that the phylogenetic regression was appropriate – and yet residual error was uncorrelated among species, and thus OLS (not PGLS) was called for (e.g. Table 2). In addition, in some cases, I found conditions in which phylogenetic signal was low, correlation-based contrasts analysis indicated inadequate standardization, and yet the phylogenetic regression was fully appropriate (e.g. Table 4).
These results suggest that the central thesis of this article, that assessing phylogenetic signal in the original variables is generally insufficient to diagnose whether the phylogenetic regression is appropriate, is correct. Instead, researchers should consider simultaneously fitting a model for phylogenetic signal in the residual error along with their regression model, as I have done using the parameter λ of Pagel (1999), above.
Simultaneous estimation of λ
It is possible to optimize the error structure of the residuals simultaneously with fitting the regression model via least squares (e.g. Grafen 1989). As noted in the preceding paragraphs, this approach seems preferable in general, and in this study, I found that the performance of the best model (OLS or PGLS, depending on the simulation conditions) could be fully or nearly fully recovered through simultaneous optimization of the λ parameter of Pagel (1999) (Tables 1, 3, 5, 7). However, this method has its own limitations. In particular, there are many ways in which the true error structure of our residuals could differ from the hypothesized error structure given by the tree (e.g. Fig. 1a) other than that described by λ (e.g. Blomberg, Garland & Ives 2003; Hansen, Pienaar & Orzack 2008; Lavin et al. 2008). To the end of obtaining an even better fit, Pagel (1999) proposed several other parameters which can be simultaneously optimized. Furthermore, Garland, Harvey & Ives (1992) and others have shown that many analogous transformations can be achieved by manipulating the branch lengths of the tree in various ways.
Type I error of the ‘wrong’ regression estimator
For some of the simulation conditions of this study, I found that the incorrect regression estimator still yielded appropriate type I error rates for the generating condition of β1 = 0·00. This result is somewhat perplexing. In general, I found that the incorrect estimation procedures overwhelmingly led to estimated regression coefficients that were farther from the generating values than estimates obtained using the correct procedure (e.g. Tables 1, 3, 5, 7; Fig. 2). Thus, one might naively suspect that for a generating slope of β1 = 0·00, these coefficients might also more often be found to be significantly different from 0·00. This is not the case because the estimated standard error for the incorrect model in these cases (in particular, phylogenetic signal in X, but not the model residuals; and no phylogenetic signal in X, but in the residuals) increases in direct proportion to the square root of the variance in the estimated regression slopes among simulations (Table 9). In fact, this is generally a property of regression tests that yielded appropriate type I error, but not of tests that produced error inflated over its nominal level (Table 9). This result is also somewhat encouraging, because it suggests that although type I error for the phylogenetic regression can be elevated under some simulation conditions (e.g. Table 7), these are not conditions in which diagnostic tests on the original variables or independent contrasts would generally suggest that the phylogenetic regression was appropriate.
Table 9. The ratio of the standard deviations (square roots of the sampling variances) of estimates of β1 obtained by OLS and PGLS across simulation conditions; and the mean ratio of the estimated standard errors for β1 by OLS and PGLS. If our estimates of the standard error captured the uncertainty in the estimates of β1, then the two ratios should scale proportionally (as they do for the first two simulation conditions). When the data were generated with phylogenetic signal in both X and the model residuals, our estimated standard errors for β1 by OLS were too low; conversely, when the data were generated without signal in either, our estimated standard errors for β1 by PGLS were too low. This is why type I error is inflated when the incorrect estimation procedure is used in each case. Standard deviations of the mean ratios among simulation conditions are given in parentheses after each entry
Simulation conditions (table rows): phylogenetic X, non-phylogenetic ε; non-phylogenetic X, phylogenetic ε; phylogenetic X and ε; non-phylogenetic X and ε. (Numeric entries of the table are not reproduced here.)
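The relationship summarized in Table 9 between the estimated standard error and the true sampling spread of the slope can be illustrated with a small simulation. The sketch below is under assumed conditions (a balanced 32-tip tree of unit height, Brownian-motion-like covariance, a generating slope of β1 = 0, and phylogenetic signal in both X and the residuals); it compares the standard deviation of the estimated slopes across replicates with the mean estimated standard error, for OLS and for GLS with the correct covariance. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def balanced_tree_cov(depth):
    """Shared branch-length (covariance) matrix for a balanced
    binary tree with 2**depth tips and unit height."""
    n = 2 ** depth
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                C[i, j] = 1.0
            else:
                # tips i and j diverge where their indices first differ
                C[i, j] = (depth - (i ^ j).bit_length()) / depth
    return C

def sim_mvn(C, rng):
    """One draw from N(0, C) via the Cholesky factor of C."""
    return np.linalg.cholesky(C) @ rng.normal(size=C.shape[0])

def slope_and_se_ols(X, y):
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    s2 = e @ e / (n - p)
    return beta[1], np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

def slope_and_se_gls(X, y, Ci):
    n, p = X.shape
    A = X.T @ Ci @ X
    beta = np.linalg.solve(A, X.T @ Ci @ y)
    e = y - X @ beta
    s2 = e @ Ci @ e / (n - p)
    return beta[1], np.sqrt(s2 * np.linalg.inv(A)[1, 1])

C = balanced_tree_cov(5)              # 32 tips
Ci = np.linalg.inv(C)
n = C.shape[0]
b_ols, s_ols, b_gls, s_gls = [], [], [], []
for _ in range(500):
    x = sim_mvn(C, rng)               # phylogenetic signal in X
    y = sim_mvn(C, rng)               # beta1 = 0; phylogenetic residuals
    X = np.column_stack([np.ones(n), x])
    b, s = slope_and_se_ols(X, y)
    b_ols.append(b); s_ols.append(s)
    b, s = slope_and_se_gls(X, y, Ci)
    b_gls.append(b); s_gls.append(s)

# If the standard error captures the uncertainty in the slope, this
# ratio should be near 1; values well above 1 mean the SE is too low.
ratio_ols = np.std(b_ols) / np.mean(s_ols)
ratio_gls = np.std(b_gls) / np.mean(s_gls)
```

Under these conditions the OLS ratio exceeds 1 (its standard errors understate the true sampling spread, producing inflated type I error), while the GLS ratio is close to 1, in line with the pattern reported in Table 9.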
The general recommendations that can be derived from this study are straightforward. Firstly, we cannot diagnose whether a phylogenetic regression is appropriate based on univariate measures of phylogenetic signal calculated on the individual variables in our analysis. Although such measures might be interesting for other reasons (e.g. Freckleton, Harvey & Pagel 2002; Blomberg, Garland & Ives 2003; but see Revell, Harmon & Collar 2008), they are not useful in assessing whether or not a phylogenetic regression is appropriate. Secondly, the suitability of a phylogenetic regression should instead be diagnosed by estimating phylogenetic signal in the residual deviations of Y given our predictors (X1, X2, etc.). Thirdly, an alternative approach, and the one I recommend, is the simultaneous estimation of phylogenetic signal and the regression model. One example (Pagel's λ) is provided herein, although many other potentially suitable transformations are also available (e.g. Garland, Harvey & Ives 1992; Pagel 1999; Blomberg, Garland & Ives 2003; Hansen, Pienaar & Orzack 2008; Lavin et al. 2008).
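The second recommendation, diagnosing the phylogenetic regression from the residual error rather than from X or Y themselves, can be sketched as follows. This is a simplified diagnostic, not a formal test: it compares the likelihood of the OLS residuals under the tree covariance (λ = 1) with their likelihood under independence (λ = 0). A proper analysis would re-estimate the coefficients under each covariance structure and compare the fitted models with a likelihood-ratio test or AIC. Function names are illustrative.

```python
import numpy as np

def resid_loglik(e, V):
    """Log-likelihood of residuals e under N(0, sigma2 * V), with
    sigma2 profiled out by its ML estimate."""
    n = len(e)
    Vi = np.linalg.inv(V)
    s2 = e @ Vi @ e / n
    logdet = np.linalg.slogdet(V)[1]
    return -0.5 * (n * np.log(2 * np.pi * s2) + logdet + n)

def residual_signal_check(X, y, C):
    """Fit OLS, then compare the likelihood of its residuals under the
    tree covariance C (lambda = 1) vs independence (lambda = 0).
    Positive values favour phylogenetic residual error."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    return resid_loglik(e, C) - resid_loglik(e, np.eye(len(y)))
```

On average, data simulated with phylogenetic residual error yield positive values (favouring PGLS), and data with independent residual error yield negative values (favouring OLS), regardless of how much signal X or Y shows individually.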
Ordinary least squares regression assumes that the residual error in our regression model is independent among observations. Commonly, this will be an incorrect assumption for various types of data, particularly for data from species related by a phylogenetic tree (Felsenstein 1985; Grafen 1989; Harvey & Pagel 1991). The phylogenetic regression (here, PGLS, but – equivalently – regression through the origin of independent contrasts; Felsenstein 1985; Garland & Ives 2000; Rohlf 2001) can be used for data from species in which the residual error is distributed with covariances between samples that are proportional to the amount of shared branch length from the root node of the tree to the common ancestor of each pair of species in the sample (Fig. 1b; Rohlf 2001). However, as a means of diagnosing a priori whether or not a phylogenetic regression is appropriate, it has become common practice either to compute independent contrasts-based diagnostics, such as the correlation between the absolute values of standardized contrasts and the square roots of their expected variances, or to estimate phylogenetic signal in the independent and dependent variables of our model. In this study, I have shown that these measures can sometimes be inadequate, and even misleading, regarding whether a phylogenetic regression is called for. However, I have also shown that under conditions in which phylogenetic signal in the independent variable is high, but the phylogenetic regression is inappropriate (or vice versa), type I error is not inflated over its nominal level, even though the accuracy of parameter estimation is substantially decreased. Finally, I showed that simultaneously optimizing the error structure of our generalized least squares model along with the parameters of the model can be a useful approach when the suitability of our data for phylogenetic regression is not known.
L. Harmon, M. Lajeunesse, J. Losos and two anonymous reviewers kindly provided comments on this article. The National Evolutionary Synthesis Center (NSF EF-0423641) supports the author in his current position.