One can joke that ‘exciting statistics’ is an oxymoron, but it is neither a joke nor an exaggeration to say that these are exciting times to be involved in statistical ecology. As Halstead et al.'s (2012) paper nicely exemplifies, recently developed Bayesian analyses can now be used to extract insights from data using techniques that would have been unavailable to the ecological researcher just a decade ago. Some object to this, implying that the subjective priors of the Bayesian approach are the pathway to perdition (e.g. Lele & Dennis, 2009). It is reasonable to ask whether these new approaches really give us anything that we could not obtain with traditional, tried-and-true frequentist approaches. I believe the answer is a clear yes.

Using traditional statistical terminology, Halstead et al.'s shared frailty models are examples of mixed models, that is, models that include both fixed effects and traditional random effects terms. A mixed model does not necessarily require a Bayesian approach. Indeed, one can get the impression from the ecological literature that the choice of a Bayesian or frequentist approach rests on one's general personal tastes about Bayesian versus frequentist inference (e.g. Ponciano et al., 2009). I believe this position fails to recognize how far many modern applications of random effects models have drifted from the traditional model. Although models such as Halstead et al.'s may be algebraically identical to traditional random effects models, the basic motivation and substantive interpretation are quite different.
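
To fix ideas, a shared frailty model for the hazard of individual j at site i can be written in a generic proportional-hazards form (the notation here is illustrative, not necessarily Halstead et al.'s exact specification):

    h_{ij}(t) = h_0(t) \exp(\mathbf{x}_{ij}^\top \boldsymbol{\beta} + b_i), \qquad b_i \sim N(0, \sigma_b^2),

where \boldsymbol{\beta} contains the fixed effects of the covariates \mathbf{x}_{ij} and the site-level frailties b_i are the random effects shared by all individuals at site i.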

In the traditional random effects model, the observed levels of the random effect are assumed to be random draws from some underlying population of interest. In this view, interest focuses on population characteristics rather than the values of the particular levels (sites in Halstead et al.'s case); Hodges & Reich (2010) refer to the traditional model as the Scheffé model. Under the Scheffé model, it can be argued that it would seldom make sense to estimate the values of the specific levels, in contrast to a fixed effects model. This traditional definition seems to be behind much ecological thinking about random effects models; for example, Bolker et al. (2008) give the Scheffé definition as ‘the’ definition of random effects in their Glossary. But the modern application of random effects models appears to have undergone substantial ‘mission creep’ away from the traditional Scheffé definition.
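
In its simplest one-way form, the Scheffé model is

    y_{ij} = \mu + a_i + \varepsilon_{ij}, \qquad a_i \sim N(0, \sigma_a^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2),

where the level effects a_i are random draws and inference targets the population parameters \mu, \sigma_a^2 and \sigma^2 rather than the realized a_i themselves. (This textbook form is given only to anchor the discussion; it is not the model fitted by Halstead et al.)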

In Halstead et al.'s situation, it seems likely that each study area was selected for its own particular reasons, and there is no way to construe the selection as anything resembling a random sampling mechanism. One might say the study areas could be modelled as if they were random samples from some imaginary population of ‘similar’ areas. But it is here that the frequentist starts to tread on a slippery slope, as this imaginary population sounds a lot like a statement of a Bayesian prior. The problem of reconciling the random effects formulation with an interest in estimating the individual level effects, as if they were fixed effects, has bedevilled traditional random effects analysis since its inception (e.g. Robinson, 1991).

Stein's paradox (Efron & Morris, 1977) and the resulting theory of shrinkage and smoothing estimators have clarified modern thinking on such applications. Stein shocked the statistical world in 1956 by demonstrating that, when three or more means are estimated simultaneously, the maximum likelihood fixed effects estimator is inadmissible; that is, an estimator with uniformly smaller expected total error always exists. Stein and colleagues demonstrated that estimators that algebraically resemble random effects estimators outperform fixed effects estimators even when applied to fixed effects designs. These estimators can be viewed as roughness-penalized likelihood estimators, or equivalently as Bayesian models in which the roughness penalty plays the role of a probabilistic ‘texture’ model for the effects. In either case, the underlying model is essentially a traditional fixed effects model with some additional assumption about the texture of the fixed effects. Although the algebra ends up looking the same, the underlying conceptual model is very different from the Scheffé model.
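
A minimal simulation sketch (using only numpy; all settings here are illustrative) makes the point concrete. The 25 means below are fixed constants, not draws from any population, yet the positive-part James-Stein estimator, which shrinks the raw estimates towards zero, beats the maximum likelihood estimator in total squared error:

    import numpy as np

    rng = np.random.default_rng(1)
    p, n_reps = 25, 10_000             # 25 means, many replicate 'experiments'
    theta = rng.uniform(-1.0, 1.0, p)  # arbitrary FIXED effects: no random-draw assumption

    mle_err = js_err = 0.0
    for _ in range(n_reps):
        x = theta + rng.standard_normal(p)               # one noisy observation per mean
        shrink = max(0.0, 1.0 - (p - 2) / np.sum(x**2))  # positive-part James-Stein factor
        mle_err += np.sum((x - theta) ** 2)              # raw (fixed effects/MLE) estimator
        js_err += np.sum((shrink * x - theta) ** 2)      # shrunken estimator

    print(f"MLE mean total squared error:         {mle_err / n_reps:.2f}")
    print(f"James-Stein mean total squared error: {js_err / n_reps:.2f}")

The shrunken estimates are biased individually but collectively closer to the truth, which is precisely the behaviour that shrinkage/smoothing theory predicts for fixed effects designs.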

The distinctions between the traditional Scheffé model and the shrinkage/smoothing model are subtle but important. In the shrinkage/smoothing model, the ‘variance’ parameter is really a description of an essentially fixed, deterministic surface: it is a roughness metric of something that is fixed, not an expectation of a function of random variables, which is what defines a true variance. Because hypothetical future realizations are at the centre of frequentist methodology, and the two models behave very differently under such realizations, they require fundamentally different treatments in the frequentist context. As noted by Hodges & Reich (2010), at present these issues are not fully appreciated in the applied literature, and it will take some time for the theory to catch up with applications.
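
The contrast can be written compactly (the notation here is mine, chosen only to make the distinction explicit). Under the Scheffé model, the variance component is an expectation over random draws,

    \sigma_a^2 = E[a_i^2],

whereas under the shrinkage/smoothing interpretation the analogous parameter is better read as the weight \lambda on a roughness penalty applied to a fixed vector of effects, as in the penalized likelihood criterion

    -\log L(\boldsymbol{\beta}, \mathbf{a}) + \lambda \, \mathbf{a}^\top K \mathbf{a},

where K is a penalty matrix measuring the ‘texture’ of the fixed but unknown effects \mathbf{a}.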

The Bayesian need not be especially concerned about these issues because, unlike the traditional frequentist, the Bayesian recognizes no distinction between fixed and random effects. Bayesians can motivate such analyses purely on the superior performance of shrinkage/smoothing estimation if the Scheffé paradigm seems inappropriate. This does not mean that applying Bayesian methods automatically solves all concerns with such analyses. A concern receiving increasing attention is ‘spatial confounding’, which can occur in models such as Halstead et al.'s that include both random spatial terms and spatially referenced covariates, such as ‘linear habitat’ in Halstead et al.'s case. The general problem is that the flexibility of the spatial random effects allows them to ‘soak up’ some of the explanatory power of the covariate, causing the covariate's importance to be underestimated. In this respect, it seems possible that Halstead et al.'s analysis may underestimate the importance of features such as linear habitat.
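
In symbols, the linear predictor of such a model has the generic form (again, a sketch rather than Halstead et al.'s exact model)

    \boldsymbol{\eta} = X\boldsymbol{\beta} + \mathbf{s},

where \mathbf{s} is the spatial random effect, and confounding arises when the columns of X are themselves spatially structured, so that \mathbf{s} can mimic X\boldsymbol{\beta}. One remedy discussed by Hodges & Reich (2010), often called restricted spatial regression, constrains the spatial term to the orthogonal complement of the covariates, replacing \mathbf{s} by (I - P_X)\mathbf{s}, where P_X = X(X^\top X)^{-1}X^\top is the projection onto the column space of X.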

Techniques that address such spatial confounding are an active area of research (Paciorek, 2010; Hodges & Reich, 2010; Hughes & Haran, 2011, unpubl. data). The ecological researcher needs to remain abreast of these developments. It seems likely that these methods will transition from the research phase to application fairly rapidly, offering researchers a new set of increasingly powerful tools for spatial analyses. Halstead et al. nicely illustrate the power and promise of these modern methods.
