COMMENTARY

# Bayesian shared frailty models for regional inference about wildlife survival

Version of Record online: 19 MAR 2012

DOI: 10.1111/j.1469-1795.2012.00532.x

© 2012 The Author. Animal Conservation © 2012 The Zoological Society of London


#### How to Cite

Heisey, D. M. (2012), Bayesian shared frailty models for regional inference about wildlife survival. Animal Conservation, 15: 127–128. doi: 10.1111/j.1469-1795.2012.00532.x

#### Publication History

- Issue online: 29 MAR 2012

One can joke that ‘exciting statistics’ is an oxymoron, but it is neither a joke nor an exaggeration to say that these are exciting times to be involved in statistical ecology. As Halstead *et al*.'s (2012) paper nicely exemplifies, recently developed Bayesian analyses can now be used to extract insights from data using techniques that would have been unavailable to the ecological researcher just a decade ago. Some object to this development, implying that the subjective priors of the Bayesian approach are a pathway to perdition (e.g. Lele & Dennis, 2009). It is reasonable to ask whether these new approaches are really giving us anything that we could not obtain with traditional tried-and-true frequentist approaches. I believe the answer is a clear yes.

Using traditional statistical terminology, Halstead *et al*.'s shared frailty models are examples of mixed models, that is, models that include both fixed effects and traditional random effects terms. A mixed model does not necessarily require a Bayesian approach. Indeed, one can get the impression from the ecological literature that the choice of a Bayesian or frequentist approach rests on one's general personal tastes about Bayesian versus frequentist inference (e.g. Ponciano *et al*., 2009). I believe this position fails to recognize how far many modern applications of random effects models have drifted from the traditional model. Although models such as Halstead *et al*.'s may be algebraically identical to traditional random effects models, the basic motivation and substantive interpretation are quite different.
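To make the shared frailty idea concrete, here is a minimal simulation sketch (my own illustration, not Halstead *et al*.'s actual model): each site shares one multiplicative, gamma-distributed hazard term, so survival times within a site are correlated and site-to-site variation exceeds what sampling error alone would produce. The gamma frailty distribution, baseline hazard, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

n_sites, n_per_site = 50, 200
base_hazard = 1.0        # assumed baseline hazard h0 (illustrative)
theta = 0.25             # frailty variance; gamma shape = 1/theta, scale = theta

# Site-level frailties z_j with mean 1: every individual at site j shares
# the multiplicative hazard z_j * h0, inducing within-site correlation.
z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_sites)

# Survival times: exponential with site-specific rate z_j * h0.
times = rng.exponential(1.0 / (z[:, None] * base_hazard),
                        size=(n_sites, n_per_site))

# For contrast, data with no frailty (all sites share the same hazard).
times_no_frailty = rng.exponential(1.0 / base_hazard,
                                   size=(n_sites, n_per_site))

# Between-site variance of mean survival is inflated by the shared frailty.
var_frailty = times.mean(axis=1).var()
var_no_frailty = times_no_frailty.mean(axis=1).var()
```

The extra between-site spread in `var_frailty` is precisely what the random effect (frailty) term in a mixed model is meant to capture.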

In the traditional random effects model, the observed levels of the random effect are assumed to be random draws from some underlying population of interest. In this view, interest focuses on population characteristics rather than the values of the particular levels (sites in Halstead *et al*.'s case); Hodges & Reich (2010) refer to the traditional model as the Scheffé model. Under the Scheffé model, it can be argued that it would seldom make sense to be interested in estimating the values of the specific levels, in contrast to a fixed effects model. This traditional definition seems to be behind much ecological thinking about random effects models; for example, Bolker *et al*. (2008) give the Scheffé definition as ‘the’ definition of random effects in their Glossary. But the modern application of random effects models appears to have undergone substantial ‘mission creep’ away from the traditional Scheffé definition.

In Halstead *et al*.'s situation, it seems likely that there was a relatively unique motivation for selecting each study area, and there is no way one could construe that anything resembling a random sampling mechanism was used. One might say the study areas could be modelled as if they were random samples from some imaginary population of ‘similar’ areas. But it is here that the frequentist starts to tread on a slippery slope, as this imaginary population sounds a lot like a statement of a Bayesian prior. This tension between the random effects formulation and an interest in estimating the specific level effects, as if they were fixed effects, has bedevilled the traditional random effects analysis since its inception (e.g. Robinson, 1991).

Stein's paradox (Efron & Morris, 1977) and the resulting theory of shrinkage and smoothing estimators have clarified modern thinking on such applications. Stein shocked the statistical world in 1956 by demonstrating that maximum likelihood fixed effects estimators are inadmissible in three or more dimensions, that is, some better estimator always exists. Stein and colleagues demonstrated that estimators that algebraically resemble random effects estimators outperform fixed effects estimators when applied to fixed effects designs. These estimators can be interpreted as roughness-penalized likelihood estimators, which in turn can be interpreted as Bayesian models where the roughness penalty is essentially a probabilistic ‘texture’ model. In either case, the underlying model is essentially a traditional fixed effects model with some additional assumption about the texture of the fixed effects. Although the algebra ends up looking the same, the underlying conceptual model is very different from the Scheffé model.
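Stein's result is easy to see in simulation. The following minimal sketch (my illustration; the dimension, seed, and normal setup are assumptions) compares the maximum likelihood estimator with the classic James–Stein shrinkage estimator for a vector of fixed means, each observed once with unit-variance noise.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 500                                  # dimension; James-Stein requires p >= 3
theta = rng.normal(0.0, 1.0, size=p)     # fixed "true" effects (not random in the model)
x = rng.normal(theta, 1.0)               # one noisy observation per effect

# The maximum likelihood (fixed effects) estimate of theta is x itself.
mle = x

# James-Stein shrinks every coordinate toward 0 by a data-driven factor.
shrink = 1.0 - (p - 2) / np.sum(x ** 2)
js = shrink * x

# Total squared error of each estimator against the fixed truth.
mle_err = np.sum((mle - theta) ** 2)
js_err = np.sum((js - theta) ** 2)
```

Even though `theta` is fixed, the shrinkage estimator, which looks algebraically like a random effects (BLUP-style) estimator, achieves lower total squared error than the fixed effects MLE.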

The distinctions between the traditional Scheffé model and the shrinkage/smoothing model are subtle but important. In the shrinkage/smoothing model, the ‘variance’ parameter is really a description of an essentially fixed deterministic surface: it is a roughness metric of something fixed, not an expectation of a function of random variables, which is what defines a true variance. Given the very basic differences in how these models should behave in hypothetical future realizations, which is at the centre of frequentist methodology, it seems that such models require fundamentally different treatments in the frequentist context. As noted by Hodges & Reich (2010), at present these issues are not fully appreciated in the applied literature, and it will take some time for the theory to catch up with applications.

The Bayesian need not be especially concerned about these issues because the Bayesian, unlike the traditional frequentist, recognizes no distinction between fixed and random effects. Bayesians can motivate such analyses purely on the superior performance of shrinkage/smoothing estimation if the Scheffé paradigm seems inappropriate. This does not mean that applying Bayesian methods automatically solves all concerns with such analyses. A concern receiving increasing attention is ‘spatial confounding’, which can occur in models such as Halstead *et al*.'s that include both random spatial terms and spatially referenced covariates, such as Halstead *et al*.'s ‘linear habitat’ covariate. The general problem is that the flexibility of the spatial random effects allows them to ‘soak up’ some of the explanatory power of the covariate, causing the covariate's importance to be underestimated. In this respect, it seems possible that Halstead *et al*.'s analysis may underestimate the importance of features such as linear habitat.
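One stylized way to see the ‘soak up’ mechanism (an illustration of the underlying collinearity, not a reproduction of Halstead *et al*.'s analysis): when a smooth covariate lies nearly in the span of a flexible spatial basis, the covariate's effect becomes poorly identified and its standard error inflates, diluting its apparent importance. The locations, covariate, basis, and all settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200
s = np.linspace(0.0, 1.0, n)                           # spatial locations
x = np.sin(2 * np.pi * s) + 0.1 * rng.normal(size=n)   # smooth spatial covariate
y = 2.0 * x + rng.normal(size=n)                       # true covariate effect = 2

def ols_beta_se(X, y):
    """OLS estimate and standard error of the first coefficient."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0], np.sqrt(cov[0, 0])

# Model 1: intercept and covariate only.
X1 = np.column_stack([x, np.ones(n)])
b1, se1 = ols_beta_se(X1, y)

# Model 2: add a flexible spatial basis (Gaussian bumps at 20 knots),
# which can itself represent most of the smooth covariate.
knots = np.linspace(0.0, 1.0, 20)
B = np.exp(-0.5 * ((s[:, None] - knots[None, :]) / 0.05) ** 2)
X2 = np.column_stack([x, np.ones(n), B])
b2, se2 = ols_beta_se(X2, y)
```

With the spatial terms included, the covariate's standard error is several times larger: the spatial basis competes with the covariate for the same smooth signal, which is the essence of spatial confounding.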

Techniques that address such spatial confounding are an active area of research (Paciorek, 2010; Hodges & Reich, 2010; Hughes & Haran, 2011, unpubl. data). The ecological researcher needs to remain abreast of these developments. It seems likely that these methods will transition from the research phase to application fairly rapidly, offering researchers a new set of increasingly powerful tools for spatial analyses. Halstead *et al*. nicely illustrate the power and promise of these modern methods.

### References

- Bolker *et al*. (2008). Generalized linear mixed models: a practical guide for ecology and evolution. *Trends Ecol. Evol. (Amst.)* 24, 127–135.
- Efron & Morris (1977). Stein's paradox in statistics. *Sci. Am.* 236, 119–127.
- Halstead *et al*. (2012). Bayesian shared frailty models for regional inference about wildlife survival. *Anim. Conserv.* 15, 117–124.
- Hodges & Reich (2010). Adding spatially correlated errors can mess up the fixed effect you love. *Am. Stat.* 64, 325–334.
- Lele & Dennis (2009). Bayesian methods for hierarchical models: are ecologists making a Faustian bargain? *Ecol. Appl.* 19, 581–584.
- Paciorek (2010). The importance of scale for spatial-confounding bias and precision of spatial regression estimators. *Stat. Sci.* 25, 107–125.
- Ponciano *et al*. (2009). Hierarchical models in ecology: confidence intervals, hypothesis testing and model selection using data cloning. *Ecology* 90, 356–362.
- Robinson (1991). That BLUP is a good thing: the estimation of random effects. *Stat. Sci.* 6, 15–32.