Keywords:

  • coefficient of determination;
  • goodness-of-fit;
  • heritability;
  • information criteria;
  • intra-class correlation;
  • linear models;
  • model fit;
  • repeatability;
  • variance explained

Summary

  1. The use of both linear and generalized linear mixed-effects models (LMMs and GLMMs) has become popular not only in social and medical sciences, but also in biological sciences, especially in the field of ecology and evolution. Information criteria, such as Akaike Information Criterion (AIC), are usually presented as model comparison tools for mixed-effects models.

  2. The presentation of ‘variance explained’ (R2) as a relevant summarizing statistic of mixed-effects models, however, is rare, even though R2 is routinely reported for linear models (LMs) and also generalized linear models (GLMs). R2 has the extremely useful property of providing an absolute value for the goodness-of-fit of a model, which cannot be given by the information criteria. As a summary statistic that describes the amount of variance explained, R2 can also be a quantity of biological interest.

  3. One reason for the under-appreciation of R2 for mixed-effects models lies in the fact that R2 can be defined in a number of ways. Furthermore, most definitions of R2 for mixed-effects models have theoretical problems (e.g. decreased or negative R2 values in larger models) and/or their use is hindered by practical difficulties (e.g. implementation).

  4. Here, we make a case for the importance of reporting R2 for mixed-effects models. We first provide the common definitions of R2 for LMs and GLMs and discuss the key problems associated with calculating R2 for mixed-effects models. We then recommend a general and simple method for calculating two types of R2 (marginal and conditional R2) for both LMMs and GLMMs, which are less susceptible to common problems.

  5. This method is illustrated by examples and can be widely employed by researchers in any field of research, regardless of the software package used for fitting mixed-effects models. The proposed method has the potential to facilitate the presentation of R2 for a wide range of circumstances.


Introduction


Many biological datasets have multiple strata due to the hierarchical nature of the biological world, for example, cells within individuals, individuals within populations, populations within species and species within communities. Therefore, we need statistical methods that explicitly model the hierarchical structure of real data. Linear mixed-effects models (LMMs; also referred to as multilevel/hierarchical models) and their extension, generalized linear mixed-effects models (GLMMs), form a class of models that incorporate multilevel hierarchies in data. Indeed, LMMs and GLMMs are becoming a part of standard methodological tool kits in biological sciences (Bolker et al. 2009), as well as in social and medical sciences (Gelman & Hill 2007; Congdon 2010; Snijders & Bosker 2011). Given the widespread use of GLMMs, a statistic that summarizes the goodness-of-fit of a mixed-effects model to the data would be of great value, yet no such summary statistic is currently widely accepted for mixed-effects models.

Many scientists have traditionally used the coefficient of determination, R2 (ranging from 0 to 1), as a summary statistic to quantify the goodness-of-fit of fixed-effects models such as multiple linear regression, ANOVA, ANCOVA and generalized linear models (GLMs). The concept of R2 as ‘variance explained’ is intuitive. Because R2 is unitless, it is extremely useful as a summary index for statistical models: one can objectively evaluate the fit of a model and, under some circumstances (e.g. models with the same response and a similar set of predictors), compare R2 values across studies in a similar manner to standardized effect size statistics, which means that R2 can also be utilized for meta-analysis (Nakagawa & Cuthill 2007).

In Table 1, we briefly summarize 12 properties of R2 (based on Kvålseth 1985 and Cameron & Windmeijer 1996; compilation adopted from Orelien & Edwards 2008) that will provide the reader with a good sense of what a ‘traditional’ R2 statistic should be and also provide a benchmark for generalizing R2 to mixed-effects models. Generalizing R2 from linear models (LMs) to LMMs and GLMMs turns out to be a difficult task. A number of ways of obtaining R2 for mixed models have been proposed (e.g. Snijders & Bosker 1994; Xu 2003; Liu, Zheng & Shen 2008; Orelien & Edwards 2008). These proposed methods, however, share some theoretical problems or practical difficulties (discussed in detail below), and consequently, no consensus for a definition of R2 for mixed-effects models has emerged in the statistical literature. Therefore, it is not surprising that R2 is rarely reported as a model summary statistic when mixed models are used.

Table 1. Twelve properties of ‘traditional’ R2 for regression models; adopted from Orelien & Edwards (2008)

  • R2 must represent a goodness-of-fit and have an intuitive interpretation (Kvålseth 1985)
  • R2 must be unit free, that is, dimensionless (Kvålseth 1985)
  • R2 should range from 0 to 1, where 1 represents a perfect fit (Kvålseth 1985)
  • R2 should be general enough to apply to any type of statistical model (Kvålseth 1985)
  • R2 values should not be affected by different model fitting techniques (Kvålseth 1985)
  • R2 values from different models fitted to the same data should be directly comparable (Kvålseth 1985)
  • Relative R2 values should be comparable to other accepted goodness-of-fit measures (Kvålseth 1985)
  • All residuals (positive and negative) should be weighted equally by R2 (Kvålseth 1985)
  • R2 values should always increase as more predictors are added (without degrees-of-freedom correction) (Cameron & Windmeijer 1996)
  • R2 values based on residual sum of squares and those based on explained sum of squares should match (Cameron & Windmeijer 1996)
  • R2 values and statistical significance of slope parameters should show correspondence (Cameron & Windmeijer 1996)
  • R2 should be interpretable in terms of the information content of the data (Cameron & Windmeijer 1996)

In the absence of R2, information criteria are often used and reported as comparison tools for mixed models. Information criteria are based on the likelihood of the data given a fitted model (the ‘likelihood’) penalized by the number of estimated parameters of the model. Commonly used information criteria include the Akaike Information Criterion (AIC; Akaike 1973), the Bayesian information criterion (BIC; Schwarz 1978) and the more recently proposed deviance information criterion (DIC; Spiegelhalter et al. 2002; reviewed in Claeskens & Hjort 2009; Grueber et al. 2011; Hamaker et al. 2011). Information criteria are used to select the ‘best’ or ‘better’ models, and they are indeed useful for selecting the most parsimonious models from a candidate model set (Burnham & Anderson 2002). There are, however, at least three important limitations to the use of information criteria in relation to R2: (i) while information criteria provide an estimate of the relative fit of alternative models, they do not tell us anything about the absolute model fit (cf. evidence ratio; Burnham & Anderson 2002), (ii) information criteria do not provide any information on variance explained by a model (Orelien & Edwards 2008), and (iii) information criteria are not comparable across different datasets under any circumstances, because they are highly dataset specific (in other words, they are not standardized effect statistics which can be used for meta-analysis; Nakagawa & Cuthill 2007).

In this paper, we start by providing the most common definitions of R2 in LMs and GLMs. We then review previously proposed definitions of R2 measures for mixed-effects models and discuss the problems and difficulties associated with these measures. Finally, we explain a general and simple method for calculating variance explained by LMMs and GLMMs and illustrate its use by simulated ecological datasets.

Definitions of R2


In this section, we first describe some of the existing methods for estimating a coefficient of determination, R2, for LMs. A standard (general) linear model (LM) can be written as:

$$y_i = \beta_0 + \sum_{h=1}^{p} \beta_h x_{hi} + \varepsilon_i \qquad (\text{eqn 1})$$
$$\varepsilon_i \sim \mathrm{Gaussian}(0, \sigma_{\varepsilon}^{2}) \qquad (\text{eqn 2})$$

where $y_i$ is the ith response value, $x_{hi}$ is the ith value for the hth predictor, $\beta_0$ is the intercept, $\beta_h$ is the slope (regression coefficient) of the hth predictor, and $\varepsilon_i$ is the ith residual; residual errors are normally (Gaussian) distributed with a variance of $\sigma_{\varepsilon}^{2}$. Such regression models are fitted by ordinary least squares (OLS) methods that minimize the sum of squared distances between observed and fitted responses (i.e. minimizing the residual sum of squares). The residual sum of squares appears in the formulation of the most common definition of the coefficient of determination, R2 (Kvålseth 1985; Draper & Smith 1998).

$$R^{2}_{O} = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \qquad (\text{eqn 3})$$
$$\hat{y}_i = \hat{\beta}_0 + \sum_{h=1}^{p} \hat{\beta}_h x_{hi} \qquad (\text{eqn 4})$$

where n is the number of observations (i.e. the total sample size), $\bar{y}$ is the mean of the response, $\hat{y}_i$ is the ith fitted response value, $\hat{\beta}_0$ and $\hat{\beta}_h$ are estimates of $\beta_0$ and $\beta_h$, respectively, and the subscript ‘O’ in $R^{2}_{O}$ signifies OLS regression. An interesting and important feature to note here is that the ‘variance explained’ is rather indirectly composed of 1 minus the ‘variance unexplained’ (we revisit this very point later). An equivalent yet perhaps more intuitive formulation of $R^{2}_{O}$ can also be written as:

$$R^{2}_{O} = 1 - \frac{\mathrm{var}(y_i - \hat{y}_i)}{\mathrm{var}(y_i)} \qquad (\text{eqn 5})$$
$$R^{2}_{O} = 1 - \frac{\hat{\sigma}_{\varepsilon}^{2}}{\mathrm{var}(y_i)} \qquad (\text{eqn 6})$$

where ‘var’ indicates the variance of what is in the following parentheses. Equation (eqn 6) can also be expressed as the ratio between the residual variance of the model of interest and the residual variance of the null model (also referred to as the empty model or the intercept model):

$$R^{2}_{O} = 1 - \frac{\hat{\sigma}_{\varepsilon}^{2}}{\hat{\sigma}_{\varepsilon 0}^{2}} \qquad (\text{eqn 7})$$

where $\hat{\sigma}_{\varepsilon 0}^{2}$ is the residual variance of the null model.
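To make these equivalent formulations concrete, the following minimal R sketch (ours, not part of the original text) computes $R^{2}_{O}$ via eqns 3, 5 and 7 for an ordinary linear model; the built-in dataset and the particular model are arbitrary choices for illustration only.

```r
## Minimal sketch (illustration only): R2 for an ordinary linear model,
## computed three equivalent ways (eqns 3, 5 and 7) on a built-in dataset.
data(mtcars)
fit  <- lm(mpg ~ wt + hp, data = mtcars)   # model of interest
null <- lm(mpg ~ 1,       data = mtcars)   # null (intercept-only) model

y     <- mtcars$mpg
y_hat <- fitted(fit)

## eqn 3: one minus residual sum of squares over total sum of squares
r2_eqn3 <- 1 - sum((y - y_hat)^2) / sum((y - mean(y))^2)

## eqn 5: one minus the variance of the residuals over the variance of the response
r2_eqn5 <- 1 - var(y - y_hat) / var(y)

## eqn 7: one minus the residual variance of the model over that of the null model
r2_eqn7 <- 1 - var(residuals(fit)) / var(residuals(null))

c(r2_eqn3, r2_eqn5, r2_eqn7, summary(fit)$r.squared)  # all numerically identical
```

All three quantities coincide with the R2 reported by summary.lm(), which is the point of the equivalence.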

There are two difficulties with generalizing this definition of $R^{2}_{O}$ to the GLMM context. When generalizing to non-Gaussian response variables (i.e. GLMs), it is not straightforward to get an appropriate estimate of the residual variance. Also, when generalizing to mixed-effects models that consist of error terms at different hierarchical levels (see below), it is not immediately obvious which estimate should be used as the unexplained variance. For GLMs, R2 can be defined using the maximum likelihood (ML) of the full and null models (Maddala 1983). Perhaps the best-known and most popular definition is:

$$R^{2}_{g} = 1 - \left( \frac{L_0}{L_{\beta}} \right)^{2/n} \qquad (\text{eqn 8})$$

where $L_{\beta}$ is the likelihood of the data given the fitted model of interest, $L_0$ is the likelihood of the data given the null model, n is the total sample size and the subscript ‘g’ in $R^{2}_{g}$ signifies ‘general’ (this formulation is based on the geometric mean squared improvement; see Menard 2000). Because $R^{2}_{g}$ cannot reach 1 even when the model of interest fits the data perfectly, Nagelkerke (1991) proposed an adjustment to Equation (eqn 8):

$$R^{2}_{G} = \frac{1 - \left( L_0 / L_{\beta} \right)^{2/n}}{1 - L_0^{\,2/n}} \qquad (\text{eqn 9})$$

where the denominator term can be interpreted as the maximum possible value of $R^{2}_{g}$ and the subscript ‘G’ in $R^{2}_{G}$ signifies ‘General’. A definition of R2 that is comparable to $R^{2}_{O}$ is:

$$R^{2}_{D} = 1 - \frac{-2 \ln L_{\beta}}{-2 \ln L_0} \qquad (\text{eqn 10})$$

We have deliberately left −2 in the denominator and numerator so that $R^{2}_{D}$ (‘D’ signifies ‘deviance’) can be compared with Equation (eqn 3). For a LM (Equation (eqn 1)), the −2 log-likelihood statistic (sometimes referred to as deviance) is equal to the residual sum of squares based on OLS of this model (Menard 2000; see a series of $R^{2}_{D}$ formulas for non-Gaussian responses in Table 1 of Cameron & Windmeijer 1997). There are several other likelihood-based definitions of R2 (reviewed in Cameron & Windmeijer 1997; Menard 2000), but we do not review these definitions, as they are less relevant to our approach below. We will instead discuss the generalization of R2 to LMMs and GLMMs, and associated problems in this process, in the next section.
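For readers who want to see these likelihood-based measures in practice, here is a minimal R sketch (ours, not from the original text) that computes eqns 8-10 for a simple binomial GLM; the example data and model are arbitrary and used only because they ship with R.

```r
## Minimal sketch (illustration only): likelihood-based R2 measures (eqns 8-10)
## for a binomial GLM, using logLik() of the fitted and null models.
data(mtcars)
fit  <- glm(am ~ wt, family = binomial, data = mtcars)
null <- glm(am ~ 1,  family = binomial, data = mtcars)

n   <- nrow(mtcars)
llf <- as.numeric(logLik(fit))    # log-likelihood of the model of interest
ll0 <- as.numeric(logLik(null))   # log-likelihood of the null model

## eqn 8: "general" R2 based on the geometric mean squared improvement
r2_g <- 1 - exp(-(2 / n) * (llf - ll0))

## eqn 9: Nagelkerke's adjustment, dividing by the maximum attainable value of eqn 8
r2_G <- r2_g / (1 - exp((2 / n) * ll0))

## eqn 10: deviance-based R2, comparable to eqn 3
r2_D <- 1 - (-2 * llf) / (-2 * ll0)

c(r2_g, r2_G, r2_D)
```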

Common problems when generalizing R2


First, let us imagine an experimental design where we sample repeatedly from the same set of individuals. Extending the LM shown in Equations (eqn 1), (eqn 2), we can fit a LMM with one random factor (‘individuals’ in our example) defined as:

$$y_{ij} = \beta_0 + \sum_{h=1}^{p} \beta_h x_{hij} + \alpha_j + \varepsilon_{ij} \qquad (\text{eqn 11})$$
$$\alpha_j \sim \mathrm{Gaussian}(0, \sigma_{\alpha}^{2}) \qquad (\text{eqn 12})$$
$$\varepsilon_{ij} \sim \mathrm{Gaussian}(0, \sigma_{\varepsilon}^{2}) \qquad (\text{eqn 13})$$

where $y_{ij}$ is the ith response of the jth individual, $x_{hij}$ is the ith value of the jth individual for the hth predictor, $\beta_0$ is the intercept, $\beta_h$ is the slope (regression coefficient) of the hth predictor, $\alpha_j$ is the individual-specific effect from a normal distribution of individual-specific effects with mean of zero and variance of $\sigma_{\alpha}^{2}$ (between-individual variance) and $\varepsilon_{ij}$ is the residual associated with the ith value of the jth individual from a normal distribution of residuals with mean of zero and variance of $\sigma_{\varepsilon}^{2}$ (within-individual variance). As seen in the previous equations, LMMs have by definition more than one variance component (in this case two: $\sigma_{\alpha}^{2}$ and $\sigma_{\varepsilon}^{2}$), while LMs have only one (Equations (eqn 1) and (eqn 2)).

One of the earliest definitions of R2 for mixed-effects models is based on the reduction of each variance component when fixed-effect predictors are included; in other words, a separate R2 for each random effect and for the residual variance (Raudenbush & Bryk 1986; Bryk & Raudenbush 1992; we detail this measure in the section ‘Related issues’). This approach is analogous to Equation (eqn 7). As pointed out by Snijders & Bosker (1994), however, it is not uncommon that some predictors reduce the residual variance $\sigma_{\varepsilon}^{2}$ while simultaneously increasing the between-individual variance $\sigma_{\alpha}^{2}$, and vice versa, even though the total sum of the variance components $\sigma_{\alpha}^{2} + \sigma_{\varepsilon}^{2}$ is usually reduced (for an example, see Table 1 in Snijders & Bosker 1994). Such behaviour of variance components can sometimes result in negative R2, because $\hat{\sigma}_{\alpha}^{2}$ and $\hat{\sigma}_{\varepsilon}^{2}$ can be larger than $\hat{\sigma}_{\alpha 0}^{2}$ and $\hat{\sigma}_{\varepsilon 0}^{2}$, respectively (i.e. the corresponding variance components of the intercept model).

To avoid this problem, Snijders & Bosker (1994) proposed what they refer to as $R^{2}_{1}$ and $R^{2}_{2}$ for LMMs with one random factor (as in Equation (eqn 11)): one R2 value is calculated for each level of a LMM (i.e. the unit level and the grouping/individual level). $R^{2}_{1}$ can be expressed in two forms (analogous to Equations (eqn 5) and (eqn 7)):

$$R^{2}_{1} = 1 - \frac{\mathrm{var}(y_{ij} - \hat{y}_{ij})}{\mathrm{var}(y_{ij})} \qquad (\text{eqn 14})$$
$$\hat{y}_{ij} = \hat{\beta}_0 + \sum_{h=1}^{p} \hat{\beta}_h x_{hij} \qquad (\text{eqn 15})$$
$$R^{2}_{1} = 1 - \frac{\hat{\sigma}_{\alpha}^{2} + \hat{\sigma}_{\varepsilon}^{2}}{\hat{\sigma}_{\alpha 0}^{2} + \hat{\sigma}_{\varepsilon 0}^{2}} \qquad (\text{eqn 16})$$

where $R^{2}_{1}$ is the variance explained at the unit of analysis (i.e. level 1; within-individual variance explained), $\hat{y}_{ij}$ is the ith fitted value for the jth individual and other notations are as above. In a similar manner, $R^{2}_{2}$ can be written as:

$$R^{2}_{2} = 1 - \frac{\mathrm{var}(\bar{y}_{j} - \bar{\hat{y}}_{j})}{\mathrm{var}(\bar{y}_{j})} \qquad (\text{eqn 17})$$
$$k = \frac{M}{\sum_{j=1}^{M} 1/m_j} \qquad (\text{eqn 18})$$
$$R^{2}_{2} = 1 - \frac{\hat{\sigma}_{\alpha}^{2} + \hat{\sigma}_{\varepsilon}^{2}/k}{\hat{\sigma}_{\alpha 0}^{2} + \hat{\sigma}_{\varepsilon 0}^{2}/k} \qquad (\text{eqn 19})$$

where $R^{2}_{2}$ is the variance explained at the individual level (i.e. level 2; between-individual variance explained), $\bar{y}_{j}$ is the mean observed value for the jth individual, $\bar{\hat{y}}_{j}$ is the mean fitted value for the jth individual, k is the harmonic mean of the number of replicates per individual, $m_j$ is the number of replicates for the jth individual, M is the total number of individuals, and other notations are as above. An advantage of using $R^{2}_{1}$ and $R^{2}_{2}$ is that we can evaluate how much variance is explained at each level of the analysis. However, there are at least three problems with this approach (a numerical illustration of $R^{2}_{1}$ and $R^{2}_{2}$ follows this paragraph): (i) it turns out that $R^{2}_{1}$ and $R^{2}_{2}$ can decrease in larger models (note that $R^{2}_{O}$ can only increase when more predictors are added without the degrees-of-freedom adjustment; see Table 1), (ii) it is not clear how $R^{2}_{1}$ and $R^{2}_{2}$ can be extended to more than two levels (i.e. more than one random factor) and (iii) it is also not obvious how $R^{2}_{1}$ and $R^{2}_{2}$ are to be generalized to GLMMs.
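The following minimal R sketch (ours, not from the original text) computes $R^{2}_{1}$ and $R^{2}_{2}$ from the variance components of a null and a full LMM with one random factor; the sleepstudy data shipped with lme4 are used purely for convenience.

```r
## Minimal sketch (illustration only): Snijders & Bosker's R2_1 and R2_2
## (eqns 16, 18 and 19) for a one-random-factor LMM, using lme4's sleepstudy data.
library(lme4)
data(sleepstudy)

full <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
null <- lmer(Reaction ~ 1    + (1 | Subject), data = sleepstudy)

vc_f <- as.data.frame(VarCorr(full))$vcov   # c(between-subject, residual) variances
vc_0 <- as.data.frame(VarCorr(null))$vcov

## harmonic mean number of replicates per subject (k in eqn 18)
m <- table(sleepstudy$Subject)
k <- length(m) / sum(1 / m)

r2_1 <- 1 - (vc_f[1] + vc_f[2])     / (vc_0[1] + vc_0[2])      # unit level (eqn 16)
r2_2 <- 1 - (vc_f[1] + vc_f[2] / k) / (vc_0[1] + vc_0[2] / k)  # subject level (eqn 19)
c(r2_1, r2_2)
```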

The first problem arises because the estimate of $\hat{\sigma}_{\alpha}^{2} + \hat{\sigma}_{\varepsilon}^{2}$ (or $\hat{\sigma}_{\alpha}^{2} + \hat{\sigma}_{\varepsilon}^{2}/k$) from a model with more predictors can be larger than that from a model with fewer predictors, so that $R^{2}_{1}$ and $R^{2}_{2}$ can take negative values (Snijders & Bosker 1994). In other words, the estimated unexplained variance of the model of interest can exceed that of the intercept model ($\hat{\sigma}_{\alpha 0}^{2} + \hat{\sigma}_{\varepsilon 0}^{2}$). Snijders & Bosker (1999) offer two explanations for decreases in R2 and/or negative R2 in a larger model: (i) chance fluctuation (or sampling variance), which is most prominent when the sample size is small, or (ii) misspecification of the model, when the new predictor is redundant in relation to one or more other predictors in the model. Snijders & Bosker (1999) suggest that decreases in $R^{2}_{1}$ and $R^{2}_{2}$ (changes in the ‘wrong’ direction) can be used as a diagnostic in model selection. However, such misspecification need not be the cause of an increase in ($\hat{\sigma}_{\alpha}^{2} + \hat{\sigma}_{\varepsilon}^{2}$) (and consequently of decreases in $R^{2}_{1}$ and $R^{2}_{2}$).

The second problem, that of extending $R^{2}_{1}$ and $R^{2}_{2}$ to models with more than two levels, was addressed by Gelman & Pardoe (2006), who provide a solution that extends $R^{2}_{1}$ and $R^{2}_{2}$ to an arbitrary number of levels (or random factors) in a Bayesian framework. However, its general implementation is rather difficult, and we therefore refer readers interested in this method to the original publication.

The third problem, generalizing $R^{2}_{1}$ and $R^{2}_{2}$ to GLMMs, is particularly profound, because the residual variance, $\sigma_{\varepsilon}^{2}$, cannot be easily defined for non-Gaussian responses (see also below). At first glance, adopting likelihood-based R2 measures such as those in Equations (eqn 8)-(eqn 10) could resolve this problem, although such a method only provides R2 at the unit level (i.e. level 1); indeed, this type of solution has been recommended before (Edwards et al. 2008). Unfortunately, there are three obstacles to using a likelihood-based R2 such as $R^{2}_{D}$ for generalized models: (i) the likelihoods cannot be compared when models are fitted by restricted maximum likelihood (REML) (the standard way to estimate variance components in LMMs; Pinheiro & Bates 2000), (ii) it is not clear whether we should use the likelihood from a null model of the form $y_{ij} = \beta_0 + \varepsilon_{ij}$ (excluding random factors) or $y_{ij} = \beta_0 + \alpha_j + \varepsilon_{ij}$ (including random factors; see Equation (eqn 10)) and (iii) likelihood-based R2 measures applied to LMMs and GLMMs are also subject to the problem of decreased or even negative R2 with the introduction of additional predictors. We are not aware of a solution to this latter obstacle, but partial solutions to obstacles (i) and (ii) have been suggested and need separate discussion.

The first obstacle of fitting models with REML only applies to LMMs, and this can be resolved by using the ML estimates instead of REML. However, it is well known that variance components will be biased when models are fitted by ML (e.g. Pinheiro & Bates 2000).

With respect to the second obstacle regarding the choice of null models, it seems that both are permitted and accepted in the literature (e.g. Xu 2003; Orelien & Edwards 2008). Inclusion of random factors in the intercept model, however, can certainly change the likelihood of the null model that is used as a reference, and thus it changes R2 values. This relates to an important matter. For mixed-effects models, R2 can be categorized loosely into two types: marginal R2 and conditional R2 (Vonesh, Chinchilli & Pu 1996). Marginal R2 is concerned with variance explained by fixed factors, and conditional R2 is concerned with variance explained by both fixed and random factors. So far, we have concentrated only on the former, marginal R2, but we will expand on the distinction between the two types in the next section.

Although we do not review all proposed definitions of R2 for mixed-effects models here (see Menard 2000; Xu 2003; Orelien & Edwards 2008; Roberts et al. 2011), it appears that all alternative definitions of R2 suffer from one or more aforementioned problems and their implementations may not be straightforward. In the next section, we introduce a definition of R2, which is simple and common to both LMMs and GLMMs and probably less prone to the aforementioned problems than previously proposed definitions.

General and simple R2 for GLMMs


We first revisit the point that the variance explained ($R^{2}_{O}$) is actually defined via the variance unexplained by the model, and we now redefine $R^{2}_{O}$ more directly in terms of the variance explained:

$$R^{2}_{O} = \frac{\mathrm{var}(\hat{y}_i)}{\mathrm{var}(y_i)} \qquad (\text{eqn 20})$$
$$R^{2}_{O} = \frac{\mathrm{var}(\hat{y}_i)}{\mathrm{var}(\hat{y}_i) + \hat{\sigma}_{\varepsilon}^{2}} \qquad (\text{eqn 21})$$

where the notations are as in Equations (eqn 3), (eqn 4), (eqn 5), (eqn 6). Below, we extend this more direct formulation first to LMMs and then to GLMMs. For simplicity, we use a LMM with two random factors as an example. For the sake of illustration, assume that the two random effects are ‘groups’ (with individuals uniquely assigned to groups) and ‘individuals’ (with multiple observations per individual) (c.f. Equations (eqn 11), (eqn 12), (eqn 13)). Observations are thus clustered in individuals, and individuals are nested within groups (see Schielzeth & Nakagawa 2012 for a discussion of nesting in mixed models). The model can be written as:

$$y_{ijk} = \beta_0 + \sum_{h=1}^{p} \beta_h x_{hijk} + \gamma_k + \alpha_{jk} + \varepsilon_{ijk} \qquad (\text{eqn 22})$$
$$\gamma_k \sim \mathrm{Gaussian}(0, \sigma_{\gamma}^{2}) \qquad (\text{eqn 23})$$
$$\alpha_{jk} \sim \mathrm{Gaussian}(0, \sigma_{\alpha}^{2}) \qquad (\text{eqn 24})$$
$$\varepsilon_{ijk} \sim \mathrm{Gaussian}(0, \sigma_{\varepsilon}^{2}) \qquad (\text{eqn 25})$$

where $y_{ijk}$ is the ith response of the jth individual belonging to the kth group, $x_{hijk}$ is the ith value of the jth individual in the kth group for the hth predictor, $\gamma_k$ is the group-specific effect from a normal distribution of group-specific effects with mean of zero and variance of $\sigma_{\gamma}^{2}$, $\alpha_{jk}$ is the individual-specific effect from a normal distribution of individual-specific effects with mean of zero and variance of $\sigma_{\alpha}^{2}$ and $\varepsilon_{ijk}$ is the residual from a normal distribution of residuals with mean of zero and variance of $\sigma_{\varepsilon}^{2}$. An R2 for the LMM given by Equation (eqn 22) can be defined as:

$$R^{2}_{\mathrm{LMM}(m)} = \frac{\sigma_{f}^{2}}{\sigma_{f}^{2} + \sigma_{\gamma}^{2} + \sigma_{\alpha}^{2} + \sigma_{\varepsilon}^{2}} \qquad (\text{eqn 26})$$
$$\sigma_{f}^{2} = \mathrm{var}\!\left( \sum_{h=1}^{p} \beta_h x_{hijk} \right) \qquad (\text{eqn 27})$$

where $\sigma_{f}^{2}$ is the variance calculated from the fixed-effect components of the LMM (c.f. Snijders & Bosker 1999), and m in the parentheses indicates marginal R2 (i.e. variance explained by fixed factors; see below for conditional R2). Estimating $\sigma_{f}^{2}$ can, in principle, be carried out by predicting fitted values based on the fixed effects alone (equivalent to multiplying the design matrix of the fixed effects with the vector of fixed-effect estimates) followed by calculating the variance of these fitted values (Snijders & Bosker 1999). Note that $\sigma_{f}^{2}$ should be estimated without degrees-of-freedom correction.

An obvious advantage of this formulation is that $R^{2}_{\mathrm{LMM}(m)}$ can never be negative. It is possible for $R^{2}_{\mathrm{LMM}(m)}$ to decrease with the addition of predictors (remember that $R^{2}_{O}$ never decreases when more predictors are added), but this is unlikely, because $\sigma_{f}^{2}$ should always increase when predictors are added to the model (compare Equations (eqn 16) and (eqn 26)).
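A minimal R sketch of this computation (ours, not part of the original text) is given below. It uses lme4's sleepstudy data and a model with a single random factor rather than the two factors of eqn 22, but the steps are the same: $\sigma_{f}^{2}$ is the variance of the design matrix multiplied by the fixed-effect estimates, computed without degrees-of-freedom correction.

```r
## Minimal sketch (illustration only): marginal and conditional R2 (eqns 26 and 30,
## Gaussian case) for an LMM fitted with lme4, using the sleepstudy data.
library(lme4)
data(sleepstudy)

m <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

f     <- as.vector(model.matrix(m) %*% fixef(m))   # fitted values from fixed effects only
var_f <- sum((f - mean(f))^2) / length(f)          # sigma^2_f, no df correction (see text)

vc    <- as.data.frame(VarCorr(m))
var_r <- sum(vc$vcov[vc$grp != "Residual"])        # sum of random-effect variances
var_e <- vc$vcov[vc$grp == "Residual"]             # residual variance

r2_marginal    <- var_f           / (var_f + var_r + var_e)
r2_conditional <- (var_f + var_r) / (var_f + var_r + var_e)
c(r2_marginal, r2_conditional)
```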

We now generalize $R^{2}_{\mathrm{LMM}(m)}$ to GLMMs. We have mentioned already that for non-Gaussian responses, it is difficult to define the residual variance, $\sigma_{\varepsilon}^{2}$. However, it is possible to define the residual variance on the latent (or link) scale, although this definition of the residual variance is specific to the error distribution and the link function used in the analysis. In GLMMs, $\sigma_{\varepsilon}^{2}$ can be expressed as three components: (i) multiplicative dispersion ($\omega$), (ii) additive dispersion ($\sigma_{e}^{2}$) and (iii) distribution-specific variance ($\sigma_{d}^{2}$) (detailed in Nakagawa & Schielzeth 2010). GLMMs can be implemented in two distinct ways, with either multiplicative or additive dispersion; dispersion is fitted to account for variance that exceeds or falls short of the distribution-specific variance (e.g. from binomial or Poisson distributions). In this paper, we only consider the additive dispersion implementation of GLMMs, although the formulae that we present below can be easily modified for use with GLMMs that apply multiplicative dispersion. For more details, and for a review of intra-class correlation (also known as repeatability) and heritability, both of which are closely connected to R2, see Nakagawa & Schielzeth (2010). When additive dispersion is used, $\sigma_{\varepsilon}^{2}$ is equal to the sum of the additive dispersion component $\sigma_{e}^{2}$ and the distribution-specific variance $\sigma_{d}^{2}$, and thus, R2 for GLMMs can be defined as:

$$R^{2}_{\mathrm{GLMM}(m)} = \frac{\sigma_{f}^{2}}{\sigma_{f}^{2} + \sigma_{\gamma}^{2} + \sigma_{\alpha}^{2} + \sigma_{e}^{2} + \sigma_{d}^{2}} \qquad (\text{eqn 28})$$

where $R^{2}_{\mathrm{GLMM}(m)}$ is the variance explained on the latent (or link) scale rather than the original scale. This can be easily generalized to multiple levels:

$$R^{2}_{\mathrm{GLMM}(m)} = \frac{\sigma_{f}^{2}}{\sigma_{f}^{2} + \sum_{l=1}^{u} \sigma_{l}^{2} + \sigma_{e}^{2} + \sigma_{d}^{2}} \qquad (\text{eqn 29})$$

where u is the number of random factors in the GLMM (or LMM) and $\sigma_{l}^{2}$ is the variance component of the lth random factor. Equation (eqn 29) can be modified to express conditional R2 (i.e. variance explained by fixed and random factors):

$$R^{2}_{\mathrm{GLMM}(c)} = \frac{\sigma_{f}^{2} + \sum_{l=1}^{u} \sigma_{l}^{2}}{\sigma_{f}^{2} + \sum_{l=1}^{u} \sigma_{l}^{2} + \sigma_{e}^{2} + \sigma_{d}^{2}} \qquad (\text{eqn 30})$$

As one can see in Equation (eqn 30), conditional R2 ($R^{2}_{\mathrm{GLMM}(c)}$), despite its somewhat confusing name, can be interpreted as the variance explained by the entire model. Both marginal and conditional $R^{2}_{\mathrm{GLMM}}$ convey unique and interesting information, and we recommend that both be presented in publications.

In the case of a Gaussian response and an identity link (as used in LMMs), the link-scale variance and the original-scale variance are the same and the distribution-specific variance is zero. Thus, ($\sigma_{e}^{2} + \sigma_{d}^{2}$) reduces to $\sigma_{\varepsilon}^{2}$ in Equations (eqn 29) and (eqn 30). For other GLMMs, the link-scale variance will differ from the original-scale variance. We here present R2 calculated on the link scale because of its generality: Equations (eqn 29) and (eqn 30) can be applied to different families of GLMMs, given knowledge of the distribution-specific variance $\sigma_{d}^{2}$ and a model that fits additive overdispersion (e.g. MCMCglmm; Hadfield 2010). Importantly, when the denominators of Equations (eqn 29) and (eqn 30) include $\sigma_{d}^{2}$ (i.e. for GLMMs), both types of $R^{2}_{\mathrm{GLMM}}$ can never reach 1, in contrast to traditional R2 (see also Table 1). Table 2 summarizes the specifications for binary/proportion data and count data, which are equivalent to Equations (eqn 22)-(eqn 25). The GLMM formulations presented in Table 2 for binomial GLMMs were first presented by Snijders & Bosker (1999). They also show that this approach can be extended to multinomial GLMMs where the response is categorical with more than two levels (Snijders & Bosker 1999; see also Dean, Nakagawa & Pizzari 2011). However, to our knowledge, equivalent formulas for Poisson GLMMs (i.e. count data) have not been previously described (for derivation, see Appendix 1).
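The following minimal R sketch (ours, not part of the original text) illustrates the latent-scale calculation for a binomial GLMM with a logit link, using lme4's cbpp proportion data. An observation-level random effect is used here as one way to obtain an additive overdispersion variance in lme4; the distribution-specific variance for the logit link is $\pi^2/3$ (Table 2).

```r
## Minimal sketch (illustration only): latent-scale marginal and conditional R2
## (eqns 29-30) for a binomial GLMM with logit link, using lme4's cbpp data.
library(lme4)
data(cbpp)
cbpp$obs <- factor(seq_len(nrow(cbpp)))   # observation-level random effect (additive dispersion)

m <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd) + (1 | obs),
           family = binomial, data = cbpp)

f     <- as.vector(model.matrix(m) %*% fixef(m))
var_f <- sum((f - mean(f))^2) / length(f)            # sigma^2_f (no df correction)

vc       <- as.data.frame(VarCorr(m))
var_herd <- vc$vcov[vc$grp == "herd"]                # random-factor variance
var_disp <- vc$vcov[vc$grp == "obs"]                 # additive dispersion
var_dist <- pi^2 / 3                                 # distribution-specific variance (logit link)

denom <- var_f + var_herd + var_disp + var_dist
c(marginal = var_f / denom, conditional = (var_f + var_herd) / denom)
```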

Table 2. Examples of generalized linear mixed models (GLMMs) with binomial and Poisson errors (two random factors) and corresponding marginal and conditional R2

Binary and proportion data

  • Link function: logit or probit
  • Distribution-specific variance $\sigma_{d}^{2}$: $\pi^2/3$ (logit); 1 (probit)
  • Model specification:
$$Y_{ijk} \sim \mathrm{Binomial}(m_{ijk}, p_{ijk})$$
$$\mathrm{logit}(p_{ijk}) \ \text{or} \ \mathrm{probit}(p_{ijk}) = \beta_0 + \sum_{h=1}^{p} \beta_h x_{hijk} + \gamma_k + \alpha_{jk} + e_{ijk}$$
$$\gamma_k \sim \mathrm{Gaussian}(0, \sigma_{\gamma}^{2}), \quad \alpha_{jk} \sim \mathrm{Gaussian}(0, \sigma_{\alpha}^{2}), \quad e_{ijk} \sim \mathrm{Gaussian}(0, \sigma_{e}^{2})$$
  • Description: $Y_{ijk}$ is the number of ‘successes’ in $m_{ijk}$ trials by the jth individual in the kth group at the ith occasion (for binary data, $m_{ijk}$ is 1), and $p_{ijk}$ is the underlying (latent) probability of success for the jth individual in the kth group at the ith occasion (for binary data, $\sigma_{e}^{2}$ is 0).
  • Marginal R2: $R^{2}_{\mathrm{GLMM}(m)} = \sigma_{f}^{2} / (\sigma_{f}^{2} + \sigma_{\gamma}^{2} + \sigma_{\alpha}^{2} + \sigma_{e}^{2} + \pi^2/3)$ for the logit link; replace $\pi^2/3$ with 1 for the probit link
  • Conditional R2: $R^{2}_{\mathrm{GLMM}(c)} = (\sigma_{f}^{2} + \sigma_{\gamma}^{2} + \sigma_{\alpha}^{2}) / (\sigma_{f}^{2} + \sigma_{\gamma}^{2} + \sigma_{\alpha}^{2} + \sigma_{e}^{2} + \pi^2/3)$ for the logit link; replace $\pi^2/3$ with 1 for the probit link

Count data

  • Link function: log or square-root
  • Distribution-specific variance $\sigma_{d}^{2}$: $\ln(1 + 1/\exp(\beta_0))$ (log); 0·25 (square-root)
  • Model specification:
$$Y_{ijk} \sim \mathrm{Poisson}(\mu_{ijk})$$
$$\ln(\mu_{ijk}) \ \text{or} \ \sqrt{\mu_{ijk}} = \beta_0 + \sum_{h=1}^{p} \beta_h x_{hijk} + \gamma_k + \alpha_{jk} + e_{ijk}$$
$$\gamma_k \sim \mathrm{Gaussian}(0, \sigma_{\gamma}^{2}), \quad \alpha_{jk} \sim \mathrm{Gaussian}(0, \sigma_{\alpha}^{2}), \quad e_{ijk} \sim \mathrm{Gaussian}(0, \sigma_{e}^{2})$$
  • Description: $Y_{ijk}$ is the observed count for the jth individual in the kth group at the ith occasion, and $\mu_{ijk}$ is the underlying (latent) mean for the jth individual in the kth group at the ith occasion.
  • Marginal R2: $R^{2}_{\mathrm{GLMM}(m)} = \sigma_{f}^{2} / (\sigma_{f}^{2} + \sigma_{\gamma}^{2} + \sigma_{\alpha}^{2} + \sigma_{e}^{2} + \ln(1 + 1/\exp(\beta_0)))$ for the log link; replace $\ln(1 + 1/\exp(\beta_0))$ with 0·25 for the square-root link
  • Conditional R2: $R^{2}_{\mathrm{GLMM}(c)} = (\sigma_{f}^{2} + \sigma_{\gamma}^{2} + \sigma_{\alpha}^{2}) / (\sigma_{f}^{2} + \sigma_{\gamma}^{2} + \sigma_{\alpha}^{2} + \sigma_{e}^{2} + \ln(1 + 1/\exp(\beta_0)))$ for the log link; replace $\ln(1 + 1/\exp(\beta_0))$ with 0·25 for the square-root link

As a technical note, we mention that for binary data, the additive overdispersion is usually fixed to 1 for computational reasons, as additive dispersion is not identifiable (see Goldstein, Browne & Rasbash 2002). Furthermore, some of the R2 formulae include the intercept $\beta_0$ (as in the case of Poisson models for count data with the log link). In such cases, R2 values will be more easily interpreted when fixed effects are centred or otherwise have meaningful zero values (see Schielzeth 2010; see also Appendix 1). We further note that for Poisson models with the square-root link and a mean of $Y_{ijk}$ < 5, the given formula is likely to be inaccurate, because the variance of square-root-transformed count data then substantially exceeds 0·25 (Table 2; Nakagawa & Schielzeth 2010).
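As a companion to the binomial sketch above, the following minimal R sketch (ours, not part of the original text) applies the log-link Poisson formula from Table 2. The data are simulated so that the example is self-contained, the covariate is centred so that $\beta_0$ has a meaningful zero point, and an observation-level random effect is used as one way to obtain an additive dispersion variance in lme4.

```r
## Minimal sketch (illustration only): latent-scale R2 for a Poisson GLMM with log
## link, where the distribution-specific variance is ln(1 + 1/exp(beta0)) (Table 2).
library(lme4)
set.seed(1)
n_group <- 30; n_per <- 10
d <- data.frame(group = factor(rep(seq_len(n_group), each = n_per)),
                x     = rnorm(n_group * n_per))        # centred covariate
d$obs <- factor(seq_len(nrow(d)))                      # observation-level random effect
eta   <- 1 + 0.3 * d$x + rep(rnorm(n_group, 0, 0.5), each = n_per) + rnorm(nrow(d), 0, 0.3)
d$y   <- rpois(nrow(d), exp(eta))                      # simulated overdispersed counts

m <- glmer(y ~ x + (1 | group) + (1 | obs), family = poisson, data = d)

f        <- as.vector(model.matrix(m) %*% fixef(m))
var_f    <- sum((f - mean(f))^2) / length(f)           # sigma^2_f
vc       <- as.data.frame(VarCorr(m))
var_grp  <- vc$vcov[vc$grp == "group"]                 # random-factor variance
var_disp <- vc$vcov[vc$grp == "obs"]                   # additive dispersion
var_dist <- log(1 + 1 / exp(fixef(m)["(Intercept)"]))  # distribution-specific variance

denom <- var_f + var_grp + var_disp + var_dist
c(marginal = var_f / denom, conditional = (var_f + var_grp) / denom)
```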

Related issues


While an obvious advantage of using $R^{2}_{\mathrm{GLMM}}$ is its simplicity, one drawback is that $R^{2}_{\mathrm{GLMM}}$ does not provide information regarding variance explained at each level in the manner that $R^{2}_{1}$ and $R^{2}_{2}$ do. This shortcoming may be remedied by providing the proportion change in variance (PCV; Merlo et al. 2005a,b) as Supporting Information in publications. Using Equations (eqn 22)-(eqn 25), PCV at three different levels can be expressed as:

$$C_{\gamma} = 1 - \frac{\hat{\sigma}_{\gamma}^{2}}{\hat{\sigma}_{\gamma 0}^{2}} \qquad (\text{eqn 31})$$
$$C_{\alpha} = 1 - \frac{\hat{\sigma}_{\alpha}^{2}}{\hat{\sigma}_{\alpha 0}^{2}} \qquad (\text{eqn 32})$$
$$C_{\varepsilon} = 1 - \frac{\hat{\sigma}_{\varepsilon}^{2}}{\hat{\sigma}_{\varepsilon 0}^{2}} \qquad (\text{eqn 33})$$

where $C_{\gamma}$, $C_{\alpha}$ and $C_{\varepsilon}$ are the PCV at the level of groups, individuals and units (observations), respectively, and $\hat{\sigma}_{\gamma 0}^{2}$, $\hat{\sigma}_{\alpha 0}^{2}$ and $\hat{\sigma}_{\varepsilon 0}^{2}$ are the variance components from the intercept model (i.e. Equation (eqn 22) fitted without fixed-effect predictors); the PCV for the additive dispersion can also be calculated by replacing $\sigma_{\varepsilon}^{2}$ with $\sigma_{e}^{2}$. Proportion change in variance is in fact one of the earliest proposed R2 measures for LMMs (Raudenbush & Bryk 1986; Bryk & Raudenbush 1992), although it can take negative values (Snijders & Bosker 1994). We think, however, that presenting PCV along with $R^{2}_{\mathrm{GLMM}}$ will turn out to be very useful, because PCV monitors changes specific to each variance component, that is, how the inclusion of additional predictor(s) has reduced (or increased) the variance components at different levels. For example, if $C_{\gamma}$ = 0·12, $C_{\alpha}$ = −0·05 and $C_{\varepsilon}$ = 0·23, the negative estimate shows that the variance at the individual level has increased (i.e. $\hat{\sigma}_{\alpha}^{2} > \hat{\sigma}_{\alpha 0}^{2}$). Additionally, we refer the reader to Hössjer (2008), who describes an alternative approach for quantifying variance explained at different levels using variance components from a single model.
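A minimal R sketch of the PCV calculation (ours, not part of the original text) is given below; it reuses the one-random-factor sleepstudy example from above, so only two of the three levels in eqns 31-33 apply.

```r
## Minimal sketch (illustration only): proportion change in variance (PCV, eqns
## 31-33) at the subject and unit levels, comparing null and full LMMs.
library(lme4)
data(sleepstudy)
full <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
null <- lmer(Reaction ~ 1    + (1 | Subject), data = sleepstudy)

vc_f <- as.data.frame(VarCorr(full))$vcov   # c(between-subject, residual) variances
vc_0 <- as.data.frame(VarCorr(null))$vcov

pcv_subject  <- 1 - vc_f[1] / vc_0[1]   # change in the between-subject component
pcv_residual <- 1 - vc_f[2] / vc_0[2]   # change in the residual component
c(pcv_subject, pcv_residual)            # negative values indicate an increase
```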

So far, we have only discussed random-intercept models (e.g. Equation (eqn 22)), not random-slope models, in which slopes are fitted for each group (usually along with random intercepts at each level; see Schielzeth & Forstmeier (2009) on the necessity of fitting random-slope models when the main interest is in data-level fixed-effect predictors). Snijders & Bosker (1999) point out that calculating R2 measures like $R^{2}_{1}$ and $R^{2}_{2}$ is easy for random-intercept models but tedious for random-slope models (as variance components for slopes cannot be easily integrated with other variance components; e.g. Schielzeth & Forstmeier 2009). Snijders & Bosker (1999) mention that $R^{2}_{1}$ and $R^{2}_{2}$ obtained from random-slope models are usually very similar to those obtained from random-intercept models in which the same fixed effects are fitted. Therefore, for random-slope models we recommend calculating $R^{2}_{\mathrm{GLMM}}$ (both marginal and conditional) from the corresponding random-intercept models, although PCV should be calculated for the random-slope models of interest.

Worked examples


We now illustrate the calculation of $R^{2}_{\mathrm{GLMM}}$, along with PCV, using simulated datasets. Consider a hypothetical species of beetle that has the following life cycle: larvae hatch and grow in the soil until they pupate, and then adult beetles feed and mate on plants. They are a generalist species and so are widely distributed. We are interested in the effect of extra nutrients during the larval stage on subsequent morphology and reproductive success. Larvae are sampled from 12 different populations (‘Population’; see Fig. 1). Within each population, larvae are collected at two different microhabitats (‘Habitat’): dry and wet areas as determined by soil moisture. Larvae are exposed to two different dietary treatments (‘Treatment’): nutrient rich and control. The species is sexually dimorphic and can be easily sexed at the pupal stage (‘Sex’). Male beetles have two different colour morphs: one dark and the other reddish brown (‘Morph’, labelled A and B in Fig. 1), and the morphs are supposedly subject to sexual selection. Sexed pupae are housed in standard containers until they mature (‘Container’). Each container holds eight same-sex animals from a single population, but with a mix of individuals from the two habitats (N[container] = 120; N[animal] = 960). Three traits are measured after maturation: (i) body length of adult beetles (Gaussian distribution), (ii) frequencies of the two distinct male colour morphs (binomial or Bernoulli distribution) and (iii) the number of eggs laid by each female (Poisson distribution) after random mating (Fig. 1).

Figure 1. A schematic of how hypothetical datasets are obtained (see the main text for details).

Data for this hypothetical example were created in R 2.15.0 (R Development Core Team 2012). We used the function lmer in the R package lme4 (version 0.999375-42; Bates, Maechler & Bolker 2011) for fitting LMMs and GLMMs. We modelled three response variables (see also Table 3): (i) body length with a Gaussian error (‘Size models’), (ii) the two male morphs with a binomial error (logit-link function; ‘Morph models’) and (iii) the female egg numbers with a Poisson error (log-link function; ‘Fecundity models’). For each dataset, we fitted the null (intercept/empty) model and the ‘full’ model; all models contained ‘Population’ and ‘Container’ as random factors, and we included an additive dispersion term (see Table 2) in the Fecundity models. The full models all included ‘Treatment’ and ‘Habitat’ as fixed factors; ‘Sex’ was added as a fixed factor to the body size model. The two kinds of $R^{2}_{\mathrm{GLMM}}$ (marginal and conditional) and the PCV for the three variance components were calculated as explained above. The results of modelling the three different datasets are summarized in Table 3; all datasets and an R script are provided as online supplements (Data S1-S4).
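To show what this workflow looks like in code, the following minimal R sketch (ours, not the authors' supplementary script in Data S4) fits a Size-type model to a small simulated stand-in for the beetle body-length data and computes the marginal and conditional R2; all variable names and simulated values here are illustrative only, and the actual data are provided in Data S1.

```r
## Minimal sketch (illustration only): a Size-model workflow on simulated
## stand-in data with 'Population' and 'Container' as random factors and
## 'Treatment', 'Habitat' and 'Sex' as fixed factors.
library(lme4)
set.seed(2)
d <- expand.grid(Population = factor(1:12), Container = factor(1:10), Ind = 1:8)
d$Container <- interaction(d$Population, d$Container)     # containers nested in populations
d$Treatment <- factor(sample(c("Control", "Exp"), nrow(d), TRUE))
d$Habitat   <- factor(sample(c("dry", "wet"),     nrow(d), TRUE))
d$Sex       <- factor(sample(c("F", "M"),         nrow(d), TRUE))
d$BodyL <- 14 + 0.3 * (d$Treatment == "Exp") - 2.5 * (d$Sex == "M") +
  rnorm(12, 0, 1)[d$Population] + rnorm(nlevels(d$Container), 0, 0.5)[d$Container] +
  rnorm(nrow(d), 0, 1)

fullS <- lmer(BodyL ~ Treatment + Habitat + Sex + (1 | Population) + (1 | Container),
              data = d)

f     <- as.vector(model.matrix(fullS) %*% fixef(fullS))
var_f <- sum((f - mean(f))^2) / length(f)                  # sigma^2_f
vc    <- as.data.frame(VarCorr(fullS))
var_r <- sum(vc$vcov[vc$grp != "Residual"])                # Population + Container
var_e <- vc$vcov[vc$grp == "Residual"]

c(marginal    = var_f / (var_f + var_r + var_e),
  conditional = (var_f + var_r) / (var_f + var_r + var_e))
```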

Table 3. Hypothetical mixed-effects modelling of the effects of nutrient manipulations on body length (mm) (Size models), male morphology (Morph models) and female eggs (Fecundity models); N[population] = 12, N[container] = 120 and N[animal] = 960. Size models are Gaussian mixed models, Morph models are binary mixed models (logit link) and Fecundity models are Poisson mixed models (log link).

| | Size null | Size full | Morph null | Morph full | Fecundity null | Fecundity full |
| Fixed effects, b [95% CI] | | | | | | |
| Intercept | 14·08 [13·41, 14·76] | 15·22 [14·53, 15·91] | −0·38 [−0·96, 0·21] | −1·25 [−1·96, −0·54] | 1·54 [1·22, 1·86] | 1·23 [0·91, 1·56] |
| Treatment (experiment) | | 0·31 [0·18, 0·45] | | 1·01 [0·60, 1·43] | | 0·51 [0·41, 0·61] |
| Habitat (wet) | | 0·09 [−0·05, 0·23] | | 0·68 [0·27, 1·09] | | 0·10 [0·001, 0·20] |
| Sex (male) | | −2·66 [−2·89, −2·45] | | | | |
| Random effects, VC | | | | | | |
| Population | 1·181 | 1·379 | 0·946 | 1·110 | 0·303 | 0·304 |
| Container | 2·206 | 0·235 | <0·0001 | 0·006 | 0·012 | 0·023 |
| Residuals (additive dispersion) | 1·224 | 1·197 | NA | NA | 0·171 | 0·100 |
| Fixed factors | | 1·809 | | 0·371 | | 0·067 |
| PCV[Population] | | −16·77% | | −17·34% | | −0·54% |
| PCV[Container] | | 89·37% | | <−100% | | −84·32% |
| PCV[Residuals] | | 2·21% | | NA | | 41·54% |
| Marginal R2, $R^{2}_{\mathrm{GLMM}(m)}$ | | 39·16% | | 7·77% | | 9·76% |
| Conditional R2, $R^{2}_{\mathrm{GLMM}(c)}$ | | 74·09% | | 31·13% | | 57·23% |
| AIC | 3275 | 3063 | 602·4 | 573·1 | 902·7 | 811·9 |
| BIC | 3295 | 3097 | 614·9 | 594·0 | 920·4 | 836·9 |

  1. CI, confidence interval; PCV, proportion change in variance; NA, not applicable/available; AIC, Akaike Information Criterion; BIC, Bayesian information criterion; ML, maximum likelihood; REML, restricted maximum likelihood; VC, variance components.

  2. For full models, the intercept represents control, dry and female. The 95% CI was estimated by assuming an infinitely large degree of freedom (i.e. t = 1·96). For the Size models, AIC and BIC values were calculated using ML, but other parameters were from REML estimation (see the text for the reason).

In all three model sets, some variance components in the full models were larger than the corresponding variance components in the null models (e.g. the ‘Population’ components; Table 3). In the Morph models, the sum of all the random-effect variance components in the full model was greater than the total of these variance components in the null model (cf. Snijders & Bosker 1994; see above). All these patterns result in negative PCV values (see Table 3), whereas $R^{2}_{\mathrm{GLMM}}$ values never become negative. In the Morph and Fecundity models, $R^{2}_{\mathrm{GLMM}(m)}$ values are relatively minor (8-10%) compared with $R^{2}_{\mathrm{GLMM}(c)}$ values. In the Size models, on the other hand, $R^{2}_{\mathrm{GLMM}(m)}$ was nearly 40%. This was due to a very large effect of ‘Sex’ in the body size model; in this model, the ‘Treatment’ and ‘Habitat’ effects together accounted for only c. 1% of the variance (not shown in Table 3). The variance among containers in the null Size model was conflated with the variance caused by differences between the sexes, as ‘Sex’ and ‘Container’ are confounded by the experimental design (single sex in each container; Fig. 1). Part of the variation assigned to ‘Container’ in the null model was therefore explained by the fixed effect ‘Sex’ in the full model. Finally, it is important to note that the ‘Treatment’ and ‘Habitat’ effects were statistically significant in most cases (five out of six). Much of the variability in the data, however, resided in the random effects, the residuals (additive dispersion) and the distribution-specific variance. Note that the difference between corresponding $R^{2}_{\mathrm{GLMM}(m)}$ and $R^{2}_{\mathrm{GLMM}(c)}$ values reflects how much variability lies in the random effects. Importantly, we believe that comparing the different variance components, including that of the fixed factors, within as well as between models could help researchers gain extra insights into their datasets (Merlo et al. 2005a,b). We also note that in some cases, calculating a variance component for each fixed factor may prove useful.

Final remarks


Here, we have provided a general measure of R2 that we label $R^{2}_{\mathrm{GLMM}}$. Both marginal and conditional $R^{2}_{\mathrm{GLMM}}$ can be easily calculated, regardless of the statistical package used to fit the models. While we do not claim that $R^{2}_{\mathrm{GLMM}}$ is a perfect summary statistic, it is less susceptible to the common problems that plague alternative measures of R2. We further believe that $R^{2}_{\mathrm{GLMM}}$ can be used as a quantity of biological interest, and hence $R^{2}_{\mathrm{GLMM}}$ might be thought of as being estimated from the data rather than merely calculated for a particular dataset. The empirical usefulness of $R^{2}_{\mathrm{GLMM}}$ as an estimator of the explained variance should still be tested in future studies. As with every estimator of biological interest, it is desirable to quantify the uncertainty around this estimate (e.g. a 95% confidence interval, which could be approximated by parametric bootstrapping or MCMC sampling). As far as we are aware, such uncertainty estimates have not been considered for traditional R2. Perhaps future studies can also investigate the usefulness of uncertainty estimates for $R^{2}_{\mathrm{GLMM}}$ and other R2 measures.

We finish with a cautionary note that R2 should not replace model assessments such as diagnostic checks for heteroscedasticity, validation of assumptions on the distribution of random effects and outlier analyses. Above, we presented R2 with the motivation of summarizing the amount of variance explained by a model that is suitable for the specific research questions and datasets. It should only be used on models that have been checked for quality by other means. It is also important to realize that R2 can be large because of predictors that are not of direct interest in a particular study (Tjur 2009), such as the sex effect on body size in our example. Despite these limitations, when used along with other statistics such as AIC and PCV, $R^{2}_{\mathrm{GLMM}}$ will be a useful summary statistic of mixed-effects models for biologists and other scientists alike.

Acknowledgements


We thank S. English, C. Grueber, F. Korner-Nievergelt, E. Santos, A. Senior and T. Uller for comments on earlier versions and M. Lagisz for help in preparing Fig. 1. We are also grateful to the Editor R. O'Hara and two anonymous referees, whose comments improved this paper. T. Snijders provided guidance on how to calculate variance for fixed effects. H.S. was supported by an Emmy-Noether fellowship of the German Research Foundation (SCHI 1188/1-1).

References

  • Akaike, H. (1973) Information theory as an extension of the maximum likelihood principle. 2nd International Symposium on Information Theory (eds B.N. Petrov & F. Csaki), pp. 267–281. Akademiai Kiado, Budapest.
  • Bates, D., Maechler, M. & Bolker, B. (2011) lme4: linear mixed-effects models. R package, version 0.999375-42. http://CRAN.R-project.org/package=lme4.
  • Bolker, B.M., Brooks, M.E., Clark, C.J., Geange, S.W., Poulsen, J.R., Stevens, M.H.H. & White, J.S.S. (2009) Generalized linear mixed models: a practical guide for ecology and evolution. Trends in Ecology & Evolution, 24, 127–135.
  • Bryk, A.S. & Raudenbush, S. (1992) Hierarchical Linear Models. Sage, Newbury Park, CA.
  • Burnham, K.P. & Anderson, D.R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd edn. Springer-Verlag, Berlin.
  • Cameron, A.C. & Windmeijer, F.A.G. (1996) R-squared measures for count data regression models with applications to health-care utilization. Journal of Business & Economic Statistics, 14, 209–220.
  • Cameron, A.C. & Windmeijer, F.A.G. (1997) An R-squared measure of goodness of fit for some common nonlinear regression models. Journal of Econometrics, 77, 329–342.
  • Claeskens, G. & Hjort, N.L. (2009) Model Selection and Model Averaging. Cambridge University Press, Cambridge.
  • Congdon, P.D. (2010) Applied Bayesian Hierarchical Methods. CRC, Boca Raton, FL.
  • Dean, R., Nakagawa, S. & Pizzari, T. (2011) The risk and intensity of sperm ejection in female birds. American Naturalist, 178, 343–354.
  • Draper, N.R. & Smith, H. (1998) Applied Regression Analysis, 3rd edn. Wiley, New York.
  • Edwards, L.J., Muller, K.E., Wolfinger, R.D., Qaqish, B.F. & Schabenberger, O. (2008) An R2 statistic for fixed effects in the linear mixed model. Statistics in Medicine, 27, 6137–6157.
  • Gelman, A. & Hill, J. (2007) Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, Cambridge.
  • Gelman, A. & Pardoe, I. (2006) Bayesian measures of explained variance and pooling in multilevel (hierarchical) models. Technometrics, 48, 241–251.
  • Goldstein, H., Browne, W. & Rasbash, J. (2002) Partitioning variation in multilevel models. Understanding Statistics, 1, 223–231.
  • Grueber, C.E., Nakagawa, S., Laws, R.J. & Jamieson, I.G. (2011) Multimodel inference in ecology and evolution: challenges and solutions. Journal of Evolutionary Biology, 24, 699–711.
  • Hadfield, J.D. (2010) MCMC methods for multi-response generalised linear mixed models: the MCMCglmm R package. Journal of Statistical Software, 33, 1–22.
  • Hamaker, E.L., van Hattum, P., Kuiper, R.M. & Hoijtink, H. (2011) Model selection based on information criteria in multilevel modeling. Handbook of Advanced Multilevel Analysis (eds J. Hox & J.K. Roberts), pp. 231–255. Routledge, New York.
  • Hössjer, O. (2008) On the coefficient of determination for mixed regression models. Journal of Statistical Planning and Inference, 138, 3022–3038.
  • Kvålseth, T.O. (1985) Cautionary note about R2. American Statistician, 39, 279–285.
  • Liu, H.H., Zheng, Y. & Shen, J. (2008) Goodness-of-fit measures of R2 for repeated measures mixed effect models. Journal of Applied Statistics, 35, 1081–1092.
  • Maddala, G.S. (1983) Limited-Dependent and Qualitative Variables in Econometrics. Cambridge University Press, Cambridge.
  • Menard, S. (2000) Coefficients of determination for multiple logistic regression analysis. American Statistician, 54, 17–24.
  • Merlo, J., Chaix, B., Yang, M., Lynch, J. & Rastam, L. (2005a) A brief conceptual tutorial on multilevel analysis in social epidemiology: interpreting neighbourhood differences and the effect of neighbourhood characteristics on individual health. Journal of Epidemiology and Community Health, 59, 1022–1028.
  • Merlo, J., Yang, M., Chaix, B., Lynch, J. & Rastam, L. (2005b) A brief conceptual tutorial on multilevel analysis in social epidemiology: investigating contextual phenomena in different groups of people. Journal of Epidemiology and Community Health, 59, 729–736.
  • Nagelkerke, N.J.D. (1991) A note on a general definition of the coefficient of determination. Biometrika, 78, 691–692.
  • Nakagawa, S. & Cuthill, I.C. (2007) Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews, 82, 591–605.
  • Nakagawa, S. & Schielzeth, H. (2010) Repeatability for Gaussian and non-Gaussian data: a practical guide for biologists. Biological Reviews, 85, 935–956.
  • Orelien, J.G. & Edwards, L.J. (2008) Fixed-effect variable selection in linear mixed models using R2 statistics. Computational Statistics & Data Analysis, 52, 1896–1907.
  • Pinheiro, J.C. & Bates, D.M. (2000) Mixed-effects Models in S and S-Plus. Springer, New York.
  • R Development Core Team (2012) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
  • Raudenbush, S. & Bryk, A.S. (1986) A hierarchical model for studying school effects. Sociology of Education, 59, 1–17.
  • Roberts, J.K., Monaco, J.P., Stovall, H. & Foster, V. (2011) Explained variance in multilevel models. Handbook of Advanced Multilevel Analysis (eds J. Hox & J.K. Roberts), pp. 219–230. Routledge, New York.
  • Schielzeth, H. (2010) Simple means to improve the interpretability of regression coefficients. Methods in Ecology and Evolution, 1, 103–113.
  • Schielzeth, H. & Forstmeier, W. (2009) Conclusions beyond support: overconfident estimates in mixed models. Behavioral Ecology, 20, 416–420.
  • Schielzeth, H. & Nakagawa, S. (2012) Nested by design: model fitting and interpretation in a mixed model era. Methods in Ecology and Evolution, doi: 10.1111/j.2041-210x.2012.00251.x.
  • Schwarz, G.E. (1978) Estimating the dimension of a model. Annals of Statistics, 6, 461–464.
  • Snijders, T.A. & Bosker, R.J. (1994) Modeled variance in two-level models. Sociological Methods & Research, 22, 342–363.
  • Snijders, T. & Bosker, R. (1999) Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. Sage, London.
  • Snijders, T. & Bosker, R. (2011) Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling, 2nd edn. Sage, London.
  • Spiegelhalter, D.J., Best, N.G., Carlin, B.R. & van der Linde, A. (2002) Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society Series B (Statistical Methodology), 64, 583–616.
  • Tjur, T. (2009) Coefficients of determination in logistic regression models – a new proposal: the coefficient of discrimination. American Statistician, 63, 366–372.
  • Vonesh, E.F., Chinchilli, V.P. & Pu, K.W. (1996) Goodness-of-fit in generalized nonlinear mixed-effects models. Biometrics, 52, 572–587.
  • Xu, R.H. (2003) Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527–3541.

Appendix 1


Derivation of the distribution-specific variance ($\sigma_{d}^{2}$) for Poisson distributions

When a random variable x is Poisson-distributed, the mean and variance of x are, respectively:

$$E(x) = \lambda \qquad (\text{A1})$$
$$\mathrm{var}(x) = \lambda \qquad (\text{A2})$$

The distribution of x can be approximated by a log-normal distribution, so that ln(x) is approximately normally distributed. The variance of ln(x) can then be approximated as:

$$\mathrm{var}(\ln(x)) \approx \ln\!\left( 1 + \frac{\mathrm{var}(x)}{E(x)^2} \right) \qquad (\text{A3})$$

By substituting Equations A1 and A2 into Equation A3, we obtain:

$$\mathrm{var}(\ln(x)) \approx \ln\!\left( 1 + \frac{\lambda}{\lambda^2} \right) \qquad (\text{A4})$$

Therefore,

$$\mathrm{var}(\ln(x)) \approx \ln\!\left( 1 + \frac{1}{\lambda} \right) \qquad (\text{A5})$$

When we replace $\mathrm{var}(\ln(x))$ with $\sigma_{d}^{2}$ and $\lambda$ with $\exp(\beta_0)$, we obtain:

$$\sigma_{d}^{2} = \ln\!\left( 1 + \frac{1}{\exp(\beta_0)} \right) \qquad (\text{A6})$$

Simulations (unpublished data, the authors) show that as E(x) approaches 0, this approximation becomes unreliable. Also, $\exp(\beta_0)$ should be obtained either from a model with centred or scaled variables (sensu Schielzeth 2010) or from an intercept-only model that includes all random effects. Note that the former approach may be of limited use when a model includes categorical variables.
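The following minimal R sketch (ours, not the simulations referred to above) gives a rough numerical sense of the approximation in eqn A5 by comparing $\ln(1 + 1/\lambda)$ with the empirical variance of ln(x) for simulated Poisson data; zeros are excluded before taking logarithms, which is one reason the comparison degrades as E(x) approaches 0.

```r
## Minimal sketch (illustration only): rough comparison of the log-normal
## approximation ln(1 + 1/lambda) with the empirical variance of log(x)
## for Poisson-distributed x (zeros dropped before taking logs).
set.seed(3)
for (lambda in c(0.5, 2, 5, 20)) {
  x <- rpois(1e6, lambda)
  approx_var <- log(1 + 1 / lambda)        # eqn A5, with exp(beta0) = lambda
  empir_var  <- var(log(x[x > 0]))         # empirical variance of ln(x), x > 0 only
  cat(sprintf("lambda = %4.1f  approx = %.3f  empirical = %.3f\n",
              lambda, approx_var, empir_var))
}
```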

Supporting Information

Data S1. Data file for the Size models (BeetlesBody.csv).

Data S2. Data file for the Morph models (BeetlesMale.csv).

Data S3. Data file for the Fecundity models (BeetlesFemale.csv).

Data S4. R code for the examples (R_code_examples.R).
