The paper by S. Nakagawa and I.C. Cuthill (2007) contains some ambiguous and potentially misleading advice in Section III.5: Non-independence of data (pp. 600-602). The authors have also since recognised that there is an easy method of calculating a standard effect statistic (*r*, the correlation coefficient) from mixed-effects models. These matters are corrected and described below. The authors apologise for any confusion.

Equations 22-24 were provided to approximate the standard effect statistics *d* and *r* (Nakagawa & Cuthill, 2007). However, these formulae can produce inaccurate estimates (and, in some circumstances, this inaccuracy will be unacceptable). We stated this problem in the original paper: “The procedures of effect size estimation proposed above using mixed-models may or may not work depending on the nature of the data” (p. 601). Therefore, we do not recommend using these formulae to estimate effect statistics if it can be avoided.

Here, we suggest the following easy-to-use alternative. The regression coefficient (*b*) can be given as *b* = *rSD _{y}*/*SD _{x}*, where *r* is the correlation between a response (*y*) and a predictor (*x*), and *SD _{x}* and *SD _{y}* are the standard deviations of *x* and *y*, respectively. When *SD _{x}* = *SD _{y}* = 1, *b* = *r*. This equality between *b* and *r* can be achieved by *Z*-transformation of *x* and *y* (centring a variable to have a mean of zero and dividing it by its standard deviation). It is noted that when dealing with a categorical predictor, *Z*-transformation of the predictor is usually not possible. However, categorical variables can be reduced to pair-wise comparisons, and thus binary coding of such variables is possible (e.g. using 0 and 1); *Z*-transformation can then be applied to categorical variables as well. This way of obtaining standardised effect statistics (*r*) is obviously applicable not only to mixed-effects models but also to ordinary regression. However, it is important to note that when there is more than one predictor in the model (e.g. multiple regression), the regression coefficient *b* does not become equal to a partial correlation (exemplified in Equation 12) but rather to a ‘semipartial’ correlation, which is always smaller in magnitude than the corresponding partial correlation. For example, when we have a response *y* and two predictors *x*_{1} and *x*_{2}, the semipartial correlation obtained as the regression coefficient *b* is the correlation between *y* and the residuals from the regression of *x*_{1} on *x*_{2}, whereas the partial correlation, which can be obtained from the *t* value of the corresponding *b* via Equation 11, is the correlation between the residuals from the regression of *y* on *x*_{2} and the residuals from the regression of *x*_{1} on *x*_{2}. We refer readers to statistical texts for more detailed descriptions of the differences between partial and semipartial correlation; we leave the discussion of which effect statistic (partial or semipartial correlation) is better suited for meta-analysis to the future. It is also noted that when *r* is obtained from a mixed-effects model, its standard error cannot be estimated in the standard way (i.e. Equation 19); the appropriate standard error is that from the mixed-effects model, and this standard error should be used for meta-analysis (after converting it to a sampling variance). A problem with obtaining *r* from mixed-effects models in this way (especially for meta-analysis) is that it requires access to the original data, or that the authors of papers present their results in this form, which is rarely, if ever, done.
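The two relationships above can be checked numerically. The following is a minimal sketch (not from the original paper; the simulated data and variable names are illustrative assumptions) showing that after *Z*-transformation the simple regression slope equals Pearson's *r*, and that the semipartial correlation is smaller in magnitude than the corresponding partial correlation:

```python
# Sketch with simulated data: Z-transformed slope vs. r, and
# semipartial vs. partial correlation computed via residuals.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)            # x1 and x2 are correlated
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

def z(v):
    """Z-transformation: centre to mean 0, scale to SD 1."""
    return (v - v.mean()) / v.std()

# With SDx = SDy = 1, the simple regression slope b equals r
b = np.polyfit(z(x1), z(y), 1)[0]
r = np.corrcoef(x1, y)[0, 1]

# Residuals of x1 after regressing x1 on x2
e_x1 = z(x1) - np.polyval(np.polyfit(z(x2), z(x1), 1), z(x2))
# Residuals of y after regressing y on x2
e_y = z(y) - np.polyval(np.polyfit(z(x2), z(y), 1), z(x2))

semipartial = np.corrcoef(z(y), e_x1)[0, 1]   # corr(y, e_x1)
partial = np.corrcoef(e_y, e_x1)[0, 1]        # corr(e_y, e_x1)
print(b, r, semipartial, partial)
```

Here `b` and `r` agree to floating-point precision, and `abs(semipartial)` is strictly below `abs(partial)` whenever *y* and *x*_{2} are correlated in the sample.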

An extension of this method of obtaining *r* to generalised linear (mixed) models (GLMs and GLMMs), i.e. when the response (*y*) is non-normal, is possible. For example, when a binomial error is used (i.e. the response is binary or a proportion), *SD _{y}* = 1 can be achieved by using the *probit* link function (the *probit* link fixes the standard deviation at 1). In a similar manner, if a Poisson error (with the *log* link function) is used for count data, *SD _{y}* = 1 can be approximated by a slightly awkward series of transformations: *Z*-transforming the count data on a log-normal scale and back-transforming, with rounding, to the original count scale.
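The *probit* case can be illustrated with a small simulation (a sketch, not the authors' code; the sample size, seed, and true slope of 0.5 are illustrative assumptions). A probit model assumes a latent variable whose residual standard deviation is fixed at 1, so a maximum-likelihood probit fit on a standardised predictor recovers a slope on that unit-SD latent scale:

```python
# Sketch: probit regression by maximum likelihood, showing that the
# latent residual SD is fixed at 1 and the latent slope is recovered.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 4000
x = rng.normal(size=n)                     # standardised predictor
latent = 0.5 * x + rng.normal(size=n)      # residual SD = 1, as probit assumes
ybin = (latent > 0).astype(float)          # only the binary outcome is observed

def negloglik(beta):
    eta = beta[0] + beta[1] * x
    # P(y=1) = Phi(eta); the sign trick handles y=0 and y=1 in one expression
    return -np.sum(norm.logcdf((2.0 * ybin - 1.0) * eta))

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
b0, b1 = fit.x
print(b1)   # close to the true latent slope of 0.5
```

Because the latent error SD is 1 by construction, slopes from probit fits on standardised predictors are comparable across studies in the sense described above.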

The transformation that makes *b* = *r* (a semipartial correlation) by rendering *SD _{x}* = *SD _{y}* = 1 is a useful approach, especially for meta-analysis. Therefore, the presentation of parameter estimates (regression coefficients) both on the original scale and as *r* (= *b*) is recommended. However, researchers need to appreciate the importance of effect sizes if such a presentation practice is to become standard. We should also remember the basic differences in purpose and use between regression and correlation, as described in the original paper.