Measuring individual differences in reaction norms in field and experimental studies: a power analysis of random regression models



This article is corrected by: Erratum, Volume 3, Issue 6, p. 1099, first published online 11 December 2012.



1. Interest in measuring individual variation in reaction norms using mixed-effects models and, more specifically, random regression models has grown apace in the last few years within evolution and ecology. However, these are data-hungry methods, and little effort has so far been put into understanding how much, and what kind of, data we need to collect to apply these models usefully and reliably.

2. We conducted simulations to address three central questions. First, what is the best sampling strategy for collecting sufficient data to test for individual variation using random regression models? Second, when statistical power is difficult to assess, can we be confident that a failure to detect significant variance in plasticity using random regression reflects a biological reality rather than a lack of power? Finally, does the common practice of censoring individuals with one or few repeated measures improve or reduce the power to estimate individual variation in random regressions?

3. We have also developed a series of easy-to-use functions, collected in the freely available R package ‘pamm’, that allow researchers to conduct similar power analyses tailored to their own data.

4. Our results reveal potentially useful rules of thumb: large data sets (N > 200) are needed to evaluate the variance of individual-specific slopes; a ratio of the number of individuals to the number of observations per individual of approximately 0·5 consistently yielded the highest power to detect random effects; and individuals with one or few observations should generally not be censored, as censoring reduces the power to detect variance in plasticity.

5. We discuss the wider implications of these simulations and the challenges that remain, and we suggest a new way of standardizing results that would better facilitate the comparison of findings across empirical studies.
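The simulation logic behind such a power analysis can be sketched in a few lines. The following Python sketch is an illustration only, not the authors' method: ‘pamm’ performs this for mixed models fitted by restricted maximum likelihood, whereas here the test statistic (the variance of per-individual ordinary least-squares slopes, calibrated against a null distribution simulated with zero slope variance) and all parameter values are our own simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_power(n_ind, n_obs, slope_sd, resid_sd=1.0, n_sim=200, alpha=0.05):
    """Monte Carlo power to detect among-individual variance in slopes.

    Each individual's response is y = b_i * x + noise, with individual
    slopes b_i ~ Normal(0.5, slope_sd).  The test statistic is the variance
    of per-individual OLS slope estimates; its null distribution is obtained
    by re-simulating the same design with slope_sd = 0.  (A crude stand-in
    for a likelihood-ratio test on the random-slope variance component.)
    """
    x = np.linspace(-1.0, 1.0, n_obs)            # shared covariate values
    xc = x - x.mean()
    sxx = np.sum(xc ** 2)

    def stat(sd, n_rep):
        # individual slopes: fixed mean 0.5 plus individual deviation
        b = 0.5 + rng.normal(0.0, sd, size=(n_rep, n_ind, 1))
        y = b * x + rng.normal(0.0, resid_sd, size=(n_rep, n_ind, n_obs))
        # closed-form OLS slope per individual, vectorized over replicates
        bhat = (xc * (y - y.mean(axis=2, keepdims=True))).sum(axis=2) / sxx
        return bhat.var(axis=1, ddof=1)          # one statistic per replicate

    crit = np.quantile(stat(0.0, n_sim), 1.0 - alpha)   # null cut-off
    return float((stat(slope_sd, n_sim) > crit).mean())

# Compare a design with strong individual variation in plasticity to one
# with none (50 individuals, 5 observations each):
print(slope_power(50, 5, slope_sd=1.0))   # high power expected
print(slope_power(50, 5, slope_sd=0.0))   # close to alpha by construction
```

Varying `n_ind` and `n_obs` while holding their product fixed gives a rough feel for the sampling-design trade-off the simulations above explore, although a full analysis of the kind reported here requires the mixed-model machinery that ‘pamm’ wraps.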