When can we ignore the problem of imperfect detection in comparative studies?
Article first published online: 23 AUG 2011
© 2011 The Authors. Methods in Ecology and Evolution © 2011 British Ecological Society
Methods in Ecology and Evolution
Volume 3, Issue 1, pages 188–194, February 2012
How to Cite
Archaux, F., Henry, P.-Y. and Gimenez, O. (2012), When can we ignore the problem of imperfect detection in comparative studies? Methods in Ecology and Evolution, 3: 188–194. doi: 10.1111/j.2041-210X.2011.00142.x
- Issue published online: 1 FEB 2012
- Received 29 October 2010; accepted 10 June 2011
- Handling Editor: Robert Freckleton
Keywords
- biodiversity monitoring
- comparative studies
- detection probability
- nonparametric estimator
- population size
- sampling design
- type I error
1. Counts of individuals or species are often compared to test for differences in abundance or richness between habitat types, ecosystem management regimes, experimental treatments, time periods, and so on. However, a difference in mean detectability among treatments can lead to the erroneous conclusion that mean abundance differs among them, and no guidelines exist on the maximum acceptable difference in detectability.
2. In this study, we simulated count data with imperfect detectability for two treatments with identical mean abundance (N) and number of plots (nplots) but different mean detectability (p). We then estimated the risk of erroneously concluding that N differed between treatments because the difference in p was ignored. The magnitude of the risk depended on p, N and nplots.
3. Our simulations showed that even small differences in p dramatically increase this risk: a detectability difference of only 4–8% can produce a 50–90% risk of erroneously concluding that N differs between treatments, even when both have N = 50 and nplots = 50. Yet differences in p of this magnitude among treatments or along gradients are commonplace in ecological studies.
4. Fortunately, simple methods of accounting for imperfect detectability prove effective at removing the spurious differences that unequal detectability creates between treatments.
5. Given the high sensitivity of statistical tests to detectability differences among treatments, we conclude that it is always worthwhile, when comparing count data (abundance, richness), to account for detectability: set up a replicated design on at least part of the sampling scheme and analyse the data with appropriate statistical tools.
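The simulation described in points 2–3 can be sketched as follows. This is a minimal illustration under assumed details, not the authors' code: plot counts are drawn as binomial(N, p), and a Welch t-test with a large-sample normal approximation for the p-value stands in for whatever comparison test was actually used.

```python
# Sketch of the type I error simulation (assumed details, not the authors' code):
# two treatments with identical true abundance N per plot but different
# detection probabilities p1, p2. Raw counts are compared with a Welch t-test;
# the fraction of rejections estimates the risk of wrongly "detecting" an
# abundance difference.
import math
import random

def welch_t_pvalue(x, y):
    """Two-sided Welch t-test p-value, using the normal approximation
    (reasonable here since each sample has ~50 plots)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    t = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(t) / math.sqrt(2))  # = 2 * (1 - Phi(|t|))

def count_plot(N, p, rng):
    """Binomial count: each of N individuals is detected with probability p."""
    return sum(rng.random() < p for _ in range(N))

def rejection_rate(N=50, nplots=50, p1=0.50, p2=0.56,
                   nsim=1000, alpha=0.05, seed=42):
    """Fraction of simulated datasets in which two equal-N treatments
    are declared to differ in abundance."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(nsim):
        c1 = [count_plot(N, p1, rng) for _ in range(nplots)]
        c2 = [count_plot(N, p2, rng) for _ in range(nplots)]
        if welch_t_pvalue(c1, c2) < alpha:
            rejections += 1
    return rejections / nsim
```

With p1 = p2 the rejection rate stays near the nominal 5%, while a 6% detectability gap (p2 = 0.56) pushes it far above that, consistent with the 50–90% risk reported in point 3.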
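To see why accounting for detectability removes the artefact (point 4), consider an idealized correction in which detection probabilities are treated as known; in practice they would be estimated, e.g. from a replicated design on part of the sampling scheme as point 5 recommends. This hypothetical sketch is far simpler than the methods the paper actually evaluates, but it shows the principle: dividing each plot count by its treatment's detection probability puts both treatments back on the scale of true abundance.

```python
# Idealized sketch (hypothetical; p treated as known rather than estimated):
# raw counts confound abundance N with detectability p, so their means differ
# even though N is identical; counts divided by p recover comparable
# per-plot abundance estimates for both treatments.
import random

def corrected_means(N=50, nplots=50, p1=0.50, p2=0.56, seed=42):
    rng = random.Random(seed)
    def count_plot(p):
        # Binomial count of the N individuals present on a plot.
        return sum(rng.random() < p for _ in range(N))
    raw1 = [count_plot(p1) for _ in range(nplots)]
    raw2 = [count_plot(p2) for _ in range(nplots)]
    corr1 = [c / p1 for c in raw1]  # estimated abundance per plot
    corr2 = [c / p2 for c in raw2]
    return (sum(raw1) / nplots, sum(raw2) / nplots,
            sum(corr1) / nplots, sum(corr2) / nplots)
```

The raw means sit near N*p1 = 25 and N*p2 = 28 (an apparent abundance difference), while both corrected means sit near the true N = 50.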