Keywords:

  • biodiversity monitoring;
  • capture–mark–recapture;
  • comparative studies;
  • detection probability;
  • nonparametric estimator;
  • population size;
  • sampling design;
  • simulations;
  • type I error

Summary

1. Numbers of individuals or species are often recorded to test for variations in abundance or richness between habitat types, ecosystem management types, experimental treatments, time periods, etc. However, a difference in mean detectability among treatments is likely to lead to the erroneous conclusion that mean abundance differs among treatments, and no guidelines exist to determine the maximum acceptable difference in detectability.

2. In this study, we simulated count data with imperfect detectability for two treatments with identical mean abundance (N) and number of plots (nplots) but different mean detectability (p). We then estimated the risk of erroneously concluding that N differed between treatments because the difference in p was ignored. The magnitude of the risk depended on p, N and nplots.
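The simulation design described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code: the function name `type_I_error_rate` and the specific parameter values are assumptions, and raw counts per plot are modelled as binomial draws with true abundance N and detection probability p, compared with a two-sample t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def type_I_error_rate(N=50, nplots=50, p1=0.50, p2=0.56,
                      nsim=1000, alpha=0.05):
    """Estimate how often a t-test on raw counts declares a difference
    in abundance when true N is identical but detectability differs.
    Illustrative sketch only; parameter values are assumptions."""
    rejections = 0
    for _ in range(nsim):
        # Counts per plot: binomial(N, p) draws for each treatment
        counts1 = rng.binomial(N, p1, size=nplots)
        counts2 = rng.binomial(N, p2, size=nplots)
        _, pval = stats.ttest_ind(counts1, counts2)
        rejections += pval < alpha
    return rejections / nsim
```

With p1 = p2 the rejection rate stays near the nominal alpha; shifting p2 by a few percentage points while keeping N fixed inflates it sharply, which is the effect the study quantifies.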

3. Our simulations showed that even small differences in p can dramatically increase this risk. A detectability difference as small as 4–8% can lead to a 50–90% risk of erroneously concluding that a significant difference in N exists among treatments with identical N = 50 and nplots = 50. Yet, differences in p of this magnitude among treatments or along gradients are commonplace in ecological studies.

4. Fortunately, simple methods of accounting for imperfect detectability prove effective at removing the confounding effect of detectability differences between treatments.

5. Given the high sensitivity of statistical tests to detectability differences among treatments, we conclude that it is always worthwhile, when comparing count data (abundance, richness), to account for detectability by incorporating a replicated design into at least part of the sampling scheme and analysing the data with appropriate statistical tools.