When can we ignore the problem of imperfect detection in comparative studies?


  • Frédéric Archaux (corresponding author; e-mail: frederic.archaux@cemagref.fr), Cemagref, Domaine des Barres, F-45290 Nogent sur Vernisson, France
  • Pierre-Yves Henry, UMR 7204 & UMR 7179 CNRS MNHN, Département Ecologie et Gestion de la Biodiversité, Muséum National d’Histoire Naturelle, 1 avenue du Petit Château, 91800 Brunoy, France
  • Olivier Gimenez, Centre d’Ecologie Fonctionnelle et Evolutive, UMR 5175, 1919 route de Mende, 34293 Montpellier Cedex 5, France



1. Numbers of individuals or species are often recorded to test for variation in abundance or richness among habitat types, ecosystem management types, experimental treatments, time periods, etc. However, a difference in mean detectability among treatments is likely to lead to the erroneous conclusion that mean abundance differs among treatments. No guidelines exist to determine the maximum acceptable difference in detectability.

2. In this study, we simulated count data with imperfect detectability for two treatments with identical mean abundance (N) and number of plots (nplots) but different mean detectability (p). We then estimated the risk of erroneously concluding that N differed between treatments because the difference in p was ignored. The magnitude of the risk depended on p, N and nplots.
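The simulation logic above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact protocol: it assumes plot-level abundances are Poisson-distributed around N, that each individual is detected independently with probability p, and that raw counts are compared with a two-sample t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def false_positive_risk(N=50, nplots=50, p1=0.50, p2=0.54,
                        nsim=500, alpha=0.05):
    """Proportion of simulations in which a t-test on raw counts declares
    a (spurious) abundance difference between two treatments that share
    the same true mean abundance N but differ in detectability p."""
    rejections = 0
    for _ in range(nsim):
        # True plot-level abundances: same distribution in both treatments
        n1 = rng.poisson(N, nplots)
        n2 = rng.poisson(N, nplots)
        # Observed counts: each individual detected with probability p
        c1 = rng.binomial(n1, p1)
        c2 = rng.binomial(n2, p2)
        if stats.ttest_ind(c1, c2).pvalue < alpha:
            rejections += 1
    return rejections / nsim

risk = false_positive_risk()
print(f"Risk of a spurious 'abundance' difference: {risk:.2f}")
```

With a mere 4% difference in detectability (0.50 vs. 0.54), the rejection rate climbs far above the nominal 5% level, consistent with the 50–90% range reported in point 3.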

3. Our simulations showed that even small differences in p can dramatically increase this risk. A detectability difference as small as 4–8% can lead to a 50–90% risk of falsely concluding that N differs between treatments with identical N = 50 and nplots = 50. Yet, differences in p of this magnitude among treatments or along gradients are commonplace in ecological studies.

4. Fortunately, simple methods of accounting for imperfect detectability prove effective at removing the bias caused by detectability differences between treatments.
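To illustrate why accounting for detectability removes the spurious difference, the sketch below rescales each count by its treatment's detection probability. For simplicity it uses the *true* p (in practice p would have to be estimated, e.g. from replicated counts); the distributional assumptions (Poisson abundances, binomial detection, t-test) are likewise illustrative, not the authors' specific method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def risk(correct, N=50, nplots=50, p1=0.50, p2=0.54, nsim=500):
    """False-positive rate of a t-test comparing two equal-abundance
    treatments, with or without a detectability correction."""
    rejections = 0
    for _ in range(nsim):
        c1 = rng.binomial(rng.poisson(N, nplots), p1).astype(float)
        c2 = rng.binomial(rng.poisson(N, nplots), p2).astype(float)
        if correct:
            # Rescale counts to the same (expected) abundance scale
            c1 /= p1
            c2 /= p2
        if stats.ttest_ind(c1, c2).pvalue < 0.05:
            rejections += 1
    return rejections / nsim

r_raw = risk(correct=False)
r_fix = risk(correct=True)
print(f"raw counts:       {r_raw:.2f}")
print(f"corrected counts: {r_fix:.2f}")
```

After correction the rejection rate falls back to roughly the nominal 5% level, because both treatments' corrected counts now have the same expectation (N).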

5. Considering the high sensitivity of statistical tests to detectability differences among treatments, we conclude that it is always worthwhile, when comparing count data (abundance, richness), to account for detectability by replicating counts on at least part of the sampling design and analysing the data with appropriate statistical tools.