Operator bias in software-aided bat call identification
© 2014 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd.
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Ecology and Evolution
Volume 4, Issue 13, pages 2703–2713, July 2014
- Issue published online: 9 JUL 2014
- Article first published online: 30 MAY 2014
- Manuscript Accepted: 23 APR 2014
- Manuscript Revised: 15 APR 2014
- Manuscript Received: 14 JAN 2014
Keywords
- Acoustic identification
- Automated classification
- Identification bias
- Observer experience
Software-aided identification facilitates the handling of large sets of bat call recordings, which is particularly useful in extensive acoustic surveys with several collaborators. Species lists are generated by "objective" automated classification. Subsequent validation consists of removing any species not believed to be present. So far, very little is known about the identification bias introduced by individual validation by operators with varying degrees of experience. Effects on the quality of the resulting data may be considerable, especially for bat species that are difficult to identify acoustically. Using the batcorder system as an example, we compared validation results from 21 volunteer operators with 1–26 years of experience working with bats. All of them validated identical recordings of bats from eastern Austria. The final outcomes were individual validated lists of plausible species. A questionnaire was used to enquire about individual experience and validation procedures. In the course of species validation, the operators reduced the software's estimate of species richness. The most experienced operators accepted the smallest percentage of species from the software's output and validated conservatively, with low interoperator variability. Operators with intermediate experience accepted the largest percentage, with larger variability. Sixty-six percent of the operators, mainly those with intermediate and low levels of experience, reintroduced to their validated lists species that had been identified by the automated classification but ultimately excluded from the unvalidated lists. These were, in many cases, rare and infrequently recorded species. The average dissimilarity of the validated species lists dropped with increasing numbers of recordings, tending toward a level of ~20%. Our results suggest that the operators succeeded in removing false positives and that they detected species that had been wrongly excluded during automated classification.
Thus, manual validation of the software's unvalidated output is indispensable for reliable results. However, although such software seems easy to apply, software-aided bat call identification requires an advanced level of operator experience. Identification bias during validation is a major issue, particularly in studies with more than one participant. Measures should be taken to standardize the validation process and harmonize the results of different operators.
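The interoperator comparison above rests on computing the average pairwise dissimilarity of the validated species lists. The abstract does not name the exact metric, so the following is a minimal sketch assuming Sørensen dissimilarity on species sets; the operator names and species lists are hypothetical examples, not data from the study.

```python
from itertools import combinations

def sorensen_dissimilarity(a, b):
    """Sørensen dissimilarity between two species sets: 0 = identical, 1 = disjoint."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - 2.0 * len(a & b) / (len(a) + len(b))

def mean_pairwise_dissimilarity(species_lists):
    """Average dissimilarity over all pairs of operators' validated species lists."""
    pairs = list(combinations(species_lists, 2))
    return sum(sorensen_dissimilarity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical validated lists from three operators (illustrative only)
op1 = {"Pipistrellus pipistrellus", "Nyctalus noctula", "Myotis myotis"}
op2 = {"Pipistrellus pipistrellus", "Nyctalus noctula"}
op3 = {"Pipistrellus pipistrellus", "Nyctalus noctula",
       "Myotis myotis", "Plecotus austriacus"}

print(round(mean_pairwise_dissimilarity([op1, op2, op3]), 3))
```

Tracking this statistic as the number of recordings grows is one way to see whether operators' validated lists converge, as the abstract reports, toward a residual disagreement of roughly 20%.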