Operator bias in software-aided bat call identification


  • Georg Fritsch,

    Corresponding author
    1. Institute of Zoology, University of Natural Resources and Life Sciences, Vienna, Austria
    • Correspondence

      Georg Fritsch, Institute of Zoology, University of Natural Resources and Life Sciences, Gregor-Mendel-Strasse 33, 1180 Vienna, Austria. Tel: +43 1 47654 3233; Fax: +43 1 47654 3203; E-mail: g.fritsch@boku.ac.at

  • Alexander Bruckner

    1. Institute of Zoology, University of Natural Resources and Life Sciences, Vienna, Austria


Software-aided identification facilitates the handling of large sets of bat call recordings, which is particularly useful in extensive acoustic surveys with several collaborators. Species lists are generated by “objective” automated classification. Subsequent validation consists of removing any species not believed to be present. So far, very little is known about the identification bias introduced by individual validation by operators with varying degrees of experience. Effects on the quality of the resulting data may be considerable, especially for bat species that are difficult to identify acoustically. Using the batcorder system as an example, we compared validation results from 21 volunteer operators with 1–26 years of experience working with bats. All of them validated identical recordings of bats from eastern Austria. The final outcomes were individual validated lists of plausible species. A questionnaire was used to enquire about individual experience and validation procedures. In the course of species validation, the operators reduced the software's estimate of species richness. The most experienced operators accepted the smallest percentage of species from the software's output and validated conservatively, with low interoperator variability. Operators with intermediate experience accepted the largest percentage, with larger variability. Sixty-six percent of the operators, mainly those with intermediate and low levels of experience, reintroduced species into their validated lists that the automated classification had identified but ultimately excluded from its unvalidated output. These were, in many cases, rare and infrequently recorded species. The average dissimilarity of the validated species lists dropped with increasing numbers of recordings, tending toward a level of ~20%. Our results suggest that the operators succeeded in removing false positives and that they detected species that had been wrongly excluded during automated classification.
Thus, manual validation of the software's unvalidated output is indispensable for reasonable results. However, although the software appears easy to apply, software-aided bat call identification requires an advanced level of operator experience. Identification bias during validation is a major issue, particularly in studies with more than one participant. Measures should be taken to standardize the validation process and to harmonize the results of different operators.
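The abstract does not specify which dissimilarity index was used to compare the operators' validated species lists. As a minimal sketch of how such an interoperator comparison could be computed, the following uses the Sørensen dissimilarity on presence/absence species sets; the operator names and species sets are invented for illustration only:

```python
# Hypothetical sketch: mean pairwise dissimilarity of validated species
# lists from several operators. Index choice (Sørensen) and all data
# below are assumptions, not taken from the study.

from itertools import combinations


def sorensen_dissimilarity(a, b):
    """Sørensen dissimilarity of two presence/absence species sets:
    1 - 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 0.0
    return 1.0 - 2.0 * len(a & b) / (len(a) + len(b))


def mean_pairwise_dissimilarity(species_lists):
    """Average Sørensen dissimilarity over all pairs of operators."""
    pairs = list(combinations(species_lists, 2))
    return sum(sorensen_dissimilarity(a, b) for a, b in pairs) / len(pairs)


# Invented validated lists for three operators.
validated = {
    "operator_1": {"Pipistrellus pipistrellus", "Nyctalus noctula",
                   "Myotis myotis"},
    "operator_2": {"Pipistrellus pipistrellus", "Nyctalus noctula"},
    "operator_3": {"Pipistrellus pipistrellus", "Nyctalus noctula",
                   "Plecotus austriacus"},
}

print(round(mean_pairwise_dissimilarity(list(validated.values())), 3))
```

Tracking this average as recordings accumulate would show whether the lists converge toward the ~20% residual dissimilarity reported above.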