Survey methods and recall data form part of the foundation of social research techniques used to identify, examine and quantify the relationship between humans and their environment (Gavin & Anderson, 2005; Jones et al., 2008). Understanding the utility of quantitative survey data, and the factors that may affect their quality, is vital to conservation managers and policymakers whose decisions rely on access to the most accurate evidence (Mascia et al., 2003). This type of methodological self-reflection and validation offers insight into the importance of study design and system knowledge prior to rolling out research programs. Our aim was to determine how recall periods affect the accuracy of incident estimates for rare and seasonal events. We used a case study of wildlife consumption in northeastern Madagascar to determine the degree to which consumption estimates were biased by the assumption that rates remain constant throughout the recall period (Golden, Wrangham & Brashares, 2013).

We thank Keane (2013) and Newing & St John (2013) for highlighting key conclusions of our research and pushing us to further examine some of our assumptions. Issues were raised in each commentary regarding the balance in our models between oversimplification (Keane, 2013) and excessive complexity (Newing & St John, 2013). Keane (2013) notes that we expected rarer events to be more salient. We agree with him that salience could also be affected by ‘emotionality, unusualness or personal significance’. In some contexts, therefore, these aspects would be worth quantifying and integrating into the models.

Newing & St John (2013) considered our statistical approach overly complex because it reduces the ease of comparison with other studies. They suggested that simply reporting an estimated quantity with a measure of variability would be more useful than our method, which involved calculating mean squared errors. The difficulty with their suggestion is that it requires normally distributed data in order to carry out appropriate statistical tests, and frequency count data are rarely normally distributed. We agree that publishing raw data can often promote useful comparisons, but the danger is that it can also foster comparisons that are statistically illegitimate.
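To make this contrast concrete, the minimal sketch below compares the two summaries on invented data. The paired vectors of 'observed' and 'recalled' meal counts, and all variable names, are hypothetical illustrations and are not drawn from our study.

```python
import numpy as np

# Hypothetical paired counts for the same households over the same period:
# 'observed' from direct observation, 'recalled' from recall surveys.
# Values are invented for illustration only.
observed = np.array([0, 2, 1, 0, 3, 1, 0, 0, 2, 1])
recalled = np.array([0, 3, 1, 1, 2, 1, 0, 1, 2, 2])

# Mean squared error measures recall accuracy against the reference data
# without assuming the counts follow a normal distribution.
mse = np.mean((recalled - observed) ** 2)

# A bare "estimate plus variability" summary of the recalled counts; the
# standard deviation is hard to interpret for skewed count data, which is
# one reason an error measure computed against a reference can be preferable.
naive_mean, naive_sd = recalled.mean(), recalled.std(ddof=1)

print(f"MSE (recall vs. observation): {mse:.2f}")
print(f"Recall-only summary: mean={naive_mean:.2f}, sd={naive_sd:.2f}")
```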

Other concerns raised in the two commentaries dealt with the generalizability of our research to other study systems. We acknowledge that a richer description of the cultural context would have reduced some ambiguities (Newing & St John, 2013), and we accept that our study system may not have been typical because of the high level of trust we have established with local people (Keane, 2013; Newing & St John, 2013). In support of the latter point, we found no differences in the validity of our results for legally versus illegally hunted species. To generalize our results more broadly, a method would be needed to account for the underreporting of illegal behaviors that can be expected in many contexts; Keane and his colleagues have developed important methods in this area (Jones et al., 2008; Keane et al., 2008).

We also appreciate the concerns of Newing & St John (2013) that in some circumstances gender roles could play a part in explaining differences between estimates of wildlife consumption. For instance, Newing & St John (2013) speculate that harvesting (which was reported by men) might not be equivalent to consumption (which was reported by women). In the case of our study, as described in our paper, all meat that women cooked was eaten within the household (even if parts of the meal were shared with friends or kin); and only 2% of consumed wildlife in our study system was purchased (Golden et al., in press). Harvest and consumption frequencies were therefore necessarily very similar. Nevertheless, the occasional donation of cooked meat between households could explain why men slightly overreported consumption as compared to women, who recorded the cooking of raw meat brought in by their husbands.

We agree with Newing & St John (2013) that sharing knowledge across disciplines is an important and difficult goal. The problem is illustrated by their reading of our study: they interpreted us as saying that ‘studies of wildlife consumption that depend upon recall are best carried out in the low hunting season’. Although we found this result in Makira, we did not suggest that it can be generalized as a rule for future studies. In fact, we openly recognize that our results regarding seasonal recall trends, underreporting versus overreporting, intra-household variation and, of course, types and amounts of bushmeat eaten will be site-specific. We use these results to identify sources of bias that are easily generalized and, we hope, will be of value to future studies. Specifically, our study suggests that the recall window chosen by researchers is critical both to detecting the event of interest and to ensuring that extrapolation from the recall window does not bias the results. Accordingly, our suggested guidelines for future research are as follows:

  • (1) The seasonality of events should be taken strongly into consideration when determining the recall period.
  • (2) The rarity of events should be considered, such that the recall period can be expected to detect an event without requiring respondents to remember far into the past.
  • (3) Researchers should not feel obliged to use one recall period for all questions (i.e. events of interest); they should tailor the period of recall to the specific question at hand.
  • (4) A basic understanding of the system prior to designing surveys is necessary. Pilot studies, community consultation and grass-roots approaches to research are essential to creating a survey instrument that will not bias results, and to ensuring that the research sample includes an appropriate mix of periods of high and low harvesting.
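As a purely illustrative aside on the extrapolation point and guideline (1), the sketch below simulates a hypothetical seasonal harvest and shows how annualizing a short recall window under an assumption of rate constancy biases the estimate. The weekly rates, window lengths and survey timings are invented for illustration and are not data from Makira.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical seasonal consumption rates (meals per household per week):
# a 12-week high-harvest season followed by a low-harvest remainder of the year.
weeks = 52
weekly_rate = np.where(np.arange(weeks) < 12, 0.8, 0.1)  # invented rates
true_annual = weekly_rate.sum()                           # expected meals per year

def extrapolate(recall_weeks, start_week):
    """Annualize a recall window by assuming the rate is constant year-round
    (the assumption of rate constancy examined in our study)."""
    window = weekly_rate[start_week:start_week + recall_weeks]
    counts = rng.poisson(window).sum()
    return counts * weeks / recall_weeks

# Averaging many simulated surveys: a 4-week window placed in the high season
# overestimates annual consumption, while the same window in the low season
# underestimates it.
high_season = np.mean([extrapolate(4, 2) for _ in range(1000)])
low_season = np.mean([extrapolate(4, 30) for _ in range(1000)])

print(f"True annual consumption:   {true_annual:.1f}")
print(f"Extrapolated, high season: {high_season:.1f}")
print(f"Extrapolated, low season:  {low_season:.1f}")
```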

We hope that publication in Animal Conservation of methodological approaches such as ours will help those creating surveys and planning future studies. We believe that consideration of these potential pitfalls will help to promote accuracy and comparability in the evidence used by conservation biologists and policymakers.
