Conservation is inherently multidisciplinary (Sandbrook et al., 2013), operates with restricted resources (Waldron et al., 2013) and increasingly recognizes the need for actions to be underpinned by solid, empirical evidence (Pullin & Knight, 2009). Together, these characteristics mean that research in our field makes use of an unusually broad array of data sources. Conservation already embraces research techniques from both the biological and social sciences, and we continually look for new methodologies that we can adapt for our purposes and new ways to make use of data collected outside of formal scientific frameworks. For example, recent studies have used specialized interview techniques (designed to reduce biases arising from non-response and ‘telling the interviewer what they want to hear’) to examine patterns of wildlife poaching and other sensitive, illegal behaviours (St John et al., 2012; Nuno et al., 2013); analyses of opportunistically collected data (e.g. the records of amateur ornithologists; Beale et al., 2013) and ‘citizen science’ initiatives are opening exciting new avenues for research in contexts and at scales that were not previously possible (Dickinson et al., 2012); and local ecological knowledge is promoted as a source of information on the abundance of and trends in populations (Anadón et al., 2009).
The sheer diversity of approaches and data types used in conservation makes it one of the most fascinating areas of science to work in. ‘Unusual’ sources of data have huge potential to produce valuable new insights, but they also present significant challenges: collecting such data is one thing, but learning to appreciate what they can and cannot tell us is quite another. To maximize the benefits of these approaches and to avoid the dangers of misinterpretation, a common challenge in many areas of conservation research is to learn how much information different data sources provide and to discover the biases that they might be subject to. Put simply, promising sources of information are only truly useful if they are appropriately validated.
Golden, Wrangham & Brashares (2013) do an admirable job of delving into the complexities of one such unusual data source: recalled consumption of bushmeat species in a remote area of Madagascar. Recall data are seen as a useful source of information in a variety of conservation contexts, both for rapid appraisal of current threats (Jones et al., 2008) and for evaluating historical trends (Turvey et al., 2012). To examine the properties of recall data, Golden et al. collected two comparable data sets on household-level consumption of several species of wildlife. The female heads of each household were asked to keep detailed daily records using consumption diaries, while the male heads of the same households were asked to recall consumption over longer periods of 1 month or 1 year. The diaries were then used as a measure of ‘truth’ against which the longer-term recall could be evaluated.
This approach allows Golden et al. (2013) to provide useful practical guidance for the use of recall data, including some surprising results. While recall over shorter periods is likely to suffer less from forgetting, an important finding is that when there is significant variability in consumption over time, annual recall appears to provide a less biased picture than extrapolations based on monthly recall. Reassuringly, the correlation between annual recall and the daily diaries was found to be good, suggesting that recall data can be a valuable source of information. Golden et al. (2013) also observed relatively low levels of variation in the accuracy of annual recall between households, but this is perhaps not surprising given the relatively small sample, drawn from a single community. Wider comparisons would be useful to determine whether this finding holds more generally.
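The intuition behind that finding can be illustrated with a toy simulation (the seasonal consumption pattern and the noise level on annual recall below are invented assumptions, not estimates from the study). When consumption varies strongly over the year, a single recalled month multiplied by 12 is only representative if that month happens to be typical, so its errors tend to dwarf those of even a modestly noisy annual recall:

```python
import random
import statistics

random.seed(42)

# Hypothetical seasonal pattern: expected probability of a wildlife meal
# on any given day, varying by month (all values invented).
MONTHLY_RATES = [0.5, 0.5, 0.4, 0.3, 0.2, 0.1,
                 0.1, 0.2, 0.4, 0.8, 1.0, 0.9]

def simulate_year():
    """Return simulated meal counts for each of 12 thirty-day months."""
    return [sum(random.random() < rate for _ in range(30))
            for rate in MONTHLY_RATES]

annual_errors, monthly_errors = [], []
for _ in range(2000):
    months = simulate_year()
    truth = sum(months)
    # Annual recall: the true total with modest proportional noise.
    annual_est = truth * random.gauss(1.0, 0.1)
    # Monthly recall: one randomly timed month, extrapolated to a year.
    monthly_est = months[random.randrange(12)] * 12
    annual_errors.append(abs(annual_est - truth))
    monthly_errors.append(abs(monthly_est - truth))

print(f"mean |error|, annual recall:  {statistics.mean(annual_errors):.1f}")
print(f"mean |error|, monthly x 12:   {statistics.mean(monthly_errors):.1f}")
```

Under these assumptions the extrapolated monthly estimate is far more error-prone than the noisy annual one, consistent with the pattern Golden et al. report when consumption is variable over time.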
This study also suggests some valuable new questions. An important bias to be expected in recall data stems from differences in the salience of items being recalled (i.e. the degree to which they stand out). If someone asked me to remember what I had eaten over the last year, I imagine I would recall things that I had particularly liked or disliked, or perhaps had eaten during a special occasion, more readily than others. Golden et al. assume that salience is simply a function of rarity, but Reis & Judd (2000) argue that ‘[m]ore distinctive events in terms of intensity, emotionality, unusualness or personal significance, tend to be more influential’. It would be fascinating to discover whether recall data could be calibrated using simple measures of the salience of particular food items (cf. Papworth, Milner-Gulland & Slocombe, 2013). A similar question could also be asked about the sensitivity of different food items. Golden et al. argue that they did not expect their respondents to underreport consumption of illegally hunted species because of the level of trust they had established with the local community. This situation is relatively uncommon and the usefulness of recall data (and indeed many other forms of interview data) would be dramatically improved if we were able to predict the level of bias associated with sensitive topics of study. More generally, learning to correctly interpret unusual forms of data will often require us to develop a better understanding of the human behaviours and motivations that underpin them (Keane, Jones & Milner-Gulland, 2011).
In a field long seen as ‘crisis-driven’ and focused on action and solutions, methodological studies may not have the immediate appeal of species- or threat-oriented research, but they are vital if conservation is to become the mature, evidence-based discipline it aspires to be. To address the challenges facing conservation, we need to embrace the full diversity of data types available to us and ensure that they are properly validated in order to learn how to use them effectively.