The view from the office is not all bad: conservation evaluation as a ‘sexy’ research goal

Correspondence

Caroline Howe, Department of Life Sciences and Centre for Environmental Policy, Imperial College London, London, SW7 2AZ, UK.

Email: c.howe.01@cantab.net


Read the Feature Paper: Evaluating indices of conservation success: a comparative analysis of outcome- and output-based indices

Commentaries on this Feature Paper: Getting what you pay for: the challenge of measuring success in conservation; How can we sell evaluating, analyzing and synthesizing to young scientists?

Conservation currently lags behind other fields (such as health) in both the quantity and quality of its evaluations. As Possingham (2012) points out, without such analysis we do not know whether our actions are effective or, if they are, to what extent and in what manner they succeed. There are a number of reasons, discussed by both Jones (2012) and Possingham (2012), why conservation suffers from a paucity of empirical evaluations. First, evaluating conservation projects is a complicated undertaking: the ultimate outcomes are often subtle, intangible and slow to manifest, and as such are difficult to quantify. Conservation projects are also frequently affected by external factors, shifts of focus over time and multiple simultaneous interventions, making it difficult to assign causality to any single intervention. As a result of these complexities, monitoring outcomes can be very expensive and time-consuming (Sommerville, Milner-Gulland & Jones, 2011), and is therefore rarely undertaken at a level that would produce meaningful results. Monitoring and evaluation are also often undertaken post hoc, whereas collecting baseline data before an intervention begins would make it far easier to assign causality. Possingham (2012) suggests that a further reason for this lack of evaluations is that office-based modelling and analysis is not as ‘sexy’ as field-based conservation, and therefore fails to attract PhD candidates and other young researchers. Bearing these arguments in mind, it would be easy to paint a bleak picture for the future of conservation evaluation, and ultimately for the practice of conservation in general.

However, the findings of our paper (Howe & Milner-Gulland, 2012) demonstrate that it is possible to develop indices that are broadly useful for evaluating the relative effectiveness of conservation interventions. Our ranked outcomes index was surprisingly robust and, interestingly, correlated well with the output-based index. This finding suggests that conservation evaluation would benefit from further exploration of the assumed causal links between outputs and outcomes, and of the circumstances under which outputs are an adequate proxy for outcomes. The recent literature includes a growing number of field- and office-based studies that attempt to evaluate the relative effectiveness of different conservation programmes (Howe, Medzhidov & Milner-Gulland, 2011; Mills et al., 2011; Lesbarrères & Fahrig, 2012; Walsh et al., 2012). This suggests that awareness of the need to develop useful, consistent measures of conservation success, and to apply the results more broadly, is growing, and that researchers are responding to calls for improved biodiversity monitoring (Lindenmayer et al., 2012). Alongside this, international programmes such as the Darwin Initiative increasingly recognize that their internal standard measures, although a useful means of collating statistics on their projects, must be adapted to support more meaningful monitoring and evaluation in the future. The Darwin Initiative itself intends to update its standard output measures within its new monitoring and evaluation programme.

We feel, therefore, that the future is brighter than it may appear. Our paper is intended as a springboard for further research and an attempt to inspire new researchers in the field of conservation evaluation. There is still a huge body of work to be undertaken. For example, it would be interesting to see how the ranked outcomes approach fares in helping implementers monitor the effectiveness of project portfolios at different scales, from village to international. Because the approach is inherently participatory, it could also be used to explore how different groups view conservation success. For example, we are currently implementing a case study in which we ask international and national non-governmental organization staff and local villagers to rank project outcomes independently, compare their rankings, and then use these different groups’ rankings to evaluate village-level differences in perceived success. As Jones (2012) mentions, our study highlights that differences in worldview can have a profound effect on people’s ranking of outcome measures. The ranked outcomes measure enables qualitative judgements of the type commonly found in reports to be incorporated into quantitative evaluations, and may be a useful starting point for dialogue.
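To give a flavour of the comparison step, the minimal sketch below shows one way that two groups’ independent rankings of the same outcomes might be compared. This is an illustration only, not the analysis of Howe & Milner-Gulland (2012): the outcome labels and group names are hypothetical, and Kendall’s tau is simply one plausible choice of rank-correlation statistic.

```python
# Minimal sketch: comparing independent rankings of project outcomes
# across stakeholder groups. All data and labels are hypothetical;
# Kendall's tau is one plausible statistic, not necessarily the one
# used in Howe & Milner-Gulland (2012).
from scipy.stats import kendalltau

outcomes = ["habitat condition", "species abundance",
            "local livelihoods", "community engagement"]

# Each group ranks the same outcomes independently (1 = most important).
rankings = {
    "NGO staff":       [1, 2, 4, 3],
    "local villagers": [3, 4, 1, 2],
}

tau, p_value = kendalltau(rankings["NGO staff"], rankings["local villagers"])
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.2f})")
# A low or negative tau would flag a divergence in perceived success
# worth exploring in the dialogue the commentary proposes.
```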

As Possingham (2012) points out, many organizations already hold a wealth of information that could be mined in similar studies. However, much of this information is currently inaccessible. The Darwin Initiative has led the way in making all of its project documentation freely available online. Conservationists share the goal of protecting and enhancing biodiversity for the future. Although the view may not be as good, sometimes the best way to do that might be from an office. Robust evaluations of existing data can offer insight into the factors that shaped past conservation outcomes, so that we can adapt and learn for the future. And that, we think, is a ‘sexy’ research goal.
