We would like to thank Cunningham and King (2013) for their comment on our paper evaluating different indices of success (Howe & Milner-Gulland, 2012), and particularly for their support for research that aims to develop methods for assessing the outcomes of conservation projects. However, a number of misunderstandings have arisen from their interpretation of our original paper.

The response from Cunningham and King (2013) does not appear to recognize the aim of the paper, which was to develop robust indices of success, consistent between projects and between evaluators, for conservation programmes and projects in general. The paper does not attempt to evaluate the Darwin Initiative relative to other conservation programmes, nor is it a criticism of the methods by which the Darwin Initiative measures its own success. For the purposes of this study, the Darwin Initiative is used solely as a database with which to evaluate the potential of three indices of conservation success. The Darwin Initiative was chosen because of its international reputation, the length and depth of its documentation, and because confounding variables are reduced: projects are generally independent of each other, have similar budgets, run for the same duration and share similar broad goals.

By failing to recognize the purpose of the paper, Cunningham and King (2013) overlook the findings of this study, which demonstrate that it is possible to develop robust outcome-based indices of conservation success for comparing projects within a funder's portfolio, and that output-based indices show similar results. The study does this in two stages: firstly, by evaluating the internal consistency of the chosen indices and, secondly, through a comparative assessment of the indices' rankings of project success. On the first aim, we demonstrate that although there were systematic differences between scorers, the relative rankings they produced were consistent.
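
For readers interested in how such a consistency check can be carried out in practice, the minimal sketch below (not part of the original study; the scores and the use of Spearman's rank correlation via scipy are illustrative assumptions) shows how agreement between two assessors' rankings of the same set of projects might be quantified.

```python
# Illustrative sketch only: hypothetical success scores for ten projects,
# given by two assessors. A high Spearman rank correlation indicates that
# the relative rankings agree even if one assessor scores systematically
# lower than the other.
from scipy.stats import spearmanr

assessor_a = [7, 5, 9, 4, 6, 8, 3, 2, 7, 6]  # hypothetical scores, assessor A
assessor_b = [5, 3, 8, 2, 4, 7, 1, 1, 6, 4]  # systematically lower, similar ordering

rho, p_value = spearmanr(assessor_a, assessor_b)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```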

Cunningham and King (2013) suggest that we disapprove of the Darwin Initiative's use of outputs as a measure of success; however, we do not state that this is the Darwin Initiative's only measure of success nor do we find that it fails as an indicator of success in its own right. In fact, the output index (based on the standard measures) and the subjective ranked outcomes index (based on narratives within the final reports) were fairly consistent between assessors in their rankings of project success. However, the nuanced differences between the indices led us to conclude that different indices pick up different facets of project success, and therefore there is a need for multiple indices.

When the data for this study were collected in 2006, with the full support of both the Darwin Initiative and the Edinburgh Centre for Tropical Forests (the organization then responsible for administering and evaluating the Darwin Initiative), we were provided solely with the final reports and Darwin Standard Outputs as our raw data. The logical framework referred to was introduced in 2001; consequently, only 3 years of completed projects (2003–2006) contained logical frameworks at the time of our study, and it was therefore not possible to use this information. If this study were to be repeated, this information would be an interesting complement to the indices used.

As correctly stated by Cunningham and King (2013), a number of the standard measures are inputs rather than outputs; however, for the purposes of this study only the outputs were used (as discussed in the Methods section), and our use of outputs as a comparative index was therefore valid. Regardless of whether the Darwin Initiative uses the standard measures to evaluate its own projects, a large number of conservation and development programmes and projects still use outputs to evaluate their progress or success, and it was therefore useful to include an output-based index alongside two outcome-based indices in this comparative evaluation.

Finally, Cunningham and King (2013) describe the Darwin Initiative's current method of evaluating its projects by reviewing how successful each project has been at achieving its purpose statement (outcome). The method described is similar to, but perhaps less quantitative and repeatable than, the ranked outcomes method developed in this study. Their method allows effectiveness to be evaluated on a project-by-project basis, whereas the indices described in our study allow relative project success to be evaluated across a programme, and consistently by different assessors. This is a useful feature because significant lessons can be learnt from such cross-programme evaluations, for example our finding regarding the importance of education and funding as factors contributing to overall project success.

Howe and Milner-Gulland (2012) is an analysis of the application of, and differences between, three indices of conservation success, with the aim of developing methodologies for project evaluation that are of broad general use within conservation. It is not intended as an evaluation of the Darwin Initiative, nor as a critique of the management and evaluation of this programme. The Darwin Initiative was used solely as a database with which to compare the three indices chosen for this study; the fact that we could use the Darwin Initiative as our case study is a testament to the strong commitment to reporting and evaluation shown by this programme from the beginning. In fact, the Darwin Initiative has recently placed all successful funding applications and final reports in a public online database, which promotes transparency and will enable other researchers and conservation practitioners to investigate further the factors promoting project success. This is something we strongly support, and it yet again demonstrates the innovative and constructive approach of the Darwin Initiative to the monitoring and evaluation of its programme.
