Imagine a construction project whose objective is to build a bridge. The quality of the resulting bridge might be debated and disputed, but that it has been delivered, and that the project is responsible for its delivery, is usually relatively simple to verify. It is much more difficult to evaluate success where a project aims to deliver something whose existence is costly or technically challenging to monitor, and whose status may be affected (positively or negatively) by a range of influences that have nothing to do with the project. Conservation interventions may seek to influence the conservation status of a range of species, improve the management of an area of habitat, or change the attitudes and behaviour of a population of people. All of these things are challenging to measure in themselves. Importantly, they can also be affected by a myriad of factors external to a project (global commodity prices, national politics or law enforcement, and shifting social norms), making it challenging to tease apart the influence of the project on the sought-after outcome. Conservation projects have been widely criticized in the past for poor evaluation (Saterson et al., 2004; Brooks et al., 2006). In their paper, Howe & Milner-Gulland (2012) ask which indices are appropriate for evaluating success in conservation.

Howe & Milner-Gulland (2012) follow others in distinguishing outputs (the amount of something delivered by a project, e.g. number of workshops held, papers published or posters distributed) from outcomes (the long-term consequences of the project, e.g. change in population size of a target species). Outcomes are what the project ultimately aims to deliver, but they can be very costly to measure. A recent study of the costs of monitoring the presence or absence of a variety of species of conservation concern in the dry forests of Madagascar illustrates the challenge of monitoring outcomes directly. Sommerville, Milner-Gulland & Jones (2011) found that monitoring capable of robustly detecting change over time would be unrealistically costly for the vast majority of species, as it would cost more than the budget for the entire intervention.

The UK government launched its Darwin Initiative at the Rio summit in 1992. Since then, it has invested £88 million in biodiversity conservation projects in 154 countries (DEFRA, 2012). This fantastic programme provided Howe & Milner-Gulland (2012) with an unrivalled opportunity to investigate how much agreement there was between rankings of project success as evaluated using different indices (one based on reported outputs, and two based on subjective scoring of information about outcomes), and also which explanatory variables best predicted success as defined by the different indices. Their finding that the ranking of projects using the outputs-based indicator was well correlated with the ranking from the subjective outcomes measure is interesting and worthy of further exploration. However, as the authors themselves note, there are no quantitative, independent data on outcomes against which to measure the performance of the various indices. Because outcomes are so difficult to measure directly, and may not be achieved within the short timescale of a funded project, indices based on outputs will always be needed. Underlying this approach is an assumption that there is a mechanism linking delivery of the outputs to delivery of the outcomes, yet this mechanism is often not made explicit. If the assumed linkages between outputs and outcomes were spelt out more explicitly, both in project proposals and in reports, alongside the evidence upon which each assumption is based, output measures would become more valuable for assessing project success.
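To make this kind of comparison concrete, the sketch below computes a Spearman rank correlation between two hypothetical sets of project scores. The numbers, index definitions and variable names are invented for illustration only; they are not the Darwin Initiative data, nor necessarily the exact analysis used by Howe & Milner-Gulland (2012).

```python
# Minimal illustrative sketch: hypothetical scores, not the Darwin Initiative data.
from scipy.stats import spearmanr

# Hypothetical per-project values under two indices (higher = more successful).
output_index = [12, 7, 15, 3, 9, 11]            # e.g. a weighted count of reported outputs
outcome_score = [4.1, 2.8, 4.6, 1.9, 3.2, 3.0]  # e.g. a subjective 1-5 rating of likely outcomes

# Spearman's rho compares the *rankings* implied by the two indices,
# so it is insensitive to their different scales.
rho, p_value = spearmanr(output_index, outcome_score)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```

A high rank correlation of this kind indicates that the two indices order projects similarly, but, as noted above, it cannot establish which ordering better reflects true outcomes.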

Howe & Milner-Gulland (2012) also investigated the internal consistency of two of the possible indices, that is, the extent to which different assessors give an individual project the same score using the same index. While they found a high level of agreement between different assessors scoring the same projects with the same index, the existence of an outlier was revealing. The majority of their assessors came from a very similar academic background, while the one from a different discipline (pest management rather than conservation) scored projects quite differently. It is likely that world view plays an important role in what an individual considers success in the context of a conservation project. This raises important philosophical questions about the appropriate measures of success and who should be involved in their selection.
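One common statistic for summarizing this kind of agreement among several assessors is Kendall's coefficient of concordance (W). The sketch below uses invented scores rather than the study's data, and is not necessarily the statistic the authors used; it simply illustrates how a single discordant assessor pulls agreement down.

```python
# Illustrative sketch with invented scores; not the study's data or necessarily its statistic.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores):
    """Kendall's coefficient of concordance for an (assessors x projects) score matrix.

    Returns a value between 0 (no agreement) and 1 (perfect agreement).
    Ties are ignored for simplicity.
    """
    ranks = np.apply_along_axis(rankdata, 1, scores)  # rank projects within each assessor
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Rows = assessors, columns = projects (hypothetical 1-5 success scores).
scores = np.array([
    [4, 2, 5, 1, 3],
    [4, 3, 5, 1, 2],
    [2, 4, 3, 5, 1],  # an assessor from a different background, ranking projects quite differently
])
print(f"Kendall's W = {kendalls_w(scores):.2f}")
```

With only three assessors, the single outlier reduces W sharply; dropping the third row and recomputing shows how closely the remaining assessors agree.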

It has been suggested that environmental policy, including biodiversity conservation, lags behind other policy fields such as criminal rehabilitation and health in terms of the quality of project evaluation (Ferraro & Pattanayak, 2006). Sadly, this does indeed appear to be the case, despite the immense importance of proper evaluation in guiding future investment. A recent systematic review of Community Forest Management (CFM) projects, an environmental policy that has received millions of dollars of funding from global donors, found very little evidence that CFM delivers the claimed global environmental benefits such as biodiversity conservation, and almost no robust evidence concerning the delivery of local welfare benefits (Bowler et al., 2012). Bowler et al. (2012) do not conclude that CFM is ineffective at delivering these benefits, but rather that the evidence that would allow donors to be confident in the efficacy of their investment simply has not been collected.

The challenges facing biodiversity show no sign of diminishing. Conservation therefore needs to continue attracting, and indeed to increase, funding from government schemes (such as the UK government's Darwin Initiative), corporations, non-governmental organizations and private individuals. To maintain and increase donor confidence that conservation investments represent good value for money, conservation science needs to devote considerably more attention to developing, and applying, robust and more cost-effective approaches for evaluating conservation success. Howe & Milner-Gulland (2012) make a useful contribution to this effort.

References
