Every week, an enthusiastic student emails me, or walks through my open door, keen to do research in conservation. The vast majority of these students want to head into the field. This is wonderful; despite being a modeller, my interest in, and empathy with, ecological systems arise from the direct observation of nature. However, when I explain that most of the people in our group analyze existing data, synthesize information across different studies, build models or, worse still, evaluate the outcomes of conservation interventions, the conversation invariably ends abruptly. These are not the sexy activities that our David Attenborough-loving students want; they are desk-based and quantitative, so what could they have to do with the real world?

My proclamation that conservation needs more analysts, not more field data, invariably elicits a hostile reception among field ecologists. However, the past two decades have seen a growing trend towards synthesis and analysis. First, there was the inspirational creation of the National Center for Ecological Analysis and Synthesis (NCEAS), which has transformed the discipline of ecology under a single funding rule: you cannot use the money to collect more data. Second, there has been the rise of the evidence-based conservation movement (Pullin et al., 2004; Sutherland et al., 2004), in which people systematically synthesize and disseminate existing knowledge about biodiversity interventions.

Why are evaluation and analysis so important? First, if we do not know that we are achieving something through an action, why would we persist in taking that action? Ferraro & Pattanayak (2006) sent shock waves through the conservation community when they questioned the evidence for one of conservation's foundational interventions, protected areas. Just a year earlier, the Millennium Ecosystem Assessment had stated: ‘Few well-designed empirical analyses assess even the most common biodiversity conservation measures’ (Millennium Ecosystem Assessment, 2005). If we do not have assessments of past actions, then decisions are being made in the dark (Cook, Hockings & Carter, 2010). Second, even if we know an action is beneficial to biodiversity, quantifying the biodiversity benefit of that action is essential for determining future investments (Murdoch et al., 2007), because cost-effectiveness analysis demands quantification of benefits.

It is remarkable how much valuable data goes unanalyzed. Two recent papers in Animal Conservation, Howe & Milner-Gulland (2012) and Walsh et al. (2012), provide excellent examples of how we can mine old information to inform future actions. Howe & Milner-Gulland (2012) evaluated 100 final reports from Darwin Initiative projects carried out between 1997 and 2007. They show that results are consistent across three evaluation methods, and they uncover a mix of expected and unexpected effects. Similarly, Walsh et al. (2012) analyze over 30 years of data on the impact of fox Vulpes vulpes baiting on malleefowl Leipoa ocellata conservation, only to discover that there is no evidence that this popular intervention works. Surely there is a plethora of opportunities for similar analyses (Bottrill, Hockings & Possingham, 2011). The issue, then, is not why these analyses were not done earlier, but why there are not a hundred more of them every year.

I return to my initial concern: how can we wean well-meaning students off the desire to collect more data (which, for conservation evaluation, is difficult within a 3–5-year PhD project) and onto the analysis of existing data?

Ironically, I have just returned from the NCEAS ‘wake’ in Santa Barbara in March 2012. While the NCEAS model has purportedly been replicated in many places around the world, my limited experience so far with the children of NCEAS is that none is quite the same. History may show that NCEAS embodies both one of the US National Science Foundation's best, and worst, decisions.

