Conservation managers and practitioners frequently operate with short timelines and limited resources. Particularly in contexts where empirical information is sparse or unobtainable, they may rely on experts as a useful, alternative source of knowledge for decision-making (Sutherland 2006; Martin et al. 2012). Experts have acquired learning and experience that allow them to provide valuable insight into the behaviour of environmental systems (e.g. Fazey et al. 2006), and they may estimate ‘facts’ such as population sizes, rates of change or life-history parameters, consolidate and synthesise existing knowledge, determine problem framing and solution methods, and offer predictions about the future (Kuhnert, Martin & Griffiths 2010; Perera, Johnson & Drew 2011; Martin et al. 2012).
However, experts may be subject to cognitive and motivational biases that impair their ability to accurately report their true beliefs. Expert judgments of facts may be influenced by values and conflicts of interest (Krinitzsky 1993; Shrader-Frechette 1996; O’Brien 2000) and are sensitive to a host of psychological idiosyncrasies and subjective biases (Table 1), including framing, overconfidence, anchoring, halo effects, availability bias and dominance (Fischhoff, Slovic & Lichtenstein 1982; Kahneman & Tversky 1982; Slovic 1999; Gilovich, Griffin & Kahneman 2002). Structured protocols for elicitation have been developed that attempt to counter these biases. These protocols employ formal, documented and systematic procedures for elicitation, and encourage experts to cross-examine evidence, resolve unclear or ambiguous language, consider where their own estimates may be at fault or superior to those of others, and generate more carefully constructed uncertainty bounds. A substantial body of evidence supports the assertion that structured elicitation methods produce more reliable and better-calibrated estimates of facts than do unstructured or naïve questions (e.g. Spetzler & Stael von Holstein 1975; Keeney & Von Winterfeldt 1991; Stewart 2001; O’Hagan 2006).
| Bias | Description | Example | References |
| --- | --- | --- | --- |
| Anchoring | Final estimates are influenced by an initial salient estimate, either generated by the individual or supplied by the environment | People give a higher estimate of the length of the Mississippi River if asked whether it is longer or shorter than 5000 miles than if asked whether it is longer or shorter than 200 miles | Jacowitz & Kahneman (1995); Mussweiler & Strack (2000) |
| Anchoring and adjustment | Insufficient adjustment of judgments from an initial anchor, known to be incorrect but closely related to the true value | People’s estimates of the boiling point of vodka are biased towards the self-generated anchor of the boiling point of water | Tversky & Kahneman (1974); Epley & Gilovich (2005, 2006) |
| Availability bias | People’s judgments are influenced more heavily by the experiences or evidence that most easily come to mind | Tornadoes are judged as more frequent killers than asthma, even though the latter is 20 times more likely | Tversky & Kahneman (1973); Lichtenstein et al. (1978); Schwarz & Vaughn (2002) |
| Confirmation bias | People search for or interpret information (consciously or unconsciously) in a way that accords with their prior beliefs | Scientists may judge research reports that agree with their prior beliefs to be of higher quality than those that disagree | Lord, Ross & Lepper (1979); Koehler (1993) |
| Framing | Individuals draw different conclusions from the same information, depending on how that information is presented | Presenting probabilities as natural frequencies (e.g. 6 subpopulations out of 10) helps people reason with probabilities and reduces biases such as overconfidence | Gigerenzer & Hoffrage (1995); Levin, Schneider & Gaeth (1998) |
| Overconfidence | The tendency for people to have greater confidence in their judgments than is warranted by their level of knowledge | People frequently provide 90% confidence intervals that contain the truth on average only 50% of the time | Lichtenstein, Fischhoff & Phillips (1982); Soll & Klayman (2004); Moore & Healy (2008) |
| Dominance | Social pressures induce group members to conform to the beliefs of a senior or forceful member of the group | Groups spend more of their time addressing the ideas of high-status members than they do exploring ideas put forward by lower-status members | Maier & Hoffman (1960) |
| Egocentrism | Individuals tend to give more weight to their own opinions than to the opinions of others than is warranted | Individuals attribute weights of on average 20–30% to advisor opinions in revising their judgments, when higher weights would have been optimal | Yaniv & Kleinberger (2000); Yaniv (2004) |
| Groupthink | When groups become more concerned with achieving concurrence among their members than with arriving at carefully considered decisions | The invasion of North Korea and the Bay of Pigs invasion have been attributed to decision makers becoming more concerned with retaining group approval than making good decisions | Janis (1972) |
| Halo effects | When the perception of an attribute for an individual or object is influenced by the perception of another attribute or attributes | Attractive people are ascribed more intelligence than those who are less attractive | Nisbett & Wilson (1977); Cooper (1981); Murphy, Jako & Anhalt (1993) |
| Polarisation | The group position following discussion is more extreme than the initial stance of any individual group members | Punitive damages awarded by juries tend to be higher than the median award decided on by members prior to deliberation | Myers & Lamm (1976); Isenberg (1986); Sunstein (2000) |
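The overconfidence entry above has a direct operational check: compare the fraction of experts’ nominal 90% intervals that actually contain the true value against the nominal level. A minimal sketch, with invented example data (the function name and all numbers are ours, for illustration only):

```python
# Check interval calibration: what fraction of experts' "90%" intervals
# actually contain the true value? Well-calibrated intervals hit ~90%.

def hit_rate(intervals, truths):
    """Fraction of (lower, upper) intervals that contain the true value."""
    hits = sum(1 for (lo, hi), t in zip(intervals, truths) if lo <= t <= hi)
    return hits / len(intervals)

# Hypothetical data: five nominal 90% intervals, only three contain the truth.
intervals = [(10, 20), (5, 8), (100, 300), (0, 2), (40, 45)]
truths = [15, 9, 250, 1, 50]
print(hit_rate(intervals, truths))  # 0.6 -- well below the nominal 0.9
```

In practice such hit rates are computed over calibration questions with known answers, which is how the 50%-coverage figure cited in the table is obtained.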
Within ecology, the uptake of structured methods has been gaining traction (see Choy, O’Leary & Mengersen 2009; Kuhnert, Martin & Griffiths 2010; Burgman et al. 2011a; Martin et al. 2012 for recent reviews). It is generally agreed that face-to-face interviews and workshop-based methods are the most likely to elicit high-quality responses (e.g. Morgan & Henrion 1990; Clemen & Reilly 2001; O’Hagan 2006; Choy, O’Leary & Mengersen 2009; O’Leary et al. 2009; Kuhnert 2011). However, it is not always desirable or feasible to assemble experts together, and a role also exists within ecological applications for methods that facilitate elicitation and interaction among members who are spatially and temporally distributed (e.g. Donlan et al. 2010; Teck et al. 2010; Eycott, Marzano & Watts 2011).
In ecology, the elicitation of opinions via remote means is typically conducted with email or postal mail via a traditional, single-iteration questionnaire (e.g. White et al. 2005) or an iterative Delphi-style process (e.g. Kuhnert, Martin & Griffiths 2010). In the classical Delphi process (Dalkey & Helmer 1963; Linstone & Turoff 1975; Rowe & Wright 2001), experts make an initial estimate, are provided with anonymous feedback about the estimates of the other group members and then make a second, revised estimate, with the estimate and feedback rounds continuing for some set number of rounds or until a pre-specified level of agreement is reached. The Delphi process is well-established in ecology (e.g. Crance 1987; MacMillan & Marshall 2006; O’Neill et al. 2008; Eycott, Marzano & Watts 2011), and it has the advantage, when compared with single-iteration e-questionnaires and unstructured groups, of allowing judges to revise their judgments in the light of those of others in the group, while alleviating some of the most pervasive social pressures that emerge in unstructured discussion settings (e.g. Kerr & Tindale 2004, 2011; Table 1) through its use of structured interaction and maintenance of participant anonymity.
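The estimate-feedback-revise cycle of the classical Delphi process can be sketched in a few lines. This is purely illustrative: the function name, the fixed number of rounds and the simple rule that moves each estimate a fixed fraction toward the anonymous group mean are all our assumptions, standing in for the genuine expert revisions a real elicitation would collect.

```python
# A toy sketch of the classical Delphi feedback loop: each round, every
# expert sees the anonymous group mean and revises toward it. In a real
# Delphi, experts revise however they see fit; the mean-revision rule
# here is only a placeholder to make the loop concrete.

def delphi_rounds(initial_estimates, rounds=3, weight=0.5):
    """Iterate feedback rounds; `weight` is how far each expert moves
    toward the group mean (an assumed stand-in for real revision)."""
    estimates = list(initial_estimates)
    for _ in range(rounds):
        group_mean = sum(estimates) / len(estimates)  # anonymous feedback
        estimates = [e + weight * (group_mean - e) for e in estimates]
    return estimates

# Three experts' initial estimates of some quantity, after two rounds:
final = delphi_rounds([120.0, 80.0, 100.0], rounds=2)
```

A stopping rule based on a pre-specified level of agreement (e.g. the spread of estimates falling below a threshold) could replace the fixed round count, matching the description above.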
However, recent reviews and research on the Delphi process suggest that to achieve improvements in accuracy from round to round, experts must be provided with rationales to accompany the feedback they receive about the responses from other group members, and that in the absence of these rationales, their responses will tend to converge only towards a majority position (Rowe & Wright 1999; Rowe, Wright & McColl 2005; Bolger et al. 2011; Dalal et al. 2011). Incorporating discussion into the feedback stage of the elicitation is one natural and effective means of providing rationales. Burgman et al. (2011b) provide one such example, in which incorporating a Delphi-based ‘talk-estimate-talk’ approach into a face-to-face expert workshop resulted in revisions that did indeed contribute to improvements in overall response accuracy. Such structured discussion-based methods are typically incorporated into elicitation as part of workshops (e.g. Delbecq, Van de Ven & Gustafson 1975), but could feasibly be adapted for use in remote elicitation to improve on the standard Delphi methodology (e.g. Turoff 1972; Linstone & Turoff 2011).
The purpose of this paper is to adapt a modified Delphi approach that incorporates facilitator-assisted discussion for use via electronic mail. We apply this method to an assessment of threatened Australian birds. We aimed to test the feasibility of applying such an approach via email and demonstrate the value of structured elicitation techniques for identifying and reducing potential sources of bias and error among experts. Our procedure facilitates the interaction and aggregation of opinions from multiple, distributed experts, and is, we believe, accessible to practitioners and suitable for elicitation in a wide variety of applied ecological settings. The outcomes provide both a motivation for the use of structured procedures and a roadmap to guide future elicitors in the process of conducting structured elicitation successfully.