Keywords:

  • Experts;
  • lay judgment;
  • testing;
  • feedback;
  • structured elicitation

Abstract

Expert judgments are a necessary part of environmental management. Typically, experts are defined by their qualifications, track record, professional standing, and experience. We outline the limitations of conventional definitions of expertise and describe how these requirements can sometimes exclude people with useful knowledge. The frailties and biases in expert judgments can interact with the social status afforded to experts to produce judgments that are both unassailable and wrong. Several approaches may improve the rigor of expert judgments; they include widening the set of experiences and skills involved in deliberations, employing structured elicitation, and making experts more accountable through testing and training. We outline the most serious impediments to the routine deployment of these tools, and suggest protocols that would overcome these hurdles.


Introduction

Applied ecology and conservation depend on expert scientific judgments (Burgman 2005; Patterson et al. 2007). Recent developments in ecology and environmental management have explored different methods for formally obtaining and combining expert estimates (Martin et al. 2005; MacMillan & Marshall 2006; James et al. 2010). However, the questions of who should be included in the set of experts, and how expert judgments should be verified, remain open.

A person's formal training and technical knowledge (known as their "substantive" expertise; Stern & Fineberg 1996; Walton 1997) often are contrasted with the knowledge of people without formal training (known as "lay" knowledge). Expert judgments are attractive when time and resources are stretched, and they are especially important where existing data are inadequate, circumstances are unique, or extrapolations are required for novel, future, and uncertain situations.

Because decisions may create “winners” and “losers,” both the decisions themselves and the expert judgments that support the decisions may be controversial, prompting arguments about who is an expert and how experts’ opinions should be used. This is especially so when experts are called upon to advocate on behalf of stakeholders (Dryzek 2005) and to contribute to legal proceedings.

Decisions involve matters of fact and matters of value (Stern & Fineberg 1996; Gregory 2002; Walshe & Burgman 2010). Although fact and value cannot be separated entirely, we are concerned primarily in this article with the role of experts in estimating facts. If decision making in conservation biology were an entirely objective, detached scientific process that led inexorably to a single, rational outcome, definition of expert status would not be problematic. Ideally, there would be a pool of people with appropriate qualifications, extensive experience, and sound technical skills who could be called upon to dispense judgments in a consistent manner.

Unfortunately, this is rarely if ever the case. Social theories take a wider view, seeing expertise as distributed beyond conventional experts and as sensitive to context (Carr 2004; Evans 2008). In most practical situations, the pool of potential technical experts is small and composed of people with overlapping training, knowledge, and experiences, so their judgments are not independent. In addition, expert judgments may be compromised by values and conflicts of interest (Krinitzsky 1993; Shrader-Frechette 1996; O’Brien 2000). For example, Campbell (2002) found clear evidence of value-laden biases in expert judgments for marine turtle conservation.

Kahneman & Tversky (1982; see also Fischhoff et al. 1982; Slovic 1999) demonstrated that experts and lay people alike are sensitive to a host of psychological idiosyncrasies and subjective biases, including framing, availability bias, and social context. Despite these weaknesses, expert estimates of facts are generally better than lay estimates, within the expert's area of expertise (see Shanteau 1992; Slovic 1999; Burgman 2005; Garthwaite et al. 2005; Chi 2006; Evans 2008 for reviews). Unfortunately, experts stray easily outside the narrow limits of their core knowledge, and once outside them, an expert is no more effective than a layperson (Freudenburg 1999; Ayyub 2001). Additionally, experts (and most other people) are overconfident in the sense that they specify bounds for parameters that are too narrow, thereby placing greater confidence in their judgments than is warranted by data or experience (Fischhoff et al. 1982; Speirs-Bridge et al. 2010).

The purpose of this article is to address the problem of defining expertise in conservation and suggest ways it could be improved. The conventional approach to defining experts is by their qualifications, track record, professional standing, and experience. We describe how these requirements can sometimes exclude people with useful knowledge, explaining how the frailties and biases of expert judgments interact with the social status sometimes afforded to experts (Evatts et al. 2006) to produce judgments that are both unassailable and wrong. We then evaluate approaches to the use of experts that will improve the reliability of their judgments. These include widening the set of experiences and skills involved in deliberations, employing structured elicitations, and making experts more accountable through testing and training (we do not evaluate the literature on aggregating the opinions of different experts, which previously has received thoughtful reviews; Wallsten et al. 1997; Clemen & Winkler 1999). Our ultimate aim is to identify tools and strategies that will improve the quality of scientific expert opinion in conservation, highlight impediments to their use, and suggest approaches that will encourage their routine deployment.

Scientific authority, objectivity, and trust

The public, the courts, statutory bodies, and other decision makers accept expert opinions because they believe experts have specialized knowledge not available to all, obtained through training and experience, proven by track records of efficient and effective application (Hart 1986; Gullet 2000). Scientific experts are a source for rules and standards (Peel 2005), they estimate facts, and they contribute to decisions to undertake activities (Gullet 2000). The US National Research Council, for instance, asserts that experts have indispensable substantive knowledge, methodological skills, and experience (Stern & Fineberg 1996). Yet despite the incorporation of advisory panels in legal frameworks, legislation rarely defines expertise or specifies the composition of expert panels. The question then arises, how is expert status decided and validated?

Expert opinions are sought in the trial and judicial determination of cases when the areas are specialized and held to be beyond lay knowledge (Fisk 1998, p. 3; Preston 2006) or in situations when direct evidence is unavailable or unattainable (e.g., Lawson 1900, p. 236). Tests used to separate expert opinion from lay knowledge to determine the admissibility of opinion evidence (Gans & Palmer 2004) are a combination of credentials, technical “knowledge” and reputation, reflecting conventional notions of expertise.

Qualifications, reputation, and membership in professional groups are common guides to expert status (Collins & Evans 2007). The expert, recognized by professional membership, is assumed to have privileged access to knowledge and is deferred to in its interpretation (Barley & Kunda 2006). Some professional bodies are accorded the right of self-regulation in return for competence, integrity, and altruistic service (Cruess et al. 2004).

Expertise includes the abilities to communicate technical information to laypersons, synthesize knowledge, understand the history and context of a debate, work effectively with a range of people, and be familiar with the conventions and jargon of a field (termed “interactional” expertise; Collins & Evans 2007). Critically, because scientific analysis requires robust discussion, and (in legal contexts) cross-examination, substantive experts who are unable to communicate may be just as unqualified as those without any substantive expertise. Effective communication is especially important when interactions with stakeholders are designed to foster broad acceptance of a proposed action (the “instrumental” value of participation; Stern & Fineberg 1996).

One of the problems with this system is that experts assume a position of authority, reinforced by professional membership and status. This authority can intimidate people who wish to examine expert judgments critically, leading to a culture of technical control in which expert opinions are rarely challenged successfully (Walton 1997). For instance, the Supreme Court of Canada noted that expert opinion “dressed up in scientific language” may appear “virtually infallible” (Gans & Palmer 2004, p. 244).

Many people have a view that knowledge held by suitably qualified experts is a clear, objectively defined “truth,” while knowledge held by stakeholders and the public is fuzzy, oversimplified or corrupt (Hilgartner 1990). We see this as a flawed characterization that may erode public trust in decisions, exacerbated by perceptions that experts are overconfident (see Krinitzsky 1993; O’Brien 2000; Yearley 2000; Cruess et al. 2004; Barley & Kunda 2006). The remainder of this article examines ways of improving the definition and use of technical expertise.

Broadening the definition of expertise

Another common (and in our view, misconceived) distinction is that lay knowledge is grounded in real world, operational conditions while technical expertise is based on narrow professional perspectives or theoretical assumptions (Sternberg et al. 1993; Wynne 1996). Informed amateurs and “lay experts” feel as though their evidence is specific, concrete, and sensitive to local realities (Beck 1992; Yearley 2000; Irwin 2001), and they expect “outside” experts to be general and abstract (Gregory & Miller 1998; Leadbeater 2003).

The broader view from social science is that differences between lay and expert knowledge depend on the type of problem, the person applying that knowledge, and the cultural context in which that knowledge is learned and applied (Verran 2002; Carr 2004). Knowledge can be classified as “expert” or “lay” depending on the interests it serves, the purposes for which it is harnessed, or the manner in which it is generated (Agrawal 1995). Motivational biases and conflicts of interest are difficult issues (Slovic 1999). Scientific experts advocate a scientific position, albeit based upon an accepted range of data and methodologies, and they may do so on behalf of a client, such as a proponent for a particular project or decision (Barley & Kunda 2006). In other words, knowledge is contextual (see Broks 2006). We agree with Jasanoff (2006), Broks (2006), Evans (2008), and others that, in many cases, it is not possible to delineate sharply between expert and lay knowledge.

Collins & Evans (2007) classified several forms of expertise, ranging from specific instruction to contributory expertise, the pinnacle of substantive knowledge (Table 1). None of these categories depends exclusively on formal qualifications or professional membership. That is, the reviews and tests of expertise outlined above substantiate the view that expertise is real, but that it is more widely distributed than conventional qualifications suggest, and is often associated with membership in social groups (which may or may not be professional groups).

Table 1. A taxonomy of expertise (modified from Collins & Evans 2007)

  Type                        Characteristics
  Contributory expertise      Fully developed and internalized skills and knowledge, including an ability to contribute new knowledge and/or teach.
  Interactional expertise     Knowledge gained from learning the language of specialist groups, without necessarily obtaining practical competence.
  Primary source knowledge    Knowledge from the primary literature, including basic technical competence.
  Popular understanding       Knowledge from media, with little detail and less complexity.
  Specific instruction (a)    Formulaic, rule-based knowledge, typically simple, context-specific, and local.

  (a) Collins and Evans used the term “beer-mat knowledge” for this category.

Local residents and resource users often are potential experts in the context of environmental management planning efforts that involve conservation biologists, ecologists, and other technically trained scientists. Collateral benefits of broader definitions of expertise include both improved factual estimates and broader acceptance of decisions. Failing et al. (2007), for example, demonstrated the benefit of considering local sources of knowledge in the context of relicensing a hydroelectric facility in British Columbia, Canada. Both conventionally defined technical expertise and “lay” knowledge, drawn from area residents and from members of a local aboriginal community, were used to construct values hierarchies, to understand causal pathways and to evaluate the consequences of the response of the river system to proposed flow changes. This structured, deliberative effort led to an adaptive management approach attractive to a diverse group of technical and public stakeholders.

We conclude that managers should avoid arbitrary, sharp delineations of expertise, and instead include a process to examine knowledge claims critically (Gregory et al. 2006). We outline below three methods for testing claims of expert status.

Making the most of expert judgment

The importance and pervasiveness of expert judgments in conservation biology create an imperative for acquiring judgments that are as accurate and well calibrated as possible. Given the frailties and limitations of expert judgments and the narrow conventional definitions of expertise, what can be done to improve the situation? Essentially, there are three options: to use analytical tests to evaluate the skill and knowledge of potential contributors, to train experts, and to use elicitation procedures that encourage participation and cross-examination of evidence and that anticipate and deal with biases. These are outlined briefly below.

Analytical tests

Cooke (1991) pioneered the idea of using hypothetical and empirical data to measure objectively the knowledge of experts. Essentially, the approach involves asking experts for facts, a subset of which are known to the facilitator but not to the experts (for instance, facts from recent case studies, experiments, hypothetical scenarios, or simulations). Answers to these questions provide information on the skill of the participants, including their reliability (the degree to which an expert's assessment is repeatable and stable across cases; Wallsten & Budescu 1983), accuracy, bias, and calibration (the frequency with which subjective intervals enclose the truth; Speirs-Bridge et al. 2010). Test results may be used to evaluate knowledge, weight opinions, or exclude some opinions altogether (e.g., Cooke & Goossens 2000; see also Morgan & Henrion 1990; Hoffrage et al. 2002; O’Hagan & Oakley 2004). Appeals for more explicit testing have also appeared in legal academic reviews (Schum & Morris 2007).
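The mechanics of scoring seed questions can be sketched in a few lines. The experts, intervals, true values, and the simple hit-rate weighting rule below are illustrative inventions, not Cooke's actual scoring rules (which use proper scoring of probability assessments); the sketch only shows how calibration on questions with known answers might inform the weighting of pooled estimates:

```python
def hit_rate(intervals, truths):
    """Fraction of an expert's credible intervals that enclose the truth
    (the 'calibration' measure described in the text)."""
    hits = sum(1 for (lo, hi), t in zip(intervals, truths) if lo <= t <= hi)
    return hits / len(truths)

# Two hypothetical experts answering the same five seed questions.
truths = [12.0, 3.5, 40.0, 7.2, 150.0]
expert_a = [(10, 15), (3, 4), (35, 45), (7, 8), (100, 200)]       # wide, well calibrated
expert_b = [(11.9, 12.1), (5, 6), (39, 41), (9, 10), (149, 151)]  # narrow, overconfident

w_a, w_b = hit_rate(expert_a, truths), hit_rate(expert_b, truths)

# Use the hit rates as crude performance weights when pooling the two
# experts' point estimates of an unknown quantity.
est_a, est_b = 20.0, 26.0
pooled = (w_a * est_a + w_b * est_b) / (w_a + w_b)
```

Under this toy rule, the overconfident expert's narrow intervals miss the truth more often, so that expert's estimate receives less weight in the pooled answer.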

The prospect of doing this raises challenging questions. Who sets and administers the tests? Which elements of expertise should the tests examine? Where do the data come from to validate the answers? How does one overcome the reluctance to be tested of experts who are unused to being challenged? These tools have been deployed, and many of these hurdles overcome, in applications in law, meteorology, and engineering (e.g., Cooke 1991; Murphy 1993; see also Murphy & Winkler 1977; Fischhoff et al. 1982; Hora 1992). A complete review of these techniques and their implications is beyond the scope of this article.

Feedback and training

If people have the opportunity to learn how to improve their ability to judge, their performance generally improves (Cooke 1991; Cooke & Goossens 2000). Typically, training outlines a field's jargon and theoretical concepts, and uses case studies, experiments, hypothetical scenarios, and simulations to illustrate processes relevant to the questions at hand. This may include numerical and graphical output derived from similar assessments and different ways of representing uncertainty and probabilities (e.g., Kadane et al. 1980; Cooke 1991; Chaloner et al. 1993; Garthwaite et al. 2005).

For people who are involved routinely in expert judgment exercises, feedback allows them to see the results of their earlier assessments, in relation to outcomes. Feedback protocols require procedures for administering and disseminating the results of professional judgments and test questions, so that experts improve their performance over time (Cooke & Goossens 2000). Yet some situations for which expertise is desired have few or no opportunities for feedback. An example is predictions for the social, environmental, or health impacts of an emerging technology (e.g., nanotechnologies, Wintle et al. 2007); there are few clear parallels, and the success of predictions will not be known for decades (Pidgeon et al. 2008).

While training and feedback generally improve expert performance, bias and overconfidence about facts may persist through many repetitions of an elicitation exercise. Practice and experience alone do not necessarily remove biases. Improvement is usually slow, and many similar assessments are needed to generate substantial gains. Feedback protocols have been deployed in engineering risk assessments in Europe, but it has taken many years to establish accepted procedures (Cooke & Goossens 2000). We conclude that, even though improvement is not instantaneous, systematic feedback is the single most important factor demarcating domains in which expertise develops and improves over time (e.g., chess playing, weather forecasting) from domains in which it does not (e.g., psychotherapy) (Dawes 1994).
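One concrete form such feedback can take is a proper score comparing an expert's probability forecasts with observed outcomes; the Brier score (mean squared error of probabilistic forecasts, standard in weather forecasting) is a common choice. The forecasts and outcomes below are hypothetical, and the improvement between rounds is contrived for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against binary
    outcomes: lower is better. Reporting this score back to experts
    is one simple feedback protocol."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

# A hypothetical expert's forecasts over two feedback rounds.
round1 = [0.9, 0.9, 0.8, 0.95]   # overconfident early forecasts
round2 = [0.8, 0.3, 0.7, 0.85]   # tempered after seeing round-1 scores
outcomes = [1, 0, 1, 1]          # what actually happened

s1 = brier_score(round1, outcomes)
s2 = brier_score(round2, outcomes)
```

A falling score across rounds is the kind of objective, repeatable signal the text argues experts rarely receive in conservation settings.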

Structured procedures

Structured elicitation procedures are explicit methods that anticipate and mitigate some of the most important and pervasive psychological and motivational biases. One of the earliest, and still one of the most useful, of these tools is the Delphi technique (see Burgman 2005). In it, experts make an initial judgment of a fact. The responses are shown to the other participants, who then make a second, private judgment. The group average may be weighted by performance on test questions (Cooke 1991). The process circumvents or ameliorates many problems associated with dominance, availability bias, overconfidence (Speirs-Bridge et al. 2010), and related effects. Participants may be given the opportunity to discuss differences of opinion, allowing people to reconcile the meanings of words and context (Regan et al. 2002), thereby removing arbitrary language-based disagreements.
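The round structure of a Delphi exercise can be caricatured as follows. The revision rule (each expert moving halfway toward the group median after seeing the anonymous summary) is an invented stand-in for human re-judgment, included only to show the mechanics of iteration and aggregation:

```python
def delphi_round(estimates):
    """Show the group's anonymous summary (the median), then let each
    participant revise privately. Moving halfway toward the median is a
    simulated revision; in a real Delphi, human experts re-judge."""
    med = sorted(estimates)[len(estimates) // 2]
    return [e + 0.5 * (med - e) for e in estimates]

# Hypothetical first-round estimates of a fact, with one outlier expert.
initial = [10.0, 14.0, 30.0, 12.0, 13.0]
revised = delphi_round(initial)
group_estimate = sum(revised) / len(revised)
```

Even this caricature shows the characteristic behavior: the spread of opinion narrows between rounds without any participant knowing who held the outlying view, which is how the technique blunts dominance effects.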

Law plays a critical role in challenging expert judgment when evidence is presented in support of adversarial positions (Christie 1991; Fisk 1998). Under cross-examination, an expert's efficiency, effectiveness, veracity, credibility, and character may be attacked (Christie 1991; Fisk 1998). The qualifications of an expert may be tested by the opinions of other experts. Experts may be tested by hypothetical questions or by proof that on a former occasion, an expert expressed a different opinion. This questioning has its origins in medieval tests of peer judgment (Franklin 2001). We suggest that adversarial tests of expert evidence in domains outside law courts and tribunals will improve the reliability of expert judgments (Franklin et al. 2008).

The structured elicitation processes outlined above provide a context in which opinions may be cross-examined effectively. Participants have an opportunity to hear and to weigh the opinions of others, integrating new information, improving understanding of the question, and evaluating the context and motivations of other participants, before arriving at their final judgment.

This process will work best when, as noted above, people from a variety of social contexts and “positions” in a debate are involved, providing a measure of protection against motivational bias. Experts may be stratified by geography, technical background, experience, affiliations, or other relevant criteria. Stern & Fineberg (1996) outline objective methods for stratifying and selecting stakeholder participants. Structured elicitation protocols provide an environment in which vigorous peer review and debate, including cross-examination of competing claims, may be captured effectively. Cross-examination of data, models, and reasoning may then allow an independent adjudicator to reach a final synthesis of evidence and to form a conclusion (Franklin et al. 2008).
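Stratified selection of participants can be made explicit and reproducible. The pool, the strata, and the per-stratum quota below are hypothetical; the sketch only shows a transparent, rule-based draw of the kind the text recommends:

```python
import random

# A hypothetical expert pool tagged with strata (affiliations invented).
pool = [
    {"name": "E1", "stratum": "agency"},
    {"name": "E2", "stratum": "agency"},
    {"name": "E3", "stratum": "academic"},
    {"name": "E4", "stratum": "academic"},
    {"name": "E5", "stratum": "local"},
    {"name": "E6", "stratum": "local"},
]

def stratified_panel(pool, per_stratum, seed=0):
    """Draw the same number of participants from each stratum so that no
    single background dominates the panel; a fixed seed makes the
    selection auditable."""
    rng = random.Random(seed)
    strata = {}
    for p in pool:
        strata.setdefault(p["stratum"], []).append(p)
    panel = []
    for members in strata.values():
        panel.extend(rng.sample(members, min(per_stratum, len(members))))
    return panel

panel = stratified_panel(pool, per_stratum=1)
```

Publishing the pool, the strata, and the seed alongside the result is one way to make the selection criticisable by stakeholders, in the spirit of the cross-examination the text advocates.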

Conclusions

Our review suggests that conservation biologists could contribute knowledge more effectively and enhance the credibility of their decisions by embracing a suite of new professional behaviors and wider definitions of expertise. Specifically, the review suggests that the credibility, accuracy, and reliability of expert deliberations will improve if explicit selection, testing, training, and feedback procedures are deployed.

Opportunities for improved performance go beyond recommendations for individual experts. To work effectively, the system in which experts work should be structured to anticipate and deal with cognitive and motivational biases, as described above. In particular, it should ensure the selection of experts is inclusive and transparent, and it should provide ample opportunity for experts to be questioned critically by analysts, other experts, stakeholders, and others.

Our review of scientific authority suggests that what counts as expertise depends on context. Expert performance is likely to be affected in subtle and unpredictable ways by motivations and psychology. If experts are tested, then expertise from all domains may be considered, including what might conventionally be considered lay knowledge. This accords with the sociological theory of expertise as real but “unequally distributed,” not simply determined by formal qualifications and professional membership (Evans 2008). There are several models for engaging a wider cross-section of potential experts, and numerous collateral benefits may accrue from doing so (Carr 2004). These observations lead us to recommend the following general prescriptions for managers involving experts in conservation.

  1. Identify core expertise requirements and the pool of potential experts, including lay expertise.
  2. Create objective selection criteria and clear rules for engaging experts; stratify the pool of experts and select participants transparently from the strata.
  3. Evaluate the social and scientific context of the problem.
  4. Identify potential conflicts of interest and motivational biases, and control bias by “balancing” the composition of expert groups with respect to the issue at hand (especially if the pool of experts is small).
  5. Test expertise relevant to the issues.
  6. Provide opportunities for stakeholders to cross-examine all expert opinions.
  7. Train experts and provide routine, systematic, relevant feedback on their performance.

At minimum, we recommend a formal, transparent process for defining and selecting those with relevant expertise, together with the adoption of new professional standards that employ structured elicitation methods and the testing of, and feedback on, expert judgments, aimed at improving the performance of both experts and elicitation methods over time.

Acknowledgments

We thank Tara Martin, Mark Colyvan, Fiona Fidler, Terry Walshe, Bonnie Wintle, Helen Regan, and three anonymous reviewers for their comments. The work was funded by ACERA Project 0611 and NSF Award SES 0725025. The views expressed in this article are not necessarily endorsed by the authors’ respective organizations.

References

  • Agrawal, A. (1995) Indigenous and scientific knowledge: some critical comments. http://www.nuffic.nl/ciran/ikdm/3--3/articles/agrawal.html
  • Ayyub, B.M. (2001) Elicitation of expert opinions for uncertainty and risks. CRC Press, Boca Raton .
  • Barley, S.R., Kunda G. (2006) Contracting: a new form of professional practice. Acad Manag Perspect 20, 4566.
  • Beck, U. (1992) Risk society: towards a new modernity. Sage Publications, London .
  • Broks, P. (2006)Understanding popular science. Open University Press, Maidenhead and Philadelphia .
  • Burgman, M.A. (2005) Risks and decisions for conservation and environmental management. Cambridge University Press, Cambridge .
  • Campbell, L.M. (2002) Science and sustainable use: views of marine turtle conservation experts. Ecol Appl 12, 12291246.
  • Carr, A.J.L. (2004) Why do we all need community science? Soc Nat Resour 17, 19.
  • Chaloner, K.M., Church T., Louis T.A., Matts J.P. (1993) Graphical elicitation of a prior distribution for a clinical trial. Statistician 42, 341353.
  • Chi, M.T.H. (2006) Two approaches to the study of experts’ characteristics. Pages 2130 in K.A.Ericsson, N.Charness, P.J.Feltovitch & R.R.Hoffman, editors. The Cambridge handbook of expertise and expert performance. Cambridge University Press, Cambridge .
  • Christie, E. (1991) The role of law and science in the resolution of disputes over factual evidence. Environ Plan Law J 8, 200210.
  • Clemen, R., Winkler R. (1999) Combining probability distributions from experts in risk analysis. Risk Anal 19, 187203.
  • Collins, H.M., Evans R. (2007) Rethinking expertise. University of Chicago Press, Chicago .
  • Cooke, R.M. (1991) Experts in uncertainty: opinion and subjective probability in science. Oxford University Press, Oxford .
  • Cooke, R.M., Goossens L.H.J. (2000) Procedures guide for structured expert judgement in accident consequence modelling. Radiat Protect Dosim 90, 303309.
  • Cruess, S.R., Cruess R.L., Johnston S. (2004) Professionalism for medicine: opportunities and obligations. Iowa Orthop J 24, 915.
  • Dawes, R.M. (1994) House of cards: psychology and psychotherapy built on myth. Simon and Schuster, New York .
  • Dryzek, J.S. (2005) Politics of the earth. Oxford University Press, Oxford .
  • Evans, R. (2008) The sociology of expertise: the distribution of social fluency. Sociol Compass 2, 281298.
  • Evatts, J., Mieg H.A., Felt U. (2006) Professionalization, scientific expertise, and elitism: a sociological perspective. Pages 105123 in K.A.Ericsson, N.Charness, P.J.Feltovitch & R.R.Hoffman, editors. The Cambridge handbook of expertise and expert performance. Cambridge University Press, Cambridge .
  • Failing, L., Gregory R., Harstone M. (2007) Integrating science and local knowledge in environmental risk management: a decision-focused approach. Ecol Econ 64, 4760.
  • Fischhoff, B., Slovic P., Lichtenstein S. (1982) Lay foibles and expert fables in judgements about risk. Am Stat 36, 240255.
  • Fisk, D. (1998) Environmental science and environmental law. J Environ Law 10, 38.
  • Franklin J. (2001) The science of conjecture. John Hopkins University Press, Baltimore .
  • Franklin, J., Sisson S.A., Burgman M.A., Martin J.K. (2008) Evaluating extreme risks in invasion ecology: learning from banking compliance. Divers Distributions 14, 581591.
  • Freudenburg, W.R. (1999) Tools for understanding the socioeconomic and political settings for environmental decision making. Pages 94125 in V.H.Dale & M.R.English, editors. Tools to aid environmental decision making. Springer, New York .
  • Gans, J., Palmer A. (2004) Australian principles of evidence. Routledge Cavendish, Sydney .
  • Garthwaite, P.H., Kadane J.B., O’Hagan A. (2005) Statistical methods for eliciting probability distributions. J Am Stat Assoc 100, 680700.
  • Gregory, R. (2002) Incorporating value trade-offs into community-based environmental risk decisions. Environ Values 11, 461468.
  • Gregory, R., Failing L., Ohlson D., McDaniels T. (2006) Some pitfalls of an overemphasis on science in environmental risk management decisions. J Risk Res 9, 717735.
  • Gregory, J., Miller S. (1998) Science in public: communication, culture and credibility. Plenum Trade, London .
  • Gullet, W. (2000) The precautionary principle in Australia: policy, law and potential precautionary EIAs. Risk Health Saf Environ 11, 93124.
  • Hart, A. (1986) Knowledge acquisition for expert systems. McGraw-Hill, New York .
  • Hilgartner, S. (1990) The dominant view of popularization: conceptual problems, political uses. Soc Stud Sci 20, 519539.
  • Hoffrage, U., Gigerenzer G., Krauss S. & Martignon L. (2002) Representation facilitates reasoning: what natural frequencies are and what they are not. Cognition 84, 343352.
  • Hora S.C. (1992) Acquisition of expert judgment: examples from risk assessment. J Energy Eng 118, 13648.
  • Irwin, A. (2001) Constructing the scientific citizen: science and democracy in the biosciences. Public Underst Sci 10, 118.
  • James, A., Low Choy S., Mengersen K. (2010) Elicitator: an expert elicitation tool for regression in ecology. Environ Model Software 25, 129145.
  • Jasanoff, S. (2006) Transparency in public science: purposes, reasons, limits. Law Contemp Probl 69, 21.
  • Kadane, J.B., Dickeey J., Winkler R.L., Smith W., Peters S. (1980) Interactive elicitation of opinion for a normal linear model. J Am Stat Assoc 75, 815885.
  • Kahneman, D., Tversky, A. (1982) On the study of statistical intuitions. Pages 493508 in, D.Kahneman, P.Slovic, A.Tversky, editors. Judgment under uncertainty: heuristics and biases. Cambridge University Press, Cambridge .
  • Krinitzsky, E.L. (1993) Earthquake probability in engineering—part 1: the use and misuse of expert opinion. Eng Geol 33, 257288.
  • Lawson, J.D. (1900) The law of expert and opinion evidence, 2nd edition. T.H.Flood, Chicago .
  • Leadbeater, C. (2003) Amateurs: a 21st-century remake. RSA J. Issue 5507, 22–25.
  • MacMillan, D.C., Marshall K. (2006) The Delphi process—an expert-based approach to ecological modelling in data-poor environments. Anim Conserv 9, 11–19.
  • Martin, T.G., Kuhnert P.M., Mengersen K., Possingham H.P. (2005) The power of expert opinion in ecological models using Bayesian methods: impact of grazing on birds. Ecol Appl 15, 266–280.
  • Morgan, M.G., Henrion M. (1990) Uncertainty: a guide to dealing with uncertainty in quantitative risk and policy analysis. Cambridge University Press, Cambridge .
  • Murphy, A.H., Winkler R.L. (1977) Can weather forecasters formulate reliable probability forecasts of precipitation and temperature? Natl Weather Dig 2, 2–9.
  • Murphy, A.H. (1993) What is a good forecast? An essay on the nature of goodness in weather forecasting. Weather Forecast 8, 281–293.
  • O’Brien, M. (2000) Making better environmental decisions: an alternative to risk assessment. MIT Press, Cambridge, MA.
  • O’Hagan, A., Oakley J.E. (2004) Probability is perfect, but we can't elicit it perfectly. Reliab Eng Syst Saf 85, 239–248.
  • Patterson, J., Meek M.E., Strawson J.E., Liteplo R.G. (2007) Engaging expert peers in the development of risk assessments. Risk Anal 27, 1609–1621.
  • Peel, J. (2005) The precautionary principle in practice: environmental decision-making and scientific uncertainty. Federation Press, Annandale, NSW.
  • Preston, B. (2006) The role of public interest environmental litigation. Environ Plann Law J 23, 337–350.
  • Pidgeon, N., Harthorn B., Bryant K., Rogers-Hayden T. (2008) Deliberating the risks of nanotechnologies for energy and health applications in the US and UK. Nat Nanotechnol (doi: 10.1038/nnano.2008.362).
  • Regan, H.M., Colyvan M., Burgman M.A. (2002) A taxonomy and treatment of uncertainty for ecology and conservation biology. Ecol Appl 12, 618–628.
  • Schum, D., Morris J. (2007) Assessing the competence and credibility of human sources of intelligence evidence: contributions from law and probability. Law Probability Risk 6, 247–274.
  • Shanteau, J. (1992) How much information does an expert use? Is it relevant? Acta Psychol 81, 75–86.
  • Shrader-Frechette, K. (1996) Value judgments in verifying and validating risk assessment models. Pages 291–309 in C.R.Cothern, editor. Handbook for environmental risk decision making: values, perceptions and ethics. CRC Lewis Publishers, Boca Raton.
  • Slovic, P. (1999) Trust, emotion, sex, politics, and science: surveying the risk-assessment battlefield. Risk Anal 19, 689–701.
  • Speirs-Bridge, A., Fidler F., McBride M., Flander L., Cumming G., Burgman M. (2010) Reducing over-confidence in the interval judgements of experts. Risk Anal 30, 512–523.
  • Stern, P.C., Fineberg H.V., editors (1996) Understanding risk: informing decisions in a democratic society. Committee on Risk Characterisation, National Research Council. National Academy Press, Washington, DC.
  • Sternberg, R.J., Wagner R.K., Okagaki L. (1993) Practical intelligence: the nature and role of tacit knowledge in work and at school. Pages 205–227 in J.M.Puckett & H.W.Reese, editors. Lawrence Erlbaum Associates, Hillsdale, New Jersey.
  • Verran, H. (2002) Transferring strategies of land management: indigenous land owners and environmental scientists. Pages 155–181 in Research in science and technology studies, knowledge and society, Vol. 13. Elsevier & JAI Press, Oxford.
  • Wallsten, T.S., Budescu D.V. (1983) Encoding subjective probabilities—a psychological and psychometric review. Manag Sci 29, 151–173.
  • Wallsten, T., Budescu D., Erev I., Diederich A. (1997) Evaluating and combining subjective probability estimates. J Behav Decis Mak 10, 243–268.
  • Walton, D. (1997) Appeal to expert opinion: arguments from authority. Pennsylvania State University Press, Pennsylvania .
  • Walshe, T.V., Burgman M.A. (2010) A framework for assessing and managing risks posed by emerging diseases. Risk Anal 30, 236–249.
  • Wintle, B., Burgman M., Fidler F. (2007) How fast should nanotechnology advance? Nat Nanotechnol 2, 327.
  • Wynne, B. (1996) May the sheep safely graze? A reflexive view of the expert-lay knowledge divide. Pages 44–83 in S.Lash, B.Szerszynski & B.Wynne, editors. Risk, environment and modernity: towards a new ecology. Sage Publications, London.
  • Yearley, S. (2000) Making systematic sense of public discontents with expert knowledge: two analytical approaches and a case study. Public Underst Sci 9, 105–122.