
Combining Experts’ Judgments: Comparison of Algorithmic Methods Using Synthetic Data

Authors

  • James K. Hammitt (corresponding author)
    • Center for Risk Analysis, Harvard University, Boston, MA, USA.
    • LERNA-INRA, Toulouse School of Economics, Toulouse, France.
    • Correspondence: James K. Hammitt, Center for Risk Analysis, Harvard University, 718 Huntington Ave., Boston, MA 02115, USA; jkh@harvard.edu.

  • Yifan Zhang
    • Department of Biostatistics, Harvard University, Boston, MA, USA.

Abstract

Expert judgment (or expert elicitation) is a formal process for eliciting judgments from subject-matter experts about the value of a decision-relevant quantity. Judgments in the form of subjective probability distributions are obtained from several experts, raising the question of how best to combine information from multiple experts. A number of algorithmic approaches have been proposed, of which the most commonly employed is the equal-weight combination (the average of the experts’ distributions). We evaluate the properties of five combination methods (equal-weight, best-expert, performance, frequentist, and copula) using simulated expert-judgment data for which we know the process generating the experts’ distributions. We examine cases in which two well-calibrated experts are of equal or unequal quality and their judgments are independent, positively dependent, or negatively dependent. In this setting, the copula, frequentist, and best-expert approaches perform better, and the equal-weight combination performs worse, than the alternative approaches.
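As a point of reference for the equal-weight combination mentioned in the abstract, the method is a linear opinion pool: the combined density is the simple average of the experts’ density functions. The sketch below is not from the article; the two expert distributions and their parameters are assumed purely for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative only: two hypothetical experts, each reporting a normal
# subjective distribution for the decision-relevant quantity.
expert_1 = stats.norm(loc=2.0, scale=1.0)   # assumed parameters
expert_2 = stats.norm(loc=3.5, scale=0.5)   # assumed parameters

def equal_weight_pdf(x, experts):
    """Equal-weight (linear opinion pool) combination: average the
    experts' probability densities point by point."""
    return np.mean([e.pdf(x) for e in experts], axis=0)

x = np.linspace(-2.0, 7.0, 500)
combined = equal_weight_pdf(x, [expert_1, expert_2])

# The combined density integrates to ~1 because each component does.
print(np.trapz(combined, x))
```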

