From the Editors


  • Tony Cox

  • Karen Lowrie


Risk Analysis: An International Journal publishes not only original research and important applications to advance the theory and practice of risk analysis, but also “Perspectives” that describe and reflect on progress and needs in the field. This issue begins with two “Perspectives” on foundational issues in risk analysis. Cox discusses causation in risk analysis, focusing on the kinds of evidence and methods needed to support valid causal inferences and modeling. He concludes that current technical methods of causal analysis, developed primarily in other fields, offer opportunities to greatly improve upon older, association-based methods (e.g., regression analyses, Hill criteria, and weight-of-evidence schemes). Using them can make more objective and trustworthy conclusions available to risk managers. Next, Tonn and Stiefel discuss seven technical methods for estimating the risks of human extinction (or other, less dramatic, catastrophes) and compare them on criteria ranging from ease of implementation to ease of communicating results and credibility to policy makers. They consider a variety of events that could end human life, from asteroid impacts to extreme ice ages to alien invasions, and consider how different methods (both probabilistic and nonprobabilistic) might be used to draw credible and informative conclusions about the risks from such unprecedented occurrences. We hope that both newcomers and experienced practitioners in the field of risk analysis will find some of the analytic methods discussed in these “Perspectives” to be interesting and useful.


Wiedemann et al. investigate how informing people about precautionary measures that have been undertaken to alleviate public concerns about possible (but uncertain) risks affects their perceptions of risks. Ironically, precautionary actions can be interpreted as a warning signal that a danger is real and worth worrying about, rather than as reassuring evidence that authorities are exercising precaution to protect against a danger that may not exist or that might be too small to warrant action in a less precautionary society. Thus, attempts to alleviate concerns by publicizing precautions may instead amplify them. Concerns about health effects of electromagnetic fields (EMFs, discussed further in the paper by Spruijt et al. described below) provide a practical example. For risks from terrorist attacks, Mumpower et al. show that psychometric variables—perceptions of likelihood, severity, and number of people affected—are far better predictors of perceived risks from terrorist threats, and of willingness to pay (WTP) to reduce them, than are demographic variables (e.g., sex, race, and political party affiliation). They also observe that perceived risks and WTP to reduce them did not coincide: respondents expressed the highest WTP to reduce the risk of a small nuclear device being detonated in a major U.S. city, even though they did not perceive that as the greatest risk.

Haase et al. evaluate how well five commonly used measurement scale formats—including verbal, visual, and numerical scales—for subjective probability estimates perform in settings where correct (objective) probabilities for the adverse effects of medications are available in various displays. They find that numerical scales outperform the rest on measures of the accuracy and sensitivity of the subjective probabilities expressed on each scale. McNeill et al. advance a substantial stream of recent literature, much of it published in Risk Analysis, on the important link between perceptions of natural disaster (here, wildfire) risk and preparations for the disaster. Focusing on reported preparation behaviors, rather than only on stated intentions, they discover that self-reported preparedness is driven not only by perceptions of risk likelihood and severity, but also by expectations about warnings and utilities: expecting to rely on official fire warnings, or to lose power in the event of a wildfire, is associated with less of certain types of preparedness related to house resilience, whereas expecting to lose water in the event of a wildfire predicts greater planning preparedness.

Spruijt et al. examine the different roles that scientific experts play in appraising suspected environmental health risks from sources such as EMFs and particulate matter (PM), for which data have not yielded a clear, unambiguous characterization of objective risks. By clustering experts based on their rankings of various statements in order of strength of agreement, the authors identify three main roles among EMF experts: autonomous scientists, who believe that science is and should be separate from policy-making, express the belief (shared by all 26 EMF experts) that EMFs do not pose a real threat to public health, and conclude that this makes precautionary policies unnecessary; pragmatists, who recognize that scientific knowledge can be used in different ways to shape policy (e.g., strategically, informatively, or deliberatively), believe that experts choose among these options, and hold that interaction between science and policy is inevitable and necessary; and action-oriented experts, who take a position about what to do in the face of controversial risks. For PM, experts were unanimous that development of new sources of PM should be impeded and that action should not wait for irrefutable scientific evidence. The authors develop a threefold taxonomy of PM experts as well, contrasting the “pure scientist” and “science arbiter” roles found among EMF experts with the greater prevalence of “issue advocate” and “honest broker of policy alternatives” roles among PM experts.

Fujimi and Tatano apply insights from behavioral economics to the practical challenge of inducing homeowners in Japan to invest in seismic retrofitting of their homes, to better protect themselves against future earthquakes. Homeowners in multiple countries invest less in reducing potential losses from natural disasters than models of purely rational (expected utility maximizing) behavior would recommend. Predictable regret is a likely consequence of these under-investments, which arise in part because of ambiguity aversion, or irrational (compared to normative models) reluctance to take action based on uncertain probabilities. Fujimi and Tatano propose to counter one behavioral anomaly with another, without engaging in coercive policies, by exploiting the fact that people also tend to overvalue warranties compared to their actuarial values (a manifestation of the certainty effect, i.e., irrational overvaluation of certainty). They propose the use of government warranties, in which governments warrant that they will cover all costs of repair following an earthquake for homeowners who retrofit their homes, to increase the perceived value of the retrofit. They find that this increases the perceived value of seismic retrofitting by about a third and provides a more cost-effective way to nudge homeowners into implementing seismic retrofitting than simply subsidizing the costs of retrofitting. This analysis illustrates a careful attempt to apply lessons from behavioral economics to an important real-world risk management problem, while remaining explicitly aware of the tradeoffs between libertarian and paternalistic approaches to changing behaviors.


Two papers in this issue apply quantitative simulation modeling to study risk scenarios. Understanding societal resilience and the dynamics of recovery following large-scale disruptions, such as Hurricane Sandy or other natural disasters, is an active area of ongoing research. Li et al. apply an extension of input–output regional economic modeling, based on a new method of dynamic inequalities, to project how London's economy would recover following a hypothetical disruption. They model recovery over time following massive shocks to consumption and employment caused by a major flood, taking into account shifting imbalances in the recoveries of different sectors. Such dynamic recovery modeling can help to identify adaptation measures and recovery priorities, such as prioritizing the transportation and healthcare sectors, based on interdependencies among sectors. For their hypothetical disaster scenario, the authors estimate that London's economy would recover in approximately six years under an appropriate rationing scheme in which each industrial sector's limited output is first used to meet the intermediate demand from other industries, and then any surplus is allocated to meet final demand for industry and household reconstruction, household demand, governmental demand, fixed capital formation, and exports in proportion to their predisaster allocations. Although the sectors will be out of balance most of the time during this recovery period, such priority setting can smooth the path to full recovery.
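The priority rationing scheme described above can be sketched in a few lines of code. The following is a minimal illustration, not the authors' model: the sector output, demand figures, and final-demand shares are hypothetical numbers chosen only to show the allocation logic.

```python
# Hypothetical sketch of the priority rationing scheme described above:
# each sector's limited post-disaster output first meets intermediate
# demand from other industries; any surplus is split across final-demand
# categories in proportion to their pre-disaster shares.
# All numbers below are illustrative, not values from the paper.

def ration_output(output, intermediate_demand, final_demand_shares):
    """Return (intermediate_served, final_allocations) for one sector."""
    served = min(output, intermediate_demand)   # intermediate demand first
    surplus = output - served                   # whatever remains
    allocations = {category: surplus * share
                   for category, share in final_demand_shares.items()}
    return served, allocations

# Illustrative post-shock figures for a single sector (made up):
served, final = ration_output(
    output=80.0,                # constrained post-disaster output
    intermediate_demand=60.0,   # demand from other industries
    final_demand_shares={       # pre-disaster final-demand shares
        "reconstruction": 0.4,
        "household": 0.3,
        "government": 0.1,
        "capital_formation": 0.1,
        "exports": 0.1,
    },
)
print(served, final)
```

With these made-up numbers, intermediate demand is served in full (60.0) and the 20.0 surplus is divided among the final-demand categories in proportion to their pre-disaster shares.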

Zagmutt et al. study how emerging diseases could spread through the U.S. farm-raised channel catfish industry, using a stochastic transition model with spatially distributed ponds to estimate outbreak and disease spread dynamics under different scenarios. They find that loss and insurability calculations are sensitive to several model parameters that are poorly known at present (especially contact rates, infectivity, and transmission parameters) and conclude that both stochastic uncertainty due to randomness in disease outbreak dynamics and epistemic uncertainties about the nature of future emerging diseases are important drivers of final uncertainties about future losses.
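A stochastic spatial transmission model of the general kind Zagmutt et al. use can be illustrated with a minimal simulation. The sketch below is not the authors' model: the pond count, contact rate, transmission probability, and step count are hypothetical placeholders, and it shows only how run-to-run randomness (stochastic uncertainty) produces a distribution of outbreak sizes.

```python
# Minimal sketch of a stochastic pond-to-pond disease-spread simulation in
# the spirit of the model described above. The contact rate, transmission
# probability, and pond count are hypothetical placeholders, not the
# paper's parameters.
import random

def simulate_outbreak(n_ponds=100, contact_rate=3, p_transmit=0.2,
                      n_steps=50, seed=None):
    """Return the number of ponds ever infected in one stochastic run."""
    rng = random.Random(seed)
    infected = {0}                      # pond 0 seeds the outbreak
    for _ in range(n_steps):
        newly = set()
        for pond in infected:
            # each infected pond contacts a few random ponds per step
            for _ in range(contact_rate):
                target = rng.randrange(n_ponds)
                if target not in infected and rng.random() < p_transmit:
                    newly.add(target)
        if not newly:                   # outbreak has died out
            break
        infected |= newly
    return len(infected)

# Repeated runs reveal the stochastic spread in final outbreak size:
sizes = [simulate_outbreak(seed=s) for s in range(200)]
print(min(sizes), max(sizes))
```

Rerunning with different parameter values (mimicking epistemic uncertainty about contact rates and infectivity) shifts the whole distribution of outbreak sizes, which is why the loss and insurability calculations are sensitive to those poorly known parameters.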


The remaining two papers in this issue consider how best to treat epistemic uncertainties (due to ignorance) and uncertain dependencies among individual risks, as well as stochastic or aleatory uncertainties (due to randomness), in conducting risk analyses and interpreting the results. Bedford critically examines the use of FN curves (the traditional “Farmer curves” plotting the frequency of accidents killing at least N people against N, on logarithmic scales) to represent group or societal risks. He draws useful distinctions between multiple fatalities caused by single vs. multiple accidents; between risk aversion to epistemic uncertainties and disaster aversion; between probability and frequency of accidents; and between risk aversion and ambiguity aversion when frequencies are uncertain. Bedford shows how these properties are interrelated, and that an often-assumed power law disutility function holds only when preferences satisfy a very special scale-invariance property. He proposes an extension to a two-parameter class of disutility functions that can reflect distinct preferences for disaster aversion and aversion to epistemic uncertainty.
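An FN curve of the kind Bedford analyzes can be computed directly from accident records, and a power-law disutility function makes disaster aversion explicit. The sketch below uses made-up fatality counts and an illustrative exponent; it is not drawn from the paper.

```python
# Hypothetical sketch: build an FN curve (annual frequency of accidents
# killing at least N people, versus N) from a list of accident fatality
# counts, and score the record with a power-law disutility u(N) = N**a.
# The accident data and the exponent are illustrative.

def fn_curve(fatality_counts, years_observed):
    """Map each observed N to the annual frequency of accidents with >= N deaths."""
    ns = sorted(set(fatality_counts))
    return {n: sum(1 for c in fatality_counts if c >= n) / years_observed
            for n in ns}

def expected_disutility(fatality_counts, years_observed, a=1.5):
    """Annualized expected disutility under u(N) = N**a. An exponent a > 1
    is disaster-averse: one accident killing 100 people weighs more than
    100 accidents each killing 1."""
    return sum(c ** a for c in fatality_counts) / years_observed

accidents = [1, 1, 2, 5, 30]           # made-up fatality counts over 10 years
curve = fn_curve(accidents, years_observed=10)
print(curve)
print(expected_disutility(accidents, years_observed=10))
```

On log-log axes the resulting points form the familiar downward-sloping Farmer curve. Bedford's point is that a single exponent conflates disaster aversion with aversion to epistemic uncertainty, motivating his two-parameter extension.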

Bier and Lin note that epistemic uncertainties can create dependencies among estimates of risk for different individuals, locations, or other units of analysis. They consider how to model such dependencies, both for evaluating risk for a single technology and for comparing risks across technologies. They review useful distinctions, such as between state-of-knowledge uncertainty and population variability, and between outcome uncertainty and assessment uncertainty. They explain the implications of these distinctions for risk equity, for decisions, and for the value of collecting and presenting additional information about dependencies and uncertain probabilities in the short run (to improve a single decision) and in the long run (to improve many decisions that involve the same uncertainties). Using simple examples, they demonstrate the significant practical difference that epistemic (e.g., state-of-knowledge) uncertainty can make in the range of possible outcomes when dependencies may be present. Finally, they consider how to communicate more effectively with decision-makers about how epistemic uncertainties and dependencies might affect their evaluations of decision options, and about the extent to which alternative risk management decisions might reduce uncertain risks.
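The kind of simple example Bier and Lin describe can be reproduced numerically: when several units of analysis share a single uncertain failure probability, their outcomes are correlated even though they are independent conditional on that probability. The Beta prior and sample size below are hypothetical choices for illustration, not taken from the paper.

```python
# Sketch of how shared epistemic uncertainty induces dependence: two units
# share one uncertain failure probability p (state-of-knowledge
# uncertainty). Given p, their failures are independent; averaging over p
# makes them positively correlated. The Beta(2, 8) prior is illustrative.
import random

rng = random.Random(0)
n_draws = 100_000
both = one_a = one_b = 0
for _ in range(n_draws):
    p = rng.betavariate(2, 8)     # epistemic draw of the shared p
    a = rng.random() < p          # unit A fails, conditionally independent
    b = rng.random() < p          # unit B fails, conditionally independent
    both += a and b
    one_a += a
    one_b += b

p_a, p_b, p_ab = one_a / n_draws, one_b / n_draws, both / n_draws
# Positive dependence: P(A and B) exceeds P(A) * P(B).
print(p_a, p_b, p_ab, p_a * p_b)
```

Ignoring this dependence understates the probability of joint failures, which is exactly the widening of the range of possible outcomes that Bier and Lin's examples demonstrate.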