From the Editors


  • Tony Cox
  • Karen Lowrie
  • Sally M. Kane


Risk Analysis: An International Journal has long emphasized articles that advance the state of the art in the theory and practical application of health, safety, and environmental (HS&E) risk analysis and, more recently, adversarial risk analysis. Papers more narrowly focused on specific risk-related topics—such as financial portfolio risk management, project risk management, insurance premium pricing, inventory control, toxicology, or the mathematics of expected utility theory and its generalizations—are more likely to be published in other journals. However, including more papers that address risks to other (non-HS&E, nonadversarial) aspects of human well-being using scientific analytic methods may increase the value of Risk Analysis.

Two papers in this issue consider risks to the well-being of children and young adults. Camasso and Jagannathan examine how risk analysis concepts and technical methods, such as outrage and ROC curves, can be used to improve the difficult risk management decisions taken by Child Protective Services, such as whether to separate children from possibly abusive families. Both type 1 and type 2 errors (mistakenly concluding that the (uncertain) risk of abuse justifies removing a child from the family setting, and mistakenly concluding that it does not) may have grave consequences for the children and families involved. Camasso and Jagannathan point out how better use of risk analysis methods might reduce both kinds of error. Keeney and Palley address the question of how decision and risk analysis concepts and techniques can be used to reduce mortality risks for children and young adults during the high-risk ages of 15–24. They point out that most of the 20,000 deaths per year in this age group in the United States due to causes such as fatal car accidents (many arising from poor personal decisions, such as texting while driving), murder, suicide, and alcohol and drug abuse could be avoided through better decision making. They suggest a practical, constructive decision framework to help parents and children identify and avoid needlessly high mortality risks during these years by using better decision strategies.
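The trade-off Camasso and Jagannathan analyze can be made concrete with an ROC curve, which plots the false positive rate against the detection rate as the decision threshold of a risk-scoring tool varies. The sketch below is a minimal illustration with hypothetical scores and labels, not the authors' data or model:

```python
def roc_points(scores, labels, thresholds):
    """For each decision threshold, return (false positive rate,
    true positive rate). Raising the threshold reduces type 1
    errors (unnecessary removals) at the cost of more type 2
    errors (missed cases), and vice versa."""
    pos = sum(labels)          # number of true cases
    neg = len(labels) - pos    # number of non-cases
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Hypothetical risk scores and outcomes (1 = substantiated case)
curve = roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0], [0.2, 0.5, 0.85])
```

Sweeping the threshold traces the full curve; which point on it to operate at is a value judgment about the relative costs of the two errors.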


Several papers in this issue advance technical aspects and important applications of dose-response modeling, suggesting constructive ways to improve our ability to quantify the probable adverse health impacts of given exposures, or, inversely, to estimate the “benchmark dose” (BMD) that would cause a specified elevation in risk of response. Hoelzer et al. summarize the results of a 2011 workshop coorganized by the Interagency Risk Assessment Consortium (IRAC) and the Joint Institute for Food Safety and Applied Nutrition (JIFSAN) to advance dose-response modeling for Listeria monocytogenes, the bacterium that causes listeriosis food poisoning. They review and compare dose-response relations estimated for various subpopulations from data on outbreaks, FoodNet surveillance data, and results of animal feeding experiments. Different subtypes of L. monocytogenes may differ in virulence by more than four orders of magnitude. In addition to reviewing available statistical and mechanistic (exponential and beta-Poisson) dose-response models, the authors recommend short- and longer-run priorities for collecting additional information to improve future dose-response models for L. monocytogenes.
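The two mechanistic models mentioned above have simple closed forms. The sketch below shows them in Python with illustrative parameter values (not estimates from the workshop); the beta-Poisson function uses the standard closed-form approximation P(d) ≈ 1 − (1 + d/β)^(−α):

```python
import math

def exponential_response(dose, r):
    """Exponential model: every ingested organism independently
    causes illness with the same probability r."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson_response(dose, alpha, beta):
    """Approximate beta-Poisson model: per-organism infectivity
    varies across hosts/strains according to a beta distribution,
    which flattens the curve relative to the exponential model."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameters only, not fitted values
p_exp = exponential_response(1e4, r=1e-5)
p_bp = beta_poisson_response(1e4, alpha=0.25, beta=1e4)
```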

Marshall et al. propose a novel quantitative approach (based on distances between BMD estimates) to give concrete operational meaning to the concept of two mixtures of chemicals being “sufficiently similar,” from a toxicological perspective, so that toxicity data for one can serve as a useful surrogate for toxicity data for the other. Phung et al. estimate the dose-response relation between exposure to the organophosphate insecticide chlorpyrifos (estimated from analysis of urine samples) and potential neurological effects (e.g., fatigue, emotional states, weakness, memory problems) among rice farmers in Vietnam, based on epidemiological data. Monte Carlo uncertainty analysis is used to quantify the fraction of the population that might receive enough exposure from spraying to experience adverse effects.
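A Monte Carlo analysis of this kind repeatedly samples exposures from an assumed population distribution and counts how often a harm threshold is exceeded. The sketch below uses a lognormal exposure distribution and a threshold chosen purely for illustration, not the distributions fitted by Phung et al.:

```python
import random

def fraction_above_threshold(n_sims, threshold, mu=0.0, sigma=1.0, seed=0):
    """Sample n_sims per-person exposures from a lognormal
    distribution and return the fraction exceeding the
    adverse-effect threshold (all parameters illustrative)."""
    rng = random.Random(seed)
    exceed = sum(
        1 for _ in range(n_sims)
        if rng.lognormvariate(mu, sigma) > threshold
    )
    return exceed / n_sims
```

With mu = 0 the median simulated exposure is 1, so a threshold of 1 is exceeded by roughly half the simulated population; repeating the run with different parameter draws would characterize uncertainty about that fraction.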

Guha et al. provide statistical methods for estimating a BMD (i.e., the dose that creates a specified increase in the probability of an adverse response) from quantal dose-response data without imposing any specific parametric modeling assumptions. A Bayesian prior specifies the desired amount of smoothness in the estimated dose-response relation (as well as allowing other prior knowledge about its shape to constrain the estimated relation). Applied to eight cancer bioassay data sets and an in vitro genetic toxicity data set, the nonparametric methods generally produce results similar to those from parametric models in EPA's BMD risk assessment software (BMDS), but also provide useful fits for data sets for which none of the parametric models in BMDS provides an adequate fit. Finally, Schmidt et al. advance the theory and practice of microbial dose-response estimation and risk assessment by applying readily available computational Bayesian methods (using the OpenBUGS software for automating Markov chain Monte Carlo (MCMC) uncertainty analysis) to better estimate and characterize uncertainty and variability in the widely used exponential and beta-Poisson dose-response models. A striking finding is that the beta distribution in the beta-Poisson model cannot adequately characterize variability across individual pathogens. Moreover, as demonstrated in a case study, the risks estimated in the low-dose region of a beta-Poisson model (often the area of greatest practical interest for microbial risk assessment) may be highly uncertain. This is due to second-order uncertainty, that is, uncertainty about the parameters of the beta-Poisson distribution, arising from the lack of adequate data to constrain the shape of the posterior distribution of the beta-Poisson model in this region.
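For intuition about what a BMD is, note that under the simple one-parameter exponential model P(d) = 1 − exp(−r·d) with zero background response, the dose producing a specified benchmark response (BMR) in extra risk has a closed form. This is only a textbook illustration, not the nonparametric machinery of Guha et al. or the MCMC analysis of Schmidt et al.:

```python
import math

def bmd_exponential(bmr, r):
    """Benchmark dose under P(d) = 1 - exp(-r * d) with zero
    background: solve 1 - exp(-r * BMD) = bmr for BMD."""
    return -math.log(1.0 - bmr) / r

# e.g., the dose producing 10% extra risk when r = 1e-3 (illustrative r)
bmd10 = bmd_exponential(0.10, 1e-3)
```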

Together, these papers present substantial advances in methods and applications of dose-response modeling for BMDs, mixtures, epidemiological exposure-response relations, and microbial risk assessment.


Two papers illustrate the importance of multicriteria decision making in risk management. Santos et al. apply an input–output model to assess the economic and productivity impacts of workforce absenteeism during the 2009 H1N1 pandemic in the National Capital Region. They conclude that prioritizing sectors for recovery based on two different metrics—economic loss and inoperability—yields quite different rankings of the sectors, so that risk management requires making multiobjective trade-offs. Yemshanov et al. show how to develop integrated risk maps for invasive pest risks even when individual criteria scores are uncertain. To do so, they apply and compare the concepts of multiattribute efficient frontiers (and iterated versions, e.g., based on identifying successive new efficient frontiers as previous ones are removed) and multicriteria (linear-weighted average) techniques. They study the robustness of risk rankings to uncertainties in scores for the multiple components of a risk (e.g., potential for introduction of a pest via ports of entry, geographic distribution of hosts, and potential for the pests to become established and to spread among host populations). A case study of the risk of invasion by oak splendor beetles, a significant invasive pest responsible for the decline of oaks outside North America, shows that the different multiattribute and multicriteria techniques considered identify similar risk maps for North America. Results of different techniques largely agree on where available information suffices to confidently rank different areas in terms of risk of invasion and spread, despite remaining uncertainties.


Three papers shed new light on aspects of risk belief and perception elicitation, combination, and uncertainty characterization. Roelofs and Roelofs explain how probability boxes (p-boxes) can be used to combine elicited intervals for multiple uncertain inputs to a risk analysis (including intervals for quantiles and shapes of distributions), yielding bounds (or intervals) for output quantities of interest, without assuming or requiring any specific dependency structure for the inputs. They illustrate the methodology via a case study of the costs of animal disease outbreaks in the United Kingdom, where different disease parameters may have unknown dependencies. Erdem and Rigby investigate levels of perceived control and levels of concern for 20 food and nonfood risks. A careful consideration of how to elicit perceived degrees of control over risks (using the “best–worst scaling” technique, in which respondents identify two items in a choice set that are maximally different on an underlying comparative scale) leads to the discovery that individuals vary widely in their perceptions of control, with women tending to perceive themselves as having less control over risks than men. Becker et al. examine the relation between sources of information (including public education measures), beliefs about earthquake risk, and adoption of household preparation measures in New Zealand. The results confirm and extend previous findings that salient beliefs (e.g., about inevitability and imminence, optimistic bias, and personal responsibility and responsibility for others) affect preparedness. This research also identifies additional salient beliefs that drive preparedness behaviors, such as beliefs about the importance of safety in everyday life.
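The dependency-free bounding that p-boxes provide can be previewed with ordinary interval arithmetic: bounds on a sum or product of uncertain quantities that hold no matter how the inputs covary. This toy sketch conveys the idea only; the method of Roelofs and Roelofs bounds entire distribution functions, not just point intervals:

```python
def interval_add(a, b):
    """Bounds on x + y given only x in interval a and y in
    interval b; valid for any dependency between x and y."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Bounds on x * y: the extremes occur at endpoint
    combinations, whatever the signs of the endpoints."""
    corners = [a[i] * b[j] for i in (0, 1) for j in (0, 1)]
    return (min(corners), max(corners))

# e.g., cost = unit_cost * n_outbreaks, each known only as an interval
# (hypothetical numbers, not the UK case-study values)
cost_bounds = interval_mul((2.0, 3.5), (10, 40))
```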

A Note from the Book Review Editor

In this issue, Tony Cox looks across several recent books to create a coherent narrative on Decision and Risk Psychology, explaining “… how real people do make decisions under risk, uncertainty, time pressure, and peer pressure, and about how they can make such decisions better.” Cox identifies several common elements, integrating across disciplines and drawing on experimental results from behavioral economics and brain imaging studies. Interestingly, the authors include both scientists and journalists, reflecting the appeal of this literature to the general public. Cox's review not only offers risk analysts an invaluable synthesis of important topics but also provides a foundational understanding for students, laypeople, and young professionals. We encourage feedback about the content of the review and will publish brief notes to encourage dialogue.

In the next two issues, we will continue to review books on risk management, moving into the realm of prediction. The Signal and the Noise will be reviewed, followed by another “mega review” of several books that help to extend and enrich the theme.

The “mega review” model for reviewing books began in Risk Analysis under the aegis of Mike Greenberg. We are continuing to experiment, striving to include reviews by students, to compare perspectives across cultures, and to hear from new voices internationally.