From the Editors

Authors

  • Tony Cox

  • Karen Lowrie


FOUNDATIONAL ISSUES: CLARIFYING “DEEP UNCERTAINTY” IN RISK ANALYSIS

In this issue, Terje Aven offers a perspective on how to deal with “deep uncertainty” in risk analysis, reexamining the foundations and various possible meanings of the concept, pointing out some ambiguities in key underlying ideas (e.g., what exactly is meant by “known probability” or “correct model”?), and arguing that “We all find robust decision-making tools useful in providing decision support, but their role is considered of less importance in a regime where managerial review and judgment is acknowledged.” His discussion raises thought-provoking foundational issues, such as whether there are (and should be) considerations in effective decision-making under uncertainty beyond what is captured in formal decision analysis. Aven refers to such additional considerations as managerial review and judgment, and discusses what role they might play in settings where relevant probabilities and models cannot confidently be identified.

IMPROVED RISK MODELING FOR FLOODS, AIR POLLUTION HEALTH EFFECTS, WIND FARMS, AND BORDER INSPECTIONS

Can flood risks be assessed with sufficient realism at the level of individual homes to support privatization of flood insurance? Czajkowski et al. apply hazard, exposure, and vulnerability analyses and state-of-the-art flood catastrophe modeling to assess flood risks for a sample of three hundred thousand single-family homes in two counties in Texas. They find a greater than 2-fold difference in flood risks for homes in the two counties (Travis and Galveston) that receive the same risk zone designation by FEMA; conversely, homes that are classified in different flood risk zones by FEMA may have very similar expected annual losses due to floods. Storm surge exposures may be high even outside FEMA-designated storm surge risk zones. Technical progress that now allows flood risks to be estimated more accurately than by classification into zones, and with far greater spatial resolution than used to be possible, might enable private insurers to start pricing flood risk in the United States, paving the way for privatized flood insurance.
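
A back-of-the-envelope sketch of the expected annual loss (EAL) metric that underlies such per-home comparisons may be helpful; the home value, event probabilities, and damage fractions below are purely illustrative assumptions, not outputs of the catastrophe model used in the article:

    # Minimal sketch of a per-home expected annual loss (EAL) calculation.
    # All inputs are hypothetical placeholders, not values from Czajkowski et al.
    home_value = 250_000  # assumed replacement value in dollars

    # (annual occurrence probability, fraction of home value damaged) -- assumed
    flood_events = [
        (0.10, 0.02),   # frequent, shallow flooding
        (0.02, 0.20),   # 50-year event
        (0.01, 0.45),   # 100-year event
        (0.002, 0.80),  # 500-year event
    ]

    # EAL is the probability-weighted sum of the event losses.
    eal = sum(p * frac * home_value for p, frac in flood_events)
    print(f"Expected annual loss: ${eal:,.0f}")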

Risk estimates phrased as ratios of average risk reduction per unit of reduction in exposure have a compelling “bang for the buck” character that strongly (if implicitly) suggests that reducing exposure will cause reductions in risk. But is this suggestion justified? Cox critically examines the practice of reporting estimated human health benefits from reductions in air pollution via simple ratios (or slope coefficients in simple linear regression models), e.g., percent reduction in population rates of deaths or illnesses per 10 μg/m3 of reduction in average concentration of fine particulate matter in air. He notes that important recent papers adopting this form of risk communication have assigned unjustified causal interpretations to such ratios, conflating their statistical significance with causal significance. Similar calculations produce statistically significant values for other ratios (such as significant reductions in accident death rates and significant increases in respiratory mortality rates per 10 μg/m3 of reduction in air pollution) for which causal interpretations seem implausible, suggesting a need for caution in interpreting regression coefficients for other health effects causally. The “average reduction in risk per unit reduction in exposure” presentation also implicitly conceals potentially important, policy-relevant heterogeneity and nonlinearities (e.g., J-shaped or U-shaped patterns) in associations between air pollution exposures and health risks. More nuanced and informative communication is needed to better inform decision-makers about uncertainty in the likely health consequences of further reductions in air pollution levels.
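
A simple simulation (using invented data, not the studies Cox analyzes) illustrates the point about concealed nonlinearity: a single linear slope, rescaled to a change in risk per 10 μg/m3, can badly misrepresent a J-shaped exposure-response relationship at low exposures:

    # Simulated illustration: a single linear slope summarizes a J-shaped
    # exposure-response curve poorly. All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    exposure = rng.uniform(5, 35, size=500)            # simulated PM2.5, micrograms per m^3
    true_risk = 0.002 * (exposure - 12) ** 2 + 10.0    # assumed J-shaped "true" relationship
    observed = true_risk + rng.normal(0, 0.5, size=500)

    # Ordinary least-squares slope, rescaled to "risk change per 10 units of exposure".
    slope, intercept = np.polyfit(exposure, observed, 1)
    print(f"Fitted linear slope per 10 ug/m3: {10 * slope:.2f}")
    # The single ratio implies every 10-unit reduction lowers risk by the same amount,
    # although below about 12 ug/m3 the assumed curve implies further reductions
    # yield little or no benefit.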

Might occasional hurricanes devastate offshore wind power, or are losses likely to be small enough for grid operators (and their insurers) to take in stride? Rose et al. use simulation modeling of hurricane sizes and occurrence rates, hurricane wind fields, and estimated conditional probabilities of wind turbine buckling to study the vulnerability of offshore wind power to hurricanes. They estimate that up to 6% of wind power off the coast of Texas (the most vulnerable region studied) could be lost to hurricanes per decade, and that a large (hundred-year return period) hurricane could knock 10% of offshore wind power offline simultaneously. Suggested improvements in turbine design and location could mitigate much of this risk.
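
The basic structure of such a simulation can be sketched as follows; the storm frequency, intensity distribution, and buckling probabilities are illustrative assumptions, not the estimates of Rose et al.:

    # Monte Carlo sketch: storms arrive randomly, each storm has an intensity,
    # and each turbine buckles with a category-dependent probability.
    # All numbers below are illustrative assumptions.
    import random

    random.seed(1)
    N_TURBINES = 100
    P_HURRICANE_PER_YEAR = 0.25   # assumed chance of (at most) one damaging storm per year

    BUCKLING_PROB = {1: 0.001, 2: 0.005, 3: 0.02, 4: 0.08, 5: 0.20}  # assumed, by category

    def simulate_decade():
        """Return the number of turbines lost to hurricanes over ten years."""
        lost = 0
        for _ in range(10):
            if random.random() < P_HURRICANE_PER_YEAR:
                category = random.choices([1, 2, 3, 4, 5], weights=[40, 25, 20, 10, 5])[0]
                p = BUCKLING_PROB[category]
                lost += sum(1 for _ in range(N_TURBINES) if random.random() < p)
        return min(lost, N_TURBINES)

    losses = [simulate_decade() for _ in range(10_000)]
    print("Mean fraction of turbines lost per decade:", sum(losses) / len(losses) / N_TURBINES)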

It is always exciting when better technical methods are developed that demonstrably improve the ability to manage risks, e.g., by reducing probabilities of both false positives and false negatives for a given sample size. Decrouez and Robinson consider the application of relatively sophisticated statistical modeling methods, especially Hidden Markov Models (HMMs), to the problem of sequentially allocating inspection effort to detect biohazards (e.g., invasive pests or disease agents) in containers or consignments arriving at a border inspection station. They find that serial correlation in contamination events justifies not only increased inspection effort once a contaminated consignment is detected, but also continued elevated monitoring of an importer even after the contamination prevalence appears to have dropped to normal (zero) levels again. HMM-based methods for calculating the duration of the more intense inspection period lead to decision rules that significantly outperform those from simpler (e.g., Markov) models. Thus, such sophisticated models appear promising for improving adaptive allocation of inspection effort, allowing more effective inspection for a given level of effort. Although further development is needed (e.g., for additional data sets and realistically asymmetric loss functions), this paper nicely illustrates how risk analysts can build on methods from operations research and statistical decision theory to improve important practical risk management operations.
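
A minimal two-state sketch of the kind of hidden-Markov updating involved (with made-up transition and detection probabilities, not those estimated by Decrouez and Robinson) shows why a single detection should trigger a prolonged period of elevated inspection:

    # Two-state hidden Markov sketch: an importer is in a "clean" or "contaminated"
    # state; inspections detect contaminated consignments imperfectly.
    # All probabilities are illustrative assumptions, not estimates from the paper.
    P_STAY_CONTAMINATED = 0.90       # contaminated state persists (serial correlation)
    P_BECOME_CONTAMINATED = 0.02
    P_DETECT_IF_CONTAMINATED = 0.60  # chance an inspected consignment tests positive
    P_FALSE_POSITIVE = 0.01

    def update_belief(belief, positive_test):
        """One forward-filter step: predict the hidden state, then condition on the test."""
        # Prediction step (state transition).
        prior = belief * P_STAY_CONTAMINATED + (1 - belief) * P_BECOME_CONTAMINATED
        # Update step (Bayes' rule with the observation likelihoods).
        if positive_test:
            like_c, like_clean = P_DETECT_IF_CONTAMINATED, P_FALSE_POSITIVE
        else:
            like_c, like_clean = 1 - P_DETECT_IF_CONTAMINATED, 1 - P_FALSE_POSITIVE
        return prior * like_c / (prior * like_c + (1 - prior) * like_clean)

    # After one positive test the belief jumps, then decays only gradually over
    # subsequent clean consignments -- which is why elevated inspection should persist.
    belief = 0.05
    for test in [True, False, False, False, False]:
        belief = update_belief(belief, test)
        print(f"P(contaminated) = {belief:.3f}")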

GLOBAL RISKS: CLIMATE CHANGE AND FOOD SECURITY

What might happen to risks from waterborne and foodborne infectious diseases under different scenarios for climate change? In response to a 2008 World Health Organization (WHO) call for development of decision support tools to assess vulnerability and health impacts from climate change, Schijven et al. developed a computer tool for modeling pathogen responses to increased temperature and changes in rainfall. They conclude that increased temperature could lead to increased risks of food poisoning (e.g., from Campylobacter bacteria in improperly cooked or handled chicken), while increases in heavy rainfall events could also lead to increased peaks in infection risks from several pathogens. They suggest that such computer modeling can inform efforts to adapt to the infectious disease impacts of climate change.

Wu and Guclu consider how worldwide supplies of, and trade in, maize – a crucial cereal crop for both people and animals, as well as a feedstock for multiple industries – might respond to various shocks, such as drought and heavy precipitation events in key agricultural regions of the world, together with global trends such as growing populations and demands for meat, and the use of traditional food crops as biofuel. They study the potential for changes in maize supplies (and prices) to propagate through import-export networks. The position of the U.S. in this network as the main exporter of maize to many countries that do not import substantial fractions of their maize from other suppliers leaves much of the world vulnerable to maize price increases, supply shortfalls, and food insecurity if U.S. exports are reduced. This happened starting around 2005, with the diversion of a substantial fraction of U.S. maize production to ethanol, triggering price increases, food insecurity, and food riots in Mexico, and significantly affecting exports to Saudi Arabia and Japan. Although the network models applied to gain these insights are very simple, and no detailed economic model of substitution and other effects was developed, the analysis suggests the potential value of even simple network models in identifying patterns of trade dependency and interdependency among countries. Such analyses provide a useful starting point for understanding how sparsely interconnected trade networks can create vulnerabilities and food insecurity risks in our increasingly interconnected world.
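
The basic dependency screen that such network models support can be illustrated with a toy trade matrix; the volumes below are invented for illustration, not the trade data Wu and Guclu analyze:

    # Sketch of a simple import-dependency screen on a trade network.
    # Trade volumes (in arbitrary units) are made-up illustrations.
    maize_imports = {
        # importer: {exporter: volume}
        "Japan":        {"USA": 14.0, "Brazil": 1.0, "Argentina": 0.5},
        "Mexico":       {"USA": 9.0,  "Argentina": 0.3},
        "Saudi Arabia": {"USA": 1.8,  "Argentina": 0.6, "Brazil": 0.4},
    }

    # Flag importers that rely on a single exporter for most of their supply;
    # a shock to that exporter propagates directly to them.
    for importer, suppliers in maize_imports.items():
        total = sum(suppliers.values())
        top_exporter, top_volume = max(suppliers.items(), key=lambda kv: kv[1])
        share = top_volume / total
        flag = "  <-- vulnerable to a single-supplier shock" if share > 0.7 else ""
        print(f"{importer}: {share:.0%} of imports from {top_exporter}{flag}")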

EXPOSURE ASSESSMENT

The ability to detect and quantify very low concentrations of chemicals in food, air, and water creates opportunities to expand the scope of environmental regulations. For example, the United States EPA is considering regulating N-nitrosamines (mainly N-nitrosodimethylamine, NDMA) as contaminants under the Safe Drinking Water Act. These compounds, some of which are carcinogenic at sufficiently high doses in laboratory rodents, occur at concentrations of nanograms per liter in treated drinking water. They also occur in human blood, presumably reflecting natural synthesis within the body. Hrudey et al. estimate the fraction of lifetime exposure to NDMA (proportion of lifetime average daily dose) that might be prevented if all nitrosamines were removed from drinking water. They conclude that between 0.000002 and 0.0001 of lifetime exposure might be prevented by this means, depending on whether the surface water treatment system uses free chlorine or chloramines, respectively.
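
The underlying calculation is an attributable-fraction ratio of the drinking-water dose to the total lifetime average daily dose; the following sketch uses purely illustrative numbers, not the exposure estimates reported by Hrudey et al.:

    # Back-of-the-envelope form of the preventable-fraction calculation.
    # The concentrations and intakes below are illustrative assumptions only.
    ndma_in_water_ng_per_L = 2.0     # assumed NDMA concentration in treated water
    water_intake_L_per_day = 2.0     # assumed daily drinking water intake
    total_daily_dose_ng = 50_000.0   # assumed total daily NDMA dose from all sources
                                     # (dominated by synthesis within the body)

    dose_from_water_ng = ndma_in_water_ng_per_L * water_intake_L_per_day
    fraction_preventable = dose_from_water_ng / total_daily_dose_ng
    print(f"Fraction of average daily dose attributable to drinking water: {fraction_preventable:.6f}")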

METHODS FOR CORRELATED EVENT RATES

How can the occurrence rates of multiple types or classes of random events be estimated from data when they are not necessarily statistically independent of each other? For example, how should one estimate the average annual occurrence rates of different types of accidents, injuries, or equipment failures in a set of facilities when these rates may be dependent, e.g., because of common cause failures or shared environments? Quigley et al. provide constructive subjective Bayesian methods, and guidance for empirical Bayes methods, for specifying priors and updating based on data to assess the rates of multiple events without making the usual simplifying assumption that they are conditionally independent, given the values of some other unmeasured variables. They study the performance of Bayes linear analysis methods, which use expected values in place of entire probability distributions and linear models in place of Bayes’ Rule to perform updating. They conclude that this simplified approach is as reliable as full Bayesian methods, with advantages in simpler elicitation and computation.
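
For readers less familiar with Bayes linear methods, the core adjustment replaces full probabilistic conditioning with a linear projection based only on prior means, variances, and covariances; the following one-dimensional sketch uses illustrative prior moments, not the elicited values in the paper:

    # One-dimensional Bayes linear adjustment of an uncertain event rate X
    # given an observed event count D, using only means, variances, and covariances:
    #     E_D(X)   = E(X) + Cov(X, D) / Var(D) * (d - E(D))
    #     Var_D(X) = Var(X) - Cov(X, D)**2 / Var(D)
    # Prior moments below are illustrative (chosen to be consistent with
    # D | X ~ Poisson(10 * X) over a 10-year record), not values from Quigley et al.
    E_X, Var_X = 0.5, 0.04      # prior mean and variance of the annual event rate
    E_D, Var_D = 5.0, 9.0       # implied mean and variance of the 10-year event count
    Cov_XD = 0.4                # implied covariance between rate and count

    d = 9                       # observed number of events in the 10-year record

    adjusted_mean = E_X + Cov_XD / Var_D * (d - E_D)
    adjusted_var = Var_X - Cov_XD ** 2 / Var_D
    print(f"Adjusted expectation: {adjusted_mean:.3f}")   # pulled up toward the data
    print(f"Adjusted variance:   {adjusted_var:.4f}")     # reduced relative to the prior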
