From the Editors


  • Tony Cox

  • Karen Lowrie


When a complex social system or community is disrupted by disaster, how long does it take to recover? The first two papers in this issue explore aspects of resilience and recovery dynamics for cities affected by an influenza pandemic or a severe earthquake, respectively. El Haimar and Santos add to a stream of papers in Risk Analysis that have developed and applied dynamic input-output economic models and inoperability models to understand regional economic impacts of disasters. (A readable introduction to such models, along with earlier Risk Analysis papers on this topic, is included in the special virtual issue on Economics and Risk Analysis, edited by Greenberg and Cox, now available on the journal website.) Using a simulation model (FluWorkLoss) and data from the 2009 H1N1 experience, El Haimar and Santos note that, from the standpoint of inoperability, an influenza pandemic in the National Capital Region would hit hardest those sectors related to hospitals and healthcare, but would cause the greatest economic losses in productive sectors such as Federal agencies and legal services. In evaluating mixes of mitigation options (e.g., vaccination, social distancing, distribution of antivirals), both inoperability and economic losses must therefore be considered, as the two metrics yield quite different rankings of the most critically affected regional economic sectors. Chang et al. develop a novel approach to eliciting consensus point estimates from experts for how vulnerable urban infrastructure is to disruption, how long recovery will take, and how effects are expected to propagate through linked economic sectors. Using a possible large earthquake in Vancouver as a case study, they show that feedback and iterative learning among the experts can be used to develop informative estimates of infrastructure resilience, disruption by a catastrophic event, and recovery over time.
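For readers unfamiliar with the inoperability input-output framework, a minimal sketch follows. It assumes the standard static formulation q = A*q + c* (so q = (I − A*)⁻¹c*), where q is a vector of sector inoperabilities, A* a normalized interdependency matrix, and c* a demand-side perturbation; the sectors and all numbers below are invented for illustration and are not from the paper:

```python
# Rough sketch (not the authors' model) of the static inoperability
# input-output relation underlying this line of work:
#     q = A* q + c*   =>   q = (I - A*)^(-1) c*

def solve_inoperability(a_star, c_star, iters=200):
    """Fixed-point iteration q <- A* q + c*, which converges when the
    spectral radius of A* is below one."""
    q = [0.0] * len(c_star)
    for _ in range(iters):
        q = [sum(a_star[i][j] * q[j] for j in range(len(q))) + c_star[i]
             for i in range(len(q))]
    return q

# Two hypothetical sectors; healthcare depends moderately on utilities.
a_star = [[0.0, 0.3],
          [0.1, 0.0]]
c_star = [0.05, 0.10]   # initial demand-side perturbations
print(solve_inoperability(a_star, c_star))
```

The fixed-point iteration stands in for the matrix inverse so the sketch needs no linear-algebra library; equilibrium inoperability exceeds the initial perturbation in each sector because disruptions propagate through the interdependencies.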


How can we assess and improve the trustworthiness of probabilistic predictions made by experts or by risk models? Steyvers et al. apply ideas from Bayesian signal detection theory to evaluate and compare individual human forecasters of uncertain world events based on the areas under their receiver operating characteristic (ROC) curves. Kim et al. advance the state of the art in model-ensemble-based analysis (i.e., using multiple models, rather than any single model, to form risk estimates, improving the robustness of model-based conclusions to model uncertainty) by considering how to balance goodness-of-fit against diversity of models. For dose-response models, they propose a model diversity index and show how it can be used to help select sets of models for model averaging, ensuring enough model diversity to capture uncertainty about low-dose extrapolation of risk while considering only models that provide adequate goodness-of-fit in the observed range of dose-response data.

Watanabe et al. consider how to develop credible dose-response models for the tropical zoonotic disease leptospirosis (carried by mice and other hosts), using 22 data sets reported in ten different studies. They fit exponential and beta-Poisson dose-response models to each data set; test the null hypothesis that a single set of model parameter values describes all of the data sets; and develop pooled parameter estimates not only for the widely used exponential and beta-Poisson dose-response models, but also for extensions of these models that include time-dependent rate parameters. They discuss the limitations, as well as the statistical advantages, of pooling parameter estimates across multiple strains and species to produce a proposed model that applies to all of them.
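For reference, the two standard dose-response forms named above can be sketched as follows; the beta-Poisson version shown is the widely used approximate form, and the parameter values are invented for illustration, not the pooled estimates from the paper:

```python
import math

# The two standard microbial dose-response models discussed in the text.

def p_exponential(dose, r):
    """Exponential model: each ingested organism independently causes
    infection with probability r, so P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def p_beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson model, which lets single-organism
    infectivity vary: P(infection) = 1 - (1 + dose/beta) ** (-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameter values only (not fitted to any data set).
for dose in (1, 10, 100, 1000):
    print(dose, p_exponential(dose, r=0.01),
          p_beta_poisson(dose, alpha=0.25, beta=50.0))
```

Both curves rise monotonically from 0 toward 1 with dose; the beta-Poisson form is shallower at high doses because it allows heterogeneity in single-organism infectivity, which is one reason fitting both families to each data set is informative.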


Suppose that a complex chemical engineering plant stops working as intended. How quickly and dependably can an automated system detect the problem, classify the probable type of failure, and diagnose its probable root cause? Santos et al. present a new method for learning Bayesian Network (BN) classifiers from data consisting of historical records of faults, symptoms, and diagnoses. The new method identifies combinations of observed variable values, and relations among them (represented by a BN), that are highly diagnostic of underlying faults. They benchmark its performance against two other well-known Bayesian classifier systems (naïve Bayes and tree-augmented network classifiers) on a standard process-engineering benchmark problem, the Tennessee Eastman Process, and conclude that the new algorithm successfully learns relevant predictive relations for accurately classifying observed symptoms in terms of underlying probable fault conditions and causes.
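As a point of reference, the simplest of the benchmark classifiers mentioned, naïve Bayes, can be sketched in a few lines. The fault labels, symptom variables, and records below are invented toy data; this is neither the authors' algorithm nor the Tennessee Eastman data:

```python
import math
from collections import Counter, defaultdict

# Toy naive Bayes fault classifier over binary symptom readings: each
# symptom is treated as conditionally independent given the fault.

def train(records):
    """records: list of (symptom_tuple, fault_label) pairs."""
    priors = Counter(label for _, label in records)
    counts = defaultdict(Counter)  # (symptom_index, label) -> value counts
    for symptoms, label in records:
        for i, value in enumerate(symptoms):
            counts[(i, label)][value] += 1
    return priors, counts

def classify(symptoms, priors, counts, alpha=1.0):
    """Return the fault maximizing log P(fault) + sum_i log P(symptom_i
    | fault), with add-alpha smoothing (denominator assumes binary
    symptom values)."""
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for label, n in priors.items():
        score = math.log(n / total)
        for i, value in enumerate(symptoms):
            c = counts[(i, label)]
            score += math.log((c[value] + alpha) / (sum(c.values()) + 2 * alpha))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical records: (high_pressure, high_temperature) -> fault type.
records = [((1, 0), "valve fault"), ((1, 1), "valve fault"),
           ((0, 1), "sensor fault"), ((0, 0), "sensor fault")]
model = train(records)
print(classify((1, 0), *model))  # prints "valve fault"
```

The BN-based methods the paper studies relax exactly the conditional-independence assumption this baseline makes, by learning diagnostic relations among the symptoms themselves.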


When hikers, climbers, or other recreational visitors to national parks suffer injury or death due to accidents, whom do we blame, how much do we blame them, and how might this be predicted? Rickard presents evidence that factors known to be important in risk perception, such as the controllability and desirability (or affect) of a hazardous activity, can also be used to help predict the extent to which we tend to blame characteristics of the victim instead of characteristics of the park or its management. Such risks are relatively voluntary, of course. Krause et al. study how support for a public good, a carbon capture and storage (CCS) facility, varies as the proposed site of the facility is moved closer to where respondents live, potentially imposing externalities that the community might not voluntarily accept. They find that 80% of respondents initially support the use of CCS technology, but about 20% of these initial supporters oppose having the facility located close to where they live. World view, as well as perceptions of safety and local benefits, helps to predict the acceptability of such technologies. The importance of world view in determining the acceptability of risks is further underscored in a paper by Song on public perceptions of the risks and benefits of childhood vaccines. Using the grid-group taxonomy from Douglas and Wildavsky's cultural theory of risk perception, Song demonstrates that hierarchists tend to see greater benefits and lower risks than fatalists, with egalitarians and individualists in between. Such insights may eventually be used (for better or worse) to help frame messages (or “narratives”) to reinforce world views and more effectively encourage desired changes in behaviors.
Finally, Vandoros et al., in a fascinating study of how real-world behaviors can be affected by news (in this case, bad financial news), observe that road traffic accidents increased in the two days following each of Greece's successive announcements of new austerity measures in 2010–2011.


The remaining three papers in this issue address aspects of environmental exposures and health risks. Davis et al. propose and apply a flexible simulation-modeling framework for understanding and comparing the potential of different water distribution systems to spread contaminants introduced at their nodes. Both conservative and exponentially decaying contaminants can be modeled, and upper-bound estimates on impacts can be derived even when no network model is available. Brochu et al. find that overweight or obese individuals inhale more air per day than their normal-weight counterparts, suggesting that they may also have higher exposures (by total mass inhaled) to airborne pollutants. Gernand and Casman use tree-structured methods from machine learning (classification and regression trees (CART) and Random Forest, a model ensemble extension of single CART methods) to structure a meta-analysis of carbon nanotube (CNT) pulmonary toxicity studies. This innovative approach to discovering what matters most, empirically, for predicting toxicity identified cobalt and other impurities, short length, large diameter, and small aggregate size as important predictors of a range of toxicity indicators. This paper, together with the Bayesian Network paper by Santos et al. for the Tennessee Eastman process and the paper by Kim et al. on a diversity index for designing or selecting model spaces to average over, illustrates the increasing use and importance of machine learning techniques and concepts in applied risk analysis.
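The tree-based variable-ranking idea behind such a meta-analysis can be sketched in miniature: for each candidate predictor, find the single threshold split that most reduces the variance of a toxicity score, and rank predictors by that reduction (the core step that CART repeats recursively and that Random Forest averages over many trees). All data and variable names below are invented:

```python
# Toy sketch of CART-style predictor ranking by variance reduction.

def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split_gain(xs, ys):
    """Largest drop in size-weighted variance over all threshold splits
    of predictor xs against response ys."""
    base, n, best = variance(ys), len(ys), 0.0
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        gain = base - (len(left) * variance(left)
                       + len(right) * variance(right)) / n
        best = max(best, gain)
    return best

# Hypothetical studies: (cobalt impurity %, CNT length, toxicity score).
studies = [(0.1, 5.0, 1.0), (0.2, 4.0, 1.2), (2.0, 5.5, 4.0), (2.5, 4.5, 4.5)]
cobalt, length, tox = zip(*studies)
print(best_split_gain(cobalt, tox), best_split_gain(length, tox))
```

In this invented example, splitting on cobalt impurity separates low- from high-toxicity studies almost perfectly, so it earns a much larger gain than CNT length; ranking predictors by such gains, accumulated across an ensemble of trees, is how Random Forest importance scores identify what matters most empirically.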