There are various pressures to develop new approaches to the risk assessment of chemicals. On the one hand, there is pressure to test more chemicals (for example, under the EU chemicals legislation: Registration, Evaluation, Authorization and Restriction of Chemicals [REACH]). This leads to economic pressures to reduce the costs of tests—and to speed them up—and to ethical pressures to reduce the use of animals in testing.
One response to these pressures is being presented as a new paradigm for risk assessment: using modern molecular techniques, in combination with computational approaches, to draw on observations of molecular, cellular, and metabolic responses to chemical perturbations and so assess the likelihood of impact on targets of concern. This is being developed for both human health and ecological risk assessments, in the Key Events Dose–Response Framework 1 and the Adverse Outcome Pathway (AOP) Framework 2, respectively. These approaches are essentially reductionistic in that they attempt to explain responses of whole organisms (human health) and/or ecological systems by understanding processes occurring at lower levels of biological organization. They offer the promise of identifying mechanisms that are responsive to chemical perturbation and of building in vitro testing methods around them.
On the other hand, there are also calls for developing risk assessments that are more in tune with the needs of risk management that relate more obviously to the objects of protection, that is, to impacts on humans and ecosystems 3, 4. The approach here is more explicitly holistic and seeks to express impacts in terms of health and/or ecosystem services. The holistic approach does not automatically promise a route to in vitro testing, but by making assessments more relevant, it ensures that they are more likely to be used as a basis for risk management and hence that the tests on which they have been based have not been wasted.
Our aims in this Focus article are to critically examine the promises for human health and ecological risk assessment arising from the new paradigm, to identify the problems with it, and to propose an alternative approach that is based on predictive systems models of human and ecological targets.
What Is Risk Assessment For?
Broadly, risk assessments are meant to inform measures or interventions that safeguard things that matter so much that they are protected through policies and regulations. Scientific evidence is developed to reduce uncertainties about the likelihood of adverse effects on these things that matter and the causes of the impacts to the extent that management interventions can be justified.
Things that matter in protecting human beings are lives, life span, and health. Things that matter in protecting the environment are the services we obtain from ecosystems (Figure 1); for example, in terms of food supply from game and fisheries, support of agricultural production from things such as soil quality and pollination, flood defense from riverside vegetation and coastal wetlands, recreational opportunities, and the well-being that we experience from knowing nature exists. Ecosystem services include resources that are not directly exploited but are needed to ensure ecosystem processes. All of these services matter to the extent that those affected value them. These values can be made explicit by welfare economists in the values put on statistical lives, healthy life spans, and ecosystem services.
What Risk Managers Need From Risk Assessment
Risk managers most often have to weigh the benefits of an intervention to restrict or ban the production, marketing, use, and disposal of a chemical against the costs to society. Risk assessments must inform these cost–benefit analyses (CBAs); that is, they must be value relevant.
A change in benefit (B) from an intervention is some function of the likely human or ecological impact avoided (A) and the value (V) placed on each unit of what is protected. Simply, ΔB = f(ΔA × V).
The costs (C) of the intervention are a function of producer and consumer effects of losing a quantity of the chemical (−ΔQ). Again, simply, ΔC = f(−ΔQ).
For human health risk assessment, A might be expressed as human lives, life span, or healthy life span. For ecological systems, A might be expressed as individuals in a species, as species in a community, or as an ecosystem service. But we need to know how much of the chemical has to be given up (−ΔQ) to enjoy the improvement (ΔA) if the costs of the intervention are to be assessed. Similarly, we need to assess how much is saved, by −ΔQ, to assess the benefit. The relationship between Q and A is expressed in terms of concentration/dose–response relationships.
The Vs are the values that we place on human lives and ecological entities. The we here is not society at large but the people affected by the intervention; otherwise, the CBA would impose societal values (for example, on how lives should be weighed) on the decision. Moreover, those involved in the risk assessment or the risk management might at best be a subset of those affected. The Vs are obtained by economists, either revealed from our behavior in real or related markets or stated in response to questionnaires about our preferences. The we here is again ideally the group affected.
From an economics standpoint, B must exceed C to justify an intervention; otherwise, society loses. This expresses the basic logic of the CBA. To be helpful, risk assessments have to be given in terms of the As, the things that we value and want to protect, and be expressed as dose responses (in terms of A and Q). Anything else, such as endpoints that are not explicitly related to the As, means that judgments must be made about the importance of the impact on the As. This involves value judgments by risk managers or risk assessors about how serious the impacts on the observed endpoints are for the As, and that means that the Vs do not necessarily reflect those of the affected group. For example, risk assessors and managers might decide to take a precautionary view and inflate the likely relationships between impacts on the endpoint and A, or vice versa. Such judgments are often hidden in the complex process of risk assessment and risk management and might not be in line with what those affected want.
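The cost–benefit logic above can be sketched in code. All numbers here are hypothetical illustrations, not estimates for any real chemical: the dose–response slope, the unit value V, and the per-tonne producer/consumer loss would in practice come from risk assessment and welfare economics.

```python
# Illustrative sketch of the CBA logic: benefit of an intervention is the
# impact avoided times its unit value; cost is the producer and consumer
# loss from giving up a quantity of the chemical. All values hypothetical.

def impact_avoided(delta_q, slope=0.002):
    """Hypothetical linear dose-response: impact A avoided (e.g., healthy
    life-years) per tonne of chemical given up (delta_q)."""
    return slope * delta_q

def benefit(delta_q, value_per_unit_a=50_000.0):
    """Delta-B = f(Delta-A x V): value of the impact avoided."""
    return impact_avoided(delta_q) * value_per_unit_a

def cost(delta_q, loss_per_tonne=75.0):
    """Delta-C = f(-Delta-Q): producer and consumer losses."""
    return delta_q * loss_per_tonne

delta_q = 1_000.0  # tonnes of the chemical given up under the intervention
b, c = benefit(delta_q), cost(delta_q)
print(f"benefit {b:.0f}, cost {c:.0f}, justified: {b > c}")
```

With these hypothetical numbers, B exceeds C and the intervention would be justified; the point of the sketch is only that the decision rule needs both a Q-to-A dose–response and a unit value V.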
Most often, interventions involve costs from losses through restrictions, so risk managers invariably must balance the benefits of the interventions (i.e., reduced impacts) with the costs. The risk assessments that are carried out ought to inform these cost–benefit analyses (i.e., be value relevant), but they often do not because they are not expressed sufficiently clearly in terms of protecting the things that matter. This is arguably the case for assessments based on the new paradigm, because the endpoints do not connect directly to human health or ecosystem services. Assessments that do not meet these requirements of risk management risk not being used and hence wasting time, money, effort, and possibly test organisms. Potentially worse, nonrelevant tests might be used in risk management, leading to wrong solutions with associated costs. These ideas are developed further in the sidebar, What Risk Managers Need From Risk Assessment.
What Does the New Paradigm Promise?
Traditional approaches to risk assessment are based on observing the effects of chemicals on the survival, growth, and reproduction of a few test species. The uncertainties are in extrapolating from these to humans for human health assessments and to the many species that represent ecosystems and their processes and services in ecological assessments. The new paradigm offers a way of making observations on molecular, cellular, and metabolic events that relate to things that matter for the assessment of risk.
The Key Events Dose–Response Framework is illustrated in Figure 2. In principle, the key events in the human dose–response analyses imply events that matter. The key-events concept is based on the more traditional mode-of-action analysis: the fundamental biological events and processes that underlie the effect of any bioactive agent. The aim is to find and characterize the dose–response of determining events; that is, events that determine whether the effect that matters will occur given the dose. If such determining events exist, then it is possible to see how this approach could lead to developing a methodology that focuses on the determining event in vitro and so to developing effective nonanimal models that make predictions about the effects that matter. The test of success will be whether determining events can be identified and related unambiguously to the likelihood of death, reduced life span, and impaired health, and whether they indeed can be isolated from related aspects of molecular and cellular responses. There are acknowledged complications, for example, from the possible modulation of effects by homeostatic mechanisms and from the variability in these between individuals, populations, and species.
The AOP Framework, illustrated in Figure 3, offers the same approach as the Key Events Dose–Response Framework but focuses on ecological assessments. Adverse Outcome Pathways have been defined as a set of plausible connections that leads all the way from the molecular initiating event to an adverse effect that is considered relevant for risk assessment 2. The approach recognizes that using various kinds of molecular data to support chemical risk assessments requires that the data can be translated into endpoints that are meaningful for risk assessment. However, the way such translation occurs remains defined only vaguely and seems to rely largely on statistical correlations between responses at different levels of biological organization. Indeed, we have been trying to correlate responses across levels of biological organization—with minimal success—for more than 30 years 5. The main difference today is that advances in molecular techniques provide more data to correlate. Recent scrutiny by van Straalen and Feder 6 effectively reviews the challenges involved in using molecular data, such as gene expression profiles, in the risk assessment of chemicals. As the authors state, “For the moment, when gene expression is used to characterize samples from the field containing a cocktail of unknown chemicals, the outcome is likely to be unpredictable and useless.” But this is exactly the type of situation that dominates in the real world and for which robust methods for risk assessment are needed. Although efforts are increasing to go beyond simple statistical correlations and use metabolic pathways to interpret genomic as well as physiological responses (e.g., for humans and several fish species, notably medaka and zebrafish 7), these approaches typically focus on isolating single metabolic pathways from the entire metabolic context and hence do not capture emergent properties of the intact system.
Proponents of the AOP approach claim it is mechanistic. This is partially true, to the extent that the pathways chosen to describe the particular adverse effects of a chemical are based on knowledge of the chemical's effects at different levels of biological organization. To use one of the examples described by Ankley et al. 2, the aryl hydrocarbon receptor (AhR) is known to bind certain chemicals, and this binding is believed to be critical to initiating various toxic responses. The problem is that the model Ankley et al. 2 presented lacks robust and predictive equations that relate the rates or amounts of chemical binding to the AhR to gene expression, enzyme induction, and so on. This makes it difficult to infer rates of enzyme induction if information on the binding of a chemical to the AhR is all that is available. This lack of information also prevents interspecies extrapolations of sensitivity from being performed properly on the basis of AhR binding alone. As another example, in Figure 3 the effects of chemicals at the organism level are linked with an arrow to the effects at the population level. The AOP approach does not make explicit how these linkages should be made, but the implication is that effects on organism survival, development, or reproduction will inevitably result in impacts at the population level. However, a large and growing body of empirical studies and a growing toolbox of mechanistic effect models demonstrate that this is not true. Sometimes impacts of chemicals (or other stressors) on the performance of individual organisms cause changes in population structure or dynamics, and sometimes they do not 8. The reason they do not is that various compensatory processes operating at the population level make the linkage between the individual and the population complex and nonlinear.
The bottom line is that the likelihood of success of the new paradigm for risk assessment is even more tenuous for ecological systems than for human health. We can state this unequivocally because the organism is not the object of protection in ecological assessments; rather, populations and higher-level ecological systems are. The complications of extrapolating from suborganismic to ecological-level systems are therefore more profound than for the human health approach, in that modulations of within-organism changes can occur not only through within-organism homeostatic responses but also through compensatory responses at the individual level within populations and through potential adjustments at the species level in ecosystems. Unexpected magnification of effects can also occur; for example, if keystone or other functionally important species are impacted to the extent that food chains are disrupted and thus ecosystem processes and services are impaired.
Tests for the New Paradigm
These are the criteria that must be fulfilled if the Key Events Dose–Response and Adverse Outcome Pathway Frameworks are to deliver in terms of human health and ecological risk assessments.
1. That key determining events or adverse outcomes exist.
2. That the key determining events can be identified.
3. That the key determining events show consistency across taxa for the same stressors.
4. That the key determining events show consistency across stressors with the same mechanism of action.
5. That the key determining events show dose– or concentration–response relationships.
6. That variability in response between individuals can be identified and captured.
7. That the determining events can be related unambiguously to effects on survivorship and health of the individual, taking into account variability between individuals.
For ecosystems, as for humans, but also
8. That the determining events or adverse outcomes can be related unambiguously to the effects on population dynamics and density through effects on survivorship and reproduction.
9. That the determining events or adverse outcomes can be related through the identified population-level responses to effects on ecosystem services through ecological production functions.
Criteria for Success for the New Paradigm
The adjacent sidebar (Tests for the New Paradigm) lists criteria that must be fulfilled if the new paradigm is to inform human health and ecological risk assessments. There have to be key events that will affect responses that matter and that can be identified and characterized in terms of dose– or concentration–response relationships and in which variability among individuals and species can be captured. We believe that the developers of the new paradigm have addressed these criteria in part, but important aspects remain to be demonstrated. In particular, quantitative mechanistic relationships linking responses across levels of biological organization are lacking (particularly criteria 7, 8, and 9; see sidebar). In addition, the new paradigm does not allow for the emergence of responses at higher levels of biological organization from the combined responses at lower levels of organization.
To summarize, reductionist approaches are always bedeviled by the part-to-whole problem. That is, effects on parts do not always manifest in the whole because of compensatory feedbacks. Small effects in parts might obscure larger effects in wholes because of interactions. We believe this is particularly a problem for ecological risk assessment. The Adverse Outcome Pathway Framework being proposed is unlikely to deliver in terms of relevant impacts on the objects of protection, and as specified currently, it fails to meet the criteria we have listed here. In particular, compared with traditional approaches that are based on observations of organisms, the new paradigm adds layers of uncertainty about interactions at or above the organism level. It does not include the potential for homeostasis or other compensatory feedbacks, such as those that might occur from interactions among individuals or species. The responses observed, therefore, cannot be related unambiguously to endpoints that matter for risk assessment. Moreover, the mechanistic complexity represented in the adverse outcome analyses can give the impression that important ecological uncertainties are reduced, when in fact they are not.
Failure to Deliver Means Judgments Must Be Applied
Traditional approaches to risk assessment are often based on simple ratios of expected exposure concentrations to toxicity test endpoints, with predefined factors applied to account for the uncertainties in extrapolating from laboratory to field, from acute to chronic effects, and from test species to other species 9. Standard but somewhat arbitrary thresholds that define chemicals as toxic, bioaccumulative, or persistent play an increasingly important role in certain legislation (e.g., REACH). Unfortunately, they tend to lead to management decisions that are based more on hazard than on risk.
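The ratio approach described above can be sketched as follows. The exposure concentration, toxicity endpoint, and assessment factor used here are hypothetical, not drawn from any real assessment.

```python
# Sketch of the traditional risk-quotient approach: a predicted
# environmental concentration (PEC) is compared with a predicted no-effect
# concentration (PNEC) derived by dividing a toxicity test endpoint by a
# predefined assessment factor that covers lab-to-field, acute-to-chronic,
# and interspecies extrapolation. All values are hypothetical.

def risk_quotient(pec, lowest_ec50, assessment_factor=1000):
    """PEC / PNEC, where PNEC = lowest acute EC50 / assessment factor."""
    pnec = lowest_ec50 / assessment_factor
    return pec / pnec

rq = risk_quotient(pec=0.05, lowest_ec50=120.0)  # both in ug/L
print(f"risk quotient = {rq:.2f}")  # < 1 is conventionally read as low risk
```

The point of the sketch is how far this single number sits from the effects that matter: everything between the test endpoint and the protection goal is compressed into one fixed factor.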
For example, consider the fragrance material musk xylene. As of 2005, the (former) EU Technical Committee on New and Existing Substances considered this material to be very persistent and very bioaccumulative, on the basis of defined thresholds. In an actual risk context, however, earlier deterministic risk assessments identified the risk quotients as below one (i.e., low risk), and risk characterization ratios would suggest that ecological impacts of this substance should be minor, if any 4. Nevertheless, industry abandoned the substance because it was defined as hazardous under the REACH criteria.
The tendency to manage chemicals on the basis of their inherent hazard properties rather than on assessments of risk could be due to a lack of confidence in the estimates of risk. This is at least partially due to the very large gap between the effects measured (e.g., acute survival of selected test species) and the effects that matter (e.g., delivery of ecosystem services). As a result, the risk estimates must be translated, through expert judgment, into likely impacts on systems and the features of those systems that matter. However, expert judgment is sometimes mistaken for a way of reducing the uncertainty. By definition it does not reduce uncertainty, and it does mean that the values of the experts making the judgments (for example, on how much precaution should be applied to the estimate of risk) can slip into the risk management decision without being obvious (see sidebar, What Risk Managers Need From Risk Assessment).
Despite the many promises that the new paradigm for human health and ecological risk assessment offers—that it is supposed to be faster and cheaper than traditional approaches, to reduce the use of animals in testing, and to provide a mechanistic understanding of toxicological and ecotoxicological events—it raises challenging problems centering on the difficulties of extrapolating across levels of biological organization. The fundamental challenges comprise capturing the complexity of responses, their inherent nonlinearities, and the feedback processes that are an important feature of all levels of biological organization. Because the new paradigm fails to address these challenges, it is likely to increase rather than decrease the need for expert judgments, which is likely to lead to management decisions lacking in both transparency and consistency.
An Alternative: Predictive Systems Models
One response to the challenges raised by the new paradigm is to stick with observations related to traditional test systems that mostly measure effects on whole-organism performance. This would reduce the number of biological levels that need to be linked. However, this would not deliver in terms of saving on the costs or the use of animals, and we are still left with the problem of extrapolating from what we measure (e.g., survival of standard test species) to what we want to protect (e.g., human health and wellbeing and delivery of ecosystem services). To assess ecological risk, another approach is to develop test systems that focus on the target ecosystem services themselves. This would be time consuming and expensive, however, and for some services, at least, would involve extensive animal testing. Important issues also surround extrapolating the results of such tests from the specific conditions under which the test was performed to other conditions.
Applying the Predictive Systems Modeling Approach
The following steps are needed for applying a predictive systems modeling approach to human health and ecological risk assessment:
1. Articulate the protection goal: human health or ecological systems?
2. Operationalize the protection goal: How will it be measured? For humans: lives, life span, healthy life span; for ecological systems: delivery of ecosystem services.
3. Develop and implement Predictive Systems Models (PSMs) using the modeling cycle of Railsback and Grimm 18 to produce value-relevant outputs for risk assessment (see image above).
4. Use value-relevant dose– or concentration–response relationships from PSMs to inform risk management (e.g., socioeconomic analyses, risk mitigation, remediation).
Image adapted from Railsback and Grimm 18.
We believe, therefore, that the most promising way forward is to develop predictive systems models (PSMs) that incorporate the necessary biological complexities in terms of nonlinearities and feedback loops. Such PSMs would then represent processes and their consequences across levels of biological organization in a mechanistic way. The general approach is summarized in the adjacent sidebar, Applying the Predictive Systems Modeling Approach. This approach intentionally starts with a focus on the protection goals, operationalizes them, and moves to develop and implement PSMs. Importantly, the PSMs in step 3 (see sidebar) can be highly nonlinear and include compensatory feedbacks. This is in contrast to the Key Events Dose–Response and Adverse Outcome Pathway Frameworks depicted in Figures 2 and 3.
Particularly powerful kinds of PSMs are agent-based models (ABMs), in which individuals or agents are described as unique and autonomous entities that usually interact with one another and their environment locally. The dynamics of the system being modeled (e.g., a population) emerge from the behavior of individual agents (e.g., organisms). Thus, ABMs are ideal for studying problems that cross levels of organization and for understanding the mechanisms that drive cross-level linkages. Agent-based models offer several advantages compared with analytical approaches: they are not limited by mathematical tractability; they can include more characteristics of real systems and are better at handling stochasticity; they are mechanistic yet highly flexible; they can be made spatially explicit; they can incorporate evolutionary dynamics; and they are ideal for studying problems that concern emergence. Figure 4 illustrates an example of a simple ABM developed for a soil mite species, but these models can be made as complex as needed.
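A minimal sketch can make the emergence point concrete. The species, vital rates, and the toxicant effect below are hypothetical, not calibrated to the soil mite model of Figure 4; the sketch only illustrates how population dynamics emerge from individual agents subject to a compensatory feedback.

```python
import random

# Minimal agent-based sketch: population dynamics emerge from individual
# agents whose reproduction is density dependent. The toxicant is modeled
# as a fractional cut in individual fecundity. All parameters hypothetical.

class Mite:
    def offspring(self, n_pop, capacity, fecundity):
        # Density-dependent reproduction: crowding suppresses fecundity,
        # a compensatory feedback operating at the population level.
        crowding = max(0.0, 1.0 - n_pop / capacity)
        return sum(1 for _ in range(fecundity) if random.random() < crowding)

def simulate(years=50, toxic_effect=0.0, capacity=500, seed=1):
    random.seed(seed)
    pop = [Mite() for _ in range(50)]
    fecundity = max(0, round(4 * (1.0 - toxic_effect)))  # toxicant cuts fecundity
    for _ in range(years):
        survivors = [m for m in pop if random.random() < 0.5]  # annual survival
        births = sum(m.offspring(len(pop), capacity, fecundity) for m in survivors)
        pop = survivors + [Mite() for _ in range(births)]
    return len(pop)

# A 25% cut in individual fecundity need not produce a 25% smaller
# population: density-dependent compensation absorbs part of the
# individual-level effect, illustrating the nonlinear part-to-whole linkage.
print(simulate(toxic_effect=0.0), simulate(toxic_effect=0.25))
```

Running the two scenarios shows the population-level effect is typically smaller than the individual-level effect, which is exactly the kind of compensatory outcome the text argues the AOP arrow from organism to population cannot anticipate.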
Mechanistic and process-based models such as ABMs are more powerful predictive tools than descriptive statistical approaches that correlate molecular, biochemical, and other suborganismic responses with whole-organism or higher-level responses. The latter have been shown to lack robustness and can rarely, if ever, be extrapolated to novel situations 5. Furthermore, because ABMs incorporate mechanisms, their use can improve understanding of the linkages across levels of biological organization and of how toxic chemicals and other stressors influence these linkages, either alone or in combination. If applied appropriately, ABMs can solve the emergence problem, reduce the use of animals in testing, and lead to more efficient tests that are also management relevant. Agent-based models are already being used successfully to understand individual- to population-level interactions in response to toxic chemicals in ecologically realistic exposure scenarios. For example, Dalkvist et al. 10 used an ABM of the field vole to explore the relative importance of the toxicity of an epigenetically acting pesticide and of vole ecology in the response of exposed vole populations in a realistic landscape. They found that vole ecology and behavior were at least as important predictors of population-level effects as was the toxicity of the pesticide.
Agent-based models are also being coupled to dynamic energy budget (DEB) models in which mechanistic linkages from physiological to individual levels of biological organization are made. For example, Martin et al. 11 developed a generic ABM that is based on DEB theory. This ABM can be used to explore properties of both individual life-history traits and population dynamics that emerge from species-specific DEB parameters and their interactions with environmental variables such as food supply.
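The flavor of such a coupling can be sketched very simply. The following is a highly simplified energy-budget caricature in the spirit of DEB-based individuals, not the model of Martin et al. 11: an individual assimilates energy from food, pays maintenance first, and splits the surplus between growth and reproduction (a fixed-fraction "kappa" rule). All parameter values are illustrative, not calibrated DEB parameters for any real species.

```python
# Simplified energy-budget sketch: maintenance is paid before growth and
# reproduction, so life-history outputs emerge from energy bookkeeping.
# Parameters are illustrative only.

def step(mass, food, kappa=0.8, assim=0.6, maint=0.1, repro_eff=0.5):
    energy = assim * food                      # assimilated energy this step
    maintenance = maint * mass                 # maintenance scales with mass
    surplus = max(0.0, energy - maintenance)   # maintenance is paid first
    growth = kappa * surplus                   # fixed fraction to growth
    eggs = repro_eff * (1 - kappa) * surplus   # remainder to reproduction
    return mass + growth, eggs

mass, total_eggs = 1.0, 0.0
for _ in range(100):
    mass, eggs = step(mass, food=2.0)
    total_eggs += eggs

# Growth slows as maintenance costs catch up with assimilation, so body
# mass approaches an asymptote while reproductive output accumulates.
print(round(mass, 2), round(total_eggs, 2))
```

Even this caricature shows how traits such as asymptotic size and lifetime fecundity emerge from a few physiological parameters and food supply, which is what makes DEB-coupled ABMs attractive for linking physiological to individual and population levels.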
The suborganismic to organismic relationships also present interesting challenges. The initial impacts of chemicals and other stressors occur at the suborganismic level, yet the complex relationships between these impacts and effects at the whole-organism level obscure their relevance from a risk assessment perspective. As noted above, more and more information is being collected on molecular and cellular changes under impact, and bioinformatics techniques are currently being used to establish correlations between these events and phenotypic expression. One approach that offers promise, particularly for linking molecular to individual responses, involves using artificial neural networks (ANNs). Artificial neural networks are a form of machine learning from the field of artificial intelligence that is being applied in bioinformatics and medicine. The ANNs simulate the behavior of biological neural networks, in which learning is based on changes in the interconnections between elements (e.g., neurons) in the network. Artificial neural networks are better than traditional statistical methods at coping with noisy, nonlinear, and high-dimensional data sets: exactly the kinds of data likely to be generated by the new paradigm approaches to human health and ecological risk assessment. Thus, ANNs go beyond the simplistic, linear representations implicit in the AOP and Key Events Dose–Response Frameworks. To date, ANNs have shown promising results in classifying and predicting disease state in humans from gene expression data generated by DNA microarrays and from peptide or protein data generated by mass spectrometry 12. However, issues related to overtraining of the models, the high dimensionality of the data (i.e., a large number of input variables relative to a small number of replicates), and low reproducibility of data still need to be overcome.
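To make the learning mechanism concrete, here is a minimal one-hidden-layer network trained by gradient descent on the XOR problem, a standard toy stand-in for the nonlinear classification tasks (e.g., disease state from expression profiles) discussed above. The architecture, data, and hyperparameters are illustrative only, not a template for real microarray data.

```python
import math
import random

# Minimal feedforward ANN: learning consists of adjusting the connection
# weights between units, mimicking how biological networks adapt.

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]  # XOR: not linearly separable, so a linear model fails

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return h, sig(sum(w2[j] * h[j] for j in range(H)) + b2)

initial = [forward(x)[1] for x in X]  # untrained outputs, kept for comparison

lr = 1.0
for _ in range(5000):
    for x, t in zip(X, y):
        h, out = forward(x)
        d_out = out - t  # cross-entropy gradient at the sigmoid output
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # backpropagated error
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

preds = [round(forward(x)[1]) for x in X]
print(preds)  # expected to approach [0, 1, 1, 0] as training converges
```

The nonlinear hidden layer is what lets the network represent relationships that linear statistics cannot; the same property is also the source of the overtraining risk noted above when inputs vastly outnumber replicates.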
We believe that ABMs may offer another promising approach for linking among suborganismic levels of organization 13. To assess risks to human health, one could develop models in which the molecular and cellular systems are represented as interacting agents within the organismic arena. The rules of interaction would be based on the key homeostatic properties of organisms. In this way, responses of whole organisms to toxic chemicals would emerge from the interactions among parts of the system. To assess ecological risk, one could develop models in which the molecular and cellular processes are incorporated within agents that are then allowed to interact as individuals in realistic ecological settings. Interactions among multiple species could be incorporated, if necessary, and outputs could be defined to represent the ecosystem services that are the targets of protection. This deserves further attention.
Tests for Predictive Systems Models
For predictive systems models to deliver, they need to fulfill the following criteria:
1. That the processes at lower hierarchical levels necessary to predict the relevant properties of higher hierarchical levels are included.
2. That the mathematical relationships to describe the key processes are suitably representative and can therefore be reproduced using pattern-oriented modeling 15.
3. That the minimum data necessary to parameterize the model are available.
4. That the recognized tradeoffs among realism (close match to system), precision (minimizing uncertainty), and generality (applicable to a wide range of systems) are optimized to answer the question of interest. Very often in the context of risk assessment, precision and realism will have to be emphasized over generality 17.
5. That the model description is sufficiently transparent to allow others to replicate the outputs 14.
For ecosystems, as for humans, but also
6. That the system is modeled at the appropriate scale (population vs community vs ecosystem; local vs regional vs global; short-term vs long-term) for the question being addressed.
7. That the temporal and spatial variability in key environmental factors is appropriately captured for the question being addressed.
The adjacent sidebar (Tests for Predictive Systems Models) lists criteria that must be fulfilled if PSMs are to inform human health and ecological risk assessments. The key challenge is to capture enough complexity to sufficiently represent the dynamics of the system under consideration in a way that can be understood and replicated. Solving problems arising from the new paradigm for risk assessment, whether using ABMs, ANNs, or other types of PSMs, will not be easy. Both ABMs and ANNs have been accused of being black boxes, although developing guidance on model development, testing, and documentation is addressing this issue for ABMs 14. Extrapolating from molecular responses to human health or ecosystem services will require handling huge amounts of information effectively. In some cases, challenges will arise for biology in terms of generating the experimental data needed to feed the models, particularly in cases of multiple stressors or complex mixtures. Challenges will also arise for computer science. Programming languages will need to be optimized, for example, to be efficient and less prone to error, and training opportunities must ensure that all involved have the necessary expertise to implement and interpret the models. Important issues surrounding how much and which complexities to add to the models must be addressed, and, because such models may be very large, transparency and proper documentation will be needed. Substantial steps in this direction have been taken for ABMs, and, as modelers have started to adopt these methods, they have been further refined and improved 14. Finally, any modeling effort must include plans to validate the models to ensure they are producing appropriate results. For ABMs, the methodology of pattern-oriented modeling, which involves reproducing multiple, independent patterns 15, is an effective approach to validation.
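The pattern-oriented validation idea can be sketched as a simple acceptance filter: a candidate model is retained only if it reproduces several independent observed patterns at once, rather than matching a single calibration target. The pattern names, observed values, and tolerances below are hypothetical.

```python
# Sketch of pattern-oriented validation: accept a model only if every
# independent observed pattern is reproduced within tolerance.
# Patterns, values, and tolerances are hypothetical.

OBSERVED = {  # pattern -> (observed value, accepted relative tolerance)
    "mean_population_size": (420.0, 0.15),
    "cv_of_annual_abundance": (0.35, 0.20),
    "age_at_first_reproduction": (2.0, 0.10),
}

def matches(pattern, model_value):
    observed, tol = OBSERVED[pattern]
    return abs(model_value - observed) / observed <= tol

def accept(model_output):
    """Accept only if every independent pattern is reproduced within tolerance."""
    return all(matches(p, v) for p, v in model_output.items())

candidate = {"mean_population_size": 455.0,
             "cv_of_annual_abundance": 0.31,
             "age_at_first_reproduction": 2.1}
print(accept(candidate))  # prints True: all three patterns are matched
```

The strength of the approach is that a model tuned to hit one pattern for the wrong reasons will typically fail the others, so requiring all patterns simultaneously constrains model structure, not just parameter values.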
Extrapolation from changes at molecular, cellular, and physiological levels of organization to impacts on humans and ecosystems that matter will always be challenging. Making connections by judgment or through correlations will not necessarily reduce uncertainties and will often leave them obscured. The only rationale for risk management, however, is to protect things that matter to the public affected by the interventions. Our view is that risk assessments must be informative, that is, relevant to these needs; otherwise, they might not be used, or worse, might be misused. The uncertainties left by unsatisfactory risk assessments often tempt risk managers and assessors to make judgments about the importance of impacts that may not be in tune with the preferences and values of the public affected.
No silver bullet is available to address these complex issues. However, we believe that predictive systems models that mechanistically connect levels of organization have the potential to address some of the challenges. Also, moving in silico provides as much opportunity as moving in vitro, if not more, for reducing the number and cost of tests and saving animals. We believe that these models deserve much more attention in developing new, more effective methods of risk assessment.
This work and the views expressed here have been inspired by interactions with our many collaborators and students. Most notably we acknowledge participants in the European Union 7th Framework Program Project, CREAM (PITN-GA-2009-238148), the Research Institute for Fragrance Materials, and members of the Ecotoxicology Group at Roskilde University, Denmark.