Keywords:

  • Safety factors;
  • Ecological risk assessment;
  • Uncertainty factors;
  • Assessment factors;
  • Precautionary principle

Abstract

Evaluation of environmental risks posed by potentially hazardous substances requires achieving a balance between over- and underprotection, i.e., between the societal benefits of using particular substances and their potential risks. Uncertainty (e.g., only laboratory data may be available, and field or epidemiological data may be limited and less than clear-cut) will always exist and is often dealt with conservatively by the use of so-called “safety” or “uncertainty” factors, some of which remain relatively little changed since their origin in 1945. Extrapolations involving safety factors for both aquatic and terrestrial environments include inter- and intraspecies, acute-to-chronic, lowest- to no-observed-effect concentration (NOEC), and laboratory-to-field extrapolations. To be realistic, such extrapolations need to have a clear relationship with the field effect of concern and to be based on good science. The end result is, in any case, simply an estimate of a field NOEC, not an actual NOEC. Science-based versus policy-driven safety factors, including their uses and limitations, are critically examined in the context of national and international legislation on risk assessment. Key recommendations include providing safety factors as a potential threshold effects range instead of a discrete number and using experimental results rather than defaulting to safety factors to compensate for lack of information. This latter recommendation has the additional value of rendering safety factors predictive rather than simply protective. We also consider the so-called “Precautionary Principle”, which originated in 1980 and effectively addresses risk by proposing that the safety factor should be infinitely large.


INTRODUCTION

Environmental risk managers are often asked to make decisions with incomplete knowledge and sometimes great uncertainty. To accommodate uncertainty in risk assessment data and to reduce the probability of causing harm to the environment, safety (or uncertainty) factors are applied to the risk estimate as a management approach. Safety or uncertainty factors are a conservative approach for dealing with uncertainty related to assessing chemical risks. For instance, they can involve adjusting a point estimate (e.g., an EC50 [effective concentration at which 50% of a particular population of organisms is affected in a toxicity test]) by an arbitrary factor (e.g., dividing by 10 or 100) to estimate a safe level for a substance in the environment. Such empirical approaches may have little or no relevance to actual uncertainty, but they greatly reduce the probability of underestimating risk. Because they are “safe” and provide clear-cut answers, they are generally used. However, their use consequently also greatly increases the possibility of overestimating risk and may (and often does) lead to unrealistic answers in hazard and risk assessments.
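
As a minimal numeric sketch of this practice (the endpoint value below is hypothetical, not taken from the text), the screening-level "safe" concentration is simply the acute point estimate divided by a default factor of 10 or 100:

```python
def screening_level(ec50_mg_per_l: float, safety_factor: float) -> float:
    """Divide an acute point estimate (e.g., an EC50) by a default safety factor."""
    return ec50_mg_per_l / safety_factor

acute_ec50 = 2.5                          # hypothetical 48-h EC50, mg/L
print(screening_level(acute_ec50, 10))    # 0.25 mg/L
print(screening_level(acute_ec50, 100))   # 0.025 mg/L
```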

Selection of the magnitude and type of safety factor to use (i.e., how large a margin of safety is needed) is primarily a policy, not a science-based decision, because definitive data frequently are insufficient for making accurate extrapolations from known to unknown circumstances (e.g., extrapolating toxicity thresholds among various species or different exposure durations). Thus, although scientific study can provide guidance about the magnitude of uncertainty in making extrapolations required for large-scale ecological risk management decisions, the selection of safety factors (i.e., desired margin of safety) remains in the domain of risk managers and policy makers [1].

In this report we discuss the origins and development of safety factors, assess their use and misuse, and provide a summary and specific recommendations for their future use to better reflect reality. The current applications of safety factors are primarily based on policy (i.e., the desire to provide conservative estimates of risk) rather than on empirical science. The goal of this report is to provide the impetus for development of realistic safety factors for both terrestrial and aquatic environments. In particular, we want to provide the basis for ensuring that any policy decisions on the use of safety factors for risk assessment are based on good science.

ORIGIN AND DEVELOPMENT

The term safety factor encompasses any means by which known data are extrapolated to deal with situations for which there are no data. Basically, safety factors are conservative tools for dealing with uncertainty [2]. Safety factors include application factors (AFs) and acute-to-chronic ratios (ACRs) and are also known as uncertainty factors, particularly in avian and mammalian toxicology, or as assessment factors. However, some jurisdictions differentiate between these terms [3,4].

Application factors

Application factors were first proposed by Hart et al. [5], who used the slope of the time–mortality toxicity curve to estimate cumulative toxicity and tentatively proposed an AF of 0.3 to calculate, from acute toxicity data, the presumed harmless concentration of any substance. They also considered factors such as species tested and the population as a whole, physicochemical characteristics of the receiving environment, exposure time, life history stages likely to be exposed, bioaccumulation potential, and the quality of test organisms.

Subsequently, AFs were proposed as decimal fractions multiplied against an acute LC50 [lethal concentration to 50% of the organisms in a toxicity test] to predict “the concentration which will have no detrimental effect on aquatic life in the receiving water” [6]. These later AFs were assumed to be universally applicable, were derived largely on the basis of convenience, and were then, as now, convenient to use in the absence of data [7]. No data were available to support early AFs such as the factor of 0.1, which was initially suggested for 48-h LC50s to provide water quality values that would protect against more subtle effects [8].

Application factors were also generated by dividing the maximum acceptable toxicant concentration (MATC) by the LC50 [9,10]; this ratio was considered fairly constant for a specific chemical but was later replaced by empirical data from early/sensitive life stage testing. The MATC is the threshold concentration of a material estimated (from partial or complete life cycle toxicity tests) to lie within a range bounded by the highest concentration that had no significant adverse effects (the no-observed-effect concentration [NOEC]) and the lowest concentration that did have significant adverse effects (the lowest-observed-effect concentration [LOEC]). For regulatory purposes the MATC is sometimes calculated as the geometric mean of the NOEC and LOEC, which corresponds to the threshold effect concentration sometimes used in Canada.
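
A short sketch of that regulatory calculation follows (the NOEC and LOEC values are hypothetical); the MATC estimated as a geometric mean always lies between the two bounding test concentrations:

```python
import math

def matc(noec: float, loec: float) -> float:
    """MATC estimated as the geometric mean of the NOEC and LOEC."""
    return math.sqrt(noec * loec)

# Hypothetical chronic test bounds, mg/L
print(matc(noec=0.32, loec=1.0))   # ~0.57 mg/L, between the NOEC and the LOEC
```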

Despite a lack of supporting data, aquatic AFs became standardized relatively quickly at factors of 0.1, 0.05, and 0.01 depending on whether substances were persistent and/or cumulative [11,12]. These values have remained essentially unchanged over time (e.g., Canadian regulatory agencies [13,14] endorse application factors of 0.01 and 0.05; in Europe a similar approach is to divide by whole numbers such as 100 rather than multiplying by a decimal fraction such as 0.01) and are widely used to predict no-effect levels in the aquatic environment. However, an analysis of available data [15] indicates that AFs are not useful for predicting chronic toxicity from acute toxicity information because of variation among chemicals and species.

In human and mammal studies, 10-fold default values are routinely used for interspecies extrapolation and interindividual variability [16]. A safety factor of 100 is used in chronic feeding studies with rats to extrapolate potential effects of noncarcinogenic substances to humans [17].

Acute-to-chronic ratios

Acute-to-chronic ratios are a variant of AFs. Their numerical value is the inverse of an AF (i.e., instead of dividing a sublethal endpoint such as an NOEC by a lethal endpoint such as an LC50, which results in a decimal value, the lethal endpoint is divided by the sublethal endpoint, which results in a whole number). An ACR of 10 is generally used for conservatism when direct measurements are lacking [18,19] but may not always be appropriate [20]; otherwise, ACRs are calculated for each chemical on the basis of toxicity data. Compared with AFs, ACRs are more likely to be chemical-specific than generic. They are also used to estimate MATCs when long-term data are lacking. When Canadian regulatory agencies [14] use ACRs to derive water quality guidelines, they use the NOEC rather than the MATC. Although the principle is applied in a similar fashion, different results can be obtained from the same data set depending on the difference between the NOEC and the MATC (the geometric mean of the NOEC and LOEC).
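
The inverse relationship between the two quantities, and the use of a generic ACR when only acute data exist, can be sketched as follows (all endpoint values are hypothetical):

```python
# Hypothetical endpoints for one substance, mg/L
lc50 = 4.0     # acute (lethal) endpoint
noec = 0.25    # chronic (sublethal) endpoint

acr = lc50 / noec    # acute-to-chronic ratio: 16, a whole number
af = noec / lc50     # application factor: 0.0625, a decimal fraction (the inverse)

# With only acute data, a generic ACR of 10 gives a conservative chronic estimate:
estimated_chronic_threshold = lc50 / 10.0   # 0.4 mg/L
print(acr, af, estimated_chronic_threshold)
```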

In Europe, the European Center for Ecotoxicology and Toxicology of Chemicals (ECETOC) [21] calculated ACRs for a number of substances (excluding inorganics, organometallics, and pesticide active ingredients) and a range of species to derive a 90th percentile ACR ranging from 27 to 40 and a median ACR ranging from 4.2 to 22. On this basis, ECETOC [21] suggested, for generic usage, a conservative ACR of 40. Persoone and Janssen [22] note that ACRs can vary from 1 to 20,000 depending on the species and chemical but that most ACRs are less than 1,000 and that many are less than 50.

Uncertainty factors

The use of uncertainty factors in mammalian regulatory toxicology was first proposed by Lehman and Fitzhugh [23]. They proposed the use of a 100-fold safety factor in the derivation of acceptable daily intakes for food additives and contaminants to account for sources of uncertainty and variability (e.g., inter- and intraspecies differences). The uncertainty factors identified by the U.S. Environmental Protection Agency (EPA) for ecological risk assessment, intended to address the adequacy of the database and to minimize arbitrary selection of numerical values, are as follows [24]: intraspecies heterogeneity, interspecies extrapolation, acute-to-chronic comparisons, LOEC-to-NOEC extrapolations, professional judgement, and laboratory-to-field extrapolation. Uncertainty factors typically have been assigned order-of-magnitude values (10, 100, or 1,000).

Uncertainty factors are used most frequently in ecological risk assessments that use the indicator species approach, generally for product registration such as for pesticides or chemicals in commerce. The U.S. EPA's Office of Pollution Prevention and Toxics (OPPT) refers to uncertainty factors as assessment factors [19,25]. The OPPT uses quantitative structure–activity relationships to estimate environmental concentrations that cause a response in aquatic organisms and divides this value by 10 for an estimate of terrestrial effect thresholds. This aquatic-to-terrestrial assessment factor has no empirical basis, but OPPT presumes that the production, use, and disposal of industrial chemicals will typically result in aquatic exposures and is less concerned with potential terrestrial hazard [24]. Similar methods are used by the Dutch National Institute of Public Health and Environmental Protection (RIVM), except that the RIVM considers these methods adequate only for a preliminary assessment and requires more data for a definitive effects assessment [26]. The technical guidance document [27] that accompanies the European Union's dangerous substance directive suggests standard assessment factors, noting that the numbers can be adjusted as new data become available. Different assessment factors are used for water, sediment, and sewage treatment plants. In general, the more data available, the lower the assessment factor.

The “Precautionary Principle”

One of the more extreme examples of the use of safety factors is embodied in what has now become known as the “Precautionary Principle”, although the former predate the latter. The Precautionary Principle originated in 1980 with the German Vorsorgeprinzip (“principle of foresight”) [28–30] and basically involves taking protective action “even when there is no scientific evidence to prove a causal link between emissions and effects” (from the London Conference [31]; similar wording is used by the United Nations Environmental Protection Governing Council [32], i.e., “even when it has not been proven that damage … will occur” [28]). The Precautionary Principle has been the subject of heated debates in the scientific literature [33–35] and has been suggested as one of the only means presently available to deal with cumulative impacts [36]. Proponents of the Precautionary Principle insist on reductions or eliminations of chemical discharges [37,38]. This position is typified by MacGarvin [39], who advocates “clean production” and emphasizes that acceptance of the Precautionary Principle as presently constituted means that “monitoring will no longer carry the burden of attempting to demonstrate that a particular level of environmental contamination is safe, which is currently destroying its scientific credibility”. In contrast, Stebbing [40] argues that the Precautionary Principle is not a viable alternative to the use of environmental (i.e., assimilation) capacity validated by monitoring. The Precautionary Principle has also been extensively criticized for marginalizing the role of science [41–43]. For example, many jurisdictions have concluded that any substance that is persistent, toxic, and liable to bioaccumulate should be eliminated or minimized, ignoring realities such as the facts that all substances are toxic at some concentration, that all elements are (by their very nature) persistent, and that some substances need to be bioaccumulated to sustain the health of organisms [44,45].

The Precautionary Principle effectively defines the safety factor as infinitely large and thus will almost always be overprotective [46,47]. Changes to the principle that have been suggested to increase its utility, most notably by using risk assessment to determine the size of safety factor to apply, also change its basic philosophical premise. Gray and Brewers [48] suggest a scientific definition of the Precautionary Principle that focuses not only on chemical impacts but also on physical disturbances, with the caveat that it be applied only where there is a scientific basis, through risk-assessment-based approaches, to believe that environmental damage will result. Chapman [49] has suggested adding introductions of exotic species to Gray and Brewers' [48] definition, allowing “proof of innocence,” and providing for possible changes to the definition and approach as science progresses. The latter two suggestions could also be applied usefully to safety factors.

EXPERIENCE WITH SAFETY FACTORS

Below we examine the following extrapolations: intra- and interspecies, acute to chronic, LOEC to NOEC, and laboratory to field. We then examine the use of safety factors in hazard and risk assessment.

For the purposes of this discussion (here and throughout this report), LOECs (used in aquatic systems) are considered equivalent to LOAELs (used for higher organisms in terrestrial systems). Similarly, NOECs are considered equivalent to NOAELs. More detailed explanation of these and other terms used in this report is provided elsewhere [3,50].

Uncertainty is, in terms of this discussion, ignorance about the precise value of a particular parameter, for example, the chronic toxicity threshold of a particular chemical. Depending on the amount of toxicity data available, extrapolations can be based on a function fitted mathematically to a distribution of species sensitivities, with thresholds based on a preset percentile (usually 95%) of the fitted distribution [2,19,51,52]. Some extrapolations appear to work better than others, but none are perfect [1,17,53,54].
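
A minimal sketch of that statistical extrapolation is given below, assuming a log-normal species sensitivity distribution and the usual convention of taking the concentration expected to protect approximately 95% of species; the input NOECs are invented for illustration:

```python
import math
from statistics import NormalDist, mean, stdev

# Hypothetical chronic NOECs for several species, ug/L
noecs = [12.0, 35.0, 8.0, 150.0, 60.0, 22.0, 90.0]
log_noecs = [math.log10(x) for x in noecs]

# Fit a log-normal distribution and take the concentration below which
# only ~5% of species are expected to be affected (i.e., ~95% protected).
fitted = NormalDist(mu=mean(log_noecs), sigma=stdev(log_noecs))
hc5 = 10 ** fitted.inv_cdf(0.05)
print(f"estimated 95% protection threshold: {hc5:.1f} ug/L")
```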

Intra- and interspecies extrapolations

Kenaga [55] noted that AFs, calculated on the basis of ACRs for different chemicals, can range from 0.91 to 0.00009, a difference of four orders of magnitude. He used data for 84 chemicals, tested with nine species of fish and two species of aquatic invertebrates, to calculate 135 ACR values, which ranged from 1 to 18,100. Approximately 86% of the chemicals had ACR values of less than 100 for all species. Most AFs for fish and Daphnia were similar. No single chemical property was identified as corresponding to large ACR values, and chronic toxicity or exposure did not necessarily mean that the ACR was large. He concluded that additional information, such as a substance's mode of action, is necessary to predict ACRs.

To support development of wildlife criteria for the Great Lakes water quality initiative, the U.S. EPA commissioned a report describing the technical basis for recommended ranges of uncertainty factors for wildlife toxicity extrapolations [56]. Data on the comparative acute sensitivity (median lethal dose [LD50] and LC50) of avian species exposed to pesticides were reviewed. Ninety percent of the LD50 data were within a factor of 10, 99% were within a factor of 100, and none were more than 237 times different from each other. Dietary exposure resulted in slightly lower variability, with all species within a factor of 7. For mammals, across a variety of species, 50% of the LD50s were within a factor of 4, and 90% were within a factor of 20. A factor of 100 encompassed more than 96% of all the LD50 ratios, which ranged from 1 to 2,500. Chronic and subchronic dietary toxicity data from 174 toxicity studies on birds (22 species) and mammals (14 species) for four chemicals (cadmium, dichlorodiphenyltrichloroethane and metabolites, dieldrin, and mercury) showed that 94% of the species sensitivities were within a factor of 50 of each other (range, 1–1,000). Variability was greater than in the acute studies for several reasons: the acute studies were conducted following standardized protocols, whereas the chronic studies varied in design; the acute studies measured only mortality, whereas the chronic studies measured reproduction, growth, and other nonlethal endpoints; and the chronic studies were based on a smaller number of chemicals that was biased toward those with higher variability in response.

Fletcher et al. [57] reported a wide range of plant sensitivity to herbicides, but their data indicate that extrapolations among species within a genus can be done with a great deal of confidence and that uncertainty factors are unnecessary. When extrapolating from one genus to another within a family, an uncertainty factor of 2 captured 80% of the potential variability. Their data indicate that, for plants, extrapolations across families within or across orders within a class should be discouraged. If it must be done, an uncertainty factor of 15 would need to be applied to intraorder extrapolations, and a factor of at least 300 to intraclass extrapolations, to capture 80% of the variability. Variability similarly increases with taxonomic distance for aquatic animals [22,53] and may exceed a factor of 1,000 [58].

Although most attention in ecotoxicology is focused on extrapolations between different species, important variability in response to toxic substances occurs within a single species as well, because of genetic differences in individuals. Human heterogeneity in toxicological assessments is assigned an uncertainty factor of 10 on the basis of intraspecific variability in laboratory rodents [59]. Many ecotoxicologists assume that wildlife species tested for toxicity responses would have a greater intraspecific variation than inbred laboratory rodent strains because of greater genetic diversity.

Intraspecies variability in general was addressed by Dourson and Stara [59], who analyzed 490 probit log-dose slopes. Approximately 92% of the log-dose slopes were 3 or greater, which corresponds to an intraspecies uncertainty factor of approximately 10 or less. Abt Associates [56] conducted a similar study for four species of birds and found that 95% of the species/chemical results had an inhalation LC50/LC1 ratio of 10 or less. Shirazi et al. [60] looked at the same ratio for bobwhite exposed to seven organic chemicals for 28 d and described the time–dose–response surface. With the exception of two of the rodenticides, LC50/LC1 ratios in these subchronic studies were less than 10. Abt Associates [56] concluded that an intraspecific uncertainty factor of 10 appeared to be justified, although they cautioned that the rat data were from a population of animals with less heterogeneity than expected in wild populations and that the bird data were based on only a few species. Similarly, in Germany an intraspecific safety factor of 10 is generally considered appropriate for aquatic ecosystems [52].
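
The arithmetic behind that equivalence can be sketched directly (a minimal illustration, not code from the cited studies): for a probit model on log10 dose, the LD50/LD1 (or LC50/LC1) ratio is determined entirely by the slope, so a slope of about 3 or more corresponds to a ratio of roughly 6 or less, comfortably within a factor of 10:

```python
from statistics import NormalDist

def ld50_to_ld1_ratio(probit_slope: float) -> float:
    """For a probit model on log10(dose):
    log10(LD50/LD1) = (z_0.50 - z_0.01) / slope, with z_0.50 = 0 and z_0.01 ~ -2.33."""
    z_gap = NormalDist().inv_cdf(0.50) - NormalDist().inv_cdf(0.01)
    return 10 ** (z_gap / probit_slope)

print(ld50_to_ld1_ratio(3.0))    # ~6: a slope of 3 stays within a factor of 10
print(ld50_to_ld1_ratio(2.33))   # ~10: the shallowest slope still within a factor of 10
```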

Acute-to-chronic extrapolations

Mount [10] compared aquatic MATC and AF values for 10 toxicants and showed that estimating “safe” concentrations by using an LC50 value and an AF was no more accurate than if a single MATC value was selected and used for all fish species. Although he noted the need for developing rapid and accurate screening methods for estimating toxicity, he also emphasized the importance of use of actual chronic toxicity data rather than sole reliance on extrapolations. Similarly, Davis [61] found that the AF was not always useful for predicting chronic toxicity of lead from acute toxicity data. He emphasized both that multiplying LC50 values by an arbitrary value frequently leads to erroneous predictions about concentrations at which fish can survive and reproduce and the importance of information on water quality characteristics for interpreting toxicity data.

Although wildlife toxicologists are also interested in long-term exposures (e.g., full lifecycles), such testing is expensive and thus rarely done. Instead, acute and subchronic data are used to make predictions of longer-term effects. Abt Associates [56] examined chronic and subchronic dietary toxicity data from 174 toxicity studies on birds and mammals for two organics and two metals and compared NOAELs determined at 28 to 89 d of test duration, 90 to 180 d duration, and greater than 180 d duration. Ninety percent of the NOAELs from the various studies were within a factor of 20. As expected, the variability between tests of similar duration was less than that of tests of widely different duration. Abt Associates [56] cautiously concluded that an uncertainty factor of 10 should account for most differences in test duration, with the caveat that much of the variability in the data set may have been due to differences in study design rather than real differences due to exposure duration.

LOEC-to-NOEC extrapolation

In some cases, an NOEC is required for ecological risk assessments, but in many cases only LOEC values are available. In these cases, use of a safety factor of 10 has been endorsed by the U.S. EPA for aquatic systems (Federal Register for Water Quality Criteria Documents, 45 FR 79353–79354, November 18, 1980; National Drinking Water Regulations, 50 FR 46944–46946, November 13, 1985). Abt Associates [56] examined the database for birds and mammals and concluded that 97% of the LOEC-to-NOEC ratios for birds and mammals were a factor of 10 or less; 80% of the studies had differences of sixfold or less. However, analysis of the data was compromised by the fact that many of the LOECs were “unbounded” and not appropriate for this analysis; that is, the lowest dose tested caused an effect, so there was no real way of determining the true LOEC or NOEC value. Furthermore, they cautioned that the slope of the dose–response curve in the low-dose section of the curve is very important in deriving this uncertainty factor yet is almost never known. They cautiously endorsed an LOEC-to-NOEC uncertainty factor of 10 but urged the use of best professional judgement.

Laboratory-to-field extrapolation

Ecologists continue to question the validity and relevance of laboratory studies to field situations [62]. Similarly, risk assessors have been cautious in their use of laboratory data. Some argue for a 10-fold uncertainty factor to ensure adequate conservatism in risk estimates. One of the prominent beliefs espoused by many risk assessors is that organisms in the field, because of the multiple stresses they face, are more sensitive to chemical stress than laboratory organisms under the same conditions. This does not appear to be the case. In the subdiscipline of allelopathy, laboratory studies generally showed organisms sensitive to natural toxins, whereas many field studies with the same toxins found no effects [63]. This led to a consensus that allelochemic responses could be demonstrated conclusively only through field studies [64].

The first comparative analysis of laboratory-to-field extrapolations for plant toxicity information was produced by Fletcher et al. [57] for the pesticide literature. They found remarkable agreement between laboratory and field determinations of plant EC50 values. Only three of 20 EC50 values from laboratory studies were more than twofold higher than the EC50 value determined in the field (response ratio, >0.5). Similarly, only three of 20 field EC50 values were more than twofold greater than the laboratory EC50. However, similar agreement has not been shown for aquatic organisms. For instance, LC50s can vary by 100-fold depending on environmental conditions [22,65].

No similar comprehensive comparison of laboratory-derived toxicity thresholds with field exposure response for mammals or birds has been performed, although laboratory studies have shown that animals under various types of simulated environmental stresses show a toxic response at lower exposure doses. Porter et al. [66] showed that animals might be more vulnerable to chemicals when simultaneously stressed by food shortage or pathogen exposure. Maguire and Williams [67] showed an increased sensitivity of bobwhite quail to cholinesterase inhibitors when under cold stress; many other stress factors are known to influence cholinesterase activity in birds and mammals [68]. However, field studies suggest that these changes in sensitivity are minor compared with our ability to correctly estimate actual exposures. Therefore, an uncertainty factor should not be used to correct for hypothesized laboratory and field differences.

Edge et al. [69] conducted such a study with gray-tailed vole (Microtus canicaudus) and deer mice (Peromyscus maniculatus) exposed to an organophosphorus pesticide and showed that laboratory data overpredicted risk to the rodents. This result was likely due to a difference between actual exposure and expected exposure because the spray application of the pesticide did not reach expected concentrations beneath the short alfalfa canopy [70]. A study recently conducted by A. Fairbrother (unpublished) of small mammals inhabiting a large area downwind of a smelter showed no effects (from histology to population size and demography) at environmental concentrations predicted to cause a response based on laboratory data. Similar findings were reported for another zinc smelter [71]. These differences likely are due to an overestimation of the bioavailability of metals in environmental samples as compared with laboratory dosing regimes.

Indeed, the problems associated with extrapolating laboratory data to field circumstances lie in our inherent inability to completely simulate all the environmental variables that may influence the response of the receptor to the test substance (e.g., climate stress, interspecific interactions, behavioral responses, etc.). However, it is exactly this weakness in laboratory replication of environmental variables that is the greatest asset of laboratory studies. By keeping many environmental factors constant, the intrinsic hazards of compounds can be studied and understood. The caution illustrated by the discussion in the preceding paragraphs is that laboratory studies generally are not designed to generate data to accurately predict effects in the field. Additionally, the information summarized above suggests that there is not an unidirectional misapplication of laboratory data to field risk assessments; that is, laboratory responses are not always greater or lesser than environmental responses but vary with the compound and species studied. Thus, a scientific basis for derivation of a single safety (or uncertainty) factor to extrapolate laboratory data to field effects is lacking. As noted by Cairns [72], the best way to extrapolate from the laboratory to the field is not by using safety factors but rather by conducting appropriate tests and appropriate assessments of the results of such tests.

Hazard and risk assessment

In the case of the Canadian Environmental Protection Act, safety factors (termed uncertainty factors) are derived on a case-by-case basis depending on data quality [73]. Factors of 1 to 10 account for intra- and interspecies variation; additional factors of 1 to 100 are used for data inadequacies, such as deficiencies in a key study or a lack of chronic data. When multiplied out, safety factors can in some cases be as high as 10,000.

In the European Union, legislation specifically requires that safety factors (more commonly termed “assessment factors”) be used in environmental risk assessment [19,74]. In risk assessments, the predicted environmental concentration (PEC) of a substance is compared with its predicted no-effect concentration (PNEC). The latter can be derived by either statistical extrapolation [51] or a safety factor approach [27]. An initial comparison is done using very simple data (e.g., acute toxicity testing) and conservative assumptions (e.g., safety factors). If this initial comparison indicates there may be a problem, the PEC or PNEC is refined (e.g., with chronic toxicity testing), often in several stages using increasingly elaborate data and correspondingly less conservative assumptions. The initial comparison may involve a safety factor of 1,000 divided into the lowest acute toxicity value to give the PNEC [21]. This safety factor can be lowered if appropriate relevant information is available; for substances whose use pattern is likely to result in intermittent release, it is reduced to 100. In subsequent comparisons, safety factors can be reduced to 50 or 10 if chronic testing data are available, depending on the extent of such data. Similar approaches are used by the U.S. EPA [24].
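
A schematic sketch of that tiered logic follows. The assessment factors (1,000, 100 for intermittent release, and 50 or 10 with chronic data) are those cited above; the way chronic-data availability is mapped to a factor, and all endpoint values, are simplifying assumptions for illustration only:

```python
def pnec_screening(lowest_acute_endpoint: float, intermittent_release: bool = False) -> float:
    """Initial tier: divide the lowest acute L(E)C50 by 1,000 (100 for intermittent release)."""
    return lowest_acute_endpoint / (100.0 if intermittent_release else 1000.0)

def pnec_refined(lowest_chronic_noec: float, extensive_chronic_data: bool) -> float:
    """Refined tier: a smaller factor (here 10 or 50) applied to the lowest chronic NOEC."""
    return lowest_chronic_noec / (10.0 if extensive_chronic_data else 50.0)

pec = 0.004                          # predicted environmental concentration, mg/L (assumed)
pnec = pnec_screening(1.2)           # lowest acute EC50 = 1.2 mg/L -> PNEC = 0.0012 mg/L
if pec / pnec > 1.0:                 # risk indicated: refine with chronic data
    pnec = pnec_refined(lowest_chronic_noec=0.15, extensive_chronic_data=True)  # 0.015 mg/L
print(f"final PEC/PNEC = {pec / pnec:.2f}")   # below 1 after refinement
```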

Safety factors such as application or assessment factors were originally intended as useful rules of thumb for their time; they were not intended to be used indiscriminately and were to be tempered when actual data were available [75]. A major problem with safety factors is that, even when they are based on data, they are almost always general rather than specific and frequently overly conservative. For instance, when data are available from several different sublethal endpoints (e.g., growth and reproduction), the lower (most sensitive) value is used to select safety factors that are then applied to all endpoints; similarly, when data are available for several species, those from the most sensitive species are used to determine safety factors that are then applied to all species [19]. This adds an additional level of conservatism to the estimate.

The use of generic AFs for estimating safe concentrations, when only acute toxicity data are known, was developed for national usage in the United States by the National Academy of Sciences (NAS) [76]. This key report used “… present knowledge, experience, and judgement …” to determine two classes of toxic compounds, for which different generic AFs were applied for safety at any time or place: non-persistent (half-life < 4 d) or noncumulative substances were considered safe at 0.1 of 96-h LC50 values; persistent, accumulative substances were considered safe only at concentrations an order of magnitude lower. The document notes that these factors were arbitrary and that “Future studies may show that the application factor must be decreased or increased in magnitude.” The NAS [76] also recognized that AFs were not constant but varied for different substances and species. Thus, in addition to the above generic recommendations, specific intermediate AFs of 0.05 and 0.02 were recommended for some substances. The Organization for Economic Cooperation and Development (OECD) [19] has also recognized the need for different ACRs for different substances. The European Union [27] considers safety factors to be “living” numbers that are subject to change as data are generated to address extrapolation uncertainties.

Many AFs in use today (e.g., 0.1 and 0.01) are estimates made some three decades ago in the expectation that future research would result in refinements. Some refinement was even provided a quarter of a century ago [76] and has also been suggested recently [21]. However, many of these values have become dogma and are being generally applied, in both developed and developing countries [77]. For instance, in the United States dredged material is evaluated for disposal based on a factor of 0.01 applied to LC50 values [78]. This conservative estimate assumes that persistent, accumulative chemicals are present in the dredged material and ignores the high sensitivity of 1990s toxicity tests compared with 1970s toxicity tests. In many cases the true safety factors may be two- to threefold less than the standard default value of 10 adopted for individual components of uncertainty [73].

For several categories of substances the overall aquatic risk assessment scheme appears to present particular difficulties. These include substances that are bioavailable in routine laboratory tests and show toxic effects in laboratory test systems but in the real environment are effectively rendered less or not bioavailable through processes such as sorption onto solids or transformation to non- or less toxic forms; substances that are very sparingly soluble in water and that do not show acute or chronic effects at the limit of water solubility; and substances that rapidly photodegrade, biodegrade, hydrolyze, or are oxidized or reduced. Metals figure prominently in the first two categories of substances, and organic chemicals are prominent in the third category. Problems arise in interpreting laboratory toxicity tests with the first category of substances because analytical methods do not differentiate between bioavailable and nonbioavailable forms. This problem can be addressed experimentally by the development of additional toxicity test procedures to demonstrate whether the real environment is likely to make a substance more or less bioavailable. Coupled with this is a need for detailed understanding of the substance's chemistry (e.g., oxidation states and co-ordination chemistry of metal ions). Addressing the second class of substances (those that are very sparingly soluble) requires methods of determining transformation and solubility of the resulting transformed products. Methods for metal transformation and interpretation of the toxicity results relative to the parent metal are presently under development [79]. Regarding the third category (organic substances), it is commonly understood that laboratory tests (especially those that are flow through) over-estimate potential real-world toxicity because the test controls are maintained at a constant concentration, whereas in the field exposure concentrations often decline rapidly as a result of environmental fate processes.

Safety factors are used by the European Union [27] in risk assessments of sediment and soil. For example, an apparently arbitrary safety factor of 10 is applied to some substances in sediments to account for uptake via ingestion. As a result, calculated PNECs may indicate potential problems at concentrations of essential metals in soils well below plant nutrient requirements. To some extent this problem is addressed for both terrestrial and aquatic environments by the caution [27] that “calculated PNECs derived for essential metals may not be lower than natural background concentrations.” But this caution does not address cases in which natural background concentrations are deficient for the well-being of some organisms. For instance, because of the incorrect application of laboratory-determined safety factors to field situations, the Dutch freshwater quality criterion for zinc is below the average background level of zinc in Dutch rivers [80].

SUMMARY AND RECOMMENDATIONS

Summary

Safety factors are used in human health risk assessments and in risk assessments related to both the aquatic and terrestrial environments. Considerable effort has been devoted to discussions on the correct numeric size of different factors. To some extent this is a futile discussion because no one set of factors has universal applicability. Individual organisms show different responses to chemical stress that are dependent on both genetic and environmental factors [65,81]. There is a clear philosophical difference between protection of all humans as individuals (the basis for human health risk assessment and hence for a greater need for conservatism) and protection of ecosystems, not individuals (the basis for ecological risk assessment).

The set of factors that may be correct for one chemical substance in one particular exposure scenario will be incorrect for the same substance in a different exposure scenario and more especially for another substance with a different mode of toxic action. On one hand, failure to use assessment factors may allow substances with unforeseen modes of action to slip through the risk assessment net. On the other hand, if assessment factors are made too high, they may provide an almost impossible barrier for the development of new and possibly more environmentally friendly substances, cause unnecessary suspicion and testing costs for existing substances, or result in cleanup or criteria numbers that are so low they lack credibility.

The ideal situation is that of sufficient data so that safety factors are not needed or can be very small. When data are sparse, safety factors will tend to be large but should not be ridiculously large. Regulatory decisions based on large safety factors should be made with recognition of the high degree of uncertainty implicit in such factors and, as in the European Union, should be directed to acquisition of adequate data for informed decision-making rather than to decision-making without adequate information. In this regard, the Precautionary Principle undermines the risk assessment approach of comparing PECs with PNECs: either the PNEC is defined as infinitely small, or the safety factor is defined as infinitely large. The European Union [27] and other risk assessment schemes, following Cairns et al. [82], reduce safety factors as more data become available (e.g., as the toxicity database expands). This approach, in contrast to the Precautionary Principle, is useful because it is based on good science, whereas the latter arguably is applied independent of science [48,49].

Values derived using safety factors should not be used as threshold values for a toxic effect or as absolute values. Safety factors reflect a precautionary approach to environmental management that is laudable when science and facts are not excluded from consideration and final decisions are made on the basis of facts, not perceptions. One method of using science in a precautionary approach is to stress statistical type II errors (incorrectly concluding that there are no effects) and power analyses over traditional type I errors (incorrectly concluding that there are effects) [83]. Applying this approach would involve specifying the requirements for detecting real change and determining whether monitoring programs can really detect change, both of which are too rarely done.

Uncertainty, by definition [84], is an integral component of risk assessment and is usually addressed by conservatism (e.g., the use of large safety factors). Probabilistic modeling is a better method of addressing uncertainty. However, the best method is to measure the same endpoint using different approaches and assign the greatest credibility to results that are confirmed by a combined evaluation of the approaches used (i.e., weight of evidence [85–87]). For example, when chemical analyses indicate a potential problem but appropriate biological studies indicate no problem, corrective action would not generally be necessary. Otherwise, impossible situations can arise, for instance, cases where application of uncertainty (safety) factors results in estimates of environmentally acceptable levels of essential metals that are below nutritional requirements [80].

Safety factors are not intended as mathematical absolutes but rather as screening tools that are surrounded by some unquantifiable level of imprecision. Such screening tools may be suitable for use by trained risk assessors, but they can be deceiving and confusing to the general public, particularly when there are disagreements of an order of magnitude or more between different numbers. For example, the U.S. EPA and the U.S. Agency for Toxic Substances and Disease Registry have developed health guidance values for ethylbenzene inhalation by humans that differ by over an order of magnitude even though both numbers are based on the same data. The difference is due primarily to the use of different safety factors [88]. As noted by the OECD [3], safety factors “remain largely qualitative and lack a strong quantitative foundation.” Thus, safety factors should not be used as stand-alone numbers but rather in context with scientific knowledge about different substances of concern, including use patterns, and considering potentially exposed populations.

Recommendations

Extrapolation and use of safety factors are needed when data are missing. The unfortunate reality, however, is that in too many cases there is no attempt to obtain data; instead, too much information is being extrapolated from too few data. Nevertheless, given that we do not have complete information on the toxicity of all substances to all organisms, safety factors do have a place in risk assessment. The question is, of course, how large or small they should be (e.g., when extrapolating from acute to chronic data; cf. the Appendix).

Any use of safety factors must be appropriate to their purpose. In particular, the scale, frequency, severity, and potential long-term consequences of any environmental insult (e.g., a spill with acute effects compared to long-term releases of a chronic toxicant) need to be considered in determining the element of precaution necessary (i.e., the magnitude of any safety factors). Clearly, potential widespread and irreversible effects require a greater level of precaution and hence greater safety factors than potential local and reversible effects.

The need for specific rather than generic safety factors was recognized by the NAS [76] but has not been implemented in the preceding more than two decades of their use. The U.S. EPA [89] recommended “a peer-reviewed report that provides direction on the application factor issue appropriate for the 1990s,” but this recommendation also remains to be implemented.

Environment Canada [90] suggests, on the basis of empirical evidence, that a factor of 10 applied to acute lethality aquatic toxicity tests “… has generally provided a reliable and moderately conservative estimate of thresholds of sublethal effect in the environment” but admits that different values may be more appropriate depending on the substance and the situation. Although the Canadian Council of Ministers of the Environment [14] uses AFs for developing Canadian water quality guidelines, they caution that AFs should be used only when chronic toxicity data or ACRs are not available and that they may not be suitable for some chemicals, such as zinc. In Europe and the United States, a factor of 10 has also been judged appropriate, provided chronic testing is done with algae, daphnids, and fish and the lowest endpoint is used; however, if only one of these tests is used and only acute toxicity is measured, safety factors of 1,000 or more could be and are required [4,17,27]. Zeeman [25] noted that although most ecotoxicologists justifiably abhor the use of arbitrary safety factors, the continued but judicious and highly caveated use of safety factors is justified “until the assumptions of … new methods have been tested rigorously.” It is tempting to suggest continued use (until better information is available) of a factor of 10 as a “rule of thumb” when sufficient toxicity test data are available (i.e., more than one test), with inclusion of appropriate caveats and uncertainties. But in the case of noncancer risks to humans, participants in a U.S. EPA workshop clearly concluded that default factors of 10 are generally excessive [91]. Also, any focus on a single number will be misleading (e.g., through loss of critical information or through reliance on the number while neglecting the uncertainty surrounding it).

An LOEC-to-NOEC uncertainty factor should not be used in ecotoxicological risk assessments. Derivation of LOEC and NOEC is affected by study design, including the number and spacing of tested doses, endpoints measured, and the number of replicates in each dose. Therefore, to begin with, there is a great deal of uncertainty about this point estimate. To estimate an NOEC from an LOEC by merely dividing the LOEC by 10 is compounding the uncertainty in a manner that makes the result essentially meaningless. In fact, human health toxicological risk assessments are moving away from the use of LOEC or NOEC values in favor of a benchmark value, the confidence interval of the concentration that results in an EC10 [92]. The ecotoxicology community is engaged in a debate about whether to adopt this same approach and, if so, what level of ECx may be appropriate [93–95]. In ecological systems, moreover, it may not be necessary to achieve an NOEC for 100% of the individual organisms if population maintenance or community structure are the endpoints of concern or if ecosystem elasticity (i.e., the ability to snap back from ecological displacement) is high [96].

We suggest that risk assessments be based on a point estimate value (e.g., an EC10 or EC20 [97]) and that a policy decision be made about the desired margin of safety in the resulting cleanup concentration or environmental guideline. The slope of the dose–response curve should be considered when making this decision, as should the effects level and its associated confidence interval. A substance with a steep dose–response curve may warrant a greater margin of safety than a substance whose response changes only after a large additional increase in exposure.
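
To illustrate why the slope matters (a minimal sketch with assumed parameters, not a prescribed model), with a two-parameter log-logistic curve any ECx follows from the EC50 and the slope; a steep curve puts the EC10 close to the EC50, leaving little distance between small and large effects:

```python
def ecx(ec50: float, slope: float, x_percent: float) -> float:
    """ECx of a log-logistic dose-response: effect(d) = 1 / (1 + (EC50/d)**slope)."""
    p = x_percent / 100.0
    return ec50 * (p / (1.0 - p)) ** (1.0 / slope)

print(ecx(ec50=1.0, slope=4.0, x_percent=10))   # ~0.58: steep curve, EC10 near the EC50
print(ecx(ec50=1.0, slope=1.0, x_percent=10))   # ~0.11: shallow curve, EC10 well below it
```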

For many natural waters, the in situ toxicity of, for instance, metals may be far lower than under standardized laboratory testing conditions because of speciation effects, as can be shown when tests are carried out in those natural waters; receiving-water sublethal toxicity thresholds may then be close to laboratory acute toxicity thresholds. For many substances, including metals, mixture toxicity can be addressed relatively simply by assuming additivity [98], and large safety factors are not needed for extrapolation from the laboratory to the field [99]. Thus, for aquatic systems, we suggest the following range for expected threshold values, based on data from chronic laboratory studies with at least three appropriately sensitive and representative species (alga, invertebrate, and vertebrate): <0.1EC50 to >EC50. This range could be termed the “potential threshold effects range” (PTER) to clearly indicate that this information is not suitable for final decision-making. As data become available, different substances could be appropriately placed into different hazard categories and different PTERs determined for each category as per NAS guidelines [76]. In any case, risk assessors need to understand clearly that these are simply default values, to be modified with data whenever possible and ideally to be validated at the ecosystem level “before they are implemented for regulatory purposes” [53].
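
One way such a range might be reported is sketched below (the chronic EC50s for the three taxa, and the reporting format, are assumptions for illustration only):

```python
# Hypothetical chronic EC50s, mg/L, for the three recommended taxa
chronic_ec50 = {"alga": 0.8, "invertebrate": 0.3, "vertebrate": 1.5}

lowest = min(chronic_ec50.values())
# Report a range spanning the suggested bounds rather than a single safety-factored number:
print(f"PTER: <{0.1 * lowest:.2g} to >{lowest:.2g} mg/L (screening only, not a decision value)")
```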

Ultimately, we need to begin using experimental results rather than applying safety factors to compensate for lack of information. Specifically, we need to use available data, to obtain necessary new data, and to minimize extrapolations. For example, valid conclusions about safe concentrations of chemicals in aquatic ecosystems can be derived on the basis of a very few appropriate, sensitive laboratory toxicity tests [100]. This approach is presently being proposed by the U.S. EPA relative to cancer risk assessment [101] and, for both water and sediment quality values [102] and for human health risks in general [103], is a logical extension of the European Union [27] approach of decreasing the size of safety factors with increasing information and was historically recognized as appropriate not only for individual substances such as copper [104] but also for complex effluent discharges [105]. The use of experimental data has the added value of rendering safety factors predictive rather than simply protective. When such data are lacking, uncertainty analysis conducted as part of a risk assessment [106] can assist in identifying which experimental data will most reduce uncertainty.

In summary, we suggest the following principles for the use of safety factors. (1) Data supersede extrapolation. When appropriate data are available, they should be used rather than safety factors. Also, when safety factors are being used, they should be replaced with appropriate data as such data become available. (2) Extrapolation requires context. Any use of safety factors should be based on existing scientific knowledge and should include appropriate caveats. (3) Extrapolation is not fact. Safety factors should be used only for screening, not as threshold or absolute values. (4) Extrapolation is uncertain. Safety factors should encompass a range, not a single number. (5) All substances are not the same. Safety factors should be scaled relative to different substances (e.g., dose–response curves), potential exposures, and effects (severity and reversibility). (6) Unnecessary overprotection is not useful. Safety factors for individual extrapolations (e.g., laboratory to field) should not exceed a factor of 10 and may be much less.

Acknowledgements

Acknowledgement—The preparation of this report was supported by an unrestricted research grant to EVS Environment Consultants from the Ecotoxicity Research Program of the International Lead Zinc Research Organization, the Nickel Producers Environmental Research Organization, and the International Copper Association. We thank Cathy McPherson and Rhona Karbusicky for technical assistance. Independent review comments were provided by Chris Wood, Graeme Batley, Wilfried Ernst, Guido Persoone, Colin Janssen, William Adams, Craig Boreiko, and Andrew Green.

Appendix

Table A1. Reasons for large and small safety factors^a

Reasons for large safety factors | Reasons for small safety factors
Laboratory tests may be conducted under optimal conditions. | Laboratory tests may be conducted under “worst-case” conditions.
Laboratory exposures tend to be relatively constant. | Field exposures may be intermittent.
The most sensitive species cannot be tested (let alone held) in the laboratory. | Ecological compensation and regulation mechanisms operate in the field.
In the field, organisms are exposed to mixtures, not individual substances. | In the field, biological availability tends to be lower than in the laboratory.
Adaptation may entail costs in ecological performance. | In the field, adaptation may occur.

^a Modified from Van Straalen and Dennemann [107].

REFERENCES

  • 1
    Emans H, Van de Plassche E, Canton J, Okkeman P, Sparenburg P. 1993. Validation of some extrapolation models used for effects assessment. Environ Toxicol Chem 12: 21392154.
  • 2
    Stephan CE, Mount DI, Hansen DJ, Gentile JH, Chapman GA, Brungs WA. 1985. Guidelines for deriving numeric national water quality criteria for the protection of aquatic organisms and their uses. PB85–227049. National Technical Information Service, Springfield, VA, USA.
  • 3
    Organization for Economic Cooperation and Development. 1989. Report of the OECD workshop on ecological effects assessment. Environmental Monograph 26. Paris, France.
  • 4
    Organization for Economic Cooperation and Development. 1995. Guidance document for aquatic effects assessment. Environmental Monograph 92. Paris, France.
  • 5
    Hart WB, Doudoroff P, Greenbank J. 1945. The evaluation of the toxicity of industrial wastes, chemicals and other substances to freshwater fishes. Final Report. Waste Control Laboratory, Atlantic Refining, Philadephia, PA, USA.
  • 6
    Henderson C, Tarzwell CM. 1957. Bio-assays for control of industrial effluents. Sewage Ind Wastes 29: 10021017.
  • 7
    Henderson C. 1957. Application factors to be applied to bioassays for the safe disposal of toxic wastes. In TarzwellCM, ed, Biological Problems in Water Pollution. U.S. Public Health Service, Cincinnati, OH, pp 3137.
  • 8
    Aquatic Life Advisory Committee of the Ohio River Valley Water Sanitation Commission. 1955. Aquatic life water quality criteria. First Progress Report. Sewage Ind Wastes 27: 321331.
  • 9
    Warren CE, Doudoroff P. 1958. The development of methods for using bioassays in the control of pulp mill disposal. Tappi 41: 211A216A.
  • 10
    Mount DI. 1977. An assessment of application factors in aquatic toxicology. Proceedings, Recent Advances in Fish Toxicology Symposium, Corvallis OR, USA, January 13–14, pp 183190.
  • 11
    Warner RE. 1976. Bioassays for microchemical environmental contaminants: With special reference to water supplies. Bulletin 36. World Health Organization, Stockholm, Sweden.
  • 12
    National Technical Advisory Committee. 1968. Water quality criteria. U.S. Federal Water Pollution Control Administration, Washington, DC.
  • 13
    Ontario Ministry of Environment. 1984. Water management: Goals, policies, objectives and implementation procedures of the Ministry of Environment. Toronto, ON, Canada.
  • 14
    Canadian Council of Ministers of the Environment. 1991. A protocol for the derivation of water quality guidelines for the protection of aquatic life, Canadian water quality guidelines. Winnipeg, MB, Canada, appendix 9.
  • 15
    Giesy JP Jr, Graney RL. 1989. Recent developments in and intercomparisons of acute and chronic bioassays and bioindicators. Hydrobiologia 188/189: 2160.
  • 16
    Truhaut R. 1991. The concept of the acceptable daily intake: An historical review. Food Addit Contam 8: 151162.
  • 17
    Van Leeuwen K. 1990. Ecotoxicological effects assessment in The Netherlands: Recent developments. Environ Manage 14: 779792.
  • 18
    U.S. Environmental Protection Agency. 1991. Technical support document for water quality-based toxics control. EPA/505/2–90–001. Washington, DC.
  • 19
    Organization for Economic Cooperation and Development. 1992. Report of the OECD workshop on the extrapolation of laboratory aquatic toxicity data to the real environment. Environmental Monograph 58. Paris, France.
  • 20
    Martinez-Jeronimo F, Villasenor R, Espinosa R, Rios G. 1993. Use of life tables and application factors for evaluating chronic toxicity of kraft mill wastes on Daphnia magna. Bull Environ Contam Toxicol 50: 377–384.
  • 21
    European Centre for Ecotoxicology and Toxicology of Chemicals. 1993. Environmental hazard assessment of substances. Technical Report 51. Brussels, Belgium.
  • 22
    Persoone G, Janssen CR. 1994. Field validation of predictions based on laboratory toxicity tests. In Hill IR, Heimbach F, Leeuwangh P, Matthiessen P, eds, Freshwater Field Tests for Hazard Assessment of Chemicals. Lewis, Boca Raton, FL, USA, pp 379–397.
  • 23
    Lehman AJ, Fitzhugh OG. 1954. 100-Fold margin of safety. Assoc Food Drug Off U S Q Bull 18: 33–35.
  • 24
    Zeeman MG. 1995. EPA's framework for ecological effects assessment. OTA-BP-ENV-166. Office of Technology Assessment, U.S. Environmental Protection Agency, Washington, DC, pp 69–78.
  • 25
    Zeeman MG. 1995. Ecotoxicity testing and estimation methods developed under section 5 of the Toxic Substances Control Act (TSCA). In Rand GM, ed, Fundamentals of Aquatic Toxicology, 2nd ed. Taylor and Francis, Washington, DC, pp 703–715.
  • 26
    Forbes VE, Forbes TL. 1994. Ecotoxicology in Theory and Practice. Chapman and Hall, New York, NY, USA.
  • 27
    European Union. 1996. Environmental risk assessment of new and existing substances. Technical Guidance Document. Brussels, Belgium.
  • 28
    Federal Republic of Germany. 1980. Umweltprobleme der Nordsee. Sondergutachten Juni 1980. Kohlhammer, Stuttgart, Germany.
  • 29
    Minister for the Environment, Nature Conservation and Nuclear Safety. 1986. Umweltpolitik: Guidelines on anticipatory environmental protection. Bonn, Germany.
  • 30
    Von Moltke J. 1992. Commentary: The Precautionary Principle. Environment 34: 34.
  • 31
    London Conference. 1987. Second International Conference on the Protection of the North Sea: Ministerial declaration. London, UK.
  • 32
    United Nations Environment Programme Governing Council. 1989. Precautionary approach to marine pollution, including waste dumping at sea. Annex 1. UNEP/GC 15/21. United Nations General Council, New York, NY, USA, pp 63–65.
  • 33
    Gilbertson M, Schneider RS. 1993. Causality: The missing link between science and policy. J Great Lakes Res 19: 720–721.
  • 34
    El-Shaarawi AH. 1994. Commentary: Proving causality is not always necessary and sufficient for regulatory action. J Great Lakes Res 20: 593–594.
  • 35
    Gilbertson M. 1994. Reply to Dr. A. H. El-Shaarawi. Inferring causality is sometimes necessary for extraordinary regulatory action—though not sufficient. J Great Lakes Res 20: 594–596.
  • 36
    O'Brien MH. 1994. The scientific imperative to move society beyond “just not quite fatal.” Environ Prof 16: 356–365.
  • 37
    Earll RC. 1992. Common sense and the Precautionary Principle—An environmentalist's perspective. Mar Pollut Bull 24: 182–186.
  • 38
    MacGarvin M, Johnston P. 1993. On precaution, clean production and paradigm shifts. Water Sci Technol 27: 469–480.
  • 39
    MacGarvin M. 1995. The implications of the Precautionary Principle for biological monitoring. Helgol Meeresunters 49: 647–662.
  • 40
    Stebbing ARD. 1992. Environmental capacity and the Precautionary Principle. Mar Pollut Bull 24: 287–295.
  • 41
    Ellis D. 1993. The Precautionary Principle: A taxpayer's revolt. Mar Pollut Bull 26: 170–171.
  • 42
    Gray JS. 1995. Science and the Precautionary Principle: A marine perspective. In FreestoneD, HeyE, eds, The Precautionary Principle and International Law: The Challenge of Implementation. Kluwer, The Hague, The Netherlands.
  • 43
    Brewers JM. 1995. The declining influence of science on marine environmental policy. Chem Ecol 10: 9–23.
  • 44
    Gray JS. 1990. Statistics and the Precautionary Principle. Mar Pollut Bull 21: 174–176.
  • 45
    Chapman PM, Thornton I, Persoone G, Janssen C, Godtfredsen K, Z'Graggen MN. 1996. International harmonization related to persistence and bioavailability. Hum Ecol Risk Assess 2: 393–404.
  • 46
    Garcia S. 1994. The Precautionary Principle: Its implications in capture fisheries management. Ocean Coast Manage 22: 99–125.
  • 47
    McCullagh JR. 1996. Russian dumping of radioactive wastes in the Sea of Japan: An opportunity to evaluate the effectiveness of the London Convention 1972. Pac Rim Law Policy J 5: 399–427.
  • 48
    Gray JS, Brewers JM. 1996. Towards a scientific definition of the Precautionary Principle. Mar Pollut Bull 32: 768–771.
  • 49
    Chapman PM. 1997. The Precautionary Principle and ecological quality standards/objectives. Mar Pollut Bull 34: 227–228.
  • 50
    Rand GM, Wells PG, McCarty LS. 1995. Introduction to aquatic toxicology. In Rand GM, ed, Fundamentals of Aquatic Toxicology, 2nd ed. Taylor and Francis, Washington, DC, pp 3–67.
  • 51
    Kooijman SALM. 1987. A safety factor for LC50 values allowing for differences in sensitivity among species. Water Res 21: 269–276.
  • 52
    Schudoma D. 1994. Derivation of water quality objectives for hazardous substances to protect aquatic ecosystems: Single-species test approach. Environ Toxicol Water Qual 9: 263–272.
  • 53
    Wagner C, Lokke H. 1991. Estimation of ecotoxicological protection levels from NOEC toxicity data. Water Res 25: 1237–1242.
  • 54
    Aldenberg T, Slob W. 1993. Confidence limits for hazardous concentrations based on logistically distributed NOEC toxicity data. Ecotoxicol Environ Saf 25: 48–63.
  • 55
    Kenaga EE. 1982. Predictability of chronic toxicity from acute toxicity of chemicals in fish and aquatic invertebrates. Environ Toxicol Chem 1: 347–358.
  • 56
    Abt Associates. 1995. Technical basis for recommended ranges of uncertainty factors used in deriving wildlife criteria for the Great Lakes water quality initiative. Final Report. Office of Water, U.S. Environmental Protection Agency, Washington, DC.
  • 57
    Fletcher JS, Johnson FL, McFarlane JC. 1990. Influence of greenhouse versus field testing and taxonomic differences on plant sensitivity to chemical treatment. Environ Toxicol Chem 9: 769–776.
  • 58
    Sloof W, Van Oers JAM, De Zwart D. 1986. Margins of uncertainty in ecotoxicological hazard assessment. Environ Toxicol Chem 5: 841–852.
  • 59
    Dourson ML, Stara JF. 1983. Regulatory history and experimental support of uncertainty (safety) factors. Regul Toxicol Pharmacol 3: 224–238.
  • 60
    Shirazi MA, Bennett RS, Ringer RK. 1994. An interpretation of toxicity response of bobwhite quail with respect to duration of exposure. Arch Environ Contam Toxicol 26: 417–424.
  • 61
    Davis PH. 1973. Effects of chemical variations in aquatic environments: Vol III. Lead toxicity to rainbow trout and testing application factor concept. EPA-R3–73–011c. Ecological Research Report. Office of Research and Monitoring, U.S. Environmental Protection Agency, Washington, DC.
  • 62
    Chapman PM. 1996. Extrapolating laboratory toxicity results to the field. Environ Toxicol Chem 14: 927–930.
  • 63
    Rice EL. 1984. Allelopathy, 2nd ed. Academic, Orlando, FL, USA.
  • 64
    Fuerst EP, Putnam AR. 1983. Separating the competitive and allelopathic components of interference: Theoretical principles. J Chem Ecol 8: 935–937.
  • 65
    Persoone G, Van de Vel A, Steertegen M, De Nayer B. 1989. Predictive value of laboratory tests with aquatic invertebrates: Influence of experimental conditions. Aquat Toxicol 14: 149–166.
  • 66
    Porter WP, et al. 1984. Toxicant–disease–environment interactions associated with suppression of immune system, growth and reproduction. Science 224: 1014–1017.
  • 67
    Maguire CC, Williams BA. 1987. Response of thermal stressed juvenile quail to dietary organophosphate exposure. Environ Pollut 47: 25–39.
  • 68
    Rattner BA, Fairbrother A. 1991. Biological variability and the influence of stress on cholinesterase activity. In Mineau P, ed, Cholinesterase-Inhibiting Insecticides: Their Impact on Wildlife and the Environment. Elsevier, Amsterdam, The Netherlands, pp 89–108.
  • 69
    Edge WD, Carey RL, Wolff JO, Ganio LM, Manning T. 1996. Effects of Guthion 2S on Microtus canicaudus: A risk assessment validation. J Appl Ecol 32: 269–278.
  • 70
    Bennett RS, Edge WD, Griffis WL, Matz AC, Wolff JO, Ganio LM. 1994. Temporal and spatial distribution of azinphos-methyl applied to alfalfa. Arch Environ Contam Toxicol 27: 534–540.
  • 71
    Beyer WN, Storm G. 1995. Ecotoxicological damage from zinc smelting at Palmerton, Pennsylvania. In Hoffman D, Rattner BA, Burton GA Jr, Cairns J Jr, eds, Handbook of Ecotoxicology. Lewis, Boca Raton, FL, USA, pp 596–608.
  • 72
    Cairns J Jr. 1986. The myth of the most sensitive species. BioScience 36: 670–672.
  • 73
    Meek ME. 1998. Perceived precision of risk estimates for carcinogenic versus non-neoplastic effects: Implications for methodology. Hum Ecol Risk Assess (in press).
  • 74
    Brown D. 1996. Environmental classification and risk assessment. In Quint MD, Taylor D, Purchase R, eds, Environmental Impact of Chemicals: Assessment and Control. Royal Society of Chemistry, Cambridge, UK, pp 197–208.
  • 75
    Sprague JB. 1971. Measurement of pollutant toxicity to fish—III. Sublethal effects and “safe” concentrations. Water Res 5: 245–266.
  • 76
    National Academy of Sciences. 1972. Water quality criteria. EPA-R3–73–033. Committee on Water Quality Criteria, Washington, DC.
  • 77
    Chen J-C, Lin C-Y. 1991. Lethal doses of ammonia on Penaeus chinensis larvae. Bull Inst Zool Acad Sin (Taipei) 30: 289–297.
  • 78
    U.S. Environmental Protection Agency, Army Corps of Engineers. 1995. Evaluation of dredged material proposed for discharge in waters of the U.S.—Testing manual. EPA-823-B-94–002. Washington, DC.
  • 79
    Chapman PM. 1996. Hazard identification, hazard classification and risk assessment for metals and metal compounds in the aquatic environment. International Council on Metals and the Environment, Ottawa, ON, Canada, p 31.
  • 80
    International Council on Metals and the Environment. 1997. Proceedings, International Workshop on Risk Assessment of Metals and Their Inorganic Compounds, Angers, France, November 13–15, 1996, p 180.
  • 81
    Forbes VE. 1996. Chemical stress and genetic variability in invertebrate populations. Toxicol Ecotoxicol News 3: 136–141.
  • 82
    Cairns J Jr, Dickson KL, Maki AW. 1979. Estimating the hazard of chemical substances to aquatic life. Hydrobiologia 64: 157–166.
  • 83
    Peterman RM, M'Gonigle M. 1992. Statistical power analysis and the precautionary principle. Mar Pollut Bull 24: 231–234.
  • 84
    Suter GW II. 1993. Ecological Risk Assessment. Lewis, Boca Raton, FL, USA.
  • 85
    Chapman PM. 1996. Presentation and interpretation of sediment quality triad data. Ecotoxicology 5: 327–339.
  • 86
    Menzie C, et al. 1996. A weight-of-evidence approach for evaluating ecological risks: Report of the Massachusetts Weight-of-Evidence Workgroup. Hum Ecol Risk Assess 2: 277–304.
  • 87
    U.S. Environmental Protection Agency. 1997. An SAB Report: Review of the agency's draft ecological risk assessment guidelines. EPA-SAB-EPEC-97–002. Science Advisory Board, Washington, DC.
  • 88
    Risher JF, DeRosa CT. 1997. The precision, uses, and limitations of public health guidance values. Hum Ecol Risk Assess (in press).
  • 89
    U.S. Environmental Protection Agency. 1992. An SAB report: Review of a testing manual for evaluation of dredged material proposed for ocean disposal. EPA-SAB-EPEC-92–014. Science Advisory Board, Washington, DC.
  • 90
    Environment Canada. 1997. Guidance document on the interpretation and application of data for environmental toxicology. Report EPS 1/RM/34. Ottawa, ON, Canada.
  • 91
    Abdel-Rahman MS, ed. 1995. EPA Uncertainty Factor Workshop. Hum Ecol Risk Assess 1: 512–662.
  • 92
    Rees DC, Hattis D. 1994. Developing quantitative strategies for animal to human extrapolation. In Hayes AW, ed, Principles and Methods of Toxicology, 3rd ed. Raven, New York, NY, USA, pp 275–315.
  • 93
    Chapman PM, Caldwell RS, Chapman PF. 1996. A warning: NOECs are inappropriate for regulatory use. Environ Toxicol Chem 15: 77–79.
  • 94
    Dhaliwal BS, Dolan RJ, Batts CW, Kelly JM, Smith RW, Johnson S. 1997. Warning: Replacing NOECs with point estimates may not solve regulatory contradictions. Environ Toxicol Chem 16: 124–125.
  • 95
    Chapman PF, Chapman PM. 1997. Author's reply. Environ Toxicol Chem 16: 125–126.
  • 96
    Cairns J Jr. 1991. Application factors and ecosystem elasticity: The missing connection. Environ Toxicol Chem 10: 1235–1236.
  • 97
    Moore DRJ, Caux P-Y. 1997. Estimating low toxic effects. Environ Toxicol Chem 16: 794–801.
  • 98
    Pederson F, Petersen GI. 1996. Variability of species sensitivity to complex mixtures. Water Sci Technol 33: 109–119.
  • 99
    Jak RG, Maas JL, Scholten MCT. 1996. Evaluation of laboratory derived toxic effect concentrations of a mixture of metals by testing freshwater plankton communities in enclosures. Water Res 30: 1215–1227.
  • 100
    Kimerle RA, Werner AF, Adams WJ. 1984. Aquatic hazard evaluation principles applied to the development of water quality criteria. In Cardwell RD, Purdy R, Bahner RC, eds, Aquatic Toxicology and Hazard Assessment: Seventh Symposium. STP 854. American Society for Testing and Materials, Philadelphia, PA, pp 538–547.
  • 101
    Farland WH, Tuxen LC. 1997. New directions in cancer risk assessment: Accuracy, precision, credibility, and uncertainty. Hum Ecol Risk Assess (in press).
  • 102
    U.S. Environmental Protection Agency. 1997. Evaluation of Superfund ecotox threshold benchmark values for water and sediment. EPA-SAB-EPEC-LTR-97–009. Science Advisory Board, Washington, DC.
  • 103
    International Programme on Chemical Safety. 1994. Assessing human health risks of chemicals: Derivation of guidance values for health-based exposure limits. Environmental Health Criteria 170. Geneva, Switzerland.
  • 104
    Mount DI, Stephan CE. 1969. Chronic toxicity of copper to the fathead minnow (Pimephales promelas) in soft water. J Fish Res Board Can 26: 2449–2457.
  • 105
    Walden CC. 1976. The toxicity of pulp and paper mill effluents and corresponding measurement procedures. Water Res 10: 639–664.
  • 106
    U.S. National Research Council. 1994. Science and Judgment in Risk Assessment. National Academy Press, Washington, DC.
  • 107
    Van Straalen NM, Dennemann CAJ. 1989. Ecotoxicological evaluation of soil quality criteria. Ecotoxicol Environ Saf 18: 241–251.