In the past, finding an unwanted chemical in a food often led to prohibition of that chemical in food; no amount of the substance was considered allowable. The mere presence of the chemical was considered unsafe and adverse to health, and zero allowance of unwanted chemicals was the prevailing wisdom at the time. In actuality, however, "zero" was simply the limit of detection (LOD) for the chemical, perhaps a part per million. The zero-allowance manner of managing the detection of low levels of chemicals in food is therefore dynamic, changing as the sensitivity of analytical techniques increases. More specifically, a chemical once thought not to be present because it was not detectable may, if present, be recognized once the sensitivity of analytical techniques reaches detection capability at parts per billion or parts per trillion. Analytical sensitivity has improved over time from parts per thousand and parts per million in the 1960s to parts per billion and parts per trillion in the 1980s and parts per quadrillion in the 2000s.
Development of the threshold of toxicological concern
Within the past several decades, scientists have developed sophisticated models to address very low-level exposures, the risks of which are so low as to be negligible. For example, a decision tree exists to determine presumptive toxicity and aid priority setting for analytical testing of food ingredients. Other sophisticated decision tree models describe a threshold of toxicological concern (TTC) as "a principle which refers to the possibility of establishing a human exposure threshold value for all chemicals, below which there is no appreciable risk to human health" (Kroes and Kozianowski 2002). The TTC thus functions as a de minimis, or negligible-risk, standard.
Food safety professionals have long realized the need to manage trace levels of compounds. They have done so with an evolving perspective as advances in analytical methods have allowed detection of increasingly lower trace levels of compounds, thereby finding a greater number of compounds. For example, Frawley (1967) determined that, except for certain substances (such as Clostridium botulinum toxins), no single organic chemical had advanced from the laboratory, through development, and into general commercial use with a toxicity to experimental animals at a dietary level of 40 ppm or less. He proposed that substances migrating from food packaging materials at a level of 0.1 mg/kg of human diet (which included a hundredfold margin of safety), equivalent to an intake of 150 μg/person/d, could be safely consumed. The basis for Frawley's proposal was analysis of 2-year chronic toxicity studies of 222 chemicals and categorization by the dose at which no toxicological effects were observed. Munro (1990) developed a human exposure threshold value of up to 1000 ppt for substances migrating from food contact materials (1.5 to 3 μg/person/d, depending on assumptions regarding food intake) on the basis of a database of 350 substances. The FDA threshold of regulation (TOR) exemption for noncarcinogenic substances migrating from food contact materials established an acceptable dietary exposure level at or below 1.5 μg/person/d (21 CFR § 170.39). Munro and others (1999) described a procedure for the safety evaluation of flavoring substances by integrating data on metabolism and toxicity of substances within structurally related groups, structure–activity relationships, and daily intake. More recently, Blackburn and others (2005) evaluated and found TTC applicability for ingredients in consumer products, and Müller and others (2006) found similar applicability for impurities in pharmaceuticals.
The TTC provides both a high probability of health protection and practicality. A TTC evaluation can be an efficient screening and prioritizing tool in the decision-making process, particularly when data are incomplete. Using the TTC may lead to a decision that further work and risk mitigation steps are necessary for some chemicals, while for others no further work is needed. There are many key references on the TTC concept (Kroes and others 2004; Barlow 2005). A brief review of a few foundational papers shows the conservative nature of the TTC and how it can be used to prioritize risks from any low-level chemical detection in food.
Munro (1990) summarized the work by Gold and others (1984) and Rulis (1986) by plotting the distribution of potencies for hundreds of carcinogens (Figure 2). On the left side of the chart is the distribution of doses that produce carcinogenicity in 50% of the tested laboratory animals (designated the TD50). The arrow pointing to the right shifts the log normal curve to the right to extrapolate down to 1 × 10−6, or 1 in 1,000,000 risk (note the x-axis is on a negative log scale, so lower doses are to the right and higher doses are to the left). Originally, the FDA's Rulis (1986) selected 0.15 μg/person/d as a level that would give less than a 1 in a million lifetime risk of cancer for any of these carcinogens. In a workshop exploring this relationship further, 1.5 μg/person/d was determined to still be very conservative (Munro 1990). In recent years, many more carcinogens have been added to this database, additional papers have been published, and 1.5 μg/person/d has been affirmed as safe.
Figure 2—Distribution of potencies for various carcinogens. Source: Munro and others 1999. A procedure for the safety evaluation of flavouring substances. Food Chem Toxicol 37(2–3):207–32. Used with permission.
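The 1-in-a-million benchmark can be illustrated with a simple linear, no-threshold extrapolation from the TD50. The sketch below is illustrative only (actual regulatory extrapolation methods differ in detail), and the TD50 value used is hypothetical:

```python
def virtually_safe_dose(td50_mg_per_kg_day, risk=1e-6):
    """Linearly extrapolate from the TD50 (the dose producing tumors in
    50% of test animals) down to the dose associated with a target
    lifetime cancer risk. A simplified linear, no-threshold sketch,
    not the actual regulatory procedure."""
    return td50_mg_per_kg_day * (risk / 0.5)

# Hypothetical potent carcinogen with a TD50 of 1 mg/kg bw/d:
dose = virtually_safe_dose(1.0)   # mg/kg bw/d at 1-in-a-million risk
daily_ug = dose * 60 * 1000       # ug/d for a 60-kg person
```

For this hypothetical TD50, the virtually safe dose works out to about 0.12 μg/person/d, the same order of magnitude as the 0.15 μg/person/d originally selected by Rulis (1986).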
For toxic endpoints other than cancer, Munro and others (1999) evaluated databases for neurotoxicants, immunotoxicants, and developmental and reproductive toxicants and determined that TTCs for these classes of compounds were even higher than those for cancer. It is therefore highly likely that a TTC that protects against cancer risk will protect against all toxic outcomes. Furthermore, if additional information about a substance is available (for example, it is not carcinogenic, or exposure is of brief duration or limited to a small part of the diet), the TTC can safely be set at a level higher than the default (Munro and others 1999). Müller and others (2006) noted that the lifetime TTC of 1.5 μg/d would correspond to a TTC of 120 μg/d for exposures lasting only a month. Similarly, if an exposure was limited to a single, rarely consumed food, intake would be far below the roughly 3000 g/d assumed for the total diet, and the TTC could be set at some level higher than 1.5 μg/d. Such information could be useful to risk decision-makers for exposures that are more limited in scope.
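A duration-adjusted TTC lookup can be sketched as follows. Only the two values cited above are encoded (the 1.5 μg/d lifetime default and the 120 μg/d value for exposures of up to a month, per Müller and others 2006); all intermediate durations conservatively fall back to the lifetime value, since the source does not supply them here:

```python
def ttc_ug_per_day(exposure_months):
    """Return a duration-adjusted TTC in ug/d. Encodes only the two
    published values discussed in the text; intermediate durations
    conservatively default to the lifetime TTC."""
    if exposure_months <= 1:
        return 120.0    # short-term exposure (up to a month)
    return 1.5          # lifetime default
```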
Regulatory agencies (such as the FDA and the EFSA) and other organizations (such as the JECFA) have applied the TTC approach to indirect additives, flavors, pharmaceuticals, and personal and household products. Although route of exposure affects toxic responses, the body's metabolic systems do not distinguish among sources of exposure. The TTC has been proposed to apply to any low-level detection of a chemical in food with some prudent exceptions (Barlow 2005; Felter and others 2009). Exceptions include heavy metals, proteins that may be allergenic, and highly toxic carcinogens.
Another consideration is the difficulty in assessing human cancer risk from individual compounds through animal bioassays conducted at high doses. The difficulty arises because of the complexity of the human diet, a variable mixture of naturally occurring and synthetic chemicals, and potential for interactions between components, some of which could be anticarcinogenic (NRC 1996). It is almost certain that the impact of single dietary components on cancer is the sum of numerous effects of a chemical rather than a single biological effect. Moreover, carcinogens and anticarcinogens in the diet can interact in a variety of ways that are not fully understood (NRC 1996).
Fundamentally, the potential human health risk posed by a chemical substance is a function of its inherent toxicity and exposure including route, dose, and duration. If there is little or no exposure, then the risk is insignificant. Figure 3 depicts the crux of the TTC concept. The grid shows low-level safe exposures having low priorities as green squares. As exposure and toxic potency increase, so do potential health risks, represented by higher priority yellow and red squares. The human exposure benchmark doses defining low, medium, and high exposures are built upon a body of scientific evidence that has established TTC dose levels representing minimal or insignificant human health risk (Felter and others 2009).
In Figure 3, a semiquantitative estimate of human exposure is combined with a qualitative determination of structural activity/toxicological potency to assist risk decision-making and to set priority for action. Breakpoints for low, medium, and high human exposure are not well defined and vary by class of compound, duration of exposure, and other factors. However, an exposure cutoff of 1.5 μg/d or less is protective for carcinogens and probably all other toxicants. For exposures above a TTC level, the priority can range from low to high, depending on evidence that indicates low-to-high potency based on structure–activity information or other toxicological data about the substance (Cheeseman and others 1999; Kroes and others 2004; Barlow 2005). The breakpoint between medium and high exposures has not been defined but could be some multiple of the TTC. Figure 3 uses tenfold the 1.5 μg/d TTC for carcinogens as a conservative assumption. At high intake levels, priority rises to at least medium or high, requiring typical resource-intensive toxicological approaches used for high exposure situations. Helpful in determining appropriate risk mitigation actions when human exposure and toxicity data are incomplete, the grid's 3 priority rankings are as follows:
Green: The green squares indicate low priorities (little or no safety issue), corresponding to low/low, low/medium, or medium/low exposure/toxicity combinations. Typically, one would defer to the toxicity profile when available and to structure–activity relationships (SARs) in situations for which only minimal toxicological data are available. Situations occurring within the green squares allow a recommendation of a low priority concern, and little follow-up work is indicated.
Yellow: The yellow squares indicate medium priorities, which correspond to high/low, medium/medium, or low/high exposure/toxicity combinations. Typically, one would defer to the toxicity profile when available and to the SARs in instances of only minimal toxicological data. Because the exposure level and/or toxicity profile is higher in this situation, it is critical that the decision maker have confidence in making sound recommendations to those involved in risk management. Situations involving the yellow squares will often constitute the most difficult decisions that a food safety professional will have to make, based on limited analytical and toxicological data.
Medium priority concerns indicate the need for additional information before a risk assessment can occur. Information about potential exposure and toxicological effects is necessary to determine the scope and source of the issue. Toxicological analysis would ensure a full understanding of the existing data set. Consideration of the source of the exposure (ingredients, commodities, or food products) is essential.
Red: The red squares indicate high priorities due to high/high, high/medium, or medium/high exposure/toxicity combinations. Typically, for high-priority issues, only interim risk management decisions can be made until traditional, full-scale information is available on toxicity and exposure. Because the exposure level and/or toxicity profile is higher in this situation, the decision maker must have complete confidence in making recommendations to senior officials.
Red-square situations often constitute the most clear-cut decisions that a food safety professional has to make, based on limited analytical and toxicological data. High priority concerns receive the highest level of immediate attention and risk management response. As with green and yellow situations, one should consider the source of the exposure (ingredient, commodity, or whole food).
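The 3 priority rankings above can be captured in a small function. This is a sketch of the grid's logic as enumerated in the text, with exposure and toxicity each scored low, medium, or high:

```python
LEVELS = {"low": 0, "medium": 1, "high": 2}

def priority(exposure, toxicity):
    """Map an exposure/toxicity combination to a grid priority color.
    Green: low/low, low/medium, medium/low.
    Yellow: high/low, medium/medium, low/high.
    Red: medium/high, high/medium, high/high."""
    score = LEVELS[exposure] + LEVELS[toxicity]
    if score <= 1:
        return "green"
    if score == 2:
        return "yellow"
    return "red"
```

For example, `priority("medium", "medium")` returns `"yellow"`, flagging the combination for further data gathering rather than immediate escalation.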
Consumer dietary exposure assessment
When a substance is suspected of being potentially hazardous, a key issue that must be addressed before using TTC or any other decision tool is determining its likely exposure. In the case of substances found in foods, whether their presence is intended or not, the issue can be broken into 2 elements. The first is establishing the level of concentration in various foods, which can be addressed through techniques of chemical analysis. The second element is determining how much of the foods in which the substance is present are consumed. Depending on the known or suspected toxicological parameters of the substance, the issue may be acute exposure, in which case the amount of food consumed per occasion must be determined, or it may be chronic exposure, in which case consumption of the food over a longer period of time must be assessed.
Estimating long-term exposure (for example, exposure during weeks, months, or a lifetime) to substances found in foods is extremely difficult, but a first, highly conservative approach is to estimate exposure during 1 day (24 hours). A worst-case estimate of long-term exposure can then be made on the simple assumption that foods containing the target substance are consumed every day; thus, the weekly exposure is simply 7 times the 1-day exposure and the yearly exposure is 365 times the daily exposure. Since few foods are actually consumed every day, this approach overestimates long-term exposure, often substantially. But the assumption is conservative in that it cannot underestimate long-term exposure, and at least the directionality of the error is known.
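The worst-case scaling described above amounts to simple multiplication; a minimal sketch:

```python
def worst_case_long_term(one_day_exposure_ug):
    """Scale a 1-day exposure estimate (ug) to weekly and yearly worst
    cases by assuming the food is eaten every single day; by
    construction this can only overestimate true long-term exposure."""
    return {"weekly_ug": 7 * one_day_exposure_ug,
            "yearly_ug": 365 * one_day_exposure_ug}

# A hypothetical 2 ug exposure on the surveyed day:
estimate = worst_case_long_term(2.0)
```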
In the United States, estimates of 1-day consumption of all foods in the diet are available through the Natl. Health and Nutrition Examination Survey (NHANES), which is conducted continuously with face-to-face interviews of representative samples of the U.S. population (CDC-NCHS 2008). One element of this survey is a 24-hour dietary recall, in which respondents work with a trained interviewer to recall all of the foods and beverages that they consumed on the previous day, midnight to midnight, either at home or away from home. Additionally, the respondent is asked to estimate the quantity of each food or drink consumed on each occasion during the day, and various memory aids are employed to assist the respondent in making accurate estimates. Despite all efforts, there are numerous sources of potential error, from lack of knowledge of what some foods were as well as memory lapses to incorrect estimations of portion sizes and a desire to make one's diet appear healthier than it is. Nevertheless, 24-hour recalls are believed to be fundamentally valid and, other than direct covert observation, are regarded as the gold standard for estimating 1-day food consumption.
Obtaining precise estimates of 1-day levels of consumption of specific foods requires accessing the database of food-consumption data from the survey. The direct use of the survey is a time-consuming process requiring knowledge of the survey design and statistics to ensure its correct application. These resources are often not available particularly when an immediate estimate is needed. In such cases, it will often be preferable to have a method that can develop an estimate with less precision but one that is certain or almost certain to be conservative—that is, to err in the direction of overestimating consumption of the food and therefore overestimating exposure to the substance in the food.
When a substance of concern is detected in a food, it is useful to consider the source of the substance and the range of foods in which it is likely to be found. In some cases, the substance may have entered the food during or after manufacture and thus may be present only in a single food product. In this case, what is needed is an estimate of the consumption of the single food product as the sole source of the substance of concern. In other cases, the substance of concern may be present in 1 ingredient of a food product and thus may be present in other foods that contain that same ingredient. Thus, if an undesirable substance is found to be present in a commodity such as wheat or oats, it may have entered a large variety of food products—potentially any food product containing wheat or oats as an ingredient. These 2 situations demand different approaches to estimating exposure—1 method for substances of concern that are believed to be present in 1 or a small number of foods and another method for substances believed to be present in commodity ingredients that are components of a large number of different foods. In the former, what is needed is an estimate of consumption of the food while in the latter an estimate of the total consumption of the commodity from all foods containing it is needed.
Table 2 presents consumption estimates for 10 broad NHANES food categories for high consumers (90th percentile) on a per capita and per user basis. The estimates for each category include the identified component as well as other ingredients in the food item from the NHANES 2003–2004 survey. It is also possible to estimate consumption for categories that are defined by the USDA and the Natl. Center for Health Statistics (NCHS). These categories are broad and include the weight of the entire food, including ingredients that are not part of the designated category. For example, the entire weight of a frozen plate meal that is mainly beef would be included in the meat category. Therefore, these estimates overestimate the 90th percentile intake of the ingredient but may be useful for estimating daily consumption of a food product that contains multiple ingredients. One feature of current food labeling in the United States is helpful in this circumstance. Under the Nutrition Labeling and Education Act of 1990 (NLEA), all foods sold directly to consumers are required to have a nutrition facts panel on the label, which includes the serving size of the food. Although in the early days of nutrition labeling the product marketer was responsible for determining the declared serving size, this changed with the NLEA, which required the FDA to establish serving sizes for use by food marketers. In response, the FDA developed the reference amounts customarily consumed (RACC) to serve as the basis for declared serving sizes of all food products (see 21 CFR § 101.12).
Table 2—Food consumption by USDA food categories (as consumed).

| Food group | 90th percentile per capita (g/kg bw/d) | 90th percentile per user (g/kg bw/d) |
|---|---|---|
| Milk and milk products | 18 | 21 |
| Meat, including beef, pork, turkey, fish, seafood, and mixtures containing these products | 7 | 7 |
| Eggs | 2 | 4 |
| Dry beans, peas, legumes | 1 | 4 |
| Grains, including mixtures | 13 | 13 |
| Fruits, including fruit juices | 10 | 16 |
| Vegetables | 7 | 7 |
| Fats and oils | 1 | 1 |
| Sweets and sugar | 1 | 2 |

Note: figures rounded to nearest g/kg bw/d.
The FDA determined the RACC by examining food-consumption data from the 1987–1988 USDA Nationwide Food Consumption Survey (NFCS) and the 1989–1990 and 1990–1991 Continuing Surveys of Food Intakes by Individuals (CSFII), the most up-to-date food-consumption survey data available at that time. For each category of food, the FDA calculated the mean and median amount consumed per eating occasion of the food. The agency then used these figures to develop reasonable estimates of the amounts of foods customarily consumed. The FDA then rounded to numbers that could be easily expressed in common household units. For example, most beverages received an RACC of 240 mL, corresponding to 8 fluid ounces.
Since RACCs, and hence declared serving sizes, reflect amounts customarily consumed, a reasonable estimate of the average amount of a food that Americans consume can be taken directly from the declared serving size. For example, it is reasonable to assume that the average amount consumed of a beverage labeled as having a serving size of 240 mL is indeed 240 mL. However, this figure represents the amount consumed at 1 consumption occasion. It is quite possible for an individual to have numerous consumption occasions during a day: an individual may have a beverage with breakfast, 1 with lunch, 1 with dinner, and perhaps 1 or more at other times of the day.
For this reason, it is useful to develop a multiplier to estimate 1-day consumption from serving sizes, which represent average 1-occasion amounts. Data on the 24-hour consumption of a number of categories of foods are available from the USDA's Agricultural Research Service (USDA-ARS 1997). These data provide mean daily intakes of a large number of categories of foods consumed by the population aged 2 years and older. The data, however, are provided on a per capita basis and thus offer average amounts consumed by the entire population, encompassing both people who did and did not consume the food. Thus, if 25% of the population consumed a food, and these consumers ate an average of 60 g each, the per capita average intake would be 15 g, representing the 25% of the people who consumed 60 g and the 75% who consumed 0 g. Fortunately, the USDA data also indicate the proportion of the population that reported consumption of each food. Thus, with a per capita average intake of 15 g of a food and the knowledge that 25% of the population consumed the food and 75% did not, the per user average intake is derived by dividing 15 g by 0.25, obtaining 60 g. Once the per-user average consumption of a food is available, that figure can be divided by the RACC to determine the multiplier needed to estimate daily consumption based on RACC (Table 3).
Table 3—Daily food intakes expressed in RACC.

| Food category | Mean per capita intake (g/d) | Proportion consuming | Mean intake by users (g/d) | RACC (g) | Intake in RACC |
|---|---|---|---|---|---|
| Quick breads and pancakes | 19 | 0.227 | 84 | 110 | 0.76 |
| Table fats | 4 | 0.304 | 13 | 15 | 0.88 |
| Salad dressings | 8 | 0.293 | 27 | 30 | 0.91 |
| Candy | 7 | 0.154 | 45 | 40 | 1.14 |
| Citrus juice | 60 | 0.204 | 294 | 240 | 1.23 |
| Fried potatoes | 24 | 0.270 | 89 | 70 | 1.27 |
| Chicken | 21 | 0.192 | 109 | 85 | 1.29 |
| Noncitrus juice and nectar | 27 | 0.085 | 318 | 240 | 1.32 |
| Frankfurters, sausages, luncheon meats | 21 | 0.286 | 73 | 55 | 1.34 |
| Beef | 24 | 0.209 | 115 | 85 | 1.35 |
| Ready-to-eat cereal | 16 | 0.285 | 56 | 40 | 1.40 |
| Crackers, popcorn, pretzels, corn chips | 12 | 0.278 | 43 | 30 | 1.44 |
| Bread and rolls | 50 | 0.663 | 75 | 50 | 1.51 |
| Eggs | 18 | 0.191 | 94 | 50 | 1.88 |
| Mixtures mainly meat/poultry/fish | 99 | 0.362 | 273 | 140 | 1.95 |
| Fruit drinks and other flavored beverages | 95 | 0.197 | 482 | 240 | 2.01 |
| Milk desserts | 27 | 0.174 | 155 | 70 | 2.22 |
| Cheese | 16 | 0.226 | 71 | 30 | 2.36 |
| Coffee | 259 | 0.395 | 656 | 240 | 2.73 |
| Carbonated soft drinks | 332 | 0.504 | 659 | 240 | 2.74 |
According to Table 3, the mean per capita daily intake of coffee is 259 g; 39.5% of respondents reported consumption of coffee on the survey day. Thus, those 39.5% of respondents consumed a mean of 656 g of coffee. The RACC for coffee is approximately 240 g (actually 240 mL); thus, the 656 g of coffee consumed by users is equivalent to 2.73 RACC. For most foods, mean daily consumption is less than 2 RACC; only frequently consumed beverages such as coffee and soft drinks exceed 2.5 RACC. Thus, a quite conservative estimate of the mean daily consumption of a food can be derived by multiplying the RACC by 2.5.
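The per capita to per user conversion and the RACC multiplier can be reproduced directly from the coffee figures given above:

```python
def per_user_intake(per_capita_g, fraction_consuming):
    """Convert a per capita mean daily intake to a per-user mean by
    dividing by the fraction of the population that consumed the food."""
    return per_capita_g / fraction_consuming

# Coffee: 259 g/d per capita, 39.5% consuming, RACC of 240 g
per_user = per_user_intake(259, 0.395)   # about 656 g/d
racc_multiple = per_user / 240           # about 2.73 RACC
```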
In risk assessment, it is common to evaluate exposure, not at the mean but at the level reached by heavy consumers of foods containing the target substance. While different regulatory authorities employ different percentiles (for example, 90th, 95th, 97.5th, and 99th), the profile most often used by the FDA is the 90th percentile of intake. The 90th percentile of intake is usually close to double the mean intake (FDA 2006). A conservative estimate of the 90th percentile of intake of a food can thus be derived by reading the serving size in the nutrition facts panel and multiplying by 5.
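This label-based screening estimate, and its comparison against the TTC, can be sketched as follows; the substance concentration used is a hypothetical value for illustration:

```python
def p90_daily_intake(serving_size_g):
    """Conservative 90th percentile daily intake from a declared serving
    size: 2.5 servings/d (an upper bound on mean daily consumption
    across food categories) times 2 (90th percentile of intake taken as
    roughly double the mean)."""
    return serving_size_g * 2.5 * 2

def screening_exposure_ug(serving_size_g, conc_ug_per_g):
    """Screening-level daily exposure to a substance present at a known
    concentration (ug/g) in the food."""
    return p90_daily_intake(serving_size_g) * conc_ug_per_g

# Beverage with a 240-mL (about 240 g) serving containing a
# hypothetical 0.001 ug/g of a substance:
exposure = screening_exposure_ug(240, 0.001)   # about 1.2 ug/d, under the 1.5 ug/d TTC
```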
If the substance of interest is present in a commodity such as wheat, oats, or tomatoes, it is futile to attempt to estimate the consumption of all foods that contain the commodity. Rather, it is necessary to estimate consumption of the commodity itself from all sources. Estimates for a crop group overestimate intake of individual commodities, since each crop group contains many foods. Fortunately, although hundreds of commodities exist, only a small number are widely used; these constitute a substantial proportion of finished foods and consequently have more than small levels of intake. The EPA has developed a database of intakes based on commodities (the Food Commodity Ingredient Database [FCID]), which is publicly available and provides a rapid tool for estimating intakes on a commodity basis. Table 4 shows consumption of commodities by DEEM™ FCID crop group categories.
Table 4—Consumption of commodities by DEEM™ FCID crop group categories.

| Commodity group | 90th percentile per capita (g/kg bw/d) | 90th percentile per user (g/kg bw/d) |
|---|---|---|
| Meat (beef, pork, sheep, fish, shellfish, and poultry) | 5 | 5 |
| Root, tuber, and bulb vegetables | 4 | 4 |
| Leafy greens and brassica | 2 | 3 |
| Fruiting and cucurbit vegetables | 3 | 4 |
| Legumes | 2 | 2 |
| Cereal grains | 8 | 8 |
Inspection of the daily intakes of commodities used as food ingredients reveals that only 43 commodities have 90th percentile daily intakes exceeding about 0.1 g/kg bw/d (Table 5). The highest intake of any single commodity is that of wheat, with a 90th percentile consumption of about 3.4 g/kg bw/d. This level of consumption is so high relative to other commodities that wheat products constitute their own group, A. Wheat is followed by the 6 commodities in group B—apple products, beef products, corn products, oranges and juice, potato products, and tomato products—with 90th percentile daily intakes of about 2 g/kg bw/d each. (To estimate total daily exposure to a contaminant, multiply the commodity intake by the contaminant level in that commodity.)
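A commodity-based exposure estimate, following the multiplication noted above, might look like this; the contaminant concentration and body weight are assumptions for illustration:

```python
def commodity_exposure_ug(intake_g_per_kg_bw, conc_ug_per_g, body_weight_kg=60):
    """Daily exposure (ug/d) from a contaminated commodity: 90th
    percentile commodity intake (g/kg bw/d) times body weight (kg)
    times contaminant concentration (ug/g)."""
    return intake_g_per_kg_bw * body_weight_kg * conc_ug_per_g

# Wheat at its 90th percentile intake of about 3.4 g/kg bw/d, with a
# hypothetical contaminant at 0.005 ug/g, for a 60-kg adult:
exposure = commodity_exposure_ug(3.4, 0.005)   # about 1.02 ug/d
```

At this hypothetical concentration the estimated exposure falls just under the 1.5 μg/d TTC; a concentration only modestly higher would exceed it and raise the priority for follow-up.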
Table 5—Ninetieth percentile daily intake of commodities by commodity group.

| Group | Commodity group | 90th percentile exposure (g/kg bw/d) |
|---|---|---|
| A | wheat | 3.4 |
| B | apple, apple juice | 2.00 |
| B | beef | |
| B | corn (field, syrup) | |
| B | orange, orange juice | |
| B | potato, potato chips | |
| B | tomato, sauce/paste/puree/juice | |
| | beet (sugar) | |
| | carrot, carrot juice | |
| | corn (field, meal, flour) | |
| | corn (sweet) | |
| | grape, grape juice, wine grape | |
| | lettuce (head/leaf) | |
| | onion, all | |
| | rice, rice bran, rice flour | |
| | soybean oil/flour/milk | |
| | sugarcane, sugar | |
| | bean (snap, succulent) | |
| | celery, celery juice | |
| | cranberry, cranberry juice | |
| | grapefruit, grapefruit juice | |
| | oat, bran flour/groat/rolled | |
| | peach, peach juice | |
| | peanut, peanut butter | |
| | pear, pear juice | |
| | pepper, bell | |
| | pineapple, pineapple juice | |
| | strawberry, strawberry juice | |
| | watermelon, watermelon juice | |