R. W. R. Crevel, Safety & Environmental Assurance Centre, Unilever, Colworth Science Park, Bedford MK44 1LQ, UK
Thresholds constitute a critical piece of information in assessing the risk from allergenic foods at both the individual and population levels. Knowledge of the minimum dose that can elicit a reaction is of great interest to all food allergy stakeholders. For allergic individuals and health professionals, individual threshold data can inform allergy management. Population thresholds can help both the food industry and regulatory authorities assess the public health risk and design appropriate food safety objectives to guide risk management. Considerable experience has been gained with the double-blind placebo-controlled food challenge (DBPCFC), but only recently has the technique been adapted to provide data on thresholds. Available data thus vary greatly in quality, with relatively few studies providing the best quality individual data, using the low-dose DBPCFC. Such high quality individual data also form the foundation for population thresholds, but these also require, in addition to an adequate sample size, a good characterization of the tested population in relation to the whole allergic population. Determination of thresholds at both an individual level and at a population level is influenced by many factors. This review describes a low-dose challenge protocol developed as part of the European Community-funded Integrated Project Europrevall, and strongly recommends its wider use so that data are generated that can readily increase the power of existing studies.
Epidemiological studies published in the last 10–15 years indicate that significant proportions (2–4%) of the population of most countries surveyed suffer from IgE-mediated food allergy (1–6). Prevalence among children is higher, of the order of 5–8% (7, 8), and apparently increasing (9). Food allergy has thus evolved from being a problem for the food-allergic individual to one of significant public health importance. This recognition has created a legal and moral responsibility to manage food allergens, now reflected in specific legislation in many countries and regions.
Allergic people can find avoidance of specific foods very difficult. Labelling rules generally cover only deliberately added ingredients in prepacked foods, but supply chain and manufacturing processes are extremely complex (10), resulting in the adventitious presence of small amounts of allergens in many products. This situation presents allergic consumers with a dilemma: they can either accept decreased food choices or take a risk that they are unable to assess. Similarly, health professionals cannot advise their allergic patients about whether foods with precautionary labelling are safe for them. The food industry both wishes to, and is expected to, provide safe products, but is unsure what needs to be done, and to what extent, to achieve this with respect to allergens. Likewise, public health authorities cannot formulate specific objectives, as they lack the information on which to base them. Reliable knowledge of thresholds would thus benefit all stakeholders by providing critical information for the management of food allergy by patients and health professionals, and management of allergen risks by the food industry.
Much data on thresholds still originate from diagnostic studies, conducted using a wide variety of protocols and food matrices. These studies have indicated wide variability in the allergic population. However, it is difficult to establish how well the patients tested represent the overall allergic population, and thus to interpret the findings in a population context. Much of the publicly available data are thus still poorly suited to risk assessment, leading recent analyses to conclude that thresholds (of elicitation) for food allergens exist (11), but cannot be estimated with adequate certainty (12). Although information is now being obtained using harmonized low-dose challenge protocols (13, 14), it remains limited to a few allergenic foods.
The Integrated Project Europrevall aims to develop tools for improved allergen risk management in Europe, as well as generate the information that those tools require. The present article discusses the relevance of thresholds to different stakeholders and how they can be used to help define public health objectives. It summarizes current knowledge about thresholds of elicitation and how they can be used in risk assessment, and considers where gaps exist and how they should be addressed. This review also describes the Europrevall low-dose challenge protocol, which aims to ensure that the data generated are fit for purpose and recommends it for use in similar studies to facilitate their use for risk assessment.
The Concise Oxford English Dictionary (9th edition) defines threshold (Physiology) as ‘a limit below which a stimulus causes no reaction’. Kroes et al. (15) define threshold in toxicology as a dose at or below which a response is not seen in an experimental setting. They also highlight that economic and practical limitations on experimental design mean that threshold doses cannot be established in absolute terms even in conventional toxicology.
Individual experimental thresholds in a study lie between the No Observed Adverse Effect Level (NOAEL), the highest dose observed not to produce any adverse effect, and the Lowest Observed Adverse Effect Level (LOAEL), the lowest dose observed to produce an adverse effect. The very wide range of doses over which allergic people respond, together with the limitations inherent in studies in human beings, makes the prospect of obtaining absolute experimental thresholds for food allergens in human populations remote.
Allergic responses, in common with other immune responses, consist of two phases: sensitization and elicitation. Thresholds probably apply to both phases. However, little is known about thresholds of sensitization to food proteins in human beings and in practice, the term ‘threshold’ is only used in relation to the elicitation phase. This article therefore only addresses thresholds of elicitation and furthermore limits itself to IgE-mediated reactions, which are those that can produce the most acutely life-threatening manifestations.
Thresholds exist at both an individual and a population level. Individual thresholds can be estimated experimentally, but this does not hold in practice for population thresholds. Crevel et al. (16) have suggested ‘minimum eliciting dose’ to designate the amount of allergen predicted to produce a reaction in a defined proportion of the allergic population to distinguish it from experimentally determined thresholds. The minimum eliciting dose can be considered as a threshold for a defined proportion of the allergic population. This parameter could be of particular value in using thresholds in public health contexts, as it relates to the concept of ‘protection of the vast majority’, which is the (usually implicit) basis for most food safety regulations. It acknowledges explicitly that it is impossible to prove absolute safety or zero risk.
Relevance of thresholds to different stakeholders
In the area of food safety, regulatory authorities are charged with the protection of public health. To discharge this wide-ranging responsibility effectively, they need to be able to allocate resources appropriately. This requires that they should assess the risk from different threats to public health, both in terms of numbers affected and severity.
Threshold data could form the basis of regulatory thresholds, i.e. the levels of allergen below which a regulatory authority would deem the public health impact to be negligible. However, to date, most regulatory authorities have focussed on ingredient labelling irrespective of the amount in the food. Indeed, important reports from both the EFSA (12) and the US-FDA (17) have questioned whether adequate data currently exist to set thresholds. Thus, while allergic consumers are protected against undeclared allergenic ingredients, they remain at risk from non-ingredient allergenic components. This absence of a regulatory threshold also has other consequences that may reduce the protection of the allergic consumer: allergenic ingredients must be declared even when present in insignificant quantities. For example, an ingredient containing refined peanut oil, with almost undetectable protein, could be a component of another ingredient used in very small amounts, such as a flavour. Yet products containing this ingredient must be labelled as containing peanut, and allergic consumers who eat them could erroneously conclude that their allergy had resolved even though the amounts are too small to trigger a reaction.
In the European Union, the Labelling Directive (2000/13/EC) (18), as amended by Directive 2003/89/EC (19), governs allergen labelling. The Directive identifies 11 foods or food groups and sulphur dioxide (listed in Annex IIIa) that are found in a wide variety of processed foods and that are known to trigger allergic reactions. These foods or their derivatives must be declared on the label whenever present as ingredients in a prepackaged product, including previously exempted alcoholic beverages. Subsequent legislation (20) added lupin and molluscs to Annex IIIa, while Directive 2005/26/EC (21) defined derived ingredients provisionally exempted until 25 November 2007 following evaluation of dossiers by the EFSA. At the time of writing, a new Directive is being drafted to specify ingredients that have obtained permanent exemptions.
The USA also recently implemented new legislation to improve the protection for allergic consumers, with the Food Allergen Labeling and Consumer Protection Act (FALCPA) (22) having come into force on 1 January 2006. This Act contains provisions which differ in detail rather than principle from those of EU Directive 2003/89/EC. The Act mandates a shorter list of allergenic food groups, but requires indication of the species on the label in the case of fish, crustacea and tree nuts. It also specifically exempts highly refined oils from allergen labelling. Subsequent guidance on tree nuts describes a much longer list than the EU’s Annex IIIa. FALCPA provides for a process to obtain exemption from labelling, but so far no derived ingredients have been exempted. As with the Directive, FALCPA (2004) requires labelling of allergenic ingredients irrespective of the amount ultimately present in the product. Establishment of thresholds would clearly facilitate the exemption process by providing a benchmark for relevant products to meet.
Both Directive 2003/89/EC and FALCPA (2004), as well as legislation in other countries such as Australia and Japan, undoubtedly improve the labelling of allergenic foods. However, despite better labelling of allergenic ingredients, the issue of adventitious allergen originating through, for instance, cross-contact during manufacturing is not addressed, even though substantial incorporation of undeclared allergenic constituents can occur (11). A practical way to deal with unintentional allergenic ‘cross-contact’ could be the adoption of an appropriate upper limit for non-ingredient allergenic food components. For example, Switzerland requires the declaration of specified allergenic constituents whenever present in concentrations >0.1% (1000 mg/kg), whether as ingredients or otherwise (23). However, a threshold of 0.1% might not be considered sufficiently protective of public health, as <1 mg of peanut protein has been shown to elicit adverse reactions in allergic subjects (reviewed in 11, 17). Obviously, an upper limit for non-ingredient allergenic food components also needs to consider the NOAEL reported for each of the important allergenic foods.
Regulatory upper limits for adventitious allergens are only meaningful if sensitive, specific and robust analytical tests exist to verify compliance. Several immunochemical, molecular biology-based and proteomic tests for the identification and quantification of allergenic food species have been developed within the past decade (24, 25). Of these, the enzyme-linked immunosorbent assay (ELISA) and the polymerase chain reaction are the most developed methods, with limits of detection between 1 and 50 mg/kg of the allergenic food, depending on the technique and the specific food. Quantitative ELISA tests are generally preferred because of their ability to detect specific proteins, which include the allergenic component, and their relative ease of use. Several commercial ELISAs are available for allergenic foods. With a limit of detection of <10 mg/kg of the allergenic food and <1 mg/kg of the protein fraction for most allergenic foods, a level of 10–100 mg/kg of the allergenic food can be reliably verified. An upper limit for allergenic non-ingredient food components would need to consider both the needs of allergic consumers for safer foods and analytical traceability. On this basis, the ‘food allergy’ working group of the German Society for Allergology and Clinical Immunology and of the Association of German Allergologists proposed upper limits of 10–100 mg/kg of the allergenic food, or 1–10 mg/kg of its protein fraction, depending on its allergenicity, that would protect most allergic consumers from severe allergic reactions. The lower values would apply to highly allergenic foods, such as peanut (26). Others have proposed a similar approach. Based on the clinical profile of their patients, Morisset et al. (27) suggested that adventitious food allergen in industrial food manufacture should not exceed 5 mg/kg of protein to protect 95% of the allergic population.
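The interplay described above between a proposed upper limit and assay sensitivity can be sketched as a simple verifiability check. All values here are illustrative assumptions (e.g. a peanut protein fraction of roughly 25%), not figures drawn from the cited proposals:

```python
# Illustrative sketch: a regulatory limit for adventitious allergen is only
# enforceable if the assay's limit of detection (LOD) lies below that limit.

def protein_limit_mg_per_kg(food_limit_mg_per_kg, protein_fraction):
    """Convert a limit expressed per whole allergenic food into the
    equivalent limit for its protein fraction."""
    return food_limit_mg_per_kg * protein_fraction

def verifiable(limit_mg_per_kg, lod_mg_per_kg):
    """A limit can be verified analytically only if the LOD is below it."""
    return lod_mg_per_kg < limit_mg_per_kg

# Assumed example: a 10 mg/kg (food basis) limit for peanut, taken here as
# ~25% protein, checked against an ELISA LOD of 1 mg/kg on a protein basis.
limit_protein = protein_limit_mg_per_kg(10, 0.25)   # 2.5 mg protein/kg
print(verifiable(limit_protein, 1.0))               # True: LOD below limit
```

The same check applied to a much stricter hypothetical limit (say, 0.5 mg protein/kg) would fail against that LOD, which is the practical constraint the text describes.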
It may also be appropriate to consider serving size rather than only concentration of the allergen in a product. For instance 100 mg/kg adventitious milk protein would have different significance in mustard (serving size 5–10 g) than in a can of vegetable soup (serving size 250 g).
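The arithmetic behind this serving-size consideration is straightforward; the sketch below simply restates the example figures from the text (100 mg/kg adventitious milk protein in a 10 g versus a 250 g serving):

```python
# Per-serving exposure dose = concentration (mg/kg) x serving size (kg).

def exposure_mg(concentration_mg_per_kg, serving_g):
    """Milligrams of adventitious protein ingested in one serving."""
    return concentration_mg_per_kg * serving_g / 1000.0

print(exposure_mg(100, 10))    # mustard, 10 g serving  -> 1.0 mg
print(exposure_mg(100, 250))   # vegetable soup, 250 g  -> 25.0 mg
```

A 25-fold difference in dose at identical concentration is why a limit framed per serving, rather than per kilogram of product, may better reflect actual risk.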
An upper limit that takes into account allergological, analytical, and industrial requirements can serve as a valuable benchmark, in the development of which reliable information about thresholds would clearly be a critical factor. Such a benchmark would be extremely valuable as a basis for compliance and enforcement, but would also provide a means for manufacturers to monitor the effectiveness of food-allergen management strategies. It would also provide a sound scientific basis for decisions on precautionary (‘may contain’) labelling, helping to minimize its proliferation and restore its value as a risk management tool. In setting the benchmark, regulatory authorities will need to steer a careful course to define a level that will minimize risk to the allergic consumer. While the hazards of too high a limit are obvious, too low a threshold would again result in unnecessary labelling of allergenic traces that cannot be avoided in food manufacture but would probably be irrelevant for the majority of food-allergic consumers.
Food industry perspective
Protecting the allergic consumer is a responsibility shared by all stakeholders, among which the food industry has a critical role. The food industry’s responsibility is to provide safe foods for all intended consumers. Food manufacturers can help individuals who know that they suffer from a food allergy by informing them when the specific allergen is present in a product (labelling), or by ensuring that it is not present at a level that will cause them harm. The simplest way to protect allergic consumers would be to ensure that the allergenic ingredients to which they are reactive are either declared on the product label in all circumstances or are totally excluded. This hazard-based approach is broadly appropriate to ingredients that are deliberately added, usually in significant amounts, and indeed has been generally adopted in legislation regulating allergens. However, complete absence of specific allergenic constituents in foods where they are not ingredients is usually very difficult to achieve because of manufacturing practices. For instance, few production lines are used to manufacture a single product. Labelling irrespective of the amount of adventitious allergen present therefore implies extensive precautionary labelling with no regard to the actual risk. The alternative approach requires food manufacturers to make a judgement about what residual level of a particular allergenic constituent can be considered insignificant, in other words to assess the risk it poses.
Risk can be defined as the probability that the hazard becomes manifest, and is often expressed as a function of the intrinsic hazard and exposure to that hazard. This definition is sometimes expanded to include severity of the resulting adverse effect (28). Once a risk management problem has been formulated, risk assessment proceeds through hazard identification, followed by hazard characterization, exposure assessment and risk characterization to determine the scope of the overall risk to the population (29–31). In the context of allergenic foods, the hazard is already defined as their ability to provoke an allergic reaction. Beyond this, risk assessors require information about the characteristics of the hazard, which in this context can mean the response characteristics of the population at risk (distribution of minimum eliciting doses – thresholds, dose–response), the size of the population at risk, and the extent of exposure. Ideally, risk assessors could calculate the number of reactions that would occur for any given level of residual allergen in a food product if people allergic to that food consumed it. Knowledge of thresholds and their distribution in the allergic population thus constitutes an essential component of allergen risk assessment. It permits manufacturers to make informed decisions about the level of risk they are prepared to accept. This knowledge allows risk management measures instituted to reduce or avoid cross-contact to be evaluated, so that the extent to which they reduce risk can be compared with the resources that need to be deployed to implement them. It also enables risk management objectives to be set and monitored, and could also help in communicating about allergen risks with other stakeholders.
Recently, Crevel et al. (16) proposed an approach based on statistical modelling of dose-distribution data from challenge studies to define the expected number of reactions from exposure to very small amounts of residual allergenic protein in a product, and to use this as a basis for risk assessment. This approach, building on the work of Bindslev-Jensen et al. (32), aligns with the preference for a risk assessment approach to establishing thresholds expressed in a recent report by the US FDA-CFSAN (17). This preference was based on the greater robustness of the approach, as well as its transparency. Recent publications have illustrated how it can be combined with data on the distribution of residual allergen in products to better estimate the risk of reactions (33, 34) (see also Fig. 1).
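As a rough illustration of this kind of statistical modelling, one might fit a log-normal distribution to individual minimum eliciting doses and read off the dose predicted to elicit reactions in a defined proportion of the tested population (an ED05). The doses below are hypothetical, and the log-normal fit is one common assumption rather than the specific model of the studies cited:

```python
import math
from statistics import NormalDist

# Hypothetical individual minimum eliciting doses (mg protein) from a
# low-dose challenge study; illustrative values only.
doses_mg = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0, 100.0, 300.0, 1000.0]

# Fit a log-normal distribution: mean and sample SD of the log-doses.
logs = [math.log(d) for d in doses_mg]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))

# ED05: the 5th percentile of the fitted distribution, i.e. the dose
# predicted to elicit a reaction in 5% of this (hypothetical) population.
ed05 = math.exp(NormalDist(mu, sigma).inv_cdf(0.05))

# Predicted fraction of the allergic population reacting to a given
# exposure, e.g. 1 mg of residual protein in a serving.
frac_at_1mg = NormalDist(mu, sigma).cdf(math.log(1.0))

print(f"ED05 ~ {ed05:.3g} mg; fraction reacting at 1 mg ~ {frac_at_1mg:.2f}")
```

Combining such a fitted dose distribution with data on residual allergen levels and intake patterns is what allows the expected number of reactions, rather than a single pass/fail threshold, to be estimated.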
Allergic consumers’ (and clinicians’) perspective
In contrast to the public health dimension of allergen risk management discussed in the preceding sections, thresholds also have an individual dimension for allergic patients and clinicians caring for them. At a population level, thresholds can be considered as broadly stable, with transient decreases in some individuals being balanced by increases in others. Statistical techniques can be used to address this and other elements that introduce variability. It is more difficult to use information about individual thresholds, derived from controlled challenge studies, for individual management.
Current advice to allergic patients, or their parents, if they are children, is always to avoid the offending food and products which contain or may contain it, implying zero tolerance and an absence of thresholds. The implication is also that failure to comply will result in an allergic reaction or even death. Clearly, such an approach can place a heavy burden on the patient and family in terms of anxiety and reduction in quality of life. This burden would be eased if information about individual patients’ thresholds could inform advice on how to manage their condition.
Clinical studies and other evidence attest to the existence of thresholds, while showing significant differences between individuals and between the allergenic potency of different foods. Of potential key importance to patients is how this knowledge can help them to eat more safely and minimize the effect of the allergy on their lives. Many patients already depart from strict avoidance, assessing the risk of their choices, for instance, when deciding to buy products carrying a precautionary label (35). Applied knowledge of their thresholds could help them validate their choices and thereby reduce the uncertainty and confusion over their sensitivity and reactivity, which is one of the most difficult features of food allergy with which they have to cope.
Successful management of food allergy is a collaborative effort between patient, physician and the food industry. In managing food allergy, the physician largely relies on interpretation of the patient’s history (anamnesis) to try to identify a pattern of reactivity. This includes consideration of the characteristics and circumstances of the reaction(s) experienced, the type and amount of allergen involved, any concurrent allergies (e.g. asthma, rhinitis) and the general state of health. The patient (or their parent) also relies on the allergen information provided by the manufacturer in the management of their food allergy. Providing threshold information while explaining its limitations would enhance this partnership to the benefit of all parties. This would undoubtedly be difficult to achieve, but would prove very worthwhile (36, 37).
To conclude, threshold data are highly relevant to the management of food allergens and food allergy. The distribution of thresholds among allergic individuals constitutes a key element in characterizing the risk from food allergens at a population level, and is therefore also indispensable for risk managers both in the regulatory sphere and in industry. Individual thresholds can help allergic patients and their physicians manage their allergy better but these data need to be evaluated in the context of their overall history. The next section reviews the type of data that are currently available on thresholds against the requirements of the different stakeholders.
Current data on thresholds: availability, origin, nature and value for risk assessment
What data are available on thresholds?
Case reports and series show that exposure to small quantities of an offending food can sometimes elicit a severe allergic reaction in a sensitized individual (38, 39). However, these studies provide little quantitative information.
Diagnostic double-blind, placebo-controlled food challenges (DBPCFC), in use since the 1970s (40, 41), have generated more quantitative information on thresholds of reactivity. However, the design of these studies resulted in a high proportion of first-dose reactors, although a recent review of such studies revealed that a majority of food-allergic individuals tested had minimum eliciting doses above 500 mg of the offending food (42). Taylor et al. (11) thoroughly analysed data produced up to the late 1990s and identified that several hundred patients had been challenged at lower doses with cows’ milk (n = 598), egg (n = 782) and peanuts (n = 663), as well as smaller numbers with other allergenic foods. The authors concluded, however, that ‘because these data were often obtained by means of different protocols, the estimation of a threshold dose was very difficult’. It should be noted again that studies designed specifically to establish low-dose reactivity did not appear until the late 1990s (43).
The most reliable data on threshold levels result from studies performed in peanut-allergic patients. We are aware of nine studies in different groups of peanut-allergic patients that provide dose–response data (13, 43–50). Starting doses varied between 5 μg and 5 mg of peanut protein. However, first-dose reactors were observed in the study that used 1 mg as a first dose (44). The discrete provocation doses were applied at intervals of 10–30 min and in most instances provocations took place over 1 day. In two studies (13, 47), lack of response on the first day led to challenges with higher provocation doses on a second day.
Immunotherapy for IgE-mediated conditions also involves administering increasing doses of the offending allergen, starting with a no-effect dose, and thus provides both no-observed-effect and lowest-observed-effect levels. Trials have been conducted with some food allergens, e.g. peanuts (49–54). Individual minimum eliciting doses were determined for each participant both before and after the trial using DBPCFCs. Amounts observed to provoke reactions prior to desensitization were consistent with those reported in other low-dose diagnostic challenges.
How have threshold data been generated?: Protocols and their evolution
The DBPCFC remains the ‘gold standard’ for confirming food allergy to this day despite improvements in the predictive value of other diagnostic procedures, as acknowledged in guidelines (55). Starting doses, typically in the range of 250–500 mg of the food for the most sensitive subjects, were chosen to produce an objective but mild reaction (11, 49, 56). Studies also differ in critical details, such as challenge procedures, the form of the food used (42, 57), the matrix in which it was presented (13, 44, 58) and the weight accorded to subjective and objective manifestations (14, 54, 58).
In the last 5–10 years, the DBPCFC has been adapted as a tool to generate threshold data, as the value of such data both for clinical management of food allergy, and subsequently for public health purposes became increasingly apparent. Several clinical trials, with doses ranging from micrograms to grams, have been specifically designed to determine minimum eliciting doses for various allergenic foods (Table 1) (13, 14, 27, 44, 45, 59). The most recent ones have adhered fairly closely to a consensus clinical protocol (56), facilitating data analyses within and across the studies.
Table 1. Recent protocols used in low-dose challenge studies with peanuts
Recent years have seen proposals that aim to standardize DBPCFC protocols around the best practice. The position paper of the European Academy of Allergy and Clinical Immunology (55) recognized the value of threshold determination, although it focussed on general guidelines for the safe conduct of DBPCFC studies. A roundtable conference organized by the Food Allergy Research and Resource Program of the University of Nebraska in 2002 formulated a low-dose challenge protocol to establish thresholds (56). Together with the 1999 conference on current knowledge of thresholds (11), it galvanized the clinical and regulatory community into recognizing the potential of low-dose challenges to maximize the information from such procedures for the benefit of not only the individual patient, but also of wider public health.
Factors affecting the outcome of challenge studies and the type of data generated
As noted by Taylor et al. (11), a wide variety of DBPCFC protocols, differing in potentially significant ways have been used. This has influenced both the type of data generated and therefore its value for specific purposes, and the extent to which studies could be compared, even when they nominally used the same outcome measures. The factors in challenge studies that can be controlled fall into three main categories: the challenge procedure itself, the selection of patients and the challenge materials (summarized in Table 2).
Table 2. Factors affecting outcome of challenge studies and type and quality of data generated
All challenge studies: time interval between doses; placebo placement in sequence.
Thresholds for risk assessment studies (in addition to the above): documented reactivity to food; good patient characterization; inclusion of patients with previous severe reactions if safe; most allergenic form of food if known; ‘real food’ blinding matrix; dose and allergenic activity verification in matrix; sensory evaluation of blinding (taste, texture, smell).
Challenge procedure. Conduct of the challenge determines the type of data generated and therefore its suitability for different purposes (55, 56). The main factors influencing the precision of any threshold estimate include starting dose, dose progression, the time interval between doses and the way in which placebo and active doses are randomized. In some studies these have been interspersed (44), but in others, active and placebo doses have been given on different days (60). The health status of participants and the avoidance of medications can also influence the sensitivity of the test. Scoring of reactions and stop criteria, particularly in the case of subjective reactions, will also affect any threshold estimate. Recent recommendations on challenge protocols (55, 56) suggest starting doses of the order of 10 μg of the suspected food, dose progression ranging from doubling to half- or full-log intervals, and a time interval between doses of 15–30 min, largely for practical reasons (Table 3). There is a consensus that, principally on grounds of safety, participants should discontinue medications likely to interfere with the outcome (or otherwise be excluded) and, if suffering from asthma, that their condition should be stable. This makes the allergen encounter during a challenge quite different from what might occur in the community, where the subjects’ health, medication use, etc. may vary considerably (61).
Table 3. Comparison of protocols proposed for low-dose challenges
Patient-related criteria. Inclusion and exclusion criteria for low-dose challenge studies will differ according to the purpose for which the data are being generated. Where individual thresholds are being estimated, the primary concerns are benefit to and safety of the individual. Scientific studies on thresholds aim to generate data and test hypotheses that can be generalized to the relevant population and, in addition to the patient-safety criteria, participant selection must reflect these needs. Participants must therefore be well-characterized in relation to the allergic population, in particular in terms of their reactivity.
A key requirement in threshold studies is that the subjects are demonstrably still allergic to the food being tested. Several studies have demonstrated the development of tolerance in previously allergic subjects, in particular in children with milk and egg allergy (62), and up to 20% of peanut-allergic patients may outgrow their food allergy (63).
Inclusion of participants for risk assessment studies needs to reflect population variability. If this proves impracticable, the people tested should be characterized such that the test group can be mapped onto the overall allergic population, so that generalizable conclusions can be drawn. Population variability encompasses both inter-individual variation within an otherwise homogeneous population and the possible existence of subpopulations with a different distribution of reactivity (e.g. children, people with asthma). Challenge studies show that individual threshold doses can vary by at least 3–4 orders of magnitude, and very likely more. However, statistical analysis of most studies suggests these figures represent the extremes of a continuous distribution, rather than two (or more) populations with distinct characteristics. With a few exceptions, e.g. (64), minimum eliciting doses have not been determined in random samples, but rather in groups of food-allergic patients referred to specialist allergy clinics in tertiary care centres. The challenged population will therefore contain individuals who are more reactive than the general allergic population, although it will often exclude any severe reactors in the referred population. This might additionally affect the results from threshold studies.
The existing literature has not been analysed carefully enough to determine whether sufficient numbers of patients with histories of severe reactions have been included for them to be considered satisfactorily evaluated. Patients with a history of moderate-to-severe reactions to peanut have been reported to have significantly lower threshold doses than patients reporting mild reactions (14). However, the difference was modest compared to the orders-of-magnitude differences between individuals, and most challenges were scored on subjective reactions, which could affect interpretation. A retrospective analysis of diagnostic challenges performed with milk, egg, peanut, soy and wheat revealed that patients experiencing more severe reactions tended to react at a lower median percentage (15%) of the maximum dose (4 g of protein) than those experiencing milder reactions (42). However, this corresponds to a dose of 600 mg of food protein, nearly five orders of magnitude greater than the proposed starting doses in a low-dose challenge protocol. This is consistent with the analysis of Sicherer et al. (41), who found that the majority of food-allergic patients do not even react to the first dose (400–500 mg) of the typical diagnostic challenge. Subjects with lower thresholds than those tested to date have been documented in case reports (e.g. 54). Thus, those undergoing low-dose challenges are not representative of the entire group with a specific food allergy, but constitute a generally more reactive subgroup containing, however, a lower proportion of highly reactive individuals. The possible consequences for the distribution of thresholds are illustrated in Fig. 2. Data are lacking, however, to quantify the relationship between this challenged population and the overall allergic population.
Data are scarce about the existence of subpopulations with different thresholds. A few published challenge trials have evaluated both infants and adults for peanut, but more often such studies have covered only certain populations for specific allergens. For instance, milk and egg have been investigated largely in children, in whom these allergies occur most frequently. Data have therefore generally proved too limited to permit a systematic analysis of differences in threshold doses between infants/young children and adults. One study that looked at possible subpopulation differences in relation to asthma status did not confirm their existence (44).
Challenge materials. The key to success in the DBPCFC is the accurate delivery of a range of doses of the relevant allergenic food in a form that is unrecognizable to the patient. The test material must therefore be well-characterized and its taste, smell, colour and texture must be masked.
Foods are consumed in various forms, e.g. raw, boiled, baked or roasted, and in real life allergenic foods are often eaten in compound or complex meals. Ideally, in order to provide the greatest margin of safety, the threshold should be determined using the most allergenic form of the food, but in practice this can only be determined by challenge. Many different forms of food have been used in clinical DBPCFCs (11), ranging for instance from full-fat to defatted foods (peanut flour, milk). Grimshaw et al. (65) showed the relevance of this observation, inasmuch as a higher-fat matrix resulted in a significantly higher minimum eliciting dose in some subjects with more severe symptoms.
Processing can also influence the allergenicity of foods, but its effect is difficult to predict (66). In pollen-related food allergy, cooked food is often better tolerated than raw food on account of the central role of heat-labile proteins in the pathology (67). In contrast, in peanut allergy most of the allergenic proteins are much more stable, and roasting may even increase allergenicity (63, 68), although this remains to be confirmed clinically. Differences in allergen content among apple varieties correlate with reactivity in DBPCFC (69, 70), but only minor differences have been noted between peanut cultivars (71), and these remain unconfirmed in vivo.
Allergic people often differ in their reactivity to individual proteins in foods (50, 72), but characterization of challenge materials with regard to their content and profile of allergenic proteins has received little attention.
The simplest way of masking a food is to deliver it in capsules. This resolves the sensory issues, but makes it difficult to deliver relatively high doses. This route also bypasses the oral cavity as a site where symptoms may occur, which may bias the outcome towards more severe symptoms. For these reasons, recent recommendations discourage it (55). The only alternative to capsules is a blinding matrix that is a ‘real’ food. These systems need not have an active and a placebo that are indistinguishable; the patients must merely be unable to tell which is which. Taste and smell can be masked by a stronger taste and smell (73, 74), although allergenic materials with a strong smell or flavour can still pose problems, and nose clips can be beneficial for particularly pungent foods. Pretreatments of the food, such as freeze-drying or defatting, may help reduce smell or taste, but can only be used with due regard to their effect on the allergens. Similar handling considerations apply to labile allergens, such as the Bet v 1 homologues in fruits.
Masking should always aim to maximize the amount of active food in the matrix, thereby minimizing the amount of material that the patient is required to consume and hence the probability of nonspecific gastrointestinal symptoms, which could decrease the sensitivity of the test. A close mimic of the active food in terms of flavour could also be used as the placebo where applicable, although no instances of this have yet been reported.
Apart from the taste and smell, the other main sensory attribute that needs to be masked is texture, which can be achieved by adding material of a granular nature such as oatmeal, which can mask the texture of peanut flour (75). Again mimics can be used effectively for masking texture in some foods. Starch-based thickeners can be very good for controlling the thickness of a challenge food but the level of granularity can still be an issue. Colour also needs to be masked, but this is generally straightforward as the addition of relatively small amounts of highly coloured ingredients can be sufficient. Failing that, lighting can be controlled in the area where the challenge is administered, as is standard practice in sensory testing. The effectiveness of masking should be verified by testing the recipes using a trained sensory panel and standard sensory testing protocols, e.g. triangle tests (76).
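Whether masking has succeeded in a triangle test is itself a statistical question: under the null hypothesis that the active and placebo recipes are indistinguishable, each panelist identifies the odd sample out by chance with probability 1/3. A minimal sketch of the corresponding one-sided binomial test (illustrative only; the function name is our own):

```python
from math import comb

def triangle_test_p_value(correct: int, panelists: int) -> float:
    """One-sided binomial p-value for a triangle test.

    Under the null hypothesis (active and placebo indistinguishable),
    each panelist picks the odd sample correctly with probability 1/3.
    A small p-value means the masking is likely inadequate.
    """
    p0 = 1.0 / 3.0
    return sum(
        comb(panelists, k) * p0**k * (1 - p0) ** (panelists - k)
        for k in range(correct, panelists + 1)
    )

# With a 24-person panel, 13 correct identifications suggest inadequate
# masking (p ≈ 0.03), whereas 12 correct is not significant at the 5%
# level (p ≈ 0.07).
print(round(triangle_test_p_value(13, 24), 3))
print(round(triangle_test_p_value(12, 24), 3))
```

In practice a sensory laboratory would follow a standard protocol (e.g. ISO 4120) with pre-specified panel size and significance level; the point here is only that "cannot tell which is which" is verified statistically, not by inspection.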
No systematic research has been performed to investigate the availability of allergen (releasability) in different matrices. As recently shown in a study of peanut allergy (65, 74), for instance, the fat content of a challenge vehicle can have a profound effect on the kinetics of the clinical reaction. Other constituents, e.g. polyphenols, may also reduce availability. These considerations highlight the need for a more thorough assessment of availability, as well as analytical confirmation of a selection of the administered doses.
In summary, a LOAEL determined in one study, with one food variety, in one state of processing and in one matrix may not allow final conclusions to be drawn about the ‘real’ threshold level. However, this situation differs little from others where data must be used for risk assessment and the variability is handled by applying appropriate uncertainty factors. Further studies assessing the impact of all the main variables are needed to circumscribe the degree of uncertainty.
Recommendations for conduct of challenge studies.
The Europrevall project developed a protocol for threshold studies, with the explicit aim of generating data that could be used to make multiple comparisons e.g. between different allergenic foods or between different European populations. It is strongly influenced by the consensus low-dose challenge protocol published in 2004 (56), but refines it in important aspects. We strongly recommend the wider use of this protocol, outlined below and in Table 3, for the conduct of all challenge studies beyond Europrevall as generation of data produced according to the same guidelines will considerably increase the power of the work initiated by Europrevall and thereby improve risk assessment of allergenic foods.
1. Individual patient thresholds should be determined using DBPCFCs starting with very low doses of the suspected food, i.e. 10 μg/3 μg protein, to ensure that no one reacts at the lowest dose and that a NOAEL can therefore be established for the study, based ideally on both patient-reported (subjective) and externally observed (objective) endpoints.
2. Dose progression should follow a half-logarithmic progression up to a maximum of a single serving of the food.
3. To ensure adequate statistical power, at least 29 patients with a food allergy previously confirmed by DBPCFC and not reactive to the proposed starting dose must be used for the determination of NOAELs (76). For dose-distribution modelling, patient selection should aim to achieve adequate numbers at each dose over a wide dose range (6 orders of magnitude) (77). The allergological status of these patients should be fully characterized so that they can be related to the overall allergic population.
4. For the determination of a patient’s LOAEL, clinically relevant food allergy should be confirmed by increasing the doses until the first convincing (preferably objective) allergic reaction occurs, or until a full daily serving has been ingested, in which case clinically relevant food allergy is excluded.
5. The challenge matrix should be one of the ‘real food’ matrices (dessert, chocolate) developed as part of the Europrevall project.
6. Sensory testing by trained testers should be used to confirm that participants cannot tell which preparation contains the allergen.
7. The dose of allergen present in the prepared challenge materials should be verified in a representative sample.
8. The challenge matrix should contain the investigated food in its most allergenic form, if known, with due regard for patient safety; e.g. raw egg would not be used on account of possible contamination with Salmonella spp.
9. The time interval between discrete doses should preferably be at least 20–30 min.
10. Placebo and active challenges should preferably be performed on separate days.
11. All reactions, whether reported by the patient (subjective) or observed by study personnel (objective), must be recorded in sufficient detail to allow determination of a NOAEL and LOAEL for each type of symptom and sign.
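Items 1–3 imply a concrete dose series. As an illustration only (the 3 g protein top dose is our assumption for a single serving, not part of the protocol), a half-logarithmic progression from a 3 μg protein starting dose spans six orders of magnitude in 13 discrete doses:

```python
def half_log_doses(start_ug: float, top_ug: float) -> list[float]:
    """Protein doses in micrograms on the conventional half-log grid.

    Half-logarithmic steps multiply the dose by roughly sqrt(10),
    rounded to the familiar 3-10-30-100... pattern.
    """
    doses, decade = [], 1.0
    while decade <= top_ug:
        for step in (3.0, 10.0):
            dose = step * decade
            if start_ug <= dose <= top_ug:
                doses.append(dose)
        decade *= 10
    return doses

# From 3 ug protein up to an assumed 3 g (3e6 ug) serving:
# 3, 10, 30, 100 ug ... 1 g, 3 g -- 13 doses over 6 orders of magnitude.
print(half_log_doses(3, 3e6))
```

A series of this shape keeps the number of administrations manageable within a challenge day while still covering the wide range of individual thresholds noted above.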
Concluding remarks and future perspectives
The growing volume of data on thresholds has begun to make an impact on risk assessment of allergens. A decade ago, little information existed to define either the sensitivity of the allergic population in terms of the minimum doses known to provoke reactions, or the reactivity of that population in terms of the relationship between dose and severity. While such information remains insufficient, there is an appreciation of its importance to public health, reflected in the evolution of the DBPCFC from a diagnostic tool to one which could also deliver information that can improve management of food allergy both at the individual and at the public health levels. In relation to use of these data for risk assessment, specific issues and knowledge gaps fall into several categories.
Design and conduct of DBPCFC
The DBPCFC forms the foundation of the data generated, which are fundamental to the quality of risk assessments. The development of harmonized protocols constitutes a major advance, which must now be followed by their application. Detailed recommendations for the conduct of challenge studies as part of Europrevall have been set out above and are being implemented in the clinical parts of the project. We strongly recommend the use of this protocol to investigators outside Europrevall. Further work is required to improve the predictive value of the DBPCFC with regard to the reactivity of allergic patients in the community. This includes the need to understand better how different food matrices affect the amounts to which allergic people react, both to interpret challenge study data better, and to enable food manufacturers to design allergen-management measures specific to particular operations, if appropriate. Again, within Europrevall the correlation of diagnostic data of different types, all relating to the same allergic individuals, offers the possibility of identifying better predictors of reaction severity.
Sample size for challenge studies
Current recommendations are that a minimum of 29 patients be included in challenge studies for the determination of thresholds. This recommendation is based on the mathematics of the binomial theorem and on American Academy of Pediatrics guidelines (76). However, while those closely involved in food allergy understand the rationale for this approach, it cannot be assumed to be universally accepted. The adequacy of this sample size, in particular, requires detailed consideration and justification as a basis for decision-making. A related, but distinct, question is whether studies of this size are large enough to generate useful statistical models. Initial attempts with data combined from disparate protocols proved more successful than might have been expected, and have generated much valuable debate about the approach. Analysis of the large data sets becoming available for some allergenic foods should help to evaluate the data needs for those foods where data are much scarcer.
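The binomial rationale behind the figure of 29 can be made explicit: if none of n challenged patients reacts at a dose, the one-sided upper 95% confidence bound on the population reaction rate at that dose is 1 − 0.05^(1/n), and n = 29 is the smallest sample for which that bound falls below 10%. A minimal sketch:

```python
def upper_bound_zero_reactors(n: int, alpha: float = 0.05) -> float:
    """Upper (1 - alpha) confidence bound on the population reaction
    rate when 0 of n challenged patients react at a given dose.

    Derived from requiring (1 - p_max)**n = alpha, i.e. a true rate of
    p_max or more would make "zero reactors" improbable at level alpha.
    """
    return 1.0 - alpha ** (1.0 / n)

# With 29 non-reactors, the chance of seeing no reactions would be
# below 5% if the true reaction rate were 10% or more:
print(0.90 ** 29)                              # ≈ 0.047 < 0.05
print(round(upper_bound_zero_reactors(29), 3)) # ≈ 0.098, i.e. < 10%
```

The same arithmetic shows what the recommendation does not buy: ruling out a 1% reaction rate with the same confidence would require roughly 300 non-reactors, which is one reason the adequacy of n = 29 merits the detailed justification called for above.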
Use of challenge data in risk assessment, including modelling approaches
Challenge data form the basis for risk assessments, but optimal approaches to using these data need to be defined. A key consideration is that the data are generated in human beings. This has the advantage that no extrapolation from other species or test systems is necessary, but it also means that certain types of data, such as individual dose–response data, can only be generated to the very limited extent compatible with the ethics of such testing. Modelling approaches make better use of the limited data, for instance by taking into account the whole dose distribution. However, the success of such approaches is predicated on developing a better understanding of how their predictions compare with reality. This requires qualitatively better data on both unintended exposure to allergens and the actual number of reactions in the population, both of which are currently largely unknown. Development of registries of allergic reactions, such as those in operation in Norway and France, would help provide some of these data. Unlike low-dose extrapolation models based on animal data and used to predict chronic disease outcomes (e.g. in cancer studies), the acute nature of IgE-mediated reactions makes it possible to validate such approaches. Prospective studies in well-defined allergic populations could thus generate the necessary data, particularly if coupled with analytical surveys of relevant food products.
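One way of "taking into account the whole dose distribution" is to fit a parametric threshold distribution to interval-censored challenge outcomes, each patient's true threshold lying somewhere between the NOAEL and LOAEL observed at challenge. The sketch below assumes a log-normal threshold distribution and uses entirely hypothetical (NOAEL, LOAEL) pairs; it illustrates the modelling idea, not any published method:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def neg_log_likelihood(mu: float, sigma: float,
                       intervals: list[tuple[float, float]]) -> float:
    """Interval-censored log-normal likelihood: each patient contributes
    P(NOAEL < threshold <= LOAEL) under the candidate distribution."""
    nll = 0.0
    for noael_mg, loael_mg in intervals:
        p = (norm_cdf((math.log(loael_mg) - mu) / sigma)
             - norm_cdf((math.log(noael_mg) - mu) / sigma))
        nll -= math.log(max(p, 1e-300))  # guard against log(0)
    return nll

def fit_lognormal(intervals):
    """Crude grid-search MLE on the log-mg scale; a real analysis
    would use a proper optimizer and report confidence intervals."""
    best = (float("inf"), 0.0, 1.0)
    for mu in (i * 0.1 for i in range(-20, 81)):
        for sigma in (j * 0.1 for j in range(5, 41)):
            nll = neg_log_likelihood(mu, sigma, intervals)
            if nll < best[0]:
                best = (nll, mu, sigma)
    return best[1], best[2]

# Hypothetical (NOAEL, LOAEL) pairs in mg protein -- illustrative only.
data = [(0.03, 0.1), (0.1, 0.3), (0.3, 1), (1, 3), (1, 3),
        (3, 10), (3, 10), (10, 30), (30, 100), (100, 300)]
mu_hat, sigma_hat = fit_lognormal(data)
# Dose predicted to elicit a reaction in 5% of this toy population:
ed05_mg = math.exp(mu_hat - 1.6449 * sigma_hat)
print(round(mu_hat, 1), round(sigma_hat, 1), ed05_mg)
```

Validation of such a fitted curve, as argued above, requires comparing its predicted reaction frequencies against independently observed reactions to known unintended exposures, which is precisely where registry and survey data would come in.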
A good knowledge of both the population for which the risk assessment is being made and the population sample in which the data have been obtained is essential for the interpretation of the data. Methodologies to compare these groups, other than full-scale challenges, need to be developed. These could also be used to answer other questions of importance to risk assessors, such as whether reactivity to the same allergens differs between populations. One possibility in this area might be to use an integrated symptom score, such as that developed by Hourihane et al. (44), to compare the frequency of symptoms of different severity observed in different populations or groups. An important objective of Europrevall is to develop novel diagnostic tools, and data resulting from their application could help towards the goal of better characterizing, and therefore comparing, these populations.
Food allergy mediated by other mechanisms than IgE
Finally, this review concerns only thresholds for IgE-mediated food allergy. This focus stems from the observation that acute life-threatening reactions are associated only with this type of food allergy. Furthermore, other types of food allergy are very poorly documented. However, they can have a significant impact on health and quality of life and therefore warrant investigation.
This work was funded in part by the EU through the EuroPrevall project (FOOD-CT-2005-514000).