Although the quantification of exposure risk in most risk assessments follows the recognized Codex Alimentarius sequence of hazard identification, exposure assessment, hazard characterization, risk characterization, documentation, and reassessment (Vitale et al. 2001), these steps can be compressed concisely into 3 stages in environmental BSE risk assessment: infective load estimation, exposure pathway identification, and risk estimation. Each is now considered separately.
Infective load estimation
The most fundamental part of any BSE risk assessment is the estimation of the potential for BSE infectivity contained in the source under consideration. This quantity is referred to as the “infective load”. Infective load is usually expressed in ID50 units, where a unit is the estimated mass of infected tissue that each individual in a population would need to ingest for 50% of the population to become infected. The choice of 50% is traditional but arbitrary. It leads to a convenient working definition of infective load as the potential of a specific mass of tissue to cause infection (Comer et al. 1998).
More formally, the estimated infective load, L, expressed in human oral ID50 units, is given by the following equation:

L = (N × p × m × i) / s

where N is the number of animals in the cohort under consideration in the risk assessment, p is the disease prevalence in that cohort, m is the mass of infected tissue per animal (in grams), i is the infectivity in bovine (or ovine) oral ID50 units (ID50 · g⁻¹), and s is the bovine (or ovine) to human species barrier factor (dimensionless). Unless otherwise stated, an ID50 unit is expressed in terms of an oral dosage throughout this article.
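As an illustration, the parameter definitions above combine into a one-line calculation. All numerical values in this sketch are hypothetical placeholders, not figures from any published assessment.

```python
# Sketch of the infective load calculation L = N * p * m * i / s.
# Every value below is an illustrative placeholder.

def infective_load(N, p, m, i, s):
    """Estimated infective load in human oral ID50 units.

    N: number of animals in the cohort
    p: disease prevalence (fraction)
    m: infected tissue mass per animal (g)
    i: infectivity (bovine or ovine oral ID50 per g)
    s: livestock-to-human species barrier factor (dimensionless)
    """
    return N * p * m * i / s

# Example: 10,000 cattle, 0.5% prevalence, 500 g of infected CNS tissue
# each, 10 bovine ID50/g, and a species barrier of 10.
L = infective_load(10_000, 0.005, 500.0, 10.0, 10.0)
print(L)  # 25000.0 human oral ID50 units
```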
Disease prevalence (p)—Efforts have recently been made to determine values for TSE disease prevalences, both for scrapie and BSE, in the United Kingdom at the regional and national levels (Woolhouse and Anderson 1997; Hoinville et al. 2000; Donnelly et al. 2002; Kao et al. 2002). These include focused attempts at back-calculation to reconstruct historical trends in the incidence of BSE infection by considering disease incidence together with an incubation-period distribution (Donnelly et al. 2002, 2003). However, epidemiological modeling studies reported by Ferguson et al. (1999) have suggested that low-level, horizontal transmission of BSE (if it occurs) would not have been sufficient to sustain the BSE epidemic observed in cattle in the United Kingdom.
All such estimations are heavily reliant on accurate epidemiological data (Doherr et al. 1999; Hoinville et al. 2000); however, progress in the quantification of key parameters has been hindered by several factors typical of TSE disease surveillance. Most obviously, there are difficulties in the accurate diagnosis of disease signs and underreporting of disease incidence (Hoinville et al. 1999, 2000). Theoretical modeling studies have been constrained further by difficulties in establishing a minimum threshold dose for the onset of TSE disease in a given species. Such unknowns can, at best, still be quantified only in broad terms. In this context, the use of Bayesian estimation methods, increasingly being applied elsewhere in epidemiology and risk estimation (e.g., Clough et al. 2003; Orr et al. 2003), has been surprisingly neglected in BSE risk assessment. Estimated TSE disease prevalences based on surveillance data gathered for bovine and ovine populations have usually been incorporated as a single value (point) rather than a range (interval), as in the DNV risk assessments and those by Gale and Stanfield (2001), Gale (1998), and Gale et al. (1998). Throughout such studies in the United Kingdom, the disease prevalence of BSE-infected cattle was generally assumed to be below the value for OTMS cattle in 1996 (0.54%) derived by Anderson et al. (1996). For BSE in sheep, a ceiling disease prevalence of 2% for scrapie was assumed, and hypothetical scenarios were constructed for BSE prevalence as a proportion of scrapie prevalence, specified to be between 0.01 and 10% of scrapie prevalence (DNV 2001c).
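To make the point-versus-interval contrast concrete, the following sketch estimates prevalence from hypothetical surveillance counts using a conjugate Beta prior, drawing an equal-tailed credible interval by Monte Carlo. The counts, prior, and seed are illustrative assumptions, not survey data.

```python
# With x positives among n tested animals and a Beta(a, b) prior,
# the posterior for prevalence is Beta(a + x, b + n - x). An
# equal-tailed credible interval can be approximated by sampling.
import random

def prevalence_interval(x, n, a=1.0, b=1.0, draws=100_000, level=0.95, seed=1):
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(a + x, b + n - x) for _ in range(draws))
    lo = samples[int((1 - level) / 2 * draws)]
    hi = samples[int((1 + level) / 2 * draws) - 1]
    return lo, hi

# Point estimate vs. interval for, say, 27 positives in 5,000 tested:
x, n = 27, 5_000
print(f"point estimate: {x / n:.4f}")
lo, hi = prevalence_interval(x, n)
print(f"95% credible interval: ({lo:.4f}, {hi:.4f})")
```

The interval carries the sampling uncertainty that a single point value discards, which is the substance of the Bayesian criticism above.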
Infective tissue (m) and incubation period—Although evidence of the BSE pathogen has been found in the distal ileum of cattle as early as 6 months after oral exposure (Wells et al. 1998), BSE infectivity is known to accumulate only in the central nervous system (CNS) of cattle towards the end of the incubation period. In general, the duration of any TSE incubation period varies considerably within a species. Statistical analyses of several scrapie titration experiments with transgenic mice have demonstrated that the average incubation period decreases as dose is increased, whereas the variance in its duration becomes greater as dose is lowered (McClean and Bostock 2000).
In cattle, BSE disease signs usually take 4 to 6 y to appear, with a mean incubation period estimated at 5.2 y (Dealler 2001); no evidence of bovine genetic susceptibility exists as yet. In sheep, BSE disease signs typically appear much earlier, most frequently between 2 and 4 y of age (Jeffery et al. 2001). Additionally, studies of both scrapie- and BSE-infected sheep have shown the existence of a wide range of genetic susceptibility to TSE diseases (Foster et al. 1996; Hunter 1997; Jeffery et al. 2001). This susceptibility has been characterized at the molecular level in terms of 5 alleles, giving rise to 15 main sheep genotypes (Hoinville et al. 1999). In the most resistant homozygous genotype (denoted ARR/ARR), no evidence to date shows that either scrapie or BSE can be induced by an oral challenge (Anon 2002; Houston et al. 2003). However, it has been shown that ARR/ARR sheep may become infected by intracerebral challenge (Houston et al. 2003), thus raising the possibility that ARR/ARR sheep might act as pathogen reservoirs or carriers. To further complicate matters, different genotypes are not uniformly distributed among the various breeds of sheep in the United Kingdom (Dawson et al. 1998). In the face of such complex uncertainties, it is a justifiable expedient simply to assume that sheep carcasses in the source term are all of the most susceptible genotype (DNV 2001c).
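The figure of 15 genotypes from 5 alleles is simply the number of unordered allele pairs, which can be checked directly. The allele names below follow the commonly cited PrP nomenclature and are supplied here for illustration.

```python
# 5 PrP alleles yield 15 diploid genotypes: the count of unordered
# pairs, i.e., combinations with replacement.
from itertools import combinations_with_replacement

alleles = ["ARR", "ARQ", "AHQ", "ARH", "VRQ"]  # standard PrP allele names
genotypes = ["/".join(pair) for pair in combinations_with_replacement(alleles, 2)]

print(len(genotypes))  # 15
print(genotypes[0])    # ARR/ARR, the most resistant genotype
```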
Environmental BSE risk assessments have proceeded by applying the precautionary principle to the potential infectivity in tissue. A strong emphasis is placed on worst-case scenarios, for example, by assuming an upper proportion (and sometimes all) of the animal carcasses under consideration are maximally infected. Quantification of the level of infectivity present in a cattle carcass is based on the principle that total infectivity carried by a symptomatic, infected bovine is directly proportional to the mass of its brain and connected spinal tissue (DNV 1997a, 1997b, 1997c, 1997d, 1997e, 2001a, 2001b). Each such carcass is assumed to contain the level of infectivity typically associated with a fully symptomatic individual (DNV 1997a, 1997b, 1997c, 1997d, 1997e, 2001a, 2001b). Although this should imply an upper estimate of risk, the simplifications avoid consideration of uncertainties connected with the disease incubation period or the potential variability of infectivity in tissue.
For sheep, the calculation is more complicated because, during the course of pathogenesis (in a susceptible genotype), both scrapie and BSE infectivity are known to spread extensively into a variety of tissues. It has been demonstrated experimentally that TSE infectivity may enter virtually all tissues in sheep, and especially the lymphatic system, early after infection (Hadlow et al. 1982; Foster et al. 1996; Jeffrey et al. 2001). The level of TSE infectivity present in sheep tissue is, therefore, dependent on both tissue type and age. The total infected tissue mass, m, must be obtained by summing over all tissue and age categories. This approach was adopted in the assessment to determine BSE risks from the disposal of sheep carcasses (DNV 2001c), where scrapie data from Hadlow et al. (1982) were used to group ovine tissues into 3 and 4 main categories based on tissue type and age, respectively. However, it implies an underlying assumption that the pathogenesis of BSE and scrapie in sheep is identical, which in reality has yet to be established.
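The tissue-and-age summation described above can be sketched as follows. The category labels, masses, and titres are illustrative placeholders, not the Hadlow et al. (1982) groupings used by DNV (2001c).

```python
# Sum infected tissue mass x titre over tissue-type categories for a
# given age category. All figures are invented for illustration.

# mass_g[tissue_category][age_category] = infected mass per animal (g)
mass_g = {
    "CNS":      {"lamb": 0.0, "young": 20.0, "adult": 100.0},
    "lymphoid": {"lamb": 5.0, "young": 40.0, "adult": 60.0},
    "other":    {"lamb": 1.0, "young": 5.0,  "adult": 10.0},
}
# relative infectivity of each tissue category (ID50 per g, illustrative)
titre = {"CNS": 10.0, "lymphoid": 0.1, "other": 0.01}

def total_id50_per_animal(age):
    """Sum of mass x titre over all tissue categories for one age class."""
    return sum(mass_g[t][age] * titre[t] for t in mass_g)

print(total_id50_per_animal("adult"))  # ~1006.1 ID50 units per adult
```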
Infectivity (i)—Several experimental BSE studies have shown that the likelihood of inducing BSE infection through oral ingestion increases as the dose of infectious material is increased (e.g., Wells et al. 1998). However, it has not been established whether a minimum threshold dose is required to initiate infection in a given species. Applying the precautionary principle yet again, environmental BSE risk assessments have universally assumed that there is no threshold dose and that BSE infectivity accumulates in direct proportion (in other words, linearly) to the quantity ingested over the course of time. This assumption appears to err on the side of safety, because the absence of any threshold for infection seems to imply that the risk must be overestimated. However, it also implicitly assumes that infectivity accumulates without replication during the same period; in other words, that any dose, no matter how small, may cause death among some individuals in an exposed population because of (1) variation in individual susceptibility to infection, (2) variation in infectivity of tissue, or (3) chance passage of infectivity through the gut wall.
Following advice from the Spongiform Encephalopathy Advisory Committee (SEAC), the majority of the DNV risk assessments assert that 0.1 g of BSE-infected bovine CNS tissue is equivalent to 1 bovine oral BSE ID50 unit. This estimate was based on data obtained in a single bovine attack-rate experiment (commenced in 1992 at the Central Veterinary Laboratory, UK), with results quoted in Anderson et al. (1996). The calculation of the ID50 was performed using the computer program QUAD, which used a logit model and the delta method to derive confidence intervals (Morgan et al. 1989). This approach generated an estimate of 0.38 g of BSE-infected CNS tissue for 1 bovine oral ID50 unit, with a wide 95% confidence interval of 0.03 to 5.27 g (the point value was prudently rounded down to 0.1 g after SEAC expert opinion).
Ideally, the ID50 estimate would have been inferred directly from a dose-response model relating exposure to BSE infectivity and mortality. This would be obtained by fitting a model curve (typically sigmoidal) to the plot of the mortality proportions of cohorts incurred under the different exposure levels (which were, respectively, 1, 10, 100, and 300 g of BSE-infected bovine brain tissue). However, the sparse data collected in the 1992 attack-rate experiment meant that estimation of the effects of small fractions of an ID50 unit was subject to very high uncertainty. In the DNV risk assessments, no minimum threshold dose was assumed, and the infectivity contained in doses between 0 and 0.1 g was taken to be in direct proportion to the fractional quantity of an ID50 unit ingested (in other words, linear).
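For illustration, a logit dose-response fit and the resulting ID50 can be sketched as follows. The attack-rate data are invented for the example (they are not the 1992 experiment's results), and a crude grid search stands in for QUAD's maximum-likelihood machinery and delta-method intervals.

```python
# Logit model: P(d) = 1 / (1 + exp(-(a + b*log10(d)))), so the ID50
# (the dose at which P = 0.5) is d = 10**(-a/b).
import math

doses = [1.0, 10.0, 100.0, 300.0]   # g of infected brain tissue
n_dosed = [10, 10, 10, 10]          # animals per dose group (illustrative)
n_infected = [3, 6, 9, 10]          # infected per group (illustrative)

def log_lik(a, b):
    """Binomial log-likelihood of the logit model at parameters (a, b)."""
    ll = 0.0
    for d, n, k in zip(doses, n_dosed, n_infected):
        p = 1.0 / (1.0 + math.exp(-(a + b * math.log10(d))))
        p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
        ll += k * math.log(p) + (n - k) * math.log(1 - p)
    return ll

# Crude grid-search maximum likelihood over (a, b).
a_hat, b_hat = max(((a / 10, b / 10)
                    for a in range(-50, 51) for b in range(1, 51)),
                   key=lambda ab: log_lik(*ab))
id50 = 10 ** (-a_hat / b_hat)
print(f"a = {a_hat}, b = {b_hat}, ID50 estimate = {id50:.2f} g")
```

The wide confidence interval quoted above reflects exactly this situation: with only 4 dose groups, the fitted curve is poorly constrained at small fractions of an ID50 unit.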
An alternative approach was adopted by Gale et al. (1998). Again, no minimum threshold was assumed. However, calculation of the level of infectivity present in fractions of a bovine oral BSE ID50 unit was based on an estimate of the number of PrPSc macromolecules that would make up 1 bovine oral BSE ID50 unit. Using a species-adapted scrapie model, a value of 1 × 10⁵ PrPSc molecules had previously been estimated to make up 1 intracerebral bovine ID50 (McKinley et al. 1983). Two radical assumptions were then made: that the oral ingestion route is 1 × 10⁵ times less efficient than intracerebral challenge (Kimberlin 1996) and that the cow-to-human species barrier factor is 1,000. This would imply that 1 human oral BSE ID50 unit would contain 10⁵ × 10⁵ × 1,000 = 1 × 10¹³ PrPSc molecules.
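The chain of assumptions above reduces to a simple multiplication of molecule counts and route and species factors, which can be spelled out explicitly:

```python
# The Gale et al. (1998) assumption chain, written out. Each factor
# is an assumption quoted in the text, not a measured quantity.
ic_bovine_id50_molecules = 1e5   # PrPSc molecules per intracerebral bovine ID50
oral_vs_ic_inefficiency = 1e5    # oral route assumed 10^5-fold less efficient
cow_to_human_barrier = 1_000     # assumed species barrier factor

human_oral_id50_molecules = (ic_bovine_id50_molecules
                             * oral_vs_ic_inefficiency
                             * cow_to_human_barrier)
print(f"{human_oral_id50_molecules:.0e} PrPSc molecules per human oral ID50")
```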
Gale et al. (1998) then went on to consider how the β-Poisson and negative-exponential models compared with the dose-response data obtained for BSE infectivity in inbred mice (Taylor et al. 1995). The mathematical form of the negative-exponential model is P = 1 − exp(−rN), where P is the probability of infection from N pathogens and r is a species-specific parameter. At low doses, this model can be approximated by the linear function P = rN. However, it is unclear why this relationship should necessarily hold for a conglomerate of prions, which, having been described earlier as novel pathogens, are unlikely to behave like typical water-borne pathogens. Gale et al. (1998) also claimed that the (even simpler) relationship of P = 0.5 × the fraction of 1 ID50 unit ingested compared well with the available dose-response data. By this logic, Gale et al. (1998) concluded that the probability of human infection from an aggregate of 100,000 PrPSc molecules, the largest supported to date by biochemical evidence (Prusiner 1984), would be (0.5 × 10⁵) / 10¹³ = 5 × 10⁻⁹, which formed the basis of their calculations.
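To make the comparison concrete, the sketch below evaluates the negative-exponential model, its low-dose linear approximation, and the simpler half-ID50 rule described above. The value of r is hypothetical and chosen only for illustration.

```python
# Negative-exponential dose-response model and its low-dose behavior.
import math

def p_neg_exp(N, r):
    """Probability of infection from N pathogens: P = 1 - exp(-r*N)."""
    return 1.0 - math.exp(-r * N)

def p_linear(N, r):
    """Low-dose linear approximation: P = r*N."""
    return r * N

r = 1e-9   # hypothetical species-specific parameter, for illustration only
N = 1e5    # a 100,000-molecule PrPSc aggregate
print(p_neg_exp(N, r))   # ~1e-4 at this illustrative r
print(p_linear(N, r))    # also ~1e-4; the two agree at low doses

# The simpler rule: P = 0.5 x (fraction of one human oral ID50 ingested),
# with 1e13 PrPSc molecules per human oral ID50 as assumed in the text.
molecules_per_human_oral_id50 = 1e13
p_simple = 0.5 * N / molecules_per_human_oral_id50
print(p_simple)  # 5e-09
```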
Species barrier factor (s)—Probably the least-certain parameter in any BSE risk assessment is the livestock-to-human species barrier factor. Early advice by SEAC stated that the cattle-to-human species barrier factor could lie anywhere in the range of 1 to 10,000, with 10 as a best estimate (SSC 2000). The risk assessment by Gale et al. (1998) considered that the cattle-to-human species barrier factor could be as high as 1,000. More recent statistical research based on empirical disease data indicates that the barrier is likely to be several orders of magnitude higher than 10 (Donnelly et al. 2002; Ghani et al. 2003). No direct experimental data, of course, are available for verification.
Assuming a cattle-to-human species barrier of 10 and a BSE infectivity of 10 bovine ID50 units/g, the estimated infectivity translates to 1 human ID50 unit/g (i.e., 10 bovine ID50 · g⁻¹ ÷ 10) of BSE-infected bovine CNS tissue. This value was taken as a best estimate for both bovine and ovine CNS tissue in nearly all the DNV risk assessments. An exception is the assessment addressing the risk of BSE exposure to humans from large-scale disposal of sheep carcasses (DNV 2001c). In that case, the sheep-to-human species barrier factor was put at 50, albeit with no evidence of BSE in sheep in the United Kingdom, which resulted in an estimate of 0.2 human ID50 units/g (i.e., 10/50) of BSE-infected ovine CNS tissue.
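The species-barrier conversion used in these assessments is a simple division; the sketch below reproduces the two cases quoted above.

```python
# Human infectivity per gram = livestock infectivity per gram / barrier.
def human_id50_per_g(livestock_id50_per_g, species_barrier):
    """Convert tissue infectivity across a species barrier factor."""
    return livestock_id50_per_g / species_barrier

# Cattle: 10 bovine ID50/g across a barrier of 10 -> 1 human ID50/g.
print(human_id50_per_g(10.0, 10.0))  # 1.0
# Sheep (DNV 2001c): barrier of 50 -> 0.2 human ID50/g.
print(human_id50_per_g(10.0, 50.0))  # 0.2
```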
In summary, environmental BSE risk assessments typically determine the infective load as the total mass of infected tissue contained in the source term under consideration divided by the estimated mass of tissue associated with 1 ID50 unit. More complex calculations to address the risk of exposure associated with BSE in sheep use additional assumptions; namely, that the spread and accumulation of BSE infectivity in sheep tissue occurs identically to that associated with scrapie (DNV 2001c), as recorded by Hadlow et al. (1982).