Years of external beam radiobiology indicate that there is a dose-response relationship in tumor response and normal tissue toxicity. In most experimental systems and in humans, the greater the radiation dose the tumor receives, the more probable is cure. Similarly, for normal tissue toxicity, the higher the dose of radiation, the more probable is tissue damage. Such data extend from cell culture to experimental models of cancer and to patients.1–3 Such relationships would be expected to hold for radioimmunotherapy (RIT) as well. However, external beam therapy differs from RIT in that dose rates in external beam therapy are much higher than those in RIT and toxicity is usually to normal tissue adjacent to the beam, not unrelated organs throughout the entire body. Although tumor sites are selected for external beam therapy, some tumor margins may not be included because tumor infiltration into normal tissue is not apparent on computed tomography (CT), or because the inclusion would result in unacceptable toxicity to surrounding normal tissue. On the other hand, some areas of tumor may be “unreachable” by RIT due to the heterogeneity of radioisotopic deposition.
Perhaps the direct relationships among administered dose, tumor and normal tissue radiation dose, efficacy, and toxicity have been too casually assumed for RIT. Investigators who have participated in the development of tumor-targeting radionuclide therapies have produced substantial, seemingly contradictory information regarding administered doses, radiation dose calculations, and related responses. Because this information may prove useful in guiding future approaches to clinical trials and, ultimately, to patient care, this review attempts to summarize representative portions of preclinical and clinical RIT studies in which injected dose and/or radiation dose estimates (calculations) were examined for correlation with the resultant toxicity, efficacy, or both.
Considering this information in the context of approaches used for other cancer therapies: chemotherapy doses have historically been set by dose escalation studies that define the maximum tolerated dose (MTD) per unit of weight or body surface area. Tissue concentrations were at best difficult to measure, and tumor responses relied on small differences in tumor and normal tissue effects (therapeutic index). The resulting clinical chemotherapy has therefore been based on parameters such as mg/m2 or mg/kg. At the same time, in radiation oncology, where the dose to tumor and to normal tissue could be better defined, dosimetry-based treatment planning for individual patients has proven important both to optimize the tumor dose and to minimize normal tissue radiation and, thus, morbidity.2 The dose and fractionation schedules of radiation delivered to tumor and normal tissue have been biologically calibrated by the response-morbidity relationships obtained from formal toxicity and efficacy studies in patients. Thus, treatment planning for external beam therapy actually represents a careful definition of the calculated radiation dose distribution between normal tissue and tumor.
The earliest external beam radiation dosimetry consisted of absorbed radiation dose estimates as a function of depth beneath the skin's surface.1 This early form of dosimetry was satisfactory, mainly because the first radiotherapy units produced their maximum absorbed radiation dose at the skin's surface. Absorbed dose dropped off rapidly as the X-ray beam traveled to deeper tissues, and treatment was limited primarily by skin erythema. When cobalt-60 (60Co) and linear accelerator therapy units became widely used in the 1960s and 1970s, a need for more complicated radiation dosimetry developed.4 Major organs and structures other than skin became dose-limiting. As a way to better judge absorbed radiation dose to deeper structures, two-dimensional treatment planning systems were developed.3
With the advent of the CT scan, magnetic resonance imaging, and high-speed computers in the 1980s and 1990s, more advanced radiation dosimetry became available. Software was developed to calculate combined absorbed radiation dose from multiple radiation beams capable of being digitized on CT or magnetic resonance imaging, using 3-D reconstructions of anatomy and "beam's-eye view" technology to more accurately direct and shape radiation beams to any organ or tumor. The most advanced method of 3-D radiation dosimetry and therapy available today is intensity modulated radiation therapy (IMRT), which is capable of modulating the shape and intensity of the radiation beam as it moves about the patient, resulting in customized dose distributions.5, 6
Radiation absorbed dose is expressed most accurately by 3-D software through the isodose diagram and dose-volume histogram: the isodose diagram provides a visual representation of calculated radiation dose distribution around a tumor volume, and the dose-volume histogram graphically illustrates external beam radiation dose as a function of volume for critical organs and tumors. Recent trials of radiation dose escalation have correlated dose-volume histogram characteristics to solid organ toxicity and tumor response.7, 8
Some form of radiation dosimetry is routinely performed prior to clinical external beam or sealed source radiotherapy. The amount varies significantly with the clinical situation. A radioresistant but curable tumor adjacent to a critical normal organ or structure requires a much more aggressive treatment planning approach than a radiosensitive metastatic tumor distant from critical organs, which may only require palliation. The patient with a curable tumor may benefit from 3-D imaging followed by hours of computer planning to optimize radiation beams. The patient with an incurable tumor or a radiosensitive curable tumor that is not near critical structures may only require 2-D radiation dosimetry for the area where the radiation beam is directed. The essence of this practical clinical approach should be considered in RIT applications.
Although significant inherent differences between systemically given tumor-targeted RIT and radiation therapy from external beam or sealed sources may result in different clinical approaches to radiation dosimetry for RIT, the clinical situation and the type of therapy medically indicated must still direct the general approach to treatment planning.
However, dosimetry as absorbed dose in living tissues is almost never directly measurable, whether for external beam, brachytherapy, or RIT. The limitation follows from the invasive nature of even the smallest dosimeters.9 Thus, we use estimation (calculation), not measurement, of dose. Furthermore, by using the definition of 1 centigray (cGy) as 100 erg/g for absorbed dose and assuming a water medium, one can show, using calorimetry, that the associated temperature rise is 2.6 × 10−6 °C. It may seem surprising, then, that absorbed doses on the order of only 100 cGy can have a significant impact on tumor, since the associated rise in tissue temperature is only 10−4 °C. The explanation lies in the strong dependence of absorbed dose upon the precise volume of target tissue considered. If we shrink our conceptual dosimeter to the size of a monoclonal antibody (MAb) molecule in which a single representative ionization (i.e., 32 eV)10 has taken place, the absorbed dose would have been on the order of 107 cGy, with an associated temperature rise of 24 °C. Such enormous spatial variations in energy absorbed per unit mass (D) are a consequence of the short wavelength of ionizing radiation. Thus, careful clinical definitions must be examined to correlate biologic results with calculated absorbed dose, an inherently problematic process.
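The macroscopic calorimetric arithmetic above can be checked with a short sketch. This is an illustrative calculation only, assuming a pure-water medium with a specific heat of 4.186 J/(g·°C); the definition 1 cGy = 100 erg/g gives a rise of roughly 2.4 × 10−6 °C per cGy, the same order of magnitude as the 2.6 × 10−6 °C quoted above.

```python
# Temperature rise produced in water by a given absorbed dose.
# Assumption: pure-water medium, specific heat 4.186 J/(g*degC).

ERG_PER_G_PER_CGY = 100.0   # 1 cGy = 100 erg/g by definition
C_WATER_ERG = 4.186e7       # specific heat of water in erg/(g*degC)

def temp_rise_degC(dose_cGy):
    """Temperature rise (degC) in water for an absorbed dose given in cGy."""
    return dose_cGy * ERG_PER_G_PER_CGY / C_WATER_ERG

print(temp_rise_degC(1))    # on the order of 1e-6 degC for 1 cGy
print(temp_rise_degC(100))  # on the order of 1e-4 degC for a therapeutic 100 cGy
```

The same scaling, applied to the vanishingly small mass of a single antibody molecule, is what drives the enormous local doses mentioned in the text.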
Preclinical Studies: Evidence of a RIT Dose-Response Relationship?
During the 25 years of RIT development, the tumor uptake of radiolabeled antibodies in human xenografts in nude mice has varied from 10% to 100% ID/g.11–16 The significance of this tumor-targeting achievement is apparent if we compare these tumor uptake values with the ratio of total injected activity divided by the mouse's total body weight, i.e., 100% ID/20 g = 5% ID/g. Thus, a two- to 20-fold enhancement of tumor uptake has occurred over what we might naïvely expect if targeting were completely nonspecific. RIT biodistributions have also demonstrated variations in tumor and normal organ uptake with tumor mass, number of lesions, protein mass, and circulating antigen.11, 15, 17 For example, in anticarcinoembryonic antigen (anti-CEA) studies, disease-free livers in nude mice have demonstrated uptake of more than 50% ID/g18 due to antibody binding of circulating CEA followed by uptake into hepatic Kupffer cells. For tumor-free mice, the corresponding hepatic value was close to 5% ID/g.
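The whole-body arithmetic behind this comparison is simple enough to sketch. The values are those given in the passage (a 20-g mouse, 100% of the injected dose assumed uniformly distributed):

```python
def uniform_uptake_pct_per_g(body_weight_g=20.0):
    """%ID/g expected if 100% of the injected dose spread uniformly."""
    return 100.0 / body_weight_g   # 5 %ID/g for a 20-g mouse

def targeting_enhancement(tumor_pct_per_g, body_weight_g=20.0):
    """Fold enhancement of tumor uptake over a uniform distribution."""
    return tumor_pct_per_g / uniform_uptake_pct_per_g(body_weight_g)

print(targeting_enhancement(10))   # 2-fold at 10 %ID/g
print(targeting_enhancement(100))  # 20-fold at 100 %ID/g
```

The 10–100% ID/g range reported for xenograft uptake thus maps directly onto the two- to 20-fold enhancement cited in the text.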
These variations in the tumor-targeting and normal tissue distribution of radiolabeled monoclonal antibodies in mice provide reason for attention to variations in distribution, and therefore in absorbed radiation dose, in the clinical application of RIT. In RIT systems where pharmacokinetic differences were considered likely, calculated absorbed doses (cGy) to the more sensitive normal tissues have been used as escalation parameters in defining the maximum tolerated dose (MTD). Relative variation in tumor and organ uptake by factors of 10 or more in animals11, 15, 18 and patients19 has direct implications. This suggests that the parameters used for defining dose for chemotherapy may be applicable in RIT only in restricted situations.
Absorbed dose, on the other hand, is one direct way to take differential uptake data into account. Calculated in energy per unit mass, D has been used extensively in external beam and brachytherapy. In the case of beta and gamma rays, uptake and absorbed dose rates are proportional to each other, so that variation in D essentially mirrors spatial variation in targeting. Moreover, animal data are available supporting use of absorbed dose for RIT. Almost 10 years ago, Kuhn et al.20 provided evidence that only D was definitive in assessing WiDr xenograft regression in nude mice. Using interferon to up-regulate production of CEA in their low-CEA expression tumor model, these authors demonstrated that activity according to body weight or surface area did not correlate with tumor regression measurements. Two experimental groups (90Y–anti-CEA with or without interferon) received the same amount of activity, yet had very different absorbed dose values with correlated WiDr tumor response.20
Beaumier et al.21 reported analogous results in small cell lung cancer (SCLC) xenografts treated with the specific MAb NR-Lu-10 labeled with rhenium-186 (186Re). In these experiments, the authors found that total activity did not correlate with the growth delay of SCLC tumors; only the estimated absorbed dose appeared to be a relevant parameter. A second, nonspecific antibody (NR-ML-05) was used as a control in these experiments, which are summarized in Table 1.
Table 1. Growth Delay of Small Cell Lung Carcinoma Xenografts with Therapy Using Rhenium-186-NR-Lu-10.
Tumor Dose (cGy) | Activity (μCi) | Growth Delay (d)
On the other hand, several preclinical studies have shown that response and toxicity correlate with injected dose as well as with estimated tumor dose when the antigen target on tumor is abundant, neither shed nor shared with normal tissue, and tumor size is within a relatively narrow range (i.e., 50–300 mg). Despite modest differences in pharmacokinetics between the two radioimmunoconjugates, DeNardo et al. reported that mice given copper-67 (67Cu)–2IT-BAT-Lym-1 and -L6 received similar calculated radiation doses to marrow and total body per MBq administered.22 In nude mice treated with 67Cu-2IT-BAT-Lym-1, the calculated radiation dose to the total body was 32.4 cGy/MBq. The radiation doses to marrow, liver, and tumor were estimated to be 40.8, 41.1, and 144 cGy/MBq (1.51, 1.52, and 5.33 cGy/μCi), respectively. The LD50/30 dose of 67Cu-2IT-BAT-Lym-1, 21.6 MBq, corresponded to a calculated total-body radiation dose of 700 cGy. The LD50/30 dose of 67Cu-2IT-BAT-L6, 20.6 MBq, corresponded to a total-body radiation dose of 600 cGy.22 Comparatively, the LD50/30 for an acute total-body radiation dose to BALB/c mice from external beam radiation has been reported as 550–750 cGy.23, 24 Nontargeted whole-body radiation delivered by non-lymphoma-targeting 67Cu-2IT-BAT-L6 moderately delayed tumor growth; the marked therapeutic effect of 67Cu-2IT-BAT-Lym-1, in contrast, indicated that targeted delivery of radiation by 67Cu-2IT-BAT-Lym-1 was the dominant antitumor mechanism.
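The paired cGy/MBq and cGy/μCi figures in this paragraph are linked by the fixed activity conversion 1 μCi = 0.037 MBq, which can be verified directly:

```python
MBQ_PER_UCI = 0.037   # 1 uCi = 0.037 MBq (exact)

def cgy_per_uci(cgy_per_mbq):
    """Convert a dose factor from cGy/MBq to cGy/uCi."""
    return cgy_per_mbq * MBQ_PER_UCI

# Marrow, liver, and tumor dose factors quoted in the study above:
for factor in (40.8, 41.1, 144.0):
    print(round(cgy_per_uci(factor), 2))   # 1.51, 1.52, 5.33

# LD50/30 consistency check: 21.6 MBq at 32.4 cGy/MBq total body
print(round(21.6 * 32.4))                  # ~700 cGy
```

Each published cGy/μCi value is consistent with its cGy/MBq counterpart, and the LD50/30 activity multiplied by the total-body dose factor reproduces the 700-cGy figure.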
In another preclinical study25 of breast cancer tumors (HBT 3477) in groups of mice receiving 90Y-DOTA-peptide-ChL6 at doses of 4.1–12.2 MBq, higher dose was associated with a longer period of regression (P < 0.001). The mean tumor volume at nadir decreased with increasing dose (P < 0.001) (Fig. 1), showing that a higher dose resulted in greater cell kill. The regrowth delay was calculated to be 41, 48, 63, 68, and more than 78 days for the groups receiving doses of 4.1, 5.9, 8.5, 9.6, and 12.2 MBq, respectively. Among the groups receiving 90Y-DOTA-peptide-ChL6 therapy at sublethal doses, the percentage of tumors achieving a response increased with increasing dose, reaching 79% at 9.6 MBq. The dose-response effect was significant (P < 0.01).
Using initial tumor size to correct for 90Y energy loss outside the tumor volume, the absorbed tumor doses were estimated.25 It is important to recall that Roberson and Buchsbaum,26 using pathology slide assessments, have shown that absorbed dose estimates for RIT with the 131I-17-1A MAb were internally consistent with those for external beam (60Co) therapy. Here, the authors had to consider edge effects and the spatial distribution of energy loss as the major correction factors in their comparison of these therapeutic modalities on LS174T colorectal xenografts.
Clinical Relevance: Dose-Response Relationships in Radioimmunotherapy
Dosimetry to the target tumor and major normal tissues has been considered in clinical radionuclide therapy since the mid-1940s, when radioiodine therapy was demonstrated to be effective on functioning metastases from adenocarcinoma of the thyroid.27, 28 Since the mid-1980s, others have recognized the need for radiation dosimetry-driven treatment planning in RIT.29–33 However, it is necessary to consider the current limitations of clinical methods for organ and tumor dosimetry before considering relationships between normal organ radiation dose and toxicity, or tumor radiation dose and response to therapy, in reported clinical studies. Although a full treatment is beyond the scope of this review, dosimetry based on planar views, while very useful for dosimetric estimates for major organs, is far from a precise science. There are major limitations to the dosimetric data currently available, most obviously for tumor, bowel wall, and marrow. Investigators frequently assume standard organ sizes and shapes, rather than using actual patient measurements, when calculating absorbed radiation dose estimates. Tumor dosimetry also requires accurate volumes. Even with reasonably accurate volumes determined by CT, the use of conjugate view scintigraphy to quantify radioactivity in deep-seated tumors provides information with a high standard error, and there is considerable overlap with vessels and normal organs. Newer single photon emission computed tomography (SPECT) approaches, with CT-to-SPECT data transformations, appropriate correction for attenuation and scatter, and verified calculation methods containing dose-volume histogram output,34, 35 may provide more accurate quantitation than current methods. A practical approach may be the calibration of planar images with selected SPECT acquisition.
Estimating absorbed dose for an internal emitter requires three steps: 1) determination of the activity in the target at appropriate time intervals, 2) integration of the activity over time to obtain the cumulated activity (Ã), and 3) determination of the S factor, so that Dose = S · Ã. Relative errors have been found to be largest in determining the correct S value for a given patient, because of uncertainty regarding the target size, shape, and microscopic distribution of radioactivity. Patient organ masses may be severalfold larger36 than those specified in the MIRD literature.37 Likewise, MIRD-type phantoms are, at best, only caricatures of average patient geometry,37 further confounding the determination of S. Since red marrow effects are often the limiting toxicity in RIT trials, absorbed dose estimates have frequently become central to defining administered dose. However, both measurement of marrow activity (Ã) and determination of S have proven problematic. Many physicists use the blood curve as a surrogate for the poorly defined red marrow. Sgouros et al.38 have recently reviewed the situation for tracers that target marrow components. A compounding problem in marrow dosimetry is RIT-targeted tumor cells in the marrow cavity. Use of planar imaging data from selected regions of marrow to calculate marrow activity has been correlated with observed toxicity, with varied results.34, 39–44
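The three steps can be illustrated with a minimal numerical sketch. The time-activity data here are hypothetical; the integration uses simple trapezoids plus an analytic tail that assumes only physical decay after the last measured point, one common simplification among several in practice.

```python
import math

def cumulated_activity(times_h, activities, lam_per_h):
    """Step 2: integrate the time-activity curve by trapezoids, then add
    an analytic tail assuming pure physical decay after the last point."""
    area = 0.0
    for i in range(len(times_h) - 1):
        area += 0.5 * (activities[i] + activities[i + 1]) * (times_h[i + 1] - times_h[i])
    area += activities[-1] / lam_per_h   # integral of A_last * exp(-lam * t)
    return area                          # units: activity x hours

def absorbed_dose(a_tilde, s_factor):
    """Step 3: MIRD schema, Dose = S x cumulated activity (units per S)."""
    return s_factor * a_tilde
```

As a sanity check, a purely exponential curve A(t) = A0·e^(−λt) sampled densely should integrate to A0/λ; the dominant clinical uncertainty, as noted above, lies not in this integration but in choosing the correct patient-specific S.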
Thus, in spite of considerable interest in dosimetric information, during the past two decades most Phase I, II, and III studies of RIT determined or used the MTD per square meter of body surface area (mCi/m2) or per kilogram of body weight (mCi/kg). Press et al.,45–47 breaking with tradition, were the first to use high-dose RIT with bone marrow transplant, using treatment designed to determine the radiopharmaceutical dose that would deliver no more than defined maximum radiation doses to normal organs, excluding marrow, in each patient. Correlations of administered dose, calculated radiation dose, and toxicity can readily be demonstrated with the data from these dosimetry-driven myeloablative 131I-anti-CD20 studies (Fig. 2). Therapeutic infusions were administered to cohorts of three patients in a dose-escalation format to deliver 10, 15, 17, 20, 24, 27, and 31 grays (Gy) to the critical organ estimated by prior biodistribution study to receive the highest dose. Estimated mean absorbed dose ratios of tumor to normal organ ranged from 1.8 (tumor to lung) to 10.2 (tumor to bone marrow). Severe myelosuppression occurred as anticipated in all patients, but all patients demonstrated evidence of reconstitution of normal trilineage hematopoiesis. The 15 patients who received 131I–anti-B-cell MAb infusions, with doses of up to 24 Gy delivered to normal organs, tolerated the therapy with only minimal discomfort. All four patients treated at the two highest dose levels (27 and 31 Gy) experienced marked asthenia, nausea, and anorexia. One patient developed hemorrhagic pneumonitis and congestive cardiomyopathy 2 months after therapy, which resolved with supportive care. The one patient treated at the highest calculated normal organ dose developed reversible hypotension necessitating dopamine.
Thus, the trial was terminated after these two cardiopulmonary toxicities, with an MTD presumed to be 27 Gy to the cardiopulmonary system.48 Twenty-five patients were entered into a Phase II trial, receiving a 25–27 Gy maximum normal organ dose.49 The estimated dosimetry and actual toxicity are compared in Figure 2.
More recently, pretargeted RIT demonstrated a dramatic dosimetric correlation: administration of 90Y doses up to 5 times higher than possible with directly labeled 90Y antibodies, without marrow or peripheral blood stem cell support, revealed the clinical need to estimate the dose-limiting radiation delivered to other vulnerable normal organs.50, 51 This single-dose 90Y MTD study of PRIT® employed the antibody NR-LU-10 conjugated with streptavidin, a glycoprotein clearing agent, and 90Y-DOTA-biotin, and escalated patient cohorts to 140 mCi/m2. Indium-111 (111In) DOTA-biotin (3–5 mCi) was coinjected for gamma camera imaging, and dosimetry for 90Y was extrapolated from the 111In kinetics. Patient planar images identified the bowels and kidneys as potential organs at risk for clinically significant radiation toxicity, and bowel, bone marrow, and renal toxicity were observed. A new method of estimating the activity localized in the bowel wall was developed, and S values were derived to calculate bowel wall dose from radioactivity present in the lumen and the bowel wall. Grade 4 diarrhea was observed in patients estimated to have received 6850–14,000 cGy to the wall of the large bowel. The correlation coefficient of intestinal toxicity with absorbed dose was 0.64. It was also noted that myelotoxicity correlated better with marrow blood dose (r = 0.72) than with whole-body dose (r = 0.44).50
Other dose escalation studies based on body surface area or body weight demonstrated substantial differences in the degree of correlation of dose and toxicity. In 1992, Meredith et al. reported a Phase I MTD trial of 131I-labeled chimeric B72.3 (human immunoglobulin G4), in which 12 patients with metastatic colorectal cancer received 18 mCi/m2, 27 mCi/m2, or 36 mCi/m2.52 Bone marrow suppression was the only side effect; it was directly related to the injected dose (mCi) and correlated particularly well with whole-body radiation dose estimates to marrow (r = 0.85; P = 0.0004). The lowest dose level produced no marrow suppression, whereas 27 mCi/m2 resulted in Grade 1 and 2 marrow suppression in two of three patients. The MTD was 36 mCi/m2; all six patients at this dose level had at least Grade 1 marrow suppression, and two had Grade 3 and 4 marrow suppression. Thus, in these relatively homogeneous patients with colon cancer, little or no marrow involvement, and no histories of multicourse chemotherapy, 131I-B72.3 given on an mCi/m2 basis produced a consistent decrement in marrow function.
However, many investigators have clearly demonstrated a lack of predictive value for marrow toxicity from mCi/m2 dose levels or from blood- or whole-body-calculated dose. To better estimate bone marrow dose, particularly when marrow is invaded by tumor, alternate approaches have been developed. DeNardo, Macey, and other investigators have suggested that marrow radiation dose calculated from marrow uptake in three lumbar vertebrae be added to the calculated marrow dose from blood and the whole body.39–41 The prediction of myelotoxicity was improved over the blood and total-body dose calculations in the lymphoma patient groups studied.40, 41, 53 Juweid et al. reported estimated red marrow dose by sacral scintigraphy in a study group treated with LL2 for non-Hodgkin lymphoma (NHL), in which the scintigraphy was technically closely controlled (Fig. 3).42 This study provided evidence of a correlation between hematologic toxicity grade and calculated marrow dose.
Other studies of lymphoma patients have not demonstrated a correlation using similar imaging-derived dosimetry.43 One frequent problem for such dosimetric estimates from planar imaging, particularly in NHL, is tumor overlying portions of the sacral and lumbar spine (Fig. 4). A major Phase I/II multicenter trial demonstrated this problem when 111In-Zevalin™ (111In ibritumomab tiuxetan, IDEC-In2B8; IDEC Pharmaceuticals Corporation, San Diego, CA) dosimetry studies of patients with NHL were performed prior to therapy to estimate the absorbed radiation dose to normal organs and bone marrow from treatment with 90Y-Zevalin™ (90Y ibritumomab tiuxetan, IDEC-Y2B8; IDEC Pharmaceuticals Corporation, San Diego, CA) (Fig. 4).43 In 56 patients with imaging-derived dosimetry data, normal organ and red marrow radiation absorbed doses were estimated to be well under the protocol-defined upper limits of 20 and 3 Gy, respectively, and the clinical MTD was defined as 0.4 mCi/kg in patients with normal baseline platelet counts. The median estimated absorbed radiation dose to tumor was 17 Gy (range, 5.8–67 Gy). However, no correlation was noted between treatment-related hematologic toxicity and calculated red marrow radiation absorbed dose from 111In planar imaging of sacral or lumbar areas, or blood or whole-body clearance. They concluded that 90Y-Zevalin™ administered to the defined patient population at 0.4 mCi/kg resulted in acceptable radiation absorbed doses to normal organs and that hematologic toxicity was highly dependent on bone marrow reserve in this heavily pretreated population.
However, bone marrow dosimetry methods that are both practical and predictive are needed, and multiple new approaches to calculating or evaluating marrow dose are under study. These include models that calculate dosimetry from pharmacokinetics.44, 54, 55 One such model was used in a careful study of cancer patients receiving three-step RIT, with 90Y biotin given either as 90Y-DOTA-biotin or 90Y-DTPA-biotin.55 Calculations from this model, compared with conventional MIRD estimates, yielded quite similar major organ dosimetry. Platelet toxicity and global hematologic toxicity showed significant correlations with the estimated red marrow dose calculated from this model.
The inherent difficulty of evaluating the activity in marrow, and thus of accurately calculating RIT radiation dose estimates for marrow, led Wong et al. to suggest that stable chromosomal translocations (SCTs) that result after radiation exposure may provide an internal dosimeter with which to validate RIT marrow dosimetry estimates. Increases in the frequency of SCTs are observed after radiation exposure and are highly correlated with absorbed radiation dose.56 SCTs are cumulative after multiple radiation doses and are conserved through an extended number of cell divisions. The development of chromosome-specific DNA probes allows a rapid and quantitative measure of these translocations. Wong et al. recently evaluated increases in SCT frequency in peripheral lymphocytes after RIT; the magnitude of these increases correlated with estimated radiation dose to marrow and to the whole body in a Phase I dose escalation therapy trial of 90Y-chimeric T84.66.56 A linear correlation was observed between cumulative marrow dose and increases in SCT frequencies for chromosome 3 (R2 = 0.63) and chromosome 4 (R2 = 0.80), and between increases in SCT frequency and whole-body radiation dose or administered activity (R2 = 0.67–0.89). There was less correlation between the observed decrease in white blood cell or platelet counts and marrow dose, whole-body dose, or administered activity (R2 = 0.28–0.43). These data describe one of the strongest radiation dose-response and activity-response relationships reported with RIT.
However, the complexities associated with clinical application of imaging-based dosimetry stimulated Wahl et al.44 to develop and use a surrogate dose estimate to indicate the relative magnitude of marrow radiation dose. Calculated total-body dose was established as the benchmark for dose cohorts in lymphoma patients receiving 131I–anti-CD20.44 Dose was individualized for each patient because pharmacokinetic variation of 131I-anti-CD20 between patients was expected, in part because CD20 is expressed in both malignant and normal lymphoid tissue. Thirty-four patients with CD20-expressing NHL were first studied with one or more dosimetric doses of ~5 mCi of 131I-anti-CD20 antibody given after varying amounts of unlabeled anti-CD20 antibody.44, 57 Each patient was then treated with a patient-specific radioimmunotherapeutic dose designed to deliver a specified radiation dose to the whole body, at levels from 25 to 85 cGy. In these studies, 131I labeling was used for both the dosimetry study and the therapy dose. Bone marrow toxicity was dose-limiting and dependent on the total-body radiation dose. The Michigan group found that a total-body dose of 85 cGy was not myeloablative but exceeded the MTD in the subset of patients who had had chemotherapy. Thrombocytopenia appeared to be more marked in patients with prior bone marrow transplantation. A total-body dose of 75 cGy was established as the MTD in patients without prior bone marrow transplantation. Declines in white blood cell and platelet counts were clearly related to the total-body radiation dose. In this patient population, which had < 25% marrow involvement, the total-body dose predicted by dosimetric imaging was a robust predictor of hematopoietic toxicity. This approach was considered necessary to avoid overdosing or underdosing patients.
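The patient-individualized step described above can be sketched as follows. The numbers are hypothetical, and the sketch assumes the tracer and therapy doses share the same kinetics, so that whole-body dose scales linearly with administered activity:

```python
def therapy_activity_mci(target_total_body_cGy, tracer_total_body_cGy, tracer_mci):
    """Scale the tracer-derived whole-body dose per mCi up to the
    administered activity that delivers the prescribed total-body dose.
    Assumes linear kinetics (dose proportional to activity)."""
    cgy_per_mci = tracer_total_body_cGy / tracer_mci
    return target_total_body_cGy / cgy_per_mci

# Hypothetical patient: a 5-mCi tracer study delivered 3 cGy total-body
# (0.6 cGy/mCi), so reaching a 75-cGy prescription requires ~125 mCi.
print(therapy_activity_mci(75.0, 3.0, 5.0))   # ~125 mCi
```

A patient with faster clearance would show a lower tracer-derived cGy/mCi and therefore receive a larger administered activity, which is exactly the fivefold mCi/kg spread discussed in the following paragraph.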
A preliminary analysis was made of a large patient group treated with 131I-anti-CD20, in which the mCi/kg calculated to deliver a total-body dose of 75 cGy varied markedly (over a fivefold range).57 Although the average treatment was 1.21 mCi/kg, if all patients had been treated at this mCi/kg level, about half would have received > 10% more or less than the target dose to the total body (Fig. 5). The whole-body dosimetric method with 131I-anti-CD20 reduced the variation in total-body, and thus marrow, radiation dose that would otherwise have occurred between patients because of differences in patient size and individual 131I-anti-CD20 pharmacokinetics.
Thus, with 131I-anti-CD20, simplified total-body dosimetry was effectively used as a surrogate for calculated absorbed radiation dose to marrow tissue, in essence becoming a surrogate for a surrogate of radiation dose to marrow for calculation of the highest “safe” nonmyeloablative injected dose for each patient.
Clinical data demonstrating the relationships among dose administered, radiation dose estimate, and tumor response are less robust. There are major problems in estimating radiation dose, particularly for smaller and internal tumors. However, there is relevant evidence that supports the concept that the dose directly impacts therapeutic efficacy. This notwithstanding, evaluating response rate over large ranges in injected dose and across studies with a wide range of patient characteristics would be pointless. A selective review of the evidence starts with unlabeled antibody alone versus labeled antibody. Several such studies have provided data: ChL6 versus 131I-ChL6,58, 59 Lym-1 versus 131I-Lym-1,60, 61 and anti-CD20 versus 90Y-anti-CD20.62 The latter study compared unlabeled with radiolabeled anti-CD20 in 90 comparable patients with refractory low-grade NHL randomized to Zevalin or rituximab (Rituxan, IDEC Pharmaceuticals Corporation, San Diego, CA) (2 prior treatments). The Zevalin group received 250 mg/m2 × 2 rituximab and 0.4 mCi/kg 90Y-Zevalin, while the rituximab group received 4 weekly doses of 375 mg/m2 rituximab. The overall response rate for the Zevalin group was 80% and for the rituximab group 56% (significantly different at P < 0.001), suggesting clearly that this radioactive antibody is better than a clinically useful nonradioactive antibody.
A second type of evidence relates whole-body radiation dose or injected mCi dose to response and its duration. In the studies from the University of Michigan, patients who received a total-body dose > 50 cGy with 131I-tositumomab (Bexxar; Corixa Corporation, Seattle, WA; SmithKline Beecham Corporation, Philadelphia, PA) had longer disease-free survival than those who received < 50 cGy (P < 0.05).63 DeNardo et al. reported an 131I-Lym-1 MTD study demonstrating that all of the patients at the highest mCi/kg dose of 131I-Lym-1 MAb had complete and durable remissions61 and that six of seven complete responses occurred at three of the six higher dose levels. In other analyses, it was also demonstrated that therapeutic remission after 131I-Lym-1 therapy was associated with increased survival, significant in multivariate analysis.64, 65
More direct and recent evidence of a radiation dose-tumor response relationship has been reported by Koral et al. in their study of untreated low-grade NHL, in which the tumor dose was carefully calculated and correlated with tumor response. 131I-tositumomab (anti-CD20 antibody), in conjunction with unlabeled tositumomab, was employed in a Phase II clinical trial for the therapy of 76 previously untreated follicular NHL patients.34 The 30 individual tumors in patients with partial responses had a mean radiation dose of 369 ± 54 cGy, whereas the 56 individual tumors in patients with complete responses had a mean radiation dose of 720 ± 80 cGy. According to a mixed analysis of variance, there was a trend toward a significant difference between the radiation dose absorbed by individual tumors in patients with complete responses and that in patients with partial responses (P = 0.04 in one analysis and P = 0.06 in a second). Because the response was complete in 75% of the patients, analysis of more patients is needed to establish a more definitive difference. However, in this group of patients, reduction in tumor size was measured and correlated with SPECT-corrected tumor dose. A positive, statistically significant correlation was seen between tumor dose and the percentage of tumor volume reduction, supporting the concept that higher radiation doses to tumors are more effective.34
Evidence of relationships among administered dose, radiation dose, efficacy, and toxicity of RIT, found in preclinical and clinical reported studies, has been reviewed in this article. Selected representative excerpts have been presented. The evidence in preclinical models seems to suggest a frequent direct relationship of injected dose and estimated radiation dose to toxicity and efficacy, unless unique complexities exist that vary the biodistribution among animals. Although dosimetric animal models are still in development, there is evidence of a general correlation of response and toxicity with estimated absorbed doses. These, however, are highly controlled subjects and tumors. In clinical trials, in which the variations among tumors and patients are enormous, what seems on the surface to be contradictory information proves, when examined in depth, to reflect the present inability to relate comparable information across studies.
Bone marrow dosimetry continues to be a work in progress. A blood-derived marrow dosimetry method is usually acceptable for evaluation of estimated red marrow absorbed radiation dose when RIT is used in diseases that do not localize to bone marrow. However, blood-derived red marrow dosimetry obviously does not account for specific antibody or radiolabel targeting to the marrow elements, bone, or tumor. Current image-derived marrow dose methodology, although useful in defined circumstances and experienced hands, requires various degrees of calibration that are currently not easily transferred to multiple clinics for use with patients who have advanced disease. Neither blood-derived nor image-derived marrow dosimetry can be expected to account for the decreased bone marrow reserve and increased sensitivity of regenerated marrow stem cells seen in patients in whom bone marrow has been damaged by prior chemotherapy and external beam radiation.
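The blood-derived approach referred to above can be outlined as follows. This is a minimal sketch of the general idea only: the marrow activity concentration is assumed to track the blood (valid, as noted, only without specific targeting of marrow, bone, or tumor in marrow), and the red-marrow-to-blood ratio, marrow mass, and S-value used here are hypothetical placeholders, not validated constants for any particular radiopharmaceutical.

```python
import math

def cumulated_blood_concentration(times_h, conc_MBq_per_mL):
    """Trapezoidal integral of blood activity concentration (MBq·h/mL),
    with a tail extrapolated from the terminal slope of the last two samples."""
    auc = sum(0.5 * (conc_MBq_per_mL[i] + conc_MBq_per_mL[i - 1])
              * (times_h[i] - times_h[i - 1])
              for i in range(1, len(times_h)))
    lam = math.log(conc_MBq_per_mL[-2] / conc_MBq_per_mL[-1]) / (times_h[-1] - times_h[-2])
    return auc + conc_MBq_per_mL[-1] / lam

def red_marrow_dose_Gy(times_h, conc, rmblr=0.36, marrow_mass_g=1500.0,
                       s_mGy_per_MBq_h=0.0031):
    """Self-dose term only: D_RM ~ [A-tilde]_blood x RMBLR x m_RM x S(RM<-RM).
    rmblr, marrow_mass_g, and s_mGy_per_MBq_h are illustrative placeholders.
    Assumes unit density, so marrow mass in g doubles as volume in mL."""
    a_tilde_MBq_h = cumulated_blood_concentration(times_h, conc) * rmblr * marrow_mass_g
    return a_tilde_MBq_h * s_mGy_per_MBq_h / 1000.0

# Hypothetical serial blood samples: times (h) and concentrations (MBq/mL)
times = [1, 4, 24, 72, 168]
conc = [0.05, 0.04, 0.02, 0.008, 0.002]
print(f"estimated red marrow dose: {red_marrow_dose_Gy(times, conc):.4f} Gy")
```

As the text emphasizes, no such calculation captures marrow-specific targeting or the reduced marrow reserve of heavily pretreated patients; those effects lie outside a blood-sample model entirely.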
In myeloablative stem cell or bone marrow transplant trials, dosimetry has proved useful in determining nonmarrow dose-limiting critical organ toxicity. It has also frequently been shown that dosimetry is necessary to determine the individual maximum radiolabeled antibody dose that can be safely administered without marrow support when using certain radiopharmaceuticals, most notably those labeled with 131I. Unlike radiometal-chelated pharmaceuticals, 131I-labeled agents often show significant variability in urinary excretion of the radioactive dose. This excretion can be followed by total-body activity measurements, which appear to correlate well with marrow toxicity in lymphoma patients with < 25% marrow involvement. However, when little activity is excreted, as with radiometal labels, then injected dose, patient weight or body surface area, and extent of pretreatment regimens may be the most efficient and effective predictors of toxicity within a limited dose range. In advanced lymphoma patient populations, where long-term palliation (not cure) is the realistic goal, both agents and approaches appear very effective.
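The total-body-activity approach mentioned above can be sketched in outline: serial whole-body measurements of retained activity are fit to a single exponential, a whole-body residence time is derived, and the administered activity is scaled to deliver a prescribed total-body dose. The retention values and the dose-rate factor k below are hypothetical placeholders for illustration, not published calibration values for any agent.

```python
import math

def whole_body_residence_time_h(times_h, fraction_retained):
    """Least-squares fit of ln(retention) vs. time; for f(t) = exp(-lam*t)
    with f(0) = 1, the residence time is the integral 1/lam."""
    n = len(times_h)
    ys = [math.log(f) for f in fraction_retained]
    mx, my = sum(times_h) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times_h, ys))
             / sum((x - mx) ** 2 for x in times_h))
    return -1.0 / slope

def activity_for_target_total_body_dose_MBq(target_cGy, tau_h, weight_kg,
                                            k_cGy_kg_per_MBq_h=0.02):
    """A = D_target * m / (k * tau): slower whole-body clearance (larger tau)
    means less administered activity for the same total-body dose.
    k is an illustrative placeholder, not a published dose factor."""
    return target_cGy * weight_kg / (k_cGy_kg_per_MBq_h * tau_h)

# Hypothetical whole-body retention measurements: times (h), fraction retained
tau = whole_body_residence_time_h([2, 24, 48, 96], [0.95, 0.70, 0.50, 0.25])
print(f"residence time ~ {tau:.1f} h")
print(f"activity for 75 cGy, 70 kg: "
      f"{activity_for_target_total_body_dose_MBq(75, tau, 70):.0f} MBq")
```

The design point is the one the text makes: because urinary excretion of 131I varies widely among patients, the same prescribed total-body dose can correspond to very different administered activities.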
In external beam therapy, patients with curable tumor may benefit from 3-D imaging and computer planning to maximize tumor dose with minimal toxicity, and patients with advanced, heavily pretreated disease may require only minimal dosimetry to direct palliative therapy. RIT clinical applications and future treatment and dose planning are likely to require similar approaches to patient care.
The authors thank Nona L. Simons for substantial assistance in preparing the manuscript.