This research was based on data derived from the United Network for Organ Sharing on October 6, 2003. The content is the responsibility of the author alone and does not necessarily reflect the views or policies of the Department of Health and Human Services.
One of the main limitations of liver transplantation as a treatment for end-stage liver disease is the scarce supply of donor livers relative to the number of patients in need of liver transplantation.1 According to the United Network for Organ Sharing (UNOS), 6,168 liver transplants were performed in the United States in 2004, and 17,895 patients are currently waiting for liver transplantation. This imbalance between supply and demand is likely to get even worse, since the number of transplants per year has remained relatively stable in the United States in recent years, whereas the number of patients on the waiting list has been increasing dramatically.2
One way to increase the availability of organs for liver transplantation is to expand the criteria that are used to determine whether an organ from a potential liver donor is acceptable for liver transplantation. Unfortunately, no such universally accepted criteria exist. Instead, individual transplant programs use different, and often poorly defined, criteria to determine whether to use the liver of a potential liver donor for transplantation. Such criteria include donor age, donor high-risk behavior, the degree of steatosis on liver biopsy, cold ischemia time, “down time,” and the macroscopic appearance of the liver.
In addition to donor characteristics, many recipient characteristics are important predictors of posttransplant survival. The aim of this study was to identify donor and recipient characteristics that are important predictors of graft survival following liver transplantation and to use these predictors to develop and validate a survival model using data from UNOS. Such a model could determine risk scores based on donor, recipient, or donor/recipient characteristics that would be directly related to expected posttransplant survival. These risk scores may be used, together with data on the expected mortality on the transplant waiting list, to determine whether or not a liver should be used for transplantation, or to form the basis for future discussions on “expanding” donor criteria to address the imbalance between supply and demand in liver transplantation. In addition, donor/recipient scores may be used by physicians to stratify liver transplantations into “high risk” (lower graft survival and higher risk of posttransplant complications) and “low risk” (higher graft survival and lower risk of posttransplant complications). Finally, these risk scores may be used in the future to inform liver transplant candidates and their doctors what posttransplant survival would be expected when a given donor is offered, and they may be particularly helpful for marginal or high-risk donors.
UNOS, United Network for Organ Sharing; HCV, hepatitis C virus; MELD, model for end-stage liver disease; BMI, body mass index; SOLD, score of liver donor.
PATIENTS AND METHODS
Transplant centers and organ procurement organizations in the United States are required to submit to UNOS3 standardized data collection forms, including the Transplant Candidate Registration Form, which contains patient information at the time of listing for liver transplantation; the Deceased Donor Registration Form, which contains information on all consented recovered and nonrecovered donors; the Transplant Recipient Registration Form, which includes the patient status at discharge, pretransplant and posttransplant clinical information, and treatment data; and the Transplant Recipient Follow-up Form, which is generated 6 months posttransplant and on each subsequent transplant anniversary and includes patient status and clinical and treatment information. These forms are currently submitted to UNOS electronically, and the information is entered into a single Standard Transplant Analysis and Research file, which includes 1 record per transplant event and the most recent follow-up information on patient status as of the date that the file was created. The Standard Transplant Analysis and Research file created by UNOS on October 6, 2003, was kindly provided to the author for this study.
Data were available from UNOS for 52,845 patients aged ≥ 18 years who underwent orthotopic liver transplantation in the United States between 1987 and 2003. The analysis was limited to 38,811 liver transplantations that occurred after April 1, 1994, because potentially important variables such as donor alcohol use, donor weight, recipient diabetes status, and cold ischemia time were not routinely recorded prior to that date. We excluded patients who had donors under 10 (n = 519) or over 75 (n = 508) years of age, living donors (n = 1,249), split-liver donors (n = 564), non-heart-beating donors (n = 328), or donors with a serum sodium concentration >170 mmol/L (n = 591). We excluded patients with multiple simultaneous organ transplantation (n = 1,171), previous liver transplantation (n = 3,219), or no available follow-up records after transplant (n = 658), leaving 30,004 participants in the univariate analyses. In addition, 9,703 patients were excluded from the multivariate analyses because information was missing on 1 or more of the covariates, leaving 20,301 in the current analysis, including 6,477 with hepatitis C virus (HCV) infection.
Cox proportional hazards regression was used to model graft survival after liver transplantation using a number of prognostic variables.4 Graft failure was defined as liver failure (with or without retransplantation) or patient death from any cause. The variable for graft failure as defined above (“gstatus”) is provided in the UNOS Standard Transplant Analysis and Research file. Time was measured from the date of liver transplantation to the date of liver failure, death, or last follow-up (variable “gtime” in the UNOS Standard Transplant Analysis and Research file). Patients who remained alive without liver failure were censored at the time they were last traced alive.
Univariate survival analyses were initially performed to identify donor and recipient characteristics that were significant predictors of survival at a P < 0.05 level from the following list of a priori chosen potential predictors.
Donor characteristics: body mass index (calculated as the weight in kilograms divided by the square of the height in meters) categorized as 15 to <25, 25 to <30, 30 to <35, 35 to <40, and 40 to <55 kg/m2; age; cold ischemia time; presence of diabetes mellitus or hypertension; history of alcohol dependency; race/ethnicity (categorized as white, black and African American, Hispanic, and other); gender; cigarette use (>20 pack-years ever or not); degree of steatosis on liver biopsy (categorized as 0-19%, 20-35%, and >35%); laboratory values immediately prior to organ donation, including serum creatinine, aspartate aminotransferase, alanine aminotransferase, and total bilirubin; and intravenous or other drug use in the 6 months prior to organ donation.
Recipient characteristics: UNOS urgency status (categorized as status 1 [fulminant hepatic failure, or immediate hepatic artery thrombosis or graft nonfunction after liver transplantation] or other); model for end-stage liver disease (MELD) score at the time of transplantation, calculated using the formula MELD = [0.957 × ln(creatinine) + 0.378 × ln(bilirubin) + 1.120 × ln(international normalized ratio) + 0.6] × 10,5, 6 where the international normalized ratio was approximated as the prothrombin time/12.5 when not available; serum albumin at the time of transplantation; liver disease (categorized as hepatitis C [± hepatitis B], hepatitis B, alcoholic cirrhosis, primary biliary cirrhosis, cryptogenic cirrhosis, hepatocellular carcinoma or cholangiocarcinoma, and other); gender; race/ethnicity (categorized in the same way as for donors); body mass index (at the time of liver transplantation, categorized in the same way as for donors); diabetes mellitus; and time period of liver transplantation (categorized into 4 periods from 1994 to 2003, each with a quarter of the total number of transplantations).
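For readers who wish to reproduce the scoring, the MELD calculation described above can be sketched in Python. This is a minimal illustration: the function and argument names are the author's own, the prothrombin-time fallback follows the approximation described in the text, and the 6-40 bounds follow the range noted in the table footnotes.

```python
import math

def meld_score(creatinine, bilirubin, inr=None, prothrombin_time=None):
    """MELD score per the formula above; the INR is approximated as
    prothrombin time / 12.5 when not directly available."""
    if inr is None:
        inr = prothrombin_time / 12.5  # approximation described in the text
    raw = (0.957 * math.log(creatinine)
           + 0.378 * math.log(bilirubin)
           + 1.120 * math.log(inr)
           + 0.6) * 10
    # Scores are bounded to the 6-40 range noted in the table footnotes.
    return min(max(raw, 6.0), 40.0)
```

For example, a patient with creatinine 2.0 mg/dL, bilirubin 3.0 mg/dL, and INR 1.5 would score approximately 21.3.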
Modeling of Predictors
All categorical variables were modeled as dummy variables. For continuous variables we considered standard transformations (logarithmic, square, square root) as well as categorization into 5 categories based on the values of the 25th, 50th, 75th, and 90th centiles. The likelihood ratio test was used to determine which representation best predicted survival.
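The centile-based categorization can be sketched as follows. This is an illustration only: it assumes a simple nearest-rank percentile rule (statistical packages differ in their interpolation details), and the function names are the author's own.

```python
def centile_cutoffs(values, centiles=(25, 50, 75, 90)):
    """Cut points at the 25th, 50th, 75th, and 90th centiles using a
    simple nearest-rank rule (an assumption for illustration)."""
    s = sorted(values)
    n = len(s)
    return [s[min(n - 1, max(0, int(round(c / 100 * n)) - 1))] for c in centiles]

def categorize(x, cutoffs):
    """Assign a value to 1 of 5 categories (0-4) given the 4 cut points."""
    for i, c in enumerate(cutoffs):
        if x <= c:
            return i
    return len(cutoffs)
```

Applied to, say, cold ischemia times, this yields the 5 ordered categories that were then compared against the standard transformations by likelihood ratio test.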
Selection of Predictors for Multivariate Model
To pick a small subset of donor and recipient characteristics that adequately predicted survival, we used both forward stepwise and backward elimination selection methods. Specifically, all characteristics that were significant (P < 0.05) in univariate analyses were entered into a multivariate model. We then eliminated from the model all donor variables that were not statistically significant in the multivariate analysis. Each of the eliminated variables was then individually added to the model of the significant variables and was kept if it was statistically significant or if its inclusion affected the value of the coefficients of other variables by more than 20%.
Region of Transplantation
Transplant centers and organ procurement organizations in the United States belong to 1 of 11 geographical transplant regions. Because transplant survival may vary by region, all multivariate analyses were stratified on geographical region. Stratifying by region yields equal coefficients for each predictor across regions but with a baseline hazard unique to each region. Information on the center or hospital at which transplantation was performed is not available.
Etiology of Liver Disease
Preliminary analyses suggested that there were large differences in the models derived for patients with HCV infection compared to patients without HCV infection and that a single model could not adequately predict survival in both groups. Hence, models were derived separately for persons with and without HCV.
A data-splitting approach was used in which the dataset was randomly divided into 10 equal model validation groups, each containing 10% of the population.7 For each group, a model predicting survival was fit to the remaining 90% of the population (the model-building group) using the process described above. This model was then used to predict survival in the 10% of the population not involved in the derivation of the model. The Cox proportional hazards model takes the following standard form:
S(t,X) = S0(t)^exp(aX1 + bX2 + … + zXn), where S(t,X) is the predicted survival at time t of a person with values X1 to Xn for each of “n” predictors of survival, and S0(t) is the “baseline” survival at time t in persons who have “baseline” (that is, zero) values for each predictor. The sum (aX1 + bX2 + … + zXn) is also known as the risk score. Risk scores were calculated using models derived from the model-building groups for each person in the model validation group. Persons were divided into 3 non-overlapping risk score groups using the values corresponding to the 33rd and 66th centiles. For each group the mean risk score was calculated and then used to calculate the predicted survival using the formula above. The observed survival for each group was computed using the Kaplan-Meier method. Observed and predicted survivals were compared graphically.
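The validation procedure described above (random 10-fold splitting, survival prediction from the Cox form, and grouping by risk-score tertiles) can be sketched as follows. The function names, the fixed seed, and the nearest-rank centile rule are illustrative assumptions; in the study itself the risk scores come from the fitted Cox models.

```python
import math
import random

def ten_fold_splits(n_records, seed=0):
    """Randomly partition record indices into 10 roughly equal validation
    groups; pair each with the remaining ~90% as its model-building set."""
    idx = list(range(n_records))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

def predicted_survival(s0_t, risk_score):
    """Cox form: S(t, X) = S0(t) ** exp(risk score)."""
    return s0_t ** math.exp(risk_score)

def tertile_groups(risk_scores):
    """Split risk scores into 3 non-overlapping groups at the 33rd and
    66th centiles (nearest-rank rule; an assumption for illustration)."""
    s = sorted(risk_scores)
    c33 = s[int(round(0.33 * len(s)))]
    c66 = s[int(round(0.66 * len(s)))]
    low = [r for r in risk_scores if r <= c33]
    mid = [r for r in risk_scores if c33 < r <= c66]
    high = [r for r in risk_scores if r > c66]
    return low, mid, high
```

Note that a risk score of zero reproduces the baseline survival exactly, since exp(0) = 1.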
The donor and recipient characteristics of the patients included in or excluded from the current analysis are shown in Table 1. Liver transplantations excluded from the current analysis because of missing data had very similar characteristics except for a longer mean cold ischemia time, a higher proportion of status 1 patients, and a slightly different geographical regional distribution.
Table 1. Donor and Recipient Characteristics of the Liver Transplants Presented According to Whether the Patients Were Included or Excluded From Analysis Due to Missing Data
Liver Transplants Included in the Analysis (n = 20,301)
Liver Transplants Excluded From Analysis Because of Missing Data in at Least One Predictor (n = 9,703)
MELD score values range 6-40.
The 11 UNOS transplant regions were randomly assigned the letters A-K.
Included: A = 14.5%, B = 8.6%, C = 3.6%, D = 14.4%, E = 10.5%, F = 11.8%, G = 8.8%, H = 3.8%, I = 5.6%, J = 8.5%, K = 10.0%
Excluded: A = 16.0%, B = 10.0%, C = 3.5%, D = 14.9%, E = 6.9%, F = 15.3%, G = 5.0%, H = 3.1%, I = 10.0%, J = 7.7%, K = 7.7%
Predictors of Survival After Liver Transplantation in Patients Without HCV Infection
The following donor characteristics were significant predictors of graft survival in univariate analyses: age, cold ischemia time, diabetes mellitus, history of alcohol dependence, gender, race/ethnicity, terminal serum aspartate aminotransferase, alanine aminotransferase, total bilirubin, and creatinine. All recipient characteristics described in the Patients and Methods section were significant in univariate analyses. In a multivariate model that included only the donor and recipient characteristics significant in univariate analyses, all recipient variables were significant except gender and period of transplantation, as were the following donor variables: age, cold ischemia time, gender, and race/ethnicity. The multivariate model was then limited to these variables, all of which remained significant. Additional variables were individually added to this model, none of which was significant. However, addition of recipient gender substantially modified the values of the coefficients of other predictors; hence, it was included in the final model. For the continuous variables, survival was best predicted (as demonstrated by the likelihood ratio test) with categorization of cold ischemia time, recipient age, and recipient albumin; a linear term for MELD score; and squaring of donor age. Thus, for patients without HCV the variables retained in the final multivariate model included 4 donor characteristics (age, cold ischemia time, gender, and race/ethnicity) and 9 recipient characteristics (age, body mass index [BMI], MELD score, status at time of transplantation, gender, race/ethnicity, diabetes mellitus, cause of liver disease, and serum albumin) (Table 2).
Table 2. Adjusted Hazard Ratios and Regression Coefficients for the Predictors Included in the Multivariate Model for Patients Without HCV
Predictors of Survival After Liver Transplantation in HCV-infected Patients
A similar sequential process to that described above resulted in a final multivariate model for patients with HCV that included 4 donor characteristics (age, cold ischemia time, gender, and race/ethnicity) and 7 recipient characteristics (age, BMI, MELD score, status at time of transplantation, gender, race/ethnicity, and diabetes mellitus) (Table 3). This model was different from the model in patients without HCV in that albumin and cause of liver disease were not included and MELD score was coded as a categorical rather than continuous variable.
Table 3. The Adjusted Hazard Ratios and Regression Coefficients for the Predictors Included in the Multivariate Model for Patients With HCV
Figures 1 and 2 compare predicted survival to the survival observed in groups not used in the derivation of the prediction models. There is good agreement between predicted and observed survival in both HCV and non-HCV patients. The variables selected in each of these prediction models, which used 90% of the dataset, were almost identical to those in the models shown in Tables 2 and 3, which used 100% of the dataset.
The Impact of Donor and Recipient Characteristics on Predicted Posttransplant Survival
Tables 4 and 5 show the predicted survival at various times after transplantation for selected donor and recipient characteristics. Results are presented separately for persons without HCV (Table 4) and with HCV (Table 5). A risk score can be calculated for each donor/recipient by adding the adjusted regression coefficients for each donor and recipient characteristic shown in Tables 2 and 3. The hazard ratio for a donor/recipient X relative to the baseline donor/recipient can then be calculated as Hazard Ratio(X) = exp(Risk Score[X]). Survival at time t after transplant for a donor/recipient X can then be calculated from the baseline survival, as described below.
Table 4. Predicted Graft Survival for Different Donor and Recipient Characteristics in Patients Without HCV
“Baseline” survival in the model was the survival of persons with the donor and recipient characteristics shown under the “baseline” column. Since the model was stratified on UNOS geographical regions, different baseline survivals can be calculated for each UNOS region; however, for simplicity, the average baseline survival across the entire United States is used here. “Best” donors and recipients are those with the characteristics that give the best-predicted survival. “Average” donors and recipients are those with average values for numerical predictors (e.g., average age, BMI, etc.) and the most common categories for nominal variables (e.g., white race). “High Risk” donors or recipients have some predictors of low posttransplant survival, which are shown in bold.
The risk score can be calculated by adding the appropriate adjusted regression coefficients for each predictor shown in Table 2. For instance, the risk score for an average donor and high-risk recipient = (38² − 1,444) × 0.000147 (for age of 38 years) + 0.09 (for cold ischemia time of 6.4 to <8.8 hours) + 0 (for white donor) + 0 (for male donor) + 0.26 (for BMI > 40) + 0.21 (for recipient age ≥ 63) + 0 (for UNOS status not 1) + 0 (for male recipient) + 0 (for white recipient) + 0 (for nondiabetic recipient) + (24 − 14) × 0.0176 (for MELD = 24) + 0.22 (for albumin <2.1) = 0.96.
The hazard ratio is calculated as exp(risk score), e.g., exp(0.96) = 2.61 (for the average donor and high-risk recipient).
Survival at time t for donor/recipient X is calculated as (baseline survival at time t)^(hazard ratio for X). In the example above, the survival of the average donor/high-risk recipient at 5 years is 0.77^2.61 = 0.51.
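The arithmetic in the worked example above can be verified directly. The coefficients are those quoted from Table 2 in the text; note that the text rounds the risk score to 0.96 before exponentiating.

```python
import math

# Risk score for the "average donor / high-risk recipient" example above,
# using the coefficients quoted from Table 2:
risk_score = ((38**2 - 1444) * 0.000147  # donor age 38 (squared age, centered at 1,444)
              + 0.09                     # cold ischemia time 6.4 to <8.8 hours
              + 0.26                     # recipient BMI > 40
              + 0.21                     # recipient age >= 63
              + (24 - 14) * 0.0176       # MELD = 24 (centered at the median of 14)
              + 0.22)                    # serum albumin < 2.1
# The white male donor and the non-status-1, male, white, nondiabetic
# recipient categories all contribute 0.

hazard_ratio = math.exp(round(risk_score, 2))  # the text rounds to 0.96 first
survival_5yr = 0.77 ** hazard_ratio            # baseline 5-year survival of 0.77
```

Rounded to 2 decimal places, this reproduces the risk score of 0.96, the hazard ratio of 2.61, and the predicted 5-year survival of 0.51.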
“Baseline” survival in the model was the survival of persons with the donor and recipient characteristics shown under the “baseline” column. Since the model was stratified on UNOS geographical regions, different baseline survivals can be calculated for each UNOS region; however, for simplicity, the average baseline survival across the entire United States is used here. “Best” donors and recipients are those with the characteristics that give the best-predicted survival. “Average” donors and recipients are those with average values for numerical predictors (e.g., average age, BMI, etc.) and the most common categories for nominal variables (e.g., white race). “High-risk” donors or recipients have some predictors of low posttransplant survival, which are shown in bold.
The risk score can be calculated by adding the appropriate adjusted regression coefficients for each predictor shown in Table 3.
The hazard ratio is calculated as exp(risk score).
Survival at time t for donor/recipient X is calculated as (baseline survival at time t)^(hazard ratio for X).
S(t,X) = S0(t)^(hazard ratio [X]), where S0(t), the survival of the baseline group at various times (t), is shown in the first columns of Tables 4 and 5.
Tables 4 and 5 demonstrate that both donor and recipient characteristics have a very large impact on predicted survival. For example, among patients without HCV, the 5-year survival of the average recipient with the average donor is 76%. However, when a “high-risk” recipient (MELD = 24, albumin <2.1) receives an average donor liver, the predicted 5-year survival is only 51%; and when the same “high-risk” recipient receives a “high-risk” donor liver (age = 60 years, cold ischemia time ≥ 14.3 hours) then the predicted 5-year survival is down to 28%.
Calculation of a Score of Liver Donor
The model can be used to calculate the contribution to the risk score of the 4 donor characteristics included in the survival models. This part of the risk score, which may be called a score of liver donor (SOLD), is directly related to the impact of the donor on posttransplant survival, such that the higher the SOLD the lower the survival. The score of liver donor for patients without HCV can be calculated as SOLD = 0.000147 × (age² − 1,444) + (coefficient for cold ischemia time) + (coefficient for race/ethnicity) + (coefficient for gender),
where the “coefficients” are the adjusted regression coefficients shown in Table 2 for each category of cold ischemia time, race/ethnicity, and gender. For instance, a 45-year-old white female donor with 8 hours of cold ischemia would have the following score: SOLD = 0.000147 × (45² − 1,444) + 0.09 + 0 + 0.11 = 0.29. Similarly, the SOLD for patients with HCV can be calculated from the coefficients in Table 3.
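The SOLD formula and the worked example above can be sketched as follows; the function and argument names are illustrative, and the coefficient arguments are the adjusted regression coefficients from Table 2 for the donor's categories.

```python
def sold_non_hcv(donor_age, cit_coeff, race_coeff, gender_coeff):
    """Score of liver donor (SOLD) for patients without HCV, per the
    formula above: 0.000147 * (age^2 - 1,444) plus the Table 2
    coefficients for cold ischemia time, race/ethnicity, and gender."""
    return 0.000147 * (donor_age**2 - 1444) + cit_coeff + race_coeff + gender_coeff

# Worked example from the text: a 45-year-old white female donor with
# 8 hours of cold ischemia (coefficients 0.09, 0, and 0.11).
example = sold_non_hcv(45, cit_coeff=0.09, race_coeff=0.0, gender_coeff=0.11)
```

Rounded to 2 decimal places, the example reproduces the SOLD of 0.29 given in the text.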
Prediction Models Without Cold Ischemia Time
Measurement of cold ischemia time is not possible before the transplantation operation has begun. Therefore, for the purposes of predicting survival before the donation process is initiated, we have developed models that use the same predictors as the models above except cold ischemia time. The adjusted regression coefficients and baseline survival for these models are shown in the Table 6. Survival predicted using these models showed excellent agreement with survival observed in groups not used in the derivation of the models when compared graphically (graphs available from author upon request).
Table 6. Adjusted Regression Coefficients and Baseline Survival for Models Predicting Survival After Liver Transplantation Without Cold Ischemia Time as a Predictor
Adjusted Regression Coefficient
Patients Without HCV
Patients With HCV
Abbreviations: N/A, not applicable.
Donor age was squared, and then 1,444 (the median square value) was subtracted.
The MELD score was calculated with values of 6 to 40, and then 14 (the median MELD score) was subtracted.
In this paper, models have been developed separately for patients with and without HCV that predict survival after liver transplantation based on a small number of donor and recipient characteristics available at the time of liver transplantation. These models can be used to determine risk scores for a given donor, recipient, or donor/recipient combination that are directly related to survival after liver transplantation.
Whereas a large number of studies have evaluated the effect of selected donor and recipient characteristics on posttransplant survival,8–19 relatively few studies have attempted to develop comprehensive models for predicting posttransplant survival,20–23 and none of these validated their models. Three of these models are based on single-center experiences and lack the large numbers necessary to simultaneously model a large number of characteristics in sufficient detail; they may also lack applicability to the entire United States transplant experience.20–22 The fourth is an important study that also used UNOS data to develop models of posttransplant survival.23 However, that study was based on much older data (1990-1996), all of the predictors were available for only 2 years (1994-1996), and, most importantly, the presented models were not validated. In addition, the MELD score, which is now known to be an important predictor of posttransplant survival,15 was not used as a predictor, whereas other characteristics (such as recipient BMI and diabetes and donor ischemia time) that have been identified in multiple studies,14, 17, 22 including the present one, as important and independent predictors of survival were not retained in the final models. Thuluvath et al. used UNOS data to develop a simple model to predict survival that would not require a computer or calculator, but it modeled only recipient and not donor characteristics.24
Patients with a very high MELD score have a high mortality, and it is estimated that every 4 hours a patient dies waiting for a liver transplant.25 Hence, it may actually be beneficial for a patient with a high MELD score to accept a “marginal quality” liver that is available now, rather than wait for a better liver donor with better posttransplant survival, while risking death on the waiting list. This was in fact suggested by a recent decision analysis.26 However, a limitation of this decision analysis acknowledged by the authors and the accompanying editorial25 was that survival only up to 1 year was modeled, that there were no good data on the impact of a marginal (or “expanded criteria”) donor on long-term, post-transplant survival, and that there was no good definition of what a marginal donor was. By providing accurate estimates of the expected posttransplant survival for specific marginal donors, our models allow more accurate predictions in future studies of when it might be beneficial for a given recipient to accept a given marginal donor.
The MELD score has been adopted since 2002 as the system that determines priority in allocating organs for liver transplantation in the United States, based on the fact that it is an excellent predictor of mortality without transplantation in patients with advanced liver disease.5, 6, 27 However, 2 patients at the top of a transplant waiting list with the same high MELD score may have very different expected posttransplant survival due to differences in predictors other than the MELD score (such as recipient age, underlying liver disease, diabetes, gender, and race, as exemplified by the models presented here). For instance, a 40-year-old non-diabetic woman with primary biliary cirrhosis and a MELD score of 30 would be expected to have a much better posttransplant survival than a 65-year-old diabetic man with hepatitis C and a MELD score of 30, even though both have a similar mortality without transplant, since they have the same MELD score. If 2 donors are expected to be available at approximately the same time, it would be more equitable for the recipient with worse predicted posttransplant survival (as determined by the models presented here) to receive the donor with better predicted survival and vice versa, since that would make the posttransplant survival of the 2 recipients more similar.
The models also show that a large proportion of post–liver transplant mortality is determined, before the liver transplantation has actually occurred, by pretransplant characteristics of the donor and recipient. The models make explicit in a multivariate analysis which donor and recipient characteristics have the greatest impact on survival. Although all the variables included in the model are important, donor age, cold ischemia time, recipient MELD score, and cause of liver disease have the greatest impact on survival. Tables 4 and 5 show that high-risk recipients and high-risk donors can each have a great impact on posttransplant survival.
The degree of donor liver steatosis has been associated with delayed or primary graft nonfunction, early graft loss, and retransplantation in some,28, 32 but not all,16, 33, 34 previous studies. In the current study, donor liver steatosis was not associated with graft survival, either in univariate or multivariate analyses. However, liver steatosis has been recorded by UNOS only since October 1999, and it is not uniformly reported by transplant centers such that in our final analysis sample only 1,606 recipients without HCV had available data on donor steatosis (including only 52 with >35% steatosis) and only 982 recipients with HCV had available data on donor steatosis (including only 27 with >35% steatosis). A type II error can therefore not confidently be excluded. In addition, since organs that were found to have very high degrees of steatosis were probably not used for transplantation, it is difficult to assess the real impact of donor steatosis on graft survival. However, the fact that donor BMI, which is strongly associated with liver steatosis and was uniformly reported, was also not associated with graft survival suggests that the impact of mild to moderate donor liver steatosis on graft survival is likely to be small.
Increasing BMI was not associated with increasing risk of graft failure in a monotonic fashion. Instead, among patients without HCV there was a biphasic relationship whereby, relative to the baseline group with BMI 15-25, graft survival was better for groups with BMI 25-30, 30-35, and 35-40, and only decreased in the “morbidly” obese group with BMI ≥ 40 kg/m2. In patients with HCV all “high” BMI groups had better graft survival than the baseline group with BMI 15-25. Changing the baseline BMI group from 15-25 to 18-25 kg/m2 had almost no impact on the results (data not shown). These results are perhaps not surprising, because BMI cannot distinguish between fat and muscle. Thus, a patient with a “normal” BMI (15 to <25 kg/m2) may have a good prognosis because of absence of central fat or may have a poor prognosis because of severe muscle wasting, which is very common in advanced liver disease. In addition, the patients in the very high BMI categories were likely highly selected on the basis of other good prognostic indicators. The current results are slightly different from an earlier study based on UNOS data suggesting that both “severe” (BMI >35 to 40 kg/m2) and “morbid” (BMI >40 kg/m2) recipient obesity were associated with reduced 5-year posttransplant survival;17 however, in that study, cause of liver disease, which is an important confounder of the association between BMI and graft survival, was not adjusted for, and BMI at the time of listing rather than at the time of transplantation was used as the predictor.
It is likely that within transplant centers the perceived “quality” of a potential donor may influence which recipient that organ goes to. For instance, recipients who are very sick may be given a higher-quality liver (if there is choice) to improve their otherwise low posttransplant survival. Alternatively, it is possible that livers from donors who are considered “high-risk” or “marginal” may be transplanted more commonly in very sick recipients who cannot wait for a “better” liver to become available. Or, finally, a marginal liver may, perhaps, be used only in a relatively healthy recipient. These considerations suggest that there is likely to be a high degree of confounding between recipient and donor characteristics. By simultaneously adjusting for a large number of donor and recipient characteristics and by also including in the models variables that were not statistically significant if they affected the coefficients of other variables, such confounding was addressed in this study as much as possible.
The multivariate models were stratified on UNOS region of transplantation. Thus, the effects of the predictors presented are adjusted for any potential confounding effect of region of transplantation. It was not possible to adjust for each individual center, since there are too many centers for a meaningful analysis and since center information is not provided in the UNOS files. Adjusting or stratifying by center, rather than region, would probably make little difference overall to the models presented, since the centers within each region of transplantation would have to be associated with survival and with specific donor/recipient characteristics for the center of transplantation to be an important confounder. However, our models may not apply to specific centers that have particular expertise in high-risk transplantations, such as using extended-criteria donors.
This study is based on data submitted to UNOS by individual transplant centers, rather than data collected specifically for the purposes of this study. Hence, we cannot verify the accuracy of the data. Of 30,004 eligible liver transplants, we had to exclude 9,703 from multivariate analyses because data were missing for at least 1 predictor. However, persons with missing data who were excluded from the analysis had characteristics similar to those of the persons included (Table 1), and it is unlikely that their exclusion substantially influenced the prediction models presented here. Furthermore, our model of posttransplant survival is strengthened by the fact that it was based on approximately 33% of all eligible liver transplant recipients in the United States. Most models of survival in non-transplant-related conditions are based on very small proportions of the at-risk population recruited from tertiary referral centers. Finally, predictors of survival might have changed slightly during the 10-year study period (1994-2003), and future studies should use more recent samples as additional UNOS data accrue. In particular, the ability to successfully transplant “marginal” organs may substantially improve.
One of our models is based on all patients without HCV. This model predicts survival well in the entire population without HCV, but it predicts survival less well in certain subpopulations based on underlying liver disease (e.g., hepatocellular carcinoma or “other” liver disease) than in others (e.g., primary biliary cirrhosis or alcoholic liver disease). This problem can be overcome by developing survival models specific for each major cause of liver disease, just as has been done in this paper for HCV, although this approach was avoided in the current paper for the sake of simplicity.
It is hoped that the analyses presented here will serve as a starting point for subsequent investigators to improve upon the presented models. Ultimately, risk scores and predicted survivals determined from such models may be an objective way to assess the risk of a given liver donor, recipient, or donor/recipient combination.