Our goal was to describe disease-specific survival and the clinical variables that predict survival in a large national cohort of adult liver transplant recipients. Data on 17,044 adult patients who received an initial orthotopic liver transplant between 1990 and 1996, with follow-up through 1999, were obtained from the United Network for Organ Sharing (UNOS). Disease-specific Kaplan-Meier survival plots and Cox Proportional Hazards models were estimated, and differences in the clinical characteristics of patients at the time of transplantation by disease were examined. Overall posttransplant survival currently exceeds 85% in the first year and is approaching 75% at 5 years. Unadjusted Kaplan-Meier survival is improved for recipients who are younger, female, and in better clinical condition. Survival is a function of disease and level of illness: cancer, fulminant liver failure, alcoholic liver disease, and the hepatitides have the poorest prognosis, while primary biliary cirrhosis and sclerosing cholangitis have the best. Recipients who were outpatients before transplantation have longer survival than those transplanted from the hospital or intensive care unit. Although the model for end-stage liver disease (MELD) score was designed to predict pretransplant survival, patients with higher MELD scores have poorer posttransplant survival; the MELD score is, however, less predictive than the specific disease. Differences in disease-specific survival are partially explained by differences in disease severity at the time of transplantation. In conclusion, disease-specific survival models indicate that there remains tremendous variability in survival as a function of underlying liver disease. However, a significant portion of the difference in survival between diseases arises from differences in clinical characteristics at the time of transplantation. (Liver Transpl 2004;10:886–897.)
Since its first report in 1963, orthotopic liver transplantation has evolved to become a standard treatment for end-stage liver disease. There are now over 4,000 liver transplant procedures performed each year in the United States.1 However, liver transplantation is expensive and available only to a small portion of all patients who might benefit. This has fueled significant national debate regarding which patients should receive donor organs.2, 3 Many factors influence this debate, such as competing moral viewpoints,4 states' rights, and the efficient use of a scarce resource.5 Uncertainty regarding differences in patient outcomes furthers this concern. Therefore, key to an informed discussion is an updated, nationally representative, and accurate prediction of the likely survival given a patient's set of clinical characteristics at the time of transplantation. Such estimates do not currently exist at a level of detail sufficient to influence selection or allocation decisions. Current national survival estimates, which indicate 1- and 5-year survival rates of 87.5 and 73.9%, respectively, are not disease-specific and do not provide estimates of the effect of important clinical covariates on outcome.1 Disease-specific risk-adjusted survival estimates represent single-center experiences,6–8 are based on older data,9, 10 or consider single diseases.11 Recent analyses of national databases do not predict long-term or disease-specific survival.12
The United Network for Organ Sharing (UNOS) maintains a registry of all candidates for organ transplantation listed at transplant centers in the United States. As part of a larger effort to determine the optimal timing of liver transplantation, we developed disease-specific survival models based on the UNOS registry. These statistical models provide estimates of expected patient survival by disease and clinical characteristics.
UNOS, United Network for Organ Sharing; PSC, primary sclerosing cholangitis; MELD, model for end-stage liver disease; HCV, hepatitis C.
Detailed descriptions of the UNOS registry have been published elsewhere.13 Briefly, the registry follows all prospective candidates listed for organ transplantation, documenting any change in status and date of any transplant. The registry records additional clinical information at the time of transplant, as well as information on the donor, and continues to follow the recipient posttransplant. Registry data include standard demographic, clinical, and laboratory information available at the time of listing, as well as the patient's priority to receive an organ, known as the UNOS status. The UNOS status determines the order in which patients receive donor organs.
We received permission from the University of Pittsburgh Institutional Review Board to obtain data from UNOS for liver transplant recipients who were transplanted between 1990 and 1996 (n = 23,791). From this file, we selected all adults (more than 16 years of age) who underwent their first liver transplant. We excluded those who had combination transplant procedures (e.g., combined heart-liver procedure), providing a sample of adult, initial solitary liver transplants (n = 17,044). The clinical data collected by UNOS changed in April 1994, when several new clinical variables were added to the data collection forms. For several analyses, therefore, only data in the smaller, more clinically rich later dataset are included (n = 8,172).
The UNOS classification scheme for the etiology of end-stage liver disease includes more than 60 codes, many of which included too few patients for a disease-specific analysis. We asked our National Clinical Oversight Committee, an expert panel of 7 liver transplant physicians and surgeons from transplant centers throughout the country, to collapse the UNOS scheme to a list of 15 or fewer diagnoses. After discussion at a 1-day meeting and subsequent e-mail distribution, we produced a final list of 10 categories into which all UNOS disease categories were mapped (Table 1). We used the diagnosis at time of transplant and assigned each patient to one of the categories. For those without a diagnosis at transplant, we used diagnosis at time of listing (n = 151 patients, 0.9%). If no diagnosis was available (n = 36 patients, 0.2%), we assigned the patients to the miscellaneous group, “Other.” For all diagnoses, we considered the UNOS stated diagnosis as accurate: it is not possible to re-evaluate the accuracy of the diagnosis codes in the UNOS dataset.
The list of potential predictor variables is long: there are over 100 variables contained in the UNOS registry alone. Many other factors not recorded in UNOS, such as Child-Turcotte-Pugh scores or acetylketone body ratios, have also been associated with outcome. In addition, for some clinical characteristics, such as human leukocyte antigen (HLA) matching and race, there is conflicting evidence of an effect on survival.14–16 We therefore employed a 2-stage procedure to select variables for inclusion in the statistical models. First, we asked the National Clinical Oversight Committee to select all variables for which they believed there was either strong evidence or strong theoretical rationale for association with survival. If available in the registry, each of these recommended variables was considered a candidate variable. Second, we tested each candidate variable in single variable analyses, using a log-rank test or a Cox Proportional Hazards model including only the single candidate variable. All of the variables were tested for significance, and we retained those variables whose association with outcome had a significance level of less than 0.1. This final set of variables that were available, clinically important, and statistically significant in single variable analyses was used to construct the multivariable survival models. Although the model for end-stage liver disease (MELD) score17 was being developed at the time of our variable selection process, it was not included in the analysis because it was designed to predict pretransplant survival. The current analysis is interested in predicting posttransplant survival based on clinical characteristics unrelated to a particular priority scheme. However, because our dataset contained sufficient clinical material to calculate the MELD score at the time of transplant, we were able to stratify posttransplant survival by a calculated MELD score.
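As context for the MELD stratification, the score can be computed from transplant-time laboratory values. The paper does not state its exact calculation, so the sketch below uses the original Kamath et al. formulation with the usual UNOS flooring, capping, and rounding conventions; treat those conventions as assumptions rather than the authors' method.

```python
import math

def meld_score(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Approximate MELD score (illustrative sketch, not the authors' code).

    Conventions assumed here: laboratory values below 1.0 are floored at 1.0,
    creatinine is capped at 4.0 mg/dL (and set to 4.0 for dialysis patients),
    and the result is rounded to the nearest integer.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(cr)
             + 6.43)
    return round(score)
```

With all three laboratory values at 1.0 the logarithms vanish and the score reduces to the constant term, 6.43, rounding to 6.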
We described survival overall and by disease, gender, UNOS status, age, year of transplant, and MELD score using standard Kaplan-Meier methods.18 We constructed disease-specific survival estimates using the Cox Proportional Hazards regression model19 and fit the models using S-plus.20 To take advantage of the full sample size, we also built a single stratified model and controlled for disease by including disease category as a stratum indicator. The basic difference between these 2 techniques is that disease-specific models allow the effect of a specific covariate to vary across diseases, whereas a stratified model requires the magnitude of the effect of a covariate to be constant across all diseases and only allows the baseline (intercept) risk of that covariate to change. We selected variables for inclusion in all disease-specific models using a stepwise approach, dropping variables with an insignificant coefficient (P ≥ .05). We fit the stratified model using the same approach, although the disease stratum indicator always remained in the model.
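The Kaplan-Meier machinery used throughout is the standard product-limit estimator: at each observed death time, multiply the running survival probability by (1 − deaths/at-risk). A minimal Python sketch (illustrative only; the authors fit their models in S-plus) is:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  : observed follow-up times
    events : 1 if a death was observed at that time, 0 if censored
    Returns a list of (time, survival_probability) pairs at each death time.
    """
    data = sorted(zip(times, events))
    n = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i  # subjects still under observation just before t
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        # skip past all observations (deaths or censorings) tied at time t
        while i < n and data[i][0] == t:
            i += 1
        if deaths > 0:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
    return curve
```

For example, with follow-up times [1, 2, 3] and events [1, 1, 0], the curve steps to 2/3 at t = 1 and 1/3 at t = 2; the censored observation at t = 3 produces no step.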
We then used the models to generate survival curves for a “standardized” patient across all disease groups. As Kaplan-Meier estimates do not control for differences in the severity of illness across diseases at the time of transplantation, this step allows for the comparison across diseases of patients with equal severity of illness.
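Under the Cox model, this standardization amounts to raising the baseline survival function to the power exp(β·(x − x̄)), where x is the standardized patient's covariate vector and x̄ the reference covariates. A sketch with hypothetical inputs (the coefficients and baseline curve below are made up, not the fitted values from this study):

```python
import math

def predicted_survival(baseline_surv, betas, x, x_ref):
    """Cox-model survival curve for a 'standardized' patient (sketch).

    baseline_surv : list of (t, S0(t)) pairs for the reference covariates
    betas         : fitted coefficient vector
    x, x_ref      : patient covariates and reference (e.g. median) covariates
    Implements S(t | x) = S0(t) ** exp(beta . (x - x_ref)).
    """
    eta = sum(b * (xi - xr) for b, xi, xr in zip(betas, x, x_ref))
    relative_risk = math.exp(eta)
    return [(t, s0 ** relative_risk) for t, s0 in baseline_surv]
```

Evaluating each disease-specific model at the same covariate vector in this way is what allows survival to be compared across diseases at equal severity of illness.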
The majority of liver transplant recipients were Caucasian, and patients ranged in age from 16 to 77 years (Table 2). The most prevalent diagnoses were hepatitis C (HCV), alcoholic liver disease, and autoimmune disorders, and these accounted for nearly two-thirds of all cases. We restricted our study to patients receiving their first liver transplant and also eliminated patients who had already received a transplant for another organ (less than 1% of the sample).
Table 2. Sample and Subsample Characteristics
[Table cell values are not recoverable from this extraction. Columns: Entire Sample (n = 17,044); Subsample 1990–1992 (n = 5,857); Subsample 1993–1994 (n = 5,128); Subsample 1995–1996 (n = 6,059). Recoverable row labels: American Indian/Alaskan Native; age at time of transplant (years); hepatitis C virus; alcoholic liver disease; autoimmune disorders, cryptogenic; primary biliary cirrhosis; primary sclerosing cholangitis; acute failure (fulminants); hepatitis B virus; prior transplant (not liver).]
The clinical characteristics of transplanted patients have changed over time. From 1990 to 1996, the average recipient age increased (46.8 vs. 50.0 years, P = .003), the proportion of women decreased (45.1 vs. 41.5%, P = .011), and the proportions of Caucasians (83.4 vs. 87.1%, P = .080) and African-Americans (5.5 vs. 7.2%, P = .003) increased. (Some of this difference may be related to interval changes in the details of racial coding in the UNOS database.) The distribution of disease has changed as well. Consistent with the rising incidence of HCV, that disease now represents the largest group of transplant recipients (16.8 vs. 31.8%, P < .001), with declines in alcoholic liver disease (22.2 vs. 15.7%) and cancer (6.4 vs. 2.4%) as indications for liver transplantation.
Overall survival was 83.0% at 1 year, 70.2% at 5 years, and 61.9% at 8 years (figures not shown). Survival was higher in women (83.1, 71.6, and 64.5%) than in men (82.9, 69.1, and 59.8%), and in patients transplanted under the age of 60 (Fig. 1a and b). Location at time of transplant (at home, in hospital, or in intensive care unit), which is an element of UNOS status, is a significant predictor of transplant success, with 1-year survival of 87, 81, and 79% for patients transplanted from home, the hospital, or the intensive care unit, respectively. Survival has improved over time: 1-year survival rose from 74.8% for those transplanted during 1990 to 86.2% for those transplanted in 1996 (P < .001). However, the improvement in survival over time has been slowing: the 1-year survival for patients transplanted in 1996 is not statistically different from that of patients transplanted in 1994.
Figure 2 displays the Kaplan-Meier plots by disease group. As expected, there is substantial variability in survival by disease, with 5-year survival rates ranging from <40% for patients with cancer to >80% for patients with primary biliary cirrhosis or primary sclerosing cholangitis. Patients with fulminant liver failure, hepatitis B, HCV, and alcoholic liver disease have intermediate survival. As expected, patients with fulminant liver disease have the highest initial posttransplant mortality, but their survival parallels that of primary biliary cirrhosis and PSC after the first year. Figure 3 illustrates survival based on MELD score at the time of transplantation, with MELD scores grouped into 4 categories of increasing levels of illness. Although the Kaplan-Meier curves are significantly different (log-rank tests P < .001 for all 4 curves), 5-year survival in the highest MELD category (MELD score >24) is only 7% lower than that of patients in the lowest MELD category (MELD score <10). The range of 5-year survival is much greater when stratified by disease, indicating that disease is potentially a stronger predictor of posttransplant survival than MELD score.
Predictors of Survival
This analysis confirms many of the expected effects of various clinical characteristics on posttransplant survival. Table 3 presents the results of the overall disease-stratified models as well as disease-specific models for each of the 10 disease groups. Survival models for 2 time periods are presented: a “reduced model,” which contains only those clinical variables that were present in the UNOS database throughout the entire study period, and a “full model,” which is estimated on the smaller set of patients after 1994 that contains the more complete set of clinical variables. For the disease-specific models, only the direction and strength of significant relationships are shown; the full table with coefficients and standard errors is available from the authors. It is important to note that in the full model, for the years in which all clinical data were available, all of the variables listed in Table 3 were tested for significance, and a variable was retained in the model only if it was significant at P ≤ .10.
Table 3. Predictors of Posttransplant Survival: 1990–1996
Across all disease categories, survival is adversely affected by increasing age of the recipient and the donor, by the presence of poor renal function, and by declining hepatic function (as measured by increasing bilirubin or declining albumin). Some of these relationships are nonlinear: the best-fitting model includes age of the donor and recipient as both a linear and a squared term, indicating that the deleterious effect of age increases with advancing age of donor and recipient. Controlling for blood type compatibility (ABO) and other disease factors, Caucasian recipients survived longer than minorities, as did recipients who received organs from Caucasian donors.
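The linear-plus-squared specification means the log hazard contains b1·age + b2·age² with b2 > 0, so the same age increment adds more risk at older ages. The tiny sketch below uses made-up coefficients (the fitted values are not reproduced in the text) purely to illustrate that property:

```python
import math

# Illustrative coefficients only -- NOT the fitted values from this study.
B1, B2 = -0.02, 0.0008  # log-hazard contribution: B1*age + B2*age**2

def hazard_ratio(age_new, age_old):
    """Relative hazard between two ages under a quadratic age term."""
    lp = lambda a: B1 * a + B2 * a * a
    return math.exp(lp(age_new) - lp(age_old))

# Because B2 > 0, a 10-year increment is more harmful at older ages:
# hazard_ratio(70, 60) exceeds hazard_ratio(50, 40).
```

This is the qualitative pattern the squared term captures; the actual magnitude depends on the fitted coefficients.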
In the models estimated from the more clinically complete data (1994–1996), clinical variables such as the presence of encephalopathy and ventilator support adversely affect survival. Several variables that the National Clinical Oversight Committee indicated should be expected to affect outcomes were not found to be significant predictors of survival in the UNOS dataset. For example, although in overall Kaplan-Meier analysis women live longer than men, multivariable analysis found that female gender was not protective when controlling for disease, severity of illness, and other clinical factors. However, matching of donor and recipient gender increased survival. Body mass index (BMI), prothrombin time, and blood type of the recipient were not significant in either stratified model, although these variables were significant in several specific diseases and were included in the appropriate disease-specific models. As expected, many of the variables related to organ matching and the quality of the donor organ were found to be significant predictors of mortality. Evidence of cytomegalovirus in the donor and the donor race were significant predictors of survival in the overall models; cytomegalovirus and blood type matching were significant in several individual diseases.
Several covariates suggested by the National Clinical Oversight Committee as predictive of survival (Child-Pugh, presence of edema, history of ulcerative colitis, and length of stay in the intensive care unit) were not available in the UNOS database for either time period.
Predicted Survival by Disease
Figure 4 presents estimated predicted survival by disease group using disease-specific Cox Proportional Hazards models for a cohort of patients with clinical characteristics at the median of our sample (Table 4). This figure compares survival across disease groups by controlling for disease severity at time of transplant: each curve is estimated assuming patients in the different disease categories had the same set of clinical characteristics (age, race, gender, bilirubin, prothrombin time, creatinine, albumin, and donor characteristics) at the time of transplantation. Although, overall, the predicted survival for an average patient in the cohort was similar to that found in the Kaplan-Meier plots, several diseases demonstrate differences in predicted survival.
Table 4. Clinical Characteristics Used to Compare Survival by Disease Using Cox Proportional Hazards Models
[Table values are not recoverable from this extraction; the only recoverable row label is prothrombin time (sec).]
The major difference between actual (Kaplan-Meier) and predicted (Cox Proportional Hazards) survival is the effect of different clinical characteristics at the time of transplantation, detailed in Table 5, which describes the average value of clinical covariates by disease group at the time of transplantation. For example, the predicted 5-year survival for a patient with an underlying diagnosis of cancer and clinical characteristics similar to all patients transplanted is 34%. However, the Kaplan-Meier estimate of survival is 40%, indicating that, controlling for disease severity (as measured by equivalent laboratory values), patients with cancer have an even poorer survival compared to other diagnoses. A similar but opposite effect is seen in acute or fulminant liver failure, where the 5-year Kaplan-Meier survival is 67%. The predicted survival for patients with “average” disease severity is 72%, indicating that, on average, patients with fulminant liver failure are sicker at the time of transplantation than those patients with chronic liver disease. Fulminants have higher bilirubin, worse renal function, and are more likely to be in the intensive care unit at the time of transplant.
Table 5. Mean Values of Clinical Covariates at Time of Transplantation by Disease Group
[Table values are not recoverable from this extraction. Recoverable column header: Met dis (583). Recoverable row labels: African American (%); other race (%); total bilirubin (mg/dL); prothrombin time (sec); anti-CMV (% positive); in hospital (%); in ICU (%); cold ischemia time.]
Standard deviations are in parentheses. The number of non-missing observations (if different from the total) is listed in square brackets.
The comparison between primary biliary cirrhosis and sclerosing cholangitis is also interesting. The Kaplan-Meier estimates of survival for these 2 diseases are virtually identical, with a 5-year survival of 80%. However, predicted survival between the 2 diseases for patients of identical clinical characteristics indicates a substantial survival benefit to patients with primary biliary cirrhosis (5-year predicted survival of 86% vs. 80% for PSC). The only major difference in clinical characteristics is gender: women make up the majority of patients with primary biliary cirrhosis.
There are several observations from this study. First, survival after liver transplantation in the United States is excellent and has been improving over time, even with a changing distribution of the diseases leading to transplantation. One-year survival rates for all but a few diseases exceed 90%, and long-term survival (8 years) approaches 70%. The epidemiology of liver disease as a cause of transplantation is rapidly changing and is becoming more weighted by HCV, which has a poorer survival relative to many other diseases. However, this relationship is complex, as patients with HCV are different from, say, patients with primary biliary cirrhosis or PSC. Their clinical presentation at time of transplant is different, and they are less likely to be in the hospital at time of transplant. This work emphasizes the importance of considering not only the predictors of transplant success and failure, but also the constellation of characteristics and severity of illness with which patients present for transplant evaluation across different diseases.
Second, this work represents a comprehensive set of disease-specific models estimated from a large representative national database. The results underscore the need for disease-specific models because we have found that survival is strongly linked to disease and that different clinical variables affect future survival in different diseases, which may be important in timing and treatment decisions.
Third, our analysis confirms the findings of prior analyses indicating that the distribution of the timing of deaths is quite different between fulminant and chronic disease, with many deaths occurring early (within the first 6 months posttransplant) in acute liver disease. In chronic liver disease, by contrast, most deaths occur late. Efforts focusing on late events may therefore be most likely to improve survival in chronic liver disease, whereas efforts to improve early survival may have more impact in acute liver disease.
Fourth, the MELD score does not differentiate posttransplant survival well and is a poorer predictor than the underlying etiology of liver disease. Although this is not surprising (the MELD score was designed to predict pretransplant survival), it emphasizes that the characteristics that are predictive of survival while waiting for transplantation are not identical to those that predict posttransplant survival.
It is interesting to note that this disease-specific multivariate analysis indicates that a different set of clinical variables are significant predictors of posttransplant survival than has been previously reported for certain diseases. For example, in the classic descriptions of posttransplant survival for primary biliary cirrhosis and PSC by Wiesner8 and Grambsch,21 age, bilirubin, albumin, prothrombin time, and edema were found to be significant predictors of survival. However, in the larger national UNOS sample, albumin and bilirubin are not found to be statistical predictors of posttransplant survival. It seems likely that outcome has improved since the early Mayo Clinic work with FK-506,22 maturation of surgical techniques, and widespread use of a standardized approach to cytomegalovirus prophylaxis, rejection, etc. However, these differences indicate that our understanding of the predictors of transplant success and failure remains dynamic, and that the re-evaluation of risk factors and survival predictors is an important ongoing activity.
These results are also interesting when compared to international results from similar time periods. In an analysis of 1,656 patients with follow-up through 1995, Smits and colleagues found the same increasing survival over time and found donor and recipient age, blood type compatibility, and several other similar variables to be predictors of survival.23 However, overall survival in Europe was slightly poorer; e.g., 5-year survival for patients with cirrhosis in the European experience was only 54 vs. 80% in the US experience. However, data from a more recent review from the European Liver Transplant Registry (which includes 1,821 patients transplanted up to 1994) found outcomes to be closer to the US experience.24 The 5-year posttransplant survival was 70% for patients with cirrhosis and 55% for those with acute hepatic failure; the corresponding US figures are 80 and 67%, respectively.
Another important observation is that the current clinical variables contained within the UNOS dataset do not include all of the variables that a national group of experts agreed were clinically important in the prediction of future survival. Further work should include collection of these variables in a sample large enough to permit an appraisal of their predictive value. Such work is crucial as the national registry is currently being overhauled, and identification of key variables will be essential if improved, clinically acceptable models are to be built in the future. Although many variables that are clinically assumed to be important in survival may not reach statistical significance in multivariable models, the ability to test the effect of clinically relevant variables will increase the face validity of statistical prediction.
Finally, the analysis presented here shows important findings for the use of disease-specific models in the prediction of survival. Our work indicates that disease-specific models provide different estimates of survival than overall survival models with disease strata indicators. As the distribution of liver disease etiology changes over time (such as the dramatic rise in HCV as the etiology of end-stage liver disease), it will become more important to have accurate disease-specific models of survival to predict the overall benefit from transplantation programs. In addition, the results presented here indicate that a significant portion of the difference in survival by disease demonstrated in standard Kaplan-Meier plots is dependent upon differences in the clinical characteristics of the patients who present with those diseases, rather than an effect of the disease itself. This may have implications for long-term survival if the timing of transplant evaluation and listing is changed for different diseases.
An understanding of the effect of clinical covariates on posttransplant survival is also directly applicable to the problem of the timing of living-related donor transplants. In this case, more complete information is available before transplantation (the characteristics of the donor are known, for example), and the major decision to be made is one of timing. Given the differences found in the clinical variables that predict future survival, it is likely that the optimal timing of living related donor transplants is different in different diseases.
In summary, analysis of the UNOS database provides the most current and comprehensive analysis of disease-specific survival after transplantation that is possible in the United States today. This analysis indicates that there are significant differences in survival by type of disease, and that the distribution of diseases is rapidly changing over time. The continuing analysis of the factors that predict transplant success and failure is necessary to predict the overall effect of changes in evaluation, timing, and allocation criteria on overall survival from transplantation.
Although survival has been improving, this does not imply that current transplantation times are optimal for patients: it remains possible that transplantation at different times with more favorable clinical characteristics could improve overall survival.