Allograft Survival Following Adult-to-Adult Living Donor Liver Transplantation


*Corresponding author: Abraham Shaked


Adult-to-adult living donor liver transplantation (AALDLT) is emerging as a method to treat patients with end-stage liver disease. The aims of this study were to identify donor and recipient characteristics of AALDLT, to determine variables that affect allograft survival, and to examine outcomes compared with those achieved following cadaveric transplantation. Cox proportional hazards models were fit to examine characteristics associated with the survival of AALDLT. Survival of AALDLT was then compared with cadaveric allografts in multivariable Cox models. Older donor age (>44 years), a female-to-male donor-to-recipient relationship, recipient race, and the recipient's medical condition before transplant were factors related to allograft failure among 731 AALDLT. Despite favorable donor and recipient characteristics, the rate of allograft failure, specifically the need for retransplantation, was increased among AALDLT (hazard ratio 1.66, 95% CI = 1.30–2.11) compared with cadaveric recipients. In conclusion, among AALDLT recipients, selecting younger donors and placing the allografts in recipients who have not had a prior transplant and are not in the ICU may enhance allograft survival. Analysis of this early experience with AALDLT suggests that allograft failure may be higher than among recipients of a cadaveric liver.


Adult-to-adult living donor liver transplantation (AALDLT) has become an important alternative for recipients who otherwise would have limited or delayed access to a cadaveric organ. Initial published results demonstrated the technical feasibility of the procedure, reported the medical outcomes for the donor, and described single-center short-term allograft and patient survival for the recipient (1–7). Interestingly, in these reports, survival after AALDLT was similar to that after cadaveric transplantation, despite a process that selects recipients with a better health status. This experience is in contrast to results achieved after living donor kidney transplantation, in which short- and long-term allograft and recipient survival rates are superior to those seen with cadaveric allografts (8). In our study we used national data from the United Network for Organ Sharing (UNOS) to examine pretransplant patient characteristics and post-transplant allograft survival among AALDLT, to identify variables that affect recipient outcomes, and to compare outcomes with those achieved in the cadaveric setting.


All patients in the United States who were 18 years of age or older when they received an initial liver allograft from a living donor between January 1998 and December 2001 were potentially eligible for this study. Demographic data from donors and recipients, as well as follow-up transplant data, were supplied by the Organ Procurement and Transplantation Network/United Network for Organ Sharing. Patients who had complete data on the characteristics under study were eligible for this analysis. Eligible recipients of cadaveric liver allografts were identified in a similar manner. Between recipients of a cadaveric liver and AALDLT, means of continuous variables were compared by t-tests and categorical variables were compared by chi-square testing. The primary outcome for this study was allograft failure, defined as retransplantation or patient death; follow-up was censored at loss to follow-up or at the end of the study period (July 2002), whichever occurred first. Additional analyses utilizing retransplantation or death alone as separate outcomes were conducted. Cox proportional hazards models were first fit to investigate the relationships between donor and recipient characteristics and the time to allograft failure among AALDLT. Potential predictors for multivariable modeling were screened in unadjusted Cox models (9,10). Variables that had a nominal relationship (p ≤ 0.10) with allograft failure were eligible for inclusion in multivariable models. Multivariable Cox proportional hazards models were fit by first adding covariates using a forward stepwise algorithm and then removing variables that did not retain statistical significance by the Wald statistic (p ≤ 0.05). Additionally, an indicator variable for the year of transplant was included to account for other aspects of the management of transplant recipients that had evolved over the period of the study.
Differences in transplant management were accounted for by including an indicator variable for the UNOS region where the transplant had occurred. The assumption of proportional hazards was tested by graphical and weighted-residuals testing (11). A multivariable Cox model was then fit to compare the time to allograft failure by 3 years post-transplant between AALDLT and cadaveric allografts, using the variables that were nominally (p < 0.30) different between these groups in unadjusted analyses. Analyses were performed using Stata version 7 (Stata Corporation, College Station, TX). All p-values are two-sided and p ≤ 0.05 was considered to be statistically significant.
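The screening step described above, unadjusted models for each candidate predictor, can be illustrated with a simplified sketch. The study fit Cox proportional hazards models; as a rough stand-in, under an exponential (constant-hazard) assumption the unadjusted hazard ratio for a binary predictor reduces to the ratio of event rates per unit of follow-up time, with an approximate 95% CI computed on the log scale. The function name and the numbers below are illustrative assumptions, not taken from the study data:

```python
import math

def crude_hazard_ratio(events_a, persontime_a, events_b, persontime_b):
    """Crude hazard ratio of group A vs. group B under an exponential
    (constant-hazard) model: each group's hazard is estimated as
    events / person-time, and the HR is the ratio of the two.
    Returns (hr, (ci_low, ci_high)), where the approximate 95% CI is
    computed on the log scale with SE(log HR) = sqrt(1/d_a + 1/d_b)."""
    hr = (events_a / persontime_a) / (events_b / persontime_b)
    se = math.sqrt(1.0 / events_a + 1.0 / events_b)
    ci = (math.exp(math.log(hr) - 1.96 * se),
          math.exp(math.log(hr) + 1.96 * se))
    return hr, ci

# Illustrative numbers only: 40 failures over 500 person-years in one
# group vs. 25 failures over 600 person-years in the comparison group.
hr, (lo, hi) = crude_hazard_ratio(40, 500.0, 25, 600.0)
```

A full analysis would instead fit a Cox model with a survival package, which avoids the constant-hazard assumption; in the screening scheme above, a predictor whose unadjusted p-value fell at or below 0.10 would then be carried into the multivariable model.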


A total of 793 adult patients underwent liver transplantation from an adult living donor between January 1998 and December 2001. Of these, 19 patients were excluded because of missing data on allograft status and 43 because of missing data on donor age, resulting in a total of 731 AALDLT subjects. Of the 171 allograft failures, 84 (49.1%) were owing to patient death and 87 (50.9%) to retransplantation. Mean time to failure was 118 ± 194 days. There were 14 359 eligible recipients of a cadaveric donor with complete data. Mean time to allograft failure was 154 ± 235 days. Among the 3079 graft failures, 2137 (69.4%) were owing to patient death and 942 (30.6%) to retransplantation. Thus 12% of all AALDLT recipients underwent retransplantation (87/731), whereas only 6.6% of all cadaveric recipients underwent a similar procedure (942/14 359) (p = 0.0001). Patient survival at 1 year was 87.4% in recipients of living donor allografts and 83.9% in recipients of cadaveric livers (p = 0.008).

Donor and recipient characteristics

Pretransplant characteristics that were considered to affect outcomes are presented in Table 1. Living donors were younger than cadaveric donors (36.3 ± 10.3 vs. 38.2 ± 17.6 years, p = 0.0001), with the proportion of male donors equivalent between groups (56.9 vs. 58.8%, p = 0.31). Recipients of AALDLT and cadaveric allografts were of similar age; however, fewer AALDLT recipients were male than cadaveric recipients (57.3 vs. 63.5%, p = 0.001). Among AALDLT, recipient and donor race were identical, indicating that there was no cross-ethnic live donation.

Table 1. Transplant characteristics according to donor source

                                            AALDLT        Cadaveric      p-value
                                            (n = 731)     (n = 14 359)
Recipient
  Mean age (years) (SD)                     49.7 (11.3)   50.2 (10.3)    0.29
  Gender (male)                             57.3%         63.5%          0.001
  Race (White)                              88.1%         86.9%          0.36
Donor
  Mean age (years) (SD)                     36.3 (10.3)   38.2 (17.6)    0.0001
  Gender (male)                             56.9%         58.8%          0.31
  Race (White)                              88.1%         85.6%          0.06
Donor–recipient gender relationship                                      0.001
Cause of end-stage liver disease                                         0.0001
  Malignancy                                7.3%          3.5%
UNOS status at time of transplant                                        0.0001
  1                                         1.2%          8.5%
  2A                                        2.2%          22.6%
  Unknown                                   10.9%         0.4%
Prior liver transplant                      2.2%          8.6%           0.0001
Medical condition at time of transplant                                  0.0001
  Intensive care unit                       4.8%          23.0%
  Hospitalized, non-ICU                     16.3%         13.0%
  Unknown                                   3.0%          2.4%

AALDLT = adult-to-adult living donor liver transplantation; SD = standard deviation; ICU = intensive care unit.

The medical condition of the patient and the severity of liver disease at the time of transplant are important variables that affect post-transplant outcomes. Adult-to-adult living donor liver transplantation recipients were less likely to be located in the ICU before transplant when compared with cadaveric allograft recipients (4.8 vs. 23.0%). The UNOS urgency status of the recipient at the time of transplant is reported based on the severity of the liver failure (Child-Turcotte-Pugh score) and the patient location at the time of surgery. These data demonstrate that when transplanted, recipients of cadaveric allografts suffered from more advanced liver disease than those receiving AALDLT (p = 0.0001).

The etiology of liver disease differed between the two groups. Adult-to-adult living donor liver transplantation was performed more frequently for patients suffering from cholestatic liver disease or hepatic malignancy. The leading diagnoses were hepatitis C (32.6%), cholestatic liver disease (18.7%), and alcoholic liver failure (12.6%). However, the etiology of primary liver disease was not found to have a significant effect on allograft survival (Table 2, p = 0.2).

Table 2. Unadjusted analysis of covariates influencing adult-to-adult living donor liver transplantation allograft failure (n = 731)

Predictor                                C: HR (95% CI), p           R: HR (95% CI), p           D: HR (95% CI), p
Recipient age (vs. 1st quartile)         p = 0.95                    p = 0.22                    p = 0.08
  2nd quartile (>44 & ≤51)               1.01 (0.67–1.53), 0.96      0.76 (0.44–1.30), 0.31      1.56 (0.80–3.05), 0.20
  3rd quartile (>51 & ≤57)               1.12 (0.72–1.72), 0.62      0.77 (0.43–1.37), 0.38      1.87 (0.94–3.71), 0.07
  4th quartile (>57)                     1.07 (0.70–1.63), 0.76      0.51 (0.27–0.96), 0.04      2.31 (1.21–4.40), 0.01
Recipient gender (vs. female)            –1.64 0.69–1.61 0.90–
Recipient race (vs. White)               1.69 (1.14–2.51), 0.009     1.87 (1.1–3.18), 0.02       1.51 (0.84–2.73), 0.17
Donor age (vs. 1st quartile)             p = 0.007                   p = 0.18                    p = 0.02
  2nd quartile (>28 & ≤36)               0.98 (0.62–1.55), 0.93      0.84 (0.43–1.63), 0.6       1.14 (0.6–2.18), 0.69
  3rd quartile (>36 & ≤44)               1.18 (0.76–1.83), 0.47      1.34 (0.74–2.41), 0.33      1.00 (0.51–1.94), 0.99
  4th quartile (>44)                     1.81 (1.21–2.71), 0.004     1.57 (0.89–2.77), 0.12      2.09 (1.18–3.71), 0.01
Donor gender (vs. female)                0.66 (0.49–0.89), 0.006     0.63 (0.41–0.96), 0.03      0.69 (0.45–1.05), 0.09
Donor–recipient gender relationship
  (vs. female-female)                    p = 0.005                   p = 0.12                    p = 0.05
Cause of end-stage liver disease
  (vs. non-cholestatic)
Prior liver transplant
  (vs. no prior liver transplant)        2.63 (1.34–5.16), 0.005     1.14 (0.28–4.62), 0.86      4.24 (1.94–9.26), 0.0001
Located in the ICU at time of
  transplant (vs. other locations)       3.29 (2.04–5.31), 0.0001    3.58 (1.90–6.75), 0.0001    2.96 (1.42–6.14), 0.0004

AALDLT = adult-to-adult living donor liver transplantation; C = allograft failure as either retransplantation or death; R = allograft failure as retransplantation; D = allograft failure as patient death; ICU = intensive care unit.

Factors influencing allograft failure

Potential variables for the multivariable model are presented in Table 2. Multivariable Cox analysis identified a female-to-male donor-recipient gender relationship as a predictor of allograft failure among AALDLT recipients (Table 3). Analysis of donor age by quartiles suggests an increased rate of allograft failure when the liver was taken from a donor older than 44 years (adjusted HR = 1.72, 95% CI = 1.11–2.65). Recipient predictors of increased allograft failure in the AALDLT setting were related to the severity of the liver disease, specifically ICU status before transplantation (adjusted HR = 2.67, 95% CI = 1.59–4.5) and a history of a prior liver transplant (adjusted HR = 2.59, 95% CI = 1.21–5.51).

Table 3. Multivariable Cox analysis of factors associated with allograft failure in living donor recipients (n = 731)1

Predictor                                C: HR (95% CI), p           R: HR (95% CI), p           D: HR (95% CI), p
Donor age (vs. 1st quartile)
  2nd quartile (>28 & ≤36)               1.1 (0.68–1.77), 0.71       0.94 (0.48–1.87), 0.87      1.26 (0.64–2.46), 0.50
  3rd quartile (>36 & ≤44)               1.33 (0.84–2.12)            1.54 (0.82–2.87)            1.11 (0.55–)
  4th quartile (>44)                     1.72 (1.11–2.65), 0.01      1.51 (0.82–2.80), 0.19      1.94 (1.05–3.58), 0.03
Donor–recipient gender
  (vs. female-female)
Prior transplant (vs. none)              2.59 (1.21–5.51)            1.03 (0.22–4.80)            4.20 (1.72–)
Recipient race (vs. White)               1.61 (1.06–2.45), 0.03      1.78 (1.02–3.11), 0.04      1.45 (0.76–2.78), 0.26
Location in ICU at time of
  transplant (vs. other location)        2.67 (1.59–4.5), 0.0001     3.70 (1.85–7.39), 0.0001    1.95 (0.89–4.31), 0.10

1Adjustments within the model were made for year of transplant and UNOS region.
C = allograft failure as either retransplantation or death; R = allograft failure as retransplantation; D = allograft failure as patient death; ICU = intensive care unit.

In a separate analysis, when allograft failure was defined solely by the requirement for retransplantation, significant associations were found with recipient race (HR = 1.78, 95% CI = 1.02–3.11) and location in the ICU (HR = 3.70, 95% CI = 1.85–7.39) before transplant. However, neither recipient race (HR = 1.45, 95% CI = 0.76–2.78) nor ICU status (HR = 1.95, 95% CI = 0.89–4.31) had a significant impact on allograft failure when it was defined as patient death.

Comparison of allograft failure

The crude rate of allograft failure in the first 3 years among AALDLT was increased compared with cadaveric recipients (unadjusted HR 1.15, 95% CI = 0.99–1.34) (Table 4, Figure 1). The percentage of graft failure among AALDLT was 21.5% after 1 year. A multivariable model was then fit with recipient and donor age, gender, race, cause of liver disease, and ICU status at the time of transplant. We selected ICU status rather than UNOS status for the model because of collinearity and the greater missingness of UNOS status. There was a significantly higher rate of allograft failure in recipients of AALDLT (adjusted HR 1.24, 95% CI = 1.05–1.47) (Table 4). Separating allograft failure into retransplantation alone or death alone indicated that AALDLT recipients had a higher rate of retransplantation, but not of death, when compared with cadaveric recipients.

Table 4. Comparison of the rate of allograft failure in the first 3 years post-transplant: living donor vs. cadaveric donor

                                         C: Rate ratio (95% CI)      R: Rate ratio (95% CI)      D: Rate ratio (95% CI)
Crude
  (cadaveric = 14 359, AALDLT = 731)     1.15 (0.99–1.34)            1.87 (1.50–2.33)            0.82 (0.66–)
Adjusted1
  (cadaveric = 14 359, AALDLT = 731)     1.24 (1.05–1.47)            1.66 (1.30–2.11)            0.97 (0.77–)

1Adjustment for the following variables: recipient age, gender, and race; donor age, gender, and race; cause of end-stage liver disease; ICU status at time of transplant; year of transplant; prior transplant; and UNOS region.
C = allograft failure as either retransplantation or death; R = allograft failure as retransplantation; D = allograft failure as patient death.
Figure 1. Liver allograft survival following adult-to-adult living donor liver transplantation (AALDLT) or cadaveric transplantation (p = 0.07, log-rank).


Advances in liver surgery and the need to expand the donor pool have resulted in the practice of AALDLT, aiming to provide patients who are suffering from end-stage liver disease with an opportunity for a preemptive transplant (12). Our study demonstrates that this elective procedure is performed on a selected and more favorable group of transplant candidates, whose predominant location at the time of transplantation (home) suggests less severe symptoms and signs of liver failure. These recipients are transplanted with allografts taken from healthy young donors who undergo comprehensive evaluation of liver function and, in many centers, a mandatory liver biopsy (7). The procurement of the lobar allograft is performed under controlled conditions in a donor who has never experienced the hemodynamic and hormonal instabilities that are common in the cadaveric setting, variables that have been described to affect function of the transplanted organ (13). Moreover, our transplant experience and published data reveal that the allograft is exposed to a short cold ischemic time (14).

The combination of favorable factors among AALDLT, such as a short cold ischemic time and the better health of the donors and recipients at the time of transplantation, should intuitively lead to greater survival of the allograft, and would bias a study toward identifying improved allograft survival associated with AALDLT. Yet we detected worse allograft survival compared with cadaveric transplants after adjustment for factors that impact AALDLT; specifically, the need for retransplantation was greater among recipients of a living donor allograft.

We recognize that such retrospective investigations are limited by residual confounding from factors that could not be considered in this analysis. For example, no transplant center-level data were made available for this investigation; as such, these analyses cannot examine the potential influence of a transplant center's experience in performing AALDLT. Nevertheless, the short-term outcomes differ significantly from what should be expected in this favorable setting. Moreover, these national data are not consistent with previously reported center-specific allograft survival, in which the initial results were comparable to cadaveric recipients (2,3,15). We assume that the previously published data are limited by the small number of recipients in each series, and that the relatively short follow-up may not allow an accurate assessment of allograft survival. An analysis of national data improves the power to detect meaningful associations that cannot be observed in single-center investigations owing to the relatively low frequency of AALDLT.

This study suggests that an improvement in outcomes may be achieved by selecting recipient candidates who have not had a prior transplant, are not in an ICU setting, and the utilization of allografts from living donors who are under 44 years of age. All these variables are easily controlled by more careful selection of appropriate donor/recipient combinations. Moreover, we anticipate that survival rates will improve with further refinement of the surgical procedure, specifically by reducing technical complications that are related to the vascular and biliary reconstruction (16–18). Such an improvement has been observed in pediatric recipients, in which allograft and recipient survival in the living donor setting are now equivalent to cadaveric outcomes (19). In fact, prompt recognition of graft failure can lead to a rescue by urgent cadaveric whole organ retransplantation; this is reflected by the better 1-year patient survival in the AALDLT group.

An association between graft outcome and donor/recipient gender, as well as recipient race, has been described among recipients of cadaveric grafts in both single-center and nationally representative databases (20,21). Graft size mismatch as well as androgen/estrogen receptor effects have been speculated to account for the inferior graft survival among male recipients of female grafts. The etiology of inferior graft and patient survival in non-White populations is likely to be multifactorial; immunologic, socioeconomic, and health status factors have been cited as reasons for this finding.

A more concerning issue is whether the current results are also attributable to the transplantation of a smaller liver mass in AALDLT compared with cadaveric donation, one that is incapable of providing adequate function to support the recovery of an ill recipient. Unfortunately, the UNOS database does not provide important information related to the graft-to-recipient weight ratio, or other yet-to-be-defined preoperative comorbid conditions that may impact the function of reduced-size grafts in the early postoperative period. The small-for-size phenomenon has been described as a predisposing factor for the development of allograft failure in animal models and human transplantation (22–24). Following transplantation, the reduced liver mass must have an adequate residual reserve to maintain physiologic homeostasis. This metabolic load may be higher in the AALDLT recipient, considering the acute-phase response during the immediate post-transplant period, as well as the need to activate intracellular mechanisms involved in DNA synthesis and hepatocyte regeneration (25,26). In this setting, the development of common surgical complications, infection, or organ system failure may overload the remaining hepatic reserve and contribute to allograft failure. This phenomenon is not seen in pediatric living donor liver transplant recipients, who receive a relatively large liver mass, presumably with enough reserve to withstand significant postoperative stress. We assume that AALDLT would have better outcomes when compared with a similar lobar transplant from a split cadaveric liver, as the living donor lobe is taken from a stable donor during an elective procedure that is coordinated with the recipient operation to result in a very short cold ischemia. Such a comparison cannot be performed until sufficient numbers of these cadaveric left and right lobe splits are performed to provide such information.
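For context on the small-for-size discussion above: the graft-to-recipient weight ratio that the UNOS database lacks is conventionally computed as graft weight divided by recipient body weight, expressed as a percentage, and values below roughly 0.8% are often treated as small-for-size in the transplant literature. A minimal sketch follows; the function name is ours, not from any registry schema:

```python
def grwr_percent(graft_weight_g, recipient_weight_kg):
    """Graft-to-recipient weight ratio (GRWR) as a percentage:
    graft weight in grams / recipient body weight in grams * 100."""
    return graft_weight_g / (recipient_weight_kg * 1000.0) * 100.0

# Example: a 600 g right-lobe graft in a 75 kg recipient gives a
# GRWR of 0.8%, at the commonly cited small-for-size threshold.
grwr = grwr_percent(600, 75)
```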

Our results suggest that AALDLT is best applied in a more selected patient population who can most benefit from this procedure. There is no doubt that this procedure will add organs to the total pool and reduce waiting-list mortality. At this stage it may not be appropriate to recommend AALDLT in patients who have the predictors of a higher rate of failure and are likely to have poor outcomes. Nevertheless, we believe that the procedure will be perfected to a stage where it can be practiced in most candidates. These patients should preferably be incorporated into a multicenter study or national database, so that long-term outcomes can be evaluated. Some of the answers will be provided by the prospective AALDLT cohort study recently organized by the NIH, which aims to accrue and follow a sufficient number of patients being considered for, and undergoing, the procedure to provide data for an adequately powered study (27). We suggest that the information provided to the potential recipient and the donor should include program-specific results, as well as the current data on national outcomes among recipients of AALDLT. Disclosure of these data will help the donor and recipient reach a more informed decision regarding whether to undergo surgery and transplantation.


Financial support: There were no sources of funding for this research.