Liver transplantation, though life-saving, continues to be one of the most expensive and resource-intensive therapeutic interventions that contemporary medicine has to offer. The current allocation policy assigns priority for deceased donor liver allografts to the sickest candidates first according to their Model for End-Stage Liver Disease (MELD) score. This algorithm has achieved a reduction in waitlist mortality without any obvious erosion in posttransplant outcomes.1 However, higher urgency recipients often use more hospital resources than those of lower disease severity.2–5 This redistribution of livers to the sickest, along with static or even declining reimbursement for liver transplantation, has steeply increased the financial risk faced by transplant centers.4
During the past 2 decades, the success of liver transplantation has led to exponential increases in the number of candidates on the waiting list. The resultant inadequacy of the organ supply has necessitated an expansion of acceptable donor and graft criteria to include advanced age, donation after cardiac death (DCD), split, and steatotic grafts.6, 7 Recently, an objective, continuous, and quantitative index of graft quality has been derived.8 Analysis has shown that donor quality has decreased over time, with increased donor age and the emergence of split and DCD grafts as major contributing factors.9 Utilization of grafts with higher risk profiles further heightens concern regarding the financial climate of liver transplantation.
The trends of increasing recipient disease severity and overall decreasing graft quality have motivated recent analyses to identify the current determinants of transplant resource utilization. Prior to MELD allocation, studies on the cost of liver transplantation identified recipient age and markers of increased recipient severity such as Child-Pugh class C, pretransplant intensive care unit location, pretransplant ventilator dependency, and/or UNOS status 1 designation as predictive of increased liver transplant costs.10–12 Limited information exists with respect to the influence of donor factors on the costs of liver transplantation. Since the implementation of MELD allocation, a single-center study has confirmed the tight and direct correlation between recipient disease severity, as signified by the laboratory MELD score, and transplant resource utilization.5 Correspondingly, a single publication using United Network for Organ Sharing data found an inverse correlation between graft quality, as signified by the donor risk index (DRI), and resource utilization independent of recipient factors, including MELD.13
The current study expands upon these 2 previous studies5, 13 by juxtaposing both recipient disease severity and donor quality in the analysis of liver transplant costs at 2 geographically distinct institutions with highly comparable liver transplant volumes. Although direct and indirect costs are generally considered to be the most accurate indicators of hospital resource use, variations in costs and in business models make it difficult to accurately consolidate cost data across different institutions. Likewise, costs clearly vary over time and in a manner that is highly specific to each institution and geography. Length of stay (LOS) after liver transplantation is a more transparent and consistent variable across institutions and can be used as a reliable surrogate for costs.12 We therefore aimed to ascertain the absolute and relative contributions of recipient disease severity and donor quality to liver transplantation LOS.
CC, cryptogenic cirrhosis; CI, confidence interval; Cr, creatinine; DCD, donation after cardiac death; DRI, donor risk index; HBV, hepatitis B virus; HCV, hepatitis C virus; HR, hazard ratio; INR, international normalized ratio; LOS, length of stay; MELD, Model for End-Stage Liver Disease; SD, standard deviation; SNF/rehab, skilled nursing or rehabilitation facility.
PATIENTS AND METHODS
This study was approved by the institutional review board at the University of California San Francisco and the University of Texas Health Science Center at San Antonio and conformed to the ethical guidelines of the 1975 Declaration of Helsinki.
We reviewed the medical records of all adults (≥18 years of age) who underwent liver transplantation for chronic liver disease between January 1, 1998 and December 31, 2005 at 2 large-volume centers (745 for center A, 710 for center B, and 1455 for centers A and B) to collect the following donor, recipient, and transplant factors. This cohort included 222 recipients previously analyzed and reported.5
Donor and Recipient Factors
Demographics, including age, gender, and race, were collected for both donors and recipients. The donor type (living versus deceased), graft type (split versus whole), donor height, donor cause of death, donor origin (local, regional, or national), and DCD status were noted. For recipients, additional collected variables included weight, height, etiology of liver disease, and transplant number. The laboratory values for creatinine, total bilirubin, and the international normalized ratio (INR) immediately preceding transplantation were used to calculate the MELD score. Dialysis requirement was specifically noted.
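For reference, the laboratory MELD score can be computed from these three laboratory values; the sketch below uses the standard UNOS formula with its customary bounds (laboratory values floored at 1.0, creatinine capped at 4.0 mg/dL and set to 4.0 for dialysis-dependent candidates). The function name and example inputs are illustrative and are not taken from the study data.

```python
import math

def lab_meld(creatinine, bilirubin, inr, on_dialysis=False):
    """Illustrative laboratory MELD calculation (standard UNOS formula).

    Laboratory values below 1.0 are floored at 1.0; creatinine is capped
    at 4.0 mg/dL and set to 4.0 for dialysis-dependent candidates.
    """
    if on_dialysis:
        creatinine = 4.0
    creatinine = min(max(creatinine, 1.0), 4.0)
    bilirubin = max(bilirubin, 1.0)
    inr = max(inr, 1.0)
    score = (9.57 * math.log(creatinine)
             + 3.78 * math.log(bilirubin)
             + 11.20 * math.log(inr)
             + 6.43)
    return round(score)

# A candidate with normal laboratory values scores the formula's floor.
print(lab_meld(1.0, 1.0, 1.0))   # 6
# Hypothetical labs near the cohort means yield a score in the mid-20s.
print(lab_meld(1.6, 9.0, 2.0))   # 27
```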
The cold ischemia time was defined as the interval from deceased donor cross-clamping to removal from cold storage for anastomosis. The cold ischemia time and donor factors were used to calculate the DRI8 for all deceased donor grafts. For cases of living donor liver transplantation in which donor and recipient operations were simultaneously performed, no cold ischemia time was recorded, and DRI was not calculated. The warm ischemia time represents the venous anastomotic time, which is defined as the interval between removal from cold storage and venous reperfusion. The number of days from the day of transplantation to the day of discharge was considered the transplant LOS. If a patient was transferred from the transplant institution to a skilled nursing or rehabilitation facility (SNF/rehab), the days at these auxiliary care facilities were not considered part of the LOS. We did, however, analyze the correlation between donor and recipient factors and death, discharge to home, or discharge to a SNF/rehab.
Statistical Analysis
Descriptive statistics for the study cohort at the 2 transplant centers were calculated separately and together. Discrete and continuous variables were compared with Fisher's exact test and the Mann-Whitney test, respectively. Correlations between DRI, MELD, and LOS and transplant year and between DRI and MELD were assessed with Spearman rank correlation coefficients for each center. Donor, recipient, and transplant variables were assessed in univariate Cox proportional hazards models to identify risk factors associated with transplant LOS at each institution and for the combined cohort. Variables of significance (P < 0.10) were included in multivariate Cox proportional hazards models for each institution and the combined cohort. Variables were then eliminated in a stepwise fashion to derive the final model showing independent predictors of transplant LOS.
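As an illustration of the trend analyses, the Spearman rank correlation is simply the Pearson correlation of the ranked values. The sketch below is a minimal, self-contained implementation under that definition; in practice a statistical package would also supply the confidence intervals and P values reported here. The function names and sample data are hypothetical.

```python
from statistics import mean

def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A strictly increasing trend (e.g., MELD rising with transplant year)
# gives a coefficient of 1.
years = [1998, 1999, 2000, 2001, 2002]
meld_means = [18, 19, 21, 22, 25]
print(round(spearman(years, meld_means), 2))  # 1.0
```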
RESULTS
Donor and Graft Characteristics
Donor and graft characteristics for both institutions are shown and compared in Table 1. As may be expected for 2 geographically distant transplant centers, there were significant differences in donor demographics, including age, gender, race, and weight. Cerebrovascular accident was the most common cause of death for center A donors versus trauma for center B donors. Center A had a higher volume of living donor liver transplants [99/745 (13.3%) versus 9/710 (1.3%), P < 0.0001], whereas center B had a higher volume of split grafts [21/745 (2.8%) versus 45/710 (6.3%), P = 0.0015]. Center A, however, still had a substantially lower percentage of whole liver grafts [625/745 (83.9%) versus 656/710 (92.4%), P < 0.0001]. Local donors accounted for a lower percentage of donors for center A compared to center B. Notably, however, both institutions used DCD donors rarely, as they accounted for only 1% of all transplants. The mean DRI was higher at center A compared to center B (1.46 ± 0.38 versus 1.40 ± 0.38, P = 0.0013). Despite these multiple differences in donor characteristics, Fig. 1 shows striking similarities in the overall distribution of donor quality at the 2 institutions. In general, center A's donor risk profile appears to be slightly shifted to the right toward higher risk or lower quality.
Recipient Characteristics
Recipient characteristics for both institutions are shown and compared in Table 2. Although recipient age was comparable at the 2 institutions, recipient gender, race, height, weight, and body mass index differed. At both institutions, hepatitis C was the dominant indication for transplantation, accounting for more than half of all transplants (53.4% for center A versus 55.1% for center B). The frequencies of autoimmune etiologies (primary biliary cirrhosis, primary sclerosing cholangitis, and autoimmune hepatitis) and miscellaneous/other etiologies were comparable at both institutions. However, at center A versus center B, hepatitis B was much more common (13.2% versus 1.5%), whereas alcoholic liver disease (6.2% versus 15.9%) and cryptogenic cirrhosis/nonalcoholic steatohepatitis (8.9% versus 13.9%) were much less common. Ethnic origins were reflective of the geographic location of each center. Recipients of Asian descent were more common at center A than center B (19.8% versus 0.4%), whereas recipients of Latino descent were more common at center B than center A (57.0% versus 19.0%). Immediate pretransplant creatinine (1.68 ± 1.04 mg/dL for center A versus 1.49 ± 0.87 mg/dL for center B, P = 0.0015), frequency of dialysis requirement (11.5% for center A versus 5.5% for center B, P < 0.0001), and pretransplant total bilirubin (8.97 ± 11.51 mg/dL for center A versus 6.45 ± 9.14 mg/dL for center B, P = 0.022) were all higher at center A. As a result, the mean MELD score at the time of transplantation was higher at center A than center B (22.44 ± 11.34 versus 20.37 ± 8.42, P = 0.046). Figure 2 shows the overall distribution of recipient disease severity at the 2 institutions. In general, candidates with MELD scores of 15 to 30 accounted for a larger percentage of transplants at center B compared to center A (and vice versa for candidates with MELD scores > 30).
Table 2. Recipient Variables

Variable                               Center A        Center B        Centers A and B
Recipient age (years)                  52.9 ± 9.4      52.4 ± 9.2      52.7 ± 9.3
Recipient gender, male
Recipient height (cm)                  171.3 ± 10.1    169.1 ± 9.9     170.2 ± 10.1
Recipient weight (kg)                  79.3 ± 18.2     82.2 ± 19.7
Recipient body mass index              26.9 ± 5.2      28.7 ± 6.2      27.8 ± 5.8
Pretransplant creatinine (mg/dL)       1.68 ± 1.04     1.49 ± 0.87     1.59 ± 0.97
Pretransplant INR                      2.03 ± 1.30     1.90 ± 1.46     1.97 ± 1.38
Pretransplant total bilirubin (mg/dL)  8.97 ± 11.51    6.45 ± 9.14     7.74 ± 10.50
Pretransplant MELD score               22.44 ± 11.34   20.37 ± 8.42    21.43 ± 10.07

Abbreviations: CC, cryptogenic cirrhosis; HBV, hepatitis B virus; HCV, hepatitis C virus; HIV, human immunodeficiency virus; INR, international normalized ratio; MELD, Model for End-Stage Liver Disease; NASH, nonalcoholic steatohepatitis.
Transplant Characteristics
Transplant characteristics for both institutions are shown and compared in Table 3. In general, center A's transplant volume increased over the 8-year study period, whereas center B's volume remained steady. Center A tended to have a higher proportion of simultaneous liver-kidney transplants (8.9% versus 6.3%, P = 0.070), and this was consistent with the higher recipient pretransplant creatinine and more frequent dialysis requirement (Table 2). On average, the cold ischemia time was approximately 3 hours and 20 minutes longer at center A (9.54 ± 2.80 hours versus 6.22 ± 2.96 hours, P < 0.0001). Finally, there was a trend toward a slightly longer mean LOS at center A (13.7 ± 17.5 days versus 13.3 ± 16.1 days, P = 0.052).
Table 3. Transplant Variables

Variable                          Center A       Center B       Centers A and B
Donor-recipient gender match
Cold ischemia time (hours)        9.54 ± 2.80    6.22 ± 2.96    7.80 ± 3.33
Transplant length of stay (days)  13.7 ± 17.5    13.3 ± 16.1    13.5 ± 16.8
Correlations Between MELD, DRI, LOS, and Transplant Year
To better understand trends and practice patterns, we next explored whether there were trends in MELD, DRI, and LOS over the 8-year study period and whether there were correlations between MELD and DRI at either institution (Table 4). Although there did not seem to be a change in either MELD or DRI over time for center A, there was an increase in both MELD [Spearman rank correlation coefficient, 0.28; 95% confidence interval (CI), 0.21–0.34; P < 0.0001] and DRI (Spearman rank correlation coefficient, 0.14; 95% CI, 0.065–0.21; P = 0.0002) over time for center B. Therefore, one might conclude that center A's practice patterns with respect to donor and recipient characteristics were stable, whereas center B demonstrated a significant trend of increasing MELD and DRI over the 8-year study period. Interestingly, however, for both institutions, there was no correlation between MELD and DRI, and this indicated that there was not a systematic approach to the pairing of donors and recipients according to donor quality and recipient disease severity.
Table 4. Correlations Between DRI, MELD, LOS, and Transplant Year

Correlation           Center A 95% CI     Center B 95% CI
DRI versus year       −0.43 to 0.11       0.065 to 0.21
MELD versus year      −0.066 to 0.078     0.21 to 0.34
LOS versus year       −0.035 to 0.11      −0.14 to 0.009
MELD versus DRI       −0.13 to 0.028      −0.051 to 0.097

Abbreviations: DRI, donor risk index; LOS, length of stay; MELD, Model for End-Stage Liver Disease.
LOS Predictors: Individual and Combined Centers
Univariate and then multivariate Cox models for transplant LOS were created for each institution's cohort and then for the combined cohort. The final multivariate models are shown in Table 5. Notably, recipient MELD score was the only variable present in all 3 models and exerted an effect of similar magnitude at both center A [hazard ratio (HR), 1.03 per point increment; 95% CI, 1.02–1.04; P < 0.0001] and center B (HR, 1.04 per point increment; 95% CI, 1.03–1.05; P < 0.0001). At center A, only 2 other variables, donor location (HR, 1.22 for nonlocal donors; 95% CI, 1.00–1.49; P = 0.048) and recipient age (HR, 1.01 per year increment; 95% CI, 1.00–1.02; P = 0.0085), were independent predictors of transplant LOS. At center B, 6 other variables were independent LOS predictors: 2 donor factors [age (HR, 1.01; 95% CI, 1.00–1.01; P < 0.0001) and weight (HR, 0.99; 95% CI, 0.99–1.00; P = 0.0001)], 3 additional recipient factors [gender (HR, 0.83 for male; 95% CI, 0.70–0.97; P = 0.019), INR (HR, 0.92 per 1.0 increment; 95% CI, 0.85–0.99; P = 0.027), and transplant number (HR, 1.57 per incremental transplant; 95% CI, 1.06–2.31; P = 0.023)], and cold ischemia time (HR, 1.04 per hour increment; 95% CI, 1.01–1.07; P = 0.009). For the multivariate analysis of the combined cohort, center was added and proved to have a significant association with transplant LOS (HR, 0.88; 95% CI, 0.79–0.99; P = 0.029). The final Cox model for the combined cohort was composed of 3 donor factors (age, weight, and location), 4 recipient factors (age, gender, transplant number, and MELD), and center.
Table 5. Multivariate Cox Models for Transplant Length of Stay

Center A cohort
  Nonlocal donor (versus local): HR 1.22, 95% CI 1.00–1.49, P = 0.048
  Recipient age (per year increment): HR 1.01, 95% CI 1.00–1.02, P = 0.0085
  Recipient MELD (per point increment): HR 1.03, 95% CI 1.02–1.04, P < 0.0001
Center B cohort
  Donor age (per year increment): HR 1.01, 95% CI 1.00–1.01, P < 0.0001
  Donor weight (per kg increment): HR 0.99, 95% CI 0.99–1.00, P = 0.0001
  Recipient male (versus female): HR 0.83, 95% CI 0.70–0.97, P = 0.019
  Recipient INR (per 1.0 increment): HR 0.92, 95% CI 0.85–0.99, P = 0.027
  Recipient MELD (per point increment): HR 1.04, 95% CI 1.03–1.05, P < 0.0001
  Transplant number (per increment): HR 1.57, 95% CI 1.06–2.31, P = 0.023
  Cold ischemia time (per hour increment): HR 1.04, 95% CI 1.01–1.07, P = 0.009
Centers A and B cohort
  Donor age (per year increment)
  Donor weight (per kg increment)
  Donor nonlocal (versus local)
  Recipient age (per year increment)
  Recipient male (versus female)
  Transplant number (per increment)
  Recipient MELD (per point increment)
  Center A (versus center B): HR 0.88, 95% CI 0.79–0.99, P = 0.029

Abbreviations: CI, confidence interval; HR, hazard ratio; INR, international normalized ratio; MELD, Model for End-Stage Liver Disease.
Correlations Between Donor and Recipient Factors and Patient Disposition
The disposition of liver transplant recipients at discharge from the transplant hospital was collected and assessed for correlations to donor and recipient characteristics (Table 6). Disposition was classified as death, home, or SNF/rehab. Recipient disposition was modestly correlated with DRI, although it fell short of statistical significance (P = 0.058; Table 6). DRI was lowest for recipients discharged to home (1.42 ± 0.37), intermediate for those discharged to SNF/rehab (1.47 ± 0.38), and highest for those who died (1.53 ± 0.49). The association between discharge disposition and recipient age and all measures of recipient disease severity was, however, strongly significant (all P values < 0.0001). Recipients discharged to home were youngest (52.5 ± 9.4 years), whereas those who died or who were discharged to SNF/rehab were older (54.5 ± 8.7 and 55.4 ± 8.6 years, respectively). Similarly, recipients discharged to home had the lowest MELD score (20.4 ± 9.6), whereas those who died and those discharged to SNF/rehab had higher MELD scores (26.1 ± 10.5 and 27.1 ± 10.7, respectively). The same pattern was observed for each of the individual MELD components, with those discharged to home having significantly lower mean serum creatinine, bilirubin, and INR compared to those who died or were discharged to an SNF/rehab facility.
Table 6. Relationship of the Patient Disposition and the Donor and Recipient Characteristics

Variable (mean ± SD)         Home           Death          SNF/Rehab
DRI                          1.42 ± 0.37    1.53 ± 0.49    1.47 ± 0.38
Recipient age (years)        52.2 ± 9.4     54.5 ± 8.7     55.4 ± 8.6
Recipient MELD               20.4 ± 9.6     26.1 ± 10.5    27.1 ± 10.7
Recipient Cr (mg/dL)         1.50 ± 0.90    2.10 ± 1.18    2.03 ± 1.20
Recipient bilirubin (mg/dL)  7.0 ± 9.6      11.8 ± 14.0    11.2 ± 13.6
Recipient INR                1.9 ± 1.4      2.1 ± 0.8      2.4 ± 1.5

Abbreviations: Cr, creatinine; DRI, donor risk index; INR, international normalized ratio; MELD, Model for End-Stage Liver Disease; SD, standard deviation; SNF/rehab, skilled nursing or rehabilitation facility.
DISCUSSION
Liver transplantation has flourished over the past decade as the optimal treatment for end-stage liver disease. Success has resulted in expansion of both recipient and donor criteria. The tempo and extent of liberalizing these criteria have varied from program to program on the basis of not only donor organ availability and recipient disease severity profiles but also programmatic factors such as size, philosophy, and maturity. Traditionally, transplant outcomes of patient and graft survival have been the yardstick of program quality without consideration of transplant resource utilization. With the liberalization of donor and recipient criteria (ie, the use of lower quality grafts and transplantation of sicker recipients), one would surmise that resource utilization would increase in parallel. In fact, several recent publications have tried to describe the relationship between either recipient disease severity or donor quality and resource utilization.2–5, 13 These 2-way analyses have failed to elucidate the relative contributions of and possible interactions between recipient disease severity and donor quality, and this was the primary aim of our study. Our strategy, to compare and contrast in detail the determinants of transplant resource utilization at 2 large-volume but geographically distinct centers, might also elucidate the impact of program parameters with respect to philosophy and maturity.
Comparing actual cost data between 2 institutions requires normalization, a process complicated by considerations of geography, payer mix, reimbursement patterns, and indirect costs. Imprecise normalization of costs will result in erroneous data and misleading analyses. The use of LOS, an objective and well-defined variable, as a surrogate for transplant resource utilization has precedent.12 Moreover, its correlation to cost in the current era has been verified.4, 5, 13 We also chose to analyze combined data from 2 centers well matched in transplant volume during the study period, rather than national registry data. Center data are characterized by completeness, accuracy, and granularity, traits critical to our goal of elucidating the impact of center practice patterns and maturity on transplant resource utilization. Although we recognize the power of registry analyses to reflect national practice patterns and trends, we did not feel that this data source was ideally suited for this particular study. We recognize that our definition of LOS, which is limited to the acute care setting, does not capture the full extent of resource utilization, as a subset of liver transplant recipients is transferred to extended care facilities. As such, our LOS likely underestimates the true resource utilization. Inclusion of LOS at extended care institutions would introduce confounding factors. First, there are a wide variety of issues, including medical, physical, psychological, social, and logistical issues, that necessitate transfer to subacute care facilities. Second, the transplant team typically has no role in determining LOS at these facilities. We therefore decided that a strict acute care definition of LOS was most appropriate for our analyses. We did explore total acute care utilization during the first 60 and 90 posttransplant days as this would include readmissions to the transplant center. The results and conclusions were unchanged (data not presented).
The data that we have presented from these 2 academic transplant centers represent a sizable number of liver transplants over an 8-year study period. The 2 centers performed a remarkably similar number of adult liver transplants during the study period. There were several statistically significant differences in donor and recipient characteristics between the 2 centers. Despite similar distributions, both the MELD and DRI scores were significantly higher at center A versus center B. Moreover, during the study period, both MELD and DRI held constant at center A, whereas both increased at center B. However, during the study period, LOS remained stable at center A but decreased at center B. These data indicate that, during the study period, liver transplant practice was stable at center A but in evolution at center B. The evolution likely reflects not only internal programmatic issues but also external regional pressures such as the establishment of newer centers with more competition for recipients and donors. At the center level, the trends of increased recipient disease severity and lower donor quality yet a shorter mean LOS can be construed as a learning curve effect for center B, which was established more recently than center A. During the study period, no discrete changes in clinical care pathways were implemented at either institution. Moreover, there was high consistency in personnel as the medical and surgical teams and leadership were stable at both institutions. At the regional level, parallel increases in MELD and DRI likely reflect a widening of the disparity between organ supply and demand and possibly increased competition among the local centers.
Predictors of resource utilization using LOS as a surrogate varied between the 2 study centers. The only strong LOS predictor common to both centers was the MELD score at the time of transplant. Notably, the magnitude of MELD's impact was strikingly similar at the 2 centers; a MELD score increment of 1 was associated with a 3% to 4% increase in LOS in both single-center models and in the combined model. The evaluation of patient disposition after the acute transplant hospitalization further endorses the finding that recipient disease severity is a dominant driver of resource utilization. DRI exhibited only a trend toward association, whereas both recipient age and MELD exhibited strong associations with the need for transitional care; this finding has not been previously demonstrated or reported. A recent single-center study has reported that MELD is the single variable most strongly correlated with posttransplant costs.5 Our study is wholly consistent with this report.
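Because Cox model hazard ratios are multiplicative, the per-point MELD effect compounds over larger score differences. A short illustrative calculation under the reported per-point hazard ratios of 1.03 to 1.04 (the function name is hypothetical):

```python
def compounded_effect(hr_per_point, delta):
    """Multiplicative effect of a per-unit hazard ratio over `delta` units."""
    return hr_per_point ** delta

# At HR 1.03 to 1.04 per MELD point, a 10-point difference in MELD score
# compounds to roughly a 34% to 48% difference in the modeled effect on LOS.
print(round(compounded_effect(1.03, 10), 2))  # 1.34
print(round(compounded_effect(1.04, 10), 2))  # 1.48
```

Under this framing, 2 otherwise comparable recipients whose MELD scores differ by 10 points would be expected to differ by roughly one-third to one-half in the modeled effect, which underscores how strongly disease severity dominates resource utilization.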
Although MELD was the only predictor common to both centers, several additional factors were significant predictors of transplant LOS at either center A or B. At center A, only 2 other variables were associated with transplant LOS (recipient age and nonlocal donor), whereas at center B, 6 other variables were associated (donor age and weight, recipient gender, INR, transplant number, and cold ischemia time). To derive the final multivariate Cox model for the 2 centers combined, we introduced the additional variable of transplant center. The final model identified 8 independent predictors of transplant LOS, including transplant center. Several predictors in the 3 multivariate models deserve special comment. First, male recipient gender was associated with shorter LOS for center B and centers A and B. We speculate that this may result from the fact that for a given MELD score, male recipients are “less sick” than female recipients. It is well known that the same serum creatinine corresponds to better renal function in male candidates. This inequity has been cited as systematic bias that disadvantages female transplant candidates.14 Second, higher INR was associated with shorter LOS in the multivariate model for center B. In univariate models for center A, center B, and centers A and B, higher INR, as expected, was associated with longer LOS (data not shown). However, the effect was reversed in the multivariate model that includes MELD. Therefore, the model indicates that, for a given MELD score, the patient with a higher INR will have a shorter LOS, which implies that the patient whose score is driven instead by higher bilirubin or creatinine will have a longer LOS. Finally, it is not surprising that transplant center itself is independently associated with transplant LOS for centers A and B.
We believe that transplant resource utilization strongly reflects multiple dimensions of an individual center's practice, such as local competition as well as programmatic philosophy and maturity, not fully described by the donor, recipient, and transplant factors that were specifically explored.
Interestingly, DRI, an overall measure of donor quality,8 did not emerge as a factor at either center or for the combined cohort. Moreover, DRI also failed to show a strong correlation with the need for transitional care. Our findings, therefore, counter a recent study of national registry data that found a significant and independent association between DRI and transplant LOS for transplants performed in 2002–2005. In comparison with the reference group of donors with DRI between 1.0 and 1.5, they reported that high-risk donors (those with a DRI of 2.0–2.5, who accounted for 8.4% of the donor pool) and highest risk donors (those with a DRI > 2.5, who accounted for 1.9% of the donor pool) were associated with 9% and 29.7% increases, respectively, in transplant LOS.13 We speculate that the national experience may reflect a steep national learning curve with higher risk, lower quality donors. This hypothesis is supported by a subsequent report by the same group showing that the impact of DRI on transplant LOS had decreased dramatically.15
Although DRI did not emerge as a predictor in single-center or combined-center analyses, 3 donor factors did emerge as predictors of transplant LOS. Donor age and weight showed a significant association in analyses of center B and centers A and B, whereas nonlocal status showed a significant association in analyses of center A and centers A and B. The inconsistent association of these donor factors contrasts with the highly consistent association of MELD with transplant LOS. These findings again lead us to conclude that factors reflective of donor quality exert a modest impact on liver transplant resource utilization.
In conclusion, we have shown that recipient disease severity as measured by MELD exerts a potent, dominant, and consistent effect on transplant resource utilization as measured by transplant LOS at 2 transplant centers. The 2 centers and their transplant practices appear quite similar by superficial assessment. However, several parameters and their evolution, or lack thereof, over time revealed important differences that undoubtedly shaped transplant resource utilization at each center. These differences were reflected by the variable array of LOS predictors at the 2 centers. More striking than these differences were the identical impact of recipient MELD score and the lack of effect of DRI. The dominance of recipient disease severity among LOS determinants should simplify a transplant center's understanding of its resource utilization patterns and facilitate benchmarking against other transplant centers. Moreover, we suggest that recipient MELD score should weigh heavily in risk adjustment models for transplant center payments.
The authors gratefully acknowledge Alan Bostrom, Ph.D., for his expert statistical analysis.