Factors influencing liver transplant length of stay at two large-volume transplant centers

Abstract

Length of stay (LOS) is considered a reliable surrogate for liver transplant resource utilization. Little information exists about how donor and recipient variables interact to affect transplant LOS. Data for adult, non–status 1 transplants (1998–2005), including the donor risk index (DRI) and Model for End-Stage Liver Disease (MELD) scores, were collected from 2 institutions (n = 745 for center A and n = 710 for center B). Cox proportional hazards models identified variables associated with LOS for the separate and combined cohorts. The cohorts differed significantly in donor, recipient, and transplant factors. DRI (1.46 for center A and 1.40 for center B, P = 0.0013) and MELD (22.4 for center A and 20.4 for center B, P = 0.046) were both higher at center A, but LOS was comparable (13.7 days for center A and 13.3 days for center B, P = 0.052). Three factors at center A (nonlocal donor, recipient age, and MELD) and 7 factors at center B (donor age and weight, recipient female gender, retransplant status, international normalized ratio, MELD, and cold ischemia time) were associated with transplant LOS. For the combined cohort, donor age, weight, nonlocal status, recipient age, female gender, retransplant status, MELD, and transplant center were LOS risk factors. In conclusion, the impact of donor and recipient variables on LOS varies by institution. However, the MELD score exerts a potent and consistent effect across institutions, emphasizing the dominant role of disease severity in liver transplant resource utilization. Liver Transpl 15:1570–1578, 2009. © 2009 AASLD.

Liver transplantation, though life-saving, continues to be one of the most expensive and resource-intensive therapeutic interventions that contemporary medicine has to offer. The current allocation policy assigns priority for deceased donor liver allografts to the sickest candidates first according to their Model for End-Stage Liver Disease (MELD) score. This algorithm has achieved a reduction in waitlist mortality without any obvious erosion in posttransplant outcomes.1 However, higher urgency recipients often use more hospital resources than those of lower disease severity.2–5 This redistribution of livers to the sickest, along with static or even declining reimbursement for liver transplantation, has steeply increased the financial risk faced by transplant centers.4

During the past 2 decades, the success of liver transplantation has led to exponential increases in the number of candidates on the waiting list. The resultant inadequacy of the organ supply has necessitated an expansion of acceptable donor and graft criteria to include advanced age, donation after cardiac death (DCD), split, and steatotic grafts.6, 7 Recently, an objective, continuous, and quantitative index of graft quality has been derived.8 Analysis has shown that donor quality has decreased over time, with increased donor age and the emergence of split and DCD grafts as major contributing factors.9 Utilization of grafts with higher risk profiles further heightens concern regarding the financial climate of liver transplantation.

The trends of increasing recipient disease severity and overall decreasing graft quality have motivated recent analyses to identify the current determinants of transplant resource utilization. Prior to MELD allocation, studies on the cost of liver transplantation identified recipient age and markers of increased recipient severity such as Child-Pugh class C, pretransplant intensive care unit location, pretransplant ventilator dependency, and/or UNOS status 1 designation as predictive of increased liver transplant costs.10–12 Limited information exists with respect to the influence of donor factors on the costs of liver transplantation. Since the implementation of MELD allocation, a single-center study has confirmed the tight and direct correlation between recipient disease severity, as signified by the laboratory MELD score, and transplant resource utilization.5 Correspondingly, a single publication using United Network for Organ Sharing data found an inverse correlation between graft quality, as signified by the donor risk index (DRI), and resource utilization independent of recipient factors, including MELD.13

The current study expands upon these two previous studies5, 13 by juxtaposing both recipient disease severity and donor quality in the analysis of liver transplant costs at 2 geographically distinct institutions with highly comparable liver transplant volumes. Although direct and indirect costs are generally considered to be the most accurate indicators of hospital resource use, variations in costs and in business models make it difficult to accurately consolidate cost data across different institutions. Likewise, costs clearly vary over time and in a manner that is highly specific to each institution and geography. Length of stay (LOS) after liver transplantation is a more transparent and consistent variable across institutions and can be used as a reliable surrogate for costs.12 We therefore aimed to ascertain the absolute and relative contributions of recipient disease severity and donor quality to liver transplantation LOS.

Abbreviations

CC, cryptogenic cirrhosis; CI, confidence interval; Cr, creatinine; DCD, donation after cardiac death; DRI, donor risk index; HBV, hepatitis B virus; HCV, hepatitis C virus; HR, hazard ratio; INR, international normalized ratio; LOS, length of stay; MELD, Model for End-Stage Liver Disease; SD, standard deviation; SNF/rehab, skilled nursing or rehabilitation facility.

PATIENTS AND METHODS

This study was approved by the institutional review boards of the University of California, San Francisco, and the University of Texas Health Science Center at San Antonio and conformed to the ethical guidelines of the 1975 Declaration of Helsinki.

Data Collection

We reviewed the medical records of all adults (≥18 years of age) who underwent liver transplantation for chronic liver disease between January 1, 1998 and December 31, 2005 at 2 large-volume centers (n = 745 for center A, n = 710 for center B, and n = 1455 combined) and collected the donor, recipient, and transplant factors described below. This cohort included 222 recipients previously analyzed and reported.5

Donor and Recipient Factors

Demographics, including age, gender, and race, were collected for both donors and recipients. The donor type (living versus deceased), graft type (split versus whole), donor height, donor cause of death, donor origin (local, regional, or national), and DCD status were noted. For recipients, additional collected variables included weight, height, etiology of liver disease, and transplant number. The laboratory values for creatinine, total bilirubin, and the international normalized ratio (INR) immediately preceding transplantation were used to calculate the MELD score. Dialysis requirement was specifically noted.
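
For reference, the sketch below reproduces the standard UNOS laboratory MELD formula in use during the study era. The clamping and rounding rules shown are the usual UNOS conventions rather than details reported in this paper, and the function name is ours.

```python
import math

def lab_meld(creatinine_mg_dl: float, bilirubin_mg_dl: float, inr: float,
             on_dialysis: bool = False) -> int:
    """Laboratory MELD score (standard UNOS formula of the study era).

    Conventional rules assumed here: laboratory values below 1.0 are set to
    1.0, creatinine is capped at 4.0 mg/dL (or set to 4.0 for dialysis-
    dependent candidates), and the result is rounded to the nearest integer.
    """
    cr = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    return round(10 * (0.957 * math.log(cr)
                       + 0.378 * math.log(bili)
                       + 1.120 * math.log(inr)
                       + 0.643))

# Example: creatinine 1.6 mg/dL, bilirubin 9.0 mg/dL, INR 2.0 -> MELD 27
print(lab_meld(1.6, 9.0, 2.0))
```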

Transplant Variables

The cold ischemia time was defined as the interval from deceased donor cross-clamping to removal from cold storage for anastomosis. The cold ischemia time and donor factors were used to calculate the DRI8 for all deceased donor grafts. For cases of living donor liver transplantation in which donor and recipient operations were simultaneously performed, no cold ischemia time was recorded, and DRI was not calculated. The warm ischemia time represents the venous anastomotic time, which is defined as the interval between removal from cold storage and venous reperfusion. The number of days from the day of transplantation to the day of discharge was considered the transplant LOS. If a patient was transferred from the transplant institution to a skilled nursing or rehabilitation facility (SNF/rehab), the days at these auxiliary care facilities were not considered part of the LOS. We did, however, analyze the correlation between donor and recipient factors and death, discharge to home, or discharge to a SNF/rehab.
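
As an illustration of how these definitions translate into derived variables, a minimal sketch follows; the timestamps and function names are hypothetical, as the paper does not describe its data schema.

```python
from datetime import datetime

def cold_ischemia_hours(cross_clamp: datetime, out_of_cold_storage: datetime) -> float:
    """Cold ischemia time: donor cross-clamping to removal of the graft from
    cold storage for anastomosis, in hours."""
    return (out_of_cold_storage - cross_clamp).total_seconds() / 3600.0

def transplant_los_days(transplant_day: datetime, discharge_day: datetime) -> int:
    """Transplant LOS: days from the day of transplantation to the day of
    discharge from the transplant hospital; days spent at an SNF/rehab
    facility after transfer are not counted."""
    return (discharge_day.date() - transplant_day.date()).days

# Hypothetical example: cross-clamp at 02:00, graft out of cold storage at 11:30.
print(cold_ischemia_hours(datetime(2004, 3, 1, 2, 0), datetime(2004, 3, 1, 11, 30)))  # 9.5
print(transplant_los_days(datetime(2004, 3, 1), datetime(2004, 3, 14)))               # 13
```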

Statistical Analysis

Descriptive statistics for the study cohort at the 2 transplant centers were calculated separately and together. Discrete and continuous variables were compared with Fisher's exact test and the Mann-Whitney test, respectively. Correlations of DRI, MELD, and LOS with transplant year, as well as between DRI and MELD, were assessed with Spearman rank correlation coefficients for each center. Donor, recipient, and transplant variables were assessed in univariate Cox proportional hazards models to identify risk factors associated with transplant LOS at each institution and for the combined cohort. Variables of significance (P < 0.10) were included in multivariate Cox proportional hazards models for each institution and the combined cohort. Variables were then eliminated in a stepwise fashion to derive the final models of independent predictors of transplant LOS.
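
A minimal sketch of this analysis pipeline in Python (pandas, SciPy, and lifelines); the original analysis was presumably performed in a standard statistics package, and the file and column names here are hypothetical.

```python
import pandas as pd
from scipy.stats import fisher_exact, mannwhitneyu, spearmanr
from lifelines import CoxPHFitter

# One row per transplant; column names are illustrative only.
df = pd.read_csv("transplants.csv")
a, b = df[df.center == "A"], df[df.center == "B"]

# Between-center comparisons: Fisher's exact test for discrete variables,
# Mann-Whitney for continuous variables.
print(fisher_exact(pd.crosstab(df.center, df.recipient_male).values))
print(mannwhitneyu(a.meld, b.meld))

# Spearman rank correlations with transplant year, computed per center.
for center, grp in df.groupby("center"):
    print(center, spearmanr(grp.transplant_year, grp.meld))

# Cox proportional hazards model for transplant LOS (discharge as the event).
# Variables with univariate P < 0.10 would be carried into the multivariate
# model and then removed stepwise (stepwise loop not shown).
cph = CoxPHFitter()
cph.fit(df[["los_days", "discharged", "meld", "recipient_age", "donor_age"]],
        duration_col="los_days", event_col="discharged")
cph.print_summary()
```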

RESULTS

Donor and Graft Characteristics

Donor and graft characteristics for both institutions are shown and compared in Table 1. As may be expected for 2 geographically distant transplant centers, there were significant differences in donor demographics, including age, gender, race, and weight. Cerebrovascular accident was the most common cause of death for center A donors versus trauma for center B donors. Center A had a higher volume of living donor liver transplants [99/745 (13.3%) versus 9/710 (1.3%), P < 0.0001], whereas center B had a higher volume of split grafts [21/745 (2.8%) versus 45/710 (6.3%), P = 0.0015]. Center A, however, still had a substantially lower percentage of whole liver grafts [625/745 (83.9%) versus 656/710 (92.4%), P < 0.0001]. Local donors accounted for a lower percentage of donors for center A compared to center B. Notably, however, both institutions used DCD donors rarely, as they accounted for only 1% of all transplants. The mean DRI was higher at center A compared to center B (1.46 ± 0.38 versus 1.40 ± 0.38, P = 0.0013). Despite these multiple differences in donor characteristics, Fig. 1 shows striking similarities in the overall distribution of donor quality at the 2 institutions. In general, center A's donor risk profile appears to be slightly shifted to the right toward higher risk or lower quality.

Table 1. Donor and Graft Variables

Variable | Center A | Center B | Centers A and B | P Value
Donor age (years) | 40.9 ± 16.1 | 39.3 ± 18.1 | 40.1 ± 17.1 | 0.029
Donor gender, male | 419 (56.2%) | 454 (63.9%) | 873 (60.0%) | 0.0027
Donor race: Asian | 64 (8.7%) | 11 (1.6%) | 75 (5.2%) | <0.0001
Donor race: African American | 45 (6.1%) | 39 (5.5%) | 84 (5.8%) |
Donor race: Caucasian | 490 (66.9%) | 408 (57.5%) | 898 (62.3%) |
Donor race: Latino | 118 (16.1%) | 246 (34.7%) | 364 (25.3%) |
Donor race: Other | 15 (2.0%) | 5 (0.7%) | 20 (1.4%) |
Donor height (cm) | 170.6 ± 10.9 | 170.9 ± 12.1 | 170.8 ± 11.5 | 0.40
Donor weight (kg) | 75.1 ± 18.0 | 78.1 ± 21.2 | 76.6 ± 19.8 | 0.0165
Donor body mass index | 25.6 ± 5.1 | 26.7 ± 7.8 | 26.2 ± 6.7 | 0.073
Donor cause of death*: Cerebrovascular | 304 (47.1%) | 293 (41.8%) | 597 (44.4%) | 0.0138
Donor cause of death*: Trauma | 255 (39.5%) | 314 (44.8%) | 569 (42.3%) |
Donor cause of death*: Anoxia | 82 (12.7%) | 77 (11.0%) | 159 (11.8%) |
Donor cause of death*: Other | 4 (0.6%) | 17 (2.4%) | 21 (1.5%) |
Donation after cardiac death | 6 (0.9%) | 7 (1.0%) | 13 (1.0%) | 0.89
Donor origin*: Local | 510 (78.9%) | 583 (82.1%) | 1093 (80.6%) | 0.0007
Donor origin*: Regional | 107 (16.6%) | 119 (16.8%) | 226 (16.7%) |
Donor origin*: National | 29 (4.5%) | 8 (1.1%) | 37 (2.7%) |
Living donor | 99 (13.3%) | 9 (1.3%) | 108 (7.4%) | <0.0001
Deceased donor split graft | 21 (2.8%) | 45 (6.3%) | 66 (4.6%) |
Whole organ | 625 (83.9%) | 656 (92.4%) | 1281 (88.0%) | <0.0001
Donor risk index* | 1.46 ± 0.38 | 1.40 ± 0.38 | 1.43 ± 0.39 | 0.0013

*Deceased donors only.

Figure 1. Donor risk index (DRI) distribution in centers A and B for adult, non–status 1, deceased donor liver transplants performed between 1998 and 2005.

Recipient Characteristics

Recipient characteristics for both institutions are shown and compared in Table 2. Although recipient age was comparable at the 2 institutions, recipient gender, race, height, weight, and body mass index differed. At both institutions, hepatitis C was the dominant indication for transplantation, accounting for more than half of all transplants (53.4% for center A versus 55.1% for center B). The frequencies of autoimmune etiologies (primary biliary cirrhosis, primary sclerosing cholangitis, and autoimmune hepatitis) and miscellaneous/other etiologies were comparable at both institutions. However, at center A versus center B, hepatitis B was much more common (13.2% versus 1.5%), whereas alcoholic liver disease (6.2% versus 15.9%) and cryptogenic cirrhosis/nonalcoholic steatohepatitis (8.9% versus 13.9%) were much less common. Ethnic origins were reflective of the geographic location of each center. Recipients of Asian descent were more common at center A than at center B (19.8% versus 0.6%), whereas recipients of Latino descent were more common at center B than at center A (57.0% versus 19.0%). Immediate pretransplant creatinine (1.68 ± 1.04 mg/dL for center A versus 1.49 ± 0.87 mg/dL for center B, P = 0.0015), frequency of dialysis requirement (11.5% for center A versus 5.5% for center B, P < 0.0001), and pretransplant total bilirubin (8.97 ± 11.51 mg/dL for center A versus 6.45 ± 9.14 mg/dL for center B, P = 0.022) were all higher at center A. As a result, the mean MELD score at the time of transplantation was higher at center A than center B (22.44 ± 11.34 versus 20.37 ± 8.42, P = 0.046). Figure 2 shows the overall distribution of recipient disease severity at the 2 institutions. In general, candidates with MELD scores of 15 to 30 accounted for a larger percentage of transplants at center B compared to center A (and vice versa for candidates with MELD scores > 30).

Table 2. Recipient Variables

Variable | Center A | Center B | Centers A and B | P Value
Recipient age (years) | 52.9 ± 9.4 | 52.4 ± 9.2 | 52.7 ± 9.3 | 0.24
Recipient gender, male | 496 (66.6%) | 425 (59.9%) | 921 (63.3%) | 0.0079
Recipient race: Asian | 147 (19.8%) | 4 (0.6%) | 151 (10.4%) | <0.0001
Recipient race: African American | 43 (5.8%) | 18 (2.5%) | 61 (4.2%) |
Recipient race: Caucasian | 409 (55.0%) | 282 (39.7%) | 691 (47.5%) |
Recipient race: Latino | 141 (19.0%) | 405 (57.0%) | 546 (37.6%) |
Recipient race: Other | 4 (0.5%) | 1 (0.1%) | 5 (0.3%) |
Recipient height (cm) | 171.3 ± 10.1 | 169.1 ± 9.9 | 170.2 ± 10.1 | <0.0001
Recipient weight (kg) | 79.3 ± 18.2 | 82.2 ± 19.7 | 80.7 ± 19.0 | 0.0074
Recipient body mass index | 26.9 ± 5.2 | 28.7 ± 6.2 | 27.8 ± 5.8 | <0.0001
Recipient diagnosis: Autoimmune | 85 (11.4%) | 66 (9.3%) | 151 (10.4%) | <0.0001
Recipient diagnosis: Alcohol | 46 (6.2%) | 113 (15.9%) | 159 (10.9%) |
Recipient diagnosis: CC/NASH | 66 (8.9%) | 99 (13.9%) | 165 (11.3%) |
Recipient diagnosis: HBV | 98 (13.2%) | 11 (1.5%) | 109 (7.5%) |
Recipient diagnosis: HCV | 398 (53.4%) | 391 (55.1%) | 789 (54.2%) |
Recipient diagnosis: Other | 52 (7.0%) | 30 (4.2%) | 82 (5.6%) |
Recipient HIV-positive | 17 (2.3%) | 0 | 17 (1.2%) | <0.0001
Transplant number: 1 | 703 (94.4%) | 682 (96.1%) | 1385 (95.2%) | 0.32
Transplant number: 2 | 39 (5.2%) | 26 (3.7%) | 65 (4.5%) |
Transplant number: 3 | 3 (0.4%) | 2 (0.3%) | 5 (0.3%) |
Pretransplant creatinine (mg/dL) | 1.68 ± 1.04 | 1.49 ± 0.87 | 1.59 ± 0.97 | 0.0015
Pretransplant dialysis | 86 (11.5%) | 39 (5.5%) | 125 (8.6%) | <0.0001
Pretransplant INR | 2.03 ± 1.30 | 1.90 ± 1.46 | 1.97 ± 1.38 | 0.22
Pretransplant total bilirubin (mg/dL) | 8.97 ± 11.51 | 6.45 ± 9.14 | 7.74 ± 10.50 | 0.022
Pretransplant MELD | 22.44 ± 11.34 | 20.37 ± 8.42 | 21.43 ± 10.07 | 0.046

Abbreviations: CC, cryptogenic cirrhosis; HBV, hepatitis B virus; HCV, hepatitis C virus; HIV, human immunodeficiency virus; INR, international normalized ratio; MELD, Model for End-Stage Liver Disease; NASH, nonalcoholic steatohepatitis.

Figure 2. Model for End-Stage Liver Disease (MELD) distribution in centers A and B for all adult, non–status 1 liver transplants performed between 1998 and 2005.

Transplant Characteristics

Transplant characteristics for both institutions are shown and compared in Table 3. In general, center A's transplant volume increased over the 8-year study period, whereas center B's volume remained steady. Center A tended to have a higher proportion of simultaneous liver-kidney transplants (8.9% versus 6.3%, P = 0.070), and this was consistent with the higher recipient pretransplant creatinine and more frequent dialysis requirement (Table 2). On average, the cold ischemia time was approximately 3 hours and 20 minutes longer at center A (9.54 ± 2.80 hours versus 6.22 ± 2.96 hours, P < 0.0001). Finally, there was a slightly longer mean LOS at center A (13.7 ± 17.5 days versus 13.3 ± 16.1 days, P = 0.052).

Table 3. Transplant Variables

Variable | Center A | Center B | Centers A and B | P Value
Transplant year: 1998 | 67 (9.0%) | 72 (10.1%) | 139 (9.6%) | 0.017
Transplant year: 1999 | 75 (10.1%) | 64 (9.0%) | 139 (9.6%) |
Transplant year: 2000 | 70 (9.4%) | 94 (13.2%) | 164 (11.3%) |
Transplant year: 2001 | 98 (13.2%) | 90 (12.7%) | 188 (12.9%) |
Transplant year: 2002 | 94 (12.6%) | 112 (15.8%) | 206 (14.2%) |
Transplant year: 2003 | 103 (13.8%) | 106 (14.9%) | 209 (14.4%) |
Transplant year: 2004 | 106 (14.2%) | 84 (11.8%) | 190 (13.1%) |
Transplant year: 2005 | 132 (17.7%) | 88 (12.4%) | 220 (15.1%) |
Donor-recipient gender match | 435 (58.4%) | 387 (54.5%) | 822 (56.5%) | 0.14
Liver-kidney transplant | 66 (8.9%) | 45 (6.3%) | 111 (7.6%) | 0.070
Cold ischemia time (hours) | 9.54 ± 2.80 | 6.22 ± 2.96 | 7.80 ± 3.33 | <0.0001
Transplant length of stay (days) | 13.7 ± 17.5 | 13.3 ± 16.1 | 13.5 ± 16.8 | 0.052

Correlations Between MELD, DRI, LOS, and Transplant Year

To better understand trends and practice patterns, we next explored whether MELD, DRI, and LOS changed over the 8-year study period and whether MELD and DRI were correlated at either institution (Table 4). Although there was no apparent change in either MELD or DRI over time for center A, there was an increase in both MELD [Spearman rank correlation coefficient, 0.28; 95% confidence interval (CI), 0.21–0.34; P < 0.0001] and DRI (Spearman rank correlation, 0.14; 95% CI, 0.065–0.21; P = 0.0002) over time for center B. One might therefore conclude that center A's practice patterns with respect to donor and recipient characteristics were stable, whereas center B demonstrated a significant trend of increasing MELD and DRI over the study period. Interestingly, at neither institution was there a correlation between MELD and DRI, indicating that donors and recipients were not systematically paired according to donor quality and recipient disease severity.

Table 4. Correlations Between DRI, MELD, LOS, and Transplant Year

Variable 1 | Variable 2 | Center | Spearman Rank Correlation | 95% Confidence Interval | P Value
DRI | Transplant year | A | 0.034 | −0.043 to 0.110 | 0.39
DRI | Transplant year | B | 0.14 | 0.065 to 0.21 | 0.0002
MELD | Transplant year | A | 0.006 | −0.066 to 0.078 | 0.87
MELD | Transplant year | B | 0.28 | 0.21 to 0.34 | <0.0001
LOS | Transplant year | A | 0.037 | −0.035 to 0.11 | 0.31
LOS | Transplant year | B | −0.065 | −0.14 to 0.009 | 0.085
DRI | MELD | A | −0.050 | −0.13 to 0.028 | 0.21
DRI | MELD | B | 0.023 | −0.051 to 0.097 | 0.55

Abbreviations: DRI, donor risk index; LOS, length of stay; MELD, Model for End-Stage Liver Disease.
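
The paper does not state how the confidence intervals in Table 4 were derived; one standard approximation is the Fisher z-transformation, sketched below. With roughly 700 deceased donor grafts at center B, it approximately reproduces the reported interval for the DRI-versus-year correlation.

```python
import math

def spearman_ci(rho: float, n: int, z_crit: float = 1.96) -> tuple:
    """Approximate confidence interval for a Spearman rank correlation via the
    Fisher z-transformation (one common choice; not necessarily the method
    used by the authors)."""
    z = math.atanh(rho)
    half_width = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half_width), math.tanh(z + half_width)

# Center B, DRI versus transplant year: rho = 0.14 with n ~ 701 deceased donor
# grafts gives roughly (0.07, 0.21), in line with the 0.065-0.21 in Table 4.
print(spearman_ci(0.14, 701))
```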

LOS Predictors: Individual and Combined Centers

Univariate and then multivariate Cox models for transplant LOS were created for each institution's cohort and then for the combined cohort. The final multivariate models are shown in Table 5. Notably, recipient MELD score was the only variable present in all 3 models and exerted an effect of similar magnitude at both center A [hazard ratio (HR), 1.03 per point increment; 95% CI, 1.02–1.04; P < 0.0001] and center B (HR, 1.04 per point increment; 95% CI, 1.03–1.05; P < 0.0001). At center A, only 2 other variables, donor location (HR, 1.22 for nonlocal donors; 95% CI, 1.00–1.49; P = 0.048) and recipient age (HR, 1.01 per year increment; 95% CI, 1.00–1.02; P = 0.0085), were independent predictors of transplant LOS. At center B, 6 other variables were independent LOS predictors: 2 donor factors [age (HR, 1.01; 95% CI, 1.00–1.01; P < 0.0001) and weight (HR, 0.99; 95% CI, 0.99–1.00; P = 0.0001)], 3 additional recipient factors [gender (HR, 0.83 for male; 95% CI, 0.70–0.97; P = 0.019), INR (HR, 0.92 per 1.0 increment; 95% CI, 0.85–0.99; P = 0.027), and transplant number (HR, 1.57 per incremental transplant; 95% CI, 1.06–2.31; P = 0.023)], and cold ischemia time (HR, 1.04 per hour increment; 95% CI, 1.01–1.07; P = 0.009). For the multivariate analysis of the combined cohort, center was added and proved to have a significant association with transplant LOS (HR, 0.88; 95% CI, 0.79–0.99; P = 0.029). The final Cox model for the combined cohort was composed of 3 donor factors (age, weight, and location), 4 recipient factors (age, gender, transplant number, and MELD), and center.

Table 5. Multivariate Cox Models for Transplant Length of Stay

Variable | HR | 95% CI | P Value
Center A cohort
  Nonlocal donor (versus local) | 1.22 | 1.00–1.49 | 0.048
  Recipient age (per year increment) | 1.01 | 1.00–1.02 | 0.0085
  Recipient MELD (per point increment) | 1.03 | 1.02–1.04 | <0.0001
Center B cohort
  Donor age (per year increment) | 1.01 | 1.00–1.01 | <0.0001
  Donor weight (per kg increment) | 0.99 | 0.99–1.00 | 0.0001
  Recipient male (versus female) | 0.83 | 0.70–0.97 | 0.019
  Recipient INR (per 1.0 increment) | 0.92 | 0.85–0.99 | 0.027
  Recipient MELD (per point increment) | 1.04 | 1.03–1.05 | <0.0001
  Transplant number (per increment) | 1.57 | 1.06–2.31 | 0.023
  Cold ischemia time (per hour increment) | 1.04 | 1.01–1.07 | 0.009
Combined cohort
  Donor age (per year increment) | 1.01 | 1.00–1.01 | <0.0001
  Donor weight (per kg increment) | 1.00 | 0.99–1.00 | 0.0016
  Donor nonlocal (versus local) | 1.16 | 1.01–1.34 | 0.041
  Recipient age (per year increment) | 1.01 | 1.00–1.02 | 0.0087
  Recipient male (versus female) | 0.84 | 0.75–0.95 | 0.0052
  Transplant number (per increment) | 1.45 | 1.14–1.84 | 0.0028
  Recipient MELD (per point increment) | 1.03 | 1.03–1.04 | <0.0001
  Center A (versus center B) | 0.88 | 0.79–0.99 | 0.029

Abbreviations: CI, confidence interval; HR, hazard ratio; INR, international normalized ratio; MELD, Model for End-Stage Liver Disease.

Correlations Between Donor and Recipient Factors and Patient Disposition

The disposition of liver transplant recipients at discharge from the transplant hospital was collected and assessed for correlations to donor and recipient characteristics (Table 6). Disposition was classified as death, home, or SNF/rehab. Recipient disposition was modestly correlated with DRI, although it fell short of statistical significance (P = 0.058; Table 6). DRI was lowest for recipients discharged to home (1.42 ± 0.37), intermediate for those discharged to SNF/rehab (1.47 ± 0.38), and highest for those who died (1.53 ± 0.49). The association between discharge disposition and recipient age and all measures of recipient disease severity was, however, strongly significant (all P values < 0.0001). Recipients discharged to home were youngest (52.5 ± 9.4 years), whereas those who died or who were discharged to SNF/rehab were older (54.5 ± 8.7 and 55.4 ± 8.6 years, respectively). Similarly, recipients discharged to home had the lowest MELD score (20.4 ± 9.6), whereas those who died and those discharged to SNF/rehab had higher MELD scores (26.1 ± 10.5 and 27.1 ± 10.7, respectively). The same pattern was observed for each of the individual MELD components, with those discharged to home having significantly lower mean serum creatinine, bilirubin, and INR compared to those who died or were discharged to an SNF/rehab facility.

Table 6. Relationship of the Patient Disposition and the Donor and Recipient Characteristics

Variable | n (Centers A and B) | Disposition | Mean ± SD | P Value
DRI | 1116 | Home | 1.42 ± 0.37 | 0.058
DRI | 65 | Death | 1.53 ± 0.49 |
DRI | 166 | SNF/rehab | 1.47 ± 0.38 |
Recipient age (years) | 1217 | Home | 52.2 ± 9.4 | <0.0001
Recipient age (years) | 69 | Death | 54.5 ± 8.7 |
Recipient age (years) | 168 | SNF/rehab | 55.4 ± 8.6 |
Recipient MELD | 1218 | Home | 20.4 ± 9.6 | <0.0001
Recipient MELD | 69 | Death | 26.1 ± 10.5 |
Recipient MELD | 168 | SNF/rehab | 27.1 ± 10.7 |
Recipient Cr (mg/dL) | 1218 | Home | 1.50 ± 0.90 | <0.0001
Recipient Cr (mg/dL) | 69 | Death | 2.10 ± 1.18 |
Recipient Cr (mg/dL) | 168 | SNF/rehab | 2.03 ± 1.20 |
Recipient bilirubin (mg/dL) | 1217 | Home | 7.0 ± 9.6 | <0.0001
Recipient bilirubin (mg/dL) | 69 | Death | 11.8 ± 14.0 |
Recipient bilirubin (mg/dL) | 168 | SNF/rehab | 11.2 ± 13.6 |
Recipient INR | 1218 | Home | 1.9 ± 1.4 | <0.0001
Recipient INR | 69 | Death | 2.1 ± 0.8 |
Recipient INR | 168 | SNF/rehab | 2.4 ± 1.5 |

Abbreviations: Cr, creatinine; DRI, donor risk index; INR, international normalized ratio; MELD, Model for End-Stage Liver Disease; SD, standard deviation; SNF/rehab, skilled nursing or rehabilitation facility.
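
The specific test behind the P values in Table 6 is not named in the Patients and Methods section; a plausible nonparametric choice for comparing a continuous variable across the three disposition groups is the Kruskal-Wallis test, illustrated here with made-up numbers.

```python
from scipy.stats import kruskal

# Purely illustrative MELD values by discharge disposition (not study data).
meld_home = [14, 18, 22, 25, 19, 16, 21]
meld_death = [28, 31, 24, 35]
meld_snf_rehab = [27, 30, 26, 33, 29]

stat, p = kruskal(meld_home, meld_death, meld_snf_rehab)
print(f"Kruskal-Wallis H = {stat:.2f}, P = {p:.3f}")
```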

DISCUSSION

Liver transplantation has flourished over the past decade as the optimal treatment for end-stage liver disease. Success has resulted in expansion of both recipient and donor criteria. The tempo and extent of liberalizing these criteria have varied from program to program on the basis of not only donor organ availability and recipient disease severity profiles but also programmatic factors such as size, philosophy, and maturity. Traditionally, transplant outcomes of patient and graft survival have been the yardstick of program quality without consideration of transplant resource utilization. With the liberalization of donor and recipient criteria (ie, the use of lower quality grafts and transplantation of sicker recipients), one would surmise that resource utilization would increase in parallel. In fact, several recent publications have tried to describe the relationship between either recipient disease severity or donor quality and resource utilization.2–5, 13 These 2-way analyses have failed to elucidate the relative contributions of and possible interactions between recipient disease severity and donor quality, and this was the primary aim of our study. Our strategy, to compare and contrast in detail the determinants of transplant resource utilization at 2 large-volume but geographically distinct centers, might also elucidate the impact of program parameters with respect to philosophy and maturity.

Comparing actual cost data between 2 institutions requires normalization, a process complicated by considerations of geography, payer mix, reimbursement patterns, and indirect costs. Imprecise normalization of costs will result in erroneous data and misleading analyses. The use of LOS, an objective and well-defined variable, as a surrogate for transplant resource utilization has precedent.12 Moreover, its correlation to cost in the current era has been verified.4, 5, 13 We also chose to analyze combined data from 2 centers well matched in transplant volume during the study period, rather than national registry data. Center data are characterized by completeness, accuracy, and granularity, traits critical to our goal of elucidating the impact of center practice patterns and maturity on transplant resource utilization. Although we recognize the power of registry analyses to reflect national practice patterns and trends, we did not feel that this data source was ideally suited for this particular study. We recognize that our definition of LOS, which is limited to the acute care setting, does not capture the full extent of resource utilization because a subset of liver transplant recipients is transferred to extended care facilities. As such, our LOS likely underestimates the true resource utilization. Inclusion of LOS at extended care institutions would, however, introduce confounding factors. First, a wide variety of issues, including medical, physical, psychological, social, and logistical issues, necessitate transfer to subacute care facilities. Second, the transplant team typically has no role in determining LOS at these facilities. We therefore decided that a strict acute care definition of LOS was most appropriate for our analyses. We did explore total acute care utilization during the first 60 and 90 posttransplant days, as this would include readmissions to the transplant center; the results and conclusions were unchanged (data not presented).

The data presented from these 2 academic transplant centers represent a sizable number of liver transplants over an 8-year study period, during which the 2 centers performed a remarkably similar number of adult liver transplants. There were several statistical differences in donor and recipient characteristics between the 2 centers. Despite similar distributions, both the MELD and DRI scores were significantly higher at center A than at center B. Moreover, over the study period, both MELD and DRI held constant at center A, whereas both increased at center B; LOS, however, remained stable at center A and decreased at center B. These data indicate that liver transplant practice was stable at center A but in evolution at center B. The evolution likely reflects not only internal programmatic issues but also external regional pressures, such as the establishment of newer centers with more competition for recipients and donors. At the center level, the trends of increased recipient disease severity and lower donor quality yet a shorter mean LOS can be construed as a learning curve effect for center B, which was established more recently than center A. No discrete changes in clinical care pathways were implemented at either institution during the study period. Moreover, there was high consistency in personnel, as the medical and surgical teams and leadership were stable at both institutions. At the regional level, parallel increases in MELD and DRI likely reflect a widening of the disparity between organ supply and demand and possibly increased competition among the local centers.

Predictors of resource utilization using LOS as a surrogate varied between the 2 study centers. The only strong LOS predictor common to both centers was the MELD score at the time of transplant. Notably, the magnitude of MELD's impact was strikingly similar at the 2 centers; a MELD score increment of 1 was associated with a 3% to 4% increase in LOS in both single-center models and in the combined model. The evaluation of patient disposition after the acute transplant hospitalization further endorses the finding that recipient disease severity is a dominant driver of resource utilization. DRI exhibited only a trend toward association, whereas both recipient age and MELD exhibited strong associations with the need for transitional care; this finding has not been previously demonstrated or reported. A recent single-center study has reported that MELD is the single variable most strongly correlated with posttransplant costs.5 Our study is wholly consistent with this report.
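
Because Cox model effects are log-linear, the per-point hazard ratios in Table 5 compound multiplicatively over larger MELD differences; a quick back-of-the-envelope check (under the paper's per-point reading of the HR):

```python
# Per-point hazard ratios of 1.03-1.04 compound to roughly 1.34-1.48 over a
# 10-point MELD difference, i.e., about a 34%-48% difference in the modeled
# hazard, which is how a "3% to 4%" per-point effect scales.
for hr_per_point in (1.03, 1.04):
    print(f"HR per point {hr_per_point} -> HR per 10 points {hr_per_point ** 10:.2f}")
```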

Although MELD was the only predictor common to both centers, several additional factors were significant predictors of transplant LOS at either center A or center B. At center A, only 2 other variables were associated with transplant LOS (recipient age and nonlocal donor), whereas at center B, 6 other variables were associated (donor age and weight, recipient gender, INR, transplant number, and cold ischemia time). To derive the final multivariate Cox model for the 2 centers combined, we introduced the additional variable of transplant center. The final model identified 8 independent predictors of transplant LOS, including transplant center. Several predictors in the 3 multivariate models deserve special comment. First, male recipient gender was associated with shorter LOS for center B and centers A and B. We speculate that this may result from the fact that for a given MELD score, male recipients are "less sick" than female recipients. It is well known that the same serum creatinine corresponds to better renal function in male candidates. This inequity has been cited as systematic bias that disadvantages female transplant candidates.14 Second, higher INR was associated with shorter LOS in the multivariate model for center B. In univariate models for center A, center B, and centers A and B, higher INR, as expected, was associated with longer LOS (data not shown). However, the effect was reversed in the multivariate model that includes MELD. The model therefore indicates that, for a given MELD score, the patient with a higher INR will have a shorter LOS, implying that the patient whose MELD score is driven more by bilirubin and creatinine will have a longer LOS. Finally, it is not surprising that transplant center itself is independently associated with transplant LOS for centers A and B. We believe that transplant resource utilization strongly reflects multiple dimensions of an individual center's practice, such as local competition as well as programmatic philosophy and maturity, not fully described by the donor, recipient, and transplant factors that were specifically explored.

Interestingly, DRI, an overall measure of donor quality,8 did not emerge as a factor at either center or for the combined cohort. Moreover, DRI also failed to show a strong correlation with the need for transitional care. Our findings, therefore, counter a recent study of national registry data that found a significant and independent association between DRI and transplant LOS for transplants performed in 2002–2005. In comparison with the reference group of donors with DRI between 1.0 and 1.5, they reported that high-risk donors (those with a DRI of 2.0–2.5, who accounted for 8.4% of the donor pool) and highest risk donors (those with a DRI > 2.5, who accounted for 1.9% of the donor pool) were associated with 9% and 29.7% increases, respectively, in transplant LOS.13 We speculate that the national experience may reflect a steep national learning curve with higher risk, lower quality donors. This hypothesis is supported by a subsequent report by the same group showing that the impact of DRI on transplant LOS had decreased dramatically.15

Although DRI did not emerge as a predictor in single-center or combined-center analyses, 3 donor factors did emerge as predictors of transplant LOS. Donor age and weight showed a significant association in analyses of center B and centers A and B, whereas nonlocal status showed a significant association in analyses of center A and centers A and B. The inconsistent association of these donor factors contrasts with the highly consistent association of MELD with transplant LOS. These findings again lead us to conclude that factors reflective of donor quality exert a modest impact on liver transplant resource utilization.

In conclusion, we have shown that recipient disease severity as measured by MELD exerts a potent, dominant, and consistent effect on transplant resource utilization as measured by transplant LOS at 2 transplant centers. The 2 centers and their transplant practices appear quite similar by superficial assessment. However, several parameters and their evolution, or lack thereof, over time revealed important differences that undoubtedly shaped transplant resource utilization at each center. These differences were reflected by the variable array of LOS predictors at the 2 centers. More striking than these differences were the identical impact of recipient MELD score and the lack of effect of DRI. The dominance of recipient disease severity among LOS determinants should simplify a transplant center's understanding of its resource utilization patterns and facilitate benchmarking against other transplant centers. Moreover, we suggest that recipient MELD score should weigh heavily in risk adjustment models for transplant center payments.

Acknowledgements

The authors gratefully acknowledge Alan Bostrom, Ph.D., for his expert statistical analysis.
