Acute kidney injury (AKI) is a common finding in patients with end-stage liver disease, with an estimated prevalence of 20% at the time of transplantation.1, 2 Studies on the impact of AKI on patient outcomes after liver transplantation (LT) alone have yielded conflicting results. Although several single-center, retrospective studies have shown the importance of the pretransplant serum creatinine (SCR) level as a predictor of post-LT survival and renal dysfunction,3-7 others have failed to demonstrate a difference.8, 9 Similarly, studies of patients with hepatorenal syndrome (HRS) have also demonstrated conflicting results. Although some authors have demonstrated good renal recovery after LT alone,10, 11 others have argued that prolonged dialysis (>1 month) for patients with HRS may result in a chronic, irreversible condition, and they have advocated simultaneous liver-kidney (SLK) transplantation.12, 13 Analyses in the literature have usually been retrospective and single-center and have often suffered from small sample sizes and reporting bias. The most systematic weakness in reporting, however, is the arbitrary classification of AKI, which prevents comparisons between studies. The failure of the literature to stratify the severity of renal failure and to clearly delineate its etiology has led to clinical confusion for transplant programs that are developing patient management strategies. In short, on the basis of the existing literature, it is difficult to determine the long-term outcomes of patients who have undergone LT in the context of acute pretransplant renal insufficiency.
In response to the lack of a standard definition for AKI, the Acute Dialysis Quality Initiative (ADQI) workgroup developed a consensus definition and classification for AKI, the Risk, Injury, Failure, Loss, and End-Stage Kidney Disease (RIFLE) classification, which stratifies AKI into grades of increasing severity according to changes in the patient's SCR level or glomerular filtration rate (GFR) and/or urine output14 (Table 1). The RIFLE criteria have been validated in more than 500,000 patients with AKI and have been shown to predict clinical outcomes: as the RIFLE class worsens, there is a progressive increase in mortality.15 Recent studies have shown that post-LT AKI based on the RIFLE criteria is associated with an increase in mortality after LT.16-18 However, there have been no studies focusing on survival or renal outcomes for patients with AKI at the time of transplantation as defined by the RIFLE criteria.
Table 1. RIFLE Criteria for the Definition and Classification of AKI

Class     SCR/GFR Criteria                                                            Urine Output Criteria
Risk      SCR increase (1.5- to 2-fold) from the baseline or GFR decrease > 25%       <0.5 mL/kg/hour for >6 hours
Injury    SCR increase (>2- to 3-fold) from the baseline or GFR decrease > 50%        <0.5 mL/kg/hour for >12 hours
Failure   SCR increase (>3-fold) or GFR decrease > 75% or SCR level ≥ 4 mg/dL         <0.3 mL/kg/hour for 24 hours or
          with an acute rise > 0.5 mg/dL from the baseline or on dialysis             anuria for 12 hours

NOTE: This table was adapted from Bellomo et al.14
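The SCR arm of these thresholds amounts to a simple fold-change classifier. The sketch below is illustrative only: the function name is ours, the urine output criteria and the Loss/End-Stage Kidney Disease categories are omitted, and boundary handling follows the wording of Table 1.

```python
def classify_rifle(baseline_scr, current_scr, on_dialysis=False):
    """Classify AKI severity by the SCR arm of the RIFLE criteria (Table 1).

    baseline_scr, current_scr: serum creatinine in mg/dL.
    Urine output criteria and the Loss/ESKD categories are not modeled.
    """
    fold = current_scr / baseline_scr
    acute_rise = current_scr - baseline_scr
    # Failure: >3-fold rise, or SCR >= 4 mg/dL with an acute rise > 0.5 mg/dL,
    # or the patient is already on dialysis
    if on_dialysis or fold > 3.0 or (current_scr >= 4.0 and acute_rise > 0.5):
        return "failure"
    if fold > 2.0:       # Injury: >2- to 3-fold rise
        return "injury"
    if fold >= 1.5:      # Risk: 1.5- to 2-fold rise
        return "risk"
    return "no AKI"
```

For example, a patient whose SCR rises from a baseline of 0.8 mg/dL to 2.8 mg/dL (a 3.5-fold increase) would be classified as failure.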
Acute tubular necrosis (ATN) and HRS account for the majority of cases of severe and prolonged AKI in patients with end-stage liver disease before transplantation. Although there have been several single-center studies of survival and renal outcomes for patients with HRS, there are no studies comparing the outcomes of patients with HRS and patients with ATN undergoing LT. We undertook the present study to address 2 issues. First, what are the survival and renal outcomes according to the RIFLE criteria at the time of LT? Second, for LT patients with comparable RIFLE scores (RIFLE failure), is there an impact of the AKI etiology (ie, ATN versus HRS) on patient survival and renal outcomes?
This was a retrospective study of all adult patients (age > 18 years) who underwent LT between March 1, 2002 [the inception of the Model for End-Stage Liver Disease (MELD) score] and December 31, 2006 at the University of Southern California. Patients who previously underwent LT, patients who required SLK transplants, and patients who died intraoperatively were excluded. Patients with AKI were placed into clinical cohorts according to the RIFLE classification (risk, injury, or failure) at the time of LT (Table 1). The failure group was further subdivided into 2 cohorts based on the AKI etiology: HRS or ATN. Patients without AKI served as internal controls. This study was approved by our institutional review board.
The baseline SCR level was defined as the lowest creatinine level within the 6 months before transplantation. The diagnosis of HRS was made according to the definition of the International Ascites Club19: a doubling of the SCR level to >2.5 mg/dL in patients with cirrhosis with no improvement in the SCR level after 48 hours of diuretic withdrawal and volume expansion in the absence of shock, nephrotoxic drugs, or renal parenchymal disease. ATN was defined as AKI in a setting that could be expected to cause ATN (sepsis, nephrotoxic drugs, or intravenous contrast) that could not be attributed to another cause, did not respond to fluid hydration, and had the majority of the following characteristics: (1) a blood urea nitrogen/creatinine ratio ≤ 20, (2) muddy brown granular and epithelial cell casts and free epithelial cells according to urinalysis, (3) a fractional excretion of sodium > 1% or a fractional excretion of urea > 35%, and (4) a urine specific gravity < 1.020. Because our patients were severely coagulopathic and renal biopsy therefore carried a high risk of complications, we were unable to use this tool to differentiate between ATN and HRS. Renal recovery at 30 and 90 days was defined according to the ADQI recommendation of less than a 50% increase in SCR.14 Chronic kidney disease (CKD) was defined as an estimated glomerular filtration rate (eGFR) < 30 mL/minute/1.73 m² of body surface area. eGFR was calculated according to the 4-variable formula used in the Modification of Diet in Renal Disease study.20
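The 4-variable MDRD estimate can be computed as sketched below. This is an illustration, not the authors' code: the function name is ours, and the coefficient 186 is the originally published value (175 is commonly substituted for IDMS-traceable creatinine assays); readers should consult reference 20 for the exact form used.

```python
def mdrd_egfr(scr_mg_dl, age_years, female=False, black=False, coefficient=186.0):
    """4-variable MDRD study equation; returns eGFR in mL/minute/1.73 m^2.

    coefficient=186 is the original published value; 175 is commonly used
    with IDMS-traceable creatinine assays (assumption; see reference 20).
    """
    egfr = coefficient * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742   # correction factor for female sex
    if black:
        egfr *= 1.212   # correction factor for Black race
    return egfr
```

For example, a 50-year-old non-Black man with an SCR of 1.0 mg/dL has an eGFR of roughly 84 mL/minute/1.73 m²; under the study's definition, a value below 30 would classify him as having CKD.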
The demographic and preoperative clinical variables included age, sex, ethnicity, diabetes, etiology of liver failure, baseline SCR, baseline eGFR, SCR at transplantation, and renal replacement therapy (RRT). RRT support was either intermittent hemodialysis or continuous RRT. Continuous RRT was primarily used for those patients whose blood pressure was deemed insufficient to support standard intermittent hemodialysis. Except for the baseline SCR and eGFR values, the laboratory values closest to the time of transplantation were used in our analysis. The postoperative clinical variables included the following: the need for RRT (defined as dialysis within 72 hours of transplantation); the need for mechanical ventilation for >10 days; the length of the hospital stay; and the SCR levels at 3, 6, and 12 months and then yearly for 5 years. The MELD and Sequential Organ Failure Assessment (SOFA) scores were used to assess the severity of liver disease and illness at the time of LT. All survivors had a minimum follow-up of 1 year and a maximum follow-up of 5 years.
Patients without AKI uniformly received a calcineurin inhibitor (CNI), mycophenolate mofetil, and prednisone. Our standard regimen for a patient with AKI after transplantation was to withhold CNIs for the first 72 hours after LT. During this time period, the patient was maintained on a regimen containing only steroids and mycophenolate mofetil. If the patient's SCR level remained elevated despite good urine output, the patient was maintained on a low-dose CNI to allow for renal recovery. For patients who remained oliguric, sirolimus was sometimes substituted for a CNI in the regimen.
The associations between the RIFLE class and categorical baseline characteristics at the time of transplantation and the recovery of kidney function after transplantation were tested with Fisher's exact test. The continuous baseline characteristics (the number of days on dialysis after transplantation and the length of the hospital stay after transplantation) were compared between patients with different RIFLE classes with the Kruskal-Wallis test. The baseline and posttransplant variables were compared between survivors and nonsurvivors at 1 year with Fisher's exact test. A multivariate logistic regression analysis of survival at 1 year was performed, and it included the RIFLE class, the presence of ATN or HRS, the presence of diabetes, and the MELD score. Kaplan-Meier estimates of the probabilities of survival and the cumulative incidence of CKD were calculated for patients with different RIFLE classes. The survival time was calculated from the date of transplantation to the date of death. Data for those who did not die were censored at the date of the last follow-up. Statistical significance was considered to be reached at P < 0.05.
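As an illustration of the categorical comparisons above, a two-sided Fisher's exact test can be computed directly from the null hypergeometric distribution of one cell of the 2×2 table. The sketch below uses only the standard library; the function name is ours, and in practice a statistical package would be used.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def hyper(x):
        # P(X = x) for cell (1, 1) under the null hypergeometric distribution
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = hyper(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs + 1e-12)
```

On Fisher's classic tea-tasting table [[3, 1], [1, 3]], this returns 34/70 ≈ 0.486, matching the published two-sided value.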
Two hundred eighty-three of the original 351 patients who underwent transplantation during the time period were identified for the study. The mean posttransplant follow-up was 1429 days. The baseline characteristics of the patient cohort at the time of transplantation (depicted according to the RIFLE classification) are summarized in Table 2. One hundred sixty-five patients (58%) had no AKI, 34 patients (12%) had risk, 19 patients (7%) had injury, and 65 patients (29%) had failure. The patients in the failure group were then further subdivided according to the etiology of AKI: ATN (46%) or HRS (54%). Patients were more likely to be Hispanic and male and to have experienced liver failure due to hepatitis. In comparison with patients in the no-AKI, risk, and injury groups, patients who experienced failure (ATN and HRS groups) had higher MELD scores (P < 0.001) and SOFA scores (P < 0.001). There were no significant differences in the MELD scores, SOFA scores, or days of RRT before transplantation between the HRS and ATN groups. After transplantation, patients with ATN were more likely to require mechanical ventilation and RRT, received RRT for a longer period of time, and spent more time in the hospital in comparison with the other groups.
Table 2. Characteristics of the Patients According to the RIFLE Criteria at the Time of Transplantation
Figure 1 shows the Kaplan-Meier curves of overall survival for all groups. Patients with ATN had significantly worse survival than the other subgroups over the course of 5 years (P < 0.001). In a univariate analysis, a number of pre-LT factors were found to be significant predictors of mortality 1 year after LT (Table 3): ATN, diabetes, the SOFA and MELD scores, and RRT at the time of transplantation were all associated with mortality 1 year after transplantation. A multivariate logistic regression analysis, which was adjusted for age, sex, diabetes, MELD and SOFA scores, and AKI (presence and severity), showed that only the presence of ATN at the time of transplantation was associated with significantly increased odds of mortality 1 year after transplantation (odds ratio = 6.68, 95% confidence interval = 1.96-22.78, P = 0.001). At the end of 1 year, deaths in the ATN and HRS groups were mainly due to sepsis (76% and 50%, respectively). In contrast, for patients with no AKI, recurrent hepatocellular carcinoma (33%) was the leading cause of death at 1 year. None of the patients in the risk, injury, or failure groups had died from a recurrent malignancy at 1 year.
Table 3. Comparison of Survivors and Nonsurvivors at 1 Year: A Univariate Analysis
The mean preadmission SCR levels were similar for all groups (0.8-0.9 mg/dL). At the time of transplantation, the mean peak SCR values for the risk, injury, and failure groups increased by 50%, 112%, and 300% from their preadmission SCR levels (Fig. 2). The slopes of SCR improvement 30 days after transplantation were significantly different for the 4 groups with pretransplant AKI (risk, injury, HRS, and ATN; P < 0.01); the ATN group recovered renal function more slowly than the HRS, risk, and injury groups. The mean SCR values of all 5 groups remained elevated above the preadmission means across all time periods after transplantation. Renal recovery at 30 and 90 days for survivors was significantly lower for patients with ATN versus the other groups (Fig. 3). For 5 years after transplantation, the average eGFR declined in all groups; the lowest eGFRs were observed for the ATN and injury groups at 5 years (52 and 56 mL/minute, respectively; Fig. 4). Patients with ATN had the highest cumulative incidence of CKD at all time points after LT (Fig. 5). The higher incidence of CKD in the ATN group could not be attributed to a higher incidence of CNI use because a lower percentage of patients with ATN were receiving CNIs in comparison with the other groups at 6, 12, 24, and 36 months (data not shown). At 6 months, 64% of the patients in the ATN group were receiving CNIs alone, whereas 87% of the patients in the no-AKI group, 93% in the risk group, 80% in the injury group, and 70% in the HRS group were receiving CNIs alone. Within 3 years, 57% of the patients with ATN were maintained on CNIs alone, whereas more than 75% of the patients in the no-AKI, risk, injury, and HRS groups were maintained on CNIs alone.
Using the RIFLE criteria to classify AKI at the time of LT, we have shown that despite the high medical acuity and more severe renal dysfunction of the failure group in comparison with the other groups, the HRS cohort distinguished itself by achieving post-LT survival and renal outcomes similar to those of patients in the no-AKI and risk groups.
The evolution of renal dysfunction in the context of liver failure may range from insidious to rapid and from mild to severe. Its development adds urgency for the patient and strategic complexity for the program in making appropriate and timely decisions. The performance of unnecessary kidney transplants subtracts available kidneys from the pool of organs for renal transplant–only recipients, but failing to provide a kidney during combined organ failure may jeopardize the life of a liver recipient. The MELD score reflects this added acuity of renal failure and provides for more expedient allocation whenever there is accompanying renal failure. There are few studies on the natural history of renal failure in the setting of liver failure and subsequent LT that can help to promote a universal algorithm that serves the patient yet preserves kidney resources. Consequently, programs often follow a pattern of decision making that is anecdotal and often overly protective when they address the issue of combined liver-kidney failure. Key questions in this regard center on the duration and severity of pretransplant renal failure, the patterns of renal recovery after LT alone, and the ability to discriminate between patients with reversible kidney dysfunction and those with permanent kidney dysfunction.
In its natural history, liver failure ultimately begets kidney failure, but this kidney failure is not singular in etiology or recoverability. Both HRS and ATN are found in high-acuity (MELD and SOFA) patients, yet the kidneys of HRS patients demonstrate quick and nearly complete recovery, whereas ATN is associated with slower recovery, higher mortality, and CKD. Establishing transplant algorithms for dual-organ failure depends on our ability to predict the etiology of renal failure with certainty before LT. For patients with renal dysfunction at the time of LT, it is important to know whether their renal function will improve, stabilize, or continue to deteriorate after transplantation. For those at risk for nonrecovery of renal function, SLK transplantation may be justified. However, the key factors required for determining nonrecovery with a high degree of predictive value remain poorly defined.
Several single-center, retrospective studies have suggested that survival is worse for patients with renal dysfunction at the time of LT.4, 8, 9, 21, 22 An analysis of the Scientific Registry of Transplant Recipients database demonstrated that patients with moderate (eGFR = 20-40 mL/minute) or severe renal failure (eGFR < 20 mL/minute) at the time of LT have significantly lower 2-year survival in comparison with patients with normal pretransplant renal function (eGFR > 70 mL/minute; 64% and 55% versus 76%, P < 0.05).23 Patients needing RRT have also been shown to have worse outcomes after LT.24 However, unlike the patients of our study, transplant candidates have not been stratified according to the etiology of renal dysfunction. This leads to interpretational difficulties when we attempt to differentiate patients with reversible kidney dysfunction who might be suitable for LT alone. Studies comparing the outcomes of HRS and non-HRS patients undergoing LT have yielded results similar to those of our study.10, 11 Two years after transplantation, HRS survivors were comparable to survivors without HRS.10, 11 In those studies, the average SCR level of the non-HRS group was approximately 1 mg/dL, and only a small percentage of the patients required dialysis before LT. These results are consistent with our comparison of patients with HRS to patients with risk or no AKI. Recent studies have shown that the severity of AKI after LT (as defined by the RIFLE criteria) correlates with higher mortality and a higher incidence of CKD.16, 18, 25
At the present time, specific prognostic indicators for predicting the development of CKD or end-stage renal disease (ESRD) after LT alone are lacking. The rates of CKD vary with the definitions. Some investigators have used the Kidney Disease Outcomes Quality Initiative guidelines with different CKD stages, and others have used specific creatinine levels to stratify patients; this makes the interpretation and comparison of the results difficult.26-28 The cumulative rates of CKD (stage 4 or 5) and ESRD 10 years after LT have been shown to be 18% and 25%, respectively.28 Although these high rates may be attributable to long-term CNI use, the most important predictor of CKD after transplantation is pretransplant renal function.28 Patients with a creatinine clearance of 30 to 59 mL/minute had a 2.5-fold increased risk of developing CKD. For HRS patients requiring extended dialysis before transplantation, the reported rates of posttransplant ESRD have ranged from 6% to 25%.9, 10, 29, 30 In addition to the higher incidence of ESRD, pretransplant renal failure has also been shown to be an independent predictor of posttransplant mortality.21, 23, 28, 31
Using the RIFLE criteria at the time of transplantation, we found that patients with ATN had considerably worse survival and renal outcomes at all time points in comparison with patients with HRS; for HRS patients, both patient survival and renal outcomes were similar to those of patients in the no-AKI or risk categories. The severity of illness in the HRS and ATN subgroups was similar according to the MELD and SOFA scores and the number of patients requiring RRT. Despite the similar severity of renal dysfunction in patients in the HRS and ATN subgroups at the time of LT, the rate of recovery after LT was much faster for patients with HRS versus patients with ATN.
Our study demonstrated that the incidence of CKD was higher in patients with ATN. Furthermore, the incidence of CKD at 5 years was higher for patients with ATN versus patients with HRS (56% versus 16%, respectively). It has been suggested that factors predicting the nonrecovery of renal function or progressive CKD after LT include preexisting diabetes mellitus, hypertension, and coronary artery disease32; however, in our cohort, there were no differences in these variables between the groups at the time of LT.
Several studies have suggested that pretransplant renal dysfunction for more than 12 weeks (defined as an SCR level > 1.5 mg/dL before transplantation) affects posttransplant renal outcomes.6, 33 In our study, we were unable to determine the duration of renal dysfunction because the majority of our patients had renal dysfunction for <12 weeks. The duration of dialysis before LT has been considered to be a strong predictor of post-LT renal nonrecovery and has been used by many centers to select candidates for SLK transplantation.29, 34, 35 However, because the initiation of dialysis is physician/center-dependent, the number of days of dialysis before transplantation as an indicator for which patients will require SLK transplantation should be used with caution. Pretransplant dialysis for ≤4 weeks in patients with HRS has been shown in small studies to be associated with renal recovery after LT alone.34, 35 However, the renal outcome of patients dialyzed for >4 weeks is unknown because these patients frequently undergo SLK transplantation. In our study, the pre-LT dialysis duration could not be correlated with patient or renal outcomes after LT. However, because of the severity of our patients' illness and their consequently high MELD scores, the majority of our patients were on dialysis for <8 weeks before LT.
There are several limitations to this study. Our study was a single-center, retrospective study. The distinction between HRS and ATN can be clinically vexing and was made according to a review of each patient's clinical course, blood work, urinalysis, and urine electrolytes; this can lead to misclassification bias and is prone to errors in a retrospective study design. Although preoperative renal biopsy can help in differentiating HRS from ATN, many clinicians have been reluctant to perform biopsy in patients with cirrhosis, particularly those with coagulopathy, low platelet counts, or both. Published reports have classified renal biopsy as safe; however, the patients in those reports were at low risk, as evidenced by low MELD scores and platelet counts > 100,000.36, 37 Our patients with HRS and ATN, on the other hand, had a median MELD score of 40, a median platelet count of 50,000, and a median international normalized ratio of 2.0; therefore, we felt that preoperative biopsy was too risky. Urine biomarkers (neutrophil gelatinase-associated lipocalin and interleukin-18) have been suggested as ways of distinguishing between patients with HRS or prerenal azotemia and patients with ATN.38 However, the use of biomarkers in this setting is currently investigational, and future prospective, multicenter trials are urgently needed.
In conclusion, in contrast to patients with ATN, patients with HRS had survival and renal outcomes comparable to those with risk or no AKI, regardless of the dialysis status before transplantation. Our data suggest that the determining factor for patient and renal outcomes is neither pretransplant dialysis nor the SCR level at the time of transplantation but rather the etiology of AKI. These findings may inform the further development of prognostic models and guidelines for organ allocation. Our results will need to be validated by other centers.