Cirrhosis related to chronic infection with hepatitis C virus (HCV) has emerged as the most frequent indication for orthotopic liver transplantation. Presently, approximately 50% of transplants performed in the United States and Europe are for patients infected with HCV.1 The health burden due to HCV is expected to increase to such a degree that by the year 2020, the proportion of untreated HCV patients developing cirrhosis will have increased by 30%.1 Furthermore, it is projected that the number of HCV patients with cirrhosis will double and the number of HCV patients with cirrhosis developing hepatocellular carcinoma (HCC) will increase by 80%.1
HCV recurrence in the liver allograft is universal. The clinical course is characterized by more rapid disease progression after transplantation in comparison with the nontransplant population.1 Previously, it has been reported that the median duration from transplantation to HCV-related graft cirrhosis is approximately 9 to 12 years,1 with a 42% risk of clinical decompensation at 1 year after the development of cirrhosis.1 A recent analysis of the United Network for Organ Sharing database demonstrated significantly diminished survival at 5 years after primary orthotopic liver transplantation among HCV-positive patients (65.6% for HCV-negative recipients versus 56.7% for HCV-positive recipients).2 Moreover, the death rate on the transplant waiting list is 20%, highlighting the shortage of liver allografts. Therefore, there is clearly a need to optimize the outcomes of liver transplantation among HCV-infected patients. This can be achieved only with a better understanding of the host, viral, and external factors influencing patient and graft survival.
Factors influencing HCV-related disease progression in the posttransplant setting include high pretransplant HCV RNA levels,3 early acute hepatitis,4 an increased number of acute rejection episodes and methylprednisolone (MPPT) boluses,1 OKT3 use,5 and utilization of older donors.1, 6–10 Viral factors, including genotype 1b and a persistently high viral burden post-transplant, have also been shown to be associated with allograft damage.11, 12 However, it is unclear if high serial viral load levels are an important independent factor in predicting outcome after liver transplantation.
In this study, we have examined within the first year post–liver transplant the impact of serial serum hepatitis C viral load levels on posttransplant survival. We have undertaken a multivariate analysis incorporating recipient, donor, surgical, and other viral factors of patient and graft survival.
Clinical and laboratory data were collected retrospectively from a total of 118 consecutive patients who underwent liver transplantation for HCV-related cirrhosis with or without HCC at the Australian National Liver Transplant Unit from January 1997 to September 2005. The following data were recorded:
1. Demographic factors, including the age at transplantation, gender, and year of transplantation.
2. Presence of intercurrent pathology at the time of transplantation, including HCC, hepatitis B virus coinfection, and a pretransplant history of excessive alcohol intake (more than 80 g/day for >10 years).
3. HCV genotype.
4. Serial viral load levels post-transplant.
5. Episodes and treatment of acute rejection (steroid pulse and/or Orthoclone OKT3).
6. Donor gender and age and donor-recipient gender mismatch.
7. Type of organ transplantation (whole or split).
8. Immunosuppressive therapy [calcineurin inhibitor (CNI) immunosuppression: tacrolimus (TAC) or cyclosporine A (CsA); azathioprine (AZA) or mycophenolate mofetil (MMF) use].
Recurrent HCV infection was confirmed by the presence of HCV RNA in serum as detected by reverse-transcription polymerase chain reaction (PCR). Human immunodeficiency virus–coinfected individuals were excluded from this study.
Primary endpoints were patient death and graft failure. Patient survival was defined as the time from the initial transplant until the time of death or last known follow-up. Graft survival was defined as the time from transplant until retransplantation or death, whichever came first. Patients lost to follow-up were censored at the date on which they were last known to be alive for analyses of both patient and graft survival. Hepatitis C–related death was defined as death resulting from graft failure secondary to either severe recurrent hepatitis C or fibrosing cholestatic hepatitis. Non–hepatitis C–related deaths were those deaths related to other causes such as infection, perioperative complications, recurrence of HCC, and biliary complications. Use of AZA or MMF was defined as the use of either agent for at least 3 months.
According to our institutional protocol, patients received 500 mg of MPPT intravenously on the day of transplantation (intraoperatively) and on day 1 after transplantation, and this was followed by a taper to 20 mg/day over the next 7 to 10 days. Oral prednisone was subsequently begun at 20 mg/day and tapered progressively as tolerated. In cases of acute rejection, therapy consisted of 3 daily doses of 1 g of intravenous MPPT followed by a taper to 100 mg over the next 4 days. On day 8, oral prednisone was started and reduced by 10 mg daily until a dose of 20 mg was reached, which was then tapered progressively as tolerated. Maintenance immunosuppression included combination therapy with either TAC (Prograf) or CsA (Neoral) and prednisone with or without AZA or MMF. TAC and CsA doses were adjusted on the basis of target trough levels. On occasion, the CsA dose was adjusted on the basis of C2 levels. Steroid-resistant acute rejections were treated with OKT3 for a course of 7 or 10 days.
Quantitative HCV RNA Detection
The HCV viral load determination was performed according to the clinical progression of the individual patient. Quantitative HCV RNA detection was performed with the Amplicor HCV assay (Roche Diagnostics, New Jersey) according to the manufacturer's instructions. In brief, 0.1 mL of serum was added to 0.4 mL of Lysis reagent and incubated at 60°C for 10 minutes. The RNA was precipitated by the addition of 0.5 mL of isopropyl alcohol and then pelleted by centrifugation at 13,000g for 15 minutes. The pellet was washed with 70% ethanol and then resuspended in 1 mL of specimen diluent. A total of 0.05 mL of this suspension was added to 0.05 mL of PCR Master mix. Reverse transcription and PCR were performed on the Roche Amplicor instrument. High-level samples (> 800,000 IU/mL) that fell outside the linear range of the assay were diluted with normal human serum to produce an accurate concentration within the assay's linear range.
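The dilution step for high-level samples implies a simple back-calculation: a sample diluted 1:D in normal human serum that then reads within the linear range has an original titer of D times the reading. A minimal sketch in Python (the 800,000 IU/mL ceiling is from the text; the function name and the example dilution factor are illustrative assumptions, not part of the assay protocol):

```python
# Illustrative sketch of the dilution back-calculation described above.
# The Amplicor linear-range ceiling (800,000 IU/mL) is from the text;
# the function name and dilution factor below are assumptions.

LINEAR_MAX = 800_000  # IU/mL, upper limit of the assay's linear range


def true_titer(measured_iu_ml: float, dilution_factor: int = 1) -> float:
    """Back-calculate the original titer from a (possibly diluted) reading.

    A sample diluted 1:dilution_factor that reads within the linear range
    has an original concentration of measured * dilution_factor.
    """
    if measured_iu_ml > LINEAR_MAX:
        raise ValueError("reading above linear range; retest at a higher dilution")
    return measured_iu_ml * dilution_factor


# A 1:100 dilution reading 250,000 IU/mL implies an original titer of
# 25,000,000 IU/mL, i.e. above the 10^7 threshold used in this study.
```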
HCV genotypes were identified with the Versant HCV (LiPA II) line probe assay (Bayer Corp., New York). In brief, 20 μL of amplified Amplicor PCR product was added to 1 mL of hybridization buffer and incubated with genotyping strips (with type-specific oligonucleotides attached) for 60 minutes. The strips were then stringently washed, and conjugate solution was added before the addition of substrate solution. The strips were analyzed and genotypes were assigned according to the reactivity pattern interpretation table.
The cohort was described with estimates of central tendency (means and medians) and spread (standard deviation and range) for continuous variables and frequencies and percentages for categorical variables. Patient and graft survival rates were determined with Kaplan-Meier analysis.
Univariate data comparison was performed by Kaplan-Meier estimation using the log-rank test to assess statistical significance. Variables with P values of <0.20 in the univariate analysis or those thought to be clinically relevant were included in the final multivariate model. Multivariate analysis was performed with the Cox proportional hazard regression model.
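To make the product-limit estimate concrete, a minimal Kaplan-Meier estimator can be sketched in pure Python (an illustrative sketch only; the study's analyses were performed in SPSS, and the toy data in the test are invented):

```python
# Minimal Kaplan-Meier product-limit estimator (illustrative sketch).
# Each subject is a (time, event) pair, where event=True means death or
# graft failure and event=False means the observation was censored.


def kaplan_meier(subjects):
    """Return [(event_time, survival_probability)] at each event time."""
    subjects = sorted(subjects)
    n_at_risk = len(subjects)
    surv = 1.0
    curve = []
    i = 0
    while i < len(subjects):
        t = subjects[i][0]
        deaths = sum(1 for tt, e in subjects if tt == t and e)
        n_with_t = sum(1 for tt, _ in subjects if tt == t)
        if deaths:
            # Multiply in the conditional survival at this event time.
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n_with_t  # both events and censored leave the risk set
        i += n_with_t
    return curve
```

Censored subjects contribute to the number at risk up to their censoring time but never trigger a drop in the curve, which is exactly why patients lost to follow-up could be censored at their last known date without biasing the estimate.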
We analyzed the association between clinically significant outcomes (patient and graft survival and hepatitis-related deaths) and several risk factors. Host risk factors included gender, age at transplant (<50 versus >50 years), presence of intercurrent HCC (incidental or known), and intercurrent alcohol abuse. Donor-related factors included age, gender, and donor-recipient gender mismatch. A donor age cutoff of 50 years was selected on the basis of previous work in which we demonstrated that this was a significant predictor of patient outcome.13 Surgery-related factors included type of organ transplant (whole versus split). Viral risk factors included the HCV genotype (genotype 1 versus non–genotype 1) and posttransplant peak viral load within 1 year (≥10⁷ IU/mL versus <10⁷ IU/mL). External risk factors included the type of CNI (TAC versus CsA), use of AZA, use of MMF, exposure to antirejection treatment, exposure to steroid pulse, and exposure to OKT3.
Because data were retrospectively collected, we acknowledge the potential influence of missing data on our analysis. To address this, we used a complete-case (listwise deletion) approach, omitting cases with any missing data. All statistical analyses were performed with SPSS version 14 software. All reported P values were 2-tailed, and all confidence intervals were 95%.
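Listwise deletion is straightforward to express: a record is retained only if every analysis variable is present. A minimal sketch (the field names are invented for illustration; the study itself relied on SPSS's built-in handling):

```python
# Complete-case (listwise deletion) filter: drop any record with a
# missing value before analysis. Field names below are illustrative.


def complete_cases(records, fields):
    """Keep only records in which every named field is present (not None)."""
    return [r for r in records if all(r.get(f) is not None for f in fields)]


rows = [
    {"donor_age": 45, "peak_vl": 5e6, "genotype": "1b"},
    {"donor_age": None, "peak_vl": 2e7, "genotype": "1a"},  # dropped
]
kept = complete_cases(rows, ["donor_age", "peak_vl", "genotype"])
```

The obvious trade-off, acknowledged in the text, is loss of statistical power and potential bias if data are not missing completely at random.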
Demographic and baseline characteristics of the patients are shown in Table 1. The study comprised 118 patients, of whom 99 (83.9%) were male, with a median age at transplant of 49 years (range, 29–69 years). The median duration of follow-up was 32.4 months (range, 0–109.5 months). A history of intercurrent alcohol abuse was present in 28 patients (23.7%). Six patients (5.1%) were coinfected with hepatitis B virus. HCC, known or incidental at the time of transplantation, was present in 44 patients (37.3%). A single human immunodeficiency virus and HCV–coinfected patient was excluded from this study. The donor organ cold ischemic time was 8 to 10 hours, and the warm ischemic time was 30 to 60 minutes.
Table 1. Demographic and Clinical Characteristics of the Cohort of Consecutive Patients Transplanted for Hepatitis C Virus–Related Cirrhosis With or Without Hepatocellular Carcinoma
Thirty-one patients died at a median time of 28.1 months (range, 0–109.5 months) after transplantation. Overall patient survival was 87.8%, 79.9%, and 70.1% at 1, 3, and 5 years, respectively (Fig. 1A). Twelve of 31 (38.7%) deaths were hepatitis C–related. The indications for retransplantation in 5 patients were hepatic artery thrombosis (3 patients, 60.0%), primary nonfunction (1 patient, 20.0%), and HCV-related cholestatic hepatitis (1 patient, 20%). Causes of non–HCV-related deaths included sepsis (n = 6), multiorgan failure (n = 3), cardiovascular events (n = 1), malignancy (n = 3), surgical complications (n = 3), chronic rejection (n = 1), and other multifactorial nonviral causes (n = 2). Overall, 1-, 3-, and 5-year graft survival was 87.0%, 79.2%, and 68.2%, respectively (Fig. 1A). Five patients (4.2%) underwent retransplantation at a median time of 1.4 months (range, 3 days to 21.7 months) after first transplant.
Effect of Posttransplant Peak Viral Load on Survival
There were a total of 620 viral load estimations in the first year after transplant (Fig. 2). An examination of the mean viral load in each week following transplantation demonstrated that in week 11, the viral load peaked at 1.0 × 10⁷ IU/mL.
The relationship between the peak viral load within 1 year post-transplant and patient and graft survival was then analyzed by the division of the cohort into 2 subgroups: patients with a posttransplant peak viral load of ≥10⁷ IU/mL and those with a peak viral load of <10⁷ IU/mL.
We observed that patients with a posttransplant peak viral load ≥10⁷ IU/mL had reduced overall patient survival (52.8% versus 89.1%, P = 0.003) and reduced overall graft survival (52.8% versus 85.9%, P = 0.010) in comparison with those with a peak viral load <10⁷ IU/mL (Fig. 1C). The mean survival times for patients with peak viral load levels of >10⁸, ≥10⁷ to <10⁸, and <10⁷ IU/mL were 11.8, 70.6, and 89.1 months, respectively (P < 0.05; Fig. 2).
Although it was clear that peak viral load levels were associated with differing outcomes, other factors thought to be potentially important were examined.
Effect of Donor Age on Survival
There was a decrease in both patient and graft survival with increasing donor age. In recipients who received deceased donor organs from donors < 50 years and ≥ 50 years old, 1-, 3-, and 5-year patient survival was 89.8%, 82.4%, and 79.3% and 83.5%, 73.7%, and 45.2%, respectively (P = 0.02), whereas graft survival was 88.6%, 81.3%, and 76.3% and 83.5%, 73.7%, and 45.2%, respectively (P = 0.05; Fig. 1B).
Effect of Immunosuppression on Survival
All patients were maintained on prednisone therapy for a minimum of 6 months, with 83.4% being on low-dose steroids (1-10 mg; median, 5 mg) at 1 year. Importantly, all the peak viral load determinations within the first year occurred while the patients were on prednisone therapy.
Overall, 75% of the patients received TAC-associated immunosuppression, with the remaining 25% receiving CsA. In patients receiving TAC, the median duration of therapy was 32 months (mean, 38.1), whereas for CsA, the median was 39 months (mean, 53.8). There was diminished patient survival for recipients whose initial immunosuppression regimen included CsA. In recipients who received CsA, the 1-, 3-, and 5-year patient survival was 76.9%, 53.8%, and 46.2% versus 89.1%, 84.1%, and 73.5% for those on TAC (P = 0.05). However, graft survival was not significantly different between those on CsA and those on TAC. The graft survival was 76.9%, 53.8%, and 46.2% and 88.3%, 83.3%, and 71.2%, respectively (P = 0.10; Fig. 3A,B). Furthermore, there were 34% more rejection episodes in patients treated with CsA versus TAC (P = 0.11).
Additionally, there was diminished patient survival for recipients who had treated episodes of acute rejection. There were a total of 13 episodes of treated acute rejection with corticosteroids and 6 cases treated with OKT3 in the first year after transplantation (median of 33.5 months with a mean of 41.5 months post-transplantation). The mean survival of individuals treated for acute rejection was 65.7 ± 9.6 months versus 80.6 ± 4.2 months for individuals who did not have treated acute rejection (P < 0.04).
Patients who received AZA had a median follow-up of 37 months (mean, 42.6) versus 45 months (mean, 42.6) for those individuals who did not receive AZA. There was better patient and graft survival for those recipients who had AZA as part of their initial immunosuppression regimen. The 1-, 3-, and 5-year patient survival for those on AZA and not on AZA was 96.9%, 91.3%, and 91.3% and 74.3%, 63.9%, and 45.4%, respectively (P ≤ 0.001), whereas graft survival was 96.9%, 91.3%, and 88.3% and 72.5%, 62.3%, and 44.3%, respectively (P ≤ 0.001).
Effect of Transplantation Year on Survival
Patient and graft survival did not differ between recipients who were transplanted before 1999 and those who were transplanted between 1999 and 2005. In recipients transplanted before 1999 and between 1999 and 2005, the 1-, 3-, and 5-year patient survival was 82.1%, 71.4%, and 60.7% and 89.7%, 83.9%, and 76.0%, respectively (P = 0.13), whereas graft survival was 82.1%, 71.4%, and 60.7% and 88.6%, 82.9%, and 72.9%, respectively (P = 0.36). Furthermore, a survival analysis was undertaken according to the year of transplantation, and there was no significant difference in either patient or graft survival.
Independent Predictors of Patient Survival
In the univariate analysis, 10 variables were significantly associated with an increased risk of patient death after transplantation among HCV-positive patients (Table 2):
1. Recipient age greater than 50 years.
2. A significant history of alcohol intake before transplantation.
3. Posttransplant peak RNA greater than 10⁷ IU/mL.
4. Donor age greater than 50 years.
5. Split organ transplantation.
7. Use of AZA.
8. Use of antirejection therapy.
10. Transplantation before 1999.
Table 2. Predictors of Patient Death, Graft Failure, and Hepatitis C–Related Graft Failure
Only significant variables are shown in the multivariate analysis.
The variables shown are intercurrent alcohol abuse, genotype (1 versus non-1), peak RNA post–liver transplant (≥10⁷ versus <10⁷ IU/mL), donor-recipient sex mismatch, whole versus split organ, primary CNI (tacrolimus versus CsA), acute antirejection therapy, and year of transplant (after 1999 versus in or before 1999). Reported hazard ratios (95% confidence intervals) include 4.03 (1.45–11.2), P = 0.0007; 3.04 (1.17–7.90), P = 0.02; 8.68 (2.04–37.02), P = 0.004; and 2.26 (1.0–5.38), P = 0.05, with additional P values of 0.05, 0.004, 0.001, and 0.04.
Of these, only a posttransplant peak viral load of ≥10⁷ IU/mL [hazard ratio (HR), 3.71; P = 0.02] and use of AZA (HR, 0.35; P = 0.05) were shown to be independent predictors of patient survival in the Cox regression analysis (Table 2).
Independent Predictors of Graft Survival
In the univariate analysis, the same variables were associated with an increased risk of graft failure, except for the presence of intercurrent HCC and split organ transplantation. Of these, only a posttransplant peak viral load of ≥10⁷ IU/mL (HR, 3.70; P = 0.02), use of AZA (HR, 0.30; P = 0.02), and treated acute rejection (HR, 3.16; P = 0.03) were shown to be independent predictors of graft survival (Table 2).
Independent Predictors of Hepatitis C–Related Graft Failure
In the univariate analysis, variables associated with hepatitis C–related graft failure included recipient age ≥ 50 years, female recipient, intercurrent alcohol abuse, a posttransplant peak viral load of ≥10⁷ IU/mL, CsA immunosuppression, nonuse of AZA, treated acute rejection, and exposure to OKT3. Of these, only a posttransplant peak viral load of ≥10⁷ IU/mL (HR, 8.68; P = 0.004) and nonuse of AZA (HR, 2.83; P = 0.04) were independently associated with hepatitis C–related graft loss (Table 2).
In this study, we investigated the relative importance of early peak viral load levels in predicting overall outcome for patients with HCV infection undergoing liver transplantation. Transplantation of 118 HCV-positive patients at our institution showed overall patient survival rates of 87.8%, 79.9%, and 70.1% and graft survival rates of 87.0%, 79.2%, and 68.2% at 1, 3, and 5 years, respectively. These observations are in keeping with previous studies.1, 11 Following multivariate analysis, we showed that peak RNA of ≥10⁷ IU/mL within the first year of transplantation, exposure to acute antirejection therapy, and use of AZA as part of the initial immunosuppressive regimen were associated with worse graft outcomes. This study is by far the largest sampling of viral load estimations following liver transplantation and is the first to show that early high peak viral load levels are independently associated with worse clinical outcomes.
In the nontransplant setting, HCV RNA levels do not correlate with the severity of liver disease.14-16 However, pretransplant viremia levels have been associated with more severe recurrent HCV.12 The effect of viral load in the posttransplant setting has not previously been incorporated into a multivariate analysis. The levels of viremia start to rise within the first week after surgery and continue to increase over the following weeks.10 Posttransplantation viremia has been found to be, on average, 1 order of magnitude greater than pretransplantation viremia levels. These elevated viremia values have been reported during short-term follow-up (1-2 years post-transplantation) and could correspond to the invasive phase of recurrence and lobular hepatitis in most patients.17 However, the association of HCV RNA levels with the severity of HCV recurrence and the relation between the level of HCV replication and histology are unclear. Papatheodoridis et al.18 found a correlation between levels of fibrosis and HCV RNA levels at 12 months post-transplantation. Moreover, Gane et al.19 also reported an association between the level of viremia and histological severity of early HCV recurrence. In contrast, no relation between hepatitis and the level of serum HCV RNA was reported in 2 successive series.20, 21
In the present study, we were able to show that patients with a posttransplant peak viral load of ≥10⁷ IU/mL within 1 year had worse patient and graft survival. Furthermore, the peak viral load was seen at week 11, suggesting that early viral load estimation within the first 3 months after transplantation is necessary. Our study is unique in analyzing such a large cohort and identifying the importance of the hepatitis C viral load in predicting patient and allograft outcomes. Moreover, our study raises the question of whether prospective serial measurement of the viral load in the early posttransplant period could predict survival. This is supported by Duvoux et al.,22 who found that serial HCV RNA quantitation provides a useful tool for the diagnosis and timing of HCV-related acute graft dysfunction.
Among recipients with a posttransplant peak viral load of >10⁷ IU/mL, we were able to identify 4 patients who had a peak viral load of >10⁸ IU/mL. All of these patients died within the first 2 years after transplant (mean, 11.8 months). Similar observations were reported by Doughty et al.23 They observed that recipients with more severe graft injury, progressive cholestasis, and jaundice with marked central zonal ballooning on liver biopsy had the highest viral loads, which were repeatedly measured in excess of 10⁷ HCV RNA copies/mL.23
Although the peak viral load levels were most strongly predictive of poor outcomes in the multivariate analysis, a number of other factors identified in the univariate analysis should be discussed. One of these was donor age. Many studies have shown in multivariate analysis that advanced donor age is associated with lower graft survival.8, 10 However, it should be recognized that these studies did not include extensive analysis of the posttransplant viral load as we have done here. Immunosuppression is a major factor that accounts for the accelerated natural history of HCV infection post-transplant.24 Worse outcomes have been reported in recent years, paralleling changes in immunosuppression with the introduction of newer and more potent immunosuppressive agents.1, 7 In particular, there has been a shift toward greater use of TAC and less use of CsA.25 This has led to speculation that the choice of CNI may affect the severity of HCV recurrence.
CNIs are the cornerstone of current immunosuppressive regimens in liver transplantation. Importantly, the CNI choice (TAC or CsA) was not an independent predictor of patient and allograft outcomes in our study. Interestingly, all patients with a peak viral load of >10⁷ IU/mL were on TAC immunosuppression. Although this may imply a possible deleterious effect of TAC, previous studies have shown no significant difference in HCV RNA levels between CsA-treated and TAC-treated patients.26-28 Although in our cohort we found worse patient survival with CsA (P = 0.05), this observation was not explained by an increased number of rejection episodes (and therefore a greater likelihood of exposure to steroid pulses and OKT3) because the number of rejection episodes, although greater for CsA, was not significantly different between the 2 CNIs (P = 0.11). However, after adjusting for other clinically significant factors in the multivariate analysis, we have shown that the type of CNI is not an independent predictor of worse outcomes (both patient and graft survival). Therefore, our findings suggest that the choice of initial CNI does not per se adversely affect outcomes of patients undergoing orthotopic liver transplantation for HCV. The overall effect of the type of immunosuppression appears to have been overridden by the viral load effect in our study. We hypothesize that the viral load may be a surrogate measure of the level of immunosuppression and could be used as such.
AZA is primarily used in maintenance immunosuppressive regimens as a steroid-sparing agent. Data on the effect of AZA use on the posttransplant course are few with inconclusive results.1, 29 In our cohort, AZA use was associated with better patient and graft survival. Long-term analysis by Hunt et al.29 previously showed that patients treated with AZA-containing regimens experienced significantly less recurrence and progression than those without AZA as part of their immunosuppression regimen. Furthermore, AZA has a direct antiviral effect on HCV.30 The exact mechanism by which AZA confers a survival advantage is intriguing and likely to be multifactorial, including patient selection, avoidance of acute rejection episodes, and a direct antiviral effect of AZA.
Multiple studies have shown a strong association between treatment of rejection and severe recurrence of hepatitis C.12, 31–34 Patients with multiple rejection episodes, exposure to steroid pulses, and greater daily exposure to steroids have a greater incidence and severity of recurrent hepatitis C.12, 19, 32 In the National Institute of Diabetes and Digestive and Kidney Diseases Liver Transplant Database study, treatment of acute cellular rejection increased mortality in HCV-positive patients (relative risk, 2.9) in contrast to a seemingly protective effect of treatment of 1 rejection episode in non–HCV-positive patients (relative risk, 0.6).12 Treatment of rejection episodes as an important determinant of worse outcomes is likewise confirmed in the present study. Our findings confirm the importance of preventing acute rejection with adequate immunosuppression, although this should be balanced against the risks of excessive immunosuppression, which likewise negatively affect outcomes of HCV infection after transplantation.
In conclusion, our study shows that posttransplant peak hepatitis C viral load levels of ≥10⁷ IU/mL within the first year are independently associated with worse patient and graft outcomes. Our study is unique in presenting HCV transplantation outcomes from a cohort with the largest documented set of serial viral load estimations, with these data subjected to both univariate and multivariate analysis. Previously, viral load estimations have not been incorporated into predictive models of HCV-related outcomes post–liver transplantation. However, our results clearly show that viral load is the most important independent predictor of patient death and allograft failure. We speculate that the HCV viral load may be an alternate and superior surrogate marker of immunosuppression post-transplantation. Therefore, frequent early HCV viral load monitoring after transplantation should be incorporated into clinical practice.