The assessment of the suitability of a patient with liver disease for liver transplantation has been described as a process evaluating whether 2 conditions are likely present: (1) the patient is sick enough to benefit from transplantation, and (2) the patient is not too sick or does not have comorbid conditions such that posttransplant survival would be so poor that transplantation would be considered a waste of a scarce resource.1 To assist with the first part of the evaluation, we currently have the Model for End-Stage Liver Disease score and its variants, which are based on easily obtained variables and have been proven to be reasonably reliable and robust.2 At this point, there is no similarly validated or endorsed schema available for the evaluation of posttransplant mortality. If such a measure were to be identified, it would have the potential for wide-reaching effects on the evaluation of candidacy for liver transplantation. In this issue of Liver Transplantation, Prentis et al.3 report that the anaerobic threshold (AT), a parameter derived from cardiopulmonary exercise testing (CPET) before transplantation, is a strong predictor of early posttransplant mortality. In light of this, should we cut up our current pretransplant evaluation protocols and replace them? Let us start by examining where we are now.
Although the practice is not universal, many liver transplant programs evaluate potential candidates for comorbid conditions that may adversely affect posttransplant outcomes. Two comorbid conditions that are frequently sought are pulmonary hypertension and coronary artery disease (CAD). Screening for pulmonary hypertension is typically performed via routine resting echocardiography with the estimation of the pulmonary artery pressure and with subsequent direct pressure measurements for suspected cases. The intraoperative and early posttransplant outcomes of patients with severe portopulmonary hypertension are so dismal that patients identified with this condition are typically deferred from transplantation until satisfactory pulmonary pressures can be achieved by medical management.4
CAD is one of the leading causes of post–liver transplant morbidity and mortality.5 Early postoperative concerns date from 1996 when Plotkin et al.6 reported a series of patients with known CAD who had undergone liver transplantation. The 1-year mortality rate was 50%. On the basis of this notable finding, screening for CAD was recommended with algorithms using noninvasive stress testing (particularly dobutamine stress echocardiography).7 Such algorithms are now widely used and recommended,8 but the clinical situation is quite murky: more recently reported early posttransplant outcomes of patients with CAD9 are considerably better than those reported by Plotkin et al., the efficiency of noninvasive testing for identifying CAD in the pre–liver transplant population appears to be poor,10 and there is no good evidence or clear consensus about the management of CAD once it is identified.11 On this basis, we can see 2 potential outcomes of pretransplant evaluations: the test results have clear implications for transplant candidacy (pulmonary hypertension), or the results and the implications for candidacy are more problematic (CAD). With these examples in mind, we can consider the potential place of CPET.
CPET is not new. Readers of my vintage may remember physiology labs involving a reluctant volunteer, a treadmill, an electrocardiogram, and exhaled gas collection with a cumbersome Douglas bag. Although the concept remains the same, the technology has improved greatly, and CPET is now widely available and is frequently performed in pulmonary and cardiovascular departments. It is used for the investigation and characterization of many conditions and for objective measurements of functional capacity and reserve.12 It has also been used for perioperative risk stratification, and a recent review concluded that CPET-based stratification outperformed other methods in identifying high-risk patients, although direct head-to-head comparisons are scarce.13 There is previous experience with the measurement of exercise capacity in patients with liver failure and in liver transplant candidates. Elsewhere in this issue of Liver Transplantation, Jones et al.14 review this literature and previous indications of a link between exercise-derived parameters and outcomes.
To what extent do the data presented by Prentis et al.3 allow us to establish a place for CPET in pre–liver transplant assessments? Let us ask the questions.
Is there biological plausibility? Yes. CPET variables are measures of functional capacity and reserve, so it seems logical that a low functional reserve would predict a poor outcome after a major stressor such as liver transplantation.
Is the finding consistent with known evidence? Yes (as noted previously).
Is the testing practical? A potential drawback of exercise-based testing in patients with liver failure is their limited capacity to exercise. The parameter identified as most predictive in the current study (AT) is a submaximal exercise parameter; 91% of the studied patients were able to exercise adequately to allow the derivation of AT. This included 5 of 6 patients with a Model for End-Stage Liver Disease score > 30. Compare this with dobutamine stress echocardiography, for example; more than 30% of those test results are nondiagnostic because of an inadequate heart rate response.10, 15
Is the test an effective predictor? Using AT < 9 mL/kg/minute (adjusted for the ideal body weight) as the cutoff results in very good indices for the prediction of mortality. However, some caution is needed here: only 60 patients underwent transplantation, and there were just 6 deaths in the study population. Thus, the results are vulnerable to the chance distribution of a small number of events. Although this is not an uncommon situation in reports evaluating preoperative assessments for liver transplantation, replication in a larger series with more outcome events is necessary for the findings to be convincing.
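The width of the uncertainty that comes with only 6 outcome events can be made concrete with a simple exact confidence interval. The scenario below is hypothetical (the editorial does not report how the deaths distributed around the cutoff): if, say, all 6 deaths had fallen below the AT cutoff, the point-estimate sensitivity would be a perfect 6/6, yet the exact 95% confidence interval would still extend down to roughly 54%.

```python
import math

def exact_ci_all_events(n, alpha=0.05):
    """Clopper-Pearson exact two-sided CI for a proportion in the
    special case where all n outcomes are positive (x = n).
    Closed form: lower bound = (alpha/2)**(1/n), upper bound = 1.0."""
    return ((alpha / 2) ** (1.0 / n), 1.0)

# Hypothetical scenario: all 6 deaths below the AT cutoff,
# i.e. a point-estimate sensitivity of 6/6 = 100%.
lo, hi = exact_ci_all_events(6)
print(f"sensitivity 6/6: 95% CI {lo:.2f}-{hi:.2f}")  # prints "sensitivity 6/6: 95% CI 0.54-1.00"
```

With so few events, even a seemingly flawless predictive index is statistically compatible with a test that misses nearly half of the deaths, which is the quantitative substance behind the call for replication in a larger series.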
Are the results generalizable? No. This is a single series from a single center. Peculiarities in patients or practice cannot be excluded. In addition to larger study numbers, further study is necessary in patients from different centers.
It is also worth noting that CPET is not the only test of functional capacity and reserve. The 6-minute walk distance, which is another such measure, has previously been reported to be an independent predictor of waiting-list mortality for liver transplant candidates.16 It is unknown whether this would also be true for the prediction of posttransplant mortality and what its efficacy would be in comparison with CPET-based prediction.
If we assume for now that these findings hold up under further investigation, other issues merit attention as we weigh the place of CPET in the pre–liver transplant evaluation. It is unlikely that a single variable will be effective enough as a universal predictor of outcomes. Although Prentis et al.3 did include several other potential predictors in their analysis, others need to be investigated. The idea that CPET-derived variables in combination with other indicators (particularly those evaluating cardiovascular risk) may provide more effective predictions has support in the literature.13 The timing of testing may require consideration: the average time from testing to transplantation was 170 days in the study, but because candidates may spend considerably more time waiting for a transplant (during which time their functional status will likely change), the influence of timing on predictions and outcomes may need to be addressed. Finally, is the functional reserve in liver transplant candidates a modifiable risk factor? If a patient is identified as a high-risk candidate, would deferment for a program to increase the functional capacity (if feasible) improve the outcomes?
Liver transplantation is an extremely complex undertaking that uses a limited resource. Clinicians involved in this practice have a duty to provide potential candidates with a risk/benefit appraisal that is as accurate as possible and to ensure the most effective use of donated livers. The ability to predict posttransplant mortality with a reasonable degree of confidence would assist clinicians in these endeavors. The findings of Prentis et al.3 are striking enough to merit further evaluation. A large multicenter study is needed; this study should ideally include other potential indicators with the goal of identifying a robust outcome model.
Should we cut up our current pretransplant assessment algorithms? Not yet, but keep the scissors handy.