
The model for end-stage liver disease (MELD) score has been perhaps the most scrutinized model of medical outcomes in history. A recent PubMed literature search retrieved 199 citations using MELD and liver transplantation (LT) as the search terms. The entire liver disease field has been greatly affected as the prediction of liver disease progression has been elevated to a level of scientific inquiry not seen before, and deceased donor liver allocation has moved from the realm of subjective, sometimes emotional pleas to mostly productive scientific discourse. However, although much progress has been made by employing MELD as a tool, we must constantly remain aware that this tool is used by humans who behave as humans do within systems. MELD does not make behavior uniform, and many factors other than the MELD score influence the results of allocation systems and post-transplant outcomes [1]. Moreover, when examining organ allocation systems, whether within centers or across nations, their effects on patients must be analyzed using an intent-to-treat approach: the results of the system must take into account the outcome of every patient entering the system, whether or not that patient actually receives an organ. Consequently, prioritization policies must serve the patients most in need while also achieving the best post-transplant results possible. This is a balance between individual justice (serving individuals in need) and population utility (obtaining the best results for the entire population at risk). With implementation of the MELD-based liver allocation system in the USA, mortality risk was chosen as the definition of need for LT [2] for adult patients with chronic, nonmalignant liver disease. Importantly, while many single-center studies have identified clinical, subjective variables such as ascites, variceal bleeding, and encephalopathy as important predictors of mortality, these results have not been consistently reproduced in multicenter studies because of observer differences in the measurement of these variables. Moreover, because liver allocation in the USA occurs across many centers, policymakers wanted to avoid observer-defined measures in prioritization algorithms to limit these biases and the potential for exploiting them to ‘game the system’ [3]. Consequently, the MELD score offered an excellent objective tool for defining this endpoint and thus serves individual justice well [3,4].
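For readers less familiar with how the score is computed, it is a simple function of three laboratory values. The following is a minimal sketch of the classic (pre-MELD-Na) calculation with the UNOS flooring and capping conventions; it is offered for illustration only, not as allocation-grade code.

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float,
               on_dialysis: bool = False) -> int:
    """Classic (pre-MELD-Na) MELD score with UNOS conventions.

    Laboratory values below 1.0 are floored at 1.0 so each logarithm is
    non-negative; creatinine is capped at 4.0 mg/dL and set to 4.0 for
    candidates dialyzed at least twice in the preceding week.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    creat = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)

    raw = (3.78 * math.log(bili)
           + 11.2 * math.log(inr)
           + 9.57 * math.log(creat)
           + 6.43)
    return min(round(raw), 40)  # the allocation score is capped at 40

# Example: bilirubin 2.5 mg/dL, INR 1.8, creatinine 1.2 mg/dL -> MELD 18
print(meld_score(2.5, 1.8, 1.2))
```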

However, the MELD score has never been validated as a highly predictive measure of post-transplant survival and thus, some have argued, may not be the best measure of utility. While there are many recent reports documenting associations between pretransplant MELD score and post-transplant patient [5–9] or graft survival [5,10], no report has ever documented that the pretransplant MELD score is an accurate predictor of post-transplant outcome. This is not surprising, as the MELD score was developed to predict the outcome of relatively noninvasive transjugular intrahepatic portosystemic shunt (TIPS) procedures based on intrinsic liver disease [11]; it does not account for important factors that are critical to the success of the transplant operation and are not captured by candidate characteristics alone. For example, donor factors such as age, race, gender, degree of steatosis, cause of death, and ischemia time have all been documented to play a role in patient and graft survival [12–16]. These factors are independent of the candidate's MELD score at the time of organ offer, and some degree of physician judgment (behavior) determines whether some or all of these donor risk factors are incorporated into the overall risk profile for a given recipient with a given MELD score, so it is understandable why the MELD score alone does not predict post-LT outcome. Furthermore, there is ample evidence that surgeon [17] and center [18,19] experience also influence outcome for complex surgical procedures. This experience also plays a role in determining which donor risks are acceptable for which level of candidate risk, and in the technical outcomes of surgical procedures.
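To make the structural point concrete, the sketch below treats donor-side factors as a hazard multiplier that is entirely independent of the candidate's MELD score. Both the variables chosen and the coefficients are placeholders invented for illustration; they are not taken from the cited studies.

```python
def donor_risk_multiplier(donor_age_years: float, cold_ischemia_h: float,
                          macrosteatosis_pct: float) -> float:
    """Toy relative-risk multiplier for graft failure (placeholder weights)."""
    rr = 1.0
    rr *= 1.0 + 0.01 * max(donor_age_years - 40, 0)   # older donors
    rr *= 1.0 + 0.05 * max(cold_ischemia_h - 8, 0)    # prolonged cold ischemia
    rr *= 1.0 + 0.02 * macrosteatosis_pct             # steatotic grafts
    return rr

# A 65-year-old donor, 12 h of cold ischemia, and 20% macrosteatosis roughly
# doubles the baseline graft-failure hazard for the same candidate MELD.
print(round(donor_risk_multiplier(65, 12, 20), 2))  # 2.1
```

Whether a given center accepts such an offer for a given candidate is exactly the judgment (behavior) component described above, and no candidate-side score can anticipate it.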

In this issue of Transplant International, two very interesting papers are presented in which MELD is used as a measurement tool. Both studies highlight aspects of the MELD model and its use and interpretation in clinical practice. In the first report, from Vienna, the investigators describe their experience with 505 patients listed with chronic liver disease indications, of whom 306 received LT during the study period [20]. One hundred twenty-three (24.4%) of these patients died while waiting. The patients who died while waiting had significantly higher MELD scores at listing and at removal, and had significantly greater increases in MELD score while listed, compared with the patients who received transplants. Moreover, these wait list failures were removed sooner (median 2.1 months to death vs. median 2.7 months to transplant) than those removed for transplant. In multivariable analyses, the authors found that MELD at the time of listing, refractory ascites, bacterial peritonitis, and co-morbidity score were independent predictors of death on the waiting list, with MELD at listing conferring almost twice the hazard of death of the other factors. Silberhumer et al. found a slightly reduced post-LT survival for patients with MELD score >24 at the time of listing that was not statistically significant. The proportion of patients with refractory ascites who died was twice the proportion of those who were transplanted, although the authors do not explicitly define refractory ascites.

These investigators do not mention how waiting candidates were prioritized or selected for transplant at the time of organ offer, but based on their data, it appears that they selected patients with lower MELD scores who less frequently had refractory ascites. As in many previous studies, it is difficult to assess what amount of ascites was considered refractory (or what criteria were used for the diagnosis of bacterial peritonitis) in this study, which makes it extremely difficult to reproduce these assessments in different hospital settings. Thus, while these caregivers were likely very consistent within their institution in assessing these variables, it is unlikely that clinicians in other centers would select the same candidates using ascites or peritonitis criteria, especially if the centers were in competition with one another and the interpretation of the severity of these subjective variables played a role in determining which patient or center was offered the currently available liver. This illustrates the difficulty, outlined above, with observer-defined variables in prioritizing waiting LT candidates. In addition, these authors did not provide analyses that would confirm that their variables are truly predictive of (not merely associated with) waiting list mortality. A receiver operating characteristic (ROC) curve analysis [21] validating their Cox model-derived variables in a cohort of patients different from that used to derive the Cox models is required.
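The validation exercise being asked for can be sketched as follows. The cohorts and variables are simulated, and a logistic model of 3-month waiting list death stands in for the Cox model; the point is the workflow of deriving a model in one cohort and then measuring its discrimination (the c-statistic, i.e., the area under the ROC curve) in an independent one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate_cohort(n: int):
    """Simulated listing cohort: MELD at listing plus two observer-defined flags."""
    X = np.column_stack([rng.normal(18, 6, n),    # MELD at listing
                         rng.integers(0, 2, n),   # refractory ascites (0/1)
                         rng.integers(0, 2, n)])  # bacterial peritonitis (0/1)
    logit = -6 + 0.25 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2]
    died = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # 3-month waitlist death
    return X, died.astype(int)

X_derivation, y_derivation = simulate_cohort(400)  # cohort used to fit the model
X_validation, y_validation = simulate_cohort(400)  # independent cohort

model = LogisticRegression(max_iter=1000).fit(X_derivation, y_derivation)
auc = roc_auc_score(y_validation, model.predict_proba(X_validation)[:, 1])
print(f"c-statistic in the independent cohort: {auc:.2f}")
```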

The MELD score at the time of transplant for LT recipients in this study was 17 ± 6, indicating that these recipients had a 3-month mortality without a transplant of approximately 7%. One wonders whether the relatively high overall waiting list death rate might have been reduced by more frequently selecting candidates with higher risks of waiting list death. Selecting candidates based on characteristics associated with the best post-LT survival is not an unreasonable approach when there is a critical shortage of organs. However, as reported in this paper, the post-transplant survival for the highest-risk patients was not statistically significantly different from that of the lower-risk candidates. One could argue that there was a nonsignificant trend toward poorer survival in this higher-risk group, but because these patients have much higher waiting list mortality, passing over higher MELD score candidates for relatively small differences in post-LT outcome does not result in significant improvements in overall life-years saved when results are evaluated from an intent-to-treat point of view. A recent analysis of the lifetime benefit of LT by Merion et al. found that there is very little gain in benefit, as measured by life-years saved, for transplantation of patients with MELD scores <18, and an actual loss of life-years when patients with MELD scores <15 receive transplants [22]. This occurs because, while there are differences in post-LT survival depending on the pre-LT MELD score, these are relatively small compared with the much wider distribution of pre-LT survival stratified by MELD score. As the range of MELD scores at transplant in this study is relatively small, it is not surprising that post-LT survival was similar among the MELD strata reported. In addition, although all of these transplants were performed in a single center where experience did not vary, differences in donor quality and ischemia times, as well as technical events, were not mentioned and likely contributed random variation to the post-LT results for the low- and high-MELD candidates. The post-LT results are similar to those reported in other studies [5,23], where there are small differences in survival that, although significant, are clinically irrelevant compared with not getting a transplant, especially for the higher MELD candidates. Using the intent-to-treat, lifetime benefit approach, MELD-based prioritization plans not only help to serve individual justice by directing organs to those most in need, but also help to ensure utility for the entire system because post-transplant survival is not widely different for low- or high-MELD candidates.
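The intent-to-treat arithmetic can be shown with a toy calculation. The figures below are invented placeholders, not estimates from Merion et al. [22] or from either study under discussion; they illustrate only why modest post-LT differences are swamped by the much wider spread in pre-LT survival.

```python
# Hypothetical expected life-years over a fixed horizon, by MELD stratum:
# (without transplant, with transplant). Placeholder values only.
strata = {
    "MELD 6-14":  (4.2, 4.4),  # survival without LT already long
    "MELD 15-24": (2.8, 4.2),
    "MELD 25+":   (0.9, 3.8),  # slightly worse post-LT, far worse without LT
}

for name, (without_lt, with_lt) in strata.items():
    print(f"{name}: transplant benefit = {with_lt - without_lt:+.1f} life-years")
```

In this toy example the benefit rises from +0.2 to +2.9 life-years across strata even though the post-LT figures differ only modestly, which is the shape of the argument for directing organs to higher MELD candidates.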

In the other MELD-based paper in this issue of Transplant International, Onaca et al. report their experience with 44 liver retransplantation (LrT) cases performed more than 30 days after primary transplant and compare these patients’ pre-LrT MELD scores with the pre-LT MELD scores of 669 primary LT recipients, all of whom received transplants between 1994 and 1999, well before MELD-based liver allocation was in place in the USA [24]. These investigators found, as in many previous reports [25,26], that LrT results in inferior patient and graft survival compared with primary transplantation. In addition, they found that LrT performed more than 2 years after the primary transplant had significantly poorer results than LrT performed earlier after the primary procedure. The greater post-LrT mortality was related to higher rates of sepsis and of cardiovascular, technical, and neurologic complications. There was no relationship between the MELD score immediately prior to LrT and post-LrT outcome. The authors suggest, based on their findings, that candidates for LrT are not served well by a MELD-based liver allocation system.

This study focuses entirely on outcomes after LT and LrT and provides no data on pretransplant mortality. During the study period covered in this report, prevailing USA liver allocation policy categorized candidates for nonemergent primary or re-transplantation into three groups and otherwise ranked them by waiting time. Although waiting times for primary LT or LrT are not directly reported in this paper, the poorer outcomes for LrT recipients who were more than 2 years beyond their primary transplant may reflect a failure to prioritize the sicker LrT candidates by disease severity, because they were forced to wait longer. Thus, these results may serve to point out the misdirection of donor organs caused by waiting time-based systems. This was one of the problems that implementation of the MELD system was designed to correct. Candidates for primary or re-transplantation do not all present to the waiting list at the same stage of their disease. In waiting time-based allocation, sicker candidates are forced to wait while less ill patients are served first because they have accrued more waiting time. This does not efficiently direct organs to those most in need. Consequently, it is difficult to accept the conclusion that LrT candidates should be granted increased priority in today's system, where waiting time plays almost no role, based on a study reporting poorer post-LrT outcomes obtained during an allocation era when the prioritization system directed organs to those who could wait the longest. Moreover, as MELD scores at the time of listing or re-listing are not reported, no assumptions regarding waiting list mortality can be made from this study. A recent analysis of Organ Procurement and Transplantation Network (OPTN) wait list data recognized that relisted candidates do face a higher mortality risk while waiting, but found that the MELD score does effectively rank relisted candidates according to pretransplant mortality risk [27]. Onaca et al. have a valid point that LrT candidates may not be able to wait a long time, but this is unrelated to their finding that LrT candidates had higher MELD scores immediately prior to transplantation. These data serve to illustrate how misdirection of organs occurs when waiting time is a major determinant of priority on the list.
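A toy ranking comparison makes the allocation-era point explicit; the three candidates below are hypothetical.

```python
candidates = [
    {"name": "A: primary LT, listed early", "meld": 12, "months_waiting": 30},
    {"name": "B: primary LT, listed late",  "meld": 28, "months_waiting": 2},
    {"name": "C: re-listed for LrT",        "meld": 25, "months_waiting": 1},
]

# Waiting time-based allocation serves candidate A first; MELD-based
# allocation serves the sicker candidates B and C first.
by_waiting_time = sorted(candidates, key=lambda c: -c["months_waiting"])
by_meld = sorted(candidates, key=lambda c: -c["meld"])

print("waiting-time order:", [c["name"] for c in by_waiting_time])
print("MELD order:        ", [c["name"] for c in by_meld])
```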

There is no question that LrT is an even more complex surgical procedure than primary LT, and the MELD score will never be able to account for this. The authors of this study acknowledge as much, as reflected in the higher technical and medical complication rates observed in their LrT recipients. The larger question, however, is whether known increased technical risk during the transplant procedure should qualify a candidate for increased priority even if he or she does not have an increased mortality risk without the transplant. This calculation is made even more complex by the problem of varying levels of surgical expertise alluded to above. The technical complication rate and post-transplant survival rate reported for the LrT group in this study are actually better than those reported in some other studies [28,29], indicating that other centers may have poorer results for LrT even if their candidates have similar pre-retransplant MELD scores. These results may partly reflect center experience in selecting candidates for retransplantation, especially retransplantation for recurrent hepatitis C [30,31].

Again, these results must be evaluated with an intent-to-treat technique. It is not clear that prioritizing LrT candidates beyond their MELD-defined pre-LrT mortality risk would significantly reduce the postoperative technical complication rate. Moreover, doing so would disadvantage waiting primary LT candidates with higher measured mortality risks, as defined by their calculated MELD scores. This would likely result in fewer overall lives saved: post-LrT results would still be poorer because of technical and nonliver-related complications, and if LrT candidates are artificially advanced beyond their own mortality risk, more waiting primary LT candidates, who have higher mortality risks, will die. Conversely, some might argue on utilitarian grounds that the increased technical complication rates in LrT recipients are not acceptable when so many primary LT candidates still die waiting, and that risk factors for poorer outcomes, especially those that do not carry increased pretransplant mortality risk, should be awarded less, not more, waiting list priority.

In conclusion, both of these papers serve to illustrate the issues surrounding LT and organ allocation. Silberhumer et al. have shown, as others have, that clinical manifestations of portal hypertension, when they are consistently assessed, are associated with mortality risk, but not as strongly as the MELD score. These clinically subjective measures, however, are difficult to reproduce precisely and do not add much accuracy to the MELD score [32,33]. Both studies assigned donor livers on clinical or waiting time-based measures and illuminate why using nonmortality endpoints for allocation generally does not serve patients with chronic progressive liver disease well, even after a primary transplant. In fact, a recent report estimated that significantly more lives would be saved if donor organs were assigned based on MELD score rather than on clinical judgment [34]. Moreover, advocating higher priority because of poorer outcome after transplant applies to all waiting candidates, not just LrT patients, and must be balanced against the effects of bypassing candidates with potentially higher mortality risks on the waiting list. This can only be done when the transplant allocation system is evaluated as an intent-to-treat model.
