There are 2 gorillas in the room: (1) liver allocation in the United States is not equitable by any significant metric (wait time, Model for End-Stage Liver Disease (MELD) score at transplantation, or regional review board behavior), and (2) liver allograft utilization is declining, as demonstrated by the widening gap between consented deceased liver donors and transplanted deceased donor allografts and by the decreasing donor risk index at the time of orthotopic liver transplantation (OLT).[1-3] As a community, our efforts to address the first gorilla, including the recent national implementation of the Share 35 policy, have been modest, whereas organized efforts to address the second have been nonexistent. The cost of these 2 gorillas is enormous in economic, personal, and societal terms: wait-list mortality, as a percentage of annual wait-list additions, has not significantly changed, and frustration persists over our inability to serve patients who are desperately ill. In this issue of Liver Transplantation, Kinkhabwala et al. analyze Organ Procurement and Transplantation Network (OPTN) data on expedited placement (EP) liver allograft codes by correlating the region of EP allograft origination, the region of EP allograft transplantation, the recipient MELD score at OLT, and early EP allograft function. The authors keenly integrate these data with their own wait-list mortality data during the study period to infer that inequity in US allocation practice has denied their patients an opportunity for OLT, and they call for additional regulatory oversight.
Although this study succeeds in evoking a visceral response similar to that of previous studies on nationally placed allografts,[5, 6] it does not provide the level of inquiry needed to form definitive conclusions or to direct effective policy. Most notably, EP allocation algorithms and EP allograft data are omitted. When was sequence allocation abandoned for these allografts? When was the accepting center notified of an available allograft? Who were the recovering surgeons, and what was their intent at recovery? Were they recovering for their own center, another local center, a regional center, or a distant center? Was cross-clamping delayed to facilitate EP placement? Lastly, what percentage of nationally placed allografts occurs by EP versus DonorNet sequence allocation? A separate OPTN query from my group at the University of Chicago over a similar time period indicates that EP allografts accounted for less than half of all allografts allocated nationally between 2010 and 2011. These data suggest that additional data specific to the EP allograft subset are essential. The prospective incorporation of such readily available data through a standardized EP allograft worksheet, completed by the organ procurement organization (OPO) exercising EP placement, would enhance transparency while providing critical data to guide policy.
Current OPTN data collection techniques do not support a structured analysis of the decision process leading to the determination of a transplantable allograft. Strengthening data collection algorithms that focus on earlier events in the donation process and incorporating new data points before recovery could improve our understanding of the decision process and increase utilization. The aforementioned EP worksheet would be invaluable in this context.
The authors acknowledge that EP allografts represent a very small percentage of the deceased donor pool, one that has diminished since the implementation of DonorNet. While advocating for greater oversight of this “largely unexamined” allocation pathway, the authors acknowledge that all EP allocations are reviewed by the United Network for Organ Sharing (UNOS). In fact, EP allocations are likely the most scrutinized component of the US deceased donor pool because every EP allocation elicits a subsequent query from UNOS to the OPO that allocated the liver as well as a query to the center that bypassed many of its higher-MELD candidates to transplant an individual patient. A review of our group's responses to UNOS allocation analyst queries dating back to October 2003 shows that the allocation algorithms are clear and that all have satisfied UNOS review. These data already exist. What is missing is transparency. The public release of de-identified OPO and center responses to UNOS out-of-sequence allocation queries, as well as the UNOS meeting minutes discussing these allocations, would open a treasure trove of information for research, for the standardization of data collection, and for the teaching of other centers that seek to change their allograft acceptance practices.
An additional subject for review would be center coding practices and OPO notification practices, as detailed in an editorial by Washburn and Olthoff. The current system neither rewards accurate coding practices nor discourages the insincere use of provisional yes responses or backup positions that must later be sorted quickly in times of altered allocation. A detailed analysis of DonorNet declination patterns after the initially accepting center declines, and of the impact of DonorNet on the utilization of high-risk allografts, is necessary because overall allograft utilization has declined. The authors join others in proposing another layer of EP notification, which would take us even further from addressing the gorillas in the room as this new category is embraced by centers that do not want to forgo a competitive opportunity but have no proven record of EP utilization. Each scenario in this very high-risk group of allografts is different and requires much more than what is on DonorNet. Additional data may include pictures and discussions with a recovering surgeon or pathologist who typically has already left or is in transit and unavailable. The clock is ticking, and there is simply not enough time to tell the story over and over as a random assortment of aggressive centers from all over the country call in. An aggressive surgeon's interest may vary geographically and may certainly be very different 1 hour later when the surgeon is notified that he or she may use the allograft: there are simply too many variables to capture with DonorNet radio buttons. An alternative explanation for the authors' observation of geographic disparity may simply be that time and logistic constraints have already created functional super-regions. Studies directed at optimizing the flow of declined allografts within DonorNet must be a priority in order to afford greater time for in-sequence allocation.
Short-term EP allograft function was described as excellent by the authors, who report a less than 3% incidence of immediate allograft failure and 1-year allograft survival of 85%. The low incidence of immediate allograft failure reflects the widespread recognition of predictors of primary nonfunction; however, increasing data demonstrate that the simple calculation of short-term allograft function may not be the optimal indicator for evaluating allograft performance. Although others have demonstrated acceptable short-term EP allograft survival,[10, 11] there have been significant differences in the rates of biliary complications, hospitalization, reoperation, the requirement for percutaneous drainage, and relisting for OLT.[12-15] For example, recipients of donation after cardiac death allografts have a slightly but significantly lower 1-year allograft survival rate than recipients of donation after brain death allografts, but their chances of being relisted for OLT at 1 year are approximately 250% higher.
The final point is the concept of EP allografts as a missed opportunity for our sickest patients. The theoretical argument for a survival advantage afforded to acutely ill patients through the utilization of literally any allograft has not been supported by numerous single-center and database studies, which have instead demonstrated increased costs, complications, and reoperations and decreased survival with the utilization of nationally imported or high–donor risk index allografts.[13, 15, 17, 18] The clinical bias against these allografts is reflected in the fact that although national allocation has always existed, less than half of all US transplant centers have ever used a nationally allocated allograft.
There is no doubt that select US centers have consistently used these allografts to generate results that exceed predictions based on the donor risk index and the incidence of delayed graft function. Continuing single-center reports suggest that their success requires the integration of wait-list management, substantial technical competence, experienced procurement, dedicated anesthesia/critical care, the precise diagnosis and management of delayed graft function, logistics, and significant institutional commitment. The implementation of a multicenter, multi-OPO study analyzing high-risk allograft utilization, with aims similar to those of the Adult-to-Adult Living Donor Liver Transplantation Cohort Study in living donation, would be the best solution to the authors' dilemma through the creation of a playbook for the transplant community. This would improve utilization and efficiency for all by permitting any center to prospectively and realistically evaluate whether the utilization of these allografts is clinically appropriate.
Local allografts provide a quantitative and qualitative advantage for our patients, and there are specific actions that a center can take to maximize local allograft offers while preserving an opportunity for EP allografts. The liberalization of DonorNet notification parameters, the application of a provisional yes to specific patients outside OPO notification algorithms, the review of declined allografts and their outcomes, and the routine utilization of in-sequence, nationally placed allografts have all been demonstrated to increase the center-specific transplant rate and decrease wait-list mortality. In their national analysis of liver offers for candidates who died or were removed from the wait list, Lai et al. determined that more than 80% of these patients received at least 1 allograft offer and that 55% had received an offer of what the authors defined as a high-quality allograft. Thus, a substantial proportion of wait-list mortality results from allograft acceptance patterns rather than a lack of opportunity. Kinkhabwala et al.'s contention that EP allografts represent a missed opportunity for their patients should be tempered by a comparison of the EP allograft population with a profile developed from the center's local allograft utilization, which is available through program-specific reports; this would provide a more accurate estimate of applicability. Only after the optimization of the center metrics that are directly within our control should we place the burden of wait-list mortality on this small and potentially dangerous minority of donors.
Kinkhabwala et al. have clearly identified the consequences of the 2 gorillas in the room and have challenged our community. Definitive actions by the transplant community to correct allocation inequity while stimulating utilization are possible and should be prioritized. The public release of already existing UNOS data on EP allografts and the standardized, prospective collection of data on EP allograft allocation would immediately improve transparency, whereas allocation realignment and a multicenter study designed to stimulate utilization have the potential to significantly increase the number of allografts for us all. The authors and the members of region 9 should be applauded for their New York State Department of Health initiative to address allocation: the inclusion of an automatic regional review board exception pathway for recipients of these allografts who develop complications not currently covered by UNOS policy could, as a catalyst for expanded utilization, significantly improve our understanding of what a transplantable allograft is.