Keywords:

  • SRTR;
  • OPTN;
  • statistical analysis;
  • survival analysis;
  • data collection;
  • data sources;
  • data structure;
  • death ascertainment;
  • transplant research

Abstract


Understanding how transplant data are collected is crucial to understanding how the data can be used. The collection and use of Organ Procurement and Transplantation Network/Scientific Registry of Transplant Recipients (OPTN/SRTR) data continue to evolve, leading to improvements in data quality, timeliness and scope while reducing the data collection burden. Additional ascertainment of outcomes completes and validates existing data, although caveats remain for researchers. We also consider analytical issues related to cohort choice, timing of data submission, and transplant center variations in follow-up data. All of these points should be carefully considered when choosing cohorts and data sources for analysis.

The second part of the article describes some of the statistical methods for outcome analysis employed by the SRTR. Issues of cohort and follow-up period selection lead into a discussion of outcome definitions, event ascertainment, censoring and covariate adjustment. We describe methods for computing unadjusted mortality rates and survival probabilities, and estimating covariate effects through regression modeling. The article concludes with a description of simulated allocation modeling, developed by the SRTR for comparing outcomes of proposed changes to national organ allocation policies.


Introduction


In the articles corresponding to this one in the SRTR Report on the State of Transplantation for each of the three previous years, we have discussed a range of topics, including: the scope of transplant data available and the evolution of data collection mechanisms; how that data collection system is improving the quality of these data and reducing the data collection burden; how additional ascertainment of outcomes both completes and validates existing data; and caveats that remain for researchers (1–3). This year, in the first section of this article we continue to build upon that foundation and focus on two key areas: (i) a brief summary of the scope of data available; and (ii) further discussion of improvements in data submission patterns, both on the waiting list and after transplant, as well as their implications for analysis.

Since this article now combines elements of analytical methods with the discussion of the database design, there is a separate, second section which reviews some essential analytical approaches which are frequently used by the Scientific Registry of Transplant Recipients (SRTR), including those used in the 2005 OPTN/SRTR Annual Report, the Center-Specific Reports (CSRs) published at http://www.ustransplant.org, and analyses pertaining to data requests from the Organ Procurement and Transplantation Network (OPTN) committees and the Secretary's Advisory Committee on Organ Transplantation (ACOT). The types of analyses conducted by the SRTR can be broadly classified as either unadjusted (‘crude’) or covariate-adjusted; the former are used primarily for descriptive purposes, while the latter focus on determining the relative importance of various factors on the outcome or for drawing risk-adjusted conclusions. Unadjusted and covariate-adjusted analyses will be discussed separately.

Overview


This article has been reformulated to combine the discussion of the database design with the discussion of cohort selection and choosing the appropriate methods for analyses. It includes new information on which transplant recipients become lost to follow-up (LTFU) and how this varies not only over time but also by the organ transplanted.

It is important that researchers using transplant data have an understanding of the scope and structure of available data, and that they be familiar with how these data are collected. Readers seeking more detailed background about the structure and source of available data should refer to ‘Transplant Data: Sources, Collection and Caveats’ (2), which also includes a more detailed discussion of initial multiple-source validation of mortality data. Readers seeking a more comprehensive description of the UNet℠ data collection system and recent improvements should see ‘Data Sources and Structure’ (1).

Data reported by transplant centers and organ procurement organizations (OPOs) to the OPTN are an increasingly rich source of information about the practice and outcomes of solid organ transplantation in the United States. The SRTR has expanded the spectrum of addressable research questions on transplant outcomes, as well as the accuracy with which they are answered, by linking data from the OPTN to several other data sources (SSDMF [Social Security Death Master File], CMS [Centers for Medicare and Medicaid Services], NDI [National Death Index], SEER [Surveillance Epidemiology and End Results], and NCHS [National Center for Health Statistics]), as described in ‘Transplant Data: Sources, Collection and Caveats’ (2). New procedures implemented by the SRTR for including additional ascertainment of outcomes, such as mortality, may also have implications for transplant centers' ability and motivation to report these statistics. Another result of such linkages is the ability to study in detail outcomes other than mortality and graft failure. For example, Schaubel and Cai recently used the linked SRTR and CMS databases to compare hospitalization rates on the waiting list and after transplant (4).

Data quality and timeliness continue to improve from 1 year to the next. OPOs and transplant centers are increasingly familiar with new, more efficient data collection tools implemented by the OPTN. These factors make it important for researchers to continually remain aware of current measures of data timeliness in choosing cohorts, deciding on methods and watching for potential biases in their analyses. The statistical methods chosen by the SRTR for any particular analysis depend strongly on the nature of the research question. SRTR analyses often involve time-to-event data, which are inherently incomplete since, inevitably, the observation period concludes before all subjects have experienced the event of interest (e.g. transplantation, death or graft failure). Each method described later in this article requires careful consideration of the sequence of events for each individual organ and patient.

Database Design and Data Structure


A researcher seeking to fully understand the database design and data structure of the SRTR may want to start with the ‘units of analysis’. Figure 1 shows a useful method of organizing transplant data into these units, which are designed to be of most use to researchers asking questions about the events or outcomes that may follow the placement of a candidate on the waiting list, organ donation, or a transplant itself. The data tables in Figure 1 relate to specific subjects of interest for research: candidacies, donors, transplants, and the components thereof. Also shown are some of the more specialized tables, from which researchers might analyze organ turndowns, use of immunosuppression medications, or changes in waiting list status prior to transplant.

Figure 1. Transplantation research data organization, primary and secondary sources.

Three tables in Figure 1 are the entry points for individual persons into the transplant process: the candidate registration table (which includes registrants who become transplant recipients), and the living and deceased donor tables. Underlying these three individual level tables (and not shown in the figure) is a ‘Person Linking Table’ (PLT) that is vital to the integration of multiple data sources discussed later. The PLT holds one record per person, establishes links on the basis of similarities in Social Security Numbers (SSNs), names and nicknames, dates of birth, and other person-level information, while accounting for many of the mistakes that may occur in entering data in these fields. The maintenance of this identification roster, with aggregated identification information compiled from all data sources, facilitates a system of matching to both external data sources and other records within the OPTN data, such as for persons who receive multiple transplants or even for donors who later become recipients.
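As an illustration of the kind of record linkage the PLT performs, the sketch below matches two records that share an SSN and date of birth despite a misspelled surname. The field names, similarity measure and thresholds here are invented for illustration; they are not the SRTR's actual linkage rules.

```python
# A minimal sketch of person-record linkage in the spirit of the PLT.
# Field names and thresholds are illustrative, not the SRTR's actual rules.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1], tolerant of typos."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def same_person(rec1: dict, rec2: dict) -> bool:
    """Declare a match if identifiers agree closely enough.

    Exact SSN agreement plus a similar name, or exact date of birth
    plus a very similar name, counts as a match in this toy rule.
    """
    if rec1["ssn"] and rec1["ssn"] == rec2["ssn"]:
        return name_similarity(rec1["name"], rec2["name"]) > 0.8
    if rec1["dob"] == rec2["dob"]:
        return name_similarity(rec1["name"], rec2["name"]) > 0.9
    return False

# Example: a candidate record and a death-file record for the same person,
# despite a misspelled surname.
cand = {"ssn": "123-45-6789", "name": "Jane Smyth", "dob": "1960-02-29"}
death = {"ssn": "123-45-6789", "name": "Jane Smith", "dob": "1960-02-29"}
print(same_person(cand, death))  # True
```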

In addition, this figure documents some of the primary and secondary data sources that may contribute to each table. Further detail regarding the specific data collection instruments, before the information is aggregated to records of interest, is shown in Figure 2.

Figure 2. Data submission and data flow, primary and secondary sources.

Waiting list data

In Figure 1, the ‘candidate registration’ table holds records for potential transplant recipients: patients who are placed on the waiting list as well as patients who receive living donor transplants without having been waitlisted. Analytically, this table helps researchers describe the ‘demand’ side of the transplant process, comparing characteristics of successful and unsuccessful transplant candidates and describing disease progression among prospective recipients while they are not transplanted, although the researcher must be cautious of the bias introduced by transplanting some of these patients, as discussed later. These candidates act as a useful comparison to those who do receive transplants; considering the consequences of not being transplanted can be helpful in evaluating the benefit of transplanting each type of patient. Because mortality plays such an important role in evaluating transplant benefit, the examination of the timeliness and accuracy of candidate data sources presented in this section focuses in particular on the reliability of mortality information.

Primary sources:  The primary source of information about candidates for transplantation is the OPTN database, which stores information about all persons on the national waiting lists. Transplant centers must continuously maintain their waiting lists by reporting on changes in severity of illness (for some organs) and other outcomes, such as transplant or death. Information in this table is taken from these waiting list maintenance records and the Transplant Candidate Registration (TCR) record completed soon after registration.

Because the maintenance of the waiting list is continuous, researchers should be able to report on waiting list outcomes soon after they happen. In actuality, this depends on the outcome. Removal from the waiting list for transplant is linked to the generation of a transplant record, so reporting is nearly immediate. Reporting of death on the waiting list may lag further, particularly among patients who are offered organs less frequently because of low severity of illness or short accumulated waiting time, since turndown of offers often spurs waiting list maintenance.

Timing of waiting list maintenance:  Table 1 helps an analyst assess the currency of waiting list data for mortality analyses by showing the time between death and removal from the waiting list for death. The first three columns show evidence of improved timeliness of waiting list removal for death, though the statistics reported for 2004 may overstate completeness at any point in time because not all deaths during 2004 have been reported yet. About three-quarters of the deaths that are reported by the centers are reported within 2 months of their occurrence. This profile of lag time in reporting can help guide the researcher in choosing appropriate cohorts for analyses of waiting list outcomes that include mortality, based on primary data sources.

Table 1. Lag time to report of death on the waiting list; all deaths of waiting list registrants reported by center (cumulative percent reported)

                          All organs, by year of death     By organ, year of death = 2003
Time until reporting        2002     2003     2004          Kidney    Liver    Heart
On death date               11.8     11.5     10.6            4.1     19.4     34.6
Within 1 month              64.1     64.0     64.4           52.8     76.5     88.9
Within 2 months             72.8     72.6     73.7           65.0     80.6     91.0
Within 3 months             78.6     78.0     79.8           72.5     83.3     92.7
Within 6 months             86.0     86.5     90.5           84.0     88.0     94.5
Within 12 months            94.4     94.1      –             93.1     94.6     97.5

Source: SRTR analysis, July 2005. Note: figures for more recent years may overstate completeness at any time because all deaths (i.e. the full denominator) have not yet been reported.

The reporting of death is less prompt among candidates for kidney transplant than for other organs: 65% versus 81% for livers and 91% for hearts at 2 months (Table 1). This difference is expected because of the longer waiting times and available alternative therapies that may make the contact between patient and transplant center less frequent. In 2003, nearly 35% of deaths among heart registrants were reported on the day of death, compared with less than 5% of kidney registrant deaths.
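A lag profile like Table 1 can be computed from any set of paired event and report dates. A minimal sketch, with invented records:

```python
# A sketch of how a cumulative lag-time profile like Table 1 can be
# computed from paired event and report dates; the records are made up.
from datetime import date

deaths = [  # (death date, date the report reached the registry)
    (date(2003, 3, 1), date(2003, 3, 1)),    # reported on the death date
    (date(2003, 5, 10), date(2003, 6, 2)),   # within 1 month
    (date(2003, 7, 4), date(2003, 11, 20)),  # ~4.5 months later
]

thresholds_days = [0, 30, 60, 90, 180, 365]
for t in thresholds_days:
    pct = 100 * sum((r - d).days <= t for d, r in deaths) / len(deaths)
    print(f"reported within {t:3d} days: {pct:5.1f}%")
```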

Extra ascertainment sources:  A transplant center's reporting duties end upon each candidate's removal from the waiting list. However, events occurring in the months following removal—such as death or transplant at another center—are frequently interesting analytical endpoints to the researcher. Therefore, a candidate file may incorporate additional mortality sources or waiting list, transplant, and follow-up information reported by other centers for the same person.

Many of the same additional sources of outcome ascertainment are used for both waiting list analyses and posttransplant analyses, particularly for mortality. Using the PLT (described above) to match patients, results may be incorporated from three other ‘secondary’ sources:

  • (i) Patient linking between OPTN records allows a researcher to tell that a transplant candidate at one center has had a death or transplant reported by a different center, or that a graft has failed, on the basis of a retransplant at another center.
  • (ii) The Social Security Death Master File (SSDMF), publicly available from the Social Security Administration (SSA), contains over 70 million records created from reports of death to the SSA, for beneficiaries and nonbeneficiaries alike.
  • (iii) The CMS-ESRD Database provides data primarily from Medicare records for ESRD patients, and helps provide evidence of start of dialysis therapy, resumption of posttransplant maintenance dialysis indicating graft failure, or death.

In addition, the National Death Index (NDI) is available for validation of the completeness of these sources, though its use is not permitted for most analyses. The NDI, based on death certificate information submitted by state vital statistics agencies, misses only about 5% of all deaths in the United States.

In 2002, the SRTR and OPTN jointly obtained data from the NDI for a sample of transplant candidates and patients to evaluate the completeness of mortality reporting in the other existing data sources. As the SRTR presented in this forum in 2002, the majority of deaths are reported by the main transplant center following the patient (1). It continues to be important to use all of these available sources in doing mortality analyses: of patients receiving a transplant between July 1, 1999 and June 30, 2004 (those included in the most recent CSR cohort), 78% of kidney and pancreas transplant recipient deaths were reported by the transplanting center. It is still the case that significant fractions of all the deaths are reported by other available sources, as 19% of these deaths were reported by the SSDMF and 3% of the deaths were reported first by another transplantation program. In cases where discrepancies arise among different death dates reported, the SRTR most often relies on what is reported by the center, first and foremost. The primary reason for this decision is that deaths are often reported to the SSDMF as occurring on either the first or last day of the month, or on the 15th of the month as an ‘average’.

In 2003, the SRTR began using extra ascertainment from CMS-ESRD data for kidney graft failure for many types of analyses. A study was conducted to explore the possibility of supplementing existing SRTR data with CMS graft failure data for kidney recipients followed by the OPTN. The CMS data may provide additional information on recipients that are LTFU, because CMS can be notified about a graft failure event through several possible mechanisms, in addition to the OPTN. Further discussion of this work can be found in ‘Transplant Data: Sources, Collection and Caveats’ (2).

Transplant and posttransplant data

The transplants table shown in Figure 1 provides a consolidated source of information about each transplant event, including information about the donor, recipient, operation and follow-up, summarized to facilitate easy analyses. This file is used by analysts to describe trends in the characteristics of transplant recipients, examine transplant outcomes and provide an estimate of posttransplant survival for comparison to waiting list survival in allocation policy decisions.

Primary sources:  The data for the transplant table are primarily taken from the Transplant Recipient Registration (TRR) form collected by the OPTN. Additional characteristics, from the donor and candidate files, are added for ease of analysis, as are aspects of the interaction between donor and recipient characteristics (e.g. calculated HLA mismatch scores; ABO blood type compatibility; whether the organ was shared, based on the relationship between the OPO recovering the organ and the transplanting center).

The transplant follow-up data, collected primarily from the Transplant Recipient Follow-Up (TRF) record, may be summarized to the transplant level, creating indicators of death, graft failure, and time to follow-up. The expected—and actual—timing of the follow-up forms are very important to cohort choice in analyses. After each transplant, follow-up forms are collected at the 6-month (for nonthoracic organs) and yearly anniversaries (for thoracic and nonthoracic organs) of the transplant; these forms may also be submitted off-schedule to report such adverse events as graft failure or death. While transplant follow-ups may be useful on their own—or in conjunction with their own sub-tables for immunosuppression or malignancies—for analysis of specific events that occur during follow-up, they are most widely used in the summarized form for death and graft failure analyses discussed here. For such analyses, the timing is particularly important.

Timeliness of follow-up forms:  Just as with events on the waiting list, it is important to consider the time lag until follow-up forms are filed when determining cohorts for analysis of posttransplant events. Implementation of new data collection mechanisms and stricter rules has shortened the time until validation. Table 2 shows that the time from the date of record generation until validation (when the form has been submitted and verified by the center) has grown shorter, but it is still nearly 4 months after each anniversary until four of five forms are submitted, and 6 months before nine of ten are completed. However, the increase from 91% in 2003 to 97% in 2004 indicates that the timeliness of submission of routine follow-up forms continues to improve. If the trend continues, it is likely that more recent data could be used in analyses in the near future. However, a balance must be struck between the need for recent data and the need for complete data. Currently, the SRTR typically allows for between 3 and 6 months of lag time, depending on the need for analyzing data from the most recent cohort available.

Table 2. Timing for validation1 of follow-up forms (cumulative percent validated, by year the form was added)

                                 Routine follow-ups            Interim follow-ups
Months until validated1          2002     2003     2004        2002     2003     2004
1 month                          26.0     30.6     32.7        43.9     52.8     56.0
2 months                         51.7     60.3     67.3        60.2     70.6     76.1
3 months                         68.3     72.0     80.7        72.2     78.4     84.3
4 months                         77.1     79.3     87.7        79.4     83.7     89.5
5 months                         82.2     86.4     93.3        83.6     88.3     93.5
6 months                         85.9     90.8     97.0        86.5     91.6     96.5
7 months                         89.0     93.8      –          89.0     93.7      –
8 months                         91.6     95.8      –          90.9     95.4      –
9 months                         93.5     97.1      –          92.4     96.6      –
10 months                        94.9     98.0      –          93.5     97.7      –
11 months                        95.8     98.7      –          94.5     98.5      –
12 months                        96.5     99.3      –          95.3     99.0      –
All unvalidated by 6 months      14.1      9.2      3.0        13.5      8.5      3.5
All unvalidated by 1 year         3.5      0.7      N/A         4.7      1.0      N/A

Source: SRTR analysis, July 2005.
1 The form has been submitted and verified as complete by the center.

Timing of follow-up forms:  In addition to the lag time until validation of follow-up forms after transplant, the pattern of form submission—often clustered soon after transplant anniversaries—has important implications for avoiding biases when analyzing recent data.

‘Routine’ follow-up forms are generated at each transplant anniversary, yet deaths occur on a continuous basis throughout the posttransplant period. When a patient dies during follow-up, the transplant center may file an ‘interim’ follow-up form off the regular reporting schedule for that patient. This means that centers might report mortality more quickly and continuously than they report on surviving patients, for whom they must wait until the transplant anniversary.

For example, in an analysis of patients transplanted 18 months ago, patients currently alive will have a 1-year follow-up form indicating their survival until the 1-year point, with no information beyond that. Patients who have died, on the other hand, might have follow-up forms indicating death both during the first year and any interim follow-up forms filed between months 12 and 18. Therefore, all of the data reported during months 12 to 18 would be about patients who had died. If a researcher used the Kaplan-Meier method to take advantage of the most recent data available, and censored at last follow-up, the portion of the survival curve calculated after the first year would be based inappropriately on over-reporting about patients who had died, thereby creating a bias in mortality reporting. This bias can be removed by waiting until the living patients are reported on at the 2-year anniversary. Similarly, 1-month survival rates cannot be reliably calculated until at least 6 months after transplant (1 year for thoracic organs), after the anniversaries have prompted reporting on all patients.

The examples given above are extreme cases. However, including these patients in a sample used for survival calculations, without appropriate censoring at transplant anniversaries, introduces the same bias into the average results. Further, these caveats are not limited to survival analyses: other analyses might over-represent outcomes associated with death in the final 6-month period.
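A small numerical sketch of this bias, using the open-source lifelines library on fabricated follow-up data, shows how censoring at last follow-up collapses the survival curve once only deaths remain in the risk set, and how restricting the analysis to the fully reported period removes the problem:

```python
# A sketch of the reporting bias described above, using made-up follow-up
# data. Survivors are known only to their 12-month anniversary; deaths
# between months 12 and 18 are reported as they occur via interim forms.
from lifelines import KaplanMeierFitter

# (months of observed follow-up, 1 = died, 0 = censored alive)
survivors = [(12.0, 0)] * 90    # alive at the 12-month form, nothing after
late_deaths = [(15.0, 1)] * 10  # interim forms filed between months 12-18

durations = [t for t, _ in survivors + late_deaths]
events = [e for _, e in survivors + late_deaths]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)
# Beyond 12 months only deaths remain in the risk set, so estimated
# survival collapses to 0 at month 15 -- the bias in question.
print(kmf.predict(15))  # 0.0: every patient still "at risk" past month 12 died

# Restricting the analysis to the fully reported first year removes the bias:
kmf.fit([min(t, 12.0) for t in durations],
        event_observed=[e if t <= 12.0 else 0 for t, e in zip(durations, events)])
print(kmf.predict(12))  # 1.0 here, since no deaths occurred by month 12
```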

The above example describes the case when transplant centers may report deaths as they occur. If this were a reliable pattern of reporting, one analytical solution might be to assume that the patient is alive unless we know otherwise. This approach would be effective if the multiple sources of mortality reliably captured all deaths. However, the combined sources are not reliably complete during many periods, since many deaths are reported as they occur and many more only at the next reporting anniversary, as Figure 3 shows. Figure 3 depicts when transplant follow-up forms are filed, comparing those filed for patients who have died to those for patients who have not. The actual time of the follow-up event (death in the top panel or reported as alive in the lower panel) is shown on the y-axis, and the time that the follow-up form was validated by the center is shown on the x-axis. If all events were reported as they occurred, points would fall only along the 45-degree diagonal dashed line. The horizontal distance, left to right, between this diagonal and each point represents the time lag between the event and notification to the OPTN.

Figure 3. Time to validation of death and survivor records.

The top panel shows this relationship for follow-up forms reporting deaths, and the pattern of reporting along the diagonal shows deaths that were reported near the time of death itself. (In the earlier example of using a cohort of transplants from 18 months ago to calculate a survival curve, it is this pattern of reporting along the diagonal for dead patients that introduces a possible bias beyond the 12-month follow-up time.) There is a more obvious clustering to the right of each vertical line at 6, 12 and 24 months after transplant, showing deaths are most often reported with the timing of routine follow-up forms. The actual death dates are distributed vertically along the line, emphasizing the extent to which many centers wait until prompted by the reporting cycle to report mortality, no matter when the death actually occurred.

The lower panel of the figure shows a similar clustering after each reporting anniversary, but the vertical height of these clusters, close to the diagonal itself, indicates that the events being reported, namely that the patient is alive, occurred more recently relative to the reporting date. This difference is also borne out in the median reporting lags, shown by arrows of different sizes in the two panels: 133 days for deceased patients and only 28 days for living patients.

Which recipients are LTFU?:  Transplant centers may have difficulties following transplant patients over time for a variety of reasons. For example, patients may move away or transfer their care to other medical professionals, or centers may just have a difficult time allocating staff to report on all patients. There are two different ways in which patients may become LTFU: (i) the transplant center reports them as being lost, or (ii) the center just does not complete follow-up forms for a patient. About 13% of recipients transplanted with kidneys, livers, hearts or lungs since 1997 were LTFU by the end of the third year after transplant; about three-quarters of these had been coded as LTFU by the transplant center, and the other quarter had no records completed for at least the last 1.5 years before the 3-year anniversary.

Figure 4 demonstrates that LTFU varies both by the time since the transplant occurred and by organ. The top panel shows not only that the number of patients being lost increases over time but also that the variation among transplant centers grows wider. Centers performing fewer than 10 transplants are not included here because their follow-up percentage is often quite uneven, depending on the small set of patients included. Almost all centers were able to follow at least 89% of their patients in the first year following a transplant. After the fifth year, however, half of the transplant programs had lost more than 14% of their patients to follow-up, a quarter had lost more than 25%, and 5% of centers had lost over half of their patients. Note that this analysis only includes transplants from the first half of the period (1997–1999), because later transplants lack sufficient follow-up time.

Figure 4. Percentages of patients lost to follow-up among centers performing at least 10 transplants, 1997–2002.

The bottom panel of Figure 4 shows that LTFU also varies widely by organ type. Three years after transplantation, most kidney programs had lost more than 10% of their patients to follow-up, whereas most heart, liver and lung programs had lost less than 5%. Additionally, some kidney centers had lost track of more than one-third of their patients within 3 years, while programs transplanting other organ types tended to lose less than one-fifth of theirs. This is further evidence of the importance of secondary follow-up data sources, such as the SSDMF, especially among kidney recipients. The SRTR continually reviews its data needs for research to ensure that the data items being collected are as complete and timely as possible. In addition, the Data Working Group initiated a review of the data elements and frequency of reporting for long-term follow-up of transplant recipients.

Lag time and cohort selection:  For patient survival analyses, the SRTR often adopts a technique of assuming a patient is alive unless known otherwise, allowing us to follow patients after they become LTFU. Patients are more prone to becoming LTFU after receiving a transplant than they are while still on the waiting list. In prior years, we have outlined arguments suggesting that even with significant LTFU, extra ascertainment of mortality makes it plausible to assume that all sources taken together provide reasonably complete ascertainment of death, such that less than 1% of deaths are missed (1).

It is important to continue to choose cohorts carefully, because the assumption of ‘alive unless we know otherwise’ holds true only during periods when we expect all sources to be complete and unbiased. This means that a patient can only be assumed to be alive for the periods in which follow-up data have been reported; it should not be assumed that the patient will be alive at any point in the future. For example, if follow-up has been reported at the 2-year anniversary of the transplant and there is no indication of death, it should only be assumed that the patient has survived for 2 years, not for any period after the date of the reported follow-up. Additionally, because of the lag time in reporting, follow-up reporting may not be complete until 2–4 months after the anniversary. As a result, if a cohort of January 1999 to December 2001 is chosen for analysis, 2-year follow-up would not be complete for the entire cohort until approximately March 2004.
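This cohort-timing arithmetic can be made explicit. The helper below is a hypothetical convenience, not an SRTR tool; it computes the approximate date by which N-year follow-up, plus reporting lag, is complete for the last transplant in a cohort (day-of-month handling is simplified):

```python
# A small sketch of the cohort-timing arithmetic in the text: the date by
# which N-year follow-up, plus reporting lag, is complete for a cohort.
from datetime import date

def followup_complete(last_transplant: date, years: int, lag_months: int = 4) -> date:
    """Approximate date when `years`-year follow-up is fully reported."""
    month = last_transplant.month + lag_months
    year = last_transplant.year + years + (month - 1) // 12
    return date(year, (month - 1) % 12 + 1, last_transplant.day)

# Transplants through December 2001, 2-year follow-up, ~3-month lag:
print(followup_complete(date(2001, 12, 1), years=2, lag_months=3))
# 2004-03-01, i.e. approximately March 2004, as in the text.
```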

Two additional considerations also stand out as being particularly important in cohort selection. First, a large enough cohort is desirable to ensure that the corresponding analysis will have sufficient statistical power. Second, the selected cohort must also reflect the specific aims of the investigators. These can be somewhat conflicting goals, with the first enticing one to choose a cohort that spans many years, but the second often suggesting that only recent experience be employed. A cohort's maximum follow-up (e.g. 1-year posttransplant, vs. 5 years) and length (e.g. transplants occurring in 2003, vs. 1999–2003) are inherently connected. For example, if a cohort's maximum follow-up is 2 years, then survival probability can be estimated only up to the 2-year point. In the context of posttransplant mortality, to estimate 5-year survival, the cohort must contain at least some patients who had at least 5 years of potential follow-up; i.e. were transplanted at least 5 years before the end of the observation period. Longer follow-up times necessarily arise from patient experiences that are further in the past. Since investigators often want to predict the future prognosis of current patients, and because improvements in medical practices and changes in organ allocation policy occur rapidly, it is desirable to use the most recent data available that are relevant to the research question. In cases where less-recent cohorts are included in making predictions for short-term outcome studies, one must carefully consider the trade-off between improving the precision and retaining the relevance of an analysis.

The discussion of Figure 3 and the timing of follow-up form submissions are instructive for choosing a cohort for posttransplant survival analysis. It is important to choose a combination of survival endpoint (horizontal line) and lag time (vertical line) that allows for a reasonable capture of both deaths and survivors. The survival endpoint (12 months) and additional lag time (+4 months) used for the SRTR CSRs 1-year posttransplant survival estimate are shown on the graph. Events in the boxed area are captured from center reporting, and would also be available in nonmortality analyses such as graft survival. Some events to the right of the boxed area will be reported by the center, if the transplant occurred early enough in the cohort to afford more than 4 months of lag time; others will rely on extra ascertainment, since center-reporting occurs after the lag time's duration.

Having described the SRTR database in the first section, the article now explains many of the analytic methods employed to analyze the SRTR data.

Analytical Methods


The second section of this article begins with a description of the analysis of waiting time until transplant, focusing on kidney and liver transplantation. A discussion of the analysis of posttransplant outcomes, including mortality and graft failure, follows. A general discussion of covariate-adjusted analysis, followed by a few comments on the limitations of regression models, is next. The final subsection describes the simulated allocation models (SAMs) developed by the SRTR to address questions dealing directly with organ allocation policy.

Analysis of transplant waiting times

The shortage of donor organs relative to the number of registrants awaiting transplantation holds for each type of organ failure, with the gap between demand and supply widening each year. The important issues in the analysis of waiting time until transplant, focusing specifically on kidney and liver transplantation, are now discussed.

Kidney transplantation:  For kidney transplants, which are still allocated primarily according to waiting time and human leukocyte antigen (HLA) mismatch, the SRTR computes several measures of waiting time. For example, in the CSRs, key questions addressed include:

  • 1. Among all registrants, what percentage received a transplant (or other outcome) within a particular time period (e.g. 6, 12 or 18 months)?
  • 2. By what time after listing had 50% of registrants received a transplant?
  • 3. What is the rate of transplantation per time period among actively listed registrants?

Answers to questions one and two are the most relevant to patients at the time of initial wait-listing, since each reflects the probability of transplantation while implicitly accounting for all potential outcomes (e.g. death prior to transplantation). Question three is relevant for patients who are currently active on the waiting list and for evaluation of the allocation process. The first two questions can be answered directly by evaluating outcomes in different groups of registrants, while the third involves a measure of events per unit of patient time (e.g. patient-years).

For the purposes of studying different regions or groups of registrants, all of the measures described above typically yield similar conclusions. In addition, an average waiting time among actual transplant recipients can easily be computed from transplant recipients during a recent interval of time. This statistic is useful for comparing waiting times among regions or among transplant programs. However, the average waiting time among recipients is not useful for patient counseling, since it does not factor in waiting times from registered patients who have not received an organ, or from patients who died or were removed from the waiting list before receiving a donor organ. Although average time until transplant among recipients has little relevance to patients currently on the waiting list, the statistic may be meaningful for the future prognosis of transplant patients; for example, increased time on dialysis is known to strongly influence survival after kidney transplantation and, among pediatric patients, the occurrence of developmental problems.

The outcomes for all wait-listed registrants are summarized by the fraction who receive a transplant, die without a transplant, are removed from the waiting list for various reasons, are still surviving after removal from the list, and are still on the waiting list at various time points after wait-listing. Two examples of such statistics are described here. Among all registrants, the fraction transplanted (FT) is reported in Table 5 of the CSRs at several points in time after listing (30 days, 1 year, 2 years and 3 years) for each transplant program (http://www.ustransplant.org). The FT is a simple fraction of all wait-listed registrants who received a transplant, regardless of the program where the transplant was performed. The FT summarizes the time to transplantation at any program among all registrants in that transplant program.

The time to transplant (TT) is the time since listing by which 50% (or another stated fraction) of all wait-listed registrants receive a transplant. The TT calculation summarizes the time to transplantation at a transplant program or within a group, taking into account the possibility of not ever receiving an organ. The TT measures the rate of transplantation at a particular program, so registrants who transfer to another program's waiting list or who are removed for reasons of good health are dropped (censored) at that time, using actuarial methods for the TT outcome. Registrants who die or are removed from the list for reasons of poor health are not censored and are counted as never receiving a transplant in both the TT and the FT calculations. Note that the median TT would never be reached for groups in which more than 50% of the registrants die or are removed for poor health, since these registrants are counted as never receiving a transplant.
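A simplified sketch of the FT and TT calculations, using the lifelines library on invented registrant records, is shown below. Note how deaths and removals for poor health are kept in the risk set indefinitely so that they count as never transplanted, while transfers and end-of-observation are censored, as described above. This is an illustrative coding, not the SRTR's production implementation:

```python
# A sketch of the fraction-transplanted (FT) and time-to-transplant (TT)
# measures; registrant records are made up.
from lifelines import KaplanMeierFitter

# (months from listing, outcome)
registrants = [
    (4.0, "transplant"), (9.0, "transplant"), (14.0, "death"),
    (20.0, "transplant"), (6.0, "transfer"), (30.0, "still_waiting"),
]

durations, events = [], []
for t, outcome in registrants:
    if outcome == "transplant":
        durations.append(t)
        events.append(1)
    elif outcome in ("transfer", "still_waiting"):
        durations.append(t)   # censored at transfer or end of observation
        events.append(0)
    else:  # death or removal for poor health: counted as never transplanted
        durations.append(1e6)
        events.append(0)

kmf = KaplanMeierFitter().fit(durations, event_observed=events)
print(1 - kmf.predict(12))        # FT: fraction transplanted within 12 months
print(kmf.median_survival_time_)  # TT: time by which 50% were transplanted
```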

Different statistics are useful for the evaluation of organ allocation policies for deceased donor organs. For example, rates of transplantation among registrants on the waiting list are useful for evaluating and comparing the impact of allocation policies on different groups of registrants. Such policies only affect registrants while they are active on the waiting list. The Annual Report shows percentiles of waiting time based on rates of deceased donor transplantation among all registrants during the time from listing until removal from the list. For such calculations, time while inactive is excluded, and registrants are censored at removal from the list for any reason, including death, poor health, recovery of native organ function, or receiving a living donor organ transplant. This measure of waiting time reflects what would result for a hypothetical population with transplant rates identical to those observed, if all registrants remained active on the waiting list until transplant.

Liver transplantation:  In the setting where organs are allocated based on waiting list survival probability, the seemingly simple question, ‘How long do patients wait for a transplant?’ is no longer so simple to answer. Taking liver failure as an example, organs are allocated first to patients with acute liver failure (Status 1), then to chronic liver failure patients with the highest expected waiting list mortality, based on the Model for End-stage Liver Disease (MELD) score that can change over time. Estimation of a patient's time until future transplant requires that the probability of potential future MELD pathways be quantified. Even if the probability of changing MELD categories is correctly specified, there is still the issue that changes in MELD are associated with both changes in waiting list mortality and in transplant probability itself. In some regions of the country, registrants with a very low risk of death might never be allocated an organ unless and until their condition worsens. Due to the difficulty in projecting waiting time until transplant, other important questions arise when considering the liver waiting list:

  • 1. Among registrants with acute liver failure, what fraction gets a transplant, what fraction dies and what fraction recovers?
  • 2. Among chronic failure registrants, what is the rate of transplantation per month during the time that their MELD score has a particular value? What is the competing risk that the registrant dies during the same time?

Answering such questions allows for the evaluation and comparison of access to liver transplantation for both policy development and registrant counseling. Similarly, for each organ that is allocated on the basis of medical condition, it is useful to report the measures of transplantation rates separately for different categories of medical conditions, also allowing for and reflecting the possibility of moving amongst severity levels. Analogous methods can be used for registrants for other organ transplants, such as heart, if allocation rules are changed from a waiting-time basis to include death rates on the waiting list as a criterion.

The use of MELD to allocate livers among chronic liver failure registrants began in February 2002, along with rules for exceptions for registrants with other specific diseases, such as liver cancer (5). The SRTR reports relevant summary statistics and tables to summarize rates of liver transplantation according to the status and MELD in the CSRs.

The various methods described above are all useful for describing waiting times for transplantation and each is appropriate for specific purposes. The choice of method depends on the specific question or the purpose of the question.

Unadjusted analysis of patient survival and graft failure:  Unadjusted (crude) methods, such as the ‘actuarial method’, use death rates to compute the corresponding conditional survival probabilities for successive time intervals. These interval-specific conditional survival probabilities (i.e. the probability of surviving until the end of the interval, given that the patient was alive at the beginning of the interval) are multiplied to yield the cumulative survival probability for various time points (e.g. 3-year survival). Depending on the question posed, these actuarial results are reported as either the fraction that died, the fraction still surviving, or the expected years of life through the end of the last interval.
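The actuarial computation reduces to a short loop. A minimal sketch with invented counts follows; the mid-interval handling of censored patients is one common actuarial convention, not necessarily the exact variant used by the SRTR:

```python
# A minimal life-table (actuarial) sketch: interval-specific conditional
# survival probabilities multiplied into a cumulative survival curve.
# Counts are invented for illustration.
intervals = [  # (label, deaths, at risk at start, censored during interval)
    ("0-1 yr", 8, 100, 4),
    ("1-2 yr", 5, 88, 6),
    ("2-3 yr", 3, 77, 10),
]

cum_survival = 1.0
for label, deaths, n_start, censored in intervals:
    n_eff = n_start - censored / 2  # actuarial convention: censor mid-interval
    cond_surv = 1 - deaths / n_eff  # P(survive interval | alive at its start)
    cum_survival *= cond_surv
    print(f"{label}: conditional {cond_surv:.3f}, cumulative {cum_survival:.3f}")
```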

Unadjusted posttransplant graft and patient survival outcomes are reported as cumulative ‘success’ rates. These are calculated by Kaplan-Meier survival curves when the analyses are based on data from a single cohort, and they are shown at various time points after transplant. Results from different cohorts are sometimes shown side by side, as in the Adjusted and Unadjusted Graft and Patient Survival tables in the 2005 Annual Report. However, since these results are from different groups of patients, the results computed across different time periods need not be consistent. For example, the 5-year survival for the 10-year cohort is not reported and should not be assumed to be the same as the 5-year survival that is reported for the 5-year cohort.

Mortality:  Generally speaking, wait-listed registrants are not tracked by their former listing centers for mortality after removal from the waiting list. That is, mortality ascertainment stops when a patient is LTFU. Because of the incomplete follow-up available in the data, the actuarial methods described above must censor patients when they are LTFU. If the failure rates after LTFU are the same as the failure rates among those still being followed, then the actuarial method estimates are appropriate, even though some observations were censored. However, if recipients at high risk for eventual failure are disproportionately LTFU before they fail, then the estimated failure rates will underestimate the overall failure rates. When many subjects are LTFU, it is important to know if they were at high or low risk for subsequent unobserved events, compared with patients under observation.

OPTN death ascertainment, along with extra ascertainment from the SSDMF and the ESRD database, was used to compute death rates on the waiting list, as reported in each organ-specific section of the 2005 Annual Report. Such follow-up stops when a candidate is removed from the waiting list, because organ allocation is not affected by events after removal. The death rate per patient-year at risk includes events and time only while on the waiting list and is not affected by events after removal. However, the resulting death outcomes are difficult to interpret because registrants are often removed from the list if their health deteriorates to the extent that they are no longer suitable for a transplant. Thus, low death rates on a waiting list are likely to reflect an effective screening process, which systematically removes (or transplants) patients when their health deteriorates. Rates based on patients not removed from the waiting list do not apply to registrants in general, but to patients currently on (i.e. not removed from) the waiting list.

For the CSRs, mortality rates on the waiting list include extra ascertainment for death after removal from the waiting list or, in some cases, before removal. For these analyses, time at risk begins at the start of the observation period or the date of first wait-listing (whichever is later) and continues until the date of death, transplant, 60 days after removal for recovery, transfer to another center, or the end of the observation period (whichever comes earliest).
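This time-at-risk rule translates directly into code. Below is a sketch with invented dates and an assumed observation window; it is an illustration of the rule as stated, not the SRTR's implementation:

```python
# A sketch of the time-at-risk rule quoted above, with invented dates.
from datetime import date, timedelta

OBS_START, OBS_END = date(2004, 1, 1), date(2004, 12, 31)

def time_at_risk(listed, death=None, transplant=None, removed_recovered=None):
    """Days at risk: from the later of listing/observation start to the
    earliest of death, transplant, 60 days after removal for recovery,
    or the end of the observation period."""
    start = max(listed, OBS_START)
    stops = [OBS_END]
    if death: stops.append(death)
    if transplant: stops.append(transplant)
    if removed_recovered: stops.append(removed_recovered + timedelta(days=60))
    return max((min(stops) - start).days, 0)

days = time_at_risk(listed=date(2003, 6, 1), death=date(2004, 8, 15))
deaths, years = 1, days / 365.25
print(f"{deaths / years:.2f} deaths per patient-year at risk")
```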

To compute expected lifetimes on the waiting list, the SRTR uses information on deaths from other data sources, such as the SSDMF. This is especially important when comparing pretransplant mortality (which includes time after removal from the waiting list) to posttransplant mortality.

Graft failure:  The analysis of graft failure is complicated by the potential for recipients to die. Death serves as a competing risk in the sense that the time of graft failure cannot be observed among patients who die with a functioning graft (6). Death-censored graft failure estimates the ‘cause-specific’ rate of graft failure; i.e. the rate of graft failure among patients who have not yet died. This is an interpretable measure that is frequently used. However, cause-specific rates, such as those estimated in an analysis of death-censored graft failure, can only be combined to produce a meaningful survival curve if the competing risks are independent, an untenable assumption in the context of death and graft failure.

Frequently in analyses of graft failure, the end-point is defined as the minimum of the time until death and the time until graft failure. This results in a well-defined lifetime (i.e. survival with a functioning graft). If only graft failure were specifically of interest, one could argue that the graft is, by definition, truly no longer functioning after the patient dies. In the regression setting, the trade-off for a cleanly defined end-point is the interpretation of the covariate effects. For example, if a patient characteristic significantly increases the rate of graft failure, but not the rate of death, an analysis which combines graft failure and death may identify the covariate as being nonsignificant. In order to understand the mechanisms that lead to transplant failure, it is sometimes useful to count only failures of the transplanted organ itself, while not counting deaths from other causes. In addition to the issue of graft failure and death being competing risks, there is also the issue of determining exactly which events constitute graft failure. For example, when a graft failure is not explicitly recorded in the database, but a retransplant is recorded, the date of retransplantation can be used as the date of graft failure. In addition, for kidney transplant recipients, a reported return to dialysis can be counted as an organ failure.
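The two end-point codings discussed here, death-censored graft failure and the composite of graft failure or death, can be derived from the same record, with a retransplant or return-to-dialysis date backfilling a missing explicit failure date. A sketch with invented field names:

```python
# A sketch of two common end-point codings for graft failure, per the text:
# (i) death-censored graft failure and (ii) graft failure or death,
# whichever comes first. Records and field names are invented; a missing
# explicit failure date is backfilled from retransplant or dialysis return.
def graft_endpoints(rec):
    fail = (rec.get("graft_failure") or rec.get("retransplant")
            or rec.get("dialysis_return"))
    death = rec.get("death")
    last = rec["last_followup"]
    # (i) death-censored: death is treated as censoring
    dc = (fail, 1) if fail else ((death or last), 0)
    # (ii) composite: earliest of graft failure and death is the event
    times = [t for t in (fail, death) if t is not None]
    comp = (min(times), 1) if times else (last, 0)
    return dc, comp

rec = {"retransplant": 24.0, "death": None, "last_followup": 30.0}
print(graft_endpoints(rec))  # ((24.0, 1), (24.0, 1))
```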

Covariate-adjusted analyses

Analyses with covariate adjustment, such as regression modeling, are intended to compare patient subgroups with ‘all other factors being equal’. Many of the analyses performed by the SRTR involve comparisons of outcomes. For example, for tables comparing adjusted 1-year survival over 10 years of transplantation, adjustment helps ensure that differences from year to year are not due to changes in case mix. Also, the CSRs use covariate adjustment to compare center-specific mortality rates with what would be expected for a given case mix, allowing the reader to discern how much of a good result, for example, is due to patient case mix. The process of covariate risk-adjustment, known as ‘indirect standardization’, is detailed in ‘SRTR Center-Specific Reporting Tools: Posttransplant Outcomes’, an accompanying article in this report (7).

The SRTR often uses an adjustment method based on regression models, to compare the outcomes that would have resulted had the comparison groups been otherwise equivalent. Regression models can be used to compute expected outcomes given a patient's characteristics. The Cox proportional hazards regression model is commonly used for adjusted analyses of time-to-event data (8). Similar to the Kaplan-Meier estimates described above, the Cox regression model can yield the survival curve estimates for two or more groups of patients, adjusted to show the comparison that would result if the groups were equivalent with regard to particular factors, such as age and diagnosis.
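A minimal sketch of such an adjusted comparison, fit with the lifelines library on a small fabricated data set, follows. The hazard ratio for the group indicator is the comparison 'with all other factors being equal' with respect to the covariates in the model:

```python
# A sketch of covariate-adjusted group comparison with a Cox model,
# using lifelines on a fabricated data set.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 15, 7, 11, 2, 14],
    "event": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "group": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
    "age":   [61, 45, 52, 70, 38, 49, 66, 41, 73, 55],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)  # adjusted HR for group, holding age fixed
```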

Adjusted analyses are used extensively by the SRTR in the CSRs and in analyses based on data requests from committees. The choice of what to adjust for, or what to make equal in the comparison groups, is an important one that is under constant review by the SRTR and will differ according to the specific purpose of the analysis. For example, in a comparison involving patient characteristics (e.g. mortality rates by ethnicity), it would be prudent to adjust for variables reflecting therapeutic regimen, if available. However, in an analysis comparing center-specific transplant mortality rates, therapeutic regimen reflects a center's practices. To adjust for such factors amounts to adjusting away the difference that, if present, one wishes to discern. To make meaningful adjustments, relevant data must be available, complete, and accurate. The choice of factors used when adjusting center-specific outcomes for the mix of characteristics at each center involves OPTN committees and SRTR analysts. The documentation of CSRs (available at http://www.ustransplant.org/programs-report.html) includes detailed descriptions of the adjustment models they use.

Naturally, covariate adjustment is generally limited to patient characteristics for which data are collected and, with respect to the SRTR, limited comorbidity data are available. The extent to which the lack of comorbidity data biases the results of a regression analysis is an open question. For example, suppose that body mass index (BMI) is the covariate of interest in a kidney posttransplant model, with cardiovascular disease (CVD) being the potential confounder. The BMI regression coefficient, based on a model which does not contain a CVD covariate, would result in a biased estimate of the BMI effect only if CVD is both predictive of mortality and correlated with BMI after adjusting for all covariates which are included in the model. That CVD is a mortality risk factor alone would not mean that the BMI coefficient is biased if CVD were not included in the model. Although it is quite possible that CVD is correlated with BMI, the pair-wise correlation is of no relevance to the issue of bias; the pertinent correlation is that between BMI and CVD after adjusting for all other model covariates, which would be substantially less than the crude pair-wise correlation. In the assessment of potential residual confounding, it is often useful to compare the crude and covariate-adjusted analyses. For example, it would be encouraging if the unadjusted and covariate-adjusted hazard ratios for BMI were similar. That is, if there is little difference between the results which are unadjusted and the results which are adjusted for all available covariates, the hypothesis of residual confounding becomes much less convincing. Nonetheless, the potential for residual confounding is frequently a consideration in SRTR analyses, mostly because it is impossible to verify its absence.
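This check can be illustrated by fitting the model with and without the candidate confounder and comparing the BMI coefficients. In the simulation below, CVD affects mortality but is generated independently of BMI, so the crude and adjusted estimates should agree; the data and coefficients are fabricated purely to demonstrate the comparison:

```python
# A sketch of the confounding check described above: compare the BMI hazard
# coefficient with and without adjustment for CVD. Data are simulated; the
# point is the comparison, not the numbers.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
bmi = rng.normal(27, 4, n)
cvd = rng.binomial(1, 0.3, n)                 # independent of BMI here
hazard = np.exp(0.03 * (bmi - 27) + 0.5 * cvd)
time = rng.exponential(1 / hazard)
df = pd.DataFrame({"time": time, "event": 1, "bmi": bmi, "cvd": cvd})

crude = CoxPHFitter().fit(df[["time", "event", "bmi"]], "time", "event")
adjusted = CoxPHFitter().fit(df, "time", "event")
# Similar BMI coefficients in both fits argue against confounding by CVD:
print(crude.params_["bmi"], adjusted.params_["bmi"])
```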

Further comments on regression modeling of time-to-event data:  Since its development in the 1970s, the Cox regression model has become the predominant method of analyzing survival data (8). The popularity of the Cox regression model is well founded. The model is semiparametric, in the sense that covariates are assumed to act multiplicatively on the baseline event rate (parametric), but that no functional form is assumed for that baseline event rate (nonparametric). The key advantage of the Cox model is that no specific survival model is assumed; that is, the relationship between the covariates and mortality is specified, such that covariate-adjusted mortality between subgroups can be compared. However, baseline mortality itself is not specified by the Cox model. In contrast, fully parametric models are valid only if the model specified truly fits the data. For example, hazard ratios based on a Weibull model will be biased if the Weibull model does not actually hold. Therefore, if covariate effects are of primary interest in a survival analysis, the Cox model is the method of choice for the SRTR and for biostatisticians in general. If survival predictions are of interest, a parametric model is simpler to apply. However, the predictions will be accurate to the extent the chosen model holds. Since the baseline hazard can be estimated nonparametrically under the Cox model, it still may be preferred even if prediction is the goal, in the interests of accuracy.

Despite its utility and flexibility, limitations exist with respect to regression models used for survival analysis, including the Cox model. For example, residual plots are generally difficult to interpret and the identification of patterns is a subjective matter. The more sophisticated methods recently developed are computationally intensive, to the point of not being feasible for data sets as large as those typically analyzed by the SRTR. In addition, global measures of fit are not available through any standard software packages and would be time-consuming and computationally demanding. Clearly, further development is needed with respect to regression diagnostics for survival models. Should the Cox model be found to provide inadequate fit to the data, alternative models include the additive hazard models of Lin and Ying (9) and Aalen (10).

The need for simulation models in addressing organ allocation issues:  Thus far, the survival models discussed in this article have dealt with a single end-point, be it mortality, graft failure or some other outcome. In such cases, a single model equation describes the relationship between patient characteristics and, for example, patient survival. Questions such as ‘How quickly does the mortality rate increase with increasing age?’ or ‘How much higher is the death rate for diabetics relative to patients without diabetes?’ can be addressed directly through a single regression model. However, many questions of interest from an organ allocation perspective are not nearly as straightforward to address, such as ‘What would be the difference in the number of deaths per year if a minimum MELD score were required for liver transplantation?’ No single model equation could address this question, since it is affected by many input systems, including organ donation, acceptance of offered organs, patient waiting list and posttransplant survival, degree of organ sharing (e.g. regional, national), and rates of new listings. No single model could accurately describe the entity of interest; in fact, separate models would be required for each of the aforementioned systems. Therefore, rather than attempting to capture the interplay between these systems in a single analytic model, it is easier to simulate patient experience under various conditions (e.g. a change in allocation rules). We now describe the family of simulation models developed by the SRTR primarily to quantify the potential impact of changes in organ allocation policies.

Simulated Allocation Modeling


The simulated allocation models (SAMs) developed by the SRTR are designed to simulate organ allocation and the resultant patient outcomes in the United States. These models, whose value has been recognized by several OPTN committees and by the SRTR Scientific Advisory Committee (SAC), provide a method to compare relative outcomes under alternative allocation policies prior to their implementation.

HRSA has developed a checklist of steps in the development of analyses in support of allocation policy, to ensure that proposed allocation policies can credibly be expected to satisfy the requirements of the ‘OPTN Final Rule’. The SAMs were developed to satisfy one of the important requirements specified by both the checklist and the Final Rule: testing the consequences of proposed allocation policies prior to implementation, using simulation modeling.

Prior to implementing a proposed allocation policy, the OPTN develops performance indicators to assess whether the goals of that policy are achieved, e.g. equity and increased access for patients with greater medical urgency. The SRTR can use the SAMs to evaluate allocation policies against any performance indicators developed by the OPTN that are based on, or linkable to, OPTN data. Thus far, the SAMs have been used to evaluate OPTN proposed allocation policies based on waiting list deaths, total deaths (overall, by zone and by urgency status), transplant equity (according to race, blood type, sensitization and age groups), transplant rates and graft failures.

SAMs incorporate both deterministic and random factors. If the input data are fixed, then the initial waiting list, waiting list arrivals, status changes, organ arrivals and rules of organ allocation are all deterministic. The match run itself is determined entirely by the allocation rule specification selected by the user, the organ offered, and the patients remaining on the waiting list who are available for that organ.

After the match run has determined the order in which an organ will be offered to candidates, the remaining events are determined randomly through various probability functions that depend on candidate and organ characteristics. These events include organ placement with each successive candidate in the match run, time from transplant to death, relisting and subsequent relisting history. Organ placement is modeled using logistic regression, with adjustments for relevant candidate demographics, clinical factors, organ factors and factors involving the particular organ and candidate combination (e.g. HLA match, distance). Posttransplant mortality is predicted using Cox regression models for time from transplant to death, with adjustments for organ, recipient and organ/recipient factors. Figure 5 shows the time order in which events are processed in SAMs (11).

Figure 5. SAM event-sequenced modeling processes events in time order.
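The placement step just described can be made concrete with a toy example: a logistic model converts a linear predictor into an acceptance probability for each offer. The coefficients and covariates below are invented for illustration and are not SRTR estimates.

```python
import math

# Hypothetical illustration of a logistic placement model; the intercept,
# coefficients and covariates are invented, not fitted SRTR values.
def acceptance_probability(hla_mismatches: int, distance_km: float,
                           candidate_age: int) -> float:
    """Probability that the offered organ is accepted by this candidate."""
    linear_predictor = (0.8                    # intercept (hypothetical)
                        - 0.30 * hla_mismatches
                        - 0.002 * distance_km
                        - 0.01 * candidate_age)
    return 1.0 / (1.0 + math.exp(-linear_predictor))

# A well-matched nearby offer vs. a poorly matched distant one
print(acceptance_probability(0, 50, 45))    # relatively high probability
print(acceptance_probability(5, 800, 45))   # relatively low probability
```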

Each of the SAM computer programs handles events in a time-ordered sequence. Event queues are maintained for organ arrivals, wait-listed candidate status changes (including removal and death events), candidate arrivals to the waiting list, status changes for relisted candidates and posttransplant death events. Each organ arrival event triggers the organ allocation engine that orders the waiting list according to the allocation rules specified. The placement model is then used to determine the probability of organ placement with the first candidate. A random number is compared against that probability, with the result determining acceptance or rejection of the organ. This is repeated until either the organ is placed or the list of candidates is exhausted. Once an organ placement is made, the candidate is removed from the waiting list. The posttransplant engine then schedules the posttransplant death event, with possible relisting prior to death. In the case of relisting, the posttransplant engine also schedules the time to relisting and candidate status changes while on the waiting list. Candidate arrival events place candidates on the waiting list and initialize the descriptions of the candidates such as medical status, listing center and ABO type. Wait-listed candidate status change events change the medical status of a candidate on the waiting list. Any serial data available from the OPTN can be updated over time for candidates on the waiting list. These data may then be used in allocation rules, placement models and/or posttransplant survival models. In addition, waiting list removal and waiting list death events are triggered through the status change event queue.
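The event-sequenced logic just described can be summarized in code. The following is a deliberately simplified Python sketch of the loop; the allocation rule, placement probability and posttransplant survival draw are placeholders standing in for the fitted models the actual SAMs use.

```python
# A highly simplified sketch of the SAM event loop described above.
# Event types mirror the text; all models are placeholders.
import heapq
import itertools
import random

events = []                  # priority queue ordered by event time
tie = itertools.count()      # tie-breaker so equal times never compare payloads

def schedule(time, kind, payload):
    heapq.heappush(events, (time, next(tie), kind, payload))

waiting_list = []

def match_run(organ):
    """Order the waiting list per the allocation rules (placeholder sort)."""
    return sorted(waiting_list, key=lambda c: -c["priority"])

def placement_probability(organ, candidate):
    return 0.3               # placeholder for the fitted logistic placement model

# Seed the queues with a few illustrative events
schedule(0.0, "candidate_arrival", {"priority": 10})
schedule(1.0, "candidate_arrival", {"priority": 25})
schedule(2.0, "organ_arrival", {"organ_type": "liver"})

while events:
    time, _, kind, payload = heapq.heappop(events)
    if kind == "candidate_arrival":
        waiting_list.append(payload)
    elif kind == "status_change":
        payload["candidate"]["priority"] = payload["new_priority"]
    elif kind == "organ_arrival":
        # Offer the organ down the match run; a random draw against the
        # placement probability decides acceptance, as described in the text.
        for candidate in match_run(payload):
            if random.random() < placement_probability(payload, candidate):
                waiting_list.remove(candidate)
                # Posttransplant engine: schedule death (or relisting) here
                schedule(time + random.expovariate(1 / 3650), "death", candidate)
                break
    elif kind == "death":
        pass                 # record a posttransplant death outcome
```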

The entire family of organ-specific simulation models has been developed by the SRTR with input from the OPTN committees. These include the liver simulated allocation model (LSAM), the thoracic simulated allocation model (TSAM), and the kidney-pancreas simulated allocation model (KPSAM).

Each of these organ-specific SAMs has separate organ-specific components for inputs (candidate information, waiting list histories and donor organ information), allocation rule specifications, placement models and posttransplant events. SAMs are designed to compare the differences in outcomes expected between allocation policies if they were nationally enacted and all other behavior remained the same. Exact replication of actual outcomes for a given year is not a specific goal, owing to the effects of physician judgment and local variations in the implementation of national allocation policy. However, validation tests comparing the results of these models with the actual results of particular calendar years have shown excellent agreement on the outcomes most relevant to the comparison of allocation rules. While certain proposed allocation systems require specific comparisons, the SRTR typically compares numbers of transplants, organ discards and patient deaths when examining sets of proposed allocation systems against current rules.

SAMs have been used in support of OPTN committees charged with developing national allocation policies, assessing the effects of over 70 proposed policy changes prior to implementation. For instance, TSAM was used to evaluate the effect of implementing the new lung allocation policy, based on waiting list urgency and transplant benefit, compared with the previous system, which was based on waiting time. TSAM results indicated that lung patient deaths would decrease and overall patient life-years would increase under the proposed system. LSAM was used to evaluate the effects of a new allocation system involving regional sharing of livers for MELD and PELD scores above 15, of changing the score calculated for adolescents aged 12–17 from PELD to MELD, and of requiring that all pediatric donor livers be shared regionally with children aged 0–11 years. LSAM results indicated that, by expanding the pool of donor organs available to candidates with higher MELD scores, these policy changes would reduce the number of deaths on the waiting list. KPSAM was used to test the effects of awarding additional points for zero HLA-DR mismatches to pediatric candidates for kidneys from donors younger than 35 years; the results indicated a sharp increase in pediatric transplantation rates under the proposed allocation system.

SAMs can use as inputs either actual historical data (and model parameters) or, through the data generator, data files built by resampling actual data according to user-specified over- or under-sampling rules. The generated data can be used to model the simultaneous effects of a hypothesized behavioral change together with proposed rule changes. For example, in modeling a rule change that prioritizes a certain group of patients, the generator may be adjusted to reflect a possible increase in the number of patients wait-listed in that group; it might also be adjusted to raise the number of expanded criteria donors, consistent with an anticipated OPO focus in that direction.
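The resampling idea behind the data generator can be sketched briefly; the records and the over-sampling weight below are invented for illustration.

```python
# Hypothetical sketch of the data-generator idea: resample historical
# records with user-specified over-/under-sampling weights. The records
# and the 1.5x up-weighting of high-MELD candidates are invented.
import random

historical = [
    {"id": 1, "meld": 12}, {"id": 2, "meld": 31},
    {"id": 3, "meld": 22}, {"id": 4, "meld": 35},
]

def generate(records, n, weight_fn):
    weights = [weight_fn(r) for r in records]
    return random.choices(records, weights=weights, k=n)

# Oversample high-MELD candidates to model an anticipated case-mix shift
simulated_input = generate(historical, n=8,
                           weight_fn=lambda r: 1.5 if r["meld"] >= 25 else 1.0)
```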

In summary, SAMs can be used to analyze allocation effects in several ways: comparing outcomes with different allocation rules; generating realistic numbers of organ transplants and organ discards from the available pool of donor organs; approximating geographic distributions, organ type and status at transplant when current allocation rules are used; and enabling differential placement of organs with varying characteristics and compatibility (e.g. size and blood type). Results from the SAMs have been used by several OPTN committees in predicting the likely effects of changes in allocation rules before considering such rule changes for national policy.

Conclusion


In previous editions of the SRTR Report on the State of Transplantation, this article focused on data collection and organization schemes for transplant data and offered initial insights into the implications of their timing and completeness; a separate article covered analytical approaches to using these data. This year the two have been combined: the first part focuses on caveats related to cohort choice, the timing and timeliness of data submission, and potential biases in follow-up data, while the second part addresses using this knowledge to apply research methodologies properly and consistently. The methodologies described here are applied by the SRTR and are tailored to address specific questions. Statistical adjustments that make ‘all else equal’ for comparisons of variables of interest usually require clinical input and thoughtful consideration, and confounding and potential biases must always be evaluated. Simulated allocation modeling is particularly valuable when considering modifications to national policies.

Acknowledgment


The Scientific Registry of Transplant Recipients (SRTR) is funded by contract number 231-00-0116 from the Health Resources and Services Administration (HRSA), U.S. Department of Health and Human Services. The views expressed herein are those of the authors and not necessarily those of the U.S. Government. This is a U.S. Government-sponsored work. There are no restrictions on its use.

This study was approved by HRSA's SRTR project officer. HRSA has determined that this study satisfies the criteria for the IRB exemption described in the “Public Benefit and Service Program” provisions of 45 CFR 46.101(b)(5) and HRSA Circular 03.

References
