The impact of HLA genotyping on survival following unrelated donor haematopoietic stem cell transplantation
Professor Alejandro Madrigal, The Anthony Nolan Research Institute, Royal Free Hospital, Pond Street, Hampstead, London, NW3 2QG, UK. E-mail: email@example.com
One of the major factors that has contributed to improving the outcomes of stem cell transplantation is the progress made in the field of human leucocyte antigens (HLA). This is evident not only in the development of techniques for rapid and accurate tissue typing, but also in the greatly improved understanding of the HLA system and of the impact of HLA matching on transplant complications. It is now accepted that high-resolution HLA matching between transplant recipients and unrelated donors is associated with the best clinical outcomes. The most important HLA determinants are the six ‘classical’ polymorphic HLA loci: HLA-A, -B, -C, -DRB1, -DQB1 and -DPB1. For several years, based on the outcome of numerous studies, a 10/10 matched donor (HLA-A, -B, -C, -DRB1, -DQB1) was considered the ideal. The impact of HLA-DPB1 has been less clear, in view of the reduced likelihood of patient/donor matching at this locus. More recently, several large studies have questioned the importance of HLA-DQB1 matching for outcome. Based on the findings of recent studies, the current gold standard unrelated donor is believed to be one matched for 8/8 alleles at high resolution, i.e. matched for HLA-A, -B, -C and -DRB1; however, in certain circumstances, mismatches may be tolerated and/or permissive.
Human leucocyte antigen (HLA) typing has benefited from the many technological breakthroughs that have arisen during the last two decades, mainly due to advancements in the miniaturization and automation of laboratory processes. In addition, the field has also been favoured by the introduction of highly sensitive instruments and technologies that enable researchers to carry out a great number of assays on an even greater number of samples. Finally, the astonishing developments in computer science have likewise stimulated and, in many cases, driven the technological possibilities even further by allowing for high throughput, user-friendly analysis of data.
However, certain technical difficulties, such as HLA typing ambiguities, have remained unresolved, mainly due to the nature and origin of HLA polymorphism (Parham et al, 1995). It has been widely known for several years that HLA diversity arises mainly from the exchange of short nucleotide sequence segments between alleles through recombination. This process dictates that most novel HLA alleles arise not from spontaneous or novel point mutations but from the shuffling of extant DNA sequences. As a result of this reshuffling of persistent sequences, some alleles of a locus (or, in some cases, even alleles belonging to different loci) will share certain motifs, albeit in different combinations with other motifs.
Currently available molecular HLA typing methods can be classified into two main groups: those that rely on the identification of polymorphic motifs in partial or complete nucleotide sequences and those that rely on conformational analysis of DNA fragments (Arguello & Madrigal, 1999). Although both groups require an initial polymerase chain reaction amplification step, they make use of different strategies to characterise the type of polymorphisms present in the amplified fragment.
Methods geared towards identifying specific polymorphisms or nucleotide motifs include both polymerase chain reaction using sequence-specific primers (PCR-SSP) or sequence-specific oligonucleotide probes (PCR-SSOP) (Cao et al, 1999) (and their derivatives, such as dot blot, reverse dot blot, microarrays, real time PCR, etc.) and Sequence Based Typing (SBT) (Smith et al, 2007). Both PCR-SSP and PCR-SSOP rely on the use of oligonucleotide primers or probes to amplify and/or detect specific, previously known polymorphic sequence motifs present within the amplified HLA allele fragment. In such approaches the presence of an amplification product denotes the presence of the polymorphism of interest, whereas the absence of the amplicon is evidence of its absence. A major disadvantage, particularly important in the context of transplantation and histocompatibility, is that such methods rely on the screening of only a limited number of previously known polymorphisms. The resulting reaction patterns are then compared to a database of known HLA alleles in order to interpret which allele combinations they indicate. When a novel allele is present in one of the samples, however, this may lead to mistyping, whether the allele possesses a different, as yet unknown, polymorphism or a different arrangement of known polymorphisms. SBT, on the other hand, makes use of generic oligonucleotide primers directed towards conserved regions of a locus to amplify the polymorphic exons of all alleles. In HLA class I alleles the polymorphisms are mainly concentrated in exons 2 and 3, while in HLA class II alleles most are concentrated in exon 2. As most individuals are heterozygous at these loci, assignment of the HLA type depends heavily on bioinformatics tools capable of resolving heterozygous positions based, again, on a database of known arrangements (known alleles).
Although SBT is capable of detecting previously unknown HLA alleles, it is not entirely capable of resolving novel arrangements of known polymorphisms, a limitation known as ambiguity (as both alleles are sequenced at the same time). Therefore, both PCR-SSOP/PCR-SSP and SBT can encounter difficulties in determining whether one sequence motif is in cis or trans orientation relative to another. This problem can, however, be overcome by separating the alleles by group-specific or allele-specific PCR, by cloning, or by the use of conformational techniques. Conformational methods, such as Reference Strand-mediated Conformational Analysis (RSCA), can also be used to assess histocompatibility and to perform HLA typing (Arguello et al, 1998). RSCA has been shown to achieve high-resolution results without the ambiguities seen with the previously mentioned methods.
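The cis/trans problem described above can be made concrete with a small sketch. The sequences below are invented for illustration and are not real HLA alleles: two different allele pairs carrying the same motifs in opposite phase produce identical mixed base calls when both alleles are sequenced together.

```python
# Illustrative sketch of cis/trans ambiguity in heterozygous sequencing.
# Alleles and motifs are hypothetical, not real HLA sequences.

def combined_base_calls(allele1, allele2):
    """Simulate SBT of a heterozygous sample: at each position, report the
    set of bases observed, as both alleles are sequenced at the same time."""
    return [frozenset({a, b}) for a, b in zip(allele1, allele2)]

# Two hypothetical alleles differing at two polymorphic positions (0 and 4):
pair_a = ("ACGTA", "GCGTC")   # motifs A...A and G...C in cis
pair_b = ("ACGTC", "GCGTA")   # the same motifs in trans

# Both pairs give identical mixed base calls, so SBT alone cannot
# distinguish them -- this is the ambiguity described above.
print(combined_base_calls(pair_a[0], pair_a[1]) ==
      combined_base_calls(pair_b[0], pair_b[1]))  # True
```

Group- or allele-specific amplification resolves the ambiguity precisely because it sequences each allele separately, recovering the phase information that is lost in the combined read.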
In all, each HLA typing method conveys certain advantages and runs into different limitations. The choice of method therefore needs to keep the intended application in mind, balancing the level of resolution required against cost, human intervention and speed of typing. One aspect that has been difficult to establish is the selection and recommendation of the best HLA typing method and matching strategy for unrelated donor (UD) transplantation. It is difficult to achieve such a consensus in a world where significant differences exist in the genetic background of different populations, the clinical and therapeutic criteria applied, and the availability of HLA typing technologies.
The impact of HLA matching on survival
In 2004 the National Marrow Donor Program (NMDP) published results on the outcome of 1874 UD transplants (Flomenberg et al, 2004). Individual mismatches for HLA-DQB1 showed no impact on survival and therefore this locus was excluded from the analysis of a fully matched versus partially matched donor. This study showed a highly significant survival advantage for 8/8 matched pairs compared to those with one (relative risk [RR] = 1·32) or two (RR = 1·53) mismatches (P = 0·0003). Later, Lee et al (2007) reported on an extended group of 3857 UD transplants facilitated by the NMDP. The transplants took place between 1988 and 2003 and all used myeloablative conditioning regimens. The majority received bone marrow as the stem cell source (94%) and 78% of the grafts were T-cell replete. All pairs were typed at high-resolution for HLA-A, -B, -C, -DRB1, -DQA1, -DQB1, -DPA1 and DPB1. As previously, a single HLA-DQ mismatch was not associated with any survival disadvantage. An HLA-DP mismatch was not associated with a worse survival (in 8/8 matched pairs).
When considering matching for HLA-A, -B, -C and -DRB1, any single mismatch (7/8) resulted in a worse overall survival (OS) compared to an 8/8 matched donor (43% vs. 52% 1-year survival, P < 0·001). The survival for a 6/8 matched recipient was significantly worse than for one with a 7/8 matched donor (33% vs. 43% 1-year survival, P < 0·001). This study found that the impact on OS of an HLA-A or -DRB1 mismatch (RR 1·36 and 1·48, respectively) was more marked than that of a mismatch at HLA-B or -C (RR 1·16 and 1·19, respectively). There were no significant differences in survival dependent on whether the mismatch was allelic or antigenic, except at HLA-C, where an antigenic mismatch increased transplant risks while an allelic mismatch did not.
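The match grades quoted throughout these studies (8/8, 7/8, 6/8) are simply counts of matched alleles across the loci considered. A minimal sketch, assuming high-resolution typing is available as two alleles per locus (the locus set and allele names below are illustrative, not real typings):

```python
from collections import Counter

# The 8/8 loci; adding "DQB1" gives a 10/10 grade. Illustrative sketch only.
LOCI = ("A", "B", "C", "DRB1")

def match_grade(patient, donor, loci=LOCI):
    """Count matched alleles out of 2 per locus, using multiset
    intersection so that homozygosity and order are handled correctly."""
    matched = sum(
        sum((Counter(patient[locus]) & Counter(donor[locus])).values())
        for locus in loci
    )
    return matched, 2 * len(loci)

# Hypothetical high-resolution typings (allele names are made up):
patient = {"A": ["01:01", "02:01"], "B": ["07:02", "08:01"],
           "C": ["07:01", "07:02"], "DRB1": ["03:01", "15:01"]}
donor   = {"A": ["01:01", "02:01"], "B": ["07:02", "08:01"],
           "C": ["07:01", "07:04"], "DRB1": ["03:01", "15:01"]}

print(match_grade(patient, donor))  # (7, 8): a single HLA-C allele mismatch
```

A multiset intersection is used rather than positional comparison because typing reports do not preserve chromosomal phase, so a shared allele should count as a match regardless of the order in which the two alleles are listed.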
A recent study from the Center for International Blood and Marrow Transplant Research (CIBMTR) examined clinical outcomes in recipients of both sibling and UD transplants for chronic myeloid leukaemia (CML) in first chronic phase (CP1) (Arora et al, 2009). There were 1052 recipients of UD transplants; 531 were matched for 8/8 alleles, 252 mismatched for one (7/8) allele and 269 mismatched for multiple alleles. The OS at 5 years was 55% for 8/8 matched transplant recipients, 40% for those with a 7/8 matched graft and 21–34% for those with various multiple mismatch combinations. Although there was no independent influence on survival of a single HLA-DQB1 mismatch, survival was significantly worse if an HLA-DQB1 mismatch was present in addition to a Class I mismatch. This has been reported in at least one other study (Petersdorf et al, 2004), and in Lee et al (2007) the addition of an HLA-DQ mismatch to a 7/8 or 6/8 matched pair was associated with a small, though statistically insignificant, survival disadvantage.
The International Histocompatibility Working Group (IHWG) in Hematopoietic Cell Transplantation (an international collaborative group) has reported outcomes in 4796 UD transplant recipients receiving myeloablative conditioning regimens (Petersdorf et al, 2007). Considering mismatches for HLA-A, -B, -C, -DRB1 and -DQB1 (10/10), there was a significant survival detriment to having an HLA mismatch, with the hazard ratio (HR) of mortality (adjusted for disease stage, age and ethnicity) conferred by a single HLA mismatch (9/10) being 1·20 (95% confidence interval [CI]: 1·12–1·30; P < 0·0001). Single mismatches for HLA-A, -B and -C were significantly associated with a worse OS. In contrast, mismatches for a single HLA-DRB1 or -DQB1 allele did not confer a significant survival detriment.
Data reported by the Japanese Marrow Donor Program (JMDP) on 1298 recipients of predominantly myeloablative, T-cell replete transplants performed between 1993 and 1998 (Morishima et al, 2002) found the worst 3-year survival to be in pairs mismatched for either HLA-A or -B (39·9%, compared to 65·4% in 10/10 matched pairs), in both standard and high risk disease. HLA-DQB1 mismatches were not associated with a worse outcome. While HLA-C and -DRB1 mismatches were both associated with significant increases in acute graft-versus-host disease (aGvHD), neither impacted significantly on OS. Any combination of two or more mismatches resulted in a worse OS.
A later study by the same group (Kawase et al, 2007) analysed the outcomes in 1790 recipients of UD transplantation for leukaemia in a T-cell replete, myeloablative setting. Mismatches for HLA-A, -B and -DQB1 were found to be independent risk factors for mortality (HR 1·36, 1·40 and 1·28, respectively). There was a trend towards worse mortality in HLA-C mismatched recipients (HR 1·17), but no impact of HLA-DRB1 (HR 0·92). Table I summarises these studies.
Thus, these studies show superior survival in recipients of a well-matched (8/8 or 10/10) graft; however, the impact of a single mismatch at each individual locus is not the same in all studies. Possible explanations are that some mismatches may be tolerated in certain circumstances, or that some mismatches are permissive (i.e. a mismatch that does not result in a worse outcome) while others are non-permissive (i.e. associated with a worse outcome), and that such mismatches are unevenly distributed across the different studies. Evidence for both of these scenarios is discussed in more depth below.
The evidence for tolerated mismatches
Although an 8/8 matched donor is now felt to be the ‘gold standard’, using a donor with a single allele mismatch (7/8) (or even, in some cases, multiple mismatches) has been associated with an equally favourable outcome in certain situations.
Reduced intensity/non-myeloablative conditioning and T-cell depletion
In a study of 144 recipients of T cell-depleted, reduced-intensity conditioned UD transplants (Shaw et al, 2005), OS did not differ significantly between fully matched and single-antigen mismatched grafts, whereas recipients of grafts with two or more mismatches fared significantly worse (P = 0·005). The only deleterious effect of a single HLA mismatch in this cohort was an increase in the rate of primary graft failure (6/47, 13% vs. 1/93, 1%; P = 0·006). A study of 89 patients receiving T-cell replete, nonmyeloablative conditioning regimens (Maris et al, 2003) showed similar findings, with a non-significant increase in graft rejection in patients receiving HLA Class I mismatched grafts, but no impact on OS. A smaller study in 52 patients (Niederwieser et al, 2003) showed no statistically significant difference in survival between HLA-matched and mismatched pairs, although the degree of mismatching was not separately reported. In contrast, a study of 111 non-myeloablative-conditioned patients showed an adverse effect of HLA mismatching (Ho et al, 2006). The authors considered only the HLA-C locus, but showed a worse OS in those with an isolated HLA-C mismatch (2-year OS: 27% compared to 51% in recipients of 10/10 matched transplants, P = 0·02).
The impact of T-cell depletion was assessed in a study that included 114 CML patients who underwent myeloablative transplants (Tiercy et al, 2004). The authors showed a significant 5-year survival detriment (HR = 2·43; P = 0·0019) in patients with a 9 or less/10 match; however, they reported that the influence of HLA incompatibilities was ‘scarcely evident’ in patients receiving a T cell-depleted graft (a third of the group received antithymocyte globulin [ATG]). Table I summarises these studies.
Data from the Anthony Nolan Trust (ANT), in recipients of T cell-depleted transplantation protocols using Alemtuzumab, have consistently shown that a single HLA mismatch is tolerated (Shaw et al, 2005, 2010). Recent analysis (unpublished) in 727 transplant recipients showed that the OS after a 7/8 matched transplant was not significantly different to that after an 8/8 matched transplant (5-year OS: 40% compared to 43%), while OS after a <7/8 matched transplant was significantly worse (25% at 5 years, P = 0·002). These findings were similar in recipients of both myeloablative (n = 457) and reduced intensity (n = 223) conditioning regimens.
The IHWG study (Petersdorf et al, 2007) investigated whether the impact of a single HLA mismatch was dependent on the disease stage of the recipient. Interestingly, the impact of a single mismatch was found to be more marked in those with low risk disease (HR 1·50; 95% CI: 1·28–1·76; P ≤ 0·0001) than in those with intermediate risk disease (HR 1·15; 95% CI: 1·02–1·29; P = 0·02), and was not statistically significant in those with high risk disease (HR 1·06; 95% CI: 0·92–1·22; P = 0·43). Similarly, Lee et al (2007) showed that the impact of HLA mismatching was most marked in early and intermediate stage disease, while mismatches may be ‘tolerated’ in patients with late stage disease. The authors remarked that more advanced disease had a greater impact on patient outcomes than increasing degrees of HLA mismatching. An additional study in 948 UD transplant pairs (Petersdorf et al, 2004) showed a significant increase in mortality in those with low risk disease and a single HLA mismatch (HR 2·27), while a single HLA mismatch had no significant effect on mortality in those with either intermediate (HR 1·0) or high risk (HR 1·19) disease.
The unpublished ANT data (mentioned above) showed an effect of disease stage on the impact of HLA matching status. In 315 patients with early stage disease, the 3-year survival was better in patients with well-matched grafts (50% and 51% in recipients of 8/8 and 7/8 matched grafts respectively) and worse in those with less well-matched grafts (18% in <7/8 matched grafts; P < 0·001). Conversely, in those with late stage disease (n = 328) the 3-year survival was similar, at 40%, 34% and 30% in recipients of 8/8, 7/8 and <7/8 matched grafts, respectively (P = 0·658).
The ANT has analysed the impact of HLA-DPB1 matching status in the subset of patients with leukaemia from the above cohort (n = 488) (Shaw et al, 2010). The results differed depending on the degree of HLA matching for the other five HLA loci and on the disease stage. Survival was significantly better in 12/12 matched transplants (i.e. HLA-DPB1 matched) in those with early stage leukaemia (5-year OS: 63% vs. 41% in 10/10 matched, P = 0·006), but not in late stage disease, in accordance with other studies suggesting HLA matching is most important in patients with early stage disease. Conversely, within the HLA mismatched group (≤9/10), there was a significant survival advantage to DPB1 mismatching (5-year OS: 39% vs. 21% in DPB1-matched, P = 0·008), particularly in late stage leukaemia (P = 0·01). As HLA-DPB1 matching has been associated with a significant increase in disease relapse in this and other studies (Shaw et al, 2006, 2007), the most likely reason for the benefit of an HLA-DPB1 mismatch in late stage disease is the lower risk of relapse.
The evidence for permissive mismatches
Not all mismatches result in a deleterious clinical outcome. It has thus been hypothesised that some mismatches may be ‘permissive’ (i.e. not associated with worse clinical outcomes than a match), while others may be ‘non-permissive’ (i.e. associated with worse clinical outcomes than either a match or a ‘permissive’ mismatch). A possible explanation for permissiveness relates to differences at the amino acid or epitope level, which may be as important as, or more important than, simply being mismatched at the allele level.
The JMDP addressed this issue in a large cohort of 4643 recipients (Kawase et al, 2009). They reported a protective effect of HLA-C and -DPB1 mismatches on disease relapse. In addition, a number of specific allele combinations at these loci were found to be protective against relapse. The OS was significantly better in those recipients with an HLA-DPB1 mismatched recipient/donor combination associated with protection against relapse (as compared to a HLA-DPB1 mismatch not associated with a decrease in relapse).
In a different study from the JMDP, including 5210 transplant recipients (Kawase et al, 2007), the authors performed a similar analysis of the impact of specific allele and amino acid mismatch combinations on outcome. A total of 15 non-permissive specific allele mismatches (i.e. those that resulted in an increased risk of aGvHD) were identified (four in HLA-A, one in HLA-B, seven in HLA-C, one in HLA-DRB1 and two in HLA-DPB1). When grouping their pairs into those with a full HLA match, zero non-permissive mismatch and one or more non-permissive mismatch, the OS in the last group was significantly worse than in the two former groups.
The IHWG (Petersdorf et al, 2007) studied the impact of individual locus mismatches in different populations. They found that a single HLA-A mismatch was poorly tolerated in JMDP transplant recipients (HR: 2·27; 95% CI: 1·14–4·53) but less detrimental in the non-JMDP population (HR: 1·24; 95% CI: 0·92–1·67). Conversely, mismatches at HLA-C were well tolerated among the JMDP pairs (HR: 1·05; 95% CI: 0·50–2·23) but poorly tolerated among non-JMDP transplants (HR: 1·68; 95% CI: 1·36–2·08). One explanation for this may be differences in the actual allele mismatches in these separate populations (Morishima et al, 2006). The predominant HLA-A allele mismatch in the JMDP population was A*0201/A*0206. When the effects of this mismatch on mortality were analysed in the whole group (i.e. irrespective of ethnic background) there remained a significant survival disadvantage (HR 1·58; P ≤ 0·001).
Analysing HLA-DPB1 mismatches in this way has led to interesting findings (Shaw et al, 2004; Zino et al, 2004; Fleischhauer et al, 2006). Zino et al (2004) developed a functional ‘epitope-based’ algorithm, in which they classified different DPB1 mismatches into permissive or non-permissive based on immunogenicity to a shared T cell epitope (Fleischhauer et al, 2001). Most recently this group have published their findings in 621 UD transplants facilitated through the Italian Registry for onco-haematological patients, using a modified algorithm (Crocchiolo et al, 2009). They reported a significantly higher 2-year survival in transplants with permissive as compared to non-permissive HLA-DPB1 mismatches (54·8% vs. 39·1%, P = 0·005). Interestingly, the survival detriment associated with non-permissive mismatches was seen in both 10/10 (HR = 2·12; CI: 1·23–3·64; P = 0·006) and 9/10 allele matched transplants (HR = 2·21; CI: 1·28–3·80; P = 0·004), and was observed in both early and advanced stage disease.
The IHWG tested the original algorithm (Zino et al, 2004) in a study of 5846 patients and their HLA-A, -B, -C, -DRB1, -DQB1 (10/10) matched donors (Shaw et al, 2009). Permissive matches were defined according to the functional algorithm such that three groups of patients were identified: (i) zero DPB1 mismatches (i.e. 12/12 HLA allele matched, n = 1348), (ii) permissive DPB1 mismatch (n = 2707) and (iii) non-permissive DPB1 mismatch in either a graft-versus-host (GVH; n = 901) or host-versus-graft (HVG; n = 890) direction. There was a statistically significantly higher risk of mortality in those who had a non-permissive epitope mismatch in either the GVH or HVG direction (HR = 1·16, P = 0·0004) compared to the group with a permissive mismatch. Together, the data from these last two studies suggest that, in the setting of a 10/10 HLA matched donor, selection of a donor with a permissive DPB1 mismatch results in a better survival than a non-permissive DPB1 mismatch. The situation in 9/10 matched pairs remains to be confirmed.
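The three-way grouping used in this study can be sketched as follows. The epitope-group table below is a hypothetical placeholder, not the published TCE assignments of Zino et al (2004); it only illustrates the shape of the algorithm (assign each DPB1 allele to an immunogenicity group, then treat mismatches that cross group boundaries as non-permissive).

```python
# Hedged sketch of epitope-based DPB1 mismatch classification.
# Group assignments are invented for illustration (1 = most immunogenic);
# consult the published algorithm for real assignments.
TCE_GROUP = {
    "09:01": 1, "10:01": 1,
    "03:01": 2, "14:01": 2,
    "01:01": 3, "02:01": 3, "04:01": 3, "04:02": 3,
}

def classify_dpb1(patient_alleles, donor_alleles):
    """Return the three-way grouping: 12/12 matched, permissive
    mismatch, or non-permissive mismatch."""
    if sorted(patient_alleles) == sorted(donor_alleles):
        return "12/12 matched"
    # Compare the most immunogenic group carried by each individual;
    # mismatches that cross group boundaries are treated as non-permissive.
    p = min(TCE_GROUP[a] for a in patient_alleles)
    d = min(TCE_GROUP[a] for a in donor_alleles)
    return "permissive mismatch" if p == d else "non-permissive mismatch"

print(classify_dpb1(["02:01", "04:01"], ["02:01", "04:02"]))  # permissive
print(classify_dpb1(["09:01", "04:01"], ["02:01", "04:02"]))  # non-permissive
```

A directional refinement (GVH vs. HVG, as in the study above) would compare the groups asymmetrically; the symmetric comparison here is deliberately simplified.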
There is no doubt that one of the factors implicated in the dramatic improvements in the outcome of UD transplantation over the years is the advance in HLA testing and matching and in the understanding of donor selection issues. HLA typing methods have shortened processing times and, at the same time, increased the level of resolution achieved. The benefits of high-resolution HLA class I and class II typing have been well demonstrated, particularly for post-transplant survival. Future studies using epitope-based typing methods to target specific residues are possible. Current consensus with regard to HLA typing, however, has been summarized by the NMDP recommendations: (i) allele-level typing for HLA-A, -B, -C and -DR; (ii) HLA matching of five out of six alleles for HLA-A, -B and -DRB1 by low-resolution HLA typing; and (iii) matching of eight out of eight alleles of HLA-A, -B, -C and -DRB1 by high-resolution typing (Bray et al, 2008). The current ‘gold standard’ is a donor matched for 8/8 alleles; however, it is clear that mismatches may be tolerated with regard to survival in some transplant settings and that evidence for permissive mismatches exists.
Several issues remain to be addressed:
- 1 The majority of large published studies have been performed in a predominantly myeloablative T-cell replete setting using largely bone marrow as the stem cell source. A few smaller studies suggest that in other settings (T-cell depletion, reduced intensity conditioning, peripheral blood stem cells [PBSC] as stem cell source) the impact of HLA matching may differ and these are areas that need to be explored. Further studies elucidating the impact of the nature and type of immunosuppression (e.g. ATG compared to Alemtuzumab), particularly in reduced intensity conditioned transplantation, will be of interest.
- 2 Other (non-HLA) genetic factors may be mismatched between recipient and donor, and have been implicated in outcomes. The majority of these (e.g. minor histocompatibility antigens (Dickinson & Charron, 2005; Hambach & Goulmy, 2005), cytokine gene polymorphisms (Zeiser et al, 2004; Dickinson & Middleton, 2005) and KIR haplotypes (Bignon & Gagne, 2005; Parham, 2005; Ruggeri et al, 2005; Hsu et al, 2007)) are less well studied than HLA, but may, in the future, be included in donor selection algorithms.
- 3 Although strong data from large studies are lacking (Lee et al, 2007), several other donor factors, such as donor age, cytomegalovirus status and gender (Kollman et al, 2001; Nichols et al, 2002; Ljungman et al, 2003; Anasetti, 2008), as well as donation-related factors, such as route of donation and CD34 cell dose (Eapen et al, 2004, 2007; Remberger et al, 2005; Pulsipher et al, 2009), may impact on transplant outcome in certain settings.
- 4 The impact of delays in accessing the donor (i.e. the time to transplantation) needs to be considered.
- 5 The recent dramatic increase in the use of umbilical cord blood (UCB) as a stem cell source for transplantation (especially for adult patients) is likely to have an impact on adult UD selection. As the HLA matching requirements in cord blood selection are less stringent than in UD selection, it may be possible to find a mismatched cord blood unit that is nevertheless ‘better’ matched than a mismatched adult UD. Prospective clinical trial data comparing the outcomes of UCB and PBSC/bone marrow from adult UDs are lacking. The impact of HLA matching in cord blood transplantation has been studied (Kamani et al, 2008; Rocha & Gluckman, 2009; Smith & Wagner, 2009); however, very few studies (with small numbers) have analysed the impact of matching at high resolution (Kogler et al, 2005; Liao et al, 2007). Practice may change as more results become available, but extending patient searches to include adult UD and cord blood simultaneously may streamline donor selection.
Ultimately an algorithm for selecting the best donor should include all available genetic information, as well as taking into account specific donor characteristics. In addition, the selection may differ depending on the individual patient’s disease, stage and transplant protocol. The number of transplants that would need to be analysed in order to obtain evidence for all of these questions is enormous and the importance of international collaborations and data-sharing cannot be overstated.
Summary of donor selection strategy (HLA matching)
- 1 Recent available data from large studies suggest that an 8/8 matched donor is the best choice. Most studies now agree that HLA-DQB1 does not need to be considered in a well-matched donor, but there is evidence that there may be additive effects of a DQB1 mismatch if a mismatch at another locus is present.
- 2 In some studies the use of a 7/8 (9/10) matched donor has been associated with an outcome as good as that of an 8/8 (10/10) matched donor. The clinical situations where a single mismatch may be tolerated include the T-cell depleted setting, nonmyeloablative/reduced-intensity conditioning and advanced stage disease.
- 3 In certain circumstances (and particularly if more than one donor is available) typing for the HLA-DPB1 locus should be done, and the degree and type of matching considered in donor selection.
- 4 How to select between HLA mismatched donors (i.e. which mismatched locus should be chosen in preference) remains incompletely answered. Different studies report greater significance related to different locus mismatches. One explanation may be differing numbers of mismatches at an individual locus in different studies. Another may be that certain mismatches are permissive (and that these are more prevalent in particular populations). Advice in choosing between an HLA-A, -B, -C or -DRB1 mismatched donor should be based on local studies and experience.
- 5 Two or more mismatches for HLA-A, -B, -C, -DRB1 alleles are usually associated with a worse survival (although may be tolerated in advanced disease).
- 6 There are conflicting data concerning the value of selecting an allelic mismatch over an antigenic mismatch.
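As an illustration only, the summary points above could be combined into a crude ranking of candidate donors. The weighting below is a hypothetical sketch, not a validated algorithm; real selection must also incorporate the disease stage, transplant protocol, non-HLA factors and local experience discussed earlier.

```python
# Hypothetical donor-ranking sketch based on the summary points above.
# Candidate format (name, grade, dpb1_permissive) is invented for this example.

def rank_donors(candidates):
    """Order candidates: higher match grade (out of 8) first; among equal
    grades, prefer a permissive (or absent) DPB1 mismatch (summary point 3)."""
    def key(candidate):
        name, grade, dpb1_permissive = candidate
        return (-grade, 0 if dpb1_permissive else 1)
    return sorted(candidates, key=key)

donors = [("D1", 7, True), ("D2", 8, False), ("D3", 8, True), ("D4", 6, True)]
print([d[0] for d in rank_donors(donors)])  # ['D3', 'D2', 'D1', 'D4']
```

Note that this ordering deliberately leaves summary point 4 unresolved: it does not prefer one mismatched locus over another, since the evidence on that question differs between studies and populations.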
The authors thank Professor Steven Marsh for the updated HLA figures.