Tacrolimus CYP3A Single-Nucleotide Polymorphisms and Preformed T- and B-Cell Alloimmune Memory Improve Current Pretransplant Rejection-Risk Stratification in Kidney Transplantation

Rapidly achieving adequate immunosuppressant blood exposure after kidney transplantation is key to abrogating both preformed and de novo anti-donor humoral and cellular alloresponses. However, while tacrolimus (TAC) is the cornerstone immunosuppressant inhibiting adaptive alloimmunity, its blood exposure is directly impacted by different single-nucleotide polymorphisms (SNPs) in CYP3A TAC-metabolizing enzymes. Here, we investigated how functional TAC-CYP3A genetic variants (CYP3A4*22/CYP3A5*3) influence the main baseline clinical and immunological risk factors of biopsy-proven acute rejection (BPAR), namely preformed donor-specific antibodies (DSAs) and donor-specific alloreactive T cells (DSTs), in a large European cohort of 447 kidney transplant recipients receiving TAC-based immunosuppression. A total of 70 (15.7%) patients developed BPAR. Preformed DSAs and DSTs were observed in 12 (2.7%) and 227 (50.8%) patients, respectively. According to the different CYP3A4*22 and CYP3A5*3 functional allele variants, we identified four new clusters differentially impacting early TAC exposure after transplantation: 7 (1.6%) patients were classified as high metabolizers 1 (HM1), 71 (15.9%) as HM2, 324 (72.5%) as intermediate metabolizers (IM), and 45 (10.1%) as poor metabolizers (PM1). HM1/2 patients showed significantly lower TAC trough levels and higher dose requirements than IM and PM patients (p < 0.001) and more frequently showed TAC underexposure (<5 ng/ml). Multivariate Cox regression analyses revealed that the CYP3A HM1 and IM pharmacogenetic phenotypes (hazard ratio (HR) 12.566, 95% CI 1.99–79.36, p = 0.007, and HR 4.532, 95% CI 1.10–18.60, p = 0.036, respectively), preformed DSTs (HR 3.482, 95% CI 1.99–6.08, p < 0.001), DSAs (HR 4.421, 95% CI 1.63–11.98, p = 0.003), and delayed graft function (DGF) (HR 2.023, 95% CI 1.22–3.36, p = 0.006) independently predicted BPAR.
Notably, a significant interaction between T-cell depletion and TAC underexposure was observed, reducing the risk of BPAR (HR 0.264, 95% CI 0.08–0.92, p = 0.037). All of these variables except DSAs displayed an even higher predictive risk for the development of T cell-mediated rejection (TCMR). Refining pretransplant monitoring by incorporating TAC CYP3A SNPs together with preformed DSAs and DSTs may improve current rejection-risk stratification and help guide induction treatment decision-making.


INTRODUCTION
Alloreactive immune memory is the hallmark of adaptive immunity and is a key factor driving acute kidney transplant rejection and accelerated graft loss (1)(2)(3). Indeed, preformed donor-specific antibodies (DSAs) are a well-recognized risk factor for poor graft outcome, and owing to systematic pretransplant screening, the incidence of acute antibody-mediated rejection (ABMR) has significantly decreased (4). Likewise, preformed donor-specific memory T cells (DSTs) may also exist in a substantial proportion of transplant candidates and have been associated with a higher risk of T cell-mediated rejection (TCMR) after transplantation (5)(6)(7)(8).
Importantly, memory T cells are more resistant to immunosuppressive therapies than their naïve counterparts (9)(10)(11), as they can rapidly repopulate and dominate peripheral anti-donor alloimmune responses (12). Experimental and human ex vivo studies have shown that calcineurin inhibitors, and especially tacrolimus (TAC), can inhibit these cells more efficiently (13,14). However, even though the implementation of TAC-based regimens as the current standard-of-care immunosuppressive therapy has led to a significant reduction in acute rejection rates, acute TCMR still occurs unpredictably (15,16).
TAC has a narrow therapeutic index and displays large interindividual pharmacokinetic variability (17), and suboptimal TAC exposure during the initial period after transplantation has been associated with a higher risk of acute rejection (18,19), especially in highly immunized kidney transplant patients (20). Among the different factors influencing TAC pharmacokinetics, single-nucleotide polymorphisms (SNPs) in the genes coding for the TAC-metabolizing enzymes cytochrome P450 (CYP) 3A4 and 3A5 have been shown to have a major impact (21,22). Indeed, patients expressing the CYP3A5*1 allele (CYP3A5 expressers) have significantly higher dose requirements to achieve TAC trough levels (C0) similar to those of patients homozygous for the CYP3A5*3 allele (CYP3A5 non-expressers) (23,24). Similarly, the non-functional CYP3A4*22 allele has also been associated with a reduced TAC dose requirement, regardless of CYP3A5 genotype (25,26). Nevertheless, while genotype-based adjustment of initial TAC doses has proven useful in two prospective trials, no improvement in main clinical outcomes such as acute rejection rates has been described yet (27,28). Of note, these studies did not stratify kidney transplant patients according to pretransplant alloimmune memory status (both DSAs and DSTs), in whom different individual CYP3A TAC phenotype expression could modulate the risk of biopsy-proven acute rejection (BPAR).
Therefore, since kidney transplant candidates with preformed anti-donor alloimmune memory might need particularly fast exposure to therapeutic TAC blood concentrations to effectively inhibit anti-donor recall immune responses, particularly in the early posttransplant period, we hypothesized that pretransplant DSTs and DSAs, together with other main baseline clinical variables and the different CYP3A TAC phenotypes, could significantly modulate the relative risk and types of BPAR. Thus, the primary endpoint of the study was to evaluate the value of preformed alloimmune memory (both DSAs and DSTs), together with the different CYP3A TAC pharmacogenetic phenotypes, for discriminating patients at risk of developing acute rejection after kidney transplantation.

Study Population
A total of 738 consecutive, adult, single-kidney-transplant recipients from four European kidney transplant centers (Bellvitge University Hospital in Barcelona, Spain; Campus Virchow-Clinic of the Charité University Hospital in Berlin, Germany; Academic Medical Center, University of Amsterdam in Amsterdam, the Netherlands; and Institute for Clinical and Experimental Medicine (IKEM) in Prague, Czech Republic), transplanted between June 2012 and December 2017, were retrospectively analyzed on the basis of the availability of both donor and recipient pretransplant peripheral blood mononuclear cells (PBMCs) and recipient plasma samples to assess DSTs, DSAs, and the CYP3A genotypes for their value in predicting acute rejection after transplantation. Furthermore, the main baseline clinical variables, such as the use of T cell-depleting agents, development of delayed graft function (DGF), and, importantly, distinct TAC blood exposures, were also assessed to evaluate their impact on how these pretransplant immunologic and pharmacogenetic variables modulate the risk of acute rejection. As illustrated in Figure 1, 218 patients were excluded because they received the MeltDose® extended-release TAC formulation (Envarsus®), were transplanted with another concomitant solid organ, or lacked biological samples. The first exclusion criterion was applied because of the different pharmacokinetic profile reported for the Envarsus® formulation. In addition, 73 of the remaining 520 patients were excluded for the following reasons: poor DNA quality for genotyping analyses (n = 25), insufficient donor and/or recipient cell counts (n = 13), or loss to follow-up (n = 35). Therefore, 447 patients were evaluated in the study. The respective institutional review boards approved the study, and all patients gave written informed consent. Patients were followed up for at least 24 months.
The main baseline demographic variables were collected at the time of enrollment, and clinical variables associated with transplant outcomes were pooled for the study. DGF was defined as the absence of recovery of graft function, requiring hemodialysis after transplant surgery.

Immunosuppression
All study patients received an immediate-release (Prograf®, Astellas Pharma; or Adoport®, Sandoz Pharma) or extended-release (Advagraf®, Astellas Pharma) TAC formulation as maintenance immunosuppression, together with mycophenolate mofetil (MMF) at 1 g bid during the first 2 weeks, subsequently tapered to 500 mg bid, and steroids (oral prednisone at 5 mg/day after the first month). Either basiliximab (72.3%) or rabbit anti-thymocyte globulin (rATG) (27.7%) was given as induction therapy per each center's practice. Initial TAC doses were adjusted by body weight and given at 0.05 mg/kg bid for the immediate-release (TAC-IR) and 0.12 mg/kg/day for the extended-release (TAC-ER) formulation, targeting TAC trough levels of 6–10 ng/ml during the first 6 months and 5–8 ng/ml thereafter. TAC trough levels were measured before administration of the patient's morning dose on days 7, 14, 30, 90, and 180 after transplantation. TAC intra-patient variability (IPV) was estimated as the coefficient of variation (SD/mean × 100).
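The intra-patient variability metric described above is a simple coefficient of variation over each patient's serial trough measurements. A minimal sketch (the trough values below are hypothetical, and we assume the sample standard deviation is used, as the text does not specify sample vs population SD):

```python
import statistics

def tacrolimus_ipv(trough_levels):
    """TAC intra-patient variability as the coefficient of variation (%):
    CV = (SD / mean) * 100, computed over a patient's serial trough
    measurements (e.g., days 7, 14, 30, 90, and 180 posttransplant).
    Uses the sample SD; this is an assumption for illustration."""
    mean = statistics.mean(trough_levels)
    sd = statistics.stdev(trough_levels)
    return sd / mean * 100

# Hypothetical trough levels (ng/ml) for one patient
troughs = [6.2, 8.9, 7.4, 9.6, 5.8]
print(f"IPV = {tacrolimus_ipv(troughs):.1f}%")
```

A patient with perfectly stable troughs would have an IPV of 0%; wider swings around the same mean yield a proportionally higher IPV.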

Donor and Recipient Cell Source
Peripheral blood samples were obtained in heparinized tubes from renal transplant recipients before kidney transplantation. Donor cells were obtained from donor spleens or PBMCs in deceased and living donors, respectively. PBMCs and splenocytes were isolated by standard Ficoll density gradient centrifugation, frozen in liquid nitrogen, and subsequently used for the IFN-γ enzyme-linked immunosorbent spot (ELISPOT) assay.

Assessment of Pretransplant Donor-Specific Alloimmune Memory
Pretransplant humoral and cellular donor-specific alloimmune memory was assessed in all study patients in peripheral blood by means of serum DSAs and circulating donor-specific memory/effector T cells (DSTs), respectively. While pretransplant DSA data were available to the transplant physicians prior to transplantation, all data related to pretransplant DSTs were blinded and thus did not influence the type of immunosuppressive therapy used. In the patients ultimately classified as DSA-positive, DSAs had not been detected at the time of transplantation and were only confirmed later; therefore, these patients did not receive any desensitization treatment before transplantation.

Donor-Specific Anti-HLA Antibodies
Screening for circulating anti-HLA class I and II antibodies was carried out in serum samples before transplantation in all patients and at the time of kidney transplant biopsy, using single-antigen flow bead assays on a Luminex platform (Lifecodes, a division of Immucor, Stamford, CT, USA). Patients previously screened only with the anti-HLA antibody screening assay (Lifecodes, a division of Immucor, Stamford, CT, USA) were re-assessed with single-antigen flow bead assays to rule out the presence of donor (HLA)-specific antibodies (DSAs). All beads showing a normalized mean fluorescence intensity (MFI) >1,500 were considered positive if [MFI/(MFI lowest bead)] > 5.
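The two-part bead positivity rule above (normalized MFI above 1,500 and more than 5-fold above the lowest bead on the panel) can be sketched as follows; the panel and allele names are hypothetical, and the function assumes the lowest bead MFI is greater than zero:

```python
def positive_beads(bead_mfi, mfi_cutoff=1500, ratio_cutoff=5):
    """Flag single-antigen beads as positive when the normalized MFI
    exceeds `mfi_cutoff` AND is more than `ratio_cutoff` times the
    lowest bead on the panel (assumes lowest bead MFI > 0)."""
    lowest = min(bead_mfi.values())
    return {
        allele
        for allele, mfi in bead_mfi.items()
        if mfi > mfi_cutoff and mfi / lowest > ratio_cutoff
    }

# Hypothetical single-antigen panel: allele -> normalized MFI
panel = {"A*02:01": 9800, "B*44:02": 1600, "DR*04:01": 320, "DQ*06:02": 90}
print(positive_beads(panel))
```

With this hypothetical panel, both criteria are met for the beads at 9,800 and 1,600 MFI, while the low-MFI beads fail the 1,500 cutoff; a bead would be classified DSA-positive only if its specificity matched a mismatched donor antigen.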

Renal Allograft Histology
Renal allograft biopsies were performed in patients with acute clinical graft dysfunction, defined as an increase in serum creatinine levels, a decreasing estimated glomerular filtration rate (eGFR), and/or the appearance of proteinuria and/or hematuria. All renal biopsies were blindly analyzed following the Banff 2013 classification (31) and retrospectively revised following Banff 2015 (32) by an expert kidney transplant pathologist.

Statistical Analysis
All data are presented as mean ± SD or median and interquartile range. Groups were compared using the χ²-test for categorical variables and one-way ANOVA or Student's t-test for normally distributed quantitative variables.
Cox regression analyses were performed to determine the significant univariate associations of pretransplant factors with the risk of BPAR. An interaction analysis between ATG induction and low TAC exposure was also introduced, as it was suspected that depleting ATG induction might dampen the effect of low TAC exposure. Interaction analyses were also carried out between ATG induction and DSTs, as well as between ATG and DSAs. These interaction analyses were performed in Cox models with two covariates plus the interaction term. A multivariate Cox survival model was then built on these significant associations to evaluate the independent predictors of BPAR. Results were expressed as hazard ratios (HRs) with 95% CIs. Kaplan-Meier analyses were performed to represent allograft rejection-free survival, and log-rank tests were computed for the associated curves. The statistical significance level was defined as a 2-tailed p < 0.05. All statistical analyses were performed with IBM® SPSS Statistics (version 23) and R (version 4.1).
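To make the interaction terms concrete: in a two-covariate Cox model with an interaction, the hazard for a patient with both exposures, relative to a patient with neither, is the product of the two main-effect HRs and the interaction HR. A minimal sketch (the HR values below are hypothetical illustrations, not the study's fitted estimates):

```python
import math

def combined_hazard_ratio(hr_a, hr_b, hr_ab):
    """In a Cox model h(t) = h0(t) * exp(bA*A + bB*B + bAB*A*B), the
    hazard ratio comparing (A=1, B=1) to (A=0, B=0) is
    exp(bA + bB + bAB), i.e., the product HR_A * HR_B * HR_AB."""
    return math.exp(math.log(hr_a) + math.log(hr_b) + math.log(hr_ab))

# Hypothetical example: if low TAC exposure alone doubled the hazard
# (HR 2.0), rATG alone left it unchanged (HR 1.0), and the interaction
# HR were 0.3, a patient with both would have HR 2.0 * 1.0 * 0.3 = 0.6.
print(combined_hazard_ratio(2.0, 1.0, 0.3))
```

An interaction HR below 1, as reported here for rATG with low TAC exposure, therefore means the combined effect is smaller than the product of the two main effects would predict.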

Main Demographics and Clinical Characteristics of the Study Population
The flowchart of the study is depicted in Figure 1. The mean study patient follow-up was 36.4 ± 18.1 months. As shown in Table 1

Pharmacogenetic CYP3A Phenotypes and Posttransplant Tacrolimus Exposure
Since the patients in this study were treated with either an immediate-release (TAC-IR) or an extended-release (TAC-ER) TAC formulation, we first assessed whether TAC trough concentrations and concentration/dose (C/D) ratios were comparable between the two formulations. As shown in Supplementary Figure 1, no differences were observed between the TAC-IR and TAC-ER formulations (p > 0.05 for all time points), and thus all patients were analyzed together.
When considering the CYP3A phenotype categorization and its impact on first TAC trough levels and dose adjustments, we first analyzed the clusters by merging all HM, IM, and PM from each CYP3A genotype. As illustrated in Figure 2, HM patients showed significantly lower TAC trough levels than IM and PM patients within the first 2 weeks. Notably, patients with an IM phenotype showed intermediate TAC trough levels compared to both HM and PM patients in the early period (Figure 2A). Also, HM patients required higher TAC dose adjustments than PM patients to reach the same TAC trough levels during the first 6 months of follow-up (Figure 2B). This difference was also observed between IM and PM patients from 2 weeks to 6 months of follow-up. While TAC dose adjustments allowed TAC trough levels to converge starting 3 months post-transplantation, the C/D ratio remained significantly different between the three CYP3A cluster phenotypes. However, when the HM phenotype was further stratified into HM1 and HM2 phenotypes, statistically significant differences in both TAC trough levels and TAC dose adjustments (TAC C/D ratio) were observed between the two HM groups (Figures 2C, D). All pharmacokinetic data including TAC trough levels (ng/ml), daily doses (mg/day),
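The exact genotype-to-cluster mapping is detailed in the study's Methods; as a plausible sketch, we assume here that HM1 corresponds to homozygous CYP3A5 expressers (*1/*1), HM2 to heterozygous expressers (*1/*3), IM to CYP3A5 non-expressers (*3/*3) without CYP3A4*22, and PM1 to non-expressers carrying CYP3A4*22 — an assumed assignment for illustration only:

```python
def cyp3a_cluster(cyp3a5, cyp3a4):
    """Assign one of the four metabolizer clusters from the two SNP
    genotypes. NOTE: this mapping is an assumption for illustration;
    the study's exact cluster definitions are given in its Methods.
    cyp3a5: e.g., "*1/*1", "*1/*3", "*3/*3"
    cyp3a4: e.g., "*1/*1", "*1/*22", "*22/*22"
    """
    expresser = "*1" in cyp3a5.split("/")        # carries functional CYP3A5*1
    carries_4_22 = "*22" in cyp3a4.split("/")    # carries reduced-function CYP3A4*22
    if cyp3a5 == "*1/*1":
        return "HM1"                             # homozygous CYP3A5 expresser
    if expresser:
        return "HM2"                             # heterozygous CYP3A5 expresser
    return "PM1" if carries_4_22 else "IM"       # non-expressers split by CYP3A4*22
```

Under this assumed mapping, the rare HM1 cluster (1.6% of this mostly Caucasian cohort) would correspond to the genotype with the fastest TAC clearance, consistent with its lowest observed trough levels.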

Clinical and Demographic Baseline Characteristics of Different Pharmacogenetic CYP3A Clusters

As described in Table 2, there were no statistically significant differences between the pharmacogenetic clusters regarding the main baseline demographic, clinical, and immunological characteristics. The type of induction therapy also did not differ between groups. However, there was a higher number of non-Caucasian transplant recipients in the HM1 cluster than among IM and PM patients.

CYP3A Clusters, Tacrolimus Trough Levels, and Biopsy-Proven Acute Rejection
Median TAC trough levels at the mean time of BPAR occurrence were lower among patients developing BPAR than among those who did not (7.10 [5.90–9.01] vs 8.10 [6.47–9.74] ng/ml, respectively, p = 0.035). Among patients experiencing BPAR, 27/70 (38.6%) had at least one TAC trough level <5 ng/ml at any time prior to BPAR, whereas only 79/377 (21%) of patients without BPAR did (p = 0.001). This threshold was therefore defined as low TAC exposure. The proportion of patients with TAC trough levels below 5 ng/ml at least once within the study follow-up was significantly higher among HM than among IM and PM patients (p < 0.001) (Figure 3A), as well as when HM was further stratified into the HM1 and HM2 subgroups (p < 0.001) (Figure 3B).

Pretransplant Risk Factors Predicting Biopsy-Proven Acute Rejection
As illustrated in Figure 4, the presence of preformed DSTs or DSAs was associated with a higher incidence of BPAR (log-rank p < 0.001 for both), especially TCMR and ABMR, respectively (log-rank p < 0.001 for both), although preformed DSTs were also associated with higher ABMR rates (6.2% in DST+ vs 1.8% in DST−, p = 0.019). When we analyzed BPAR-free survival curves according to the main CYP3A phenotype clusters (HM, IM, and PM), only IM patients showed a significantly higher cumulative incidence of BPAR than HM and PM patients (Figure 5A) (log-rank p = 0.038). Nevertheless, when we further stratified the HM cluster phenotype into the HM1 and HM2 subgroups, patients displaying the pharmacogenetic CYP3A clusters HM1 and IM showed significantly lower BPAR-free survival rates than HM2 and PM1 patients (Figure 5B) (log-rank p < 0.001). However, when TCMR-free survival rates were assessed (Figure 5C), only the HM1 group showed a significantly higher TCMR risk compared to the other groups (p = 0.006 for HM1; p > 0.05 for IM and HM2, compared to PM in a univariate Cox model).
Next, we assessed in univariate Cox analyses whether other relevant clinical or immunological variables were associated with the risk of BPAR. As shown in Table 3, in addition to the CYP3A clusters HM1 and IM together with pretransplant DSTs and DSAs, the development of DGF and low TAC exposure (<5 ng/ml) were also correlates of BPAR. Conversely, while previous kidney transplantation, donor age, donor type (living vs brain dead), cold ischemia time, number of HLA mismatches, and rATG induction were not associated with BPAR, when we considered the combined effect of rATG induction and low TAC exposure, their interaction was associated with a significant reduction of BPAR (HR = 0.25, p = 0.025). Notably, when these significant covariates were assessed in a multivariate Cox regression analysis, the CYP3A clusters (HM1 and IM), both preformed DSAs and DSTs, and DGF independently predicted BPAR. Although not statistically significant, low TAC exposure (<5 ng/ml) showed a trend toward an increased risk of BPAR. Notably, the interaction between the use of rATG and TAC underexposure showed a significant reduction of the risk of BPAR (HR = 0.312, p = 0.037). In other models adjusting for the interaction between rATG and DSTs or DSAs, while no significant interaction was observed, the risk of TCMR was numerically reduced in rATG-treated patients with DSTs but numerically increased in rATG-treated patients with DSAs. Interestingly, when specifically focusing on the risk of TCMR, DSTs, DGF, recipient age, and the CYP3A HM1 phenotype, as well as the interaction between rATG and low TAC exposure, were associated with higher TCMR rates. While the interaction between DSTs and rATG induction was not significant (HR = 0.51, p = 0.308), it showed a numerically protective trend; i.e., patients with DSTs receiving rATG induction had a non-significantly lower risk of TCMR.
Finally, in a multivariate Cox model, the three covariates DSTs, DGF, and recipient age remained independent predictors of the risk of TCMR; the CYP3A HM1 phenotype, low TAC exposure, and the interaction between rATG and low TAC exposure were also independent correlates predicting TCMR.
Since rATG induces deep T-cell depletion, and although this variable was included in the multivariate analysis, we also performed a BPAR-free Cox survival analysis restricted to patients not receiving rATG induction. In this analysis, preformed DSTs, the CYP3A cluster HM1, DGF, and low TAC exposure remained independently associated with BPAR (Supplementary Table 2).

FIGURE 3 | Proportion of TAC underexposure (<5 ng/ml) according to different CYP3A clusters. 0, 1, 2, and 3 in the legend represent the number of times that patients were off target. (A) There was a higher proportion of patients with TAC trough levels below 5 ng/ml among HM as compared to IM and PM at the mean time of BPAR occurrence or before BPAR. The frequencies of patients with low levels at least once in this follow-up period were 20%, 29%, and 69% in the PM, IM, and HM groups, respectively (p < 0.001). (B) There was a higher proportion of patients with TAC trough levels below 5 ng/ml among HM1 and HM2 as compared to IM and PM1 at the mean time of BPAR occurrence or before BPAR. The frequencies of patients with low levels at least once in this follow-up period were 20%, 29%, 63%, and 86% in the PM, IM, HM2, and HM1 groups, respectively (p < 0.001). TAC, tacrolimus; HM, high metabolizers; IM, intermediate metabolizers; PM, poor metabolizers; BPAR, biopsy-proven acute rejection.

Finally, given that two different BPAR density peaks were identified over time, post-hoc analyses were performed to compare the two subpopulations of early (<4 months) and late (>4 months) BPAR (Supplementary Table 3). Notably, the same independent predictive variables described for the whole study population were also confirmed when stratifying by either early or late BPAR (data not shown).

DISCUSSION
This is the first study in solid organ transplant recipients to evaluate the impact of the main TAC CYP3A pharmacogenetic variants together with the main immunologic biomarkers tracking preformed anti-donor alloimmune memory in predicting the risk of posttransplant acute rejection. In this large, multicenter, European kidney transplant cohort, we first describe a further refined stratification of CYP3A pharmacogenetic phenotype clusters beyond the three previously described in the literature (22), which are significantly associated with different TAC metabolizer profiles. Indeed, this new categorization identifies kidney transplant recipients with distinct early TAC trough levels and C/D ratios and discriminates those patients at higher risk of both BPAR and TCMR, a risk also directly influenced by preceding low TAC exposure. Furthermore, we confirm the
persistent independent deleterious effects of preformed DSAs and DSTs on the risk of BPAR, independent of these distinct TAC-metabolizing phenotypes. Most notably, a significant protective effect on the risk of BPAR was observed with the use of T cell-depleting agents such as rATG in the setting of TAC underexposure. In recent years, an important body of evidence has shown the relevance of specific SNPs of the two main variants of the CYP3A TAC-metabolizing enzymes (CYP3A4*22 and CYP3A5*3), leading to distinct functional phenotypes that influence the TAC dose requirements needed to achieve whole-blood pre-dose concentrations (C0) in kidney transplant recipients (21)(22)(23)(24)(25)(26). In line with previous reports, we here show the impact of the two main CYP3A SNPs on both TAC C0 and TAC dose requirements; indeed, HM1/2 patients had significantly lower TAC dose-adjusted C0 ratios than IM patients, and PM patients had significantly higher TAC dose-adjusted C0 ratios than IM patients. Interestingly, while the prevalence of each gene variant largely depends on patient ethnicity, our study, mainly comprising Caucasian kidney transplant recipients, included a small but significant proportion of CYP3A5 expressers and CYP3A4*22 allele carriers, which provided a representative number of patients with distinct global TAC-metabolizing capacity.
Nevertheless, while genotype-based adjustment of TAC doses in the initial course of kidney transplantation has been shown to help reach the target C0 more accurately and rapidly shortly after transplantation, no advantages in terms of improved clinical outcomes have been demonstrated when prospectively assessed (27,28). Here, using this new categorization considering the functional *1 allele, we observed that while CYP3A HM1 patients are at higher risk of BPAR and TCMR than the other clusters, PM transplant recipients seem to display a significantly lower BPAR risk than the other CYP3A clusters. Unexpectedly, the IM cluster showed a deleterious effect on the global BPAR risk relative to PM but also to the HM2 group. These findings might be explained by the large number of patients in the IM group, the most frequent in our population, implying more heterogeneity within this group than within the probably much more homogeneous HM1/2 and PM1 groups. Nonetheless, when evaluating the impact on TCMR alone, the HM1 phenotype emerged as the most relevant factor driving this type of rejection. Therefore, the study of TAC CYP3A pharmacogenetic variants should be encouraged, especially in study cohorts with a greater representation of non-Caucasian individuals.
Importantly, we confirm that patients developing BPAR were significantly more underexposed to TAC than patients not developing BPAR. Indeed, early low TAC exposure (trough levels below 5 ng/ml) was a marginally independent correlate of BPAR, underscoring that initial low TAC trough levels may facilitate anti-donor alloimmune activation triggering allograft rejection. Moreover, a significant interaction between rATG and low TAC exposure was observed, leading to a reduction in the risk of BPAR. This interaction, together with the independent predictive value of TAC underexposure among patients not receiving rATG, strongly suggests that T cell-depleting induction therapy mitigates the risk of early low TAC exposure, conferring a protective umbrella for patients with insufficient immunosuppressive coverage, with HM patients being the group most likely to benefit. Alternatively, earlier TAC initiation or higher TAC dosing could eventually counterbalance this deleterious effect.
Another interesting finding of this work is the confirmation of the independent negative effect of preformed DSTs and DSAs on the risk of BPAR. Previous studies by our group and others have reported a strong association between pretransplant DSTs and an increased risk of BPAR, particularly TCMR, during the early period after transplantation, especially in patients not receiving T cell-depleting induction therapies (3,18,19). Indeed, induction protocols using rATG induce marked lymphopenia, with subsequent induction of apoptosis and/or anergy (20). In line with these observations, although not significant, the interaction between rATG and DSTs showed a clinically relevant trend (HR < 1) toward a reduced risk of BPAR. Of note, while no additional deleterious effect was observed in HM patients with preformed DSTs or DSAs compared to patients with an IM or PM phenotype, the low number of immunological events within each pharmacogenetic phenotype carrying preformed DSAs or DSTs may have precluded the detection of statistical differences.
Our study has some limitations. Despite the high number of patients evaluated, our study population, which is representative of most European kidney transplant programs, was mostly Caucasian; thus, certain CYP3A SNPs were less represented. Nevertheless, the similar distribution of the main demographic, clinical, and immunological risk factors across all CYP3A pharmacogenetic clusters significantly counterbalances this constraint. We also acknowledge that other relevant variables beyond the CYP3A genotypes may directly influence TAC pharmacokinetic variability in whole blood, such as patient hematocrit, weight, corticosteroid dose, and reduced hepatic function. Nevertheless, the impact of the TAC-metabolizing CYP3A genotypes on the TAC dose requirements needed to achieve whole-blood pre-dose concentrations highlights the important effect of individual genetic susceptibility on TAC blood exposure. We did not include subclinical rejections; thus, a number of additional immune-mediated events may have occurred in our study population that were not taken into account. Finally, the short follow-up period of the study may not have allowed us to observe some additional deleterious impacts in the long term.
In conclusion, the results of our study strongly suggest that incorporating pretransplant assessment of anti-donor alloimmune memory, both humoral and cellular, together with individual genetic TAC-metabolizing susceptibility may significantly refine current immune-risk stratification of kidney transplant candidates prior to transplantation and may help guide treatment decision-making in a more personalized manner. Notably, these data warrant the development of large, prospective, biomarker-guided trials, preferentially within multicenter international consortia.