**Recent Advances and Clinical Outcomes of Kidney Transplantation**

Special Issue Editors

**Charat Thongprayoon, Wisit Cheungpasitporn and Napat Leeaphorn**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

Charat Thongprayoon, Division of Nephrology and Hypertension, Mayo Clinic, USA

Wisit Cheungpasitporn, University of Mississippi Medical Center, USA

Napat Leeaphorn, Saint Luke's Health System, USA

*Editorial Office*: MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Journal of Clinical Medicine* (ISSN 2077-0383) (available at: https://www.mdpi.com/journal/jcm/special_issues/outcomes_kidney_transplantation).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Article Number*, Page Range.

**Volume 2 ISBN 978-3-03936-407-7 (Pbk) ISBN 978-3-03936-408-4 (PDF)**

**Volume 1-2 ISBN 978-3-03936-409-1 (Pbk) ISBN 978-3-03936-410-7 (PDF)**

© 2020 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

### **Contents**





### **About the Special Issue Editors**

**Charat Thongprayoon**, M.D.; Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA. Email: charat.thongprayoon@gmail.com. Dr. Thongprayoon is affiliated with Mayo Clinic Hospital, Rochester. His research interests include nephrology, electrolytes, acute kidney injury, renal replacement therapy, epidemiology, and outcome studies.

**Wisit Cheungpasitporn**, M.D.; Division of Nephrology, Department of Medicine, University of Mississippi Medical Center, Mississippi, USA. Email: wcheungpasitporn@gmail.com. Dr. Wisit Cheungpasitporn is board-certified in both Nephrology and Internal Medicine. He completed his nephrology fellowship training at Mayo Clinic, Rochester, Minnesota, where he also completed additional training in kidney transplantation, an area in which he has become an expert. He completed his postdoctoral diploma in the Clinical and Translational Science (CCaTS) program in 2015. Dr. Cheungpasitporn received the 2016 Donald C. Balfour Research Award, given in recognition of outstanding research as a junior scientist whose primary training is in a clinical field at Mayo Clinic, Rochester, Minnesota, as well as the 2016 William H. J. Summerskill Award, given in recognition of outstanding achievement in research by a clinical fellow at Mayo Clinic, Rochester, Minnesota. Dr. Cheungpasitporn has been part of the Division of Nephrology at UMMC since August 2017.

**Napat Leeaphorn**, M.D., is a nephrology specialist in Kansas City, MO. He currently practices at Saint Luke's Kidney Transplant Specialists. Dr. Leeaphorn is board-certified in Internal Medicine and Nephrology.

### *Article* **Chronic Use of Proton-Pump Inhibitors and Iron Status in Renal Transplant Recipients**

**Rianne M. Douwes 1,\*, António W. Gomes-Neto 1, Michele F. Eisenga 1, Joanna Sophia J. Vinke 1, Martin H. de Borst 1, Else van den Berg 1, Stefan P. Berger 1, Daan J. Touw 2, Eelko Hak 3, Hans Blokzijl 4, Gerjan Navis 1 and Stephan J.L. Bakker 1**


Received: 14 August 2019; Accepted: 28 August 2019; Published: 3 September 2019

**Abstract:** Proton-pump inhibitor (PPI) use may influence intestinal iron absorption. Low iron status and iron deficiency (ID) are frequent medical problems in renal transplant recipients (RTR). We hypothesized that chronic PPI use is associated with lower iron status and ID in RTR. Serum iron, ferritin, transferrin saturation (TSAT), and hemoglobin were measured in 646 stable outpatient RTR with a functioning allograft for ≥ 1 year from the "TransplantLines Food and Nutrition Biobank and Cohort Study" (NCT02811835). Median time since transplantation was 5.3 (1.8–12.0) years, mean age was 53 ± 13 years, and 56.2% used PPI. In multivariable linear regression analyses, PPI use was inversely associated with serum iron (β = −1.61, *p* = 0.001), natural log transformed serum ferritin (β = −0.31, *p* < 0.001), TSAT (β = −2.85, *p* = 0.001), and hemoglobin levels (β = −0.35, *p* = 0.007), independent of potential confounders. Moreover, PPI use was independently associated with increased risk of ID (Odds Ratio (OR): 1.57; 95% Confidence Interval (CI) 1.07–2.31, *p* = 0.02). Additionally, the odds ratio in RTR taking a high PPI dose as compared to RTR taking no PPIs (OR 2.30; 95% CI 1.46–3.62, *p* < 0.001) was higher than in RTR taking a low PPI dose (OR:1.78; 95% CI 1.21–2.62, *p* = 0.004). We demonstrated that PPI use is associated with lower iron status and ID, suggesting impaired intestinal absorption of iron. Moreover, we found a stronger association with ID in RTR taking high PPI dosages. Use of PPIs should, therefore, be considered as a modifiable cause of ID in RTR.

**Keywords:** proton-pump inhibitors; iron; iron deficiency; renal transplantation

#### **1. Introduction**

Iron deficiency (ID) is very common in renal transplant recipients (RTR), with a reported prevalence of 20% to 30% more than 12 months after transplantation [1–3]. ID is an important contributor to post-transplant anemia, which affects approximately 20% to 49% of RTR within the first year after transplantation and is associated with adverse health outcomes [1,4–6]. Besides clinical symptoms associated with ID, such as fatigue, dyspnea, and decreased exercise tolerance, iron deficiency anemia (IDA) has been associated with an increased risk of graft failure and mortality in RTR [4,6,7]. Moreover, iron deficiency, independent of anemia, has been shown to be a risk factor for mortality in RTR [3].

Identifying modifiable risk factors of post-transplant ID may improve transplant outcomes and quality of life in RTR. In this regard, drug-induced factors should not be ignored. Recently, several observational studies have demonstrated that chronic proton-pump inhibitor (PPI) use may negatively affect iron status and is associated with ID in the general population [8–11]. It is postulated that PPIs interfere with the absorption of iron in the duodenum, where non-heme iron is primarily absorbed in its ferrous form (Fe<sup>2+</sup>) after reduction from its less absorbable ferric form (Fe<sup>3+</sup>), a process facilitated by gastric acid and by membrane reductases localized at the apical membrane of the enterocytes [12,13]. This hypothesis is supported by a study from Ajmera et al., who found a reduced response to oral supplementation of ferrous sulfate in iron-deficient patients taking omeprazole [14]. In a large population-based case-control study, an increased risk of ID was found among patients receiving PPI therapy for at least one year and even among intermittent long-term PPI users compared to PPI non-users [8]. These findings are in line with previous results from another large cohort study in the United States, which demonstrated a higher risk of ID among chronic users of both PPIs and H2-receptor antagonists (H2RAs), which diminished after treatment discontinuation [9].

PPIs are frequently prescribed after renal transplantation to prevent gastrointestinal complications from immunosuppressants and may therefore contribute to the high burden of post-transplant ID in RTR. It is currently unknown whether chronic PPI use adversely affects iron status in RTR, and studies investigating this hypothesis are lacking. In the present study, we aimed to investigate the association of PPI use with iron status in a large single-center cohort of stable outpatient RTR.

#### **2. Methods**

#### *2.1. Study Design*

For this cross-sectional cohort study, we used data from a previously well-described cohort of 707 stable RTR registered at clinicaltrials.gov as "TransplantLines Food and Nutrition Biobank and Cohort Study", NCT02811835 [15]. In brief, all adult RTR with a functioning graft for at least 1 year without known or apparent systemic illnesses (i.e., malignancies, opportunistic infections) who visited the outpatient clinic of the University Medical Center Groningen (UMCG) between November 2008 and March 2011 were invited to participate. Written consent was obtained from 707 of the 817 initially invited RTR. Study measurements were performed during a single study visit at the outpatient clinic.

#### *2.2. Exposure Definition*

RTR using any PPI on a daily basis during a period of at least 3 months before the study visit were defined as chronic PPI users. For statistical analyses, we excluded RTR with missing data on PPI dosage (n = 1), with on-demand PPI use (n = 3), with missing data on iron status parameters (n = 7), or using iron supplements or erythropoiesis-stimulating agents (n = 50), leaving 646 RTR eligible for analysis.

#### *2.3. Study Approval*

The study protocol was approved by the institutional review board (METC 2008/186, approved on 17 September 2008) of the UMCG and all study procedures were performed in accordance with the Declaration of Helsinki and the Declaration of Istanbul.

#### *2.4. Clinical Measurements and Iron Status Parameters*

Information on medical history, including reported history of gastritis or peptic ulcer disease, was obtained from electronic patient records as described previously [15]. Medication use, including the use of PPIs, diuretics, renin-angiotensin-aldosterone system (RAAS) inhibitors, antiplatelet drugs, anti-diabetic drugs, mycophenolate mofetil (MMF), calcineurin inhibitors (CNIs) and prednisolone, was recorded at baseline. Blood pressure was measured using a standard protocol, as described previously [16]. Information on alcohol use and smoking behavior was obtained using a questionnaire.

Blood samples were collected after an 8–12 h overnight fasting period. Serum creatinine was measured using an enzymatic, isotope dilution mass spectrometry traceable assay (P-Modular automated analyzer, Roche Diagnostics, Mannheim, Germany). Estimated glomerular filtration rate (eGFR) was calculated applying the serum creatinine-based Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation. Concentrations of glucose, hemoglobin A1c (HbA1c), and high-sensitivity C-reactive protein (hs-CRP) were determined using standard laboratory methods. Serum iron was measured using photometry (Modular P800 system; Roche Diagnostics, Mannheim, Germany). Serum ferritin concentrations were determined using an electrochemiluminescence immunoassay (Modular analytics E170; Roche Diagnostics, Mannheim, Germany). Transferrin was measured using an immunoturbidimetric assay (Cobas-c analyzer, P-Modular system; Roche Diagnostics, Mannheim, Germany). Transferrin saturation (TSAT, %) was calculated as 100 × serum iron (μmol/L)/(25 × transferrin (g/L)). Iron deficiency was defined as TSAT < 20% and ferritin < 300 μg/L, as previously described in the literature and commonly used in patients with pro-inflammatory conditions, such as chronic heart failure and chronic kidney disease [3,17–19]. Proteinuria was defined as urinary protein excretion ≥ 0.5 g/24 h.
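The TSAT calculation and the ID cutoffs used in this study can be summarized in a short sketch (a minimal illustration with hypothetical function names, not the study's analysis code; the 100 μg/L ferritin cutoff is the alternative definition described in Section 2.6):

```python
def tsat_percent(serum_iron_umol_l, transferrin_g_l):
    """Transferrin saturation (%): 100 x serum iron (umol/L) / (25 x transferrin (g/L))."""
    return 100 * serum_iron_umol_l / (25 * transferrin_g_l)

def is_iron_deficient(tsat, ferritin_ug_l, ferritin_cutoff=300.0):
    """Primary definition: TSAT < 20% and ferritin < 300 ug/L.
    Pass ferritin_cutoff=100.0 for the ERBP/NICE-based sensitivity definition."""
    return tsat < 20.0 and ferritin_ug_l < ferritin_cutoff
```

Note that both criteria must hold, so a recipient with low TSAT but a ferritin above the cutoff is not classified as iron deficient under either definition.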

#### *2.5. Assessment of Dietary Iron Intake*

Total dietary iron intake (i.e., heme and non-heme iron) was assessed using a validated semi-quantitative food frequency questionnaire (FFQ), which was filled out at home [20,21]. Dietary data were converted into daily nutrient intake using the Dutch Food Composition Table of 2006 [22].

#### *2.6. Statistical Analyses*

Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS), version 23.0 (IBM Corp., Armonk, NY, USA). Data are presented as mean ± SD for normally distributed data, median with interquartile range (IQR) for skewed data, and number with percentage for nominal data. Differences between PPI users and PPI non-users were tested using independent-sample t-tests, Mann–Whitney U-tests, and Chi-square or Fisher's exact tests, as appropriate.

To investigate the association of PPI use with serum iron, serum ferritin, TSAT, and hemoglobin levels, univariable and multivariable linear regression analyses were performed with adjustment for potential confounders of iron status, including age, sex, eGFR, proteinuria, time since transplantation, history of gastrointestinal disorders (i.e., reported history of gastritis or peptic ulcer disease before baseline), lifestyle parameters (BMI, smoking behavior, alcohol use, and dietary iron intake), inflammation (hs-CRP), MMF use, and other medication use (i.e., diuretics, RAAS-inhibitors, antiplatelet therapy, CNI use, and prednisolone use). Serum ferritin was natural log (ln) transformed to obtain a normal distribution. To investigate a dose-response relationship, we performed additional analyses in which RTR were divided into three groups based on daily PPI dose defined in omeprazole equivalents: no PPI, low PPI dose (≤20 mg omeprazole equivalents/day (Eq/d)), and high PPI dose (>20 mg omeprazole Eq/d) [23]. Tests of linear trend were conducted by assigning the median of daily PPI dose equivalents in subgroups treated as a continuous variable. To investigate the association between PPI use and ID, we performed logistic regression analyses with adjustment for the same potential confounders used in the multivariable linear regression analyses. In sensitivity analyses, H2RA users (n = 20) were excluded to assess the robustness of the association between PPI use and ID. Additionally, we performed sensitivity analyses using an alternative definition of ID (TSAT < 20% and ferritin < 100 μg/L), as proposed in a position statement by the European Renal Best Practice (ERBP) group and previously recommended in the United Kingdom-based National Institute for Health and Care Excellence (NICE) guideline (NG8) [24,25]. A two-sided *p*-value < 0.05 was considered statistically significant in all analyses.
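The three-group dose categorization described above can be sketched as follows (illustrative only; the conversion of each individual PPI to omeprazole equivalents follows reference [23] and is not reproduced here):

```python
def ppi_dose_group(omeprazole_eq_mg_per_day):
    """Assign a recipient to one of the study's three exposure groups
    based on daily PPI dose in omeprazole equivalents (mg Eq/d)."""
    if omeprazole_eq_mg_per_day is None or omeprazole_eq_mg_per_day == 0:
        return "no PPI"
    # Cutoff of 20 mg Eq/d separates the low- and high-dose groups.
    return "low PPI dose" if omeprazole_eq_mg_per_day <= 20 else "high PPI dose"
```

A standard 20 mg omeprazole dose thus falls in the low-dose group, while double-dose regimens fall in the high-dose group.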

#### **3. Results**

#### *3.1. Baseline Characteristics*

Baseline characteristics are shown in Table 1. At baseline, RTR were 53 ± 13 years old and 382 (59.1%) were male. Mean BMI was 26.7 ± 4.8 kg/m<sup>2</sup>, and 157 (24.3%) had diabetes. RTR were included at a median of 5.3 (1.8–12.0) years after transplantation. Mean eGFR was 53.5 ± 19.9 mL/min/1.73 m<sup>2</sup> and 135 (21.0%) had proteinuria. Mean serum iron and median ferritin concentrations were 15.2 ± 5.9 μmol/L and 115.5 (53.0–213.3) μg/L, respectively. Mean hemoglobin concentration was 13.3 ± 1.7 g/dL and mean TSAT was 25.1 ± 10.5%. Iron deficiency was present in 193 (29.9%) RTR. PPIs were used by a small majority of 363 (56.2%) RTR, and omeprazole was the most often prescribed PPI (n = 317). Other PPIs used included esomeprazole (n = 28), pantoprazole (n = 15), and rabeprazole (n = 3). RTR who used PPIs were older than RTR who did not use PPIs, had a higher BMI, and had a shorter time between transplantation and baseline measurements. Furthermore, diabetes was more prevalent in RTR using PPIs, and PPI users had higher glucose and HbA1c levels and lower levels of hemoglobin, iron, ferritin, and TSAT. Dietary iron intake was not significantly different between PPI users and PPI non-users. Additionally, CNIs, MMF, diuretics, anti-diabetic drugs, and antiplatelet drugs were more often used by PPI users than by PPI non-users.


**Table 1.** Baseline characteristics of 646 renal transplant recipients.

Data are presented as mean ± SD, median with interquartile ranges (IQR) or number with percentages (%). Abbreviations: BMI, body mass index; eGFR, estimated glomerular filtration rate; Hb, hemoglobin; HbA1c, hemoglobin A1c; HsCRP, high-sensitivity C-reactive protein; PPI, proton-pump inhibitor; RAAS-inhibitors, renin-angiotensin-aldosterone system inhibitors.

#### *3.2. Association of PPI Use with Iron Status Parameters*

In univariable linear regression analyses, PPI use was associated with a 2.18 μmol/L lower serum iron (95% CI: −3.09 to −1.27, *p* < 0.001), a 0.34 lower natural log-transformed serum ferritin (95% CI: −0.49 to −0.18, *p* < 0.001), a 3.9% lower TSAT (95% CI: −5.5 to −2.3, *p* < 0.001), and a 0.52 g/dL lower hemoglobin level (95% CI: −0.78 to −0.25, *p* < 0.001). The association between PPI use and lower iron status parameters remained independent of adjustment for potential confounders, as shown in Table 2.


Model 6: model 5 + adjustment for other medication use (diuretic use, RAAS-inhibition, antiplatelet therapy, CNI use, and prednisolone use). Abbreviations: CNI, calcineurin inhibitor; Ln, natural log transformed; MMF, mycophenolate mofetil; RAAS-inhibitors, renin-angiotensin-aldosterone system inhibitors.
*J. Clin. Med.* **2019**, *8*, 1382

#### *3.3. Association of PPI Use with ID*

In crude logistic regression analysis, PPI use was associated with ID (OR: 1.95; 95% CI 1.37–2.77, *p* < 0.001), as shown in Table 3. The association remained independent of adjustment for age, sex, eGFR, proteinuria, time since transplantation, and history of GI-disorders (OR: 1.57; 95% CI 1.07–2.31, *p* = 0.02). Further adjustment for lifestyle parameters, including dietary iron intake (OR: 1.57; 95% CI 1.04–2.38, *p* = 0.03), and for inflammation (OR: 1.56; 95% CI 1.06–2.30, *p* = 0.03) did not materially affect the association. In model 5, we adjusted for MMF use, which is known for its myelosuppressive nature. In this model, PPI use remained independently associated with ID (OR: 1.57; 95% CI 1.07–2.31, *p* = 0.02). The association between PPI use and ID lost significance when we additionally adjusted for other medication use (OR: 1.43; 95% CI 0.96–2.12, *p* = 0.08). In further models, in which we adjusted separately for each type of medication, it appeared that mainly diuretic use contributed to the attenuation of the association (Table S3). Associations of all potential confounders with ID are provided in Table S4. These analyses demonstrated that, besides PPI use, female sex, proteinuria, time since transplantation, diuretic use, and CNI use were also independently associated with ID.
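The odds ratios and confidence intervals reported in these models derive from exponentiated logistic regression coefficients and their Wald confidence limits; a minimal sketch of that relationship (illustrative only, with hypothetical numbers; the study's analyses were run in SPSS):

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Exponentiate a logistic regression coefficient (beta) and its
    Wald 95% confidence limits (beta +/- 1.96 * SE) to obtain an odds
    ratio with its 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)
```

For instance, a coefficient of 0.45 with a standard error of 0.20 yields OR ≈ 1.57 (95% CI ≈ 1.06–2.32), comparable in magnitude to the adjusted estimate above.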


**Table 3.** Logistic regression analyses investigating the association of PPI use with iron deficiency in 646 renal transplant recipients.

Model 1: PPI use adjusted for age and sex. Model 2: model 1 + adjustment for eGFR, proteinuria, time since transplantation, history of GI-disorders. Model 3: model 2 + adjustment for lifestyle parameters (BMI, smoking behavior, alcohol use, dietary iron intake). Model 4: model 2 + adjustment for inflammation (hs-CRP). Model 5: model 2 + adjustment for MMF use. Model 6: model 5 + adjustment for other medication use (diuretic use, RAAS-inhibition, antiplatelet therapy, CNI use, and prednisolone use). Abbreviations: CNI, calcineurin inhibitor; MMF, mycophenolate mofetil; RAAS-inhibitors, renin-angiotensin-aldosterone system inhibitors.

#### *3.4. Dose-Response Analyses*

In this study, 237 RTR received a low PPI dose (≤20 mg omeprazole Eq/d) and 126 RTR received a high PPI dose (>20 mg omeprazole Eq/d). As shown in Table 4 and Figure 1, the point estimate of the odds ratio in RTR taking a high PPI dose as compared to RTR taking no PPIs (OR: 2.30; 95% CI 1.46–3.62, *p* < 0.001) was higher than in RTR taking a low PPI dose (OR: 1.78; 95% CI 1.21–2.62, *p* = 0.004). After adjustment for potential confounders, PPI use remained associated with ID in patients taking a high PPI dose (OR: 1.73, 95% CI 1.05–2.86, *p* = 0.03), but not in RTR taking a low PPI dose (OR: 1.29, 95% CI 0.84–1.98, *p* = 0.25), as shown in Table 4.


**Table 4.** Subgroup analyses of the association of PPI use with iron deficiency in 646 stable renal transplant recipients.

MMF, mycophenolate mofetil; RAAS-inhibitors, renin-angiotensin-aldosterone system inhibitors.

**Figure 1.** Crude association between PPI use and risk of iron deficiency stratified by subgroups of PPI use: no PPI, low PPI dose (≤20 mg omeprazole Eq/d), and high PPI dose (>20 mg omeprazole Eq/d). Presented are odds ratios with 95% confidence intervals. \*\* and \* indicate significant *p*-values compared with the no-PPI subgroup.

#### *3.5. Sensitivity Analyses for Risk of ID*

In sensitivity analyses, H2RA users (n = 20) were excluded from the analyses (Table S1). The association between PPI use and risk of ID remained materially unchanged when H2RA users were excluded (OR: 1.99, 95% CI 1.39–2.86, *p* < 0.001). Moreover, the association between PPI use and ID became slightly stronger when the alternative definition of ID (TSAT < 20% and ferritin < 100 μg/L) was used (OR: 2.90, 95% CI 1.94–4.35, *p* < 0.001), and remained significant independent of adjustment for potential confounders (Table S2).

#### *3.6. Description of Excluded RTR Receiving Oral Iron Supplementation*

Baseline differences between RTR with oral iron supplementation and without oral iron supplementation are described in the supplemental results and are demonstrated in Table S5.

#### **4. Discussion**

In this study, we demonstrate that PPI use is associated with lower iron status and ID in a large cohort of stable RTR. Remarkably, the association between PPI use and risk of ID remained independent of adjustment for important potential confounders, and appeared to be independent of dietary iron intake, a finding that has not been shown previously. Furthermore, we found that RTR using a high PPI dose have a higher risk of ID. These results indicate that PPI use possibly contributes to the high burden of post-transplantation ID in RTR.

During the past few years, several case reports have demonstrated a relationship between PPI use and the occurrence of IDA [11,26]. Recently, these findings have been strengthened by two large population-based cohort studies demonstrating an increased risk of ID among subjects from the general population [8,9]. Lam et al. were the first to observe in a large population that chronic use of both PPIs and H2RAs was associated with an increased risk of ID (adjusted OR: 2.49 for PPI use and 1.58 for H2RA use) [9]. A recent study in a large U.K. population found that the risk of ID was 3.6 times higher in subjects using PPIs for at least one year continuously, i.e., with time gaps between PPI prescriptions of less than 30 days [8]. Consistent with our findings, both studies found a positive dose-response relationship, which suggests a potential causal effect of PPIs. However, compared to these studies, the adjusted odds ratios in our study were lower. This may in part be explained by a higher predisposition to ID in RTR compared to subjects from the general population. ID is highly prevalent after renal transplantation and its etiology is multifactorial. For example, high hepcidin and interleukin-6 levels as a result of inflammatory conditions after transplantation may lead to lower intestinal iron uptake due to downregulation of the ferroportin transporter responsible for iron transport across the enterocyte [13,27,28]. Furthermore, insufficient iron stores at the time of transplantation, perioperative blood loss, and inadequate intake of iron-rich vegetables may add to the risk of ID in RTR [29]. Another potential explanation for the lower odds ratios found in our study might be the relatively high frequency of the CYP2C19\*17 variant in Caucasian populations, which results in ultra-rapid hepatic metabolism of PPIs [30]. Therefore, the association between PPI use and ID might be more pronounced in populations with a lower frequency of this CYP polymorphism, such as Asian populations, in which the slow-metabolizer phenotype is more common [31]. Interestingly, the present study shows that PPI therapy also appears to be an important risk factor for post-transplantation ID. Since this is a modifiable risk factor, we think this finding is worth discussing, given that clinicians may not be aware of the additional risk that PPI use constitutes in RTR.

In contrast to our study, no association was found between PPI use and ID in a cohort study of patients with Zollinger–Ellison syndrome [32]. However, these results cannot simply be extrapolated to other populations, and the negative effect of PPIs on enteric iron absorption is likely less pronounced in patients with gastric acid hypersecretion. In another study, among 34 patients with primarily reflux esophagitis, there was also no clear evidence that chronic PPI therapy led to decreased levels of serum iron and ferritin [33].

Several mechanisms by which PPI use may induce ID have been proposed in the literature. The main postulated mechanism is decreased intestinal absorption of dietary non-heme iron as a consequence of reduced gastric acid secretion by PPIs [34]. In contrast to heme iron, dietary non-heme iron is highly dependent on gastric acid for its absorption [13]. Non-heme iron remains soluble as long as the environment remains acidic and reducing; the latter is necessary to form ferrous iron. It has been shown that absorption fails in an environment with a pH above 2.5 [35]. This theory is supported by a study from Hutchinson et al., who demonstrated that absorption of non-heme iron was lower in patients with hereditary hemochromatosis after the use of PPIs for seven days [34].

However, other factors by which PPIs could affect iron absorption have been reported. For example, vitamin C is known to facilitate non-heme iron absorption, since it is a strong reducing agent. Secretion of vitamin C by gastric cells is dependent on intragastric pH, and decreased bioavailability of vitamin C has been demonstrated in *Helicobacter pylori*-positive and -negative subjects after 28 days of omeprazole administration [36]. A low vitamin C intake may add to this, which may be a consequence of patients being overly cautious and avoiding all citrus juices, when they actually only need to avoid pomelo-containing juices to prevent an interaction with CNIs [37]. This suggestion is corroborated by our recent finding that vitamin C depletion is very common in RTR [38]. Moreover, interactions between the gut microbiome and iron bioavailability have been reported in the literature [39–41]. It is known that PPIs markedly alter the composition of the gut microbiome, which may potentially affect intestinal iron absorption [42]. It is furthermore known that 50% of patients do not take their long-term therapy for chronic diseases as prescribed [43]. This is an additional unknown factor that could influence the results and weaken the associations that we found.

To our knowledge, the influence of iron intake on the association between PPI use and ID has not been previously investigated. Lam and colleagues argued that possibly only subjects with low-normal iron levels or with a low dietary iron intake may become iron deficient [9]. In the present study, the association between PPI use and ID remained unchanged after adjustment for dietary iron intake, which shows that the association between PPI use and ID is not confounded by iron intake. Besides PPI use, we also found that female sex, proteinuria, time since transplantation, diuretic use, and CNI use were independently associated with ID. To date, no evidence has been found linking diuretic use to ID [44]. However, it cannot be excluded that diuretics adversely affect iron status via decreased tubular reabsorption, resulting in increased urinary excretion of iron. The same applies to proteinuria, which is a sign of glomerular damage, tubular damage, or both. This may lead to increased protein filtration, decreased protein reabsorption, or both, which may result in increased urinary loss of transferrin-bound iron [45]. We also found that CNI use was independently associated with ID. In a previous study among liver transplant recipients, erythropoietin production and hematocrit levels were significantly reduced in CNI users; however, the association with iron deficiency was not investigated [46].

Our study has some limitations. First, our study is cross-sectional in nature, and therefore we cannot infer causality. In addition, we cannot exclude the possibility that the observed associations between PPI use and decreased iron status parameters and ID are caused by residual confounding or indication bias. However, several analyses were performed to decrease this possibility. As described above, adjustment for iron intake and for history of gastritis or peptic ulcer disease, conditions that can be the cause of a lower iron status, did not change the association between PPI use and ID. Moreover, in sensitivity analyses we excluded RTR using H2RAs and performed logistic regression analyses using an alternative definition of ID, which did not materially change the association. However, when we adjusted for medication use in model 6, the association between PPI use and ID lost significance, which could mean two things. First, other medications may also negatively affect iron status, which attenuates the effect of PPIs on ID. Second, RTR using other medications may be more prone to ID compared to non-users. Furthermore, soluble transferrin receptor (sTfR) measurements were not available, and it should be noted that, other than recording a history of overt GI-disorders, patients were not thoroughly screened for the presence of GI-disorders at the moment of sampling. Lastly, this is a single-center study consisting of predominantly Caucasian RTR, which may limit the generalizability of the results to other populations.

Our study also has several strengths. It is the first study of its kind to investigate the association of PPI use with iron status parameters and ID in a large cohort of stable RTR. The main strength is the well-characterized cohort of RTR, in which multiple iron status parameters and dietary iron intake were measured. Extensive data collection made it possible to correct for many possible confounding factors, including lifestyle parameters, inflammation, and medication use. Lastly, to define ID we used a definition previously used in chronic kidney disease and in RTR, covering both functional and absolute iron deficiency [3,17,18].

#### **5. Conclusions**

In conclusion, we demonstrated that PPI use is associated with lower iron status and ID, indicating impaired intestinal absorption of iron, potentially related to reduced gastric acid secretion. Moreover, we demonstrated that the association was stronger among RTR taking a high PPI dose. Taken together, these results suggest that PPI use is an important modifiable factor that potentially contributes to the high burden of ID after renal transplantation. Based on these results, active management of iron status should be advised in RTR on chronic PPI therapy. Reevaluation of the treatment indication, or switching to a less potent acid-suppressing drug such as an antacid or H2RA, might also be considered in RTR with ID. Potential clinical consequences associated with ID underscore this, including premature mortality [3] and severely disabling restless legs syndrome, which has a reported prevalence of 51.5% among RTR [47]. Since the majority of studies investigating the association between PPI use and ID are observational, randomized controlled trials are needed to determine a causal effect of PPI use on iron status.


**Supplementary Materials:** The following are available online at http://www.mdpi.com/2077-0383/8/9/1382/s1. Supplemental Results: Description of excluded RTR receiving oral iron supplementation. Table S1: Logistic regression analyses investigating the association of PPI use with ID in 626 stable RTR (H2RA users excluded). Table S2: Logistic regression analyses investigating the association of PPI use with ID (TSAT < 20% and ferritin < 100 μg/L) in 646 RTR. Table S3: Logistic regression analyses investigating the effect of medication use on the association of PPI use with ID in 646 RTR. Table S4: Logistic regression analyses investigating the association of PPI use with ID (TSAT < 20% and ferritin < 300 μg/L) in 646 RTR. Table S5: Baseline characteristics of RTR with and without oral iron supplementation.

**Author Contributions:** Data curation, R.M.D., A.W.G.-N., E.v.d.B. and S.J.L.B.; formal analysis, R.M.D., A.W.G.-N. and S.J.L.B.; methodology, R.M.D., A.W.G.-N., M.F.E., J.S.J.V., M.H.d.B., S.P.B., D.J.T., E.H., H.B., G.N. and S.J.L.B.; writing—original draft preparation, R.M.D., A.W.G.-N.; writing—review and editing, R.M.D., A.W.G.-N., M.F.E., J.S.J.V., M.H.d.B., E.v.d.B., S.P.B., D.J.T., E.H., H.B., G.N. and S.J.L.B.; supervision, H.B., G.N. and S.J.L.B.; funding acquisition, R.M.D., E.v.d.B. and S.J.L.B.

**Funding:** This study was funded by Top Institute Food and Nutrition (grant A-1003). R.M. Douwes is supported by NWO/TTW in a partnership program with DSM Animal Nutrition and Health, The Netherlands (grant number 14939).

**Conflicts of Interest:** Dr. De Borst is the principal investigator of a clinical trial supported by the Dutch Kidney Foundation (grant no 17OKG18) and Vifor Fresenius Medical Care Renal Pharma. He received consultancy fees (to employer) from Amgen, Astra Zeneca, Bayer, Kyowa Kirin, Sanofi Genzyme, and Vifor Fresenius Medical Care Renal Pharma. None of these entities had any role in the design, collection, analysis, and interpretation of data for the current study, nor in writing the report or the decision to submit the report for publication. The other authors declare that they have no other relevant financial interests.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Native Nephrectomy before and after Renal Transplantation in Patients with Autosomal Dominant Polycystic Kidney Disease (ADPKD)**

**Andreas Maxeiner 1,†, Anna Bichmann 2,†, Natalie Oberländer 1, Nasrin El-Bandar 1, Nesrin Sugünes 1, Bernhard Ralla 1, Nadine Biernath 1, Lutz Liefeldt 3, Klemens Budde 3, Markus Giessing 4, Thorsten Schlomm <sup>1</sup> and Frank Friedersdorff 1,\***


Received: 13 August 2019; Accepted: 27 September 2019; Published: 4 October 2019

**Abstract:** The aim of this study was (1) to evaluate and compare pre-, peri-, and post-operative data of Autosomal Dominant Polycystic Kidney Disease (ADPKD) patients undergoing native nephrectomy (NN) either before or after renal transplantation and (2) to identify advantages of optimal surgical timing, postoperative outcomes, and economic aspects in a tertiary transplant centre. This retrospective analysis included 121 patients divided into two groups. Group 1 comprised patients who underwent NN prior to receiving a kidney transplant (*n* = 89), and group 2 comprised patients who underwent NN post-transplant (*n* = 32). Data analysis covered demographic patient details, surgical indication, laboratory parameters, perioperative complications, underlying pathology, and associated mortality. There was no significant difference in patient demographics between the groups; however, right-sided nephrectomy was performed predominantly in group 1. The main indication for nephrectomy in both groups was pain. Patients in group 2 had no postoperative kidney failure and a significantly shorter hospital stay. Higher rates of more severe complications were observed in group 1, although this was not statistically significant. Even though the differences between the groups were substantial, the timing of NN before or after transplantation does not seem to affect short-term or long-term transplantation outcomes. Retroperitoneal NN remains a low-risk treatment option in patients with symptomatic ADPKD and can be performed either pre- or post-kidney transplantation depending on patients' symptom severity.

**Keywords:** ADPKD; native nephrectomy; kidney transplantation; patient outcome; perioperative complications

#### **1. Introduction**

Autosomal dominant polycystic kidney disease (ADPKD) is the fourth most frequent cause of end-stage renal disease (ESRD) in Europe, accounting for around 10–15% of patients on dialysis and 9% to 10% of renal transplantations [1,2]. ADPKD patients develop progressive expansion of multiple bilateral cysts in the renal parenchyma, causing a deterioration of their glomerular filtration rate (GFR) [3]. Patients with ADPKD often develop recurrent urinary tract infections, nephrolithiasis, and back or abdominal pain. Approximately one-fifth of ADPKD patients will require unilateral or bilateral nephrectomy at some point in their life [4–6]. Because the clinical presentation is heterogeneous, ranging from asymptomatic to very severe, treatment options are highly variable. In comparison to other forms of renal replacement therapy, kidney transplantation is the option of choice in most ESRD patients, with improved survival and lower morbidity [7]. However, the optimal time for nephrectomy in ADPKD patients awaiting renal transplantation remains a matter of debate. Furthermore, the severity of clinical symptoms may also influence patients' wishes to undergo nephrectomy. Some previously published research does not recommend pre-transplant nephrectomy due to associated increased morbidity and mortality [8,9]. Others suggest a "sandwich technique", whereby the most severely affected native kidney is removed before, and the remaining polycystic kidney after, transplantation [10,11]. Concomitant nephrectomy and transplantation is another method described in the literature [6,12], predominantly used for ADPKD patients scheduled for living donor kidney transplants.
The aim of this study was (1) to evaluate and compare pre-, peri-, and post-operative data of ADPKD patients undergoing native nephrectomy either before or after renal transplantation and (2) to identify advantages of optimal surgical timing, postoperative outcomes, and economic aspects in a tertiary transplant centre.

#### **2. Methods**

#### *2.1. Patient and Study Design*

The retrospective analysis included 141 patients with ADPKD who underwent unilateral surgical nephrectomy between January 2005 and December 2018. Twenty patients were excluded due to incomplete data, and three patients who underwent sequential bilateral nephrectomy were also excluded. Group 1 included nephrectomy patients who were on dialysis prior to kidney transplantation (*n* = 89), and group 2 comprised patients who had post-transplant nephrectomy (*n* = 32). Data analysis covered demographic patient details, surgical indication, laboratory parameters, perioperative complications, underlying pathology, and associated mortality. Patients in group 2 received standard triple maintenance immunosuppression consisting of tacrolimus or cyclosporin A in combination with mycophenolate mofetil and prednisolone.

#### *2.2. Surgical Procedure*

The operation was performed via a unilateral flank incision of 20–25 cm under perioperative antibiotic treatment. A strictly extra-peritoneal surgical preparation was performed; if an intraoperative peritoneal laceration occurred, it was repaired immediately. The vessel hilum was sealed using three Hem-o-lok clips. Surgical drains were placed intraoperatively and remained in place postoperatively. Figure 1 shows a removed polycystic kidney after retroperitoneal nephrectomy.

**Figure 1.** Polycystic kidney preparation after retroperitoneal nephrectomy.

#### *2.3. Statistical Analysis*

Statistical analyses were performed using SPSS version 25 (IBM Corp., Armonk, NY, USA). Both univariate and multivariate analyses were applied to identify risk factors for complications following cystic kidney removal, both before and after kidney transplantation. Baseline characteristics were compared using the Chi-squared test and Fisher's exact test for categorical variables. Continuous variables were tested with the Student's t-test or the Mann–Whitney U-test if the assumption of a Gaussian distribution was not fulfilled. Results were reported as means and standard deviations (SD) for continuous variables; categorical variables were reported as numbers and percentages. For all statistical measures, a *p*-value < 0.05 was considered significant. Odds ratios (OR) were calculated together with 95% confidence intervals.
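As a minimal illustration of the categorical comparison described above, the 2×2 chi-squared statistic can be computed directly; the counts below are reconstructed from the cardiovascular-comorbidity percentages reported in the Results and serve only as an illustration, not as a reproduction of the study's analysis.

```python
# Pure-Python 2x2 chi-squared statistic (1 degree of freedom), as used
# to compare categorical baseline characteristics between two groups.
def chi2_2x2(a, b, c, d):
    """Chi-squared statistic for the contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Cardiovascular disease: group 1, 74/89 affected (83.1%); group 2,
# 26/32 affected (81.3%) -- counts derived from the reported percentages.
stat = chi2_2x2(74, 89 - 74, 26, 32 - 26)
print(round(stat, 3))  # 0.059 -- far below the 3.84 critical value, i.e. not significant
```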

#### **3. Results**

#### *3.1. Demographic Data*

Of the 121 included patients with ADPKD, 89 underwent nephrectomy prior to kidney transplantation (group 1) and 32 underwent nephrectomy post-transplant (group 2). Patients' demographic data are displayed in Table 1 below.


**Table 1.** Demographic data.

Group 1: pre-transplant, Group 2: post-transplant. \*, statistically significant; BMI, body mass index.

There was no significant difference in patient demographics between the groups, although right-sided nephrectomy was predominantly performed in group 1 (*p* = 0.02). The main comorbidities in both groups were cardiovascular diseases (group 1: 83.1% versus group 2: 81.3%; *p* = 0.808), most commonly coronary artery disease, hypertension, and peripheral vascular disease.

#### *3.2. Indications*

Table 2 shows the individual indications for a nephrectomy.


**Table 2.** Indications for a nephrectomy.

Group 1: pre-transplant, group 2: post-transplant.

#### *3.3. Patient Outcome Analysis*

Among patients in group 2, serum creatinine rose mildly from an average pre-operative level of 1.47 mg/dL to 1.61 mg/dL postoperatively. No patient had peri-operative kidney failure. In all 32 cases undergoing post-transplantation nephrectomy, the mildly elevated postoperative creatinine levels remained stable (3 months: 1.62 mg/dL, 6 months: 1.64 mg/dL, 1 year: 1.69 mg/dL, and 3 years: 1.64 mg/dL). The pre- to post-operative haemoglobin drop did not differ significantly between the groups (group 1: 2.2 g/dL versus group 2: 2.5 g/dL; *p* = 0.468), nor did surgical time (group 1: 175 min versus group 2: 170.5 min; *p* = 0.541), although a significant difference in the duration of hospital admission was observed (group 1: 7 days versus group 2: 6 days; *p* = 0.001). The pathological assessment of polycystic nephrectomy samples showed an approximately 3% rate of renal cell carcinoma in both groups (group 1: 3.4% versus group 2: 3.1%; *p* = 1.0). No statistical difference was found in the rates of acute inflammation in the pathological report (group 1: 15.6% versus group 2: 5.6%; *p* = 0.127). Furthermore, there was no significant difference between the chronic renal inflammation rates (group 1: 61.8% versus group 2: 71.9%; *p* = 0.307), defined as low-grade chronic systemic inflammation characterized by persistent, low to moderate levels of one or more circulating inflammation markers, such as white blood cell count, C-reactive protein, and procalcitonin. However, a significant difference was observed in the median weight of the removed kidney (group 1: 2600 g versus group 2: 1683 g; *p* = 0.004). Concerning postoperative complication rates, group 1 had a higher prevalence of 43.8% compared to 37.5% in group 2, although this was not statistically significant (*p* = 0.936).
The complications within group 1 were classified as Clavien-Dindo 1 in 7.9% and as Clavien-Dindo 2 in 22.5% of patients. Those categorized as Clavien-Dindo 3 (7.9%) included two patients with a pneumothorax and one patient with a pancreatic injury. Severe complications (Clavien-Dindo 4: 5.5%) included two patients, one of whom required laparotomy on the second postoperative day due to a retroperitoneal abscess, while the other suffered a pulmonary embolism with subsequent cardiac arrest and return of spontaneous circulation upon resuscitation. A total of three patients in group 1 died (3.4%), two of severe sepsis and one of hypoglycaemic shock. Complications within group 2 were mostly minor (Clavien-Dindo 1: 9.4%, Clavien-Dindo 2: 25%); only one patient was classified as Clavien-Dindo 3, and no patient in group 2 was categorized as Clavien-Dindo 4 or died. Among all outcome parameters, the multivariate analysis identified the following as significant risk factors for a prolonged hospital stay: age (*p* < 0.001) and undergoing native nephrectomy prior to transplantation (group 1) (*p* = 0.013). Male sex, body mass index (BMI), organ weight, duration of operation, and time on dialysis were not significant risk factors for a prolonged hospital stay.

#### **4. Discussion**

ADPKD is the most common inherited kidney disease, affecting over 12 million patients with associated terminal renal failure, and represents the fourth leading cause for dialysis worldwide. Patients suffering from ADPKD develop progressive expansion of multiple bilateral cysts in the renal parenchyma, causing a deterioration of GFR [3]. Patients with ADPKD often develop recurrent urinary tract infections, nephrolithiasis, and back or abdominal pain. Approximately one-fifth of ADPKD patients will require unilateral or bilateral nephrectomy at some point in their life [4–6]. It is still unclear whether patients undergoing nephrectomy post-transplant have higher complication rates. Our study was not able to demonstrate a significant difference between the groups, even though the prevalence of complications was higher within group 1 (43.8% versus 37.5%).

The indications for nephrectomy were often multiple for each patient, and the major indications are listed in Table 2. Overall, the main indication was pain, present in over 50% of the patients. Further indications for nephrectomy need to be critically evaluated and can be based on intra-abdominal space issues, uncontrolled hypertension, and cystic bleeding [6,7,13]. Because additional liver cysts are present in up to 80% of patients with ADPKD [14], intra-abdominal space problems occur predominantly on the right side, leaving the contralateral kidney comparatively less affected. The statistically significant difference in organ weight, as stated previously (2600 g versus 1683 g, *p* = 0.004), seems to support this hypothesis. In addition, kidney volume seems to be an early marker of disease severity and has been shown to be a determinant of a reduction in kidney function [3].

Consistent with the literature, no difference in average age at the time of nephrectomy was observed (*p* = 0.927) [7,15], and a higher rate of male patients was also reported in our cohort despite ADPKD being an autosomal dominant disease [6,13,15]. Lifestyle and preventive factors also need to be addressed in ADPKD. Patients can slow disease progression by controlling hypertension through a low-salt diet [16,17]. These statements have limited external validity, as the underlying publications involved low case numbers of patients with ADPKD reviewed at the time of nephrectomy. Nevertheless, the assumption can be made that disease progression can be delayed by a healthy lifestyle. Studies have shown that females have higher health awareness in their daily living [18], which could also explain the male-dominated cohort. However, cyst expansion can cause ischemia within the kidney and, consequently, activation of the renin-angiotensin-aldosterone system (RAAS), leading to the development and/or maintenance of hypertension [19]. Hence, patients might benefit from native nephrectomy if hypertension is predominant. On the other hand, preserving a patient's urine excretion might also preserve quality of life with respect to daily fluid intake.

Published research on patients carrying polycystic kidney disease 1 (PKD1) mutations shows worse and faster disease progression in male patients and in those developing hypertension before the age of 35 years [20]. The average BMI within our cohort was within the normal range of around 25 kg/m2. A known disadvantage of using BMI as a reference is that it does not account for water and muscle mass, both of which vary greatly in these patients. The observation that BMI does not increase in pre-transplant patients despite significant edema has not yet been discussed in published research.

In our series, only unilateral nephrectomies were carried out. In the 1970s, practice shifted to bilateral nephrectomy prior to renal transplantation, which reduced infection-based complications [21]. Nevertheless, higher postoperative complication rates were observed, including worsening anemia and loss of diuresis [22]. Within the past few decades, bilateral nephrectomy case numbers have decreased due to improved medication and stricter surgical indications. Fuller et al. reviewed a small cohort of 32 patients who underwent simultaneous or sequential bilateral nephrectomy [6]. Out of 25 studied patients, six had simultaneous bilateral native nephrectomy, with higher rates of blood transfusion, increased antibody production, and worse post-transplant outcomes [6]. Hence, the authors advised against bilateral simultaneous intervention. Furthermore, 3% of patients in both of our groups were found to have a histological diagnosis of coincidental cancer, which is in concurrence with the literature published to date [6,7,13]. Histological analysis within the post-transplant group showed higher rates of inflammation, potentially due to the effects of immunosuppressive medication; these findings have not been published elsewhere. We suggest taking samples from potentially inflamed cysts intraoperatively, after which extended antibiotic cover with lipophilic properties can be discussed.

However, our study found no significant differences between patients undergoing native nephrectomy pre- or post-transplant. The role of transplantation and subsequent immunosuppressive therapy therefore seems limited, as group 2 had fewer and less severe post-operative complications. The patients who died in group 1 had not received transplants. Chebib et al. likewise reported fewer complications in the cohort undergoing nephrectomy post-transplant [15], and similar observations were published by Kirkman et al. [7]. Thus, the presumed increased post-operative risk from immunosuppressive therapy with respect to wound healing and infection rates cannot be supported. The significantly shorter hospital stay of the post-transplant patients in our cohort is also consistent with the literature [6].

Despite our findings, we acknowledge limitations of the present study and potential sources of bias. The retrospective design as well as the limited number of patients in group 2 might confound our results. Furthermore, the exclusion of 23 patients due to missing data reduced the potential study cohort. In addition, analysis of short-term and long-term transplantation outcomes (graft loss, delayed graft function, acute rejection, bacterial and cytomegalovirus (CMV) infection, and post-transplant diabetes mellitus) between the groups was not included.

#### **5. Conclusions**

In conclusion, our study demonstrates that open retroperitoneal nephrectomy represents a low-risk management option in patients with symptomatic ADPKD, and post-transplant nephrectomy does not seem to be associated with higher complication rates. Hence, the timing and indication of native nephrectomy should be based primarily on symptom severity rather than on the date of transplantation.

**Author Contributions:** F.F., K.B., L.L., and B.R. designed the study; N.O. and N.S. analyzed the data; A.M., F.F., and A.B. wrote the manuscript; N.E.-B., N.B., M.G., F.F., and T.S. drafted and revised the paper; all authors approved the final version of the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Urinary Epidermal Growth Factor/Creatinine Ratio and Graft Failure in Renal Transplant Recipients: A Prospective Cohort Study**

**Manuela Yepes-Calderón 1, Camilo G. Sotomayor 1,\*, Matthias Kretzler 2, Rijk O. B. Gans 3, Stefan P. Berger 1, Gerjan J. Navis 1, Wenjun Ju <sup>2</sup> and Stephan J. L. Bakker <sup>1</sup>**


Received: 13 August 2019; Accepted: 12 October 2019; Published: 13 October 2019

**Abstract:** Graft failure (GF) remains a significant limitation to improving long-term outcomes in renal transplant recipients (RTR). Urinary epidermal growth factor (uEGF) is involved in kidney tissue integrity, and a reduction of its urinary excretion is associated with fibrotic processes and a wide range of renal pathologies. We aimed to investigate whether, in RTR, uEGF is prospectively associated with GF. In this prospective cohort study, RTR with a functioning allograft for ≥1 year were recruited and followed up for three years. uEGF was measured in 24-hour urine samples and normalized to urinary creatinine (Cr). Its association with the risk of GF was assessed by Cox regression analyses and its predictive ability by the C-statistic. In 706 patients, median uEGF/Cr at enrollment was 6.43 [IQR 4.07–10.77] ng/mg. During follow-up, 41 (6%) RTR developed GF. uEGF/Cr was inversely associated with the risk of GF (HR 0.68 [95% CI 0.59–0.78]; *P* < 0.001), which remained significant after adjustment for immunosuppressive therapy, estimated glomerular filtration rate, and proteinuria. The C-statistic of uEGF/Cr for GF was 0.81 (*P* < 0.001). We conclude that uEGF/Cr is independently and inversely associated with the risk of GF and shows strong predictive ability for this outcome. Further studies seem warranted to elucidate whether uEGF might be a promising marker for use in clinical practice.

**Keywords:** epidermal growth factor; creatinine; graft failure; renal transplantation.

#### **1. Introduction**

Although short-term graft survival has improved greatly in recent decades, chronic graft failure remains a major clinical challenge for renal transplantation, with no significant reduction achieved in the same time frame [1]. Graft failure is a culmination of several factors, including chronic rejection, toxicity of calcineurin inhibitors, infection, hypertension, oxidative stress, and proteinuria, together leading to progressive fibrosis and loss of renal function [2–5]. In clinical settings, most biomarkers used for follow-up, e.g., urinary albumin excretion and urinary protein excretion, are indicators of glomerular damage [6] and do not properly reflect the development of fibrosis, which is an early event in the natural history of chronic rejection [3]. Finding non-invasive biomarkers that reflect the pathophysiological changes in the renal tissue would be of remarkable utility, as such tools could be used to monitor patients and identify, in a timely manner, those at high risk of graft failure [7] who could benefit from further interventions and stricter follow-up before structural damage is present [8].

Epidermal growth factor (EGF) is a 53-amino acid peptide produced in the kidney at the ascending loop of Henle and the distal convoluted tubule [7,8]. It stimulates the proliferation and differentiation of epidermal and epithelial cells, and under normal conditions it has a critical role in renal development [7], maintenance of renal tubule integrity, and the tubular regenerative response to acute kidney injury [9–11]. However, dysregulation and chronic activation of its receptor is known to promote a pro-inflammatory response [12] and has been implicated in the development of interstitial fibrosis [13]. In clinical settings, the urinary excretion of EGF has been shown to be decreased in a wide range of kidney pathologies (e.g., diabetic nephropathy and IgA nephropathy), suggesting that it could potentially serve as a biomarker of a pathway common to several kidney tissue insults [14]. Although it would not be possible to summarize the complexity of the graft failure process with one biomarker, fibrosis is an important step towards graft failure development [2], and suppression of urinary EGF (uEGF) is an early marker of this phenomenon [15]. It may be theorized that uEGF could also be altered in patients at high risk of graft failure; however, the potential association with outcome and the predictive ability of uEGF for graft failure are yet to be evaluated.

In the current study, we aimed to investigate the hypothesis that uEGF is prospectively associated with the risk of graft failure in a large, well-phenotyped, cohort of stable renal transplant recipients (RTR). Furthermore, we aimed to evaluate the prediction ability of uEGF for graft failure.

#### **2. Materials and Methods**

#### *2.1. Study Design and Patient Population*

In this prospective cohort study, all adult RTR with a functioning graft for ≥1 year, without a history of drug addiction, alcohol addiction, or malignancy, who visited the outpatient clinic of the University Medical Center Groningen (The Netherlands) between November 2008 and May 2011 were invited to participate. In total, 707 (86%) of the 817 eligible RTR signed written informed consent. RTR with missing information on uEGF at enrollment (*n* = 57) were excluded, resulting in 649 RTR eligible for the statistical analyses (Figure 1). There were no significant differences in risk factors for graft failure between patients with complete data and patients with missing data (Table S1). The primary end point of the current study was death-censored graft failure, defined as restart of dialysis or need for re-transplantation. The patients were followed up for a total of 3 years. We contacted general practitioners or referring nephrologists in cases where the status of a patient was unknown. No participants were lost to follow-up (Figure 1). The current study was approved by the institutional review board (METc 2008/186) and adhered to the Declarations of Helsinki and Istanbul.

**Figure 1.** Participant flow diagram.

#### *2.2. Data Collection*

Data at enrollment were collected during a visit to the outpatient clinic, following a detailed protocol described elsewhere [16,17]. Systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured using a semiautomatic device (Dinamap 1846, Critikon, Tampa, Florida, USA) every minute for 15 minutes, following a strict protocol as described before [16].

Other relevant donor, recipient, and transplant information was extracted from the Groningen Renal Transplant Database [18]. Delayed graft function was defined as oliguria for 7 days, need for continuous ambulatory peritoneal dialysis, or need for >2 sessions of hemodialysis. Data collection is ensured by the continuous surveillance system of the outpatient clinic of our university hospital and close collaboration with affiliated hospitals.
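As an illustration only, the composite delayed-graft-function definition above can be encoded as a simple predicate; the field names, and the reading of "oliguria for 7 days" as at least 7 days, are assumptions for this sketch, not the study's database schema.

```python
# Sketch of the delayed-graft-function (DGF) criterion defined above:
# oliguria for 7 days (assumed here to mean >= 7 days), or need for
# continuous ambulatory peritoneal dialysis (CAPD), or need for more
# than 2 hemodialysis sessions. Field names are illustrative assumptions.
def has_delayed_graft_function(oliguria_days: int,
                               needed_capd: bool,
                               hemodialysis_sessions: int) -> bool:
    return oliguria_days >= 7 or needed_capd or hemodialysis_sessions > 2

print(has_delayed_graft_function(3, False, 2))  # False
print(has_delayed_graft_function(7, False, 0))  # True
```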

#### *2.3. Laboratory Measurements and Calculations*

According to a strict protocol, all RTR were asked to collect a 24-hour urine sample on the day before their visit to the outpatient clinic, and fasting blood samples were taken on that day. Serum creatinine was determined using the Jaffé reaction (MEGA AU510, Merck Diagnostica, Germany) and plasma glucose by the glucose oxidase method (YSI 2300 Stat Plus, Yellow Springs Instruments, Yellow Springs, OH, USA). uEGF concentration was measured by ELISA (R&D Systems, Minneapolis, MN, USA); the test has a detection range of 3.9–250 pg/mL, and the intra- and inter-plate coefficients of variation were less than 10% and 15%, respectively [15]. Urinary creatinine concentration was measured with a colorimetric detection kit (Enzo, New York, NY, USA). Finally, the concentration of uEGF was normalized to the concentration of urinary creatinine, and the resulting ratio (uEGF/Cr) was used for all analyses.

Body surface area was calculated according to the Du Bois formula [19], estimated glomerular filtration rate (eGFR) by the serum creatinine-based Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation [20], and the cumulative dose of prednisolone as the sum of the maintenance doses of prednisolone from transplantation until enrollment.
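For reference, the Du Bois formula [19] mentioned above estimates body surface area from weight (kg) and height (cm); a minimal sketch with illustrative values, not patient data:

```python
# Du Bois body-surface-area formula:
# BSA (m^2) = 0.007184 * W^0.425 * H^0.725, with weight W in kg and height H in cm.
def du_bois_bsa(weight_kg: float, height_cm: float) -> float:
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

print(round(du_bois_bsa(70, 175), 2))  # approx. 1.85 m^2 for a 70 kg, 175 cm adult
```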

#### *2.4. Statistical Analysis*

Data analyses, computations, and graphs were performed with SPSS 22.0 software (IBM Corporation, Chicago, IL, USA) and GraphPad Prism version 7 software (GraphPad Software, San Diego, CA, USA). Descriptive statistics are presented as mean ± standard deviation (SD) for normally distributed data and as median (interquartile range [IQR]) for variables with a non-normal distribution. Categorical data are expressed as number (percentage). Differences in characteristics at enrollment between patients with and without data on uEGF, and among subgroups of RTR according to tertiles of uEGF/Cr, were tested by one-way ANOVA for continuous variables with a normal distribution, the Mann–Whitney U test for continuous variables with a skewed distribution, and the χ<sup>2</sup> test for categorical variables. We also performed linear regression analyses testing the association between time after transplantation and uEGF/Cr in crude and multivariable analyses with adjustment for use of calcineurin inhibitors. For all statistical analyses, a statistical significance level of *P* ≤ 0.05 (two-tailed) was used.

Graft failure development was visualized by Kaplan–Meier curves according to tertiles of uEGF/Cr, with statistical significance among curves tested by the log-rank (Mantel–Cox) test. The prospective association of uEGF/Cr with risk of graft failure during follow-up was further examined, incorporating time to event, by means of uni- and multivariate Cox proportional-hazards regression analyses with time-dependent covariates to calculate hazard ratios (HR) and 95% confidence intervals (CI). First, we performed an unadjusted model. Afterwards we adjusted for age and sex, and additionally for the following variables: in model 2, transplant-related data (transplant vintage, pre-emptive transplantation, age and sex of donor, type of donor, and cold ischemia time); in model 3, renal transplant recipient characteristics (human leukocyte antigen [HLA] mismatch with donor and delayed graft function); in model 4, immunosuppressive therapy (usage of calcineurin inhibitors and proliferation inhibitors, and acute rejection treatment); in model 5, graft function (eGFR and urinary protein excretion); and the final model (model 6) was a combination of models 4 and 5. Schoenfeld residuals were calculated to assess whether proportionality assumptions were fulfilled. Furthermore, we tested the potential predictive ability of uEGF/Cr for graft failure by constructing a receiver operating characteristic (ROC) curve. To investigate whether uEGF might be of additional value to urinary albumin excretion and protein excretion, we calculated the individual C-statistics of these variables and then their C-statistics combined with uEGF/Cr. Moreover, we performed an F-test to check whether the difference between predictive models was significant. Positive and negative predictive values were calculated for the cut-off points of the uEGF/Cr tertiles.
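The C-statistic used above is equivalent to the concordance probability underlying the Mann–Whitney statistic; the sketch below uses synthetic uEGF/Cr values purely for illustration (the study's reported C-statistic is 0.81; the toy data here happen to give 0.95).

```python
# C-statistic (ROC AUC) as a concordance probability: the fraction of
# (event, non-event) pairs in which the event case has the lower marker
# value (here, lower uEGF/Cr predicts graft failure); ties count as 0.5.
def c_statistic(marker_events, marker_controls):
    pairs = concordant = 0.0
    for e in marker_events:
        for c in marker_controls:
            pairs += 1
            if e < c:
                concordant += 1
            elif e == c:
                concordant += 0.5
    return concordant / pairs

failures = [2.1, 3.0, 3.8, 5.0]        # synthetic uEGF/Cr, ng/mg
stable = [4.5, 6.4, 8.8, 10.7, 12.0]   # synthetic uEGF/Cr, ng/mg
print(round(c_statistic(failures, stable), 2))  # 0.95
```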

As secondary analyses, we assessed potential effect-modifications by pre-specified variables of: age, sex, eGFR, plasma creatinine concentration, proteinuria, high-sensitivity C-reactive protein (hs-CRP), acute rejection, and transplantation without dialysis (pre-emptive) by fitting models containing both main effects and their cross-product terms. Finally, we performed sensitivity analyses in which we eliminated patients with extreme values of uEGF/Cr (outside −2 and 2 standard deviations).

#### **3. Results**

#### *3.1. Characteristics at Enrollment*

In total, 649 RTR were included in the analyses, with a mean ± SD age of 53 ± 13 years; 57% were men. Patients were included at a median (IQR) of 5.28 (1.74–12.00) years after transplantation, and the uEGF/Cr ratio had a median of 6.43 (4.07–10.77) ng/mg. In crude linear regression analyses, there was no significant association between years after transplantation and uEGF/Cr (Std. β = −0.015; *P* = 0.71); however, the association became apparent after adjustment for calcineurin inhibitor usage (Std. β = −0.81; *P* = 0.046). Characteristics at enrollment of the overall RTR population and according to tertiles of uEGF/Cr are shown in Table 1. Patients in the highest uEGF/Cr tertile were older (*P* = 0.01), less often male (*P* < 0.001), and had higher eGFR (*P* < 0.001), lower urinary protein excretion (*P* < 0.001), a larger percentage of transplants from living donors (*P* < 0.001), younger donors (*P* < 0.001), and a higher percentage of male donors (*P* = 0.03). They also used less cyclosporine (*P* = 0.002) and tacrolimus (*P* < 0.001) in their immunosuppressive regimens, but more mycophenolic acid (*P* = 0.03), and a smaller percentage of them required acute rejection treatment (*P* < 0.001) (Table 1). Patients in the highest uEGF/Cr tertile also had a higher glycated hemoglobin percentage (Table 1), independent of whether they were diabetic or non-diabetic (Table S2).






*J. Clin. Med.* **2019** , *8*, 1673

#### *3.2. Prospective Analyses on Graft Failure*

During a follow-up of 3 years, 41 (6%) RTR developed graft failure. Thirty-three events (80%) occurred in the lowest tertile of uEGF/Cr, 4 (10%) in the intermediate tertile, and 4 (10%) in the highest tertile. The corresponding Kaplan–Meier curves are shown in Figure 2; the curves differed significantly according to the log-rank (Mantel–Cox) test (*P* < 0.001).

**Figure 2.** Kaplan–Meier curves by tertiles of uEGF/Cr on death-censored graft failure. Tertile 1: <4.78 ng/mg; Tertile 2: 4.78–8.80 ng/mg; Tertile 3: >8.80 ng/mg. *P* value was obtained from the log-rank (Mantel–Cox) test. uEGF/Cr, urinary epidermal growth factor/creatinine ratio.

Cox regression analyses showed that the uEGF/Cr ratio is inversely associated with the risk of graft failure (HR 0.68 [95% CI 0.59–0.78] per ng/mg), and this association is highly significant (*P* < 0.001). Further adjustment for transplantation-related data, renal transplant recipient characteristics, immunosuppressive therapy, eGFR, and urinary protein excretion did not materially change this finding. The association between uEGF/Cr and graft failure remained strongly significant in the final model, which included adjustment for both immunosuppressive therapy and graft function, with an HR of 0.79 (95% CI 0.67–0.94; *P* = 0.007) (Table 2).
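The hazard ratios quoted here are exponentiated Cox regression coefficients; that back-transformation can be sketched as follows, where the coefficient and standard error are hypothetical values chosen only for illustration.

```python
import math

def hazard_ratio(beta, se, z=1.96):
    """Convert a Cox regression coefficient and its standard error into a
    hazard ratio with an approximate 95% confidence interval:
    HR = exp(beta); CI = (exp(beta - z*se), exp(beta + z*se))."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical per-ng/mg coefficient and standard error
hr, lo, hi = hazard_ratio(-0.386, 0.07)
```

A negative coefficient yields an HR below 1, i.e., an inverse association of the marker with the hazard of graft failure.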

**Table 2.** Multivariable-adjusted associations between uEGF/Cr and graft failure in 649 RTRs.


In total 41 (6%) patients developed graft failure. Model 1: adjusted for age, sex, and transplant related data. Model 2: adjusted for age, sex, and renal transplant recipient characteristics. Model 3: Model 1 + Model 2. Model 4: adjusted for age, sex, and immunosuppressive therapy. Model 5: adjusted for age, sex, and eGFR and urinary protein excretion. Model 6: model 4 + model 5. RTRs, renal transplant recipients; uEGF, urinary epidermal growth factor.

A ROC curve assessing the predictive ability of uEGF/Cr for graft failure is displayed in Figure 3. uEGF/Cr proved to be a good predictor of the development of graft failure over the following three years (C-statistic = 0.81), with better predictive ability than urinary albumin excretion and urinary protein excretion (C-statistic = 0.78 and 0.76, respectively). The curve of uEGF/Cr was significantly different from the reference line (*P* < 0.001). Being in the first tertile of uEGF/Cr had a positive predictive value of 75% for the development of graft failure, whereas being in the third tertile had a negative predictive value of 81% (Table S3).
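The tertile-based predictive values reduce to a 2×2 table of test status against outcome; a minimal sketch with hypothetical counts, not the study's actual cell frequencies:

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive value from a 2x2 table:
    PPV = TP / (TP + FP); NPV = TN / (TN + FN)."""
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts, treating 'lowest uEGF/Cr tertile' as the positive test
ppv, npv = ppv_npv(tp=30, fp=10, tn=160, fn=40)
```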

#### **Markers of Chronic Rejection and GF**

**Figure 3.** ROC curve of uEGF/Cr for graft failure. During a follow-up of 3 years, 41 (6%) patients developed graft failure. GF, graft failure; uEGF/Cr, urinary epidermal growth factor/creatinine ratio; UAE, urinary albumin excretion; UPE, urinary protein excretion.

Urinary protein excretion and urinary albumin excretion had a C-statistic of 0.76 and 0.78, respectively. The predictive value of both variables was significantly improved after the addition of uEGF/Cr (C-statistic = 0.82; F-test for difference among models, *P* < 0.001) (Table 3).
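The C-statistics compared here are concordance probabilities: the chance that a randomly chosen patient who developed graft failure carries the higher risk score. A minimal pure-Python sketch (the scores are hypothetical; since low uEGF/Cr predicts failure, its negative would serve as the risk score):

```python
def c_statistic(case_scores, control_scores):
    """C-statistic (area under the ROC curve) computed as the fraction of
    case/control pairs in which the case has the higher risk score;
    tied scores count one half."""
    wins = ties = 0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))
```

Comparing the C-statistic of a reference model with that of the model plus uEGF/Cr is what the F-test in Table 3 formalizes.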


**Table 3.** Predictive value (C-statistic) for uEGF/Cr on top of established risk factors for graft failure.

\* *P*-value of F-test for difference between the reference model and the model plus uEGF/Cr. uEGF/Cr, urinary epidermal growth factor/creatinine ratio.

#### *3.3. Secondary and Sensitivity Analysis*

In effect-modification analyses, we found that none of the pre-specified variables we explored (age, sex, eGFR, plasma creatinine concentration, proteinuria, hs-CRP, acute rejection, and pre-emptive transplantation) was a significant effect-modifier of the association between uEGF/Cr and the risk of graft failure (*P* > 0.10); therefore, we did not proceed with any subgroup analyses (Table S4).

Finally, in the sensitivity analyses in which we removed patients with extreme values of uEGF/Cr (outside the −2 and +2 standard deviations), our findings remained materially unchanged. uEGF/Cr was strongly inversely associated with risk of graft failure (HR 0.68 [95% CI 0.59–0.78]; *P* < 0.001), and further adjustments analogous to the models used in the primary analyses did not materially modify this association (Table S5).
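The ±2 SD exclusion used in the sensitivity analysis amounts to a simple filter, sketched below with made-up values rather than study measurements:

```python
import statistics

def trim_outliers(values, k=2.0):
    """Keep only observations within k standard deviations of the mean,
    mirroring a sensitivity analysis that excludes extreme marker values."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if mu - k * sd <= v <= mu + k * sd]

# A single extreme value is dropped; the rest of the sample is retained
kept = trim_outliers([10] * 9 + [1000])
```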

#### **4. Discussion**

In a large cohort of stable RTR, we showed, first, that patients with impaired renal function have significantly lower excretion of uEGF; second, that uEGF/Cr is inversely associated with the risk of graft failure and that this association is independent of potential confounders, including immunosuppressive therapy, eGFR, and urinary protein excretion. Finally, uEGF/Cr also appears to have good predictive ability for the development of graft failure, superior to that of urinary albumin excretion and urinary protein excretion. These findings agree with previous evidence showing that uEGF is a biomarker altered in several kidney pathologies [15,21,22], and we provide, for the first time, evidence in the post-renal transplantation setting.

EGF is a 53-amino-acid peptide whose expression is restricted to the kidney [7,15,22], particularly to the thick ascending limb of Henle's loop and the distal tubule [14]; it is therefore found in higher concentrations in urine than in any other body fluid [23]. EGF and its receptor are involved in several processes within kidney tissue, mainly related to tubular cell proliferation [13] and cell-survival pathways [10,11], making EGF a critical component in promoting kidney recovery from acute injury [11]. Consequently, its dysregulation is involved in key pathogenic pathways that drive kidney disease progression independent of etiology, e.g., chronic inflammation [24], extracellular matrix modulation, and tubular cell dedifferentiation [15].

EGF has gained interest as a biomarker of renal disease because decreased urinary excretion has been observed in nearly all rodent kidney injury models [20] and in various human kidney diseases [25], including diabetic nephropathy, IgA nephropathy, and lupus nephritis [14]. Consistently, we found that our study population of RTR had a decreased uEGF/Cr ratio compared with healthy subjects, and ratios comparable to those of patients with chronic kidney disease [15,26,27]. Its common clinical standardization to creatinine (uEGF/Cr) has shown several advantages as a biomarker of kidney tissue damage: (i) it is highly tissue-specific, which makes it robust to extrarenal events that may affect the accuracy of other, nonspecific biomarkers; (ii) even in the normal creatinine range there is a significant influence of kidney function on uEGF/Cr [22]; and (iii) it shows only a weak correlation with markers of glomerular damage such as urinary protein excretion, indicating that uEGF/Cr reflects a different, independent pathophysiologic mechanism [12,14,27] and could complement these other parameters. Our study further supports the role of uEGF/Cr as a biomarker of damage to renal tissue and, more importantly, as a biomarker independently associated with risk of graft failure in stable renal transplant recipients. Furthermore, the strong predictive ability of uEGF/Cr for risk of graft failure, even superior to that of urinary albumin excretion and urinary protein excretion, and its added predictive value in combination with these variables, also support the idea that uEGF/Cr marks a different pathological aspect of graft failure, one that might precede established glomerular damage.

Because risk of graft failure increases with time, one could speculate that uEGF decreases with time after transplantation. However, we did not observe such a relationship in crude analyses. This finding may be explained by a confounding effect of cyclosporine use lowering uEGF, which is supported by the observation that an association between uEGF and time after transplantation became apparent after adjustment for cyclosporine use in linear regression analyses. We also found that the use of calcineurin inhibitors was higher among patients with lower uEGF/Cr. This agrees with previous studies showing an inverse association between uEGF and the use of calcineurin inhibitors [28,29] and a potential involvement of the EGF receptor in the alterations that lead to magnesium loss in renal transplant recipients receiving calcineurin inhibitors [29,30]. Nevertheless, the association between uEGF/Cr and graft failure was independent of adjustment for the use of calcineurin inhibitors. This suggests that the association of uEGF/Cr is not mediated by a nephrotoxic effect of calcineurin inhibitors but by other mechanisms, which may involve renal fibrosis. Furthermore, in contrast to previous results [31], no difference was observed in the prevalence of diabetes between the uEGF/Cr tertiles in our study population.

The present study has several strengths. We assessed not only the association but also the risk-prediction ability of uEGF/Cr for graft failure. In addition, our extensively phenotyped cohort allowed us to control for several potential confounders, including demographic and anthropometric variables, renal function, and immunosuppressive therapy. The following limitations should be considered in the interpretation of our results. This study was carried out in a single center with over-representation of a Caucasian population, which calls for prudence in extrapolating our results to populations of other ethnicities. Also, we did not have repeated uEGF measurements, and the single measurement of the variable of interest could have led to underestimation of the true effect [32,33]. Moreover, we used the Jaffé method to measure serum creatinine, which can generate false-positive results in the presence of pseudochromogens such as ketones [34]. Next, only limited data were available regarding donor characteristics, and we therefore could not adjust for donor variables such as donor serum creatinine or donor hepatitis C status. Finally, as with any observational study, residual confounding may occur despite the substantial number of potentially confounding factors for which we adjusted.

#### **5. Conclusions**

uEGF/Cr is inversely and independently associated with the risk of graft failure in stable RTR. This study provides, for the first time, relevant prospective data on a potential role of EGF in the pathophysiological changes that lead to graft failure. Furthermore, uEGF/Cr could be a biomarker of interest for identifying patients at high risk of graft failure. Of note, to the best of our knowledge, reference values for uEGF/Cr have not yet been established. Given our findings, standardized assays for uEGF and the generation of reference values are warranted. The potential utility of EGF-directed therapies or the implementation of uEGF/Cr in the clinical care of stable RTR requires further research and validation in larger and more heterogeneous clinical studies.

**Supplementary Materials:** The following are available online at http://www.mdpi.com/2077-0383/8/10/1673/s1, Table S1: Comparison of characteristics between renal transplant recipients with and without data of urinary EGF concentration; Table S2: Glycated hemoglobin according to tertiles of uEGF/Cr ratio in diabetic and non-diabetic subjects; Table S3: Positive and negative predictive value by tertiles of uEGF/Cr; Table S4: Effect-modification of pre-specified characteristics on the associations of uEGF/Cr with graft failure; Table S5: Multivariable-adjusted associations between uEGF/Cr ratio and graft failure in RTR among the −2 and +2 standard deviations of uEGF/Cr.

**Author Contributions:** Data curation, M.K. and W.J.; Formal analysis, M.Y.-C. and C.G.S.; Funding acquisition, S.J.L.B.; Investigation, M.K., R.O.B.G., S.P.B., G.J.N., and W.J.; Methodology, M.K., W.J., and S.J.L.B.; Project administration, R.O.B.G., S.P.B., G.J.N., and S.J.L.B.; Supervision, S.J.L.B.; Writing—original draft, M.Y.-C. and C.G.S.; Writing—review and editing, R.O.B.G., S.P.B., G.J.N., W.J., and S.J.L.B.

**Funding:** This study was based on the TransplantLines Food and Nutrition Biobank and Cohort Study (TxL-FN), which was funded by the Top Institute Food and Nutrition of The Netherlands, grant number A-1003.

**Acknowledgments:** The study is registered at clinicaltrials.gov under number NCT02811835. Dr. Sotomayor is supported by a doctorate studies grant from CONICYT (F 72190118). This paper is partially supported by NIDDK P30 DK081943 for University of Michigan O'Brien Kidney Translational Core Center. The authors thank Therese Roth for technical support.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Review* **BK Polyomavirus Virus Glomerular Tropism: Implications for Virus Reactivation from Latency and Amplification during Immunosuppression**

#### **Donald J. Alcendor**

Center for AIDS Health Disparities Research, Meharry Medical College, 1005 Dr. D.B. Todd Jr. Blvd., Hubbard Hospital, 5th Floor, Rm. 5025, Nashville, TN 37208, USA; dalcendor@mmc.edu

Received: 12 August 2019; Accepted: 16 September 2019; Published: 17 September 2019

**Abstract:** BK polyomavirus (BKPyV, or BKV) infection is ubiquitous and usually non-pathogenic, with subclinical infections in 80–90% of adults worldwide. In immunocompromised individuals, however, BKV infection is often associated with pathology, most commonly renal impairment, including ureteral stenosis, hemorrhagic cystitis, and nephropathy, and less commonly pneumonitis, retinitis, liver disease, and meningoencephalitis. BKV is known to replicate, establish latency, undergo reactivation, and induce clinical pathology in renal tubular epithelial cells. However, recent in vitro studies support the notion that BKV has an expanded tropism targeting glomerular parenchymal cells of the human kidney, which could impact glomerular function, enhance inflammation, and serve as viral reservoirs for reactivation from latency during immunosuppression. The implications of this expanded tropism in the glomerulus, and how specific host and viral factors contributing to glomerular inflammation, cytolysis, and renal fibrosis relate to BKV-associated nephropathy (BKVAN), have not been explored. The pathogenesis of BKV in human glomerular parenchymal cells is poorly understood. In this review, I examine target cell populations for BKV infectivity in the human glomerulus. Specifically, I explore the implications of BKV expanded tropism in the glomerulus with regard to viral entry, replication, and dissemination via the cell types exposed to BKV trafficking in the glomerulus. I also describe cellular targets shown to be permissive in vitro and in vivo for BKV infection and lytic replication, the potential role that glomerular parenchymal cells play in BKV latency and/or reactivation after immunosuppression, and the rare occurrence of BKV pathology in glomerular parenchymal cells in patients with BKVAN.

**Keywords:** polyomavirus; BKV; kidney; glomerulus; BKVAN; nephropathy; transplantation

#### **1. Introduction**

BK polyomavirus (BKPyV, hereafter referred to as BKV) is a member of the genus *Betapolyomavirus* in the family Polyomaviridae, which also includes JC polyomavirus (JCPyV) and simian virus 40 (SV40) [1–6]. BKV was first isolated by Gardner in 1971 from the urine of a renal transplant patient with ureteral stenosis whose initials were "B.K." [7]. BKV is a small, non-enveloped, icosahedral virus with a circular, double-stranded DNA genome of approximately 5 kb (kilobases) and a diameter of 40–45 nm [8,9]. BKV was first reported in 1995 as a cause of allograft failure in renal transplant patients [10]. It is now recognized as an emerging pathogen in renal transplant patients, with an increased incidence that correlates with the use of more potent iatrogenic immunosuppressants such as tacrolimus (FK506) and mycophenolate mofetil (CellCept) [11–16].

While primary infection with BKV is usually asymptomatic and occurs early in life, with a seroprevalence of 80–90% in adults worldwide, BKV is often associated with pathology in immunocompromised individuals [17,18]. BKV has a seroprevalence of 65% to 90% in children aged 5–9 years and can be transmitted via respiratory, uro-oral, and feco-oral routes [19,20]. Latent BKV is known to reactivate in immunocompromised patients, and an increased incidence of conditions associated with BKV infection, such as encephalitis, nephritis, hemorrhagic cystitis, retinitis, and pneumonia, has been reported in HIV-1-infected patients [21,22]. HIV-1 patients also show a higher prevalence of BKV viruria than healthy individuals, with a positive correlation with the degree of immunosuppression [23].

BKV reactivation after immunosuppression in transplant recipients can result in clinical disease in the form of BKV-associated nephropathy (BKVAN), leading to ureteral stenosis and tubular interstitial damage, as well as hemorrhagic cystitis in bone marrow transplant patients [24–26]. Primary BKV infection is accompanied by viral replication, followed by the establishment of latency in renal tissue [27]. BKV-associated pathology linked to immunosuppression includes diseases of the respiratory tract, urinary bladder, kidney, central nervous system (CNS), eye, digestive tract, and endothelium [20]. BKV reactivation from latency is followed by viruria, which occurs in up to 20% of asymptomatic immunocompetent individuals and in 20–60% of immunocompromised patients [28]. Approximately 80% of renal transplant recipients experience BK viruria, and among those, 5–10% develop BKVAN [28]. Virus infection leading to viremia, interstitial inflammation, and graft rejection, with progression of interstitial fibrosis and tubular atrophy, can result in allograft failure and end-stage renal disease (ESRD). ESRD represents an important health disparity among underserved populations [29–32]. Currently, there is no specific treatment for BKVAN. In the absence of a consistently effective antiviral therapy, pre-emptive reduction of maintenance immunosuppression and/or changes to the immunosuppressive regimen are recommended to control BKV replication, although this may increase the risk of allograft rejection [27]. The underlying mechanisms and kinetics of BKV infection in BKVAN remain largely unexplored. Primary infection of glomerular parenchymal cells could lead to progressive inflammation, injury, and cytolysis, which contribute to renal fibrosis and likely lead to ESRD.

#### **2. BKV Infection and Post-Transplant Kidney Disease**

In the adult population, there is a high prevalence of BKV infection and latency in renal tissue that usually remains asymptomatic in immunocompetent individuals but predisposes renal transplant patients requiring immunosuppression to BKV reactivation and replication. Approximately 50–80% of patients who develop BKVAN also experience graft failure [33]. The incidence of graft failure depends on the degree of glomerular inflammation caused by proinflammatory cytokines, the influx of immune effector cells, BKV lytic replication, and lysis of renal tubular epithelial cells, which can lead to renal fibrosis and subsequent graft failure [34–36]. In renal transplant patients, reactivation of BKV occurs in the graft and the infection is donor-derived [37], with higher rates of reactivation occurring with donors that are BKV-seropositive [37]. BKV reactivation after renal transplantation is usually first observed by the appearance of virus-infected uroepithelial cells, known as decoy cells, or of BKV DNA in the urine, followed by a viremic phase that occurs approximately one month post-transplantation, according to Hirsch et al. [38]. BKV viremia precedes BKVAN and is a better predictor of pathology associated with nephropathy than viruria, especially when accompanied by viral titers >10,000 copies/mL [39,40]. The timing of BKV reactivation and replication after transplant has been associated with several factors. These include the intensity of the immunosuppressive regimen involving the use of tacrolimus or mycophenolate mofetil, recipient-related factors (such as patient age, male sex, and non-African American race), donor-related factors (such as the degree of HLA mismatch and BKV seropositivity), and viral factors (such as the BKV genotype) [27,41].
In addition, other factors, such as renal injury associated with variation in cold ischemia time, delayed allograft function, and the placement of ureteral stents, have also been reported to influence BKV reactivation [42,43]. Conclusive diagnosis of BKVAN, however, requires the detection of viral inclusion bodies on renal biopsy, as well as confirmation of the genome by in situ hybridization or of viral antigen by immunohistochemical staining for the BKV large T antigen (LTAg) [44]. The BKV LTAg is known to cross-react with antibodies against the LTAg of simian virus 40 (SV40), which shares 70% genome sequence homology with BKV. While ultrastructural analysis by electron microscopy is highly sensitive for detecting BKV and has been used to diagnose BKV infection, the presence of BKV alone may not be sufficient to confirm a BKVAN diagnosis. The reliability of these techniques varies due to non-specific binding of immunoglobulins and DNA oligomers in human tissue; hence, standardization is warranted.

BKVAN is divided into three histopathological grades: A, B, and C. Grade A presents as inflammation of the tubular epithelium in the absence of tubular epithelial necrosis. Grade B is defined by more advanced pathology, involving both tubular epithelial cell necrosis and tubular epithelial cell lysis. Grade C is defined by the presence of interstitial fibrosis, which can ultimately lead to ESRD [45]. Graft survival correlates strongly with histopathological grade, with Grade A having the best prognosis for graft survival at two years (90%) and Grade C the worst (50%) [46].

Histological lesions in BKVAN are normally scored by the Banff 97 classification of renal allograft pathology to indicate severity [46–48]. Several biomarkers have been examined to predict the onset of BKVAN and its relationship to graft failure, including urine analysis by PCR amplification of BKV VP1; the presence of granzyme B, proteinase inhibitor-9, or plasminogen activator inhibitor-1; and the urinary polyomavirus Haufen test for the presence of urinary casts [49–56]. There is currently no universal screening biomarker in wide clinical use that consistently predicts the early onset of BKVAN and correlates strongly with graft survival. Furthermore, the characteristic changes reported for BKVAN-associated renal pathology may exist in only a fraction of infected patients, and in varying degrees.

Expanded BKV tropism for glomerular parenchymal cells, or GVU cells, which include glomerular podocytes, mesangial cells, and glomerular endothelial cells, has been confirmed by in vitro studies in my laboratory [34] (Figure 1). This finding will require further investigation.

**Figure 1.** BK polyomavirus (BKV) infection of GVU cells. Immunofluorescent staining of GVU cells infected with BKV. (**A**) Primary human glomerular endothelial cells infected with BKV for 96 h and stained with a monoclonal antibody against the SV40 large T antigen (LTAg). (**B**) Human podocytes infected with BKV for 96 h and stained with monoclonal antibodies against the BKV major capsid protein VP1. (**C**) Primary human mesangial cells infected with BKV for 96 h and stained with a monoclonal antibody targeting the SV40 LTAg. Nuclei were stained blue with 4′,6-diamidino-2-phenylindole (DAPI). All images were obtained using a Nikon TE2000S microscope mounted with a charge-coupled device (CCD) camera at ×200 magnification.

#### **3. BKV Entry and Dissemination in the Glomerulus and the Cell Types Exposed to BKV Trafficking**

In a hypothetical model proposed by Popik et al., BKV enters the glomerular parenchyma via the afferent arteriole during the viremic phase of infection, leading to viral dissemination and the initial exposure of glomerular mesangial cells to the virus (Figure 2). The virus then spreads from the mesangial cells to the glomerular podocytes and endothelial cells of glomerular capillaries. BKV may spread first to the parietal cells of the glomerular capsule and then to the proximal tubular cells before appearing in urine. The initial and continual dissemination track of BKV would also be influenced by the turbulence produced by blood flow and renal filtration. Most recently, a report by Popik et al. suggests that the tropism of BKV in the human kidney involves glomerular parenchymal cells, which have been shown to be permissive for BKV in vitro [34]. The potential role of these cells in viral latency, viral reactivation, viral load, viremic conversion, and BKVAN-associated renal pathology is unknown.

**Figure 2.** A hypothetical model for BKV dissemination in the glomerulus, including GVU cells. BKV (black spheres) enters the glomerulus of the renal compartment via the afferent arteriole during the viremic phase of infection. This leads to the initial infection of GVU cells, namely the mesangial cells. Next, the virus spreads from mesangial cells to glomerular podocytes, and then locally to glomerular endothelial cells, which are also highly permissive for infection in vitro and reported to be infected by BKV in vivo. The virus then encounters the parietal cells of the glomerular capsule, which are reported to be permissive for BKV in vivo. Finally, the virus further disseminates and infects the proximal tubular epithelial cells, which are highly permissive for BKV infection in vitro and in vivo. Widespread virus infection and replication in GVU target cells, along with tubular epithelial cells and parietal glomerular capsular cells, would theoretically contribute to viruria, viremia, inflammation, and nephropathy. Model of BKV entry and existence in the glomerulus (modified with permission from *Pearson Education Inc.* 2013 (unpublished data)).

#### **4. Cellular Targets that are Permissive for BKV Infection and Lytic Replication**

#### *4.1. Tubular Epithelial Cells*

A comprehensive examination of cellular targets for BKV infectivity in the proximal and distal glomerular compartments of the human kidney has not been reported. Rather, the focus of BKV infectivity and pathogenesis has been mainly on tubular epithelial cells, and most studies have proposed them as the primary viral reservoir and main driver of pathogenic pathways that lead to fibrosis in BKVAN [57,58]. These reports conclude that renal tubular epithelial cells are the major sites of viral persistence and reactivation in immunosuppressed kidney transplant patients [59]. Tubular epithelial cells are important in vitro and in vivo targets for BKV infection and replication. Findings from several studies support the notion that tubular epithelial cell infection, dysfunction, necrosis, and death are essential prerequisites for the renal fibrosis associated with BKVAN. However, studies by de Kort et al. suggest that rapid lytic replication of BKV occurs in tubular epithelial cells because these cells are immunologically tolerant to BKV infection, rendering them more susceptible to high levels of lytic replication compared with other glomerular cells that are more immunologically responsive to BKV infection, as demonstrated by robust induction of interferon beta (IFNβ) and CXCL10 in the latter post-infection [60].

#### *4.2. Bowman's Capsular Epithelial Cells (BCEC)*

By examining renal biopsies from renal transplant patients with BKVAN, Celik and Randhawa detected cytopathic effects of BKV in Bowman's capsular epithelial cells (BCECs) at the parietal layer of Bowman's capsule [61]. Using H&E-stained light microscopy and immunohistochemistry, they observed BKV cytopathology in BCECs in 36/124 biopsies (29%) from 83 patients with BKVAN in the allograft kidney, and used in situ hybridization to confirm the presence of BKV DNA in BCECs [61]. Moreover, they found that BKV cytopathology in BCECs correlated with high viral loads in the tubular epithelium [61]. Interestingly, tubular epithelial cells, which are highly permissive for BKV lytic replication, share the same embryologic origin as BCECs; it is therefore reasonable to speculate that BCECs are also permissive for BKV. However, the role of BCECs in BKV latency and reactivation has not been explored, and comprehensive in vivo and in vitro studies of BCECs are warranted. Results from studies examining renal biopsies from transplant patients with BKVAN suggest that BKV infection of BCECs is rare. Nonetheless, given their common origin, it would be interesting to determine whether BCECs play a role similar to that of tubular epithelial cells in BKVAN.

#### *4.3. Mesangial Cells*

Until recently, there were no reports of BKV infection of mesangial cells. A study published in 2019 by Popik et al. showed that primary human renal mesangial cells are permissive for BKV infection in vitro. Specifically, the authors found that mesangial cells expressed BKV late genes 96 h post-infection without exhibiting evidence of cytopathology [34]. However, immunofluorescent staining revealed high levels of virus replication in these cells, as demonstrated by nuclear staining of BKV-infected cells with an antibody against the SV40 LTAg, along with high levels of VP1 transcription [34]. The authors also observed significant induction of CXCL10 and IFNβ expression in BKV-infected cells that correlated with increased virus replication over a time course of infection. It is currently unclear, however, whether mesangial cells play a role in BKVAN progression in vivo; there are no reports of BKV-infected mesangial cells in biopsies from renal allograft patients with BKVAN. Celik et al. reported immune complex deposition in the mesangium and an increased mesangial cell matrix in renal biopsies from patients with BKVAN, but did not observe evidence of BKV infection in mesangial cells [61]. Since mesangial cells are immunologically responsive to BKV infection, as evidenced by induction of CXCL10 and IFNβ [34,35], they may be more effective at viral clearance than tubular epithelial cells. In addition, host factors in the glomerular microenvironment induced in mesangial cells after infection could render them less permissive for BKV infection in vivo.

#### *4.4. Glomerular Podocytes*

Currently, there is only one report, by Brealey, describing a case study of BKVAN with evidence of viral particles in glomerular subepithelial humps. The author used transmission electron microscopy to analyze the glomeruli in a renal biopsy from a 59-year-old female kidney transplant patient who was experiencing symptoms of graft rejection [62]. There was clear clinical evidence from the examination of biopsy tissue to support a diagnosis of immune complex glomerulonephritis, and virus particles were observed in deposits in the cytoplasm of podocytes [62]. The author confirmed the diagnosis of BKVAN by immunoperoxidase staining using BKV-specific antibodies, and also observed evidence of cytoplasmic clearance of BKV by podocytes from the glomerular basement membrane [62]. However, this case study did not describe evidence of direct podocyte infection. Most recently, Popik et al. described BKV cytopathology and lytic replication in undifferentiated and differentiated podocytes in vitro, as demonstrated by high expression levels of VP1 total protein and mRNA post-infection [34]. The authors also observed induction of CXCL10 and IFNβ transcription in BKV-infected podocytes that correlated with increased viral replication over the course of infection [34]. It is unclear whether podocyte infection with BKV plays a direct role in BKVAN progression in vivo. Like mesangial cells, podocytes are immune responsive to BKV. Thus, they may be able to clear BKV in vivo, or to avoid significant infection by recruiting host factors that protect the cells against BKV infection or by subverting those that enhance infection.

#### *4.5. Glomerular Endothelial Cells*

Until recently, there was only one case report of BKV-related polyomavirus vasculopathy in a renal transplant patient [63]. In this study, Petrogiannis-Haliotis et al. described a 52-year-old male patient who had developed ESRD after undergoing a cadaveric renal transplantation [63]. The patient suffered from BKV vasculopathy resulting from virus infection of vascular endothelial cells [63]. BKV antigen expression was detected in endothelial cells by immunohistochemistry in renal biopsies, and BKV DNA was identified in an extract of frozen kidney tissue by polymerase chain reaction (PCR) using BKV-specific primers [63]. Ultrastructural analysis by electron microscopy revealed BKV-infected endothelial cells in both the transplanted and native kidneys, but immunoperoxidase staining did not detect any virus in the renal tubules [63]. Recent in vitro studies by Popik et al., using primary human glomerular endothelial cells (GECs), revealed that GECs are highly permissive for BKV infection and lytic replication, as demonstrated by BKV cytopathology and high expression levels of the BKV LTAg and VP1 [34]. They also observed induction of IFNβ transcription in BKV-infected GECs that correlated with increased viral replication over a time course of infection, along with varying levels of CXCL10 induction [34]. In a recent study by An et al., the authors also observed induction of CXCL10 and IFNβ expression in BKV-infected human GECs, along with activation of IRF3 and STAT1 [64]. Findings from these studies support the notion that GECs can mount an immune protective response to BKV infection and may act as an immune barrier to BKV infection in vivo. Immune protective factors or receptors that are suppressed or downregulated in vitro could render cultured GECs more permissive for BKV infection and would explain the rare occurrence of GEC infection in vivo.
Figure 2 shows a hypothetical model of cell types and routes of BKV dissemination in the proximal and distal compartments of the human glomerulus.

#### **5. The Potential Role of Glomerular Parenchymal Cells (GVU Cells) in BKV Latency/Reactivation and BKVAN**

GVU cells have all been shown to be permissive for BKV infection in vitro. However, in vivo infection of GVU cells is rare in patients with BKVAN, possibly due to differential receptor expression, downregulation of the primary receptor, or induction of antiviral host factors that promote viral clearance. Immunosuppression and concomitant suppression of T-cell immune surveillance trigger BKV reactivation from latency in renal transplant patients, subsequently leading to high levels of viral replication in the tubular epithelium. As a result, these patients develop BKV-associated nephropathy. The resulting denudation of the basement membrane, followed by robust viremia and uncontrolled inflammation, can lead to nephropathy and fibrosis (Figure 3). I propose that podocytes and mesangial cells may aid in the early-phase recruitment of immune effector cells via the induction of CXCL10 (Figure 3). In addition, the induction of IFNβ in these cells may be protective against BKV infection, due to the cytokine's antiviral and anti-proliferative effects [64,65]. Taken together, it is likely that GVU cells serve as potential latent BKV reservoirs that contribute to early events in BKV reactivation. Comprehensive studies that examine temporal events and early stages of BKV infection are needed to identify target cells in renal biopsies prior to the development of BKVAN. Examination of renal biopsies to detect both BKV antigen and DNA would provide clues to the role that GVU cells play in BKVAN with respect to viral latency and reactivation.

**Figure 3.** A hypothetical model for BKV pathogenesis that includes infection of GVU cells. (**Left**) In a normal kidney, BKV (red spheres) exists in a latent state in distal tubular epithelial cells. Initial BKV replication is controlled by the immune system in an immunocompetent host. In the immunocompromised host, viral replication is not controlled, leading to extensive lytic replication and cellular necrosis. Viruria and viremia ensue, the infection spreads to adjacent cells, and the basement membrane is compromised. There is extensive inflammation and recruitment of immune effector cells. Nephropathy occurs, followed by interstitial fibrosis. (**Right**) GVU cells are initially infected by BKV, which leads to the induction of IFNβ and CXCL10. CXCL10 plays a role in the recruitment of immune effector cells that contribute to inflammation. The induction of IFNβ and IFNβ-driven pathways may protect some GVU cells from BKV infection by establishing an immune barrier and promoting viral clearance. Model of BKV renal pathogenesis (modified with permission from Lamarche et al., BK Polyomavirus and the Transplanted Kidney: Immunopathology and Therapeutic Approaches. *Transplantation* 2016).

#### **6. Rare Appearance of BKV in Glomerular Parenchymal Cells in Renal Biopsies of Patients with BKVAN**

The role of BKV replication in glomerular parenchymal cells, as well as their contribution to viral latency, reactivation, and the renal pathology associated with BKVAN, has been largely unexplored. Tubular epithelial cells may represent a selected cell type for BKV infection because of their inability to mount an immune response against the virus; they also support high levels of viral replication. In other words, the tubular epithelium provides a functional microenvironment for BKV reactivation and an ideal site for a BKV reservoir. GVU cells have all been shown to exhibit IFNβ induction in vitro, which may mount an antiviral response in uninfected cells during the early stages of BKV reactivation. Elucidation of the specific viral and host factor interactions required for BKV latency is warranted. Findings from such studies may explain why BKV infection is rarely detected in GVU cells in vivo.

#### **7. Discussion**

BKV infection and reactivation following immunosuppression are important causes of renal allograft dysfunction and graft loss, eventually leading to BKVAN. Understanding the role of both proximal and distal glomerular cells in BKVAN progression will allow investigators to determine the pathogenic mechanisms involved in BKV trafficking and infection profiles, identify additional viral reservoirs, and define the conditions required for the establishment of viral latency and reactivation. Future studies may help to advance the development of novel strategies to protect targeted cells in the glomerulus from BKV infection before and after immunosuppression. These studies may also contribute to novel strategies for early diagnosis and subsequent early interventions that aid in the recovery of renal function.

**Funding:** D.J.A. is supported by the Meharry Zika Startup Grant and the Research Centers in Minority Institutions (RCMI) grant (U54MD007586-01).

**Acknowledgments:** I thank Waldemar Popik and James E.K. Hildreth for reviewing the manuscript. I also thank the Meharry Office of Scientific Editing and Publications (NIH grant S21MD000104) for scientific editing support.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Fast Tac Metabolizers at Risk—It is Time for a C/D Ratio Calculation**

**Katharina Schütte-Nütgen 1,†, Gerold Thölking 1,†, Julia Steinke 1, Hermann Pavenstädt 1, René Schmidt 2, Barbara Suwelack 1 and Stefan Reuter 1,\***


Received: 31 March 2019; Accepted: 26 April 2019; Published: 28 April 2019

**Abstract:** Tacrolimus (Tac) is a part of the standard immunosuppressive regimen after renal transplantation (RTx). However, its metabolism rate is highly variable. A fast Tac metabolism rate, defined by the Tac blood trough concentration (C) divided by the daily dose (D), is associated with inferior renal function after RTx. Therefore, we hypothesize that the Tac metabolism rate impacts patient and graft survival after RTx. We analyzed all patients who received a RTx between January 2007 and December 2012 and were initially treated with an immunosuppressive regimen containing Tac (Prograf®), mycophenolate mofetil, prednisolone and induction therapy. Patients with a Tac C/D ratio <1.05 ng/mL × 1/mg at three months after RTx were characterized as fast metabolizers and those with a C/D ratio ≥1.05 ng/mL × 1/mg as slow metabolizers. Five-year patient and overall graft survival were noticeably reduced in fast metabolizers. Further, fast metabolizers showed a faster decline of eGFR (estimated glomerular filtration rate) within five years after RTx and a higher rejection rate compared to slow metabolizers. Calculation of the Tac C/D ratio three months after RTx may assist physicians in their daily clinical routine to identify Tac-treated patients at risk for the development of inferior graft function, acute rejections, or even higher mortality.

**Keywords:** kidney transplantation; tacrolimus; C/D-ratio; pharmacokinetics

#### **1. Introduction**

Tacrolimus (Tac) is recommended by the Kidney Disease: Improving Global Outcomes (KDIGO) guideline as the immunosuppressant of choice after renal transplantation (RTx) [1]. Although it is very effective in preventing organ rejection, its highly variable inter-individual metabolism rate can be challenging for physicians, as many factors can influence Tac metabolism [2]. Different approaches have largely failed to predict Tac dosing and clearance, or could not demonstrate advantages in terms of safety or outcomes [3–5]. Even though genetic polymorphisms have been shown to significantly influence Tac metabolism, genetic testing strategies did not improve clinical outcomes [6,7]; they also require effort in terms of cost and interpretation of results and therefore have not yet found their way into clinical practice. Thus, therapeutic drug monitoring is essential for directing therapy.

We recently proposed a classification of patients receiving Tac into two major metabolism groups. Our stratification is based on the calculation of the C/D ratio (the trough level concentration normalized by the dose). A C/D ratio <1.05 ng/mL × 1/mg identifies fast metabolizers, whereas a C/D ratio ≥1.05 ng/mL × 1/mg indicates slow metabolism [8]. Alternative definitions of the metabolic state, such as dose requirements [9], clearance rate, or calculation of the D/C ratio, exist [10,11]. Interestingly, fast Tac metabolizers have been found to be more prone to developing BK viremia [12], calcineurin-inhibitor toxicity [8,9], and acute rejections [10,13,14] after RTx. In congruence, kidney function (three to 24 months after RTx and 36 months after liver transplantation, respectively) was lower in fast than in slow metabolizers [8,9,15,16]. Based on these findings, which suggest an influence of fast Tac metabolism on adverse events and inferior renal function after renal transplantation, the aim of this study was to analyze whether the Tac metabolism type might even impact definite outcomes such as patient and graft survival, and to determine whether fast Tac metabolism constitutes an independent risk factor that physicians should consider besides the already known determinants of kidney transplant patients' long-term outcome. Hypothesizing that Tac metabolism-dependent effects on mortality might become discernible in the long term, the present study was performed in a patient cohort with a complete five-year follow-up.

#### **2. Methods**

#### *2.1. Patients*

Prior to analysis, all patient data was anonymized and de-identified. The local ethics committee (Ethik Kommission der Ärztekammer Westfalen-Lippe und der Medizinischen Fakultät der Westfälischen Wilhelms-Universität, No. 2014-381-f-N) approved the study. The methods used in this study were carried out in accordance with the current transplantation guidelines and the Declarations of Istanbul and Helsinki. Written informed consent with regard to recording their clinical data was given by all participants at the time of transplantation.

We retrospectively analyzed all patients who underwent RTx between January 2007 and December 2012 at the University Hospital Münster and were initially treated with an immunosuppressive regimen containing Tac (Prograf®), mycophenolate mofetil, prednisolone, and induction therapy. Oral CMV prophylaxis with valganciclovir was administered for 100 days to D+/R+, D−/R+ and D+/R− recipients, and not at all if both the donor and the recipient were CMV-negative. Recipients aged <18 years, recipients with combined transplants, and recipients for whom the three-month C/D ratio could not be adequately calculated (due to a Tac-free immunosuppressive regimen, missing data, or a simultaneous higher dosage of prednisolone (≥20 mg/day, which is known to induce CYP3A activity)) were excluded. The Tac target trough level was 6–10 ng/mL. Recipient and donor data were collected from the patient files. The following parameters were examined: patient and donor demographics, recipient body mass index (BMI), recipient history of hypertension or diabetes mellitus, cause of end-stage renal disease (ESRD), number of prior kidney transplants, time on dialysis, donor type of transplantation, degree of human leukocyte antigen (HLA) mismatching, current panel-reactive antibodies (PRA), cold and warm ischemia time, and incidence of new-onset diabetes after transplantation (NODAT) and cytomegalovirus (CMV) DNAemia (>600 copies/mL was considered relevant, corresponding to the threshold value given by the manufacturer (TaqMan-PCR, QIAamp DNA Blood Kit, Qiagen, Hilden, Germany)). CMV screening was performed monthly during the first six months after RTx, every second month during months 6–12, and on indication.

#### *2.2. Tacrolimus Metabolism Rate*

Tac metabolism rates were calculated at three months after RTx by dividing the Tac blood trough concentration (C) by the corresponding daily Tac dose (D), as published before [8,16].

C/D ratio (ng/mL × 1/mg) = blood Tac trough concentration (ng/mL)/daily Tac dose (mg) (1)

As inpatient values are more prone to errors due to coexisting factors such as diarrhea, anemia, and CYP3A4-interfering drugs (e.g., azoles), only outpatient tacrolimus concentrations were considered. Measurements with exceptionally high Tac trough concentrations (>15 ng/mL) were excluded to avoid false-high values due to Tac ingestion prior to blood sampling.

For 50 randomly selected patients we additionally calculated the Tac C/D ratio at one and six months as an average C/D ratio and compared it to the three-month C/D ratio to account for further potential factors that can influence the C/D ratio and might affect single-time point measurements. As the 3-month C/D ratio strongly correlated with the average C/D ratio at month one and six, we applied the 3-month C/D ratio for the following patient categorization:

As defined previously, patients with a Tac C/D ratio <1.05 ng/mL × 1/mg were categorized as fast metabolizers. Patients with a C/D ratio of 1.05–1.54 ng/mL × 1/mg or a C/D ratio ≥1.55 ng/mL × 1/mg were defined as intermediate metabolizers and slow metabolizers, respectively [8]. For simplification, intermediate and slow metabolizers were summarized as slower metabolizers in this study.
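The C/D ratio calculation (Equation (1)) and the cut-offs above can be sketched in a few lines of code. This is a minimal illustration of our own; the function and variable names are not taken from the study:

```python
def cd_ratio(trough_ng_ml: float, daily_dose_mg: float) -> float:
    """C/D ratio (ng/mL x 1/mg): Tac blood trough concentration divided by the daily Tac dose."""
    return trough_ng_ml / daily_dose_mg

def metabolizer_category(ratio: float) -> str:
    """Classify a patient by the published C/D ratio cut-offs [8]."""
    if ratio < 1.05:
        return "fast"
    if ratio < 1.55:
        return "intermediate"
    return "slow"

# Example: a trough of 6.2 ng/mL on 8 mg/day gives C/D = 0.775, i.e., a fast metabolizer
ratio = cd_ratio(6.2, 8.0)
category = metabolizer_category(ratio)
```

In this study, the intermediate and slow categories were pooled, so only the 1.05 ng/mL × 1/mg cut-off matters for the two-group comparison.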

#### *2.3. Outcome Measures*

The main outcome measures were patient and overall graft survival. Patient survival was defined as time from RTx to death (from any cause) or last contact for alive patients. Overall graft survival was defined as the time from RTx to death (from any cause), graft failure, or last contact, whichever occurred first. Graft failure was defined as the reinitiation of dialysis treatment.

Further outcome parameters were serum creatinine and estimated glomerular filtration rate (eGFR) at years one to five after transplantation as well as the frequency of biopsy-proven acute rejection episodes (defined by Banff classification) and the rejection-free survival. Patients were subjected to kidney biopsy in case of a relevant rise in creatinine (≥0.3 mg/dL). Biopsies were evaluated by two pathologists.

Whole blood was analyzed for creatinine (enzymatic assay; Creatinine-Pap, Roche Diagnostics, Mannheim, Germany) and Tac (automated tacrolimus (TACR) assay; Dimension Clinical Chemistry System; Siemens Healthcare Diagnostic GmbH; Eschborn; Germany). Only 12 h Tac trough levels were used for analysis. Renal function was determined by calculating the eGFR using the CKD-EPI equation.
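For reference, the eGFR calculation can be sketched as below. This is our own illustration of the 2009 CKD-EPI creatinine equation; the paper does not state which equation version or software implementation was used, so treat the coefficients and race/sex handling as an assumption:

```python
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m^2) via the 2009 CKD-EPI creatinine equation (our sketch)."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha   # applies below the threshold
            * max(ratio, 1.0) ** -1.209  # applies above the threshold
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: 50-year-old non-black male with creatinine 0.9 mg/dL -> eGFR around 99
egfr = ckd_epi_2009(0.9, 50, female=False)
```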

#### *2.4. Statistical Analysis*

Statistical analysis was performed using IBM SPSS® Statistics 25 for Windows (IBM Corporation, Somers, NY, USA). Normally distributed continuous variables are shown as mean ± standard deviation (SD) and non-normally distributed continuous variables as median and 1st and 3rd quartiles (interquartile range, IQR). Absolute and relative frequencies have been given for categorical variables. Pairs of independent groups were compared using the Student's t-test for normally distributed data, Mann–Whitney U test for non-normal data, and Fisher's exact test for categorical variables. To compare paired data, we used the Wilcoxon test for continuous variables and the McNemar test for categorical variables.

Survival analyses were based on a maximum follow-up of five years after RTx. Patient survival, overall allograft survival as well as rejection-free survival were analyzed using the Kaplan-Meier method [17], and the groups were compared using the log-rank test. Cox proportional hazards regression models [18] were built using a stepwise variable selection procedure to assess the association between C/D ratio metabolism status and survival while simultaneously adjusting for potential confounding factors (inclusion: *p*-value of the score test ≤ 0.05, exclusion: *p*-value of the likelihood ratio test > 0.1). Results have been presented as hazard ratios (HR) with 95% confidence interval (95% CI) and *p*-value of likelihood ratio test. The *p*-value of score test is given for non-selected variables in multivariable analyses.
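The Kaplan-Meier product-limit estimate underlying these survival curves can be illustrated with a short sketch. This is a didactic stand-in for the SPSS procedure actually used, with variable names of our own choosing:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up times; events: 1 = event (e.g., death or graft loss), 0 = censored.
    Returns a list of (t, S(t)) pairs at each observed event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        # count events and censorings tied at this time point
        while i < len(data) and data[i][0] == t:
            leaving += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            s *= 1 - deaths / n_at_risk  # product-limit update
            curve.append((t, s))
        n_at_risk -= leaving
    return curve

# Example: events at t=1, 2, 4; censored at t=3 and 5
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

The log-rank comparison and Cox models add hypothesis testing and covariate adjustment on top of this estimate, but the survival curves themselves reduce to this product over event times.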

Mixed models with AR (1) covariance structure were fitted to analyze the impact of biological and clinical markers on the time course of eGFR between year one and five after the transplantation based on the eGFR values observed at annual intervals during this period. Univariable analyses included each marker separately along with its interaction with time since baseline measurement (at year one after transplantation) in order to assess (i) the baseline eGFR and/or (ii) whether potential time trends of eGFR differ between the subgroups defined by the marker. Multivariable models were built using a stepwise variable selection procedure in order to assess the impact of C/D ratio metabolism status on baseline eGFR and time trends of eGFR while adjusting for potential confounding factors. Models included (i) C/D ratio metabolism status and its interaction with time since baseline measurement in a first block and (ii) potential confounding factors along with their interactions with time since baseline measurement in a second block with forwards variable selection (inclusion/exclusion criterion: *p*-value of Wald test ≤0.05/>0.1).
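Schematically, the univariable form of the mixed model described above can be written as follows (our notational sketch of the described design, not the authors' exact model specification):

```latex
% eGFR at annual visit j of patient i; t measured in years since the month-12 baseline;
% Fast_i indicates fast-metabolizer status (the "marker"):
\mathrm{eGFR}_{ij} = \beta_0 + \beta_1 t_{ij} + \beta_2\,\mathrm{Fast}_i
                   + \beta_3\,(\mathrm{Fast}_i \times t_{ij}) + \varepsilon_{ij},
\qquad \operatorname{Corr}(\varepsilon_{ij},\varepsilon_{ik}) = \rho^{\,|j-k|} \quad \text{(AR(1))}
```

Here \(\beta_2\) captures the baseline (year-one) eGFR difference between subgroups and \(\beta_3\) the difference in time trend, matching points (i) and (ii) above; the multivariable models add further markers and their time interactions.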

No adjustment for multiple testing was made, and all analyses were regarded as explorative. *p*-values ≤ 0.05 were considered statistically noticeable.

#### **3. Results**

#### *3.1. Patient Cohort*

The enrollment flow chart for the study population is shown in Figure 1. Between January 2007 and December 2012, 633 kidney transplants were performed at our center. After the exclusion of 50 patients aged <18 years and 25 patients with combined transplantations, data on immunosuppressive therapy were extracted for the remaining 558 adult kidney-only transplant recipients. Of these, 401 patients with an initial Tac-based immunosuppressive therapy and complete data on the 3-month C/D ratio were included: 253 recipients (63.1%) were categorized as slow metabolizers and 148 recipients (36.9%) as fast metabolizers. The average C/D ratio of months one and six for 50 randomly selected patients did not differ from the three-month C/D ratio (*p* = 0.765, Table S1), and the categorization of slow and fast Tac metabolizers was similar when applying the three-month C/D ratio or the average C/D ratio of months one and six (*p* = 1.000, Table S2), suggesting that the three-month C/D ratio strongly correlated with the average C/D ratio during months one and six.

**Figure 1.** Enrollment flow chart for the study population. RTx = Renal transplantation; N/A: not available.

Baseline patient characteristics for donors and recipients and transplantation-associated parameters are shown in Table 1. Tac mean trough levels and daily doses were noticeably different between the groups. The two groups were similar with respect to all other baseline characteristics that were analyzed.


**Table 1.** Baseline patient characteristics.

Demographic characteristics of the study population by Tac metabolization status. Results are presented as mean ± standard deviation (SD) or median and first and third quartile (IQR), respectively, or as absolute and relative frequencies. BMI = body mass index; ESRD = end-stage renal disease; FSGS = focal segmental glomerulosclerosis; HLA = human leukocyte antigen; PRA = panel reactive antibodies. <sup>a</sup> Student's *t*-test, <sup>b</sup> Mann–Whitney U test, <sup>c</sup> Fisher's exact test.

#### *3.2. Patient and Overall Allograft Survival*

Kaplan-Meier curves for patient and overall allograft survival by Tac metabolism status are shown in Figure 2. Five-year patient survival was noticeably reduced in fast metabolizers as compared to slow metabolizers (89.9% vs. 95.3%, log-rank *p* = 0.036, Figure 2). The Cox regression analysis revealed a noticeable association between a fast Tac metabolism and patient survival in both univariable (HR 2.209 (95% CI 1.034–4.719), *p* = 0.041) and multivariable analysis (HR 5.749 (95% CI 1.556–21.242), *p* = 0.004) (Table 2). Overall allograft survival was affected by the Tac metabolism status as well: fast metabolizers showed a noticeably reduced 5-year allograft survival rate as compared to slow metabolizers (83.8% vs. 90.5%, log-rank *p* = 0.044, Figure 2). The HR was 1.772 (95% CI 1.006–3.121, *p* = 0.047) for fast metabolizers in univariable Cox regression and 2.715 (95% CI 1.231–5.989, *p* = 0.012) after adjustment for potential confounders (Table 3).

**Figure 2.** (**A**) Kaplan-Meier curves for patient survival and (**B**) overall graft survival. Survival rates of slow (red lines) and fast metabolizers (blue lines) were analyzed by the Kaplan–Meier method and compared using the log-rank test. Fast metabolizers showed a noticeably reduced patient and overall graft survival.


**Table 2.** Univariable and multivariable analyses of patient survival using Cox regression.

Results are presented as hazard ratios (HR) with their 95% confidence interval (CI) and *p*-value of likelihood ratio test. For non-selected variables in multivariable analyses, *p*-value of score test is given. HR = hazard ratio; CI = confidence interval.


**Table 3.** Univariable and multivariable analyses of overall graft survival using Cox regression.

Results are presented as hazard ratios (HR) with their 95% confidence interval (CI) and *p*-value of likelihood ratio test. For non-selected variables in multivariable analyses, *p*-value of score test is given. HR = hazard ratio; CI = confidence interval.

Causes of death are given in Table 4. While fast metabolizers mostly died from cardiovascular diseases (40%), the most common cause of death in slow metabolizers was infectious diseases (41.7%). In summary, a fast Tac metabolism noticeably affects both patient and overall allograft survival after kidney transplantation.


**Table 4.** Causes of death for slow and fast metabolizers.

#### *3.3. Renal Function*

Renal function was assessed yearly within the first five years after transplantation. Figure 3 shows the development of the eGFR between year one and five after renal transplantation in slow and fast metabolizers. A linear mixed model was applied to estimate the time-dependent course of eGFR. Fast metabolizers showed a noticeably faster decline of the eGFR within five years after transplantation as compared to slow metabolizers in both univariable (*p* = 0.040) and multivariable analysis (*p* = 0.032) (Table 5a,b).

**Figure 3.** Time course of the eGFR within five years after renal transplantation. Fast metabolizers show a faster decline in the eGFR as compared to slow metabolizers over the first five years.

**Table 5.** (a) Univariable Analysis: eGFR at month 12 and linear time-trends of eGFR (between months 12 and 60) by subgroup/marker. (b) Multivariable Analysis: eGFR at month 12 and linear time-trends of eGFR (between month 12 and 60) by subgroup/marker.



**Table 5.** *Cont.*


**Table 5.** *Cont.*

\* *p*-value of F-test (global test).

#### *3.4. Rejections*

The Kaplan-Meier curve for rejection-free survival is shown in Figure 4A. The 5-year rejection-free survival was noticeably lower in fast metabolizers as compared to slow metabolizers (69.6% vs. 78.8%, log-rank *p* = 0.032, Figure 4A). The Cox regression analysis revealed a noticeable association between a fast Tac metabolism and rejection-free survival in univariable (HR 1.536 (95% CI 1.034–2.282), *p* = 0.035) as well as multivariable analysis (HR 1.622 (95% CI 1.085–2.424), *p* = 0.020) (Table 6). Table 7 shows the frequency of patients with ≥ 1 acute biopsy-proven rejection during the 5-year follow-up. While 45/148 (30.4%) fast metabolizers experienced at least one acute rejection, only 54/253 (21.3%) slow metabolizers were affected. Of note, the subtype analysis of the first rejection episode within the first five years after transplantation revealed an increased frequency of humoral and mixed rejections in fast metabolizers (*n* = 10, 6.8% vs. *n* = 9, 3.6% and *n* = 10, 6.8% vs. *n* = 6, 2.4%, respectively) (Table 7, Figure 4B), whereas slow metabolizers were mainly affected by borderline rejections.

**Figure 4.** (**A**) Kaplan-Meier curves for rejection-free survival of slow (red lines) and fast metabolizers (blue lines), analyzed by the Kaplan–Meier method and compared using the log-rank test. Fast metabolizers showed a noticeably reduced rejection-free survival. (**B**) Subtype analysis of the first rejection episode within the first five years after transplantation. Fast metabolizers experienced increased frequencies of humoral and mixed acute rejection, whereas slow metabolizers were mainly affected by borderline rejections.


**Table 6.** Cox regression model for rejection-free survival. Univariable and multivariable analyses of rejection-free survival using Cox regression. Results are presented as hazard ratios (HR) with their 95% confidence interval (CI) and *p*-value of likelihood ratio test. For non-selected variables in multivariable analyses, *p*-value of score test is given.

HR = hazard ratio; CI = confidence interval.


**Table 7.** Frequencies of acute rejections and subtype analysis of the first rejection after RTx within five years after transplantation. Fast metabolizers showed increased frequencies of acute biopsy-proven rejections as compared to slow metabolizers. The *p*-value from Fisher's exact test is given.

#### **4. Discussion**

Herein, we describe for the first time a significant influence of the Tac metabolism type on mortality after renal transplantation in a study population with a long-term observation period. The higher five-year mortality in fast metabolizers was accompanied by a higher rejection rate and inferior kidney function. Our study highlights the importance of a risk stratification strategy for RTx patients that includes information on the individual's Tac metabolism rate, which turned out to be an independent risk factor for lower patient survival after renal transplantation. The C/D ratio is a simple tool that can easily be applied for this purpose.

Based on our previous findings revealing an impact of fast Tac metabolism (C/D ratio <1.05 ng/mL × 1/mg) on inferior renal function in a two- and three-year follow-up after RTx or LTx [8,16], we herein demonstrate that this effect persists in the long term and that fast Tac metabolism also impacts the time-dependent course of renal function in both univariable and multivariable analysis. Moreover, we identified fast Tac metabolism as an independent risk factor for decreased graft survival.

In congruence, Kuypers et al. observed that patients with high early Tac dose requirements (namely, fast metabolism) had a significantly reduced kidney function at three months post-RTx [9]. This was attributed to an increased rate of calcineurin inhibitor (CNI)-related toxicity, which is in line with the observations of our previous study [8]. High dose requirement in Kuypers' study was associated with CYP3A5\*1 genotype carriage in only one third of cases, suggesting that further factors impact a patient's Tac metabolism rate [2]. Notably, the area under the curve and the Tac trough level were not different between patients with and without CNI toxicity. The combination of different dose requirements with comparable trough levels, although not calculated, hints at different C/D ratio categories of patients in both groups. Further, Genvigir et al. showed, in a Brazilian cohort of CYP3A-genotyped RTx patients, that expression of CYP3A4/5 alleles leading to fast Tac metabolism (they also calculated the C/D ratio but did not define a cut-off) was associated with a lower eGFR at three months after RTx [15]. Again, no association was found between Tac exposure and the genetic score. Applying a multiple linear regression analysis, they showed that genetic variants and age impacted the C/D ratio. This is consistent with the literature (metabolism rate usually decreases with age) and with our findings showing a tendency of slow metabolizers to be older [8] (Table 1). Given the limitations of genotyping-based strategies, we refrained from genotyping our patients and instead searched for a simple and cost-effective tool, such as the C/D ratio, that can assist physicians in the daily routine to individualize their patients' immunosuppressive therapy and to identify individuals at high risk for Tac-related side effects independent of complex genotyping-based methods.

In both aforementioned studies, rejection rates were calculated but not related to the C/D ratio or the dose requirements. However, as Kuypers et al. observed significantly higher rates of graft failure (32.3% vs. 13.7%) and lower rates of steroid discontinuation (5.8% vs. 23.7%) in patients requiring higher Tac doses, one can assume a higher rejection rate in these patients. Here, we describe for the first time a significant effect of the C/D ratio on acute rejection in a long-term follow-up. In our study, rejection-free survival was longer in slow metabolizers, with higher frequencies of humoral and mixed rejections in fast than in slow metabolizers. In multivariable regression analysis, BMI and the number of prior transplantations were also associated with rejection. Recently, Barraclough et al. stated that the outcome of RTx patients depends on the immunosuppression within the first week after transplantation, although a relation between the AUC or Tac trough level and rejection was not detected [19]. Of note, as mentioned before, Tac AUC and trough levels are usually similar in slow and fast metabolizers, and the C/D ratio was not calculated in their study. In a meta-analysis including data from the FDCC, Symphony, and OptiCept studies, Bouamar et al. reported that the Tac trough concentration did not differ between patients with and without acute rejection within the first six months after RTx [20]. Again, information regarding the Tac doses or the C/D ratio was not provided. In this regard, Egeland et al. observed that a high Tac clearance (i.e., fast metabolism) was associated with an increased risk of developing acute rejection within the first few days after RTx [10]. Patients with a high Tac clearance might not reach target trough levels in time and may suffer under-immunosuppression, at least at some times of the day.

Mortality in fast metabolizers over the five-year observation period was consistently higher than in slow metabolizers, despite a tendency towards older age in slow metabolizers. Overall, graft failure was low in both groups and aligned with data from the literature [21,22]. In a recently published large registry analysis from England, the main causes of death within the first year after RTx were infection (21.6%), cardiovascular events (18.3%), and malignancy (7.4%) [21]. The main causes of death in our cohort were cardiovascular disease in fast metabolizers and infection in slow metabolizers, respectively, but did not differ significantly between groups. Unfortunately, the cause of death remained unclear in 33.3% of cases in fast and 8.3% of cases in slow metabolizers. As previously observed, fast metabolizers are more prone to developing BK virus infection than slow metabolizers; thus, one can speculate that over-immunosuppression is an issue in these patients [12]. However, other infections, e.g., urinary tract infections, have not been shown to be related to the C/D ratio [23], and deaths due to infection did not differ between groups in our cohort. This aligns with the fact that Tac mainly suppresses T-cell activity, while the host's defense against bacterial infections, which are more often fatal in RTx patients than viral infections, is mainly based on innate immune cells [24]. Interestingly, 20% of death certificates in the English registry study stated "renal" as the cause of death within the first year after RTx [21]. Lastly, we were unable to identify a difference in causes of death between groups; one reason for this could be the low mortality rate. However, factors previously associated with an increased risk of death, such as age at transplantation, diabetes, time on dialysis, or postmortem donation, did not differ between groups but were, if anything, distributed in favor of the fast metabolizer group (Table 1).
Patient demographics associated with kidney function after RTx, such as living donation, number of transplants, cold ischemia time, hypertension, diabetes, donor age, and gender, did not differ between groups. This implies that the differences in renal function are likely related to Tac metabolism and rejection. Consequently, inferior renal function is associated with higher mortality, since cardiovascular events, infections, and malignancies are all related to kidney function [25].

We recognize that a study of this nature has limitations because of its retrospective design and potential errors inherent to maintaining a single-center database. Moreover, due to the relatively small sample size, inaccuracies in data collection might affect the results, although data acquisition was performed thoroughly to avoid inconsistencies and entry errors. The analyses are based on the assumption that coding errors and missing data are stochastic. Although we attempted to include as many relevant confounding parameters as possible, there might still be residual factors that were not accounted for, such as patient non-adherence, which is difficult to measure. Prospective studies are needed to confirm our findings. We conclude from our data that calculation of the C/D ratio, as a simple, cost-effective tool, can assist physicians in their daily clinical routine to identify Tac-treated patients at risk of developing inferior graft function, acute rejection, or even higher mortality. This information should be used to individualize and optimize immunosuppressive therapy.

*J. Clin. Med.* **2019**, *8*, 587

**Supplementary Materials:** The following are available online at http://www.mdpi.com/2077-0383/8/5/587/s1, Table S1: The average C/D ratio of months one and six for 50 randomly selected patients did not differ from the 3-month C/D ratio, suggesting that the 3-month C/D ratio strongly correlates with the average C/D ratio of months one and six. The *p*-value of the Mann-Whitney U test is given, Table S2: Categorization of slow and fast Tac metabolizers was similar when applying the 3-month C/D ratio or the average C/D ratio of months one and six (*p* = 1.000, Fisher's exact test).

**Author Contributions:** Conceptualization, G.T., K.S.-N., and S.R.; methodology, G.T., K.S.-N., R.S.; formal analysis, K.S.-N., G.T., J.S. and R.S.; data curation, K.S.-N. and J.S.; writing—original draft preparation, K.S.-N., G.T., R.S., H.P., S.R.; writing—review and editing, H.P., B.S.; supervision, H.P., B.S., S.R.; project administration, H.P., B.S. and S.R.; funding acquisition, K.S.-N.

**Funding:** This work was fully funded by the Open Access Publication Fund of University of Münster and by the Interdisciplinary Centre for Clinical Research (IZKF), University of Münster (Katharina Schütte-Nütgen).

**Conflicts of Interest:** SR received travel support from Astellas, Chiesi, and Pfizer and lecture fees from Chiesi.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Review* **Machine Perfusion for Abdominal Organ Preservation: A Systematic Review of Kidney and Liver Human Grafts**

### **Maria Irene Bellini 1,\*,†, Mikhail Nozdrin 2, Janice Yiu 3 and Vassilios Papalois 4,5**


Received: 19 July 2019; Accepted: 12 August 2019; Published: 15 August 2019

**Abstract:** Introduction: To match the current organ demand with organ availability from the donor pool, there has been a shift towards acceptance of extended criteria donors (ECD), often associated with longer ischemic times. Novel dynamic preservation techniques such as hypothermic or normothermic machine perfusion (MP) are increasingly adopted, particularly for organs from ECDs. In this study, we compared the viability and incidence of reperfusion injury in kidneys and livers preserved with MP versus static cold storage (SCS). Methods: Systematic review and meta-analysis with a search performed between February and March 2019. MEDLINE, EMBASE and the Transplant Library were searched via OvidSP; the Cochrane Library and the Cochrane Central Register of Controlled Trials (CENTRAL) were also searched. An English language filter was applied. Results: The systematic search generated 10,585 studies, finally leading to a total of 30 papers for meta-analysis of kidneys and livers. Compared to SCS, hypothermic MP (HMP) significantly lowered the incidence of primary non-function (PNF, *p* = 0.003) and delayed graft function (DGF, *p* < 0.00001) in kidneys, but not the duration of DGF. No difference was noted in serum creatinine or eGFR post-transplantation, but overall kidneys preserved with HMP had significantly better one-year graft survival (OR: 1.61, 95% CI: 1.02 to 2.53, *p* = 0.04). In contrast to kidneys, where graft survival was affected, there was no significant difference in primary non-function (PNF) between livers stored using SCS and those preserved by HMP or NMP. Machine perfusion demonstrated superior outcomes in early allograft dysfunction and post-transplantation AST levels compared to SCS; however, only HMP significantly decreased serum bilirubin and biliary stricture incidence compared to SCS.
Conclusions: MP improves DGF and one-year graft survival in kidney transplantation; it appears to mitigate early allograft dysfunction in livers, but more studies are needed to prove its potential superiority in relation to PNF in livers.

**Keywords:** machine perfusion; organ preservation; temperature; hypothermic; normothermic; transplant

#### **1. Introduction**

The increasing demand for allografts and growing waiting lists have led to the utilisation of organs from extended criteria donors (ECDs) or organs with prolonged ischemic times [1]. These organs are associated with higher rates of discard due to an anticipated increased risk of primary non-function (PNF) or delayed graft function (DGF); therefore, novel dynamic preservation technologies are increasingly being adopted with the aim of allowing organ utilisation in these circumstances.

Dynamic preservation is not, in fact, a novel concept: ex situ organ perfusion was introduced in 1934 by Charles Lindbergh and Alexis Carrel, who developed the first machine perfusion (MP) device to preserve animal organs, while the first application in a human kidney was performed by Belzer in 1967. Although the initial result was successful, the concept of dynamic preservation was not pursued further at that time, and static cold storage (SCS) was progressively adopted instead, mainly for logistic and economic reasons.

Over the last thirty years, however, with the change in demographics of the donor population and the idea of tailoring the preservation method to the individual graft, the debate over the optimal treatment of an organ prior to transplantation, along with the possibility of letting the parenchymal cells continue their metabolic activity before implantation, has led to a re-investigation of dynamic preservation techniques [2]. In this scenario, where the temperature setting appears to be a main determinant of subsequent cell activity, and with no evidence for a gold-standard temperature at which to store retrieved grafts before implantation, there are two main alternatives to SCS: hypothermic (0–4 °C) and normothermic (34–37 °C) machine perfusion.

The aim of this study is to provide evidence, through a systematic review and meta-analysis, of outcomes in terms of organ viability and incidence of reperfusion injury with hypothermic/normothermic MP in comparison to SCS in human kidney and liver grafts.

#### **2. Methods**

The following search algorithm was adopted (Table 1):

**Table 1.** Search Algorithm.


#### *2.1. Inclusion Criteria*

All published studies were eligible, including: abstracts from conferences, primary research on new preservation strategies, clinical trials (randomised controlled trials, non-randomised trials), retrospective studies (single-centre studies, cohort studies), and case-controlled studies on kidney and liver transplantation comparing normothermic machine perfusion (NMP) and/or hypothermic machine perfusion (HMP) to SCS. To be included, a study had to analyse and discuss the effects of preservation temperature on at least one of the following post-transplant outcomes. For kidneys: PNF, incidence and duration of DGF, serum creatinine post-surgery, one-year graft survival, acute rejection, and estimated glomerular filtration rate (eGFR). For livers: PNF, serum bilirubin post-surgery, biliary stricture incidence, 1–7 day post-surgery peak AST, and early allograft dysfunction (EAD).

#### *2.2. Primary Objectives*


#### *2.3. Secondary Objectives*


#### *2.4. Data Extraction and Review*

Studies identified by the search strategy were screened for meeting the inclusion criteria using the titles and abstracts. Short-listed studies were further checked by reading the whole paper to exclude any ineligible studies, on the basis of the primary and secondary objectives.

#### *2.5. Risk of Bias Assessment*

The two reviewers (MN and JY) assessed the risk of bias independently. Randomised controlled trials (RCTs) and retrospective studies in humans were assessed by the Jadad scale. Where there was a disagreement about a Jadad score, advice from a third party (MIB) was sought.

#### *2.6. Data Analysis*

Meta-analysis was performed in RevMan 5.3 [4]. The effect estimate was calculated together with its 95% CI, studies were weighted by sample size, and heterogeneity was assessed with the I² statistic. When I² > 50%, a random effects model was used to account for heterogeneity; otherwise, a fixed effects model was used. The summary effect was determined using the *p*-value calculated from the Z test. The odds ratio (OR) was used to compare dichotomous data in organs perfused by HMP/NMP to SCS.
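The fixed-effect pooling and heterogeneity assessment described above can be sketched as follows. This is an illustrative, hedged example only: RevMan's default Mantel-Haenszel weighting differs somewhat from the generic inverse-variance approach used here, and the 2×2 tables are invented for demonstration.

```python
import math

def pool_odds_ratios(tables):
    """Fixed-effect inverse-variance pooling of odds ratios.

    tables: list of 2x2 tables (a, b, c, d) = (events and non-events in the
    MP arm, events and non-events in the SCS arm) for each study."""
    log_ors, weights = [], []
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # variance of the log OR
        log_ors.append(log_or)
        weights.append(1 / var)               # inverse-variance weight
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    # Cochran's Q and I^2 quantify between-study heterogeneity
    q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))
    df = len(tables) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), i2

# Two hypothetical studies, each favouring machine perfusion:
or_pooled, i2 = pool_odds_ratios([(10, 90, 20, 80), (8, 92, 15, 85)])
print(round(or_pooled, 2), round(i2, 1))  # pooled OR ~0.47, I^2 = 0.0
```

With I² below the 50% threshold named in the text, the fixed effects estimate would be retained; above it, a random effects model would be substituted.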

The standardised mean difference (SMD) was used to compare continuous data. For papers that did not report the mean and standard deviation, the method suggested by Wan et al. (2014) [5] was used to approximate them from the median and either the interquartile range or the range reported in those papers. Studies where this method was used are marked by **\*** in the forest plots.
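A hedged sketch of the Wan et al. (2014) approximation referenced above, in its simplified large-sample form: the full method uses a sample-size-dependent denominator for the SD, of which 1.35 (i.e., twice the 0.75 normal quantile) is the large-n limit. The example values are invented.

```python
def estimate_mean_sd_from_iqr(q1: float, median: float, q3: float):
    """Approximate mean and SD from a reported median and IQR (Wan 2014)."""
    mean = (q1 + median + q3) / 3   # scenario with median and quartiles
    sd = (q3 - q1) / 1.35           # IQR / (2 * z_0.75), large-sample form
    return mean, sd

# Example: a study reporting median 120 with IQR 100-150
mean, sd = estimate_mean_sd_from_iqr(100, 120, 150)
print(round(mean, 1), round(sd, 1))  # 123.3 37.0
```

These approximated means and SDs are what feed into the SMD calculation for the forest-plot entries marked with an asterisk.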

#### **3. Results**

The systematic search generated 10,585 studies, of which 672 abstracts and papers were shortlisted by reading the title; these were further reduced to 102 after reading the abstract. Finally, after reading the full article, a total of 30 papers were selected for meta-analysis (Figure 1).

**Figure 1.** Flow diagram of the systematic literature search.

#### *3.1. Selected Study Characteristics*

Twenty-two studies [6–27] identified by the systematic search were included in the analysis (Table 2); fifteen were published papers and seven were abstracts. Ten were RCTs [6,11,12,15,16,18,19,21,24,25], seven were retrospective studies [7–10,17,22,23], and five were prospective [13,14,20,26,27]. Predominantly, the studies used a LifePort® kidney transporter for hypothermic machine preservation; there was large variation in the type of cold storage solution, with some studies not specifying the preservation solution and instead referring to local guidelines.

The main difference between LifePort® and RM3® is that the latter provides oxygen by sweeping air over the membrane within the circuit.

**Table 2.** Studies comparing HMP and SCS in kidneys. Abbreviations: HTK: histidine-tryptophan-ketoglutarate, UW: University of Wisconsin, KPS-1: Kidney Perfusion Solution 1 (Organ Recovery Systems), SPS-1: Static Preservation Solution 1 (Organ Recovery Systems), ECD: expanded criteria donors, DBD: donation after brain death, and DCD: donation after cardiac death.



**Table 2.** *Cont.*

Four studies identified in the systematic search focused on comparing the effects of HMP and SCS in liver preservation [28–31] (Table 3). There was considerable heterogeneity in the type of machine used for HMP of liver grafts; however, almost all studies used the University of Wisconsin solution for SCS.


Four studies comparing normothermic perfusion of the liver with SCS [32–35] were included in the meta-analysis (Table 4). The predominant machine perfusion device was the OrganOx metra®. A variety of cold storage preservation solutions were used, and the most prevalent donor type was DBD (Table 4).



#### *3.2. Risk of Bias Assessment*

Overall, the studies had poor Jadad scores; this is explained by the many retrospective studies in which organs preserved with MP were matched with organs preserved via SCS, so that no randomisation or blinding was possible. There was a significant proportion of RCTs in the meta-analyses of HMP vs. SCS in kidneys (Table 5) and NMP vs. SCS in livers (Table 6); however, all of the studies comparing HMP to SCS in livers were retrospective and therefore had poor Jadad scores (Table 7).



**Table 6.** Risk of bias assessment of studies comparing HMP and SCS in liver.



#### *3.3. Kidney Transplant Outcomes*

PNF, DGF (incidence and duration), acute rejection, serum creatinine, one-year graft survival, and eGFR were meta-analysed.

#### *3.4. Primary Non-Function*

Five studies [12,13,15,21,24] reporting PNF (816 patients) demonstrated that HMP significantly decreased primary non-function compared to SCS (OR: 0.35, *p* = 0.003) (Figure 2).


**Figure 2.** Primary nonfunction in kidneys preserved via HMP and SCS.

#### *3.5. Delayed Graft Function*

Twenty-two studies [6–27] comparing HMP and SCS described the incidence of DGF (Figure 3) and its duration (Figure 4), with a total of 7963 patients. The overall OR was 0.57 (95% CI: 0.45 to 0.72), with *p* < 0.00001, favouring a significantly lower incidence of DGF in kidneys preserved by HMP.


**Figure 3.** DGF in kidneys preserved by hypothermic machine perfusion and cold storage.

Four of the studies [15,19,24,26] reporting DGF were included in a meta-analysis comparing the duration of DGF (352 patients) (Figure 4). There was no difference in duration of DGF in kidneys preserved with HMP and SCS (SMD: −0.04 CI 95% −0.25 to 0.17, *p* = 0.72) (Figure 4).


**Figure 4.** DGF duration in kidneys preserved by HMP and SCS. DGF duration was measured in days. In papers marked with "**\***", mean and standard deviation were calculated using the method described by Wan 2014 [5].

#### *3.6. Acute Rejection*

There was no significant difference in the incidence of acute rejection in kidneys preserved by HMP or SCS (OR: 0.91, 95% CI 0.66 to 1.27, *p* > 0.05). Five studies [12,15,19,23,25], with a total of 1193 patients, were used for this meta-analysis (Figure 5).

**Figure 5.** Acute rejection in kidneys preserved via HMP and SCS.

#### *3.7. Comparison of Serum Creatinine One Month after Kidney Transplantation*

Three studies [15,24,26] reported one-month post-transplantation serum creatinine (397 patients). There was no significant difference in serum creatinine values (SMD: −0.16 95% CI −0.62 to 0.31) (Figure 6).

**Figure 6.** Comparison of one month post transplantation serum creatinine in kidneys preserved via HMP and SCS. In papers marked with "**\***", mean and standard deviation were calculated using the method described by Wan 2014 [5].

#### *3.8. One-Year Graft Survival*

Seven studies [7,10,11,13,15,19,23] that had data on graft survival (1397 patients) were meta-analysed. Overall, kidneys preserved with HMP had a significantly longer one-year graft survival (OR: 1.61 95% CI: 1.02 to 2.53, *p* = 0.04) (Figure 7).


**Figure 7.** One year graft survival in kidneys preserved via HMP and SCS.

#### *3.9. Post-Transplant Estimated Glomerular Filtration Rate in HMP and SCS Kidneys*

One of our previous studies [7] and that of Tedesco et al. [24] were the only two to report eGFR at more than one time point after surgery. A combined meta-analysis of 200 patients demonstrated that HMP increased eGFR only on day 7 post-surgery (SMD: 0.39, 95% CI 0.11 to 0.67, *p* = 0.007) (Figure 8). There was no significant difference in eGFR between kidneys preserved with HMP and SCS on either day 14 (SMD: 0.99, 95% CI −0.26 to 2.24, *p* > 0.05) (Figure 9) or day 365 (SMD: 0.6, 95% CI −0.19 to 1.38, *p* > 0.05) (Figure 10).

**Figure 8.** Estimated glomerular filtration rate (eGFR) in kidneys preserved via HMP and SCS; eGFR day 7.


**Figure 9.** Estimated glomerular filtration rate (eGFR) in kidneys preserved via HMP and SCS; eGFR day 14.


**Figure 10.** Estimated glomerular filtration rate (eGFR) in kidneys preserved via HMP and SCS; eGFR day 365.

#### *3.10. Liver Transplant Outcomes*

PNF, EAD, and AST serum levels, bilirubin serum levels, and the incidence of biliary strictures were meta-analysed.

#### *3.11. Primary Non Function*

In livers preserved by either HMP (Figure 11) or NMP (Figure 12), there was no significant difference in PNF compared to livers stored using SCS. The odds ratio comparing HMP to SCS was 0.36 (95% CI 0.05 to 2.35, *p* = 0.29), and the odds ratio comparing NMP to SCS was 2.53 (95% CI 0.10 to 62.70, *p* = 0.67).

**Figure 11.** Primary nonfunction in livers preserved via HMP and SCS.

**Figure 12.** Primary nonfunction in livers preserved via NMP and SCS.

#### *3.12. Early Allograft Dysfunction*

Four studies [28–31] compared EAD prevalence in livers stored using HMP and SCS (206 patients). Overall, livers stored with HMP showed a lower prevalence of EAD (OR: 0.36, 95% CI 0.17 to 0.75, *p* = 0.006) (Figure 13). Similar results were reported by the three studies comparing EAD prevalence in livers stored using NMP and SCS (301 patients): livers stored with NMP also showed a lower prevalence of EAD compared to SCS (OR: 0.36, 95% CI 0.17 to 0.75, *p* = 0.006) (Figure 14).

**Figure 13.** Early allograft dysfunction in livers preserved via HMP and SCS.


**Figure 14.** Early allograft dysfunction in livers preserved via NMP and SCS.

#### *3.13. Serum AST*

Two studies [28,29] (115 patients) comparing HMP to SCS demonstrated the superiority of HMP in reducing post-transplantation AST levels (SMD: −0.59, 95% CI −0.98 to −0.20, *p* = 0.003) (Figure 15). Similarly, four studies [32–35] comparing NMP to SCS demonstrated that livers preserved with NMP had significantly lower serum AST levels than those stored with SCS (SMD: −0.63, 95% CI −0.85 to −0.41, *p* < 0.00001) (Figure 16).

**Figure 15.** Peak serum AST in studies comparing HMP to SCS. In papers marked with "**\***", mean and standard deviation were calculated using the method described by Wan 2014 [5].


**Figure 16.** Peak serum AST in studies comparing NMP to SCS. In papers marked with "**\***", mean and standard deviation were calculated using the method described by Wan 2014 [5].

#### *3.14. Serum Bilirubin*

Results from Dutkowski [28], Guarrera [29], and van Rijn [31] (115 patients) demonstrated an overall significant decrease in post-transplantation serum bilirubin (SMD: −0.59, 95% CI −0.98 to −0.20, *p* = 0.003) in livers stored with HMP compared to SCS (Figure 17).

**Figure 17.** One week post transplantation peak serum total bilirubin in studies comparing HMP to SCS. In papers marked with "**\***", mean and standard deviation were calculated using the method described by Wan 2014 [5].

Three studies [32,34,35] comparing NMP to SCS described total serum bilirubin one week post-transplantation (181 patients) and demonstrated that there was no significant difference in bilirubin levels (SMD: −0.20, 95% CI −0.44 to 0.03, *p* = 0.09) (Figure 18).


**Figure 18.** One week post transplantation peak serum total bilirubin in studies comparing NMP to SCS. In papers marked with "**\***", mean and standard deviation were calculated using the method described by Wan 2014 [5].

#### *3.15. Biliary Strictures*

Four studies [28–31] (Figure 19) comparing SCS to HMP in liver preservation (206 patients) demonstrated a significantly higher incidence of post-transplantation biliary strictures in the SCS group (OR: 2.59, 95% CI 1.19 to 5.61, *p* = 0.02).


**Figure 19.** Post transplantation biliary stricture rates in studies comparing HMP to SCS.

#### **4. Discussion**

This meta-analysis assessed the impact of dynamic preservation techniques on the viability and incidence of reperfusion injury in kidney and liver versus the traditional static cold storage before transplantation. The results were further sub-analysed in relation to the different organs considered.

HMP significantly lowered the incidence of delayed graft function in transplanted kidneys compared to SCS; however, it had no effect on DGF duration, although only four studies reported this parameter. Furthermore, HMP was associated with reduced PNF and prolonged one-year graft survival, demonstrating the importance of machine perfusion technology for the utilisation of grafts from extended criteria donors. Overall, serum creatinine of the transplanted grafts was similar, although a difference in eGFR was seen on day 7 post-transplantation. In the long term, however, there was no difference between kidneys preserved via HMP and SCS. This raises the question of whether the long-term function of an organ is intrinsically related to the quality of the organ itself (standard or extended criteria), whilst the immediate post-transplant function is directly dependent on the preservation technique. For this reason, emerging possibilities of reconditioning during preservation are being considered to improve the quality of the organ and possibly influence the long-term outcome. In that regard, nutrients, therapeutic gases, mesenchymal stromal cells, gene therapies, and nanoparticles could be delivered to effectively repair an extended criteria organ during the preservation period, prior to implantation. The use of oxygen in particular might contribute to the long-term outcome of the preserved parenchymal cells: it is of note, as shown in Figure 10, that a difference in one-year eGFR is in favour of HMP kidneys preserved with an oxygenated circuit. Additional oxygen may support aerobic activity and counteract the cellular injury process, with a more prominent effect in the long term.
Furthermore, the efficiency of MP in assessing organ quality, with possible reconditioning, and in predicting transplant outcome is of great interest in modern transplant practice, with an emerging role for these novel technologies as a possible diagnostic tool.

In contrast to the kidney, no difference in PNF was seen in livers preserved via HMP or NMP compared to SCS. In liver preservation, both HMP and NMP demonstrated superior outcomes in mitigating early allograft dysfunction and post-transplantation AST levels compared to SCS, but only HMP significantly decreased serum bilirubin and the incidence of biliary strictures. In addition, the value of AST as an endpoint is controversial, because AST can be released into the perfusate during MP; a more reliable marker should therefore be considered in future studies. These conflicting results might be related to the relatively small number of RCTs, which provide insufficient evidence to conclude a clear superiority of one modality over the other. What does appear clear is that more clinical studies, with homogeneous parameters to measure the outcomes of interest, are needed for verification.

In conclusion, there is growing evidence that MP allows for the utilisation of marginal kidneys with lower primary and delayed graft function rates. There is also evidence of improved early allograft dysfunction after dynamic preservation for livers, but more studies are needed to prove the potential superiority of these novel technologies.

**Author Contributions:** M.I.B. designed the study, analysed the data and wrote the paper; M.N. performed the study, collected the data and wrote the paper; J.Y. performed the study and collected the data; V.P. designed the study, analysed the data and reviewed the paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**


#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
