Review

Ethics and Algorithms to Navigate AI’s Emerging Role in Organ Transplantation

by Amankeldi A. Salybekov 1,*, Ainur Yerkos 2, Martin Sedlmayr 3 and Markus Wolfien 3,4,*

1 Regenerative Medicine Division, Cell and Gene Therapy Department, Qazaq Institute of Innovative Medicine, Astana 020000, Kazakhstan
2 Department of Computer Science, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
3 Institute for Medical Informatics and Biometry, Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, 01069 Dresden, Germany
4 Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), 01069 Dresden, Germany
* Authors to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(8), 2775; https://doi.org/10.3390/jcm14082775
Submission received: 19 March 2025 / Revised: 14 April 2025 / Accepted: 15 April 2025 / Published: 17 April 2025

Abstract

Background/Objectives: Solid organ transplantation remains a critical life-saving treatment for end-stage organ failure, yet it faces persistent challenges, such as organ scarcity, graft rejection, and postoperative complications. Artificial intelligence (AI) has the potential to address these challenges by revolutionizing transplantation practices. Methods: This review article explores the diverse applications of AI in solid organ transplantation, focusing on its impact on diagnostics, treatment, and the evolving market landscape. We discuss how machine learning, deep learning, and generative AI harness vast datasets to predict transplant outcomes, personalize immunosuppressive regimens, and optimize patient selection. Additionally, we examine the ethical implications of AI in transplantation and highlight promising AI-driven innovations nearing FDA evaluation. Results: AI improves organ allocation processes, refines predictions of transplant outcomes, and enables tailored immunosuppressive regimens. These advancements contribute to better patient selection and enhance overall transplant success rates. Conclusions: By bridging the gap in organ availability and improving long-term transplant success, AI holds promise to significantly advance the field of solid organ transplantation.

1. Introduction

According to global transplantation and donation databases and international organ transplantation registry reports, Western countries led in solid organ transplantation per million population in 2023 [1,2]. Every donor has the potential to save at least eight lives and enhance over 75 others [3]. The United States, for example, is among the countries with the highest transplantation rates, with over 46,000 transplants performed, including those from deceased and living donors [3]. However, a recent report by the United Network for Organ Sharing (UNOS) revealed a steady increase in the number of patients on the waiting list, which, as of March 2024, included more than 103,000 patients [3]. Despite the high transplantation rate and a three-tiered system designed to ensure more equitable organ allocation, the demand for donor organs currently exceeds the supply, with 17 people dying each day in the USA while waiting for a transplant [3]. This situation highlights the need for further improvements in solid organ transplantation. Consequently, artificial intelligence (AI) has recently been introduced to enhance the allocation and transplantation process [4,5,6,7].
The accessibility of healthcare data and advancements in analytical methods have driven the exponential growth of AI across various medical fields. AI algorithms can create models that automatically learn and generate predictions based on previous knowledge or experience, improving information processing without needing explicit programming for all possible cases. Furthermore, many AI algorithms can improve over time, adapting to new data and resulting in continuous system enhancement and increased accuracy. AI significantly enhances the complex decision-making involved in pre- and post-transplant patient care by analyzing vast datasets and offering clinical recommendations [8]. Notably, AI-based classifiers are being used to improve organ allocation and donor-recipient matching [7], immunosuppression management in real-time [9], and precision in transplant pathology [10,11]. The goal of AI is to uncover hidden patterns and intricate relationships in large datasets, enabling logical outcomes and efficient resource use. Although still in the early stages and lacking validated algorithms for precise organ selection, rejection prediction, or reduction of postoperative complications, AI has already made strides in reducing rejection rates and improving transplantation and organ preservation techniques over recent decades. As AI continues to advance, it holds the potential to transform not only the technical aspects of transplantation but also the ethical landscape for patients and physicians. The integration of AI in clinical workflows introduces new ethical considerations, such as ensuring transparency in AI-driven decisions, maintaining patient autonomy, and addressing potential biases in algorithmic predictions. For physicians, the advent of AI-enhanced decision-making tools, particularly those that may receive FDA approval in the near future, could necessitate updates to clinical workflows and protocols. These tools, while promising in their ability to improve outcomes and streamline processes, also require careful consideration of the physician’s role in interpreting AI recommendations and maintaining the patient–physician relationship. This review explores the current and potential future applications of AI in solid organ transplantation, while also examining the ethical challenges and changes in clinical practice that may arise as AI-based systems become more integrated into the field.

2. Materials and Methods

A literature search was conducted in the PubMed, Google Scholar, Cochrane, and Scopus databases using the keywords “artificial intelligence”, “artificial intelligence in solid organ transplantation”, “artificial intelligence guidelines”, and “artificial intelligence in transplantation”. Additionally, reference lists from relevant studies were manually reviewed. Only English-language guidelines, research articles, and reviews were selected for inclusion. The exclusion criteria were as follows:
  • Studies not written in English;
  • Conference abstracts, notes, letters, case reports, or animal studies;
  • Duplicate studies.

3. Evolution of Data Analysis—From Traditional Statistics to Generative AI

Recent years have witnessed rapid growth in research on AI and its applications in healthcare, driven by advancements in computing technology and a surge in the volume of digital data available for analysis. Additionally, the increasing complexity of medical data, such as genetic, follow-up, imaging, and laboratory data, often renders traditional statistical methods insufficient for accurate interpretation. Recent studies have demonstrated that AI-assisted analysis of donor and recipient data can enhance the prediction of short- and long-term solid organ transplant survival [12]. Figure 1 presents this transformation of data analysis strategies. Statistical methods are effective tools for analyzing data across many fields, including medicine, and traditional statistical methods are commonly employed to identify relationships between input and output data. However, there are notable differences between classical statistical learning and modern machine learning methods, primarily concerning their capabilities and limitations. Traditional statistical methods, such as linear and logistic regression, follow a top-down approach: the researcher starts with a known model and estimates its parameters from the available data [13]. These methods are effective for datasets of limited size and complexity, but their capabilities diminish as the volume and diversity of information increase. In such scenarios, it is advisable to employ AI methods, such as machine learning and deep learning, which offer greater flexibility and can solve a wider range of problems [14]. These methods follow a bottom-up approach, starting with the data and allowing the algorithm to develop a model with the primary goal of prediction [13]. The resulting models are often complex, and some parameters cannot be estimated directly from the data; instead, they are selected based on relevant previous studies or tuned during training to yield the best prediction. Compared to traditional statistical methods, machine learning algorithms can handle a larger number and wider variety of variables (ordinal, nominal, and others), but they also require a larger sample size for analysis. As a result, while machine learning algorithms can deliver remarkable results in certain contexts, their effectiveness depends on the quality and quantity of available data, as well as the specifics of the task [15,16]. For example, machine learning has been shown not to outperform Cox proportional hazards models in some settings [17]. This underscores that every machine learning algorithm has its own advantages and limitations and may not be suitable for every task. In other words, machine learning can handle complex interactions in large datasets to predict outcomes more accurately, but these models require a far larger number of input and output pairs for training than traditional statistical methods typically do.
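As a concrete, hedged illustration of the top-down versus bottom-up distinction, the minimal Python sketch below fits a conventional logistic regression and a random forest to the same simulated tabular dataset and compares their cross-validated discrimination; the data and setup are invented for illustration and do not reproduce any of the cited studies.

# Compare a traditional statistical model (logistic regression) with a machine
# learning model (random forest) on the same simulated tabular dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical donor/recipient features with a binary graft-failure label (simulated).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "logistic regression (top-down)": LogisticRegression(max_iter=1000),
    "random forest (bottom-up)": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean cross-validated AUC = {auc.mean():.3f}")
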
Deep learning and generative AI are two powerful subsets of machine learning, each offering unique advantages in the field of organ transplantation. Deep learning, characterized by its use of layered neural networks, excels at analyzing and interpreting complex medical data, such as images from CT scans or biopsy results. This capacity enables deep learning models to assess the suitability of organs for transplant and to predict patient outcomes with high accuracy by capturing intricate patterns and dependencies in imaging data that traditional models might overlook. Generative AI, on the other hand, focuses on synthesizing new data points, which can be instrumental in organ transplantation for creating realistic, anonymized medical datasets, not only for images but also for tabular data [18,19]. These synthesized datasets can be used to train more robust models without the privacy concerns associated with using real patient data [20]. Generative AI can also simulate various organ-matching scenarios, helping to optimize the allocation and expected outcomes of organ transplants. Additionally, generative AI, by creating new content based on training data, has the potential to greatly simplify tasks in transplantation, from research and education to clinical practice and patient education, provided that it is continually improved and implementation challenges are addressed [21]. Combining these two approaches aims to transform transplantation by improving predictive models, optimizing organ selection, and refining the prognostic tools available to physicians, which in turn can increase transplant success rates and improve patient care. However, to fully realize the potential of AI to improve the lives of patients in need of transplantation, AI systems must be transparent and understandable, and physicians must be given detailed explanations of how they work and how they were validated [22].
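As a toy sketch of the tabular side of generative modelling, the snippet below fits a multivariate Gaussian to simulated recipient features and samples new synthetic rows that preserve the joint statistics without copying any individual record; real generative approaches for health data (GANs, variational autoencoders, copulas) are far more sophisticated, and all feature names and values here are invented.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical real-valued recipient features: age (years), eGFR (mL/min/1.73 m2),
# and cold ischemia time (hours); 500 simulated "real" records.
real = np.column_stack([
    rng.normal(52, 12, 500),
    rng.normal(55, 18, 500),
    rng.normal(10, 4, 500),
])

# Fit a multivariate Gaussian to the real table and sample synthetic rows from it.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)                 # preserves correlations between features
synthetic = rng.multivariate_normal(mean, cov, size=500)

# The synthetic table mimics the joint statistics of the original without copying
# any individual record.
print(np.round(synthetic[:3], 1))
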
As these AI-based innovations mature, we are getting closer to having advanced tools specifically designed for kidney, heart, and liver transplantation. These tools, some of which are already in the FDA approval process, promise to improve every aspect of transplantation, as will be detailed in the following sections.

4. AI Tools in Transplantation

Artificial intelligence (AI) is increasingly being used in various fields of medicine, including organ transplantation, to improve treatment efficiency and optimize clinical decision-making. Figure 2 shows some of the AI-based tools used in kidney, heart, and liver transplantation.

4.1. Kidney Transplantation

In the field of kidney transplantation, AI-based tools have been developed in both academic and commercial settings to predict outcomes such as graft survival, to guide immunosuppression, and to improve donor–recipient matching [9,22,23,24]. Below, we discuss prominent AI-based tools in kidney transplantation.
iBox: This tool was developed by an international team led by French researchers and is designed to predict the risk of kidney transplant loss [13,25]. The features include estimated glomerular filtration rate (eGFR), proteinuria, anti-human leukocyte antigen donor-specific antibody, and kidney graft biopsy histopathology, which can be assessed at any time after the transplant. iBox has been validated and tested on over 7500 patients with a combination of the above-mentioned parameters to provide personalized predictions of patient outcomes three, five, and ten years after the transplant [13,25]. Initially developed for individual patient use to guide clinical care and management of kidney transplant recipients, iBox was later adapted in collaboration with the Paris Transplant Group, Critical Path Institute, and other transplant community members to create the Transplant Therapeutics Consortium (TTC) for clinical trials [14,26]. This adaptation, approved by the European Medicines Agency (EMA) [19,27], aims to enhance drug development by using early data to predict long-term results. Five datasets supporting the regulatory endorsement of iBox, representing over 2500 de novo kidney transplant recipients with 1-year iBox assessments, were evaluated [18,28]. Furthermore, these datasets included details about each component measured by the iBox assay and documentation certifying that the analytical methods were strong, dependable, and suitable for their intended use [18,28]. Taken together, iBox allows clinicians to personalize patient follow-up and could potentially accelerate the development of new immunosuppressive treatments. The iBox Scoring System can be utilized to show how a new immunosuppressive therapy is better than the standard treatment from 6 to 24 months after transplantation in pivotal or investigative drug therapy studies.
OrganPredict: The OrganPredict (Precision and Reliability in Estimating Donor Immunological Compatibility for Transplantation) platform includes two major predictive models: L-TOP (Living-Donor Kidney Transplant Outcome Prediction) and D-TOP (Deceased-Donor Kidney Transplant Outcome Prediction) based on the UNOS database [29,30]. These models are used to optimize the matching process between donors and recipients in kidney transplants, predicting outcomes for both living and deceased donors. The predictions cover aspects like graft survival for up to 14 years. The authors utilized a deep Cox mixture model for prediction and compared it with the classical kidney donor profile index (KDPI). The results demonstrated that AI-based L-TOP and D-TOP outperformed the KDPI in evaluating transplant pairs based on graft survival, potentially enhancing deceased donor selection [29,30].
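The L-TOP and D-TOP models themselves are not reproduced here, but the following sketch (using the lifelines package on simulated data) shows the kind of evaluation reported above: fitting a survival model for graft failure and reading off its concordance index, the metric commonly used to compare such models against allocation scores like the KDPI. All variables and data are illustrative assumptions.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "donor_age": rng.normal(45, 15, n),          # simulated covariates
    "hla_mismatches": rng.integers(0, 7, n),
    "cold_ischemia_h": rng.normal(12, 5, n),
})
# Simulated time to graft failure (years), influenced by the covariates, with censoring.
risk = 0.02 * df["donor_age"] + 0.15 * df["hla_mismatches"] + 0.05 * df["cold_ischemia_h"]
df["time"] = rng.exponential(scale=14 / np.exp(risk - risk.mean()))
df["event"] = (rng.random(n) < 0.6).astype(int)  # 1 = graft failure observed, 0 = censored

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(f"Concordance index: {cph.concordance_index_:.3f}")  # metric used to rank competing models
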
NephroCAGE: The NephroCAGE consortium began as a flagship project supported by the governments of Germany and Canada. It aims to combine medical and technical expertise to build and evaluate a practical use case demonstrating the benefits of AI in a specific nephrology scenario. The NephroCAGE dataset consists of more than 8000 transplant patient cases collected over the past two decades, with an average age of 51.7 years. Consortium members aim to prevent adverse post-transplant endpoints, such as loss of graft function, graft failure, and patient death, and are also developing an open-source command-line client for anonymizing recipient and donor HLA data [31]. Of note, the NephroCAGE project is not yet finished, and its performance is difficult to judge from the data available so far.
In sum, the above-mentioned tools exemplify how AI can enhance decision-making in transplantation, from improving patient outcomes to optimizing organ allocation. iBox focuses more on post-transplant monitoring and prediction of long-term outcomes, while OrganPredict aids in both pre-transplant decision-making processes and post-transplant outcomes. Each tool leverages AI to handle complex datasets and improve the accuracy of predictions in the field of kidney transplantation.
iChoose Kidney: iChoose Kidney is designed to offer personalized prognosis estimates for patients with kidney disease, helping them understand the mortality risks associated with choosing between dialysis and kidney transplantation from living or deceased donors [32]. The mortality risk estimates provided by iChoose Kidney draw on data from the United States Renal Data System (2005–2015). This national database encompasses nearly all U.S. patients with end-stage kidney disease during that period, tracking outcomes related to transplantation and mortality. The personalized risk assessments are generated and validated using multivariable logistic regression models and provide mortality risks tailored to each patient’s age, race, and comorbid conditions, based on the information submitted in the patient information form. The iChoose Kidney model’s initial validation cohorts demonstrated a moderate ability to discriminate 3-year mortality risks, with a c-statistic of 0.7047 (95% confidence interval: 0.7029–0.7065) for dialysis patients and 0.7015 (95% confidence interval: 0.6875–0.7155) for transplant recipients [32]. Subsequent researchers revised the iChoose Kidney tool, and the coefficients from the updated version of the shared decision aid were comparable to the earlier version, demonstrating similar or enhanced predictive capabilities [33]. By incorporating a more detailed classification of time spent on dialysis and differentiating between preemptive transplants and other types, the predictive models can account for more subtleties in patient survival outcomes. Additionally, recognizing different dialysis methods, such as in-center hemodialysis, home hemodialysis, and peritoneal dialysis, helps to acknowledge the unique survival rates associated with each modality. The tool is designed as a collaborative decision-making aid for healthcare providers, including family doctors, nephrologists, transplant surgeons, nurses, social workers, and patient educators, to use with patients who are considered potential candidates for transplantation. It provides mortality risk estimates specifically for patients suffering from advanced chronic kidney disease or end-stage kidney disease (estimated glomerular filtration rate < 15 mL/min/1.73 m²). Legally, physicians must inform patients with end-stage kidney disease about all treatment options, including transplantation. Discussions about treatment choices for these patients should take place in chronic kidney disease clinics before dialysis begins, at the start of long-term dialysis, and throughout the transplant evaluation process. At the same time, iChoose Kidney is a practical tool for patients with chronic kidney disease or end-stage kidney disease, giving them easy access to real-world data on the benefits of kidney transplantation compared with dialysis [34].
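A c-statistic with a confidence interval, like the values reported for iChoose Kidney, can be obtained from a multivariable logistic regression together with a simple bootstrap, as in the sketch below; the synthetic data and setup are illustrative and are not the actual iChoose Kidney model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated stand-in for patient characteristics and a 3-year mortality label.
X, y = make_classification(n_samples=5000, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# Bootstrap the held-out set to obtain a 95% confidence interval for the c-statistic.
rng = np.random.default_rng(0)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_test), len(y_test))
    if len(np.unique(y_test[idx])) < 2:          # skip resamples with a single class
        continue
    boot.append(roc_auc_score(y_test[idx], probs[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"c-statistic = {roc_auc_score(y_test, probs):.3f} (95% CI {lo:.3f}-{hi:.3f})")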

4.2. Heart Transplantation

Several AI-based tools are currently being employed and developed to predict outcomes in heart transplant patients, enhancing both the clinical decision-making process and patient management.
CARE (Cardiac Allograft Rejection Evaluator) Model: Arabyarmohammadi and colleagues developed a novel method for automated, comprehensive analysis of heart biopsy images, called the Cardiac Allograft Rejection Evaluator (CARE). The authors utilized a total of 2900 endomyocardial biopsy images labeled with rejection grades (high versus low) and clinical trajectories (evident versus silent rejection). Using an image analysis approach, 370 quantitative morphology features describing lymphocytes and stroma were extracted from each slide. Two models were built to compare the subset of features associated with rejection grades to those linked with clinical trajectories. A proof-of-concept machine learning pipeline, CARE, was then developed to test the feasibility of identifying the clinical severity of a rejection event. CARE extracts features related to the shape, texture, and spatial architecture of muscle cells, immune cells, and stromal fibers in heart tissue specimens to predict rejection outcomes in heart transplant patients. The CARE model relies on intuitive and explainable image features, making it more transparent and easier for clinicians to use than more opaque ‘black box’ AI models. Notably, the histopathological findings associated with conventional rejection grades differed significantly from those associated with clinically evident allograft injury. Taken together, quantitative assessment of a small set of well-defined morphological features may reflect the severity of rejection more accurately than the International Society for Heart and Lung Transplantation grades.
AI-ECG: In current clinical practice at some centers, heart transplant recipients undergo multiple endomyocardial biopsies (EMBs), typically every 1–4 weeks during the first year after transplantation, so a single patient may undergo 14 or more biopsies [35,36,37]. This frequent screening for allograft rejection is not only inconvenient, owing to the numerous hospital visits and associated costs, but also carries a risk of complications [38]. These complications range from mild, such as pericardial effusion, to severe, such as significant tricuspid valve regurgitation necessitating repeat cardiac surgery [38]. It is therefore crucial to provide heart transplant patients with safer, non-invasive screening methods for cardiac allograft rejection. Innovative diagnostic tools that enable remote screening could allow patients to be monitored more frequently while reducing the need for constant visits to the transplant center. AI-ECG tools use electrocardiogram data to detect acute cellular rejection (ACR) in heart transplant recipients. AI-enhanced ECG detected ACR with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.84 (95% confidence interval (CI): 0.78–0.90) and a sensitivity of 95% (19 out of 20 cases; 95% CI: 75–100%). In a prospective proof-of-concept screening study involving 56 participants and 97 ECG-biopsy pairs, AI-ECG identified ACR with an AUC of 0.78 (95% CI: 0.61–0.96) and achieved a sensitivity of 100% (2 out of 2 cases; 95% CI: 16–100%) [39]. In summary, these findings show that a deep learning model based on the 12-lead ECG can identify moderate-to-severe ACR in heart transplant recipients. The proof-of-concept prospective analysis yielded promising results; however, these require verification in larger cohorts. This work adds to the expanding body of evidence supporting the potential of artificial intelligence models to improve cardiovascular care [33].
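The sensitivity intervals quoted above (for example, 19 of 20 cases with a 95% CI of 75–100%) are consistent with exact binomial (Clopper–Pearson) intervals, which can be reproduced with a few lines of SciPy, as sketched below.

from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) confidence interval for k successes out of n trials."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# The two sensitivities quoted for AI-ECG: 19/20 detected rejections and 2/2.
for k, n in [(19, 20), (2, 2)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{k}/{n}: sensitivity {k / n:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
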
Machine Learning for Long-term Predictions: Research has been conducted on using machine learning models to predict 5-year mortality and graft failure in heart transplant recipients [40]. The study leveraged data from the International Society for Heart and Lung Transplantation (ISHLT) registry to study 15,236 patients who received orthotopic heart transplants (OHT) between January 2005 and December 2009. A total of 342 variables were extracted and used to build a risk prediction model with a gradient-boosted machine (GBM) algorithm, aiming to predict the risk of graft failure and mortality five years after hospital discharge [40]. The authors excluded variables with at least 50% missing observations and imputed the remaining missing values using a model-based approach, which may have introduced additional prediction error. Five years after OHT, the mortality rate was 27.3% (n = 4161), while the graft failure (GF) rate was 28.1% (n = 4276) [40]. This may indicate that classical care and prediction methods after OHT are not as effective as desired, and further research is needed to improve post-transplant outcomes. The AUC for predicting 5-year mortality was 0.717 (95% CI 0.696–0.737), and for predicting graft failure it was 0.716 (95% CI 0.696–0.736). The authors concluded that the factors with the greatest impact on predicting 5-year mortality and graft failure were length of hospital stay, recipient and donor age, recipient and donor body mass index, and ischemic time. These tools represent a significant advancement in the field of transplant medicine, offering the potential for better patient outcomes through more precise and timely predictions. As these technologies continue to evolve, they are expected to become integral parts of clinical workflows in heart transplant centers.
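The preprocessing described above, dropping variables with at least 50% missing values, imputing the rest, and fitting a gradient-boosted model, can be sketched as follows; the data, threshold handling, and column names are illustrative assumptions and not the ISHLT registry pipeline.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1500, 12)), columns=[f"var_{i}" for i in range(12)])
df.iloc[:, :3] = df.iloc[:, :3].mask(rng.random((1500, 3)) < 0.6)  # three mostly missing variables
# Simulated outcome (roughly a quarter positive), partly driven by one recorded variable.
y = (df["var_3"] + 0.7 * rng.normal(size=1500) > 0.8).astype(int)

kept = df.loc[:, df.isna().mean() < 0.5]                           # drop variables with >=50% missing
model = make_pipeline(SimpleImputer(strategy="median"),            # impute what remains
                      GradientBoostingClassifier(random_state=0))
auc = cross_val_score(model, kept, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.3f}")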

4.3. Liver Transplantation

There are several AI-based tools and techniques used in the prediction of outcomes related to liver transplantation, ranging from academic research models to potentially commercially available tools. Here is a comparison of some notable examples:
Graft Failure Prediction: The collection of pre- and postoperative clinical data from both donors and recipients forms essential, non-invasive sources for machine learning to forecast graft failures. In this context, clinical and laboratory data analyzed using artificial neural networks (ANNs) more accurately predicted the risk of acute graft rejection compared to logistic regression (LR) at all evaluated time intervals [41]. In related research, pre-operative data were merged with donor and recipient characteristics, and the predictions from random forests (RF) and ANNs were strongly linked to the outcomes of grafts within 30 days [42]. The accuracy was further enhanced when RF incorporated validated elements from established predictive frameworks such as the Donor Risk Index (DRI) and Model for End-Stage Liver Disease (MELD) [42]. This study illustrates that not only is machine learning more precise than conventional models in predicting, but the selection of parameters is also crucial for its predictive efficacy.
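The observation that accuracy improved when validated clinical scores were added as inputs can be illustrated with a short sketch: a random forest trained on simulated routine variables alone versus the same variables plus a hypothetical MELD-like composite score. All data and names below are invented for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
base = rng.normal(size=(n, 6))                        # simulated routine donor/recipient variables
meld_like = rng.normal(20, 8, n)                      # hypothetical MELD-like composite score (simulated)
# Simulated 30-day graft outcome depending on both the score and one routine variable.
y = ((0.15 * meld_like + base[:, 0] + rng.normal(size=n)) > 4).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
auc_base = cross_val_score(rf, base, y, cv=5, scoring="roc_auc").mean()
auc_plus = cross_val_score(rf, np.column_stack([base, meld_like]), y, cv=5, scoring="roc_auc").mean()
print(f"AUC without score: {auc_base:.3f}, with score: {auc_plus:.3f}")
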
Machine Learning Models vs. Traditional Scoring Systems in Liver Transplantation: Most recently, Chongo et al. performed a systematic analysis to assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems [43]. In all the research reviewed, machine learning (ML) algorithms and models were constructed using variables from both donors and recipients prior to transplantation. Predictions of short-term mortality typically covered up to 90 days, whereas long-term predictions extended up to five years. The analysis of the area under the receiver operating characteristic (AUROC) curve indicated that ML models consistently provided satisfactory to excellent accuracy in predicting both short- and long-term mortality, as well as the risk of post-transplant complications in liver transplant patients [43]. Additionally, AUROC analysis showed that ML models surpassed traditional models and scoring systems, including widely recognized models such as the Model for End-Stage Liver Disease (MELD), Delta MELD (D-MELD), Survival Outcomes Following Liver Transplantation (SOFT), Pediatric SOFT (P-SOFT), Balance of Risk (BAR), Donor Risk Index (DRI), Age-Bilirubin-INR-Creatinine (ABIC) score, Chronic Liver Failure Consortium Organ Failure score (CLIF-C OFs), Acute-on-Chronic Liver Failure score (CLIF-C ACLFs), and the Sequential Organ Failure Assessment (CLIF SOFA) [12]. Moreover, ML models also demonstrated advantages over models based on Cox proportional hazards and logistic regression [43]. In evaluating 90-day mortality predictions, the Random Forest (RF) model showcased the highest area under the curve (AUC) score of 0.940, outperforming other machine learning models [43]. From the six studies reviewed that explored the use of machine learning models to predict post-liver transplant complications, an analysis revealed that the ‘gradient boosting machine’ model excelled above others in forecasting the likelihood of complications such as graft-versus-host disease (GVHD), pneumonia, and acute kidney injury (AKI) [12]. Conversely, the RF model was more effective in accurately predicting the risks of sepsis and AKI following liver transplantation [43]. Strong evidence shows that machine learning models are capable of significantly improving decision-making processes in allograft allocation and liver transplantation, representing a major advancement in the area of prognostication. The progress in this field suggests that more robust and widely applicable AI tools could soon be on the horizon, enhancing both the efficiency and outcomes of liver transplantation programs [16].

5. Ethical Aspects of AI Tools in Organ Transplantation

The integration of AI tools into the field of organ transplantation represents a significant shift, with the potential to impact various aspects of the overall transplantation process for both patients and physicians. As previously discussed, AI is being applied to improve organ matching, predict post-transplant outcomes, and enhance the efficiency of transplant logistics. However, these advancements bring important ethical considerations that need to be addressed to ensure that AI tools are used responsibly and equitably [44]. This section explores the key ethical challenges associated with the use of AI in transplantation, including issues related to bias, fairness, transparency, accountability, patient autonomy, and the broader implications for clinical workflows and healthcare systems.

5.1. Promoting Equity by Addressing Bias and Fairness in AI Systems

AI systems in transplantation, like in other areas of healthcare, are susceptible to biases that may perpetuate or even exacerbate existing disparities. These biases can stem from the data used to train AI algorithms, which often reflect historical inequities in healthcare access and outcomes [45]. For example, studies have shown that AI systems can exhibit racial bias, potentially leading to discriminatory practices in organ allocation. The implications of such biases are particularly severe in transplantation, where the stakes involve life and death decisions [46,47]. Efforts to mitigate bias involve developing more representative datasets and implementing algorithmic fairness techniques [45,48]. However, these measures alone may not fully eliminate the risk of unintended consequences. Therefore, continuous monitoring for bias and integrating bias detection tools and techniques, such as fairness audits and bias impact assessments, into the AI development and deployment processes is necessary [49].
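A basic building block of such a fairness audit is comparing model error rates across demographic groups, for example the true-positive rate per group (an equal-opportunity check). The sketch below runs this check on synthetic data; the group labels, features, and outcome are hypothetical.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 8))                                   # simulated clinical features
group = rng.choice(["A", "B"], size=n)                        # hypothetical demographic attribute
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)      # simulated outcome label

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, test_size=0.3, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

# Equal-opportunity check: does the model detect true positives equally well in each group?
audit = pd.DataFrame({"group": g_te, "y": y_te, "pred": pred})
for g, sub in audit.groupby("group"):
    tpr = (sub.loc[sub["y"] == 1, "pred"] == 1).mean()
    print(f"group {g}: true-positive rate = {tpr:.2f}")
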
Moreover, balancing fairness with accuracy is a critical challenge. While creating fair AI systems is paramount, it should not come at the cost of compromising the system’s accuracy and efficiency, particularly in high-stakes fields like organ transplantation [50]. A recently published article by Audrey Lebret discusses fairness in AI in the context of healthcare, emphasizing the need to mitigate bias and ensure equitable outcomes [51]. Here, the author discusses that the European human rights law approach directly supports these objectives by providing a legal and ethical framework that demands fairness, non-discrimination, and respect for human dignity in all AI applications. For instance, the article’s focus on bias in AI and its impact on healthcare aligns with the European human rights law’s emphasis on non-discrimination and the right to health (as articulated in Article 35 of the EU Charter). Moreover, the call for transparency and accountability in AI systems resonates with the European legal principles that require clear, explainable, and accountable decision-making processes. In addition to these technical and ethical challenges in bias and fairness, there is a broader need for stakeholder engagement, including input from AI developers, clinicians, and patient advocacy groups, to promote equity in AI-driven transplantation [52]. Organizations and institutions must take an active role in setting standards for fairness and enforcing compliance, while also educating end users about the potential biases in AI systems. This comprehensive approach is necessary to ensure that AI systems contribute to a more equitable healthcare system, particularly in the critical domain of organ transplantation [53].

5.2. Transparency and Explainability: Unveiling the Black Box Through Transparency and Explainability

The rapid advancement of AI technologies, particularly in fields like transplantation, offers tremendous potential to improve patient outcomes. However, the complexity of AI algorithms, especially those based on deep learning, presents significant challenges related to transparency and explainability. In high-stakes contexts like organ transplantation, where decisions directly impact life and death, it is essential that healthcare providers and patients understand how AI-generated recommendations are made.

5.2.1. Transparency and Its Ethical Importance

Transparency in AI refers to the openness and clarity with which the processes and operations of AI systems can be understood by humans [54]. This encompasses the data used to train AI algorithms, the architecture of the models, and the logic behind specific outputs. In the realm of transplantation, ensuring transparency is essential for several reasons:
  • Data Transparency: The datasets used to train AI systems must be accessible, representative, and of high quality. This involves disclosing the sources of data, the methods of data collection, and addressing any potential biases. In transplantation, where patient outcomes can vary widely, transparent data practices help stakeholders understand the factors influencing AI recommendations [55].
  • Algorithmic Transparency: The complexity of AI models, particularly deep learning algorithms, often results in what is termed a “black box” effect, where the internal workings of the model are opaque [56]. While it may not always be feasible to disclose every detail of the model’s operations, providing a high-level overview of how the model functions and the types of patterns it detects is crucial. This allows clinicians to better understand and trust AI-assisted decisions.

5.2.2. Provide Explainability to Bridge the Gap Between Complexity and Comprehension

Explainability in AI refers to the ability of the system to provide understandable reasons for its decisions. While transparency ensures that the processes are open, explainability makes these processes comprehensible to those who rely on AI outputs, such as clinicians and patients.
  • Interpretable Models: One approach to enhancing explainability is using simpler, more interpretable models like decision trees or linear regression, which offer straightforward explanations. However, in complex domains like transplantation, where numerous factors influence outcomes, these simpler models might not be sufficiently accurate [57]. The challenge lies in balancing the need for interpretability with the accuracy required for clinical decision-making [58].
  • Post-Hoc Explainability Tools: Techniques such as Local Interpretable Model-agnostic Explanations (LIME) [59] and SHapley Additive exPlanations (SHAP) [60] have been developed to provide post-hoc explanations for complex AI models. These tools allow clinicians to see how different features contributed to the AI’s decisions, offering insights that can increase confidence in AI-generated recommendations; a brief sketch follows this list.
  • Contextual Explanations: Tailoring explanations to the audience is essential. For clinicians, detailed explanations that align with medical reasoning are necessary, while for patients, explanations should focus on the implications for their health and treatment options [61,62]. Providing relevant and understandable information helps ensure that both patients and healthcare providers can make informed decisions based on AI outputs.
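As a brief illustration of the post-hoc tools listed above, the sketch below computes SHAP values for a tree-based model trained on synthetic data; it assumes the shap package is installed, the features are purely illustrative, and LIME follows a similar pattern.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Train a tree-based model on synthetic data standing in for transplant features.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)            # model-specific explainer for tree ensembles
shap_values = explainer.shap_values(X[:50])      # per-feature contribution for each of 50 cases

# Mean absolute SHAP value per feature gives a simple global importance summary to inspect.
mean_abs = np.abs(shap_values).mean(axis=0)
print({f"feature_{i}": round(float(v), 3) for i, v in enumerate(mean_abs)})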

5.2.3. The Ethical Imperative of Transparency and Explainability

Ensuring that AI systems in transplantation are both transparent and explainable is not merely a technical challenge but an ethical imperative [63]. The use of AI in healthcare must adhere to core ethical principles, including:
  • Autonomy and Informed Consent: Patients have the right to make informed decisions about their care, which requires a clear understanding of AI recommendations. Explainable AI respects patient autonomy by enabling informed choices based on a full understanding of the risks and benefits. Informed consent is a foundational principle of medical ethics, requiring that patients understand and agree to the treatments they receive. The use of AI in transplantation introduces new complexities to the process of obtaining informed consent. Patients may have a limited understanding of AI technology and its role in their care, which can hinder their ability to make fully informed decisions. This is particularly concerning when AI is used to predict outcomes or recommend specific treatments, as patients may not be aware of the limitations or uncertainties inherent in these tools. Healthcare providers must ensure that patients are adequately informed about the use of AI in their care, including the potential risks and benefits. This involves not only explaining how AI is used but also discussing the limitations of the technology and the potential for error. Ethical frameworks for informed consent in the era of AI must evolve to address these new challenges, ensuring that patients retain autonomy over their healthcare decisions [64].
  • Non-Maleficence and Beneficence: Transparent and explainable AI systems help prevent harm by allowing for the identification and correction of errors or biases. This ensures that AI tools contribute positively to patient outcomes. While AI can provide valuable insights and support to healthcare providers, there is a risk that over-reliance on these tools could undermine the autonomy of both physicians and patients. Ethical concerns arise when AI recommendations are followed without critical evaluation or when they conflict with the clinical judgment of physicians [65]. The concept of “algorithmic authority” refers to the growing reliance on AI systems to make decisions traditionally made by humans. In transplantation, this could lead to situations where AI-driven decisions override human judgment, potentially leading to outcomes that are not in the best interest of the patient. To address these concerns, it is essential to establish clear guidelines that balance the use of AI with human oversight, ensuring that AI tools augment rather than replace clinical expertise [66,67].
  • Justice: Transparency and explainability are critical for identifying and addressing potential biases in AI systems. Ensuring fairness and equity in AI applications is essential, especially in transplantation, where disparities in access and outcomes must be carefully managed. The adoption of AI in transplantation has the potential to exacerbate existing inequalities in healthcare access. Advanced AI tools are often developed and implemented in well-resourced healthcare settings, potentially leaving less affluent institutions and their patients at a disadvantage. This raises ethical questions about the equitable distribution of the benefits of AI, particularly in a field as critical as transplantation, where access to care can be a matter of life and death [68]. Efforts to promote equity in AI deployment include the development of policies and frameworks that ensure fair access to AI technologies, regardless of geographic location or socioeconomic status. Additionally, there is a need for global collaboration to address disparities in AI development and implementation, ensuring that the benefits of AI in transplantation are available to all patients [69,70].
  • Safety and Privacy: The deployment of AI in transplantation relies on the analysis of large datasets, including sensitive patient information. This raises significant ethical concerns regarding data privacy and security. The potential for data breaches, unauthorized access, and misuse of patient information is a major risk that must be carefully managed. Moreover, the sharing of data across institutions and international borders for AI development and validation purposes introduces additional layers of complexity and risk [71]. Ethical guidelines for AI in healthcare emphasize the importance of robust data protection measures, including encryption, anonymization, and strict access controls. However, the dynamic nature of AI development, which often requires continuous data collection and analysis, poses ongoing challenges to maintaining patient privacy. Ensuring that patient data is used responsibly and securely is essential to maintaining public trust and safeguarding patient rights [72].
In summary, to create ethically responsible AI applications in transplantation and other areas of healthcare, transparency and explainability must be prioritized. This involves not only implementing technical solutions but also committing to ethical standards that prioritize patient safety, trust, and fairness. By embedding these principles into AI development and deployment, AI systems can be developed that improve, rather than undermine, the quality and equity of healthcare.

6. Discussion

6.1. Impact on Patient–Physician Relationships

The introduction of AI into the transplantation process can also affect the traditional patient–physician relationship [73]. Trust is a key component of this relationship, and the use of AI tools that patients do not fully understand can undermine this trust. Patients may feel disconnected from their physicians if they believe that critical decisions about their care are being made by algorithms rather than human judgment. Moreover, the perception that decisions are being made by machines rather than humans can lead to skepticism and concern, potentially fostering a sense of depersonalization in what is already a high-stress medical environment [74].
To maintain a strong patient–physician relationship, it is crucial that healthcare providers clearly communicate the role of AI in the decision-making process and follow AI-reporting guidelines [75]. This includes explaining how AI complements and supports, rather than replaces, the expertise of the physician. Physicians should emphasize that they remain at the center of the decision-making process, using AI as a tool to enhance their judgment rather than as a substitute for their clinical experience. Additionally, healthcare providers should reassure patients that AI systems are designed to reduce risks and improve outcomes so that their personalized care remains the highest priority.
Fostering an open and transparent dialogue is essential to overcoming the potential negative effects of AI on the patient–physician relationship. It involves not only explaining how AI works but also addressing any concerns or misconceptions patients may have about the technology [76]. By encouraging patients to ask questions and providing clear, accessible explanations, physicians may improve the acceptance of dedicated AI tools and build higher trust in their use [77]. In doing so, healthcare providers can help ensure that patients remain active participants in their treatment, even as AI plays an increasingly prominent role in transplantation [78,79].

6.2. Is AI Changing the Clinical Workflows in Organ Transplantation?

The integration of AI into clinical workflows is set to significantly alter the role of physicians in transplantation and other medical fields. While AI brings powerful tools that support decision-making, its presence will introduce new complexities and may reshape traditional practices. One of the most notable changes is probably the influence on decision-making. AI’s ability to process vast amounts of data and generate predictive insights can improve diagnosis, organ matching, and post-transplant care. Physicians will be faced with assessing and interpreting these recommendations while ensuring they align with clinical judgment and patient needs. Thus, AI could offer the potential to shift the physician’s focus from time-consuming data analysis to more patient-centered care [80]. By automating administrative tasks such as reviewing patient histories and assessing organ matches, AI can free up physicians’ time, allowing them to dedicate more attention to patient interactions and treatment planning. However, this requires physicians to adapt to a model where they are more reliant on AI to process data while still maintaining oversight of critical decisions. Here, physicians may find themselves in situations where AI recommendations conflict with their own clinical judgment [81]. In cases where AI-driven decisions lead to adverse outcomes, determining responsibility, whether it lies with the physician, the healthcare provider, or the developers of the AI system, will be a critical issue. New legal frameworks and professional guidelines need to emerge to clarify physicians’ responsibilities when using AI tools.
Moreover, AI will likely lead to changes in time management. While certain tasks may be automated, such as data entry and scheduling, physicians may need to dedicate more time to reviewing AI-generated recommendations or explaining these insights to patients [82]. This shift in workload distribution will require adjustments in how physicians manage their time, with a greater emphasis on complex decision-making and patient communication [83].
Finally, as AI becomes a more embedded part of clinical workflows, physicians, as the primary domain experts, will be required to take on the responsibility of continuous monitoring and quality control. AI systems must be regularly assessed to ensure they remain accurate, unbiased, and aligned with the latest medical knowledge [84]. Physicians will need to collaborate with AI specialists to adjust and update AI algorithms as needed, ensuring that these tools evolve alongside advancements in medical science and changes in patient populations.

6.3. Future Perspectives

In the future, the role of AI in solid organ transplantation is expected to expand, with continuous advancements in algorithm development, data integration, and personalized medicine. Future research should focus on enhancing the interpretability and transparency of AI models, addressing ethical concerns, and ensuring the equitable distribution of AI benefits across diverse patient populations. Collaboration between AI specialists, computer scientists, clinicians, ethicists, and policymakers will be essential in shaping the future of AI in transplantation. This interdisciplinary approach will help in developing more robust guidelines that govern the use of AI, ensuring that it is applied in ways that maximize patient benefit while minimizing potential risks. Moreover, the future may see the integration of AI with other emerging technologies such as genomics, proteomics, and telemedicine, further pushing the boundaries of personalized care in transplantation. The application of ML models demonstrates significant potential in fields such as lung transplantation [85,86], where AI is being investigated to enhance transplantation outcomes. Nevertheless, the development of robust ML- and DL-based algorithms for accurately predicting graft failures in lung and pancreas transplantations necessitates larger datasets with greater ethnic diversity. Recent studies further highlight the growing role of machine learning in predicting and improving patient outcomes, particularly in kidney transplantation, reflecting a broader trend toward the integration of AI technologies across various organ transplant procedures. By continuing to innovate and address the challenges associated with AI, the transplantation community can leverage these tools to achieve better patient outcomes and contribute to the evolution of organ transplantation as a whole.

7. Conclusions

With the progression of data analysis techniques, moving from traditional statistical methods to advanced AI models, there has been a remarkable improvement in the precision and versatility of predictive tools. This shift has empowered clinicians to make more informed decisions, ultimately improving patient treatment. In kidney transplantation, tools like iBox, SOTAI, OrganPredict, NephroCAGE, and iChoose Kidney exemplify how AI can provide tailored predictions of graft survival and patient outcomes. These tools enable more personalized care plans, reducing the likelihood of rejection and enhancing the longevity of transplanted organs. Similarly, in heart transplantation, models like CARE and AI-ECG have shown promising results in predicting rejection and guiding therapy adjustments. Liver transplantation has also benefited from AI’s capabilities, particularly in predicting graft failure and in comparing the performance of machine learning models with traditional scoring systems. These advancements underline the advantages of dedicated AI-driven prognostic tools, thereby supporting more precise clinical interventions.
However, the integration of AI in transplantation is not without ethical challenges. The use of AI tools raises concerns regarding transparency, accountability, data privacy, and the potential for bias. These ethical aspects must be carefully considered to ensure that the deployment of AI technologies in transplantation upholds the highest standards of fairness and equity. As AI becomes more integrated into clinical workflows, the patient–physician relationship is being reshaped, requiring physicians to balance trust-building with AI transparency while adapting to new roles of oversight, technical proficiency, and collaborative decision-making to ensure the responsible use of AI in patient care.

Author Contributions

A.A.S.: conceptualization, formal analysis, investigation, methodology, project administration, supervision, visualization, writing—original draft, writing—review and editing; A.Y.: investigation, methodology, visualization, writing—original draft, writing—review and editing; M.S.: investigation, methodology, writing—original draft, writing—review and editing; M.W.: conceptualization, formal analysis, investigation, methodology, project administration, supervision, visualization, writing—original draft, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan, grant # AP23489229. The funder had no role in the design, data collection, data analysis, or reporting of this study.

Acknowledgments

The figures were created using BioRender©.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. IRODaT—International Registry on Organ Donation and Transplantation. Available online: https://www.irodat.org/?p=publications (accessed on 20 August 2024).
  2. Global Observatory on Donation and Transplantation. Available online: https://www.transplant-observatory.org (accessed on 20 August 2024).
  3. Organ Donation Statistics. Available online: https://www.organdonor.gov/learn/organ-donation-statistics (accessed on 20 August 2024).
  4. Al Moussawy, M.; Lakkis, Z.S.; Ansari, Z.A.; Cherukuri, A.R.; Abou-Daya, K.I. The transformative potential of artificial intelligence in solid organ transplantation. Front. Transplant. 2024, 3, 1361491. [Google Scholar] [CrossRef] [PubMed]
  5. Peloso, A.; Moeckli, B.; Delaune, V.; Oldani, G.; Andres, A.; Compagnon, P. Artificial Intelligence: Present and Future Potential for Solid Organ Transplantation. Transpl. Int. 2022, 35, 10640. [Google Scholar] [CrossRef] [PubMed]
  6. Rawashdeh, B. Artificial Intelligence in Organ Transplantation: Surveying Current Applications, Addressing Challenges and Exploring Frontiers; IntechOpen: London, UK, 2024. [Google Scholar] [CrossRef]
  7. Deshpande, R. Smart match: Revolutionizing organ allocation through artificial intelligence. Front. Artif. Intell. 2024, 7, 1364149. [Google Scholar] [CrossRef] [PubMed]
  8. Balch, J.A.; Delitto, D.; Tighe, P.J.; Zarrinpar, A.; Efron, P.A.; Rashidi, P.; Upchurch, G.R.; Bihorac, A.; Loftus, T.J. Machine Learning Applications in Solid Organ Transplantation and Related Complications. Front. Immunol. 2021, 12, 739728. [Google Scholar] [CrossRef]
  9. Basuli, D.; Roy, S. Beyond Human Limits: Harnessing Artificial Intelligence to Optimize Immunosuppression in Kidney Transplantation. J. Clin. Med. Res. 2023, 15, 391–398. [Google Scholar] [CrossRef]
  10. Kers, J.; Bülow, R.D.; Klinkhammer, B.M.; Breimer, G.E.; Fontana, F.; Abiola, A.A.; Hofstraat, R.; Corthals, G.L.; Peters-Sengers, H.; Djudjaj, S.; et al. Deep learning-based classification of kidney transplant pathology: A retrospective, multicentre, proof-of-concept study. Lancet Digit Health 2022, 4, e18–e26. [Google Scholar] [CrossRef]
  11. Yi, Z.; Salem, F.; Menon, M.C.; Keung, K.; Xi, C.; Hultin, S.; Al Rasheed, M.R.H.; Li, L.; Su, F.; Sun, Z.; et al. Deep learning identified pathological abnormalities predictive of graft loss in kidney transplant biopsies. Kidney Int. 2022, 101, 288–298. [Google Scholar] [CrossRef]
  12. Gotlieb, N.; Azhie, A.; Sharma, D.; Spann, A.; Suo, N.J.; Tran, J.; Orchanian-Cheff, A.; Wang, B.; Goldenberg, A.; Chassé, M.; et al. The promise of machine learning applications in solid organ transplantation. npj Digit. Med. 2022, 5, 89. [Google Scholar] [CrossRef]
  13. Ley, C.; Martin, R.K.; Pareek, A.; Groll, A.; Seil, R.; Tischer, T. Machine learning and conventional statistics: Making sense of the differences. Knee Surg. Sports Traumatol. Arthrosc. 2022, 30, 753–757. [Google Scholar] [CrossRef]
  14. Aldoseri, A.; Al-Khalifa, K.N.; Hamouda, A.M. Re-thinking data strategy and integration for artificial intelligence: Concepts, opportunities, and challenges. Appl. Sci. 2023, 13, 7082. [Google Scholar] [CrossRef]
  15. Edwards, A.S.; Kaplan, B.; Jie, T. A Primer on Machine Learning. Transplantation 2021, 105, 699–703. [Google Scholar] [CrossRef] [PubMed]
  16. Bhat, M.; Rabindranath, M.; Chara, B.S.; Simonetto, D.A. Artificial intelligence, machine learning, and deep learning in liver transplantation. J. Hepatol. 2023, 78, 1216–1233. [Google Scholar] [CrossRef] [PubMed]
  17. Truchot, A.; Raynaud, M.; Kamar, N.; Naesens, M.; Legendre, C.; Delahousse, M.; Thaunat, O.; Buchler, M.; Crespo, M.; Linhares, K.; et al. Machine learning does not outperform traditional statistical modelling for kidney allograft failure prediction. Kidney Int. 2023, 103, 936–948. [Google Scholar] [CrossRef] [PubMed]
  18. Eckardt, J.N.; Hahn, W.; Röllig, C.; Stasik, S.; Platzbecker, U.; Müller-Tidow, C.; Serve, H.; Baldus, C.D.; Schliemann, C.; Schäfer-Eckart, K.; et al. Mimicking clinical trials with synthetic acute myeloid leukemia patients using generative artificial intelligence. npj Digit. Med. 2024, 7, 76. [Google Scholar] [CrossRef] [PubMed]
  19. Li, J.; Cairns, B.J.; Li, J.; Zhu, T. Generating synthetic mixed-type longitudinal electronic health records for artificial intelligent applications. npj Digit. Med. 2023, 6, 98. [Google Scholar] [CrossRef]
  20. Guillaudeux, M.; Rousseau, O.; Petot, J.; Bennis, Z.; Dein, C.A.; Goronflot, T.; Vince, N.; Limou, S.; Karakachoff, M.; Wargny, M.; et al. Patient-centric synthetic data generation, no reason to risk re-identification in biomedical data analysis. npj Digit. Med. 2023, 6, 37. [Google Scholar] [CrossRef]
  21. Deeb, M.; Gangadhar, A.; Rabindranath, M.; Rao, K.; Brudno, M.; Sidhu, A.; Wang, B.; Bhat, M. The emerging role of generative artificial intelligence in transplant medicine. Am. J. Transplant. 2024, 24, 1724–1730. [Google Scholar] [CrossRef]
  22. Assis de Souza, A.; Stubbs, A.; Hesselink, D.; Baan, C.C.; Boer, K. Cherry on Top or Real Need? A Review of Explainable Machine Learning in Kidney Transplantation. Transplantation 2025, 109, 123–132. [Google Scholar] [CrossRef]
  23. Connor, K.L.; O’Sullivan, E.D.; Marson, L.P.; Wigmore, S.J.; Harrison, E.M. The Future Role of Machine Learning in Clinical Transplantation. Transplantation 2021, 105, 723–735. [Google Scholar] [CrossRef]
  24. Ravindhran, B.; Chandak, P.; Schafer, N.; Kundalia, K.; Hwang, W.; Antoniadis, S.; Haroon, U.; Zakri, R.H. Machine learning models in predicting graft survival in kidney transplantation: Meta-analysis. BJS Open 2023, 7, zrad011. [Google Scholar] [CrossRef]
  25. Loupy, A.; Aubert, O.; Orandi, B.J.; Naesens, M.; Bouatou, Y.; Raynaud, M.; Divard, G.; Jackson, A.M.; Viglietti, D.; Giral, M.; et al. Prediction system for risk of allograft loss in patients receiving kidney transplants: International derivation and validation study. BMJ 2019, 366, l4923. [Google Scholar] [CrossRef] [PubMed]
  26. Impact For Patients: Transplant Therapeutics Consortium, C-Path. Available online: https://c-path.org/story/impact-for-patients-transplant-therapeutics-consortium/ (accessed on 20 August 2024).
  27. Qualification Opinion for the iBox Scoring System as a Secondary Efficacy Endpoint in Clinical Trials Investigating Novel Immunosuppressive Medicines in Kidney Transplant Patients. 2022. Available online: https://www.ema.europa.eu/en/documents/scientific-guideline/qualification-opinion-ibox-scoring-system-secondary-efficacy-endpoint-clinical-trials-investigating-novel-immunosuppressive-medicines-kidney-transplant-patients_en.pdf (accessed on 20 August 2024).
  28. Klein, A.; Loupy, A.; Stegall, M.; Helanterä, I.; Kosinski, L.; Frey, E.; Aubert, O.; Divard, G.; Newell, K.; Meier-Kriesche, H.-U.; et al. Qualifying a Novel Clinical Trial Endpoint (iBOX) Predictive of Long-Term Kidney Transplant Outcomes. Transpl. Int. 2023, 36, 11951. [Google Scholar] [CrossRef] [PubMed]
  29. Ali, H.; Mohammed, M.; Molnar, M.Z.; Fülöp, T.; Burke, B.; Shroff, S.; Shroff, A.; Briggs, D.; Krishnan, N. Live-Donor Kidney Transplant Outcome Prediction (L-TOP) using artificial intelligence. Nephrol. Dial. Transplant. 2024, 39, 2088–2099. [Google Scholar] [CrossRef] [PubMed]
  30. Ali, H.; Mohamed, M.; Molnar, M.Z.; Fülöp, T.; Burke, B.; Shroff, A.; Shroff, S.; Briggs, D.; Krishnan, N. Deceased-Donor Kidney Transplant Outcome Prediction Using Artificial Intelligence to Aid Decision-Making in Kidney Allocation. ASAIO J. 2024, 70, 808–818. [Google Scholar] [CrossRef] [PubMed]
  31. Schapranow, M.P.; Bayat, M.; Rasheed, A.; Naik, M.; Graf, V.; Schmidt, D.; Budde, K.; Cardinal, H.; Sapir-Pichhadze, R.; Fenninger, F.; et al. NephroCAGE—German-Canadian Consortium on AI for Improved Kidney Transplantation Outcome: Protocol for an Algorithm Development and Validation Study. JMIR Res. Protoc. 2023, 12, e48892. [Google Scholar] [CrossRef]
  32. Patzer, R.E.; Basu, M.; Larsen, C.P.; Pastan, S.O.; Mohan, S.; Patzer, M.; Konomos, M.M.; McClellan, W.M.; Lea, J.; Howard, D.; et al. iChoose Kidney: A Clinical Decision Aid for Kidney Transplantation Versus Dialysis Treatment. Transplantation 2016, 100, 630–639. [Google Scholar] [CrossRef]
  33. Gander, J.C.; Basu, M.; McPherson, L.; Garber, M.D.; Pastan, S.O.; Manatunga, A.; Arriola, K.J.; Patzer, R.E. iChoose Kidney for Treatment Options: Updated Models for Shared Decision Aid. Transplantation 2018, 102, e370–e371. [Google Scholar] [CrossRef]
  34. iChoose Kidney. Emory University: Atlanta, GA, USA. Available online: https://ichoosekidney.emory.edu/index.html (accessed on 20 August 2024).
  35. Seferović, P.M.; Tsutsui, H.; Mcnamara, D.M.; Ristić, A.D.; Basso, C.; Bozkurt, B.; Cooper, L.T.; Filippatos, G.; Ide, T.; Inomata, T.; et al. Heart Failure Association, Heart Failure Society of America, and Japanese Heart Failure Society Position Statement on Endomyocardial Biopsy. J. Card. Fail. 2021, 27, 727–743. [Google Scholar] [CrossRef]
  36. Costanzo, M.R.; Dipchand, A.; Starling, R.; Anderson, A.; Chan, M.; Desai, S.; Fedson, S.; Fisher, P.; Gonzales-Stawinski, G.; Martinelli, L.; et al. The International Society of Heart and Lung Transplantation Guidelines for the care of heart transplant recipients. J. Heart Lung Transplant. 2010, 29, 914–956. [Google Scholar] [CrossRef]
  37. Caves, P.K.; Stinson, E.B.; Billingham, M.E.; Shumway, N.E. Serial transvenous biopsy of the transplanted human heart. Improved management of acute rejection episodes. Lancet 1974, 303, 821–826. [Google Scholar] [CrossRef]
  38. Bermpeis, K.; Esposito, G.; Gallinoro, E.; Paolisso, P.; Bertolone, D.T.; Fabbricatore, D.; Mileva, N.; Munhoz, D.; Buckley, J.; Wyffels, E.; et al. Safety of Right and Left Ventricular Endomyocardial Biopsy in Heart Transplantation and Cardiomyopathy Patients. JACC Heart Fail. 2022, 10, 963–973. [Google Scholar] [CrossRef] [PubMed]
  39. Adedinsewo, D.; Hardway, H.D.; Morales-Lara, A.C.; Wieczorek, M.A.; Johnson, P.W.; Douglass, E.J.; Dangott, B.J.; Nakhleh, R.E.; Narula, T.; Patel, P.C.; et al. Non-invasive detection of cardiac allograft rejection among heart transplant recipients using an electrocardiogram based deep learning model. Eur. Heart J. Digit. Health 2023, 4, 71–80. [Google Scholar] [CrossRef] [PubMed]
  40. Agasthi, P.; Buras, M.R.; Smith, S.D.; Golafshar, M.A.; Mookadam, F.; Anand, S.; Rosenthal, J.L.; Hardaway, B.W.; DeValeria, P.; Arsanjani, R. Machine learning helps predict long-term mortality and graft failure in patients undergoing heart transplant. Gen. Thorac. Cardiovasc. Surg. 2020, 68, 1369–1376. [Google Scholar] [CrossRef] [PubMed]
  41. Zare, A.; Zare, M.A.; Zarei, N.; Yaghoobi, R.; Zare, M.A.; Salehi, S.; Geramizadeh, B.; Malekhosseini, S.A.; Azarpira, N. A Neural Network Approach to Predict Acute Allograft Rejection in Liver Transplant Recipients Using Routine Laboratory Data. Hepat. Mon. 2017, 17, e55092. [Google Scholar] [CrossRef]
  42. Lau, L.; Kankanige, Y.; Rubinstein, B.; Jones, R.; Christophi, C.; Muralidharan, V.; Bailey, J. Machine-Learning Algorithms Predict Graft Failure After Liver Transplantation. Transplantation 2017, 101, e125–e132. [Google Scholar] [CrossRef]
  43. Chongo, G.; Soldera, J. Use of machine learning models for the prognostication of liver transplantation: A systematic review. World J. Transplant. 2024, 14, 88891. [Google Scholar] [CrossRef]
  44. Kang, J. Opportunities and challenges of machine learning in transplant-related studies. Am. J. Transplant. 2024, 24, 322–324. [Google Scholar] [CrossRef]
  45. van Giffen, B.; Herhausen, D.; Fahse, T. Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods. J. Bus. Res. 2022, 144, 93–106. [Google Scholar] [CrossRef]
  46. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
  47. Rajkomar, A.; Hardt, M.; Howell, M.D.; Corrado, G.; Chin, M.H. Ensuring Fairness in Machine Learning to Advance Health Equity. Ann. Intern. Med. 2018, 169, 866–872. [Google Scholar] [CrossRef]
  48. González-Sendino, R.; Serrano, E.; Bajo, J. Mitigating bias in artificial intelligence: Fair data generation via causal models for transparent and explainable decision-making. Future Gener Comput. Syst. 2024, 155, 384–401. [Google Scholar] [CrossRef]
  49. Alvarez, J.M.; Colmenarejo, A.B.; Elobaid, A.; Fabbrizzi, S.; Fahimi, M.; Ferrara, A.; Ghodsi, S.; Mougan, C.; Papageorgiou, I.; Reyero, P.; et al. Policy advice and best practices on bias and fairness in AI. Ethics Inf. Technol. 2024, 26, 31. [Google Scholar] [CrossRef]
  50. FERI: A Multitask-based Fairness Achieving Algorithm with Applications to Fair Organ Transplantation. AMIA Jt. Summits Transl. Sci. Proc. 2024, 2024, 593–602.
  51. Lebret, A. Allocating organs through algorithms and equitable access to transplantation—A European human rights law approach. J. Law Biosci. 2023, 10, lsad004. [Google Scholar] [CrossRef]
  52. Chen, R.J.; Wang, J.J.; Williamson, D.F.K.; Chen, T.Y.; Lipkova, J.; Lu, M.Y.; Sahai, S.; Mahmood, F. Algorithm fairness in artificial intelligence for medicine and healthcare. Nat. Biomed. Eng. 2023, 7, 719–742. [Google Scholar] [CrossRef]
  53. Barocas, S.; Hardt, M.; Narayanan, A. Fairness and Machine Learning: Limitations and Opportunities; MIT Press: Cambridge, MA, USA, 2023. Available online: https://mitpress.mit.edu/9780262048613/fairness-and-machine-learning/ (accessed on 20 August 2024).
  54. Balasubramaniam, N.; Kauppinen, M.; Rannisto, A.; Hiekkanen, K.; Kujala, S. Transparency and explainability of AI systems: From ethical guidelines to requirements. Inf. Softw. Technol. 2023, 159, 107197. [Google Scholar] [CrossRef]
  55. Al Kuwaiti, A.; Nazer, K.; Al-Reedy, A.; Al-Shehri, S.; Al-Muhanna, A.; Subbarayalu, A.V.; Al Muhanna, D.; Al-Muhanna, F.A. A Review of the Role of Artificial Intelligence in Healthcare. J. Pers. Med. 2023, 13, 951. [Google Scholar] [CrossRef]
  56. Samek, W.; Wiegand, T.; Müller, K.R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv 2017, arXiv:1708.08296. Available online: https://arxiv.org/pdf/1708.08296 (accessed on 20 August 2024).
  57. Lipton, Z.C. The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 2018, 16, 31–57. [Google Scholar] [CrossRef]
  58. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef]
  59. Zafar, M.R.; Khan, N. Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability. Mach. Learn Knowl. Extr. 2021, 3, 525–541. [Google Scholar] [CrossRef]
  60. Lundberg, S.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. arXiv 2017, arXiv:1705.07874. Available online: https://arxiv.org/pdf/1705.07874 (accessed on 20 August 2024).
  61. Schuler, K.; Sedlmayr, M.; Sedlmayr, B. Mapping Medical Context: Workshop-Driven Clustering of Influencing Factors on Medical Decision-Making. Stud. Health Technol. Inform. 2024, 316, 1754–1758. [Google Scholar] [CrossRef] [PubMed]
  62. Schoonderwoerd, T.A.J.; Jorritsma, W.; Neerincx, M.A.; van den Bosch, K. Human-centered XAI: Developing design patterns for explanations of clinical decision support systems. Int. J. Hum-Comput. Stud. 2021, 154, 102684. [Google Scholar] [CrossRef]
  63. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
  64. Grote, T.; Berens, P. On the ethics of algorithmic decision-making in healthcare. J. Med. Ethics 2020, 46, 205–211. [Google Scholar] [CrossRef]
  65. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  66. Danks, D.; London, A.J. Algorithmic Bias in Autonomous Systems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Melbourne, VIC, Australia, 19–25 August 2017; pp. 4691–4697. [Google Scholar] [CrossRef]
  67. Char, D.S.; Shah, N.H.; Magnus, D. Implementing Machine Learning in Health Care—Addressing Ethical Challenges. N. Engl. J. Med. 2018, 378, 981–983. [Google Scholar] [CrossRef]
  68. Wiens, J.; Saria, S.; Sendak, M.; Ghassemi, M.; Liu, V.X.; Doshi-Velez, F.; Jung, K.; Heller, K.; Kale, D.; Saeed, M.; et al. Do no harm: A roadmap for responsible machine learning for health care. Nat. Med. 2019, 25, 1337–1340. [Google Scholar] [CrossRef]
  69. Vayena, E.; Blasimme, A.; Cohen, I.G. Machine learning in medicine: Addressing ethical challenges. PLoS Med. 2018, 15, e1002689. [Google Scholar] [CrossRef]
  70. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. SSRN Electron. J. 2020. [Google Scholar] [CrossRef]
  71. Price, W.N. Big data and black-box medical algorithms. Sci. Transl. Med. 2018, 10, eaao5333. [Google Scholar] [CrossRef] [PubMed]
  72. Mittelstadt, B.D.; Floridi, L. The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. Sci. Eng. Ethics 2016, 22, 303–341. [Google Scholar] [CrossRef] [PubMed]
  73. Drezga-Kleiminger, M.; Demaree-Cotton, J.; Koplin, J.; Savulescu, J.; Wilkinson, D. Should AI allocate livers for transplant? Public attitudes and ethical considerations. BMC Med. Ethics 2023, 24, 102. [Google Scholar] [CrossRef] [PubMed]
  74. Doshi-Velez, F.; Kortz, M.; Budish, R.; Bavitz, C.; Gershman, S.; O’Brien, D.F.; Scott, K.; Shieber, S.; Waldo, J.; Weinberger, D.; et al. Accountability of AI Under the Law: The Role of Explanation. arXiv 2019, arXiv:1711.01134. Available online: https://arxiv.org/pdf/1711.01134 (accessed on 20 August 2024). [CrossRef]
  75. Salybekov, A.A.; Wolfien, M.; Hahn, W.; Hidaka, S.; Kobayashi, S. Artificial Intelligence Reporting Guidelines’ Adherence in Nephrology for Improved Research and Clinical Outcomes. Biomedicines 2024, 12, 606. [Google Scholar] [CrossRef]
  76. Richardson, J.P.; Smith, C.; Curtis, S.; Watson, S.; Zhu, X.; Barry, B.; Sharp, R.R. Patient apprehensions about the use of artificial intelligence in healthcare. npj Digit. Med. 2021, 4, 140. [Google Scholar] [CrossRef]
  77. Robertson, C.; Woods, A.; Bergstrand, K.; Findley, J.; Balser, C.; Slepian, M.J. Diverse patients’ attitudes towards Artificial Intelligence (AI) in diagnosis. PLoS Digit Health 2023, 2, e0000237. [Google Scholar] [CrossRef]
  78. Rawbone, R. Principles of Biomedical Ethics, 7th Edition. Occup. Med. 2015, 65, 88–89. [Google Scholar] [CrossRef]
  79. Hatherley, J.J. Limits of trust in medical AI. J. Med. Ethics 2020, 46, 478–481. [Google Scholar] [CrossRef]
  80. Transforming Healthcare with AI: Promises, Pitfalls, and Pathways Forward. Int. J. Gen. Med. Available online: https://www.dovepress.com/transforming-healthcare-with-ai-promises-pitfalls-and-pathways-forward-peer-reviewed-fulltext-article-IJGM (accessed on 20 August 2024).
  81. Kempt, H.; Heilinger, J.C.; Nagel, S.K. “I’m afraid I can’t let you do that, Doctor”: Meaningful disagreements with AI in medical contexts. AI Soc. 2023, 38, 1407–1414. [Google Scholar] [CrossRef]
  82. Yelne, S.; Chaudhary, M.; Dod, K.; Sayyad, A.; Sharma, R. Harnessing the Power of AI: A Comprehensive Review of Its Impact and Challenges in Nursing Science and Healthcare. Cureus 2023, 15, e49252. [Google Scholar] [CrossRef] [PubMed]
  83. Shamszare, H.; Choudhury, A. Clinicians’ Perceptions of Artificial Intelligence: Focus on Workload, Risk, Trust, Clinical Decision Making, and Clinical Integration. Healthcare 2023, 11, 2308. [Google Scholar] [CrossRef] [PubMed]
  84. London, A.J. Artificial intelligence in medicine: Overcoming or recapitulating structural challenges to improving patient care? Cell Rep. Med. 2022, 3, 100622. [Google Scholar] [CrossRef]
  85. Sage, A.T.; Donahoe, L.L.; Shamandy, A.A.; Mousavi, S.H.; Chao, B.T.; Zhou, X.; Valero, J.; Balachandran, S.; Ali, A.; Martinu, T.; et al. A machine-learning approach to human ex vivo lung perfusion predicts transplantation outcomes and promotes organ utilization. Nat. Commun. 2023, 14, 4810. [Google Scholar] [CrossRef]
  86. Michelson, A.P.; Oh, I.; Gupta, A.; Puri, V.; Kreisel, D.; Gelman, A.E.; Nava, R.; Witt, C.A.; Byers, D.E.; Halverson, L.; et al. Developing Machine Learning Models to Predict Primary Graft Dysfunction After Lung Transplantation. Am. J. Transplant. 2024, 24, 458–467. [Google Scholar] [CrossRef]
Figure 1. Transformation of data analysis strategies. Traditional statistics are well suited to small-sample tabular data, whereas machine learning algorithms handle more complex datasets and perform best when trained on large amounts of data. Deep learning can analyze both medical images and tabular data but, unlike generative AI, cannot produce new data.
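To make the distinction in Figure 1 concrete, the short Python sketch below contrasts a traditional statistical model (logistic regression) with a machine learning model (a random forest) on a small synthetic tabular dataset. It is an illustrative example only, assuming scikit-learn is available; the dataset, sample size, and model settings are hypothetical and are not drawn from any of the transplant studies cited above.

```python
# Illustrative sketch only: the synthetic cohort and model choices are
# hypothetical and not taken from any transplant registry or cited study.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Small synthetic tabular cohort: 200 "patients", 10 baseline covariates.
X, y = make_classification(n_samples=200, n_features=10, n_informative=5,
                           random_state=42)

# Traditional statistical baseline: logistic regression.
logit = LogisticRegression(max_iter=1000)

# Machine learning alternative: a random forest, which typically needs
# larger datasets before it clearly outperforms simpler models.
forest = RandomForestClassifier(n_estimators=200, random_state=42)

for name, model in [("logistic regression", logit), ("random forest", forest)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.2f} (+/- {auc.std():.2f})")
```

On a cohort this small, the simpler model is often competitive, which mirrors the point made in the caption: machine learning methods tend to pay off mainly as datasets grow in size and complexity.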
Figure 2. AI tools for organ transplantation. The illustration shows the anatomical locations of the human kidney, heart, and liver, followed by a list of AI-based tools used in the transplantation of each organ. On the left are the kidney transplant tools, including iBox, SOTAI, OrganPredict, NephroCAGE, and iChoose Kidney. In the center, above the heart, are the heart transplant tools, including CARE, AI-ECG, and machine learning models for long-term prediction. On the right, above the liver, are the liver transplant tools, including graft rejection prediction and a comparison of machine learning models with traditional scoring systems.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
