Explainable Artificial Intelligence: A Perspective on Drug Discovery
Abstract
1. Introduction
2. Explainable AI
2.1. Intrinsically Interpretable Models
2.1.1. Linear Models
2.1.2. Decision Tree
2.2. Post-Hoc Explainability
2.2.1. Model-Agnostic XAI
LIME
Shapley Additive Explanations
Partial Dependence Plots
2.2.2. Model-Specific XAI
Attention
Saliency Maps
Gradient saliency methods constitute a category of XAI techniques that utilize the gradients of a model’s output with respect to its input features to determine the contribution of each feature to the model’s predictions. These methods are grounded in the principle that the gradient of the output with respect to the input can indicate how sensitive the model’s prediction is to small changes in the input variables. By analyzing these gradients, one can infer which features are most influential in driving the model’s decisions [37]. The operational process involves computing the derivative of the model’s output with respect to each input feature, resulting in a gradient vector that captures the direction and magnitude of change in the prediction for infinitesimal variations in each feature. The gradients are then used to generate visualizations or attribution scores that highlight the relative importance of the input features. There are several notable gradient-based attribution techniques, each tailored to provide unique insights into model behavior (a minimal code sketch of vanilla saliency and Integrated Gradients follows this list):
- (1) Vanilla Saliency Maps: These use the absolute value of the gradient of the output class with respect to each input pixel, providing a basic visualization of feature relevance. Guided Backpropagation enhances this approach by allowing only the gradients that positively influence the target class to flow back, thus filtering out irrelevant information and offering a more refined view of feature importance. Integrated Gradients further refine the attribution process by calculating the cumulative gradient as the input transitions from a baseline to the actual input, resulting in a more stable and comprehensive measure of feature contribution.
- (2) Gradient Saliency Maps: These use the raw gradients to generate a visual representation of feature importance. The saliency map indicates which input features, such as pixels in an image or words in a text, have the most significant impact on the model’s prediction. This visualization allows for a straightforward interpretation of the model’s focus and decision-making process.
- (3) Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM): CAM and Grad-CAM extend the concept of saliency maps by integrating class-specific gradient information with spatial feature maps from convolutional layers. CAM works by leveraging the linear relationship between convolutional feature maps and the output layer in CNNs with global average pooling (GAP). Specifically, it computes the weighted sum of the feature maps in the last convolutional layer using the weights from the output layer corresponding to a particular class [66]. This yields a coarse localization map that indicates the most discriminative regions used by the model for a given prediction. Grad-CAM generalizes this idea: it computes the gradient of the class score with respect to the feature maps of a target convolutional layer and then applies GAP to these gradients to obtain importance weights for each feature map. Because it uses the gradients of any target concept flowing into the final convolutional layer to produce a localization map, Grad-CAM is compatible with a variety of CNN-based models without architectural modifications [67]. By combining the spatial awareness of CNNs with gradient information, Grad-CAM provides a more interpretable and class-discriminative visualization, which is particularly valuable for complex image-based models [37] (a minimal code sketch of Grad-CAM appears after the closing paragraph below).
- (4) Deep-Learning Important FeaTures (DeepLIFT): DeepLIFT assigns contribution scores to each input feature by comparing the network’s output to a baseline or reference output. Unlike simple gradient methods, DeepLIFT propagates these differences backward through the network, providing a more stable and interpretable measure of feature importance. This approach addresses some limitations of gradient-based methods, such as zero gradients in saturated regions of activation functions, thereby offering a more comprehensive view of feature contributions [37].
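To make the attribution computations in items (1) and (2) concrete, the following is a minimal sketch of vanilla gradient saliency and Integrated Gradients in PyTorch. The network, input tensor, target class, and step count are illustrative placeholders rather than anything taken from the cited studies.

```python
# Minimal sketch (assumed placeholders: untrained ResNet-18, random input, class 0).
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()          # placeholder network
x = torch.rand(1, 3, 224, 224, requires_grad=True)    # placeholder input
target_class = 0

# (1) Vanilla saliency: absolute gradient of the class score w.r.t. each pixel.
model(x)[0, target_class].backward()
saliency = x.grad.detach().abs().max(dim=1).values    # max over color channels

# Integrated Gradients: accumulate gradients along the straight-line path from
# a baseline (here, an all-zero image) to the actual input, then scale by
# (input - baseline) to obtain per-feature attributions.
baseline = torch.zeros_like(x)
steps = 50
accumulated = torch.zeros_like(x)
for alpha in torch.linspace(0.0, 1.0, steps):
    point = (baseline + alpha * (x.detach() - baseline)).requires_grad_(True)
    model(point)[0, target_class].backward()
    accumulated += point.grad
integrated_gradients = (x.detach() - baseline) * accumulated / steps
```

In practice, attribution libraries such as Captum provide maintained implementations of these and related methods; a hand-rolled version like this one is mainly useful for understanding the mechanics.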
The primary advantage of gradient-based attribution methods is their ability to provide both local (instance-specific) and global (model-wide) interpretability, making them versatile tools for understanding complex models. Gradient-based methods have broad applicability across various domains. In computer vision, they are employed to visualize feature importance in image classification, object detection, and segmentation tasks, revealing which parts of an image contribute most to the model’s predictions. In NLP, they help identify the significance of individual words or phrases in tasks such as text classification and sentiment analysis, facilitating a deeper understanding of how models process linguistic information. In the healthcare sector, they are used to evaluate the impact of clinical variables on model predictions, supporting medical diagnosis and prognosis by identifying the factors that most strongly influence the model’s decision-making process. Overall, gradient saliency methods are powerful tools for elucidating the inner workings of complex machine-learning models, offering interpretable explanations that can enhance trust, transparency, and accountability in high-stakes applications.
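As a companion to the Grad-CAM description in item (3) above, the sketch below shows the core computation: capture the target layer’s feature maps and their gradients with hooks, average the gradients spatially (GAP) to obtain per-map weights, and form a ReLU-rectified weighted sum. The model, layer choice, and target class are illustrative assumptions, not a reference implementation.

```python
# Minimal Grad-CAM sketch (assumed placeholders: untrained ResNet-18, layer4, class 0).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()
x = torch.rand(1, 3, 224, 224)
target_class = 0

feature_maps, gradients = [], []

def save_features(module, inputs, output):
    # Store the layer's activations and register a hook to catch their gradient.
    feature_maps.append(output)
    output.register_hook(lambda g: gradients.append(g))

handle = model.layer4.register_forward_hook(save_features)
model(x)[0, target_class].backward()
handle.remove()

fmap, grad = feature_maps[0], gradients[0]        # each of shape (1, C, H, W)
weights = grad.mean(dim=(2, 3), keepdim=True)     # GAP over spatial dimensions
cam = F.relu((weights * fmap).sum(dim=1))         # class-discriminative map, (1, H, W)
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)  # upsample to input size
```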
3. XAI in Healthcare
4. XAI in Drug Discovery
XAI Tools Enabling Interpretability in Drug Discovery
5. Impact of XAI on Drug Discovery
5.1. Data Analysis
5.2. Molecular Property Prediction
5.3. Personalized Medicine
5.4. Unraveling Drug–Drug and Drug–Target Interactions
5.5. Facilitating Drug Repositioning and Combination Therapy
5.6. Clinical Trial Design
5.7. Ethics and Regulatory Implications
6. Key Challenges and Future Research Directions in XAI for Drug Discovery
6.1. Key Challenges
6.1.1. Data Limitations
6.1.2. Complexity and Interpretability Tradeoff
6.1.3. Ethical and Bias Concerns
6.1.4. Regulatory Compliance
6.2. Future Research Directions
6.2.1. Multimodal Data Integration and Augmentation
6.2.2. Next-Generation XAI Frameworks
6.2.3. Experimental Validation and Hybrid Models
6.2.4. Collaborative Open Platforms
6.2.5. Ethical-by-Design Frameworks
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Full Form |
---|---|
AI | Artificial Intelligence |
ADME | Absorption, Distribution, Metabolism, and Excretion |
ADMET | Absorption, Distribution, Metabolism, Excretion, and Toxicity |
ALE | Accumulated Local Effects |
CAM | Class Activation Mapping |
CNN | Convolutional Neural Networks |
DDI | Drug–Drug Interactions |
DL | Deep Learning |
DNA | Deoxyribonucleic Acid |
DTI | Drug–Target Interaction |
EHR | Electronic Health Records |
FDA | Food and Drug Administration |
FL | Federated Learning |
GAM | Generalized Additive Models |
GAP | Global Average Pooling |
GNN | Graph Neural Networks |
HGP | Human Genome Project |
HTS | High-Throughput Screening |
ICU | Intensive Care Unit |
IID | Independent and Identically Distributed |
LIME | Local Interpretable Model-Agnostic Explanations |
LLM | Large Language Model |
MDS | Molecular Dynamics Simulations |
ML | Machine Learning |
NLP | Natural Language Processing |
NN | Neural Networks |
PDP | Partial Dependence Plots |
QSAR | Quantitative Structure–Activity Relationship |
RNN | Recurrent Neural Networks |
SHAP | SHapley Additive exPlanations |
SVM | Support Vector Machine |
XAI | Explainable Artificial Intelligence |
References
- Hood, L.; Rowen, L. The Human Genome Project: Big science transforms biology and medicine. Genome Med. 2013, 5, 79. [Google Scholar] [CrossRef]
- Meganck, R.M.; Baric, R.S. Developing therapeutic approaches for twenty-first-century emerging infectious viral diseases. Nat. Med. 2021, 27, 401–410. [Google Scholar] [CrossRef]
- Lombardino, J.G.; Lowe, J.A. The role of the medicinal chemist in drug discovery–then and now. Nat. Rev. Drug Discov. 2004, 3, 853–862. [Google Scholar] [CrossRef] [PubMed]
- Ali, S.; Ahmad, K.; Shaikh, S.; Chun, H.J.; Choi, I.; Lee, E.J. Mss51 protein inhibition serves as a novel target for type 2 diabetes: A molecular docking and simulation study. J. Biomol. Struct. Dyn. 2024, 42, 4862–4869. [Google Scholar] [CrossRef]
- Ali, S.; Ahmad, K.; Shaikh, S.; Lim, J.H.; Chun, H.J.; Ahmad, S.S.; Lee, E.J.; Choi, I. Identification and Evaluation of Traditional Chinese Medicine Natural Compounds as Potential Myostatin Inhibitors: An In Silico Approach. Molecules 2022, 27, 4303. [Google Scholar] [CrossRef]
- Ahmad, S.S.; Ahmad, K.; Lee, E.J.; Shaikh, S.; Choi, I. Computational Identification of Dithymoquinone as a Potential Inhibitor of Myostatin and Regulator of Muscle Mass. Molecules 2021, 26, 5407. [Google Scholar] [CrossRef] [PubMed]
- Maryanoff, B.E. Drug Discovery and the Medicinal Chemist. Future Med. Chem. 2009, 1, 11–15. [Google Scholar] [CrossRef] [PubMed]
- Glassman, P.M.; Muzykantov, V.R. Pharmacokinetic and Pharmacodynamic Properties of Drug Delivery Systems. J. Pharmacol. Exp. Ther. 2019, 370, 570–580. [Google Scholar] [CrossRef] [PubMed]
- Shaikh, S.; Ali, S.; Lim, J.H.; Ahmad, K.; Han, K.S.; Lee, E.J.; Choi, I. Virtual Insights into Natural Compounds as Potential 5α-Reductase Type II Inhibitors: A Structure-Based Screening and Molecular Dynamics Simulation Study. Life 2023, 13, 2152. [Google Scholar] [CrossRef]
- Velkov, T.; Bergen, P.J.; Lora-Tamayo, J.; Landersdorfer, C.B.; Li, J. PK/PD models in antibacterial development. Curr. Opin. Microbiol. 2013, 16, 573–579. [Google Scholar] [CrossRef]
- Cook, D.; Brown, D.; Alexander, R.; March, R.; Morgan, P.; Satterthwaite, G.; Pangalos, M.N. Lessons learned from the fate of AstraZeneca’s drug pipeline: A five-dimensional framework. Nat. Rev. Drug Discov. 2014, 13, 419–431. [Google Scholar] [CrossRef]
- Morgan, P.; Brown, D.G.; Lennard, S.; Anderton, M.J.; Barrett, J.C.; Eriksson, U.; Fidock, M.; Hamrén, B.; Johnson, A.; March, R.E.; et al. Impact of a five-dimensional framework on R&D productivity at AstraZeneca. Nat. Rev. Drug Discov. 2018, 17, 167–181. [Google Scholar] [CrossRef]
- Singh, N.; Vayer, P.; Tanwar, S.; Poyet, J.L.; Tsaioun, K.; Villoutreix, B.O. Drug discovery and development: Introduction to the general public and patient groups. Front. Drug Discov. 2023, 3, 1201419. [Google Scholar] [CrossRef]
- Matthews, H.; Hanison, J.; Nirmalan, N. “Omics”-Informed Drug and Biomarker Discovery: Opportunities, Challenges and Future Perspectives. Proteomes 2016, 4, 28. [Google Scholar] [CrossRef] [PubMed]
- Paul, D.; Sanap, G.; Shenoy, S.; Kalyane, D.; Kalia, K.; Tekade, R.K. Artificial intelligence in drug discovery and development. Drug Discov. Today 2021, 26, 80–93. [Google Scholar] [CrossRef]
- Bhardwaj, A.; Kishore, S.; Pandey, D.K. Artificial Intelligence in Biological Sciences. Life 2022, 12, 1430. [Google Scholar] [CrossRef] [PubMed]
- Lawrence, E.; El-Shazly, A.; Seal, S.; Joshi, C.K.; Liò, P.; Singh, S.; Bender, A.; Sormanni, P.; Greenig, M. Understanding Biology in the Age of Artificial Intelligence. arXiv 2024, arXiv:2403.04106. [Google Scholar] [CrossRef]
- Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef] [PubMed]
- Yu, H.; Yang, L.T.; Zhang, Q.; Armstrong, D.; Deen, M.J. Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives. Neurocomputing 2021, 444, 92–110. [Google Scholar] [CrossRef]
- Minic, A.; Jovanovic, L.; Bacanin, N.; Stoean, C.; Zivkovic, M.; Spalevic, P.; Petrovic, A.; Dobrojevic, M.; Stoean, R. Applying Recurrent Neural Networks for Anomaly Detection in Electrocardiogram Sensor Data. Sensors 2023, 23, 9878. [Google Scholar] [CrossRef]
- He, K.; Mao, R.; Lin, Q.; Ruan, Y.; Lan, X.; Feng, M.; Cambria, E. A survey of large language models for healthcare: From data, technology, and applications to accountability and ethics. Inf. Fusion 2025, 118, 102963. [Google Scholar] [CrossRef]
- Ali, S.; Qadri, Y.A.; Ahmad, K.; Lin, Z.; Leung, M.F.; Kim, S.W.; Vasilakos, A.V.; Zhou, T. Large Language Models in Genomics—A Perspective on Personalized Medicine. Bioengineering 2025, 12, 440. [Google Scholar] [CrossRef] [PubMed]
- U.S. Food and Drug Administration. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. 2025. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices (accessed on 21 May 2025).
- Mak, K.K.; Pichika, M.R. Artificial intelligence in drug development: Present status and future prospects. Drug Discov. Today 2019, 24, 773–780. [Google Scholar] [CrossRef]
- Zhang, K.; Yang, X.; Wang, Y.; Yu, Y.; Huang, N.; Li, G.; Li, X.; Wu, J.; Yang, S. Artificial intelligence in drug development. Nat. Med. 2025, 31, 45–59. [Google Scholar] [CrossRef]
- Sellwood, M.A.; Ahmed, M.; Segler, M.H.S.; Brown, N. Artificial intelligence in drug discovery. Future Med. Chem. 2018, 10, 2025–2028. [Google Scholar] [CrossRef]
- Pillai, N.; Dasgupta, A.; Sudsakorn, S.; Fretland, J.; Mavroudis, P.D. Machine Learning guided early drug discovery of small molecules. Drug Discov. Today 2022, 27, 2209–2215. [Google Scholar] [CrossRef]
- Kırboğa, K.K.; Abbasi, S.; Küçüksille, E. Explainability and White Box in Drug Discovery. Chem. Biol. Drug Des. 2023, 101, 560–572. [Google Scholar] [CrossRef]
- Ding, Q.; Yao, R.; Bai, Y.; Da, L.; Wang, Y.; Xiang, R.; Jiang, X.; Zhai, F. Explainable Artificial Intelligence in the Field of Drug Research. Drug Des. Dev. Ther. 2025, 19, 4501–4516. [Google Scholar] [CrossRef]
- Ponzoni, I.; Capoferri, L.; Reis, P.A.B.; Holliday, J.D.; Bender, A. Explainable artificial intelligence: A taxonomy and guidelines for its application to drug discovery. WIREs Comput. Mol. Sci. 2023, 13, e1681. [Google Scholar] [CrossRef]
- Jiménez-Luna, J.; Grisoni, F.; Schneider, G. Drug discovery with explainable artificial intelligence. Nat. Mach. Intell. 2020, 2, 573–584. [Google Scholar] [CrossRef]
- Vo, T.; Nguyen, N.; Kha, Q.; Le, N. On the Road to Explainable AI in Drug–Drug Interactions Prediction: A Systematic Review. Comput. Struct. Biotechnol. J. 2022, 20, 2112–2123. [Google Scholar] [CrossRef]
- Alizadehsani, R.; Oyelere, S.S.; Hussain, S.; Jagatheesaperumal, S.K.; Calixto, R.R.; Rahouti, M. Explainable Artificial Intelligence for Drug Discovery and Development—A Comprehensive Survey. IEEE Access 2024, 12, 35796–35812. [Google Scholar] [CrossRef]
- Ali, S.; Abuhmed, T.; El-Sappagh, S.; Muhammad, K.; Alonso-Moral, J.M.; Confalonieri, R.; Guidotti, R.; Del Ser, J.; Díaz-Rodríguez, N.; Herrera, F. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Inf. Fusion 2023, 99, 101805. [Google Scholar] [CrossRef]
- Montavon, G.; Samek, W.; Müller, K.R. Methods for interpreting and understanding deep neural networks. Digit. Signal Process. 2018, 73, 1–15. [Google Scholar] [CrossRef]
- Vilone, G.; Longo, L. Notions of explainability and evaluation approaches for explainable artificial intelligence. Inf. Fusion 2021, 76, 89–106. [Google Scholar] [CrossRef]
- Das, A.; Rad, P. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. arXiv 2020, arXiv:2006.11371. [Google Scholar] [CrossRef]
- Seddik, B.; Ahlem, D.; Hocine, C. An Explainable Self-Labeling Grey-Box Model. In Proceedings of the 2022 4th International Conference on Pattern Analysis and Intelligent Systems (PAIS), Oum El Bouaghi, Algeria, 12–13 October 2022; pp. 1–7. [Google Scholar] [CrossRef]
- Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cogn. Comput. 2024, 16, 45–74. [Google Scholar] [CrossRef]
- Dwivedi, R.; Dave, D.; Naik, H.; Singhal, S.; Rana, O.; Patel, P.; Qian, B.; Wen, Z.; Shah, T.; Morgan, G.; et al. Explainable AI (XAI): Core ideas, techniques and solutions. ACM Comput. Surv. 2023, 55, 194. [Google Scholar] [CrossRef]
- Stiglic, G.; Kocbek, P.; Fijacko, N.; Zitnik, M.; Verbert, K.; Cilar, L. Interpretability of machine learning-based prediction models in healthcare. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1379. [Google Scholar] [CrossRef]
- Vollert, S.; Atzmueller, M.; Theissler, A. Interpretable Machine Learning: A brief survey from the predictive maintenance perspective. In Proceedings of the 2021 IEEE 26th International Conference on Emerging Technologies and Factory Automation (ETFA), Online, 7–10 September 2021; pp. 1–8. [Google Scholar] [CrossRef]
- Hanif, A.; Zhang, X.; Wood, S. A Survey on Explainable Artificial Intelligence Techniques and Challenges. In Proceedings of the 2021 IEEE 25th International Enterprise Distributed Object Computing Conference Workshops (EDOCW), Gold Coast, Australia, 25–29 October 2021; pp. 81–89. [Google Scholar] [CrossRef]
- Salih, A.M.; Wang, Y. Are Linear Regression Models White Box and Interpretable? arXiv 2024, arXiv:2407.12177. [Google Scholar] [CrossRef]
- Abu-Faraj, M.; Al-Hyari, A.; Alqadi, Z.A.A. Experimental Analysis of Methods Used to Solve Linear Regression Models. Comput. Mater. Contin. 2022, 72, 5699–5712. [Google Scholar] [CrossRef]
- Hope, T.M.H. Linear regression. In Machine Learning: Methods and Applications to Brain Disorders; Mechelli, A., Vieira, S., Eds.; Academic Press: London, UK, 2020; pp. 67–81. [Google Scholar] [CrossRef]
- Tian, Y.; Zhang, Y. A comprehensive survey on regularization strategies in machine learning. Inf. Fusion 2022, 80, 146–166. [Google Scholar] [CrossRef]
- Pargent, F.; Pfisterer, F.; Thomas, J.; Bischl, B. Regularized target encoding outperforms traditional methods in supervised machine learning with high cardinality features. Comput. Stat. 2022, 37, 2671–2692. [Google Scholar] [CrossRef]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Retzlaff, C.O.; Angerschmid, A.; Saranti, A.; Schneeberger, D.; Röttger, R.; Müller, H.; Holzinger, A. Post-hoc vs ante-hoc explanations: XAI design guidelines for data scientists. Cogn. Syst. Res. 2024, 86, 101243. [Google Scholar] [CrossRef]
- Agarwal, R.; Melnick, L.; Frosst, N.; Zhang, X.; Lengerich, B.; Caruana, R.; Hinton, G. Neural Additive Models: Interpretable Machine Learning with Neural Nets. arXiv 2020, arXiv:2004.13912. [Google Scholar]
- Wood, S.N. Inference and computation with generalized additive models and their extensions. TEST 2020, 29, 307–339. [Google Scholar] [CrossRef]
- Oviedo, F.; Lavista Ferres, J.; Buonassisi, T.; Butler, K.T. Interpretable and Explainable Machine Learning for Materials Science and Chemistry. Accounts Mater. Res. 2022, 3, 597–607. [Google Scholar] [CrossRef]
- Lötsch, J.; Kringel, D.; Ultsch, A. Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients. BioMedInformatics 2022, 2, 1–17. [Google Scholar] [CrossRef]
- Izza, Y.; Ignatiev, A.; Marques-Silva, J. On Explaining Decision Trees. arXiv 2020, arXiv:2010.11034. [Google Scholar] [CrossRef]
- Kozielski, M.; Sikora, M.; Wawrowski, Ł. Towards consistency of rule-based explainer and black box model: Fusion of rule induction and XAI-based feature importance. arXiv 2024, arXiv:2407.14543. [Google Scholar] [CrossRef]
- Cesarini, M.; Malandri, L.; Pallucchini, F.; Seveso, A.; Xing, F. Explainable AI for Text Classification: Lessons from a Comprehensive Evaluation of Post Hoc Methods. Cogn. Comput. 2024, 16, 3077–3095. [Google Scholar] [CrossRef]
- Gianfagna, L.; Di Cecco, A. Explainable AI with Python; Springer: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
- Dieber, J.; Kirrane, S. Why model why? Assessing the strengths and limitations of LIME. arXiv 2020, arXiv:2012.00093. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Why Should I Trust You? Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. arXiv 2017, arXiv:1705.07874. [Google Scholar] [CrossRef]
- Salih, A.M.; Raisi-Estabragh, Z.; Boscolo Galazzo, I.; Radeva, P.; Petersen, S.E.; Lekadir, K.; Menegaz, G. A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME. Adv. Intell. Syst. 2024, 7, 2400304. [Google Scholar] [CrossRef]
- Speith, T. A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22), Seoul, Republic of Korea, 21–24 June 2022; pp. 1–12. [Google Scholar] [CrossRef]
- Weber, L.; Lapuschkin, S.; Binder, A.; Samek, W. Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement. Inf. Fusion 2023, 92, 154–176. [Google Scholar] [CrossRef]
- Dehimi, N.E.H.; Tolba, Z. Attention Mechanisms in Deep Learning: Towards Explainable Artificial Intelligence. In Proceedings of the 2024 6th International Conference on Pattern Analysis and Intelligent Systems (PAIS), Oum El Bouaghi, Algeria, 24–25 April 2024. [Google Scholar] [CrossRef]
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. arXiv 2015, arXiv:1512.04150. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359. [Google Scholar] [CrossRef]
- Higgins, D.; Madai, V.I. From Bit to Bedside: A Practical Framework for Artificial Intelligence Product Development in Healthcare. Adv. Intell. Syst. 2020, 2, 2000052. [Google Scholar] [CrossRef]
- Nasarian, E.; Alizadehsani, R.; Acharya, U.R.; Tsui, K.L. Designing Interpretable ML System to Enhance Trust in Healthcare: A Systematic Review to Proposed Responsible Clinician-AI-Collaboration Framework. Inf. Fusion 2024, 108, 102412. [Google Scholar] [CrossRef]
- Rucco, M.; Viticchi, G.; Falsetti, L. Towards Personalized Diagnosis of Glioblastoma in Fluid-Attenuated Inversion Recovery (FLAIR) by Topological Interpretable Machine Learning. Mathematics 2020, 8, 770. [Google Scholar] [CrossRef]
- Carrieri, A.P.; Haiminen, N.; Maudsley-Barton, S.; Gardiner, L.J.; Murphy, B.; Mayes, A.E.; Paterson, S.; Grimshaw, S.; Winn, M.; Shand, C.; et al. Explainable AI reveals changes in skin microbiome composition linked to phenotypic differences. Sci. Rep. 2021, 11, 4565. [Google Scholar] [CrossRef]
- Magesh, P.R.; Myloth, R.D.; Tom, R.J. An Explainable Machine Learning Model for Early Detection of Parkinson’s Disease using LIME on DaTSCAN Imagery. Comput. Biol. Med. 2020, 126, 104041. [Google Scholar] [CrossRef]
- Lauritsen, S.M.; Kristensen, M.; Olsen, M.V.; Larsen, M.S.; Lauritsen, K.M.; Jørgensen, M.J.; Lange, J.; Thiesson, B. Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nat. Commun. 2020, 11, 3852. [Google Scholar] [CrossRef]
- Meldo, A.; Utkin, L.; Kovalev, M.; Kasimov, E. The natural language explanation algorithms for the lung cancer computer-aided diagnosis system. Artif. Intell. Med. 2020, 108, 101952. [Google Scholar] [CrossRef] [PubMed]
- Yeboah, D.; Steinmeister, L.; Hier, D.B.; Hadi, B.; Wunsch, D.C.; Olbricht, G.R.; Obafemi-Ajayi, T. An Explainable and Statistically Validated Ensemble Clustering Model Applied to the Identification of Traumatic Brain Injury Subgroups. IEEE Access 2020, 8, 180690–180705. [Google Scholar] [CrossRef]
- Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest X-Ray Images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef]
- Yao, L.; Syed, A.R.; Rahman, M.H.; Rahman, M.M.; Foraker, R.E.; Banerjee, I. Predicting Post-stroke Hospital Discharge Disposition Using Interpretable Machine Learning Approaches. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2955–2961. [Google Scholar] [CrossRef]
- Ye, Q.; Xia, J.; Yang, G. Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study. arXiv 2021, arXiv:2104.14506. [Google Scholar] [CrossRef]
- Reyna, M.A.; Josef, C.S.; Jeter, R.; Shashikumar, S.P.; Westover, M.B.; Nemati, S.; Clifford, G.D.; Sharma, A. Early Prediction of Sepsis From Clinical Data: The PhysioNet/Computing in Cardiology Challenge 2019. Crit. Care Med. 2020, 48, 210–217. [Google Scholar] [CrossRef] [PubMed]
- Varzandian, A.; Razo, M.A.S.; Sanders, M.R.; Atmakuru, A.; Fatta, G.D. Classification-Biased Apparent Brain Age for the Prediction of Alzheimer’s Disease. Front. Neurosci. 2021, 15, 673120. [Google Scholar] [CrossRef]
- Pierson, E.; Cutler, D.M.; Leskovec, J.; Mullainathan, S.; Obermeyer, Z. An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nat. Med. 2021, 27, 136–140. [Google Scholar] [CrossRef]
- Born, J.; Wiedemann, N.; Cossio, M.; Buhre, C.; Brändle, G.; Leidermann, K.; Goulet, J.; Aujayeb, A.; Moor, M.; Rieck, B.; et al. Accelerating Detection of Lung Pathologies with Explainable Ultrasound Image Analysis. Appl. Sci. 2021, 11, 672. [Google Scholar] [CrossRef]
- Shen, Y.; Wu, N.; Phang, J.; Park, J.; Liu, K.; Tyagi, S.; Heacock, L.; Kim, S.G.; Moy, L.; Cho, K.; et al. An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization. Med. Image Anal. 2021, 68, 101908. [Google Scholar] [CrossRef] [PubMed]
- Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Wang, R.; Zhao, H.; Zha, Y.; et al. Deep Learning Enables Accurate Diagnosis of Novel Coronavirus (COVID-19) with CT Images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 2775–2780. [Google Scholar] [CrossRef] [PubMed]
- Wang, S.H.; Govindaraj, V.V.; Górriz, J.M.; Zhang, X.; Zhang, Y.D. COVID-19 classification by FGCNet with deep feature fusion from graph convolutional network and convolutional neural network. Inf. Fusion 2021, 67, 208–229. [Google Scholar] [CrossRef] [PubMed]
- Fan, Z.; Gong, P.; Tang, S.; Lee, C.U.; Zhang, X.; Song, P.; Chen, S.; Li, H. Joint localization and classification of breast masses on ultrasound images using an auxiliary attention-based framework. Med. Image Anal. 2023, 90, 102960. [Google Scholar] [CrossRef]
- Sutton, R.T.; Zaïane, O.R.; Goebel, R.; Baumgart, D.C. Artificial intelligence enabled automated diagnosis and grading of ulcerative colitis endoscopy images. Sci. Rep. 2022, 12, 2748. [Google Scholar] [CrossRef]
- Lu, S.; Zhu, Z.; Gorriz, J.M.; Wang, S.H.; Zhang, Y.D. NAGNN: Classification of COVID-19 based on neighboring aware representation from deep graph neural network. Int. J. Intell. Syst. 2022, 37, 1572–1598. [Google Scholar] [CrossRef]
- Punn, N.S.; Agarwal, S. Automated diagnosis of COVID-19 with limited posteroanterior chest X-ray images using fine-tuned deep neural networks. Appl. Intell. 2021, 51, 2689–2702. [Google Scholar] [CrossRef]
- Wang, H.; Wang, S.; Qin, Z.; Zhang, Y.; Li, R.; Xia, Y. Triple attention learning for classification of 14 thoracic diseases using chest radiography. Med. Image Anal. 2021, 67, 101846. [Google Scholar] [CrossRef]
- Alsinglawi, B.; Alshari, O.; Alorjani, M.; Mubin, O.; Alnajjar, F.; Novoa, M.; Darwish, O. An explainable machine learning framework for lung cancer hospital length of stay prediction. Sci. Rep. 2022, 12, 607. [Google Scholar] [CrossRef]
- Le, N.Q.K.; Kha, Q.H.; Nguyen, V.H.; Chen, Y.C.; Cheng, S.J.; Chen, C.Y. Machine Learning-Based Radiomics Signatures for EGFR and KRAS Mutations Prediction in Non-Small-Cell Lung Cancer. Int. J. Mol. Sci. 2021, 22, 9254. [Google Scholar] [CrossRef]
- Abeyagunasekera, S.H.P.; Perera, Y.; Chamara, K.; Kaushalya, U.; Sumathipala, P.; Senaweera, O. LISA: Enhance the explainability of medical images unifying current XAI techniques. In Proceedings of the 2022 IEEE 7th International Conference for Convergence in Technology (I2CT), Pune, India, 7–9 April 2022; pp. 1–9. [Google Scholar] [CrossRef]
- Rodríguez-Pérez, R.; Bajorath, J. Interpretation of Compound Activity Predictions from Complex Machine Learning Models Using Local Approximations and Shapley Values. J. Med. Chem. 2020, 63, 8761–8777. [Google Scholar] [CrossRef]
- Takagi, A.; Kamada, M.; Hamatani, E.; Kojima, R.; Okuno, Y. GraphIX: Graph-based In silico XAI (explainable artificial intelligence) for drug repositioning from biopharmaceutical network. arXiv 2022, arXiv:2212.10788. [Google Scholar]
- Cao, H.; Liu, Z.; Lu, X.; Yao, Y.; Li, Y. InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery. arXiv 2024, arXiv:2311.16208v2. [Google Scholar]
- Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef]
- Kamya, P.; Ozerov, I.V.; Pun, F.W.; Tretina, K.; Fokina, T.; Chen, S.; Naumov, V.; Long, X.; Lin, S.; Korzinkin, M.; et al. PandaOmics: An AI-Driven Platform for Therapeutic Target and Biomarker Discovery. J. Chem. Inf. Model. 2024, 64, 3961–3969. [Google Scholar] [CrossRef] [PubMed]
- Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Precup, D., Teh, Y.W., Eds.; PMLR: New York, NY, USA, 2017; pp. 3319–3328. [Google Scholar]
- Harren, T.; Matter, H.; Hessler, G.; Rarey, M.; Grebner, C. Interpretation of structure–activity relationships in real-world drug design data sets using explainable artificial intelligence. J. Chem. Inf. Model. 2022, 62, 447–462. [Google Scholar] [CrossRef] [PubMed]
- Maia, E.H.B.; de Souza, L.H.M.; de Souza, R.T.; Andricopulo, A.D. Structure-Based Virtual Screening: From Classical to Artificial Intelligence. Front. Chem. 2020, 8, 343. [Google Scholar] [CrossRef] [PubMed]
- Schneider, P.; Walters, W.P.; Plowright, A.T.; Sieroka, N.; Listgarten, J.; Goodnow, R.A., Jr.; Fisher, J.; Jansen, J.M.; Duca, J.S.; Rush, T.S.; et al. Rethinking Drug Design in the Artificial Intelligence Era. Nat. Rev. Drug Discov. 2020, 19, 353–364. [Google Scholar] [CrossRef]
- Sahoo, B.M.; Kumar, B.V.V.R.; Sruti, J.; Mahapatra, M.K.; Banik, B.K.; Borah, P. Drug Repurposing Strategy (DRS): Emerging Approach to Identify Potential Therapeutics for Treatment of Novel Coronavirus Infection. Front. Mol. Biosci. 2021, 8, 628144. [Google Scholar] [CrossRef]
- Ali, S.; Shaikh, S.; Ahmad, K.; Choi, I. Identification of active compounds as novel dipeptidyl peptidase-4 inhibitors through machine learning and structure-based molecular docking simulations. J. Biomol. Struct. Dyn. 2025, 43, 1611–1620. [Google Scholar] [CrossRef]
- Danishuddin; Kumar, V.; Faheem, M.; Lee, K.W. A decade of machine learning-based predictive models for human pharmacokinetics: Advances and challenges. Drug Discov. Today 2022, 27, 529–537. [Google Scholar] [CrossRef] [PubMed]
- Rao, J.; Zheng, S.; Lu, Y.; Yang, Y. Quantitative evaluation of explainable graph neural networks for molecular property prediction. Patterns 2022, 3, 100628. [Google Scholar] [CrossRef]
- Jiménez-Luna, J.; Skalic, M.; Weskamp, N.; Schneider, G. Coloring molecules with explainable artificial intelligence for preclinical relevance assessment. J. Chem. Inf. Model. 2021, 61, 1083–1094. [Google Scholar] [CrossRef]
- Shen, L.; Bai, J.; Jiao, W.; Shen, B. The fourth scientific discovery paradigm for precision medicine and healthcare: Challenges ahead. Precis. Clin. Med. 2021, 4, 80–84. [Google Scholar] [CrossRef]
- Drancé, M. Neuro-Symbolic XAI: Application to Drug Repurposing for Rare Diseases. In Database Systems for Advanced Applications; Bhattacharya, A., Lee Mong Li, J., Agrawal, D., Reddy, P.K., Mohania, M., Mondal, A., Goyal, V., Uday Kiran, R., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 539–543. [Google Scholar]
- Askari, M.; Eslami, S.; Louws, M.; Wierenga, P.C.; Dongelmans, D.A.; Kuiper, R.A.; Abu-Hanna, A. Frequency and nature of drug-drug interactions in the intensive care unit. Pharmacoepidemiol. Drug Saf. 2013, 22, 430–437. [Google Scholar] [CrossRef]
- Bories, M.; Bouzillé, G.; Cuggia, M.; Le Corre, P. Drug–Drug Interactions in Elderly Patients with Potentially Inappropriate Medications in Primary Care, Nursing Home and Hospital Settings: A Systematic Review and a Preliminary Study. Pharmaceutics 2021, 13, 266. [Google Scholar] [CrossRef]
- Reis, A.M.M.; Cassiani, S.H.D.B. Evaluation of three brands of drug interaction software for use in intensive care units. Pharm. World Sci. 2010, 32, 822–828. [Google Scholar] [CrossRef]
- Vonbach, P.; Dubied, A.; Krähenbühl, S.; Beer, J.H. Evaluation of frequently used drug interaction screening programs. Pharm. World Sci. 2008, 30, 367–374. [Google Scholar] [CrossRef] [PubMed]
- Cheng, F.; Zhao, Z. Machine learning-based prediction of drug–drug interactions by integrating drug phenotypic, therapeutic, chemical, and genomic properties. J. Am. Med. Inform. Assoc. 2014, 21, e278–e286. [Google Scholar] [CrossRef] [PubMed]
- Ryu, J.Y.; Kim, H.U.; Lee, S.Y. Deep learning improves prediction of drug–drug and drug–food interactions. Proc. Natl. Acad. Sci. USA 2018, 115, E4304–E4311. [Google Scholar] [CrossRef]
- Vilar, S.; Uriarte, E.; Santana, L.; Lorberbaum, T.; Hripcsak, G.; Friedman, C.; Tatonetti, N.P. Similarity-based modeling in large-scale prediction of drug–drug interactions. Nat. Protoc. 2014, 9, 2147–2163. [Google Scholar] [CrossRef]
- Xu, L.; Ru, X.; Song, R. Application of Machine Learning for Drug–Target Interaction Prediction. Front. Genet. 2021, 12, 680117. [Google Scholar] [CrossRef]
- Ideker, T.; Nussinov, R. Network approaches and applications in biology. PLoS Comput. Biol. 2017, 13, e1005771. [Google Scholar] [CrossRef] [PubMed]
- Lai, X.; Gupta, S.K.; Schmitz, U.; Marquardt, S.; Knoll, S.; Spitschak, A.; Wolkenhauer, O.; Pützer, B.M.; Vera, J. MiR-205-5p and miR-342-3p cooperate in the repression of the E2F1 transcription factor in the context of anticancer chemotherapy resistance. Theranostics 2018, 8, 1106–1120. [Google Scholar] [CrossRef] [PubMed]
- Lai, X.; Eberhardt, M.; Schmitz, U.; Vera, J. Systems biology-based investigation of cooperating microRNAs as monotherapy or adjuvant therapy in cancer. Nucleic Acids Res. 2019, 47, 7753–7766. [Google Scholar] [CrossRef]
- You, Y.; Lai, X.; Pan, Y.; Zheng, H.; Vera, J.; Liu, S.; Deng, S.; Zhang, L. Artificial intelligence in cancer target identification and drug discovery. Signal Transduct. Target. Ther. 2022, 7, 156. [Google Scholar] [CrossRef]
- Peyvandipour, A.; Saberian, N.; Shafi, A.; Donato, M.; Drăghici, S. A novel computational approach for drug repurposing using systems biology. Bioinformatics 2018, 34, 2817–2825. [Google Scholar] [CrossRef]
- Würth, R.; Thellung, S.; Bajetto, A.; Mazzanti, M.; Florio, T.; Barbieri, F. Drug-repositioning opportunities for cancer therapy: Novel molecular targets for known compounds. Drug Discov. Today 2016, 21, 190–199. [Google Scholar] [CrossRef]
- Pantziarka, P.; Bouche, G.; Meheus, L.; Sukhatme, V.; Sukhatme, V.P. Repurposing drugs in your medicine cabinet: Untapped opportunities for cancer therapy? Future Oncol. 2015, 11, 181–184. [Google Scholar] [CrossRef] [PubMed]
- Park, K. A review of computational drug repurposing. Transl. Clin. Pharmacol. 2019, 27, 59–63. [Google Scholar] [CrossRef]
- Al-Taie, Z.; Liu, D.; Mitchem, J.B.; Papageorgiou, C.; Kaifi, J.T.; Warren, W.C.; Shyu, C.R. Explainable Artificial Intelligence in High-Throughput Drug Repositioning for Subgroup Stratifications with Interventionable Potential. J. Biomed. Inform. 2021, 118, 103792. [Google Scholar] [CrossRef]
- Xue, H.; Li, J.; Xie, H.; Wang, Y. Review of Drug Repositioning Approaches and Resources. Int. J. Biol. Sci. 2018, 14, 1232–1244. [Google Scholar] [CrossRef]
- Lotfi Shahreza, M.; Ghadiri, N.; Mousavi, S.R.; Varshosaz, J.; Green, J.R. Heter-LP: A heterogeneous label propagation algorithm and its application in drug repositioning. J. Biomed. Inform. 2017, 68, 167–183. [Google Scholar] [CrossRef]
- Xu, R.; Wang, Q. PhenoPredict: A disease phenome-wide drug repositioning approach towards schizophrenia drug discovery. J. Biomed. Inform. 2015, 56, 348–355. [Google Scholar] [CrossRef] [PubMed]
- Xu, R.; Wang, Q. A genomics-based systems approach towards drug repositioning for rheumatoid arthritis. BMC Genom. 2016, 17, 518. [Google Scholar] [CrossRef]
- Lamb, J. The Connectivity Map: A new tool for biomedical research. Nat. Rev. Cancer 2007, 7, 54–60. [Google Scholar] [CrossRef]
- Lee, B.K.B.; Tiong, K.H.; Chang, J.K.; Liew, C.S.; Abdul Rahman, Z.A.; Tan, A.C.; Khang, T.F.; Cheong, S.C. DeSigN: Connecting gene expression with therapeutics for drug repurposing and development. BMC Genom. 2017, 18, 934. [Google Scholar] [CrossRef]
- Tian, Z.; Teng, Z.; Cheng, S.; Guo, M. Computational drug repositioning using meta-path-based semantic network analysis. BMC Syst. Biol. 2018, 12, 134. [Google Scholar] [CrossRef] [PubMed]
- Wang, Q.; Huang, K.; Chandak, P.; Zitnik, M.; Gehlenborg, N. Extending the nested model for user-centric XAI: A design study on GNN-based drug repurposing. IEEE Trans. Vis. Comput. Graph. 2023, 29, 1266–1276. [Google Scholar] [CrossRef] [PubMed]
- Zhang, B.; Huang, Z.; Zheng, H.; Li, W.; Liu, Z.; Zhang, Y.; Huang, Q.; Liu, X.; Jiang, H.; Liu, Q. EFMSDTI: Drug–target interaction prediction based on an efficient fusion of multi-source data. Front. Pharmacol. 2022, 13, 1009996. [Google Scholar] [CrossRef] [PubMed]
- Huang, A.; Xie, X.; Wang, X.; Peng, S. A Multimodal Data Fusion-Based Deep Learning Approach for Drug–Drug Interaction Prediction. In Bioinformatics Research and Applications; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 13760, pp. 275–285. [Google Scholar] [CrossRef]
- Sturm, H.; Teufel, J.; Isfeld, K.; Friederich, P.; Davis, R. Mitigating Molecular Aggregation in Drug Discovery with Predictive Insights from Explainable AI. Angew. Chem. Int. Ed. 2025, 137, e202503259. [Google Scholar] [CrossRef]
- Ying, R.; Bourgeois, D.; You, J.; Zitnik, M.; Leskovec, J. GNNExplainer: Generating Explanations for Graph Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 8–14 December 2019; Volume 32, pp. 9244–9255. [Google Scholar]
- Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef] [PubMed]
Technique | Basic Working Principle | Input Type | Computational Requirements |
---|---|---|---|
SHAP | Uses cooperative game theory to assign each feature an importance value for a prediction | Tabular, molecular descriptors, genomic data | High
LIME | Perturbs input locally and fits a simple interpretable model to approximate the prediction | Tabular, image, text | Moderate
Partial Dependence Plots | Shows the average predicted outcome as a function of one or two features, marginalizing over the others | Tabular | Low to moderate
Attention | Allocates weights to input elements, indicating their contribution | Sequences such as SMILES, molecular graphs | Moderate
Saliency Maps | Computes gradients of the output with respect to input features | Image, 2D/3D molecular structures | Moderate
Gradient Saliency | Measures the sensitivity of the output to small perturbations in the input by computing gradients | Text, image, sequence | Moderate
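As a hedged illustration of the first two rows of the table above, the sketch below applies SHAP and LIME to a generic tree-based classifier on synthetic tabular data standing in for molecular descriptors; the dataset, model, and parameter choices are placeholders, not from any study cited here.

```python
# Minimal sketch (assumptions: the shap and lime packages; a toy tabular dataset
# standing in for molecular descriptors). Shows the API shape only.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: game-theoretic importance values, one per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# LIME: perturb one instance locally and fit an interpretable surrogate model.
lime_explainer = LimeTabularExplainer(X, mode="classification")
explanation = lime_explainer.explain_instance(X[0], model.predict_proba,
                                              num_features=5)
print(explanation.as_list())  # (feature condition, local weight) pairs
```

The SHAP values quantify each feature’s contribution to a specific prediction relative to a baseline expectation, while the LIME output lists locally weighted feature conditions for the single explained instance.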
XAI Tool | Modality | Applications | Reference |
---|---|---|---|
CAM | Bone X-ray | The model was intended to estimate knee damage severity and pain level based on X-ray images. | [81]
CAM | Lung Ultrasound and X-ray | The model uses three types of lung ultrasound images with VGG-16 and VGGCAM networks to classify three pneumonia subtypes. | [82]
CAM | Breast X-ray | A globally aware multiple instance classifier (GMIC) was proposed, which uses CAM to find the most informative regions by combining local and global data. | [83]
CAM | Lung CT | It trains the DRE-Net model on data from both healthy and COVID-19 patients. | [84]
Grad-CAM | Lung CT | A deep feature fusion method was proposed, achieving higher performance than a single CNN. | [85]
Grad-CAM | Breast Ultrasound | A semi-supervised model integrating an attention mechanism and disentanglement was proposed, with Grad-CAM used to improve explainability. | [86]
Grad-CAM | Colonoscopy | It uses DenseNet121 to predict the presence of ulcerative colitis in patients. | [87]
Grad-CAM | Chest CT | A neighboring-aware graph neural network was proposed for COVID-19 detection based on chest CT images. | [88]
Grad-CAM and LIME | Lung X-ray and CT | The study examines five deep-learning models and uses a visualization technique to interpret NASNetLarge. | [89]
Attention | Chest X-ray | The study uses the A3Net model with triple-attention learning to diagnose 14 thoracic diseases. | [90]
SHAP | EHR | It proposes a predicted length-of-stay strategy to handle imbalanced EHR datasets. | [91]
SHAP | Lung CT | It introduces a model for predicting mutations in individuals with non-small-cell lung cancer. | [92]
LIME and SHAP | Chest X-ray | It provides a single pipeline to improve CNN explainability using several XAI approaches. | [93]
Tool/Platform | Description | Applications in Drug Discovery | Reference |
---|---|---|---|
SHAP | A model-agnostic method that assigns each feature an importance value for a particular prediction | Interpreting ML predictions in QSAR and SAR studies, identifying key molecular features influencing compound activity, and increasing transparency in model-guided drug design | [61,94] |
LIME | Explains the predictions of any classifier by approximating it locally with an interpretable model | Understanding model decisions in compound activity prediction and toxicity assessments | [60] |
DeepLIFT | Attributes importance scores to each input feature by comparing the activation to a reference activation | Interpreting DL models in genomics and proteomics data analysis | [37] |
Integrated Gradients | Assigns feature importance by integrating gradients of the model’s output with respect to the inputs | Explaining deep neural networks in molecular property prediction | [99] |
AlphaFold 3 | Predicts protein structures and their interactions with high accuracy using AI | Accelerating target identification and understanding protein–ligand interactions | [97]
GraphIX | A graph-based XAI framework for drug repositioning using biopharmaceutical networks | Identifying potential new uses for existing drugs by analyzing biological networks | [95] |
InstructMol | Integrates molecular graph data and SMILES sequences with natural language by fine-tuning a pretrained LLM | Enhances the foundation for XAI in drug discovery by aligning molecular structures with natural language through instruction tuning | [96] |
PandaOmics | An AI-driven platform for target discovery and biomarker identification | Discovering novel therapeutic targets and biomarkers in various diseases | [98] |