Deciphering Medicine: The Role of Explainable Artificial Intelligence in Healthcare Innovations

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 July 2024 | Viewed by 878

Special Issue Editors


Dr. Mohamed Shehata
Guest Editor
Department of Bioengineering, Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
Interests: computer vision; artificial intelligence; machine and deep learning; big data; medical imaging; computer-aided diagnostics

Prof. Dr. Mostafa Elhosseini
Guest Editor
Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
Interests: artificial intelligence (AI); machine learning; deep learning; robotics; metaheuristics; computer-assisted diagnosis systems; computer vision; bioinspired optimization algorithms; smart systems engineering

Special Issue Information

Dear Colleagues,

In an era where artificial intelligence (AI) is rapidly transforming the landscape of healthcare, the need for transparency and understandability in AI algorithms is critical. This Special Issue, "Deciphering Medicine: The Role of Explainable Artificial Intelligence in Healthcare Innovations", seeks to bridge the gap between advanced AI technologies and their practical, ethical, and efficient application in medical settings.

Focus and Scope:

We invite authors to submit original research, reviews, and insightful studies that focus on the development, implementation, and evaluation of explainable AI systems in medical diagnostics and treatment. This issue aims to highlight innovative methodologies, case studies, and frameworks that enhance the interpretability and transparency of AI models, thereby fostering trust and reliability among healthcare professionals and patients.

Key Themes:

  • The development of explainable AI models for diagnosis, prognosis, and treatment planning.
  • Ethical implications and considerations in deploying AI in medical settings.
  • Case studies showcasing the successful implementation of explainable AI in clinical practice.
  • Advances in machine learning and deep learning that enhance transparency and interpretability.
  • The integration of AI with traditional medical knowledge to improve patient outcomes.
  • User-centric approaches to designing explainable AI systems in healthcare.
  • Regulatory and policy perspectives on the use of AI in medical diagnostics and treatment.

Submissions:

We welcome submissions from researchers, practitioners, and thought leaders in the fields of computer science, medical informatics, bioengineering, and related disciplines. Articles should emphasize not only the technological aspects of AI, but also its practical implications, user experience, and ethical considerations in a medical context.

By focusing on explainable AI in healthcare, this Special Issue aims to illuminate the path towards the more transparent, ethical, and effective integration of AI in medicine, ultimately contributing to improved patient care and healthcare outcomes.

Dr. Mohamed Shehata
Prof. Dr. Mostafa Elhosseini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deciphering medicine
  • explainable AI
  • machine learning
  • deep learning
  • medical diagnostics and treatment

Published Papers (1 paper)


Review

18 pages, 743 KiB  
Artificial Intelligence Support for Informal Patient Caregivers: A Systematic Review
by Sahar Borna, Michael J. Maniaci, Clifton R. Haider, Cesar A. Gomez-Cabello, Sophia M. Pressman, Syed Ali Haider, Bart M. Demaerschalk, Jennifer B. Cowart and Antonio Jorge Forte
Bioengineering 2024, 11(5), 483; https://doi.org/10.3390/bioengineering11050483 - 12 May 2024
Viewed by 379
Abstract
This study aims to explore how artificial intelligence can help ease the burden on caregivers, filling a gap in current research and healthcare practices due to the growing challenge of an aging population and increased reliance on informal caregivers. We conducted a search with Google Scholar, PubMed, Scopus, IEEE Xplore, and Web of Science, focusing on AI and caregiving. Our inclusion criteria were studies where AI supports informal caregivers, excluding those solely for data collection. Adhering to PRISMA 2020 guidelines, we eliminated duplicates and screened for relevance. From 947 initially identified articles, 10 met our criteria, focusing on AI’s role in aiding informal caregivers. These studies, conducted between 2012 and 2023, were globally distributed, with 80% employing machine learning. Validation methods varied, with Hold-Out being the most frequent. Metrics across studies revealed accuracies ranging from 71.60% to 99.33%. Specific methods, like SCUT in conjunction with NNs and LibSVM, showcased accuracy between 93.42% and 95.36% as well as F-measures spanning 93.30% to 95.41%. AUC values indicated model performance variability, ranging from 0.50 to 0.85 in select models. Our review highlights AI’s role in aiding informal caregivers, showing promising results despite different approaches. AI tools provide smart, adaptive support, improving caregivers’ effectiveness and well-being.