Search Results (1,430)

Search Parameters:
Keywords = explainable AI

18 pages, 956 KiB  
Article
An Explainable Radiomics-Based Classification Model for Sarcoma Diagnosis
by Simona Correra, Arnar Evgení Gunnarsson, Marco Recenti, Francesco Mercaldo, Vittoria Nardone, Antonella Santone, Halldór Jónsson and Paolo Gargiulo
Diagnostics 2025, 15(16), 2098; https://doi.org/10.3390/diagnostics15162098 - 20 Aug 2025
Abstract
Objective: This study introduces an explainable, radiomics-based machine learning framework for the automated classification of sarcoma tumors using MRI. The approach aims to empower clinicians, reducing dependence on subjective image interpretation. Methods: A total of 186 MRI scans from 86 patients diagnosed with bone and soft tissue sarcoma were manually segmented to isolate tumor regions and corresponding healthy tissue. From these segmentations, 851 handcrafted radiomic features were extracted, including wavelet-transformed descriptors. A Random Forest classifier was trained to distinguish between tumor and healthy tissue, with hyperparameter tuning performed through nested cross-validation. To ensure transparency and interpretability, model behavior was explored through Feature Importance analysis and Local Interpretable Model-agnostic Explanations (LIME). Results: The model achieved an F1-score of 0.742, with an accuracy of 0.724 on the test set. LIME analysis revealed that texture and wavelet-based features were the most influential in driving the model’s predictions. Conclusions: By enabling accurate and interpretable classification of sarcomas in MRI, the proposed method provides a non-invasive approach to tumor classification, supporting an earlier, more personalized and precision-driven diagnosis. This study highlights the potential of explainable AI to assist in more secure clinical decision-making. Full article
(This article belongs to the Special Issue New Trends in Musculoskeletal Imaging)
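The pipeline described in the abstract above (radiomic feature vectors, a Random Forest classifier, and per-case LIME explanations) maps onto standard tooling. The snippet below is a minimal, hedged illustration using scikit-learn and the lime package, not the authors' code: the data, feature count, and feature names are synthetic placeholders standing in for the study's 851 handcrafted radiomic features.

```python
# Minimal sketch (not the authors' code): Random Forest on radiomic feature
# vectors with a LIME explanation for a single case. Data are synthetic
# placeholders standing in for the 851 handcrafted radiomic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(186, 20))          # placeholder: 186 scans x 20 radiomic features
y = rng.integers(0, 2, size=186)        # placeholder labels: tumor vs. healthy tissue
feature_names = [f"radiomic_{i}" for i in range(X.shape[1])]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# LIME fits a local linear surrogate around one test case and reports which
# features pushed the prediction toward "tumor" or "healthy".
explainer = LimeTabularExplainer(
    X_tr, feature_names=feature_names, class_names=["healthy", "tumor"], mode="classification"
)
explanation = explainer.explain_instance(X_te[0], clf.predict_proba, num_features=5)
print(explanation.as_list())
```
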
30 pages, 1923 KiB  
Article
Perceived AI Consumer-Driven Decision Integrity: Assessing Mediating Effect of Cognitive Load and Response Bias
by Syed Md Faisal Ali Khan and Yasser Moustafa Shehawy
Technologies 2025, 13(8), 374; https://doi.org/10.3390/technologies13080374 - 20 Aug 2025
Abstract
This study examines the influence of artificial intelligence (AI) system transparency, cognitive load, response bias, and individual values on perceived AI decision integrity. Using a quantitative approach, data were collected through surveys and analyzed with partial least squares structural equation modeling (PLS-SEM). The findings show that AI transparency and familiarity significantly affect users’ trust and their perception of decision fairness. Cognitive load and decision fatigue were found to increase response biases, which in turn undermine decision integrity. The study also identifies mediating effects of error sensitivity and response bias in AI-driven decision-making. In practical terms, lowering cognitive load and increasing transparency can raise acceptance of AI, while incorporating ethical considerations into AI system design helps to minimize bias. The study contributes to AI ethics by emphasizing fairness, explainability, and user-centered trust mechanisms. Future research should explore AI decision-making across industries and cultural contexts. The findings offer managerial, theoretical, and practical insights into responsible AI deployment. Full article
(This article belongs to the Section Information and Communication Technologies)
42 pages, 2529 KiB  
Review
Artificial Intelligence in Sports Biomechanics: A Scoping Review on Wearable Technology, Motion Analysis, and Injury Prevention
by Marouen Souaifi, Wissem Dhahbi, Nidhal Jebabli, Halil İbrahim Ceylan, Manar Boujabli, Raul Ioan Muntean and Ismail Dergaa
Bioengineering 2025, 12(8), 887; https://doi.org/10.3390/bioengineering12080887 (registering DOI) - 20 Aug 2025
Abstract
Aim: This scoping review examines the application of artificial intelligence (AI) in sports biomechanics, with a focus on enhancing performance and preventing injuries. The review addresses key research questions, including primary AI methods, their effectiveness in improving athletic performance, their potential for injury prediction, sport-specific applications, strategies for translating knowledge, ethical considerations, and remaining research gaps. Following the PRISMA-ScR guidelines, a comprehensive literature search was conducted across five databases (PubMed/MEDLINE, Web of Science, IEEE Xplore, Scopus, and SPORTDiscus), encompassing studies published between January 2015 and December 2024. After screening 3248 articles, 73 studies met the inclusion criteria (Cohen’s kappa = 0.84). Data were collected on AI techniques, biomechanical parameters, performance metrics, and implementation details. Results revealed a shift from traditional statistical models to advanced machine learning methods. Based on moderate-quality evidence from 12 studies, convolutional neural networks reached 94% agreement with international experts in technique assessment. Computer vision demonstrated accuracy within 15 mm compared to marker-based systems (6 studies, moderate quality). AI-driven training plans showed 25% accuracy improvements (4 studies, limited evidence). Random forest models predicted hamstring injuries with 85% accuracy (3 studies, moderate quality). Learning management systems enhanced knowledge transfer, raising coaches’ understanding by 45% and athlete adherence by 3.4 times. Implementing integrated AI systems resulted in a 23% reduction in reinjury rates. However, significant challenges remain, including standardizing data, improving model interpretability, validating models in real-world settings, and integrating them into coaching routines. In summary, incorporating AI into sports biomechanics marks a groundbreaking advancement, providing analytical capabilities that surpass traditional techniques. Future research should focus on creating explainable AI, applying rigorous validation methods, handling data ethically, and ensuring equitable access to promote the widespread and responsible use of AI across all levels of competitive sports. Full article
(This article belongs to the Section Biomechanics and Sports Medicine)
22 pages, 1706 KiB  
Review
Integrating Precision Medicine and Digital Health in Personalized Weight Management: The Central Role of Nutrition
by Xiaoguang Liu, Miaomiao Xu, Huiguo Wang and Lin Zhu
Nutrients 2025, 17(16), 2695; https://doi.org/10.3390/nu17162695 - 20 Aug 2025
Abstract
Obesity is a global health challenge marked by substantial inter-individual differences in responses to dietary and lifestyle interventions. Traditional weight loss strategies often overlook critical biological variations in genetics, metabolic profiles, and gut microbiota composition, contributing to poor adherence and variable outcomes. Our primary aim is to identify key biological and behavioral effectors relevant to precision medicine for weight control, with a particular focus on nutrition, while also discussing their current and potential integration into digital health platforms. Thus, this review aligns more closely with the identification of influential factors within precision medicine (e.g., genetic, metabolic, and microbiome factors) but also explores how these factors are currently integrated into digital health tools. We synthesize recent advances in nutrigenomics, nutritional metabolomics, and microbiome-informed nutrition, highlighting how tailored dietary strategies—such as high-protein, low-glycemic, polyphenol-enriched, and fiber-based diets—can be aligned with specific genetic variants (e.g., FTO and MC4R), metabolic phenotypes (e.g., insulin resistance), and gut microbiota profiles (e.g., Akkermansia muciniphila abundance, SCFA production). In parallel, digital health tools—including mobile health applications, wearable devices, and AI-supported platforms—enhance self-monitoring, adherence, and dynamic feedback in real-world settings. Mechanistic pathways such as gut–brain axis regulation, microbial fermentation, gene–diet interactions, and anti-inflammatory responses are explored to explain inter-individual differences in dietary outcomes. However, challenges such as cost, accessibility, and patient motivation remain and should be addressed to ensure the effective implementation of these integrated strategies in real-world settings. Collectively, these insights underscore the pivotal role of precision nutrition as a cornerstone for personalized, scalable, and sustainable obesity interventions. Full article
(This article belongs to the Section Nutrition and Public Health)
30 pages, 4430 KiB  
Article
Co-Optimization and Interpretability of Intelligent–Traditional Signal Control Based on Spatiotemporal Pressure Perception in Hybrid Control Scenarios
by Yingchang Xiong, Guoyang Qin, Jinglei Zeng, Keshuang Tang, Hong Zhu and Edward Chung
Sustainability 2025, 17(16), 7521; https://doi.org/10.3390/su17167521 (registering DOI) - 20 Aug 2025
Abstract
As cities transition toward intelligent traffic systems, hybrid networks that combine AI-controlled and traditionally controlled intersections pose challenges for efficiency and sustainability. Existing studies primarily assume globally intelligent networks, overlooking the practical complexities of hybrid control environments. Moreover, the decision-making processes of AI-based controllers remain opaque, limiting their reliability in dynamic traffic conditions. To address these challenges, this study investigates a realistic scenario: a Deep Reinforcement Learning (DRL) intersection surrounded by max–pressure-controlled neighbors. A spatiotemporal pressure perception agent is proposed, which (a) uses a novel Holistic Traffic Dynamo State (HTDS) representation that integrates real-time queue information, predicted vehicle merging patterns, and approaching traffic flows and (b) introduces a Neighbor–Pressure–Adaptive Reward Weighting (NP-ARW) mechanism to dynamically adjust queue penalties at incoming lanes based on relative pressure differences. Additionally, spatial–temporal pressure features are modeled using 1D convolutional layers (Conv1D) and attention mechanisms. Finally, our Strategy Imitation–Mechanism Attribution framework leverages XGBoost and Decision Trees to systematically analyze how traffic conditions affect phase selection, enabling explainable control logic. Experimental results demonstrate significant improvements: compared to fixed-time control, our method reduces average travel time by 65.45% and loss time by 85.04%, while simultaneously decreasing average queue lengths and pressure at neighboring intersections by 91.20% and 95.21%, respectively. Full article
(This article belongs to the Special Issue Sustainable Traffic and Mobility—2nd Edition)
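The max–pressure rule assumed at the neighboring intersections has a simple core: for each candidate phase, sum the queue differences between the upstream and downstream lanes of the movements it serves, and activate the phase with the largest total. The sketch below illustrates only that rule, under invented lane names and queue counts; it does not reproduce the paper's DRL agent, HTDS state, or NP-ARW reward.

```python
# Illustrative max-pressure phase selection (not the paper's implementation).
# Pressure of a phase = sum over its movements of (upstream queue - downstream queue);
# the controller activates the phase with the highest pressure.
from typing import Dict, List, Tuple

# Hypothetical queues (vehicles) on incoming/outgoing lanes of one intersection.
queues: Dict[str, int] = {"N_in": 12, "S_in": 9, "E_in": 4, "W_in": 7,
                          "N_out": 2, "S_out": 5, "E_out": 1, "W_out": 3}

# Each phase serves a set of movements, written as (upstream lane, downstream lane).
phases: Dict[str, List[Tuple[str, str]]] = {
    "NS_through": [("N_in", "S_out"), ("S_in", "N_out")],
    "EW_through": [("E_in", "W_out"), ("W_in", "E_out")],
}

def phase_pressure(movements: List[Tuple[str, str]]) -> int:
    """Sum of queue differences (upstream minus downstream) over the phase's movements."""
    return sum(queues[up] - queues[down] for up, down in movements)

pressures = {name: phase_pressure(m) for name, m in phases.items()}
active = max(pressures, key=pressures.get)
print(pressures)            # {'NS_through': 14, 'EW_through': 7}
print("activate:", active)  # activate: NS_through
```
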
17 pages, 774 KiB  
Review
Artificial Intelligence in Assessing Reproductive Aging: Role of Mitochondria, Oxidative Stress, and Telomere Biology
by Efthalia Moustakli, Themos Grigoriadis, Sofoklis Stavros, Anastasios Potiris, Athanasios Zikopoulos, Angeliki Gerede, Ioannis Tsimpoukis, Charikleia Papageorgiou, Konstantinos Louis and Ekaterini Domali
Diagnostics 2025, 15(16), 2075; https://doi.org/10.3390/diagnostics15162075 - 19 Aug 2025
Abstract
Reproductive aging in women and men is a complex, multifactorial, and still incompletely understood process that progressively diminishes fertility potential. Gamete quality and reproductive lifespan are compromised by biological factors such as mitochondrial dysfunction, increased oxidative stress (OS), and progressive telomere shortening. Clinically established biomarkers, including follicle-stimulating hormone (FSH) and anti-Müllerian hormone (AMH), are used to estimate ovarian reserve and reproductive status, but these markers have limited predictive validity and capture the complexity of reproductive aging only partially. Recent advances in artificial intelligence (AI) make it possible to integrate and interpret disparate, complex data sets, including imaging, molecular, and clinical data. Machine learning (ML) and deep learning are the AI methodologies that improve the accuracy of reproductive outcome predictions and enable the construction of personalized treatment programs. In this critical discussion, we present the roles of mitochondria, OS, and telomere biology as emerging biomarkers of reproductive aging and their potential to improve fertility evaluation. We also address the current status of AI applications in reproductive medicine, their future promise, and applications involving embryo selection, multi-omics data integration, and estimation of reproductive age. Finally, model explainability, data heterogeneity, and other ethical issues remain open concerns that must be addressed to ensure that AI technology is used ethically and responsibly in reproductive care. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health and Medicine)
22 pages, 747 KiB  
Article
Unpacking the Black Box: How AI Capability Enhances Human Resource Functions in China’s Healthcare Sector
by Xueru Chen, Maria Pilar Martínez-Ruiz, Elena Bulmer and Benito Yáñez-Araque
Information 2025, 16(8), 705; https://doi.org/10.3390/info16080705 - 19 Aug 2025
Abstract
Artificial intelligence (AI) is transforming organizational functions across sectors; however, its application to human resource management (HRM) within healthcare remains underexplored. This study aims to unpack the black-box nature of AI capability’s impact on HR functions within China’s healthcare sector, a domain undergoing rapid digital transformation, driven by national innovation policies. Grounded in resource-based theory, the study conceptualizes AI capability as a multidimensional construct encompassing tangible resources, human resources, and organizational intangibles. Using a structural equation modeling approach (PLS-SEM), the analysis draws on survey data from 331 professionals across five hospitals in three Chinese cities. The results demonstrate a strong, positive, and statistically significant relationship between AI capability and HR functions, accounting for 75.2% of the explained variance. These findings indicate that AI capability enhances HR performance through smarter recruitment, personalized training, and data-driven talent management. By empirically illuminating the mechanisms linking AI capability to HR outcomes, the study contributes to theoretical development and offers actionable insights for healthcare administrators and policymakers. It positions AI not merely as a technological tool but as a strategic resource to address talent shortages and improve equity in workforce distribution. This work helps to clarify a previously opaque area of AI application in healthcare HRM. Full article
(This article belongs to the Special Issue Emerging Research in Knowledge Management and Innovation)
22 pages, 4350 KiB  
Review
A Review of Artificial Intelligence Techniques in Fault Diagnosis of Electric Machines
by Christos Zachariades and Vigila Xavier
Sensors 2025, 25(16), 5128; https://doi.org/10.3390/s25165128 - 18 Aug 2025
Abstract
Rotating electrical machines are critical assets in industrial systems, where unexpected failures can lead to costly downtime and safety risks. This review presents a comprehensive and up-to-date analysis of artificial intelligence (AI) techniques for fault diagnosis in electric machines. It categorizes and evaluates supervised, unsupervised, deep learning, and hybrid/ensemble approaches in terms of diagnostic accuracy, adaptability, and implementation complexity. A comparative analysis highlights the strengths and limitations of each method, while emerging trends such as explainable AI, self-supervised learning, and digital twin integration are discussed as enablers of next-generation diagnostic systems. To support practical deployment, the article proposes a modular implementation framework and offers actionable recommendations for practitioners. This work serves as both a reference and a guide for researchers and engineers aiming to develop scalable, interpretable, and robust AI-driven fault diagnosis solutions for rotating electrical machines. Full article
(This article belongs to the Special Issue Sensors for Fault Diagnosis of Electric Machines)
22 pages, 9020 KiB  
Article
Towards Transparent Urban Perception: A Concept-Driven Framework with Visual Foundation Models
by Yixin Yu, Zepeng Yu, Xuhua Shi, Ran Wan, Bowen Wang and Jiaxin Zhang
ISPRS Int. J. Geo-Inf. 2025, 14(8), 315; https://doi.org/10.3390/ijgi14080315 - 18 Aug 2025
Abstract
Understanding urban visual perception is crucial for modeling how individuals cognitively and emotionally interact with the built environment. However, traditional survey-based approaches are limited in scalability and often fail to generalize across diverse urban contexts. In this study, we introduce the UP-CBM, a transparent framework that leverages visual foundation models (VFMs) and concept-based reasoning to address these challenges. The UP-CBM automatically constructs a task-specific vocabulary of perceptual concepts using GPT-4o and processes urban scene images through a multi-scale visual prompting pipeline. This pipeline generates CLIP-based similarity maps that facilitate the learning of an interpretable bottleneck layer, effectively linking visual features with human perceptual judgments. Our framework not only achieves higher predictive accuracy but also offers enhanced interpretability, enabling transparent reasoning about urban perception. Experiments on two benchmark datasets—Place Pulse 2.0 (achieving improvements of +0.041 in comparison accuracy and +0.029 in R2) and VRVWPR (+0.018 in classification accuracy)—demonstrate the effectiveness and generalizability of our approach. These results underscore the potential of integrating VFMs with structured concept-driven pipelines for more explainable urban visual analytics. Full article
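The CLIP-based concept-similarity step that underpins the UP-CBM can be illustrated with a short, hedged sketch: score an urban scene image against a small concept vocabulary with CLIP, then feed those scores into a single linear (bottleneck) layer whose weights are directly inspectable. This is not the UP-CBM implementation; the concept list, model checkpoint, and placeholder image are assumptions, and the paper's GPT-4o vocabulary construction and multi-scale visual prompting are not reproduced.

```python
# Sketch of the concept-similarity idea behind a concept-bottleneck model
# (not the UP-CBM code): score an urban scene image against a small concept
# vocabulary with CLIP, then feed those scores to a linear, inspectable layer.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

concepts = ["greenery", "graffiti", "wide sidewalk", "heavy traffic"]  # placeholder vocabulary
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))  # placeholder; in practice a street-view photo
inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    sims = model(**inputs).logits_per_image  # shape (1, num_concepts): image-concept similarities

# Interpretable bottleneck: a single linear layer maps concept scores to a
# perception rating, so each concept's learned weight is directly readable.
bottleneck = torch.nn.Linear(len(concepts), 1)
perception_score = bottleneck(sims)
for name, w in zip(concepts, bottleneck.weight.squeeze(0).tolist()):
    print(f"{name}: weight {w:+.3f}")
```
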
34 pages, 4790 KiB  
Article
An Explainable Approach to Parkinson’s Diagnosis Using the Contrastive Explanation Method—CEM
by Ipek Balikci Cicek, Zeynep Kucukakcali, Birgul Deniz and Fatma Ebru Algül
Diagnostics 2025, 15(16), 2069; https://doi.org/10.3390/diagnostics15162069 - 18 Aug 2025
Abstract
Background/Objectives: Parkinson’s disease (PD) is a progressive neurodegenerative disorder that requires early and accurate diagnosis. This study aimed to classify individuals with and without PD using volumetric brain MRI data and to improve model interpretability using explainable artificial intelligence (XAI) techniques. Methods: This retrospective study included 79 participants (39 PD patients, 40 controls) recruited at Inonu University Turgut Ozal Medical Center between 2013 and 2025. A deep neural network (DNN) was developed using a multilayer perceptron architecture with six hidden layers and ReLU activation functions. Seventeen volumetric brain features were used as the input. To ensure robust evaluation and prevent overfitting, a stratified five-fold cross-validation was applied, maintaining class balance in each fold. Model transparency was explored using two complementary XAI techniques: the Contrastive Explanation Method (CEM) and Local Interpretable Model-Agnostic Explanations (LIME). CEM highlights features that support or could alter the current classification, while LIME provides instance-based feature attributions. Results: The DNN model achieved high diagnostic performance with 94.1% accuracy, 98.3% specificity, 90.2% sensitivity, and an AUC of 0.97. The CEM analysis suggested that reduced hippocampal volume was a key contributor to PD classification (–0.156 PP), whereas higher volumes in the brainstem and hippocampus were associated with the control class (+0.035 and +0.150 PP, respectively). The LIME results aligned with these findings, revealing consistent feature importance (mean = 0.1945) and faithfulness (0.0269). Comparative analyses showed different volumetric patterns between groups and confirmed the DNN’s superiority over conventional machine learning models such as SVM, logistic regression, KNN, and AdaBoost. Conclusions: This study demonstrates that a deep learning model, enhanced with CEM and LIME, can provide both high diagnostic accuracy and interpretable insights for PD classification, supporting the integration of explainable AI in clinical neuroimaging. Full article
(This article belongs to the Special Issue Artificial Intelligence in Brain Diseases)
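The training setup named in the abstract (a multilayer perceptron with six ReLU hidden layers evaluated with stratified five-fold cross-validation) maps directly onto standard scikit-learn components. The sketch below is illustrative only: the layer widths are hypothetical, the data are synthetic placeholders for the 17 volumetric features, and the paper's CEM/LIME analysis is not reproduced.

```python
# Minimal sketch of the described training setup (not the study's code):
# a six-hidden-layer ReLU MLP evaluated with stratified 5-fold cross-validation.
# Volumetric features and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(79, 17))            # placeholder: 79 participants x 17 volumetric features
y = np.array([1] * 39 + [0] * 40)        # placeholder labels: 39 PD, 40 controls

# Hypothetical layer widths; the abstract states six hidden layers with ReLU
# activations but not their sizes.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64, 32, 32, 16, 16),
                  activation="relu", max_iter=2000, random_state=42),
)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # preserves class balance per fold
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print("fold accuracies:", np.round(scores, 3), "mean:", scores.mean().round(3))
```
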
22 pages, 2087 KiB  
Article
Explainable AI-Based Feature Selection Approaches for Raman Spectroscopy
by Nicola Rossberg, Rekha Gautam, Katarzyna Komolibus, Barry O’Sullivan and Andrea Visentin
Diagnostics 2025, 15(16), 2063; https://doi.org/10.3390/diagnostics15162063 - 18 Aug 2025
Viewed by 26
Abstract
Background: Raman spectroscopy is a non-invasive technique capable of characterising tissue constituents and detecting conditions such as cancer with high accuracy. Machine learning techniques can automate this task and discover relevant data patterns. However, the high-dimensional, multicollinear nature of Raman data makes their deployment and explainability challenging. A model’s transparency and ability to explain its decision pathways have become crucial for medical integration. Consequently, an effective method of feature reduction that minimises information loss is sought. Methods: Two new feature selection methods for Raman spectroscopy are introduced. These methods are based on explainable deep learning approaches, namely Convolutional Neural Networks and Transformers, with features extracted using GradCAM and attention scores, respectively. The performance of the extracted features is compared to established feature selection approaches across four classifiers and three datasets. Results: The proposed methods were compared against established feature selection approaches over three real-world datasets and different compression levels. Comparable accuracy levels were obtained using only 10% of the features. Model-based approaches are the most accurate. Convolutional Neural Network- and Random Forest-assigned feature importance performs best when retaining between 5% and 20% of the features, while LinearSVC with L1 penalisation leads to higher accuracy when selecting only 1% of them. The proposed Convolutional Neural Network-based GradCAM approach has the highest average accuracy. Conclusions: No approach is found to perform best in all scenarios, suggesting that multiple alternatives should be assessed in each application. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
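Two of the established baselines this abstract compares against are straightforward to sketch with scikit-learn: keeping the features ranked highest by Random Forest importance, and keeping the features with non-zero weights under an L1-penalised LinearSVC. The snippet below illustrates only those baselines on synthetic placeholder spectra; the proposed GradCAM- and attention-based selectors are not reproduced.

```python
# Sketch of two baseline feature selectors mentioned in the abstract
# (not the paper's code): Random Forest importance and L1-penalised LinearSVC.
# Spectra and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1000))   # placeholder: 200 spectra x 1000 wavenumber bins
y = rng.integers(0, 2, size=200)   # placeholder class labels

# Keep the top 10% of features by Random Forest importance.
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
k = X.shape[1] // 10
top_rf = np.argsort(rf.feature_importances_)[::-1][:k]

# Keep features with non-zero weights under an L1-penalised linear SVM.
svc = LinearSVC(penalty="l1", dual=False, C=0.01, max_iter=5000).fit(X, y)
l1_selector = SelectFromModel(svc, prefit=True)
top_l1 = np.flatnonzero(l1_selector.get_support())

print("RF keeps", len(top_rf), "features; L1-SVC keeps", len(top_l1))
# Downstream, a classifier would be retrained on X[:, top_rf] or X[:, top_l1].
```
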
14 pages, 293 KiB  
Article
Refining Filter Global Feature Weighting for Fully Unsupervised Clustering
by Fabian Galis, Darian M. Onchis and Codruta Istin
Appl. Sci. 2025, 15(16), 9072; https://doi.org/10.3390/app15169072 - 18 Aug 2025
Viewed by 68
Abstract
In the context of unsupervised learning, effective clustering plays a vital role in revealing patterns and insights from unlabeled data. However, the success of clustering algorithms often depends on the relevance and contribution of features, which can differ between datasets. This paper explores feature weighting for clustering and presents new weighting strategies, including methods based on SHAP (SHapley Additive exPlanations), a technique commonly used to provide explainability in supervised machine learning tasks. Rather than using SHAP values solely for explainability, we use them to weight features and thereby improve the clustering process itself in unsupervised scenarios. Our empirical evaluations across five benchmark datasets and several clustering methods demonstrate that SHAP-based feature weighting can enhance unsupervised clustering quality, achieving up to a 22.69% improvement over other weighting methods (from 0.586 to 0.719 in terms of the Adjusted Rand Index). Additionally, the situations in which the weighted data improve the results are highlighted and explored in detail, offering insights for practical applications. Full article
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)
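The abstract does not spell out exactly how SHAP values are turned into feature weights, so the sketch below assumes one plausible workflow: cluster, fit a surrogate classifier on the provisional labels, take the mean absolute SHAP value per feature as its weight, rescale the data, and re-cluster. Treat it as a hypothetical illustration of the idea, not the paper's method; the toy data come from make_blobs.

```python
# Hypothetical illustration of SHAP-based feature weighting for clustering.
# Assumed loop (not necessarily the paper's): cluster, fit a surrogate classifier
# on the provisional labels, weight features by mean |SHAP|, rescale, re-cluster.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

X, y_true = make_blobs(n_samples=300, n_features=8, centers=3, random_state=0)
X = StandardScaler().fit_transform(X)

labels0 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Surrogate model on provisional labels, explained with TreeExplainer.
surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels0)
sv = np.asarray(shap.TreeExplainer(surrogate).shap_values(X))
if sv.shape[-1] == X.shape[1]:                 # (classes, samples, features)
    weights = np.abs(sv).mean(axis=(0, 1))
else:                                          # (samples, features, classes)
    weights = np.abs(sv).mean(axis=(0, 2))
weights = weights / weights.sum()              # one weight per feature

labels1 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X * weights)

print("ARI before weighting:", round(adjusted_rand_score(y_true, labels0), 3))
print("ARI after weighting: ", round(adjusted_rand_score(y_true, labels1), 3))
```
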
22 pages, 5941 KiB  
Article
Explainable AI Methods for Identification of Glue Volume Deficiencies in Printed Circuit Boards
by Theodoros Tziolas, Konstantinos Papageorgiou, Theodosios Theodosiou, Dimosthenis Ioannidis, Nikolaos Dimitriou, Gregory Tinker and Elpiniki Papageorgiou
Appl. Sci. 2025, 15(16), 9061; https://doi.org/10.3390/app15169061 - 17 Aug 2025
Viewed by 347
Abstract
In printed circuit board (PCB) assembly, the volume of dispensed glue is closely related to the PCB’s durability, production costs, and overall product reliability. Currently, quality inspection is performed manually by operators and inherits the limitations of human-performed procedures. To address this, we propose an automatic optical inspection framework that utilizes convolutional neural networks (CNNs) and post-hoc explainable methods. Our methodology treats glue quality inspection as a three-stage procedure. First, a detection system based on CenterNet MobileNetV2 is developed to localize PCBs, offering a flexible, lightweight tool for targeting and cropping regions of interest. Next, a CNN is proposed to classify PCB images into three classes based on the dispensed glue volume, achieving 92.2% accuracy. This classification step ensures that varying glue volumes are accurately assessed, addressing potential quality issues early in the production process. Finally, the Deep SHAP and Grad-CAM methods are applied to the CNN classifier to explain its decision making and further increase the interpretability of the proposed approach, targeting human-centered artificial intelligence. These post-hoc explainable methods provide visual explanations of the model’s decision-making process, offering insights into which features and regions contribute to each classification decision. The proposed method is validated with real industrial data, demonstrating its practical applicability and robustness. The evaluation indicates that the proposed framework offers increased accuracy, low latency, and high-quality visual explanations, thereby strengthening quality assurance in PCB manufacturing. Full article
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))
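Of the two post-hoc explanation methods named in the abstract, Grad-CAM is the more compact to sketch: it weights the last convolutional layer's feature maps by the pooled gradients of the predicted class score and keeps the positive part as a heatmap. The function below is a minimal, generic Keras sketch under that definition, not the paper's CenterNet/MobileNetV2 pipeline; the model, layer name, and image in the usage comment are assumptions.

```python
# Minimal Grad-CAM sketch for a Keras CNN classifier (illustrative only).
# It weights the last conv layer's feature maps by the pooled gradients of the
# predicted class score and normalizes the positive part into a [0, 1] heatmap.
import numpy as np
import tensorflow as tf

def grad_cam(model: tf.keras.Model, image: np.ndarray, last_conv_layer: str) -> np.ndarray:
    """Return a [0, 1] heatmap highlighting regions that drove the top prediction."""
    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(last_conv_layer).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, tf.argmax(preds[0])]    # score of the predicted class
    grads = tape.gradient(class_score, conv_out)       # d(score)/d(feature maps)
    pooled = tf.reduce_mean(grads, axis=(0, 1, 2))     # one weight per channel
    heatmap = tf.reduce_sum(conv_out[0] * pooled, axis=-1)   # weighted sum of channels
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()

# Usage (assumes a trained classifier `cnn` with a conv layer named "conv_last"
# and a 3-class softmax head, plus a preprocessed image `img` of matching shape):
# heatmap = grad_cam(cnn, img, last_conv_layer="conv_last")
```
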
18 pages, 958 KiB  
Article
A Feature-Augmented Explainable Artificial Intelligence Model for Diagnosing Alzheimer’s Disease from Multimodal Clinical and Neuroimaging Data
by Fatima Hasan Al-bakri, Wan Mohd Yaakob Wan Bejuri, Mohamed Nasser Al-Andoli, Raja Rina Raja Ikram, Hui Min Khor, Yus Sholva, Umi Kalsom Ariffin, Noorayisahbe Mohd Yaacob, Zuraida Abal Abas, Zaheera Zainal Abidin, Siti Azirah Asmai, Asmala Ahmad, Ahmad Fadzli Nizam Abdul Rahman, Hidayah Rahmalan and Md Fahmi Abd Samad
Diagnostics 2025, 15(16), 2060; https://doi.org/10.3390/diagnostics15162060 - 17 Aug 2025
Viewed by 171
Abstract
Background/Objectives: This study presents a survey-based evaluation of an explainable AI (Feature-Augmented) approach, which was designed to support the diagnosis of Alzheimer’s disease (AD) by integrating clinical data, MMSE scores, and MRI scans. The approach combines rule-based reasoning and example-based visualization to improve the explainability of AI-generated decisions. Methods: Five doctors participated in the survey: two with 6 to 10 years of experience and three with more than 10 years of experience in the medical field and expertise in AD. The participants evaluated different AI outputs, including clinical feature-based interpretations, MRI-based visual heat maps, and a combined interpretation approach. Results: The model achieved a 100% trust score, with 20% of the participants reporting full trust and 80% expressing conditional trust, understanding the diagnosis but seeking further clarification. Overall, the participants reported that the integrated explanation format improved their understanding of the model decisions and enhanced their confidence in using AI-assisted diagnosis. Conclusions: To our knowledge, this paper is the first to gather the views of medical experts to evaluate the explainability of an AI decision-making model when diagnosing AD. These preliminary findings suggest that explainability plays a key role in building trust and ease of use of AI tools in clinical settings, especially when used by experienced clinicians to support complex diagnoses, such as AD. Full article
(This article belongs to the Special Issue Explainable Machine Learning in Clinical Diagnostics)
71 pages, 8414 KiB  
Systematic Review
Towards Maintenance 5.0: Resilience-Based Maintenance in AI-Driven Sustainable and Human-Centric Industrial Systems
by Lech Bukowski and Sylwia Werbinska-Wojciechowska
Sensors 2025, 25(16), 5100; https://doi.org/10.3390/s25165100 - 16 Aug 2025
Viewed by 369
Abstract
Industry 5.0 introduces a new paradigm where digital technologies support sustainable and human-centric industrial development. Within this context, resilience-based maintenance (RBM) emerges as a forward-looking maintenance strategy focused on system adaptability, fault tolerance, and recovery capacity under uncertainty. This article presents a systematic literature review (SLR) on RBM in the context of Maintenance 5.0. The review follows the PRISMA methodology and incorporates bibliometric and content-based analyses of selected publications. Key findings highlight the integration of AI methods, such as machine learning and digital twins, in enhancing system resilience. The results demonstrate how RBM aligns with the pillars of Industry 5.0, sustainability, and human-centricity, by reducing resource consumption and improving human–machine interaction. Research gaps are identified in AI explainability, sector-specific implementation, and ergonomic integration. The article concludes by outlining directions for developing Maintenance 5.0 as a strategic concept for resilient, intelligent, and inclusive industrial systems. Full article
(This article belongs to the Special Issue Human-Centred Smart Manufacturing - Industry 5.0)