Systematic Review

Recent Applications of Explainable AI (XAI): A Systematic Literature Review

1 Faculty of Information Technology, University of Jyväskylä, P.O. Box 35, FI-40014 Jyväskylä, Finland
2 Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 8884; https://doi.org/10.3390/app14198884
Submission received: 4 August 2024 / Revised: 3 September 2024 / Accepted: 25 September 2024 / Published: 2 October 2024
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))

Abstract

This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative techniques were used to analyze the identified articles: qualitatively, by summarizing the characteristics of the included studies based on predefined codes, and quantitatively, through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods. Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI’s expanding impact. The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations. These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems.

1. Introduction

In recent decades, there has been a rapid surge in the development and widespread utilization of artificial intelligence (AI) and Machine Learning (ML). The complexity and scale of these models have expanded in pursuit of improved predictive capabilities. However, there is growing scrutiny directed towards the sole emphasis on model performance. This approach often results in the creation of opaque, large-scale models, making it challenging for users to assess, comprehend, and potentially rectify the system’s decisions. Consequently, there is a pressing need for interpretable and explainable AI (XAI), which aims to enhance the comprehensibility of AI systems and their outputs for humans. The advent of deep learning over the past decade has intensified efforts to devise methodologies for elucidating and interpreting these opaque systems [1,2,3].
The literature on XAI is highly diverse, spanning multiple (sub-)disciplines [4], and has been growing at an exponential rate [5]. While numerous reviews have been published on XAI in general [1,2,5], there is a noticeable gap when it comes to in-depth analyses focused specifically on XAI applications. Existing reviews predominantly explore foundational concepts and theoretical advancements, but only a few concentrate on how XAI is being applied across different domains. Although a few reviews on XAI applications do exist [6,7,8], they have limitations in terms of the coverage period and the number of articles reviewed. For instance, Hu et al. [6] published their review in 2021, thus excluding any articles published thereafter. Additionally, they do not specify the total number of articles reviewed, and their reference list includes only 70 articles. Similarly, Islam et al. [7] and Saranya and Subhashini [8] reviewed 137 and 91 articles, respectively, but also focused on earlier periods, leaving a gap in the literature regarding the latest XAI applications.
In contrast, our review fills this gap by providing a more comprehensive and up-to-date synthesis of XAI applications, analyzing a significantly larger set of 512 recent articles. Each article was thoroughly reviewed and categorized according to predefined codes, enabling a systematic and detailed examination of current trends and developments in XAI applications. This broader scope not only captures the latest advancements but also offers a more thorough and nuanced overview than previous reviews, making it a valuable resource for understanding the current landscape of XAI applications.
Given the rapid advancements and diverse applications of XAI, our research focuses on addressing the following key questions:
  • Domains: What are the most common domains of recent XAI applications, and what are emerging XAI domains?
  • Techniques: Which XAI techniques are utilized? How do these techniques vary based on the type of data used, and in what forms are the explanations presented?
  • Evaluation: How is explainability measured? Are specific metrics or evaluation methods employed?
The remainder of this review is structured as follows: In Section 2, we provide a brief overview of XAI taxonomies. Section 3 details the process used to identify relevant recent XAI application articles, along with our coding and review procedures. Section 4 presents the findings, highlighting the most common and emerging XAI application domains, the techniques employed based on data type, and a summary of how the different XAI explanations were evaluated. Finally, in Section 5, we discuss our findings in the context of our research questions and suggest directions for future research.

2. Background: XAI Taxonomies

The primary focus of this review is on the recent applications of XAI across various domains. However, to fully appreciate how XAI has been implemented in these areas, it is essential to provide a brief overview of the key taxonomies of XAI methods. While an exhaustive discussion of these taxonomies, along with the advantages and disadvantages of each method, lies beyond the scope of this article, a concise summary is necessary to ensure that the content and findings of this review are accessible to a broad audience. For those seeking a more comprehensive exploration of XAI taxonomies and detailed discussions on the pros and cons of various XAI methods, we recommend consulting recent reviews [5,9,10,11] and comprehensive books on the subject [12,13].
Generally, XAI methods can be categorized based on their explanation mechanisms, which may rely on examples [14,15,16], counterfactuals [17], hidden semantics [18], rules [19,20,21], or features/attributions/saliency [22,23,24,25]. Among these, feature importances are the most common explanation for classification models [26]. Feature importances leverage scoring and ranking of features to quantify and enhance the interpretability of a model, thereby explaining its behavior [27]. In cases where the model is trained on images, leading to features representing super pixels, methods such as saliency maps or pixel attribution are employed. Evaluating the saliency of features aids in ranking their explanatory power, applicable for both feature selection and post-hoc explainability [5,28,29].
Other approaches to categorizing XAI methods relate to the techniques applied, such as (i) ante-hoc versus post-hoc, (ii) global versus local, and (iii) model-specific versus model-agnostic (see Figure 1). Ante-hoc/intrinsic XAI methods encompass techniques that are inherently transparent, often due to their simple structures, such as linear regression models. Conversely, post-hoc methods elucidate a model’s reasoning retrospectively, following its training phase [5,26,30]. Moreover, distinctions are made between local and global explanations: while global explanations provide an overarching interpretation of the entire model (either holistically or in a modular fashion), local explanations elucidate specific observations, such as individual images [31,32]. Furthermore, explanation techniques may be categorized as model-specific, relying on aspects of the particular model, or model-agnostic, applicable across diverse models [5,33]. Model-agnostic techniques can be further divided into perturbation-based (or occlusion-based) versus gradient-based: occlusion- or perturbation-based methods manipulate sections of the input features or images to generate explanations, while gradient-based methods compute the gradient of the prediction (or classification score) with respect to the input features [34].
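To make these distinctions concrete, the following minimal Python sketch (our own illustration, not drawn from any of the reviewed studies) implements a post-hoc, local, model-agnostic, perturbation-based attribution: each feature of a single instance is occluded by replacing it with its dataset mean, and the resulting drop in the predicted class probability serves as that feature's importance score.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train an opaque model: the "black box" that a post-hoc method has to explain.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def occlusion_attribution(model, x, background):
    """Local, model-agnostic, perturbation-based attribution for one instance x:
    replace each feature with its background (mean) value and record the drop in the
    predicted probability of the originally predicted class."""
    base_prob = model.predict_proba(x.reshape(1, -1))[0]
    pred_class = int(np.argmax(base_prob))
    attributions = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_perturbed = x.copy()
        x_perturbed[j] = background[j]  # occlude feature j
        perturbed_prob = model.predict_proba(x_perturbed.reshape(1, -1))[0][pred_class]
        attributions[j] = base_prob[pred_class] - perturbed_prob
    return attributions

scores = occlusion_attribution(model, X[0], X.mean(axis=0))
top = np.argsort(-np.abs(scores))[:5]
print("Most influential features for instance 0:", top, scores[top])
```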
As with machine learning models themselves, there is no universally best XAI approach; the optimal technique depends on factors such as the nature of the data, the specific application, and the characteristics of the underlying AI model. For instance, local explanations are particularly useful when seeking insights into specific instances, such as identifying the reasons behind false positives in a model’s predictions [35]. In cases where the AI model is inherently complex, post-hoc techniques may be necessary to provide explanations, with some methods, like those relying on gradients, being applicable only to specific models, such as neural networks with differentiable layers [34,36]. While a variety of XAI methods are available, evaluating their effectiveness remains a less-explored area [4,11]. As illustrated in Figure 1, XAI evaluation approaches can be categorized into consultations with human experts, anecdotal evidence, and quantitative metrics.
As explained above, our review extends existing work on XAI methods and taxonomies [5,9,10,11] by shifting the focus towards the practical applications of XAI across various domains. In the next section, we will describe how we used the categorizations in Figure 1 to classify the recent XAI application papers in our review.

3. Research Methodology

Based on the research questions posed in Section 1 and the taxonomies of XAI described in Section 2, we initiated our systematic review of recent applications of XAI. To collect the relevant publications for this review, we followed the analytical protocol of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [37]. A systematic review “is a review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyze data from the studies that are included in the review” [12]. Following the PRISMA guidelines, our review consisted of several stages: defining the eligibility criteria, information sources, search strategy, selection process, data collection process, and data items; assessing study risk of bias; specifying effect measures; describing the synthesis methods; and assessing reporting bias and certainty [37].
Information sources and search strategy: The search was conducted in February 2024 on Web of Science (WoS) by using the following Boolean search string on the paper topic (note that searches for topic terms in WoS search the following fields within a record: Title, Abstract, Author Keywords, Keywords Plus): TS = ((“explainable artificial intelligence” OR XAI) AND (application* OR process*)). The asterisk (*) at the end of a keyword ensures the inclusion of the term in both singular and plural forms and its derivatives. The search was limited to English-language non-review articles published between 1 January 2021 and 20 February 2024 (the search results can be found here: https://www.webofscience.com/wos/woscc/summary/495b659d-8f9e-4b77-8671-2fac26682231-cda1ce8b/relevance/1, accessed on 24 September 2024). We exclusively used WoS due to its authoritative status and comprehensive coverage. Birkle et al. (2020) [38] highlight WoS as the world’s oldest and most widely used research database, ensuring reliable and high-quality data. Its extensive discipline coverage and advanced citation indexing make it ideal for identifying influential works and mapping research trends [38].
Eligibility criteria and selection process: The literature selection process flow chart is summarized in Figure 2. The database search produced 664 papers. After removing non-English articles (n = 4), 660 were eligible for the full-text review and screening. During the full-text screening, we applied the inclusion and exclusion criteria (Table 1), established through iterative discussions between the two authors. The reviewers assessed each article against the inclusion and exclusion criteria, with 512 research articles meeting the inclusion criteria and being incorporated into the evaluation procedure.
As reported in Figure 2, five articles were not retrievable from our universities’ networks, and 143 were excluded because they did not meet our inclusion criteria (primarily because they introduced general XAI taxonomies or new methods without describing specific XAI applications). Consequently, 512 articles remained for data extraction and synthesis. For reasons of reproducibility, the entire list of included articles is attached in Table A1, along with the XAI application and the reason(s) why the authors say that explainability is essential in their domain.
Data collection process, data items, study risk of bias assessment, effect measures, synthesis methods, and reporting bias assessment: To categorize and summarize the included articles in this review, the first author developed a Google Survey that was filled out for each selected article. The survey included both categorical (multiple-choice) and open-ended questions designed to systematically categorize the key aspects of the research. This approach ensured a consistent and comprehensive analysis across all articles. The survey responses were exported to an Excel file, simplifying the analysis process.
Each reviewer assessed their allocated articles using the predefined codes and survey questions created by the first author. In cases of uncertainty regarding the classification of an article, reviewers noted the ambiguity, and these articles, along with their tentative classifications, were discussed collectively among both authors to reach a consensus. This discussion was conducted in an unbiased manner to ensure accurate classifications. While no automated tools were used for the review process, Python libraries were employed for quantitative assessment.
Some of the developed codes (survey questions) were as follows:
  • What was the main application domain, and what was the specific application?
  • In what form (such as rules, feature importance, counterfactual) was the explanation created?
  • Did the authors use intrinsically explainable models or post-hoc explainability, and did they focus on global or local explanations?
  • How was the quality of the explanation(s) evaluated?
  • What did the authors say about why the explainability of their specific application is important? (Open-ended question.)
After completing the coding process and filling out the survey for each included article, we synthesized the data using both qualitative and quantitative techniques to address our research questions [39]. Qualitatively, we summarized the characteristics of the included studies based on the predefined codes. Quantitatively, we performed statistical analysis of the data, utilizing Python 3.11.5 to extract statistics from the annotated Excel table. This combination of qualitative and quantitative approaches, along with collaborative efforts, ensured the reliability and accuracy of our review process.
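As an illustration of this quantitative step, the following sketch shows the kind of summary statistics that can be computed with Python and pandas from such an annotated spreadsheet; the file name and column labels below are hypothetical placeholders rather than the labels actually used in our survey.

```python
import pandas as pd

# Hypothetical file and column names; the actual annotated spreadsheet uses the survey's own labels.
df = pd.read_excel("xai_review_annotations.xlsx")

# Frequency of application domains and evaluation approaches across the included articles.
domain_counts = df["application_domain"].value_counts()
eval_shares = df["evaluation_method"].value_counts(normalize=True).mul(100).round(1)

# Cross-tabulate XAI technique against data type to address the "Techniques" research question.
technique_by_data = pd.crosstab(df["xai_technique"], df["data_type"])

print(domain_counts.head(10))
print(eval_shares)        # e.g., share of articles using metrics vs. anecdotal evidence
print(technique_by_data)
```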
To assess the risk of reporting bias, we examined the completeness and transparency of the data reported in each article, focusing on the availability of results related to our predefined research questions. Articles that lacked essential data or failed to report key outcomes were flagged for potential bias, and this was considered during the certainty assessment.
Certainty assessment: Regarding the quality of the articles, potential bias, and the certainty of their evidence, we followed the general recommendations [40] and included only articles for which at least seven out of the ten quality questions proposed by Kitchenham and Charters (2007) [39] could be answered affirmatively. Additionally, we ensured quality by selecting only articles published in prestigious journals that adhere to established academic standards, such as being peer-reviewed and having an international editorial board [35].
Table 2 reports the number of publications per journal for the ten journals with the highest publication counts in our sample. As shown in the table, IEEE Access has the highest number of publications, totaling 45, which represents 8.79% of our sample of articles on recent XAI applications. It is followed by this journal (Applied Sciences-Basel) with 37 publications (7.23%) and Sensors with 28 publications (5.47%).

4. Results

In this section, we present the results of the 512 recent XAI application articles that met our inclusion and quality criteria. As detailed in Section 3, we included only those articles that satisfied our rigorous standards and were not flagged for bias. Once the articles passed our inclusion criteria and were coded and analyzed, we did not conduct further assessments of potential bias within the study results themselves. Our analysis relied on quantitative summary statistics and qualitative summaries derived from these high-quality articles. The complete list of these articles is provided in Table A1, along with their specific XAI applications and the authors’ justifications for the importance of explainability in their respective domains. Next, we provide an overview of recent XAI applications by summarizing the findings from these 512 included articles.

4.1. Application Domains

As shown in Figure 3, the absolute majority of recent XAI applications are from the health domain. For instance, several works have focused on different kinds of cancer prediction and diagnosis, such as skin cancer detection and classification [32,41,42], breast cancer prediction [43,44,45], prostate cancer management and prediction [46,47], lung cancer (relapse) prediction [48,49], and ovarian cancer classification and surgery decision-making [50,51]. In response to the COVID-19 pandemic, significant research has been directed toward using medical imaging for detecting COVID-19 [52], predicting the need for ICU admission for COVID-19 patients [53], diagnosing COVID-19 using chest X-ray images [54], predicting COVID-19 [55,56,57,58,59,60], COVID-19 data classification [61], assessment of perceived stress in healthcare professionals attending COVID-19 [62], and COVID-19 forecasting [58].
Medical imaging and diagnostic applications are also prominent, including detecting paratuberculosis from histopathological images [63], predicting coronary artery disease from myocardial perfusion images [64], diagnosis and surgery [65], identifying reasons for MRI scans in multiple sclerosis patients [66], detecting the health status of neonates [67], spinal postures [68], and chronic wound classification [69]. Additionally, studies have focused on age-related macular degeneration detection [70], predicting immunological age [71], cognitive health assessment [72,73], cardiovascular medicine [74,75], glaucoma prediction and diagnosis [76,77,78], as well as predicting diabetes [79,80,81,82] and classifying arrhythmia [83,84].
General management applications in healthcare include predicting patient outcomes in ICU [60], functional work ability prediction [85], a decision support system for nutrition-related geriatric syndromes [86], predicting hospital admissions for cancer patients [87], medical data management [88], medical text processing [89], ML model development in medicine [90], pain recognition [91], drug response prediction [92,93], face mask detection [94], and studying the sustainability of smart technology applications in healthcare [95]. Lastly, studies about tracing food behaviors [96], aspiration detection in flexible endoscopic evaluation of swallowing [97], human activity recognition [98], human lower limb activity recognition [99], factors influencing hearing aid use [100], predicting chronic obstructive pulmonary disease [101], and assessing developmental status in children [102] underline the diverse use of XAI in the health domain.
It is also noteworthy that brain and neuroscience studies have frequently been the main application (Figure 3), often related to health. For example, Alzheimer’s disease classification and prediction have been major areas of focus [103,104,105,106,107,108,109], and Parkinson’s disease diagnosis has been extensively studied [110,111,112,113]. There is also significant research on brain tumor diagnosis and localization [114,115,116,117,118], predicting brain hemorrhage [119], cognitive neuroscience development [120], and detecting and explaining autism spectrum disorder [121]. Other notable brain studies include the detection of epileptic seizures [122,123], predicting the risk of brain metastases in patients with lung cancer [124], and automating skull stripping from brain magnetic resonance images [125]. Similarly, three pharmacy studies are related to health, including metabolic stability and CYP inhibition prediction [126] and drug repurposing [127,128].
In the field of environmental and agricultural applications, various studies have utilized XAI techniques for a wide range of purposes. For instance, earthquake-related studies have focused on predicting an earthquake [129] and assessing the spatial probability of earthquake impacts [130]. In the area of water resources and climate analysis, research has been conducted on groundwater quality monitoring [131], predicting ocean circulation regimes [132], water resources management through snowmelt-driven streamflow prediction [133], and analyzing the impact of land cover changes on climate [134]. Additionally, studies have addressed predicting spatiotemporal distributions of lake surface temperature in the Great Lakes [135] and soil moisture prediction [136]. Environmental monitoring and resource management applications also include predicting heavy metals in groundwater [137], detection and quantification of isotopes using gamma-ray spectroscopy [138], and recognizing bark beetle-infested forest areas [139]. Agricultural applications have similarly leveraged XAI techniques for plant breeding [140], disease detection in agriculture [141], diagnosis of plant stress [142], prediction of nitrogen requirements in rice [143], grape leaf disease identification [144], and plant genomics [145].
Urban and industrial applications are also prominent, with studies on urban growth modeling and prediction [146], building energy performance benchmarking [147], and optimization of membraneless microfluidic fuel cells for energy production [148]. Furthermore, predicting product gas composition and total gas yield [149], wastewater treatment [150], and the prediction of undesirable events in oil wells [151] have been significant areas of research. Lastly, environmental studies have also focused on predicting drought conditions in the Canadian prairies [152].
In the manufacturing sector, XAI techniques have been employed for a variety of predictive and diagnostic tasks. For instance, research has focused on prognostic lifetime estimation of turbofan engines [153], fault prediction in 3D printers [154], and modeling hydrocyclone performance [155]. Moreover, the prediction and monitoring of various manufacturing processes have seen substantial research efforts. These include predictive process monitoring [156,157], average surface roughness prediction in smart grinding processes [158], and predictive maintenance in manufacturing systems [159]. Additionally, modeling refrigeration system performance [160] and thermal management in manufacturing processes [161] have been explored. Concrete-related studies include predicting the strength characteristics of concrete [162] and the identification of concrete cracks [163]. In the realm of industrial optimization and fault diagnosis, research has addressed the intelligent system fault diagnosis of the robotic strain wave gear reducer [164] and the optimization of injection molding processes [165]. The prediction of pentane content [166] and the hot rolling process in the steel industry [167] have also been areas of focus. Studies have further examined job cycle time [168] and yield prediction [169].
In the realm of security and defense, XAI techniques have been widely applied to enhance cybersecurity measures. Several studies have focused on intrusion detection systems [170,171,172], as well as trust management within these systems [173]. Research has also explored detecting vulnerabilities in source code [174]. Cybersecurity applications include general cybersecurity measures [175], the use of XAI methods in cybersecurity [176], and specific studies on malware detection [177]. In the context of facial and voice recognition and verification, XAI techniques have been employed for face verification [178] and deepfake voice detection [179]. Additionally, research has addressed attacking ML classifiers in EEG signal-based human emotion assessment systems using data poisoning attacks [180]. Emerging security concerns in smart cities have led to studies on attack detection in IoT infrastructures [181]. Furthermore, aircraft detection from synthetic aperture radar (SAR) imagery has been a significant area of research [182]. Social media monitoring for xenophobic content detection [183] and the broader applications of intrusion detection and cybersecurity [184] highlight the diverse use of XAI in this domain.
In the finance sector, XAI techniques have been employed to enhance various decision-making processes. Research has focused on decision-making in banking and finance sector applications [185], asset pricing [186], and predicting credit card fraud [187]. Studies have also aimed at predicting decisions to approve or reject loans [188] and addressing a range of credit-related problems, including fraud detection, risk assessment, investment decisions, algorithmic trading, and other financial decision-making processes [189]. Credit risk assessment has been a significant area of research, with studies on credit risk assessment [190], predicting loan defaults [191], and credit risk estimation [192,193]. The prediction and recognition of financial crisis roots have been explored [194], alongside risk management in insurance savings products [195]. Furthermore, time series forecasting and anomaly detection have been important areas of study [196].
XAI has also been used for transportation and self-driving car applications, such as the safety of self-driving cars [197], marine autonomous surface vehicle engineering [198], autonomous vehicles for object detection and networking [199,200], and the development of advanced driver-assistance systems [201]. Similarly, XAI offered support in retail and sales, such as inventory management [202], on-shelf availability monitoring [203], predicting online purchases based on information about online behavior [204], customer journey mapping automation [205], and churn prediction [206,207].
In the field of education, XAI has been applied to various areas such as the early prediction of student performance [208], predicting dropout rates in engineering faculties [209], forecasting alumni income [210], and analyzing student agency [211]. In psychology, XAI was used for classifying psychological traits from digital footprints [212]; in social care, for child welfare screening [213]; and in the laws, for detecting reasons behind a judge’s decision-making process [214], predicting withdrawal from the legal process in cases of violence towards women in intimate relationships [215], and inter partes institution outcomes predictions [216]. In natural language processing, XAI was used for explaining sentence embedding [217], question classification [218], questions answering [219], sarcasm detection in dialogues [220], identifying emotions from speech [221], assessment of familiarity ratings for domain concepts [222], and detecting AI-generated text [223].
In entertainment, XAI was used, for example, for movie recommendations [224], explaining art [225], and different gaming applications, including analyzing and optimizing the performance of agents in a game [226], deep Q-learning experience replay [227], and cheating detection and player churn prediction [228]. Furthermore, several studies concentrated on detecting deceptive online content in (social) media, such as fake news and deepfake images [229,230,231,232,233,234]. In summary, the recent applications of XAI span a diverse array of domains, reflecting its evolving scope; Figure 4 illustrates eight notable application areas.

4.2. XAI Methods

As shown in Figure 5, the majority of recent XAI papers used local explanations (53%), or a combination of global and local explanations (29%).
SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are the most commonly used local XAI methods (Figure 6). While LIME is fully model-agnostic, meaning it is independent of the prediction model and can be used on top of any linear or non-linear model, the SHAP toolbox includes both model-agnostic XAI tools (such as the SHAP Kernel Explainer) and model-specific XAI tools (such as the TreeExplainer, which has been optimized for tree-based models [239]). However, LIME has faced criticism for its instability, meaning the same inputs do not always result in the same outputs [32], and its local approximation lacks a stable connection to the global level of the model. In contrast, SHAP offers four desirable properties: efficiency, symmetry, dummy, and additivity [240], providing mathematical guarantees that address the local-to-global limitation. These guarantees may explain SHAP’s higher popularity in recent XAI application papers (Figure 6). Another local model-agnostic method is Anchors, which belongs to the same group of methods as SHAP and LIME but is used far less frequently in recent XAI application papers (e.g., [167,188,190,229,241]).
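For readers unfamiliar with these toolboxes, the following sketch illustrates how SHAP and LIME are typically invoked on tabular data; it assumes the Python shap and lime packages and a scikit-learn classifier on a public dataset, and is provided purely as an illustration rather than a reproduction of any reviewed study.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP: TreeExplainer is model-specific (optimized for tree ensembles);
# shap.KernelExplainer would be the fully model-agnostic alternative.
tree_explainer = shap.TreeExplainer(model)
shap_values = tree_explainer.shap_values(X[:50])   # local attributions, one row per instance
print("SHAP values for instance 0:", np.round(shap_values[0], 3))

# LIME: fully model-agnostic; fits a sparse linear surrogate around one instance.
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print("LIME top features for instance 0:", lime_exp.as_list())
```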
While perturbation-based techniques, such as LIME (e.g., [65,175,187,242,243]) and SHAP (e.g., [65,175,186,244,245]), are often the choices in recent XAI studies for tabular data, studies involving images or other more complex data frequently use gradient-based techniques such as Grad-CAM (e.g., [89,94,164,178,243]), Grad-CAM++ (e.g., [41,94,246,247,248]), SmoothGrad (e.g., [246,249,250,251,252]), Integrated Gradients (e.g., [50,179,182,253,254]), or Layer-Wise Relevance Propagation (LRP), such as those in [175,179,241,255,256]. Figure 4 shows eight examples of saliency maps from image data of diverse recent XAI applications from various domains.
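The gradient-based idea behind such saliency maps can be summarized in a few lines: Grad-CAM pools the gradients of the target class score over the last convolutional feature maps into channel weights and combines the weighted maps into a coarse saliency map. The following PyTorch sketch (our own illustration, assuming a torchvision ResNet-18) shows the core computation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Untrained weights keep the sketch self-contained; use pretrained weights for real images.
model = resnet18(weights=None).eval()
activations, gradients = {}, {}

# Hook the last convolutional block to capture its feature maps and their gradients.
def fwd_hook(module, inp, out):
    activations["value"] = out.detach()
def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4[-1]
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, target_class=None):
    """image: tensor of shape (1, 3, H, W), normalized as the model expects."""
    scores = model(image)
    if target_class is None:
        target_class = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, target_class].backward()
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()                    # saliency map in [0, 1]

saliency = grad_cam(torch.randn(1, 3, 224, 224))                   # dummy input for illustration
print(saliency.shape)
```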
The most commonly used global model-agnostic techniques are Partial Dependence Plots (PDP), such as those in [65,74,102,257,258], Accumulated Local Effects (ALE), as seen in [136,157,258,259,260], and Permutation Importance (e.g., [74,136,137,156,180]). Conversely, the most commonly used global intrinsically explainable methods are decision trees (e.g., [88,91,183,191,261]) and logistic regression (e.g., [50,53,61,191,211]). It should be noted that the latter two are used in countless other papers, but, given their inherent interpretability, they are often not explicitly listed as XAI methods [31].
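The following scikit-learn sketch (illustrative only, using a public dataset rather than data from any reviewed study) shows how the two most common global model-agnostic outputs, permutation importance and partial dependence plots, are typically obtained.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: global, model-agnostic; measures the drop in test score
# when each feature is shuffled, breaking its relationship with the target.
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in np.argsort(-perm.importances_mean)[:5]:
    print(f"{X.columns[idx]}: {perm.importances_mean[idx]:.3f} +/- {perm.importances_std[idx]:.3f}")

# Partial dependence: global, model-agnostic; shows the average predicted outcome
# as a function of selected features, marginalizing over the remaining ones.
PartialDependenceDisplay.from_estimator(model, X_test, features=["MedInc", "AveOccup"])
plt.show()
```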

4.3. ML Models and Tasks

Figure 7 presents the most frequently used ML models in recent XAI papers (note that more than one ML model can be used in the same paper). Various neural network models (predominantly deep NNs) are the most commonly used (appearing in 59% of papers), followed by tree-based models (e.g., decision tree, random forest, gradient boosting; 37% of papers), support vector machines (11%), linear or logistic regression (9%), k-nearest neighbors (4%), Bayesian-based models (3%), and Gaussian models (e.g., Gaussian process regression and Gaussian mixture models; 2% of papers). The distribution of the ML models used in the reviewed articles is comparable to that of ML applications in general.
Besides the most common ML models, there are some others that are less used and could therefore provide interesting alternative views on XAI. These include methods based on fuzzy logic (e.g., fuzzy rule-based classification [262], rule-based fuzzy inference [226,263], fuzzy decision tree [264], fuzzy nonlinear programming [95]), graph-based models (e.g., graph-deep NN [265,266], knowledge graph [267]), or some sort of optimization with computational intelligence (e.g., particle swarm optimization [148,160], clairvoyance optimization [268]).
The ML models have been used mainly for classification purposes (70%), followed by regression (21%), clustering (4%) and reinforcement learning (1%), as can be seen in Figure 8. Other tasks, which occurred in only one or at most two articles, include segmentation [97,269], optimization [270,271], semi-supervised [272,273] or self-supervised tasks [274], object detection [275], and novelty search [276].
There is no substantial difference between the major ML models with regard to the ML task of their target application. The distributions of ML tasks for specific ML models (NN, DT, LR, kNN, etc.) are all very similar to the overall distribution represented in Figure 8. Among the major ML models, SVM stands out the most: it is used for classification somewhat more often than the others (in 80% of cases).
With regard to the application domain, health, environment, industry, and security and defense are among the top five domains for all the major ML models, with the only exception being linear or logistic regression. When linear or logistic regression was used as the ML model, finance is among the top three application domains, which is never the case for the other major ML models. Since finance is also the second most common domain for tree-based ML models, which, like linear and logistic regression, can be characterized as the most transparent and inherently interpretable models, this suggests that users in the financial domain are especially keen to obtain insights and explanations of how ML models operate on their data.

4.4. Intrinsically Explainable Models

As shown in Figure 9, the majority of recent XAI papers used post-hoc explainability approaches on ML models that are not naturally easy to interpret (79%), as opposed to intrinsically explainable models (12%); the remaining papers (9%) reported a combination of both. Figure 10 presents the distribution of intrinsically explainable ML models. Of all the reviewed XAI papers that reported their method as intrinsically explainable, the majority used tree-based models (41%), followed by deep NNs (19%), linear or logistic regression (5%), and some Bayesian models (3%). The predominance of tree-based models was expected, as was the relatively high share of linear and logistic regression models, since both are considered naturally transparent and simple to understand given their inherent interpretability. On the other hand, the relatively high number of deep neural networks represented as intrinsically explainable is somewhat surprising.
There are significant differences between different ML models represented as intrinsically explainable with regard to the form of explanation they use. While the intrinsically explainable tree-based ML models use a variety of forms of explanation, including feature importance (in 50% of all cases), rules (38%), and visualization (31%), the deep NN models being reported as intrinsically explainable rely mainly on visualization (in more than 67% of all cases). The intrinsically explainable linear and logistic regression ML models, however, use predominantly feature importance as their form of explanation (in 75% of all cases).
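As a simple illustration of why tree-based and linear models are considered intrinsically explainable, the following scikit-learn sketch (not taken from any reviewed paper) extracts both if-then rules and feature importance scores directly from a fitted decision tree, without any post-hoc explainer.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Rules: the fitted tree can be printed directly as human-readable if-then splits.
print(export_text(tree, feature_names=list(data.feature_names)))

# Feature importance: impurity-based scores come with the model; no post-hoc explainer is needed.
for name, score in zip(data.feature_names, tree.feature_importances_):
    print(f"{name}: {score:.2f}")
```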
In the most frequent XAI application domain, namely health, the use of tree-based ML models is predominant: tree-based models are used in 28% of all health applications, followed by (deep) neural networks (22%) and, interestingly, fuzzy logic (11%), while all other models were used only once in health. Given the history of ML development in medicine and healthcare, where the ability to validate predictions is as important as the predictions themselves, and the consequent key role of decision trees [277], this result is not surprising.
With regard to other application domains, we can see that intrinsically explainable ML models, such as tree-based models and linear or logistic regression models, are used for finance and education applications much more often than other ML models. While the financial domain represents only 1% of (deep) neural network applications, it represents 6% of all tree-based ML model applications (used for credit risk estimation [192,193], risk management in insurance [195], financial crisis prediction [194], investment decisions and algorithmic trading [189], and asset pricing [186]) and even 9% of linear or logistic regression applications (used primarily for credit risk assessment [190] and prediction [193], as well as financial decision-making processes [189]). While post-hoc explainability methods, primarily SHAP and LIME, are the most favored in the financial sector [189], intrinsically explainable models are gaining popularity for revealing insights and are being used for stock market analysis [278] and forecasting [279], profit optimization, and predicting loan defaults [191]. Education represents 2% of all applications of tree-based ML models (including early prediction of student performance [208], predicting student dropout [209], and advanced learning analytics [211]) and 4% of linear or logistic regression models (such as pedagogical decision-making [211] and prediction of post-graduate success and alumni income [210]), while (deep) neural networks are used, for comparison with other methods, in only two of the reviewed XAI papers concerning education [209,211].

4.5. Evaluating XAI

The use of well-defined and validated metrics for evaluating the quality of XAI results (i.e., explanations) is of great importance for widespread adoption and further development of XAI. However, a significant number of authors still use XAI methods as a sort of add-on to their ML models and results without properly addressing the quality aspects of provided explanations, and only a few articles in our corpus use metrics to quantitatively measure the quality of their XAI results (Figure 11). More than 58% of the reviewed articles applied XAI but did not provide any evaluation of their XAI results (e.g., [65,121,170,175,186]). Among those that evaluated their XAI results, most relied on anecdotal evidence (20% of the reviewed articles, e.g., [185,245,249,272,280]). In approximately 8% of papers, the authors evaluated their XAI results by asking domain experts to evaluate the explanations (e.g., [66,70,89,114,167]). In approximately 19% of papers, however, some sort of quantitative metrics are used to provide the quality assessment (e.g., [94,179,187,242,244,281]).
These numbers are in line with a recent review article about XAI evaluation methods that also highlighted the lack of reported metrics for measuring explanation quality: according to Nauta et al. [4], one in three studies that develop XAI algorithms evaluates explanations only with anecdotal evidence, and only one in five studies evaluates explanations with users. Also, Leite et al. state that “evaluation measures for the interpretability of a computational model is an open issue” [282]. To address this issue, they introduced an interpretability index to quantify how interpretable a granular rule-based model is during online operation. In fact, the gap of “no agreed approach on evaluating produced explanations” [283] is often mentioned as future work. Having such metrics would address several XAI issues, such as decreasing the risk of confirmation bias [283,284].
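As one example of what such a quantitative metric can look like, the following sketch (our own illustration, assuming a scikit-learn-style classifier and a per-feature attribution vector such as SHAP values) implements a simple deletion-based faithfulness check: features are occluded in decreasing order of attributed importance, and a faithful explanation should make the predicted probability degrade quickly.

```python
import numpy as np

def deletion_curve(model, x, attributions, background, target_class):
    """Faithfulness check for one instance: occlude features from most to least
    important (according to the explanation) and record the predicted probability
    of the target class after each deletion. Faster decay indicates a more faithful explanation."""
    order = np.argsort(-np.abs(attributions))
    x_work = x.copy()
    probs = [model.predict_proba(x_work.reshape(1, -1))[0][target_class]]
    for j in order:
        x_work[j] = background[j]  # remove feature j's information
        probs.append(model.predict_proba(x_work.reshape(1, -1))[0][target_class])
    return np.array(probs)

# Hypothetical usage (assumes a fitted `model`, data matrix `X`, and attribution vector `attr` for X[0]):
# curve = deletion_curve(model, X[0], attr, X.mean(axis=0), target_class=1)
# score = curve.mean()   # lower average probability along the deletion curve = more faithful explanation
```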
For this purpose, we further analyzed the articles that used metrics to measure explanation quality, primarily to see what the authors reported about the explainability of their results. Since different ML tasks and/or ML models may focus on different aspects, we divided the analysis according to the main task of the ML model.
In the case when metrics have been used for evaluating the quality of clustering, segmentation, and other unsupervised ML methods’ explanations, the findings highlight that the evaluated XAI approaches provided accurate, transparent, and robust explanations, aiding in the interpretation of the ML models and results (e.g., [285,286]). Human and quantitative evaluations confirmed the methods’ superiority in generating reliable, interpretable, and meaningful explanations [287], despite occasional contradictory insights that proved useful for identifying anomalies [269].
For the reinforcement learning applications, the findings of papers assessing their XAI results by evaluation metrics demonstrate that the proposed methods effectively explained complex models and highlighted the potential of Shapley values for explainable reinforcement learning [288]. Additionally, participants using the AAR/AI approach identified more bugs with greater precision [289], and while explanations improved factory layout efficiency, their interpretability remains an area for improvement [290].
The findings of the papers using regression as their main task, which used some metric to evaluate the explanations, underscore the critical role of explainability techniques like Shapley and Grad-CAM in enhancing model interpretability and accuracy (e.g., [157,291]) across various domains, from wind turbine anomaly detection [244] to credit card fraud prediction [187]. While global scores aid in feature selection, semi-local analyses offer more meaningful insights [292]. XAI methods revealed system-level insights and emergent properties [293], though challenges like inconsistency, instability, and complexity persist [157,294]. User studies and model retraining confirmed the practical benefits of improved explanations [213,295]. However, the authors mentioned that the explainability of their results was limited by the lack of suitable metrics for evaluating the explainability of algorithms [294].
Finally, for the most frequent ML task of classification, the analysis of the papers that used some metrics to evaluate their explainability results emphasizes the importance of explainability in enhancing model transparency, robustness, and decision-making accuracy across various applications, from object detection from SAR images [182] and hate speech detection [296] to classification of skin cancer [32] and cyber threats [297]. Techniques like SHAP, LIME, and Grad-CAM provided insights into feature importance and model behavior (e.g., [124,298,299]). In some situations, the adopted XAI methods showed improved performance and more meaningful explanations, aiding in tasks like malware detection [177], diabetes prediction [82], extracting concepts [298], and remote sensing [300]. Evaluations confirmed that aligning explanations with human expectations and ensuring local and global consistency are key to improving the effectiveness and trustworthiness of AI systems [235]. The authors concluded that while explanation techniques show promise, there is still a long way to go before automatic systems can be reliably used in practice [32], and widely adopted XAI metrics could contribute substantially to closing this gap.
In summary, the results reveal distinct preferences and practices in using XAI. Tree-based models, commonly used in health applications, employ various explanation forms like feature importance, rules, and visualization, while deep neural networks primarily utilize visualization. Linear and logistic regression models favor feature importance. In finance and education, tree-based and regression models are more prevalent than deep neural networks. However, despite the widespread application of XAI methods, evaluation practices remain underdeveloped. Over half of the studies did not assess the quality of their explanations, with only a minority using quantitative metrics. There is a need for standardized evaluation metrics to improve the reliability and effectiveness of XAI systems.

5. Discussion and Conclusions

This systematic literature review explored recent applications of Explainable AI (XAI) over the last three years, identifying 664 relevant articles from the Web of Science (WoS). After applying exclusion criteria, 512 articles were categorized based on their application domains, utilized techniques, and evaluation methods. The findings indicate a dominant trend in health-related applications, particularly in cancer prediction and diagnosis, COVID-19 management, and various other medical imaging and diagnostic uses. Other significant domains include environmental and agricultural applications, urban and industrial optimization, manufacturing, security and defense, finance, transportation, education, psychology, social care, law, natural language processing, and entertainment.
In health, XAI has been extensively applied to areas such as cancer detection, brain and neuroscience studies, and general healthcare management. Environmental applications span earthquake prediction, water resources management, and climate analysis. Urban and industrial applications focus on energy performance, waste treatment, and manufacturing processes. In security, XAI techniques enhance cybersecurity and intrusion detection. Financial applications improve decision-making processes in banking and asset management. Transportation studies leverage XAI for autonomous vehicles and marine navigation. The review also highlights emerging XAI applications in education for predicting student performance and in social care for child welfare screening.
In categorizing recent XAI applications, we aimed to identify and highlight the most significant overarching themes within the literature. While some categories, such as “health”, are clearly defined and widely recognized within the research community, others, like “industry” and “technology”, are broader and less distinct. The latter categories encompass a diverse range of applications, reflecting the varied contexts in which XAI methods are employed across different sectors. This categorization approach, though occasionally less precise, captures the most critical global trends in XAI research. It acknowledges the interdisciplinary nature of the field, where specific categories may overlap or lack the specificity found in others. Despite these challenges, our goal was to provide a comprehensive overview that highlights the most prominent domains where XAI is being applied while recognizing that some categories, by their nature, are more general and encompass a wider array of subfields.
By far the most frequent ML task among the reviewed XAI papers is classification, followed by regression and clustering. Among the used ML models, deep neural networks are predominant, especially convolutional neural networks. The second most used group of ML models are tree-based models (decision and regression trees, random forest, and other types of tree ensembles). Interestingly, there is no substantial difference between the major ML models with regard to the ML task of their target application.
Feature importance, referring to techniques that assign a score to input features based on how useful they are at predicting a target variable [26], is the most common form of explanation among the reviewed XAI papers. Some sort of visualization, aiming to visually represent the (hidden) knowledge of an ML model [301], is used very often as well. Other commonly used forms of explanation include saliency maps, rules, and counterfactuals.
Regarding methods, local explanations are predominant, with SHAP and LIME being the most commonly used techniques. SHAP is preferred for its stability and mathematical guarantees [240], while LIME is noted for its model-agnostic nature but criticized for its instability [32]. Gradient-based techniques such as Grad-CAM, Grad-CAM++, SmoothGrad, LRP, and Integrated Gradients are frequently used for image and complex data [179,182]. In general, post-hoc explainability is much more frequent than the use of some intrinsically explainable ML model. However, only a few studies quantitatively measure the quality of XAI results, with most relying on anecdotal evidence or expert evaluation [4].
In conclusion, the recent surge in XAI applications across diverse domains underscores its growing importance in providing transparency and interpretability to AI models [4,5]. Health-related applications, particularly in oncology and medical diagnostics, dominate the landscape, reflecting the critical need for explainable and trustworthy AI in sensitive and high-stakes areas. The review also reveals significant research efforts in environmental management, industrial optimization, cybersecurity, and finance, demonstrating the versatile utility of XAI techniques.
Despite the widespread adoption of XAI, there is a notable gap in the evaluation of explanation quality. The analysis of how the authors evaluate the quality of their XAI approaches and results revealed that in the majority of studies, the authors still do not evaluate the quality of their explanations or simply rely on subjective or anecdotal methods, with only a few employing rigorous quantitative metrics [284]. Cooperation with domain experts and including users can greatly contribute to the practical usefulness of the results, but above all, more attention needs to be paid to the development and use of well-defined and generally adopted metrics for evaluating the quality of explanations. It turns out that in such a case, we can expect reliable, interpretable, and meaningful explanations with a significantly higher degree of confidence. There is an urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI methods, as well as to improve the interpretability and stability of explanations. The development of such metrics could mitigate risks like confirmation bias and enhance the overall robustness of XAI applications.

Limitations and Future Work

This systematic literature review has several limitations that should be acknowledged. Firstly, the review relied exclusively on the WoS database to identify and retrieve relevant studies. While WoS is recognized as one of the most prestigious and widely utilized research databases globally, known for its rigorous indexing standards and the high quality of its data sources [38], the reliance on a single database may introduce a potential bias by omitting relevant literature indexed in other databases such as Scopus, IEEE Xplore, or Google Scholar. However, it is important to note that the comprehensive nature of WoS mitigates this limitation to some extent. WoS encompasses a vast array of high-impact journals across various disciplines, ensuring that the most significant and influential works in the field of XAI are likely to be included. Moreover, the substantial volume of results yielded from WoS alone necessitated a practical constraint on the scope of the review. Including additional databases would have exponentially increased the literature volume, rendering the review process unmanageable within the given resources and timeframe.
Secondly, the exclusion criteria applied in this review present additional limitations. Only studies published in English were included, which could potentially skew the findings by overlooking valuable contributions from non-English-speaking researchers and regions. Furthermore, the review was limited to studies published after 2021 to ensure the “recentness” of the applications of XAI. While this criterion was essential to focus on the latest advancements and trends, it may have excluded foundational studies that, although older, remain highly relevant to the current state of the field. Additionally, the review was restricted to journal articles, excluding conference papers that often publish seminal work, particularly in the fast-evolving domain of XAI. Given the considerable volume of literature, including conference papers would have extended the scope beyond what was feasible within the current study.
Moreover, the review process involved manually reading and categorizing each paper to develop detailed codes, allowing for a nuanced analysis of the literature. While more automated approaches to systematic reviews could have incorporated a broader range of sources, such methods may lack the precision and depth achieved through manual categorization. Future research could explore the use of automated methods to include key conference papers and older foundational studies, providing a more comprehensive understanding of the field’s development over time. However, for this review, our focus on recent journal publications, combined with an in-depth manual analysis, was necessary to provide a manageable and focused examination of the most current trends in XAI.
In summary, while these limitations—namely, the reliance on a single database, language restrictions, the specific timeframe, and the focus on journal articles excluding conference papers—are noteworthy, they were necessary to manage the scope and ensure a focused and feasible review process. Future research could address these limitations by incorporating multiple databases, including non-English studies, expanding the temporal range to include older foundational work, and considering a broader set of sources, such as conference papers. This approach would provide a more comprehensive overview of the literature on XAI and its development over time. Finally, it is important to highlight that the field of XAI is rapidly evolving. During the course of conducting and writing this review, numerous additional relevant articles emerged that could not be incorporated due to time constraints. This underscores the dynamic and ongoing nature of research in this area.

Author Contributions

Conceptualization, M.S.; methodology, M.S.; validation, M.S. and V.P.; formal analysis, M.S. and V.P.; investigation, M.S. and V.P.; resources, M.S.; data curation, M.S. and V.P.; writing—original draft preparation, M.S. and V.P.; writing—review and editing, M.S. and V.P. All authors have read and agreed to the published version of the manuscript.

Funding

The work by M.S. was supported by the K.H. Renlund Foundation and the Academy of Finland (project no. 356314). The work by V.P. was supported by the Slovenian Research Agency (Research Core Funding No. P2-0057).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The review was not registered; however, the dataset created during the full-text review, including predefined codes and protocol details, is available from the first author upon request.

Acknowledgments

The authors (M.S. and V.P.) would like to thank Lilia Georgieva for serving with them as a guest editor of the special issue on “Recent Application of XAI” that initiated this review.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALE: Accumulated Local Effects
AI: Artificial Intelligence
CAM: Class Activation Mapping
COVID-19: Coronavirus Disease
CYP: Cytochrome P450
DT: Decision Tree
ICU: Intensive Care Unit
IEEE: Institute of Electrical and Electronics Engineers
IG: Integrated Gradients
IML: Interpretable Machine Learning
IoT: Internet of Things
k-NN: k-Nearest Neighbor
LIME: Local Interpretable Model-agnostic Explanations
LR: Logistic Regression
LRP: Layer-wise Relevance Propagation
ML: Machine Learning
NN: Neural Network
PDP: Partial Dependence Plots
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RISE: Randomized Input Sampling for Explanation
SHAP: SHapley Additive exPlanations
SVM: Support Vector Machine
WoS: Web of Science
XAI: Explainable Artificial Intelligence

Appendix A. Included Articles

Table A1. Included articles in our corpus of recent applications of XAI articles, their application, and the reasons why the authors argue that explainability is important in their application.
Authors & Year | XAI Application | Why Explainability Is Important
Li et al. (2023) [94]Face mask detectionTo verify the model’s predictions.
Zhang et al. (2022) [65]Diagnosis and surgeryThe key to AI deployment in the clinical environment is not the model’s accuracy but the explainability of the AI model. Medical AI applications should be explained before being accepted and integrated into the medical practice.
Hilal et al. (2022) [121]Detect and explain Autism Spectrum DisorderTo explain the logic behind decisions, describe the strengths and weaknesses of decision-making, and offer insights into upcoming behaviors.
Manoharan et al. (2023) [185]Decision-making in banking and finance sector applicationsTo ensure transparency in banking and finance and to ensure that, in cases of deception, the responsible individual can be clearly identified within the sector.
Rjoub et al. (2023) [175]CybersecurityTo better understand the behavior of cyber threats and to design more effective defenses.
Astolfi et al. (2023) [244]Wind turbine maintenanceExplainability increases transparency and trustworthiness.
Berger (2023) [186]Asset pricingEconomic data is noisy, and there are many correlations. Explainability increases understanding of economically relevant variables and correlations.
Alqaralleh et al. (2022) [170]Intrusion detectionExplainability increases transparency for the user and gives more insight into the decisions/recommendations made by the intrusion detection system.
Neghawi et al. (2023) [272]Evaluating performance of SSMLMachine learning models are becoming more and more complex, and explainability is needed to evaluate questionable model outcomes.
Meskauskas et al. (2022) [302]Risk assessmentTraceability of the decision the model makes increases the credibility of the model and can be achieved by implementing explainability techniques on the model.
Fouladgar et al. (2022) [242]Sensitivity of XAI models on time series dataML models that process time series data are often quite complex (due to the nature of time series data), and so explainability would increase usability of time series data in ML.
Jean-Quartier et al. (2023) [245]Tracking emissions of ML algorithmsML models consume a lot of energy, and XAI implementations can reduce the amount of energy needed to obtain the desired outcomes from an ML model.
Almohimeed et al. (2023) [280]Cancer predictionExplainability is added to increase efficiency and reliability.
Leem et al. (2023) [303]Box office analysisThey say explainability here is essential for stakeholders in the film industry to gain insights into the model’s decision-making process, assess its reliability, and make informed decisions about film production, marketing, and distribution strategies.
Ayoub et al. (2023) [304]Lightpath quality of transmission estimationLack of explainability is hindering the deployment of ML systems because the results cannot be interpreted by domain experts. With a better understanding of the model’s decision-making process, domain experts can evaluate decisions and make better choices when designing a network.
Bhambra et al. (2022) [249]Image processing in astronomyExplainability would give information on what parts of a picture of a galaxy are important for classification for the CNN used for the task.
Arrotta et al. (2022) [243]Sensor-based activity recognitionThey mentioned that while heat maps generated by the model may be informative for data scientists, they are poorly understandable by non-expert users. Therefore, the inclusion of a module to transform heat maps into sentences in natural language was deemed necessary to enhance interpretability for a wider audience. Additionally, providing explanations in natural language targeted towards non-expert users was highlighted as a key aspect of their work to ensure that the rationale behind the classification decisions made by the model could be easily understood and trusted by individuals without a deep technical background.
Jena et al. (2023) [130]Earthquake spatial probability assessment (predicting where an earthquake will hit)Getting an explanation of why the ML model predicts an earthquake enables interpretation and evaluation based on expertise and knowledge of the area, so one can judge whether the model performs well and/or whether there is actually a risk of an earthquake.
Alshehri et al. (2023) [131]Groundwater quality monitoringExplainability techniques provide valuable insights on ML-made decisions, which is valuable for decision-making in water quality management. When important features are known, decisions can be made to focus on getting them better first.
Kim et al. (2022) [197]Safety of self-driving carsMistakes made by self-driving cars can lead to dangerous accidents. Explainability gives insight into why models make mistakes and, therefore, leads to better development and safer cars.
Raval et al. (2023) [187]Predicting credit card fraudExplanations on model predictions help users understand significant features that the LSTM model predicts credit card fraud with.
Lim et al. (2022) [179]Detecting deepfake voiceThe authors highlighted the explainability of deepfake voice detection to ensure the system’s reliability and trustworthiness by allowing users to understand and trust its decisions. They aimed to deliver interpretations at a human perception level, making the results comprehensible for non-experts. This approach differentiates human and deepfake voices, improving the system’s effectiveness.
Vieira et al. (2023) [122]Detection of epileptic seizuresApplying explainability methods in ML models used in the healthcare field is important because otherwise practitioners cannot understand the reasons behind decisions made by ML models.
Jena et al. (2023) [129]Predicting an earthquakeEarthquakes can lead to significant financial losses and casualties, and that is why ML models for earthquake prediction are developed. Because ML models get more complex, explainability is needed to interpret the results and to design better models.
Youness et al. (2023) [153]Prognostic lifetime estimation of turbofan enginesIn system prognostics and health management, explainability is needed to increase the reliability of decisions made by remaining useful lifetime prediction models and also to gain knowledge of what parts caused the engine to fail. Increasing the reliability of remaining useful lifetime models is important because too early maintenance is an unnecessary cost, and too late maintenance results in unexpected downtime, which is also an unnecessary cost.
Ornek et al. (2021) [67]Detecting health status of neonatesIn healthcare, doctors need explanations of the ML model’s decisions so they can make the right decisions in patient care.
Sarp et al. (2021) [69]Chronic wound classificationThe authors highlight that explainability is essential because it builds trust and transparency in healthcare and supports doctors’ decision-making through visual cues like heatmaps. With the help of this method, AI decisions are more understandable for non-experts and can provide unexpected insights, improving wound management and treatment outcomes.
Hanchate et al. (2023) [158]Average surface roughness prediction in smart grinding processGrinding is a part of the process of manufacturing devices and machines and their parts in many (critical) fields. Post-process quality control can be long and costly, and so quality control is shifting towards in-line processes. In-line quality control is often achieved with ML methods, and explainability gives important insight into key features.
Aguilar et al. (2023) [305]Interpretable ML model for (general) anomaly detectionIn detecting anomalies in sensitive fields (like healthcare and cybersecurity), it is important that decisions made by ML models are interpretable, because actions based on those decisions can cause serious harm if they are wrong.
del Castillo Torres et al. (2023) [306]Facial recognitionModern ML models are quite good at facial recognition but give no insight in their decision-making process. In facial recognition, explainability is needed to gain confidence in ML methods and their solutions.
Wang et al. (2023) [70]Age-related macular degeneration detectionThe authors mentioned that explainability is essential for improving the robustness, performance, and clinical adoption of AI-based models in medical applications, especially when it comes to tasks like AMD detection.
Dewi et al. (2023) [307]Image captioningThe authors highlight that explainability supports technical validation and model improvement, as well as ensuring that the AI system can be trusted and effectively utilized, particularly in assistive technologies for visually impaired individuals.
Ghnemat et al. (2023) [52]Detecting COVID-19 with medical imagingMedical devices use actual rather than synthetic data. Legal regulations can hinder the use of ML models in medical imaging because the models are not interpretable; interpretability therefore increases the available use cases for ML in medical imaging. Explainability also increases fairness in diagnostic work because practitioners are able to evaluate the model’s decisions. Explainability combined with a good ML model can also give new information about illnesses (COVID-19 in this case).
Martinez et al. (2023) [188]Predicting decision to approve or reject a loanFinancial institutions use an increasing amount of AI in bank loan decision-making processes, and these decisions can affect the loan applicants significantly. Explainability is needed to evaluate AI-based decisions and to improve the models.
Younisse et al. (2022) [171]Intrusion detectionExplainability increases reliance on and trust in ML systems. Explainability can also shift the focus of decision-making from humans to AI. Trust and reliability are as important in intrusion detection as efficiency.
Chelgani et al. (2023) [155]Modeling hydrocyclone performanceModeling and AI are crucial to determining hydrocyclone operational variables and their impact on particle sizes. Explainability techniques applied to AI methods can help to gain insight into the sensitivities of industrial modeling (hydrocyclone processes in this case).
Rietberg et al. (2023) [66]Identifying reasons for taking an MRI scan from an MS (multiple sclerosis) patientThere is constant demand in the healthcare field to make processes less costly while ensuring that the quality of patient care does not drop, and AI can help reduce costs by increasing efficiency. Explainability can make AI more trustworthy, and it is also crucial that medical professionals know the reasons behind AI-made decisions (patient health is on the line).
Martins et al. (2024) [189]Credit-related problems, fraud detection, risk assessment, investment decisions, algorithmic trading, and other financial decision-making processesThe authors did not directly state why explainability is important for their specific application. However, in summary, they highlighted that explainability is critical for ensuring transparency, trust, and informed decision-making in the financial domain.
Diaz et al. (2022) [206]Churn predictionExplainability with predictive AI methods can give more insight on important factors that lead to churn. When it is known why valuable customers churn, decisions can be made to avoid that.
Lohaj et al. (2023) [53]Predicting the need for ICU admission in COVID-19 patientsCOVID-19 is a quickly evolving disease, and not all the features influencing the course of the illness are understood. With the help of AI and XAI methods, more understanding of the COVID-19 illness can be gained.
Geetha et al. (2022) [163]Identification of concrete cracksThe authors mentioned that the explainability can address the “black box” nature of the deep learning models that they are using in their domain. They are aiming to generate high-quality, interpretable explanations of the decisions for concrete crack detection and classification.
Clare et al. (2022) [132]Predicting ocean circulation regimesExplainability would enable better decision-making based on AI-based knowledge on climate change because good decisions cannot be made based on uncertain knowledge, and wrong decisions can have wide-ranging impacts. For example, decisions made on the coasts where sea level rise can cause great damage need to be based on interpretable knowledge to ensure safety.
Zhang et al. (2023) [89]Medical text processingThe authors highlighted that for practical usage in the healthcare context, AI models must be able to explain things to people, and that understanding how those models work is essential for adopting and using medical AI applications. They also mentioned that explainability is necessary to ensure the acceptability of AI in medicine and its use in clinical applications.
Ramon et al. (2021) [212]Classifying psychological traits from digital footprintsProfessionals using the AI-made decisions need explainability to trust the models. The EU and the GDPR also require certain levels of interpretability from ML models (especially in applications in critical areas). Interpretability would increase understanding of the issue that ML is used for and also reveal relations that would not have been found otherwise.
Alkhalaf et al. (2023) [308]Cancer diagnosisThey mentioned that this helps doctors and patients understand the reasons behind the automated diagnosis made by the ML models. And also, they say, experts can provide better medical interpretations of the diagnosis and give suitable treatment options using this explainability. They also mentioned this can build trust between patients, medical staff, and AI in the medical field.
Noh et al. (2023) [164]Intelligent system fault diagnosis of the robotic strain wave gear reducerIt is hard to convince stakeholders and engineers of ML models’ usability if the models are not interpretable.
Chen et al. (2023) [281]Interpretation of ML results from image dataUse cases of AI are increasing, also in fields that use a lot of image-based AI (medical fields, for example). LIME is usually used for text data or numerical data. Methods for applying LIME to image data would increase the use cases of explainability methods for image processing AI.
Nunez et al. (2023) [133]Water resources management (snowmelt-driven streamflow prediction)The authors mentioned that the explainability contributes to a greater understanding of hydrological processes and ensures the trust and transparency of the models and decision-making processes used in this context.
Chowdhury et al. (2023) [154]Fault prediction on 3D printerModels can sometimes make decisions based on wrong or irrelevant information. Interpretability increases trust because the user can evaluate if the ML-made decision makes sense.
Shah et al. (2023) [223]Detecting AI-generated textAI is nowadays very good at producing text that seems human-made, and detection techniques are needed to ensure safety and prevent identity theft. Interpretation techniques give more insight and help evaluate detection models’ decisions, which increases the benefits of detection model usage.
Kolevatova et al. (2021) [134]Analysis of the impact of land cover changes on climateThe authors mentioned that explainability helps to understand the complex relationships between land cover changes and temperature changes.
Mehta et al. (2022) [296]Social media analysisThe authors mentioned that explainability can be used for users to understand the results and to trust the decisions of the algorithms. And also, they highlighted that explainability is essential for gaining trust from AI regulators and business partners, enabling commercially beneficial and ethically viable decision-making.
Ferretti et al. (2022) [174]Detecting vulnerabilities in source codeNeural network models are becoming more accurate but also more complex and harder to understand. In the domain of cybersecurity, it is important to know the reasons behind AI-made decisions when it comes to source code vulnerability detection, because wrong decisions can lead to disaster.
Cha et al. (2024) [217]Sentence embeddingIn the field of natural language processing, sentence embedding models do not tend to perform very well, and it is not clear how to make them perform better. In this study, explanations are added in the middle of the model to enhance model performance.
Veitch et al. (2021) [198]Marine autonomous surface vehicles engineeringThey mentioned that explainability is essential to enhance usability, trust, and safety in the context of decision-making, AI functionality, sensory perception, and behavior. The aim was to build trust among ASV users by providing transparent and understandable representations.
Kulasooriya et al. (2023) [162]Predicting strength characteristics of concreteInterpretability of ML models in the structural engineering domain is important so (1) engineers can identify reasons behind model-based decisions, (2) users and domain experts can trust more in ML-made decisions, and (3) proposed methods can be explained clearly for the non-technical community (especially without knowledge of AI).
Elkhawaga et al. (2022) [156]Predictive process monitoringIn predictive process monitoring, stakeholders need ML-based decisions to be interpretable so they can evaluate them properly and make good business decisions with the help of AI.
Nascita et al. (2021) [309]Mobile traffic classificationThe authors mentioned that it is important to have an explanation due to the lack of interpretability of the classification models used in this context. Further, they highlighted that a lack of explainability can cause untrustworthy behaviors, a lack of transparency, and legal and ethical issues, especially in cybersecurity applications.
Larriva-Novo et al. (2023) [172]Intrusion detectionIt is a security threat when a technician operates an ML-based intrusion detection system without interpretability in the model or knowledge of AI. This can also lead to a lack of trust in AI and ML tools.
Andreu-Perez et al. (2021) [120]Cognitive neuroscience developmentThe authors mentioned that infant fNIRS data are still quite limited, and by using XAI learning and inference mechanisms, they can overcome that limitation. And also, they mentioned that XAI provides explanations for classification in their context.
El-khawaga et al. (2022) [157]Predictive process monitoringThe goal of predictive process monitoring is to inform stakeholders about how business processes are operating now and in the near future. When business processes are described by black-box models (which is often the case), stakeholders don’t get good explanations on ML-made decisions, which reduces trust. Interpretability is needed to increase trust and to help stakeholders make data-driven decisions.
Silva-Aravena et al. (2023) [310]Cancer predictionInterpretability can increase ML model usage in the field of cancer prediction, especially in more complex use cases. Explainability increases the amount of knowledge gained from ML-made decisions, which leads to better decision-making in patient care. Explainability of ML models benefits both practitioners and management.
Bjorklund et al. (2023) [311]Explainable AI methodsHighly performing black-box AI models can be insufficient if they make predictions based on the wrong features. Interpretability gives information on a model’s decision-making process and so can lead to the development of better AI models. Interpretability is crucial in safety-critical fields (like medical) and when finding new information (like physics research).
Dobrovolskis et al. (2023) [312]Agent developmentThe authors mentioned that the use of explainability can improve user experience and trust by providing clear and understandable explanations of the system’s behavior. And also, they mentioned it can lead to greater acceptance and adoption by users of the systems. They highlighted that explainability in the smart home domain is essential due to the sensitive and high-risk nature of some AI applications that are closely related to human lives, wellness, and safety.
Kamal et al. (2022) [76]Glaucoma predictionThe authors mentioned that explainability increases the user’s confidence in the decision-making process with existing ML models that are limited to glaucoma prediction. And also, they mentioned that explainability provides convincing and coherent decisions for clinicians/medical experts and patients.
Kumar et al. (2021) [114]Brain tumor diagnosisExplainability increases trust towards AI systems and safety for use in the medical field. Explainability should also be measurable so the explanations can be trusted.
Pandey et al. (2023) [149]Predicting product gas composition and total amount of gas yieldGas production systems are complex, and black-box methods are used for product gas prediction for that reason. Production systems can fail without anyone knowing why. Explainability would increase the use of AI and increase safety and efficiency due to increased knowledge of the system.
Amoroso et al. (2023) [105]Predicting Alzheimer’s diseaseIt is difficult for clinical practitioners to adopt highly developed AI systems due to their lack of interpretability. There are lots of great tools for brain disease prediction that could help diagnose illness in its early stages. Explainability would allow people with little to no knowledge of AI to use these diagnostic tools.
Tao et al. (2023) [228]Cheating detection and player churn prediction in online gamesLack of interpretability in black-box models hinders the development of the models and use of AI in online gaming. Explainability is needed to make sure models are learning the right relations, allow practitioners to adjust the model in problematic cases, make sure that models perform the same way in an online setting, and enable easy debugging.
Stassin et al. (2024) [246]Vision transformersThe authors mentioned that explainability in the context of Vision Transformers is essential for ensuring transparency, mitigating biases, enhancing safety, and promoting trust in AI systems.
Bobek et al. (2022) [167]Hot-rolling process (steel industry)In manufacturing industries, data is collected straight from the machines and processed with AI to provide information about the machines to help make decisions. Decision makers are usually not AI experts, so they cannot rely fully on AI-made decisions. Explainability would make relevant decision-making easier and adopting AI in decision-making processes more worthwhile.
Mollaei et al. (2022) [85]Functional work ability predictionAs ML models become more widely used, explainability is needed so the right decisions can be made based on AI-made decisions. There are several methods to analyze an ML model’s performance, but those don’t give insight on the decision-making process.
Lin et al. (2021) [178]Face verificationExplainability is needed in complex ML systems that do face verification so that ML-made decisions can be trusted. False-positive results on face recognition used in security applications are a big threat to security and privacy. Interpretability increases users’ trust and helps develop better and more accurate models.
Petrauskas et al. (2021) [86]Decision support system for the nutrition-related geriatric syndromesThey mentioned that explainability is needed for medical professionals to understand the reasoning behind the decisions made by the clinical decision support system (CDSS). And also, they mentioned this approach enables physicians to comprehend the system’s assessment errors and identify areas for improvement. And also, they mentioned CDSS’s explainability allows less experienced physicians to pay attention to nutrition-related geriatric syndromes and perform detailed examinations of nutrition-related disorders.
Sharma et al. (2023) [161]Thermal management in manufacturing processThe authors mentioned that the explainability of their context is important to develop a highly precise model. Further, they recognized the significance of transparency and interpretability in their model, particularly in the context of predicting the thermophysical properties of nanofluids.
Torky et al. (2023) [194]Prediction and recognition of financial crisis rootsThe authors mentioned that explainability enables the interpretation of complex data patterns, allowing humans to understand and interpret the logic behind classifying patterns efficiently. This is essential for financial crisis prediction as it helps in providing evidence for financial decisions to regulators and customers, especially where the results of the AI model may be inaccurate. And also, they mentioned that this will help with financial institutions’ work.
Perl et al. (2024) [313]Fault location in power systemsThey mentioned that explainability in this domain is essential because the lack of transparency in ML models for fault location in power systems poses a significant challenge. And also, they mentioned the black box ML models make it difficult for power system experts to understand the connections between input bus measurements and the output fault classification. This can cause less trust in the model’s recommendations and makes it challenging to improve PMU placement for better fault classification.
Luo et al. (2021) [182]Aircraft detection from synthetic aperture radar (SAR) imageryThe authors mentioned that explainability helps to address the trustworthiness of SAR image analytics. And also they mentioned that explainability helps to provide a better understanding of the DNN feature extraction effectiveness, select the optimal backbone DNN for aircraft detection, and map the detection performance.
Andresini et al. (2023) [139]Recognition of bark beetle infested forest areasBecause bark beetles can affect forest health quickly and over large areas, AI methods are needed to aid the recognition of bark beetle infestations. Explainability in these AI methods is important to gain the trust of forest managers and other non-AI experts and remote stakeholders.
van Stein et al. (2022) [140]Plant breedingThe authors mentioned that the explainability of this context is important because it helps to gain a deep understanding of the role of each feature (SNP) in the model’s predictions. And also, they have mentioned that providing transparency and interpretability through sensitivity analysis can enhance the reliability and applicability of genomic prediction models in real-world scenarios.
Moscato et al. (2021) [190]Credit risk assessmentIn peer-to-peer lending, lenders use P2P platforms to aid their decision-making. These platforms use complex models that are hard to interpret (especially without knowledge of AI). Explainability of AI-made decisions on P2P platforms is important to help lenders make accurate loan decisions.
Nwafor et al. (2023) [314]Non-technical losses in electricity supply chain in sub-Saharan AfricaThe goal of this study is to find if and to what extent staff-related issues impact non-technical loss of electricity in sub-Saharan Africa. To answer this research question, feature importance is necessary.
Panagoulias et al. (2023) [315]Intelligent decision support for energy managementThe authors mentioned that explainability is essential for their context because it builds user trust and ensures faster adoption rates, especially in the energy sector, where AI can provide a more sustainable future. And also, they mentioned that it is essential for providing justification for recommended actions and ensuring transparency and interpretability of the analytics results.
Rodriguez Oconitrillo et al. (2021) [214]Detecting reasons behind judge’s decision-making processStudying judges’ decision-making process is very sensitive because of their freedom and juridical independence. Studying judges’ behavior and decision-making is still important to help other judges when they are reviewing previous cases to help their decision-making. XAI is an important tool here because XAI techniques give insight into the reasons behind decisions.
Kim et al. (2023) [316]Designing an XAI interface for BCI expertsThe authors mentioned that the explainability is important for BCI researchers to understand the decisions made by AI models in classifying neural signals or analyzing signals based on their domain expertise.
Qaffas et al. (2023) [202]Inventory managementThe authors mentioned that providing explanations for the assignment of items to classes A, B, and C allows for a better analysis of the items, easy detection of misclassifications, improved understanding of inventory classes, and flexibility in inventory management decisions. And also they mentioned explainability helps make decisions more transparent and enhances interpretability.
Wang et al. (2023) [317]Improving performance of XAI techniques for image classificationUse of black-box models in critical fields is increasing, and explainability is much needed to help users evaluate models’ decisions and increase trust from users. The efficiency and accuracy of modern XAI visual explanation methods (CAM, LIME) can be improved, which is the goal of this study.
Mahbooba et al. (2021) [173]Trust management in intrusion detection systemsThe authors mentioned that human experts need to understand the underlying data evidence and causal reasoning behind the decisions made by AI in their domain. Further, the network administrators can enforce security policies more effectively for identified attacks, leading to improved trust in the systems by providing explanations.
Puechmorel (2023) [318]Creating better XAI techniques with manifolds and geometryData manifolds are not much researched, even though computing on manifolds can result in higher performance and the ability to compute on very high-dimensional data.
Rozanec et al. (2021) [166]Prediction of pentane content during liquefied petroleum gas debutanization processExplainability is important so users can understand the model’s limitations in operational use.
Heuillet et al. (2022) [288]Explaining reinforcement learning systemsAs ML models get progressively more complex, transparency is needed so the use of a black-box model can be justified. Explainability also increases trust toward AI systems from the user end. Reinforcement learning is an ML technique that is increasingly used in critical fields where the interpretability of the ML model is necessary for the end users.
Gramespacher et al. (2021) [191]Predicting loan defaultsML models used for credit score/loan risk prediction in finance are becoming increasingly complex, which is contrary to the increasing demand for transparency from authorities. As loan defaults are more costly to businesses than rejecting potential clients, the most beneficial model might not be the same as the optimal model; interpretability is needed to aid in developing these models.
Mohamed et al. (2022) [275]Small-object detectionExplainability increases reliability and therefore accelerates the ML model’s approval for real-life applications.
Xue et al. (2022) [135]Predicting spatiotemporal distributions of the lake surface temperature in the Great LakesThere is a growing need to understand the relationships between features and predictions in black-box models. The Great Lakes area is so big (84% of North America’s surface water) that it impacts the environment very differently than “regular” lakes. Explainable AI methods are needed to understand the climate of the Great Lakes area better.
Muna et al. (2023) [181]Attack detection on IoT infrastructures of smart citiesComplex ML models are widely used in the areas of IoT and smart cities. Lack of interpretability is a security issue because it is hard to make ML models safer if it is not known how they make decisions.
Yigit et al. (2022) [63]Detect paratuberculosis from histopathological imagesTo help pathologists in the diagnosis of paratuberculosis
Machlev et al. (2022) [319]Power quality disturbances classificationXAI is important because power experts may find it hard to trust the results of such algorithms if they do not fully understand the reasons for a certain algorithm’s output.
Monteiro et al. (2023) [320]Machine learning model surrogatesThe authors mentioned that the explainability of their context is important because there is a need for AI models that balance the tradeoff between interpretability and accuracy and explain the feature relevance in complex algorithms.
Chen et al. (2022) [95]Studying sustainability of smart technology applications in healthcareExisting methods based on AI are not easy to understand or communicate, so explainability is needed to enhance the usability of AI systems among users. Smart technology can also be deemed unsustainable because it is not easy to implement and repair; explainability makes implementation easier due to higher trust and understanding, and repair is easier because problems can be located more easily.
Shi et al. (2022) [321]Finding bugs in softwareExplainability can help fuzzy models in bug detection by giving explanations of which parts of code need to be searched. Explainability also gives information about false positives/negatives, which helps develop a better model.
Chen et al. (2024) [168]Job cycle time predictionFuzzification of complex DNN is a difficult task. Explainability simplifies the decisions made by the DNN model, which can help in the fuzzification. It is common in the domain of manufacturing and management that complex black-box models are used without understanding of AI, and explainability would lead to more meaningful model usage and more efficient processes.
Li et al. (2023) [124]Predicting risk of brain metastasis in patients with lung cancerMore complex and accurate models are needed to predict brain metastasis because there are lots of patients at risk of developing brain tumors. More complex models are usually not interpretable, so explanations are needed so interpretability is not compromised.
Igarashi et al. (2024) [322]The effects of secondary cavitation bubbles on the velocity of a laser-induced microjetThe authors mentioned that the explainability is important in their domain because it supports understanding the physical phenomena related to the influence of secondary cavitation bubbles on jet velocity.
Yilmazer et al. (2021) [203]On-shelf availability monitoringThe authors mentioned that explainability is useful in their context because it provides users with an explanation about individual decisions, enabling them to manage, understand, and trust the on-shelf availability model. And also, they mentioned explainability allows non-experts and engineers in grocery stores to understand, trust, and manage AI applications to increase OSA. Further, they mentioned it provides transparency and understandability.
Zhang et al. (2023) [180]Attacking ML classifier of EEG signal-based human emotion assessment system with data poisoning attackEEG data is unstable and complex, which makes models that use this data very difficult to interpret. Explainability is needed to make sure models are doing what they are supposed to do and also to help develop better models. In attack detection systems, explainability is needed to identify, analyze, and explain DP attacks.
Kim et al. (2023) [146]Urban growth modeling and predictionThe authors mentioned that the explainability in their domain strengthens the interpretive aspect of ML algorithms. And also, they mentioned XAI models are likely to increase use in urban and environmental planning fields because they effectively supplement the black box features of AI.
Ilman et al. (2022) [261]Predicting transient and residual vibration levelsMathematical and statistical models may not be able to find all relations between parameters and to predict accurately in complex settings. When predicting response to vibration, explainability is needed to find unknown relationships between parameters and therefore build more efficient and stable systems.
Deperlioglu et al. (2022) [77]Glaucoma diagnosisThe authors mentioned that explainability in their specific application is important because it provides transparency and improves trust and confidence in the automated deep learning solution among medical professionals.
Bermudez et al. (2023) [195]Risk management in insurance savings productsThe authors mentioned that the explainability of their specific application is important for understanding, trust, and management of ML methods that are not directly interpretable. And also, they mentioned XAI techniques are useful for risk managers to identify patterns, gain insights, and understand the limitations and potential biases of the models, finally leading to more informed and accurate decisions for their organizations and stakeholders.
Sarp et al. (2023) [54]COVID-19 diagnosis using chest X-ray imagesThe authors mentioned that the explainability of their application is important for providing insights and understanding the inner workings of the black box AI model, especially in COVID-19 diagnosis. And also they mentioned that the explainability helps non-expert end-users understand the AI model by providing explainability and transparency, which is essential for feedback and providing more information to assist doctors in decision-making.
Soto et al. (2023) [323]Improving counterfactual explanation modelsLack of explainability can lead to using less developed AI systems, which is not optimal in any field and can lead to losses and other serious consequences. Counterfactual explanation is becoming an increasingly important area of XAI research because it provides very human-like explanations (very interpretable, hard to misunderstand).
Ganguly et al. (2023) [80]Diabetes predictionThe authors mentioned that to make ML models crystal clear and authentically explainable, they have to use explainability. And also, they mentioned that lack of explanation and transparency in AI systems in healthcare can lead to less trust from patients and healthcare providers.
Messner (2023) [259]Improving regression model explanation techniquesUsing AI systems in research in social and behavioral sciences is increasing because of the high performance of black-box models. Researchers in these areas are not usually experts of AI, and therefore using complex models without explainability can lead to mistrust towards models, misuse, or wrong conclusions. Explainability is needed to make sure quality, data-driven research can be made in all fields. Also, more research is needed to improve the interpretability of regression models because XAI research has been focused on explaining classification models.
Rudin et al. (2023) [193]Predicting credit riskComplex AI models without explainability are used to accept or turn down loan applications, and lack of explainability leads to lack of fairness and transparency. When the explanation technique approximates an ML model, explanations may not always be accurate or globally consistent, which again affects fairness.
Han et al. (2022) [324]Competitor analysisThe authors mentioned that understanding the competitive factors and points of differentiation from the customer’s perspective is essential for product developers. And also, they mentioned that their method effectively reflects customers’ opinions, which is essential for understanding customer preferences and improving product competitiveness. Therefore, they highlighted the importance of explainability for their specific application.
Jo et al. (2023) [177]Malware detectionThe fact that deep-learning-based malware detection models do not (usually) provide explanations for their classification decisions is a cybersecurity threat. Explainability of malware detection models would increase user trust and make integrating these models into cybersecurity systems easier and more accessible (because of regulations). Both malware detection and explanation models need to constantly improve to keep up with the improvement of malware.
Quach et al. (2023) [141]Disease detection in agricultureThe authors mentioned that the explainability in their context is important because it is essential for making in-depth assessments and ensuring reliability in practice.
Hasan et al. (2024) [325]Productivity predictionProductivity in production is a complex phenomenon that is hard to model with linear ML models. More complex models would perform better, but at the cost of interpretability.
Perez-Landa et al. (2021) [183]Social media monitoring for xenophobic content detectionThe authors mentioned that the explainability is essential here because it gives the ability to understand why a tweet has been classified as xenophobic. They mentioned that tweets can affect people’s behavior, and the development of an XAI model is essential to providing a set of explainable patterns describing xenophobic posts. This method can enable experts in the application area to analyze and predict xenophobic trends effectively.
Lorente et al. (2021) [201]Development of advanced driver-assistance systems (ADAS)The authors mentioned that explainability is essential in this context because the level of automation is constantly increasing according to the development of AI. And also, they mentioned ADAS assists drivers in driving functions, and it is essential to know the reasons for the decisions taken. And also, they said trusted AI is the cornerstone of the confidence needed in this research area.
Raza et al. (2022) [83]Classifying different arrhythmiasWhen using AI-based systems for decision-making in healthcare, it is important for patient health that the model is interpretable and that practitioners can justify the model’s decisions. Both clinical healthcare practitioners and patients need to trust the AI system when AI is used for diagnostic decision-making.
Gim et al. (2024) [165]Optimization for injection moldingThe authors mentioned that the IMC features are difficult to interpret and control independently without affecting other features, and therefore quality differences cannot be attributed solely to the change of a specific feature. To address this issue, explainability is required to interpret the relationships between the features in the IMC and each part quality.
Varghese et al. (2023) [104]Alzheimer’s disease classificationThe authors mentioned that explainability provides insights into the features or characteristics of the model used to make predictions in the context of AD classification. And also, they mentioned it is more important for improving trust in the system and its results.
Sajjad et al. (2022) [326]Heat transfer optimization for nanoporous coated surfacesThe authors mentioned that explainability uncovers the most influential surface features for the nanoporous coatings.
Aquino et al. (2023) [98]Human activity recognitionThe authors mentioned that it is essential to understand how the model makes decisions and to ensure the model’s predictions are not based on biased features.
Lee et al. (2023) [169]Yield predictionThe authors mentioned that explainability in their specific application is important to increase the transparency of the model to improve usability. And also, they mentioned it is important to improve the decision-making process and to understand the factors influencing the semiconductor manufacturing field.
Althoff et al. (2021) [257]Hydrological modeling and predictionThe authors mentioned that explainability is important in their context because it extends the interpretability of ML models and makes the results more understandable to humans. Also, they mentioned it is important to uncover how runoff routing is being resolved and to turn black box models into glass box models.
Posada-Moreno et al. (2024) [298]Explaining in both global and local way with same methodGlobal explanations are used to explain the model as a whole, but those explanations don’t provide exact information of where important features are. More research is needed to create explanation methods that give both global and local explanations, because both of those methods separately have severe limitations.
Ravi et al. (2023) [327]Predicting hardness in alloy based on composition and conditionLack of explainability leads to models not being used in unexplored use cases and use cases with low amounts of data. In material science/engineering, there are so many different use cases for AI that the models need to be explainable so they are trustworthy. Explainability can also help find new features about physical phenomena under experiments.
Tasci (2023) [116]Brain tumor classificationThe authors mentioned that explainability enables humans to interpret and understand the results of artificial intelligence, which is crucial in the medical field for ensuring the safety and reliability of the diagnostic solutions offered by deep learning techniques.
Sauter et al. (2022) [328]Computational histopathologyDeep learning models can learn unwanted or wrong correlations that are not causally related to the classification task. Explainability helps detect possible biases and ensure the model performs correctly.
Laios et al. (2022) [51]Surgical decision-making in advanced ovarian cancerThe authors mentioned that explainability supports explaining feature effects and interactions associated with a specific threshold of surgical effort. And also, they mentioned that surgical decision-making at cytoreductive surgery for epithelial ovarian cancer (EOC) is a complex matter, and an accurate prediction of surgical effort is required to ensure the good health and care of patients. AI applications encounter several challenges derived from their “black box” nature, which limits their adoption by clinicians, and that is why explainability is important.
Kalyakulina et al. (2022) [111]Disease classification (Parkinson’s and Schizophrenia)The authors mentioned that explainability gives the ability to interpret and verify the decisions made by ML models, which is essential for medical experts. And also, they mentioned that it helps to improve the system and understand the internal mechanics of the model, which can lead to enhancements and refinements.
Bhatia et al. (2023) [96]Tracing food behaviorsExplainability enhances comprehension and trust, and in this use case, it can make ML-based software more comfortable to use.
Rozanec et al. (2021) [196]Time series forecasting and anomaly detectionThey mentioned that explainability in their context is important because the increasing adoption of AI demands understanding the logic beneath the forecasts to determine whether such forecasts can be trusted. And also, they mentioned understanding when and why global time series forecasting models work is essential for users to detect anomalous forecasts and comprehend the features that influence the forecast.
Huang et al. (2023) [136]Soil moisture predictionAI methods are becoming increasingly important for soil drought prediction due to climate change. Explainability is needed so models can be interpreted and their decisions evaluated by an end user that has knowledge of physics (and other things related).
Bandstra et al. (2023) [138]Detection and quantification of isotopes using gamma-ray spectroscopyAI models to detect gamma rays are used in high-stakes security situations, where explainability is necessary and crucial to avoid model-induced damage. Explainability increases trust and helps understand and evaluate model performance.
Konradi et al. (2022) [97]Aspiration detection in flexible endoscopic evaluation of swallowing (FEES)The authors mentioned that the lack of transparency in automated processing conflicts with the European General Data Protection Regulation (GDPR), which prohibits decisions based solely on automated processing.
Mishra et al. (2022) [226]Solution development for analyzing and optimizing the performance of agents in a classic arcade game “Fuzzy Asteroids”The authors mentioned that the results provided by AI models would be more acceptable by end users if there were explanations in layman’s terms associated with them.
Lysov et al. (2023) [142]Diagnosis of plant stressThe authors mentioned that the explainability of the AI models used in the early diagnostics of plant stress using the HSI process is essential for understanding the decision-making process and the features that contribute to the diagnostic outcomes.
Yagin et al. (2023) [44]Breast cancer predictionThe authors mentioned that explainability has the potential to advance a more comprehensive understanding of breast cancer metastasis and the identification of genomic biomarkers, and it is opening new paths for transformative advances in breast cancer research and patient care.
Dworak et al. (2022) [199]Autonomous vehicles for object detection using LiDARThe authors mentioned that in the domain of autonomous driving, where decisions based on ML models could impact human lives, it is essential to understand how neural networks process data and make decisions. And also, they mentioned explainability is essential for ensuring the safety and reliability of autonomous driving systems.
Thi-Minh-Trang Huynh et al. (2022) [137]Predicting heavy metals in groundwaterWhen predicting heavy metals and groundwater quality, future data might be much different than historical data. Explainability is needed to gain trust from domain experts and ensure usability when utilizing models that have been trained with historical data. Relationships between heavy metals and other chemical contents are highly non-linear, so white-box models give poor results. Explainability would increase both implementation and improvement of ML models used in this use case.
Bhandari et al. (2023) [110]Parkinson’s disease diagnosisThe authors mentioned that interpretability and explainability of predictions are essential in critical areas like healthcare, medicine, and therapeutic applications, and while ML models are effective in predicting outcomes, trust issues and transparency can be addressed through explainability.
Akyol et al. (2023) [160]Modeling refrigeration system performanceInterpretable white-box models don’t perform well when predicting refrigeration system performance due to non-linearity in data, so black-box models are in wide use. Explainability would increase understanding of how input values, which are system components, affect the goal values.
Vijayvargiya et al. (2022) [99]Human lower limb activity recognitionThe authors mentioned that explainability is essential due to the difficulty in understanding how the classifiers predicted the actions.
Renda et al. (2022) [200]Automated vehicle networking (in the context of 6G systems)The authors mentioned that understanding the decisions made by AI models is essential for ensuring the safety and reliability of automated driving systems. They also mentioned that explainability permits improving the user experience of the offered communication services by helping end users trust (by design) that in-network AI functionality issues appropriate action recommendations. And also, they mentioned explainability is needed at the design stage to perform model debugging and knowledge discovery.
Akilandeswari et al. (2022) [329]Factory/plant location selectionThe authors mentioned that the explainability in their specific application is important because it is providing insights into why the model makes certain decisions. And also, they mentioned that this transparency helps stakeholders understand the reasoning behind the model’s predictions, enabling them to make informed decisions and potentially improve the model further.
Zlahtic et al. (2023) [90]ML model development in medicineThe authors mentioned that many prevailing ML algorithms used in medicine are often considered black box models, and this lack of transparency hinders medical experts from effectively leveraging these models in high-stakes decision-making scenarios. Therefore, explainability is needed. And also, they mentioned that by empowering white box algorithms like Data Canyons, they hope to allow medical experts to contribute their knowledge to the decision-making process and obtain clear and transparent output.
Aghaeipoor et al. (2023) [330]Explaining DNN with fuzzy methodsExplanations given by modern XAI methods aren’t always intuitive for non-expert users to comprehend, especially when it comes to rule-based explanations. Fuzzy linguistic representations in rule explanations would increase comprehension and therefore usability of XAI methods.
Lee et al. (2021) [331]Generating global explanations for models using unstructured dataThere are few good methods for creating global explanations for unstructured data; for structured data, such methods exist and are valid. Visualizing globally high-level features of predictions on unstructured data in an easily interpretable way is important to gain knowledge of the model’s inner processes.
Gouverneur et al. (2023) [91]Pain recognitionThe authors mentioned that understanding the decisions made by the classifiers is essential for gaining insights into the mechanisms of pain in detail.
Hung et al. (2021) [332]Improving image data quality at preprocessing stageExplainability methods can be used in image classification tasks to give insight on relationships between inputs and outputs. Explainability is also needed to analyze and demonstrate the importance of the proposed method for image quality improvement.
Kamal et al. (2021) [106]Detecting Alzheimer’s disease from MRI images and gene expression dataBlack-box models are hard to interpret. Explainability is necessary to gain knowledge about the model’s decision-making process and to find possibly new features that predict and affect the appearance of Alzheimer’s disease.
Qaffas et al. (2023) [285]Inventory managementThe authors mentioned that the explainability of their specific domain is important to enhance the decision-making process by providing transparent justifications for item assignments and interpretations of obtained clusters.
Dindorf et al. (2021) [68]Pathology-independent classifierThe authors mentioned that explainability is important in their specific application to understand why subjects were classified, including instances of misclassification, and to reduce the black box nature of the machine learning model.
Javed et al. (2023) [72]Cognitive health assessmentThe authors mentioned that explainability is providing insights into the decision-making process of ML models, particularly in the context of healthcare and cognitive health assessment. This transparency and interpretability are essential for understanding how the models identify and classify activities, especially in scenarios involving individuals with dementia or cognitive impairments.
Gramegna et al. (2021) [192]Credit risk estimationThe authors mentioned that explainability in credit risk assessment is important to address the trade-off between predictive power and interpretability. They mentioned that new algorithms offer high accuracy but lack intelligibility with limited understanding of their inner workings. Therefore, the use of explainability provides transparency and insights into why certain outputs are generated by these models.
Wani et al. (2024) [49]Lung cancer detectionThe authors mentioned that explainability, along with interpretability and transparency, is an essential aspect of AI in healthcare.
Nguyen et al. (2023) [148]Optimization of membraneless microfluidic fuel cells (MMFCs) for energy productionThe authors mentioned that explainability is important in their specific application because the black box nature of AI optimization models reduces their credibility and hinders additional understanding of the importance and contributions of each feature in the decision-making process of these models.
Kuppa et al. (2021) [176]XAI methods in cybersecurityThe authors mentioned that the explainability of their specific application is important because it enhances trust, gives understanding of model decisions, and addresses security concerns in the cybersecurity domain.
Iatrou et al. (2022) [143]Prediction of nitrogen requirement in riceThe authors mentioned that the explainability of their specific application is important to provide rice growers with sound nitrogen fertilizer recommendations in precision agriculture.
Sevastjanova et al. (2021) [218]Question classificationThe authors mentioned that explainability is important to provide insights into linguistic structures and patterns. Also, they mentioned that traditional ML models are black boxes, making it challenging to extract meaningful linguistic insights.
Real et al. (2023) [92]Drug response predictionThe authors mentioned that explainability is essential for providing insights into the underlying mechanisms of drug actions, which is critical for effective clinical decision-making and patient care.
Aghaeipoor et al. (2022) [262]Big data preprocessingThe authors mentioned that the explainability of their specific application needs understanding and explaining the internal logic of their model. Also, they mentioned that explainability helps human users to trust sincerely, manage effectively, avoid biases, evaluate decisions, and provide more robust machine learning models.
Galli et al. (2022) [147]Building energy performance benchmarkingThe authors mentioned that understanding why a certain prediction is provided by a black-box model is essential in modern contexts where the decisions of an AI system are required to be transparent and fair, such as for certification purposes. And also they mentioned that the proposed method is providing insight about the behavior of classification models used to benchmark the energy performance of buildings and to understand the motivations behind correct and wrong classifications. This information is helpful for certification entities, technical figures, and other stakeholders involved in the decision-making process.
Kaplun et al. (2021) [45]Cancer cell profilingThe authors mentioned that understanding the reasons behind the test results is essential to relay analyzing, retraining, or modifying the model.
Moreno-Sanchez (2023) [74]Cardiovascular medicineThe authors mentioned that explainability is important in their specific application of heart failure survival prediction to facilitate healthcare professionals’ understanding and interpretation of the model’s outcomes.
Wongburi et al. (2022) [150]Wastewater treatmentThe authors mentioned that while complex models like RNNs offer high accuracy, they can be challenging to interpret. Therefore, explainability was crucial to understand why the algorithm made certain predictions in the context of predicting the Sludge Volume Index in a Wastewater Treatment Plant.
Obayya et al. (2022) [79]Diabetic retinopathy grading and classificationThe authors mentioned that explainability is essential in healthcare settings, especially in diagnosing diseases like diabetic retinopathy, to provide transparent and interpretable insights into the decision-making process of the AI model.
Heistrene et al. (2023) [333]Electricity price forecastingTo identify whether or not the model prediction at a given instance is trustworthy.
Azam et al. (2023) [125]Automating the skull stripping from brain magnetic resonance (MR) imagesThey need visualizations to detect/segment the brain from non-brain tissue.
Ribeiro et al. (2022) [334]Detection of abnormal screw tightening processesFor an interactive visualization tool that provides explainable artificial intelligence (XAI) knowledge for the human operators, helping them to better identify the angle–torque regions associated with screw tightening failures.
Zinonos et al. (2022) [144]Grape leaf disease identificationTo visualize the decisions of the CNN’s output layer.
Neupane et al. (2022) [184]Intrusion detection/cybersecurityFor example, to enable users to locate malicious instructions.
Aslam et al. (2022) [151]Prediction of undesirable events in oil wellsTo enable surveillance engineers to interpret black box models to understand the causes of abnormalities.
Pisoni et al. (2021) [225]Explanations for artTo provide improved accessibility to museums and cultural heritage sites.
Blomerus et al. (2022) [335]Synthetic aperture radar target detectionTo furnish the user with additional information about classification decisions.
Estivill-Castro et al. (2022) [336]Human-in-the-loop machine learningHumans (domain experts or not) tend not to trust ML systems if they do not provide explanations. Sufficient explanations are important because they show correlations between features and therefore help understand the model.
Mardian et al. (2023) [152]Predicting drought in Canadian prairiesDroughts in prairies are becoming increasingly worse due to climate change, and droughts cause losses in agriculture. Explainable AI can give insight on what factors predict or induce drought, and with this knowledge losses can be minimized.
Park et al. (2023) [93]Predicting drug responseML models do not give information about feature importance, but it is important to know genomic features that affect drug response prediction. XAI is not much researched in drug response prediction, and because of these reasons, explainability is necessary and needs to be researched more in this use case.
Danilevicz et al. (2023) [145]Plant genomicsThe authors mentioned that by extracting and ranking the most relevant genomic features employed by the best performing models, they can provide insights into the interpretability of the models and the identification of important motifs for lncRNA classification. And also, they mentioned that explainability is essential for understanding the underlying mechanisms driving the classification of lncRNAs and for gaining insights into the regulatory motifs present in plant genomes.
Alfeo et al. (2022) [159]Predictive maintenance in manufacturing systemsThe authors mentioned that explainable ML provides human-understandable insights about the mechanism used by the model to produce a result, such as the contribution of each input in the prediction, and this is essential in the context of predictive maintenance.
Sargiani et al. (2022) [55]Predicting COVID-19Lack of explanations makes ML systems incomprehensible for medical experts. Explanations are needed to make sure the doctor can be the one that makes the final decision. Explainability can also help detect biases in models, because biases are not uncommon in COVID prediction models due to unbalanced data.
Angelotti et al. (2023) [337]Cooperative multi-agent systemsThe authors mention the importance of explainability in their specific application to shed light on why one agent is more important than another in a cooperative game setting. Also, the authors mentioned that they can provide insights into the factors that influence the achievement of a common goal within a multi-agent system by assessing the contributions of individual agents’ policies and attributes.
Jeong et al. (2022) [103]Predicting Alzheimer’s disease dementiaExplainability is needed to make sure that ML models’ decision-making processes line up with current knowledge on Alzheimer’s disease and dementia. When a model is proven to work accurately, it can be implemented in concrete use cases.
Pereira et al. (2021) [208]Early prediction of student performanceThe authors mentioned that the explainability is important in their specific application to facilitate human AI collaboration towards perspective analytics. And also, they mentioned that by providing explanations for the predictive model decisions, they can support students, instructors, and other stakeholders in understanding why certain predictions were made.
Wickramasinghe et al. (2021) [286]Cyber-physical systemsThe authors mentioned that it is essential to understand the reasoning behind the model’s decisions in CPSs because the outcomes of machine learning models can have significant impacts on safety, security, and operations. And also they mentioned that explainable unsupervised machine learning methods are needed to enhance transparency, trust, and decision-making in CPS applications.
Bello et al. (2024) [241]Object detection and image analysisThe authors mentioned that the explainability is essential to providing insights into the decision-making process of complex deep learning architectures.
Song et al. (2022) [253]Predicting minimum energy pathways in chemical reactionWhite-box models often fail to express complex chemical systems (like enzyme catalysis). Explainability is needed to enhance understanding of ML models and assist in responsible decision-making.
Tang et al. (2022) [338]Safety of XAI, preventing adversarial attack on XAI systemIt has been shown that explanation techniques are vulnerable to adversarial attacks; one can change explanations without changing the model outcome. Stability of explanations is important and needs to be studied to achieve safer ML/XAI systems.
Al-Sakkari et al. (2023) [339]Carbon dioxide capture and storage (CCS)The authors mentioned that explainability is important in their specific application to gain a better understanding of the effects of process and material properties on the variables of interest. Further, they mentioned that by adding explainability to accurate models, it can provide insights into the impact of different variables on the measured variables, enhancing the overall understanding of the system.
Iliadou et al. (2022) [100]Predicting factors that influence hearing aid useAutomated decision-making systems that utilize AI are not easily accepted in healthcare because they are usually not interpretable and therefore not trustworthy. In healthcare and medical decision-making, accountability in the case of a wrong decision is a serious ethical question when AI is used to make decisions. Explainability is needed to increase trust and transparency.
Kwong et al. (2022) [46]Prostate cancer managementThe authors mentioned that explainability is important not only to provide predictions but also to highlight the variables driving the predictions. Further, they mentioned that this transparency helps build trust in the model by ensuring that the predictions and explanations align with clinical intuition.
Ge et al. (2023) [297]Cyber threat intelligence (CTI) analysis and classificationThe authors mentioned that explainability is important in their application to enhance the interpretability, reliability, and effectiveness of cyber threat behavior identification through the clear delineation of key evidence and decision-making processes.
Alcauter et al. (2023) [209]Predicting dropout in faculty of engineeringExplanations can give comprehensive insight on prediction factors. This increases trust and also enables precise actions on decreasing dropout rates in engineering studies, where dropout rates are high.
Apostolopoulos et al. (2022) [340]Skin cancer detection and classificationThe authors mentioned that explainability in their specific application is important to help in understanding and interpreting the decisions made by the model, especially in the context of diagnosing skin lesions. Further, they mentioned that by visualizing the important regions, the model’s predictions can be better understood and trusted, leading to more transparent and interpretable results in the classification of skin lesions.
Brdar et al. (2023) [254]Pollen identificationClassifying pollen is a complicated ML task because of complex data (chemical structure of pollen, etc.), and intrinsically interpretable models often fail in performance. Explainability of black-box models’ solutions is needed to create trust towards AI systems.
Apostolopoulos et al. (2022) [340]COVID-19 detectionThe authors mentioned that explainability in their specific application is important to provide insights into the decision-making process of the deep learning model. Also, they mentioned that transparency is also essential for building trust in the model’s decisions, ensuring accountability, and enabling the actual users of the model to understand and interpret the image findings.
Henzel et al. (2021) [61]COVID-19 data classificationThe authors mentioned that explainability in their specific application is important to explain the relationship between symptoms and the predicted outcomes. And also to enhance the interpretability of the models and provide a transparent understanding of how the classifiers make decisions.
Deramgozin et al. (2023) [341]Facial action unit detectionExplanations in systems that detect facial expressions give interpretability and deeper understanding about how the model works.
Maouche et al. (2023) [43]Breast cancer metastasis predictionThe authors mentioned that explainability in their specific application is important to address the issue of model interpretability. Further, they mentioned that the increased complexity of the models is associated with decreased interpretability, which causes clinicians to distrust the prognosis.
Zaman et al. (2021) [263]Control chart patterns recognitionResearchers and other users need explanations of ML-made decisions to evaluate the model and to make correct final decisions. It is important to find explainable and efficient ML systems that do not require too many resources.
Dassanayake et al. (2022) [342]Autonomous vehiclesThe authors mentioned that explainability in their specific application is important to enhance the confidence of DNN-based solutions. They mentioned that for autonomous systems operating in unpredictable environmental conditions, the rationale behind the decisions made by DNNs is essential for accountability, reliability, and transparency, specifically in safety-critical edge systems like autonomous transportation.
McFall et al. (2023) [112]Early detection of dementia in Parkinson’s diseaseThe authors mentioned that explainability in their specific application is important to selectively identify and interpret early dementia risk factors in Parkinson’s disease patients.
Zhang et al. (2023) [56]Diagnosis of COVID-19The authors mentioned that the existing deep learning classifiers lack transparency in interpreting findings, which can limit their applications in clinical practice. Further, they mentioned that providing explainable results (like the proposed CXR-Net model to assist radiologists in screening patients with suspected COVID-19) is reducing the waiting time for clinical decisions.
Qayyum et al. (2023) [343]Material property predictionThe authors mentioned that explainability in their specific application is important to gain insights, interpret model predictions, identify key factors influencing the outcomes, and advance material discovery in the field of PZT ceramics.
Lellep et al. (2022) [344]Relaminarization events in wall-bounded shear flowsThe authors mentioned that explainability in their specific application is important to provide a physical interpretation of the machine learning model’s output. And also, they mentioned that the interpretability is crucial for understanding the underlying physical processes driving relaminarization events and gaining insights into the dynamics of turbulent flows close to the onset of turbulence.
Bilc et al. (2022) [345]Retinal nerve fiber layer segmentationStandard software suffers from noisy data and unclear decision-making processes. Explainability would enable controlling the model’s learning process and also validating the results.
Sakai et al. (2022) [346]Congenital heart disease detectionMedical professionals tend not to trust black-box models and therefore not use them. Explainability would increase use of AI systems by medical professionals and therefore enhance their performance.
Terzi et al. (2021) [347]Credit card fraud detectionExplainability is needed to ensure fairness and ethics of ML model-made decisions and also to improve ML model performance. Credit card frauds are constantly evolving, and explainability techniques give insight on ML models and therefore can help detect new kinds of attacks.
Allen (2023) [260]Obesity prevalence predictionExplainability of ML models that predict obesity can help detect the features that affect obesity rates the most and therefore lead to better decision-making in obesity prevention.
Kothadiya et al. (2023) [348]Sign language recognitionThe authors mentioned that explainability in their specific application is important for sign language recognition to address variability in sign gestures, facilitate communication for physically impaired individuals, and enhance user trust and understanding of the recognition model.
Slijepcevic et al. (2023) [349]Human gait analysis in children with Cerebral PalsyThe authors mentioned that explainability in their specific application is important to promote trust and understanding of machine learning models in clinical practice, especially in the medical field where decisions impact patient care and outcomes. And also, they mentioned that by examining whether the features learned by the models are clinically relevant, explainability ensures that the decisions made by the models align with the expertise and expectations of healthcare professionals.
Hwang et al. (2021) [350]Sensor fault detectionThe authors mentioned that explainability in their specific application is important to enhance the reliability of AI models, facilitate direct response to threats, and provide comprehensive explanations for security experts to ensure the safety of Industrial Control Systems.
Rivera et al. (2023) [351]Predicting arrivals at an emergency departmentClassical techniques for explaining regression models are often biased and model-specific. It is important to search for more generalizable and global explanation techniques as AI is increasingly used in critical fields.
Park et al. (2023) [352]Prediction of nitrogen oxides in diesel enginesThe authors mentioned that explainability in their specific application is important to understand the influence of input features on NOx prediction. Further, they mentioned that by explaining the model’s decisions and the relationships between input and output variables, the model becomes more transparent and trustworthy in applications where prediction accuracy and feature importance are essential, such as in the automotive industry for developing low-carbon vehicles.
Abdollahi et al. (2021) [353]Urban vegetation mappingThe authors mentioned that explainability in their specific application is important to comprehend model decisions, to grasp complicated inherent non-linear relations, and to determine the model’s suitability for monitoring and evaluation purposes.
Xie et al. (2021) [354]Air-traffic managementThe authors mentioned that explainability in their specific application is important for transparency in decision-support systems to ensure that the AI/ML algorithms used in predicting risks in uncontrolled airspace can be understood and trusted by human operators.
Al-Hawawreh et al. (2024) [355]Cyber-physical attacks (use case: gas pipeline system)The authors mentioned that explainability in their specific application is important to enhance the trustworthiness of the AI models and to contribute to performance improvements, safety, audit capabilities, learning, and compliance with regulations.
Laios et al. (2023) [50]Cancer predictionThe authors mentioned that explainability in their specific application is important for understanding the features that drive a model prediction, which can potentially aid in decision-making in complex healthcare scenarios. They also mentioned that as natural language processing moves towards deep learning, transparency becomes increasingly challenging, making explainability essential for ensuring trust and understanding in the model’s predictions.
Ramirez-Mena et al. (2023) [47]Prostate cancer predictionPredicting and identifying prostate cancer is difficult because of complex indicators of disease. Medical professionals would benefit from using clinical decision support systems for diagnosing prostate cancer, but they often do not use them because they are not interpretable and trustworthy. XAI is needed to increase trust and therefore allow the use of diagnostic tools for cancer prediction.
Srisuchinnawong et al. (2021) [356]RoboticsState-of-the-art explanation techniques often fail to visualize the whole structure of a neural network (its neural ingredients) and also do not support a robot interface.
Dai et al. (2023) [357]Using classical statistical analysis methods for explaining NN modelsExplainability increases trust in and understanding of AI systems and therefore enables AI system use in clinical settings.
Feng et al. (2022) [300]Remote sensing image scene classificationRemote sensing image scene classification is a computationally demanding task, and deep learning methods and neural networks provide the computational accuracy and efficiency needed. Lack of explainability in those black-box methods leads to distrust towards models.
Li (2022) [358]Ride-hailing service demand modelingThe authors mentioned that the explainability of their specific application is important to understand the processes underlying the observed data rather than solely performing predictive tasks in the context of spatial data modeling. And also, they mentioned that explainability is essential for extracting spatial relationships, visualizing them on maps, and enabling analysts to understand and interpret the spatial effects captured by the machine learning models.
Palatnik de Sousa et al. (2021) [57]Predicting COVID-19 disease from chest X-ray image and CT-scanExplainability is important in AI systems that are used in healthcare to ensure accuracy of models’ decisions and trust towards models. Explanations can also help detect different kinds of biases in AI systems.
Delgado-Gallegos et al. (2023) [62]Assessment of perceived stress in healthcare professionals attending COVID-19The authors mentioned that the explainability of their specific application is important to locate the combination of factors necessary to correctly classify healthcare professionals based on their perceived stress levels. Further, they mentioned that the decision tree model served as a graphical tool that allowed for a clearer interpretation of the factors influencing stress levels in healthcare professionals.
Gonzalez-Gonzalez et al. (2022) [359]Industrial carbon footprint estimationThe authors mentioned that the explainability of their specific application is important to provide a human operator with an in-depth understanding of the classification process and to validate the relevant explanation terms.
Elayan et al. (2023) [360]Power consumption predictionThe authors mentioned that the explainability of their specific application is important to ensure that users understand and trust the system’s predictions and decisions regarding power consumption. Further, they mentioned that users can gain insights into the model’s behavior, biases, and outcomes, and explainability increased transparency and user confidence in the system.
Duc Q Nguyen et al. (2022) [58]COVID-19 forecastingThe authors mentioned that the explainability of their specific application is important and essential for building trust in the AI model, facilitating collaboration between AI systems and human experts, and ultimately improving the effectiveness of decision-making processes in managing the COVID-19 pandemic.
Cheng et al. (2022) [361]Deep learning models used in forestryResearchers and users find it difficult to understand the black-box models that are widely used due to their high performance and efficiency. Explanations can give otherwise unobtainable hints about how the ML model can be improved.
Gomez-Cravioto et al. (2022) [210]Alumni income predictionStudying alumni income and socioeconomic status can help educational institutions in improving efficiency and planning the studies, which helps future graduates. Explainability can give necessary insight on important factors that influence the success of the alumni.
Qiu et al. (2023) [362]Biological age predictionLinear models are used in biological age prediction because of their interpretability, but they offer low accuracy. Explainability of black-box models is needed so more efficient and accurate models can be used. Local explanations are needed so ML models can be used on individuals.
Abba et al. (2023) [363]Water quality assessmentThe authors mentioned that the explainability of their specific application is important to demonstrate the impact of individual features on the model’s predictions and supports stakeholders and decision-makers in making informed choices regarding groundwater resource management.
Martinez-Seras et al. (2023) [364]Image classificationThe authors mentioned that the explainability of their specific application is important to build the trustworthiness of machine learning models, especially with Spiking Neural Networks, and it is essential for ensuring the acceptance and adoption of these models in real-world settings.
Krupp et al. (2023) [365]Tool life predictionThe authors mentioned that the explainability of their specific application is important to enable domain experts from the field of machining to develop, validate, and optimize remaining tool life models without extensive machine learning knowledge.
Nayebi et al. (2023) [366]Clinical time series analysisThe authors mentioned that explainability in their specific application is important for understanding the features that drive a model prediction, which can potentially aid in decision-making in complex healthcare scenarios.
Lee et al. (2022) [367]De-identification of medical dataA de-identification system without explainability is unusable in the medical domain because of critical data. Explainability is needed to gain transparency and also to assist in developing better de-identification models and modifying processes connected to de-identification.
Nahiduzzaman et al. (2023) [368]Mulberry leaf disease classificationMulberry is a culturally important plant in the Himalayan area, and few, if any, studies have been conducted to improve mulberry leaf disease detection. Explainability enables the use of AI systems for disease detection by mulberry farmers and enhances model development. An explainable model could also be used to detect diseases on other plants’ leaves.
Khan et al. (2022) [369]Vision-based industrial applicationsThe authors mentioned that the model’s decisions and predictions can be understood and interpreted by ensuring explainable and correct annotations from the proxy model. And also, they mentioned that transparency in the decision-making process is essential for building trust in the model’s outputs in industrial applications.
Beucher et al. (2022) [370]Detecting acid sulfate in wetland areasExplanations would validate and clarify otherwise uninterpretable ML model decisions. Explainability is needed so that the results of research can be communicated to both expert and non-expert audiences. Explanations also help build better ML models.
Kui et al. (2022) [371]Disease severity predictionThe authors mentioned that the explainability in their specific application is important to help physicians understand the decision-making process of the ML model.
Szandala (2023) [372]Explaining ML models with saliency mapsAI methods cannot be implemented in critical fields without a good understanding of how models work. State-of-the-art visual XAI techniques do not explain why important areas are important. Saliency maps with information about the selection of important areas give more comprehensive insight into the ML model’s decision-making.
Rengasamy et al. (2021) [373]Safety critical systemsThe authors mentioned that in the context of safety-critical systems, explainability is essential for ensuring transparency, accountability, and trust in the decision-making process facilitated by ML models.
Jahin et al. (2023) [374]Supply chain managementThe authors mentioned that practical implications of their specific application include improved inventory control, reduced backorders, and enhanced operational efficiency. Thus, by using explainability, it empowers the decision-making and efficient resource allocation in supply chain management systems. Further, they mentioned that this transparency and interpretability are essential for stakeholders to understand the model’s predictions and trust its recommendations.
Nielsen et al. (2023) [375]Evaluating and explaining XAI methodsExplaining and evaluating the explanations given by XAI methods is important to ensure model robustness, faithfulness, and safety.
Hashem et al. (2023) [376]Brain computer interface system to Analyze EEG signalsThe authors mentioned that by using explainability, they can get greater transparency and understanding of the relationship between the EEG features and the model’s predictions. Further, they mentioned that this transparency is essential for enhancing the interpretability of the BCI systems in the context of controlling diverse limb motor tasks to assist individuals with limb impairments and improve their quality of life.
Lin et al. (2022) [377]Classifying lncRNA and protein-coding transcriptsRNA data is complicated, and neural network models have shown the best performance in classification tasks, but the neural network models lack interpretability. Explainability increases understanding of the ML model’s decision-making process in the RNA classification task.
Chen et al. (2023) [378]Land cover mapping and monitoringThe authors mentioned that the explainability of their specific application is important to enable researchers and practitioners to understand how ML models work in order to strategically improve model performance for land cover mapping with Google Earth Engine, to support fine-tuning and optimizing models, to help gauge trust in the models, and to address the lack of explainability in some parts of the scientific process.
Oveis et al. (2023) [379]Automatic target recognitionIn automatic target recognition, it is very important to know that the ML model learns to look at the right things (the target) in image data, because when a new kind of situation (a new kind of truck, car, or tank) appears, the model has to be able to recognize the target correctly.
Llorca-Schenk et al. (2023) [380]Designing porthole aluminium extrusion diesThe authors mentioned that explainability is important in their specific application to help when deciding the best way in which to adjust an initial design to the predictive model.
Diaz et al. (2023) [381]HR decision-makingThe authors mentioned that the explainability is essential in their application of predicting employee attrition as it helps in designing effective retention and recruitment strategies as well as enhances trust, transparency, and informed decision-making in human resources management.
Pelaez-Rodriguez et al. (2023) [382]Extreme low-visibility events predictionTo enable the consideration of interpretability, which is an extremely important additional design driver, especially in some areas where the physics of the problem plays a major role, such as geoscience and Earth observation problems.
An et al. (2023) [383]NATo understand how deep learning models make predictions.
Anjara et al. (2023) [48]Oncology (lung cancer relapse prediction)To improve trust and adoption of AI models.
Glick et al. (2022) [384]Dental radiographyTo assist/help novice dental clinicians (dental students) in decision-making.
Qureshi et al. (2023) [385]Mosquito trajectory analysisTo give insights into the mechanisms that may limit mosquito breeding and disease transmission.
Kim et al. (2023) [78]CardiologyTo reduce a high rate of false alarms in cardiac arrest prediction models and to make their results clinically (more) interpretable.
Wen et al. (2023) [386]Alzheimer’s disease detection (from patient transcriptions)To discover the underlying relationships between PoS features and AD.
Alvey et al. (2023) [387]Aerial image analysisTo understand and explain the behavior of deep learning models.
Maaroof et al. (2023) [82]Diabetes predictionTo gain insight into how the model makes its predictions and build trust in its decision-making process.
Hou et al. (2022) [388]Image classificationTo produce improved filters for preventing advanced backdoor attacks.
Nakagawa et al. (2021) [389]Mortality prediction of COVID-19 patients (from healthcare data)To allow data scientists and developers to have a holistic view, a better understanding of the explainable machine learning process, and to build trust.
Yang et al. (2022) [390]Process execution time predictionTo explain how ML models that predict the time until the next activity in the manufacturing process work.
O’Shea et al. (2023) [391]Lung tumor detectionInterpretability is essential for reliable convolutional neural network (CNN) image classifiers in radiological applications.
Tasnim et al. (2023) [392]CardiologyTo examine the contribution of features to the decision-making process and to foster public confidence and trust in ML model predictions.
Marques-Silva et al. (2023) [393]NANA
Lin et al. (2023) [274]Visual reasoningTo disclose/explain the decision-making process from the numerous parameters and complex non-linear functions.
Pedraza et al. (2023) [394]Sensor measurementsTo better understand the AI model.
Kwon et al. (2023) [395]NATo derive a mechanism of quantifying the importance of words from the explainability score of each word in the text.
Rosenberg et al. (2023) [396]Integer linear programming and quadratic unconstrained binary optimizationExplainability is needed so that ML models can be trusted. Explainability can also help detect biases and help improve the ML model. Expressive Boolean formulas for explanations can increase flexibility and interpretability.
O’Sullivan et al. (2023) [397]Water quality modelingNeural network models provide good performance in simulating water quality, but because of poor explainability, the model’s decisions are hard to use for making management decisions. Explainability would increase the trust in and usability of ANN models in water quality modeling.
Richter et al. (2023) [398]Radar-based target classificationThe authors mentioned that the explainability is important in their specific application to obtain insights about the decision-making processes of the model and ensure the reliability and effectiveness of their system.
Khan et al. (2024) [237]Traffic sign recognitionTraffic sign recognition systems need to be accurate and reliable, and that is why neural network models are used. They lack interpretability, which makes detecting bias and evaluating model performance difficult. Transparent and safe systems in this kind of critical application of ML models are very necessary.
Heimerl et al. (2022) [235]Emotional facial expression recognitionThe authors mentioned that the explainability is important in their specific application to fully understand the underlying process in the classification because the classification results could lead to harmful events for individuals. Further, they have mentioned that the transparency and interpretability of AI models are really important in applications involving sensitive information and safety-critical scenarios.
Dong et al. (2021) [399]Medical image noise reduction by feature extractionPortable ultrasound devices are cheap and very convenient, but they can give noisy images. Explainability and identifying important features are crucial for successful noise reduction with feature extraction in the medical domain.
Murala et al. (2023) [400]Healthcare metaverse online modelExplainability is needed in online healthcare (the medical metaverse) so doctors can have more information about patients’ status and therefore make better medical decisions. Explainability of AI-made medical decisions enhances transparency, reliability, predictability, and therefore safety, which benefits both the doctor and the patient.
Brakefield et al. (2022) [401]Health surveillance and decision supportThe authors mentioned that explainability is important in their specific application to improve decision-making capability for physicians, researchers, and health officials at both patient and community levels. Further, they mentioned that there are many existing digital health solutions that lack the ability to explain their decisions and actions to human users, which can hinder informed decision-making in public health.
Lee et al. (2021) [204]Predicting online purchase based on information about online behaviourEnd users can find it hard to trust a black-box model and therefore end up not using the model. Explainability increases trust towards models and enables reliable use of AI, which can be very beneficial in the context of online marketing.
Ortega et al. (2021) [402]Applying inductive logic programming to explain black-box modelsIn some applications of AI, explanations are required by law or needed to ensure the ethics of decision-making. Inductive logic programming systems are interpretable by design, and applying this method to classical ML models enhances their interpretability and performance.
An et al. (2022) [403]Producing clear visual explanations to black-box modelsLack of explainability leads to users not trusting the ML model in critical applications (like healthcare, finance, and security).
De Bosscher et al. (2023) [293]Airport terminal operationsThe authors mentioned that explainability is important in their specific application to interpret and understand emergent properties more efficiently, because existing airport terminal operation models have heavy computational requirements. And also, they mentioned that airport terminals involve complex systems, and explainability helps to understand the dynamics of these complex sociotechnical systems. Further, they mentioned that using explainability, they can identify opportunities for optimization and improvement in processes such as passenger flow, security checkpoints, and overall terminal efficiency.
Huang et al. (2022) [404]Remote sensing scene classificationExplainability in computer vision tasks is important so the ML model can be trusted and used safely. State-of-the-art heatmap explanation methods (CAM-methods) can give a good explanation to a black-box model, but they are not always accurate (failing to detect multiple planes, for example).
Senocak et al. (2023) [405]Precipitation forecastThe authors mentioned that the explainability is important in their specific application to make the machine learning models more transparent, interpretable, and aligned with domain expertise. And also to enhance the reliability and utility of the predictions.
Kalutharage et al. (2023) [406]Anomaly detectionExplainability is important in AI applications in the field of cybersecurity because one mistake made by the model can lead to a large amount of damage. Explanations make the ML model trustworthy and also enable better model development. Detecting important features can also increase the performance of the intrusion detection system.
Sorayaie Azar et al. (2023) [407]Monkeypox detectionThe authors mentioned that the explainability is important in their specific application to provide a clear understanding of the decision-making process of the AI models. Further, they mentioned that by providing explainability, the clinicians can gain deeper insights into how the AI models arrive at their predictions, which is crucial for fostering trust and confidence in the reliability of AI systems in real-world clinical applications.
Di Stefano et al. (2023) [408]Early diagnosis of ATTRv AmyloidosisThe authors mentioned that explaining the prediction is essential in medical domains because the patterns a model discovers may be more important than its performance.
Huong et al. (2022) [409]Industrial Control Systems (ICS)The authors mentioned that the explainability is important in their specific application of anomaly detection in Industrial Control Systems (ICS) because explaining the detection outcomes and providing explanations for anomaly detection results is essential for ensuring that experts can understand and trust the decisions made by the model.
Diefenbach et al. (2022) [410]Smart living roomTo understand how the technology works, what its limits are, and what consequences regarding autonomy and privacy emerge.
Gkalelis et al. (2022) [265]Video event and action recognitionTo derive explanations along the spatial and temporal dimensions for the event recognition outcome.
Patel et al. (2022) [411]Water quality predictionTo provide the transparency of the model to evaluate the results of the model.
Mandler et al. (2023) [292]Data-driven turbulence modelsTo make the prediction process of neural network-based turbulence models more transparent.
Kim et al. (2023) [412]Cognitive load predictionTo detect important features.
Huelsmann et al. (2023) [270]Energy system designTo better understand the influence of all design parameters on the computed energy system design.
Schroeder et al. (2023) [413]Predictive maintenanceFor creating a wide acceptance of AI models in real-world applications and aiding the identification of artifacts.
Singh et al. (2022) [414]Bleeding detection (from streaming gastrointestinal images)To reverse engineer the test results for the impact of features on a given test dataset.
Pianpanit et al. (2021) [113]Parkinson’s disease (PD) recognition (from SPECT images)For easier model interpretation in a clinical environment.
Khanna et al. (2022) [289]Assessing AI agentsTo be able to assess an AI agent.
Kumara et al. (2023) [415]Performance prediction for deployment configurable cloud applicationsTo provide explanations for the prediction outcomes of valid deployment variants in terms of the deployment options.
Konforti et al. (2023) [416]Image recognitionTo explain neural network decisions and internal mechanisms.
Ullah et al. (2022) [255]Credit card fraud detection; customer churn predictionTo improve trust and credibility in ML models.
Gaur et al. (2022) [115]Prediction of brain tumors (from MRI images)To realize disparities in predictive performance, to help in developing trust, and in integration into clinical practice.
Al-Hussaini et al. (2023) [123]Seizure detection (from EEG)To foster trust and accountability among healthcare professionals.
Oblak et al. (2023) [417]Fingermark quality assessmentTo make the models more transparent.
Sovrano et al. (2022) [219]Questions answering (as explaining)To make the models more transparent.
Zytek et al. (2022) [213]Child welfare screening (risk score prediction)To overcome ML usability challenges, such as lack of user trust in the model, inability to reconcile human-ML disagreement, and ethical concerns about oversimplification of complex problems to a single algorithm output.
Quach et al. (2024) [247]Tomato detection and classificationTo assess model reliability.
Guarrasi et al. (2023) [59]Prediction of the progression of COVID-19 (from images and health record)To enable physicians to explore and understand data-driven DL-based system.
Le et al. (2021) [418]NANot mentioned.
Capuozzo et al. (2022) [419]Glioblastoma multiforme identification (from brain MRI images)To assess the interpretability of the solution showing the best performance and thus to take a little step further toward the clinical usability of a DL-based approach for MGMT promoter detection in brain MRI.
Vo et al. (2023) [420]Dragon-fruit ripeness (from images)To explain the outcomes of the image classification model and thereby enhance its performance, optimization, and reliability.
Artelt et al. (2022) [421]NAThere is no specific application.
Abeyrathna et al. (2021) [422]NAThere is no specific application.
Krenn et al. (2021) [276]Experimental quantum opticsThe interpretable representation and enormous speed-up allow one to produce solutions that a human scientist can interpret and gain new scientific concepts from outright.
Pandiyan et al. (2023) [423]Laser powder bed fusion processTo highlight the most relevant parts of the input data for making a prediction.
Huang et al. (2023) [222]Assessment of familiarity ratings for domain conceptsTo be able to evaluate familiarity ratings of domain concepts more in-depth and to underline the importance of focusing on domain concepts’ familiarity ratings to pinpoint helpful linguistic predictors for assessing students’ cognitive engagement during language learning or online discussions.
Jeon et al. (2023) [424]Land use (from satellite images)To enhance the reliability of the image analysis.
Fernandez et al. (2022) [264]NAThere is no specific application; however, the explainability is important because of the increasing number of applications where it is advisable and even compulsory to provide an explanation.
Jia et al. (2022) [425]WiFi fingerprint-based localizationTo improve the trust of the proposed method.
Munkhdalai et al. (2022) [426]NAThere is no specific application.
Schrills et al. (2023) [295]Subjective information processing awareness (in automated insulin delivery)To help users cooperate with AI systems by addressing the challenge of opacity, subjective information processing awareness (SIPA) is strongly correlated with trust and satisfaction with explanations; therefore, explanations and higher levels of transparency may improve cooperation between humans and intelligent systems.
Gouabou et al. (2022) [427]Melanoma detectionTo overcome the dermatologist’s fear of being misled by a false negative and the assimilation of CNNs to a “black box”, making their decision process difficult to understand by a non-expert.
Okazaki et al. (2022) [205]Customer journey mapping automation (through model-level data fusion)Trustworthiness and fairness have to be established (by using XAI) in order for the black-box AI to be used in the social systems it is meant to support.
Mridha et al. (2023) [41]Skin cancer classificationUsing provided explanations, the clinician may notice the color irregularity in the dermatoscopic picture, which is not evident on the lesion, and figure out why the classifier predicted incorrectly.
Abeyrathna et al. (2021) [428]NAThere is no specific application.
Nagaoka et al. (2022) [429]COVID-19 prediction (from lung CT slice images)The Grad-CAM method has been used so the authors could be sure that their method used only pixels from certain locations of the image (where the lungs are) for classification; the explanation itself is not important here.
Joshi et al. (2023) [234]Misinformation detection (specifically COVID-19 misinformation; from texts)Knowing the reasoning behind the outcomes is essential to making the detector trustworthy.
Ali et al. (2022) [430]COVID-19 prediction (from X-ray images)To maintain the transparency, interpretability, and explainability of the model.
Elbagoury et al. (2023) [431]Stroke prediction (based on EMG signals)To support (personalized) decision-making.
Yuan et al. (2022) [432]Human identification and activity recognitionExplainability is necessary so relationships between model inputs and outputs can be identified. This is necessary so the behavior of the proposed method (fusion model) can be inferred.
Someetheram et al. (2022) [433]Explaining and improving the discrete Hopfield neural networkElection algorithms are effective at reducing the complexity of HNN models, but how they do so is not known. It is known how the complexity should be reduced, so explainability is needed to ensure the models work in the right way.
Sudars et al. (2022) [434]Traffic sign classificationThe authors mentioned that understanding the decision-making process of the CNNs is essential in applications where human lives are at stake, such as autonomous driving. Further, they mentioned that using explainability can provide insights into the inner workings of the CNN model for improved transparency and trust in the classification results.
Altini et al. (2023) [269]Kidney tumor segmentationThe authors mentioned that explainability is important in their specific application because providing explanations for the model’s outputs is essential for ensuring that experts can understand and trust the decisions made by the model.
Serradilla et al. (2021) [273]Predictive maintenanceExplanations about the model’s decision in the anomaly detection task enable the operator to evaluate the model’s accuracy and act based on their own expertise.
Aslam et al. (2023) [435]Malware detectionBlack-box models are widely used in studies and real-life applications of malware detection, but black-box models lack interpretability that could be used to validate models’ decisions. Explainability in the context of malware detection on Android devices has not been studied much, and new information about attacks on Android devices could be gained by applying explainability to malware detection models. Explainability also would help users trust malware detection models.
Shin et al. (2023) [436]Network traffic classificationThe authors mentioned that explainability in their specific application is important to increase reliability. And also, they mentioned that as the performance of both ML and DL models improves, the derivation process of the results becomes more opaque, highlighting the need for research on transparent design and post-hoc explanation for artificial intelligence.
Samir et al. (2023) [437]Bug assignment and developer allocationThe authors mentioned that the explainability of their specific application is important to increase user trust and satisfaction with the system.
Guidotti et al. (2021) [438]Distinguishing heart rate time series of normal heartbeats from those of myocardial infarctionTo reveal how the AI system reasons, making it easier to agree or disagree with it. Also, developers can unveil misclassification reasons and vulnerabilities and act to align the AI reasoning with human beliefs.
Ekanayake et al. (2023) [439]Predict adhesive strengthFor identifying the importance of features and elucidating the ML model’s inner workings.
Hendawi et al. (2023) [81]Diabetes predictionThe authors mentioned that providing easily interpretable explanations for complex machine learning models and their outcomes is essential for healthcare professionals to get a clear understanding of AI-generated predictions and recommendations for diabetes care.
Kobayashi et al. (2024) [440]Predict remaining useful life within intelligent digital twin frameworksFor AI decisions to be audited, accounted for, and easy to understand.
Misitano et al. (2022) [271]Multiobjective optimizationExplanations support the decision maker (user) to make the changes needed in the multiobjective optimization task. The opaqueness of black-box methods is problematic when these methods are applied in critical domains (healthcare, security, etc.).
Leite et al. (2023) [282]Predict the geographic location of a vehicleTo ensure stable and understandable rule-based modeling.
Varam et al. (2023) [248]Endoscopy image classificationThe authors mentioned that the explainability is essential in their specific application to enhance the reliability, trust, and interpretability of deep learning models for Wireless Capsule Endoscopy image classification, which benefits clinical research and decision-making processes in the medical domain.
Bitar et al. (2023) [441]Explaining spiking neural networksThere is very little research on explaining spiking neural networks. Explainability of these models is needed so they can be understood better and therefore improved more efficiently. Also, developing model-specific explanation tools for SNN models is beneficial because model-specific tools are often less computationally demanding than model-agnostic XAI tools.
Kim et al. (2023) [442]Cerebral cortices processingThe authors mentioned that the explainability is essential in their specific application to understand the neural representations of various human behaviors and cognitions, such as semantic representation according to words, neural representation of visual objects, or kinetics of movement. Further, they have mentioned that the explainability allows for a deeper understanding of the cortical contributions to decoding kinematic parameters, which is essential for advancing the study of neural representations in different cognitive processes and behaviors.
Khondker et al. (2022) [443]Pediatric urologyThe authors mentioned that the explainability is essential in their specific application for transparency in the medical field in pediatric urology, as it allows clinicians to comprehend the factors influencing the model’s decisions and enhances confidence in the model’s predictions.
Lucieri et al. (2023) [444]Biomedical image analysisThe authors mentioned that the explainability is essential in their specific application to address the privacy risks posed by concept-based explanations. And also they mentioned the need to investigate the privacy risk posed by different human-centric explanation methods such as Concept Localization Maps (CLMs) and TCAV scores to properly reflect practical application scenarios.
Suhail et al. (2023) [445]Cyber-physical systemsThe authors mentioned that the explainability is essential in their specific application to provide justifiable decisions by reasoning what, why, and how specific cybersecurity defense decisions are made in a gaming context. Further, they mentioned the transparency and interpretability given by the explainability are helping in building trust, confidence, and understanding among stakeholders and finally leading to more informed and effective cybersecurity measures in the context of Digital Twins (DTs) and Cyber-Physical Systems (CPS).
George et al. (2023) [87]Predictive modeling for emergency department admissions among cancer patientsThe authors mentioned that the explainability is essential in their specific application to enable clinicians to intervene prior to unplanned emergency department admissions. Further, they mentioned that clinicians can better understand the factors influencing the risk of ED visits using explainability, and it leads to more informed decision-making and potentially improved patient outcomes.
Bacco et al. (2021) [446]Sentiment analysisAI systems for sentiment analysis have a great effect on the real world because sentiment analysis is usually used to analyze customer behavior or public opinion. Explainability is needed to ensure the models make ethical and rightful decisions.
Szczepanski et al. (2021) [229]Fake news detectionSeveral kinds of biases are prevalent and hard to detect when detecting fake news with AI due to the fact that fake news spreads on social media. Explainability is needed to gain understanding of the model’s decision process and therefore prevent biases. Explainability increases trust and therefore enables wider use of AI.
Dong et al. (2021) [256]Classifying functional connectivity for brain-computer interface systemExplainability can lead to new knowledge about aging and using brain-computer interface systems on elderly people.
El-Sappagh et al. (2021) [108]Alzheimer’s disease predictionThe authors mentioned that the explainability is essential in their specific application to ensure that the AI model’s decisions are transparent, understandable, and actionable for clinical practice.
Prakash et al. (2023) [447]Electrocardiogram beat classificationLack of explainability makes AI methods challenging to implement in real-life use cases. Explainability increases trust, user performance, and user satisfaction.
Alani et al. (2022) [448]Malware detectionThe authors mentioned that explainability builds trust in the AI model in their context as well as ensures that the high accuracy originates from explainable conditions rather than from a black-box operation.
Sasahara et al. (2021) [126]Metabolic stability and CYP inhibition predictionExplainability could give new information about the importance of different physicochemical parameters. This knowledge can be used to design better drugs and to understand underlying structures better.
Maloca et al. (2021) [449]Classify medical (retinal OCT) imagesUnderstanding an AI model’s decision process will provide confidence and acceptance of the machine.
Tiensuu et al. (2021) [258]Stainless steel manufacturingThe authors mentioned that the explainability in their specific application is essential for facilitating human decision-making, early detection of quality risks, and conducting root cause analysis to improve product quality and production efficiency.
Valladares-Rodriguez et al. (2022) [73]Cognitive impairment detectionThe authors mentioned that explainability may become a fundamental requirement in their domain and tasks, such as detecting MCI, to improve transparency and interpretability of AI-based decisions.
Ahn et al. (2021) [450]Hospital management and patient careThe authors mentioned that the explainability is important in their specific application to provide persuasive discharge information, such as the expected individual discharge date and risk factors related to cardiovascular diseases. Further, they have mentioned that this explainability can assist in precise bed management and help the medical team and patients understand the conditions in detail for better treatment preparation.
Hammer et al. (2022) [451]Brain Computer Interfacing (BCI)The authors mentioned that the explainability is important in their specific application to uncover and understand how functional specialization emerges in artificial deep convolutional neural networks during a brain-computer interfacing (BCI) task.
Ikushima et al. (2023) [452]Age prediction based on bronchial imageExplainability increases trust toward ML model due to its ability to justify model-made decisions. Explainability can also reveal new information and connections that have not been discovered otherwise.
Kalir et al. (2023) [453]Semiconductor manufacturingThe authors mentioned that explainability in their specific application is important to provide domain experts with insights into the decision-making process of the machine learning models. Further, they mentioned that this transparency in model predictions helps in building trust in the AI systems and aids in decision-making processes related to capacity, productivity, and cost improvements in semiconductor manufacturing processes.
Shin et al. (2022) [454]Cardiovascular age assessmentExplainability can give new information about features that predict cardiovascular aging. Explainability also helps evaluate model performance and improve the model.
Chandra et al. (2023) [455]Soil fertility predictionThe authors mentioned that explainability in their specific application is important to build trust and transparency, to enhance the decision-making process, and to provide user-friendly interpretation.
Blix et al. (2022) [456]Water quality monitoringThe authors mentioned that explainability in their specific application is important to understand the relevance of spectral features for optical water types. Further, they mentioned that explainability provides insights into which variables were affecting each derived water type. And also, they mentioned that this understanding is essential for improving the estimation of chlorophyll-a content through the application of preferred in-water algorithms and improving the accuracy and interpretability of water quality monitoring processes.
Resendiz et al. (2023) [238]Cancer diagnosisThe authors mentioned that explainability in their specific application is important to address the black box problem associated with deep learning methods in medical diagnosis. And also, they mentioned that the lack of semantic associations between input data and predicted classes in deep learning models can hinder interpretability, which can lead to potential risks when applying these systems to different databases or integrating them into routine clinical practice.
Topp et al. (2023) [457]Predicting water temperature changeML models can behave unpredictably when new data is used. Explainability is needed to be able to evaluate and justify the model’s performance and decisions. Explanations are used to evaluate the fidelity and generalizability of the ML model.
Till et al. (2023) [458]Wrist fracture detectionExplainability in ML models used in healthcare is needed to ensure trust toward the model because the IT knowledge of healthcare professionals is often limited. Trading predictive performance for explainability (black-box versus white-box models) is problematic, which makes explaining black-box models important.
Aswad et al. (2022) [459]Flood predictionVariables used in flood prediction can have some complexity, and explainability is needed to evaluate model performance and understand the decisions.
Kalyakulina et al. (2023) [71]Predicting immunological ageExplainability enhances understanding of a model’s decision-making process, but it can also be used to improve model performance by selecting only the most important features for computation (a generic sketch of this idea is given after this table). This reduces the cost of using the ML model and therefore makes it more usable. Local explanations are necessary to personalize treatments when needed.
Ghosh et al. (2024) [460]Predicting energy generation patternsExplanations give practical insight into previously theoretically studied energy generation patterns. Explanations give important information about relationships and dependencies between the different features that affect energy production, especially when the focus is on clean energy.
Katsushika et al. (2023) [75]Predicting reduced left ventricular ejection fraction (LVEF)Medical practitioners using ML models in medical decision-making need explainability to be able to evaluate and validate model-made decisions. Without explainability, medical practitioners cannot utilize ML models to help their work.
Hernandez et al. (2022) [107]Predicting Alzheimer’s disease and mild cognitive impairmentExplainability enables getting information about important features and also about the model (preprocessing, feature selection, methods). Explainability is important for validating model-made decisions with domain knowledge from the user.
Mohanrajan et al. (2022) [461]Predicting land use/land cover changesThe authors mentioned that with explainability the predicted results will be more informative and trustworthy for urban planners and the forest department, enabling them to take appropriate measures to protect the environment.
Zhang et al. (2023) [462]Strain predictionExplainability is needed to understand relationships and qualitative/quantitative impacts of input parameters in modeling mechanical properties. Explainability can also reveal new information about the effects of stress and strain on each other.
Wang et al. (2023) [463]ML-based decision support in medical fieldExplainability is important so that medical practitioners can communicate their and ML model-made decisions to patients, therefore ensuring patient autonomy and informed consent. Explanations also help develop better ML models by revealing the model’s decision-making process. Here explainability is used to evaluate the effect of feature selection on model performance.
Pierrard et al. (2021) [464]Medical imagingThe authors mentioned the need for transparency and human understandability in the reasoning of the model in critical scenarios where decisions based on image classification and annotation can have significant consequences.
Praetorius et al. (2023) [465]Detecting intramuscular fatExplainability is needed to ensure generalizability of the ML model used in intramuscular fat detection.
Escobar-Linero et al. (2023) [215]Predicting withdrawal from legal proceedings in cases of intimate partner violence against womenExplainability is important so that the reasons behind (predicted) withdrawal from legal proceedings can be recognized. Explainability can provide new knowledge about the features that affect participation and therefore help in supporting victims of intimate partner violence. The data from legal cases of intimate partner violence is quite complex, and explainability is needed to interpret the black-box models required for this prediction task.
Pan et al. (2022) [466]Biometric presentation attack detectionThe authors mentioned that the explainability in their specific application is important to enhance the usability, security, and performance of their Facial Biometric Presentation Attack Detection system.
Wang et al. (2023) [467]Drug discovery applicationsThe authors mentioned that understanding the decisions made by models is crucial for building trust and credibility in their predictions in the context of drug discovery, where interpretability is essential for inferring target properties of compounds from their molecular structures. Further, they mentioned that explainability in drug design is a way to leverage medicinal chemistry knowledge, address model limitations, and facilitate collaboration between different experts in the field.
Jin et al. (2023) [252]Medical image analysisThe authors mentioned that explaining model decisions from medical image inputs is essential for deploying ML models as clinical decision assistants. Further, they mentioned that providing explanations helps clinicians understand the reasoning behind the model’s predictions. And also, they mentioned that explainability is essential in their specific application to enhance the transparency, trustworthiness, and utility of ML models in the context of multi-modal medical image analysis.
Naser (2022) [468]Evaluating fire resistance of concrete-filled steel tubular columnsBlack-box models cannot be used reliably and effectively by engineers in practice because they don’t provide explanations about their decision-making process. Explainability is also needed to ensure liability and fairness in the fire engineering domain because human lives and legal aspects are involved.
Karamanou et al. (2022) [469]Anomaly detection from open governmental dataExplainability is needed to ensure accountability, transparency, and interpretability of black-box ML models used in unsupervised learning tasks.
Kim et al. (2021) [470]Predicting the wave transmission coefficient of low-crested structuresExplainability is needed to evaluate the performance and decision-making process of the ML model.
Saarela et al. (2021) [211]Student agency analyticsThe authors mentioned that explainability is important in their specific application to ensure transparency, accountability, and actionable insights for both students and educators. Further, they mentioned the General Data Protection Regulation (GDPR) includes a right for explanation, and for that, automatic profiling must be used in a Learning Analytics (LA) tool. And also, they mentioned explainability can help teachers increase their awareness of the effects of their pedagogical planning and interventions.
Gong et al. (2022) [471]COVID-19 detectionWhite-box models are widely used in the medical field because of their high interpretability despite their low performance. Explainability of black-box models is needed to enable the use of more effective ML models in the medical domain.
Burzynski (2022) [472]Battery health diagnosisThe authors mentioned that explainability is important in their specific application to provide insights into the model’s behavior and facilitate the interpretation of the relationships between input parameters and predictions. Further, they mentioned that this way enables a better understanding of the model’s decision-making process and enhances the trustworthiness of the predictions, which is essential for optimizing battery management systems and extending battery life.
Kim et al. (2022) [473]Forecasting particulate matter with an aerodynamic diameter of less than 2.5 μmAmbient particulate matter forecasting experts tend to question the reliability of ML models and the validity of their predictions. Explainability is needed to increase trust towards black-box models.
Galiger et al. (2023) [474]Histopathology tissue type detectionThe authors mentioned that explainability is important in their specific application to align the decision-making process with that of human radiologists and to provide clear, human-readable justifications for model decision-making. Further, they mentioned explainability is essential for gaining trust in the model’s decisions and ensuring its reliability in the medical imaging domain.
Naeem et al. (2023) [475]Malware detectionThe proposed ensemble method for malware prediction is quite complex, and explainability is needed to enable interpretation and validation of model-made decisions.
Burzynski (2022) [472]Battery health monitoringThe authors mentioned that explainability is important in their specific application to understand and interpret the results of machine learning and deep learning models applied to lithium-ion battery datasets. Further, they mentioned that using XAI researchers can gain insights into the outcomes produced by the algorithms, describe the model’s accuracy, fairness, transparency, and results in decision-making, and investigate any biases in predicted results.
Uddin et al. (2021) [476]Human activity recognitionPeople tend to not accept ML systems that might be accurate and efficient if they lack interpretability. Explainability is needed to gain trust toward ML models and allow use of more efficient models.
Sinha et al. (2023) [477]Fault diagnosis of low-cost sensorsThe authors mentioned that explainability is important in their specific application to increase the trust and reliability of the AI model used for fault diagnosis of low-cost sensors.
Jacinto et al. (2023) [478]Mapping karstified zonesExplainability is needed to validate ML-made decisions and detect biases. Explainability allows the use of more complex models when interpretability is necessary. Explainability gives information about relationships between model inputs and outputs.
Jakubowski et al. (2022) [479]Anomaly detection in asset degradation processThe authors mentioned that by providing explanations in their context, experts can understand the reasoning behind the model’s decisions and ensure the reliability of the predictive maintenance actions taken based on those decisions.
Guo et al. (2024) [480]Intelligent fault diagnosis in rotating machineryThe authors mentioned that providing explanations for the model’s predictions is essential to improving the trust and understanding of the diagnostic process. And also, they mentioned explainability helps in validating the diagnostic results and improving the generalization ability of the model in unseen domains.
Shi et al. (2021) [481]Age-related macular degeneration diagnosisThe authors mentioned that explainability in their specific application is important because it helps in understanding the decision-making process of the model and the rationale behind its classifications. And also, they mentioned explainability helps to maximize clinical applicability for the specific task of geographic atrophy detection and helps clinicians trust the model’s predictions and integrate them into their decision-making process.
Wang et al. (2023) [127]Drug repurposingExplanations are not always interpretable or reliable and may not provide information that relates well to the application domain. Explanations need to connect well to the problem they explain so that reliable interpretations and decisions can be made.
Klar et al. (2024) [290]Factory layout designExplainability enables evaluating training processes and model decisions when using AI in factory layout planning. Explainability also enhances trust towards decisions. Explainability reveals relationships and importance of features and therefore can give valuable information that can be used later in the factory layout design process.
Panos et al. (2023) [482]Predicting solar flaresExplainability is important so model decisions can be evaluated and justified. Explainability can also help improve model performance and possibly give new information about solar flares and features that predict them because of the high diagnostic capabilities of spectral data.
Fang et al. (2023) [483]Predicting landslideExplainability helps to make decisions on evacuations and interventions in landslide areas in an effective and ethical way. Explanations can also help identify the need for a specific intervention (slope stabilization, for example).
Karami et al. (2021) [484]Predicting response to COVID-19 virusExplainability is needed to allow interpretation of model-made decisions and to find information about connections between features.
Baek et al. (2023) [485]Semiconductor equipment productionThe authors mentioned that explainability is important in their specific application to understand how deep learning algorithms make decisions due to their complexity and to explain the outputs.
Antoniou et al. (2022) [283]Attention deficit hyperactivity disorder (ADHD) diagnosisExplainability is needed because clinicians are only willing to adopt a technological solution if they understand the basis of the provided recommendation.
Nguyen et al. (2022) [486]Decision-making agentsThe authors mentioned that explainability is important in their specific application to enhance trust, ensure legal compliance, improve user understanding, and increase user satisfaction in their specific application.
Solorio-Ramirez et al. (2021) [119]Predicting brain hemorrhageIn ML tasks in the healthcare domain, it is usual that the model has to make predictions on data that it hasn’t seen before. Model-made decisions in this kind of use case have to be explainable so the decision can be evaluated and justified. Explainability increases transparency and therefore understanding of the ML model’s decision-making process.
de Velasco et al. (2023) [221]Identifying emotions from speechIdentifying emotions from speech data is a complex task, and complex models are needed to achieve appropriate results. Explainability is needed to increase understanding of computational methods and models’ decision-making processes in this use case.
Shahriar et al. (2022) [487]Predicting state of battery charge in electric vehicleElectric vehicles and their batteries are constantly evolving and can be very different, which makes developing a globally applicable model for state of charge estimation difficult. Explainability is needed for evaluating and improving model performance.
Kim et al. (2023) [488]Maritime engineeringThe authors mentioned that explainability is important in their specific application to provide transparency on how the ML model produces its predictions. Further, they mentioned that using XAI, they can get a clear understanding of how different predictors influence the outcome of the prediction regarding vessel shaft power, which is essential for decision-making processes in the shipping industry.
Lemanska-Perek et al. (2022) [489]Sepsis managementThe authors mentioned that explainability is important in their specific application to support medical decision-making for individual patients, such as to better understand the model predictions, identify important features for each patient, and show how changes in variables affect predictions.
Minutti-Martinez et al. (2023) [490]Classifying chest X-ray imagesHealthcare professionals tend not to trust black-box models easily, which could be addressed by utilizing explainability methods. Explainability is also legally required in the healthcare domain.
Wang et al. (2023) [101]Predicting chronic obstructive pulmonary disease (COPD)Lack of explainability makes well-performing ML models useless in the healthcare domain. Explainability is needed to ensure interpretability and transparency, which leads to a wider application of ML in healthcare. Explainability also helps detect biases and improve model performance.
Kim et al. (2023) [491]Medical imaging for fracture detectionThe authors mentioned that the use of AI with explainability for fracture diagnosis has the potential to serve as a basis for specialist diagnosis. And also, they mentioned that AI could assist specialists by offering reliable opinions, preventing misinterpretations, and also speeding up the decision-making process for diagnosis.
Ivanovic et al. (2023) [88]Medical data management; cancer patient caseThe authors mentioned that explainability is important in their specific application to ensure that the AI models are not only accurate but also transparent, trustworthy, and interpretable for the end users in the medical and healthcare domains.
Sullivan et al. (2023) [227]Deep Q-learning experience replayThe authors mentioned that explainability is important in their specific application, Deep Reinforcement Learning (DRL), because the lack of transparency in DRL models leads to challenges in debugging and interpreting the decision-making process.
Humer et al. (2022) [492]Drug discoveryThe authors mentioned that explainability helps in identifying chemical regions of interest and gaining insights into the ML model’s reasoning.
Zhang et al. (2023) [493]Power systems dispatch and operationThe authors mentioned that the explainability of their specific application is important to provide a more intuitive and comprehensive explanation of decision-making for power systems with complex topology. Further, they mentioned that this is essential for operators to obtain noteworthy power grid areas as the basis of auxiliary decision-making to realize efficient and accurate control.
Yang et al. (2023) [494]Machinery health predictionExplainability is needed in industry machinery health assessment systems to increase reliability, allow evaluation of the model’s decision-making process, and help the end user understand and trust the model.
Altini et al. (2023) [269]Nuclei classification from breast cancer imagesExplainability is legally required for ML models used in the healthcare domain, and because complex models are needed for their high performance, explainability techniques must be applied. Explainability also reveals important features and therefore increases interpretability and usability.
Papandrianos et al. (2022) [64]Predicting coronary artery disease from myocardial perfusion imagesExplainability is needed so medical professionals can verify the model’s decisions.
Liang et al. (2021) [230]Identifying deceptive online contentExplainability is needed to evaluate the model in cases of wrong decisions and to help develop models that are more robust against targeted attacks when using ML to detect deceptive text/content.
Alabdulhafith et al. (2023) [60]Remote prognosis of the state of intensive care unit patientsExplainability is needed to ensure reliability of the model in addition to performance metrics. Medical professionals need explanations to evaluate models’ decisions and their medical relevance to be able to use ML models in practice.
Zolanvari et al. (2023) [299]Intrusion detectionLack of explainability leads to lack of trust, and trusting ML models without explanations leads to a lack of applicability and legitimacy. Explainability is needed to ensure transparency and applicability.
Carta et al. (2021) [279]Stock market forecastingThe authors mentioned that the explainability of their specific application is important to provide transparency and understanding of the prediction process. Further, they mentioned that explainability allows for a deep understanding of the obtained set of features and provides insights into the factors influencing the stock market forecasting results.
Esmaeili et al. (2021) [117]Brain tumor localizationThe authors mentioned that the explainability of their specific application is important to improve the interpretability, transparency, and reliability of deep learning models in the context of tumor localization in brain imaging.
Cheng et al. (2022) [495]Healthcare predictive modelingThe authors mentioned that explainability is important in their specific application because models in the healthcare domain require being transparent and interpretable. And also, they mentioned clinicians may not have technical expertise in machine learning, and therefore explanations need to be provided in a way that aligns with their domain knowledge rather than technical details.
Wenninger et al. (2022) [294]Building energy performance predictionThe authors mentioned that explainability is important in their specific application for understanding the mechanics behind the applied methods and for increasing trust and accountability in the context of retrofit implementation, where uncertainty is a major barrier. Further, they mentioned that explainability provides insights for experts on the influence of various building characteristics on the final energy performance predictions.
Laqua et al. (2023) [496]E-bikesThe authors mentioned that explainability is important in their specific application to enhance the understanding of the user experience of e-bike riding.
Espinoza et al. (2021) [268]Antibiotic discoveryThe authors mentioned that explainability is important in their specific application for model interpretability, validation, feature selection optimization, and advancing scientific discovery in the context of predicting antimicrobial mechanisms of action using AI models.
Sanderson et al. (2023) [497]Flood inundation mappingThe authors mentioned that deep learning models are often considered “black boxes”, which can pose challenges regarding transparency and potential ethical biases. By applying XAI to flood inundation mapping, they aimed to gain insight into the behavior of their proposed deep learning model and how it is impacted by varying input data types.
Abe et al. (2023) [498]Estimating pathogenicity of genetic variantsThe authors mentioned that by incorporating explainability into their AI model, they can provide understandable explanations to physicians and make informed decisions based on the AI’s estimation results and genomic medical knowledge. And also, they mentioned this approach eliminates the bottlenecks in genomic medicine by combining high accuracy with explainability and supporting the identification of disease-causing variants in patients.
Kerz et al. (2023) [499]Mental health detectionThe authors mentioned that there is a growing need for explainable AI approaches in psychiatric diagnosis and prediction to ensure transparency in the decision-making process.
Kim et al. (2022) [500]Satellite image analysis for environment monitoring and analysisThe authors mentioned that explainability in their specific application is important to improve the reliability of AI-based systems by providing visual explanations of predictions made by black-box deep learning models. And also, they mentioned explainability helps in preventing critical errors, especially false negative errors in image selection, and by providing visual explanations, the system can be refined based on supervisor feedback, which can reduce the risk of misinterpretation or incorrect predictions.
Thrun et al. (2021) [501]Water quality predictionExplainability is necessary to enable the use of complex and high-performing models in predicting water quality, because domain experts usually are not familiar with AI. They need interpretable and clear explanations to evaluate and trust the model’s decisions.
Gowrisankar et al. (2024) [231]Detecting deepfake imagesExplainability is necessary to ensure users’ trust toward the ML model and to help users understand the ML model better. Different XAI methods perform differently (especially saliency map techniques), and therefore efficient XAI evaluation techniques are needed to help find the most accurate and interpretable XAI technique.
Beni et al. (2023) [502]Predicting weathering on rock slopesThe ML model used in weathering prediction does not give information about the contributions of different features. Explainability is needed to gain insight into model performance and therefore evaluate the model’s decisions.
Singh et al. (2022) [84]Arrhythmia classificationAn ML model often has to deal with unseen data in the arrhythmia classification task, and explainability is needed to evaluate model performance and decisions in these cases. Healthcare professionals tend not to trust AI-based diagnostic tools, and explainability would increase trust towards ML models and therefore enable the use of AI diagnostic tools. In the healthcare domain, explainability is also necessary in an ethical and legal sense.
Zhou et al. (2023) [503]Predicting dissolved oxygen concentrations in karst springflowExplainability is needed for obtaining information about the physical processes and mechanisms learned by the ML model. Data from karstic areas is often complex, and explainability is therefore even more necessary for evaluating model performance.
Maqsood et al. (2022) [118]Brain tumor detectionBrain image data is complex, and models that make predictions based on those images need to be complex. Explaining the model provides more information about the model.
Cui et al. (2022) [287]Machine reading comprehensionExplainability allows users to understand how the model answers questions, which can be very helpful for educational purposes.
Barros et al. (2023) [504]Cement industry dispatch workflowThe authors mentioned that explainability in their specific application is important for obtaining information about potential blockages of transportation vehicles, enabling monitoring and inspection to prevent delays or process restarts in advance. They also mentioned that explainability helps avoid security issues such as violations of federal regulations on vehicle weight. Also, in the context of finances, they mentioned explainability assists in preventing orders from being sent in quantities greater than requested, and it helps to avoid monetary losses.
Kayadibi et al. (2023) [505]Recognizing and classifying retinal disordersExplainability helps medical professionals understand ML-made diagnoses and use them as diagnostic tools. Explainability enables more accurate, efficient, and reliable diagnosis because of the necessary human evaluation step and the complex nature of retinal data.
Qamar et al. (2023) [506]Fruit classificationThe authors mentioned that explainability is important in fruit classification because it can enhance processes such as sorting, grading, and packaging, reducing waste and increasing profitability. Further, they mentioned that by using explainability, they can enhance the transparency and interpretability of the models used in automated fruit classification systems, and it improves trust, identifies biases, meets regulatory requirements, and increases users’ confidence in the system.
Crespi et al. (2023) [507]Multi-agent systems for military operationsThe authors mentioned that explainability is important in their specific application because it can provide insights into the inner workings of the learned strategies, facilitate human understanding of agent behaviors, and enhance transparency and trust in the decision-making processes of the multi-agent system.
Sabrina et al. (2022) [508]Optimizing crop yieldThe authors mentioned that explainability is important in their specific application to ensure that the system is trusted and easily adopted by farmers. Further, they mentioned that this explainability is essential for making the system understandable, trustworthy, and user-friendly for farmers.
Wu et al. (2023) [509]Flood predictionThe authors mentioned that explainability is important in their specific application to improve model credibility and provide insights into the factors influencing runoff predictions. Further, they mentioned that explainability is essential for understanding the complex relationships between meteorological variables and runoff dynamics.
Nakamura et al. (2023) [510]Disease preventionThe authors mentioned that explainability is important in their specific application to identify concrete disease prevention methods at the individual level. They also mentioned explainability is essential for setting intervention goals for future disease development prevention and improving outcomes through targeted health condition improvements.
Damian et al. (2022) [232]Detecting fake newsExplainability is needed to gain insight into the ML model’s reasoning and decision-making process. Explainability can also help develop better and more effective models by revealing the most important features, which is important with text data, where there are thousands of features (i.e., individual words).
Oh et al. (2021) [511]Glaucoma diagnosisThe authors mentioned that explainability is important in their specific application to provide a basis for ophthalmologists to determine whether to trust the predicted results.
Borujeni et al. (2023) [512]Air pollution forecastingThe authors mentioned that explainability is important in their specific application to get a better understanding of how the model reaches its decisions. And also they referred to the phenomenon of “Clever Hans” predictors, where models might perform well on training and test datasets but fail in practical scenarios. Thus they mentioned that by understanding how the model makes decisions, it is possible to identify instances where the model may be relying on incorrect criteria for predictions. And also, they mentioned explainability is essential for efficient feature selection and model optimization.
Alharbi et al. (2023) [513]Unmanned aerial vehicle (UAVs) operationThe authors mentioned that explainability is important in their specific application to ensure the safe, efficient, and equitable allocation of airspace system resources in UTM operations.
Sheu et al. (2023) [514]Pneumonia predictionIn the pneumonia classification application of ML, explainability is needed to gain insight about important features that affect the classification of pneumonia. Explainability is also needed to convince medical professionals about the model’s reliability and therefore to gain acceptance from the medical domain. Users need to be able to interpret and trust the ML model in order to use it efficiently in practice as a diagnostic tool.
Solis-Martin et al. (2023) [291]Predictive maintenanceTime series data in predictive maintenance is complex and hard to interpret. Explainability is needed to understand the ML model and relationships between inputs and outputs better.
Castiglione et al. (2023) [128]Drug repurposingExplainability increases the reliability of ML models. In drug repurposing tasks, explainability is also mandatory to ensure transparency and accountability.
Aslam et al. (2022) [515]Antepartum fetal monitoring and risk prediction of IUGRThe authors mentioned the explainability in their specific application is important to enhance the interpretability of ML models, generate confidence in the predictions, add to comprehensibility, and assist doctors in their decision-making process regarding antepartum fetal monitoring to predict the risk of IUGR.
Peng et al. (2022) [516]Fault detection and diagnosisExplainability is needed to gain insight into the reasons behind predicted faults and to ensure model performance with complex data and possible online/offline use.
Na Pattalung et al. (2021) [517]Critical care medicine for ICU patientsThe authors mentioned that the explainability in their specific application is important to provide a causal explanation in the ML models, and making predictions visible from a black box model is essential to understanding the severity of illness and to enable early interventions for patients in ICU.
Oliveira et al. (2023) [518]Decision support systemLack of explainability is concerning when ML is used in high-stake cases. Explainability is also needed to ensure the legitimacy of AI use.
Burgueno et al. (2023) [519]Land cover classificationUsing explainability techniques leads to transparency, justifiability, and informativeness of the ML model, which is necessary in applications where there are critical aspects involved.
Horst et al. (2023) [520]Human gait recognitionThe authors mentioned that explainability is important in their specific application to identify the most relevant characteristics used for classification in clinical gait analysis.
Napoles et al. (2023) [521]Predictive analytics; case study in diabetesThe authors mentioned that the explainability is important in their specific application to understand how an algorithm works and how it can help analysts with the understanding of key questions and needs of their organization.
Ni et al. (2023) [522]HydrometeorologyThe authors mentioned that providing physical explanations for data-driven models is essential. Further, they mentioned that it is important to understand the inner workings of the deep learning model and provide insights into what the network has learned.
Amiri-Zarandi et al. (2023) [523]Threat detection in IoTThe authors mentioned that the explainability is important to help cybersecurity experts understand the reasons behind detected threats, improve security monitoring practices, and communicate with users about the reasons for their investigation.
Huang et al. (2023) [250]Soil moisture predictionExplainability enables extracting information about relationships between features in data and/or in the model. Explainability increases trust in ML models among users and decision-makers.
Niu et al. (2022) [524]Diabetic retinopathy detectionThe authors mentioned that the explainability in their specific application is important to understand how DL models make predictions, to improve trust, and to encourage collaboration within the medical community.
Kliangkhlao et al. (2022) [525]Predicting demand and supply behaviorCausality explanations help decision-makers understand the reasons behind models’ decisions.
Singha et al. (2023) [266]Cancer treatment; drug response predictionThe authors mentioned that traditional AI models operate as black boxes, and in critical domains like cancer therapy, where trust, accountability, and regulatory compliance are essential, the lack of explainability in AI models is a significant drawback. Further, they mentioned using explainability can provide clear, interpretable, and human-understandable explanations for the model’s actions and decisions, and it improves trustworthiness and usability and facilitates further research on potential drug targets for cancer therapy.
Thrun (2022) [278]Stock market analysisThe authors mentioned that commonly known explanations for stock-picking processes are often too vague to be applied in concrete cases. They also mentioned that explainability is important to provide specific criteria for stock picking that are explainable and can lead to above-average returns in the stock market.
Dissanayake et al. (2021) [526]Heart anomaly detectionThe authors mentioned that the explainability in their specific application is important to trust the predictions made by the models in the medical domain. And also, they mentioned that even if a model performs with excellent accuracy, understanding its behavior and predictions is important for medical experts and patients to trust the validity of the system.
Dastile et al. (2021) [527]Credit scoringThe authors mentioned that explainability in their specific application of credit scoring is important due to regulatory requirements such as the Basel Accord, which mandates that lending institutions must be able to explain to loan applicants why their applications were denied. Also, they mentioned explainability is important to gain trust in model predictions, ensure no discrimination occurs during the credit assessment process, and meet the “right to explanation” requirement under regulations like the European Union General Data Protection Regulation (GDPR).
Khan et al. (2022) [528]COVID-19 classificationThe authors mentioned that the explainability in their specific application is important to provide significant proof that explainable AI is essential in the context of healthcare applications like COVID-19 diagnosis. Also, they mentioned that using visualization techniques like Grad-CAM helps highlight the crucial regions in the input images that influenced the deep learning model’s predictions (an illustrative sketch of this technique is given after this table) and enhances understanding and trust in the classification results for COVID-19 detection.
Moon et al. (2021) [529]Alzheimer’s diseaseThe authors mentioned that explainability is important in their specific application to provide insights into the complex models used for classification.
Carrieri et al. (2021) [42]Skin microbiome compositionThe authors mentioned that explainability is important in their specific application to enhance the utility and reliability of ML models in microbiome research and to facilitate the translation of research findings into actionable insights.
Beker et al. (2023) [236]Volcanic deformation detectionThe authors mentioned that the explainability is important in their specific application to understand model behavior, improve performance, validate predictions, and determine the sensitivity of the model in detecting subtle volcanic deformations in the InSAR data.
Kiefer et al. (2022) [530]Document classificationThe authors mentioned that explainability is important in their specific application to align machine learning systems with human goals, contexts, concerns, and ways of working.
Sokhansanj et al. (2022) [216]Inter Partes Review (IPR) predictionsThe authors mentioned that explainability is important in their specific application to align machine learning systems with human goals, contexts, concerns, and ways of working.
Matuszelanski et al. (2022) [207]Customer churn predictionThe authors mentioned that the explainability in their specific application is important to understand the limitations of the model and address issues without sacrificing the performance gain from black-box models.
Franco et al. (2021) [531]Face recognitionThe authors mentioned that the explainability in their specific application of face recognition is important because of the widespread and controversial use of facial recognition technology in various contexts. Further, they mentioned that making face recognition algorithms more trustworthy through explainability, fairness, and privacy can improve public opinion and general acceptance of these technologies.
Montiel-Vazquez et al. (2022) [532]Empathy detection in textual communicationThe authors mentioned that the explainability in their specific application is important to improve transparency and a better understanding of how the model makes decisions, which is essential for building trust in the system and for potential applications in various fields where empathy detection is valuable.
Mollas et al. (2023) [533]User-oriented/interpretable XAIExplaining the explanations with good metrics is important when complex models are approximated with understandable explanations. Understandability of these explainability metrics is important so the end user can evaluate the outcomes.
Wei et al. (2022) [251]Detecting disease from fruit leavesLack of explainability hinders widespread use of black-box models that could be effective and beneficial in agriculture. Black-box models and their interpretability are important in agriculture because of complex data and the variety of plant species that are of interest.
Samih et al. (2021) [224]Movie recommendationsExplainability in recommendation systems increases efficiency, transparency, and user satisfaction.
Juang et al. (2021) [534]Hand palm trackingThe authors mentioned that the explainability is important because the linguistic relationship between the input and output variables of each fuzzy rule is explainable, and providing human explainable fuzzy features and inference models can improve the interpretability of the tracking method. Further, they mentioned that visualization of fuzzy features can give a clear understanding of the decision-making process.
Cicek et al. (2023) [535]Diagnosing nephrotoxicityIn the field of healthcare, explainability and interpretability are needed to enable the use of black-box ML models for diagnosing diseases.
Jung et al. (2023) [536]Medicinal plants classificationThe authors mentioned that explainability helps in understanding how the ML model makes predictions and enabling the assessment of whether the model’s learning intentions are consistent in the context of classifying similar medicinal plant species like Cynanchum wilfordii and Cynanchum auriculatum.
De Magistris et al. (2022) [233]Detecting fake newsExplanations are needed to convince people about the classification of fake news.
Rawal et al. (2023) [537]Identification of variables associated with the risk of developing neutralizing antidrug antibodies to factor VIII in hemophilia A patientsThe authors mentioned that the explainability is important in their specific application for identifying and ranking variables associated with the risk of developing neutralizing antidrug antibodies to Factor VIII in hemophilia A patients.
Kumar et al. (2021) [220]Sarcasm detection in dialoguesThe authors mentioned that the explainability in their specific application is important to understand which words or features influence the model’s decision-making process and how the model identifies sarcasm in conversational threads.
Yeung et al. (2022) [538]Photonic device designThe authors mentioned the explainability in their specific application is important to understand the relationship between the device structure and performance in photonic inverse design. Further, they mentioned that explainability is important to reveal the structure-performance relationships of each device, highlight the features contributing to the figure-of-merit (FOM), and potentially optimize the devices further by overcoming local minima in the adjoint optimization process.
Naeem et al. (2022) [539]Malware detection in IoT devicesThe authors mentioned that explainability is important in their specific application to enhance model transparency, facilitate security analysis, evaluate model performance, and support continuous model improvement.
Mey et al. (2022) [540]Machine fault diagnosisThe authors mentioned that the black-box nature of deep learning models hides the understanding of the decision-making process and makes it challenging for humans to interpret the classifications. They mentioned that using explainability can make the classification process transparent and provide insights into why certain decisions were made by the model.
Martinez et al. (2023) [541]Genomics and gene regulationThe authors mentioned explainability can provide transparency to ML models and allow for a better understanding of how the predictions were made. And this transparency was essential in delivering a high-scale annotation of archaeal promoter sequences and ensuring the reliability of the curated promoter sequences generated by the model.
Nkengue et al. (2024) [542]COVID-19 detectionThe authors mentioned explainability in their specific application is important to provide a cross-validation tool for practitioners. Further, they mentioned that highlighting the different patterns of the ECG signal that are related to a COVID-19/non-COVID-19 classification helps practitioners to understand which features of the signal are responsible for the classification, and it helps in decision-making and validation of results.
Behrens et al. (2022) [543]Climate modelingThe authors mentioned explainability in their specific application is important to enhance the interpretability of convective processes in climate models.
Fatahi et al. (2022) [544]Cement productionThe authors mentioned explainability in their specific application is important to understand the correlations between operational variables and energy consumption factors in an industrial vertical roller mill circuit.
De Groote et al. (2022) [545]Mechatronic systems modelingThe authors mentioned that by incorporating physics-based relations within the Neural Network Augmented Physics (NNAP) model, they aimed to provide interpretable explanations that align with physical laws. Further, they mentioned that in this way, the understanding of the system dynamics with partially unknown interactions leads to more reliable and insightful predictions.
Takalo-Mattila et al. (2022) [546]Steel quality predictionThe authors mentioned explainability is important in their specific application to enhance transparency and allow users to understand why a particular decision was reached, to build trust in the model’s predictions, to audit the decisions made by the model, and to ensure compliance with regulations and standards.
Drobnic et al. (2023) [102]Assessment of developmental status in childrenThe authors mentioned explainability in their specific application is important to enhance the interpretability of the model’s predictions and to provide insights into the features that influence the motor efficiency index (MEI) assessment of children and adolescents.
Saarela et al. (2022) [32]Skin cancer classificationThe authors mentioned that explainability in their specific application is important for building trust and confidence in the model’s decisions and to know why the system has made a particular decision. Further, they mentioned that using explanations for the model’s decisions increases trust among users and also potentially teaches humans to make better decisions in skin lesion classification.
Jang et al. (2023) [547]Energy managementThe authors mentioned that in the field of energy management, understanding complex AI models can be challenging because of the black box nature. And they mentioned that using explainability can explain the impact of input variables on the model’s output. Further, they mentioned that explainability is essential for EMS managers to comprehend why specific predictions are made, enabling informed decision-making in energy management processes.
Aishwarya et al. (2022) [548]Diagnosis of common lung pathologiesThe authors mentioned that the explainability of their specific application is important to improve the interpretability of the deep learning model’s outputs, which is essential for medical professionals to trust and understand the diagnostic results, leading to faster diagnosis and earlier treatment.
Kaczmarek-Majer et al. (2022) [549]Mental health; bipolar disorderThe authors mentioned that explainability is important in their specific application to build trust, enhance understanding, improve decision-making processes, and manage uncertainty in the context of psychiatric care and mental health diagnosis.
Bae (2024) [550]Malware classificationThe authors mentioned that the explainability in their specific application is important to address the challenges of interpreting heterogeneous data and to provide reliable explanations for the models used in cybersecurity applications, especially in malware detection.
Mahim et al. (2024) [109]Alzheimer’s disease detection and classificationThe authors mentioned that in medical image classification, XAI is essential for helping medical professionals understand and interpret the decisions made by AI systems, and this leads to more informed decisions about patient care and treatment plans. Also, the authors mentioned that XAI is essential for regulatory and ethical reasons, as transparency in the decision-making process of AI systems in medical applications is required to ensure consistency with medical standards and regulations.
Gerussi et al. (2022) [551]Primary Biliary Cholangitis (PBC) risk predictionThe authors mentioned that the explainability in their specific application is important to provide insights into the decision-making process of the ML model and facilitate its application in precision medicine and risk stratification for PBC.
Li et al. (2022) [552]MRI imagingThe authors mentioned that the explainability in their specific application is important to increase the transparency and interpretability of the super-resolution process for clinical MRI scans. Further, they mentioned that explainability is essential for understanding the decision-making process of the deep learning model and ensuring that the generated high-resolution images are clinically relevant and accurate.
Shang et al. (2021) [267]Clinical practiceThe authors mentioned that the explainability in their specific application is important to help clinicians better understand and utilize important clinical information buried in electronic health record (EHR) data. Also, they mentioned that explainable illustrations of important clinical findings are necessary to provide comprehensive and convincing details for better understanding and acceptance by clinicians beyond their specialties.
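Several of the entries above (e.g., Kalyakulina et al. [71]) note that explanations can be used not only to interpret a model but also to select the most influential features and retrain a leaner model. The following minimal Python sketch illustrates that general idea with SHAP-style global importance scores; the diabetes dataset, the gradient-boosting regressor, and the top-5 cut-off are illustrative assumptions and not choices taken from any of the reviewed studies.

import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Illustrative data and model; any tabular prediction task would do.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Global importance = mean absolute SHAP value of each feature on held-out data.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)        # shape: (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)

# Keep only the most influential features and refit a smaller, cheaper model.
top_features = X.columns[np.argsort(importance)[::-1][:5]]
reduced = GradientBoostingRegressor(random_state=0).fit(X_train[top_features], y_train)
print("Selected features:", list(top_features))
print("R^2 with top-5 features:", round(reduced.score(X_test[top_features], y_test), 3))

The same mean-absolute-SHAP ranking also serves as a global explanation in its own right, which is why explanation-guided feature selection can reduce model cost without giving up interpretability.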
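Likewise, several of the imaging studies above (e.g., Khan et al. [528]) refer to Grad-CAM-style saliency maps that highlight the image regions driving a prediction. The sketch below shows only the core mechanism: a generic pretrained ResNet-18 and a random tensor stand in for the studies' models and medical images, so it should be read as an illustration of the technique rather than a reproduction of any reviewed pipeline.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
feats = {}

def hook(module, inputs, output):
    # Store the last convolutional feature maps and capture their gradient.
    feats["act"] = output
    output.register_hook(lambda grad: feats.update(grad=grad))

model.layer4[-1].register_forward_hook(hook)   # last convolutional block

x = torch.randn(1, 3, 224, 224)                # placeholder for a preprocessed image
logits = model(x)
class_idx = int(logits.argmax(dim=1))
model.zero_grad()
logits[0, class_idx].backward()                # gradients for the predicted class

# Grad-CAM: weight each feature map by its average gradient, keep positive evidence.
weights = feats["grad"].mean(dim=(2, 3), keepdim=True)           # (1, C, 1, 1)
cam = F.relu((weights * feats["act"]).sum(dim=1, keepdim=True))  # (1, 1, 7, 7)
cam = F.interpolate(cam.detach(), size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)         # normalize to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224]); overlay on the input image to inspect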

References

  1. Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
  2. Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 55, 3503–3568. [Google Scholar] [CrossRef]
  3. Saeed, W.; Omlin, C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl.-Based Syst. 2023, 263, 110273. [Google Scholar] [CrossRef]
  4. Nauta, M.; Trienes, J.; Pathak, S.; Nguyen, E.; Peters, M.; Schmitt, Y.; Schlötterer, J.; van Keulen, M.; Seifert, C. From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai. ACM Comput. Surv. 2023, 55, 295. [Google Scholar] [CrossRef]
  5. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
  6. Hu, Z.F.; Kuflik, T.; Mocanu, I.G.; Najafian, S.; Shulner Tal, A. Recent studies of xai-review. In Proceedings of the Adjunct 29th ACM Conference on User Modeling, Adaptation and Personalization, Utrecht, The Netherlands, 21–25 June 2021; pp. 421–431. [Google Scholar]
  7. Islam, M.R.; Ahmed, M.U.; Barua, S.; Begum, S. A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Appl. Sci. 2022, 12, 1353. [Google Scholar] [CrossRef]
  8. Saranya, A.; Subhashini, R. A systematic review of Explainable Artificial Intelligence models and applications: Recent developments and future trends. Decis. Anal. J. 2023, 7, 100230. [Google Scholar]
  9. Schwalbe, G.; Finzel, B. A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts. Data Min. Knowl. Discov. 2024, 38, 3043–3101. [Google Scholar] [CrossRef]
  10. Speith, T. A review of taxonomies of explainable artificial intelligence (XAI) methods. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 2239–2250. [Google Scholar]
  11. Vilone, G.; Longo, L. Notions of explainability and evaluation approaches for explainable artificial intelligence. Inf. Fusion 2021, 76, 89–106. [Google Scholar] [CrossRef]
  12. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef]
  13. Samek, W.; Montavon, G.; Vedaldi, A.; Hansen, L.K.; Müller, K.R. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer Nature: Berlin/Heidelberg, Germany, 2019; Volume 11700. [Google Scholar]
  14. Koh, P.W.; Liang, P. Understanding black-box predictions via influence functions. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; Volume 70. [Google Scholar]
  15. Yeh, C.K.; Kim, J.; Yen, I.E.H.; Ravikumar, P.K. Representer point selection for explaining deep neural networks. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar]
  16. Li, O.; Liu, H.; Chen, C.; Rudin, C. Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  17. Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harv. J. Law Technol. 2017, 31, 841. [Google Scholar] [CrossRef]
  18. Erhan, D.; Bengio, Y.; Courville, A.; Vincent, P. Visualizing higher-layer features of a deep network. Univ. Montr. 2009, 1341. [Google Scholar]
  19. Towell, G.G.; Shavlik, J.W. Extracting refined rules from knowledge-based neural networks. Mach. Learn. 1993, 13, 71–101. [Google Scholar] [CrossRef]
  20. Castro, J.L.; Mantas, C.J.; Benitez, J.M. Interpretation of artificial neural networks by means of fuzzy rules. IEEE Trans. Neural Netw. 2002, 13, 101–116. [Google Scholar] [CrossRef]
  21. Mitra, S.; Hayashi, Y. Neuro-fuzzy rule generation: Survey in soft computing framework. IEEE Trans. Neural Netw. 2000, 11, 748–768. [Google Scholar] [CrossRef]
  22. Fisher, A.; Rudin, C.; Dominici, F. All Models are Wrong, but Many are Useful: Learning a Variable’s Importance by Studying an Entire Class of Prediction Models Simultaneously. J. Mach. Learn. Res. 2019, 20, 1–81. [Google Scholar]
  23. Fong, R.C.; Vedaldi, A. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
  24. Zintgraf, L.M.; Cohen, T.S.; Adel, T.; Welling, M. Visualizing deep neural network decisions: Prediction difference analysis. In Proceedings of the International Conference on Learning Representations, ICLR, Toulon, France, 24–26 April 2017; pp. 1–12. [Google Scholar]
  25. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014. [Google Scholar]
  26. Saarela, M.; Jauhiainen, S. Comparison of feature importance measures as explanations for classification models. SN Appl. Sci. 2021, 3, 272. [Google Scholar] [CrossRef]
  27. Wojtas, M.; Chen, K. Feature Importance Ranking for Deep Learning. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 2020), Vancouver, BC, Canada, 6–12 December 2020; Volume 33, pp. 5105–5114. [Google Scholar]
  28. Burkart, N.; Huber, M.F. A Survey on the Explainability of Supervised Machine Learning. J. Artif. Intell. Res. 2021, 70, 245–317. [Google Scholar] [CrossRef]
  29. Saarela, M. On the relation of causality-versus correlation-based feature selection on model fairness. In Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing, Avila, Spain, 8–12 April 2024; pp. 56–64. [Google Scholar]
  30. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 2018, 51, 93. [Google Scholar] [CrossRef]
31. Molnar, C. Interpretable Machine Learning; Lulu.com: Morrisville, NC, USA, 2020. [Google Scholar]
32. Saarela, M.; Georgieva, L. Robustness, Stability, and Fidelity of Explanations for a Deep Skin Cancer Classification Model. Appl. Sci. 2022, 12, 9545. [Google Scholar] [CrossRef]
  33. Carvalho, D.V.; Pereira, E.M.; Cardoso, J.S. Machine learning interpretability: A survey on methods and metrics. Electronics 2019, 8, 832. [Google Scholar] [CrossRef]
  34. Wang, Y.; Zhang, T.; Guo, X.; Shen, Z. Gradient based Feature Attribution in Explainable AI: A Technical Review. arXiv 2024, arXiv:2403.10415. [Google Scholar]
35. Saarela, M.; Kärkkäinen, T. Can we automate expert-based journal rankings? Analysis of the Finnish publication indicator. J. Informetr. 2020, 14, 101008. [Google Scholar] [CrossRef]
  36. Samek, W.; Montavon, G.; Lapuschkin, S.; Anders, C.J.; Müller, K.R. Explaining deep neural networks and beyond: A review of methods and applications. Proc. IEEE 2021, 109, 247–278. [Google Scholar] [CrossRef]
  37. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Int. J. Surg. 2021, 88, 105906. [Google Scholar] [CrossRef]
  38. Birkle, C.; Pendlebury, D.A.; Schnell, J.; Adams, J. Web of Science as a data source for research on scientific and scholarly activity. Quant. Sci. Stud. 2020, 1, 363–376. [Google Scholar] [CrossRef]
  39. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report, EBSE-2007-01; Software Engineering Group, School of Computer Science and Mathematics, Keele University: Keele, UK, 2007. [Google Scholar]
  40. Da’u, A.; Salim, N. Recommendation system based on deep learning methods: A systematic review and new directions. Artif. Intell. Rev. 2020, 53, 2709–2748. [Google Scholar] [CrossRef]
  41. Mridha, K.; Uddin, M.M.; Shin, J.; Khadka, S.; Mridha, M.F. An Interpretable Skin Cancer Classification Using Optimized Convolutional Neural Network for a Smart Healthcare System. IEEE Access 2023, 11, 41003–41018. [Google Scholar] [CrossRef]
  42. Carrieri, A.P.; Haiminen, N.; Maudsley-Barton, S.; Gardiner, L.J.; Murphy, B.; Mayes, A.E.; Paterson, S.; Grimshaw, S.; Winn, M.; Shand, C.; et al. Explainable AI reveals changes in skin microbiome composition linked to phenotypic differences. Sci. Rep. 2021, 11, 4565. [Google Scholar] [CrossRef]
  43. Maouche, I.; Terrissa, L.S.; Benmohammed, K.; Zerhouni, N. An Explainable AI Approach for Breast Cancer Metastasis Prediction Based on Clinicopathological Data. IEEE Trans. Biomed. Eng. 2023, 70, 3321–3329. [Google Scholar] [CrossRef] [PubMed]
  44. Yagin, B.; Yagin, F.H.; Colak, C.; Inceoglu, F.; Kadry, S.; Kim, J. Cancer Metastasis Prediction and Genomic Biomarker Identification through Machine Learning and eXplainable Artificial Intelligence in Breast Cancer Research. Diagnostics 2023, 13, 3314. [Google Scholar] [CrossRef]
  45. Kaplun, D.; Krasichkov, A.; Chetyrbok, P.; Oleinikov, N.; Garg, A.; Pannu, H.S. Cancer Cell Profiling Using Image Moments and Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database. Mathematics 2021, 9, 2616. [Google Scholar] [CrossRef]
46. Kwong, J.C.C.; Khondker, A.; Tran, C.; Evans, E.; Cozma, I.A.; Javidan, A.; Ali, A.; Jamal, M.; Short, T.; Papanikolaou, F.; et al. Explainable artificial intelligence to predict the risk of side-specific extraprostatic extension in pre-prostatectomy patients. Can. Urol. Assoc. J. 2022, 16, 213–221. [Google Scholar] [CrossRef] [PubMed]
  47. Ramirez-Mena, A.; Andres-Leon, E.; Alvarez-Cubero, M.J.; Anguita-Ruiz, A.; Martinez-Gonzalez, L.J.; Alcala-Fdez, J. Explainable artificial intelligence to predict and identify prostate cancer tissue by gene expression. Comput. Methods Programs Biomed. 2023, 240, 107719. [Google Scholar] [CrossRef]
  48. Anjara, S.G.; Janik, A.; Dunford-Stenger, A.; Mc Kenzie, K.; Collazo-Lorduy, A.; Torrente, M.; Costabello, L.; Provencio, M. Examining explainable clinical decision support systems with think aloud protocols. PLoS ONE 2023, 18, e0291443. [Google Scholar] [CrossRef]
  49. Wani, N.A.; Kumar, R.; Bedi, J. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence. Comput. Methods Programs Biomed. 2024, 243, 107879. [Google Scholar] [CrossRef]
  50. Laios, A.; Kalampokis, E.; Mamalis, M.E.; Tarabanis, C.; Nugent, D.; Thangavelu, A.; Theophilou, G.; De Jong, D. RoBERTa-Assisted Outcome Prediction in Ovarian Cancer Cytoreductive Surgery Using Operative Notes. Cancer Control. 2023, 30, 10732748231209892. [Google Scholar] [CrossRef]
  51. Laios, A.; Kalampokis, E.; Johnson, R.; Munot, S.; Thangavelu, A.; Hutson, R.; Broadhead, T.; Theophilou, G.; Leach, C.; Nugent, D.; et al. Factors Predicting Surgical Effort Using Explainable Artificial Intelligence in Advanced Stage Epithelial Ovarian Cancer. Cancers 2022, 14, 3447. [Google Scholar] [CrossRef]
  52. Ghnemat, R.; Alodibat, S.; Abu Al-Haija, Q. Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging Classification. J. Imaging 2023, 9, 177. [Google Scholar] [CrossRef]
  53. Lohaj, O.; Paralic, J.; Bednar, P.; Paralicova, Z.; Huba, M. Unraveling COVID-19 Dynamics via Machine Learning and XAI: Investigating Variant Influence and Prognostic Classification. Mach. Learn. Knowl. Extr. 2023, 5, 1266–1281. [Google Scholar] [CrossRef]
  54. Sarp, S.; Catak, F.O.; Kuzlu, M.; Cali, U.; Kusetogullari, H.; Zhao, Y.; Ates, G.; Guler, O. An XAI approach for COVID-19 detection using transfer learning with X-ray images. Heliyon 2023, 9, e15137. [Google Scholar] [CrossRef]
  55. Sargiani, V.; De Souza, A.A.; De Almeida, D.C.; Barcelos, T.S.; Munoz, R.; Da Silva, L.A. Supporting Clinical COVID-19 Diagnosis with Routine Blood Tests Using Tree-Based Entropy Structured Self-Organizing Maps. Appl. Sci. 2022, 12, 5137. [Google Scholar] [CrossRef]
  56. Zhang, X.; Han, L.; Sobeih, T.; Han, L.; Dempsey, N.; Lechareas, S.; Tridente, A.; Chen, H.; White, S.; Zhang, D. CXR-Net: A Multitask Deep Learning Network for Explainable and Accurate Diagnosis of COVID-19 Pneumonia from Chest X-ray Images. IEEE J. Biomed. Health Inform. 2023, 27, 980–991. [Google Scholar] [CrossRef]
  57. Palatnik de Sousa, I.; Vellasco, M.M.B.R.; Costa da Silva, E. Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers. Sensors 2021, 21, 5657. [Google Scholar] [CrossRef] [PubMed]
  58. Nguyen, D.Q.; Vo, N.Q.; Nguyen, T.T.; Nguyen-An, K.; Nguyen, Q.H.; Tran, D.N.; Quan, T.T. BeCaked: An Explainable Artificial Intelligence Model for COVID-19 Forecasting. Sci. Rep. 2022, 12, 7969. [Google Scholar] [CrossRef] [PubMed]
  59. Guarrasi, V.; Soda, P. Multi-objective optimization determines when, which and how to fuse deep networks: An application to predict COVID-19 outcomes. Comput. Biol. Med. 2023, 154, 106625. [Google Scholar] [CrossRef]
  60. Alabdulhafith, M.; Saleh, H.; Elmannai, H.; Ali, Z.H.; El-Sappagh, S.; Hu, J.W.; El-Rashidy, N. A Clinical Decision Support System for Edge/Cloud ICU Readmission Model Based on Particle Swarm Optimization, Ensemble Machine Learning, and Explainable Artificial Intelligence. IEEE Access 2023, 11, 100604–100621. [Google Scholar] [CrossRef]
  61. Henzel, J.; Tobiasz, J.; Kozielski, M.; Bach, M.; Foszner, P.; Gruca, A.; Kania, M.; Mika, J.; Papiez, A.; Werner, A.; et al. Screening Support System Based on Patient Survey Data-Case Study on Classification of Initial, Locally Collected COVID-19 Data. Appl. Sci. 2021, 11, 790. [Google Scholar] [CrossRef]
  62. Delgado-Gallegos, J.L.; Aviles-Rodriguez, G.; Padilla-Rivas, G.R.; Cosio-Leon, M.d.l.A.; Franco-Villareal, H.; Nieto-Hipolito, J.I.; Lopez, J.d.D.S.; Zuniga-Violante, E.; Islas, J.F.; Romo-Cardenas, G.S. Application of C5.0 Algorithm for the Assessment of Perceived Stress in Healthcare Professionals Attending COVID-19. Brain Sci. 2023, 13, 513. [Google Scholar] [CrossRef] [PubMed]
  63. Yigit, T.; Sengoz, N.; Ozmen, O.; Hemanth, J.; Isik, A.H. Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning. Trait. Signal 2022, 39, 863–869. [Google Scholar] [CrossRef]
  64. Papandrianos, I.N.; Feleki, A.; Moustakidis, S.; Papageorgiou, I.E.; Apostolopoulos, I.D.; Apostolopoulos, D.J. An Explainable Classification Method of SPECT Myocardial Perfusion Images in Nuclear Cardiology Using Deep Learning and Grad-CAM. Appl. Sci. 2022, 12, 7592. [Google Scholar] [CrossRef]
  65. Zhang, Y.; Weng, Y.; Lund, J. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics 2022, 12, 237. [Google Scholar] [CrossRef]
  66. Rietberg, M.T.; Nguyen, V.B.; Geerdink, J.; Vijlbrief, O.; Seifert, C. Accurate and Reliable Classification of Unstructured Reports on Their Diagnostic Goal Using BERT Models. Diagnostics 2023, 13, 1251. [Google Scholar] [CrossRef] [PubMed]
  67. Ornek, A.H.; Ceylan, M. Explainable Artificial Intelligence (XAI): Classification of Medical Thermal Images of Neonates Using Class Activation Maps. Trait. Signal 2021, 38, 1271–1279. [Google Scholar] [CrossRef]
  68. Dindorf, C.; Konradi, J.; Wolf, C.; Taetz, B.; Bleser, G.; Huthwelker, J.; Werthmann, F.; Bartaguiz, E.; Kniepert, J.; Drees, P.; et al. Classification and Automated Interpretation of Spinal Posture Data Using a Pathology-Independent Classifier and Explainable Artificial Intelligence (XAI). Sensors 2021, 21, 6323. [Google Scholar] [CrossRef]
  69. Sarp, S.; Kuzlu, M.; Wilson, E.; Cali, U.; Guler, O. The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification. Electronics 2021, 10, 1406. [Google Scholar] [CrossRef]
70. Wang, M.H.; Chong, K.K.L.; Lin, Z.; Yu, X.; Pan, Y. An Explainable Artificial Intelligence-Based Robustness Optimization Approach for Age-Related Macular Degeneration Detection Based on Medical IOT Systems. Electronics 2023, 12, 2697. [Google Scholar] [CrossRef]
  71. Kalyakulina, A.; Yusipov, I.; Kondakova, E.; Bacalini, M.G.; Franceschi, C.; Vedunova, M.; Ivanchenko, M. Small immunological clocks identified by deep learning and gradient boosting. Front. Immunol. 2023, 14, 1177611. [Google Scholar] [CrossRef]
  72. Javed, A.R.; Khan, H.U.; Alomari, M.K.B.; Sarwar, M.U.; Asim, M.; Almadhor, A.S.; Khan, M.Z. Toward explainable AI-empowered cognitive health assessment. Front. Public Health 2023, 11, 1024195. [Google Scholar] [CrossRef]
  73. Valladares-Rodriguez, S.; Fernandez-Iglesias, M.J.; Anido-Rifon, L.E.; Pacheco-Lorenzo, M. Evaluation of the Predictive Ability and User Acceptance of Panoramix 2.0, an AI-Based E-Health Tool for the Detection of Cognitive Impairment. Electronics 2022, 11, 3424. [Google Scholar] [CrossRef]
  74. Moreno-Sanchez, P.A. Improvement of a prediction model for heart failure survival through explainable artificial intelligence. Front. Cardiovasc. Med. 2023, 10, 1219586. [Google Scholar] [CrossRef]
  75. Katsushika, S.; Kodera, S.; Sawano, S.; Shinohara, H.; Setoguchi, N.; Tanabe, K.; Higashikuni, Y.; Takeda, N.; Fujiu, K.; Daimon, M.; et al. An explainable artificial intelligence-enabled electrocardiogram analysis model for the classification of reduced left ventricular function. Eur. Heart J.-Digit. Health 2023, 4, 254–264. [Google Scholar] [CrossRef] [PubMed]
  76. Kamal, M.S.; Dey, N.; Chowdhury, L.; Hasan, S.I.; Santosh, K.C. Explainable AI for Glaucoma Prediction Analysis to Understand Risk Factors in Treatment Planning. IEEE Trans. Instrum. Meas. 2022, 71, 2509209. [Google Scholar] [CrossRef]
77. Deperlioglu, O.; Kose, U.; Gupta, D.; Khanna, A.; Giampaolo, F.; Fortino, G. Explainable framework for Glaucoma diagnosis by image processing and convolutional neural network synergy: Analysis with doctor evaluation. Future Gener. Comput. Syst. 2022, 129, 152–169. [Google Scholar] [CrossRef]
  78. Kim, Y.K.; Koo, J.H.; Lee, S.J.; Song, H.S.; Lee, M. Explainable Artificial Intelligence Warning Model Using an Ensemble Approach for In-Hospital Cardiac Arrest Prediction: Retrospective Cohort Study. J. Med. Internet Res. 2023, 25, e48244. [Google Scholar] [CrossRef]
  79. Obayya, M.; Nemri, N.; Nour, M.K.; Al Duhayyim, M.; Mohsen, H.; Rizwanullah, M.; Zamani, A.S.; Motwakel, A. Explainable Artificial Intelligence Enabled TeleOphthalmology for Diabetic Retinopathy Grading and Classification. Appl. Sci. 2022, 12, 8749. [Google Scholar] [CrossRef]
  80. Ganguly, R.; Singh, D. Explainable Artificial Intelligence (XAI) for the Prediction of Diabetes Management: An Ensemble Approach. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 158–163. [Google Scholar] [CrossRef]
  81. Hendawi, R.; Li, J.; Roy, S. A Mobile App That Addresses Interpretability Challenges in Machine Learning-Based Diabetes Predictions: Survey-Based User Study. JMIR Form. Res. 2023, 7, e50328. [Google Scholar] [CrossRef]
  82. Maaroof, N.; Moreno, A.; Valls, A.; Jabreel, M.; Romero-Aroca, P. Multi-Class Fuzzy-LORE: A Method for Extracting Local and Counterfactual Explanations Using Fuzzy Decision Trees. Electronics 2023, 12, 2215. [Google Scholar] [CrossRef]
  83. Raza, A.; Tran, K.P.; Koehl, L.; Li, S. Designing ECG monitoring healthcare system with federated transfer learning and explainable AI. Knowl.-Based Syst. 2022, 236, 107763. [Google Scholar] [CrossRef]
  84. Singh, P.; Sharma, A. Interpretation and Classification of Arrhythmia Using Deep Convolutional Network. IEEE Trans. Instrum. Meas. 2022, 71, 2518512. [Google Scholar] [CrossRef]
  85. Mollaei, N.; Fujao, C.; Silva, L.; Rodrigues, J.; Cepeda, C.; Gamboa, H. Human-Centered Explainable Artificial Intelligence: Automotive Occupational Health Protection Profiles in Prevention Musculoskeletal Symptoms. Int. J. Environ. Res. Public Health 2022, 19, 9552. [Google Scholar] [CrossRef] [PubMed]
  86. Petrauskas, V.; Jasinevicius, R.; Damuleviciene, G.; Liutkevicius, A.; Janaviciute, A.; Lesauskaite, V.; Knasiene, J.; Meskauskas, Z.; Dovydaitis, J.; Kazanavicius, V.; et al. Explainable Artificial Intelligence-Based Decision Support System for Assessing the Nutrition-Related Geriatric Syndromes. Appl. Sci. 2021, 11, 1763. [Google Scholar] [CrossRef]
  87. George, R.; Ellis, B.; West, A.; Graff, A.; Weaver, S.; Abramowski, M.; Brown, K.; Kerr, L.; Lu, S.C.; Swisher, C.; et al. Ensuring fair, safe, and interpretable artificial intelligence-based prediction tools in a real-world oncological setting. Commun. Med. 2023, 3, 88. [Google Scholar] [CrossRef] [PubMed]
  88. Ivanovic, M.; Autexier, S.; Kokkonidis, M.; Rust, J. Quality medical data management within an open AI architecture-cancer patients case. Connect. Sci. 2023, 35, 2194581. [Google Scholar] [CrossRef]
  89. Zhang, H.; Ogasawara, K. Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing. Bioengineering 2023, 10, 1070. [Google Scholar] [CrossRef]
  90. Zlahtic, B.; Zavrsnik, J.; Vosner, H.B.; Kokol, P.; Suran, D.; Zavrsnik, T. Agile Machine Learning Model Development Using Data Canyons in Medicine: A Step towards Explainable Artificial Intelligence and Flexible Expert-Based Model Improvement. Appl. Sci. 2023, 13, 8329. [Google Scholar] [CrossRef]
  91. Gouverneur, P.; Li, F.; Shirahama, K.; Luebke, L.; Adamczyk, W.M.; Szikszay, T.M.M.; Luedtke, K.; Grzegorzek, M. Explainable Artificial Intelligence (XAI) in Pain Research: Understanding the Role of Electrodermal Activity for Automated Pain Recognition. Sensors 2023, 23, 1959. [Google Scholar] [CrossRef]
  92. Real, K.S.D.; Rubio, A. Discovering the mechanism of action of drugs with a sparse explainable network. Ebiomedicine 2023, 95, 104767. [Google Scholar] [CrossRef]
  93. Park, A.; Lee, Y.; Nam, S. A performance evaluation of drug response prediction models for individual drugs. Sci. Rep. 2023, 13, 11911. [Google Scholar] [CrossRef] [PubMed]
  94. Li, D.; Liu, Y.; Huang, J.; Wang, Z. A Trustworthy View on Explainable Artificial Intelligence Method Evaluation. Computer 2023, 56, 50–60. [Google Scholar] [CrossRef]
  95. Chen, T.C.T.; Chiu, M.C. Evaluating the sustainability of smart technology applications in healthcare after the COVID-19 pandemic: A hybridising subjective and objective fuzzy group decision-making approach with explainable artificial intelligence. Digit. Health 2022, 8, 20552076221136381. [Google Scholar] [CrossRef]
  96. Bhatia, S.; Albarrak, A.S. A Blockchain-Driven Food Supply Chain Management Using QR Code and XAI-Faster RCNN Architecture. Sustainability 2023, 15, 2579. [Google Scholar] [CrossRef]
  97. Konradi, J.; Zajber, M.; Betz, U.; Drees, P.; Gerken, A.; Meine, H. AI-Based Detection of Aspiration for Video-Endoscopy with Visual Aids in Meaningful Frames to Interpret the Model Outcome. Sensors 2022, 22, 9468. [Google Scholar] [CrossRef]
  98. Aquino, G.; Costa, M.G.F.; Costa Filho, C.F.F. Explaining and Visualizing Embeddings of One-Dimensional Convolutional Models in Human Activity Recognition Tasks. Sensors 2023, 23, 4409. [Google Scholar] [CrossRef]
  99. Vijayvargiya, A.; Singh, P.; Kumar, R.; Dey, N. Hardware Implementation for Lower Limb Surface EMG Measurement and Analysis Using Explainable AI for Activity Recognition. IEEE Trans. Instrum. Meas. 2022, 71, 2004909. [Google Scholar] [CrossRef]
  100. Iliadou, E.; Su, Q.; Kikidis, D.; Bibas, T.; Kloukinas, C. Profiling hearing aid users through big data explainable artificial intelligence techniques. Front. Neurol. 2022, 13, 933940. [Google Scholar] [CrossRef]
  101. Wang, X.; Qiao, Y.; Cui, Y.; Ren, H.; Zhao, Y.; Linghu, L.; Ren, J.; Zhao, Z.; Chen, L.; Qiu, L. An explainable artificial intelligence framework for risk prediction of COPD in smokers. BMC Public Health 2023, 23, 2164. [Google Scholar] [CrossRef] [PubMed]
  102. Drobnic, F.; Starc, G.; Jurak, G.; Kos, A.; Pustisek, M. Explained Learning and Hyperparameter Optimization of Ensemble Estimator on the Bio-Psycho-Social Features of Children and Adolescents. Electronics 2023, 12, 4097. [Google Scholar] [CrossRef]
  103. Jeong, T.; Park, U.; Kang, S.W. Novel quantitative electroencephalogram feature image adapted for deep learning: Verification through classification of Alzheimer’s disease dementia. Front. Neurosci. 2022, 16, 1033379. [Google Scholar] [CrossRef] [PubMed]
  104. Varghese, A.; George, B.; Sherimon, V.; Al Shuaily, H.S. Enhancing Trust in Alzheimer’s Disease Classification using Explainable Artificial Intelligence: Incorporating Local Post Hoc Explanations for a Glass-box Model. Bahrain Med. Bull. 2023, 45, 1471–1478. [Google Scholar]
  105. Amoroso, N.; Quarto, S.; La Rocca, M.; Tangaro, S.; Monaco, A.; Bellotti, R. An eXplainability Artificial Intelligence approach to brain connectivity in Alzheimer’s disease. Front. Aging Neurosci. 2023, 15, 1238065. [Google Scholar] [CrossRef] [PubMed]
  106. Kamal, M.S.; Northcote, A.; Chowdhury, L.; Dey, N.; Gonzalez Crespo, R.; Herrera-Viedma, E. Alzheimer’s Patient Analysis Using Image and Gene Expression Data and Explainable-AI to Present Associated Genes. IEEE Trans. Instrum. Meas. 2021, 70, 2513107. [Google Scholar] [CrossRef]
107. Hernandez, M.; Ramon-Julvez, U.; Ferraz, F.; ADNI Consortium. Explainable AI toward understanding the performance of the top three TADPOLE Challenge methods in the forecast of Alzheimer’s disease diagnosis. PLoS ONE 2022, 17, e0264695. [Google Scholar] [CrossRef] [PubMed]
  108. El-Sappagh, S.; Alonso, J.M.; Islam, S.M.R.; Sultan, A.M.; Kwak, K.S. A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer’s disease. Sci. Rep. 2021, 11, 2660. [Google Scholar] [CrossRef]
  109. Mahim, S.M.; Ali, M.S.; Hasan, M.O.; Nafi, A.A.N.; Sadat, A.; Al Hasan, S.A.; Shareef, B.; Ahsan, M.M.; Islam, M.K.; Miah, M.S.; et al. Unlocking the Potential of XAI for Improved Alzheimer’s Disease Detection and Classification Using a ViT-GRU Model. IEEE Access 2024, 12, 8390–8412. [Google Scholar] [CrossRef]
  110. Bhandari, N.; Walambe, R.; Kotecha, K.; Kaliya, M. Integrative gene expression analysis for the diagnosis of Parkinson’s disease using machine learning and explainable AI. Comput. Biol. Med. 2023, 163, 107140. [Google Scholar] [CrossRef] [PubMed]
  111. Kalyakulina, A.; Yusipov, I.; Bacalini, M.G.; Franceschi, C.; Vedunova, M.; Ivanchenko, M. Disease classification for whole-blood DNA methylation: Meta-analysis, missing values imputation, and XAI. Gigascience 2022, 11, giac097. [Google Scholar] [CrossRef] [PubMed]
  112. McFall, G.P.; Bohn, L.; Gee, M.; Drouin, S.M.; Fah, H.; Han, W.; Li, L.; Camicioli, R.; Dixon, R.A. Identifying key multi-modal predictors of incipient dementia in Parkinson’s disease: A machine learning analysis and Tree SHAP interpretation. Front. Aging Neurosci. 2023, 15, 1124232. [Google Scholar] [CrossRef]
  113. Pianpanit, T.; Lolak, S.; Sawangjai, P.; Sudhawiyangkul, T.; Wilaiprasitporn, T. Parkinson’s Disease Recognition Using SPECT Image and Interpretable AI: A Tutorial. IEEE Sens. J. 2021, 21, 22304–22316. [Google Scholar] [CrossRef]
114. Kumar, A.; Manikandan, R.; Kose, U.; Gupta, D.; Satapathy, S.C. Doctor’s Dilemma: Evaluating an Explainable Subtractive Spatial Lightweight Convolutional Neural Network for Brain Tumor Diagnosis. ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 105. [Google Scholar] [CrossRef]
  115. Gaur, L.; Bhandari, M.; Razdan, T.; Mallik, S.; Zhao, Z. Explanation-Driven Deep Learning Model for Prediction of Brain Tumour Status Using MRI Image Data. Front. Genet. 2022, 13, 822666. [Google Scholar] [CrossRef]
  116. Tasci, B. Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet. Diagnostics 2023, 13, 859. [Google Scholar] [CrossRef] [PubMed]
  117. Esmaeili, M.; Vettukattil, R.; Banitalebi, H.; Krogh, N.R.; Geitung, J.T. Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization. J. Pers. Med. 2021, 11, 1213. [Google Scholar] [CrossRef] [PubMed]
  118. Maqsood, S.; Damasevicius, R.; Maskeliunas, R. Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM. Medicina 2022, 58, 1090. [Google Scholar] [CrossRef]
  119. Solorio-Ramirez, J.L.; Saldana-Perez, M.; Lytras, M.D.; Moreno-Ibarra, M.A.; Yanez-Marquez, C. Brain Hemorrhage Classification in CT Scan Images Using Minimalist Machine Learning. Diagnostics 2021, 11, 1449. [Google Scholar] [CrossRef]
  120. Andreu-Perez, J.; Emberson, L.L.; Kiani, M.; Filippetti, M.L.; Hagras, H.; Rigato, S. Explainable artificial intelligence based analysis for interpreting infant fNIRS data in developmental cognitive neuroscience. Commun. Biol. 2021, 4, 1077. [Google Scholar] [CrossRef]
  121. Hilal, A.M.; Issaoui, I.; Obayya, M.; Al-Wesabi, F.N.; Nemri, N.; Hamza, M.A.; Al Duhayyim, M.; Zamani, A.S. Modeling of Explainable Artificial Intelligence for Biomedical Mental Disorder Diagnosis. CMC-Comput. Mater. Contin. 2022, 71, 3853–3867. [Google Scholar] [CrossRef]
  122. Vieira, J.C.; Guedes, L.A.; Santos, M.R.; Sanchez-Gendriz, I.; He, F.; Wei, H.L.; Guo, Y.; Zhao, Y. Using Explainable Artificial Intelligence to Obtain Efficient Seizure-Detection Models Based on Electroencephalography Signals. Sensors 2023, 23, 9871. [Google Scholar] [CrossRef]
  123. Al-Hussaini, I.; Mitchell, C.S. SeizFt: Interpretable Machine Learning for Seizure Detection Using Wearables. Bioengineering 2023, 10, 918. [Google Scholar] [CrossRef]
  124. Li, Z.; Li, R.; Zhou, Y.; Rasmy, L.; Zhi, D.; Zhu, P.; Dono, A.; Jiang, X.; Xu, H.; Esquenazi, Y.; et al. Prediction of Brain Metastases Development in Patients with Lung Cancer by Explainable Artificial Intelligence from Electronic Health Records. JCO Clin. Cancer Inform. 2023, 7, e2200141. [Google Scholar] [CrossRef] [PubMed]
  125. Azam, H.; Tariq, H.; Shehzad, D.; Akbar, S.; Shah, H.; Khan, Z.A. Fully Automated Skull Stripping from Brain Magnetic Resonance Images Using Mask RCNN-Based Deep Learning Neural Networks. Brain Sci. 2023, 13, 1255. [Google Scholar] [CrossRef] [PubMed]
  126. Sasahara, K.; Shibata, M.; Sasabe, H.; Suzuki, T.; Takeuchi, K.; Umehara, K.; Kashiyama, E. Feature importance of machine learning prediction models shows structurally active part and important physicochemical features in drug design. Drug Metab. Pharmacokinet. 2021, 39, 100401. [Google Scholar] [CrossRef]
  127. Wang, Q.; Huang, K.; Chandak, P.; Zitnik, M.; Gehlenborg, N. Extending the Nested Model for User-Centric XAI: A Design Study on GNN-based Drug Repurposing. IEEE Trans. Vis. Comput. Graph. 2023, 29, 1266–1276. [Google Scholar] [CrossRef]
128. Castiglione, F.; Nardini, C.; Onofri, E.; Pedicini, M.; Tieri, P. Explainable Drug Repurposing Approach from Biased Random Walks. IEEE/ACM Trans. Comput. Biol. Bioinform. 2023, 20, 1009–1019. [Google Scholar] [CrossRef]
  129. Jena, R.; Pradhan, B.; Gite, S.; Alamri, A.; Park, H.J. A new method to promptly evaluate spatial earthquake probability mapping using an explainable artificial intelligence (XAI) model. Gondwana Res. 2023, 123, 54–67. [Google Scholar] [CrossRef]
  130. Jena, R.; Shanableh, A.; Al-Ruzouq, R.; Pradhan, B.; Gibril, M.B.A.; Khalil, M.A.; Ghorbanzadeh, O.; Ganapathy, G.P.; Ghamisi, P. Explainable Artificial Intelligence (XAI) Model for Earthquake Spatial Probability Assessment in Arabian Peninsula. Remote. Sens. 2023, 15, 2248. [Google Scholar] [CrossRef]
  131. Alshehri, F.; Rahman, A. Coupling Machine and Deep Learning with Explainable Artificial Intelligence for Improving Prediction of Groundwater Quality and Decision-Making in Arid Region, Saudi Arabia. Water 2023, 15, 2298. [Google Scholar] [CrossRef]
  132. Clare, M.C.A.; Sonnewald, M.; Lguensat, R.; Deshayes, J.; Balaji, V. Explainable Artificial Intelligence for Bayesian Neural Networks: Toward Trustworthy Predictions of Ocean Dynamics. J. Adv. Model. Earth Syst. 2022, 14, e2022MS003162. [Google Scholar] [CrossRef]
  133. Nunez, J.; Cortes, C.B.; Yanez, M.A. Explainable Artificial Intelligence in Hydrology: Interpreting Black-Box Snowmelt-Driven Streamflow Predictions in an Arid Andean Basin of North-Central Chile. Water 2023, 15, 3369. [Google Scholar] [CrossRef]
  134. Kolevatova, A.; Riegler, M.A.; Cherubini, F.; Hu, X.; Hammer, H.L. Unraveling the Impact of Land Cover Changes on Climate Using Machine Learning and Explainable Artificial Intelligence. Big Data Cogn. Comput. 2021, 5, 55. [Google Scholar] [CrossRef]
  135. Xue, P.; Wagh, A.; Ma, G.; Wang, Y.; Yang, Y.; Liu, T.; Huang, C. Integrating Deep Learning and Hydrodynamic Modeling to Improve the Great Lakes Forecast. Remote. Sens. 2022, 14, 2640. [Google Scholar] [CrossRef]
  136. Huang, F.; Zhang, Y.; Zhang, Y.; Nourani, V.; Li, Q.; Li, L.; Shangguan, W. Towards interpreting machine learning models for predicting soil moisture droughts. Environ. Res. Lett. 2023, 18, 074002. [Google Scholar] [CrossRef]
  137. Huynh, T.M.T.; Ni, C.F.; Su, Y.S.; Nguyen, V.C.N.; Lee, I.H.; Lin, C.P.; Nguyen, H.H. Predicting Heavy Metal Concentrations in Shallow Aquifer Systems Based on Low-Cost Physiochemical Parameters Using Machine Learning Techniques. Int. J. Environ. Res. Public Health 2022, 19, 12180. [Google Scholar] [CrossRef] [PubMed]
  138. Bandstra, M.S.; Curtis, J.C.; Ghawaly, J.M., Jr.; Jones, A.C.; Joshi, T.H.Y. Explaining machine-learning models for gamma-ray detection and identification. PLoS ONE 2023, 18, e0286829. [Google Scholar] [CrossRef]
  139. Andresini, G.; Appice, A.; Malerba, D. SILVIA: An eXplainable Framework to Map Bark Beetle Infestation in Sentinel-2 Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2023, 16, 10050–10066. [Google Scholar] [CrossRef]
  140. van Stein, B.; Raponi, E.; Sadeghi, Z.; Bouman, N.; van Ham, R.; Back, T. A Comparison of Global Sensitivity Analysis Methods for Explainable AI with an Application in Genomic Prediction. IEEE Access 2022, 10, 103364–103381. [Google Scholar] [CrossRef]
  141. Quach, L.D.; Quoc, K.N.; Quynh, A.N.; Thai-Nghe, N.; Nguyen, T.G. Explainable Deep Learning Models with Gradient-Weighted Class Activation Mapping for Smart Agriculture. IEEE Access 2023, 11, 83752–83762. [Google Scholar] [CrossRef]
  142. Lysov, M.; Pukhkiy, K.; Vasiliev, E.; Getmanskaya, A.; Turlapov, V. Ensuring Explainability and Dimensionality Reduction in a Multidimensional HSI World for Early XAI-Diagnostics of Plant Stress. Entropy 2023, 25, 801. [Google Scholar] [CrossRef]
  143. Iatrou, M.; Karydas, C.; Tseni, X.; Mourelatos, S. Representation Learning with a Variational Autoencoder for Predicting Nitrogen Requirement in Rice. Remote. Sens. 2022, 14, 5978. [Google Scholar] [CrossRef]
  144. Zinonos, Z.; Gkelios, S.; Khalifeh, A.F.; Hadjimitsis, D.G.; Boutalis, Y.S.; Chatzichristofis, S.A. Grape Leaf Diseases Identification System Using Convolutional Neural Networks and LoRa Technology. IEEE Access 2022, 10, 122–133. [Google Scholar] [CrossRef]
  145. Danilevicz, M.F.; Gill, M.; Fernandez, C.G.T.; Petereit, J.; Upadhyaya, S.R.; Batley, J.; Bennamoun, M.; Edwards, D.; Bayer, P.E. DNABERT-based explainable lncRNA identification in plant genome assemblies. Comput. Struct. Biotechnol. J. 2023, 21, 5676–5685. [Google Scholar] [CrossRef]
  146. Kim, M.; Kim, D.; Jin, D.; Kim, G. Application of Explainable Artificial Intelligence (XAI) in Urban Growth Modeling: A Case Study of Seoul Metropolitan Area, Korea. Land 2023, 12, 420. [Google Scholar] [CrossRef]
147. Galli, A.; Piscitelli, M.S.; Moscato, V.; Capozzoli, A. Bridging the gap between complexity and interpretability of a data analytics-based process for benchmarking energy performance of buildings. Expert Syst. Appl. 2022, 206, 117649. [Google Scholar] [CrossRef]
  148. Nguyen, D.D.; Tanveer, M.; Mai, H.N.; Pham, T.Q.D.; Khan, H.; Park, C.W.; Kim, G.M. Guiding the optimization of membraneless microfluidic fuel cells via explainable artificial intelligence: Comparative analyses of multiple machine learning models and investigation of key operating parameters. Fuel 2023, 349, 128742. [Google Scholar] [CrossRef]
  149. Pandey, D.S.; Raza, H.; Bhattacharyya, S. Development of explainable AI-based predictive models for bubbling fluidised bed gasification process. Fuel 2023, 351, 128971. [Google Scholar] [CrossRef]
  150. Wongburi, P.; Park, J.K. Prediction of Sludge Volume Index in a Wastewater Treatment Plant Using Recurrent Neural Network. Sustainability 2022, 14, 6276. [Google Scholar] [CrossRef]
  151. Aslam, N.; Khan, I.U.; Alansari, A.; Alrammah, M.; Alghwairy, A.; Alqahtani, R.; Alqahtani, R.; Almushikes, M.; Hashim, M.A.L. Anomaly Detection Using Explainable Random Forest for the Prediction of Undesirable Events in Oil Wells. Appl. Comput. Intell. Soft Comput. 2022, 2022, 1558381. [Google Scholar] [CrossRef]
  152. Mardian, J.; Champagne, C.; Bonsal, B.; Berg, A. Understanding the Drivers of Drought Onset and Intensification in the Canadian Prairies: Insights from Explainable Artificial Intelligence (XAI). J. Hydrometeorol. 2023, 24, 2035–2055. [Google Scholar] [CrossRef]
  153. Youness, G.; Aalah, A. An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction. Aerospace 2023, 10, 474. [Google Scholar] [CrossRef]
  154. Chowdhury, D.; Sinha, A.; Das, D. XAI-3DP: Diagnosis and Understanding Faults of 3-D Printer with Explainable Ensemble AI. IEEE Sens. Lett. 2023, 7, 6000104. [Google Scholar] [CrossRef]
  155. Chelgani, S.C.; Nasiri, H.; Tohry, A.; Heidari, H.R. Modeling industrial hydrocyclone operational variables by SHAP-CatBoost-A “conscious lab” approach. Powder Technol. 2023, 420, 118416. [Google Scholar] [CrossRef]
  156. Elkhawaga, G.; Abu-Elkheir, M.; Reichert, M. Explainability of Predictive Process Monitoring Results: Can You See My Data Issues? Appl. Sci. 2022, 12, 8192. [Google Scholar] [CrossRef]
  157. El-khawaga, G.; Abu-Elkheir, M.; Reichert, M. XAI in the Context of Predictive Process Monitoring: An Empirical Analysis Framework. Algorithms 2022, 15, 199. [Google Scholar] [CrossRef]
  158. Hanchate, A.; Bukkapatnam, S.T.S.; Lee, K.H.; Srivastava, A.; Kumara, S. Reprint of: Explainable AI (XAI)-driven vibration sensing scheme for surface quality monitoring in a smart surface grinding process. J. Manuf. Process. 2023, 100, 64–74. [Google Scholar] [CrossRef]
  159. Alfeo, A.L.L.; Cimino, M.G.C.A.; Vaglini, G. Degradation stage classification via interpretable feature learning. J. Manuf. Syst. 2022, 62, 972–983. [Google Scholar] [CrossRef]
  160. Akyol, S.; Das, M.; Alatas, B. Modeling the Energy Consumption of R600a Gas in a Refrigeration System with New Explainable Artificial Intelligence Methods Based on Hybrid Optimization. Biomimetics 2023, 8, 397. [Google Scholar] [CrossRef] [PubMed]
  161. Sharma, K.V.; Sai, P.H.V.S.T.; Sharma, P.; Kanti, P.K.; Bhramara, P.; Akilu, S. Prognostic modeling of polydisperse SiO2/Aqueous glycerol nanofluids’ thermophysical profile using an explainable artificial intelligence (XAI) approach. Eng. Appl. Artif. Intell. 2023, 126, 106967. [Google Scholar] [CrossRef]
  162. Kulasooriya, W.K.V.J.B.; Ranasinghe, R.S.S.; Perera, U.S.; Thisovithan, P.; Ekanayake, I.U.; Meddage, D.P.P. Modeling strength characteristics of basalt fiber reinforced concrete using multiple explainable machine learning with a graphical user interface. Sci. Rep. 2023, 13, 13138. [Google Scholar] [CrossRef]
  163. Geetha, G.K.; Sim, S.H. Fast identification of concrete cracks using 1D deep learning and explainable artificial intelligence-based analysis. Autom. Constr. 2022, 143, 104572. [Google Scholar] [CrossRef]
  164. Noh, Y.R.; Khalid, S.; Kim, H.S.; Choi, S.K. Intelligent Fault Diagnosis of Robotic Strain Wave Gear Reducer Using Area-Metric-Based Sampling. Mathematics 2023, 11, 4081. [Google Scholar] [CrossRef]
  165. Gim, J.; Lin, C.Y.; Turng, L.S. In-mold condition-centered and explainable artificial intelligence-based (IMC-XAI) process optimization for injection molding. J. Manuf. Syst. 2024, 72, 196–213. [Google Scholar] [CrossRef]
  166. Rozanec, J.M.; Trajkova, E.; Lu, J.; Sarantinoudis, N.; Arampatzis, G.; Eirinakis, P.; Mourtos, I.; Onat, M.K.; Yilmaz, D.A.; Kosmerlj, A.; et al. Cyber-Physical LPG Debutanizer Distillation Columns: Machine-Learning-Based Soft Sensors for Product Quality Monitoring. Appl. Sci. 2021, 11, 1790. [Google Scholar] [CrossRef]
  167. Bobek, S.; Kuk, M.; Szelazek, M.; Nalepa, G.J. Enhancing Cluster Analysis with Explainable AI and Multidimensional Cluster Prototypes. IEEE Access 2022, 10, 101556–101574. [Google Scholar] [CrossRef]
  168. Chen, T.C.T.; Lin, C.W.; Lin, Y.C. A fuzzy collaborative forecasting approach based on XAI applications for cycle time range estimation. Appl. Soft Comput. 2024, 151, 111122. [Google Scholar] [CrossRef]
  169. Lee, Y.; Roh, Y. An Expandable Yield Prediction Framework Using Explainable Artificial Intelligence for Semiconductor Manufacturing. Appl. Sci. 2023, 13, 2660. [Google Scholar] [CrossRef]
  170. Alqaralleh, B.A.Y.; Aldhaban, F.; AlQarallehs, E.A.; Al-Omari, A.H. Optimal Machine Learning Enabled Intrusion Detection in Cyber-Physical System Environment. CMC-Comput. Mater. Contin. 2022, 72, 4691–4707. [Google Scholar] [CrossRef]
  171. Younisse, R.; Ahmad, A.; Abu Al-Haija, Q. Explaining Intrusion Detection-Based Convolutional Neural Networks Using Shapley Additive Explanations (SHAP). Big Data Cogn. Comput. 2022, 6, 126. [Google Scholar] [CrossRef]
  172. Larriva-Novo, X.; Sanchez-Zas, C.; Villagra, V.A.; Marin-Lopez, A.; Berrocal, J. Leveraging Explainable Artificial Intelligence in Real-Time Cyberattack Identification: Intrusion Detection System Approach. Appl. Sci. 2023, 13, 8587. [Google Scholar] [CrossRef]
  173. Mahbooba, B.; Timilsina, M.; Sahal, R.; Serrano, M. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model. Complexity 2021, 2021, 6634811. [Google Scholar] [CrossRef]
  174. Ferretti, C.; Saletta, M. Do Neural Transformers Learn Human-Defined Concepts? An Extensive Study in Source Code Processing Domain. Algorithms 2022, 15, 449. [Google Scholar] [CrossRef]
  175. Rjoub, G.; Bentahar, J.; Wahab, O.A.; Mizouni, R.; Song, A.; Cohen, R.; Otrok, H.; Mourad, A. A Survey on Explainable Artificial Intelligence for Cybersecurity. IEEE Trans. Netw. Serv. Manag. 2023, 20, 5115–5140. [Google Scholar] [CrossRef]
  176. Kuppa, A.; Le-Khac, N.A. Adversarial XAI Methods in Cybersecurity. IEEE Trans. Inf. Forensics Secur. 2021, 16, 4924–4938. [Google Scholar] [CrossRef]
  177. Jo, J.; Cho, J.; Moon, J. A Malware Detection and Extraction Method for the Related Information Using the ViT Attention Mechanism on Android Operating System. Appl. Sci. 2023, 13, 6839. [Google Scholar] [CrossRef]
  178. Lin, Y.S.; Liu, Z.Y.; Chen, Y.A.; Wang, Y.S.; Chang, Y.L.; Hsu, W.H. xCos: An Explainable Cosine Metric for Face Verification Task. ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 112. [Google Scholar] [CrossRef]
  179. Lim, S.Y.; Chae, D.K.; Lee, S.C. Detecting Deepfake Voice Using Explainable Deep Learning Techniques. Appl. Sci. 2022, 12, 3926. [Google Scholar] [CrossRef]
  180. Zhang, Z.; Umar, S.; Al Hammadi, A.Y.; Yoon, S.; Damiani, E.; Ardagna, C.A.; Bena, N.; Yeun, C.Y. Explainable Data Poison Attacks on Human Emotion Evaluation Systems Based on EEG Signals. IEEE Access 2023, 11, 18134–18147. [Google Scholar] [CrossRef]
  181. Muna, R.K.; Hossain, M.I.; Alam, M.G.R.; Hassan, M.M.; Ianni, M.; Fortino, G. Demystifying machine learning models of massive IoT attack detection with Explainable AI for sustainable and secure future smart cities. Internet Things 2023, 24, 100919. [Google Scholar] [CrossRef]
  182. Luo, R.; Xing, J.; Chen, L.; Pan, Z.; Cai, X.; Li, Z.; Wang, J.; Ford, A. Glassboxing Deep Learning to Enhance Aircraft Detection from SAR Imagery. Remote. Sens. 2021, 13, 3650. [Google Scholar] [CrossRef]
  183. Perez-Landa, G.I.; Loyola-Gonzalez, O.; Medina-Perez, M.A. An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets. Appl. Sci. 2021, 11, 10801. [Google Scholar] [CrossRef]
  184. Neupane, S.; Ables, J.; Anderson, W.; Mittal, S.; Rahimi, S.; Banicescu, I.; Seale, M. Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. IEEE Access 2022, 10, 112392–112415. [Google Scholar] [CrossRef]
  185. Manoharan, H.; Yuvaraja, T.; Kuppusamy, R.; Radhakrishnan, A. Implementation of explainable artificial intelligence in commercial communication systems using micro systems. Sci. Prog. 2023, 106, 00368504231191657. [Google Scholar] [CrossRef] [PubMed]
  186. Berger, T. Explainable artificial intelligence and economic panel data: A study on volatility spillover along the supply chains. Financ. Res. Lett. 2023, 54, 103757. [Google Scholar] [CrossRef]
  187. Raval, J.; Bhattacharya, P.; Jadav, N.K.; Tanwar, S.; Sharma, G.; Bokoro, P.N.; Elmorsy, M.; Tolba, A.; Raboaca, M.S. RaKShA: A Trusted Explainable LSTM Model to Classify Fraud Patterns on Credit Card Transactions. Mathematics 2023, 11, 1901. [Google Scholar] [CrossRef]
  188. Martinez, M.A.M.; Nadj, M.; Langner, M.; Toreini, P.; Maedche, A. Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation Using Eye-tracking Technology. ACM Trans. Interact. Intell. Syst. 2023, 13, 27. [Google Scholar] [CrossRef]
  189. Martins, T.; de Almeida, A.M.; Cardoso, E.; Nunes, L. Explainable Artificial Intelligence (XAI): A Systematic Literature Review on Taxonomies and Applications in Finance. IEEE Access 2024, 12, 618–629. [Google Scholar] [CrossRef]
  190. Moscato, V.; Picariello, A.; Sperli, G. A benchmark of machine learning approaches for credit score prediction. Expert Syst. Appl. 2021, 165, 113986. [Google Scholar] [CrossRef]
  191. Gramespacher, T.; Posth, J.A. Employing Explainable AI to Optimize the Return Target Function of a Loan Portfolio. Front. Artif. Intell. 2021, 4, 693022. [Google Scholar] [CrossRef]
  192. Gramegna, A.; Giudici, P. SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk. Front. Artif. Intell. 2021, 4, 752558. [Google Scholar] [CrossRef]
  193. Rudin, C.; Shaposhnik, Y. Globally-Consistent Rule-Based Summary-Explanations for Machine Learning Models: Application to Credit-Risk Evaluation. J. Mach. Learn. Res. 2023, 24, 1–44. [Google Scholar] [CrossRef]
  194. Torky, M.; Gad, I.; Hassanien, A.E. Explainable AI Model for Recognizing Financial Crisis Roots Based on Pigeon Optimization and Gradient Boosting Model. Int. J. Comput. Intell. Syst. 2023, 16, 50. [Google Scholar] [CrossRef]
  195. Bermudez, L.; Anaya, D.; Belles-Sampera, J. Explainable AI for paid-up risk management in life insurance products. Financ. Res. Lett. 2023, 57, 104242. [Google Scholar] [CrossRef]
  196. Rozanec, J.; Trajkova, E.; Kenda, K.; Fortuna, B.; Mladenic, D. Explaining Bad Forecasts in Global Time Series Models. Appl. Sci. 2021, 11, 9243. [Google Scholar] [CrossRef]
  197. Kim, H.S.; Joe, I. An XAI method for convolutional neural networks in self-driving cars. PLoS ONE 2022, 17, e0267282. [Google Scholar] [CrossRef]
  198. Veitch, E.; Alsos, O.A. Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles. J. Mar. Sci. Eng. 2021, 9, 227. [Google Scholar] [CrossRef]
  199. Dworak, D.; Baranowski, J. Adaptation of Grad-CAM Method to Neural Network Architecture for LiDAR Pointcloud Object Detection. Energies 2022, 15, 4681. [Google Scholar] [CrossRef]
  200. Renda, A.; Ducange, P.; Marcelloni, F.; Sabella, D.; Filippou, M.C.; Nardini, G.; Stea, G.; Virdis, A.; Micheli, D.; Rapone, D.; et al. Federated Learning of Explainable AI Models in 6G Systems: Towards Secure and Automated Vehicle Networking. Information 2022, 13, 395. [Google Scholar] [CrossRef]
  201. Lorente, M.P.S.; Lopez, E.M.; Florez, L.A.; Espino, A.L.; Martinez, J.A.I.; de Miguel, A.S. Explaining Deep Learning-Based Driver Models. Appl. Sci. 2021, 11, 3321. [Google Scholar] [CrossRef]
  202. Qaffas, A.A.; Ben HajKacem, M.A.; Ben Ncir, C.E.; Nasraoui, O. An Explainable Artificial Intelligence Approach for Multi-Criteria ABC Item Classification. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 848–866. [Google Scholar] [CrossRef]
  203. Yilmazer, R.; Birant, D. Shelf Auditing Based on Image Classification Using Semi-Supervised Deep Learning to Increase On-Shelf Availability in Grocery Stores. Sensors 2021, 21, 327. [Google Scholar] [CrossRef] [PubMed]
  204. Lee, J.; Jung, O.; Lee, Y.; Kim, O.; Park, C. A Comparison and Interpretation of Machine Learning Algorithm for the Prediction of Online Purchase Conversion. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1472–1491. [Google Scholar] [CrossRef]
  205. Okazaki, K.; Inoue, K. Explainable Model Fusion for Customer Journey Mapping. Front. Artif. Intell. 2022, 5, 824197. [Google Scholar] [CrossRef]
  206. Diaz, G.M.; Galan, J.J.; Carrasco, R.A. XAI for Churn Prediction in B2B Models: A Use Case in an Enterprise Software Company. Mathematics 2022, 10, 3896. [Google Scholar] [CrossRef]
  207. Matuszelanski, K.; Kopczewska, K. Customer Churn in Retail E-Commerce Business: Spatial and Machine Learning Approach. J. Theor. Appl. Electron. Commer. Res. 2022, 17, 165–198. [Google Scholar] [CrossRef]
  208. Pereira, F.D.; Fonseca, S.C.; Oliveira, E.H.T.; Cristea, I.A.; Bellhauser, H.; Rodrigues, L.; Oliveira, D.B.F.; Isotani, S.; Carvalho, L.S.G. Explaining Individual and Collective Programming Students’ Behavior by Interpreting a Black-Box Predictive Model. IEEE Access 2021, 9, 117097–117119. [Google Scholar] [CrossRef]
  209. Alcauter, I.; Martinez-Villasenor, L.; Ponce, H. Explaining Factors of Student Attrition at Higher Education. Comput. Sist. 2023, 27, 929–940. [Google Scholar] [CrossRef]
  210. Gomez-Cravioto, D.A.; Diaz-Ramos, R.E.; Hernandez-Gress, N.; Luis Preciado, J.; Ceballos, H.G. Supervised machine learning predictive analytics for alumni income. J. Big Data 2022, 9, 11. [Google Scholar] [CrossRef]
  211. Saarela, M.; Heilala, V.; Jaaskela, P.; Rantakaulio, A.; Karkkainen, T. Explainable Student Agency Analytics. IEEE Access 2021, 9, 137444–137459. [Google Scholar] [CrossRef]
  212. Ramon, Y.; Farrokhnia, R.A.; Matz, S.C.; Martens, D. Explainable AI for Psychological Profiling from Behavioral Data: An Application to Big Five Personality Predictions from Financial Transaction Records. Information 2021, 12, 518. [Google Scholar] [CrossRef]
  213. Zytek, A.; Liu, D.; Vaithianathan, R.; Veeramachaneni, K. Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning In High-Stakes Decision Making. IEEE Trans. Vis. Comput. Graph. 2022, 28, 1161–1171. [Google Scholar] [CrossRef] [PubMed]
  214. Rodriguez Oconitrillo, L.R.; Jose Vargas, J.; Camacho, A.; Burgos, A.; Manuel Corchado, J. RYEL: An Experimental Study in the Behavioral Response of Judges Using a Novel Technique for Acquiring Higher-Order Thinking Based on Explainable Artificial Intelligence and Case-Based Reasoning. Electronics 2021, 10, 1500. [Google Scholar] [CrossRef]
  215. Escobar-Linero, E.; Garcia-Jimenez, M.; Trigo-Sanchez, M.E.; Cala-Carrillo, M.J.; Sevillano, J.L.; Dominguez-Morales, M. Using machine learning-based systems to help predict disengagement from the legal proceedings by women victims of intimate partner violence in Spain. PLoS ONE 2023, 18, e0276032. [Google Scholar] [CrossRef]
  216. Sokhansanj, B.A.; Rosen, G.L. Predicting Institution Outcomes for Inter Partes Review (IPR) Proceedings at the United States Patent Trial & Appeal Board by Deep Learning of Patent Owner Preliminary Response Briefs. Appl. Sci. 2022, 12, 3656. [Google Scholar] [CrossRef]
  217. Cha, Y.; Lee, Y. Advanced sentence-embedding method considering token importance based on explainable artificial intelligence and text summarization model. Neurocomputing 2024, 564, 126987. [Google Scholar] [CrossRef]
  218. Sevastjanova, R.; Jentner, W.; Sperrle, F.; Kehlbeck, R.; Bernard, J.; El-assady, M. QuestionComb: A Gamification Approach for the Visual Explanation of Linguistic Phenomena through Interactive Labeling. ACM Trans. Interact. Intell. Syst. 2021, 11, 19. [Google Scholar] [CrossRef]
  219. Sovrano, F.; Vitali, F. Generating User-Centred Explanations via Illocutionary Question Answering: From Philosophy to Interfaces. ACM Trans. Interact. Intell. Syst. 2022, 12, 26. [Google Scholar] [CrossRef]
  220. Kumar, A.; Dikshit, S.; Albuquerque, V.H.C. Explainable Artificial Intelligence for Sarcasm Detection in Dialogues. Wirel. Commun. Mob. Comput. 2021, 2021, 2939334. [Google Scholar] [CrossRef]
  221. de Velasco, M.; Justo, R.; Zorrilla, A.L.; Torres, M.I. Analysis of Deep Learning-Based Decision-Making in an Emotional Spontaneous Speech Task. Appl. Sci. 2023, 13, 980. [Google Scholar] [CrossRef]
  222. Huang, J.; Wu, X.; Wen, J.; Huang, C.; Luo, M.; Liu, L.; Zheng, Y. Evaluating Familiarity Ratings of Domain Concepts with Interpretable Machine Learning: A Comparative Study. Appl. Sci. 2023, 13, 2818. [Google Scholar] [CrossRef]
  223. Shah, A.; Ranka, P.; Dedhia, U.; Prasad, S.; Muni, S.; Bhowmick, K. Detecting and Unmasking AI-Generated Texts through Explainable Artificial Intelligence using Stylistic Features. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1043–1053. [Google Scholar] [CrossRef]
  224. Samih, A.; Ghadi, A.; Fennan, A. ExMrec2vec: Explainable Movie Recommender System based on Word2vec. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 653–660. [Google Scholar] [CrossRef]
  225. Pisoni, G.; Diaz-Rodriguez, N.; Gijlers, H.; Tonolli, L. Human-Centered Artificial Intelligence for Designing Accessible Cultural Heritage. Appl. Sci. 2021, 11, 870. [Google Scholar] [CrossRef]
  226. Mishra, S.; Shukla, A.K.; Muhuri, P.K. Explainable Fuzzy AI Challenge 2022: Winner’s Approach to a Computationally Efficient and Explainable Solution. Axioms 2022, 11, 489. [Google Scholar] [CrossRef]
  227. Sullivan, R.S.; Longo, L. Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations. Mach. Learn. Knowl. Extr. 2023, 5, 1433–1455. [Google Scholar] [CrossRef]
  228. Tao, J.; Xiong, Y.; Zhao, S.; Wu, R.; Shen, X.; Lyu, T.; Fan, C.; Hu, Z.; Zhao, S.; Pan, G. Explainable AI for Cheating Detection and Churn Prediction in Online Games. IEEE Trans. Games 2023, 15, 242–251. [Google Scholar] [CrossRef]
  229. Szczepanski, M.; Pawlicki, M.; Kozik, R.; Choras, M. New explainability method for BERT-based model in fake news detection. Sci. Rep. 2021, 11, 23705. [Google Scholar] [CrossRef]
  230. Liang, X.S.; Straub, J. Deceptive Online Content Detection Using Only Message Characteristics and a Machine Learning Trained Expert System. Sensors 2021, 21, 7083. [Google Scholar] [CrossRef]
  231. Gowrisankar, B.; Thing, V.L.L. An adversarial attack approach for eXplainable AI evaluation on deepfake detection models. Comput. Secur. 2024, 139, 103684. [Google Scholar] [CrossRef]
  232. Damian, S.; Calvo, H.; Gelbukh, A. Fake News detection using n-grams for PAN@CLEF competition. J. Intell. Fuzzy Syst. 2022, 42, 4633–4640. [Google Scholar] [CrossRef]
  233. De Magistris, G.; Russo, S.; Roma, P.; Starczewski, J.T.; Napoli, C. An Explainable Fake News Detector Based on Named Entity Recognition and Stance Classification Applied to COVID-19. Information 2022, 13, 137. [Google Scholar] [CrossRef]
  234. Joshi, G.; Srivastava, A.; Yagnik, B.; Hasan, M.; Saiyed, Z.; Gabralla, L.A.; Abraham, A.; Walambe, R.; Kotecha, K. Explainable Misinformation Detection across Multiple Social Media Platforms. IEEE Access 2023, 11, 23634–23646. [Google Scholar] [CrossRef]
  235. Heimerl, A.; Weitz, K.; Baur, T.; Andre, E. Unraveling ML Models of Emotion with NOVA: Multi-Level Explainable AI for Non-Experts. IEEE Trans. Affect. Comput. 2022, 13, 1155–1167. [Google Scholar] [CrossRef]
  236. Beker, T.; Ansari, H.; Montazeri, S.; Song, Q.; Zhu, X.X. Deep Learning for Subtle Volcanic Deformation Detection with InSAR Data in Central Volcanic Zone. IEEE Trans. Geosci. Remote. Sens. 2023, 61, 5218520. [Google Scholar] [CrossRef]
  237. Khan, M.A.; Park, H.; Lombardi, M. Exploring Explainable Artificial Intelligence Techniques for Interpretable Neural Networks in Traffic Sign Recognition Systems. Electronics 2024, 13, 306. [Google Scholar] [CrossRef]
  238. Resendiz, J.L.D.; Ponomaryov, V.; Reyes, R.R.; Sadovnychiy, S. Explainable CAD System for Classification of Acute Lymphoblastic Leukemia Based on a Robust White Blood Cell Segmentation. Cancers 2023, 15, 3376. [Google Scholar] [CrossRef]
  239. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [Google Scholar] [CrossRef]
  240. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777. [Google Scholar]
  241. Bello, M.; Napoles, G.; Concepcion, L.; Bello, R.; Mesejo, P.; Cordon, O. REPROT: Explaining the predictions of complex deep learning architectures for object detection through reducts of an image. Inf. Sci. 2024, 654, 119851. [Google Scholar] [CrossRef]
  242. Fouladgar, N.; Alirezaie, M.; Framling, K. Metrics and Evaluations of Time Series Explanations: An Application in Affect Computing. IEEE Access 2022, 10, 23995–24009. [Google Scholar] [CrossRef]
243. Arrotta, L.; Civitarese, G.; Bettini, C. DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2022, 6, 1. [Google Scholar] [CrossRef]
  244. Astolfi, D.; De Caro, F.; Vaccaro, A. Condition Monitoring of Wind Turbine Systems by Explainable Artificial Intelligence Techniques. Sensors 2023, 23, 5376. [Google Scholar] [CrossRef] [PubMed]
  245. Jean-Quartier, C.; Bein, K.; Hejny, L.; Hofer, E.; Holzinger, A.; Jeanquartier, F. The Cost of Understanding-XAI Algorithms towards Sustainable ML in the View of Computational Cost. Computation 2023, 11, 92. [Google Scholar] [CrossRef]
  246. Stassin, S.; Corduant, V.; Mahmoudi, S.A.; Siebert, X. Explainability and Evaluation of Vision Transformers: An In-Depth Experimental Study. Electronics 2024, 13, 175. [Google Scholar] [CrossRef]
247. Quach, L.D.; Quoc, K.N.; Quynh, A.N.; Ngoc, H.T.; Thai-Nghe, N. Tomato Health Monitoring System: Tomato Classification, Detection, and Counting System Based on YOLOv8 Model with Explainable MobileNet Models Using Grad-CAM++. IEEE Access 2024, 12, 9719–9737. [Google Scholar] [CrossRef]
  248. Varam, D.; Mitra, R.; Mkadmi, M.; Riyas, R.A.; Abuhani, D.A.; Dhou, S.; Alzaatreh, A. Wireless Capsule Endoscopy Image Classification: An Explainable AI Approach. IEEE Access 2023, 11, 105262–105280. [Google Scholar] [CrossRef]
  249. Bhambra, P.; Joachimi, B.; Lahav, O. Explaining deep learning of galaxy morphology with saliency mapping. Mon. Not. R. Astron. Soc. 2022, 511, 5032–5041. [Google Scholar] [CrossRef]
  250. Huang, F.; Zhang, Y.; Zhang, Y.; Wei, S.; Li, Q.; Li, L.; Jiang, S. Interpreting Conv-LSTM for Spatio-Temporal Soil Moisture Prediction in China. Agriculture 2023, 13, 971. [Google Scholar] [CrossRef]
  251. Wei, K.; Chen, B.; Zhang, J.; Fan, S.; Wu, K.; Liu, G.; Chen, D. Explainable Deep Learning Study for Leaf Disease Classification. Agronomy 2022, 12, 1035. [Google Scholar] [CrossRef]
252. Jin, W.; Li, X.; Fatehi, M.; Hamarneh, G. Generating post-hoc explanation from deep neural networks for multi-modal medical image analysis tasks. MethodsX 2023, 10, 102009. [Google Scholar] [CrossRef]
  253. Song, Z.; Trozzi, F.; Tian, H.; Yin, C.; Tao, P. Mechanistic Insights into Enzyme Catalysis from Explaining Machine-Learned Quantum Mechanical and Molecular Mechanical Minimum Energy Pathways. ACS Phys. Chem. Au 2022, 2, 316–330. [Google Scholar] [CrossRef]
  254. Brdar, S.; Panic, M.; Matavulj, P.; Stankovic, M.; Bartolic, D.; Sikoparija, B. Explainable AI for unveiling deep learning pollen classification model based on fusion of scattered light patterns and fluorescence spectroscopy. Sci. Rep. 2023, 13, 3205. [Google Scholar] [CrossRef] [PubMed]
  255. Ullah, I.; Rios, A.; Gala, V.; Mckeever, S. Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation. Appl. Sci. 2022, 12, 136. [Google Scholar] [CrossRef]
  256. Dong, S.; Jin, Y.; Bak, S.; Yoon, B.; Jeong, J. Explainable Convolutional Neural Network to Investigate Age-Related Changes in Multi-Order Functional Connectivity. Electronics 2021, 10, 3020. [Google Scholar] [CrossRef]
  257. Althoff, D.; Bazame, H.C.; Nascimento, J.G. Untangling hybrid hydrological models with explainable artificial intelligence. H2Open J. 2021, 4, 13–28. [Google Scholar] [CrossRef]
  258. Tiensuu, H.; Tamminen, S.; Puukko, E.; Roening, J. Evidence-Based and Explainable Smart Decision Support for Quality Improvement in Stainless Steel Manufacturing. Appl. Sci. 2021, 11, 10897. [Google Scholar] [CrossRef]
  259. Messner, W. From black box to clear box: A hypothesis testing framework for scalar regression problems using deep artificial neural networks. Appl. Soft Comput. 2023, 146, 110729. [Google Scholar] [CrossRef]
  260. Allen, B. An interpretable machine learning model of cross-sectional US county-level obesity prevalence using explainable artificial intelligence. PLoS ONE 2023, 18, e0292341. [Google Scholar] [CrossRef]
  261. Ilman, M.M.; Yavuz, S.; Taser, P.Y. Generalized Input Preshaping Vibration Control Approach for Multi-Link Flexible Manipulators using Machine Intelligence. Mechatronics 2022, 82, 102735. [Google Scholar] [CrossRef]
  262. Aghaeipoor, F.; Javidi, M.M.; Fernandez, A. IFC-BD: An Interpretable Fuzzy Classifier for Boosting Explainable Artificial Intelligence in Big Data. IEEE Trans. Fuzzy Syst. 2022, 30, 830–840. [Google Scholar] [CrossRef]
  263. Zaman, M.; Hassan, A. Fuzzy Heuristics and Decision Tree for Classification of Statistical Feature-Based Control Chart Patterns. Symmetry 2021, 13, 110. [Google Scholar] [CrossRef]
  264. Fernandez, G.; Aledo, J.A.; Gamez, J.A.; Puerta, J.M. Factual and Counterfactual Explanations in Fuzzy Classification Trees. IEEE Trans. Fuzzy Syst. 2022, 30, 5484–5495. [Google Scholar] [CrossRef]
  265. Gkalelis, N.; Daskalakis, D.; Mezaris, V. ViGAT: Bottom-Up Event Recognition and Explanation in Video Using Factorized Graph Attention Network. IEEE Access 2022, 10, 108797–108816. [Google Scholar] [CrossRef]
  266. Singha, M.; Pu, L.; Srivastava, G.; Ni, X.; Stanfield, B.A.; Uche, I.K.; Rider, P.J.F.; Kousoulas, K.G.; Ramanujam, J.; Brylinski, M. Unlocking the Potential of Kinase Targets in Cancer: Insights from CancerOmicsNet, an AI-Driven Approach to Drug Response Prediction in Cancer. Cancers 2023, 15, 4050. [Google Scholar] [CrossRef] [PubMed]
  267. Shang, Y.; Tian, Y.; Zhou, M.; Zhou, T.; Lyu, K.; Wang, Z.; Xin, R.; Liang, T.; Zhu, S.; Li, J. EHR-Oriented Knowledge Graph System: Toward Efficient Utilization of Non-Used Information Buried in Routine Clinical Practice. IEEE J. Biomed. Health Inform. 2021, 25, 2463–2475. [Google Scholar] [CrossRef]
  268. Espinoza, J.L.; Dupont, C.L.; O’Rourke, A.; Beyhan, S.; Morales, P.; Spoering, A.; Meyer, K.J.; Chan, A.P.; Choi, Y.; Nierman, W.C.; et al. Predicting antimicrobial mechanism-of-action from transcriptomes: A generalizable explainable artificial intelligence approach. PLoS Comput. Biol. 2021, 17, e1008857. [Google Scholar] [CrossRef]
  269. Altini, N.; Puro, E.; Taccogna, M.G.; Marino, F.; De Summa, S.; Saponaro, C.; Mattioli, E.; Zito, F.A.; Bevilacqua, V. Tumor Cellularity Assessment of Breast Histopathological Slides via Instance Segmentation and Pathomic Features Explainability. Bioengineering 2023, 10, 396. [Google Scholar] [CrossRef]
  270. Huelsmann, J.; Barbosa, J.; Steinke, F. Local Interpretable Explanations of Energy System Designs. Energies 2023, 16, 2161. [Google Scholar] [CrossRef]
271. Misitano, G.; Afsar, B.; Larraga, G.; Miettinen, K. Towards explainable interactive multiobjective optimization: R-XIMO. Auton. Agents Multi-Agent Syst. 2022, 36, 43. [Google Scholar] [CrossRef]
  272. Neghawi, E.; Liu, Y. Analysing Semi-Supervised ConvNet Model Performance with Computation Processes. Mach. Learn. Knowl. Extr. 2023, 5, 1848–1876. [Google Scholar] [CrossRef]
  273. Serradilla, O.; Zugasti, E.; Ramirez de Okariz, J.; Rodriguez, J.; Zurutuza, U. Adaptable and Explainable Predictive Maintenance: Semi-Supervised Deep Learning for Anomaly Detection and Diagnosis in Press Machine Data. Appl. Sci. 2021, 11, 7376. [Google Scholar] [CrossRef]
  274. Lin, C.S.; Wang, Y.C.F. Describe, Spot and Explain: Interpretable Representation Learning for Discriminative Visual Reasoning. IEEE Trans. Image Process. 2023, 32, 2481–2492. [Google Scholar] [CrossRef] [PubMed]
  275. Mohamed, E.; Sirlantzis, K.; Howells, G.; Hoque, S. Optimisation of Deep Learning Small-Object Detectors with Novel Explainable Verification. Sensors 2022, 22, 5596. [Google Scholar] [CrossRef]
  276. Krenn, M.; Kottmann, J.S.; Tischler, N.; Aspuru-Guzik, A. Conceptual Understanding through Efficient Automated Design of Quantum Optical Experiments. Phys. Rev. X 2021, 11, 031044. [Google Scholar] [CrossRef]
277. Podgorelec, V.; Kokol, P.; Stiglic, B.; Rozman, I. Decision trees: An overview and their use in medicine. J. Med. Syst. 2002, 26, 445–463. [Google Scholar] [CrossRef]
  278. Thrun, M.C. Exploiting Distance-Based Structures in Data Using an Explainable AI for Stock Picking. Information 2022, 13, 51. [Google Scholar] [CrossRef]
  279. Carta, S.M.; Consoli, S.; Piras, L.; Podda, A.S.; Recupero, D.R. Explainable Machine Learning Exploiting News and Domain-Specific Lexicon for Stock Market Forecasting. IEEE Access 2021, 9, 30193–30205. [Google Scholar] [CrossRef]
  280. Almohimeed, A.; Saleh, H.; Mostafa, S.; Saad, R.M.A.; Talaat, A.S. Cervical Cancer Diagnosis Using Stacked Ensemble Model and Optimized Feature Selection: An Explainable Artificial Intelligence Approach. Computers 2023, 12, 200. [Google Scholar] [CrossRef]
  281. Chen, Z.; Lian, Z.; Xu, Z. Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance Computing. Axioms 2023, 12, 997. [Google Scholar] [CrossRef]
  282. Leite, D.; Skrjanc, I.; Blazic, S.; Zdesar, A.; Gomide, F. Interval incremental learning of interval data streams and application to vehicle tracking. Inf. Sci. 2023, 630, 1–22. [Google Scholar] [CrossRef]
  283. Antoniou, G.; Papadakis, E.; Baryannis, G. Mental Health Diagnosis: A Case for Explainable Artificial Intelligence. Int. J. Artif. Intell. Tools 2022, 31, 2241003. [Google Scholar] [CrossRef]
  284. Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: A systematic review. Appl. Sci. 2021, 11, 5088. [Google Scholar] [CrossRef]
  285. Qaffas, A.A.; Ben Hajkacem, M.A.; Ben Ncir, C.E.; Nasraoui, O. Interpretable Multi-Criteria ABC Analysis Based on Semi-Supervised Clustering and Explainable Artificial Intelligence. IEEE Access 2023, 11, 43778–43792. [Google Scholar] [CrossRef]
  286. Wickramasinghe, C.S.; Amarasinghe, K.; Marino, D.L.; Rieger, C.; Manic, M. Explainable Unsupervised Machine Learning for Cyber-Physical Systems. IEEE Access 2021, 9, 131824–131843. [Google Scholar] [CrossRef]
  287. Cui, Y.; Liu, T.; Che, W.; Chen, Z.; Wang, S. Teaching Machines to Read, Answer and Explain. IEEE-ACM Trans. Audio Speech Lang. Process. 2022, 30, 1483–1492. [Google Scholar] [CrossRef]
  288. Heuillet, A.; Couthouis, F.; Diaz-Rodriguez, N. Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning with Shapley Values. IEEE Comput. Intell. Mag. 2022, 17, 59–71. [Google Scholar] [CrossRef]
289. Khanna, R.; Dodge, J.; Anderson, A.; Dikkala, R.; Irvine, J.; Shureih, Z.; Lam, K.H.; Matthews, C.R.; Lin, Z.; Kahng, M.; et al. Finding AI's Faults with AAR/AI: An Empirical Study. ACM Trans. Interact. Intell. Syst. 2022, 12, 1. [Google Scholar] [CrossRef]
  290. Klar, M.; Ruediger, P.; Schuermann, M.; Goeren, G.T.; Glatt, M.; Ravani, B.; Aurich, J.C. Explainable generative design in manufacturing for reinforcement learning based factory layout planning. J. Manuf. Syst. 2024, 72, 74–92. [Google Scholar] [CrossRef]
  291. Solis-Martin, D.; Galan-Paez, J.; Borrego-Diaz, J. On the Soundness of XAI in Prognostics and Health Management (PHM). Information 2023, 14, 256. [Google Scholar] [CrossRef]
  292. Mandler, H.; Weigand, B. Feature importance in neural networks as a means of interpretation for data-driven turbulence models. Comput. Fluids 2023, 265, 105993. [Google Scholar] [CrossRef]
  293. De Bosscher, B.C.D.; Ziabari, S.S.M.; Sharpanskykh, A. A comprehensive study of agent-based airport terminal operations using surrogate modeling and simulation. Simul. Model. Pract. Theory 2023, 128, 102811. [Google Scholar] [CrossRef]
  294. Wenninger, S.; Kaymakci, C.; Wiethe, C. Explainable long-term building energy consumption prediction using QLattice. Appl. Energy 2022, 308, 118300. [Google Scholar] [CrossRef]
  295. Schrills, T.; Franke, T. How Do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems. ACM Trans. Interact. Intell. Syst. 2023, 13, 25. [Google Scholar] [CrossRef]
  296. Mehta, H.; Passi, K. Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI). Algorithms 2022, 15, 291. [Google Scholar] [CrossRef]
  297. Ge, W.; Wang, J.; Lin, T.; Tang, B.; Li, X. Explainable cyber threat behavior identification based on self-adversarial topic generation. Comput. Secur. 2023, 132, 103369. [Google Scholar] [CrossRef]
  298. Posada-Moreno, A.F.; Surya, N.; Trimpe, S. ECLAD: Extracting Concepts with Local Aggregated Descriptors. Pattern Recognit. 2024, 147, 110146. [Google Scholar] [CrossRef]
  299. Zolanvari, M.; Yang, Z.; Khan, K.; Jain, R.; Meskin, N. TRUST XAI: Model-Agnostic Explanations for AI with a Case Study on IIoT Security. IEEE Internet Things J. 2023, 10, 2967–2978. [Google Scholar] [CrossRef]
300. Feng, J.; Wang, D.; Gu, Z. Bidirectional Flow Decision Tree for Reliable Remote Sensing Image Scene Classification. Remote Sens. 2022, 14, 3943. [Google Scholar] [CrossRef]
  301. Yin, S.; Li, H.; Sun, Y.; Ibrar, M.; Teng, L. Data Visualization Analysis Based on Explainable Artificial Intelligence: A Survey. IJLAI Trans. Sci. Eng. 2024, 2, 13–20. [Google Scholar]
  302. Meskauskas, Z.; Kazanavicius, E. About the New Methodology and XAI-Based Software Toolkit for Risk Assessment. Sustainability 2022, 14, 5496. [Google Scholar] [CrossRef]
  303. Leem, S.; Oh, J.; So, D.; Moon, J. Towards Data-Driven Decision-Making in the Korean Film Industry: An XAI Model for Box Office Analysis Using Dimension Reduction, Clustering, and Classification. Entropy 2023, 25, 571. [Google Scholar] [CrossRef]
  304. Ayoub, O.; Troia, S.; Andreoletti, D.; Bianco, A.; Tornatore, M.; Giordano, S.; Rottondi, C. Towards explainable artificial intelligence in optical networks: The use case of lightpath QoT estimation. J. Opt. Commun. Netw. 2023, 15, A26–A38. [Google Scholar] [CrossRef]
  305. Aguilar, D.L.; Medina-Perez, M.A.; Loyola-Gonzalez, O.; Choo, K.K.R.; Bucheli-Susarrey, E. Towards an Interpretable Autoencoder: A Decision-Tree-Based Autoencoder and its Application in Anomaly Detection. IEEE Trans. Dependable Secur. Comput. 2023, 20, 1048–1059. [Google Scholar] [CrossRef]
  306. del Castillo Torres, G.; Francesca Roig-Maimo, M.; Mascaro-Oliver, M.; Amengual-Alcover, E.; Mas-Sanso, R. Understanding How CNNs Recognize Facial Expressions: A Case Study with LIME and CEM. Sensors 2023, 23, 131. [Google Scholar] [CrossRef] [PubMed]
  307. Dewi, C.; Chen, R.C.; Yu, H.; Jiang, X. XAI for Image Captioning using SHAP. J. Inf. Sci. Eng. 2023, 39, 711–724. [Google Scholar] [CrossRef]
  308. Alkhalaf, S.; Alturise, F.; Bahaddad, A.A.; Elnaim, B.M.E.; Shabana, S.; Abdel-Khalek, S.; Mansour, R.F. Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging. Cancers 2023, 15, 1492. [Google Scholar] [CrossRef]
  309. Nascita, A.; Montieri, A.; Aceto, G.; Ciuonzo, D.; Persico, V.; Pescape, A. XAI Meets Mobile Traffic Classification: Understanding and Improving Multimodal Deep Learning Architectures. IEEE Trans. Netw. Serv. Manag. 2021, 18, 4225–4246. [Google Scholar] [CrossRef]
  310. Silva-Aravena, F.; Delafuente, H.N.; Gutierrez-Bahamondes, J.H.; Morales, J. A Hybrid Algorithm of ML and XAI to Prevent Breast Cancer: A Strategy to Support Decision Making. Cancers 2023, 15, 2443. [Google Scholar] [CrossRef] [PubMed]
  311. Bjorklund, A.; Henelius, A.; Oikarinen, E.; Kallonen, K.; Puolamaki, K. Explaining any black box model using real data. Front. Comput. Sci. 2023, 5, 1143904. [Google Scholar] [CrossRef]
  312. Dobrovolskis, A.; Kazanavicius, E.; Kizauskiene, L. Building XAI-Based Agents for IoT Systems. Appl. Sci. 2023, 13, 4040. [Google Scholar] [CrossRef]
  313. Perl, M.; Sun, Z.; Machlev, R.; Belikov, J.; Levy, K.Y.; Levron, Y. PMU placement for fault line location using neural additive models-A global XAI technique. Int. J. Electr. Power Energy Syst. 2024, 155, 109573. [Google Scholar] [CrossRef]
  314. Nwafor, O.; Okafor, E.; Aboushady, A.A.; Nwafor, C.; Zhou, C. Explainable Artificial Intelligence for Prediction of Non-Technical Losses in Electricity Distribution Networks. IEEE Access 2023, 11, 73104–73115. [Google Scholar] [CrossRef]
  315. Panagoulias, D.P.; Sarmas, E.; Marinakis, V.; Virvou, M.; Tsihrintzis, G.A.; Doukas, H. Intelligent Decision Support for Energy Management: A Methodology for Tailored Explainability of Artificial Intelligence Analytics. Electronics 2023, 12, 4430. [Google Scholar] [CrossRef]
316. Kim, S.; Choo, S.; Park, D.; Park, H.; Nam, C.S.; Jung, J.Y.; Lee, S. Designing an XAI interface for BCI experts: A contextual design for pragmatic explanation interface based on domain knowledge in a specific context. Int. J. Hum.-Comput. Stud. 2023, 174, 103009. [Google Scholar] [CrossRef]
  317. Wang, Z.; Joe, I. OISE: Optimized Input Sampling Explanation with a Saliency Map Based on the Black-Box Model. Appl. Sci. 2023, 13, 5886. [Google Scholar] [CrossRef]
  318. Puechmorel, S. Pullback Bundles and the Geometry of Learning. Entropy 2023, 25, 1450. [Google Scholar] [CrossRef]
  319. Machlev, R.; Perl, M.; Belikov, J.; Levy, K.Y.; Levron, Y. Measuring Explainability and Trustworthiness of Power Quality Disturbances Classifiers Using XAI-Explainable Artificial Intelligence. IEEE Trans. Ind. Inform. 2022, 18, 5127–5137. [Google Scholar] [CrossRef]
  320. Monteiro, W.R.; Reynoso-Meza, G. A multi-objective optimization design to generate surrogate machine learning models in explainable artificial intelligence applications. Euro J. Decis. Process. 2023, 11, 100040. [Google Scholar] [CrossRef]
  321. Shi, J.; Zou, W.; Zhang, C.; Tan, L.; Zou, Y.; Peng, Y.; Huo, W. CAMFuzz: Explainable Fuzzing with Local Interpretation. Cybersecurity 2022, 5, 17. [Google Scholar] [CrossRef]
  322. Igarashi, D.; Yee, J.; Yokoyama, Y.; Kusuno, H.; Tagawa, Y. The effects of secondary cavitation position on the velocity of a laser-induced microjet extracted using explainable artificial intelligence. Phys. Fluids 2024, 36, 013317. [Google Scholar] [CrossRef]
  323. Soto, J.L.; Uriguen, E.Z.; Garcia, X.D.C. Real-Time, Model-Agnostic and User-Driven Counterfactual Explanations Using Autoencoders. Appl. Sci. 2023, 13, 2912. [Google Scholar] [CrossRef]
  324. Han, J.; Lee, Y. Explainable Artificial Intelligence-Based Competitive Factor Identification. ACM Trans. Knowl. Discov. Data 2022, 16, 10. [Google Scholar] [CrossRef]
  325. Hasan, M.; Lu, M. Enhanced model tree for quantifying output variances due to random data sampling: Productivity prediction applications. Autom. Constr. 2024, 158, 105218. [Google Scholar] [CrossRef]
  326. Sajjad, U.; Hussain, I.; Hamid, K.; Ali, H.M.; Wang, C.C.; Yan, W.M. Liquid-to-vapor phase change heat transfer evaluation and parameter sensitivity analysis of nanoporous surface coatings. Int. J. Heat Mass Transf. 2022, 194, 123088. [Google Scholar] [CrossRef]
  327. Ravi, S.K.; Roy, I.; Roychowdhury, S.; Feng, B.; Ghosh, S.; Reynolds, C.; Umretiya, R.V.; Rebak, R.B.; Hoffman, A.K. Elucidating precipitation in FeCrAl alloys through explainable AI: A case study. Comput. Mater. Sci. 2023, 230, 112440. [Google Scholar] [CrossRef]
  328. Sauter, D.; Lodde, G.; Nensa, F.; Schadendorf, D.; Livingstone, E.; Kukuk, M. Validating Automatic Concept-Based Explanations for AI-Based Digital Histopathology. Sensors 2022, 22, 5346. [Google Scholar] [CrossRef]
  329. Akilandeswari, P.; Eliazer, M.; Patil, R. Explainable AI-Reducing Costs, Finding the Optimal Path between Graphical Locations. Int. J. Early Child. Spec. Educ. 2022, 14, 504–511. [Google Scholar] [CrossRef]
  330. Aghaeipoor, F.; Sabokrou, M.; Fernandez, A. Fuzzy Rule-Based Explainer Systems for Deep Neural Networks: From Local Explainability to Global Understanding. IEEE Trans. Fuzzy Syst. 2023, 31, 3069–3080. [Google Scholar] [CrossRef]
  331. Lee, E.H.; Kim, H. Feature-Based Interpretation of the Deep Neural Network. Electronics 2021, 10, 2687. [Google Scholar] [CrossRef]
  332. Hung, S.C.; Wu, H.C.; Tseng, M.H. Integrating Image Quality Enhancement Methods and Deep Learning Techniques for Remote Sensing Scene Classification. Appl. Sci. 2021, 11, 1659. [Google Scholar] [CrossRef]
  333. Heistrene, L.; Machlev, R.; Perl, M.; Belikov, J.; Baimel, D.; Levy, K.; Mannor, S.; Levron, Y. Explainability-based Trust Algorithm for electricity price forecasting models. Energy AI 2023, 14, 100259. [Google Scholar] [CrossRef]
  334. Ribeiro, D.; Matos, L.M.; Moreira, G.; Pilastri, A.; Cortez, P. Isolation Forests and Deep Autoencoders for Industrial Screw Tightening Anomaly Detection. Computers 2022, 11, 54. [Google Scholar] [CrossRef]
335. Blomerus, N.; Cilliers, J.; Nel, W.; Blasch, E.; de Villiers, P. Feedback-Assisted Automatic Target and Clutter Discrimination Using a Bayesian Convolutional Neural Network for Improved Explainability in SAR Applications. Remote Sens. 2022, 14, 96. [Google Scholar] [CrossRef]
  336. Estivill-Castro, V.; Gilmore, E.; Hexel, R. Constructing Explainable Classifiers from the Start-Enabling Human-in-the Loop Machine Learning. Information 2022, 13, 464. [Google Scholar] [CrossRef]
  337. Angelotti, G.; Diaz-Rodriguez, N. Towards a more efficient computation of individual attribute and policy contribution for post-hoc explanation of cooperative multi-agent systems using Myerson values. Knowl.-Based Syst. 2023, 260, 110189. [Google Scholar] [CrossRef]
  338. Tang, R.; Liu, N.; Yang, F.; Zou, N.; Hu, X. Defense Against Explanation Manipulation. Front. Big Data 2022, 5, 704203. [Google Scholar] [CrossRef] [PubMed]
  339. Al-Sakkari, E.G.; Ragab, A.; So, T.M.Y.; Shokrollahi, M.; Dagdougui, H.; Navarri, P.; Elkamel, A.; Amazouz, M. Machine learning-assisted selection of adsorption-based carbon dioxide capture materials. J. Environ. Chem. Eng. 2023, 11, 110732. [Google Scholar] [CrossRef]
  340. Apostolopoulos, I.D.; Apostolopoulos, D.J.; Papathanasiou, N.D. Deep Learning Methods to Reveal Important X-ray Features in COVID-19 Detection: Investigation of Explainability and Feature Reproducibility. Reports 2022, 5, 20. [Google Scholar] [CrossRef]
  341. Deramgozin, M.M.; Jovanovic, S.; Arevalillo-Herraez, M.; Ramzan, N.; Rabah, H. Attention-Enabled Lightweight Neural Network Architecture for Detection of Action Unit Activation. IEEE Access 2023, 11, 117954–117970. [Google Scholar] [CrossRef]
  342. Dassanayake, P.M.; Anjum, A.; Bashir, A.K.; Bacon, J.; Saleem, R.; Manning, W. A Deep Learning Based Explainable Control System for Reconfigurable Networks of Edge Devices. IEEE Trans. Netw. Sci. Eng. 2022, 9, 7–19. [Google Scholar] [CrossRef]
  343. Qayyum, F.; Khan, M.A.; Kim, D.H.; Ko, H.; Ryu, G.A. Explainable AI for Material Property Prediction Based on Energy Cloud: A Shapley-Driven Approach. Materials 2023, 16, 7322. [Google Scholar] [CrossRef]
  344. Lellep, M.; Prexl, J.; Eckhardt, B.; Linkmann, M. Interpreted machine learning in fluid dynamics: Explaining relaminarisation events in wall-bounded shear flows. J. Fluid Mech. 2022, 942, A2. [Google Scholar] [CrossRef]
  345. Bilc, S.; Groza, A.; Muntean, G.; Nicoara, S.D. Interleaving Automatic Segmentation and Expert Opinion for Retinal Conditions. Diagnostics 2022, 12, 22. [Google Scholar] [CrossRef] [PubMed]
  346. Sakai, A.; Komatsu, M.; Komatsu, R.; Matsuoka, R.; Yasutomi, S.; Dozen, A.; Shozu, K.; Arakaki, T.; Machino, H.; Asada, K.; et al. Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening. Biomedicines 2022, 10, 551. [Google Scholar] [CrossRef] [PubMed]
347. Terzi, D.S.; Demirezen, U.; Sagiroglu, S. Explainable Credit Card Fraud Detection with Image Conversion. ADCAIJ Adv. Distrib. Comput. Artif. Intell. J. 2021, 10, 63–76. [Google Scholar] [CrossRef]
  348. Kothadiya, D.R.; Bhatt, C.M.; Rehman, A.; Alamri, F.S.; Saba, T. SignExplainer: An Explainable AI-Enabled Framework for Sign Language Recognition with Ensemble Learning. IEEE Access 2023, 11, 47410–47419. [Google Scholar] [CrossRef]
  349. Slijepcevic, D.; Zeppelzauer, M.; Unglaube, F.; Kranzl, A.; Breiteneder, C.; Horsak, B. Explainable Machine Learning in Human Gait Analysis: A Study on Children with Cerebral Palsy. IEEE Access 2023, 11, 65906–65923. [Google Scholar] [CrossRef]
  350. Hwang, C.; Lee, T. E-SFD: Explainable Sensor Fault Detection in the ICS Anomaly Detection System. IEEE Access 2021, 9, 140470–140486. [Google Scholar] [CrossRef]
  351. Rivera, A.J.; Munoz, J.C.; Perez-Goody, M.D.; de San Pedro, B.S.; Charte, F.; Elizondo, D.; Rodriguez, C.; Abolafia, M.L.; Perea, A.; del Jesus, M.J. XAIRE: An ensemble-based methodology for determining the relative importance of variables in regression tasks. Application to a hospital emergency department. Artif. Intell. Med. 2023, 137, 102494. [Google Scholar] [CrossRef]
  352. Park, J.J.; Lee, S.; Shin, S.; Kim, M.; Park, J. Development of a Light and Accurate Nox Prediction Model for Diesel Engines Using Machine Learning and Xai Methods. Int. J. Automot. Technol. 2023, 24, 559–571. [Google Scholar] [CrossRef]
  353. Abdollahi, A.; Pradhan, B. Urban Vegetation Mapping from Aerial Imagery Using Explainable AI (XAI). Sensors 2021, 21, 4738. [Google Scholar] [CrossRef]
  354. Xie, Y.; Pongsakornsathien, N.; Gardi, A.; Sabatini, R. Explanation of Machine-Learning Solutions in Air-Traffic Management. Aerospace 2021, 8, 224. [Google Scholar] [CrossRef]
  355. Al-Hawawreh, M.; Moustafa, N. Explainable deep learning for attack intelligence and combating cyber-physical attacks. Ad Hoc Netw. 2024, 153, 103329. [Google Scholar] [CrossRef]
  356. Srisuchinnawong, A.; Homchanthanakul, J.; Manoonpong, P. NeuroVis: Real-Time Neural Information Measurement and Visualization of Embodied Neural Systems. Front. Neural Circuits 2021, 15, 743101. [Google Scholar] [CrossRef] [PubMed]
  357. Dai, B.; Shen, X.; Chen, L.Y.; Li, C.; Pan, W. Data-Adaptive Discriminative Feature Localization with Statistically Guaranteed Interpretation. Ann. Appl. Stat. 2023, 17, 2019–2038. [Google Scholar] [CrossRef]
  358. Li, Z. Extracting spatial effects from machine learning model using local interpretation method: An example of SHAP and XGBoost. Comput. Environ. Urban Syst. 2022, 96, 101845. [Google Scholar] [CrossRef]
  359. Gonzalez-Gonzalez, J.; Garcia-Mendez, S.; De Arriba-Perez, F.; Gonzalez-Castano, F.J.; Barba-Seara, O. Explainable Automatic Industrial Carbon Footprint Estimation from Bank Transaction Classification Using Natural Language Processing. IEEE Access 2022, 10, 126326–126338. [Google Scholar] [CrossRef]
  360. Elayan, H.; Aloqaily, M.; Karray, F.; Guizani, M. Internet of Behavior and Explainable AI Systems for Influencing IoT Behavior. IEEE Netw. 2023, 37, 62–68. [Google Scholar] [CrossRef]
  361. Cheng, X.; Doosthosseini, A.; Kunkel, J. Improve the Deep Learning Models in Forestry Based on Explanations and Expertise. Front. Plant Sci. 2022, 13, 902105. [Google Scholar] [CrossRef] [PubMed]
  362. Qiu, W.; Chen, H.; Kaeberlein, M.; Lee, S.I. ExplaiNAble BioLogical Age (ENABL Age): An artificial intelligence framework for interpretable biological age. Lancet Healthy Longev. 2023, 4, E711–E723. [Google Scholar] [CrossRef]
  363. Abba, S.I.; Yassin, M.A.; Mubarak, A.S.; Shah, S.M.H.; Usman, J.; Oudah, A.Y.; Naganna, S.R.; Aljundi, I.H. Drinking Water Resources Suitability Assessment Based on Pollution Index of Groundwater Using Improved Explainable Artificial Intelligence. Sustainability 2023, 15, 5655. [Google Scholar] [CrossRef]
  364. Martinez-Seras, A.; Del Ser, J.; Lobo, J.L.; Garcia-Bringas, P.; Kasabov, N. A novel Out-of-Distribution detection approach for Spiking Neural Networks: Design, fusion, performance evaluation and explainability. Inf. Fusion 2023, 100, 101943. [Google Scholar] [CrossRef]
  365. Krupp, L.; Wiede, C.; Friedhoff, J.; Grabmaier, A. Explainable Remaining Tool Life Prediction for Individualized Production Using Automated Machine Learning. Sensors 2023, 23, 8523. [Google Scholar] [CrossRef] [PubMed]
  366. Nayebi, A.; Tipirneni, S.; Reddy, C.K.; Foreman, B.; Subbian, V. WindowSHAP: An efficient framework for explaining time-series classifiers based on Shapley values. J. Biomed. Inform. 2023, 144, 104438. [Google Scholar] [CrossRef] [PubMed]
  367. Lee, J.; Jeong, J.; Jung, S.; Moon, J.; Rho, S. Verification of De-Identification Techniques for Personal Information Using Tree-Based Methods with Shapley Values. J. Pers. Med. 2022, 12, 190. [Google Scholar] [CrossRef]
  368. Nahiduzzaman, M.; Chowdhury, M.E.H.; Salam, A.; Nahid, E.; Ahmed, F.; Al-Emadi, N.; Ayari, M.A.; Khandakar, A.; Haider, J. Explainable deep learning model for automatic mulberry leaf disease classification. Front. Plant Sci. 2023, 14, 1175515. [Google Scholar] [CrossRef] [PubMed]
  369. Khan, A.; Ul Haq, I.; Hussain, T.; Muhammad, K.; Hijji, M.; Sajjad, M.; De Albuquerque, V.H.C.; Baik, S.W. PMAL: A Proxy Model Active Learning Approach for Vision Based Industrial Applications. ACM Trans. Multimed. Comput. Commun. Appl. 2022, 18, 123. [Google Scholar] [CrossRef]
  370. Beucher, A.; Rasmussen, C.B.; Moeslund, T.B.; Greve, M.H. Interpretation of Convolutional Neural Networks for Acid Sulfate Soil Classification. Front. Environ. Sci. 2022, 9, 809995. [Google Scholar] [CrossRef]
  371. Kui, B.; Pinter, J.; Molontay, R.; Nagy, M.; Farkas, N.; Gede, N.; Vincze, A.; Bajor, J.; Godi, S.; Czimmer, J.; et al. EASY-APP: An artificial intelligence model and application for early and easy prediction of severity in acute pancreatitis. Clin. Transl. Med. 2022, 12, e842. [Google Scholar] [CrossRef]
  372. Szandala, T. Unlocking the black box of CNNs: Visualising the decision-making process with PRISM. Inf. Sci. 2023, 642, 119162. [Google Scholar] [CrossRef]
  373. Rengasamy, D.; Rothwell, B.C.; Figueredo, G.P. Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems Using Feature Importance Fusion. Appl. Sci. 2021, 11, 1854. [Google Scholar] [CrossRef]
  374. Jahin, M.A.; Shovon, M.S.H.; Islam, M.S.; Shin, J.; Mridha, M.F.; Okuyama, Y. QAmplifyNet: Pushing the boundaries of supply chain backorder prediction using interpretable hybrid quantum-classical neural network. Sci. Rep. 2023, 13, 18246. [Google Scholar] [CrossRef] [PubMed]
  375. Nielsen, I.E.; Ramachandran, R.P.; Bouaynaya, N.; Fathallah-Shaykh, H.M.; Rasool, G. EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models. IEEE Access 2023, 11, 82556–82569. [Google Scholar] [CrossRef]
  376. Hashem, H.A.; Abdulazeem, Y.; Labib, L.M.; Elhosseini, M.A.; Shehata, M. An Integrated Machine Learning-Based Brain Computer Interface to Classify Diverse Limb Motor Tasks: Explainable Model. Sensors 2023, 23, 3171. [Google Scholar] [CrossRef] [PubMed]
  377. Lin, R.; Wichadakul, D. Interpretable Deep Learning Model Reveals Subsequences of Various Functions for Long Non-Coding RNA Identification. Front. Genet. 2022, 13, 876721. [Google Scholar] [CrossRef]
378. Chen, H.; Yang, L.; Wu, Q. Enhancing Land Cover Mapping and Monitoring: An Interactive and Explainable Machine Learning Approach Using Google Earth Engine. Remote Sens. 2023, 15, 4585. [Google Scholar] [CrossRef]
379. Oveis, A.H.; Giusti, E.; Ghio, S.; Meucci, G.; Martorella, M. LIME-Assisted Automatic Target Recognition with SAR Images: Toward Incremental Learning and Explainability. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 9175–9192. [Google Scholar] [CrossRef]
  380. Llorca-Schenk, J.; Rico-Juan, J.R.; Sanchez-Lozano, M. Designing porthole aluminium extrusion dies on the basis of eXplainable Artificial Intelligence. Expert Syst. Appl. 2023, 222, 119808. [Google Scholar] [CrossRef]
  381. Diaz, G.M.; Hernandez, J.J.G.; Salvador, J.L.G. Analyzing Employee Attrition Using Explainable AI for Strategic HR Decision-Making. Mathematics 2023, 11, 4677. [Google Scholar] [CrossRef]
  382. Pelaez-Rodriguez, C.; Marina, C.M.; Perez-Aracil, J.; Casanova-Mateo, C.; Salcedo-Sanz, S. Extreme Low-Visibility Events Prediction Based on Inductive and Evolutionary Decision Rules: An Explicability-Based Approach. Atmosphere 2023, 14, 542. [Google Scholar] [CrossRef]
  383. An, J.; Zhang, Y.; Joe, I. Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models. Appl. Sci. 2023, 13, 8782. [Google Scholar] [CrossRef]
  384. Glick, A.; Clayton, M.; Angelov, N.; Chang, J. Impact of explainable artificial intelligence assistance on clinical decision-making of novice dental clinicians. JAMIA Open 2022, 5, ooac031. [Google Scholar] [CrossRef] [PubMed]
  385. Qureshi, Y.M.; Voloshin, V.; Facchinelli, L.; McCall, P.J.; Chervova, O.; Towers, C.E.; Covington, J.A.; Towers, D.P. Finding a Husband: Using Explainable AI to Define Male Mosquito Flight Differences. Biology 2023, 12, 496. [Google Scholar] [CrossRef] [PubMed]
  386. Wen, B.; Wang, N.; Subbalakshmi, K.; Chandramouli, R. Revealing the Roles of Part-of-Speech Taggers in Alzheimer Disease Detection: Scientific Discovery Using One-Intervention Causal Explanation. JMIR Form. Res. 2023, 7, e36590. [Google Scholar] [CrossRef] [PubMed]
  387. Alvey, B.; Anderson, D.; Keller, J.; Buck, A. Linguistic Explanations of Black Box Deep Learning Detectors on Simulated Aerial Drone Imagery. Sensors 2023, 23, 6879. [Google Scholar] [CrossRef] [PubMed]
  388. Hou, B.; Gao, J.; Guo, X.; Baker, T.; Zhang, Y.; Wen, Y.; Liu, Z. Mitigating the Backdoor Attack by Federated Filters for Industrial IoT Applications. IEEE Trans. Ind. Inform. 2022, 18, 3562–3571. [Google Scholar] [CrossRef]
  389. Nakagawa, P.I.; Pires, L.F.; Moreira, J.L.R.; Santos, L.O.B.d.S.; Bukhsh, F. Semantic Description of Explainable Machine Learning Workflows for Improving Trust. Appl. Sci. 2021, 11, 804. [Google Scholar] [CrossRef]
  390. Yang, M.; Moon, J.; Yang, S.; Oh, H.; Lee, S.; Kim, Y.; Jeong, J. Design and Implementation of an Explainable Bidirectional LSTM Model Based on Transition System Approach for Cooperative AI-Workers. Appl. Sci. 2022, 12, 6390. [Google Scholar] [CrossRef]
  391. O’Shea, R.; Manickavasagar, T.; Horst, C.; Hughes, D.; Cusack, J.; Tsoka, S.; Cook, G.; Goh, V. Weakly supervised segmentation models as explainable radiological classifiers for lung tumour detection on CT images. Insights Imaging 2023, 14, 195. [Google Scholar] [CrossRef] [PubMed]
  392. Tasnim, N.; Al Mamun, S.; Shahidul Islam, M.; Kaiser, M.S.; Mahmud, M. Explainable Mortality Prediction Model for Congestive Heart Failure with Nature-Based Feature Selection Method. Appl. Sci. 2023, 13, 6138. [Google Scholar] [CrossRef]
  393. Marques-Silva, J.; Ignatiev, A. No silver bullet: Interpretable ML models must be explained. Front. Artif. Intell. 2023, 6, 1128212. [Google Scholar] [CrossRef] [PubMed]
  394. Pedraza, A.; del Rio, D.; Bautista-Juzgado, V.; Fernandez-Lopez, A.; Sanz-Andres, A. Study of the Feasibility of Decoupling Temperature and Strain from a f-PA-OFDR over an SMF Using Neural Networks. Sensors 2023, 23, 5515. [Google Scholar] [CrossRef] [PubMed]
  395. Kwon, S.; Lee, Y. Explainability-Based Mix-Up Approach for Text Data Augmentation. ACM Trans. Knowl. Discov. Data 2023, 17, 13. [Google Scholar] [CrossRef]
  396. Rosenberg, G.; Brubaker, J.K.; Schuetz, M.J.A.; Salton, G.; Zhu, Z.; Zhu, E.Y.; Kadioglu, S.; Borujeni, S.E.; Katzgraber, H.G. Explainable Artificial Intelligence Using Expressive Boolean Formulas. Mach. Learn. Knowl. Extr. 2023, 5, 1760–1795. [Google Scholar] [CrossRef]
  397. O’Sullivan, C.M.; Deo, R.C.; Ghahramani, A. Explainable AI approach with original vegetation data classifies spatio-temporal nitrogen in flows from ungauged catchments to the Great Barrier Reef. Sci. Rep. 2023, 13, 18145. [Google Scholar] [CrossRef]
398. Richter, Y.; Balal, N.; Pinhasi, Y. Neural-Network-Based Target Classification and Range Detection by CW MMW Radar. Remote Sens. 2023, 15, 4553. [Google Scholar] [CrossRef]
  399. Dong, G.; Ma, Y.; Basu, A. Feature-Guided CNN for Denoising Images from Portable Ultrasound Devices. IEEE Access 2021, 9, 28272–28281. [Google Scholar] [CrossRef]
  400. Murala, D.K.; Panda, S.K.; Dash, S.P. MedMetaverse: Medical Care of Chronic Disease Patients and Managing Data Using Artificial Intelligence, Blockchain, and Wearable Devices State-of-the-Art Methodology. IEEE Access 2023, 11, 138954–138985. [Google Scholar] [CrossRef]
  401. Brakefield, W.S.; Ammar, N.; Shaban-Nejad, A. An Urban Population Health Observatory for Disease Causal Pathway Analysis and Decision Support: Underlying Explainable Artificial Intelligence Model. JMIR Form. Res. 2022, 6, e36055. [Google Scholar] [CrossRef]
  402. Ortega, A.; Fierrez, J.; Morales, A.; Wang, Z.; de la Cruz, M.; Alonso, C.L.; Ribeiro, T. Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning. Computers 2021, 10, 154. [Google Scholar] [CrossRef]
  403. An, J.; Joe, I. Attention Map-Guided Visual Explanations for Deep Neural Networks. Appl. Sci. 2022, 12, 3846. [Google Scholar] [CrossRef]
404. Huang, X.; Sun, Y.; Feng, S.; Ye, Y.; Li, X. Better Visual Interpretation for Remote Sensing Scene Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6504305. [Google Scholar] [CrossRef]
  405. Senocak, A.U.G.; Yilmaz, M.T.; Kalkan, S.; Yucel, I.; Amjad, M. An explainable two-stage machine learning approach for precipitation forecast. J. Hydrol. 2023, 627, 130375. [Google Scholar] [CrossRef]
  406. Kalutharage, C.S.; Liu, X.; Chrysoulas, C.; Pitropakis, N.; Papadopoulos, P. Explainable AI-Based DDOS Attack Identification Method for IoT Networks. Computers 2023, 12, 32. [Google Scholar] [CrossRef]
  407. Sorayaie Azar, A.; Naemi, A.; Babaei Rikan, S.; Mohasefi, J.B.; Pirnejad, H.; Wiil, U.K. Monkeypox detection using deep neural networks. BMC Infect. Dis. 2023, 23, 438. [Google Scholar] [CrossRef] [PubMed]
  408. Di Stefano, V.; Prinzi, F.; Luigetti, M.; Russo, M.; Tozza, S.; Alonge, P.; Romano, A.; Sciarrone, M.A.; Vitali, F.; Mazzeo, A.; et al. Machine Learning for Early Diagnosis of ATTRv Amyloidosis in Non-Endemic Areas: A Multicenter Study from Italy. Brain Sci. 2023, 13, 805. [Google Scholar] [CrossRef] [PubMed]
  409. Huong, T.T.; Bac, T.P.; Ha, K.N.; Hoang, N.V.; Hoang, N.X.; Hung, N.T.; Tran, K.P. Federated Learning-Based Explainable Anomaly Detection for Industrial Control Systems. IEEE Access 2022, 10, 53854–53872. [Google Scholar] [CrossRef]
  410. Diefenbach, S.; Christoforakos, L.; Ullrich, D.; Butz, A. Invisible but Understandable: In Search of the Sweet Spot between Technology Invisibility and Transparency in Smart Spaces and Beyond. Multimodal Technol. Interact. 2022, 6, 95. [Google Scholar] [CrossRef]
  411. Patel, J.; Amipara, C.; Ahanger, T.A.; Ladhva, K.; Gupta, R.K.; Alsaab, H.O.O.; Althobaiti, Y.S.S.; Ratna, R. A Machine Learning-Based Water Potability Prediction Model by Using Synthetic Minority Oversampling Technique and Explainable AI. Comput. Intell. Neurosci. 2022, 2022, 9283293. [Google Scholar] [CrossRef]
  412. Kim, J.K.; Lee, K.; Hong, S.G. Cognitive Load Recognition Based on T-Test and SHAP from Wristband Sensors. Hum.-Centric Comput. Inf. Sci. 2023, 13. [Google Scholar] [CrossRef]
  413. Schroeder, M.; Zamanian, A.; Ahmidi, N. What about the Latent Space? The Need for Latent Feature Saliency Detection in Deep Time Series Classification. Mach. Learn. Knowl. Extr. 2023, 5, 539–559. [Google Scholar] [CrossRef]
  414. Singh, A.; Pannu, H.; Malhi, A. Explainable Information Retrieval using Deep Learning for Medical images. Comput. Sci. Inf. Syst. 2022, 19, 277–307. [Google Scholar] [CrossRef]
  415. Kumara, I.; Ariz, M.H.; Chhetri, M.B.; Mohammadi, M.; Van Den Heuvel, W.J.; Tamburri, D.A. FOCloud: Feature Model Guided Performance Prediction and Explanation for Deployment Configurable Cloud Applications. IEEE Trans. Serv. Comput. 2023, 16, 302–314. [Google Scholar] [CrossRef]
  416. Konforti, Y.; Shpigler, A.; Lerner, B.; Bar-Hillel, A. SIGN: Statistical Inference Graphs Based on Probabilistic Network Activity Interpretation. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3783–3797. [Google Scholar] [CrossRef] [PubMed]
  417. Oblak, T.; Haraksim, R.; Beslay, L.; Peer, P. Probabilistic Fingermark Quality Assessment with Quality Region Localisation. Sensors 2023, 23, 4006. [Google Scholar] [CrossRef]
  418. Le, T.T.H.; Kang, H.; Kim, H. Robust Adversarial Attack Against Explainable Deep Classification Models Based on Adversarial Images with Different Patch Sizes and Perturbation Ratios. IEEE Access 2021, 9, 133049–133061. [Google Scholar] [CrossRef]
  419. Capuozzo, S.; Gravina, M.; Gatta, G.; Marrone, S.; Sansone, C. A Multimodal Knowledge-Based Deep Learning Approach for MGMT Promoter Methylation Identification. J. Imaging 2022, 8, 321. [Google Scholar] [CrossRef] [PubMed]
  420. Vo, H.T.; Thien, N.N.; Mui, K.C. A Deep Transfer Learning Approach for Accurate Dragon Fruit Ripeness Classification and Visual Explanation using Grad-CAM. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1344–1352. [Google Scholar] [CrossRef]
  421. Artelt, A.; Hammer, B. Efficient computation of counterfactual explanations and counterfactual metrics of prototype-based classifiers. Neurocomputing 2022, 470, 304–317. [Google Scholar] [CrossRef]
  422. Abeyrathna, K.D.; Granmo, O.C.; Goodwin, M. Adaptive Sparse Representation of Continuous Input for Tsetlin Machines Based on Stochastic Searching on the Line. Electronics 2021, 10, 2107. [Google Scholar] [CrossRef]
  423. Pandiyan, V.; Wrobel, R.; Leinenbach, C.; Shevchik, S. Optimizing in-situ monitoring for laser powder bed fusion process: Deciphering acoustic emission and sensor sensitivity with explainable machine learning. J. Mater. Process. Technol. 2023, 321, 118144. [Google Scholar] [CrossRef]
  424. Jeon, M.; Kim, T.; Kim, S.; Lee, C.; Youn, C.H. Recursive Visual Explanations Mediation Scheme Based on DropAttention Model with Multiple Episodes Pool. IEEE Access 2023, 11, 4306–4321. [Google Scholar] [CrossRef]
425. Jia, B.; Qiao, W.; Zong, Z.; Liu, S.; Hijji, M.; Del Ser, J.; Muhammad, K. A fingerprint-based localization algorithm based on LSTM and data expansion method for sparse samples. Future Gener. Comput. Syst. 2022, 137, 380–393. [Google Scholar] [CrossRef]
  426. Munkhdalai, L.; Munkhdalai, T.; Pham, V.H.; Hong, J.E.; Ryu, K.H.; Theera-Umpon, N. Neural Network-Augmented Locally Adaptive Linear Regression Model for Tabular Data. Sustainability 2022, 14, 5273. [Google Scholar] [CrossRef]
  427. Gouabou, A.C.F.; Collenne, J.; Monnier, J.; Iguernaissi, R.; Damoiseaux, J.L.; Moudafi, A.; Merad, D. Computer Aided Diagnosis of Melanoma Using Deep Neural Networks and Game Theory: Application on Dermoscopic Images of Skin Lesions. Int. J. Mol. Sci. 2022, 23, 3838. [Google Scholar] [CrossRef] [PubMed]
  428. Abeyrathna, K.D.; Granmo, O.C.; Goodwin, M. Extending the Tsetlin Machine with Integer-Weighted Clauses for Increased Interpretability. IEEE Access 2021, 9, 8233–8248. [Google Scholar] [CrossRef]
  429. Nagaoka, T.; Kozuka, T.; Yamada, T.; Habe, H.; Nemoto, M.; Tada, M.; Abe, K.; Handa, H.; Yoshida, H.; Ishii, K.; et al. A Deep Learning System to Diagnose COVID-19 Pneumonia Using Masked Lung CT Images to Avoid AI-generated COVID-19 Diagnoses that Include Data outside the Lungs. Adv. Biomed. Eng. 2022, 11, 76–86. [Google Scholar] [CrossRef]
  430. Ali, S.; Hussain, A.; Bhattacharjee, S.; Athar, A.; Abdullah, A.; Kim, H.C. Detection of COVID-19 in X-ray Images Using Densely Connected Squeeze Convolutional Neural Network (DCSCNN): Focusing on Interpretability and Explainability of the Black Box Model. Sensors 2022, 22, 9983. [Google Scholar] [CrossRef]
  431. Elbagoury, B.M.; Vladareanu, L.; Vladareanu, V.; Salem, A.B.; Travediu, A.M.; Roushdy, M.I. A Hybrid Stacked CNN and Residual Feedback GMDH-LSTM Deep Learning Model for Stroke Prediction Applied on Mobile AI Smart Hospital Platform. Sensors 2023, 23, 3500. [Google Scholar] [CrossRef] [PubMed]
  432. Yuan, L.; Andrews, J.; Mu, H.; Vakil, A.; Ewing, R.; Blasch, E.; Li, J. Interpretable Passive Multi-Modal Sensor Fusion for Human Identification and Activity Recognition. Sensors 2022, 22, 5787. [Google Scholar] [CrossRef]
  433. Someetheram, V.; Marsani, M.F.; Mohd Kasihmuddin, M.S.; Zamri, N.E.; Muhammad Sidik, S.S.; Mohd Jamaludin, S.Z.; Mansor, M.A. Random Maximum 2 Satisfiability Logic in Discrete Hopfield Neural Network Incorporating Improved Election Algorithm. Mathematics 2022, 10, 4734. [Google Scholar] [CrossRef]
  434. Sudars, K.; Namatevs, I.; Ozols, K. Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach. J. Imaging 2022, 8, 30. [Google Scholar] [CrossRef] [PubMed]
  435. Aslam, N.; Khan, I.U.; Bader, S.A.; Alansari, A.; Alaqeel, L.A.; Khormy, R.M.; Alkubaish, Z.A.; Hussain, T. Explainable Classification Model for Android Malware Analysis Using API and Permission-Based Features. CMC-Comput. Mater. Contin. 2023, 76, 3167–3188. [Google Scholar] [CrossRef]
  436. Shin, C.Y.; Park, J.T.; Baek, U.J.; Kim, M.S. A Feasible and Explainable Network Traffic Classifier Utilizing DistilBERT. IEEE Access 2023, 11, 70216–70237. [Google Scholar] [CrossRef]
  437. Samir, M.; Sherief, N.; Abdelmoez, W. Improving Bug Assignment and Developer Allocation in Software Engineering through Interpretable Machine Learning Models. Computers 2023, 12, 128. [Google Scholar] [CrossRef]
  438. Guidotti, R.; D’Onofrio, M. Matrix Profile-Based Interpretable Time Series Classifier. Front. Artif. Intell. 2021, 4, 699448. [Google Scholar] [CrossRef]
  439. Ekanayake, I.U.; Palitha, S.; Gamage, S.; Meddage, D.P.P.; Wijesooriya, K.; Mohotti, D. Predicting adhesion strength of micropatterned surfaces using gradient boosting models and explainable artificial intelligence visualizations. Mater. Today Commun. 2023, 36, 106545. [Google Scholar] [CrossRef]
  440. Kobayashi, K.; Alam, S.B. Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life. Eng. Appl. Artif. Intell. 2024, 129, 107620. [Google Scholar] [CrossRef]
  441. Bitar, A.; Rosales, R.; Paulitsch, M. Gradient-based feature-attribution explainability methods for spiking neural networks. Front. Neurosci. 2023, 17, 1153999. [Google Scholar] [CrossRef] [PubMed]
  442. Kim, H.; Kim, J.S.; Chung, C.K. Identification of cerebral cortices processing acceleration, velocity, and position during directional reaching movement with deep neural network and explainable AI. Neuroimage 2023, 266, 119783. [Google Scholar] [CrossRef]
  443. Khondker, A.; Kwong, J.C.C.; Rickard, M.; Skreta, M.; Keefe, D.T.; Lorenzo, A.J.; Erdman, L. A machine learning-based approach for quantitative grading of vesicoureteral reflux from voiding cystourethrograms: Methods and proof of concept. J. Pediatr. Urol. 2022, 18, 78.e1–78.e7. [Google Scholar] [CrossRef]
444. Lucieri, A.; Dengel, A.; Ahmed, S. Translating theory into practice: Assessing the privacy implications of concept-based explanations for biomedical AI. Front. Bioinform. 2023, 3, 1194993. [Google Scholar] [CrossRef] [PubMed]
  445. Suhail, S.; Iqbal, M.; Hussain, R.; Jurdak, R. ENIGMA: An explainable digital twin security solution for cyber-physical systems. Comput. Ind. 2023, 151, 103961. [Google Scholar] [CrossRef]
  446. Bacco, L.; Cimino, A.; Dell’Orletta, F.; Merone, M. Explainable Sentiment Analysis: A Hierarchical Transformer-Based Extractive Summarization Approach. Electronics 2021, 10, 2195. [Google Scholar] [CrossRef]
  447. Prakash, A.J.; Patro, K.K.; Saunak, S.; Sasmal, P.; Kumari, P.L.; Geetamma, T. A New Approach of Transparent and Explainable Artificial Intelligence Technique for Patient-Specific ECG Beat Classification. IEEE Sens. Lett. 2023, 7, 5501604. [Google Scholar] [CrossRef]
  448. Alani, M.M.; Awad, A.I. PAIRED: An Explainable Lightweight Android Malware Detection System. IEEE Access 2022, 10, 73214–73228. [Google Scholar] [CrossRef]
  449. Maloca, P.M.; Mueller, P.L.; Lee, A.Y.; Tufail, A.; Balaskas, K.; Niklaus, S.; Kaiser, P.; Suter, S.; Zarranz-Ventura, J.; Egan, C.; et al. Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence. Commun. Biol. 2021, 4, 170. [Google Scholar] [CrossRef] [PubMed]
  450. Ahn, I.; Gwon, H.; Kang, H.; Kim, Y.; Seo, H.; Choi, H.; Cho, H.N.; Kim, M.; Jun, T.J.; Kim, Y.H. Machine Learning-Based Hospital Discharge Prediction for Patients with Cardiovascular Diseases: Development and Usability Study. JMIR Med. Inform. 2021, 9, e32662. [Google Scholar] [CrossRef]
  451. Hammer, J.; Schirrmeister, R.T.; Hartmann, K.; Marusic, P.; Schulze-Bonhage, A.; Ball, T. Interpretable functional specialization emerges in deep convolutional networks trained on brain signals. J. Neural Eng. 2022, 19, 036006. [Google Scholar] [CrossRef]
  452. Ikushima, H.; Usui, K. Identification of age-dependent features of human bronchi using explainable artificial intelligence. ERJ Open Res. 2023, 9. [Google Scholar] [CrossRef]
  453. Kalir, A.A.; Lo, S.K.; Goldberg, G.; Zingerman-Koladko, I.; Ohana, A.; Revah, Y.; Chimol, T.B.; Honig, G. Leveraging Machine Learning for Capacity and Cost on a Complex Toolset: A Case Study. IEEE Trans. Semicond. Manuf. 2023, 36, 611–618. [Google Scholar] [CrossRef]
  454. Shin, H.; Noh, G.; Choi, B.M. Photoplethysmogram based vascular aging assessment using the deep convolutional neural network. Sci. Rep. 2022, 12, 11377. [Google Scholar] [CrossRef] [PubMed]
  455. Chandra, H.; Pawar, P.M.; Elakkiya, R.; Tamizharasan, P.S.; Muthalagu, R.; Panthakkan, A. Explainable AI for Soil Fertility Prediction. IEEE Access 2023, 11, 97866–97878. [Google Scholar] [CrossRef]
  456. Blix, K.; Ruescas, A.B.; Johnson, J.E.; Camps-Valls, G. Learning Relevant Features of Optical Water Types. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1502105. [Google Scholar] [CrossRef]
  457. Topp, S.N.; Barclay, J.; Diaz, J.; Sun, A.Y.; Jia, X.; Lu, D.; Sadler, J.M.; Appling, A.P. Stream Temperature Prediction in a Shifting Environment: Explaining the Influence of Deep Learning Architecture. Water Resour. Res. 2023, 59, e2022WR033880. [Google Scholar] [CrossRef]
  458. Till, T.; Tschauner, S.; Singer, G.; Lichtenegger, K.; Till, H. Development and optimization of AI algorithms for wrist fracture detection in children using a freely available dataset. Front. Pediatr. 2023, 11, 1291804. [Google Scholar] [CrossRef] [PubMed]
  459. Aswad, F.M.; Kareem, A.N.; Khudhur, A.M.; Khalaf, B.A.; Mostafa, S.A. Tree-based machine learning algorithms in the Internet of Things environment for multivariate flood status prediction. J. Intell. Syst. 2022, 31, 1–14. [Google Scholar] [CrossRef]
  460. Ghosh, I.; Alfaro-Cortes, E.; Gamez, M.; Garcia-Rubio, N. Modeling hydro, nuclear, and renewable electricity generation in India: An atom search optimization-based EEMD-DBSCAN framework and explainable AI. Heliyon 2024, 10, e23434. [Google Scholar] [CrossRef]
  461. Mohanrajan, S.N.; Loganathan, A. Novel Vision Transformer-Based Bi-LSTM Model for LU/LC Prediction-Javadi Hills, India. Appl. Sci. 2022, 12, 6387. [Google Scholar] [CrossRef]
  462. Zhang, L.; Bibi, F.; Hussain, I.; Sultan, M.; Arshad, A.; Hasnain, S.; Alarifi, I.M.; Alamir, M.A.; Sajjad, U. Evaluating the Stress-Strain Relationship of the Additively Manufactured Lattice Structures. Micromachines 2023, 14, 75. [Google Scholar] [CrossRef] [PubMed]
  463. Wang, H.; Doumard, E.; Soule-Dupuy, C.; Kemoun, P.; Aligon, J.; Monsarrat, P. Explanations as a New Metric for Feature Selection: A Systematic Approach. IEEE J. Biomed. Health Inform. 2023, 27, 4131–4142. [Google Scholar] [CrossRef] [PubMed]
  464. Pierrard, R.; Poli, J.P.; Hudelot, C. Spatial relation learning for explainable image classification and annotation in critical applications. Artif. Intell. 2021, 292, 103434. [Google Scholar] [CrossRef]
  465. Praetorius, J.P.; Walluks, K.; Svensson, C.M.; Arnold, D.; Figge, M.T. IMFSegNet: Cost-effective and objective quantification of intramuscular fat in histological sections by deep learning. Comput. Struct. Biotechnol. J. 2023, 21, 3696–3704. [Google Scholar] [CrossRef] [PubMed]
  466. Pan, S.; Hoque, S.; Deravi, F. An Attention-Guided Framework for Explainable Biometric Presentation Attack Detection. Sensors 2022, 22, 3365. [Google Scholar] [CrossRef] [PubMed]
  467. Wang, Y.; Huang, M.; Deng, H.; Li, W.; Wu, Z.; Tang, Y.; Liu, G. Identification of vital chemical information via visualization of graph neural networks. Briefings Bioinform. 2023, 24, bbac577. [Google Scholar] [CrossRef] [PubMed]
  468. Naser, M.Z. CLEMSON: An Automated Machine-Learning Virtual Assistant for Accelerated, Simulation-Free, Transparent, Reduced-Order, and Inference-Based Reconstruction of Fire Response of Structural Members. J. Struct. Eng. 2022, 148, 04022120. [Google Scholar] [CrossRef]
  469. Karamanou, A.; Brimos, P.; Kalampokis, E.; Tarabanis, K. Exploring the Quality of Dynamic Open Government Data Using Statistical and Machine Learning Methods. Sensors 2022, 22, 9684. [Google Scholar] [CrossRef]
  470. Kim, T.; Kwon, S.; Kwon, Y. Prediction of Wave Transmission Characteristics of Low-Crested Structures with Comprehensive Analysis of Machine Learning. Sensors 2021, 21, 8192. [Google Scholar] [CrossRef]
  471. Gong, H.; Wang, M.; Zhang, H.; Elahe, M.F.; Jin, M. An Explainable AI Approach for the Rapid Diagnosis of COVID-19 Using Ensemble Learning Algorithms. Front. Public Health 2022, 10, 874455. [Google Scholar] [CrossRef] [PubMed]
472. Burzynski, D. Useful energy prediction model of a Lithium-ion cell operating on various duty cycles. Eksploat. Niezawodn. Maint. Reliab. 2022, 24, 317–329. [Google Scholar] [CrossRef]
  473. Kim, D.; Ho, C.H.; Park, I.; Kim, J.; Chang, L.S.; Choi, M.H. Untangling the contribution of input parameters to an artificial intelligence PM2.5 forecast model using the layer-wise relevance propagation method. Atmos. Environ. 2022, 276, 119034. [Google Scholar] [CrossRef]
474. Galiger, G.; Bodo, Z. Explainable patch-level histopathology tissue type detection with bag-of-local-features models and data augmentation. Acta Univ. Sapientiae Inform. 2023, 15, 60–80. [Google Scholar] [CrossRef]
  475. Naeem, H.; Dong, S.; Falana, O.J.; Ullah, F. Development of a deep stacked ensemble with process based volatile memory forensics for platform independent malware detection and classification. Expert Syst. Appl. 2023, 223, 119952. [Google Scholar] [CrossRef]
  476. Uddin, M.Z.; Soylu, A. Human activity recognition using wearable sensors, discriminant analysis, and long short-term memory-based neural structured learning. Sci. Rep. 2021, 11, 16455. [Google Scholar] [CrossRef] [PubMed]
  477. Sinha, A.; Das, D. XAI-LCS: Explainable AI-Based Fault Diagnosis of Low-Cost Sensors. IEEE Sens. Lett. 2023, 7, 6009304. [Google Scholar] [CrossRef]
  478. Jacinto, M.V.G.; Neto, A.D.D.; de Castro, D.L.; Bezerra, F.H.R. Karstified zone interpretation using deep learning algorithms: Convolutional neural networks applications and model interpretability with explainable AI. Comput. Geosci. 2023, 171, 105281. [Google Scholar] [CrossRef]
  479. Jakubowski, J.; Stanisz, P.; Bobek, S.; Nalepa, G.J. Anomaly Detection in Asset Degradation Process Using Variational Autoencoder and Explanations. Sensors 2022, 22, 291. [Google Scholar] [CrossRef]
  480. Guo, C.; Zhao, Z.; Ren, J.; Wang, S.; Liu, Y.; Chen, X. Causal explaining guided domain generalization for rotating machinery intelligent fault diagnosis. Expert Syst. Appl. 2024, 243, 122806. [Google Scholar] [CrossRef]
  481. Shi, X.; Keenan, T.D.L.; Chen, Q.; De Silva, T.; Thavikulwat, A.T.; Broadhead, G.; Bhandari, S.; Cukras, C.; Chew, E.Y.; Lu, Z. Improving Interpretability in Machine Diagnosis Detection of Geographic Atrophy in OCT Scans. Ophthalmol. Sci. 2021, 1, 100038. [Google Scholar] [CrossRef]
  482. Panos, B.; Kleint, L.; Zbinden, J. Identifying preflare spectral features using explainable artificial intelligence. Astron. Astrophys. 2023, 671, A73. [Google Scholar] [CrossRef]
  483. Fang, H.; Shao, Y.; Xie, C.; Tian, B.; Shen, C.; Zhu, Y.; Guo, Y.; Yang, Y.; Chen, G.; Zhang, M. A New Approach to Spatial Landslide Susceptibility Prediction in Karst Mining Areas Based on Explainable Artificial Intelligence. Sustainability 2023, 15, 3094. [Google Scholar] [CrossRef]
  484. Karami, H.; Derakhshani, A.; Ghasemigol, M.; Fereidouni, M.; Miri-Moghaddam, E.; Baradaran, B.; Tabrizi, N.J.; Najafi, S.; Solimando, A.G.; Marsh, L.M.; et al. Weighted Gene Co-Expression Network Analysis Combined with Machine Learning Validation to Identify Key Modules and Hub Genes Associated with SARS-CoV-2 Infection. J. Clin. Med. 2021, 10, 3567. [Google Scholar] [CrossRef]
  485. Baek, M.; Kim, S.B. Failure Detection and Primary Cause Identification of Multivariate Time Series Data in Semiconductor Equipment. IEEE Access 2023, 11, 54363–54372. [Google Scholar] [CrossRef]
  486. Nguyen, P.X.; Tran, T.H.; Pham, N.B.; Do, D.N.; Yairi, T. Human Language Explanation for a Decision Making Agent via Automated Rationale Generation. IEEE Access 2022, 10, 110727–110741. [Google Scholar] [CrossRef]
  487. Shahriar, S.M.; Bhuiyan, E.A.; Nahiduzzaman, M.; Ahsan, M.; Haider, J. State of Charge Estimation for Electric Vehicle Battery Management Systems Using the Hybrid Recurrent Learning Approach with Explainable Artificial Intelligence. Energies 2022, 15, 8003. [Google Scholar] [CrossRef]
  488. Kim, D.; Handayani, M.P.; Lee, S.; Lee, J. Feature Attribution Analysis to Quantify the Impact of Oceanographic and Maneuverability Factors on Vessel Shaft Power Using Explainable Tree-Based Model. Sensors 2023, 23, 1072. [Google Scholar] [CrossRef] [PubMed]
  489. Lemanska-Perek, A.; Krzyzanowska-Golab, D.; Kobylinska, K.; Biecek, P.; Skalec, T.; Tyszko, M.; Gozdzik, W.; Adamik, B. Explainable Artificial Intelligence Helps in Understanding the Effect of Fibronectin on Survival of Sepsis. Cells 2022, 11, 2433. [Google Scholar] [CrossRef] [PubMed]
490. Minutti-Martinez, C.; Escalante-Ramirez, B.; Olveres-Montiel, J. PumaMedNet-CXR: An Explainable Generative Artificial Intelligence for the Analysis and Classification of Chest X-Ray Images. Comput. y Sist. 2023, 27, 909–920. [Google Scholar] [CrossRef]
  491. Kim, T.; Moon, N.H.; Goh, T.S.; Jung, I.D. Detection of incomplete atypical femoral fracture on anteroposterior radiographs via explainable artificial intelligence. Sci. Rep. 2023, 13, 10415. [Google Scholar] [CrossRef] [PubMed]
  492. Humer, C.; Heberle, H.; Montanari, F.; Wolf, T.; Huber, F.; Henderson, R.; Heinrich, J.; Streit, M. ChemInformatics Model Explorer (CIME): Exploratory analysis of chemical model explanations. J. Cheminform. 2022, 14, 21. [Google Scholar] [CrossRef]
  493. Zhang, K.; Zhang, J.; Xu, P.; Gao, T.; Gao, W. A multi-hierarchical interpretable method for DRL-based dispatching control in power systems. Int. J. Electr. Power Energy Syst. 2023, 152, 109240. [Google Scholar] [CrossRef]
494. Yang, J.; Yue, Z.; Yuan, Y. Noise-Aware Sparse Gaussian Processes and Application to Reliable Industrial Machinery Health Monitoring. IEEE Trans. Ind. Inform. 2023, 19, 5995–6005. [Google Scholar] [CrossRef]
  495. Cheng, F.; Liu, D.; Du, F.; Lin, Y.; Zytek, A.; Li, H.; Qu, H.; Veeramachaneni, K. VBridge: Connecting the Dots between Features and Data to Explain Healthcare Models. IEEE Trans. Vis. Comput. Graph. 2022, 28, 378–388. [Google Scholar] [CrossRef]
  496. Laqua, A.; Schnee, J.; Pletinckx, J.; Meywerk, M. Exploring User Experience in Sustainable Transport with Explainable AI Methods Applied to E-Bikes. Appl. Sci. 2023, 13, 1277. [Google Scholar] [CrossRef]
  497. Sanderson, J.; Mao, H.; Abdullah, M.A.M.; Al-Nima, R.R.O.; Woo, W.L. Optimal Fusion of Multispectral Optical and SAR Images for Flood Inundation Mapping through Explainable Deep Learning. Information 2023, 14, 660. [Google Scholar] [CrossRef]
  498. Abe, S.; Tago, S.; Yokoyama, K.; Ogawa, M.; Takei, T.; Imoto, S.; Fuji, M. Explainable AI for Estimating Pathogenicity of Genetic Variants Using Large-Scale Knowledge Graphs. Cancers 2023, 15, 1118. [Google Scholar] [CrossRef] [PubMed]
  499. Kerz, E.; Zanwar, S.; Qiao, Y.; Wiechmann, D. Toward explainable AI (XAI) for mental health detection based on language behavior. Front. Psychiatry 2023, 14, 1219479. [Google Scholar] [CrossRef]
  500. Kim, T.; Jeon, M.; Lee, C.; Kim, J.; Ko, G.; Kim, J.Y.; Youn, C.H. Federated Onboard-Ground Station Computing with Weakly Supervised Cascading Pyramid Attention Network for Satellite Image Analysis. IEEE Access 2022, 10, 117315–117333. [Google Scholar] [CrossRef]
  501. Thrun, M.C.; Ultsch, A.; Breuer, L. Explainable AI Framework for Multivariate Hydrochemical Time Series. Mach. Learn. Knowl. Extr. 2021, 3, 170–204. [Google Scholar] [CrossRef]
  502. Beni, T.; Nava, L.; Gigli, G.; Frodella, W.; Catani, F.; Casagli, N.; Gallego, J.I.; Margottini, C.; Spizzichino, D. Classification of rock slope cavernous weathering on UAV photogrammetric point clouds: The example of Hegra (UNESCO World Heritage Site, Kingdom of Saudi Arabia). Eng. Geol. 2023, 325, 107286. [Google Scholar] [CrossRef]
  503. Zhou, R.; Zhang, Y. Predicting and explaining karst spring dissolved oxygen using interpretable deep learning approach. Hydrol. Process. 2023, 37, e14948. [Google Scholar] [CrossRef]
  504. Barros, J.; Cunha, F.; Martins, C.; Pedrosa, P.; Cortez, P. Predicting Weighing Deviations in the Dispatch Workflow Process: A Case Study in a Cement Industry. IEEE Access 2023, 11, 8119–8135. [Google Scholar] [CrossRef]
  505. Kayadibi, I.; Guraksin, G.E. An Explainable Fully Dense Fusion Neural Network with Deep Support Vector Machine for Retinal Disease Determination. Int. J. Comput. Intell. Syst. 2023, 16, 28. [Google Scholar] [CrossRef]
506. Qamar, T.; Bawany, N.Z. Understanding the black-box: Towards interpretable and reliable deep learning models. PeerJ Comput. Sci. 2023, 9, e1629. [Google Scholar] [CrossRef] [PubMed]
  507. Crespi, M.; Ferigo, A.; Custode, L.L.; Iacca, G. A population-based approach for multi-agent interpretable reinforcement learning. Appl. Soft Comput. 2023, 147, 110758. [Google Scholar] [CrossRef]
  508. Sabrina, F.; Sohail, S.; Farid, F.; Jahan, S.; Ahamed, F.; Gordon, S. An Interpretable Artificial Intelligence Based Smart Agriculture System. CMC-Comput. Mater. Contin. 2022, 72, 3777–3797. [Google Scholar] [CrossRef]
  509. Wu, J.; Wang, Z.; Dong, J.; Cui, X.; Tao, S.; Chen, X. Robust Runoff Prediction with Explainable Artificial Intelligence and Meteorological Variables from Deep Learning Ensemble Model. Water Resour. Res. 2023, 59, e2023WR035676. [Google Scholar] [CrossRef]
  510. Nakamura, K.; Uchino, E.; Sato, N.; Araki, A.; Terayama, K.; Kojima, R.; Murashita, K.; Itoh, K.; Mikami, T.; Tamada, Y.; et al. Individual health-disease phase diagrams for disease prevention based on machine learning. J. Biomed. Inform. 2023, 144, 104448. [Google Scholar] [CrossRef]
  511. Oh, S.; Park, Y.; Cho, K.J.; Kim, S.J. Explainable Machine Learning Model for Glaucoma Diagnosis and Its Interpretation. Diagnostics 2021, 11, 510. [Google Scholar] [CrossRef]
  512. Borujeni, S.M.; Arras, L.; Srinivasan, V.; Samek, W. Explainable sequence-to-sequence GRU neural network for pollution forecasting. Sci. Rep. 2023, 13, 9940. [Google Scholar] [CrossRef]
  513. Alharbi, A.; Petrunin, I.; Panagiotakopoulos, D. Assuring Safe and Efficient Operation of UAV Using Explainable Machine Learning. Drones 2023, 7, 327. [Google Scholar] [CrossRef]
  514. Sheu, R.K.; Pardeshi, M.S.; Pai, K.C.; Chen, L.C.; Wu, C.L.; Chen, W.C. Interpretable Classification of Pneumonia Infection Using eXplainable AI (XAI-ICP). IEEE Access 2023, 11, 28896–28919. [Google Scholar] [CrossRef]
  515. Aslam, N.; Khan, I.U.; Aljishi, R.F.; Alnamer, Z.M.; Alzawad, Z.M.; Almomen, F.A.; Alramadan, F.A. Explainable Computational Intelligence Model for Antepartum Fetal Monitoring to Predict the Risk of IUGR. Electronics 2022, 11, 593. [Google Scholar] [CrossRef]
516. Peng, P.; Zhang, Y.; Wang, H.; Zhang, H. Towards robust and understandable fault detection and diagnosis using denoising sparse autoencoder and smooth integrated gradients. ISA Trans. 2022, 125, 371–383. [Google Scholar] [CrossRef] [PubMed]
  517. Na Pattalung, T.; Ingviya, T.; Chaichulee, S. Feature Explanations in Recurrent Neural Networks for Predicting Risk of Mortality in Intensive Care Patients. J. Pers. Med. 2021, 11, 934. [Google Scholar] [CrossRef] [PubMed]
  518. Oliveira, F.R.D.S.; Neto, F.B.D.L. Method to Produce More Reasonable Candidate Solutions with Explanations in Intelligent Decision Support Systems. IEEE Access 2023, 11, 20861–20876. [Google Scholar] [CrossRef]
  519. Burgueno, A.M.; Aldana-Martin, J.F.; Vazquez-Pendon, M.; Barba-Gonzalez, C.; Jimenez Gomez, Y.; Garcia Millan, V.; Navas-Delgado, I. Scalable approach for high-resolution land cover: A case study in the Mediterranean Basin. J. Big Data 2023, 10, 91. [Google Scholar] [CrossRef]
  520. Horst, F.; Slijepcevic, D.; Simak, M.; Horsak, B.; Schoellhorn, W.I.; Zeppelzauer, M. Modeling biological individuality using machine learning: A study on human gait. Comput. Struct. Biotechnol. J. 2023, 21, 3414–3423. [Google Scholar] [CrossRef]
  521. Napoles, G.; Hoitsma, F.; Knoben, A.; Jastrzebska, A.; Espinosa, M.L. Prolog-based agnostic explanation module for structured pattern classification. Inf. Sci. 2023, 622, 1196–1227. [Google Scholar] [CrossRef]
  522. Ni, L.; Wang, D.; Singh, V.P.; Wu, J.; Chen, X.; Tao, Y.; Zhu, X.; Jiang, J.; Zeng, X. Monthly precipitation prediction at regional scale using deep convolutional neural networks. Hydrol. Process. 2023, 37, e14954. [Google Scholar] [CrossRef]
  523. Amiri-Zarandi, M.; Karimipour, H.; Dara, R.A. A federated and explainable approach for insider threat detection in IoT. Internet Things 2023, 24, 100965. [Google Scholar] [CrossRef]
  524. Niu, Y.; Gu, L.; Zhao, Y.; Lu, F. Explainable Diabetic Retinopathy Detection and Retinal Image Generation. IEEE J. Biomed. Health Inform. 2022, 26, 44–55. [Google Scholar] [CrossRef]
  525. Kliangkhlao, M.; Limsiroratana, S.; Sahoh, B. The Design and Development of a Causal Bayesian Networks Model for the Explanation of Agricultural Supply Chains. IEEE Access 2022, 10, 86813–86823. [Google Scholar] [CrossRef]
  526. Dissanayake, T.; Fernando, T.; Denman, S.; Sridharan, S.; Ghaemmaghami, H.; Fookes, C. A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection without Segmentation. IEEE J. Biomed. Health Inform. 2021, 25, 2162–2171. [Google Scholar] [CrossRef] [PubMed]
  527. Dastile, X.; Celik, T. Making Deep Learning-Based Predictions for Credit Scoring Explainable. IEEE Access 2021, 9, 50426–50440. [Google Scholar] [CrossRef]
  528. Khan, M.A.; Azhar, M.; Ibrar, K.; Alqahtani, A.; Alsubai, S.; Binbusayyis, A.; Kim, Y.J.; Chang, B. COVID-19 Classification from Chest X-Ray Images: A Framework of Deep Explainable Artificial Intelligence. Comput. Intell. Neurosci. 2022, 2022, 4254631. [Google Scholar] [CrossRef]
  529. Moon, S.; Lee, H. JDSNMF: Joint Deep Semi-Non-Negative Matrix Factorization for Learning Integrative Representation of Molecular Signals in Alzheimer’s Disease. J. Pers. Med. 2021, 11, 686. [Google Scholar] [CrossRef]
  530. Kiefer, S.; Hoffmann, M.; Schmid, U. Semantic Interactive Learning for Text Classification: A Constructive Approach for Contextual Interactions. Mach. Learn. Knowl. Extr. 2022, 4, 994–1010. [Google Scholar] [CrossRef]
  531. Franco, D.; Oneto, L.; Navarin, N.; Anguita, D. Toward Learning Trustworthily from Data Combining Privacy, Fairness, and Explainability: An Application to Face Recognition. Entropy 2021, 23, 1047. [Google Scholar] [CrossRef]
  532. Montiel-Vazquez, E.C.; Uresti, J.A.R.; Loyola-Gonzalez, O. An Explainable Artificial Intelligence Approach for Detecting Empathy in Textual Communication. Appl. Sci. 2022, 12, 9407. [Google Scholar] [CrossRef]
  533. Mollas, I.; Bassiliades, N.; Tsoumakas, G. Truthful meta-explanations for local interpretability of machine learning models. Appl. Intell. 2023, 53, 26927–26948. [Google Scholar] [CrossRef]
  534. Juang, C.F.; Chang, C.W.; Hung, T.H. Hand Palm Tracking in Monocular Images by Fuzzy Rule-Based Fusion of Explainable Fuzzy Features with Robot Imitation Application. IEEE Trans. Fuzzy Syst. 2021, 29, 3594–3606. [Google Scholar] [CrossRef]
  535. Cicek, I.B.; Colak, C.; Yologlu, S.; Kucukakcali, Z.; Ozhan, O.; Taslidere, E.; Danis, N.; Koc, A.; Parlakpinar, H.; Akbulut, S. Nephrotoxicity Development of a Clinical Decision Support System Based on Tree-Based Machine Learning Methods to Detect Diagnostic Biomarkers from Genomic Data in Methotrexate-Induced Rats. Appl. Sci. 2023, 13, 8870. [Google Scholar] [CrossRef]
  536. Jung, D.H.; Kim, H.Y.; Won, J.H.; Park, S.H. Development of a classification model for Cynanchum wilfordii and Cynanchum auriculatum using convolutional neural network and local interpretable model-agnostic explanation technology. Front. Plant Sci. 2023, 14, 1169709. [Google Scholar] [CrossRef] [PubMed]
  537. Rawal, A.; Kidchob, C.; Ou, J.; Yogurtcu, O.N.; Yang, H.; Sauna, Z.E. A machine learning approach for identifying variables associated with risk of developing neutralizing antidrug antibodies to factor VIII. Heliyon 2023, 9, e16331. [Google Scholar] [CrossRef]
538. Yeung, C.; Ho, D.; Pham, B.; Fountaine, K.T.; Zhang, Z.; Levy, K.; Raman, A.P. Enhancing Adjoint Optimization-Based Photonic Inverse Design with Explainable Machine Learning. ACS Photonics 2022, 9, 1577–1585. [Google Scholar] [CrossRef]
  539. Naeem, H.; Alshammari, B.M.; Ullah, F. Explainable Artificial Intelligence-Based IoT Device Malware Detection Mechanism Using Image Visualization and Fine-Tuned CNN-Based Transfer Learning Model. Comput. Intell. Neurosci. 2022, 2022, 7671967. [Google Scholar] [CrossRef]
540. Mey, O.; Neufeld, D. Explainable AI Algorithms for Vibration Data-Based Fault Detection: Use Case-Adapted Methods and Critical Evaluation. Sensors 2022, 22, 9037. [Google Scholar] [CrossRef]
  541. Martinez, G.S.; Perez-Rueda, E.; Kumar, A.; Sarkar, S.; Silva, S.d.A.e. Explainable artificial intelligence as a reliable annotator of archaeal promoter regions. Sci. Rep. 2023, 13, 1763. [Google Scholar] [CrossRef]
  542. Nkengue, M.J.; Zeng, X.; Koehl, L.; Tao, X. X-RCRNet: An explainable deep-learning network for COVID-19 detection using ECG beat signals. Biomed. Signal Process. Control. 2024, 87, 105424. [Google Scholar] [CrossRef]
  543. Behrens, G.; Beucler, T.; Gentine, P.; Iglesias-Suarez, F.; Pritchard, M.; Eyring, V. Non-Linear Dimensionality Reduction with a Variational Encoder Decoder to Understand Convective Processes in Climate Models. J. Adv. Model. Earth Syst. 2022, 14, e2022MS003130. [Google Scholar] [CrossRef]
  544. Fatahi, R.; Nasiri, H.; Dadfar, E.; Chelgani, S.C. Modeling of energy consumption factors for an industrial cement vertical roller mill by SHAP-XGBoost: A “conscious lab” approach. Sci. Rep. 2022, 12, 7543. [Google Scholar] [CrossRef] [PubMed]
545. De Groote, W.; Kikken, E.; Hostens, E.; Van Hoecke, S.; Crevecoeur, G. Neural Network Augmented Physics Models for Systems with Partially Unknown Dynamics: Application to Slider-Crank Mechanism. IEEE/ASME Trans. Mechatronics 2022, 27, 103–114. [Google Scholar] [CrossRef]
  546. Takalo-Mattila, J.; Heiskanen, M.; Kyllonen, V.; Maatta, L.; Bogdanoff, A. Explainable Steel Quality Prediction System Based on Gradient Boosting Decision Trees. IEEE Access 2022, 10, 68099–68110. [Google Scholar] [CrossRef]
  547. Jang, J.; Jeong, W.; Kim, S.; Lee, B.; Lee, M.; Moon, J. RAID: Robust and Interpretable Daily Peak Load Forecasting via Multiple Deep Neural Networks and Shapley Values. Sustainability 2023, 15, 6951. [Google Scholar] [CrossRef]
  548. Aishwarya, N.; Veena, M.B.; Ullas, Y.L.; Rajasekaran, R.T. “SWASTHA-SHWASA”: Utility of Deep Learning for Diagnosis of Common Lung Pathologies from Chest X-rays. Int. J. Early Child. Spec. Educ. 2022, 14, 1895–1905. [Google Scholar] [CrossRef]
  549. Kaczmarek-Majer, K.; Casalino, G.; Castellano, G.; Dominiak, M.; Hryniewicz, O.; Kaminska, O.; Vessio, G.; Diaz-Rodriguez, N. PLENARY: Explaining black-box models in natural language through fuzzy linguistic summaries. Inf. Sci. 2022, 614, 374–399. [Google Scholar] [CrossRef]
  550. Bae, H. Evaluation of Malware Classification Models for Heterogeneous Data. Sensors 2024, 24, 288. [Google Scholar] [CrossRef]
  551. Gerussi, A.; Verda, D.; Cappadona, C.; Cristoferi, L.; Bernasconi, D.P.; Bottaro, S.; Carbone, M.; Muselli, M.; Invernizzi, P.; Asselta, R.; et al. LLM-PBC: Logic Learning Machine-Based Explainable Rules Accurately Stratify the Genetic Risk of Primary Biliary Cholangitis. J. Pers. Med. 2022, 12, 1587. [Google Scholar] [CrossRef]
  552. Li, B.M.; Castorina, V.L.; Hernandez, M.D.C.V.; Clancy, U.; Wiseman, S.J.; Sakka, E.; Storkey, A.J.; Garcia, D.J.; Cheng, Y.; Doubal, F.; et al. Deep attention super-resolution of brain magnetic resonance images acquired under clinical protocols. Front. Comput. Neurosci. 2022, 16, 887633. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overview of different XAI approaches and evaluation methods. These categories were used to classify the XAI application papers reviewed in this study.
Figure 2. PRISMA flow chart of the study selection process.
Figure 3. Main XAI application domains of the studies in our corpus (all main domains mentioned in at least three papers are included).
Figure 4. Saliency maps of eight diverse recent XAI applications from various domains: brain tumor classification [116], grape leaf disease identification [144], emotion detection [235], ripe status recognition [141], volcanic localizations [236], traffic sign classification [237], cell segmentation [238], and glaucoma diagnosis [77] (from top to bottom and left to right).
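The saliency maps in Figure 4 highlight the input regions that contributed most to a model's prediction. As a minimal, hypothetical sketch (not the pipeline of any reviewed paper), a vanilla gradient saliency map for an image classifier can be computed as follows; model and image are assumed placeholders for a trained PyTorch network and a preprocessed input tensor of shape (1, C, H, W).

import torch

def gradient_saliency(model, image, target_class=None):
    # Vanilla gradient saliency: |d(class score) / d(pixel)|, reduced over channels.
    model.eval()
    image = image.clone().requires_grad_(True)
    scores = model(image)                               # shape: (1, num_classes)
    if target_class is None:
        target_class = scores.argmax(dim=1).item()      # explain the predicted class
    scores[0, target_class].backward()                  # gradients w.r.t. the input pixels
    return image.grad.abs().max(dim=1)[0].squeeze(0)    # (H, W) heat map

Class-discriminative variants such as Grad-CAM follow the same principle but aggregate gradients over the feature maps of a convolutional layer rather than over the raw pixels.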
Figure 5. Number of papers in our corpus that used global versus local explanations.
Figure 6. Most common explanation techniques used in the papers in our corpus (only XAI techniques used in at least five papers are shown).
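Techniques such as SHAP and LIME are typically applied as post-hoc wrappers around an already trained model. The snippet below is an illustrative sketch only, assuming a trained binary classifier model with a predict_proba method, NumPy arrays X_train and X_test, and a list feature_names; it relies on the public APIs of the shap and lime packages.

import shap
from lime.lime_tabular import LimeTabularExplainer

# SHAP: Shapley-value attributions for the positive-class probability.
shap_explainer = shap.Explainer(lambda X: model.predict_proba(X)[:, 1], X_train)
shap_values = shap_explainer(X_test)          # one local attribution vector per instance
shap.plots.beeswarm(shap_values)              # aggregated view often read as a global summary

# LIME: fits a simple local surrogate model around a single prediction.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names, mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())                     # top feature weights for this one instance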
Figure 7. Most commonly used ML models in the papers in our corpus (only ML models used at least five times are shown).
Figure 8. The main ML tasks in the papers in our corpus (all other ML tasks appeared in only one or two papers).
Figure 9. Number of papers in our corpus that used a post-hoc approach versus an intrinsically explainable ML model.
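The distinction in Figure 9 can be illustrated with a small, hypothetical scikit-learn example: an intrinsically explainable model (a shallow decision tree whose learned rules can be printed directly) versus a black-box model explained post hoc, here via permutation feature importance. The data arrays and feature_names are illustrative placeholders.

from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Intrinsically explainable: the fitted model itself is the explanation.
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(export_text(tree, feature_names=feature_names))       # human-readable decision rules

# Post-hoc: a black-box ensemble explained after training.
forest = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")                       # global post-hoc importances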
Figure 10. Number of papers that used a specific ML model presented as intrinsically explainable.
Figure 11. Evaluation of the explanations in recent XAI application papers.
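Beyond qualitative inspection by domain experts, explanations can also be evaluated quantitatively. One common approach is a deletion (perturbation) test: features are removed in order of decreasing attributed importance, and a faithful explanation should make the model's confidence drop quickly. The function below is an illustrative sketch; model, the input vector x, the attribution vector attributions, and baseline_value are hypothetical placeholders.

import numpy as np

def deletion_curve(model, x, attributions, baseline_value=0.0):
    # Replace features one by one (most important first) and record how the
    # predicted positive-class probability degrades.
    order = np.argsort(-np.abs(attributions))
    x_perturbed = x.astype(float).copy()
    probabilities = [model.predict_proba(x_perturbed.reshape(1, -1))[0, 1]]
    for feature_index in order:
        x_perturbed[feature_index] = baseline_value
        probabilities.append(model.predict_proba(x_perturbed.reshape(1, -1))[0, 1])
    return np.array(probabilities)   # a steep early drop indicates a faithful explanation

The area under such a curve can be compared across explanation methods: the lower the area, the more the top-ranked features actually carried the prediction.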
Table 1. Inclusion and exclusion criteria for the review of recent applications of XAI.
Criterion | Included | Excluded
Language | English | Other languages, such as German, Chinese, and Spanish.
Publication type | Peer-reviewed journal articles | Book chapters, conference papers, magazine articles, reports, theses, and other gray literature.
Recentness | Recent papers published in 2021 or after | Papers published before 2021.
Study content | Application of XAI methods | Papers that generally described XAI or reviewed other works without describing any XAI applications.
Quality | Papers of sufficient quality | Papers that were exceptionally short (less than six pages) or those that did not fulfill the basic requirements for a publication channel (e.g., be peer-reviewed, have an international board [35]).
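As a purely illustrative sketch, unconnected to how the screening was actually performed, criteria of this kind could be applied programmatically to a table of candidate records; the pandas DataFrame records and its column names are hypothetical.

import pandas as pd

# Hypothetical candidate records retrieved from a database search.
records = pd.DataFrame([
    {"title": "XAI for cancer diagnosis", "language": "English",
     "type": "journal article", "year": 2022, "pages": 14, "applies_xai": True},
    {"title": "A general survey of XAI", "language": "English",
     "type": "journal article", "year": 2020, "pages": 30, "applies_xai": False},
])

included = records[
    (records["language"] == "English")
    & (records["type"] == "journal article")
    & (records["year"] >= 2021)          # recentness criterion
    & (records["applies_xai"])           # must describe an actual XAI application
    & (records["pages"] >= 6)            # exclude exceptionally short papers
]
print(included["title"].tolist())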
Table 2. Number of publications for the ten journals with the highest publication counts in our sample of articles on recent XAI applications.
Journal | # of Publications
IEEE Access | 45
Applied Sciences-Basel | 37
Sensors | 28
Scientific Reports | 15
Electronics | 14
Remote Sensing | 8
Diagnostics | 7
Information | 7
Machine Learning and Knowledge Extraction | 7
Sustainability | 7
