Review

Explainable AI in Manufacturing and Industrial Cyber–Physical Systems: A Survey

by Sajad Moosavi 1,*, Maryam Farajzadeh-Zanjani 1, Roozbeh Razavi-Far 1,2, Vasile Palade 3 and Mehrdad Saif 1

1 Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON N9B 3P4, Canada
2 Faculty of Computer Science, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
3 Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 5FB, UK
* Author to whom correspondence should be addressed.
Electronics 2024, 13(17), 3497; https://doi.org/10.3390/electronics13173497
Submission received: 5 July 2024 / Revised: 28 August 2024 / Accepted: 30 August 2024 / Published: 3 September 2024
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)

Abstract:
This survey explores applications of explainable artificial intelligence in manufacturing and industrial cyber–physical systems. As technological advancements continue to integrate artificial intelligence into critical infrastructure and industrial processes, the need for clear and understandable intelligent models becomes critical. Explainable artificial intelligence techniques play a pivotal role in enhancing the trustworthiness and reliability of intelligent systems applied to industrial settings, ensuring that human operators can comprehend and validate the decisions made by these systems. This review paper begins by highlighting the imperative need for explainable artificial intelligence and then systematically classifies explainable artificial intelligence techniques. The paper then investigates diverse explainable-artificial-intelligence-related works within a wide range of industrial applications, such as predictive maintenance, cyber-security, fault detection and diagnosis, process control, product development, inventory management, and product quality. The study contributes to a comprehensive understanding of the diverse strategies and methodologies employed in integrating explainable artificial intelligence within industrial contexts.

1. Introduction

Artificial intelligence (AI) imitates natural intelligence in machines by mimicking human thinking and problem-solving capabilities. Industrial and manufacturing systems, and in particular industrial cyber–physical systems (ICPS), can greatly benefit from AI, as they are continually looking for ways to reduce operational and maintenance costs, improve process efficiency, and enhance safe and secure operation over long periods of time. In the era of digitalized manufacturing equipment and the growing use of the internet of things (IoT) and ICPSs, a high volume of data can be gathered from physical environments and smart devices and deployed to make processes smarter and more efficient. With the help of AI, productivity and efficiency can be enhanced by increasing production speed, lowering product defects, reducing labor costs, and minimizing unplanned downtime. Despite the vast potential of AI-based systems, it is still risky for human users to blindly trust their recommendations, insights, or predictions. Because we have a poor understanding of how artificial intelligence makes decisions, we cannot fully take advantage of what it offers. This has led to the emergence of so-called explainable AI (XAI). In the following, we address three basic questions regarding XAI: What?—the main idea behind XAI, Why?—the reasons for exploiting XAI, and How?—the methods and techniques developed for explainability.

1.1. What?

Thanks to the recent development of many machine learning (ML) techniques, solving complex problems has become more feasible; however, they cannot be easily examined after implementation to understand their logic. A deep neural network (DNN) has always been considered a “black-box”, without providing any explanation or reason regarding the decisions made in a human-friendly manner. This black-box model utilizes different ML techniques that take several input features and predict one or more outputs. Therefore, a serious concern arises when evaluating the results in most applications, because the prediction pattern formed during the learning phase is not easily described [1]. Certainly, relying on a black-box model to make decisions without knowing the logic and proof underneath is not of much interest in critical applications [2]. If the application dictates the user’s awareness and trustworthiness of the results, then XAI will be imperative [3]. In other words, XAI is essential for unlocking AI and gaining deep insights into the deep learning process, as mandated by regulations imposed by the European Parliament in May 2018 regarding the use of XAI-based systems: “a right of explanation for all individuals to obtain meaningful explanations of the logic involved” [4].
Recently, XAI has gained considerable attention and become a prominent field of study. According to Google Trends, the global interest in XAI surged in 2021, as shown in Figure 1. The graph depicts the search interest for the term “Explainable AI”.

1.2. Why?

AI systems have consistently demonstrated their effectiveness in providing highly accurate results in a variety of application areas. However, not all ML systems demand interpretability. In such systems, either unacceptable results generally entail no consequences, or, despite potential imperfections, their decisions are deemed trustworthy due to rigorous study and validation in practical scenarios. According to [6], the need for interpretability arises in the presence of an incomplete problem formalization. In numerous real-world applications, it is necessary to have extremely accurate and understandable AI models. In the literature, multiple incentives for XAI are discussed, depending on the users and applications targeted by the AI system. According to [7], they can be summarized as follows:
  • Trustworthiness: In order to use the system predictions in real-world applications, the user needs to trust the applied model. Offering an explanation for a prediction is an important aspect of ensuring human trust and the effective use of ML, provided the explanations are faithful and intelligible [8].
  • Causality: Causality reveals the cause-and-effect relationship between the feature space and possible outputs. Assigning a set of causes to an effect demands broad domain knowledge, whereas ML models only capture correlations between features.
  • Transferability: In general, a model is usually trained and tested based on limited data. One reason for pursuing model explainability is to properly utilize a trained model in another domain with similar characteristics. Hence, it can be referred to as reusability.
  • Informativeness: Informativeness is the information that an XAI model provides about how it works in order to avoid misconceptions. The model explains the relations inside the box and increases the knowledge of the user regarding the internal process.
  • Confidence: A measure of the expectation that a decision will prove to be correct. Confidence should always be assessed in systems where reliability is a concern. Therefore, an explainable model designed for this purpose must offer a confidence level for its predictions.
  • Fairness: ML algorithms are products of their data, and any bias in the input data will influence the attained results and prevent fair conclusions. An XAI system can reveal imbalances within the data and help ensure fairness in ML models.
  • Accessibility: Explainability can be considered a tool for improving the internal processes of ML models. It gives users and non-professionals the ability to tune performance based on their requirements.
  • Interactivity: This goal is considered for models that require interaction with end-users. The model should describe the decision made and the choices considered, then present the explanation in straightforward, natural language to resolve ambiguity.
  • Privacy Awareness: The ability to explain the logic of a model provides a tool for assessing privacy. An opaque model may capture sensitive data and cause a privacy breach.
Although providing interpretability and explainability increases users’ trust in the system and helps in rectifying deficiencies, it can adversely affect the performance of the models [7]. The more interpretable the systems are, the lower the performance tends to be. Consequently, achieving a balance between interpretability and performance is always a challenge [9]. Performance refers to the accuracy of the model’s out-of-sample predictions, with no concern for the reasons behind such outcomes. Conversely, interpretability refers to the ability of the model to convey information in a comprehensible manner to humans [6].

1.3. How?

Experts across various fields have created a variety of XAI tools to help uncover the internal mechanisms of AI models that operate as black boxes [10,11,12]. In general, they can be categorized into three groups based on the sources of explanation, the scopes of explanation, and the dependency on the model [13]. The taxonomy is shown in Figure 2. The source of an explanation could inherently stem from the model structure itself, as seen in simple models like small decision trees (DT) and linear models, or it could be derived by applying post hoc methods to a trained model. In [7], they are referred to as transparent and post hoc explainable models. Then, three levels of transparency are defined: algorithmic transparency, decomposability, and simulatability. The post hoc explainability is also divided into text, visual, local, example-based, simplification-based, and feature relevance explanation techniques.
Based on the taxonomy, the scope of explanations can be global or local. Global interpretability refers to the ability to describe the whole logic and any possible outcome of the model. Conversely, local interpretability focuses on explaining the model’s interpretation of specific situations or decisions [2]. When considering dependence on the predictor model, model-specific interpretation tools depend on a single model or a group of models, while model-agnostic techniques can be applied to any AI model. Model-specific techniques have the advantage of accessing the internal structure and weights of a particular model, but they are not easily transferable to other models [26]. Model-agnostic techniques only analyze models’ input–output pairs and generate explanations after the training session. Therefore, they are applicable to any ML model and offer post hoc explanations.
In another review by F. Bodria et al. [27], the explainers are categorized based on different data types: tabular, image, and text. For each type of data, they have distinguished a different type of explanation, as illustrated in Figure 3. For tabular data, feature importance is introduced as the most commonly used type of explanation. The explainer assigns an importance value to each feature, reflecting its contribution to the prediction under analysis. The sign and magnitude of each importance value signify whether the feature positively or negatively influences the outcome. Importance-based methods are effective for domain experts who understand the relevance of the features. Other types of explanations for tabular data include rules, prototypes, and counterfactuals.
Nonetheless, it may be too complicated for a common end-user to understand the feature impact. In these cases, rule-based explanations, prototypes, and counterfactuals are more suitable for end-users because of their coherence and use of example-based explanations. For image data, explanations are provided by the following methods: saliency maps, concept attribution, prototypes, and counterfactuals. Saliency maps adjust the brightness of image pixels to differentiate visual features in the image. A colored saliency map shows pixels with a positive contribution in red and negative ones in blue. A problem with saliency maps is confirmation bias. Again, the popular saliency map is suitable for domain experts, as the explanations are at the pixel level rather than in the form of a straightforward interpretation. Concept attribution generates explanations in terms of higher-level features called concepts. These methods compute a score that assesses the probability that a concept selected by a human team affected the prediction; therefore, they can provide human-like explanations. Other approaches are based on providing examples to demonstrate the explanation. In contrast to tabular and image data, text data are not structured, making classification more difficult. Text classification refers to the task of tagging or categorizing text based on its content. XAI techniques help to understand which words affect a specific tag assignment. For text data, explanations are usually provided by saliency highlighting and attention-based methods, among others.
Figure 4 presents a timeline chart for the significant XAI methods proposed since 2011. Model-agnostic methods, such as Shapley values for explaining reinforcement learning (SVERL) [42], diverse counterfactual explanations (DICE) [43], Anchors [44], and testing with concept activation vectors (TCAV) [45], are displayed above the time axis, while model-specific methods, like class activation map (CAM) [46], deep learning important features (DeepLIFT) [47], and layer-wise relevance propagation (LRP) [48], are shown below. During the period from 2017 to 2020, significant growth in both categories is evident. Based on the number of citations, local interpretable model-agnostic explanations (LIME), Shapley additive explanations (SHAP), and Anchors are the most cited methods in the model-agnostic category. In the model-specific category, deconvolutional network (DeconvNet) [49], CAM, and GradCAM [50] are the most frequently cited. The explanation types include feature importance [14], rules [44], counterfactuals [51], saliency maps [52], prototypes [53], and concepts [45]. Moreover, it can be seen that several variations of gradient-based explanatory methods have been proposed, such as saliency gradient (SG) [54], integrated gradient (IntGrad) [55], smoothing gradient (SmoothGrad) [52], and gradient-weighted CAM (grad-CAM) [50]. These methods access the internal signals of the model to estimate the model’s decision process. The gradients are used to realize how the output responds to variations in the input. Perturbation-based explanation algorithms can also be identified, such as individual conditional expectation (ICE) [56], SHAP, LIME, and Anchors. They differ from gradient-based methods because they do not require access to the model’s internal parameters. Instead, they rely only on input–output pairs to analyze the model’s decision-making process. This characteristic makes them generally model-agnostic.
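To make the distinction concrete, the minimal PyTorch sketch below computes a plain input-gradient (saliency) explanation, the simplest member of the gradient-based family discussed above; the tiny network and random sample are illustrative placeholders, not a model from the surveyed works.

```python
# Minimal sketch: a plain input-gradient ("saliency") explanation in PyTorch.
# The model and data are toy placeholders; real studies use trained models.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 8, requires_grad=True)   # one tabular sample
target_class = 1

score = model(x)[0, target_class]
score.backward()                             # d(score)/d(input)

saliency = x.grad.abs().squeeze()            # larger gradient -> more influential feature
print(saliency)
```

Perturbation-based explainers such as LIME or SHAP would instead repeatedly query the model with modified inputs and never touch `x.grad`, which is why they remain model-agnostic.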
In SHAP, the feature importance is quantified using Shapley values, a concept derived from coalitional game theory. The primary approach involves approximating the original complex model, $f(x)$, with a simpler and interpretable function, $g(x')$. This simpler model captures the contribution of each feature to the prediction. All the explanation models provided by SHAP are known as additive feature attribution methods and adhere to the following property [14]:

$$g(x') = \varphi_0 + \sum_{j=1}^{M} \varphi_j x'_j \qquad (1)$$

where $x'$ represents the simplified input features, $\varphi_j$ is the contribution of the $j$-th feature, and $M$ is the number of simplified input features. For a given input $x_j$, the simplified variable $x'_j$ is defined with a mapping function, where $x_j = h_x(x'_j)$. The Shapley values, $\varphi$, are computed using the following equation:

$$\varphi_j(f, x) = \sum_{k \subseteq x'} \frac{|k|! \, (M - |k| - 1)!}{M!} \left[ f(h_x(k)) - f(h_x(k \setminus j)) \right] \qquad (2)$$

where $|k|$ denotes the number of non-zero elements in $k$, and $k \setminus j$ indicates setting the $j$-th element of $k$ to zero. If Shapley values are used for individual samples, they provide a local explanation. Conversely, when aggregated across samples, they offer a global explanation. The computation of Shapley values can require significant computational resources, especially for models with many features, as it involves evaluating every possible combination of features.
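As a concrete reference point for Equations (1) and (2), the short sketch below estimates Shapley values with the widely used shap Python library; the random-forest model and synthetic data are illustrative placeholders rather than a setup from the surveyed works.

```python
# Minimal sketch: estimating Shapley values for a tree-based model with shap.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy tabular data: 200 samples, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] * 2.0 + X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes (approximate) Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # shape: (n_samples, n_features)

# Local explanation: contributions phi_j for one sample.
print("phi for sample 0:", shap_values[0])

# Global explanation: mean absolute contribution per feature.
print("global importance:", np.abs(shap_values).mean(axis=0))
```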
Equation (1) also applies to the LIME method, classifying it as an additive attribution method that provides local explanations in the form of feature importance vectors. For the target sample, LIME aims to approximate a linear model by generating and analyzing synthetic data points (perturbed data) in the neighborhood of the sample being explained. Then, the local feature importance vector is derived from the weights of the linear model. The objective function for this method is to minimize the weighted sum of squared errors between the original model’s predictions and the linear model’s predictions [8]:
$$\xi = \underset{g \in G}{\arg\min} \; L(f, g, \pi_x) + \Omega(g) \qquad (3)$$

where $G$ represents the set of explanation models, $L(f, g, \pi_x)$ denotes the squared loss of the explanation model $g$, and $\Omega(g)$ is a complexity penalty. $\pi_x$ is the proximity measure that quantifies the distance between the sample $x$ and the synthetic data points.
Several algorithms have been proposed for generating synthetic neighborhoods, such as DLIME [75], ALIME [76], and QLIME [77], leading to different implementations of this method and thus different outcomes. The primary drawback of LIME is its instability, particularly for high-dimensional data where defining a local neighborhood is challenging. Even minor changes in the neighborhood can result in significantly different explanations, making the results unreliable.
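For comparison, a minimal tabular LIME workflow of the kind described above might look as follows; the classifier, synthetic data, and feature names are illustrative placeholders, and the call pattern follows the lime package's common usage.

```python
# Minimal sketch: a local LIME explanation for a tabular classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=["f0", "f1", "f2", "f3"],
    mode="classification",
)

# Explain one prediction: LIME perturbs the sample, queries the black-box model,
# and fits a weighted linear surrogate in the neighborhood (Equation (3)).
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())   # local feature-importance weights of the linear surrogate
```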
Although various explanation methods have been investigated, it remains essential to evaluate their quality quantitatively [78]. The evaluation helps to determine the extent to which the provided explainability aligns with the defined objectives [79]. Additionally, comparing the available explanation methods allows us to identify the most suitable explanation for a specific task. However, a significant challenge arises from the lack of a definitive ground truth when evaluating post hoc explanations. This is primarily because the internal mechanisms of the model remain undisclosed [80].

2. XAI in Manufacturing and Industrial Systems

Manufacturers are consistently seeking innovative strategies to maximize profits, minimize risks, and enhance production efficiency. This is vital for their survival and for ensuring a prosperous and sustainable future. Through Industry 4.0, data-rich ICPSs, and IoT devices, AI-based and ML-powered techniques are unlocking new opportunities to leverage data for the aforementioned business objectives [81,82]. In Figure 5, major application areas for AI in ICPSs and manufacturing industries are identified.
Despite the growing demand for AI technologies in industry, specialists now need more explanations about how decisions are made and what instructions are given. Hence, for smooth deployment and acceptance, decisions need to be clear and comprehensible [83]. Although the need for XAI is very clear, achieving this goal can be challenging. The root cause is the complexity of AI systems and the difficulty of summarizing their behavior in a form suited to human intuition. In the rest of this paper, we review the articles that employ XAI methods in ICPSs, IoT, and manufacturing devices and classify them according to the use cases mentioned in Figure 5. The selection criteria were designed to identify studies that specifically addressed recent advancements in XAI, showcased practical applications in industrial settings, and were subject to peer review.

2.1. Product Development

AI methods are revolutionizing product development (PD) across industries, as they can reduce the design costs, optimize the product design, enable the early detection of potential issues, and accelerate the design verification through virtual prototyping and simulation tools. XAI aids in gaining the trust of developers and stakeholders by revealing the laws and strategies of black-box models in the learning process.
The literature contains a large number of studies on the benefits gained from XAI in chemistry and materials science in terms of assessing the characteristics of new material systems and structures, such as predicting the material properties of high-strength metal alloys. S. Park et al. [84] proposed employing a Keras-based DNN algorithm to recommend new chemical compositions and fabrication processes, leading to enhanced mechanical properties of 7xxx aluminium alloys. The LIME algorithm then reveals the significance of certain chemical compositions and processing parameters in this improvement. Yan et al. [85] analyzed the fatigue strength of steel by combining extreme gradient boosting (XGB) and light gradient-boosting machine (LGBM) methods coupled with a SHAP feature-importance graph. The oxidation resistance of a FeCrAl alloy was predicted by neural networks in [86], and the subsequent SHAP method unveiled the contribution of each material. For the first time in [87], SHAP, built upon a Gaussian process regression (GPR) model, was employed as an XAI tool to analyze how feature values impact the hardness variation of FeCrAl. Xiong et al. [88] and Yang et al. [89] used SHAP to identify crucial parameters for enhancing the hardness of high-entropy alloys. SHAP values were also used in the modeling of the dielectric constant of crystals [23]. To accurately predict creep life, ref. [19] compared various AI models, which led to the proposal of a new alloy composition with a predicted creep life exceeding 100,000 h under specified conditions. This work utilized SHAP to deliver clear insights into the impact of individual variables on creep-life prediction for high-temperature components.
XAI has also demonstrated benefits in the field of drug design [90,91]. Authors believe that XAI has the potential to enhance human intuition and expertise in the creation of novel bioactive compounds with specific properties [90].

2.2. Process Control and Automation

Process control (PC) in manufacturing refers to the application of technology to manage and regulate manufacturing processes to ensure a consistent and economic production level. This concept introduces various forms of automation to the field, aimed at minimizing the labor needed and the possibility of human error. The automation entails the utilization of sensors, robots, programmable controllers, actuators, interfaces, and software. An AI-driven automated process can significantly enhance efficiency and workplace safety and reduce human error. Furthermore, it provides valuable insights that contribute to informed decision-making. These insights comprise data analysis, trend identification, predictive modeling, or other methods whereby the AI system extracts valuable knowledge from the processed data. XAI ensures that those insights are transparent and interpretable to users, which can be crucial for decision-making processes.
Ref. [92] explored the requirements and implications of implementing XAI in manufacturing environments, emphasizing the demand for transparency and interpretability in AI-driven systems. The study investigated a large car manufacturing plant as a use case. The work focused on formulating and integrating business-oriented XAI requirements into processes. The author believed that the early integration of XAI was essential for realizing the complete capabilities of AI systems. In a similar use case, ref. [93] employed LIME to provide an explanation for deviations found from the expected production. The explanation specified which elements influenced the production level in order to infer the root cause of the deviation. In an optimization process, ref. [94] tested the use of XGB and LGBM methods to detect product defects in an injection molding machine. Then, the SHAP method extracted the key variables influencing product defects. In semiconductor manufacturing, SHAP values contributed to a deeper understanding of yield-related factors [95]. Likewise, Shapley values quantified the significance of each process step on the performance of the semiconductor device [96]. A recent study by Zhai et al. [97] introduced a domain-specific, explainable automated machine learning technique (xAutoML). The method is capable of learning the optimal models for yield prediction and provides explainability by assessing all elements, such as features and hyper-parameters, both locally and globally.

2.3. Inventory Management

Efficient inventory management (InvM) is vital for cutting costs in the supply chain. It aims to optimize inventory levels to prevent both overstocking and stockouts. ML and data-driven methods can help to categorize commodities, identify demand patterns, optimize inventory levels, automate reordering, and pinpoint bottlenecks in the supply chain [98]. Ref. [99] proposed an explainable k-means approach for multi-criteria ABC item classification. ABC classification is a method used in inventory management that classifies inventory items based on their importance to the business. The authors argue that the automatic classification produced by a black-box model is difficult to understand and is not sufficient for real-life business applications. Consequently, using the SHAP method, the contribution of each criterion is visualized during the construction of inventory classes. In the work by [100], the backorder prediction problem was addressed. A backorder happens when customers can place an order for a product despite that product being temporarily out of stock at the time of order placement. Several ML models, such as random forest (RF), XGB, and LGBM, were compared to solve this binary classification problem. The SHAP algorithm was utilized to discover the critical features that impact material backorders. For the backorder prediction task, ref. [101] proposed a convolutional neural network (CNN)-based predictive model coupled with SHAP as a global explainer. The LIME algorithm could also analyze individual decisions for stakeholders in order to enhance the interpretability of and trust in the model.

2.4. Fault Detection and Diagnosis

The fault detection and diagnosis (FDD) of machinery and ICPSs is one of the main applications of AI in industry, since the health of components, subsystems, machines, and equipment can significantly impact productivity and efficiency [102,103,104,105]. In the literature, numerous research studies have been conducted to address explainable fault detection, diagnosis, and prognosis in industrial applications. The task of fault detection in system components, subsystems, machines, and equipment largely resembles the task of anomaly or outlier detection in the realm of computational intelligence and data analytics. By definition, anomaly or outlier detection is the process of identifying samples, observations, or events that deviate significantly from other instances in the data. Early anomaly detection enables quick and informed decision-making, allowing for necessary actions to be taken before critical failures occur. This proactive approach prevents collateral damage to other system components and, in extreme scenarios, ensures personnel safety. Upon detecting an anomaly and notifying the operator, it is essential to provide additional explanations. This ensures that the information can effectively assist the operator in making informed decisions.
In [106], the authors devised a practical anomaly detection system for large-scale rotating machinery using the power spectrum of machine vibrations. They employed a CNN visualization technique to obtain an explanation for each prediction. The resulting visualizations effectively guided experts’ focus toward specific machine regions, streamlining fault analysis. In a related study, Grezmak et al. [107] employed a CNN and LRP for diagnosing motor faults. They evaluated the CNN model trained on time-frequency spectra images derived from vibration signals of an induction motor. Although many XAI techniques are designed for image, tabular, or textual data, their application to typical industrial data, especially multivariate time-series data, may not adequately enhance human interpretability [108]. Hence, to improve XAI applicability in industrial contexts, more advanced approaches tailored to multivariate time-series data are necessary. M. Dix et al. [21] addressed this limitation in the industrial process automation domain. Three architectures were considered: dense autoencoders, long short-term memory (LSTM) autoencoders, and LSTMs. An autoencoder is an unsupervised model composed of two artificial neural networks (ANNs): an encoder, which compresses the data, and a decoder, which reconstructs the data from the compressed representation. Dense layers are used to learn the nonlinear relationships in the data, enabling efficient reduction to a lower-dimensional latent space. This model is used for anomaly detection. The model is initially constructed based on normal plant data. When a new data sample emerges, the model can predict whether or not the sample represents an anomaly. This is achieved by using the reconstruction error, which is calculated as the mean squared error (MSE) between the model’s input and output. A threshold is set during the training session by averaging the errors of all normal training samples and adding one standard deviation to this average. If the error of a new sample exceeds this threshold, an anomaly is identified and reported. LSTM is a type of recurrent neural network that can capture long-term dependencies within sequential data. It learns temporal patterns in time-series data for future point predictions. By using backpropagation through time, LSTM tracks errors between predicted and actual outputs. It can be used for anomaly detection by reproducing input time windows and measuring the MSE to assess prediction accuracy and identify anomalies in data samples. In [21], the authors evaluated the detection of 20 simulated failure cases involving various valve failures in separator process time-series data. The study also assessed the model’s ability to explain the root causes of these anomalies. The proposed approach involves breaking down the reconstruction error into various signals to guide the operator to the machine likely responsible for a specific anomaly in the plant process. As a variant of LSTM, a Bayesian LSTM integrates Bayesian inference by employing dropout during training to handle uncertainty. In [109], a Bayesian LSTM was employed for gas turbine anomaly detection and prognosis, with outputs explained by SHAP. The model included two output layers to assess data and parameter uncertainty, reflecting confidence levels. Performance was evaluated using the root-mean-square error (RMSE) and an early prediction score, while SHAP explanations were assessed for local accuracy and consistency.
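A minimal sketch of the reconstruction-error scheme described above is given below: a dense autoencoder is trained on normal data only, and the anomaly threshold is set to the mean training reconstruction error plus one standard deviation. The framework, layer sizes, and synthetic data are illustrative assumptions, not the setup used in [21].

```python
# Minimal sketch: dense-autoencoder anomaly detection via reconstruction error.
# Data, architecture, and hyper-parameters are illustrative placeholders.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(1000, 20))        # stand-in for normal sensor snapshots

autoencoder = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(20,)),   # encoder
    keras.layers.Dense(20, activation="linear"),                   # decoder
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_normal, X_normal, epochs=20, batch_size=32, verbose=0)

# Per-sample reconstruction error (MSE) on the normal training data.
recon = autoencoder.predict(X_normal, verbose=0)
train_err = np.mean((X_normal - recon) ** 2, axis=1)
threshold = train_err.mean() + train_err.std()   # mean + one standard deviation

def is_anomaly(sample: np.ndarray) -> bool:
    """Flag a new sample when its reconstruction error exceeds the threshold."""
    rec = autoencoder.predict(sample[None, :], verbose=0)[0]
    return float(np.mean((sample - rec) ** 2)) > threshold

print(is_anomaly(rng.normal(size=20) * 5.0))   # a grossly out-of-distribution sample
```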
Brito et al. [110] diagnosed faults in rotating machinery using different unsupervised approaches, including the minimum covariance determinant (MCD) and isolation forest (IF). Feature importance was then determined using the model-agnostic SHAP and the model-specific local depth-based isolation forest feature importance (DIFFI) method. Finally, root cause analysis identified the most critical features. Similarly, [22] diagnosed CNC machine faults using IF in a supervised manner, utilizing a large history of sensory data for learning. The SHAP library then generated explanations through custom charts.
In rotating machinery, bearing fault detection is vital for maintaining the performance and longevity of the machine. In [29], an additive Shapley explanation combined with the k-nearest neighbors (KNN) classifier was used to diagnose bearing faults. The algorithm demonstrated significant accuracy on experimental data. This approach was believed to be adaptable to various datasets with different configurations, making it a versatile and effective model. LRP is an explanation method specifically designed for deep convolutional neural networks (DCNNs). It quantifies the contribution of individual input features to the DCNN’s output. Grezmak et al. [111] employed LRP to clarify the classification decisions of a DCNN developed for diagnosing gearbox faults. LRP helped to identify the time-frequency points in spectra images that most significantly contribute to determining fault type and severity.
A linear motion guide, also known as a linear guide or linear bearing, is a mechanical component designed to provide accurate linear motion with low friction. Kim et al. [34] identified faults in linear guides by training a 1-D CNN using time-domain data and visualizing the classification criteria with frequency-domain-based grad-CAM (FG-CAM). Grad-CAM interprets the model through a reverse learning process. The proposed method was anticipated to be applicable across various complex physical models. Similar approaches were presented in [35,106,112].
Srinivasan et al. [113] proposed an XAI-based approach for detecting and diagnosing faults in chillers. In this work, the XGB model, an ensemble of classification and regression trees, was employed. The LIME technique was introduced to aid in detecting initial faults, potentially with human assistance. Furthermore, this information enhanced both accuracy and transparency while shortening fault detection time. Likewise, LIME validated the ANN and support vector machine (SVM) models in the study conducted by [114] on air handling unit (AHU) fault detection. The primary goal of this validation was to enhance the model’s trustworthiness for the users.
Remaining useful life (RUL) refers to the estimated duration remaining before a machine or device requires repair or replacement. It provides valuable information for maintenance planning and decision-making. In the study by Hong et al. [115], three algorithms, including CNN, LSTM, and Bi-LSTM, were utilized to accurately predict the RUL of turbofan engines. The impact of each input variable was confirmed through SHAP. Conventional CNN filters lack transparency and may contain noisy or undesirable spectral patterns. This concern was resolved in [116] through the application of SincNet. SincNet encourages the initial convolutional layer to generate interpretable filter kernels. The approach known as Deep-SincNet demonstrated superior performance, greater explainability, broader generalization, noise immunity, and lower implementation costs. In a similar approach, T. Li [25] integrated a wavelet convolutional layer (CWConv) into the initial layer of the CNN, resulting in a wavelet-driven DNN named wavelet kernel net (WKN) that utilizes more meaningful kernels. It was proven that WKN achieves better accuracy, and the CWConv layer also enhances interpretability. The classification results of a CNN model can be interpreted using CAM, which incorporates a final layer known as the global average pooling (GAP) layer. The CAM layer identifies regions of the input image that contribute to a specific prediction. K. Sun et al. [117] applied this approach to diagnose faults in two separate datasets comprising water pumps and cantilever beams. Table 1 provides an overview of the models employed in the context of FDD, along with their corresponding XAI techniques.
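To illustrate the CAM mechanism mentioned above, the sketch below weights the feature maps of a GAP-based CNN by the classifier weights of the predicted class; the tiny network and random input are illustrative placeholders, not the models used in [117].

```python
# Minimal sketch: class activation mapping (CAM) for a CNN ending in
# global average pooling (GAP) plus a linear classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCamNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(32, n_classes)   # applied after GAP

    def forward(self, x):
        fmap = self.features(x)                       # (B, 32, H, W)
        pooled = fmap.mean(dim=(2, 3))                # global average pooling
        return self.classifier(pooled), fmap

model = TinyCamNet().eval()
x = torch.randn(1, 1, 64, 64)                         # e.g. a time-frequency image
logits, fmap = model(x)
cls = logits.argmax(dim=1).item()

# CAM: weight each feature map by the classifier weight of the predicted class.
weights = model.classifier.weight[cls]                # (32,)
cam = torch.einsum("k,khw->hw", weights, fmap[0])     # (H, W) relevance map
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)
```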

2.5. Predictive Maintenance

Predictive maintenance (PdM), also referred to as “condition-based maintenance” or “risk-based maintenance”, involves monitoring system and equipment performance during regular operations to reduce the risk of breakdowns. The ability to predict when maintenance is needed optimizes system lifetime and minimizes downtime [146]. PdM, along with preventive maintenance, aims to schedule maintenance activities in order to prevent system failures. However, unlike traditional preventive maintenance, PdM relies on data collected from sensors and their analysis [147]. Figure 6 compares the costs associated with different maintenance strategies. As a machine degrades over time, repair costs increase, whereas the cost of preventive measures decreases; the extended operational lifespan enhances profitability. However, equipment replacement can lead to significant losses, highlighting the importance of pinpointing the optimal time for maintenance, which is facilitated by PdM. In PdM, three kinds of approaches are available: (1) data-driven, (2) model-based, and (3) hybrid. Generally, data-driven methods are preferred because of the abundance of available sensory measurements, and to avoid the complexities associated with building intricate physical models of systems [148]. A primary concern with data-driven methods is the lack of model interpretability. In fact, the interpretability of models poses significant challenges to the adoption of data-driven methods in industrial settings. There are various papers addressing PdM in industry, but few consider the explainability of predictions to the service engineers.
A recent work by L. Cummins et al. [149] reviewed explainable predictive maintenance (XPM) methods. This systematic review identified deficiencies in the field, particularly the under-utilization of explanatory metrics in PdM. Additionally, the survey introduced a variety of potential metrics from the existing literature that could be adopted in this domain.
Figure 6. Cost relation between the different maintenance strategies [150].
Matzka [151] utilized DTs for explainability on a synthetic PdM dataset that mirrors real-world industry data. This work evaluates two methods for explaining the classification results of a complex ensemble. While both methods provided overall benefits to the user without incurring significant additional costs, DTs offered explanations of higher quality, albeit occasionally absent. On the other hand, normalized feature deviations consistently provided explanations, albeit of slightly lower quality.
For preventive maintenance in industry, two intrinsic XAI methods were introduced by [16,148]. Upasane et al. [16] employed an interpretable rule-based type-2 fuzzy logic system (FLS) to monitor water pump health, demonstrating superior explainability compared to other transparent models. Langone et al. [148] proposed an interpretable anomaly prediction (IAP) method that benefited from regularized logistic regression as the core model to explain detected anomalies. This methodology breaks down anomaly detection into interpretable steps, providing clarity at each stage and offering a probabilistic anomaly score for future abnormal data events. The method was validated using a time-series dataset collected from a high-pressure pump in a chemical plant.
The prediction of bearing health was conducted in the work by Haiyue et al. [152], where an LSTM-RNN model was employed. Additionally, the LRP technique was used to understand how the model learned from the input data and to visualize the contribution and relevance distribution across the neural network’s input space. Ref. [153] introduced the use of learning fuzzy cognitive maps (LFCMs) to enhance the explainability of LSTMs in PdM for industrial bearings. The authors demonstrated how LFCMs could provide insights into which input features contributed to predictions and how adjusting specific values could potentially prevent faults, offering advantages over existing explainability methods. In another work by the same authors [154], a novel model-agnostic explainability method, known as the Gumbel–Sigmoid explanator (GSX), was developed to illustrate the contribution of features to predictions.
Ref. [155] evaluated the quantitative association rule mining algorithms (QARMAs) on real-life datasets from two production lines to enable a PdM paradigm. When sufficient data was available, QARMA achieved excellent results in terms of the RUL prediction, outperforming other popular models. However, QARMA was less effective in scenarios with limited sensory data, emphasizing that quality control (QC) measurements alone cannot predict the maintenance needs. Furthermore, QARMA’s rule-based outputs made its models straightforward to interpret, providing a higher level of explainability compared to other deep learning approaches.

2.6. Product Quality

In manufacturing plants, the quality of products may not meet the desired specifications due to deficiencies in factors such as raw materials, production technology, labor skills, storage, and transport facilities. Quality control and quality assurance (QA) are vital operations in manufacturing that ensure the company’s processes work as planned and the final product fulfills the quality requirements. AI solutions have made a major breakthrough in automated inspection and quality control, outperforming humans in inspection accuracy and rate. One of the most powerful types of AI techniques used in product inspection is computer vision, i.e., methods that provide image-based automatic inspection. The use of deep learning and machine vision (MV) provides opportunities for building smart systems that perform thorough quality checks down to the smallest details. A detailed review of MV systems for industrial quality control inspection was presented in [156]. Although numerous research studies have been conducted in this field, only a few works have been found that implement explainability in the quality management domain.
XAI techniques can clarify how process parameters relate to the quality of the product [22]. Goldman et al. [157] employed XAI techniques, including CAM and contrastive gradient-based saliency maps, to interpret a black-box classifier in the context of assessing weld quality in ultrasonically welded battery tabs. They produced heatmaps to gain insights into distinguishing true positive predictions from false positives. Lee et al. [158] applied various XAI methods to explain defect classification in thin-film-transistor liquid-crystal display panels. Methods such as CAM, LRP, IntGrad, guided backpropagation, and SmoothGrad were employed and visualized using the VGG-16 model. Among these, LRP and guided backpropagation were chosen for their well-distributed heatmaps. Furthermore, the authors enhanced explainability by converting model predictions into human-readable texts within a DT framework, achieving maximum interpretability for evaluation by domain experts. Senoner et al. [159] introduced a data-driven decision model aimed at selecting improvement actions for a transistor chip product. Initially, the method prioritizes processes for quality enhancement and subsequently identifies suitable improvement measures. A novel estimation approach for process importance was introduced using SHAP values, quantifying the extent to which the production parameters of a specific process influence variations in the overall process quality. Enhancement efforts can be allocated efficiently with this approach. The detection of surface defects on steel plates was studied in [160]. Kharal et al. compared nine different classifiers and identified those with the best performance. Due to the imbalanced multi-class data, several methodologies for balancing the data were applied and compared, such as oversampling, undersampling, and the synthetic minority oversampling technique (SMOTE). For explainability, rules were extracted through RF and association rule mining (ARM). In addition, a local model-agnostic XAI tool, called the ceteris paribus (CP) technique, was used for feature-importance visualization. For a global explanation, the authors determined the dependence of the average prediction upon different variables using partial dependence plots (PDPs). The automated classification of fibre layup defects was studied in [161]. A combination of CNN classifiers with smoothed IntGrad, guided grad-CAM, and DeepSHAP was found to be suitable in the proposed context. The research findings offer valuable insights to engineers developing camera-based monitoring systems in the composites sector. These insights pertain to designing and implementing sophisticated yet reliable ML solutions. 3D laser scanners are widely employed as quality control tools in industries such as the automotive, aeronautics, and energy sectors. E. Lavasa et al. [162] developed an AI-based decision support system, based on a modified PointNet architecture, to forecast the point-wise accuracy of the laser scanner throughout the surface of the analyzed part. Additionally, they utilized SHAP to offer an in-depth understanding of the most critical parameters influencing the predictions made by the model.
Table 2 provides a summary of the latest AI and XAI approaches applied to various use cases in the manufacturing industry and ICPS. It also highlights the types of data processed (e.g., images, tabular data) and the forms of explanations provided to users (e.g., feature importance).

3. XAI in Industrial Cyber–Physical Systems

Historically, the evolution from traditional manufacturing to ICPSs marks a significant shift towards smarter, more interconnected production environments and industrial systems. This progression reflects the industry’s growing focus on integrating advanced technologies to enhance precision, adaptability, and security in modern manufacturing processes [170]. Building on our discussion of XAI in the manufacturing industry, it is essential to consider how these technologies are integrated within ICPSs.
The rapid development of information technology demands sophisticated cyber–physical systems (CPSs) that integrate computing, communication, and control (3C technologies), crucial for Industry 4.0 and various applications. CPSs integrate computing, networks, and physical environments to achieve real-time systems that monitor and control physical entities through a computing core [171]. Based on the OSI model [172], CPS architectures include seven layers: a physical layer, a data-link layer, a network layer, a transport layer, a session layer, a presentation layer, and an application layer.
Cybersecurity is crucial for CPSs, which are vulnerable to cyberattacks threatening data confidentiality and infrastructure [173]. Cyber–physical attacks exploit network vulnerabilities to disrupt physical systems using methods like malware injection and man-in-the-middle attacks. Stealthy attacks, such as replay and zero-dynamics attacks, manipulate control signals undetected by conventional systems. Therefore, unlike traditional cybersecurity, cyber–physical security must address both the physical and the network aspects [174]. It includes passive measures, like encryption, and active measures, such as recovery strategies, to ensure CPS resiliency [175]. Cyber-resilience ensures that CPSs continue to function despite attacks, which is essential for sectors like healthcare and manufacturing.
Implementing AI in ICPSs without explainability introduces significant risks, including security vulnerabilities, undetected biases, misuse, unfair outcomes, and lack of accountability. XAI mitigates these risks by ensuring transparency, fostering trust, and enabling proper oversight. XAI enhances cybersecurity by allowing users to monitor and analyze system behavior, thereby improving intrusion detection and response. Additionally, XAI helps reduce costs, prevent accidents, and ensure legal compliance.
XAI can help in enhancing the transparency and trust of AI models used to support CPS operations. Combining XAI and CPS fosters safe, secure, accountable, and resilient systems driving significant societal and environmental benefits [176]. The current research on XAI for CPS is limited, focusing on specific applications like medical and industrial CPS. Challenges include biases, the lack of standardized methods, and the inadequate handling of time-series data.

Cyber-Security

The advent of connected and digital manufacturing devices and systems promises to significantly increase the efficiency and speed of manufacturing. This optimizes supply chain processes and allows for effective automation. In ICPS automation, multiple layers of critical connections are established, comprising cyber–physical components, systems, networks, and controls. This configuration increases both the complexity and vulnerability of ICPS to cyber-security (CybSec) threats [177].
Ensuring efficient manufacturing automation is vital for the success of industries. Hence, by using AI techniques, we can detect abnormal behaviors in ICPSs and respond quickly to prevent security incidents. Although AI models can achieve high detection accuracy, no explanation is given for the decisions made, specifically in ICPSs, where information is often abstract. Here, we review works that bring explainability into their solutions for cyber threat detection.
Amarasinghe et al. [137] proposed a framework to identify DoS attacks in ICPSs. A DNN was used to detect anomalies. Then, a post hoc explanation was generated using the LRP technique to assess the relevance of input features in explaining predictions made by the trained DNN model. The explainer also provided the user with the confidence of the prediction and a textual description of detected anomalies. The evaluation was performed using a subset of the NSL-KDD dataset. In a related work, Hwang et al. [163] employed multiple Bi-LSTM networks to detect security threats in an ICPS ecosystem. The approach aimed to reduce false detections within each model, while also detecting a broader range of anomalies across all models collectively. Then, the contribution score of each feature was provided using SHAP. A novel Conv-LSTM-based autoencoder framework for explainable attack detection in time-series data collected from the industrial internet of things (IIoT) has been proposed by I.A.Khan et al. [20]. They demonstrated that the method effectively detects both known and unknown attacks in IIoT environments. With the integration of the LIME method into the framework, the most relevant features were recognized as the basis of interpretation.
Le et al. [177] developed a visual analytic framework for monitoring and assessing complex automation networks in real time to highlight possible errors, warnings, and malicious threats. In a more recent research study, a federated learning-based (FL) explainable anomaly detection framework (FedeX) has been proposed to detect and analyze anomalies in ICPSs. FL has demonstrated its potential as an effective approach for edge computing in distributed environments. The authors demonstrated that FedeX is considerably more accurate, faster, and more lightweight than competing methods. After the detection process, the XAI-SHAP model interprets and validates the model, identifying the elements causing anomalies in ICPSs.
In [178,179], SHAP has been used to provide local explanations over the results of intrusion detection systems trained on NSL-KDD. In [180], an XAI-based scheme, based on SHAP, LIME and RuleFit, has been developed for detecting intrusions. In [181], a SHAP-based intrusion detection system has been developed for DNS over HTTPS attacks. A framework has been proposed in [182], which makes use of SHAP to provide local and global explanations for a deep learning-based intrusion detection system. In [183], LIME and SHAP are used to explain the outcomes of a trained multi-layer perceptron model for detecting intrusions in IoT devices. An intrusion detection system, called X-CANIDS, has been proposed in [184] for detecting intrusion, including zero-day attacks in a controller area network. An end-to-end framework has been developed in [185] to assess XAI techniques for network intrusion detection tasks. In [186], SHAP and LIME are used to interpret the predictions of deep learning-based intrusion detection systems. In [187], an explainable deep learning-based intrusion detection system has been developed for Industry 5.0. This method integrates a bidirectional-gated recurrent unit, a bidirectional LSTM network, fully connected layers, and a softmax model along with the SHAP method, to determine the most impactful features on the predictions. Refs. [188,189,190] review recent intrusion detection techniques and provide insight on how XAI can be used to improve the state of the art in the field.
Clarity and trust are vital for the adoption of XAI in ICPSs. XAI builds trust, accountability, resilience, and legal compliance, assisting in safety and security through interpretations. Future CPS should feature context-aware interpretations and self-explainability in order to adapt to non-stationary environments.

4. Discussion and Future Directions

As AI becomes increasingly integrated into manufacturing and industrial processes, ensuring the transparency and interpretability of AI models becomes crucial. In this study, we found that XAI techniques serve primarily as a means to enhance the trustworthiness and reliability of AI systems in CPSs and smart manufacturing, enabling human operators to comprehend and validate the decisions made by these intelligent systems. T.C. Chen [191] believes that XAI development in manufacturing can enhance the practicality and effectiveness of existing AI technology by explaining the reasoning process and integrating easy-to-interpret visual features. While XAI techniques are commonly used in domains like medicine, service, and education, their application in manufacturing remains limited, highlighting both an imperative and an opportunity for their integration in this sector. A systematic literature review by [192] showed that industrial and manufacturing applications accounted for about 10 percent of the domains where different XAI methods were applied. Following the review of the implemented explainable models, a number of comments and potential research directions are suggested below:
  • Focus areas of XAI in manufacturing: In the literature, predictive maintenance, fault diagnosis and prognosis emerge as the most extensively discussed fields where XAI is applied within manufacturing. Conversely, other use cases such as product development, process control, and inventory management are rarely addressed or explored in this context.
  • Need for tailored explanations: It was noted that the majority of XAI explanations are primarily conveyed through feature importance. However, given the diverse range of users in the manufacturing industry, including machine operators, engineers, scientists, and managers, it is essential to tailor explanations based on the specific expertise and comprehension levels of the intended users, thus ensuring optimal understanding and usability.
  • Limitations of existing XAI methods: It was observed that most of the explanations were generated using post hoc methods like SHAP and LIME, covering both global and local perspectives. Nevertheless, these methods exhibited limitations in delivering real-time insights and actionable recommendations for immediate decision-making.
  • Advancing XAI techniques: There could be further advancements in XAI techniques tailored specifically for manufacturing applications. This could include the development of hybrid approaches that combine traditional AI methods with XAI techniques to achieve optimal transparency and interpretability while maintaining high performance. In addition, addressing challenges related to scalability, robustness, and compliance with industry regulations will be crucial for the widespread adoption of XAI in industrial settings.
  • Real-time XAI: There could be a focus on real-time XAI capabilities to enable dynamic decision-making and adaptation in rapidly changing industrial environments.
  • XAI and emerging technologies: The integration of XAI into emerging technologies such as the IoT and edge computing could open up new possibilities for enhancing transparency and interpretability in this domain.
  • Regulatory compliance: In the context of ICPSs and industrial settings, adherence to industry standards and regulations is particularly critical due to the potential impact on safety, security, reliability, and performance. For example, safety-critical domains such as healthcare, automotive, and aerospace have stringent regulatory requirements governing the development and deployment of AI systems to ensure patient safety, vehicle reliability, and aviation security. Therefore, any future trends in XAI within ICPSs and industrial settings must take into account the unique regulatory landscape of each industry. This includes conducting thorough assessments of regulatory requirements, integrating compliance measures into the design and development process of AI systems, and establishing transparent mechanisms for documenting and auditing XAI implementations. Furthermore, collaboration between industry stakeholders, regulatory bodies, and AI researchers is essential to address regulatory challenges effectively and ensure that XAI techniques align with industry standards and regulations. By prioritizing compliance and transparency, organizations can build trust in AI systems and facilitate their widespread adoption across diverse industrial sectors.

5. Conclusions

This study offers valuable insights into the integration of XAI in various manufacturing applications and industrial cyber–physical systems. In the introduction, we discussed the importance of XAI and its implementation, and introduced a taxonomy and categorization of XAI methods. We also highlighted advancements in XAI techniques. Throughout the study, we explored various use cases in manufacturing and outlined the different XAI approaches documented in the literature. Post hoc methods, particularly the model-agnostic techniques LIME and SHAP, are the most popular approaches. We observed limited utilization of XAI in specific use cases, such as product development and process control. Consequently, there are substantial research opportunities in manufacturing, particularly in sensitive and safety-related domains. Promising areas include the development of self-explaining approaches and the exploration of hybrid models that combine traditional AI methods with XAI techniques; such approaches have the potential to enhance transparency and interpretability while maintaining high performance. Moreover, to ensure optimal understanding and usability, it is crucial to tailor XAI explanations to the diverse range of users in the manufacturing industry. Future research could also focus on real-time XAI techniques, the integration of emerging technologies with XAI, and the development of regulation-aware XAI solutions. In conclusion, the application of XAI in manufacturing industries and ICPSs could enhance decision-making, boost operational efficiency, and ensure the reliability of AI-powered systems. By providing clarity, XAI also significantly contributes to improving safety and security within ICPSs.

Author Contributions

Conceptualization, S.M., M.F.-Z. and R.R.-F.; validation, S.M., M.F.-Z. and R.R.-F.; formal analysis, S.M., M.F.-Z. and R.R.-F.; resources, S.M. and M.F.-Z.; writing—original draft preparation, S.M. and M.F.-Z.; writing—review and editing, S.M., M.F.-Z., R.R.-F. and V.P.; visualization, S.M.; supervision, R.R.-F. and M.S.; project administration, R.R.-F., V.P. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AHU: Air handling unit
ACE: Automated concept-based explanation
AI: Artificial intelligence
ANN: Artificial neural network
ARM: Association rule mining
ASV: Asymmetric Shapley values
BB: Balanced blagging
CAM: Class activation map
CEM: Contrastive explanation method
CNN: Convolutional neural network
CP: Ceteris paribus
CPS: Cyber–physical system
CWConv: Continuous wavelet convolutional
CybSec: Cyber-security
DCAE: Deep convolutional autoencoder
DCNN: Deep convolutional neural network
DeconvNet: Deconvolutional network
DeepLIFT: Deep learning important features
DICE: Diverse counterfactual explanations
DIFFI: Depth-based isolation forest feature importance
DNN: Deep neural network
DT: Decision tree
EmSHAP: Energy-based model for Shapley value estimation
FACE: Feasible and actionable counterfactual explanations
FAM: Frequency activation map
FDD: Fault detection and diagnosis
FedeX: Federated learning-based explainable
FFNN: Feed-forward neural network
FG-CAM: Frequency-domain-based gradient-weighted class activation map
FL: Federated learning
FLS: Fuzzy logic system
GAP: Global average pooling
GNN: Graph neural networks
GPR: Gaussian process regression
Grad-CAM: Gradient-weighted class activation mapping
GSX: Gumbel–Sigmoid explanator
IAP: Interpretable anomaly prediction
ICE: Individual conditional expectation
ICPS: Industrial cyber–physical system
IF: Isolation forest
IIoT: Industrial internet of things
InvM: Inventory management
IntGrad: Integrated gradient
IoT: Internet of things
ISG: Intrinsic subgraph generation
KBDBN: Knowledge-based deep belief network
KNN: K-nearest neighbors
LFCM: Learning fuzzy cognitive maps
LGBM: Light gradient-boosting machine
LIME: Local interpretable model-agnostic explanations
LRP: Layer-wise relevance propagation
LSTM: Long short-term memory
MCD: Minimum covariance determinant
ML: Machine learning
MMD: Maximum mean discrepancy
MP: Meaningful perturbation
MSE: Mean squared error
MV: Machine vision
NAM: Neural additive models
NN: Neural network
OCTET: Object-aware counterfactual explanations
PdM: Predictive maintenance
PDP: Partial dependence plot
PC: Process control
PD: Product development
ProtoPNet: Prototypical part network
PS: Prototype selection
QA: Quality assurance
QC: Quality control
RF: Random forest
RMSE: Root-mean-square error
RUL: Remaining useful life
SG: Saliency gradient
SHAP: Shapley additive explanations
SmoothGrad: Smoothing gradient
SMOTE: Synthetic minority oversampling method
SVERL: Shapley values for explaining reinforcement learning
SVM: Support vector machine
TCAV: Testing with concept activation vectors
WKN: Wavelet kernel net
XAI: Explainable artificial intelligence
XGB: Extreme gradient boosting

References

  1. Ye, Q.; Xia, J.; Yang, G. Explainable AI for COVID-19 CT Classifiers: An Initial Comparison Study. In Proceedings of the 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), Aveiro, Portugal, 7–9 June 2021; pp. 521–526. [Google Scholar] [CrossRef]
  2. Guidotti, R.; Monreale, A.; Turini, F.; Pedreschi, D.; Giannotti, F. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 2018, 51, 1–42. [Google Scholar] [CrossRef]
  3. Gunning, D.; Aha, D. DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Mag. 2019, 40, 44–58. [Google Scholar] [CrossRef]
  4. Goodman, B.; Flaxman, S. European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”. AI Mag. 2017, 38, 50–57. [Google Scholar] [CrossRef]
  5. Nor, A.K.M.; Pedapati, S.R.; Muhammad, M.; Leiva, V. Overview of Explainable Artificial Intelligence for Prognostic and Health Management of Industrial Assets Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses. Sensors 2021, 21, 8020. [Google Scholar] [CrossRef]
  6. Doshi-Velez, F.; Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608. [Google Scholar]
  7. Barredo Arrieta, A.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
  8. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the KDD ’16: The 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
  9. El Shawi, R.; Sherif, Y.; Al-Mallah, M.; Sakr, S. Interpretability in HealthCare A Comparative Study of Local Machine Learning Interpretability Techniques. In Proceedings of the 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), Cordoba, Spain, 5–7 June 2019; pp. 275–280. [Google Scholar] [CrossRef]
  10. Speith, T. A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods. In Proceedings of the FAccT ’22, 2022 ACM Conference on Fairness, Accountability, and Transparency, New York, NY, USA, 21–24 June 2022; pp. 2239–2250. [Google Scholar] [CrossRef]
  11. Rong, Y.; Leemann, T.; trang Nguyen, T.; Fiedler, L.; Qian, P.; Unhelkar, V.; Seidel, T.; Kasneci, G.; Kasneci, E. Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations. arXiv 2023, arXiv:cs.AI/2210.11584. [Google Scholar] [CrossRef]
  12. Marcinkevičs, R.; Vogt, J.E. Interpretable and explainable machine learning: A methods-centric overview with concrete examples. WIREs Data Min. Knowl. Discov. 2023, 13, e1493. [Google Scholar] [CrossRef]
  13. Sofianidis, G.; Rozanec, J.M.; Mladenic, D.; Kyriazis, D. A Review of Explainable Artificial Intelligence in Manufacturing. CoRR 2021, 24, 93–113. [Google Scholar]
  14. Lundberg, S.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. arXiv 2017, arXiv:cs.AI/1705.07874. [Google Scholar]
  15. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar] [CrossRef]
  16. Upasane, S.J.; Hagras, H.; Anisi, M.H.; Savill, S.; Taylor, I.; Manousakis, K. A Big Bang-Big Crunch Type-2 Fuzzy Logic System for Explainable Predictive Maintenance. In Proceedings of the 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Luxembourg, Luxembourg, 11–14 July 2021; pp. 1–8. [Google Scholar] [CrossRef]
  17. Upasane, S.J.; Hagras, H.; Anisi, M.H.; Savill, S.; Taylor, I.; Manousakis, K. A Type-2 Fuzzy-Based Explainable AI System for Predictive Maintenance Within the Water Pumping Industry. IEEE Trans. Artif. Intell. 2024, 5, 490–504. [Google Scholar] [CrossRef]
  18. Huong, T.T.; Bac, T.P.; Ha, K.N.; Hoang, N.V.; Hoang, N.X.; Hung, N.T.; Tran, K.P. Federated Learning-Based Explainable Anomaly Detection for Industrial Control Systems. IEEE Access 2022, 10, 53854–53872. [Google Scholar] [CrossRef]
  19. Kong, B.O.; Kim, M.S.; Kim, B.H.; Lee, J.H. Prediction of Creep Life Using an Explainable Artificial Intelligence Technique and Alloy Design Based on the Genetic Algorithm in Creep-Strength-Enhanced Ferritic 9% Cr Steel. Met. Mater. Int. 2023, 29, 1334–1345. [Google Scholar] [CrossRef]
  20. Khan, I.A.; Moustafa, N.; Pi, D.; Sallam, K.M.; Zomaya, A.Y.; Li, B. A New Explainable Deep Learning Framework for Cyber Threat Discovery in Industrial IoT Networks. IEEE Internet Things J. 2022, 9, 11604–11613. [Google Scholar] [CrossRef]
  21. Dix, M.; Chouhan, A.; Ganguly, S.; Pradhan, S.; Saraswat, D.; Agrawal, S.; Prabhune, A. Anomaly detection in the time-series data of industrial plants using neural network architectures. In Proceedings of the 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), Oxford, UK, 23–26 August 2021; pp. 222–228. [Google Scholar] [CrossRef]
  22. Sesana, M.; Cavallaro, S.; Calabresi, M.; Capaccioli, A.; Napoletano, L.; Antonello, V.; Grandi, F. Process and Product Quality Optimization with Explainable Artificial Intelligence. In Artificial Intelligence in Manufacturing; Springer: Berlin/Heidelberg, Germany, 2024; pp. 459–477. [Google Scholar] [CrossRef]
  23. Morita, K.; Davies, D.W.; Butler, K.T.; Walsh, A. Modeling the dielectric constants of crystals using machine learning. J. Chem. Phys. 2020, 153, 024503. [Google Scholar] [CrossRef] [PubMed]
  24. Hajgató, G.; Wéber, R.; Szilágyi, B.; Tóthpál, B.; Gyires-Tóth, B.; Hős, C. PredMaX: Predictive maintenance with explainable deep convolutional autoencoders. Adv. Eng. Inform. 2022, 54, 101778. [Google Scholar] [CrossRef]
  25. Li, T.; Zhao, Z.; Sun, C.; Cheng, L.; Chen, X.; Yan, R.; Gao, R.X. WaveletKernelNet: An Interpretable Deep Neural Network for Industrial Intelligent Diagnosis. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 2302–2312. [Google Scholar] [CrossRef]
  26. Sahakyan, M.; Aung, Z.; Rahwan, T. Explainable Artificial Intelligence for Tabular Data: A Survey. IEEE Access 2021, 9, 135392–135422. [Google Scholar] [CrossRef]
  27. Bodria, F.; Giannotti, F.; Guidotti, R.; Naretto, F.; Pedreschi, D.; Rinzivillo, S. Benchmarking and Survey of Explanation Methods for Black Box Models. arXiv 2021, arXiv:cs.AI/2102.13076. [Google Scholar] [CrossRef]
  28. Gawde, S.; Patil, S.; Kumar, S.; Kamat, P.; Kotecha, K.; Alfarhood, S. Explainable Predictive Maintenance of Rotating Machines Using LIME, SHAP, PDP, ICE. IEEE Access 2024, 12, 29345–29361. [Google Scholar] [CrossRef]
  29. Hasan, M.J.; Sohaib, M.; Kim, J.M. An Explainable AI-Based Fault Diagnosis Model for Bearings. Sensors 2021, 21, 4070. [Google Scholar] [CrossRef]
  30. Dhaou, A.; Bertoncello, A.; Gourvénec, S.; Garnier, J.; Le Pennec, E. Causal and Interpretable Rules for Time Series Analysis. In Proceedings of the KDD ’21, 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, New York, NY, USA, 14–18 August 2021; pp. 2764–2772. [Google Scholar] [CrossRef]
  31. Jakubowski, J.; Stanisz, P.; Bobek, S.; Nalepa, G.J. Roll Wear Prediction in Strip Cold Rolling with Physics-Informed Autoencoder and Counterfactual Explanations. In Proceedings of the 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), Shenzhen, China, 13–16 October 2022; pp. 1–10. [Google Scholar] [CrossRef]
  32. Ming, Y.; Xu, P.; Qu, H.; Ren, L. Interpretable and Steerable Sequence Learning via Prototypes. In Proceedings of the KDD ’19, 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 903–913. [Google Scholar] [CrossRef]
  33. Tan, S.; Soloviev, M.; Hooker, G.; Wells, M.T. Tree Space Prototypes: Another Look at Making Tree Ensembles Interpretable. In Proceedings of the FODS ’20, 2020 ACM-IMS on Foundations of Data Science Conference, New York, NY, USA, 19–20 October 2020; pp. 23–34. [Google Scholar] [CrossRef]
  34. Kim, M.S.; Yun, J.P.; Park, P. An Explainable Convolutional Neural Network for Fault Diagnosis in Linear Motion Guide. IEEE Trans. Ind. Inform. 2021, 17, 4036–4045. [Google Scholar] [CrossRef]
  35. Chen, H.Y.; Lee, C.H. Vibration Signals Analysis by Explainable Artificial Intelligence (XAI) Approach: Application on Bearing Faults Diagnosis. IEEE Access 2020, 8, 134246–134256. [Google Scholar] [CrossRef]
  36. Yeh, C.K.; Kim, B.; Arik, S.O.; Li, C.L.; Pfister, T.; Ravikumar, P. On Completeness-aware Concept-Based Explanations in Deep Neural Networks. arXiv 2022, arXiv:cs.LG/1910.07969. [Google Scholar]
  37. Guidotti, R.; Monreale, A.; Matwin, S.; Pedreschi, D. Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations. Proc. AAAI Conf. Artif. Intell. 2020, 34, 13665–13668. [Google Scholar] [CrossRef]
  38. Chen, C.; Li, O.; Tao, C.; Barnett, A.J.; Su, J.; Rudin, C. This looks like that: Deep learning for interpretable image recognition. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 8 December 2019; Curran Associates Inc.: Red Hook, NY, USA, 2019. [Google Scholar]
  39. Mollas, I.; Bassiliades, N.; Tsoumakas, G. LioNets: Local Interpretation of Neural Networks Through Penultimate Layer Decoding. In Communications in Computer and Information Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 265–276. [Google Scholar] [CrossRef]
  40. Hoover, B.; Strobelt, H.; Gehrmann, S. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models. arXiv 2019, arXiv:cs.CL/1910.05276. [Google Scholar]
  41. Lampridis, O.; Guidotti, R.; Ruggieri, S. Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars. In Proceedings of the Discovery Science; Appice, A., Tsoumakas, G., Manolopoulos, Y., Matwin, S., Eds.; Springer: Cham, Switzerland, 2020; pp. 357–373. [Google Scholar]
  42. Beechey, D.; Smith, T.M.S.; Özgür, Ş. Explaining Reinforcement Learning with Shapley Values. arXiv 2023, arXiv:cs.LG/2306.05810. [Google Scholar]
  43. Mothilal, R.K.; Sharma, A.; Tan, C. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the ACM FAT ’20, 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020. [Google Scholar] [CrossRef]
  44. Ribeiro, M.T.; Singh, S.; Guestrin, C. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI’18/IAAI’18/EAAI’18, Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; AAAI Press: Washington, DC, USA, 2018. [Google Scholar]
  45. Kim, B.; Wattenberg, M.; Gilmer, J.; Cai, C.; Wexler, J.; Viegas, F.; Sayres, R. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). arXiv 2018, arXiv:stat.ML/1711.11279. [Google Scholar]
  46. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. arXiv 2015, arXiv:cs.CV/1512.04150. [Google Scholar]
  47. Shrikumar, A.; Greenside, P.; Kundaje, A. Learning Important Features Through Propagating Activation Differences. arXiv 2019, arXiv:cs.CV/1704.02685. [Google Scholar]
  48. Lapuschkin, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.R.; Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE 2015, 10, e0130140. [Google Scholar] [CrossRef]
  49. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. arXiv 2013, arXiv:cs.CV/1311.2901. [Google Scholar]
  50. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
  51. Poyiadzi, R.; Sokol, K.; Santos-Rodriguez, R.; De Bie, T.; Flach, P. FACE: Feasible and Actionable Counterfactual Explanations. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7–9 February 2020. [Google Scholar] [CrossRef]
  52. Smilkov, D.; Thorat, N.; Kim, B.; Viégas, F.; Wattenberg, M. SmoothGrad: Removing noise by adding noise. arXiv 2017, arXiv:cs.LG/1706.03825. [Google Scholar]
  53. Chen, C.; Li, O.; Tao, C.; Barnett, A.J.; Su, J.; Rudin, C. This Looks Like That: Deep Learning for Interpretable Image Recognition. arXiv 2019, arXiv:cs.LG/1806.10574. [Google Scholar]
  54. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv 2014, arXiv:cs.CV/1312.6034. [Google Scholar]
  55. Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic Attribution for Deep Networks. arXiv 2017, arXiv:cs.LG/1703.01365. [Google Scholar]
  56. Goldstein, A.; Kapelner, A.; Bleich, J.; Pitkin, E. Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation. arXiv 2014, arXiv:stat.AP/1309.6392. [Google Scholar] [CrossRef]
  57. Bien, J.; Tibshirani, R. Prototype selection for interpretable classification. Ann. Appl. Stat. 2011, 5, 2403–2424. [Google Scholar] [CrossRef]
  58. Kim, B.; Khanna, R.; Koyejo, O.O. Examples are not enough, learn to criticize! Criticism for Interpretability. In Proceedings of the Advances in Neural Information Processing Systems; Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2016; Volume 29. [Google Scholar]
  59. Fong, R.C.; Vedaldi, A. Interpretable Explanations of Black Boxes by Meaningful Perturbation. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
  60. Dhurandhar, A.; Chen, P.Y.; Luss, R.; Tu, C.C.; Ting, P.; Shanmugam, K.; Das, P. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. arXiv 2018, arXiv:cs.AI/1802.07623. [Google Scholar]
  61. Chattopadhay, A.; Sarkar, A.; Howlader, P.; Balasubramanian, V.N. Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 839–847. [Google Scholar] [CrossRef]
  62. Ghorbani, A.; Wexler, J.; Zou, J.; Kim, B. Towards Automatic Concept-based Explanations. arXiv 2019, arXiv:stat.ML/1902.03129. [Google Scholar]
  63. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.I. Explainable AI for Trees: From Local Explanations to Global Understanding. arXiv 2019, arXiv:cs.LG/1905.04610. [Google Scholar] [CrossRef]
  64. Ying, R.; Bourgeois, D.; You, J.; Zitnik, M.; Leskovec, J. GNNExplainer: Generating Explanations for Graph Neural Networks. arXiv 2019, arXiv:cs.LG/1903.03894. [Google Scholar]
  65. Looveren, A.V.; Klaise, J. Interpretable Counterfactual Explanations Guided by Prototypes. arXiv 2020, arXiv:cs.LG/1907.02584. [Google Scholar]
  66. Huang, Q.; Yamada, M.; Tian, Y.; Singh, D.; Yin, D.; Chang, Y. GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks. arXiv 2020, arXiv:cs.LG/2001.06216. [Google Scholar] [CrossRef]
  67. Wang, H.; Wang, Z.; Du, M.; Yang, F.; Zhang, Z.; Ding, S.; Mardziel, P.; Hu, X. Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. arXiv 2020, arXiv:cs.CV/1910.01279. [Google Scholar]
  68. Frye, C.; Rowat, C.; Feige, I. Asymmetric Shapley values: Incorporating causal knowledge into model-agnostic explainability. arXiv 2021, arXiv:stat.ML/1910.06358. [Google Scholar]
  69. Agarwal, R.; Melnick, L.; Frosst, N.; Zhang, X.; Lengerich, B.; Caruana, R.; Hinton, G. Neural Additive Models: Interpretable Machine Learning with Neural Nets. arXiv 2021, arXiv:cs.LG/2004.13912. [Google Scholar]
  70. Nauta, M.; van Bree, R.; Seifert, C. Neural Prototype Trees for Interpretable Fine-grained Image Recognition. arXiv 2021, arXiv:cs.CV/2012.02046. [Google Scholar]
  71. Schnake, T.; Eberle, O.; Lederer, J.; Nakajima, S.; Schutt, K.T.; Muller, K.R.; Montavon, G. Higher-Order Explanations of Graph Neural Networks via Relevant Walks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7581–7596. [Google Scholar] [CrossRef]
  72. Zemni, M.; Chen, M.; Éloi, Z.; Ben-Younes, H.; Pérez, P.; Cord, M. OCTET: Object-aware Counterfactual Explanations. arXiv 2023, arXiv:cs.CV/2211.12380. [Google Scholar]
  73. Lu, C.; Zeng, J.; Xia, Y.; Cai, J.; Luo, S. Energy-based Model for Accurate Shapley Value Estimation in Interpretable Deep Learning Predictive Modeling. arXiv 2024, arXiv:cs.LG/2404.01078. [Google Scholar]
  74. Tilli, P.; Vu, N.T. Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering. arXiv 2024, arXiv:cs.CL/2403.17647. [Google Scholar]
  75. Zafar, M.R.; Khan, N.M. DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems. arXiv 2019, arXiv:cs.LG/1906.10263. [Google Scholar]
  76. Shankaranarayana, S.M.; Runje, D. ALIME: Autoencoder Based Approach for Local Interpretability. arXiv 2019, arXiv:cs.LG/1909.02437. [Google Scholar]
  77. Bramhall, S.; Horn, H.E.; Tieu, M.; Lohia, N. QLIME-A Quadratic Local Interpretable Model-Agnostic Explanation Approach. SMU Data Sci. Rev. 2020, 3, 4. [Google Scholar]
  78. Zhou, J.; Gandomi, A.H.; Chen, F.; Holzinger, A. Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics. Electronics 2021, 10, 593. [Google Scholar] [CrossRef]
  79. Nauta, M.; Trienes, J.; Pathak, S.; Nguyen, E.; Peters, M.; Schmitt, Y.; Schlötterer, J.; van Keulen, M.; Seifert, C. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. ACM Comput. Surv. 2023, 55, 1–42. [Google Scholar] [CrossRef]
  80. Samek, W.; Montavon, G.; Vedaldi, A.; Hansen, L.K.; Müller, K.R. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer Nature: Cham, Switzerland, 2019; Volume 11700. [Google Scholar]
  81. Razavi-Far, R.; Wan, D.; Saif, M.; Mozafari, N. To Tolerate or To Impute Missing Values in V2X Communications Data? IEEE Internet Things J. 2022, 9, 11442–11452. [Google Scholar] [CrossRef]
  82. Hallaji, E.; Razavi-Far, R.; Saif, M. DLIN: Deep Ladder Imputation Network. IEEE Trans. Cybern. 2022, 52, 8629–8641. [Google Scholar] [CrossRef]
  83. Ahmed, I.; Jeon, G.; Piccialli, F. From Artificial Intelligence to Explainable Artificial Intelligence in Industry 4.0: A Survey on What, How, and Where. IEEE Trans. Ind. Inform. 2022, 18, 5031–5042. [Google Scholar] [CrossRef]
  84. Park, S.; Kayani, S.H.; Euh, K.; Seo, E.; Kim, H.; Park, S.; Yadav, B.N.; Park, S.J.; Sung, H.; Jung, I.D. High strength aluminum alloys design via explainable artificial intelligence. J. Alloys Compd. 2022, 903, 163828. [Google Scholar] [CrossRef]
  85. Yan, F.; Song, K.; Liu, Y.; Chen, S.; Chen, J. Predictions and mechanism analyses of the fatigue strength of steel based on machine learning. J. Mater. Sci. 2020, 55, 15334–15349. [Google Scholar] [CrossRef]
  86. Roy, I.; Feng, B.; Roychowdhury, S.; Ravi, S.K.; Umretiya, R.V.; Reynolds, C.; Ghosh, S.; Rebak, R.B.; Hoffman, A. Understanding oxidation of Fe-Cr-Al alloys through explainable artificial intelligence. MRS Commun. 2023, 13, 82–88. [Google Scholar] [CrossRef]
  87. Ravi, S.K.; Roy, I.; Roychowdhury, S.; Feng, B.; Ghosh, S.; Reynolds, C.; Umretiya, R.V.; Rebak, R.B.; Hoffman, A.K. Elucidating precipitation in FeCrAl alloys through explainable AI: A case study. Comput. Mater. Sci. 2023, 230, 112440. [Google Scholar] [CrossRef]
  88. Xiong, J.; Shi, S.Q.; Zhang, T.Y. Machine learning of phases and mechanical properties in complex concentrated alloys. J. Mater. Sci. Technol. 2021, 87, 133–142. [Google Scholar] [CrossRef]
  89. Yang, C.; Ren, C.; Jia, Y.; Wang, G.; Li, M.; Lu, W. A machine learning-based alloy design system to facilitate the rational design of high entropy alloys with enhanced hardness. Acta Mater. 2022, 222, 117431. [Google Scholar] [CrossRef]
  90. Jiménez-Luna, J.; Grisoni, F.; Schneider, G. Drug discovery with explainable artificial intelligence. Nat. Mach. Intell. 2020, 2, 573–584. [Google Scholar] [CrossRef]
  91. Preuer, K.; Klambauer, G.; Rippmann, F.; Hochreiter, S.; Unterthiner, T. Interpretable Deep Learning in Drug Discovery; Springer: Cham, Switzerland, 2019. [Google Scholar]
  92. Baum, D.; Baum, K.; Gros, T.P.; Wolf, V. XAI Requirements in Smart Production Processes: A Case Study. In Proceedings of the Explainable Artificial Intelligence; Longo, L., Ed.; Springer: Cham, Switzerland, 2023; pp. 3–24. [Google Scholar]
  93. Perez-Castanos, S.; Prieto-Roig, A.; Monzo, D.; Colomer-Barbera, J. Holistic Production Overview: Using XAI for Production Optimization. In Artificial Intelligence in Manufacturing: Enabling Intelligent, Flexible and Cost-Effective Production Through AI; Springer Nature: Cham, Switzerland, 2024; pp. 423–436. [Google Scholar] [CrossRef]
  94. Ji-Soo, H.; Yong-Min, H.; Seung-Yong, O.; Tae-Ho, K.; Hyeon-Jeong, L.; Sung-Woo, K. Injection Process Yield Improvement Methodology Based on eXplainable Artificial Intelligence (XAI) Algorithm. J. Korean Soc. Qual. Manag. 2023, 51, 55–65. [Google Scholar]
  95. Lee, Y.; Roh, Y. An Expandable Yield Prediction Framework Using Explainable Artificial Intelligence for Semiconductor Manufacturing. Appl. Sci. 2023, 13, 2660. [Google Scholar] [CrossRef]
  96. Kim, S.; Lee, K.; Noh, H.K.; Shin, Y.; Chang, K.B.; Jeong, J.; Baek, S.; Kang, M.; Cho, K.; Kim, D.W.; et al. Automatic Modeling of Logic Device Performance Based on Machine Learning and Explainable AI. In Proceedings of the 2020 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD), Kobe, Japan, 23 September–6 October 2020; pp. 47–50. [Google Scholar] [CrossRef]
  97. Zhai, W.; Shi, X.; Wong, Y.D.; Han, Q.; Chen, L. Explainable AutoML (xAutoML) with adaptive modeling for yield enhancement in semiconductor smart manufacturing. arXiv 2024, arXiv:cs.CE/2403.12381. [Google Scholar]
  98. Singh, N.; Adhikari, D. AI in Inventory Management: Applications, Challenges, and Opportunities. Int. J. Res. Appl. Sci. Eng. Technol. 2023, 11, 2049–2053. [Google Scholar] [CrossRef]
  99. Qaffas, A.A.; Hajkacem, M.A.B.; Ncir, C.E.B.; Nasraoui, O. Interpretable Multi-Criteria ABC Analysis Based on Semi-Supervised Clustering and Explainable Artificial Intelligence. IEEE Access 2023, 11, 43778–43792. [Google Scholar] [CrossRef]
  100. Ntakolia, C.; Kokkotis, C.; Karlsson, P.; Moustakidis, S. An Explainable Machine Learning Model for Material Backorder Prediction in Inventory Management. Sensors 2021, 21, 7926. [Google Scholar] [CrossRef] [PubMed]
  101. Shajalal, M.; Boden, A.; Stevens, G. Explainable product backorder prediction exploiting CNN: Introducing explainable models in businesses. Electron. Mark. 2022, 32, 2107–2122. [Google Scholar] [CrossRef]
  102. Razavi-Far, R.; Kinnaert, M. Incremental Design of a Decision System for Residual Evaluation: A Wind Turbine Application*. In IFAC Proceedings Volumes, Proceedings of the 8th IFAC Symposium on Fault Detection, Supervision and Safety of Technical Processes, Mexico City, Mexico, 29–31 August 2012; Elsevier: Amsterdam, The Netherlands, 2012; Volume 45, pp. 343–348. [Google Scholar] [CrossRef]
  103. Razavi-Far, R.; Zio, E.; Palade, V. Efficient residuals pre-processing for diagnosing multi-class faults in a doubly fed induction generator, under missing data scenarios. Expert Syst. Appl. 2014, 41, 6386–6399. [Google Scholar] [CrossRef]
  104. Farajzadeh-Zanjani, M.; Razavi-Far, R.; Saif, M.; Rueda, L. Efficient feature extraction of vibration signals for diagnosing bearing defects in induction motors. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 4504–4511. [Google Scholar] [CrossRef]
  105. Razavi-Far, R.; Kinnaert, M. A multiple observers and dynamic weighting ensembles scheme for diagnosing new class faults in wind turbines. Control Eng. Pract. 2013, 21, 1165–1177. [Google Scholar] [CrossRef]
  106. Saeki, M.; Ogata, J.; Murakawa, M.; Ogawa, T. Visual explanation of neural network based rotation machinery anomaly detection system. In Proceedings of the 2019 IEEE International Conference on Prognostics and Health Management (ICPHM), San Francisco, CA, USA, 17–20 June 2019; pp. 1–4. [Google Scholar] [CrossRef]
  107. Grezmak, J.; Zhang, J.; Wang, P.; Loparo, K.A.; Gao, R.X. Interpretable Convolutional Neural Network Through Layer-wise Relevance Propagation for Machine Fault Diagnosis. IEEE Sens. J. 2020, 20, 3172–3181. [Google Scholar] [CrossRef]
  108. Kotriwala, A.; Klöpper, B.; Dix, M.; Gopalakrishnan, G.; Ziobro, D.; Potschka, A. XAI for Operations in the Process Industry - Applications, Theses, and Research Directions. In CEUR Workshop Proceedings, Proceedings of the AAAI 2021 Spring Symposium on Combining Machine Learning and Knowledge Engineering (AAAI-MAKE 2021), Palo Alto, CA, USA, 22–24 March 2021; Martin, A., Hinkelmann, K., Fill, H.G., Gerber, A., Lenat, D., Stolle, R., van Harmelen, F., Eds.; CEUR-WS.Org: Örebro, Sweden, 2021; Volume 2846. [Google Scholar]
  109. Nor, K.; Pedapati, S.R.; Muhammad, M. Application of Explainable AI (Xai) For Anomaly Detection and Prognostic of Gas Turbines with Uncertainty Quantification. Preprints 2021, 2021, 2021090034. [Google Scholar] [CrossRef]
  110. Brito, L.C.; Susto, G.A.; Brito, J.N.; Duarte, M.A. An explainable artificial intelligence approach for unsupervised fault detection and diagnosis in rotating machinery. Mech. Syst. Signal Process. 2022, 163, 108105. [Google Scholar] [CrossRef]
  111. Grezmak, J.; Wang, P.; Sun, C.; Gao, R.X. Explainable Convolutional Neural Network for Gearbox Fault Diagnosis. In Procedia CIRP, Proceedings of the 26th CIRP Conference on Life Cycle Engineering (LCE) Purdue University, West Lafayette, IN, USA, 7–9 May 2019; Elsevier: Amsterdam, The Netherlands, 2019; Volume 80, pp. 476–481. [Google Scholar] [CrossRef]
  112. Kim, M.S.; Yun, J.P.; Park, P. An Explainable Neural Network for Fault Diagnosis With a Frequency Activation Map. IEEE Access 2021, 9, 98962–98972. [Google Scholar] [CrossRef]
  113. Srinivasan, S.; Arjunan, P.; Jin, B.; Sangiovanni-Vincentelli, A.L.; Sultan, Z.; Poolla, K. Explainable AI for Chiller Fault-Detection Systems: Gaining Human Trust. Computer 2021, 54, 60–68. [Google Scholar] [CrossRef]
  114. Madhikermi, M.; Malhi, A.K.; Främling, K. Explainable Artificial Intelligence Based Heat Recycler Fault Detection in Air Handling Unit. In Proceedings of the Explainable, Transparent Autonomous Agents and Multi-Agent Systems; Calvaresi, D., Najjar, A., Schumacher, M., Främling, K., Eds.; Springer: Cham, Switzerland, 2019; pp. 110–125. [Google Scholar]
  115. Hong, C.W.; Lee, C.; Lee, K.; Ko, M.S.; Hur, K. Explainable Artificial Intelligence for the Remaining Useful Life Prognosis of the Turbofan Engines. In Proceedings of the 2020 3rd IEEE International Conference on Knowledge Innovation and Invention (ICKII), Kaohsiung, Taiwan, 21–23 August 2020; pp. 144–147. [Google Scholar] [CrossRef]
  116. Abid, F.B.; Sallem, M.; Braham, A. Robust Interpretable Deep Learning for Intelligent Fault Diagnosis of Induction Motors. IEEE Trans. Instrum. Meas. 2020, 69, 3506–3515. [Google Scholar] [CrossRef]
  117. Sun, K.H.; Huh, H.; Tama, B.A.; Lee, S.Y.; Jung, J.H.; Lee, S. Vision-Based Fault Diagnostics Using Explainable Deep Learning With Class Activation Maps. IEEE Access 2020, 8, 129169–129179. [Google Scholar] [CrossRef]
  118. Li, Y.F.; Liu, J. A Bayesian Network Approach for Imbalanced Fault Detection in High Speed Rail Systems. In Proceedings of the 2018 IEEE International Conference on Prognostics and Health Management (ICPHM), Seattle, WA, USA, 11–13 June 2018; pp. 1–7. [Google Scholar] [CrossRef]
  119. Carletti, M.; Masiero, C.; Beghi, A.; Susto, G.A. Explainable Machine Learning in Industry 4.0: Evaluating Feature Importance in Anomaly Detection to Enable Root Cause Analysis. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 21–26. [Google Scholar] [CrossRef]
  120. Szelążek, M.; Bobek, S.; Gonzalez-Pardo, A.; Nalepa, G.J. Towards the Modeling of the Hot Rolling Industrial Process. Preliminary Results. In Proceedings of the Intelligent Data Engineering and Automated Learning—IDEAL 2020; Analide, C., Novais, P., Camacho, D., Yin, H., Eds.; Springer: Cham, Switzerland, 2020; pp. 385–396. [Google Scholar]
  121. Serradilla, O.; Zugasti, E.; Cernuda, C.; Aranburu, A.; de Okariz, J.R.; Zurutuza, U. Interpreting Remaining Useful Life estimations combining Explainable Artificial Intelligence and domain knowledge in industrial machinery. In Proceedings of the 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar] [CrossRef]
  122. Wang, Y.; Wang, P. Explainable machine learning for motor fault diagnosis. In Proceedings of the 2023 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Kuala Lumpur, Malaysia, 22–25 May 2023; pp. 1–6. [Google Scholar] [CrossRef]
  123. Gamal Al-Kaf, H.A.; Lee, K.B. Explainable Machine Learning Method for Open Fault Detection of NPC Inverter Using SHAP and LIME. In Proceedings of the 2023 IEEE Conference on Energy Conversion (CENCON), Kuching, Malaysia, 23–24 October 2023; pp. 14–19. [Google Scholar] [CrossRef]
  124. Gummadi, A.; Napier, J.; Abdallah, M. XAI-IoT: An Explainable AI Framework for Enhancing Anomaly Detection in IoT Systems. IEEE Access 2024, 12, 71024–71054. [Google Scholar] [CrossRef]
  125. Sinha, A.; Das, D. An explainable deep learning approach for detection and isolation of sensor and machine faults in predictive maintenance paradigm. Meas. Sci. Technol. 2023, 35, 015122. [Google Scholar] [CrossRef]
  126. Zhou, D.; Yao, Q.; Wu, H.; Ma, S.; Zhang, H. Fault diagnosis of gas turbine based on partly interpretable convolutional neural networks. Energy 2020, 200, 117467. [Google Scholar] [CrossRef]
  127. Oh, C.; Jeong, J. VODCA: Verification of Diagnosis Using CAM-Based Approach for Explainable Process Monitoring. Sensors 2020, 20, 6858. [Google Scholar] [CrossRef]
  128. Kumar, P.; Hati, A.S. Deep convolutional neural network based on adaptive gradient optimizer for fault detection in SCIM. ISA Trans. 2021, 111, 350–359. [Google Scholar] [CrossRef]
  129. Felsberger, L.; Apollonio, A.; Cartier-Michaud, T.; Müller, A.; Todd, B.; Kranzlmüller, D. Explainable Deep Learning for Fault Prognostics in Complex Systems: A Particle Accelerator Use-Case. In Proceedings of the Machine Learning and Knowledge Extraction; Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E., Eds.; Springer: Cham, Switzerland, 2020; pp. 139–158. [Google Scholar]
  130. Grezmak, J.; Zhang, J.; Wang, P.; Gao, R.X. Multi-stream convolutional neural network-based fault diagnosis for variable frequency drives in sustainable manufacturing systems. In Procedia Manufacturing, Proceedings of the Sustainable Manufacturing—Hand in Hand to Sustainability on Globe: Proceedings of the 17th Global Conference on Sustainable Manufacturing, Shanghai, China, 9–11 October 2020; Elsevier: Amsterdam, The Netherlands, 2020; Volume 43, pp. 511–518. [Google Scholar]
  131. Lee, J.; Noh, I.; Lee, J.; Lee, S.W. Development of an Explainable Fault Diagnosis Framework Based on Sensor Data Imagification: A Case Study of the Robotic Spot-Welding Process. IEEE Trans. Ind. Inform. 2022, 18, 6895–6904. [Google Scholar] [CrossRef]
  132. Yang, D.; Karimi, H.R.; Gelman, L. An explainable intelligence fault diagnosis framework for rotating machinery. Neurocomputing 2023, 541, 126257. [Google Scholar] [CrossRef]
  133. Nie, X.; Xie, G. A novel normalized recurrent neural network for fault diagnosis with noisy labels. J. Intell. Manuf. 2021, 32, 1271–1288. [Google Scholar] [CrossRef]
  134. Gribbestad, M.; Hassan, M.U.; Hameed, I.A.; Sundli, K. Health Monitoring of Air Compressors Using Reconstruction-Based Deep Learning for Anomaly Detection with Increased Transparency. Entropy 2021, 23, 83. [Google Scholar] [CrossRef] [PubMed]
  135. Brusa, E.; Cibrario, L.; Delprete, C.; Di Maggio, L.G. Explainable AI for Machine Fault Diagnosis: Understanding Features’ Contribution in Machine Learning Models for Industrial Condition Monitoring. Appl. Sci. 2023, 13, 2038. [Google Scholar] [CrossRef]
  136. Moosavi, S.; Razavi-Far, R.; Palade, V.; Saif, M. Explainable Artificial Intelligence Approach for Diagnosing Faults in an Induction Furnace. Electronics 2024, 13, 1721. [Google Scholar] [CrossRef]
  137. Amarasinghe, K.; Kenney, K.; Manic, M. Toward Explainable Deep Neural Network Based Anomaly Detection. In Proceedings of the 2018 11th International Conference on Human System Interaction (HSI), Gdansk, Poland, 4–6 July 2018; pp. 311–317. [Google Scholar] [CrossRef]
  138. Yu, J.; Liu, G. Knowledge extraction and insertion to deep belief network for gearbox fault diagnosis. Knowl.-Based Syst. 2020, 197, 105883. [Google Scholar] [CrossRef]
  139. Keleko, A.T.; Kamsu-Foguem, B.; Ngouna, R.H.; Tongne, A. Health condition monitoring of a complex hydraulic system using Deep Neural Network and DeepSHAP explainable XAI. Adv. Eng. Softw. 2023, 175, 103339. [Google Scholar] [CrossRef]
  140. Liu, Y.; Li, Z.; Chen, H. Artificial Intelligence-based Fault Detection and Diagnosis: Towards Application in a Chemical Process. In Proceedings of the 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS), Yibin, China, 22–24 September 2023; pp. 1–6. [Google Scholar] [CrossRef]
  141. Santos, M.R.; Guedes, A.; Sanchez-Gendriz, I. SHapley Additive exPlanations (SHAP) for Efficient Feature Selection in Rolling Bearing Fault Diagnosis. Mach. Learn. Knowl. Extr. 2024, 6, 316–341. [Google Scholar] [CrossRef]
  142. Harinarayan, R.R.A.; Shalinie, S.M. XFDDC: Explainable Fault Detection Diagnosis and Correction framework for chemical process systems. Process Saf. Environ. Prot. 2022, 165, 463–474. [Google Scholar] [CrossRef]
  143. Sinha, A.; Das, D. XAI-LCS: Explainable AI-Based Fault Diagnosis of Low-Cost Sensors. IEEE Sens. Lett. 2023, 7, 1–4. [Google Scholar] [CrossRef]
  144. Meas, M.; Machlev, R.; Kose, A.; Tepljakov, A.; Loo, L.; Levron, Y.; Petlenkov, E.; Belikov, J. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI). Sensors 2022, 22, 6338. [Google Scholar] [CrossRef]
  145. Devkar, P.; Venkatarathnam, G. Enhancing Fault Detection and Diagnosis in AHU Using Explainable AI. In Sustainability in Energy and Buildings 2023; Littlewood, J.R., Jain, L., Howlett, R.J., Eds.; Springer Nature: Singapore, 2024; pp. 131–142. [Google Scholar] [CrossRef]
  146. Hrnjica, B.; Softic, S. Explainable AI in Manufacturing: A Predictive Maintenance Case Study; Springer: Cham, Switzerland, 2020; pp. 66–73. [Google Scholar] [CrossRef]
  147. Paolanti, M.; Romeo, L.; Felicetti, A.; Mancini, A.; Frontoni, E.; Loncarski, J. Machine Learning approach for Predictive Maintenance in Industry 4.0. In Proceedings of the 2018 14th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA), Oulu, Finland, 2–4 July 2018; pp. 1–6. [Google Scholar] [CrossRef]
  148. Langone, R.; Cuzzocrea, A.; Skantzos, N. Interpretable Anomaly Prediction: Predicting anomalous behavior in industry 4.0 settings via regularized logistic regression tools. Data Knowl. Eng. 2020, 130, 101850. [Google Scholar] [CrossRef]
  149. Cummins, L.; Sommers, A.; Ramezani, S.B.; Mittal, S.; Jabour, J.; Seale, M.; Rahimi, S. Explainable Predictive Maintenance: A Survey of Current Methods, Challenges and Opportunities. IEEE Access 2024, 12, 57574–57602. [Google Scholar] [CrossRef]
  150. Tchakoua, P.; Wamkeue, R.; Hasnaoui, F.; Theubou Tameghe, T.A.; Ekemb, G. New trends and future challenges for wind turbines condition monitoring. In Proceedings of the 2013 International Conference on Control, Automation and Information Sciences (ICCAIS), Nha Trang, Vietnam, 25–28 November 2013; pp. 238–245. [Google Scholar] [CrossRef]
  151. Matzka, S. Explainable Artificial Intelligence for Predictive Maintenance Applications. In Proceedings of the 2020 Third International Conference on Artificial Intelligence for Industries (AI4I), Irvine, CA, USA, 21–23 September 2020; pp. 69–74. [Google Scholar] [CrossRef]
  152. Wu, H.; Huang, A.; Sutherland, J.W. Layer-wise relevance propagation for interpreting LSTM-RNN decisions in predictive maintenance. Int. J. Adv. Manuf. Technol. 2022, 118, 963–978. [Google Scholar] [CrossRef]
  153. Mansouri, T.; Vadera, S. Explainable fault prediction using learning fuzzy cognitive maps. Expert Syst. 2023, 40, e13316. [Google Scholar] [CrossRef]
  154. Mansouri, T.; Vadera, S. A Deep Explainable Model for Fault Prediction Using IoT Sensors. IEEE Access 2022, 10, 66933–66942. [Google Scholar] [CrossRef]
  155. Christou, I.T.; Kefalakis, N.; Zalonis, A.; Soldatos, J. Predictive and Explainable Machine Learning for Industrial Internet of Things Applications. In Proceedings of the 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS), Marina del Rey, CA, USA, 25–27 May 2020; pp. 213–218. [Google Scholar] [CrossRef]
  156. Silva, R.L.; Rudek, M.; Szejka, A.L.; Junior, O.C. Machine Vision Systems for Industrial Quality Control Inspections. In Proceedings of the Product Lifecycle Management to Support Industry 4.0; Chiabert, P., Bouras, A., Noël, F., Ríos, J., Eds.; Springer: Cham, Switzerland, 2018; pp. 631–641. [Google Scholar]
  157. Goldman, C.; Baltaxe, M.; Chakraborty, D.; Arinez, J. Explaining Learning Models in Manufacturing Processes. Procedia Comput. Sci. 2021, 180, 259–268. [Google Scholar] [CrossRef]
  158. Lee, M.; Jeon, J.; Lee, H. Explainable AI for domain experts: A post Hoc analysis of deep learning for defect classification of TFT–LCD panels. J. Intell. Manuf. 2022, 33, 1747–1759. [Google Scholar] [CrossRef]
  159. Senoner, J.; Netland, T.; Feuerriegel, S. Using Explainable Artificial Intelligence to Improve Process Quality: Evidence from Semiconductor Manufacturing. Manag. Sci. 2021, 68, 5557–6354. [Google Scholar] [CrossRef]
  160. Kharal, A. Explainable Artificial Intelligence Based Fault Diagnosis and Insight Harvesting for Steel Plates Manufacturing. arXiv 2020, arXiv:cs.AI/2008.04448. [Google Scholar]
  161. Meister, S.; Wermes, M.; Stüve, J.; Groves, R. Investigations on Explainable Artificial Intelligence methods for the deep learning classification of fibre layup defect in the automated composite manufacturing. Compos. Part B Eng. 2021, 224, 109160. [Google Scholar] [CrossRef]
  162. Lavasa, E.; Chadoulos, C.; Siouras, A.; Etxabarri Llana, A.; Rodríguez Del Rey, S.; Dalamagas, T.; Moustakidis, S. Toward Explainable Metrology 4.0: Utilizing Explainable AI to Predict the Pointwise Accuracy of Laser Scanning Devices in Industrial Manufacturing. In Artificial Intelligence in Manufacturing: Enabling Intelligent, Flexible and Cost-Effective Production Through AI; Springer Nature: Cham, Switzerland, 2024; pp. 479–501. [Google Scholar] [CrossRef]
  163. Hwang, C.; Lee, T. E-SFD: Explainable Sensor Fault Detection in the ICS Anomaly Detection System. IEEE Access 2021, 9, 140470–140486. [Google Scholar] [CrossRef]
  164. Makridis, G.; Theodoropoulos, S.; Dardanis, D.; Makridis, I.; Separdani, M.M.; Fatouros, G.; Kyriazis, D.; Koulouris, P. XAI enhancing cyber defence against adversarial attacks in industrial applications. In Proceedings of the 2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS), Genova, Italy, 5–7 December 2022; Volume 5, pp. 1–8. [Google Scholar] [CrossRef]
  165. Bac, T.P.; Ha, D.T.; Tran, K.D.; Tran, K.P. Explainable Artificial Intelligence for Cybersecurity in Smart Manufacturing. In Artificial Intelligence for Smart Manufacturing: Methods, Applications, and Challenges; Tran, K.P., Ed.; Springer International Publishing: Cham, Switzerland, 2023; pp. 199–223. [Google Scholar] [CrossRef]
  166. Sivamohan, S.; Sridhar, S.S. An optimized model for network intrusion detection systems in industry 4.0 using XAI based Bi-LSTM framework. Neural Comput. Appl. 2023, 35, 11459–11475. [Google Scholar] [CrossRef] [PubMed]
  167. Kundu, R.K.; Hoque, K.A. Explainable Predictive Maintenance is Not Enough: Quantifying Trust in Remaining Useful Life Estimation. Annu. Conf. PHM Soc. 2023, 15. [Google Scholar] [CrossRef]
  168. Klamert, V.; Schmid-Kietreiber, M.; Bublin, M. A deep learning approach for real time process monitoring and curling defect detection in Selective Laser Sintering by infrared thermography and convolutional neural networks. In Procedia CIRP, Proceedings of the 12th CIRP Conference on Photonic Technologies [LANE 2022], Furth, Germany, 4–8 September 2022; Elsevier: Amsterdam, The Netherlands, 2022; Volume 111, pp. 317–320. [Google Scholar] [CrossRef]
  169. Hanchate, A.; Bukkapatnam, S.T.; Lee, K.H.; Srivastava, A.; Kumara, S. Explainable AI (XAI)-driven vibration sensing scheme for surface quality monitoring in a smart surface grinding process. J. Manuf. Processes 2023, 99, 184–194. [Google Scholar] [CrossRef]
  170. Javaid, M.; Haleem, A.; Singh, R.P.; Suman, R.; Gonzalez, E.S. Understanding the adoption of Industry 4.0 technologies in improving environmental sustainability. Sustain. Oper. Comput. 2022, 3, 203–217. [Google Scholar] [CrossRef]
  171. Yu, Z.; Gao, H.; Cong, X.; Wu, N.; Song, H.H. A Survey on Cyber–Physical Systems Security. IEEE Internet Things J. 2023, 10, 21670–21686. [Google Scholar] [CrossRef]
  172. Alguliyev, R.; Imamverdiyev, Y.; Sukhostat, L. Cyber-physical systems and their security issues. Comput. Ind. 2018, 100, 212–223. [Google Scholar] [CrossRef]
  173. Farajzadeh-Zanjani, M.; Hallaji, E.; Razavi-Far, R.; Saif, M. Generative-Adversarial Class-Imbalance Learning for Classifying Cyber-Attacks and Faults - A Cyber-Physical Power System. IEEE Trans. Dependable Secur. Comput. 2022, 19, 4068–4081. [Google Scholar] [CrossRef]
  174. Farajzadeh-Zanjani, M.; Hallaji, E.; Razavi-Far, R.; Saif, M. Generative adversarial dimensionality reduction for diagnosing faults and attacks in cyber-physical systems. Neurocomputing 2021, 440, 101–110. [Google Scholar] [CrossRef]
  175. Kim, S.; Park, K.J.; Lu, C. A Survey on Network Security for Cyber–Physical Systems: From Threats to Resilient Design. IEEE Commun. Surv. Tutor. 2022, 24, 1534–1573. [Google Scholar] [CrossRef]
  176. Hoenig, A.; Roy, K.; Acquaah, Y.; Yi, S.; Desai, S. Explainable AI for Cyber-Physical Systems: Issues and Challenges. IEEE Access 2024, 12, 73113–73140. [Google Scholar] [CrossRef]
  177. Le, D.; Vung, P.; Nguyen, H.; Dang, T. Visualization and Explainable Machine Learning for Efficient Manufacturing and System Operations. Smart Sustain. Manuf. Syst. 2019, 3, 20190029. [Google Scholar] [CrossRef]
  178. Wang, M.; Zheng, K.; Yang, Y.; Wang, X. An Explainable Machine Learning Framework for Intrusion Detection Systems. IEEE Access 2020, 8, 73127–73141. [Google Scholar] [CrossRef]
  179. Barnard, P.; Marchetti, N.; DaSilva, L.A. Robust Network Intrusion Detection Through Explainable Artificial Intelligence (XAI). IEEE Netw. Lett. 2022, 4, 167–171. [Google Scholar] [CrossRef]
  180. Houda, Z.A.E.; Brik, B.; Khoukhi, L. “Why Should I Trust Your IDS?”: An Explainable Deep Learning Framework for Intrusion Detection Systems in Internet of Things Networks. IEEE Open J. Commun. Soc. 2022, 3, 1164–1176. [Google Scholar] [CrossRef]
  181. Zebin, T.; Rezvy, S.; Luo, Y. An Explainable AI-Based Intrusion Detection System for DNS Over HTTPS (DoH) Attacks. IEEE Trans. Inf. Forensics Secur. 2022, 17, 2339–2349. [Google Scholar] [CrossRef]
  182. Oseni, A.; Moustafa, N.; Creech, G.; Sohrabi, N.; Strelzoff, A.; Tari, Z.; Linkov, I. An Explainable Deep Learning Framework for Resilient Intrusion Detection in IoT-Enabled Transportation Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 1000–1014. [Google Scholar] [CrossRef]
  183. Gaspar, D.; Silva, P.; Silva, C. Explainable AI for Intrusion Detection Systems: LIME and SHAP Applicability on Multi-Layer Perceptron. IEEE Access 2024, 12, 30164–30175. [Google Scholar] [CrossRef]
  184. Jeong, S.; Lee, S.; Lee, H.; Kim, H.K. X-CANIDS: Signal-Aware Explainable Intrusion Detection System for Controller Area Network-Based In-Vehicle Network. IEEE Trans. Veh. Technol. 2024, 73, 3230–3246. [Google Scholar] [CrossRef]
  185. Arreche, O.; Guntur, T.R.; Roberts, J.W.; Abdallah, M. E-XAI: Evaluating Black-Box Explainable AI Frameworks for Network Intrusion Detection. IEEE Access 2024, 12, 23954–23988. [Google Scholar] [CrossRef]
  186. Shtayat, M.M.; Hasan, M.K.; Sulaiman, R.; Islam, S.; Khan, A.U.R. An Explainable Ensemble Deep Learning Approach for Intrusion Detection in Industrial Internet of Things. IEEE Access 2023, 11, 115047–115061. [Google Scholar] [CrossRef]
  187. Javeed, D.; Gao, T.; Kumar, P.; Jolfaei, A. An Explainable and Resilient Intrusion Detection System for Industry 5.0. IEEE Trans. Consum. Electron. 2024, 70, 1342–1350. [Google Scholar] [CrossRef]
  188. Arisdakessian, S.; Wahab, O.A.; Mourad, A.; Otrok, H.; Guizani, M. A Survey on IoT Intrusion Detection: Federated Learning, Game Theory, Social Psychology, and Explainable AI as Future Directions. IEEE Internet Things J. 2023, 10, 4059–4092. [Google Scholar] [CrossRef]
  189. Neupane, S.; Ables, J.; Anderson, W.; Mittal, S.; Rahimi, S.; Banicescu, I.; Seale, M. Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. IEEE Access 2022, 10, 112392–112415. [Google Scholar] [CrossRef]
  190. Moustafa, N.; Koroniotis, N.; Keshk, M.; Zomaya, A.Y.; Tari, Z. Explainable Intrusion Detection for Cyber Defences in the Internet of Things: Opportunities and Solutions. IEEE Commun. Surv. Tutor. 2023, 25, 1775–1807. [Google Scholar] [CrossRef]
  191. Chen, T.C.T. Explainable Artificial Intelligence (XAI) in Manufacturing. In Explainable Artificial Intelligence (XAI) in Manufacturing: Methodology, Tools, and Applications; Springer International Publishing: Cham, Switzerland, 2023; pp. 1–11. [Google Scholar] [CrossRef]
  192. Islam, M.R.; Ahmed, M.U.; Barua, S.; Begum, S. A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks. Appl. Sci. 2022, 12, 1353. [Google Scholar] [CrossRef]
Figure 1. Popularity of the “Explainable AI” term in the Google search engine taken from Google Trends [5].
Figure 2. Taxonomy of XAI methods [8,13,14,15,16,17,18,19,20,21,22,23,24,25].
Figure 3. Categorization of XAI methods by data type [27,28,29,30,31,32,33,34,35,36,37,38,39,40,41].
Figure 4. Major advancements in XAI techniques since 2011 [8,14,36,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
Figure 5. Use cases of AI in CPS and manufacturing industry.
Table 1. Predictive models used for fault detection and diagnosis.
Models | XAI Methods | References
Bayesian Network | Intrinsic | [118]
Fuzzy Logic System | Intrinsic | [16]
Isolation Forest | SHAP, Local-DIFFI | [22,110,119]
RF | SHAP, ELI5, LIME, CEM | [120,121,122,123,124]
CNN | CAM, Grad-CAM, FAM, SHAP, LRP | [34,35,106,107,111,112,115,117,125,126,127,128,129,130,131,132]
RNN | LRP | [133]
LSTM | SHAP, MAE | [115,134]
Bi-LSTM | SHAP | [115]
Bayesian DL | SHAP | [109]
KNN | SHAP | [29,135]
ANN | LIME | [114,122]
DNN | SHAP, LIME, LRP, KBDBN, CEM | [124,136,137,138,139]
SVM | SHAP, LIME, DIFFI, CEM | [114,122,124,135,140,141]
XGB | LIME, SHAP | [113,125,142,143,144,145]
Table 2. XAI applications in manufacturing industry and ICPSs.
Reference | Year | Use Case | Algorithm | Data Type | XAI Approach | XAI Output
[85] | 2020 | PD | XGB and LGBM | Tabular | SHAP | Feature Importance
[84] | 2022 | PD | Keras-based DNN | Chemical composition | LIME | Feature Importance
[86] | 2023 | PD | NN | Tabular | SHAP | Feature Importance
[87] | 2023 | PD | GPR | Tabular | SHAP | Feature Importance
[96] | 2020 | PC | RF, XGB, Catboost | Tabular | SHAP | Feature Importance
[93] | 2023 | PC | RF | Tabular | LIME | Feature Importance
[94] | 2023 | PC | XGB and LGBM | Tabular | SHAP | Feature Importance
[95] | 2023 | PC | RF | Tabular | SHAP | Feature Importance
[100] | 2021 | InvM | RF, KNN, LGBM, XGB, BB, NN, LR, SVM | Tabular | SHAP | Feature Importance
[101] | 2022 | InvM | CNN | Tabular | SHAP, LIME | Feature Importance
[99] | 2023 | InvM | SS-E-k-means | Tabular | SHAP | Feature Importance
[106] | 2019 | FDD | CNN | Vibration | Grad-CAM | Visual Explanation
[114] | 2019 | FDD | SVM and DNN | Temperature and Speed | LIME | Feature Importance
[111] | 2019 | FDD | DCNN | Vibration | LRP | Visual Explanation
[35] | 2020 | FDD | CNN | Vibration | Grad-CAM | Visual Explanation
[107] | 2020 | FDD | CNN | Vibration | LRP | Visual Explanation
[115] | 2020 | FDD | CNN, LSTM, Bi-LSTM | Temperature, Pressure and Speed | SHAP | Feature Importance
[116] | 2020 | FDD | deep-SincNet | Current | Intrinsic | Temporal and Spectral Presentation
[117] | 2020 | FDD | CNN | Vibration | CAM | Visual Explanation
[110] | 2021 | FDD | KNN, IF, etc. | Vibration | SHAP and Local-DIFFI | Feature Importance
[29] | 2021 | FDD | KNN | Vibration | SHAP | Feature Importance
[34] | 2021 | FDD | CNN | Vibration | Grad-CAM | Visual Explanation
[112] | 2021 | FDD | CNN | Vibration | FAM | Visual Explanation
[109] | 2021 | FDD | DNN | Temperature, Pressure, Speed and Position | SHAP | Feature Importance
[113] | 2021 | FDD | XGB | Temperature, Flow and Power | LIME | Feature Importance
[25] | 2021 | FDD | WKN | Vibration | Intrinsic | Feature Map
[142] | 2022 | FDD | XGB | Tabular | SHAP | Feature Importance
[125] | 2023 | FDD | CNN-XGB | Vibration | SHAP | Feature Importance
[137] | 2018 | CybSec | DNN | Tabular | LRP | Feature Importance
[163] | 2021 | CybSec | Bi-LSTM | Tabular | SHAP | Feature Importance
[20] | 2021 | CybSec | Conv-LSTM | Tabular | LIME | Feature Importance
[164] | 2022 | CybSec | CNN | Image | Grad-CAM, LIME | Visual Explanation
[165] | 2023 | CybSec | LSTM | Tabular | SHAP | Feature Importance
[166] | 2023 | CybSec | Bi-LSTM | Tabular | SHAP, LIME | Feature Importance
[146] | 2020 | PdM | LGBM | Tabular | Intrinsic | Feature Importance
[148] | 2020 | PdM | Regularized Logistic Regression | Temperature, Pressure, and Speed | Intrinsic | Feature Importance
[16] | 2021 | PdM | FLS | Pressure and Current | Intrinsic | Rules
[24] | 2022 | PdM | PCA + DCAE | Tabular | Intrinsic | Feature Importance
[167] | 2023 | PdM | XGB, RF, LR, FFNN | Tabular | SHAP, LIME, Anchor | Rules, Feature Importance
[17] | 2024 | PdM | FLS | Pressure, Vibration, Ultrasonic, and Current | Intrinsic | Rules
[28] | 2024 | PdM | SVM, RF, DT, KNN | Vibration, Current, and Temperature | LIME, SHAP, PDP, ICE | Feature Importance
[157] | 2021 | QA | CNN | Tabular | CAM | Visual Explanation
[158] | 2021 | QA | CNN | Image | LRP and DT | Visualization and Rules
[159] | 2021 | QA | Non-linear Meta Model | Tabular | SHAP | Feature Importance
[160] | 2020 | QA | RF | Tabular | ARM | Rules
[161] | 2021 | QA | CNN | Image | SHAP, Smooth IG, Grad-CAM | Visual Explanation
[168] | 2022 | QA | CNN | Image | Grad-CAM | Visual Explanation
[169] | 2023 | QA | CNN | Image | LIME | Feature Importance
