Human-in-the-Loop XAI for Predictive Maintenance: A Systematic Review of Interactive Systems and Their Effectiveness in Maintenance Decision-Making
Abstract
1. Introduction
- RQ1: How do human-in-the-loop XAI techniques for predictive maintenance maintain interpretability while handling temporal data?
- RQ2: What human-centric metrics are used to evaluate XAI’s effectiveness on maintenance decision-making?
- RQ3: How do XAI systems involve maintenance experts in their design and use, particularly in addressing challenges to sustaining effective human–AI collaboration?
2. Methods
3. Findings of the Literature Review
3.1. Descriptive Analysis
3.2. Thematic Analysis
3.2.1. Model Interpretability in Practice
- Finding 1: Temporal adaptation strategies enhance transparency in PdM workflows.
- Finding 2: Declining reliance on SHAP/LIME in temporal domains highlights the need for specialized XAI methods.
3.2.2. Evolutions and Limitations of XAI Methods
- Finding 3: Inherently interpretable models remain competitive across domains.
- Finding 4: Domain-specific XAI improves adaptation and usability but requires further validation in industrial settings.
3.2.3. Trust Dynamics and Human Reliance on XAI
- Finding 5: Explanation design can both build and erode trust.
- Finding 6: Domain expectations and user roles shape explanation usability.
3.2.4. Collaborative Design and Human–AI Interaction
- Finding 7: Feedback loops improve both model calibration and user understanding.
- Finding 8: Domain-specific co-design increases system adoption.
3.2.5. Factors Influencing the Efficacy of Explanations in Decision-Making
- Finding 9: Explanation quality does not guarantee better outcomes.
- Finding 10: Modality and structure of explanations significantly affect user comprehension.
4. Discussion
4.1. Answering Research Questions
4.1.1. Interpretability in Temporal Data
4.1.2. Human-Centric Metrics for Evaluating XAI Effectiveness
4.1.3. Collaborative Design for Sustaining Human–AI Collaboration
4.1.4. Trust Dynamics in PdM
4.2. Use Cases of Generative AI in HITL-XAI from the Literature
4.3. Illustrative Case: Integrating GenAI and HITL-XAI into PdM
4.4. Theoretical Contributions
4.5. Practical Implications
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
AI | artificial intelligence |
AM | additive manufacturing |
AUC | area under the curve |
CNN | convolutional neural network |
CSV | comma-separated values |
DEN | deep expert network |
EU AI Act | European Union Artificial Intelligence Act |
FFT | fast Fourier transform |
GenAI | generative artificial intelligence |
Grad-CAM | gradient-weighted class activation mapping |
HITL | human-in-the-loop |
IEEE | Institute of Electrical and Electronics Engineers |
LIME | local interpretable model-agnostic explanations |
LLM | large language model |
LSTM | long short-term memory |
LUX | local rule-based explanations |
LVLM | large vision-language model |
ML | machine learning |
NSN | neuro-symbolic nodes |
OODA | observe–orient–decide–act |
PdM | predictive maintenance |
PRISMA | preferred reporting items for systematic reviews and meta-analyses |
RAG | retrieval-augmented generation |
ResNet | residual network |
RF | random forest |
RQs | research questions |
SHAP | SHapley Additive exPlanations |
SLR | systematic literature review |
SME | small and medium-sized enterprise |
t-SNE | t-distributed stochastic neighbor embedding |
US | United States |
WKN | weighted k-nearest neighbors |
XAI | explainable artificial intelligence |
XEdgeAI | explainable edge AI |
XGBoost | extreme gradient boosting |
Appendix A
| Section and Topic | Item Number | Checklist Item | Location Where the Item Is Reported |
|---|---|---|---|
| TITLE | | | |
| Title | 1 | Identify the report as a systematic review. | Title |
| ABSTRACT | | | |
| Abstract | 2 | See the PRISMA 2020 for Abstracts checklist. | |
| INTRODUCTION | | | |
| Rationale | 3 | Describe the rationale for the review in the context of existing knowledge. | 1 |
| Objectives | 4 | Provide an explicit statement of the objective(s) or question(s) the review addresses. | 1 |
| METHODS | | | |
| Eligibility criteria | 5 | Specify the inclusion and exclusion criteria for the review and how studies were grouped for the syntheses. | 2 |
| Information sources | 6 | Specify all databases, registers, websites, organisations, reference lists and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted. | 2 |
| Search strategy | 7 | Present the full search strategies for all databases, registers and websites, including any filters and limits used. | 2 |
| Selection process | 8 | Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process. | 2 |
| Data collection process | 9 | Specify the methods used to collect data from reports, including how many reviewers collected data from each report, whether they worked independently, any processes for obtaining or confirming data from study investigators, and if applicable, details of automation tools used in the process. | 2 |
| Data items | 10a | List and define all outcomes for which data were sought. Specify whether all results that were compatible with each outcome domain in each study were sought (e.g., for all measures, time points, analyses), and if not, the methods used to decide which results to collect. | 3 |
| | 10b | List and define all other variables for which data were sought (e.g., participant and intervention characteristics, funding sources). Describe any assumptions made about any missing or unclear information. | 3 |
| Study risk of bias assessment | 11 | Specify the methods used to assess risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process. | 2 |
| Effect measures | 12 | Specify for each outcome the effect measure(s) (e.g., risk ratio, mean difference) used in the synthesis or presentation of results. | N/A |
| Synthesis methods | 13a | Describe the processes used to decide which studies were eligible for each synthesis (e.g., tabulating the study intervention characteristics and comparing against the planned groups for each synthesis (item #5)). | 2 |
| | 13b | Describe any methods required to prepare the data for presentation or synthesis, such as handling of missing summary statistics, or data conversions. | 2 |
| | 13c | Describe any methods used to tabulate or visually display results of individual studies and syntheses. | 2 |
| | 13d | Describe any methods used to synthesize results and provide a rationale for the choice(s). If meta-analysis was performed, describe the model(s), method(s) to identify the presence and extent of statistical heterogeneity, and software package(s) used. | 2 |
| | 13e | Describe any methods used to explore possible causes of heterogeneity among study results (e.g., subgroup analysis, meta-regression). | 2 |
| | 13f | Describe any sensitivity analyses conducted to assess robustness of the synthesized results. | 2 |
| Reporting bias assessment | 14 | Describe any methods used to assess risk of bias due to missing results in a synthesis (arising from reporting biases). | N/A |
| Certainty assessment | 15 | Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome. | N/A |
| RESULTS | | | |
| Study selection | 16a | Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram. | 2 |
| | 16b | Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded. | N/A |
| Study characteristics | 17 | Cite each included study and present its characteristics. | N/A |
| Risk of bias in studies | 18 | Present assessments of risk of bias for each included study. | N/A |
| Results of individual studies | 19 | For all outcomes, present, for each study: (a) summary statistics for each group (where appropriate) and (b) an effect estimate and its precision (e.g., confidence/credible interval), ideally using structured tables or plots. | 3 |
| Results of syntheses | 20a | For each synthesis, briefly summarise the characteristics and risk of bias among contributing studies. | 3 |
| | 20b | Present results of all statistical syntheses conducted. If meta-analysis was done, present for each the summary estimate and its precision (e.g., confidence/credible interval) and measures of statistical heterogeneity. If comparing groups, describe the direction of the effect. | 3 |
| | 20c | Present results of all investigations of possible causes of heterogeneity among study results. | 3 |
| | 20d | Present results of all sensitivity analyses conducted to assess the robustness of the synthesized results. | 3 |
| Reporting biases | 21 | Present assessments of risk of bias due to missing results (arising from reporting biases) for each synthesis assessed. | N/A |
| Certainty of evidence | 22 | Present assessments of certainty (or confidence) in the body of evidence for each outcome assessed. | 3 |
| DISCUSSION | | | |
| Discussion | 23a | Provide a general interpretation of the results in the context of other evidence. | 4 |
| | 23b | Discuss any limitations of the evidence included in the review. | 5 |
| | 23c | Discuss any limitations of the review processes used. | 5 |
| | 23d | Discuss implications of the results for practice, policy, and future research. | 5 |
| OTHER INFORMATION | | | |
| Registration and protocol | 24a | Provide registration information for the review, including register name and registration number, or state that the review was not registered. | N/A |
| | 24b | Indicate where the review protocol can be accessed, or state that a protocol was not prepared. | N/A |
| | 24c | Describe and explain any amendments to information provided at registration or in the protocol. | N/A |
| Support | 25 | Describe sources of financial or non-financial support for the review, and the role of the funders or sponsors in the review. | 5 |
| Competing interests | 26 | Declare any competing interests of review authors. | 5 |
| Availability of data, code and other materials | 27 | Report which of the following are publicly available and where they can be found: template data collection forms; data extracted from included studies; data used for all analyses; analytic code; any other materials used in the review. | 5 |
| Section and Topic | Item Number | Checklist Item | Reported (Yes/No) |
|---|---|---|---|
| TITLE | | | |
| Title | 1 | Identify the report as a systematic review. | Yes |
| BACKGROUND | | | |
| Objectives | 2 | Provide an explicit statement of the main objective(s) or question(s) the review addresses. | Yes |
| METHODS | | | |
| Eligibility criteria | 3 | Specify the inclusion and exclusion criteria for the review. | Yes |
| Information sources | 4 | Specify the information sources (e.g., databases, registers) used to identify studies and the date when each was last searched. | Yes |
| Risk of bias | 5 | Specify the methods used to assess risk of bias in the included studies. | Yes |
| Synthesis of results | 6 | Specify the methods used to present and synthesise results. | Yes |
| RESULTS | | | |
| Included studies | 7 | Give the total number of included studies and participants and summarise relevant characteristics of studies. | Yes |
| Synthesis of results | 8 | Present results for main outcomes, preferably indicating the number of included studies and participants for each. If meta-analysis was done, report the summary estimate and confidence/credible interval. If comparing groups, indicate the direction of the effect (i.e., which group is favoured). | Yes |
| DISCUSSION | | | |
| Limitations of evidence | 9 | Provide a brief summary of the limitations of the evidence included in the review (e.g., study risk of bias, inconsistency and imprecision). | Yes |
| Interpretation | 10 | Provide a general interpretation of the results and important implications. | Yes |
| OTHER | | | |
| Funding | 11 | Specify the primary source of funding for the review. | No |
| Registration | 12 | Provide the register name and registration number. | No |
References
- Cinar, Z.M.; Abdussalam Nuhu, A.; Zeeshan, Q.; Korhan, O.; Asmael, M.; Safaei, B. Machine Learning in Predictive Maintenance towards Sustainable Smart Manufacturing in Industry 4.0. Sustainability 2020, 12, 8211. [Google Scholar] [CrossRef]
- Rykov, M. The Top 10 Industrial AI Use Cases. IoT Analytics. Available online: https://iot-analytics.com/the-top-10-industrial-ai-use-cases/ (accessed on 9 August 2025).
- Shang, G.; Low, S.P.; Lim, X.Y.V. Prospects, drivers of and barriers to artificial intelligence adoption in project management. Built Environ. Proj. Asset Manag. 2023, 13, 629–645. [Google Scholar] [CrossRef]
- Hermansa, M.; Kozielski, M.; Michalak, M.; Szczyrba, K.; Wróbel, Ł.; Sikora, M. Sensor-Based Predictive Maintenance with Reduction of False Alarms—A Case Study in Heavy Industry. Sensors 2021, 22, 226. [Google Scholar] [CrossRef]
- Liu, D.; Alnegheimish, S.; Zytek, A.; Veeramachaneni, K. MTV: Visual Analytics for Detecting, Investigating, and Annotating Anomalies in Multivariate Time Series. Proc. ACM Hum. Comput. Interact. 2022, 6, 1–30. [Google Scholar] [CrossRef]
- Garouani, M.; Ahmad, A.; Bouneffa, M.; Hamlich, M.; Bourguin, G.; Lewandowski, A. Towards big industrial data mining through explainable automated machine learning. Int. J. Adv. Manuf. Technol. 2022, 120, 1169–1188. [Google Scholar] [CrossRef]
- Martinović, B.; Bijanić, M.; Danilović, D.; Petrović, A.; Delibasić, B. Unveiling Deep Learning Insights: A Specialized Analysis of Sucker Rod Pump Dynamographs, Emphasizing Visualizations and Human Insight. Mathematics 2023, 11, 4782. [Google Scholar] [CrossRef]
- Najar, M.; Wang, H. Establishing operator trust in machine learning for enhanced reliability and safety in nuclear Power Plants. Prog. Nucl. Energy 2024, 173, 105280. [Google Scholar] [CrossRef]
- van Oudenhoven, B.; Van de Calseyde, P.; Basten, R.; Demerouti, E. Predictive maintenance for industry 5.0: Behavioural inquiries from a work system perspective. Int. J. Prod. Res. 2022, 61, 7846–7865. [Google Scholar] [CrossRef]
- Ingemarsdotter, E.; Kambanou, M.L.; Jamsin, E.; Sakao, T.; Balkenende, R. Challenges and solutions in condition-based maintenance implementation—A multiple case study. J. Clean. Prod. 2021, 296, 126420. [Google Scholar] [CrossRef]
- Liao, Q.V.; Varshney, K.R. Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. April 2022. Available online: http://arxiv.org/abs/2110.10790 (accessed on 5 May 2025).
- Moosavi, S.; Razavi-Far, R.; Palade, V.; Saif, M. Explainable Artificial Intelligence Approach for Diagnosing Faults in an Induction Furnace. Electronics 2024, 13, 1721. [Google Scholar] [CrossRef]
- Bansal, G.; Wu, T.; Zhou, J.; Fok, R.; Nushi, B.; Kamar, E.; Ribeiro, M.T.; Weld, D. Does the whole exceed its parts? The effect of ai explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Virtual, 8–13 May 2021; pp. 1–16. [Google Scholar] [CrossRef]
- Marques-Silva, J.; Ignatiev, A. Delivering Trustworthy AI through Formal XAI. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; AAAI Press: Palo Alto, CA, USA, 2022; Volume 36, pp. 12342–12350. [Google Scholar] [CrossRef]
- Ali, S.; Abuhmed, T.; El-Sappagh, S.; Muhammad, K.; Alonso-Moral, J.M.; Confalonieri, R.; Guidotti, R.; Del Ser, J.; Díaz-Rodríguez, N.; Herrera, F. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Inf. Fusion 2023, 99, 101805. [Google Scholar] [CrossRef]
- Sadeghi, Z.; Alizadehsani, R.; Cifci, M.A.; Kausar, S.; Rehman, R.; Mahanta, P.; Bora, P.K.; Almasri, A.; Alkhawaldeh, R.S.; Hussain, S.; et al. A review of Explainable Artificial Intelligence in healthcare. Comput. Electr. Eng. 2024, 118, 109370. [Google Scholar] [CrossRef]
- Sovrano, F.; Sapienza, S.; Palmirani, M.; Vitali, F. Metrics, Explainability and the European AI Act Proposal. J 2022, 5, 126–138. [Google Scholar] [CrossRef]
- Herm, L.-V.; Steinbach, T.; Wanner, J.; Janiesch, C. A nascent design theory for explainable intelligent systems. Electron. Mark. 2022, 32, 2185–2205. [Google Scholar] [CrossRef]
- Kotsiopoulos, T.; Papakostas, G.; Vafeiadis, T.; Dimitriadis, V.; Nizamis, A.; Bolzoni, A.; Bellinati, D.; Ioannidis, D.; Votis, K.; Tzovaras, D.; et al. Revolutionizing defect recognition in hard metal industry through AI explainability, human-in-the-loop approaches and cognitive mechanisms. Expert Syst. Appl. 2024, 255, 124839. [Google Scholar] [CrossRef]
- Zacharias, J.; von Zahn, M.; Chen, J.; Hinz, O. Designing a feature selection method based on explainable artificial intelligence. Electron. Mark. 2022, 32, 2159–2184. [Google Scholar] [CrossRef]
- Wanner, J.; Herm, L.-V.; Heinrich, K.; Janiesch, C. A social evaluation of the perceived goodness of explainability in machine learning. J. Bus. Anal. 2022, 5, 29–50. [Google Scholar] [CrossRef]
- He, G.; Balayn, A.; Buijsman, S.; Yang, J.; Gadiraju, U. Opening the Analogical Portal to Explainability: Can Analogies Help Laypeople in AI-assisted Decision Making? J. Artif. Intell. Res. 2024, 81, 117–162. [Google Scholar] [CrossRef]
- Ehsan, U.; Wintersberger, P.; Liao, Q.V.; Watkins, E.A.; Manger, C.; Daumé III, H.; Riener, A.; Riedl, M.O. Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-Box of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–7. [Google Scholar]
- Breque, M.; De Nul, L.; Petridis, A. Industry 5.0—Towards a Sustainable, Human-Centric and Resilient European Industry; Publications Office of the European Union: Luxembourg, 2021. [Google Scholar] [CrossRef]
- Pan, Y.; Stark, R. An interpretable machine learning approach for engineering change management decision support in automotive industry. Comput. Ind. 2022, 138, 103633. [Google Scholar] [CrossRef]
- Sauer, P.C.; Seuring, S. How to conduct systematic literature reviews in management research: A guide in 6 steps and 14 decisions. Rev. Manag. Sci. 2023, 17, 1899–1933. [Google Scholar] [CrossRef]
- Angelov, P.P.; Soares, E.A.; Jiang, R.; Arnold, N.I.; Atkinson, P.M. Explainable artificial intelligence: An analytical review. WIREs Data Min. Knowl. Discov. 2021, 11, e1424. [Google Scholar] [CrossRef]
- Moosavi, S.; Farajzadeh-Zanjani, M.; Razavi-Far, R.; Palade, V.; Saif, M. Explainable AI in Manufacturing and Industrial Cyber–Physical Systems: A Survey. Electronics 2024, 13, 3497. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
- Cheng, X.; Chaw, J.K.; Goh, K.M.; Ting, T.T.; Sahrani, S.; Ahmad, M.N.; Kadir, R.A.; Ang, M.C. Systematic Literature Review on Visual Analytics of Predictive Maintenance in the Manufacturing Industry. Sensors 2022, 22, 6321. [Google Scholar] [CrossRef]
- Dalkin, S.; Forster, N.; Hodgson, P.; Lhussier, M.; Carr, S.M. Using computer assisted qualitative data analysis software (CAQDAS; NVivo) to assist in the complex process of realist theory generation, refinement and testing. Int. J. Soc. Res. Methodol. 2021, 24, 123–134. [Google Scholar] [CrossRef]
- Vaismoradi, M.; Turunen, H.; Bondas, T. Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nurs. Health Sci. 2013, 15, 398–405. [Google Scholar] [CrossRef]
- Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
- Brito, L.C.; Susto, G.A.; Brito, J.N.; Duarte, M.A.V. Fault Diagnosis using eXplainable AI: A transfer learning-based approach for rotating machinery exploiting augmented synthetic data. Expert Syst. Appl. 2023, 232, 120860. [Google Scholar] [CrossRef]
- Lim, S.; Kim, J.; Lee, T. Shapelet-Based Sensor Fault Detection and Human-Centered Explanations in Industrial Control System. IEEE Access 2023, 11, 138033–138051. [Google Scholar] [CrossRef]
- Lundström, A.; O'Nils, M.; Qureshi, F.Z. Contextual Knowledge-Informed Deep Domain Generalization for Bearing Fault Diagnosis. IEEE Access 2024, 12, 196842–196854. [Google Scholar] [CrossRef]
- Lughofer, E. Evolving multi-user fuzzy classifier systems integrating human uncertainty and expert knowledge. Inf. Sci. 2022, 596, 30–52. [Google Scholar] [CrossRef]
- Spangler, R.M.; Raeisinezhad, M.; Cole, D.G. Explainable, Deep Reinforcement Learning–Based Decision Making for Operations and Maintenance. Nucl. Technol. 2024, 210, 2331–2345. [Google Scholar] [CrossRef]
- Zabaryłło, M.; Barszcz, T. Proposal of Multidimensional Data Driven Decomposition Method for Fault Identification of Large Turbomachinery. Energies 2022, 15, 3651. [Google Scholar] [CrossRef]
- Zeng, C.; Zhao, G.; Xie, J.; Huang, J.; Wang, Y. An explainable artificial intelligence approach for mud pumping prediction in railway track based on GIS information and in-service train monitoring data. Constr. Build. Mater. 2023, 401, 132716. [Google Scholar] [CrossRef]
- Krupp, L.; Wiede, C.; Friedhoff, J.; Grabmaier, A. Explainable Remaining Tool Life Prediction for Individualized Production Using Automated Machine Learning. Sensors 2023, 23, 8523. [Google Scholar] [CrossRef]
- Orošnjak, M.; Beker, I.; Brkljač, N.; Vrhovac, V. Predictors of Successful Maintenance Practices in Companies Using Fluid Power Systems: A Model-Agnostic Interpretation. Appl. Sci. 2024, 14, 5921. [Google Scholar] [CrossRef]
- Usuga-Cadavid, J.P.; Lamouri, S.; Grabot, B.; Fortin, A. Using deep learning to value free-form text data for predictive maintenance. Int. J. Prod. Res. 2022, 60, 4548–4575. [Google Scholar] [CrossRef]
- Ieva, S.; Loconte, D.; Loseto, G.; Ruta, M.; Scioscia, F.; Marche, D.; Notarnicola, M. A Retrieval-Augmented Generation Approach for Data-Driven Energy Infrastructure Digital Twins. Smart Cities 2024, 7, 3095–3120. [Google Scholar] [CrossRef]
- Rajaoarisoa, L.; Randrianandraina, R.; Nalepa, G.J.; Gama, J. Decision-making systems improvement based on explainable artificial intelligence approaches for predictive maintenance. Eng. Appl. Artif. Intell. 2025, 139, 109601. [Google Scholar] [CrossRef]
- Steenwinckel, B.; De Paepe, D.; Hautte, S.V.; Heyvaert, P.; Bentefrit, M.; Moens, P.; Dimou, A.; Bossche, B.V.D.; De Turck, F.; Van Hoecke, S.; et al. FLAGS: A methodology for adaptive anomaly detection and root cause analysis on sensor data streams by fusing expert knowledge with machine learning. Futur. Gener. Comput. Syst. 2021, 116, 30–48. [Google Scholar] [CrossRef]
- Wanner, J.; Herm, L.-V.; Heinrich, K.; Janiesch, C. The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study. Electron. Mark. 2022, 32, 2079–2102. [Google Scholar] [CrossRef]
- Gitzel, R.; Hoffmann, M.W.; Heiden, P.Z.; Skolik, A.; Kaltenpoth, S.; Müller, O.; Kanak, C.; Kandiah, K.; Stroh, M.-F.; Boos, W.; et al. Toward Cognitive Assistance and Prognosis Systems in Power Distribution Grids—Open Issues, Suitable Technologies, and Implementation Concepts. IEEE Access 2024, 12, 107927–107943. [Google Scholar] [CrossRef]
- Simard, S.R.; Gamache, M.; Doyon-Poulin, P. Development and Usability Evaluation of VulcanH, a CMMS Prototype for Preventive and Predictive Maintenance of Mobile Mining Equipment. Mining 2024, 4, 326–351. [Google Scholar] [CrossRef]
- Agostinho, C.; Dikopoulou, Z.; Lavasa, E.; Perakis, K.; Pitsios, S.; Branco, R.; Reji, S.; Hetterich, J.; Biliri, E.; Lampathaki, F.; et al. Explainability as the key ingredient for AI adoption in Industry 5.0 settings. Front. Artif. Intell. 2023, 6, 1264372. [Google Scholar] [CrossRef]
- Gentile, D.; Donmez, B.; Jamieson, G.A. Human performance consequences of normative and contrastive explanations: An experiment in machine learning for reliability maintenance. Artif. Intell. 2023, 321, 103945. [Google Scholar] [CrossRef]
- Beden, S.; Lakshmanan, K.; Giannetti, C.; Beckmann, A. Steelmaking Predictive Analytics Based on Random Forest and Semantic Reasoning. Appl. Sci. 2023, 13, 12778. [Google Scholar] [CrossRef]
- Galanti, R.; de Leoni, M.; Monaro, M.; Navarin, N.; Marazzi, A.; Di Stasi, B.; Maldera, S. An explainable decision support system for predictive process analytics. Eng. Appl. Artif. Intell. 2023, 120, 105904. [Google Scholar] [CrossRef]
- Jain, P.; Farzan, R.; Lee, A.J. Co-Designing with Users the Explanations for a Proactive Auto-Response Messaging Agent. Proc. ACM Hum. Comput. Interact. 2023, 7, 1–23. [Google Scholar] [CrossRef]
- Siyaev, A.; Valiev, D.; Jo, G.-S. Interaction with Industrial Digital Twin Using Neuro-Symbolic Reasoning. Sensors 2023, 23, 1729. [Google Scholar] [CrossRef] [PubMed]
- Choudhary, A.; Vuković, M.; Mutlu, B.; Haslgrübler, M.; Kern, R. Interpretability of Causal Discovery in Tracking Deterioration in a Highly Dynamic Process. Sensors 2024, 24, 3728. [Google Scholar] [CrossRef] [PubMed]
- Souza, P.V.d.C.; Lughofer, E. EFNC-Exp: An evolving fuzzy neural classifier integrating expert rules and uncertainty. Fuzzy Sets Syst. 2023, 466, 108438. [Google Scholar] [CrossRef]
- Nguyen, A.; Foerstel, S.; Kittler, T.; Kurzyukov, A.; Schwinn, L.; Zanca, D.; Hipp, T.; Da Jun, S.; Schrapp, M.; Rothgang, E.; et al. System Design for a Data-Driven and Explainable Customer Sentiment Monitor Using IoT and Enterprise Data. IEEE Access 2021, 9, 117140–117152. [Google Scholar] [CrossRef]
- Nadim, K.; Ragab, A.; Ouali, M.-S. Data-driven dynamic causality analysis of industrial systems using interpretable machine learning and process mining. J. Intell. Manuf. 2023, 34, 57–83. [Google Scholar] [CrossRef]
- Senjoba, L.; Ikeda, H.; Toriya, H.; Adachi, T.; Kawamura, Y. Enhancing Interpretability in Drill Bit Wear Analysis through Explainable Artificial Intelligence: A Grad-CAM Approach. Appl. Sci. 2024, 14, 3621. [Google Scholar] [CrossRef]
- Li, Q.; Qin, L.; Xu, H.; Lin, Q.; Qin, Z.; Chu, F. Transparent information fusion network: An explainable network for multi-source bearing fault diagnosis via self-organized neural-symbolic nodes. Adv. Eng. Informatics 2025, 65, 103156. [Google Scholar] [CrossRef]
- Tran, T.-A.; Ruppert, T.; Abonyi, J. The Use of eXplainable Artificial Intelligence and Machine Learning Operation Principles to Support the Continuous Development of Machine Learning-Based Solutions in Fault Detection and Identification. Computers 2024, 13, 252. [Google Scholar] [CrossRef]
- Dintén, R.; Zorrilla, M. Design, Building and Deployment of Smart Applications for Anomaly Detection and Failure Prediction in Industrial Use Cases. Information 2024, 15, 557. [Google Scholar] [CrossRef]
- Bhakte, A.; Chakane, M.; Srinivasan, R. Alarm-based explanations of process monitoring results from deep neural networks. Comput. Chem. Eng. 2023, 179, 108442. [Google Scholar] [CrossRef]
- Kabir, S.; Hossain, M.S.; Andersson, K. An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings. Energies 2024, 17, 1797. [Google Scholar] [CrossRef]
- Zou, D.; Zhu, Y.; Xu, S.; Li, Z.; Jin, H.; Ye, H. Interpreting Deep Learning-based Vulnerability Detector Predictions Based on Heuristic Searching. ACM Trans. Softw. Eng. Methodol. 2021, 30, 1–31. [Google Scholar] [CrossRef]
- Kisten, M.; Ezugwu, A.E.-S.; Olusanya, M.O. Explainable Artificial Intelligence Model for Predictive Maintenance in Smart Agricultural Facilities. IEEE Access 2024, 12, 24348–24367. [Google Scholar] [CrossRef]
- Li, Q.; Liu, Y.; Sun, S.; Qin, Z.; Chu, F. Deep expert network: A unified method toward knowledge-informed fault diagnosis via fully interpretable neuro-symbolic AI. J. Manuf. Syst. 2024, 77, 652–661. [Google Scholar] [CrossRef]
- Martakis, P.; Movsessian, A.; Reuland, Y.; Pai, S.G.S.; Quqa, S.; Cava, D.G.; Tcherniak, D.; Chatzi, E. A semi-supervised interpretable machine learning framework for sensor fault detection. Smart Struct. Syst. 2022, 29, 251–266. [Google Scholar] [CrossRef]
- Wang, Z.; Zhou, Z.; Xu, W.; Sun, C.; Yan, R. Physics informed neural networks for fault severity identification of axial piston pumps. J. Manuf. Syst. 2023, 71, 421–437. [Google Scholar] [CrossRef]
- Chew, M.Y.L.; Yan, K.; Shao, H. Enhancing Interpretability of Data-Driven Fault Detection and Diagnosis Methodology with Maintainability Rules in Smart Building Management. J. Sensors 2022, 2022, 5975816. [Google Scholar] [CrossRef]
- Kuk, E.; Bobek, S.; Nalepa, G.J. Explainable proactive control of industrial processes. J. Comput. Sci. 2024, 81, 102329. [Google Scholar] [CrossRef]
- Sobrie, L.; Verschelde, M.; Roets, B. Explainable real-time predictive analytics on employee workload in digital railway control rooms. Eur. J. Oper. Res. 2024, 317, 437–448. [Google Scholar] [CrossRef]
- Wahlström, M.; Tammentie, B.; Salonen, T.-T.; Karvonen, A. AI and the transformation of industrial work: Hybrid intelligence vs double-black box effect. Appl. Ergon. 2024, 118, 104271. [Google Scholar] [CrossRef] [PubMed]
- Dang, J.-F.; Chen, T.-L.; Huang, H.-Y. The human-centric framework integrating knowledge distillation architecture with fine-tuning mechanism for equipment health monitoring. Adv. Eng. Informatics 2025, 65, 103167. [Google Scholar] [CrossRef]
- Gómez-Carmona, O.; Casado-Mansilla, D.; López-De-Ipiña, D.; García-Zubia, J. Human-in-the-loop machine learning: Reconceptualizing the role of the user in interactive approaches. Internet Things 2024, 25, 101048. [Google Scholar] [CrossRef]
- Nguyen, H.T.T.; Nguyen, L.P.T.; Cao, H. XEdgeAI: A human-centered industrial inspection framework with data-centric Explainable Edge AI approach. Inf. Fusion 2025, 116, 102782. [Google Scholar] [CrossRef]
- Lughofer, E. Evolving multi-label fuzzy classifier with advanced robustness respecting human uncertainty. Knowl. Based Syst. 2022, 255, 109717. [Google Scholar] [CrossRef]
- Abrokwah-Larbi, K. The role of IoT and XAI convergence in the prediction, explanation, and decision of customer perceived value (CPV) in SMEs: A theoretical framework and research proposition perspective. Discov. Internet Things 2025, 5, 4. [Google Scholar] [CrossRef]
- Fok, R.; Weld, D.S. In search of verifiability: Explanations rarely enable complementary performance in AI-advised decision making. AI Mag. 2024, 45, 317–332. [Google Scholar] [CrossRef]
- Diehl, M.; Ramirez-Amaro, K. A causal-based approach to explain, predict and prevent failures in robotic tasks. Robot. Auton. Syst. 2023, 162, 104376. [Google Scholar] [CrossRef]
- Shin, W.; Han, J.; Rhee, W. AI-assistance for predictive maintenance of renewable energy systems. Energy 2021, 221, 119775. [Google Scholar] [CrossRef]
- Nasiri, S.; Khosravani, M.R. Machine learning in predicting mechanical behavior of additively manufactured parts. J. Mater. Res. Technol. 2021, 14, 1137–1153. [Google Scholar] [CrossRef]
- Ucar, A.; Karakose, M.; Kırımça, N. Artificial Intelligence for Predictive Maintenance Applications: Key Components, Trustworthiness, and Future Trends. Appl. Sci. 2024, 14, 898. [Google Scholar] [CrossRef]
| Study Selection Parameters | Title and Abstract Screening | Full-Text Assessment |
|---|---|---|
| Databases: Scopus, ProQuest, EBSCO; studies contain the following search strings in the title, abstract, and keywords: (“predictive maintenance” OR “PdM” OR “condition-based maintenance” OR “condition monitoring” OR “failure prediction”) AND (“explainable AI” OR “XAI” OR “explainable artificial intelligence” OR “AI-assisted decision making” OR “AI-advised decision making”); document type: academic journal article; publication year: 2019–2025; language: English | Addresses the research questions; indicates the use of explainable or interpretable models; involves human-in-the-loop (HITL) or interactive AI systems; is not a secondary research article | Full text available for review; presents a human-centric or HITL component; relevant to predictive maintenance or to related applied contexts (the latter only if directly linked to HITL-XAI systems) |
| Group | Themes | Codes |
|---|---|---|
| XAI techniques | XAI methodologies; temporal data adaptation; interpretability design | Model architecture; XAI techniques; dataset type; domain application; proposed XAI method; adaptation strategy; explanation platform; design type |
| Evaluation | Technical performance; human-centric impact | Technical metric; computational efficiency; benchmark comparison; evaluation type; user satisfaction; decision accuracy; empirical evidence or outcome |
| HITL integration | Collaboration dynamic; interaction design; challenges and findings | HITL methods; user roles; task allocation; workforce skepticism; real-time collaboration features; HITL case studies; HITL challenges |
| Theme | Finding |
|---|---|
| Model Interpretability in Practice | 1. Temporal adaptation strategies improve transparency in PdM workflows. 2. The declining use of SHAP and LIME in temporal contexts underscores the need for specialized time-aware XAI methods. |
| Evolutions and Limitations of XAI Methods | 3. Inherently interpretable models remain competitive across domains. 4. Domain-specific XAI techniques enhance usability but require further validation in industrial environments. |
| Trust Dynamics and Human Reliance on XAI | 5. Explanation design can both strengthen and undermine user trust. 6. Trust and usability are influenced by domain norms and user roles. |
| Collaborative Design and Human–AI Interaction | 7. Feedback loops enhance both model calibration and user understanding. 8. Domain-specific co-design approaches increase system adoption and relevance. |
| Factors Influencing the Efficacy of Explanations in Decision-Making | 9. High-quality explanations do not always lead to better decisions. 10. The modality and structure of explanations significantly affect user comprehension. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).