Explainability in AI and Machine Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 November 2024 | Viewed by 8404

Special Issue Editors


Guest Editor
Department of Computer Engineering and Informatics, University of Patras, 26504 Patras, Greece
Interests: artificial intelligence; knowledge representation; intelligent systems; intelligent e-learning; sentiment analysis

Guest Editor
Department of Computer Engineering & Informatics, University of Patras, 26504 Rio, Greece
Interests: artificial intelligence; learning technologies; machine learning; human–computer interaction; social media; affective computing; sentiment analysis

Special Issue Information

Dear Colleagues,

Explainable Artificial Intelligence (XAI) broadly concerns the problem of AI systems communicating explanations of their decisions to human users. Explanation has been of natural interest in "traditional" AI systems and models (e.g., knowledge representation and reasoning systems, planning systems), whose "internal" decision-making process is largely transparent (white-box models) and therefore mostly interpretable, although not necessarily explainable. Recently, however, due to the development of many successful models, explainability has become a particular concern of the machine learning (ML) community. Although some ML models are interpretable, most act as black boxes, and in many applications (e.g., medicine, healthcare, education, automated driving) practitioners want to understand a model's decision making in order to trust it when it is used in reality.

XAI has thus become an active subfield of machine learning that aims to increase the transparency of ML models. Apart from increasing trust and confidence, explainability can also provide further insight into the model itself and into the problem being solved.
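To make the idea of post-hoc explainability concrete, the sketch below estimates permutation feature importance for a black-box classifier: shuffling an informative feature should degrade held-out performance, revealing which inputs the model relies on. The library (scikit-learn) and the demonstration dataset are illustrative assumptions, not requirements of the special issue.

```python
# Minimal sketch: post-hoc explanation of a black-box model via permutation
# feature importance (scikit-learn and the demo dataset are assumptions).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black-box") model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops;
# large drops indicate features the model's decisions depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.4f}")
```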

Deep Neural Networks (DNNs) are the ML models behind many of the field's major recent advances. However, a clear understanding of their internal decision making is lacking, and interpreting the internal mechanisms of DNNs has become a topic of great interest. Symbolic methods could be used for network interpretation, making the inference patterns inside DNNs explicit and explaining the decisions they make. Alternatively, re-designing DNNs in an interpretable or explainable way could be a solution.
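As one hedged example of such interpretation techniques, the sketch below computes input-gradient saliency for a toy network: the gradient of the predicted-class score with respect to the input indicates which features most influence the decision. The framework (PyTorch), the random data, and the toy architecture are assumptions made purely for illustration.

```python
# Minimal sketch: input-gradient saliency for a toy feed-forward network
# (PyTorch, random data, and the architecture are illustrative assumptions).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(1, 10, requires_grad=True)      # a single input sample
logits = model(x)
predicted_class = int(logits.argmax(dim=1))

# Back-propagate the predicted-class score to the input; the gradient's
# magnitude acts as a per-feature saliency score.
logits[0, predicted_class].backward()
saliency = x.grad.abs().squeeze()
print("Most influential input features:", torch.topk(saliency, k=3).indices.tolist())
```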

Natural language (NL) techniques, such as NL Generation (NLG) and NL Processing (NLP), can help in providing comprehensible explanations of automated decisions to human users of AI systems.
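A simple way to see how NLG can surface model decisions is template-based verbalization of feature attributions, sketched below; the function name, attribution values, and feature names are hypothetical and serve only to illustrate the idea.

```python
# Minimal sketch: template-based NLG that verbalizes (hypothetical) feature
# attributions as a short natural-language explanation.
def verbalize_explanation(prediction, attributions, top_k=2):
    """Turn signed feature attributions into a human-readable sentence."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    clauses = [
        f"{name} {'raised' if value > 0 else 'lowered'} the score by {abs(value):.2f}"
        for name, value in ranked
    ]
    return f"The model predicted '{prediction}' mainly because " + " and ".join(clauses) + "."

print(verbalize_explanation("loan approved", {"income": 0.42, "debt_ratio": -0.17, "age": 0.03}))
# The model predicted 'loan approved' mainly because income raised the score
# by 0.42 and debt_ratio lowered the score by 0.17.
```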

Topics of interest include, but are not limited to, the following:

  • Applications of XAI systems;
  • Evaluation of XAI approaches;
  • Explainable Agents;
  • Explaining Black-box Models;
  • Explaining Logical Formulas;
  • Explainable Machine Learning;
  • Explainable Planning;
  • Interpretable Machine Learning;
  • Metrics for Explainability Evaluation;
  • Models for Explainable Recommendations;
  • Natural Language Generation for Explainable AI;
  • Self-explanatory Decision-Support Systems;
  • Verbalizing Knowledge Bases.

Prof. Dr. Ioannis Hatzilygeroudis
Prof. Dr. Vasile Palade
Dr. Isidoros Perikos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research


13 pages, 1421 KiB  
Article
Ethical ChatGPT: Concerns, Challenges, and Commandments
by Jianlong Zhou, Heimo Müller, Andreas Holzinger and Fang Chen
Electronics 2024, 13(17), 3417; https://doi.org/10.3390/electronics13173417 - 28 Aug 2024
Cited by 1 | Viewed by 2577
Abstract
Large language models, e.g., Chat Generative Pre-Trained Transformer (also known as ChatGPT), are currently contributing enormously to making artificial intelligence even more popular, especially among the general population. However, such chatbot models were developed as tools to support natural language communication between humans. Problematically, it is very much a “statistical correlation machine” (correlation instead of causality), and there are indeed ethical concerns associated with the use of AI language models including ChatGPT, such as bias, privacy, and abuse. This paper highlights specific ethical concerns about ChatGPT and articulates key challenges when ChatGPT is used in various applications. Practical recommendations for different stakeholders of ChatGPT are also proposed that can serve as checklist guidelines for those applying ChatGPT in their applications. These best practice examples are expected to motivate the ethical use of ChatGPT. Full article

17 pages, 836 KiB  
Article
Explainable Artificial Intelligence Approach for Diagnosing Faults in an Induction Furnace
by Sajad Moosavi, Roozbeh Razavi-Far, Vasile Palade and Mehrdad Saif
Electronics 2024, 13(9), 1721; https://doi.org/10.3390/electronics13091721 - 29 Apr 2024
Cited by 1 | Viewed by 1265
Abstract
For over a century, induction furnaces have been used in the core of foundries for metal melting and heating. They provide high melting/heating rates with optimal efficiency. The occurrence of faults not only imposes safety risks but also reduces productivity due to unscheduled shutdowns. The problem of diagnosing faults in induction furnaces has not yet been studied, and this work is the first to propose a data-driven framework for diagnosing faults in this application. This paper presents a deep neural network framework for diagnosing electrical faults by measuring real-time electrical parameters at the supply side. Experimental and sensory measurements are collected from multiple energy analyzer devices installed in the foundry. Next, a semi-supervised learning approach, known as the local outlier factor, has been used to discriminate normal and faulty samples from each other and label the data samples. Then, a deep neural network is trained with the collected labeled samples. The performance of the developed model is compared with several state-of-the-art techniques in terms of various performance metrics. The results demonstrate the superior performance of the selected deep neural network model over other classifiers, with an average F-measure of 0.9187. Due to the black box nature of the constructed neural network, the model predictions are interpreted by Shapley additive explanations and local interpretable model-agnostic explanations. The interpretability analysis reveals that classified faults are closely linked to variations in odd voltage/current harmonics of order 3, 11, 13, and 17, highlighting the critical impact of these parameters on the model’s prediction. Full article
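The pipeline described in this abstract (outlier-based labelling, supervised training, post-hoc attribution) can be sketched as follows; this is not the authors' code, and the synthetic data, the random-forest stand-in for their deep network, and all hyperparameters are assumptions for illustration only.

```python
# Minimal sketch of the described pipeline: LOF labelling -> supervised
# classifier -> SHAP attribution (synthetic data and model choice are
# assumptions; this is not the paper's implementation).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # stand-in for supply-side electrical measurements
X[:25] += 4.0                        # inject a small cluster of anomalous samples

# Step 1: label samples with the local outlier factor (-1 = outlier, +1 = inlier).
labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)
y = (labels == -1).astype(int)       # 1 = faulty, 0 = normal

# Step 2: train a supervised classifier on the derived labels.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Step 3: attribute the classifier's predictions with SHAP values.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X[:5])
print("SHAP attribution array shape:", np.shape(shap_values))
```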

30 pages, 4185 KiB  
Article
Intelligent Decision Support for Energy Management: A Methodology for Tailored Explainability of Artificial Intelligence Analytics
by Dimitrios P. Panagoulias, Elissaios Sarmas, Vangelis Marinakis, Maria Virvou, George A. Tsihrintzis and Haris Doukas
Electronics 2023, 12(21), 4430; https://doi.org/10.3390/electronics12214430 - 27 Oct 2023
Cited by 12 | Viewed by 1773
Abstract
This paper presents a novel development methodology for artificial intelligence (AI) analytics in energy management that focuses on tailored explainability to overcome the “black box” issue associated with AI analytics. Our approach addresses the fact that any given analytic service is to be used by different stakeholders, with different backgrounds, preferences, abilities, skills, and goals. Our methodology is aligned with the explainable artificial intelligence (XAI) paradigm and aims to enhance the interpretability of AI-empowered decision support systems (DSSs). Specifically, a clustering-based approach is adopted to customize the depth of explainability based on the specific needs of different user groups. This approach improves the accuracy and effectiveness of energy management analytics while promoting transparency and trust in the decision-making process. The methodology is structured around an iterative development lifecycle for an intelligent decision support system and includes several steps, such as stakeholder identification, an empirical study on usability and explainability, user clustering analysis, and the implementation of an XAI framework. The XAI framework comprises XAI clusters and local and global XAI, which facilitate higher adoption rates of the AI system and ensure responsible and safe deployment. The methodology is tested on a stacked neural network for an analytics service, which estimates energy savings from renovations, and aims to increase adoption rates and benefit the circular economy. Full article
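The idea of tailoring explanation depth to user groups can be illustrated with a small clustering sketch; the user-profile features, the number of clusters, and the mapping from cluster to explanation style below are hypothetical and are not taken from the paper.

```python
# Minimal sketch: cluster (hypothetical) user profiles and map each cluster
# to an explanation style of different depth (not the paper's implementation).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: self-rated AI expertise, domain experience, preference for detail.
profiles = rng.uniform(0, 1, size=(60, 3))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)

# Order clusters by mean expertise and assign progressively deeper explanations.
order = np.argsort(kmeans.cluster_centers_[:, 0])
styles = {int(cluster): style for cluster, style in zip(
    order, ["plain-language summary", "feature-importance chart", "full local + global XAI report"])}

new_user = np.array([[0.9, 0.7, 0.8]])           # an experienced, detail-oriented user
assigned = int(kmeans.predict(new_user)[0])
print("Serve explanation as:", styles[assigned])
```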

Review


17 pages, 515 KiB  
Review
Explainable Artificial Intelligence-Based Decision Support Systems: A Recent Review
by Georgios Kostopoulos, Gregory Davrazos and Sotiris Kotsiantis
Electronics 2024, 13(14), 2842; https://doi.org/10.3390/electronics13142842 - 19 Jul 2024
Cited by 2 | Viewed by 1577
Abstract
This survey article provides a comprehensive overview of the evolving landscape of Explainable Artificial Intelligence (XAI) in Decision Support Systems (DSSs). As Artificial Intelligence (AI) continues to play a crucial role in decision-making processes across various domains, the need for transparency, interpretability, and trust becomes paramount. This survey examines the methodologies, applications, challenges, and future research directions in the integration of explainability within AI-based Decision Support Systems. Through an in-depth analysis of current research and practical implementations, this article aims to guide researchers, practitioners, and decision-makers in navigating the intricate landscape of XAI-based DSSs. These systems assist end-users in their decision-making, providing a full picture of how a decision was made and boosting trust. Furthermore, a methodical taxonomy of the current methodologies is proposed and representative works are presented and discussed. The analysis of recent studies reveals that there is a growing interest in applying XDSSs in fields such as medical diagnosis, manufacturing, and education, to name a few, since they smooth down the trade-off between accuracy and explainability, boost confidence, and also validate decisions. Full article
