Advances in Explainable Artificial Intelligence, 2nd Edition

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 10 October 2025 | Viewed by 4468

Special Issue Editors


Prof. Dr. Gabriele Gianini
Guest Editor
Department of Computer Science, Università degli Studi di Milano, 20122 Milano, MI, Italy
Interests: machine learning; computational intelligence; game theory applications to machine learning and networking

Dr. Pierre-Edouard Portier
Guest Editor
Department of Computer Science and Information Technology of INSA Lyon, LIRIS laboratory, 69100 Villeurbanne, France
Interests: machine learning; semantic web; information retrieval

Special Issue Information

Dear Colleagues,

Machine Learning (ML)-based Artificial Intelligence (AI) algorithms can learn, from known examples, abstract representations and models that, once applied to unknown examples, can perform tasks such as classification, regression, or forecasting.

Very often, these highly effective ML representations are difficult to understand; this is particularly true of deep learning models, which can involve millions of parameters. However, for many applications it is of utmost importance for stakeholders to understand the decisions made by the system in order to make better use of them. Furthermore, for decisions that affect individuals, future legislation may even mandate a “right to an explanation”. Overall, improving the explainability of these algorithms may foster trust and the social acceptance of AI.

The need to make ML algorithms more transparent and more explainable has generated several lines of research that form an area known as explainable Artificial Intelligence (XAI).

Among the goals of XAI are adding transparency to ML models by providing detailed information about why the system has reached a particular decision; designing more explainable and transparent ML models, while at the same time maintaining high performance levels; finding a way to evaluate the overall explainability and transparency of the models; and quantifying their effectiveness for different stakeholders.
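As a minimal, purely illustrative sketch of the first of these goals, a post-hoc technique such as permutation feature importance can attach a simple decision-level explanation to an otherwise opaque model. The Python example below uses a synthetic dataset and a gradient boosting classifier chosen only for the illustration; it is one of many possible XAI techniques, not a method prescribed by this Special Issue.

# Post-hoc explanation sketch: permutation feature importance on an opaque model.
# The synthetic data and the model choice are assumptions made for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary classification task with a few informative features.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An accurate but not directly interpretable model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop = {importance:.3f}")

Features whose permutation causes the largest drop in performance are those the model relies on most, which is the kind of decision-level information referred to above.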

The objective of this Special Issue is to explore recent advances and techniques in the XAI area.

Research topics of interest include (but are not limited to):

- Devising machine learning models that are transparent by design;

- Planning for transparency, from data collection to training, testing, and production;

- Developing algorithms and user interfaces for explainability;

- Identifying and mitigating biases in data collection;

- Performing black-box model auditing and explanation;

- Detecting data bias and algorithmic bias;

- Learning causal relationships;

- Integrating social and ethical aspects of explainability;

- Integrating explainability into existing AI systems;

- Designing new explanation modalities;

- Exploring theoretical aspects of explanation and interpretability;

- Investigating the use of XAI in application sectors such as healthcare, bioinformatics, multimedia, linguistics, human–computer interaction, machine translation, autonomous vehicles, risk assessment, and justice.

Prof. Dr. Gabriele Gianini
Dr. Pierre-Edouard Portier
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • explainability
  • transparency
  • accountability

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)

Research

21 pages, 2251 KiB  
Article
Predicting Individual Well-Being in Teamwork Contexts Based on Speech Features
by Tobias Zeulner, Gerhard Johann Hagerer, Moritz Müller, Ignacio Vazquez and Peter A. Gloor
Information 2024, 15(4), 217; https://doi.org/10.3390/info15040217 - 12 Apr 2024
Cited by 1 | Viewed by 1556
Abstract
Current methods for assessing individual well-being in team collaboration at the workplace often rely on manually collected surveys. This limits continuous real-world data collection and proactive measures to improve team member workplace satisfaction. We propose a method to automatically derive social signals related to individual well-being in team collaboration from raw audio and video data collected in teamwork contexts. The goal was to develop computational methods and measurements to facilitate the mirroring of individuals’ well-being to themselves. We focus on how speech behavior is perceived by team members to improve their well-being. Our main contribution is the assembly of an integrated toolchain to perform multi-modal extraction of robust speech features in noisy field settings and to explore which features are predictors of self-reported satisfaction scores. We applied the toolchain to a case study, where we collected videos of 20 teams with 56 participants collaborating over a four-day period in a team project in an educational environment. Our audiovisual speaker diarization extracted individual speech features from a noisy environment. As the dependent variable, team members filled out a daily PERMA (positive emotion, engagement, relationships, meaning, and accomplishment) survey. These well-being scores were predicted using speech features extracted from the videos using machine learning. The results suggest that the proposed toolchain was able to automatically predict individual well-being in teams, leading to better teamwork and happier team members. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence, 2nd Edition)
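The final prediction step described in this abstract can be pictured with a short sketch: regressing self-reported PERMA scores on per-speaker speech features produced by an upstream diarization and feature-extraction pipeline. The file name, feature names, and model below are placeholders chosen for illustration; they are not the authors' actual toolchain.

# Sketch of the prediction step only: regress daily PERMA scores on speech features.
# The CSV layout, column names, and model are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per speaker per day, produced upstream by
# audiovisual speaker diarization and speech-feature extraction.
df = pd.read_csv("speech_features.csv")
feature_cols = ["speaking_time", "mean_pitch", "pitch_variance",
                "speech_rate", "interruptions"]      # assumed feature names
X, y = df[feature_cols], df["perma_score"]           # assumed target column

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())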

14 pages, 15148 KiB  
Article
Explainable Machine Learning Method for Aesthetic Prediction of Doors and Home Designs
by Jean-Sébastien Dessureault, Félix Clément, Seydou Ba, François Meunier and Daniel Massicotte
Information 2024, 15(4), 203; https://doi.org/10.3390/info15040203 - 5 Apr 2024
Viewed by 1276
Abstract
The field of interior home design has witnessed a growing utilization of machine learning. However, the subjective nature of aesthetics poses a significant challenge due to its variability among individuals and cultures. This paper proposes an applied machine learning method to enhance manufactured custom doors in a proper and aesthetic home design environment. Since there are millions of possible custom door models based on door types, wood species, dyeing, paint, and glass types, it is impossible to foresee a home design model fitting every custom door. To generate the classification data, a home design expert has to label thousands of door/home design combinations with the different colors and shades utilized in home designs. These data train a random forest classifier in a supervised learning context. The classifier predicts a home design according to a particular custom door. This method is applied in the following context: A web page displays a choice of doors to a customer. The customer selects the desired door properties, which are sent to a server that returns an aesthetic home design model for this door. This door configuration generates a series of images through the Unity 3D engine module, which are returned to the web client. The customer finally visualizes their door in an aesthetic home design context. The results show the random forest classifier’s good performance, with an accuracy level of 86.8%, in predicting suitable home design, marking the way for future developments requiring subjective evaluations. The results are also explained using a feature importance graphic, a decision tree, a confusion matrix, and text. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence, 2nd Edition)
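The classification step described in this abstract can likewise be sketched in a few lines: a random forest mapping one-hot-encoded custom-door attributes to a home-design class. The data file and column names below are hypothetical and stand in for the expert-labelled combinations mentioned in the abstract; the sketch illustrates the kind of supervised pipeline described, not the authors' implementation.

# Sketch of the classification step: door attributes -> home-design class.
# The labelled data file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("door_design_labels.csv")           # hypothetical expert-labelled data
X = pd.get_dummies(df[["door_type", "wood_species", "dye",
                       "paint", "glass_type"]])      # assumed attribute columns
y = df["home_design"]                                # assumed class label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))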

Review

42 pages, 1544 KiB  
Review
Collaborative Intelligence for Safety-Critical Industries: A Literature Review
by Inês F. Ramos, Gabriele Gianini, Maria Chiara Leva and Ernesto Damiani
Information 2024, 15(11), 728; https://doi.org/10.3390/info15110728 - 12 Nov 2024
Viewed by 742
Abstract
While AI-driven automation can increase the performance and safety of systems, humans should not be replaced in safety-critical systems but should be integrated to collaborate and mitigate each other’s limitations. The current trend in Industry 5.0 is towards human-centric collaborative paradigms, with an emphasis on collaborative intelligence (CI) or Hybrid Intelligent Systems. In this survey, we search and review recent work that employs AI methods for collaborative intelligence applications, specifically those that focus on safety and safety-critical industries. We aim to contribute to the research landscape and industry by compiling and analyzing a range of scenarios where AI can be used to achieve more efficient human–machine interactions, improved collaboration, coordination, and safety. We define a domain-focused taxonomy to categorize the diverse CI solutions, based on the type of collaborative interaction between intelligent systems and humans, the AI paradigm used and the domain of the AI problem, while highlighting safety issues. We investigate 91 articles on CI research published between 2014 and 2023, providing insights into the trends, gaps, and techniques used, to guide recommendations for future research opportunities in the fast developing collaborative intelligence field. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence, 2nd Edition)
