Reprint

Advances in Explainable Artificial Intelligence

Edited by
February 2024
208 pages
  • ISBN 978-3-7258-0283-8 (Hardback)
  • ISBN 978-3-7258-0284-5 (PDF)

This book is a reprint of the Special Issue Advances in Explainable Artificial Intelligence that was published in

Computer Science & Mathematics
Summary

Machine Learning (ML)-based Artificial Intelligence (AI) algorithms have the capability to learn from known examples, creating various abstract representations and models. When applied to unfamiliar examples, these algorithms can perform a range of tasks, including classification, regression, and forecasting.

Frequently, these highly effective ML representations are challenging to comprehend, especially in the case of Deep Learning models, which may involve millions of parameters. However, in many applications, it is crucial for stakeholders to grasp the reasoning behind a system's decisions in order to use it more effectively. This necessity has prompted extensive research efforts aimed at enhancing the transparency and interpretability of ML algorithms, forming the field of explainable Artificial Intelligence (XAI).

The objectives of XAI encompass: introducing transparency to ML models by offering comprehensive insights into the rationale behind specific decisions; designing ML models that are both more interpretable and transparent while maintaining high levels of performance; and establishing methods for assessing the overall interpretability and transparency of models, quantifying their effectiveness for various stakeholders.

This Special Issue gathers contributions on recent advancements and techniques within the domain of XAI.

Format
  • Hardback
License and Copyright
© 2022 by the authors; CC BY-NC-ND license
Keywords
activation function; ReLU family; activation function test; psychological profiling; predictive modeling; behavioral data; explainable artificial intelligence; rule extraction; counterfactual explanations; fairness; bias; artificial intelligence; machine learning; psychiatry; health; mental health; federated learning; 6G; vehicle-to-everything (V2X); quality of service; quality of experience; interactive machine learning; decision tree classifiers; transparent-by-design; parallel coordinates; natural language generation; fact-checking; explainable AI; deep learning; LSTM; Arabic sentiment analysis; text mining; random forest; multi-layer perceptron; protein data bank; neural network; artificial neural networks; knowledge representation; source code analysis; text classification; uncertainty quantification; efficiency; electroencephalography; convolutional variational autoencoder; latent space interpretation; spectral topographic maps