Topic Editors

Dr. Luca Longo
School of Computer Science, Technological University Dublin, D08 X622 Dublin, Ireland
Dr. Mario Brcic
Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia

Opportunities and Challenges in Explainable Artificial Intelligence (XAI)

Abstract submission deadline
10 October 2024
Manuscript submission deadline
10 December 2024

Topic Information

Dear Colleagues,

With the rise of a new field, explainable artificial intelligence (XAI), artificial intelligence has shifted its focus toward designing and deploying intelligent systems that are interpretable and explainable. This shift has been echoed in the research literature and the press, attracting scholars worldwide as well as a lay audience. Initially devoted to the design of post hoc methods for explainability, which pair trained machine and deep learning models with explanations, the field is now expanding its boundaries to ante hoc methods that produce self-interpretable models. Alongside this, neuro-symbolic approaches to reasoning have been employed in conjunction with machine learning to improve modeling accuracy and precision while incorporating self-explainability and justifiability. Scholars have also shifted their focus to the structure of explanations, since the ultimate users of interactive technologies are humans, linking artificial intelligence and computer science to psychology, human–computer interaction, philosophy, and sociology.

This multi- and interdisciplinary Topic brings together academics and scholars from different disciplines, including computer science, psychology, philosophy, and social science, to mention a few, as well as industry practitioners interested in the technical, practical, social, and ethical aspects of explaining the models emerging from the discipline of artificial intelligence (AI). In particular, XAI can help to solve some of the problems of AI highlighted in the Regulation of the European Parliament and of the Council on artificial intelligence (the AI Act), which lays down harmonized rules on artificial intelligence and amends certain Union legislative acts.

Explainable artificial intelligence is certainly gaining momentum. This Topic therefore calls for contributions to this fascinating new area of research, seeking articles devoted to the theoretical foundations of XAI, its historical perspectives, and the design of explanations and interactive, human-centered intelligent systems with knowledge representation principles and automated learning capabilities, aimed not only at experts but at a lay audience as well. Invited topics include, but are not limited to, the following:

Technical methods for XAI:

  • Action influence graphs;
  • Agent-based explainable systems;
  • Ante hoc approaches for interpretability;
  • Argumentative-based approaches for explanations;
  • Argumentation theory for explainable AI;
  • Attention mechanisms for XAI;
  • Automata for explaining recurrent neural network models;
  • Auto-encoders and explainability of latent spaces;
  • Bayesian modeling for interpretable models;
  • Black boxes vs. white boxes;
  • Case-based explanations for AI systems;
  • Causal inference and explanations;
  • Constraint-based explanations;
  • Decomposition of neural-network-based models for XAI;
  • Deep learning and XAI methods;
  • Defeasible reasoning for explainability;
  • Evaluation approaches for XAI-based systems;
  • Explainable methods for edge computing;
  • Expert systems for explainability;
  • Explainability and the semantic web;
  • Explainability of signal processing methods;
  • Finite state machines for enabling explainability;
  • Fuzzy systems and logic for explainability;
  • Graph neural networks for explainability;
  • Hybrid and transparent black box modeling;
  • Interpreting and explaining convolutional neural networks;
  • Interpretable representational learning;
  • Methods for latent space interpretations;
  • Model-specific vs. model-agnostic methods for XAI;
  • Neuro-symbolic reasoning for XAI;
  • Natural language processing for explanations;
  • Ontologies and taxonomies for supporting XAI;
  • Pruning methods with XAI;
  • Post hoc methods for explainability;
  • Reinforcement learning for enhancing XAI systems;
  • Reasoning under uncertainty for explanation;
  • Rule-based XAI systems;
  • Robotics and explainability;
  • Sample-centric and dataset-centric explanations;
  • Self-explainable methods for XAI;
  • Sentence embeddings for explainable semantic features;
  • Transparent and explainable learning methods;
  • User interfaces for explainability;
  • Visual methods for representational learning;
  • XAI benchmarking;
  • XAI methods for neuroimaging and neural signals;
  • XAI and reservoir computing.

Ethical considerations for XAI:

  • Accountability and responsibility in XAI-based technologies;
  • Addressing user-centric requirements for XAI systems;
  • Assessment of model accuracy and interpretability trade-off;
  • Explainable bias and fairness of XAI-based systems;
  • Explainability for discovering, improving, controlling, and justifying;
  • Explainability as a prerequisite for responsible AI systems;
  • Explainability and data fusion;
  • Explainability and responsibility in policy guidelines;
  • Explainability pitfalls and dark patterns in XAI;
  • Historical foundations of XAI;
  • Moral principles and dilemmas for XAI-based systems;
  • Multimodal XAI approaches;
  • Philosophical considerations of synthetic explanations;
  • Prevention and detection of deceptive AI explanations;
  • Social implications of automatically generated explanations;
  • Theoretical foundations of XAI;
  • Trust and explainable AI;
  • The logic of scientific explanation within AI;
  • The epistemic and moral goods expected from explaining AI;
  • XAI for fairness checking;
  • XAI for time-series-based approaches;
  • XAI for transparency and unbiased decision making.

Psychological notions and concepts for XAI:

  • Algorithmic transparency and actionability;
  • Cognitive approaches and architectures for explanations;
  • Cognitive relief in explanations;
  • Contrastive nature of explanations;
  • Comprehensibility vs. interpretability vs. explainability;
  • Counterfactual explanations;
  • Designing new explanation styles;
  • Explanations for correctability;
  • Faithfulness and intelligibility of explanations;
  • Interpretability vs. traceability;
  • Interestingness and informativeness of explanations;
  • Irrelevance of probabilities to explanations;
  • Iterative dialog explanations;
  • Justification and explanations in AI-based systems;
  • Local vs. global interpretability and explainability;
  • Methods for assessing the quality of explanations;
  • Non-technical explanations in AI-based systems;
  • Notions and metrics of/for explainability;
  • Persuasiveness and robustness of explanations;
  • Psychometrics of human explanations;
  • Qualitative approaches for explainability;
  • Questionnaires and surveys for explainability;
  • Scrutability and diagnosis of XAI methods;
  • Soundness and stability of XAI methods;
  • Theories of explanation.

Social examinations of XAI:

  • Adaptive explainable systems;
  • Backward- and forward-looking forms of responsibility in XAI;
  • Data provenance and explainability;
  • Explainability for reputation;
  • Epistemic and non-epistemic values for XAI;
  • Human-centric explainable AI;
  • Person-specific XAI systems;
  • Presentation and personalization of AI explanations for target groups;
  • Social nature of explanations.

Legal and administrative considerations within XAI:

  • Black box model auditing and explanation;
  • Explainability in regulatory compliance;
  • Human rights for explanations in AI systems;
  • Policy-based systems of explanations;
  • The potential harm of explainability in AI;
  • Trustworthiness of explanations for clinicians and patients;
  • XAI methods for model governance;
  • XAI in policy development;
  • XAI to increase situational awareness and compliance behavior.

Safety and security approaches for XAI:

  • Adversarial attack explanations;
  • Explanations for risk assessment;
  • Explainability of federated learning;
  • Explainable IoT malware detection;
  • Privacy and agency of explanations;
  • XAI for privacy-preserving systems;
  • Stealing, attacking, and defending XAI techniques;
  • XAI for human–AI cooperation;
  • XAI and model output confidence estimation.

Applications of XAI-based systems:

  • Application of XAI in cognitive computing;
  • Dialog systems for enhancing explainability;
  • Explainable methods for medical diagnosis;
  • Business and marketing applications of XAI;
  • Biomedical knowledge discovery and explainability;
  • Explainable methods for human–computer interaction;
  • Explainability in decision support systems;
  • Explainable recommender systems;
  • Explainable methods for finance and automatic trading systems;
  • Explainability in agricultural AI-based methods;
  • Explainability in transportation systems;
  • Explainability for unmanned aerial vehicles (UAVs);
  • Explainability in brain–computer interface systems;
  • Interactive applications for XAI;
  • Manufacturing chains and application of XAI systems;
  • Models of explanations in criminology, cybersecurity, and defense;
  • XAI approaches in Industry 4.0;
  • XAI systems for healthcare;
  • XAI technologies for autonomous driving;
  • XAI methods for bioinformatics;
  • XAI methods for linguistics and machine translation;
  • XAI methods for neuroscience;
  • XAI models and applications for IoT;
  • XAI methods for terrestrial, atmospheric, and ocean remote sensing;
  • XAI in sustainable finance and climate finance;
  • XAI in bio-signal analysis.

Dr. Luca Longo
Dr. Mario Brcic
Topic Editors

Keywords

  • Explainable Artificial Intelligence (XAI)
  • structure of explanations
  • human-centered artificial intelligence
  • explainability and interpretability of AI systems

Participating Journals

Journal Name                                       Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
Applied Sciences (applsci)                         2.5            5.3        2011           17.8 Days                CHF 2400
Computers (computers)                              2.6            5.4        2012           17.2 Days                CHF 1800
Entropy (entropy)                                  2.1            4.9        1999           22.4 Days                CHF 2600
Information (information)                          2.4            6.9        2010           14.9 Days                CHF 1600
Machine Learning and Knowledge Extraction (make)   4.0            6.3        2019           27.1 Days                CHF 1800
Systems (systems)                                  2.3            2.8        2013           17.3 Days                CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers

This Topic is now open for submission.