Advances in Explainable Artificial Intelligence (XAI): 3rd Edition

Special Issue Editor

Dr. Luca Longo
School of Computer Science, Technological University Dublin, D08 X622 Dublin, Ireland
Interests: explainable artificial intelligence; defeasible argumentation; deep learning; human-centred design; mental workload modeling

Special Issue Information

Dear Colleagues,

Recently, artificial intelligence has shifted its focus towards the design and deployment of intelligent systems that are interpretable and explainable, giving rise to a new field: explainable artificial intelligence (XAI). This shift has been echoed both in the research literature and in the press, attracting scholars from around the world as well as a lay audience. Initially devoted to post hoc methods for explainability, which essentially wrap machine- and deep-learning models with explanations, the field is now expanding its boundaries to ante hoc methods that produce self-interpretable models. Alongside this, neuro-symbolic approaches to reasoning have been employed in conjunction with machine learning to complement modeling accuracy and precision with self-explainability and justifiability. Scholars have also begun shifting their focus towards the structure of explanations, since the ultimate users of interactive technologies are humans, linking artificial intelligence and computer science to psychology, human–computer interaction, philosophy, and sociology.

Explainable artificial intelligence is clearly gaining momentum, and this Special Issue calls for contributions exploring this fascinating new area of research. It seeks articles devoted to the theoretical foundations of XAI, its historical perspectives, and the design of explanations and interactive, human-centered intelligent systems with knowledge-representation principles and automated learning capabilities, aimed not only at experts but at the lay audience as well.

Dr. Luca Longo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable artificial intelligence (XAI)
  • neuro-symbolic reasoning for XAI
  • interpretable deep learning
  • argument-based models of explanations
  • graph neural networks for explainability
  • machine learning and knowledge graphs
  • human-centric explainable AI
  • interpretation of black-box models
  • human-understandable machine learning
  • counterfactual explanations for machine learning
  • natural language processing in XAI
  • quantitative/qualitative evaluation metrics for XAI
  • ante hoc and post hoc XAI methods
  • rule-based systems for XAI
  • fuzzy systems and explainability
  • human-centered learning and explanations
  • model-dependent and model-agnostic explainability
  • case-based explanations for AI systems
  • interactive machine learning and explanations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

19 pages, 4291 KiB  
Article
Comparative Analysis of Perturbation Techniques in LIME for Intrusion Detection Enhancement
by Mantas Bacevicius, Agne Paulauskaite-Taraseviciene, Gintare Zokaityte, Lukas Kersys and Agne Moleikaityte
Mach. Learn. Knowl. Extr. 2025, 7(1), 21; https://doi.org/10.3390/make7010021 - 21 Feb 2025
Abstract
The growing sophistication of cyber threats necessitates robust and interpretable intrusion detection systems (IDS) to safeguard network security. While machine learning models such as Decision Tree (DT), Random Forest (RF), k-Nearest Neighbors (K-NN), and XGBoost demonstrate high effectiveness in detecting malicious activities, their interpretability decreases as their complexity and accuracy increase, posing challenges for critical cybersecurity applications. Local Interpretable Model-agnostic Explanations (LIME) is widely used to address this limitation; however, its reliance on normal distribution for perturbations often fails to capture the non-linear and imbalanced characteristics of datasets like CIC-IDS-2018. To address these challenges, we propose a modified LIME perturbation strategy using Weibull, Gamma, Beta, and Pareto distributions to better capture the characteristics of network traffic data. Our methodology improves the stability of different ML models trained on CIC-IDS datasets, enabling more meaningful and reliable explanations of model predictions. The proposed modifications allow for an increase in explanation fidelity by up to 78% compared to the default Gaussian approach. Pareto-based perturbations provide the best results. Among all distributions tested, Pareto consistently yielded the highest explanation fidelity and stability, particularly for K-NN (R² = 0.9971, S = 0.9907) and DT (R² = 0.9267, S = 0.9797). This indicates that heavy-tailed distributions fit well with real-world network traffic patterns, reducing the variance in attribute importance explanations and making them more robust.
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 3rd Edition)
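The perturbation idea described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical LIME-style local surrogate in which the default Gaussian neighbourhood sampling is replaced by heavy-tailed Pareto noise; the function name, parameters, and defaults are illustrative assumptions and are not taken from the paper's code.

```python
# Hypothetical sketch of a LIME-style local surrogate with a Pareto
# perturbation scheme in place of the usual Gaussian one.
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(predict_fn, x, n_samples=1000, kernel_width=0.75,
                     perturbation="pareto", pareto_shape=3.0, seed=None):
    """Fit a weighted linear surrogate around a single instance x (1-D array).

    predict_fn should map an (n_samples, n_features) array to a 1-D array of
    black-box scores (e.g. the probability of the 'attack' class).
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    if perturbation == "gaussian":
        # Default LIME-style neighbourhood: zero-mean Gaussian noise.
        noise = rng.normal(0.0, 1.0, size=(n_samples, d))
    else:
        # Heavy-tailed alternative: symmetric Pareto noise, intended to
        # better match skewed network-traffic features.
        noise = rng.pareto(pareto_shape, size=(n_samples, d))
        noise *= rng.choice([-1.0, 1.0], size=noise.shape)
    Z = x + noise                                  # perturbed neighbourhood
    y = predict_fn(Z)                              # black-box predictions
    distances = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)  # locality kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_                         # local feature attributions
```

Explanation fidelity in such a setup can then be gauged, for instance, by the surrogate's weighted R² on the perturbed neighbourhood, which is the kind of metric the paper reports.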
