Explainable Artificial Intelligence (XAI) for Big Data

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 September 2024

Special Issue Editors

School of Computer Science & Technology, University of Bedfordshire, Luton LU1 3JU, UK
Interests: data science; distributed AI; knowledge engineering (KE); agent and multi-agent systems; grid computing; HCI

Guest Editor
School of Computer Science and Technology, University of Law, London EC1Y 8HQ, UK
Interests: algorithm design; combinatorics and discrete mathematics; bioinformatics; trust modelling; computer security and forensics

Guest Editor
Department of Computing, The Hong Kong Polytechnic University, Hong Kong 999077, China
Interests: data science; machine learning algorithms; decision algorithms; social computing; operational research

Special Issue Information

Dear Colleagues,

This Special Issue aims to explore the intersection of Explainable Artificial Intelligence (XAI) and Big Data, addressing the challenges and opportunities in rendering complex AI models understandable and transparent. We invite researchers and practitioners to contribute to the advancement of XAI methodologies tailored for large-scale data applications.

As the reliance on AI algorithms grows in managing vast datasets, ensuring transparency and interpretability becomes imperative. This Special Issue focuses on the scientific background of XAI, emphasizing its role in overcoming the opacity of AI models. The significance of this research area lies in fostering trust, facilitating regulatory compliance, and enabling practical deployment of AI in Big Data contexts.

The aim is to provide a platform for cutting-edge research that enhances the explainability of AI systems dealing with extensive datasets. This aligns with the broader scope of Electronics, offering insights into the evolving landscape of AI applications.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • The importance and impact of XAI in Big Data analytics;
  • XAI techniques for Big Data analytics;
  • Complexities of XAI and Big Data analytics;
  • Challenges and opportunities in XAI for Big Data analytics;
  • Interpretable machine learning models for Big Data;
  • Ethical considerations in XAI for large-scale data;
  • XAI and data privacy;
  • XAI and bias in Big Data analytics;
  • Human–AI collaboration in understanding complex data models;
  • XAI and decision-making;
  • XAI and efficiency;
  • XAI and transparency;
  • XAI and interpretability;
  • Regulatory compliance and transparency in AI for Big Data;
  • XAI and trust.
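As one concrete illustration of the model-agnostic techniques solicited above, permutation feature importance scores a feature by how much a model's accuracy drops when that feature's values are randomly shuffled. The sketch below is illustrative only; the toy model and data are hypothetical stand-ins, not drawn from any submission.

```python
# Illustrative sketch: permutation feature importance, one widely used
# model-agnostic explainability technique. Model and data are hypothetical.
import random

def accuracy(model, X, y):
    """Fraction of samples the model classifies correctly."""
    return sum(1 for xi, yi in zip(X, y) if model(xi) == yi) / len(y)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when one feature column is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            column = [row[j] for row in X]
            rng.shuffle(column)  # break the feature/label association
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
            drops.append(baseline - accuracy(model, X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model: predicts 1 iff the first feature exceeds 0.5; feature 1 is ignored,
# so its importance is exactly zero.
model = lambda x: 1 if x[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.7], [0.1, 0.3]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y))
```

Because the toy model never reads the second feature, shuffling it leaves every prediction unchanged; only the first feature receives a positive score.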

Dr. Gangmin Li
Dr. Paul Sant
Dr. Kevin Yuen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable artificial intelligence (XAI)
  • big data
  • transparency
  • interpretable machine learning
  • ethical AI
  • human-AI collaboration
  • regulatory compliance
  • trust in AI

Published Papers (1 paper)


Research

18 pages, 1007 KiB  
Article
Non-Stationary Transformer Architecture: A Versatile Framework for Recommendation Systems
by Yuchen Liu, Gangmin Li, Terry R. Payne, Yong Yue and Ka Lok Man
Electronics 2024, 13(11), 2075; https://doi.org/10.3390/electronics13112075 - 27 May 2024
Abstract
Recommendation systems are crucial in navigating the vast digital market. However, the dynamic and non-stationary nature of user data often hinders their efficacy. Traditional models struggle to adapt to the evolving preferences and behaviours inherent in user interaction data, posing a significant challenge for accurate prediction and personalisation. Addressing this, we propose a novel theoretical framework, the non-stationary transformer, designed to effectively capture and leverage the temporal dynamics within data. This approach enhances the traditional transformer architecture by introducing mechanisms that account for non-stationary elements, offering a robust and adaptable solution for multi-tasking recommendation systems. Our experimental analysis, encompassing deep learning (DL) and reinforcement learning (RL) paradigms, demonstrates the framework’s superiority over benchmark models. The empirical results confirm the efficacy of the proposed framework, which yields significant performance enhancements: approximately an 8% reduction in LogLoss and up to a 2% increase in F1 score compared with other attention-related models. The results also underscore its potential applicability across accumulative-reward scenarios with pure reinforcement learning models. These findings advocate adopting non-stationary transformer models to tackle the complexities of today’s recommendation tasks.
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) for Big Data)
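For readers unfamiliar with the metrics reported in the abstract, LogLoss (binary cross-entropy) and F1 score can be computed as below. The labels and predicted probabilities are hypothetical examples for illustration, not results from the paper.

```python
# Minimal sketch of the two evaluation metrics named in the abstract:
# binary LogLoss (cross-entropy) and F1 score. Data here is made up.
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary cross-entropy averaged over samples."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

labels = [1, 0, 1, 1, 0]
probs = [0.9, 0.6, 0.3, 0.8, 0.4]
preds = [1 if p >= 0.5 else 0 for p in probs]
print(round(log_loss(labels, probs), 4))   # → 0.5919
print(round(f1_score(labels, preds), 4))   # → 0.6667
```

A lower LogLoss rewards well-calibrated probabilities, while F1 balances precision and recall on the hard 0/1 predictions, which is why papers typically report both.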
