Explainable and Trustworthy AI Models for Data Analytics

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 20 June 2025 | Viewed by 1528

Special Issue Editor


Dr. Rajaraman Kanagasabai
Guest Editor
Institute for Infocomm Research, A*STAR, Singapore
Interests: advanced machine learning (deep learning, interpretable ML, etc.); text analytics; semantic computing

Special Issue Information

Dear Colleagues,

I am pleased to announce a Special Issue of the journal Mathematics on "Explainable and Trustworthy AI Models for Data Analytics". This Special Issue aims to address the lack of transparency and interpretability in AI systems for data analytics. It will explore the development of explainable artificial intelligence models that enhance our understanding of the decision-making processes behind AI predictions, thereby fostering trust and enabling the effective use of AI in data analysis tasks. Topics of interest include explainable AI; trustworthy AI; data analytics; machine learning; explainable models; model trustworthiness; mathematical optimization; and applications of explainable and trustworthy AI in various domains.

The Special Issue seeks to provide a platform for researchers to contribute to the advancement of AI models that are not only accurate and efficient but also transparent, interpretable, and accountable in the context of data analysis. Diverse perspectives and approaches are welcome to foster a comprehensive understanding of the challenges and opportunities in developing explainable and trustworthy AI models for data analysis.

Dr. Rajaraman Kanagasabai
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • trustworthy AI
  • data analytics
  • machine learning
  • explainable models
  • model trustworthiness
  • mathematical optimization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

21 pages, 3313 KiB  
Article
Understanding Public Opinion towards ESG and Green Finance with the Use of Explainable Artificial Intelligence
by Wihan van der Heever, Ranjan Satapathy, Ji Min Park and Erik Cambria
Mathematics 2024, 12(19), 3119; https://doi.org/10.3390/math12193119 - 5 Oct 2024
Viewed by 843
Abstract
This study leverages explainable artificial intelligence (XAI) techniques to analyze public sentiment towards Environmental, Social, and Governance (ESG) factors, climate change, and green finance. It does so by developing a novel multi-task learning framework combining aspect-based sentiment analysis, co-reference resolution, and contrastive learning to extract nuanced insights from a large corpus of social media data. Our approach integrates state-of-the-art models, including the SenticNet API, for sentiment analysis and implements multiple XAI methods such as LIME, SHAP, and Permutation Importance to enhance interpretability. Results reveal predominantly positive sentiment towards environmental topics, with notable variations across ESG categories. The contrastive learning visualization demonstrates clear sentiment clustering while highlighting areas of uncertainty. This research contributes to the field by providing an interpretable, trustworthy AI system for ESG sentiment analysis, offering valuable insights for policymakers and business stakeholders navigating the complex landscape of sustainable finance and climate action. The methodology proposed in this paper advances the current state of AI in ESG and green finance in several ways. By combining aspect-based sentiment analysis, co-reference resolution, and contrastive learning, our approach provides a more comprehensive understanding of public sentiment towards ESG factors than traditional methods. The integration of multiple XAI techniques (LIME, SHAP, and Permutation Importance) offers a transparent view of the subtlety of the model’s decision-making process, which is crucial for building trust in AI-driven ESG assessments. Our approach enables a more accurate representation of public opinion, essential for informed decision-making in sustainable finance. This paper paves the way for more transparent and explainable AI applications in critical domains like ESG. Full article
(This article belongs to the Special Issue Explainable and Trustworthy AI Models for Data Analytics)
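
To make the abstract's interpretability step concrete, below is a minimal, hypothetical sketch of one of the XAI methods it names (LIME) applied to a toy text sentiment classifier. The corpus, labels, and TF-IDF/logistic-regression model are illustrative stand-ins, not the authors' multi-task pipeline or the SenticNet API.

    # Hypothetical illustration of LIME-based explanation for text sentiment;
    # the toy corpus and simple classifier stand in for the paper's actual
    # multi-task model and ESG social-media data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    texts = [
        "The company's renewable energy investments are impressive",
        "Greenwashing claims make this ESG report untrustworthy",
        "Strong governance and transparent climate disclosures",
        "The carbon offset program looks like an accounting trick",
    ]
    labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

    # Placeholder model: TF-IDF features + logistic regression.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # LIME perturbs the input text and fits a local surrogate model,
    # yielding per-word contributions to the predicted sentiment.
    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    explanation = explainer.explain_instance(
        "Transparent disclosures but the offset program looks like a trick",
        model.predict_proba,
        num_features=5,
    )
    for word, weight in explanation.as_list():
        print(f"{word:>12}: {weight:+.3f}")

SHAP and permutation importance can be applied at the same point in such a pipeline; comparing their attributions is, presumably, what yields the transparent view of the model's decision process that the abstract describes.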

20 pages, 478 KiB  
Article
Case-Based Deduction for Entailment Tree Generation
by Jihao Shi, Xiao Ding and Ting Liu
Mathematics 2024, 12(18), 2893; https://doi.org/10.3390/math12182893 - 17 Sep 2024
Viewed by 354
Abstract
Maintaining logical consistency in structured explanations is critical for understanding and troubleshooting the reasoning behind a system’s decisions. However, existing methods for entailment tree generation often struggle with logical consistency, resulting in erroneous intermediate conclusions and reducing the overall accuracy of the explanations. To address this issue, we propose case-based deduction (CBD), a novel approach that retrieves cases with similar logical structures from a case base and uses them as demonstrations for logical deduction. This method guides the model toward logically sound conclusions without the need for manually constructing logical rule bases. By leveraging a prototypical network for case retrieval and reranking them using information entropy, CBD introduces diversity to improve in-context learning. Our experimental results on the EntailmentBank dataset show that CBD significantly improves entailment tree generation, achieving performance improvements of 1.7% in Task 1, 0.6% in Task 2, and 0.8% in Task 3 under the strictest Overall AllCorrect metric. These findings confirm that CBD enhances the logical consistency and overall accuracy of AI systems in structured explanation tasks. Full article
(This article belongs to the Special Issue Explainable and Trustworthy AI Models for Data Analytics)
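
The retrieve-then-rerank idea in this abstract can be sketched in a few lines. The hypothetical example below retrieves the cases nearest to a query in embedding space, then reranks them by the Shannon entropy of their softmaxed similarities to the other retrieved cases, so that demonstrations not redundant with a tight cluster rank higher. The random embeddings and scoring details are assumptions for illustration, not the authors' prototypical-network implementation.

    # Hypothetical sketch of case retrieval plus entropy-based reranking;
    # random vectors stand in for prototypical-network embeddings.
    import numpy as np

    rng = np.random.default_rng(0)
    case_embeddings = rng.normal(size=(100, 64))  # one vector per case in the case base
    query = rng.normal(size=64)                   # embedding of the current premises

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Step 1: retrieve the k cases most similar to the query.
    sims = np.array([cosine(c, query) for c in case_embeddings])
    top_k = np.argsort(sims)[-10:]

    # Step 2: rerank by the entropy of each case's similarity distribution
    # over the other retrieved cases; higher entropy means the case is not
    # redundant with a tight cluster, promoting diverse demonstrations.
    def entropy_score(i, pool):
        s = np.array([cosine(case_embeddings[i], case_embeddings[j])
                      for j in pool if j != i])
        p = np.exp(s) / np.exp(s).sum()  # softmax into a probability distribution
        return float(-(p * np.log(p)).sum())

    reranked = sorted(top_k, key=lambda i: entropy_score(i, top_k), reverse=True)
    print("demonstration order:", reranked)

The reranked cases would then be formatted as in-context demonstrations for the deduction step; the entropy criterion is one concrete way to realize the diversity objective the abstract mentions.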
