Trustworthy Deep Learning in Practice

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 November 2024

Special Issue Editors


Dr. Jiakai Wang
Guest Editor
Zhongguancun Laboratory, Beijing 100094, China
Interests: trustworthy AI in multimodal settings (e.g., adversarial examples, physical adversarial attacks, adversarial defense, backdoor detection, deepfake detection)

Dr. Aishan Liu
Guest Editor
State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China
Interests: AI safety and security, with broad interests in adversarial examples, backdoor attacks, interpretable deep learning, model robustness, fairness testing, and AI testing and evaluation

Prof. Dr. Xianglong Liu
Guest Editor
School of Computer Science and Engineering, Beihang University, Beijing 100191, China
Interests: fast visual computing (e.g., large-scale search/understanding) and robust deep learning (e.g., network quantization, adversarial attack/defense, few-shot learning)

Special Issue Information

Dear Colleagues,

In recent years, deep learning has achieved remarkable performance across a wide range of applications, including computer vision, natural language processing, and acoustics. However, research has revealed severe security challenges throughout the deep learning life-cycle, raising concerns about the trustworthiness of these systems in practice. Because such risks threaten deep learning applications in both the digital and the physical world, it is necessary to bring together advanced investigations from related research areas in order to diagnose model blind spots and to better understand, and ultimately improve, deep learning systems in practice.

In this Special Issue, we aim to bring together researchers from the fields of adversarial machine learning, model robustness, model privacy, and explainable AI to discuss recent research and future directions for trustworthy AI. We invite submissions on any aspect of trustworthiness in practical deep learning systems (in particular, computer vision and pattern recognition). We welcome research contributions related to (but not limited to) the following topics:

  • Adversarial learning (attacks, defenses);
  • Backdoor attacks and mitigations for deep learning models;
  • Model stealing for AI applications and systems;
  • Deepfake techniques for images and videos;
  • Stable learning and model generalization;
  • Robustness, fairness, privacy, and reliability in AI;
  • Explainable and practical AI.

Dr. Jiakai Wang
Dr. Aishan Liu
Prof. Dr. Xianglong Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • trustworthy AI
  • adversarial learning
  • stable learning
  • practical learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

14 pages, 9126 KiB  
Article
Interpretable Mixture of Experts for Decomposition Network on Server Performance Metrics Forecasting
by Fang Peng, Xin Ji, Le Zhang, Junle Wang, Kui Zhang and Wenjun Wu
Electronics 2024, 13(20), 4116; https://doi.org/10.3390/electronics13204116 - 18 Oct 2024
Abstract
The accurate forecasting of server performance metrics, such as CPU utilization, memory usage, and network bandwidth, is critical for optimizing resource allocation and ensuring system reliability in large-scale computing environments. In this paper, we introduce the Mixture of Experts for Decomposition Kolmogorov–Arnold Network (MOE-KAN), a novel approach designed to improve both the accuracy and interpretability of server performance prediction. The MOE-KAN framework employs a decomposition strategy that breaks down complex, nonlinear server performance patterns into simpler, more interpretable components, facilitating a clearer understanding of how predictions are made. By leveraging a Mixture of Experts (MOE) model, trend and residual components are learned by specialized experts, whose outputs are transparently combined to form the final prediction. The Kolmogorov–Arnold Network further enhances the model’s ability to capture intricate input–output relationships while maintaining transparency in its decision-making process. Experimental results on real-world server performance datasets demonstrate that MOE-KAN not only outperforms traditional models in terms of accuracy but also provides a more trustworthy and interpretable forecasting framework. This makes it particularly suitable for real-time server management and capacity planning, offering both reliability and interpretability in predictive models.
(This article belongs to the Special Issue Trustworthy Deep Learning in Practice)
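The abstract describes a decomposition-plus-mixture-of-experts design: the input series is split into trend and residual components, each component is handled by a dedicated expert, and a gate combines the expert outputs into the final forecast. Below is a minimal PyTorch sketch of that general idea, not the authors' MOE-KAN implementation; it substitutes plain MLP experts for the Kolmogorov–Arnold layers and a moving average for the paper's decomposition strategy, and names such as MoEDecompositionForecaster, Expert, and moving_average are hypothetical:

```python
# Minimal sketch of a decomposition-based mixture-of-experts forecaster.
# Assumptions: plain MLP experts stand in for the paper's KAN layers,
# and a moving average stands in for its decomposition strategy.
import torch
import torch.nn as nn
import torch.nn.functional as F

def moving_average(x: torch.Tensor, window: int = 5) -> torch.Tensor:
    """Extract a smooth trend from a (batch, length) series via average pooling."""
    pad = window // 2
    smoothed = F.avg_pool1d(x.unsqueeze(1), kernel_size=window, stride=1,
                            padding=pad, count_include_pad=False)
    return smoothed.squeeze(1)

class Expert(nn.Module):
    """Small MLP mapping a history window to a forecast horizon."""
    def __init__(self, history: int, horizon: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(history, hidden), nn.ReLU(),
                                 nn.Linear(hidden, horizon))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class MoEDecompositionForecaster(nn.Module):
    """Decompose input into trend + residual, route each to its own expert,
    and combine the expert forecasts with a softmax gate."""
    def __init__(self, history: int, horizon: int):
        super().__init__()
        self.trend_expert = Expert(history, horizon)
        self.residual_expert = Expert(history, horizon)
        self.gate = nn.Sequential(nn.Linear(history, 2), nn.Softmax(dim=-1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, history)
        trend = moving_average(x)          # slow-moving component
        residual = x - trend               # what the trend misses
        forecasts = torch.stack([self.trend_expert(trend),
                                 self.residual_expert(residual)], dim=-1)
        weights = self.gate(x).unsqueeze(1)        # (batch, 1, 2)
        return (forecasts * weights).sum(dim=-1)   # gated combination

model = MoEDecompositionForecaster(history=48, horizon=12)
y_hat = model(torch.randn(8, 48))  # e.g. 8 windows of CPU-utilization history
```

Because the gate weights and the per-expert forecasts are explicit tensors, they can be inspected directly, which is the interpretability angle the abstract emphasizes.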

19 pages, 6626 KiB  
Article
RobustE2E: Exploring the Robustness of End-to-End Autonomous Driving
by Wei Jiang, Lu Wang, Tianyuan Zhang, Yuwei Chen, Jian Dong, Wei Bao, Zichao Zhang and Qiang Fu
Electronics 2024, 13(16), 3299; https://doi.org/10.3390/electronics13163299 - 20 Aug 2024
Abstract
Autonomous driving technology has advanced significantly with deep learning, but noise and attacks threaten its real-world deployment. While research has revealed vulnerabilities in individual intelligent tasks, a comprehensive evaluation of these impacts across complete end-to-end systems is still underexplored. To address this gap, we thoroughly analyze the robustness of four end-to-end autonomous driving systems against various types of noise and build the RobustE2E Benchmark, including five traditional adversarial attacks and a newly proposed Module-Wise Attack specifically targeting end-to-end autonomous driving in white-box settings, as well as four major categories of natural corruptions (17 types in total, each with five severity levels) in black-box settings. Additionally, we extend the robustness evaluation from the open-loop model level to closed-loop case studies at the autonomous driving system level. Our comprehensive evaluation and analysis provide valuable insights into the robustness of end-to-end autonomous driving and may offer guidance for targeted model improvements. For example, (1) even the most advanced end-to-end models suffer large planning failures under minor perturbations, with perception tasks showing the most substantial decline; (2) among adversarial attacks, our Module-Wise Attack poses the greatest threat to end-to-end autonomous driving models, while PGD-l2 is the weakest, and among the four categories of natural corruptions, noise and weather are the most harmful, followed by blur and digital distortion, which are less severe; (3) the integrated, multitask approach achieves significantly higher robustness and reliability than simpler designs, highlighting the critical role of collaborative multitask learning in autonomous driving; and (4) closed-loop autonomous driving systems amplify a model's lack of robustness. Our research contributes to developing more resilient autonomous driving models and to their deployment in the real world.
(This article belongs to the Special Issue Trustworthy Deep Learning in Practice)
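Among the traditional attacks the abstract names, PGD-l2 is reported as the weakest. For readers unfamiliar with it, here is a generic PGD-L2 sketch in PyTorch; it illustrates the standard attack under stated assumptions, not the RobustE2E benchmark code or the Module-Wise Attack, and the defaults for eps, alpha, and steps are hypothetical:

```python
# Generic L2-constrained projected gradient descent (PGD) attack sketch.
# Assumptions: a differentiable model, a differentiable loss, and inputs
# batched along dim 0. Not the paper's Module-Wise Attack or benchmark code.
import torch

def pgd_l2(model, x, y, loss_fn, eps=1.0, alpha=0.2, steps=10):
    """Iterate normalized-gradient ascent steps, projecting the perturbation
    back onto an L2 ball of radius eps around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the gradient per sample so every step has length alpha.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
        step = alpha * grad / g_norm.view(-1, *([1] * (grad.dim() - 1)))
        x_adv = (x_adv + step).detach()
        # Project back onto the eps-ball: shrink oversized perturbations.
        delta = x_adv - x
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
        scale = (eps / d_norm).clamp(max=1.0)
        x_adv = x + delta * scale.view(-1, *([1] * (delta.dim() - 1)))
    return x_adv.detach()

# Hypothetical usage with a classifier f, inputs x, and labels y:
# x_adv = pgd_l2(f, x, y, torch.nn.functional.cross_entropy, eps=0.5)
```

Comparing a model's accuracy on x_adv against its accuracy on clean x yields the kind of white-box robustness measurement the benchmark reports for each attack.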
