Machine and Deep Learning: Beyond Computational and Data-Related Limitations

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 June 2024)

Special Issue Editors


Dr. Matei Mancas
Guest Editor
Numediart Institute of Creative Technologies, University of Mons (UMONS), 7000 Mons, Belgium
Interests: prediction, analysis and triggering of human attention

Prof. Dr. Sidi Ahmed Mahmoudi
Guest Editor
Faculty of Engineering, University of Mons, 7000 Mons, Belgium
Interests: artificial intelligence; explainable artificial intelligence; machine and deep learning; edge artificial intelligence; multimedia processing; high-performance computing; cloud and edge computing

Special Issue Information

Dear Colleagues,

While we have made tremendous progress in AI, several limitations still prevent many approaches from reaching industrial application. This Special Issue focuses on two of them: (1) computational limitations and (2) data-related limitations.

The goal of the first axis is the software and hardware compression of neural networks of all types, in order to reduce both the size of these networks and the inference time they need to produce their predictions. Such compressed networks can run on smaller devices, which lowers energy consumption and avoids transferring data over the network, thereby preserving data privacy and ensuring quality of service anywhere and anytime.

The goal of the second axis is to study architectures that can learn from (1) multimodal data, (2) unlabeled or weakly labeled data, and (3) continuously arriving data, which requires learning throughout the lifetime of the algorithm while providing convenient interpretation and explanation. Indeed, in real-life applications, even when large amounts of data are available, they are usually unlabeled or biased and come from diverse sensors; massive manual labeling and models that cannot adapt to novel data are not realistic.

This Special Issue therefore aims to investigate innovative solutions to two major obstacles in current AI technology: the lack of properly labeled data and the lack of storage and computational capacity on lightweight and embedded systems. We invite submissions from researchers addressing these two axes, and we encourage papers from different domains, with or without industrial applications. The Special Issue covers recent advances in DNN architecture compression and edge deployment on the one hand, and advances in unsupervised learning, self-/semi-supervised learning, multimodal learning, explainable deep learning, active learning and continual learning on the other hand. Reviews and surveys of state-of-the-art DNN architectures are also welcome. A minimal compression sketch illustrating the first axis is given after the topic list below. The topics of interest for this Special Issue include:

  • DNN software compression;
  • DNN hardware compression;
  • DNN pruning and quantization;
  • Knowledge distillation;
  • Model deployment in edge and cloud architectures; 
  • Edge artificial intelligence;
  • Unsupervised learning;
  • Semi-supervised and self-supervised learning;
  • Active learning;
  • Explainable deep learning;
  • Continual learning;
  • Knowledge transfer;
  • Lifelong learning.

However, please do not feel limited by these topics; we will consider submissions in any related area. The Special Issue is linked to the TRAIL Institute for AI, Belgium, but is open to any submission.
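As a concrete illustration of the first axis (DNN software compression, pruning and quantization), the following minimal PyTorch sketch prunes the smallest-magnitude weights of a toy model and then applies dynamic int8 quantization. It is not taken from any submission; the model, layer sizes and pruning ratio are arbitrary assumptions chosen only to show the workflow.

```python
# Minimal sketch: magnitude pruning followed by dynamic int8 quantization.
# The model and hyperparameters are illustrative assumptions, not a reference method.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# 1) Prune 50% of the smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# 2) Quantize the remaining weights to int8 for smaller, faster CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```

In practice, pruning is usually interleaved with fine-tuning to recover accuracy, and quantization-aware training can replace the post-hoc dynamic quantization shown here.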

Dr. Matei Mancas
Prof. Dr. Sidi Ahmed Mahmoudi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine and deep learning
  • DNN compression
  • knowledge distillation
  • self-supervised learning
  • active learning
  • cloud and edge computing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

14 pages, 835 KiB  
Article
Deep-Autoencoder-Based Radar Source Recognition: Addressing Large-Scale Imbalanced Data and Edge Computing Constraints
by Yuehua Liu, Xiaoyu Li and Jifei Fang
Electronics 2024, 13(15), 2891; https://doi.org/10.3390/electronics13152891 - 23 Jul 2024
Abstract
Radar radiation source recognition technology is vital in electronic countermeasures, electromagnetic control, and air traffic management. Its primary function is to identify radar signals in real time by computing and inferring the parameters of intercepted signals. With the rapid advancement of AI technology, deep learning algorithms have shown promising results in addressing the challenges of radar radiation source recognition. However, significant obstacles remain: the radar radiation source data often exhibit large-scale, unbalanced sample distribution and incomplete sample labeling, resulting in limited training data resources. Additionally, in practical applications, models must be deployed on outdoor edge computing terminals, where the storage and computing capabilities of lightweight embedded systems are limited. This paper focuses on overcoming the constraints posed by data resources and edge computing capabilities to design and deploy large-scale radar radiation source recognition algorithms. Initially, it addresses the issues related to large-scale radar radiation source samples through data analysis, preprocessing, and feature selection, extracting and forming prior knowledge information. Subsequently, a model named RIR-DA (Radar ID Recognition based on Deep Learning Autoencoder) is developed, integrating this prior knowledge. The RIR-DA model successfully identified 96 radar radiation source targets with an accuracy exceeding 95% in a dataset characterized by a highly imbalanced sample distribution. To tackle the challenges of poor migration effects and low computational efficiency on lightweight edge computing platforms, a parallel acceleration scheme based on the embedded microprocessor T4240 is designed. This approach achieved a nearly eightfold increase in computational speed while maintaining the original training performance. Furthermore, an integrated solution for a radar radiation source intelligent detection system combining PC devices and edge devices is preliminarily designed. Experimental results demonstrate that, compared to existing radar radiation source target recognition algorithms, the proposed method offers superior model performance and greater practical extensibility. This research provides an innovative exploratory solution for the industrial application of deep learning models in radar radiation source recognition.
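For readers unfamiliar with the general approach, the sketch below shows an autoencoder whose latent code feeds a classification head trained with a class-weighted loss, a common way to combine representation learning with recognition under class imbalance. It is illustrative only and is not the RIR-DA model from the paper; all dimensions, the 96-class setting and the weighting scheme are assumptions.

```python
# Generic illustration: autoencoder + classifier head with class-weighted loss
# for imbalanced data. NOT the RIR-DA model; sizes and weights are assumptions.
import torch
import torch.nn as nn

class AEClassifier(nn.Module):
    def __init__(self, in_dim=256, latent_dim=32, n_classes=96):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))
        self.head = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.head(z)

model = AEClassifier()
x = torch.randn(8, 256)                 # a batch of pre-processed signal features
y = torch.randint(0, 96, (8,))          # radar source labels
class_weights = torch.ones(96)          # in practice, inverse class frequencies
recon, logits = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(logits, y, weight=class_weights)
loss.backward()
```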

43 pages, 33854 KiB  
Article
Explainability and Evaluation of Vision Transformers: An In-Depth Experimental Study
by Sédrick Stassin, Valentin Corduant, Sidi Ahmed Mahmoudi and Xavier Siebert
Electronics 2024, 13(1), 175; https://doi.org/10.3390/electronics13010175 - 30 Dec 2023
Cited by 1
Abstract
In the era of artificial intelligence (AI), the deployment of intelligent systems for autonomous decision making has surged across diverse fields. However, the widespread adoption of AI technology is hindered by the risks associated with ceding control to autonomous systems, particularly in critical domains. Explainable artificial intelligence (XAI) has emerged as a critical subdomain fostering human understanding and trust. It addresses the opacity of complex models such as vision transformers (ViTs), which have gained prominence lately. With the expanding landscape of XAI methods, selecting the most effective method remains an open question, due to the lack of a ground-truth label for explainability. To avoid subjective human judgment, numerous metrics have been developed, with each aiming to fulfill certain properties required for a valid explanation. This study conducts a detailed evaluation of various XAI methods applied to the ViT architecture, thereby exploring metrics criteria like faithfulness, coherence, robustness, and complexity. We especially study the metric convergence, correlation, discriminative power, and inference time of both XAI methods and metrics. Contrary to expectations, the metrics of each criterion reveal minimal convergence and correlation. This study not only challenges the conventional practice of metric-based ranking of XAI methods but also underscores the dependence of explanations on the experimental environment, thereby presenting crucial considerations for the future development and adoption of XAI methods in real-world applications.
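As background on the kind of post-hoc explanation technique commonly evaluated on ViTs, the sketch below implements attention rollout (Abnar and Zuidema, 2020), which propagates averaged attention maps through the layers to attribute the CLS token to input patches. The attention tensors here are random placeholders with ViT-B/16-like shapes, not outputs of an actual model, and the paper evaluates several methods beyond this one.

```python
# Illustrative attention-rollout sketch; inputs are random placeholders.
import torch

def attention_rollout(attentions):
    """attentions: list of (heads, tokens, tokens) attention maps, one per layer."""
    tokens = attentions[0].shape[-1]
    rollout = torch.eye(tokens)
    for attn in attentions:
        attn = attn.mean(dim=0)                       # average over heads
        attn = attn + torch.eye(tokens)               # account for residual connections
        attn = attn / attn.sum(dim=-1, keepdim=True)  # re-normalize rows
        rollout = attn @ rollout                      # accumulate across layers
    return rollout

layers = [torch.rand(12, 197, 197) for _ in range(12)]  # ViT-B/16-like shapes
saliency = attention_rollout(layers)[0, 1:]             # CLS-token attribution to the 196 patches
print(saliency.shape)  # torch.Size([196])
```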
