Advanced Machine Learning, Pattern Recognition, and Deep Learning Technologies: Methodologies and Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 June 2024

Special Issue Editors


Guest Editor
Dr. Shuping Zhao
School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China
Interests: machine learning; biometrics; data mining; image processing

Guest Editor
Dr. Jie Wen
School of Computer Science & Technology, Harbin Institute of Technology, Shenzhen, China
Interests: machine learning; data mining; pattern recognition; computer vision

Guest Editor
Dr. Chao Huang
School of Cyber Science and Technology, Sun Yat-sen University, Shenzhen 518107, China
Interests: anomaly detection; multimedia analysis; object detection; image/video compression; deep learning

Guest Editor
Dr. Bob Zhang
Department of Computer and Information Science, University of Macau, Macau, China
Interests: biometrics; pattern recognition; image processing; medical image analysis

Special Issue Information

Dear Colleagues,

In recent years, machine learning, pattern recognition, and deep learning techniques have been successfully applied across science and engineering. For example, biometric recognition based on palmprint, face, and iris traits provides personal authentication for airports, banks, and online payments; information retrieval systems help us find relevant content on the Internet; and image processing technology improves the quality of our photographs. Deep learning in particular has demonstrated powerful capabilities for extracting discriminative patterns and making accurate predictions from large-scale databases. The performance of machine learning, pattern recognition, and deep learning algorithms depends heavily on model design, mathematical interpretation, and optimization, so a sound fusion of theory and models is crucial to success in these applications. The aim of this Special Issue is to highlight recent advances in machine learning, pattern recognition, and deep learning methodologies and theories. Papers presenting interesting or significant new applications of these methods are also welcome. Topics of interest include, but are not limited to, the following:

  1. Advanced machine intelligence methods and applications;
  2. Advanced pattern analysis methods and applications;
  3. Deep-learning-based methods and applications;
  4. Biometric recognition algorithms and applications;
  5. Multi-view/-modal learning and fusion;
  6. Data mining and analysis;
  7. Hash learning-based methods and applications;
  8. Dimensionality reduction and discriminant representation;
  9. Subspace learning and clustering;
  10. Graph learning-based methods and applications;
  11. Image super-resolution/enhancing/restoration;
  12. Advanced models in computer vision, such as object tracking and detection;
  13. Sparse representation and applications.

Dr. Shuping Zhao
Dr. Jie Wen
Dr. Chao Huang
Dr. Bob Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • pattern recognition
  • deep learning
  • mathematical optimization

Published Papers (5 papers)


Research

17 pages, 1211 KiB  
Article
FireXplainNet: Optimizing Convolution Block Architecture for Enhanced Wildfire Detection and Interpretability
by Muneeb A. Khan and Heemin Park
Electronics 2024, 13(10), 1881; https://doi.org/10.3390/electronics13101881 - 11 May 2024
Abstract
The early detection of wildfires is a crucial challenge in environmental monitoring, pivotal for effective disaster management and ecological conservation. Traditional detection methods often fail to detect fires accurately and in a timely manner, resulting in significant adverse consequences. This paper presents FireXplainNet, a Convolutional Neural Network (CNN)-based model designed specifically to address these limitations through enhanced efficiency and precision in wildfire detection. We optimized data input via specialized preprocessing techniques, significantly improving detection accuracy on both the Wildfire Image and FLAME datasets. A distinctive feature of our approach is the integration of Local Interpretable Model-agnostic Explanations (LIME), which facilitates a deeper understanding of, and trust in, the model's predictive capabilities. Additionally, we investigated optimizing pretrained models through transfer learning, enriching our analysis and offering insights into the comparative effectiveness of FireXplainNet. The model achieved an accuracy of 87.32% on the FLAME dataset and 98.70% on the Wildfire Image dataset, with inference times of 0.221 and 0.168 milliseconds, respectively. These performance metrics are critical for real-time fire detection systems, underscoring the potential of FireXplainNet in environmental monitoring and disaster management strategies.
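To make the LIME step concrete, the following minimal Python sketch shows how LIME can be applied to an image classifier. It uses a stand-in torchvision model and a random input frame rather than FireXplainNet or the wildfire datasets, which are not reproduced here; all names are illustrative.

```python
# Minimal LIME sketch for an image classifier (stand-in model, not FireXplainNet).
import numpy as np
import torch
from torchvision import models, transforms
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-in classifier: a pretrained ResNet-18 in evaluation mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classifier_fn(images):
    # LIME supplies a batch of HxWx3 numpy images; return per-class probabilities.
    batch = torch.stack([preprocess(img.astype(np.uint8)) for img in images])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

# Replace with a real camera frame; here a random image keeps the sketch self-contained.
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn,
                                         top_labels=2, num_samples=500)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)  # highlights superpixels driving the prediction
```

The overlay marks the image regions that most influenced the top prediction, which is the kind of evidence the paper uses to build trust in the wildfire classifier.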

15 pages, 2456 KiB  
Article
Deep Reinforcement Learning with Godot Game Engine
by Mahesh Ranaweera and Qusay H. Mahmoud
Electronics 2024, 13(5), 985; https://doi.org/10.3390/electronics13050985 - 5 Mar 2024
Abstract
This paper introduces a Python framework for developing Deep Reinforcement Learning (DRL) in the open-source Godot game engine to support sim-to-real research. The framework communicates and interfaces with the Godot game engine to perform DRL: users set up their environment in Godot, defining the constraints, motion, interactive objects, and actions to be performed, and the framework drives those actions during training. It can be further extended to perform domain randomization and to enhance learning by increasing the complexity of the environment. Unlike proprietary physics or game engines, Godot provides extensive developmental freedom under an open-source licence. By combining Godot's powerful node-based environment system, its flexible user interface, and the proposed Python framework, developers can extend its features to build deep learning applications. Research performed on Sim2Real using this framework has provided insight into the factors that contribute to the reality gap and demonstrated the framework's effectiveness in Sim2Real applications and research.
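The paper's framework itself is not reproduced on this page; the sketch below only illustrates, under an assumed JSON-over-TCP message schema, how a Python training loop might exchange observations and actions with a Godot scene. The class, port, and message fields are hypothetical, and the Godot side would implement the matching protocol in GDScript (e.g., with StreamPeerTCP).

```python
# Hypothetical Gym-style wrapper exchanging JSON lines with a running Godot simulation.
import json
import socket

class GodotEnv:
    """Minimal environment interface over TCP; the message schema is an assumption."""

    def __init__(self, host="127.0.0.1", port=9090):
        self.sock = socket.create_connection((host, port))
        self.buf = self.sock.makefile("rw")

    def _send(self, msg: dict) -> dict:
        self.buf.write(json.dumps(msg) + "\n")
        self.buf.flush()
        return json.loads(self.buf.readline())

    def reset(self):
        return self._send({"cmd": "reset"})["obs"]

    def step(self, action):
        reply = self._send({"cmd": "step", "action": list(action)})
        return reply["obs"], reply["reward"], reply["done"], reply.get("info", {})

    def close(self):
        self.sock.close()

# Typical DRL loop (policy is any function mapping observations to actions):
# env = GodotEnv()
# obs = env.reset()
# for _ in range(1000):
#     obs, reward, done, info = env.step(policy(obs))
#     if done:
#         obs = env.reset()
```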

19 pages, 9458 KiB  
Article
Seismic Event Detection in the Copahue Volcano Based on Machine Learning: Towards an On-the-Edge Implementation
by Yair Mauad Sosa, Romina Soledad Molina, Silvana Spagnotto, Iván Melchor, Alejandro Nuñez Manquez, Maria Liz Crespo, Giovanni Ramponi and Ricardo Petrino
Electronics 2024, 13(3), 622; https://doi.org/10.3390/electronics13030622 - 2 Feb 2024
Abstract
This study focused on seismic event detection at a volcano using machine learning, leveraging the advantages of software/hardware co-design for a system on a chip (SoC) based on field-programmable gate array (FPGA) devices. A case study was conducted on the Copahue Volcano, an active stratovolcano located on the border between Argentina and Chile. Volcanic seismic event processing and detection were integrated into a PYNQ-based implementation using a low-end SoC-FPGA device. We also provide insights into integrating an SoC-FPGA into the acquisition node, which can be valuable in scenarios where stations are deployed solely for data collection and holds potential for the development of an early alert system.
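As a rough illustration of the kind of lightweight pipeline that suits an on-the-edge SoC-FPGA target, the sketch below classifies fixed-length seismic windows from spectrogram band-energy features with a small neural network. The sampling rate, window length, model size, and synthetic data are assumptions, not the authors' configuration.

```python
# Illustrative sketch: spectrogram band energies + a small classifier for seismic windows.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier

FS = 100          # assumed sampling rate of the seismic station (Hz)
WINDOW_S = 30     # assumed analysis window length (seconds)

def features(trace: np.ndarray) -> np.ndarray:
    """Log band-energy features from one single-channel window."""
    f, t, Sxx = spectrogram(trace, fs=FS, nperseg=256, noverlap=128)
    return np.log1p(Sxx.mean(axis=1))            # average power per frequency bin

# Synthetic stand-in data: background noise vs. windows with an injected transient.
rng = np.random.default_rng(0)
noise = rng.normal(size=(200, FS * WINDOW_S))
events = noise.copy()
events[:, 1000:1500] += rng.normal(scale=5.0, size=(200, 500))

X = np.array([features(w) for w in np.vstack([noise, events])])
y = np.array([0] * 200 + [1] * 200)

# A small model like this could later be quantized for a PYNQ/SoC-FPGA deployment.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```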

23 pages, 1650 KiB  
Article
A Heterogeneous Inference Framework for a Deep Neural Network
by Rafael Gadea-Gironés, José Luís Rocabado-Rocha, Jorge Fe and Jose M. Monzo
Electronics 2024, 13(2), 348; https://doi.org/10.3390/electronics13020348 - 14 Jan 2024
Abstract
Artificial intelligence (AI) is one of the most promising technologies based on machine learning algorithms. In this paper, we propose a workflow for the implementation of deep neural networks. This workflow attempts to combine the flexibility of high-level synthesis (HLS)-based networks with the architectural control features of hardware description language (HDL)-based flows. The architecture consists of a convolutional neural network, SqueezeNet v1.1, and a hard processor system (HPS) that coexists with the acceleration hardware to be designed. This methodology allows us to compare solutions based solely on software (PyTorch 1.13.1) and to propose heterogeneous inference solutions that take advantage of the best options within the software and hardware flows. The proposed workflow is implemented on a low-cost field-programmable gate array system-on-chip (FPGA SoC) platform, specifically the DE10-Nano development board. We provide systolic architectural solutions written in OpenCL that are highly flexible and easily tunable to take full advantage of the resources of programmable devices and achieve superior energy efficiency working with 32-bit floating point. From a verification point of view, the proposed method is effective, since the reference models in all tests, both for the individual layers and for the complete network, were readily available using packages well known in the development, training, and inference of deep networks.
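One practical ingredient of such a flow is a software golden reference for every layer. The sketch below shows one way to capture per-layer outputs of SqueezeNet v1.1 in PyTorch with forward hooks and dump them for comparison against a hardware implementation; the file name, key naming, and tolerance are illustrative, not the paper's verification setup.

```python
# Sketch: capture per-layer reference outputs of SqueezeNet v1.1 for hardware verification.
import numpy as np
import torch
from torchvision import models

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT).eval()
reference = {}

def save_output(name):
    def hook(module, inputs, output):
        reference[name] = output.detach().numpy()
    return hook

# Register a hook on every leaf module (conv, relu, pool, ...).
for name, module in model.named_modules():
    if len(list(module.children())) == 0:
        module.register_forward_hook(save_output(name))

x = torch.randn(1, 3, 224, 224)          # stand-in input tensor
with torch.no_grad():
    golden = model(x).numpy()

np.savez("squeezenet_reference.npz", input=x.numpy(), output=golden,
         **{k.replace(".", "_"): v for k, v in reference.items()})

# The FPGA result for any layer can then be checked against its reference, e.g.:
# assert np.allclose(fpga_layer_out, reference["features.3.squeeze"], atol=1e-3)
```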

25 pages, 17393 KiB  
Article
Enhancing Human Activity Recognition with LoRa Wireless RF Signal Preprocessing and Deep Learning
by Mingxing Nie, Liwei Zou, Hao Cui, Xinhui Zhou and Yaping Wan
Electronics 2024, 13(2), 264; https://doi.org/10.3390/electronics13020264 - 6 Jan 2024
Abstract
This paper introduces a novel approach for enhancing human activity recognition through the integration of LoRa wireless RF signal preprocessing and deep learning. We tackle the challenge of extracting features from intricate LoRa signals by scrutinizing the unique propagation process of linearly modulated LoRa signals, a critical aspect for effective feature extraction. Our preprocessing technique involves converting the complex-valued data into real numbers, utilizing the Short-Time Fourier Transform (STFT) to generate spectrograms, and incorporating differential signal processing (DSP) techniques to improve activity recognition accuracy. Additionally, we employ frequency-to-image conversion for intuitive interpretation. In comprehensive experiments covering activity classification, identity recognition, room identification, and presence detection, the selected deep learning models exhibit outstanding accuracy: ConvNeXt attains 96.7% accuracy in activity classification, 97.9% in identity recognition, and 97.3% in room identification, while the Vision TF model reaches 98.5% accuracy in presence detection. By leveraging LoRa signal characteristics and the proposed preprocessing techniques, our approach significantly enhances feature extraction, ensuring heightened accuracy and reliability in human activity recognition.
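The preprocessing chain described above can be illustrated with a short sketch: differential processing of complex baseband samples followed by an STFT spectrogram image suitable for a CNN. The sampling rate, window parameters, and synthetic I/Q data are assumptions rather than the paper's exact settings.

```python
# Illustrative preprocessing sketch for complex LoRa baseband samples.
import numpy as np
from scipy.signal import stft

FS = 125_000  # assumed baseband sampling rate (Hz)

def lora_spectrogram(iq: np.ndarray) -> np.ndarray:
    """iq: 1-D complex array of received samples -> 2-D real spectrogram image."""
    # Differential signal processing: the conjugate product of adjacent samples
    # suppresses the near-constant carrier/channel term and emphasizes changes
    # induced by human motion.
    diff = iq[1:] * np.conj(iq[:-1])
    f, t, Z = stft(diff, fs=FS, nperseg=256, noverlap=192, return_onesided=False)
    spec = 20 * np.log10(np.abs(Z) + 1e-12)                  # complex STFT -> dB magnitude
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-12)  # normalize to [0, 1]
    return spec.astype(np.float32)

# Synthetic I/Q data standing in for captured LoRa samples.
rng = np.random.default_rng(1)
iq = (rng.normal(size=50_000) + 1j * rng.normal(size=50_000)).astype(np.complex64)
image = lora_spectrogram(iq)
print(image.shape)   # frequency bins x time frames, ready to feed a CNN such as ConvNeXt
```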
