New Advances in Computer Vision and Deep Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 9261

Special Issue Editors


Guest Editor
Center of Cognitive Technology and Machine Vision, Moscow State University of Technology «STANKIN», Moscow 127055, Russia
Interests: image processing; inpainting; computer vision; artificial intelligence; data fusion; 3-D reconstruction; action recognition

Guest Editor
School of Computer Science Information Technology, Beijing Jiaotong University, Beijing 100044, China
Interests: compressed sensing; sparse representation; low-rank matrix reconstruction; wavelet construction theory; image processing; object recognition

Guest Editor
Laboratory «Cognitive Technologies and Simulation Systems», Tula State University (TulSU), Tula 300012, Russia
Interests: image preprocessing; denoising; image stitching; multichannel image; computer and machine vision; deep learning; medical imaging

Special Issue Information

Dear Colleagues,

The rapid advancement of computer vision is part of a broader set of artificial-intelligence technologies driving change beyond personal computing and the internet. Intelligent models have been developed to solve practical problems in many applications, including, but not limited to, security, human activity recognition, agricultural analysis, medical diagnosis, and VR and AR environments. In such applications, diverse types of data must be captured and processed with high accuracy and in real time. To address these challenges, the development of novel high-performance techniques for capturing high-dimensional data, building on recent advances in deep learning, is highly desirable. With the development of GPU technology and other parallel computing platforms, the analysis and processing of big data sets is yielding significant advances in computer vision and deep learning. In many domains, deep learning has decisively outperformed traditional methods and become the method of choice. Nevertheless, advanced computer vision and deep learning methods are still critically needed for the next generation of computing, robotic, and artificial intelligence systems.

This Special Issue aims to showcase innovative techniques and applications of computer vision and deep learning for solving practical tasks in various research domains. Topics of interest include, but are not limited to:

  1. Novel computer vision and machine learning methods and algorithms;
  2. Deep learning algorithms and architectures, including deep generative models;
  3. Visual quality assessment with computer vision and deep learning;
  4. Deep learning models for data fusion;
  5. The application of artificial intelligence and machine learning models in various domains, such as smart health, cities and factories;
  6. Deep learning for the analysis of digital multimodal biometric and forensics data;
  7. Artificial intelligence and machine learning for computer vision (e.g., object classification, detection, segmentation and/or recognition);
  8. Deep learning and machine learning in robotics and automation applications (e.g., perception, control, planning, navigation, inspection, manipulation and grasping).

We look forward to receiving your contributions.

Dr. Viacheslav Voronin
Dr. Yigang Cen
Dr. Evgenii Semenishchev
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • big data analysis
  • computer vision
  • deep learning
  • machine learning
  • image processing
  • visual quality assessment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

27 pages, 2402 KiB  
Article
Dual-Dataset Deep Learning for Improved Forest Fire Detection: A Novel Hierarchical Domain-Adaptive Learning Approach
by Ismail El-Madafri, Marta Peña and Noelia Olmedo-Torre
Mathematics 2024, 12(4), 534; https://doi.org/10.3390/math12040534 - 8 Feb 2024
Cited by 3 | Viewed by 1265
Abstract
This study introduces a novel hierarchical domain-adaptive learning framework designed to enhance wildfire detection capabilities, addressing the limitations inherent in traditional convolutional neural networks across varied forest environments. The framework innovatively employs a dual-dataset approach, integrating both non-forest and forest-specific datasets to train a model adept at handling diverse wildfire scenarios. The methodology leverages a novel framework that combines shared layers for broad feature extraction with specialized layers for forest-specific details, demonstrating versatility across base models. Initially demonstrated with EfficientNetB0, this adaptable approach could be applicable with various advanced architectures, enhancing wildfire detection. The research’s comparative analysis, benchmarking against conventional methodologies, showcases the proposed approach’s enhanced performance. It particularly excels in accuracy, precision, F1-score, specificity, MCC, and AUC-ROC. This research significantly reduces false positives in wildfire detection through a novel blend of multi-task learning, dual-dataset training, and hierarchical domain adaptation. Our approach advances deep learning in data-limited, complex environments, offering a critical tool for ecological conservation and community protection against wildfires.
(This article belongs to the Special Issue New Advances in Computer Vision and Deep Learning)
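To make the shared-plus-specialized layer structure described in the abstract more concrete, the sketch below pairs a shared EfficientNetB0 feature extractor with two classification heads, one fed by a generic (non-forest) fire dataset and one by forest imagery, trained jointly in a simple multi-task fashion. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation; the head layout, loss weighting, and dummy tensors are placeholders.

```python
# Minimal sketch (not the authors' code): a shared EfficientNetB0 backbone with a
# generic head and a forest-specific head, trained jointly on two datasets.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class DualHeadFireNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = efficientnet_b0(weights=None)               # shared feature extractor
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 1280                                        # EfficientNetB0 feature width
        self.generic_head = nn.Linear(feat_dim, num_classes)   # broad fire/no-fire cues
        self.forest_head = nn.Linear(feat_dim, num_classes)    # forest-specific refinement

    def forward(self, x: torch.Tensor, domain: str = "forest") -> torch.Tensor:
        z = self.pool(self.features(x)).flatten(1)
        head = self.forest_head if domain == "forest" else self.generic_head
        return head(z)

# Hypothetical multi-task step: one batch from each dataset, losses summed.
model = DualHeadFireNet()
criterion = nn.CrossEntropyLoss()
x_gen, y_gen = torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))
x_for, y_for = torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))
loss = criterion(model(x_gen, "generic"), y_gen) + criterion(model(x_for, "forest"), y_for)
loss.backward()   # an optimizer step would follow in a real training loop
```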

20 pages, 5367 KiB  
Article
Deep Learning in Sign Language Recognition: A Hybrid Approach for the Recognition of Static and Dynamic Signs
by Ahmed Mateen Buttar, Usama Ahmad, Abdu H. Gumaei, Adel Assiri, Muhammad Azeem Akbar and Bader Fahad Alkhamees
Mathematics 2023, 11(17), 3729; https://doi.org/10.3390/math11173729 - 30 Aug 2023
Cited by 10 | Viewed by 5063
Abstract
A speech impairment limits a person’s capacity for oral and auditory communication. A great improvement in communication between the deaf and the general public would be represented by a real-time sign language detector. This work proposes a deep learning-based algorithm that can identify words from a person’s gestures and detect them. There have been many studies on this topic, but the development of static and dynamic sign language recognition models is still a challenging area of research. The difficulty is in obtaining an appropriate model that addresses the challenges of continuous signs that are independent of the signer. Different signers’ speeds, durations, and many other factors make it challenging to create a model with high accuracy and continuity. For the accurate and effective recognition of signs, this study uses two different deep learning-based approaches. We create a real-time American Sign Language detector using the skeleton model, which reliably categorizes continuous signs in sign language in most cases using a deep learning approach. In the second deep learning approach, we create a sign language detector for static signs using YOLOv6. This application is very helpful for sign language users and learners to practice sign language in real time. After training both algorithms separately for static and continuous signs, we create a single algorithm using a hybrid approach. The proposed model, consisting of LSTM with MediaPipe holistic landmarks, achieves around 92% accuracy for different continuous signs, and the YOLOv6 model achieves 96% accuracy over different static signs. Throughout this study, we determine which approach is best for sequential movement detection and for the classification of different signs according to sign language and show remarkable accuracy in real time.
(This article belongs to the Special Issue New Advances in Computer Vision and Deep Learning)
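The continuous-sign branch described above (an LSTM over MediaPipe Holistic landmarks) can be sketched roughly as follows. The 1662-dimensional per-frame landmark vector (pose, face, and both hands flattened), the hidden size, and the number of sign classes are illustrative assumptions, not the paper's exact configuration.

```python
# Rough sketch (assumed configuration, not the paper's exact model): an LSTM
# classifier over per-frame MediaPipe Holistic landmark vectors.
import torch
import torch.nn as nn

class SignLSTM(nn.Module):
    def __init__(self, feat_dim: int = 1662, hidden: int = 128, num_signs: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden, num_signs)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim) sequences of flattened landmarks
        out, _ = self.lstm(frames)
        return self.classifier(out[:, -1])   # classify from the final time step

# Dummy usage: a batch of two 30-frame landmark sequences.
logits = SignLSTM()(torch.randn(2, 30, 1662))
print(logits.shape)   # torch.Size([2, 10])
```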

16 pages, 1402 KiB  
Article
Scene Recognition for Visually-Impaired People’s Navigation Assistance Based on Vision Transformer with Dual Multiscale Attention
by Yahia Said, Mohamed Atri, Marwan Ali Albahar, Ahmed Ben Atitallah and Yazan Ahmad Alsariera
Mathematics 2023, 11(5), 1127; https://doi.org/10.3390/math11051127 - 24 Feb 2023
Cited by 4 | Viewed by 2294
Abstract
Notable progress was achieved by recent technologies. As the main goal of technology is to make daily life easier, we will investigate the development of an intelligent system for the assistance of impaired people in their navigation. For visually impaired people, navigating is a very complex task that requires assistance. To reduce the complexity of this task, it is preferred to provide information that allows the understanding of surrounding spaces. Particularly, recognizing indoor scenes such as a room, supermarket, or office provides a valuable guide to the visually impaired person to understand the surrounding environment. In this paper, we proposed an indoor scene recognition system based on recent deep learning techniques. Vision transformer (ViT) is a recent deep learning technique that has achieved high performance on image classification. So, it was deployed for indoor scene recognition. To achieve better performance and to reduce the computation complexity, we proposed dual multiscale attention to collect features at different scales for better representation. The main idea was to process small patches and large patches separately, and a fusion technique was proposed to combine the features. The proposed fusion technique requires linear time for memory and computation compared to existing techniques that require quadratic time. To prove the efficiency of the proposed technique, extensive experiments were performed on a public dataset, the MIT67 dataset. The achieved results demonstrated the superiority of the proposed technique compared to the state-of-the-art. Further, the proposed indoor scene recognition system is suitable for implementation on mobile devices with fewer parameters and FLOPs.
(This article belongs to the Special Issue New Advances in Computer Vision and Deep Learning)
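The dual-branch, dual-scale idea described above can be sketched roughly as follows: small-patch and large-patch tokens are encoded separately, and fusion lets a single class-token query from each branch attend to the other branch's tokens, so the fusion cost grows linearly with the number of tokens. The dimensions, depths, shared fusion weights, and omitted positional embeddings are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch (assumptions, not the authors' model): dual-scale patch
# branches with linear-time class-token cross-attention fusion.
import torch
import torch.nn as nn

def make_encoder(dim: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=2)

class DualScaleViT(nn.Module):
    def __init__(self, dim: int = 192, num_classes: int = 67):
        super().__init__()
        self.embed_s = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # small patches
        self.embed_l = nn.Conv2d(3, dim, kernel_size=32, stride=32)  # large patches
        self.enc_s, self.enc_l = make_encoder(dim), make_encoder(dim)
        self.cls_s = nn.Parameter(torch.zeros(1, 1, dim))
        self.cls_l = nn.Parameter(torch.zeros(1, 1, dim))
        self.fuse = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def _branch(self, x, embed, cls, enc):
        tok = embed(x).flatten(2).transpose(1, 2)                 # (B, N, dim) patch tokens
        tok = torch.cat([cls.expand(x.size(0), -1, -1), tok], 1)  # prepend class token
        return enc(tok)                                           # positional embeddings omitted for brevity

    def forward(self, x):
        s = self._branch(x, self.embed_s, self.cls_s, self.enc_s)
        l = self._branch(x, self.embed_l, self.cls_l, self.enc_l)
        # Single-query cross-attention: linear in the other branch's token count.
        fs, _ = self.fuse(s[:, :1], l, l)   # small-branch class token attends to large tokens
        fl, _ = self.fuse(l[:, :1], s, s)   # large-branch class token attends to small tokens
        return self.head(torch.cat([fs.squeeze(1), fl.squeeze(1)], dim=1))

logits = DualScaleViT()(torch.randn(2, 3, 224, 224))   # e.g., 67 MIT67 scene classes
```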