Recent Advances in Deep Learning Technology for Intelligent Sensing Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 July 2023) | Viewed by 6132

Special Issue Editor

Dr. Wei Ji
School of Computing, National University of Singapore, Singapore 117417, Singapore
Interests: vision and language; video understanding

Special Issue Information

Dear Colleagues,

With the rapid development of deep learning technology, intelligent sensing systems have garnered significant interest from academia and industry. Vast amounts of data are generated daily by various sensors, such as RGB cameras, depth cameras, and infrared cameras. Accordingly, we have witnessed dramatic recent growth in AI-based vision applications across many fields. This Special Issue invites the submission of original research addressing critical challenges in deep learning for AI-based vision applications in intelligent sensor-based systems. Potential topics include, but are not limited to:

  • Image/video manipulation (super-resolution, deblurring, dehazing, image quality assessment, etc.);
  • Depth estimation, depth completion;
  • Image/video saliency, salient object detection;
  • Object detection, real-time tracking, object segmentation;
  • Anomaly detection, video localization;
  • Human activity recognition (human pose estimation, hand pose estimation);
  • Cross-modal learning (RGB-D data, RGB-T data).

Dr. Wei Ji
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • machine vision
  • machine learning
  • artificial intelligence
  • intelligent system
  • visual understanding and recognition
  • deep learning

Published Papers (3 papers)


Research

18 pages, 4335 KiB  
Article
Toward a Vision-Based Intelligent System: A Stacked Encoded Deep Learning Framework for Sign Language Recognition
by Muhammad Islam, Mohammed Aloraini, Suliman Aladhadh, Shabana Habib, Asma Khan, Abdulatif Alabdulatif and Turki M. Alanazi
Sensors 2023, 23(22), 9068; https://doi.org/10.3390/s23229068 - 9 Nov 2023
Viewed by 1123
Abstract
Sign language recognition, an essential interface between the hearing and deaf-mute communities, faces challenges with high false positive rates and computational costs, even with the use of advanced deep learning techniques. Our proposed solution is a stacked encoded model, combining artificial intelligence (AI) with the Internet of Things (IoT), which refines feature extraction and classification to overcome these challenges. We leverage a lightweight backbone model for preliminary feature extraction and use stacked autoencoders to further refine these features. Our approach harnesses the scalability of big data, showing notable improvement in accuracy, precision, recall, F1-score, and complexity analysis. Our model's effectiveness is demonstrated through testing on the ArSL2018 benchmark dataset, showcasing superior performance compared to state-of-the-art approaches. Additional validation through an ablation study with pre-trained convolutional neural network (CNN) models affirms our model's efficacy across all evaluation metrics. Our work paves the way for the sustainable development of high-performing, IoT-based sign-language-recognition applications.
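
As a rough illustration of the kind of pipeline this abstract describes, the sketch below pairs a lightweight pretrained backbone with stacked autoencoders that refine pooled features before classification. It is not the authors' implementation; the MobileNetV3 backbone, the layer sizes, and the 32-class output (matching the ArSL2018 alphabet) are assumptions.

```python
# Minimal sketch (assumptions noted above), not the paper's code: a lightweight
# backbone feeds stacked autoencoders that refine features before classification.
import torch
import torch.nn as nn
from torchvision import models

class StackedEncodedClassifier(nn.Module):
    def __init__(self, in_dim=576, hidden_dims=(256, 128), num_classes=32):
        super().__init__()
        # Lightweight backbone for preliminary feature extraction.
        backbone = models.mobilenet_v3_small(weights="DEFAULT")
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Stacked autoencoders: each stage compresses the previous stage's
        # representation and is trained to reconstruct it.
        dims = (in_dim,) + tuple(hidden_dims)
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
            for i in range(len(dims) - 1)
        )
        self.decoders = nn.ModuleList(
            nn.Linear(dims[i + 1], dims[i]) for i in range(len(dims) - 1)
        )
        self.classifier = nn.Linear(dims[-1], num_classes)

    def forward(self, x):
        z = self.pool(self.features(x)).flatten(1)
        recon_losses = []
        for enc, dec in zip(self.encoders, self.decoders):
            h = enc(z)
            recon_losses.append(nn.functional.mse_loss(dec(h), z))
            z = h
        # Classification logits plus a reconstruction term that can be added
        # to the cross-entropy loss during training.
        return self.classifier(z), sum(recon_losses)
```

In practice the reconstruction term and the classification loss would be weighted and minimized jointly; the weighting is left out here for brevity.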

15 pages, 5292 KiB  
Article
Fire Detection and Notification Method in Ship Areas Using Deep Learning and Computer Vision Approaches
by Kuldoshbay Avazov, Muhammad Kafeel Jamil, Bahodir Muminov, Akmalbek Bobomirzaevich Abdusalomov and Young-Im Cho
Sensors 2023, 23(16), 7078; https://doi.org/10.3390/s23167078 - 10 Aug 2023
Cited by 10 | Viewed by 2646
Abstract
Fire incidents onboard ships have severe, wide-ranging consequences for the safety of the crew, the cargo, the environment, finances, reputation, and more. Timely detection of fires is therefore essential for quick responses and effective mitigation. This paper presents a fire detection technique based on YOLOv7 (You Only Look Once version 7), incorporating improved deep learning algorithms. The YOLOv7 architecture, with an improved E-ELAN (extended efficient layer aggregation network) as its backbone, serves as the basis of our fire detection system; its enhanced feature fusion technique makes it superior to its predecessors. To train the model, we collected 4622 images of various ship scenarios and applied data augmentation techniques such as rotation, horizontal and vertical flips, and scaling. Through rigorous evaluation, our model showcases enhanced fire-recognition capabilities that improve maritime safety. The proposed strategy achieves an accuracy of 93% in detecting fires, helping to minimize catastrophic incidents. Objects visually similar to fire may lead to false detections, but this can be controlled by expanding the dataset. Our model can be utilized as a real-time fire detector in challenging environments and for small-object detection. Advancements in deep learning models hold the potential to enhance safety measures, and our proposed model exhibits this potential. Experimental results show that the proposed method can be used successfully to protect ships and to monitor fires in ship port areas. Finally, we compared the performance of our method with that of recently reported fire-detection approaches using widely used performance metrics to test the fire classification results achieved.
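
For readers who want a concrete starting point, the short sketch below expresses the augmentations named in the abstract (rotation, horizontal and vertical flips, scaling) as a torchvision pipeline. The parameter values and image size are illustrative assumptions rather than the paper's settings, and a full YOLOv7 detection setup would additionally need box-aware transforms so that bounding boxes stay aligned with the augmented images.

```python
# Minimal sketch of image-level augmentation (rotation, flips, scaling);
# parameter values are assumptions, not the paper's configuration.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomRotation(degrees=15),                # rotation
    transforms.RandomHorizontalFlip(p=0.5),               # horizontal flip
    transforms.RandomVerticalFlip(p=0.5),                 # vertical flip
    transforms.RandomResizedCrop(640, scale=(0.8, 1.0)),  # scaling / crop
    transforms.ToTensor(),
])
```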

18 pages, 15713 KiB  
Article
Hierarchical Image Transformation and Multi-Level Features for Anomaly Defect Detection
by Isack Farady, Chia-Chen Kuo, Hui-Fuang Ng and Chih-Yang Lin
Sensors 2023, 23(2), 988; https://doi.org/10.3390/s23020988 - 15 Jan 2023
Cited by 5 | Viewed by 1821
Abstract
Anomalies are a set of samples that do not follow the normal behavior of the majority of data. In an industrial dataset, anomalies appear in a very small number of samples. Currently, deep learning-based models have achieved important advances in image anomaly detection. However, for general models, real-world application data consisting of non-ideal images, also known as poison images, remain a challenge. When the work environment is not conducive to consistently acquiring good or ideal samples, an additional adaptive learning model is needed. In this work, we design a methodology to tackle poison or non-ideal images that commonly appear in industrial production lines by enhancing the existing training data. We propose Hierarchical Image Transformation and Multi-level Features (HIT-MiLF) modules for an anomaly detection network to adapt to perturbations from novelties in testing images. This approach provides a hierarchical process for image transformation during pre-processing and explores the most efficient layer of extracted features from a CNN backbone. The model generates new transformations of training samples that simulate the non-ideal condition and learns normality in high-dimensional features before applying a Gaussian mixture model to detect anomalies in data it has never seen before. Our experimental results show that hierarchical transformation and multi-level feature exploration improve the baseline performance on industrial metal datasets.
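
The sketch below illustrates the general recipe the abstract describes: pool features from several levels of a pretrained CNN, fit a Gaussian mixture model on normal samples only, and flag low-likelihood samples as anomalies. It is not the HIT-MiLF implementation; the ResNet-18 backbone, the chosen layers, the GMM settings, and the thresholding are assumptions, and random tensors stand in for real image batches.

```python
# Minimal sketch (assumptions noted above): multi-level CNN features + GMM scoring.
import numpy as np
import torch
from sklearn.mixture import GaussianMixture
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

extractor = create_feature_extractor(
    models.resnet18(weights="DEFAULT"),
    return_nodes={"layer2": "mid", "layer3": "high"},  # multi-level features
).eval()

def embed(images):
    """Global-average-pool each selected level and concatenate per image."""
    with torch.no_grad():
        feats = extractor(images)
    return torch.cat([f.mean(dim=(2, 3)) for f in feats.values()], dim=1).numpy()

# Random tensors stand in for real (N, 3, H, W) image batches.
normal_images = torch.rand(64, 3, 224, 224)   # defect-free training images
test_images = torch.rand(16, 3, 224, 224)     # images to score

gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
gmm.fit(embed(normal_images))                 # learn "normality" only

# Lower log-likelihood under the GMM means more anomalous; in practice the
# threshold would be tuned on validation data rather than a fixed quantile.
scores = -gmm.score_samples(embed(test_images))
is_anomaly = scores > np.quantile(scores, 0.95)
```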
