
Sensors and Sensing Technologies for Object Detection and Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 June 2024

Special Issue Editors


Prof. Dr. Chih-Yang Lin
Guest Editor
Department of Mechanical Engineering, National Central University, Taoyuan 320317, Taiwan
Interests: computer vision; image processing; machine learning

Dr. Ir. Kahlil Muchtar
Guest Editor
Department of Electrical and Computer Engineering, Universitas Syiah Kuala, Banda Aceh 23111, Indonesia
Interests: multimedia; video surveillance; 3D reconstruction

Special Issue Information

Dear Colleagues,

Object detection and recognition are fundamental tasks in many fields, including computer vision, machine vision, robotics, surveillance systems, autonomous vehicles, and industrial automation. The accurate and efficient detection and recognition of objects play a crucial role in enabling intelligent systems and enhancing decision-making processes.

Modern sensing technologies, such as cameras, LiDAR, radar, and other emerging modalities, have revolutionized object detection and recognition. These sensors capture diverse information about the environment, including visual, depth, and motion data, which is crucial for accurate and reliable detection and recognition.

This Special Issue aims to explore the latest advancements in sensor and sensing technologies for object detection and recognition. We invite original research papers and review articles that showcase significant developments in these fields. Potential areas of interest include, but are not limited to:

  • Object detection;
  • Object recognition;
  • Vision sensors;
  • Defect detection;
  • Sensing technologies;
  • Machine vision;
  • Computer vision;
  • Artificial intelligence;
  • Deep learning;
  • Image processing;
  • Feature extraction;
  • Classification algorithms;
  • Sensor fusion.

For further information or advice, please contact the Special Issue Editor, Anika Deng, directly at <[email protected]>.

Prof. Dr. Chih-Yang Lin
Dr. Ir. Kahlil Muchtar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)

Research

17 pages, 2472 KiB  
Article
LiDAR-Based Intensity-Aware Outdoor 3D Object Detection
by Ammar Yasir Naich and Jesús Requena Carrión
Sensors 2024, 24(9), 2942; https://doi.org/10.3390/s24092942 - 6 May 2024
Abstract
LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing approaches rely primarily on geometric or structural feature abstractions from LiDAR point clouds, which makes them susceptible to environmental noise from adverse weather conditions or highly scattering media. In this work, we propose an intensity-aware voxel encoder for robust 3D object detection. The proposed encoder generates an intensity histogram that describes the distribution of point intensities within a voxel and uses it to enhance the voxel feature set. We integrate this intensity-aware encoder into an efficient single-stage voxel-based detector for 3D object detection. Experimental results on the KITTI dataset show that our method achieves results comparable to the state-of-the-art method for car objects, in both 3D and bird's-eye-view detection, and superior results for pedestrian and cyclist objects. Furthermore, our model achieves an inference rate of 40.7 FPS, higher than that of state-of-the-art methods, at a lower computational cost.
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
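
The mechanism at the heart of this paper, describing each voxel by the distribution of its point intensities, is straightforward to prototype. Below is a minimal NumPy sketch of such a per-voxel intensity histogram; the voxel size, bin count, intensity range of [0, 1], and normalization are illustrative assumptions, not the authors' actual configuration.

    import numpy as np

    def intensity_histogram_features(points, voxel_size=0.2, n_bins=8):
        """Summarize each voxel of a point cloud by an intensity histogram.

        points: (N, 4) array with columns (x, y, z, intensity in [0, 1]).
        Returns an (n_voxels, n_bins) array of normalized histograms, which
        would be concatenated with geometric voxel features in a detector.
        """
        xyz, intensity = points[:, :3], points[:, 3]
        # Map each point to an integer voxel index on a regular grid.
        voxel_idx = np.floor(xyz / voxel_size).astype(np.int64)
        # Collapse 3D indices to one voxel id per point.
        _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
        inverse = inverse.ravel()
        n_voxels = inverse.max() + 1
        # Bin each point's intensity and accumulate counts per voxel.
        bins = np.minimum((intensity * n_bins).astype(np.int64), n_bins - 1)
        hist = np.zeros((n_voxels, n_bins))
        np.add.at(hist, (inverse, bins), 1.0)
        # Normalize so the descriptor is insensitive to point count.
        return hist / hist.sum(axis=1, keepdims=True)

    points = np.random.rand(1000, 4)  # synthetic stand-in for a LiDAR scan
    print(intensity_histogram_features(points).shape)  # (n_voxels, 8)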

24 pages, 7917 KiB  
Article
Recognition of 3D Images by Fusing Fractional-Order Chebyshev Moments and Deep Neural Networks
by Lin Gao, Xuyang Zhang, Mingrui Zhao and Jinyi Zhang
Sensors 2024, 24(7), 2352; https://doi.org/10.3390/s24072352 - 7 Apr 2024
Abstract
To achieve efficient recognition of 3D images while reducing network parameter complexity, we propose a novel 3D image recognition method combining deep neural networks with fractional-order Chebyshev moments. First, the fractional-order Chebyshev moment (FrCM) unit, built from Chebyshev moments and the three-term recurrence relation, is calculated using successive integrals. Next, moment invariants based on fractional-order Chebyshev moments are used to achieve invariance to image scaling, rotation, and translation, a design that also enhances computational efficiency. Finally, a fused network embedding the FrCM unit (FrCMs-DNNs) extracts deep features, whose effectiveness is analyzed in terms of parameter count, computational cost, and recognition capability. The Princeton Shape Benchmark and a medical image dataset are used for experimental validation. Compared with other deep neural networks, FrCMs-DNNs achieves the highest accuracy in image recognition and classification. We use two evaluation indices, mean square error (MSE) and peak signal-to-noise ratio (PSNR), to measure the reconstruction quality of FrCMs after 3D image reconstruction, and assess the accuracy of the FrCMs-DNNs model in 3D object recognition through an ablation experiment using four evaluation indices: accuracy, precision, recall, and F1-score.
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
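
For readers new to moment-based features, the sketch below illustrates the classical building blocks that the FrCM unit generalizes: Chebyshev polynomial values generated by the three-term recurrence, and a separable 2D projection of an image onto that basis. The fractional-order extension from the paper is not reproduced here, and the grid size and moment order are arbitrary illustrative choices.

    import numpy as np

    def chebyshev_basis(order, n_samples):
        """Evaluate T_0..T_{order-1} at n_samples points in [-1, 1] using
        the three-term recurrence T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x)."""
        x = np.linspace(-1.0, 1.0, n_samples)
        T = np.zeros((order, n_samples))
        T[0] = 1.0
        if order > 1:
            T[1] = x
        for k in range(2, order):
            T[k] = 2.0 * x * T[k - 1] - T[k - 2]
        return T

    def chebyshev_moments(image, order=8):
        """Project a 2D image onto the separable Chebyshev basis."""
        h, w = image.shape
        Ty = chebyshev_basis(order, h)
        Tx = chebyshev_basis(order, w)
        return Ty @ image @ Tx.T  # (order, order) moment matrix

    image = np.random.rand(64, 64)  # stand-in for one slice of a 3D volume
    print(chebyshev_moments(image).shape)  # (8, 8)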

15 pages, 2735 KiB  
Article
Application of Self-Attention Generative Adversarial Network for Electromagnetic Imaging in Half-Space
by Chien-Ching Chiu, Yang-Han Lee, Po-Hsiang Chen, Ying-Chen Shih and Jiang Hao
Sensors 2024, 24(7), 2322; https://doi.org/10.3390/s24072322 - 5 Apr 2024
Abstract
In this paper, we introduce a novel artificial intelligence technique with an attention mechanism for half-space electromagnetic imaging. A dielectric object in half-space is illuminated by TM (transverse magnetic) waves. Since measurements can only be made in the upper half-space, the measurement angle is limited. We therefore apply a back-propagation scheme (BPS) to generate, from the measured scattered fields, an initial guessed image of a scatterer buried in the lower half-space; this step effectively reduces the high nonlinearity of the inverse scattering problem. We then feed the guessed images into a generative adversarial network (GAN) and a self-attention generative adversarial network (SAGAN), respectively, to compare their reconstruction performance. Numerical results show that both SAGAN and GAN can reconstruct dielectric objects and the MNIST dataset under the same measurement conditions, and that SAGAN reconstructs electromagnetic images more accurately and efficiently than GAN.
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
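
The comparison hinges on SAGAN's self-attention block, which lets every spatial position of a feature map attend to every other position. Below is a minimal PyTorch sketch of the widely used SAGAN-style formulation (1x1 query/key/value projections with a zero-initialized residual scale); it is a generic illustration, not the authors' exact network.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttention2d(nn.Module):
        """SAGAN-style self-attention over the spatial positions of a
        convolutional feature map."""
        def __init__(self, channels):
            super().__init__()
            # 1x1 convolutions form the query/key/value projections.
            self.query = nn.Conv2d(channels, channels // 8, 1)
            self.key = nn.Conv2d(channels, channels // 8, 1)
            self.value = nn.Conv2d(channels, channels, 1)
            # Residual scale starts at zero, so training begins as a plain GAN.
            self.gamma = nn.Parameter(torch.zeros(1))

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
            k = self.key(x).flatten(2)                    # (b, c//8, hw)
            v = self.value(x).flatten(2)                  # (b, c, hw)
            attn = F.softmax(q @ k, dim=-1)               # (b, hw, hw)
            out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
            return self.gamma * out + x

    x = torch.randn(2, 64, 16, 16)  # stand-in for generator feature maps
    print(SelfAttention2d(64)(x).shape)  # torch.Size([2, 64, 16, 16])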
