Sensors and Sensing Technologies for Object Detection and Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 20 December 2025

Special Issue Editors


Guest Editor
Department of Mechanical Engineering, National Central University, No. 300, Zhongda Rd., Zhongli District, Taoyuan City 320317, Taiwan
Interests: machine learning; computer vision; big data analysis; agricultural sensor system

Guest Editor
Department of Electrical and Computer Engineering, Universitas Syiah Kuala, Banda Aceh 23111, Indonesia
Interests: multimedia; video surveillance; 3D reconstruction

Special Issue Information

Dear Colleagues,

Object detection and recognition are fundamental tasks in various fields, including computer vision, machine vision, robotics, surveillance systems, autonomous vehicles, and industrial automation. The accurate and efficient detection and recognition of objects play a crucial role in enabling intelligent systems and enhancing decision-making processes.

Modern sensing technologies, such as cameras, LiDAR, radar, and other emerging modalities, have revolutionized object detection and recognition. These sensors capture diverse information about the environment, including visual, depth, and motion data, which is crucial for accurate and reliable object detection and recognition.

This Special Issue aims to explore the latest advancements in sensor and sensing technologies for object detection and recognition. We invite both original research papers and review articles that showcase the significant developments in these fields. Potential areas of interest include, but are not limited to:

  • Object detection;
  • Object recognition;
  • Vision sensors;
  • Defect detection;
  • Sensing technologies;
  • Machine vision;
  • Computer vision;
  • Artificial intelligence;
  • Deep learning;
  • Image processing;
  • Feature extraction;
  • Classification algorithms;
  • Sensor fusion.

For more information or advice, please contact the Special Issue Editor Anika Deng directly at <anika.deng@mdpi.com>.

Prof. Dr. Chih-Yang Lin
Dr. Ir. Kahlil Muchtar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

18 pages, 16639 KiB  
Article
Improving Object Detection for Time-Lapse Imagery Using Temporal Features in Wildlife Monitoring
by Marcus Jenkins, Kirsty A. Franklin, Malcolm A. C. Nicoll, Nik C. Cole, Kevin Ruhomaun, Vikash Tatayah and Michal Mackiewicz
Sensors 2024, 24(24), 8002; https://doi.org/10.3390/s24248002 - 14 Dec 2024
Cited by 1 | Viewed by 1728
Abstract
Monitoring animal populations is crucial for assessing the health of ecosystems. Traditional methods, which require extensive fieldwork, are increasingly being supplemented by time-lapse camera-trap imagery combined with an automatic analysis of the image data. The latter usually involves some object detector aimed at detecting relevant targets (commonly animals) in each image, followed by some postprocessing to gather activity and population data. In this paper, we show that the performance of an object detector in a single frame of a time-lapse sequence can be improved by including spatio-temporal features from the prior frames. We propose a method that leverages temporal information by integrating two additional spatial feature channels which capture stationary and non-stationary elements of the scene and consequently improve scene understanding and reduce the number of stationary false positives. The proposed technique achieves a significant improvement of 24% in mean average precision (mAP@0.5:0.95) over the baseline (temporal feature-free, single frame) object detector on a large dataset of breeding tropical seabirds. We envisage our method will be widely applicable to other wildlife monitoring applications that use time-lapse imaging. Full article
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
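The core idea of the abstract, augmenting each frame with channels that separate stationary from non-stationary scene content, can be sketched as follows. This is a minimal illustration using a median-based background estimate; the paper's exact channel definitions, dimensions, and function names (`add_temporal_channels` is hypothetical) will differ.

```python
import numpy as np

def add_temporal_channels(prior_frames, current):
    """Append two temporal channels to a grayscale frame.

    prior_frames: (T, H, W) array of earlier time-lapse frames
    current:      (H, W) current frame
    Returns (H, W, 3): [current, stationary estimate, non-stationary residual]
    """
    stationary = np.median(prior_frames, axis=0)  # background: what stays put
    motion = np.abs(current - stationary)         # residual: what has moved
    return np.stack([current, stationary, motion], axis=-1)

rng = np.random.default_rng(0)
prior = rng.random((8, 4, 4))
now = rng.random((4, 4))
x = add_temporal_channels(prior, now)
print(x.shape)  # (4, 4, 3)
```

The stacked array can then be fed to a detector whose first layer accepts three input channels, so stationary false positives (e.g., rocks resembling birds) are easier to suppress.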

12 pages, 3196 KiB  
Article
Comparing Human Performance on Target Localization in Near Infrared and Long Wave Infrared for Cluttered Environments
by Li Zhang, Mark Martino, Orges Furxhi, Eddie L. Jacobs, Ronald G. Driggers and C. Kyle Renshaw
Sensors 2024, 24(20), 6662; https://doi.org/10.3390/s24206662 - 16 Oct 2024
Cited by 1 | Viewed by 926
Abstract
In the context of rapid advancements in AI, the accuracies and speeds among various AI models and methods are often compared. However, a basic question is rarely asked: is AI better than humans, and if so, under what conditions? This paper investigates human ability to detect distant landmark targets under cluttered surroundings such as buildings, trees, and clouds in NIR and LWIR images, aiming to facilitate AI object detection performance analysis. Our investigation employs perception tests and a human performance model to analyze object detection capabilities. The results reveal distinctive differences in NIR and LWIR detectability, showing that although LWIR performs less effectively at range, it offers superior robustness across various environmental conditions. Our findings suggest that AI could be particularly advantageous for object detection in LWIR, as it outperforms humans in terms of detection accuracy at long range. Full article
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)

17 pages, 2472 KiB  
Article
LiDAR-Based Intensity-Aware Outdoor 3D Object Detection
by Ammar Yasir Naich and Jesús Requena Carrión
Sensors 2024, 24(9), 2942; https://doi.org/10.3390/s24092942 - 6 May 2024
Cited by 3 | Viewed by 2272
Abstract
LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing LiDAR-based 3D object detection and localization approaches primarily use geometric or structural feature abstractions from LiDAR point clouds. However, these approaches can be susceptible to environmental noise due to adverse weather conditions or the presence of highly scattering media. In this work, we propose an intensity-aware voxel encoder for robust 3D object detection. The proposed voxel encoder generates an intensity histogram that describes the distribution of point intensities within a voxel and is used to enhance the voxel feature set. We integrate this intensity-aware encoder into an efficient single-stage voxel-based detector for 3D object detection. Experimental results obtained using the KITTI dataset show that our method achieves comparable results with respect to the state-of-the-art method for car objects in 3D detection and from a bird’s-eye view and superior results for pedestrian and cyclist objects. Furthermore, our model can achieve a detection rate of 40.7 FPS during inference time, which is higher than that of the state-of-the-art methods and incurs a lower computational cost. Full article
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
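The intensity-histogram voxel feature described in the abstract can be sketched in a few lines. The bin count, intensity range, and normalization below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def voxel_intensity_histogram(intensities, bins=8):
    """Histogram of LiDAR return intensities for the points in one voxel,
    normalized to sum to 1, for concatenation with geometric voxel features.

    intensities: 1-D array of per-point reflectance values (assumed in [0, 1])
    Returns a (bins,) feature vector.
    """
    hist, _ = np.histogram(np.clip(intensities, 0.0, 1.0),
                           bins=bins, range=(0.0, 1.0))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

# Points in a voxel: two dim returns (e.g., asphalt) and three bright ones.
feat = voxel_intensity_histogram(np.array([0.1, 0.12, 0.8, 0.85, 0.9]))
print(feat.shape)  # (8,)
```

Because the distribution of intensities (rather than raw geometry alone) is encoded, the feature remains informative when point positions are perturbed by scattering media.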

24 pages, 7917 KiB  
Article
Recognition of 3D Images by Fusing Fractional-Order Chebyshev Moments and Deep Neural Networks
by Lin Gao, Xuyang Zhang, Mingrui Zhao and Jinyi Zhang
Sensors 2024, 24(7), 2352; https://doi.org/10.3390/s24072352 - 7 Apr 2024
Cited by 2 | Viewed by 1542
Abstract
In order to achieve efficient recognition of 3D images and reduce the complexity of network parameters, we proposed a novel 3D image recognition method combining deep neural networks with fractional-order Chebyshev moments. Firstly, the fractional-order Chebyshev moment (FrCM) unit, consisting of Chebyshev moments and the three-term recurrence relation method, is calculated separately using successive integrals. Next, moment invariants based on fractional order and Chebyshev moments are utilized to achieve invariants for image scaling, rotation, and translation. This design aims to enhance computational efficiency. Finally, the fused network embedding the FrCM unit (FrCMs-DNNs) extracts depth features to analyze the effectiveness from the aspects of parameter quantity, computing resources, and identification capability. Meanwhile, the Princeton Shape Benchmark dataset and medical images dataset are used for experimental validation. Compared with other deep neural networks, FrCMs-DNNs has the highest accuracy in image recognition and classification. We used two evaluation indices, mean square error (MSE) and peak signal-to-noise ratio (PSNR), to measure the reconstruction quality of FrCMs after 3D image reconstruction. The accuracy of the FrCMs-DNNs model in 3D object recognition was assessed through an ablation experiment, considering the four evaluation indices of accuracy, precision, recall rate, and F1-score. Full article
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
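The three-term recurrence relation the abstract mentions is the classic Chebyshev recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x). The sketch below shows the integer-order case only; the paper's fractional-order generalization and its moment computation are not reproduced here.

```python
import numpy as np

def chebyshev_basis(x, order):
    """Evaluate Chebyshev polynomials T_0..T_order at points x using the
    three-term recurrence T_{n+1} = 2x*T_n - T_{n-1}.

    x: array of evaluation points in [-1, 1]
    Returns an (order + 1, len(x)) array of basis values.
    """
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x.copy()]          # T_0 = 1, T_1 = x
    for _ in range(2, order + 1):
        T.append(2.0 * x * T[-1] - T[-2])    # recurrence step
    return np.stack(T[: order + 1])

vals = chebyshev_basis([0.5], 3)
print(vals.ravel())  # T0..T3 at x = 0.5 -> [1.0, 0.5, -0.5, -1.0]
```

Moments are then obtained by projecting the image onto these basis functions; the recurrence avoids evaluating each polynomial from scratch, which is what makes the moment computation efficient.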

15 pages, 2735 KiB  
Article
Application of Self-Attention Generative Adversarial Network for Electromagnetic Imaging in Half-Space
by Chien-Ching Chiu, Yang-Han Lee, Po-Hsiang Chen, Ying-Chen Shih and Jiang Hao
Sensors 2024, 24(7), 2322; https://doi.org/10.3390/s24072322 - 5 Apr 2024
Cited by 3 | Viewed by 1927
Abstract
In this paper, we introduce a novel artificial intelligence technique with an attention mechanism for half-space electromagnetic imaging. A dielectric object in half-space is illuminated by TM (transverse magnetic) waves. Since measurements can only be made in the upper space, the measurement angle will be limited. As a result, we apply a back-propagation scheme (BPS) to generate an initial guessed image from the measured scattered fields for a scatterer buried in the lower half-space. This process can effectively reduce the high nonlinearity of the inverse scattering problem. We further input the guessed images into the generative adversarial network (GAN) and the self-attention generative adversarial network (SAGAN), respectively, to compare the reconstruction performance. Numerical results prove that both SAGAN and GAN can reconstruct dielectric objects and the MNIST dataset under the same measurement conditions. Our analysis also reveals that SAGAN is able to reconstruct electromagnetic images more accurately and efficiently than GAN. Full article
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
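A generic back-propagation initial guess, of the kind the abstract uses to seed the networks, projects the measured field back through the adjoint of a linearized forward operator with a least-squares step size. This is a textbook BPS sketch under a discretized linear model; the paper's half-space Green's function and exact formulation are not reproduced, and `bp_initial_guess` is a hypothetical name.

```python
import numpy as np

def bp_initial_guess(G, y):
    """Back-propagation initial estimate for a linearized inverse
    scattering problem.

    G: (M, N) complex forward operator (measurements x unknowns)
    y: (M,)   complex measured scattered field
    Returns an (N,) initial image to seed a learned reconstruction.
    """
    v = G.conj().T @ y                        # adjoint back-projection
    Gv = G @ v
    gamma = np.vdot(Gv, y) / np.vdot(Gv, Gv)  # minimizes ||y - gamma*G v||
    return gamma * v
```

Seeding the GAN/SAGAN with this estimate, rather than the raw fields, is what reduces the nonlinearity the networks must resolve.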
