
Innovations with LiDAR Sensors and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Radar Sensors".

Deadline for manuscript submissions: closed (22 March 2024) | Viewed by 4303

Special Issue Editors


Dr. Stephane Guinard
Guest Editor
Centre for Research on Geospatial Data and Intelligence (CRDIG), Laval University, Quebec, QC G1V 0A6, Canada
Interests: LiDAR; 3D point cloud processing; classification; segmentation; 3D reconstruction; transfer learning

Prof. Dr. Thierry Badard
Guest Editor
Centre for Research on Geospatial Data and Intelligence (CRDIG), Laval University, Quebec, QC G1V 0A6, Canada
Interests: geospatial big data; machine learning; 3D modeling and data processing; geo-analytics; IoT; geospatial data streaming; digital twins

Special Issue Information

Dear Colleagues,

LiDAR sensors perform fast and geometrically faithful acquisitions and are increasingly used in fields ranging from digital twins to forestry mapping. Recently, the question of the generalization capabilities of LiDAR-based methods has been raised, especially for complex tasks in which the available data are limited. This problem can be tackled through the design of custom sensors that capture all the desired information, the implementation of new learning techniques, or generalization from one domain to another via transfer learning and domain adaptation.

Through this Special Issue, we want to encourage works that focus on the design of new means of acquisition, or on processing and learning techniques for innovative applications, along with their generalization capabilities.

Potential topics include:

- LiDAR sensors

- Hybrid use of LiDAR with other sensors

- Monitoring systems

- 3D data processing

- Classification

- Segmentation

- Object detection

- 3D reconstruction

- Deep learning

- Transfer learning

- Domain adaptation

- Digital twins

- Bathymetry

- Forestry mapping

- Cultural heritage

- Real-time mapping

Dr. Stephane Guinard
Prof. Dr. Thierry Badard
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

13 pages, 4198 KiB  
Article
3D Point Cloud Object Detection Method Based on Multi-Scale Dynamic Sparse Voxelization
by Jiayu Wang, Ye Liu, Yongjian Zhu, Dong Wang and Yu Zhang
Sensors 2024, 24(6), 1804; https://doi.org/10.3390/s24061804 - 11 Mar 2024
Viewed by 751
Abstract
Perception plays a crucial role in ensuring the safety and reliability of autonomous driving systems, but the recognition and localization of small objects in complex scenarios still pose challenges. In this paper, we propose a point cloud object detection method based on dynamic sparse voxelization to enhance the detection of small objects. The method employs a specialized point cloud encoding network to learn and generate pseudo-images from point cloud features; the feature extraction stage uses sliding windows and transformer-based methods. Furthermore, multi-scale feature fusion is performed to enrich the fine-grained information available for small objects. In our experiments, "small object" refers to objects such as cyclists and pedestrians, which occupy fewer pixels than vehicles, as well as objects in the harder detection-difficulty categories. The experimental results demonstrate that, compared to the PointPillars algorithm and other related algorithms on the KITTI public dataset, the proposed algorithm improves detection accuracy for cyclist and pedestrian targets. In particular, accuracy on objects in the moderate and hard difficulty categories improves notably, with an overall average increase of about 5%.
(This article belongs to the Special Issue Innovations with LiDAR Sensors and Applications)
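The multi-scale dynamic voxelization the abstract describes can be illustrated with a minimal NumPy sketch. This is an illustration only, not the authors' network: the function name, voxel sizes, and mean-pooled features are our own assumptions. Dynamic voxelization keeps every point and groups points by voxel index rather than sampling a fixed budget per voxel; running it at two resolutions yields the multi-scale features that a fusion stage could later combine.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Group points into voxels and return per-voxel mean features.
    Dynamic voxelization: no fixed point budget per voxel, no points dropped."""
    idx = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # Unique voxel coordinates and an inverse map from each point to its voxel
    coords, inverse = np.unique(idx, axis=0, return_inverse=True)
    feats = np.zeros((len(coords), points.shape[1]))
    np.add.at(feats, inverse, points)          # scatter-add point features
    feats /= np.bincount(inverse)[:, None]     # mean-pool per voxel
    return coords, feats

rng = np.random.default_rng(0)
cloud = rng.uniform(0, 10, size=(1000, 3))
# Multi-scale: voxelize the same cloud at two resolutions
coarse_coords, coarse_feats = voxelize(cloud, voxel_size=2.0)
fine_coords, fine_feats = voxelize(cloud, voxel_size=0.5)
```

The finer grid preserves small-object detail while the coarser grid provides context; a real pipeline would feed both into a learned encoder rather than mean-pooling.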

21 pages, 10642 KiB  
Article
Non-Repetitive Scanning LiDAR Sensor for Robust 3D Point Cloud Registration in Localization and Mapping Applications
by Ahmad K. Aijazi and Paul Checchin
Sensors 2024, 24(2), 378; https://doi.org/10.3390/s24020378 - 08 Jan 2024
Viewed by 929
Abstract
Three-dimensional point cloud registration is a fundamental task for localization and mapping in autonomous navigation applications. Registration algorithms have evolved over the years; nevertheless, several challenges remain. Recently, non-repetitive scanning LiDAR sensors have emerged as a promising 3D data acquisition tool, but their suitability for robust point cloud registration still needs to be ascertained. In this paper, we explore the feasibility of one such LiDAR sensor, with a Spirograph-type non-repetitive scanning pattern, for robust 3D point cloud registration. We first characterize the data of this unique sensor; then, building on these results, we propose a new 3D point cloud registration method that exploits the sensor's scanning pattern to register successive 3D scans. The characteristic equations of the scanning pattern, determined during the characterization phase, are used to reconstruct a perfect scan at the target distance. The real scan is then compared with this reconstructed scan to extract objects in the scene. The displacement of these extracted objects with respect to the center of the scanning pattern is compared across successive scans to determine the transformations used to register them. The proposed method is evaluated on two different real-world datasets and compared with other state-of-the-art registration methods. The performance (localization and mapping results) is further improved by adding constraints such as loop closure and by employing a Curve Fitting Derivative Filter (CFDT) to better estimate the trajectory. The results demonstrate the suitability of the sensor for such applications: the proposed method is comparable to other methods in accuracy but surpasses them in processing time.
(This article belongs to the Special Issue Innovations with LiDAR Sensors and Applications)
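A "Spirograph-type" non-repetitive pattern can be sketched as the superposition of two rotations whose rate ratio is irrational, as in Risley-prism scanners. This is a hedged toy model: the rates, deflection amplitude, and small-angle form are our assumptions, not the sensor's actual characteristic equations.

```python
import numpy as np

def scan_directions(n, rate1=1.0, rate2=np.sqrt(2), dt=0.01, amp=0.3):
    """Toy non-repetitive scan: superpose two rotations with an irrational
    rate ratio, so the beam never exactly retraces the same curve and
    coverage of the field of view densifies over time."""
    t = np.arange(n) * dt
    theta = 2 * np.pi * rate1 * t
    phi = 2 * np.pi * rate2 * t
    # Small angular deflections around the optical (z) axis
    ax = amp * (np.cos(theta) + np.cos(phi)) / 2
    ay = amp * (np.sin(theta) + np.sin(phi)) / 2
    # Unit direction vectors on the sphere
    return np.stack([ax, ay, np.sqrt(1.0 - ax**2 - ay**2)], axis=1)

dirs = scan_directions(5000)
```

Because the two frequencies are incommensurate, each sweep precesses relative to the last, which is the property the registration method exploits when reconstructing a "perfect scan" from the pattern's equations.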

22 pages, 5889 KiB  
Article
Towards Minimizing the LiDAR Sim-to-Real Domain Shift: Object-Level Local Domain Adaptation for 3D Point Clouds of Autonomous Vehicles
by Sebastian Huch and Markus Lienkamp
Sensors 2023, 23(24), 9913; https://doi.org/10.3390/s23249913 - 18 Dec 2023
Cited by 1 | Viewed by 1139
Abstract
Perception algorithms for autonomous vehicles demand large, labeled datasets. Real-world data acquisition and annotation costs are high, making synthetic data from simulation a cost-effective alternative. However, training on a source domain and testing on a target domain causes a domain shift, attributable to differences in local structure, that degrades model performance. We propose a novel domain adaptation approach to minimize this shift between simulated and real-world LiDAR data. Our approach adapts 3D point clouds at the object level by learning the local characteristics of the target domain. A key component is downsampling to ensure domain invariance of the input data. The network combines a state-of-the-art point completion network with a discriminator that guides training in an adversarial manner. We quantify the reduction in domain shift by training object detectors on the source, target, and adapted datasets. Our method reduces the sim-to-real domain shift in a distribution-aligned dataset by almost 50%, from 8.63% to 4.36% 3D average precision. It is trained exclusively on target data, making it scalable and applicable to point clouds from any source domain.
(This article belongs to the Special Issue Innovations with LiDAR Sensors and Applications)
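The abstract's "downsampling to ensure domain invariance" is described only at a high level; one standard way to normalize point density across domains is farthest point sampling, sketched below. This is our illustrative choice, and the authors' exact sampling scheme may differ.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from those chosen
    so far, yielding an evenly spread subset regardless of the original
    (domain-specific) sampling density."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))
        chosen.append(nxt)
        # Track each point's distance to its nearest chosen point
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

rng = np.random.default_rng(42)
dense_cloud = rng.uniform(-1, 1, size=(2000, 3))  # stand-in for a simulated object
sample = farthest_point_sampling(dense_cloud, k=64)
```

Resampling both simulated and real objects to the same fixed size hides density cues, so a downstream network sees inputs of the same shape from either domain.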

17 pages, 4769 KiB  
Article
Efficient Object Detection Using Semantic Region of Interest Generation with Light-Weighted LiDAR Clustering in Embedded Processors
by Dongkyu Jung, Taewon Chong and Daejin Park
Sensors 2023, 23(21), 8981; https://doi.org/10.3390/s23218981 - 05 Nov 2023
Viewed by 1129
Abstract
Many fields are investigating the use of convolutional neural networks to detect specific objects in three-dimensional data. Although algorithms based on 3D data are more stable and less sensitive to lighting conditions than algorithms based on 2D image data, they require more computation, which makes it difficult to run such CNNs on lightweight embedded systems. In this paper, we propose a method that processes 3D data with a simple algorithm, instead of complex operations such as convolution, and uses its physical characteristics to generate regions of interest (ROIs) for a 2D image-based CNN object detection algorithm. After preprocessing, the LiDAR point cloud is separated into individual objects through clustering, and semantic detection is performed by a machine-learning classifier trained on physical characteristics extracted from each cluster. Final object recognition is performed by a 2D object detection algorithm on individual image regions generated from the location and size of the objects found by semantic detection, bypassing bounding-box tracking. Exploiting the physical characteristics of 3D data in this way improves the accuracy of 2D image-based object detection, even in environments where camera data are difficult to collect, and yields a lighter system than 3D-data-based object detection algorithms. On an embedded board, the proposed model achieved 81.84% accuracy with the YOLO v5 algorithm, 1.92% higher than the baseline model. It achieves 47.41% accuracy in an environment with 40% higher brightness and 54.12% accuracy in an environment with 40% lower brightness, 8.97% and 13.58% higher than the baseline, respectively, so it maintains high accuracy even under non-optimal brightness. The proposed technique also reduces execution time depending on the operating environment of the detection model.
(This article belongs to the Special Issue Innovations with LiDAR Sensors and Applications)
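The light-weight clustering stage the abstract describes can be sketched with a hash grid instead of a k-d tree, which keeps memory and code size small on embedded CPUs. The radius, minimum cluster size, and axis-aligned ROI form below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from collections import deque
from itertools import product

def euclidean_cluster(points, radius=0.5, min_pts=5):
    """Cluster points by Euclidean distance: BFS over a hash grid of
    radius-sized cells, so neighbor lookup needs no k-d tree."""
    cells = np.floor(points / radius).astype(np.int64)
    grid = {}
    for i, c in enumerate(map(tuple, cells)):
        grid.setdefault(c, []).append(i)
    labels = np.full(len(points), -1)  # -1 = unvisited, -2 = noise
    next_label = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        queue, members = deque([seed]), [seed]
        while queue:
            j = queue.popleft()
            cx, cy, cz = cells[j]
            # Only the 27 neighboring cells can contain points within radius
            for dx, dy, dz in product((-1, 0, 1), repeat=3):
                for k in grid.get((cx + dx, cy + dy, cz + dz), []):
                    if labels[k] == -1 and np.linalg.norm(points[k] - points[j]) <= radius:
                        labels[k] = next_label
                        queue.append(k)
                        members.append(k)
        if len(members) < min_pts:
            labels[np.array(members)] = -2
        else:
            next_label += 1
    return labels

def cluster_rois(points, labels):
    """Axis-aligned ROI (min/max corners) for each cluster."""
    return [(points[labels == l].min(axis=0), points[labels == l].max(axis=0))
            for l in range(labels.max() + 1)]

rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0.0, 0.05, (50, 3)),   # two well-separated blobs
                   rng.normal(5.0, 0.05, (50, 3))])
labels = euclidean_cluster(cloud)
rois = cluster_rois(cloud, labels)
```

Each ROI would then be projected into the camera image to crop a region for the 2D detector, replacing full-frame inference with a few small crops.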
