Article

Snow-CLOCs: Camera-LiDAR Object Candidate Fusion for 3D Object Detection in Snowy Conditions

1 School of Automation, Guangxi University of Science and Technology, Liuzhou 545006, China
2 Guangxi Collaborative Innovation Centre for Earthmoving Machinery, Guangxi University of Science and Technology, Liuzhou 545006, China
3 Key Laboratory of Disaster Prevention & Mitigation and Prestress Technology of Guangxi Colleges and Universities, Liuzhou 545006, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4158; https://doi.org/10.3390/s24134158
Submission received: 20 May 2024 / Revised: 23 June 2024 / Accepted: 25 June 2024 / Published: 26 June 2024
(This article belongs to the Special Issue Multi-modal Sensor Fusion and 3D LiDARs for Vehicle Applications)

Abstract

Although existing 3D object-detection methods achieve promising results on conventional datasets, detecting objects in data collected under adverse weather remains challenging. In such conditions, distortion in both LiDAR and camera data degrades traditional single-sensor detection methods, and multi-modal data-fusion methods additionally suffer from low cross-sensor alignment accuracy, making accurate target detection difficult. To address this, we propose Snow-CLOCs, a multi-modal object-detection algorithm designed specifically for snowy conditions. For image detection, we improve the YOLOv5 algorithm by integrating the InceptionNeXt network to enhance feature extraction and by adopting the Wise-IoU algorithm to reduce the dependency on high-quality data. For LiDAR point-cloud detection, we build upon the SECOND algorithm and employ the DROR filter to remove snow-induced noise, enhancing detection accuracy. The detection candidates from the camera and LiDAR are then combined into a unified detection set, represented as a sparse tensor, and passed through a 2D convolutional neural network to extract features for object detection and localization. Snow-CLOCs achieves a detection accuracy of 86.61% for vehicle detection in snowy conditions.
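To make the candidate-fusion step concrete, the sketch below builds the kind of sparse camera-LiDAR pairing tensor the abstract describes. The box formats, channel ordering (IoU, image score, LiDAR score, normalised distance), helper names, and default parameters are illustrative assumptions modelled on the original CLOCs formulation, not the authors' exact implementation.

# Minimal sketch of CLOCs-style candidate fusion (assumed layout, not the
# authors' exact code): every (LiDAR candidate, image candidate) pair with
# non-zero overlap contributes one 4-channel entry to a sparse k x n tensor.
import numpy as np

def iou_2d(box_a, box_b):
    """Axis-aligned IoU between two [x1, y1, x2, y2] image boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def build_fusion_tensor(dets_2d, dets_3d, max_range=100.0):
    """Build a k x n x 4 fusion tensor from detection candidates.

    dets_2d : list of (box_xyxy, score) from the image detector (YOLOv5 here).
    dets_3d : list of (projected_box_xyxy, score, distance) from the LiDAR
              detector (SECOND here), with 3D boxes already projected onto
              the image plane.
    Channels: [IoU, 2D score, 3D score, normalised distance]. Pairs with
    zero overlap are left empty, which keeps the tensor sparse.
    """
    k, n = len(dets_3d), len(dets_2d)
    tensor = np.zeros((k, n, 4), dtype=np.float32)
    for i, (proj_box, s3d, dist) in enumerate(dets_3d):
        for j, (box2d, s2d) in enumerate(dets_2d):
            iou = iou_2d(proj_box, box2d)
            if iou > 0:
                tensor[i, j] = [iou, s2d, s3d, dist / max_range]
    return tensor

Under these assumptions, the resulting sparse tensor is what a small 2D convolutional network would consume to re-score the fused candidates, consistent with the pipeline summarised in the abstract.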
Keywords: multi-modal object detection; YOLOv5; InceptionNeXt; DROR

Share and Cite

MDPI and ACS Style

Fan, X.; Xiao, D.; Li, Q.; Gong, R. Snow-CLOCs: Camera-LiDAR Object Candidate Fusion for 3D Object Detection in Snowy Conditions. Sensors 2024, 24, 4158. https://doi.org/10.3390/s24134158

