Article

BAFusion: Bidirectional Attention Fusion for 3D Object Detection Based on LiDAR and Camera

by Min Liu, Yuanjun Jia, Youhao Lyu, Qi Dong and Yanyu Yang

1 Institute of Advanced Technology, University of Science and Technology of China, Hefei 230088, China
2 China Academy of Electronics and Information Technology, Beijing 100041, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(14), 4718; https://doi.org/10.3390/s24144718
Submission received: 28 April 2024 / Revised: 14 July 2024 / Accepted: 19 July 2024 / Published: 20 July 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

3D object detection is a challenging and promising task for autonomous driving and robotics, and one that benefits significantly from multi-sensor fusion, such as the combination of LiDAR and cameras. Conventional fusion methods rely on a projection matrix to align the features from LiDAR and cameras. However, these methods often lack flexibility and robustness, leading to lower alignment accuracy under complex environmental conditions. To address these challenges, in this paper we propose a novel Bidirectional Attention Fusion module, named BAFusion, which effectively fuses the information from LiDAR and cameras using cross-attention. Unlike conventional methods, our BAFusion module adaptively learns the cross-modal attention weights, making the approach more flexible and robust. Moreover, drawing inspiration from advanced attention optimization techniques in 2D vision, we developed the Cross Focused Linear Attention Fusion Layer (CFLAF Layer) and integrated it into our BAFusion pipeline. This layer reduces the computational complexity of the attention mechanism and facilitates richer interactions between image and point cloud data, offering a novel approach to cross-modal attention computation. We evaluated our method on the KITTI dataset using various baseline networks, such as PointPillars, SECOND, and Part-A2, and demonstrated consistent improvements in 3D object detection performance over these baselines, especially for smaller objects such as cyclists and pedestrians. Our approach achieves competitive results on the KITTI benchmark.
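To make the fusion idea concrete, the following is a minimal, self-contained PyTorch sketch of bidirectional cross-attention between flattened LiDAR BEV tokens and camera feature tokens. It is not the authors' implementation: the attention branch uses a generic ReLU-kernel linear attention, the broader family of techniques that a focused linear attention layer such as the CFLAF Layer builds on, and all module and parameter names here (CrossLinearAttention, BidirectionalFusion, dim) are hypothetical.

```python
# Hedged sketch: bidirectional cross-modal attention fusion with a
# kernelized (linear) attention, NOT the paper's exact CFLAF Layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLinearAttention(nn.Module):
    """One fusion direction: queries from one modality, keys/values from the other.

    A ReLU feature map replaces softmax, so attention is computed as
    phi(Q) @ (phi(K)^T V), which is linear in the number of tokens.
    """

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        self.eps = eps

    def forward(self, query_tokens, context_tokens):
        # query_tokens: (B, Nq, C); context_tokens: (B, Nk, C)
        q = F.relu(self.q_proj(query_tokens))    # positive feature map phi(Q)
        k = F.relu(self.k_proj(context_tokens))  # positive feature map phi(K)
        v = self.v_proj(context_tokens)

        # Compute phi(K)^T V once as a (C x C) matrix, then apply it to phi(Q).
        kv = torch.einsum("bnc,bnd->bcd", k, v)                          # (B, C, C)
        z = 1.0 / (torch.einsum("bnc,bc->bn", q, k.sum(dim=1)) + self.eps)
        out = torch.einsum("bnc,bcd->bnd", q, kv) * z.unsqueeze(-1)      # (B, Nq, C)
        return self.out_proj(out)


class BidirectionalFusion(nn.Module):
    """Fuses LiDAR BEV tokens and camera tokens in both directions, with residuals."""

    def __init__(self, dim: int):
        super().__init__()
        self.img_to_pc = CrossLinearAttention(dim)  # point cloud attends to image
        self.pc_to_img = CrossLinearAttention(dim)  # image attends to point cloud
        self.norm_pc = nn.LayerNorm(dim)
        self.norm_img = nn.LayerNorm(dim)

    def forward(self, pc_tokens, img_tokens):
        pc_fused = self.norm_pc(pc_tokens + self.img_to_pc(pc_tokens, img_tokens))
        img_fused = self.norm_img(img_tokens + self.pc_to_img(img_tokens, pc_tokens))
        return pc_fused, img_fused


if __name__ == "__main__":
    fusion = BidirectionalFusion(dim=64)
    pc = torch.randn(2, 200, 64)   # e.g., a flattened LiDAR BEV feature map
    img = torch.randn(2, 300, 64)  # e.g., a flattened camera feature map
    pc_out, img_out = fusion(pc, img)
    print(pc_out.shape, img_out.shape)  # (2, 200, 64), (2, 300, 64)
```

Because phi(K)^T V is formed once as a C x C matrix, the cost grows linearly rather than quadratically with the number of tokens, which is the complexity motivation the abstract cites for a linear attention fusion layer.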
Keywords: 3D object detection; LiDAR–camera fusion; cross attention

Share and Cite

MDPI and ACS Style

Liu, M.; Jia, Y.; Lyu, Y.; Dong, Q.; Yang, Y. BAFusion: Bidirectional Attention Fusion for 3D Object Detection Based on LiDAR and Camera. Sensors 2024, 24, 4718. https://doi.org/10.3390/s24144718

AMA Style

Liu M, Jia Y, Lyu Y, Dong Q, Yang Y. BAFusion: Bidirectional Attention Fusion for 3D Object Detection Based on LiDAR and Camera. Sensors. 2024; 24(14):4718. https://doi.org/10.3390/s24144718

Chicago/Turabian Style

Liu, Min, Yuanjun Jia, Youhao Lyu, Qi Dong, and Yanyu Yang. 2024. "BAFusion: Bidirectional Attention Fusion for 3D Object Detection Based on LiDAR and Camera" Sensors 24, no. 14: 4718. https://doi.org/10.3390/s24144718

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
