Article

Spatial Attention Fusion for Obstacle Detection Using MmWave Radar and Vision Sensor

1 School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing 100101, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(4), 956; https://doi.org/10.3390/s20040956
Submission received: 15 January 2020 / Revised: 4 February 2020 / Accepted: 10 February 2020 / Published: 11 February 2020
(This article belongs to the Section Physical Sensors)

Abstract

For autonomous driving, it is important to detect obstacles at all scales accurately for safety. In this paper, we propose a new spatial attention fusion (SAF) method for obstacle detection using a mmWave radar and a vision sensor, where the sparsity of radar points is considered in the proposed SAF. The proposed fusion method can be embedded in the feature-extraction stage, which leverages the features of the mmWave radar and the vision sensor effectively. Based on the SAF, an attention weight matrix is generated to fuse the vision features, which differs from concatenation fusion and element-wise add fusion. Moreover, the proposed SAF can be trained in an end-to-end manner, incorporated into recent deep learning object detection frameworks. In addition, we build a generation model that converts radar points to radar images for neural network training. Numerical results suggest that the newly developed fusion method achieves superior performance on a public benchmark. The source code will be released on GitHub.
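As the abstract describes, the SAF module derives a spatial attention weight matrix from the radar branch and uses it to rescale the vision feature map, rather than concatenating the two feature maps or adding them element-wise. The following is a minimal NumPy sketch of that idea; the feature shapes, the 1x1 channel projection standing in for a learned convolution, and the function names are illustrative assumptions, not the paper's exact layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fusion(radar_feat, vision_feat):
    """Fuse radar and vision feature maps via spatial attention.

    radar_feat:  (H, W, Cr) feature map from the radar-image branch
    vision_feat: (H, W, Cv) feature map from the vision branch

    A 1x1 projection reduces the radar features to a single channel;
    a sigmoid maps it to an attention weight matrix in (0, 1); the
    weights then rescale the vision features at each spatial location.
    """
    H, W, Cr = radar_feat.shape
    # Uniform weights stand in for a learned 1x1 convolution kernel.
    w = np.ones((Cr,)) / Cr
    attention = sigmoid(radar_feat @ w)          # (H, W), values in (0, 1)
    fused = vision_feat * attention[..., None]   # broadcast over channels
    return fused, attention

# Tiny example: a 4x4 grid with 2 radar channels and 3 vision channels.
radar = np.zeros((4, 4, 2))
radar[1, 2] = 5.0                 # a strong radar return at one cell
vision = np.ones((4, 4, 3))
fused, att = spatial_attention_fusion(radar, vision)
# Cells with radar returns receive higher weight than empty cells.
assert att[1, 2] > att[0, 0]
```

Because the attention weights multiply the vision features rather than replacing them, locations without radar returns are attenuated but not zeroed out, which is one way a fusion scheme can tolerate the sparsity of radar points.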
Keywords: autonomous driving; obstacle detection; mmWave radar; vision; spatial attention fusion
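The abstract also mentions a generation model that converts radar points into radar images so they can be fed to a neural network alongside camera frames. A simple way to picture this step is rasterization: each sparse radar detection, once projected to pixel coordinates, paints a value into a dense image. The sketch below assumes a single-channel output and nearest-pixel splatting; the paper's generation model may render points differently, and the function and variable names are hypothetical.

```python
import numpy as np

def radar_points_to_image(points, height, width):
    """Rasterize sparse radar points into a dense radar image.

    points: iterable of (u, v, value) tuples, where (u, v) is the
    projected pixel location of a radar detection and value is a
    measured quantity (e.g. normalized range or RCS).
    """
    img = np.zeros((height, width))
    for u, v, value in points:
        # Drop detections that project outside the image bounds.
        if 0 <= v < height and 0 <= u < width:
            img[v, u] = max(img[v, u], value)  # keep the strongest return
    return img

# Three detections; the last one falls outside an 8x8 image.
pts = [(2, 1, 0.8), (5, 3, 0.4), (99, 99, 1.0)]
img = radar_points_to_image(pts, height=8, width=8)
assert img[1, 2] == 0.8 and img[3, 5] == 0.4
```

The resulting image has the same spatial layout as a camera feature map, which is what allows a radar branch and a vision branch to be fused location-by-location.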

Share and Cite

MDPI and ACS Style

Chang, S.; Zhang, Y.; Zhang, F.; Zhao, X.; Huang, S.; Feng, Z.; Wei, Z. Spatial Attention Fusion for Obstacle Detection Using MmWave Radar and Vision Sensor. Sensors 2020, 20, 956. https://doi.org/10.3390/s20040956

