Article

IRBEVF-Q: Optimization of Image–Radar Fusion Algorithm Based on Bird’s Eye View Features

by
Ganlin Cai
1,2,
Feng Chen
2 and
Ente Guo
1,*
1
School of Computer and Big Data, Minjiang University, Fuzhou 350108, China
2
College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(14), 4602; https://doi.org/10.3390/s24144602
Submission received: 12 June 2024 / Revised: 20 June 2024 / Accepted: 27 June 2024 / Published: 16 July 2024
(This article belongs to the Section Radar Sensors)

Abstract

In autonomous driving, the fusion of multiple sensors is considered essential to improve the accuracy and safety of 3D object detection. Currently, a fusion scheme combining low-cost cameras with highly robust radars can counteract the performance degradation caused by harsh environments. In this paper, we propose the IRBEVF-Q model, which mainly consists of a BEV (Bird’s Eye View) fusion coding module and an object decoder module. The BEV fusion coding module solves the problem of unified representation of different modal information by fusing the image and radar features through 3D spatial reference points as a medium. The query in the object decoder, as a core component, plays an important role in detection. In this paper, Heat Map-Guided Query Initialization (HGQI) and Dynamic Position Encoding (DPE) are proposed in query construction to increase the a priori information of the query. The Auxiliary Noise Query (ANQ) then helps to stabilize the matching. The experimental results demonstrate that the proposed fusion model IRBEVF-Q achieves an NDS of 0.575 and an mAP of 0.476 on the nuScenes test set. Compared to recent state-of-the-art methods, our model shows significant advantages, thus indicating that our approach contributes to improving detection accuracy.

1. Introduction

As autonomous vehicles continue to evolve, the integration of various sensors for perceiving the surrounding environment has become a vital aspect of system operation [1]. However, achieving accurate and robust detection of 3D objects remains a challenge [2]. To address this, sensor fusion solutions [3,4,5,6,7] have been developed that aim to combine the strengths of different sensors, particularly cameras and LiDARs. While cameras are well suited for capturing rich texture and semantic information, LiDAR is effective at capturing spatial structural information [8]. Unfortunately, both sensors are prone to limitations in harsh weather conditions, such as heavy rain and snow. Though this combination of sensors can achieve better detection results, it is not sufficient for practical applications of autonomous vehicles. To overcome these limitations, millimeter-wave radar offers a more robust sensor that can be used in all weather conditions [9]. Additionally, radar can provide critical velocity information through the Doppler effect, which is essential for collision avoidance in autonomous driving environments. Furthermore, radar sensors are relatively low in cost and can complement the strengths of cameras and LiDARs. Despite these advantages, there has been limited research [10,11] focused on the fusion of radar data with other sensors. One significant challenge is the lack of datasets containing radar data for autonomous driving applications, which makes it difficult to conduct research in this area. Furthermore, applying existing LiDAR-based algorithms [12,13,14,15,16] to radar point clouds has proven to be highly challenging [17] due to the sparsity of radar point clouds, which makes it difficult to extract the geometric information of objects.
Currently, the existing work [17,18] on fusing radar and cameras for 3D object detection employs a matching-based approach to accomplish the fusion. First, the view frustum generated by the camera is used to filter associated radar points. Then, the radar points are projected onto the camera image to generate radar features. These features are then concatenated with the image features to create associated radar–image features that are used for object detection. Another type of method uses the DETR paradigm [19,20,21,22,23,24], thus treating queries as the targets to be predicted, and it employs an attention mechanism and multilevel decoders for learning. Regarding these methods, we identify the following issues in radar and image fusion for 3D object detection: (a) the heterogeneity of image and radar features makes direct alignment impossible; and (b) the fusion methods or structures are overly simplistic, thus preventing full and deep integration. A major reason for these problems [25] is that images lack depth information, thus making it difficult to accurately correlate radar data with images. Since image data are in perspective space and radar data are in a 3D bird’s eye view (BEV) space, projecting the image into the radar feature space is ill posed. On the other hand, projecting radar points into the image feature space makes it extremely challenging to perform 3D perception tasks on a 2D image plane. To address this issue, new approaches are needed that can effectively fuse multimodal data while preserving the essential characteristics of each sensor. This requires developing new techniques that can accurately associate radar data with images while preserving the rich spatial information provided by radar.
To this end, we present a novel fusion scheme for 3D object detection using radar and camera data, which is termed the BEV fusion encoder. The BEV furnishes traffic scenes with relative localization and scale information and can be precisely mapped to the physical world, thus facilitating 3D object detection [26]. Furthermore, the BEV representation serves as a physical medium, thus offering an interpretable fusion approach for data from various sensors and timestamps and giving this scheme a significant edge over existing approaches [27]. The BEV fusion encoder that we propose not only elevates image features from 2D (two-dimensional) space to 3D (three-dimensional) space implicitly but also incorporates BEV-formatted radar data. Additionally, the BEV fusion encoding scheme decouples the network’s dependence on the correlation between images and radar while accomplishing the fusion of images and radar in an adaptive manner.
In the object decoder, a set of queries is used to learn information from the BEV features. The queries usually contain content queries and location embeddings. First, for the problem of random initialization of the query content, this paper proposes Heat Map-Guided Query Initialization (HGQI): heat maps are generated based on the image features, and the peaks of the heat maps are used to initialize the object queries. Secondly, this paper proposes a Dynamic Position Encoding (DPE) module for the problem of fixed query position encoding, which takes the output of each decoding layer as the position encoding input of the next layer, thus providing more effective a priori information for the position encoding of the query. In addition, the query also participates in the process of positive and negative sample matching, but model convergence is difficult due to the instability of bipartite graph matching. Therefore, this paper also employs an Auxiliary Noise Query (ANQ) module, which helps stabilize matching during training by adding noise queries with known ground truth values, thus allowing the object queries to focus on the regression of location information.
The key contributions of our works are as follows:
  • We compared various radar and image fusion schemes for 3D object detection, summarized the current research shortcomings, and identified the underlying reasons.
  • The BEV fusion encoder is proposed. We simulated 3D spatial reference points and learned the intrinsic connection between radar and images through a multilayer structure to generate BEV features with good 3D spatial perception.
  • We optimized the decoder structure. We generated heat maps for the initial content query through a priori information, and we additionally utilized dynamic correction of the reference points to improve the position encoding. Auxiliary noise was then used in the training phase to help stabilize convergence.

2. Materials and Methods

2.1. Architecture

As shown in Figure 1, the proposed model in this paper mainly consists of a feature extraction module, a BEV fusion encoder, and an object decoder. In the training phase, we extract features from the multiview images and radar data separately using the feature extraction module. Specifically, given $N$ input images $I \in \mathbb{R}^{N \times 3 \times W \times H}$ at one time, the multiscale image features $F_{cam} \in \mathbb{R}^{N \times L \times c \times w \times h}$ are obtained by ResNet101 [28] with an $L$-layer FPN [29]. For the raw radar data, the coordinates, velocity, relative velocity, and radar cross-section (RCS) are fed into a multilayer perceptron to obtain the $D_1$-dimensional radar feature $F_{rad} \in \mathbb{R}^{M_1 \times D_1}$. The image features and radar features are fed into the BEV fusion encoder, which is updated by stacking multiple encoding layers to obtain the BEV features. Finally, the decoder applies multiple query embeddings to the BEV features and uses two feedforward networks (FFNs) to decode the information from the queries to obtain the category and bounding box predictions.
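As a rough illustration of this feature extraction step, the following PyTorch sketch encodes the raw radar attributes with a small MLP; the class name and hidden size are illustrative assumptions rather than the paper’s implementation (the input dimension of 14 and output dimension of 64 follow the settings reported in Section 3.1).

```python
import torch
import torch.nn as nn

class RadarFeatureExtractor(nn.Module):
    """Encode raw radar attributes (coordinates, velocity, relative velocity, RCS, ...)
    into D1-dimensional per-point features with a small MLP (layer sizes are illustrative)."""
    def __init__(self, in_dim=14, d1=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 2 * d1), nn.ReLU(inplace=True),
            nn.Linear(2 * d1, d1),
        )

    def forward(self, radar_points):
        # radar_points: (M1, in_dim) raw radar returns -> (M1, d1) radar features F_rad
        return self.mlp(radar_points)
```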

2.2. BEV Fusion Encoder

For the BEV fusion encoder, we follow the encoder structure of the Transformer [30], which consists of $l$ encoding layers. We predefine a set of learnable parameters $F_{BEV} \in \mathbb{R}^{M_2 \times D_2}$ as the initialization of the BEV features, where $M_2$ represents the size of the flattened features, which should be similar to the product of the width and height of the image, and $D_2$ represents the dimension of each BEV feature. Then, the BEV features of the current layer are subjected to multihead self-attention (MSA) with the BEV features output from the previous layer. The resulting BEV features are then processed by multihead cross-attention (MCA) and an FFN to obtain the output of this encoding layer.
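A structural sketch of one encoding layer is given below, assuming dense multihead attention modules in place of the deformable sampling described later; the module and argument names are illustrative, not taken from the paper’s code.

```python
import torch
import torch.nn as nn

class BEVEncoderLayer(nn.Module):
    """One BEV fusion encoding layer: self-attention between the current and previous-layer
    BEV features, cross-attention to the sampled image/radar features, then an FFN."""
    def __init__(self, d_model=256, n_heads=8, d_ffn=512):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ffn), nn.ReLU(inplace=True),
                                 nn.Linear(d_ffn, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, bev_query, prev_bev, fused_kv):
        # bev_query: (B, M2, d_model) current-layer BEV features
        # prev_bev:  (B, M2, d_model) BEV features output by the previous layer
        # fused_kv:  (B, S, d_model)  sampled image/radar features acting as keys/values
        q = self.norm1(bev_query + self.self_attn(bev_query, prev_bev, prev_bev)[0])
        q = self.norm2(q + self.cross_attn(q, fused_kv, fused_kv)[0])
        return self.norm3(q + self.ffn(q))
```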
As shown in Figure 2, the BEV fusion coding module is mainly used to generate BEV features with 3D spatial awareness by aggregating two different modal features, namely image features and radar features. It includes a local image sampling module, a maximum-proximity radar sampling module, and a BEV feature generation module.
As shown in Figure 3, in the multihead cross-attention, we encode the image and radar features separately using deformable attention [31] and minimum distance sampling. First, the BEV space is discretized into a set of points $P \in \mathbb{R}^{N \times 3}$ [32]; these points are projected using camera transformation matrices to obtain their relative positions $P_{cam}$ in the multicamera multiscale image features. Then, deformable attention is applied to sample the relevant image features, which can be represented as follows:
$$F_{bev\_cam} = \frac{1}{N_p} \sum_{i=0}^{N} \sum_{j=0}^{L} \mathrm{DeformAttention}\left(F_{i,j}, P_{cam}\right)$$
where $N$ represents the number of cameras, $L$ represents the number of feature maps at different scales, $N_p$ denotes the number of points projected onto the images, and $F_{i,j}$ represents the $j$th feature map of the $i$th camera.
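The sketch below illustrates the projection-and-sampling idea under simplifying assumptions: a single feature scale per camera, projection matrices that map 3D points directly to feature-map pixel coordinates, and bilinear grid sampling standing in for the learned deformable attention. Names and signatures are illustrative.

```python
import torch
import torch.nn.functional as F

def sample_image_features(feats, ref_points, lidar2img):
    """
    feats:      (N_cam, C, h, w) image feature maps (one scale for brevity)
    ref_points: (P, 3)           3D reference points in the ego/BEV frame
    lidar2img:  (N_cam, 4, 4)    projection matrices to feature-map pixel coordinates (assumed)
    returns:    (P, C)           features averaged over the cameras that see each point
    """
    n_cam, c, h, w = feats.shape
    p = ref_points.shape[0]
    pts = torch.cat([ref_points, ref_points.new_ones(p, 1)], dim=-1)       # (P, 4) homogeneous
    cam_pts = torch.einsum('nij,pj->npi', lidar2img, pts)                  # (N_cam, P, 4)
    eps = 1e-5
    depth = cam_pts[..., 2:3].clamp(min=eps)
    uv = cam_pts[..., :2] / depth                                          # pixel coordinates
    # normalize to [-1, 1] for grid_sample
    grid = torch.stack([uv[..., 0] / (w - 1), uv[..., 1] / (h - 1)], dim=-1) * 2 - 1
    valid = (cam_pts[..., 2] > eps) & (grid.abs() <= 1).all(dim=-1)        # in front of camera and in view
    sampled = F.grid_sample(feats, grid.unsqueeze(1), align_corners=True)  # (N_cam, C, 1, P)
    sampled = sampled.squeeze(2).permute(0, 2, 1) * valid.unsqueeze(-1)    # (N_cam, P, C)
    hits = valid.sum(0).clamp(min=1).unsqueeze(-1)
    return sampled.sum(0) / hits                                           # average over visible cameras
```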
For the BEV encoding of the radar features, we normalize the coordinates of the radar points in BEV space and calculate the Euclidean distances between them and the point set in the BEV space. For each point in the BEV space, we select the $K$ radar points with the smallest distance and use the sampled radar information to generate the radar-encoded BEV features. As shown in Figure 4, for the $M_2$ spatial points and $M_1$ radar points, the radar features sampled by each spatial point can be obtained from the indices of the calculated distances. The $D_1$-dimensional feature corresponding to each of the $M_2$ spatial points is obtained by summing over the $K$ dimension. This sampling process can be represented as $F_{sample}$:
$$F_{bev\_rad} = F_{sample}(F_{rad})$$
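A minimal sketch of this nearest-neighbour radar sampling is shown below, assuming the BEV grid coordinates and radar point coordinates are already expressed in the same normalized BEV frame; the function name and signature are illustrative.

```python
import torch

def sample_radar_features(bev_xy, radar_xy, radar_feat, k=4):
    """
    bev_xy:     (M2, 2) BEV grid point coordinates (normalized)
    radar_xy:   (M1, 2) radar point coordinates in the same BEV frame
    radar_feat: (M1, D) per-radar-point features from the MLP
    returns:    (M2, D) radar-encoded BEV features F_bev_rad
    """
    dists = torch.cdist(bev_xy, radar_xy)                          # (M2, M1) Euclidean distances
    knn_idx = dists.topk(k, dim=-1, largest=False).indices         # (M2, K) nearest radar points
    gathered = radar_feat[knn_idx]                                 # (M2, K, D) sampled radar features
    return gathered.sum(dim=1)                                     # sum over the K neighbours
```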
The image features sampled in BEV space are combined with radar features to obtain BEV features through a neural network:
$$F_{BEV} = \Phi\left(F_{bev\_cam} \oplus F_{bev\_rad}\right)$$
where $\Phi(\cdot)$ stands for a multilayer perceptron, and $\oplus$ represents the concatenation of two tensors along the last dimension. The output BEV features then take part in the self-attention operation of the next encoding layer. The purpose is to extract the spatial positional information of each feature from the previous layer’s BEV output, thus providing prior knowledge for subsequent sampling tasks. For the first BEV feature encoding layer, the current BEV features perform the self-attention operation among themselves.
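For completeness, the concatenation-and-MLP fusion $\Phi$ can be sketched as follows; the class name and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class BEVFusion(nn.Module):
    """Concatenate camera- and radar-encoded BEV features and fuse them with an MLP (Phi)."""
    def __init__(self, cam_dim, rad_dim, out_dim):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(cam_dim + rad_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, f_bev_cam, f_bev_rad):
        # f_bev_cam: (M2, cam_dim), f_bev_rad: (M2, rad_dim) -> (M2, out_dim) BEV features
        return self.phi(torch.cat([f_bev_cam, f_bev_rad], dim=-1))
```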

2.3. Decoder and Query Optimization

The decoder was designed using the DETR [33] paradigm by setting a set of spatial object queries $Q_l = \{q_i\}_{i=1}^{N_q} \in \mathbb{R}^{N_q \times D_q}$, where $N_q$ denotes the number of queries, and $D_q$ denotes the dimension of each query. $Q_l$ contains both content queries and location queries. In this paper, we improved the decoder structure by optimizing the query.

2.3.1. Heat Map-Guided Query Initialization

In object detection tasks, heat maps can help identify the locations and regions of objects, thus indicating which regions the model believes contain specific objects. Therefore, we used heat maps to provide prior information and initialize the query content embedding. We used radar to generate the heat maps during the first training pass. The procedure for generating heat maps from the radar BEV features is as follows: Given the radar feature map $R_{BEV} \in \mathbb{R}^{C_{rb} \times H_{rb} \times W_{rb}}$, where $C_{rb}$, $H_{rb}$, and $W_{rb}$ represent the channel number, height, and width of the radar feature map, respectively, a fully connected layer is first utilized to transform the channel number of the radar feature map into the number of classes $C_l$ that the network needs to predict. Subsequently, each value in the radar feature map is normalized to the range 0–1. Next, a tensor $L_{MAX} \in \mathbb{R}^{C_{rb} \times H_{rb} \times W_{rb}}$ of the same size as the radar feature map is established to record local maxima, following the rule that each value must be greater than its surrounding 8 values. The values of $R_{BEV}$ are then compared to the values of $L_{MAX}$ one by one; if they are equal, the value is retained as a local maximum, and the top-$K$ maxima are selected in descending order. Finally, the two-dimensional feature map is folded into a one-dimensional vector, thus obtaining the index of each maximum on this vector. Using these position indices on the folded original radar BEV feature map, the corresponding values of the original radar BEV feature map are retrieved. At this point, a heat map $H \in \mathbb{R}^{C_{rb} \times K}$ generated from the radar BEV feature map is obtained, and through transposition, the query $Q \in \mathbb{R}^{K \times M}$ initialized by the heat map can be obtained, where $M$ represents the dimension of each query, which is equal to $C_{rb}$. An additional output head is added at the output of each decoding layer for predicting the heat map $H_{pre} \in \mathbb{R}^{K \times H_{rb} \times W_{rb}}$.
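The peak-selection procedure can be sketched as follows, using a 3×3 max pooling to test the 8-neighbourhood rule; the 1×1 convolution classification head, the sigmoid normalization, and the value of K are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def heatmap_topk_queries(radar_bev, cls_head, k=200):
    """
    radar_bev: (C_rb, H_rb, W_rb) radar BEV feature map
    cls_head:  channel projection to the number of classes, e.g. nn.Conv2d(C_rb, C_l, 1)
    Returns the top-k peak features used to initialize the content queries.
    """
    heat = cls_head(radar_bev.unsqueeze(0)).sigmoid()                 # (1, C_l, H, W), values in 0-1
    # local maxima: a cell is kept only if it equals the max of its 3x3 neighbourhood
    local_max = F.max_pool2d(heat, kernel_size=3, stride=1, padding=1)
    peaks = heat * (heat == local_max)
    scores, idx = peaks.flatten(1).topk(k, dim=1)                     # fold to 1D and take top-k
    spatial_idx = idx % (radar_bev.shape[1] * radar_bev.shape[2])     # recover the spatial index
    flat_feat = radar_bev.flatten(1)                                  # (C_rb, H*W) folded radar features
    queries = flat_feat[:, spatial_idx.squeeze(0)].t()                # (k, C_rb) initial content queries
    return queries, scores
```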
For the generation of the ground truth heat map, the number of objects is first obtained from the ground truth labels, and the width $w$ and height $h$ of each object box are obtained from its ground truth label; these are used to compute a radius $r$. The radius mainly depends on the overlap between the ground truth box and the predicted box, and its value is chosen according to three critical cases.
According to the form of the solutions of the corresponding quadratic equations, three radii $r_1$, $r_2$, and $r_3$ can be obtained from the IoU constraints:
$$IOU_1 = \frac{hw}{(h + 2r_1)(w + 2r_1)}$$
$$IOU_2 = \frac{(h - 2r_2)(w - 2r_2)}{hw}$$
$$IOU_3 = \frac{(h - r_3)(w - r_3)}{2hw - (h - r_3)(w - r_3)}$$
The required radius $r$ is the minimum of $r_1$, $r_2$, and $r_3$.
Next, the obtained radius $r$ is used to establish a rectangular grid of coordinates from $-r$ to $r$. For each coordinate, the Gaussian function $G = e^{-\frac{x^2 + y^2}{2\sigma^2}}$ is applied to obtain a Gaussian distribution over the rectangle, so that the values spreading out from the object center decay with distance, thereby penalizing long-range predictions. Then, the center of the object $(x, y)$ is obtained from the ground truth label, the extent (top, bottom, left, and right) that each center can cover is determined from the width, height, and radius of the heat map, and the Gaussian-distributed rectangle is cropped onto the center of each object. This completes the generation of the ground truth heat map.
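A sketch of this ground truth heat map construction, following the CornerNet/CenterNet-style recipe that the three IoU cases above correspond to, is given below; the function names and the minimum-IoU threshold of 0.7 are assumptions.

```python
import numpy as np

def gaussian_radius(h, w, min_iou=0.7):
    """Solve the three IoU cases for r and return the smallest (tightest) radius."""
    # case 1: both corners shifted outwards -> 4r^2 + 2(h+w)r + hw(1 - 1/min_iou) <= 0
    a1, b1, c1 = 4, 2 * (h + w), h * w * (1 - 1 / min_iou)
    r1 = (-b1 + np.sqrt(b1 ** 2 - 4 * a1 * c1)) / (2 * a1)
    # case 2: ground-truth box shrunk -> 4r^2 - 2(h+w)r + hw(1 - min_iou) >= 0
    a2, b2, c2 = 4, -2 * (h + w), h * w * (1 - min_iou)
    r2 = (-b2 - np.sqrt(b2 ** 2 - 4 * a2 * c2)) / (2 * a2)
    # case 3: one box shrunk, one inflated -> r^2 - (h+w)r + hw(1-min_iou)/(1+min_iou) >= 0
    a3, b3, c3 = 1, -(h + w), h * w * (1 - min_iou) / (1 + min_iou)
    r3 = (-b3 - np.sqrt(b3 ** 2 - 4 * a3 * c3)) / (2 * a3)
    return min(r1, r2, r3)

def draw_gaussian(heatmap, center, radius):
    """Splat a 2D Gaussian of the given radius onto `heatmap` at integer `center` (x, y)."""
    radius = max(0, int(radius))
    sigma = (2 * radius + 1) / 6.0
    ys, xs = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    gauss = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))
    x, y = center
    h_map, w_map = heatmap.shape
    left, right = min(x, radius), min(w_map - x, radius + 1)     # crop to the map borders
    top, bottom = min(y, radius), min(h_map - y, radius + 1)
    patch = gauss[radius - top:radius + bottom, radius - left:radius + right]
    region = heatmap[y - top:y + bottom, x - left:x + right]
    np.maximum(region, patch, out=region)                        # keep the max where Gaussians overlap
    return heatmap
```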

2.3.2. Dynamic Position Encoding

Positional encoding is a crucial concept in image tasks, particularly within the Transformer architecture. Due to its self-attentive mechanism, the Transformer inherently lacks order perception of the elements in a sequence. Consequently, without positional encoding, the Transformer cannot distinguish the relative positions of the elements. Positional encoding enhances the model’s expressive power by providing additional information, thus enabling better sequence understanding and generation. Moreover, positional encoding is learnable, thus allowing the model to adapt during training to better suit specific tasks. This section introduces improvements to the traditional positional encoding in the Transformer architecture for the task of 3D object detection. Unlike traditional positional encoding, which uses random parameters to learn positional information from the query, the dynamic positional encoding proposed here leverages positional information connected with the predicted values (x, y, z) of the outputs from each layer. This approach provides more accurate positional information related to the objects.
A comparison with the structure of conventional position encoding is illustrated in Figure 5. For the reference point $Ref \in \mathbb{R}^{K \times 3}$, which serves in detection models as a crucial medium for the interaction of different modalities, previous algorithms employed randomly initialized position encoding information $Pos \in \mathbb{R}^{K \times M}$. In contrast, the proposed dynamic position encoding algorithm first initializes the value of the reference point, then utilizes this reference point to generate the position encoding information, and subsequently uses the output of each layer to continuously refine the position of the reference point. This process ensures that the position information of the reference point more closely aligns with the position of the real object. The position encoding generated from the reference point provides a positional prior for the subsequent decoding of the query, thus allowing the query to begin its search from a position relatively close to the actual object and to locate the real object more accurately. This approach results in faster convergence and improved performance compared to the traditional position encoding method, as it reduces the solution space and thereby lessens the breadth and complexity of the neural network’s parameter learning.
Then, the reference point obtained above first passes through a sinusoidal encoding module:
$$\mathrm{Enc}(Ref) = \left[\sin\!\left(Ref / 10000^{\frac{2i}{M/2}}\right),\ \cos\!\left(Ref / 10000^{\frac{2i+1}{M/2}}\right)\right]$$
where $M$ is the dimension of the position encoding. The new reference point $Ref_1 \in \mathbb{R}^{K \times 3M/2}$ is obtained by the sinusoidal encoding, and the reference point $Ref_2 \in \mathbb{R}^{K \times M}$ is then obtained through a fully connected layer $F$ with parameters $(3M/2, M)$. The algorithm also sets a scale function Scale, an MLP with parameters $(M, 2M, M)$, for generating $Pos_{scale}$, which injects object scale information into the position encoding. For the first of the $L$ decoding layers, $Pos_{scale}$ is set to 1; for the other layers, $Pos_{scale}$ is generated by applying Scale to the output of the previous layer. The final position encoding $Pos$ is the product of $Ref_2$ and $Pos_{scale}$.
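The dynamic position encoding can be sketched as follows; the module name and the exact arrangement of the sinusoidal terms are assumptions consistent with the dimensions stated above ($3M/2$ after encoding, $M$ after the fully connected layer, and an $(M, 2M, M)$ Scale MLP).

```python
import torch
import torch.nn as nn

class DynamicPositionEncoding(nn.Module):
    """Generate query position encodings from (x, y, z) reference points; the reference
    points are re-estimated from each decoder layer's output, so the encoding is dynamic."""
    def __init__(self, d_model=256, temperature=10000):
        super().__init__()
        self.d_model = d_model
        self.temperature = temperature
        self.proj = nn.Linear(3 * d_model // 2, d_model)                # F: (3M/2) -> M
        self.scale = nn.Sequential(nn.Linear(d_model, 2 * d_model), nn.ReLU(inplace=True),
                                   nn.Linear(2 * d_model, d_model))     # Scale MLP: (M, 2M, M)

    def sine_embed(self, ref):
        # ref: (K, 3) -> (K, 3 * M/2) sinusoidal embedding per coordinate
        half = self.d_model // 2
        dim_t = torch.arange(half // 2, device=ref.device, dtype=ref.dtype)
        dim_t = self.temperature ** (2 * dim_t / half)
        pos = ref.unsqueeze(-1) / dim_t                                 # (K, 3, half/2)
        pos = torch.cat([pos.sin(), pos.cos()], dim=-1)                 # (K, 3, half)
        return pos.flatten(1)                                           # (K, 3*half)

    def forward(self, ref_points, prev_layer_output=None):
        ref2 = self.proj(self.sine_embed(ref_points))                   # (K, M)
        if prev_layer_output is None:                                   # first decoding layer
            return ref2
        return ref2 * self.scale(prev_layer_output)                     # Pos = Ref2 * Pos_scale
```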

2.3.3. Auxiliary Noise Query

The decoder addresses object detection as a set prediction problem, thus aiming to achieve an optimal matching between each predicted box and the ground truth box that minimizes the overall cost. The instability of the cost matrix results in unstable matching between the queries and the ground truth, which frequently disrupts the learning process of the queries. To alleviate this problem, this paper introduces an auxiliary noise query module that uses ground truth information to stabilize the matching process and improve the learning conditions of the queries.
In this paper, the query is divided into two parts. The first part, the matching component, follows the same processing method as the query in the previous model: the Hungarian algorithm matches the decoder outputs to the ground truth labels, and the matched pairs are used for learning. The second part is the denoising component. The input for this component is the noised ground truth, and the output aims to reconstruct the real 3D box. Since the added noise is relatively small, it is easier for the model to predict the corresponding GT from these noisy inputs, thereby reducing the learning difficulty. Additionally, because the learning goal is clear, the input derived from a specific noised GT is responsible for predicting its corresponding ground truth value, thus avoiding the instability introduced by Hungarian bipartite matching.
The construction of the denoising query consists of two main components: label noise and 3D box noise. Additionally, this algorithm applies various noise addition scenarios to the ground truth (GT), as illustrated in Figure 6. Specifically, assuming a batch of data contains $d$ real GT values repeated $n$ times to construct $n$ groups of different denoising queries, the total number of denoising queries is $n \times d$, which is denoted by $D$. For label noise addition, the algorithm first generates $D$ probability values conforming to a normal distribution. It then uses a preset probability $p$ to screen the $k$ indexes of the $D$ probability values that are smaller than $p$. Next, $k$ positive integers representing random labels within the range of the number of categories are generated. These $k$ random labels are then placed according to the indexes, thus modifying the values at the real label locations (after $n$ repetitions) to labels with added noise. Finally, a fully connected layer encodes this noise query to obtain the noise query $Q_{Noise} \in \mathbb{R}^{D \times M}$.
Finally, the original content query and the noise query are concatenated along the number dimension to obtain the hybrid content query $Q_{hybrid}^{content} \in \mathbb{R}^{(K + D) \times M}$.
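A sketch of the label-noise construction is shown below; the embedding used to encode noisy labels into queries is an illustrative stand-in for the fully connected encoding described above, and the sampling distribution is simplified to a uniform one.

```python
import torch
import torch.nn as nn

def build_label_noise_queries(gt_labels, num_groups, num_classes, p, label_embed):
    """
    gt_labels:   (d,) ground-truth class indices of one sample
    num_groups:  n denoising groups -> D = n * d noise queries in total
    p:           probability that a label is replaced by a random class
    label_embed: nn.Embedding(num_classes, M) mapping noisy labels to queries (illustrative)
    """
    noisy = gt_labels.repeat(num_groups)                      # (D,) repeat the GT labels n times
    flip = torch.rand_like(noisy, dtype=torch.float) < p      # indexes whose value falls below p
    rand_labels = torch.randint(0, num_classes, (int(flip.sum()),), device=noisy.device)
    noisy[flip] = rand_labels                                 # overwrite the selected labels with noise
    return label_embed(noisy)                                 # (D, M) noise content queries Q_Noise
```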
For 3D box noise addition, each box contains six parameters $(x, y, z, w, h, l)$. The main operations of this algorithm are center displacement and scale scaling; to ensure small perturbations, the 3D box parameters are first normalized to 0–1. For the center displacement, a perturbation parameter $\lambda_1 \in (0, 1)$ is first sampled from a uniform distribution, and the offset corresponding to the center point $(x, y, z)$ is then calculated as follows:
$$|\Delta x| = \lambda_1 x, \quad |\Delta y| = \lambda_1 y, \quad |\Delta z| = \lambda_1 z$$
subject to the constraints
$$|\Delta x| < \frac{\lambda_1 w}{2}, \quad |\Delta y| < \frac{\lambda_1 h}{2}, \quad |\Delta z| < \frac{\lambda_1 l}{2}$$
Similarly, a perturbation parameter $\lambda_2 \in (0, 1)$ is sampled from the uniform distribution, and the offsets corresponding to $(w, h, l)$ are computed separately:
$$|\Delta w| = \lambda_2 w, \quad |\Delta h| = \lambda_2 h, \quad |\Delta l| = \lambda_2 l$$
The length, width, and height of the final 3D box are scaled to the interval [0, 2]. According to the dynamic position encoding algorithm above, the noise-added 3D box serves as the reference point, which is used to generate the position encoding, while the subsequent decoder query is the superposition of the content query and the position encoding.
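One plausible reading of the box-noise recipe is sketched below (uniform center shifts bounded by half the box size and scale factors clamped so sizes stay in [0, 2]); the exact sampling used in the paper may differ.

```python
import torch

def add_box_noise(boxes, lambda1, lambda2):
    """
    boxes: (D, 6) copies of GT boxes (x, y, z, w, h, l), already normalized to [0, 1].
    lambda1, lambda2: noise scales in [0, 1].
    """
    x, y, z, w, h, l = boxes.unbind(-1)
    # center displacement: uniform in [-lambda1 * size/2, +lambda1 * size/2] per axis
    dx = (torch.rand_like(x) * 2 - 1) * lambda1 * w / 2
    dy = (torch.rand_like(y) * 2 - 1) * lambda1 * h / 2
    dz = (torch.rand_like(z) * 2 - 1) * lambda1 * l / 2
    # scale noise: multiply each size by a factor in [1 - lambda2, 1 + lambda2], clamped to [0, 2]
    scale = 1 + (torch.rand_like(w) * 2 - 1) * lambda2
    w2, h2, l2 = (w * scale).clamp(0, 2), (h * scale).clamp(0, 2), (l * scale).clamp(0, 2)
    return torch.stack([x + dx, y + dy, z + dz, w2, h2, l2], dim=-1)
```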
After the design of the noise query is complete, the current query becomes the hybrid query $Q_{hybrid} \in \mathbb{R}^{(K + D) \times M}$. This query is subsequently used in the same way as in the DETR model; however, in the decoder, the hybrid query $Q_{hybrid}$ passes through a self-attention module that performs a global interaction. Without intervention, the queries used for matching would access the content of the noise queries, which leads to information leakage, since the noise queries are derived from the ground truth (GT) information. Therefore, this section designs a mask module, which serves two purposes: firstly, it prevents the queries used for matching from exchanging information with the noise queries; secondly, it prevents the noise queries from exchanging information between different groups. This ensures that the matching task and the denoising task are independent and do not interfere with each other. Whether the denoising part can see the matching part does not affect performance, because the queries in the matching part are learned queries and do not contain information about the GT objects.
This algorithm uses a matrix $Mask = [m_{ij}]_{A \times A}$ to represent the self-attention mask, where $A = K + n \times d$, $K$ is the number of queries used for matching, $d$ is the number of ground truth objects corresponding to this set of queries, and $n$ represents the number of denoising groups. If $m_{ij} = 1$, the $i$th query can interact with the $j$th query; if $m_{ij} = 0$, it cannot. As shown in Figure 4, the first $K$ rows and $K$ columns of the self-attention mask represent the matching part, the rows and columns from $K + 1$ to $K + d$ represent one denoising group, of which there are $n$, and so on. The final noisy query predictions yield categories and 3D box sizes that still require the computation of the loss $loss_{AN}$:
$$loss_{AN} = \lambda_{AN}\, loss_{AN}^{CLS} + \lambda_{AN}\, loss_{AN}^{BOX} = \lambda_{AN}\, \mathrm{focal\_loss}(P, G) + \lambda_{AN}\, \mathrm{SL1}(P, G)$$
where $\lambda_{AN}$ is used to control the weight of the auxiliary noise loss, $P$ represents the predicted value, $G$ represents the ground truth value, and $\mathrm{SL1}$ represents the Smooth L1 loss. Since the auxiliary noise is added to the ground truth, the values of $P$ and $G$ correspond one-to-one, without the need to perform positive–negative sample matching; this is theoretically equivalent to increasing the proportion of positive samples, which is the intrinsic reason for the effectiveness of the method.
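To make the masking rule concrete, the following sketch builds the $A \times A$ mask described above, using the paper’s convention that 1 means the $i$th query may attend to the $j$th query; the function name is illustrative.

```python
import torch

def build_attention_mask(num_match, num_gt, num_groups):
    """
    Build the (A x A) self-attention mask, A = num_match + num_groups * num_gt.
    Matching queries cannot see any noise query, and each denoising group only sees itself
    (plus, harmlessly, the learned matching queries).
    """
    A = num_match + num_groups * num_gt
    mask = torch.zeros(A, A, dtype=torch.bool)
    mask[:num_match, :num_match] = True          # matching part attends to itself only
    for g in range(num_groups):
        s = num_match + g * num_gt
        e = s + num_gt
        mask[s:e, s:e] = True                    # each denoising group attends within itself
        mask[s:e, :num_match] = True             # denoising queries may see the matching part
    return mask
```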

3. Results

3.1. Experimental Settings

Datasets: We carried out experiments on the nuScenes [34] dataset, which is the sole dataset that provides an abundance of multiview image and large-scale radar point cloud data, as shown in Table 1. We exclusively used keyframes in our experiments. The official evaluation metrics employed in our study encompass a wide range of performance indicators, including the mean Average Precision (mAP), mean Average Translation Error (mATE), mean Average Scale Error (mASE), mean Average Orientation Error (mAOE), mean Average Velocity Error (mAVE), mean Average Attribute Error (mAAE), as well as the nuScenes Detection Score (NDS). NDS integrates the above metrics by computing a weighted sum, with a weight of 5 assigned to the mAP, a weight of 1 assigned to each of the five error indicators, and the normalized total sum being calculated.
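For reference, writing the five true positive error metrics as $\mathrm{mTP} \in \{\mathrm{mATE}, \mathrm{mASE}, \mathrm{mAOE}, \mathrm{mAVE}, \mathrm{mAAE}\}$, the official definition of the score is
$$\mathrm{NDS} = \frac{1}{10}\left[5\,\mathrm{mAP} + \sum_{\mathrm{mTP}} \left(1 - \min(1, \mathrm{mTP})\right)\right]$$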
Model Details: In this experiment, the original size of the input image for the network is 1600 × 900. Through the feature extraction networks of Res101 and FPN, the 3−channel image was transformed into a feature map with channel sizes of 2048, 1024, 512, and 256. For radar data containing 14 dimensions of information, we used a multilayer perceptron to uniformly transform the radar features into 64 dimensions. Subsequently, the radar and image features were encoded to obtain transformed BEV features. These features have an initial size of 40,000, a feature dimension of 256, and a BEV feature resolution size of 0.512 m. A learnable position embedding was introduced for the BEV features, and the BEV fusion encoder consists of six encoding layers, thus refining the BEV features in each layer. The decoder has 900 object queries by default and contains six decoding layers. Finally, two network branches were used to predict the bounding box parameters and class labels for each object query. Each branch includes two fully connected layers, with a hidden dimension of 512 and an output dimension of 10.
Implementation Details: We pretrained a multicamera 3D object detector and used its weights as the pretraining weights for our camera-radar fusion network. We kept the weights of the camera feature extraction module frozen during the training of the camera-radar fusion network, while the other fusion modules were trained in an end-to-end fashion. The AdamW optimizer was used to train the model, with an initial learning rate of $2 \times 10^{-4}$, and the learning rate was adjusted using the Cosine Annealing algorithm. During the first 500 steps, the learning rate was linearly increased from one-third of the initial learning rate to the initial learning rate and then decreased to $1 \times 10^{-3}$ of the initial learning rate. The model was trained for a total of 24 epochs on four RTX 3090 GPUs, with a batch size of 1 for each GPU. For the ANQ module proposed in this paper, the hyperparameters were set as follows: the number of denoising groups $n$ to 5, the screening probability $p$ to 0.2, and the auxiliary noise weight $\lambda_{AN}$ to 0.75.
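A minimal sketch of this optimizer and warm-up/annealing schedule is shown below; the total step count and the placeholder model are assumptions for illustration.

```python
import math
import torch

model = torch.nn.Linear(256, 10)   # placeholder module standing in for the detector
base_lr = 2e-4
optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr)
warmup_steps, total_steps, warmup_start, final_ratio = 500, 100_000, 1.0 / 3, 1e-3

def lr_lambda(step):
    """Linear warm-up from lr/3 to lr over 500 steps, then cosine annealing to 1e-3 * lr."""
    if step < warmup_steps:
        return warmup_start + (1 - warmup_start) * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_ratio + (1 - final_ratio) * 0.5 * (1 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```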

3.2. Main Results

The baseline model for this experiment is referred to as IRBEVF, and the improved model proposed in this work is referred to as IRBEVF-Q. To validate the effectiveness of the proposed methods, this experiment compares the radar-image fusion model with state-of-the-art 3D object detection methods on both the nuScenes test set and the validation set; only methods with a paper or code on the nuScenes Detection Challenge leaderboards were used for comparison, and we used the results reported in the original papers.
As demonstrated in Table 2, employing the same ResNet101 backbone network on the nuScenes test set, our approach surpassed recent competitive camera-radar and multicamera methods. When compared to the benchmark model IRBEVF, the proposed enhancement scheme in this paper yielded improvements of 4.0% and 4.6% in the primary metrics NDS and mAP, respectively. Additionally, enhancements were observed across all five error metrics, with a notable 6.5% reduction in the distance error, which is an area where the benchmark model underperformed. This improvement can be attributed to the theoretical benefits of dynamic location encoding, which enables the query to concentrate more on location information regression, thereby reducing prediction errors. In comparison to the highly effective camera-radar fusion method CRAFT, our method achieves a 5.2% enhancement in NDS and a 6.5% improvement in the mAP. Moreover, compared to the fusion method RCBEV, which also utilizes a BEV scheme, our method achieved a 7.1% improvement in the NDS and a 7.6% enhancement in the mAP. The significant performance gains in scale and velocity estimation errors, where velocity information is derived from radar data availability, underscore the effectiveness of the radar-camera fusion method proposed in this paper. Furthermore, the improvements in scale and position errors indicate that the query optimization and structural enhancements proposed in this section lead to a more focused iteration of query information, thus benefiting model prediction and convergence. These advancements will be further illustrated in the experiments discussed in the subsequent subsection.
The comparisons on the validation set are presented in Table 3. Compared to the benchmark model IRBEVF, the proposed enhanced model in this paper achieved improvements of 3.7% and 3.6% in the NDS and mAP, respectively. Additionally, enhancements were observed in the mATE, mASE, mAVE, and mAAE, with gains of 11.0%, 0.5%, 1.5%, and 4.9%, respectively, with the most significant improvements seen in the reduction of the position estimation errors. Compared to CRAFT, our model demonstrated gains of 5.1% and 4.7% in key metrics, with the error in distance estimation further reduced from 20.4% to 9.4%, while the error in speed estimation slightly increased from 15.6% to 17.1%. Furthermore, when compared to the simultaneous BEV solution RCBEV, the method proposed in this paper outperformed in terms of the NDS and mAP by 7.1% and 7.6%, respectively, and it reduced the speed estimation error by 15%. These outstanding results on both the validation and test sets underscore the effectiveness of the model enhancements proposed in this paper.
Table 4 compares the camera-only benchmark methods, namely Centernet, CRAFT-I, and IRBEVF, with the camera-radar fusion methods, including Centerfusion, CRAFT, and IRBEVF-Q, in terms of the improvement in the mAP for each category on the validation set. It is evident that the overall enhancement of radar and image fusion-based Centerfusion over image-based Centernet was approximately 5%, and Centerfusion’s performance fell comprehensively behind that of both CRAFT and IRBEVF-Q. CRAFT demonstrated a significant improvement in detecting large vehicles such as cars, trucks, and buses. However, its performance enhancement was limited for small objects such as pedestrians, bicycles, traffic cones, and obstacles, with even negative improvement observed for traffic cones. The method proposed in this section, IRBEVF-Q, not only achieved good detection results for large objects like cars, trucks, and buses, with a performance gap from CRAFT that is not excessively large, but it also exhibited superior performance and improvement in detecting small objects where CRAFT struggled. Experimental results demonstrate that the BEV representation-based method offers a global representation, thus overcoming object occlusion issues arising from the lack of depth in pure vision and addressing challenges in small object detection in images. Utilizing BEV features as a fusion medium for image and radar demonstrates greater potential than traditional fusion methods. Furthermore, prediction based on the query obviates the need for postprocessing to remove redundant prediction boxes, thus highlighting the advantages of the end-to-end model.

3.3. Ablation Study

In this section, experiments are conducted on the nuScenes validation set to validate the effectiveness of each module in the proposed improved algorithm. The experiments were conducted uniformly using 1600 × 900 images, the ResNet-101 backbone network, and loaded pretrained weights to train the model for 24 epochs.
Table 5 examines the influence of image, radar, and their fusion methods on 3D target detection. A comparison between scenarios (a) and (b) in the table reveals that the inclusion of radar information marginally enhanced the performance of 3D target detection. However, the improvement resulting from the mere addition of radar information was limited, thus emphasizing the necessity to identify a more effective fusion method to facilitate better learning by the neural network. Comparing schemes (b) and (c) demonstrates that the correlation between the radar and image data led to further performance enhancement through projection. Meanwhile, comparing schemes (d) and (b) underscores the effectiveness of the BEV fusion proposed in this paper for integrating image and radar information. It highlights that simple projection fails to fully exploit the inherent relationship between radar and image features.
Table 6 examines the impact of HGQI, DPE, and ANQ, as proposed in this paper, on the overall model. For standalone cases (b), (c), and (d), there was an improvement of 0.7%, 0.9%, and 1.3%, respectively, compared to the baseline model, thus indicating that each module can positively influence the baseline model. In the combined scenarios (e), (f), and (g), where two modules were paired, improvements of 2.0%, 2.8%, and 2.2% were observed, respectively, compared to the benchmark model. When comparing the combination of HGQI and ANQ (f) to the individual HGQI and ANQ modules, improvements of 2.1% and 1.5%, respectively, became evident. This suggests that overcoming query initialization randomness and ensuring query stability and matching positively affects the query-based 3D object detection scheme. For scenarios (e) and (g), both of which included the DPE module, its inclusion led to a stable improvement (1.3% performance enhancement for both) compared to using the HGQI and ANQ modules alone. This indicates that dynamic position coding can effectively alleviate the hindrance posed by the original fixed position coding in 3D object detection. Finally, with the simultaneous operation of all three modules, it was observed that once the randomness in the model leading to instability was mitigated, the model performance was proportionally enhanced. This reflects the fact that the improvements proposed in this paper can achieve synergistic effects, thus resulting in a performance boost greater than the sum of individual improvements.
As an important hyperparameter, the number of BEV features controls both the size and accuracy of the model. The experiment shown in Table 7 demonstrates that the model’s performance improved with the number of BEV features until it reached 40,000. Beyond that point, increasing the number of BEV features only increased the number of parameters without effectively improving performance.
Table 8 displays the variation in the performance and parameters of IRBEVF-Q across different numbers of decoding layers. The table reveals that the model performance improved as the number of decoding layers increased from one to six. However, the magnitude of improvement diminished, and the model performance declined when the number of decoding layers reached seven, thus suggesting that performance saturates at six layers. Furthermore, the performance of the IRBEVF-Q model matched that of the IRBEVF model when the number of decoding layers was three. This suggests that the proposed improvement module enables the model to achieve comparable performance with fewer decoding iterations.
Figure 7 illustrates the NDS performance curves of the IRBEVF model and the IRBEVF-Q model. It is evident that the IRBEVF-Q model achieved peak performance by the 20th epoch, with subsequent fluctuations remaining near the peak performance. This indicates that the IRBEVF-Q model surpasses the IRBEVF model in terms of both performance and convergence speed. This improvement can be attributed to the enhanced model stability achieved through query optimization, which mitigates the impact of randomness on the model.
Noise and sensor failures are critical considerations in the operation of autonomous vehicles. We deliberately introduced noise and simulated sensor failures in our experiments. As seen in Figure 8a, noise was added to the projection matrix, with higher noise levels corresponding to larger perturbations. Subsequently, as seen in Figure 8b, experiments simulated the failure of the six image inputs and five radar inputs by removing them (or filling them with zero matrices). As depicted in Figure 8a, when employing IRBEVF-Q and another outstanding 3D detection approach, FUTR3D, we observe that higher errors in the camera’s extrinsic parameters led to more significant performance degradation. However, for the same level of noise error, the performance degradation of IRBEVF-Q compared to FUTR3D was less pronounced, indicating that IRBEVF-Q exhibits greater resilience to interference and is therefore more robust. From Figure 8b, we observe that both the loss of image frames and the loss of radar frames impacted the model’s performance. However, the impact of losing image frames is considerably larger than that of losing radar frames. This discrepancy arises from the high resolution and rich semantic information provided by images, which are crucial for model performance, whereas radar signals, with their lower resolution, serve primarily as supplementary information.

3.4. Visualization and Analysis

In Figure 9, we compare and analyze the 3D detection results of the open source radar–vision fusion detection work FUTR3D and the IRBEVF-Q model in this paper in the same scene. (a) On rainy days and at night, when the camera image is unclear, our IRBEVF-Q predicts more objects and more reliable information. First, the radar information contributes more potentially useful features, and second, the radar-image fusion method is more effective. (b) In sunny weather, our IRBEVF-Q is more accurate in predicting occluded and truncated objects. The distance information from the radar, once passed through our fusion method, provides a gain for prediction and better overcomes the disadvantage of images in predicting occluded objects. Figure 10 further contrasts the detection results of IRBEVF and IRBEVF-Q in the BEV view. Notably, IRBEVF-Q exhibited more accurate location predictions compared to IRBEVF, and it successfully anticipated a greater number of potential objects at a distance, thus facilitating road comprehension for autonomous vehicles.
Figure 11 illustrates the initial position distribution of the query in IRBEVF-Q during training, which is juxtaposed with the BEV coordinates within the true value box. It is evident that the relative position distribution of IRBEVF-Q within the range of 0 to 1 closely resembles the object distribution in the real coordinate system, thus spanning from −51.2 to 51.2. This alignment indicates that dynamic position coding enables the query to mitigate the adverse effects of random initialization during training.

4. Conclusions

This paper introduced a novel BEV fusion encoder, which leverages radar and image fusion to aggregate 3D spatial points and generate BEV features. Subsequently, we developed the 3D target detection model IRBEVF. Recognizing the significance of the query in the decoder, we proposed a heat map initialization query module and a dynamic position encoding module to address the limitations of random initialization. Additionally, we introduced an auxiliary noise query module to stabilize positive and negative sample matching and expedite convergence. Experimental evaluations conducted on the nuScenes dataset validate the efficacy of our approach in enhancing overall detection performance.

Author Contributions

Resources, F.C. and E.G.; Writing—original draft, G.C.; Writing—review and editing, G.C.; Funding acquisition, F.C. and E.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Education Research Project for Young and Middle-aged Teachers of Fujian Provincial Department of Education, grant number JAT220310, and the Minjiang University Scientific Research Promotion Fund, grant number MJY22025.

Data Availability Statement

The data are contained within the article.

Acknowledgments

We would like to thank J.K. for formal analysis and data curation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, T.; Zhu, X.; Pang, J.; Lin, D. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 913–922. [Google Scholar]
  2. Feng, D.; Haase-Schütz, C.; Rosenbaum, L.; Hertlein, H.; Glaeser, C.; Timm, F.; Wiesbeck, W.; Dietmayer, K. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1341–1360. [Google Scholar] [CrossRef]
  3. Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. Pointpainting: Sequential fusion for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4604–4612. [Google Scholar]
  4. Yin, T.; Zhou, X.; Krähenbühl, P. Multimodal virtual point 3d detection. Adv. Neural Inf. Process. Syst. 2021, 34, 16494–16507. [Google Scholar]
  5. Wang, C.; Ma, C.; Zhu, M.; Yang, X. Pointaugmenting: Cross-modal augmentation for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11794–11803. [Google Scholar]
  6. Xu, S.; Zhou, D.; Fang, J.; Yin, J.; Bin, Z.; Zhang, L. Fusionpainting: Multimodal fusion with adaptive attention for 3d object detection. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; IEEE: Berlin/Heidelberg, Germany, 2021; pp. 3047–3054. [Google Scholar]
  7. Bai, X.; Hu, Z.; Zhu, X.; Huang, Q.; Chen, Y.; Fu, H.; Tai, C.L. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1090–1099. [Google Scholar]
  8. Li, Y.; Deng, J.; Zhang, Y.; Ji, J.; Li, H.; Zhang, Y. EZFusion: A Close Look at the Integration of LiDAR, Millimeter-Wave Radar, and Camera for Accurate 3D Object Detection and Tracking. IEEE Robot. Autom. Lett. 2022, 7, 11182–11189. [Google Scholar] [CrossRef]
  9. Ma, X.; Zhang, Y.; Xu, D.; Zhou, D.; Yi, S.; Li, H.; Ouyang, W. Delving into localization errors for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4721–4730. [Google Scholar]
  10. Lim, T.Y.; Ansari, A.; Major, B.; Fontijne, D.; Hamilton, M.; Gowaikar, R.; Subramanian, S. Radar and camera early fusion for vehicle detection in advanced driver assistance systems. In Proceedings of the Machine Learning for Autonomous Driving Workshop at the 33rd Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 2. [Google Scholar]
  11. Kim, Y.; Choi, J.W.; Kum, D. Grif net: Gated region of interest fusion network for robust 3d object detection from radar point cloud and monocular image. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 10857–10864. [Google Scholar]
  12. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  13. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5099–5108. [Google Scholar]
  14. Mao, J.; Xue, Y.; Niu, M.; Bai, H.; Feng, J.; Liang, X.; Xu, H.; Xu, C. Voxel transformer for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 3164–3173. [Google Scholar]
  15. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705. [Google Scholar]
  16. Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; Li, H. Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10529–10538. [Google Scholar]
  17. Nabati, R.; Qi, H. Centerfusion: Center-based radar and camera fusion for 3d object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 1527–1536. [Google Scholar]
  18. Kim, Y.; Kim, S.; Choi, J.W.; Kum, D. Craft: Camera-radar 3d object detection with spatio-contextual fusion transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2023; Volume 37, pp. 1160–1168. [Google Scholar]
  19. Yang, Z.; Chen, J.; Miao, Z.; Li, W.; Zhu, X.; Zhang, L. Deepinteraction: 3d object detection via modality interaction. Adv. Neural Inf. Process. Syst. 2022, 35, 1992–2005. [Google Scholar]
  20. Cai, Y.; Zhang, W.; Wu, Y.; Jin, C. FusionFormer: A Concise Unified Feature Fusion Transformer for 3D Pose Estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–24 February 2024; Volume 38, pp. 900–908. [Google Scholar]
  21. Chen, Y.; Yu, Z.; Chen, Y.; Lan, S.; Anandkumar, A.; Jia, J.; Alvarez, J.M. Focalformer3d: Focusing on hard instance for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 8394–8405. [Google Scholar]
  22. Liu, H.; Teng, Y.; Lu, T.; Wang, H.; Wang, L. Sparsebev: High-performance sparse 3d object detection from multi-camera videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 18580–18590. [Google Scholar]
  23. Wang, H.; Tang, H.; Shi, S.; Li, A.; Li, Z.; Schiele, B.; Wang, L. UniTR: A Unified and Efficient Multi-Modal Transformer for Bird’s-Eye-View Representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 6792–6802. [Google Scholar]
  24. Yan, J.; Liu, Y.; Sun, J.; Jia, F.; Li, S.; Wang, T.; Zhang, X. Cross modal transformer: Towards fast and robust 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 18268–18278. [Google Scholar]
  25. Dong, X.; Zhuang, B.; Mao, Y.; Liu, L. Radar camera fusion via representation learning in autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1672–1681. [Google Scholar]
  26. Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Qiao, Y.; Dai, J. Bevformer: Learning bird’s-eye-view representation from multi-camera images via spatiotemporal transformers. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Cham, Switzerland, 2022; pp. 1–18. [Google Scholar]
  27. Ma, Y.; Wang, T.; Bai, X.; Yang, H.; Hou, Y.; Wang, Y.; Qiao, Y.; Yang, R.; Manocha, D.; Zhu, X. Vision-centric bev perception: A survey. arXiv 2022, arXiv:2208.02797. [Google Scholar]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  29. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  30. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  31. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; Dai, J. Deformable detr: Deformable transformers for end-to-end object detection. arXiv 2020, arXiv:2010.04159. [Google Scholar]
  32. Tang, Y.; Dorn, S.; Savani, C. Center3d: Center-based monocular 3d object detection with joint depth understanding. In Proceedings of the DAGM German Conference on Pattern Recognition, Tübingen, Germany, 28 September–1 October 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 289–302. [Google Scholar]
  33. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 213–229. [Google Scholar]
  34. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11621–11631. [Google Scholar]
  35. Zhou, T.; Chen, J.; Shi, Y.; Jiang, K.; Yang, M.; Yang, D. Bridging the view disparity between radar and camera features for multi-modal fusion 3d object detection. IEEE Trans. Intell. Veh. 2023, 8, 1523–1535. [Google Scholar] [CrossRef]
  36. Yu, F.; Wang, D.; Shelhamer, E.; Darrell, T. Deep layer aggregation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2403–2412. [Google Scholar]
  37. Wu, Z.; Chen, G.; Gan, Y.; Wang, L.; Pu, J. Mvfusion: Multi-view 3d object detection with semantic-aligned radar and camera fusion. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 2766–2773. [Google Scholar]
  38. Lee, Y.; Hwang, J.W.; Lee, S.; Bae, Y.; Park, J. An energy and GPU-computation efficient backbone network for real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 752–760. [Google Scholar]
Figure 1. The overall architecture of our implementation of 3D object detection consists of three main components: (a) Feature extraction module. (b) BEV fusion encoder module. The input radar and image features are encoded mainly through local image sampling attention and maximum adjacent radar sampling attention to generate BEV features. (c) Object decoder module. Content encoding is initialized using heat maps, and position encoding is generated using dynamic reference points. In addition, noisy queries are added to the query to help stabilize the matching process.
Figure 2. The details of BEV Fusion Encoder. Discretizing three-dimensional space into uniformly distributed points, project these points onto a multiscale feature map, and sample the points that fall into the feature map. In addition, the distance of these points from the radar point is calculated, and each point is sampled for the closest radar feature.
Figure 3. Local image feature sampling. Project the 3D reference point onto the graph to obtain the sampling position in the x and y directions. Obtain the characteristic value of the sampling point by obtaining the values around the sampling point.
Figure 4. Maximum proximity radar sampling. The distance between the radar point and the 3D reference point is calculated to obtain a 3D tensor. The features of the K radar points closest to each 3D reference point are taken as sampling features.
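The nearest-neighbour lookup can be sketched as follows. This is a simplified single-batch illustration; the function name sample_radar_features, the default k, and the max aggregation over the k neighbours are assumptions for the example, not the paper's exact design.

```python
import torch

def sample_radar_features(ref_points, radar_xyz, radar_feats, k=4):
    """Gather, for each 3D reference point, the features of its k nearest radar points.

    ref_points:  (N, 3) reference-point coordinates.
    radar_xyz:   (M, 3) radar point coordinates (assumes M >= k).
    radar_feats: (M, C) per-radar-point features (e.g., encoded RCS and velocity).
    Returns (N, C): a max over the k nearest radar features per reference point.
    """
    dist = torch.cdist(ref_points, radar_xyz)               # (N, M) pairwise Euclidean distances
    knn_idx = dist.topk(k, dim=-1, largest=False).indices   # (N, k) indices of the nearest radar points
    knn_feats = radar_feats[knn_idx]                         # (N, k, C) gathered neighbour features
    return knn_feats.max(dim=1).values                       # aggregate the neighbours by max
```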
Figure 5. Original position encoding versus dynamic position encoding. Traditional position encoding uses randomly generated, fixed encodings from which the reference points are derived. Dynamic position encoding first generates the reference points and then uses them to produce the position encoding; since each decoder layer outputs positional updates that refine the reference points, the position encoding also changes dynamically.
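The idea can be sketched as a small module that maps the current reference points to a position embedding at every decoder layer. This is a hedged illustration; the two-layer MLP, the embedding size, and the class name are assumptions rather than the exact design used in IRBEVF-Q.

```python
import torch
import torch.nn as nn

class DynamicPositionEncoding(nn.Module):
    """Derive query position encodings from reference points instead of fixed embeddings."""

    def __init__(self, embed_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, embed_dim),
            nn.ReLU(inplace=True),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, ref_points):
        # ref_points: (num_queries, 3), e.g., normalized to [0, 1].
        # Because the decoder refines ref_points layer by layer, recomputing
        # this encoding at every layer keeps the positional prior up to date.
        return self.mlp(ref_points)
```

In a hypothetical decoder loop, each layer would first recompute pos = dpe(ref_points), run attention with that position encoding, and then refine ref_points from the updated queries before the next layer.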
Figure 6. Construction of the auxiliary noise query. Noise labels and indices are generated probabilistically, and the ground-truth labels are perturbed through these indices.
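A minimal sketch of the label-noising step is shown below. It is our own simplified version: the function name make_noisy_labels and the flip probability are illustrative assumptions, and only the class-label perturbation is shown (box-coordinate noise is omitted).

```python
import torch

def make_noisy_labels(gt_labels, num_classes, flip_prob=0.2):
    """Perturb ground-truth class labels to build auxiliary noisy queries.

    Each label is replaced by a random class with probability `flip_prob`;
    the returned mask records which entries were disturbed, so the denoising
    branch can be trained to recover the original labels.
    gt_labels: (num_gt,) integer class labels.
    """
    noise = torch.rand(gt_labels.shape, device=gt_labels.device)
    flipped_mask = noise < flip_prob                           # which labels to disturb
    random_labels = torch.randint_like(gt_labels, 0, num_classes)  # random replacement classes
    noisy_labels = torch.where(flipped_mask, random_labels, gt_labels)
    return noisy_labels, flipped_mask
```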
Figure 7. Performance comparison across different numbers of training epochs.
Figure 8. Results of ablation experiments with different noise levels (a) and sensor faults (b).
Figure 9. Comparison of the 3D detection results of different methods across different scenes. The orange, blue, and red 3D boxes represent the predictions for cars, pedestrians, and bicycles, respectively. The yellow dashed ovals mark regions that merit attention.
Figure 10. Comparison of BEV detection results before and after the model improvements. Red circles highlight differences in prediction accuracy for distant objects, and orange circles highlight differences in predictions for potential objects.
Figure 11. Schematic representation of the improved query location distribution.
Table 1. Overview of the nuScenes dataset.
| Dataset | Camera | Radar | Boxes | Train | Test | Val | Class |
|---|---|---|---|---|---|---|---|
| nuScenes | 6 | 5 | 1.4 M | 28,130 | 6008 | 6019 | 23 (10) |
Table 2. Performance comparison for 3D object detection on the nuScenes test set. 'C' and 'R' refer to camera and radar, respectively. V2-99 and R101 denote VoVNet-99 and ResNet-101. ↑ indicates higher is better, and ↓ indicates lower is better. The same notation applies in the tables below.
| Method | Split | Modality | Backbone | NDS ↑ | mAP ↑ | mATE ↓ | mASE ↓ | mAOE ↓ | mAVE ↓ | mAAE ↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| RCBEV [35] | test | C+R | R101 | 0.486 | 0.406 | 0.484 | 0.257 | 0.587 | 0.702 | 0.140 |
| CenterFusion | test | C+R | DLA34 [36] | 0.449 | 0.326 | 0.631 | 0.261 | 0.516 | 0.614 | 0.115 |
| MVFusion [37] | test | C+R | V2-99 [38] | 0.517 | 0.453 | 0.569 | 0.246 | 0.379 | 0.781 | 0.128 |
| CRAFT | test | C+R | DLA34 | 0.523 | 0.411 | 0.467 | 0.268 | 0.456 | 0.519 | 0.114 |
| IRBEVF | test | C+R | R101 | 0.535 | 0.430 | 0.622 | 0.261 | 0.393 | 0.377 | 0.139 |
| IRBEVF-Q | test | C+R | R101 | 0.575 | 0.476 | 0.557 | 0.241 | 0.365 | 0.335 | 0.132 |
Table 3. The 3D detection results on nuScenes val set.
| Method | Split | Modality | Backbone | NDS ↑ | mAP ↑ | mATE ↓ | mASE ↓ | mAOE ↓ | mAVE ↓ | mAAE ↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| RCBEV | val | C+R | R101 | 0.497 | 0.381 | 0.526 | 0.272 | 0.445 | 0.465 | 0.185 |
| CenterFusion | val | C+R | DLA34 | 0.453 | 0.332 | 0.649 | 0.263 | 0.535 | 0.540 | 0.142 |
| MVFusion | val | C+R | R101 | 0.455 | 0.380 | 0.675 | 0.258 | 0.372 | 0.833 | 0.196 |
| CRAFT | val | C+R | DLA34 | 0.517 | 0.411 | 0.449 | 0.276 | 0.454 | 0.486 | 0.176 |
| IRBEVF | val | C+R | R101 | 0.531 | 0.419 | 0.653 | 0.271 | 0.342 | 0.330 | 0.188 |
| IRBEVF-Q | val | C+R | R101 | 0.568 | 0.457 | 0.543 | 0.266 | 0.342 | 0.315 | 0.139 |
Table 4. Per-class comparisons on the nuScenes val set. 'C.V.', 'Ped.', 'M.C.', and 'T.C.' denote construction vehicle, pedestrian, motorcycle, and traffic cone, respectively. CenterNet, CRAFT-I, and DETR3D are the camera baselines of CenterFusion, CRAFT, and IRBEVF.
| Method | Input | Car | Truck | Bus | Trailer | C.V. | Ped. | M.C. | Bicycle | T.C. | Barrier | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CenterNet | C | 48.4 | 23.1 | 34.0 | 13.1 | 3.5 | 37.7 | 24.9 | 23.4 | 55.0 | 45.6 | 30.6 |
| CenterFusion | C+R | 52.4 (+4.0) | 26.5 (+3.4) | 36.2 (+2.2) | 15.4 (+2.3) | 5.5 (+2.0) | 38.9 (+1.2) | 30.5 (+5.6) | 22.9 (−0.5) | 56.3 (+1.3) | 47.0 (+1.4) | 33.2 (+2.6) |
| CRAFT-I | C | 52.4 | 25.7 | 30.0 | 15.5 | 5.4 | 39.3 | 28.6 | 29.8 | 57.5 | 47.8 | 33.2 |
| CRAFT | C+R | 69.9 (+17.2) | 37.6 (+11.9) | 47.3 (+17.3) | 20.1 (+4.3) | 10.7 (+5.3) | 46.2 (+6.9) | 39.5 (+10.9) | 31.0 (+1.2) | 57.1 (−0.4) | 51.1 (+3.3) | 41.1 (+7.9) |
| IRBEVF | C | 69.1 | 36.4 | 45.6 | 19.3 | 12.3 | 48.7 | 42.7 | 44.0 | 58.0 | 49.8 | 41.9 |
| IRBEVF-Q | C+R | 71.6 (+2.5) | 41.7 (+5.3) | 50.4 (+4.8) | 22.2 (+2.9) | 13.3 (+1.0) | 53.7 (+5.0) | 46.6 (+3.9) | 45.5 (+1.5) | 60.1 (+2.1) | 53.8 (+4.0) | 45.8 (+3.9) |
Table 5. Results of ablation experiments with different modules of the encoder.
| | Image | Radar | BEV Fusion Encoding | Projection Fusion | NDS | mAP |
|---|---|---|---|---|---|---|
| (a) | ✓ | - | - | - | 0.425 | 0.346 |
| (b) | ✓ | ✓ | - | - | 0.459 | 0.350 |
| (c) | ✓ | ✓ | - | ✓ | 0.475 | 0.380 |
| (d) | ✓ | ✓ | ✓ | - | 0.531 | 0.430 |
Table 6. Results of ablation experiments on different modules of the decoder.
| | HGQI | DPE | ANQ | NDS |
|---|---|---|---|---|
| (a) | - | - | - | 0.531 |
| (b) | ✓ | - | - | 0.538 |
| (c) | - | ✓ | - | 0.540 |
| (d) | - | - | ✓ | 0.544 |
| (e) | ✓ | ✓ | - | 0.551 |
| (f) | ✓ | - | ✓ | 0.559 |
| (g) | - | ✓ | ✓ | 0.553 |
| (h) | ✓ | ✓ | ✓ | 0.568 |
Table 7. Performance and parameters of different numbers of BEV features.
| BEV Features | 10,000 | 20,000 | 30,000 | 40,000 | 50,000 |
|---|---|---|---|---|---|
| NDS ↑ | 0.505 | 0.411 | 0.415 | 0.417 | 0.416 |
| Params ↓ | 63 M | 65 M | 67 M | 71 M | 74 M |
| Memory ↓ | 10.7 G | 14.2 G | 17.4 G | 21.1 G | 24.1 G |
Table 8. Performance and parameters of different decoding layers.
| Decoder Layers | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| NDS ↑ | 0.398 | 0.485 | 0.532 | 0.554 | 0.565 | 0.568 | 0.566 |
| Params ↓ | 63.8 M | 65.3 M | 66.7 M | 68.2 M | 69.7 M | 71.2 M | 72.8 M |
| Memory ↓ | 15.6 G | 16.6 G | 17.7 G | 18.9 G | 20.0 G | 21.1 G | 23.1 G |