Article

Ego-Vehicle Speed Correction for Automotive Radar Systems Using Convolutional Neural Networks

by Sunghoon Moon, Daehyun Kim and Younglok Kim *

Department of Electronic Engineering, Sogang University, Seoul 04107, Republic of Korea

* Author to whom correspondence should be addressed.
Sensors 2024, 24(19), 6409; https://doi.org/10.3390/s24196409
Submission received: 16 August 2024 / Revised: 30 September 2024 / Accepted: 1 October 2024 / Published: 3 October 2024
(This article belongs to the Section Radar Sensors)

Abstract:
The development of autonomous driving vehicles has increased the global demand for robust and efficient automotive radar systems. This study proposes an automotive radar-based ego-vehicle speed detection network (AVSD Net) model using convolutional neural networks for estimating the speed of the ego vehicle. The preprocessing and postprocessing methods used for vehicle speed correction are presented in detail. The AVSD Net model exhibits characteristics that are independent of the angular performance of the radar system and its mounting angle on the vehicle, thereby reducing the loss of the maximum detection range without requiring a downward or wide beam for the elevation angle. The ego-vehicle speed is effectively estimated when the range–velocity spectrum data are input into the model. Moreover, preprocessing and postprocessing facilitate an accurate correction of the ego-vehicle speed while reducing the complexity of the model, enabling its application to embedded systems. The proposed ego-vehicle speed correction method can improve safety in various applications, such as autonomous emergency braking systems, forward collision avoidance assist, adaptive cruise control, rear cross-traffic alert, and blind spot detection systems.

1. Introduction

The advancement of autonomous driving vehicles and the mandatory implementation of autonomous emergency braking systems (AEBSs) in various countries have increased the demand for automotive radar systems that offer increased robustness to adverse weather and variable lighting conditions while providing long-range and accurate measurements [1,2,3,4]. In addition to their application to AEBSs, automotive radar systems can be used in various safety applications, such as forward collision-avoidance assist (FCA), adaptive cruise control (ACC), rear cross-traffic alert, and blind spot detection systems [5,6]. Unlike other sensors used in autonomous driving systems, such as cameras and lidar, radar systems reliably withstand conditions such as direct sunlight and fog while providing accurate measurements; this reliability underlies the continued advancement of autonomous driving technology and expands the scope of radar applications in autonomous vehicles.
Determining the absolute speed of an object is crucial in automotive radar systems. These systems differentiate between the moving and stationary states of an object based on its absolute speed [7]. This distinction is crucial for decision-making during ACC when vehicles ahead are being tracked, and for AEBS and FCA systems when predicting the time to collision and providing information for emergency braking and collision-warning alarms [8,9]. Additionally, the distinction between moving and stationary states facilitates the classification of objects such as vehicles, bicycles, and pedestrians [10] and aids in determining and removing clutter, which reduces radar detection performance [11,12].
In radar systems that can only measure the relative speed of objects, the absolute speed of an object is determined by combining the ego-vehicle speed and the relative speed of the object detected by the radar. Therefore, accurate information on the speed of the ego-vehicle must be obtained. Generally, automotive radar systems receive minimal vehicle data, such as the wheel speed and yaw rate, from the Controller Area Network (CAN). The ego-vehicle speed is indirectly measured via the wheel-rotation speed monitored using a wheel-speed sensor [13,14,15]. However, if the contact area between the tire and road increases because of the air pressure or wear of the tire, the radius of the tire may decrease, resulting in a deviation in the ego-vehicle speed estimation [16,17,18]. Therefore, the ego-vehicle speed information in automotive radar systems must be corrected to minimize the object speed distortion caused by this deviation.
To this end, we propose an automotive radar-based ego-vehicle speed detection network (AVSD Net) model that uses convolutional neural networks (CNNs) and outputs the encoding value of the ego-vehicle speed feature from the range-velocity (RV) spectrum. Furthermore, we introduce postprocessing to increase the accuracy of the ego-vehicle speed extracted using the AVSD Net model. The primary contributions of the study can be summarized as follows:
  • The proposed model is independent of the radar mounting angle on the vehicle.
  • The model is independent of the angular performance of the automotive radar system.
  • The loss in the maximum detection distance is reduced because the model does not require a downward or wide beam for the elevation angle.
  • The complexity is reduced by simplifying the model and reducing the parameters, enabling the application of the model in embedded systems.
The remainder of this paper is organized as follows: Section 2 reviews related research on ego-vehicle speed correction. Section 3 introduces the developed method and presents the results of the measurement and simulation data. The proposed CNN-based model used to estimate the ego-vehicle speed and the preprocessing and postprocessing steps are also explained in this section. Section 4 discusses the results of the AVSD Net model and analyzes the postprocessing results described in Section 3. Finally, Section 5 summarizes the conclusions of the study.

2. Related Work

Several methods have been proposed for correcting the ego-vehicle speed.
  • Orthogonal distance regression (ODR): In a previous study [18], ODR algorithms were used to analyze the correlation between the speed of an ego vehicle and the relative speed/angle of stationary objects to predict the ego-vehicle speed. When there was significant noise in the measured values or when stationary objects were not detected, the ego-vehicle speed was predicted using the Kalman filter.
  • Hough transform: Lim and Lee [19] aimed to accurately estimate the speed of the ego vehicle by examining the distribution characteristics of stationary objects within an angle-velocity domain and by identifying the intersection point of the Hough transform.
  • Elevated objects: Kingery and Song [20] proposed a method to compensate for the elevation angle and estimate the ego-vehicle speed by analyzing the radial velocity difference in elevated objects, such as buildings and tall trees.
  • Ground backscattering: A unique technique has been proposed [21] for estimating the ego-vehicle speed by analyzing backscattering signals reflected from the ground.
  • Random sample consensus (RANSAC): The ego-vehicle speed was estimated by identifying stationary objects using the RANSAC algorithm and analyzing the radial velocity errors associated with these stationary objects [22].
  • Complex-valued neural network (CVNN): An approach presented in [23,24] predicts the ego-vehicle speed using a sophisticated deep network model, referred to as a CVNN, wherein the multi-channel RV spectrum serves as the input data for the model.
In previous studies [18,19,20,21], the angular performance of stationary objects was an important parameter that relied significantly on the angular accuracy of the radar system and its mounting angle on the vehicle. In [22], the yaw rate was separately estimated to keep it independent of the radar mounting angle; however, the estimation of ego-vehicle speed was dependent on the radar angle performance. Additionally, certain studies required intentional beam steering or a wide beam at an elevated angle [20,21], which reduced the maximum detection range. Although specifics regarding the number of layers and parameters in the deep network were omitted in [23,24], the network architecture comprised a series of deep feature extractions composed of multiple residual groups, which in turn included multiple residual blocks. Moreover, the utilization of complex numbers increased the complexity of these methods.
Table 1 summarizes the constraints associated with each existing method used for estimating the ego-vehicle speed.
This study presents a novel model for predicting and correcting the ego-vehicle speed for the automotive radar system using a CNN to overcome the constraints reported in previous studies. The developed method is independent of the angular performance of the radar and its mounting angle on the vehicle. This reduces the loss of the maximum detection range because a downward or wide beam is not required for the elevation angle. Furthermore, the proposed model can be applied to embedded systems owing to its simplicity and reduced parameter size.

3. Materials and Methods

3.1. Measurement Dataset

3.1.1. Configuration of the Vehicle and Radar System

We used an automotive radar system built around the Texas Instruments (TI) AWR2944 board, a millimeter-wave (mmWave) sensor developed for automotive radar applications operating within the 76–81 GHz frequency range [25]. Raw analog-to-digital converter (ADC) samples were captured by interfacing the board with a TI DCA1000 [26]. A global positioning system (GPS) module provided reference information for the ego-vehicle speed and heading angle. The ego-vehicle speed was also extracted from the CAN bus based on the wheel speed. Driving videos were recorded using a webcam to monitor the driving conditions. Figure 1 illustrates the configuration of the vehicle and radar system.

3.1.2. Waveform Specifications of the Automotive Radar System

Figure 2 depicts the waveform of the automotive radar system used in this study. Table 2 details the waveform specifications.
The waveform spanned 77.1–77.31 GHz with a bandwidth (BW) of 211.35 MHz, a sample rate (SR) of 20 MHz, and an ADC sampling time ($T_{ADC}$) of 12.8 µs. The range resolution ($R_{res}$) was determined to be 0.71 m using Equation (1) [27,28]:

$$R_{res} = \frac{c}{2\,BW}, \tag{1}$$

where $c$ denotes the speed of light. The maximum detection range ($R_{max}$) was 90.78 m and was determined using Equation (2) [27,28] as follows:

$$R_{max} = \frac{c \cdot T_{ADC} \cdot SR}{4\,BW}. \tag{2}$$

The wavelength ($\lambda$) at the center frequency ($f_c$) of 77.2 GHz in the 77.1–77.31 GHz range is 3.883 mm, and the velocity resolution ($V_{res}$) of 0.38 m/s is defined by Equation (3) [27,28]:

$$V_{res} = \frac{c}{2\,f_c \cdot PRI \cdot N_c} = \frac{\lambda}{2 \cdot PRI \cdot N_c}. \tag{3}$$

The maximum velocity ($V_{max}$) is 24.27 m/s, which can be expressed as follows [27,28]:

$$V_{max} = \frac{c}{4\,f_c \cdot PRI} = \frac{\lambda}{4 \cdot PRI}. \tag{4}$$
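As a quick numerical check, the following Python sketch reproduces these four quantities from the Table 2 parameter values (the script and variable names are ours; the results match the table to rounding):

```python
# Sketch: deriving the quantities of Equations (1)-(4) from Table 2.
C = 299_792_458.0          # speed of light (m/s)

BW = 211.3536e6            # bandwidth (Hz)
SR = 20e6                  # ADC sample rate (Hz)
T_ADC = 12.8e-6            # ADC sampling time (s)
PRI = 40e-6                # pulse repetition interval (s)
N_C = 128                  # number of chirps
F_C = 77.2e9               # center frequency (Hz)

r_res = C / (2 * BW)                  # Eq. (1): range resolution
r_max = C * T_ADC * SR / (4 * BW)     # Eq. (2): maximum detection range
v_res = C / (2 * F_C * PRI * N_C)     # Eq. (3): velocity resolution
v_max = C / (4 * F_C * PRI)           # Eq. (4): maximum unambiguous velocity

print(f"R_res = {r_res:.4f} m")       # ~0.7092 m
print(f"R_max = {r_max:.4f} m")       # ~90.78 m
print(f"V_res = {v_res:.4f} m/s")     # ~0.3792 m/s
print(f"V_max = {v_max:.4f} m/s")     # ~24.27 m/s
```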

3.1.3. Measurement Data Results

We measured the data twice to obtain datasets of 28 min 46 s and 32 min 26 s. Figure 3 presents the measurement results at an ego-vehicle speed of 17.5 m/s. The dashed and solid red lines in each figure represent the stationary objects on the left- and right-hand sides, which are the left and right guardrails, respectively.
This result indicates that the radial velocity is strongly affected by the angle of a stationary object when the object is close to the automotive radar. Conversely, when stationary objects are far from the radar, the radial velocity is less affected by their angle. For instance, the angle of a stationary object at 90 m tends to approach 0°, so its radial velocity approaches −17.5 m/s, the negative of the ego-vehicle speed. As illustrated in Figure 4, when a vehicle equipped with an automotive radar moves in the y direction, the angle of the green stationary object is larger than that of the blue one; therefore, its radial velocity deviates more. The red stationary object placed at an extremely long distance has a considerably small angle; therefore, the radar observes a radial velocity close to the negative of the ego-vehicle speed. The relationship between the stationary target angle ($\theta$), radial velocity ($V_r$), and ego-vehicle speed ($V_{ego}$) can be expressed as follows [18,20]:

$$V_r = -V_{ego} \cdot \cos\theta. \tag{5}$$
Figure 4b depicts the radial velocity trend with respect to position x. We observed that the velocity is close to the opposite speed of the ego vehicle at approximately 60 m or higher. We intend to estimate the ego-vehicle speed by training features where distant stationary objects converge to a velocity opposite to that of the ego vehicle, as depicted in Figure 3b.
As explained in Section 1, the ego-vehicle speed of the CAN signal based on the wheel speed may differ from the actual speed owing to differences in the tire radius caused by the air pressure and wear of the tire. Figure 5 illustrates the ego-vehicle speed measured by the GPS and CAN. Considering the ego-vehicle speed obtained from the GPS signal as reference data, the speed ratio was calculated as an indicator of the difference in the ego-vehicle speed of the wheel-speed-based CAN signal, as follows:
$$\text{Speed Ratio} = \operatorname{median}\left(\frac{\text{Ego-vehicle Speed from GPS}_i}{\text{Ego-vehicle Speed from CAN}_i}\right), \tag{6}$$

where $i$ indexes the sequentially collected data. However, the GPS signal may not be received, and errors may occur because of non-line-of-sight conditions caused by tunnels or high-rise buildings, which may reduce the accuracy of the measured ego-vehicle speed [29,30]. Moreover, in the case of CAN signals based on wheel-speed sensors, ego-vehicle speed errors may occur when the vehicle is cornering [31]. Therefore, the speed ratio was calculated as a median to minimize errors induced by outliers in the GPS and CAN data. Based on the measured results, the calculated speed ratio is 1.0183, indicating a difference of approximately 1.8%. This discrepancy demonstrates the necessity of precisely correcting the ego-vehicle speed in automotive radar systems to obtain accurate and reliable information before the object properties are defined.
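For illustration, a minimal sketch of this median-based ratio estimate; the speed arrays below are hand-made examples, not measured data:

```python
import numpy as np

# Sketch of the Equation (6) speed-ratio estimate: the median over
# per-sample GPS/CAN ratios suppresses outliers caused by GPS dropouts
# and cornering. Array contents are illustrative.
v_gps = np.array([17.4, 17.5, 17.6, 3.1, 17.5])   # GPS speeds (m/s); 3.1 is an outlier
v_can = np.array([17.1, 17.2, 17.3, 17.2, 17.2])  # wheel-speed-based CAN speeds (m/s)

speed_ratio = np.median(v_gps / v_can)            # robust to the outlier sample
print(f"speed ratio = {speed_ratio:.4f}")         # ~1.017 here; ~1.0183 in the paper's data
```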
It is crucial to consider several aspects when determining whether the measured results alone can be used for training. First, we acquired two datasets of 28 min 46 s and 32 min 26 s using the automotive radar, comprising 34,538 and 38,909 frames, respectively. A sufficient amount of data appears to be available when the complete dataset is considered; however, if the outlier data generated by GPS failure and cornering are excluded, limited usable data remain. Second, the automotive radar data, measured at intervals of 0.05 s, are likely to cause overfitting during training because consecutive frames are highly correlated. For instance, the automotive radar data at a specific time (t) do not differ significantly from the data at the previous instant (t − 1) or the subsequent instant (t + 1). Third, data diversity may not be ensured given the driving habits of the driver. For instance, data obtained from drivers who prefer driving in a specific lane or at high or low speeds are not diverse. Consequently, data diversity must be ensured via simulated data to enhance the training results.

3.2. Simulation Dataset

3.2.1. Simulation Methods

The simulation was performed under the same conditions as the automotive radar system used for the measurements. Four channels were used for Tx and Rx, and the antenna channel spacing and pattern were simulated by referring to the antenna pattern image included in the TI AWR2944 user guide [32]. Furthermore, the waveform used in the simulation corresponds to the specifications presented in Table 2.
When the target information is predetermined (Figure 6), the distance between the target and Tx ($R_{tx}^{n}$) and the distance between the target and Rx ($R_{rx}^{m}$) can be calculated from their respective positions. Subsequently, the distance between the target and each TRx channel ($R_{trx}^{n,m}$) can be derived as follows:

$$R_{trx}^{n,m} = R_{tx}^{n} + R_{rx}^{m}. \tag{7}$$
The radial velocity ($V_r$) of the target can be defined from the predefined velocity ($V_t$) and angle ($\theta_t$) of the target and the ego-vehicle speed ($V_{ego}$), as follows:

$$V_r = V_t - V_{ego} \cdot \cos\theta_t. \tag{8}$$
The frequency of the frequency-modulated continuous wave (FMCW) radar can be expressed as

$$f(t) = f_0 + f_m t, \tag{9}$$

where $f_0$ represents the start frequency and $f_m t$ denotes the modulated frequency over time. The transmitted signal can be expressed as

$$S_{tx}(t) = A \cos\left(2\pi f(t)\,t\right), \tag{10}$$

where $A$ denotes the amplitude. Equation (10) can be rewritten using Equation (9) as follows:

$$S_{tx}(t) = A \cos\left(2\pi\left(f_0 t + f_m t^2\right)\right). \tag{11}$$

Considering the distance ($R$) and radial velocity ($V_r$) of the target, the delay time ($\tau$) can be calculated as

$$\tau = \frac{2\left(R + V_r t\right)}{c}. \tag{12}$$

The received signal can then be expressed as follows:

$$S_{rx}(t) = \cos\left(2\pi\left(f_0 (t - \tau) + f_m (t - \tau)^2\right)\right). \tag{13}$$

Finally, the ADC signal can be simulated using the mixed signal as

$$S_{mix}(t) = S_{tx}(t) \cdot S_{rx}(t). \tag{14}$$
If the previously described equations are expressed in a more algorithmic form, with subscript $s_i$ indexing the $N_s$ ADC samples and subscript $c_i$ indexing the $N_c$ chirps, we obtain

$$s_i = 0, 1, \ldots, N_s - 1, \qquad c_i = 0, 1, \ldots, N_c - 1. \tag{15}$$

The time ($t_{s_i,c_i}$) and modulation frequency ($f_{s_i,c_i}$) can be expressed as two-dimensional (2D) matrices over the ADC samples and chirps, as follows:

$$t_{s_i,c_i} = \frac{s_i \cdot T_{ADC}}{N_s - 1} + c_i \cdot PRI, \tag{16}$$

$$f_{s_i,c_i} = f_s + \frac{s_i \cdot BW}{N_s - 1}, \tag{17}$$

where $T_{ADC}$ indicates the ADC sampling time, $PRI$ represents the pulse repetition interval, $f_s$ denotes the start frequency, and $BW$ indicates the bandwidth (Table 2). Additionally, the delay times induced by the target distance ($\tau_r$) and Doppler ($\tau_d$) can be expressed as

$$\tau_r = \frac{R_{tx}^{n} + R_{rx}^{m}}{c}, \tag{18}$$

$$\tau_d = \frac{2 V_r \cdot t_{s_i,c_i}}{c}, \tag{19}$$

where $c$ denotes the speed of light, $n$ denotes the Tx channel, $m$ indicates the Rx channel, and $V_r$ represents the radial velocity, as defined by Equation (8). Additionally, each signal in the Tx channel ($Tx_{s_i,c_i}^{n}$) and the Rx channel ($Rx_{s_i,c_i}^{n,m}$) can be expressed as follows:

$$Tx_{s_i,c_i}^{n} = \cos\left(2\pi \cdot f_{s_i,c_i} \cdot t_{s_i,c_i}\right), \tag{20}$$

$$Rx_{s_i,c_i}^{n,m} = \cos\left(2\pi \cdot f_{s_i,c_i} \cdot \left(t_{s_i,c_i} + \tau_r + \tau_d\right)\right). \tag{21}$$

Finally, the Tx signal ($S_{tx}$) and the Rx signal for each channel ($S_{rx}^{m}$) can be obtained as

$$S_{tx} = \sum_{k=1}^{n} Tx_{s_i,c_i}^{k}, \tag{22}$$

$$S_{rx}^{m} = \sum_{k=1}^{n} Rx_{s_i,c_i}^{k,m}. \tag{23}$$
The ADC signal is obtained from the mixed signal as in Equation (14), and the RV spectrum is obtained by applying the 2D fast Fourier transform to the ADC signal [33]. Figure 7 outlines the simulation method.
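The following is a minimal single-target, single-channel Python sketch of this simulation under the Table 2 parameters; the target geometry and the co-located TRx simplification ($R_{tx} + R_{rx} = 2R$) are our illustrative assumptions, not the paper's full multi-channel setup:

```python
import numpy as np

# Sketch of Equations (15)-(21), the Eq. (14) mix, and the 2D FFT.
C = 299_792_458.0
F_S, BW = 77.0991e9, 211.3536e6     # start frequency, bandwidth (Table 2)
T_ADC, PRI = 12.8e-6, 40e-6         # ADC sampling time, PRI (Table 2)
N_S, N_C = 256, 128                 # ADC samples per chirp, chirps per frame

R = 40.0                            # illustrative target range (m)
V_EGO, THETA = 17.5, 0.0            # ego speed (m/s), target angle (rad)
v_r = -V_EGO * np.cos(THETA)        # Eq. (8) with V_t = 0 (stationary target)

s_i = np.arange(N_S)[:, None]       # Eq. (15): ADC-sample index (column)
c_i = np.arange(N_C)[None, :]       #           chirp index (row)
t = s_i * T_ADC / (N_S - 1) + c_i * PRI   # Eq. (16): 2D time matrix
f = F_S + s_i * BW / (N_S - 1)            # Eq. (17): modulation frequency

tau_r = 2 * R / C                   # Eq. (18), co-located TRx assumption
tau_d = 2 * v_r * t / C             # Eq. (19): Doppler delay

tx = np.cos(2 * np.pi * f * t)                    # Eq. (20)
rx = np.cos(2 * np.pi * f * (t + tau_r + tau_d))  # Eq. (21)

adc = tx * rx                                      # Eq. (14): mixed ADC signal
rv = np.fft.fftshift(np.fft.fft2(adc), axes=1)     # 2D FFT -> RV spectrum
rv = rv[: N_S // 2]                                # keep the positive-range half
peak = np.unravel_index(np.argmax(np.abs(rv)), rv.shape)
print(f"peak at range bin {peak[0]}, velocity bin {peak[1]}")
# Expect the peak near R / R_res in range and near v_r / V_res (shifted) in velocity.
```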

3.2.2. Simulation Results

Figure 8 presents a comparison of the simulation results described in Section 3.2.1 with the results obtained from the measured data. Figure 8a presents the measured results obtained using the automotive radar system, and Figure 8b shows the simulation results for targets similar to those in the measurement. The dashed red line indicates the left continuous stationary objects (the left guardrail), while the solid red line represents the right continuous stationary objects (the right guardrail). The adequate agreement between the simulation and measurement results confirms that the simulation data can be used to ensure dataset diversity and to supplement the insufficient measurement data.
We simulated and extracted various datasets by drawing stationary objects from a continuous uniform distribution over a certain range for every simulation, without biasing toward specific cases. The distribution is given by

$$U(a, b) = \begin{cases} \dfrac{1}{b - a}, & a \le x \le b \\ 0, & \text{otherwise}. \end{cases} \tag{24}$$
Based on the coordinates indicated in Figure 4a, Table 3 lists the continuous uniform distribution ranges of the stationary object sets. Here, X (the lateral direction from the vehicle) ranges from −15 m to +15 m, and Y (the longitudinal direction from the vehicle) begins at 1–3 m and ends at 70–90 m with 1 m spacing. The number of stationary object sets ranges from one to two. For example, when +10 m is selected for the X position (−15 m to +15 m) by the continuous uniform distribution, 2 m is selected for the Y start position (1–3 m), and 80 m is selected for the Y end position (70–90 m), a set of stationary objects is generated at X = +10 m with 1 m spacing between Y = 2 m and Y = 80 m.
Moreover, when the number of stationary object sets is two, two different sets of stationary objects are generated by a continuous uniform distribution.
As mentioned in Section 3.1.3, the objective of this study is to estimate the ego-vehicle speed using the stationary targets in the RV spectrum. Therefore, to ensure robustness during training, extraneous objects such as moving vehicles, which interfere with the estimation of the ego-vehicle speed on the road, were created in the ranges presented in Table 4. Similar to the stationary objects, the extraneous objects were generated with X ranging from −10 m to +10 m, Y ranging from 1 m to 90 m, and the radial velocity ranging from −23 m/s to +23 m/s, based on the coordinates depicted in Figure 4a. In total, 20 to 25 extraneous objects were created, and the continuous uniform distribution of Equation (24) was used in the same way. For example, when 0 m is selected for X (−10 m to +10 m) by the continuous uniform distribution, 40 m is selected for Y (1 m to 90 m), and 5 m/s is selected for the radial velocity (−23 m/s to +23 m/s), an extraneous object with a radial velocity of 5 m/s is created at X = 0 m and Y = 40 m. Additionally, when 22 extraneous objects are selected, 22 different extraneous objects are created by the continuous uniform distribution. Finally, we used 73,447 and 60,000 frames of measurement and simulation data, respectively, to train the ego-vehicle speed (Table 5).
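A minimal sketch of this scene randomization, assuming the ranges in Tables 3 and 4 (the function names and data layout are ours):

```python
import numpy as np

# Sketch: drawing one simulated scene from the continuous uniform
# distributions of Equation (24), with ranges from Tables 3 and 4.
rng = np.random.default_rng(0)

def sample_stationary_set():
    """One guardrail-like set of stationary points at 1 m spacing in Y."""
    x = rng.uniform(-15.0, 15.0)            # lateral offset (m), Table 3
    y_start = rng.uniform(1.0, 3.0)         # longitudinal start (m)
    y_end = rng.uniform(70.0, 90.0)         # longitudinal end (m)
    ys = np.arange(y_start, y_end, 1.0)     # 1 m spacing along Y
    return {"x": x, "y": ys, "v": 0.0}

def sample_extraneous_object():
    """One interfering object with a random radial velocity (Table 4)."""
    return {
        "x": rng.uniform(-10.0, 10.0),
        "y": rng.uniform(1.0, 90.0),
        "v": rng.uniform(-23.0, 23.0),      # radial velocity (m/s)
    }

n_sets = rng.integers(1, 3)                 # 1 or 2 stationary-object sets
n_extras = rng.integers(20, 26)             # 20 to 25 extraneous objects
scene = ([sample_stationary_set() for _ in range(n_sets)]
         + [sample_extraneous_object() for _ in range(n_extras)])
```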

3.3. Implementation of CNN

3.3.1. Preprocessing

Preprocessing was performed to select and reprocess the data to prevent deviations from the intended objective or a decline in network performance. Initially, data with an ego-vehicle speed below 30 km/h (8.3 m/s) were filtered out to increase the probability of straight-driving conditions; the ego-vehicle speed was determined using either CAN or GPS data. Subsequently, a 40-frame window was applied, and frames in which the heading angle changed by more than 0.5° were excluded. The heading angle can be obtained from the GPS, the yaw rate, or the radius of curvature of the vehicle. Because the ego-vehicle speed is estimated independently of the angular performance of the radar, the RV spectra of the channels were combined to minimize Gaussian noise, which is non-coherent across channels, while increasing the signal-to-noise ratio of the objects; this is known as the non-coherent summation of the RV spectrum. As the simulation data were generated under straight-driving conditions, only the RV spectrum combination was applied to them during preprocessing. Figure 9 illustrates a flowchart of the preprocessing of the measurement and simulation data.
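A brief sketch of these preprocessing steps, assuming per-channel RV magnitude spectra as input and the thresholds stated above (the function names are ours):

```python
import numpy as np

# Sketch of the Section 3.3.1 preprocessing. rv_per_channel is assumed to
# hold per-channel RV spectra with shape (channels, range_bins, velocity_bins).
MIN_SPEED = 30.0 / 3.6        # 30 km/h = 8.3 m/s straight-driving speed filter
MAX_HEADING_DELTA = 0.5       # degrees, over a 40-frame window

def frame_is_usable(ego_speed_mps, heading_window_deg):
    """Keep frames that are fast enough and close to straight driving."""
    straight = (max(heading_window_deg) - min(heading_window_deg)) <= MAX_HEADING_DELTA
    return ego_speed_mps >= MIN_SPEED and straight

def noncoherent_sum(rv_per_channel):
    """Sum magnitude spectra across channels: non-coherent Gaussian noise
    averages down while object returns add up, raising the SNR."""
    return np.abs(rv_per_channel).sum(axis=0)
```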

3.3.2. Network Architecture Considerations

In Equation (5), the stationary target angle ($\theta$) ranges from −90° to +90°. Consequently, the radial velocity ($V_r$) ranges from 0 m/s to the negative ego-vehicle speed ($-V_{ego}$). Therefore, only the half of the non-coherent summation of the RV spectrum described in Section 3.3.1 that contains the ego-vehicle speed information was fed to the network to improve its efficiency. For instance, although the velocity axis in Figure 8 spans −24.27 m/s to +24.27 m/s, only the data from −24.27 m/s to 0 m/s are used, which is sufficient. Therefore, the size of the RV spectrum input to the network was 128 × 65.
Table 6 presents experimental results demonstrating that a non-uniform stride (2 × 1) outperforms a uniform stride (2 × 2) in estimating the ego-vehicle speed, even though the number of parameters is reduced to approximately one-third. This implies that no information is lost across the 128 range indices when continuous stationary objects are compressed; however, across the 65 velocity indices, information is lost when a uniform stride (2 × 2) is used. Therefore, a non-uniform stride (2 × 1) should be used for effective data compression while minimizing the loss of velocity information.
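As a quick check of the stride behavior, a PyTorch sketch (the output shapes follow Table 6; the layer instance itself is illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

# A 3x3 convolution with a uniform (2, 2) stride halves both the range and
# velocity axes, while the non-uniform (2, 1) stride halves only the range
# axis and preserves all 65 velocity bins.
x = torch.randn(1, 1, 128, 65)    # (batch, channels, range, velocity)

uniform = nn.Conv2d(1, 8, kernel_size=3, stride=(2, 2), padding=1)
nonuniform = nn.Conv2d(1, 8, kernel_size=3, stride=(2, 1), padding=1)

print(uniform(x).shape)      # torch.Size([1, 8, 64, 33]) -> velocity bins lost
print(nonuniform(x).shape)   # torch.Size([1, 8, 64, 65]) -> velocity bins kept
```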
Table 7 summarizes the impact of the CNN kernel size on the network performance. A kernel size of three achieves performance similar to that of a kernel size of five while reducing the number of parameters to approximately one-third. This implies that the relevant velocity information can be captured within a 3 × 3 convolutional filter: as shown in Figure 8, stationary objects at distances of 50 m to 90 m span roughly three velocity indices. Therefore, expanding the kernel size does not significantly improve the network performance, and given that the number of parameters increases rapidly with kernel size, using the minimum kernel size is reasonable.

3.3.3. Network Architecture

The proposed CNN-based AVSD Net model estimates the ego-vehicle speed in automotive radar systems. Table 8 lists the channel sizes, parameters, and floating-point operations (FLOPs) of six AVSD Net models that receive 128 × 65 input data and comprise seven AVSD blocks, a fully connected layer, and a Softmax layer. Figure 10 depicts the composition of the AVSD block. The convolutional layer of the AVSD block comprises a 3 × 3 convolutional filter, a 2 × 1 non-uniform stride, and replication padding. The convolutional layers use different numbers of input/output channels in the six AVSD Net models listed in Table 8. Batch normalization was applied after the convolutional layer and before the activation function [34], and stochastic gradient descent with a mini-batch size of 64 was used. The output obtained after Softmax contains encoded bit information that compresses the velocity information of the stationary objects. The velocity index of the ego-vehicle speed, ranging from 0 to 65, is obtained by decoding this bit information via binary-to-decimal conversion. Finally, the ego-vehicle speed is estimated by multiplying the obtained velocity index by the velocity resolution ($V_{res}$) presented in Table 2. Figure 11 and Figure 12 illustrate the architectures of the AVSD Net-84k and AVSD Net-717 models, respectively.
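A structural sketch of the smallest variant, AVSD Net-717, is shown below. The channel layout follows Table 8 and reproduces its 717-parameter count; the ReLU activation and the sigmoid-with-threshold bit decoding are our assumptions, since the paper reports a Softmax stage without detailing the bit encoding:

```python
import torch
import torch.nn as nn

class AVSDBlock(nn.Module):
    """Conv(3x3, stride 2x1, replication padding) -> BatchNorm -> ReLU."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, stride=(2, 1),
                              padding=1, padding_mode="replicate")
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU()   # activation type assumed, not stated in the text

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class AVSDNet717(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [1, 2, 2, 2, 2, 2, 2, 1]        # Table 8, AVSD Net-717 column
        self.blocks = nn.Sequential(
            *[AVSDBlock(chans[i], chans[i + 1]) for i in range(7)])
        self.head = nn.Linear(65, 7)            # 1 x 1 x 65 -> 7 encoded bits

    def forward(self, x):                       # x: (batch, 1, 128, 65)
        x = self.blocks(x)                      # -> (batch, 1, 1, 65)
        # The paper applies Softmax here; an elementwise sigmoid is used in
        # this sketch so each of the 7 outputs can act as an independent bit.
        return torch.sigmoid(self.head(x.flatten(1)))   # -> (batch, 7)

def decode_velocity_index(bits):
    """Binary-to-decimal conversion of the 7 output bits -> velocity index."""
    weights = 2 ** torch.arange(6, -1, -1)
    return ((bits > 0.5).long() * weights).sum(dim=-1)

model = AVSDNet717()
print(sum(p.numel() for p in model.parameters()))       # 717, matching Table 8
rv_half = torch.randn(1, 1, 128, 65)                    # non-coherent RV input
ego_speed = decode_velocity_index(model(rv_half)) * 0.3792  # x V_res (Table 2)
```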

3.3.4. Postprocessing

The ego-vehicle speed estimated using the AVSD Net model introduced in Section 3.3.3 may contain abnormal values. Therefore, the abnormal values must be suppressed via postprocessing to implement a robust system.
Initially, frames were excluded when the deviation between the current and previous heading angles of the vehicle exceeded a reference value; the yaw rate or the radius of curvature of the vehicle could be used instead of the heading angle. Subsequently, the real-time speed ratio between the real-time ego-vehicle speed received from the CAN of the vehicle and the ego-vehicle speed estimated using AVSD Net was calculated as follows:

$$\text{Real-time Speed Ratio} = \frac{\text{Estimated Ego-vehicle Speed from AVSD Net}}{\text{Real-time Ego-vehicle Speed from CAN}}. \tag{25}$$
Finally, abnormally high real-time speed ratio values were eliminated using the speed ratio stored in the electrically erasable programmable read-only memory (EEPROM). A sliding window was considered over time to obtain the real-time speed ratios after eliminating the abnormal values. Subsequently, the median value was extracted, and the final extracted speed ratio was stored in EEPROM. Figure 13 depicts a flowchart of postprocessing.
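A compact sketch of this postprocessing loop, assuming the thresholds reported in Section 4 (0.5° heading change, 0.05 ratio deviation, window size 150); the class interface is our own:

```python
import numpy as np
from collections import deque

class SpeedRatioCorrector:
    """Sketch of the Section 3.3.4 postprocessing: heading-change gating,
    abnormal-ratio rejection against the stored EEPROM value, then a
    sliding-window median written back to (simulated) EEPROM."""
    WINDOW_SIZE = 150
    MAX_HEADING_DELTA = 0.5        # degrees
    MAX_RATIO_DELTA = 0.05

    def __init__(self, eeprom_ratio=1.0):
        self.eeprom_ratio = eeprom_ratio
        self.window = deque(maxlen=self.WINDOW_SIZE)

    def update(self, v_avsd, v_can, heading_delta_deg):
        """Feed one frame; returns the currently stored speed ratio."""
        if abs(heading_delta_deg) > self.MAX_HEADING_DELTA:
            return self.eeprom_ratio            # exception: cornering frame
        ratio = v_avsd / v_can                  # Eq. (25)
        if abs(ratio - self.eeprom_ratio) > self.MAX_RATIO_DELTA:
            return self.eeprom_ratio            # abnormal value rejected
        self.window.append(ratio)
        if len(self.window) == self.WINDOW_SIZE:
            self.eeprom_ratio = float(np.median(self.window))  # store to EEPROM
        return self.eeprom_ratio

corrector = SpeedRatioCorrector(eeprom_ratio=1.0)
ratio = corrector.update(v_avsd=17.8, v_can=17.5, heading_delta_deg=0.1)
# Corrected ego-vehicle speed for downstream use: v_can * corrector.eeprom_ratio
```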
In summary, the ego-vehicle speed correction in an automotive radar system was performed in real time using the speed ratio stored in the EEPROM. Figure 14 illustrates the preprocessing, network modeling, and postprocessing steps.

3.3.5. Comparison of AVSD Net Models

When applying the AVSD Net model to automotive radar systems, both the processing time and memory availability must be considered. Table 9 compares the proposed AVSD Net models for ego-vehicle speed estimation. Because the automotive radar system performs serial processing on a microcontroller unit, the processing times were compared using CPU-based serial processing. The model size is the volume of data occupied by each model and its final parameters. The processing times of AVSD Net-717, the simplest model, and AVSD Net-84k, the most complex model, were 1.33 ms and 2.50 ms, respectively, a difference of approximately a factor of two. The model sizes were 26 and 355 kilobytes, respectively, a difference of approximately a factor of 14.
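A CPU-based serial timing harness in this spirit might look as follows (the placeholder network, thread setting, and iteration counts are ours; absolute timings are machine-dependent and will not match Table 9 exactly):

```python
import time
import torch
import torch.nn as nn

# Sketch of a single-threaded CPU inference timing loop.
torch.set_num_threads(1)                 # serial processing, as on an MCU
model = nn.Sequential(nn.Conv2d(1, 2, 3, stride=(2, 1), padding=1),
                      nn.ReLU(), nn.Flatten())   # placeholder network
model.eval()
x = torch.randn(1, 1, 128, 65)

with torch.no_grad():
    for _ in range(10):                  # warm-up iterations
        model(x)
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    t1 = time.perf_counter()

print(f"mean inference time: {(t1 - t0) / 100 * 1e3:.2f} ms")
```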

4. Results and Discussion

The datasets used for training were simulation datasets 1 and 2, measurement dataset 1, and half of measurement dataset 2 (Table 5). The remaining half of measurement dataset 2 was used for testing. Each model was trained for $2 \times 10^4$ iterations over 20 epochs. Figure 15 illustrates the training and testing error rates of the AVSD Net models with respect to the GPS data, and Table 10 lists the final error rates. The AVSD Net-84k model, comprising the highest number of parameters, exhibits training and testing error rates of 0.02% and 5.29%, respectively. Conversely, the AVSD Net-717 model, the simplest model with the fewest parameters, exhibits training and testing error rates of 1.45% and 7.39%, respectively. Overall, the training and testing error rates decreased slightly in each model as the number of parameters increased. These results are similar to those reported in other CNN-based studies [35,36,37]. This trend can be attributed to training with increasing weights for stationary objects located at long distances as the number of channels increases in each AVSD Net model.
The obtained results imply that the selection of a model depends on the hardware performance of the automotive radar system. If automotive radar systems are expensive and exhibit high performance, the AVSD Net-84k model is preferred. However, most automotive radar system manufacturers adopt minimal and optimized hardware performances to increase profits. Therefore, to consider the performance of an embedded system with optimized hardware, the ego-vehicle speed correction must be optimized via postprocessing using a simplified model.
The postprocessing verification results for each model, obtained as explained in Section 3.3.4 using the remaining half of measurement dataset 2, are illustrated in Figure 16. The blue line in each figure represents the postprocessing results before the application of the sliding window to the real-time speed ratio described in Equation (25). In this scenario, the error-limiting filter restricts the change in the heading angle to less than 0.5° and limits the deviation from the speed ratio stored in the EEPROM to 0.05. The orange line indicates the postprocessing results after the application of the sliding window with a window size of 150; the initial absence of data in the orange line corresponds to the interval required to fill the window. Table 11 summarizes the postprocessing results of each model. The overall testing error rate increases for this dataset; however, the speed ratio for all models converges to approximately 1.018 after postprocessing. This value is close to the speed ratio of 1.0183 obtained from GPS measurements (Section 3.1.3), and the standard deviation of the speed ratio is negligible.
The obtained results confirm that all six models designed for ego-vehicle speed estimation can achieve equivalent performance levels via postprocessing. Ideally, the model with the maximum number of parameters would be selected; however, selecting the AVSD Net-717 model with the minimum number of parameters is reasonable considering the hardware performance. This approach achieves equivalent performance without requiring a complex model architecture.
Figure 17 compares the results when the window size of the AVSD Net-717 model is changed. The orange line representing the postprocessing results appears stable for window sizes of 150 and 100; however, abnormal values are observed for window sizes of 50 and 25. Table 12 summarizes the anomalous findings depicted in Figure 17. The rates of abnormal values are 2.65% and 8.55% for window sizes of 50 and 25, respectively, and the standard deviation of the speed ratio increases rapidly. Figure 18 depicts the rate of abnormal values with respect to the window size for this dataset. Here, the rate of abnormal values reaches zero when the window size is 81 or higher. However, considering the possibility of unexpected scenarios, a margin for error must be applied; therefore, a window size ranging from 100 to 150 can be considered appropriate. This result demonstrates that a model with minimal complexity can be applied to automotive radar systems to perform ego-vehicle speed correction via postprocessing optimization.

5. Conclusions

In this study, we designed six CNN-based models specifically for ego-vehicle speed estimation. The training and testing of each model indicated that both training and testing error rates tended to decrease as the number of parameters increased. However, considering the limitations of embedded systems, the AVSD Net-717 model, which comprises the minimum number of parameters, was deemed optimal. We therefore examined the AVSD Net-717 model by adjusting the window size to achieve satisfactory performance via postprocessing optimization. Our analysis indicated that the AVSD Net-717 model achieves adequate performance with minimal complexity when postprocessed with a window size ranging from 100 to 150. The final speed ratio obtained after postprocessing is approximately 1.018, which is close to the speed ratio obtained from GPS measurements (Section 3.1.3). However, there is a trade-off between model complexity and the initial correction time of the ego-vehicle speed. For instance, a simple model can be used in automotive radar systems with limited hardware performance; however, the initial correction time of the ego-vehicle speed increases because of the large window size required. Conversely, high-performance automotive radar systems that can handle complex models can use small window sizes, thereby reducing the initial correction time. This trade-off should be adjusted appropriately based on the requirements of the automotive radar system.
Because the ADC data of the automotive radar system used in this study were acquired at one frame per 0.05 s with a data size of 0.5 MB per frame, acquiring data covering all situations in various environments would be difficult owing to the large storage capacity and excessive acquisition time required. Therefore, we efficiently simulated various environments and used the simulated data for training. However, simulation data cannot cover all unpredicted environments. Although this study has opened up the possibility of applying deep learning to estimate the ego-vehicle speed, future research should acquire data from a wider range of environments and scenarios and prepare the data for training.
It is also important to note that current automotive radar systems do not utilize INS/GNSS-based filtering and fusion techniques, which are more common in navigation systems. However, techniques such as the double-channel sequential probability ratio test [38], hypothesis test-constrained robust Kalman filter [39], and set-membership-based hybrid Kalman filter [40] have proven their effectiveness in managing sensor fusion and uncertainty in INS/GNSS applications. These methods, including the adaptive and robust cubature Kalman filter [41,42,43], could potentially be adapted to enhance radar systems, particularly in handling complex environments and sensor noise. As future research progresses, exploring the integration of these INS/GNSS-based methods into radar systems could provide an effective approach to further improve real-time performance and robustness in unpredictable scenarios.

Author Contributions

Conceptualization, S.M., D.K. and Y.K.; methodology, S.M., D.K. and Y.K.; software, S.M.; validation, S.M. and D.K.; formal analysis, S.M.; investigation, S.M.; resources, S.M.; data curation, S.M.; writing—original draft preparation, S.M.; writing—review and editing, S.M. and Y.K.; visualization, S.M.; supervision, Y.K.; project administration, Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets presented in this article are not readily available due to proprietary concerns. Requests to access the datasets should be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bilik, I.; Longman, O.; Villeval, S.; Tabrikian, J. The Rise of Radar for Autonomous Vehicles: Signal Processing Solutions and Future Research Directions. IEEE Signal Process. Mag. 2019, 36, 20–31.
  2. Meinel, H.H. Evolving Automotive Radar—From the Very Beginnings into the Future. In Proceedings of the 8th European Conference on Antennas and Propagation (EuCAP 2014), The Hague, The Netherlands, 6–11 April 2014; pp. 3107–3114.
  3. Parliament Approves EU Rules Requiring Life-Saving Technologies in Vehicles. Available online: https://www.europarl.europa.eu/news/en/press-room/20190410IPR37528/parliament-approves-eu-rules-requiring-life-saving-technologies-in-vehicles (accessed on 8 July 2024).
  4. NHTSA Finalizes Key Safety Rule to Reduce Crashes and Save Lives. Available online: https://www.nhtsa.gov/press-releases/nhtsa-fmvss-127-automatic-emergency-braking-reduce-crashes (accessed on 8 July 2024).
  5. Mohammed, A.S.; Amamou, A.; Ayevide, F.K.; Kelouwani, S.; Agbossou, K.; Zioui, N. The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review. Sensors 2020, 20, 6532.
  6. Klotz, M.; Rohling, H. 24 GHz Radar Sensors for Automotive Applications. In Proceedings of the 13th International Conference on Microwaves, Radar and Wireless Communications (MIKON-2000), Wroclaw, Poland, 22–24 May 2000; pp. 359–362.
  7. Song, H.; Shin, H.C. Classification and spectral mapping of stationary and moving objects in road environments using FMCW radar. IEEE Access 2020, 8, 22955–22963.
  8. Yang, W.; Zhang, X.; Lei, Q.; Cheng, X. Research on longitudinal active collision avoidance of autonomous emergency braking pedestrian system (AEB-P). Sensors 2019, 19, 4671.
  9. Zhang, Y.; Lin, Y.; Qin, Y.; Dong, M.; Gao, L.; Hashemi, E. A new adaptive cruise control considering crash avoidance for intelligent vehicle. IEEE Trans. Ind. Electron. 2024, 71, 688–696.
  10. Grimm, C.; Breddermann, T.; Farhoud, R.; Fei, T.; Warsitz, E.; Haeb-Umbach, R. Discrimination of Stationary from Moving Targets with Recurrent Neural Networks in Automotive Radar. In Proceedings of the 2018 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM), Munich, Germany, 15–17 April 2018; pp. 1–4.
  11. Raj, S.; Ghosh, D. Optimized DBSCAN with Improved Static Clutter Removal for High Resolution Automotive Radars. In Proceedings of the 2022 19th European Radar Conference (EuRAD), Milan, Italy, 28–30 September 2022; pp. 1–4.
  12. Kopp, J.; Kellner, D.; Piroli, A.; Dietmayer, K. Fast Rule-Based Clutter Detection in Automotive Radar Data. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 3010–3017.
  13. Hernandez, W. Improving the response of a wheel speed sensor by using frequency-domain adaptive filtering. IEEE Sens. J. 2003, 3, 404–413.
  14. Hernandez, W. Improving the response of a wheel speed sensor by using a RLS lattice algorithm. Sensors 2006, 6, 64–79.
  15. Magnusson, M.; Trobro, C. Improving Wheel Speed Sensing and Estimation. Master's Thesis, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, December 2003.
  16. Isermann, R.; Wesemeier, D. Indirect vehicle tire pressure monitoring with wheel and suspension sensors. IFAC Proc. Vol. 2009, 42, 917–922.
  17. Spangenberg, M.; Calmettes, V.; Tourneret, J.Y. Fusion of GPS, INS and Odometric Data for Automotive Navigation. In Proceedings of the 2007 15th European Signal Processing Conference, Poznan, Poland, 3–7 September 2007; pp. 886–890.
  18. Grimm, C.; Farhoud, R.; Fei, T.; Warsitz, E.; Haeb-Umbach, R. Detection of Moving Targets in Automotive Radar with Distorted Ego-Velocity Information. In Proceedings of the 2017 IEEE Microwaves, Radar and Remote Sensing Symposium (MRRS), Kiev, Ukraine, 29–31 August 2017; pp. 111–116.
  19. Lim, S.; Lee, S. Hough transform based ego-velocity estimation in automotive radar system. Electron. Lett. 2021, 57, 80–82.
  20. Kingery, A.; Song, D. Improving ego-velocity estimation of low-cost Doppler radars for vehicles. IEEE Robot. Autom. Lett. 2022, 7, 9445–9452.
  21. Klinefelter, E.; Nanzer, J.A. Automotive velocity sensing using millimeter-wave interferometric radar. IEEE Trans. Microw. Theory Techn. 2021, 69, 1096–1104.
  22. Kellner, D.; Barjenbruch, M.; Klappstein, J.; Dickmann, J.; Dietmayer, K. Instantaneous Ego-Motion Estimation Using Doppler Radar. In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), The Hague, The Netherlands, 6–9 October 2013; pp. 869–874.
  23. Cho, H.W.; Choi, S.; Cho, Y.R.; Kim, J. Complex-valued channel attention and application in ego-velocity estimation with automotive radar. IEEE Access 2021, 9, 17717–17727.
  24. Cho, H.W.; Choi, S.; Cho, Y.R.; Kim, J. Deep Complex-Valued Network for Ego-Velocity Estimation with Millimeter-Wave Radar. In Proceedings of the 2020 IEEE SENSORS, Rotterdam, The Netherlands, 25–28 October 2020; pp. 1–4.
  25. Automotive, Second-Generation 76-GHz to 81-GHz High-Performance SoC for Corner and Long-Range Radar. Available online: https://www.ti.com/product/AWR2944 (accessed on 10 July 2024).
  26. DCA1000 Evaluation Module for Real-Time Data Capture and Streaming. Available online: https://www.ti.com/tool/DCA1000EVM (accessed on 10 July 2024).
  27. Iovescu, C.; Rao, S. The fundamentals of millimeter wave sensors. Tex. Instrum. 2017, 1–8. Available online: https://www.ti.com/lit/wp/spyy005a/spyy005a.pdf?ts=1727787945304&ref_url=https%253A%252F%252Fwww.google.com%252F (accessed on 10 July 2024).
  28. Rao, S. Introduction to mmWave sensing: FMCW radars. Tex. Instrum. (TI) mmWave Train. Ser. 2017, 1–11. Available online: https://www.ti.com/content/dam/videos/external-videos/zh-tw/2/3816841626001/5415203482001.mp4/subassets/mmwaveSensing-FMCW-offlineviewing_0.pdf (accessed on 10 July 2024).
  29. Lee, Y.; Park, B. Nonlinear regression-based GNSS multipath modelling in deep urban area. Mathematics 2022, 10, 412.
  30. Liu, F.; Han, H.; Cheng, X.; Li, B. Performance of tightly coupled integration of GPS/BDS/MEMS-INS/Odometer for real-time high-precision vehicle positioning in urban degraded and denied environment. J. Sens. 2020, 2020, 8670262.
  31. Wang, G.; Li, S. A Novel Longitudinal Speed Estimator for Fully Automation Ground Vehicle on Cornering Maneuver. In Proceedings of the 2009 International Conference on Artificial Intelligence and Computational Intelligence, Shanghai, China, 7–8 November 2009; pp. 433–437.
  32. User's Guide: AWR2944 Evaluation Module. Available online: https://www.ti.com/lit/ug/spruj22b/spruj22b.pdf (accessed on 14 July 2024).
  33. Damnjanović, V.; Petrović, M.; Milovanović, V. On Hardware Implementations of Two-Dimensional Fast Fourier Transform for Radar Signal Processing. In Proceedings of the IEEE EUROCON 2023—20th International Conference on Smart Technologies, Torino, Italy, 6 July 2023; pp. 400–405.
  34. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015.
  35. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 30 June 2016; pp. 770–778.
  37. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  38. Gao, G.; Zhong, Y.; Gao, S.; Gao, B. Double-Channel Sequential Probability Ratio Test for Failure Detection in Multisensor Integrated Systems. IEEE Trans. Instrum. Meas. 2021, 70, 3514814.
  39. Gao, G.; Gao, B.; Gao, S.; Hu, G.; Zhong, Y. A Hypothesis Test-Constrained Robust Kalman Filter for INS/GNSS Integration with Abnormal Measurement. IEEE Trans. Veh. Technol. 2023, 72, 1662–1673.
  40. Zhao, Y.; Zhang, J.; Hu, G.; Zhong, Y. Set-Membership Based Hybrid Kalman Filter for Nonlinear State Estimation under Systematic Uncertainty. Sensors 2020, 20, 627.
  41. Gao, B.; Hu, G.; Zhu, X.; Zhong, Y. A Robust Cubature Kalman Filter with Abnormal Observations Identification Using the Mahalanobis Distance Criterion for Vehicular INS/GNSS Integration. Sensors 2019, 19, 5149.
  42. Gao, B.; Hu, G.; Zhong, Y.; Zhu, X. Cubature Kalman Filter with Both Adaptability and Robustness for Tightly-Coupled GNSS/INS Integration. IEEE Sens. J. 2021, 21, 14997–15011.
  43. Gao, B.; Gao, S.; Zhong, Y.; Hu, G.; Gu, C. Interacting multiple model estimation-based adaptive robust unscented Kalman filter. Int. J. Control Autom. Syst. 2017, 15, 2013–2025.
Figure 1. Configuration of the vehicle and radar system: (a) photograph; (b) schematic.
Figure 2. Waveform of the automotive radar system.
Figure 3. Measurement results at an ego-vehicle speed of 17.5 m/s. The dashed and solid red lines represent the left and right guardrails, respectively: (a) Video acquired from the webcam; (b) range-velocity (RV) spectrum data acquired from the automotive radar.
Figure 4. Variation in the radial velocity based on the position of objects: (a) automotive radar coordinate system; (b) radial velocity according to position x of the stationary objects.
Figure 5. Ego-vehicle speed obtained from the global positioning system (GPS) and controller area network (CAN).
Figure 6. Position of the target and antenna.
Figure 7. Schematic of the simulation method.
Figure 8. Comparison of results: (a) measurement data; (b) simulation data.
Figure 9. Preprocessing flowchart: (a) measurement data; (b) simulation data.
Figure 10. Automotive radar-based ego-vehicle speed detection (AVSD) block.
Figure 11. Architecture of the automotive radar-based ego-vehicle speed detection network (AVSD Net)-84k.
Figure 12. Architecture of AVSD Net-717.
Figure 13. Postprocessing flowchart.
Figure 14. Method followed for ego-vehicle speed correction.
Figure 15. Results obtained from the AVSD Net model: (a) training error rate; (b) testing error rate.
Figure 16. Speed ratio results after postprocessing: (a) AVSD Net-84k; (b) AVSD Net-38k; (c) AVSD Net-12k; (d) AVSD Net-3k; (e) AVSD Net-1k; (f) AVSD Net-717.
Figure 17. Speed ratio results after postprocessing of AVSD Net-717 for various window sizes: (a) 150; (b) 100; (c) 50; (d) 25.
Figure 18. Rate of abnormal value with respect to window size.
Table 1. Constraints associated with the estimation of ego-vehicle speed.

| Study | Mounting Angle | Angular Performance | Downward or Wide Beam | Complexity |
|---|---|---|---|---|
| ODR | Dependent | Dependent | Unnecessary | Simple |
| Hough transform | Dependent | Dependent | Unnecessary | Simple |
| Elevated objects | Dependent | Dependent | Necessary | Simple |
| Ground backscattering | Dependent | Dependent | Necessary | Simple |
| RANSAC | Independent | Dependent | Unnecessary | Simple |
| CVNN | Independent | Independent | Unnecessary | Complex |
Table 2. Specifications of the waveform of the automotive radar system.

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Start frequency ($f_s$) | 77.0991 GHz | Number of ADC samples ($N_s$) | 256 |
| End frequency ($f_e$) | 77.3104 GHz | Number of chirps ($N_c$) | 128 |
| Bandwidth (BW) | 211.3536 MHz | Frame duration ($T_{frm}$) | 50 ms |
| Sample rate (SR) | 20 MHz | Range resolution ($R_{res}$) | 0.7092 m |
| ADC sampling time ($T_{ADC}$) | 12.8 µs | Maximum range ($R_{max}$) | 90.7802 m |
| Frequency slope ($S_f$) | 16.512 MHz/µs | Velocity resolution ($V_{res}$) | 0.3792 m/s |
| Pulse repetition interval (PRI) | 40 µs | Maximum velocity ($V_{max}$) | 24.2693 m/s |
Table 3. Continuous uniform distribution range of stationary object sets.

| Range | X (m) | Y Start (m) | Y End (m) | Y Direction Spacing (m) | Number of Stationary Object Sets |
|---|---|---|---|---|---|
| a | −15 | 1 | 70 | 1 | 1 |
| b | +15 | 3 | 90 | - | 2 |
Table 4. Continuous uniform distribution range of extraneous objects.

| Range | X (m) | Y (m) | Radial Velocity (m/s) | Number of Extraneous Objects |
|---|---|---|---|---|
| a | −10 | 1 | −23 | 20 |
| b | +10 | 90 | +23 | 25 |
Table 5. Number of measurement and simulation data frames.

| Datasets | Measurement Data | Simulation Data |
|---|---|---|
| Dataset 1 | 34,538 | 30,000 |
| Dataset 2 | 38,909 | 30,000 |
| Total | 73,447 | 60,000 |
Table 6. Comparison of uniform stride (2 × 2) and non-uniform stride (2 × 1).

| Layer Name | Uniform: Input Size | Uniform: Output Size | Uniform: Channels (In/Out) | Non-Uniform: Input Size | Non-Uniform: Output Size | Non-Uniform: Channels (In/Out) |
|---|---|---|---|---|---|---|
| Input | - | 128 × 65 | -/1 | - | 128 × 65 | -/1 |
| Conv 1 | 128 × 65 | 64 × 33 | 1/8 | 128 × 65 | 64 × 65 | 1/8 |
| Conv 2 | 64 × 33 | 32 × 17 | 8/16 | 64 × 65 | 32 × 65 | 8/8 |
| Conv 3 | 32 × 17 | 16 × 9 | 16/16 | 32 × 65 | 16 × 65 | 8/8 |
| Conv 4 | 16 × 9 | 8 × 5 | 16/16 | 16 × 65 | 8 × 65 | 8/8 |
| Conv 5 | 8 × 5 | 4 × 3 | 16/16 | 8 × 65 | 4 × 65 | 8/8 |
| Conv 6 | 4 × 3 | 2 × 2 | 16/16 | 4 × 65 | 2 × 65 | 8/8 |
| Conv 7 | - | - | - | 2 × 65 | 1 × 65 | 8/1 |
| FC | 64 | 7 | - | 65 | 7 | - |
| Params | 11,306 | | | 3633 | | |
| Error | 7.09% | | | 4.26% | | |
Table 7. Comparison of kernel sizes of (5 × 5) and (3 × 3).

| Layer Name | Input Size | Output Size | Kernel (5 × 5): Channels (In/Out) | Kernel (3 × 3): Channels (In/Out) |
|---|---|---|---|---|
| Input | - | 128 × 65 | -/1 | -/1 |
| Conv 1 | 128 × 65 | 64 × 65 | 1/16 | 1/16 |
| Conv 2 | 64 × 65 | 32 × 65 | 16/32 | 16/32 |
| Conv 3 | 32 × 65 | 16 × 65 | 32/32 | 32/32 |
| Conv 4 | 16 × 65 | 8 × 65 | 32/32 | 32/32 |
| Conv 5 | 8 × 65 | 4 × 65 | 32/32 | 32/32 |
| Conv 6 | 4 × 65 | 2 × 65 | 32/16 | 32/16 |
| Conv 7 | 2 × 65 | 1 × 65 | 16/1 | 16/1 |
| FC | 65 | 7 | - | - |
| Params | | | 104,145 | 38,097 |
| Error | | | 4.22% | 3.99% |
Table 8. Channels and parameters of the automotive radar-based ego-vehicle speed detection network (AVSD Net) models.

| Layer Name | Input Size | Output Size | 84k (In/Out) | 38k (In/Out) | 12k (In/Out) | 3k (In/Out) | 1k (In/Out) | 717 (In/Out) |
|---|---|---|---|---|---|---|---|---|
| Input | - | 128 × 65 | -/1 | -/1 | -/1 | -/1 | -/1 | -/1 |
| Conv 1 | 128 × 65 | 64 × 65 | 1/16 | 1/16 | 1/16 | 1/8 | 1/4 | 1/2 |
| Conv 2 | 64 × 65 | 32 × 65 | 16/32 | 16/32 | 16/16 | 8/8 | 4/4 | 2/2 |
| Conv 3 | 32 × 65 | 16 × 65 | 32/64 | 32/32 | 16/16 | 8/8 | 4/4 | 2/2 |
| Conv 4 | 16 × 65 | 8 × 65 | 64/64 | 32/32 | 16/16 | 8/8 | 4/4 | 2/2 |
| Conv 5 | 8 × 65 | 4 × 65 | 64/32 | 32/32 | 16/16 | 8/8 | 4/4 | 2/2 |
| Conv 6 | 4 × 65 | 2 × 65 | 32/16 | 32/16 | 16/16 | 8/8 | 4/4 | 2/2 |
| Conv 7 | 2 × 65 | 1 × 65 | 16/1 | 16/1 | 16/1 | 8/1 | 4/1 | 2/1 |
| FC | 65 | 7 | - | - | - | - | - | - |
| Softmax | 7 | 7 | - | - | - | - | - | - |
| Parameters | | | 84,369 | 38,097 | 12,561 | 3633 | 1329 | 717 |
| FLOPs | | | 55,141 k | 28,533 k | 10,550 k | 2954 k | 897 k | 304 k |
Table 9. Comparison of processing time and model size of AVSD Net models.

| Model | Processing Time in CPU | Model Size |
|---|---|---|
| AVSD Net-84k | 2.50 ms | 355 kilobytes |
| AVSD Net-38k | 2.34 ms | 174 kilobytes |
| AVSD Net-12k | 1.99 ms | 74 kilobytes |
| AVSD Net-3k | 1.71 ms | 37 kilobytes |
| AVSD Net-1k | 1.44 ms | 29 kilobytes |
| AVSD Net-717 | 1.33 ms | 26 kilobytes |
Table 10. Error rates of training and testing.

| Model | Number of Parameters | Training Error Rate | Testing Error Rate |
|---|---|---|---|
| AVSD Net-84k | 84,369 | 0.02% | 5.29% |
| AVSD Net-38k | 38,097 | 0.10% | 5.01% |
| AVSD Net-12k | 12,561 | 0.17% | 6.09% |
| AVSD Net-3k | 3633 | 0.32% | 5.89% |
| AVSD Net-1k | 1329 | 0.64% | 7.31% |
| AVSD Net-717 | 717 | 1.45% | 7.39% |
Table 11. Testing error rate and speed ratio after postprocessing.

| Model | Testing Error Rate | Mean of Speed Ratio | Standard Deviation of Speed Ratio |
|---|---|---|---|
| AVSD Net-84k | 7.00% | 1.01813 | 10.19 × 10⁻⁵ |
| AVSD Net-38k | 9.47% | 1.01816 | 7.85 × 10⁻⁵ |
| AVSD Net-12k | 9.29% | 1.01813 | 10.91 × 10⁻⁵ |
| AVSD Net-3k | 7.71% | 1.01817 | 9.37 × 10⁻⁵ |
| AVSD Net-1k | 8.82% | 1.01826 | 13.63 × 10⁻⁵ |
| AVSD Net-717 | 14.94% | 1.01823 | 12.12 × 10⁻⁵ |
Table 12. Speed ratio after the application of the sliding window over the entire measurement dataset 2.

| Window Size | Mean of Speed Ratio | Standard Deviation of Speed Ratio | Rate of Abnormal Value |
|---|---|---|---|
| 150 | 1.01813 | 12.12 × 10⁻⁵ | 0.00% |
| 100 | 1.01822 | 14.40 × 10⁻⁵ | 0.00% |
| 50 | 1.01869 | 291.50 × 10⁻⁵ | 2.65% |
| 25 | 1.01994 | 618.78 × 10⁻⁵ | 8.55% |