
CAE-CNN-Based DOA Estimation Method for Low-Elevation-Angle Target

Fangzheng Zhao, Guoping Hu, Hao Zhou and Chenghong Zhan
1 Graduate School, Air Force Engineering University, Xi’an 710043, China
2 Air and Missile Defense College, Air Force Engineering University, Xi’an 710043, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(1), 185; https://doi.org/10.3390/rs15010185
Submission received: 22 November 2022 / Revised: 22 December 2022 / Accepted: 23 December 2022 / Published: 29 December 2022

Abstract

For DOA (direction of arrival) estimation of a low-elevation-angle target under the influence of the multipath effect, this paper proposes a DOA estimation method based on a CAE (convolutional autoencoder) and a CNN (convolutional neural network). The algorithm first feeds the array signal covariance matrix of the low-elevation target, which contains both direct and reflected waves, into the convolutional autoencoder to remove the multipath component, and uses the spatial features extracted by the convolutional autoencoder as the input of an extreme learning machine to preclassify the DOA of the direct wave; based on the preclassification result, one branch of three parallel convolutional neural networks is selected, and the output of the convolutional autoencoder is used as the input of that branch to obtain the DOA estimate. Simulation results show that the algorithm achieves better estimation accuracy and efficiency than conventional algorithms, especially when the DOA of the target lies in the lower part of the range. Analysis of the simulation results shows that the algorithm is effective: the convolutional autoencoder can effectively remove the multipath component, and the use of parallel convolutional neural networks avoids overfitting and underfitting and yields more accurate DOA estimation.

1. Introduction

With the continuous development of electromagnetic wave theory and technology, radar has been widely used in meteorology, exploration, remote sensing, and especially in the military [1]. Driven by increasingly demanding operational requirements, many countries have developed better air defense systems, so the probability of targets penetrating defenses from high altitude has decreased, while low-altitude/ultra-low-altitude penetration has become one of the main threats to modern radar air defense systems. In the low-/ultra-low-altitude environment, the difficulties of target direction of arrival (DOA) estimation include clutter, strong interference, terrain occlusion, and, above all, the severe multipath effect. When the radar beam illuminates the target, in addition to the echoes returned directly from the target, echoes scattered indirectly by the ground/sea surface or by obstacles also reach the radar receiver at the same time and superimpose on the direct echoes, producing an interference effect known as the multipath interference effect [2]. Indirect target echoes scattered or diffracted by obstacles are in most cases weaker than the direct target echoes, so appropriate measures can be taken, and it is relatively easy to attenuate or eliminate the interference caused by obstacles. In contrast, the indirect target echoes reflected from the ground/sea surface are very strong, almost comparable to the direct target echoes. Therefore, reflection from the ground/sea surface is the main cause of the multipath interference effect and one of the important issues of concern to radar signal processing researchers.
Currently, the majority of radars operate in the 200 MHz-to-10 GHz frequency band, covering meter-wave, decimeter-wave, and centimeter-wave radar. According to the 3 dB antenna beamwidth calculation, the beamwidth of meter-wave radar is usually on the order of ten degrees, the beamwidth of centimeter-wave radar is around a few degrees, and that of decimeter-wave radar lies in between [1]. In the field of signal processing, the multipath effect of concern mainly occurs when the target elevation angle is less than 0.8 beamwidths; when the target elevation angle is greater than 0.8 beamwidths, the multipath contribution can be more easily filtered out in the time, spatial, or frequency domain by hardware means or other data-processing methods [3]. Therefore, the range of low elevation angles considered here is from 0° to about 10°. For the multipath effect arising at low elevation angles, current solutions include multipath mitigation and coherent signal separation. Multipath mitigation is mainly achieved by changing the antenna pattern; however, its effect on the phase degrades the angle-measurement performance to a certain extent [4]. Coherent signal separation is mainly achieved by restoring the rank of the signal covariance matrix [5]; common methods include the forward/backward spatial smoothing algorithm and Toeplitz matrix reconstruction [6,7]. The forward/backward spatial smoothing algorithm divides the array into subarrays and resolves the rank deficiency of the original signal covariance matrix by computing and averaging the subarray covariance matrices [8,9], but the algorithm loses array aperture and reduces the degrees of freedom, which degrades the angle estimation performance. The Toeplitz matrix reconstruction method changes the data structure of the covariance matrix by constructing a Toeplitz matrix to recover its rank, but unlike ideal coherent signals, the covariance matrix of the received multipath signal does not have a Toeplitz structure. Ebrahim M. et al. propose a two-stage DOA estimation method, with the first stage distinguishing uncorrelated signals and the second stage solving the direction of arrival of correlated signals using covariance differencing and iterative spatial smoothing [10,11]; however, in actual signal processing the spectral peaks of the direct and reflected waves are difficult to separate in a multipath environment, and it is difficult to reconstruct the covariance matrix from the peaks already obtained. Zhao et al. use the ICA algorithm to obtain the steering vector containing multipath component information, and use compressive sensing theory to estimate the direct component of the array signal and the DOA of each multipath component [12], but this method has difficulty achieving effective separation when the direct and reflected angles are small. In addition, DOA estimation methods based on artificial intelligence algorithms also provide ideas for solving this type of problem.
For example, a hierarchical convolutional neural network-based smart antenna has been proposed [13], which realizes DOA estimation by progressively refining subsectors and transforming the estimation problem into a classification problem; however, the direct and reflected angles of a low-elevation-angle target are both small, which leads to a high computational cost. Xiang improves the estimation performance of existing super-resolution algorithms by enhancing the phase of the direct-angle component of the received signal with deep convolutional neural networks [14], or by building a feature-to-feature phase enhancement framework with deep neural networks [15], but simply enhancing the phase of the direct-angle component of the received signal is rather difficult and requires additional prerequisites. Ge uses deep convolutional neural networks to achieve DOA estimation of coherent sources based on a sparse representation of the array received signal [16], which requires discretizing the received signal; since the actual received signal angle is unknown, this leads to discrepancies in the input data when the model is applied and affects the estimation accuracy. Liu uses deep neural networks to achieve DOA estimation that is robust to array imperfections [17], but this method is not applicable to low-elevation-angle targets.
For low-elevation-angle targets, the direct and reflected angles are both small, making them difficult to distinguish and to estimate with high accuracy. This paper therefore proposes a low-elevation-angle target DOA estimation algorithm based on a convolutional autoencoder and a convolutional neural network. The convolutional autoencoder (CAE) in this algorithm can effectively extract the direct-angle part of the received signal; performing the angle estimation with convolutional neural networks (CNNs) after preclassification by an extreme learning machine further improves the estimation accuracy and reduces the time and space complexity; and the overall model uses the received signal covariance matrix as input without complex preprocessing.

2. Multipath Signal Model

2.1. Multipath Signal Spatial Model

The spatial model in which the multipath effect occurs is shown in Figure 1, which illustrates the correspondence between angle, wave path, and spatial distance. Usually, the plane parallel to the ground or sea surface is taken as the reference plane, with the direct angle $\theta_d$ positive and the reflected angle $\theta_i$ negative. For convenience of presentation, the direct and reflected angles are treated as positive values in the spatial model.
According to the geometric relationship in Figure 1, the following can be obtained:
$$\tan\theta_d = \frac{h_t - h_r}{R_0}, \tag{1}$$
$$\tan\theta_i = \frac{h_t + h_r}{R_0}, \tag{2}$$
where $h_t$ denotes the target height, $h_r$ the height of the radar antenna, and $R_0$ the horizontal distance between radar and target (see Figure 1). Dividing Equation (1) by Equation (2),
$$\frac{\tan\theta_d}{\tan\theta_i} = \frac{h_t - h_r}{h_t + h_r}, \tag{3}$$
that is,
$$\tan\theta_d = \frac{h_t - h_r}{h_t + h_r}\,\tan\theta_i, \tag{4}$$
$$\theta_d = \arctan\!\left(\frac{h_t - h_r}{h_t + h_r}\,\tan\theta_i\right). \tag{5}$$
From Figure 1, the wave paths of the direct and reflected waves can be obtained as follows:
$$R_d = \frac{h_t - h_r}{\sin\theta_d}, \tag{6}$$
$$R_1 + R_2 = \frac{h_t + h_r}{\sin\theta_i}, \tag{7}$$
where $R_d$ denotes the wave path of the direct wave and $R_1 + R_2$ denotes the wave path of the reflected wave. Therefore, the wave path difference $\Delta R$ can be expressed as follows:
$$\Delta R = \left(R_1 + R_2\right) - R_d = \frac{h_t + h_r}{\sin\theta_i} - \frac{h_t - h_r}{\sin\theta_d}. \tag{8}$$
According to the small-angle approximation [18], when $\theta < 12°$ the numerical difference between the sine and tangent functions does not exceed 1%, so the sine and tangent values are approximately equal. The elevation angles of the far-field low-altitude targets studied in this paper all lie in the range (0°, 10°), which is consistent with the small-angle approximation. Therefore, combining Equation (4), Equation (8) can be written in the following form:
$$\Delta R = \frac{h_t + h_r}{\tan\theta_i} - \frac{h_t - h_r}{\sin\theta_d} = \frac{h_t - h_r}{\tan\theta_d} - \frac{h_t - h_r}{\sin\theta_d} = \left(h_t - h_r\right)\frac{\cos\theta_d - 1}{\sin\theta_d}. \tag{9}$$
When the target is far from the radar and the elevation angle is small, $\frac{\cos\theta_d - 1}{\sin\theta_d} \to 0$ as $\theta_d \to 0$ (which follows from L'Hôpital's rule), so the wave path difference is approximately 0.
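As a numerical check of the geometry above, the following minimal Python sketch evaluates Equations (1), (2), (6), (7) and (8) for a hypothetical target/radar geometry (the heights and ground range are illustrative values, not taken from the paper); the resulting wave path difference is a fraction of a metre, consistent with the approximation $\Delta R \approx 0$ relative to the total wave paths.

```python
import numpy as np

# Hypothetical geometry (illustrative values, not from the paper):
h_t, h_r, R0 = 500.0, 10.0, 20_000.0           # target height, radar height, ground range (m)

theta_d = np.arctan((h_t - h_r) / R0)          # Eq. (1), direct angle
theta_i = np.arctan((h_t + h_r) / R0)          # Eq. (2), reflected angle (magnitude)
R_d     = (h_t - h_r) / np.sin(theta_d)        # Eq. (6), direct wave path
R_refl  = (h_t + h_r) / np.sin(theta_i)        # Eq. (7), reflected wave path
delta_R = R_refl - R_d                         # Eq. (8), wave path difference

print(f"theta_d = {np.degrees(theta_d):.3f} deg, theta_i = {np.degrees(theta_i):.3f} deg")
print(f"delta_R = {delta_R:.3f} m")            # ~0.5 m, i.e. Delta R ~ 0 relative to the ~20 km paths
```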

2.2. Multipath Signal Model

In practice, the signal propagates in both directions between the radar and the target; therefore, the echo signal received by the radar is divided into four parts, corresponding to the four propagation paths involving ground reflection in the low-altitude environment [19], shown as the four paths in Figure 1: direct AB–direct BA, direct AB–reflected BOA, reflected AOB–direct BA, and reflected AOB–reflected BOA. For convenience of calculation, only the multipath effect in the receiving process is considered, corresponding to the first two of the four paths above. Suppose there is an ideal uniform linear array with $M$ elements, an element spacing $d$ no greater than half a wavelength, and $L$ snapshots. A far-field narrowband signal is incident on the array at a small elevation angle $\theta_d$ ($\theta_d > 0$), and the multipath signal reflected by a smooth surface is incident on the array at an angle $\theta_i$ ($\theta_i < 0$); then the array received signal can be expressed as
$$x(t) = A S + n(t) = \left[a(\theta_d) + \varepsilon\, a(\theta_i)\right] s(t) + n(t), \tag{10}$$
where $S = s(t)$ denotes the signal vector; $n(t)$ denotes the array additive noise vector; $\varepsilon = \rho\, e^{-j 2\pi \Delta R / \lambda_0}$ denotes the total reflection coefficient; $\rho$ denotes the complex reflection coefficient, which is determined by the characteristics of the reflecting surface [20]; $\lambda_0$ is the signal wavelength; $2\pi \Delta R / \lambda_0$ is the phase difference due to the wave path difference between the direct and reflected waves; and $A = a(\theta_d) + \varepsilon\, a(\theta_i)$, where $a(\theta_d)$ and $a(\theta_i)$ denote the direct- and reflected-wave steering vectors, respectively,
$$a(\theta_d) = \left[1,\ e^{j 2\pi d \sin\theta_d / \lambda_0},\ \ldots,\ e^{j 2\pi (M-1) d \sin\theta_d / \lambda_0}\right]^T, \tag{11}$$
$$a(\theta_i) = \left[1,\ e^{j 2\pi d \sin\theta_i / \lambda_0},\ \ldots,\ e^{j 2\pi (M-1) d \sin\theta_i / \lambda_0}\right]^T. \tag{12}$$
According to the derivation from the spatial model of the multipath effect, $\Delta R$ is approximately 0, so the phase difference between the direct and reflected waves is approximately 0, the direct and reflected waves are coherent, and the total reflection coefficient is approximately equal to the complex reflection coefficient. When the relative position of the target and the radar is fixed, using the mapping between the direct and reflected angles given by Equation (5), the $k$-th element of $A$ can be written as
$$A_k = a_k(\theta_d) + \rho\, a_k(\theta_i) = e^{j 2\pi (k-1) d \sin\theta_d / \lambda_0} + \rho\, e^{j 2\pi (k-1) d \sin\theta_i / \lambda_0} = \tau_{dk}\, e^{j 2\pi (k-1) d \sin\theta_d / \lambda_0}. \tag{13}$$
Substituting Equation (13) into Equation (10), a new signal expression can be obtained:
$$x(t) = \Gamma_d \odot a(\theta_d)\, s(t) + n(t), \tag{14}$$
where $\odot$ denotes the Hadamard product and $\Gamma_d = \left[\tau_{d1}, \tau_{d2}, \ldots, \tau_{dM}\right]^T$. When the array signal is rewritten in the form of Equation (14), the array received signal covariance matrix takes the following form [21]:
$$R = \left(\Gamma_d \odot a(\theta_d)\right) R_{ss} \left(\Gamma_d \odot a(\theta_d)\right)^H + \sigma^2 I, \tag{15}$$
where $R_{ss} = E\left[s(t) s^H(t)\right]$ is the signal covariance matrix, $\sigma^2$ denotes the unknown noise power, and $I$ denotes the identity matrix of dimension $M \times M$.
In general, when only the direct wave signal from direction $\theta_d$ is incident on the array and there is no multipath signal, the array received signal covariance matrix is as follows:
$$R_0 = a(\theta_d) R_{ss}\, a^H(\theta_d) + \sigma^2 I. \tag{16}$$
Comparing Equations (15) and (16), once the mapping $f: R \rightarrow R_0$ and its neural network structure are obtained with a deep learning method, the reflected-angle component of the multipath signal can be filtered out and the direct-angle signal obtained, thus realizing the de-multipath.
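To make the signal model concrete, the sketch below generates the array received signal of Equation (10) and the sample covariance matrices corresponding to Equations (15) and (16); the array size, SNR, snapshot count, and the real-valued reflection coefficient are assumptions for illustration, not the authors' simulation settings.

```python
import numpy as np

def steering(theta_deg, M, d_over_lambda=0.5):
    """Uniform linear array steering vector, Equations (11)-(12)."""
    k = np.arange(M)
    return np.exp(1j * 2 * np.pi * k * d_over_lambda * np.sin(np.radians(theta_deg)))

def sample_covariances(theta_d, theta_i, M=20, L=100, snr_db=10.0, rho=-0.9):
    """Sample estimates of R (Eq. 15, with multipath) and R0 (Eq. 16, direct wave only).
    rho is a hypothetical real reflection coefficient; since Delta R ~ 0, epsilon ~ rho."""
    a_d, a_i = steering(theta_d, M), steering(theta_i, M)
    s = (np.random.randn(L) + 1j * np.random.randn(L)) / np.sqrt(2)        # unit-power signal
    sigma2 = 10.0 ** (-snr_db / 10.0)
    n = np.sqrt(sigma2 / 2) * (np.random.randn(M, L) + 1j * np.random.randn(M, L))
    x  = np.outer(a_d + rho * a_i, s) + n      # Eq. (10): direct + reflected wave
    x0 = np.outer(a_d, s) + n                  # direct wave only
    R  = x  @ x.conj().T  / L                  # sample estimate of Eq. (15)
    R0 = x0 @ x0.conj().T / L                  # sample estimate of Eq. (16)
    return R, R0

R, R0 = sample_covariances(theta_d=1.405, theta_i=-2.609)   # one of the angle pairs used in Section 4.1
```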

3. Deep Neural Network Model

The proposed deep learning network structure is divided into two parts: in the first part, a CAE realizes the de-multipath and an extreme learning machine (ELM) realizes the angle preclassification; in the second part, a parallel convolutional neural network realizes the angle estimation. The deep learning network structure is shown in Figure 2 below:
In the CAE and preclassification model, the CAE is trained with the array signal covariance matrix $R$ under the multipath effect as input, and with its corresponding signal covariance matrix $R_0$, which contains only the direct wave and no reflected wave, as output, so as to extract features and remove the multipath. Then, according to three angle intervals, the angle preclassification is performed by the ELM using the latent features extracted by the CAE model. After the classification is completed, the deep convolutional neural networks are trained separately for the different categories, so as to obtain the final angle estimate; the overall inference flow is sketched below.
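The following schematic Python function illustrates this two-part inference flow; the component names are hypothetical placeholders for the trained CAE, ELM, and CNN branches, not the authors' code.

```python
# Hypothetical end-to-end inference flow of the proposed model (the component names are
# placeholders for the trained CAE, ELM and CNN branches, not the authors' code).
def estimate_doa(R, cae_encoder, cae, elm, branch_cnns):
    """R: multipath sample covariance matrix of the array (e.g., 20 x 20)."""
    z      = cae_encoder(R)        # latent features from the CAE encoding stage
    R0_hat = cae(R)                # "de-multipath" covariance matrix (direct wave only)
    c      = elm(z)                # angle interval: 0 -> (0, 3.2], 1 -> (3.2, 5.8], 2 -> (5.8, 10] degrees
    return branch_cnns[c](R0_hat)  # DOA estimate from the selected CNN branch
```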

3.1. Convolutional Autoencoder and Preclassification Model

The CAE and preclassification model consists of two neural networks in parallel, as shown in Figure 3 below.
The branch connected by the blue arrows is the CAE, which extracts features and implements the mapping from the multipath signal covariance matrix to the direct-wave signal covariance matrix. The branch connected by the green arrows is the extreme learning machine, which realizes the angle preclassification.

3.1.1. Convolutional Autoencoder

CAE is a variant of the autoencoder (AE) that is based on the same principles and is often applied in data compression and feature extraction [22]. Although AE is often regarded as unsupervised learning, strictly speaking it should be classified as self-supervised learning, which aims to copy the input to the output and to reconstruct the feature representation between input and output. Structurally, an AE is divided into two major parts: the encoder transforms the input data into a latent space (latent feature) representation, and the decoder then reconstructs the input from this representation. The difference between CAE and AE is that the encoding and decoding are carried out with convolutional operations instead of other network layers, which gives a better training effect.
In the encoding process, the input data are the covariance matrix $R$; there are $K$ convolution kernels, each denoted $w^k$, of size $3 \times 3$; and a bias $b^k$ is added to the result of each convolution operation. The operation of the first convolutional layer in the encoding process can then be expressed as
$$h^k = \sigma\!\left(R * w^k + b^k\right), \tag{17}$$
where $*$ denotes the convolution operation and $\sigma(\cdot)$ denotes the activation function, for which the ReLU function is usually chosen. The pooling layer follows the convolution operation; the maximum pooling function is chosen in this paper [23], i.e., the maximum value in each pooling region is retained, with a pooling region size of $2 \times 2$. Two convolution-pooling operations are performed during the encoding process, and the latent feature representation of the data is obtained at the end of encoding.
The upsampling operation is performed first in the decoding process; its purpose is to expand the data by a certain proportion. After a convolution operation, the size of the data is usually reduced by a certain proportion, whereas the feature map after upsampling is larger than the input feature map, so that the size of the data can be recovered. Upsampling methods include the transposed convolution method, the bilinear interpolation method, the inverse pooling (unpooling) method, and the direct padding method. In this paper, the direct padding method is chosen, i.e., each value in the matrix is expanded into a constant block of the required size. Taking the expansion of a $2 \times 2$ matrix into a $4 \times 4$ matrix as an example, the expansion process is shown in Figure 4.
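A one-line NumPy sketch of this direct padding (nearest-neighbour expansion), with illustrative values:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
# Direct padding: each element is replicated into a 2 x 2 block, expanding 2 x 2 -> 4 x 4.
up = np.kron(a, np.ones((2, 2), dtype=a.dtype))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```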
Compared with the transposed convolution and bilinear interpolation methods, the direct padding method does not require complex operations, and compared with the inverse pooling method it has a significant advantage in time and space complexity, since it does not require storing the locations and values from the pooling operations of the encoder. After the upsampling operation, the result is convolved with the same rules and the same activation function as the convolutions in the encoding process. The loss function of the CAE in this paper is the binary cross-entropy function, which is defined as
$$f_{bce\_loss} = -\sum_{i=1}^{M}\sum_{j=1}^{M}\left[\, r_{ij}\log(\hat{r}_{ij}) + (1 - r_{ij})\log(1 - \hat{r}_{ij})\,\right], \tag{18}$$
where $r_{ij}$ denotes the actual value of an element of the covariance matrix and $\hat{r}_{ij}$ is the predicted value.
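A minimal Keras sketch of a CAE consistent with the structure described above (two convolution-pooling stages in the encoder, upsampling followed by convolution in the decoder, a single sigmoid output kernel as specified in Section 4, binary cross-entropy loss) is given below; treating the covariance matrix as a single real-valued channel normalized to [0, 1] is an assumption for illustration, not the authors' exact preprocessing.

```python
from tensorflow.keras import layers, models

M = 20  # number of array elements; input is the M x M covariance matrix as one real channel in [0, 1]

inp = layers.Input(shape=(M, M, 1))
# Encoder: two convolution-pooling stages, 20 kernels of size 3 x 3, ReLU activation
x = layers.Conv2D(20, (3, 3), activation="relu", padding="same")(inp)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(20, (3, 3), activation="relu", padding="same")(x)
latent = layers.MaxPooling2D((2, 2), padding="same")(x)     # latent features (also fed to the ELM)
# Decoder: nearest-neighbour upsampling (the direct padding of Figure 4) followed by convolution
x = layers.UpSampling2D((2, 2))(latent)
x = layers.Conv2D(20, (3, 3), activation="relu", padding="same")(x)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(20, (3, 3), activation="relu", padding="same")(x)
out = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)  # reconstructed "de-multipath" matrix

cae = models.Model(inp, out)
encoder = models.Model(inp, latent)
cae.compile(optimizer="adam", loss="binary_crossentropy")    # Eq. (18)
# cae.fit(R_multipath, R0_direct, epochs=..., batch_size=...)  # trained on (R, R0) pairs
```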

3.1.2. Extreme Learning Machine for Preclassification

Since the angle ranges are not strictly separated, the selection of the CNN branch does not need to be strict, especially when the direct angle lies near the boundary of a range. Based on these requirements, this paper selects an extreme learning machine, with a simple structure and low time and space complexity, for the angle preclassification. The latent features generated by the encoding process of the CAE are used as the input of the ELM, indicated by the green arrow in Figure 3, for training, so as to complete the preclassification of the angles.
The characteristics of the ELM are that it has only one hidden layer and does not require gradient-based backpropagation to adjust the weights; that is, the connection weights between the input and hidden layers and between the hidden and output layers do not need to be adjusted iteratively, which effectively reduces the number of operations, and no loss function needs to be optimized iteratively. Some studies have confirmed that the ELM has clear advantages in generalization [24]. The training of the extreme learning machine for samples $X_i = \left(x_{i1}, x_{i2}, \ldots, x_{im}\right)$, $i = 1, 2, \ldots, N$, can be expressed in the following form:
$$\sum_{j=1}^{L}\beta_j\, \sigma\!\left(W_j \cdot X_i + b_j\right) = t_i, \quad i = 1, 2, \ldots, N, \tag{19}$$
where $m$ denotes the data dimension of sample $X_i$; $L$ denotes the number of neurons in the hidden layer; $W_j$ and $b_j$ denote the input weights and bias, respectively; $\beta_j$ denotes the output weight; $\sigma(\cdot)$ denotes the activation function; and $t_i$ denotes the output of the extreme learning machine. For all samples, this can be further abbreviated as
$$H\beta = t, \tag{20}$$
where $H = \left[\sigma\!\left(W_j \cdot X_i + b_j\right)\right]_{N \times L}$ denotes the outputs of all hidden-layer neurons, $\beta = \left[\beta_1, \beta_2, \ldots, \beta_L\right]^T$ denotes the output weights, and $t$ denotes the actual output. The purpose of the network training is to minimize the error between the actual output and the target output $T_i$, i.e., the minimization of the loss function
$$E = \min_{\beta}\ \sum_{i=1}^{N}\left\| T_i - \sum_{j=1}^{L}\beta_j\, \sigma\!\left(W_j \cdot X_i + b_j\right) \right\|^2. \tag{21}$$
Traditional gradient-descent-based algorithms for solving the above problem require adjusting all parameters during the iterative process; in the ELM, however, once the input weights $W_j$ and biases $b_j$ are determined, the output of the hidden-layer neurons is uniquely determined. Therefore, the ELM training problem can be transformed into solving the linear system $H\beta = T$, and the weights $\beta$ can be uniquely determined as
$$\beta = H^{+} T, \tag{22}$$
where $H^{+}$ is the Moore–Penrose generalized inverse of the matrix $H$.
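A compact NumPy sketch of ELM training and prediction following Equations (19)-(22) is given below; the hidden-layer size, the ReLU activation, and the one-hot encoding of the three angle intervals are assumptions for illustration.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """X: (N, m) latent features; T: (N, 3) one-hot labels of the angle intervals C1/C2/C3."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights, never updated
    b = rng.standard_normal(n_hidden)                 # random biases, never updated
    H = np.maximum(X @ W + b, 0.0)                    # hidden-layer output matrix H (ReLU assumed)
    beta = np.linalg.pinv(H) @ T                      # Eq. (22): beta = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.maximum(X @ W + b, 0.0)
    return np.argmax(H @ beta, axis=1)                # predicted angle-interval index
```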
The performance of ELM for preclassification is evaluated by precision, which is calculated as shown below:
$$P_i = \frac{N_t}{N_i}, \tag{23}$$
where $N_i$ denotes the number of samples classified into interval $i$, and $N_t$ denotes the number of samples in $N_i$ that are correctly classified.

3.2. Convolutional Neural Network Model

Considering the estimation accuracy and time–space complexity, a parallel convolutional neural network [25] is designed, and its model is shown in Figure 5 below:
The “de-multipath” covariance matrix obtained from the CAE and the angle class obtained from the ELM are input to the parallel convolutional neural network model in Figure 5; according to the angle class, the covariance matrix is fed to the corresponding convolutional neural network, which outputs the estimated angle. The three parallel convolutional neural networks CNN1, CNN2, and CNN3 have the same convolutional structure: three convolutional layers combined with pooling layers and, finally, three fully connected layers. The size of the convolution kernels is $3 \times 3$, the activation function is the ReLU function, the pooling layers use the maximum pooling criterion with a size of $2 \times 2$, and the depth of the fully connected part is 3. To make the output more streamlined and intuitive, CNN1, CNN2, and CNN3 are designed as regression models, so the output of each convolutional neural network is the DOA estimate, and the loss function of each convolutional neural network model is the mean-square error function, i.e.,
$$f_{mse\_loss} = \left(\hat{\theta}_d - \theta_d\right)^2, \tag{24}$$
where $\theta_d$ is the actual value and $\hat{\theta}_d$ denotes the predicted value.
The three convolutional neural network models differ in the number of convolution kernels in each convolutional layer and the number of neurons in each fully connected layer. During training, forward propagation is followed by error backpropagation to correct the parameters, iterating until the training error falls below the set threshold or the maximum number of training iterations is reached. During training and testing, the three convolutional neural network models are independent of each other and do not interfere with each other; a sketch of one branch is given below.
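For illustration, the sketch below builds one such branch in Keras using the kernel and fully connected sizes reported for CNN1 in Section 4; whether every convolutional layer is followed by pooling and the single-neuron regression head are assumptions, not the authors' exact configuration.

```python
from tensorflow.keras import layers, models

def build_branch(n_kernels=(32, 16, 12), fc=(6000, 3000, 3000)):
    """One CNN branch; the numbers shown are the CNN1 configuration reported in Section 4."""
    m = models.Sequential()
    m.add(layers.Input(shape=(20, 20, 1)))
    for k in n_kernels:                                       # three 3 x 3 convolutional layers with ReLU
        m.add(layers.Conv2D(k, (3, 3), activation="relu", padding="same"))
        m.add(layers.MaxPooling2D((2, 2), padding="same"))    # 2 x 2 max pooling
    m.add(layers.Flatten())
    for n in fc:                                              # three fully connected layers
        m.add(layers.Dense(n, activation="relu"))
    m.add(layers.Dense(1))                                    # regression output: the DOA estimate (degrees)
    m.compile(optimizer="adam", loss="mse")                   # mean-square error loss, Eq. (24)
    return m

cnn1 = build_branch()
# cnn2 = build_branch((20, 12, 12), (3000, 2000, 2000)); cnn3 = build_branch((12, 12, 6), (1500, 1500, 1500))
```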

4. Simulation Experiments and Data Analysis

In the simulation experiments, the designed array is an ideal uniform linear array with 20 elements, the element spacing is half a wavelength, and the target is a far-field narrowband signal. According to Equation (4), 20 groups of direct and reflected angles with relatively fixed spatial geometry are set randomly, the low-altitude range is (0°, 10°), and the step of the direct-angle variation is $\Delta\theta = 0.001°$, so the total sample size is 200,000; each sample contains a one-to-one correspondence between the covariance matrix $R$ with multipath, the covariance matrix $R_0$ without multipath, and the corresponding direct and reflected angles. The test set contains 2000 randomly selected samples, and the remaining samples are used as the training set. The CAE and preclassification model and the parallel CNN model are shown in Figure 3 and Figure 5 above; the number of convolution kernels in each layer of the CAE is 20, their size is $3 \times 3$, and the activation function is the ReLU function. In the decoding stage, a convolutional layer with one convolution kernel of size $3 \times 3$ and a sigmoid activation function is added at the end. The number of neurons in the hidden layer of the ELM model is 50, and the three angle-classification intervals are C1: (0°, 3.2°], C2: (3.2°, 5.8°], and C3: (5.8°, 10°]. The numbers of convolution kernels in the branches of the parallel CNN model are CNN1: (32, 16, 12), CNN2: (20, 12, 12), and CNN3: (12, 12, 6), and the numbers of neurons in the fully connected layers are CNN1: (6000, 3000, 3000), CNN2: (3000, 2000, 2000), and CNN3: (1500, 1500, 1500), respectively; the DOA estimate is obtained after the convolutional, pooling, and fully connected layers. In this paper, the root-mean-square error is chosen as the measure of DOA estimation performance, defined as
$$RMSE = \sqrt{\frac{1}{Q}\sum_{i=1}^{Q}\left(\hat{\theta}_i - \theta_i\right)^2}, \tag{25}$$
where $Q$ denotes the test set size, and $\hat{\theta}_i$ and $\theta_i$ denote the estimated and actual angle values, respectively.
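A direct implementation of Equation (25), assuming the estimated and true angles are given in degrees:

```python
import numpy as np

def rmse(theta_hat, theta):
    """Root-mean-square error of Eq. (25) over the Q test samples (angles in degrees)."""
    theta_hat, theta = np.asarray(theta_hat), np.asarray(theta)
    return np.sqrt(np.mean((theta_hat - theta) ** 2))
```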

4.1. Verification of Algorithm Validity

The number of array elements in the experiment is 20, the signal-to-noise ratio (SNR) is 10 dB, and the number of snapshots is 100; three sets of direct and reflected angles are taken as examples. The spatial smoothing preprocessing (SSP) algorithm [8], the improved spatial smoothing preprocessing (MSSP) algorithm [9], and the convolutional autoencoder proposed in this paper are used for decoherence, and the MUSIC algorithm is used for DOA estimation after decoherence; these combinations are denoted SSP MUSIC, MSSP MUSIC, and CAE MUSIC, respectively. The spatial spectra are used to compare the angular resolution and estimation performance of the three algorithms, as shown in Figure 6, Figure 7 and Figure 8.
Comparing SSP MUSIC and MSSP MUSIC in Figure 6 shows that when the direct angle is small, the spacing between the direct and reflected angles is also small; in this case, relying on spatial smoothing for decoherence cannot fully separate the direct and reflected angles, and the spectral peaks of the SSP MUSIC and MSSP MUSIC algorithms appear only in the negative-angle region. As shown in Figure 7 and Figure 8, when the direct angle increases, SSP MUSIC and MSSP MUSIC can separate the direct and reflected angles, but interference spectral peaks appear in the target region; for the SSP MUSIC algorithm in particular, the interference peaks can even seriously affect the final angle estimate. As can be seen in Figure 6, Figure 7 and Figure 8, compared with SSP MUSIC and MSSP MUSIC, the CAE MUSIC algorithm obtains spectral peaks within the target region, and its most significant advantage is that there are no interference peaks within the target region, while the amplitude of the interference peaks outside the target region is much smaller than that of the peaks within it. When the incidence angle is small, the estimation errors of all three algorithms are large; as the incidence angle increases, the estimation accuracy of all three algorithms improves, and the estimation accuracy of the CAE MUSIC algorithm is better than that of the other two. This shows that the CAE is effective in extracting the direct-wave component of low-elevation-angle signals, and that its performance is better than that of SSP and MSSP.
The RMSEs of the combined algorithms and of CAE CNN for DOA estimation in each angle range (0° to 1°, 1° to 2°, 2° to 3°, etc.) are calculated under the above conditions, and the results are shown in Figure 9.
From Figure 9a, it can be seen that the RMSEs of CAE MUSIC, CAE ESPRIT, and CAE ML are lower when the direct angle is greater than 5°; when the direct angle is less than 5°, the RMSEs are all higher, with the estimation accuracy of CAE ESPRIT slightly higher than that of the other two, although for direct angles greater than 7° its RMSE is greater than that of the other two algorithms. In Figure 9b, the three curves CAE CNN1, CAE CNN2, and CAE CNN3 are obtained by testing each of the three neural networks after training on all samples, i.e., without ELM preclassification, so that DOA estimation is realized with a single CNN. The curve CAE CNNs gives the results obtained with ELM preclassification followed by DOA estimation with the corresponding CNN, which is the model proposed in this paper. From Figure 9b, it can be seen that CAE CNN1, CAE CNN2, and CAE CNN3 have unequal RMSEs in each angle range, and the RMSE decreases as the angle increases. When the direct angle is small, CNN1 has the lowest RMSE and the highest estimation accuracy; in the range from 4° to 6°, CNN2 has a lower RMSE than the other two; and when the direct angle is larger than 6°, CNN3 performs slightly better than the other two. Comparing Figure 9a,b, it can be seen that the use of a CNN significantly reduces the RMSE of DOA estimation, especially when the angle is less than 5°. Comparing the four curves in Figure 9b, it can be seen that when the three angle ranges are trained separately, the RMSE of CAE CNNs in each angle range is better than that of the other three. From the standpoint of training efficiency, increasing the number of convolution kernels, the number of fully connected neurons, and the number of iterations of a convolutional neural network increases the time and space complexity of the model and even carries a risk of overfitting; considering both aspects, dividing the angle intervals and training and estimating separately for each has significant advantages.
The precision of the three angle intervals using ELM for preclassification is shown in Table 1.
As can be seen from Table 1, the precision of the classification in each angle interval is high when the ELM is adopted, which meets the preclassification requirements.

4.2. Effect of the Number of Snapshots on DOA Estimation Performance

In this group of simulation experiments, $SNR = 10\ \mathrm{dB}$, the other simulation parameters are kept constant, and array received data with snapshot numbers of 25, 50, 100, 150, 200, 300, and 400 are generated for the simulation experiments. With the conventional algorithms, there are cases in which the direct or reflected angle cannot be estimated effectively (e.g., the spatial spectra of the direct and reflected angles cannot be separated effectively by the MSSP MUSIC algorithm, or the estimated angle deviates seriously from the target region); such cases are regarded as invalid estimates, and therefore only the valid direct- and reflected-angle estimates are used when computing the mean-square error. The comparison experiments on the effect of the signal-to-noise ratio on DOA estimation performance in the next section follow the same principle. The statistical effective estimation rates of each conventional algorithm are shown in Table 2.
From Table 2, it can be seen that the effective estimation rates of the above three algorithms for the direct angle are lower than those for the reflected angle; among them, MSSP ESPRIT has the lowest effective estimation rate for the direct angle, MSSP MUSIC has the highest rate for the direct angle, and the ML algorithm has the highest rate for the reflected angle, and the overall effective estimation rates for the direct and reflected angles increase as the number of snapshots increases. When CAE is combined with the conventional algorithms, the estimated angles are all within the target range, and effective estimation is always achieved.
The RMSE of each algorithm is calculated for different numbers of snapshots, including the total RMSE (over direct and reflected angles) and the RMSEs of the direct and reflected angles separately, and the RMSEs of DOA estimation by CAE combined with the conventional algorithms and with the CNN are compared separately, as shown in Figure 10 below.
Figure 10a–c show that the RMSE of DOA estimation of the direct angle by CAE combined with the conventional algorithms is smaller than that of the conventional algorithms after MSSP decoherence; the RMSE shows a slight decreasing trend as the number of snapshots increases, and the RMSE of each algorithm is essentially stable when the number of snapshots is larger than 200. When the number of snapshots is large, MSSP ESPRIT has the highest estimation accuracy for the direct angle among the conventional algorithms, but, combined with Table 2, its effective estimation rate is also the lowest; considering both the RMSE and the effective estimation rate, the ML algorithm has the best performance among the conventional algorithms for estimating the direct angle. From Figure 10d, it can be seen that the RMSEs of the CAE MUSIC, CAE ESPRIT, and CAE ML algorithms are not greatly affected by the number of snapshots, whereas the CAE CNN algorithm has a significantly better RMSE than the other algorithms at all snapshot numbers, and its RMSE decreases as the number of snapshots increases and stabilizes when the number of snapshots is greater than 200.

4.3. Effect of SNR on DOA Estimation Performance

In this set of experiments, the number of snapshots is 200, the other simulation parameters are kept constant, and array received data with SNRs of −5 dB, 0 dB, 3 dB, 6 dB, 9 dB, 12 dB, and 15 dB are generated for the simulation experiments. The ratios of effective direct- and reflected-angle estimates obtained by each algorithm at different SNRs are shown in Table 3 below.
From Table 3, it can be seen that as the SNR increases, the effective estimation rate of each algorithm for the direct and reflected angles increases; among them, the effective estimation rate of the MSSP MUSIC algorithm for the direct angle is higher than that of the other two algorithms, and the ML algorithm has the highest effective estimation rate for the reflected angle, but at the same time the ML algorithm has the lowest effective estimation rate when the SNR is −5 dB and 0 dB, so it is the most affected by low SNR.
The RMSE of each algorithm under different SNRs is calculated and compared with the RMSE of CAE combined with the conventional algorithms and with the CNN for DOA estimation, as shown in Figure 11 below.
As shown in Figure 11a–c, the RMSE of DOA estimation of each algorithm decreases as the SNR increases, with the ML algorithm showing the most significant change, while the RMSEs of the CAE MUSIC, CAE ESPRIT, and CAE ML algorithms do not change significantly with the SNR. Among the traditional algorithms, MSSP ESPRIT has the smallest RMSE for the total, direct, and reflected angles, but, combined with Table 3, its effective estimation rate is the lowest compared with the other two algorithms. As can be seen in Figure 11d, the RMSE of CAE ML for DOA estimation of the direct angle is the lowest among the combined algorithms, but the RMSE of CAE CNN decreases with increasing SNR, is significantly lower than that of the other three algorithms under all SNR conditions, and tends to be stable when the SNR is greater than or equal to 9 dB. It can be concluded that the CAE CNN algorithm is better than the conventional algorithms and the combined algorithms for DOA estimation of the direct angle of low-elevation signals under different SNRs.

5. Conclusions

In the field of radar signal processing, DOA estimation of low-elevation-angle targets has always been a key and difficult problem. Compared with high-altitude targets, the multipath effect generated by low-elevation-angle targets is more severe: the strong ground-reflected signal is highly correlated with the direct wave in the time domain, and it is almost impossible to distinguish the direct wave from the ground-reflected multipath signal in the spatial, temporal, or frequency domain, resulting in large DOA estimation errors. With the application of deep learning in radar signal processing, deep learning also provides new ideas for DOA estimation of low-elevation-angle targets. On this basis, this paper proposes a new CAE-CNN-based DOA estimation method for low-elevation-angle targets, which consists of two main parts. The first part takes the covariance matrix of the received signals of the low-elevation-angle target array, containing both direct and reflected waves, as the input of the CAE and obtains the “de-multipath” covariance matrix containing only the direct-wave signal. To limit the additional complexity introduced by preclassification, the preclassification model is implemented by an ELM, and the latent spatial features obtained in the encoding process of the CAE are used as the input of the ELM to obtain the DOA preclassification of the direct wave. The “de-multipath” covariance matrix and the angular preclassification obtained in the first part are used as the inputs of the parallel CNN in the second part: according to the preclassification, the corresponding convolutional neural network is selected, and the “de-multipath” covariance matrix is used as its input to obtain the DOA estimate of the direct wave. As deep neural network models can underfit when the dataset is small, sufficient training data are necessary for the model to be adequately trained. The model is trained offline, and once the neural network has been trained, prediction requires very little time. Taking into account the estimation accuracy and model complexity, a parallel convolutional neural network model with three branches is designed. Simulation experiments show that the CAE can effectively remove the multipath from a low-elevation-angle target signal and achieves higher estimation accuracy and a higher effective estimation rate when combined with traditional algorithms for DOA estimation; the combination of the CAE and the parallel CNN better captures the relationship between the covariance matrix and the angle in different angle ranges, yields a lower RMSE and better DOA estimation performance, avoids overfitting and underfitting as far as possible, and reduces the time and space complexity.

Author Contributions

This article was coauthored by F.Z., G.H., H.Z. and C.Z.; the major individual contributions are as follows: conceptualization, F.Z. and G.H.; methodology, F.Z., G.H. and C.Z.; software, F.Z. and H.Z.; validation, H.Z. and C.Z.; formal analysis, F.Z.; investigation, C.Z.; resources, G.H.; data curation, H.Z.; writing—original draft preparation, F.Z.; writing—review and editing, F.Z.; supervision, G.H.; project administration, G.H.; funding acquisition, G.H. and H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 6207011332.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they do not come from public datasets but were obtained by simulating the signal models described in the article.

Acknowledgments

We thank the college for providing us with an efficient simulation platform so that we can complete the experimental simulation as scheduled. Thanks are due to Bingqi Liu for valuable discussion. Funding from the National Natural Science Foundation of China (No. 6207011332) is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Richards, M. Fundamentals of Radar Signal Processing, 2nd ed.; IET: London, UK, 2014; pp. 1–16.
2. Zhu, W.; Chen, B. Altitude measurement based on terrain matching in VHF array radar. Circuits Syst. Signal Process. 2013, 32, 647–662.
3. Xia, J.; Bai, W.; Zhao, D. First Shipborne GNSS-R Campaign for Receiving Low Elevation Angle Sea Surface Reflected Signals. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016.
4. Wang, P.; Wang, Y.; Morton, Y. Signal Tracking Algorithm with Adaptive Multipath Mitigation and Experimental Results for LTE Positioning Receivers in Urban Environments. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 2779–2795.
5. Guo, Y.; Zhang, L.; Zhang, J. A Coherent Signal Beamforming Technique Based on Sub-Array Cross Correlation. Digit. Signal Process. 2022, 121, 103291.
6. Wen, F.; Shi, J. Generalized Spatial Smoothing in Bistatic EMVS-MIMO Radar. Signal Process. 2021, 193, 108406.
7. Zhang, W.; Han, Y. Multiple-Toeplitz Matrices Reconstruction Algorithm for DOA Estimation of Coherent Signals. IEEE Access 2019, 7, 49504–49512.
8. Shan, T.; Wax, M. On Spatial Smoothing for Direction-Of-Arrival Estimation of Coherent Signals. IEEE Trans. Acoust. Speech Signal Process. 1985, 33, 806–811.
9. Pillai, S.; Kwon, B. Forward/Backward Spatial Smoothing Techniques for Coherent Signal Identification. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 8–15.
10. Ebrahim, M.; Raed, M. Computationally Efficient High-Resolution DOA Estimation in Multipath Environment. Electron. Lett. 2004, 40, 908–910.
11. Ebrahim, M.; Raed, M.; Mohammed, E. Computationally Efficient DOA Estimation in a Multipath Environment Using Covariance Differencing and Iterative Spatial Smoothing. In Proceedings of the IEEE International Symposium on Circuits and Systems, Kobe, Japan, 23–26 May 2005.
12. Zhao, L.; Ding, J. Direction-Of-Arrival Estimation of Multipath Signals Using Independent Component Analysis and Compressive Sensing. PLoS ONE 2017, 12, e0181838.
13. Harkouss, Y. Direction of Arrival Estimation in Multipath Environments Using Deep Learning. Int. J. Commun. Syst. 2021, 34, e4882.
14. Xiang, H.; Chen, B. Phase Enhancement Model Based on Supervised Convolutional Neural Network for Coherent DOA Estimation. Appl. Intell. 2020, 50, 2411–2422.
15. Xiang, H.; Chen, B. Improved De-Multipath Neural Network Models With Self-Paced Feature-to-Feature Learning for DOA Estimation in Multipath Environment. IEEE Trans. Veh. Technol. 2020, 69, 5068–5078.
16. Ge, X.; Hu, X. DOA Estimation for Coherent Sources Using Deep Learning Method. J. Signal Process. 2019, 8, 98–106.
17. Liu, Z.; Zhang, C. Direction-of-Arrival Estimation Based on Deep Neural Networks with Robustness to Array Imperfections. IEEE Trans. Antennas Propag. 2018, 66, 7315–7327.
18. David, H.; Robert, R. Fundamentals of Physics, 10th ed.; Wiley: Hoboken, NJ, USA, 2013; pp. 295–317.
19. Cheng, L.; Li, Y. DOA Estimation for Highly Correlated and Coherent Multipath Signals with Ultralow SNRs. Int. J. Antennas Propag. 2019, 1, 2837315.
20. Constantine, A. Advanced Engineering Electromagnetics, 1st ed.; Wiley: Hoboken, NJ, USA, 1989; pp. 78–79.
21. Mei, W.; Tian, W.; Yin, L. Research on Amplitude-Phase Error for LFMCW Radar. In Proceedings of the IET International Radar Conference, Hangzhou, China, 14–16 October 2015.
22. Pintelas, E.; Livieris, I. A Convolutional Autoencoder Topology for Classification in High-Dimensional Noisy Image Datasets. Sensors 2021, 21, 7731.
23. Zhou, J.; Zhang, Q. Fuzzy Graph Subspace Convolutional Network. IEEE Trans. Neural Networks Learn. Syst. 2022, 99, 1–15.
24. Sun, F.; Toh, K. Extreme Learning Machines 2013: Algorithms and Applications; Springer: London, UK, 2014; pp. 35–52.
25. Zhao, F.; Hu, G. DOA Estimation Method Based on Improved Deep Convolutional Neural Network. Sensors 2021, 22, 1305.
Figure 1. Spatial diagram of multipath effect.
Figure 2. Deep learning network structure.
Figure 3. CAE and preclassification model.
Figure 4. Direct padding method for matrix expansion.
Figure 5. Parallel convolutional neural network model.
Figure 6. DOA estimation comparison: DA = 1.405°, RA = −2.609°.
Figure 7. DOA estimation comparison: DA = 3.668°, RA = −6.834°.
Figure 8. DOA estimation comparison: DA = 7.110°, RA = −13.204°.
Figure 9. RMSE comparison of combined algorithms and CAE CNN for DOA estimation.
Figure 10. Comparison of RMSE of DOA estimation by each algorithm with different numbers of snapshots.
Figure 11. Comparison of RMSE of DOA estimation by each algorithm with different SNRs.
Table 1. Precision of the three angle intervals.

Category    C1: (0°, 3.2°]    C2: (3.2°, 5.8°]    C3: (5.8°, 10°]
Precision   96.85%            97.32%              97.56%
Table 2. Ratio of effective angles (%) of direct and reflected angles for each conventional algorithm with different numbers of snapshots.

Algorithm      Angle    25       50       100      150      200      300      400
MSSP MUSIC     DA       84.07    81.37    81.60    83.87    92.60    82.33    83.87
               RA       99.06    99.47    99.40    99.80    99.80    99.80    99.33
MSSP ESPRIT    DA       79.40    77.80    76.70    78.10    77.47    76.50    76.70
               RA       99.53    99.67    99.73    99.90    99.87    99.83    99.43
ML             DA       82.87    81.90    80.33    81.03    80.53    79.80    79.73
               RA       99.90    99.83    99.93    100      100      99.97    100
Table 3. Ratio of effective angles (%) of direct and reflected angles for each conventional algorithm with different SNRs.

Algorithm      Angle    −5 dB    0 dB     3 dB     6 dB     9 dB     12 dB    15 dB
MSSP MUSIC     DA       93.63    75.30    81.10    81.40    82.30    83.10    85.87
               RA       83.60    96.63    99.13    99.53    99.73    99.57    99.67
MSSP ESPRIT    DA       63.20    72.87    75.23    76.87    77.63    77.33    78.17
               RA       90.33    98.20    98.93    99.30    99.77    99.83    99.80
ML             DA       37.43    66.40    90.07    83.87    81.76    79.30    78.37
               RA       89.90    92.47    99.93    99.93    100      99.93    100

