Article

A Data and Model-Driven Clutter Suppression Method for Airborne Bistatic Radar Based on Deep Unfolding

National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2516; https://doi.org/10.3390/rs16142516
Submission received: 25 May 2024 / Revised: 7 July 2024 / Accepted: 8 July 2024 / Published: 9 July 2024

Abstract

Space–time adaptive processing (STAP) based on sparse recovery achieves excellent clutter suppression and target detection performance, even with a limited number of available training samples. However, most of these methods face performance degradation due to grid mismatch, which impedes their application in bistatic clutter suppression. Some gridless methods, such as atomic norm minimization (ANM), can effectively address grid mismatch issues, yet they are sensitive to parameter settings and array errors. In this article, the authors propose a data and model-driven algorithm that unfolds the iterative process of atomic norm minimization into a deep network. This approach establishes a concrete and systematic link between iterative algorithms, extensively utilized in signal processing, and deep neural networks. This methodology not only addresses the challenges associated with parameter settings in traditional optimization algorithms, but also mitigates the lack of interpretability commonly found in deep neural networks. Moreover, due to more rational parameter settings, the proposed algorithm achieves effective clutter suppression with fewer iterations, thereby reducing computational time. Finally, extensive simulation experiments demonstrate the effectiveness of the proposed algorithm in clutter suppression for airborne bistatic radar.

1. Introduction

Space–time adaptive processing (STAP) is a highly advanced technique for mitigating ground clutter in airborne radar systems, based on the pioneering research of Brennan and Reed [1,2,3,4]. The optimal STAP filter is created using the theoretical clutter plus noise covariance matrix (CNCM) for the cell under test (CUT), which is practically estimated using independently and identically distributed (IID) training samples. According to the Reed–Mallett–Brennan (RMB) criterion [5], to ensure that performance degradation remains below 3 dB, the fully adaptive sample matrix inversion (SMI) STAP filter requires at least twice as many IID samples as the system’s degrees of freedom. Unlike monostatic airborne radar, bistatic airborne radar systems separate the transmitter and receiver, positioning the transmitter behind the detection area. This configuration is highly valued for its superior electronic jamming resistance and anti-stealth capabilities. However, bistatic radars often encounter non-stationary clutter environments, influenced by factors such as system configuration, antenna placement, and aircraft velocity. Therefore, the RMB criterion is virtually unattainable in bistatic systems.
In recent decades, the rapid advancement of compressed sensing technology has catalyzed extensive research into sparse recovery-based space–time adaptive processing (SR-STAP) algorithms. These algorithms exploit the inherent sparsity of the clutter spectrum. In 2006, Maria’s seminal work established the foundations for clutter spectrum estimation using sparse recovery theory [6], demonstrating its capability to alleviate the challenges posed by IID sample scarcity. Emerging from this theoretical framework, SR-STAP algorithms are broadly categorized into two types: grid-based methods [7,8,9,10,11,12,13] and gridless methods [14,15,16,17,18,19].
Grid-based sparse recovery methods discretize the continuous spatial-temporal plane into finite grids to establish an overcomplete dictionary [7,8]. The efficacy of most existing grid-based SR-STAP algorithms hinges on the precision of this discrete dictionary. However, overly dense grid discretization may lead to significant computational burdens and create strong correlations among adjacent elements within the dictionary, thus undermining the precision of the sparse recovery [11]. Additionally, these methods presume that the clutter atoms match precisely with the grids of the dictionary. In real-world bistatic environments, the clutter’s Doppler frequencies are influenced by both the transmitter and the receiver, while the spatial frequencies are solely dependent on the receiver. This unique aspect often causes bistatic clutter to distribute along a curved trajectory on the spatial-temporal plane, consequently causing off-grid errors and spectral leakage in the clutter signal reconstruction.
To tackle the challenges associated with grid-based methods, research in the field of SR-STAP has increasingly pivoted toward the exploration of gridless techniques. These techniques, which are not confined to discrete grids, enable the analysis of any continuous frequency. Candès et al. pioneered a robust framework for linear spectral estimation based on atomic norm minimization in the context of single-measurement vectors (SMVs) in 2014, offering reliable frequency estimation in the continuous domain [14]. This framework was subsequently extended by Tang et al. to accommodate multiple measurement vectors (MMVs), broadening its applicability [15]. Building on this theoretical advancement, Feng et al. refined and applied atomic norm minimization (ANM) to the two-dimensional spatial-temporal field, specifically for the SR-STAP application [16]. Additionally, the introduction of a non-convex variant known as reweighted ANM (RAM) by Zhang et al. aimed to further promote sparsity and enhance the accuracy of estimations [17]. Meanwhile, Li et al. proposed a novel gridless mixed-norm minimization (MNM) approach to STAP, which they efficiently executed using the alternating direction method of multipliers (ADMM) technique [18]. Although the ANM-STAP method mitigates grid mismatch, it faces two significant limitations: (a) the computational burden of the CVX solver-based ANM-STAP [16] is substantial, particularly for large-scale applications, impeding its real-time deployment; and (b) the efficacy of ANM-STAP hinges on the selection of the regularization parameter [18], which proves challenging to determine in practical settings.
After a comprehensive analysis of the advantages and limitations of grid-based and gridless sparse recovery algorithms, it is clear that, while these methods can be effective, they are often hampered by off-grid errors, significant computational demands, and intricate parameter tuning challenges. Therefore, the methods mentioned above are not suitable for application in bistatic clutter suppression. This inadequacy underscores the urgent need for an alternative approach that can offer both computational efficiency and simplicity in parameter selection.
Deep neural networks provide unprecedented performance gains in many real-world signal processing problems. In [20], a CNN was utilized to transform clutter power spectra from low to high resolution. However, this method is purely data-driven, and its underlying structure is difficult to interpret. In [21], an autoencoder was employed to facilitate the elimination of heterogeneous training samples. Reference [22] represents the first application of deep unfolding algorithms to space–time adaptive processing. However, it still relies on grid-based sparse recovery algorithms, which suffer significant performance degradation when grid mismatch occurs. Moreover, the labels used were derived from 3000 iterations of the ADMM algorithm, necessitating the preliminary setting of the iteration parameters. This method cannot ensure the optimality of the chosen parameters, leading to potentially inaccurate labels.
In this article, we propose a novel approach that unfolds the iterative procedure of ANM into a deep neural network architecture, aimed at suppressing bistatic clutter. Leveraging ANM to mitigate the detrimental effects of grid mismatch, this methodology harnesses the power of deep learning (DL) algorithms to adaptively update the parameters traditionally set manually. Most purely data-driven deep learning algorithms possess a black-box nature, characterized by a lack of interpretability. However, the algorithm presented in this paper, being driven by both model and data, not only retains the capability of deep learning to automatically update parameters but also exhibits significant interpretability. During the training of the proposed network, we utilize the improvement factor (IF) derived from the ideal CNCM as labels, ensuring the accuracy of the labels. Therefore, we believe that this algorithm is specifically tailored for airborne bistatic clutter suppression.
The main contributions of this paper are outlined as follows:
(1)
We developed a detailed clutter signal model for airborne bistatic radar.
(2)
To address the parameter setting difficulties of the ANM algorithm, we unfolded it into a deep neural network, which we call the deep unfolding-based gridless sparse recovery bistatic STAP net (DUGLSR-BSTAP-Net). By selecting appropriate training data and labels to calculate the loss function and using the Adam optimizer for backpropagation to update parameters, the network can be trained to set different, more suitable parameters for each layer. This approach contrasts with traditional iterative algorithms, which maintain fixed preset parameters throughout the process.
(3)
We conducted extensive simulation experiments, comparing the proposed algorithm (DUGLSR-BSTAP-Net) with several classic SR-STAP methods across multiple aspects. These aspects included the clutter Capon spectrum, the eigenspectra of the estimated CNCM, improvement factor, and the variation of target detection probability with respect to the signal-to-noise ratio (SNR). The results validated that the proposed algorithm offers superior clutter suppression and target detection performance.
The remainder of this paper is organized as follows.
We establish the clutter model under a bistatic working environment and provide a brief introduction to atomic norm minimization (ANM) in Section 2. Section 3 presents the method for unfolding the iterative process of ANM into a deep neural network. Section 4 presents comparative experiments on algorithm performance to verify the effectiveness of the proposed algorithm.
Notation: $\mathbb{R}$ and $\mathbb{C}$ denote the sets of real and complex numbers, respectively. $a$, $\mathbf{a}$, and $\mathbf{A}$ represent a scalar, a vector, and a matrix, respectively. The superscripts $(\cdot)^T$ and $(\cdot)^H$ are the transpose and conjugate transpose of a vector or matrix. $\|\cdot\|_{2,1}$ and $\|\cdot\|_F$ are defined as the $\ell_{2,1}$ and Frobenius norms. $\otimes$ denotes the Kronecker product. For a vector $\mathbf{x}$, $\mathrm{diag}(\mathbf{x})$ denotes a diagonal matrix with the elements of $\mathbf{x}$ on its main diagonal. $\mathrm{Tr}(\mathbf{A})$ denotes the trace of a matrix $\mathbf{A}$. $\mathbf{A} \succeq 0$ means $\mathbf{A}$ is a positive semidefinite matrix. The brackets $\lfloor\cdot\rceil$ indicate rounding to the nearest integer. For matrices $\mathbf{A}$ and $\mathbf{B}$, $\langle \mathbf{A}, \mathbf{B} \rangle = \mathrm{Tr}(\mathbf{A}^H\mathbf{B})$. Finally, $\mathrm{vec}(\cdot)$ denotes the vectorization operation.

2. Signal Model and Overview of ANM-STAP

In this section, we introduce the airborne bistatic signal model, followed by a brief overview of the ANM-STAP.

2.1. Airborne Bistatic Signal Model

Consider an airborne bistatic pulsed Doppler radar system featuring a uniform linear array (ULA) consisting of $N$ elements spaced $d$ apart. Both the transmitter and receiver are equipped with this configuration. The transmitter operates at a constant pulse repetition frequency (PRF) of $f_{PRF}$. Within each coherent processing interval (CPI), the system transmits a coherent burst comprising $K$ pulses. In Figure 1, we establish a Cartesian coordinate system, where $\mathbf{c}_T = (X_T, Y_T, H_T)$ and $\mathbf{c}_R = (X_R, Y_R, H_R)$ represent the spatial coordinates of the transmitter and the receiver, respectively. $\alpha_T$ represents the angle between the transmitter’s array axis and the X-axis, while $\beta_T$ denotes the angle between the direction of the transmitter’s flight velocity and the X-axis. In the subsequent sections of the paper, $\alpha_R$ and $\beta_R$ will, respectively, represent the angle between the receiver’s array axis and the X-axis, and the angle between the direction of the receiver’s flight velocity and the X-axis (these two angles are not depicted in Figure 1). The set of all scatterers in space for which the sum of the distances to the receiver and transmitter equals a certain value forms a bistatic constant range sum ellipsoid. The intersection of this ellipsoid with the XOY plane is known as the bistatic constant range ring of clutter. In Figure 1, points $C_1$ and $C_2$ represent the ground projections of the transmitter and receiver, respectively.
In practical applications, the center of ellipsoids, defined by the transmitter and receiver as foci, often does not reside on the Z-axis, and the foci are not positioned on the same coordinate axis. Consequently, standard ellipsoidal formulas are not directly applicable for solutions. To address this, this section will implement translation and rotation transformations to align the ellipsoid’s foci on a singular coordinate axis and reposition the ellipsoid’s center onto the Z-axis, as illustrated in Figure 2. By performing these transformations, we can employ the standard ellipsoidal equation to solve for the ellipsoid, thereby accurately determining the coordinates of clutter patches.
First, we need to translate the center of the ellipsoid to the Z-axis. Prior to the translation, the center coordinates are $\mathbf{c}_o = \left(\frac{X_T + X_R}{2}, \frac{Y_T + Y_R}{2}, \frac{H_T + H_R}{2}\right)$, and the coordinates of a point on the ellipsoid surface are $a(x, y, z)$. After the translation, the center coordinates become $\mathbf{c}_o = \left(0, 0, \frac{H_T + H_R}{2}\right)$, and the coordinates of that point on the ellipsoid surface become $b(x_B, y_B, z_B) = (x - \Delta X, y - \Delta Y, z)$, with $\Delta X = \frac{X_T + X_R}{2}$ and $\Delta Y = \frac{Y_T + Y_R}{2}$. Next, we need to rotate the major axis of the ellipsoid successively around the Z-axis and the Y-axis until it aligns with the X-axis. Let $\mathbf{R}_z(a_z)$ and $\mathbf{R}_y(a_y)$ be the rotation matrices for rotations around the Z-axis and Y-axis, respectively. The coordinates after rotation are $c(x_C, y_C, z_C)$:
$$\begin{bmatrix} x_C \\ y_C \\ z_C \end{bmatrix} = \mathbf{R}_y(a_y)\,\mathbf{R}_z(a_z)\begin{bmatrix} x_B \\ y_B \\ z_B \end{bmatrix} \tag{1}$$
The coordinate point c ( x C , y C , z C ) satisfies the standard equation of an ellipsoid:
$$\frac{x_C^2}{a^2} + \frac{y_C^2 + z_C^2}{b^2} = 1 \tag{2}$$
where a and b, respectively, represent the semi-major and semi-minor axes of the ellipsoid.
For any given l-th range ring with bistatic range $R_l$, the corresponding major and minor axes of the ellipsoid can be determined. According to the definition of an ellipsoid, the semi-major axis $a$ satisfies $2a = R_l$. The semi-focal distance $c$ can be solved from $2c = \|\mathbf{c}_T - \mathbf{c}_R\|$, and the semi-minor axis $b$ can be determined from the relationship $b^2 = a^2 - c^2$.
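These relations are straightforward to evaluate numerically. The following minimal Python sketch (illustrative only, not the authors' code) derives the three ellipsoid parameters from a given bistatic range and the platform coordinates; the function name and the example values are assumptions for illustration.

```python
import numpy as np

def ellipsoid_axes(R_l, c_T, c_R):
    """Semi-axes of the bistatic constant-range ellipsoid for the l-th range ring."""
    a = R_l / 2.0                                                # 2a = R_l
    c = np.linalg.norm(np.asarray(c_T) - np.asarray(c_R)) / 2.0  # 2c = ||c_T - c_R||
    b = np.sqrt(a**2 - c**2)                                     # b^2 = a^2 - c^2 (requires R_l > 2c)
    return a, b, c

# Example with the collinear configuration used in Section 4:
# transmitter at (150 km, 0, 8 km), receiver at (0, 0, 8 km), bistatic range 200 km.
a, b, c = ellipsoid_axes(200e3, (150e3, 0.0, 8e3), (0.0, 0.0, 8e3))
```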
For the rotation matrices $\mathbf{R}_z(a_z)$ and $\mathbf{R}_y(a_y)$, the sign of the rotation angles should be determined based on the positions of the two foci. In this article, the first rotation is around the Z-axis, with a rotation angle of $a_z$. The angle of rotation is measured from the positive direction of the X-axis towards the positive direction of the Y-axis; if the rotation is in the opposite direction, the angle $a_z$ is negative. Specifically, if $(X_T - X_R)(Y_T - Y_R) \geq 0$, the rotation angle is negative. The cosine and sine values of the rotation angle are derived from the position coordinates of the foci:
$$\sin a_z = \frac{Y_T - Y_R}{\sqrt{(X_T - X_R)^2 + (Y_T - Y_R)^2}} \tag{3}$$
$$\cos a_z = \frac{X_T - X_R}{\sqrt{(X_T - X_R)^2 + (Y_T - Y_R)^2}} \tag{4}$$
Furthermore, the rotation matrix $\mathbf{R}_z(a_z)$ can be expressed as:
$$\mathbf{R}_z(a_z) = \begin{bmatrix} \cos a_z & -\sin a_z & 0 \\ \sin a_z & \cos a_z & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{5}$$
Similarly, in the second rotation, when rotating the major axis around the Y-axis, the direction of rotation must also be considered when solving for the rotation transformation matrix $\mathbf{R}_y(a_y)$. When rotating around the Y-axis, the rotation angle $a_y$ is measured from the positive direction of the Z-axis towards the positive direction of the X-axis; if the rotation is in the opposite direction, the angle $a_y$ is negative. Specifically, if $(X_T - X_R)(H_T - H_R) \geq 0$, the rotation angle $a_y$ is positive, and the cosine and sine values of the rotation angle are derived from the position coordinates of the foci:
$$\cos a_y = \frac{\sqrt{(X_T - X_R)^2 + (Y_T - Y_R)^2}}{\sqrt{(X_T - X_R)^2 + (Y_T - Y_R)^2 + (H_T - H_R)^2}} \tag{6}$$
$$\sin a_y = \frac{H_T - H_R}{\sqrt{(X_T - X_R)^2 + (Y_T - Y_R)^2 + (H_T - H_R)^2}} \tag{7}$$
Furthermore, the rotation matrix $\mathbf{R}_y(a_y)$ can be expressed as:
$$\mathbf{R}_y(a_y) = \begin{bmatrix} \cos a_y & 0 & \sin a_y \\ 0 & 1 & 0 \\ -\sin a_y & 0 & \cos a_y \end{bmatrix} \tag{8}$$
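As a quick numerical illustration of this rotation step, the sketch below composes a Z-rotation and a Y-rotation, built from the same sines and cosines as Equations (3), (4), (6), and (7), that take the focal (transmitter–receiver) axis onto the X-axis. It is a minimal sketch with one particular sign convention; the per-quadrant sign rules handled in the text for Equations (5) and (8) are not reproduced here, so the placement of the minus signs should be read as an assumption of this example.

```python
import numpy as np

def focal_axis_rotation(c_T, c_R):
    """Composite rotation (about Z, then Y) that maps the focal axis onto the X-axis."""
    dX, dY, dH = np.asarray(c_T, dtype=float) - np.asarray(c_R, dtype=float)
    rho_xy = np.hypot(dX, dY)                         # sqrt(dX^2 + dY^2)
    rho_xyz = np.sqrt(dX**2 + dY**2 + dH**2)
    sin_az, cos_az = dY / rho_xy, dX / rho_xy         # cf. Equations (3)-(4)
    sin_ay, cos_ay = dH / rho_xyz, rho_xy / rho_xyz   # cf. Equations (6)-(7)
    R_z = np.array([[ cos_az, sin_az, 0.0],
                    [-sin_az, cos_az, 0.0],
                    [ 0.0,    0.0,    1.0]])
    R_y = np.array([[ cos_ay, 0.0, sin_ay],
                    [ 0.0,    1.0, 0.0   ],
                    [-sin_ay, 0.0, cos_ay]])
    return R_y @ R_z                                   # composite rotation, cf. Equation (9)

R = focal_axis_rotation((150e3, 20e3, 8e3), (0.0, 0.0, 6e3))
# R @ (c_T - c_R) now lies along the X-axis: its Y and Z components vanish.
```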
By substituting Equations (5) and (8) into Equation (1), the coordinates of the position after rotation can be obtained
$$\begin{bmatrix} x_C \\ y_C \\ z_C \end{bmatrix} = \mathbf{R}_y(a_y)\,\mathbf{R}_z(a_z)\begin{bmatrix} x_B \\ y_B \\ z_B \end{bmatrix} = \begin{bmatrix} \cos a_y \cos a_z & -\cos a_y \sin a_z & \sin a_y \\ \sin a_z & \cos a_z & 0 \\ -\sin a_y \cos a_z & \sin a_y \sin a_z & \cos a_y \end{bmatrix}\begin{bmatrix} x_B \\ y_B \\ z_B \end{bmatrix} \tag{9}$$
Subsequently, by substituting the coordinates obtained from Equation (9) into Equation (2) and setting z = 0, the formula for the bistatic clutter iso-range contour can be derived, where $\Delta Z = \frac{H_T + H_R}{2}$:
$$\frac{\big(x\cos a_y \cos a_z - y\cos a_y \sin a_z - (\Delta X \cos a_y \cos a_z - \Delta Y \cos a_y \sin a_z + \Delta Z \sin a_y)\big)^2}{a^2} + \frac{\big(x\sin a_z + y\cos a_z - (\Delta X \sin a_z + \Delta Y \cos a_z)\big)^2}{b^2} + \frac{\big(-x\sin a_y \cos a_z + y\sin a_y \sin a_z + (\Delta X \sin a_y \cos a_z - \Delta Y \sin a_y \sin a_z - \Delta Z \cos a_y)\big)^2}{b^2} = 1 \tag{10}$$
Using polar coordinates, any point on the ground can be represented as:
$$x = r\cos\theta, \qquad y = r\sin\theta \tag{11}$$
By substituting Equation (11) into Equation (10) and transforming it into polar coordinates, the equation can be simplified and rearranged to obtain:
$$r^2\left(b^2 M_1^2 + a^2 M_2^2 + a^2 M_3^2\right) - 2r\left(b^2 M_1 C_1 + a^2 M_2 C_2 + a^2 M_3 C_3\right) + b^2 C_1^2 + a^2 C_2^2 + a^2 C_3^2 - a^2 b^2 = 0 \tag{12}$$
The symbols in the above equation are defined as follows:
$$\begin{aligned} C_1 &= \Delta X \cos a_y \cos a_z - \Delta Y \cos a_y \sin a_z + \Delta Z \sin a_y \\ C_2 &= \Delta X \sin a_z + \Delta Y \cos a_z \\ C_3 &= \Delta X \sin a_y \cos a_z - \Delta Y \sin a_y \sin a_z - \Delta Z \cos a_y \\ M_1 &= \cos\theta \cos a_y \cos a_z - \sin\theta \cos a_y \sin a_z \\ M_2 &= \cos\theta \sin a_z + \sin\theta \cos a_z \\ M_3 &= \cos\theta \sin a_y \cos a_z - \sin\theta \sin a_y \sin a_z \end{aligned} \tag{13}$$
By simplifying the coefficients of Equation (12) and rearranging the equation, the following can be derived:
$$\begin{aligned} A_1 &= b^2 M_1^2 + a^2 M_2^2 + a^2 M_3^2 \\ A_2 &= -2\left(b^2 M_1 C_1 + a^2 M_2 C_2 + a^2 M_3 C_3\right) \\ A_3 &= b^2 C_1^2 + a^2 C_2^2 + a^2 C_3^2 - a^2 b^2 \end{aligned} \qquad A_1 r^2 + A_2 r + A_3 = 0 \tag{14}$$
Solving the above equation can yield the coordinates of the clutter patch:
$$r_1 = \frac{-A_2 + \sqrt{A_2^2 - 4A_1 A_3}}{2A_1}, \qquad r_2 = \frac{-A_2 - \sqrt{A_2^2 - 4A_1 A_3}}{2A_1} \tag{15}$$
$$(x_1, y_1, 0) = (r_1\cos\theta,\ r_1\sin\theta,\ 0), \qquad (x_2, y_2, 0) = (r_2\cos\theta,\ r_2\sin\theta,\ 0) \tag{16}$$
After conducting numerous experiments, we found that r 2 is always less than zero, and needs to be discarded.
In summary, when a bistatic range is specified, a unique polar radius can be obtained. By combining this with the angle θ , we can determine the coordinates of each clutter patch.
$$(x_i, y_i, 0) = (r_1\cos\theta,\ r_1\sin\theta,\ 0) \tag{17}$$
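Putting Equations (13)–(17) together, a minimal sketch of the per-azimuth patch computation could look as follows; the function name, and the convention that the sines/cosines of the two angles and the offsets ΔX, ΔY, ΔZ are precomputed (as in the earlier fragments), are assumptions of this example.

```python
import numpy as np

def clutter_patch(theta, a, b, sin_az, cos_az, sin_ay, cos_ay, dX, dY, dZ):
    """Ground coordinates of the clutter patch at azimuth theta on one iso-range ring."""
    # Equation (13): offset (C) and projection (M) coefficients
    C1 = dX * cos_ay * cos_az - dY * cos_ay * sin_az + dZ * sin_ay
    C2 = dX * sin_az + dY * cos_az
    C3 = dX * sin_ay * cos_az - dY * sin_ay * sin_az - dZ * cos_ay
    M1 = np.cos(theta) * cos_ay * cos_az - np.sin(theta) * cos_ay * sin_az
    M2 = np.cos(theta) * sin_az + np.sin(theta) * cos_az
    M3 = np.cos(theta) * sin_ay * cos_az - np.sin(theta) * sin_ay * sin_az
    # Equation (14): quadratic coefficients in the polar radius r
    A1 = b**2 * M1**2 + a**2 * M2**2 + a**2 * M3**2
    A2 = -2.0 * (b**2 * M1 * C1 + a**2 * M2 * C2 + a**2 * M3 * C3)
    A3 = b**2 * C1**2 + a**2 * C2**2 + a**2 * C3**2 - a**2 * b**2
    # Equations (15)-(17): keep the root r1; r2 is negative and discarded
    r1 = (-A2 + np.sqrt(A2**2 - 4.0 * A1 * A3)) / (2.0 * A1)
    return r1 * np.cos(theta), r1 * np.sin(theta), 0.0
```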
From Figure 1, we can see that the axial direction vector of the receiver’s array is $\mathbf{d}_r = (\cos\alpha_R, \sin\alpha_R, 0)$, the velocity direction vector is $\mathbf{v}_r = (\cos\beta_R, \sin\beta_R, 0)$, and the direction vectors from the transmitter and the receiver to the clutter patch are, respectively, $\mathbf{c}_t = (x_i - X_T, y_i - Y_T, -H_T)$ and $\mathbf{c}_r = (x_i - X_R, y_i - Y_R, -H_R)$.
Based on this, the expressions for the velocity cone angles and spatial cone angles of this clutter patch are:
$$\cos\varphi_R = \frac{\langle \mathbf{c}_r, \mathbf{d}_r \rangle}{\|\mathbf{c}_r\|\,\|\mathbf{d}_r\|} = \frac{(x_i - X_R)\cos\alpha_R + (y_i - Y_R)\sin\alpha_R}{\sqrt{(x_i - X_R)^2 + (y_i - Y_R)^2 + H_R^2}} \tag{18}$$
$$\cos\psi_R = \frac{\langle \mathbf{v}_r, \mathbf{c}_r \rangle}{\|\mathbf{v}_r\|\,\|\mathbf{c}_r\|} = \frac{(x_i - X_R)\cos\beta_R + (y_i - Y_R)\sin\beta_R}{\sqrt{(x_i - X_R)^2 + (y_i - Y_R)^2 + H_R^2}} \tag{19}$$
$$\cos\psi_T = \frac{\langle \mathbf{v}_t, \mathbf{c}_t \rangle}{\|\mathbf{v}_t\|\,\|\mathbf{c}_t\|} = \frac{(x_i - X_T)\cos\beta_T + (y_i - Y_T)\sin\beta_T}{\sqrt{(x_i - X_T)^2 + (y_i - Y_T)^2 + H_T^2}} \tag{20}$$
In the above expressions, $\cos\varphi_R$ represents the cosine of the spatial cone angle of the receiver, $\cos\psi_R$ represents the cosine of the velocity cone angle of the receiver, and $\cos\psi_T$ represents the cosine of the velocity cone angle of the transmitter. Assuming that the array is an equidistant linear array and the spacing between the elements is $d$, the spatial frequency of the clutter patch can be derived from the receiver’s spatial cone angle:
$$f_{s,i} = \frac{d\cos\varphi_R}{\lambda} \tag{21}$$
where λ is the wavelength, and d is the inter-element interval. Combining (19), (20), and the magnitudes of the velocities of the transmitter and receiver, the normalized Doppler frequency of the clutter patch can be determined:
$$f_{d,i} = \frac{v_t\cos\psi_T}{\lambda f_{PRF}} + \frac{v_r\cos\psi_R}{\lambda f_{PRF}} \tag{22}$$
With the derivations above, we can logically deduce the space–time steering vector of the clutter patch:
$$\mathbf{s}(f_{s,i}, f_{d,i}) = \mathbf{s}_t(f_{d,i}) \otimes \mathbf{s}_s(f_{s,i}) \tag{23}$$
where $\mathbf{s}_s(f_{s,i}) = \left[1, e^{j2\pi f_{s,i}}, \ldots, e^{j2\pi(N-1)f_{s,i}}\right]^T$ and $\mathbf{s}_t(f_{d,i}) = \left[1, e^{j2\pi f_{d,i}}, \ldots, e^{j2\pi(K-1)f_{d,i}}\right]^T$ denote the spatial and temporal steering vectors, respectively.
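A minimal Python sketch of Equations (21)–(23) is given below; the function name and the example frequencies are illustrative assumptions.

```python
import numpy as np

def space_time_steering(fs, fd, N, K):
    """Space-time steering vector of a clutter patch (N elements, K pulses)."""
    s_s = np.exp(1j * 2 * np.pi * fs * np.arange(N))   # spatial steering vector
    s_t = np.exp(1j * 2 * np.pi * fd * np.arange(K))   # temporal steering vector
    return np.kron(s_t, s_s)                            # Equation (23): s_t ⊗ s_s

# Example: fs from Equation (21), fd from Equation (22)
# fs = d * cos_phi_R / lam
# fd = (v_t * cos_psi_T + v_r * cos_psi_R) / (lam * f_PRF)
s = space_time_steering(fs=0.1, fd=-0.2, N=8, K=8)      # NK x 1 steering vector
```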
From the above derivation, it can be concluded that the clutter Doppler frequency in airborne bistatic radar is determined by both the transmitter and receiver, while the spatial frequency is solely determined by the receiver. This results in more complex space-time characteristics of the clutter. Figure 3 illustrates the space–time characteristics of the clutter under four different bistatic configurations. When using grid-based sparse recovery algorithms, a significant amount of clutter does not fall on the pre-divided space–time grid points, leading to severe grid mismatch and consequently degrading the algorithm’s performance.
Based on the Ward report [4], the clutter component at each range ring can be envisioned as a superposition of uniformly distributed and independent clutter patches. Consequently, it can be articulated as follows:
$$\mathbf{x}_c = \sum_{i=1}^{N_c} \alpha_i\, \mathbf{s}_t(f_{d,i}) \otimes \mathbf{s}_s(f_{s,i}) = \sum_{i=1}^{N_c} \alpha_i\, \mathbf{s}(f_{s,i}, f_{d,i}) \tag{24}$$
When the target is absent from the scene, the received space–time snapshot consists solely of the clutter signal and thermal noise:
$$\mathbf{x} = \mathbf{x}_c + \mathbf{n} \tag{25}$$
where the noise vector $\mathbf{n}$ is white Gaussian noise with zero mean and covariance matrix $E\{\mathbf{n}\mathbf{n}^H\} = \sigma^2\mathbf{I}$, and $\sigma^2$ denotes the noise variance.
The clutter plus noise covariance matrix (CNCM) is given by:
$$\mathbf{R} \triangleq E\left\{\mathbf{x}\mathbf{x}^H\right\} = \sum_{i=1}^{N_c} E\left\{|\alpha_i|^2\right\} \mathbf{s}(f_{s,i}, f_{d,i})\,\mathbf{s}^H(f_{s,i}, f_{d,i}) + \sigma^2\mathbf{I} = \mathbf{R}_c + \sigma^2\mathbf{I} \tag{26}$$
where $\mathbf{R}_c$ is the clutter covariance matrix. In practice, the CNCM is estimated by $\hat{\mathbf{R}} = \frac{1}{L}\sum_{l=1}^{L}\mathbf{x}_l\mathbf{x}_l^H$, where $L$ denotes the number of available IID training samples and $\mathbf{x}_l$ is the snapshot of the l-th training range ring. In accordance with the maximum signal-to-interference-plus-noise ratio (SINR) criterion [2], the sample matrix inversion (SMI) weight vector of the space–time adaptive processing (STAP) filter is:
$$\mathbf{w}_{SMI} = \frac{\hat{\mathbf{R}}^{-1}\mathbf{s}_T}{\mathbf{s}_T^H\hat{\mathbf{R}}^{-1}\mathbf{s}_T} \tag{27}$$
where $\mathbf{s}_T$ denotes the target spatial-temporal steering vector.
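For reference, a compact sketch of the sample CNCM estimate and the SMI weight of Equations (26) and (27) is given below. The small diagonal loading term is not part of Equation (27); it is only a numerical safeguard assumed for this sketch, since with few IID samples the sample matrix may be singular.

```python
import numpy as np

def smi_weight(X, s_T, diag_load=1e-6):
    """SMI weight vector from an NK x L matrix of training snapshots X."""
    NK, L = X.shape
    R_hat = (X @ X.conj().T) / L + diag_load * np.eye(NK)  # sample CNCM (loaded for stability)
    Ri_s = np.linalg.solve(R_hat, s_T)                     # R_hat^{-1} s_T
    return Ri_s / (s_T.conj() @ Ri_s)                      # Equation (27)
```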

2.2. ANM-STAP

Employing grid-based sparse recovery algorithms for the suppression of bistatic clutter often results in substantial performance degradation due to grid mismatch issues. To resolve these challenges at their root, it is imperative to utilize gridless sparse recovery algorithms. The concept of ANM-STAP was initially proposed in [16] as a solution to counteract the off-grid effect. In contrast to grid-based algorithms, the core principle of gridless SR-STAP is to represent the clutter subspace through a linear combination of selected atoms from a predefined, continuous spatial-temporal atom set. Based on (24), this approach utilizes an infinite set of atoms defined as follows:
$$\mathcal{A} \triangleq \left\{ \mathbf{s}(f_s, f_d)\,\boldsymbol{\beta}^T \;\middle|\; f_s, f_d \in [-0.5, 0.5],\ \|\boldsymbol{\beta}\|_2 = 1 \right\} \tag{28}$$
Reference [23] states that the pure clutter signal matrix, consisting of $L$ snapshots, can be succinctly represented by a few specific atoms within $\mathcal{A}$, as demonstrated by the following linear combination:
$$\mathbf{X}_c = \sum_{r=1}^{N_r} \mathbf{s}(f_{s,r}, f_{d,r})\left[\beta_{r1}, \beta_{r2}, \ldots, \beta_{rL}\right] = \sum_{r=1}^{N_r} \chi_r\, \mathbf{s}(f_{s,r}, f_{d,r})\,\boldsymbol{\beta}_r^T \tag{29}$$
where $N_r$ denotes the clutter rank, indicating that the clutter subspace is spanned by merely $N_r$ space–time steering vectors; here, we define $\boldsymbol{\beta}_r^0 = [\beta_{r1}, \beta_{r2}, \ldots, \beta_{rL}]^T$, $\chi_r = \|\boldsymbol{\beta}_r^0\|_2$, and $\boldsymbol{\beta}_r = \boldsymbol{\beta}_r^0 / \chi_r$. Within the ANM-STAP framework, the atomic norm of $\mathbf{X}_c$ relative to the atom set is defined by the following infimum:
$$\|\mathbf{X}_c\|_{\mathcal{A}} \triangleq \inf\left\{ \sum_r \chi_r \;\middle|\; \mathbf{X}_c = \sum_r \chi_r\, \mathbf{s}(f_{s,r}, f_{d,r})\,\boldsymbol{\beta}_r^T \right\} \tag{30}$$
which seeks the sparsest representation of X c over A .
The computation of the atomic norm is inherently complex. However, in the specific context of a uniform linear array (ULA) operating under a constant pulse repetition frequency, both spatial and temporal steering vectors manifest a Vandermonde structure. Exploiting the intrinsic Toeplitz-block-Toeplitz architecture of the CNCM, the calculation of the atomic norm delineated in (30) can be accurately executed via the following semidefinite programming (SDP) framework [15]:
$$\min_{\mathbf{Z}, \mathbf{U}, \mathbf{X}_c}\ \mathrm{Tr}(\mathbf{Z}) + \mathrm{Tr}\big(L\,T(\mathbf{U})\big) + \frac{\lambda_A}{2}\|\mathbf{X} - \mathbf{X}_c\|_F^2 \quad \mathrm{s.t.}\ \boldsymbol{\Theta} = \begin{bmatrix} \mathbf{Z} & \mathbf{X}_c^H \\ \mathbf{X}_c & L\,T(\mathbf{U}) \end{bmatrix} \succeq 0,\ \mathbf{Z} = \mathbf{Z}^H \tag{31}$$
In this formulation, $\lambda_A > 0$ is the regularization parameter, $\mathbf{Z} = \sum_{r=1}^{N_r}\boldsymbol{\beta}_r(\boldsymbol{\beta}_r)^H \in \mathbb{C}^{L\times L}$, and $\mathbf{U}$ refers to a complex matrix of dimensions $(2N-1)\times(2K-1)$. Additionally, $T(\mathbf{U}) \in \mathbb{C}^{NK\times NK}$ denotes the parametric representation of $\mathbf{R}_c$ that capitalizes on the block-Toeplitz characteristic:
$$T(\mathbf{U}) = \begin{bmatrix} \mathbf{U}_0 & \mathbf{U}_{-1} & \cdots & \mathbf{U}_{-N+1} \\ \mathbf{U}_1 & \mathbf{U}_0 & \cdots & \mathbf{U}_{-N+2} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{U}_{N-1} & \mathbf{U}_{N-2} & \cdots & \mathbf{U}_0 \end{bmatrix} \tag{32}$$
$$\mathbf{U}_n = \begin{bmatrix} U_{n,0} & U_{n,-1} & \cdots & U_{n,-K+1} \\ U_{n,1} & U_{n,0} & \cdots & U_{n,-K+2} \\ \vdots & \vdots & \ddots & \vdots \\ U_{n,K-1} & U_{n,K-2} & \cdots & U_{n,0} \end{bmatrix} \tag{33}$$
where each inner block $\mathbf{U}_n$ ($-N < n < N$) is a $K \times K$ Toeplitz matrix defined from the n-th row of $\mathbf{U}$.
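The Toeplitz-block-Toeplitz map of Equations (32) and (33) can be sketched as follows; the lag indexing of $\mathbf{U}$ (rows over $n = -(N-1), \ldots, N-1$ and columns over $k = -(K-1), \ldots, K-1$) is an assumption of this illustration.

```python
import numpy as np

def T_op(U, N, K):
    """Map a (2N-1) x (2K-1) lag array U onto the NK x NK Toeplitz-block-Toeplitz matrix T(U)."""
    T = np.zeros((N * K, N * K), dtype=complex)
    for p in range(N):
        for q in range(N):
            n = p - q                                    # block lag, Equation (32)
            for i in range(K):
                for j in range(K):
                    k = i - j                            # within-block lag, Equation (33)
                    T[p * K + i, q * K + j] = U[n + N - 1, k + K - 1]
    return T
```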
The above optimization problem in (31) is termed ANM-STAP and can be solved using the alternating direction method of multipliers (ADMM) algorithm [24]. Once $\hat{\mathbf{U}}$ is obtained, the estimated CNCM can be reconstructed by
$$\hat{\mathbf{R}}_{sr} = \frac{1}{L}\sum_{l=1}^{L} \hat{\boldsymbol{\Gamma}}\,\mathrm{diag}\left(\left|\hat{\boldsymbol{\Gamma}}^{-1}\mathbf{x}_l\right|^2\right)\hat{\boldsymbol{\Gamma}}^H + \sigma^2\mathbf{I} \tag{34}$$
where $\hat{\boldsymbol{\Gamma}} \in \mathbb{C}^{NK\times NK}$ is a matrix whose columns are composed of the eigenvectors of $T(\hat{\mathbf{U}})$. Substituting this into (27) results in the derivation of the weight vector for the ANM-STAP methodology.
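A minimal sketch of the reconstruction in Equation (34) is shown below; it assumes $T(\hat{\mathbf{U}})$ has already been formed (e.g., with the T_op helper sketched above) and that the noise power $\sigma^2$ is available.

```python
import numpy as np

def reconstruct_cncm(T_U_hat, X, sigma2):
    """Rebuild the CNCM from the eigenvectors of T(U_hat) and the training snapshots (columns of X)."""
    NK, L = X.shape
    _, Gamma = np.linalg.eigh(T_U_hat)            # eigenvector matrix of T(U_hat)
    R = np.zeros((NK, NK), dtype=complex)
    for l in range(L):
        c = np.linalg.solve(Gamma, X[:, l])       # Gamma^{-1} x_l
        R += Gamma @ np.diag(np.abs(c) ** 2) @ Gamma.conj().T
    return R / L + sigma2 * np.eye(NK)            # Equation (34)
```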
In this section, we introduced the signal model of the airborne bistatic radar and the related concepts of gridless SR-STAP, laying the groundwork for our subsequent application of deep unfolding networks to solve (31).

3. Unfolding Gridless Sparse Recovery Bistatic STAP Algorithms into Deep Networks

Moving forward, in this section, we will propose an innovative deep unfolding-based gridless sparse recovery bistatic STAP net (DUGLSR-BSTAP-Net) whose clutter suppression performance surpasses that of some current mainstream sparse recovery algorithms.

3.1. Gridless Sparse Recovery Bistatic STAP (GLSR-BSTAP)

First of all, we describe how to apply the ADMM [24] to solve (31). The augmented Lagrangian expression for Equation (31) is defined as follows.
$$\mathcal{L}(\mathbf{X}_c, \mathbf{U}, \mathbf{Z}, \boldsymbol{\Theta}, \boldsymbol{\Lambda}) = \mathrm{Tr}\big(L\,T(\mathbf{U})\big) + \mathrm{Tr}(\mathbf{Z}) + \frac{\lambda_A}{2}\|\mathbf{X} - \mathbf{X}_c\|_F^2 + \left\langle \boldsymbol{\Lambda},\ \boldsymbol{\Theta} - \begin{bmatrix} \mathbf{Z} & \mathbf{X}_c^H \\ \mathbf{X}_c & L\,T(\mathbf{U}) \end{bmatrix} \right\rangle + \frac{\rho}{2}\left\| \boldsymbol{\Theta} - \begin{bmatrix} \mathbf{Z} & \mathbf{X}_c^H \\ \mathbf{X}_c & L\,T(\mathbf{U}) \end{bmatrix} \right\|_F^2 \tag{35}$$
where $\boldsymbol{\Lambda} \in \mathbb{C}^{(NK+L)\times(NK+L)}$ is the Lagrangian multiplier and $\rho > 0$ is a penalty parameter. Before beginning the iterations, both $\boldsymbol{\Theta}$ and $\boldsymbol{\Lambda}$ are initialized as zero matrices.
(1)
Update X C t + 1 , U t + 1 , Z t + 1 :
$$\left(\mathbf{X}_C^{t+1}, \mathbf{U}^{t+1}, \mathbf{Z}^{t+1}\right) = \arg\min_{\mathbf{X}_C, \mathbf{U}, \mathbf{Z}} \mathcal{L}\left(\mathbf{X}_C, \mathbf{U}, \mathbf{Z}, \boldsymbol{\Theta}^t, \boldsymbol{\Lambda}^t\right) \tag{36}$$
We partition the matrices Θ t and Λ t as follows:
$$\boldsymbol{\Theta}^t = \begin{bmatrix} \boldsymbol{\Theta}_Z^t & (\boldsymbol{\Theta}_{X_C}^t)^H \\ \boldsymbol{\Theta}_{X_C}^t & \boldsymbol{\Theta}_{T(U)}^t \end{bmatrix} \tag{37}$$
$$\boldsymbol{\Lambda}^t = \begin{bmatrix} \boldsymbol{\Lambda}_Z^t & (\boldsymbol{\Lambda}_{X_C}^t)^H \\ \boldsymbol{\Lambda}_{X_C}^t & \boldsymbol{\Lambda}_{T(U)}^t \end{bmatrix} \tag{38}$$
where $\boldsymbol{\Theta}_Z^t$ and $\boldsymbol{\Lambda}_Z^t$ are $L \times L$ matrices, $\boldsymbol{\Theta}_{X_C}^t$ and $\boldsymbol{\Lambda}_{X_C}^t$ are $NK \times L$ matrices, and $\boldsymbol{\Theta}_{T(U)}^t$ and $\boldsymbol{\Lambda}_{T(U)}^t$ are $NK \times NK$ matrices.
Calculating the partial derivatives of Equation (36) with respect to X c , Z , and T ( U ) , respectively, yields the following results
$$\frac{\partial \mathcal{L}(\mathbf{X}_C, \mathbf{U}, \mathbf{Z}, \boldsymbol{\Theta}^t, \boldsymbol{\Lambda}^t)}{\partial \mathbf{X}_C} = -\lambda_A\mathbf{X} - 2\boldsymbol{\Lambda}_{X_C}^t - 2\rho\boldsymbol{\Theta}_{X_C}^t + (2\rho + \lambda_A)\mathbf{X}_C \tag{39}$$
$$\frac{\partial \mathcal{L}(\mathbf{X}_C, \mathbf{U}, \mathbf{Z}, \boldsymbol{\Theta}^t, \boldsymbol{\Lambda}^t)}{\partial \mathbf{Z}} = \mathbf{I}_L - \boldsymbol{\Lambda}_Z^t - \rho\boldsymbol{\Theta}_Z^t + \rho\mathbf{Z} \tag{40}$$
$$\frac{\partial \mathcal{L}(\mathbf{X}_C, \mathbf{U}, \mathbf{Z}, \boldsymbol{\Theta}^t, \boldsymbol{\Lambda}^t)}{\partial T(\mathbf{U})} = L\,\mathbf{I}_{NK} - L\,\boldsymbol{\Lambda}_{T(U)}^t - \rho L\,\boldsymbol{\Theta}_{T(U)}^t + \rho L^2\, T(\mathbf{U}) \tag{41}$$
Setting (39)–(41) to zero, we then have
$$\mathbf{X}_C^{t+1} = \frac{1}{2\rho + \lambda_A}\left(\lambda_A\mathbf{X} + 2\boldsymbol{\Lambda}_{X_C}^t + 2\rho\boldsymbol{\Theta}_{X_C}^t\right) \tag{42}$$
$$\mathbf{Z}^{t+1} = \rho^{-1}\boldsymbol{\Lambda}_Z^t + \boldsymbol{\Theta}_Z^t - \rho^{-1}\mathbf{I}_L \tag{43}$$
$$\big(T(\mathbf{U})\big)^{t+1} = (\rho L)^{-1}\boldsymbol{\Lambda}_{T(U)}^t + L^{-1}\boldsymbol{\Theta}_{T(U)}^t - (\rho L)^{-1}\mathbf{I}_{NK} \tag{44}$$
By leveraging Equation (44) along with the previously established block-Toeplitz property of ( T ( U ) ) t + 1 , we can derive the following
$$\mathbf{U}^{t+1} = T^*\left((\rho L)^{-1}\boldsymbol{\Lambda}_{T(U)}^t + L^{-1}\boldsymbol{\Theta}_{T(U)}^t\right) - (\rho L)^{-1}\mathbf{E}_{N,K} \tag{45}$$
where $\mathbf{E}_{N,K}$ is a $(2N-1)\times(2K-1)$ matrix with the (N, K)-th element being 1 and the others being 0. Additionally, $T^*(\cdot)$ denotes the adjoint of the mapping $T(\cdot)$ in (32); it maps an $NK \times NK$ matrix to a $(2N-1)\times(2K-1)$ matrix.
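The three closed-form primal updates of Equations (42), (43), and (45) are summarized in the sketch below. The adjoint map $T^*(\cdot)$ is implemented here by averaging each spatial-temporal lag of the input matrix, which matches the unit coefficient of $\mathbf{E}_{N,K}$ in Equation (45); this normalization, together with the function and variable names, is an assumption of the sketch.

```python
import numpy as np

def T_adj(M, N, K):
    """Adjoint of T(.): average an NK x NK matrix back onto the (2N-1) x (2K-1) lag grid."""
    U = np.zeros((2 * N - 1, 2 * K - 1), dtype=complex)
    cnt = np.zeros((2 * N - 1, 2 * K - 1))
    for p in range(N):
        for q in range(N):
            for i in range(K):
                for j in range(K):
                    U[p - q + N - 1, i - j + K - 1] += M[p * K + i, q * K + j]
                    cnt[p - q + N - 1, i - j + K - 1] += 1
    return U / cnt

def primal_updates(X, Lam_Xc, Th_Xc, Lam_Z, Th_Z, Lam_TU, Th_TU, lam_A, rho, N, K, L):
    """One set of primal updates: Equations (42), (43) and (45)."""
    E_NK = np.zeros((2 * N - 1, 2 * K - 1)); E_NK[N - 1, K - 1] = 1.0
    Xc_new = (lam_A * X + 2 * Lam_Xc + 2 * rho * Th_Xc) / (2 * rho + lam_A)   # Eq. (42)
    Z_new = Lam_Z / rho + Th_Z - np.eye(L) / rho                              # Eq. (43)
    U_new = T_adj(Lam_TU / (rho * L) + Th_TU / L, N, K) - E_NK / (rho * L)    # Eq. (45)
    return Xc_new, Z_new, U_new
```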
(2)
Update Θ t + 1 :
$$\boldsymbol{\Theta}^{t+1} = \arg\min_{\boldsymbol{\Theta} \succeq 0} \mathcal{L}\left(\mathbf{X}_C^{t+1}, \mathbf{U}^{t+1}, \mathbf{Z}^{t+1}, \boldsymbol{\Theta}, \boldsymbol{\Lambda}^t\right) = \arg\min_{\boldsymbol{\Theta} \succeq 0} \frac{\rho}{2}\left\| \boldsymbol{\Theta} - \begin{bmatrix} \mathbf{Z}^{t+1} & (\mathbf{X}_C^{t+1})^H \\ \mathbf{X}_C^{t+1} & L\,T(\mathbf{U}^{t+1}) \end{bmatrix} + \rho^{-1}\boldsymbol{\Lambda}^t \right\|_F^2 \tag{46}$$
Clearly, the optimal solution to Equation (46) in the absence of the constraint $\boldsymbol{\Theta} \succeq 0$ is
$$\boldsymbol{\Theta}_{un}^{t+1} = \begin{bmatrix} \mathbf{Z}^{t+1} & (\mathbf{X}_C^{t+1})^H \\ \mathbf{X}_C^{t+1} & L\,T(\mathbf{U}^{t+1}) \end{bmatrix} - \rho^{-1}\boldsymbol{\Lambda}^t \tag{47}$$
Let $\boldsymbol{\Theta}_{un}^{t+1} = \mathbf{G}\,\mathrm{diag}(\{\delta_g\})\,\mathbf{G}^{-1}$, $g = 1, 2, \ldots, NK + L$, and set all negative eigenvalues to zero, so that we obtain the updating result
$$\begin{aligned} \boldsymbol{\Theta}_{un}^{t+1} &= \begin{bmatrix} \mathbf{Z}^{t+1} & (\mathbf{X}_C^{t+1})^H \\ \mathbf{X}_C^{t+1} & L\,T(\mathbf{U}^{t+1}) \end{bmatrix} - \rho^{-1}\boldsymbol{\Lambda}^t \\ \boldsymbol{\Theta}_{un}^{t+1} &= \mathbf{G}\,\mathrm{diag}(\{\delta_g\})\,\mathbf{G}^{-1}, \quad g = 1, 2, \ldots, NK + L \\ \boldsymbol{\Theta}^{t+1} &= \mathbf{G}\,\mathrm{diag}(\{\delta_g\}_+)\,\mathbf{G}^{-1} \end{aligned} \tag{48}$$
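In code, the projection of Equation (48) onto the positive semidefinite cone is a single eigendecomposition followed by clipping of the negative eigenvalues; the explicit Hermitian symmetrization below is only a numerical safeguard assumed for this sketch.

```python
import numpy as np

def psd_project(Theta_un):
    """Project a matrix onto the positive semidefinite cone, cf. Equation (48)."""
    Theta_un = (Theta_un + Theta_un.conj().T) / 2     # enforce Hermitian symmetry numerically
    d, G = np.linalg.eigh(Theta_un)                   # eigenvalues d, eigenvectors G
    return (G * np.maximum(d, 0.0)) @ G.conj().T      # zero out negative eigenvalues
```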
(3)
Update Λ t + 1 :
Finally, we can update Λ t + 1 by
$$\boldsymbol{\Lambda}^{t+1} = \boldsymbol{\Lambda}^t + \rho\left(\boldsymbol{\Theta}^{t+1} - \begin{bmatrix} \mathbf{Z}^{t+1} & (\mathbf{X}_C^{t+1})^H \\ \mathbf{X}_C^{t+1} & L\,T(\mathbf{U}^{t+1}) \end{bmatrix}\right) \tag{49}$$
The aforementioned algorithm exhibits two primary limitations. Firstly, the regularization parameter $\lambda_A$ and penalty parameter $\rho$ require predefined settings, which, if not judiciously chosen, can lead to substantial declines in algorithm performance. Secondly, throughout the iterative execution of the algorithm, $\lambda_A$ and $\rho$ remain fixed, thereby inhibiting the adaptive recalibration of these parameters in response to suboptimal initial settings. The proposed DUGLSR-BSTAP-Net addresses both issues, effectively achieving the suppression of bistatic clutter.

3.2. DUGLSR-BSTAP-Net

Figure 4 presents a clear and accessible depiction of the deep unfolding network. Each iteration of the traditional algorithm (left) is represented as one layer of the network (right). Concatenating these layers forms a deep neural network, and as data pass through this network, it effectively replicates the process of running the iterative algorithm multiple times. Furthermore, algorithmic parameters, including model parameters and regularization coefficients, are transformed into network parameters. The network can be trained via back-propagation, enabling the learning of model parameters from real-world training datasets. Consequently, the trained network can be viewed as a parameter-optimized algorithm, which effectively overcomes the two issues present in the GLSR-BSTAP algorithm by enabling the adaptive correction of suboptimal parameter presets and deploying distinct parameters for each network layer.
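The core of this idea can be sketched in a few lines of PyTorch: each layer carries its own learnable copies of $\lambda_A$ and $\rho$, so back-propagation can assign different values to different layers. Only the reconstruction update of Equation (42) is shown as a representative; the class and variable names, and the way the layers are stacked, are illustrative assumptions (the full set of sub-layers is detailed in Section 3.2).

```python
import torch
import torch.nn as nn

class UnfoldedLayer(nn.Module):
    """One unfolded ADMM iteration with its own learnable lambda_A and rho."""
    def __init__(self, lam_A_init=0.5, rho_init=0.01):
        super().__init__()
        self.lam_A = nn.Parameter(torch.tensor(lam_A_init))   # learnable per layer
        self.rho = nn.Parameter(torch.tensor(rho_init))        # learnable per layer

    def forward(self, X, Lam_Xc, Th_Xc):
        # Equation (42), now parameterized by this layer's own lam_A and rho
        return (self.lam_A * X + 2 * Lam_Xc + 2 * self.rho * Th_Xc) / (2 * self.rho + self.lam_A)

layers = nn.ModuleList([UnfoldedLayer() for _ in range(35)])   # a 35-layer unfolding
```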
We define a DUGLSR-BSTAP-Net to solve Equation (31). As depicted in Figure 5, each layer of the network comprises five sub-layers, corresponding to (42), (43), (45), (48), and (49), respectively, named Xc_sublayer, Z_sublayer, U_sublayer, Θ_sublayer, and Λ_sublayer.
In the (t + 1)-th layer of the network, each sub-layer extracts the data required for its computations from the output “Data Package” of the t-th layer, facilitating the forward propagation in the t-th layer.
(1) Reconstruction sublayer—Xc_sublayer(t+1): Using the outputs $\boldsymbol{\Lambda}_{X_C}^t$ from sub-layer Λ_sublayer(t) and $\boldsymbol{\Theta}_{X_C}^t$ from sub-layer Θ_sublayer(t) of the t-th layer, along with the received signal $\mathbf{X}$ as inputs, the output $\mathbf{X}_C^{t+1}$ is updated accordingly.
$$\mathbf{X}_C^{t+1} = \frac{1}{2\rho + \lambda_A}\left(\lambda_A\mathbf{X} + 2\boldsymbol{\Lambda}_{X_C}^t + 2\rho\boldsymbol{\Theta}_{X_C}^t\right) \tag{50}$$
(2) Auxiliary variable update sublayer—Z_sublayer(t+1): Taking the outputs $\boldsymbol{\Lambda}_Z^t$ and $\boldsymbol{\Theta}_Z^t$ from sub-layers Λ_sublayer(t) and Θ_sublayer(t), respectively, in the t-th layer, the subsequent output $\mathbf{Z}^{t+1}$ is then updated.
$$\mathbf{Z}^{t+1} = \rho^{-1}\boldsymbol{\Lambda}_Z^t + \boldsymbol{\Theta}_Z^t - \rho^{-1}\mathbf{I}_L \tag{51}$$
(3) Parametric CCM update sublayer—U_sublayer(t+1): Using the outputs $\boldsymbol{\Lambda}_{T(U)}^t$ from sub-layer Λ_sublayer(t) and $\boldsymbol{\Theta}_{T(U)}^t$ from sub-layer Θ_sublayer(t) of the t-th layer, the output $\mathbf{U}^{t+1}$ is updated accordingly.
$$\mathbf{U}^{t+1} = T^*\left((\rho L)^{-1}\boldsymbol{\Lambda}_{T(U)}^t + L^{-1}\boldsymbol{\Theta}_{T(U)}^t\right) - (\rho L)^{-1}\mathbf{E}_{N,K} \tag{52}$$
(4) Nonlinear activation sublayer—Θ_sublayer(t+1): Taking the outputs $\mathbf{X}_C^{t+1}$, $\mathbf{Z}^{t+1}$, $T(\mathbf{U}^{t+1})$ from the sub-layers Xc_sublayer(t+1), Z_sublayer(t+1), U_sublayer(t+1), respectively, in the (t + 1)-th layer and $\boldsymbol{\Lambda}^t$ from Λ_sublayer(t) in the t-th layer, the subsequent output $\boldsymbol{\Theta}^{t+1}$ is then updated
$$\begin{aligned} \boldsymbol{\Theta}_{un}^{t+1} &= \begin{bmatrix} \mathbf{Z}^{t+1} & (\mathbf{X}_C^{t+1})^H \\ \mathbf{X}_C^{t+1} & L\,T(\mathbf{U}^{t+1}) \end{bmatrix} - \rho^{-1}\boldsymbol{\Lambda}^t \\ \boldsymbol{\Theta}_{un}^{t+1} &= \mathbf{G}\,\mathrm{diag}(\{\delta_g\})\,\mathbf{G}^{-1}, \quad g = 1, 2, \ldots, NK + L \\ \boldsymbol{\Theta}^{t+1} &= \mathbf{G}\,\mathrm{diag}(\{\delta_g\}_+)\,\mathbf{G}^{-1} \end{aligned} \tag{53}$$
(5) Lagrangian multiplier update sublayer—Λ_sublayer(t+1): Taking the outputs $\mathbf{X}_C^{t+1}$, $\mathbf{Z}^{t+1}$, $T(\mathbf{U}^{t+1})$ from sub-layers Xc_sublayer(t+1), Z_sublayer(t+1), U_sublayer(t+1), respectively, in the (t + 1)-th layer and $\boldsymbol{\Lambda}^t$ from Λ_sublayer(t) in the t-th layer, the subsequent output $\boldsymbol{\Lambda}^{t+1}$ is then updated
$$\boldsymbol{\Lambda}^{t+1} = \boldsymbol{\Lambda}^t + \rho\left(\boldsymbol{\Theta}^{t+1} - \begin{bmatrix} \mathbf{Z}^{t+1} & (\mathbf{X}_C^{t+1})^H \\ \mathbf{X}_C^{t+1} & L\,T(\mathbf{U}^{t+1}) \end{bmatrix}\right) \tag{54}$$
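For concreteness, one full layer of the network can be sketched as below, chaining the five sub-layers of Equations (50)–(54). The sketch reuses the T_op, T_adj, and psd_project helpers from the earlier code fragments, and the "state" dictionary stands in for the "Data Package" passed between layers; these names, and the NumPy formulation, are assumptions of this illustration rather than the authors' released implementation.

```python
import numpy as np

def duglsr_layer(state, X, lam_A, rho, N, K, L):
    """Forward pass of one DUGLSR-BSTAP-Net layer; lam_A and rho are this layer's learned parameters."""
    E_NK = np.zeros((2 * N - 1, 2 * K - 1)); E_NK[N - 1, K - 1] = 1.0
    Lam, Th = state["Lam"], state["Th"]
    Lam_Z, Lam_Xc, Lam_TU = Lam[:L, :L], Lam[L:, :L], Lam[L:, L:]
    Th_Z, Th_Xc, Th_TU = Th[:L, :L], Th[L:, :L], Th[L:, L:]

    Xc = (lam_A * X + 2 * Lam_Xc + 2 * rho * Th_Xc) / (2 * rho + lam_A)      # (1) Eq. (50)
    Z = Lam_Z / rho + Th_Z - np.eye(L) / rho                                  # (2) Eq. (51)
    U = T_adj(Lam_TU / (rho * L) + Th_TU / L, N, K) - E_NK / (rho * L)        # (3) Eq. (52)
    LTU = L * T_op(U, N, K)

    blk = np.block([[Z, Xc.conj().T], [Xc, LTU]])
    Theta = psd_project(blk - Lam / rho)                                      # (4) Eq. (53)
    Lam_new = Lam + rho * (Theta - blk)                                       # (5) Eq. (54)
    return {"Lam": Lam_new, "Th": Theta, "U": U}
```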

3.3. Network Structure Analysis

Compared to traditional convolutional neural networks, the proposed DUGLSR-BSTAP-Net has the following similarities and differences.
(1): Unlike convolutional neural networks, the proposed DUGLSR-BSTAP-Net inherently integrates physical domain knowledge into the CNCM reconstruction. The operation in the reconstruction sublayer is a crucial step for performing the inverse mapping from a received signal to the reconstructed clutter component. The auxiliary variable update sublayer contains information related to the coefficients of the atoms that span the clutter subspace. The parametric CCM update sublayer exists to compute $T(\mathbf{U})$, which is the parameterized representation of the CCM. Similar to CNNs, the nonlinear activation sublayer introduces a non-linear activation capability to the network when computing the constraint term $\boldsymbol{\Theta}$.
(2): Additionally, the proposed network features skip connections, as shown in Figure 6. For example, the nonlinear activation sublayer and the Lagrangian multiplier update sublayer form short connections between the reconstruction sublayer, the auxiliary variable update sublayer, and the parametric CCM update sublayer across various layers. Moreover, the Lagrangian multiplier update sublayer naturally connects to itself through gradient-based updates. This approach is similar to the residual network [25], where skip connections have been demonstrated to be effective for training very deep neural networks. Our network also utilizes skip connections, but these are inherently determined by the ADMM iterative algorithm.

3.4. Generating the Training Dataset

This network employs a supervised training approach, utilizing the training dataset obtained through the simulation of the model introduced in Section 2, which includes both inputs and labels. In practical applications, it is assumed that certain characteristics of the radar system are adequately known. Although the reflectivity of the ground at each clutter patch is naturally random, it adheres to a specific distribution influenced by factors such as terrain type, radar frequency, and polarization. We divided each bistatic equal range ellipse into 720 clutter patches, corresponding to θ in Equation (17), ranging from 0° to 360°. Using Equation (17), we calculated the spatial coordinates of each clutter patch. With the known positions of the transmitter and receiver, we applied Equations (18)–(20) to determine the array cone angle and velocity cone angle of each clutter patch relative to the receiver, as well as the velocity cone angle relative to the transmitter. This allowed us to derive the spatial frequency (Equation (21)) and Doppler frequency (Equation (22)), and subsequently compute the space–time steering vector (Equation (23)). Using the radar equation, we calculated the power of each clutter patch, with a −30 dB power loss for those located in the receiver’s backlobe. To simulate the fluctuations of clutter patches in a real environment, we assigned each patch a complex random number following a standard normal distribution. We simulated 3200 samples from different range gates, encompassing four different bistatic configurations. Each range gate represents a different elevation angle. At the same time, due to the highly non-stationary nature of airborne bistatic clutter, the clutter characteristics vary across different range gates, thus avoiding uniformity in the training data. Among them, 2560 samples are used for training, and 640 samples are used for testing.
Based on this, we can form the input dataset $\{X_q\}_{q=1}^{Q}$, where $Q$ is the number of training samples. The goal of STAP is to effectively suppress clutter, and the improvement factor (IF) serves as a valuable measure for evaluating the clutter suppression performance of an algorithm:
$$\mathrm{IF} = \frac{\left|\mathbf{w}^H\mathbf{s}_T(f_s, f_d)\right|^2 \mathrm{Tr}(\mathbf{R})}{\left(\mathbf{w}^H\mathbf{R}\mathbf{w}\right) NK} \tag{55}$$
where $\mathbf{R}$ is the clairvoyant CNCM of the CUT, which is known a priori via simulation, and $\mathbf{s}_T(f_s, f_d)$ is the practical target spatial-temporal steering vector.
Therefore, we use the IF calculated with the weight vector derived from the clairvoyant CNCM through Equation (27) as our label dataset $\{Y_q\}_{q=1}^{Q}$.

3.5. Network Initialization and Training Method

For the network initialization settings, we set the initial values to $\lambda_A = 0.5$ and $\rho = 0.01$ in each layer. The normalized mean squared error (NMSE) is utilized as the loss function. Once the received signal has passed through the DUGLSR-BSTAP-Net, we obtain the output $T(\hat{\mathbf{U}})$. Using Equation (34), we compute the CNCM and subsequently derive the weights through Equation (27). These weights are then applied in Equation (55) to calculate the IF curves $\{F_q(\lambda_A, \rho; X_q)\}_{q=1}^{Q}$. This result is incorporated into the loss function, after which the parameters are updated using the Adam optimizer [26], completing one training cycle.
$$\mathcal{L}(\lambda_A, \rho) = \frac{1}{Q}\sum_{q=1}^{Q}\frac{\left\|Y_q - F_q(\lambda_A, \rho; X_q)\right\|_F^2}{\left\|Y_q\right\|_F^2} \tag{56}$$
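A minimal PyTorch sketch of this loss and of one training step is given below; "net" is assumed to map a batch of snapshots to the predicted IF curves $F_q(\lambda_A, \rho; X_q)$, the labels $Y$ hold the clairvoyant-CNCM IF curves, and the learning rate is an illustrative assumption.

```python
import torch

def nmse_loss(F_pred, Y):
    """NMSE of Equation (56), averaged over the batch."""
    num = torch.sum(torch.abs(F_pred - Y) ** 2, dim=-1)   # ||Y_q - F_q||_F^2
    den = torch.sum(torch.abs(Y) ** 2, dim=-1)            # ||Y_q||_F^2
    return torch.mean(num / den)

def train_step(net, optimizer, X, Y):
    """One training cycle: forward, loss, back-propagation, Adam update."""
    optimizer.zero_grad()
    loss = nmse_loss(net(X), Y)
    loss.backward()        # back-propagate through all unfolded layers
    optimizer.step()       # Adam update of the per-layer lambda_A and rho
    return loss.item()

# usage: optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
```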

4. Numerical Simulations

In this section, we evaluate the performance of the newly proposed DUGLSR-BSTAP-Net algorithm through extensive simulation experiments. We conduct thorough comparisons against various advanced SR-STAP algorithms, namely the grid-based MIAA [9], MSBL [11], MOMP [27], and MFOCUSS [28], the gridless ANM [29], and the optimal STAP filter. The available number of IID training samples is twelve. The main simulation parameters are listed in Table 1. The DUGLSR-BSTAP-Net was trained on an Intel(R) Xeon(R) Gold 6258R CPU @ 2.70 GHz (two processors) and an NVIDIA Quadro RTX 8000 GPU. The Intel Xeon CPU is manufactured by Intel Corporation, located in Santa Clara, CA, USA. The NVIDIA Quadro RTX 8000 GPU is manufactured by NVIDIA Corporation, also located in Santa Clara, CA, USA. The training environment utilized Python 3.8 and PyTorch 1.8.

4.1. NMSE during Training

In Figure 7, we present the variation in NMSE during 1000 training iterations for networks with T = 20, T = 25, and T = 35 layers. It is evident that as the number of training iterations increases, the NMSE gradually decreases, ensuring the convergence of the network. As the number of network layers increases further, the computational complexity grows while the clutter suppression performance shows no significant enhancement. Therefore, in the subsequent comparative experiments, we set the number of layers of the DUGLSR-BSTAP-Net to 35, which offers an acceptable convergence speed and computational complexity.

4.2. Recovered Capon Spectrum

Figure 8 illustrates the comparison of clutter Capon spectrum recovery effects under the configuration of two aircraft flying in a collinear formation ($\mathbf{c}_T = (150\,\mathrm{km}, 0, 8\,\mathrm{km})$, $\mathbf{c}_R = (0, 0, 8\,\mathrm{km})$, $\alpha_T = 0°$, $\beta_T = 0°$, $\alpha_R = 0°$, $\beta_R = 0°$) using different methods. It is apparent that the clutter distribution in airborne bistatic configurations is more complex compared to monostatic configurations, leading to severe grid mismatch issues. This causes grid-based sparse recovery methods to struggle with accurately estimating the CNCM. To ensure a fair comparison, we also iterate the ANM algorithm 35 times, setting the parameters to $\lambda_A = 0.5$ and $\rho = 0.01$, identical to the initial setting for the DUGLSR-BSTAP-Net.

4.3. Adaptive Pattern

To further elucidate the performance of the DUGLSR-BSTAP-Net, we present the adaptive patterns generated by various algorithms in this part, conducted under the condition of two aircraft flying in a collinear formation ($\mathbf{c}_T = (150\,\mathrm{km}, 0, 8\,\mathrm{km})$, $\mathbf{c}_R = (0, 0, 8\,\mathrm{km})$, $\alpha_T = 0°$, $\beta_T = 0°$, $\alpha_R = 0°$, $\beta_R = 0°$). In this scenario, we assumed the target to be situated at a normalized spatial frequency of 0 with a normalized Doppler frequency of −0.12. The results, depicted in Figure 9, demonstrate that the proposed algorithm effectively maintains the high gain of the target while simultaneously creating a notch at the clutter ridge. This results in the suppression of both main lobe and sidelobe clutter. In contrast, the MFOCUSS and MOMP algorithms fail to accurately form the notch at the clutter ridge, leading to the distorted adaptive patterns shown in Figure 9.

4.4. Comparison of Eigenspectra

The eigenspectra of the optimum CNCM and the estimated CNCMs are shown in Figure 10. It is evident that the eigenspectra decline progressively for all algorithms. Figure 10 also demonstrates that the large eigenvalues of the CNCM eigenspectra estimated by our proposed algorithm are the closest to those of the ideal covariance matrix. Since the large eigenvalues represent the clutter components, it can be said that the clutter subspace estimated by the proposed algorithm is more accurate than those of the other algorithms.

4.5. IF of Different Algorithms

In this subsection, we present comparative improvement factor curves of various algorithms under ideal conditions and with array gain-phase (GP) error (set to ±0.05, ±2°) to evaluate the clutter suppression performance. Figure 11a shows that, due to severe grid mismatch issues under bistatic conditions, the performance of grid-based sparse recovery algorithms significantly deteriorates, resulting in wider nulls that fail to maintain the target direction gain effectively. Additionally, the gridless ANM algorithm, with only 35 iterations, does not perform as well in clutter suppression as our proposed DUGLSR-BSTAP-Net algorithm. Therefore, we can conclude that the data and model-driven deep neural network approach overcomes some of the shortcomings of traditional iterative models. From Figure 11b, we can see that when the GP error is present, the performance of the ANM algorithm deteriorates significantly, whereas the DUGLSR-BSTAP-Net experiences less performance loss. This indicates that the trained network is more robust compared to traditional algorithms, mitigating the sensitivity to errors that was a drawback of the original algorithm.
As shown in the above figure, we already observed the comparison of the improvement factor (IF) when the number of iterations for the ANM algorithm is the same as the number of layers in the DUGLSR-BSTAP-Net. Next, we will perform more iterations of the ANM algorithm to compare the clutter suppression performance.
From Figure 12a, we can see that, in an ideal case, 150 iterations of the ANM algorithm achieve the same clutter suppression performance as the 35-layer DUGLSR-BSTAP-Net. However, when the Gp error is present in Figure 12b, even after 150 iterations, the ANM algorithm still exhibits a slight performance gap compared to the proposed network. This indicates that the trained network can achieve a better performance with lower computational complexity, regardless of whether Gp error is present.

4.6. Target Detection Probability under Different SNR

The ultimate goal of STAP is to achieve target detection. Therefore, after comparing clutter suppression performance, we need to compare how the target detection probability of different algorithms varies with the signal-to-noise ratio (SNR) under a constant false alarm rate condition. The false alarm probability is fixed at $10^{-4}$. In Figure 13a,b, we compare the target detection probability of different algorithms with a slow-moving target (normalized Doppler frequency of 0.08) and a fast-moving target (normalized Doppler frequency of 0.25) added to the CUT in the ideal case, respectively. The results indicate that, in an airborne bistatic radar system, the DUGLSR-BSTAP-Net and ANM algorithms exhibit better target detection performance compared to the grid-based SR-STAP methods.
In Figure 14a,b, we compare the target detection probability of different algorithms with a slow-moving target (normalized Doppler frequency of 0.08) and a fast-moving target (normalized Doppler frequency of 0.2) added to the CUT in the non-ideal case, respectively. When the GP error is present, the target detection performance of the ANM algorithm deteriorates significantly. However, the DUGLSR-BSTAP-Net algorithm maintains the performance closest to that of the optimal filter in detecting both slow-moving and fast-moving targets.

4.7. Computational Efficiency

In this experiment, we assess the computational efficiency of various algorithms by measuring their average running time in Table 2. The simulation results for ANM-35, ANM-150, MSBL, MIAA, MOMP, and MFOCUSS were obtained using MATLAB 2022b, while the results for DUGLSR-BSTAP-Net were obtained using Python 3.8 and PyTorch 1.8. Both were installed on a computer equipped with an AMD Ryzen 7 5800H processor with Radeon Graphics at 3.20 GHz.

5. Conclusions

In this article, we provide a detailed derivation of the signal model for airborne bistatic clutter and unfold ANM into a deep neural network. After extensive data training, the initially suboptimal model parameters were corrected, enhancing the clutter suppression performance. Each layer of the network contains five sub-layers, each with its own physical meaning, thereby improving the interpretability of the network. The simulation results demonstrate that the proposed algorithm achieves excellent clutter suppression and target detection performance.

Author Contributions

Conceptualization, W.H. and T.W.; investigation, W.H.; methodology, W.H. and K.L.; project administration, T.W.; software, W.H.; supervision, K.L.; visualization, W.H.; writing—original draft, W.H.; writing—review and editing, W.H. All authors have read and agreed to the published version of the manuscript.

Funding

Key Technologies Research and Development Program, Grant Number: 2021YFA1000400.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Brennan, L.E.; Reed, L.S. Theory of Adaptive Radar. IEEE Trans. Aerosp. Electron. Syst. 1973, AES-9, 237–252. [Google Scholar] [CrossRef]
  2. Klemm, R. Principles of Space-Time Adaptive Processing. In Radar, Sonar and Navigation; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006. [Google Scholar]
  3. Melvin, W.L. Chapter 12—Space-Time Adaptive Processing for Radar. In Academic Press Library in Signal Processing; Sidiropoulos, N.D., Gini, F., Chellappa, R., Theodoridis, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2014; Volume 2, pp. 595–665. ISBN 9780123965004. ISSN 2351-9819. [Google Scholar] [CrossRef]
  4. Ward, J. Space-time adaptive processing for airborne radar. In Proceedings of the 1995 International Conference on Acoustics, Speech, and Signal Processing, Detroit, MI, USA, 9–12 May 1995. [Google Scholar]
  5. Reed, I.S.; Mallett, J.D.; Brennan, L.E. Rapid Convergence Rate in Adaptive Arrays. IEEE Trans. Aerosp. Electron. Syst. 1974, AES-10, 853–863. [Google Scholar] [CrossRef]
  6. Maria, S.; Fuchs, J.-J. Application of the Global Matched Filter to Stap Data an Efficient Algorithmic Approach. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006; p. 4. [Google Scholar] [CrossRef]
  7. Sun, K.; Zhang, H.; Li, G.; Meng, H.; Wang, X. A novel STAP algorithm using sparse recovery technique. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; pp. V-336–V-339. [Google Scholar] [CrossRef]
  8. Sun, K.; Meng, H.; Wang, Y.; Wang, X. Direct data domain STAP using sparse representation of clutter spectrum. Signal Process. 2011, 91, 2222–2236. [Google Scholar] [CrossRef]
  9. Yang, Z.; Li, X.; Wang, H.; Jiang, W. Adaptive clutter suppression based on iterative adaptive approach for airborne radar. Signal Process. 2013, 93, 3567–3577. [Google Scholar] [CrossRef]
  10. Yang, Z.; Li, X.; Wang, H.; Jiang, W. On Clutter Sparsity Analysis in Space–Time Adaptive Processing Airborne Radar. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1214–1218. [Google Scholar] [CrossRef]
  11. Duan, K.; Wang, Z.; Xie, W.; Chen, H.; Wang, Y. Sparsity-based stap algorithm with multiple measurement vectors via sparse bayesian learning strategy for airborne radar. IET Signal Process. 2017, 11, 544–553. [Google Scholar] [CrossRef]
  12. Liu, C.; Wang, T.; Zhang, S.; Ren, B. Clutter suppression based on iterative reweighted methods with multiple measurement vectors for airborne radar. IET Radar Sonar Navig. 2022, 16, 1446–1459. [Google Scholar] [CrossRef]
  13. Cui, W.; Wang, T.; Wang, D.; Liu, C. An Improved Iterative Reweighted STAP Algorithm for Airborne Radar. Remote Sens. 2023, 15, 130. [Google Scholar] [CrossRef]
  14. Candès, E.J.; Fernandez-Granda, C. Towards a mathematical theory of super-resolution. Commun. Pure Appl. Math. 2014, 67, 906–956. [Google Scholar] [CrossRef]
  15. Tang, G.; Bhaskar, B.N.; Shah, P.; Recht, B. Compressed Sensing Off the Grid. IEEE Trans. Inf. Theory 2013, 59, 7465–7490. [Google Scholar] [CrossRef]
  16. Feng, W.; Guo, Y.; Zhang, Y.; Gong, J. Airborne radar space time adaptive processing based on atomic norm minimization. Signal Process. 2018, 148, 31–40. [Google Scholar] [CrossRef]
  17. Zhang, T.; Hu, Y.; Lai, R. Gridless super-resolution sparse recovery for non-sidelooking STAP using reweighted atomic norm minimization. Multidimens. Syst. Signal Process. 2021, 32, 1259–1276. [Google Scholar] [CrossRef]
  18. Li, Z.; Wang, T.; Su, Y. A fast and gridless stap algorithm based on mixed-norm minimisation and the alternating direction method of multipliers. IET Radar Sonar Navig. 2021, 15, 1340–1352. [Google Scholar] [CrossRef]
  19. Cui, W.; Wang, T.; Wang, D.; Zhang, X. A novel sparse recovery-based space-time adaptive processing algorithm based on gridless sparse Bayesian learning for non-sidelooking airborne radar. IET Radar Sonar Navig. 2023, 17, 1380–1390. [Google Scholar] [CrossRef]
  20. Duan, K.; Chen, H.; Xie, W.; Wang, Y. Deep learning for high-resolution estimation of clutter angle-Doppler spectrum in STAP. IET Radar Sonar Navig. 2022, 16, 193–207. [Google Scholar] [CrossRef]
  21. Zou, B.; Wang, X.; Feng, W.; Lu, F.; Zhu, H. Memory-Augmented Autoencoder-Based Nonhomogeneous Detector for Airborne Radar Space-Time Adaptive Processing. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3502405. [Google Scholar] [CrossRef]
  22. Zhu, H.; Feng, W.; Feng, C.; Zou, B.; Lu, F. Deep unfolding based space-time adaptive processing method for airborne radar. J. Radars 2022, 11, 676–691. [Google Scholar] [CrossRef]
  23. Li, Y.; Chi, Y. Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors. IEEE Trans. Signal Process. 2016, 64, 1257–1269. [Google Scholar] [CrossRef]
  24. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers; Foundations and Trends® in Machine Learning: Hanover, MA, USA, 2011. [Google Scholar] [CrossRef]
  25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  26. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  27. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; IEEE: Piscataway, NJ, USA, 1993; pp. 40–44. [Google Scholar]
  28. Gorodnitsky, I.F.; Rao, B.D. Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Trans. Signal Process. 1997, 45, 600–616. [Google Scholar] [CrossRef]
  29. Li, Z.; Wang, T. ADMM-Based Low-Complexity Off-Grid Space-Time Adaptive Processing Methods. IEEE Access 2020, 8, 206646–206658. [Google Scholar] [CrossRef]
Figure 1. Geometric configuration of airborne bistatic radar system.
Figure 2. Bistatic constant range ellipsoids. (a) Before transformations. (b) After transformations.
Figure 3. The clutter ridge distributions for four different bistatic configurations: (a) transmitter and receiver fly in parallel; (b) transmitter and receiver fly collinearly; (c) transmitter and receiver fly in a cross pattern; and (d) transmitter and receiver fly perpendicularly.
Figure 4. A clear depiction of the deep unfolding.
Figure 5. An example of DUGLSR-BSTAP-Net with three layers.
Figure 6. The data flow graph of DUGLSR-BSTAP-Net.
Figure 7. Training loss of different network depth.
Figure 8. Comparison of the recovered capon spectrum. (a) optimal; (b) proposed net (DUGLSR−BSTAP−Net); (c) ANM−35; (d) MSBL; (e) MOMP; (f) MIAA; (g) MFOCUSS.
Figure 9. Comparison of adaptive pattern. (a) Optimal; (b) Proposed Net (DUGLSR−BSTAP−Net); (c) ANM−35; (d) MSBL; (e) MOMP; (f) MIAA; and (g) MFOCUSS.
Figure 10. CNCM eigenspectra estimated by different algorithms under two aircraft flying in a collinear formation.
Figure 11. Comparison results of IF under bistatic collinear configuration, which is the same as in Section 4.2. (a) Ideal case; and (b) With GP error (±0.05, ±2°).
Figure 12. Comparison of the clutter performance between ANM with different iteration counts and proposed net (DUGLSR−BSTAP−Net). (a) Ideal case. (b) With GP error (±0.05, ±2°).
Figure 13. Comparison of target detection performance in ideal case. (a) Slow-moving target. (b) Fast-moving target.
Figure 14. Comparison of target detection performance with GP-error (±0.05, ±2°). (a) Slow-moving target. (b) Fast-moving target.
Table 1. Parameters of airborne bistatic radar system.

Parameter | Value | Unit
Number of transmitter and receiver array elements N | 8 | -
Number of pulses K | 8 | -
Pulse repetition frequency f_PRF | 2000 | Hz
Wavelength λ | 0.3 | m
Bandwidth | 2.5 | MHz
Velocity of transmitter and receiver | 120 | m/s
CNR | 40 | dB
Table 2. Average running time.

Algorithm | Time | Unit
ANM-35 | 0.283 | s
ANM-150 | 0.847 | s
MSBL | 14.425 | s
MIAA | 2.100 | s
MOMP | 0.025 | s
MFOCUSS | 3.687 | s
DUGLSR-BSTAP-Net | 0.304 | s
