Article

Omega-KA-Net: A SAR Ground Moving Target Imaging Network Based on Trainable Omega-K Algorithm and Sparse Optimization

1 School of Information and Navigation, Air Force Engineering University, Xi’an 710077, China
2 Key Laboratory for Information Science of Electromagnetic Waves, Ministry of Education, Fudan University, Shanghai 200433, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(7), 1664; https://doi.org/10.3390/rs14071664
Submission received: 25 February 2022 / Revised: 26 March 2022 / Accepted: 28 March 2022 / Published: 30 March 2022

Abstract

The ground moving target (GMT) is defocused due to unknown motion parameters in synthetic aperture radar (SAR) imaging. Although the conventional Omega-K algorithm (Omega-KA) has been proven applicable to GMT imaging, its disadvantages are slow imaging speed, obvious sidelobe interference, and high computational complexity. To solve the above problems, a SAR-GMT imaging network based on a trainable Omega-KA and sparse optimization is proposed. Specifically, we propose a two-dimensional (2-D) sparse imaging model deduced from the Omega-KA focusing process. Then, a recurrent neural network (RNN) based on an iterative optimization algorithm is built to learn the trainable parameters of the Omega-KA by an off-line supervised training method, and the solving process of the sparse imaging model is mapped to each layer of the RNN. The proposed trainable Omega-KA network (Omega-KA-net) forms a new GMT imaging method that can be applied to high-quality imaging under down-sampling and a low signal-to-noise ratio (SNR) while substantially saving imaging time. Experiments on simulated and measured data demonstrate that the Omega-KA-net is superior to the conventional algorithms in terms of GMT imaging quality and time.

1. Introduction

Ground moving target (GMT) imaging has become an important research topic with the development of synthetic aperture radar (SAR) imaging technology [1,2,3,4]. A high-quality GMT image is crucial for indicating a moving target and extracting the moving target state for both civilian and military applications [5]. In the conventional SAR imaging method for a stationary target, the Doppler frequency shift and the azimuth modulation frequency variation caused by the GMT motion parameters lead to an azimuth position offset and to defocusing of the imaging results, respectively [6,7,8,9,10]. Therefore, it is difficult to obtain a focused GMT image by processing the GMT echo signal with the conventional SAR imaging method for a stationary target. The key technical problem to be solved in GMT imaging, an important application of SAR, is to compensate the residual phase error caused by the motion of a non-cooperative GMT [11]. Since high-quality GMT imaging results can be further utilized to recognize and classify targets in engineering applications, it is valuable to investigate imaging methods for high-quality GMT focused images.
Defocused GMT images can be refocused through accurate compensation of phase error, which can be achieved by the exact estimation of motion parameters [12,13,14,15,16,17,18]. The exhaustive search is a simple and effective method for motion parameter estimation. Ref. [13] assumes that the phase error can be compensated by using the equivalent azimuth and range velocity, which are estimated by the two-dimensional (2-D) exhaustive search method (ESM) to achieve the maximum contrast of the GMT image. However, the equivalent velocity cannot reflect the actual motion parameters of GMT, which would result in the model mismatch problem. Chen et al. [14] proposed a parametric sparse representation method for GMT imaging with a range migration algorithm (RMA). This method uses an iterative minimum entropy algorithm (IMEA) to estimate the motion parameter and has a higher tolerance for the model mismatch problem than the method in [13]. On this basis, a modified minimum entropy algorithm (MMEA) is proposed in [15], which improves the speed of convergence and requires fewer iterations than IMEA to obtain a satisfactory estimated parameter. Then, the authors use the Omega-K algorithm (Omega-KA) to achieve effective results in GMT imaging. However, the above algorithms have two problems. One is repeated iterations and a long imaging time; the other is that the robustness of the results may decrease in large-scale scenes.
Recently, GMT imaging technology based on the compressed sensing (CS) algorithm and sparse optimization theory [19,20,21] has developed rapidly. The CS algorithm brings the possibility of reconstructing sparse SAR images with fewer measurements and has been applied to GMT imaging in the last few years. Kang et al. [19] proposed a novel SAR-GMT imaging algorithm based on the CS framework, in which CS theory is used to decompose the SAR signal into a set of polynomial basis functions to determine the motion parameters. Ref. [20] addresses the GMT imaging and velocity estimation problems for GMT indication applications: the GMTs are extracted via a sparsity-based iterative decomposition algorithm, and the unknown velocity parameters of the GMTs are subsequently estimated under sparsity constraints. However, the above methods based on CS and sparse optimization have two shortcomings. One is the large number of parameters and the high computational complexity; the other is the significant sidelobe interference in the GMT imaging results.
With the development of artificial intelligence (AI) technology, deep learning (DL) has been widely applied in SAR moving target focusing and recognition [22,23,24]. The convolutional neural network (CNN) is a representative DL network with a deep structure and convolution computation. Lu et al. proposed a GMT imaging method based on a CNN and the range Doppler algorithm in [22], where the U-net structure is improved to provide good GMT reconstruction. As another representative DL network, the recurrent neural network (RNN) can mine temporal and semantic information from a large amount of data with sequence characteristics. Due to the powerful computing and abstract mapping capabilities of DL, the shortcomings of conventional CS-based SAR imaging methods can be addressed by an RNN framework [25,26,27,28]. An approach to SAR imaging based on a recurrent auto-encoder network is proposed in [25], which can obtain focused images under the condition of uncertain phase. Ref. [28] also proposes an end-to-end DL algorithm for SAR imaging based on an RNN. Instead of arranging the 2-D echo data into a vector as the network input, the algorithm in [28] performs signal processing in the 2-D data domain, which makes SAR imaging of large-scale scenes possible. Due to the uncertainty of the GMT imaging scene and the difficulty of GMT phase compensation, however, the RNN has rarely been applied to GMT imaging.
To solve the difficulties of slow imaging speed, obvious sidelobe interference, and high computational complexity in conventional GMT imaging methods, a novel trainable Omega-KA network (Omega-KA-net) based on sparse optimization is proposed in this paper for GMT imaging. The Omega-KA has been utilized for squint circular trace scanning SAR imaging [29], parallel bistatic forward-looking SAR imaging [30], and maneuvering high-squint-mode SAR imaging [31]. Differing from previous work where the motion parameters are estimated by the ESM, a minimum entropy algorithm, or sparsity constraints, the method proposed in this paper achieves accurate estimation of the motion parameters through network training. Firstly, the Omega-KA-based GMT imaging method is derived based on the 2-D SAR echo signal model. Then, a 2-D sparse imaging model deduced from the Omega-KA focusing process is proposed. On this basis, the sparse imaging model and the iterative soft thresholding algorithm (ISTA) are incorporated within an RNN framework. To reduce the influence of the vanishing gradient, the trainable parameters of the RNN are learned by a supervised, incremental training method. Finally, the GMT focused images are reconstructed through a network feedforward operation based on the trained network parameters. Experimental results based on simulated and measured data show that the Omega-KA-net outperforms the conventional RMA, the ESM in [13], and the IMEA in [14] in terms of GMT imaging quality and time. To avoid the interference of ground clutter in complex ground scenes, the measured data of ship targets are adopted in this paper, which is conducive to further analyzing and verifying the performance of the Omega-KA-net. The simulations also show that, compared with the conventional algorithms, the Omega-KA-net can provide high-quality imaging results under down-sampling and a low signal-to-noise ratio (SNR), while reducing the computational complexity and saving imaging time substantially.
The rest of this study is organized as follows. Section 2 reviews the Omega-KA for GMT imaging. Section 3 formulates the 2-D sparse imaging model and the novel GMT imaging network. In Section 4, the performance of the proposed method is evaluated through some simulation results. Section 5 presents the conclusion.

2. SAR Imaging of GMT

2.1. SAR Echo Signal Model

Figure 1 shows the geometry configuration for airborne SAR and a GMT in the 2-D slant range plane. In an actual scene there are many targets within the SAR swath; however, a single point target is generally sufficient for deriving the SAR signal equation.
It is assumed that the SAR platform flies with a constant velocity $v$ along the azimuth direction (x-axis) and that the squint angle is $\theta_{sq}$. $P$ is a GMT in the illumination area of the radar beam, and its range and azimuth velocities are $v_r$ and $v_x$, respectively. The distance from the SAR platform to the target $P$ is $R(\eta)$, which changes with the azimuth time $\eta$.
In order to establish the range model, we suppose that the coordinates of the SAR projection in the ground scene are (0, 0) when $\eta = 0$. At the same time, the beam center and the azimuth direction of target $P$ intersect at position $C$, which is located at $(R_c\cos\theta_{sq},\ R_c\sin\theta_{sq})$, and the distance from the target $P$ to the position $C$ is $x_0$. When $\eta = \eta_s$, the coordinates of the SAR platform in the ground scene are $(0,\ v\eta_s)$, so we obtain:
$$R(\eta_s) = \sqrt{\left[v\eta_s - \left(x_0 + R_c\sin\theta_{sq} + v_x\eta_s\right)\right]^2 + \left(v_r\eta_s + R_c\cos\theta_{sq}\right)^2} \tag{1}$$
Suppose that the waveform transmitted by the SAR is a chirp signal. Based on the range model above, the baseband echo signal of the point target can be deduced [31] (here, we omit the range and azimuth envelopes to simplify further analysis):
$$s_r(\tau,\eta_s) = \exp\left\{-j\,\frac{4\pi f_c R(\eta_s)}{c} + j\pi K_r\left[\tau - \frac{2R(\eta_s)}{c}\right]^2\right\} \tag{2}$$
where $\tau$ is the range time, $f_c$ is the carrier frequency, $c$ is the speed of light, and $K_r$ represents the chirp rate of the transmitted range pulse. It is noted that the SAR echo is a complex signal.
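To make the signal model concrete, the following NumPy sketch simulates the baseband echo of a single point target directly from (1) and (2). It is a minimal sketch: the platform, geometry, and velocity values are illustrative assumptions, not the experimental settings used later in the paper.

```python
# Minimal point-target echo simulation following Eqs. (1)-(2).
import numpy as np

c, fc = 3e8, 10e9                                   # speed of light, carrier frequency
Tp = 1.2e-6                                         # pulse duration
Kr = 75e6 / Tp                                      # chirp rate = bandwidth / duration
v, theta_sq = 150.0, np.deg2rad(0.0)                # platform velocity, squint angle
Rc, x0, vr, vx = 5000.0, 0.0, 7.0, 13.0             # target geometry and velocities

eta = np.linspace(-0.5, 0.5, 1024)                  # azimuth (slow) time, s
tau = 2 * Rc / c + np.linspace(-Tp, Tp, 512)        # range (fast) time window, s

# Instantaneous slant range R(eta_s) from Eq. (1)
R = np.sqrt((v * eta - (x0 + Rc * np.sin(theta_sq) + vx * eta)) ** 2
            + (vr * eta + Rc * np.cos(theta_sq)) ** 2)

# Baseband echo of Eq. (2); range/azimuth envelopes omitted as in the text
T = np.meshgrid(tau, eta)[0]                        # fast-time grid, shape (1024, 512)
RR = np.tile(R[:, None], (1, tau.size))             # slant range per azimuth bin
sr = np.exp(-1j * 4 * np.pi * fc * RR / c
            + 1j * np.pi * Kr * (T - 2 * RR / c) ** 2)
print(sr.shape)                                     # (1024, 512) complex echo matrix
```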

2.2. GMT Imaging Method Based on Omega-KA

The Omega-KA performs signal processing in the 2-D frequency domain and has mature applications in SAR-GMT imaging [14,15]. Transforming (2) into the 2-D frequency domain via the fast Fourier transform (FFT) and applying the principle of stationary phase, we obtain:
$$S_r(f_r,f_a) = \exp\left\{-j\left[\frac{2\pi f_a\alpha}{v_e^2} - \frac{\pi f_r^2}{K_r} - \frac{4\pi R_e\cos\theta_{sq}}{c}\sqrt{(f_c+f_r)^2 - \frac{c^2}{4v_e^2}f_a^2}\right]\right\} \tag{3}$$
where $f_r$ and $f_a$ are the range frequency and azimuth frequency, respectively, and $v_e$, $R_e$, and $\alpha$ are three unknowns to be determined. Therefore, an equation system composed of three equations and three unknowns can be obtained, whose solution is as follows:
$$\begin{cases} v_e = \sqrt{(v-v_x)^2 + v_r^2}\\[4pt] R_e = \dfrac{\left(x_0 + R_c\sin\theta_{sq}\right)v_r\cos\theta_{sq} + R_c(v-v_x)}{v_e}\\[4pt] \alpha = \left(x_0 + R_c\sin\theta_{sq}\right)(v-v_x) - R_c v_r\cos\theta_{sq} \end{cases} \tag{4}$$
In order to reduce the impacts of range frequency modulation, range migration, range-azimuth coupling, and azimuth frequency modulation, a bulk compression factor in the 2-D frequency domain is given by:
$$H_1(f_r,f_a) = \exp\left\{-j\left[\frac{2\pi R_{\mathrm{ref}}\sin\theta_{sq}}{v}f_a + \frac{\pi f_r^2}{K_r} + \frac{4\pi R_{\mathrm{ref}}\cos\theta_{sq}}{c}\sqrt{(f_c+f_r)^2 - \frac{c^2}{4v^2}f_a^2}\right]\right\} \tag{5}$$
where Rref is the reference range of the beam center. Then, multiplying (3) by (5), the signal after the compensation is expressed as:
$$\begin{aligned} S_1(f_r,f_a) &= S_r(f_r,f_a)\,H_1(f_r,f_a)\\ &= \exp\left\{-j\left[2\pi f_a\left(\frac{\alpha}{v_e^2} + \frac{R_{\mathrm{ref}}\sin\theta_{sq}}{v}\right) - \frac{4\pi R_e\cos\theta_{sq}}{c}\sqrt{(f_c+f_r)^2 - \frac{c^2}{4v_e^2}f_a^2}\right.\right.\\ &\qquad\left.\left. + \frac{4\pi R_{\mathrm{ref}}\cos\theta_{sq}}{c}\sqrt{(f_c+f_r)^2 - \frac{c^2}{4v^2}f_a^2}\right]\right\} \end{aligned} \tag{6}$$
The focusing quality will be affected by the velocity of the GMT and by the distance between the GMT and the reference position. Accurate Stolt interpolation can eliminate the defocusing caused by distance, while the defocusing caused by velocity must be handled by compensating the residual phase it generates. In (6), the second exponential term contains $v_e$, which is related to the unknown velocity of the GMT, so the defocusing of the image mainly comes from the influence of the unknown velocity parameter $\Lambda = 1/v_e^2$, which must be estimated before compensation.
To compensate for the residual phase generated by the unknown velocity, a phase compensation factor is established according to the estimated parameter Λ in the 2-D frequency domain, which is given by:
$$H_2^{(\Lambda)}(f_r,f_a) = \exp\left\{-j\,\frac{4\pi R_{\mathrm{ref}}\cos\theta_{sq}}{c}\left(\sqrt{(f_c+f_r)^2 - \frac{c^2 f_a^2}{4}\Lambda} - \sqrt{(f_c+f_r)^2 - \frac{c^2}{4v^2}f_a^2}\right)\right\} \tag{7}$$
After compensating the residual phase generated by the unknown velocity, the signal is expressed as:
$$\begin{aligned} S_2(f_r,f_a) &= S_1(f_r,f_a)\,H_2^{(\Lambda)}(f_r,f_a)\\ &= \exp\left\{-j\left[2\pi f_a\left(\alpha\Lambda + \frac{R_{\mathrm{ref}}\sin\theta_{sq}}{v}\right) - \frac{4\pi\left(R_e - R_{\mathrm{ref}}\right)\cos\theta_{sq}}{c}\sqrt{(f_c+f_r)^2 - \frac{c^2 f_a^2}{4}\Lambda}\right]\right\} \end{aligned} \tag{8}$$
Then, in order to complete the differential focusing, a Stolt interpolation function is given by:
$$\sqrt{(f_c+f_r)^2 - \frac{c^2 f_a^2}{4}\Lambda} = f_c + \bar{f}_r \tag{9}$$
and the signal after implementing Stolt interpolation can be expressed as:
$$S_{2\_\mathrm{stolt}}(\bar{f}_r,f_a) = \exp\left\{-j\left[2\pi f_a\left(\alpha\Lambda + \frac{R_{\mathrm{ref}}\sin\theta_{sq}}{v}\right) - \frac{4\pi\left(R_e - R_{\mathrm{ref}}\right)\cos\theta_{sq}}{c}\left(f_c + \bar{f}_r\right)\right]\right\} \tag{10}$$
At last, a 2-D inverse FFT (IFFT) is applied after the aforementioned procedures, and the GMT will be well focused. Figure 2 shows the flowchart of the Omega-KA-based GMT imaging method.
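As a rough guide to this flowchart, the sketch below strings together the bulk compression of (5), the residual phase compensation of (7), the Stolt mapping of (9) (approximated by 1-D linear interpolation per azimuth bin), and the final 2-D IFFT. The function name and argument conventions are assumptions of this sketch, and the frequency axes `fr` and `fa` are assumed to be fftshift-ordered (monotonically increasing).

```python
# Sketch of the Omega-KA focusing chain in Figure 2 (not the paper's exact code).
import numpy as np

def omega_ka_focus(sr, fr, fa, fc, Kr, v, Lam, Rref, theta_sq, c=3e8):
    FR, FA = np.meshgrid(fr, fa)                     # 2-D range/azimuth frequency grids
    S = np.fft.fftshift(np.fft.fft2(sr))             # echo in the 2-D frequency domain

    root_v = np.sqrt((fc + FR) ** 2 - (c * FA) ** 2 / (4 * v ** 2))
    H1 = np.exp(-1j * (2 * np.pi * Rref * np.sin(theta_sq) * FA / v
                       + np.pi * FR ** 2 / Kr
                       + 4 * np.pi * Rref * np.cos(theta_sq) / c * root_v))
    S1 = S * H1                                      # bulk compression, Eqs. (5)-(6)

    root_L = np.sqrt((fc + FR) ** 2 - (c * FA) ** 2 / 4 * Lam)
    H2 = np.exp(-1j * 4 * np.pi * Rref * np.cos(theta_sq) / c * (root_L - root_v))
    S2 = S1 * H2                                     # residual phase, Eqs. (7)-(8)

    # Stolt mapping of Eq. (9): resample each azimuth bin so that
    # sqrt((fc+fr)^2 - c^2 fa^2 Lambda / 4) = fc + fr_bar on a uniform grid
    S2s = np.empty_like(S2)
    for k in range(S2.shape[0]):
        fr_bar = root_L[k] - fc
        S2s[k] = (np.interp(fr, fr_bar, S2[k].real)
                  + 1j * np.interp(fr, fr_bar, S2[k].imag))
    return np.fft.ifft2(np.fft.ifftshift(S2s))       # 2-D IFFT -> focused GMT image
```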

3. GMT Imaging Network Based on Trainable Omega-KA and Sparse Optimization

In this section, we develop a trainable Omega-KA-net, in which the unknown velocity parameter $\Lambda$ and all other parameters of the Omega-KA can be self-learned through network training. This helps the GMT imaging method gain better generalization performance when prior information about the imaging scene is limited and the radar echo data are incomplete or noise-corrupted. Moreover, once all the parameters of the network are accurately estimated, the whole imaging process can be regarded as a network feedforward operation, which requires much less time than the Omega-KA.
These notations will be used in this section: the matrix forms of time-domain and frequency-domain signals are denoted by bold lower-case and bold upper-case letters, respectively. The scattering coefficient matrix is denoted by a bold Greek lower-case letter, $\boldsymbol{\sigma}$. $\mathbf{A}^T$, $\mathbf{A}^*$, and $\mathbf{A}^H$ denote the transpose, conjugate, and Hermitian transpose of matrix $\mathbf{A}$, respectively.

3.1. GMT Imaging Network

3.1.1. Implementation Scheme of Imaging-Net

In SAR-GMT imaging, it is of great significance to accurately estimate the unknown velocity of GMT. The minimum image entropy is used as the criterion for estimating the unknown velocity parameter Λ in [14,15,16]. Compared with the conventional methods, the network training method has better efficiency and accuracy in the estimation of the unknown velocity parameter. Moreover, it is more applicable in the case of velocity error and low SNR.
Suppose that the numbers of sampling points along the range and azimuth directions are M and N, respectively. The ideal point target images and multiple point target echoes can be obtained through the simulation of coordinate points, from which the training set of the network is constructed. Currently, there are many studies on DL algorithms, among which supervised and unsupervised learning are two commonly adopted learning methods [32]. Due to the unknown motion parameters, it is hard to achieve GMT focusing by estimating $\Lambda$ in the echo domain. Therefore, the unsupervised training method cannot achieve satisfactory results in GMT imaging, and the supervised training method is applied in this paper. The implementation scheme of the Omega-KA-net algorithm is shown in Figure 3.

3.1.2. 2-D Sparse Imaging Model Based on Omega-KA

Sparse optimization theory has been utilized for SAR motion compensation [6] and for the estimation of moving target motion parameters [15]. From (6), the SAR signal after bulk compression can be written in general matrix form as:
$$\mathbf{S}_1 = \mathbf{F}_r\,\mathbf{s}_r\,\mathbf{F}_x \odot \mathbf{H}_1 \tag{11}$$
where $\mathbf{H}_1\in\mathbb{C}^{M\times N}$ is the bulk compression matrix defined by the bulk compression factor $H_1(f_r,f_a)$ in (5), $\mathbf{s}_r\in\mathbb{C}^{M\times N}$ is the echo data matrix defined by the baseband echo signal $s_r(\tau,\eta_s)$ in (2), $\mathbf{F}_r\in\mathbb{C}^{M\times M}$ and $\mathbf{F}_x\in\mathbb{C}^{N\times N}$ are the range FFT matrix and azimuth FFT matrix, respectively, and $(\mathbf{a}\odot\mathbf{b})$ denotes the Hadamard product of $\mathbf{a}$ and $\mathbf{b}$. The phase compensation matrix $\mathbf{H}_2^{(\Lambda)}\in\mathbb{C}^{M\times N}$, defined by the phase compensation factor $H_2^{(\Lambda)}(f_r,f_a)$ in (7), can be obtained through network training. From (8), the signal after compensating the residual phase generated by the unknown velocity can be written in matrix form as:
$$\mathbf{S}_2 = \mathbf{S}_1 \odot \mathbf{H}_2^{(\Lambda)} \tag{12}$$
Then, we suppose that there is a Stolt interpolation operator, denoted as $\mathrm{H}_{\mathrm{stolt}}(\cdot)$, which describes the mapping in (9). The signal matrix after implementing the Stolt interpolation in (10) can then be written as:
$$\mathbf{S}_{2\_\mathrm{stolt}} = \mathrm{H}_{\mathrm{stolt}}(\mathbf{S}_2) \tag{13}$$
Therefore, the reconstructed scattering coefficient $\hat{\boldsymbol{\sigma}}\in\mathbb{C}^{M\times N}$ can be obtained in compact form by transforming (13) via the 2-D IFFT, and it can be derived from (11)–(13) as follows:
$$\hat{\boldsymbol{\sigma}} = \mathrm{E}(\mathbf{s}_r) = \mathbf{F}_r^H\,\mathbf{S}_{2\_\mathrm{stolt}}\,\mathbf{F}_x^H = \mathbf{F}_r^H\,\mathrm{H}_{\mathrm{stolt}}(\mathbf{S}_2)\,\mathbf{F}_x^H = \mathbf{F}_r^H\,\mathrm{H}_{\mathrm{stolt}}\!\left(\mathbf{S}_1 \odot \mathbf{H}_2^{(\Lambda)}\right)\mathbf{F}_x^H = \mathbf{F}_r^H\,\mathrm{H}_{\mathrm{stolt}}\!\left[\left(\mathbf{F}_r\,\mathbf{s}_r\,\mathbf{F}_x \odot \mathbf{H}_1\right) \odot \mathbf{H}_2^{(\Lambda)}\right]\mathbf{F}_x^H \tag{14}$$
where $\mathrm{E}(\cdot)$ denotes the imaging operator, while $\mathbf{F}_r^H\in\mathbb{C}^{M\times M}$ and $\mathbf{F}_x^H\in\mathbb{C}^{N\times N}$ are the range IFFT matrix and azimuth IFFT matrix, respectively. Based on the description above, the 2-D approximate echo data $\hat{\mathbf{s}}_r\in\mathbb{C}^{M\times N}$ deduced from the Omega-KA can be expressed as:
$$\hat{\mathbf{s}}_r = \mathrm{G}(\boldsymbol{\sigma}) = \mathbf{F}_r^H\left\{\left[\mathrm{H}_{\mathrm{stolt}}^{*}\!\left(\mathbf{F}_r\,\boldsymbol{\sigma}\,\mathbf{F}_x\right) \odot \mathbf{H}_2^{(\Lambda)*}\right] \odot \mathbf{H}_1^{*}\right\}\mathbf{F}_x^H \tag{15}$$
where $\mathrm{G}(\cdot)$ denotes the observation operator, and $\mathrm{H}_{\mathrm{stolt}}^{*}(\cdot)$ is an interpolation operator that denotes the inverse of $\mathrm{H}_{\mathrm{stolt}}(\cdot)$.
The vector forms of $\hat{\boldsymbol{\sigma}}$ and $\hat{\mathbf{s}}_r$ are denoted as $\mathrm{vec}(\hat{\boldsymbol{\sigma}})\in\mathbb{C}^{MN\times 1}$ and $\mathrm{vec}(\hat{\mathbf{s}}_r)\in\mathbb{C}^{MN\times 1}$, respectively. Based on Formulas (14) and (15), the vectors can be expressed as $\mathrm{vec}(\hat{\boldsymbol{\sigma}}) = \mathbf{E}\,\mathrm{vec}(\mathbf{s}_r)$ and $\mathrm{vec}(\hat{\mathbf{s}}_r) = \mathbf{G}\,\mathrm{vec}(\boldsymbol{\sigma})$, where $\mathbf{E}\in\mathbb{C}^{MN\times MN}$ and $\mathbf{G}\in\mathbb{C}^{MN\times MN}$ are the matrix forms of the imaging operator and observation operator, respectively.
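The Kronecker structure of (19) below is an instance of the standard vectorization identity $\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^T\otimes\mathbf{A})\,\mathrm{vec}(\mathbf{X})$, which can be checked numerically on arbitrary small matrices:

```python
# Numerical check of vec(A X B) = (B^T kron A) vec(X), the identity behind Eq. (19).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 4))

lhs = (A @ X @ B).flatten(order="F")          # column-major vec(.)
rhs = np.kron(B.T, A) @ X.flatten(order="F")
print(np.allclose(lhs, rhs))                  # True
```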
Theorem 1. 
The imaging operator $\mathrm{E}(\cdot)$ and observation operator $\mathrm{G}(\cdot)$ are linear operators, and the Hermitian transpose of $\mathbf{E}$ equals $\mathbf{G}$, namely, $\mathbf{G} = \mathbf{E}^H$.
Proof of Theorem 1.
It is obvious that $\mathrm{E}(\cdot)$ and $\mathrm{G}(\cdot)$ are linear operators because all sub-operations in (14) and (15) are linear. Specifically, the vector forms can be rewritten as follows:
$$\mathrm{vec}(\hat{\boldsymbol{\sigma}}) = \mathbf{E}\,\mathrm{vec}(\mathbf{s}_r) = \hat{\mathbf{F}}_r^H\hat{\mathbf{F}}_x^H\hat{\mathbf{H}}_{\mathrm{stolt}}\hat{\mathbf{H}}_2^{(\Lambda)}\hat{\mathbf{H}}_1\hat{\mathbf{F}}_x\hat{\mathbf{F}}_r\,\mathrm{vec}(\mathbf{s}_r) \tag{16}$$
$$\mathrm{vec}(\hat{\mathbf{s}}_r) = \mathbf{G}\,\mathrm{vec}(\boldsymbol{\sigma}) = \hat{\mathbf{F}}_r^H\hat{\mathbf{F}}_x^H\hat{\mathbf{H}}_1^{*}\hat{\mathbf{H}}_2^{(\Lambda)*}\hat{\mathbf{H}}_{\mathrm{stolt}}^{*}\hat{\mathbf{F}}_x\hat{\mathbf{F}}_r\,\mathrm{vec}(\boldsymbol{\sigma}) \tag{17}$$
where
$$\begin{cases} \hat{\mathbf{H}}_1 = \mathrm{diag}\!\left(\mathrm{vec}(\mathbf{H}_1)\right)\in\mathbb{C}^{MN\times MN}\\[2pt] \hat{\mathbf{H}}_2^{(\Lambda)} = \mathrm{diag}\!\left(\mathrm{vec}(\mathbf{H}_2^{(\Lambda)})\right)\in\mathbb{C}^{MN\times MN} \end{cases} \tag{18}$$
$$\begin{cases} \hat{\mathbf{F}}_r = \mathbf{I}_x \otimes \mathbf{F}_r\in\mathbb{C}^{MN\times MN}\\[2pt] \hat{\mathbf{F}}_x = \mathbf{F}_x^T \otimes \mathbf{I}_r\in\mathbb{C}^{MN\times MN} \end{cases} \tag{19}$$
where $\mathbf{I}_r\in\mathbb{C}^{M\times M}$ and $\mathbf{I}_x\in\mathbb{C}^{N\times N}$ are identity matrices, and $(\mathbf{a}\otimes\mathbf{b})$ denotes the Kronecker product of $\mathbf{a}$ and $\mathbf{b}$. $\hat{\mathbf{H}}_{\mathrm{stolt}}$ and $\hat{\mathbf{H}}_{\mathrm{stolt}}^{*}$ are the interpolation matrices defined by the interpolation operators $\mathrm{H}_{\mathrm{stolt}}(\cdot)$ and $\mathrm{H}_{\mathrm{stolt}}^{*}(\cdot)$, respectively. Comparing (16) and (17), it is easy to check that $\mathbf{G} = \mathbf{E}^H$. □
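The relation $\mathbf{G} = \mathbf{E}^H$ can also be verified numerically through the inner-product identity $\langle \mathrm{E}(\mathbf{s}), \boldsymbol{\sigma}\rangle = \langle \mathbf{s}, \mathrm{G}(\boldsymbol{\sigma})\rangle$. In the small-scale sketch below, the Stolt operator is replaced by the identity, a placeholder assumption; any fixed interpolation matrix paired with its conjugate counterpart would pass the same test.

```python
# Adjointness check for Theorem 1 on an 8x8 grid, with an identity Stolt operator.
import numpy as np

M, N = 8, 8
rng = np.random.default_rng(0)
H1 = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, N)))   # unit-modulus phase matrices
H2 = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, N)))
Fr = np.fft.fft(np.eye(M), axis=0)                    # range DFT matrix
Fx = np.fft.fft(np.eye(N), axis=0)                    # azimuth DFT matrix

def E(s):    # imaging operator of Eq. (14), Stolt step omitted
    return Fr.conj().T @ (((Fr @ s @ Fx) * H1) * H2) @ Fx.conj().T

def G(sig):  # observation operator of Eq. (15)
    return Fr.conj().T @ (((Fr @ sig @ Fx) * H2.conj()) * H1.conj()) @ Fx.conj().T

s = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
sig = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
print(np.allclose(np.vdot(E(s), sig), np.vdot(s, G(sig))))   # True: G = E^H
```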
It is concluded from Theorem 1 that the conjugate transpose of $\mathbf{E}$ can be utilized to calculate the approximate echo data $\hat{\mathbf{s}}_r$. Consequently, (14) can be seen as an approximate estimator of the 2-D sparse regularization model based on L1 decoupling:
$$\hat{\boldsymbol{\sigma}} = \arg\min_{\boldsymbol{\sigma}}\left\|\mathbf{s}_r - \mathrm{G}(\boldsymbol{\sigma})\right\|_2^2 + \lambda\left\|\boldsymbol{\sigma}\right\|_1 \tag{20}$$
where $\|\cdot\|_2$ denotes the 2-norm operator, $\lambda$ is the regularization parameter, and $\lambda\|\boldsymbol{\sigma}\|_1$ is the L1 regularization term, which constrains the model with the prior information of the scene.
To improve the efficiency of matrix computation and save memory, the region of interest (ROI) data [14] of $\mathbf{S}_1$ is extracted, which is expressed as $\mathbf{S}_{1\_ROI}$. Since only a small image patch containing the GMT is extracted from the conventional imaging result, the amount of data to be processed is dramatically reduced without losing GMT information. On this basis, the background clutter can be suppressed in the subsequent GMT focusing process, while reducing the noise interference. Based on the extracted ROI data $\mathbf{S}_{1\_ROI}$, (14) and (15) can be rewritten as:
$$\begin{cases} \hat{\boldsymbol{\sigma}} = \mathrm{E}(\mathbf{S}_{1\_ROI}) = \mathbf{F}_r^H\,\mathrm{H}_{\mathrm{stolt}}\!\left(\mathbf{S}_{1\_ROI} \odot \mathbf{H}_2^{(\Lambda)}\right)\mathbf{F}_x^H\\[2pt] \hat{\mathbf{S}}_{1\_ROI} = \mathrm{G}(\boldsymbol{\sigma}) = \mathrm{H}_{\mathrm{stolt}}^{*}\!\left(\mathbf{F}_r\,\boldsymbol{\sigma}\,\mathbf{F}_x\right) \odot \mathbf{H}_2^{(\Lambda)*} \end{cases} \tag{21}$$
Then, since the GMT image is usually sparse in the 2-D space domain [15], we can obtain the GMT imaging results by solving the following 2-D sparse regularization model:
$$\hat{\boldsymbol{\sigma}} = \arg\min_{\boldsymbol{\sigma}}\left\|\mathbf{S}_{1\_ROI} - \mathrm{G}(\boldsymbol{\sigma})\right\|_2^2 + \lambda\left\|\boldsymbol{\sigma}\right\|_1 \tag{22}$$
Many iterative algorithms have been designed to solve this regularized optimization problem, including the ISTA, the alternating direction method of multipliers (ADMM), and approximate message passing (AMP). In this paper, we establish the GMT imaging network based on the ROI data.

3.1.3. Imaging-Net Architecture

Suppose the approximate estimator can be obtained by an iterative algorithm whose $l$-th iteration can be written as $\boldsymbol{\sigma}_l = f(\boldsymbol{\sigma}_{l-1}, \mathbf{S}_{1\_ROI}, \Gamma)$, where $\Gamma$ is the parameter set. The iterative algorithm can be unfolded into an RNN with $L$ layers. More specifically, the inner product of the data with a weight vector, plus a bias, is fed as the input of each neuron, and the output of the $l$-th layer of the RNN can be characterized as $\boldsymbol{\sigma}^{(l)} = \mathcal{F}^{(l)}\big(\mathbf{W}_{(\Gamma)}^{(l)}(\boldsymbol{\sigma}^{(l-1)}) + \mathbf{b}_{(\Gamma)}^{(l)}\big)$, where $\mathbf{W}_{(\Gamma)}$ is the weight item, $\mathbf{b}_{(\Gamma)}$ is the bias containing the ROI data, $\mathcal{F}(\cdot)$ denotes the non-linearity function, and $\Gamma = \{\mathbf{H}_2^{(\Lambda)}, \Lambda, \lambda, \ldots\}$ includes the trainable parameters such as the unknown velocity parameter $\Lambda$, the phase compensation matrix $\mathbf{H}_2^{(\Lambda)}$, the regularization parameter $\lambda$, and so on. The final output of the $L$-layer RNN is given as:
$$\mathrm{Y}_{\mathrm{RNN}}(\mathbf{S}_{1\_ROI}) = \mathcal{F}^{(L)}\Big(\mathbf{W}_{(\Gamma)}^{(L)}\cdots\mathcal{F}^{(2)}\big(\mathbf{W}_{(\Gamma)}^{(2)}\,\mathcal{F}^{(1)}\big(\mathbf{W}_{(\Gamma)}^{(1)}\boldsymbol{\sigma}^{(0)} + \mathbf{b}_{(\Gamma)}^{(1)}\big) + \mathbf{b}_{(\Gamma)}^{(2)}\big)\cdots + \mathbf{b}_{(\Gamma)}^{(L)}\Big) \tag{23}$$
The ISTA is chosen as the iterative algorithm to be unfolded into an RNN. The ISTA is a well-known iterative optimization algorithm [33] divided into three stages: residual calculation, operator update, and soft thresholding, which can be defined by the following recursive formulas:
$$\boldsymbol{\Upsilon}_l = \mathbf{S}_{1\_ROI} - \mathrm{G}(\hat{\boldsymbol{\sigma}}_{l-1}) \tag{24}$$
$$\mathbf{x}_l = \hat{\boldsymbol{\sigma}}_{l-1} + \beta\,\mathrm{E}(\boldsymbol{\Upsilon}_l) \tag{25}$$
$$\hat{\boldsymbol{\sigma}}_l = \rho(\mathbf{x}_l;\,T_l) = \mathrm{sign}(\mathbf{x}_l)\max\{|\mathbf{x}_l| - T_l,\,0\} \tag{26}$$
where $\boldsymbol{\Upsilon}_l\in\mathbb{C}^{M\times N}$ denotes the residual in the frequency domain, $\mathbf{x}_l\in\mathbb{C}^{M\times N}$ denotes the operator, $\beta$ represents the step size, $\rho(\,\cdot\,;T_l)$ is the soft thresholding function, and the parameter $T_l = \lambda_l\beta$ indicates the threshold value.
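A minimal sketch of this recursion, written against generic imaging and observation operators `E` and `G` (such as those sketched above), is given below. For complex data, the sign function in (26) generalizes to $\mathrm{sign}(x) = x/|x|$; the step size and threshold values here are illustrative assumptions.

```python
# Generic ISTA loop for the 2-D sparse model in Eq. (22).
import numpy as np

def soft_threshold(x, T):
    """Complex soft threshold, Eq. (26): sign(x) * max(|x| - T, 0)."""
    mag = np.abs(x)
    return x * np.maximum(mag - T, 0.0) / np.maximum(mag, 1e-12)

def ista(S1_roi, E, G, beta=0.1, T=0.5, n_iter=50):
    sigma = np.zeros_like(S1_roi)
    for _ in range(n_iter):
        resid = S1_roi - G(sigma)        # Eq. (24): residual
        x = sigma + beta * E(resid)      # Eq. (25): gradient step
        sigma = soft_threshold(x, T)     # Eq. (26): shrinkage
    return sigma
```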
In the proposed RNN, three sub-layers are built in the $l$-th layer to implement the ISTA. The first sub-layer is the residual layer, denoted by R, which is given by:
$$\boldsymbol{\Upsilon}^{(l)} = \mathbf{S}_{1\_ROI} - \mathrm{G}(\hat{\boldsymbol{\sigma}}^{(l-1)}) = \mathbf{S}_{1\_ROI} - \mathrm{H}_{\mathrm{stolt}}^{*}\!\left(\mathbf{F}_r\,\hat{\boldsymbol{\sigma}}^{(l-1)}\mathbf{F}_x\right) \odot \mathbf{H}_2^{(\Lambda)*} \tag{27}$$
where $\mathbf{H}_2^{(\Lambda)}$ is a learnable matrix rather than a fixed matrix. The second sub-layer is the operator layer, denoted by X, and the updated operator is given by:
$$\mathbf{x}^{(l)} = \hat{\boldsymbol{\sigma}}^{(l-1)} + \beta\,\mathrm{E}(\boldsymbol{\Upsilon}^{(l)}) = \hat{\boldsymbol{\sigma}}^{(l-1)} + \beta\left[\mathbf{F}_r^H\,\mathrm{H}_{\mathrm{stolt}}\!\left(\boldsymbol{\Upsilon}^{(l)} \odot \mathbf{H}_2^{(\Lambda)}\right)\mathbf{F}_x^H\right] \tag{28}$$
where $\beta$ is a learnable parameter rather than a fixed value. The third sub-layer is a non-linearity layer, denoted by P, which provides the scattering coefficient for the next layer and uses the non-linearity function in (26). Hence, the scattering coefficient is given by $\hat{\boldsymbol{\sigma}}^{(l)} = \rho(\mathbf{x}^{(l)};\,T^{(l)})$, where $T^{(l)} = \lambda^{(l)}\beta$ is also a learnable parameter in this sub-layer.
As a result, the output of the proposed RNN is $\hat{\boldsymbol{\sigma}}(\Gamma_{\mathrm{net}})$, where $\Gamma_{\mathrm{net}} = \{\mathbf{H}_2^{(\Lambda)}, \Lambda, \beta, \{T^{(l)}\}_{l=1}^{L}\}$ are the trainable parameters of the RNN. To reduce the data size of the learnable parameters, $\mathbf{H}_2^{(\Lambda)}$, $\Lambda$, and $\beta$ are shared across all layers and updated after each round of backpropagation training.
The RNN contains $L$ layers with the same topology, and the monolayer representing a single iteration is composed of the three sub-layers R, X, and P. Normally, the scene information and GMT velocity information underlying the echo signal are not available for training, and it is hard to achieve GMT focusing through unsupervised training. Hence, we design the Omega-KA-net based on an $L$-layer RNN to achieve high-quality GMT imaging. With this design, the Omega-KA-net can be trained in a supervised manner and learns a mapping from the ROI data space to the scattering coefficient. The topology architecture of the Omega-KA-net is shown in Figure 4.
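A minimal PyTorch sketch of this topology is given below: the loop body implements the R, X, and P sub-layers of one layer, with $\mathbf{H}_2^{(\Lambda)}$ and $\beta$ shared across layers and a per-layer threshold $T^{(l)}$. The Stolt step is abbreviated to the identity and $\mathbf{H}_2^{(\Lambda)}$ is initialized trivially, both assumptions of this sketch; in practice it would be initialized from the Omega-KA as described in Section 4.1.

```python
# Sketch of the unrolled Omega-KA-net (identity Stolt operator assumed).
import torch
import torch.nn as nn

class OmegaKANet(nn.Module):
    def __init__(self, M, N, L):
        super().__init__()
        # Trainable parameters shared across layers (Section 3.1.3)
        self.H2 = nn.Parameter(torch.ones(M, N, dtype=torch.complex64))
        self.beta = nn.Parameter(torch.tensor(0.1))
        self.T = nn.Parameter(torch.full((L,), 0.5))   # per-layer thresholds T^(l)
        self.L = L

    def forward(self, S1_roi, Fr, Fx):
        sigma = torch.zeros_like(S1_roi)
        for l in range(self.L):
            # R sub-layer: residual in the 2-D frequency domain, Eq. (27)
            resid = S1_roi - (Fr @ sigma @ Fx) * self.H2.conj()
            # X sub-layer: gradient step through the imaging operator, Eq. (28)
            x = sigma + self.beta * (Fr.conj().T @ (resid * self.H2) @ Fx.conj().T)
            # P sub-layer: complex soft threshold with learnable T^(l)
            mag = x.abs()
            sigma = x * torch.clamp(mag - self.T[l], min=0) / mag.clamp(min=1e-12)
        return sigma
```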

3.2. Network Training

Because the velocity of the GMT is unknown, we adopt a supervised learning method to overcome the limitation of generating GMT focused images directly from the echo data mapping to itself. Labeled training samples can be generated by random down-sampling, adding system noise with different SNRs, and mixing a disturbance phase. Furthermore, the labels of the training samples can be generated by the minimum entropy algorithm with sparse enhancement.
More specifically, random down-sampling can be achieved by multiplying the SAR echo by a left sampling matrix $\boldsymbol{\Theta}_r\in\mathbb{C}^{M'\times M}$, $M' < M$, and a right sampling matrix $\boldsymbol{\Theta}_x\in\mathbb{C}^{N\times N'}$, $N' < N$, where $M' = \gamma M$ and $N' = \gamma N$ are the numbers of sampling points in the range and azimuth directions after down-sampling, and $\gamma$ denotes the sampling rate.
Adding noise to the SAR echo is another common method to generate training samples. For example, additive white Gaussian noise (AWGN) with different SNR or multiplicative noise caused by phase disturbance can be added. Since the velocity error of GMT will lead to the phase disturbance of the SAR echo, training samples can be generated by mixing the disturbance phase, which is determined by the velocity error of GMT.
Based on the above methods, unlabeled training samples can be generated. We suppose that the training set is given by $S = \{\mathbf{S}_{1\_ROI\_1}, \ldots, \mathbf{S}_{1\_ROI\_\phi}, \ldots, \mathbf{S}_{1\_ROI\_\Phi}\}$, where $\mathbf{S}_{1\_ROI\_\phi} = \left\{\left[\mathbf{F}_r\left(\boldsymbol{\Theta}_r\,\mathbf{s}_{r\phi}\,\boldsymbol{\Theta}_x\right)\mathbf{F}_x \odot \mathbf{H}_1\right]\right\}_{ROI}$, $\Phi$ is the number of samples, and each sample is collected under the same radar parameters, which means that the variation in $S$ results from scene changes or unexplained phase errors.
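A sketch of this sample generation, with random row/column selection matrices $\boldsymbol{\Theta}_r$, $\boldsymbol{\Theta}_x$ and AWGN at a prescribed SNR, might look as follows; the matrix sizes, sampling rate $\gamma$, and SNR value are illustrative assumptions.

```python
# Sketch of training-sample generation: random down-sampling plus AWGN (Section 3.2).
import numpy as np

def selection_matrix(n_full, rate, rng):
    """Row-selection matrix keeping int(rate * n_full) randomly chosen rows."""
    keep = np.sort(rng.choice(n_full, int(rate * n_full), replace=False))
    return np.eye(n_full)[keep]                              # shape (n_kept, n_full)

def make_sample(sr, rate, snr_db, rng):
    Theta_r = selection_matrix(sr.shape[0], rate, rng)       # left multiplication
    Theta_x = selection_matrix(sr.shape[1], rate, rng).T     # right multiplication
    s = Theta_r @ sr @ Theta_x                               # down-sampled echo
    p_noise = np.mean(np.abs(s) ** 2) / 10 ** (snr_db / 10)  # noise power for the SNR
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(s.shape)
                                    + 1j * rng.standard_normal(s.shape))
    return s + noise

rng = np.random.default_rng(1)
sr = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
sample = make_sample(sr, rate=0.8, snr_db=10, rng=rng)       # gamma = 0.8, SNR = 10 dB
```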
In the Omega-KA-net, the estimated scattering coefficient of the $L$-th layer is $\hat{\boldsymbol{\sigma}}_\phi^{(L)}$, which can be obtained by utilizing the ROI data $\mathbf{S}_{1\_ROI\_\phi}$. For supervised learning, we utilize the label $\mathbf{d}_\phi$ in the label set $D = \{\mathbf{d}_1, \ldots, \mathbf{d}_\phi, \ldots, \mathbf{d}_\Phi\}$ and the estimated scattering coefficient $\hat{\boldsymbol{\sigma}}_\phi^{(L)}$ output from the Omega-KA-net to establish the supervised cost function, instead of comparing the least-squares error between the recovered ROI data in (21) and the original ROI data in $S$. The training process of the Omega-KA-net can be regarded as searching for the appropriate phase compensation matrix $\mathbf{H}_2^{(\Lambda)}$ containing the motion parameter $\Lambda$. When the supervised training is completed, the network model can be regarded as a mapping from the ROI data space to the label set.
The training procedure can be viewed as minimizing the mean Euclidean distance loss $\mathcal{L}_S$:
$$\mathcal{L}_S(\Gamma_{\mathrm{net}}) = \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left\|\hat{\boldsymbol{\sigma}}_\phi^{(L)}\!\left[\Gamma_{\mathrm{net}},\,\mathbf{S}_{1\_ROI\_\phi}\right] - \mathbf{d}_\phi\right\|_2^2,\qquad \hat{\Gamma}_{\mathrm{net}} = \arg\min_{\Gamma_{\mathrm{net}}}\mathcal{L}_S(\Gamma_{\mathrm{net}}) \tag{29}$$
In (29), the supervised cost function $\mathcal{L}_S(\Gamma_{\mathrm{net}})$ is defined with respect to the network parameter set $\Gamma_{\mathrm{net}}$ and depends on the training data set $S$. Many existing DL training methods offer the possibility of solving the optimization problem $\hat{\Gamma}_{\mathrm{net}} = \arg\min_{\Gamma_{\mathrm{net}}}\mathcal{L}_S(\Gamma_{\mathrm{net}})$. In the Omega-KA-net, the gradient is updated using the mini-batch gradient descent (MBGD) algorithm [34] to train the RNN in combination with backpropagation through time (BPTT). Since the parameter set $\Gamma_{\mathrm{net}}$ contains complex matrix parameters, the complex gradients are derived by using the backpropagation algorithm based on Wirtinger calculus [35].
The training process with complex-valued iterations can be summarized as follows:
1. $\hat{\boldsymbol{\sigma}}_\phi^{(L)} = \mathrm{Y}_{\mathrm{RNN}}\big[\mathbf{S}_{1\_ROI\_\phi},\,\Gamma_{\mathrm{net}}^{(i)}\big]$;
2. $\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) = \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\big\|\hat{\boldsymbol{\sigma}}_\phi^{(L)} - \mathbf{d}_\phi\big\|_2^2$;
3. $\Gamma_{\mathrm{net}}^{(i+1)} = \Gamma_{\mathrm{net}}^{(i)} - \eta^{(i)}\nabla_{\Gamma_{\mathrm{net}}}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)})$,
where $\eta^{(i)}$ is the learning rate of the $i$-th update, and $\nabla_{\Gamma_{\mathrm{net}}}$ denotes the gradient of the supervised cost function. The first step runs a fixed number of iterations through the RNN, which amounts to performing forward propagation to obtain the estimated scattering coefficient $\hat{\boldsymbol{\sigma}}^{(L)}$. Then, all training samples are mapped to the scattering coefficient space, and the supervised cost function is calculated with the $\hat{\boldsymbol{\sigma}}^{(L)}$ obtained in Step 1. In the last step, the average gradient is calculated using the mini-batch training scheme, in which the update of $\Gamma_{\mathrm{net}}$ is performed with respect to the data set $S$.
Because the compensation matrix is identical at each layer, the vanishing gradient problem will be encountered, which makes one-shot training of the network difficult. Incremental training [36] can reduce the influence of the vanishing gradient and effectively learn appropriate values of the trainable parameters.
The optimizer attempts to minimize S ( Γ net ( i ) ) by tuning Γ net ( i ) in the i-th generation of the incremental training. In this case, the number of RNN layers L is the same as the number of generations, namely, L = i in the i-th generation. Supposing that the number of mini-batches and epochs, respectively, are denoted by Ψ and Ω , each epoch will perform Ψ rounds of backpropagation training. After processing Ω epochs, the i-th generation of the incremental training is completed and the objective function of the optimizer is changed to S ( Γ net ( i + 1 ) ) . Specifically, the new (i + 1)-th layer is appended to the RNN after training the first to i-th layers, and then the whole network is trained by processing Ω epochs again. In conclusion, Γ net is updated sequentially by appending a new layer at each generation of incremental training, in which the variable values of the previous generation are regarded as the initial values for the new generation.
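Under these definitions, the incremental schedule can be sketched as follows; the data loader, the warm-start parameter copying, and the hyper-parameter values are assumptions of this sketch, not the paper's exact implementation. Note that PyTorch implements complex autograd via Wirtinger calculus, which matches the gradient convention of (33) below.

```python
# Sketch of incremental training (Section 3.2): one new layer per generation.
import torch

def train_incremental(make_net, loader, Fr, Fx, generations=7, epochs=50, lr=1e-4):
    prev = None
    for L in range(1, generations + 1):       # generation i trains an L = i layer net
        net = make_net(L)                     # e.g. lambda L: OmegaKANet(M, N, L)
        if prev is not None:                  # warm start from generation i - 1
            with torch.no_grad():
                net.H2.copy_(prev.H2)
                net.beta.copy_(prev.beta)
                net.T[:prev.L].copy_(prev.T)  # new layer keeps its fresh threshold
        opt = torch.optim.SGD(net.parameters(), lr=lr)   # MBGD over mini-batches
        for _ in range(epochs):               # Omega epochs per generation
            for S1_roi, d in loader:          # (ROI data, label) mini-batches
                loss = 0.5 * torch.mean((net(S1_roi, Fr, Fx) - d).abs() ** 2)
                opt.zero_grad()
                loss.backward()               # BPTT through the unrolled layers
                opt.step()
        prev = net
    return net
```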
It is worth noting that the phase compensation matrix $\mathbf{H}_2^{(\Lambda)}\in\mathbb{C}^{M\times N}$ is extended to a three-dimensional tensor $\mathbf{H}_2^{(\Lambda)}\in\mathbb{C}^{\Psi\times M\times N}$ when the network parameters are learned using the MBGD algorithm. After the training of the Omega-KA-net is completed, the tensor $\mathbf{H}_2^{(\Lambda)}$ and the other network parameters can be utilized to focus the defocused ROI data through a network feedforward operation in the test stage. Therefore, the effectiveness and generality of the Omega-KA-net and the supervised learning mechanism for GMT imaging are justified.

3.3. Gradient Computation by Backpropagation

To describe the complex gradient computation by backpropagation more clearly and make the gradient expressions of the proposed method more general and practical, the learnable variables in (16) and (17) are parameterized instead of directly parameterizing the feature mapping of the Omega-KA-net. Thereby, we consider the vector forms of $\hat{\boldsymbol{\sigma}}$ and $\hat{\mathbf{s}}_r$ to derive the gradient expressions. In this case, the output of the sub-layers in the $l$-th layer can be rewritten as:
$$\begin{cases} \mathrm{vec}(\mathbf{r}^{(l)}) = \mathrm{vec}(\mathbf{s}_r) - \mathbf{G}\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l-1)})\\[2pt] \mathrm{vec}(\mathbf{x}^{(l)}) = \mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l-1)}) + \beta\,\mathbf{E}\,\mathrm{vec}(\mathbf{r}^{(l)}) = \left(\mathbf{I} - \beta\,\mathbf{G}^H\mathbf{G}\right)\mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l-1)}) + \beta\,\mathbf{G}^H\mathrm{vec}(\mathbf{s}_r) = \mathbf{W}\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l-1)}) + \mathbf{b}\\[2pt] \mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l)}) = \rho\!\left(\mathbf{W}\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l-1)}) + \mathbf{b};\,T^{(l)}\right) \end{cases} \tag{30}$$
where $\mathbf{r}_l\in\mathbb{C}^{M\times N}$ denotes the residual in the time domain, $\mathbf{W} = \mathbf{I} - \beta\,\mathbf{G}^H\mathbf{G}\in\mathbb{C}^{MN\times MN}$ is the weight matrix, and $\mathbf{b} = \beta\,\mathbf{G}^H\mathrm{vec}(\mathbf{s}_r)\in\mathbb{C}^{MN\times 1}$ is the bias vector of the $l$-th layer. The final output of the $L$-layer RNN can be derived from (23). In the RNN, the weight matrix $\mathbf{W}$ and bias vector $\mathbf{b}$, including the parameters $\{\mathbf{G}, \beta\}$, are fixed across all layers. In addition to $\mathbf{G}$ and $\beta$, the iterative threshold $\{T^{(l)}\}_{l=1}^{L}$ associated with the non-linearity function $\rho(\mathrm{vec}(\mathbf{x}^{(l)});\,T^{(l)})$ is also learned in each layer.
Then, in order to give a general description of the derivation of the gradients, the parameter set in Section 3.2 is rewritten as $\Gamma_{\mathrm{net}} = \{\mathbf{G}, \beta, \{T^{(l)}\}_{l=1}^{L}\}$ and updated by minimizing the cost function in (29), which can be rewritten as:
$$\mathcal{L}_S(\Gamma_{\mathrm{net}}) = \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\ell_2\!\left(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}),\,\mathrm{vec}(\mathbf{s}_{r\phi})\right) = \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\ell_2\!\left(\mathbf{G}\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L)}),\,\mathrm{vec}(\mathbf{s}_{r\phi})\right) \tag{31}$$
where $\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L)})$ is the scattering coefficient generated by the RNN with the echo data $\mathrm{vec}(\mathbf{s}_{r\phi})$, and $\ell_2$ is the 2-norm of the mismatch between the approximate echo data $\mathrm{vec}(\hat{\mathbf{s}}_{r\phi})$ and $\mathrm{vec}(\mathbf{s}_{r\phi})$.
Then, backpropagation is performed by MBGD at each epoch, and the $i$-th parameter update is deduced by backpropagation as follows:
$$\begin{cases} \mathbf{G}^{i+1} = \mathbf{G}^{i} - \eta^{(i)}\nabla_{\mathbf{G}}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) = \mathbf{G}^{i} - \eta^{(i)}\dfrac{1}{2\Phi}\displaystyle\sum_{\phi=1}^{\Phi}\left[\nabla_{\mathbf{G}}\,\ell_2(\Gamma_{\mathrm{net}}^{(i)})\right]\\[6pt] \beta^{i+1} = \beta^{i} - \eta^{(i)}\nabla_{\beta}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) = \beta^{i} - \eta^{(i)}\dfrac{1}{2\Phi}\displaystyle\sum_{\phi=1}^{\Phi}\left[\nabla_{\beta}\,\ell_2(\Gamma_{\mathrm{net}}^{(i)})\right]\\[6pt] T^{(l),\,i+1} = T^{(l),\,i} - \eta^{(i)}\nabla_{T^{(l)}}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) = T^{(l),\,i} - \eta^{(i)}\dfrac{1}{2\Phi}\displaystyle\sum_{\phi=1}^{\Phi}\left[\nabla_{T^{(l)}}\,\ell_2(\Gamma_{\mathrm{net}}^{(i)})\right] \end{cases} \tag{32}$$
The parameter update formulas in (32) are determined by computing the gradient and the partial derivatives of the 2-norm $\ell_2$ in (31) with respect to $\Gamma_{\mathrm{net}} = \{\mathbf{G}, \beta, \{T^{(l)}\}_{l=1}^{L}\}$. Using complex backpropagation and the Wirtinger derivative, the gradient of $\ell_2$ with respect to $\mathbf{G}$ can be defined as:
$$\nabla_{\mathbf{G}}\,\ell_2 = 2\,\frac{\partial \ell_2}{\partial\overline{\mathbf{G}}} \tag{33}$$
where $\bar{a}$ denotes the complex conjugate of $a$. The gradient of the unsupervised cost function with respect to $\mathbf{G}$, as well as its partial derivatives with respect to $\beta$ and $\{T^{(l)}\}_{l=1}^{L}$ in (32), can be computed using the BPTT algorithm. The detailed derivation can be found in Appendix A, and the results are given as follows:
$$\begin{aligned} \nabla_{\mathbf{G}}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\Bigg[\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\,\mathrm{vec}\big(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)}\big)^T \\ &\qquad + \overline{\left(\sum_{l=1}^{L=i}\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}{\partial\mathbf{G}}\,\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}\right)} \times 2\,\mathrm{Re}\Big(\mathbf{G}^H\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\Big)\Bigg]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \\ \nabla_{\beta}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\left(\sum_{l=1}^{L=i}\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}{\partial\beta}\,\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}\right) \times 2\,\mathrm{Re}\Big(\mathbf{G}^H\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\Big)\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \\ \nabla_{T^{(l)}}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\left(\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}{\partial T^{(l)}}\,\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}\right) \times 2\,\mathrm{Re}\Big(\mathbf{G}^H\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\Big)\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \end{aligned} \tag{34}$$

4. Experiments

In this section, we use simulated data and measured data from the GF-3 spaceborne SAR to demonstrate the performance of the proposed Omega-KA-net. The experiment on simulated data, with eleven moving targets in the scene, is given in Section 4.1. The experiment on measured data, including three ships, is given in Section 4.2. The feasibility and imaging quality of the proposed Omega-KA-net are analyzed in both experiments, and the GMT imaging results are displayed, including the imaging results of the RMA, the ESM in [13], the IMEA in [14], and the Omega-KA-net. The imaging results of the proposed method are compared with those of the conventional algorithms to verify the superior performance of the Omega-KA-net. The Omega-KA-net is implemented in PyTorch [37] and sped up by an NVIDIA GeForce RTX 2080Ti GPU.

4.1. Imaging Experiment Based on Simulated Data

We create a flat scene that consists of eleven moving targets in this subsection. The airborne SAR platform parameters are as follows: the velocity of the SAR platform is 150 m/s, the carrier frequency is 10 GHz, the bandwidth is 75 MHz, the pulse repetition frequency is 1 kHz, the pulse duration is 1.2 $\mu$s, and both the range and azimuth resolutions are 2 m. In the training stage, random down-sampling and adding AWGN to the SAR echo are adopted to generate the training set $S$, in which the number of samples is $\Phi_{\mathrm{train}} = 3000$, and each sample consists of eleven randomly distributed GMTs. The range velocity $v_r$ of the GMTs in the training set is randomly distributed from 5 m/s to 10 m/s, and the azimuth velocity $v_a$ is randomly distributed from 10 m/s to 20 m/s. $\Gamma_{\mathrm{net}}^{(0)} = \{\mathbf{H}_2^{(\Lambda)(0)}, \Lambda^{(0)}, \beta^{(0)}, T^{(0)}\}$ is the initialized parameter set, where $\Lambda^{(0)} = 1/150^2$, $\beta^{(0)} = 0.1$, $T^{(0)} = 0.5$, and $\mathbf{H}_2^{(\Lambda)(0)}$ is initialized by the Omega-KA in Section 2.2. Incremental training is adopted to reduce the influence of the vanishing gradient in backpropagation, and the initial learning rate is set to $1\times10^{-4}$ but changes to $5\times10^{-5}$ after 5 generations. The batch size is 50 and the number of epochs is set as $\Omega = 50$.
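As a quick consistency check on these parameters, the stated 2 m range resolution follows directly from the 75 MHz bandwidth:

$$\rho_r = \frac{c}{2B} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2\times 75\times 10^{6}\ \mathrm{Hz}} = 2\ \mathrm{m}$$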
In the testing stage, the trained network parameters are loaded into the RNN of the Omega-KA-net, and the reconstructed scattering coefficient is output through a network feedforward operation. The number of samples in the test set is $\Phi_{\mathrm{test}} = 200$, in which each test sample is composed of eleven regularly distributed GMTs. Firstly, the results of GMT imaging in side-looking mode are analyzed. When the sampling rate is $\gamma = 0.8$ and the SNR of the SAR echo is 10 dB, the imaging results of point targets with $v_r$ = 7 m/s and $v_a$ = 13 m/s obtained by the RMA, ESM, IMEA, and the proposed Omega-KA-net with L = 3, L = 5, and L = 7 are shown in Figure 5. When the sampling rate is $\gamma = 0.4$ and the SNR of the SAR echo is 5 dB, the imaging results of point targets with $v_r$ = 7 m/s and $v_a$ = 13 m/s are shown in Figure 6. The imaging quality indexes of the different algorithms are analyzed and quantified subsequently. The azimuth profile of the single point target response in the imaging results is shown in Figure 7. Table 1 shows the other quality indexes, including the peak sidelobe ratio (PSLR), the integrated sidelobe ratio (ISLR), and the imaging time of the different algorithms.
Figure 5 and Figure 6 show that the imaging results of the conventional RMA, ESM, and IMEA are significantly disturbed under random down-sampling and added AWGN, and the imaging quality is degraded by severe sidelobe interference. Conversely, the proposed Omega-KA-net compares the reconstructed scattering coefficient with the label through supervised training, which mitigates the limitation that the loss of original echo data places on imaging quality. We conclude that the Omega-KA-net can provide high-quality imaging results under down-sampling and low SNR. It should also be noted that although the ESM and IMEA achieve almost the same GMT imaging results, the imaging time of the IMEA is significantly less than that of the ESM. Compared with the conventional algorithms, the Omega-KA-net has better imaging performance and can effectively suppress sidelobes and other clutter interference. Comparing Figure 5d–f and Figure 6d–f reveals the influence of the number of layers on the reconstructed scattering coefficient. When the SNR and $\gamma$ are high, the Omega-KA-net can obtain good performance with fewer layers. On the contrary, when the SNR and $\gamma$ are low, the Omega-KA-net needs more layers to achieve better focused images. Taking L = 5 as an example, the imaging result in Figure 5e is clearly better focused than that in Figure 6e.
The focusing performance comparison of different algorithms is shown in Figure 7, where the enlarged areas provide a clearer display for the point target response graph. The imaging results of Omega-KA-net have a narrower main lobe and lower sidelobe than the images reconstructed by conventional RMA, ESM and IMEA. The sampling rate γ in Figure 6 is significantly lower than that in Figure 5, which is the main reason why the main lobe of Omega-KA-net with L = 7 in Figure 7b is narrower than that in Figure 7a. It can be concluded from Figure 7 that Omega-KA-net can obtain higher resolution imaging results. By comparing the imaging quality indexes of different algorithms, the conclusion drawn from Figure 7 can also be verified in Table 1. The Omega-KA-net is globally optimized by the backpropagation algorithm from the first layer to the last layer. Therefore, Omega-KA-net has stronger fitting capability and can achieve the focusing imaging of GMT with fewer layers. Since the imaging process only needs one feedforward operation after the network parameters are trained, the imaging speed of Omega-KA-net is significantly faster than that of RMA, the method in [13], and the method in [14].
Then, we investigate the performance of the Omega-KA-net in low squint mode with a squint angle of 30 degrees. To efficiently analyze the influence of the squint angle on the GMT imaging results, a flat scene consisting of only two moving targets is created. The training set is obtained through simulation and is composed of 2000 training samples. The range and azimuth velocities of each training sample are randomly distributed from 5 m/s to 10 m/s and from 10 m/s to 20 m/s, respectively. Figure 8a–c show the focused imaging results of point targets with $v_r$ = 7 m/s and $v_a$ = 13 m/s when $\gamma = 0.8$ and SNR = 10 dB. It can be observed that, compared with the imaging results in Figure 5, the point target imaging results in Figure 8 exhibit geometric distortion. In low squint mode, the focused GMT image obtained by the IMEA in [14] or by the Omega-KA-net cannot be further utilized until its geometric distortion has been corrected. To remove the distortion of the GMT focused image, the equivalent squint angle spectrum rotation (ESASR) algorithm in [38] is adopted. The point target imaging results with geometric distortion processing are shown in Figure 8d–f, which prove that the distortion of the focused image can be corrected by spectrum rotation.

4.2. Imaging Experiment Based on Measured Data

The performance of the Omega-KA-net is further verified by using measured data from the GF-3 satellite, which is a side-looking spaceborne SAR. Normally, the navigation speed of ships is within a certain range, so the Omega-KA-net based on supervised training performs well in focusing the measured data of ship targets. In the training stage, the OpenSARShip dataset in [39] is adopted as the training set, in which the range velocity $v_r$ of the ships is randomly distributed from 5 m/s to 10 m/s, and the azimuth velocity $v_a$ is randomly distributed from 10 m/s to 20 m/s. The OpenSARShip dataset, established by the Shanghai Key Laboratory of Intelligent Sensing and Recognition, is dedicated to ship interpretation.
In the testing stage, the measured data of the ship targets collected by the GF-3 spaceborne SAR are adopted as the test set. The original sea surface image obtained by the conventional imaging method for stationary targets is shown in Figure 9, where the three yellow boxes mark the defocused images of moving ships. We can see that the waves and other information are reconstructed well, but the ship targets cannot be recognized due to their unknown motion parameters. To make the experimental results more objective and reliable, three ship targets, named T1, T2, and T3, are analyzed in this subsection; they are extracted from the original complex image reconstructed from the measured data. The GF-3 satellite is the latest high-resolution spaceborne SAR in China; its main parameters are listed in Table 2.
We process the data of the three ship targets sequentially using the method in [13], the method in [14], and the proposed Omega-KA-net. The network parameters are the same as those in Section 4.1. The imaging results are shown in Figure 10, Figure 11 and Figure 12, in which the reconstructed ship images obtained by the above methods are compared with the defocused image in Figure 9. As shown in Figure 10, Figure 11 and Figure 12, the method in [13], the method in [14], and the Omega-KA-net can all successfully achieve phase compensation and image focusing. The experimental results also show that, compared with the methods in [13,14], the sidelobes are suppressed significantly by the Omega-KA-net. To further compare the imaging performance of the different algorithms, the ship image entropy values and imaging times of the above methods are listed in Table 3. It can be concluded that, when L = 3, the imaging quality of the Omega-KA-net is worse than that of the method in [14], but the imaging performance of the Omega-KA-net improves substantially when L = 5 or L = 7. In addition, the Omega-KA-net with L = 7 provides the minimum image entropy values among the above methods, while clearly improving imaging quality and computational efficiency.
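The image entropy used as a quality metric here can be computed, under one common definition from the minimum-entropy autofocus literature, as follows; whether the paper uses exactly this normalization is an assumption of this sketch.

```python
# Image entropy of a complex SAR image: E = -sum(p * ln p), p = |I|^2 / sum(|I|^2).
import numpy as np

def image_entropy(img):
    power = np.abs(img) ** 2
    p = power / power.sum()
    p = p[p > 0]                      # ignore zero-power pixels (0 * ln 0 = 0)
    return float(-(p * np.log(p)).sum())
```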

5. Conclusions

In this paper, we have proposed a trainable Omega-KA and sparse optimization-based SAR-GMT deep learning imaging network, with which the imaging quality can be improved and the imaging time as well as the computational complexity can be dramatically reduced. The existing Omega-KA for GMT imaging is first derived based on the 2-D SAR echo signal model. Next, the 2-D sparse imaging model deduced from the inverse of the Omega-KA focusing process is investigated. Then, based on sparse optimization theory, the ISTA is utilized to solve the 2-D sparse regularization model based on L1 decoupling. On this basis, the RNN framework is established, and the solving process of the sparse regularization model is mapped to each layer of the RNN. We incorporate the ISTA within the RNN framework to build the Omega-KA-net, where the trainable parameters are learned through an off-line supervised training method. Finally, high-quality imaging results are obtained based on the trained network parameters.
The main contributions of the present work are as follows.
  • Incorporating the sparse optimization theory and Omega-KA into GMT imaging framework, an efficient 2-D sparse regularization-based GMT imaging model is formulated. The new model combined with the iterative optimization algorithm can be compatible with other existing GMT imaging methods.
  • To solve the difficulties of slow imaging speed, obvious sidelobe interference, and high computational complexity in conventional GMT imaging methods, a novel SAR-GMT deep learning imaging method, namely Omega-KA-net, is proposed based on the 2-D sparse imaging model and RNN.
  • According to the experimental results of simulated and measured data, it is proven that Omega-KA-net is superior to the conventional GMT imaging algorithms in terms of imaging quality and time. Moreover, the Omega-KA-net can be applied to side-looking mode and low squint mode imaging under down-sampling and low SNR, while reducing the computational complexity and substantially improving the imaging quality.
It is worthwhile, however, to remark that the Omega-KA-net does not overcome the essential limitations of sparse optimization and supervised learning. On the one hand, although down-sampling of the echo can reduce the amount of data, an insufficient sampling rate will also affect the imaging quality; how to find a balance between data down-sampling and reconstruction accuracy is crucial to imaging-net research. On the other hand, supervised learning is constrained by the training samples. Even when the training samples cover a large velocity range, there are still many GMTs whose velocities cannot be estimated exactly. The Omega-KA-net will focus such GMTs incorrectly, which will affect the understanding of GMT images and the application of target recognition. Therefore, how to improve the accuracy and robustness of the GMT imaging-net will be the subject of our future work.

Author Contributions

Conceptualization and methodology, H.Z. and J.N.; software, H.Z. and S.X.; validation, H.Z., S.X. and J.N.; resources, Q.Z. and Y.L.; writing—original draft preparation, H.Z.; writing—review and editing, H.Z., J.N. and Q.Z.; funding acquisition, Q.Z. and J.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants No. 62131020, No. 62001508, and No. 61871396, and by the Natural Science Basic Research Program of Shaanxi under Grant No. 2020JQ-480.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The scattering coefficient $\mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l)})\in\mathbb{R}^{MN\times 1}$ is a real-valued vector, so $\overline{\mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l)})} = \mathrm{vec}(\hat{\boldsymbol{\sigma}}^{(l)})$. The partial derivative of the 2-norm $\ell_2$ with respect to $\mathbf{G}$ can be written using tensor algebra as:
$$\frac{\partial \ell_2}{\partial \mathbf{G}} = \frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\mathbf{G}}\,\frac{\partial \ell_2}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})} + \frac{\partial\mathbf{G}}{\partial\mathbf{G}}\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})\left(\overline{\mathrm{vec}(\hat{\mathbf{s}}_{r\phi})} - \overline{\mathrm{vec}(\mathbf{s}_{r\phi})}\right) \tag{A1}$$
For the first term in (A1), the partial derivative of the 2-norm $\ell_2$ with respect to $\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})$ can be deduced as:
$$\frac{\partial \ell_2}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})} = \mathbf{G}^T\left(\overline{\mathrm{vec}(\hat{\mathbf{s}}_{r\phi})} - \overline{\mathrm{vec}(\mathbf{s}_{r\phi})}\right) + \mathbf{G}^H\left(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\right) = 2\,\mathrm{Re}\Big(\mathbf{G}^H\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\Big) \tag{A2}$$
The gradient contributions from the $l$-th layer of the RNN are computed with the BPTT algorithm, and the derivative of $\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})$ with respect to $\mathbf{G}$ in (A1) becomes:
$$\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\mathbf{G}} = \sum_{l=1}^{L=i}\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}{\partial\mathbf{G}}\,\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})} \tag{A3}$$
where each partial derivative of $\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})$ with respect to $\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})$ in a different layer can be written as $\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})} = \frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l+1)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}\,\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l+1)})}$ by using the chain rule.
Then, we consider the second term in (A1). The tensor $\partial\mathbf{G}/\partial\mathbf{G}$ is an $MN\times MN$ array of $MN\times MN$ matrices, and its $(m, m')$-th matrix $\mathbf{I}_{mm'}$ has all entries $\mathbf{I}_{mm'}(p,q) = 0$ except for a 1 at $p = m$, $q = m'$. Based on tensor-vector multiplication, we obtain:
$$\frac{\partial\mathbf{G}}{\partial\mathbf{G}}\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})\left(\overline{\mathrm{vec}(\hat{\mathbf{s}}_{r\phi})} - \overline{\mathrm{vec}(\mathbf{s}_{r\phi})}\right) = \left(\overline{\mathrm{vec}(\hat{\mathbf{s}}_{r\phi})} - \overline{\mathrm{vec}(\mathbf{s}_{r\phi})}\right)\mathrm{vec}\big(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)}\big)^T \tag{A4}$$
Thus, the gradient of the unsupervised cost function with respect to $\mathbf{G}$ can be obtained as follows:
$$\begin{aligned} \nabla_{\mathbf{G}}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[2\,\frac{\partial \ell_2}{\partial\overline{\mathbf{G}}}\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \\ &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\overline{\big(\overline{\mathrm{vec}(\hat{\mathbf{s}}_{r\phi})} - \overline{\mathrm{vec}(\mathbf{s}_{r\phi})}\big)\,\mathrm{vec}\big(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)}\big)^T} + \overline{\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\mathbf{G}}\,\frac{\partial \ell_2}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}}\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \\ &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\Bigg[\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\,\mathrm{vec}\big(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)}\big)^T \\ &\qquad + \overline{\left(\sum_{l=1}^{L=i}\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}{\partial\mathbf{G}}\,\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}\right)} \times 2\,\mathrm{Re}\Big(\mathbf{G}^H\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\Big)\Bigg]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \end{aligned} \tag{A5}$$
The derivation of the partial derivatives of the unsupervised cost function with respect to $\beta$ and $\{T^{(l)}\}_{l=1}^{L}$ follows the same steps as that of $\nabla_{\mathbf{G}}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)})$. Proceeding as in (A5), we obtain:
$$\begin{aligned} \nabla_{\beta}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\frac{\partial \ell_2}{\partial\beta}\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} = \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\beta}\,\frac{\partial \ell_2}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \\ &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\left(\sum_{l=1}^{L=i}\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}{\partial\beta}\,\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}\right) \times 2\,\mathrm{Re}\Big(\mathbf{G}^H\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\Big)\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \\ \nabla_{T^{(l)}}\mathcal{L}_S(\Gamma_{\mathrm{net}}^{(i)}) &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\frac{\partial \ell_2}{\partial T^{(l)}}\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} = \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial T^{(l)}}\,\frac{\partial \ell_2}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \\ &= \frac{1}{2\Phi}\sum_{\phi=1}^{\Phi}\left[\left(\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}{\partial T^{(l)}}\,\frac{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(L=i)})}{\partial\,\mathrm{vec}(\hat{\boldsymbol{\sigma}}_\phi^{(l)})}\right) \times 2\,\mathrm{Re}\Big(\mathbf{G}^H\big(\mathrm{vec}(\hat{\mathbf{s}}_{r\phi}) - \mathrm{vec}(\mathbf{s}_{r\phi})\big)\Big)\right]\Bigg|_{\Gamma_{\mathrm{net}}=\Gamma_{\mathrm{net}}^{(i)}} \end{aligned} \tag{A6}$$

References

  1. Zhao, Y.; Han, S.; Yang, J.; Zhang, L.; Xu, H.; Wang, J. A novel approach of slope detection combined with Lv’s distribution for airborne SAR imagery of fast moving targets. Remote Sens. 2018, 10, 764. [Google Scholar] [CrossRef] [Green Version]
  2. Graziano, M.D.; Errico, M.D.; Rufino, G. Wake component detection in X-band SAR images for ship heading and velocity estimation. Remote Sens. 2016, 8, 498. [Google Scholar] [CrossRef] [Green Version]
  3. Li, Y.; Nie, L. A new ground moving target imaging algorithm for high-resolution airborne CSSAR-GMTI systems. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; pp. 2308–2311. [Google Scholar]
  4. Zhang, Y.; Mu, H.L.; Xiao, T.; Jiang, Y.C.; Ding, C. SAR imaging of multiple maritime moving targets based on sparsity Bayesian learning. IET Radar Sonar Navig. 2020, 14, 1717–1725. [Google Scholar] [CrossRef]
  5. Zhao, S.Y.; Zhang, Z.H.; Guo, W.W.; Luo, Y. An Automatic Ship Detection Method Adapting to Different Satellites SAR Images with Feature Alignment and Compensation Loss. IEEE Trans. Geosci. Remote Sens. 2022, 1. [Google Scholar] [CrossRef]
  6. Chen, J.; Xing, M.; Yu, H.; Liang, B.; Peng, J.; Sun, G. Motion compensation/autofocus in airborne synthetic aperture radar: A review. IEEE Geosci. Remote Sens. Mag. 2021, 2–23. [Google Scholar] [CrossRef]
  7. Buckreuss, S. Motion compensation for airborne SAR based on inertial data, RDM and GPS. In Proceedings of the 1994 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Pasadena, CA, USA, 8–12 August 1994; pp. 1971–1973. [Google Scholar]
  8. Fornaro, G. Trajectory deviations in airborne SAR: Analysis and compensation. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 997–1009. [Google Scholar] [CrossRef]
  9. Li, G.; Xia, X.G.; Xu, J.; Peng, Y.N. A velocity estimation algorithm of moving targets using single antenna SAR. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 1052–1062. [Google Scholar] [CrossRef]
  10. Fornaro, G.; Franceschetti, G.; Perna, S. Motion compensation errors: Effects on the accuracy of airborne SAR images. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1338–1352. [Google Scholar] [CrossRef]
  11. Zhang, L.; Qiao, Z.; Xing, M.; Yang, L.; Bao, Z. A robust motion compensation approach for UAV SAR imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3202–3218. [Google Scholar] [CrossRef]
  12. Wang, G.; Zhang, L.; Li, J.; Hu, Q. Precise aperture-dependent motion compensation for high-resolution synthetic aperture radar imaging. IET Radar Sonar Navig. 2017, 11, 204–211. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Sun, J.; Lei, P.; Li, G.; Hong, W. High-resolution SAR-based ground moving target imaging with defocused ROI data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1062–1073. [Google Scholar] [CrossRef]
  14. Chen, Y.C.; Li, G.; Zhang, Q. Iterative Minimum Entropy Algorithm for Refocusing of Moving Targets in SAR Images. IET Radar Sonar Navig. 2019, 13, 1279–1286. [Google Scholar] [CrossRef]
  15. Xiong, S.; Ni, J.; Zhang, Q.; Luo, Y.; Yu, L. Ground moving target imaging for highly squint SAR by modified minimum entropy algorithm and spectrum rotation. Remote Sens. 2021, 13, 4373. [Google Scholar] [CrossRef]
  16. Chen, Y.; Li, G.; Zhang, Q.; Sun, J. Refocusing of moving targets in SAR images via parametric sparse representation. Remote Sens. 2017, 9, 795. [Google Scholar] [CrossRef] [Green Version]
  17. Zhang, L.; Wang, G.; Qiao, Z.; Wang, H. Azimuth motion compensation with improved subaperture algorithm for airborne SAR imaging. IEEE J. Select. Topics Appl. Earth Observat. Remote Sens. 2017, 10, 184–193. [Google Scholar] [CrossRef]
  18. Gu, F.F.; Zhang, Q.; Chen, Y.C.; Huo, W.J.; Ni, J.C. Parametric sparse representation method for motion parameter estimation of ground moving target. IEEE Sens. J. 2016, 16, 7646–7652. [Google Scholar] [CrossRef]
  19. Kang, M.S.; Kim, K.T. Ground moving target imaging based on compressive sensing framework with single-channel SAR. IEEE Sens. J. 2020, 20, 1238–1250. [Google Scholar] [CrossRef]
  20. Wu, D.; Yaghoobi, M.; Davies, M.E. Sparsity-driven GMTI processing framework with multichannel SAR. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1434–1447. [Google Scholar] [CrossRef]
  21. Kelly, S.; Yaghoobi, M.; Davies, M.E. Sparsity-based autofocus for undersampled synthetic aperture radar. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 972–986. [Google Scholar] [CrossRef] [Green Version]
  22. Lu, Z.J.; Qin, Q.; Shi, H.Y.; Huang, H. SAR moving target imaging based on convolutional neural network. Digit Signal Process. 2020, 106, 102832. [Google Scholar] [CrossRef]
23. Chen, X.; Peng, X.; Duan, R. Deep kernel learning method for SAR image target recognition. Rev. Sci. Instrum. 2017, 88, 104706.
24. Zhao, S.Y.; Zhang, Z.H.; Guo, W.W.; Luo, Y. Transferable SAR Image Classification Crossing Different Satellites under Open Set Condition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
25. Mason, E.; Yonel, B.; Yazici, B. Deep learning for SAR image formation. In Proceedings of the 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Anaheim, CA, USA, 28 April 2017.
26. Rittenbach, A.; Walters, J.P. RDAnet: A deep learning based approach for synthetic aperture radar image formation. arXiv 2020, arXiv:2001.08202.
27. Yonel, B.; Mason, E.; Yazici, B. Deep learning for passive synthetic aperture radar. IEEE J. Sel. Top. Signal Process. 2018, 12, 90–103.
28. Zhao, S.; Ni, J.; Liang, J.; Xiong, S.; Luo, Y. End-to-end SAR deep learning imaging method based on sparse optimization. Remote Sens. 2021, 13, 4429.
29. Liao, Y.; Wang, W.Q.; Xing, M. A modified Omega-K algorithm for squint circular trace scanning SAR using improved range model. Signal Process. 2019, 160, 59–65.
30. Wang, C.; Su, W.; Gu, H. Focusing bistatic forward-looking synthetic aperture radar based on an improved hyperbolic range model and a modified Omega-K algorithm. Sensors 2019, 19, 3792.
31. Li, Z.; Yi, L.; Xing, M. An improved range model and Omega-K-based imaging algorithm for high-squint SAR with curved trajectory and constant acceleration. IEEE Geosci. Remote Sens. Lett. 2016, 13, 656–660.
32. Yang, H.; Wang, B.; Lin, S. Unsupervised extraction of video highlights via robust recurrent auto-encoders. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Washington, DC, USA, 7–13 December 2015; pp. 4633–4641.
33. Ito, D.; Takabe, S.; Wadayama, T. Trainable ISTA for sparse signal recovery. IEEE Trans. Signal Process. 2019, 67, 3113–3125.
34. Cui, Y.; Wu, D.; Huang, J. Optimize TSK fuzzy systems for classification problems: Minibatch gradient descent with uniform regularization and batch normalization. IEEE Trans. Fuzzy Syst. 2020, 28, 3065–3075.
35. Candes, E.J.; Li, X.; Soltanolkotabi, M. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inf. Theory 2015, 61, 1985–2007.
36. Liu, D.; Chang, T.S.; Yi, Z. A constructive algorithm for feedforward neural networks with incremental training. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2003, 49, 1876–1879.
37. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z. Automatic differentiation in PyTorch. In Proceedings of the 2017 Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1–4. Available online: Pytorch.org (accessed on 22 January 2022).
38. Li, Z.; Chen, J.; Du, W.; Gao, B.; Xing, M. Focusing of maneuvering high-squint-mode SAR data based on equivalent range model and wavenumber-domain imaging algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2419–2433.
39. Huang, L.; Liu, B.; Li, B.; Guo, W.; Yu, W.; Zhang, Z. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 195–208.
Figure 1. Geometry of an airborne SAR and a GMT.
Figure 2. Flowchart of Omega-KA-based GMT imaging method.
Figure 3. Flowchart of Omega-KA-net implementation.
Figure 4. Topology architecture of Omega-KA-net.
Figure 5. Imaging results of point targets with vr = 7 m/s and va = 13 m/s when γ = 0.8 and SNR = 10 dB. (a) Defocused imaging result obtained by RMA; (b) Imaging result obtained by the method in [13]; (c) Imaging result obtained by the method in [14]; (d) Imaging result obtained by Omega-KA-net with L = 3; (e) Imaging result obtained by Omega-KA-net with L = 5; (f) Imaging result obtained by Omega-KA-net with L = 7.
Figure 6. Imaging results of point targets with vr = 7 m/s and va = 13 m/s when γ = 0.4 and SNR = 5 dB. (a) Defocused imaging result obtained by RMA; (b) Imaging result obtained by the method in [13]; (c) Imaging result obtained by the method in [14]; (d) Imaging result obtained by Omega-KA-net with L = 3; (e) Imaging result obtained by Omega-KA-net with L = 5; (f) Imaging result obtained by Omega-KA-net with L = 7.
Figure 7. Single point target response in azimuth profile. (a) Performance comparison of algorithms in Figure 5; (b) Performance comparison of algorithms in Figure 6.
Figure 8. Spectrum rotation imaging results of point targets with vr = 7 m/s and va = 13 m/s when γ = 0.8 and SNR = 10 dB. (a) Imaging result obtained by the method in [14]; (b) Imaging result obtained by Omega-KA-net with L = 3; (c) Imaging result obtained by Omega-KA-net with L = 7; (d–f) Imaging result with spectrum rotation of (a–c), respectively.
Figure 9. Imaging results of measured data from GF-3 satellite. (a) Imaging result of sea surface; (b) Defocused image of ship T1; (c) Defocused image of ship T2; (d) Defocused image of ship T3.
Figure 10. Imaging results of ship T1. (a) Defocused image of ship T1; (b) Imaging result obtained by the method in [13]; (c) Imaging result obtained by the method in [14]; (d) Imaging result obtained by Omega-KA-net with L = 3; (e) Imaging result obtained by Omega-KA-net with L = 5; (f) Imaging result obtained by Omega-KA-net with L = 7.
Figure 11. Imaging results of ship T2. (a) Defocused image of ship T2; (b) Imaging result obtained by the method in [13]; (c) Imaging result obtained by the method in [14]; (d) Imaging result obtained by Omega-KA-net with L = 3; (e) Imaging result obtained by Omega-KA-net with L = 5; (f) Imaging result obtained by Omega-KA-net with L = 7.
Figure 12. Imaging results of ship T3. (a) Defocused image of ship T3; (b) Imaging result obtained by the method in [13]; (c) Imaging result obtained by the method in [14]; (d) Imaging result obtained by Omega-KA-net with L = 3; (e) Imaging result obtained by Omega-KA-net with L = 5; (f) Imaging result obtained by Omega-KA-net with L = 7.
Table 1. Comparison of imaging quality indexes (columns 2–4: γ = 0.8 and SNR = 10 dB; columns 5–7: γ = 0.4 and SNR = 5 dB).

| Algorithm | PSLR | ISLR | Imaging Time | PSLR | ISLR | Imaging Time |
|---|---|---|---|---|---|---|
| RMA | −11.48 dB | −11.97 dB | 7.79 s | −7.06 dB | −2.41 dB | 7.43 s |
| Method in [13] | −12.63 dB | −9.36 dB | 816.01 s | −12.07 dB | −1.21 dB | 776.85 s |
| Method in [14] | −12.89 dB | −9.25 dB | 50.62 s | −12.13 dB | −1.18 dB | 43.07 s |
| Omega-KA-net with L = 3 | −14.93 dB | −12.91 dB | 0.32 s | −13.61 dB | −7.69 dB | 0.28 s |
| Omega-KA-net with L = 5 | −21.09 dB | −22.40 dB | 0.32 s | −22.67 dB | −18.54 dB | 0.28 s |
| Omega-KA-net with L = 7 | −31.77 dB | −30.86 dB | 0.32 s | −27.72 dB | −24.41 dB | 0.28 s |
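PSLR (peak sidelobe ratio) and ISLR (integrated sidelobe ratio) in Table 1 quantify sidelobe suppression in a point-target response: PSLR compares the strongest sidelobe with the main-lobe peak, and ISLR compares total sidelobe energy with main-lobe energy. The sketch below shows one common way to estimate both from a 1-D azimuth cut; the first-null delimitation of the main lobe is an illustrative assumption, not necessarily the exact measurement procedure used for Table 1.

```python
import numpy as np

def pslr_islr(profile, mainlobe_halfwidth=None):
    """Estimate PSLR and ISLR (dB) from a 1-D point-target profile.

    profile: magnitude samples of the azimuth (or range) cut.
    mainlobe_halfwidth: half-width of the main lobe in samples; if None,
    it is taken as the distance from the peak to the first null.
    """
    p = np.abs(np.asarray(profile, dtype=float)) ** 2  # work in power
    k_peak = int(np.argmax(p))

    if mainlobe_halfwidth is None:
        # Walk right from the peak to the first local minimum (first null).
        k = k_peak
        while k + 1 < len(p) and p[k + 1] < p[k]:
            k += 1
        mainlobe_halfwidth = k - k_peak

    lo = max(k_peak - mainlobe_halfwidth, 0)
    hi = min(k_peak + mainlobe_halfwidth, len(p) - 1)
    main = p[lo:hi + 1]
    side = np.concatenate([p[:lo], p[hi + 1:]])

    pslr = 10 * np.log10(side.max() / p[k_peak])  # peak sidelobe vs. main-lobe peak
    islr = 10 * np.log10(side.sum() / main.sum())  # sidelobe vs. main-lobe energy
    return pslr, islr
```

Both values are negative for a well-focused response, and more negative is better, which is the trend Table 1 shows as the number of network layers L grows.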
Table 2. GF-3 SAR platform main parameters.

| Parameters | Value |
|---|---|
| Carrier frequency | 5.4 GHz |
| Bandwidth | 60 MHz |
| Pulse repetition frequency | 2.3 kHz |
| Chirp rate | 1.7 × 10¹² s⁻² |
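As a quick sanity check on Table 2: the chirp rate is given in s⁻² (equivalently Hz/s), and for a linear FM chirp the rate K_r relates bandwidth B and pulse duration T_p through K_r = B / T_p. T_p is not listed in the table, so the following back-computation is an inference, not a stated parameter:

```python
# Hypothetical consistency check: implied pulse duration of the GF-3 chirp,
# assuming a linear FM waveform with K_r = B / T_p (T_p is not given in Table 2).
B = 60e6      # bandwidth from Table 2, Hz
K_r = 1.7e12  # chirp rate from Table 2, Hz/s
T_p = B / K_r
print(f"Implied pulse duration: {T_p * 1e6:.1f} us")  # ≈ 35.3 us
```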
Table 3. Comparison of imaging quality indexes (columns 2–3: ship T1; columns 4–5: ship T2; columns 6–7: ship T3).

| Algorithm | Entropy | Imaging Time | Entropy | Imaging Time | Entropy | Imaging Time |
|---|---|---|---|---|---|---|
| Method in [13] | 5.0778 | 7346.12 s | 4.5847 | 7238.67 s | 3.4498 | 7335.98 s |
| Method in [14] | 4.9631 | 137.66 s | 4.5069 | 128.91 s | 3.4252 | 98.67 s |
| Omega-KA-net with L = 3 | 6.3460 | 0.89 s | 5.6420 | 0.84 s | 5.1790 | 0.87 s |
| Omega-KA-net with L = 5 | 4.1368 | 0.89 s | 2.5955 | 0.84 s | 2.1810 | 0.87 s |
| Omega-KA-net with L = 7 | 2.5541 | 0.89 s | 2.2179 | 0.84 s | 1.7058 | 0.87 s |
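The entropy in Table 3 is the standard image-entropy focus measure: a better-focused image concentrates energy in fewer pixels and therefore has lower entropy, which matches the improvement from L = 3 to L = 7. A minimal sketch, assuming the usual definition over normalized image power (the paper's exact normalization is not restated in this excerpt):

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Entropy of a (complex) SAR image; lower means better focused.

    Standard definition: normalize |img|^2 into a density d over pixels,
    then E = -sum(d * ln d).
    """
    power = np.abs(img) ** 2
    d = power / power.sum()
    d = d[d > 0]  # drop zeros to avoid log(0)
    return float(-(d * np.log(d)).sum())
```

Applied to each refocused ship chip, this kind of scalar is what Table 3 reports per method, up to the authors' normalization choices.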