Article

Pseudo-L0-Norm Fast Iterative Shrinkage Algorithm Network: Agile Synthetic Aperture Radar Imaging via Deep Unfolding Network

1 The Department of Space Control and Communication, Space Engineering University, Beijing 102249, China
2 The School of Information Science and Engineering, Southeast University, Nanjing 214135, China
3 The 15th Research Institute of China Electronics Technology Corporation, Beijing 100083, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(4), 671; https://doi.org/10.3390/rs16040671
Submission received: 7 December 2023 / Revised: 3 February 2024 / Accepted: 9 February 2024 / Published: 13 February 2024
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)

Abstract: A novel compressive sensing (CS) synthetic-aperture radar (SAR) called AgileSAR has been proposed to increase swath width for sparse scenes while preserving azimuthal resolution. AgileSAR overcomes the limitation of the Nyquist sampling theorem, so it requires only a small amount of data and low system complexity. However, traditional optimization-based CS algorithms suffer from manual tuning and pre-definition of optimization parameters, and they generally involve high time and computational complexity for AgileSAR imaging. To address these issues, a pseudo-$L_0$-norm fast iterative shrinkage algorithm network (pseudo-$L_0$-norm FISTA-net) is proposed for AgileSAR imaging via a deep unfolding network in this paper. Firstly, a pseudo-$L_0$-norm regularization model is built by taking an approximately fair penalization rule based on Bayesian estimation. Then, we unfold the operation process of FISTA into a data-driven deep network to solve the pseudo-$L_0$-norm regularization model. The network's parameters are learned automatically, and the learned network significantly increases imaging speed, improving both the accuracy and the efficiency of AgileSAR imaging. In addition, the nonlinear sparsifying transform can learn more target details than the traditional sparsifying transform. Finally, simulation and real-data experiments demonstrate the superiority and efficiency of the pseudo-$L_0$-norm FISTA-net for AgileSAR imaging.

1. Introduction

High-resolution wide-swath (HRWS) synthetic-aperture radars (SARs) are an important research topic due to their highly efficient observation capability [1,2]. Although azimuthal multi-channel SARs [3] and multi-input multi-output (MIMO) SARs [4] can achieve high azimuthal resolution and a wide range swath, they produce a large amount of data and require highly complex antennas. With the development of compressive sensing (CS) theory [5,6,7], an innovative system concept for sparse scenes, called AgileSAR, has been proposed; it is unconstrained by the Nyquist theorem, so its amount of data is less than that of the aforementioned HRWS systems [8]. Simultaneously, the CS algorithm guarantees the azimuthal resolution, and time–space sampling enables wide-swath coverage mosaicked from several range sub-swaths, achieving 5 m resolution and a 300 km swath with a single channel. AgileSAR adopts sub-Nyquist sampling along the azimuthal dimension, and the sparse scene is recovered by AgileSAR imaging [8].
Although AgileSAR achieves the above merits, AgileSAR imaging still has the following limitations. Under the assumption that the restricted isometry property (RIP) [9] is satisfied, CS algorithms are vital to AgileSAR imaging. At present, they mainly include traditional optimization-based CS methods and recent network-based CS methods. The optimization-based CS methods include greedy algorithms [10,11], $L_1$-norm optimization algorithms [12,13], and Bayesian-based methods [14,15,16], among which the $L_1$-norm optimization algorithm performs better in terms of the recovery error evaluated by the mean squared error (MSE) [17,18,19]. Although optimization-based CS methods have achieved good recovery performance, they still suffer from several problems, i.e., manual tuning and pre-definition of optimization parameters (e.g., the regularization parameter and the thresholding parameter), high computational complexity, poor robustness at low signal-to-noise ratio (SNR), and the need for prior assumptions about the observed scenes. With the development of deep learning techniques, network-based CS methods have been proposed to address the manual tuning issue and improve the performance and speed of signal reconstruction [20,21,22,23,24,25,26,27,28,29]. Specifically, deep unfolding networks relate optimization-based CS methods to deep neural networks so that they provide better interpretability, and these methods have been successfully applied to SAR imaging for remote sensing.
However, the current CS methods do not take full advantage of the prior scene information we may hold, and the sparse property is imposed uniformly and independently on each variable. Consequently, some low signal-to-noise ratio (SNR) targets in the sparse scene cannot be accurately recovered, and false targets are often produced. Although some reweighted algorithms have already been proposed [30,31], there is still no principled guidance on how and why to select the penalization rule in AgileSAR imaging so as to further mitigate the impact of empirical parameter setting on reconstruction quality and computational complexity. In addition, when the observed scenes are non-sparse, CS methods with traditional fixed sparsifying transforms (e.g., the DCT and wavelets [32,33]) usually cannot achieve good reconstruction performance.
To address these challenges in AgileSAR imaging, a pseudo-$L_0$-norm fast iterative shrinkage algorithm network (pseudo-$L_0$-norm FISTA-net) is proposed in this paper. Firstly, a pseudo-$L_0$-norm regularization model is built based on Bayesian estimation. This model approximately fairly penalizes the regularization term with prior scene information, i.e., the reciprocal of the previous solution, so that the penalty nearly counts the number of nonzero values. Because this model approximates an $L_0$-norm regularization equation, we name it the pseudo-$L_0$-norm regularization model. Then, we unfold the iterative updating process of FISTA into an interpretable deep network to solve the regularization model. The network and its parameters are learned automatically through training in a data-driven manner, overcoming the difficulties of manually adjusted parameters and high computational complexity. Additionally, a learnable nonlinear transform sparsifies non-sparse scenes in place of the traditional sparsifying transform to improve reconstruction performance. Finally, quantitative analyses and numerical results are detailed in the following sections to demonstrate the evident advantage of the proposed algorithm.
The rest of this paper is organized as follows: In Section 2, the observation model for AgileSAR is established, and we propose a pseudo- L 0 -norm FISTA-net for AgileSAR imaging and present the details of the AgileSAR imaging network. In Section 3, simulation experiments and data experiments of real TerraSAR-X images confirm the effectiveness and efficiency of our proposed method. In Section 4, a discussion is given to show the performance and advantages of our proposed algorithm. Section 5 concludes this paper.

2. Materials and Methods

In this section, the observation model of AgileSAR is described. To further improve the performance and efficiency of AgileSAR imaging, a pseudo- L 0 -norm FISTA-net is proposed.

2.1. AgileSAR Signal Model

In the traditional HRWS system, e.g., the azimuthal multi-channel SAR [3] and MIMO SAR [4], the equivalent sampling still satisfies the Nyquist theorem. To lower the amount of data and relieve the contradiction between high resolution and wide swath, AgileSAR, with a sub-Nyquist sampling frequency, has been proposed to meet these requirements [8]. As usual, the SAR platform is assumed to be stationary between transmitting one pulse and receiving its echo before the next transmission (the stop-and-go approximation) [5]. The imaging geometry of SAR is shown in Figure 1.
Similar to the traditional SAR system, the raw data in AgileSAR for multiple point targets can be written as
$$s_c(\tau,\eta)=\sum_{(x_i,y_i)}\sigma_i W_i(\tau,\eta)\exp\left(-j\pi K_r\left(\tau-\frac{2R_i(\eta)}{c}\right)^2\right)\exp\left(-j\frac{4\pi R_i(\eta)}{\lambda}\right)+n(\tau,\eta)\tag{1}$$
where $\tau$ is the fast time along the range, $\eta$ is the slow time along the azimuth, and $\sigma_i$ and $W_i(\tau,\eta)$ are the backscattering coefficient and the weighting pattern corresponding to the $i$-th target at $(x_i,y_i)$. $R_i(\eta)$ is the instantaneous slant range of the $i$-th target, $K_r$ denotes the chirp rate of the linear frequency-modulated (LFM) signal, $c$ is the speed of light, $\lambda$ is the wavelength, and $n(\tau,\eta)$ is the system noise.
To relieve the inherent contradiction in the HRWS system, sub-Nyquist sampling is applied in AgileSAR along the azimuthal dimension, with the sampling scheme shown in Figure 1. Traditional matched filtering (MF) alone cannot recover the scene exactly from such samples [5]. Accordingly, AgileSAR imaging includes three steps: (1) range compression based on MF; (2) range cell migration correction (RCMC) (interpolate by zero-padding every azimuthal signal after range compression and then sum up all the range-compressed signals of grids on the same range cell to the pre-defined grid); and (3) azimuth reconstruction from the sub-Nyquist samples based on the CS algorithm. After implementing range compression and RCMC on Equation (1), the signal at a certain range cell is represented by
$$s_c(\tau_0,\eta)=\sum_{(x_i,y_i)}\sigma_i W_i(\tau_0,\eta)\,T_r\,\mathrm{sinc}\left(K_r T_r\left(\tau_0-\frac{2R_i(\eta-\eta_{ci})}{c}\right)\right)\exp\left(-j\frac{4\pi R_i(\eta-\eta_{ci})}{\lambda}\right)+n(\tau_0,\eta)\tag{2}$$
where $\eta_{ci}$ is the beam-center crossing time for the target at $(x_i,y_i)$, $T_r$ is the pulse width, and $\mathrm{sinc}(\cdot)$ denotes the sinc function.
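As a concrete illustration of step (1), range compression amounts to correlating each range line with a replica of the transmitted LFM pulse. The following is a minimal NumPy sketch with toy parameters (the sampling rate, pulse width, chirp rate, and target delay below are illustrative values, not the system parameters of Table 1):

```python
import numpy as np

def range_compress(raw, chirp):
    """Matched-filter (MF) range compression via FFT: circularly correlate
    each range line with the transmitted LFM replica."""
    n = raw.shape[-1]
    H = np.conj(np.fft.fft(chirp, n))              # matched filter = conjugate spectrum
    return np.fft.ifft(np.fft.fft(raw, n, axis=-1) * H, axis=-1)

# toy example: one point target whose echo is delayed by 120 samples
fs, Tr, Kr = 100.0, 1.0, 30.0                      # sampling rate, pulse width, chirp rate
t = np.arange(0, Tr, 1 / fs)
chirp = np.exp(1j * np.pi * Kr * t ** 2)           # baseband LFM replica
raw = np.zeros(300, dtype=complex)
raw[120:220] = chirp
compressed = range_compress(raw, chirp)            # compression peak at the target delay
```

After compression, the dispersed chirp energy collapses into a narrow sinc-shaped peak at the target's delay, which is exactly the $\mathrm{sinc}$ factor appearing in Equation (2).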
Let $\boldsymbol{\sigma}=[\sigma_1,\sigma_2,\ldots,\sigma_M]^T$ be the vectorized backscattering cross-sections of the targets on the same range cell, and let $\mathbf{s}_{N\times 1}=[s_c(\tau_0,\eta_1),s_c(\tau_0,\eta_2),\ldots,s_c(\tau_0,\eta_N)]^T$ be the vectorized signal after range compression and RCMC. The relation can be expressed as
$$\mathbf{s}_{N\times 1}=\mathbf{D}_{N\times M}\boldsymbol{\sigma}_{M\times 1}+\mathbf{n}_{N\times 1}\tag{3}$$
where $N$ is the number of samples along the azimuthal dimension, and $M$ is the number of resolution cells at a certain range cell in the observed scene. $\mathbf{D}_{N\times M}=\left[D_i(\tau_0,\eta_n)\right]_{n=1,i=1}^{N,M}$ denotes the mapping relation between the received signal and the scene, with $D_i(\tau_0,\eta_n)=W_i(\tau_0,\eta_n)\,T_r\exp\left(-j4\pi R_i(\eta_n-\eta_{ci})/\lambda\right)$, and $\mathbf{n}_{N\times 1}=[n(\tau_0,\eta_1),n(\tau_0,\eta_2),\ldots,n(\tau_0,\eta_N)]^T$ is the noise vector.
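The discretized model of Equation (3) can be sketched as follows. The hyperbolic range history and all numerical values here are illustrative assumptions for a toy geometry, not the AgileSAR system parameters:

```python
import numpy as np

def build_measurement_matrix(eta, eta_c, R0, v, lam, Tr=1.0, W=1.0):
    """Toy mapping matrix with entries D_i(tau0, eta_n) = W * Tr *
    exp(-j * 4*pi * R_i(eta_n - eta_ci) / lambda), assuming a hyperbolic
    range history R_i(t) = sqrt(R0^2 + (v*t)^2)."""
    t = eta[:, None] - eta_c[None, :]              # (N, M) relative azimuth times
    R = np.sqrt(R0 ** 2 + (v * t) ** 2)            # slant-range history per cell
    return W * Tr * np.exp(-1j * 4 * np.pi * R / lam)

# simulate a sparse scene and its sub-Nyquist echo s = D sigma + n
rng = np.random.default_rng(0)
M, N = 64, 16                                      # N < M: azimuth is undersampled
eta = np.sort(rng.uniform(-0.5, 0.5, N))           # random sub-Nyquist sampling instants
eta_c = np.linspace(-0.5, 0.5, M)                  # beam-center crossing times of the cells
D = build_measurement_matrix(eta, eta_c, R0=6.0e5, v=7600.0, lam=0.03)
sigma = np.zeros(M, dtype=complex)
sigma[[10, 40]] = [1.0, 0.5]                       # two point targets
s = D @ sigma + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
```

Because $N<M$, Equation (3) is underdetermined, which is why the sparsity-exploiting recovery of the next subsection is needed.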

2.2. Pseudo- L 0 -Norm FISTA-Net for AgileSAR Imaging

In this subsection, the CS theorem is briefly introduced. Then, a pseudo- L 0 -norm regularization model is built based on Bayesian estimation. Finally, the pseudo- L 0 -norm FISTA-net unfolds the iterative updating process of FISTA into the deep network to solve the pseudo- L 0 -norm regularization model for AgileSAR imaging.

2.2.1. CS Theorem

In CS, the $L_1$-norm regularization model can effectively recover nearly sparse signals, with or without measurement noise, from remarkably few measurements, and its application is so wide that it could broadly be considered the modern least-squares method [12,13]. The $L_1$-norm regularization model has good recovery performance in terms of the recovered error evaluated by MSE [17]. It solves the underdetermined problem (3) (subscripts omitted):
$$\hat{\boldsymbol{\sigma}}=\arg\min_{\boldsymbol{\sigma}}\left\|\mathbf{s}-\mathbf{D}\boldsymbol{\sigma}\right\|_2^2+\alpha\left\|\boldsymbol{\sigma}\right\|_1\tag{4}$$
where $\alpha$ is the regularization parameter. The first term $\|\mathbf{s}-\mathbf{D}\boldsymbol{\sigma}\|_2^2$ ensures fidelity, and the second term $\|\boldsymbol{\sigma}\|_1$ guarantees the sparsity of the recovered scene. The parameter $\alpha$ balances the recovered error against the sparsity and is empirically chosen by minimizing the recovered error of the whole scene.
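As a toy illustration, the model of Equation (4) can be solved by iterative shrinkage-thresholding (ISTA, the unaccelerated form of the FISTA used later). The sizes and regularization weight below are arbitrary, and the identity is assumed as the sparsifying transform:

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink the magnitude of every entry by t (soft-thresholding)."""
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0.0)

def ista_l1(s, D, alpha, n_iter=2000):
    """Minimize ||s - D sigma||_2^2 + alpha * ||sigma||_1 by iterative
    shrinkage-thresholding with a fixed step size 1 / ||D||_2^2."""
    mu = 1.0 / np.linalg.norm(D, 2) ** 2
    sigma = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ sigma - s)               # gradient of the fidelity term
        sigma = soft_threshold(sigma - mu * grad, mu * alpha)
    return sigma

# toy recovery of a 2-sparse scene from 32 of 64 azimuth samples
rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64)) / np.sqrt(32)    # random sensing matrix, unit-norm columns
sigma_true = np.zeros(64)
sigma_true[[5, 20]] = [2.0, -1.5]
sigma_hat = ista_l1(D @ sigma_true, D, alpha=0.05)
```

Each iteration alternates a gradient step on the fidelity term with the soft-thresholding operator, which is the proximal mapping of the $L_1$ penalty.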
Even though the $L_1$-norm regularization model has achieved good recovery performance, some low-SNR targets cannot be accurately recovered, and false targets are often produced. Further optimization of the regularization model contributes to improving the reconstruction performance. Is there an alternative to the $L_1$-norm regularization model that achieves better recovered quality? To address this question, reweighted $L_1$-based algorithms have been proposed to improve the recovery performance [30,31]. However, the selection of an approximately fair penalization rule has lacked a theoretical analysis.

2.2.2. Pseudo- L 0 -Norm Regularization Model

For the sparse underdetermined equation, the $L_0$-norm regularization model has the best recovery performance, but it is NP-hard to solve [12]. Inspired by this, we make full use of prior scene information to penalize the regularization term so that it approaches the $L_0$-norm, achieving performance close to that of the $L_0$-norm regularization model while keeping the optimization equation solvable. Because it is closer to the $L_0$-norm regularization model in this sense, we name it the pseudo-$L_0$-norm regularization model. Subsequently, we adopt Bayesian estimation to analyze and deduce an approximately fair penalization rule, as follows. The noise $\mathbf{n}_{N\times 1}$ is usually assumed to be Gaussian with zero mean and variance $\sigma_n^2$ [12]. For simplicity, all the following matrices/vectors omit subscripts.
$$p_{\mathbf{n}}(\mathbf{n})=p(\mathbf{s}\mid\boldsymbol{\sigma})=\left(\frac{1}{\sqrt{2\pi}\,\sigma_n}\right)^{N}\exp\left(-\frac{\left\|\mathbf{s}-\mathbf{D}\boldsymbol{\sigma}\right\|_2^2}{2\sigma_n^2}\right)\tag{5}$$
The Laplace distribution promotes most coefficients to be small, so it can describe the sparse scene [32]. Assuming $\Gamma(\cdot)$ nonlinearly sparsifies the observed scene $\boldsymbol{\sigma}$,
$$p(\sigma_i)=\frac{\xi_i}{2}\exp\left(-\xi_i\left\|\Gamma(\sigma_i)\right\|_1\right)\tag{6}$$
where $\xi_i>0$ is the scale parameter of the Laplace distribution. The probability distribution of the vector $\boldsymbol{\sigma}_{M\times 1}$ is then
$$p(\boldsymbol{\sigma})=\prod_{i=1}^{M}\frac{\xi_i}{2}\exp\left(-\xi_i\left\|\Gamma(\sigma_i)\right\|_1\right)=\left(\prod_{i=1}^{M}\frac{\xi_i}{2}\right)\exp\left(-\sum_{i=1}^{M}\xi_i\left\|\Gamma(\sigma_i)\right\|_1\right)\tag{7}$$
Based on the Bayesian rule, the maximum a posteriori (MAP) estimate of the vector $\boldsymbol{\sigma}_{M\times 1}$ is
$$\hat{\boldsymbol{\sigma}}=\arg\max_{\boldsymbol{\sigma}}\,p(\boldsymbol{\sigma}\mid\mathbf{s})=\arg\max_{\boldsymbol{\sigma}}\,p(\mathbf{s}\mid\boldsymbol{\sigma})\,p(\boldsymbol{\sigma})\tag{8}$$
Taking the logarithm of Formula (8),
$$\hat{\boldsymbol{\sigma}}=\arg\max_{\boldsymbol{\sigma}}\left[\log p(\mathbf{s}\mid\boldsymbol{\sigma})+\log p(\boldsymbol{\sigma})\right]=\arg\max_{\boldsymbol{\sigma}}\left[-\frac{1}{2\sigma_n^2}\left\|\mathbf{s}-\mathbf{D}\boldsymbol{\sigma}\right\|_2^2-\left\|\boldsymbol{\xi}\cdot\Gamma(\boldsymbol{\sigma})\right\|_1\right]=\arg\min_{\boldsymbol{\sigma}}\left\|\mathbf{s}-\mathbf{D}\boldsymbol{\sigma}\right\|_2^2+\beta\left\|\boldsymbol{\xi}\cdot\Gamma(\boldsymbol{\sigma})\right\|_1\tag{9}$$
where the reweighting matrix $\boldsymbol{\xi}$ is the diagonal matrix with $\xi_1,\xi_2,\ldots,\xi_M$ on the diagonal and zeros elsewhere, and $\Gamma(\cdot)$ denotes the vector–matrix form of the sparse transform $\Gamma(\cdot)$. Similar to $\alpha$ in Equation (4), $\beta$ is the regularization parameter. After the above deduction, optimization Equation (9) can be viewed as a reweighted modification of the $L_1$-norm regularization model.
To solve Equation (9), the matrix $\boldsymbol{\xi}$ is first calculated. The logarithm of the prior of the vector $\boldsymbol{\sigma}_{M\times 1}$ is
$$\log p(\boldsymbol{\sigma})=\log\left[\left(\prod_{i=1}^{M}\frac{\xi_i}{2}\right)\exp\left(-\sum_{i=1}^{M}\xi_i\left\|\Gamma(\sigma_i)\right\|_1\right)\right]=\sum_{i=1}^{M}\log\frac{\xi_i}{2}-\sum_{i=1}^{M}\xi_i\left\|\Gamma(\sigma_i)\right\|_1\tag{10}$$
Setting the partial derivative of Formula (10) with respect to $\xi_i$ equal to zero yields the estimate of the scale parameter $\xi_i$:
$$\frac{\partial\log p(\boldsymbol{\sigma})}{\partial\xi_i}=\frac{1}{\xi_i}-\left\|\Gamma(\sigma_i)\right\|_1=0\tag{11}$$
$$\xi_i=1/\left\|\Gamma(\sigma_i)\right\|_1\tag{12}$$
If $\Gamma(\sigma_i)=0$, Formula (12) is undefined, so it is modified as
$$\xi_i=1/\left(\left\|\Gamma(\sigma_i)\right\|_1+\iota\right)\tag{13}$$
where $\iota>0$ is a very small positive constant.
The above deduction confirms the penalization rule via the prior distribution, i.e., the Laplace distribution, and this model makes full use of prior information to achieve good performance. Certainly, different probability distributions yield different penalization rules. In practice, since the variables themselves are unknown, the approximately fair penalization rule is established with an iteratively updated method, which allows successively better estimation of the nonzero variables. According to optimization Equations (9) and (13), the regularization term is iteratively penalized by the reciprocal of its previous solution, so that the weighted penalty nearly counts the number of nonzero values; small coefficients are penalized heavily to discourage their effects, while large coefficients are penalized more lightly and are thus more likely to be identified as nonzero. Once the nonzero locations are identified, their influence is attenuated in order to allow more sensitivity for identifying the remaining small but nonzero elements. As a result, this model can recover more target details more accurately.
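The behavior of the penalization rule in Formula (13) can be checked numerically. In the sketch below, the identity stands in for the sparsifying transform $\Gamma(\cdot)$, and the example coefficients are illustrative; the weighted penalty $\sum_i\xi_i|\sigma_i|$ approximately counts the nonzero entries, which is the pseudo-$L_0$ effect:

```python
import numpy as np

def reweight(sigma_prev, iota=1e-3):
    """Approximately fair penalization rule (13): xi_i = 1 / (|sigma_i| + iota),
    with the identity as the sparsifying transform Gamma."""
    return 1.0 / (np.abs(sigma_prev) + iota)

sigma_prev = np.array([2.0, 0.01, 0.0, 0.5])       # previous solution (illustrative)
xi = reweight(sigma_prev)
# small coefficients receive large weights (heavily penalized toward zero),
# large coefficients receive small weights (lightly penalized, kept nonzero),
# and sum(xi * |sigma|) = sum(|sigma| / (|sigma| + iota)) approximates the L0 count
weighted_penalty = np.sum(xi * np.abs(sigma_prev))
```

Here the three nonzero entries contribute nearly 1 each to the weighted penalty while the zero entry contributes nothing, so the penalty behaves like an $L_0$ count rather than an amplitude-dependent $L_1$ sum.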

2.2.3. AgileSAR Imaging Based on Pseudo- L 0 -Norm FISTA-Net

To solve the pseudo- L 0 -norm regularization model for AgileSAR imaging, this paper proposes a pseudo- L 0 -norm FISTA-net by taking full advantage of the merits of optimization-based and network-based methods. The basic idea of the pseudo- L 0 -norm FISTA-net is to map the iterative steps of FISTA to a fixed number of phases of a deep network framework, in which self-adapted parameters and sparse priors integrated with a CNN module are learnable using appropriate training data. Therefore, better reconstructed performance and higher efficiency can also be achieved due to the self-adapted optimal parameters of the deep network.
A. Network Mapping of the Pseudo-$L_0$-Norm FISTA-Net
To solve the regularization problem (9), FISTA is adopted because it accelerates convergence and consumes less time by adding a momentum term [34]. The pseudo-$L_0$-norm FISTA-net takes advantage of the merits of optimization-based and network-based CS methods; its basic idea is to map the FISTA iterative steps to a deep network framework, which is formulated as
$$\boldsymbol{\upsilon}^{g}=\boldsymbol{\sigma}^{g-1}-\mu^{g}\mathbf{D}^{H}\left(\mathbf{D}\boldsymbol{\sigma}^{g-1}-\mathbf{s}\right)\tag{14}$$
$$\boldsymbol{\xi}^{g}=\mathrm{diag}\left(\frac{1}{\left\|\Gamma(\upsilon_1^{g})\right\|_1+\iota^{g}},\frac{1}{\left\|\Gamma(\upsilon_2^{g})\right\|_1+\iota^{g}},\ldots,\frac{1}{\left\|\Gamma(\upsilon_M^{g})\right\|_1+\iota^{g}}\right)\tag{15}$$
$$\boldsymbol{\omega}^{g}=\Gamma^{-1}\left(\left(\boldsymbol{\xi}^{g}\right)^{-1}\cdot\Theta\left(\boldsymbol{\xi}^{g}\cdot\Gamma\left(\boldsymbol{\upsilon}^{g}\right),\delta^{g}\right)\right)\tag{16}$$
$$\boldsymbol{\sigma}^{g}=\boldsymbol{\omega}^{g}+\gamma^{g}\left(\boldsymbol{\omega}^{g}-\boldsymbol{\omega}^{g-1}\right)\tag{17}$$
where $\Theta(\sigma_i,\delta)=\mathrm{sign}(\sigma_i)\max\left(|\sigma_i|-\delta,0\right)$ is the soft-threshold function. $\mu^{g}$, $\iota^{g}$, $\Gamma(\cdot)$, $\delta^{g}$, and $\gamma^{g}$ are learnable variables of the deep unfolding network. The pseudo-$L_0$-norm FISTA-net alternates among the $\boldsymbol{\upsilon}^{g}$ module from one-step gradient descent, the $\boldsymbol{\xi}^{g}$ module, the $\boldsymbol{\omega}^{g}$ module, and the $\boldsymbol{\sigma}^{g}$ module. It generalizes these four types of operations into network layers with learnable parameters. Given the undersampled data $\mathbf{s}$, they flow through all modules and finally generate a reconstructed image $\boldsymbol{\sigma}$. Figure 2 illustrates the overall framework of the pseudo-$L_0$-norm FISTA-net, and more details are provided in the following paragraphs.
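A minimal NumPy sketch of one phase, Equations (14)-(17), may make the data flow concrete. The identity is used as a stand-in for the learnable transform $\Gamma(\cdot)$, and the fixed parameter values are illustrative rather than learned:

```python
import numpy as np

def soft(x, delta):
    """Soft-threshold operator Theta(x, delta), applied element-wise."""
    mag = np.abs(x)
    return np.where(mag > delta, (1 - delta / np.maximum(mag, 1e-12)) * x, 0.0)

def fista_net_phase(sigma_prev, omega_prev, s, D, mu, iota, delta, gamma):
    """One phase g of the pseudo-L0-norm FISTA-net, Eqs. (14)-(17)."""
    upsilon = sigma_prev - mu * (D.conj().T @ (D @ sigma_prev - s))   # (14) gradient step
    xi = 1.0 / (np.abs(upsilon) + iota)                               # (15) reweighting
    omega = (1.0 / xi) * soft(xi * upsilon, delta)                    # (16) reweighted thresholding
    sigma = omega + gamma * (omega - omega_prev)                      # (17) momentum step
    return sigma, omega

# run a fixed number of phases on a toy 2-sparse problem
rng = np.random.default_rng(2)
D = rng.standard_normal((20, 40)) / np.sqrt(20)
sigma_true = np.zeros(40)
sigma_true[[3, 30]] = [1.0, -0.8]
s = D @ sigma_true
mu = 1.0 / np.linalg.norm(D, 2) ** 2
sigma = omega = np.zeros(40)
for _ in range(200):
    sigma, omega = fista_net_phase(sigma, omega, s, D, mu,
                                   iota=1e-2, delta=1e-3, gamma=0.5)
```

In the actual network, $\mu^{g}$, $\iota^{g}$, $\delta^{g}$, and $\gamma^{g}$ differ per phase and are learned end-to-end, and $\Gamma(\cdot)$ is the CNN module described below.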
(1) $\boldsymbol{\upsilon}^{g}$ Module: This layer obtains an estimate by minimizing the convex quadratic term $\|\mathbf{s}-\mathbf{D}\boldsymbol{\sigma}\|_2^2$, updating the reconstructed image through the gradient-descent operation of Equation (14) and using the output of the previous layer $\boldsymbol{\sigma}^{g-1}$ as the input for the current layer. Additionally, the step size $\mu^{g}$ is learned through end-to-end training rather than being fixed.
(2) $\boldsymbol{\xi}^{g}$ Module: This layer takes advantage of the prior scene information $\boldsymbol{\upsilon}^{g}$ held in the previous layer. Bayesian estimation analyzes and deduces the approximately fair penalization rule $\xi_i=1/\left(\left\|\Gamma(\upsilon_i)\right\|_1+\iota\right)$, which explains how the prior scene information is applied. This reweighting penalizes the regularization term so that it approaches the $L_0$-norm, achieving recovery performance close to that of an $L_0$-based algorithm; thus, it benefits the recovery of low-SNR targets.
(3) $\boldsymbol{\omega}^{g}$ Module: The proximal operator aims to remove the noise and false targets of the previous layer $\boldsymbol{\upsilon}^{g}$ through thresholding in the sparse transform domain. Complex image details are captured by fine-tuning the sparse transformation. The pseudo-$L_0$-norm FISTA-net aims to learn a more flexible representation $\Gamma(\cdot)$ and threshold $\delta^{g}$ from the training data.
Inspired by the representational power of CNNs and their universal approximation property, we design a general nonlinear sparse transform function $\Gamma(\cdot)$. In the pseudo-$L_0$-norm FISTA-net, a CNN module replaces $\Gamma(\cdot)$; it is a combination of three linear convolutional operators (without bias terms) separated by Rectified Linear Units (ReLUs), as shown in the dashed box in Figure 2. The first and last convolutional operators each correspond to 1 filter of size $3\times 3$, and the second convolutional operator corresponds to 32 filters, each of size $3\times 3$. The first convolution layer uses one filter in order to accommodate single-channel data. Mathematically, the sparse transform $\Gamma(\cdot)$ is invertible, i.e., $\Gamma^{-1}(\Gamma(\cdot))=\mathrm{I}$, where $\mathrm{I}$ is the identity operator, and the symmetric network $\Gamma^{-1}(\cdot)$ is realized by a mirror-symmetrical framework. Therefore, $\boldsymbol{\omega}^{g}$ can be efficiently computed by (16). The parameters of $\Gamma(\cdot)$ are shared across iterations, whereas the learnable shrinkage threshold $\delta^{g}$ varies at each iteration. The advantage of this setting is that it maintains the flexibility to self-adapt to the noise at each iteration.
Additionally, the scene reconstructed from the echo is complex-valued, so we need to extend the pseudo-$L_0$-norm FISTA-net to the complex-valued domain. The real–imaginary individual model takes the basic operators for complex-valued data into consideration, especially multiplication operators, which can be converted into matrix operators acting on the real and imaginary parts, i.e., for the complex multiplication $\mathbf{s}=\mathbf{D}\boldsymbol{\sigma}+\mathbf{n}$. We establish the following data structures for the echo $\mathbf{s}$ and the imaging result $\boldsymbol{\sigma}$, which simply double the data's dimensional size by separating the real and imaginary parts of the original data:
$$\mathbf{s}_n=\begin{bmatrix}s_n^{real}\\ s_n^{imag}\end{bmatrix},\qquad \boldsymbol{\sigma}_m=\begin{bmatrix}\sigma_m^{real}\\ \sigma_m^{imag}\end{bmatrix}\tag{18}$$
As for the measurement matrix D , we adopt the modification as
$$\mathbf{D}_{nm}=\begin{bmatrix}D_{nm}^{real} & -D_{nm}^{imag}\\ D_{nm}^{imag} & D_{nm}^{real}\end{bmatrix}\tag{19}$$
where D n m is an element of D . With the modifications above, we can effectively convert complex-valued problems into a real–imaginary individual model that can be expressed as
$$\begin{bmatrix}\mathbf{s}^{real}\\ \mathbf{s}^{imag}\end{bmatrix}=\begin{bmatrix}\mathbf{D}^{real} & -\mathbf{D}^{imag}\\ \mathbf{D}^{imag} & \mathbf{D}^{real}\end{bmatrix}\begin{bmatrix}\boldsymbol{\sigma}^{real}\\ \boldsymbol{\sigma}^{imag}\end{bmatrix}+\begin{bmatrix}\mathbf{n}^{real}\\ \mathbf{n}^{imag}\end{bmatrix}\tag{20}$$
Additionally, the cross-terms between the real and imaginary components of the complex-valued signal are accounted for in (20). With the modifications above, we can effectively convert complex-valued problems into real-valued ones, so the pseudo-$L_0$-norm FISTA-net can be applied to microwave imaging problems.
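The real-imaginary conversion of Equations (18)-(20) can be sketched as follows; the sizes are arbitrary:

```python
import numpy as np

def complex_to_real_system(D, s):
    """Real-imaginary block form of s = D sigma, Eqs. (18)-(20):
    [Re(s); Im(s)] = [[Re(D), -Im(D)], [Im(D), Re(D)]] @ [Re(sigma); Im(sigma)]."""
    D_ri = np.block([[D.real, -D.imag],
                     [D.imag,  D.real]])
    s_ri = np.concatenate([s.real, s.imag])
    return D_ri, s_ri

# check the equivalence on random complex data
rng = np.random.default_rng(3)
D = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
sigma = rng.standard_normal(4) + 1j * rng.standard_normal(4)
D_ri, s_ri = complex_to_real_system(D, D @ sigma)
sigma_ri = np.concatenate([sigma.real, sigma.imag])   # stacked real-imaginary unknown
```

The block matrix reproduces $\mathrm{Re}(\mathbf{D}\boldsymbol{\sigma})=\mathbf{D}^{real}\boldsymbol{\sigma}^{real}-\mathbf{D}^{imag}\boldsymbol{\sigma}^{imag}$ and $\mathrm{Im}(\mathbf{D}\boldsymbol{\sigma})=\mathbf{D}^{imag}\boldsymbol{\sigma}^{real}+\mathbf{D}^{real}\boldsymbol{\sigma}^{imag}$, so the real-valued system is exactly equivalent to the complex one.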
(4) $\boldsymbol{\sigma}^{g}$ Module: In this layer, a momentum term, which combines the two previous results with an update weight, is introduced to accelerate the convergence rate. This weight is further optimized as the autonomously learnable parameter $\gamma^{g}$.
B. Network-Based Parameter Constraint
Although $\mu^{g}$, $\delta^{g}$, and $\gamma^{g}$ are learnable and the pseudo-$L_0$-norm FISTA-net has no manually tuned parameters, extra constraints are introduced to ensure proper convergence, because non-positive step sizes and thresholding values would contradict the definitions of these variables during iteration. Therefore, $\mu^{g}$, $\delta^{g}$, and $\gamma^{g}$ are constrained to be positive. Additionally, the gradient step $\mu^{g}$ decays smoothly with iterations, and the thresholding value $\delta^{g}$ also decreases iteratively because the noise variance is progressively suppressed. The two-step update weight $\gamma^{g}$ should increase monotonically, corresponding to the two-step update weight in FISTA. Therefore, we define
$$\mu^{g}=\ln\left(1+\exp\left(a_1 g+b_1\right)\right),\quad a_1<0\tag{21}$$
$$\delta^{g}=\ln\left(1+\exp\left(a_2 g+b_2\right)\right),\quad a_2<0\tag{22}$$
$$\gamma^{g}=\frac{\ln\left(1+\exp\left(a_3 g+b_3\right)\right)-\ln\left(1+\exp\left(a_3+b_3\right)\right)}{\ln\left(1+\exp\left(a_3 g+b_3\right)\right)},\quad a_3>0\tag{23}$$
One benefit of $\ln(1+\exp(\cdot))$ is its simple derivative function.
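A small numerical sketch of Equations (21)-(23) follows; the coefficients $a_i$, $b_i$ below are illustrative values chosen to respect the stated sign constraints, not trained values:

```python
import numpy as np

def softplus(x):
    """ln(1 + exp(x)): positive everywhere, with a simple derivative (the sigmoid)."""
    return np.log1p(np.exp(x))

def constrained_params(g, a1=-0.5, b1=0.2, a2=-1.0, b2=2.0, a3=1.0, b3=0.0):
    """Phase-dependent positive parameters, Eqs. (21)-(23): mu and delta decay
    with the phase index g (a1, a2 < 0); gamma starts at 0 and grows (a3 > 0)."""
    mu = softplus(a1 * g + b1)                     # (21) decaying step size
    delta = softplus(a2 * g + b2)                  # (22) decaying threshold
    gamma = (softplus(a3 * g + b3) - softplus(a3 + b3)) / softplus(a3 * g + b3)  # (23)
    return mu, delta, gamma
```

Because the softplus output is strictly positive, all three parameters stay positive for any learned $a_i$, $b_i$, while the signs of the $a_i$ enforce the desired monotonic trends over phases.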
The loss function quantifies the performance of the network, and its choice depends on many factors, e.g., the objective and the efficiency of running gradient descent. In this paper, a reconstruction-error term and a transform-symmetry constraint are considered in the loss function, and an adjustable parameter $\kappa$ controls the tradeoff between the two terms. With the training data $\{\boldsymbol{\sigma},\mathbf{s}\}$ and the learnable parameters $\{\mu^{g},\iota^{g},\Gamma(\cdot),\delta^{g},\gamma^{g}\}_{g=1}^{N_G}$, the global loss function is
$$Loss\left(\mu^{g},\iota^{g},\Gamma(\cdot),\delta^{g},\gamma^{g}\right)=\sum_{N_{data}}\left\|\hat{\boldsymbol{\sigma}}-\boldsymbol{\sigma}\right\|_2^2+\kappa\sum_{g=1}^{N_G}\left\|\Gamma^{-1}\left(\Gamma\left(\boldsymbol{\upsilon}^{g}\right)\right)-\boldsymbol{\upsilon}^{g}\right\|_2^2\tag{24}$$
where $\|\cdot\|_2$ represents the $L_2$-norm; $N_{data}$ is the number of training samples; $N_G$ is the total number of network phases; $\kappa$ is the weight parameter balancing the two constraint terms; and $\hat{\boldsymbol{\sigma}}$ and $\boldsymbol{\sigma}$ represent the SAR images recovered from undersampled and Nyquist-sampled echoes, respectively. Once the loss function has been determined, the parameters can be learned automatically.
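The loss of Equation (24) can be sketched for a single training sample as follows; the identity transforms below stand in for the learnable $\Gamma(\cdot)$ and $\Gamma^{-1}(\cdot)$:

```python
import numpy as np

def global_loss(sigma_hat, sigma_ref, upsilons, gamma_fwd, gamma_inv, kappa=0.1):
    """Per-sample version of the loss in Eq. (24): reconstruction error plus a
    symmetry term forcing Gamma^{-1}(Gamma(u)) to stay close to u at each phase."""
    rec = np.sum(np.abs(sigma_hat - sigma_ref) ** 2)
    sym = sum(np.sum(np.abs(gamma_inv(gamma_fwd(u)) - u) ** 2) for u in upsilons)
    return rec + kappa * sym

# with identity transforms the symmetry term vanishes and only the
# reconstruction error remains
loss = global_loss(np.array([1.0, 2.0]), np.array([1.0, 1.0]),
                   [np.array([3.0])], lambda x: x, lambda x: x)
```

The second term is what enforces the invertibility requirement $\Gamma^{-1}(\Gamma(\cdot))=\mathrm{I}$ stated above, rather than hard-wiring the inverse architecturally.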
C. Initialization
Initialization near the minimum of the objective function can accelerate convergence. The scene is initialized as $\hat{\boldsymbol{\sigma}}^{0}=\mathbf{D}^{H}\mathbf{s}$. The parameters $(a_1,b_1,a_2,b_2,a_3,b_3)$ and the weight parameter $\kappa$ are initialized as $(-0.5,0.2,-1,2,1,0)$ and 0.1, respectively.
D. Implementation Details
The unknown parameters of the pseudo-$L_0$-norm FISTA-net are $\mu^{g}$, $\iota^{g}$, $\Gamma(\cdot)$, $\delta^{g}$, and $\gamma^{g}$. The number of network layers is fixed while conducting end-to-end training. Since the transform network is shared across phases and $a_1,b_1,a_2,b_2,a_3,b_3$ are decoupled from the phase index, we may choose a different number of iterations for reconstruction. The networks were implemented in PyCharm with the PyTorch 2.0.0 library, and the training of the pseudo-$L_0$-norm FISTA-net was performed on a workstation with NVIDIA RTX 3090Ti GPUs.
Once the network has been built, the training dataset can be used to learn and train the network. The input, output, and tagged data for the network are the undersampled raw data after RCMC, the SAR image, and the SAR image reconstructed with Nyquist-sampled echoes, respectively. Given a training dataset and a certain number of network layers, the network can automatically adjust the learnable parameters through training to obtain the reconstructed scene.

3. Experiment

In this section, we validate the performance of the pseudo- L 0 -norm FISTA-net for AgileSAR imaging through simulation experiments and data experiments.

3.1. Data Description

For SAR-observed scenes, both training and testing datasets are obtained with SAR images from simulations and the TerraSAR-X satellite. A total of 500 SAR images σ , each with a size of 256 × 256 , are used to simulate echo data s and selected for training. Additionally, three SAR images are used for testing. Their sizes are 256 × 256 , 1601 × 4141 , and 1024 × 2048 . During testing, the three test images are first straightened and preprocessed to the corresponding size for the training model. The training and testing datasets consist of simulated sparse scenes, real sparse SAR scenes, and real complex SAR scenes.
In order to verify the validity and effectiveness of AgileSAR imaging based on the pseudo- L 0 -norm FISTA-net, the reflectivity functions of the images were used to simulate raw data according to the simulated parameters, and then the raw data were randomly received, as shown in Figure 1. The results obtained with the Nyquist-sampled echo after chirp scaling algorithm (CSA) were used as labels to train the proposed network.

3.2. Evaluation Index

To quantify the performance of the pseudo-$L_0$-norm FISTA-net, the recovered performance is evaluated by the normalized MSE (NMSE), the structural similarity index (SSIM), and the peak signal-to-noise ratio (PSNR). The NMSE is defined as the ratio of the error energy to the true signal energy:
$$\mathrm{NMSE}=\frac{\left\|\hat{\boldsymbol{\sigma}}-\boldsymbol{\sigma}\right\|_2^2}{\left\|\boldsymbol{\sigma}\right\|_2^2}\tag{25}$$
where $\hat{\boldsymbol{\sigma}}$ is the recovered scene and $\boldsymbol{\sigma}$, obtained by sampling on the uniform grid, serves as the reference. The NMSE is computed after the imaging procedure. SSIM compares local patterns of pixel intensities that have been normalized for luminance and contrast [35]. In SSIM, the product of three components of similarity between the reference scene $\boldsymbol{\sigma}$ and the recovered scene $\hat{\boldsymbol{\sigma}}$ is computed to estimate the image's local quality, as follows:
$$\mathrm{SSIM}(\hat{\boldsymbol{\sigma}},\boldsymbol{\sigma})=l(\hat{\boldsymbol{\sigma}},\boldsymbol{\sigma})^{s_1}\cdot c(\hat{\boldsymbol{\sigma}},\boldsymbol{\sigma})^{s_2}\cdot s(\hat{\boldsymbol{\sigma}},\boldsymbol{\sigma})^{s_3}\tag{26}$$
where $l(\hat{\boldsymbol{\sigma}},\boldsymbol{\sigma})$, $c(\hat{\boldsymbol{\sigma}},\boldsymbol{\sigma})$, and $s(\hat{\boldsymbol{\sigma}},\boldsymbol{\sigma})$ are the luminance, contrast, and structural similarity components, and $s_1$, $s_2$, and $s_3$ are positive constants that adjust their relative importance. The simple setting $s_1=s_2=s_3=1$ is adopted here.
The PSNR is a performance index of the quality of a reconstructed image relative to the label image, described as
$$\mathrm{PSNR}=10\log_{10}\frac{\max\left|\hat{\boldsymbol{\sigma}}\right|^2}{\left\|\hat{\boldsymbol{\sigma}}-\boldsymbol{\sigma}\right\|_2^2}\tag{27}$$
A higher PSNR value indicates that the reconstructed image is of higher quality and closer to the label image. Overall, the lower the NMSE and the higher the SSIM and PSNR, the better the image quality.
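The NMSE and PSNR indexes can be computed directly from their definitions; note that Equation (27), as written, normalizes by the summed squared error, whereas the conventional PSNR uses the per-pixel MSE:

```python
import numpy as np

def nmse(sigma_hat, sigma):
    """Eq. (25): error energy normalized by the reference energy."""
    return np.sum(np.abs(sigma_hat - sigma) ** 2) / np.sum(np.abs(sigma) ** 2)

def psnr(sigma_hat, sigma):
    """Eq. (27) as written: peak power of the recovery over the summed
    squared error (conventional PSNR would divide by the per-pixel MSE)."""
    err = np.sum(np.abs(sigma_hat - sigma) ** 2)
    return 10 * np.log10(np.max(np.abs(sigma_hat)) ** 2 / err)

# tiny illustrative reference/recovery pair with one erroneous pixel
sigma = np.array([1.0, 0.0])
sigma_hat = np.array([1.0, 0.1])
```

For this pair the NMSE is 0.01 and the PSNR is 20 dB, matching the definitions above.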

3.3. Experimental Results

3.3.1. AgileSAR Imaging under Different Optimization Algorithms

To verify the effectiveness and superiority of the pseudo-$L_0$-norm FISTA-net for AgileSAR imaging, we conduct experiments on one simulated SAR scene and two real SAR scenes, and the proposed method is compared with the CSA and three optimization-based algorithms. The simulation parameters are shown in Table 1, the reconstruction results are illustrated in Figure 3, Figure 4 and Figure 5, and the quantitative index analysis is given in Table 2. In each set of results, (a) is reconstructed by the CSA from Nyquist-sampled raw data; (b), (c), and (d) are reconstructed from sub-Nyquist-sampled AgileSAR raw data by the orthogonal matching pursuit (OMP) algorithm, the $L_1$-norm optimization algorithm, and the Bayesian-based algorithm, respectively; and (e) is reconstructed by the pseudo-$L_0$-norm FISTA-net from sub-Nyquist-sampled AgileSAR raw data. The algorithms in (b), (c), and (d) are optimization-based, and (a) is used as the label.
For the sparse scenes in Figure 3 and Figure 4, both the optimization-based algorithm and the pseudo- L 0 -norm FISTA-net reconstruct the scene exactly. For the complex scene in Figure 5, the pseudo- L 0 -norm FISTA-net can reconstruct the whole scene, while the optimization-based algorithm only focuses on the outline.
In the comparison among the optimization-based algorithms, for the sparse scenes, i.e., the first and second scenes, the $L_1$-norm optimization algorithm has better performance indexes but a high computation time. For the complex scene, i.e., the third scene, the slight differences in the indexes in Table 2 are not meaningful, because none of the three optimization-based algorithms exactly recovers the whole scene.
Compared with the optimization-based algorithms, the proposed method exhibits a lower NMSE and higher SSIM and PSNR values for the three SAR images in Table 2, indicating that the pseudo-$L_0$-norm FISTA-net reconstructs the scene more accurately. Additionally, the imaging time after training decreases, which significantly improves the efficiency of AgileSAR imaging. The imaging results illustrate that, with appropriate training scenes, the pseudo-$L_0$-norm FISTA-net preserves target details well. This suggests the potential of the proposed imaging method for detail preservation, which offers promising prospects for high-resolution wide-swath imaging applications.

3.3.2. AgileSAR Imaging under Different Undersampling Ratios

To evaluate the performance under different undersampling ratios, we train the pseudo-$L_0$-norm FISTA-net at undersampling ratios of 15%, 20%, and 30% and then verify the performance on the three testing SAR images. The performance indexes under the different ratios are listed in Table 3; they demonstrate that the recovered results improve as the undersampling ratio increases. The undersampling ratio of the data used in the experiments in Figure 3, Figure 4 and Figure 5 is 20%, as given by the parameters in Table 1: the average PRF in AgileSAR is 381 Hz and the PRF of the traditional SAR is 1907 Hz, so the undersampling ratio is 20%. Considering the limited space, we only display the recovered results with a 20% undersampling ratio in Figure 3, Figure 4 and Figure 5.

3.3.3. AgileSAR Imaging under Different Phase Numbers

To evaluate the effect of the number of layers, we train the network with phase numbers ranging from 5 to 15 and then verify the performance on the testing set. Figure 6 illustrates the quantitative evaluation results of the pseudo-L0-norm FISTA-net on the testing set: Figure 6a–c show the performance-index curves for the first, second, and third scenes, respectively, with respect to the phase number. The curves demonstrate that the performance indexes improve as the number of layers increases up to six and merely fluctuate beyond six, while the GPU computational cost keeps growing with the number of layers. Thus, considering the tradeoff between GPU computation cost and recovery performance, the six-layer configuration is the preferable setting for our experiments.

3.3.4. AgileSAR Imaging under Different Epoch Numbers

The number of epochs, which serves as the stopping criterion for training, is set to 200 in our manuscript. To evaluate its effect, we train the network for different numbers of epochs ranging from 40 to 200 and then verify the performance on the testing set. Figure 7 illustrates the quantitative evaluation results of the pseudo-L0-norm FISTA-net on the testing set: Figure 7a–c show the performance-index curves for the first, second, and third scenes, respectively, with respect to the number of epochs. The curves demonstrate that the performance indexes improve up to 100 epochs and only slightly fluctuate beyond 100, with longer training yielding marginally better recovery. Thus, considering the recovery performance, the 200-epoch configuration is the preferable setting for our experiments.

4. Discussion

HRWS SAR imaging has always been the goal of spaceborne SAR systems in remote sensing applications [2]. Because high resolution and wide swath are inherently conflicting requirements, the HRWS SAR system brings new difficulties and challenges. These requirements can be satisfied simultaneously by advanced observing modes, e.g., azimuthal multi-channel SAR [3] and MIMO SAR [4], but both modes are characterized by large amounts of data and a long antenna. As CS theory has developed, a novel imaging mode named AgileSAR has been proposed that requires neither large amounts of data nor a long antenna for sparse scenes, and AgileSAR imaging based on CS algorithms can recover the sparse scene [8].
Although the L1-norm regularization model in CS algorithms has achieved quite good performance, low-SNR targets are not accurately recovered, and the model may yield false targets. To further improve the recovery performance of AgileSAR imaging, we propose a pseudo-L0-norm FISTA-net. Firstly, this algorithm presents a pseudo-L0-norm regularization model, which penalizes the regularization term with the reciprocal of its previous solution so as to approximate the number of nonzero values. This penalization rule is approximately fair and makes full use of prior data information. Then, it unfolds the iterative process of FISTA into a deep network to solve the pseudo-L0-norm regularization model, improving the efficiency and accuracy of AgileSAR imaging. Additionally, the learnable sparsifying transform in this network can reconstruct more target details than the traditional sparsifying transform. For the recovery error, we deduce the expression of the MSE as follows. Assume that $F_{N \times S}$ is the submatrix constructed by taking the $S$ columns of the recovered matrix $D_{N \times M}$ specified by the index vector $\Lambda$, where each element of $\Lambda$ satisfies
$$\sigma_{\Lambda_l} \neq 0, \quad l = 1, 2, \ldots, S \qquad (28)$$
The estimated covariance matrix for the nonzero components $\sigma_{\Lambda_l}$ is
$$C = \left(F_{N \times S}^{H} F_{N \times S} + \gamma^{2} \Sigma_{\sigma_{\Lambda_l}}\right)^{-1} F_{N \times S}^{H} F_{N \times S} \left(F_{N \times S}^{H} F_{N \times S} + \gamma^{2} \Sigma_{\sigma_{\Lambda_l}}\right)^{-1} \qquad (29)$$
where $\Sigma_{\sigma_{\Lambda_l}} = \mathrm{diag}\left(\xi_{\Lambda_1}/\sigma_{\Lambda_1}, \xi_{\Lambda_2}/\sigma_{\Lambda_2}, \ldots, \xi_{\Lambda_S}/\sigma_{\Lambda_S}\right)$ is a diagonal matrix. The estimated error is the trace of the covariance matrix $C$:
$$\left\|\hat{\sigma}_{\Lambda} - \sigma_{\Lambda}\right\|_{2}^{2} = \sigma_{n}^{2}\,\mathrm{trace}(C) = \sigma_{n}^{2}\,\mathrm{trace}\!\left[\left(F_{N \times S}^{H} F_{N \times S} + \gamma^{2} \Sigma_{\sigma_{\Lambda_l}}\right)^{-1} F_{N \times S}^{H} F_{N \times S} \left(F_{N \times S}^{H} F_{N \times S} + \gamma^{2} \Sigma_{\sigma_{\Lambda_l}}\right)^{-1}\right] = \sigma_{n}^{2}\,\mathrm{trace}\!\left[\left(F_{N \times S}^{H} F_{N \times S} + \gamma^{2} \Sigma_{\sigma_{\Lambda_l}}\right)^{-2} F_{N \times S}^{H} F_{N \times S}\right] \geq \sigma_{n}^{2}\left[2\,\mathrm{trace}\left(F_{N \times S}^{H} F_{N \times S} + \gamma^{2} \Sigma_{\sigma_{\Lambda_l}}\right)^{-1} - \mathrm{trace}\left(F_{N \times S}^{H} F_{N \times S}\right)^{-1}\right] \qquad (30)$$
Assume that $A = F_{N \times S}^{H} F_{N \times S}$ and $B = F_{N \times S}^{H} F_{N \times S} + \gamma^{2} \Sigma_{\sigma_{\Lambda_l}}$, with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_S$ and $\alpha_1, \alpha_2, \ldots, \alpha_S$, respectively. Based on the structure of the recovered matrix $D_{N \times M}$, the $L_2$-norm of each column is nearly equal; assume $\|D_l\|_2^2 \approx L$, where $L$ is the number of samples in one aperture time [8]. Formula (30) then simplifies to
$$\left\|\hat{\sigma}_{\Lambda} - \sigma_{\Lambda}\right\|_{2}^{2} \geq \sigma_{n}^{2}\left[2\,\mathrm{trace}\left(B^{-1}\right) - \mathrm{trace}\left(A^{-1}\right)\right] \qquad (31)$$
Gershgorin's circle theorem [36] indicates that
$$\left|\lambda_l - L\right| \leq (S-1) \cdot u \cdot L \;\Rightarrow\; \left[1-(S-1)u\right] L \leq \lambda_l \leq \left[1+(S-1)u\right] L \;\Rightarrow\; \frac{1}{\left[1+(S-1)u\right] L} \leq \lambda_l^{-1} \leq \frac{1}{\left[1-(S-1)u\right] L} \;\Rightarrow\; \frac{S}{\left[1+(S-1)u\right] L} \leq \sum_{l=1}^{S} \lambda_l^{-1} = \mathrm{trace}\left(A^{-1}\right) \leq \frac{S}{\left[1-(S-1)u\right] L} \qquad (32)$$
$$\left|\alpha_l - \left(L + \gamma^{2} \frac{\xi_{\Lambda_l}}{\sigma_{\Lambda_l}}\right)\right| \leq (S-1) \cdot u \cdot L \;\Rightarrow\; \frac{\sigma_{\Lambda_l}\left[1-(S-1)u\right] L + \gamma^{2} \xi_{\Lambda_l}}{\sigma_{\Lambda_l}} \leq \alpha_l \leq \frac{\sigma_{\Lambda_l}\left[1+(S-1)u\right] L + \gamma^{2} \xi_{\Lambda_l}}{\sigma_{\Lambda_l}} \;\Rightarrow\; \frac{\sigma_{\Lambda_l}}{\sigma_{\Lambda_l}\left[1+(S-1)u\right] L + \gamma^{2} \xi_{\Lambda_l}} \leq \alpha_l^{-1} \leq \frac{\sigma_{\Lambda_l}}{\sigma_{\Lambda_l}\left[1-(S-1)u\right] L + \gamma^{2} \xi_{\Lambda_l}} \;\Rightarrow\; \frac{S}{\left[1+(S-1)u\right] L + \gamma^{2} \max_l\left(\xi_{\Lambda_l}/\sigma_{\Lambda_l}\right)} \leq \sum_{l=1}^{S} \alpha_l^{-1} = \mathrm{trace}\left(B^{-1}\right) \leq \frac{S}{\left[1-(S-1)u\right] L + \gamma^{2} \min_l\left(\xi_{\Lambda_l}/\sigma_{\Lambda_l}\right)} \qquad (33)$$
where $u = \max_{1 \le m_1 \neq m_2 \le M} \left|\left\langle D_{m_1}, D_{m_2}\right\rangle\right| / \left(\left\|D_{m_1}\right\|_2 \cdot \left\|D_{m_2}\right\|_2\right)$ is the mutual coherence coefficient, which reflects the maximum similarity between any two different columns $m_1, m_2$ of the recovered matrix $D_{N \times M}$. Combining Formulas (32) and (33) gives
$$\frac{2S}{\left[1+(S-1)u\right] L + \gamma^{2} \max_l\left(\xi_{\Lambda_l}/\sigma_{\Lambda_l}\right)} - \frac{S}{\left[1-(S-1)u\right] L} \leq 2\,\mathrm{trace}\left(B^{-1}\right) - \mathrm{trace}\left(A^{-1}\right) \leq \frac{2S}{\left[1-(S-1)u\right] L + \gamma^{2} \min_l\left(\xi_{\Lambda_l}/\sigma_{\Lambda_l}\right)} - \frac{S}{\left[1+(S-1)u\right] L} \qquad (34)$$
So, we have the MSE of the pseudo- L 0 -norm regularization model
$$\left\|\hat{\sigma}_{\Lambda} - \sigma_{\Lambda}\right\|_{2}^{2} \geq \sigma_{n}^{2}\left[2\,\mathrm{trace}\left(B^{-1}\right) - \mathrm{trace}\left(A^{-1}\right)\right] \geq \sigma_{n}^{2}\left[\frac{2S}{\left[1+(S-1)u\right] L + \gamma^{2} \max_l\left(\xi_{\Lambda_l}/\sigma_{\Lambda_l}\right)} - \frac{S}{\left[1-(S-1)u\right] L}\right] \qquad (35)$$
Similarly, the MSE of the L1-norm regularization model is
$$\left\|\hat{\sigma}_{l_1} - \sigma_{\Lambda}\right\|_{2}^{2} \geq \sigma_{n}^{2}\left[2\,\mathrm{trace}\left(B^{-1}\right) - \mathrm{trace}\left(A^{-1}\right)\right] \geq \sigma_{n}^{2}\left[\frac{2S}{\left[1+(S-1)u\right] L + \gamma_{1} \max_l\left(1/\sigma_{\Lambda_l}\right)} - \frac{S}{\left[1-(S-1)u\right] L}\right] \qquad (36)$$
From the above deduction, we obtain the optimal performance achievable by the two regularization models. Comparing (35) and (36), the only difference lies in the penalty-dependent terms $\gamma^{2} \max_l\left(\xi_{\Lambda_l}/\sigma_{\Lambda_l}\right)$ and $\gamma_{1} \max_l\left(1/\sigma_{\Lambda_l}\right)$. The pseudo-L0-norm optimization algorithm is more sensitive to small coefficients: for low-SNR targets, i.e., $\xi_{\Lambda_l} = 1/\left(\Gamma \sigma_{\Lambda_l} + \iota\right) > 1$, our proposed algorithm performs better and recovers low-SNR targets more accurately.
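The eigenvalue inclusion behind the bound (32) can be checked numerically. The following is a small sanity check, a sketch under the assumption of a random complex matrix whose columns are scaled to energy L; the sizes N, S, and L are illustrative, not the system values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, S, L = 500, 4, 64.0                       # illustrative sizes, not the paper's system values
F = rng.standard_normal((N, S)) + 1j * rng.standard_normal((N, S))
F *= np.sqrt(L) / np.linalg.norm(F, axis=0)  # enforce column energy ||f_l||_2^2 = L

A = F.conj().T @ F                           # Gram matrix; diagonal entries equal L
u = np.max(np.abs(A) / L - np.eye(S))        # coherence of these S columns (<= the full-matrix u)
assert (S - 1) * u < 1                       # Gershgorin bound is informative only in this regime

trace_inv = np.trace(np.linalg.inv(A)).real
lower = S / ((1 + (S - 1) * u) * L)
upper = S / ((1 - (S - 1) * u) * L)
assert lower <= trace_inv <= upper           # the trace(A^{-1}) inclusion of (32) holds
```

The check only verifies the deterministic Gershgorin inclusion; when (S − 1)·u ≥ 1, the lower disc edge becomes non-positive and the bound is vacuous.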
AgileSAR imaging comprises three steps: range compression, range cell migration correction (RCMC), and azimuth compression. After range compression and RCMC, the pseudo-L0-norm FISTA-net is employed to perform azimuth compression. The simulation and data experiments in Figure 3, Figure 4 and Figure 5 demonstrate that the algorithm achieves smaller NMSE and higher SSIM and PSNR than the optimization-based algorithms. Additionally, the proposed method significantly reduces the imaging time once the model is trained, which improves the efficiency of AgileSAR imaging.
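Of these steps, range compression is matched filtering of each range line with the transmitted chirp. A minimal single-target illustration, using the pulse width and range sampling frequency of Table 1 but an assumed chirp bandwidth B (the paper does not list the range bandwidth):

```python
import numpy as np

fs = 40e6   # range sampling frequency, from Table 1 (40 MHz)
T = 50e-6   # pulse width, from Table 1 (50 us)
B = 20e6    # chirp bandwidth: illustrative assumption, not a Table 1 value

n_pulse = int(round(T * fs))
t = np.arange(n_pulse) / fs - T / 2
chirp = np.exp(1j * np.pi * (B / T) * t**2)       # linear FM transmitted pulse

N, delay = 8192, 3000                             # echo length and target delay in samples
echo = np.zeros(N, dtype=complex)
echo[delay:delay + n_pulse] = chirp               # single noiseless point target

# Range compression: correlate the echo with the pulse via FFT (matched filter)
spectrum = np.fft.fft(echo) * np.conj(np.fft.fft(chirp, N))
compressed = np.fft.ifft(spectrum)

assert np.argmax(np.abs(compressed)) == delay     # energy collapses to the target's range cell
```

The compressed peak sits exactly at the target delay, which is what makes the subsequent RCMC and sparse azimuth compression well-posed.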
In terms of potential applications, the pseudo-L0-norm FISTA-net recovers more target details and reduces computational complexity, further improving the efficiency and accuracy of AgileSAR imaging. The algorithm can also be applied to traditional HRWS systems, e.g., azimuthal multi-channel SAR and MIMO SAR, to lower the amount of data and thus relieve the pressure on data storage.
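The core of the azimuth compression — soft thresholding with weights set to the reciprocal of the previous solution, wrapped in FISTA's momentum — can be sketched on a generic sparse recovery problem. This is a plain NumPy toy with illustrative sizes and hand-picked λ and ε, not the trained unfolding network with learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 100, 5                        # measurements, unknowns, sparsity (illustrative)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
y = A @ x_true                              # noiseless sub-Nyquist measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
lam, eps = 0.05, 0.1                        # regularization weight and reweighting floor
x = np.zeros(n)
z = x.copy()
t = 1.0
w = np.ones(n)                              # all-ones weights = plain L1 at the start

for _ in range(300):
    v = z - step * A.T @ (A @ z - y)                                   # gradient step on data term
    x_new = np.sign(v) * np.maximum(np.abs(v) - step * lam * w, 0.0)   # weighted soft threshold
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    z = x_new + (t - 1.0) / t_new * (x_new - x)                        # FISTA momentum
    x, t = x_new, t_new
    w = 1.0 / (np.abs(x) + eps)             # pseudo-L0 reweighting: reciprocal of previous solution

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In the unfolded network, the step size, thresholds, and sparsifying transform are learned per layer instead of being fixed as above, which is what yields the speed and accuracy gains reported in Table 2.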

5. Conclusions

In this paper, we propose a signal processing algorithm for an innovative single-channel HRWS system called AgileSAR for sparse scenes. To further improve AgileSAR imaging performance and efficiency, a pseudo-L0-norm FISTA-net is proposed. Firstly, the pseudo-L0-norm regularization model adopts an approximately fair penalization rule that uses prior scene information to approximate the number of nonzero variables based on Bayesian estimation, so that it recovers target details more accurately than the L1-norm regularization model. Then, the proposed algorithm unfolds the iterative process of FISTA into a deep network to solve the pseudo-L0-norm regularization model, improving the accuracy and efficiency of AgileSAR imaging. In addition, the learnable sparsifying transform in this network can recover more details than the traditional sparsifying transform. Finally, quantitative analysis and simulated experiments demonstrate the effectiveness and efficiency of the proposed algorithm.

Author Contributions

Methodology, W.C., J.G., F.M. and L.Z.; Validation, W.C., J.G. and F.M.; Writing—original draft, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of these data. Data were bought from Deutsches Zentrum für Luft- und Raumfahrt (DLR) and are available from the author Wenjiao Chen with the permission of DLR.

Acknowledgments

The authors would like to thank the editors and the reviewers for their help and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krieger, G.; Moreira, A.; Fiedler, H.; Hajnsek, I.; Werner, M.; Younis, M.; Zink, M. TanDEM-X: A satellite formation for high-resolution SAR interferometry. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3317–3341. [Google Scholar] [CrossRef]
  2. Krieger, G.; Younis, M.; Gebert, N.; Huber, S.; Moreira, A. Advanced concepts for high-resolution wide-swath SAR imaging. In Proceedings of the 8th European Conference on Synthetic Aperture Radar, Aachen, Germany, 7–10 June 2010; pp. 1–4. [Google Scholar]
  3. Sikaneta, I.; Gierull, C.H.; Cerutti-Maori, D. Optimum signal processing for multichannel SAR: With application to high-resolution wideswath imaging. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6095–6109. [Google Scholar] [CrossRef]
  4. Krieger, G. MIMO-SAR: Opportunities and pitfalls. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2628–2645. [Google Scholar] [CrossRef]
  5. Candes, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory. 2005, 51, 4203–4215. [Google Scholar] [CrossRef]
  6. Donoho, D.L.; Elad, M.; Temlyakov, V.N. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 2006, 52, 6–18. [Google Scholar] [CrossRef]
  7. Baraniuk, R. Compressive sensing. IEEE Signal Process Mag. 2007, 24, 118–121. [Google Scholar] [CrossRef]
  8. Yu, Z.; Chen, W.; Xiao, P.; Li, C. AgileSAR: Achieving Wide-Swath Spaceborne SAR Based on Time-Space Sampling. IEEE Access 2019, 7, 674–686. [Google Scholar] [CrossRef]
  9. Tillmann, A.M.; Pfetsch, M.E. The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory 2014, 60, 1248–1259. [Google Scholar] [CrossRef]
  10. Tropp, J.A.; Gilbert, A.C. Signal recovery from partial information via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef]
  11. Cerrone, C.; Cerulli, R.; Golden, B. Carousel greedy: A generalized greedy algorithm with applications in optimization. Comput. Oper. Res. 2017, 85, 97–112. [Google Scholar] [CrossRef]
  12. Candes, E.; Tao, T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007, 35, 2313–2351. [Google Scholar]
  13. Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
  14. Ji, S.; Xue, Y.; Carin, L. Bayesian Compressive Sensing. IEEE Trans. Signal Process 2008, 56, 2346–2356. [Google Scholar] [CrossRef]
  15. Tipping, M.E.; Faul, A.C. Fast marginal likelihood maximization for sparse Bayesian models. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 3–6 January 2003; pp. 3–6. [Google Scholar]
  16. Babacan, S.D.; Molina, R.; Katsaggelos, A.K. Bayesian compressive sensing using laplace priors. IEEE Trans. Image Process. 2010, 19, 53–63. [Google Scholar] [CrossRef] [PubMed]
  17. Arjoune, Y.; Kaabouch, N.; Ghazi, H.E.; Tamtaoui, A. Compressive sensing: Performance comparison of sparse recovery algorithms. In Proceedings of the Annual Computing and Communication Workshop and Conference (CCWC) 2017, Las Vegas, NV, USA, 9–11 January 2017; pp. 1–7. [Google Scholar]
  18. Joshi, S.; Siddamal, K.V.; Saroja, V.S. Performance analysis of compressive sensing reconstruction. In Proceedings of the 2015 2nd International Conference on Electronics and Communication Systems (ICECS), Coimbatore, India, 26–27 February 2015; pp. 724–729. [Google Scholar]
  19. Celik, S.; Basaran, M.; Erkucuk, S.; Cirpan, H. Comparison of compressed sensing based algorithms for sparse signal reconstruction. In Proceedings of the 2016 24th Signal Processing and Communication Application Conference (SIU), Zonguldak, Turkey, 16–19 May 2016. [Google Scholar]
  20. Xiong, K.; Zhao, G.; Wang, Y.; Shi, G. SPB-Net: A Deep Network for SAR Imaging and Despeckling with DownSampled Data. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9238–9256. [Google Scholar] [CrossRef]
  21. Xiong, K.; Zhao, G.; Wang, Y.; Shi, G.; Chen, S. Lq-SPB-Net: A Real-Time Deep Network for SAR Imaging and Despeckling. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5209721. [Google Scholar] [CrossRef]
  22. Xiong, K.; Zhao, G.; Wang, Y.; Shi, G. SAR Imaging and Despeckling Based on Sparse, Low-Rank, and Deep CNN Priors. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4501205. [Google Scholar] [CrossRef]
  23. Hershey, J.R.; Roux, J.L.; Weninger, F. Deep Unfolding: Model-Based Inspiration of Novel Deep Architectures. arXiv 2014. [Google Scholar] [CrossRef]
  24. Zhang, J.; Ghanem, B. ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 1828–1837. [Google Scholar]
  25. You, D.; Xie, J.; Zhang, J. ISTA-Net++: Flexible Deep Unfolding Network for Compressive Sensing. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021. [Google Scholar]
  26. Zhang, K.; Gool, L.V.; Timofte, R. Deep Unfolding Network for Image Super-Resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3214–3223. [Google Scholar]
  27. Xiang, J.; Dong, Y.; Yang, Y. FISTA-Net: Learning A Fast Iterative Shrinkage Thresholding Network for Inverse Problems in Imaging. IEEE Trans. Med. Imaging 2021, 40, 1329–1339. [Google Scholar] [CrossRef]
  28. Zhou, G.; Xu, Z.; Fan, Y.; Zhang, Z.; Qiu, X.; Zhang, B.; Fu, K.; Wu, Y. HPHR-SAR-Net: Hyperpixel High-Resolution SAR Imaging Network Based on Nonlocal Total Variation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 8595–8608. [Google Scholar] [CrossRef]
  29. Wang, M.; Zhang, Z.; Qiu, X.; Gao, S.; Wang, Y. ATASI-Net: An Efficient Sparse Reconstruction Network for Tomographic SAR Imaging with Adaptive Threshold. IEEE Trans. Geosci. Remote Sensing 2023, 61, 4701918. [Google Scholar] [CrossRef]
  30. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429. [Google Scholar] [CrossRef]
  31. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing Sparsity by Reweighted L1 Minimization. J. Fourier Anal. Appl. 2007, 14, 877–905. [Google Scholar] [CrossRef]
  32. Seeger, M.W.; Nickisch, H. Compressed sensing and Bayesian experimental design. In Proceedings of the 25th International Conference on Machine Learning (ICML), Helsinki, Finland, 5–9 July 2008; pp. 912–919. [Google Scholar]
  33. Rousset, F.; Ducros, N.; Farina, A.; Valentini, G.; Andrea, C.; Peyrin, F. Adaptive basis scan by wavelet prediction for single-pixel imaging. IEEE Trans. Comput. Imaging 2017, 3, 36–46. [Google Scholar] [CrossRef]
  34. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  36. Golub, G.; Loan, C.F.V. Matrix Computations, 3rd ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1996. [Google Scholar]
Figure 1. The imaging geometry. $\eta$ is the slow time along the azimuth, and $R_i(\eta)$ represents the range between the radar and the point target located at the coordinate $(x_i, y_i, 0)$ at the azimuth time $\eta$. $x_i$ and $y_i$ denote the azimuth and range coordinates, respectively. Because a two-dimensional image, i.e., azimuth and range, is considered, the coordinate $(x_i, y_i, 0)$ is simplified as $(x_i, y_i)$. In the figure, one marker type denotes the Nyquist samples, and the other denotes the actual azimuthal samples chosen randomly from the Nyquist samples in the AgileSAR system.
Figure 2. The overall framework of the proposed pseudo- L 0 -norm FISTA-net. Specifically, the pseudo- L 0 -norm FISTA-net consists of four main modules, i.e., gradient descent, reweight updating, proximal mapping, and momentum updating.
Figure 3. The reconstructed results: (a) the result reconstructed by RDA with Nyquist sampling raw data; (b) the result reconstructed by the OMP algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (c) the result reconstructed by the L 1 -norm optimization algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (d) the result reconstructed by the Bayesian-based algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (e) the result reconstructed by the pseudo- L 0 -norm FISTA-net with 20% sub-Nyquist sampling raw data in AgileSAR.
Figure 4. The reconstructed results: (a) the result reconstructed by RDA with Nyquist sampling raw data; (b) the result reconstructed by the OMP algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (c) the result reconstructed by the L 1 -norm optimization algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (d) the result reconstructed by the Bayesian-based algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (e) the result reconstructed by the pseudo- L 0 -norm FISTA-net with 20% sub-Nyquist sampling raw data in AgileSAR.
Figure 5. The reconstructed results: (a) the result reconstructed by RDA with Nyquist sampling raw data; (b) the result reconstructed by the OMP algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (c) the result reconstructed by the L 1 -norm optimization algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (d) the result reconstructed by the Bayesian-based algorithm with 20% sub-Nyquist sampling raw data in AgileSAR; (e) the result reconstructed by the pseudo- L 0 -norm FISTA-net with 20% sub-Nyquist sampling raw data in AgileSAR.
Figure 6. Performance-index curves for (a) the first, (b) the second, and (c) the third testing scene with respect to different phase numbers.
Figure 7. Performance-index curves for (a) the first, (b) the second, and (c) the third testing scene with respect to different epoch numbers.
Table 1. Simulated parameters.

| Parameter | Data |
| --- | --- |
| Average PRF in the AgileSAR (Hz) | 381 |
| PRF in the traditional SAR (Hz) | 1907 |
| Range sampling frequency (MHz) | 40 |
| Referred slant range (km) | 888 |
| Pulse width (µs) | 50 |
| Doppler bandwidth (Hz) | 1401 |
| Wavelength (mm) | 5.55 |
| Velocity (m/s) | 7513 |
| Height (km) | 693 |
| Squint angle (°) | 0 |
Table 2. Performance indexes under different algorithms. OMP and the L1-norm optimization are optimization-based algorithms.

| Scene | Algorithm | NMSE | SSIM | PSNR (dB) | Time (s) |
| --- | --- | --- | --- | --- | --- |
| Figure 3 | OMP | 0.0042 | 0.9850 | 36.86 | 1.0279 |
| Figure 3 | L1-norm optimization | 0.0042 | 0.9850 | 36.86 | 57.6280 |
| Figure 3 | Bayesian-based | 0.0041 | 0.9850 | 36.91 | 33.3824 |
| Figure 3 | Pseudo-L0-norm FISTA-net | 0.0001 | 0.9957 | 52.88 | 0.0378 |
| Figure 4 | OMP | 1.0046 | 0.0685 | 17.23 | 41.83 |
| Figure 4 | L1-norm optimization | 0.7913 | 0.1998 | 18.27 | 2012.1 |
| Figure 4 | Bayesian-based | 0.7249 | 0.1065 | 18.65 | 411.3130 |
| Figure 4 | Pseudo-L0-norm FISTA-net | 0.0216 | 0.8980 | 33.91 | 2.3124 |
| Figure 5 | OMP | 0.8674 | 0.0263 | 11.39 | 11.10 |
| Figure 5 | L1-norm optimization | 0.8129 | 0.0529 | 11.67 | 442.9401 |
| Figure 5 | Bayesian-based | 0.7539 | 0.0842 | 11.99 | 124.1645 |
| Figure 5 | Pseudo-L0-norm FISTA-net | 0.0615 | 0.7770 | 22.88 | 0.0309 |
Table 3. Performance indexes of the pseudo-L0-norm FISTA-net algorithm under different undersampling ratios.

| Scene | Undersampling Ratio | NMSE | SSIM | PSNR (dB) |
| --- | --- | --- | --- | --- |
| First scene | 30% | 0.0001 | 0.9967 | 53.89 |
| First scene | 20% | 0.0001 | 0.9957 | 52.88 |
| First scene | 15% | 0.0135 | 0.9654 | 31.76 |
| Second scene | 30% | 0.0126 | 0.9411 | 36.23 |
| Second scene | 20% | 0.0216 | 0.8980 | 33.91 |
| Second scene | 15% | 0.0980 | 0.5457 | 27.34 |
| Third scene | 30% | 0.0446 | 0.8308 | 24.27 |
| Third scene | 20% | 0.0615 | 0.7770 | 22.88 |
| Third scene | 15% | 0.2297 | 0.4286 | 17.16 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chen, W.; Geng, J.; Meng, F.; Zhang, L. Pseudo-L0-Norm Fast Iterative Shrinkage Algorithm Network: Agile Synthetic Aperture Radar Imaging via Deep Unfolding Network. Remote Sens. 2024, 16, 671. https://doi.org/10.3390/rs16040671


