Article

Generalized Persistent Polar Format Algorithm for Fast Imaging of Airborne Video SAR

1 School of Optoelectronic Information and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
2 Terahertz Technology Innovation Research Institute, Terahertz Spectrum and Imaging Technology Cooperative Innovation Center, University of Shanghai for Science and Technology, Shanghai 200093, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(11), 2807; https://doi.org/10.3390/rs15112807
Submission received: 24 April 2023 / Revised: 23 May 2023 / Accepted: 26 May 2023 / Published: 28 May 2023

Abstract

As a cutting-edge research direction in the field of radar imaging, video SAR has the capability of high-resolution and persistent imaging at any time and in any weather. Video SAR requires high computational efficiency from the imaging algorithm, and the polar format algorithm (PFA) has become the preferred imaging algorithm because of its applicability to the spotlight mode and its relatively high computational efficiency. However, traditional PFA also has problems, such as low efficiency and limited scene size. To address these problems, a generalized persistent polar format algorithm, called GPPFA, is proposed for airborne video SAR imaging that meets the persistent imaging requirements of airborne video SAR under multitasking conditions. Firstly, the wavenumber domain resampling characteristics of video SAR PFA are analyzed, and a generalized resampling method is proposed to obtain higher efficiency. Secondly, for the problem of scene size limitation caused by wavefront curvature error, an efficient compensation method applicable to different scene sizes is proposed. GPPFA is capable of video SAR imaging at different wavebands, different slant ranges, and arbitrary scene sizes. Point target and extended target experiments verify the effectiveness and efficiency of the proposed method.

1. Introduction

Video synthetic aperture radar (SAR) combines SAR with high frame rate video display technology, which allows for continuous observation of targets at night and in harsh conditions such as rain, fog, sand, and dust [1,2]. As an inheritance and expansion of traditional SAR, video SAR can display information on ground motion targets in all directions [3,4]. Sandia National Laboratories (SNL) and the Defense Advanced Research Projects Agency (DARPA) first proposed the concept of video SAR [5], and it has since been listed as a key research project by many countries and institutions. Currently, many institutions have published their video SAR systems and results, such as SNL [6,7], DARPA [8], General Atomics Aeronautical Systems Incorporated (GA-ASI) [9], the Fraunhofer Institute for High-Frequency Physics and Radar Techniques (FHR) [10], the Jet Propulsion Laboratory (JPL) [11], and ICEYE [12,13], which demonstrates the importance and broad application value of video SAR.
SAR image formation can be treated as a class of ill-posed linear inverse problems [14,15], so the imaging algorithm is the core part of video SAR systems. The persistent imaging characteristics of video SAR require it to work in spotlight mode [16]; therefore, the traditional strip-map SAR imaging algorithms, such as the range Doppler algorithm (RDA) [17], chirp scaling algorithm (CSA) [18], and frequency scaling algorithm (FSA) [19], are no longer applicable to video SAR. The current mainstream imaging algorithms for spotlight mode mainly include the back-projection algorithm (BPA) [20,21] and the polar format algorithm (PFA) [22,23,24]. BPA requires point-by-point processing of echo signals, which makes its computational load too high to be applied in video SAR. Modified versions, such as the fast back-projection algorithm (FBP) [25] and the fast factorized back-projection algorithm (FFBP) [26], improve the efficiency of BPA, but at the cost of imaging accuracy, and they are still less efficient than frequency-domain algorithms. PFA is the preferred imaging algorithm for video SAR due to its relatively high computational efficiency and ease of incorporating motion compensation algorithms [27].
However, traditional PFA also has some problems. Firstly, the two-dimensional (2D) wavenumber domain interpolation is inefficient, which limits the efficiency of PFA. Secondly, due to the planar wavefront approximation, the wavefront curvature error of PFA limits its effective scene size and image fidelity under the conditions of a short slant range or a large scene size [28,29,30]. Thirdly, the video SAR frames should be oriented in a fixed direction in the ground output coordinate system (GOCS); otherwise, the image rotates with the azimuth angle of the video SAR [31]. In [32], by changing the sampling rate and sampling time of the radar system in real time, a wavenumber domain signal with azimuth-uniform distribution is obtained directly, while the azimuth chirp-z transform (CZT) is used to avoid wavenumber domain interpolation, achieving higher efficiency. In [33], a parameter-adjusting autoregistration PFA (PAAR-PFA) for linear spotlight mode video SAR is proposed; PAAR-PFA calculates the relevant parameters based on a parameter adjustment strategy to avoid range interpolation and geometric correction, but as in [32], the system architecture has a higher hardware cost, and the scene size limitation problem is still not solved. In [34], a unified coordinate system algorithm (UCSA) is proposed for terahertz (THz) video SAR imaging; UCSA exploits the small-angle property of THz video SAR, a 2D CZT is used to achieve wavenumber domain resampling, and the effect of the residual quadratic phase error is ignored, showing the great advantages of THz technology [35,36] in video SAR. However, UCSA cannot be applied well to low-band video SAR, which limits its application. In [37], an efficient video SAR imaging algorithm based on the type-3 non-uniform fast Fourier transform (NuFFT-3) is proposed by R. Hu; the method combines Fourier imaging and image correction to directly obtain distortion-free images. The method was later extended to address the defocusing problem, yielding the refocusing and zoom-in PFA (RZPFA) [28]. In [38], the orthorectified polar format algorithm (OPFA) incorporates RZPFA with the digital elevation model (DEM); OPFA can obtain the orthorectified image efficiently without postprocessing, even with the residual distortion and defocus caused by rugged terrain.
To address the scene size limitation of PFA, some methods, such as spatially variant post filtering (SVPF) [39,40], have been proposed to resolve the defocus of the imaging results. In [41], an extended polar format algorithm (EPFA) is proposed to solve the wavenumber domain resampling and wavefront curvature error of squint SAR, but EPFA does not consider geometric correction, so the constant and linear terms of the wavefront curvature error remain and cause geometric distortion of the image. In [42], a quadtree beam segmentation-based PFA is proposed to solve the image defocus problem of wide-angle staring SAR (WAS-SAR), but the computational load of this method increases significantly with the number of sub-beams. In [43], an extended PFA is proposed to reduce the number of segmentations by combining sub-block imaging with spatially variant post-filtering. However, the extensive image interpolation and alignment operations make its computational complexity too high and, therefore, unsuitable for video SAR.
In this paper, a generalized persistent polar format algorithm (GPPFA) is proposed to address the above problem. First, the imaging model of airborne video SAR is established, and the principles of high resolution and high frame rate of video SAR are analyzed. Then, the critical conditions of azimuth uniform and non-uniform resampling are analyzed, and the most suitable methods are used to achieve wavenumber domain resampling for THz video SAR and X-band video SAR, respectively. In addition, for the scene size limitation problem, the geometric distortion mapping (GDM) of video SAR PFA is established, and a residual phase error compensation method is proposed, which selects different processing procedures for different scene sizes to achieve higher operational efficiency. The main contributions of this paper are:
(1)
The principle of wavenumber domain resampling in video SAR PFA is analyzed, and a resampling criterion applicable to different wavebands of video SAR is proposed;
(2)
The effect of wavefront curvature error in video SAR PFA is analyzed, and an efficient compensation method applicable to different scene sizes is proposed;
(3)
The proposed method is capable of video SAR fast imaging with high image fidelity in different wavebands, as well as arbitrary slant range and scene size.
The remainder of the paper is organized as follows: Section 2 establishes the airborne video SAR imaging model, analyzes the high-resolution and high-frame-rate imaging characteristics of video SAR, and briefly introduces the PFA in video SAR. Section 3 analyzes the wavefront curvature error of PFA and discusses the proposed GPPFA in detail. Section 4 validates the proposed method by point target and extended target experiments and compares it with other methods to verify the generality, effectiveness, and efficiency of the proposed method. Finally, Section 5 summarizes the findings of the study.

2. Materials

2.1. Video SAR Geometric Definition

The imaging model of the airborne video SAR is shown in Figure 1, where $R_s$ is the flight radius, $H$ is the altitude, $v$ is the flight speed, $\theta_k$ denotes the central azimuth angle of the $k$th frame sub-aperture, $\theta_s$ is the rotation angle of the video SAR, and the ground scene center coincides with the origin O of the Cartesian coordinate system X-Y-Z. During the flight, the radar beam always illuminates the region of interest (ROI) and receives echoes. For simplicity, the position of the antenna phase center (APC) in the spherical coordinate system can be defined as:
$$X_a = R_a\cos\varphi\cos\theta, \quad Y_a = R_a\cos\varphi\sin\theta, \quad Z_a = R_a\sin\varphi \tag{1}$$
where $R_a$ represents the slant range between the APC and the scene center O, $\theta$ indicates the carrier azimuth angle, and $\varphi$ represents the elevation angle of the radar platform.
The slant range $R_t$ between the APC and a ground target $p(x, y, 0)$ can be expressed as:
$$R_t = \sqrt{\left(X_a - x\right)^2 + \left(Y_a - y\right)^2 + Z_a^2} \tag{2}$$
Let $(X_c, Y_c, Z_c) = (X_a, Y_a, Z_a)\big|_{t=0}$ represent the position of the APC at the aperture center $t = 0$; the instantaneous slant range $R_{tk}$ can then be expressed as:
$$R_{tk} = \sqrt{\left(X_c - x\right)^2 + \left(Y_c - y\right)^2 + Z_c^2} \tag{3}$$
The circular spotlight mode ensures the continuous acquisition of ground targets, and in order to form a real-time updated video, the following two points must be met: (1) Efficient processing of the echo signals to quickly obtain a single frame video SAR image. (2) The consistency between different frame images should be ensured; that is, the position of stationary ground targets in the image should be at the same pixel point p . This requires the final image to be located in a unified ground output coordinate system (GOCS), where the X-Y-Z axis of the Cartesian coordinate system always points towards the range, azimuth, and altitude directions, respectively.
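As a quick illustration of the geometry defined above, the following minimal NumPy sketch evaluates the APC position of (1) and the slant range of (3) for one ground target. All numerical values are assumed for illustration and are not the Table 3 system parameters.

```python
import numpy as np

# Illustrative geometry of Eqs. (1)-(3); all parameter values are assumed.
Ra, phi = 1000.0, np.deg2rad(45.0)       # slant range to scene center, elevation angle
theta_k = np.deg2rad(0.0)                # azimuth angle at the aperture center

# APC position at the aperture center, Eq. (1)
Xc = Ra * np.cos(phi) * np.cos(theta_k)
Yc = Ra * np.cos(phi) * np.sin(theta_k)
Zc = Ra * np.sin(phi)

# Instantaneous slant range to a ground target p(x, y, 0), Eq. (3)
x, y = 50.0, 50.0
Rtk = np.sqrt((Xc - x)**2 + (Yc - y)**2 + Zc**2)
print(f"R_tk = {Rtk:.3f} m, differential range = {Rtk - Ra:.3f} m")
```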

2.2. The Resolution and Frame Rate of Video SAR

High frame rate imaging is the most significant feature that distinguishes video SAR from traditional SAR. In order to achieve continuous tracking of ground-moving targets, the system frame rate of video SAR must reach at least 5 Hz, and the imaging resolution of video SAR must reach at least 0.2 m to accurately identify the observed targets [16].
According to the theory of SAR imaging [44], the range resolution ρ r and azimuth resolution ρ a are described as:
$$\rho_r = \frac{c}{2B\cos\varphi}, \quad \rho_a = \frac{\lambda}{2\theta_s\cos\varphi} \tag{4}$$
where B is the bandwidth, c denotes the speed of light, and λ denotes the wavelength.
Assuming that $\varphi = 45°$, in order to achieve a range resolution of 0.2 m, the system bandwidth must reach at least 1.06 GHz. Usually, in order to achieve consistency between different frame images, the range resolution and azimuth resolution should match, i.e., $\rho_x = \rho_r = \rho_a$. According to (4), the synthetic aperture angle should be $\theta_s = B/f_c$. The azimuth resolution $\rho_a$ is inversely proportional to the frequency and the synthetic aperture angle, which means that for high-frequency video SAR, only a small synthetic aperture angle is required to achieve the required azimuth resolution. Under the condition of constant speed, the corresponding synthetic aperture time is therefore relatively short. When the system parameters and azimuth resolution are constant, the imaging frame rate of video SAR is related to the system carrier frequency $f_c$ and the sub-aperture overlap ratio $\omega$:
$$F_r = \frac{2\rho_a v}{\left(1 - \omega\right) R_a\, c/f_c} \tag{5}$$
where $\omega$ is the aperture overlap ratio.
Assume that the average slant range of the video SAR is $R_a = 1000$ m and the average speed is $v = 50$ m/s. To achieve a 5 Hz frame rate without considering aperture overlap, the carrier frequency of the video SAR needs to exceed 207 GHz. Therefore, video SAR usually works in a high-frequency band to meet the system frame rate requirement. In addition, it is also necessary to consider the impact of atmospheric attenuation on the signal [45]. As the frequency of 220 GHz lies near an atmospheric window, the relatively small attenuation makes it a reasonable choice. Table 1 shows the video SAR parameters at different frequencies. It can be seen that, to achieve an imaging rate of 5 Hz, X-band video SAR requires a minimum aperture overlap of 94%, while THz video SAR does not require aperture overlap at all, and X-band video SAR requires a longer synthetic aperture time. When the hardware cannot support a high carrier frequency, increasing the aperture overlap ratio $\omega$ can also improve the image frame rate, but the synthetic aperture time of the system remains unchanged; that is, the delay between the video SAR image and the actual scene remains unchanged [10,46].
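For reference, the bandwidth requirement quoted above can be checked directly from (4); a minimal NumPy sketch with the stated values ($\rho_r = 0.2$ m, $\varphi = 45°$) is given below (the paper's own experiments use MATLAB).

```python
import numpy as np

# Required bandwidth for a 0.2 m range resolution at phi = 45 deg, from Eq. (4).
c, phi, rho_r = 3e8, np.pi / 4, 0.2
B = c / (2 * rho_r * np.cos(phi))
print(f"B = {B / 1e9:.2f} GHz")   # ~1.06 GHz, as stated above
```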

2.3. Signal Model and Polar Format Algorithm

Assuming the transmitted signal of the video SAR is a linear frequency modulation (LFM) signal, the echo signal of any ground target $p(x, y)$ has the following form:
$$s_R(\tau, t) = \sigma\cdot\mathrm{rect}\!\left(\frac{\tau-\tau_d}{T_r}\right)\cdot\exp\!\left\{j2\pi\left[f_c\left(\tau-\tau_d\right) + \frac{1}{2}K_r\left(\tau-\tau_d\right)^2\right]\right\} \tag{6}$$
where $\tau$ is the fast time, $t$ is the slow time, $T_r$ is the pulse width, $f_c$ is the center frequency, $K_r$ is the chirp rate, $\mathrm{rect}(\cdot)$ is the rectangular window function, and $\tau_d = 2R_t/c$ is the echo delay of the target.
Assume that the actual position of the platform is accurately measured by the differential global positioning system and inertial measurement unit (DGPS and IMU). The DGPS/IMU-based motion compensation (MOCO) is then applied by multiplying the echo with the scene-center reference signal. After that, the dechirped signal prepared for polar format storage can be described as:
$$s_{if}(\tau, t) = \sigma\cdot\mathrm{rect}\!\left(\frac{\tau-\tau_d}{T_r}\right)\cdot\exp\!\left\{-j\frac{4\pi}{c}\left[K_r\left(\tau-\tau_0\right)\Delta R + f_c\,\Delta R + \frac{K_r\,\Delta R^2}{c}\right]\right\} \tag{7}$$
where $\tau_0 = 2R_a/c$ is the reference delay of the scene center, and $\Delta R$ is the differential range between the ground target $p(x, y)$ and the scene center:
$$\Delta R = R_t - R_a = \sqrt{\left(X_a - x\right)^2 + \left(Y_a - y\right)^2 + Z_a^2} - R_a \tag{8}$$
During the dechirp processing, additional residual video phase (RVP) and skew terms are generated, which may cause image distortion and defocus. An effective compensation method is to multiply the following compensation function in the range-Doppler domain [47]:
$$S_c(f_\tau) = \exp\!\left(j\pi\frac{f_\tau^2}{K_r}\right) \tag{9}$$
Then perform the range inverse fast Fourier transform (IFFT) to obtain the intermediate frequency signal after removing the RVP term and envelope skew term:
$$s_{ic}(\tau, t) = \sigma\cdot\mathrm{rect}\!\left(\frac{\tau-\tau_0}{T_r}\right)\cdot\exp\!\left\{-j\left[\frac{4\pi f_c}{c} + \frac{4\pi K_r}{c}\left(\tau-\tau_0\right)\right]\Delta R\right\} \tag{10}$$
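The RVP removal step of (9)–(10) amounts to a range FFT, a multiplication by the quadratic-phase filter, and a range IFFT. Below is a minimal NumPy sketch of this step for a single simulated scatterer; all parameter values are assumed, and the signal model follows the dechirped form of (7) with the fast time measured relative to the reference delay (so $\tau_0 = 0$ in the code).

```python
import numpy as np

# Sketch of RVP removal, Eqs. (9)-(10): range FFT, multiply exp(j*pi*f^2/Kr), range IFFT.
# All parameters are assumed for illustration and do not correspond to Table 3.
c = 3e8
fc, B, Tr = 9.6e9, 1.2e9, 10e-6
Kr = B / Tr
Nr = 2048
fs = 1.5 * B
tau = np.arange(Nr) / fs - Nr / (2 * fs)         # fast time, centred on the reference delay

dR = 30.0                                        # differential range of one scatterer
# Dechirped return of Eq. (7) with tau_0 = 0 (amplitude and rect window dropped)
s_if = np.exp(-1j * 4 * np.pi / c * (Kr * tau * dR + fc * dR + Kr * dR**2 / c))

# RVP/skew compensation in the range-frequency domain, Eq. (9)
f_tau = np.fft.fftshift(np.fft.fftfreq(Nr, d=1 / fs))
S = np.fft.fftshift(np.fft.fft(s_if))
S_c = S * np.exp(1j * np.pi * f_tau**2 / Kr)
s_ic = np.fft.ifft(np.fft.ifftshift(S_c))        # Eq. (10): RVP and envelope skew removed
```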
PFA uses the plane-wavefront assumption; that is, only the first-order Taylor approximation of $\Delta R$ about the slow time $t = 0$ is retained:
$$\Delta R \approx R_p = \Delta R\big|_{t=0} + \frac{\partial \Delta R}{\partial t}\bigg|_{t=0}\, t = -\left(x\cos\theta\cos\varphi + y\sin\theta\cos\varphi\right) \tag{11}$$
Let $K_R$, $K_X$, and $K_Y$ denote the spatial wavenumber, range wavenumber, and azimuth wavenumber, respectively:
$$K_R = \frac{4\pi}{c}\left[f_c + K_r\left(\tau - \tau_0\right)\right], \quad K_X(K_R, t) = K_R\cos\varphi\cos\theta(t), \quad K_Y(K_R, t) = K_R\cos\varphi\sin\theta(t) \tag{12}$$
Then, the signal can be represented in the wavenumber domain as:
$$S(K_R, t) = \exp\!\left\{j\left[x K_X(K_R, t) + y K_Y(K_R, t)\right]\right\} \tag{13}$$
where the amplitude is ignored as it hardly affects the process of the imaging algorithm.
$S(K_R, t)$ is uniformly distributed in the $K_R$–$t$ domain, yet it is non-uniform in the $K_X$–$K_Y$ domain. In order to use an efficient 2D fast Fourier transform (2D-FFT), it is necessary to perform 2D resampling on the wavenumber domain signal so that it is distributed in a uniform rectangular format. This process can be achieved through a two-dimensional wavenumber domain interpolation, or it can be decomposed into cascaded range and azimuth interpolation. After completing the 2D resampling of the signal, the PFA image can be obtained by applying a single 2D-FFT to the wavenumber domain signal distributed in the rectangular format:
$$I_p(\tilde{x}, \tilde{y}) = \iint S(\tilde{K}_{Xr}, \tilde{K}_{Yr})\cdot\exp\!\left[j\left(\tilde{x}\tilde{K}_{Xr} + \tilde{y}\tilde{K}_{Yr}\right)\right] d\tilde{K}_{Xr}\, d\tilde{K}_{Yr} \tag{14}$$
As shown in Figure 2a, (14) is based on line-of-sight polar interpolation (LOSPI). With the movement of the video SAR, $I_p(\tilde{x}, \tilde{y})$ will rotate accordingly. As shown in Figure 2b, stabilized scene polar interpolation (SSPI) can also be applied to the wavenumber domain signal to obtain the PFA image under the GOCS:
$$I_p(x, y) = \iint S(K_{Xr}, K_{Yr})\cdot\exp\!\left[j\left(x K_{Xr} + y K_{Yr}\right)\right] dK_{Xr}\, dK_{Yr} \tag{15}$$
The relationship between $(\tilde{x}, \tilde{y})$ and $(x, y)$ is determined by the azimuth angle of the video SAR:
$$\tilde{x} = x\cos\theta_k + y\sin\theta_k, \quad \tilde{y} = -x\sin\theta_k + y\cos\theta_k \tag{16}$$

3. Methods

There are still some problems when using PFA to process video SAR data. First, the 2D interpolation in the polar format transformation affects the computational efficiency of PFA. In addition, the wavefront curvature error leads to the limitation of the scene size of PFA. Considering the target localization tasks of video SAR, this wavefront curvature error must be corrected. To address the above issues, a generalized persistent polar format algorithm (GPPFA) is proposed to achieve a more efficient video SAR imaging that can be applied to different scenes.

3.1. Polar Format Transformation of GPPFA

The discrete form of range and azimuth wavenumber before PFA resampling can be expressed as:
$$K_X(m, n) = \frac{4\pi}{c}\left(f_c + m K_{Tr}\right)\cos\varphi\cos\left(n\Delta\theta\right), \quad K_Y(m, n) = \frac{4\pi}{c}\left(f_c + m K_{Tr}\right)\cos\varphi\sin\left(n\Delta\theta\right) \tag{17}$$
where $m$ is the range sample index with $-N_r/2 \le m \le N_r/2$, $N_r$ is the number of range samples, $K_{Tr}$ is the range frequency step between adjacent samples, $n$ is the azimuth sample index with $-N_a/2 \le n \le N_a/2$, $N_a$ is the number of pulses, and $\Delta\theta$ is the azimuth angle step between adjacent pulses.
There is a coupling between $m$ and $n$ in $K_X$ and $K_Y$; therefore, a polar-to-rectangular transformation should be carried out to remove the coupling [48]. The range and azimuth wavenumbers after resampling can be rewritten as:
$$K_X(m) = \frac{4\pi}{c}\left(f_c + m K_{Tr}\right)\cos\varphi, \quad K_Y(n) = \frac{4\pi}{c} f_c\, n\Delta\theta\cos\varphi \tag{18}$$
To improve efficiency, the 2D resampling is usually decomposed into cascaded range and azimuth resampling. The range wavenumber is always uniformly distributed, which is consistent with the sampling properties of the CZT [49,50]. The azimuth wavenumber, however, is not always uniformly distributed because it depends on the azimuth angle. Therefore, the more efficient CZT can always be used for range resampling, whereas for azimuth resampling it can only be used under certain constraints.
The CZT can freely set the start frequency and the frequency interval, and it can evaluate the Fourier transform of a signal on any arc of the unit circle. The chirp z-transform of a sequence $x(n)$ is defined as:
$$X(z_k) = \mathrm{CZT}\left[x(n)\right] = \sum_{n=0}^{N-1} x(n) z_k^{-n} = \sum_{n=0}^{N-1} x(n) A^{-n} W^{nk} \tag{19}$$
where $z_k = A W^{-k}$, and $A$ and $W$ are the parameters of the CZT:
$$A = \exp\left(j\omega_0\right), \quad W = \exp\left(-j\Delta\omega\right) \tag{20}$$
where $\omega_0$ is the start digital frequency and $\Delta\omega$ is the digital frequency interval between two adjacent points.
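Since the CZT is the workhorse of both resampling steps, a self-contained Bluestein-based implementation of (19) is sketched below. NumPy is used purely for illustration; the paper's own experiments run in MATLAB, whose czt routine plays the same role, and the self-check parameters are assumptions.

```python
import numpy as np

def czt(x, M, A, W):
    """Chirp z-transform of Eq. (19): X[k] = sum_n x[n] A^{-n} W^{nk}, k = 0..M-1,
    evaluated with Bluestein's algorithm (three FFTs)."""
    N = len(x)
    n = np.arange(N)
    k = np.arange(M)
    y = x * A**(-n) * W**(n**2 / 2.0)                   # pre-multiply by A^{-n} W^{n^2/2}
    m = np.arange(-(N - 1), M)
    v = W**(-(m**2) / 2.0)                              # chirp filter W^{-m^2/2}
    L = 1 << int(np.ceil(np.log2(N + M - 1)))           # FFT length for linear convolution
    g = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(v, L))[N - 1:N - 1 + M]
    return g * W**(k**2 / 2.0)                          # post-multiply by W^{k^2/2}

# Self-check: with A = 1 and W = exp(-j*2*pi/N), the CZT must reproduce the DFT.
x = np.random.randn(64) + 1j * np.random.randn(64)
print(np.allclose(czt(x, 64, 1.0 + 0j, np.exp(-1j * 2 * np.pi / 64)), np.fft.fft(x)))
```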
The purpose of range resampling is to remove the coupling between the range wavenumber $K_X(m, n)$ and the pulse index $n$. The reference value is selected as the range wavenumber at the azimuth index center, $K_{Xc} = \frac{4\pi}{c}\left(f_c + m K_{Tr}\right)\cos\varphi$. Then, letting $K_X(m', n) = K_{Xc}$, the index relation before and after range resampling can be calculated as:
$$m' = \left(\frac{f_c}{K_{Tr}} + m\right)\cos\left(n\Delta\theta\right) - \frac{f_c}{K_{Tr}} \tag{21}$$
According to (21), the digital frequency after the range CZT can be calculated as:
$$\omega_{rc} = \frac{2\pi}{N_r}\left[\left(\frac{f_c}{K_{Tr}} + m\right)\cos\left(n\Delta\theta\right) - \frac{f_c}{K_{Tr}}\right] \tag{22}$$
Therefore, the start digital frequency $\omega_{r0}$ and the frequency interval $\Delta\omega_r$ can be expressed as:
$$\omega_{r0} = -\frac{\left(1 - \cos n\Delta\theta\right)2f_c + \cos\left(n\Delta\theta\right) N_r K_{Tr}}{N_r K_{Tr}}\,\pi, \quad \Delta\omega_r = \frac{2\pi}{N_r}\cos\left(n\Delta\theta\right) \tag{23}$$
Then, the parameters of the range CZT are:
$$A_r = \exp\!\left[-j\frac{\left(1 - \cos n\Delta\theta\right)2f_c + \cos\left(n\Delta\theta\right) N_r K_{Tr}}{N_r K_{Tr}}\,\pi\right], \quad W_r = \exp\!\left[-j\frac{2\pi}{N_r}\cos\left(n\Delta\theta\right)\right] \tag{24}$$
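A sketch of how the per-pulse range-CZT parameters could be formed from (23)–(24) is given below; the symbols $K_{Tr}$ and $\Delta\theta$ follow the reconstruction above, and every numerical value is assumed rather than taken from Table 3.

```python
import numpy as np

# Per-pulse range-CZT parameters, Eqs. (23)-(24); a sketch under assumed values.
fc   = 220e9                       # carrier frequency
Nr   = 2048                        # range samples
K_Tr = 1.2e9 / Nr                  # range frequency step per sample (assumed 1.2 GHz bandwidth)
Na   = 512                         # pulses
d_theta = np.deg2rad(0.31) / Na    # azimuth angle step (assumed 0.31 deg aperture)

n = np.arange(Na) - Na // 2        # pulse index
cosn = np.cos(n * d_theta)

omega_r0  = -((1 - cosn) * 2 * fc + cosn * Nr * K_Tr) / (Nr * K_Tr) * np.pi   # Eq. (23)
d_omega_r = 2 * np.pi / Nr * cosn                                             # Eq. (23)
A_r = np.exp(1j * omega_r0)                                                   # Eq. (24)
W_r = np.exp(-1j * d_omega_r)
# One CZT per pulse with (A_r[i], W_r[i]) maps that pulse from the polar grid
# onto the keystone grid of Figure 3a.
```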
As shown in Figure 3a, after the range CZT, the wavenumber domain signal distribution is converted from the polar format to a keystone format, where the azimuth wavenumber $K_Y(m, n)$ takes the following form:
$$K_Y(m, n) = \frac{4\pi}{c}\left(f_c + m K_{Tr}\right)\cos\varphi\tan\left(n\Delta\theta\right) \tag{25}$$
As shown in Figure 3b, the purpose of azimuth resampling is to remove the coupling between the azimuth wavenumber $K_Y(m, n)$ and the range index $m$. The reference value is selected as the azimuth wavenumber at the range index center, $K_{Yc} = \frac{4\pi}{c} f_c\cos\varphi\tan\left(n\Delta\theta\right)$. Then, let $K_Y(m, n') = K_{Yc}$:
$$\frac{4\pi}{c}\left(f_c + m K_{Tr}\right)\cos\varphi\tan\left(n'\Delta\theta\right) = \frac{4\pi}{c} f_c\cos\varphi\tan\left(n\Delta\theta\right) \tag{26}$$
Make the following approximation for (26):
$$\tan\left(n\Delta\theta\right) \approx n\Delta\theta \tag{27}$$
The index relation before and after azimuth resampling can then be calculated as:
$$n' = \frac{f_c}{f_c + m K_{Tr}}\, n \tag{28}$$
According to (28), the digital frequency after the azimuth CZT can be calculated as:
$$\omega_{ac} = \frac{2\pi f_c\, n}{N_a\left(f_c + m K_{Tr}\right)} \tag{29}$$
Therefore, the start digital frequency $\omega_{a0}$ and the frequency interval $\Delta\omega_a$ can be expressed as:
$$\omega_{a0} = -\frac{\pi f_c}{f_c + m K_{Tr}}, \quad \Delta\omega_a = \frac{2\pi f_c}{N_a\left(f_c + m K_{Tr}\right)} \tag{30}$$
Then, the parameters of the azimuth CZT are:
$$A_a = \exp\left(j\omega_{a0}\right), \quad W_a = \exp\left(-j\Delta\omega_a\right) \tag{31}$$
The process of the azimuth CZT can be expressed as:
$$I_p(\tilde{x}, \tilde{y}) = F_r\left\{S_r(m, l)\right\} = F_r\left\{\mathrm{CZT}_a\left[s_{if}(m, n)\right]\right\} \tag{32}$$
where $\mathrm{CZT}_a(\cdot)$ represents the azimuth CZT applied to each range bin, $l$ is the azimuth index after resampling, and $F_r(\cdot)$ is the range FFT.
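Analogously, the per-range-bin azimuth-CZT parameters of (30)–(31) can be formed as follows; this is again a sketch under assumed values, with the scaling factor of (28) made explicit.

```python
import numpy as np

# Azimuth-CZT parameters of Eqs. (30)-(31), one set per range bin m; assumed values.
fc, Nr, Na = 220e9, 2048, 512
K_Tr = 1.2e9 / Nr                              # assumed range frequency step per sample
m = np.arange(Nr) - Nr // 2                    # range sample index
alpha = fc / (fc + m * K_Tr)                   # scaling factor of Eq. (28)

omega_a0  = -np.pi * alpha                     # Eq. (30): start frequency per range bin
d_omega_a = 2 * np.pi * alpha / Na             # Eq. (30): frequency step per range bin
A_a = np.exp(1j * omega_a0)                    # Eq. (31)
W_a = np.exp(-1j * d_omega_a)
# One azimuth CZT per range bin with (A_a[i], W_a[i]) removes the coupling
# between K_Y and m, as used in Eq. (32).
```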
The CZT-based azimuth resampling relies on the small-angle approximation in (27), which implicitly assumes that the azimuth wavenumbers are uniformly distributed; otherwise, it introduces errors. Therefore, for azimuth resampling, it is necessary to choose different processing methods for different scenarios. For X-band video SAR ($f_c = 9.6$ GHz), the synthetic accumulation angle is $\theta_s = 7.16°$, while for THz video SAR ($f_c = 220$ GHz) it is $\theta_s = 0.31°$. Therefore, the azimuth CZT is sufficiently accurate for THz video SAR, but not for X-band video SAR. To obtain an explicit criterion, it is necessary to derive the threshold for azimuth resampling.
Assume that the azimuth wavenumber is uniformly distributed in the two-dimensional wavenumber domain; the resulting azimuth phase error $\psi_{a0}$ can then be written as:
$$\psi_{a0} = y\,\frac{4\pi}{c}\cos\varphi\left(f_c + m K_{Tr}\right)\left[n\Delta\theta - \tan\left(n\Delta\theta\right)\right] \tag{33}$$
where $\left(f_c + m K_{Tr}\right)/c$ can be approximated as $1/\lambda$ by Taylor series expansion, and the Taylor expansion of $\tan\left(n\Delta\theta\right)$ can be expressed as:
$$\tan\left(n\Delta\theta\right) \approx n\Delta\theta + \frac{1}{3}\left(n\Delta\theta\right)^3 \tag{34}$$
A practical guideline is that the cubic error between two edges can be ignored if it does not exceed π/4 [51]. Therefore, the phase error ψ a 0 needs to satisfy:
$$\psi_{a0} = \frac{\pi\cos\varphi\,\theta_s^2}{6\lambda}\, y \le \frac{\pi}{8} \tag{35}$$
Since the synthetic aperture angle is $\theta_s = \lambda/\left(2\rho_a\cos\varphi\right)$, (35) can be expressed in terms of the carrier frequency:
$$f_c \ge f_0 = \sqrt{\frac{c^2 y}{6\rho_a^3\cos\varphi}} \tag{36}$$
Equation (36) shows that uniform azimuth resampling can be considered sufficiently accurate as long as the carrier frequency exceeds the threshold $f_0$ determined by the azimuth extent $y$ of the scene.
Figure 4 shows the relationship between the frequency and the scene size for which azimuth uniform resampling is satisfied. When $\rho_a = 0.2$ m, $y = 130$ m, and $\varphi = \pi/4$, the frequency should satisfy $f_c > f_0 = 18.56$ GHz. Therefore, under this condition, X-band video SAR needs to use non-uniform azimuth resampling, such as interpolation, while THz video SAR can use the more efficient azimuth CZT to complete the azimuth resampling. After the azimuth resampling is completed, a range IFFT is used to obtain the preliminary PFA image.
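A quick numeric check of the threshold in (36), using the values quoted in the text, is shown below.

```python
import numpy as np

# Numeric check of Eq. (36) for rho_a = 0.2 m, y = 130 m, phi = pi/4.
c, rho_a, y, phi = 3e8, 0.2, 130.0, np.pi / 4
f0 = np.sqrt(c**2 * y / (6 * rho_a**3 * np.cos(phi)))
print(f"f0 = {f0 / 1e9:.1f} GHz")   # ~18.6 GHz: X-band (9.6 GHz) fails, THz (220 GHz) passes
```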

3.2. Wavefront Curvature Error Analysis and Compensation

The phase of the target can be expressed as:
$$\Phi(t) = -K_R\,\Delta R = \Phi_0 + \Phi_1 t + \Phi_2 t^2 + \cdots \tag{37}$$
Constant and linear phase errors lead to target offset, quadratic phase errors lead to image defocus, while higher-order phase errors have little impact on the image quality [47]. Therefore, only constant terms, linear terms, and quadratic phase errors need to be considered, which decomposes wavefront curvature error compensation into geometric distortion correction and image defocus compensation.
From (37), it can be seen that the only time-dependent term in the phase $\Phi(t)$ is the differential range $\Delta R$. Expanding $\Delta R$ and the approximate differential range $R_p$ in second-order Taylor series about the aperture center moment $t = 0$ gives:
$$\Delta R \approx \Delta R_0 + \dot{\Delta R}_0\, t + \frac{1}{2}\ddot{\Delta R}_0\, t^2, \quad R_p \approx R_{p0} + \dot{R}_{p0}\, t + \frac{1}{2}\ddot{R}_{p0}\, t^2 \tag{38}$$
The constant terms are:
$$\Delta R_0 = \Delta R\big|_{t=0} = R_{tk} - R_a, \quad R_{p0} = R_p\big|_{t=0} = -\frac{x^* X_c + y^* Y_c}{R_a} \tag{39}$$
where $(x^*, y^*)$ is the distorted position of the target $(x, y)$.
The linear terms are:
$$\dot{\Delta R}_0 = \frac{\partial \Delta R}{\partial t}\bigg|_{t=0} = \frac{\left(x Y_c - y X_c\right)\theta_s}{2 R_{tk}}, \quad \dot{R}_{p0} = \frac{\partial R_p}{\partial t}\bigg|_{t=0} = \frac{\left(x^* Y_c - y^* X_c\right)\theta_s}{2 R_a} \tag{40}$$
Equating the constant and linear terms of the two expansions:
$$\Delta R_0 = R_{p0}, \quad \dot{\Delta R}_0 = \dot{R}_{p0} \tag{41}$$
Then the following relationship can be obtained:
$$x^* X_c + y^* Y_c = R_a^2 - R_a R_{tk}, \quad x^* Y_c - y^* X_c = \frac{R_a\left(x Y_c - y X_c\right)}{R_{tk}} \tag{42}$$
According to (42), the relationship between the distorted position $(x^*, y^*)$ and $(x, y)$ is obtained, which is called the GOCS geometric distortion mapping (GDM):
$$x^* = \frac{\left(R_a - R_{tk}\right)\cos\theta_k}{\cos\varphi} + \frac{\left(x Y_c - y X_c\right)\sin\theta_k}{R_{tk}\cos\varphi}, \quad y^* = \frac{\left(R_a - R_{tk}\right)\sin\theta_k}{\cos\varphi} + \frac{\left(y X_c - x Y_c\right)\cos\theta_k}{R_{tk}\cos\varphi} \tag{43}$$
According to (16) and (43), the LOCS-GDM is obtained as:
$$\tilde{x}^* = \frac{R_a - R_{tk}}{\cos\varphi}, \quad \tilde{y}^* = \frac{y X_c - x Y_c}{R_{tk}\cos\varphi} \tag{44}$$
Let $r_{dk}$ denote the offset distance; the offset distance in the $k$th frame image is then:
$$r_{dk}(x, y) = \sqrt{\left(x - x^*\right)^2 + \left(y - y^*\right)^2} \tag{45}$$
The effect of geometric distortion can be ignored only if it is within the distortion negligible region (DiR), which is defined as:
$$D_{ik}(x, y) = \left\{(x, y)\,\middle|\, r_{dk}(x, y) \le \rho_x\right\} \tag{46}$$
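The GDM of (43) and the DiR test of (45)–(46) can be evaluated directly; the sketch below uses assumed geometry values (not the Table 3 parameters) and NumPy for illustration.

```python
import numpy as np

# Sketch of the GOCS geometric-distortion mapping, Eq. (43), and the offset /
# DiR test of Eqs. (45)-(46); geometry values are assumed for illustration.
Ra, phi, theta_k = 1000.0, np.deg2rad(45.0), np.deg2rad(0.0)
rho_x = 0.2                                       # image resolution

Xc = Ra * np.cos(phi) * np.cos(theta_k)
Yc = Ra * np.cos(phi) * np.sin(theta_k)
Zc = Ra * np.sin(phi)

def gdm(x, y):
    """Distorted position (x*, y*) of a ground target (x, y), Eq. (43)."""
    Rtk = np.sqrt((Xc - x)**2 + (Yc - y)**2 + Zc**2)
    xs = (Ra - Rtk) * np.cos(theta_k) / np.cos(phi) \
         + (x * Yc - y * Xc) * np.sin(theta_k) / (Rtk * np.cos(phi))
    ys = (Ra - Rtk) * np.sin(theta_k) / np.cos(phi) \
         + (y * Xc - x * Yc) * np.cos(theta_k) / (Rtk * np.cos(phi))
    return xs, ys

x, y = 50.0, 50.0
xs, ys = gdm(x, y)
rd = np.hypot(x - xs, y - ys)                     # offset distance, Eq. (45)
print(f"(x*, y*) = ({xs:.2f}, {ys:.2f}), offset = {rd:.2f} m, inside DiR: {rd <= rho_x}")
```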
Geometric distortion correction can be accomplished by image domain interpolation, the key of which is to find the image coordinate positions of the resampling points, and the GDM shown in (43) and (44) gives this relationship. The schematic diagram of geometric distortion correction is shown in Figure 5. Since image rotation and distortion correction can both be accomplished by a single image-domain 2D interpolation, distortion correction and coordinate system unification can be achieved simultaneously.
The quadratic terms of the difference distance are:
$$\ddot{\Delta R}_0 = \frac{\partial^2 \Delta R}{\partial t^2}\bigg|_{t=0} = \frac{R_a\cos\varphi\,\theta_s^2}{4 R_{tk}}\left(x - \frac{y^2 R_a\cos\varphi}{R_{tk}^2}\right), \quad \ddot{R}_{p0} = \frac{\partial^2 R_p}{\partial t^2}\bigg|_{t=0} = \frac{x^*\cos\varphi\,\theta_s^2}{4} \tag{47}$$
Additionally, the quadratic phase error is expressed in the form of the difference of the second-order derivatives of the difference distance:
$$\tilde{\Phi}_p = \frac{4\pi}{\lambda}\cdot\frac{1}{2}\left(\ddot{\Delta R}_0 - \ddot{R}_{p0}\right) = \frac{\pi R_a\cos\varphi\,\theta_s^2}{2\lambda R_{tk}}\left(x - \frac{y^2 R_a\cos\varphi}{R_{tk}^2} - \frac{x^* R_{tk}}{R_a}\right) \tag{48}$$
The residual quadratic phase error is a two-dimensional spatial variable, so it is impossible to construct a filter to accurately compensate for the phase error at all points, which is the difficulty of compensating for the residual phase error. However, it is not necessary to compensate for it in all scenarios. In fact, it can be ignored if the phase error is less than a certain threshold during time-to-frequency conversion, and an approximate judgment formula is given in [20]. Based on the threshold, the effective scene size is:
$$r_{e1} = \rho_a\sqrt{\frac{4 R_a}{\lambda}}, \quad r_{e2} = \rho_a\sqrt{\frac{2 R_a}{\lambda}} \tag{49}$$
where $r_{e1}$ and $r_{e2}$ are the effective scene radii at the thresholds of $\pi/2$ and $\pi/4$, respectively, and $\lambda$ is the radar wavelength.
Similarly, the effect of image defocus can be ignored only within the defocus negligible region (DeR), which is defined as:
$$D_{ek} = \left\{(x, y)\,\middle|\, R_{scene} \le r_{e2}\right\} \tag{50}$$
where $R_{scene}$ denotes the distance of the target from the scene center.
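For orientation, evaluating (49) with illustrative values ($R_a = 1000$ m and $\rho_a \approx 0.14$ m, both assumed rather than taken from Table 3) reproduces DeR radii close to the values quoted in the following paragraphs.

```python
import numpy as np

# Effective scene radii of Eq. (49); R_a and rho_a are assumed illustrative values.
c, Ra, rho_a = 3e8, 1000.0, 0.1414
for fc in (220e9, 9.6e9):
    lam = c / fc
    r_e1 = rho_a * np.sqrt(4 * Ra / lam)      # pi/2 threshold
    r_e2 = rho_a * np.sqrt(2 * Ra / lam)      # pi/4 threshold, i.e., the DeR radius of Eq. (50)
    print(f"fc = {fc/1e9:6.1f} GHz: r_e1 = {r_e1:6.1f} m, r_e2 (DeR) = {r_e2:6.1f} m")
```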
According to (46) and (50), the DiR and DeR at different frequencies can be calculated as shown in Table 2. The DiR is independent of the frequency and depends only on the distance of the point target from the scene center; it is 23.8 m at both frequencies.
In addition, Figure 6 shows the residual quadratic phase error and the effective scene size at different wavebands, where the scene size is set to 200 m × 200 m. As shown in Figure 6a,b, THz video SAR has a larger DeR of 171.3 m, while the X-band video SAR only has a DeR of 35.7 m. Generally, the scene size of video SAR is between 50 m and 150 m, so the effect of residual quadratic phase error can be ignored for THz video SAR, while the error must be corrected for X-band video SAR. This indicates that THz video SAR can ignore the effect of defocus in many cases, while it must be corrected for X-band video SAR.
Figure 7 depicts the residual quadratic phase error in different coordinate systems for X-band video SAR with a scene size of 65 m, where Figure 7a is shown in the actual coordinates $X$–$Y$, and Figure 7b is shown in the distorted coordinates $X_d$–$Y_d$. It can be seen that the residual quadratic phase error is mainly a function of the distorted range coordinate $x^*$. Therefore, a correction applied to each range bin in the distorted coordinates can complete the compensation of the residual quadratic phase error.
An azimuth FFT is performed on the PFA image, followed by residual phase error compensation. The compensation function is:
$$\Phi_{dc}(x^*) = \tilde{\Phi}_p(x^*, y = 0) = \frac{\pi R_a\cos\varphi\,\theta_s^2}{2\lambda R_{tk}}\left(x - \frac{x^* R_{tk}}{R_a}\right) \tag{51}$$
The process of the compensation is:
$$I(x^*, y^*) = F_a^{-1}\left\{F_a\left[I_d(x^*, y^*)\right]\cdot\exp\!\left[j\Phi_{dc}(x^*)\, t^2\right]\right\} \tag{52}$$
where $F_a$ and $F_a^{-1}$ denote the azimuth FFT and azimuth IFFT, respectively. Then, a distortion-free video SAR image is obtained by applying geometric distortion correction to $I(x^*, y^*)$.
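The range-bin-wise compensation of (51)–(52) can be sketched as follows. The distorted image, the geometry values, the slow-time normalization, and the inversion of the GDM of (44) used to obtain $x(x^*)$ at $y = 0$ are all assumptions made for illustration.

```python
import numpy as np

# Sketch of the range-bin-wise residual QPE compensation, Eqs. (51)-(52).
Na, Nx = 512, 1024
I_d = np.zeros((Nx, Na), dtype=complex)            # distorted image: range bins (x*) x azimuth
I_d[Nx // 3, Na // 2] = 1.0                        # placeholder content

fc, Ra, phi, theta_s, theta_k = 9.6e9, 1000.0, np.pi / 4, np.deg2rad(7.16), 0.0
lam = 3e8 / fc
Xc, Zc = Ra * np.cos(phi), Ra * np.sin(phi)

x_star = (np.arange(Nx) - Nx // 2) * 0.2           # distorted range coordinate of each bin
Rtk = Ra - x_star * np.cos(phi)                    # inverse of Eq. (44): x~* = (Ra - Rtk)/cos(phi)
x = Xc - np.sqrt(Rtk**2 - Zc**2)                   # true x at y = 0 for each bin (inverted geometry)
phi_dc = (np.pi * Ra * np.cos(phi) * theta_s**2 / (2 * lam * Rtk)) \
         * (x - x_star * Rtk / Ra)                 # Eq. (51)

t = np.linspace(-1.0, 1.0, Na)                     # normalised slow time across the aperture (assumed)
H = np.exp(1j * phi_dc[:, None] * t[None, :]**2)   # compensation phase of Eq. (52)
Sa = np.fft.fftshift(np.fft.fft(I_d, axis=1), axes=1)
I = np.fft.ifft(np.fft.ifftshift(Sa * H, axes=1), axis=1)
```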

3.3. Imaging Approach of GPPFA

Based on the criteria obtained from the previous calculations, the flowchart of the proposed GPPFA can be described in Figure 8.
Considering the possible trajectory deviation or platform vibration during the video SAR flight, which usually adversely affects the received SAR signal, a DGPS and IMU-based MOCO is applied before processing. After that, most of the motion errors are compensated, and the Map-drift algorithm (MD) [52] or phase gradient autofocus algorithm (PGA) [53] can be applied if the residual motion errors exist and affect the image.
Then, the range processing consists of the range FFT, RVP compensation, and the range CZT. For the possible wavefront curvature errors in the preliminary PFA image, the two criteria in (46) and (50) are used to select the optimal processing flow in different scenarios.
For the case where the scene radius $R_i$ is within the DiR, only an image rotation is implemented. For the case where $R_i$ is beyond the DiR but within the DeR, the distorted coordinate positions are calculated using the GDM, and distortion correction and coordinate system unification are completed using a single image-domain 2D interpolation. For the case where $R_i$ exceeds the DeR, residual phase error compensation is additionally required. The proposed GPPFA is capable of adapting to airborne video SAR imaging at any waveband, any slant range, and any scene size.
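The branch selection just described can be summarized in a few lines; the DiR/DeR radii below are the X-band values quoted in Section 3.2, and the test scene radii are assumed.

```python
# Decision flow of the GPPFA wavefront-curvature branch (Section 3.3); the
# thresholds come from Eqs. (46) and (49)-(50), and R_i is the scene radius.
def curvature_branch(R_i, r_DiR, r_DeR):
    if R_i <= r_DiR:
        return "image rotation only"
    elif R_i <= r_DeR:
        return "GDM-based 2D interpolation (distortion correction + GOCS unification)"
    else:
        return "residual QPE compensation, then GDM-based interpolation"

for R_i in (20.0, 65.0, 200.0):                     # assumed scene radii
    print(R_i, "->", curvature_branch(R_i, r_DiR=23.8, r_DeR=35.7))
```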

4. Results

4.1. Point Target Results

To validate the proposed method, point target and extended target experiments of THz and X-band video SAR are performed. The system parameters are listed in Table 3. According to the analysis in Section 3.2, the shorter the slant range is, the more serious the distortion is, so a shorter slant range is used to highlight the effect of distortion.
As shown in Figure 9a, the scene size is 130 m × 130 m, and the point targets are distributed in a rectangle of 100 m × 100 m at an interval of 10 m. Figure 9b shows the GDM of the first frame video SAR image ( θ k = 0 ° ), whose shape changes from rectangle to sector, and Figure 9c shows the GDM of the second frame video SAR image ( θ k = 75 ° ), whose shape exhibits a distorted and rotated sector.
First, the proposed method is validated in THz video SAR mode, with the traditional PFA and UCSA [34] chosen as comparison methods. The imaging results are shown in Figure 10, where the first and second rows correspond to the first frame ($\theta_k = 0°$) and the second frame ($\theta_k = 75°$) of the video SAR, respectively. It can be seen that the plane wave assumption is no longer accurate enough at a short slant range, which makes the PFA image distorted. Additionally, with the motion of the video SAR, the distorted image is subsequently rotated, but the point target positions always coincide with the theoretical positions given by the labeled GDM. For UCSA and GPPFA, the point targets are located at their real positions in the different video SAR frame images thanks to distortion correction and coordinate system unification, and the effect of the residual phase error can be ignored owing to the advantage of the THz band.
The imaging results of X-band video SAR are shown in Figure 11. Compared with THz video SAR, X-band video SAR requires a larger synthetic aperture angle. From the GDM, it is known that the distortion position is independent of the frequency, so the target positions are the same as in Figure 10. The effective scene size of PFA is marked in Figure 11a,b; the targets are well-focused within this range and defocused outside it. For UCSA, since the synthetic aperture angle no longer satisfies the small-angle assumption, the resulting azimuth phase error increases and affects the range wavenumber signal, so the UCSA images are significantly defocused. For GPPFA, azimuth interpolation is first selected according to the criterion in (36), and then the residual quadratic phase error compensation is implemented because the scene size satisfies the criterion $R_i > \mathrm{DeR}$; it can be seen that the GPPFA images are well-focused and free of distortion.
To compare the effect of geometric distortion, the positions of three targets with different distances from the scene center, $P_1(30, 30)$, $P_2(40, 0)$, and $P_3(50, 50)$, are measured in the first frame of the THz video SAR image. The geometric positions of the three point targets are listed in Table 4. It can be seen that, for PFA, the point targets have a large offset from their actual positions in the first frame image, and the offset reaches tens of meters in the second frame image, which makes target localization and tracking with video SAR difficult. For UCSA and GPPFA, the point target positions essentially overlap with the actual positions and can be considered accurate enough.
To quantitatively analyze the imaging performance on a point target, the range and azimuth profiles of the point target $P_3(50, 50)$ in the first frame of the THz video SAR and X-band video SAR are shown in Figure 12. Additionally, the measured parameters, including the impulse response width (IRW), peak side lobe ratio (PSLR), and integrated side lobe ratio (ISLR), are listed in Table 5. For THz video SAR, the point target resolutions of all methods are close to the theoretical resolution $\rho_x = 0.125$ m, and the PSLR and ISLR meet the requirements of SAR imaging (PSLR = −13 dB, ISLR = −20 dB) [54]. For X-band video SAR, the PFA point target is defocused by the wavefront curvature error because it lies outside the effective scene range. Since X-band video SAR does not satisfy the small azimuth accumulation angle assumption, defocus also occurs for the 2D CZT-based UCSA, which degrades its IRW and PSLR. For GPPFA, geometric distortion and image defocus are essentially absent due to the wavefront curvature error compensation method.

4.2. Extended Target Results

To further validate the performance of the proposed method, the extended target experiments are performed. The open source original image is obtained from the website of the Air Force Research Laboratory (AFRL) [55], which is shown in Figure 13a. The scene size is 130 m × 130 m, and the same system parameters are used as in the previous point target experiments. The echo data are obtained by the time domain simulation method [56], and the pre-processed range compressed image is shown in Figure 13b.
The imaging results of THz video SAR are shown in Figure 14. Figure 14a,c,e shows the imaging results of PFA, UCSA, and GPPFA at the first frame of video SAR ( θ k = 0 ° ), and Figure 14b,d,f shows the imaging results of PFA, UCSA, and GPPFA at the second frame of video SAR ( θ k = 75 ° ). It can be seen that the images of PFA are distorted and rotated, while UCSA and GPPFA enable high-quality and distortion-free imaging at different video SAR frames.
In order to more intuitively illustrate the compensation effect of the proposed method on wavefront curvature error, a set of strong scattering point targets placed in the edge region of the scene is tested in X-band video SAR. Figure 15a shows the original image, with the strongly scattering points arranged in three rows and three columns in a square pattern. Figure 15b shows the imaging result of PFA. It can be seen that the distortion and defocus of the targets are so severe that they obscure the nearby "rivers". Figure 15c shows the imaging result of UCSA; the UCSA image is severely defocused in X-band. Figure 15d shows the imaging result of GPPFA; it can be seen that the image is distortion-free and well-focused, and the river near the targets is not covered by their sidelobes, which proves the capability of the proposed method for X-band video SAR.
The quantitative results of the expanded target imaging results are shown in Table 6, including image entropy, root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) [57], and the best value of each metric is the value of the original image. It can be seen that GPPFA has better image metrics than PFA and UCSA, which indicates that the proposed method meets the image quality requirements.
To compare the actual computing efficiency of the methods, the average running times of PFA, UCSA, GPPFA, and BPA are listed in Table 7. The system environment of the PC is Microsoft Windows 11 (64-bit OS and 16 GB memory size), the processor is an AMD Ryzen 5 5600H, the simulation environment is MATLAB 2021a, and the image interpolation is based on the cubic interpolation of MATLAB. Each method was run 10 times and the average time was calculated. PFA takes the shortest time because it does not use any additional processing; as a cost, the PFA image may be distorted, rotated, and defocused. UCSA and the proposed GPPFA correct the distortion and unify the coordinate system, so the average elapsed time increases slightly, while the imaging time of BPA reaches 45.02 s. However, PFA is only applicable when a long slant range and a small scene size are satisfied together, and UCSA is only applicable to THz video SAR tasks with a scene radius smaller than the DeR, while the proposed GPPFA is applicable to airborne video SAR imaging in any scene.

4.3. Large Observation Scenarios Experiments

To verify the performance of the GPPFA in different scenarios, a larger observation scenario, with a scene size of 500 m × 500 m, is used to demonstrate the practical applicability of the proposed method. The system parameters are listed in Table 8. The imaging scene is shown in Figure 16, and the interval between adjacent point targets is 100 m. It can be calculated that the DiR is 27 m, and the DeR is 382 m for THz-band video SAR, while it is 80 m for X-band video SAR. Moreover, motion errors have been added to validate the effectiveness of the MOCO method. The platform vibration $R_e(t)$ is modeled as follows:
$$R_e(t) = \sum_{i=1}^{M} A_i\sin\left(2\pi f_i t + \varphi_i\right) \tag{53}$$
where $M$ is the number of platform vibration components, and $A_i$, $f_i$, and $\varphi_i$ represent the amplitude, frequency, and initial phase of the $i$-th vibration component, respectively. In this article, $M = 1$, the value of $A_i$ is two times the corresponding wavelength, $f_i = 5$ Hz, and $\varphi_i = \pi/6$.
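A minimal sketch of the vibration model in (53) with the stated values (THz case shown; the frame duration is assumed) is given below.

```python
import numpy as np

# Sinusoidal platform-vibration model of Eq. (53) with M = 1, A_i = 2*lambda,
# f_i = 5 Hz, phi_i = pi/6 (THz case); the slow-time duration is assumed.
lam = 3e8 / 220e9
A_i, f_i, phi_i = 2 * lam, 5.0, np.pi / 6

t = np.linspace(0.0, 1.0, 1000)                  # slow time over one frame (assumed duration)
R_e = A_i * np.sin(2 * np.pi * f_i * t + phi_i)  # Eq. (53) with M = 1
# R_e(t) is added to the slant-range history before imaging to emulate
# uncompensated platform vibration; DGPS/IMU-based MOCO should remove it.
```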
The large observation scenarios imaging results are shown in Figure 17, where Figure 17a,b show the imaging results of the proposed method in THz-band without using MOCO and using MOCO, respectively, and Figure 17c,d show the corresponding imaging results of the proposed method in X-band, respectively. It can be seen that for the method that does not use MOCO, motion errors cause severe defocusing in its imaging results, while for a method with MOCO, the distortion or defocusing does not exist, and each target is accurately focused. Therefore, the proposed GPPFA is applicable to any slant range, scene size, and flight trajectory.

5. Conclusions

In this paper, a generalized persistent polar format algorithm (GPPFA) is proposed, which meets the fast imaging requirements of airborne video SAR for arbitrary wavebands, slant ranges, and scene sizes. Firstly, an airborne video SAR imaging model is established, and the high-resolution and high-frame-rate characteristics and requirements of video SAR are analyzed. Then, the principle of the CZT is introduced, and range resampling is completed based on the CZT. For azimuth resampling, the critical condition for uniform azimuth resampling is analyzed, and the azimuth resampling of video SAR systems at different wavebands is completed based on the azimuth CZT or interpolation, respectively. For the wavefront curvature error of PFA, the geometric distortion mapping of airborne video SAR in circular spotlight mode is derived, and a geometric distortion and image defocus correction method is proposed. GPPFA is capable of video SAR imaging at different wavebands, arbitrary scene sizes, and arbitrary flight trajectories. Point target and extended target experiments verify the accuracy, generality, and imaging efficiency of the proposed method. Although the proposed method is derived in circular spotlight mode, it can be applied to various working modes, such as linear spotlight mode and curved flight trajectories, as long as the platform position is accurately measured.
There are still some possible improvements to the proposed method. In practice, the THz video SAR platform is small and cannot carry high-precision DGPS and IMU devices, and it is more sensitive to motion errors, which may be introduced by airflow disturbance and attitude control. Therefore, more accurate and efficient motion error compensation algorithms and their combination with the proposed method will be studied in the future to further improve the performance and generality of the algorithm.

Author Contributions

Conceptualization, J.J.; methodology, J.J.; validation, J.J. and Y.L.; formal analysis, J.J. and Y.L.; writing—original draft preparation, J.J.; writing—review and editing, J.J., Y.L. and Y.Y.; visualization, J.J.; supervision, Y.L., Y.Y. and Y.Z.; funding acquisition, Y.L. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Shanghai under Grant 21ZR1444300, in part by the National Nature Science Foundation of China under Grants 12105177, 61988102, and 21731020, in part by the National Key R&D Project of China under Grant 2018YFF01013003, the Opened Foundation of Hongque Innovation Center (HQ202204002), and the Shanghai Social Development Science and Technology Research Project (Grant No. 22dz1200302).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guo, P.; Wu, F.; Tang, S.; Jiang, C.; Liu, C. Implementation Method of Automotive Video SAR (ViSAR) Based on Sub-Aperture Spectrum Fusion. Remote Sens. 2023, 15, 476. [Google Scholar] [CrossRef]
  2. Chen, J.; An, D.; Wang, W.; Chen, L.; Feng, D.; Zhou, Z. A Novel Generation Method of High Quality Video Image for High Resolution Airborne ViSAR. Remote Sens. 2021, 13, 3706. [Google Scholar] [CrossRef]
  3. Yang, C.; Chen, Z.; Deng, Y.; Wang, W.; Wang, P.; Zhao, F. Generation of Multiple Frames for High Resolution Video SAR Based on Time Frequency Sub-Aperture Technique. Remote Sens. 2023, 15, 264. [Google Scholar] [CrossRef]
  4. Kim, S.; Yu, J.; Jeon, S.-Y.; Dewantari, A.; Ka, M.-H. Signal Processing for a Multiple-Input, Multiple-Output (MIMO) Video Synthetic Aperture Radar (SAR) with Beat Frequency Division Frequency-Modulated Continuous Wave (FMCW). Remote Sens. 2017, 9, 491. [Google Scholar] [CrossRef]
  5. Wells, L.; Sorensen, K.; Doerry, A.; Remund, B. Developments in Sar and Ifsar Systems and Technologies at Sandia National Laboratories. In Proceedings of the 2003 IEEE Aerospace Conference Proceedings (Cat. No.03TH8652), Big Sky, MT, USA, 8–15 March 2003; Volume 2. [Google Scholar]
  6. Yocky, D.A.; West, R.D.; Riley, R.M.; Calloway, T.M. Monitoring Surface Phenomena Created by an Underground Chemical Explosion Using Fully Polarimetric VideoSAR. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2481–2493. [Google Scholar] [CrossRef]
  7. Yocky, D.A.; Calloway, T.M.; Wahl, D.E. VideoSAR Collections to Image Underground Chemical Explosion Surface Phenomena. In Proceedings of the Radar Sensor Technology XXI, Anaheim, CA, USA, 1 May 2017; Volume 10188, pp. 222–234. [Google Scholar]
  8. Kim, S.-H.; Fan, R.; Dominski, F. ViSAR: A 235 GHz Radar for Airborne Applications. In Proceedings of the 2018 IEEE Radar Conference (RadarConf18), Oklahoma City, OK, USA, 23–27 April 2018; pp. 1549–1554. [Google Scholar]
  9. Miller, J.; Bishop, E.; Doerry, A. An Application of Backprojection for Video SAR Image Formation Exploiting a Subaperature Circular Shift Register. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XX, Baltimore, MA, USA, 23 May 2013; Volume 8746, pp. 66–79. [Google Scholar]
  10. Johannes, W.; Stanko, S.; Wahlen, A.; Sommer, R.; Pohl, N.; Wellig, P.; Sennhauser, C.; Meier, E.; Kallfass, I. Implementation of a 35 GHz SAR Sensor and a High Resolution Camera to Enable Real-Time Observation. In Proceedings of the EUSAR 2014; 10th European Conference on Synthetic Aperture Radar, Berlin, Germany, 3–5 June 2014; pp. 1–4. [Google Scholar]
  11. Ogut, M.; Cooke, C.; Deal, W.; Kangaslahti, P.; Tanner, A.; Reising, S. A Novel 1/f Noise Mitigation Technique Applied to a 670 GHz Receiver. IEEE Trans. Terahertz Sci. Technol. 2020, 11, 109–112. [Google Scholar] [CrossRef]
  12. Ignatenko, V.; Nottingham, M.; Radius, A.; Lamentowski, L.; Muff, D. ICEYE Microsatellite SAR Constellation Status Update: Long Dwell Spotlight and Wide Swath Imaging Modes. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 1493–1496. [Google Scholar]
  13. Muff, D.; Ignatenko, V.; Dogan, O.; Lamentowski, L.; Leprovost, P.; Nottingham, M.; Radius, A.; Seilonen, T.; Tolpekin, V. The ICEYE Constellation—Some New Achievements. In Proceedings of the 2022 IEEE Radar Conference (RadarConf22), New York City, NY, USA, 21–25 March 2022; pp. 1–4. [Google Scholar]
  14. Wang, J.; Liang, X.-D.; Chen, L.-Y.; Wang, L.-N.; Li, K. First Demonstration of Joint Wireless Communication and High-Resolution SAR Imaging Using Airborne MIMO Radar System. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6619–6632. [Google Scholar] [CrossRef]
  15. Xu, G.; Zhang, B.; Yu, H.; Chen, J.; Hong, W. Sparse Synthetic Aperture Radar Imaging From Compressed Sensing and Machine Learning: Theories, Applications, and Trends. IEEE Geosci. Remote Sens. Mag. 2022, 10, 32–69. [Google Scholar] [CrossRef]
  16. Yan, H.; Mao, X.; Zhang, J.; Zhu, D. Frame Rate Analysis of Video Synthetic Aperture Radar (ViSAR). In Proceedings of the 2016 International Symposium on Antennas and Propagation (ISAP), Okinawa, Japan, 24–28 October 2016; pp. 446–447. [Google Scholar]
  17. Soumekh, M. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms; Wiley: Hoboken, NJ, USA, 1999. [Google Scholar]
  18. Raney, R.K.; Runge, H.; Bamler, R.; Cumming, I.G.; Wong, F.H. Precision SAR Processing Using Chirp Scaling. IEEE Trans. Geosci. Remote Sens. 1994, 32, 786–799. [Google Scholar] [CrossRef]
  19. Mittermayer, J.; Moreira, A.; Loffeld, O. Spotlight SAR Data Processing Using the Frequency Scaling Algorithm. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2198–2214. [Google Scholar] [CrossRef]
  20. Desai, M.D.; Jenkins, W.K. Convolution Backprojection Image Reconstruction for Spotlight Mode Synthetic Aperture Radar. IEEE Trans. Image Process. 1992, 1, 505–517. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, B.; Xu, G.; Zhou, R.; Zhang, H.; Hong, W. Multi-Channel Back-Projection Algorithm for Mmwave Automotive MIMO SAR Imaging with Doppler-Division Multiplexing. IEEE J. Sel. Top. Signal Process. 2022, 1–13. [Google Scholar] [CrossRef]
  22. Walker, J.L. Range-Doppler Imaging of Rotating Objects. IEEE Trans. Aerosp. Electron. Syst. 1980, AES-16, 23–52. [Google Scholar] [CrossRef]
  23. Rigling, B.D.; Moses, R.L. Polar Format Algorithm for Bistatic SAR. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 1147–1159. [Google Scholar] [CrossRef]
  24. Jiang, J.; Li, Y.; Zheng, Q. A THz Video SAR Imaging Algorithm Based on Chirp Scaling. In Proceedings of the 2021 CIE International Conference on Radar (Radar 2021), Haikou, China, 15–19 December 2021; pp. 656–660. [Google Scholar]
  25. Song, X.; Yu, W. Processing Video-SAR Data with the Fast Backprojection Method. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2838–2848. [Google Scholar] [CrossRef]
  26. Xie, H.; Shi, S.; An, D.; Wang, G.; Wang, G.; Xiao, H.; Huang, X.; Zhou, Z.; Xie, C.; Wang, F.; et al. Fast Factorized Backprojection Algorithm for One-Stationary Bistatic Spotlight Circular SAR Image Formation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1494–1510. [Google Scholar] [CrossRef]
  27. Jakowatz, C.V., Jr.; Doren, N. Comparison of Polar Formatting and Back-Projection Algorithms for Spotlight-Mode SAR Image Formation. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIII, Orlando, FL, USA, 17 May 2006; Volume 6237, pp. 137–143. [Google Scholar]
  28. Hu, R.; Li, X.; Yeo, T.S.; Yang, Y.; Chi, C.; Zuo, F.; Hu, X.; Pi, Y. Refocusing and Zoom-in Polar Format Algorithm for Curvilinear Spotlight SAR Imaging on Arbitrary Region of Interest. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7995–8010. [Google Scholar] [CrossRef]
  29. Gorham, L.A.; Rigling, B.D. Scene Size Limits for Polar Format Algorithm. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 73–84. [Google Scholar] [CrossRef]
  30. Gorham, L.; Rigling, B. Fast Corrections for Polar Format Algorithm with a Curved Flight Path. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2815–2824. [Google Scholar] [CrossRef]
  31. Zuo, F.; Li, J. A Persistent Imaging Method for Video-SAR in Terahertz Band. In Proceedings of the 2017 International Applied Computational Electromagnetics Society Symposium (ACES), Suzhou, China, 1–4 August 2017; pp. 1–2. [Google Scholar]
  32. Doerry, A.W. Forming Rotated SAR Images by Real-Time Motion Compensation; Sandia National Lab: Albuquerque, NM, USA, 2012. [Google Scholar]
  33. Gao, A.; Sun, B.; Li, J.; Li, C. A Parameter-Adjusting Autoregistration Imaging Algorithm for Video Synthetic Aperture Radar. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  34. Zuo, F.; Li, J.; Hu, R.; Pi, Y. Unified Coordinate System Algorithm for Terahertz Video-SAR Image Formation. IEEE Trans. Terahertz Sci. Technol. 2018, 8, 725–735. [Google Scholar] [CrossRef]
  35. Yinwei, L.; Ding, L.; Zheng, Q.; Zhu, Y.; Sheng, J. A Novel High-Frequency Vibration Error Estimation and Compensation Algorithm for THz-SAR Imaging Based on Local FrFT. Sensors 2020, 20, 2669. [Google Scholar] [CrossRef]
  36. Li, Y.; Wu, Q.; Wu, J.; Li, P.; Zheng, Q.; Ding, L. Estimation of High-Frequency Vibration Parameters for Terahertz SAR Imaging Based on FrFT with Combination of QML and RANSAC. IEEE Access 2021, 9, 5485–5496. [Google Scholar] [CrossRef]
  37. Hu, R.; Zuo, F.; Li, X.; Hu, X.; Soon, Y.T.; Ma, C. Curvilinear Video-SAR Persistent Imaging with Distortion Correction Based on Nufft-3. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 684–687. [Google Scholar]
  38. Hu, R.; Rao, B.S.M.R.; Alaee-Kerahroodi, M.; Ottersten, B. Orthorectified Polar Format Algorithm for Generalized Spotlight SAR Imaging with DEM. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3999–4007. [Google Scholar] [CrossRef]
  39. Garber, W.L.; Hawley, R.W. Extensions to Polar Formatting with Spatially Variant Post-Filtering. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XVIII, Orlando, FL, USA, 4 May 2011; Volume 8051, pp. 30–41. [Google Scholar]
  40. Doren, N.E.; Jakowatz, C.V.; Wahl, D.E.; Thompson, P.A. General Formulation for Wavefront Curvature Correction in Polar-Formatted Spotlight-Mode SAR Images Using Space-Variant Post-Filtering. In Proceedings of the International Conference on Image Processing, Santa Barbara, CA, USA, 26–29 October 1997; Volume 1, pp. 861–864. [Google Scholar]
  41. Guo, P.; Wu, F.; Wang, A. Extended Polar Format Algorithm (EPFA) for High-Resolution Highly Squinted SAR. Remote Sens. 2023, 15, 456. [Google Scholar] [CrossRef]
  42. Nie, X.; Zhuang, L.; Shen, S. A Quadtree Beam-Segmenting Based Wide-Swath SAR Polar Format Algorithm. IEEE Access 2020, 8, 147682–147691. [Google Scholar] [CrossRef]
  43. Chen, J.; An, D.; Wang, W.; Luo, Y.; Chen, L.; Zhou, Z. Extended Polar Format Algorithm for Large-Scene High-Resolution WAS-SAR Imaging. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5326–5338. [Google Scholar] [CrossRef]
  44. Curlander, J.C.; McDonough, R.N. Synthetic Aperture Radar: Systems and Signal Processing; Wiley: Hoboken, NJ, USA, 1992; ISBN 978-0-471-85770-9. [Google Scholar]
  45. Rosker, M.J.; Wallace, H.B. Imaging through the Atmosphere at Terahertz Frequencies. In Proceedings of the 2007 IEEE/MTT-S International Microwave Symposium, Honolulu, HI, USA, 3–8 June 2007; pp. 773–776. [Google Scholar]
  46. Palm, S.; Wahlen, A.; Stanko, S.; Pohl, N.; Wellig, P.; Stilla, U. Real-Time Onboard Processing and Ground Based Monitoring of FMCW-SAR Videos. In Proceedings of the EUSAR 2014; 10th European Conference on Synthetic Aperture Radar, Berlin, Germany, 3–5 June 2014; pp. 1–4. [Google Scholar]
  47. Carrara, W.G.; Goodman, R.S.; Majewski, R.M. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms; Artech House Remote Sensing Library; Artech House: Boston, MA, USA, 1995; ISBN 978-0-89006-728-4. [Google Scholar]
  48. Zhu, D.; Zhu, Z. Range Resampling in the Polar Format Algorithm for Spotlight SAR Image Formation Using the Chirp Z-Transform. IEEE Trans. Signal Process. 2007, 55, 1011–1023. [Google Scholar] [CrossRef]
  49. Rabiner, L.; Schafer, R.; Rader, C. The Chirp Z-Transform Algorithm. IEEE Trans. Audio Electroacoust. 1969, 17, 86–92. [Google Scholar] [CrossRef]
  50. Lawton, W. A New Polar Fourier Transform for Computer-Aided Tomography and Spotlight Synthetic Aperture Radar. IEEE Trans. Acoust. Speech Signal Process. 1988, 36, 931–933. [Google Scholar] [CrossRef]
  51. Zuo, F.; Min, R.; Pi, Y.; Li, J.; Hu, R. Improved Method of Video Synthetic Aperture Radar Imaging Algorithm. IEEE Geosci. Remote Sens. Lett. 2019, 16, 897–901. [Google Scholar] [CrossRef]
  52. Samczynski, P.; Kulpa, K.S. Coherent MapDrift Technique. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1505–1517. [Google Scholar] [CrossRef]
  53. Wahl, D.E.; Eichel, P.H.; Ghiglia, D.C.; Jakowatz, C.V. Phase Gradient Autofocus-a Robust Tool for High Resolution SAR Phase Correction. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 827–835. [Google Scholar] [CrossRef]
  54. Cumming, I.G.; Wong, F. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation; Artech House Remote Sensing Library; Artech House: Boston, MA, USA, 2005; ISBN 978-1-58053-058-3. [Google Scholar]
  55. Air Force Research Laboratory. Available online: https://www.sdms.afrl.af.mil/ (accessed on 3 March 2023).
  56. Mori, A.; Vita, F. A Time-Domain Raw Signal Simulator for Interferometric SAR. Geosci. Remote Sens. IEEE Trans. 2004, 42, 1811–1817. [Google Scholar] [CrossRef]
  57. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Imaging model for airborne video SAR.
Figure 2. 2D resampling of wavenumber domain signals. (a) LOSPI. (b) SSPI.
Figure 3. Resampling in the wavenumber domain. (a) Range resampling. (b) Azimuth resampling.
Figure 4. The relationship between frequency and scene size when azimuth uniform resampling is satisfied.
Figure 5. Distortion correction and coordinate system unification.
Figure 6. Residual quadratic phase error and effective scene range at different wavebands. (a) THz PFA (f_c = 220 GHz). (b) X-band PFA (f_c = 9.6 GHz).
Figure 7. Residual quadratic phase error in different coordinate systems in X-band. (a) Actual coordinates. (b) Distorted coordinates.
Figure 8. Flow chart of the proposed GPPFA.
Figure 9. (a) Point target distribution. (b) GDM of frame 1. (c) GDM of frame 2.
Figure 10. Imaging results of THz video SAR. (a,b) Imaging results obtained by PFA. (c,d) Imaging results obtained by UCSA [34]. (e,f) Imaging results obtained by GPPFA.
Figure 11. Imaging results of X-band video SAR. (a,d) Imaging results obtained by PFA. (b,e) Imaging results obtained by UCSA [34]. (c,f) Imaging results obtained by GPPFA.
Figure 12. Image profiles of point target P3. (a) Range profiles of THz video SAR. (b) Azimuth profiles of THz video SAR. (c) Range profiles of X-band video SAR. (d) Azimuth profiles of X-band video SAR.
Figure 13. (a) Original SAR image. (b) Range compressed image.
Figure 14. THz video SAR imaging results of extended targets. (a,b) Imaging results obtained by PFA. (c,d) Imaging results obtained by UCSA [34]. (e,f) Imaging results obtained by GPPFA.
Figure 15. X-band video SAR imaging results of extended targets with strong scattering points. (a) Original input SAR image with strong scattering points. (b) Imaging result obtained by PFA. (c) Imaging result obtained by UCSA [34]. (d) Imaging result obtained by GPPFA.
Figure 16. Imaging scene of the large observation scene experiments.
Figure 17. Imaging results of the proposed method in large observation scenarios with motion errors. (a,b) THz-band imaging results without and with MOCO, respectively. (c,d) X-band imaging results without and with MOCO, respectively.
Table 1. Video SAR parameters at different carrier frequencies.
Frequency | Frame Rate | Aperture Overlap Ratio | Synthetic Aperture Time
9.6 GHz   | 5 Hz       | 94%                    | 3.34 s
220 GHz   | 6.84 Hz    | 0%                     | 0.15 s
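The frame rates in Table 1 are consistent with the standard sliding-aperture relation for video SAR, in which a new frame can be released once the non-overlapping part of the synthetic aperture has been collected, i.e., frame rate ≈ 1/((1 − overlap ratio) × synthetic aperture time). The following minimal sketch is an illustration of that relation only (the function name is not part of the paper's processing chain) and approximately reproduces the tabulated values.

```python
# Minimal sketch (assumption): video SAR frame rate from the sliding-aperture relation.
def frame_rate_hz(aperture_time_s: float, overlap_ratio: float) -> float:
    """Frames per second when consecutive apertures share `overlap_ratio` of their data."""
    return 1.0 / ((1.0 - overlap_ratio) * aperture_time_s)

print(frame_rate_hz(3.34, 0.94))  # ~4.99 Hz, matching the ~5 Hz X-band entry in Table 1
print(frame_rate_hz(0.15, 0.00))  # ~6.67 Hz, close to the 6.84 Hz THz entry in Table 1
```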
Table 2. DiR and DeR at different carrier frequencies.
Carrier Frequency | DiR    | DeR
9.6 GHz           | 23.8 m | 35.7 m
220 GHz           | 23.8 m | 171.3 m
Table 3. Video SAR system parameters.
Parameters            | Values
Carrier frequency     | 9.6 GHz / 220 GHz
Reference slant range | 500 m
Grazing angle         | 45°
Average velocity      | 50 m/s
Bandwidth             | 1.2 GHz
Sampling frequency    | 15 MHz
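As a consistency check on Table 3, the nominal slant-range resolution implied by the 1.2 GHz bandwidth is c/(2B) ≈ 0.125 m, which agrees with the measured range resolutions reported in Table 5. The short sketch below illustrates this check; the additional projection to ground range using the 45° grazing angle is included only for illustration and is not a result taken from the paper.

```python
import math

C = 299_792_458.0          # speed of light (m/s)
BANDWIDTH_HZ = 1.2e9       # transmitted bandwidth from Table 3
GRAZING_DEG = 45.0         # grazing angle from Table 3

slant_res = C / (2.0 * BANDWIDTH_HZ)                          # ≈ 0.125 m
ground_res = slant_res / math.cos(math.radians(GRAZING_DEG))  # ≈ 0.177 m (illustrative projection)

print(f"slant-range resolution  ≈ {slant_res:.3f} m")
print(f"ground-range resolution ≈ {ground_res:.3f} m")
```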
Table 4. Geometry position of point targets.
Frame 1   | P1 (30, 30)    | P2 (40, 0)     | P3 (50, 50)
PFA       | (28.28, 31.02) | (38.98, 0.06)  | (53.30, 43.85)
UCSA [34] | (29.92, 29.94) | (39.86, 0.06)  | (49.87, 49.86)
GPPFA     | (29.96, 30.05) | (39.97, −0.06) | (49.98, 50.00)
Frame 2   | P1 (30, 30)    | P2 (40, 0)     | P3 (50, 50)
PFA       | (18.92, 37.75) | (37.36, 10.93) | (28.87, 63.84)
UCSA [34] | (29.89, 30.06) | (39.85, 0.08)  | (49.86, 50.04)
GPPFA     | (29.92, 29.98) | (39.97, 0.08)  | (49.94, 49.92)
Table 5. Imaging quality analysis of point target P3.
THz-Band  | Resolution (m)  | PSLR (dB)           | ISLR (dB)
          | Range | Azimuth | Range    | Azimuth  | Range    | Azimuth
PFA       | 0.127 | 0.126   | −12.297  | −12.742  | −24.699  | −23.506
UCSA [34] | 0.134 | 0.138   | −15.009  | −15.066  | −30.350  | −26.203
GPPFA     | 0.135 | 0.136   | −13.415  | −13.808  | −30.989  | −24.848
X-Band    | Resolution (m)  | PSLR (dB)           | ISLR (dB)
          | Range | Azimuth | Range    | Azimuth  | Azimuth  | Range
PFA       | 0.128 | 0.214   | −10.663  | −0.571   | −18.740  | −10.767
UCSA [34] | 0.137 | 0.455   | −1.345   | −0.919   | −13.505  | 3.938
GPPFA     | 0.138 | 0.142   | −13.229  | −13.173  | −26.095  | −25.295
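PSLR and ISLR in Table 5 are the usual point-target quality measures extracted from the profiles in Figure 12: PSLR compares the strongest sidelobe with the mainlobe peak, and ISLR compares the energy outside the mainlobe with the energy inside it, both in dB. The sketch below shows one simplified way such measures might be computed from a sampled profile, bounding the mainlobe by the first nulls on either side of the peak; it is an illustration under these assumptions, not the evaluation code used in the paper.

```python
import numpy as np

def pslr_islr(profile: np.ndarray) -> tuple[float, float]:
    """Approximate PSLR and ISLR (dB) of a 1-D point-target amplitude profile."""
    power = np.abs(profile) ** 2
    peak = int(np.argmax(power))

    # Mainlobe bounded by the first local minima (nulls) on each side of the peak.
    left = peak
    while left > 0 and power[left - 1] < power[left]:
        left -= 1
    right = peak
    while right < len(power) - 1 and power[right + 1] < power[right]:
        right += 1

    mainlobe = power[left:right + 1]
    sidelobes = np.concatenate([power[:left], power[right + 1:]])

    pslr = 10 * np.log10(sidelobes.max() / power[peak])
    islr = 10 * np.log10(sidelobes.sum() / mainlobe.sum())
    return float(pslr), float(islr)

# Example on an ideal (unweighted) sinc response: PSLR ≈ −13.3 dB.
x = np.linspace(-20, 20, 4001)
print(pslr_islr(np.sinc(x)))
```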
Table 6. Quantitative analysis of extended target imaging results.
        | PFA    | UCSA [34] | GPPFA  | Original Image
Entropy | 5.412  | 9.785     | 5.197  | 4.120
RMSE    | 0.146  | 0.259     | 0.013  | 0
PSNR    | 15.045 | 10.311    | 16.490 |
SSIM    | 0.187  | 0.055     | 0.285  | 1
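The measures in Table 6 compare each imaging result against the original scene image. The following sketch shows how such measures might be evaluated; the histogram-based entropy definition and the use of scikit-image for PSNR and SSIM are assumptions made for illustration and are not necessarily the exact definitions used in the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of the normalized grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def compare(reference: np.ndarray, result: np.ndarray) -> dict:
    """Quality measures of `result` against `reference`; both images scaled to [0, 1]."""
    rmse = float(np.sqrt(np.mean((reference - result) ** 2)))
    psnr = peak_signal_noise_ratio(reference, result, data_range=1.0)
    ssim = structural_similarity(reference, result, data_range=1.0)
    return {"entropy": image_entropy(result), "RMSE": rmse, "PSNR": psnr, "SSIM": ssim}
```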
Table 7. Evaluation of average elapsed time.
         | PFA  | UCSA [34] | GPPFA | BPA
Time (s) | 0.54 | 0.65      | 0.69  | 45.02
Table 8. System parameters for large observation scenario experiments.
Parameters                 | Values
Carrier frequency          | 9.6 GHz / 220 GHz
Reference slant range      | 2600 m
Bandwidth                  | 1.2 GHz
Sampling frequency         | 52 MHz
Pulse repetition frequency | 24 kHz
Sampling points            | 4096