Article

Two-Dimensional Autofocus for Ultra-High-Resolution Squint Spotlight Airborne SAR Based on Improved Spectrum Modification

1 National Key Laboratory of Microwave Imaging Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100094, China
3 Suzhou Key Laboratory of Microwave Imaging, Processing and Application Technology, Suzhou 215123, China
4 Suzhou Aerospace Information Research Institute, Suzhou 215123, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(12), 2158; https://doi.org/10.3390/rs16122158
Submission received: 27 April 2024 / Revised: 7 June 2024 / Accepted: 12 June 2024 / Published: 14 June 2024
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

For ultra-high-resolution (UHR) squint spotlight airborne synthetic aperture radar (SAR), the severe range-azimuth coupling caused by squint mode and the spatial and frequency dependence of the motion error brought by ultra-wide bandwidth both make it difficult to obtain satisfactory imaging results. Although some autofocus methods for squint airborne SAR have been presented in the published literature, their practical applicability in UHR situations remains limited. In this article, a new 2D wavenumber domain autofocus method combined with the Omega-K algorithm dedicated to UHR squint spotlight airborne SAR is proposed. First, we analyze the dependence of range envelope shift error (RESE) and range defocus on the squint angle and then propose a new spectrum modification strategy, after which the spectrum transforms into a quasi-side-looking one. The accuracy of estimation and compensation can be improved significantly in this way. Then, the 2D phase error can be calculated from the 1D estimated error by the mapping relationship, after which the 2D compensation is performed in the wavenumber domain. Furthermore, the image-blocking technique and range-dependent motion error compensation method are embedded to accommodate the spatially variant motion error in UHR cases. Simulations are carried out to verify the effectiveness of the proposed method.

1. Introduction

Synthetic aperture radar (SAR) is an active remote-sensing technique that can operate in several different modes [1,2]. With the developments of the past few decades, it can achieve ultra-high resolution (UHR) up to the centimeter level [3,4,5]. For UHR squint spotlight SAR, the squint mode is more flexible than the side-looking mode [6,7], and the UHR provides more detailed information about the observation scenarios of interest. Despite these advantages, airborne UHR squint spotlight SAR data are more challenging to process [8]: the severe range-azimuth coupling introduced by the squint angle and the spatial and frequency dependence of the motion error brought by the ultra-wide bandwidth both place higher demands on the imaging process.
Due to the severe range-azimuth coupling of squint SAR, traditional SAR imaging algorithms, such as the range-Doppler algorithm (RDA) [9] and the chirp scaling algorithm (CSA) [10], are no longer applicable. Although some modified versions, for example, the squinted RDA [11], the nonlinear CSA (NCSA) [12,13], and the extended NCS (ENCS) method [14,15], have been proposed in the literature, they adopt different approximations and thus still cannot satisfy UHR requirements. Besides the frequency-domain algorithms mentioned above, the back-projection algorithm (BPA) [16], as a time-domain algorithm, and the Omega-K algorithm (ωKA) [17], as a wavenumber-domain algorithm, are both considered the most accurate SAR imaging algorithms and can handle squint SAR data with high accuracy. Since BPA is a point-by-point projection method, it has high computational complexity and rather low efficiency. Therefore, in this article we adopt ωKA as the imaging algorithm to obtain the coarse-focused image.
Different from the spaceborne SAR, which moves along a fixed orbit, the airborne SAR platform is subject to inevitable trajectory deviations. To obtain an accurately focused image, the slant range error should be limited within a fractional wavelength, which is generally not feasible with the provided positioning equipment, so in addition to motion compensation with the measured trajectory, it is necessary to estimate the error from the raw echo data, which is the autofocus process. Several autofocus methods have been proposed in the past few decades. They can be broadly classified into two categories: (1) the 1D autofocus methods that estimate and compensate the azimuth phase error (APE) based on parametric [18,19,20,21] or nonparametric methods [22,23,24,25,26]; (2) the 2D autofocus methods [27,28,29,30], which take into account not only the APE but also the range envelope shift error (RESE), and even the range defocus caused by higher-order errors along the range dimension.
On the one hand, for the UHR situation, the motion error causes an obvious RESE that spans several or even dozens of range cells. In addition, the ultra-wide bandwidth makes the frequency dependence of the error non-negligible, which would otherwise cause defocus in the range dimension. Based on these considerations, 1D autofocus, which only considers the APE, cannot meet the requirements, and 2D autofocus is a better choice. On the other hand, for squint cases, it has been revealed in [31] that, with a non-negligible squint angle, even a small phase error will cause a significantly amplified RESE after Omega-K imaging. To deal with this problem, several time-domain autofocus algorithms [31,32,33] have been proposed. These methods first perform azimuth deramping on the range-downsampled data and then compensate the residual azimuth-dependent phase terms by estimating the azimuth positions of the targets, after which the azimuth phase error can be estimated and the motion error is retrieved and compensated on the raw echo data. In this way, both the APE and the amplified RESE are accounted for. However, the linear motion error will lead to a mismatch of the azimuth positions and further affect the accuracy of the subsequent phase error estimation [34]. Different from these time-domain algorithms, frequency-domain autofocus algorithms estimate the phase error in the azimuth-frequency domain [28,29], which avoids the deramp processing and thus circumvents the problem of azimuth position estimation. However, the methods proposed in [28,29] are only suitable for side-looking or small-squint modes and cannot be directly used for squint cases. Recently, a novel frequency-domain autofocus algorithm tailored to squint SAR was proposed in [34].
However, the range defocus will affect the estimation accuracy of APE, especially for UHR cases, which is not mentioned in [34], and the approximation adopted during the azimuth alignment processing in [34] will also deteriorate the compensation result. The characteristics and limitations of the above methods are summarized in Table 1.
To address the aforementioned issues, we present a novel 2D wavenumber domain autofocus method for UHR squint spotlight SAR in this article. First, with the approximated spectrum, we demonstrate that both the RESE and range defocus worsen as the squint angle increases. To improve the accuracy of the phase error estimation, a new spectrum modification strategy is proposed, which can alleviate the serious range defocus for squint cases and align the spectrum support regions. After spectrum modification, a 2D mapping of APE, which is deduced from the accurate spectrum, is used to compensate for the frequency-dependent phase error in the 2D wavenumber domain. Furthermore, to tackle the non-ignorable spatial variance of motion error for UHR cases, the image-blocking technique [35] and range-dependent motion error compensation methods [36] are embedded in the proposed method.
The remainder of this article is organized as follows. In Section 2.1, the signal model is established, and with the approximated spectrum the dependence of the phase error in the range-Doppler domain on the squint angle is analyzed. After that, the structure of the 2D phase error based on the accurate spectrum is derived. In Section 2.2, the spectrum modification strategy is introduced in detail, and the 2D phase error mapping after spectrum modification is discussed. Several simulation experiments are presented to verify the effectiveness of the proposed method in Section 3. Some discussions are presented in Section 4, and the conclusion is drawn in Section 5.

2. Materials and Methods

2.1. Signal Model and Problems Discussion

2.1.1. Signal Model

The geometric model of squint spotlight SAR is shown in Figure 1. The blue dashed line is the ideal trajectory, and the red curved line denotes the real one affected by the motion error. The squint angle is θ_0, and r is the slant range from the aperture center to the scene center, O. Considering a target P located at the same range bin as O, the distance between P and O is x_n. The instantaneous range between the radar and P is
\[ R(X) = R_i(X) + \Delta R(X) = \sqrt{(r\cos\theta_0)^2 + (X - r\sin\theta_0 - x_n)^2} + \Delta R(X), \tag{1} \]
where X is the azimuth position of the radar with −L/2 ≤ X ≤ L/2, L is the aperture length, R_i(X) denotes the ideal slant range, and ΔR(X) represents the slant range error caused by the motion error.
The demodulated received signal of target P can be expressed as
\[ ss(X, t) = \omega_r\!\left(t - \frac{2R(X)}{c}\right)\omega_a(X - x_n)\cdot\exp\!\left\{ j\pi k_r\left(t - \frac{2R(X)}{c}\right)^{\!2} \right\}\exp\!\left\{ -j\frac{4\pi f_0}{c}R(X) \right\}, \tag{2} \]
where ω_r(·) is the envelope of the transmitted signal and ω_a(·) is the azimuth envelope determined by the antenna pattern function. t is the fast time, c is the speed of light, and f_0 and k_r represent the carrier frequency and the frequency-modulation rate of the transmitted signal, respectively.
After range compression, the signal turns into
\[ ss_1(X, t) = p_r\!\left(t - \frac{2R(X)}{c}\right)\omega_a(X - x_n)\exp\!\left\{ -j\frac{4\pi f_0}{c}R(X) \right\}, \tag{3} \]
where p_r(·) is the range-compressed envelope (generally of a sinc(·) form).
The range spectrum of (3) can be obtained by taking the Fourier transform (FT) with respect to the fast time t, which can be written as
\[ sS(X, K_r) = W_r(K_r)\,\omega_a(X - x_n)\exp\{-jK_r R(X)\}, \tag{4} \]
where W_r(·) is the range envelope function in the wavenumber domain, K_r = 4π(f_0 + f_r)/c is the range wavenumber, and f_r is the range frequency.
Then, performing azimuth FT on (4) yields the 2D spectrum of target P, that is
\[ SS(K_x, K_r) = \int sS(X, K_r)\exp(-jK_x X)\,dX = \int W_r(K_r)\,\omega_a(X - x_n)\exp\{-jK_r R(X)\}\cdot\exp(-jK_x X)\,dX. \tag{5} \]
According to the principle of stationary phase (POSP), the stationary phase point can be calculated by setting the first-order derivative of the integrated phase of (5) with respect to X to zero, i.e.,
\[ \frac{d}{dX}\left[ -K_r R(X) - K_x X \right] = 0, \tag{6} \]
which can be turned into
\[ \frac{X - r\sin\theta_0 - x_n}{\sqrt{(r\cos\theta_0)^2 + (X - r\sin\theta_0 - x_n)^2}} + \frac{d\,\Delta R(X)}{dX} = -\frac{K_x}{K_r}. \tag{7} \]
After solving for the stationary phase point X* from (7) and inserting it into (5), we can obtain the expression of the 2D spectrum.
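To make (7) and the upcoming closed-form approximation concrete, the stationary-phase condition can be solved numerically and checked against the approximate solution. This is an illustrative sketch: the geometry, carrier frequency, and wavenumber values are assumed rather than taken from Table 2, and ΔR is set to zero so the closed form is exact.

```python
import math

# Illustrative parameters (assumed, not from the paper's Table 2).
r = 5000.0                          # slant range to scene center (m)
theta0 = math.radians(20.0)         # squint angle
x_n = 10.0                          # target offset from scene center (m)
Kr = 4 * math.pi * 9.6e9 / 3e8      # range wavenumber at an assumed X-band carrier
Kx = -0.2 * Kr                      # one azimuth wavenumber inside the support region

def lhs(X):
    """Left-hand side of stationary-phase condition (7), with dDeltaR/dX = 0."""
    u = X - r * math.sin(theta0) - x_n
    return u / math.hypot(r * math.cos(theta0), u)

# lhs is monotonically increasing in X, so solve lhs(X) = -Kx/Kr by bisection.
target = -Kx / Kr
lo, hi = -r, r
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if lhs(mid) < target:
        lo = mid
    else:
        hi = mid
X_star_num = 0.5 * (lo + hi)

# Closed-form stationary point for the error-free case (see (8) below in the text).
X_star_cf = (-r * math.cos(theta0) * Kx / math.sqrt(Kr**2 - Kx**2)
             + x_n + r * math.sin(theta0))
```

With ΔR = 0 the two agree to machine precision; with a nonzero ΔR the bisection (or any 1D root finder) still applies, while the closed form becomes an approximation.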

2.1.2. Negative Effects of Squint Angle

In this section, we discuss the negative effects of squint angle by analyzing the approximated spectrum of the signal.
With the ideal stationary point approximation, the stationary point can be solved from (7) as
\[ X^* \approx -\frac{r\cos\theta_0\,K_x}{\sqrt{K_r^2 - K_x^2}} + x_n + r\sin\theta_0. \tag{8} \]
Then, with (8), the 2D spectrum can be approximated as Equation (22) in [31],
\[ SS_1(K_x, K_r) = W_x(K_x)\cdot W_r(K_r)\cdot\exp\{-jK_x x_n\}\cdot\exp\!\left\{ -jr\cos\theta_0\sqrt{K_r^2 - K_x^2} - jK_x r\sin\theta_0 \right\}\cdot\exp\{-jK_r\,\Delta R(X^*)\}, \tag{9} \]
where K x is the azimuth wavenumber and W x ( · ) represents the azimuth envelope function in the wavenumber domain.
Next, with the conventional Stolt mapping [7], i.e., \(K_y = \sqrt{K_r^2 - K_x^2}\), the signal turns into
\[ SS_2(K_x, K_y) = W_x(K_x)\cdot W_y(K_y)\cdot\exp\{-jK_x x_n\}\cdot\exp\{-jK_y r\cos\theta_0 - jK_x r\sin\theta_0\}\cdot\exp\!\left\{ -j\sqrt{K_x^2 + K_y^2}\,\Delta R(X^*) \right\}. \tag{10} \]
The last term of (10) is the 2D phase error, that is
\[ \Phi_e(K_x, K_y) = -\sqrt{K_x^2 + K_y^2}\cdot\Delta R(X^*). \tag{11} \]
To further explore the relationship between the phase error and the squint angle, we perform a Taylor series expansion of (11) with respect to the range wavenumber K_y at K_y = K_yc, that is,
\[ \Phi_e(K_x, K_y) = \phi_0(K_x) + \phi_1(K_x)\cdot(K_y - K_{yc}) + \phi_2(K_x)\cdot(K_y - K_{yc})^2 + \cdots, \tag{12} \]
where ϕ_0(K_x) denotes the APE, ϕ_1(K_x) represents the RESE, and ϕ_2(K_x) and other higher-order terms account for the range defocus. To quantify the impact of the squint angle, we derive the values of the coefficients ϕ_1 and ϕ_2 at the azimuth wavenumber center K_x = K_xc as follows.
\[ \phi_1(K_{xc}) \approx -\cos\theta_0\cdot\Delta R(X^*)\Big|_{K_y = K_{yc}} - \frac{r\sin\theta_0}{\cos\theta_0}\cdot\frac{\partial\Delta R(X^*)}{\partial X^*}\bigg|_{K_y = K_{yc}} \tag{13} \]
\[ \phi_2(K_{xc}) \approx -\frac{\sin^2\theta_0\cos\theta_0}{2K_{yc}}\cdot\Delta R(X^*)\Big|_{K_y = K_{yc}} + \frac{r\sin^3\theta_0}{K_{yc}\cos\theta_0}\cdot\frac{\partial\Delta R(X^*)}{\partial X^*}\bigg|_{K_y = K_{yc}} - \frac{r^2\sin^2\theta_0}{2K_{yc}\cos\theta_0}\cdot\frac{\partial^2\Delta R(X^*)}{\partial X^{*2}}\bigg|_{K_y = K_{yc}} \tag{14} \]
The detailed derivations are given in Appendix A.
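As a quick sanity check on the first-order coefficient in (13), one can compare it with a central-difference derivative of (11) along K_y, using the post-Stolt stationary point. Everything below is an assumed toy setup (geometry, wavenumbers, and a smooth ΔR), not the paper's configuration:

```python
import math

# Assumed illustrative geometry and wavenumbers.
r = 5000.0
theta0 = math.radians(20.0)
Kyc = 4 * math.pi * 9.6e9 / 3e8          # assumed range-wavenumber center
Kxc = Kyc * math.tan(theta0)             # wavenumber center of a target at x_n = 0

dR  = lambda X: 0.05 * math.sin(2e-4 * X)          # toy DeltaR(X), metres
dRp = lambda X: 0.05 * 2e-4 * math.cos(2e-4 * X)   # its derivative

def X_star(Ky):
    """Stationary point after the Stolt mapping (x_n = 0)."""
    return -r * math.cos(theta0) * Kxc / Ky + r * math.sin(theta0)

def phase_error(Ky):
    """Phi_e(Kxc, Ky) from (11)."""
    return -math.hypot(Kxc, Ky) * dR(X_star(Ky))

# Central-difference estimate of phi_1 = dPhi_e/dKy at Ky = Kyc ...
h = Kyc * 1e-6
phi1_num = (phase_error(Kyc + h) - phase_error(Kyc - h)) / (2 * h)

# ... against the closed form (13).
Xc = X_star(Kyc)
phi1_cf = (-math.cos(theta0) * dR(Xc)
           - r * math.sin(theta0) / math.cos(theta0) * dRp(Xc))
```

Under these assumptions the two values agree closely, which is the kind of consistency check that the full derivation in Appendix A formalizes.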
To show the influence of squint angles intuitively, we conduct several simulations with the parameters given in Table 2. The trajectory deviation used for the simulation is extracted from a real flight history obtained by an airborne microwave photonic-assisted SAR system [3], as shown in Figure 2. Here, no motion compensation is applied. By setting different squint angles varying from 0° to 30°, we obtain the corresponding range envelopes in the range-Doppler domain after range cell migration correction (RCMC), as shown in Figure 3.
For side-looking, θ_0 = 0°, so ϕ_1(K_xc) is linearly dependent on ΔR(X*) and ϕ_2(K_xc) ≈ 0, according to (13) and (14). As shown in Figure 3a, the RESE is confined to about 20 range cells, and no range defocus exists in the middle part. In the squint case, θ_0 ≠ 0°; thus, ϕ_1(K_xc) has an additional term, which means a larger RESE, and ϕ_2(K_xc) is no longer negligible. This is consistent with Figure 3b–d, where the RESE is significantly larger and range defocus appears in the middle part. As the squint angle increases, the RESE and range defocus become more severe, which greatly degrades the estimation accuracy of the RESE and APE during autofocus processing.
It should be noted that, although some approximations are made in the above analysis, the simulation results shown in Figure 3 are basically consistent with (13) and (14). Therefore, (13) and (14) can, to some extent, reflect the dependence of the phase error on the squint angle.

2.1.3. Accurate Structure of the 2D Phase Error

The 2D spectrum expressed in (9) is derived under the ideal stationary point approximation, which is intended to simplify the illustration of the negative influence of squint angles. However, for more accurate estimation and compensation of errors, it is necessary to derive a more accurate structure of the 2D phase error.
According to [28,34], the stationary point without approximation can be written in the following form, i.e.,
\[ X^* = \xi\!\left(\frac{K_x}{K_r}\right) + r\sin\theta_0 + x_n, \tag{15} \]
where ξ ( · ) is an unknown function that denotes the dependence of X * on K x and K r .
Substituting (15) into (5) and using β(·) to represent the composite function R(X*(·)), the spectrum without approximation can be written as
\[ SS_3(K_x, K_r) = W_x(K_x)\cdot W_r(K_r)\cdot\exp\{-jK_x x_n\}\cdot\exp\!\left\{ -jK_r\,\beta\!\left(\frac{K_x}{K_r}\right) - jK_x\,\xi\!\left(\frac{K_x}{K_r}\right) \right\}\cdot\exp\{-jK_x r\sin\theta_0\}. \tag{16} \]
Then, again substituting the conventional Stolt mapping \(K_y = \sqrt{K_r^2 - K_x^2}\) into (16), it turns into
\[ SS_4(K_x, K_y) = W_x(K_x)\cdot W_y(K_y)\cdot\exp\{-jK_x x_n\}\cdot\exp\{-jK_x r\sin\theta_0\}\cdot\exp\!\left\{ -j\sqrt{K_x^2 + K_y^2}\,\beta\!\left(\frac{K_x}{\sqrt{K_x^2 + K_y^2}}\right) \right\}\cdot\exp\!\left\{ -jK_x\,\xi\!\left(\frac{K_x}{\sqrt{K_x^2 + K_y^2}}\right) \right\}. \tag{17} \]
Comparing (17) with the ideal spectrum, i.e., the first two exponential terms of (10), we can get the 2D error phase written as
\[ \Phi_e(K_x, K_y) = -\sqrt{K_x^2 + K_y^2}\,\beta\!\left(\frac{K_x}{\sqrt{K_x^2 + K_y^2}}\right) - K_x\,\xi\!\left(\frac{K_x}{\sqrt{K_x^2 + K_y^2}}\right) + K_y r\cos\theta_0. \tag{18} \]
It can also be written as
\[ \Phi_e(K_x, K_y) = K_y\left[ -\sqrt{1 + \left(\frac{K_x}{K_y}\right)^{\!2}}\;\beta\!\left(\frac{K_x/K_y}{\sqrt{1 + (K_x/K_y)^2}}\right) - \frac{K_x}{K_y}\,\xi\!\left(\frac{K_x/K_y}{\sqrt{1 + (K_x/K_y)^2}}\right) + r\cos\theta_0 \right]. \tag{19} \]
Then, (19) can be simplified as
\[ \Phi_e(K_x, K_y) = K_y\cdot\zeta\!\left(\frac{K_x}{K_y}\right), \tag{20} \]
where ζ ( · ) is defined as
\[ \zeta(\nu) = -\sqrt{1 + \nu^2}\cdot\beta\!\left(\frac{\nu}{\sqrt{1 + \nu^2}}\right) - \nu\cdot\xi\!\left(\frac{\nu}{\sqrt{1 + \nu^2}}\right) + r\cos\theta_0. \tag{21} \]
Therefore, although it is derived in the squint mode, the form of the 2D phase error, i.e., (20), is the same as that deduced in the side-looking mode in [28].

2.2. 2D Autofocus for UHR Squint Spotlight SAR

In this section, we propose a new 2D autofocus approach for UHR squint spotlight airborne SAR based on spectrum modification. According to the analysis in Section 2.1.2, under the same motion error both the RESE and range defocus deteriorate with increasing squint angles. Based on the previous conclusion and the subsequent analysis of the support region, we propose an improved spectrum modification strategy, in which the spectrum of the squint mode is transformed into a quasi-side-looking one. In this way, the range defocus can be greatly alleviated, and thus the estimation accuracy of RESE and APE can be greatly improved. Then, considering the frequency dependence of the phase error, we calculate the 2D phase error using the 2D mapping relationship deduced in Section 2.1.3 and compensate for it in the wavenumber domain.

2.2.1. Support Region Analysis

In fact, the analytical expressions of the 2D spectrum are the same for both side-looking and squint modes, as derived in Section 2.1.3. The main difference between the two modes lies in the support region of the wavenumber domain.
According to the geometric explanation in Chapter 8 of [2], the relationship between K_x (the azimuth wavenumber), K_r (the range wavenumber before Stolt mapping), and K_y (the range wavenumber after Stolt mapping) can be written as
\[ K_x = K_r\sin\theta, \qquad K_y = K_r\cos\theta, \tag{22} \]
where θ is the instantaneous squint angle. K_r is determined only by the carrier frequency and bandwidth of the transmitted signal, while K_x and K_y are both affected by K_r and the instantaneous squint angle. In side-looking mode, for a given target the squint angle at the synthetic aperture center is θ_c = 0, which implies that K_x is symmetric about K_x = 0 and K_y is centered around K_y = K_rc = 4πf_0/c. This means that the center of the support region (K_xc, K_yc) is located at (0, 4πf_0/c). For squint mode, in contrast, θ_c ≠ 0; hence, the center of the support region (K_xc, K_yc) will be displaced from (0, 4πf_0/c) to (K_rc sinθ_c, K_rc cosθ_c).
Based on the above analysis about the support region, we propose the following spectrum modification strategy to convert the squint mode spectrum into a quasi-side-looking one, that is to say, shift the spectrum support region to make it change from a skew one centered at K x = K r c sin θ c to a symmetric one centered at K x = 0 .
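The displacement of the support-region center described by (22) can be sketched directly; the wavenumber K_rc below assumes an X-band carrier and is purely illustrative:

```python
import math

Krc = 4 * math.pi * 9.6e9 / 3e8     # assumed carrier range wavenumber (rad/m)

def support_center(theta_c_deg):
    """Center (Kxc, Kyc) of the wavenumber support region from (22)."""
    th = math.radians(theta_c_deg)
    return (Krc * math.sin(th), Krc * math.cos(th))

side_looking = support_center(0.0)   # centered at (0, Krc): the quasi-side-looking case
squint = support_center(20.0)        # displaced off zero-Doppler
azimuth_shift = -squint[0]           # shift needed to re-center the spectrum at Kx = 0
```

The spectrum modification strategy below amounts to applying such a shift so that every target's support region is re-centered at K_x = 0.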

2.2.2. Azimuth Spectrum Shifting

For conventional autofocus methods, it is often assumed that the phase error is space-invariant in the azimuth time domain, while this is not the case in the azimuth-wavenumber domain [28]. Therefore, the purpose of azimuth spectrum shifting is twofold: (1) to align the azimuth wavenumber support regions, so that the phase error can be considered space-invariant in the wavenumber domain, and (2) to translate the azimuth wavenumber center to zero in order to minimize the negative effects brought by squint angles.
Considering a generic target whose azimuth coordinate is x, the center line of the azimuth wavenumber, i.e., the red dashed line AB in Figure 4a, can be written as
\[ K_{xc} = K_y\tan\theta_c = K_y\cdot\frac{x + r\sin\theta_0}{r\cos\theta_0}, \tag{23} \]
where θ c is the squint angle for this target at the synthetic aperture center.
Therefore, if we consider only the center line of the azimuth wavenumber, the azimuth spectrum shifting can be implemented in the ( x , K y ) domain by the following filter, which is derived by integrating (23) along the x dimension.
\[ H_1(x, K_y) = \exp\!\left\{ -j\!\int K_y\cdot\frac{x + r\sin\theta_0}{r\cos\theta_0}\,dx \right\} = \exp\!\left\{ -jK_y\,\frac{x^2 + 2r\sin\theta_0\cdot x}{2r\cos\theta_0} \right\} \tag{24} \]
After this step, the support region of the target in the wavenumber domain has changed from Figure 4a to Figure 4b: the support region is moved near zero-Doppler, and the azimuth-wavenumber support regions of targets at different locations are aligned. It should also be pointed out that, since the phase of (24) is linearly dependent on K_y, it will cause an azimuth-variant spatial shift in the range dimension.
Different from the approximation used in Equation (38) in [34], which is written below,
\[ K_{xc} = \frac{K_{rc}(x + r\sin\theta_0)}{\sqrt{r^2 + 2r\sin\theta_0\cdot x + x^2}} \approx K_{rc}\sin\theta_0 + \frac{K_{rc}\cos^2\theta_0}{r}\,x, \tag{25} \]
we derive the spectrum shift function (24) from the relationship between K_x and K_y, rather than using K_rc directly. Since K_r has been transformed into K_y after the Stolt mapping, this is more accurate. Furthermore, the Taylor expansion approximation used in (25) is also circumvented in our derivation.
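The action of (24) can be verified on a toy one-dimensional signal: a phase history obtained by integrating the center line (23) over x should reduce to a constant-phase signal after multiplication by H_1. The geometry and grid below are assumed for illustration:

```python
import numpy as np

# Assumed geometry and one range-wavenumber bin (illustrative values only).
r, theta0 = 5000.0, np.radians(20.0)
Ky = 4 * np.pi * 9.6e9 / 3e8              # one Ky bin at an assumed X-band carrier
x = np.linspace(-50.0, 50.0, 2001)        # azimuth coordinates within a block (m)

# Phase obtained by integrating the local azimuth wavenumber Kxc(x) of (23) over x.
quad = Ky * (x**2 + 2 * r * np.sin(theta0) * x) / (2 * r * np.cos(theta0))
signal = np.exp(1j * quad)                # synthetic signal following the center line
H1 = np.exp(-1j * quad)                   # the azimuth shifting filter (24)

shifted = signal * H1
residual = np.angle(shifted * np.conj(shifted[0]))   # residual phase after shifting
```

After the filter, the residual phase is flat (zero local azimuth wavenumber), i.e., the center line has been moved to zero-Doppler, which is exactly the alignment the text describes.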

2.2.3. Range Spectrum Shifting

Considering the center line of the range wavenumber, i.e., the red dashed line CD in Figure 4a, it can be expressed as
\[ K_{yc1} = \sqrt{K_{rc}^2 - K_x^2}. \tag{26} \]
Then, after azimuth shifting, CD in Figure 4a moves to C′D′ in Figure 4b, that is
\[ K_{yc2} = \sqrt{K_{rc}^2 - \left(K_x + K_y\cdot\frac{x + r\sin\theta_0}{r\cos\theta_0}\right)^{\!2}}. \tag{27} \]
With (27), the difference of the center line of range wavenumber between squint and side-looking mode can be written as
\[ \Delta K_{yc}(K_x) = K_{yc2} - \sqrt{K_{rc}^2 - K_x^2} \approx \sqrt{K_{rc}^2 - (K_x + K_{rc}\sin\theta_0)^2} - \sqrt{K_{rc}^2 - K_x^2} \approx -\sin\theta_0\cdot K_x - \frac{K_{rc}}{2}\sin^2\theta_0. \tag{28} \]
Ignoring the constant term in (28), range spectrum shifting can be realized in the ( K x , y ) domain by
\[ H_2(K_x, y) = \exp\{ jK_x\sin\theta_0\cdot(y - y_{ref}) \}, \tag{29} \]
where y r e f is the center coordinate along the range dimension of the observed region.
Here, only the center line of the range wavenumber is considered, and the spatial variance is ignored due to the approximation used in (28). Moreover, as with the azimuth spectrum shifting, the phase of (29) is linearly dependent on K_x; thus, it will cause a range-variant spatial shift in the azimuth dimension. To minimize this shift, y_ref is used in (29).
In fact, the range defocus has already been significantly reduced after azimuth spectrum shifting. However, at this stage the spectrum is skewed in the range dimension, as shown in Figure 4b, indicating significant variation of K_yc across K_x. The range spectrum shifting realizes the tilt correction, as shown in Figure 4c, making it more reasonable to use a unified K_yc for the 2D phase error mapping.
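A minimal sketch of this tilt correction: multiplying by (29) at a fixed K_x shifts the range spectrum by K_x sinθ_0, compensating the linear term of (28). Grid sizes and wavenumber values are illustrative assumptions:

```python
import numpy as np

# Illustrative check of the range spectrum shifting (29) at one Kx bin.
theta0 = np.radians(20.0)
y = np.linspace(-30.0, 30.0, 4096)      # range coordinate within a block (m)
dy = y[1] - y[0]
y_ref = 0.0                             # block center used as reference
Kx = 50.0                               # one azimuth-wavenumber bin (rad/m, assumed)
Ky_line = 100.0                         # a range-spectral line before shifting (rad/m)

sig = np.exp(1j * Ky_line * y)          # single spectral line at Ky_line
H2 = np.exp(1j * Kx * np.sin(theta0) * (y - y_ref))
shifted = sig * H2

# Measure the spectral-peak displacement via an FFT over y.
k_axis = 2 * np.pi * np.fft.fftfreq(y.size, d=dy)
peak = lambda s: k_axis[np.argmax(np.abs(np.fft.fft(s)))]
delta = peak(shifted) - peak(sig)       # should be close to Kx*sin(theta0)
```

Applying this at every K_x row straightens the skewed support region, which is the tilt correction sketched in Figure 4c.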

2.2.4. 2D Phase-Error Mapping

The exact analytical structure of the 2D phase error for Omega-K imaging has been derived as shown in (20). Since we perform the spectrum modification in the azimuth and range dimensions, respectively, it is necessary to analyze how the structure of the 2D phase error changes after the spectrum modification.
From (24) and (29), it can be observed that the wavenumber shift in one dimension is linearly related to the other coordinate, which can be denoted as K_x → K_x + C_1·K_y for the azimuth shift and K_y → K_y + C_2·K_x for the range shift, where C_1 and C_2 are constants for a specific target.
Therefore, after azimuth spectrum shifting, (20) turns into
\[ \Phi_e'(K_x, K_y) = K_y\cdot\zeta\!\left(\frac{K_x + C_1 K_y}{K_y}\right). \tag{30} \]
Then, after range spectrum shifting, (30) turns into
\[ \Phi_e''(K_x, K_y) = (K_y + C_2 K_x)\cdot\zeta\!\left(\frac{K_x + C_1(K_y + C_2 K_x)}{K_y + C_2 K_x}\right) = K_y\!\left(1 + C_2\frac{K_x}{K_y}\right)\zeta\!\left(\frac{(1 + C_1 C_2)\frac{K_x}{K_y} + C_1}{1 + C_2\frac{K_x}{K_y}}\right), \tag{31} \]
which can also be simplified as
\[ \Phi_e''(K_x, K_y) = K_y\cdot\vartheta\!\left(\frac{K_x}{K_y}\right). \tag{32} \]
Comparing (20) and (32), it can be observed that, after the spectrum manipulations, although the specific expression of the 2D phase error changes, the form of the analytical structure remains identical. Therefore, if we perform a Taylor series expansion of the 2D phase error of the modified spectrum (32) with respect to K_y at K_y = K_yc, the analytical relationship between the 2D phase error Φ_e(K_x, K_y) and the 1D APE ϕ_0(K_x) is still the same as Equation (34) in [28], i.e.,
\[ \hat{\Phi}_e(K_x, K_y) = \frac{K_y}{K_{yc}}\,\hat{\phi}_0\!\left(\frac{K_{yc}}{K_y}K_x\right). \tag{33} \]
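In practice, (33) amounts to resampling the estimated 1D APE at scaled azimuth wavenumbers and weighting by K_y/K_yc. A minimal sketch with an assumed toy APE and illustrative wavenumber grids:

```python
import numpy as np

Kyc = 402.0                                  # assumed range-wavenumber center (rad/m)
Kx = np.linspace(-40.0, 40.0, 512)           # azimuth-wavenumber grid
Ky = Kyc + np.linspace(-20.0, 20.0, 257)     # range-wavenumber grid (row 128 is Kyc)

phi0 = 0.8 * np.sin(0.2 * Kx) + 0.001 * Kx**2   # toy estimated 1D APE (rad)

# Phi_e(Kx, Ky) = (Ky/Kyc) * phi0((Kyc/Ky)*Kx), sampled by linear interpolation.
# Note np.interp clamps outside the Kx grid; real code would pad the estimate.
phi_2d = np.empty((Ky.size, Kx.size))
for i, ky in enumerate(Ky):
    phi_2d[i] = (ky / Kyc) * np.interp(Kyc / ky * Kx, Kx, phi0)
```

On the K_y = K_yc row the mapping reduces to the identity, so that row reproduces the estimated APE exactly.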

2.2.5. 2D Autofocus Processing

Based on the above analysis, we propose a new 2D autofocus method for UHR squint spotlight SAR. In addition to the spectrum modification used to mitigate the RESE and range defocus, considering the spatial variance of the motion error for UHR cases, the image-blocking technique [35] is used. Furthermore, taking into account both the estimation error and the residual range-variant error, which is ignored in the 2D autofocus, another 1D range-variant autofocus [36,37,38] is performed for each sub-image following the 2D wavenumber domain autofocus. Finally, the well-focused sub-images are stitched together to produce the final output. The entire processing flowchart of the proposed method is shown in Figure 5.
There are four main steps for the autofocus of each sub-image, as detailed below.
  • Spectrum modification. Perform azimuth spectrum shifting with (24) in the ( x , K y ) domain and then range spectrum shifting with (29) in the ( K x , y ) domain. After that, the squint spectrum transforms into a quasi-side-looking one. The range defocus and RESE can be greatly alleviated, and the support regions of targets at different locations are aligned in the wavenumber domain.
  • Phase error estimation and correction. To obtain an accurate estimation of the APE, we first need to eliminate the effect of the RESE, which can be accomplished by the downsampling method [27] or the minimum-entropy method [39]. The former enlarges the range cell by downsampling so that the RESE becomes smaller than one range cell. The latter estimates the RESE by entropy minimization of the average range profile. After that, the APE can be estimated with conventional autofocus methods such as PGA [22] in the aligned range-Doppler domain. Then, with the estimated APE, perform the 2D mapping using (33) and compensate for it in the 2D wavenumber domain.
  • Inverse spectrum modification. This is the inverse process of step 1. Perform range and azimuth spectrum shifting with the conjugate of (29) and (24) sequentially. After this step, the support region of the spectrum is restored to its original position.
  • Second 1D autofocus. Considering the estimation error and the residual range-variant RESE and APE within each block after the 2D compensation, we perform 1D range-variant autofocus on the result of step 3 with the method in [36,37,38].
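The four steps can be summarized as the following schematic per-block loop. The phase-error estimate is left as a zero placeholder, and the filters H1 (applied in the (x, K_y) domain) and H2 (applied in the (K_x, y) domain) are assumed to be precomputed unit-modulus arrays; this sketch therefore only fixes the ordering of the steps and the transform domains in which each acts. With the placeholder estimate, steps 1 and 3 cancel exactly:

```python
import numpy as np

def autofocus_block(img, H1, H2):
    """Schematic per-block 2D autofocus loop; all estimators are placeholders."""
    # Step 1: spectrum modification -- azimuth shift in (x, Ky), range shift in (Kx, y).
    xKy = np.fft.fft(img, axis=1)            # image (x, y) -> (x, Ky)
    xKy *= H1
    Kxy = np.fft.fft(np.fft.ifft(xKy, axis=1), axis=0)   # -> (Kx, y)
    Kxy *= H2
    spec = np.fft.fft(Kxy, axis=1)           # -> (Kx, Ky), quasi-side-looking

    # Step 2: estimate the APE (e.g. PGA after RESE removal) and apply the 2D
    # mapping (33); a zero array stands in for the mapped phase error here.
    phi2d = np.zeros(spec.shape)
    spec *= np.exp(-1j * phi2d)

    # Step 3: inverse spectrum modification (conjugate filters, reverse order).
    Kxy = np.fft.ifft(spec, axis=1) * np.conj(H2)
    xKy = np.fft.fft(np.fft.ifft(Kxy, axis=0), axis=1) * np.conj(H1)
    out = np.fft.ifft(xKy, axis=1)

    # Step 4: a second 1D range-variant autofocus would follow here.
    return out
```

Because H1 and H2 have unit modulus, the forward and inverse modifications form an exact round trip, so with a zero phase-error estimate the block is returned unchanged.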

3. Results

3.1. Phase Error Analysis and Performance Comparison

To analyze the phase error intuitively and show the superiority of the proposed method over the referenced method in [34], we simulate with the parameters shown in Table 2, and the squint angle is set to 20°. The motion error is shown in Figure 2. The scene consists of three point targets distributed along the azimuth direction with adjacent intervals of 15 m, i.e., A_1 (−15 m, 0), A_2 (0, 0), and A_3 (+15 m, 0), as illustrated in Figure 6. The simulated echo data are 16,002 × 26,900 points.

3.1.1. 2D Phase Error Analysis

First, we analyze the rationality of uniformly compensating the 2D phase error of these three azimuth-distributed point targets in the wavenumber domain. The differences of the actual 2D phase error between A_1 and A_2, and between A_3 and A_2, after spectrum alignment are presented in Figure 7, from which we can see that the differences are smaller than π/4; hence, it is rational to perform the phase error estimation and compensation using the middle point A_2 as a reference.
Then, we discuss the necessity of the second 1D autofocus. We examine the estimation error of the middle target, A 2 . Figure 8a,b show the actual and the estimated phase errors of the middle target. Although they have similar shapes, the estimation error is unavoidable, which is shown in Figure 9a. For clarity, we also show the side view of Figure 9a in Figure 9b, where we can see that, at both ends of the azimuth frequency, the estimation error is greater than π / 4 . Despite the estimation error, we compensate the 2D phase error using the estimated one. The 2D compensated results are shown in Figure 10, where the second row of range profiles shows that the range dimension is well focused, while the third row of the azimuth dimension indicates that there is still evident phase error along the azimuth dimension. Therefore, a second 1D autofocus is necessary due to the estimation error during the 2D autofocus process, even though the range variance of the error is out of consideration in this simulation.

3.1.2. Performance Comparison

After analyzing the phase error in the previous part, we carry out 2D autofocus with the alignment method in [34] and the proposed method, respectively.
Figure 11 shows the variation of the 2D spectrum: Figure 11a is the original spectrum, corresponding to the sketch in Figure 4a; Figure 11b is the spectrum after alignment with the method in [34]; and Figure 11c,d are the spectra after azimuth and range alignment with the proposed method, corresponding to the sketches in Figure 4b,c, respectively. Due to the approximation used in [34], the spectrum after processing deviates from zero-Doppler and is not fully aligned, as shown in Figure 11b. After the proposed azimuth processing, the spectrum is better aligned, as shown in Figure 11c. Furthermore, as shown in Figure 11d, the proposed range processing eliminates the tilt in the range dimension.
Figure 12 shows the range envelopes in the range-Doppler domain using the alignment method in [34] and the proposed method. Due to the deviation from zero-Doppler, the range defocus in Figure 12a is more severe than that in Figure 12b, as shown in the enlarged red box on the right side, which degrades the estimation accuracy. Furthermore, the compensation effect of the referenced method is also undermined by the misalignment shown in the enlarged red box on the left side of Figure 12a. In contrast, the lines in Figure 12b are better aligned, which is beneficial for the subsequent estimation and correction of the phase error. For these reasons, the final result of the referenced method is worse, especially for the edge targets, as shown in Figure 13a. In comparison, the focusing performance of the proposed method is obviously better, as shown in Figure 13b. To provide a quantitative demonstration of the superiority of the proposed method over the referenced method in [34], we measured the entropy and contrast of the results in Figure 13, which are presented in Table 3.

3.2. Range-Variant Compensation

In the previous section, we simulated three azimuth-distributed point targets without considering the range-variance of the phase error. In this section, we simulate seven range-distributed point targets, namely R 1 (0, −15 m), R 2 (0, −10 m), R 3 (0, −5 m), R 4 ( 0 , 0 ) , R 5 (0, +5 m), R 6 (0, +10 m), and R 7 (0, +15 m), as illustrated in Figure 14.
First, we perform 2D compensation using the middle target, R 4 , as a reference. The residual APE ϕ 0 ( K x ) and RESE ϕ 1 ( K x ) are shown in Figure 15. We can see that, after uniform 2D compensation, the maximum residual APEs of those non-reference points are greater than π / 4 , and the maximum residual RESEs of those non-reference points are larger than 1 / 2 range cell.
To deal with such range-variant residual errors, we refer to the LS-based range-variant APE correction method [36] and the CZT-based range-variant RESE correction method [37,38]. After the estimation and correction of the range-variant RESEs and APEs by the second 1D autofocus, the residual RESEs and APEs of the seven targets at this stage are shown in Figure 16, where the residual APEs are less than π / 4 and the residual RESEs are less than 1 / 2 range cell, confirming the necessity and effectiveness of the range-variant envelope error and phase error compensation method adopted in the second 1D autofocus.

3.3. Large Scenario Verification

To further validate the effectiveness of the proposed method, we simulate a large scenario with the same parameters and motion errors as in Table 2 and Figure 2. The squint angle is set to 10°. The scene comprises 81 point targets evenly distributed in the (x, y) plane, with an interval of 10 m between adjacent targets in both directions, as illustrated in Figure 17. The simulated echo data contain 16,002 × 27,200 samples.
Figure 18a shows the original coarse-focused image processed by the Omega-K algorithm without motion compensation. The entire image is divided into 30 m × 30 m blocks for autofocus processing. The final well-focused image is presented in Figure 18b, and the enlarged contour images of the three targets indicated by red boxes are shown on the right side. The quality indexes of the three targets, including the impulse response width (IRW), peak sidelobe ratio (PSLR), and integrated sidelobe ratio (ISLR), are presented in Table 4. The enlarged contour images and the quality indexes together confirm the effectiveness of the proposed method.

4. Discussion

In this article, a novel 2D autofocus method for UHR squint spotlight airborne SAR is proposed. This method addresses the problem of frequency dependence of the motion error by compensating for higher-order terms through a two-dimensional phase error mapping relationship. Additionally, it tackles the problem of spatial variance of the motion error in UHR cases by integrating the image-blocking technique and range-variant autofocus method into the processing flow.
Effectively compensating for both the frequency and spatial dependence of the motion error is challenging. The dilemma arises because the former requires compensation in the 2D frequency domain, while the latter necessitates compensation in the 2D image domain; the two cannot be handled simultaneously unless point-by-point processing is employed, which entails significant computational resources. Image-blocking is a compromise solution to this problem. By partitioning the entire image into small pieces, the spatial variance of the higher-order terms of the error can be ignored within each sub-image. The second 1D autofocus then compensates for part of the range-variant errors, i.e., the constant term ϕ0(Kx) and the first-order term ϕ1(Kx).
To ensure that the spatial variance of the motion error can be ignored within each sub-image, the block size, which is determined by both the radar parameters and the magnitude of the motion errors, needs careful consideration. For a given azimuth time η, let the trajectory deviations of the radar in the Y and Z directions be Δy(η) and Δz(η), respectively. Then, for two differently located targets whose incidence angles at this azimuth time are γ1(η) and γ2(η), respectively, the instantaneous slant range errors can be calculated as follows:
$$\Delta R_i(\eta) \approx \Delta y(\eta)\,\sin\gamma_i(\eta) + \Delta z(\eta)\,\cos\gamma_i(\eta), \quad i = 1, 2.$$
Then, the phase error of the two targets at η can be expressed as
$$\phi_i(\eta) \approx \frac{4\pi}{\lambda}\,\Delta R_i(\eta) = \frac{4\pi}{\lambda}\left[\Delta y(\eta)\,\sin\gamma_i(\eta) + \Delta z(\eta)\,\cos\gamma_i(\eta)\right], \quad i = 1, 2. \qquad (35)$$
With (35), the phase error of targets at different locations can be calculated, providing a reference for selecting the block size. In our simulation, with the given parameters, the maximum difference of the phase error between two targets separated by 30 m in azimuth is approximately 0.25 rad, smaller than π/4; thus, the phase error within a region of this size can be considered spatially invariant.
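As a numerical illustration of how (35) guides the block-size selection, the sketch below evaluates the phase error for two targets with slightly different incidence angles and checks the π/4 criterion. The deviation and angle values are illustrative assumptions, not the recorded trajectory of the simulation.

```python
import numpy as np

# Block-size check following (35): compare the motion-induced phase
# error of two targets and verify their difference stays below pi/4.
wavelength = 3e8 / 15.14e9     # carrier frequency 15.14 GHz (Table 2)

def phase_error(dy, dz, gamma):
    """phi = 4*pi/lambda * (dy*sin(gamma) + dz*cos(gamma)), as in (35)."""
    return 4.0 * np.pi / wavelength * (dy * np.sin(gamma) + dz * np.cos(gamma))

dy, dz = 0.05, 0.05                              # assumed trajectory deviations (m)
g1, g2 = np.deg2rad(60.0), np.deg2rad(60.1)      # incidence angles of the two targets
diff = abs(phase_error(dy, dz, g1) - phase_error(dy, dz, g2))
assert diff < np.pi / 4   # spatial variance negligible for this separation
```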
Next, the estimation accuracy of the proposed method is discussed. In the proposed method, the APE is estimated first, and the estimate is then used to calculate the 2D phase error. Therefore, the accuracy of the method depends on both the APE estimation accuracy and the interpolation precision during the 2D mapping. In our simulations, we use PGA for APE estimation. To enhance estimation accuracy, improved versions of PGA, such as QPGA [40] and weighted PGA [41], can also be employed. Moreover, PGA is a dominant-point-based method, which implies that dominant points are required to obtain reliable estimates; the more dominant points there are, the more accurate the estimation becomes. Indeed, all PGA-based autofocus methods are constrained by the availability of dominant points. If no dominant scatterers exist and no manually placed corner reflectors are available, other APE estimation algorithms, such as image-optimization-based methods, can be considered as alternatives.
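For concreteness, a minimal textbook PGA loop is sketched below (azimuth along the first axis). It is a simplified stand-in for the estimator used in our simulations: the window width is fixed rather than adaptive, and no scatterer-quality selection is performed.

```python
import numpy as np

def pga(image, iters=5, half_window=16):
    """Minimal phase gradient autofocus sketch (azimuth along axis 0).

    Textbook PGA loop: center-shift the dominant scatterer of each range
    bin, window, estimate the phase-error gradient in the azimuth-
    frequency domain, integrate it, and remove it from the whole image."""
    img = image.copy()
    na = img.shape[0]
    for _ in range(iters):
        # 1. Circularly shift the brightest sample of each range bin to center.
        shifted = np.empty_like(img)
        for r in range(img.shape[1]):
            peak = int(np.argmax(np.abs(img[:, r])))
            shifted[:, r] = np.roll(img[:, r], na // 2 - peak)
        # 2. Window around the center (fixed width here; usually adaptive).
        shifted[: na // 2 - half_window, :] = 0
        shifted[na // 2 + half_window:, :] = 0
        # 3. Estimate the phase gradient in the azimuth-frequency domain.
        g = np.fft.fft(shifted, axis=0)
        grad = np.angle(np.sum(g[1:, :] * np.conj(g[:-1, :]), axis=1))
        phi = np.concatenate(([0.0], np.cumsum(grad)))
        phi -= np.linspace(phi[0], phi[-1], na)   # drop the linear (shift) term
        # 4. Remove the estimated error from the full image.
        spec = np.fft.fft(img, axis=0)
        img = np.fft.ifft(spec * np.exp(-1j * phi)[:, None], axis=0)
    return img

# Toy check: defocus dominant points with a known quadratic azimuth phase
# error, then verify that PGA sharpens the peaks again.
na, nr = 128, 8
clean = np.zeros((na, nr), dtype=complex)
for r in range(nr):
    clean[20 + 11 * r, r] = 1.0
quad = 10.0 * np.linspace(-1, 1, na) ** 2         # quadratic phase error (rad)
corrupt = np.fft.ifft(np.fft.fft(clean, axis=0) * np.exp(1j * quad)[:, None], axis=0)
focused = pga(corrupt)
assert np.abs(focused).max() > np.abs(corrupt).max()
```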
Then, the computational complexity of the referenced method in [34] and of the proposed method is analyzed and compared. The analysis concerns only the autofocus processing of a single sub-image. Suppose the size of one sub-image is N_a × N_r points in the azimuth and range dimensions, respectively, and the interpolation kernel length used for 2D phase error mapping is L. The referenced method comprises three range FFTs/IFFTs, two azimuth FFTs/IFFTs, three phase multiplications, N_r azimuth interpolations, and one APE estimation. Since the APE estimation is identical for the two methods, we use C_APE to represent its computational complexity, which depends on the selected estimation algorithm and the number of iterations. Thus, the computational complexity of the referenced method is
$$C_1 = 3 \times 5 N_a N_r \log_2 N_r + 2 \times 5 N_a N_r \log_2 N_a + 3 \times 6 N_a N_r + 2(2L-1) N_a N_r + C_{APE} = N_a N_r \left( 15 \log_2 N_r + 10 \log_2 N_a + 16 + 4L \right) + C_{APE}.$$
The proposed method contains six range FFTs/IFFTs, two azimuth FFTs/IFFTs, five phase multiplications, N_r azimuth interpolations, and one APE estimation, whose complexity is
$$C_2 = 6 \times 5 N_a N_r \log_2 N_r + 2 \times 5 N_a N_r \log_2 N_a + 5 \times 6 N_a N_r + 2(2L-1) N_a N_r + C_{APE} = N_a N_r \left( 30 \log_2 N_r + 10 \log_2 N_a + 28 + 4L \right) + C_{APE}.$$
Compared with the referenced method, the proposed method involves an additional pair of range spectrum shifting operations in the range-time azimuth-frequency domain, which increases the number of transitions between the range time and frequency domains as well as the number of phase multiplications. The resulting increase in computational complexity is slight and is tolerable given the enhanced focusing capability.
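Evaluating the two complexity expressions above without the shared C_APE term gives a feel for the overhead; the sub-image size below is illustrative.

```python
from math import log2

# Compare the two complexity expressions (without the shared C_APE term)
# for an illustrative sub-image size.
def c_referenced(na, nr, L):
    return na * nr * (15 * log2(nr) + 10 * log2(na) + 16 + 4 * L)

def c_proposed(na, nr, L):
    return na * nr * (30 * log2(nr) + 10 * log2(na) + 28 + 4 * L)

na, nr, L = 2048, 2048, 8
ratio = c_proposed(na, nr, L) / c_referenced(na, nr, L)
assert 1.0 < ratio < 1.6   # modest overhead, shrinking further once C_APE is added
```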
For UHR radar with centimeter-level resolution, several systems based on microwave photonic (MWP) technology have been built in recent years [42]. The simulation parameters used in this paper are set according to a real MWP SAR system published in [3]. However, the available measured airborne UHR SAR data are all in broadside mode [3,4,43]. Since MWP SAR is still at the development stage, airborne MWP SAR data are scarce, and squint data are not yet available. Although subaperture data can be used to emulate echoes acquired at a squint angle, the azimuth resolution decreases accordingly, which is undesirable for our purpose. Consequently, we present only experiments on simulated data in this paper.
In addition, UHR airborne SAR systems employ a gyro-stabilized sensor mount to effectively isolate carrier disturbances and maintain platform stability. For instance, the gyro-stabilized sensor mount Leica PAV80 exhibits a nominal root mean square error of 0.02°, which can be ignored during SAR data processing. Therefore, in this paper, we do not consider the squint angle error caused by attitude variations. Likewise, highly nonlinear flight tracks are beyond the scope of this paper.

5. Conclusions

This article proposes a new 2D wavenumber-domain autofocus method for UHR squint spotlight airborne SAR based on an improved spectrum modification strategy. Compared with existing methods, this approach considers the dependence of range defocus on the squint angle and employs an improved spectrum modification strategy that transforms the spectrum of the squint mode into a quasi-side-looking one while simultaneously achieving azimuth alignment and range tilt correction. After that, the compensation in the wavenumber domain is completed with the 2D phase error mapping. Furthermore, to account for the spatially variant motion error, the proposed method incorporates image-blocking and range-dependent motion error compensation. The effectiveness of the proposed method is verified by simulations. Future research will focus on extending the proposed method to stripmap mode.

Author Contributions

Conceptualization, M.C. and X.Q.; methodology, M.C. and X.Q.; software, M.C. and X.Q.; validation, M.C.; formal analysis, M.C.; investigation, M.C.; resources, X.Q., Y.C. and M.S.; writing—original draft preparation, M.C.; writing—review and editing, M.C., X.Q., Y.C. and M.S.; supervision, X.Q., R.L. and W.L.; funding acquisition, X.Q. and W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China under Grant No. 2018YFA0701903.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The Taylor coefficients of (12) can be calculated as follows.
$$\phi_1(K_x) = \left.\frac{\partial \Phi_e(K_x, K_y)}{\partial K_y}\right|_{K_y = K_{yc}} = -\frac{K_{yc}}{\sqrt{K_x^2 + K_{yc}^2}} \cdot \Delta R(X^*)\Big|_{K_y = K_{yc}} - \sqrt{K_x^2 + K_{yc}^2} \cdot \left.\frac{\partial \Delta R(X^*)}{\partial K_y}\right|_{K_y = K_{yc}} \qquad (A1)$$
$$\phi_2(K_x) = \frac{1}{2} \left.\frac{\partial^2 \Phi_e(K_x, K_y)}{\partial K_y^2}\right|_{K_y = K_{yc}} = -\frac{1}{2} \cdot \frac{K_x^2}{\left(K_x^2 + K_{yc}^2\right)^{3/2}} \cdot \Delta R(X^*)\Big|_{K_y = K_{yc}} - \frac{K_{yc}}{\sqrt{K_x^2 + K_{yc}^2}} \cdot \left.\frac{\partial \Delta R(X^*)}{\partial K_y}\right|_{K_y = K_{yc}} - \frac{1}{2}\sqrt{K_x^2 + K_{yc}^2} \cdot \left.\frac{\partial^2 \Delta R(X^*)}{\partial K_y^2}\right|_{K_y = K_{yc}} \qquad (A2)$$
With (8) and $K_y = \sqrt{K_r^2 - K_x^2}$, we have
$$\frac{\partial \Delta R(X^*)}{\partial K_y} = \frac{\partial \Delta R(X^*)}{\partial X^*} \cdot \frac{K_x}{K_y^2}\, r \cos\theta_0, \qquad \frac{\partial^2 \Delta R(X^*)}{\partial K_y^2} = -\frac{\partial \Delta R(X^*)}{\partial X^*} \cdot \frac{2 K_x}{K_y^3}\, r \cos\theta_0 + \frac{\partial^2 \Delta R(X^*)}{\partial X^{*2}} \cdot \left( \frac{K_x}{K_y^2}\, r \cos\theta_0 \right)^2 \qquad (A3)$$
After substituting $K_{xc} = K_{yc} \tan\theta_0$ and (A3) into (A1) and (A2), we can obtain (13) and (14).
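The chain-rule coefficients in (A3) can be checked numerically once a concrete stationary-point expression is assumed. The form X*(Ky) = x0 − (Kx/Ky)·r·cosθ0 used below is a hypothetical reconstruction consistent with the stated derivatives; the exact expression follows from (8).

```python
import numpy as np

# Finite-difference check of dX*/dKy and d2X*/dKy2 used in (A3), assuming
# the hypothetical form X*(Ky) = x0 - (Kx/Ky)*r*cos(theta0); the exact
# expression comes from (8) in the paper.
Kx, r, th0, x0 = 50.0, 885.0, np.deg2rad(10.0), 0.0

def x_star(Ky):
    return x0 - Kx / Ky * r * np.cos(th0)

Ky, h = 600.0, 0.1
d1_num = (x_star(Ky + h) - x_star(Ky - h)) / (2 * h)
d2_num = (x_star(Ky + h) - 2 * x_star(Ky) + x_star(Ky - h)) / h**2
d1_ana = Kx / Ky**2 * r * np.cos(th0)           # coefficient of dDeltaR/dX*
d2_ana = -2 * Kx / Ky**3 * r * np.cos(th0)      # first-term coefficient in (A3)
assert abs(d1_num - d1_ana) < 1e-6 * abs(d1_ana)
assert abs(d2_num - d2_ana) < 1e-4 * abs(d2_ana)
```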

References

1. Carrara, W.G.; Goodman, R.S.; Majewski, R.M. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms; Artech House: Norwood, MA, USA, 1995.
2. Cumming, I.G.; Wong, F.H. Digital Signal Processing of Synthetic Aperture Radar Data: Algorithms and Implementation; Artech House: Norwood, MA, USA, 2004.
3. Li, R.; Li, W.; Dong, Y.; Wen, Z.; Zhang, H.; Sun, W.; Yang, J.; Zeng, H.; Deng, Y.; Luan, Y.; et al. PFDIR—A Wideband Photonic-Assisted SAR System. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 4333–4346.
4. Deng, Y.; Xing, M.; Sun, G.C.; Liu, W.; Li, R.; Wang, Y. A Processing Framework for Airborne Microwave Photonic SAR With Resolution Up To 0.03 m: Motion Estimation and Compensation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–17.
5. Chen, M.; Qiu, X.; Li, R.; Li, W.; Fu, K. Analysis and Compensation for Systematical Errors in Airborne Microwave Photonic SAR Imaging by 2D Autofocus. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2023, 16, 2221–2236.
6. Guo, Y.; Wang, P.; Zhou, X.; He, T.; Chen, J. A Decoupled Chirp Scaling Algorithm for High-Squint SAR Data Imaging. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17.
7. Chen, X.; Hou, Z.; Dong, Z.; He, Z. Performance Analysis of Wavenumber Domain Algorithms for Highly Squinted SAR. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2023, 16, 1563–1575.
8. Chen, J.; Xing, M.; Yu, H.; Liang, B.; Peng, J.; Sun, G.C. Motion Compensation/Autofocus in Airborne Synthetic Aperture Radar: A Review. IEEE Geosci. Remote Sens. Mag. 2022, 10, 185–206.
9. Jin, M.Y.; Wu, C. A SAR correlation algorithm which accommodates large-range migration. IEEE Trans. Geosci. Remote Sens. 1984, GE-22, 592–597.
10. Raney, R.K.; Runge, H.; Bamler, R.; Cumming, I.G.; Wong, F.H. Precision SAR processing using chirp scaling. IEEE Trans. Geosci. Remote Sens. 1994, 32, 786–799.
11. Wong, F.H.; Cumming, I.G. Error Sensitivities of a Secondary Range Compression Algorithm for Processing Squinted Satellite SAR Data. In Proceedings of the IGARSS, Vancouver, BC, Canada, 10–14 July 1989; Volume 4, pp. 2584–2587.
12. Moreira, A.; Huang, Y. Airborne SAR processing of highly squinted data using a chirp scaling approach with integrated motion compensation. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1029–1040.
13. Davidson, G.W.; Cumming, I.G.; Ito, M.R. A chirp scaling approach for processing squint mode SAR data. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 121–133.
14. An, D.; Huang, X.; Jin, T.; Zhou, Z. Extended Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Data Focusing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3595–3609.
15. Sun, G.; Xing, M.; Liu, Y.; Sun, L.; Bao, Z.; Wu, Y. Extended NCS Based on Method of Series Reversion for Imaging of Highly Squinted SAR. IEEE Geosci. Remote Sens. Lett. 2011, 8, 446–450.
16. Munson, D.; O'Brien, J.; Jenkins, W. A tomographic formulation of spotlight-mode synthetic aperture radar. Proc. IEEE 1983, 71, 917–925.
17. Cafforio, C.; Prati, C.; Rocca, F. SAR data focusing using seismic migration techniques. IEEE Trans. Aerosp. Electron. Syst. 1991, 27, 194–207.
18. Samczynski, P.; Kulpa, K.S. Coherent MapDrift Technique. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1505–1517.
19. Zhang, L.; Hu, M.; Wang, G.; Wang, H. Range-Dependent Map-Drift Algorithm for Focusing UAV SAR Imagery. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1158–1162.
20. Wang, J.; Liu, X. SAR Minimum-Entropy Autofocus Using an Adaptive-Order Polynomial Model. IEEE Geosci. Remote Sens. Lett. 2006, 3, 512–516.
21. Gao, Y.; Yu, W.; Liu, Y.; Wang, R.; Shi, C. Sharpness-Based Autofocusing for Stripmap SAR Using an Adaptive-Order Polynomial Model. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1086–1090.
22. Wahl, D.E.; Eichel, P.H.; Ghiglia, D.C.; Jakowatz, C.V. Phase gradient autofocus—a robust tool for high resolution SAR phase correction. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 827–835.
23. Van Rossum, W.L.; Otten, M.P.G.; Van Bree, R.J.P. Extended PGA for range migration algorithms. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 478–488.
24. Zhu, D.; Jiang, R.; Mao, X.; Zhu, Z. Multi-Subaperture PGA for SAR Autofocusing. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 468–488.
25. Zeng, T.; Wang, R.; Li, F. SAR Image Autofocus Utilizing Minimum-Entropy Criterion. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1552–1556.
26. Morrison, R.L.; Do, M.N.; Munson, D.C. SAR Image Autofocus by Sharpness Optimization: A Theoretical Study. IEEE Trans. Image Process. 2007, 16, 2309–2321.
27. Doerry, A.W. Autofocus Correction of Excessive Migration in Synthetic Aperture Radar Images; Technical Report; Sandia National Laboratories: Albuquerque, NM, USA; Livermore, CA, USA, 2004.
28. Mao, X.; He, X.; Li, D. Knowledge-Aided 2D Autofocus for Spotlight SAR Range Migration Algorithm Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5458–5470.
29. Mao, X.; Zhu, D. Two-dimensional Autofocus for Spotlight SAR Polar Format Imagery. IEEE Trans. Comput. Imag. 2016, 2, 524–539.
30. Mao, X.; Ding, L.; Zhang, Y.; Zhan, R.; Li, S. Knowledge-Aided 2-D Autofocus for Spotlight SAR Filtered Backprojection Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9041–9058.
31. Zhang, L.; Sheng, J.; Xing, M.; Qiao, Z.; Xiong, T.; Bao, Z. Wavenumber-Domain Autofocusing for Highly Squinted UAV SAR Imagery. IEEE Sens. J. 2012, 12, 1574–1588.
32. Xu, G.; Xing, M.; Zhang, L.; Bao, Z. Robust Autofocusing Approach for Highly Squinted SAR Imagery Using the Extended Wavenumber Algorithm. IEEE Trans. Geosci. Remote Sens. 2013, 51, 5031–5046.
33. Ran, L.; Liu, Z.; Li, T.; Xie, R.; Zhang, L. Extension of Map-Drift Algorithm for Highly Squinted SAR Autofocus. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2017, 10, 4032–4044.
34. Lin, H.; Chen, J.; Xing, M.; Chen, X.; You, D.; Sun, G. 2-D Frequency Autofocus for Squint Spotlight SAR Imaging With Extended Omega-K. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12.
35. Zhang, L.; Wang, G.; Qiao, Z.; Wang, H.; Sun, L. Two-Stage Focusing Algorithm for Highly Squinted Synthetic Aperture Radar Imaging. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5547–5562.
36. Chen, J.; Liang, B.; Zhang, J.; Yang, D.G.; Deng, Y.; Xing, M. Efficiency and Robustness Improvement of Airborne SAR Motion Compensation With High Resolution and Wide Swath. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
37. Chen, J.; Liang, B.; Yang, D.G.; Zhao, D.J.; Xing, M.; Sun, G.C. Two-Step Accuracy Improvement of Motion Compensation for Airborne SAR With Ultrahigh Resolution and Wide Swath. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7148–7160.
38. Chen, J.; Xing, M.; Sun, G.C.; Li, Z. A 2-D Space-Variant Motion Estimation and Compensation Method for Ultrahigh-Resolution Airborne Stepped-Frequency SAR With Long Integration Time. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6390–6401.
39. Zhu, D.; Wang, L.; Yu, Y.; Tao, Q.; Zhu, Z. Robust ISAR Range Alignment via Minimizing the Entropy of the Average Range Profile. IEEE Geosci. Remote Sens. Lett. 2009, 6, 204–208.
40. Chan, H.L.; Yeo, T.S. Noniterative quality phase-gradient autofocus (QPGA) algorithm for spotlight SAR imagery. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1531–1539.
41. de Macedo, K.A.C.; Scheiber, R.; Moreira, A. An Autofocus Approach for Residual Motion Errors With Application to Airborne Repeat-Pass SAR Interferometry. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3151–3162.
42. Panda, S.S.S.; Panigrahi, T.; Parne, S.R.; Sabat, S.L.; Cenkeramaddi, L.R. Recent Advances and Future Directions of Microwave Photonic Radars: A Review. IEEE Sens. J. 2021, 21, 21144–21158.
43. Li, R.; Li, W.; Dong, Y.; Wang, B.; Wen, Z.; Luan, Y.; Yang, Z.; Yang, X.; Yang, J.; Sun, W.; et al. An Ultrahigh-Resolution Continuous Wave Synthetic Aperture Radar with Photonic-Assisted Signal Generation and Dechirp Processing. In Proceedings of the 2020 17th European Radar Conference (EuRAD), Utrecht, The Netherlands, 10–15 January 2021; pp. 13–16.
Figure 1. Geometric model of squint SAR.
Figure 2. Trajectory deviation of the radar platform.
Figure 3. Range envelopes in the range-Doppler domain for different squint angles. (a) 0°, (b) 10°, (c) 20°, and (d) 30°.
Figure 4. The variations of the spectrum support region. (a) The original support region of a target whose squint angle at the synthetic aperture center is θ c . (b) The support region after azimuth spectrum shifting. (c) The support region after range spectrum shifting.
Figure 5. Flowchart of the proposed 2D autofocus method. The variations of the 2D spectrum of three point targets during processing are shown on both sides.
Figure 6. Three azimuth-distributed point targets with adjacent intervals of 15 m.
Figure 7. The differences of the actual 2D phase error between (a) the targets A 1 and A 2 , (b) the targets A 3 and A 2 .
Figure 8. (a) The actual and (b) estimated 2D phase error of the middle target, A 2 .
Figure 9. (a) Estimation error of the middle target, A 2 . (b) Side view of (a).
Figure 10. (ac) are the enlarged contour images of the three azimuth-distributed point targets after 2D phase error correction. (df) are the range profiles corresponding to (ac). (gi) are the azimuth profiles corresponding to (ac).
Figure 11. 2D Spectrum. (a) Before processing. (b) After processing with the method in [34]. (c) After azimuth processing of the proposed method. (d) After range processing of the proposed method.
Figure 12. Range-Doppler domain of the image after azimuth spectrum shifting by (a) the method in [34] and (b) the proposed method.
Figure 13. Processing results and enlarged contour images of three azimuth-distributed point targets with (a) the alignment method in [34] and (b) the proposed method. The sizes of (a,b) are both 2364 × 1392 pixels.
Figure 14. Seven range-distributed point targets with adjacent intervals of 5 m.
Figure 15. (a) The residual APE ϕ 0 ( K x ) and (b) RESE ϕ 1 ( K x ) of the seven range-distributed point targets after 2D compensation.
Figure 16. (a) The residual APE ϕ 0 ( K x ) and (b) RESE ϕ 1 ( K x ) of the seven range-distributed point targets after range-variant 1D autofocus.
Figure 17. Eighty-one point targets evenly distributed in the ( x , y ) direction with adjacent intervals of 10 m in both directions.
Figure 18. (a) The original image of a large scenario processed by Omega-K. (b) Stitched fine-focused result processed by the proposed method. The sizes of (a,b) are both 5910 × 4924 pixels.
Table 1. Comparison of current autofocus methods for UHR squint SAR.

Time-Domain Methods [31,32,33]
  Characteristics:
  • Need deramp processing and azimuth position estimation;
  • Compensation is applied to the raw data.
  Limitations:
  • The estimation error of the azimuth position affects the subsequent estimation of the phase error.

Frequency-Domain Methods [34]
  Characteristics:
  • Avoid deramp processing and azimuth position estimation;
  • Compensation is applied to the coarse-focused images.
  Limitations:
  • Lack of analysis of the dependence of range defocus on the squint angle;
  • The approximation adopted during azimuth alignment deteriorates the compensation result.
Table 2. Simulation parameters.

Parameter                     Value
Carrier frequency             15.14 GHz
Bandwidth                     5.72 GHz
Pulse width                   10 μs
Sampling frequency            2500 MHz
Synthetic aperture length     296.34 m
Pulse repetition frequency    3600 Hz
Center slant range            885 m
Velocity                      67 m/s
Table 3. Image quality index of the three azimuth-distributed point targets.

            Referenced Method    Proposed Method
Entropy     5.9719               4.4266
Contrast    13.3200              17.5166
Table 4. Index measurement results of the focused targets.

        Range                                 Azimuth
Point   IRW (m)   PSLR (dB)   ISLR (dB)      IRW (m)   PSLR (dB)   ISLR (dB)
A       0.0236    −13.116     −10.442        0.0260    −12.535     −12.823
B       0.0238    −13.053     −10.298        0.0268    −12.420     −12.612
C       0.0239    −13.076     −10.281        0.0278    −12.358     −12.529

Share and Cite

Chen, M.; Qiu, X.; Cheng, Y.; Shang, M.; Li, R.; Li, W. Two-Dimensional Autofocus for Ultra-High-Resolution Squint Spotlight Airborne SAR Based on Improved Spectrum Modification. Remote Sens. 2024, 16, 2158. https://doi.org/10.3390/rs16122158

