Article

Two-Dimensional Spatial Variation Analysis and Correction Method for High-Resolution Wide-Swath Spaceborne Synthetic Aperture Radar (SAR) Imaging

College of Electronic Science and Technology, National University of Defense Technology, Deya Road No. 109, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(7), 1262; https://doi.org/10.3390/rs17071262
Submission received: 21 February 2025 / Revised: 22 March 2025 / Accepted: 31 March 2025 / Published: 2 April 2025

Abstract:
With the development and application of spaceborne Synthetic Aperture Radar (SAR), higher resolution and a wider swath have become significant demands. However, as the resolution increases and the swath widens, the two-dimensional (2D) spatial variation between different targets in the scene and the radar becomes very pronounced, severely affecting the high-precision focusing and high-quality imaging of spaceborne SAR. In previous studies on the correction of two-dimensional spatial variation in spaceborne SAR, either the models were not accurate enough or the computational efficiency was low, limiting the application of corresponding algorithms. In this paper, we first establish a slant range model and a signal model based on the zero-Doppler moment according to the spaceborne SAR geometry, thereby significantly reducing the impact of azimuth spatial variation in two-dimensional spatial variation. Subsequently, we propose a Curve-Sphere Model (CUSM) to describe the ground observation geometry of spaceborne SAR, and based on this, we establish a more accurate theoretical model and quantitative description of two-dimensional spatial variation. Next, through modeling and simulation, we conduct an in-depth analysis of the impact of two-dimensional spatial variation on spaceborne SAR imaging, obtaining corresponding constraints and thresholds and concluding that in most cases, only one type of azimuth spatial variation needs to be considered, thereby greatly reducing the demand and difficulty of two-dimensional spatial variation correction. Relying on these, we propose a two-dimensional spatial variation correction method that combines range blocking and azimuth nonlinear chirp scaling processing and analyze its scalability to be applicable to more general cases. Finally, the effectiveness and applicability of the proposed method are validated through both simulation experiments and real data experiments.

1. Introduction

Spaceborne SAR has long garnered extensive attention and in-depth research due to its all-weather and all-day observation capabilities [1,2,3,4,5,6,7,8,9,10]. With the increasing application demands in both the military and civilian fields, higher resolution and larger mapping bandwidth have become important development directions for spaceborne SAR. In recent years, many spaceborne SAR systems have achieved decimeter-level resolution capabilities [11,12], with the Capella-9 and Capella-10 satellites even reaching centimeter-level theoretical resolution capabilities in the azimuth dimension [13].
However, as the resolution increases and the observation range expands, a scenario generally referred to as High Resolution, Wide Swath (HRWS), the difficulty of precise imaging for spaceborne SAR also gradually increases. On one hand, higher resolution implies lower error tolerance, necessitating the use of more accurate geometric and signal models. On the other hand, in the case of high resolution, a larger observation range will make the effects of spatial variation more severe, and this spatial variation exists in both the range and azimuth dimensions, thus requiring the consideration of two-dimensional spatial variation.
In HRWS spaceborne SAR, the sources of two-dimensional spatial variation mainly stem from three aspects. First, as the synthetic aperture increases, the satellite trajectory over the entire imaging time cannot be approximated by a straight line, and the satellite’s velocity is no longer constant. Second, with the increase in imaging time, the Earth’s rotation must be taken into account, which causes the satellite trajectory to become a non-coplanar curved line in the Earth-Centered, Earth-Fixed (ECEF) coordinate system. Third, as the observation range expands, the curvature of the Earth’s surface also needs to be considered, leading to higher-order characteristics in the two-dimensional spatial variation, especially in the variation in spatial range.
The development of spaceborne SAR imaging algorithms has always been accompanied by research on the analysis and correction methods of two-dimensional spatial variation. In fact, if imaging efficiency is not considered, time-domain imaging algorithms such as the back-projection algorithm (BPA) can effectively address the issue of two-dimensional spatial variation. However, in the HRWS scenario, the data volume is so large that even the optimized and accelerated BPA remains highly inefficient. Therefore, frequency-domain algorithms that can balance imaging quality and computational efficiency have become a continuous exploration goal.
In early studies, due to the lower resolution, spaceborne SAR could often be processed using the geometric model of airborne SAR, thereby neglecting the azimuth spatial variation issues inherent in spaceborne SAR and only considering the impact of spatial range variation. However, with the improvement in resolution, the increase in synthetic aperture length necessitates the consideration of azimuth spatial variation effects in spaceborne SAR imaging. Traditional algorithms such as the chirp scaling algorithm (CSA), the range-Doppler algorithm (RDA), and the omega-K algorithm ( ω KA) are no longer applicable.
Subsequently, a large number of studies on the azimuth spatial variation problem of spaceborne SAR have been conducted, and many imaging algorithms have been proposed, but the imaging results are not very satisfactory. Reference [14] divides the full aperture into several sub-apertures to ensure that there is no azimuth spatial variation within a sub-aperture, but the data stitching problem of sub-apertures is very challenging. References [15,16] achieve full-aperture imaging based on sub-aperture correction preprocessing and azimuth resampling, but their signal models still rely on the inaccurate assumption that the radar moves along a straight line. References [17,18] implement two-dimensional spatial variation decoupling to remove azimuth spatial variation based on time-domain and frequency-domain resampling (interpolation) processing, but the computational load of resampling (interpolation) is very large, and the interpolation accuracy is limited by the length of the interpolation kernel.
In recent years, several more precise imaging algorithms have been proposed to accommodate higher resolution and larger swathes. Reference [9] introduced a method that involves coarse imaging first, followed by blocking and iterative imaging, and finally, the fine imaging of small blocks that no longer have spatial variation. Although the algorithm is conceptually simple, it effectively addresses the two-dimensional spatial variation problem, and the increased computational load is still much smaller than that of BPA. However, this algorithm does not provide a basis for block segmentation, the iterative method lacks a clear termination condition, and the efficiency is relatively low. Reference [10] proposed an imaging algorithm based on spherical geometry, which corrects the actual satellite trajectory to a circle rather than a straight line, as is the case in most of the literature, thereby achieving two-dimensional spatial variation correction through motion compensation. However, this algorithm requires two-dimensional interpolation, has low efficiency, and approximates the Earth as a perfect sphere.
In summary, there is almost no literature that provides precise qualitative analysis and quantitative description of the two-dimensional spatial variation characteristics of HRWS spaceborne SAR, and there is also no algorithm that can simply and efficiently remove the two-dimensional spatial variation to achieve accurate focusing.
Therefore, this paper first conducts an in-depth analysis of the two-dimensional spatial variation based on the spaceborne SAR imaging geometry and establishes a quantitative description model. Subsequently, the two-dimensional spatial variation correction is achieved based on range blocking and azimuth nonlinear chirp scaling (NCS) processing without the need for any interpolation or iterative processing. The remainder of this paper is organized as follows: Section 2 establishes the spaceborne SAR geometric model and signal model; Section 3 provides a thorough theoretical analysis of the two-dimensional spatial variation in HRWS spaceborne SAR and establishes a quantitative description; Section 4 details the impact of two-dimensional spatial variation on imaging and identifies the constraints of these effects; Section 5 proposes a two-dimensional spatial variation correction method based on range blocking and azimuth NCS and discusses its extensibility; Section 6 demonstrates the algorithm’s effectiveness through both simulation experiments and real data validation; and Section 7 concludes the paper.

2. Geometric and Signal Models of Spaceborne SAR

SAR achieves high-resolution imaging capability through the relative motion between the radar and targets. To obtain higher resolution within the same imaging time duration, azimuth beam scanning is often required, i.e., operating in spotlight or sliding mode. To maximize the range swath width, azimuth multi-channel antenna configurations are typically employed. Since azimuth multi-channel processing falls beyond the scope of this paper, we neglect the errors introduced by this processing and assume the received signal can be equivalent to a single-channel received signal, whose Pulse Repetition Frequency (PRF) already satisfies the imaging requirements for SAR beam scanning modes.
When the radar transmits a linear frequency-modulated (LFM) signal, and under the “stop-and-go” assumption, for a ground point target $P\left(R_0,t_p\right)$, the received signal after quadrature demodulation can be expressed as
$$\begin{aligned} ss_{receive}\left(t_r,t_a;R_0,t_p\right) ={}& w_r\left(t_r-\frac{2R\left(t_a;R_0,t_p\right)}{c}\right)w_a\left(t_a-t_c\right) \\ &\times\sigma\left(R_0,t_p\right)\exp\left(-j\frac{4\pi R\left(t_a;R_0,t_p\right)}{\lambda}\right)\exp\left[j\pi K_r\left(t_r-\frac{2R\left(t_a;R_0,t_p\right)}{c}\right)^2\right] \end{aligned}$$
where $\lambda$ is the wavelength; $c$ is the speed of light; $K_r$ is the chirp rate of the transmitted signal; $t_r$ and $t_a$ represent the fast time and slow time, respectively; $w_r$ and $w_a$ are the range and azimuth window functions; and $t_p$ and $R_0$ denote the zero-Doppler time and corresponding closest slant range of point target P. $R\left(t_a;R_0,t_p\right)$ represents the slant range history between the radar and target P, and $\sigma\left(R_0,t_p\right)$ represents the Radar Cross-Section (RCS) of target P. The azimuth time instant $t_c$ corresponds to the moment when the beam center illuminates target P and arises from the variation in the azimuth beam direction. The observation geometry of spaceborne SAR is illustrated in Figure 1 (taking the sliding spotlight mode as a representative example).
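For readers who want to experiment with the signal model, the following Python sketch generates the echo of Equation (1) for a single ideal point target. It is only an illustration: the numerical parameters, the stand-in range history, and the rectangular envelopes are assumptions and are not taken from Table 1.

```python
import numpy as np

# Minimal sketch of the received signal in Equation (1) for one ideal point target,
# under the "stop-and-go" assumption. All numbers are illustrative placeholders and
# the slant-range history is a simple stand-in for Eq. (2).
c, fc = 299792458.0, 9.6e9
lam   = c / fc                 # wavelength
Kr    = 5.0e12                 # transmit chirp rate [Hz/s]
Tp    = 10e-6                  # pulse duration [s]
prf   = 2000.0                 # pulse repetition frequency [Hz]
fs    = 60e6                   # range sampling rate [Hz]
R0, tp, v = 6.0e5, 0.0, 7500.0 # closest range, zero-Doppler time, equivalent velocity

ta = np.arange(-0.1, 0.1, 1/prf)               # slow time [s]
tr = np.arange(-Tp, Tp, 1/fs) + 2*R0/c         # fast time centred on the target [s]

R   = np.sqrt(R0**2 + (v*(ta - tp))**2)        # toy slant-range history R(ta; R0, tp)
tau = tr[None, :] - 2*R[:, None]/c             # tr - 2R/c for every pulse
wr  = (np.abs(tau) <= Tp/2).astype(float)      # rectangular range envelope w_r
wa  = 1.0                                      # azimuth window w_a neglected (sigma = 1)

echo = wr * wa * np.exp(-1j*4*np.pi*R[:, None]/lam) * np.exp(1j*np.pi*Kr*tau**2)
print(echo.shape)                              # (n_pulses, n_range_samples)
```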
In Equation (1), the slant range history R t a ; R 0 , t p has the most significant impact on subsequent focusing processing, and thus, its precise description directly affects imaging quality. In early literature, the slant range history model was typically modified based on the hyperbolic model, but its essence still relied on the premise of linear satellite motion. For HRWS scenarios, satellites can no longer be assumed to follow linear motion, and their curved trajectories must be considered. Therefore, recent literature generally establishes more precise slant range models using higher-order Taylor expansions and accordingly develops descriptive two-dimensional spatial variation equations.
It should be noted that many papers (such as [19,20]) expand the slant range history as a polynomial of t a t c , thereby defining the two-dimensional spatial variation as functions of the azimuth beam center plane crossing time t c and the corresponding slant range R c . However, this definition approach for two-dimensional spatial variation is insufficiently accurate. Figure 2a and Figure 2b, respectively, illustrate the intersection lines between the azimuth beam center plane and the observed scene in sliding spotlight mode and spotlight mode. The figures reveal that different targets sharing the same t c exhibit “tilted” spatial distributions within the scene, which will lead to severe azimuth spatial variation when using t c as the variable. Consequently, subsequent imaging processing becomes extremely challenging.
In this paper, we define two-dimensional spatial variation as functions of zero-Doppler time t p and closest slant range R 0 . Targets sharing the same t p will be distributed along the red line shown in Figure 2c, thereby reducing azimuth spatial variation and facilitating precise imaging. Furthermore, describing targets and two-dimensional spatial variation using zero-Doppler time t p and closest slant range R 0 offers two additional advantages. First, the two-dimensional spatial variation originates solely from the geometric distribution of targets, independent of imaging modes. This interpretation better aligns with the fundamental definition of spatial variation and enables uniform processing across different imaging methods. Second, the imaged targets will be accurately compressed at position R 0 , t p , which better matches the geographic distribution of observation scenarios and eliminates the need for additional geometric distortion correction [21].
Figure 3 illustrates the target layout with a resolution of 0.5 × 0.5 m and an observation area of 10 × 10 km, with simulation parameters detailed in Table 1. The targets depicted in the figure are projections onto the Earth’s surface from three-dimensional space, centered around a reference target. The horizontal axis represents the azimuth position offset of the target relative to the reference target, while the vertical axis indicates the range position offset of the target relative to the reference target. (a) shows the target layout results based on the sliding-spotlight mode at the moment of beam center, (b) presents the target layout results based on the spotlight mode at the moment of beam center, and (c) depicts the target layout results for both sliding-spotlight mode and spotlight mode at zero-Doppler moments. The blue circles in the figure represent targets obtained from the sliding-spotlight mode, while the red stars represent targets obtained from the spotlight mode. The simulation results are consistent with the previous analysis.
In this paper, we first construct a slant range model based on the sixth-order Taylor expansion:
$$ R\left(t_a;R_0,t_p\right) \approx R_0+\sum_{n=1}^{6}k_n\left(t_a-t_p\right)^n $$
where $k_n$ denotes the n-th order coefficient of $\left(t_a-t_p\right)$, and since $t_p$ represents the zero-Doppler time instant, $k_1$ is always zero and is thus neglected in subsequent derivations. In high-resolution spaceborne SAR operations, due to orbital curvature, Earth’s rotation, and Earth’s curvature, the expansion coefficients $k_n$ for different targets become functions of $R_0$ and $t_p$ (detailed descriptions and representations will be provided in the next section).
In most cases, a sixth-order model is sufficiently accurate, and the model order can be increased as needed without affecting subsequent analysis and processing workflows. Figure 4 shows the phase error between the sixth-order slant range model and the actual slant range, with the simulation parameters listed in Table 1. (a) presents the simulation results for an azimuth resolution of 0.2 m and an azimuth swath of 20 km, while (b) shows the simulation results for a resolution of 0.1 m and an azimuth swath of 10 km. Different lines represent different targets with the same closest slant range but different azimuth positions. As shown, the phase errors of all targets are significantly below the threshold of $0.25\pi$ [22].
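The coefficients $k_n$ in Equation (2) can also be obtained numerically. The sketch below, using an assumed placeholder range history rather than the Table 1 geometry, fits the sixth-order polynomial by least squares and checks the residual phase error against the $0.25\pi$ threshold used in Figure 4.

```python
import numpy as np

# Sketch of the model-order check behind Figure 4: fit Eq. (2) to a sampled slant-range
# history and verify the residual phase error stays below 0.25*pi. The trajectory and
# target used here are placeholders, not the Table 1 geometry.
lam = 0.03                                   # wavelength [m] (assumed)
ta  = np.linspace(-2.0, 2.0, 2001)           # slow time over the synthetic aperture [s]
tp  = 0.3                                    # zero-Doppler time of the target [s]

# Placeholder "true" slant range history (replace with the CUSM-based R(ta; R0, tp)).
R0 = 6.0e5
R_true = np.sqrt(R0**2 + (7500.0*(ta - tp))**2) + 0.05*(ta - tp)**3

# Least-squares fit of R0 + sum_{n=2..6} k_n (ta - tp)^n  (k_1 = 0 at zero-Doppler).
powers = np.arange(2, 7)
A = (ta - tp)[:, None] ** powers[None, :]
k, *_ = np.linalg.lstsq(A, R_true - R0, rcond=None)
R_model = R0 + A @ k

phase_err = 4*np.pi/lam * np.abs(R_true - R_model)
print("k2..k6 =", k)
print("max phase error / pi =", phase_err.max()/np.pi, "(threshold 0.25)")
```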
By employing the series-inversion method, the corresponding two-dimensional spectrum of Equation (1) can be obtained. Neglecting amplitude and window function effects, it can be expressed as
$$\begin{aligned} SS_{receive}\left(f_r,f_a;R_0,t_p\right) ={}& \exp\left(-j\frac{\pi f_r^2}{K_r}\right)\exp\left(-j\frac{4\pi\left(f_c+f_r\right)}{c}R_0\right)\exp\left(-j2\pi t_pf_a\right) \\ &\times\exp\left(j\frac{\pi c}{f_c+f_r}\frac{1}{4k_2}f_a^2\right)\exp\left(j\frac{\pi c^2}{\left(f_c+f_r\right)^2}\frac{k_3}{16k_2^3}f_a^3\right)\times\exp\left(j\frac{\pi c^3}{\left(f_c+f_r\right)^3}\frac{9k_3^2-4k_2k_4}{256k_2^5}f_a^4\right) \\ &\times\exp\left(j\frac{\pi c^4}{\left(f_c+f_r\right)^4}\frac{27k_3^3-24k_2k_3k_4+4k_2^2k_5}{1024k_2^7}f_a^5\right) \\ &\times\exp\left(j\frac{\pi c^5}{\left(f_c+f_r\right)^5}\frac{189k_3^4-252k_2k_3^2k_4+32k_2^2k_4^2+60k_2^2k_3k_5-8k_2^3k_6}{8192k_2^9}f_a^6\right) \end{aligned}$$
By further organizing it as a function of range frequency, it can be expressed as
$$\begin{aligned} SS_{receive}\left(f_r,f_a;R_0,t_p\right) ={}& \exp\left(-j\frac{\pi f_r^2}{K_r}\right)\exp\left(-j\frac{4\pi\left(f_c+f_r\right)}{c}R_0\right)\exp\left(-j2\pi t_pf_a\right) \\ &\times\exp\left[j\phi_0\left(f_a;R_0,t_p\right)\right]\exp\left[j\phi_1\left(f_a;R_0,t_p\right)f_r\right]\exp\left[j\phi_2\left(f_a;R_0,t_p\right)f_r^2\right] \\ &\times\exp\left[j\phi_3\left(f_a;R_0,t_p\right)f_r^3\right]\exp\left[j\sum_{N>3}\phi_N\left(f_a;R_0,t_p\right)f_r^N\right] \end{aligned}$$
In Equation (4), the first exponential term represents the range modulation of the transmitted signal, the second exponential term denotes the phase information and range position of the target, the third exponential term indicates the azimuth position, the fourth exponential term represents the azimuth modulation, and the fifth exponential term accounts for the range cell migration (RCM). The sixth, seventh, and eighth exponential terms correspond to the second-order, third-order, and higher-order information of the range frequency, respectively. The specific expressions for $\phi_0$, $\phi_1$, $\phi_2$, and $\phi_3$ can be found in [22].
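The series-inversion coefficients appearing in Equation (3) can be cross-checked symbolically. The following sympy sketch reproduces the $f_a^4$ coefficient via the usual stationary-phase route; it is a generic verification under simplified assumptions, not the authors' derivation script.

```python
import sympy as sp

# Cross-check (up to the fa^4 term) of the series-reversion coefficients in Equation (3).
lam, fa = sp.symbols('lambda f_a', real=True, positive=True)
tau = sp.symbols('tau', real=True)
k2, k3, k4 = sp.symbols('k_2 k_3 k_4', real=True, positive=True)

# Azimuth phase of the point-target signal (range terms beyond R0) plus the FT kernel.
phase = -4*sp.pi/lam*(k2*tau**2 + k3*tau**3 + k4*tau**4) - 2*sp.pi*fa*tau

# Stationary point: invert 2*k2*tau + 3*k3*tau^2 + 4*k4*tau^3 = -lam*fa/2 by reversion.
A = -lam*fa/2
c1 = 1/(2*k2)
c2 = -3*k3/(8*k2**3)
c3 = (9*k3**2 - 4*k2*k4)/(16*k2**5)
tau_star = c1*A + c2*A**2 + c3*A**3

spec_phase = sp.expand(phase.subs(tau, tau_star))
print(sp.simplify(spec_phase.coeff(fa, 4)))
# expected: pi*lambda**3*(9*k_3**2 - 4*k_2*k_4)/(256*k_2**5)
```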

3. Theoretical Analysis of Two-Dimensional Spatial Variation

In Reference [22], we proposed the Circle-Sphere Model (CSM) to describe the observation geometry of spaceborne SAR and conducted a theoretical analysis of the spatial range variation based on this model. The CSM was derived under the assumption of no spatial azimuth variation, which remains applicable for high-resolution wide-swath scenarios with a 10 km swath width and a 0.2 m resolution. However, as the resolution further increases and the swath width further expands, the CSM will no longer be applicable. Therefore, in this paper, we first establish a more generalized spaceborne SAR geometric model, the Curve-Sphere Model (CUSM), and subsequently perform theoretical analysis and provide a quantitative description of two-dimensional spatial variation based on this model.

3.1. Curve-Sphere Model

The CUSM treats satellite orbits as general three-dimensional curves while still approximating the Earth as a standard sphere. To facilitate theoretical analysis, we assume the satellite and Earth operate within a two-body model, neglecting orbital deviations caused by perturbations. According to the two-body model in the Earth-Centered Inertial (ECI) coordinate system, the satellite follows a periodic elliptical orbit. Its instantaneous position is determined by six classical orbital elements: semi-major axis a, eccentricity e, orbital inclination i, longitude of the ascending node Ω , argument of periapsis ω , and true anomaly f. Figure 5a illustrates the geometric relationship between the satellite and Earth in the ECI frame, where the green solid line represents the Earth’s surface and the red solid line depicts the satellite’s orbital path.
In the imaging geometry of spaceborne SAR, to maintain the observation scene fixed relative to the satellite, it is necessary to transform the coordinate system to the Earth-Centered, Earth-Fixed (ECEF) frame. Within the ECEF framework, influenced by Earth’s rotation, the satellite trajectory no longer maintains an elliptical form but rather becomes a non-coplanar three-dimensional curve, which significantly complicates the quantitative description of two-dimensional spatial variation characteristics. Figure 5b illustrates the geometric relationship between the satellite and Earth in the ECEF frame. The green solid line represents Earth’s surface, while the red solid line depicts the satellite orbit. At the azimuth central time instant ( t a = 0 ), the black dot indicates a target (denoted as P R 0 , 0 ) located at the intersection of the radar’s zero-Doppler plane and the ground surface with slant range R 0 . The target’s look angle relative to the satellite is θ off , with the blue solid line representing the target-to-satellite slant range and the purple solid line indicating Earth’s radius R e .
Based on this, we have established the CUSM, where the satellite trajectory is precisely modeled with the Earth assumed to be a perfect sphere. Considering the Earth’s immense size and the operational characteristics of HRWS spaceborne SAR in low Earth orbit (LEO) scenarios, where the side length of the observation area is measured only on the order of hundreds of kilometers, it is theoretically acceptable to approximate the Earth as either a perfect sphere or a local spherical surface during analytical investigations.
In the process of spaceborne SAR imaging, the orbital elements and the radar look angle at the azimuth center time are typically known. Therefore, in CUSM, we can express both the satellite trajectory and target position as functions of known quantities. Consequently, the slant range history can also be represented as a function of these known parameters, thereby establishing a descriptive two-dimensional spatial variation theoretical model.

3.2. Satellite Trajectory

According to Kepler’s equations, the position coordinates of the satellite in ECI are expressed as
$$ P_{eci} = r_s\cos\left(f\right)\mathbf{P}+r_s\sin\left(f\right)\mathbf{Q} $$
where
$$ \mathbf{P} = \begin{bmatrix}\cos\Omega\cos\omega-\sin\Omega\sin\omega\cos i\\ \sin\Omega\cos\omega+\cos\Omega\sin\omega\cos i\\ \sin\omega\sin i\end{bmatrix},\quad \mathbf{Q} = \begin{bmatrix}-\cos\Omega\sin\omega-\sin\Omega\cos\omega\cos i\\ -\sin\Omega\sin\omega+\cos\Omega\cos\omega\cos i\\ \cos\omega\sin i\end{bmatrix},\quad r_s = \frac{a\left(1-e^2\right)}{1+e\cos f} $$
The transformation formula for converting a satellite’s position coordinates from ECI to ECEF is expressed as
$$ P_{ecf} = \begin{bmatrix}\cos\phi & \sin\phi & 0\\ -\sin\phi & \cos\phi & 0\\ 0 & 0 & 1\end{bmatrix}P_{eci} $$
where $\phi = \phi_0+\omega_et_a$, $t_a$ is the azimuth time, $\omega_e$ is the Earth’s rotation rate, $\phi_0$ represents the rotation angle between ECI and ECEF at the central azimuth time, and $\phi$ is the rotation angle between ECI and ECEF at time $t_a$. Without loss of generality, we can set $\Omega = 0$ and $\phi_0 = 0$; then, the satellite coordinates in ECEF can be expressed as
$$ P_{ecf} = \frac{a\left(1-e^2\right)\cos f}{1+e\cos f}\begin{bmatrix}\cos\omega\cos\phi+\sin\omega\cos i\sin\phi\\ -\cos\omega\sin\phi+\sin\omega\cos i\cos\phi\\ \sin\omega\sin i\end{bmatrix}+\frac{a\left(1-e^2\right)\sin f}{1+e\cos f}\begin{bmatrix}-\sin\omega\cos\phi+\cos\omega\cos i\sin\phi\\ \sin\omega\sin\phi+\cos\omega\cos i\cos\phi\\ \cos\omega\sin i\end{bmatrix} $$
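As an illustration of Equations (5)–(8), the sketch below evaluates the satellite ECEF position from the orbital elements. A Newton iteration replaces the closed-form series of Equation (13), and all numerical values (as well as the rotation-rate constant) are assumptions for illustration.

```python
import numpy as np

# Sketch of Eqs. (5)-(8): satellite position from the orbital elements, rotated from
# ECI to ECEF. Values are illustrative only; a Newton solver stands in for Eq. (13).
mu = 3.986004418e14          # Earth's gravitational parameter [m^3/s^2]

def sat_ecef(a, e, inc, Omega, omega, ta, omega_e=7.2921159e-5, phi0=0.0):
    n_s = np.sqrt(mu / a**3)                 # mean angular velocity
    M = n_s * np.atleast_1d(ta)              # mean anomaly (f = 0 at ta = 0)
    E = M.copy()
    for _ in range(10):                      # Newton iteration for M = E - e*sin(E)
        E = E - (E - e*np.sin(E) - M) / (1 - e*np.cos(E))
    xp = a*(np.cos(E) - e)                   # in-plane coordinates (Eqs. (10)-(11))
    yp = a*np.sqrt(1 - e**2)*np.sin(E)
    # perifocal -> ECI basis vectors (Eq. (6))
    P = np.array([np.cos(Omega)*np.cos(omega) - np.sin(Omega)*np.sin(omega)*np.cos(inc),
                  np.sin(Omega)*np.cos(omega) + np.cos(Omega)*np.sin(omega)*np.cos(inc),
                  np.sin(omega)*np.sin(inc)])
    Q = np.array([-np.cos(Omega)*np.sin(omega) - np.sin(Omega)*np.cos(omega)*np.cos(inc),
                  -np.sin(Omega)*np.sin(omega) + np.cos(Omega)*np.cos(omega)*np.cos(inc),
                  np.cos(omega)*np.sin(inc)])
    p_eci = np.outer(xp, P) + np.outer(yp, Q)        # shape (N, 3)
    # ECI -> ECEF rotation about the z-axis by phi = phi0 + omega_e*ta (Eq. (7))
    phi = phi0 + omega_e*np.atleast_1d(ta)
    x =  p_eci[:, 0]*np.cos(phi) + p_eci[:, 1]*np.sin(phi)
    y = -p_eci[:, 0]*np.sin(phi) + p_eci[:, 1]*np.cos(phi)
    return np.stack([x, y, p_eci[:, 2]], axis=1)

ta = np.linspace(-2.0, 2.0, 5)               # slow time around the aperture centre [s]
print(sat_ecef(a=6800e3, e=0.0013, inc=np.deg2rad(97.4), Omega=0.0, omega=0.0, ta=ta))
```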
To further characterize the time variation properties of satellite motion states, we need to express Equation (8) as a function of azimuth time t a . One challenge lies in the absence of an explicit expression for the true anomaly f with respect to t a . However, based on orbital dynamics theory, we can derive an approximate expression of f versus t a by introducing the satellite’s eccentric anomaly E and mean anomaly M (as shown in Figure 6). To simplify the expression’s complexity, we may assume f = 0 at the azimuth’s center time. This assumption is justified by two considerations: Firstly, the orbital curvature reaches a maximum at f = 0 , where the azimuth spatial variation becomes most severe. Secondly, the adjustment of the argument of periapsis ω can regulate the satellite’s geographic position, so setting f = 0 at azimuth center time does not affect the analysis of the Earth’s rotation effects on the azimuth spatial variation.
Thus, according to the geometric relationship, we have
$$ \cos f = \frac{\cos E-e}{1-e\cos E},\qquad \sin f = \frac{\sqrt{1-e^2}\sin E}{1-e\cos E} $$
Therefore, the multiplicative factor in Equation (8) can be expressed as
$$ \frac{a\left(1-e^2\right)\cos f}{1+e\cos f} = \frac{a\left(1-e^2\right)\frac{\cos E-e}{1-e\cos E}}{1+e\frac{\cos E-e}{1-e\cos E}} = a\left(\cos E-e\right) $$
$$ \frac{a\left(1-e^2\right)\sin f}{1+e\cos f} = a\sqrt{1-e^2}\sin E $$
Thus, we first obtain the expression of f with respect to E. Meanwhile, based on the geometric relationships, it is known that
$$ M = E-e\sin E,\qquad M = n_st_a $$
where $n_s = \sqrt{\mu/a^3}$ is the satellite’s mean angular velocity. However, an explicit expression of E in terms of M cannot be obtained directly from Equation (12). Therefore, we first perform a Taylor expansion on M and then employ the series inversion method to derive a polynomial approximation of E in terms of M:
$$ M = E-e\sin E \approx E-e\left(E-\frac{E^3}{6}+\frac{E^5}{120}\right) \;\Rightarrow\; E \approx \frac{M}{1-e}-\frac{eM^3}{6\left(1-e\right)^4}+\frac{\left(e+9e^2\right)M^5}{120\left(1-e\right)^7} $$
The error between the E obtained by the series inversion method and the true value is smaller than $1\times10^{-9}$ within the range $E\in\left[-\pi/8,\pi/8\right]$, as shown in Figure 7a. For typical orbital parameters with semi-major axis a = 6800 km and eccentricity e = 0.0013, the resulting errors in Equations (10) and (11) are both less than 0.005 m, as demonstrated in Figure 7b,c.
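The accuracy of Equation (13) can be reproduced numerically, as in the following short sketch (the eccentricity is the value quoted above; the comparison uses the exact Kepler relation itself as reference).

```python
import numpy as np

# Quick check of the series inversion in Eq. (13): compare the polynomial approximation
# of E(M) against the exact relation M = E - e*sin(E) over E in [-pi/8, pi/8].
e = 0.0013
E_true = np.linspace(-np.pi/8, np.pi/8, 2001)
M = E_true - e*np.sin(E_true)

E_series = (M/(1 - e)
            - e*M**3/(6*(1 - e)**4)
            + (e + 9*e**2)*M**5/(120*(1 - e)**7))

print("max |E_series - E_true| =", np.max(np.abs(E_series - E_true)))  # ~1e-9 rad or below
```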
By substituting Equations (10), (11) and (13) into Equation (8), we obtain the satellite’s ECEF position as a function of the azimuth time t a . For clarity of presentation, we use x s , y s , and z s to represent the three coordinates of P e c f , respectively. Each coordinate is expanded as a Taylor series with respect to t a (truncated to the fourth order for conciseness), yielding
$$\begin{aligned} x_s\left(t_a\right) \approx{}& a\left(1-e\right)\cos\omega+\left[\frac{a\sqrt{1-e^2}\,n_s\sin\omega}{e-1}+a\left(1-e\right)\omega_e\cos i\sin\omega\right]t_a \\ &+\left[-\frac{an_s^2\cos\omega}{2\left(e-1\right)^2}-\frac{a\sqrt{1-e^2}\,n_s\omega_e\cos i\cos\omega}{e-1}-\frac{1}{2}a\left(1-e\right)\omega_e^2\cos\omega\right]t_a^2 \\ &+\left[-\frac{an_s^2\omega_e\cos i\sin\omega}{2\left(e-1\right)^2}-\frac{1}{6}a\left(1-e\right)\omega_e^3\cos i\sin\omega+a\sqrt{1-e^2}\left(\frac{n_s^3\sin\omega}{6\left(e-1\right)^4}-\frac{n_s\omega_e^2\sin\omega}{2\left(e-1\right)}\right)\right]t_a^3 \\ &+\left[-\frac{a\left(1+3e\right)n_s^4\cos\omega}{24\left(e-1\right)^5}+\frac{an_s^2\omega_e^2\cos\omega}{4\left(e-1\right)^2}+\frac{1}{24}a\left(1-e\right)\omega_e^4\cos\omega+a\sqrt{1-e^2}\left(-\frac{n_s^3\omega_e\cos i\cos\omega}{6\left(e-1\right)^4}+\frac{n_s\omega_e^3\cos i\cos\omega}{6\left(e-1\right)}\right)\right]t_a^4 \end{aligned}$$
$$\begin{aligned} y_s\left(t_a\right) \approx{}& a\left(1-e\right)\cos i\sin\omega+\left[a\left(e-1\right)\omega_e\cos\omega-\frac{a\sqrt{1-e^2}\,n_s\cos i\cos\omega}{e-1}\right]t_a \\ &+\left[-\frac{a\sqrt{1-e^2}\,n_s\omega_e\sin\omega}{e-1}-\frac{an_s^2\cos i\sin\omega}{2\left(e-1\right)^2}+\frac{1}{2}a\left(e-1\right)\omega_e^2\cos i\sin\omega\right]t_a^2 \\ &+\left[\frac{an_s^2\omega_e\cos\omega}{2\left(e-1\right)^2}+\frac{1}{6}a\left(1-e\right)\omega_e^3\cos\omega+a\sqrt{1-e^2}\left(-\frac{n_s^3\cos i\cos\omega}{6\left(e-1\right)^4}+\frac{n_s\omega_e^2\cos i\cos\omega}{2\left(e-1\right)}\right)\right]t_a^3 \\ &+\left[-\frac{a\left(1+3e\right)n_s^4\cos i\sin\omega}{24\left(e-1\right)^5}+\frac{an_s^2\omega_e^2\cos i\sin\omega}{4\left(e-1\right)^2}+\frac{1}{24}a\left(1-e\right)\omega_e^4\cos i\sin\omega+a\sqrt{1-e^2}\left(-\frac{n_s^3\omega_e\sin\omega}{6\left(e-1\right)^4}+\frac{n_s\omega_e^3\sin\omega}{6\left(e-1\right)}\right)\right]t_a^4 \end{aligned}$$
$$ z_s\left(t_a\right) \approx a\left(1-e\right)\sin i\sin\omega+\frac{a\sqrt{1-e^2}}{1-e}n_s\cos\omega\sin i\,t_a-\frac{an_s^2\sin\omega\sin i}{2\left(e-1\right)^2}t_a^2-\frac{a\sqrt{1-e^2}\,n_s^3\cos\omega\sin i}{6\left(e-1\right)^4}t_a^3-\frac{a\left(1+3e\right)n_s^4\sin\omega\sin i}{24\left(e-1\right)^5}t_a^4 $$

3.2.1. Slant Range History

After obtaining the expression for the satellite trajectory with respect to the azimuth time t a , we can directly derive the expression for the slant range history based on the geometric relationship:
$$\begin{aligned} R\left(t_a;R_0,t_p\right) &= \sqrt{\left[x_s\left(t_a\right)-x\left(R_0,t_p\right)\right]^2+\left[y_s\left(t_a\right)-y\left(R_0,t_p\right)\right]^2+\left[z_s\left(t_a\right)-z\left(R_0,t_p\right)\right]^2} \\ &= \Big\{\left[x_0+x_1t_a+x_2t_a^2+x_3t_a^3+x_4t_a^4-x\left(R_0,t_p\right)\right]^2+\left[y_0+y_1t_a+y_2t_a^2+y_3t_a^3+y_4t_a^4-y\left(R_0,t_p\right)\right]^2 \\ &\qquad +\left[z_0+z_1t_a+z_2t_a^2+z_3t_a^3+z_4t_a^4-z\left(R_0,t_p\right)\right]^2\Big\}^{1/2} \end{aligned}$$
Here, $x_m,y_m,z_m$ ($m = 0,1,2,3,4$) represent the m-th order coefficients of $t_a$ in $x_s,y_s,z_s$, respectively. Meanwhile, $\left[x\left(R_0,t_p\right),y\left(R_0,t_p\right),z\left(R_0,t_p\right)\right]^T$ denotes the ECEF position coordinates of the target with slant range $R_0$ on the intersection line between the satellite’s zero-Doppler plane and the Earth’s surface at time $t_p$, which is denoted as $T_{R_0,t_p}$.
Expanding Equation (17) with respect to $\left(t_a-t_p\right)$ (for simplicity of presentation, we only expand up to the fourth order) and simplifying, we obtain
$$\begin{aligned} R\left(t_a;R_0,t_p\right) \approx{}& R_0+\frac{\left(t_a-t_p\right)^2}{2R_0}\left[\left|V_{s,t_p}\right|^2+A_{s,t_p}\cdot\left(P_{t_p}-T_{R_0,t_p}\right)\right]+\frac{\left(t_a-t_p\right)^3}{R_0}\left[\frac{1}{2}A_{s,t_p}\cdot V_{s,t_p}+\frac{1}{6}\dot A_{s,t_p}\cdot\left(P_{t_p}-T_{R_0,t_p}\right)\right] \\ &+\frac{\left(t_a-t_p\right)^4}{8R_0^3}\left\{4R_c^2\left[\frac{\left|A_{s,t_p}\right|^2}{4}+\frac{\dot A_{s,t_p}\cdot V_{s,t_p}}{3}+\frac{\ddot A_{s,t_p}\cdot\left(P_{t_p}-T_{R_0,t_p}\right)}{12}\right]-\left[\left|V_{s,t_p}\right|^2+A_{s,t_p}\cdot\left(P_{t_p}-T_{R_0,t_p}\right)\right]^2\right\} \end{aligned}$$
where $P_{t_p} = \left[x_s\left(t_p\right),y_s\left(t_p\right),z_s\left(t_p\right)\right]^T$ is the satellite’s ECEF position vector at time $t_p$, while $V_{s,t_p}$, $A_{s,t_p}$, $\dot A_{s,t_p}$, and $\ddot A_{s,t_p}$ represent the first to fourth derivatives of $P_{ecf}$ with respect to $t_a$ evaluated at $t_p$, respectively. Their corresponding x-axis coordinates are
$$\begin{aligned} x\left(t_p\right) &= x_0+x_1t_p+x_2t_p^2+x_3t_p^3+x_4t_p^4 \\ v_{xs}\left(t_p\right) &= x_1+2x_2t_p+3x_3t_p^2+4x_4t_p^3 \\ a_{xs}\left(t_p\right) &= 2x_2+6x_3t_p+12x_4t_p^2 \\ \dot a_{xs}\left(t_p\right) &= 6x_3+24x_4t_p \\ \ddot a_{xs}\left(t_p\right) &= 24x_4 \end{aligned}$$
The same applies to the y-axis coordinate and the z-axis coordinate.

3.2.2. Two-Dimensional Spatial Variation

The coefficients of each order of $\left(t_a-t_p\right)$ in Equation (18) correspond to the $k_n$ in Equation (2). By expanding $k_n$ as a Taylor series with respect to $t_p$, we can obtain the specific expression of the azimuth spatial variation:
$$ k_n \approx k_{n,a0}+k_{n,a1}t_p+\cdots+k_{n,aj}t_p^j $$
It should be noted that k n , a j is also a function of R 0 , so the spatial range variation and range-azimuth coupling spatial variation are also included.
According to the empirical findings from Reference [18], it is generally sufficient to consider only the first- and second-order azimuth spatial variation terms of k 2 and the first-order azimuth spatial variation term of k 3 . The remaining azimuth spatial variation terms are sufficiently small to be neglected.
Below, we proceed with the calculations term by term. First, we compute $k_2$; according to Equation (18),
$$ k_2\left(t_p\right) = \frac{\left|V_{s,t_p}\right|^2+A_{s,t_p}\cdot\left(P_{t_p}-T_{R_0,t_p}\right)}{2R_0} $$
In the equation, the first term in the numerator originates from the spatial variability of the satellite orbit, while the second term arises from the spatial variability of the geometric relationship between the satellite orbit and the ground target. Expanding k 2 as a Taylor series with respect to t p ,
$$ k_2\left(t_p\right) \approx k_{2,a0}+k_{2,a1}t_p+k_{2,a2}t_p^2 $$
where
$$\begin{aligned} k_{2,a0} &= \frac{\left|V_{s,0}\right|^2+A_{s,0}\cdot\left(P_0-T_{R_0,0}\right)}{2R_0} \\ k_{2,a1} &= \frac{2V_{s,0}\cdot A_{s,0}+\dot A_{s,0}\cdot\left(P_0-T_{R_0,0}\right)+A_{s,0}\cdot\left(V_{s,0}-T'_{R_0,0}\right)}{2R_0} \\ k_{2,a2} &= \frac{2A_{s,0}\cdot A_{s,0}+2V_{s,0}\cdot\dot A_{s,0}+\ddot A_{s,0}\cdot\left(P_0-T_{R_0,0}\right)+2\dot A_{s,0}\cdot\left(V_{s,0}-T'_{R_0,0}\right)+A_{s,0}\cdot\left(A_{s,0}-T''_{R_0,0}\right)}{4R_0} \end{aligned}$$
where $P_0$, $V_{s,0}$, $A_{s,0}$, $\dot A_{s,0}$, $\ddot A_{s,0}$, and $T_{R_0,0}$ denote the values of $P_{t_p}$, $V_{s,t_p}$, $A_{s,t_p}$, $\dot A_{s,t_p}$, $\ddot A_{s,t_p}$, and $T_{R_0,t_p}$ at $t_p = 0$, while $T'_{R_0,0}$ and $T''_{R_0,0}$ are the first and second derivatives of $T_{R_0,t_p}$ evaluated at $t_p = 0$.
In Equation (23), the terms related to the spatial variability of the satellite orbit can be easily expressed using x m , y m , z m . However, since it is difficult to obtain precise expressions for the target position coordinates, the terms associated with the spatial variability of the geometric relationship between the satellite orbit and the ground target need to be considered as a whole and appropriately approximated. Specifically, according to orbital motion laws under two-dimensional yaw steering, when f = 0, V s , 0 is perpendicular to P s , 0 . Therefore, P 0 T R 0 , 0 can be directly derived using geometric relationships:
$$ P_0-T_{R_0,0} = \frac{P_0}{\left|P_0\right|}R_0\cos\theta_{\mathrm{off}}+\frac{P_0\times V_{s,0}}{\left|P_0\right|\left|V_{s,0}\right|}R_0\sin\theta_{\mathrm{off}} $$
where $\theta_{\mathrm{off}}$ represents the look angle of the radar at the azimuth center time. After that, we assume that $P_{t_p}-T_{R_0,t_p}$ always satisfies the above equation (clearly, this inference is not precise, but we only need to obtain the derivatives at $t_p = 0$; it has been verified that the error introduced by this approximation is negligible). That is,
$$ P_{t_p}-T_{R_0,t_p} = \frac{P_{t_p}}{\left|P_0\right|}R_0\cos\theta_{\mathrm{off}}+\frac{P_{t_p}\times V_{s,t_p}}{\left|P_0\right|\left|V_{s,0}\right|}R_0\sin\theta_{\mathrm{off}} $$
By differentiating and then evaluating at t p = 0 , we obtain
$$\begin{aligned} V_{s,0}-T'_{R_0,0} &= \frac{V_{s,0}}{\left|P_0\right|}R_0\cos\theta_{\mathrm{off}}+\frac{P_0\times A_{s,0}}{\left|P_0\right|\left|V_{s,0}\right|}R_0\sin\theta_{\mathrm{off}} \\ A_{s,0}-T''_{R_0,0} &= \frac{A_{s,0}}{\left|P_0\right|}R_0\cos\theta_{\mathrm{off}}+\frac{V_{s,0}\times A_{s,0}+P_0\times\dot A_{s,0}}{\left|P_0\right|\left|V_{s,0}\right|}R_0\sin\theta_{\mathrm{off}} \end{aligned}$$
In this way, we can represent the spatial-variability-related terms associated with the geometric relationships between the satellite orbit and ground targets using x m , y m , z m as well.
Subsequently, we calculate $k_3$ according to Equation (18):
$$ k_3\left(t_p\right) = \frac{1}{R_0}\left[\frac{1}{2}A_{s,t_p}\cdot V_{s,t_p}+\frac{1}{6}\dot A_{s,t_p}\cdot\left(P_{t_p}-T_{R_0,t_p}\right)\right] $$
Similarly, the first term in the square brackets originates from the spatial variation of the satellite orbit, while the second term arises from the spatial variation in the geometric relationship between the satellite orbit and the ground target. Expanding it with respect to t p , we obtain
$$ k_3\left(t_p\right) \approx k_{3,a0}+k_{3,a1}t_p $$
where
$$\begin{aligned} k_{3,a0} &= \frac{1}{R_0}\left[\frac{1}{2}A_{s,0}\cdot V_{s,0}+\frac{1}{6}\dot A_{s,0}\cdot\left(P_0-T_{R_0,0}\right)\right] \\ k_{3,a1} &= \frac{1}{R_0}\left[\frac{1}{2}\left(\dot A_{s,0}\cdot V_{s,0}+A_{s,0}\cdot A_{s,0}\right)+\frac{1}{6}\left(\ddot A_{s,0}\cdot\left(P_0-T_{R_0,0}\right)+\dot A_{s,0}\cdot\left(V_{s,0}-T'_{R_0,0}\right)\right)\right] \end{aligned}$$
In summary, by utilizing Equations (15)–(19), (25) and (26), we can obtain the expressions of $k_{2,a1}$, $k_{2,a2}$, and $k_{3,a1}$ in terms of known quantities such as the orbital elements and look angle, thereby establishing the theoretical model of the two-dimensional spatial variation. Due to the excessive complexity of the expressions, they are not explicitly provided in this paper. Readers may derive the corresponding expressions using computational tools such as Mathematica based on the aforementioned equations.
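When the closed-form expressions are too cumbersome, the azimuth-variation coefficients can also be estimated numerically. The sketch below fits $k_2(t_p)$ and $k_3(t_p)$ from sampled slant-range histories; the range-history function used here is a placeholder standing in for the CUSM-based one.

```python
import numpy as np

# Numerical alternative to the closed-form expressions: estimate k_n(R0, tp) by fitting
# Eq. (2) to slant-range histories of targets that share R0 but differ in tp, then fit
# k_2(tp) and k_3(tp) as polynomials in tp (Eqs. (21), (22), (28)).

def slant_range(ta, R0, tp, v=7500.0):          # stand-in geometry, not the CUSM
    return np.sqrt(R0**2 + (v*(ta - tp))**2) + 0.02*(ta - tp)**3

ta  = np.linspace(-1.5, 1.5, 3001)
R0  = 6.0e5
tps = np.linspace(-2.0, 2.0, 9)                  # zero-Doppler times across the scene

k2_list, k3_list = [], []
for tp in tps:
    dR = slant_range(ta, R0, tp) - R0
    A  = (ta - tp)[:, None] ** np.arange(2, 7)[None, :]
    k, *_ = np.linalg.lstsq(A, dR, rcond=None)   # k = [k2, k3, k4, k5, k6]
    k2_list.append(k[0]); k3_list.append(k[1])

k2_fit = np.polyfit(tps, k2_list, 2)             # k2(tp) ~ k2_a2*tp^2 + k2_a1*tp + k2_a0
k3_fit = np.polyfit(tps, k3_list, 1)             # k3(tp) ~ k3_a1*tp + k3_a0
print("k2_a0, k2_a1, k2_a2 =", k2_fit[2], k2_fit[1], k2_fit[0])
print("k3_a0, k3_a1        =", k3_fit[1], k3_fit[0])
```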
To further demonstrate the relationship between two-dimensional spatial variation and relevant parameters, we conducted a series of Monte Carlo simulations. A set of typical parameters for LEO spaceborne SAR is listed in Table 1. By varying different parameters in the table, we can obtain the variation trends of two-dimensional spatial variation with respect to each parameter.
It should be noted that R 0 is not an independent quantity but rather constrained by a, e, and θ off . When t a = 0 , the following relationship holds:
$$ \cos\theta_{\mathrm{off}} = \frac{R_s^2+R_0^2-R_e^2}{2R_sR_0} \;\Rightarrow\; R_0 = R_s\cos\theta_{\mathrm{off}}-\sqrt{R_e^2-R_s^2\sin^2\theta_{\mathrm{off}}} $$
where
$$ R_s = a\left(1-e\right) $$
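Equations (30) and (31) translate directly into code; the following sketch computes $R_0$ from assumed orbital and look-angle values under the spherical-Earth approximation (radius $R_e$).

```python
import numpy as np

# Sketch of Eqs. (30)-(31): the closest slant range R0 implied by the semi-major axis,
# eccentricity, and look angle (spherical Earth of radius Re assumed).
def closest_slant_range(a, e, theta_off, Re=6371.0e3):
    Rs = a*(1 - e)                                   # satellite radius at f = 0
    return Rs*np.cos(theta_off) - np.sqrt(Re**2 - (Rs*np.sin(theta_off))**2)

print(closest_slant_range(a=6800e3, e=0.0013, theta_off=np.deg2rad(30.0)))  # ~ a few 1e5 m
```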
Therefore, when calculating the variation trend of two-dimensional spatial variation, it is necessary to adjust the value of R 0 according to a , e , and θ off accordingly. The variation trends of k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 with respect to i, ω , e, and θ off are shown in Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12.
From the series of figures above, we can observe several patterns:
(1)
The variation trends of k 2 , a 2 and k 3 , a 1 are nearly identical, being approximately three orders of magnitude smaller than k 2 , a 1 .
(2)
The spatial variation appears most severe in polar orbits (with orbital inclination of 90°).
(3)
Spatial variation intensifies with decreasing orbital altitude.
(4)
Larger look angles (which correspond to greater R 0 under identical orbital elements) result in more pronounced spatial variation.
However, merely knowing the two-dimensional spatial variation patterns and their magnitude values remains insufficient for subsequent processing. It is further necessary to determine their impacts on imaging based on resolution and observation range parameters, thereby implementing corresponding calibration measures.

4. Analysis of Two-Dimensional Spatial Variation Effects

In the previous section, we established the CUSM to model and describe the satellite trajectory and slant range history, ultimately obtaining the theoretical expression of two-dimensional spatial variation. In this section, we will theoretically analyze the effects of two-dimensional spatial variation. As concluded in [22], the spatial range variation has non-negligible impacts on the quadratic and cubic terms of f r , thus necessitating range blocking. When considering a specific block, the quadratic, cubic, and higher-order terms of f r with two-dimensional spatial variation can be neglected, requiring only the consideration of the zeroth-order and first-order terms of f r .

4.1. Impact on RCM

The impact of two-dimensional spatial variation on RCM manifests as a first-order phase error in f r . This error phase can be expressed as
$$ \Delta\Phi_{\mathrm{RCM}}\left(B_r,B_a;R_0,t_p\right) = \phi_1\left(\frac{B_a}{2};R_0,t_p\right)\frac{B_r}{2}-\phi_1\left(\frac{B_a}{2};R_{\mathrm{ref}},0\right)\frac{B_r}{2} $$
where R ref represents the shortest slant range to the scene center target, which remains constant when both the orbital elements and look angle are determined. B a denotes the azimuth bandwidth, and B r represents the range bandwidth, both determined by the resolution requirements. The ranges of R 0 and t p are defined by the observation scene. Therefore, under identical geometry conditions, higher resolution and larger swath width lead to more pronounced two-dimensional spatial variation effects on RCM. Figure 13 demonstrates the impact of two-dimensional spatial variation on RCM under 0.1 m resolution and 20 km swath width, with satellite–ground geometric parameters listed in Table 1. Subfigures (a), (b), and (c), respectively, show cases where only k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 components exist. The horizontal coordinate indicates the target’s distribution along the range dimension in the scene, while the vertical coordinate represents the target’s distribution along the azimuth direction. The azimuth distribution of targets in the scene can be approximately obtained by multiplying t p with the ground velocity of the zero-Doppler plane, i.e.,
$$ W_{\mathrm{azim}} \approx V_zt_p $$
where $V_z = \left|T'_{R_0,0}\right|$. Specifically, the ground-projected velocity of the zero-Doppler plane (it is precisely the ratio of this velocity to the azimuth bandwidth $B_a$ that determines the azimuth resolution) also varies with the orbital elements and the look angle:
$$ T'_{R_0,0} = V_{s,0}-\left(V_{s,0}-T'_{R_0,0}\right) = V_{s,0}-\frac{V_{s,0}}{\left|P_0\right|}R_0\cos\theta_{\mathrm{off}}-\frac{P_0\times A_{s,0}}{\left|P_0\right|\left|V_{s,0}\right|}R_0\sin\theta_{\mathrm{off}} $$
From Figure 13, we can observe that among the two-dimensional spatial variation effects, the spatial range variation effect exhibits the most significant variation, which is consistent with the conclusion drawn in literature [22]. According to the analysis in literature [22], severe spatial range variation effects can be corrected using range-blocking methods. Therefore, in this paper, we only need to consider the azimuth spatial variation effects (including range–azimuth coupling spatial variation effects) within a single range block. That is,
$$ \Delta\Phi_{\mathrm{RCM,av}}\left(B_r,B_a;R_0,t_p\right) = \phi_1\left(\frac{B_a}{2};R_0,t_p\right)\frac{B_r}{2}-\phi_1\left(\frac{B_a}{2};R_0,0\right)\frac{B_r}{2} $$
Figure 14 demonstrates the impact of azimuth spatial variation effects on RCM under 0.1 m resolution and 20 km swath width conditions, with satellite–ground geometric parameters listed in Table 1. Subfigures (a), (b), and (c), respectively, represent scenarios where only k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 are present. The results reveal that Δ Φ RCM , av consistently remains smaller than π / 4 under these conditions, indicating that two-dimensional spatial variation effects on RCM can be neglected according to the conclusions of [22]. Furthermore, since Δ Φ RCM , av becomes negligible when only k 2 , a 2 and k 3 , a 1 exist, we only need to consider Δ Φ RCM , av in the presence of k 2 , a 1 . By setting Δ Φ RCM , av to its upper limit π / 4 and neglecting higher-order terms beyond the cubic order in f a (which have minimal impact), we derive the HRWS condition requiring the consideration of two-dimensional spatial variation effects on RCM:
$$ \left|-\frac{\pi\lambda^2}{c}\frac{1}{4k_{2,a0}}\frac{B_a^2}{4}\frac{B_r}{2}+\frac{\pi\lambda^2}{c}\frac{1}{4\left(k_{2,a0}+k_{2,a1}t_p\right)}\frac{B_a^2}{4}\frac{B_r}{2}\right| \le \frac{\pi}{4} $$
Utilizing $k_{2,a0}\gg\left|k_{2,a1}t_p\right|$, the expression can be simplified to obtain
$$ \frac{\pi\lambda^2}{c}\frac{B_a^2}{4}\frac{B_r}{2}\frac{\left|k_{2,a1}t_p\right|}{4k_{2,a0}^2} \le \frac{\pi}{4} $$
Furthermore, using the formulas for azimuth and range resolution ρ a = 0.886 V z / B a and ρ r = 0.443 c / B r , Equation (37) can be converted to
$$ \frac{W_{\mathrm{azim}}}{\rho_r\rho_a^2} \le \frac{23k_{2,a0}^2}{\lambda^2k_{2,a1}V_z} $$
and this equation is the condition for neglecting the azimuth spatial variation of RCM. When equality holds in the above equation, the corresponding azimuth swath width is defined as the maximum observable swath that allows the azimuth spatial variation of RCM to be neglected:
$$ L_{\mathrm{azim,noavRCM}} = \frac{46k_{2,a0}^2\rho_r\rho_a^2}{\lambda^2k_{2,a1}V_z} $$
which can be used as the basis for azimuth blocking.
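Equation (39) can be evaluated directly; in the sketch below, the $k_2(t_p)$ coefficients are assumed placeholder values rather than results fitted from the Table 1 geometry.

```python
import numpy as np

# Sketch of Eq. (39): maximum azimuth swath for which the azimuth spatial variation of
# the RCM can be ignored; k2_a0 and k2_a1 are assumed placeholder values.
lam, Vz = 0.03, 7200.0                  # wavelength [m], zero-Doppler ground velocity [m/s]
rho_r, rho_a = 0.1, 0.1                 # range / azimuth resolution [m]
k2_a0, k2_a1 = 47.0, 5e-3               # fitted k2(tp) coefficients (assumed)

L_noavRCM = 46 * k2_a0**2 * rho_r * rho_a**2 / (lam**2 * abs(k2_a1) * Vz)
print("L_azim,noavRCM [km]:", L_noavRCM/1e3)
```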
Under the simulation parameters listed in Table 1, Figure 15 illustrates the variation of the maximum observable swath with respect to resolution when ignoring the azimuth spatial variation of RCM, where the resolution is varied from 0.01 m to 1 m. Specifically, (a) presents a numerical plot, which more clearly reflects the trend of variation. (b) shows a contour plot, which provides a more intuitive representation of the magnitude of L azim , noavRCM . (c) displays the result of taking the logarithm of the axes in (b), making it easier to observe the values of L azim , noavRCM at higher resolutions. In more general cases, we often assume ρ r = ρ a , allowing us to obtain the variation of L azim , noavRCM with respect to ρ a , as shown in Figure 16. Here, (a) demonstrates the variation of L azim , noavRCM as ρ a varies from 0.01 m to 1 m, with both axes plotted on a logarithmic scale. (b) provides a zoomed-in view of (a), where ρ a varies from 0.1 m to 0.2 m.

4.2. Impact of Azimuth Modulation on Phase Characteristics

Similar to the previous section, the impact of two-dimensional spatial variation on the azimuth modulated phase (AMP) manifests as an error in the zeroth-order term of f r . This phase error is expressed as
$$ \Delta\Phi_{\mathrm{AMP}}\left(B_r,B_a;R_0,t_p\right) = \phi_0\left(\frac{B_a}{2};R_0,t_p\right)-\phi_0\left(\frac{B_a}{2};R_{\mathrm{ref}},0\right) $$
Figure 17 demonstrates the impact of two-dimensional spatial variation on AMP under 0.1 m resolution and 20 km mapping swath conditions, with satellite-ground geometric parameters listed in Table 1. Subfigures (a), (b), and (c), respectively, represent cases where only k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 exist. Following the conclusions from Reference [22], we can disregard the dominant spatial range variation while focusing solely on the azimuth spatial variation (including range–azimuth coupling spatial variation) within a range block. This can be expressed as
$$ \Delta\Phi_{\mathrm{AMP,av}}\left(B_r,B_a;R_0,t_p\right) = \phi_0\left(\frac{B_a}{2};R_0,t_p\right)-\phi_0\left(\frac{B_a}{2};R_0,0\right) $$
Figure 18 demonstrates the impact of azimuth spatial variation on AMP under 0.1 m resolution and 20 km swath width conditions, with satellite–ground geometric parameters listed in Table 1. Subfigures (a), (b), and (c) correspond to the cases where only k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 exist, respectively.
Based on the conclusions from [22], k 2 , a 1 and k 2 , a 2 primarily introduce errors in the quadratic term of f a in AMP, while k 3 , a 1 mainly causes errors in the cubic term of f a . The thresholds for negligible impacts of quadratic and cubic terms of f a in AMP errors are 0.113 π and 0.022 π , respectively. Tolerance analyses are subsequently conducted for scenarios involving only k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 separately. First, assuming only the presence of k 2 , a 1 while neglecting higher-order terms of f a (whose influences are minimal), this condition allows us to disregard the azimuth spatial variation of Δ Φ AMP , av :
$$ \left|\frac{\pi\lambda}{4}\left(\frac{1}{k_{2,a0}}-\frac{1}{k_{2,a0}+k_{2,a1}t_p}\right)\left(\frac{B_a}{2}\right)^2\right| \le 0.113\pi $$
Using $k_{2,a0}\gg\left|k_{2,a1}t_p\right|$, it can be simplified to obtain
$$ \frac{\pi\lambda}{4}\frac{\left|k_{2,a1}t_p\right|}{k_{2,a0}^2}\left(\frac{B_a}{2}\right)^2 \le 0.113\pi $$
Furthermore, by using the azimuth resolution formula ρ a = 0.886 V z / B a , Equation (43) can be transformed into
$$ \frac{W_{\mathrm{azim}}}{\rho_a^2} \le \frac{2.3k_{2,a0}^2}{\lambda k_{2,a1}V_z} $$
and this equation represents the case where the azimuth spatial variation of APM can be neglected when only k 2 , a 1 exists. When the equation takes the equals sign, the corresponding azimuth swath width is defined as the maximum swath:
$$ L_{\mathrm{azim,noavAPM},k21} = \frac{4.6k_{2,a0}^2\rho_a^2}{\lambda k_{2,a1}V_z} $$
It can be observed that this width is independent of range resolution, being solely determined by the azimuth resolution under specified spaceborne geometric parameters. Figure 19 illustrates the variation of the maximum swath, neglecting APM errors with azimuth resolution, where both the horizontal and vertical axes are plotted on a logarithmic scale. Here, (a) shows the variation of L azim , noavAPM , k 21 as ρ a varies from 0.01 m to 1 m. (b) provides a zoomed-in view of (a), where ρ a varies from 0.1 m to 0.2 m.
Secondly, assume only the existence of $k_{2,a2}$, neglecting the higher-order terms of $f_a$ (which have a minimal impact). Under this condition, it becomes unnecessary to consider the azimuth spatial variation of $\Delta\Phi_{\mathrm{AMP,av}}$:
$$ \left|\frac{\pi\lambda}{4}\left(\frac{1}{k_{2,a0}}-\frac{1}{k_{2,a0}+k_{2,a2}t_p^2}\right)\left(\frac{B_a}{2}\right)^2\right| \le 0.113\pi $$
Using the condition $k_{2,a0}\gg\left|k_{2,a2}\right|t_p^2$, we can simplify to obtain
$$ \frac{\pi\lambda}{4}\frac{\left|k_{2,a2}\right|t_p^2}{k_{2,a0}^2}\left(\frac{B_a}{2}\right)^2 \le 0.113\pi \;\Rightarrow\; \frac{W_{\mathrm{azim}}^2}{\rho_a^2} \le \frac{2.3k_{2,a0}^2}{\lambda k_{2,a2}} $$
When the equality holds in the above equation, we define the azimuth swath width at this time as the maximum swath:
$$ L_{\mathrm{azim,noavAPM},k22} = 2\rho_a\sqrt{\frac{2.3k_{2,a0}^2}{\lambda k_{2,a2}}} $$
Finally, assume that only $k_{3,a1}$ exists while neglecting the higher-order terms of $f_a$ (whose effects are negligible). Under this condition, it is unnecessary to consider the azimuth spatial variation of $\Delta\Phi_{\mathrm{AMP,av}}$:
$$ \left|\frac{\pi\lambda^2}{16}\left(\frac{k_{3,a0}+k_{3,a1}t_p}{k_{2,a0}^3}-\frac{k_{3,a0}}{k_{2,a0}^3}\right)\left(\frac{B_a}{2}\right)^3\right| \le 0.022\pi $$
The expression can be further simplified to obtain
$$ \frac{W_{\mathrm{azim}}}{\rho_a^3} \le \frac{4.05k_{2,a0}^3}{\lambda^2k_{3,a1}V_z^2} $$
When the equality holds in the above equation, the azimuth swath width at this point is defined as the maximum swath:
$$ L_{\mathrm{azim,noavAPM},k31} = \frac{8.1k_{2,a0}^3\rho_a^3}{\lambda^2k_{3,a1}V_z^2} $$
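Equations (45), (48), and (51) can likewise be evaluated to size the azimuth blocks discussed later in Section 5; the coefficient magnitudes in the sketch below are assumptions for illustration only, not values derived from Table 1.

```python
import numpy as np

# Sketch of Eqs. (45), (48) and (51): maximum azimuth swaths for which the azimuth
# spatial variation of the azimuth modulation phase can be ignored. All k-coefficients
# below are placeholders; in practice they come from the fitted k_2(tp), k_3(tp).
lam, Vz, rho_a = 0.03, 7200.0, 0.1                         # wavelength, ground speed, azimuth resolution
k2_a0, k2_a1, k2_a2, k3_a1 = 47.0, 5e-3, 1e-6, 2e-7        # assumed magnitudes

L_k21 = 4.6 * k2_a0**2 * rho_a**2 / (lam * abs(k2_a1) * Vz)
L_k22 = 2 * rho_a * np.sqrt(2.3 * k2_a0**2 / (lam * abs(k2_a2)))
L_k31 = 8.1 * k2_a0**3 * rho_a**3 / (lam**2 * abs(k3_a1) * Vz**2)

print("L_azim,noavAPM,k21 [km]:", L_k21/1e3)
print("L_azim,noavAPM,k22 [km]:", L_k22/1e3)
print("L_azim,noavAPM,k31 [km]:", L_k31/1e3)
print("azimuth block criterion [km]:", min(L_k22, L_k31)/1e3)   # cf. Section 5.2
```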
Figure 20 illustrates the variation of the maximum swath, ignoring APM error with azimuth resolution, where both axes are plotted on a logarithmic scale. Here, (a) shows the case when ρ a varies from 0.01 m to 1 m. (b) provides a zoomed-in view of (a), where ρ a varies from 0.1 m to 0.2 m.
Figure 21 illustrates the variation of maximum swath with azimuth resolution when APM errors are ignored, where both axes employ logarithmic scales. Here, (a) demonstrates the variation of L azim , noavAPM , k 31 when ρ a varies from 0.01 m to 1 m. (b) provides a zoomed-in view of (a), where ρ a varies from 0.1 m to 0.2 m.
In summary, the impact of two-dimensional spatial variation (primarily azimuth spatial variation) on imaging is related to both the satellite–ground geometric relationship and parameters such as resolution and azimuth swath. To further demonstrate the relationship between spatial variation effects and various parameters, Figure 22, Figure 23, Figure 24, Figure 25, Figure 26 and Figure 27 shows the variations of Equations (39), (45), (48) and (51) with different parameters. For visualization purposes, the portions of L azim , noavAPM , k 21 exceeding 100 km are set to 100 km, while other maximum azimuth swaths exceeding 1000 km are set to 1000 km.

5. Two-Dimensional Spatial Variation Processing Methods

From the analysis in Section 4, we can observe that $L_{\mathrm{azim,noavRCM}}$, $L_{\mathrm{azim,noavAPM},k22}$, and $L_{\mathrm{azim,noavAPM},k31}$ are relatively large, while $L_{\mathrm{azim,noavAPM},k21}$ is significantly smaller. In HRWS spaceborne SAR, constrained by the enormous data volume, we typically maintain the ratio of the azimuth swath to the azimuth resolution within $10^5$. This implies that, in most cases, we do not need to consider the azimuth spatial variation of $\phi_1$; only the azimuth spatial variation of $\phi_0$ with respect to $k_{2,a1}$ needs to be taken into account. However, in extreme cases (e.g., when the resolution is very high), considering only the azimuth spatial variation of $\phi_0$ with respect to $k_{2,a1}$ may be insufficient, and azimuth blocking would then be required.
With the aforementioned analysis, we propose a two-dimensional spatial variation correction algorithm based on azimuth nonlinear chirp scaling (ANCS) processing. This algorithm is developed from the spatial range variation correction algorithm using the range-blocking method presented in [22]. The flowchart of the proposed algorithm is illustrated in Figure 28. The complete algorithm consists of two main components, marked by red and green dashed boxes: preprocessing and range blocking, respectively. The range blocking further comprises three sub-modules, indicated by yellow, purple and blue dashed boxes: spatial range variation correction, azimuth NCS processing, and azimuth compression.
The preprocessing stage aims to eliminate spectral aliasing introduced by azimuth beam scanning and perform unified phase processing using the parameters of reference targets. The spatial range variation correction stage removes spatial range variation components, including RCM and the higher-order terms of f r through processing on each block. The azimuth compression stage eliminates spatial range variation components of the azimuth modulation phase while completing azimuth compression for each range cell. Detailed derivations and formula representations for these stages have been comprehensively presented in [22] and will not be reiterated here. The key distinction between our algorithm and [22] lies in the execution sequence of azimuth compression: our method performs azimuth compression within each range block, whereas the referenced algorithm implements azimuth compression after merging range blocks. However, since both approaches require operations on each range cell for azimuth compression, there is no essential difference in their final implementation.
The following section primarily introduces the newly added azimuth NCS processing component.

5.1. Azimuth Nonlinear Chirp Scaling Processing

In most cases, we can consider only the first-order azimuth spatial variation of k 2 while ignoring the azimuth spatial variations of other coefficients. Moreover, we only need to consider the influence of k 2 ’s first-order azimuth spatial variation on ϕ 0 . However, according to previous analysis, the first-order azimuth spatial variation coefficient k 2 , a 1 of k 2 varies with range, so its correction needs to be implemented during the range blocking. After completing the range block compensation for the first-order and higher terms of f r , the range-Doppler domain signal of P R 0 , t p can be expressed as
$$ sS_{\mathrm{after\_block}}\left(t_r,f_a;R_0,t_p\right) = \mathrm{sinc}\left(t_r-\frac{2R_0}{c}\right)\exp\left(-j\frac{4\pi R_0}{\lambda}\right)\exp\left(-j2\pi t_pf_a\right)\exp\left[j\phi_0\left(f_a;R_0,t_p\right)\right]\exp\left(j\pi\frac{f_a^2}{K_c}\right) $$
where K c represents the chirp rate introduced by the azimuth time–frequency transformation. The presence of this term causes aliasing in the azimuth time domain. Therefore, we first perform an inverse azimuth time-frequency transform to restore the original chirp characteristics of the signal. After that, the target signal is transformed into the two-dimensional time domain as
$$ ss_{\mathrm{after\_block}}\left(t_r,t_a;R_0,t_p\right) = \mathrm{sinc}\left(t_r-\frac{2R_0}{c}\right)\exp\left\{-j\frac{4\pi}{\lambda}\left[R_0+k_2\left(t_a-t_p\right)^2+k_3\left(t_a-t_p\right)^3+k_4\left(t_a-t_p\right)^4+k_5\left(t_a-t_p\right)^5+k_6\left(t_a-t_p\right)^6\right]\right\} $$
If we compensate the above equation with a cubic time phase,
$$ H_{\mathrm{av,time}} = \exp\left(-j\frac{4\pi}{\lambda}p_3t_a^3\right) $$
Multiplying Equation (53) by Equation (54) and expanding k 2 as k 2 , a 0 + k 2 , a 1 t p , we obtain
$$\begin{aligned} ss_{\mathrm{after\_av\_time}}\left(t_r,t_a;R_0,t_p\right) ={}& \mathrm{sinc}\left(t_r-\frac{2R_0}{c}\right)\times\exp\Big\{-j\frac{4\pi}{\lambda}\Big[R_0+p_3t_p^3+3p_3t_p^2\left(t_a-t_p\right)+\left(k_{2,a0}+k_{2,a1}t_p+3p_3t_p\right)\left(t_a-t_p\right)^2 \\ &+\left(k_3+p_3\right)\left(t_a-t_p\right)^3+k_4\left(t_a-t_p\right)^4+k_5\left(t_a-t_p\right)^5+k_6\left(t_a-t_p\right)^6\Big]\Big\} \end{aligned}$$
It can be observed that by setting $p_3 = -k_{2,a1}/3$, the azimuth spatial variation of $k_2$ can be eliminated. Moreover, since the absolute value of $k_{2,a1}$ is very small, the impact of the zeroth-order and first-order terms of $\left(t_a-t_p\right)$ introduced by this operation can be neglected. It should be noted that here, $k_{2,a1}$ is obtained by the polynomial fitting of $k_2$ to avoid the influence of insufficient precision in the theoretical model, and the fitting method has been provided in [22]. Thus, Equation (54) can be rewritten as
$$ H_{\mathrm{av,time}} = \exp\left(j\frac{4\pi}{\lambda}\frac{k_{2,a1}}{3}t_a^3\right) $$
Equation (55) can be rewritten as
$$ ss_{\mathrm{after\_av\_time}}\left(t_r,t_a;R_0,t_p\right) = \mathrm{sinc}\left(t_r-\frac{2R_0}{c}\right)\exp\left\{-j\frac{4\pi}{\lambda}\left[R_0+k_{2,a0}\left(t_a-t_p\right)^2+\left(k_3+p_3\right)\left(t_a-t_p\right)^3+k_4\left(t_a-t_p\right)^4+k_5\left(t_a-t_p\right)^5+k_6\left(t_a-t_p\right)^6\right]\right\} $$
After performing the azimuth time–frequency transformation again, Equation (57) is transformed into the Range-Doppler (RD) domain, yielding
$$\begin{aligned} sS_{\mathrm{after\_av\_time}}\left(t_r,f_a;R_0,t_p\right) ={}& \mathrm{sinc}\left(t_r-\frac{2R_0}{c}\right)\exp\left(-j\frac{4\pi R_0}{\lambda}\right)\exp\left(-j2\pi t_pf_a\right)\exp\left(j\pi\frac{f_a^2}{K_c}\right) \\ &\times\exp\Bigg[j\Bigg(\frac{\pi\lambda}{4}\frac{1}{k_{2,a0}}f_a^2+\frac{\pi\lambda^2}{16}\frac{k_{3,new}}{k_{2,a0}^3}f_a^3+\frac{\pi\lambda^3}{256}\frac{9k_{3,new}^2-4k_{2,a0}k_4}{k_{2,a0}^5}f_a^4 \\ &+\frac{\pi\lambda^4}{1024}\frac{27k_{3,new}^3-24k_{2,a0}k_{3,new}k_4+4k_{2,a0}^2k_5}{k_{2,a0}^7}f_a^5+\frac{\pi\lambda^5}{8192}\frac{189k_{3,new}^4-252k_{2,a0}k_{3,new}^2k_4+32k_{2,a0}^2k_4^2+60k_{2,a0}^2k_{3,new}k_5-8k_{2,a0}^3k_6}{k_{2,a0}^9}f_a^6\Bigg)\Bigg] \end{aligned}$$
where $k_{3,new} = k_3+p_3$. After this, there is no longer any azimuth spatial variation in the last phase term of the aforementioned equation, and the azimuth pulse compression can be performed for each range cell using $H_{ac}$ from [22].
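The following is a minimal sketch of the azimuth NCS step, assuming the data are already range-blocked and transformed back to the azimuth time domain; `azimuth_ncs` and its inputs are illustrative names and values, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the azimuth NCS step of Section 5.1: multiply each range line by the
# cubic time-domain phase of Eq. (56) to remove the first-order azimuth variation of k2.
# `data` is azimuth-time x range-cell; k2_a1 comes from the polynomial fit of k2(tp).
def azimuth_ncs(data, ta, k2_a1, lam):
    h_av = np.exp(1j * 4*np.pi/lam * (k2_a1/3.0) * ta**3)    # H_av,time of Eq. (56)
    return data * h_av[:, None]

# toy usage with placeholder values
prf, lam, k2_a1 = 4000.0, 0.03, 5e-3
ta = (np.arange(8192) - 4096) / prf
data = np.ones((8192, 256), dtype=complex)       # placeholder range-blocked data
data_ncs = azimuth_ncs(data, ta, k2_a1, lam)
print(data_ncs.shape)
```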

5.2. Algorithm Extensions

In the algorithm shown in Figure 28, only the influence of $k_{2,a1}$ on the azimuth modulation phase ($\phi_0$) is considered, which is applicable to the vast majority of cases. However, according to the analysis in Section 4, when the resolution becomes extremely high, the remaining azimuth spatial variation effects need to be taken into account. Therefore, when necessary, the algorithm requires an extension, which corresponds to azimuth blocking.
Azimuth blocking is mainly divided into two scenarios. The first scenario involves considering the impact of $k_{2,a1}$ on the RCM, i.e., $L_{\mathrm{azim,real}} > L_{\mathrm{azim,noavRCM}}$, where $L_{\mathrm{azim,real}}$ represents the actual azimuth swath in a single imaging process. In this case, azimuth blocking should be performed according to $L_{\mathrm{azim,noavRCM}}$ after the spatial range variation correction and before the ANCS processing. Since the $k_n$ corresponding to different $R_0$ and $t_p$ can be precisely calculated from the polynomial fitting coefficients, for each azimuth block $k_n$ can be corrected based on the $t_{p,\mathrm{ablock}}$ of its central target. The specific compensation factor can be expressed as
$$\begin{aligned} H_{\mathrm{ablock}}\left(f_r,f_a;R_{\mathrm{block}},t_{p,\mathrm{ablock}}\right) ={}& \mathrm{conj}\left\{SS_{\mathrm{receive}}\left(f_r,f_a;R_{\mathrm{block}},t_{p,\mathrm{ablock}}\right)\exp\left[-j\phi_0\left(f_a;R_{\mathrm{block}},t_{p,\mathrm{ablock}}\right)\right]\exp\left[j\frac{4\pi\left(f_c+f_r\right)}{c}R_{\mathrm{block}}\right]\right\} \\ &\times SS_{\mathrm{receive}}\left(f_r,f_a;R_{\mathrm{block}},0\right)\exp\left[-j\phi_0\left(f_a;R_{\mathrm{block}},0\right)\right]\exp\left[j\frac{4\pi\left(f_c+f_r\right)}{c}R_{\mathrm{block}}\right] \end{aligned}$$
The second scenario requires the consideration of the influence of $k_{2,a2}$ and $k_{3,a1}$ on the azimuth modulation phase, i.e., when $L_{\mathrm{azim,real}} > L_{\mathrm{azim,noavAPM},k22}$ or $L_{\mathrm{azim,real}} > L_{\mathrm{azim,noavAPM},k31}$. These two cases can be handled together by performing azimuth blocking after completing the ANCS and before the azimuth compression, using $\min\left(L_{\mathrm{azim,noavAPM},k22},L_{\mathrm{azim,noavAPM},k31}\right)$ as the blocking criterion. For each azimuth block, the corresponding azimuth compression function can then be selected according to its respective $t_{p,\mathrm{ablock}}$:
$$ H_{\mathrm{ac,block}}\left(f_a;R_0,t_{p,\mathrm{ablock}}\right) = \exp\left[-j\phi_0\left(f_a;R_0,t_{p,\mathrm{ablock}}\right)\right]\exp\left(-j\pi\frac{f_a^2}{K_c}\right) $$
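The blocking decision itself is simple bookkeeping. The sketch below, assuming a scene-centred azimuth axis, returns the block centres from which $t_{p,\mathrm{ablock}}$ is selected; the 19.87 km limit used in the example is the $L_{\mathrm{azim,noavAPM},k31}$ value quoted for the simulation described next.

```python
import numpy as np

# Sketch of the azimuth-blocking decision of Section 5.2: block only when the actual
# azimuth swath exceeds the relevant limit, and use the block-centre position for each block.
def azimuth_blocks(L_real, L_limit):
    """Return the centre positions (in metres, scene-centred) of the azimuth blocks."""
    if L_real <= L_limit:
        return np.array([0.0])                      # no blocking needed
    n = int(np.ceil(L_real / L_limit))
    edges = np.linspace(-L_real/2, L_real/2, n + 1)
    return 0.5*(edges[:-1] + edges[1:])             # divide by Vz to obtain tp,ablock

print(azimuth_blocks(L_real=200e3, L_limit=19.87e3))   # example with L_azim,noavAPM,k31
```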
To verify the necessity and effectiveness of this azimuth blocking method, we conducted a series of simulation experiments. The simulation parameters are shown in Table 1. Notably, to reduce L azim , noavAPM , k 22 and L azim , noavAPM , k 31 , we set the wavelength to 0.06 m. In the simulation, the azimuth resolution is set to 0.1 m, and the width of the observation scene in the azimuth direction is set to 200 km. Five targets are configured in the simulation, all having the same closest slant range, with azimuth positions at −100 km, −50 km, 0 km, 50 km, and 100 km. The azimuth block length is sequentially set from 1 km to 100 km, and the imaging quality assessment results for the five targets are shown in Figure 29.
According to the calculations from Equations (48) and (51), L azim , noavAPM , k 22 is 78.45 km, and L azim , noavAPM , k 31 is 19.87 km, with L azim , noavAPM , k 31 being the smaller one. Therefore, the results in the figure indicate that when the azimuth block length is less than L azim , noavAPM , k 31 , the imaging quality for all targets meets our required criteria (PSLR > 13 dB). Thus, when necessary, azimuth blocking is required, and the azimuth block length should be appropriately selected.

6. Experiments and Results

To verify the accuracy and effectiveness of the proposed algorithm, both simulation experiments and real data experiments are conducted in the following sections.

6.1. Simulation

A total of two sets of simulation experiments were designed (marked as simulations 1 and 2 below). In simulation 1, both the range and azimuth resolutions were 0.1 m, with range and azimuth swaths of 10 km each and an antenna azimuth length of 1 m in spotlight mode. The remaining simulation parameters are as shown in Table 1. In simulation 2, both the range and azimuth resolutions were 0.2 m, with range and azimuth swaths of 20 km each and an antenna azimuth length of 3.2 m in sliding spotlight mode. The remaining simulation parameters are as shown in Table 1. Figure 30 illustrates the scene and target layout diagrams corresponding to the two sets of simulation experiments.
Based on the analysis in Section 5, the spatial variation analysis parameters obtained for the two simulations are listed in Table 2. According to the last four rows of the table, both simulation experiments need to consider the influence of k_{2,a1} on φ_0, while the other azimuth spatial variation terms can be neglected.
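In practice, this decision reduces to comparing the actual azimuth swath with the four maximum azimuth swaths. The following Python sketch is an illustrative helper of ours (not part of the processing chain); the threshold values in the usage example are taken from the Simulation 1 column of Table 2.

def azimuth_variation_to_correct(L_real, L_noavRCM, L_k21, L_k22, L_k31):
    # All lengths in km; an empty list means azimuth spatial variation can be ignored.
    terms = []
    if L_real > L_noavRCM:
        terms.append("k_{2,a1} on RCM (azimuth blocking, scenario 1)")
    if L_real > L_k21:
        terms.append("k_{2,a1} on the azimuth phase (handled by ANCS)")
    if L_real > L_k22:
        terms.append("k_{2,a2} on the azimuth phase (azimuth blocking, scenario 2)")
    if L_real > L_k31:
        terms.append("k_{3,a1} on the azimuth phase (azimuth blocking, scenario 2)")
    return terms

# Simulation 1: a 10 km azimuth swath
print(azimuth_variation_to_correct(10.0, 29.90, 1.86, 164.27, 344.08))
# -> ['k_{2,a1} on the azimuth phase (handled by ANCS)']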
To better demonstrate the effectiveness of the proposed algorithm, we adopt the algorithm from [22] as a comparison (referred to as the reference algorithm below). Specifically, the reference algorithm does not correct azimuth spatial variation, whereas the proposed algorithm adds ANCS-based azimuth spatial variation correction on top of it. Figure 31 and Figure 32 show the imaging results of the reference algorithm for simulations 1 and 2, respectively. To observe the differences at various positions in the scene more clearly, points P3, P5, and P7 are selected for the presentation of each simulation. In each figure, (a), (b), and (c) show the two-dimensional focused image, azimuth profile, and range profile of P3, P5, and P7, respectively. The figures show that, for both simulations, the impact of azimuth spatial variation is non-negligible and the imaging quality of the reference algorithm is unsatisfactory.
Figure 33 and Figure 34 present the imaging results of the proposed algorithm for simulations 1 and 2, respectively. In each figure, (a), (b), and (c) show the two-dimensional focused image, azimuth profile, and range profile of P3, P5, and P7, respectively. The figures show that the effects of azimuth spatial variation are almost completely eliminated after ANCS processing, and excellent imaging results are achieved in both simulations.

6.2. Processing of Real Data

To further demonstrate the effectiveness of the proposed algorithm, we conducted imaging verification using the same measured data as in [22]. The data were acquired by the GF-3 satellite, with an orbital altitude of approximately 755 km [23]. The transmitted signal is in the C-band, and the radar operates in spotlight mode, achieving a nominal resolution of 1 m in both the azimuth and range (ground range) dimensions, with a nominal swath width of 10 km.
First, we analyze the imaging parameters using the two-dimensional spatial variation analysis method proposed in this paper; the spatial-variation-related parameters are listed in Table 3. The table shows that the maximum azimuth swaths are all larger than the actually observed azimuth swath, so the influence of azimuth spatial variation can be disregarded.
Nevertheless, to validate the effectiveness of the proposed imaging algorithm, we still carried out imaging experiments with both the reference algorithm and the proposed algorithm. First, we performed a point-target simulation using the GF-3 satellite and radar parameters, with the scene setup identical to that in Figure 30a. Figure 35 and Figure 36 show the imaging results of the reference algorithm and the proposed algorithm, respectively, and Table 4 compares the imaging metrics of the two algorithms. The range resolution refers to the slant-range resolution and has not been converted to ground-range resolution.
From the figures and the table, the following conclusions can be drawn: (1) In the imaging results of the reference algorithm, the worst peak sidelobe ratio of the azimuth profiles is 13.15 dB, which still satisfies the 13 dB threshold set in [22]; therefore, under these parameter conditions, the impact of azimuth spatial variation can generally be ignored. (2) Nevertheless, azimuth spatial variation still degrades the PSLR slightly, so the imaging quality is further improved after correction with the proposed algorithm.
Finally, we process the real data acquired by GF-3. Figure 37 presents the imaging result of the entire scene. For detailed analysis, we selected two representative regions (marked by red boxes in Figure 37): one from the center of the scene and the other from the edge. Enlarged views of these regions are shown in Figure 38 and Figure 39, respectively; in each of them, (a) shows the result of the reference algorithm and (b) the result of the proposed algorithm.

7. Discussion

To further demonstrate the effectiveness of the proposed algorithm, we analyze the imaging result metrics of the two sets of simulation experiments described above, which are presented in Table 5 and Table 6, respectively. Beyond confirming that the proposed algorithm adequately corrects the azimuth spatial variation and achieves better imaging results, these metrics provide additional information: (1) The range-dimension metrics of the two experiments show that the reference algorithm corrects the spatial range variation so effectively that good imaging quality is achieved in the range dimension regardless of whether the azimuth spatial variation is removed. (2) Comparing the metrics of P3 and P7 in Table 6 shows that the impact of azimuth spatial variation on P3 is greater than that on P7, which confirms that, under the same parameters, the azimuth spatial variation increases with θ_off (or R_0), as shown in Figure 11.
Furthermore, we evaluated the imaging quality of the two algorithms on the real data. Visually, the results of the two algorithms are very similar and difficult to distinguish. To provide a clearer comparison, we selected two metrics commonly used to evaluate SAR images: contrast and entropy, where higher contrast and lower entropy indicate better imaging quality. The images in Figure 38 and Figure 39 were assessed with these two metrics, and the results are presented in Table 7. They show that, after azimuth spatial variation correction, the proposed algorithm achieves better imaging quality.
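For completeness, the two metrics can be computed as in the following Python sketch. These are common definitions (standard deviation of the intensity over its mean for contrast, Shannon entropy of the normalized intensity for entropy); the paper does not spell out its exact normalization, so the formulas below are an assumption.

import numpy as np

def image_contrast(img):
    # Ratio of the standard deviation to the mean of the image intensity (higher is better).
    I = np.abs(img) ** 2
    return float(I.std() / I.mean())

def image_entropy(img):
    # Shannon entropy of the intensity distribution normalized to sum to one (lower is better).
    I = np.abs(img) ** 2
    p = I / I.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())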

8. Conclusions

Benefiting from the slant range model based on the zero-Doppler moment established at the beginning of this paper, the two-dimensional spatial variation of HRWS spaceborne SAR originates only from the satellite–ground geometry and is independent of beam rotation. This significantly reduces the azimuth spatial variation component and makes the model applicable to various high-resolution imaging modes.
Given the realistic scenario where the satellite trajectory in the ECEF frame is a non-coplanar curve, this paper establishes a new spaceborne SAR geometric model: CUSM. Based on this model, by developing quantitative descriptions of the satellite trajectory and the slant range history, we are able to quantitatively analyze and model the two-dimensional spatial variation characteristics of HRWS spaceborne SAR.
This, in turn, allows us to precisely analyze the specific impact and extent of two-dimensional spatial variation. By establishing thresholds for the different types of azimuth spatial variation, we derive theoretical guidance on when, and which type of, azimuth spatial variation needs to be considered.
Subsequently, we propose a two-dimensional spatial variation correction method that combines range blocking and azimuth NCS processing. This method requires neither iteration nor interpolation and therefore offers high computational efficiency. Finally, the effectiveness and broad applicability of the algorithm are demonstrated through multiple sets of simulation experiments and real data experiments.

Author Contributions

Conceptualization, Z.H. and Z.D.; methodology, Z.H. and Z.Z.; software, Z.H. and F.H.; validation, Z.H., P.L. and Z.Y.; formal analysis, Z.H. and P.L.; investigation, Z.H.; resources, Z.D.; data curation, Z.D.; writing—original draft preparation, Z.H. and Z.Z.; writing—review and editing, Z.H., Z.Z., Z.D. and F.H.; visualization, Z.H.; supervision, F.H.; project administration, Z.H.; funding acquisition, Z.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 42205142.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank all authors of the references. We are also grateful to the anonymous reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Wu, C.; Liu, K.Y.; Jin, M. Modeling and a Correlation Algorithm for Spaceborne SAR Signals. IEEE Trans. Aerosp. Electron. Syst. 1982, AES-18, 563–575.
2. Eldhuset, K. A New Fourth-Order Processing Algorithm for Spaceborne SAR. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 824–835.
3. Eldhuset, K. Ultra High Resolution Spaceborne SAR Processing. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 370–378.
4. Kim, J.-H.; Younis, M.; Prats-Iraola, P.; Gabele, M.; Krieger, G. First Spaceborne Demonstration of Digital Beamforming for Azimuth Ambiguity Suppression. IEEE Trans. Geosci. Remote Sens. 2012, 51, 579–590.
5. Prats-Iraola, P.; Scheiber, R.; Rodriguez-Cassola, M.; Mittermayer, J.; Wollstadt, S.; De Zan, F.; Bräutigam, B.; Schwerdt, M.; Reigber, A.; Moreira, A. On the Processing of Very High Resolution Spaceborne SAR Data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6003–6016.
6. Xiang, T.; Zhu, D.; Xu, F. Processing of Ultra-High Resolution Spaceborne Spotlight SAR Data Based on One-Step Motion Compensation. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 8933–8936.
7. Liang, D.; Zhang, H.; Fang, T.; Deng, Y.; Yu, W.; Zhang, L.; Fan, H. Processing of Very High Resolution GF-3 SAR Spotlight Data with Non-Start–Stop Model and Correction of Curved Orbit. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2112–2122.
8. Sun, G.-C.; Liu, Y.; Xiang, J.; Liu, W.; Xing, M.; Chen, J. Spaceborne Synthetic Aperture Radar Imaging Algorithms: An Overview. IEEE Geosci. Remote Sens. Mag. 2021, 10, 161–184.
9. Meng, D.; Huang, L.; Qiu, X.; Li, G.; Hu, Y.; Han, B.; Hu, D. A Novel Approach to Processing Very-High-Resolution Spaceborne SAR Data With Severe Spatial Dependence. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7472–7482.
10. Mao, X. Spherical Geometry Algorithm for Space-Borne Synthetic Aperture Radar Imaging. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15.
11. Hernández-Burgos, S.; Gibert, F.; Broquetas, A.; Kleinherenbrink, M.; De la Cruz, A.F.; Gómez-Olivé, A.; García-Mondéjar, A.; i Aparici, M.R. A Fully Focused SAR Omega-K Closed-Form Algorithm for the Sentinel-6 Radar Altimeter: Methodology and Applications. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16.
12. Shibata, M.; Kuriyama, T.; Hoshino, T.; Nakamura, S.; Kankaku, Y.; Motohka, T.; Suzuki, S. SAR Techniques and SAR Processing Algorithm for ALOS-4. In Proceedings of the IGARSS 2022–2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 7449–7451.
13. Farquharson, G.; Castelletti, D.; De, S.; Stringham, C.; Yague, N.; Bes, V.C.; Ryu, J.; Goncharenko, Y. The New Capella Space Satellite Generation: Acadia. In Proceedings of the IGARSS 2023–2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023; pp. 1513–1516.
14. He, F.; Chen, Q.; Dong, Z.; Sun, Z. Processing of Ultrahigh-Resolution Spaceborne Sliding Spotlight SAR Data on Curved Orbit. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 819–839.
15. Wang, P.; Liu, W.; Chen, J.; Niu, M.; Yang, W. A High-Order Imaging Algorithm for High-Resolution Spaceborne SAR Based on a Modified Equivalent Squint Range Model. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1225–1235.
16. Wu, Y.; Sun, G.-C.; Yang, C.; Yang, J.; Xing, M.; Bao, Z. Processing of Very High Resolution Spaceborne Sliding Spotlight SAR Data Using Velocity Scaling. IEEE Trans. Geosci. Remote Sens. 2015, 54, 1505–1518.
17. Tang, S.; Lin, C.; Zhou, Y.; So, H.C.; Zhang, L.; Liu, Z. Processing of Long Integration Time Spaceborne SAR Data with Curved Orbit. IEEE Trans. Geosci. Remote Sens. 2017, 56, 888–904.
18. Liu, W.; Sun, G.-C.; Xia, X.-G.; Chen, J.; Guo, L.; Xing, M. A Modified CSA Based on Joint Time-Doppler Resampling for MEO SAR Stripmap Mode. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3573–3586.
19. Guo, Y.; Wang, P.; Chen, J.; Men, Z.; Cui, L.; Zhuang, L. A Novel Imaging Algorithm for High-Resolution Wide-Swath Space-Borne SAR Based on a Spatial-Variant Equivalent Squint Range Model. Remote Sens. 2022, 14, 368.
20. Ding, Z.; Zheng, P.; Li, H.; Zhang, T.; Li, Z. Spaceborne High-Squint High-Resolution SAR Imaging Based on Two-Dimensional Spatial-Variant Range Cell Migration Correction. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
21. Chen, X.; Hou, Z.; Dong, Z.; He, Z. Performance Analysis of Wavenumber Domain Algorithms for Highly Squinted SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1563–1575.
22. Hou, Z.; Zhang, Z.; Li, P.; Yun, Z.; He, F.; Dong, Z. Range-Dependent Variance Correction Method for High-Resolution and Wide-Swath Spaceborne Synthetic Aperture Radar Imaging Based on Block Processing in Range Dimension. Remote Sens. 2024, 17, 50.
23. Zhang, Q. System Design and Key Technologies of the GF-3 Satellite. Acta Geod. Cartogr. Sin. 2017, 46, 269–277.
Figure 1. Schematic diagram of spaceborne SAR imaging geometry. (In the figure, the red solid line represents the satellite trajectory, the green solid line represents the Earth’s surface, the green dashed line indicates the observation swath, the black solid line denotes the instantaneous beam illumination area, the yellow triangle marks the beam azimuth center plane, the black dash-dotted line shows the connection between satellite and target within the beam azimuth center plane, and the black dashed line represents the zero-Doppler connection between satellite and target).
Figure 2. Schematic diagram of target deployment. (The black rectangle in the figure represents the observation scene coverage, the blue solid lines in (a,b) denote the intersection lines between the beam azimuth center plane and Earth’s surface at different azimuth times, and the red solid line in (c) indicates the intersection line between the zero-Doppler plane and Earth’s surface at different azimuth times.)
Figure 3. Simulation of the target layout in different methods. (a) Target layout results based on the sliding-spotlight mode at the moment of beam center. (b) Target layout results based on the spotlight mode at the moment of beam center. (c) Target layout results for both sliding-spotlight mode and spotlight mode at zero-Doppler moments.
Figure 4. Phase error between the range model and the actual slant range. (a) Simulation results for an azimuth resolution of 0.2 m and an azimuth swath of 20 km. (b) Simulation results for an azimuth resolution of 0.1 m and an azimuth swath of 10 km.
Figure 5. Schematic diagram of CUSM. The green solid line represents the Earth’s surface, and the red solid line depicts the satellite’s orbital path. (a) Geometric relationship between the satellite and Earth in the ECI frame. (b) Geometric relationship between the satellite and Earth in the ECEF frame.
Figure 6. Geometric diagram of a satellite’s elliptical orbit. (The red ellipse represents the satellite’s orbit in the orbital coordinate system, the green circle is the circumcircle of the ellipse, the blue point indicates the focus of the ellipse, a is the semi-major axis, f is the true anomaly, E is the eccentric anomaly, and M is the mean anomaly).
Figure 7. (a) Error between E obtained by the series inversion method and the true value. (b) Error between Equation (10) obtained by the series inversion method and the true value. (c) Error between Equation (11) obtained by the series inversion method and the true value.
Figure 8. Variation of k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 with i. (a) k 2 , a 1 . (b) k 2 , a 2 . (c) k 3 , a 1 .
Figure 9. Variation of k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 with ω . (a) k 2 , a 1 . (b) k 2 , a 2 . (c) k 3 , a 1 .
Figure 10. Variation of k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 with e. (a) k 2 , a 1 . (b) k 2 , a 2 . (c) k 3 , a 1 .
Figure 11. Variation of k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 with θ off . (a) k 2 , a 1 . (b) k 2 , a 2 . (c) k 3 , a 1 .
Figure 12. Variation of k 2 , a 1 , k 2 , a 2 , and k 3 , a 1 with a. (a) k 2 , a 1 . (b) k 2 , a 2 . (c) k 3 , a 1 .
Figure 13. Impact of two-dimensional spatial variation on RCM. (a) Only existing k 2 , a 1 . (b) Only existing k 2 , a 2 . (c) Only existing k 3 , a 1 .
Figure 14. Impact of azimuth spatial variation on RCM. (a) Only existing k 2 , a 1 . (b) Only existing k 2 , a 2 . (c) Only existing k 3 , a 1 .
Figure 15. Variation of maximum swath with resolution when ignoring the azimuth spatial variation of RCM. (a) Numerical plot. (b) Contour plot. (c) Logarithmic plot.
Figure 16. Variation of maximum swath with resolution when ignoring the azimuth spatial variation of RCM (while maintaining equivalent range and azimuth resolutions). (a) Large view. (b) Zoomed-in view.
Figure 17. Impact of two-dimensional spatial variation on APM. (a) Only existing k 2 , a 1 . (b) Only existing k 2 , a 2 . (c) Only existing k 3 , a 1 .
Figure 18. Impact of azimuth spatial variation on APM. (a) Only existing k 2 , a 1 . (b) Only existing k 2 , a 2 . (c) Only existing k 3 , a 1 .
Figure 19. Variation of maximum swath with resolution when ignoring the azimuth spatial variation of APM (in the case where only k 2 , a 1 exists). (a) Large view. (b) Zoomed-in view.
Figure 20. Variation of maximum swath with resolution when ignoring the azimuth spatial variation of APM (in the case where only k 2 , a 2 exists). (a) Large view. (b) Zoomed-in view.
Figure 21. Variation of maximum swath with resolution when ignoring the azimuth spatial variation of APM (in the case where only k 3 , a 1 exists). (a) Large view. (b) Zoomed-in view.
Figure 22. Variation of the four maximum azimuth swaths with azimuth resolution and the semi-major axis. (a) L azim , noavRCM . (b) L azim , noavAPM , k 21 . (c) L azim , noavAPM , k 22 . (d) L azim , noavAPM , k 31 .
Figure 23. Variations of the four maximum azimuth swaths with azimuth resolution and eccentricity. (a) L azim , noavRCM . (b) L azim , noavAPM , k 21 . (c) L azim , noavAPM , k 22 . (d) L azim , noavAPM , k 31 .
Figure 24. Variation of the four maximum azimuth swaths with azimuth resolution and inclination. (a) L azim , noavRCM . (b) L azim , noavAPM , k 21 . (c) L azim , noavAPM , k 22 . (d) L azim , noavAPM , k 31 .
Figure 25. Variation of the four maximum azimuth swaths with azimuth resolution and the argument of periapsis. (a) L azim , noavRCM . (b) L azim , noavAPM , k 21 . (c) L azim , noavAPM , k 22 . (d) L azim , noavAPM , k 31 .
Figure 26. Variation of the four maximum azimuth swaths with azimuth resolution and look angle. (a) L azim , noavRCM . (b) L azim , noavAPM , k 21 . (c) L azim , noavAPM , k 22 . (d) L azim , noavAPM , k 31 .
Figure 27. Variation of the four maximum azimuth swaths with azimuth resolution and wavelength. (a) L azim , noavRCM . (b) L azim , noavAPM , k 21 . (c) L azim , noavAPM , k 22 . (d) L azim , noavAPM , k 31 .
Figure 28. Flowchart of the proposed algorithm. Yellow, purple and blue dashed boxes represent spatial range variation correction, azimuth NCS processing, and azimuth compression, respectively.
Figure 29. Imaging metrics for different azimuth block lengths. (a) PSLR. (b) ISLR. (c) Azimuth resolution.
Figure 30. Schematic diagram of the scenarios and target deployment corresponding to the two sets of simulation experiments. (a) Simulation 1. (b) Simulation 2.
Figure 31. The imaging results of the referenced algorithm in simulation 1. (a) P3. (b) P5. (c) P7.
Figure 32. The imaging results of the referenced algorithm in simulation 2. (a) P3. (b) P5. (c) P7.
Figure 33. The imaging results of the proposed algorithm in simulation 1. (a) P3. (b) P5. (c) P7.
Figure 34. The imaging results of the proposed algorithm in simulation 2. (a) P3. (b) P5. (c) P7.
Figure 35. Point target simulation imaging results using GF-3 real data parameters (reference algorithm). (a) P3. (b) P5. (c) P7.
Figure 36. Point target simulation imaging results using GF-3 real data parameters (proposed algorithm). (a) P3. (b) P5. (c) P7.
Figure 37. Imaging results of GF-3 (whole scene). In the figure, the red box marked with the number “1” represents the selected region at the center of the scene, while the red box marked with the number “2” represents the selected region at the edge of the scene.
Figure 38. Imaging results of GF-3 (region 1). (a) Reference algorithm. (b) Proposed algorithm.
Figure 39. Imaging results of GF-3 (region 2). (a) Reference algorithm. (b) Proposed algorithm.
Table 1. Simulation parameters of LEO spaceborne SAR.
Semi-major axis: 6897.56 km
Orbital inclination: 97.46°
Eccentricity: 0.0013
Longitude of the ascending node
Argument of periapsis: 111.11°
True anomaly
Look angle: 25°
Wavelength: 0.031 m
Table 2. The parameters of the two sets of simulation experiments for spatial variation analysis (Simulation 1 / Simulation 2).
B_r (MHz): 1328.08 / 664.04
B_a (Hz): 62,768.00 / 31,384.00
V_s (m/s): 7084.42 / 7084.42
R_ref (km): 588.76 / 588.76
L_azim,noavRCM (km): 29.90 / 241.50
L_azim,noavAPM,k21 (km): 1.86 / 7.49
L_azim,noavAPM,k22 (km): 164.27 / 329.56
L_azim,noavAPM,k31 (km): 344.08 / 2778.22
Table 3. The parameters of the real data experiments for spatial variation analysis.
B_r (MHz): 177.08
B_a (Hz): 7984.61
V_z (m/s): 6756.13
R_ref (km): 8270.36
L_azim,noavRCM (km): 3705.14
L_azim,noavAPM,k21 (km): 54.80
L_azim,noavAPM,k22 (km): 652.91
L_azim,noavAPM,k31 (km): 16,683.69
Table 4. Point target simulation imaging result metrics using GF-3 real data parameters.
Algorithm, target: azimuth IRW, PSLR (dB), ISLR (dB) / range IRW, PSLR (dB), ISLR (dB)
Reference Algorithm, P3: 0.7446, 13.1567, 10.056 / 0.5526, 13.2506, 9.8105
Reference Algorithm, P5: 0.7540, 13.2642, 10.1049 / 0.5526, 13.2524, 9.8135
Reference Algorithm, P7: 0.7629, 13.2586, 10.1210 / 0.5515, 13.2773, 9.8024
Proposed Algorithm, P3: 0.7429, 13.2658, 10.1177 / 0.5526, 13.2593, 9.8146
Proposed Algorithm, P5: 0.7540, 13.2642, 10.1049 / 0.5526, 13.2554, 9.8136
Proposed Algorithm, P7: 0.7656, 13.2416, 10.0972 / 0.5526, 13.2770, 9.8010
Table 5. Evaluation metrics of imaging quality in simulation 1.
Algorithm, target: azimuth IRW, PSLR (dB), ISLR (dB) / range IRW, PSLR (dB), ISLR (dB)
Reference Algorithm, P3: 0.1021, 12.7699, 9.1658 / 0.1000, 13.2288, 10.0081
Reference Algorithm, P5: 0.0997, 13.3532, 10.6065 / 0.1000, 13.2621, 9.9797
Reference Algorithm, P7: 0.0977, 12.5516, 9.2375 / 0.1000, 13.2482, 9.8236
Proposed Algorithm, P3: 0.0992, 13.3422, 10.5922 / 0.1000, 13.2705, 9.9382
Proposed Algorithm, P5: 0.0997, 13.2634, 10.4491 / 0.1000, 13.2790, 9.9282
Proposed Algorithm, P7: 0.1000, 13.2904, 10.6456 / 0.1000, 13.2727, 9.9384
Table 6. Evaluation metrics of imaging quality in simulation 2.
Algorithm, target: azimuth IRW, PSLR (dB), ISLR (dB) / range IRW, PSLR (dB), ISLR (dB)
Reference Algorithm, P3: 0.1844, 12.8206, 9.7908 / 0.1996, 13.2570, 9.7928
Reference Algorithm, P5: 0.1998, 13.2848, 10.3101 / 0.1996, 13.2536, 9.7913
Reference Algorithm, P7: 0.2108, 13.2212, 10.3475 / 0.2000, 13.2479, 9.7831
Proposed Algorithm, P3: 0.1840, 13.2188, 10.2805 / 0.1996, 13.2545, 9.7910
Proposed Algorithm, P5: 0.1998, 13.2853, 10.3105 / 0.1996, 13.2544, 9.7916
Proposed Algorithm, P7: 0.2099, 13.2951, 10.2949 / 0.2000, 13.2554, 9.7868
Table 7. Imaging quality of real data.
Region 1, Reference Algorithm: contrast 7.1272, entropy 1.6524
Region 1, Proposed Algorithm: contrast 7.1274, entropy 1.6518
Region 2, Reference Algorithm: contrast 7.3634, entropy 1.6062
Region 2, Proposed Algorithm: contrast 7.4450, entropy 1.5920