Article

A Robust Track Estimation Method for Airborne SAR Based on Weak Navigation Information and Additional Envelope Errors

1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Technology in Geo-Spatial Information Processing and Application Systems, Chinese Academy of Sciences, Beijing 100190, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
4 Suzhou Key Laboratory of Microwave Imaging, Processing and Application Technology, Suzhou 215124, China
5 Suzhou Aerospace Information Research Institute, Suzhou 215124, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(4), 625; https://doi.org/10.3390/rs16040625
Submission received: 10 January 2024 / Revised: 4 February 2024 / Accepted: 5 February 2024 / Published: 7 February 2024
(This article belongs to the Section Remote Sensing Image Processing)

Abstract: As miniaturization technology has progressed, Synthetic Aperture Radar (SAR) can now be mounted on Unmanned Aerial Vehicles (UAVs) to carry out observational tasks. Influenced by airflow, UAVs inevitably experience deviations or vibrations during flight. Under cost constraints, the precision of the measurement equipment onboard UAVs may be relatively low. Nonetheless, high-resolution imaging demands more accurate track information. It is therefore of great importance to estimate high-precision tracks in the presence of both motion and measurement errors. This paper presents a robust track estimation method for airborne SAR that makes use of both envelope and phase errors. Firstly, weak navigation information is employed for motion compensation, which removes a significant portion of the motion error. Subsequently, the track is initially estimated using the additional envelope errors introduced by the Extended Omega-K (EOK) algorithm. The track is then refined using a phase-based approach. Furthermore, this paper presents the calculation method of the compensated component for each target and analyzes its accuracy from both theoretical and simulation perspectives. The track estimation and imaging results in simulations and real data experiments validate the effectiveness of the proposed method, with an estimation accuracy within 5 cm in the real data experiments.

1. Introduction

SAR is an active imaging radar that operates independently of lighting and weather conditions. In recent years, there has been a trend towards the miniaturization of SAR systems [1,2,3,4], allowing for their installation on UAVs. Due to airflow and other influencing factors, it is challenging for UAVs to maintain an ideal flight track, introducing motion errors into the raw echo. Moreover, under cost constraints, the measurement equipment carried by UAVs often fails to offer precise track information. However, an accurate track is necessary for precise motion compensation. Consequently, in the presence of both motion errors and measurement errors, precise track estimation has become a critical issue.
A key processing technique for handling motion errors within signals is image autofocus. SAR autofocus methods [5,6,7,8,9,10] can estimate motion errors in a signal, thereby improving the focusing quality of images. These methods can be divided into two categories: those based on phase gradient autofocus (PGA) [11,12,13,14] and map drift (MD) [15,16,17,18], and those that utilize image metrics such as entropy [19,20,21], sharpness [22,23,24,25], and contrast [26]. In recent research, there have been some improved and advanced autofocus algorithms with notable performance. Zhang et al. [27] introduced a time-domain autofocus algorithm based on generalized sharpness metrics and Accelerated Factorized Back-Projection (AFBP). This method produced high-resolution imaging results with superior quality compared to PGA without incurring significant additional processing time. Chen et al. [28] proposed a two-dimensional space-variant motion error estimation and compensation method for ultrahigh-resolution airborne stepped-frequency SAR which fully considered the space-variant range error, the space-variant azimuth error, and the coupling error. The latest overview [29] provided a comprehensive summary of SAR image autofocus algorithms, indicating that the majority of these algorithms were focused on high-resolution imaging scenes. However, many SAR autofocus methods are primarily focused on enhancing image quality without providing an accurate radar track.
Some algorithms have addressed this issue by implementing track estimation while achieving image autofocus. Several methods still rely on MD. Xing et al. [30] divided azimuth sub-apertures and estimated the displacement in the Line-of-Sight (LOS) direction. Bezvesilniy et al. [31] presented a local quadratic MD algorithm which estimated the local quadratic phase error and further obtained the residual acceleration. A limitation was that the phase error was modeled, and multiple iterations were required. Among methods based on PGA, Liang et al. [32] proposed an approach based on a hybrid coordinate system. The focusing quality post-implementation was akin to that of PGA. Li et al. [33] proposed a robust motion error estimation algorithm called WTLS-based Autofocus (WTA). This method calculated the double gradients of phase errors, filtered the gradients through a polynomial fitting, and solved the overdetermined linear equation by using the Weighted Total Least Squares (WTLS) method. The focusing quality of this method surpassed that of PGA. Comparison experiments in the work of Li et al. [34] indicated that WTA was a state-of-the-art algorithm which exhibited optimal performance in scenes with a certain quantity of strong scatterers. Ding et al. [35] introduced an autofocus framework for ultrawideband ultrawidebeam SAR data, which contained Quasi-Polar Grid-based Fast Factorized Back-Projection (QPG-FFBP) imaging, Multiple Sub-Band Local Autofocus (MSBLA), and sub-aperture track deviation estimation and fusion.
The following are some methods [36,37] based on image metrics. Ran et al. [38] obtained a local phase error function by maximizing the image sharpness and then solved 3D motion errors by using the WTLS method. Zhang et al. [39] also proposed an improved 3D motion error estimation method which estimated nonspatial-variant phase errors by maximizing the maximum pixel value. A mixed integer programming problem was solved with a combination of the Genetic Algorithm (GA) and Tikhonov regularization. However, the main limitations of such metric-based methods were the convergence speed and computational cost.
Additionally, some other track estimation algorithms utilize Factorized Geometrical Autofocus (FGA). Torgrimsson et al. [40] achieved image focus by adjusting track parameters. They also presented a new search method [41] from local to global to enhance efficiency. Building on these two algorithms, they proposed a method [42] that was completely independent of navigation information. However, the search space for methods using FGA was always six-dimensional, leading to inefficient processing.
Many of the aforementioned methods primarily rely on raw data and lack the utilization of weak navigation information provided by measurement sensors. Li et al. [43] highlighted this issue and proposed a corresponding track estimation method. However, this method required multiple iterative steps, which was inefficient, and only experimented with low-resolution data, rendering it unconvincing.
Moreover, the exploitation of envelope errors is crucial. Certain methods, such as WTA, lack the utilization of envelope errors, yet substantial envelope errors are always present in the signal. Operations such as phase estimation become difficult to perform when the energy of the signal is dispersed across multiple range bins.
Consequently, we propose a robust track estimation algorithm that sequentially exploits weak navigation information, additional envelope errors, and phase errors. Initially, we utilize weak navigation information for preliminary motion compensation [44,45]. While not highly accurate, this step can mitigate a significant portion of motion errors. This is particularly beneficial when the signal experiences substantial motion errors, preventing the complete defocusing of the image and providing a solid foundation for subsequent estimations. Next, we consider and use the additional envelope errors [46] introduced by the EOK algorithm [47]. Owing to the envelope errors being amplified by the EOK algorithm, the signal’s energy is spread over multiple range bins, which is not conducive to utilizing phase or performing certain other operations. Thus, we use the envelope errors to preliminarily estimate the track. After these two steps, the residual errors in the signal are significantly reduced, allowing us to employ a phase-based method [33] for refined track estimation. Moreover, as we perform motion compensation on the raw data, it necessitates the calculation of compensated components for each target. This paper provides the calculation methods and analyzes the accuracy from both theoretical and simulation perspectives.
The main contributions of this article are as follows.
  • Using weak navigation information to perform preliminary motion compensation, which ensures that only residual errors remain in the signal, preventing complete image defocusing and enhancing the algorithm’s broad applicability;
  • Initial track estimation is carried out utilizing the additional envelope errors introduced by the EOK algorithm, aimed at further reducing the residual errors. This results in a straighter envelope, which provides a solid foundation for subsequent estimations;
  • The article presents the calculation method of the compensated component for each target and analyzes the accuracy from both theoretical and simulation perspectives.
The article is organized as follows. Section 2 describes the basic algorithms and introduces the method of the proposed track estimation algorithm. The results of simulations and real data experiments are presented in Section 3. Specific discussions are carried out in Section 4, and Section 5 concludes the article.

2. Materials and Methods

2.1. Basic Methodologies

2.1.1. Principle of Motion Compensation (MoCo)

When a SAR platform does not move uniformly along an ideal straight line but exhibits deviations, jitter, or other errors, it is necessary to apply motion compensation to the raw SAR echoes to mitigate motion errors in the signal as much as possible, thereby enhancing the focusing quality of the image.
The SAR signal after range compression can be expressed as
$$S_r(\tau,\eta)=\mathrm{rect}\!\left(\frac{\eta}{T_{\mathrm{syn}}}\right)\mathrm{sinc}\!\left(\tau-\frac{2R_r(\eta)}{c}\right)\exp\!\left(-j\,\frac{4\pi R_r(\eta)}{\lambda}\right),$$
where $\eta$ and $\tau$ are the azimuth time and the range time, $T_{\mathrm{syn}}$ is the synthetic aperture time, $R_r(\eta)$ is the target's real slant range, $c$ is the speed of light, and $\lambda$ is the wavelength.
The equation for the MoCo can be represented as
$$\Delta R(\eta)=\cos(\theta)\cdot\left[\Delta x(\eta)\cdot\sin(\varphi)+\Delta z(\eta)\cdot\cos(\varphi)\right],$$
where $\Delta x$ is the motion error in the $x$ direction and $\Delta z$ is the motion error in the $z$ direction, $\theta$ is the squint angle, and $\varphi$ is the incident angle. After MoCo, the signal can be written as
$$S_m(\tau,\eta)=\mathrm{rect}\!\left(\frac{\eta}{T_{\mathrm{syn}}}\right)\mathrm{sinc}\!\left(\tau-\frac{2\left(R_r(\eta)-\Delta R(\eta)\right)}{c}\right)\exp\!\left(-j\,\frac{4\pi\left(R_r(\eta)-\Delta R(\eta)\right)}{\lambda}\right).$$
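The MoCo projection above reduces to a one-line function. Below is a minimal Python sketch of the line-of-sight correction (the function name and the example angles are illustrative assumptions, not from the paper):

```python
import numpy as np

def moco_correction(dx, dz, squint, incidence):
    """Line-of-sight range correction of the MoCo equation:
    dR = cos(theta) * (dx*sin(phi) + dz*cos(phi))."""
    return np.cos(squint) * (dx * np.sin(incidence) + dz * np.cos(incidence))

# Example: 0.3 m lateral and 0.2 m vertical deviation,
# zero squint, 45-degree incidence angle (values are illustrative).
dR = moco_correction(0.3, 0.2, 0.0, np.deg2rad(45.0))
```

The resulting $\Delta R(\eta)$ is then applied to both the envelope and the phase of the range-compressed signal, as in Equation (3).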

2.1.2. EOK Algorithm and Additional Envelope Errors

The EOK algorithm is a variant of the classic high-precision SAR imaging algorithm, Omega-K. It modifies the formula for Stolt mapping to prevent azimuth compression, thereby creating favorable conditions for the utilization of envelope and phase information.
After motion compensation, the signal with residual errors can be expressed as
$$S_{\mathrm{res}}(\tau,\eta)=\mathrm{rect}\!\left(\frac{\eta}{T_{\mathrm{syn}}}\right)\mathrm{sinc}\!\left(\tau-\frac{2\left(R_0(\eta)+\Delta R_{\mathrm{res}}(\eta)\right)}{c}\right)\exp\!\left(-j\,\frac{4\pi\left(R_0(\eta)+\Delta R_{\mathrm{res}}(\eta)\right)}{\lambda}\right),$$
where $R_0(\eta)$ is the ideal slant range and $\Delta R_{\mathrm{res}}(\eta)$ represents the residual range error.
For the convenience of writing, we omit the amplitude weighting of the signal. Thus, after the range Fourier Transform (FT), we can obtain
$$S_{\mathrm{res}}(f_r,\eta)=\exp\!\left(-j\,\frac{4\pi(f_0+f_r)\left(R_0(\eta)+\Delta R_{\mathrm{res}}(\eta)\right)}{c}\right),$$
where $f_0$ is the carrier frequency and $f_r$ is the range frequency.
As shown in Figure 1, the ideal range history, $R_0(\eta)$, can be expressed as
$$R_0(\eta)=\sqrt{(r_0\cos\theta)^2+\left(v\eta-d-d_0\right)^2},$$
where $v$ is the velocity and $r_0$ is the range from the scene center, $O$, to the radar. The distance between the scene center, $O$, and the target, $P$, is $d$, and $d_0=r_0\sin(\theta)$.
Then, by utilizing the Principle of Standing Phase (POSP) to perform the azimuth FT on Equation (5), a two-dimensional spectrum can be obtained as follows:
$$S_{\mathrm{res}}(f_r,f_\eta)=\exp\!\left\{-j\left[\sqrt{\left(\frac{4\pi(f_0+f_r)}{c}\right)^{2}-\left(\frac{2\pi f_\eta}{v}\right)^{2}}\,\cos(\theta)+\frac{2\pi f_\eta}{v}\,\sin(\theta)\right]r_0-j\,\frac{2\pi f_\eta}{v}\,d\right\}\cdot\exp\!\left(-j\,\frac{4\pi(f_0+f_r)}{c}\,\Delta R_{\mathrm{res}}(\eta^{*})\right),$$
where $f_\eta$ represents the azimuth frequency and $\eta^{*}$ is the stationary phase point.
For this two-dimensional spectrum, we perform a Stolt mapping of the EOK algorithm as follows:
$$\sqrt{\frac{(f_0+f_r)^{2}}{c^{2}}-\frac{f_\eta^{2}}{4v^{2}}}\,\cos(\theta)\;\longrightarrow\;f+\sqrt{\frac{f_0^{2}}{c^{2}}-\frac{f_\eta^{2}}{4v^{2}}}\,\cos(\theta),$$
where f is the new range frequency after Stolt mapping.
After Stolt mapping, the two-dimensional spectrum can be written as follows:
$$S_{\mathrm{res}}(f,f_\eta)=\exp\!\left\{-j\,\frac{4\pi}{c}\left[f\cdot r_0+\sqrt{f_0^{2}-\frac{c^{2}f_\eta^{2}}{4v^{2}}}\cdot\cos(\theta)\cdot r_0+\frac{c f_\eta\left(d+d_0\right)}{2v}\right]\right\}\cdot\exp\!\left(-j\,\frac{4\pi}{c}\,\Delta\!\left(f,f_\eta\right)\right),$$
where $\Delta(f,f_\eta)$ represents the impact of the residual errors on the ideal spectrum.
Applying the azimuth and the range Inverse Fourier Transform (IFT), the two-dimensional time-domain signal can be expressed as
$$S_{\mathrm{res}}(t,\eta)=\omega_r\!\left(t-\frac{2\left(r_0+\Delta R'_{\mathrm{res}}(\eta)\right)}{c}\right)\cdot\exp\!\left(-j\,\frac{4\pi f_0 R_0(\eta)}{c}\right)\cdot\exp\!\left(-j\,\frac{4\pi f_0\,\Delta R_{\mathrm{res}}(\eta)}{c}\right),$$
where $\Delta R'_{\mathrm{res}}(\eta)$ represents the envelope error, which can be written as
$$\Delta R'_{\mathrm{res}}(\eta)=\Delta R_{\mathrm{res}}(\eta)+\frac{r_0\sin(\theta)}{\cos^{2}(\theta)}\cdot\frac{d\Delta R_{\mathrm{res}}(\eta)}{d\eta}.$$
Comparing Equation (4) with Equation (10), the envelope error changes from $\Delta R_{\mathrm{res}}(\eta)$ to $\Delta R'_{\mathrm{res}}(\eta)$ after the EOK algorithm is applied [46]. The additional envelope error produced is $\frac{r_0\sin(\theta)}{\cos^{2}(\theta)}\cdot\frac{d\Delta R_{\mathrm{res}}(\eta)}{d\eta}$, which is the major component of the entire envelope error, since $\Delta R_{\mathrm{res}}(\eta)$ itself can be considered to remain within one range bin.

2.1.3. Principle of WTA Algorithm

The WTA algorithm [33] is an advanced [29,34] track estimation method based on phase errors which performs well in high-resolution conditions by using strong scatterers. The principal steps and corresponding equations are as follows.
(1)
Estimating the double gradients of phase errors
The signal within a range bin that contains a phase error, $\Delta R_p(\eta)$, can be expressed as
$$S_p(\eta)=\exp\!\left(-j\,\frac{4\pi f_0 R_0(\eta)}{c}\right)\cdot\exp\!\left(-j\,\frac{4\pi f_0\,\Delta R_p(\eta)}{c}\right).$$
Using the following equation to perform an azimuth dechirp on $S_p(\eta)$, we obtain
$$H_d=\exp\!\left(j\pi K_a\eta^{2}\right),$$
where $K_a$ represents the azimuth chirp rate. The dechirped signal is converted to the azimuth frequency domain, where the targets can be roughly focused. Subsequently, center shifting and windowing are performed. After this processing, the signal is converted back to the azimuth time domain, yielding $S_e$ for the phase error estimation. To mitigate the impact of the residual linear phase, the double gradients of the phase errors, $\ddot{\Phi}_e$, are estimated. The estimation kernel is as follows:
$$\ddot{\Phi}_e(l)=\arg\!\left\{S_e(l-1)\cdot\left[\bar{S}_e(l)\right]^{2}\cdot S_e(l+1)\right\},$$
where $l$ represents the azimuth index, $\bar{(\,\cdot\,)}$ denotes the complex conjugate, and $\arg(\cdot)$ represents the phase extraction operation.
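As a concrete illustration of the estimation kernel above, a hypothetical NumPy helper is sketched below; for a pure quadratic phase the double gradient is constant, which serves as a quick sanity check (the function name and test values are assumptions, not the authors' implementation):

```python
import numpy as np

def double_phase_gradient(s):
    """Double gradient of the phase error of a complex azimuth signal s:
    phi''(l) = arg{ s(l-1) * conj(s(l))^2 * s(l+1) }."""
    prod = s[:-2] * np.conj(s[1:-1])**2 * s[2:]
    return np.angle(prod)

# Sanity check: for phi(l) = a*l^2 the double gradient equals 2a everywhere.
l = np.arange(64)
a = 1e-3
ddphi = double_phase_gradient(np.exp(1j * a * l**2))
```

Working on the conjugate product rather than on unwrapped phases keeps the estimate robust to phase wrapping, as long as the true double gradient stays below $\pi$ per sample.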
(2)
Performing the polynomial fitting
Due to the influence of noise, large spikes are present in the estimated double gradients of the phase errors. To enhance the accuracy, it is essential to minimize this estimation error as much as possible. Therefore, the WTA method first integrates the estimated double gradients, followed by polynomial fitting of the integrated phase. Finally, by taking the second derivative of the fitted phase error, the filtered double gradient of the phase error can be obtained, which may be utilized for subsequent track estimation.
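The filtering step just described (integrate the noisy double gradients up to a phase, fit a polynomial, then differentiate the fit twice) might be sketched as follows; the helper name and the polynomial degree are illustrative assumptions:

```python
import numpy as np

def filter_double_gradient(ddphi, deg=3):
    """Smooth noisy double phase gradients, WTA-style: integrate twice
    to an approximate phase, fit a polynomial to it, then return the
    second derivative of the fit evaluated at each sample."""
    l = np.arange(len(ddphi), dtype=float)
    phase = np.cumsum(np.cumsum(ddphi))        # integrated (discrete) phase
    fit = np.poly1d(np.polyfit(l, phase, deg))
    return np.polyder(fit, 2)(l)               # filtered double gradient
```

For a noiseless constant double gradient the routine returns the same constant, confirming that the integrate-fit-differentiate round trip preserves the underlying error while suppressing sample-to-sample spikes.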
(3)
Solving the track error with the WTLS method
The WTLS method amalgamates the advantages of both Weighted Least Squares (WLS) and Total Least Squares (TLS). The WTLS method consistently performs well when the variances of each estimation differ or when the coefficient matrix of the equations is inaccurate. In summary, it is more robust when solving systems of linear equations. The formula for WTLS is succinctly summarized as follows.
The linear system of equations to be solved can be represented as
$$\ddot{\Phi}=H\ddot{D}+e,$$
where $\ddot{\Phi}$ is the double gradient of the phase errors, $H$ is the coefficient matrix, $\ddot{D}$ represents the track error gradients to be estimated, and $e$ is the error matrix.
Firstly, Singular Value Decomposition (SVD) can be performed as follows:
$$\left[\,H\;\;\ddot{\Phi}\,\right]=USV^{T}.$$
And $U$ can be decomposed as
$$U=\left[\,U_S\;\;u\,\right],$$
where $U_S$ consists of the components of $U$ that correspond to the larger singular values.
The solution to the linear system of equations derived from the WTLS algorithm can be expressed as $\ddot{D}_{\mathrm{WTLS}}$:
$$\ddot{D}_{\mathrm{WTLS}}=\left(H^{T}P_{sw}H\right)^{-1}H^{T}P_{sw}\ddot{\Phi},$$
where
$$P_{sw}=P_s W_T^{-1}P_s,\quad W_T=\mathrm{diag}\!\left(\mathrm{var}\!\left(P_s\ddot{\Phi}\right)\right),\quad P_s=U_S U_S^{T}.$$
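A compact NumPy sketch of this WTLS solve is given below. Note that the construction of the weight matrix $W_T$ is simplified here to a single scalar variance, and the function signature is an assumption for illustration, not the reference implementation of [33]:

```python
import numpy as np

def wtls_solve(H, phi_dd, rank):
    """Sketch of the WTLS solution.
    H: (N, p) coefficient matrix, phi_dd: (N,) double phase gradients,
    rank: number of dominant singular values kept for U_S."""
    # SVD of the augmented matrix [H  phi_dd].
    A = np.column_stack([H, phi_dd])
    U, S, Vt = np.linalg.svd(A, full_matrices=True)
    U_S = U[:, :rank]                  # dominant left-singular subspace
    P_s = U_S @ U_S.T                  # projection onto that subspace
    # Simplified weighting: scalar variance of the projected gradients.
    w = np.var(P_s @ phi_dd)
    W_T = np.eye(len(phi_dd)) * w
    P_sw = P_s @ np.linalg.inv(W_T) @ P_s
    # Weighted normal equations.
    return np.linalg.solve(H.T @ P_sw @ H, H.T @ P_sw @ phi_dd)
```

When the observations lie exactly in the column space of $H$, the projection is the identity on the data and the solve reduces to ordinary least squares, recovering the true gradients.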

2.2. Novel Algorithm Methodology

As shown in Figure 2, we provide an overall flowchart of the proposed method. The algorithm initially applies motion compensation using weak navigation information, followed by EOK imaging to obtain a two-dimensional time-domain image that is uncompressed in the azimuth direction. The subsequent operations are divided into two parts: one calculates the compensated component of each target during motion compensation, and the other estimates the residual errors. The estimation of the residual errors is performed twice. The initial estimation, shown in blue in the figure, utilizes the additional envelope error of the signal to estimate the residual range error. This is then combined with the compensated portion to obtain the total range error, which leads to the initial estimate of the track. The track from the preliminary estimation is used for re-motion compensation. When the process reaches the estimation of residual errors again, it is the signal's phase error that is estimated, shown in green in the figure. By estimating the residual phase error and combining it with the compensated part, a refined estimate of the track can ultimately be obtained. The detailed implementation steps of the algorithm and the relevant formulas are introduced in the subsequent subsections.

2.2.1. Pre-Processing of Raw Data

Initially, we perform range compression and MoCo on the raw data to mitigate a significant portion of the motion errors. Subsequently, we apply the EOK algorithm to obtain a two-dimensional time-domain image that is uncompressed in the azimuth direction. It is important to note that the azimuth direction must be divided into sub-apertures, with all the following operations conducted within azimuth sub-apertures.
Due to the presence of residual errors after motion compensation in the signal, large additional envelope errors exist in the image processed by the EOK algorithm. The energy of the signal is dispersed across multiple range bins, making it challenging to utilize the signal within one range bin for operations such as phase error estimation. Therefore, the additional envelope errors should be thoroughly considered and exploited.

2.2.2. The Initial Track Estimation Based on Additional Envelope Errors

(1)
Solving the residual range errors
The signal processed by the EOK algorithm has large envelope errors, which are
$$\Delta R'_{\mathrm{res}}(\eta)=\Delta R_{\mathrm{res}}(\eta)+\Delta R_{\mathrm{ad}}(\eta),$$
where $\Delta R_{\mathrm{res}}(\eta)$ is the original residual range error after MoCo, which is considered comparatively small. Conversely, the additional envelope error, $\Delta R_{\mathrm{ad}}(\eta)$, introduced by the EOK algorithm constitutes the major component of the entire envelope error. $\Delta R_{\mathrm{ad}}(\eta)$ can be represented as
$$\Delta R_{\mathrm{ad}}(\eta)=\frac{r_0\sin(\theta)}{\cos^{2}(\theta)}\cdot\frac{d\Delta R_{\mathrm{res}}(\eta)}{d\eta},$$
i.e., $\Delta R_{\mathrm{ad}}(\eta)$ is proportional to $\frac{d\Delta R_{\mathrm{res}}(\eta)}{d\eta}$, the first-order derivative of the original residual error, $\Delta R_{\mathrm{res}}(\eta)$.
Therefore, we can extract the total envelope error, $\Delta R'_{\mathrm{res}}(\eta)$, treat it as approximately equal to the additional error, $\Delta R_{\mathrm{ad}}(\eta)$, and acquire the first-order derivative of the residual error as
$$\frac{d\Delta R_{\mathrm{res}}(\eta)}{d\eta}=\frac{\cos^{2}(\theta)}{r_0\sin(\theta)}\,\Delta R'_{\mathrm{res}}(\eta).$$
To further eliminate the influence of the linear term, we take the derivative of the above result to obtain the second-order derivative of the residual range error, $\frac{d^{2}\Delta R_{\mathrm{res}}(\eta)}{d\eta^{2}}$.
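The two steps above, scaling the extracted envelope error and then differentiating numerically, could look like the following sketch; the sinusoidal residual error in the example is purely illustrative:

```python
import numpy as np

def residual_derivatives(envelope_err, eta, r0, squint):
    """First- and second-order derivatives of the residual range error,
    recovered from the extracted envelope error, which is assumed to be
    dominated by the additional term introduced by the EOK algorithm."""
    scale = np.cos(squint)**2 / (r0 * np.sin(squint))
    d1 = scale * envelope_err          # d(Delta R_res)/d(eta)
    d2 = np.gradient(d1, eta)          # numeric second derivative
    return d1, d2

# Illustration: a sinusoidal residual error at r0 = 600 m, -7 deg squint.
eta = np.linspace(0.0, 2.0, 401)
r0, sq = 600.0, np.deg2rad(-7.0)
d_true = 0.01 * np.pi * np.cos(np.pi * eta)        # derivative of 0.01*sin(pi*eta)
env = (r0 * np.sin(sq) / np.cos(sq)**2) * d_true   # modeled additional envelope error
d1, d2 = residual_derivatives(env, eta, r0, sq)    # d1 recovers d_true
```

Note that the scale factor vanishes at zero squint ($\sin\theta = 0$), so the additional envelope error carries usable information only in squinted geometries.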
(2)
Calculating the compensated range errors
We have estimated the residual range errors. However, to solve for the entire track, the compensated component must be calculated. In MoCo, we explicitly know the compensation values for each range bin. Nevertheless, due to the energy of each target being dispersed across multiple range bins, we cannot directly ascertain the compensation values for each target. Therefore, we need to perform calculations during the processing.
First, we need to calculate the position of the target in the time domain. Since the position can be derived from the focused image, we perform an azimuth dechirp to obtain the roughly focused image in the azimuth frequency domain. The location of the target in the azimuth frequency domain is $K_a\cdot t_d$, where $K_a$ is the azimuth chirp rate, which can be expressed as
$$K_a=\frac{2v^{2}\cos^{2}(\theta)}{\lambda R_c}.$$
$R_c$ is the slant range when the beam center crosses the target, and $t_d$ is the time when the beam center crosses the target. Therefore, the azimuth time, $t_d$, can be calculated from the position of the target in the frequency domain. Subsequently, the $y$-coordinate of the target in the azimuth direction can be calculated as
$$y=v t_d+R_c\sin(\theta).$$
Moreover, the $x$-coordinate of the target in the range direction can be calculated as
$$x=\sqrt{\left(R_c\cos(\theta)\right)^{2}-H_0^{2}},$$
where $H_0$ is the reference height of the track. Thus far, we have obtained the time-domain coordinates of the target, $(x,y)$.
To obtain the compensated range error, we first compute the target's range history. In the absence of precise track information, an approximate range history is calculated from the measured track data. Suppose the measurement data are $(x_m(\eta),\,y_m(\eta),\,z_m(\eta))$; then the approximate range history can be expressed as
$$R_m=\sqrt{\left(x_m(\eta)-x\right)^{2}+\left(y_m(\eta)-y\right)^{2}+z_m(\eta)^{2}}.$$
The calculated range history is relatively rough. However, upon completion of the initial track estimation, more accurate information becomes available. Subsequently, the calculation of the range history will be more precise, which will contribute to ensuring the accuracy of the entire estimation process.
Suppose $R_t=R_m\cos(\theta)$; then, according to the MoCo equation, the compensated component of the target can be calculated as follows:
$$\Delta R_e=\cos(\theta)\cdot\left[\Delta x\cdot\frac{\sqrt{R_t^{2}-H_0^{2}}}{R_t}+\Delta z\cdot\frac{H_0}{R_t}\right].$$
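The coordinate and compensated-component calculations above can be collected into a pair of hypothetical helpers (the names and the zero-squint example values are assumptions for illustration):

```python
import numpy as np

def target_coordinates(Rc, td, v, squint, H0):
    """Time-domain target coordinates: y from the beam-center crossing
    time, x from the slant range and the reference track height."""
    y = v * td + Rc * np.sin(squint)
    x = np.sqrt((Rc * np.cos(squint))**2 - H0**2)
    return x, y

def compensated_component(Rm, dx, dz, squint, H0):
    """Compensated range error of one target, with R_t = R_m * cos(theta)."""
    Rt = Rm * np.cos(squint)
    return np.cos(squint) * (dx * np.sqrt(Rt**2 - H0**2) / Rt
                             + dz * H0 / Rt)

# Example with zero squint, so that R_t = R_m (values are illustrative).
x, y = target_coordinates(600.0, 1.5, 50.0, 0.0, 411.02)
dRe = compensated_component(600.0, 0.1, 0.1, 0.0, 411.02)
```

In effect, the per-range-bin compensation applied during MoCo is re-projected onto each target's own geometry, which is what makes the later summation of residual and compensated errors consistent.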
(3)
Estimating the track initially
The residual range errors were estimated in (1), and the compensated components were calculated in (2). Therefore, we can obtain the double gradients of the entire range errors, which can be expressed as
$$\frac{d^{2}\Delta R_{\mathrm{sum}}(\eta)}{d\eta^{2}}=\frac{d^{2}\left[\Delta R_{\mathrm{res}}(\eta)+\Delta R_e(\eta)\right]}{d\eta^{2}}.$$
Subsequently, we can solve for the track error using the following equations. The estimation model can be expressed as
$$\begin{bmatrix}\left.\dfrac{d^{2}\Delta R_{\mathrm{sum},1}}{d\eta^{2}}\right|_{\eta_m}\\ \vdots\\ \left.\dfrac{d^{2}\Delta R_{\mathrm{sum},N}}{d\eta^{2}}\right|_{\eta_m}\end{bmatrix}=\cos(\theta)\cdot\begin{bmatrix}\sin(\varphi_1)&\cos(\varphi_1)\\ \vdots&\vdots\\ \sin(\varphi_N)&\cos(\varphi_N)\end{bmatrix}\cdot\ddot{D}(\eta_m),$$
where the formula calculates the track error at the azimuth time $\eta_m$, and $N$ denotes the number of targets used. The track error at $\eta_m$ can be expressed as follows:
$$\ddot{D}(\eta_m)=\begin{bmatrix}\Delta\ddot{x}(\eta_m)\\ \Delta\ddot{z}(\eta_m)\end{bmatrix}.$$
The entire track error can be represented as
$$\ddot{D}=\begin{bmatrix}\Delta\ddot{x}(\eta_1)&\cdots&\Delta\ddot{x}(\eta_M)\\ \Delta\ddot{z}(\eta_1)&\cdots&\Delta\ddot{z}(\eta_M)\end{bmatrix},$$
where $M$ is the number of samples in the azimuth sub-aperture. After obtaining the double gradients of the track error, the entire track estimate can be obtained by sub-aperture stitching and double integration.
The initially estimated track can be reapplied to the motion compensation of the raw data to further reduce the residual errors in the signal, straighten the envelope, and lay the foundation for subsequent track refinement.
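The per-azimuth-time linear system and the final double integration might be sketched as follows, using an ordinary least-squares solve in place of WTLS for brevity; all names are illustrative:

```python
import numpy as np

def solve_track_gradients(ddR, incidences, squint):
    """Solve the per-azimuth-time system: ddR is the (N,) vector of
    second derivatives of the total range errors of N targets,
    incidences the (N,) incident angles. Returns [ddx, ddz]."""
    A = np.cos(squint) * np.column_stack([np.sin(incidences),
                                          np.cos(incidences)])
    sol, *_ = np.linalg.lstsq(A, ddR, rcond=None)
    return sol

def double_integrate(ddx, dt):
    """Double integration of the estimated second derivatives to
    recover the track error (up to an unobserved linear component)."""
    return np.cumsum(np.cumsum(ddx) * dt) * dt
```

Solving for two unknowns per azimuth time requires at least two targets at distinct incident angles; more targets overdetermine the system, which is precisely where the WTLS weighting of the previous subsection pays off.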

2.2.3. The Track Refinement Based on Phase Errors

After motion compensation with the preliminarily estimated track, a more focused image is obtained. The envelope error of the signal is further reduced, which is beneficial for the estimation of phase errors. Furthermore, as can be seen from Equation (10), while the EOK algorithm alters the envelope error, it does not affect the phase error [48], providing conditions favorable for the estimation of phase errors.
Consequently, based on these steps, the phase error estimation of WTA can be employed. At this point, algorithms that are based on the estimation of phase errors can yield higher precision in track estimation results. With a high-precision track, the image quality is correspondingly enhanced.

2.2.4. Analysis of Calculation Accuracy of Compensated Range Error

During the calculation of the compensated components, calculation errors inevitably occur. As the entire envelope error and phase error need to be obtained, it is imperative to calculate the compensated components with the highest possible accuracy. This is especially true for phase errors, which are more sensitive to calculation inaccuracies. Generally, the calculation error for the compensated components is considered acceptable if it does not exceed 45 degrees [49].
At any azimuth time, suppose the real range of the target is $R$, the reference height of the track is $H$, and $\Delta x$ and $\Delta z$ are the measured (or estimated) values of the motion error. Then, the real compensated value can be expressed as
$$\Delta R_0=\cos(\theta)\cdot\left[\Delta x\cdot\frac{\sqrt{R^{2}-H^{2}}}{R}+\Delta z\cdot\frac{H}{R}\right].$$
However, we can only obtain an estimate of $R$ that contains the error $\Delta R$:
$$R_e=R+\Delta R.$$
Therefore, the compensated value is calculated as
$$\Delta R_e=\cos(\theta)\cdot\left[\Delta x\cdot\frac{\sqrt{(R+\Delta R)^{2}-H^{2}}}{R+\Delta R}+\Delta z\cdot\frac{H}{R+\Delta R}\right].$$
The first-order Taylor expansion of the above equation in $\Delta R$ gives
$$\Delta R_e\approx\cos(\theta)\cdot\left[\Delta x\cdot\frac{\sqrt{R^{2}-H^{2}}}{R}+\Delta z\cdot\frac{H}{R}\right]+\cos(\theta)\cdot\left[\Delta x\cdot\frac{H^{2}}{R^{2}\sqrt{R^{2}-H^{2}}}-\Delta z\cdot\frac{H}{R^{2}}\right]\cdot\Delta R.$$
The calculation error of the compensated component can then be obtained as
$$\Delta e=\Delta R_e-\Delta R_0=\cos(\theta)\cdot\left[\Delta x\cdot\frac{H^{2}}{R^{2}\sqrt{R^{2}-H^{2}}}-\Delta z\cdot\frac{H}{R^{2}}\right]\cdot\Delta R.$$
Converting this range error into a phase error (in degrees) gives
$$\Delta p_e=\frac{4\pi}{\lambda}\cdot\cos(\theta)\cdot\left[\Delta x\cdot\frac{H^{2}}{R^{2}\sqrt{R^{2}-H^{2}}}-\Delta z\cdot\frac{H}{R^{2}}\right]\cdot\Delta R\cdot\frac{180}{\pi}.$$
The calculation error, $\Delta p_e$, is related to factors such as $\Delta x$ and $\Delta z$. A detailed analysis is as follows:
  • The shorter the wavelength, $\lambda$, the larger the calculation error, $\Delta p_e$;
  • The larger the reference height, $H$, the larger the calculation error, $\Delta p_e$;
  • The smaller the range history, $R$, the larger the calculation error, $\Delta p_e$;
  • The larger the range error, $\Delta R$, the larger the calculation error, $\Delta p_e$;
  • The larger the measured (or estimated) motion errors, $\Delta x$ and $\Delta z$, the larger the calculation error, $\Delta p_e$. The relative impact of $\Delta x$ and $\Delta z$ on the outcome depends on their coefficients: the coefficient of $\Delta x$ is $\frac{H}{R^{2}}\cdot\frac{H}{\sqrt{R^{2}-H^{2}}}$, and the coefficient of $\Delta z$ is $\frac{H}{R^{2}}$. Consequently, $\Delta x$ exerts the greater influence on $\Delta p_e$ when $\frac{H}{\sqrt{R^{2}-H^{2}}}$ exceeds 1; conversely, $\Delta z$ has the more pronounced effect when $\frac{H}{\sqrt{R^{2}-H^{2}}}$ is less than 1.
The foregoing presents a theoretical analysis of the calculation error of the compensated component. The following will elucidate this quantitatively from the perspective of numerical simulation.
The simulation experiment is as follows. Assume the carrier frequency is 15.2 GHz, corresponding to a wavelength of 0.0197 m, with a squint angle of −7 degrees, a reference height of 411.02 m, and a target range $R$ of 600 m. Figure 3 shows how the calculation error, $\Delta p_e$, changes as the motion errors, $\Delta x$ and $\Delta z$, and the range error, $\Delta R$, vary.
Five sets of values were established for $\Delta x$ and $\Delta z$. The values were chosen with reference to, but are larger than, actual UAV track data; in other words, the motion errors in many UAV scenarios are smaller than the values set for the simulations. Ten sets of values were assigned to $\Delta R$, ranging from 0.06 m to 0.6 m; with a range bin of around 0.06 m, these values of $\Delta R$ correspond to 1–10 range bins.
As shown in Figure 3, the calculation error of the compensated component, $\Delta p_e$, remains within 45 degrees. Since the values of $\Delta x$, $\Delta z$, and $\Delta R$ set for the simulation are relatively large, while the actual values are generally smaller, the calculation error $\Delta p_e$ can be expected to stay within 45 degrees in practice.
From the analysis above, it is evident that the calculation precision of the compensated component is high. This process does not introduce significant errors, nor does it adversely affect the overall estimation accuracy. In other words, it demonstrates the feasibility of track estimation using weak navigation information.
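As a quick numerical check, the degree-form phase error formula can be evaluated directly with the simulation parameters stated above (the particular Δx, Δz, ΔR triple below is an arbitrary example within the simulated ranges; the sign between the two terms follows the Taylor expansion):

```python
import numpy as np

# Parameters from the simulation described in the text (Ku band example).
lam   = 0.0197           # wavelength [m]
theta = np.deg2rad(-7)   # squint angle
H     = 411.02           # reference height [m]
R     = 600.0            # target range [m]

def phase_calc_error(dx, dz, dR):
    """Calculation error of the compensated component, in degrees."""
    coeff_x = H**2 / (R**2 * np.sqrt(R**2 - H**2))
    coeff_z = H / R**2
    err_rad = (4 * np.pi / lam) * np.cos(theta) * (dx * coeff_x - dz * coeff_z) * dR
    return np.degrees(err_rad)

# e.g. 0.3 m motion errors and a 0.6 m range error (about 10 range bins).
e = phase_calc_error(0.3, 0.3, 0.6)
```

With these parameters the result stays well inside the 45-degree acceptability bound, consistent with the simulation conclusion above.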

3. Results

3.1. Results of Simulations

3.1.1. Pre-Processing of Raw Data

As presented in Table 1, the simulation experiments were conducted in the Ku band, with relevant parameters listed therein. The simulation scene and the settings for the ground targets are depicted in Figure 4. The experiments were performed at a signal-to-noise ratio of 15 dB. The motion error of the real track and the measurement error of the measured track are set as shown in Figure 5.

3.1.2. Validating the Calculation Accuracy of Compensated Component

Since motion compensation was performed before estimating the residual errors, this necessitates the calculation of compensated components for each target. The precision of these calculations should be maintained to ensure they do not adversely affect the overall estimation. In the experiments, we obtained the calculation error curves of the compensated components for each target along the azimuth sub-aperture. The maximum absolute value was extracted from each curve and converted into degrees to construct histograms. Figure 6 illustrates the calculation errors during the initial track estimation and the refinement process.
The statistical results from the histograms indicate that the calculation error of the compensated components consistently remains within 45 degrees, which suggests that precision is ensured and will not affect subsequent estimations.

3.1.3. Validating the Feasibility of Utilizing the Additional Envelope Errors

Taking one target from the experiment as an example, the envelope error during processing is illustrated in Figure 7a. The envelope error before motion compensation is significant. Following compensation, the residual error is small and stays within one range bin. After the EOK algorithm, the residual error is amplified, resulting in an additional envelope error. We extracted the envelope error from the signal, and it closely approximates the theoretically computed additional envelope error, indicating that track estimation based on the additional envelope error formula is feasible. To mitigate the impact of linear terms, we differentiated both the extracted envelope error and the computed additional envelope error, as presented in Figure 7b.
The calculation error curve of the compensated component of this target is shown in Figure 8a. It can be observed that the maximum absolute value is approximately 10 degrees, indicating that the calculation precision is adequate. Since we have validated the feasibility of motion compensation based on weak navigation information and estimation based on additional envelope errors, we can obtain the estimation for the entire range error, as shown in Figure 8b. It is evident that the estimated value of the second derivative of the total range error closely approximates the ground truth.
When the estimated range error of each target has been obtained, the track error in this sub-aperture can be solved using the WTLS method. The estimated track error is shown in Figure 9a, and the corresponding estimation error is shown in Figure 9b. The track estimation is relatively accurate, verifying the feasibility of using additional envelope errors.
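The solve step can be sketched with ordinary weighted least squares, a simplification of the WTLS used in the paper. The geometry model (range error as a projection of two cross-track offsets onto each target's line of sight), the look angles, and the noise level are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.deg2rad(np.linspace(25, 55, 8))   # assumed look angles of 8 targets
true_dx, true_dz = 0.12, -0.07               # track offsets (m), illustrative

# Each target's range error is modeled as the projection of the two
# unknown track offsets onto its line of sight.
A = np.column_stack([np.sin(theta), np.cos(theta)])   # geometry matrix
r = A @ np.array([true_dx, true_dz])                  # ideal range errors
r_noisy = r + 0.001 * rng.standard_normal(r.size)     # 1 mm estimation noise
w = np.ones_like(r_noisy)                             # per-target weights

# Weighted normal equations: (A^T W A) p = A^T W r
W = np.diag(w)
p = np.linalg.solve(A.T @ W @ A, A.T @ W @ r_noisy)   # recovered offsets
```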

3.1.4. The Track Estimation and the Imaging Results

(1)
The signal after initial motion compensation
After preliminary motion compensation using the weak navigation information, the imaging results are displayed in Figure 10a. Figure 10b provides a magnified view of one of the targets. It is apparent that the envelope error is significant after the preliminary motion compensation and EOK processing.
(2)
The initially estimated track based on additional envelope errors
The large additional envelope error following EOK processing can be utilized for initial track estimation. The estimation results are presented in Figure 11a, with the corresponding estimation error depicted in Figure 11b. We can see that the estimated track closely approximates the true value, with an estimation precision of about 0.05 m.
(3)
The signal after using the initially estimated track for compensation
Motion compensation using the initially estimated track significantly reduces residual errors. The envelope of the signal becomes straighter, which is conducive to subsequent estimation processes. Figure 12a illustrates the imaging results after motion compensation with the initially estimated track, and Figure 12b displays a magnified view of one of the targets.
(4)
The refined track using the phase-based estimation
As depicted in Figure 13a, the phase-based estimation is performed, yielding a refined track. It is evident that the track estimation results are closer to the true value. Figure 13b presents the corresponding estimation error, which is within a relatively small range of 0.03 m, achieving a high-precision estimation of the airborne SAR track.
(5)
The signal after using the refined track for compensation
Based on the refined track, motion compensation is performed anew. The residual errors in the signal are substantially reduced, and the envelope error is also small. Figure 14a depicts the imaging results, while Figure 14b shows a magnified view of one of the targets. It is apparent that the envelope is quite straight at this stage, indicating extremely small errors within the signal.
(6)
The focused image and imaging quality test
As shown in Figure 15a, the targets are well focused after azimuth compression. Figure 15b presents an enlarged view of the target in the red box after upsampling 16 times.
In addition, we tested the quality of the selected target, with the results illustrated in Figure 16. The slices demonstrate that high-resolution imaging has been accomplished by utilizing the estimated track of the proposed method.
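The 16x upsampling used for the enlarged target views is typically done by zero-padding the spectrum; the paper does not state its interpolation method, so this is a minimal sketch on a stand-in compressed response.

```python
import numpy as np

def upsample_fft(signal, factor):
    """Upsample a complex 1-D point-target response by zero-padding its
    (centered) spectrum, the usual way sidelobe structure is inspected."""
    n = signal.size
    spec = np.fft.fftshift(np.fft.fft(signal))
    padded = np.zeros(n * factor, dtype=complex)
    start = (n * factor - n) // 2
    padded[start:start + n] = spec
    # Rescale by the factor so the interpolated samples keep the
    # original amplitude.
    return factor * np.fft.ifft(np.fft.ifftshift(padded))

# A 32-sample stand-in compressed response, upsampled 16x.
x = np.sinc(np.arange(-16, 16)).astype(complex)
y = upsample_fft(x, 16)
```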

3.1.5. The Comparison Experiment

The comparison experiments were conducted using the WTA algorithm. Within the WTA framework, the estimation is based on raw data, meaning that no motion compensation is performed before estimation. Consequently, the EOK algorithm was applied to the signal without motion compensation employing weak navigation information.
As depicted in Figure 17, significant errors are present within the signal, resulting in a completely defocused image. Compared to the results obtained with the proposed algorithm that incorporates motion compensation, the image quality using the WTA algorithm is evidently inferior. Clearly, the defocused signal cannot be utilized for subsequent estimation with the WTA algorithm. In summary, the WTA algorithm fails in the simulation.

3.2. Results of Real Data Experiments

3.2.1. Parameters

The simulations above provided a detailed analysis of the performance of the proposed algorithm. Next, we conducted experiments with real data, using X-band data from a scene with abundant strong scatterers. The specific parameters of the data are presented in Table 2.
The high-precision track is used as the reference truth, while the weak navigation information is employed within the proposed method. Figure 18a shows the real motion error and the track measurement error in the x-direction, and Figure 18b shows them in the z-direction.

3.2.2. The Estimated Track and Comparison Results

We conducted experiments on real data using the proposed method and compared the results with those obtained from the WTA algorithm. Figure 19a displays the track estimation results of both the proposed method and the WTA method. It is evident that the track estimated by the proposed method is closer to the true value. Figure 19b demonstrates the corresponding estimation errors. It can be observed that the estimation error of the WTA is close to 0.1 m, whereas the proposed method achieves precision within 0.05 m, hence realizing high-accuracy track estimation. The comparative experiments show that the proposed method is more robust and yields better results.

3.2.3. The Imaging Results and Quality Test

The estimated track can be utilized for SAR imaging. Figure 20 illustrates the image obtained using the track estimated by the proposed method, while Figure 21 presents the image acquired using the track estimated by the WTA algorithm. The image produced by the proposed method exhibits noticeably better focusing quality; in particular, the contrast of some strong scatterers is distinctly higher.
We assessed the imaging quality of two sets of targets contained within the boxes. Figure 22a presents the two-dimensional images of the targets in the red box after upsampling 16 times, while Figure 22b displays the two-dimensional images of the targets in the yellow box after upsampling 16 times. The comparative analysis reveals that the proposed method achieves a higher focusing quality. Moreover, Figure 23a illustrates a comparison of the azimuth quality of the two targets within the red box, and Figure 23b compares the azimuth quality of the targets in the yellow box. The slice results also corroborate the improved effectiveness of the proposed method.
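A common way to quantify the azimuth-slice quality compared in Figures 16 and 23 is the peak sidelobe ratio (PSLR). The paper does not specify its exact metric, so the following is a hedged sketch on an ideal sinc response.

```python
import numpy as np

def pslr_db(profile):
    """Peak sidelobe ratio (dB): highest sidelobe relative to the mainlobe
    peak, with the mainlobe delimited by the first nulls on either side."""
    p = np.abs(profile)
    k = int(np.argmax(p))
    left = k
    while left > 0 and p[left - 1] < p[left]:            # walk to left null
        left -= 1
    right = k
    while right < p.size - 1 and p[right + 1] < p[right]:  # walk to right null
        right += 1
    sidelobes = np.r_[p[:left], p[right + 1:]]
    return 20.0 * np.log10(sidelobes.max() / p[k])

# An ideal unweighted sinc response has a PSLR of about -13.26 dB.
t = np.linspace(-8.0, 8.0, 1601)
val = pslr_db(np.sinc(t))
```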

4. Discussion

The proposed method aims to enhance the applicability and robustness of track estimation. It sequentially employs motion compensation, envelope error estimation, and phase error estimation, which achieves track refinement in the process of gradually reducing the residual error. The principle of the algorithm ensures its utility and accuracy under substantial motion and envelope errors.
Two experiments were conducted in this article. In the simulation experiment, the absence of motion compensation resulted in complete defocusing of the signal in the range direction, rendering the WTA algorithm entirely ineffective, whereas the proposed method, which utilizes motion compensation, achieved high-precision estimation. In the real data experiment, the signal was not completely defocused, permitting the use of the WTA algorithm. Nonetheless, our method demonstrated superior track estimation precision and imaging quality, owing to its better utilization of weak navigation data and envelope errors.
The article analyzes the computation accuracy of the compensated component using theory and simulations. The results indicate that the calculations are unlikely to introduce intolerable errors. This is a favorable conclusion that can serve as a reference for other works on calculating the compensated error of targets.
The experiments conducted in this study were performed under conditions of small squint angles since the squint angles for many UAV SAR systems are typically small in practical scenarios. The experimental results indicate that the method proposed achieves favorable outcomes when operating at small squint angles. However, large squint angles have an impact on the two-dimensional spectrum of the signal, leading to an increase in the additional envelope error and causing range defocusing. This range defocusing can compromise the utility of the proposed method for the range envelope, potentially limiting the effectiveness of the algorithm or even resulting in its failure. Accordingly, the method proposed herein is tailored primarily toward cases with small squint angles. The challenges associated with large squint angles constitute a separate research direction, meriting further investigation in the future.

5. Conclusions

This article presents a robust track estimation method for airborne SAR. It exploits both weak navigation information and additional envelope errors, which achieves track refinement in the process of gradually reducing the residual error. The feasibility and effectiveness of the algorithm are demonstrated through both simulations and real data experiments. In the simulation experiments, the proposed method achieved track estimation accuracy within 0.03 m, whereas the WTA algorithm exhibited severe range-defocusing issues that precluded its normal operation. In the real data experiments, the track estimation accuracy of the proposed method was within 0.05 m, whereas the WTA algorithm achieved lower precision, close to 0.1 m. In summary, although the WTA algorithm has been proven to be advanced, the method introduced in this article has broader applicability and higher accuracy.
In the proposed algorithm, the calculation error of the compensated component was consistently maintained within 45 degrees, thereby ensuring the final accuracy of the track estimation. Ultimately, a high-precision track can be employed for motion compensation and imaging, resulting in high-resolution SAR images.
Moreover, a common drawback of the proposed method and PGA-based methods like WTA is their dependence on the presence of strong scatterers in the scene. Hence, future research could explore combining such strong-scatterer-dependent approaches with image metric-based optimization algorithms to overcome this dependency.

Author Contributions

Conceptualization, M.G. and X.Q.; funding acquisition, X.Q. and C.D.; investigation, M.G.; methodology, M.G. and X.Q.; resources, X.Q. and Y.C.; writing—original draft, M.G.; writing—review and editing, M.G., X.Q. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China under Grant No. 2018YFA0701903.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The SAR geometry schematic.
Figure 2. The flow chart of the proposed track estimation method.
Figure 3. The change in the calculation error, Δp_e.
Figure 4. The simulation scene and targets.
Figure 5. The motion error and measurement error. (a) The errors in the x-direction; (b) the errors in the z-direction.
Figure 6. The histogram of the calculation error of the compensated components.
Figure 7. The envelope errors of the target. (a) The envelope errors of the target during processing; (b) the derivative of the envelope errors.
Figure 8. The calculation and estimation results. (a) The calculation error of the compensated component of the target; (b) the range error estimation results.
Figure 9. The track estimation results using the additional envelope error. (a) The track estimation results of the target; (b) the estimation errors of the target.
Figure 10. The imaging results after initial motion compensation. (a) The signal after initial motion compensation and EOK processing; (b) the magnified view of one of the targets.
Figure 11. The initially estimated results of the track. (a) The initially estimated track based on additional envelope errors; (b) the track estimation errors.
Figure 12. The imaging results using the initially estimated track. (a) The signal after using the initially estimated track for motion compensation; (b) magnified view of one of the targets.
Figure 13. The refined estimation results. (a) The refined track based on phase errors; (b) the track estimation errors.
Figure 14. The imaging results using the refined track. (a) The signal after using the refined track for motion compensation; (b) magnified view of one of the targets.
Figure 15. The imaging results after azimuth compression. (a) The signal after azimuth compression; (b) magnified view of one of the targets.
Figure 16. The image quality test. (a) Range quality of the target; (b) azimuth quality of the target.
Figure 17. The imaging results using the WTA algorithm. (a) The signal after the EOK algorithm without motion compensation; (b) magnified view of one of the targets.
Figure 18. The motion error and measurement error of real data. (a) The errors in the x-direction; (b) the errors in the z-direction.
Figure 19. The estimation results of real data. (a) The estimated track of real data; (b) the track estimation errors.
Figure 20. The image by using our estimated track.
Figure 21. The image by using the estimated track of WTA.
Figure 22. The comparison of the two-dimensional imaging results after upsampling 16 times. (a) The two-dimensional images of the targets in the red box; (b) the two-dimensional images of the targets in the yellow box.
Figure 23. The image quality comparison. (a) The azimuth quality comparison of the targets in the red box; (b) the azimuth quality comparison of the targets in the yellow box.
Table 1. The parameters for the simulation.

Parameter          Value
carrier frequency  15.2 GHz
wavelength         0.0197 m
bandwidth          1.2 GHz
velocity           10.034 m/s
height             411.024 m
squint angle       −7°
PRF                250 Hz
Table 2. The parameters of the real data.

Parameter          Value
carrier frequency  9.6 GHz
wavelength         0.0312 m
bandwidth          810 MHz
velocity           98.187 m/s
height             4288.56 m
squint angle       1.71°
PRF                1000 Hz
