Article

An Image Compensation-Based Range–Doppler Model for SAR High-Precision Positioning

School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 8829; https://doi.org/10.3390/app14198829
Submission received: 8 July 2024 / Revised: 12 September 2024 / Accepted: 26 September 2024 / Published: 1 October 2024

Abstract

The range–Doppler (R–D) model is extensively employed for the geometric processing of synthetic aperture radar (SAR) images. Refining the sensor motion state and imaging parameters is the most common way to achieve high-precision geometric processing with the R–D model, but this process involves numerous parameters and complex computations. To reduce the expertise required and the complexity of parameter optimization in the classic R–D model, we introduce a novel approach called ICRD (image compensation-based range–Doppler) that improves the positioning accuracy of the R–D model by using a low-order polynomial to compensate for the original imaging errors without altering the initial positioning parameters. We also designed low-order polynomial compensation models with different numbers of parameters. The models were evaluated on SAR images from different platforms and bands, including spaceborne TerraSAR-X and Gaofen3-C images, manned airborne SAR-X images, and unmanned aerial vehicle-mounted miniSAR-Ku images. Furthermore, image positioning experiments involving different polynomial compensation models and various numbers and distributions of ground control points (GCPs) were conducted. The experimental results demonstrate that geometric processing accuracy comparable to that of the classical rigorous positioning method can be achieved, even when applying only an affine transformation model to the images. Compared to classical refinement models, however, the proposed image-compensated R–D model is much simpler and easier to implement. Thus, this study provides a convenient, robust, and widely applicable method for the geometric positioning of SAR images, offering a potential approach for the joint positioning of multi-source SAR images.

1. Introduction

1.1. Geometric Models of SAR Images

The synthetic aperture radar (SAR) geometric model plays a foundational role in high-precision geolocation processes, encompassing geometric correction, stereo positioning, block adjustment, and three-dimensional reconstruction of SAR imagery [1,2,3,4]. Geometric models of SAR imagery are generally divided into two categories: rigorous and non-rigorous. Rigorous models rely on the mechanisms of SAR imaging, in which both the sensor’s motion state and imaging parameters are incorporated. Meanwhile, non-rigorous models usually ignore the SAR imaging mechanism and disregard the sensor’s geometric, physical, and imaging parameters, typically fitting the relationship between image and ground coordinates.
A variety of rigorous SAR models have been developed, such as the range and zero-Doppler model created by F. Leberl, the three-dimensional model created by Toutin at the Canada Centre for Remote Sensing, and the SAR collinearity equation model created by You at the Chinese Academy of Sciences [5,6]. However, the most prevalent and effective model is the range–Doppler (R–D) equations introduced by Curlander in 1982 [7]. From a geometric perspective, the Doppler equation confines the observed target to an imaging Doppler cone, with the sensor's position serving as the vertex and the velocity vector as the axis, while the range equation bounds the target within a sphere centered on the sensor, with a radius equal to the SAR measurement distance. The intersection of the range sphere and the Doppler cone is a circle, representing the locus of possible locations of the SAR observation target.
During image correction, the R–D model either treats elevation as a known parameter or is integrated with an Earth ellipsoid model that accounts for elevation [8]; the model then calculates the object's spatial coordinates (X, Y, and Z). In stereo positioning, two or more sets of R–D equations, derived from corresponding points in multi-view images, are used jointly to determine the object's spatial coordinates [9,10]. At present, the R–D model serves as the basis for the majority of methods for the precise geometric processing of SAR images. Wu [11] developed an airborne SAR target positioning algorithm that refines the R–D model parameters using corresponding points from airborne SAR and reference images, thereby enhancing the positioning accuracy. Chen and Dowman [10] presented a weighted spatial intersection algorithm that predicts positioning accuracy based on prediction error analysis. Schubert et al. [12] achieved high-precision geolocation of TerraSAR-X images using the R–D model without ground control points (GCPs), through the use of calibrated sensor parameters and corrections for atmospheric and tidal effects.
Using the R–D model together with a digital elevation model, ground positioning accurate to within 2 m can be achieved for Tianhui-2 (TH-2) imagery [13]. Considering the scale differences between the range and Doppler equations, researchers have normalized the R–D model and improved the positioning accuracy through a weighting strategy [14]; the model's efficacy has been validated through experiments on the Gaofen-3 C-band (GF3-C) and TerraSAR-X satellite data sets. To address the complexity of traditional geometric models, Liu [15] introduced a fast algorithm based on local linear approximation to simplify the associated computations. The R–D model is also beneficial in other image processing tasks; for example, Wang [8] employed the model to correct geometric distortions in SAR images, improving the accuracy of SAR scale-invariant feature transform (SAR-SIFT) matching.
Non-rigorous models are classified into true replacement models and correspondence models, as specified by the ISO 19130-1 standard [16]. The general polynomial transformation is the prevalent correspondence model; it creates a functional link between ground and image coordinates and is usually established by fitting numerous reference points. For the initial positioning of airborne SAR images, Zhang et al. [17] used a first-order affine transformation to approximate the translation between image and geographical coordinates, disregarding the impact of terrain elevation on positioning accuracy. In contrast, Huang et al. [18] introduced elevation into a general polynomial model for SAR geolocation. The rational function model (RFM), now the dominant true replacement model, was originally developed for optical satellite imagery [19] and has become the primary geometric processing model for optical satellite images [20]. For SAR image positioning, the RFM has 78 rational polynomial coefficients (RPCs) that are fitted from the R–D model's virtual positioning results [21,22]. Zhang et al. [22] analyzed factors affecting the RPC-fitting accuracy with respect to the R–D model, assessing the impacts of the virtual control point grid size, the number of elevation layers, orbit accuracy, and other aspects on RPC solutions; their study highlighted the importance of elevation layers for SAR RPC extraction accuracy and suggested that the layering be carefully adapted to the terrain conditions. Typically, the RPC-fitting error with respect to the R–D model is below 1% of a pixel. For high-precision sensors such as TerraSAR-X, which have stable motion and excellent imaging capabilities, RPC-fitting accuracies better than 10^-3 pixels relative to the R–D model can be achieved [23].

1.2. Refinement of Geometric Model Parameters

The geometric model describes the relationship between the object’s spatial coordinates and the SAR image coordinates. However, the achievement of high-precision positioning is contingent on calibrating the sensor parameters and refining the model parameters through the use of GCPs.
The positioning accuracy of the R–D model is affected by various factors, such as initial range errors, synchronization discrepancies between the SAR sensor and the position and orientation system (POS), sensor motion parameter errors, and errors in the transmission of electromagnetic waves. Several studies have thoroughly analyzed the effects of parameter errors on the accuracy of SAR image positioning [9,24,25]. Some of these factors, such as the initial slant range, slant range rate, and time synchronization, can be calibrated in a laboratory or testing field, while others, such as atmospheric delay and tidal effects, can be obtained through relevant rigorous models [12]. The optimization of sensor system parameters is contingent on the availability of a high-precision calibration field, and the timeliness of calibration is pivotal to ensuring the accuracy of image positioning. However, some sensors may encounter challenges regarding the prompt calibration of parameters, potentially leading to suboptimal geolocation accuracy when only the initial parameters are used. To further enhance the positioning accuracy of SAR images, it is often necessary to employ high-precision reference points for the comprehensive optimization of the sensor position, velocity, slant range, time synchronization, and Doppler parameters within the R–D model [26,27].
Although conventional methods for enhancing the accuracy of the R–D model are theoretically rigorous, they have several limitations with regard to the practical use of reference points for comprehensive optimization. These limitations include the difficulty of dealing with a large number of model parameters and the necessity of using additional models to express variations throughout an entire image; for example, positional and velocity corrections typically require six distinct sets of polynomial parameters [26,27]. Strong parameter correlations in the R–D model present decorrelation challenges when the parameters are solved under sparse GCP conditions [27]. The process of obtaining refined R–D model parameters using GCPs is complex and requires a high level of expertise, significantly hindering non-specialist users from efficiently performing high-precision SAR image geolocation processing independently. These limitations restrict the application of the R–D model in certain scenarios. Therefore, it is necessary to develop a simpler and more user-friendly technique for SAR positioning based on the R–D model.
In contrast, the RFM can relatively easily achieve high-precision geolocation by introducing an error compensation model in the image space with the support of GCPs. The most common approach uses a linear affine transformation model to correct image errors caused by various factors. Studies have suggested that, with the support of adequate GCPs, the positioning accuracy of RFMs enhanced with affine transformations is comparable to that of R–D models refined with model parameters [20,28,29]. For instance, Eftekhari et al. [23] demonstrated that RFMs with affine transformations can achieve sub-pixel accuracy. This methodology has gradually been applied to multi-scene imagery, as demonstrated by Zhang et al. [30] and Wang et al. [31], who employed RFM-based block adjustment techniques to accomplish high-precision positioning for GF3-C and Yaogan-5 (YG-5) multi-scene images.
The application of SAR data processing technologies, including high-precision GNSS orbit-aided imaging [32,33] and motion compensation [34,35], enables the refinement of a select few crucial parameters of the rigorous model, such as the initial slant range and azimuth time, significantly enhancing the positioning accuracy of imagery [36,37]. The systematic influence of these parameters on image errors underscores a strong consistency in geometric positioning [38]. SAR high-precision positioning under sparse control point conditions, particularly with image space-corrected RFMs, further reinforces this perspective [20]. Considering the inherent complexity and difficulty of the classical GCP-based refinement of the R–D model, we raise a question: is it possible to achieve high-precision positioning by combining the R–D model with an image-based compensation function, similar to the approach used in the RFM? If feasible, this could greatly simplify the refinement process for the R–D model. In addition, the RFM is primarily designed for spaceborne images and has not, until now, been effectively applied to airborne images, whereas the R–D model with image compensation may be easily extended to airborne SAR images.
To the best of our knowledge, no relevant work exists on the abovementioned aspects. In this regard, inspired by the image-space compensation strategy used with the RFM to improve the positioning accuracy of satellite-borne SAR images [20,29], we developed an image compensation-based R–D (ICRD) model for SAR high-precision positioning and conducted a variety of experiments on both spaceborne and airborne SAR images to validate its accuracy. That is, we attempt to achieve high-precision positioning with the R–D model by adding a correction function in the image space instead of optimizing the imaging and geometric parameters. An effective method of this kind reduces the complexity of, and the expertise required for, R–D model refinement, and provides a consistent processing method for high-precision positioning of both spaceborne and airborne SAR images.
The outline of this paper is as follows. A short overview of SAR image geometric models and model refinement is provided in Section 1; some limitations related to model refinement are also discussed in this section. In Section 2, an ICRD model for SAR high-precision positioning is proposed, and image compensation using polynomial functions is detailed. In the next section, we report on the comparative experiments we conducted, and the accuracy of the model is validated. We discuss some important issues associated with the proposed model in Section 4, and the last section contains our conclusions.

2. The Proposed Image Compensation-Based R–D Model

The ICRD model combines the R–D model with a low-order polynomial image compensation model. This section first introduces the R–D model and the image compensation model, followed by the technique for calculating the parameters of the image compensation functions. Subsequently, the method for image geolocation and image correction using the ICRD model is presented. The overall technical roadmap of this paper is shown in Figure 1.

2.1. R–D Model

For any pixel (c, r) in an image, the acquisition time can be determined from the image row coordinate r. Using this time information, data such as the sensor position and velocity can be extracted from position and orientation system (POS) data. Additionally, the SAR-measured distance from the target to the sensor can be calculated based on the column coordinate c. The R–D model utilizes this information to establish geometric and physical relationships between the sensor and the ground target, forming the basis for the R–D equations, as follows:
$$\left\{\begin{aligned} R_0 + c M_r &= \sqrt{(X - X_S)^2 + (Y - Y_S)^2 + (Z - Z_S)^2} \\ f_D &= -\frac{2\left[(X - X_S)V_{X_S} + (Y - Y_S)V_{Y_S} + (Z - Z_S)V_{Z_S}\right]}{\lambda R} \end{aligned}\right. , \quad (1)$$
where the parameter $R_0$ represents the initial slant range corresponding to the first column of an image, $M_r$ is the slant range resolution of the image, and $\lambda$ represents the wavelength of the electromagnetic wave. The first equation is the range equation: the left side denotes the distance extracted from the SAR image, and the right side is the spatial distance between the sensor coordinates $(X_S, Y_S, Z_S)$ and the ground target coordinates $(X, Y, Z)$. The second equation is the Doppler equation: the left side is the Doppler frequency $f_D$ used in imaging, while the right side is the Doppler shift of the SAR electromagnetic wave obtained from the sensor velocity $[V_{X_S}, V_{Y_S}, V_{Z_S}]^T$ and the relative position $[X - X_S, Y - Y_S, Z - Z_S]^T$ between the ground target and the sensor.
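To make the roles of the terms concrete, the following minimal sketch (our own illustration, not the authors' code; the names and the Doppler sign convention follow Equation (1) as reconstructed above) evaluates the residuals of the two equations for a candidate ground point:

```python
# A minimal sketch, assuming NumPy: residuals of the R-D equations (Eq. 1)
# for a candidate ground point. All function and variable names are ours.
import numpy as np

def rd_residuals(c, target, sensor_pos, sensor_vel, R0, Mr, f_D, wavelength):
    """Return (range residual in metres, Doppler residual in hertz)."""
    d = np.asarray(target, float) - np.asarray(sensor_pos, float)
    R = np.linalg.norm(d)                     # geometric slant range
    res_range = (R0 + c * Mr) - R             # range equation residual
    res_doppler = f_D + 2.0 * (d @ np.asarray(sensor_vel, float)) / (wavelength * R)
    return res_range, res_doppler
```

Both residuals vanish for a ground point exactly consistent with the model.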
High-precision positioning with the R–D model is typically achieved by optimizing the model parameters using reference points. In Equation (1), the sensor position and velocity, the imaging Doppler frequency, and the initial slant range corrections (or the parameters of functions describing them) are the unknown variables. With reference points for support, error equations can be constructed, and corrections to the initial values of the model parameters can be determined.

2.2. The Proposed Image Compensation Model

The image compensation model was developed by adding correction values (Δc, Δr) to the original image coordinates. These values were derived from two-dimensional low-order polynomial functions of the original image row and column coordinates (c, r). When a second-order polynomial is used, it can be expressed as follows:
$$\left\{\begin{aligned} \Delta c &= a_0 + a_1 c + a_2 r + a_3 c^2 + a_4 c r + a_5 r^2 \\ \Delta r &= b_0 + b_1 c + b_2 r + b_3 c^2 + b_4 c r + b_5 r^2 \end{aligned}\right. , \quad (2)$$
where ai and bi (i = 0, 1, …, 5) represent the coefficients of polynomial functions for image compensation. According to the minimum number of GCPs required and the application requirements, the polynomial order can also be chosen as the 0th order (with parameters a0 and b0) or 1st order (with parameters a0, a1, a2, b0, b1, and b2). In the case of order 0, the equations only contain constant terms, while, in the case of order 1, they represent the affine transformation model.
SAR image rows are acquired as the sensor moves along the azimuth direction and thus essentially follow a parallel projection, whereas image columns result from sampling the reflected electromagnetic waves at equal time intervals, constituting a range projection. From the perspective of the R–D model, the main factors producing SAR imaging or positioning errors include synchronization errors between the SAR sensor time and the POS time, as well as the initial slant range error; these are mainly characterized by translation [39] and can be absorbed by the two parameters a0 and b0 of the image compensation model. Within approximately 15 min, the sensor position and velocity errors measured via GNSS are primarily characterized by translation and drift, which can be expressed by a linear model [39]. Although the relationship between the image column coordinates and the sensor position error is non-linear, the position measurement accuracy of GNSS is easily better than 1 m and can even reach 0.01 m [32]. Imaging errors caused by sensor motion errors, as well as the effects of other low-frequency errors, can therefore be corrected using linear or second-order polynomial models. Considering the parallel projection relationship between image lines, a simplified quadratic polynomial model was constructed, as follows:
$$\left\{\begin{aligned} \Delta c &= a_0 + a_1 c + a_2 r + a_3 c^2 \\ \Delta r &= b_0 + b_1 c + b_2 r + b_5 r^2 \end{aligned}\right. \quad (3)$$
The above polynomial refinement model was used to correct the comprehensive impact of all parameter errors on imaging and positioning. For ease of expression, the image-compensated R–D models were categorized based on the number of polynomial parameters in the compensation functions. The functions containing only the constant terms a0 and b0 are referred to as the one-parameter model. Polynomials with parameters a0, a1, a2, b0, b1, and b2 are called the three-parameter model, while those with parameters a0, a1, a2, a3, b0, b1, b2, and b5 (Equation (3)) are termed the four-parameter model. Similarly, Equation (2) is labeled the six-parameter model, and the R–D model refined in the classical way is denoted as the classical model. Thus, a total of five types of R–D refinement methods are used in this article. In theory, different polynomial forms yield different processing accuracies, a notion that is evaluated experimentally later.
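To make the distinction between the variants concrete, the following sketch (our own illustration; the function and argument names are assumptions, not part of any published implementation) returns the basis terms of each model for a pixel (c, r); stacking one such row per GCP yields the design matrix used in Section 2.3:

```python
# Illustrative sketch of the basis terms of the one-, three-, four- and
# six-parameter compensation models; for the four-parameter model the
# column equation uses c^2 while the row equation uses r^2, as in Eq. (3).
import numpy as np

def design_row(c, r, model="three", axis="col"):
    if model == "one":        # constant shift only: a0 (or b0)
        return np.array([1.0])
    if model == "three":      # affine terms: a0 + a1*c + a2*r
        return np.array([1.0, c, r])
    if model == "four":       # Eq. (3): quadratic term depends on the axis
        q = c * c if axis == "col" else r * r
        return np.array([1.0, c, r, q])
    if model == "six":        # full second-order polynomial, Eq. (2)
        return np.array([1.0, c, r, c * c, c * r, r * r])
    raise ValueError(f"unknown model: {model}")
```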

2.3. Solving Polynomial Coefficients of Image Compensation Functions

The determination of the parameters involves two main steps, carried out with the support of GCPs. The first step is to calculate the image coordinates from the ground coordinates of the GCPs; in other words, the image coordinates (cci, rci) corresponding to the ground point (Xi, Yi, Zi) are calculated utilizing the original orientation parameters. The second step is to determine the parameters of the image compensation polynomial: the differences between the calculated image coordinates and those measured for the GCPs are computed, and the polynomial coefficients are fitted to these differences using the least squares approach.
The two steps are detailed below.
(1) 
Translating Ground Coordinates into Image Calculation Coordinates
The iterative method used to determine the image coordinates (cci, rci) from the ground coordinates (Xi, Yi, Zi) via the R–D model includes the following steps (a minimal code sketch follows the list):
(1)
Begin with temporary values of the image coordinates (c0 and r0);
(2)
Extract the Doppler value and the SAR range measurement value corresponding to the temporary image coordinates (c0, r0);
(3)
Determine the image row time and interpolate to determine the sensor’s position and velocity at that time;
(4)
Calculate the Doppler value and the range distance based on the sensor motion information and the ground target position, and then compare these with the values from Step (2) to obtain the differences dfD and dR;
(5)
Calculate the increments of the image coordinates using the formulas dc0 = dR/Mr and dr0 = dfD/Vy;
(6)
If |dc0| < 0.001 and |dr0| < 0.001, terminate the iteration, setting (cci, rci) equal to (c0, r0); otherwise, update the temporary coordinate values using c0 = c0 + dc0 and r0 = r0 + dr0, and return to Step (2).
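The sketch below renders the six steps in Python; the helpers doppler_and_range_at() and sensor_state_at(), as well as the Doppler-to-row scale V_y, are assumed placeholders for the sensor's metadata interfaces rather than a real API. The division by Mr converts a range difference into column pixels, and V_y plays the analogous role for rows.

```python
# A minimal sketch of the ground-to-image iteration (Steps 1-6); all
# helper names and the scale V_y are assumptions, not a published API.
import numpy as np

def ground_to_image(target, doppler_and_range_at, sensor_state_at,
                    Mr, wavelength, V_y, c0=0.0, r0=0.0, tol=1e-3, max_iter=100):
    target = np.asarray(target, float)
    for _ in range(max_iter):
        f_img, R_img = doppler_and_range_at(c0, r0)   # Step (2)
        pos, vel = sensor_state_at(r0)                # Step (3)
        d = target - pos
        R = np.linalg.norm(d)
        f_D = -2.0 * (d @ vel) / (wavelength * R)     # Step (4)
        dfD, dR = f_img - f_D, R_img - R
        dc0, dr0 = dR / Mr, dfD / V_y                 # Step (5)
        if abs(dc0) < tol and abs(dr0) < tol:         # Step (6): converged
            return c0, r0
        c0, r0 = c0 + dc0, r0 + dr0                   # otherwise update
    return c0, r0
```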
(2) 
Determining the Parameters of the Polynomial Coefficients in the Image Compensation Function
For each reference point or GCP (Xi, Yi, Zi, ci, ri), the calculated image point coordinates, denoted as (cci, rci), can be determined from the ground coordinates (Xi, Yi, Zi) using the R–D model. With the measured image coordinates (ci, ri) of the GCP, the corresponding difference values (Δci, Δri) can then be computed. Taking the parameters a0–a5 and b0–b5 in Equation (2) as unknowns, we can establish two error equations for each GCP, resulting in a total of 2n error equations for n GCPs. When the one-, three-, four-, and six-parameter models for image refinement are employed, the minimum number of GCPs required is one, three, four, and six, respectively. Once the minimum requirement is met or exceeded, the parameters can be solved using the least squares method.
Based on Equation (2), error equations for row and column coordinates can be created separately and solved. The error equations for the column coordinate of the GCP, in the form of a vector, can be expressed as follows:
$$V_C = A X_a - L_C, \quad (4)$$
where $V_C = [v_{c_1}, v_{c_2}, \ldots, v_{c_n}]^T$ is the error vector of the image column coordinates, $L_C = [\Delta c_1, \Delta c_2, \ldots, \Delta c_n]^T$ is the constant vector of the image column coordinates, and n is the number of GCPs.
The coefficient matrix A for the six-parameter model is shown below:
$$A = \begin{bmatrix} 1 & c_1 & r_1 & c_1^2 & c_1 r_1 & r_1^2 \\ 1 & c_2 & r_2 & c_2^2 & c_2 r_2 & r_2^2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & c_n & r_n & c_n^2 & c_n r_n & r_n^2 \end{bmatrix}. \quad (5)$$
Xa is the vector of the parameters of the six-parameter model for the image column coordinates, as follows:
$$X_a = [a_0, a_1, a_2, a_3, a_4, a_5]^T. \quad (6)$$
The unknown parameters in the form of vector Xa are solved using the least squares method, as follows:
$$X_a = (A^T A)^{-1} A^T L_C. \quad (7)$$
Similarly, the parameters of the image compensation model for row coordinates can be solved as follows:
$$X_b = [b_0, b_1, b_2, b_3, b_4, b_5]^T = (A^T A)^{-1} A^T L_r, \quad (8)$$
where $L_r = [\Delta r_1, \Delta r_2, \ldots, \Delta r_n]^T$ is the constant term vector for the image row coordinates.
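A compact sketch of this least-squares step for the six-parameter model is given below (illustrative names; np.linalg.lstsq is used in place of explicitly forming the normal equations of Equations (7) and (8), which is numerically equivalent for a well-conditioned A):

```python
# Sketch of Equations (4)-(8): fit the six-parameter compensation
# polynomials from n GCPs by least squares. Array names are illustrative.
import numpy as np

def fit_compensation(c, r, dc, dr):
    """c, r   : measured image coordinates of the n GCPs
       dc, dr : differences between the R-D-computed coordinates (cc, rc)
                and the measured coordinates (c, r)
       Returns the coefficient vectors X_a (columns) and X_b (rows)."""
    c, r = np.asarray(c, float), np.asarray(r, float)
    A = np.column_stack([np.ones_like(c), c, r, c * c, c * r, r * r])
    X_a, *_ = np.linalg.lstsq(A, np.asarray(dc, float), rcond=None)
    X_b, *_ = np.linalg.lstsq(A, np.asarray(dr, float), rcond=None)
    return X_a, X_b
```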
Thus, the original image coordinates and the object space coordinates satisfy the following relationship:
$$(c + \Delta c,\; r + \Delta r) = \mathrm{RD}(X, Y, Z). \quad (9)$$

2.4. Use of Image Compensation-Based R–D Model

Applying the R–D model based on image compensation for the geometric processing of images mainly involves the mutual conversion of coordinates between image and object spaces, as well as related processing during image correction.
Assuming that the image coordinates (c, r) correspond to a ground elevation H, the point coordinates P(X, Y, Z) in the geocentric Cartesian coordinate system with a ground height of H should satisfy the Earth ellipsoid model, as follows:
$$\frac{X^2 + Y^2}{(a + H)^2} + \frac{Z^2}{(b + H)^2} = 1, \quad (10)$$
where a and b represent the Earth's semi-major and semi-minor axes, respectively.
The adjusted values (ca, ra) corresponding to the original image coordinates (c, r) can be determined using the image compensation model. Based on the adjusted values (ca = c + Δc, ra = r + Δr), the corresponding sensor position, velocity, and imaging Doppler parameters for the R–D model can be obtained. Therefore, the R–D model equations and the Earth ellipsoid model together form a system of three equations in three unknowns (X, Y, and Z), which can be solved.
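As a sketch of this solve, assuming SciPy is available and that the slant range, Doppler value, and sensor state have already been evaluated at the adjusted coordinates (ca, ra), the three equations can be handed to a generic root finder (all names are ours; a_e and b_e are WGS84-like semi-axes):

```python
# Sketch: solve the R-D equations plus the ellipsoid constraint (Eq. 10)
# for the ground coordinates (X, Y, Z). Helper names are assumptions.
import numpy as np
from scipy.optimize import fsolve

def image_to_ground(sensor_pos, sensor_vel, R_slant, f_D, wavelength, H,
                    a_e=6378137.0, b_e=6356752.314):
    pos = np.asarray(sensor_pos, float)
    vel = np.asarray(sensor_vel, float)

    def equations(P):
        d = P - pos
        R = np.linalg.norm(d)
        return [R - R_slant,                               # range equation
                f_D + 2.0 * (d @ vel) / (wavelength * R),  # Doppler equation
                (P[0]**2 + P[1]**2) / (a_e + H)**2
                + P[2]**2 / (b_e + H)**2 - 1.0]            # ellipsoid (Eq. 10)

    # crude initial guess: scale the sensor position onto the ellipsoid
    P0 = pos * (a_e / np.linalg.norm(pos))
    return fsolve(equations, P0)
```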
When transforming the coordinates of object point P(X, Y, Z) to the original image coordinates, the method described in Section 2.3 can be employed first, and the calculated values of the image points (cc, rc) corresponding to object points can be obtained. Theoretically, this calculated value should match the adjusted values (ca, ra) of the image compensation model from the original image point p(c, r); that is, cc = ca and rc = ra. For the image compensation model based on original image coordinates, these values are unknowns to be determined in this task. There are two solutions, as follows:
(1)
The calculated coordinates of the image points (cc, rc) obtained from object points can be used as the initial values for c and r in the compensation model, and the original image point coordinates and correction values can be refined through step-by-step iteration.
(2)
Considering that the difference between the calculated image point position (cc, rc) and the real image point position (c, r) corresponding to P is small (usually a few pixels) and that the non-constant coefficients of the compensation model are very small, the original image coordinates in the compensation model can be approximated as (cc − a0, rc − b0). The increment values can then be computed and the original image coordinates updated without iteration, with an error that can typically be ignored entirely. (A minimal sketch of this solution follows.)
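Solution (2) reduces to a few lines, as sketched below; poly_dc and poly_dr denote the fitted compensation polynomials of Section 2.3, and all names here are our own:

```python
# Sketch of solution (2): approximate the original coordinates by removing
# the constant shifts, evaluate the compensation once, and update.
def object_to_original_image(cc, rc, poly_dc, poly_dr, a0, b0):
    c_approx, r_approx = cc - a0, rc - b0   # approximation (cc - a0, rc - b0)
    dc = poly_dc(c_approx, r_approx)        # compensation increments at the
    dr = poly_dr(c_approx, r_approx)        # approximate original position
    return cc - dc, rc - dr                 # original image point (c, r)
```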
After obtaining the parameters for the image polynomial compensation function, high-precision ortho-rectification of SAR images can be achieved with the support of a digital elevation model (DEM), a process typically realized through an indirect method. Based on the geolocation coordinates of an image’s four corner points, the corrected mapping image range can be determined. For each corrected image point, the object plane coordinates of the point can be calculated according to the range and the resolution of the corrected image, and the elevation value can be extracted from the DEM; thus, the three-dimensional object coordinates of the point can be obtained. Subsequently, the corresponding position in the original image can be calculated, and the pixel value can be resampled to match the corrected image.
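The indirect method described above amounts to the following loop; dem_height() and ground_to_image() are assumed stand-ins for the DEM interpolation and the ICRD object-to-image mapping of Sections 2.3 and 2.4, and bilinear resampling is one common choice:

```python
# Sketch of indirect ortho-rectification: for every corrected pixel, find
# its object coordinates, look up the DEM, map back into the raw image,
# and resample. All helper names are assumptions.
import numpy as np

def bilinear(img, r, c):
    r0, c0 = int(r), int(c)
    fr, fc = r - r0, c - c0
    return ((1 - fr) * (1 - fc) * img[r0, c0] + (1 - fr) * fc * img[r0, c0 + 1]
            + fr * (1 - fc) * img[r0 + 1, c0] + fr * fc * img[r0 + 1, c0 + 1])

def orthorectify(src, x0, y0, res, out_h, out_w, dem_height, ground_to_image):
    out = np.zeros((out_h, out_w), dtype=src.dtype)
    for i in range(out_h):
        for j in range(out_w):
            X, Y = x0 + j * res, y0 - i * res    # object plane coordinates
            Z = dem_height(X, Y)                 # elevation from the DEM
            c, r = ground_to_image((X, Y, Z))    # position in the raw image
            if 0 <= r < src.shape[0] - 1 and 0 <= c < src.shape[1] - 1:
                out[i, j] = bilinear(src, r, c)  # resample the pixel value
    return out
```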

3. Experiments

The purposes of the conducted experiments were to test the precision of the four aforementioned image compensation models (the one-, three-, four-, and six-parameter models) under different GCP configurations, to compare their accuracy with that of the classical R–D refinement model, and to evaluate the effectiveness of the proposed method across different satellite and airborne SAR images.
In these experiments, we adopted SAR data sets from three platforms and four sensors—namely, the spaceborne X-band TerraSAR-X, spaceborne C-band GF3-C, manned airborne X-band SAR-X, and low-altitude unmanned aerial vehicle (UAV)-mounted Ku-band miniSAR-Ku image data sets.
The classical R–D refinement model, developed by the Chinese Academy of Surveying and Mapping and integrated into the SARplore software (Version 3.3) product, was used for comparative studies. The unknowns of the model comprised sensor position, velocity, initial slant range, and Doppler parameters. Sensor position and velocity errors were represented using a linear refinement model, and a combined adjustment approach was employed to solve all unknowns simultaneously.
Under given initial orientation parameters, refinement parameters, and ground elevation, there exists a strict transformation between object space coordinates and image coordinates. Accuracy assessment can therefore be conducted using either the ground coordinates of reference points (assessing geolocation accuracy) or their image coordinates (assessing orientation accuracy). In the former method, the image coordinates and elevation of a reference point are treated as known parameters, and the refined model is used to calculate its geolocation plane coordinates, which are compared with the plane coordinates of the reference point. In the latter method, the refined R–D model and the object coordinates of the reference points are used to calculate image coordinates, which are compared with the measured image coordinates of the reference points. For the geolocation accuracy values in Tables 1–6, we first calculated the planar error for each CP from the positioning errors $dX_i$ and $dY_i$ using the formula $d_{p_i} = \sqrt{dX_i^2 + dY_i^2}$; the geolocation accuracy over all n CPs was then computed using $m_p = \sqrt{\sum d_{p_i}^2 / n}$ (unit: meters). The image orientation accuracy was calculated similarly, by computing the errors in the image coordinates of the reference points and then the mean error (unit: pixels). When different models are compared under the same conditions, the same set of GCPs and check points (CPs) is used, so that differences in point errors and distributions do not affect the comparison. When an experiment required more GCPs than the available reference points allowed, we employed the leave-one-out cross-validation (LOOCV) method for accuracy validation [40]: each reference point is selected in turn as the CP, the remaining points serve as GCPs, and the errors over all CPs are tallied to obtain the accuracy. In the following experiments, LOOCV was applied whenever the number of CPs equaled the total number of reference points.
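The accuracy statistics and the LOOCV procedure can be summarized in the following sketch, where fit() and predict() are placeholders for any of the five refinement models:

```python
# Sketch of the accuracy measures used in Tables 1-6 and of leave-one-out
# cross-validation; fit()/predict() are placeholders, not a real API.
import numpy as np

def geolocation_rmse(dX, dY):
    """m_p = sqrt(sum(d_p_i^2) / n), with d_p_i = sqrt(dX_i^2 + dY_i^2)."""
    dp2 = np.asarray(dX, float) ** 2 + np.asarray(dY, float) ** 2
    return np.sqrt(dp2.mean())

def loocv_rmse(points, fit, predict):
    """Each reference point serves once as the CP; the rest act as GCPs."""
    dX, dY = [], []
    for k in range(len(points)):
        model = fit(points[:k] + points[k + 1:])   # fit on all but point k
        ex, ey = predict(model, points[k])         # planar errors at held-out CP
        dX.append(ex)
        dY.append(ey)
    return geolocation_rmse(dX, dY)
```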

3.1. Experiments on Spaceborne SAR Images

3.1.1. Experimental Data

The X-band TerraSAR-X images and GF3-C images were selected for the spaceborne SAR experiments.
(1)
TerraSAR-X Images
The TerraSAR-X images and reference points used in this study have also been previously used in the literature [41] as an experimental data set. This image data set was acquired on 5 September 2012, at a center longitude and latitude of E 120.78° and N 50.37°, respectively. The test area is located near Yigen Farm to the east of Eerguna City in the Inner Mongolia Autonomous Region, China. The average elevation of the study area is 706 m. This area is characterized by relatively flat terrain, consisting of bare soil, farmland, and woodland. The image resolution is 2.614 m in the azimuth and 0.909 m in the slant range direction. A total of 21 corner reflectors (numbered 01 to 21) were strategically placed along approximately straight lines in both the range and azimuth directions. Figure 2 illustrates the image, including the distribution of corner reflectors (a), and a part of the TerraSAR-X image (b) indicated by the blue box in (a).
(2)
GF3-C Image
GF3-C is China’s first C-band multi-polarization synthetic aperture radar imaging satellite, with a maximum resolution of 1 m. The image selected for the test was acquired on 27 September 2019, with dimensions of 16,886 × 24,878 pixels, and was captured from a descending orbit. The slant range resolution of the image is 1.54 m, and the azimuth resolution is 1.12 m. The test area is located in the northeast corner of Beijing, China, characterized by flat and mountainous terrains with elevations ranging from 10 m to 400 m. In the area covered by the image, 12 uniformly distributed reference points (labeled A01–A12) are shown, as illustrated in Figure 3a. A part of the image is shown in Figure 3b.

3.1.2. Experimental Results

(1)
TerraSAR-X Images
Different configurations of GCPs and CPs selected from the reference points were designed for accuracy validation. The GCP numbers selected were zero, one (labeled 01), three (01 to 03), six (01 to 06), nine (01 to 09), twelve (01 to 12), and twenty (LOOCV). When zero GCPs were used, the original data set was utilized for R–D positioning; in this case, the five models are identical. Positioning tests were performed using the ICRD models with one, three, four, and six parameters, along with the classical R–D refinement model. The values in Table 1 represent the image geolocation accuracy in meters; a "/" signifies that there is no value at that position.
Table 1. Geolocation accuracy in a TerraSAR-X image for the five models with different GCPs (unit: meters).

| GCP/CP Number | Classical Model | One-Parameter Model | Three-Parameter Model | Four-Parameter Model | Six-Parameter Model |
|---|---|---|---|---|---|
| 0/21 | 1.88 | 1.88 | 1.88 | 1.88 | 1.88 |
| 1/20 | 1.25 | 1.21 | / | / | / |
| 3/19 | 0.74 | 0.82 | 0.75 | / | / |
| 6/15 | 0.40 | 0.67 | 0.41 | 0.45 | 0.47 |
| 12/9 | 0.33 | 0.59 | 0.32 | 0.30 | 0.29 |
| 20/21 (LOOCV) | 0.33 | 0.58 | 0.32 | 0.31 | 0.31 |
Under the condition of not having any GCPs with which to refine the R–D model, the mean square errors of orientation accuracy in the range/azimuth direction were 0.62/0.39 pixels, and the average error absolute values for all CPs were 0.45/0.37 pixels. For the three-parameter model with six GCPs, the accuracies in the range/azimuth directions were 0.09/0.09 pixels for GCPs and 0.11/0.12 pixels for CPs.
(2)
GF3-C Image
In this experiment, accuracy was assessed using different GCPs and CPs. The quantities of GCPs chosen were zero, one (labeled A01), three (A01 to A03), six (A01 to A06), nine (A01 to A09), and eleven (LOOCV). The one-, three-, four-, and six-parameter ICRD models, along with the classical R–D refinement model, were employed to compute the positioning errors; the geolocation accuracy for the CPs is presented in Table 2.
Table 2. Geolocation accuracy with respect to the GF-3 image for the five refinement models with different GCPs (unit: meters).

| GCP/CP Number | Classical Model | One-Parameter Model | Three-Parameter Model | Four-Parameter Model | Six-Parameter Model |
|---|---|---|---|---|---|
| 0/12 | 42.42 | 42.42 | 42.42 | 42.42 | 42.42 |
| 1/11 | 3.11 | 3.13 | / | / | / |
| 3/9 | 2.78 | 2.83 | 2.80 | / | / |
| 6/6 | 1.88 | 2.01 | 1.90 | 1.95 | 2.03 |
| 9/3 | 1.59 | 1.81 | 1.55 | 1.56 | 1.59 |
| 11/12 (LOOCV) | 1.40 | 1.64 | 1.41 | 1.38 | 1.37 |
In the absence of GCPs, the orientation accuracies in the range/azimuth directions were recorded as 16.84/2.48 pixels, while the mean error absolute values of all CPs were 16.82/2.26 pixels. Under the refinement of a three-parameter model with six GCPs, the precision values for GCPs in the range/azimuth directions were 0.55/0.68 pixels, while those for CPs were 0.63/0.65 pixels.

3.2. Experiments Conducted on Manned Airborne SAR Images

3.2.1. Experimental Data

The X-band manned airborne SAR sensor was developed by the Aerospace Information Research Institute of the Chinese Academy of Sciences. This data set was acquired in 2012 and constitutes early Chinese airborne SAR experimental data. Rigorous indoor and outdoor geometric calibrations before flying and imaging were not undertaken for the sensor. The system was equipped with a high-precision POS (POS510) capable of obtaining highly accurate sensor position and attitude parameters. The test area is located in Gaocheng Town, Dengfeng, Henan Province, China, with an elevation ranging from 230 to 350 m. It is an area characterized by hilly terrain. Surveying flights were conducted in both east–west and west–east directions. Two scenes of images with a resolution of 0.15 m—namely, the airSAR-X-1 image from the east–west flight and the airSAR-X-2 image from the west–east flight—were selected for geometric positioning experiments. There are 20 and 17 reference points in the two images, respectively. Their distribution and parts of the two images are shown in Figure 4.

3.2.2. Experimental Results

Different GCPs were selected to conduct accuracy comparison tests for the above models. For the airSAR-X-1 image, GCPs were chosen in quantities of zero, one (labeled C01), three (C01 to C03), six (C01 to C06), twelve (C01 to C12), and nineteen (LOOCV). For the airSAR-X-2 image, GCPs were selected in quantities of zero, one (D01), three (D01 to D03), six (D01 to D06), twelve (D01 to D12), and sixteen (LOOCV). The above five types of R–D refinement models were realized using different numbers of GCPs, and the geolocation accuracies of CPs for each model are indicated in Table 3 and Table 4.
Table 3. Geolocation accuracy in the airSAR-X-1 image for the five models with different GCPs (unit: meters).

| GCP/CP Number | Classical Model | One-Parameter Model | Three-Parameter Model | Four-Parameter Model | Six-Parameter Model |
|---|---|---|---|---|---|
| 0/20 | 174.50 | 174.50 | 174.50 | 174.50 | 174.50 |
| 1/19 | 0.65 | 0.59 | / | / | / |
| 3/17 | 0.63 | 0.58 | 0.61 | / | / |
| 6/6 | 0.42 | 0.44 | 0.40 | 0.43 | 0.45 |
| 12/8 | 0.32 | 0.43 | 0.33 | 0.30 | 0.30 |
| 19/20 (LOOCV) | 0.29 | 0.40 | 0.28 | 0.27 | 0.25 |
Table 4. Geolocation accuracy in the airSAR-X-2 image for the five models with different GCPs (unit: meters).

| GCP/CP Number | Classical Model | One-Parameter Model | Three-Parameter Model | Four-Parameter Model | Six-Parameter Model |
|---|---|---|---|---|---|
| 0/17 | 183.74 | 183.74 | 183.74 | 183.74 | 183.74 |
| 1/16 | 0.49 | 0.49 | / | / | / |
| 3/14 | 0.39 | 0.41 | 0.37 | / | / |
| 6/11 | 0.34 | 0.40 | 0.33 | 0.35 | 0.37 |
| 12/5 | 0.30 | 0.37 | 0.31 | 0.28 | 0.28 |
| 16/17 (LOOCV) | 0.23 | 0.38 | 0.24 | 0.22 | 0.20 |
For the original, unrefined R–D model, the airSAR-X-1 image orientation accuracies regarding range/azimuth were 514.85/4.83 pixels, and the mean error absolute values for all CPs were 514.85/4.77 pixels. The airSAR-X-2 orientation accuracies were 514.14/4.42 pixels, and the mean absolute values were 514.14/4.39 pixels.

3.3. Experiments Conducted on UAV-Mounted miniSAR-Ku Strip Images

3.3.1. Experimental Data

The Ku-band UAV SAR data set (with a wavelength of 0.0205 m) utilized in this study was developed by the Aerospace Information Research Institute of the Chinese Academy of Sciences. An octocopter UAV was employed as the payload platform for the sensor. The miniSAR-Ku system was used to acquire the experimental data set in June 2021 over Anyang, Henan Province, China. The UAV flew at a relative altitude of 350 m. Two strip images, miniKu-1 and miniKu-2, from two flight lines were selected for experiments. The azimuth resolution is 0.080 m and the slant range resolution is 0.125 m. Both images have a column width of 3536 pixels, while the miniKu-1 image has a row height of 28,800 pixels and the miniKu-2 image has a row height of 26,752 pixels. These sizes indicate that the two data sets include long-strip images. Both strip images have uniformly distributed reference points, with 28 in miniKu-1 and 38 in miniKu-2. The distribution of some reference points used as GCPs in the following experiments and parts of the two images are shown in Figure 5.

3.3.2. Experimental Results

In these experiments, we used different numbers of GCPs and CPs to validate the accuracy of the five models. In particular, zero, one (labeled G01/F01), three (G01/F01 to G03/F03), six (G01/F01 to G06/F06), twelve (G01/F01 to G12/F12), and twenty (G01/F01 to G20/F20) GCPs were selected, and the accuracies of the five types of refinement models are summarized in Table 5 and Table 6.
Table 5. Geolocation accuracy in miniKu-1 for the five models with different GCPs (unit: meters).

| GCP/CP Number | Classical Model | One-Parameter Model | Three-Parameter Model | Four-Parameter Model | Six-Parameter Model |
|---|---|---|---|---|---|
| 0/28 | 1.36 | 1.36 | 1.36 | 1.36 | 1.36 |
| 1/27 | 0.57 | 0.61 | / | / | / |
| 3/25 | 0.38 | 0.47 | 0.39 | / | / |
| 6/22 | 0.32 | 0.40 | 0.30 | 0.31 | 0.36 |
| 12/16 | 0.17 | 0.33 | 0.17 | 0.16 | 0.17 |
| 20/8 | 0.16 | 0.29 | 0.17 | 0.13 | 0.12 |
Table 6. Geolocation accuracy in miniKu-2 for the five models with different GCPs (unit: meters).

| GCP/CP Number | Classical Model | One-Parameter Model | Three-Parameter Model | Four-Parameter Model | Six-Parameter Model |
|---|---|---|---|---|---|
| 0/38 | 1.19 | 1.19 | 1.19 | 1.19 | 1.19 |
| 1/37 | 0.61 | 0.53 | / | / | / |
| 3/35 | 0.39 | 0.38 | 0.38 | / | / |
| 6/32 | 0.32 | 0.38 | 0.33 | 0.34 | 0.38 |
| 12/26 | 0.20 | 0.32 | 0.18 | 0.20 | 0.21 |
| 20/18 | 0.20 | 0.31 | 0.20 | 0.18 | 0.17 |
In the absence of GCPs, the image orientation accuracies for the miniKu-1 image were 8.45/2.76 pixels in the distance/azimuth direction, and the mean absolute values for all CPs were 7.97/2.45 pixels. For the three-parameter model with 12 GCPs, the accuracies were 0.86/1.02 pixels for the GCPs and 0.95/0.91 pixels for the CPs. Similarly, in the absence of GCPs, the orientation accuracies for the miniKu-2 image were 7.29/1.81 pixels, with mean absolute values of 7.22/1.42 pixels. For the three-parameter model with 12 GCPs, the GCP accuracies were 0.89/0.77 pixels, and the CP accuracies were 1.01/0.89 pixels.
For the miniKu-2 image, 12 GCPs were utilized to refine the three-parameter model and the classical model, and the refined parameters were subsequently employed to ortho-rectify the image. The Digital Orthophoto Map (DOM) obtained using the three-parameter model and DEM from AW3D30-V2.2 [42] is shown in Figure 6, and the stitched images in two local regions (indicated by the boxes in Figure 6) for both models are displayed in Figure 7. In the stitched images of the local areas, part A and part C (top-left and bottom-right) represent the corrected results achieved using the three-parameter model, while part B and part D (bottom-left and top-right) show the corrected results obtained using the classical model.

4. Discussion

From the comparative experiments using various images, GCPs, and models, it is evident that the ICRD model enables the high-precision geometric processing of SAR images, albeit with varying accuracy performance under different conditions.
In the airSAR-X image experiments, it was observed that the two uncalibrated images exhibited geolocation accuracies of 174.5 and 183.7 m, constituting a difference of nearly 10 m. When considering orientation accuracy, the two image orientation accuracies in the range/azimuth direction were 514.85/4.83 pixels and 514.14/4.42 pixels, respectively, with the difference being less than 1 pixel. Within the same scene imagery or among images acquired with the same sensor in close temporal proximity, the image orientation residual errors demonstrated greater consistency, compared to the errors for geolocation coordinates. The tests for airborne miniSAR-Ku and spaceborne SAR images exhibited a similar trend. This consistency provides further experimental support for the application of compensation functions in imagery as an alternative approach to achieving high-precision SAR image positioning. The airSAR-X tests indicated that, even though internal and external geometric calibrations were not carried out for the sensors and the original parameters of the image revealed poor accuracy, satisfactory accuracy can still be achieved using the compensation model and sparse GCPs, suggesting that low-order polynomials can effectively absorb the significant image errors derived from coarse sensor parameters.
The one-parameter model is capable of geometric positioning across various GCP configurations. Even for images whose original parameters are insufficiently accurate, such as the airborne SAR data, a single GCP could significantly enhance the performance. The positioning results, with both sparse and sufficient GCPs, demonstrated that the one-parameter model enhances the accuracy of spaceborne SAR images more significantly than that of the airSAR-X and miniSAR-Ku images, readily achieving near-pixel precision; this indicates that the errors of satellite images exhibit more regularity than those of airborne images. The three-parameter model consistently yielded optimal performance for all tested SAR image types. Its accuracy was comparable to that of the classical refinement model, regardless of GCP density, and the differences in geolocation accuracy were negligible compared to the image resolution. Moreover, when there were enough GCPs, the three-parameter linear model was superior to the one-parameter model. Comparing the four- and six-parameter models with the three-parameter model, when there were enough GCPs, the four- and six-parameter models outperformed the three-parameter model; however, the distinction in accuracy between the four- and six-parameter models was minimal relative to the image resolution. When GCPs were sparse, the accuracy of both models with respect to the CPs was lower than that of the three-parameter model, and the accuracy of the six-parameter model was lower than that of the four-parameter model. This means that sparse GCPs render multi-parameter models prone to overfitting.
From the experimental results, it is evident that the three-parameter model reaches an accuracy comparable to that of the classical refinement model for satellite images. Under the condition of sufficient GCPs, the four-parameter model slightly outperforms the three-parameter model and the classical model, with regard to the positioning of large-width or long-strip images. Furthermore, in scenarios with sparse GCPs, the one-parameter model can achieve an accuracy approximately comparable to that of the classical model. The experimental results demonstrate that, through selecting the appropriate compensation model based on the number and distribution of control points, it is possible to achieve—or even slightly exceed—the precision of the classical model.
The ICRD shares similarities with the RFM for satellite imagery in its processing approach: both employ low-order polynomial compensation in the image space to mitigate imaging errors. Taking the imagery from the two spaceborne sensors as research subjects, we compared the results of the proposed ICRD model with those of the RFM. Based on the R–D model, we first fitted the RPC parameters for the corresponding images [21,28], achieving fitting accuracy losses of less than 10^-4 and 10^-3 pixels for TerraSAR-X and GF-3, respectively. We then applied the one-parameter and three-parameter compensation models in the image space of the RFM and compared the results with those obtained under the same experimental conditions. The differences in accuracy under the various conditions were all less than 10^-3 pixels. For spaceborne SAR, the related studies [21,28,29] and the results of this paper mutually verify that the positioning accuracies of the ICRD model, the classical R–D model, and the RFM are highly consistent under the same conditions.

5. Conclusions

Due to the complexity of SAR rigorous model refinement and the empirical nature of its precision assignment, among other shortcomings, the number of commercial remote sensing software packages supporting high-precision SAR image positioning with rigorous models is significantly smaller than that for optical images. In this study, we presented an image compensation-based R–D (ICRD) model that achieves high-precision geometric positioning and can be used for the geometric correction of spaceborne and airborne SAR images. The experimental results demonstrate that the proposed model achieves the same accuracy as the classical R–D refinement model and can be applied to the same range of scenarios. Compared to the classical refinement models, however, the ICRD model is much simpler and easier to implement. As the number of parameters to be solved is small, its solution is more robust under sparse GCP conditions. Furthermore, the model can be applied not only to spaceborne but also to airborne SAR images, whereas the RFM-based approach is usually limited to spaceborne SAR images.
The method presented in this paper not only significantly simplifies the positioning process of the classical R–D model but also provides a foundation for ensuring consistency in the positioning and geometric correction of imagery from various aerospace SAR sensors. As the precision information of the original auxiliary parameters of the imagery is not required during modeling, we can depart from the conventional approach of tailoring models to specific sensors to achieve high-precision positioning with the R–D model. In the preprocessing stage, the image parameters of the various sensors are simply extracted and incorporated into standardized structural data. During the intermediate positioning calculations, the software provides a unified algorithm and program for the mutual conversion between image space and object space coordinates. In the parameter optimization phase, the four image compensation models exhibit distinct characteristics under varying GCP conditions, and a model can be constructed and selected according to requirements. Depending on the image type and characteristics, as well as the number, accuracy, and distribution of the GCPs, a program can be designed to automatically select the most suitable compensation model. Alternatively, following the method outlined in [42], the program can utilize the four-parameter model and treat the refined model parameters as weighted virtual observations within the adjustment model, enabling the image compensation model to meet the requirements of various scenarios and conditions.
SAR systems mainly include ground-based, airborne, and spaceborne SAR; however, whether the proposed method is suitable for ground-based SAR applications has not yet been studied. In addition, our model and approach, like the classical R–D model, require imaging, sensor, and orbital parameters; without this information, the method cannot be implemented. The methods in this paper were mainly applied to SAR data formed by R–D imaging; images formed by other methods, such as back-projection imaging, have not been verified.
Although the image refinement model proposed in this article has the advantages of simple computation, high accuracy, robust solutions, and broad applicability, this study mainly focused on the application of single-scene SAR images, confirming the feasibility of the models and providing a basis for further applications. Stereo positioning and block adjustment of multi-view SAR images are important aspects of SAR geometric processing. In light of the characteristics and advantages of the proposed methods, extension of the ICRD model to the stereo positioning of multi-view SAR images, block adjustment, and other related applications under conditions of few or no GCPs is worth further in-depth study.

Author Contributions

Conceptualization, Y.D.; data curation, K.C.; investigation, K.C.; methodology, Y.D. and K.C.; supervision, Y.D.; validation, K.C.; writing—original draft preparation, K.C.; writing—review and editing, Y.D. and K.C.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grants 42171444 and 42301516, the Beijing Natural Science Foundation Project—Municipal Education Commission Joint Fund Project (No. KZ202110016021), the Beijing Municipal Education Commission Scientific Research Project—Science and Technology Plan General Project (No. KM202110016005), and the Fundamental Research Funds for the Beijing University of Civil Engineering and Architecture (No. X20043).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the Aerospace Information Research Institute of the Chinese Academy of Sciences and the Chinese Academy of Surveying and Mapping for providing airborne SAR data and SARplore software. They are also deeply appreciative of Peking University for providing spaceborne SAR data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, B.; Dong, X.; Deng, M.; Wan, F.; Wang, T.; Li, X.; Zhang, G.; Cheng, Q.; Lv, S. Geolocation Accuracy Validation of High-Resolution SAR Satellite Images Based on the Xianning Validation Field. Remote Sens. 2023, 15, 1794. [Google Scholar] [CrossRef]
  2. Wei, Y.; Zhao, R.; Fan, Q.; Dai, J.; Zhang, B. Improvement of the spaceborne synthetic aperture radar stereo positioning accuracy without ground control points. Photogramm. Record. 2024, 39, 118–140. [Google Scholar] [CrossRef]
  3. Hao, X.; Zhang, H.; Wang, Y.; Wang, J. A framework for high-precision DEM reconstruction based on the radargrammetry technique. Remote Sens. Lett. 2019, 10, 1123–1131. [Google Scholar] [CrossRef]
  4. Chang, Y.; Xu, Q.; Xiong, X.; Jin, G.; Hou, H.; Cui, R. A Robust Method for Block Adjustment of UAV SAR Images. IEEE Access 2023, 11, 43975–43984. [Google Scholar] [CrossRef]
  5. Leberl, F. Radargrammetric Image Processing; Artech House Inc.: Norwood, MA, USA, 1990. [Google Scholar]
  6. You, H.; Ding, C.; Fu, K. SAR image localization using rigorous SAR collinearity equation model. Acta Geod. CARTO Graph. Sin. 2007, 2, 158–162. [Google Scholar]
  7. Curlander, J. Location of spaceborne SAR imagery. IEEE Trans. Geosci. Remote Sens. 1982, 20, 359–364. [Google Scholar] [CrossRef]
  8. Wang, M.; Zhang, J.; Deng, K.; Hua, F. Combining optimized SAR-SIFT features and R-D model for multisource SAR image registration. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5206916. [Google Scholar] [CrossRef]
  9. Sansosti, E. A simple and exact solution for the interferometric and stereo SAR geolocation problem. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1625–1634. [Google Scholar] [CrossRef]
  10. Chen, P.; Dowman, I. A weighted least squares solution for space intersection of spaceborne stereo SAR data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 233–240. [Google Scholar] [CrossRef]
  11. Wu, Y. An airborne SAR image target location algorithm based on parameter refining. J. Electron. Inf. Technol. 2019, 41, 1063–1068. [Google Scholar]
  12. Schubert, A.; Jehle, M.; Small, D.; Meier, E. Mitigation of atmospheric perturbations and solid earth movements in a TerraSAR-X time-series. J. Geod. 2012, 86, 257–270. [Google Scholar] [CrossRef]
  13. Wang, S.; Meng, X.; Lou, L.; Fang, M.; Liu, Z. Target location performance evaluation of single SAR image of TH-2 satellite system. Acta Geod. Cartogr. Sin. 2022, 51, 2501–2507. [Google Scholar]
  14. Luo, Y.; Qiu, X.; Dong, Q.; Fu, K. A Robust Stereo Positioning Solution for Multiview Spaceborne SAR Images Based on the Range-Doppler Model. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4008705. [Google Scholar] [CrossRef]
  15. Liu, X.; Teng, X.; Li, Z.; Yu, Q.; Bian, Y. A fast algorithm for high accuracy airborne SAR geolocation based on local linear approximation. IEEE Trans. Instrum. Meas. 2022, 71, 5501612. [Google Scholar] [CrossRef]
  16. ISO 19130-1; Geographic Information Imagery Sensor Models for Geopositioning Part 1: Fundamentals. First Edition. ISO: Geneva, Switzerland, 2018. Available online: http://www.iso.org (accessed on 1 May 2023).
  17. Zhang, B.; Yu, A.; Chen, X.; Tang, F.; Zhang, Y. An image planar positioning method base on fusion of dual-view airborne SAR data. Remote Sens. 2023, 15, 2499. [Google Scholar] [CrossRef]
  18. Huang, G.; Yue, X.; Zhao, Z. Block adjustment with airborne SAR images based on polynomial ortho-rectification. Geom. Inf. Sci. Wuhan Univ. 2008, 6, 569–572. [Google Scholar]
  19. Grodecki, J.; Dial, G. Block adjustment of high-resolution satellite images described by rational polynomials. Photogramm. Eng. Remote Sens. 2003, 69, 59–68. [Google Scholar] [CrossRef]
20. Kim, N.; Choi, Y.; Bae, J.; Sohn, H. Estimation and improvement in the geolocation accuracy of rational polynomial coefficients with minimum GCPs using KOMPSAT-3A. GISci. Remote Sens. 2020, 57, 719–734.
21. Zhang, G.; Fei, W.; Li, Z.; Zhu, X.; Tang, X. Analysis and Test of the Substitutability of the RPC Model for the Rigorous Sensor Model of Spaceborne SAR Imagery. Acta Geod. Cartogr. Sin. 2010, 39, 264–270.
22. Zhang, L.; He, X.; Balz, T.; Wei, X.; Liao, M. Rational function modeling for spaceborne SAR datasets. ISPRS J. Photogramm. Remote Sens. 2011, 66, 133–145.
23. Eftekhari, A.; Saadatseresht, M.; Motagh, M. A study on rational function model generation for TerraSAR-X imagery. Sensors 2013, 13, 12030–12043.
24. Miao, H.; Wang, Y.; Zhang, B.; Huang, Q. Influence of the motion error on airborne SAR geolocation accuracy. Electron. Meas. Technol. 2007, 1, 63–67.
25. Doerry, A.W.; Bickel, D.L. Motion Measurement Impact on Synthetic Aperture Radar (SAR) Geolocation. In Proceedings of the SPIE 2021 Defense & Commercial Sensing Symposium, Vol. 11742; Sandia National Laboratories: Albuquerque, NM, USA, 2021. Available online: https://www.osti.gov/servlets/purl/1844831 (accessed on 1 September 2023).
26. Ma, J.; You, H.; Hu, D. Block adjustment of InSAR images based on the combination of F. Leberl and interferometric models. J. Infrared Millim. Waves 2012, 31, 271–276.
27. Cheng, C.; Zhang, J.; Huang, G.; Zhang, L. Range-Cocone equation with Doppler parameter for SAR imagery positioning. J. Remote Sens. 2013, 17, 1444–1458.
28. Zhang, G.; Fei, W.; Li, Z.; Zhu, X.; Li, D. Evaluation of the RPC model for spaceborne SAR imagery. Photogramm. Eng. Remote Sens. 2010, 76, 727–733.
29. Wei, X.; Zhang, L.; He, X.; Liao, M. Spaceborne SAR image geocoding with RFM model. J. Remote Sens. 2012, 16, 1089–1099.
30. Zhang, G.; Wu, Q.; Wang, T.; Zhao, R.; Deng, M.; Jiang, B.; Li, X.; Wang, H.; Zhu, Y.; Li, F. Block adjustment without GCPs for Chinese spaceborne SAR GF-3 imagery. Sensors 2018, 18, 4023.
31. Wang, T.; Zhang, G.; Li, D.; Zhao, R.; Deng, M.; Zhu, T.; Yu, L. Planar block adjustment and orthorectification of Chinese spaceborne SAR YG-5 imagery based on RPC. Int. J. Remote Sens. 2018, 39, 640–654.
32. Kim, T.J.; Fellerhoff, J.R.; Kohler, S.M. An Integrated Navigation System Using GPS Carrier Phase for Real-Time Airborne/Synthetic Aperture Radar (SAR). J. Inst. Navig. 2001, 48, 13–24.
33. Papazoglou, M.; Tsioras, C. Integrated SAR/GPS/INS for target geolocation improvement. J. Comput. Model. 2014, 4, 12.
34. Rigling, B.D.; Moses, R.L. Motion measurement errors and autofocus in bistatic SAR. IEEE Trans. Image Process. 2006, 15, 1008–1016.
35. Manzoni, M.; Tagliaferri, D.; Rizzi, M.; Tebaldini, S.; Guarnieri, A.V.; Prati, C.M.; Nicoli, M.; Russo, I.; Duque, S.; Mazzucco, C.; et al. Motion Estimation and Compensation in Automotive MIMO SAR. IEEE Trans. Intell. Transp. Syst. 2023, 24, 1756–1772.
36. Hong, S.; Choi, Y.; Park, I.; Sohn, H.G. Comparison of orbit-based and time-offset-based geometric correction models for SAR satellite imagery based on error simulation. Sensors 2017, 17, 170.
37. Xiong, X.; Jin, G.; Xu, Q.; Zhang, H. Block adjustment with airborne SAR very high-resolution images using trajectory constraints. Int. J. Remote Sens. 2018, 39, 2383–2398.
38. Cheng, C.; Zhang, J.; Huang, G.; Zhang, L. Combined Positioning of TerraSAR-X and SPOT-5 HRS Images with RFM Considering Accuracy Information of Orientation Parameters. Acta Geod. Cartogr. Sin. 2017, 46, 179–187.
39. Yuan, X. POS-supported bundle block adjustment. Acta Geod. Cartogr. Sin. 2008, 3, 342–348.
40. Brovelli, M.; Crespi, M.; Fratarcangeli, F.; Giannone, F.; Realini, E. Accuracy Assessment of High Resolution Satellite Imagery by Leave-One-Out Method. In Proceedings of the 7th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Lisbon, Portugal, 5–7 July 2006; pp. 533–542.
41. Zhou, X.; Zeng, Q.; Jiao, J. Analysis of TerraSAR-X sensor calibration accuracy and its application. Remote Sens. Inf. 2014, 29, 31–35.
42. JAXA (Japan Aerospace Exploration Agency). 2023. Available online: https://www.eorc.jaxa.jp/ALOS/en/aw3d30/data/index.htm (accessed on 1 September 2023).
Figure 1. Technical roadmap of the ICRD model positioning.
Figure 2. Layout of the corner reflectors in the image (a) and a part of the TerraSAR-X image (b). The numbers in (a) are the serial numbers of the corner reflectors, and the blue square in (a) marks the area shown in (b).
Figure 3. Distribution of the reference points (a) and a section of the GF3-C image (b). The numbers in (a) are the serial numbers of the reference points.
Figure 4. Distribution of the reference points and parts of the airSAR-X images. (a) Distribution of the reference points in the airSAR-X-1 image. (b) Part of the airSAR-X-1 image. (c) Distribution of the reference points in the airSAR-X-2 image. (d) Part of the airSAR-X-2 image. The numbers in (a,c) are the serial numbers of the reference points.
Figure 5. GCP distribution and parts of the two UAV miniSAR-Ku images: GCP distribution in the miniSAR-Ku-1 (a) and miniSAR-Ku-2 (b) images; parts of the miniSAR-Ku-1 (c) and miniSAR-Ku-2 (d) images. The numbers in (a,b) are the serial numbers of the GCPs.
Figure 6. Overall view of the miniSAR-Ku-2 DOM, corrected using the three-parameter model with 12 GCPs and the AW3D30 DEM. The blue-boxed areas in the bottom-left and top-right correspond to Figure 7a and Figure 7b, respectively.
Figure 7. Stitching of local DOMs generated using the three-parameter model and the classical model. (a,b) show the blue-boxed areas in the bottom-left and top-right of Figure 6, respectively. A and C were generated by the three-parameter model, while B and D were generated by the classical model.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
