Article

Calibration of SAR Polarimetric Images by Covariance Matching Estimation Technique with Initial Search

1
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2
School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
3
School of Information Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
4
School of Electronic Information Engineering, University of Beihang, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(13), 2400; https://doi.org/10.3390/rs16132400
Submission received: 27 May 2024 / Revised: 22 June 2024 / Accepted: 25 June 2024 / Published: 29 June 2024

Abstract

To date, various methods have been proposed for calibrating polarimetric synthetic aperture radar (SAR) using distributed targets. Some studies have utilized the covariance matching estimation technique (Comet) for SAR data calibration. However, practical applications have revealed ill-conditioning issues stemming from the analytical solution used in the iterative process. To tackle this challenge, an improved method called Comet IS is introduced. First, we introduce an outlier detection mechanism based on the results of the Quegan algorithm. Next, we incorporate an initial search approach based on the interior point method for recalibration. With the outlier detection mechanism in place, the algorithm can recalibrate iteratively until the results are correct. Simulation experiments reveal that the improved algorithm outperforms the original one. We also compare the improved method with the Quegan and Ainsworth algorithms, demonstrating its superior calibration performance, and validate its advancement using real data and corner reflectors. Compared with the other two algorithms, the improvement in crosstalk isolation and channel imbalance is significant. This research provides a more reliable and effective approach for polarimetric SAR calibration, which is significant for enhancing SAR imaging quality.

1. Introduction

Polarimetric Synthetic Aperture Radar (PolSAR) technology stands as a pivotal technique in high-resolution microwave imaging. PolSAR systems use both power and relative phase information from imagery to reflect differences in target scattering characteristics. As a result, PolSAR technology holds significant potential and value in various applications. These include unsupervised land cover segmentation, target detection and recognition, and blind source separation. Additionally, PolSAR technology is valuable for estimating parameters such as soil moisture and biomass. SAR calibration ensures that the pixel values in the image accurately reflect the actual physical characteristics of the ground objects. This improves the accuracy of image classification and target detection. Additionally, calibrated SAR images can be effectively fused with other types of remote sensing data, such as optical images and LIDAR data, providing more comprehensive and accurate surface information [1,2,3]. However, design flaws in radar systems and variations in antenna design and manufacturing can affect the transmission and reception of signals under different environmental conditions, leading to data distortion. Channel imbalance and crosstalk, primarily caused by inconsistent channel performance and energy leakage within the polarimetric channels, are common issues. These errors can disrupt the relative relationships between polarization channels, causing inaccurate reflections of the scattering characteristics of observed terrain features and thereby affecting subsequent application processing [4,5]. Calibration estimates and corrects for imbalances and crosstalk between polarimetric channels [6]. Absolute (radiometric) calibration is not considered here.
The calibration targets can be classified into two categories [7]: point targets and distributed targets. Point targets require the selection of artificially deployed calibration targets as reference objects for calibration parameter estimation. Classical algorithms for this include the Whitt [8] and PARC [9] algorithms, of which the former is susceptible to variations in attitude and the latter is disadvantaged by the mandatory use of external power sources, making it impractical for widespread deployment in scenes. Distributed targets, on the other hand, use naturally occurring targets with well-known polarimetric scattering characteristics instead of artificial calibrators. For instance, Ainsworth [10] and Quegan [11] developed algorithmic models that combine both point and distributed targets. Both types of algorithms require low system crosstalk to function accurately, as high crosstalk (amplitude > 0.05) impairs their performance. Additionally, Ainsworth’s algorithm exhibits weak robustness to noise. Further research has been conducted on crosstalk calibration. In the study by Shi [12], methods for estimating crosstalk and imbalance when corner reflectors are unavailable were proposed. Pincus [13] derived the calibration technique for circularly polarized antennas. Han conducted an in-depth analysis of circularly polarized SAR calibration techniques under various assumptions [14].
However, there has been limited effort dedicated to fully estimating the complete polarimetric distortion model through model analysis. In [15], Villa introduced a numerical optimization technique based on covariance matching to preliminarily examine the anticipated accuracy metrics and identify uncertainties associated with distorted parameters. Comet presented a statistically optimal method evaluated for detection purposes, demonstrating results comparable to or better than those achieved with the generalized maximum-likelihood ratio index. This approach has found successful application in SAR-related fields, such as polarimetric decomposition of InSAR stacks [16]. Building on these studies, Zhang [17,18] established a loss function for estimating the crosstalk and imbalance parameters, iteratively predicting these parameters. However, this method heavily relies on the setting of initial values: poor initial values may lead to convergence to local optima or yield erroneous analytical results. Our contribution is an enhanced method for estimating the distorted parameters. Building upon Zhang's work, we analyze the underlying causes of these challenges and introduce an initial search mechanism to mitigate these shortcomings. Specifically, we introduce an outlier detection mechanism to identify outliers, which are subsequently processed using an initial search method based on the interior point method. This approach delivers an improved estimation and calibration algorithm that not only addresses the issue of inaccurate analytical solutions caused by inappropriate initial values but also generates results suitable for practical applications.
The structure of this paper unfolds as follows. Section 2 lays out the preliminaries essential to this study. Building upon this foundation, in Section 3, we scrutinize the original algorithm, identifying potential vulnerabilities, devising a method for detecting erroneous values, and presenting corresponding solutions. Subsequently, in Section 4, numerical simulation experiments are conducted, aimed at validating the important conclusions drawn on the practical use of initial search techniques. In Section 5, real data are used to demonstrate the practical use of the initial search algorithm. Conclusions are drawn in Section 6.

2. Polarimetric Calibration Model

2.1. Constraints on Model

Polarimetric calibration aims to recover the real scattering matrix from the observed matrix, which may be distorted by transmission and reception crosstalk and channel imbalance, together with noise. To counteract these distortion effects, a standardized model has been devised to correlate horizontally and vertically polarized transmitted signals with their polarized counterparts upon reception:
O = R S T + N
The standard model elucidates the transformation of the actual scattering matrix, denoted as S. The transformation accounts for both reception and transmission distortion, represented by R and T, respectively, plus the noise matrix N. As a result of these parameters, we obtain the observed scattering matrix, denoted as O.
Moreover, Equation (1) can be refined by incorporating the crosstalk ratios u, v, w, z, the channel imbalances α and h, and the system's overall gain Y:
$$O = Y \begin{bmatrix} h & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & w \\ u & 1 \end{bmatrix} S \begin{bmatrix} 1 & z \\ v & 1 \end{bmatrix} \begin{bmatrix} h & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \alpha & 0 \\ 0 & 1 \end{bmatrix} + N$$
By transforming the scattering matrix into vector form with the identity $\mathrm{vec}(ABC) = (C^{T} \otimes A)\,\mathrm{vec}(B)$, where $\otimes$ denotes the Kronecker product, and redefining the equation, we obtain:
$$\begin{bmatrix} O_{HH} \\ O_{HV} \\ O_{VH} \\ O_{VV} \end{bmatrix} = Y\,\Psi \begin{bmatrix} S_{HH} \\ S_{HV} \\ S_{VH} \\ S_{VV} \end{bmatrix} + \begin{bmatrix} N_{HH} \\ N_{HV} \\ N_{VH} \\ N_{VV} \end{bmatrix},$$
where $Y = t_{vv} r_{vv}$, $\Psi = PAH$,
$$P = \begin{bmatrix} 1 & v & w & vw \\ z & 1 & wz & w \\ u & uv & 1 & v \\ uz & u & z & 1 \end{bmatrix},$$
$$A = \mathrm{diag}\left(\alpha^{2},\ 1,\ \alpha^{2},\ 1\right),$$
$$H = \mathrm{diag}\left(h^{2},\ h,\ h,\ 1\right).$$
Subsequently, the scattering vector O can be converted into the covariance matrix $[C] = \langle O \cdot O^{H} \rangle$ to provide a conjugate-symmetric observation matrix as follows:
$$C = Y^{2}\,\Psi Z(\varrho)\Psi^{H} + \sigma_{N} I,$$
in which $Z(\varrho) = \langle S \cdot S^{H} \rangle$ and $\sigma_{N}$ denotes the mean power of the noise. In this paper, we assume that the noise between channels is mutually independent, and we employ the average noise power to represent the noise power matrix. Note that this strategy is essentially a concession aimed at resolving the issue, given the absence of a method guaranteeing equal noise powers.
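As a quick sanity check, the vectorization identity used above can be verified numerically. The sketch below uses random 2 × 2 matrices as illustrative stand-ins for R, S, and T (not calibrated values); column-major flattening matches the mathematical vec operator:

```python
import numpy as np

# Numerical check (sketch) of vec(ABC) = (C^T kron A) vec(B),
# applied to the distortion model O = R S T with random stand-ins.
rng = np.random.default_rng(0)
R = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # receive distortion (illustrative)
S = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # true scattering (illustrative)
T = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # transmit distortion (illustrative)

# Fortran-order flattening implements the column-stacking vec(.)
lhs = (R @ S @ T).flatten(order="F")
rhs = np.kron(T.T, R) @ S.flatten(order="F")
assert np.allclose(lhs, rhs)
```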
Usually, the value of Y is not considered in polarimetric calibration and is thus set to |Y| = 1. For the parameter h, its value can be easily determined using a trihedral corner reflector, which serves as an external target; this process is not discussed in detail here [19]. Instead, h is set to 1 during estimation and corrected later using corner reflectors.
In practical applications, the calibration of SAR images on a per-pixel basis entails a significant workload and may involve individual observations that deviate from the assumed model. Therefore, it is customary to select pixel blocks for batch processing, followed by interpolation of the distortion parameters within these blocks to obtain global distortion parameters. Given a collection of N pixels, the sample covariance matrix is evaluated by:
$$C = \frac{1}{N}\sum_{n=1}^{N} O_{n} O_{n}^{H}$$
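A minimal sketch of this sample covariance estimate, with synthetic scattering vectors standing in for real pixel data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
# Rows are observed scattering vectors O_n = [O_HH, O_HV, O_VH, O_VV] (synthetic)
O = rng.standard_normal((N, 4)) + 1j * rng.standard_normal((N, 4))

# C = (1/N) * sum_n O_n O_n^H  ->  a 4x4 Hermitian matrix
C = O.T @ O.conj() / N

assert np.allclose(C, C.conj().T)     # conjugate-symmetric, as stated in the text
assert np.all(np.diag(C).real >= 0)   # channel powers are nonnegative
```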
Further analysis of Equation (7) reveals that the equation comprises only 16 observed values (4 real numbers and 6 complex numbers), while posing a challenge with 27 unknowns: the 16 entries of Z(ϱ), the 5 complex numbers in Ψ (u, v, w, z, α), and the real-valued mean noise power σN. Thus, this constitutes an underdetermined equation. For simplicity, the distorted parameters will be designated as the vector κ = [u, v, w, z, α]T.
Fortunately, imposing the constraints below on the scattering matrix reduces the number of unknown parameters to 16. We apply the reciprocity and azimuthal symmetry assumptions [20], i.e.,
$$S_{HV} = S_{VH},$$
$$\langle S_{VH} S_{HH}^{H} \rangle = \langle S_{VH} S_{VV}^{H} \rangle = \langle S_{HV} S_{HH}^{H} \rangle = \langle S_{HV} S_{VV}^{H} \rangle = 0$$
Considering these conditions, the matrix can be simplified as follows:
$$Z(\varrho) = \begin{bmatrix} \varrho_{1} & 0 & 0 & \varrho_{4} + j\varrho_{5} \\ 0 & \varrho_{2} & \varrho_{2} & 0 \\ 0 & \varrho_{2} & \varrho_{2} & 0 \\ \varrho_{4} - j\varrho_{5} & 0 & 0 & \varrho_{3} \end{bmatrix},$$
where ϱ is a real vector shown as:
$$\varrho = \begin{bmatrix} \varrho_{1} \\ \varrho_{2} \\ \varrho_{3} \\ \varrho_{4} \\ \varrho_{5} \end{bmatrix} = \begin{bmatrix} \langle S_{HH} S_{HH}^{H} \rangle \\ \langle S_{VH} S_{VH}^{H} \rangle \\ \langle S_{VV} S_{VV}^{H} \rangle \\ \mathfrak{R}\{\langle S_{HH} S_{VV}^{H} \rangle\} \\ \mathfrak{I}\{\langle S_{HH} S_{VV}^{H} \rangle\} \end{bmatrix}.$$
For later use, we introduce the correlation coefficient of the HH and VV channels (τ), the cross-polarized to co-polarized backscatter ratio (χ), and the co-polarized backscatter ratio (ε), defined as follows:
$$\tau = \frac{\langle S_{HH} S_{VV}^{H} \rangle}{\sqrt{\langle S_{HH} S_{HH}^{H} \rangle \langle S_{VV} S_{VV}^{H} \rangle}},$$
$$\chi = \frac{\langle S_{VH} S_{VH}^{H} \rangle}{\sqrt{\langle S_{HH} S_{HH}^{H} \rangle \langle S_{VV} S_{VV}^{H} \rangle}},$$
$$\varepsilon = \frac{\langle S_{HH} S_{HH}^{H} \rangle}{\langle S_{VV} S_{VV}^{H} \rangle}.$$
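These three ratios can be estimated directly from per-pixel samples. The sketch below uses synthetic channel data (S_hh, S_vh, S_vv are placeholders, not real SAR measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
S_hh = rng.standard_normal(n) + 1j * rng.standard_normal(n)          # synthetic HH channel
S_vh = 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # weaker cross-pol channel
S_vv = rng.standard_normal(n) + 1j * rng.standard_normal(n)          # synthetic VV channel

p_hh = np.mean(np.abs(S_hh) ** 2)   # <S_HH S_HH^*>
p_vh = np.mean(np.abs(S_vh) ** 2)   # <S_VH S_VH^*>
p_vv = np.mean(np.abs(S_vv) ** 2)   # <S_VV S_VV^*>

tau = np.mean(S_hh * S_vv.conj()) / np.sqrt(p_hh * p_vv)  # HH/VV correlation coefficient
chi = p_vh / np.sqrt(p_hh * p_vv)                         # cross/co backscatter ratio
eps = p_hh / p_vv                                         # co-pol backscatter ratio
assert abs(tau) <= 1.0 and chi > 0 and eps > 0
```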
So far, the 16 remaining unknown parameters are the complex values u, v, w, z, α and the real values ϱ1 through ϱ5 and σN, which makes the problem solvable.

2.2. Comet Estimator

With an equal number of unknowns and knowns, the calibration process yields a unique solution for the distortion parameters. Ensuring the precision of calibration outcomes necessitates specific evaluation criteria, notably facilitated by a loss function. This function measures the difference between model predictions and the actual ground truth, formulated as:
$$L = \lVert C - \hat{C} \rVert^{2},$$
with C and $\hat{C}$ denoting the estimations and observations, respectively. Moreover, a weighting matrix is used to help reduce the variance of the parameter estimates: $L = \lVert W^{-1/2}(C - \hat{C}) \rVert^{2}$ with $W = C^{T} \otimes C$. Zhang converted the calibration model into a least squares problem based on the principles of Covariance Matching Estimation Techniques for Array Signal Processing Applications [21]. The principle holds true provided that $N \to \infty$. Below is a concise description of this approach.
To begin with, the matrix model needs to be transformed into a system of linear equations. According to Equation (7) we have
$$\mathrm{vec}(C) = (\Psi^{*} \otimes \Psi)\,\mathrm{vec}(Z(\varrho)) + \sigma_{N}\,\mathrm{vec}(I)$$
Here we differentiate between the three superscripts T ,   * ,   H . T denotes the transpose transformation, * denotes the conjugate transformation, and H represents the conjugate transpose transformation.
Subsequently, we separate the scattering power from the matrix:
$$\mathrm{vec}(Z(\varrho)) = L\varrho,$$
where L is a 16 × 5 sparse matrix with nonzero entries that have been presented in Zhang’s work.
Substituting Equations (17) and (18) into Equation (16) yields:
$$L = \lVert I - B\zeta \rVert^{2},$$
where $B = W^{-1/2}\left[\,(\Psi^{*} \otimes \Psi)L \;\; \mathrm{vec}(I)\,\right]$ and $\zeta = [\varrho^{T}, \sigma_{N}]^{T}$, with sizes 16 × 6 and 6 × 1, respectively. Zhang adopts the idea of derivation and iteration to approach the result as:
$$\zeta_{i} = (B^{T} B)^{\dagger} B^{T} I, \qquad \kappa = \arg\min_{\kappa} \lVert I - B\zeta_{i} \rVert^{2},$$
in which $Q^{\dagger}$ denotes the Moore-Penrose inverse of Q. This formulation allows us to systematically address the scattering problem by decoupling the key variables and iterating towards the solution. By implementing this iterative method, Zhang's approach converges on a solution that minimizes the error between the predicted and actual values, thus enhancing the robustness of the model. This process involves repeatedly adjusting the parameters until the difference falls within an acceptable range.
The separation of the scattering power, combined with the substitution and iterative refinement, forms a comprehensive methodology for addressing discrepancies between model predictions and actual observations. This methodology is particularly effective in scenarios involving high-dimensional data and complex interactions, providing a robust framework for achieving accurate and reliable results.
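The inner closed-form step can be sketched as an ordinary least-squares solve via the Moore-Penrose inverse. B and the observation vector below are random stand-ins for the paper's weighted model matrix and weighted covariance vector, so this is a generic illustration rather than the exact calibration matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((16, 6))   # stand-in for the 16x6 weighted model matrix
I_vec = rng.standard_normal(16)    # stand-in for the weighted observation vector

zeta = np.linalg.pinv(B) @ I_vec   # (B^T B)^dagger B^T I in one call
residual = np.linalg.norm(I_vec - B @ zeta) ** 2

# The pseudoinverse solution minimizes ||I - B zeta||^2,
# so no other zeta can achieve a smaller residual.
other = rng.standard_normal(6)
assert residual <= np.linalg.norm(I_vec - B @ other) ** 2 + 1e-12
```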

3. Troubles and Solutions

3.1. Trouble and Cause

Upon meticulous examination of the iterative process, it becomes apparent that it introduces supplementary parameters, thereby leading to a potential misinterpretation within the computational framework. In particular, deriving the least squares solution outlined in Equation (20) forces $\zeta_{i}$ to transition from the real domain to the complex space.
A more detailed explanation follows. This deviation stems from the observation that, during the initial stage of the iterative process, inaccurate estimations lead to distorted power values. To clarify, in the theoretical deduction, the imaginary component of the power ζ associated with the correct set of distortion parameters κ ought to be zero. However, during the preliminary estimation process, the function often fails to accurately determine the parameter set κ. Consequently, the model's predictions of power values incorporating these inaccurate parameters do not align fully with the intended values for data restoration. This discrepancy is evident in the analytical solution for power predicted by Equation (20), which yields complex values. Furthermore, expanding the unknown power vector from 6 to 12 real values introduces ambiguity, rendering the characterization of the solution an underdetermined problem.
Nevertheless, owing to the iterative nature of the estimation in this methodology, characterized by a progressive convergence toward a final value, encountering such a challenge is inevitable. Despite Zhang's attempt to convert the complex matrix into the real domain through matrix transformation, the influence of the complex domain persists. His objective is to facilitate the translation of data from the complex domain to the real domain, with the goal of enhancing efficiency. Nevertheless, this operation does not negate the effect of the imaginary part; instead, it integrates the effects originating from the imaginary part into the real domain. Consequently, the inaccuracy persists, and even minor discrepancies can lead to significant deviations from the real value.
Referring to Figure 1 for analysis: within the model, the real-valued power parameters predominantly function as scalars. However, as the prediction process unfolds, the incorporation of imaginary components expands the influence of power, transitioning it from a pure scalar to a facilitator of rotation within the model's framework. This expansion introduces additional pathways within the model, leading to deviations from the true values.
Luckily, the power parameters in SAR exhibit a considerable magnitude compared to distortion parameters. As a result, perturbations in power parameters provide discernible insights into the impacts attributed to distortion parameters, making them distinguishable to some extent.

3.2. Outlier Exclusion

Given the polarized outcomes observed with the Comet algorithm, it becomes imperative in practical prediction processes to assess algorithmic results against predefined criteria; this evaluation helps to determine the need for recalibration based on initial values.
In this study, Monte Carlo simulations were conducted to simulate the estimation results of the model under various conditions and derive reasonable discrimination rules. Estimation results that fail to accurately produce the parameters were regarded as outliers. The experiments conducted in Section 4 indicate that the Quegan algorithm produces more stable results than the Comet algorithm, albeit with room for improvement in precision. Therefore, data predicted by the Comet algorithm were compared against the results of the Quegan algorithm to determine their validity.
In the actual judgment process, the only data available are these distorted parameters. Therefore, we chose to use the Quegan algorithm, which is relatively stable but requires improvement, as a reference to evaluate the quality of the prediction results. The most straightforward comparison lies in the differences between the results of the two algorithms. Hence, the magnitude of the difference between the Comet and Quegan algorithm results served as input features, enabling the determination of whether initial value estimation is warranted based on predefined thresholds.
The specific algorithmic process involves obtaining results from both the Comet and Quegan algorithms separately, then operating the difference treatment outlined in the subsequent equation:
$$\kappa = \kappa_{C} - \kappa_{Q},$$
where κ C denotes the results estimated by the Comet method, while κ Q denotes Quegan’s counterpart.
Before conducting model training, it is essential to categorize the obtained results as outlier or truth, labeled 0 and 1, respectively. This step transforms the problem into a binary classification task with five-dimensional features. To address this, we employ the KNN algorithm for binary classification to model the outcomes.
Given the training set $\{(\kappa_{1}, y_{1}), (\kappa_{2}, y_{2}), \ldots, (\kappa_{M}, y_{M})\}$, where $\kappa_{m} \in K \subseteq \mathbb{R}^{5}$ and $y_{m} \in Y = \{C_{0}, C_{1}\}$, the calculation of point distances in the training phase of this problem performs optimally when utilizing the Chebyshev distance, that is,
$$d(p, q) = \lim_{N \to \infty} \left( \sum_{i=1}^{5} \left| \kappa_{p}^{(i)} - \kappa_{q}^{(i)} \right|^{N} \right)^{1/N}$$
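Since the limit above reduces to the coordinate-wise maximum, the Chebyshev distance is cheap to compute. The feature vectors below are illustrative stand-ins for the Comet-Quegan parameter differences:

```python
import numpy as np

# Two illustrative 5-dim difference vectors (placeholders, not real estimates)
kappa_p = np.array([0.01, -0.03, 0.02, 0.00, 0.05])
kappa_q = np.array([0.02, 0.01, -0.01, 0.03, 0.01])

# Chebyshev distance: the limit of the Minkowski norm is the max coordinate gap
d_cheb = np.max(np.abs(kappa_p - kappa_q))

# Cross-check against a large-but-finite Minkowski order
d_minkowski = np.sum(np.abs(kappa_p - kappa_q) ** 64) ** (1 / 64)
assert abs(d_cheb - 0.04) < 1e-12
assert abs(d_minkowski - d_cheb) < 1e-2
```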
Given the data’s five-dimensional nature, we employ the kd-tree for nearest neighbor search. The kd-tree is employed to expedite the search for the closest neighboring points to a target point [22]. During the construction phase of the kd-tree, a dimension is selected, and the dataset is sorted according to the values along that dimension. Subsequently, the dataset is partitioned into two subsets based on the median value, thus forming left and right subtrees. This partitioning process is recursively applied to each subtree until each leaf node contains a single data point.
During the search process, initiated from the root node, the target node is compared with the current node, and the search direction is determined based on the value of the target point along the current dimension, moving either toward the left or right subtree. This iterative process continues until a leaf node is reached, where the data of that node are recorded as the current nearest neighbor. Subsequently, the search algorithm backtracks to the parent node to check for closer points; if found, the nearest neighbor is updated. This backtracking process continues until the root node is reached, thereby completing the search operation.
Using the Chebyshev distance metric, the method seeks the k samples in the training set closest to κ, denoted $N_{k}(\kappa)$. According to the principle of majority voting, the error rate of the k-nearest-neighbor training samples for the input instance κ is defined as the proportion of k-nearest-neighbor training samples whose labels differ from the input label:
$$o = 1 - \frac{1}{k} \sum_{\kappa_{m} \in N_{k}(\kappa)} I(y_{i}, C_{k})$$
When determining the value of k, the objective is to minimize the error rate. Therefore, the training function is designed to optimize this objective:
$$y = \arg\max_{C_{k}} \sum_{\kappa_{m} \in N_{k}(\kappa)} I(y_{i}, C_{k}),$$
in which I ( y i , C k ) denotes an indicator function, specifically as:
$$I(y_{i}, C_{k}) = \begin{cases} 1, & y_{i} = C_{k} \\ 0, & y_{i} \neq C_{k} \end{cases}.$$
Further elaboration on the specific process is omitted here; detailed information can be found in [23].
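A minimal sketch of the majority-vote classification described above; a brute-force Chebyshev search stands in for the kd-tree, and the labeled training differences are synthetic:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    # Chebyshev distance from the query to every training sample
    d = np.max(np.abs(train_X - query), axis=1)
    nearest = np.argsort(d)[:k]                     # indices of the k closest points
    votes = np.bincount(train_y[nearest], minlength=2)
    return int(np.argmax(votes))                    # 0 = outlier, 1 = truth

rng = np.random.default_rng(4)
# Class 1 ("truth"): small Comet-Quegan differences; class 0 ("outlier"): large ones
X = np.vstack([0.02 * rng.random((50, 5)), 0.5 + 0.2 * rng.random((50, 5))])
y = np.array([1] * 50 + [0] * 50)

assert knn_predict(X, y, np.full(5, 0.01)) == 1   # near the "truth" cluster
assert knn_predict(X, y, np.full(5, 0.6)) == 0    # near the "outlier" cluster
```

In practice a kd-tree (e.g. `scipy.spatial.cKDTree` with infinity-norm queries) replaces the brute-force scan once the training set grows large.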

3.3. Initial Search

Referring to Equation (19), we observe that the loss function has not been subjected to a differentiation process; it remains undifferentiated, thereby avoiding the problem. However, due to the high dimensionality and nonlinearity of the function, obtaining an accurate solution without further processing is challenging. Nevertheless, a middle-ground approach can be considered: we derive an initial rough solution from the undifferentiated loss function and use it as the starting point for Equation (20). This step allows the parameters to approach the result before the iteration starts, thereby reducing the likelihood of convergence to local optima. We collect the 16 unknown parameters into a 16 × 1 parameter vector
$$x = \left[ \varrho^{T},\ \sigma_{N},\ \mathfrak{R}\{\kappa\}^{T},\ \mathfrak{I}\{\kappa\}^{T} \right]^{T}.$$
The objective function for initial value is set as follows:
$$L(x) = \lVert I - B\zeta \rVert^{2}$$
To recalibrate parameters flagged as outliers, an initial pre-processing is executed using the interior point method [24]. The specific iterative procedure unfolds as outlined below:
(a)
Boundary Setting
The range of unknown parameters is established based on Quegan’s prediction results, following the constraint:
$$\min_{x} L(x) \quad \mathrm{s.t.}\ g(x) \geq 0.$$
We refer to the feasible region as $S = \{x \mid g(x) \geq 0\}$.
In detail, extensive experiments have demonstrated that setting the boundaries of the crosstalk and imbalance parameters centered around the results obtained from the Quegan algorithm, with an estimation range set to 0.1, yields more favorable search outcomes. Additionally, the power boundaries are subtly expanded based on the values of the calibrated observation matrix. The function g x will be centered around the Quegan result, denoted as:
$$\kappa_{i} \in [\kappa_{Q} - 0.05,\ \kappa_{Q} + 0.05],$$
$$\varrho_{i} \in \varrho_{Q} \times [0.95,\ 1.05],$$
where $\kappa_{i}$ and $\varrho_{i}$ denote the initial search ranges, while $\kappa_{Q}$ and $\varrho_{Q}$ represent Quegan's calibration outcomes.
(b)
Parameter Updating
Given the set range, we define the Lagrangian function as:
$$\tilde{L}(x) = L(x) - \lambda \ln g(x),$$
where λ is a small positive number; the barrier term ensures that $\tilde{L}(x) \to \infty$ as x approaches the boundary, so the iteration points remain within the feasible domain. Given the smallness of λ, the function value $\tilde{L}(x)$ closely approximates $L(x)$. Thus, solving the problem is simplified to:
$$\min \tilde{L}(x) \quad \mathrm{s.t.}\ x \in \mathrm{int}\, S.$$
The function updates its iteration steps by employing the Hessian matrix. The Hessian corresponding to this Lagrangian is:
$$H_{k} = \nabla_{xx}^{2} \tilde{L}(x, \lambda) = \nabla^{2} L(x) - \lambda \nabla^{2} \ln g(x).$$
The specific iteration equation is:
$$x_{k+1} = x_{k} - H_{k}^{-1} f_{k}(x),$$
in which $f_{k}(x) = \nabla \tilde{L}(x_{k})$. In this iteration, $x_{k+1}$ is updated based on the previous value $x_{k}$, the inverse of the Hessian matrix $H_{k}^{-1}$, and the gradient of the Lagrangian function $f_{k}(x)$.
(c)
Matrix Approximation
Because of the complex nature of the function, computing the inverse of the Hessian matrix presents a formidable challenge, necessitating iterative techniques to approximate this outcome. Note that, in this procedure, our objective is to ascertain the zero points of a function through the combined application of Newton's method for locating stationary points and the BFGS algorithm [25]. As both methodologies involve iterative steps, these processes are conducted concurrently. The outlined approach unfolds as follows:
$$H_{k+1}^{-1} = \left( I - \frac{d_{k} y_{k}^{T}}{y_{k}^{T} d_{k}} \right) H_{k}^{-1} \left( I - \frac{y_{k} d_{k}^{T}}{y_{k}^{T} d_{k}} \right) + \frac{d_{k} d_{k}^{T}}{y_{k}^{T} d_{k}},$$
where $d_{k} = x_{k+1} - x_{k}$ and $y_{k} = f_{k+1}(x) - f_{k}(x)$. In the first step, $H_{k}^{-1}$ is initialized to the identity matrix.
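The update above is the standard BFGS inverse-Hessian formula; a small sketch with illustrative step and gradient-change vectors, checking the secant condition it is designed to satisfy:

```python
import numpy as np

def bfgs_inverse_update(H_inv, d, y):
    """One BFGS update of the inverse Hessian approximation.

    d: step d_k = x_{k+1} - x_k; y: gradient change y_k = grad_{k+1} - grad_k.
    The result satisfies the secant condition H_new @ y == d."""
    rho = 1.0 / (y @ d)
    I = np.eye(d.size)
    V = I - rho * np.outer(d, y)
    return V @ H_inv @ V.T + rho * np.outer(d, d)

H_inv = np.eye(3)                 # first step: identity, as in the text
d = np.array([0.1, -0.2, 0.05])   # illustrative step vector
y = np.array([0.3, -0.1, 0.2])    # illustrative gradient change
H_new = bfgs_inverse_update(H_inv, d, y)
assert np.allclose(H_new @ y, d)  # secant condition holds
```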
In the process of initial value exploration, it is imperative to acknowledge the potential inaccuracies introduced by the utilization of interior-point algorithms. These inherent errors arise from the optimization algorithms potentially halting at a certain distance from the boundaries of inequality constraints rather than converging closely to them. Consequently, such circumstances may result in a degree of deviation in the final optimization outcome, as the algorithm fails to thoroughly explore the solution space adjacent to the constraint boundaries. Hence, subsequent precise computations are deemed necessary following the acquisition of initial values.
(d)
Algorithm steps
1.
Take the penalty factor λ > 0 , and the error ϖ > 0 ,
2.
Select the initial point $x_{0} \in \mathrm{int}\, S$ and set the iteration count to k = 1,
3.
Construct the Hessian matrix corresponding to the Lagrangian function $\tilde{L}(x)$, update the iterative parameters according to Formulas (31) and (32), and set k = k + 1,
4.
Determine whether the stop criteria are met:
If $|L(x_{k+1}) - L(x_{k})| < \varpi$, stop iterating and output $x_{k}$; otherwise repeat step 3.
At the outset of the search, the interior-point method typically achieves linear error reduction, progressing to quadratic convergence near the optimal solution, facilitated by its utilization of the Hessian matrix. Despite the problem’s non-convex nature, encountering local minima is inevitable, although multiple adjustments to the initial point can mitigate this.
The initial value search yields preliminary results, and in cases where convergence fails, expanding the parameter search range may be necessary, albeit at the cost of increased computational complexity and reduced efficiency. Each iteration of the interior-point method operates with a complexity of O ( n 3 ) , where n denotes the number of variables. Given 16 variables in the initial value search process, the computational demands are substantial.
Furthermore, computing and storing the Hessian matrix are both time- and space-intensive tasks. To streamline this process, the quasi-Newton algorithm approximates the Hessian matrix, simplifying computations.
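The overall initial-search idea can be illustrated on a toy problem: a smooth loss minimized inside box bounds via a log-barrier term and damped Newton steps. The quadratic loss and two-dimensional bounds below are illustrative stand-ins for L(x) and the Quegan-centered ranges, not the paper's full 16-parameter search:

```python
import numpy as np

def barrier_minimize(target, x0, lo, hi, lam=1e-4, tol=1e-12, max_iter=500):
    """Damped Newton descent on ||x - target||^2 with a log barrier
    -lam*[log(x - lo) + log(hi - x)] keeping iterates strictly inside the box."""
    x = x0.copy()
    for _ in range(max_iter):
        grad = 2 * (x - target) - lam / (x - lo) + lam / (hi - x)
        hess = 2 + lam / (x - lo) ** 2 + lam / (hi - x) ** 2  # diagonal Hessian
        x_new = np.clip(x - grad / hess, lo + 1e-9, hi - 1e-9)  # stay interior
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

target = np.array([0.03, -0.02])             # unconstrained minimizer, inside the box
lo, hi = np.full(2, -0.05), np.full(2, 0.05)
x_star = barrier_minimize(target, np.zeros(2), lo, hi)
# The barrier shifts the solution slightly off the true minimum, as noted above
assert np.max(np.abs(x_star - target)) < 1e-2
```

The small residual offset illustrates the point made above: the barrier keeps iterates away from the constraint boundary, so a subsequent precise computation is still needed after the initial value is obtained.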

3.4. Algorithms

In subsequent sections of the paper, we will use the term "Comet with Initial Search" interchangeably with "Comet IS" to refer to experiments conducted with this additional search step. Before conducting the experiments, Algorithms 1–3 (the Quegan algorithm, the Ainsworth algorithm, and the Comet IS algorithm) are briefly summarized below.
Algorithm 1 Quegan
(1). Crosstalk estimation based on second-order statistics of natural terrain features:
$$u = \frac{C_{44} C_{31} - C_{41} C_{34}}{\Delta}, \qquad v = \frac{C_{11} C_{34} - C_{31} C_{14}}{\Delta},$$
$$w = \frac{C_{11} C_{24} - C_{21} C_{14}}{\Delta}, \qquad z = \frac{C_{44} C_{21} - C_{41} C_{24}}{\Delta},$$
where $\Delta = C_{11} C_{44} - |C_{41}|^{2}$.
(2). Estimating the magnitude and phase of cross-polarization channel imbalance:
$$|\alpha| = \frac{\alpha_{1}\alpha_{2} - 1 + \sqrt{(\alpha_{1}\alpha_{2} - 1)^{2} + 4\alpha_{1}\alpha_{2}^{2}}}{2\alpha_{2}},$$
$$\mathrm{Arg}(\alpha) = \mathrm{Arg}\left( C_{32} C_{23}^{*} \right),$$
where $\alpha_{1} = \dfrac{C_{33} - u C_{13} - v C_{43}}{C_{23} - z C_{13} - w C_{43}}$ and $\alpha_{2} = \dfrac{(C_{23} - z C_{13} - w C_{43})^{*}}{C_{22} - z C_{12} - w C_{42}}$.
(3). Computing co-polarization channel imbalance based on corner reflectors.
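Step (1) can be sketched directly from the covariance elements, assuming the index pattern of the Quegan closed form; C below is a synthetic Hermitian stand-in for the observed covariance, with the text's indices 1-4 mapped to Python's 0-3:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
C = M @ M.conj().T / 8   # synthetic Hermitian, positive semidefinite covariance

delta = C[0, 0] * C[3, 3] - np.abs(C[3, 0]) ** 2     # Delta = C11*C44 - |C41|^2
u = (C[3, 3] * C[2, 0] - C[3, 0] * C[2, 3]) / delta  # (C44*C31 - C41*C34)/Delta
v = (C[0, 0] * C[2, 3] - C[2, 0] * C[0, 3]) / delta  # (C11*C34 - C31*C14)/Delta
w = (C[0, 0] * C[1, 3] - C[1, 0] * C[0, 3]) / delta  # (C11*C24 - C21*C14)/Delta
z = (C[3, 3] * C[1, 0] - C[3, 0] * C[1, 3]) / delta  # (C44*C21 - C41*C24)/Delta

assert delta.real > 0                 # 2x2 principal minor of a PSD matrix
assert np.isfinite([u, v, w, z]).all()
```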
Algorithm 2 Ainsworth
(1). Initialize crosstalk: u = 0, v = 0, w = 0, z = 0.
(2). Calculate the initial correction from O by
$$\alpha = \sqrt{\left| \frac{C_{33}}{C_{22}} \right|}\, \exp\!\left( j\,\frac{\arg(C_{32})}{2} \right)$$
The crosstalk corrections are initially zero.
(3) Iterative updating of distorted parameters:
for
   a. Apply the total correction to the observed covariance:
$$\Pi = \begin{bmatrix} C_{11}/h^{2}\alpha^{2} & C_{12}\alpha^{*}/h\alpha & C_{13}/h\alpha^{2} & C_{14}h^{*}\alpha^{*}/h\alpha \\ C_{21}/h^{*}\alpha^{*} & C_{22}\alpha^{2} & C_{23}\alpha/\alpha^{*} & C_{24}h^{*}\alpha^{2} \\ C_{31}/h^{*}\alpha^{2} & C_{32}\alpha^{*}/\alpha & C_{33}/\alpha^{2} & C_{34}h^{*}\alpha^{*}/\alpha \\ C_{41}h\alpha/h^{*}\alpha^{*} & C_{42}h\alpha^{2} & C_{43}h\alpha/\alpha^{*} & C_{44}h^{2}\alpha^{2} \end{bmatrix}.$$
   b. Estimate values from Π:
$$A = \frac{\Pi_{21} + \Pi_{31}}{2}, \qquad B = \frac{\Pi_{24} + \Pi_{34}}{2}.$$
   c. Compute [X], [ς], [τ], and define [δ]. The detailed explanations of these equations can be found in [10].
   d. Compute [δ] according to
$$\begin{bmatrix} \mathfrak{R}\{X\} \\ \mathfrak{I}\{X\} \end{bmatrix} = \begin{bmatrix} \mathfrak{R}\{\varsigma + \tau\} & -\mathfrak{I}\{\varsigma - \tau\} \\ \mathfrak{I}\{\varsigma + \tau\} & \mathfrak{R}\{\varsigma - \tau\} \end{bmatrix} \begin{bmatrix} \mathfrak{R}\{\delta\} \\ \mathfrak{I}\{\delta\} \end{bmatrix}.$$
   e. Update crosstalk by:
$$\begin{bmatrix} u \\ v \\ w \\ z \end{bmatrix} = \begin{bmatrix} u \\ v \\ w \\ z \end{bmatrix} + \begin{bmatrix} \delta_{u} \\ \delta_{v} \\ \delta_{w} \\ \delta_{z} \end{bmatrix}.$$
   Further correction of Π is performed based on the new crosstalk matrix P as defined by Equation (4):
$$\Xi = P^{-1} \Pi P^{-H}$$
   f. Update α by $\alpha = \alpha \cdot \delta_{\alpha}$,
   where $\delta_{\alpha} = \sqrt{\left| \dfrac{\Xi_{33}}{\Xi_{22}} \right|}\, \exp\!\left( j\,\frac{\arg(\Xi_{32})}{2} \right)$.
end
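Step d couples each crosstalk increment with its conjugate. Under the assumption that the increment satisfies X = ςδ + τδ* (scalar illustrative values below; ς, τ, X stand in for the bracketed quantities of step c), splitting into real and imaginary parts yields a 2 × 2 real system that recovers δ:

```python
import numpy as np

# Illustrative scalars; the assumed relation is X = sigma*delta + tau*conj(delta)
sigma, tau = 1.2 - 0.4j, 0.3 + 0.1j
delta_true = 0.05 - 0.02j
X = sigma * delta_true + tau * delta_true.conjugate()

# Real 2x2 system from splitting X into real and imaginary parts:
# Re(X) = Re(s+t)*Re(d) - Im(s-t)*Im(d);  Im(X) = Im(s+t)*Re(d) + Re(s-t)*Im(d)
A = np.array([[(sigma + tau).real, -(sigma - tau).imag],
              [(sigma + tau).imag,  (sigma - tau).real]])
b = np.array([X.real, X.imag])
re, im = np.linalg.solve(A, b)
assert abs(complex(re, im) - delta_true) < 1e-12   # delta recovered exactly
```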
Algorithm 3 Comet IS
(1). Segment the SAR image.
(2). Estimate parameters by Comet method and Quegan method, respectively.
(3). Compute the difference between the outcomes obtained from the two methods.
(4). Classify the results with the KNN model as either true or outlier.
(5). Locate the outliers and compute their initial values according to Equation (29).
(6). Introduce these into the Comet algorithm for recalibration.
(7). Perform interpolation processing on the κ-set.
(8). Calculate the h value in conjunction with κ and the trihedral corner reflector.

4. Simulation Test

Given the unknown true distortion parameters in real data, our exploration of the optimality of the Comet method relies mostly on simulation experiments, wherein the true distortion parameters are ascertainable. This approach allows us to investigate the effectiveness of the methods as model metrics.
This paper delves into the Comet calibration methodology, which hinges upon the utilization of standard targets. To gauge the accuracy of the prediction algorithm, various combinations of χ and τ within the H/V polarization covariance matrix are employed. The simulation analysis presented in this section primarily assesses the viability of target selection parameters and the efficacy of the calibration method across the chosen targets.
To further investigate the Comet algorithm, we employ Monte Carlo simulations to explore various combinations of χ and τ . Additionally, this study integrates classic algorithms such as Quegan and Ainsworth for a comprehensive comparative evaluation of algorithmic efficacy.

4.1. Generating Variables

Initially, random observation data are generated according to the specified parameters,
(1)
ideal covariance matrix elements:
By specifying the values of τ, ε, and χ, the generator can produce the covariance matrix. Based on the polarimetric ratios defined in (12), (13), and (14), the ideal elements of the covariance matrix are calculated as follows:
$$\varrho_1 = \varepsilon\,\varrho_3, \quad \varrho_2 = \chi\sqrt{\varrho_1 \varrho_3}, \quad \varrho_4 = \Re\{\tau\}\sqrt{\varrho_1 \varrho_3}\,e^{j2\pi U(0,1)}, \quad \varrho_5 = \Im\{\tau\}\sqrt{\varrho_1 \varrho_3}\,e^{j2\pi U(0,1)},$$
The function U (0, 1) denotes a random number generator which yields a single random number uniformly distributed within the interval (0, 1).
Set $\varrho_3 = 1000$, and the real covariance matrix can then be obtained by combining $(\varepsilon, \chi, \tau)$. Here, we let $\varepsilon$ vary around 1 and define the range of the covariance matrix parameters as $\chi \in [-16, 3]$ dB and $\tau \in [0, 1]$, that is:
$$\varepsilon = 10^{0.5[2U(0,1)-1]},$$
$$\chi = -16 : 0.2 : 3 \ \mathrm{dB},$$
$$\tau = 0.02 : 0.02 : 1.$$
The notation p : Δ t : q signifies a uniformly sampled vector, where the elements start from p and terminate at q , with an interval of Δ t between successive elements.
(2)
distortion parameters:
To facilitate calibration simulation, random polarimetric distortion parameters must be added to the real covariance matrices. These parameters, integral to the calibration process, are defined as follows:
$$u, v, w, z = 0.1\,U(0,1)\,e^{j2\pi U(0,1)},$$
$$\alpha^2 = 1 + 0.2\,U(0,1)\,e^{j2\pi U(0,1)}.$$
In our experiment, the predictions of h and Y were not taken into consideration; therefore, they were simply set to 1.
(3)
other details
The number of samples N is set to 1000 and the SNR to 20 dB. The detailed steps for constructing the noise matrix are as follows:
(a) $N = \sigma_N \,\mathrm{diag}(n_1, n_2, n_3, n_4)$;
(b) $\sigma_N = \mathrm{tr}\{C\} / (\mathrm{SNR} + 1)$;
(c) $n_1, n_2, n_3 \sim 2U(0,1)$, $n_4 = 4 - (n_1 + n_2 + n_3)$;
(d) if  n 4 > 0 , output N ;
else return step (c).
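The three generation steps above can be sketched as follows. The square-root placements in the covariance elements and the minus sign in $n_4$ (without which the acceptance test in step (d) would always pass) are reconstructions, so this is an illustrative sketch rather than the authors' exact generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def ideal_covariance_elements(eps, chi, tau, rho3=1000.0):
    """(1) Ideal covariance elements from (eps, chi, tau); chi in linear units here."""
    rho1 = eps * rho3
    rho2 = chi * np.sqrt(rho1 * rho3)
    phase = lambda: np.exp(1j * 2 * np.pi * rng.uniform())
    rho4 = np.real(tau) * np.sqrt(rho1 * rho3) * phase()
    rho5 = np.imag(tau) * np.sqrt(rho1 * rho3) * phase()
    return rho1, rho2, rho4, rho5

def random_distortion():
    """(2) Random crosstalks u, v, w, z and channel-imbalance term alpha^2."""
    draw = lambda scale, offset: offset + scale * rng.uniform() * np.exp(1j * 2 * np.pi * rng.uniform())
    u, v, w, z = (draw(0.1, 0.0) for _ in range(4))
    alpha2 = draw(0.2, 1.0)
    return u, v, w, z, alpha2

def noise_matrix(C, snr_db=20.0):
    """(3) Diagonal noise matrix, resampling until n4 > 0 (steps (a)-(d))."""
    snr = 10 ** (snr_db / 10)               # SNR assumed to enter in linear units
    sigma_n = np.trace(C).real / (snr + 1)  # step (b)
    while True:
        n1, n2, n3 = 2 * rng.uniform(size=3)  # step (c)
        n4 = 4 - (n1 + n2 + n3)               # minus sign inferred from step (d)
        if n4 > 0:                            # step (d): accept only valid draws
            return sigma_n * np.diag([n1, n2, n3, n4])
```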

4.2. On the Practice of Comet

To assess the performance of three algorithms in parameter prediction, the Euclidean norms of the differences between the predicted values and the actual computational results were computed for each algorithm. In this study, the accuracy of parameters is assessed by computing the L 2 norm between the estimated parameters and the actual results. Specifically,
$$R = \left\| \kappa_e - \kappa_c \right\|_2,$$
where κ c represents the configured parameter, whereas κ e denotes the estimated outcome.
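As a concrete illustration, the metric R and its dB conversion can be computed as below; the $20\log_{10}$ scaling for the amplitude-like norm is an assumption about how the figures were produced.

```python
import numpy as np

def estimation_error_db(kappa_e, kappa_c):
    """R = ||kappa_e - kappa_c||_2, reported in dB.
    The 20*log10 amplitude convention is assumed, not stated in the text."""
    r = np.linalg.norm(np.asarray(kappa_e) - np.asarray(kappa_c))
    return 20 * np.log10(r)
```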
The computed results, converted to dB, are shown in Figure 2. In the Ainsworth results, white spots indicate matrix singularity, where the distortion parameters are unattainable. Similarly, the red regions mark parameter combinations unsuitable for the Ainsworth algorithm. While the outcomes in these regions may not deviate as drastically from the actual scenario as observed at the white points, they still exhibit considerable disparities from the ground truth. Concerning the Comet algorithm, it is notable that while there is some overlap between erroneous parameter estimation and function loss at certain points, a significant portion of the points does not exhibit a correlated relationship. An evident example is the occurrence of curves in both the Comet error plot and the function-loss plot. The arcs in the error plot depict the algorithm's subpar estimations, whereas the finer arcs in the function-loss plot correspond to comparatively better predictive outcomes. The shared segments of the two plots stem from constraints such as finite maximum iteration counts, non-convergent initial parameters, or oscillations during convergence. Conversely, the contradictions between the two plots serve as substantial evidence affirming the error-source analysis outlined in Section 3.
The first experiment is devoted to illustrating whether the Comet algorithm outperforms the Quegan and Ainsworth algorithms or not. The findings reveal a noteworthy stability in outcome variance when utilizing the Quegan and Ainsworth methods. Notably, while Ainsworth displays superior accuracy over the Quegan algorithm, it occasionally confronts challenges in estimation feasibility at isolated data points. In contrast, the Comet method displays a dichotomy in outcomes: successful predictions exhibit considerably smaller errors compared to the two former algorithms, yet a notable frequency of erroneous estimations persists.
Analogous conclusions can be drawn from the statistical data of the three algorithms shown in Table 1. It is noteworthy that, owing to certain failures of the Ainsworth algorithm, irregular results were excluded from the statistical analysis. The statistical findings reveal that, apart from a few exceptional points, the overall performance of the Ainsworth algorithm remains relatively stable, with generally favorable prediction outcomes and a total range length of less than 12 dB. The average variance of this algorithm is 9.4803 dB, indicating its superiority among the three algorithms. However, the Quegan algorithm falls short of Ainsworth's results in both optimal and average estimation variances. Nevertheless, this does not imply that Ainsworth's algorithm consistently outperforms Quegan's. For instance, within the interval τ = [0.7, 1], χ = [−16, −14] dB, the Quegan algorithm not only exhibits greater stability but also yields more accurate predictions. On the other hand, the Comet algorithm demonstrates superior performance in successful predictions (−18.1582 dB) but the poorest performance in erroneous predictions (10.8171 dB). Compared with the range lengths of less than 15 dB and 12 dB for the former two algorithms, the range associated with the Comet algorithm approaches approximately 20 dB.
This observation corroborates our initial investigation, highlighting the persistent risk of inaccurate estimations even when employing minimal loss functions. It underscores the importance of further research to mitigate such errors and enhance the reliability of our predictions.

4.3. KNN Classification

In this experiment, the classification data groups were established by categorizing them according to the Euclidean norm of the difference between predicted and actual results.
A total of 6600 sets of data were collected, comprising 927 outlier sets and 5673 sets of true data. The decision threshold distinguishing outliers from true values is set at 0.15, based on the calculation result of Equation (34). This threshold, determined experimentally, provides optimal classification performance; it also accounts for an average error difference of 0.05 across the 10 distortion parameters (real and imaginary parts) considered during its selection. Prior to the experiment, the dataset was divided into training and testing sets in a 7:3 ratio. When building the kd-tree, a bucket size of 50 was set to balance search speed and accuracy. As shown in Figure 3, the optimization stabilizes around the 29th iteration, at which point the observed minimum target value and the estimated minimum function value are 0.006 and 0.0060048, respectively.
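A minimal nearest-neighbour classifier matching the settings used here (k = 1, Chebyshev distance, as in the Appendix A model) can be sketched without any toolbox; the toy feature vectors and labels below are illustrative only, with label 1 standing for an outlier.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=1):
    """k-nearest-neighbour classification with the Chebyshev metric,
    mirroring the 'NumNeighbors' = 1 / 'chebychev' MATLAB settings."""
    d = np.max(np.abs(train_X - query), axis=1)  # Chebyshev (L-infinity) distance
    nearest = np.argsort(d)[:k]
    return np.bincount(train_y[nearest]).argmax()  # majority vote over neighbours

# toy 1-D feature: the Comet/Quegan difference norm; 1 = outlier (above ~0.15)
train_X = np.array([[0.01], [0.05], [0.20], [0.40]])
train_y = np.array([0, 0, 1, 1])
```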
Fortunately, for classification training, we utilized the ‘fitcknn’ function available in MATLAB. To optimize the performance of the classification model, we employed the Bayesian optimization algorithm due to its effectiveness in exploring complex parameter spaces and maximizing model accuracy. The specific configurations of training parameters, including those for the ‘fitcknn’ function and the Bayesian optimization algorithm, are provided in detail in Appendix A.
After training, the testing data were evaluated using the trained model, yielding the results depicted in Figure 4. Figure 4a,b illustrate the actual classification and the predicted outcomes, respectively. Furthermore, cross-validation loss was employed as a complementary metric to assess the model's performance on the testing set. The testing set demonstrated strong performance on the trained model, with a target-function loss of 0.0060048, indicating effective outlier identification capabilities. For more detailed training results, readers are encouraged to refer to the author's GitHub page.

4.4. Comet with Initial Search

After identifying the outliers, experiments were re-conducted at the corresponding positions. Prior to re-conducting the experiments, an additional initial value search step was introduced to refine the experimental process.
The initial value was set as recommended in Section 3, and the Lagrange coefficient λ was set to 0.02. During the experiment, unreasonable values were detected using the KNN model, prompting an initial value search. Some areas required multiple adjustments, with a maximum of four needed in certain cases. Following the incorporation of the initial value search, a noticeable reduction in the discrepancy between the predicted results and the actual set parameters was observed. The results after re-experimentation are shown in Figure 5. However, it became apparent that the estimation performance at the arch positions lags slightly behind that of other parts. Nevertheless, compared with the original algorithm, this approach demonstrates an improvement in mitigating the impact of the complex domain. Furthermore, even in areas of slightly inferior performance, the algorithm consistently outperforms both the Quegan and Ainsworth algorithms.
Statistical data analysis reveals consistent findings displayed in Table 2, indicating that the inclusion of an initial value search process constrains the prediction error range to below −1.0132 dB. On average, the overall predictive outcomes appear quite favorable. However, upon scrutinizing the analysis of the optimal prediction points, it becomes evident that despite the inclusion of the initial value search, the recalculation of outlier points fails to surpass the initially computed results. This suggests that while the initial value search process helps constrain prediction errors on average, it may not fully address underlying issues within the algorithm. Instead, it serves more as a remedial measure to mitigate the impact of these inherent issues.

5. Experiment on Real Data

In this section, the empirical data were acquired by the Microwave Microsystems Research and Development Department at the Aerospace Information Research Institute, Chinese Academy of Sciences. The data, obtained at the Luo Ding Airport in Zhao Qing, Guangdong in May 2021, consist of an S-band quad-polarimetric image captured by the airborne experimental SAR system.
The aircraft, which carries an active phased-array antenna, flew at a speed of 74 m/s and at an altitude of 2600 m. The nadir angle of the airborne SAR is 45° downward. Employing a dual-illumination mode for target observation, the slant range to the scene center was 3989 m. The SAR operated at a frequency of 3.2 GHz, with a range sampling rate of 400 MHz, a signal bandwidth of 300 MHz, and a pulse repetition frequency of 1000 Hz. The radar achieved a resolution of 1 m, allowing for detailed imaging of the target area. Three sets of parallel data were obtained from this sampling, which were used for calibration testing in the real-data experiment.
Near the imaged airport, various artificial corner reflectors were deployed for polarization quality assessment and calibration of the data. Within the radar testing area, 16 trihedral corner reflectors and two dihedral corner reflectors rotated at 45 degrees were positioned. The schematic of the placements is illustrated in Figure 6b: the magenta plus signs mark the positions of the trihedral corner reflectors, and the yellow signs mark the positions of the rotated dihedral corner reflectors.
Three algorithms (Ainsworth, Quegan, and Comet IS) were employed in the experiments. The results indicate discernible differences among the outcomes of the three algorithms, with notable similarities between the results of the Comet IS and Quegan algorithms. This similarity can be attributed to the outlier exclusion procedure employed earlier, in which outlier detection was conducted based on the results of the Quegan algorithm. Using this result as a reference, we established the initial search range for revalidation, which influenced the outcomes of the Comet IS algorithm. Furthermore, upon observing the measurement results of Ainsworth in Figure 7, it is evident that the amplitude values of the crosstalk coefficients "u" and "z" largely overlap. Additionally, around sample position 2600, the latter two algorithms exhibit similar peaks, whereas the Ainsworth algorithm shows no significant fluctuations.
Figure 8 sequentially presents the SAR raw image after Pauli decomposition, followed by the image calibrated using the Quegan algorithm, Ainsworth algorithm, and Comet IS algorithm. Upon calibration, all algorithms exhibit varying degrees of success in restoring the information of the original image. Compared to the pre-calibrated Pauli color image, the post-calibrated image shows noticeable changes: fields and grasslands near the airport appear greener, indicating single or volume scattering, while clutter in dark target areas such as airport runways adjacent to grasslands and ponds near fields transitions from red to purple. This is consistent with the scattering mechanism influenced by the characteristics of the objects themselves.
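For reference, the Pauli RGB composition behind these images maps double bounce, volume, and surface scattering to red, green, and blue, respectively. The sketch below follows the standard Pauli basis normalization; display scaling and clipping are omitted, so it is illustrative rather than the exact rendering pipeline used for the figures.

```python
import numpy as np

def pauli_rgb(S_hh, S_hv, S_vv):
    """Standard Pauli RGB composition from the scattering channels:
    R = |S_hh - S_vv|/sqrt(2)   (double bounce)
    G = sqrt(2)*|S_hv|          (volume scattering)
    B = |S_hh + S_vv|/sqrt(2)   (surface scattering)."""
    r = np.abs(S_hh - S_vv) / np.sqrt(2)
    g = np.sqrt(2) * np.abs(S_hv)
    b = np.abs(S_hh + S_vv) / np.sqrt(2)
    return np.stack([r, g, b], axis=-1)
```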
To carefully analyze the calibration effects of algorithms on images, the central positions of the images were selected for magnification research in Figure 9. Position 1 corresponds to areas with bare soil and vegetation, predominantly exhibiting surface and volume scattering, which appear as green and blue, respectively, in the Pauli decomposition. Position 2 corresponds to residential areas, primarily exhibiting double bounce scattering, resulting in a red appearance. Position 3 represents calm water bodies, exhibiting specular reflection, theoretically appearing black, or undergoing surface scattering upon reaching a certain roughness on the water surface, appearing blue [26].
Before calibration, position 1 exhibited yellow and sporadic red hues. Following calibration with the three algorithms, position 1 predominantly appears green with sporadic blue hues. Moreover, after calibration with the Comet IS algorithm, blue is more pronounced at position 1. The built-up area at position 2 exhibits double bounce scattering characteristics both before and after calibration. Initially, position 3 appears mainly black before calibration, yet still interspersed with some red. After calibration, the results obtained using the Ainsworth algorithm reveal sporadic green hues in addition to black at position 3. The Quegan and Comet IS algorithms show sporadic blue hues at position 3.
However, solely relying on visual observation makes it difficult to distinguish the calibration effects of the three algorithms. Therefore, further validation was conducted using corner reflectors to assess the accuracy of each algorithm’s performance and its ability to restore the SAR data.
During the validation process, we utilized 16 trihedral corner reflectors, denoted TH1 to TH16, along with two dihedral corner reflectors rotated at 45°, labeled RH1 and RH2. Subsequently, we employed the returned values from the points with established reflector positions to estimate the residual errors. The calculation proceeds as follows:
$$\mathrm{xtalk} = \min\left\{\frac{1}{N_1}\sum_{i}^{N_1}\left(\frac{|S_{HH}|}{|S_{HV}|}\right)^2,\ \frac{1}{N_1}\sum_{i}^{N_1}\left(\frac{|S_{VV}|}{|S_{VH}|}\right)^2\right\},$$
$$\mathrm{camp} = \frac{1}{N_1}\sum_{i}^{N_1}\left(\frac{|S_{HH}|}{|S_{VV}|}\right)^2,$$
$$\mathrm{cph} = \frac{1}{N_1}\sum_{i}^{N_1}\left(\angle S_{HH} - \angle S_{VV}\right)^2,$$
$$\mathrm{xcmp} = \frac{1}{N_2}\sum_{i}^{N_2}\left(\frac{|S_{HV}|}{|S_{VH}|}\right)^2,$$
$$\mathrm{xph} = \frac{1}{N_2}\sum_{i}^{N_2}\left(\angle S_{HV} - \angle S_{VH}\right)^2.$$
Among these, xtalk, camp, cph, xcmp, and xph represent the polarization isolation, co-polarization channel imbalance magnitude, co-polarization channel imbalance phase, cross-polarization channel imbalance magnitude, and cross-polarization channel imbalance phase, respectively. $N_1$ and $N_2$ denote the numbers of trihedral (TH) and rotated dihedral (RH) corner reflectors. The first three evaluation metrics were calculated using the trihedral corner reflectors, while the remaining metrics were calculated using the rotated dihedral corner reflectors. The statistics of the three data sets are presented in Table 3, Table 4 and Table 5.
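A sketch of these residual computations is given below, assuming each reflector return is stored as a 2×2 scattering matrix $[[S_{HH}, S_{HV}], [S_{VH}, S_{VV}]]$; the per-reflector squared-ratio and squared phase-difference forms follow the reconstructed equations and are illustrative.

```python
import numpy as np

def reflector_residuals(S_th, S_rh):
    """Residual-error metrics from trihedral (S_th) and rotated dihedral (S_rh)
    reflector returns, each a list of 2x2 scattering matrices. Ratios are taken
    per reflector and averaged, matching the 1/N sums in the text."""
    th = np.asarray(S_th)
    rh = np.asarray(S_rh)
    hh, hv, vh, vv = th[:, 0, 0], th[:, 0, 1], th[:, 1, 0], th[:, 1, 1]
    xtalk = min(np.mean((np.abs(hh) / np.abs(hv)) ** 2),
                np.mean((np.abs(vv) / np.abs(vh)) ** 2))   # polarization isolation
    camp = np.mean((np.abs(hh) / np.abs(vv)) ** 2)          # co-pol imbalance magnitude
    cph = np.mean((np.angle(hh) - np.angle(vv)) ** 2)       # co-pol imbalance phase
    hv2, vh2 = rh[:, 0, 1], rh[:, 1, 0]
    xcmp = np.mean((np.abs(hv2) / np.abs(vh2)) ** 2)        # cross-pol imbalance magnitude
    xph = np.mean((np.angle(hv2) - np.angle(vh2)) ** 2)     # cross-pol imbalance phase
    return xtalk, camp, cph, xcmp, xph
```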
For a more intuitive comparison, in the detailed evaluation table, we show the results of the first group of every corner reflector in Table 6, Table 7 and Table 8, in order. From the overall evaluation table, the Comet IS algorithm exhibited favorable results in the xtalk, cph, and xph metrics.
Generally speaking, the Quegan algorithm and the Comet IS algorithm exhibit significant advantages over the Ainsworth algorithm in terms of xtalk. However, in specific areas, the Ainsworth algorithm's calibration results show superiority. The results of the first set of data are analyzed as follows: in the TH4 region, the xtalk values for the Quegan algorithm and the Comet IS algorithm are 1.1195 and 17.1912, while Ainsworth's xtalk is 29.6012. Similar results also appear in regions such as TH5 and TH16. For camp and cph, the error results of the three algorithms are very similar, with Ainsworth's results slightly weaker. Compared to the other two algorithms, the Quegan algorithm achieves better results in terms of cross-polarization channel imbalance. The results from the second and third sets of data closely resembled those from the first set.
Analyzing each algorithm individually reveals that, for the Quegan algorithm, the xtalk at the positions of TH2, TH4, and TH16 is not ideal, consistent with the results of the simulation experiments: when both χ and τ are relatively high, the calibration results are weaker than at other positions. At positions TH12 and TH13, the xtalk is higher, but the cross-polarization channel imbalance and phase recovery effects are lower. It can be inferred from Table 9 that this may be related to the higher eta values of the observed data at the corresponding positions. In contrast, the Ainsworth algorithm has low xtalk at TH2, and the cross-polarization channel imbalance is also larger. Although this does not align with the simulation results, Table 9 reveals that the corresponding eta value is higher. At the TH5 position, the crosstalk isolation is high, which also corresponds to the simulation results: under the combination (τ, χ) = [0.52, 0.30] and the corresponding eta, whose value is close to 1, the algorithm performs well in the xtalk, camp, and cph indicators. The results of the Comet IS algorithm show low xtalk at the TH2 position, which is consistent with the simulation experiments. Under the combination (τ, χ) = [0.4, −4.4], the simulation results are also weaker, but the performance in crosstalk isolation is better than that of the other two algorithms. A similar situation occurs at the TH16 position. Additionally, the results of Comet IS are generally stable, with good performance in xtalk. In particular, the xcmp results obtained by Comet IS near RH1 are significantly weaker than those of the other two algorithms. It can be observed from Table 9 that the combination (τ, χ) = [0.42, −5.8] at this point is also a combination whose performance is slightly inferior in the simulation experiments. Similarly, the Quegan algorithm near RH2 also exhibits deficiencies.
Furthermore, in Figure 10, the cross-pol and co-pol signatures of the corner reflector RH9 are plotted. The calibration results of the Quegan algorithm and the Comet IS algorithm are superior, while the results of the Ainsworth algorithm are relatively poor. This is consistent with the results presented in Table 6, Table 7 and Table 8.
The analysis of individual corner reflectors indicates that no algorithm has an absolute advantage because each algorithm’s model may not fully correspond to the actual situation during modeling. The assumptions about noise made by these three algorithms are also quite simplistic, which is a compromise made to solve the model but does not reflect the actual situation. Moreover, noise during radar data collection is not constant, explaining why the experimental results of corner reflectors do not fully follow the simulation experiments. This limitation also applies to the Comet IS algorithm. Furthermore, in practical applications, due to the limited number of sampling points and the complexity of the environment, it is difficult to meet the premise of an infinite number of sampling points during the calibration process. On the contrary, in practical applications, a balance must be struck in terms of sample quantity.

6. Conclusions

This paper investigates techniques for polarimetric SAR calibration employing distributed targets. The establishment of Covariance Matching Estimation Techniques and the formulation of the loss function offer a more precise method for predicting distortion coefficients and scrutinizing result authenticity. However, challenges arise due to the analytical solution derived from a single differentiation and the introduction of unrealistically estimated parameters. This leads to an underdetermined equation and increased unknowns.
This paper’s contribution lies in evaluating the reasonableness of the solution and re-solving unreasonable outcomes. Prior to solving, an initial value search process is incorporated to confine the solution within a smaller range, mitigating to some extent the diffusion of real-domain solutions into the complex domain. With the successful establishment of an evaluation mechanism, the solving process can be iteratively conducted. Given that the training data almost completely cover real-world scenarios, incorrect data from initial training can be accurately identified during actual application. This allows for re-execution of predictions using an initial value search. Comprehensive learning data ensure the effectiveness and robustness of the recognition mechanism. The prediction process using an initial value search can converge more swiftly and accurately in the correct direction, resulting in precise distortion parameter estimation due to the rational setup of initial values.
The experimental results demonstrate the outstanding performance of the proposed Comet IS algorithm. During the simulation stage, Comet IS exhibited a significant reduction of 4.9419 dB in average error compared to its predecessor, Comet. Relative to classical algorithms, Ainsworth and Quegan, Comet IS showed reductions of 2.1211 dB and 7.7069 dB in average error, respectively.
In the real data experiment phase, all three algorithms—Comet IS, Quegan, and Ainsworth—exhibited a notable calibration effect on the original images, rendering the scattering characteristics of SAR images more pronounced. Results from corner reflectors further validate the excellent performance of the Comet IS algorithm across various indicators. These include polarization isolation (Ainsworth ~ 17.9926 dB, Quegan ~ 1.5226 dB), co-polarization channel imbalance magnitude (Ainsworth ~ 4.3862 dB, Quegan ~ 0.0180 dB), co-polarization channel imbalance phase (Ainsworth ~ 1.4125, Quegan ~ 0.0270), and cross-polarization channel imbalance phase (Ainsworth ~ 1.3231 ° , Quegan ~ 0.1492 ° ).
Nevertheless, the problem at hand presents several areas requiring improvement, such as significant computational overhead and prolonged computation time. In comparison with the Quegan and Ainsworth algorithms, this algorithm exhibits higher computational complexity and lower efficiency. However, it delivers more accurate results and is better suited for scenarios requiring high-standard and high-quality data acquisition. Additionally, there may be instances necessitating multiple corrections of the initial value search range to attain reasonable outcomes. Furthermore, akin to other studies, this paper assumes simplified noise characteristics, merely positing an average noise power while overlooking the varying effects of noise across different channels.

Author Contributions

Conceptualization, J.L.; methodology, J.L.; software, J.L.; validation, J.L.; formal analysis, J.L.; investigation, J.L.; resources, L.L.; data curation, X.Z.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; visualization, J.L.; supervision, J.L.; project administration, L.L.; funding acquisition, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Part of the code and the trained model are available at: https://github.com/ABlueEye/SAR-Calibration.git (accessed on 16 June 2024).

Acknowledgments

The authors would like to thank Zhang for his help.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this appendix, we exemplify the utilization of MATLAB’s built-in function, ‘fitcknn’, for addressing classification tasks. In the following, we provide a minimum working example to demonstrate its usage:
1 % set optimization options
2 Mdl = fitcknn(X, Y, ...
3     'OptimizeHyperparameters', 'auto', ...
4     'NumNeighbors', 1, ...
5     'NSMethod', 'kdtree', ...
6     'Distance', 'chebychev', ...
7     'Standardize', false, ...
8     'HyperparameterOptimizationOptions', ...
9     struct('AcquisitionFunctionName', 'expected-improvement-plus'));
10 % test the result
11 Predict_labels = predict(Mdl, Z);
12 CVMdl = crossval(Mdl);
13 kloss = kfoldLoss(CVMdl);
The comment lines (prefixed with %) provide explanations for the code, while the quoted strings indicate the parameters selected during the function calls.
The second line presents the input data, where 'X' signifies the input features and 'Y' the labels (0 or 1). Lines 3–9 enumerate the additional parameters used to configure the classification algorithm. The 11th line predicts the categories of the test set 'Z'. Subsequently, lines 12–13 compute the cross-validation loss. In its default configuration, this loss metric serves as an indicator of the model's efficacy, with smaller values signifying better model performance.

References

  1. Freeman, A. SAR calibration: An overview. IEEE Trans. Geosci. Remote Sens. 1992, 30, 1107–1121. [Google Scholar] [CrossRef]
  2. Chang, S.; Deng, Y.; Zhang, Y. An Advanced Scheme for Range Ambiguity Suppression of Spaceborne SAR Based on Blind Source Separation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
  3. Yamaguchi, Y. Polarimetric SAR Imaging: Theory and Applications, 1st ed.; CRC Press: Boca Raton, FL, USA, 2020; pp. 52–59. [Google Scholar]
  4. Hu, D.S.; Qiu, X.L.; Lei, B.; Xu, F. Analysis of Crosstalk Impact on the Cloude-decomposition-based Scattering Characteristic-All Databases. J. Radars 2017, 6, 221–228. [Google Scholar]
  5. Wang, Y.; Ainsworth, T.L.; Lee, J.-S. Assessment of System Polarization Quality for Polarimetric SAR Imagery and Target Decomposition. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1755–1771. [Google Scholar] [CrossRef]
  6. Geudtner, D.; Torres, R.; Snoeij, P. Sentinel-1 mission capabilities and SAR system calibration. In Proceedings of the 2013 IEEE Radar Conference (RadarCon13), Ottawa, ON, Canada, 9 September 2013. [Google Scholar]
  7. Sarabandi, K.; Pierce, L.E.; Dobson, M.C. Polarimetric calibration of SIR-C using point and distributed targets. IEEE Trans. Geosci. Remote Sens. 1995, 33, 858–866. [Google Scholar] [CrossRef]
  8. Whitt, M.W.; Ulaby, F.T.; Polatin, P.; Liepa, V.V. A general polarimetric radar calibration technique. IEEE Trans. Antennas Propagat. 1991, 39, 62–67. [Google Scholar] [CrossRef]
  9. Brunfeldt, D.R.; Ulaby, F.T. Active Reflector for Radar Calibration. IEEE Trans. Geosci. Remote Sens. 1984, GE-22, 165–169. [Google Scholar] [CrossRef]
  10. Ainsworth, T.L.; Ferro-Famil, L.; Lee, J.-S. Orientation angle preserving a posteriori polarimetric SAR calibration. IEEE Trans. Geosci. Remote Sens. 2006, 44, 994–1003. [Google Scholar] [CrossRef]
  11. Quegan, S. A unified algorithm for phase and cross-talk calibration of polarimetric data-theory and observations. IEEE Trans. Geosci. Remote Sens. 1994, 32, 89–99. [Google Scholar] [CrossRef]
  12. Shi, L.; Li, P.; Yang, J. Polarimetric SAR Calibration and Residual Error Estimation When Corner Reflectors Are Unavailable. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4454–4471. [Google Scholar] [CrossRef]
  13. Pincus, P.; Preiss, M.; Goh, A.S.; Gray, D. Polarimetric calibration of circularly polarized synthetic aperture radar data. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6824–6839. [Google Scholar] [CrossRef]
  14. Han, Y.; Lu, P.; Liu, X. On the Method of Circular Polarimetric SAR Calibration Using Distributed Targets. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16. [Google Scholar] [CrossRef]
  15. Villa, A.; Iannini, L.; Giudici, D. Calibration of SAR Polarimetric Images by Means of a Covariance Matching Approach. IEEE Trans. Geosci. Remote Sens. 2015, 53, 674–686. [Google Scholar] [CrossRef]
  16. Tebaldini, S.; Rocca, F.; Guarnieri, A.M. Model Based SAR Tomography of Forested Areas. In Proceedings of the IGARSS 2008—2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008. [Google Scholar]
  17. Zhang, J.-J.; Hong, W.; Jin, Y.-Q. On the Method of Polarimetric SAR Calibration Using Distributed Targets. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  18. Zhang, J.-J.; Hong, W.; Xu, F.; Jin, Y.-Q. On the Model of Polarimetric SAR Calibration Using Distributed Targets. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10112–10125. [Google Scholar] [CrossRef]
  19. Shi, L.; Yang, J.; Li, P. Co-polarization channel imbalance determination by the use of bare soil. ISPRS J. Photogramm. Remote Sens. 2014, 95, 53–67. [Google Scholar] [CrossRef]
  20. Nghiem, S.V.; Yueh, S.H.; Kwok, R.; Li, F.K. Symmetry properties in polarimetric remote sensing. Radio Sci. 1992, 27, 693–711. [Google Scholar] [CrossRef]
  21. Ottersten, B.; Stoica, P.; Roy, R. Covariance Matching Estimation Techniques for Array Signal Processing Applications. Digit. Signal Process 1998, 8, 185–210. [Google Scholar] [CrossRef]
  22. Bentley, J.L. Multidimensional binary search trees used for associative searching. Commun. ACM 1975, 18, 509–517. [Google Scholar] [CrossRef]
  23. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  24. Fletcher, R. A New Approach to Variable Metric Algorithms. Comput. J. 1970, 13, 317–322. [Google Scholar] [CrossRef]
  25. Broyden, C.G. The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations. IMA J. Appl. Math. 1970, 6, 76–90. [Google Scholar] [CrossRef]
  26. Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. [Google Scholar] [CrossRef]
Figure 1. Parameter iterative estimation procedure. (a) The ideal calibration processes; (b) the actual calibration process, where θ represents the rotation angle induced by the introduction of the imaginary part.
Figure 2. Parameter estimation error and loss function values. (a) R by Ainsworth algorithm in dB; (b) R by Quegan algorithm in dB; (c) R by Comet algorithm in dB; (d) the value of loss function in dB.
Figure 3. Minimum target values. The blue line represents the observed minimum target values. The green line represents the estimated minimum target values.
Figure 4. Classification results. (a) Real classification; (b) predicted classification. The figure shows only the relationship between the classification results and two selected features: the abscissa and ordinate are the first and third dimensions of the five-dimensional feature vectors, chosen to illustrate their distribution.
Figure 5. R with Comet IS in dB.
Figure 6. Measured information graph. (a) Optical image of the test area; (b) polarimetric SAR image of the corner reflector calibration site.
Figure 7. Crosstalk amplitude. (a) Ainsworth; (b) Quegan; (c) Comet IS.
Figure 8. SAR image (Pauli). (a) Raw SAR image; (b) image processed by Ainsworth algorithm; (c) image processed by Quegan algorithm; (d) image processed by Comet IS.
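For reference, the Pauli color composites shown in Figure 8 follow the conventional mapping R = |HH − VV|, G = |HV + VH|, B = |HH + VV|. A minimal sketch (assuming calibrated complex channel arrays; this illustrates the standard convention, not the authors' exact rendering pipeline):

```python
import numpy as np

def pauli_rgb(hh, hv, vh, vv):
    """Map complex scattering channels to a Pauli RGB composite.

    Conventional assignment: R=|HH-VV| (double-bounce), G=|HV+VH|
    (volume), B=|HH+VV| (surface), each scaled by 1/sqrt(2), then
    normalized to [0, 1] for display.
    """
    r = np.abs(hh - vv) / np.sqrt(2.0)
    g = np.abs(hv + vh) / np.sqrt(2.0)
    b = np.abs(hh + vv) / np.sqrt(2.0)
    rgb = np.stack([r, g, b], axis=-1)
    peak = rgb.max()
    return rgb / peak if peak > 0 else rgb
```

In practice a per-channel contrast stretch is usually applied before display; the global normalization above is the simplest choice.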
Figure 9. Magnified section of SAR image. (a) Raw image; (b) image calibrated by Ainsworth algorithm; (c) image calibrated by Quegan algorithm; (d) image calibrated by Comet IS algorithm.
Figure 10. Signatures of corner reflector RH9. (a–d) The x-pol signatures; (e–h) the co-pol signatures. (a,e) Raw data; (b,f) data calibrated by Ainsworth algorithm; (c,g) data calibrated by Quegan algorithm; (d,h) data calibrated by Comet IS algorithm.
Table 1. Variation of distortion results.
Algorithm  | Mean (dB) | Max. (dB) | Min. (dB)
Ainsworth  | −8.4803   |  0.8913   | −10.6775
Quegan     | −2.8945   |  9.8020   |  −4.9945
Comet      | −5.7595   | 10.8171   | −18.1581
Table 2. Variation of distortion results.
Algorithm  | Mean (dB)  | Max. (dB) | Min. (dB)
Comet      |  −5.7595   | 10.8171   | −18.1581
Comet IS   | −10.6014   | −1.0132   | −18.1581
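As a rough illustration of how mean/max/min variation figures of the kind reported in Tables 1 and 2 can be summarized (a sketch only; `delta_db` is a hypothetical array of per-sample distortion variations in dB, not the paper's data):

```python
import numpy as np

def variation_stats(delta_db):
    """Summarize per-sample distortion variations (dB) as mean/max/min."""
    d = np.asarray(delta_db, dtype=float)
    return {"mean": d.mean(), "max": d.max(), "min": d.min()}
```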
Table 3. Overall evaluation metrics (first group).
Algorithm  | xtalk (dB) | camp (dB) | cph (°) | xcmp (dB) | xph (°)
Quegan     | 29.9440    | −8.1744   | 2.4217  | −20.6544  | 3.2675
Ainsworth  | 13.4740    | −3.7702   | 3.8072  |  −7.3656  | 4.4414
Comet IS   | 31.4666    | −8.1564   | 2.3947  |  −4.9693  | 3.1183
Table 4. Overall evaluation metrics (second group).
Algorithm  | xtalk (dB) | camp (dB) | cph (°) | xcmp (dB) | xph (°)
Quegan     | 29.9747    | −8.2103   | 2.3906  | −22.0018  | 2.9057
Ainsworth  | 14.2331    | −3.7600   | 3.7964  |  −6.4833  | 3.8984
Comet IS   | 31.3942    | −8.1564   | 2.4155  |  −9.6676  | 3.2100
Table 5. Overall evaluation metrics (third group).
Algorithm  | xtalk (dB) | camp (dB) | cph (°) | xcmp (dB) | xph (°)
Quegan     | 29.1243    | −8.0043   | 2.5978  | −18.1647  | 4.1512
Ainsworth  | 15.0020    | −3.5984   | 3.9112  |  −8.4251  | 4.1013
Comet IS   | 32.5003    | −8.2932   | 2.4007  | −15.0103  | 3.0034
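One common way to read the xtalk columns above: crosstalk isolation at a point target is typically quantified as the co-polarized to cross-polarized power ratio at the reflector's peak response, in dB. A hedged sketch (assuming peak complex responses `s_co` and `s_x`; the paper's exact metric definitions may differ):

```python
import numpy as np

def crosstalk_isolation_db(s_co, s_x):
    """Co-pol to cross-pol power ratio (dB) at a point-target peak.

    Higher values indicate better crosstalk suppression. Illustrative
    definition only; sign and ratio conventions vary between papers.
    """
    return 10.0 * np.log10(np.abs(s_co) ** 2 / np.abs(s_x) ** 2)
```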
Table 6. Evaluation metrics of Comet IS.
Corner Reflector Number | xtalk (dB) | camp (dB) | cph (°) | xcmp (dB) | xph (°)
TH1   | 28.7041 | 0.2467 | 0.1992 |        |
TH2   |  8.5636 | 0.3185 | 1.6325 |        |
TH3   | 24.7266 | 0.5249 | 0.7088 |        |
TH4   | 17.1912 | 0.4975 | 0.2490 |        |
TH5   | 18.3846 | 0.1296 | 0.1110 |        |
TH6   | 27.9198 | 0.1027 | 1.3288 |        |
TH7   | 79.8058 | 0.0211 | 6.0983 |        |
TH8   | 39.9927 | 0.3396 | 0.6149 |        |
TH9   | 68.9460 | 0.2870 | 3.7258 |        |
TH10  | 18.9995 | 0.0406 | 0.0309 |        |
TH11  | 25.0718 | 0.0902 | 2.7224 |        |
TH12  | 68.3716 | 0.8023 | 4.2082 |        |
TH13  | 24.1362 | 0.6374 | 2.3639 |        |
TH14  | 14.9219 | 0.2842 | 0.1457 |        |
TH15  | 14.6908 | 0.3054 | 1.8039 |        |
TH16  |  9.9153 | 0.5491 | 1.1293 |        |
RH1   |         |        |        | 0.7969 | 3.4028
RH2   |         |        |        | 0.0435 | 2.8051
Table 7. Evaluation metrics of Quegan.
Corner Reflector Number | xtalk (dB) | camp (dB) | cph (°) | xcmp (dB) | xph (°)
TH1   | 23.2820 | 0.5067 |  0.1075 |        |
TH2   |  7.5927 | 0.2435 |  1.9978 |        |
TH3   | 25.0612 | 0.4801 |  1.8528 |        |
TH4   |  1.1195 | 0.1751 |  2.5790 |        |
TH5   | 25.3711 | 0.4440 | −0.4107 |        |
TH6   | 12.3130 | 0.0094 |  0.7181 |        |
TH7   | 61.1111 | 0.0077 |  5.4248 |        |
TH8   | 16.5376 | 0.3124 |  0.1972 |        |
TH9   | 68.5427 | 0.2927 |  4.3499 |        |
TH10  | 15.5043 | 0.0398 |  1.4972 |        |
TH11  | 19.4849 | 0.0677 |  1.9741 |        |
TH12  | 59.2079 | 0.7778 |  3.7904 |        |
TH13  | 23.8772 | 0.6572 |  2.8647 |        |
TH14  | 13.0935 | 0.2535 |  0.0413 |        |
TH15  | 11.4758 | 0.2916 |  1.2983 |        |
TH16  |  8.1129 | 0.5347 |  0.5228 |        |
RH1   |         |        |         | 0.1106 | 4.0667
RH2   |         |        |         | 0.0705 | 2.1942
Table 8. Evaluation metrics of Ainsworth.
Corner Reflector Number | xtalk (dB) | camp (dB) | cph (°) | xcmp (dB) | xph (°)
TH1   | 14.4830 | 0.6368 | 0.3687 |        |
TH2   |  8.3090 | 0.4178 | 1.5247 |        |
TH3   | 30.1114 | 0.4590 | 1.6869 |        |
TH4   | 29.6012 | 0.2172 | 2.0635 |        |
TH5   | 42.1889 | 0.3909 | 0.3569 |        |
TH6   | 10.3679 | 0.1248 | 0.5231 |        |
TH7   | 27.4644 | 0.0095 | 5.3580 |        |
TH8   | 10.0452 | 0.3769 | 0.1267 |        |
TH9   | 21.4576 | 0.3075 | 4.1847 |        |
TH10  | 12.6548 | 0.0205 | 2.0674 |        |
TH11  | 18.9940 | 0.0503 | 2.4207 |        |
TH12  | 22.1619 | 0.7787 | 3.7139 |        |
TH13  | 14.3890 | 0.6863 | 3.1686 |        |
TH14  | 21.5035 | 0.3469 | 0.0759 |        |
TH15  | 24.7924 | 0.2736 | 1.2895 |        |
TH16  | 19.0850 | 0.5481 | 0.7352 |        |
RH1   |         |        |        | 0.0545 | 3.2928
RH2   |         |        |        | 0.2536 | 2.1497
Table 9. SAR data Information.
Corner Reflector Number | τ          | χ (dB)      | ε
TH1   | 0.48946762 | −1.9194260  | 2.2425146
TH2   | 0.40278637 | −4.4303093  | 3.5309291
TH3   | 0.59288347 | −2.1717021  | 1.1541296
TH4   | 0.59339267 | −3.3603048  | 1.4142637
TH5   | 0.54164469 | −3.01795931 | 0.85068184
TH6   | 0.57737476 | −0.86482018 | 0.70104456
TH7   | 0.62415862 | −1.1522119  | 0.70582014
TH8   | 0.57448375 | −2.0162332  | 0.50525302
TH9   | 0.62745881 | −1.1603843  | 0.69839001
TH10  | 0.55428106 | −0.15469962 | 1.2578657
TH11  | 0.57731920 | −0.44085646 | 1.2952329
TH12  | 0.81376517 | −5.0301127  | 1.1800278
TH13  | 0.71061641 | −2.1084490  | 1.2729205
TH14  | 0.53795475 | −0.88469350 | 1.4626347
TH15  | 0.59874237 | −4.1728282  | 1.1617874
TH16  | 0.61266279 | −5.8911633  | 3.3354874
RH1   | 0.41774085 | −5.7797787  | 1.3721687
RH2   | 0.56055534 |  0.26427746 | 1.1695871
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Liu, J.; Liu, L.; Zhou, X. Calibration of SAR Polarimetric Images by Covariance Matching Estimation Technique with Initial Search. Remote Sens. 2024, 16, 2400. https://doi.org/10.3390/rs16132400


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
