1. Introduction
Synthetic aperture radar (SAR) creates high cross-range resolution images through coherent processing of the pulses received at different antenna positions along the path of a moving platform, which provides the effect of a large virtual aperture [1,2,3]. To achieve the full performance of SAR, the exact flight trajectory of the platform should be provided to the signal processor for proper motion compensation. Especially in the case of airborne SAR, the effect of unexpected platform motion due to factors such as wind gusts and aircraft vibration should be compensated before SAR processing. This compensation is usually performed using measured navigation data such as the output of a global positioning system (GPS), an inertial navigation system (INS), or an embedded GPS/INS (EGI) [4]. However, these data contain measurement errors due to the inaccuracy of the navigation sensors, and the residual errors cause phase errors in the SAR data, which degrade the quality of the SAR image. There are several methods, usually called autofocus, to solve this kind of image quality degradation.
Phase gradient autofocus (PGA) is one of the most widely used algorithms for estimating the phase error [5]. The method assumes that, within a window of appropriate size, all complex reflectivities except the center-shifted point target at each range bin are distributed as zero-mean Gaussian random noise. These assumptions limit the performance of the algorithm in spite of its robustness and fast convergence: the point-target and random-noise assumptions are inappropriate for scenes that contain dominant targets close to each other in the azimuth direction. Quality phase gradient autofocus (QPGA) and generalized phase gradient autofocus (GPGA) were proposed to alleviate this problem [6,7]. However, both QPGA and GPGA use a limited window size, which also limits the bandwidth of the estimated phase error. Therefore, these PGA-based methods are not suitable for estimating phase errors containing high-frequency components, such as the errors generated by GPS measurement updates in an EGI.
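As an illustration of the procedure outlined above, the following is a minimal NumPy sketch of a PGA-style loop: center-shifting the brightest scatterer of each range bin, windowing, estimating the phase gradient from the lag-one correlation of the windowed phase history, and applying the integrated correction. The window handling (a fixed fraction `win_frac` instead of an adaptively shrinking window) and the iteration count are illustrative simplifications, not the exact scheme of [5].

```python
import numpy as np

def pga_sketch(img, n_iter=10, win_frac=0.5):
    """One possible PGA loop. img: complex image, axis 0 = range, axis 1 = azimuth."""
    n_az = img.shape[1]
    win = int(n_az * win_frac)
    phi_total = np.zeros(n_az)
    for _ in range(n_iter):
        # Circularly shift the brightest scatterer of each range bin to the center.
        shifted = np.empty_like(img)
        for r in range(img.shape[0]):
            shifted[r] = np.roll(img[r], n_az // 2 - np.argmax(np.abs(img[r])))
        # Window around the center to suppress the surrounding clutter.
        g = np.zeros_like(shifted)
        lo, hi = n_az // 2 - win // 2, n_az // 2 + win // 2
        g[:, lo:hi] = shifted[:, lo:hi]
        # Transform to the azimuth-spectrum (phase history) domain.
        G = np.fft.fft(g, axis=1)
        # Phase-gradient estimate: angle of the lag-one correlation over range.
        dphi = np.angle(np.sum(G[:, 1:] * np.conj(G[:, :-1]), axis=0))
        phi = np.concatenate(([0.0], np.cumsum(dphi)))
        phi -= np.linspace(0.0, phi[-1], n_az)        # remove the linear trend
        # Correct the image in the spectrum domain and accumulate the estimate.
        img = np.fft.ifft(np.fft.fft(img, axis=1) * np.exp(-1j * phi), axis=1)
        phi_total += phi
    return img, phi_total
```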
Optimization of sharpness metrics is also a well-known autofocus technique [8,9]. These methods optimize a cost function whose optimum corresponds to a well-focused scene. One of the most commonly used cost functions is entropy, and such methods are often called minimum entropy (ME) methods [10,11,12,13,14,15]. ME-based methods are not restricted by the assumptions of PGA, so there is no limitation on the bandwidth of the estimate. Because these methods have no closed-form solution, the optimization can be carried out in various ways [8]. One way is to model the estimated error as a polynomial and adjust the coefficients to minimize the entropy of the image [10]. Although this method achieves adequate performance, the higher-order components of the phase error are difficult to estimate because of the limited order of the estimate, which is similar to the bandwidth issue in PGA. Even though the order of the polynomial can be increased adaptively, doing so is computationally intensive. The optimization method based on trial and error [11] suffers from a similar or even heavier computational burden.
The most widely used ME methods can be divided into two categories. The first is based on gradient searching. One well-known method in this category is the monotonic iterative algorithm (MIA) [12], which minimizes well-defined surrogate functions instead of the entropy itself. An algorithm based on Newton's method, which seeks the phase estimate that makes the gradient of the entropy zero, also provides reasonable performance [13]. These methods can be carried out through the fast Fourier transform (FFT), which enables fast phase estimation. The other category is based on fixed-point iteration [14,15]. In these methods, fixed-point iteration is used to solve the implicit equation in the phase that makes the derivative of the entropy zero. Every iteration can likewise be conducted through FFTs. The algorithms in these two categories show similar estimation accuracy and convergence.
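To make the FFT-based evaluation concrete, the sketch below shows how the gradient of the image entropy with respect to the per-pulse phase can be computed with two FFTs per iteration, here plugged into plain gradient descent for simplicity. The actual updates in [12,13,14,15] (surrogate minimization, Newton's method, fixed-point iteration) differ, and the step size `step` is an illustrative assumption.

```python
import numpy as np

def me_gradient_descent(img, n_iter=100, step=1.0):
    """Entropy minimization over a per-pulse phase vector (axis 1 = azimuth)."""
    n_az = img.shape[1]
    Z = np.fft.fft(img, axis=1)      # azimuth (phase history) spectrum
    P = np.sum(np.abs(img) ** 2)     # total power; invariant under phase-only correction
    phi = np.zeros(n_az)
    for _ in range(n_iter):
        z = np.fft.ifft(Z * np.exp(1j * phi), axis=1)   # current image
        p = np.abs(z) ** 2 / P                           # normalized intensity
        w = 1.0 + np.log(p + 1e-30)
        # dS/dphi_m = (2 / (N P)) * Im{ sum_r Z_{r,m} e^{j phi_m} conj(FFT{w * z})_{r,m} }
        grad = (2.0 / (n_az * P)) * np.imag(
            np.sum(Z * np.exp(1j * phi) * np.conj(np.fft.fft(w * z, axis=1)), axis=0))
        phi -= step * grad                               # descend on the entropy
    return np.fft.ifft(Z * np.exp(1j * phi), axis=1), phi
```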
Regularization-based autofocus techniques have also been proposed [16,17,18,19,20,21,22]; these methods are quite different from postprocessing autofocus such as PGA and ME. They are based on a regularized reconstruction of the SAR image, which is often called compressive sensing [22], sparsity-driven imaging [23], or feature-enhanced SAR imaging [24,25,26]. Unlike conventional SAR image formation such as the polar format algorithm (PFA) [1], regularization-based imaging methods have advantages such as high resolution, which is not limited by the SAR system bandwidth, and suppression of artifacts caused by speckle and sidelobes, even when the datasets are nonuniform and undersampled. Sparsity-driven autofocus (SDA), one of the well-known autofocus methods in this area, minimizes a cost function composed of fidelity and regularization terms jointly over the image estimate and the phase error [16]. SDA achieves quite accurate phase estimation while preserving the advantages of regularized reconstruction. Similar approaches have been proposed, such as optimization of a cost function including total variation [17] and modified Tikhonov regularization-based autofocus [18,19], to improve performance and reduce the computational burden. Although the regularization-based methods achieve high-quality SAR images, they are hard to utilize for SAR missions that require on-board processing and a large scene size, because they perform not only the estimation of the phase error but also the reconstruction of the image simultaneously, which requires substantial computing power. The large computational burden of regularized reconstruction is one of the reasons why PFA is still widely utilized for on-board SAR image formation. If the SAR system is well designed to meet the performance requirements, the only remaining concern for image degradation is the phase error, not sparse sampling.
In this paper, we propose a new postprocessing autofocus algorithm for phase-corrupted images. The algorithm is designed to minimize a cost function containing an $\ell_1$-norm regularization term, which is similar to regularization-based autofocus such as SDA. However, the proposed algorithm deals with the processed complex image corrupted by the phase error, whereas SDA deals with the returned pulse data before processing. This difference makes the proposed algorithm straightforward, requiring only simple FFT and soft-threshold calculations. The minimization of the cost function is carried out in an indirect manner, and fixed-point iteration is used to obtain the optimal solution.
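The cost function and the exact update equation are given in Section 3; purely for orientation, the sketch below shows the loop structure implied by the description above: soft-thresholding the current image to form a sparse reference, followed by a fixed-point phase update computed with FFTs. The specific update used here (the phase of the per-pulse correlation between the reference spectrum and the corrupted azimuth spectrum), the stop test, and the geometric threshold decay are assumptions for illustration, not the exact equations of Section 3.

```python
import numpy as np

def soft_threshold(z, tau):
    """Complex soft-thresholding: shrink magnitudes by tau, keep the phase."""
    return np.maximum(np.abs(z) - tau, 0.0) * np.exp(1j * np.angle(z))

def fpa_sketch(img, tau0=0.9, rho=0.5, eps=1e-4, max_iter=50):
    """Hypothetical FPA-style loop. img: complex image, axis 1 = azimuth."""
    img = img / np.abs(img).max()          # scale so that tau lies within (0, 1)
    Z = np.fft.fft(img, axis=1)            # corrupted azimuth spectrum
    phi = np.zeros(img.shape[1])
    tau = tau0
    for _ in range(max_iter):
        z = np.fft.ifft(Z * np.exp(1j * phi), axis=1)   # current corrected image
        ref = soft_threshold(z, tau)                    # reference image (features)
        R = np.fft.fft(ref, axis=1)
        # Assumed fixed-point update: phase of the per-pulse correlation
        # between the reference spectrum and the corrupted spectrum.
        phi_new = np.angle(np.sum(R * np.conj(Z), axis=0))
        if np.max(np.abs(np.exp(1j * phi_new) - np.exp(1j * phi))) < eps:
            phi = phi_new
            break
        phi, tau = phi_new, tau * rho                   # decay the threshold
    return np.fft.ifft(Z * np.exp(1j * phi), axis=1), phi
```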
The rest of this paper is outlined as follows. In Section 2, the fundamental background for the proposed algorithm, such as the iterative shrinkage thresholding algorithm (ISTA) and denoising [27], is explained. In Section 3, we define the cost function for the proposed method and present the iterative algorithm that achieves the minimization. Then, we demonstrate the performance, convergence, and robustness of the proposed algorithm and compare them with those of the existing autofocus methods, such as PGA, GPGA, and ME, through experimental results in Section 4. Finally, the conclusions are presented in Section 5.
4. Experimental Results
In this section, experimental results are presented to verify the benefits of the proposed algorithm. In Section 4.1, we verify the performance and characteristics of the proposed method with different threshold values $\tau$ and explain the tradeoff between convergence and accuracy. Then, we compare the results for a constant threshold with those for the varying threshold introduced in the previous section, which compromises this tradeoff. In Section 4.2, we demonstrate the convergence and performance of the proposed method for various types of phase error and compare them with those of the existing autofocus algorithms. PFA is utilized for SAR imaging in all the experiments. The quantitative measures used to verify the performance of the autofocus algorithms are defined as

$$C(\mathbf{g}) = \frac{\sqrt{\mathrm{E}\left\{\left(|\mathbf{g}|^{2} - \mathrm{E}\{|\mathbf{g}|^{2}\}\right)^{2}\right\}}}{\mathrm{E}\{|\mathbf{g}|^{2}\}}, \qquad S(\mathbf{g}) = -\sum_{m,n} \frac{|g_{m,n}|^{2}}{P} \ln \frac{|g_{m,n}|^{2}}{P}, \quad P = \sum_{m,n} |g_{m,n}|^{2},$$

where $\mathrm{E}\{\cdot\}$ represents the spatial mean operator, and $C(\mathbf{g})$ and $S(\mathbf{g})$ represent the contrast and entropy of the image $\mathbf{g}$, respectively. The initial image $\mathbf{g}_{0}$ is scaled to have a maximum magnitude of 1 in all the experiments, which keeps the threshold $\tau$ of the soft-threshold function within 0 to 1. The constant $\epsilon$ for the stop criterion is set to $10^{-4}$ for all cases.
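For reference, the two measures defined above can be computed directly from the complex image, for example as follows (a NumPy sketch, for illustration):

```python
import numpy as np

def contrast(img):
    """Ratio of the standard deviation of the intensity to its spatial mean."""
    inten = np.abs(img) ** 2
    return np.sqrt(np.mean((inten - inten.mean()) ** 2)) / inten.mean()

def entropy(img):
    """Entropy of the intensity distribution normalized by the total power."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]                 # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))
```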
4.1. Performance and Convergence of the Proposed Method
The SAR image used for the proposed FPA is shown in Figure 2a. The size of the image is 4000 × 4000 pixels. The vertical and horizontal coordinates of the image represent range and azimuth, respectively; the same convention is used for all other SAR images in this paper. Figure 2b,c shows the image enlarged at a point near the center, without and with the phase error of Figure 3, respectively. The images corrected by FPA are shown in Figure 4, and the variation of contrast and entropy over the iterations is represented in Figure 5. The images in Figure 4a–d are the results for a constant threshold $\tau$ fixed at 0.01, 0.1, 0.3, and 0.9, respectively. Although the images look similar to each other, their contrast and entropy differ slightly, as shown in Figure 5 and Table 1. For a small value of $\tau$, the convergence rate is low, even though the measures tend to converge to their optimal values. On the contrary, a large value of $\tau$ enables fast convergence, whereas the final performance is degraded. These characteristics can be explained by the number of features in Figure 5c: the smaller the threshold, the larger the number of features the algorithm uses at each iteration. As mentioned in the previous section, the features may contain artifacts if their number is too large, which adversely affects the convergence rate. With a small number of features, on the other hand, some dominant scatterers are omitted, which may interfere with the global optimality of the estimation even though convergence is fast.
As explained in the previous section, we applied a varying threshold to the proposed FPA to achieve both fast convergence and optimal performance. We set the threshold $\tau_{0}$ for the initial iteration to a relatively large value and gradually decreased it at each iteration as in (23). The initial threshold $\tau_{0}$ and the forgetting factor $\rho$ were set to 0.9 and 0.5, respectively, and the corrected image obtained with this schedule is shown in Figure 4e. As shown in Figure 5a,b, the quantitative measures reach their optimal values within an appropriate number of iterations. Unlike the fixed-threshold cases, the number of features increases at each iteration, as shown in Figure 5c, which enables the features of the current iteration to include the dominant scatterers omitted in the previous iteration.
4.2. Comparison with Existing Autofocus Algorithms
We compared the proposed method with the existing postprocessing autofocus methods of PGA [5], GPGA [7], and ME [12]. Constant false alarm rate (CFAR) detection was used to select the strongest scatterers for GPGA, which is a modified version of PGA; a generic sketch of such a detector is given below. The stop conditions for PGA, GPGA, and ME were the same as that of the proposed FPA, and we used the varying threshold for FPA as described in Section 3.3. The values of $\tau_{0}$ and $\rho$ for FPA were the same as in the previous experiment.
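The CFAR variant and its parameters are not specified here; the following is a one-dimensional cell-averaging CFAR sketch with hypothetical guard/training sizes and scale factor, returning the indices of cells whose magnitude exceeds a scaled local noise estimate.

```python
import numpy as np

def ca_cfar_peaks(mag, guard=4, train=16, scale=3.0):
    """1D cell-averaging CFAR along a line of magnitudes (illustrative parameters)."""
    n = len(mag)
    peaks = []
    for i in range(train + guard, n - train - guard):
        # Training cells on both sides of the cell under test, excluding guard cells.
        left = mag[i - guard - train : i - guard]
        right = mag[i + guard + 1 : i + guard + train + 1]
        if mag[i] > scale * np.mean(np.concatenate([left, right])):
            peaks.append(i)
    return np.array(peaks)
```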
We added the various types of phase errors shown in Figure 6 to the scene in Figure 2a. The images corrected by PGA, GPGA, ME, and the proposed FPA are shown in Figure 7. The contrast and entropy variations over the iterations are represented in Figure 8 and Figure 9, respectively. The performance measures of the images for each method and phase error are listed in Table 2, and the number of iterations and total computing time are shown in Table 3. The computation time was measured on a workstation equipped with an Intel® Xeon® Gold 6140 CPU, with MATLAB as the programming language.
It is a well-known fact that PGA shows fast convergence and sufficient performance for low-frequency errors, such as the quadratic error shown in Figure 6a, which can be observed from the results in Figure 7a, Figure 8a and Figure 9a, and Table 2 and Table 3. GPGA shows almost the same result as PGA with fewer iterations, but requires more computation time due to additional processing such as the strongest-peak selection through CFAR at every iteration. The proposed method shows a similar trend with less computation time, because both PGA and GPGA require additional procedures for center-shifting, windowing, elimination of the linear phase, etc. Meanwhile, ME requires more iterations for this large phase error, although it slowly converges to the optimal performance, as shown in those figures. Unlike the quadratic error case, the image with the random phase error cannot be corrected by PGA and GPGA, as shown in Figure 7b, Figure 8b and Figure 9b, and Table 2 and Table 3. This is a natural result of the assumptions and the limited window size of PGA described in Section 1. Meanwhile, ME and FPA show nearly optimal performance within a few iterations. It can therefore be inferred from these results that the performance and convergence rate of the proposed FPA are not limited by the bandwidth of the phase error, unlike those of PGA and ME.
In a practical SAR system, Wiener-process and discontinuous phase errors can occur because of the navigation systems used for motion compensation. With an INS, Wiener-process errors are generated by the integration of IMU measurements that contain Gaussian white noise. Furthermore, the navigation data experience discontinuities if the system uses GPS measurement updates. The phase errors in Figure 6c,d are of these kinds, and the autofocus results for them are represented in (c) and (d) of Figure 7, Figure 8 and Figure 9, and in Table 2 and Table 3. These results verify that FPA again shows the best performance with an appropriate number of iterations and computation time; a sketch of how such errors can be simulated is given below.
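A minimal NumPy sketch of the four error types used in this section (quadratic, random, Wiener process, and discontinuous); the amplitudes, jump count, and random seed are illustrative assumptions, not the parameters behind Figure 6.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000                                  # number of azimuth pulses

# (a) quadratic (low-frequency) error
t = np.linspace(-1, 1, n)
quadratic = 20 * np.pi * t ** 2           # illustrative amplitude

# (b) wideband random error
random_err = np.pi * rng.uniform(-1, 1, n)

# (c) Wiener-process error: integration of Gaussian white noise,
#     as produced by integrating noisy IMU measurements
wiener = np.cumsum(rng.normal(0, 0.05, n))

# (d) discontinuous error: Wiener process plus step jumps,
#     mimicking GPS measurement updates in an EGI
discontinuous = wiener.copy()
for idx in rng.choice(n, size=5, replace=False):
    discontinuous[idx:] += rng.uniform(-np.pi, np.pi)
```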
These results therefore verify the performance and convergence of the proposed FPA, as well as its robustness to most types of phase error. We performed the same procedure for the two additional scenes in Figure 10 to verify the reliability of the algorithm. Scenes A and B were selected to have higher and lower entropy, i.e., lower and higher contrast, respectively, than the image in Figure 2a. The results for these two scenes are presented in Table 4, Table 5, Table 6 and Table 7, and trends of performance and convergence similar to those in Table 2 and Table 3 are observed. FPA shows the best performance with sufficiently few iterations and short computation time in all cases, which verifies that it produces reliable performance.
5. Conclusions
In this paper, we proposed and demonstrated a new autofocus method for postprocessing of a phase-corrupted SAR image, based on minimization of a cost function that consists of fidelity and regularization terms. The equation that achieves optimality is derived by indirect optimization, and an algorithm to solve it is proposed. Each iteration of the proposed algorithm requires only one soft-thresholding for the reference image formation, which enables more efficient processing than existing regularization-based autofocus methods such as SDA. The tradeoff between performance and convergence in the proposed FPA can be compromised by selecting a proper constant threshold or by using an asymptotically decreasing threshold with an appropriate initial value and forgetting factor. The experimental results verified its better performance, convergence, and robustness compared with the existing methods of PGA and ME. We further verified the reliability of the proposed method by performing two additional experiments with different scenes.
Although the proposed FPA shows sufficient performance and convergence in these experiments, there are still factors to improve, such as the selection of features and the determination of the threshold at each iteration. These factors depend on the scene; hence, modifications through adaptive methods would improve the performance and convergence of the proposed algorithm, which is left for future work.