Article
Peer-Review Record

SPA-GAN: SAR Parametric Autofocusing Method with Generative Adversarial Network

Remote Sens. 2022, 14(20), 5159; https://doi.org/10.3390/rs14205159
by Zegang Ding 1,2,3, Ziwen Wang 1,2,3, Yangkai Wei 1,2,3,*, Linghao Li 1,2,3, Xinnong Ma 1,2,3, Tianyi Zhang 1,2,3 and Tao Zeng 1,2,3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 17 August 2022 / Revised: 19 September 2022 / Accepted: 28 September 2022 / Published: 15 October 2022

Round 1

Reviewer 1 Report

In this paper, a SAR parametric autofocusing method with GAN is proposed to solve the overfocusing problem of arc-scattering targets caused by the coupling of the time-varying scattering phase and the motion error phase. I think there are some problems in this paper, which are given below:

1)    Please introduce what the training samples are, and state whether the training samples are consistent with the data used to verify the effectiveness of the method. If the two are the same or highly similar, then the results of the method are problematic.

2)    What is the number of training samples in the experiment? Deep learning in SAR often suffers from few training samples and poor generalization performance. How do the authors ensure the generalization of the proposed network and the high robustness of the motion phase error estimation?

3)    Is the method proposed in this paper effective for any motion error? The motion error of an airborne platform is large, and it is difficult to maintain the original shape of the targets. Could you explain the motion error range to which the proposed method applies?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper proposes a SAR autofocusing method using a Generative Adversarial Network (GAN) to obtain the correct focused SAR image of the distributed targets. The topic is interesting, in particular, the usage of GAN for such an application. Moreover, the real data experiments support the theoretical analysis. The paper can be considered for publication given a few minor revisions. 

Line 19: "Synthetic aperture radar (SAR) is a radar system that puts radar ...": the authors are recommended to rephrase this sentence, as it uses the word "radar" three times in a row.

Line 30: "Too small or too large will obtain the suboptimal solution." Please include the subject of the sentence, i.e., the learning rate.

Figure 1: Please mention all the sub-figures in the text.

Line 119: "Analyses" --> "analysis".

Regarding the use of GANs in SAR imaging, the introduction of GANs and their application to SAR images are not discussed comprehensively in the paper. The authors are suggested to improve the quality of the paper using material from the following paragraph, which has been adapted from "A Survey on the Applications of Convolutional Neural Networks for Synthetic Aperture Radar: Recent Advances":

Generative Adversarial Networks (GANs) [205] are DL architectures typically used for generating new instances of the input data that mimic the real data. They can also be used to distinguish between real and fake data. A GAN consists of a generator network and a discriminator network that compete against each other. The generator network tries to produce fake data, and the discriminator tries to identify the real data from the fake in order not to be fooled by the generator. At the end of this adversarial game, they reach the Nash equilibrium point. Radford et al. [206] introduced deep convolutional GANs (DCGANs) in 2015. Guo et al. [207] used DCGANs to implement a SAR image simulator. This simulator could be helpful to synthesize SAR images at a desired observation angle from a limited set of aspects. This is important when orbital geometry limitations and high maintenance costs are taken into consideration. Gao et al. [208] employed DCGANs to predict the labels of SAR samples by training the network with a small number of labeled samples and then extending the labeled set iteratively. They used a co-training [209] method to perform this task, first utilizing a few labeled samples to predict the labels of the unlabeled samples. Afterwards, samples with high confidence were chosen and added to the previous labeled set for the next training iteration. Shi et al. [210] employed DCGAN for SAR image enhancement and used the result for the SAR-ATR task. Zhang et al. [211] used DCGAN for transferring knowledge from unlabeled SAR images. They trained a DCGAN with unlabeled samples to learn generic features of SAR images and reused the learned parameters for the SAR-ATR task.
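For readers unfamiliar with the adversarial setup described above, a minimal sketch of the generator/discriminator game is given below. This is a generic toy GAN in PyTorch, not the SPA-GAN architecture proposed in the paper; all layer sizes, data, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: maps random noise to fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.randn(batch, data_dim)  # stand-in for real training data

for _ in range(100):
    # Discriminator step: label real samples 1 and generated samples 0.
    fake = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))),
                 torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```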

 

Line 302: 1) Can you mention in the paper whether there is any difference between the Radon transform and the Hough transform? 2) Can a "coherent Hough transform" be used here?
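For context, the operational difference between the two transforms can be illustrated with scikit-image: the Radon transform integrates image intensities along projection lines, while the Hough transform accumulates votes from (typically binary) pixels in a line-parameter space. The sketch below is generic and not tied to the paper's processing chain; the test image is a placeholder.

```python
import numpy as np
from skimage.transform import radon, hough_line

# Toy image containing a single horizontal line segment.
img = np.zeros((64, 64))
img[32, 10:54] = 1.0

# Radon transform: line integrals of the intensity over a set of angles.
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(img, theta=theta)

# Hough transform: accumulator in (angle, distance) space built from
# nonzero pixels, usually applied to an edge map rather than raw intensities.
accumulator, angles, distances = hough_line(img)

print(sinogram.shape, accumulator.shape)
```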

Table 3: PSNR should also be defined in the text, and it is recommended to compare PSNR vs. SSIM and discuss which metric works better and in which cases.
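For reference, both metrics are readily computed with scikit-image; the sketch below uses placeholder images and a unit data range and is only meant to illustrate the definitions (PSNR derived from the mean squared error, SSIM from local luminance/contrast/structure), not the paper's experiment.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder images: a reference amplitude image and a noisy estimate.
reference = np.random.rand(256, 256).astype(np.float32)
estimate = reference + 0.05 * np.random.randn(256, 256).astype(np.float32)

# PSNR = 10 * log10(MAX^2 / MSE); data_range supplies MAX for float images.
psnr = peak_signal_noise_ratio(reference, estimate, data_range=1.0)

# SSIM compares local luminance, contrast, and structure instead of raw error.
ssim = structural_similarity(reference, estimate, data_range=1.0)

print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```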

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper proposes a novel autofocusing method for distributed SAR targets with a generative adversarial network. In this paper, the motion error modeling of arc-scattering targets reveals the overfocusing problem of distributed targets caused by the coupling of the time-varying scattering phase and the motion error phase. Then, the paper proposes to build a GAN to estimate the geometry parameters of the target and infer the correct motion error phase. The paper is well written and innovative, and it is verified with sufficient experimental results and analysis.

Therefore, it can be accepted after addressing the following problems.

1.     It is stated in the paper that the proposed method can perform efficient and reliable parameter estimation for arc-scattering targets, but if there are no arc-scattering targets in the scene, only some point targets, is the proposed method still compatible?

2.     The paper introduces the parametric autofocusing method for arc-scattering targets in detail, so how does the proposed method select the arc-scattering targets from the image?

3.     What is the advantage of exploiting adversarial learning instead of a general mean-squared-error loss criterion?

4.     It is recommended to add a time variable in (13) to indicate that both quantities change with the radar observation angle.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
