Peer-Review Record

Edge-Preserving Convolutional Generative Adversarial Networks for SAR-to-Optical Image Translation

Remote Sens. 2021, 13(18), 3575; https://doi.org/10.3390/rs13183575
by Jie Guo 1, Chengyu He 1, Mingjin Zhang 1,*, Yunsong Li 1, Xinbo Gao 1,2 and Bangyu Song 3
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 31 July 2021 / Revised: 31 August 2021 / Accepted: 1 September 2021 / Published: 8 September 2021

Round 1

Reviewer 1 Report

The authors develop a CNN-based architecture for SAR-to-optical image translation, posing it as a trainable style-transfer problem and adding extra features to existing proposals. The main contribution is an edge-preserving convolutional generative adversarial network that the authors claim enhances details and visual cues, in particular edge information, that may be lost when using widespread GAN architectures. They propose an edge-preserving kernel that modifies the standard convolutional filter weights during training. The standard definitions and procedures associated with this kind of proposal are presented (generator, discriminator, loss function and training). The method is tested on available datasets of paired 256x256 blocks. The results show a slight improvement in structural similarity and signal-to-noise ratio with respect to other proposals; however, the visual comparison does not seem very conclusive. I recommend that a more thorough performance analysis be completed, in which regions of the image are labeled by experts so that conventional confusion-matrix and F-measure analyses can be used to compare the actual advantages of the proposal.
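
For illustration only, below is a minimal sketch (in PyTorch) of one way an edge-preserving kernel could modify standard convolutional filter weights during training. The class name, the choice of a fixed Laplacian kernel, and the blending factor alpha are assumptions made for this sketch; it is not the authors' actual EPCGAN implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EdgePreservingConv2d(nn.Module):
        """Toy edge-preserving 3x3 convolution: the learned kernel is blended
        with a fixed Laplacian edge kernel so that high-frequency (edge)
        responses are retained as the weights are updated during training."""

        def __init__(self, in_channels, out_channels, alpha=0.1):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
            # Fixed Laplacian kernel, replicated over all input/output channel pairs.
            lap = torch.tensor([[0.0, -1.0, 0.0],
                                [-1.0, 4.0, -1.0],
                                [0.0, -1.0, 0.0]])
            self.register_buffer("edge_kernel",
                                 lap.expand(out_channels, in_channels, 3, 3).clone())
            self.alpha = alpha  # blending weight of the fixed edge component

        def forward(self, x):
            # Blend the learned weights with the edge kernel before convolving.
            weight = self.conv.weight + self.alpha * self.edge_kernel
            return F.conv2d(x, weight, self.conv.bias, padding=1)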

Author Response

Thank you for your constructive comments and valuable suggestions. We have considered them very carefully and made every effort to respond to every question raised in the review report.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper is very well written and presented. The proposed GAN-based method can produce robust results in SAR-to-optical image conversion. EPCGAN is an interesting proposition and could also be used in other fields. Here are some of my comments:

  1. Have you tried applying EPCGAN to other datasets to establish this approach as a general method for GANs in deep learning applications?
  2. Will your code be available on Git or other sites for the research community?
  3. Could you please report the training times and FLOPs for comparison with the other methods?
  4. Comparing real images with the generated images on an unseen dataset and reporting the mean squared error could be another way to justify your claims (a minimal sketch of such a check is given after this list). Could you please add that to your manuscript?
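
To make comment 4 concrete, here is a minimal sketch of a per-image mean-squared-error check (with the derived PSNR) over held-out real/generated pairs. The function and variable names are illustrative and assume images normalised to [0, 1]; they are not taken from the manuscript.

    import numpy as np

    def mse(reference, generated):
        """Mean squared error between a real optical image and its SAR-to-optical
        translation; both are float arrays scaled to [0, 1]."""
        reference = np.asarray(reference, dtype=np.float64)
        generated = np.asarray(generated, dtype=np.float64)
        return float(np.mean((reference - generated) ** 2))

    def psnr(reference, generated, max_value=1.0):
        """Peak signal-to-noise ratio derived from the MSE."""
        err = mse(reference, generated)
        return float("inf") if err == 0 else 10.0 * np.log10(max_value ** 2 / err)

    def mean_mse(pairs):
        """Average MSE over an unseen test set of (real_optical, generated_optical) pairs."""
        return float(np.mean([mse(ref, gen) for ref, gen in pairs]))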

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have addressed most of my concerns. From a technical perspective, the paper is ready to be published. I still recommend a thorough revision of the English, since the article still contains several typos and grammatical mistakes.

Author Response

Thank you. The English has been revised by MDPI English Editing, as shown in the attachment.

Author Response File: Author Response.pdf
