Peer-Review Record

STF-EGFA: A Remote Sensing Spatiotemporal Fusion Network with Edge-Guided Feature Attention

by Feifei Cheng 1, Zhitao Fu 1,*, Bohui Tang 1,2, Liang Huang 1, Kun Huang 1 and Xinran Ji 1
Reviewer 1: Anonymous
Reviewer 3: Anonymous
Remote Sens. 2022, 14(13), 3057; https://doi.org/10.3390/rs14133057
Submission received: 10 May 2022 / Revised: 17 June 2022 / Accepted: 23 June 2022 / Published: 25 June 2022

Round 1

Reviewer 1 Report

This paper presents a novel method for the spatio-temporal fusion of remote sensing images with edge-guided feature attention. To my knowledge, the method is original and innovative. The experimental results show that the proposed method, comprising five neural modules, achieves improvements over the other state-of-the-art techniques used for comparison. I recommend addressing the following few points:

(a) The introduction is not very clear; I suggest some editing of this section to make it more linear.

(b) The proposed method could be explained more thoroughly.

(c) The methods used for comparison are not very recent (the newest was published in 2019). I would kindly suggest adding a few comparisons with newer state-of-the-art methods.

(d) The English of the manuscript could be improved; I suggest using one of the proof-reading services available online.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper provides an interesting method for image fusion. I suggest a major revision since (1) the entire text could be better polished (I suggest the authors use an English editing service), (2) the authors could better explain some parts, and (3) the authors should add a robust discussion section.

Some specific comments:

1. Since the paper has many acronyms and abbreviations, I would suggest inserting a table listing all of them.

2. The abstract is critical for grabbing the readers’ attention, and it must provide a brief explanation of the problem and the solution. As it stands, the abstract presents generic phrases and should be significantly enhanced.

3. Why is the method called EGFA-STF instead of STF-EGFA?

4. The authors must specify what each acronym and abbreviation means in the abstract.

5. Line 10: “Remote sensing spatiotemporal fusion play a crucial role in Earth science applications.” It doesn’t need to be a long explanation, but the authors need to clarify why it plays a crucial role.

6. Line 11: Explain what the problem with fuzzy edges is; it is unclear to the reader.

7. Some phrases in the introduction are too long and could be condensed. One example is: “Due to the continuous launch of different types of sensor satellites, the remote sensing data that can be collected has increased dramatically, and new satellite sensors have developed in the direction of high spectral resolution, high spatial resolution, and high temporal resolution.” I would suggest something like: “The continuous launches of satellites enable a drastic increase in remote sensing data, and new sensors are developed in the direction of high spectral, spatial, and temporal resolution.”

8. The authors have written an introduction that includes related studies. I would suggest breaking it into an Introduction and a Related Work section.

9. Figure 1 would be better placed, and explained, in the Materials and Methods section.

10. The methods regarding the models are well described. The authors must insert in the Methods section a subsection named Evaluation, or something similar, and explain the metrics that will be used.

11. In line 284, instead of writing “… proposed by professor Li Jun’s team [12]”, change it to “… proposed by Li et al. [12]”.

12. The Figure 7 legend should explain what the yellow line drawn in the image is. The same must be done for all the other images.

13. Line 312: Insert the reference for the Tianjin dataset.

14. The paper does not present a discussion section. Usually, papers in this journal present a Discussion section in which the authors explain in more detail the insights from the results and how the work contributes to the scientific community of this field.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper presents a new deep learning approach to image fusion, and in that respect it is interesting; however, I think some improvements can be made. First, it is not clear how to evaluate the result of an image fusion when no ground truth exists. I may have missed this in the manuscript, but how does one know what the correct answer looks like when testing on new data? When considering supervised learning for image fusion, I think the connection to a test dataset needs to be better defined. Second, the introduction is quite long. I would suggest a short introduction that describes the context of the paper, followed by a background section that collects all the necessary references.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

The manuscript has been revised according to the comments; therefore, I recommend accepting it.

Reviewer 2 Report

I would like to congratulate the authors. The manuscript has drastically improved, and all my concerns were addressed.
