Peer-Review Record

Spatial-Temporal Semantic Perception Network for Remote Sensing Image Semantic Change Detection

Remote Sens. 2023, 15(16), 4095; https://doi.org/10.3390/rs15164095
by You He, Hanchao Zhang *, Xiaogang Ning, Ruiqian Zhang, Dong Chang and Minghui Hao
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 25 May 2023 / Revised: 9 August 2023 / Accepted: 15 August 2023 / Published: 20 August 2023

Round 1

Reviewer 1 Report

Overall Decision: Minor revision

This paper introduces a novel spatial-temporal semantic perception network for semantic change detection (SCD). It provides a detailed description of the perception path (DAP) and effectively exploits spatial information while incorporating deep contextual features to generate temporal semantic perception features. Furthermore, the paper introduces the invariant consistency loss function (ICLoss) to improve the accuracy and robustness of SCD.

 

1. The article references a total of 47 sources: 37 publications from the past 5 years (79%), 8 publications from the past 5-10 years (17%), and 2 publications over 10 years old (4%). Thus 96% of the citations are from the past decade, providing sufficient and relevant support for the study.

2. In conclusion, this study is interesting and yields valuable results. However, there are several weaknesses in the current paper that should be addressed to enhance the value of the publication.

3. The article covers a wide range of topics and provides convincing data and comparisons. However, the overall framework could be streamlined for improved clarity.

4. The block diagram figure would benefit from enhanced visual appeal by incorporating color contrasts.

5. The authors may add more state-of-the-art application articles to improve the completeness of the manuscript (Rachis detection and three-dimensional localization of cut off point for vision-based banana robot, Computers and Electronics in Agriculture; Detection and counting of banana bunches by integrating deep learning and classic image-processing algorithms, Computers and Electronics in Agriculture; Optimization strategies of fruit detection to overcome the challenge of unstructured background in field orchard environment: a review, Precision Agriculture).

Overall, the paper effectively addresses the research problem and demonstrates strong performance compared to other networks on the benchmark datasets.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes a spatial-temporal semantic perception network for semantic change detection from high-resolution remote sensing images.

- I think some results for the loss function comparison should be added to show the effect of the proposed loss function on accuracy.

- In the conclusion, the phrase "the invariant consistency loss function (ICLoss)" appears with strange formatting. It should be checked.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This work presents a spatial-temporal semantic perception network for remote sensing image semantic change detection. The network enhances the representation of semantic features in the spatial and temporal dimensions by leveraging a spatial attention fusion module and a temporal refinement detection module. Overall, this manuscript is interesting and well organized. However, several issues should be addressed before possible publication.

Q1. Please highlight the motivation of this work. What are the main differences between the proposed method and others?

Q2. Please optimize Figure 2. The name of each image and module should be given.

Q3. All vectors and matrices should be highlighted in bold.

Q4. Could you add several recent methods for comparison?

Q5. The computational complexity of all methods should be analyzed.

Q6. Several related publications should be cited and reviewed, such as DOI: 10.1007/s11431-021-1989-9; 10.1016/j.sigpro.2023.109040; 10.1016/j.jvcir.2023.103823.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

No more comments.
