Article
Peer-Review Record

ARE-Net: An Improved Interactive Model for Accurate Building Extraction in High-Resolution Remote Sensing Imagery

Remote Sens. 2023, 15(18), 4457; https://doi.org/10.3390/rs15184457
by Qian Weng 1,2, Qin Wang 1,2, Yifeng Lin 1,2 and Jiawen Lin 1,2,*
Reviewer 1:
Reviewer 2:
Reviewer 3:
Submission received: 12 August 2023 / Revised: 3 September 2023 / Accepted: 4 September 2023 / Published: 10 September 2023

Round 1

Reviewer 1 Report

The authors introduced an ARE module for an interactive segmentation approach. Some suggestions to improve the quality of the paper:

- The ARE module was inserted into RITM, hence the name ARE-Net. Fig. 1 shows the pipeline but nothing about the architecture. Please provide the architecture of the ARE-Net network.

Fig. 1.

- Where is the "Add" operator in the pipeline?

- What are "Segmentation model (predict)" and "Segmentation model (train)"? Please add an explanation here.

- Provide the summary of the workflow in the caption.

- The input and output of the pipeline are not clear.

- Explain "n" and "Niter" in the caption.

- Put "N" and "Y" directly after the diamond.

- Typo "Postive".

- The figure should be centered relative to the paragraphs, no?

 

Algorithm 1.

- What is distanceTransformers? Please add an explanation.

- Add a paragraph explaining Algorithm 1 in detail (line by line) and link the explanation to the line numbers.
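For reference, distanceTransformers presumably denotes a distance transform of the click map; the sketch below shows the Euclidean distance transform commonly used to encode click positions in interactive segmentation. It assumes SciPy and is purely illustrative, not the authors' implementation from Algorithm 1:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary click map: 1 where a (positive) click was placed, 0 elsewhere.
clicks = np.zeros((5, 5), dtype=np.uint8)
clicks[2, 2] = 1

# distance_transform_edt gives, for every zero pixel, the Euclidean distance
# to the nearest non-zero pixel, so the click mask is inverted first.
dist = distance_transform_edt(1 - clicks)

print(dist[2, 2])  # 0.0 at the click itself
print(dist[2, 4])  # 2.0, two pixels to the right of the click
```

Such a map (often clipped or exponentiated) is then concatenated with the image as an extra input channel.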

 

- Would it be possible to compare this interactive approach with fully supervised classification in terms of accuracy? As a reader, I would like to see that comparison.

 

- There are many instances of a missing space after the end of a sentence. Fix them.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The improved interactive building extraction method proposed by the author has shown a significant improvement in efficiency. However, there are several suggestions that the author should consider:

1. The experimental results have only been visually compared on individual images. It is recommended to provide visual comparisons on both datasets separately. This will demonstrate the generalizability of the proposed method across various building shapes, scales, distributions, and other scenarios.

2. It is advised to include experimental results that showcase the overall IoU accuracy on the two test datasets under different numbers of interactive clicks. This addition will highlight the advantages of the proposed interactive building extraction method, offering insight into how its performance varies with the number of interactions.

3. Certain references in the bibliography require updating. For instance, the source cited in reference 44 still originates from arXiv.
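The per-click curve requested in point 2 would be built from the standard mask IoU; a minimal sketch is below (the toy masks and the `iou` helper are illustrative, not the authors' code):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

# Toy masks standing in for a prediction after k clicks and the ground truth.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True    # 2x2 building footprint
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True  # over-segmented by one extra column

print(iou(pred, gt))  # 4 / 6 ≈ 0.667
```

Averaging this score over the test set at each click count k yields the IoU-vs-clicks curve the reviewer asks for.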

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

1. The English of the paper should be improved. The paper should be corrected by a native speaker.

2. Please provide a webpage (in the paper) that includes a link to the dataset used and your results, for people who are interested in comparisons.

3. If possible, apply your method to more public datasets to obtain more results, and add more comparisons with state-of-the-art methods. For example, you could also use the SZTAKI-INRIA building detection dataset (see Benedek et al. (2012)): http://web.eee.sztaki.hu/remotesensing/building_benchmark.html

4. In addition, to strengthen your related-work section, you could cite the following related works.

 [1] Benedek, C., Descombes, X., Zerubia, J., 2012. Building development monitoring in multitemporal remotely sensed image pairs with stochastic birth-death dynamics. IEEE Trans. Pattern Anal. Mach. Intell. 34, 33–50.

[2] Grinias, I., Panagiotakis, C., Tziritas, G., 2016. MRF-based segmentation and unsupervised classification for building and road detection in peri-urban areas of high-resolution. ISPRS J. Photogramm. Remote Sens. 122, 145–166.

5. Please report some cases where the proposed method fails.

6. Please report an analysis of computational cost.

7. In Figures 6-7, please explain why the cyan line shows a decrease after click 8.

The quality of the English language is good, and it may be improved further by sending the paper to a native speaker.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
