Article
Peer-Review Record

A Hierarchical Fusion SAR Image Change-Detection Method Based on HF-CRF Model

Remote Sens. 2023, 15(11), 2741; https://doi.org/10.3390/rs15112741
by Jianlong Zhang 1, Yifan Liu 1, Bin Wang 1,* and Chen Chen 2
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4:
Submission received: 26 April 2023 / Revised: 23 May 2023 / Accepted: 23 May 2023 / Published: 25 May 2023

Round 1

Reviewer 1 Report

In this paper, a hierarchical fusion SAR image change-detection model based on the HF-CRF is proposed. It introduces multimodal difference images and constructs the fusion energy potential function using dynamic convolutional neural networks and sliding-window information entropy. Experimental results demonstrate that the proposed method outperforms existing methods in the overall number of detection errors and reduces the occurrence of false positives. In the experiment and analysis section, the authors use FP, FN, OE, PCC, and Kappa as evaluation metrics for comparison with other state-of-the-art methods. To evaluate the performance of the proposed method more comprehensively, the reviewer suggests adding further metrics, such as the number of network parameters (Params) and the number of floating-point operations (FLOPs), which are valid indicators for comparing the computational complexity of network models. Moreover, in the ablation experiments the performance indicators of the proposed method improve markedly; however, compared with existing methods, the improvement is small, as shown in Table 7.

Some grammar errors need further improvement.
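For context, the metrics the reviewer lists are standard change-detection measures derived from confusion-matrix pixel counts. A minimal sketch of how they are typically computed (all counts here are hypothetical; OE is the overall error and Kappa is Cohen's kappa):

```python
def change_detection_metrics(tp, fp, tn, fn):
    """Standard SAR change-detection metrics from confusion-matrix counts."""
    n = tp + fp + tn + fn
    oe = fp + fn                      # overall error: all misclassified pixels
    pcc = (tp + tn) / n               # percentage correct classification
    # Expected agreement by chance, then Cohen's kappa.
    pre = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kappa = (pcc - pre) / (1 - pre)
    return {"FP": fp, "FN": fn, "OE": oe, "PCC": pcc, "Kappa": kappa}

# Hypothetical counts for a 100x100-pixel change map:
m = change_detection_metrics(tp=900, fp=50, tn=9000, fn=50)
```

Params and FLOPs, by contrast, are properties of the network itself rather than of its output, which is why they complement the map-quality metrics above.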

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

This manuscript proposes a network based on the HF-CRF model for change detection. Overall, the design of the network is sound and novel, and the experimental results are promising.

The proposed method adopts both the HF-CRF and a CNN; please provide a comparison of computational efficiency in the experimental section.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

This paper presents a new method for change detection in SAR images, called the HF-CRF model. The HF-CRF model shows good scientific soundness but does not demonstrate outstanding novelty compared with other change-detection methods.

The paper would be stronger if the HF-CRF model were shown to be superior to other change-detection methods in specific scenarios, for example small-object change detection or wide-area change detection with precise boundaries.

Also, PCC is described as (TP+FN)/(TP+FP+TN+FN) in this paper, but PCC should be calculated as (TP * TN - FP * FN) / sqrt((TP+FP) * (TP+FN) * (TN+FP) * (TN+FN)). I therefore could not understand the meaning of your PCC, and I think it should be revised.
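For reference, the two formulas contrasted here can be sketched side by side: the first is the paper's PCC in its percentage-correct-classification sense (as clarified in round 2), the second is the Matthews correlation coefficient that the reviewer cites. The confusion-matrix counts below are hypothetical.

```python
import math

def pcc_accuracy(tp, fp, tn, fn):
    """Percentage correct classification: correctly classified pixels over all pixels."""
    return (tp + tn) / (tp + fp + tn + fn)

def pcc_matthews(tp, fp, tn, fn):
    """Matthews correlation coefficient (Pearson phi for binary labels)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical confusion-matrix counts:
tp, fp, tn, fn = 900, 50, 9000, 50
acc = pcc_accuracy(tp, fp, tn, fn)   # 0.99
mcc = pcc_matthews(tp, fp, tn, fn)
```

The two measures agree in name only; the Matthews coefficient is a correlation in [-1, 1], not a percentage.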

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

 

The paper addresses the problem of change detection in Synthetic Aperture Radar (SAR) images. The authors propose a new approach to this problem: a change-detection model based on the Hierarchical Fusion Conditional Random Field (HF-CRF). This model introduces multimodal difference images and constructs the fusion energy potential function. The authors show that the proposed method can accurately detect the change regions; for this purpose they use the Prewitt, Sobel, and Laplacian operators.

The paper is interesting, and the obtained results are presented in a legible and intelligible way. Despite my good opinion of this paper, I have several remarks:

- only two or three different images are used to validate the new method, which I consider a weak side of the conducted analyses. The authors should consider more images with varied graphic structure, and the differences between images should be less obvious;

- what method of pseudo-coloring was used for the analysed images (Figure 7)?

- all abbreviations used should be explained;

- the Prewitt, Sobel, and Laplacian operators are used to obtain sharper images (Figure 8), but the reader does not know which parameters of these operators were used. The authors applied the operators to one image (Figure 7a) but not to the other images (Figure 7c); why?

- the conducted analyses yield images in a black-and-white colour scale. Can this be connected with a threshold value on the pixel amplitude?

- Conclusions: please expand the sentence "In future work ...". Additionally, the Conclusions should be developed further and should refer to the tests carried out and the results obtained.
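Regarding the remark on operator parameters: a minimal sketch of the three edge operators with their textbook 3x3 kernels (the paper's actual parameters are not stated, and the image here is synthetic) could look like:

```python
import numpy as np

# Textbook 3x3 kernels; the paper may use different variants or scalings.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def conv2d(img, kernel):
    """Naive 'valid' 2-D convolution (kernel flipped), sufficient for a sketch."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

img = np.random.default_rng(0).random((32, 32))  # hypothetical grayscale image
gx = conv2d(img, SOBEL_X)
gy = conv2d(img, SOBEL_X.T)
sobel_mag = np.hypot(gx, gy)   # Sobel gradient magnitude
lap = conv2d(img, LAPLACIAN)   # Laplacian (second-derivative) response
```

Reporting which kernel variant, normalization, and boundary handling were used would address the reviewer's question directly.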

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 3 Report

I understand the meaning of PCC (percentage correct classification).

In SAFNET, PCC is presented as ((TP+FP+TN+FN) - (FP+FN)) / (TP+FP+TN+FN) * 100, which simplifies to (TP+TN) / (TP+FP+TN+FN) * 100, not (TP+FN)/(TP+FP+TN+FN).

Your PCC is not the same as SAFNET's, and if FN accounts for a large portion of the pixels, your formula reports a high score even though the result is not accurate. Your PCC should therefore be recalculated.
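The discrepancy the reviewer describes can be illustrated with a minimal sketch, using hypothetical counts chosen so that false negatives dominate:

```python
def pcc_as_written(tp, fp, tn, fn):
    """Formula as written in the paper: (TP + FN) / total."""
    return (tp + fn) / (tp + fp + tn + fn)

def pcc_safnet(tp, fp, tn, fn):
    """SAFNET-style PCC: (TP + TN) / total, i.e. ordinary accuracy."""
    return (tp + tn) / (tp + fp + tn + fn)

# Hypothetical counts where false negatives dominate:
tp, fp, tn, fn = 100, 0, 0, 900
print(pcc_as_written(tp, fp, tn, fn))  # 1.0 -- looks perfect
print(pcc_safnet(tp, fp, tn, fn))      # 0.1 -- actual accuracy
```

With 900 of 1000 changed pixels missed, the formula as written still reports a perfect score, which is the reviewer's point.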

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

The paper was corrected according to my remarks, but the explanations could be more convincing. I hope that future work will achieve more interesting results.

Author Response

Please see the attachment.

Author Response File: Author Response.docx
