Article
Peer-Review Record

SAR Image Fusion Classification Based on the Decision-Level Combination of Multi-Band Information

Remote Sens. 2022, 14(9), 2243; https://doi.org/10.3390/rs14092243
by Jinbiao Zhu 1,2, Jie Pan 2,*, Wen Jiang 2, Xijuan Yue 2 and Pengyu Yin 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 22 April 2022 / Accepted: 5 May 2022 / Published: 7 May 2022
(This article belongs to the Special Issue Recent Progress and Applications on Multi-Dimensional SAR)

Round 1

Reviewer 1 Report

No more comments.

Reviewer 2 Report

I think there are no other problems with the revised version of this article and suggest it be published.

Reviewer 3 Report

Dear authors,

You have done great work. Congratulations.


This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

See attached file.

Comments for author File: Comments.pdf

Reviewer 2 Report

  1. The authors propose a SAR image fusion classification method based on decision-level combination of multi-band information. Combining the conflict coefficient with the idea of TF-IDF, the weights of the different sensors are obtained, and the final land-cover types are obtained by decision-level fusion. The feasibility of the method is verified by classification experiments on two groups of multi-band SAR images. On the whole, this paper can be accepted.

    There are some minor problems that need to be revised.

    1. Why is method A chosen as the comparison method? What are the characteristics of this method when classifying multi-band SAR images?
    2. Are the labeled samples and test samples consistent when classifying images of different bands?
    3. Some pictures in the article are not clear.


Reviewer 3 Report

Dear authors,

After carefully reading your work, I have seen a number of flaws that could be drastically improved.

In general, the work needs major grammar and style editing. The numbering of the references starts very well, but from line 39 to the end they are badly written. I do not know what the authors have done with the numbering of the different sections, but it denotes carelessness. Finally, I am not convinced at all that the proposed method is better than older ones.

More in detail:

Abstract: Some more specific comments on the results must be included.

Introduction:

Lines 30-35: The sentence is too long and difficult to understand. Please rewrite it.
Line 39: [13] instead of 13.
Line 42: Idem with 14.
Line 43: A novice reader does not know what a superpixel is. It would be effortless to write the sentence in a more readable way.
Line 44: It is hard to understand. The processed data or data processing?
Line 46: See comments for lines 29 and 42.
Line 49: Idem line 46.
Line 51: Idem line 49, and so on. The authors must correct them!


Line 78: Section 2.2 after Section 1? Please, this is careless.
Line 78: Why is "Convolution" capitalized and not "neural networks"?

Lines 84/85: Convolutional layers or convolution layers?

Lines 91-94: The sentence is too long.

Lines 140...: The authors take belief entropy, a function of the belief degree, as a foundation of the work.
I know a number of ways to compute entropy, so it needs to be clarified which one is used.
These concepts must be clarified.
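The reviewer asks for the belief-entropy definition to be pinned down. One common choice in the Dempster–Shafer evidence theory literature is Deng entropy, sketched below; the manuscript may well use a different variant, so this is only an illustrative assumption.

```python
import math

def deng_entropy(bpa):
    """Deng (belief) entropy of a basic probability assignment (BPA).

    bpa maps each focal element (a frozenset of hypotheses) to its mass.
    E_d(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) )
    For singleton-only focal elements this reduces to Shannon entropy.
    """
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

# Singleton-only BPA: Deng entropy equals Shannon entropy (1 bit here).
bpa = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
```

The extra `2**|A| - 1` term rewards mass placed on larger (more uncertain) focal elements, which is why Deng entropy is often paired with conflict measures when weighting evidence sources.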

Images: In general, the images must have coordinates. It is very convenient to state the covered area. How many gray levels (8-bit, 12-bit) do the images have? Are they the same for all?

The reported 1.7% improvement in the results is not significant if the computational effort is greater than that of the previous technique.
So, what is the real contribution of the work? This must be fully explained and clarified.

I hope these comments are useful for improving your work.

Good luck
