Peer-Review Record

A Novel Discriminative Enhancement Method for Few-Shot Remote Sensing Image Scene Classification

Remote Sens. 2023, 15(18), 4588; https://doi.org/10.3390/rs15184588
by Yanqiao Chen 1, Yangyang Li 2,*, Heting Mao 2, Guangyuan Liu 2, Xinghua Chai 1 and Licheng Jiao 2
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 13 August 2023 / Revised: 7 September 2023 / Accepted: 15 September 2023 / Published: 18 September 2023
(This article belongs to the Section AI Remote Sensing)

Round 1

Reviewer 1 Report

1. The Introduction and Related Work sections are written in a very appropriate way. They give a clear idea of the objectives and contributions of the work.

2. In the Results section, the reported accuracies do not compare favorably with existing methods on the same datasets. For example, the RSISC accuracy on the NWPU45 dataset reported in the base paper [Ref. 41], using AlexNet and GoogLeNet, appears better than that of the proposed method. Kindly justify.

3. In the Introduction, the authors mention that the proposed method can handle classification of classes not present in the dataset, but nothing about this is presented in the Results and Discussion. Kindly clarify.

4. The authors have tested the proposed method on three datasets; kindly explain the reason for choosing these datasets.

 

Author Response

Please see the attachment. We provide our responses to reviewer 1 in "Response letter1.pdf".

Author Response File: Author Response.pdf

Reviewer 2 Report

This study proposes an improved version of DN4AM for few-shot RSISC tasks, named DEADN4, which is in line with the scope of this journal. The following suggestions are provided for revision.

1. Important literature on few-shot learning, such as the following articles, should be discussed in detail in Section 2:

(a) X. Sun, B. Wang, Z. Wang, H. Li, H. Li and K. Fu, "Research Progress on Few-Shot Learning for Remote Sensing Image Interpretation," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 2387-2402, 2021.

(b) Wang, Yaqing, et al. "Generalizing from a Few Examples: A Survey on Few-Shot Learning." ACM Computing Surveys 53.3 (2020): 1-34.

(c) Li, Xiaoxu, et al. "A Concise Review of Recent Few-Shot Meta-Learning Methods." Neurocomputing 456 (2021): 463-468.

......

2. There is too much repetition between Section 2 and Section 3, and many symbols in the 16 formulas are confusing. It is recommended that Sections 2 and 3 be condensed and rewritten, and that a table explaining the symbols be added.

3. In addition to M, what other important parameters does the DEADN4 model have? It is recommended to add a sensitivity analysis of these important parameters.

4. There are still typos in the manuscript; please proofread it carefully.

Author Response

Please see the attachment. We provide our responses to reviewer 2 in "Response letter2.pdf".

Author Response File: Author Response.pdf

Reviewer 3 Report

To address the problems of intra-class variability and inter-class similarity in remote sensing scenes, the manuscript designs a center loss, a cosine margin loss, and DLGD on top of DN4 and DN4AM to enhance discriminative features. The authors conducted experiments on three datasets, NWPU-RESISC45, UC Merced, and WHU-RS19, to verify the effectiveness of the proposed method. However, the manuscript still has some problems and needs to be revised.

1. In the manuscript, the authors describe various effects of the different improvements on the feature space, for example "the angles between pairwise features become larger" and "making the features more elongated". Please perform dimensionality reduction (t-SNE or UMAP) and a visualization analysis of the feature space produced during the testing phase (using samples from the test classes). Include four settings: the original loss function, the center loss alone, the cosine margin loss alone, and all losses combined, as sketched below.
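For reference, a minimal sketch of such a visualization in Python (a non-authoritative example; it assumes the embeddings of test-class samples and their labels have already been extracted into NumPy arrays, and all variable names here are illustrative):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels, title):
    # Project the high-dimensional embeddings to 2-D for visual comparison.
    emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
    for c in np.unique(labels):
        mask = labels == c
        plt.scatter(emb[mask, 0], emb[mask, 1], s=5, label=str(c))
    plt.title(title)
    plt.legend(markerscale=3, fontsize=6)
    plt.show()

# One plot per loss configuration, e.g. (feature arrays are hypothetical):
# plot_tsne(features_original, labels, "original loss")
# plot_tsne(features_center, labels, "+ center loss")
# plot_tsne(features_cosine, labels, "+ cosine margin loss")
# plot_tsne(features_all, labels, "all losses")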

2. According to Eq. 14, the center loss uses the same weight for every category, which is reflected in the unweighted accumulation of the per-category center losses. Yet in the Introduction the authors point out that different categories exhibit different intra-class variability; should different weights be used to constrain the compactness of different categories, for example as sketched below?
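For illustration only, a weighted variant of the unweighted center loss in Eq. 14 could take the form

L_c = \sum_{k=1}^{K} w_k \sum_{i:\, y_i = k} \left\| f(x_i) - c_k \right\|_2^2,

where c_k denotes the center of class k and the per-class weight w_k (not present in the manuscript) could, for instance, be chosen according to the observed intra-class variance of class k.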

3. Some key literature is not reviewed in the Introduction, e.g., doi: 10.1109/TGRS.2022.3219726 and doi: 10.1109/TGRS.2021.3099033. These works enhance the separability of category prototypes in different ways.

4. Is the class center in Section 2.1 the same as the class prototype in line 181? Please explain.

5. Why are the weights of the losses Lc and Ls set to 1:1? Please provide a theoretical analysis or add an experimental analysis, for instance over the trade-off coefficient sketched below.
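As an illustrative sketch (the coefficient name is assumed, not taken from the manuscript), the combined objective could be written as

L = L_s + \lambda L_c,

where the current setting corresponds to \lambda = 1; reporting accuracy for a small grid of \lambda values (e.g., 0.1, 0.5, 1, 2) on a validation split would support the 1:1 choice experimentally.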

6. Please analyze the model's parameter count and computational complexity.

Minor editing of English language required

Author Response

Please see the attachment. We provide our responses to reviewer 3 in "Response letter3.pdf".

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

All comments have been addressed

Reviewer 2 Report

The authors have properly addressed my review comments, and I suggest that the manuscript be accepted for publication.
