Peer-Review Record

Graph-Based Deep Multitask Few-Shot Learning for Hyperspectral Image Classification

Remote Sens. 2022, 14(9), 2246; https://doi.org/10.3390/rs14092246
by Na Li 1, Deyun Zhou 1, Jiao Shi 1,*, Xiaolong Zheng 1, Tao Wu 1 and Zhen Yang 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 22 March 2022 / Revised: 2 May 2022 / Accepted: 4 May 2022 / Published: 7 May 2022

Round 1

Reviewer 1 Report

Overall, the manuscript was clear, informative, and will be valuable for the remote sensing community. The narrative, justification, and discussion were excellent, and the conclusions were well supported by the results. My only two suggestions are as follows:

1) The novelty of this approach lies in the semi-supervised, graph-based approach. The mathematical description of how this was implemented was good; however, some or most readers will not be able to dive deep into these differential equations to understand what is being produced and how this information is passed. Can you provide additional information describing what the graph-based constraint layer is (a geospatial data product that is used to ...)? The terminology of "graph" implies something different.
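To illustrate what I mean: my working assumption is that the graph-based constraint amounts to something like a k-nearest-neighbour affinity graph built over pixel spectra, whose Laplacian penalises embeddings that differ across strongly connected pixels. The sketch below reflects only that assumption (hypothetical function names, not the authors' implementation); a plain-language description along these lines in the manuscript would help readers who cannot follow the equations.

```python
import numpy as np

def knn_affinity_graph(X, k=10, sigma=1.0):
    """Build a symmetric k-NN affinity graph over pixel feature vectors.
    X: (n_samples, n_bands) array of hyperspectral pixel spectra."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # pairwise squared distances
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                # k nearest neighbours, excluding self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                            # symmetrise

def graph_constraint(Z, W):
    """Laplacian penalty sum_ij W_ij * ||z_i - z_j||^2: connected pixels
    should receive similar embeddings Z of shape (n_samples, n_features)."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(Z.T @ L @ Z)
```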

2) As I am sure this paper will attract interest from the community, what will stifle the momentum generated by this new approach is the lack of easy access to the code that was used to generate the graph and implement the GDMFSL. It would improve transparency and the overall impact of this article if you archived your code on a GitHub site and linked to it in the Data Availability Statement. This would also allow others to build on your work and make it easier to test this approach on other datasets.

Overall, nice job.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors proposed an algorithm that adopts two networks, one for unsupervised learning and one for supervised learning, and then shares information between the networks to enhance learning performance. The results support that their algorithm can provide better performance compared to other methods.
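For readers less familiar with this kind of multitask setup, the pattern described above can be sketched roughly as follows. This is a minimal illustration with hypothetical layer sizes and a generic Siamese-style contrastive loss, not the authors' actual GDMFSL network, but it shows how a shared feature extractor lets the supervised and unsupervised losses inform each other.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 200 spectral bands, 64-d embedding, 16 land-cover classes.
encoder = nn.Sequential(nn.Linear(200, 128), nn.ReLU(), nn.Linear(128, 64))  # shared feature extractor
classifier_head = nn.Linear(64, 16)                                          # supervised branch
ce_loss = nn.CrossEntropyLoss()
margin = 1.0

def contrastive_loss(z1, z2, same):
    """Siamese-style pair loss: pull embeddings together when same=1, push apart otherwise."""
    d = torch.norm(z1 - z2, dim=1)
    return torch.mean(same * d ** 2 + (1 - same) * torch.clamp(margin - d, min=0) ** 2)

opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier_head.parameters()), lr=1e-3)

def train_step(x_lab, y_lab, x1, x2, same, lam=0.5):
    """One multitask step: both losses back-propagate into the shared encoder,
    which is how information passes between the two branches."""
    opt.zero_grad()
    loss_sup = ce_loss(classifier_head(encoder(x_lab)), y_lab)
    loss_unsup = contrastive_loss(encoder(x1), encoder(x2), same)
    loss = loss_sup + lam * loss_unsup
    loss.backward()
    opt.step()
    return loss.item()
```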

In Section 3.2, it would be better to give some details about the parameter settings. How are their recommended values determined?

Minor comment: in line 316, I_ik should be z_ik.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper developed a few-shot learning method for HSI classification, and the corresponding experimental results demonstrated the excellent classification performance of the method with limited training samples. In the reviewer's opinion, this article provides a good piece of work. However, some weaknesses exist in this paper.

Specific comments are as follows:


  1. Section 2.1: Most of the content in this section is already mentioned in the Introduction and can be streamlined. It is recommended that the authors describe in detail the process shown in Figure 1 and the role of each part.
  2. Line 228: In X_S, what do the subscripts D, s, and m represent? I suggest adding the size of each x_i for clearer understanding.
  3. Section 2.3: This section is extensive; it is recommended that it be split into several subsections to make it easier to read.
  4. Lines 264~282: Do the feature extractors in the Siamese subnetwork and the classifier subnetwork have the same structure? Please describe this in detail.
  5. Line 362: There are two occurrences of "as a result".
  6. Section 3.2: Is it possible to find a common set of framework parameters for the GDMFSL framework, given that the parameters vary considerably across different datasets? How large is the performance difference between different parameter settings on the different datasets?
  7. What are the number of parameters and the running time of the proposed framework? Does the Siamese subnetwork need to process all the samples, and does this lead to a huge number of parameters? (A simple way to report these figures is sketched after this list.)
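Regarding point 7: if the framework is implemented in PyTorch, parameter count and inference time are easy to report with a generic helper like the one below (an illustrative sketch, not tied to the authors' code), so including these figures should not be burdensome.

```python
import time
import torch

def report_complexity(model, example_input, n_runs=100, device="cpu"):
    """Print the trainable parameter count and mean forward-pass time of a torch.nn.Module."""
    model = model.to(device).eval()
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    x = example_input.to(device)
    with torch.no_grad():
        model(x)                          # warm-up pass
        t0 = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        dt = (time.perf_counter() - t0) / n_runs
    print(f"trainable parameters: {n_params:,}; mean forward pass: {dt * 1000:.2f} ms")
```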


Author Response

Please see the attachment.

Author Response File: Author Response.pdf
