Article
Peer-Review Record

Unsupervised and Supervised Feature Extraction Methods for Hyperspectral Images Based on Mixtures of Factor Analyzers

Remote Sens. 2020, 12(7), 1179; https://doi.org/10.3390/rs12071179
by Bin Zhao 1, Magnus O. Ulfarsson 1, Johannes R. Sveinsson 1,* and Jocelyn Chanussot 1,2
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 24 February 2020 / Revised: 28 March 2020 / Accepted: 31 March 2020 / Published: 7 April 2020
(This article belongs to the Section Remote Sensing Image Processing)

Round 1

Reviewer 1 Report

Review of “Unsupervised and Supervised Feature Extraction Methods for Hyperspectral Images Based on Mixtures of Factor Analyzers”, by Bin Zhao et al.


General Comments:

The manuscript “Unsupervised and Supervised Feature Extraction Methods for Hyperspectral Images Based on Mixtures of Factor Analyzers” by Zhao et al. focuses on feature extraction (FE) methods to reduce the dimensionality of HSIs. In particular, the authors propose three methods to address this goal and evaluate their accuracy when a subsequent classification algorithm is applied to obtain thematic maps. The results are very promising, especially when compared with more common FE techniques.

The manuscript is very well organized and focused. The results show an improvement over the techniques already available. Therefore, my recommendation is for acceptance, with just some minor revisions.


Detailed Comments:

In Figure 6 you show the false-color and ground-truth maps of the Houston dataset. A large part of the ground-truth image is black, which I suppose is because you have no ground truth for those pixels. In addition, the image shows a dark area to the right that resembles a cloud shadow; you should clarify this aspect. Then, in Figure 11, you show the classification results for the Houston dataset, where the entire image has been classified. You should mask the unknown pixels, as you did for the other datasets. Otherwise, a question could be: what about the distribution of, e.g., the ‘Railway’ class (pink color) in the rightmost part of the image? It is not visible in the ground-truth image.
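For instance, something along the following lines would suffice, i.e., setting the predicted label to the background value wherever no ground truth exists (a minimal sketch with placeholder arrays; the array sizes and the convention that 0 means “no label” are assumptions, not taken from the manuscript):

```python
# Minimal sketch: mask a classification map using the ground truth.
# Assumes label 0 means "no ground truth" (an assumption, not the authors' convention).
import numpy as np

def mask_unlabeled(class_map: np.ndarray, ground_truth: np.ndarray) -> np.ndarray:
    """Set predicted labels to background (0) wherever no ground truth exists."""
    masked = class_map.copy()
    masked[ground_truth == 0] = 0
    return masked

# Hypothetical arrays standing in for the Houston maps.
pred = np.random.randint(1, 16, size=(349, 1905))  # predicted class labels
gt = np.zeros_like(pred)                           # mostly unlabeled scene
gt[100:200, 500:700] = 5                           # one labeled region
masked = mask_unlabeled(pred, gt)
print(np.unique(masked))  # background 0 plus the predictions inside the labeled region
```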

Is it possible that the small number of labeled pixels in the Houston dataset also produces the unrealistic 100% accuracy obtained for certain classes with some classification methods, as listed in Table 7?

Finally, I would ask the authors whether they plan to provide the community with open code allowing the application of this promising method for dimensionality reduction of HSIs.

Author Response

We would like to thank you for the prompt and insightful comments on our manuscript (remotesensing-741934) entitled “Unsupervised and Supervised Feature Extraction Methods for Hyperspectral Images Based on Mixtures of Factor Analyzers.” The reviewers have raised many valuable suggestions, which are quite helpful for revising and improving our manuscript. We have studied the comments very carefully and made the corresponding revisions. Below, you will find our detailed responses to the reviewers’ comments. All changes to the manuscript are in blue font. We sincerely hope that you are satisfied with our revisions and responses.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper proposes three novel approaches to extract features from hyperspectral images in order to reduce their dimensionality. The proposed methods reach better accuracy than conventional methods. However, a quick literature search turns up publications about the method, so I wonder how novel the proposed approaches really are. It would be interesting to include the differences between the proposed approach and existing mixture of factor analyzer approaches, and to cite the proper references.

I also missed a deeper discussion of the possible reasons why their methods are more effective than other methods, and it is not clear how they performed the accuracy assessment: using the training/test samples described in the tables, or 10-fold cross-validation? Also, it would be interesting if the authors provided a website (such as a GitHub repository) with code examples so that users can apply the proposed methods to their own data.

I recommend reorganizing the paper sections; many tables and figures are not placed in their respective sections.

I strongly recommend a revision of the English writing before publication.

Specific comments below

Page 1, Line 47. Please update this statement: ‘This is known as the Hughes phenomenon or the curse of dimensionality’. It is well known in the literature that nonparametric machine learning methods are rarely affected by the Hughes phenomenon. However, I agree with reducing dimensionality to lower the computational requirements and to avoid correlated features.

Page 2, line 15. Remove ‘and’ if you use ‘etc.’: ‘… (MSFE), sparse and smooth low-rank analysis (SSLRA) [45], etc.’

Page 2, line 20. Again, ‘and’ together with ‘etc.’: remove either ‘etc.’ or ‘and’ in ‘… HSI feature extraction [48], AND low-rank representation with the ability to preserve the local pairwise constraints information (LRLPC) [49], ETC.’

Page 2, line 54. Please rewrite: ‘…complicated probability distribution of HSIs…’

By the way, what do you mean by ‘complicated probability distribution’? A non-normal distribution? Please clarify this term.

Page 3, line 1. Figure 1 is placed here, but it is not referenced earlier in the text. Also, it would fit better in the methodology section, since it explains your proposed methods.

Page 5, line 57. Please rewrite (remove ‘so’ and add a comma): ‘…gave inferior or slightly inferior results compared to the SVM results, only the results of the SVM classifier are reported.’

Page 8, line 1. The descriptions of Algorithms 2 and 3 are in the wrong place: in this section you are describing your datasets, not algorithms. Include them in the preceding section.

Table 4 seems misplaced here; move it to where it is mentioned in the text below. Why did you only compute the processing time for the Indian Pines dataset? What about the others?

Only for the Indian Pines dataset is there no figure above showing the sample locations.

Page 11, line 29. What about the cost and gamma, the SVM parameters of the RBF kernel? Did you use cross-validation (CV) to find the best parameters for each dataset (i.e., for each DR method)? The parameters vary according to the dataset.
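For clarity, this is the kind of per-dataset search I mean (a minimal sketch assuming scikit-learn; the grid values and the data are placeholders, not the settings of the manuscript):

```python
# Minimal sketch: cross-validated search over the RBF-SVM cost (C) and gamma.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X_train = np.random.rand(200, 10)         # extracted features (placeholder)
y_train = np.random.randint(0, 3, 200)    # class labels (placeholder)

param_grid = {"C": [1, 10, 100, 1000], "gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_)                # best (C, gamma) for this dataset
```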

Page 11, line 30. Replace ‘classification’ with ‘evaluation metrics’.

Page 11, line 34. It is not clear: did you perform CV to find the parameters and then run each experiment ten times in a CV fashion? And what about the training and test samples described in Tables 1, 2, 3, and 5? Looking at the tables, one might think that the test samples were used for the accuracy assessment. If so, why would you run ten times using the same training and test sets? Or did you randomly split the training and test sets each time?
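To make the protocol I am asking about explicit (a sketch with placeholder data, assuming scikit-learn; none of the sizes or parameters are taken from the manuscript):

```python
# Minimal sketch: ten runs with a fresh random train/test split each time,
# as opposed to re-using one fixed split (which would give the same number ten times).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X = np.random.rand(500, 10)                 # features (placeholder)
y = np.random.randint(0, 5, 500)            # labels (placeholder)

accuracies = []
for seed in range(10):                      # new random split on every run
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.1, stratify=y, random_state=seed)
    clf = SVC(kernel="rbf", C=100, gamma=0.1).fit(X_tr, y_tr)
    accuracies.append(accuracy_score(y_te, clf.predict(X_te)))

print(np.mean(accuracies), np.std(accuracies))  # report mean and std accuracy
```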

Page 12, line 26. CV of unsupervised methods? In this case, how does the DR method determine which parameter is better if you have no labeled samples?
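For example, a label-free selection could proceed via an information criterion rather than classification accuracy (a sketch using a Gaussian mixture and BIC only as a stand-in for MFA model selection; the data and the candidate range are placeholders, not the authors’ procedure):

```python
# Minimal sketch: choose the number of mixture components without labels, via BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.rand(1000, 10)                 # unlabeled features (placeholder)

best_k, best_bic = None, np.inf
for k in range(1, 8):                        # candidate numbers of components
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic = gmm.bic(X)                         # lower BIC = better fit/complexity trade-off
    if bic < best_bic:
        best_k, best_bic = k, bic

print(best_k)
```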

Page 13, line 18. Please use the acronyms for deep MFA (DMFA) and supervised MFA (SMFA), since you have already defined them.

Please relocate Figure 13 and Table 9 to their proper places, not in the middle of the conclusion.

Your methods performed better than conventional DR methods; however, I missed a deeper discussion exploring possible explanations for this. Also, I would like to know whether the methods are freely available to users and whether some code examples are provided.

Author Response

We would like to thank you for the prompt and insightful comments on our manuscript (remotesensing-741934) entitled “Unsupervised and Supervised Feature Extraction Methods for Hyperspectral Images Based on Mixtures of Factor Analyzers.” The reviewers have raised many valuable suggestions, which are quite helpful for revising and improving our manuscript. We have studied the comments very carefully and made the corresponding revisions. Below, you will find our detailed responses to the reviewers’ comments. All changes to the manuscript are in blue font. We sincerely hope that you are satisfied with our revisions and responses.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Dear authors,

Thanks for improving the manuscript. All my comments and suggestions have been incorporated accordingly. I just recommend a more refined revision of the English writing.


Author Response

Thank you very much for your constructive comments. We went carefully through the manuscript to make sure that the methods and the results are clearly described. We corrected typos, eliminated redundancies, and rewrote and reorganized sentences and paragraphs in the revised manuscript.

Author Response File: Author Response.pdf
