Article
Peer-Review Record

GPU Parallel Implementation for Real-Time Feature Extraction of Hyperspectral Images

Appl. Sci. 2020, 10(19), 6680; https://doi.org/10.3390/app10196680
by Chunchao Li 1, Yuanxi Peng 1,*, Mingrui Su 1 and Tian Jiang 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 26 August 2020 / Revised: 14 September 2020 / Accepted: 22 September 2020 / Published: 24 September 2020
(This article belongs to the Section Optics and Lasers)

Round 1

Reviewer 1 Report

I would just recommend revising the English, as some errors are still present.

Author Response

We gratefully appreciate your valuable comment. After careful proofreading, we have corrected the remaining minor errors and improved the wording of several sentences in the manuscript. We have used the "Track Changes" function in Microsoft Word, so all the revisions can be easily seen in the manuscript. In addition, we list all the minor English changes in the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The updated manuscript is a significant improvement over the previous version. The authors have clearly explained the INAPC algorithm and have compared it against more widely used parallel computing frameworks (PyTorch and OpenCV).

I understand and appreciate that the paper primarily focuses on implementation aspects, but some visualization of classification results is necessary. In particular, it may help to show some results from the UAV data, which is very specific to this paper.

Author Response

Thank you very much for your comments and appreciation. We have added visualization results based on your suggestions.

In the feature extraction part (Sect. 4.1), the results and related discussions are given in lines 244-251 and Figure 7.

In the UAV experiment part (Sect. 4.3), the results and discussions are given in detail in lines 406-416 and Figure 14.

We have used the "Track Changes" function in Microsoft Word, and all the revisions can be easily seen in the manuscript, so the other minor English changes are not listed here.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Overall, the paper is good, and the topic is very promising and interesting.

 

Reviewer 2 Report

The paper describes a parallel implementation of the noise adaptive principal component algorithm.

 

The description of the proposed method should be rewritten. Section 2 starts with the description of the choosing rule for the second SVD, but the authors did not introduce the original algorithm, so the reader does not know what the second SVD means until reading Section 2.2. This section should be reorganized: the authors should first describe the original algorithm and then introduce their modifications. Moreover, the writing style of this part is poor, which affects readability. This reviewer suggests introducing clear pseudocode and commenting on it, instead of lists of formulas linked by a few words.

 

In the choosing rule of equation (3), there are two fixed values (94% and 1%). The authors did not explain how they chose these values. How do the results change if the two thresholds assume different values? The choice of fixed values must always be explained.
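To make the question concrete, one plausible form of such a rule can be sketched as follows (the exact definition of equation (3) is the authors'; the function name and the 0.94 and 0.01 defaults below merely stand in for the reported 94% and 1%):

```python
import numpy as np

def choose_k(singular_values, energy_thresh=0.94, marginal_thresh=0.01):
    """Hypothetical choosing rule: keep the smallest k whose components
    capture at least energy_thresh of the total variance, provided the
    next component would contribute less than marginal_thresh."""
    energy = np.asarray(singular_values, dtype=float) ** 2
    ratios = energy / energy.sum()          # variance share per component
    cumulative = np.cumsum(ratios)
    for k in range(1, len(ratios) + 1):
        if cumulative[k - 1] >= energy_thresh and (
            k == len(ratios) or ratios[k] < marginal_thresh
        ):
            return k
    return len(ratios)

print(choose_k([10.0, 3.0, 0.5, 0.1]))                       # -> 2
print(choose_k([10.0, 3.0, 0.5, 0.1], energy_thresh=0.999))  # -> 3
```

As the second call shows, re-running such a rule with different thresholds directly changes the selected k, which is why the sensitivity of the results to these two values should be reported.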

 

Figure 1 contains a well-known reduction kernel: it is, of course, not a novel strategy. The instructions in the code reported in Figure 1 are neither commented nor explained; therefore, this figure can be omitted.
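For reference, the pattern Figure 1 depicts appears to be the standard tree reduction; a sequential sketch of the same access pattern (each pass of the loop below models one parallel step) is:

```python
import numpy as np

def tree_reduce(values):
    """Sequential model of the classic parallel tree reduction: at each
    step, element i accumulates element i + stride, and the stride halves
    until one partial sum remains. On a GPU, every element of the
    vectorized update below would be handled by a separate thread."""
    data = np.array(values, dtype=np.float64)   # length assumed a power of two
    stride = len(data) // 2
    while stride > 0:
        data[:stride] += data[stride:2 * stride]
        stride //= 2
    return data[0]

print(tree_reduce([1, 2, 3, 4, 5, 6, 7, 8]))    # -> 36.0
```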

 

Section 3.3 is not sufficiently detailed. It is of crucial importance to clearly describe the task performed by each thread. Moreover, Figure 3 does not detail the adopted strategy.

 

Table 1 reports the parallel SVD algorithm. Again, the steps are not explained. Moreover, the use of cuSOLVER to implement the SVD computation is covered in the library documentation (https://docs.nvidia.com/cuda/cusolver/index.html#svd_examples). Did the authors adopt a different strategy? If so, they should clearly explain how it differs from the standard approach.

 

In Table 2, the authors report that they stored matrices in column-major order, while the C standard adopts the row-major format. The authors did not discuss this aspect. It is known that the cuBLAS library adopts the column-major format, but it also allows working with row-major data by telling a function which inputs to transpose. Why did the authors not adopt the row-major format?
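To illustrate the point: a row-major matrix occupies exactly the same bytes as its transpose stored column-major, which is why the cuBLAS transpose flags (e.g. CUBLAS_OP_T) admit row-major data without any copy. A small NumPy demonstration of this layout identity:

```python
import numpy as np

# A 2x3 matrix stored row-major (C order, the C-language default) ...
a_row = np.arange(6, dtype=np.float32).reshape(2, 3)

# ... occupies exactly the same bytes as its transpose stored
# column-major (Fortran order, the convention cuBLAS expects).
a_t_col = np.asfortranarray(a_row.T)

# order='A' serializes each array exactly as it is laid out in memory.
assert a_row.tobytes(order='A') == a_t_col.tobytes(order='A')

# Hence a column-major BLAS can consume row-major data unchanged:
# the transpose flag merely reinterprets the existing buffer, with no
# physical transposition or copy required.
print("layouts match")
```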

 

Figure 4 is not explained, and it is not possible to understand which data are transferred between the host and the device.

 

In the UAV experimental results section, the authors state that the image acquisition time is 5 s. They did not provide details about the number of acquired pixels and bands.

 

Concerning the real-time compliance of the algorithm, it is not demonstrated. For the considered dataset, the acquisition times of the three images are not reported. On the other hand, for the UAV experiments, Figure 10 shows that the sum of all the phases is greater than 5 seconds.

 

Minor points and typos:

 

- Please do not use a capital letter after a colon

- Please rephrase lines 17-19

- On line 54, please change "Yield" to "yield"

- On line 59, please change "Focus" to "focus"

- On line 61, please change "A flexible" to "a flexible"

- Please rephrase lines 70-73

- On line 82, please delete "and" between "results" and "in terms"

- Please rephrase lines 173-174

- Please rephrase line 176

- Please rephrase lines 178-179

- Please rephrase lines 198-201

- On lines 293 and 295, please change "implement" to "implementation"

- On line 320, please delete "seen" between "It can be" and "observed"

- On line 336, please delete "briefly"

- On line 343, the authors wrote "onboard on-board"

- On line 370, please change "30000mah" to "30000 mAh"

Reviewer 3 Report

The paper proposes a GPU-based approach to extracting hyperspectral image features efficiently. Since several feature extraction techniques rely on the singular value decomposition (SVD) of the hyperspectral image, the authors show that speeding up the SVD step provides significant improvements. Comparisons are shown across various computing platforms.

Strengths:
1. Hyperspectral imaging continues to be the mainstay of reliable material and object detection, and hence is of importance to a wide range of computer vision tasks.
2. The idea of a combined pipeline from noise estimation to feature extraction is very useful.

Weaknesses:
1. The paper is poorly written, with no clear structure. The first paragraph discusses computational overhead, while the second discusses the importance of feature extraction; reversing these paragraphs would be more appropriate. Further, the third paragraph jumps to UAV-based hyperspectral imaging, which I believe should be part of the first paragraph. Overall, the introduction alone lacks structure. Please see the editorial suggestions for some ideas on how it can be structured better.
2. Section 2 starts abruptly without any discussion of NAPC (not expanded anywhere) or INAPC. Considering the approach is innovative, the authors should spend at least a paragraph on it.
3. Automatically choosing "k" based on accuracy is a well-known approach in the literature, and hence it should not be presented as a new contribution. Further, the authors should cite similar papers on automatically choosing parameters.
4. There is little to no comparison against competing techniques. For example, a simple PyTorch (or TensorFlow) implementation of these algorithms would be a good baseline. Similarly, the authors should consider reviewing the literature and adding more comparisons.
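As an illustration of how little code such a baseline requires, here is a hypothetical NumPy sketch of SVD-based feature extraction (the function name and shapes are assumptions, not the authors' interface; the same few lines port almost directly to PyTorch via torch.linalg.svd):

```python
import numpy as np

def svd_features(cube, k):
    """Minimal CPU baseline for SVD-based feature extraction: project
    each pixel's spectrum onto the k leading right singular vectors.
    `cube` has shape (rows, cols, bands); the result keeps k features
    per pixel."""
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(np.float64)
    x -= x.mean(axis=0)                          # center the spectra
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    features = x @ vt[:k].T                      # top-k projections
    return features.reshape(rows, cols, k)

rng = np.random.default_rng(0)
out = svd_features(rng.random((4, 5, 6)), 3)
print(out.shape)                                 # -> (4, 5, 3)
```

Timing this kind of baseline on the same images would give readers a concrete reference point for the reported GPU speedups.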

Editorial suggestions:
1. Please consider rewriting parts of the paper in a way that is easier to understand.
2. Phrases like "become the main contradiction of the task" are hard to understand. I believe the authors mean "bottleneck".
3. Line 29: Observation object --> scene
4. What are the environmental factors mentioned in line 41?
5. Introduction can be formatted in the following manner:
a. State the applications of hyperspectral imaging (object detection, classification, etc.)
b. State the difficulties of processing the data, especially on mobile platforms such as UAVs. In the same paragraph, discuss how existing solutions are ill-suited for current and future applications.
c. Then briefly and succinctly discuss your approach, stating only the important and relevant details.
6. Many more comparisons are required to evaluate the efficacy of the proposed work.
