Communication
Peer-Review Record

An Efficient Hyperspectral Image Classification Method: Inter-Class Difference Correction and Spatial Spectral Redundancy Removal

Electronics 2022, 11(18), 2890; https://doi.org/10.3390/electronics11182890
by Lei Zhao 1, Qiang Pan 2, Shurong Yuan 2 and Lei Shi 2,*
Reviewer 2: Anonymous
Submission received: 21 August 2022 / Revised: 9 September 2022 / Accepted: 10 September 2022 / Published: 13 September 2022
(This article belongs to the Section Computer Science & Engineering)

Round 1

Reviewer 1 Report

In this work, the authors introduced a new method with low requirements for the scale of the dataset that involves correcting the inter-class differences of hyperspectral images and eliminating the redundant information of spatial–spectral features. I think the results obtained by the authors are interesting, have a certain theoretical value, and the argument is correct. I recommend the paper for publication in Electronics after a minor revision addressing the comments below.

Minor comments:

1.     Figure 7: What is the proposal? The author has mentioned ERS-FFT-SVM in the last part of Section 3.

 

2.     In the last part of Section 3 and the conclusion, the average is used, but it does not appear in any table. This may confuse the reader.

Author Response

Original Manuscript ID: electronics-1901362

Original Article Title: “An Efficient Hyperspectral Image Classification Method: Inter-class Difference Correction and Spatial Spectral Redundancy Removal”

To: Multidisciplinary Digital Publishing Institute

Re: Response to reviewers

Dear Reviewer#1:

Thank you very much for your enlightening suggestions on the revision of our manuscript, which are comprehensive and well targeted. We have made the corresponding modifications and explanations following your comments, and we feel that the logic and integrity of the revised manuscript are now clearer. A detailed explanation of the key problems you pointed out follows.

Reviewer#1, Concern # 1:
Figure 7: What is the proposal? The author has mentioned ERS-FFT-SVM in the last part of Section 3.

Author response: Thank you very much for your conscientious examination of our manuscript! We apologize for our carelessness and have made the corresponding corrections in the manuscript: in some places we failed to use the algorithm name ERS-FFT-SVM uniformly, which made reading difficult. Figure 7 can be clearly combined with the discussion in Section 3 to show that increasing the number of training samples improves the classification accuracy, and that the classification results of ERS-FFT-SVM are superior to those of the SVM and KNN methods.

Author action:    

Page 11: The corresponding OA value curves are shown in Fig. 8. As the number of samples increases from 10 to 100, the accuracies of the different classification methods improve, which indicates that increasing the number of samples does help to improve the classification accuracy. Figure 8 also shows that the classification accuracy of the ERS-FFT-SVM algorithm is significantly better than that of the KNN and SVM algorithms, which use only spectral information.

Reviewer#1, Concern # 2:

In the last part of Section 3 and the conclusion, the average is used, but it does not appear in any table. This may confuse the reader.

Author response: Thank you very much for the suggestion. We have modified the tables by adding a mean row as the last row of Table 2 and Table 3. The mean value is the average of the algorithm's OA and Kappa values over the different sample numbers for the current dataset, reflecting the overall level of the algorithm under different conditions on that dataset. The OA difference value column reflects the extent to which the ERS-FFT-SVM algorithm outperforms KNN and SVM.
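The mean row is simply the per-column arithmetic mean over the ten sample-size settings. A minimal Python sketch using the OA columns of Table 2 (values copied from the table; this snippet is illustrative, not the paper's code):

```python
# The "average" row of Table 2 (PaviaU, K=1000) is the arithmetic mean of
# each column over the ten training-sample settings 10, 20, ..., 100.
knn_oa = [52.7, 62.01, 65.03, 64.91, 66.98, 68.92, 70.07, 71.57, 71.29, 71.97]
svm_oa = [69.57, 77.37, 78.29, 79.93, 79.99, 81.82, 82.42, 82.87, 84.44, 84.90]
ers_oa = [75.56, 84.44, 85.19, 88.15, 89.63, 90.37, 91.11, 91.11, 92.59, 93.33]

def mean(xs):
    return sum(xs) / len(xs)

print(mean(knn_oa), mean(svm_oa), mean(ers_oa))  # ~ 66.55, 80.16, 88.15
print(mean(ers_oa) - mean(knn_oa))               # ~ 21.6, the mean OA gain vs. KNN
```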

Author action:    

Page 10:

Table 2. Overall classification accuracy and Kappa coefficient of classification algorithms under different sample numbers with the PaviaU dataset (K=1000).

| Dataset | Training samples | KNN OA(%) | KNN Kappa | SVM OA(%) | SVM Kappa | ERS-FFT-SVM OA(%) | ERS-FFT-SVM Kappa | OA diff. vs. KNN | OA diff. vs. SVM |
|---|---|---|---|---|---|---|---|---|---|
| PaviaU | 10 | 52.7 | 0.4370 | 69.57 | 0.6173 | 75.56 | 0.6869 | 22.86 | 5.99 |
| | 20 | 62.01 | 0.5363 | 77.37 | 0.7099 | 84.44 | 0.7916 | 22.43 | 7.07 |
| | 30 | 65.03 | 0.5652 | 78.29 | 0.723 | 85.19 | 0.8039 | 20.16 | 6.9 |
| | 40 | 64.91 | 0.5694 | 79.93 | 0.7432 | 88.15 | 0.8427 | 23.24 | 8.22 |
| | 50 | 66.98 | 0.5935 | 79.99 | 0.7435 | 89.63 | 0.8626 | 22.65 | 9.64 |
| | 60 | 68.92 | 0.6133 | 81.82 | 0.7667 | 90.37 | 0.8722 | 21.45 | 8.55 |
| | 70 | 70.07 | 0.6363 | 82.42 | 0.7733 | 91.11 | 0.8822 | 21.04 | 8.69 |
| | 80 | 71.57 | 0.6445 | 82.87 | 0.7794 | 91.11 | 0.882 | 19.54 | 8.24 |
| | 90 | 71.29 | 0.6419 | 84.44 | 0.7989 | 92.59 | 0.9018 | 21.3 | 8.15 |
| | 100 | 71.97 | 0.6486 | 84.90 | 0.8047 | 93.33 | 0.9109 | 21.36 | 8.43 |
| | average | 66.55 | 0.5887 | 80.16 | 0.7460 | 88.15 | 0.8437 | 21.6 | 7.99 |

 

Page 11:

Table 3. Overall classification accuracy and Kappa coefficient of classification algorithms under the Indian Pines dataset with different sample numbers (K=1000).

| Dataset | Training samples | KNN OA(%) | KNN Kappa | SVM OA(%) | SVM Kappa | ERS-FFT-SVM OA(%) | ERS-FFT-SVM Kappa | OA diff. vs. KNN | OA diff. vs. SVM |
|---|---|---|---|---|---|---|---|---|---|
| Indian Pines | 10 | 32.79 | 0.2546 | 52.88 | 0.4747 | 57.1 | 0.5212 | 24.31 | 4.22 |
| | 20 | 44.19 | 0.3726 | 57.8 | 0.5293 | 74.2 | 0.711 | 30.01 | 16.4 |
| | 30 | 46.83 | 0.4030 | 61.1 | 0.5648 | 76.23 | 0.7347 | 29.4 | 15.13 |
| | 40 | 50.05 | 0.4407 | 65.28 | 0.611 | 78.26 | 0.7559 | 28.21 | 12.98 |
| | 50 | 53.46 | 0.4781 | 66.86 | 0.628 | 83.48 | 0.8146 | 30.02 | 16.62 |
| | 60 | 54.79 | 0.4929 | 66.86 | 0.6492 | 83.48 | 0.8152 | 28.69 | 16.62 |
| | 70 | 55.3 | 0.4989 | 70.38 | 0.6678 | 82.61 | 0.8054 | 27.31 | 12.23 |
| | 80 | 57.34 | 0.5216 | 70.51 | 0.6693 | 90.72 | 0.8956 | 33.38 | 20.21 |
| | 90 | 58.48 | 0.5338 | 72.36 | 0.6898 | 90.72 | 0.8958 | 32.24 | 18.36 |
| | 100 | 59.35 | 0.5438 | 72.85 | 0.6951 | 91.88 | 0.9088 | 32.53 | 19.03 |
| | average | 51.26 | 0.454 | 65.69 | 0.6179 | 80.87 | 0.7858 | 29.61 | 15.18 |

Page 11: From Tables 2 and 3, we can see that in the case of small samples, the overall classification accuracy (OA) of ERS-FFT-SVM on the PaviaU dataset is improved by 7.99% on average compared to SVM and by 21.60% on average compared to KNN; on the Indian Pines dataset, ERS-FFT-SVM is improved by 15.18% on average compared to SVM and by 29.61% on average compared to KNN.

Reviewer#1, Concern # 3:

Why is SVM considered for classification? Justify the answer.

Author response: Thank you very much for the question. The proposed algorithm mainly addresses the low classification accuracy caused by intra-class variation and the curse of dimensionality in hyperspectral images. The combination of ERS-FFT and SVM can better avoid the curse of dimensionality and obtain higher accuracy on small-sample classification tasks, so we prefer SVM for classification.

Author action:    

Page 3: The core objective of classification tasks with small samples is to obtain the optimal solution using the available information. SVM performs the classification task by finding a separating hyperplane that keeps the training samples separated and as far as possible from the classification plane. For linearly inseparable classification problems, SVM can transform a linearly inseparable problem in a low-dimensional space into a linearly separable problem in a high-dimensional space by introducing kernel methods. It adapts well to hyperspectral datasets.
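The kernel behaviour described above can be sketched with scikit-learn. This is an illustrative stand-in on synthetic 2-D data, not the paper's hyperspectral features or parameters: an RBF-kernel SVM separates two concentric rings that no linear hyperplane can.

```python
# Illustrative sketch of the kernel trick: an RBF-kernel SVM separates a
# linearly inseparable two-class set (two concentric rings) that a linear
# SVM cannot. Synthetic data, not the paper's hyperspectral features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, 200)
radii = np.where(np.arange(200) < 100, 1.0, 3.0)   # inner ring vs. outer ring
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
y = (radii > 2.0).astype(int)

linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)
print(linear_acc, rbf_acc)  # the RBF kernel fits the rings; the linear one cannot
```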

Page 12: The proposed ERS-FFT-SVM algorithm is based on the classical SVM method for hyperspectral image classification; it corrects the intra-class differences of spectral features within superpixels and reduces the redundancy of spatial information. Moreover, unlike traditional classification methods that use only spectral information, the algorithm fuses spatial and spectral features, resulting in good classification accuracy and consistent classification results. Especially in the case of small-sample classification, the ERS-FFT-SVM algorithm still achieves better classification results.

Reviewer#1, Concern # 4:

Can the feasibility of applying K-NN to the proposed classification problem be discussed? Comment on it.

Author response: Thank you very much for your suggestion. After careful consideration and analysis, the KNN algorithm could indeed be fused with ERS-FFT as an alternative. However, its performance is not as good as SVM's in addressing intra-class variation and the curse of dimensionality in hyperspectral images: KNN suffers easily from the curse of dimensionality, and its misclassification problem becomes more serious when the sample classes are unbalanced. The KNN algorithm is easy to operate and needs no complicated parameter settings; if these classification defects of KNN can be overcome, its fusion with ERS-FFT may improve classification performance. This problem needs further in-depth study.

Author action:    

Page 3: The KNN classification algorithm is suitable for sample sets with many overlapping features and is easy to operate without complicated parameter settings, but it is prone to misclassification when the numbers of samples are not balanced. For example, if one class has many samples while the surrounding classes have few, an unknown sample may be assigned to the large class whenever most of its K nearest neighbors belong to that class. Therefore, the SVM algorithm is more suitable for the problem faced in this paper, so the proposed algorithm is combined with SVM.
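The imbalance failure mode described above can be shown with a minimal majority-vote k-NN on synthetic 1-D features (hypothetical data and labels, purely for illustration):

```python
# Minimal k-NN (k=3) on a synthetic 1-D feature, illustrating the imbalance
# failure mode described above: a query right next to a rare-class sample is
# still outvoted by the surrounding majority-class samples.
def knn_predict(train, query, k=3):
    """train: list of (feature, label) pairs; returns the majority label
    among the k nearest neighbours of `query` (1-D absolute distance)."""
    nearest = sorted(train, key=lambda s: abs(s[0] - query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# One "rare" sample crowded by five "majority" samples.
train = [(0.0, "rare"), (0.3, "majority"), (0.4, "majority"),
         (0.5, "majority"), (0.6, "majority"), (0.7, "majority")]

# The query at 0.1 is closest to the rare sample, yet 2 of its 3 nearest
# neighbours are majority-class, so the vote goes to "majority".
print(knn_predict(train, 0.1))  # -> majority
```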

Page 12: However, the proposed algorithm has not yet considered the best match between the number of training samples and the number of superpixel segmentation blocks, which leads to unstable classification performance. Moreover, only the combination with the SVM method has been considered, not combinations with KNN or other classification algorithms. Subsequent work will focus on a deeper fusion of spatial-spectral features and the applicability of other classification methods, and will clarify the relationship between the number of training samples and the number of superpixel segmentation blocks, in order to propose a classification method with better efficiency and higher accuracy for small-sample features. For example, if the KNN algorithm can overcome its misclassification problem when the sample classes are unbalanced, or can be combined with the advantages of other algorithms, it may form a more efficient and accurate classification method.

 

Reviewer#1, Concern # 5:

Comment on the accuracy of superpixel segmentation.

Author response: Thank you for your comments on our manuscript. The superpixel segmentation algorithm acts as a clustering step and introduces the spatial information of hyperspectral images into the classification task. The similarity of features within a superpixel block is high, so the classification is much more accurate after subsequent processing by the classifier. We found that with 1000 superpixel blocks, similar features are already clustered within individual superpixel blocks. This greatly helps to improve the classification accuracy and is one of the reasons why ERS-FFT-SVM performs well in the case of small samples.

Author action:   

Page 8:

Figure 5. Pseudo-color plot of the datasets after superpixel segmentation into 1000 blocks (PaviaU dataset on the left, Indian Pines dataset on the right).

In the results shown in Fig. 5, each image is segmented into 1000 superpixel blocks. Each superpixel block contains a single class of features, which is consistent with the assumption of the proposed algorithm that all pixels within a superpixel block share the same class; therefore, the number of superpixel blocks in this paper is set to 1000 to ensure the classification accuracy.
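The same-class-per-superpixel assumption can be sketched as a label-cleaning step: replace each pixel's predicted label with the majority label of its superpixel block. The block ids and class names below are hypothetical, and the ERS segmentation itself is not reproduced here:

```python
# Sketch of the assumption above: if every pixel in a superpixel block shares
# one class, per-pixel predictions can be cleaned by a majority vote within
# each block. Superpixel ids and labels are hypothetical, not the paper's.
from collections import Counter, defaultdict

def majority_vote_per_superpixel(pixel_labels, superpixel_ids):
    """Replace each pixel's label with the majority label of its block."""
    blocks = defaultdict(list)
    for label, sp in zip(pixel_labels, superpixel_ids):
        blocks[sp].append(label)
    majority = {sp: Counter(labels).most_common(1)[0][0]
                for sp, labels in blocks.items()}
    return [majority[sp] for sp in superpixel_ids]

# Block 0 is mostly "asphalt" with one stray "meadow" prediction;
# the vote removes the stray pixel.
labels = ["asphalt", "asphalt", "meadow", "meadow", "meadow", "asphalt"]
blocks = [0, 0, 0, 1, 1, 1]
print(majority_vote_per_superpixel(labels, blocks))
# -> ['asphalt', 'asphalt', 'asphalt', 'meadow', 'meadow', 'meadow']
```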

Page 12: From the kappa coefficient columns in Table 2 and Table 3, the kappa coefficient values of the ERS-FFT-SVM algorithm are higher than those of the KNN and SVM algorithms for different sample numbers, which indicates that the classification results of the ERS-FFT-SVM algorithm possess a high degree of consistency with the real feature conditions.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

The article presents a hyperspectral image classification method with inter-class difference correction and spatial-spectral redundancy removal. The work is neatly presented and substantiated by the observed results. However, a minor revision is required before it can be considered for publication.

1. A moderate linguistic correction is required.

2. Why is SVM considered for classification? Justify the answer.

3. Can the feasibility of applying K-NN to the proposed classification problem be discussed? Comment on it.

4. Comment on the accuracy of the superpixel segmentation.

 

Author Response


Author Response File: Author Response.pdf
