Article

Spectral-Spatial Feature Extraction of Hyperspectral Images Based on Propagation Filter

Zhikun Chen, Junjun Jiang, Xinwei Jiang, Xiaoping Fang and Zhihua Cai

1 School of Computer Science, China University of Geosciences, Wuhan 430074, China
2 Beibu Gulf Big Data Resources Utilisation Lab, Qinzhou University, Qinzhou 535000, China
3 Guangxi Key Laboratory of Beibu Gulf Marine Biodiversity Conservation, Qinzhou 535000, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(6), 1978; https://doi.org/10.3390/s18061978
Submission received: 29 April 2018 / Revised: 6 June 2018 / Accepted: 17 June 2018 / Published: 20 June 2018
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)

Abstract

Recently, image-filtering-based hyperspectral image (HSI) feature extraction has been widely studied. However, owing to limited spatial resolution and the complexity of feature distributions, the problems of cross-region mixing after filtering and reduced spectral discriminability remain. To address these issues, this paper proposes a propagation filter (PF) based spectral-spatial feature extraction method for HSIs. Because the dimensionality (number of bands) of an HSI is typically high, principal component analysis (PCA) is first used to reduce the dimensionality. Then, the principal components of the HSI are filtered with the PF. When cross-region mixing occurs in the image, the filter template reduces the weights assigned to cross-region mixed pixels, which handles such pixels simply and effectively. To validate the proposed method, experiments are carried out on three common HSIs using support vector machine (SVM) classifiers trained on the features learned by the PF. The experimental results demonstrate that the proposed method effectively extracts the spectral-spatial features of HSIs and significantly improves the accuracy of HSI classification.

1. Introduction

Hyperspectral images (HSIs) have many spectral bands and complex spatial structures that contain abundant information [1,2]. Therefore, HSIs are widely applied in areas such as ocean monitoring [3,4], precision agriculture [5,6], forest degradation statistics [7] and military reconnaissance [8]. However, the high-dimensional spectral features of HSIs may cause the Hughes phenomenon [9,10], leading to a decrease in HSI classification accuracy [11,12,13]. Thus, before performing HSI classification, dimensionality reduction (DR) and feature extraction techniques [14,15] are typically used to obtain low-dimensional and discriminative features for classification [16,17].
Many DR models have been utilized to pre-process high-dimensional HSIs, including supervised, unsupervised and semi-supervised DR methods [18]. Examples of supervised DR methods include linear discriminant analysis (LDA) [19] and nonparametric weighted feature extraction (NWFE) [20]; unsupervised methods include PCA [21], independent component analysis (ICA) [22] and superpixelwise PCA [23]; and semi-supervised DR methods include semi-supervised discriminant analysis (SDA) [24]. Among these methods, the supervised LDA projects samples from the high-dimensional feature space to a low-dimensional feature space in which the new features, extracted along the best discriminant vectors, satisfy class separability. However, when the data samples of different classes are nonlinearly separated in the input space, LDA is expected to fail. The semi-supervised SDA adds a regularization term to the LDA algorithm to ensure that the local structure between the samples is preserved during feature extraction. The unsupervised ICA represents HSIs with a relatively small number of independent features; however, ICA is more complicated than PCA. PCA is a simple, fast and effective unsupervised DR method that has been widely used in HSI feature extraction because it can capture the most informative features of an HSI with only a few principal components. However, the drawback of PCA is that it considers each pixel separately, regardless of the pixels' spatial context information, which has proven to be very effective prior knowledge [25,26,27].
To make better use of spatial context information, the most commonly adopted strategy is to leverage filters to extract HSI features. For example, Li et al. [28] developed the PCA-Gabor-SVM algorithm, which improved HSI classification accuracy by applying Gabor filters that combine spatial and spectral information to the dimensionality-reduced features from PCA. The edge-preserving filtering algorithm (EPF) proposed in [29] utilized PCA to decompose greyscale or colour guided images, taking advantage of the edge-preserving properties of bilateral and guided filtering. Methods that combine spatial and spectral information clearly enhance classification performance by preserving the spatial structure. Pan et al. [30] constructed a hierarchical guidance filtering scheme and a spectral-angle-distance matrix, iteratively training classifiers with ensemble-learned spatial and spectral information from different scales to achieve good generalization performance. The deep learning method proposed by Zhou et al. [31] achieved very good results with convolutional filters that learn spectral-spatial features directly from images. Wei et al. [32] proposed a hierarchical deep framework called spectral-spatial response that uses templates acquired through marginal Fisher analysis and PCA to learn combined spectral-spatial features in a simple way.
The aforementioned filters have demonstrated the ability to represent the latent spatial structures embedded in HSIs. However, cross-regional mixing typically exists in HSIs due to the limited spatial resolution and the complexity of the feature distribution. That is, the filter template, which consists of adjacent pixels centred on the target pixel, includes not only the characteristics of the target features but also a mixture of other features. This cross-regional mixture affects smoothing and other filtering tasks, leading to fuzzy areas and inefficient features for HSI classification. Yu et al. [33] proposed a multiscale spectral-spatial context-aware propagation filter that extracts the features of hyperspectral images from multiple views to generate spatial-spectral features. The PF addresses the cross-regional mixing problem of HSIs; however, a too-large or too-small scale parameter may have a negative impact and is not conducive to suppressing cross-regional mixing. Therefore, this paper proposes a novel PF-based spectral-spatial feature extraction method for HSIs that addresses the cross-regional mixing problem effectively.
The structure of this paper is as follows. Section 2 details the proposed method. Experimental results and discussions are given in Section 3, and Section 4 summarizes this paper.

2. Proposed Method

2.1. Propagation Filter

The PF [34] is a smoothing filter in which the filtered pixel values of an HSI are computed as

$$O_s = \frac{1}{Z_s} \sum_{t \in N_s} \omega_{s,t} I_t, \qquad (1)$$

where $Z_s = \sum_{t \in N_s} \omega_{s,t}$ is the normalisation factor, $N_s$ is the set of neighbouring pixels within a window of size $(2w+1) \times (2w+1)$ centred on pixel $s$, $\omega_{s,t}$ is the weight with which the neighbouring pixel $t$ contributes to the filtering of pixel $s$, and $I_t$ is the value of pixel $t$ in the HSI.

Here, $\omega_{s,t} = P(s \to t)$ is defined as the weight between pixel $s$ and its adjacent pixel $t$. If $t = s$, the weight between pixel $s$ and itself is $\omega_{s,s} = P(s \to s) = 1$; otherwise,

$$\omega_{s,t} = \omega_{s,t-1} \, D(t-1,t) \, R(s,t), \qquad (2)$$

where $t-1$ denotes the pixel that precedes $t$ on the propagation path from $s$ to $t$, and the two distances $D(t-1,t)$ and $R(s,t)$ are defined by

$$D(t-1,t) = g(\lVert I_{t-1} - I_t \rVert; \sigma_\alpha), \qquad (3)$$

$$R(s,t) = g(\lVert I_s - I_t \rVert; \sigma_r), \qquad (4)$$

with the function $g(\cdot)$ being the Gaussian function

$$g(\lVert I_{t-1} - I_t \rVert; \sigma_\alpha) = \exp\!\left(-\frac{\lVert I_{t-1} - I_t \rVert^2}{2\sigma_\alpha^2}\right), \qquad (5)$$

$$g(\lVert I_s - I_t \rVert; \sigma_r) = \exp\!\left(-\frac{\lVert I_s - I_t \rVert^2}{2\sigma_r^2}\right). \qquad (6)$$

Without loss of generality, it is assumed that $\sigma_\alpha = \sigma_r$ and $D(\cdot) = R(\cdot)$ throughout this paper.
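To make the recursion concrete, the following is a minimal single-band, 1-D sketch of Equations (1)–(6). It is an illustration under our own simplifications (one scan line, scalar pixel values), not the authors' implementation; the paper's filter follows the 2-D propagation pattern of Figure 1d.

```python
import numpy as np

def propagation_filter_1d(row, w=8, sigma=1.5):
    """1-D propagation filter sketch: weights are propagated outwards from
    the centre pixel s along the scan line, following Equations (1)-(6)."""
    g = lambda d: np.exp(-(d ** 2) / (2 * sigma ** 2))  # Gaussian kernel; sigma_alpha = sigma_r
    row = np.asarray(row, dtype=float)
    out = np.empty_like(row)
    n = len(row)
    for s in range(n):
        weights, values = [1.0], [row[s]]        # w_{s,s} = P(s -> s) = 1
        for direction in (-1, 1):                # propagate left and right of s
            w_prev = 1.0
            for step in range(1, w + 1):
                t = s + direction * step
                if t < 0 or t >= n:
                    break
                # D penalises photometric change along the path (Equation (3)),
                # R penalises the difference from the centre pixel s (Equation (4)).
                D = g(abs(row[t - direction] - row[t]))
                R = g(abs(row[s] - row[t]))
                w_prev = w_prev * D * R          # Equation (2): w_{s,t} = w_{s,t-1} D R
                weights.append(w_prev)
                values.append(row[t])
        out[s] = np.dot(weights, values) / np.sum(weights)  # Equation (1)
    return out
```

Because the weight of pixel $t$ is the product of all pairwise similarities along the path from $s$ to $t$, a single cross-region boundary on that path drives the weight towards zero; this is exactly the behaviour exploited in Section 2.2.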

2.2. Spectral-Spatial Feature Extraction Method Based on the PF

As shown in Figure 1a, the cross-region mixture problem is quite common in HSIs. In particular, at a lower spatial resolution a single pixel is more likely to cover several classes: as the ground sample distance increases, the objects covered by a given pixel tend to be mixed [34]. Therefore, this paper presents a spectral-spatial feature extraction algorithm for HSIs that builds on the PF's ability to handle the cross-regional mixture problem [35]. As seen in Equations (1)–(6) and Figure 1b–d, the PF generates a new centre pixel by a weighted summation of the neighbouring pixels in the HSI. When the adjacent pixel t, the centre pixel s and pixel t−1 in the neighbouring pixel set all belong to the same class, the weight of pixel t is relatively large. In Figure 1d, the selected pixel t−1 is adjacent to pixel t and lies on the path pointing towards it; pixel s is shown in yellow, pixel t in red and pixel t−1 in green. However, when there are mixed pixels in the neighbouring pixel set, the weight of pixel t is smaller. Therefore, the PF enhances the similar features of pixels of the same class, which suppresses the effects of cross-regional mixed pixels.
In addition, to improve the performance of the PF for feature extraction in HSIs, PCA is performed before filtering: the dimensionality of the HSI is reduced by PCA, which greatly reduces the redundant information between bands in the updated pixels. Although the HSI loses a small amount of information after PCA, the resulting components are sorted according to the amount of information they carry. After the PF process, strengthening the important components and attenuating the less important ones is beneficial for feature extraction and improves the classification accuracy.
The specific process is shown in Figure 2. In the first step, PCA is used to reduce the dimensionality and remove the redundant inter-spectral information to obtain the principal components of an HSI. Then, the principal components are filtered with the PF. When cross-regional mixing occurs in the image, the filter template reduces the influence of cross-regional mixed pixels on the object pixel, thereby effectively mitigating their effects. Through this technique, the proposed method can accurately extract the spectral-spatial features reflecting the real objects. Finally, to validate the effectiveness of the proposed method, experiments are carried out on HSIs using an SVM classifier trained on the learned spectral-spatial features. Algorithm 1 depicts the proposed PF-based HSI spectral-spatial feature extraction model.
Algorithm 1: Specific flowchart of the spectral-spatial feature extraction algorithm based on the PF.
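The following compact sketch outlines the three steps of Algorithm 1 (PCA, per-component PF, SVM) in code. It reuses the propagation_filter_1d sketch from Section 2.1; the separable row-then-column pf_2d approximation and all helper names are our assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def pf_2d(img, w=8, sigma=1.5):
    """Crude separable stand-in for the 2-D propagation pattern of Figure 1d:
    apply the 1-D sketch along every row, then along every column."""
    rows = np.array([propagation_filter_1d(r, w, sigma) for r in img])
    return np.array([propagation_filter_1d(c, w, sigma) for c in rows.T]).T

def pca_pf_svm(cube, labels, train_idx, k=45, w=8, sigma=1.5):
    """PCA -> per-component PF -> SVM. `cube` is an H x W x B data cube,
    `labels` a flattened label vector, `train_idx` the training pixel indices."""
    H, W, B = cube.shape
    # Step 1: PCA reduces the B spectral bands to the k leading components.
    pcs = PCA(n_components=k).fit_transform(cube.reshape(-1, B)).reshape(H, W, k)
    # Step 2: filter each principal-component image with the PF.
    feats = np.stack([pf_2d(pcs[..., i], w, sigma) for i in range(k)], axis=-1)
    X = feats.reshape(-1, k)
    # Step 3: train the SVM on the filtered features of the training pixels.
    clf = SVC(kernel='rbf')  # the paper tunes the SVM by fivefold cross validation
    clf.fit(X[train_idx], labels[train_idx])
    return clf.predict(X)
```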

3. Experimental Settings

In this paper, the training and testing samples for each HSI dataset were chosen randomly. In the experiments shown in Table 1, 20 labelled samples were randomly selected from each class as training samples, and the rest were used as test samples to verify the performance of the proposed method. To compare the classification performance of the different methods with both sufficient and insufficient training samples, in the experiments shown in Table 5, 10–50 training samples were selected from each class and the rest were used as test samples. For stability, each experiment was performed 10 times; the reported results are the averages.
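For reproducibility, the sketch below shows one way to draw such per-class splits. The rule of taking half of a class when it contains fewer than 2 × n_train labelled pixels matches the Grass_p and Oats rows of Table 1, but it is our assumption, not a rule stated in the paper.

```python
import numpy as np

def split_per_class(labels, n_train=20, seed=0):
    """Randomly draw n_train labelled pixels per class for training (half of
    the class if it is too small); the remaining labelled pixels form the
    test set. Label 0 is treated as unlabelled background."""
    rng = np.random.default_rng(seed)
    flat = np.asarray(labels).ravel()
    train_idx, test_idx = [], []
    for c in np.unique(flat[flat > 0]):
        idx = rng.permutation(np.flatnonzero(flat == c))
        n = min(n_train, len(idx) // 2)   # e.g., Oats: 20 samples -> 10 train
        train_idx.extend(idx[:n])
        test_idx.extend(idx[n:])
    return np.array(train_idx), np.array(test_idx)
```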

3.1. Dataset Description

Three real HSI datasets are used in this paper: the Indian Pines, Salinas and University of Pavia scenes. The Indian Pines image was acquired by the AVIRIS sensor over the agricultural Indian Pines test site in northwestern Indiana. The image, which includes 16 categories of ground objects, contains 145 × 145 pixels, and only 200 of the 224 bands are valid due to water absorption. The spatial resolution is 20 m per pixel, and the spectral range is 0.4 to 2.5 μm. The Salinas image, acquired by the AVIRIS sensor over the Salinas Valley in California, USA, contains 512 × 217 pixels and includes 16 types of ground objects at a 3.7-m spatial resolution. After removing 20 of the 224 bands due to noise and water absorption, the remaining 204 valid bands were utilized in the experiments. The University of Pavia image, with 610 × 340 pixels at a 1.3-m spatial resolution, was acquired by the ROSIS sensor over the urban area around the University of Pavia. The image has a spectral range of 0.43 to 0.86 μm with 115 bands; 12 bands that were noisy or affected by water absorption were removed, and the remaining 103 bands were used.
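These three scenes are commonly distributed as MATLAB files. A hedged loading sketch is shown below; the file and variable names follow widely mirrored public copies of the datasets and are assumptions, not something specified by the paper.

```python
from scipy.io import loadmat

# Indian Pines: 145 x 145 x 200 corrected data cube and its ground truth
# (0 = unlabelled, 1-16 = the sixteen classes listed in Table 1).
cube = loadmat('Indian_pines_corrected.mat')['indian_pines_corrected'].astype(float)
labels = loadmat('Indian_pines_gt.mat')['indian_pines_gt']
```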

3.2. Compared Algorithms

In the experiments, the proposed classification method PCA-PF-SVM was compared to other widely used HSI classification methods, including SVM [11], PCA-SVM [36], PCA-Gabor-SVM [28], EPF-SVM [29], HiFi [30], LBP-SVM [37], R-VCANet-SVM [38] and PF-SVM. The parameters used for these methods were the default settings provided in the related literature, and the source code for the algorithms was provided by the respective authors. The SVM classifier was based on the LIBSVM library [39], and its optimal parameters were determined by fivefold cross validation. The overall accuracy (OA), the average accuracy (AA) and the kappa coefficient are used to evaluate the performance of the methods. The OA is the percentage of pixels whose classification agrees with the reference data. The AA is the mean of the per-class percentages of correctly classified pixels. The kappa coefficient measures the agreement between the classification result and the reference data while correcting for chance agreement.
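The three quality indexes can be computed from the confusion matrix as in the following sketch (standard definitions, not code from the paper):

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    """OA, AA and kappa from a confusion matrix; class labels are assumed
    to be integers in 0..n_classes-1."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                        # fraction classified correctly
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))       # mean per-class accuracy
    pe = np.dot(cm.sum(axis=0), cm.sum(axis=1)) / total ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                     # chance-corrected agreement
    return oa, aa, kappa
```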

3.3. Parameter Sensitivity Analysis

The proposed PCA-PF-SVM method has three important parameters: the filtering standard deviation $\sigma_\alpha$ ($\sigma_r$), the filtering window size $w$ and the feature dimension $k$. To test the influence of different parameter settings on the proposed model, extensive experiments were conducted on the Indian Pines scene. As shown in Figure 3a, the best OA, AA and kappa values were achieved when $\sigma_\alpha$ ($\sigma_r$) $= 1.5$. When $\sigma_\alpha$ ($\sigma_r$) $< 1.5$, the accuracies decreased significantly because such a small standard deviation provides insufficient smoothing. When $\sigma_\alpha$ ($\sigma_r$) $> 1.5$, the classification accuracy remains relatively stable because the ability to suppress unhelpful information improves once the filter parameter reaches a certain value. As shown in Figure 3b, the best OA, AA and kappa values were achieved when $w = 8$. These values are significantly lower when $w < 8$ because considerable important spatial information is lost when the window is too small. Moreover, the values also decrease when $w > 8$ because the window then contains a larger amount of irrelevant information that dilutes the important spatial information and, thus, reduces the classification accuracy. From Figure 3c, OA increases with the number of PCA dimensions until the dimension reaches 45, after which OA tends to decrease. In our experiments, $k$ is set to 45 as a tradeoff between computational complexity and classification accuracy. Therefore, in all of our experiments, the parameters were set as follows: $\sigma_\alpha$ ($\sigma_r$) $= 1.5$, $w = 8$ and $k = 45$.
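A sweep such as the one behind Figure 3 can be scripted by combining the earlier sketches; the parameter grids below are illustrative rather than the paper's exact grids, and `cube`/`labels` are assumed to be loaded as in Section 3.1.

```python
from itertools import product
import numpy as np

y = labels.ravel()
train_idx, test_idx = split_per_class(labels, n_train=20)
results = {}
for sigma, w, k in product([0.5, 1.0, 1.5, 2.0, 2.5], [4, 6, 8, 10, 12], [15, 30, 45, 60]):
    pred = pca_pf_svm(cube, y, train_idx, k=k, w=w, sigma=sigma)
    # classes are labelled 1..16 (0 = background), so shift to 0-based indices
    results[(sigma, w, k)] = oa_aa_kappa(y[test_idx] - 1, pred[test_idx] - 1, 16)
best = max(results, key=lambda p: results[p][0])  # parameter triple with the highest OA
```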

3.4. Experimental Results

(1) The proposed PCA-PF-SVM method has strong spatial capabilities. According to Figure 4, Figure 5 and Figure 6 and Table 2, Table 3 and Table 4, the PCA-PF-SVM method achieves better OA, AA and kappa values than the spectral-only classification methods. The OA values of the proposed PCA-PF-SVM method on the Indian Pines, Salinas and University of Pavia datasets are 36.14%, 8.87% and 17.78% higher, respectively, than those of the PCA-SVM method, and 25.32%, 11.15% and 14.68% higher, respectively, than those of the SVM method. The main reason is that the spectral classification methods do not consider spatial information, whereas the proposed method fully exploits it. These results verify that the proposed method is effective for spectral-spatial feature extraction.
(2) The results verify that combining PCA and the PF is effective for HSI feature extraction. Figure 4, Figure 5 and Figure 6 and Table 2, Table 3 and Table 4 show that PCA dimensionality reduction alone does not improve the SVM classification and may even degrade it. For example, the OA values of the PCA-SVM method on the Indian Pines dataset are lower than those of the SVM method. This occurs mainly because, although PCA preserves the HSI's main information, it also loses a small amount of information, which affects the SVM classification accuracy. However, the combination of PCA and the PF greatly enhances the performance: the OA values of the proposed PCA-PF-SVM method on the Indian Pines, Salinas and University of Pavia datasets are 13.26%, 3.42% and 7.86% higher, respectively, than those of the PF-SVM method. These experimental results show that applying PCA dimensionality reduction before filtering is necessary.
(3) The proposed method is more effective than the other advanced classification methods. As shown in Figure 4, Figure 5 and Figure 6 and Table 2, Table 3 and Table 4, compared with the other methods, the PCA-PF-SVM method performs very well in terms of OA and kappa. On the Indian Pines, Salinas and University of Pavia datasets, its OA values are 1.77%, 5.61% and 1.93% higher, respectively, than those of the HiFi method; 2.89%, 2.14% and 8.59% higher, respectively, than those of the LBP-SVM method; and 8.36%, 4.53% and 3.38% higher, respectively, than those of the R-VCANet-SVM method.
(4) The experimental results demonstrate the robustness of the proposed PCA-PF-SVM method. As shown in Figure 7, Figure 8 and Figure 9 and Table 5, on all three datasets, as the number of training samples varies from 10 to 50, the proposed method achieves the highest OA. Its advantage is especially obvious when the number of training samples is small. For example, when the number of training samples per class is 10, our method has a 3.12–36.31% advantage on the Indian Pines image, a 3.50–20.29% advantage on the Salinas image and a 3.31–23.43% advantage on the University of Pavia image compared to the other methods. This is a highly meaningful result: a large number of unlabelled samples can be distinguished using a much smaller number of labelled samples, greatly improving work efficiency, which further illustrates the robustness of the proposed method.
(5) These experimental results show that the proposed method is useful for addressing the cross-regional mixture problems of HSIs. In Figure 10, the complete classification maps and ground truth maps obtained by PCA-PF-SVM are presented. The proposed method achieves better results on the cross-region mixture problem. In the cross-region areas marked by white boxes in the three figures, the PF reduces cross-region mixing, which preserves better image features and further improves the classification accuracy.
(6) Statistical evaluation of the results. To further validate whether the observed increases in kappa are statistically significant, we apply a paired t-test, which is widely used in related works [40,41,42]. We accept the hypothesis that the mean kappa of PCA-PF-SVM is larger than that of a compared method only if Equation (7) holds:
$$(\bar{a}_1 - \bar{a}_2)\sqrt{\frac{n_1 + n_2 - 2}{\left(\frac{1}{n_1} + \frac{1}{n_2}\right)\left(n_1 s_1^2 + n_2 s_2^2\right)}} > t_{1-\alpha}[n_1 + n_2 - 2], \qquad (7)$$
where $\bar{a}_1$ and $\bar{a}_2$ are the mean kappa values of PCA-PF-SVM and a compared method, $s_1$ and $s_2$ are the corresponding standard deviations, and $n_1$ and $n_2$ are the numbers of repeated experiments, both set to 10 in this paper. The paired t-test shows that the increases in kappa are statistically significant on all three datasets (at the 95% level), which can also be observed in Figure 11.
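Equation (7) translates directly into code. The sketch below uses the population standard deviation so that the $n s^2$ pooling matches the formula, and relies on SciPy only for the critical value of the t distribution.

```python
import numpy as np
from scipy.stats import t

def kappa_t_test(k1, k2, alpha=0.05):
    """Accept that method 1 has a larger mean kappa than method 2 if the
    pooled two-sample t statistic of Equation (7) exceeds the critical
    value t_{1-alpha}[n1 + n2 - 2]. k1, k2 are kappa values from repeated
    runs (n1 = n2 = 10 in this paper)."""
    n1, n2 = len(k1), len(k2)
    a1, a2 = np.mean(k1), np.mean(k2)
    s1, s2 = np.std(k1), np.std(k2)   # population std, matching the n * s^2 pooling
    stat = (a1 - a2) * np.sqrt((n1 + n2 - 2) /
                               ((1 / n1 + 1 / n2) * (n1 * s1 ** 2 + n2 * s2 ** 2)))
    return stat > t.ppf(1 - alpha, n1 + n2 - 2)
```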

4. Conclusions

The motivation for this study was to develop a simple feature extraction method to handle the cross-regional mixing problem of HSIs. The developed method extracts spectral-spatial features via the PF. However, the high dimensionality of HSIs affects the PF's performance to a certain extent. Therefore, based on the characteristics of HSIs, PCA is used to reduce the image's dimensionality, and a combined PCA-PF feature extraction method is proposed. To evaluate the performance of the proposed method, three classical datasets with cross-regional mixing problems of different complexities were analyzed, and comparative experiments were conducted. The results show that the proposed method effectively alleviates the cross-regional mixture problem. In addition, the features extracted by the proposed method were also classified with NRS and ELM and compared with PCA-Gabor-NRS and LBP-ELM; as shown in Table 6, our method obtains better results than the compared methods.

Author Contributions

Z.C. (Zhikun Chen) and Z.C. (Zhihua Cai) conceived and designed the experiments; Z.C. (Zhikun Chen) and J.J. performed the experiments; X.J. and X.F. analyzed the data; Z.C. (Zhikun Chen) wrote the paper.

Funding

This work was partially supported by the National Natural Science Foundation of China under Grants 61773355, 61403351 and 61402424, the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan), and the Qinzhou scientific research and technology development plan project (201714322).

Acknowledgments

The authors would like to thank B. Pan, Y. Zhou, J.H.R. Chang, X. Kang and W. Li for providing the source code for their algorithms.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Jia, X.; Kuo, B.C.; Crawford, M.M. Feature mining for hyperspectral image classification. Proc. IEEE 2013, 101, 676–697.
2. Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2013, 101, 652–675.
3. Han, Y.; Li, J.; Zhang, Y. Sea Ice Detection Based on an Improved Similarity Measurement Method Using Hyperspectral Data. Sensors 2017, 17, 1124.
4. Wong, E.; Minnett, P. Retrieval of the Ocean Skin Temperature Profiles From Measurements of Infrared Hyperspectral Radiometers—Part II: Field Data Analysis. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1891–1904.
5. Zhang, T.; Wei, W.; Zhao, B. A Reliable Methodology for Determining Seed Viability by Using Hyperspectral Data from Two Sides of Wheat Seeds. Sensors 2018, 18, 813.
6. Behmann, J.; Acebron, K.; Emin, D. Specim IQ: Evaluation of a New, Miniaturized Handheld Hyperspectral Camera and Its Application for Plant Phenotyping and Disease Detection. Sensors 2018, 18, 441.
7. Sandino, J.; Pegg, G.; Gonzalez, F. Aerial Mapping of Forests Affected by Pathogens Using UAVs, Hyperspectral Sensors, and Artificial Intelligence. Sensors 2018, 18, 944.
8. Ma, N.; Peng, Y.; Wang, S. An Unsupervised Deep Hyperspectral Anomaly Detector. Sensors 2018, 18, 693.
9. Jiang, J.; Ma, J.; Wang, Z.; Chen, C. Hyperspectral Image Classification in the Presence of Noisy Labels. IEEE Trans. Geosci. Remote Sens. 2018.
10. Ma, J.; Jiang, J.; Zhou, H.; Zhao, J.; Guo, X. Guided Locality Preserving Feature Matching for Remote Sensing Image Registration. IEEE Trans. Geosci. Remote Sens. 2018.
11. Pal, M.; Foody, G. Feature Selection for Classification of Hyperspectral Data by SVM. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2297–2307.
12. Huo, H.; Guo, J.; Li, Z. Hyperspectral Image Classification for Land Cover Based on an Improved Interval Type-II Fuzzy C-Means Approach. Sensors 2018, 18, 363.
13. Tong, F.; Tong, H.; Jiang, J.; Zhang, Y. Multiscale union regions adaptive sparse representation for hyperspectral image classification. Remote Sens. 2017, 9, 872.
14. Ma, J.; Jiang, J.; Liu, C.; Li, Y. Feature guided Gaussian mixture model with semi-supervised EM and local geometric constraint for retinal image registration. Inf. Sci. 2017, 417, 128–142.
15. Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion 2016, 31, 100–109.
16. Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178.
17. McAuliffe, J.D.; Blei, D.M. Supervised Topic Models. Adv. Neural Inf. Process. Syst. 2008, 20, 121–128.
18. Jiang, X.; Fang, X.; Chen, Z. Supervised Gaussian Process Latent Variable Model for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1–5.
19. Lee, H.; Kim, M.; Jeong, D. Detection of cracks on tomatoes using a hyperspectral near-infrared reflectance imaging system. Sensors 2014, 14, 18837–18850.
20. Kuo, B.; Li, C.; Yang, J. Kernel Nonparametric Weighted Feature Extraction for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1139–1155.
21. Jolliffe, I. Principal Component Analysis, 2nd ed.; Springer: New York, NY, USA, 2002; Volume 98.
22. Boukhechba, K.; Wu, H.; Bazine, R. DCT-Based Preprocessing Approach for ICA in Hyperspectral Data Analysis. Sensors 2018, 18, 1138.
23. Jiang, J.; Ma, J.; Chen, C.; Wang, Z.; Cai, Z.; Wang, L. SuperPCA: A Superpixelwise Principal Component Analysis Approach for Unsupervised Feature Extraction of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018.
24. Song, Y.; Nie, F.; Zhang, C. A unified framework for semi-supervised dimensionality reduction. Pattern Recognit. 2008, 41, 2789–2799.
25. Chen, C.; Li, W.; Tramel, E.W.; Cui, M.; Prasad, S.; Fowler, J.E. Spectral-spatial preprocessing using multihypothesis prediction for noise-robust hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1047–1059.
26. Chen, C.; Li, W.; Su, H.; Liu, K. Spectral-spatial classification of hyperspectral image based on kernel extreme learning machine. Remote Sens. 2014, 6, 5795–5814.
27. Jiang, J.; Chen, C.; Yu, Y. Spatial-Aware Collaborative Representation for Hyperspectral Remote Sensing Image Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 404–408.
28. Li, W.; Du, Q. Gabor-Filtering-Based Nearest Regularized Subspace for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1012–1022.
29. Kang, X.; Li, S.; Benediktsson, J. Spectral-Spatial Hyperspectral Image Classification With Edge-Preserving Filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677.
30. Pan, B.; Shi, Z.; Xu, X. Hierarchical Guidance Filtering-Based Ensemble Classification for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4177–4189.
31. Zhou, Y.; Wei, Y. Learning Hierarchical Spectral-Spatial Features for Hyperspectral Image Classification. IEEE Trans. Cybern. 2016, 46, 1667–1678.
32. Wei, Y.; Zhou, Y.; Li, H. Spectral-Spatial Response for Hyperspectral Image Classification. Remote Sens. 2017, 9, 203.
33. Yu, S.; Liang, X.; Molaei, M. Joint Multiview Fused ELM Learning with Propagation Filter for Hyperspectral Image Classification. In Proceedings of the Asian Conference on Computer Vision, Taipei, Taiwan, 20–24 November 2016; pp. 374–388.
34. Chang, J.; Wang, Y. Propagated image filtering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; Volume 1, pp. 10–18.
35. Li, J.; Bioucas-Dias, J.; Plaza, A. Spectral-Spatial Classification of Hyperspectral Data Using Loopy Belief Propagation and Active Learning. IEEE Trans. Geosci. Remote Sens. 2013, 51, 844–856.
36. Prasad, S.; Bruce, L. Limitations of Principal Components Analysis for Hyperspectral Target Recognition. IEEE Geosci. Remote Sens. Lett. 2008, 5, 625–629.
37. Li, W.; Chen, C.; Su, H.; Du, Q. Local Binary Patterns and Extreme Learning Machine for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693.
38. Pan, B.; Shi, Z.; Xu, X. R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1975–1986.
39. Chang, C.; Lin, C. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27.
40. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
41. Pan, B.; Shi, Z.; Zhang, N.; Xie, S. Hyperspectral Image Classification Based on Nonlinear Spectral-Spatial Network. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1782–1786.
42. Chen, Y.; Zhao, X.; Jia, X. Spectral-Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
Figure 1. Flow diagram of HSI filtering using the PF. (a) Hyperspectral image; (b) neighbouring pixel set $N_s$; (c) the calculation of $\omega_{s,t}$; and (d) the pattern for performing 2D filtering with $w = 3$ pixels.
Figure 2. Schematic of the proposed PCA-PF-SVM method.
Figure 3. Indian Pines: analysis of the influence of parameters. (a) Standard deviation $\sigma_\alpha$ ($\sigma_r$); (b) window size $w$; and (c) dimension $k$.
Figure 4. The classification results of the Indian Pines image.
Figure 5. The classification results of the Salinas image.
Figure 6. The classification results of the University of Pavia image.
Figure 7. Influence of training samples on Indian Pines dataset.
Figure 8. Influence of training samples on Salinas dataset.
Figure 9. Influence of training samples on University of Pavia dataset.
Figure 10. Classification maps of the PCA-PF-SVM method on the three datasets. (a,d,g) False colour composite images (R-G-B = bands 50-27-17) for the Indian Pines, University of Pavia and Salinas datasets; (b,e,h) ground truth classification maps; (c,f,i) complete classification maps.
Figure 11. Box plots of the kappa coefficients of the different methods on the three datasets. (a) Indian Pines; (b) University of Pavia; (c) Salinas. 1. SVM; 2. PCA-SVM; 3. PCA-Gabor-SVM; 4. PF-SVM; 5. EPF-SVM; 6. HiFi; 7. LBP-SVM; 8. R-VCANet-SVM; 9. PCA-PF-SVM. The centre line is the median value, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme points, and abnormal outliers are plotted with "+".
Table 1. Train-test distribution of samples for the three datasets.

| Indian Pines | | | Salinas | | | University of Pavia | | |
| Class | Train | Test | Class | Train | Test | Class | Train | Test |
| Alfalfa | 20 | 26 | weeds_1 | 20 | 1989 | Asphalt | 20 | 18,629 |
| Corn_n | 20 | 1408 | weeds_2 | 20 | 3706 | Meadows | 20 | 2079 |
| Corn_m | 20 | 810 | fallow | 20 | 1956 | Gravel | 20 | 3044 |
| Corn | 20 | 217 | fallow_p | 20 | 1374 | Trees | 20 | 1325 |
| Grass_m | 20 | 463 | fallow_s | 20 | 2658 | Sheets | 20 | 5009 |
| Grass_t | 20 | 710 | stubble | 20 | 3939 | Soil | 20 | 1310 |
| Grass_p | 14 | 14 | Celery | 20 | 3559 | Bitumen | 20 | 3662 |
| Hay_w | 20 | 458 | Grapes | 20 | 11,251 | Bricks | 20 | 927 |
| Oats | 10 | 10 | Soil | 20 | 6183 | Shadows | 20 | 170 |
| Soybean_n | 20 | 952 | Corn | 20 | 3258 | | | |
| Soybean_m | 20 | 2435 | Lettuce_4 | 20 | 1048 | | | |
| Soybean_c | 20 | 573 | Lettuce_5 | 20 | 1907 | | | |
| Wheat | 20 | 185 | Lettuce_6 | 20 | 896 | | | |
| Woods | 20 | 1245 | Lettuce_7 | 20 | 1050 | | | |
| Buildings | 20 | 366 | Vinyard_U | 20 | 7248 | | | |
| Stone | 20 | 73 | Vinyard_T | 20 | 1787 | | | |
Table 2. Classification accuracy of different methods on the Indian Pines data set (%).

| Class | SVM | PCA-SVM | PCA-Gabor-SVM | PF-SVM | EPF-SVM | HiFi | LBP-SVM | R-VCANet-SVM | PCA-PF-SVM |
| Alfalfa | 55.00 | 54.35 | 70.27 | 12.38 | 57.78 | 100.00 | 46.58 | 100.00 | 54.55 |
| Corn_n | 52.16 | 51.32 | 81.18 | 67.18 | 85.80 | 84.94 | 89.95 | 65.41 | 95.22 |
| Corn_m | 63.35 | 25.22 | 90.78 | 77.55 | 89.35 | 93.09 | 86.70 | 85.31 | 94.97 |
| Corn | 53.33 | 28.45 | 82.20 | 72.53 | 43.06 | 87.10 | 91.85 | 97.24 | 91.44 |
| Grass_m | 82.80 | 75.81 | 97.37 | 90.89 | 92.93 | 92.01 | 88.72 | 91.36 | 72.16 |
| Grass_t | 85.91 | 86.62 | 96.19 | 87.59 | 91.93 | 97.61 | 85.70 | 96.48 | 100.00 |
| Grass_p | 37.14 | 53.85 | 45.16 | 35.00 | 82.35 | 100.00 | 30.00 | 100.00 | 18.92 |
| Hay_w | 97.89 | 99.76 | 88.59 | 100.00 | 100.00 | 99.78 | 88.49 | 99.13 | 100.00 |
| Oats | 27.27 | 38.89 | 24.39 | 8.85 | 100.00 | 100.00 | 13.89 | 100.00 | 45.45 |
| Soybean_n | 57.38 | 29.14 | 95.84 | 68.79 | 66.32 | 93.70 | 74.14 | 83.61 | 84.34 |
| Soybean_m | 71.57 | 51.75 | 87.75 | 91.33 | 92.13 | 78.52 | 97.06 | 71.79 | 95.90 |
| Soybean_c | 37.88 | 36.69 | 93.13 | 68.58 | 52.77 | 94.24 | 85.89 | 87.43 | 88.51 |
| Wheat | 88.14 | 96.83 | 77.02 | 95.81 | 100.00 | 99.46 | 83.12 | 99.46 | 95.85 |
| Woods | 92.55 | 93.98 | 95.49 | 96.61 | 96.94 | 98.23 | 99.84 | 95.74 | 100.00 |
| Buildings | 39.31 | 53.67 | 90.20 | 74.44 | 88.99 | 93.99 | 95.87 | 95.36 | 72.58 |
| Stone | 95.77 | 87.65 | 76.04 | 34.45 | 87.95 | 100.00 | 78.43 | 100.00 | 87.01 |
| OA | 66.27 ± 2.46 | 55.45 ± 4.38 | 88.99 ± 1.33 | 78.33 ± 1.69 | 83.03 ± 1.85 | 89.82 ± 2.01 | 88.70 ± 1.93 | 83.23 ± 1.75 | 91.59 ± 1.32 |
| AA | 64.84 ± 2.28 | 60.25 ± 5.63 | 80.73 ± 1.60 | 67.62 ± 1.52 | 83.02 ± 3.19 | 94.54 ± 0.97 | 77.26 ± 2.58 | 91.77 ± 0.82 | 81.06 ± 3.91 |
| kappa | 0.62 ± 0.02 | 0.50 ± 0.04 | 0.87 ± 0.02 | 0.76 ± 0.02 | 0.81 ± 0.02 | 0.88 ± 0.02 | 0.87 ± 0.02 | 0.81 ± 0.01 | 0.90 ± 0.01 |
Table 3. Classification accuracy of different methods on the Salinas data set (%).

| Class | SVM | PCA-SVM | PCA-Gabor-SVM | PF-SVM | EPF-SVM | HiFi | LBP-SVM | R-VCANet-SVM | PCA-PF-SVM |
| weeds_1 | 98.05 | 100.00 | 88.18 | 98.07 | 100.00 | 98.49 | 99.40 | 99.90 | 100.00 |
| weeds_2 | 99.37 | 99.43 | 88.99 | 99.92 | 99.89 | 98.70 | 99.26 | 99.84 | 99.84 |
| fallow | 91.22 | 94.35 | 82.46 | 93.93 | 94.91 | 99.80 | 97.92 | 99.39 | 100.00 |
| fallow_p | 97.68 | 94.41 | 73.87 | 86.13 | 97.86 | 97.45 | 83.89 | 99.56 | 91.79 |
| fallow_s | 97.00 | 95.24 | 81.13 | 97.62 | 99.96 | 88.75 | 97.28 | 99.62 | 99.52 |
| stubble | 100.00 | 99.95 | 92.22 | 99.95 | 99.92 | 99.59 | 95.13 | 99.97 | 99.97 |
| Celery | 99.94 | 100.00 | 96.04 | 98.22 | 100.00 | 96.60 | 94.66 | 98.17 | 100.00 |
| Grapes | 72.98 | 76.85 | 92.01 | 91.63 | 82.04 | 82.13 | 91.57 | 78.54 | 95.28 |
| Soil | 98.59 | 99.00 | 97.29 | 99.49 | 99.48 | 99.97 | 99.97 | 99.26 | 99.97 |
| Corn | 79.39 | 93.32 | 64.75 | 92.48 | 85.06 | 87.97 | 99.04 | 94.69 | 97.76 |
| Lettuce_4 | 93.65 | 91.02 | 95.66 | 95.42 | 98.21 | 96.18 | 98.96 | 98.76 | 100.00 |
| Lettuce_5 | 94.34 | 91.97 | 97.63 | 96.07 | 100.00 | 99.48 | 99.89 | 100.00 | 100.00 |
| Lettuce_6 | 93.37 | 91.14 | 84.29 | 76.19 | 96.10 | 97.21 | 92.64 | 94.31 | 98.33 |
| Lettuce_7 | 92.29 | 94.26 | 90.26 | 99.41 | 99.20 | 92.67 | 95.97 | 96.86 | 93.09 |
| Vinyard_U | 54.30 | 58.25 | 73.37 | 77.59 | 73.97 | 73.17 | 83.00 | 85.32 | 85.01 |
| Vinyard_T | 94.44 | 99.54 | 94.03 | 98.59 | 99.49 | 96.75 | 99.17 | 99.27 | 95.21 |
| OA | 84.96 ± 1.17 | 87.24 ± 1.73 | 85.67 ± 1.99 | 92.69 ± 1.38 | 91.41 ± 2.29 | 90.50 ± 1.32 | 93.97 ± 2.28 | 91.58 ± 1.09 | 96.11 ± 0.86 |
| AA | 91.04 ± 0.53 | 92.42 ± 0.93 | 87.01 ± 1.78 | 93.80 ± 0.85 | 95.38 ± 0.85 | 94.06 ± 0.68 | 95.48 ± 1.62 | 96.05 ± 0.40 | 97.24 ± 0.45 |
| kappa | 0.83 ± 0.01 | 0.86 ± 0.02 | 0.84 ± 0.02 | 0.92 ± 0.02 | 0.90 ± 0.03 | 0.89 ± 0.01 | 0.93 ± 0.03 | 0.91 ± 0.01 | 0.96 ± 0.01 |
Table 4. Classification accuracy of different methods on the University of Pavia data set (%).

| Class | SVM | PCA-SVM | PCA-Gabor-SVM | PF-SVM | EPF-SVM | HiFi | LBP-SVM | R-VCANet-SVM | PCA-PF-SVM |
| Asphalt | 87.52 | 82.14 | 72.39 | 85.47 | 98.05 | 80.40 | 84.36 | 79.96 | 92.30 |
| Meadows | 91.00 | 90.51 | 95.96 | 97.60 | 97.40 | 89.74 | 97.98 | 83.39 | 99.47 |
| Gravel | 61.72 | 39.42 | 75.01 | 56.17 | 89.16 | 82.92 | 72.93 | 88.12 | 84.96 |
| Trees | 70.10 | 79.54 | 40.27 | 80.30 | 96.20 | 83.64 | 51.19 | 96.75 | 76.68 |
| Sheets | 98.42 | 100.00 | 88.21 | 99.25 | 95.05 | 99.17 | 86.32 | 100.00 | 99.92 |
| Soil | 46.04 | 53.61 | 68.69 | 70.30 | 64.27 | 89.72 | 75.02 | 93.57 | 84.80 |
| Bitumen | 54.64 | 32.06 | 78.94 | 71.72 | 58.20 | 96.79 | 76.85 | 99.01 | 85.61 |
| Bricks | 80.23 | 57.68 | 80.20 | 60.79 | 76.20 | 92.55 | 78.43 | 88.39 | 79.43 |
| Shadows | 100.00 | 99.35 | 49.44 | 83.23 | 99.89 | 99.46 | 45.34 | 100.00 | 96.95 |
| OA | 75.73 ± 1.64 | 72.63 ± 3.40 | 76.58 ± 2.98 | 82.55 ± 3.41 | 87.00 ± 2.43 | 88.48 ± 1.90 | 81.82 ± 1.68 | 87.03 ± 1.19 | 90.41 ± 1.90 |
| AA | 76.63 ± 1.43 | 70.48 ± 2.41 | 72.12 ± 2.81 | 78.31 ± 3.34 | 86.05 ± 2.39 | 90.49 ± 0.97 | 74.27 ± 2.19 | 91.17 ± 0.89 | 88.90 ± 2.05 |
| kappa | 0.69 ± 0.02 | 0.65 ± 0.04 | 0.70 ± 0.03 | 0.78 ± 0.04 | 0.83 ± 0.03 | 0.83 ± 0.02 | 0.76 ± 0.02 | 0.83 ± 0.01 | 0.89 ± 0.02 |
Table 5. Classification accuracy using varying numbers of training samples applied to three datasets.

| | | Indian Pines | | | | | Salinas | | | | | University of Pavia | | | | |
| Method | Index | 10 | 20 | 30 | 40 | 50 | 10 | 20 | 30 | 40 | 50 | 10 | 20 | 30 | 40 | 50 |
| SVM | OA | 57.43 | 66.27 | 73.31 | 75.94 | 78.66 | 82.64 | 84.96 | 86.42 | 86.20 | 87.70 | 67.02 | 75.73 | 78.95 | 82.30 | 83.78 |
| | AA | 55.87 | 64.84 | 69.84 | 72.67 | 75.86 | 88.87 | 91.04 | 91.38 | 91.77 | 92.75 | 69.12 | 76.63 | 77.69 | 80.23 | 81.36 |
| | kappa | 0.52 | 0.62 | 0.70 | 0.73 | 0.76 | 0.81 | 0.83 | 0.85 | 0.85 | 0.86 | 0.59 | 0.69 | 0.73 | 0.77 | 0.79 |
| PCA-SVM | OA | 47.89 | 55.45 | 58.47 | 62.07 | 66.67 | 84.47 | 87.24 | 88.59 | 88.37 | 89.30 | 61.71 | 72.63 | 76.53 | 77.90 | 80.41 |
| | AA | 53.23 | 60.25 | 64.14 | 67.02 | 72.15 | 88.98 | 92.42 | 93.89 | 93.99 | 94.40 | 60.60 | 70.48 | 74.04 | 75.29 | 77.15 |
| | kappa | 0.42 | 0.50 | 0.53 | 0.57 | 0.62 | 0.83 | 0.88 | 0.87 | 0.87 | 0.88 | 0.52 | 0.65 | 0.70 | 0.72 | 0.75 |
| PCA-Gabor-SVM | OA | 76.03 | 88.99 | 93.06 | 94.64 | 96.09 | 73.62 | 85.67 | 89.29 | 93.08 | 94.46 | 65.51 | 76.58 | 81.26 | 84.30 | 86.18 |
| | AA | 75.90 | 80.73 | 86.93 | 88.79 | 91.78 | 76.95 | 87.01 | 90.49 | 93.70 | 94.91 | 63.76 | 72.12 | 77.19 | 80.11 | 83.28 |
| | kappa | 0.73 | 0.87 | 0.92 | 0.94 | 0.96 | 0.71 | 0.84 | 0.88 | 0.92 | 0.94 | 0.57 | 0.70 | 0.76 | 0.80 | 0.82 |
| PF-SVM | OA | 64.77 | 78.33 | 84.19 | 87.84 | 90.40 | 88.69 | 92.69 | 94.28 | 95.16 | 95.46 | 71.23 | 82.55 | 87.62 | 89.13 | 91.73 |
| | AA | 59.06 | 67.62 | 73.27 | 77.47 | 82.39 | 91.24 | 93.80 | 95.77 | 96.42 | 96.64 | 68.91 | 78.31 | 82.82 | 83.76 | 87.38 |
| | kappa | 0.61 | 0.76 | 0.82 | 0.86 | 0.89 | 0.97 | 0.92 | 0.94 | 0.95 | 0.95 | 0.64 | 0.78 | 0.84 | 0.86 | 0.89 |
| EPF-SVM | OA | 69.32 | 83.03 | 87.41 | 89.63 | 92.41 | 87.71 | 91.41 | 92.70 | 92.73 | 94.15 | 73.76 | 87.00 | 88.97 | 92.19 | 93.57 |
| | AA | 72.06 | 83.02 | 87.60 | 89.74 | 92.02 | 93.80 | 95.38 | 95.96 | 96.12 | 96.85 | 76.21 | 86.05 | 88.56 | 90.89 | 92.66 |
| | kappa | 0.66 | 0.81 | 0.86 | 0.88 | 0.91 | 0.86 | 0.90 | 0.92 | 0.92 | 0.93 | 0.67 | 0.83 | 0.86 | 0.90 | 0.92 |
| HiFi | OA | 81.08 | 89.82 | 91.65 | 93.63 | 93.44 | 86.53 | 90.50 | 92.08 | 92.67 | 93.59 | 81.83 | 88.48 | 88.64 | 90.22 | 90.94 |
| | AA | 89.44 | 94.54 | 95.74 | 96.36 | 96.72 | 92.08 | 94.06 | 95.47 | 96.20 | 96.76 | 85.40 | 90.49 | 91.91 | 92.99 | 93.58 |
| | kappa | 0.79 | 0.88 | 0.91 | 0.93 | 0.93 | 0.85 | 0.89 | 0.91 | 0.92 | 0.93 | 0.77 | 0.83 | 0.85 | 0.87 | 0.88 |
| LBP-SVM | OA | 80.49 | 88.70 | 92.01 | 94.85 | 95.58 | 89.65 | 93.97 | 96.18 | 96.86 | 97.91 | 70.35 | 81.82 | 85.75 | 89.39 | 90.34 |
| | AA | 70.96 | 77.26 | 83.29 | 86.72 | 87.00 | 90.41 | 95.48 | 96.13 | 96.88 | 97.87 | 66.39 | 74.27 | 81.33 | 84.85 | 86.41 |
| | kappa | 0.78 | 0.87 | 0.91 | 0.94 | 0.95 | 0.89 | 0.93 | 0.96 | 0.97 | 0.98 | 0.63 | 0.76 | 0.82 | 0.86 | 0.87 |
| R-VCANet-SVM | OA | 75.40 | 83.23 | 87.56 | 89.66 | 91.33 | 87.96 | 91.58 | 92.93 | 93.29 | 94.21 | 81.47 | 87.03 | 90.95 | 92.18 | 93.46 |
| | AA | 85.82 | 91.77 | 94.00 | 95.05 | 95.88 | 94.32 | 96.05 | 96.68 | 96.91 | 97.34 | 87.21 | 92.13 | 93.51 | 94.48 | 95.51 |
| | kappa | 0.72 | 0.81 | 0.86 | 0.88 | 0.90 | 0.87 | 0.91 | 0.92 | 0.93 | 0.94 | 0.76 | 0.83 | 0.88 | 0.90 | 0.91 |
| PCA-PF-SVM | OA | 84.20 | 91.59 | 94.32 | 95.23 | 96.55 | 93.91 | 96.11 | 96.83 | 97.84 | 98.45 | 85.14 | 90.41 | 91.62 | 94.12 | 95.34 |
| | AA | 78.28 | 81.06 | 87.29 | 89.65 | 92.22 | 96.04 | 97.24 | 98.28 | 98.70 | 99.14 | 83.17 | 88.90 | 88.26 | 91.80 | 93.41 |
| | kappa | 0.82 | 0.90 | 0.94 | 0.95 | 0.96 | 0.93 | 0.96 | 0.96 | 0.98 | 0.98 | 0.81 | 0.89 | 0.89 | 0.92 | 0.94 |
Table 6. Classification results obtained by PCA-Gabor-NRS, PCA-PF-NRS, LBP-ELM and PCA-PF-ELM.

Indian Pines
| | PCA-Gabor-NRS | | | PCA-PF-NRS | | | LBP-ELM | | | PCA-PF-ELM | | |
| Training Samples per Class | OA | AA | kappa | OA | AA | kappa | OA | AA | kappa | OA | AA | kappa |
| 10 | 68.46 | 61.32 | 0.65 | 84.50 | 76.99 | 0.83 | 80.89 | 89.16 | 0.79 | 83.15 | 90.43 | 0.81 |
| 20 | 82.56 | 75.63 | 0.80 | 90.82 | 83.84 | 0.90 | 88.37 | 93.62 | 0.87 | 91.44 | 95.32 | 0.90 |
| 30 | 88.93 | 83.28 | 0.87 | 93.73 | 87.69 | 0.93 | 92.57 | 96.09 | 0.92 | 94.35 | 96.81 | 0.94 |
| 40 | 91.99 | 87.17 | 0.91 | 94.79 | 89.67 | 0.94 | 94.42 | 96.76 | 0.94 | 95.69 | 97.68 | 0.95 |
| 50 | 93.71 | 89.21 | 0.93 | 95.72 | 90.08 | 0.95 | 95.76 | 97.77 | 0.95 | 97.08 | 98.37 | 0.97 |

Salinas
| | PCA-Gabor-NRS | | | PCA-PF-NRS | | | LBP-ELM | | | PCA-PF-ELM | | |
| Training Samples per Class | OA | AA | kappa | OA | AA | kappa | OA | AA | kappa | OA | AA | kappa |
| 10 | 57.53 | 55.95 | 0.54 | 93.54 | 95.64 | 0.93 | 90.41 | 92.92 | 0.89 | 93.22 | 96.70 | 0.92 |
| 20 | 75.74 | 75.55 | 0.73 | 95.97 | 97.46 | 0.96 | 94.90 | 96.47 | 0.94 | 95.96 | 98.12 | 0.96 |
| 30 | 87.62 | 88.11 | 0.86 | 96.91 | 98.24 | 0.97 | 96.46 | 97.84 | 0.96 | 96.58 | 98.49 | 0.96 |
| 40 | 91.94 | 92.20 | 0.91 | 97.41 | 98.48 | 0.97 | 97.69 | 98.38 | 0.97 | 97.90 | 98.99 | 0.98 |
| 50 | 94.85 | 94.80 | 0.94 | 97.93 | 98.74 | 0.98 | 98.02 | 98.67 | 0.97 | 98.40 | 99.23 | 0.98 |

University of Pavia
| | PCA-Gabor-NRS | | | PCA-PF-NRS | | | LBP-ELM | | | PCA-PF-ELM | | |
| Training Samples per Class | OA | AA | kappa | OA | AA | kappa | OA | AA | kappa | OA | AA | kappa |
| 10 | 50.86 | 51.76 | 0.41 | 80.73 | 78.73 | 0.75 | 73.98 | 76.15 | 0.67 | 82.18 | 82.47 | 0.77 |
| 20 | 63.07 | 62.57 | 0.55 | 89.18 | 86.87 | 0.86 | 82.47 | 82.90 | 0.78 | 89.42 | 89.09 | 0.86 |
| 30 | 69.39 | 67.65 | 0.62 | 93.06 | 91.04 | 0.91 | 86.52 | 86.42 | 0.82 | 91.13 | 91.26 | 0.88 |
| 40 | 76.64 | 75.21 | 0.71 | 94.48 | 92.77 | 0.93 | 88.83 | 87.93 | 0.85 | 92.69 | 92.52 | 0.90 |
| 50 | 82.26 | 81.09 | 0.78 | 95.21 | 93.73 | 0.94 | 90.77 | 90.36 | 0.88 | 94.60 | 93.42 | 0.93 |
