Article

Classification of Hyperspectral Images Based on Supervised Sparse Embedded Preserving Projection

Fen Cai, Miao-Xia Guo, Li-Fang Hong and Ying-Yi Huang *
1 College of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, Fujian, China
2 Engineering Research Center on Cloud Computing & Internet of Things and E-commerce Intelligence of Fujian Universities, Quanzhou Normal University, Quanzhou 362000, Fujian, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(17), 3583; https://doi.org/10.3390/app9173583
Submission received: 19 July 2019 / Revised: 16 August 2019 / Accepted: 26 August 2019 / Published: 2 September 2019
(This article belongs to the Section Applied Industrial Technologies)

Abstract

Dimensionality reduction is an important research area for hyperspectral remote sensing images due to the redundancy of spectral information. Sparsity preserving projection (SPP) is a dimensionality reduction (DR) algorithm based on the l1-graph, which establishes the relations between samples by sparse representation. However, SPP is an unsupervised algorithm that ignores the label information of samples, and its objective function considers only the reconstruction error, so its classification performance is limited. To solve this problem, this paper proposes a dimensionality reduction algorithm called supervised sparse embedded preserving projection (SSEPP). SSEPP considers the manifold structure information of samples and makes full use of the available label information in order to enhance the discriminative ability of the projection subspace. While maintaining the sparse reconstruction error, the algorithm also minimizes the error between samples of the same class. Experiments were performed on the Indian Pines hyperspectral dataset and on HJ1A-HSI remote sensing images of the Zhangjiang estuary in southeastern China. The results show that the proposed method effectively improves classification accuracy.

1. Introduction

Alongside the development of hyperspectral remote sensing imaging technology, hyperspectral imaging (HSI) is providing increasingly useful information and is widely used in many fields, such as the military, agricultural industry, and geological exploration [1,2,3]. In recent years, hyperspectral imaging has become a research hotspot both locally and internationally. However, hyperspectral imaging involves a large number of bands and is computationally complex. In addition, it has a high degree of correlation and band redundancy, which makes the process prone to dimension disaster [4]. As a result, dimensionality reduction (DR) is an essential part in the classification of hyperspectral images [5].
In the research field of high-dimensional data processing, scholars have proposed a series of classic algorithms. The common DR algorithms mainly include principal component analysis (PCA) [6] and liner discriminative analysis (LDA) [7]. PCA and LDA are both based on the assumption that the embedded subspace of the high-dimensional data space is linear. Therefore, the intrinsic properties of high-dimensional data are difficult to find and it is difficult to prevent the low-dimensional manifold structure of hyperspectral data from being revealed [8]. In recent years, sparse representation has been applied to signal processing, pattern recognition, and other fields, and has achieved certain research results [9]. In addition, sparse representation adaptively captures the intrinsic structural information of the sample data [10]. Due to its robustness to noise and natural discriminatory ability in the coding process, sparse representation [11] is used to extract the manifold structure of a sample in dimensionality reduction. Zhou et al. [12] proposed a method to analyze the principal components of data based on sparse solution methods in the Lasso and Elastic Net theories, referred to as sparse PCA (SPCA). Siddiqui et al. [13] used sparse principal component analysis in the reduction of hyperspectral remote sensing images. Qiao [14] proposed an unsupervised sparsity preserving projection (SPP) for the dimensionality reduction of data, which constructed a weight relationship between samples in a high-dimensional space using sparse representation theory. SPP only focuses on the sparse structure and ignores the discriminant information of labeled samples. In order to enhance the classification performance of SPP, some supervised sparse methods were proposed. Huang et al. [15] proposed a sparse discriminant embedding (SDE) hyperspectral data reduction algorithm based on SPP, which utilized the merits of both the property of sparsity and its manifold structure.
In this paper, we propose a supervised sparse embedding preserving projection (SSEPP) for the dimensionality reduction of hyperspectral remote sensing images. In this approach, SSEPP makes full use of the label information of samples in order to enhance the discriminative ability of the projection subspace. In addition to maintaining the sparse reconstruction error, the algorithm also minimizes the error between the samples of the same class. The experiments were performed on Indian Pines and HJ1A-HSI hyperspectral remote sensing imagery from the Zhangjiang estuary of Southeastern China. The results show that the proposed method effectively improves the classification accuracy of hyperspectral remote sensing imagery.
The rest of this paper is arranged as follows: Section 2 briefly introduces the graph embedding (GE) and SPP. The supervised sparse embedding preserving projection for the dimensionality reduction (SSEPP) algorithm is presented in Section 3. The experimental results are given in Section 4, and finally, the conclusions are presented in Section 5.

2. Related Work

2.1. Graph Embedding

Graph embedding (GE) [16] reveals the geometric structure of data through spectral graph theory. It applies the Laplacian operator to a constructed graph, retaining useful information and suppressing useless information in the low-dimensional embedding. GE requires the construction of an intrinsic graph to describe the geometric characteristics of similar data, as well as a penalty graph to represent the geometric characteristics that should be avoided. The intrinsic graph $G = \{X, W\}$ and the penalty graph $G^p = \{X, W^p\}$ are both undirected graphs, where $X$ is the vertex set and $W \in \mathbb{R}^{n \times n}$ and $W^p \in \mathbb{R}^{n \times n}$ are the weight matrices. In graph $G$, the weight $w_{ij}$ on the edge between vertices $x_i$ and $x_j$ maintains the similarity of similar data; in graph $G^p$, the weight $w^p_{ij}$ suppresses the similarity of heterogeneous data in the low-dimensional embedding. Under the graph embedding framework, the objective function can be written as:
$$\min_{y^T B y = c} \frac{1}{2} \sum_{i,j} \| y_i - y_j \|^2 w_{ij} = \min_{y^T B y = c} y^T L y,$$
where $c$ is a constant. In order to avoid degeneracy, an extra constraint matrix $B$ is added. $L$ is the Laplacian matrix of the intrinsic graph $G$. $B$ is typically a normalized diagonal matrix, or it can be set to the Laplacian matrix of the penalty graph $G^p$, that is, $B = L^p$. The Laplacian matrices $L$ and $L^p$ are defined as:
$$L = D - W, \quad D_{ii} = \sum_{j \ne i} w_{ij} \ \forall i; \qquad L^p = D^p - W^p, \quad D^p_{ii} = \sum_{j \ne i} w^p_{ij} \ \forall i,$$
where $D$ and $D^p$ are both diagonal matrices. If the mapping is linear, that is, $Y = V^T X$, the graph embedding objective function can be converted to:
$$\min_V \frac{\operatorname{tr}(V^T X L X^T V)}{\operatorname{tr}(V^T X B X^T V)}.$$
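As an illustration of the framework above, the linearized objective can be solved as a generalized eigenvalue problem. The following is a minimal NumPy/SciPy sketch, not code from the paper; the function name and the small regularization term are our own choices:

```python
import numpy as np
from scipy.linalg import eigh

def graph_embedding_projection(X, W, B, d):
    """Linearized graph embedding: minimize tr(V^T X L X^T V) / tr(V^T X B X^T V).

    X: (D, n) data matrix with samples as columns; W: (n, n) symmetric
    intrinsic-graph weights; B: (n, n) constraint matrix (e.g. the
    penalty-graph Laplacian); d: target dimensionality. Returns (D, d) V.
    """
    D_mat = np.diag(W.sum(axis=1))
    L = D_mat - W                                  # intrinsic-graph Laplacian L = D - W
    A = X @ L @ X.T                                # numerator matrix
    C = X @ B @ X.T + 1e-8 * np.eye(X.shape[0])    # ridge term for numerical stability
    # Generalized eigenproblem A v = lambda C v; the smallest
    # eigenvalues minimize the trace ratio.
    vals, vecs = eigh(A, C)
    return vecs[:, :d]
```

The `1e-8` ridge is only there so that `C` is positive definite when `X X^T` is rank deficient; it is not part of the formulation in the text.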

2.2. Sparsity Preserving Projection

Sparse representation can represent the main information of an image with a small amount of data. Sparsity preserving projection (SPP) is an unsupervised dimensionality reduction algorithm based on sparse representation theory. Unlike traditional graph construction methods, it constructs a graph representing the relationships between data samples through sparse representation. This is a global graph construction method, so there is no need to manually select neighborhood parameters. The core idea of SPP is to construct a sparse reconstructive weight matrix and to seek an optimal projection $V$ that minimizes the error between each original sample and its sparse reconstruction after projection onto $V$.
Let the sample set be $X = \{ x_1, x_2, \ldots, x_N \}$, where $N$ is the total number of samples. The original sparse reconstruction objective is:
$$\min_{s_i} \| s_i \|_1 \quad \text{s.t.} \quad x_i = X s_i, \; 1 = \mathbf{1}^T s_i.$$
Due to noise in the samples, these constraints are not necessarily satisfied, so they can be relaxed to obtain the following problem:
$$\min_{s_i} \| s_i \|_1 \quad \text{s.t.} \quad \| x_i - X s_i \| \le \xi, \; 1 = \mathbf{1}^T s_i,$$
where $\xi$ represents the allowable error of the sparse reconstruction. Solving the above problem yields the sparse representation coefficient matrix $S = [s_1, s_2, \ldots, s_N]$, where $s_i$ is the sparse representation coefficient vector of $x_i$.
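The per-sample ℓ1 problems above can be sketched as follows. The paper uses the SPGL1 solver; here an sklearn Lasso penalty serves as a stand-in for the constrained formulation, so the function and its `alpha` parameter are illustrative assumptions rather than the authors' exact procedure:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coefficients(X, alpha=0.01):
    """Approximate the sparse reconstruction coefficients s_i of SPP.

    X: (D, n) samples as columns. Returns S of shape (n, n) whose columns
    are the coefficient vectors s_i, with a zero diagonal so that a sample
    never reconstructs itself.
    """
    D, n = X.shape
    S = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]       # dictionary excludes x_i
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(X[:, idx], X[:, i])               # penalized version of the l1 problem
        S[idx, i] = model.coef_
    return S
```

A larger `alpha` gives sparser coefficients, loosely playing the role of the error tolerance $\xi$ in the constrained form.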
In the process of dimensionality reduction, the purpose of SPP is to keep the sparse representation relationships between samples unchanged; thus the objective function of SPP can be written as:
$$J(V) = \min_V \sum_{i=1}^{N} \| V^T x_i - V^T X s_i \|^2,$$
where $V$ is the projection matrix. After simple algebraic operations, the objective function of SPP can be rewritten as:
$$J(V) = \min_V \sum_{i=1}^{N} \| V^T x_i - V^T X s_i \|^2 = \min_V V^T X \left( I - S - S^T + S^T S \right) X^T V.$$
In order to prevent degeneracy, an extra constraint is added: $V^T X X^T V = I$. Thus, the objective function of SPP is expressed as:
$$J(V) = \min_V \frac{V^T X \left( I - S - S^T + S^T S \right) X^T V}{V^T X X^T V}.$$
To obtain more stable numerical solutions, we transform the above minimization problem into the following maximization problem:
$$J(V) = \max_V \frac{V^T X S_\beta X^T V}{V^T X X^T V},$$
where $S_\beta = S + S^T - S^T S$. The optimization problem in Equation (9) can be solved via the generalized eigenvalue problem $X S_\beta X^T v = \lambda X X^T v$, and the optimal projection matrix is obtained as $V = [v_1, v_2, \ldots, v_d]$.
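Given a coefficient matrix $S$, the SPP projection step can be sketched as a generalized symmetric eigenproblem. This is an illustrative implementation, with a small ridge term added for numerical stability:

```python
import numpy as np
from scipy.linalg import eigh

def spp_projection(X, S, d):
    """SPP projection: maximize tr(V^T X S_beta X^T V) / tr(V^T X X^T V),
    with S_beta = S + S^T - S^T S, via X S_beta X^T v = lambda X X^T v.

    X: (D, n) samples as columns; S: (n, n) sparse coefficient matrix.
    Returns the (D, d) projection matrix of the d largest eigenvalues.
    """
    S_beta = S + S.T - S.T @ S
    A = X @ S_beta @ X.T
    A = (A + A.T) / 2.0                            # symmetrize against round-off
    C = X @ X.T + 1e-8 * np.eye(X.shape[0])        # ridge for positive definiteness
    vals, vecs = eigh(A, C)                        # ascending eigenvalues
    return vecs[:, ::-1][:, :d]                    # keep the d largest
```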

3. Supervised Sparse Embedded Preserving Projection

SPP is an unsupervised algorithm: it ignores the label information of samples, and its objective function considers only the sparse reconstruction error, which does not reflect the local structural relationships among samples. On this basis, a supervised sparse embedded preserving projection (SSEPP) algorithm is proposed. SSEPP not only uses the label information of samples to construct a weight matrix, which captures the manifold structure of the data and enhances the discrimination ability of the projection subspace, but it also minimizes the spacing between samples of the same class. Assume the sample set $X = \{ x_1, x_2, \ldots, x_n \}$, where $x_i$ is the $i$th sample of $X$. Improving on the objective function of SPP, the objective function of SSEPP is recast as the following optimization problem:
$$J(V) = \min_V \sum_{i=1}^{N} \left\| V^T x_i - V^T X s_i g_{ij} \right\|^2,$$
where $V$ represents a projection matrix and $g_{ij}$ is a weight matrix defined over all pairs of samples, as follows:
$$g_{ij} = \begin{cases} \left( 1 - \left( \dfrac{\| x_i - x_j \|}{\sigma} \right)^2 \right)^2, & \text{if } l(x_i) = l(x_j) \\[4pt] 0, & \text{otherwise.} \end{cases}$$
In Equation (11), the weight is nonzero only if $x_i$ and $x_j$ belong to the same class; within the scale $\sigma$, the closer $x_i$ and $x_j$ are, the larger the weight. This is used to enhance the discrimination ability of the projection subspace. After a few simple algebraic operations, the objective function is optimized to:
$$J(V) = \min_V \sum_{i=1}^{N} \left\| V^T x_i - V^T X s_i g_{ij} \right\|^2 = \min_V V^T X \left( I - S_g - S_g^T + S_g^T S_g \right) X^T V,$$
where $S_g$ denotes the sparse coefficient matrix $S$ reweighted elementwise by the weights $g_{ij}$.
In order to obtain a more stable numerical solution, the above minimization problem is converted into a maximization problem:
$$J(V) = \max_V V^T X S_\alpha X^T V,$$
where $S_\alpha = S_g + S_g^T - S_g^T S_g$ is the sparse reconstruction matrix.
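The supervised weight matrix of Equation (11) can be sketched directly. This is an illustration rather than the authors' code, and it assumes samples are stored as columns of a NumPy array:

```python
import numpy as np

def class_weights(X, labels, sigma=1.0):
    """Supervised weights: g_ij = (1 - (||x_i - x_j|| / sigma)^2)^2 when
    x_i and x_j share a label, and 0 otherwise.

    X: (D, n) samples as columns; labels: (n,) integer class labels.
    """
    diff = X[:, :, None] - X[:, None, :]
    dist2 = (diff ** 2).sum(axis=0)                 # squared pairwise distances
    G = (1.0 - dist2 / sigma ** 2) ** 2
    same = labels[:, None] == labels[None, :]       # same-class indicator
    return np.where(same, G, 0.0)
```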
The SPP objective function only considers the reconstruction error. On this basis, SSEPP also considers within-class spacing and minimizes the error between samples of the same class in the projection space by establishing an additional objective. If $y_i = V^T x_i$ and $y_j = V^T x_j$ are the projections of two training samples, this objective can be written as follows:
$$\min_V \frac{1}{2} \sum_{i,j} \| y_i - y_j \|^2 P_{ij},$$
where $P_{ij} = 1$ if $l(x_i) = l(x_j)$ and $P_{ij} = 0$ otherwise. After some simple operations, the objective function can be converted into the following:
$$\min_V \frac{1}{2} \operatorname{tr} \left\{ \sum_{i,j} \| y_i - y_j \|^2 P_{ij} \right\} = \min_V \operatorname{tr} \left\{ V^T X S_\beta X^T V \right\},$$
where $S_\beta = D - P$, and $D$ is a diagonal matrix with diagonal elements $d_{ii} = \sum_j P_{ij}$.
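The within-class penalty matrix $S_\beta = D - P$ can be built directly from the labels; a small sketch (the function name is ours):

```python
import numpy as np

def within_class_penalty(labels):
    """Build S_beta = D - P of Eq. (15): P_ij = 1 when samples i and j
    share a label, and D is diagonal with d_ii = sum_j P_ij."""
    P = (labels[:, None] == labels[None, :]).astype(float)
    D = np.diag(P.sum(axis=1))
    return D - P
```

Note that $D - P$ is exactly the graph Laplacian of the same-class adjacency graph, so each of its rows sums to zero.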
By combining Equations (13) and (15), the multi-objective optimization function of the SSEPP algorithm is obtained:
$$\max_V \frac{\operatorname{tr}(V^T X S_\alpha X^T V)}{\operatorname{tr}(V^T X S_\beta X^T V)}.$$
The SSEPP objective function not only highlights the role of the sample labels, but also minimizes the sparse reconstruction error while minimizing the distance between samples of the same class, so that projections of same-class samples are more compact.
Equation (16) is solved via the following generalized eigenvalue decomposition problem:
$$X S_\alpha X^T v = \lambda X S_\beta X^T v.$$
The eigenvalues of the above equation are sorted in descending order, and the projection matrix $V = [v_1, v_2, \ldots, v_d]$ is formed from the eigenvectors corresponding to the $d$ largest eigenvalues. The reduced-dimensionality sample features are then given as follows:
$$m_i = V^T x_i,$$
where $m_i$ is the feature vector of the $i$th sample. The specific steps of the supervised sparse embedded preserving projection algorithm are described in detail in Algorithm 1.
Algorithm 1: Supervised Sparse Embedded Preserving Projection (SSEPP)
Input: the sample set $X = \{ x_i \mid x_i \in \mathbb{R}^B,\ 1 \le i \le n \}$, the error value $\xi$, and the reduced dimensionality $d$.
Output: the feature set $M$ after dimensionality reduction.
(1) Standardize the sample set;
(2) Use the SPGL1 [17] algorithm to solve the sparse reconstruction coefficients using Equation (5);
(3) Calculate the weight matrix of the labeled sample using Equation (11);
(4) Use Equation (15) to calculate the objective function, which minimizes the error of samples of the same class. Then, obtain the multi-objective function according to Equation (16);
(5) Obtain the projection matrix by solving the eigenvectors of the generalized eigenvalue decomposition using Equation (17);
(6) Use Equation (18) to obtain the reduced-dimension sample feature set M .
End
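Putting the steps of Algorithm 1 together, a rough end-to-end sketch might look as follows. This is our reading of the algorithm, not the authors' code: a Lasso penalty stands in for the SPGL1 solver of step (2), and the elementwise reweighting $S_g = S \odot G$ is one plausible interpretation of the weighted objective:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def ssepp(X, labels, d, sigma=1.0, alpha=0.01):
    """Sketch of Algorithm 1 (SSEPP). X: (B, n) standardized samples as
    columns; labels: (n,) class labels; d: reduced dimensionality.
    Returns the projection matrix V and the projected features V^T X."""
    Dim, n = X.shape
    # Step (2): sparse reconstruction coefficients (Lasso stand-in for SPGL1)
    S = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(X[:, idx], X[:, i])
        S[idx, i] = model.coef_
    # Step (3): supervised weights g_ij of Eq. (11), applied elementwise
    dist2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    same = labels[:, None] == labels[None, :]
    G = np.where(same, (1.0 - dist2 / sigma ** 2) ** 2, 0.0)
    Sg = S * G
    S_alpha = Sg + Sg.T - Sg.T @ Sg                # sparse reconstruction matrix
    # Step (4): within-class penalty S_beta = D - P of Eq. (15)
    P = same.astype(float)
    S_beta = np.diag(P.sum(axis=1)) - P
    # Step (5): generalized eigenproblem X S_alpha X^T v = lambda X S_beta X^T v
    A = X @ S_alpha @ X.T
    A = (A + A.T) / 2.0                            # symmetrize against round-off
    B = X @ S_beta @ X.T + 1e-8 * np.eye(Dim)      # ridge for positive definiteness
    _, vecs = eigh(A, B)
    V = vecs[:, ::-1][:, :d]                       # keep the d largest eigenvalues
    # Step (6): project the samples
    return V, V.T @ X
```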

4. Experimental Results and Analysis

In this section, we validate the proposed method on two HSI datasets and present experimental results that demonstrate the benefits of SSEPP for the DR of HSI data.

4.1. Indian Pines Dataset

The Indian Pines dataset [18] was used in the first experiment. The image has 145 × 145 pixels and a spatial resolution of 20 m. It originally contained 220 spectral bands, of which 20 bands strongly affected by water absorption were removed before the experiment [19]; the remaining 200 bands were used. The wavelength range is 0.4–2.5 μm. The image contains a total of 10,249 labeled pixels in 16 ground-truth classes; 10% of the samples in each class were used for training and the rest for testing. Table 1 lists the classes and the number of training samples.
In the Indian Pines experiment, the output of each dimensionality reduction algorithm was uniformly set to 30 dimensions. The nearest neighbor classifier (1-NN, i.e., K set to 1) was then used for classification. The SPGL1 algorithm was used to solve for the sparse reconstruction coefficients, and in order to obtain better performance, the sparse reconstruction error value ξ was set to 0.9.
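The classification stage described above (project with the learned matrix, then classify with 1-NN) can be sketched as follows; `V` is assumed to be a learned bands × 30 projection matrix, and all names here are illustrative:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_1nn(V, X_train, y_train, X_test):
    """Project both sets with V (samples as columns), then apply a
    1-NN classifier as in the experiments (K = 1)."""
    Z_train = (V.T @ X_train).T        # rows become reduced-dimension feature vectors
    Z_test = (V.T @ X_test).T
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(Z_train, y_train)
    return clf.predict(Z_test)
```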
Figure 1 shows a comparison of the classification results on Indian Pines: Figure 1a shows the false color image, Figure 1b the ground truth, Figure 1c the sparsity preserving projection (SPP), Figure 1d the principal component analysis (PCA), Figure 1e the sparse discriminant embedding (SDE), and Figure 1f the supervised sparse embedding preserving projection (SSEPP).
Figure 1 shows that the SSEPP algorithm performs better than the SPP, PCA, and SDE baselines. SSEPP has the fewest misclassified and missed pixels, and its result is closest to the Indian Pines ground truth image. SPP has the worst classification result because it is unsupervised. SSEPP and SDE are both supervised algorithms, so they can exploit sample label information to enhance their discrimination ability and thus improve classification accuracy. In addition, compared to SDE, SSEPP constructs a similarity matrix that assigns different weights to samples of the same class according to their similarity, which enhances the natural discrimination ability of sparse representation; it also builds a multi-objective optimization on top of the original objective that minimizes both the sparse reconstruction error and the within-class sample spacing.
As shown in Table 2 and Figure 2, the feature dimension directly affects the classification accuracy of hyperspectral remote sensing images. As the number of feature dimensions increases, the overall accuracy trends upward and stabilizes at around 30 dimensions. SSEPP is consistently better than SPP because it exploits sample labels and minimizes within-class sample spacing, both of which are very helpful for feature extraction. Below seven dimensions, PCA classifies better than SSEPP, but beyond seven dimensions the proposed algorithm is consistently better than PCA; from eight dimensions onward, SSEPP is always the best method. This verifies the effectiveness of the algorithm.

4.2. Zhangjiangkou Mangrove Nature Reserve HJ1A-HSI Dataset

The Environment and Disaster Detection Small Satellite (HJ) is a new civilian satellite system in China, successfully launched from Taiyuan at 11:25 a.m. on 6 September 2008. The HJ1A satellite carries a CCD camera and a hyperspectral imager (HSI). The remote sensing data used in these experiments were acquired by the HJ-HSI [20] on 28 March 2010 over the Zhangjiangkou Mangrove Nature Reserve in Fujian Province. The mangrove area is one of the national key nature reserves, located at 22°53′45″–23°56′00″ N, 117°24′07″–117°30′00″ E, with a total area of 2360 hectares. The spatial resolution of the image is 100 m and the spectral range is 450–950 nm, with a total of 115 bands. According to field investigations in the study area, the land cover types were grouped into seven categories. Table 3 lists the seven land cover types and their descriptions.
In this experiment, 100 samples were randomly selected for training: 10 mangrove samples and 15 samples for each of the other six land cover types. The dimensionality reduction algorithms uniformly reduced the 115 spectral bands to 10 dimensions. Four texture features (mean, variance, dissimilarity, and second moment) [21], the digital elevation model (DEM), and the normalized difference vegetation index (NDVI) [22] were then stacked with the 10 spectral features into a single decision feature vector of 16 dimensions. Finally, classification was performed with the 1-NN classifier. The false color composite of the Zhangjiangkou Mangrove Nature Reserve HJ1A-HSI is shown in Figure 3a. Figure 3b–e compares the classification results of the different algorithms: Figure 3b shows the sparsity preserving projection (SPP), Figure 3c the principal component analysis (PCA), Figure 3d the sparse discriminant embedding (SDE), and Figure 3e the supervised sparse embedding preserving projection (SSEPP).
A confusion matrix is a very effective way to assess classification accuracy. We used confusion matrices, the overall accuracy (OA), and the kappa coefficient to evaluate the classification performance of the algorithms. Using field sampling and high-resolution imagery, we randomly selected 30 mangrove samples and 50 samples of each of the remaining six land cover types as ground truth to build the confusion matrices. The confusion matrices for the classification results of the four algorithms are listed in Table 4, and the overall accuracy and kappa coefficients are shown in Table 5.
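The overall accuracy and kappa coefficient can be computed from a confusion matrix as follows (a generic sketch, not tied to any particular library). Applied to the SSEPP matrix in Table 4, it reproduces the OA of 79.70% and kappa of 0.7618 reported in Table 5:

```python
import numpy as np

def oa_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix whose
    rows are predicted classes and columns are reference classes."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    po = np.trace(c) / total                                  # observed agreement (OA)
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / total ** 2   # chance agreement
    return po, (po - pe) / (1.0 - pe)
```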
Both Figure 3 and Table 4 show that the SSEPP algorithm greatly reduced the misidentification of mangroves, that is, the fewest red noise points were observed, and its classification result image was closest to the Zhangjiang estuary mangroves HJ1A-HSI ground truth image. Table 5 shows that the proposed SSEPP method yielded the best overall accuracy and kappa coefficient, followed by the SDE algorithm. Relative to the traditional sparsity preserving projection algorithm, SSEPP increased classification accuracy by 3% and the kappa coefficient by 0.04, which indicates that introducing sample label information and considering the spacing of same-class samples both lead to better classification. Compared with the SDE and PCA algorithms, the proposed algorithm improved the accuracy by 2% and 3%, respectively, and the kappa coefficient by 0.02 and 0.03, respectively.

5. Conclusions

This paper presented a supervised sparse embedded preserving projection (SSEPP) dimensionality reduction algorithm for hyperspectral images, an extension of SPP. To enhance the natural discrimination ability of sparse representation, the algorithm introduces sample class label information to construct a weight matrix based on sample similarity. Furthermore, it minimizes the distance between samples of the same class while maintaining the sparse reconstruction error. As a result, it more accurately reveals the sparse manifold structure of the samples. Experiments on the Indian Pines and HJ1A-HSI Zhangjiangkou Mangrove Reserve datasets demonstrated the effectiveness of the proposed SSEPP algorithm, which performed better than the compared algorithms.

Author Contributions

F.C. performed the experiments, analyzed the data, and wrote the original draft; M.-X.G. and L.-F.H. analyzed the data and participated in discussions of the research; Y.-Y.H. supervised the project and revised the manuscript.

Funding

This research was partially supported by the Natural Science Foundation of Fujian Province for Youths, China (grant No. 2017J05116), the Natural Science Foundation of Fujian Province, China (grant No. 2015J01286), the JK class project in Fujian Province Department of Education (grant No. JK2014037), and the Education Research Project of Fujian Province for Young and Middle-Aged Teachers (grant No. JAS150448). We deeply appreciate the organizations mentioned above.

Acknowledgments

This work was supported by Fujian Provincial Key Laboratory of Data-Intensive Computing, Fujian University Laboratory of Intelligent Computing and Information Processing, Fujian Provincial Big Data Research Institute of Intelligent Manufacturing, and Engineering Research Center on Cloud Computing & Internet of Things and E-commerce Intelligence of Fujian Universities.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dalponte, M.; Bruzzone, L.; Vescovo, L.; Gianelle, D. The role of spectral resolution and classifier complexity in the analysis of hyperspectral images of forest areas. Remote Sens. Environ. 2009, 113, 2345–2355.
  2. Gevaert, C.M.; Suomalainen, J.; Tang, J.; Kooistra, L. Generation of Spectral–Temporal Response Surfaces by Combining Multispectral Satellite and Hyperspectral UAV Imagery for Precision Agriculture Applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3140–3146.
  3. Manolakis, D.; Shaw, G. Detection algorithms for hyperspectral imaging applications. IEEE Signal Process. Mag. 2002, 19, 29–43.
  4. Wei, H.; Zhang, H.; Zhang, L.; Philips, W.; Liao, W. Weighted Sparse Graph Based Dimensionality Reduction for Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2017, 13, 686–690.
  5. Zhang, X.; He, Y.; Jiao, L.; Liu, R.; Feng, J.; Zhou, S. Scaling cut criterion-based discriminant analysis for supervised dimension reduction. Knowl. Inf. Syst. 2015, 43, 633–655.
  6. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459.
  7. Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Hum. Genet. 1936, 7, 179–188.
  8. Bachmann, C.M.; Ainsworth, T.L.; Fusina, R.A. Exploiting manifold geometry in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 441–454.
  9. Zhang, Y.; Du, B.; Zhang, L. A Sparse Representation-Based Binary Hypothesis Model for Target Detection in Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1346–1354.
  10. Chen, P.; Jiao, L.; Liu, F.; Gou, S.; Zhao, J.; Zhao, Z. Dimensionality Reduction of Hyperspectral Imagery Using Sparse Graph Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1165–1181.
  11. Sun, X.; Wang, J.; She, M.F.H.; Kong, L. Scale invariant texture classification via sparse representation. Neurocomputing 2013, 122, 338–348.
  12. Zou, H.; Hastie, T.; Tibshirani, R. Sparse Principal Component Analysis. J. Comput. Graph. Stat. 2006, 15, 265–286.
  13. Siddiqui, S.; Robila, S.; Peng, J.; Wang, D. Sparse Representations for Hyperspectral Data Classification. In Proceedings of the 2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008; pp. 577–580.
  14. Qiao, L.; Chen, S.; Tan, X. Sparsity preserving projections with applications to face recognition. Pattern Recognit. 2010, 43, 331–341.
  15. Huang, H.; Yang, M. Dimensionality Reduction of Hyperspectral Images with Sparse Discriminant Embedding. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5160–5169.
  16. Yan, S.; Xu, D.; Zhang, B.; Zhang, H.-J.; Yang, Q.; Lin, S. Graph Embedding and Extensions: A General Framework for Dimensionality Reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 40–51.
  17. van den Berg, E.; Friedlander, M.P. SPGL1: A Solver for Large-Scale Sparse Reconstruction. 2007. Available online: https://friedlander.io/software/spgl1/ (accessed on 30 August 2019).
  18. Baumgardner, M.F.; Biehl, L.L.; Landgrebe, D.A. 220 Band AVIRIS Hyperspectral Image Data Set: June 12, 1992 Indian Pine Test Site 3. Available online: https://purr.purdue.edu/publications/1947/1 (accessed on 30 September 2015).
  19. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-Variation-Regularized Low-Rank Matrix Factorization for Hyperspectral Image Restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188.
  20. China Centre for Resources Satellite Data and Application. HJ-1A/B/C. Available online: http://www.cresda.com/EN/satellite/7117.shtml (accessed on 3 November 2015).
  21. Soh, L.K.; Tsatsoulis, C. Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices. IEEE Trans. Geosci. Remote Sens. 1999, 37, 780–795.
  22. Deering, D.W. Rangeland Reflectance Characteristics Measured by Aircraft and Spacecraft Sensors. Ph.D. Thesis, Texas A&M University, College Station, TX, USA, 1978.
Figure 1. (a) False color image of Indian Pines; (b) ground truth; (c) sparsity preserving projection (SPP); (d) principal component analysis (PCA); (e) sparse discriminant embedding (SDE); and (f) supervised sparse embedding preserving projection (SSEPP).
Figure 2. Overall accuracy curves of different DR algorithms in different dimensions.
Figure 3. (a) HJ1A-HSI RGB remote sensing image in the Zhangjiang estuary of southeastern China; (b) SPP; (c) PCA; (d) SDE; and (e) SSEPP.
Table 1. Information about the Indian Pines dataset.

| # | Class | Samples | Color (R,G,B) |
|---|-------|---------|---------------|
| C1 | Alfalfa | 46 | 255,254,137 |
| C2 | Corn-notill | 1428 | 3,28,241 |
| C3 | Corn-min | 830 | 255,89,1 |
| C4 | Corn | 237 | 5,255,133 |
| C5 | Grass/pasture | 483 | 255,5,251 |
| C6 | Grass/trees | 730 | 89,1,255 |
| C7 | Grass/pasture-mowed | 28 | 3,171,255 |
| C8 | Hay-windrowed | 478 | 12,255,7 |
| C9 | Oats | 20 | 172,175,84 |
| C10 | Soybean-notill | 972 | 160,78,158 |
| C11 | Soybean-min | 2455 | 101,173,255 |
| C12 | Soybean-clean | 593 | 60,91,112 |
| C13 | Wheat | 205 | 104,192,63 |
| C14 | Woods | 1265 | 139,69,46 |
| C15 | Bldg-Grass-Trees-Drives | 386 | 119,255,172 |
| C16 | Stone-Steel towers | 93 | 254,255,3 |
Table 2. Indian Pines classification results (per-class accuracy, %; DR + 1-NN classifier).

| # | Train | Test | SPP | PCA | SDE | SSEPP |
|---|-------|------|-----|-----|-----|-------|
| C1 | 5 | 41 | 14.63 | 68.29 | 70.73 | 65.85 |
| C2 | 143 | 1285 | 43.97 | 45.21 | 62.02 | 68.72 |
| C3 | 83 | 747 | 38.29 | 45.92 | 44.44 | 50.87 |
| C4 | 24 | 213 | 12.68 | 30.99 | 21.13 | 23.00 |
| C5 | 48 | 435 | 67.82 | 78.16 | 85.06 | 81.38 |
| C6 | 73 | 657 | 86.00 | 88.43 | 92.69 | 95.74 |
| C7 | 3 | 25 | 36.00 | 72.00 | 60.00 | 72.00 |
| C8 | 48 | 430 | 61.16 | 95.58 | 95.35 | 99.77 |
| C9 | 2 | 18 | 11.11 | 0.00 | 11.11 | 22.22 |
| C10 | 97 | 875 | 42.17 | 60.46 | 56.34 | 57.83 |
| C11 | 246 | 2209 | 59.48 | 67.41 | 66.82 | 75.69 |
| C12 | 59 | 534 | 26.97 | 37.27 | 60.86 | 57.87 |
| C13 | 21 | 184 | 86.96 | 91.85 | 95.65 | 98.91 |
| C14 | 127 | 1138 | 74.87 | 84.27 | 92.62 | 93.76 |
| C15 | 39 | 347 | 32.56 | 27.38 | 50.14 | 53.03 |
| C16 | 9 | 84 | 28.57 | 82.14 | 73.81 | 80.95 |
| Overall Accuracy (OA)/% | | | 54.15 | 63.73 | 69.06 | 73.31 |
Table 3. Classification of land cover types.

| Class | Land Cover Class | Description |
|-------|------------------|-------------|
| C1 | Mangroves | Mangrove forests |
| C2 | Upland vegetation | Deciduous or evergreen forest land, orchards, and tree groves |
| C3 | Urban areas | Residential, commercial, industrial, and other developed land |
| C4 | Water | Permanent open water, lakes, reservoirs, bays, and estuaries |
| C5 | Littoral zone | Land in the intertidal zone or the transitional zone |
| C6 | Fallow land | Fields no longer under cultivation |
| C7 | Agricultural land | Crop fields, paddy fields, and grasslands |
Table 4. Comparison of the confusion matrices (rows: classification results; columns: reference data).

SPP

| | C1 | C2 | C3 | C4 | C5 | C6 | C7 | Total |
|---|---|---|---|---|---|---|---|-------|
| C1 | 21 | 0 | 3 | 0 | 0 | 6 | 4 | 34 |
| C2 | 4 | 50 | 11 | 0 | 0 | 10 | 6 | 81 |
| C3 | 4 | 0 | 32 | 0 | 0 | 1 | 7 | 44 |
| C4 | 0 | 0 | 0 | 47 | 1 | 0 | 0 | 48 |
| C5 | 0 | 0 | 1 | 3 | 49 | 5 | 0 | 58 |
| C6 | 0 | 0 | 0 | 0 | 0 | 26 | 6 | 32 |
| C7 | 1 | 0 | 3 | 0 | 0 | 2 | 27 | 33 |
| Total | 30 | 50 | 50 | 50 | 50 | 50 | 50 | 330 |

PCA

| | C1 | C2 | C3 | C4 | C5 | C6 | C7 | Total |
|---|---|---|---|---|---|---|---|-------|
| C1 | 12 | 0 | 0 | 0 | 0 | 0 | 24 | 36 |
| C2 | 0 | 50 | 0 | 0 | 0 | 0 | 0 | 50 |
| C3 | 15 | 0 | 38 | 0 | 1 | 0 | 5 | 59 |
| C4 | 0 | 0 | 0 | 47 | 0 | 0 | 0 | 47 |
| C5 | 1 | 0 | 0 | 3 | 49 | 5 | 0 | 58 |
| C6 | 0 | 0 | 0 | 0 | 0 | 37 | 0 | 37 |
| C7 | 2 | 0 | 12 | 0 | 0 | 8 | 21 | 43 |
| Total | 30 | 50 | 50 | 50 | 50 | 50 | 50 | 330 |

SDE

| | C1 | C2 | C3 | C4 | C5 | C6 | C7 | Total |
|---|---|---|---|---|---|---|---|-------|
| C1 | 18 | 0 | 0 | 0 | 0 | 6 | 2 | 26 |
| C2 | 0 | 50 | 11 | 0 | 0 | 10 | 0 | 71 |
| C3 | 4 | 0 | 32 | 0 | 0 | 1 | 7 | 44 |
| C4 | 0 | 0 | 0 | 47 | 1 | 0 | 0 | 48 |
| C5 | 0 | 0 | 1 | 3 | 49 | 5 | 0 | 58 |
| C6 | 3 | 0 | 0 | 0 | 0 | 26 | 6 | 35 |
| C7 | 5 | 0 | 6 | 0 | 0 | 2 | 35 | 48 |
| Total | 30 | 50 | 50 | 50 | 50 | 50 | 50 | 330 |

SSEPP

| | C1 | C2 | C3 | C4 | C5 | C6 | C7 | Total |
|---|---|---|---|---|---|---|---|-------|
| C1 | 18 | 0 | 0 | 0 | 0 | 1 | 2 | 21 |
| C2 | 0 | 50 | 11 | 0 | 0 | 10 | 0 | 71 |
| C3 | 4 | 0 | 33 | 0 | 0 | 1 | 7 | 45 |
| C4 | 0 | 0 | 0 | 47 | 1 | 0 | 0 | 48 |
| C5 | 0 | 0 | 1 | 3 | 49 | 5 | 0 | 58 |
| C6 | 3 | 0 | 0 | 0 | 0 | 31 | 6 | 40 |
| C7 | 5 | 0 | 5 | 0 | 0 | 2 | 35 | 47 |
| Total | 30 | 50 | 50 | 50 | 50 | 50 | 50 | 330 |
Table 5. Comparison of the overall accuracy and kappa coefficients.

| | SPP | PCA | SDE | SSEPP |
|---|-----|-----|-----|-------|
| OA/% | 76.36 | 76.97 | 77.88 | 79.70 |
| Kappa | 0.7235 | 0.7307 | 0.7407 | 0.7618 |

Cite as: Cai, F.; Guo, M.-X.; Hong, L.-F.; Huang, Y.-Y. Classification of Hyperspectral Images Based on Supervised Sparse Embedded Preserving Projection. Appl. Sci. 2019, 9, 3583. https://doi.org/10.3390/app9173583
