Article

Hyperspectral Imagery Classification Based on Semi-Supervised Broad Learning System

Yi Kong 1, Xuesong Wang 1, Yuhu Cheng 1 and C. L. Philip Chen 2
1 School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
2 Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau 99999, China; also with the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(5), 685; https://doi.org/10.3390/rs10050685
Submission received: 26 March 2018 / Revised: 18 April 2018 / Accepted: 26 April 2018 / Published: 28 April 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract: Recently, deep learning-based methods have drawn increasing attention in hyperspectral imagery (HSI) classification due to their strong nonlinear mapping capability. However, these methods suffer from a time-consuming training process because of their large number of network parameters. In this paper, the concept of broad learning is introduced into HSI classification. Firstly, to make full use of the abundant spectral and spatial information of hyperspectral imagery, hierarchical guidance filtering is performed on the original HSI to obtain its spectral-spatial representation. Then, the class-probability structure is incorporated into the broad learning model to obtain a semi-supervised broad learning version, so that limited labeled samples and many unlabeled samples can be utilized simultaneously. Finally, the connecting weights of the broad structure can be easily computed through the ridge regression approximation. Experimental results on three popular hyperspectral imagery datasets demonstrate that the proposed method achieves better performance than deep learning-based methods and conventional classifiers.

1. Introduction

Hyperspectral imagery (HSI) captured by hyperspectral sensors has high spectral and spatial resolution and thus has a strong capability to distinguish surface objects [1]. HSI has been widely applied in many fields, including agricultural monitoring [2], environment analysis and prediction [3], and climate monitoring [4]. HSI classification is a common task in these applications, i.e., assigning a surface-object class label to every HSI pixel by using a small number of training samples.
In recent years, many methods have been proposed to address HSI classification. The k-nearest neighbor (KNN) classifier [5] determines the class of a testing sample by calculating the Euclidean distances between the testing and training samples. The support vector machine (SVM) [6,7] projects samples into a high-dimensional space by kernel functions and distinguishes sample classes by learning a classification hyperplane, which achieves satisfactory performance in small-sample classification tasks. The extreme learning machine (ELM) [8,9] is a single-hidden-layer neural network with the following characteristics: (1) the connecting weights between the input-layer and hidden-layer neurons are randomly assigned and do not need to be adjusted during the learning process; and (2) the connecting weights between the hidden-layer and output-layer neurons can be calculated via the least-squares method. Therefore, the computational efficiency of ELM is high.
Recently, deep learning (DL) has been found to automatically learn representative features from data by stacking multi-layer nonlinear units [10,11], and it has been successfully applied to HSI analysis. Chen et al. [12] first introduced DL into HSI classification, directly using spectral and spatial information as the input of a stacked autoencoder (SAE). Afterwards, Tao et al. [13] added sparse constraints to the SAE, and Chen et al. [14] introduced the deep belief network. An unsupervised convolutional neural network (CNN) was designed by Romero et al. [15], in which unsupervised sparse features of HSI are learned through a layer-wise training approach. Compared with the unsupervised CNN, a supervised CNN can extract features that are more helpful for classification. A CNN with pixel-pair features (CNN-PPF) was proposed by Li et al. [16]: by comparing the classes of pairs of samples, new training samples are obtained that greatly outnumber the originals, which supports the learning of the vast number of CNN parameters. To cope with limited labeled samples and the curse of dimensionality, Santara et al. [17] proposed BASS-Net, which has fewer parameters and needs fewer training samples than a conventional CNN. By simplifying the training process, Pan et al. [18] introduced PCANet to realize HSI classification and, because PCANet expresses nonlinearity insufficiently, further proposed NSSNet, which uses kernel PCA (KPCA) instead of PCA for more complex nonlinear mapping. In a later study, Pan et al. [19] proposed R-VCANet, which combines the rolling guidance filter (RGF) and the vertex component analysis network (VCANet). Compared with a conventional CNN, R-VCANet has a simpler structure and fewer network parameters, which demand fewer labeled training samples. Meanwhile, the fully utilized spatial information of HSI makes the features extracted by the network more discriminative, thus achieving higher classification accuracy. Experiments on hyperspectral datasets demonstrated that the classification accuracy of R-VCANet is higher than that of other deep learning methods such as R-PCANet and NSSNet.
However, DL methods require complicated structural adjustment and a vast amount of computation for network training. Aiming at these problems, Chen and Liu [20] proposed a novel broad learning system (BLS) as an alternative learning approach, based on the random vector functional-link neural network (RVFLNN) [21,22]. First, the original data are mapped via random weights into mapped features (MF), which are stored in feature nodes. Next, the MF are similarly mapped via random weights to obtain enhancement nodes (EN) for broad expansion. Finally, an l2-regularized optimization problem is solved by ridge regression approximation to obtain the final network weights. Compared with DL, BLS has the following advantages: (1) BLS is composed of only three parts, while deep learning requires a deep structure stacked from multiple nonlinear units; therefore, BLS has a simpler structure. (2) BLS solves the network weights with ridge regression, while DL adopts gradient descent, which requires many iterations when the weights are not well initialized; therefore, the training process of BLS is simpler and faster. (3) The connecting weights from the input data to the MF and from the MF to the EN are randomly generated, and the trainable parameters merely include the connecting weights from the MF and EN to the output nodes; therefore, compared with DL, BLS generally needs to train fewer network parameters and hence requires fewer labeled training samples. HSI classification faces exactly this problem of a limited number of labeled samples, so BLS might be better suited to HSI classification than DL. However, BLS is a supervised classification method, while the unlabeled samples in HSI are huge in number. To make full use of this information, it is necessary to investigate a semi-supervised BLS.
Semi-supervised learning (SSL) methods have attracted much research attention recently due to their capability to make full use of both the vast number of unlabeled samples and the limited number of labeled samples. Plenty of graph-based SSL methods have been proposed, in which the adjacency structure of the graph is constructed by KNN or ε-ball neighborhoods, and the weight matrix is then determined by a Gaussian kernel [23,24], non-negative local linear reconstruction coefficients [25], etc. However, SSL methods based on conventional graphs have the following disadvantages: (1) the algorithm performance is heavily influenced by the constructed graph; and (2) they are highly sensitive to the neighborhood parameters. Considering these problems, SSL methods based on sparse graphs were subsequently proposed. The nonnegative low-rank and sparse graph proposed by Zhuang et al. [26] can capture both the global mixture-of-subspaces structure (by the low-rankness) and the locally linear structure (by the sparseness) of data; hence, it is both generative and discriminative. De Morsier et al. [27] presented a kernel low-rank and sparse graph, which is based on sample proximities in reproducing kernel Hilbert spaces and expresses sample relationships under sparse and low-rank constraints. However, the class structure of the data is not considered in the above methods. Considering this, Shao et al. [28] presented a class-probability (CP) structure, which expresses the relation between each sample and each class via a class-probability matrix.
In summary, an HSI classification method is proposed based on a semi-supervised BLS (SBLS). The main contributions of this paper include: (1) To our knowledge, this is the first attempt to apply BLS to HSI classification tasks. The proposed SBLS achieves higher HSI classification accuracy and faster training. (2) The class-probability structure is introduced into BLS to extend it into a semi-supervised BLS that makes use of the limited number of labeled samples as well as the vast number of unlabeled samples.

2. HSI Classification Based on SBLS

The flowchart of HSI classification based on SBLS is shown in Figure 1; it includes three steps: (1) the original HSI data are processed by hierarchical guidance filtering (HGF) to obtain the spectral-spatial expression of the HSI; (2) the pseudo labels of the unlabeled samples are obtained via the CP structure; and (3) SBLS is trained on the labeled samples with their labels, together with the unlabeled samples and their pseudo labels.

2.1. Hierarchical Guidance Filtering

The first step of SBLS is to obtain the HGF representation of the HSI, shown as Step 1 in Figure 1. The original hyperspectral image is expressed in the form of a 3D tensor. If the tensor is directly vectorized, not only is the data dimension greatly increased, but the inherent data structure is also destroyed. Pan et al. [29] proposed a spectral-spatial expression of HSI data using HGF. As an edge-preserving filtering method, HGF can remove noise and small details while preserving the overall structure of the image, and can thus map the original HSI data into a feature subspace with a richer feature expression. Given this superiority of HGF, the original HSI is processed by HGF to obtain the spectral-spatial expression of the HSI.
As an extension of guided filtering and rolling guidance filtering, HGF can generate a series of joint spectral-spatial features. HGF minimizes the following energy function:
E(a_k^p, b_k^p) = Σ_{i∈ω_k} ((a_k^p G_i + b_k^p − Ĩ_i^p)² + ε (a_k^p)²)    (1)
where a_k^p and b_k^p are linear coefficients based on the input HSI data Ĩ and the guidance image G; ω_k is the window around pixel k with size (2r+1)×(2r+1); r is the window radius; i indexes a pixel in ω_k; p denotes the p-th band; and ε is a controlling parameter: a larger ε leads to a smoother output. Equation (1) is a ridge regression and can be solved by:
a_k^p = ((1/|ω|) Σ_{i∈ω_k} Ĩ_i^p G_i − μ_k Ī_k^p) / (σ_k² + ε),   b_k^p = Ī_k^p − a_k^p μ_k    (2)
where μ_k and σ_k² are the mean and variance of G in ω_k, respectively; Ī_k^p is the mean of Ĩ^p in ω_k; and |ω| is the number of pixels in ω_k. More details can be found in [29]. HGF is a preprocessing technique, and a similar strategy is also used in [19,29].
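For concreteness, the per-band guided filtering of Equations (1) and (2) can be sketched in a few lines of NumPy. This is a minimal single-band illustration that assumes a uniform box filter for the window means; the full HGF of [29] applies such filtering hierarchically with a rolling-guidance strategy, and the function and parameter names below are illustrative rather than taken from the authors' code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter_band(I_p, G, r=2, eps=1e-2):
    """Filter one spectral band I_p (H x W) with guidance image G (H x W)."""
    size = 2 * r + 1                                   # (2r+1) x (2r+1) window
    mean_G = uniform_filter(G, size)                   # mu_k
    mean_I = uniform_filter(I_p, size)                 # I_bar_k^p
    corr_GI = uniform_filter(G * I_p, size)            # window mean of G * I^p
    var_G = uniform_filter(G * G, size) - mean_G ** 2  # sigma_k^2
    a = (corr_GI - mean_G * mean_I) / (var_G + eps)    # a_k^p in Eq. (2)
    b = mean_I - a * mean_G                            # b_k^p in Eq. (2)
    # Average the linear coefficients over all windows covering each pixel,
    # then form the filtered output a_i * G_i + b_i.
    return uniform_filter(a, size) * G + uniform_filter(b, size)
```

Applying such a filter band by band and stacking the outputs gives one level of the spectral-spatial representation; roughly speaking, repeating the process with updated guidance images produces the hierarchy.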

2.2. Class-Probability Structure

The second step of SBLS is to obtain the pseudo labels of the unlabeled samples via the CP structure, shown as Step 2 in Figure 1. The labeled samples in HGF expression X_S = {x_1, …, x_{n_S}} ∈ R^{n_S×m} and the corresponding labels Y_S = {y_1, …, y_{n_S}} ∈ R^{n_S×c} are given, where n_S is the number of labeled samples, m is the dimensionality, c is the number of classes, and y_{ij} is a binary number: y_{ij} = 1 if the i-th sample belongs to the j-th class, and y_{ij} = 0 otherwise. The unlabeled samples in HGF expression X_U = {x_1, …, x_{n_U}} ∈ R^{n_U×m} are also given, where n_U is the number of unlabeled samples, and the overall number of samples is n = n_S + n_U. A sample x_i can then be sparsely represented over the labeled samples X_S as follows:
min_a ||a||_1   s.t.   X_S a = x_i    (3)
where a is the sparse coefficient vector. Equation (3) can be solved with the alternating direction method of multipliers with adaptive penalty (ADMAP); more details can be found in [28]. The class-probability vector of x_i is written as:
p_i = a^T Y_S    (4)
where p_i = (p_i^1, p_i^2, …, p_i^c) ∈ R^{1×c}, and p_i^c denotes the probability that the i-th sample belongs to the c-th class. For the unlabeled samples, the class-probability matrix P_U ∈ R^{n_U×c} is obtained via label propagation; for the labeled samples, the class-probability matrix P_S ∈ R^{n_S×c} is defined directly from the labels. Therefore, the probability that the i-th and j-th samples belong to an identical class is written as:
P_{ij} = 1 if i = j;   P_{ij} = p_i p_j^T if i ≠ j    (5)
As a further step, P can be partitioned as P = [P_SS, P_SU; P_US, P_UU], where P_SS contains the probabilities that pairs of labeled samples share the same class, P_UU contains those for pairs of unlabeled samples, and P_US and P_SU contain the probabilities that an unlabeled sample and a labeled sample share the same class. Finding the index of the maximum probability in each row of P_US yields the labeled sample most similar to each unlabeled one, and hence the pseudo label Y_U of the unlabeled samples. The calculation principle is as follows:
if P_{ij} = max(P_i), then y_i^U = y_j^S    (6)
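The whole pseudo-labeling step, Equations (3)–(6), can be sketched as follows. For brevity, the l1 problem of Equation (3) is solved with scikit-learn's Lasso as a stand-in for the ADMAP solver of [28], and clipping/renormalizing the class-probability vector is our assumption; all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def pseudo_labels(X_S, Y_S, X_U, gamma=1e-3):
    """X_S: (n_S, m) labeled samples; Y_S: (n_S, c) one-hot labels; X_U: (n_U, m)."""
    P_S = Y_S.astype(float)                  # class probabilities of labeled samples
    P_U = np.zeros((X_U.shape[0], Y_S.shape[1]))
    for i, x in enumerate(X_U):
        # Eq. (3): sparse code of x over the labeled samples (Lasso replaces ADMAP)
        a = Lasso(alpha=gamma, fit_intercept=False).fit(X_S.T, x).coef_
        p = a @ Y_S                          # Eq. (4): class-probability vector
        p = np.maximum(p, 0)                 # assumption: clip negatives and
        P_U[i] = p / p.sum() if p.sum() > 0 else p  # renormalize to a probability
    P_US = P_U @ P_S.T                       # Eq. (5): same-class probabilities
    return Y_S[np.argmax(P_US, axis=1)]      # Eq. (6): copy most similar label
```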

2.3. SBLS

The third step of SBLS is to train the SBLS model and obtain the predictive labels of the unlabeled samples, shown as Step 3 in Figure 1. BLS is built on the RVFLNN and includes three parts: mapped features (the mapping from the inputs), enhancement nodes (the mapping from the mapped features), and output labels (the joint mapping from the mapped features and enhancement nodes). The learning parameter is W^m, which can be quickly and approximately obtained by ridge regression. However, the BLS model is a supervised method and cannot utilize the vast number of unlabeled samples common in HSI. Hence, to better adapt BLS to HSI classification, it is necessary to investigate a semi-supervised BLS. Here, the CP structure is introduced into BLS, and SBLS is proposed to realize semi-supervised classification of HSI.
The HSI samples X = [X_S; X_U] ∈ R^{n×m} in HGF expression are given, as well as the labels Y_S and the pseudo labels Y_U obtained by the class-probability structure. In SBLS, the input is first mapped to the mapped features via random weights W^M = [W_1^M, …, W_{G_M}^M] and biases β^M = [β_1^M, …, β_{G_M}^M], that is:
Z_i = ϕ_i(X W_i^M + β_i^M),   i = 1, …, G_M    (7)
where G_M is the number of groups of MF, and ϕ_i(·) is a nonlinear function; different functions can be chosen for different groups of MF. Here, linear mapping is used in all MF for simplicity, i.e., Z_i = X W_i^M + β_i^M. To obtain better features, W^M is usually fine-tuned by a linear sparse autoencoder.
After obtaining the MF Z = [Z_1, Z_2, …, Z_{G_M}], the broad expansion of SBLS is realized by mapping the MF to the EN with random weights W^E and biases β^E:
H_j = ϕ_j(Z W_j^E + β_j^E),   j = 1, …, G_E    (8)
where G_E is the number of ENs. Further, the SBLS model is expressed as:
[Y_S | Y_U] = [Z | H] W^m    (9)
where W^m denotes the connecting weights from the MF and EN to the output nodes, which can be obtained by solving the following problem:
argmin_{W^m} ||[Z | H] W^m − [Y_S | Y_U]||_2² + λ ||W^m||_2²    (10)
where λ controls the constraint on the sum of squares of W^m. Equation (10) can be solved by ridge regression:
W^m = (λ I + [Z | H]^T [Z | H])^{−1} [Z | H]^T [Y_S | Y_U]    (11)
If λ = 0, Equation (10) degenerates into the least-squares problem. On the other hand, if λ → ∞, the solution is heavily constrained and tends to 0. Thus, we set λ → 0 here, e.g., λ = 2^{−30}. By approximating the Moore–Penrose generalized inverse of [Z | H], Equation (11) can be written as:
W^m = [Z | H]^+ [Y_S | Y_U]    (12)
Specifically, we have:
[Z | H]^+ = lim_{λ→0} (λ I + [Z | H]^T [Z | H])^{−1} [Z | H]^T    (13)
Finally, the predictive labels can be obtained by
Y = [Z | H] W^m    (14)
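Putting Equations (7)–(14) together, SBLS training reduces to a handful of matrix operations. The NumPy sketch below uses linear mapped features as in the text, a tanh nonlinearity for the enhancement nodes (an assumption, since ϕ_j is left open), and omits the optional sparse-autoencoder fine-tuning of W^M; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_sbls(X, Y, G_M=50, n_mf=50, G_E=300, lam=2.0 ** -30):
    """X: (n, m) HGF features (labeled + pseudo-labeled); Y: (n, c) label matrix."""
    n, m = X.shape
    W_M = rng.standard_normal((m, G_M * n_mf))       # random MF weights, Eq. (7)
    b_M = rng.standard_normal((1, G_M * n_mf))
    Z = X @ W_M + b_M                                # linear phi_i, as in the text
    W_E = rng.standard_normal((Z.shape[1], G_E))     # random EN weights, Eq. (8)
    b_E = rng.standard_normal((1, G_E))
    H = np.tanh(Z @ W_E + b_E)
    A = np.hstack([Z, H])                            # [Z | H]
    # Eq. (11): ridge-regression solution; as lam -> 0 it approaches the
    # pseudo-inverse solution of Eqs. (12)-(13), i.e., np.linalg.pinv(A) @ Y,
    # which is also the numerically safer choice when A is ill-conditioned.
    W_m = np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T @ Y)
    return W_M, b_M, W_E, b_E, W_m

def predict_sbls(X, params):
    W_M, b_M, W_E, b_E, W_m = params
    Z = X @ W_M + b_M
    A = np.hstack([Z, np.tanh(Z @ W_E + b_E)])
    return np.argmax(A @ W_m, axis=1)                # Eq. (14), hard labels
```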
In summary, the algorithm steps of HSI classification based on SBLS are shown in Table 1.

3. Experiments and Analysis

3.1. HSI Datasets

In this section, three real HSI datasets, i.e., Indian Pines, Salinas, and Botswana, are used to evaluate the accuracy and efficiency of the proposed SBLS method. Figure 2 shows the ground-truth maps of the three HSI datasets. For each dataset, 20 samples are randomly selected from each surface-object class as labeled (training) samples, with the remainder used as unlabeled (testing) samples.
(1) For supervised classification methods, only the labeled samples are used to train the classifier and the trained classifier is used to predict the labels of unlabeled samples.
(2) For semi-supervised classification methods, both labeled and unlabeled samples are used to train the classifier. In addition, since the total size of the Salinas dataset is large, only part of the unlabeled samples participates in the classifier training (500 per class; see Table 2).
(3) Since the total size of the surface object "Oats" in the Indian Pines dataset is small, for this class the number of labeled samples (denoted by s.l.s.) equals the number of unlabeled samples (denoted by s.u.s.). The detailed sample settings for the different HSI datasets are shown in Table 2, and a sketch of this per-class split is given below.
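The per-class split can be reconstructed as in the following sketch (20 labeled samples per class, or half of the class when it is as small as "Oats"); this is a plausible reconstruction of the protocol, not the authors' script.

```python
import numpy as np

def split_labeled(labels, n_labeled=20, seed=0):
    """labels: (n,) integer ground-truth classes; returns labeled/unlabeled indices."""
    rng = np.random.default_rng(seed)
    lab_idx, unlab_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        k = min(n_labeled, len(idx) // 2)   # e.g., Oats (20 samples) -> 10/10 split
        lab_idx.append(idx[:k])
        unlab_idx.append(idx[k:])
    return np.concatenate(lab_idx), np.concatenate(unlab_idx)
```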

3.2. Comparative Experiments

To evaluate the performance of the proposed SBLS on HSI classification, we investigate the following nine methods for comparison.
(1) The traditional classifiers are SVM [6], ELM [8], and SPELM [9]. Since only linear feature mapping is used in BLS and SBLS, the linear kernel function is used in SVM and ELM in our experiments. The hyperparameters of SVM and ELM are selected through five-fold cross-validation, and the penalty factor of SVM and the regularization coefficients of ELM and SPELM are selected from {1, 10, 100, 1000}. In addition, HSI data after HGF preprocessing are taken as the input of SVM, ELM, and SPELM for a fair comparison. The number of trials of SPELM is set to 50.
(2) The semi-supervised graph-based classification method is SSG [23]. The Gaussian width and the regularization parameter are selected from {10^{−5}, 10^{−4}, …, 10^{5}}.
(3) The deep learning-based methods are CNN-PPF [16], BASS-Net [17], and R-VCANet [19]. Their network configurations follow the corresponding articles, respectively.
(4) The spectral-spatial classification method is HiFi-We [29].
(5) BLS [20], where HSI data after HGF preprocessing are taken as the input.
The proposed SBLS and the nine comparative methods are applied to the three HSI datasets. The experiments on CNN-PPF and BASS-Net are run on the Theano and Torch platforms with a GTX 980 GPU; the other experiments are performed in MATLAB R2014a on a computer with a 3.60 GHz Intel Core i7-4790 CPU and 16 GB of RAM. Each experiment is conducted five times and the average is reported to account for stochasticity. Table 3, Table 4 and Table 5, respectively, show the comparison of classification performance on the different datasets, where five performance indexes are considered: the accuracy on each surface object (%), average accuracy (AA, %), overall accuracy (OA, %), Kappa coefficient, and time consumed (t, s) for classifier training and testing-sample classification.
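For reference, the three summary indexes in Tables 3–5 can be computed from a confusion matrix as in the following sketch; this is a standard implementation, not the authors' evaluation code.

```python
import numpy as np

def summary_indexes(y_true, y_pred, n_classes):
    """Per-class accuracy, AA, OA, and Kappa from integer label vectors."""
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1                                 # confusion matrix
    per_class = np.diag(C) / C.sum(axis=1)           # accuracy on each surface object
    aa = per_class.mean()                            # average accuracy (AA)
    oa = np.trace(C) / C.sum()                       # overall accuracy (OA)
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / C.sum() ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                     # Kappa coefficient
    return per_class, aa, oa, kappa
```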
The following can be observed in Table 3, Table 4 and Table 5:
(1) The AA, OA, and Kappa coefficient of SBLS on the three datasets are the highest. This is because the CP structure introduced into SBLS can make use of the vast number of unlabeled samples, in contrast to BLS.
(2) ELM has the shortest consumed time, followed by SVM. Apart from SVM and ELM, BLS has the shortest consumed time. This is because the BLS network parameters can be directly solved by the generalized inverse, and BLS has a simple network structure.
(3) CNN-PPF, BASS-Net, and R-VCANet have longer consumed times because they are deep learning methods. For BASS-Net, a large number of iterations is needed when the network parameters are updated by gradient descent. For CNN-PPF, to support the training of a CNN with many layers, the training samples are greatly expanded in number, which lengthens the training time. For R-VCANet, the testing process is time-consuming due to the high dimension of the features extracted per layer.
(4) Compared with BLS, SBLS has a longer consumed time, because the correlation computation between samples in the CP structure consumes much time.
The classification maps on the Indian Pines and Salinas datasets are shown in Figure 3 and Figure 4, respectively. A conclusion consistent with the above can be drawn from Figure 3 and Figure 4: SBLS yields the best HSI classification results.

3.3. Parameter Analysis

The adjustable parameters in SBLS include the number of MF groups and the number of MF nodes per group; the MF group number is set equal to the number of nodes per group and denoted G_M, while the number of EN nodes is denoted G_E. The relation between OA and G_M or G_E of SBLS on the three datasets is shown in Figure 5. It demonstrates the following:
(1) As G_M and G_E increase, the OA on each of the three datasets first rises and then drops. This is because the expression ability of SBLS increases gradually and then saturates as G_M and G_E grow.
(2) Excessively small G_M and G_E lead to a decrease in OA, while excessively large G_M and G_E incur additional computation. Therefore, G_M and G_E are selected from 30–100, 40–400, and 20–500 for the three datasets, respectively (a grid-search sketch is given below).
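The parameter study of Figure 5 amounts to a grid search over (G_M, G_E). A hedged sketch, reusing the train_sbls and predict_sbls functions from the Section 2.3 sketch, might look as follows:

```python
import numpy as np

def grid_search_oa(X_tr, Y_tr, X_te, y_te, gm_grid, ge_grid):
    """OA for every (G_M, G_E) pair; y_te holds integer ground-truth labels."""
    oa = np.zeros((len(gm_grid), len(ge_grid)))
    for i, gm in enumerate(gm_grid):
        for j, ge in enumerate(ge_grid):
            # MF group number and nodes per group are tied, as in the text
            params = train_sbls(X_tr, Y_tr, G_M=gm, n_mf=gm, G_E=ge)
            oa[i, j] = np.mean(predict_sbls(X_te, params) == y_te)
    return oa
```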

4. Discussion

The experimental results show the following:
(1) In terms of classification accuracy, our proposed method, SBLS, achieves the highest performance. There are two main reasons. First, BLS helps us to obtain a more accurate mapping between the input HSI and the labels by utilizing a small number of labeled samples. Second, by exploring the relationship between each sample and each class with the class-probability structure, we can find the labeled samples most similar to each unlabeled one; the pseudo labels of the unlabeled samples can thus be assigned, and both the labeled and unlabeled samples can be utilized.
(2) In terms of consumed time, compared with the deep learning-based methods, BLS and SBLS consume less time. The reasons are twofold. On the one hand, the architectures of the two broad learning-based methods contain only three parts (MF, EN, and output layer), while the deep learning-based methods are built with many layers. On the other hand, broad learning has fewer trainable parameters than deep learning, and they can be easily solved with ridge regression. Compared with BLS, SBLS is more time-consuming because it utilizes more samples and requires extra computation to obtain the class-probability matrix.
(3) The classification maps (Figure 3 and Figure 4) and the analysis of the adjustable parameters (Figure 5) provide more detail. We cannot guarantee that the OA obtained by SBLS is the highest under every parameter setting; this is mainly because, when there are too few nodes, much information is lost during the mapping procedure.
The drawbacks of the proposed SBLS method are summarized as follows:
(1) Similar to other types of classifiers, both BLS and SBLS are sensitive to the input representation.
(2) If too many MF or EN nodes are set, much memory is consumed.

5. Conclusions

With the advances of hyperspectral imaging techniques, HSI classification remains an active and challenging topic in the remote sensing community. Owing to the difficulty of obtaining labeled samples, a semi-supervised broad learning system-based HSI classification method called SBLS is proposed in this paper, which incorporates the class-probability structure into broad learning. The designed model can take advantage of limited labeled samples and a large number of unlabeled samples simultaneously. Compared with deep learning-based methods, the weights of SBLS can be easily computed through ridge regression approximation instead of gradient descent. Nine classification methods are compared, including three traditional classifiers (SVM, ELM, and SPELM), one semi-supervised graph-based method (SSG), three deep learning-based methods (CNN-PPF, BASS-Net, and R-VCANet), one spectral-spatial method (HiFi-We), and the original broad learning system (BLS). Experimental results on three real hyperspectral datasets (Indian Pines, Salinas, and Botswana) demonstrate that, under the condition of limited labeled samples, the proposed SBLS method not only obtains higher classification accuracy, but also costs much less time than the deep learning-based methods.
Moreover, the proposed SBLS still leaves room for improvement. For instance, SBLS cannot determine the labels of samples when there is no a priori information. We will explore an unsupervised version of BLS for HSI clustering.

Author Contributions

All of the authors made significant contributions to the work. Y.K. and Y.C. conceived and designed the experiments; Y.K., X.W. and Y.C. performed the experiments; Y.K., X.W. and Y.C. analyzed the data; C.L.P.C. provided the codes about broad learning system; Y.K., X.W., Y.C., and C.L.P.C. wrote the paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61772532, Grant 61472424, and Grant 61703404.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, L.; Yang, B.; Du, Q.; Zhang, B. Adjusted spectral matched filter for target detection in hyperspectral imagery. Remote Sens. 2015, 7, 6611–6634. [Google Scholar] [CrossRef]
  2. Onoyama, H.; Ryu, C.; Suguri, M.; Iida, M. Integrate growing temperature to estimate the nitrogen content of rice plants at the heading stage using hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2506–2515. [Google Scholar] [CrossRef]
  3. Brunet, D.; Sills, D. A generalized distance transform: Theory and applications to weather analysis and forecasting. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1752–1764. [Google Scholar] [CrossRef]
  4. Islam, T.; Hulley, G.C.; Malakar, N.K.; Radocinski, R.G.; Guillevic, P.C.; Hook, S.J. A physics-based algorithm for the simultaneous retrieval of land surface temperature and emissivity from VIIRS thermal infrared data. IEEE Trans. Geosci. Remote Sens. 2017, 55, 563–576. [Google Scholar] [CrossRef]
  5. Li, W.; Du, Q.; Zhang, F.; Hu, W. Collaborative-representation-based nearest neighbor classifier for hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2015, 12, 389–393. [Google Scholar] [CrossRef]
  6. Wu, Y.F.; Yang, X.H.; Plaza, A.; Qiao, F.; Gao, L.R.; Zhang, B.; Cui, Y.B. Approximate computing of remotely Sensed data: SVM hyperspectral image classification as a case study. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 5806–5818. [Google Scholar] [CrossRef]
  7. Xue, Z.; Du, P.; Su, H. Harmonic analysis for hyperspectral image classification integrated with PSO optimized SVM. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2131–2146. [Google Scholar] [CrossRef]
  8. Li, W.; Chen, C.; Su, H.; Du, Q. Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693. [Google Scholar] [CrossRef]
  9. Alom, M.; Sidike, P.; Taha, T.; Asari, V. State preserving extreme learning machine: A monotonically increasing learning approach. Neural Process. Lett. 2016, 45, 703–725. [Google Scholar] [CrossRef]
  10. Feng, S.; Chen, C.L.P. A fuzzy restricted Boltzmann machine: Novel learning algorithms based on crisp possibilistic mean value of fuzzy numbers. IEEE Trans. Fuzzy Syst. 2016. [Google Scholar] [CrossRef]
  11. Chen, C.L.P.; Zhang, C.Y.; Chen, L.; Gan, M. Fuzzy restricted Boltzmann machine for the enhancement of deep learning. IEEE Trans. Fuzzy Syst. 2015, 23, 2163–2173. [Google Scholar] [CrossRef]
  12. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  13. Tao, C.; Pan, H.; Li, Y.; Zou, Z. Unsupervised spectral-spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2438–2442. [Google Scholar]
  14. Chen, Y.; Zhao, X.; Jia, X. Spectral-spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1–12. [Google Scholar] [CrossRef]
  15. Romero, A.; Gatta, C.; Camps-Valls, G. Unsupervised deep feature extraction for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1349–1362. [Google Scholar] [CrossRef]
  16. Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans. Geosci. Remote Sens. 2017, 55, 844–853. [Google Scholar] [CrossRef]
  17. Santara, A.; Mani, K.; Hatwar, P.; Singh, A.; Garg, A.; Padia, K.; Mitra, P. BASS net: Band-adaptive spectral-spatial feature learning neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017. [Google Scholar] [CrossRef]
  18. Pan, B.; Shi, Z.; Zhang, N.; Xie, S. Hyperspectral image classification based on nonlinear spectral-spatial network. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1782–1786. [Google Scholar] [CrossRef]
  19. Pan, B.; Shi, Z.; Xu, X. R-VCANet: A new deep-learning-based hyperspectral image classification method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1975–1986. [Google Scholar] [CrossRef]
  20. Chen, C.L.P.; Liu, Z. Broad learning system: An effective and efficient incremental learning system without the need for deep architecture. IEEE Trans. Neural Netw. Learn. Syst. 2017. [Google Scholar] [CrossRef] [PubMed]
  21. Chen, C.L.P.; Wan, J.Z. A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction. IEEE Trans. Syst. Man Cybern. Part B 1999, 29, 62–72. [Google Scholar] [CrossRef] [PubMed]
  22. Chen, C.L.P. A rapid supervised learning neural network for function interpolation and approximation. IEEE Trans. Neural Netw. 1996, 7, 1220–1230. [Google Scholar] [CrossRef] [PubMed]
  23. Camps-Valls, G.; Marsheva, T.B.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054. [Google Scholar] [CrossRef]
  24. Belkin, M.; Niyogi, P.; Sindhwani, V. Manifold regularization: A geometric framework for learning from examples. J. Mach. Learn. Res. 2006, 7, 2399–2434. [Google Scholar]
  25. Wang, F.; Zhang, C. Label propagation through linear neighborhoods. IEEE Trans. Knowl. Data Eng. 2008, 20, 55–67. [Google Scholar] [CrossRef]
  26. Zhuang, L.; Gao, S.; Tang, J.; Wang, J.; Lin, Z.; Ma, Y.; Yu, N. Constructing a nonnegative low-rank and sparse graph with data-adaptive features. IEEE Trans. Image Process. 2015, 24, 3717–3728. [Google Scholar] [CrossRef] [PubMed]
  27. De Morsier, F.; Borgeaud, M.; Gass, V.; Thiran, J.P.; Tuia, D. Kernel low-rank and sparse graph for unsupervised and semi-supervised classification of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3410–3420. [Google Scholar] [CrossRef]
  28. Shao, Y.J.; Sang, N.; Gao, C.X.; Ma, L. Probabilistic class structure regularized sparse representation graph for semi-supervised hyperspectral image classification. Pattern Recognit. 2017, 63, 102–114. [Google Scholar] [CrossRef]
  29. Pan, B.; Shi, Z.W.; Xu, X. Hierarchical guidance filtering based ensemble classification for hyperspectral image. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4177–4189. [Google Scholar] [CrossRef]
Figure 1. Flowchart of HSI classification based on SBLS.
Figure 2. Ground truth maps of three HSI datasets: (a) Indian Pines; (b) Salinas; and (c) Botswana.
Figure 3. Classification maps on the Indian Pines dataset: (a) SVM; (b) ELM; (c) SPELM; (d) SSG; (e) CNN-PPF; (f) BASS-Net; (g) R-VCANet; (h) HiFi-We; (i) BLS; and (j) SBLS.
Figure 4. Classification maps on the Salinas dataset: (a) SVM; (b) ELM; (c) SPELM; (d) SSG; (e) CNN-PPF; (f) BASS-Net; (g) R-VCANet; (h) HiFi-We; (i) BLS; and (j) SBLS.
Figure 5. OA versus G_M and G_E: (a) Indian Pines; (b) Salinas; and (c) Botswana.
Table 1. The proposed HSI classification method based on SBLS.
Input: HGF-based HSI spectral-spatial representation.
(a) Calculate the class-probability matrix P according to Equation (5).
(b) Calculate the pseudo labels Y_U for the unlabeled samples according to Equation (6).
(c) Calculate Z and H according to Equations (7) and (8), respectively.
(d) Calculate the weights W^m of BLS according to Equations (12) and (13).
(e) Calculate the predictive labels Y with Equations (7), (8), and (14), according to W^M, β^M, W^E, β^E, and W^m.
Output: predictive labels Y.
Table 2. Size of labeled and unlabeled samples for different HSI datasets.
No. | Surface Object (Indian Pines) | s.l.s. | s.u.s. | Surface Object (Salinas) | s.l.s. | s.u.s. | Surface Object (Botswana) | s.l.s. | s.u.s.
1 | Alfalfa | 20 | 26 | Brocoli_green_weeds_1 | 20 | 500 | Water | 20 | 250
2 | Corn-notill | 20 | 1408 | Brocoli_green_weeds_2 | 20 | 500 | Hippo grass | 20 | 81
3 | Corn-mintill | 20 | 810 | Fallow | 20 | 500 | Floodplain grasses1 | 20 | 231
4 | Corn | 20 | 217 | Fallow_rough_plow | 20 | 500 | Floodplain grasses2 | 20 | 195
5 | Grass-pasture | 20 | 463 | Fallow_smooth | 20 | 500 | Reeds1 | 20 | 249
6 | Grass-trees | 20 | 710 | Stubble | 20 | 500 | Riparian | 20 | 249
7 | Grass-pasture-mowed | 20 | 8 | Celery | 20 | 500 | Firescar2 | 20 | 239
8 | Hay-windrowed | 20 | 458 | Grapes_untrained | 20 | 500 | Island interior | 20 | 183
9 | Oats | 10 | 10 | Soil_vinyard_develop | 20 | 500 | Acacia woodlands | 20 | 294
10 | Soybean-notill | 20 | 952 | Corn_senesced_green_weeds | 20 | 500 | Acacia shrublands | 20 | 228
11 | Soybean-mintill | 20 | 2435 | Lettuce_romaine_4wk | 20 | 500 | Acacia grasslands | 20 | 285
12 | Soybean-clean | 20 | 573 | Lettuce_romaine_5wk | 20 | 500 | Short mopane | 20 | 161
13 | Wheat | 20 | 185 | Lettuce_romaine_6wk | 20 | 500 | Mixed mopane | 20 | 248
14 | Woods | 20 | 1245 | Lettuce_romaine_7wk | 20 | 500 | Exposed soils | 20 | 75
15 | Buildings-Grass-Trees-Drives | 20 | 366 | Vinyard_untrained | 20 | 500 | - | - | -
16 | Stone-Steel-Towers | 20 | 73 | Vinyard_vertical_trellis | 20 | 500 | - | - | -
Table 3. Comparison of classification performance on the Indian Pines dataset.
Surface Object | SVM [6] | ELM [8] | SPELM [9] | SSG [23] | CNN-PPF [16] | BASS-Net [17] | R-VCANet [19] | HiFi-We [29] | BLS [20] | SBLS
Alfalfa (%) | 54.15 | 66.63 | 90.13 | 96.15 | 32.84 | 80.77 | 100 | 100 | 96.15 | 99.23
Corn-notill (%) | 75.00 | 79.17 | 87.01 | 71.46 | 53.64 | 55.33 | 64.98 | 85.72 | 78.69 | 84.73
Corn-mintill (%) | 72.11 | 87.93 | 85.30 | 84.07 | 56.36 | 59.75 | 86.05 | 91.36 | 98.52 | 94.49
Corn (%) | 54.13 | 61.31 | 77.35 | 93.82 | 33.73 | 91.24 | 99.54 | 98.16 | 100 | 100
Grass-pasture (%) | 87.88 | 98.58 | 94.31 | 85.40 | 79.34 | 89.42 | 91.14 | 91.14 | 96.33 | 90.67
Grass-trees (%) | 98.15 | 99.40 | 99.62 | 97.35 | 94.67 | 94.93 | 99.30 | 99.86 | 90.14 | 99.86
Grass-pasture-mowed (%) | 27.34 | 28.59 | 38.46 | 97.50 | 57.14 | 100 | 100 | 100 | 100 | 100
Hay-windrowed (%) | 100 | 98.35 | 99.57 | 98.69 | 91.02 | 99.56 | 98.47 | 98.69 | 87.12 | 99.34
Oats (%) | 66.21 | 73.78 | 94.85 | 100 | 58.82 | 100 | 100 | 100 | 100 | 100
Soybean-notill (%) | 72.36 | 71.24 | 80.18 | 79.33 | 53.83 | 73.11 | 89.50 | 83.82 | 82.98 | 88.11
Soybean-mintill (%) | 91.85 | 82.31 | 94.38 | 78.94 | 72.81 | 54.74 | 72.98 | 83.61 | 89.40 | 88.38
Soybean-clean (%) | 75.78 | 57.59 | 82.56 | 78.25 | 43.46 | 63.18 | 95.64 | 87.09 | 97.91 | 93.40
Wheat (%) | 99.78 | 99.47 | 100 | 99.14 | 98.90 | 98.92 | 98.92 | 99.46 | 100 | 99.68
Woods (%) | 99.57 | 99.50 | 99.74 | 94.18 | 93.14 | 82.09 | 94.14 | 99.20 | 98.96 | 99.81
Buildings-Grass-Trees-Drives (%) | 83.83 | 92.84 | 97.97 | 85.74 | 73.10 | 65.57 | 90.16 | 89.89 | 99.73 | 99.07
Stone-Steel-Towers (%) | 99.19 | 96.35 | 98.12 | 98.63 | 87.18 | 98.63 | 100 | 98.63 | 98.63 | 99.18
AA (%) | 78.58 | 80.82 | 88.72 | 89.91 | 67.50 | 81.70 | 92.55 | 94.16 | 94.66 | 95.99
OA (%) | 83.56 | 83.01 | 90.78 | 83.91 | 67.20 | 69.95 | 84.36 | 89.94 | 90.88 | 92.47
Kappa | 0.8139 | 0.8071 | 0.8950 | 0.8177 | 0.6325 | 0.6616 | 0.8234 | 0.8855 | 0.8959 | 0.9143
t (s) | 0.98 | 0.34 | 35.80 | 372.96 | 1500.03 | 1251.78 | 3238.74 | 250.16 | 4.81 | 420.02
Table 4. Comparison of classification performance on the Salinas dataset.
Surface Object | SVM [6] | ELM [8] | SPELM [9] | SSG [23] | CNN-PPF [16] | BASS-Net [17] | R-VCANet [19] | HiFi-We [29] | BLS [20] | SBLS
Brocoli_green_weeds_1 (%) | 100 | 100 | 100 | 98.06 | 99.95 | 99.50 | 99.60 | 99.66 | 99.97 | 100
Brocoli_green_weeds_2 (%) | 99.80 | 100 | 99.95 | 93.84 | 98.84 | 99.65 | 99.87 | 99.19 | 99.51 | 99.81
Fallow (%) | 91.07 | 99.80 | 99.88 | 88.20 | 78.47 | 99.49 | 98.06 | 99.06 | 100 | 99.96
Fallow_rough_plow (%) | 97.33 | 97.01 | 98.87 | 94.29 | 95.81 | 98.84 | 98.91 | 99.07 | 99.33 | 99.80
Fallow_smooth (%) | 97.26 | 91.77 | 96.82 | 90.32 | 96.21 | 97.03 | 99.32 | 98.59 | 98.98 | 99.13
Stubble (%) | 99.70 | 99.97 | 99.98 | 94.54 | 99.61 | 99.80 | 98.65 | 99.20 | 99.78 | 99.80
Celery (%) | 98.26 | 99.90 | 99.81 | 92.89 | 97.66 | 99.72 | 98.20 | 98.73 | 99.53 | 99.84
Grapes_untrained (%) | 85.31 | 77.57 | 86.12 | 56.65 | 72.84 | 65.18 | 70.71 | 78.99 | 88.81 | 91.31
Soil_vinyard_develop (%) | 99.20 | 98.43 | 98.50 | 89.73 | 99.08 | 98.61 | 99.74 | 99.87 | 99.97 | 99.65
Corn_senesced_green_weeds (%) | 84.84 | 95.92 | 96.61 | 77.38 | 80.84 | 87.78 | 91.22 | 89.13 | 93.52 | 94.12
Lettuce_romaine_4wk (%) | 86.68 | 92.56 | 96.49 | 89.43 | 63.20 | 92.18 | 98.66 | 97.73 | 99.54 | 99.79
Lettuce_romaine_5wk (%) | 97.27 | 97.76 | 95.56 | 95.02 | 91.72 | 98.53 | 100 | 99.97 | 99.94 | 100
Lettuce_romaine_6wk (%) | 96.52 | 88.63 | 96.41 | 91.58 | 96.84 | 96.21 | 99.44 | 96.58 | 99.11 | 99.00
Lettuce_romaine_7wk (%) | 86.62 | 76.20 | 88.69 | 89.60 | 88.61 | 96.95 | 97.33 | 96.50 | 97.18 | 97.10
Vinyard_untrained (%) | 67.10 | 81.56 | 79.71 | 72.97 | 60.14 | 67.43 | 74.66 | 87.14 | 82.75 | 89.27
Vinyard_vertical_trellis (%) | 99.24 | 99.93 | 99.98 | 88.35 | 96.74 | 98.15 | 99.44 | 95.86 | 98.68 | 98.80
AA (%) | 92.89 | 93.56 | 95.84 | 87.68 | 88.53 | 93.44 | 95.23 | 95.95 | 97.28 | 97.96
OA (%) | 89.12 | 90.73 | 93.29 | 81.21 | 84.76 | 86.77 | 89.42 | 92.56 | 94.67 | 96.14
Kappa | 0.8793 | 0.8965 | 0.9252 | 0.7927 | 0.8306 | 0.8533 | 0.8824 | 0.9174 | 0.9406 | 0.9570
t (s) | 3.26 | 1.68 | 131.60 | 156.20 | 1560.19 | 1294.50 | 17,080.47 | 352.24 | 13.95 | 240.26
Table 5. Comparison of classification performance on the Botswana dataset.
Surface Object | SVM [6] | ELM [8] | SPELM [9] | SSG [23] | CNN-PPF [16] | BASS-Net [17] | R-VCANet [19] | HiFi-We [29] | BLS [20] | SBLS
Water (%) | 100 | 99.84 | 100 | 100 | 99.21 | 100 | 100 | 100 | 100 | 98.32
Hippo grass (%) | 88.89 | 93.33 | 97.83 | 87.90 | 100 | 100 | 100 | 96.05 | 99.26 | 97.28
Floodplain grasses1 (%) | 95.43 | 98.5 | 99.57 | 98.35 | 100 | 99.57 | 100 | 96.62 | 100 | 100
Floodplain grasses2 (%) | 91.26 | 78.09 | 95.81 | 96.21 | 94.20 | 97.44 | 100 | 99.49 | 99.28 | 100
Reeds1 (%) | 89.19 | 94.79 | 93.8 | 79.52 | 89.02 | 86.35 | 96.79 | 90.84 | 93.57 | 96.87
Riparian (%) | 62.43 | 100 | 100 | 77.83 | 78.74 | 82.33 | 90.36 | 94.46 | 87.71 | 99.28
Firescar2 (%) | 97.07 | 100 | 100 | 98.74 | 94.35 | 100 | 100 | 95.73 | 100 | 100
Island interior (%) | 97.83 | 98.19 | 98.71 | 97.27 | 87.56 | 100 | 100 | 100 | 100 | 100
Acacia woodlands (%) | 93.40 | 87.69 | 98.47 | 93.47 | 92.09 | 94.22 | 87.07 | 95.78 | 99.25 | 99.86
Acacia shrublands (%) | 75.18 | 88.31 | 99.39 | 90.61 | 93.62 | 96.49 | 99.56 | 98.77 | 100 | 100
Acacia grasslands (%) | 93.85 | 99.02 | 99.93 | 88.14 | 95.40 | 92.63 | 97.54 | 94.95 | 100 | 100
Short mopane (%) | 89.70 | 94.43 | 98.43 | 98.14 | 100 | 100 | 100 | 97.52 | 100 | 100
Mixed mopane (%) | 89.52 | 97.80 | 99.84 | 92.42 | 95.38 | 95.56 | 99.19 | 93.79 | 98.47 | 99.27
Exposed soils (%) | 94.94 | 99.84 | 100 | 97.87 | 100 | 100 | 98.67 | 99.73 | 98.93 | 97.33
AA (%) | 96.51 | 94.61 | 98.70 | 92.61 | 94.26 | 96.04 | 97.80 | 96.70 | 98.32 | 99.16
OA (%) | 96.71 | 94.16 | 98.67 | 92.15 | 93.40 | 95.25 | 97.27 | 96.36 | 98.13 | 99.32
Kappa | 0.9644 | 0.9367 | 0.9856 | 0.9149 | 0.9284 | 0.9485 | 0.9704 | 0.9606 | 0.9798 | 0.9926
t (s) | 1.59 | 1.31 | 12.57 | 16.35 | 1020.09 | 1120.53 | 908.54 | 439.56 | 3.83 | 70.97
