Article

Hyperspectral Classification Based on Texture Feature Enhancement and Deep Belief Networks

1 The State Key Laboratory of Integrated Service Networks, School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
2 The Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS 39762, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(3), 396; https://doi.org/10.3390/rs10030396
Submission received: 2 January 2018 / Revised: 14 February 2018 / Accepted: 2 March 2018 / Published: 4 March 2018
(This article belongs to the Special Issue Hyperspectral Imaging and Applications)

Abstract: With the success of Deep Belief Networks (DBNs) in computer vision, DBNs have attracted great attention in hyperspectral classification. Many deep learning based algorithms have focused on deep feature extraction to improve classification. Multiple features, such as texture features, are widely utilized in the classification process to greatly enhance accuracy. In this paper, a novel hyperspectral classification framework based on an optimal DBN and a novel texture feature enhancement (TFE) method is proposed. Through band grouping, sample band selection and guided filtering, the texture features of hyperspectral data are improved. After TFE, the optimal DBN is employed on the reconstructed hyperspectral data for feature extraction and classification. Experimental results demonstrate that the proposed classification framework outperforms several state-of-the-art classification algorithms and achieves outstanding hyperspectral classification performance. Furthermore, the proposed TFE method plays a significant role in improving classification accuracy.


1. Introduction

Hyperspectral imagery with hundreds of narrow spectral channels provides rich spectral information. With very high spectral resolution, hyperspectral data have been of great interest in many practical applications, such as agriculture, environment, surveillance and medicine [1,2,3,4]. Hyperspectral classification is a key technique employed in the aforementioned applications. A variety of classification methods have been developed over the last several decades to distinguish physical objects and classify each pixel into a unique land-cover label, such as maximum likelihood [5], minimum distance [6], K-nearest neighbors [7,8], random forests [9], Bayesian models [10,11], neural networks, and their improvements [12,13,14,15]. Among these supervised classifiers, one of the most important is the kernel-based support vector machine (SVM), which can also be considered a kind of neural network. It achieves superior hyperspectral classification accuracy by building an optimal hyperplane that best separates the training samples.
In addition, sparse representation based on an over-complete signal dictionary has gained great attention in the literature. Sparse representation-based classification (SRC) [16,17,18] and collaborative representation classification (CRC) [19,20] approach the problem from a different angle: they do not adopt the traditional training–testing fashion, and they need no prior knowledge about the probability density distribution of the data. To further enhance the performance of SRC and CRC, Li et al. [21] utilized a diagonal weight matrix to adaptively adjust the regularization parameter. To address the Hughes phenomenon in hyperspectral classification, many feature extraction and selection algorithms are utilized to remove redundant features from the original data. To further improve hyperspectral classification performance, multiple features are extracted and employed for classification. For instance, Kang et al. combined spectral and spatial features through a guided filter to process the pixel-wise classification map of each class [22]. Several studies [23,24,25] focused on integrating spatial and spectral information in hyperspectral imagery. In addition, texture features are considered to assist hyperspectral classification [26], and modeling hyperspectral image textures is significant for classification and material identification.
Recent research has highlighted deep learning with deep neural networks, which can learn high-level features hierarchically. They have demonstrated their potential in image classification, which has also motivated successful applications of deep models to hyperspectral image classification. The classic deep learning method is the convolutional neural network (CNN), which plays a dominant role in vision-based tasks. The local receptive fields of a CNN can extract spatially related features at high levels. Fukushima [27] introduced the motivations behind CNNs, and Ciresan et al. and Lee et al. [28,29] described their invariance properties. Chen et al. proposed 2-D and 3-D CNNs [30] to capture deep, abstract and robust features, yielding superior hyperspectral classification performance. However, CNNs are typical supervised models, and a massive training dataset is needed to unlock their power; unfortunately, only a limited number of labeled samples are usually available in hyperspectral imagery. Deep belief networks (DBNs) [31] and stacked autoencoders (SAEs) [32] are also very promising deep learning methods for hyperspectral classification with limited training samples.
In this paper, we mainly investigate the DBN for its suitability and practicality for hyperspectral classification. A novel hyperspectral classification framework is proposed based on an optimal DBN. To acquire desirable performance, we also propose an algorithm to enhance the texture features of hyperspectral imagery. The main contributions of this paper are summarized below.
  • We first propose a band grouping method to separate the bands of hyperspectral data into different band groups. Multi-texture features are then used to select a sample band from each band group.
  • We propose a novel algorithm to enhance the texture features of hyperspectral data, using a guided filter to complete the texture feature enhancement (TFE) procedure.
  • An optimal DBN structure is proposed with consideration of learning and deep feature extraction. The learned features are fed to a Softmax layer to address the classification problem. Furthermore, with enhanced texture features, accurate classification maps can be generated by considering spatial information.
The rest of the paper is organized as follows. Section 2 is a brief description of related work. In Section 3, we detail our proposed DBN model. Datasets and parameter settings are described in Section 4. Experimental results and discussions are presented in Section 5. Section 6 draws the conclusions of this paper.

2. Related Work

A deep belief network (DBN) is a model that is first pre-trained in an unsupervised way, and then the available labeled training samples are used to fine-tune the pre-trained model through optimizing a cost function defined over the labels of training samples and their predictions.
The original DBN, published in Science [33], uses a generative model in the pre-training procedure and back-propagation in the fine-tuning stage. This is very useful when the number of training samples is limited, as in the case of hyperspectral remote sensing. A DBN can be efficiently trained in an unsupervised, layer-by-layer manner, where the layers are typically made of restricted Boltzmann machines (RBMs). Thus, to explain the structure and theory of the DBN, we first describe its main component, the RBM.

2.1. Restricted Boltzmann Machines (RBM)

An RBM is generally trained with unsupervised learning and can be interpreted as a stochastic neural network. It was originally developed to form distributed representations. It is a two-layer network composed of visible and hidden units: full connections are allowed between visible and hidden units, but no connections between two visible units or between two hidden units. Given the visible units, the hidden units can be obtained via a mapping of the visible units, and the activations of the neurons in the hidden layer are independent of one another. Given the hidden units, the visible units have the same property. A typical RBM structure is depicted in Figure 1.
The visible units can be represented as $v$ and the hidden units as $h$. The RBM is a kind of energy-based model in which the joint distribution of the layers follows a Boltzmann distribution. Energy-based probabilistic models define a probability distribution through an energy function as:
$$ p(v, h \mid \theta) = \frac{\exp\{-E(v, h \mid \theta)\}}{Z(\theta)}, \qquad (1) $$
where the normalization constant Z ( θ ) is called the partition function by analogy with physical systems:
$$ Z(\theta) = \sum_{v}\sum_{h} \exp\{-E(v, h \mid \theta)\}. \qquad (2) $$
A joint configuration of the units has an energy given by:
$$ E(v, h \mid \theta) = -\sum_{i=1}^{n} a_i v_i - \sum_{j=1}^{m} b_j h_j - \sum_{i=1}^{n}\sum_{j=1}^{m} v_i w_{ij} h_j = -a^T v - b^T h - v^T W h, \qquad (3) $$
where $\theta = \{a_i, b_j, w_{ij}\}$; $w_{ij}$ represents the weight connecting visible unit $i$ and hidden unit $j$; $a_i$ and $b_j$ denote the bias terms of the visible and hidden layers, respectively; $n$ and $m$ are the total numbers of visible and hidden units; and $v_i$ and $h_j$ represent the states of visible unit $i$ and hidden unit $j$.
Due to the specific structure of RBMs, visible and hidden units are conditionally independent, as given by:
$$ P(v_i = 1 \mid h, \theta) = \sigma\Big(a_i + \sum_{j} w_{ij} h_j\Big), \qquad P(h_j = 1 \mid v, \theta) = \sigma\Big(b_j + \sum_{i} w_{ij} v_i\Big), \qquad (4) $$
where σ ( ) is the logistic function defined as
$$ \sigma(x) = \frac{1}{1 + \exp(-x)}. \qquad (5) $$
Overall, an RBM has five parameters: $h$, $v$, $w$, $a$ and $b$, where $w$, $a$ and $b$ are obtained via learning, $v$ is the input, and $h$ is the output. $w$, $a$ and $b$ can be learned and updated via the contrastive divergence (CD) method as
$$ w_{ij} \leftarrow w_{ij} + \lambda\big(P(h_j \mid v)\,v_i - P(h_j^r \mid v^r)\,v_i^r\big), \qquad (6) $$
$$ a_i \leftarrow a_i + \lambda\,(v_i - v_i^r), \qquad (7) $$
$$ b_j \leftarrow b_j + \lambda\,(h_j - h_j^r), \qquad (8) $$
where $\lambda$ denotes the learning rate, $P(h_j^r \mid v^r)$ represents the reconstructed probability distribution, and $v_i^r$ and $h_j^r$ are the reconstructions of visible unit $i$ and hidden unit $j$, respectively. Once the states of the hidden units are chosen, the visible units can be reconstructed from hidden units drawn via Gibbs sampling. The states of the hidden units are then updated through the visible units, so that the hidden units capture the features of the reconstruction. The distribution of the visible units approximates the distribution of the real data. The learning ability of an RBM depends on whether the hidden units contain enough information about the input data.
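For concreteness, the following minimal sketch (Python/NumPy rather than the authors' Matlab, with function names of our own choosing) implements one CD-1 update for a binary RBM according to Equations (4)–(8); real hyperspectral inputs scaled to [−1, 1] would typically require Gaussian visible units, which we omit here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Logistic function of Equation (5).
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, lam=0.15):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0: (batch, n) visible data; W: (n, m) weights;
    a: (n,) visible biases; b: (m,) hidden biases; lam: learning rate.
    """
    # Positive phase: P(h = 1 | v0), Equation (4), then a Gibbs sample.
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: reconstruct v, then P(h = 1 | v_r).
    vr = sigmoid(h0 @ W.T + a)
    phr = sigmoid(vr @ W + b)
    # Parameter updates, Equations (6)-(8), averaged over the batch.
    batch = v0.shape[0]
    W += lam * (v0.T @ ph0 - vr.T @ phr) / batch
    a += lam * (v0 - vr).mean(axis=0)
    b += lam * (ph0 - phr).mean(axis=0)
    return W, a, b
```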

2.2. Deep Belief Learning

The learning ability of a single hidden layer is limited. To capture comprehensive information from the data, the hidden units of one RBM can be fed as the input (visible units) of another RBM. This layer-by-layer structure, trained in a greedy manner, forms the so-called Deep Belief Network, which can thereby extract deep features of image data. The structure of a three-layer DBN is depicted in Figure 2.
Training a DBN consists of two stages: pre-training and fine-tuning. Pre-training is an unsupervised stage that initializes the model so as to enhance the efficiency of the subsequent supervised training. Fine-tuning is the supervised stage, which adjusts the classifier's predictions to match the ground truth of the data.
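A greedy layer-wise pre-training loop might then look like the sketch below, which reuses `sigmoid`, `cd1_step` and `rng` from the RBM sketch in Section 2.1; the layer sizes and epoch count shown echo the tuned values reported later in Section 4.2 but are otherwise placeholders.

```python
def pretrain_dbn(X, layer_sizes=(200, 200), epochs=300, lam=0.15):
    """Greedy layer-wise pre-training: each trained RBM's hidden
    activations become the visible data of the next RBM."""
    rbms, data = [], X
    for m in layer_sizes:
        n = data.shape[1]
        W = rng.normal(0.0, 0.01, (n, m))
        a, b = np.zeros(n), np.zeros(m)
        for _ in range(epochs):
            W, a, b = cd1_step(data, W, a, b, lam)
        rbms.append((W, a, b))
        data = sigmoid(data @ W + b)  # feed hidden units to the next layer
    return rbms
```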

3. The Proposed Framework

To extract more powerful and invariant features, we propose a novel DBN hyperspectral classification algorithm based on TFE. A DBN is composed of several layers of latent factors, which can be regarded as the neurons of a neural network. However, the limited training samples in real hyperspectral image classification tasks usually lead to many "dead" (never responding) or "potentially over-tolerant" (always responding) latent factors (neurons) in the trained DBN. Our proposed framework mainly consists of three steps: band grouping and sample band selection, TFE, and DBN-based classification.

3.1. Band Grouping and Sample Band Selection

Compared with multispectral imagery, hyperspectral imagery has hundreds of spectral bands with relatively narrow bandwidths, so the correlation between spectral bands needs to be considered. In our framework, we calculate all pairwise correlation coefficients between bands and then utilize the correlations between adjacent bands. The spectral correlation coefficients of the different datasets are depicted in Figure 3.
We can obtain the correlation coefficient between adjacent bands as:
$$ \rho_{i,j} = \mathrm{corr}(B_i, B_j) = \frac{\operatorname{cov}(B_i, B_j)}{\sqrt{\operatorname{var}(B_i)\operatorname{var}(B_j)}}, \qquad (9) $$
where $\operatorname{cov}$ denotes covariance and $\operatorname{var}$ denotes variance; $B_i$ and $B_j$ represent the $i$-th and $j$-th band channels, respectively, with $i = 1, 2, \ldots, L-1$, where $L$ denotes the number of bands of the hyperspectral dataset. Based on Equation (9), the correlation coefficients between adjacent bands in the different datasets are calculated, as shown in Figure 4. The highest correlation coefficient between adjacent bands in Indian Pines is 0.9997, and the lowest is 0.0686. The spectral bands of the University of Pavia have strong correlations overall, with the highest correlation coefficient being 0.9998 and the lowest 0.9294. The highest correlation coefficient in Salinas is 0.9999, and the lowest is 0.5856.
Here, we design an algorithm for grouping bands rationally.
Firstly, calculate the average correlation coefficient of adjacent bands, denoted $\bar{C}$, which is utilized as the threshold in the following steps. It is calculated as:
$$ \bar{C} = \frac{1}{L-1}\sum_{i=1}^{L-1}\rho_{i,j}, \qquad (10) $$
where $j = i + 1$. If the correlation coefficient of two adjacent bands is greater than $\bar{C}$, these two bands are considered to be strongly correlated.
Second, search for local minima among the correlation coefficients of adjacent bands, denoted $\rho_{\min} = \{\rho_{i,j} \mid \rho_{i,j} \le \rho_{i+1,j+1} \text{ and } \rho_{i,j} \le \rho_{i-1,j-1}\}$. All elements of $\rho_{\min}$ are compared with $\bar{C}$: if a local minimum $\rho_{i,j} \in \rho_{\min}$ satisfies $\rho_{i,j} < \bar{C}$, the correlation between the $i$-th and $j$-th bands is lower than the average correlation and is considered weak. The corresponding index pair $\{i, j\}$ is then recorded and added to the set $\rho_{Loc}$.
Third, band grouping is determined by the index pairs stored in $\rho_{Loc}$. For instance, for an index pair $\{i, j\}$, the $i$-th band is set as the last band of the former band group and the $j$-th band as the first band of the next band group. Based on these rules, all the bands are divided into different band groups $\{G_1, G_2, \ldots, G_K\}$.
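The grouping rule can be condensed into the following sketch (Python/NumPy assumed; the function and variable names are ours, and the tie-breaking at plateaus is a simplifying assumption):

```python
import numpy as np

def group_bands(cube):
    """Split bands into groups at weakly correlated adjacent-band pairs,
    following Equations (9) and (10).

    cube: (rows, cols, L) hyperspectral array.
    Returns a list of (start, end) band-index ranges, end exclusive.
    """
    L = cube.shape[-1]
    flat = cube.reshape(-1, L)
    # Adjacent-band correlation coefficients rho_{i,i+1}, Equation (9).
    rho = np.array([np.corrcoef(flat[:, i], flat[:, i + 1])[0, 1]
                    for i in range(L - 1)])
    c_bar = rho.mean()  # threshold C-bar, Equation (10)
    # Local minima of rho that fall below the average correlation.
    cuts = [i for i in range(1, L - 2)
            if rho[i] <= rho[i - 1] and rho[i] <= rho[i + 1]
            and rho[i] < c_bar]
    edges = [0] + [i + 1 for i in cuts] + [L]
    return list(zip(edges[:-1], edges[1:]))
```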
After dividing all the bands of the hyperspectral dataset into different band groups, the sample band with the strongest and clearest texture features is selected from each group.
To extract texture features, the gray-level co-occurrence matrix (GLCM) has been employed successfully. The GLCM [34] is a matrix of frequencies that captures second-order statistics of a hyperspectral image; the distribution in the matrix depends on the angular and distance relationships between pixels. Once the GLCM is created, it can be used to compute various features. We choose the five most commonly used features, listed in Table 1, to select a sample band from each band group. The texture feature score of each band is calculated by Equation (11):
$$ T = \sum_{i=1}^{5} F_i. \qquad (11) $$
The sample band in each band group can be selected through:
$$ g_k = \arg\max_{B_{l_k}}\big\{T_{B_{l_k}} \mid B_{l_k} \in G_k\big\}, \qquad (12) $$
where $G_k$ represents the $k$-th band group of the dataset, $l_k \in \{1, 2, \ldots, N_k\}$, $N_k$ is the number of bands in the $k$-th band group, and $B_{l_k}$ represents the $l_k$-th band in the $k$-th band group. Finally, the sample band set is composed of $\{g_1, g_2, \ldots, g_K\}$.
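As an illustration, a sample band could be selected per group as in the sketch below. It substitutes scikit-image's graycomatrix for Matlab's, quantizes each band to 8 gray levels by quantiles (our assumption), and sums the five Table 1 features directly; approximating F4's 1/(m·n) factor by the matrix size is also our assumption.

```python
import numpy as np
from skimage.feature import graycomatrix

def sample_band(cube, start, end):
    """Pick the band with the highest texture score T, Equation (11),
    inside one band group [start, end)."""
    best, best_score = start, -np.inf
    for l in range(start, end):
        band = cube[:, :, l]
        # Quantize to 8 levels (0..7) using interior quantiles.
        q = np.digitize(band, np.quantile(band, np.linspace(0, 1, 9)[1:-1]))
        glcm = graycomatrix(q.astype(np.uint8), distances=[3],
                            angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=8, normed=True)
        p = glcm.mean(axis=(2, 3))          # average over the four offsets
        i, j = np.indices(p.shape)
        score = ((p ** 2).sum()                          # F1 energy
                 - (p[p > 0] * np.log(p[p > 0])).sum()   # F2 entropy
                 + ((i - j) ** 2 * p).sum()              # F3 contrast
                 + (np.abs(i - j) * p).sum() / p.size    # F4 mean-type
                 + (p / (1 + np.abs(i - j))).sum())      # F5 homogeneity
        if score > best_score:
            best, best_score = l, score
    return best
```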

3.2. Texture Feature Enhancement

As an effective edge-preserving filter, the guided filter (GF) proposed by He et al. can enhance the details of an image. Texture is an important kind of spatial characteristic with a long history in image processing. In this paper, we utilize the GF within each band group to enhance the texture features of the image.
The general guided image filtering was designed for gray-scale or color images, but it is easily extended to multi-channel images. In our proposed framework, the guidance image is a multi-channel image, denoted $I^M$, composed of copies of the band with the strongest texture features in each band group. We assume $q^M$ is a linear transform of $I^M$ in a window $\omega_k$ centered at pixel $k$, so the multi-channel guided filter model can be expressed as
$$ q_i^M = (a_k^M)^T I_i^M + b_k^M, \quad \forall i \in \omega_k, \qquad (13) $$
where $I_i^M$ is a $C \times 1$ vector, $C$ is the channel number of the input image, $a_k^M$ is a $C \times 1$ coefficient vector, and $q_i^M$ and $b_k^M$ are scalars. The guided filter for a multi-channel guidance image becomes
$$ a_k^M = (\Sigma_k + \varepsilon U)^{-1}\Big(\frac{1}{|\omega|}\sum_{i\in\omega_k} I_i^M p_i^M - \mu_k \bar{p}_k^M\Big), \quad b_k^M = \bar{p}_k^M - (a_k^M)^T \mu_k, \quad q_i^M = (\bar{a}_i^M)^T I_i^M + \bar{b}_i^M, \qquad (14) $$
where $\Sigma_k$ is the $C \times C$ covariance matrix of $I^M$ in $\omega_k$, $U$ is a $C \times C$ identity matrix, $p^M$ denotes the filtering input image, which is given beforehand according to the application, $\mu_k$ is the mean of $I^M$ in $\omega_k$, $\bar{p}_k^M$ is the mean of $p^M$ in $\omega_k$, $|\omega|$ represents the number of pixels in $\omega_k$, and $\bar{a}_i^M$ and $\bar{b}_i^M$ are the averages of $a_k^M$ and $b_k^M$ over all windows containing pixel $i$.
The extended guided image filtering for multi-channel images is then applied to each band group. For instance, each channel of the guidance image $I^M$ in Equation (14) for the $k$-th band group $G_k$ is a copy of the previously selected sample band $g_k$.
After guided filtering of all groups is completed, the output bands are restored to a hyperspectral image cube according to their band numbers. Finally, the reconstructed image data with enhanced texture features are obtained through the aforementioned steps. Figure 5 demonstrates the procedure of band grouping and TFE: after the sample bands with the strongest textures are obtained, the reconstructed image data with enhanced texture features are produced by the GF process.
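A direct, unoptimized sketch of Equation (14) using NumPy and SciPy box filters is shown below. In the proposed framework the guide I^M would be C stacked copies of the group's sample band g_k, which makes Σ_k rank-one and the ε regularizer essential; the function name, the reflect border mode and the default parameters are our assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter_multichannel(I, p, r=2, eps=1e-4):
    """Guided filter with a multi-channel guide, Equation (14).

    I: (H, W, C) guidance image; p: (H, W) filtering input;
    r: half window size, so the window is (2r+1) x (2r+1).
    """
    H, W, C = I.shape
    box = lambda x: uniform_filter(x, size=2 * r + 1, mode='reflect')
    mean_I = np.stack([box(I[..., c]) for c in range(C)], axis=-1)  # mu_k
    mean_p = box(p)
    # Per-pixel cross term cov(I, p) and guide covariance Sigma_k.
    cov_Ip = np.stack([box(I[..., c] * p) for c in range(C)], axis=-1) \
        - mean_I * mean_p[..., None]
    Sigma = np.empty((H, W, C, C))
    for c1 in range(C):
        for c2 in range(C):
            Sigma[..., c1, c2] = box(I[..., c1] * I[..., c2]) \
                - mean_I[..., c1] * mean_I[..., c2]
    # a_k = (Sigma_k + eps U)^(-1) cov(I, p);  b_k = p_bar - a_k^T mu_k.
    a = np.linalg.solve(Sigma + eps * np.eye(C), cov_Ip[..., None])[..., 0]
    b = mean_p - (a * mean_I).sum(axis=-1)
    # q_i = mean(a)^T I_i + mean(b), averaged over windows containing i.
    mean_a = np.stack([box(a[..., c]) for c in range(C)], axis=-1)
    return (mean_a * I).sum(axis=-1) + box(b)
```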

3.3. DBN Classification Model

In this section, a DBN-based framework for hyperspectral classification with feature enhanced data is developed.
Spectral information is the most significant and direct feature and can be utilized for classification directly. Existing methods such as SVM and KNN can extract spectral features, but not deep ones; only a deep architecture can make full use of the texture-enhanced hyperspectral image characteristics. However, as the training samples are limited, overfitting often occurs if the network is too deep, so we advocate a novel DBN framework with only two hidden layers (Figure 6).
The input data consist of training samples that are one-dimensional (1-D) vectors, each collected from a pixel of the texture-enhanced HSI data. For ease of description, the first hidden layer is denoted $h_1$ and the second $h_2$. The first layer is learned to extract features from the input data, and the learned features are preserved in $h_1$. Then, to pursue more refined and abstract features, the features contained in $h_1$ are used as the visible data of the second layer, and $h_2$ keeps the refined features. This procedure is generally called recursive greedy learning for pre-training a DBN.
In practice, learning each layer is often performed through the n -step CD, and the weights are updated using Equations (6)–(8).
To fine-tune the DBN and accomplish classification, a Softmax layer is added to the end of the network.
Now, let $X = \{x_1, x_2, \ldots, x_K\}$ be a set of training samples and $Y = \{y_1, y_2, \ldots, y_K\}$ the corresponding labels, where $x_k = [x_{k1}, x_{k2}, \ldots, x_{kL}]^T$ is the spectral signature of the $k$-th sample with $L$ bands. Utilizing the maximum likelihood method, the objective function can be written as
$$ C(\theta) = \sum_{k=1}^{K} \log P(y_k \mid x_k; \theta) = \sum_{k=1}^{K} \log S_{y_k}(x_k, \theta), \qquad (15) $$
where $K$ is the number of training samples, $P(y_k \mid x_k; \theta)$ denotes the distribution of $y_k$ given $x_k$ under the parameters $\theta$ of the Softmax layer, and $S_{y_k}(x_k, \theta)$ denotes the output of the Softmax layer for the $k$-th training sample, that is,
$$ S_{y_k}(x_k, \theta) = \frac{\exp\big\{\sum_{m=1}^{M}\delta(y_k = m)\,\theta_m^T h_{H_L}\big\}}{\sum_{n=1}^{M}\exp\big\{\theta_n^T h_{H_L}\big\}}, \qquad (16) $$
where $H_L$ is the number of hidden layers, set to 2 in our proposed framework, and $M$ is the number of classes. $\theta_m$ and $\theta_n$ are the parameter vectors for the $m$-th and $n$-th units of the Softmax layer, respectively, and $h_{H_L}$ is the output of the $H_L$-th hidden layer, calculated from the input data and the weights and biases from the first layer to the $H_L$-th hidden layer. To optimize the objective function, the stochastic gradient descent (SGD) algorithm is used. Finally, the label of each testing pixel is determined via the weights and biases obtained in the aforementioned steps.
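For reference, the objective can be evaluated as in the small sketch below (NumPy; a negated, numerically stabilized form of Equations (15) and (16), with array shapes that are our assumptions); SGD then back-propagates its gradient through the pre-trained layers.

```python
import numpy as np

def softmax_nll(Theta, H, y):
    """Negative objective -C(theta) from the Softmax posteriors.

    Theta: (M, d) Softmax parameters; H: (K, d) top-layer outputs
    h_{H_L} for K training samples; y: (K,) integer labels in 0..M-1.
    """
    logits = H @ Theta.T                         # theta_m^T h_{H_L}
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_S = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_S[np.arange(len(y)), y].sum()    # -sum_k log S_{y_k}
```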

4. Experiments

4.1. Datasets

In this section, three typical hyperspectral datasets, namely Indian Pines, University of Pavia and Salinas, are employed to compare the proposed DBN classification method with other state-of-the-art methods. In these experiments, we randomly select 300 labeled pixels per class for training, of which 20 samples are utilized for validation. The remaining pixels of labeled data are used for testing. Furthermore, each pixel is uniformly scaled to the range of −1 to 1.
The first experiment uses the Indian Pines dataset, which was gathered by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over northwestern Indiana. It has 220 spectral channels in the 0.4–2.45 μm region with a spatial resolution of 20 m, and consists of 145 × 145 pixels with 200 bands after removing 20 noisy and water-absorption bands. We employ the 8 largest classes in this experiment. The numbers of training and testing samples are listed in Table 2.
The second dataset, with 610 × 340 pixels, is the University of Pavia, which was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) during a flight campaign over Pavia, northern Italy. The ROSIS sensor covers 115 spectral bands from 0.43 to 0.86 μm with a geometric resolution of 1.3 m. Each pixel has 103 bands after discarding bad bands. There are 9 ground-truth classes, with the numbers of labeled samples shown in Table 3.
The third experiment is on the Salinas dataset, which was also collected by the AVIRIS sensor over Salinas Valley, California, with a spatial resolution of 3.7 m. The area comprises 512 × 217 pixels with 204 bands after removing noisy and water-absorption bands. It mainly contains vegetables, bare soils, and vineyard fields. There are 16 ground-truth classes, and the numbers of training and testing samples are listed in Table 4.
Our experiments are implemented in Matlab 2015b (MathWorks, Natick, MA, USA) on an Intel Core i5-3470 CPU with a base frequency of 3.2 GHz, running 64-bit Windows 7.

4.2. Parameters Tuning and Analysis

In our proposed framework, several parameters need to be adjusted: the number of hidden units, the learning rate, the max epoch and the number of hidden layers. In this section, tuning results are presented for selecting proper values. Both the number of hidden layers and the number of hidden units per layer play an important role in classification performance: suitable values make full use of the texture-enhanced hyperspectral data without over-training, and support a fitting mapping from the original hyperspectral data to hyperspectral features. In the training process of the DBN, the learning rate controls the pace of learning: a learning rate that is too large leads to unstable training, while one that is too small prolongs the training process. Therefore, an appropriate learning rate can expedite the training procedure while maintaining satisfactory performance.
In Figure 7, we can see that our proposed framework achieves the best classification accuracy with 200 hidden neurons in each hidden layer, which demonstrates that 200 is a suitable number of hidden neurons. Figure 8 depicts the relationship between accuracy and the learning rate: learning rates from 0.15 to 0.2 obtain better performance, so we select 0.15 for the first RBM and 0.2 for the second. To determine the max epoch, we vary it from 50 to 500; Figure 9 shows that the framework achieves its best classification performance at 300 epochs, so the max epoch is set to 300. Table 5 lists the accuracies achieved with different numbers of hidden layers in the DBN. With two hidden layers, the DBN achieves superior results, so we set the number of hidden layers to 2.
In our paper, we utilize the graycomatrix function in Matlab to calculate the GLCM. The parameters used in the experiments are "NumLevels" and "Offset", set to 8 and [0, 3; −3, 3; −3, 0; −3, −3], respectively.
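For readers working outside Matlab, an approximately equivalent call with scikit-image is sketched below; the mapping of the Matlab offsets to distance/angle pairs is our assumption about the usual GLCM convention.

```python
import numpy as np
from skimage.feature import graycomatrix

# A toy 8-level image standing in for one quantized hyperspectral band.
image = np.random.default_rng(0).integers(0, 8, (64, 64), dtype=np.uint8)
# Matlab offsets [0 3; -3 3; -3 0; -3 -3] correspond to distance 3 at
# angles of 0, 45, 90 and 135 degrees.
glcm = graycomatrix(image, distances=[3],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=8)
```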

4.3. Evaluation Criteria

The evaluation criteria used in our paper are overall accuracy (OA), average accuracy (AA), precision, and the Kappa coefficient. In particular, OA, precision and Kappa are highlighted for assessment of the proposed framework.
Figure 10 demonstrates a p-class confusion matrix. Based on Figure 10, AA and precision can be derived as [35]
$$ P_{\mathrm{AA}} = \frac{1}{p}\sum_{i=1}^{p}\frac{n_{ii}}{\sum_{j=1}^{p} n_{ji}}, \qquad (17) $$
$$ P_{\mathrm{precision}} = \frac{1}{p}\sum_{i=1}^{p}\frac{n_{ii}}{\sum_{j=1}^{p} n_{ij}}, \qquad (18) $$
where $p$ is the number of classes, $N$ is the total number of hyperspectral image samples with $N = \sum_{i=1}^{p} n_i$, $n_{ii}$ is the number of samples in the $i$-th class classified into the $i$-th class, and $n_{ji}$ is the number of samples in the $i$-th class classified into the $j$-th class.
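Both criteria follow directly from the confusion matrix; a small sketch (NumPy, adopting the paper's n_ij indexing) is:

```python
import numpy as np

def aa_and_precision(conf):
    """AA and precision from a p x p confusion matrix, Equations (17)-(18).

    conf[i, j] = n_ij: samples of true class j classified into class i,
    so columns sum to class sizes and rows to per-class predictions.
    """
    diag = np.diag(conf).astype(float)
    aa = np.mean(diag / conf.sum(axis=0))         # n_ii / sum_j n_ji
    precision = np.mean(diag / conf.sum(axis=1))  # n_ii / sum_j n_ij
    return aa, precision
```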
We also apply the nonparametric McNemar's test, based on the standardized normal test statistic, to evaluate the statistical significance of improvements in OA among different hyperspectral classification algorithms. The McNemar's test statistic for two algorithms, denoted Algorithm 1 and Algorithm 2, is calculated as [36]:
$$ z = \frac{f_{12} - f_{21}}{\sqrt{f_{12} + f_{21}}}, \qquad (19) $$
where $f_{12}$ denotes the number of samples misclassified by Algorithm 2 but not by Algorithm 1, and $f_{21}$ the number misclassified by Algorithm 1 but not by Algorithm 2. At the 5% significance level, the critical value of $|z|$ is 1.96; if $|z|$ exceeds this value, the two classification algorithms have a significant discrepancy.
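Given boolean per-sample error masks for two classifiers on the same test pixels, the statistic of Equation (19) can be computed as below (a sketch that assumes f12 + f21 > 0):

```python
import numpy as np

def mcnemar_z(err1, err2):
    """McNemar's z from two boolean error masks (True = misclassified)."""
    f12 = int(np.sum(~err1 & err2))  # wrong under Algorithm 2 only
    f21 = int(np.sum(err1 & ~err2))  # wrong under Algorithm 1 only
    return (f12 - f21) / np.sqrt(f12 + f21)  # |z| > 1.96: significant at 5%
```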

5. Experimental Results and Discussion

In this section, the proposed TFE and the novel classification framework are evaluated, and the relevant results are summarized and discussed in detail.

5.1. Compared Methods and Band Groups

To analyze and evaluate our proposed algorithm, which combines TFE and the optimal DBN, existing algorithms such as SVM with a Radial Basis Function kernel (SVM-RBF), the Radial Basis Function neural network (RBFNN) and CNN are employed for comparison. We also compare with a state-of-the-art spectral–spatial algorithm called EPF-G-c [22]. All these algorithms are widely used with excellent performance in hyperspectral image classification tasks, especially EPF-G-c. In addition, to evaluate our proposed texture feature enhancement (TFE) algorithm, we also apply TFE to the traditional SVM-RBF and RBFNN. All experiments are repeated 10 times, and the average classification results are reported for comparison.
According to our proposed band grouping solution, the bands of Indian Pines can be divided into 41 groups: 1, 2, 3, 4–17, 18, 19–33, 34, 35, 36, 37–56, 57, 58–60, 61, 62, 63–74, 75, 76, 77–82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92–93, 94, 95, 96–97, 98–102, 103, 104, 105, 106–143, 144, 145, 146–198, 199 and 200. The bands of University of Pavia can be divided into 19 groups: 1, 2, 3, 4, 5, 6, 7, 8–68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78–84 and 85–103. The bands of Salinas can be divided into 21 groups: 1, 2, 3, 4, 5–35, 36, 37, 38, 39, 40, 41–104, 105–106, 107, 108, 109–146, 147, 148, 149–201, 202, 203 and 204. All these band groups are employed in the TFE algorithm.

5.2. Discussion on Effectiveness of the Proposed TFE

Figure 11 demonstrates the reconstructions of border and inner pixels of four classes after TFE on the Indian Pines dataset. The first image of each row depicts the locations of the border and inner pixels. The reconstruction and reconstruction error of the border pixel are shown in the second image of each row, and those of the inner pixel in the third image.
In hyperspectral classification, some spectra are distorted by imaging noise or low spatial resolution, especially at border pixels; therefore, the difficulty of hyperspectral classification lies primarily in correctly classifying the border pixels. In Figure 11, it can be seen that, with TFE, the reconstructed border pixels become different from the original border pixels, while the reconstructed inner pixels remain nearly the same as the original inner pixels, which implies that TFE acts mainly on border pixels. TFE makes border pixels more distinctive and closer to their original spectra. Hence, the texture features of the hyperspectral image become more obvious and clear, and pixels that were previously difficult to distinguish can be recognized more easily. In other words, TFE has a positive effect on hyperspectral classification performance.

5.3. Discussion on Classification Results and Statistical Test

Table 6 provides the classification performance on Indian Pines achieved by different algorithms: SVM, RBFNN, the optimal DBN (O_DBN), SVM combined with TFE (SVM_TFE), RBFNN combined with TFE (RBFNN_TFE), CNN, EPF-G-c and our proposed framework. O_DBN denotes the optimal DBN we proposed but without TFE; SVM_TFE and RBFNN_TFE are the two algorithms combined with the TFE method. The classification accuracy of each class is also listed in this table. Table 6 shows that our proposed framework obtains superior performance compared with the other algorithms. Meanwhile, the optimal DBN has the best classification accuracy among the algorithms without TFE, such as SVM and RBFNN. Although EPF-G-c is an outstanding spectral–spatial hyperspectral classification algorithm, our proposed framework utilizing TFE still achieves slightly better classification accuracy. Besides, SVM_TFE and RBFNN_TFE outperform SVM and RBFNN, respectively: the OA of SVM_TFE is 5.06% greater than that of SVM, and the OA of RBFNN_TFE is 8.97% higher than that of RBFNN. Compared with O_DBN, the OA obtained via our proposed framework improves by 8.08% and the Kappa increases by 9.98%. All these facts indicate the effectiveness of TFE and demonstrate that our proposed framework and TFE perform well on Indian Pines.
Table 7 lists the classification precision achieved by these algorithms. The precision of our proposed algorithm outperforms SVM, RBFNN, O_DBN, SVM_TFE, RBFNN_TFE, CNN and EPF-G-c. In addition, the methods with TFE have better classification precision than those without.
Table 8 and Table 10 present the classification accuracies obtained via different algorithms on the University of Pavia and Salinas datasets, while Table 9 and Table 11 list the precisions obtained through our proposed model and the other classification algorithms on these datasets. It is obvious from Table 8 and Table 10 that our proposed framework performs better than the other classification methods. In particular, all algorithms that integrate TFE outperform their counterparts without TFE: by employing TFE, the performance of SVM increases by 5.78% on University of Pavia and 1.75% on Salinas, while the performance of RBFNN improves by 6.8% on University of Pavia and 1.55% on Salinas. The OA achieved by the proposed framework is 6.55% higher than that of the optimal DBN on University of Pavia and 3.94% higher on Salinas. Furthermore, the proposed classification framework performs better than CNN and EPF-G-c. As for the Kappa coefficients, our proposed framework shows better consistency. A possible reason is that our framework, as a deep network, extracts high-level features of the data more effectively than shallow networks such as RBFNN and SVM, so its descriptive ability is more stable. In Table 9 and Table 11, the precisions obtained through our proposed model on the different datasets are better than those achieved via the other algorithms. Furthermore, our proposed TFE has a positive effect on classification accuracy.
Figure 12, Figure 13 and Figure 14 demonstrate the classification maps obtained in Indian Pines, University of Pavia and Salinas, respectively. Clearly, the classification maps shown in Figure 12, Figure 13 and Figure 14 achieved by our proposed framework are the smoothest and clearest. The classification accuracy of border pixels in these datasets is improved greatly and the boundaries of different classes are more distinct. Compared to other classification algorithms, the results of our proposed framework are better because they contain less salt-and-pepper noise.
Table 12 presents the average |z| values of the proposed classification framework against the other classification algorithms on Indian Pines, Pavia University and Salinas. A "Yes" denotes that the two classification algorithms have a significant performance discrepancy in McNemar's test. Clearly, the proposed classification framework is statistically different from its counterparts at the 5% significance level.

6. Conclusions

In this paper, we investigate a novel hyperspectral classification framework based on an optimal DBN algorithm. In our proposed framework, we develop a new TFE algorithm that employs multi-texture features and a band grouping method. The resulting classification framework offers better classification accuracy than other classic algorithms. To further test our proposed TFE algorithm, a series of experiments combining state-of-the-art algorithms with TFE are applied to three classic hyperspectral datasets. Experimental results demonstrate that the algorithms with TFE outperform those without TFE, which implies that our proposed TFE can play an important role in improving hyperspectral classification performance. We believe that the proposed hyperspectral classification framework based on the optimal DBN and TFE is well suited to processing hyperspectral data in practical applications when training samples are limited.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Nos. 61571345, 91538101, 61501346, 61502367 and 61701360) and the 111 Project (B08038). It was also partially supported by the Fundamental Research Funds for the Central Universities (JB170109), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2016JQ6023), and a General Financial Grant from the China Postdoctoral Science Foundation (No. 2017M623124).

Author Contributions

L.J. and L.Y.S. conceived and designed the study; L.J. performed the experiments; X.B. analyzed the data; L.J. and X.B. wrote the paper; and W.K.Y. and D.Q. reviewed and edited the manuscript. All authors read and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry. Remote Sens. 2017, 9, 1110.
  2. Yokoya, N.; Chan, J.C.W.; Segl, K. Potential of Resolution-Enhanced Hyperspectral Data for Mineral Mapping Using Simulated EnMAP and Sentinel-2 Images. Remote Sens. 2016, 8, 172.
  3. Merentitis, A.; Debes, C.; Heremans, R. Ensemble Learning in Hyperspectral Image Classification: Toward Selecting a Favorable Bias–Variance Tradeoff. IEEE J. STARS 2014, 7, 1089–1102.
  4. He, J.; He, Y.; Zhang, C. Determination and Visualization of Peimine and Peiminine Content in Fritillaria thunbergii Bulbi Treated by Sulfur Fumigation Using Hyperspectral Imaging with Chemometrics. Molecules 2017, 22, 1402.
  5. Richards, J.A.; Jia, X. Using Suitable Neighbors to Augment the Training Set in Hyperspectral Maximum Likelihood Classification. IEEE Geosci. Remote Sens. Lett. 2008, 5, 774–777.
  6. Leonenko, G.; Los, S.O.; North, P.R.J. Statistical Distances and Their Applications to Biophysical Parameter Estimation: Information Measures, M-Estimates, and Minimum Contrast Methods. Remote Sens. 2013, 5, 1355–1388.
  7. Zhang, J.; Mani, I. KNN Approach to Unbalanced Data Distributions: A Case Study Involving Information Extraction. In Proceedings of the ICML 2003 Workshop on Learning from Imbalanced Datasets, Washington, DC, USA, 21–24 August 2003.
  8. Mathew, J.; Luo, M.; Pang, C.K.; Chan, H.L. Kernel-based SMOTE for SVM classification of imbalanced datasets. In Proceedings of the IECON 2015 41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; pp. 1127–1132.
  9. Immitzer, M.; Atzberger, C.; Koukal, T. Tree Species Classification with Random Forest Using Very High Spatial Resolution 8-Band WorldView-2 Satellite Data. Remote Sens. 2012, 4, 2661–2693.
  10. Dobigeon, N.; Tourneret, J.Y.; Chang, C.I. Semi-Supervised Linear Spectral Unmixing Using a Hierarchical Bayesian Model for Hyperspectral Imagery. IEEE Trans. Signal Process. 2008, 56, 2684–2695.
  11. Zhang, L.; Wei, W.; Zhang, Y.; Li, F.; Yan, H. Structured sparse Bayesian hyperspectral compressive sensing using spectral unmixing. In Proceedings of the 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014.
  12. Yu, H.; Gao, L.; Li, J.; Li, S.S.; Zhang, B.; Benediktsson, J.A. Spectral–Spatial Hyperspectral Image Classification Using Subspace-Based Support Vector Machines and Adaptive Markov Random Fields. Remote Sens. 2016, 8, 355.
  13. Chen, H.M.; Wang, H.C.; Chai, J.W.; Chen, C.C.C.; Xue, B.; Wang, L.; Yu, C.; Wang, Y.; Song, M.; Chang, C.I. A Hyperspectral Imaging Approach to White Matter Hyperintensities Detection in Brain Magnetic Resonance Images. Remote Sens. 2017, 9, 1174.
  14. Kayabol, K. Bayesian Gaussian mixture model for spatial–spectral classification of hyperspectral images. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 1805–1809.
  15. Ramo, R.; Chuvieco, E. Developing a Random Forest Algorithm for MODIS Global Burned Area Classification. Remote Sens. 2017, 9, 1193.
  16. Starck, J.; Elad, M.; Donoho, D. Image decomposition via the combination of sparse representation and a variational approach. IEEE Trans. Image Process. 2005, 14, 1570–1582.
  17. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–226.
  18. Zhang, L.; Zhou, W.D.; Chang, P.C.; Yan, Z.; Wang, T.; Li, F.Z. Kernel Sparse Representation-Based Classifier. IEEE Trans. Signal Process. 2012, 60, 1684–1695.
  19. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985.
  20. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 217–231.
  21. Li, W.; Tramel, E.W.; Prasad, S.; Fowler, J.E. Nearest regularized subspace for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 477–489.
  22. Kang, X.; Li, S.; Benediktsson, J.A. Spectral–Spatial Hyperspectral Image Classification with Edge-Preserving Filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677.
  23. Chen, C.; Li, W.; Su, H.; Liu, K. Spectral–Spatial Classification of Hyperspectral Image Based on Kernel Extreme Learning Machine. Remote Sens. 2014, 6, 5795–5814.
  24. Wang, T.; Zhang, H.; Lin, H.; Fang, C. Textural–Spectral Feature-Based Species Classification of Mangroves in Mai Po Nature Reserve from Worldview-3 Imagery. Remote Sens. 2016, 8, 24.
  25. Zhong, Y.; Jia, T.; Zhao, J.; Wang, X.; Jin, S. Spatial–Spectral–Emissivity Land-Cover Classification Fusing Visible and Thermal Infrared Hyperspectral Imagery. Remote Sens. 2017, 9, 910.
  26. Peng, B.; Li, W.; Xie, X.; Du, Q.; Liu, K. Weighted-Fusion-Based Representation Classifiers for Hyperspectral Imagery. Remote Sens. 2015, 7, 14806–14826.
  27. Fukushima, K. Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural Netw. 1988, 1, 119–130.
  28. Ciresan, D.C.; Meier, U.; Masci, J.; Gambardella, L.M.; Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI '11), Barcelona, Spain, 16–22 July 2011; pp. 1237–1242.
  29. Lee, H.; Kwon, H. Going Deeper with Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855.
  30. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
  31. Chen, Y.; Zhao, X.; Jia, X. Spectral–Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. STARS 2015, 8, 2381–2392.
  32. Özdemir, A.O.B.; Gedik, B.E.; Çetin, C.Y.Y. Hyperspectral classification using stacked autoencoders with deep learning. In Proceedings of the 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014.
  33. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507.
  34. Ohanian, P.P.; Dubes, R.C. Performance evaluation for four classes of textural features. Pattern Recognit. 1992, 25, 819–833.
  35. Xue, B.; Yu, C.; Wang, Y.; Song, M.; Li, S.; Wang, L. A subpixel target detection approach to hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5093–5114.
  36. Foody, G.M. Thematic map comparison: Evaluating the statistical significance of differences in classification accuracy. Photogramm. Eng. Remote Sens. 2004, 70, 627–633.
Figure 1. Architecture of Restricted Boltzmann Machines.
Figure 2. An illustration of three-layer DBN with logistic regression.
Figure 3. The maps of correlation coefficients of spectral bands in different datasets: (a) Indian Pines; (b) University of Pavia; and (c) Salinas.
Figure 4. The correlation coefficients of adjacent spectral bands in different datasets: (a) Indian Pines; (b) University of Pavia; and (c) Salinas.
Figure 5. The procedure of band grouping and texture features enhancement.
Figure 6. Our proposed DBN network for classification.
Figure 7. The relationship between accuracies and the number of hidden units in different datasets: (a) Indian Pines; (b) University of Pavia; and (c) Salinas.
Figure 8. The relationship between accuracies and the learning rates in different datasets: (a) Indian Pines; (b) University of Pavia; and (c) Salinas.
Figure 9. The relationship between accuracies and the numbers of Max epoch in different datasets: (a) Indian Pines; (b) University of Pavia; and (c) Salinas.
Figure 10. P-class confusion matrix.
Figure 11. The reconstructions of the border and inner pixels of different classes in Indian Pines. First row: Class 2; second row: Class 4; third row: Class 6; last row: Class 8.
Figure 12. The classification maps obtained via different algorithms on Indian Pines: (a) Ground truth; (b) SVM; (c) RBFNN; (d) O_DBN; (e) SVM_TFE; (f) RBFNN_TFE; (g) CNN; (h) EPF-G-c; and (i) the proposed framework.
Figure 13. The classification maps obtained via different algorithms on University of Pavia: (a) Ground truth; (b) SVM; (c) RBFNN; (d) O_DBN; (e) SVM_TFE; (f) RBFNN_TFE; (g) CNN; (h) EPF-G-c; and (i) the proposed framework.
Figure 14. The classification maps obtained via different algorithms on the Salinas dataset: (a) Ground truth; (b) SVM; (c) RBFNN; (d) O_DBN; (e) SVM_TFE; (f) RBFNN_TFE; (g) CNN; (h) EPF-G-c; and (i) the proposed framework.
Table 1. Features calculated from the normalized co-occurrence matrix $P(i, j)$.

No. | Feature | Formula
$F_1$ | Energy | $\sum_i \sum_j P^2(i,j)$
$F_2$ | Entropy | $-\sum_i \sum_j P(i,j)\log P(i,j)$
$F_3$ | Contrast | $\sum_i \sum_j (i-j)^2 P(i,j)$
$F_4$ | Mean | $\frac{1}{m n}\sum_i \sum_j |i-j|\,P(i,j)$
$F_5$ | Homogeneity | $\sum_i \sum_j \frac{P(i,j)}{1+|i-j|}$
Table 2. Number of training and testing samples used in the Indian Pines dataset.

No. | Classes | Training | Testing
1 | Corn-notill | 300 | 1160
2 | Corn-mintill | 300 | 534
3 | Grass-pasture | 300 | 197
4 | Hay-windrowed | 300 | 189
5 | Soybean-notill | 300 | 668
6 | Soybean-mintill | 300 | 2168
7 | Soybean-clean | 300 | 314
8 | Woods | 300 | 994
Total | | 2400 | 6224
Table 3. Number of training and testing samples used in the Pavia University dataset.

No. | Classes | Training | Testing
1 | Asphalt | 300 | 6331
2 | Meadows | 300 | 18,349
3 | Gravel | 300 | 1799
4 | Trees | 300 | 2764
5 | Painted metal sheets | 300 | 1045
6 | Bare Soil | 300 | 4729
7 | Bitumen | 300 | 1030
8 | Self-Blocking Bricks | 300 | 3382
9 | Shadows | 300 | 647
Total | | 2700 | 40,076
Table 4. Number of training and testing samples used in the Salinas dataset.

No. | Classes | Training | Testing
1 | Brocoli_green_weeds_1 | 300 | 1709
2 | Brocoli_green_weeds_2 | 300 | 3426
3 | Fallow | 300 | 1676
4 | Fallow_rough_plow | 300 | 1094
5 | Fallow_smooth | 300 | 2378
6 | Stubble | 300 | 3659
7 | Celery | 300 | 3279
8 | Grapes_untrained | 300 | 10,971
9 | Soil_vinyard_develop | 300 | 5903
10 | Corn_senesced_green_weeds | 300 | 2978
11 | Lettuce_romaine_4wk | 300 | 768
12 | Lettuce_romaine_5wk | 300 | 1627
13 | Lettuce_romaine_6wk | 300 | 616
14 | Lettuce_romaine_7wk | 300 | 770
15 | Vinyard_untrained | 300 | 6968
16 | Vinyard_vertical_trellis | 300 | 1507
Total | | 4800 | 49,329
Table 5. The accuracies obtained via different numbers of hidden layers in the DBN.

Datasets | 1 Layer | 2 Layers | 3 Layers | 4 Layers
Indian Pines | 0.8919 | 0.8948 | 0.8892 | 0.8432
University of Pavia | 0.9090 | 0.9123 | 0.9065 | 0.8994
Salinas | 0.9123 | 0.9228 | 0.9104 | 0.9064
Table 6. Classification accuracy of different algorithms on Indian Pines.

Class | SVM | RBFNN | O_DBN | SVM_TFE | RBFNN_TFE | CNN | EPF-G-c | Our Proposed
1 | 0.8578 | 0.8672 | 0.8562 | 0.9069 | 0.9638 | 0.9107 | 0.9757 | 0.9690
2 | 0.9251 | 0.9288 | 0.9532 | 0.9625 | 0.9944 | 0.7783 | 0.9736 | 0.9888
3 | 0.9391 | 0.9543 | 0.9594 | 0.9594 | 0.9949 | 0.8462 | 0.9314 | 0.9594
4 | 0.9841 | 1 | 0.9947 | 1 | 1 | 0.9793 | 0.9793 | 1
5 | 0.9162 | 0.9237 | 0.9172 | 0.9506 | 0.9910 | 0.7842 | 0.9268 | 0.9880
6 | 0.8054 | 0.7975 | 0.8189 | 0.8962 | 0.9553 | 0.9348 | 0.9855 | 0.9613
7 | 0.9363 | 0.9459 | 0.9490 | 0.9522 | 0.9809 | 0.8442 | 0.9873 | 0.9682
8 | 0.9940 | 0.9950 | 0.9909 | 1 | 1 | 0.9929 | 0.9881 | 1
OA | 0.8837 | 0.8854 | 0.8948 | 0.9343 | 0.9751 | 0.8983 | 0.9754 | 0.9756
AA | 0.9197 | 0.9265 | 0.9270 | 0.9535 | 0.9850 | 0.8838 | 0.9685 | 0.9793
Kappa | 0.8559 | 0.8582 | 0.8617 | 0.9180 | 0.9688 | 0.8736 | 0.9692 | 0.9694
Table 7. Classification precision of different algorithms on Indian Pines.

Class | SVM | RBFNN | O_DBN | SVM_TFE | RBFNN_TFE | CNN | EPF-G-c | Our Proposed
1 | 0.8585 | 0.8643 | 0.8563 | 0.9132 | 0.9646 | 0.8440 | 0.9474 | 0.9571
2 | 0.7577 | 0.7631 | 0.7496 | 0.8877 | 0.9620 | 0.9270 | 0.9606 | 0.9661
3 | 0.9113 | 0.9353 | 0.8400 | 0.8873 | 0.9849 | 0.9492 | 0.9645 | 0.9692
4 | 0.9688 | 0.9844 | 0.9495 | 0.9895 | 1 | 1.0000 | 1 | 1
5 | 0.7917 | 0.7434 | 0.8037 | 0.8675 | 0.9272 | 0.9087 | 0.9650 | 0.9396
6 | 0.9307 | 0.9341 | 0.9417 | 0.9643 | 0.9862 | 0.8538 | 0.9686 | 0.9836
7 | 0.7861 | 0.8710 | 0.8466 | 0.8617 | 0.9716 | 0.9490 | 0.9936 | 0.9882
8 | 0.9930 | 0.9940 | 0.9970 | 0.9990 | 1 | 0.9909 | 1 | 1
Precision | 0.8747 | 0.8862 | 0.8731 | 0.9213 | 0.9746 | 0.9278 | 0.9750 | 0.9755
Table 8. Classification accuracy of different algorithms on University of Pavia.

Class | SVM | RBFNN | O_DBN | SVM_TFE | RBFNN_TFE | CNN | EPF-G-c | Our Proposed
1 | 0.7466 | 0.7733 | 0.8650 | 0.8534 | 0.9029 | 0.9758 | 0.9579 | 0.9458
2 | 0.8442 | 0.8980 | 0.9281 | 0.9058 | 0.9601 | 0.9832 | 0.9993 | 0.9728
3 | 0.8533 | 0.8377 | 0.8410 | 0.8922 | 0.9305 | 0.7795 | 0.9511 | 0.9550
4 | 0.9801 | 0.9602 | 0.9765 | 0.9772 | 0.9787 | 0.9096 | 0.9677 | 0.9881
5 | 0.9990 | 0.9990 | 0.9990 | 0.9981 | 0.9971 | 0.9830 | 0.9372 | 0.9990
6 | 0.9108 | 0.9492 | 0.9125 | 0.9558 | 0.9903 | 0.8153 | 0.9263 | 0.9873
7 | 0.9456 | 0.9583 | 0.8990 | 0.9544 | 0.9932 | 0.6680 | 0.9885 | 0.9893
8 | 0.8430 | 0.8628 | 0.8613 | 0.9101 | 0.9571 | 0.8562 | 0.9421 | 0.9438
9 | 1 | 1 | 0.9985 | 1 | 1 | 0.9985 | 0.9895 | 1.0000
OA | 0.8555 | 0.8888 | 0.9123 | 0.9133 | 0.9568 | 0.9211 | 0.9671 | 0.9696
AA | 0.9025 | 0.9154 | 0.9201 | 0.9385 | 0.9678 | 0.8855 | 0.9622 | 0.9757
Kappa | 0.8103 | 0.8525 | 0.8824 | 0.8845 | 0.9418 | 0.8943 | 0.9590 | 0.9590
Table 9. Classification precision of different algorithms on University of Pavia.

Class | SVM | RBFNN | O_DBN | SVM_TFE | RBFNN_TFE | CNN | EPF-G-c | Our Proposed
1 | 0.9795 | 0.9798 | 0.9675 | 0.9836 | 0.9877 | 0.8531 | 0.9822 | 0.9837
2 | 0.9720 | 0.9841 | 0.9763 | 0.9869 | 0.9980 | 0.9341 | 0.9756 | 0.9978
3 | 0.6657 | 0.6905 | 0.7568 | 0.7803 | 0.8876 | 0.8766 | 0.9711 | 0.9261
4 | 0.7657 | 0.8906 | 0.8207 | 0.9122 | 0.9808 | 0.9678 | 0.9642 | 0.9437
5 | 0.9831 | 0.9981 | 0.9849 | 0.9943 | 1 | 0.9981 | 0.9900 | 0.9877
6 | 0.6714 | 0.7388 | 0.7735 | 0.7456 | 0.8639 | 0.9484 | 0.9450 | 0.9189
7 | 0.5084 | 0.5583 | 0.6515 | 0.7567 | 0.8575 | 0.9592 | 0.9157 | 0.9586
8 | 0.8312 | 0.8028 | 0.8645 | 0.8394 | 0.8772 | 0.8752 | 0.9864 | 0.9117
9 | 1 | 1 | 0.9985 | 0.9985 | 1 | 0.9985 | 0.8779 | 0.9969
Precision | 0.8197 | 0.8492 | 0.8660 | 0.8886 | 0.9392 | 0.9345 | 0.9565 | 0.9583
Table 10. Classification accuracy of different algorithms on the Salinas dataset.

Class | SVM | RBFNN | O_DBN | SVM_TFE | RBFNN_TFE | CNN | EPF-G-c | Our Proposed
1 | 0.9965 | 0.9971 | 0.9947 | 0.9982 | 0.9988 | 1.0000 | 1.0000 | 0.9947
2 | 0.9947 | 0.9947 | 1 | 0.9956 | 0.9950 | 0.9933 | 0.9994 | 0.9962
3 | 0.9976 | 0.9988 | 0.9976 | 0.9976 | 0.9982 | 0.9589 | 0.9994 | 0.9976
4 | 0.9963 | 0.9963 | 0.9963 | 0.9954 | 0.9954 | 0.9838 | 0.9973 | 0.9973
5 | 0.9886 | 0.9849 | 0.9811 | 0.9874 | 0.9899 | 0.9898 | 0.9992 | 0.9853
6 | 0.9981 | 0.9986 | 0.9978 | 0.9981 | 0.9981 | 0.9995 | 0.9984 | 0.9973
7 | 0.9970 | 0.9963 | 0.9957 | 0.9960 | 0.9966 | 0.9988 | 0.9989 | 0.9963
8 | 0.8606 | 0.8567 | 0.8315 | 0.8761 | 0.8893 | 0.8379 | 0.8690 | 0.9085
9 | 0.9934 | 0.9985 | 0.9939 | 0.9942 | 0.9966 | 0.9896 | 0.9911 | 0.9949
10 | 0.9661 | 0.9758 | 0.9426 | 0.9698 | 0.9775 | 0.8848 | 0.9715 | 0.9614
11 | 0.9987 | 0.9961 | 0.9961 | 0.9987 | 0.9987 | 0.8919 | 1 | 1
12 | 0.9994 | 1 | 1 | 0.9994 | 1 | 0.9685 | 0.9992 | 0.9994
13 | 0.9968 | 0.9951 | 0.9984 | 0.9951 | 0.9951 | 0.9534 | 0.9987 | 0.9968
14 | 0.9792 | 0.9857 | 0.9948 | 0.9857 | 0.9805 | 0.9159 | 0.9978 | 0.9948
15 | 0.6972 | 0.7336 | 0.7646 | 0.7941 | 0.7916 | 0.7673 | 0.8856 | 0.9127
16 | 0.9920 | 0.9900 | 0.9854 | 0.9920 | 0.9914 | 0.9695 | 1 | 0.9887
OA | 0.9212 | 0.9266 | 0.9228 | 0.9387 | 0.9421 | 0.9155 | 0.9543 | 0.9622
AA | 0.9658 | 0.9687 | 0.9669 | 0.9733 | 0.9746 | 0.9439 | 0.9816 | 0.9826
Kappa | 0.9114 | 0.9175 | 0.9133 | 0.9312 | 0.9350 | 0.9051 | 0.9486 | 0.9575
Table 11. Classification precision of different algorithms on the Salinas dataset.

Class | SVM | RBFNN | O_DBN | SVM_TFE | RBFNN_TFE | CNN | EPF-G-c | Our Proposed
1 | 0.9988 | 0.9994 | 0.9971 | 0.9994 | 1 | 0.9801 | 1 | 1
2 | 0.9985 | 0.9985 | 0.9980 | 0.9994 | 0.9994 | 0.9947 | 0.9995 | 0.9991
3 | 0.9744 | 0.9721 | 0.9489 | 0.9830 | 0.9824 | 0.9976 | 0.9782 | 0.9682
4 | 0.9909 | 0.9864 | 0.9847 | 0.9918 | 0.9900 | 0.9973 | 0.9991 | 0.9900
5 | 0.9941 | 0.9970 | 0.9978 | 0.9920 | 0.9895 | 0.9315 | 0.9987 | 0.9924
6 | 0.9995 | 0.9997 | 0.9884 | 0.9992 | 0.9995 | 0.9978 | 0.9997 | 0.9940
7 | 0.9966 | 1 | 1 | 0.9951 | 1 | 0.9957 | 0.9991 | 0.9973
8 | 0.8209 | 0.8372 | 0.8592 | 0.8729 | 0.8726 | 0.8952 | 0.9162 | 0.9415
9 | 0.9956 | 0.9916 | 0.9898 | 0.9931 | 0.9927 | 0.9810 | 0.9475 | 0.9926
10 | 0.9517 | 0.9735 | 0.8699 | 0.9534 | 0.9674 | 0.9325 | 0.9627 | 0.9487
11 | 0.9808 | 0.9922 | 0.8242 | 0.9935 | 0.9948 | 0.9831 | 0.9994 | 0.9785
12 | 0.9909 | 0.9897 | 0.9748 | 0.9933 | 0.9921 | 1.0000 | 0.9987 | 0.9933
13 | 0.9777 | 0.9919 | 0.9935 | 0.9871 | 0.9839 | 0.9968 | 0.9920 | 0.9731
14 | 0.8737 | 0.9245 | 0.8235 | 0.9256 | 0.8945 | 0.9506 | 0.9359 | 0.8899
15 | 0.7803 | 0.7747 | 0.7344 | 0.8187 | 0.8341 | 0.5128 | 0.7777 | 0.8559
16 | 0.9701 | 0.9920 | 0.9861 | 0.9658 | 0.9953 | 0.9854 | 0.9946 | 0.9900
Precision | 0.9559 | 0.9638 | 0.9356 | 0.9665 | 0.9680 | 0.9458 | 0.9687 | 0.9690
Table 12. (|z| values/Significant?) in McNemar's test.

Algorithms | Indian Pines | Pavia University | Salinas
SVM | 31.16/Yes | 68.33/Yes | 41.19/Yes
RBFNN | 31.34/Yes | 69.27/Yes | 41.39/Yes
O_DBN | 2.78/Yes | 3.74/Yes | 3.32/Yes
SVM_TFE | 31.95/Yes | 73.29/Yes | 41.21/Yes
RBFNN_TFE | 32.82/Yes | 74.84/Yes | 42.49/Yes
CNN | 3.50/Yes | 3.00/Yes | 4.49/Yes
EPF-G-c | 32.16/Yes | 75.13/Yes | 41.21/Yes
Note: the 5% significance level is selected.
