Article

Hyperspectral Imagery Classification Using Sparse Representations of Convolutional Neural Network Features

Institute of Remote Sensing and GIS, Peking University, Beijing 100871, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(2), 99; https://doi.org/10.3390/rs8020099
Submission received: 19 October 2015 / Revised: 20 December 2015 / Accepted: 30 December 2015 / Published: 27 January 2016

Abstract

In recent years, deep learning has been widely studied for remote sensing image analysis. In this paper, we propose a method for remotely-sensed image classification that uses sparse representations of deep learning features. Specifically, we use convolutional neural networks (CNN) to extract high-level deep features from the image data. Deep features provide high level spatial information created by hierarchical structures. Although the deep features may have high dimensionality, they lie in class-dependent sub-spaces or sub-manifolds. We investigate the characteristics of the deep features within a sparse representation classification framework. The experimental results reveal that the proposed method exploits the inherent low-dimensional structure of the deep features to provide better classification results than widely-used feature exploration algorithms, such as the extended morphological attribute profiles (EMAPs) and sparse coding (SC).


1. Introduction

Hyperspectral images provide rich information from the spectral and spatial domains simultaneously. For this reason, hyperspectral images are widely used in agriculture, environmental management and urban planning. Classification of each pixel in hyperspectral imagery is a common step in these applications. However, hyperspectral sensors generally record more than 100 spectral bands for each pixel (e.g., AVIRIS, Reflective Optics System Imaging Spectrometer (ROSIS)), and interpreting such high-dimensional imagery with good accuracy is rather difficult.
Recently, sparse representation [1] has been demonstrated to be a useful tool for high dimensional data processing. It is also widely applied in hyperspectral imagery classification [2,3,4]. Sparse models aim to represent observations with linear combinations of a small set of elementary samples, often referred to as atoms, chosen from an over-complete training dictionary. In this way, hyperspectral pixels, which lie in a high-dimensional space, can be approximately represented in a low-dimensional subspace spanned by dictionary atoms from the same class. Therefore, given the entire training dictionary, an unlabeled pixel can be sparsely represented by a specific linear combination of atoms. Finally, according to the positions and values of the sparse coefficients of the unlabeled pixel, the class label can be determined.
Spatial information is an important aspect of sparse representations of hyperspectral images. It is widely accepted that combining spatial and spectral information significantly improves hyperspectral image representation and classification (e.g., [5,6]). Several methods have been developed to explore effective spatial features. In [3], two kinds of spatial-based sparse representation are proposed for hyperspectral image processing. The first is a local contextual method that adds a spatial smoothing term to the optimization formulation during the sparse reconstruction of the original data. The second jointly exploits the sparse constraints of the pixels neighboring the pixel of interest. The experimental results show that both strategies yield better classification results. However, both spatial smoothing and the joint sparsity model emphasize only local consistency in the spectral domain, whereas spatial features (e.g., shapes and textures) also need to be explored for better representation of hyperspectral imagery. Recently, mathematical morphology (MM) methods [7] have been commonly used to model the spatial characteristics of objects in hyperspectral images. For panchromatic images, derivative morphological profiles (DMPs) [8] have been successfully used for image classification. In the field of hyperspectral image interpretation, spatial features are commonly extracted by building extended morphological profiles (EMPs) [9] on the first few principal components. Moreover, extended morphological attribute profiles (EMAPs) [10], similar to EMPs, have been introduced as an advanced algorithm to obtain detailed multilevel spatial features of high resolution images; they are generated by the sequential application of various spatial attribute filters that can model different kinds of structural information. Such morphological spatial features, generated at the pixel (low) level, suffer heavily from redundancy and great variations in feature representation. To reduce the redundancy in the morphological feature space, several studies have sought more representative spatial features by using sparse coding techniques, such as [11,12]. However, the variability of low-level morphological features limits the power of sparse representation, so it is necessary to find higher level and more robust spatial features.
To explore higher level and more effective spatial features, [13] defines sparse contextual properties based on over-segmentation results, which greatly reduces the computational cost. However, because of spectral variations, objects seldom correspond to a single superpixel, particularly in high resolution images. Moreover, spatial features defined at the superpixel level are commonly merged and linearly transformed from low-level (pixel-level) ones; therefore, they probably would not significantly increase the representation power of spatial features in remote sensing images. Furthermore, both MM and object-level spatial features require prior knowledge for setting proper feature extraction parameters, and this parameter setting often produces inefficient and redundant spatial features [14,15]. Therefore, in this paper, instead of hand-crafting spatial features, we explore high level spatial features by using a deep learning strategy [16,17]. Deep learning, one of the state-of-the-art approaches in the computer vision field, shifts human-engineered feature extraction to automatic, highly application-dependent feature learning [18,19,20]. Furthermore, owing to the deep structure of such learning strategies (e.g., the stacked autoencoder (SAE) [21] and the convolutional neural network (CNN) [22]), one can extract higher level spatial features layer by layer through non-linear activation functions; these features are much more robust and effective than low level ones. Recently, some efforts have been made to apply deep learning to hyperspectral image classification. Chen et al. [23] were probably the first to explore the SAE framework for hyperspectral classification. In their work, SAE was used for spectral and spatial feature extraction in a hierarchical structure. However, SAE can only extract higher level features from one-dimensional data and overlooks the two-dimensional spatial characteristics (although an adjacency effect was considered). Unlike SAE, CNN takes a fixed-size image patch, called the "receptive field", for deep spatial feature extraction; thus, it keeps the spatial information intact. In the work of Chen et al. [24], in which vehicles on roads are detected by a deep CNN (DCNN), the results show that CNN is effective for object detection in high resolution images. Beyond object detection, Yue et al. [14,25] explored both spatial and spectral features at higher levels by using a deep CNN framework for the classification of hyperspectral images. However, the extracted deep features still lie in a high dimensional space, which involves a rather high computational cost and may lead to lower classification accuracies.
In this paper, we follow a different strategy: we exploit the low dimensional structure of high level spatial features and perform sparse representation using both spectral and spatial information for hyperspectral image classification. Specifically, we focus on CNN, which can describe structural characteristics at high levels through its hierarchical feature extraction procedure. At the same time, we exploit the fact that deep spatial features of the same class lie in a low-dimensional subspace or manifold and can be expressed by linear sparse regression. Thus, it is worthwhile to combine sparse representation with high dimensional deep features, which may provide better characterization of spatial and spectral features and better discrimination between classes. Therefore, the method proposed in this paper for hyperspectral image classification combines the merits of deep learning and sparse representation. In this work, we tested our method on two well-known hyperspectral datasets: an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene over Indian Pines, IN, USA, and a Reflective Optics System Imaging Spectrometer (ROSIS) scene over Pavia University. The experimental results show that the proposed method can effectively exploit the sparsity of the higher level spatial feature subspace and provides better classification performance. The merits of the proposed method are as follows: (1) instead of manually designing spatial features, we use CNN to learn such features automatically, which is more effective for hyperspectral image representation; (2) a hierarchical network strategy is applied to explore higher level spatial features, which are more robust and effective for classification than low level spatial features; (3) a sparse representation method is introduced to exploit a suitable subspace for the high dimensional spatial features, which reduces the computational cost and increases feature discrimination between classes.
The remainder of the paper is structured as follows. Section 2 presents the proposed methodology in two parts (CNN deep feature extraction and sparse classification), describes the datasets used in the experiments and compares the performance of the proposed method with that of other well-known approaches. Finally, Section 3 concludes with some suggestions for future work.

2. Proposed Methodology

The proposed method can be divided into three main parts, as shown in Figure 1. First, high level spatial features are extracted by the CNN framework. Then, a sparse representation technique is applied to reduce the dimensionality of the high level spatial features generated in the previous step. Finally, with the learned sparse dictionary, the classification results are obtained.
Figure 1. Graphical illustration of the convolutional neural networks (CNN)-based spatial feature extraction, sparse representation and classification of hyperspectral images.

2.1. CNN-Based Deep Feature Extraction

Recently, deep learning, one of the state-of-the-art techniques in the field of computer vision, has demonstrated impressive performance in the recognition and classification of several well-known image datasets [26,27]. Instead of relying on hand-designed image features, a CNN can automatically learn higher level features from a hierarchical neural network in a way similar to human cognition. To explore such spatial information, a fixed-size neighborhood area (the receptive field) must first be chosen. For a principal component (PC) band of the hyperspectral image, given a training sample $p_i$ and its pixel neighbors $P_s(p_i)$, a local neighborhood area of size $P \times P$ is formed; the patch-based training sample is denoted $X_i$, and its label is denoted $t_i$. The CNN then works like a black box: given the input patches and their labels, hierarchical spatial features are generated by a layer-wise activation structure, as shown in Figure 2. Conventionally, two kinds of layers are stacked together in the CNN framework $f(k, b \mid X)$: the convolution layer and the sub-sampling layer [28]. Here, $f(x) = (1 + e^{-x})^{-1}$ is the non-linear activation function. The convolution layer generates spatial features by filtering the output of the previous layer with spatial filters. The sub-sampling layer then generates more general and abstract features, which greatly reduces the computational cost and increases the generalization power for image classification. Learning a CNN with $L$ layers amounts to learning the trainable parameters of each layer. The feature maps of the previous layer are convolved with a learnable kernel $k$ and bias term $b$ and passed through the activation function to form the feature maps of the current layer. For the $l$-th convolution layer, $l \in \{1, 2, \ldots, L\}$, we have:
$$F_l = f(F_{l-1} \ast k_l + b_l)$$
where $F_l$ represents the feature maps of the current layer, $F_{l-1}$ the feature maps of the previous layer, and $k$ and $b$ the trainable parameters of the convolution layer. Commonly, sub-sampling layers are interspersed with convolution layers for computational cost reduction and feature generalization. Specifically, a sub-sampling layer produces downsampled versions of the input feature maps for feature abstraction. For the $q$-th sub-sampling layer, $q \in \{1, 2, \ldots, L\}$, we have:
$$F_q = f(\mathrm{down}(F_{q-1}) + b_q)$$
where $\mathrm{down}(\cdot)$ denotes the sub-sampling function, which shrinks a feature map by mean-value pooling, and $b_q$ is the bias term of the sub-sampling layer. The final output layer can be defined as:
$$y(k, b) = f_L(k_L h_{L-1} + b_L)$$
where $y(k, b)$ is the predicted value of the entire CNN and $h_{L-1}$ is the output feature map of the $(L-1)$-th hidden layer, which can be either a convolution layer or a sub-sampling layer. During training, a squared loss function measures the deviation between the target labels and the predicted labels. For $N$ training samples, the optimization problem is to minimize the loss function $E_N$ as follows:
$$\min E_N = \frac{1}{2}\sum_{i=1}^{N} \left\| t_i - y_i(k, b) \right\|_2^2$$
where $a_L = k_L h_{L-1} + b_L$ denotes a single activation unit. To minimize the loss function, the backward propagation algorithm is the common choice. Specifically, the stochastic gradient descent (SGD) algorithm is applied to optimize the parameters $k$ and $b$.
The parameters of the entire network can be updated according to these derivatives. Once a back-propagation pass is finished, $k$ and $b$ are updated; a feed-forward step is then applied to generate new error derivatives, which are used for another round of parameter updating. These feed-forward and back-propagation passes are repeated until convergence is achieved, yielding the optimal $k$ and $b$. High level spatial features $D_i$ can then be extracted using the learned parameters and the hierarchical framework:
$$O = f_L(k X_i + b)$$
Once the output feature map of the last layer is obtained, it is flattened into a one-dimensional vector for pixel-based classification. The flattened deep feature is thus $D_i = \mathrm{vectorize}(O)$, where $O$ is the final output feature map.
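To make the convolution, mean-pooling sub-sampling and flattening steps above concrete, the following NumPy sketch runs them on a single 28 × 28 patch. It is an illustration only, with randomly initialized parameters and a single kernel per layer; it is not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    # Non-linear activation f(x) = (1 + e^-x)^-1 used throughout the network
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(fmap, kernel):
    # 'Valid' 2-D convolution of a single feature map with a single kernel
    H, W = fmap.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(fmap[i:i + kH, j:j + kW] * kernel)
    return out

def mean_pool(fmap, size=2):
    # Sub-sampling layer: mean-value pooling over non-overlapping blocks
    H, W = fmap.shape
    return fmap[:H - H % size, :W - W % size].reshape(
        H // size, size, W // size, size).mean(axis=(1, 3))

rng = np.random.default_rng(0)
X_i = rng.standard_normal((28, 28))        # one training patch (receptive field)
k1 = rng.standard_normal((5, 5)) * 0.1     # trainable kernel of the convolution layer
b1, b2 = 0.0, 0.0                          # bias terms

F1 = sigmoid(conv2d_valid(X_i, k1) + b1)   # convolution layer: 24 x 24 feature map
F2 = sigmoid(mean_pool(F1) + b2)           # sub-sampling layer: 12 x 12 feature map
D_i = F2.ravel()                           # flattened deep feature vector
print(F1.shape, F2.shape, D_i.shape)
```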
Figure 2. The process of CNN-based spatial feature extraction. The training samples are squared patches. The convolution layer and sub-sampling layer are interspersed in the framework of CNN.

2.2. Deep Feature-Based Sparse Representation

Deep spatial features generated by the CNN framework are usually of high dimensionality, which makes them inefficient for classification. Therefore, we introduce sparse coding, one of the state-of-the-art techniques, to find a subspace for deep feature representation and, possibly, to improve the classification performance, as shown in Figure 3. The sparse representation classification (SRC) framework was first introduced for face recognition [29]. Similarly, in hyperspectral images, a particular class with high dimensional features in both the spectral and spatial domains should lie in a low dimensional subspace spanned by dictionary atoms (training pixels) of the same class. Specifically, an unknown test pixel can be represented as a linear combination of training pixels from all classes. Concretely, let $x_i \in \mathbb{R}^{M \times 1}$ be a pixel, with $M$ denoting the dimension of the deep features in $D$, and let $A = [A_1, \ldots, A_c, \ldots, A_C]$ be the structural dictionary, where $A_c \in \mathbb{R}^{M \times n_c}$, $c = 1, \ldots, C$, holds the samples of class $c$ in its columns; $C$ is the number of classes, $n_c$ is the number of samples in $A_c$ and $\sum_{c=1}^{C} n_c = N$ is the total number of atoms in $A$. Therefore, a pixel $x_i$ whose class identity is unknown can be represented as a linear combination of atoms from the dictionary $A$:
$$x_i = A\alpha$$
where $\alpha \in \mathbb{R}^{N}$ is the sparse coefficient vector of the unknown pixel $x_i$. Given the structural dictionary $A$, the sparse coefficient $\alpha$ can be obtained by solving the following optimization problem:
$$\hat{\alpha} = \arg\min \|\alpha\|_0 \quad \text{subject to} \quad \|x_i - A\alpha\|_2 \leq \delta$$
where $\|\alpha\|_0$ denotes the $\ell_0$-norm of $\alpha$, which counts the number of nonzero components of the coefficient vector, and $\delta$ is the error tolerance, which accounts for noise and possible modeling error. However, this optimization problem is NP-hard and difficult to solve directly. To tackle it, algorithms such as basis pursuit (BP) [30] and orthogonal matching pursuit (OMP) [31] have been proposed. In the BP algorithm, the $\ell_1$ norm replaces the $\ell_0$ norm; alternatively, the problem can be written with an explicit sparsity constraint:
$$\hat{\alpha} = \arg\min \|x_i - A\alpha\|_2 \quad \text{subject to} \quad \|\alpha\|_0 \leq K$$
where $K$ is the sparsity level, i.e., the number of selected atoms in the dictionary, and $\|\alpha\|_1 = \sum_i |\alpha_i|$. The OMP algorithm proceeds iteratively based on the correlation between the dictionary $A$ and the residual vector $R = x_i - A\alpha$. Specifically, at each iteration, OMP finds the index of the atom that best approximates the residual, adds this atom to the set of selected atoms, updates the residual and computes the estimate of $\alpha$ using the newly-selected atoms. Once the approximation error falls below a prescribed limit, OMP outputs the sparse coefficient vector $\hat{\alpha}$. The class label of $x_i$ can then be determined by the minimal representation error between $x_i$ and its approximation from the sub-dictionary of each class:
$$\hat{c} = \arg\min_{c} \|x_i - A_c \hat{\alpha}_c\|_2, \quad c = 1, \ldots, C$$
Building on this, we propose a deep feature-based OMP algorithm to explore the low dimensional subspace for deep feature representation and image classification.
Figure 3. The process of deep feature-based sparse representation classification.
Algorithm 1 Framework of the deep feature-based sparse coding.
Require: $x_i$, a test pixel with deep features; $A$, the structural dictionary built from the training pixels of all classes; $C$, the number of classes; $K$, the sparsity level.
Ensure: $\hat{\alpha}$, the sparse coefficient matrix.
1: Initialization: set the index set $I^{(1)} = \emptyset$, the residual $R^{(1)} = x_i$ and the iteration counter $iter = 1$.
2: Compute the residual correlation matrix $E^{(iter)} = A^{T} R^{(iter)}$.
3: Select a new adaptive set based on $E^{(iter)}$:
4: Find the indexes $i_c^{(iter)}$ of the best representative atoms and the corresponding coefficient values $v_c^{(iter)}$ for each class $c$.
5: Combine the best representative atoms of each class into a cluster $W_c^{(iter)}$ and obtain the corresponding coefficients $V_c^{(iter)}$ in that cluster.
6: Select the adaptive set $L^{(iter)}$ from the best atoms of $W_c^{(iter)}$ according to the indexes in $V_c^{(iter)}$.
7: Merge the newly-selected adaptive set with the previously-selected ones: $I^{(iter)} = I^{(iter-1)} \cup L^{(iter)}$.
8: Calculate the sparse representation coefficients $\hat{\alpha}^{(iter)}$ on the support $I^{(iter)}$.
9: Update the residual: $R^{(iter)} = x_i - A\hat{\alpha}^{(iter)}$.
10: If the number of selected atoms exceeds the sparsity level $K$, stop and output the final sparse coefficient matrix $\hat{\alpha}$; otherwise, set $iter = iter + 1$ and go to Step 2.
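As a rough illustration of the sparse coding and labeling steps, the sketch below implements a plain OMP followed by the minimum-residual class rule. It assumes the dictionary A holds column-normalized deep features of the training pixels grouped by class; the class-wise atom selection of Algorithm 1 is not reproduced, so this is a simplified stand-in rather than the authors' exact procedure.

```python
import numpy as np

def omp(A, x, K):
    """Orthogonal matching pursuit: greedily select up to K atoms from the
    dictionary A (columns assumed L2-normalized) to approximate the vector x."""
    residual = x.copy()
    support = []
    alpha = np.zeros(A.shape[1])
    for _ in range(K):
        # Atom most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares coefficients on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coef
        residual = x - A[:, support] @ coef
    return alpha

def src_label(A, class_of_atom, x, K=5):
    """Assign the class whose sub-dictionary gives the smallest residual."""
    alpha = omp(A, x, K)
    errors = {}
    for c in np.unique(class_of_atom):
        mask = class_of_atom == c
        errors[c] = np.linalg.norm(x - A[:, mask] @ alpha[mask])
    return min(errors, key=errors.get)

# Toy usage with random data standing in for deep features
rng = np.random.default_rng(1)
A = rng.standard_normal((800, 60))               # 800-dim deep features, 60 atoms
A /= np.linalg.norm(A, axis=0, keepdims=True)    # normalize dictionary columns
class_of_atom = np.repeat(np.arange(3), 20)      # 3 classes, 20 atoms each
x = A[:, 5] + 0.05 * rng.standard_normal(800)    # noisy pixel built from class 0
print(src_label(A, class_of_atom, x))            # expected: 0
```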

2.3. Datasets

In this section, we evaluate the performance of the proposed deep feature-based sparse classification algorithm on two hyperspectral image datasets, i.e., the Reflective Optics System Imaging Spectrometer (ROSIS-03) University of Pavia data and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Indian Pines data.
The AVIRIS Indian Pines image was captured over the agricultural Indian Pines test site in northwestern Indiana. Two-thirds of the site is covered by agricultural crops and one-third by forest or other natural, perennial vegetation. The image comprises 220 spectral bands covering 0.4 to 2.5 μm, and each band measures 145 × 145 pixels with a spatial resolution of 20 m. Prior to the experiments, the water absorption bands were removed. The Indian Pines reference map contains 16 classes, most of which correspond to different types of crops.
The University of Pavia image was acquired by the ROSIS-03 sensor over the University of Pavia, Italy. The image measures 610 × 340 pixels with a spatial resolution of 1.3 m per pixel. It has 115 channels covering the range from 0.43 to 0.86 μm. Prior to the experiments, 12 noisy absorption bands were discarded. Nine information classes were considered for this scene.

2.4. Configuration of CNN

During the deep feature extraction process, it is important to address the configuration of the deep learning framework. The receptive field ($P$), the kernel size ($k$), the number of layers ($n_l$) and the number of feature maps per layer ($n_f$) are the primary variables that affect the quality of the deep features. We empirically set the size of the receptive field to 28 × 28, which offers enough contextual information. The kernel sizes recommended in recent studies for CNN frameworks are 5 × 5, 7 × 7 or 9 × 9 [32]. The 7 × 7 and 9 × 9 kernels have 49 and 81 trainable parameters, respectively, which significantly increases the computational cost of training compared to 5 × 5 kernels. Therefore, we adopted 5 × 5 kernels to accelerate the training process. Once the sizes of the receptive field and the kernels are determined, the main structure of the CNN framework is established. A training patch $X_i$ (receptive field) generates four levels of feature maps (two convolution layers and two sub-sampling layers), and the size of the final output map is 4 × 4 (((28 − 4)/2 − 4)/2 = 4). The number of feature maps ($n_f$) of each layer, however, still needs to be chosen. We constrained the number of feature maps to be equal at each layer; with this configuration, the CNN works like a deep Boltzmann machine (DBM) [33], which should not significantly affect the quality of the deep features. To illustrate the impact of different CNN configurations on classification accuracy, we conducted a series of experiments, as explained in the following sections. The experiments were conducted on spatially independent training sets, with the remaining samples used as test data.
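A hedged sketch of a network matching the configuration above (28 × 28 receptive field, 5 × 5 kernels, two convolution and two sub-sampling layers, 50 feature maps per layer, 4 × 4 output maps) is given below in PyTorch. The number of input principal components n_pc, the sigmoid activations and the linear output head are assumptions made for illustration; the original implementation and training details may differ.

```python
import torch
import torch.nn as nn

n_pc, n_classes = 4, 9          # assumed number of input PCA bands and classes

cnn = nn.Sequential(
    nn.Conv2d(n_pc, 50, kernel_size=5),  # 28x28 -> 24x24, 50 feature maps
    nn.Sigmoid(),
    nn.AvgPool2d(2),                     # mean-pooling sub-sampling: 24x24 -> 12x12
    nn.Conv2d(50, 50, kernel_size=5),    # 12x12 -> 8x8
    nn.Sigmoid(),
    nn.AvgPool2d(2),                     # 8x8 -> 4x4 final feature maps
    nn.Flatten(),                        # deep feature D_i of length 50 * 4 * 4 = 800
)
head = nn.Linear(50 * 4 * 4, n_classes)  # output layer used only while training the CNN

# One SGD step on a dummy batch with the squared loss described in Section 2.1
optimizer = torch.optim.SGD(list(cnn.parameters()) + list(head.parameters()), lr=0.1)
patches = torch.randn(8, n_pc, 28, 28)                  # 8 training patches X_i
targets = nn.functional.one_hot(torch.randint(0, n_classes, (8,)),
                                n_classes).float()      # labels t_i
loss = 0.5 * ((head(cnn(patches)) - targets) ** 2).sum(dim=1).mean()
loss.backward()
optimizer.step()

deep_features = cnn(patches)            # 8 x 800 flattened deep features
print(deep_features.shape)
```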

2.4.1. CNN Depth Effect

The depth of the CNN plays an important role in classification accuracy, because it controls the quality of the deep features in terms of the level of abstraction. To measure the effect of the depth parameter, a series of experiments was conducted on the Pavia University and Indian Pines datasets. We tested four CNN depths, from 1 to 4, with the feature number fixed at 50. Overall accuracy was used to measure the classification performance of the different depth configurations. The experimental results are shown in Figure 4 and Figure 5.
As can be seen from the figures, the classification accuracy increases as the depth increases. The shallow layers contain low-level spatial features, which vary greatly because of their constrained representation power, whereas the features in deeper layers are more robust and representative. In addition, the shallow CNNs appear to suffer more from overfitting, as shown in Figure 5.
Figure 4. Overall accuracies of the University of Pavia dataset classified by CNN under different depths.
Figure 5. Overall accuracies of the Indian Pines dataset classified by CNN under different depths.

2.4.2. CNN Feature Number Effect

In the CNN framework, the number of feature maps determines the dimensionality of the extracted spatial features. To measure the effect of the feature number on classification accuracy, a series of experiments was conducted. The feature number was varied from 10 to 100, and the CNN was constructed with a four-layer structure. Overall accuracy was used to measure the performance of the CNN-based classification algorithm. The classification results are reported in Figure 6.
Figure 6. Classification results by setting different deep feature numbers for the CNN framework.
For the University scene, the classification accuracy increased with the feature number, but no appreciable change was observed beyond 50 deep features. A similar pattern can be seen for the Indian Pines dataset. Unlike in the University scene, however, the classification accuracy for the Indian Pines data dropped significantly after 50 features, reaching its lowest point at 90. This indicates that the classification accuracy becomes unstable when the number of deep features increases beyond a limit. Therefore, in our experiments, the number of deep features was set to 50.

2.5. Analysis of Sparse Representation

To assess the effectiveness of sparse representation, we analyzed the relationship between the size of the training dictionary, in both the EMAP space and the deep feature space, and the classification accuracy obtained by the OMP algorithm. In Figure 7, the classification accuracies are plotted as a function of the training dictionary size. The best classification accuracies are obtained by exploring the sparse representation of deep features for both the Indian Pines and Pavia University datasets. Generally, as the number of training samples increases, the uncertainty of the classes decreases.
Figure 7. Classification OAs as a function of training dictionary size (expressed as a percentage of training samples for each class) for the Indian Pines and Pavia University datasets.
The following experiment illustrates the advantage of sparse representation in the deep feature space over EMAP-based sparse coding for image classification. We considered a training dictionary made up of 1043 atoms and used the remaining samples as the test set. After constructing the dictionary, we randomly selected a pixel (belonging to Class 3) for sparse representation analysis; its sparse coefficients are shown as bars in Figure 8. It can be seen that, in the original spectral space, the sparse coefficients are so mixed up that it is hard to distinguish one class from another. In the EMAP space, the differences between classes become clearer, but highly mixed pixels remain hard to classify. In the deep feature space, the unknown pixel can clearly be assigned to Class 3, because the representation is more discriminative than in the spectral or EMAP space. The reason for this is that the redundancy of the spectral information and the EMAP features greatly reduces the representativeness of the pixels, whereas in the deep feature space the correlation between different features is low, making it more discriminative than the spectral and EMAP spaces.
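If the sparse coefficients and the class labels of the dictionary atoms are available (for example, alpha and class_of_atom from the OMP sketch in Section 2.2), a bar plot of this kind can be reproduced as follows; the function name and plotting choices are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_sparse_coefficients(alpha, class_of_atom):
    # Bar plot of sparse coefficients, with dictionary atoms ordered by class
    order = np.argsort(class_of_atom, kind="stable")
    plt.bar(np.arange(alpha.size), alpha[order])
    # Mark class boundaries so coefficient concentration per class is visible
    boundaries = np.flatnonzero(np.diff(class_of_atom[order])) + 0.5
    for b in boundaries:
        plt.axvline(b, color="gray", linestyle="--", linewidth=0.5)
    plt.xlabel("Dictionary atom (grouped by class)")
    plt.ylabel("Coefficient value")
    plt.show()

# plot_sparse_coefficients(alpha, class_of_atom)
```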
Figure 8. Estimated sparse coefficients for one pixel (belonging to Class 3) in the Indian Pines image. (a) Spectral space; (b) extended morphological attribute profile (EMAP) space; (c) deep feature space.

2.6. Comparison of Different Methods

The main purpose of the experiments on these remote sensing datasets is to compare the classification performance of different state-of-the-art algorithms. Prior to feature extraction and classification, all of the datasets were whitened with PCA, preserving the first principal components, which contained more than 98% of the total variance.
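The whitening step can be reproduced with scikit-learn's PCA by requesting enough components to explain at least 98% of the variance. The (rows, cols, bands) layout of the input cube and the variable names are assumptions for illustration, not part of the original work:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_whiten(cube, variance=0.98):
    """Whiten a (rows, cols, bands) hyperspectral cube, keeping the first
    principal components that explain at least `variance` of the total variance."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)
    pca = PCA(n_components=variance, whiten=True)   # a fractional n_components keeps
    reduced = pca.fit_transform(pixels)             # enough PCs for that variance ratio
    return reduced.reshape(rows, cols, -1), pca
```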
To assess the effect of the deep features, the well-known extended morphological attribute profiles (EMAPs) were used as spatial features for classification in the spatial domain. Specifically, the EMAPs were built using the area and standard deviation attributes. Following [34], the threshold values for the area attribute were chosen in the range of {50, 500} with a step of 50, and those for the standard deviation in the range of 2.5% to 20% with a step of 2.5%. However, both EMAP and deep features commonly exhibit great redundancy as well as high dimensionality. To address the importance of the sparsity constraint on such spectral and spatial features, we also included the original spectral information in our EMAP and deep feature-based sparse representation classification experiments. It should be noted that, in all of the experiments, the OMP algorithm was used to approximately solve the sparse problem for the original spectral information, the EMAPs and the deep features, denoted respectively as Spe_o, EMAP_o and Deep_o. We also compared the proposed method with the nonlocal weighting sparse representation (NLW-SR) [2] and the spectral-spatial deep convolutional neural network (SSDCNN) [25] in terms of classification accuracy.
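For reference, the attribute-filter thresholds quoted above can be generated as follows; how they are applied inside the EMAP construction is not reproduced here:

```python
import numpy as np

area_thresholds = np.arange(50, 501, 50)      # 50, 100, ..., 500 (area attribute)
std_thresholds = np.arange(2.5, 20.1, 2.5)    # 2.5%, 5.0%, ..., 20% (standard deviation)
print(area_thresholds.tolist(), std_thresholds.tolist())
```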
In addition, we compared the sparse-representation-based classification accuracy of OMP with the accuracies obtained by several state-of-the-art methods. Recently, some novel classification strategies have been proposed for classifying hyperspectral images, such as random forest [35]. However, to evaluate the robustness of the deep features, the widely-used SVM classifier was taken as the benchmark in the experiments. The parameters of the SVM were determined by five-fold cross-validation, and the polynomial kernel was selected for all of the experiments, which makes it easy to reveal the effectiveness of the deep features in the comparisons. We denote the SVM-based classifications of the spectral information, the EMAPs and the deep features respectively as Spe_s, EMAP_s and Deep_s.
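The SVM benchmark can be set up along these lines with scikit-learn; the parameter grid and the feature scaling are illustrative assumptions, while the polynomial kernel and the five-fold cross-validation follow the text:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Polynomial-kernel SVM with parameters chosen by five-fold cross-validation
svm = make_pipeline(StandardScaler(), SVC(kernel="poly"))
param_grid = {"svc__C": [1, 10, 100], "svc__degree": [2, 3], "svc__gamma": ["scale", 0.1]}
search = GridSearchCV(svm, param_grid, cv=5)
# search.fit(train_features, train_labels)   # features: spectral, EMAP or deep
# predicted = search.predict(test_features)
```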
A series of experiments was conducted to extract deep features with different numbers of feature maps. The training dictionary was composed of randomly-selected samples from the reference map, and the remaining samples were used to evaluate the classification performance. Overall accuracy (OA), average accuracy (AA) and the Kappa coefficient were used to quantitatively measure the performance of the proposed method. The classification results are shown in Figure 9 and Figure 10.
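OA, AA and the Kappa coefficient can be computed from the reference and predicted labels of the test set, for example:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def classification_scores(y_true, y_pred):
    """Overall accuracy (OA), average accuracy (AA) and Kappa coefficient."""
    oa = accuracy_score(y_true, y_pred)
    cm = confusion_matrix(y_true, y_pred)
    per_class = np.diag(cm) / cm.sum(axis=1)      # per-class (producer's) accuracy
    aa = per_class.mean()
    kappa = cohen_kappa_score(y_true, y_pred)
    return oa, aa, kappa

print(classification_scores([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0]))
```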
Figure 9. Classification results obtained by different classifiers for the AVIRIS Indian Pines scene. (a) Original map; (b) Reference map; (c) Spe_s classification map; (d) EMAP_s classification map; (e) Deep_s classification map; (f) Spe_o classification map; (g) EMAP_o classification map; (h) Nonlocal weighting sparse representation (NLW-SR) classification map; (i) Spectral-spatial deep convolutional neural network (SSDCNN) classification map; (j) Deep_o classification map.
Figure 10. Classification results obtained by different classifiers for the Reflective Optics System Imaging Spectrometer (ROSIS) Pavia University scene. (a) Original map; (b) Reference map; (c) Spe_s classification map; (d) EMAP_s classification map; (e) Deep_s classification map; (f) Spe_o classification map; (g) EMAP_o classification map; (h) NLW-SR classification map; (i) SSDCNN classification map; (j) Deep_o classification map.

2.7. Experiments with the AVIRIS Indian Pines Scene

In our first experiment, with the AVIRIS Indian Pines dataset, we investigated the characteristics of the CNN-based deep features. Specifically, we considered the four-layer CNN with 50 feature maps at each layer as the default configuration for deep feature generation.
We compared the classification accuracies obtained for the Indian Pines dataset by the proposed method with those obtained by other state-of-the-art classification methods. To better illustrate the classification accuracies obtained with a limited number of training samples, the individual class accuracies obtained with 10% training samples are presented in Table 1. As can be seen from the table, in most cases the proposed deep feature-based sparse classification (Deep_o) method provided the best individual class accuracies compared to the other methods. When only the spectral information is considered, the Indian Pines image is difficult to classify because of the spectral mixture phenomenon. However, by introducing spatial information (EMAP), higher classification accuracies can be obtained than with methods using only spectral information.
Table 1. OA, average accuracy (AA) and Kappa statistic obtained after executing 10 Monte Carlo runs for the AVIRIS Indian Pines data.
| Class | Train | Test | Spe_s | EMAP_s | Deep_s | Spe_o | EMAP_o | NLW-SR | SSDCNN | Deep_o |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 3 | 43 | 54.88 | 94.88 | 85.42 | 33.33 | 96.51 | 96.34 | 90.34 | 95.83 |
| 2 | 14 | 1414 | 42.63 | 68.06 | 87.67 | 52.24 | 72.23 | 95.30 | 95.18 | 96.08 |
| 3 | 8 | 822 | 29.53 | 60.22 | 74.40 | 32.66 | 66.34 | 93.50 | 95.03 | 95.46 |
| 4 | 3 | 234 | 17.65 | 35.73 | 89.52 | 32.38 | 51.92 | 87.88 | 93.52 | 97.28 |
| 5 | 5 | 478 | 57.51 | 74.10 | 87.25 | 71.58 | 74.29 | 95.70 | 93.92 | 98.01 |
| 6 | 7 | 723 | 81.02 | 91.59 | 98.98 | 83.63 | 88.73 | 99.31 | 99.71 | 99.80 |
| 7 | 3 | 25 | 86.04 | 95.02 | 56.52 | 21.73 | 98.00 | 56.40 | 96.12 | 97.06 |
| 8 | 5 | 473 | 62.37 | 94.52 | 99.35 | 93.18 | 99.98 | 99.86 | 94.61 | 99.77 |
| 9 | 3 | 17 | 70.59 | 85.88 | 33.33 | 5.56 | 97.06 | 50.55 | 96.34 | 99.55 |
| 10 | 10 | 962 | 39.73 | 75.60 | 71.52 | 36.28 | 83.37 | 92.68 | 90.36 | 93.66 |
| 11 | 25 | 2430 | 73.31 | 87.37 | 94.32 | 63.21 | 88.61 | 96.43 | 95.67 | 96.53 |
| 12 | 6 | 587 | 21.72 | 57.68 | 77.17 | 39.31 | 70.49 | 91.14 | 88.34 | 91.52 |
| 13 | 3 | 202 | 87.28 | 96.44 | 99.36 | 93.15 | 98.61 | 91.52 | 96.65 | 100 |
| 14 | 13 | 1252 | 84.03 | 97.32 | 99.92 | 93.81 | 95.67 | 99.63 | 98.36 | 99.91 |
| 15 | 4 | 382 | 17.38 | 65.16 | 79.53 | 42.69 | 75.68 | 89.97 | 95.13 | 97.91 |
| 16 | 3 | 90 | 70.67 | 86.89 | 95.31 | 85.88 | 95.11 | 98.09 | 97.84 | 98.29 |
| OA | | | 56.73 | 79.07 | 83.18 | 61.42 | 82.70 | 95.38 | 96.02 | 97.45 |
| AA | | | 56.04 | 79.17 | 88.83 | 55.04 | 84.54 | 89.64 | 93.59 | 95.91 |
| Kappa | | | 49.88 | 76.08 | 87.18 | 55.68 | 80.26 | 95.26 | 94.67 | 96.36 |
| Time (s) | | | 2.13 | 13.26 | 86.32 | 31.32 | 40.23 | 35.36 | 124.32 | 92.61 |
Regarding the effect of sparse representation, some important observations can be made from Table 1. After introducing the sparse coding technique, both the EMAP and the deep feature-based classification methods show a significant improvement in classification accuracy. This reveals the importance of sparse representation techniques, particularly in the EMAP and deep feature spaces.

2.8. Experiments with the ROSIS Pavia University Scene

In the second experiment, with the ROSIS Pavia University scene, we investigated the characteristics of the CNN-based deep features. As for the AVIRIS Indian Pines dataset, we considered a CNN with four layers and 50 feature maps at each layer as the default configuration for deep feature generation. Unlike the Indian Pines image, the Pavia University image has a high spatial resolution, which makes classification even more challenging. Nine thematic land cover classes were identified on the university campus: trees, asphalt, bitumen, gravel, metal sheets, shadows, self-blocking bricks, meadows and bare soil, with 42,776 reference samples in total. We randomly selected 300 training samples per class and obtained classification results with the different classification methods; the results are presented in Table 2. The table shows the OA, AA, Kappa and individual class accuracies obtained with the different classification algorithms. It can be seen that the proposed classification method Deep_o provided the best results in terms of OA, AA and most of the individual class accuracies.
In comparison, the classification accuracies obtained with the SVM classifier tend to be lower than those obtained with the sparse coding-based methods. With the introduction of EMAP features, the classification accuracies increase significantly, indicating that spatial features are important, especially for high spatial resolution images. However, great redundancy exists in the EMAP space; therefore, sparse coding-based OMP on the EMAP features (EMAP_o) gives better classification accuracy. Compared to the EMAP features, the deep features are more effective and representative; therefore, the deep feature-based classification methods (both Deep_s and Deep_o) provide higher classification accuracy.
Table 2. OA, AA and Kappa statistic obtained after executing 10 Monte Carlo runs for the ROSIS Pavia University scene.
| Class | Train | Test | Spe_s | EMAP_s | Deep_s | Spe_o | EMAP_o | NLW-SR | SSDCNN | Deep_o |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 300 | 6631 | 84.87 | 86.96 | 96.78 | 65.66 | 89.31 | 90.25 | 84.56 | 94.78 |
| 2 | 300 | 18,649 | 64.53 | 78.99 | 97.83 | 60.38 | 84.74 | 97.13 | 98.95 | 99.29 |
| 3 | 300 | 2099 | 74.54 | 81.24 | 77.87 | 52.35 | 82.99 | 99.80 | 96.62 | 98.65 |
| 4 | 300 | 3064 | 94.37 | 95.02 | 87.74 | 92.57 | 76.92 | 97.42 | 95.33 | 97.53 |
| 5 | 300 | 1345 | 99.61 | 99.41 | 97.74 | 98.93 | 99.32 | 99.97 | 76.65 | 100 |
| 6 | 300 | 5029 | 72.77 | 84.98 | 81.48 | 47.89 | 88.62 | 99.50 | 98.45 | 99.91 |
| 7 | 300 | 1330 | 96.77 | 98.90 | 69.12 | 84.71 | 99.71 | 98.85 | 99.62 | 99.80 |
| 8 | 300 | 3682 | 77.59 | 90.88 | 86.92 | 74.59 | 87.88 | 98.24 | 94.57 | 99.09 |
| 9 | 300 | 947 | 99.44 | 99.21 | 99.89 | 96.19 | 92.08 | 86.70 | 96.83 | 98.29 |
| OA | | | 75.28 | 84.92 | 91.80 | 65.62 | 86.61 | 96.50 | 95.18 | 98.35 |
| AA | | | 84.94 | 90.62 | 88.39 | 74.81 | 89.06 | 96.43 | 93.51 | 98.61 |
| Kappa | | | 69.14 | 80.90 | 90.12 | 57.15 | 82.85 | 95.34 | 93.64 | 97.86 |
| Time (s) | | | 2.51 | 34.92 | 93.19 | 86.82 | 97.63 | 57.31 | 134.75 | 104.82 |

3. Conclusions

In this paper, we investigated a new classification method that integrates sparse representations and deep learning techniques for spatial-spectral classification of hyperspectral remote sensing images. The classification results indicate that the proposed method can effectively classify hyperspectral images. Furthermore, it can appropriately exploit the inherent sparsity present in deep features to provide state-of-the-art classification results. We also investigated the characteristics of deep learning features, which are more discriminative than low-level hand-crafted spatial features. In comparison to the state-of-the-art classifiers, the proposed method gives very promising results, particularly when the number of available training samples is very small. Future work will focus on the development of a computationally-efficient implementation of the proposed method.

Acknowledgments

The authors would like to thank David A. Landgrebe for making the AVIRIS Indian Pines hyperspectral dataset available to the community and Paolo Gamba for providing the ROSIS data over Pavia, Italy, along with the training and test sets. We also thank Mauro Dalla Mura for providing the EMAP code. Last, but not least, the authors would like to thank the Associate Editor and the three anonymous reviewers for their detailed and highly constructive criticisms, which greatly helped us to improve the quality and presentation of our manuscript.

Author Contributions

Heming Liang designed the study, developed the methodology, performed the experiments, analyzed the experimental results and wrote this paper. Qi Li supervised the study and revised this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bruckstein, A.M.; Donoho, D.L.; Elad, M. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 2009, 51, 34–81.
2. Zhang, H.; Li, J.; Huang, Y.; Zhang, L. A nonlocal weighted joint sparse representation classification method for hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2056–2065.
3. Chen, Y.; Nasrabadi, N.; Tran, T. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985.
4. Tang, Y.Y.; Yuan, H.; Li, L. Manifold-based sparse representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7606–7618.
5. Fauvel, M.; Benediktsson, J.; Chanussot, J.; Sveinsson, J. Spectral and spatial classification of hyperspectral data using SVMs and morphological profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814.
6. Bernabe, S.; Reddy Marpu, P.; Plaza, A.; Dalla Mura, M.; Atli Benediktsson, J. Spatial classification of multispectral images using kernel feature space representation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 288–292.
7. Benediktsson, J.; Pesaresi, M.; Amason, K. Classification and feature extraction for remote sensing images from urban areas based on morphological transformations. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1940–1949.
8. Plaza, A.; Martinez, P.; Plaza, J.; Perez, R. Dimensionality reduction and classification of hyperspectral image data using sequences of extended morphological transformations. IEEE Trans. Geosci. Remote Sens. 2005, 43, 466–479.
9. Benediktsson, J.; Palmason, J.; Sveinsson, J. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491.
10. Dalla Mura, M.; Benediktsson, J.; Waske, B.; Bruzzone, L. Morphological attribute profiles for the analysis of very high resolution images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762.
11. Song, B.; Li, J.; Dalla Mura, M.; Li, P.; Plaza, A.; Bioucas-Dias, J.; Atli Benediktsson, J.; Chanussot, J. Remotely sensed image classification using sparse representations of morphological attribute profiles. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5122–5136.
12. Li, J.; Zhang, H.; Zhang, L. Supervised segmentation of very high resolution images by the use of extended morphological attribute profiles and a sparse transform. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1409–1413.
13. Fang, L.; Li, S.; Kang, X.; Benediktsson, J. Spatial hyperspectral image classification via multiscale adaptive sparse representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7738–7749.
14. Zhao, W.; Guo, Z.; Yue, J.; Zhang, X.; Luo, L. On combining multiscale deep learning features for the classification of hyperspectral remote sensing imagery. Int. J. Remote Sens. 2015, 36, 3368–3379.
15. Ji, R.; Gao, Y.; Hong, R.; Liu, Q.; Tao, D.; Li, X. Spectral-spatial constraint hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1811–1824.
16. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
17. Le, Q.V. Building high-level features using large scale unsupervised learning. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; pp. 8595–8598.
18. Rifai, S.; Vincent, P.; Muller, X.; Glorot, X.; Bengio, Y. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), Bellevue, WA, USA, 28 June–2 July 2011; pp. 833–840.
19. Razavian, A.S.; Azizpour, H.; Sullivan, J.; Carlsson, S. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Columbus, OH, USA, 23–28 June 2014; pp. 512–519.
20. Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 1717–1724.
21. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408.
22. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2012; pp. 1097–1105.
23. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
24. Chen, X.; Xiang, S.; Liu, C.L.; Pan, C.H. Vehicle detection in satellite images by hybrid deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1797–1801.
25. Yue, J.; Zhao, W.; Mao, S.; Liu, H. Spectral-spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 2015, 6, 468–477.
26. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551.
27. Arel, I.; Rose, D.; Karnowski, T. Deep machine learning—A new frontier in artificial intelligence research [research frontier]. IEEE Comput. Int. Mag. 2010, 5, 13–18.
28. Nebauer, C. Evaluation of convolutional neural networks for visual recognition. IEEE Trans. Neural Netw. 1998, 9, 685–696.
29. Wright, J.; Yang, A.; Ganesh, A.; Sastry, S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
30. Tropp, J. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242.
31. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
32. Egmont-Petersen, M.; de Ridder, D.; Handels, H. Image processing with neural networks—A review. Pattern Recognit. 2002, 35, 2279–2301.
33. Salakhutdinov, R.; Hinton, G.E. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Clearwater Beach, FL, USA, 16–18 April 2009; pp. 448–455.
34. Pedergnana, M.; Marpu, P.; Mura, M.; Benediktsson, J.; Bruzzone, L. A novel technique for optimal feature selection in attribute profiles based on genetic algorithms. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3514–3528.
35. Marpu, P.; Pedergnana, M.; Mura, M.; Peeters, S.; Benediktsson, J.; Bruzzone, L. Classification of hyperspectral data using extended attribute profiles based on supervised and unsupervised feature extraction techniques. Int. J. Image Data Fusion 2012, 3, 269–298.
