Article

A Sparse Representation-Based Sample Pseudo-Labeling Method for Hyperspectral Image Classification

1 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2 Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education of China, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(4), 664; https://doi.org/10.3390/rs12040664
Submission received: 30 December 2019 / Revised: 5 February 2020 / Accepted: 11 February 2020 / Published: 17 February 2020

Abstract

Hyperspectral image classification methods may not achieve good performance when a limited number of training samples are provided. However, labeling sufficient samples of hyperspectral images to achieve adequate training is quite expensive and difficult. In this paper, we propose a novel sample pseudo-labeling method based on sparse representation (SRSPL) for hyperspectral image classification, in which sparse representation is used to select the purest samples to extend the training set. The proposed method consists of three steps. First, intrinsic image decomposition is used to obtain the reflectance components of the hyperspectral image. Second, hyperspectral pixels are sparsely represented using an overcomplete dictionary composed of all training samples. Finally, an information entropy is defined for the vectorized sparse representation, and the pixels with low information entropy are selected as pseudo-labeled samples to augment the training set. The quality of the generated pseudo-labeled samples is evaluated in terms of classification accuracy, i.e., overall accuracy, average accuracy, and Kappa coefficient. Experimental results on four real hyperspectral data sets demonstrate excellent classification performance using the newly added pseudo-labeled samples, which indicates that the generated samples are of high confidence.

Graphical Abstract

1. Introduction

Hyperspectral images (HSIs) have hundreds of spectral bands that contain detailed spectral information. Thus, hyperspectral images are widely used in precision agriculture [1], geological prospecting [2], target detection [3], and landscape classification [4]. Classification is one of the important branches in hyperspectral image processing [5] because it can help people understand hyperspectral image scenes via visual classification. However, in the context of supervised classification, the existence of the “Hughes phenomenon” [6] caused by an imbalance between the limited training samples and the extreme spectral dimension of hyperspectral images influences the classification performance [7].
To alleviate the “Hughes phenomenon”, dimensionality reduction [8,9] and semi-supervised classification [10,11] have been extensively studied. The former can reduce the dimensions of hyperspectral images, and the latter can increase the number of training samples. In general, the dimension reduction method can be divided into two categories: feature extraction and feature selection. Among them, feature extraction [12,13] reduces computational complexity by projecting high-dimensional data into low-dimensional data space, and feature selection [14] selects appropriate bands from the original set of spectral bands. In addition to the dimensionality reduction method, semi-supervised classification is often used in the case of small samples. For example, graph-based [15,16], active learning-based [17,18], and generative model-based [19,20] methods have attracted the attention of researchers seeking to tackle the problems caused by the limited availability of labeled samples. These semi-supervised methods can compensate for the lack of labeled samples and improve the classification results.
Pseudo-labeled sample generation is also a commonly used approach to semi-supervised classification, which attempts to assign pseudo-labels to more samples. In [21], a combination of active learning (AL) and semi-supervised learning (SSL) is used in a novel manner to obtain more pseudo-labeled samples. In [22], high-confidence semi-labeled samples whose class labels are determined by an ML classifier are added to the training samples to improve hyperspectral image classification performance. In [23], a constrained Dirichlet process mixture model is proposed that produces high-quality pseudo-labels for unlabeled data. In [24], a sifting generative adversarial network (SiftingGAN) is used to generate more numerous, more diverse, and more authentic labeled samples for data augmentation. However, active learning requires manual intervention, and the other methods require multiple iterations or deep learning to converge.
In recent years, sparse representation-based classification (SRC) methods have drawn much attention for dealing with small training sample problems in hyperspectral image classification [25,26]. The goal of SRC is to represent an unknown sample compactly using only a small number of atoms in an overcomplete dictionary. The class label with the minimal representation error is then assigned to the unlabeled sample. Some researchers extended SRC-based methods to utilize spatial information in hyperspectral images, e.g., joint SRC [27,28], kernel SRC [29,30,31], and adaptive SRC [32,33]. In addition, some work has used multi-objective optimization to improve SRC directly [34,35]. Ideally, the nonzero elements in a sparse representation vector will be associated with atoms of a single class in the dictionary, and we can easily assign a test sample to that class.
Motivated by sparse representation, a novel sparse representation-based sample pseudo-labeling (SRSPL) method is proposed. In the field of remote sensing, if a pixel contains only one type of land cover, this pixel is called a pure pixel; otherwise, it is a mixed pixel. Due to the limitation of the spatial resolution of the imaging spectrometers, the problem of mixed pixels is quite common in hyperspectral remote sensing images [36]. In general, for the mixed pixels of HSIs, the sparse representation of their spectral features usually involves multiple classes, so the representation coefficient vectors have larger entropies. Conversely, for the pure pixels of HSIs, the spectral features can be linearly represented using atoms from a single class in the overcomplete dictionary, so the representation coefficient vectors of pure pixels will have smaller entropies. Thus, the pixels with small sparse representation entropies can be used to augment the training samples. Using this method, the size of the training sample set can be effectively increased. However, due to factors such as illumination, shading, and noise, the spectral features of different land covers in natural scenes are usually subject to some degree of distortion. Consequently, the generated pseudo-labeled samples cannot guarantee a high quality when the sparse representation method is applied to the hyperspectral pixels. To solve this problem, the intrinsic image decomposition (IID) [37] method is used in this paper before generating samples to extract the spectral reflectance components of the original hyperspectral image. The classification accuracy on four HSIs verifies that the pseudo-labeled samples generated by the proposed method are of high quality.
The main innovations of the proposed SRSPL method are summarized as follows:
1. The information entropy of sparse representation coefficient vectors is first used in the training sample generation of HSIs. The SRC method will assign class labels with minimal sparse representation errors for hyperspectral pixels other than training samples and test samples. However, we only select pixels with small class uncertainty as pseudo-labeled samples to augment the original training set.
2. We have found that, for HSI, remote sensing reflectance extraction is necessary before using the sparse representation method. In this paper, the effective IID method is performed on the HSI, which is beneficial for reducing the sparse representation error.
The experimental results illustrate that the proposed method for pseudo-labeled sample generation based on sparse representation is very effective in improving the classification results.
The remainder of this paper is organized as follows. Related work is described in Section 2. The proposed SRSPL method is introduced in Section 3. The experiments are presented in Section 4. Discussion and conclusions are provided in Section 5 and Section 6, respectively.

2. Related Work

2.1. Intrinsic Image Decomposition of Hyperspectral Images

Intrinsic image decomposition (IID) [37] is a challenging problem in computer vision that aims to model the ability of human vision to distinguish the reflectance and shading of objects from a single image [38]. Since the intrinsic components of an image reflect different physical characteristics of the scene, e.g., reflectance, illumination, and shading, many tasks such as natural image segmentation [39] can benefit from IID. Given a single RGB image $G$, the IID algorithm decomposes $G$ into two components, the spectral reflectance component $R$ and the shading component $S$:

$$ G = R \cdot S \qquad (1) $$

where the product is taken pixelwise. We denote each pixel $i \in G$ as $G_i = R_i S_i$, where $G_i = (G_i^r, G_i^g, G_i^b)$ and $R_i = (R_i^r, R_i^g, R_i^b)$; the superscripts $r$, $g$, and $b$ refer to the red, green, and blue channels of a color image, respectively.
The optimization-based IID method [40] rests on an assumption about local color characteristics: within a local window of an image, changes in pixel values are usually caused by changes in reflectance [41]. Under this assumption, the shading component can be separated from the input image, and the reflectance value of a pixel can be represented as the weighted sum of its neighboring pixel values:

$$ R_i = \sum_{j \in N(i)} w_{ij} R_j, \qquad w_{ij} = e^{-\left( \frac{(Y_i - Y_j)^2}{2\sigma_{iY}^2} + \frac{A(G_i, G_j)^2}{\sigma_{iA}^2} \right)} \qquad (2) $$

where $N(i)$ is the set of neighboring pixels around pixel $i$; $w_{ij}$ measures the similarity of the intensity values and the spectral angle between pixels $i$ and $j$; $Y$ is the intensity image, calculated by averaging all the image bands; $R_j = (R_j^r, R_j^g, R_j^b)$; $A(G_i, G_j) = \arccos(G_i^r G_j^r + G_i^g G_j^g + G_i^b G_j^b)$ denotes the angle between the pixel vectors $G_i$ and $G_j$; and $\sigma_{iY}$ and $\sigma_{iA}$ denote the variances of the intensities and of the angles in a local window around $i$, respectively.
Based on Equations (1) and (2), the shading component S and the reflectance component R can be obtained by optimizing the following energy function:
$$ E(G, R, S) = \sum_{i \in G} \Big( R_i - \sum_{j \in N(i)} w_{ij} R_j \Big)^2 + \sum_{i \in G} \big( G_i - S_i R_i \big)^2 \qquad (3) $$
The complete description of the optimization process can be found in [40].
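To make Equations (2) and (3) concrete, the following minimal NumPy sketch evaluates the similarity weights and the IID energy for an image stored as a (num_pixels, num_channels) array. The helper names (iid_weights, iid_energy, neighbor_fn) and the normalization of the weights into a convex combination are our illustrative assumptions, not part of the method in [40].

```python
import numpy as np

def iid_weights(G, i, neighbors, sigma_Y, sigma_A):
    """Similarity weights w_ij of Equation (2) for one pixel.

    G: (num_pixels, num_channels) image, rows are pixel vectors.
    i: index of the centre pixel; neighbors: indices of its local window.
    """
    Y = G.mean(axis=1)                       # intensity image (band average)
    Gi = G[i] / (np.linalg.norm(G[i]) + 1e-12)
    w = np.empty(len(neighbors))
    for k, j in enumerate(neighbors):
        Gj = G[j] / (np.linalg.norm(G[j]) + 1e-12)
        ang = np.arccos(np.clip(Gi @ Gj, -1.0, 1.0))   # spectral angle A(G_i, G_j)
        w[k] = np.exp(-((Y[i] - Y[j]) ** 2 / (2 * sigma_Y ** 2)
                        + ang ** 2 / sigma_A ** 2))
    return w / w.sum()                       # normalised so that R_i ~ sum_j w_ij R_j

def iid_energy(G, R, S, neighbor_fn, sigma_Y=0.1, sigma_A=0.1):
    """Energy E(G, R, S) of Equation (3); lower values mean a better decomposition."""
    smooth, fidelity = 0.0, 0.0
    for i in range(G.shape[0]):
        nb = neighbor_fn(i)                  # caller-supplied window indices
        w = iid_weights(G, i, nb, sigma_Y, sigma_A)
        smooth += np.sum((R[i] - w @ R[nb]) ** 2)    # local reflectance consistency
        fidelity += np.sum((G[i] - S[i] * R[i]) ** 2)  # G_i = S_i * R_i data term
    return smooth + fidelity
```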
Kang et al. [37] first extended IID from three-band images to hyperspectral images (HSIs) with more than 100 bands as a feature extraction method and achieved excellent results. Here, the pixel values of an HSI are determined by the spectral reflectance, which depends on the material of the different objects, and by the shading component, which consists of illumination- and shape-dependent properties. The shading component is not directly related to the material of the object. Therefore, IID is adopted to extract the spectral reflectance components of the HSI, which helps distinguish more classes, and to remove the useless spatial information preserved in the shading component of the HSI.

2.2. Sparse Representation Classification of Hyperspectral Images

The sparse representation classification (SRC) framework was first introduced for face recognition [42]. Chen et al. [43] extended SRC to pixelwise HSI classification, relying on the observation that spectral pixels of a particular class should lie in a low-dimensional subspace spanned by dictionary atoms (training pixels) from the same class. An unknown test pixel can then be represented as a linear combination of training pixels from all classes. For HSI classification, suppose that we have $C$ distinct classes and stack an overcomplete dictionary $\mathbf{A} = [A_1, A_2, \ldots, A_N] \in \mathbb{R}^{B \times N}$, where $B$ denotes the number of bands and $N$ the number of training samples. Let $L = \{1, 2, \ldots, C\}$ be the set of labels, with $c \in L$ referring to the $c$th class. A test sample $x \in \mathbb{R}^{B \times 1}$ of the HSI can then be represented as follows:
$$ x = A_1 \alpha_1 + A_2 \alpha_2 + \cdots + A_N \alpha_N = \mathbf{A}\boldsymbol{\alpha} \qquad (4) $$
where $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_N]^T \in \mathbb{R}^N$ is the sparse coefficient vector. Intuitively, the sparsity of $\boldsymbol{\alpha}$ can be measured by its $\ell_0$-norm, which counts the number of nonzero entries in a vector. Since combinatorial $\ell_0$-norm minimization is an NP-hard problem, $\ell_1$-norm minimization, the closest convex relaxation of $\ell_0$-norm minimization, is widely employed in sparse coding; it has been shown that $\ell_0$- and $\ell_1$-norm minimizations are equivalent if the solution is sufficiently sparse [44].
As hyperspectral signals from the same class often span the same low-dimensional subspace constructed by the corresponding training samples, which is reflected in the nonzero entries of the sparse vector, the class of a hyperspectral signal $x$ can be determined directly from the recovered sparse vector $\boldsymbol{\alpha}$. Given the dictionary of training samples $\mathbf{A}$, the sparse coefficient vector $\boldsymbol{\alpha}$ can be recovered by solving the following Lasso problem:
$$ \hat{\boldsymbol{\alpha}} = \arg\min_{\boldsymbol{\alpha}} \frac{1}{2} \| x - \mathbf{A}\boldsymbol{\alpha} \|_2^2 + \lambda \| \boldsymbol{\alpha} \|_1 \qquad (5) $$
where $\|\cdot\|_2$ denotes the $\ell_2$-norm, $\|\cdot\|_1$ denotes the $\ell_1$-norm of a vector, and $\lambda > 0$ is a scalar regularization parameter. This optimization problem can be solved by proximal gradient descent (PGD) [45]. After the sparse vector $\hat{\boldsymbol{\alpha}}$ is obtained, the label of the test sample $x$ of the HSI can be assigned by the minimal reconstruction residual:
$$ \hat{y}_x = \arg\min_{c = 1, \ldots, C} \| x - \mathbf{A}_c \hat{\boldsymbol{\alpha}}_c \|_2^2 \qquad (6) $$
where $\mathbf{A}_c$ is the sub-dictionary of the $c$th class and $\hat{\boldsymbol{\alpha}}_c$ denotes the representation coefficients associated with the $c$th class.
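As an illustration of how Equations (5) and (6) can be implemented, the sketch below solves the Lasso with the basic proximal gradient iteration (ISTA) and then applies the minimal-residual rule. The function names, the fixed iteration count, and the step-size choice are illustrative assumptions, not a prescription from [43] or [45].

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, x, lam=1e-6, n_iter=500):
    """Solve Equation (5) with proximal gradient descent (ISTA).

    A: (B, N) dictionary whose columns are training pixels; x: (B,) test pixel.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
    alpha = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ alpha - x)           # gradient of the quadratic data term
        alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha

def src_label(A, x, class_of_atom, lam=1e-6):
    """Assign the class with the minimal reconstruction residual, Equation (6).

    class_of_atom: (N,) array giving the class label of each dictionary atom.
    """
    alpha = lasso_ista(A, x, lam)
    classes = np.unique(class_of_atom)
    residuals = [np.linalg.norm(x - A[:, class_of_atom == c] @ alpha[class_of_atom == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```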

3. Proposed Method

A novel sparse representation-based sample pseudo-labeling (SRSPL) method is proposed in this paper. The SRSPL method consists of the following three steps. First, IID is used to reduce the dimension and noise of the original hyperspectral image. Second, sparse representation is applied to generate the sparse representation vectors of the hyperspectral pixels other than the training samples and test samples. The information entropies of all these hyperspectral pixels are calculated based on the sparse representation vectors to discriminate purity, and the samples with low representation entropies are selected as candidate samples. Third, these candidate samples are assigned pseudo-labels by the minimal reconstruction residual. These pseudo-labeled samples are then added to the original training set, and an extended random walker (ERW) classifier is trained to evaluate the sample quality. Figure 1 shows a graphical example illustrating the principle of the proposed SRSPL method.

3.1. Feature Extraction of Hyperspectral Images

(1) Spectral Dimensionality Reduction
The initial hyperspectral image $\mathbf{I} = (I_1, \ldots, I_N) \in \mathbb{R}^{B \times N}$ is divided into $M$ groups of equal band size [37], where $N$ is the number of pixels and $B$ is the number of bands of the initial image. The numbers of bands in the groups are denoted by $B_1, B_2, \ldots, B_M$.
The averaging-based image fusion is applied to each group and the resulting fused bands are used for further processing:
$$ \tilde{I}_m = \frac{1}{B_m} \sum_{n=1}^{B_m} I_m^n \qquad (7) $$
where $m$ indexes the $m$th group, $I_m^n$ is the $n$th band in the $m$th group of the original hyperspectral data, $B_m$ is the number of bands in the $m$th group, and $\tilde{I}_m$ is the $m$th fused band. The averaging-based method ensures that the pixels of the dimensionality-reduced hyperspectral image can still be directly interpreted in a physical sense; in other words, the pixels of the dimensionality-reduced image remain related to the reflectance of the scene. Moreover, this method can effectively remove image noise.
(2) Band Grouping
The dimensionality-reduced image $\tilde{\mathbf{I}}$ is partitioned into several subgroups of adjacent bands as follows:

$$ \hat{I}_k = \begin{cases} (\tilde{I}_{(k-1)Z+1}, \ldots, \tilde{I}_{(k-1)Z+Z}), & k = 1, 2, \ldots, \lfloor M/Z \rfloor \\ (\tilde{I}_{M-Z+1}, \ldots, \tilde{I}_M), & k = \lceil M/Z \rceil \neq M/Z \end{cases} \qquad (8) $$

where $\hat{I}_k$ refers to the $k$th subgroup, $\lfloor M/Z \rfloor$ is the largest integer not greater than $M/Z$, and $\lceil M/Z \rceil$ is the smallest integer not less than $M/Z$. Here, $Z$ is the number of bands in each subgroup.
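A small sketch of the two preprocessing steps above, under the assumption that the image is stored as a (B, N) array with bands as rows. np.array_split produces groups of nearly equal size, and the trailing subgroup takes the last Z bands, mirroring the second case of Equation (8); the function names are our own.

```python
import numpy as np

def fuse_bands(I, M):
    """Equation (7): average each of M (nearly) equal-sized band groups.

    I: (B, N) hyperspectral image, rows are bands; returns an (M, N) array.
    """
    groups = np.array_split(np.arange(I.shape[0]), M)   # band indices B_1, ..., B_M
    return np.stack([I[g].mean(axis=0) for g in groups])

def partition_subgroups(I_fused, Z):
    """Equation (8): split the M fused bands into subgroups of Z adjacent bands.

    When Z does not divide M, the last subgroup takes the trailing Z bands
    and therefore overlaps the previous one (second case of Eq. (8)).
    """
    M = I_fused.shape[0]
    subgroups = [I_fused[k * Z:(k + 1) * Z] for k in range(M // Z)]
    if M % Z:                                 # k = ceil(M/Z): the last Z bands
        subgroups.append(I_fused[M - Z:])
    return subgroups
```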
(3) Reflectance Extraction with the Optimization-Based IID
For each subgroup $\hat{I}_k$, the optimization-based IID method is used to obtain the reflectance and shading components:

$$ (R_k^*, S_k^*) = \arg\min_{R_k, S_k} E(\hat{I}_k, R_k, S_k) \qquad (9) $$

where $R_k$ and $S_k$ refer to the reflectance and the shading components of the $k$th subgroup, respectively.
The reflectance components of the different subgroups are combined to obtain the resulting IID features, an $M$-dimensional feature matrix $\tilde{R}$ that can be used for subsequent processing:

$$ \tilde{R} = \begin{bmatrix} R_1 \\ \vdots \\ R_k \\ \vdots \\ R_{\lceil M/Z \rceil} \end{bmatrix} \in \mathbb{R}^{M \times N} \qquad (10) $$

3.2. Candidate Sample Selection Based on Sparse Representation

To find the purest pixels and use them as candidate samples, we first compute the sparse representation of all hyperspectral samples other than the training and test samples (denoted $S_{unlabeled} = \{I_i\}_{i=1}^{u}$). Then, we determine the purity of each sample by calculating the information entropy of its sparse representation. When a sample has low sparse representation entropy, it is very likely to be a pure pixel. The process is as follows:
(1) Calculating the Hyperspectral Sample’s Sparse Representation
For a given sample $I_i \in S_{unlabeled}$, let $\tilde{R}_{I_i} \in \mathbb{R}^M$ denote the reflectance component of $I_i$. The goal of sparse representation is to represent $\tilde{R}_{I_i}$ by a sparse linear combination of the training samples that make up the dictionary. Suppose $L = \{1, 2, \ldots, C\}$ is the set of labels and $c \in L$ refers to the $c$th class. The coefficient vector $\tilde{\boldsymbol{\alpha}}$ of $\tilde{R}_{I_i}$ can be calculated from an overcomplete dictionary $\tilde{\mathbf{A}}$ according to Equation (5). Therefore, $\tilde{R}_{I_i}$ is assumed to lie in the union of the $C$ different subspaces and can be written as a sparse linear combination of all the training samples:

$$ \tilde{R}_{I_i} = \tilde{\mathbf{A}}_1 \tilde{\boldsymbol{\alpha}}_1 + \tilde{\mathbf{A}}_2 \tilde{\boldsymbol{\alpha}}_2 + \cdots + \tilde{\mathbf{A}}_C \tilde{\boldsymbol{\alpha}}_C = [\tilde{\mathbf{A}}_1, \ldots, \tilde{\mathbf{A}}_C] \begin{bmatrix} \tilde{\boldsymbol{\alpha}}_1 \\ \vdots \\ \tilde{\boldsymbol{\alpha}}_C \end{bmatrix} = \tilde{\mathbf{A}} \tilde{\boldsymbol{\alpha}} \qquad (11) $$

where $\tilde{\mathbf{A}} \in \mathbb{R}^{M \times N_m}$ is the structured overcomplete dictionary, which consists of the class sub-dictionaries $\{\tilde{\mathbf{A}}_c\}_{c=1,\ldots,C}$; $N_m$ is the number of training samples; and $\tilde{\boldsymbol{\alpha}}$ is an $N_m$-dimensional coefficient vector formed by concatenating the class-wise sparse vectors $\{\tilde{\boldsymbol{\alpha}}_c\}_{c=1,\ldots,C}$.
The representation coefficient $\tilde{\boldsymbol{\alpha}}$ of $\tilde{R}_{I_i}$ can be recovered by solving the following optimization problem:

$$ \tilde{\boldsymbol{\alpha}} = \arg\min_{\tilde{\boldsymbol{\alpha}}} \frac{1}{2} \| \tilde{R}_{I_i} - \tilde{\mathbf{A}} \tilde{\boldsymbol{\alpha}} \|_2^2 + \lambda \| \tilde{\boldsymbol{\alpha}} \|_1 \qquad (12) $$

where $\lambda$ is a scalar regularization parameter whose optimal value is determined experimentally (see Section 4.2).
(2) Discriminating the Purity of Hyperspectral Samples based on Information Entropy
The information entropy of each $\tilde{R}_{I_i}$ is calculated from its representation coefficients:

$$ \mathrm{Ent}(\tilde{R}_{I_i}) = -\sum_{j=1}^{N_m} \tilde{\alpha}_j \log(\tilde{\alpha}_j) \qquad (13) $$

where $\mathrm{Ent}(\cdot)$ denotes the entropy function and $\tilde{\alpha}_j$ is the $j$th entry of the coefficient vector of $\tilde{R}_{I_i}$. The purity of $\tilde{R}_{I_i}$ is determined according to the magnitude of this entropy.
(3) Finding the Candidate Samples
First, the information entropy of the reflectance component of each sample is obtained. Second, the samples corresponding to these reflectance components are sorted in ascending order of entropy. Third, the first $T$ samples are selected as the candidate sample set $S_{candidate} = \{\tilde{R}_{I_i}\}_{i=1}^{T}$, where the parameter $T$ is an optimal value obtained experimentally.
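The selection step can be sketched as follows. Normalizing the absolute coefficients into a probability distribution before applying Equation (13) is our implementation assumption, since the raw sparse coefficients need not be nonnegative or sum to one.

```python
import numpy as np

def representation_entropy(alpha, eps=1e-12):
    """Equation (13): entropy of a sparse coefficient vector.

    The coefficient magnitudes are normalised into a probability
    distribution first (an implementation assumption).
    """
    p = np.abs(alpha)
    p = p / (p.sum() + eps)
    nz = p > 0                              # skip zero entries (0 * log 0 := 0)
    return float(-np.sum(p[nz] * np.log(p[nz])))

def select_candidates(coeffs, T):
    """Keep the T samples whose sparse codes have the lowest entropy.

    coeffs: list of coefficient vectors, one per unlabeled pixel.
    Returns the indices of the selected (purest) candidate samples.
    """
    entropies = np.array([representation_entropy(a) for a in coeffs])
    return np.argsort(entropies)[:T]
```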

3.3. Pseudo-Label Assignment for Candidate Samples

For each candidate sample, one pseudo-label is determined based on the minimal reconstructed residual:
$$ \tilde{y}_{I_i} = \arg\min_{c = 1, \ldots, C} \| \tilde{R}_{I_i} - \tilde{\mathbf{A}}_c \tilde{\boldsymbol{\alpha}}_c \|_2^2 \qquad (14) $$

where $\tilde{\mathbf{A}}_c$ is the sub-dictionary of the $c$th class and $\tilde{\boldsymbol{\alpha}}_c$ denotes the representation coefficients of the $c$th class. The pseudo-labeled sample set is thus $S_{pseudo} = \{(\tilde{R}_{I_i}, \tilde{y}_{I_i})\}_{i=1}^{T}$.
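A sketch of the assignment rule of Equation (14), reusing the class-wise residual idea from Section 2.2; the argument names (class_of_atom, candidate_idx) are illustrative.

```python
import numpy as np

def assign_pseudo_labels(A, coeffs, samples, class_of_atom, candidate_idx):
    """Equation (14): label each candidate by the minimal class-wise residual.

    A: (M, N_m) reflectance dictionary; coeffs[i]: sparse code of sample i;
    samples: (num_unlabeled, M) reflectance components; class_of_atom: (N_m,)
    class of each dictionary atom. Returns (features, labels) of the T candidates.
    """
    classes = np.unique(class_of_atom)
    feats, labels = [], []
    for i in candidate_idx:
        res = [np.linalg.norm(samples[i]
                              - A[:, class_of_atom == c] @ coeffs[i][class_of_atom == c])
               for c in classes]
        feats.append(samples[i])
        labels.append(classes[int(np.argmin(res))])
    return np.stack(feats), np.array(labels)
```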

3.4. Classification Model Optimization Using Pseudo-Labeled Samples

First, the pseudo-labeled sample set $S_{pseudo}$ and the initial labeled sample set $S_{initial}$ are combined to form the new labeled sample set $S_{labeled}$. Then, the ERW [46] algorithm is used as the classification model to evaluate the quality of the newly generated pseudo-labeled samples: ERW calculates a set of optimized class probabilities for each pixel, and the class with the maximum probability is assigned.

3.5. Pseudo-Code of the Proposed SRSPL Method

For a detailed illustration of the proposed SRSPL method, the pseudocode of the proposed SRSPL method is shown in Algorithm 1.
Algorithm 1: Sparse Representation-Based Sample Pseudo-Labeling (SRSPL) for Hyperspectral Image Classification
Input: Hyperspectral image $\mathbf{I} = (I_1, \ldots, I_N)$; the initial labeled sample set $S_{initial} = \{(I_i, y_i)\}_{i=1}^{l}$; the hyperspectral sample set other than the training and test samples $S_{unlabeled} = \{I_i\}_{i=1}^{u}$
Output: Classification map $\mathbf{C}$
1: Reduce the dimension and noise of $\mathbf{I}$ by averaging-based image fusion (Equation (7)) to obtain the dimensionality-reduced image $\tilde{\mathbf{I}}$.
2: According to Equation (8), partition $\tilde{\mathbf{I}}$ into subgroups of adjacent bands, denoted $\hat{I}_k$.
3: According to Equations (9) and (10), obtain the spectral reflectance component of each $\hat{I}_k$ through the optimization-based IID method, and combine the components into the resulting IID features $\tilde{R}$.
4: For each $I_i \in S_{unlabeled}$, $i = 1, \ldots, u$, do
   (1) According to Equations (11) and (12), solve for the sparse coefficient vector $\tilde{\boldsymbol{\alpha}}$ of sample $I_i$ based on its reflectance component $\tilde{R}_{I_i}$ and the overcomplete dictionary $\tilde{\mathbf{A}}$.
   (2) According to Equation (13), calculate the information entropy of $\tilde{R}_{I_i}$ to discriminate the purity of sample $I_i$.
5: End for
6: Sort the samples corresponding to the reflectance components in ascending order of entropy.
7: Select the first $T$ samples as the candidate sample set $S_{candidate} = \{\tilde{R}_{I_i}\}_{i=1}^{T}$.
8: According to Equation (14), assign a pseudo-label to each sample in $S_{candidate}$ to obtain the pseudo-labeled sample set $S_{pseudo} = \{(\tilde{R}_{I_i}, \tilde{y}_{I_i})\}_{i=1}^{T}$.
9: Combine the initial labeled sample set $S_{initial}$ and the pseudo-labeled sample set $S_{pseudo}$ into the new labeled sample set $S_{labeled}$.
10: Classify the spectral reflectance components $\tilde{R}$ with the extended random walker (ERW) classifier and the new labeled sample set $S_{labeled}$ to obtain the final classification map $\mathbf{C}$.
11: Return $\mathbf{C}$
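For orientation, the following sketch chains the helper functions introduced in the earlier sections into steps 1–9 of Algorithm 1. The IID step is stubbed out as the identity and the final ERW classification (step 10) is omitted, since both are beyond a short example; all names are illustrative, and the defaults M = 32, Z = 4, λ = 10⁻⁶, and T = 40 follow the parameter analysis in Section 4.2.

```python
import numpy as np

def reflectance_of(subgroup):
    """Stub for the optimization-based IID of Equations (9) and (10);
    the identity here, so the sketch stays self-contained and runnable."""
    return subgroup

def srspl_augment(I, train_idx, train_labels, unlabeled_idx,
                  M=32, Z=4, lam=1e-6, T=40):
    """Steps 1-9 of Algorithm 1, reusing fuse_bands, partition_subgroups,
    lasso_ista, select_candidates, and assign_pseudo_labels from above.
    I: (B, N) image; train_idx/unlabeled_idx: pixel column indices.
    Assumes Z divides M so that the subgroups tile the M fused bands."""
    train_labels = np.asarray(train_labels)
    I_fused = fuse_bands(I, M)                                # step 1: Eq. (7)
    subgroups = partition_subgroups(I_fused, Z)               # step 2: Eq. (8)
    R = np.vstack([reflectance_of(g) for g in subgroups])     # step 3: Eqs. (9)-(10)
    A = R[:, train_idx]                                       # dictionary of training pixels
    coeffs = [lasso_ista(A, R[:, i], lam) for i in unlabeled_idx]  # step 4: Eq. (12)
    cand = select_candidates(coeffs, T)                       # steps 6-7: Eq. (13)
    samples = R[:, unlabeled_idx].T                           # reflectance of unlabeled pixels
    feats, labels = assign_pseudo_labels(A, coeffs, samples,
                                         train_labels, cand)  # step 8: Eq. (14)
    X = np.hstack([A, feats.T])                               # step 9: augmented training set
    y = np.concatenate([train_labels, labels])
    return X, y   # feed X, y to the ERW classifier for step 10
```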

4. Experiment

4.1. Experimental Data Sets

(1) Indian Pines data set: The Indian Pines image was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the Indian Pines agricultural test site in northwestern Indiana. The image has 145 × 145 pixels and 220 spectral bands. Twenty water absorption bands (Nos. 104–108, 150–163, and 220) were removed before hyperspectral image classification. The spatial resolution of the Indian Pines image is 20 m per pixel, and the spectral coverage ranges from 0.4 to 2.5 μm. Figure 2 shows a color composite of the Indian Pines image and the corresponding ground-truth data.
(2) University of Pavia data set: The University of Pavia image, which captured the campus of the University of Pavia, Italy, was recorded by the Reflective Optics System Imaging Spectrometer (ROSIS). This image contains 115 bands of size 610 × 340 and has a spatial resolution of 1.3 m per pixel and a spectral coverage ranging from 0.43 to 0.86 μ m. Using a standard preprocessing approach before hyperspectral image classification, 12 noisy channels were removed. Nine classes of interest are considered for this image. Figure 3 shows a color composite of the University of Pavia image and the corresponding ground-truth data.
(3) Salinas data set: The Salinas image was captured by the AVIRIS sensor over Salinas Valley, California, at a spatial resolution of 3.7 m per pixel. The Salinas image contains 224 bands of size 512 × 217. Twenty water absorption bands (Nos. 108–112, 154–167, and 224) were discarded before classification. Figure 4 shows a color composite of the Salinas image and the corresponding ground-truth data.
(4) Kennedy Space Center data set: The Kennedy Space Center (KSC) image was captured by the National Aeronautics and Space Administration (NASA) Airborne Visible/Infrared Imaging Spectrometer instrument at a spatial resolution of 18 m per pixel. The KSC image contains 224 bands of size 512 × 614. The water absorption and low signal-to-noise ratio (SNR) bands were discarded before classification. Figure 5 shows the KSC image and the corresponding ground-truth data.

4.2. Parameter Analysis

(1) Analysis of the Influence of Parameter λ
For sparse representation, the regularization parameter λ used in Equation (12) controls the trade-off between the sparsity level and the reconstruction error, leading to different classification accuracies. Figure 6 shows the effect of varying λ on the classification accuracies for the Indian Pines, University of Pavia, Salinas, and Kennedy Space Center data sets. In the experiment, each of the four data sets is trained with 20 samples per class, and the remaining labeled samples are used for testing. The regularization parameter λ is varied between λ = 1 × 10⁻⁸ and λ = 1. In Figure 6, while λ < 1 × 10⁻⁶, the overall accuracy (OA) shows an increasing trend on each data set; as λ continues to increase, the overall trend of OA is decreasing, because larger values place more weight on the sparsity level and less on the approximation accuracy. Therefore, λ = 1 × 10⁻⁶ is set as the optimal value in this paper.
(2) Analysis of the Influence of Parameter T
The parameter T is selected using a five-fold cross-validation strategy with repeated fine-tuning. As described in Section 3.2, when the entropy of a sample's sparse representation vector is small, the probability of that sample being a pure sample is large. As T increases, the subsequently added samples have larger entropies, and these newly pseudo-labeled samples have a high probability of being mixed pixels. Thus, although the number of samples increases, the resulting gain in classification accuracy is small; consequently, it is important to choose a suitable T. In the experiment, each of the four data sets is trained with 20 samples per class, and the remaining labeled samples are used for testing. Figure 7 shows the impact of different numbers of pseudo-labeled samples on the OA for the four data sets. The OA for each value of T is the mean of 30 random replicate experiments. As Figure 7 shows, for the different data sets, the OA tends to grow when T < 40, although there are local dips related to the quality of the generated samples and the selected sample points. When T > 40, the OA shows a significant downward trend, and at T = 40 the OAs are highest. Therefore, in this paper, we selected T = 40 as the default parameter value.
(3) Analysis of the Influences of Parameters M and Z
The parameters M and Z from Equation (8) are hyperparameters that require tuning. We evaluated their influence through objective and visual analyses. In the experiment, the four data sets are trained with 5, 20, 3, and 15 samples per class, respectively, and the remaining labeled samples are used for testing. Figure 8 shows the OA of the proposed SRSPL method on the four data sets. As Figure 8 shows, the OA fluctuates considerably as M and Z change. When M is less than 10, the accuracy of the proposed method is relatively low, indicating that too few fused features cause useful spectral information to be lost. Furthermore, the results show that the proposed method achieves stable and high accuracy when the subgroup size is less than 8, because the IID method works well on images with few channels. In this paper, the default parameter settings M = 32 and Z = 4 were adopted for subsequent testing of the data sets.
(4) Sensitivity Analysis
The parameters involved in the proposed SRSPL method are mainly the regularization parameter λ, the number of generated pseudo-labeled samples T, the feature number M, and the subgroup size Z. We conducted an extensive experimental analysis of each parameter. The one-parameter-at-a-time (OAT) [47] approach is used for model parameter sensitivity analysis; that is, all other parameters are held fixed while a single parameter is varied, and the effect of that parameter on the model is studied. Figure 9 shows the sensitivity of λ, T, M, and Z for the four data sets. The sensitivity value of λ is the smallest on all four data sets, whereas T, M, and Z show higher sensitivity values. It follows that λ has only a slight effect on the SRSPL method, while T, M, and Z have a greater effect, are highly sensitive, and should be tuned.
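The OAT procedure itself is straightforward; the sketch below shows one way to implement it. The run_model callable is hypothetical (it would wrap a full SRSPL run and return the OA), and taking the OA range over each sweep as the sensitivity score is our choice of summary statistic, which the paper does not specify.

```python
import numpy as np

def oat_sensitivity(run_model, defaults, grids):
    """One-at-a-time (OAT) sensitivity: vary one parameter over its grid
    while pinning all others at their defaults, and record the OA spread.

    run_model: callable mapping a parameter dict to an OA score.
    defaults: dict of default values, e.g. {'lam': 1e-6, 'T': 40, 'M': 32, 'Z': 4}.
    grids: dict mapping each parameter name to the values to sweep.
    """
    sensitivity = {}
    for name, values in grids.items():
        scores = []
        for v in values:
            params = dict(defaults)          # hold the other parameters fixed
            params[name] = v                 # vary only this one
            scores.append(run_model(params))
        scores = np.array(scores)
        sensitivity[name] = scores.max() - scores.min()  # OA range as the score
    return sensitivity
```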

4.3. Performance Evaluation

We applied the proposed SRSPL method and other methods to four hyperspectral images to evaluate the effectiveness of various hyperspectral image classification methods. We compared the proposed SRSPL method with several other classification methods: the traditional SVM [48]; three semi-supervised methods, namely the extended random walker (ERW) [46], spatial-spectral label propagation based on SVM (SSLP-SVM) [49], and the maximizer of the posterior marginal by loopy belief propagation (MPM-LBP) [50]; and two deep learning methods, namely the recurrent 2D convolutional neural network (R-2D-CNN) [51] and the cascaded recurrent neural network (CasRNN) [52]. IID [37] is used in the proposed SRSPL method to extract useful features and reduce noise. Therefore, to verify the validity of IID, an experiment without the IID step is added (recorded as Without-IID). In this section, the effect of different feature extraction methods on the proposed method is also analyzed, e.g., principal component analysis (PCA) [53] and image fusion and recursive filtering (IFRF) [54] (recorded as With-PCA and With-IFRF, respectively). The setting of the parameter T in the With-PCA and With-IFRF methods is the same as in the SRSPL method. The SVM classifier was implemented with the LIBSVM library using a Gaussian kernel. The tests were performed using five-fold cross-validation.
The parameters for the ERW, SSLP-SVM, MPM-LBP, R-2D-CNN, CasRNN, PCA, and IFRF methods were set to the default parameters reported in the corresponding papers. Three common metrics, namely the OA, average accuracy (AA), and Kappa coefficient, were used to evaluate classifier performance. The classification results for each method are averages over 30 experiments to reduce the influence of sample randomness. In Table 1, Table 2 and Table 3, the values on the left are means and those on the right are standard deviations.
The experiment was first performed on the Indian Pines data set. As shown in Table 1, the training samples were randomly selected, with 5, 10, 15, 20, and 25 samples per class of the reference data. Note that, as the number of training samples increases, the classification accuracy grows steadily. The proposed SRSPL method always outperforms the other methods, such as ERW, SSLP-SVM, and MPM-LBP, obtaining the highest overall classification accuracies in all cases (84.73%, 90.90%, 93.88%, 95.52%, and 97.09%, respectively). Compared with the two deep learning methods, R-2D-CNN and CasRNN, the SRSPL method achieves higher accuracy in the small-sample case. Compared with the Without-IID, With-PCA, and With-IFRF variants, Table 1 shows that the proposed SRSPL method obtains higher accuracies, which indicates that feature extraction using the IID method is effective.
Figure 10 shows the classification maps obtained for the Indian Pines data set with the different methods when using 25 training samples from each class. As shown in Figure 10c–h, the proposed SRSPL method performs better than the other classification methods (i.e., ERW, SSLP-SVM, MPM-LBP, R-2D-CNN, and CasRNN) when fewer training samples are provided. For example, the classification maps of the SVM, SSLP-SVM, MPM-LBP, R-2D-CNN, and CasRNN methods include more noise, and the ERW method does not perform as well as the SRSPL method on certain classes.
Table 2 shows the experimental results using the University of Pavia data set when the number of training samples per class is 20. As Table 2 shows, the nine compared methods achieve various performance characteristics when only 20 training samples per class are provided. The proposed SRSPL method not only outperforms the R-2D-CNN and CasRNN methods, which are the state-of-the-art methods based on deep learning in the hyperspectral image classification, but also outperforms the other compared methods by 2–20%. Notably, on classes such as “Asphalt” and “Trees”, the proposed SRSPL method performs much better than does the ERW method-by 12.57% and 12.94%, respectively. In the feature extraction experiment, the OA of the proposed SRSPL method outperforms the Without-IID, With-PCA, and With-IFRF methods by 2.01%, 15.14%, and 1.87%, respectively. Figure 11 shows the classification maps obtained for the University of Pavia data set with the different methods when using 20 training samples from each class. As Figure 11 shows, the SRSPL method is superior to other semi-supervised and deep learning methods in visual appearance, which shows its effectiveness.
The third experiment was conducted on the Salinas data set. In Table 3, three training samples per class were randomly selected for training, and the rest were used for testing. Given the limited samples, this experiment is quite challenging. The corresponding quantitative classification results are tabulated in Table 3. As shown, although only limited training samples were provided, some of the methods achieved high OA and Kappa scores; in fact, some achieved 100% classification accuracy on several classes. This is because the Salinas image includes many large uniform regions that make classification simpler. Compared with the other methods, the proposed SRSPL method greatly improves the classification accuracy when training samples are extremely limited. Because the R-2D-CNN and CasRNN methods require a large number of samples for training, they tend to overfit when training samples are scarce, leading to poor classification results. As seen from Figure 12, the proposed SRSPL method classifies most of the land covers correctly, which reflects the effectiveness of the method.
The fourth experiment was performed on the Kennedy Space Center data set. In the experiment, the proposed SRSPL method was compared with the SVM method and three other semi-supervised methods, e.g., ERW, SSLP-SVM, MPM-LBP, and two different deep learning methods, e.g., R-2D-CNN, CasRNN. Figure 13 shows the changes in OA and Kappa coefficient as the number of training samples per class increases from 3 to 15. Compared with the deep learning-based classification methods R-2D-CNN and CasRNN, the proposed SRSPL method shows great advantages. As shown in Figure 13, the SRSPL method achieves the highest accuracy among the tested methods.
Finally, we evaluated the computational times of the three semi-supervised methods (i.e., MPM-LBP, SSLP-SVM, and SRSPL) on the four data sets with 20 samples per class, using MATLAB on a computer with a 3.6 GHz CPU and 8 GB of memory. Because the semi-supervised method proposed in this paper does not involve an iterative training process, its computational cost is greatly reduced. Table 4 shows that the proposed SRSPL method requires less time to process the Indian Pines, Salinas, and Kennedy Space Center data sets than the other tested methods.

5. Discussion

In hyperspectral image classification, it is difficult and expensive to obtain enough labeled samples for model training. Considering the strong spectral correlation between labeled and unlabeled samples in the image, we proposed a novel sparse representation-based sample pseudo-labeling method (SRSPL). The pseudo-labeled samples generated by this method can be used to augment the training set, thereby solving the problem of poor classification performance under the condition of small samples.
Compared with other pseudo-labeled sample generation methods (such as SSLP-SVM), the proposed SRSPL method is more reliable in generating pseudo-labeled samples. The SSLP-SVM method only adds a small number of pseudo-labeled samples near the labeled samples, and the added samples may be mixed samples. The proposed SRSPL method instead exploits the relationship between sample purity and information entropy: the purer a sample's spectrum, the lower the information entropy of its sparse representation coefficients, and vice versa. Specifically, the spectral characteristics of a pure sample can be linearly represented using a single class of atoms in an overcomplete dictionary, so its coefficient vector has smaller entropy. In our method, pure samples with smaller entropies are used to expand the initial training sample set, which can greatly improve classifier performance.
In the case of small samples, compared with other classifiers (such as SVM, ERW, and MPM-LBP), the SRSPL method produces a visually good classification map for each hyperspectral data set, as shown in Figure 10, Figure 11 and Figure 12. This is because the hyperspectral image is first processed using intrinsic image decomposition, which helps reduce errors in the subsequent sparse representations, and because the pseudo-labeled samples generated by the SRSPL method help optimize the classification model. In addition, compared with deep-learning-based classifiers (such as R-2D-CNN and CasRNN), the proposed SRSPL method performs better for a limited number of samples because, in such cases, these two methods are more likely to overfit the training samples, resulting in poor classification results. Thus, the proposed SRSPL method shows better classification results than the other comparative methods.

6. Conclusions

In this paper, we proposed a novel sample pseudo-labeling method based on sparse representation that addresses the problem of limited samples. Previously proposed semi-supervised methods for generating pseudo-labeled samples typically select some samples and assign them pseudo-labels based on spectral correlations or local neighborhood information. However, due to the presence of mixed pixels, the selected samples are not necessarily representative. To find the purest samples, we designed a sparse representation-based pseudo-labeling method that utilizes the sparse coefficient vector and draws on the definition of entropy from information theory. Overall, the proposed SRSPL method provides a new option for semi-supervised learning, which is the first contribution of this paper. Moreover, the proposed SRSPL method also alleviates the problem of uneven sample distribution through sparse representation based on spectral features, which is beneficial for subsequent classification. In addition, by comparing the standard deviations of the OA, AA, and Kappa over 30 random replicate experiments with those of other state-of-the-art classification methods, we found that the proposed SRSPL method has higher robustness and stability. This is the second contribution of the paper. The experimental results on four real-world hyperspectral images show that the proposed SRSPL method is superior to other state-of-the-art classification methods from the perspectives of both quantitative indicators and classification maps.

Author Contributions

B.C. conceived of the idea of this paper. J.C. designed the experiments and drafted the paper. B.C., Y.L., N.G., and M.G. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was co-supported by the National Natural Science Foundation of China (NSFC) (41406200, 61701272) and the National Key R&D Program of China (2017YFC1405600).

Acknowledgments

The authors would like to thank all reviewers and editors for their comments on this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kang, X.; Li, C.; Li, S.; Lin, H. Classification of hyperspectral images by Gabor filtering based deep network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 1166–1178. [Google Scholar] [CrossRef]
  2. Pontius, J.; Martin, M.; Plourde, L.; Hallett, R. Ash decline assessment in emerald ash borer-infested regions: A test of tree-level, hyperspectral technologies. Remote Sens. Environ. 2008, 112, 2665–2676. [Google Scholar] [CrossRef]
  3. Imani, M. Manifold structure preservative for hyperspectral target detection. Adv. Space Res. 2018, 61, 2510–2520. [Google Scholar] [CrossRef]
  4. Cao, X.; Zhou, F.; Xu, L.; Meng, D.; Xu, Z.; Paisley, J. Hyperspectral image classification with Markov random fields and a convolutional neural network. IEEE Trans. Image Process. 2018, 27, 2354–2367. [Google Scholar] [CrossRef] [Green Version]
  5. Sharma, S.; Buddhiraju, K.M. Spatial–spectral ant colony optimization for hyperspectral image classification. Int. J. Remote Sens. 2018, 39, 2702–2717. [Google Scholar] [CrossRef]
  6. Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef] [Green Version]
  7. Xia, J.; Du, P.; He, X.; Chanussot, J. Hyperspectral remote sensing image classification based on rotation forest. IEEE Geosci. Remote Sens. Lett. 2013, 11, 239–243. [Google Scholar] [CrossRef] [Green Version]
  8. Mohanty, R.; Happy, S.; Routray, A. Spatial–Spectral Regularized Local Scaling Cut for Dimensionality Reduction in Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2018, 16, 932–936. [Google Scholar] [CrossRef]
  9. Shamsolmoali, P.; Zareapoor, M.; Yang, J. Convolutional neural network in network (CNNiN): Hyperspectral image classification and dimensionality reduction. IET Image Process. 2018, 13, 246–253. [Google Scholar] [CrossRef]
  10. Fang, B.; Li, Y.; Zhang, H.; Chan, J. Semi-supervised deep learning classification for hyperspectral image based on dual-strategy sample selection. Remote Sens. 2018, 10, 574. [Google Scholar] [CrossRef] [Green Version]
  11. Cui, B.; Xie, X.; Hao, S.; Cui, J.; Lu, Y. Semi-supervised classification of hyperspectral images based on extended label propagation and rolling guidance filtering. Remote Sens. 2018, 10, 515. [Google Scholar] [CrossRef] [Green Version]
  12. Liu, B.; Yu, X.; Zhang, P.; Yu, A.; Fu, Q.; Wei, X. Supervised deep feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1909–1921. [Google Scholar] [CrossRef]
  13. He, N.; Paoletti, M.E.; Haut, J.M.; Fang, L.; Li, S.; Plaza, A.; Plaza, J. Feature extraction with multiscale covariance maps for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 755–769. [Google Scholar] [CrossRef]
  14. Medjahed, S.A.; Ouali, M. Band selection based on optimization approach for hyperspectral image classification. Egyptian J. Remote Sens. Space Sci. 2018, 21, 413–418. [Google Scholar] [CrossRef]
  15. Cui, B.; Xie, X.; Ma, X.; Ren, G.; Ma, Y. Superpixel-based extended random walker for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3233–3243. [Google Scholar] [CrossRef]
  16. Jamshidpour, N.; Homayouni, S.; Safari, A. Graph-based semi-supervised hyperspectral image classification using spatial information. In Proceedings of the Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, LA, USA, 21–24 August 2016; pp. 1–4. [Google Scholar]
  17. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Li, J.; Plaza, A. Active learning with convolutional neural networks for hyperspectral image classification using a new bayesian approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6440–6461. [Google Scholar] [CrossRef]
  18. Wang, Z.; Du, B.; Zhang, L.; Zhang, L.; Jia, X. A novel semisupervised active-learning algorithm for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3071–3083. [Google Scholar] [CrossRef]
  19. Li, J.; Xi, B.; Li, Y.; Du, Q.; Wang, K. Hyperspectral classification based on texture feature enhancement and deep belief networks. Remote Sens. 2018, 10, 396. [Google Scholar] [CrossRef] [Green Version]
  20. Zhou, S.; Chen, Q.; Wang, X. Fuzzy deep belief networks for semi-supervised sentiment classification. Neurocomputing 2014, 131, 312–322. [Google Scholar] [CrossRef]
  21. Sun, B.; Kang, X.; Li, S.; Benediktsson, J.A. Random-walker-based collaborative learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 55, 212–222. [Google Scholar] [CrossRef]
  22. Imani, M.; Ghassemian, H. Adaptive expansion of training samples for improving hyperspectral image classification performance. In Proceedings of the Iranian Conference on Electrical Engineering (ICEE), Mashhad, Iran, 14–16 May 2013; pp. 1–6. [Google Scholar]
  23. Wu, H.; Prasad, S. Semi-supervised deep learning using pseudo labels for hyperspectral image classification. IEEE Trans. Image Process. 2017, 27, 1259–1270. [Google Scholar] [CrossRef] [PubMed]
  24. Ma, D.; Tang, P.; Zhao, L. SiftingGAN: Generating and Sifting Labeled Samples to Improve the Remote Sensing Image Scene Classification Baseline In Vitro. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1046–1050. [Google Scholar] [CrossRef] [Green Version]
  25. Prasad, S.; Labate, D.; Cui, M.; Zhang, Y. Morphologically decoupled structured sparsity for rotation-invariant hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4355–4366. [Google Scholar] [CrossRef]
  26. Prasad, S.; Labate, D.; Cui, M.; Zhang, Y. Rotation invariance through structured sparsity for robust hyperspectral image classification. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 6205–6209. [Google Scholar]
  27. Li, J.; Zhang, H.; Zhang, L. Efficient superpixel-level multitask joint sparse representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5338–5351. [Google Scholar]
  28. Tu, B.; Zhang, X.; Kang, X.; Zhang, G.; Wang, J.; Wu, J. Hyperspectral image classification via fusing correlation coefficient and joint sparse representation. IEEE Geosci. Remote Sens. Lett. 2018, 15, 340–344. [Google Scholar] [CrossRef]
  29. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 2012, 51, 217–231. [Google Scholar] [CrossRef] [Green Version]
  30. Gan, L.; Xia, J.; Du, P.; Chanussot, J. Multiple feature kernel sparse representation classifier for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5343–5356. [Google Scholar] [CrossRef]
  31. Liu, J.; Xiao, Z.; Chen, Y.; Yang, J. Spatial-spectral graph regularized kernel sparse representation for hyperspectral image classification. ISPRS Int. J. Geo-Inf. 2017, 6, 258. [Google Scholar] [CrossRef]
  32. Fang, L.; Wang, C.; Li, S.; Benediktsson, J.A. Hyperspectral image classification via multiple-feature-based adaptive sparse representation. IEEE Trans. Instrum. Meas. 2017, 66, 1646–1657. [Google Scholar] [CrossRef]
  33. Tong, F.; Tong, H.; Jiang, J.; Zhang, Y. Multiscale union regions adaptive sparse representation for hyperspectral image classification. Remote Sens. 2017, 9, 872. [Google Scholar] [CrossRef] [Green Version]
  34. Pan, B.; Shi, Z.; Xu, X. Multiobjective-based sparse representation classifier for hyperspectral imagery using limited samples. IEEE Trans. Geosci. Remote Sens. 2018, 57, 239–249. [Google Scholar] [CrossRef]
  35. Jian, M.; Jung, C. Class-discriminative kernel sparse representation-based classification using multi-objective optimization. IEEE Trans. Signal Process. 2013, 61, 4416–4427. [Google Scholar] [CrossRef]
  36. Sun, X.; Yang, L.; Zhang, B.; Gao, L.; Gao, J. An endmember extraction method based on artificial bee colony algorithms for hyperspectral remote sensing images. Remote Sens. 2015, 7, 16363–16383. [Google Scholar] [CrossRef] [Green Version]
  37. Kang, X.; Li, S.; Fang, L.; Benediktsson, J.A. Intrinsic image decomposition for feature extraction of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2241–2253. [Google Scholar] [CrossRef]
  38. Tappen, M.F.; Freeman, W.T.; Adelson, E.H. Recovering intrinsic images from a single image. In Advances in Neural Information Processing Systems; Massachusetts Institute of Technology Press: Cambridge, MA, USA, 2003; pp. 1367–1374. [Google Scholar]
  39. Yacoob, Y.; Davis, L.S. Segmentation using appearance of mesostructure roughness. Int. J. Comput. Vis. 2009, 83, 248–273. [Google Scholar] [CrossRef] [Green Version]
  40. Shen, J.; Yang, X.; Li, X.; Jia, Y. Intrinsic image decomposition using optimization and user scribbles. IEEE Trans. Cybern. 2013, 43, 425–436. [Google Scholar] [CrossRef]
  41. Shen, J.; Yang, X.; Jia, Y.; Li, X. Intrinsic images using optimization. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 20–25 June 2011; pp. 3481–3487. [Google Scholar]
  42. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 210–227. [Google Scholar] [CrossRef] [Green Version]
  43. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985. [Google Scholar] [CrossRef]
  44. Tibshirani, R. Regression shrinkage and selection via the lasso: A retrospective. J. R. Stat. Soc. Ser. (Stat. Methodol.) 2011, 73, 273–282. [Google Scholar] [CrossRef]
  45. Nitanda, A. Stochastic proximal gradient descent with acceleration techniques. In Advances in Neural Information Processing Systems; Massachusetts Institute of Technology Press: Cambridge, MA, USA, 2014; pp. 1574–1582. [Google Scholar]
  46. Kang, X.; Li, S.; Fang, L.; Li, M.; Benediktsson, J.A. Extended random walker-based classification of hyperspectral images. IEEE Trans. Geosci. Remote. Sens. 2014, 53, 144–153. [Google Scholar] [CrossRef]
  47. Bouda, M.; Rousseau, A.N.; Gumiere, S.J.; Gagnon, P.; Konan, B.; Moussa, R. Implementation of an automatic calibration procedure for HYDROTEL based on prior OAT sensitivity and complementary identifiability analysis. Hydrol. Process. 2014, 28, 3947–3961. [Google Scholar] [CrossRef]
  48. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
  49. Wang, L.; Hao, S.; Wang, Q.; Wang, Y. Semi-supervised classification for hyperspectral imagery based on spatial-spectral label propagation. ISPRS J. Photogramm. Remote Sens. 2014, 97, 123–137. [Google Scholar] [CrossRef]
  50. Wu, Y.; Li, J.; Gao, L.; Tan, X.; Zhang, B. Graphics processing unit–accelerated computation of the Markov random fields and loopy belief propagation algorithms for hyperspectral image classification. J. Appl. Remote Sens. 2015, 9, 097295. [Google Scholar] [CrossRef]
  51. Yang, X.; Ye, Y.; Li, X.; Lau, R.Y.; Zhang, X.; Huang, X. Hyperspectral image classification with deep learning models. IEEE Trans. Geosci. Remote. Sens. 2018, 56, 5408–5423. [Google Scholar] [CrossRef]
  52. Hang, R.; Liu, Q.; Hong, D.; Ghamisi, P. Cascaded recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote. Sens. 2019, 57, 5384–5394. [Google Scholar] [CrossRef] [Green Version]
  53. Farrell, M.D.; Mersereau, R.M. On the impact of PCA dimension reduction for hyperspectral detection of difficult targets. IEEE Geosci. Remote Sens. Lett. 2005, 2, 192–195. [Google Scholar] [CrossRef]
  54. Kang, X.; Li, S.; Benediktsson, J.A. Feature extraction of hyperspectral images with image fusion and recursive filtering. IEEE Trans. Geosci. Remote Sens. 2013, 52, 3742–3752. [Google Scholar] [CrossRef]
Figure 1. Graphical example illustrating the process of pseudo-labeled sample generation based on sparse representation. In the new training sample set, small balls of different colors represent different classes of training samples, and the plus sign represents a newly generated pseudo-labeled sample.
Figure 2. Indian Pines data set. (a) false-color composite; (b,c) ground truth data.
Figure 3. University of Pavia data set. (a) false-color composite; (b,c) ground truth data.
Figure 4. Salinas data set. (a) false-color composite; (b,c) ground truth data.
Figure 5. Kennedy Space Center data set. (a) false-color composite; (b,c) ground truth data.
Figure 6. Effects of the regularization parameter λ on the overall accuracy (OA) for four different data sets.
Figure 7. Effects of the number of generated pseudo-labeled samples (T) on the overall accuracy (OA) for different data sets.
Figure 8. Experimental results. (a) the results for the Indian Pines data set; (b) the results for University of Pavia data set; (c) the results for Salinas data set; (d) the results for Kennedy Space Center data set. Each image displays the overall classification accuracy of the proposed method with respect to varying feature numbers (M) and different subgroup sizes (Z).
Figure 9. Sensitivity of the parameters for different data sets.
Figure 10. The classification maps of the different tested methods for the Indian Pines image. (a) ground-truth data; (b) SVM, support vector machine; (c) ERW, extended random walker; (d) SSLP-SVM, spatial-spectral label propagation based on SVM; (e) MPM-LBP, the maximizer of the posterior marginal by loopy belief propagation; (f) R-2D-CNN, recurrent 2D convolutional neural network; (g) CasRNN, cascaded recurrent neural network; (h) SRSPL, sample pseudo-labeling method based on sparse representation.
Figure 10. The classification maps of the different tested methods for the Indian Pines image. (a) ground-truth data; (b) SVM, support vector machine; (c) ERW, extended random walke; (d) SSLP-SVM, spatial-spectral label propagation based on SVM; (e) MPM-LBP, the maximizer of the posterior marginal by loopy belief propagation; (f) R-2D-CNN, recurrent 2D convolutional neural network; (g) CasRNN, cascaded recurrent neural network; (h) SRSPL, sample pseudo-labeling method based on sparse representation.
Remotesensing 12 00664 g010
Figure 11. The classification maps of different methods for the University of Pavia image. (a) ground-truth data; (b) SVM; (c) ERW; (d) SSLP-SVM; (e) MPM-LBP; (f) R-2D-CNN; (g) CasRNN; (h) SRSPL.
Figure 12. The classification maps of different methods for the Salinas image. (a) ground-truth data; (b) SVM; (c) ERW; (d) SSLP-SVM; (e) MPM-LBP; (f) R-2D-CNN; (g) CasRNN; (h) SRSPL.
Figure 13. Classification performance of the different methods on the Kennedy Space Center image as the number of training samples per class increases from 3 to 15. (a) OA; (b) Kappa coefficient.
Table 1. Classification performance of different methods on the Indian Pines data set with 5–25 training samples per class. Without-IID is the proposed method without the intrinsic image decomposition (IID) step. With-PCA and With-IFRF denote the proposed method using different feature extraction methods. (SVM, support vector machine; ERW, extended random walker; SSLP-SVM, spatial-spectral label propagation based on SVM; MPM-LBP, the maximizer of the posterior marginal by loopy belief propagation; R-2D-CNN, recurrent 2D convolutional neural network; CasRNN, cascaded recurrent neural network; SRSPL, sample pseudo-labeling method based on sparse representation. Three common metrics are reported: overall accuracy (OA), average accuracy (AA), and Kappa coefficient. In the original table, bold values indicate the greatest accuracy among the methods in each case.)
Values are mean ± standard deviation (%); the column headers give the number of training samples per class.

| Method | Metric | 5 | 10 | 15 | 20 | 25 |
|---|---|---|---|---|---|---|
| SVM | OA | 45.31 ± 5.19 | 57.58 ± 2.98 | 63.56 ± 2.61 | 66.92 ± 1.47 | 69.68 ± 0.97 |
| | AA | 47.41 ± 3.71 | 55.19 ± 2.27 | 59.84 ± 2.21 | 62.13 ± 1.86 | 63.84 ± 0.91 |
| | Kappa | 39.21 ± 5.43 | 52.52 ± 3.17 | 59.11 ± 2.82 | 62.80 ± 1.56 | 65.82 ± 1.08 |
| Without-IID | OA | 81.97 ± 5.93 | 86.89 ± 3.45 | 90.13 ± 3.47 | 92.64 ± 1.74 | 94.92 ± 1.70 |
| | AA | 84.35 ± 6.40 | 89.07 ± 3.47 | 92.73 ± 1.23 | 95.63 ± 0.97 | 96.76 ± 0.95 |
| | Kappa | 80.91 ± 6.50 | 85.12 ± 3.77 | 89.42 ± 3.91 | 91.59 ± 1.99 | 94.20 ± 1.94 |
| With-PCA | OA | 76.08 ± 3.60 | 82.62 ± 2.90 | 86.80 ± 2.13 | 90.90 ± 1.83 | 93.53 ± 1.55 |
| | AA | 81.47 ± 4.20 | 80.47 ± 2.94 | 87.36 ± 3.51 | 91.02 ± 2.76 | 95.06 ± 0.79 |
| | Kappa | 73.03 ± 3.91 | 80.40 ± 3.20 | 85.30 ± 2.39 | 89.78 ± 2.07 | 92.61 ± 1.77 |
| With-IFRF | OA | 82.47 ± 2.30 | 88.62 ± 2.91 | 92.80 ± 2.13 | 94.90 ± 1.66 | 96.67 ± 1.21 |
| | AA | 89.68 ± 0.91 | 90.47 ± 0.94 | 93.36 ± 2.03 | 95.02 ± 2.63 | 97.83 ± 0.72 |
| | Kappa | 80.12 ± 2.56 | 89.40 ± 3.23 | 92.97 ± 1.96 | 93.78 ± 2.80 | 96.19 ± 1.39 |
| ERW | OA | 72.30 ± 4.38 | 84.87 ± 4.41 | 90.02 ± 1.30 | 92.94 ± 1.86 | 94.64 ± 1.17 |
| | AA | 83.32 ± 2.41 | 91.48 ± 2.30 | 94.39 ± 0.94 | 96.18 ± 1.00 | 97.14 ± 0.62 |
| | Kappa | 68.96 ± 4.69 | 82.95 ± 4.86 | 88.69 ± 1.46 | 91.99 ± 2.10 | 93.91 ± 1.33 |
| SSLP-SVM | OA | 64.84 ± 1.43 | 76.05 ± 1.44 | 80.79 ± 0.73 | 82.13 ± 0.44 | 85.41 ± 0.26 |
| | AA | 65.96 ± 2.38 | 78.07 ± 0.98 | 82.08 ± 0.64 | 82.25 ± 0.68 | 86.87 ± 0.35 |
| | Kappa | 60.63 ± 1.49 | 73.17 ± 1.60 | 78.38 ± 0.80 | 79.90 ± 0.49 | 83.49 ± 0.29 |
| MPM-LBP | OA | 59.36 ± 5.80 | 72.83 ± 3.99 | 78.15 ± 2.51 | 81.92 ± 2.17 | 85.12 ± 2.18 |
| | AA | 72.10 ± 3.61 | 83.61 ± 1.27 | 87.34 ± 0.92 | 90.33 ± 1.63 | 91.82 ± 1.09 |
| | Kappa | 54.39 ± 6.31 | 69.50 ± 4.25 | 75.38 ± 2.75 | 79.61 ± 2.42 | 83.14 ± 2.42 |
| R-2D-CNN | OA | 51.63 ± 3.81 | 59.83 ± 3.79 | 64.15 ± 2.18 | 70.92 ± 1.07 | 75.03 ± 1.18 |
| | AA | 52.01 ± 3.16 | 58.61 ± 2.72 | 65.43 ± 1.21 | 69.33 ± 1.36 | 76.82 ± 1.39 |
| | Kappa | 50.93 ± 3.13 | 60.50 ± 2.52 | 63.82 ± 1.57 | 71.61 ± 1.24 | 74.19 ± 1.43 |
| CasRNN | OA | 60.57 ± 4.51 | 70.25 ± 2.18 | 75.22 ± 2.23 | 81.43 ± 1.69 | 87.79 ± 1.50 |
| | AA | 58.44 ± 3.12 | 72.86 ± 1.87 | 76.59 ± 1.74 | 83.66 ± 1.87 | 88.94 ± 1.21 |
| | Kappa | 60.92 ± 3.58 | 69.34 ± 2.74 | 74.54 ± 2.05 | 80.49 ± 1.72 | 86.62 ± 1.74 |
| SRSPL | OA | 84.73 ± 1.23 | 90.90 ± 1.25 | 93.88 ± 0.67 | 95.52 ± 0.39 | 97.09 ± 0.21 |
| | AA | 88.86 ± 1.51 | 93.97 ± 0.95 | 96.27 ± 0.58 | 97.04 ± 0.57 | 97.96 ± 0.39 |
| | Kappa | 81.79 ± 1.45 | 89.64 ± 1.05 | 93.02 ± 0.72 | 94.88 ± 0.45 | 96.66 ± 0.27 |
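The OA, AA, and Kappa values in Table 1 (and the tables below) follow the standard confusion-matrix definitions. A short reference implementation — ours, not the authors' code — is:

```python
import numpy as np

def classification_metrics(conf):
    """OA, AA, and Cohen's Kappa (in percent) from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    oa = np.trace(conf) / n                          # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))   # mean per-class accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return 100 * oa, 100 * aa, 100 * kappa

# Toy 3-class example (made-up counts)
print(classification_metrics([[50, 2, 3], [4, 45, 1], [2, 3, 40]]))
```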
Table 2. Classification accuracies (University of Pavia) in percentages for the tested methods when 20 training samples per class are provided. (In the original table, bold values indicate the greatest accuracy among the methods in each case.)
Column groups in the original table: With-PCA and With-IFRF are feature-extraction variants; ERW, SSLP-SVM, and MPM-LBP are semi-supervised methods; R-2D-CNN and CasRNN are deep learning methods.

| Class | Training | Test | SVM | Without-IID | With-PCA | With-IFRF | ERW | SSLP-SVM | MPM-LBP | R-2D-CNN | CasRNN | SRSPL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Asphalt | 20 | 6611 | 92.99 ± 2.27 | 90.92 ± 2.21 | 70.23 ± 6.45 | 99.84 ± 0.03 | 86.36 ± 6.78 | 96.22 ± 0.64 | 90.39 ± 8.38 | 79.34 ± 5.38 | 83.52 ± 6.33 | 98.93 ± 1.33 |
| Meadows | 20 | 18,629 | 90.78 ± 2.36 | 99.99 ± 0.58 | 80.84 ± 4.31 | 90.03 ± 3.61 | 98.32 ± 2.67 | 97.65 ± 0.20 | 92.95 ± 11.81 | 79.96 ± 3.61 | 71.37 ± 5.81 | 99.48 ± 0.57 |
| Gravel | 20 | 2079 | 55.61 ± 6.89 | 99.86 ± 0.07 | 98.98 ± 0.74 | 98.97 ± 0.03 | 96.34 ± 5.82 | 71.05 ± 1.69 | 85.25 ± 8.21 | 78.88 ± 6.12 | 64.51 ± 10.21 | 99.95 ± 0.03 |
| Trees | 20 | 3044 | 67.48 ± 10.20 | 85.81 ± 2.07 | 89.01 ± 5.77 | 90.70 ± 3.45 | 78.92 ± 4.07 | 81.88 ± 4.50 | 89.48 ± 5.91 | 73.91 ± 4.48 | 85.43 ± 3.42 | 91.86 ± 1.66 |
| Metal Sheets | 20 | 1325 | 94.27 ± 3.49 | 100.00 ± 0.00 | 99.31 ± 0.16 | 99.77 ± 0.02 | 99.66 ± 0.27 | 96.95 ± 1.91 | 98.56 ± 0.66 | 98.25 ± 0.20 | 98.55 ± 0.47 | 99.62 ± 0.20 |
| Bare Soil | 20 | 5009 | 49.36 ± 7.80 | 92.25 ± 3.42 | 85.18 ± 7.13 | 98.42 ± 2.50 | 97.04 ± 3.62 | 66.64 ± 4.67 | 87.25 ± 10.38 | 80.94 ± 4.51 | 89.08 ± 5.25 | 98.49 ± 0.08 |
| Bitumen | 20 | 1310 | 53.58 ± 6.35 | 100.00 ± 0.00 | 99.19 ± 0.53 | 95.53 ± 1.85 | 99.42 ± 0.33 | 60.77 ± 3.59 | 97.94 ± 1.66 | 78.49 ± 2.92 | 91.13 ± 2.62 | 99.95 ± 0.09 |
| Bricks | 20 | 3662 | 76.30 ± 5.67 | 98.00 ± 1.83 | 98.14 ± 1.00 | 97.71 ± 2.85 | 97.34 ± 2.13 | 87.75 ± 1.46 | 88.32 ± 5.45 | 83.81 ± 2.45 | 93.54 ± 3.41 | 99.84 ± 3.07 |
| Shadows | 20 | 927 | 99.88 ± 0.09 | 98.05 ± 0.09 | 94.45 ± 1.48 | 98.88 ± 0.15 | 99.89 ± 0.09 | 99.95 ± 0.13 | 98.78 ± 1.58 | 94.27 ± 0.30 | 94.72 ± 1.78 | 99.03 ± 0.06 |
| OA | – | – | 75.49 ± 3.45 | 96.62 ± 1.35 | 83.49 ± 1.86 | 96.76 ± 1.15 | 94.86 ± 1.33 | 86.71 ± 1.44 | 87.40 ± 4.70 | 78.15 ± 1.47 | 80.86 ± 3.74 | 98.63 ± 0.31 |
| AA | – | – | 76.06 ± 1.98 | 96.42 ± 2.98 | 88.61 ± 2.56 | 96.54 ± 2.74 | 95.02 ± 3.25 | 88.55 ± 2.15 | 88.75 ± 2.05 | 79.19 ± 1.35 | 84.29 ± 4.53 | 98.34 ± 0.13 |
| Kappa | – | – | 69.06 ± 3.94 | 95.52 ± 1.72 | 79.88 ± 2.21 | 95.18 ± 3.73 | 93.19 ± 1.73 | 82.92 ± 1.75 | 87.62 ± 5.48 | 77.34 ± 1.81 | 77.93 ± 3.18 | 98.18 ± 0.41 |
Table 3. Classification accuracies (Salinas) in percentages for the tested methods when three training samples per class are provided. (In the original table, bold values indicate the greatest accuracy among the methods in each case.)
Column groups as in Table 2.

| Class | Training | Test | SVM | Without-IID | With-PCA | With-IFRF | ERW | SSLP-SVM | MPM-LBP | R-2D-CNN | CasRNN | SRSPL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Weeds 1 | 3 | 2006 | 86.74 ± 12.68 | 96.74 ± 12.68 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.87 ± 0.15 | 98.82 ± 0.80 | 88.83 ± 1.85 | 90.12 ± 1.28 | 100.00 ± 0.00 |
| Weeds 2 | 3 | 3723 | 99.08 ± 0.89 | 99.08 ± 0.89 | 100.00 ± 0.00 | 99.34 ± 0.04 | 99.95 ± 0.03 | 98.75 ± 0.24 | 99.40 ± 0.67 | 99.46 ± 0.37 | 99.20 ± 0.69 | 99.96 ± 0.06 |
| Fallow | 3 | 1973 | 77.08 ± 16.42 | 98.08 ± 1.42 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 84.21 ± 2.16 | 90.98 ± 12.66 | 79.49 ± 2.66 | 81.88 ± 3.76 | 100.00 ± 0.00 |
| Fallow_P | 3 | 1391 | 96.94 ± 1.23 | 84.94 ± 1.23 | 96.81 ± 3.16 | 85.28 ± 6.84 | 86.28 ± 7.64 | 97.64 ± 0.12 | 99.47 ± 0.30 | 96.13 ± 0.46 | 97.48 ± 0.47 | 94.64 ± 2.84 |
| Fallow_S | 3 | 2675 | 90.08 ± 8.26 | 97.08 ± 1.26 | 97.36 ± 2.97 | 98.41 ± 0.25 | 99.41 ± 0.27 | 96.86 ± 3.22 | 97.94 ± 1.53 | 90.07 ± 1.35 | 92.97 ± 1.73 | 97.38 ± 6.17 |
| Stubble | 3 | 3956 | 99.11 ± 4.76 | 98.11 ± 1.76 | 98.00 ± 2.01 | 97.34 ± 1.58 | 97.72 ± 4.58 | 99.87 ± 0.25 | 97.75 ± 3.28 | 97.81 ± 1.78 | 97.89 ± 0.38 | 99.92 ± 0.06 |
| Celery | 3 | 3576 | 93.79 ± 4.17 | 98.79 ± 1.17 | 100.00 ± 0.00 | 90.20 ± 4.77 | 72.20 ± 24.77 | 98.95 ± 0.41 | 99.06 ± 0.84 | 94.91 ± 2.43 | 96.56 ± 1.64 | 99.35 ± 0.18 |
| Grapes | 3 | 11,268 | 64.24 ± 7.87 | 94.24 ± 2.87 | 93.12 ± 2.18 | 99.15 ± 0.42 | 99.17 ± 0.92 | 72.05 ± 3.23 | 72.48 ± 18.65 | 70.84 ± 6.55 | 73.45 ± 7.67 | 99.70 ± 2.56 |
| Soil | 3 | 6200 | 96.20 ± 3.03 | 96.20 ± 2.03 | 98.38 ± 1.48 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.37 ± 0.17 | 99.21 ± 0.60 | 96.18 ± 1.06 | 97.41 ± 0.78 | 100.00 ± 0.00 |
| Corn | 3 | 3275 | 72.58 ± 19.06 | 97.58 ± 1.06 | 91.57 ± 6.35 | 97.21 ± 7.15 | 95.36 ± 7.15 | 89.21 ± 3.90 | 83.91 ± 10.82 | 75.06 ± 10.28 | 78.04 ± 9.12 | 97.36 ± 9.51 |
| Lettuce_4 | 3 | 1065 | 74.66 ± 19.37 | 96.66 ± 2.37 | 80.51 ± 6.47 | 98.76 ± 2.23 | 100.00 ± 0.00 | 83.41 ± 4.40 | 92.94 ± 4.87 | 77.27 ± 8.78 | 80.64 ± 6.76 | 99.98 ± 0.06 |
| Lettuce_5 | 3 | 1924 | 81.48 ± 10.07 | 98.48 ± 0.07 | 83.16 ± 12.54 | 99.60 ± 1.83 | 100.00 ± 0.00 | 92.60 ± 5.67 | 99.36 ± 1.97 | 83.55 ± 7.75 | 85.61 ± 5.67 | 100.00 ± 0.00 |
| Lettuce_6 | 3 | 913 | 78.58 ± 13.93 | 98.58 ± 0.93 | 98.31 ± 2.74 | 98.74 ± 0.47 | 97.88 ± 0.47 | 95.24 ± 0.66 | 97.17 ± 3.24 | 80.20 ± 6.24 | 82.80 ± 4.41 | 98.47 ± 0.27 |
| Lettuce_7 | 3 | 1067 | 83.59 ± 19.14 | 93.59 ± 2.14 | 97.24 ± 2.59 | 95.33 ± 2.21 | 95.35 ± 2.36 | 92.14 ± 4.74 | 94.76 ± 2.63 | 85.73 ± 7.36 | 88.78 ± 5.37 | 96.26 ± 0.68 |
| Vinyard_U | 3 | 7265 | 45.43 ± 6.21 | 97.43 ± 2.21 | 88.51 ± 8.64 | 99.37 ± 0.17 | 99.18 ± 0.05 | 58.67 ± 7.83 | 70.93 ± 25.10 | 71.81 ± 10.01 | 75.93 ± 9.70 | 98.92 ± 0.13 |
| Vinyard_T | 3 | 1804 | 83.43 ± 18.80 | 99.43 ± 0.80 | 100.00 ± 0.00 | 99.48 ± 0.89 | 100.00 ± 0.00 | 71.12 ± 4.44 | 76.88 ± 11.35 | 86.12 ± 6.35 | 89.06 ± 4.35 | 99.68 ± 0.79 |
| OA | – | – | 76.06 ± 2.97 | 95.88 ± 3.63 | 86.06 ± 2.97 | 97.65 ± 1.36 | 97.17 ± 2.23 | 83.30 ± 1.58 | 87.79 ± 3.26 | 78.12 ± 2.42 | 81.81 ± 3.89 | 98.69 ± 1.22 |
| AA | – | – | 82.69 ± 2.46 | 96.76 ± 2.64 | 90.69 ± 2.46 | 97.78 ± 1.69 | 96.62 ± 2.69 | 88.12 ± 0.78 | 92.57 ± 2.07 | 77.45 ± 2.04 | 82.34 ± 2.10 | 98.01 ± 1.44 |
| Kappa | – | – | 73.52 ± 3.22 | 94.59 ± 3.97 | 85.52 ± 3.22 | 98.00 ± 1.45 | 96.84 ± 2.49 | 81.46 ± 1.72 | 86.41 ± 3.63 | 79.47 ± 2.31 | 80.87 ± 3.38 | 98.54 ± 1.37 |
Table 4. Execution times of three semi-supervised methods on four data sets. (In the original table, bold values indicate the minimum computational time among the methods for each data set.)
| Data Set | MPM-LBP (s) | SSLP-SVM (s) | SRSPL (s) |
|---|---|---|---|
| Indian Pines | 197.78 | 479.51 | 124.31 |
| University of Pavia | 604.35 | 164.09 | 240.14 |
| Salinas | 339.03 | 1592.75 | 202.14 |
| Kennedy Space Center | 939.51 | 9122.20 | 72.32 |
