Article

Hyperspectral Dimensionality Reduction by Tensor Sparse and Low-Rank Graph-Based Discriminant Analysis

1 School of Information Science & Technology, Southwest Jiaotong University, Chengdu 610031, China
2 College of Information Science & Technology, Beijing University of Chemical Technology, Beijing 100029, China
3 Department of Electrical & Computer Engineering, Mississippi State University, Starkville, MS 39762, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(5), 452; https://doi.org/10.3390/rs9050452
Submission received: 14 March 2017 / Revised: 28 April 2017 / Accepted: 3 May 2017 / Published: 6 May 2017
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

Abstract

Recently, sparse and low-rank graph-based discriminant analysis (SLGDA) has yielded satisfactory results in hyperspectral image (HSI) dimensionality reduction (DR), for which sparsity and low-rankness are simultaneously imposed to capture both the local and global structure of hyperspectral data. However, SLGDA fails to exploit the spatial information. To address this problem, a tensor sparse and low-rank graph-based discriminant analysis (TSLGDA) is proposed in this paper. By regarding the hyperspectral data cube as a third-order tensor, small local patches centered at the training samples are extracted for the TSLGDA framework to maintain the structural information, resulting in a more discriminative graph. Subsequently, dimensionality reduction is performed on the tensorial training and testing samples to reduce data redundancy. Experimental results on three real-world hyperspectral datasets demonstrate that the proposed TSLGDA algorithm greatly improves the classification performance in the low-dimensional space when compared to state-of-the-art DR methods.


1. Introduction

A hyperspectral image contains a wealth of spectral information about different materials by collecting the reflectance of hundreds of contiguous narrow spectral bands from the visible to infrared electromagnetic spectrum [1,2,3]. However, the redundant information in a hyperspectral image not only increases computational complexity but also degrades classification performance when training samples are limited. Some research has demonstrated that the redundancy can be reduced without a significant loss of useful information [4,5,6,7]. As such, reducing the dimensionality of hyperspectral images is a reasonable and important preprocessing step for subsequent analysis and practical applications.
Dimensionality reduction (DR) aims to reduce the redundancy among features while preserving the discriminative information. In general, existing DR methods belong to one of three categories: unsupervised, supervised, and semisupervised. Unsupervised methods do not take the class label information of training samples into consideration. The most commonly used unsupervised DR algorithm is principal component analysis (PCA) [8], which finds a linear transformation by maximizing the variance in the projected subspace. Linear discriminant analysis (LDA) [9], a simple supervised DR method, maximizes the trace ratio of the between-class and within-class scatter matrices. To relax the restrictive assumption that LDA places on the data distribution, local Fisher's discriminant analysis (LFDA) [10] was developed. To cope with the usually limited number of labeled training samples, semisupervised DR methods were proposed in [11,12].
The graph, as a mathematical data representation, has been successfully embedded in the framework of DR, resulting in many effective DR methods. Recently, a general graph embedding (GE) framework [13] has been proposed to formulate most of the existing DR methods, in which an undirected graph is constructed to characterize the geometric information of the data. k-nearest neighbors and the ε-radius ball [14] are two traditional methods to construct adjacency graphs. However, these two methods are sensitive to noise and may lead to an incorrect data representation. To construct a more appropriate graph, a graph-based discriminant analysis with spectral similarity (GDA-SS) measurement was recently proposed in [15], which accounts for the changing shape of the spectral curves across bands. Sparse representation (SR) [16,17] has attracted much attention because of its data-adaptive neighborhoods and robustness to noise. Based on this work, a sparse graph embedding (SGE) model [18] was developed by exploring the sparsity structure of the data. In [19], a sparse graph-based discriminant analysis (SGDA) model was developed for hyperspectral image dimensionality reduction and classification by exploiting the class label information, improving the performance of SGE. In [20], a weighted SGDA integrated both the locality and sparsity structure of the data. To reduce the computational cost, collaborative graph-based discriminant analysis (CGDA) [21] was introduced by imposing an $\ell_2$ regularization on the representation coefficient vector. In [22], Laplacian regularization was further imposed on CGDA, resulting in the LapCGDA algorithm. SR is able to reveal the local structure but fails to capture the global structure. To solve this problem, a sparse and low-rank graph-based discriminant analysis (SLGDA) [23] was proposed to simultaneously preserve the local and global structure of hyperspectral data.
However, the aforementioned graph-based DR methods only deal with spectral vector-based (first-order) representations and therefore do not take the spatial information of hyperspectral data into consideration. To overcome this shortcoming, simultaneous sparse graph embedding (SSGE) was proposed in [24] to improve the classification performance. Although SSGE achieves enhanced performance, it still flattens the spectral-spatial features into first-order data for analysis and ignores the cubic nature of hyperspectral data, which can be treated as a third-order tensor. Several researchers have verified the advantage of tensor representations when processing hyperspectral data. For example, multilinear principal component analysis (MPCA) [25] was integrated with support vector machines (SVM) for tensor-based classification in [26]. A group-based tensor model [27], built on a clustering technique, was developed for DR and classification. In addition, a tensor discriminative locality alignment (TDLA) [28] algorithm was proposed for hyperspectral image spectral-spatial feature representation and DR, and it was extended in [29] by combining it with well-known spectral-spatial feature extraction methods (such as extended morphological profiles (EMPs) [30], extended attribute profiles (EAPs) [31], and Gabor features [32]) for classification. Although the previous tensor-based DR methods have achieved great improvements in performance, they do not consider the structural properties of the data from other perspectives, such as representation-based and graph-based points of view.
In this context, we propose a novel DR method, i.e., tensor sparse and low-rank graph-based discriminant analysis (TSLGDA), for hyperspectral data, in which information from three perspectives (tensor representation, sparse and low-rank representation, and graph theory) is exploited to describe the structure of the hyperspectral image. It is noteworthy that the proposed method aims to exploit the spatial information through tensor representation, which differs from the work in [23] that considers only the spectral information. Furthermore, tensor locality preserving projection (TLPP) [33] is exploited in TSLGDA to obtain three projection matrices for the three dimensions (one spectral dimension and two spatial dimensions), whereas SLGDA [23] only considers one spectral projection matrix obtained by locality preserving projection. The contributions of our work lie in the following aspects: (1) tensor representation is utilized in the framework of sparse and low-rank graph-based discriminant analysis for DR of hyperspectral images. To the best of our knowledge, this is the first time that tensor theory, sparsity, and low-rankness are combined in the graph embedding framework; (2) the tensorial structure contains the spectral-spatial information, the sparse and low-rank representation reveals both local and global structure, and the graph preserves the manifold structure. The integration of these three techniques remarkably promotes the discriminative ability of the reduced features in low-dimensional subspaces; (3) the proposed method can effectively deal with the small-training-size problem, even for classes with only two labeled samples.
The rest of this paper is organized as follows. Section 2 briefly describes tensor basics and some existing DR methods. The proposed TSLGDA algorithm for DR of hyperspectral imagery is described in detail in Section 3. Parameter discussions and experimental results compared with some state-of-the-art methods are given in Section 4. Finally, Section 5 concludes this paper with some remarks.

2. Related Work

In this paper, if not specified otherwise, lowercase italic letters denote scalars, e.g., $i, j, k$; bold lowercase letters denote vectors, e.g., $\mathbf{x}, \mathbf{y}$; bold uppercase letters denote matrices, e.g., $\mathbf{U}, \mathbf{X}$; and bold uppercase letters with an underline denote tensors, e.g., $\underline{\mathbf{A}}, \underline{\mathbf{X}}$.

2.1. Tensor Basics

A multidimensional array is defined as a tensor, represented as $\underline{\mathbf{A}} \in \mathbb{R}^{I_1 \times \cdots \times I_n \times \cdots \times I_N}$. We regard $\underline{\mathbf{A}}$ as an $N$th-order tensor, corresponding to an $N$-dimensional data array, with its elements denoted as $\underline{\mathbf{A}}_{i_1 \cdots i_n \cdots i_N}$, where $1 \le i_n \le I_n$ and $1 \le n \le N$. Some basic definitions related to tensor operations are provided as follows [28,33,34].
Definition 1.
(Frobenius norm): The Frobenius norm of a tensor $\underline{\mathbf{A}}$ is defined as $\|\underline{\mathbf{A}}\|_F = \big(\sum_{i_1} \cdots \sum_{i_N} (\underline{\mathbf{A}}_{i_1 \cdots i_N})^2\big)^{1/2}$.
Definition 2.
(Mode-n matricizing): The $n$-mode vectors of an $N$th-order tensor $\underline{\mathbf{A}} \in \mathbb{R}^{I_1 \times \cdots \times I_n \times \cdots \times I_N}$ are the $I_n$-dimensional vectors obtained by varying the index $i_n$ while fixing all other indices. The $n$-mode matrix is composed of all the $n$-mode vectors in column form, denoted as $\mathbf{A}_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}$. The obtained $n$-mode matrix is also known as the $n$-mode unfolding of the tensor $\underline{\mathbf{A}}$.
Definition 3.
(Mode-n product): The mode-$n$ product of a tensor $\underline{\mathbf{A}}$ with a matrix $\mathbf{U} \in \mathbb{R}^{I'_n \times I_n}$ yields $\underline{\mathbf{C}} = \underline{\mathbf{A}} \times_n \mathbf{U}$, with $\underline{\mathbf{C}} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times I'_n \times I_{n+1} \times \cdots \times I_N}$, whose entries are computed by
$$\underline{\mathbf{C}}_{i_1 \cdots i_{n-1} i'_n i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} \underline{\mathbf{A}}_{i_1 \cdots i_{n-1} i_n i_{n+1} \cdots i_N}\, \mathbf{U}_{i'_n i_n}$$
where $i_k = 1, 2, \ldots, I_k$ ($k \ne n$) and $i'_n = 1, 2, \ldots, I'_n$. Note that the $n$-mode product can also be expressed in terms of the unfolded tensor
$$\underline{\mathbf{C}} = \underline{\mathbf{A}} \times_n \mathbf{U} \;\Leftrightarrow\; \mathbf{C}_{(n)} = \mathbf{U}\mathbf{A}_{(n)}$$
where $\times_n$ denotes the mode-$n$ product between a tensor and a matrix.
Definition 4.
(Tensor contraction): The contraction of tensors $\underline{\mathbf{A}} \in \mathbb{R}^{I_1 \times \cdots \times I_N \times I'_1 \times \cdots \times I'_{N'}}$ and $\underline{\mathbf{B}} \in \mathbb{R}^{I_1 \times \cdots \times I_N \times I''_1 \times \cdots \times I''_{N''}}$ on their first $N$ (shared) modes is defined as
$$[\underline{\mathbf{A}} \otimes \underline{\mathbf{B}}; (1\!:\!N)(1\!:\!N)]_{i'_1 \cdots i'_{N'} i''_1 \cdots i''_{N''}} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} \underline{\mathbf{A}}_{i_1 \cdots i_N i'_1 \cdots i'_{N'}}\, \underline{\mathbf{B}}_{i_1 \cdots i_N i''_1 \cdots i''_{N''}}$$
The condition for tensor contraction is that the two tensors have the same size along the contracted modes. For example, when the contraction is conducted on all indices except index $n$ of tensors $\underline{\mathbf{A}}, \underline{\mathbf{B}} \in \mathbb{R}^{I_1 \times \cdots \times I_n \times \cdots \times I_N}$, the operation is denoted as $[\underline{\mathbf{A}} \otimes \underline{\mathbf{B}}; (\overline{n})(\overline{n})]$. According to the property of tensor contraction, we have
$$[\underline{\mathbf{A}} \otimes \underline{\mathbf{B}}; (\overline{n})(\overline{n})] = \mathbf{A}_{(n)} \mathbf{B}_{(n)}^T$$
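For illustration, these tensor operations translate almost directly into array code. The following NumPy sketch is our own illustration (the helper names and the column ordering of the unfolding are our conventions, not the paper's; the stated identities nevertheless hold because the fold/unfold pair below is internally consistent):

```python
import numpy as np

def unfold(A, n):
    """n-mode unfolding: an I_n x (product of the remaining dimensions) matrix."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold for a tensor with the given full shape."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def mode_n_product(A, U, n):
    """Mode-n product A x_n U with U of shape (I'_n, I_n), i.e., C_(n) = U A_(n)."""
    out_shape = list(A.shape)
    out_shape[n] = U.shape[0]
    return fold(U @ unfold(A, n), n, out_shape)

def contract_all_but(A, B, n):
    """Contraction over all modes except n of two same-sized tensors: A_(n) B_(n)^T."""
    return unfold(A, n) @ unfold(B, n).T

# Example: project the spectral (third) mode of a 9 x 9 x 200 patch down to 30 features.
patch = np.random.rand(9, 9, 200)
U3 = np.random.rand(30, 200)
reduced = mode_n_product(patch, U3, 2)   # shape (9, 9, 30)
```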

2.2. Sparse and Low-Rank Graph-Based Discriminant Analysis

In [19], sparse graph-based discriminant analysis (SGDA), as a supervised DR method, was proposed to extract important features for hyperspectral data. Although SGDA can successfully reveal the local structure of the data, it fails to capture the global information. To address this problem, sparse and low-rank graph-based discriminant analysis (SLGDA) [23] was developed to preserve local neighborhood structure and global geometrical structure simultaneously by combining the sparse and low-rank constraints. The objective function of SLGDA can be formulated as
$$\arg\min_{\mathbf{W}^{(l)}} \frac{1}{2}\big\|\mathbf{X}^{(l)} - \mathbf{X}^{(l)}\mathbf{W}^{(l)}\big\|_F^2 + \beta\big\|\mathbf{W}^{(l)}\big\|_* + \lambda\big\|\mathbf{W}^{(l)}\big\|_1, \quad \text{s.t. } \mathrm{diag}(\mathbf{W}^{(l)}) = \mathbf{0}$$
where $\beta$ and $\lambda$ are two regularization parameters that control the effect of the low-rank term and the sparse term, respectively, $\mathbf{X}^{(l)}$ represents the samples from the $l$th class in vector form, and $l \in \{1, 2, \ldots, c\}$, in which $c$ is the total number of classes. After obtaining the complete graph weight matrix $\mathbf{W} = \mathrm{diag}(\mathbf{W}^{(1)}, \mathbf{W}^{(2)}, \ldots, \mathbf{W}^{(c)})$, the projection operator can be solved as
$$\mathbf{P}^* = \arg\min_{\mathbf{P}^T\mathbf{X}\mathbf{L}_p\mathbf{X}^T\mathbf{P} = \mathbf{I}} \sum_{i,j}\big\|\mathbf{P}^T\mathbf{x}_i - \mathbf{P}^T\mathbf{x}_j\big\|_2^2\, \mathbf{W}_{ij} = \arg\min_{\mathbf{P}^T\mathbf{X}\mathbf{L}_p\mathbf{X}^T\mathbf{P} = \mathbf{I}} \mathrm{tr}\big(\mathbf{P}^T\mathbf{X}\mathbf{L}_s\mathbf{X}^T\mathbf{P}\big)$$
where $\mathbf{L}_s = \mathbf{D} - \mathbf{W}$ is defined as the Laplacian matrix, $\mathbf{D}$ is a diagonal matrix with the $i$th diagonal entry $\mathbf{D}_{ii} = \sum_{j=1}^{N}\mathbf{W}_{ij}$, and $\mathbf{L}_p$ may be a simple scale-normalization constraint [13].
The projection can be further formulated as
$$\mathbf{P}^* = \arg\min_{\mathbf{P}} \frac{\big|\mathbf{P}^T\mathbf{X}\mathbf{L}_s\mathbf{X}^T\mathbf{P}\big|}{\big|\mathbf{P}^T\mathbf{X}\mathbf{L}_p\mathbf{X}^T\mathbf{P}\big|}$$
which can be solved as a generalized eigendecomposition problem
$$\mathbf{X}\mathbf{L}_s\mathbf{X}^T\mathbf{p}_b = \lambda_b\, \mathbf{X}\mathbf{L}_p\mathbf{X}^T\mathbf{p}_b$$
The $b$th projection vector $\mathbf{p}_b$ is the eigenvector corresponding to the $b$th smallest nonzero eigenvalue. The projection matrix is formed as $\mathbf{P} = [\mathbf{p}_1, \ldots, \mathbf{p}_B] \in \mathbb{R}^{d \times B}$, $B \ll d$. Finally, the reduced features are denoted as $\hat{\mathbf{X}} = \mathbf{P}^T\mathbf{X} \in \mathbb{R}^{B \times M}$.
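As a concrete illustration of Equations (6)–(8), a minimal NumPy/SciPy sketch of this projection step is given below. It is our own code rather than the authors' implementation; the choice $\mathbf{L}_p = \mathbf{I}$ and the small ridge added for numerical stability are assumptions on our part:

```python
import numpy as np
from scipy.linalg import eigh

def graph_projection(X, W, B):
    """X: d x M matrix of training spectra (columns are samples),
    W: M x M learned graph weight matrix, B: reduced dimension.
    Returns the d x B projection matrix P of Equation (8)."""
    W = 0.5 * (W + W.T)                      # symmetrize the learned graph
    D = np.diag(W.sum(axis=1))
    Ls = D - W                               # Laplacian matrix L_s
    Lp = np.eye(W.shape[0])                  # simple scale-normalization constraint
    A = X @ Ls @ X.T
    Bmat = X @ Lp @ X.T + 1e-6 * np.eye(X.shape[0])   # ridge keeps eigh well posed
    evals, evecs = eigh(A, Bmat)             # generalized eigendecomposition
    order = np.argsort(evals)
    keep = [i for i in order if evals[i] > 1e-10][:B]  # smallest nonzero eigenvalues
    return evecs[:, keep]
```

The reduced features are then obtained as $\hat{\mathbf{X}} = \mathbf{P}^T\mathbf{X}$, exactly as in the text.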

2.3. Multilinear Principal Component Analysis

In order to obtain a set of multilinear projections that map the original high-order tensor data into a low-dimensional tensor space, MPCA directly maximizes the total scatter in the projected subspace of each mode $\mathbf{U}_n$:
$$\max_{\mathbf{U}_n\mathbf{U}_n^T = \mathbf{I}_n} \mathrm{tr}\big(\mathbf{U}_n\mathbf{S}_T^{(n)}\mathbf{U}_n^T\big) = \max_{\mathbf{U}_n\mathbf{U}_n^T = \mathbf{I}_n} \mathrm{tr}\Big(\mathbf{U}_n\Big(\sum_{k=1}^{M}\mathbf{X}_{k(n)}\mathbf{X}_{k(n)}^T\Big)\mathbf{U}_n^T\Big),$$
where $\mathbf{S}_T^{(n)} = \sum_{k=1}^{M}\mathbf{X}_{k(n)}\mathbf{X}_{k(n)}^T$ and $\mathbf{X}_{k(n)}$ is the $n$-mode unfolding matrix of the tensor $\underline{\mathbf{X}}_k$.
The optimal projections of MPCA can be obtained from the eigendecomposition
$$\mathbf{S}_T^{(n)}\mathbf{U}_n^T = \mathbf{U}_n^T\mathbf{D}_n$$
where $\mathbf{U}_n^T = [\mathbf{u}_{n1}, \ldots, \mathbf{u}_{nd_n}]$ is the eigenvector matrix and $\mathbf{D}_n = \mathrm{diag}(\lambda_{n1}, \ldots, \lambda_{nd_n})$ is the eigenvalue matrix of $\mathbf{S}_T^{(n)}$, in which the eigenvalues are ranked in descending order and $\lambda_{nj}$ is the eigenvalue corresponding to the eigenvector $\mathbf{u}_{nj}$. The optimal projection matrix for mode $n$ is composed of the eigenvectors corresponding to the first $B_n$ largest eigenvalues, e.g., $\mathbf{U}_n^T = [\mathbf{u}_{n1}, \ldots, \mathbf{u}_{nB_n}]$. After obtaining the projection matrix for each mode, the reduced features can be formulated as
$$\hat{\underline{\mathbf{X}}}_k = \underline{\mathbf{X}}_k \times_1 \mathbf{U}_1 \times_2 \cdots \times_N \mathbf{U}_N$$
where $\mathbf{U}_n \in \mathbb{R}^{B_n \times I_n}$ ($B_n \ll I_n$).
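A compact sketch of this closed-form MPCA variant is given below. It is our own NumPy illustration under the formulation above (no alternating refinement over modes is performed, and the helper and variable names are our assumptions):

```python
import numpy as np

def unfold(A, n):
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def mode_n_product(A, U, n):
    out = list(A.shape); out[n] = U.shape[0]
    rest = [s for i, s in enumerate(out) if i != n]
    return np.moveaxis((U @ unfold(A, n)).reshape([out[n]] + rest), 0, n)

def mpca(samples, ranks):
    """samples: list of M equally sized tensors X_k; ranks: target B_n per mode.
    Returns one B_n x I_n projection matrix per mode, as in Equation (11)."""
    N = samples[0].ndim
    Us = []
    for n in range(N):
        # mode-n total scatter S_T^(n) = sum_k X_k(n) X_k(n)^T
        S = sum(unfold(Xk, n) @ unfold(Xk, n).T for Xk in samples)
        evals, evecs = np.linalg.eigh(S)               # ascending eigenvalues
        top = np.argsort(evals)[::-1][:ranks[n]]       # B_n largest eigenvalues
        Us.append(evecs[:, top].T)                     # rows are eigenvectors
    return Us

# Example: 9 x 9 x 200 patches reduced to 1 x 1 x 30, as used later in the paper.
patches = [np.random.rand(9, 9, 200) for _ in range(20)]
U1, U2, U3 = mpca(patches, ranks=(1, 1, 30))
low_dim = mode_n_product(mode_n_product(mode_n_product(patches[0], U1, 0), U2, 1), U3, 2)
```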

3. Tensor Sparse and Low-Rank Graph-Based Discriminant Analysis

Consider a hyperspectral image as a third-order tensor $\underline{\mathbf{A}} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, in which $I_1$ and $I_2$ refer to the width and height of the data cube, respectively, and $I_3$ represents the number of spectral bands, $I_3 = d$. Assume that the $k$th small patch is composed of the $k$th training sample and its $i_1 \times i_2$ spatial neighborhood, denoted as $\underline{\mathbf{X}}_k \in \mathbb{R}^{i_1 \times i_2 \times d}$. $M$ patches construct the training set $\{\underline{\mathbf{X}}_k\}_{k=1}^{M}$. The training patches belonging to the $l$th class are expressed as $\{\underline{\mathbf{X}}_{k,l}\}_{k=1}^{M_l}$, where $M_l$ represents the number of patches belonging to the $l$th class and $l \in \{1, 2, \ldots, c\}$. For convenience of expression, a fourth-order tensor $\underline{\mathbf{X}}^{(l)} \in \mathbb{R}^{i_1 \times i_2 \times d \times M_l}$ is defined to represent these $M_l$ patches, and $\underline{\mathbf{X}} \in \mathbb{R}^{i_1 \times i_2 \times d \times M}$ denotes all training patches for the $c$ classes, where $M = \sum_{l=1}^{c} M_l$. A visual illustration of 3-mode vectors, 3-mode unfolding, and the 3-mode product is shown in Figure 1.
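The patch construction itself is straightforward; a NumPy sketch is given below (illustrative only: the paper does not specify how image borders are handled, so the symmetric padding used here is our assumption):

```python
import numpy as np

def extract_patches(cube, centers, win):
    """cube: I1 x I2 x d hyperspectral image; centers: (row, col) pairs of labeled
    pixels; win: odd spatial window size i1 = i2. Returns a win x win x d x M tensor."""
    r = win // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode='symmetric')  # border handling
    patches = [padded[i:i + win, j:j + win, :] for (i, j) in centers]  # shift by padding
    return np.stack(patches, axis=-1)

# Example: 9 x 9 x 200 patches around three labeled pixels of a toy cube.
cube = np.random.rand(145, 145, 200)
X_train = extract_patches(cube, [(10, 12), (50, 60), (100, 7)], win=9)  # (9, 9, 200, 3)
```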

3.1. Tensor Sparse and Low-Rank Graph

The previous SLGDA framework can capture the local and global structure of hyperspectral data simultaneously by imposing both sparse and low-rank constraints. However, it loses important structural information of hyperspectral data, which intrinsically exhibit a tensor-based structure. To overcome this drawback, a tensor sparse and low-rank graph is constructed with the objective function
$$\arg\min_{\mathbf{W}^{(l)}} \frac{1}{2}\big\|\underline{\mathbf{X}}^{(l)} - \underline{\mathbf{X}}^{(l)} \times_4 \mathbf{W}^{(l)}\big\|_F^2 + \beta\big\|\mathbf{W}^{(l)}\big\|_* + \lambda\big\|\mathbf{W}^{(l)}\big\|_1, \quad \text{s.t. } \mathrm{diag}(\mathbf{W}^{(l)}) = \mathbf{0},$$
where $\mathbf{W}^{(l)} \in \mathbb{R}^{M_l \times M_l}$ denotes the graph weight matrix built from labeled patches of the $l$th class only. As such, with the help of the class-specific labeled training patches, the global graph weight matrix $\mathbf{W}$ can be designed with a block-diagonal structure
$$\mathbf{W} = \begin{bmatrix} \mathbf{W}^{(1)} & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & \mathbf{W}^{(c)} \end{bmatrix}$$
To obtain the $l$th class graph weight matrix $\mathbf{W}^{(l)}$, the alternating direction method of multipliers (ADMM) [35] is adopted to solve problem (12). Two auxiliary variables $\mathbf{Z}^{(l)}$ and $\mathbf{J}^{(l)}$ are first introduced to make the objective function separable
$$\arg\min_{\mathbf{Z}^{(l)}, \mathbf{J}^{(l)}, \mathbf{W}^{(l)}} \frac{1}{2}\big\|\underline{\mathbf{X}}^{(l)} - \underline{\mathbf{X}}^{(l)} \times_4 \mathbf{W}^{(l)}\big\|_F^2 + \beta\big\|\mathbf{Z}^{(l)}\big\|_* + \lambda\big\|\mathbf{J}^{(l)}\big\|_1, \quad \text{s.t. } \mathbf{W}^{(l)} = \mathbf{Z}^{(l)},\; \mathbf{W}^{(l)} = \mathbf{J}^{(l)} - \mathrm{diag}(\mathbf{J}^{(l)})$$
The augmented Lagrangian function of problem (14) is given as
$$\begin{aligned} \mathcal{L}(\mathbf{Z}^{(l)}, \mathbf{J}^{(l)}, \mathbf{W}^{(l)}, \mathbf{D}_1, \mathbf{D}_2) ={}& \frac{1}{2}\big\|\underline{\mathbf{X}}^{(l)} - \underline{\mathbf{X}}^{(l)} \times_4 \mathbf{W}^{(l)}\big\|_F^2 + \beta\big\|\mathbf{Z}^{(l)}\big\|_* + \lambda\big\|\mathbf{J}^{(l)}\big\|_1 \\ & + \big\langle\mathbf{D}_1, \mathbf{W}^{(l)} - \mathbf{Z}^{(l)}\big\rangle + \big\langle\mathbf{D}_2, \mathbf{W}^{(l)} - \mathbf{J}^{(l)} + \mathrm{diag}(\mathbf{J}^{(l)})\big\rangle \\ & + \frac{\mu}{2}\Big(\big\|\mathbf{W}^{(l)} - \mathbf{Z}^{(l)}\big\|_F^2 + \big\|\mathbf{W}^{(l)} - \mathbf{J}^{(l)} + \mathrm{diag}(\mathbf{J}^{(l)})\big\|_F^2\Big) \end{aligned}$$
where $\mathbf{D}_1$ and $\mathbf{D}_2$ are the Lagrangian multipliers and $\mu$ is a penalty parameter.
By minimizing the function $\mathcal{L}(\mathbf{Z}^{(l)}, \mathbf{J}^{(l)}, \mathbf{W}^{(l)})$, each variable is alternately updated with the other variables fixed. The updating rules are expressed as
$$\mathbf{Z}_{t+1}^{(l)} = \arg\min_{\mathbf{Z}^{(l)}} \beta\big\|\mathbf{Z}^{(l)}\big\|_* + \big\langle\mathbf{D}_{1,t}, \mathbf{W}_t^{(l)} - \mathbf{Z}^{(l)}\big\rangle + \frac{\mu_t}{2}\big\|\mathbf{W}_t^{(l)} - \mathbf{Z}^{(l)}\big\|_F^2 = \arg\min_{\mathbf{Z}^{(l)}} \frac{\beta}{\mu_t}\big\|\mathbf{Z}^{(l)}\big\|_* + \frac{1}{2}\Big\|\mathbf{Z}^{(l)} - \Big(\mathbf{W}_t^{(l)} + \frac{\mathbf{D}_{1,t}}{\mu_t}\Big)\Big\|_F^2 = \Omega_{\beta/\mu_t}\Big(\mathbf{W}_t^{(l)} + \frac{\mathbf{D}_{1,t}}{\mu_t}\Big)$$
$$\mathbf{J}_{t+1}^{(l)} = \arg\min_{\mathbf{J}^{(l)}} \lambda\big\|\mathbf{J}^{(l)}\big\|_1 + \big\langle\mathbf{D}_{2,t}, \mathbf{W}_t^{(l)} - \mathbf{J}^{(l)}\big\rangle + \frac{\mu_t}{2}\big\|\mathbf{W}_t^{(l)} - \mathbf{J}^{(l)}\big\|_F^2 = \arg\min_{\mathbf{J}^{(l)}} \frac{\lambda}{\mu_t}\big\|\mathbf{J}^{(l)}\big\|_1 + \frac{1}{2}\Big\|\mathbf{J}^{(l)} - \Big(\mathbf{W}_t^{(l)} + \frac{\mathbf{D}_{2,t}}{\mu_t}\Big)\Big\|_F^2 = S_{\lambda/\mu_t}\Big(\mathbf{W}_t^{(l)} + \frac{\mathbf{D}_{2,t}}{\mu_t}\Big), \quad \mathbf{J}_{t+1}^{(l)} \leftarrow \mathbf{J}_{t+1}^{(l)} - \mathrm{diag}\big(\mathbf{J}_{t+1}^{(l)}\big),$$
where $\mu_t$ denotes the penalty parameter (learning rate) at iteration $t$, $\Omega_\tau(\boldsymbol{\Delta}) = \mathbf{Q}\,S_\tau(\boldsymbol{\Sigma})\,\mathbf{V}^T$ is the singular value thresholding (SVT) operator, with $\mathbf{Q}\boldsymbol{\Sigma}\mathbf{V}^T$ the singular value decomposition of $\boldsymbol{\Delta}$, and $S_\tau(x) = \mathrm{sgn}(x)\max(|x| - \tau, 0)$ is the soft thresholding operator [36]. By fixing $\mathbf{Z}_{t+1}^{(l)}$ and $\mathbf{J}_{t+1}^{(l)}$, the update of $\mathbf{W}_{t+1}^{(l)}$ can be written as
$$\begin{aligned} \mathbf{W}_{t+1}^{(l)} ={}& \arg\min_{\mathbf{W}^{(l)}} \frac{1}{2}\big\|\underline{\mathbf{X}}^{(l)} - \underline{\mathbf{X}}^{(l)} \times_4 \mathbf{W}^{(l)}\big\|_F^2 + \big\langle\mathbf{D}_{1,t}, \mathbf{W}^{(l)} - \mathbf{Z}_{t+1}^{(l)}\big\rangle + \big\langle\mathbf{D}_{2,t}, \mathbf{W}^{(l)} - \mathbf{J}_{t+1}^{(l)}\big\rangle \\ & + \frac{\mu_t}{2}\Big(\big\|\mathbf{W}^{(l)} - \mathbf{Z}_{t+1}^{(l)}\big\|_F^2 + \big\|\mathbf{W}^{(l)} - \mathbf{J}_{t+1}^{(l)}\big\|_F^2\Big) \\ ={}& \big(\mathbf{H}^{(l)} + 2\mu_t\mathbf{I}\big)^{-1}\big(\mathbf{H}^{(l)} + \mu_t\mathbf{Z}_{t+1}^{(l)} + \mu_t\mathbf{J}_{t+1}^{(l)} - (\mathbf{D}_{1,t} + \mathbf{D}_{2,t})\big), \end{aligned}$$
where $\mathbf{H}^{(l)} = [\underline{\mathbf{X}}^{(l)} \otimes \underline{\mathbf{X}}^{(l)}; (\overline{4})(\overline{4})] \in \mathbb{R}^{M_l \times M_l}$, $\mathbf{W}^{(l)} \in \mathbb{R}^{M_l \times M_l}$, and $\mathbf{I} \in \mathbb{R}^{M_l \times M_l}$ is an identity matrix.
Once each per-class sub-matrix has been calculated from problem (12), the global similarity matrix $\mathbf{W}$ is assembled according to Equation (13). At this point, a tensor sparse and low-rank graph $G = \{\underline{\mathbf{X}}, \mathbf{W}\}$ is completely constructed, with vertex set $\underline{\mathbf{X}}$ and similarity matrix $\mathbf{W}$. The remaining task is to obtain a set of projection matrices $\{\mathbf{U}_n \in \mathbb{R}^{B_n \times I_n},\ B_n \ll I_n,\ n = 1, 2, \ldots, N\}$.
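Putting Equations (16)–(18) together, one pass of the per-class ADMM solver could be sketched as follows. This is our own NumPy illustration of the update rules, not the authors' code; the simplified μ schedule and the helper names are assumptions, and the conditional ρ rule of Algorithm 1 is omitted for brevity:

```python
import numpy as np
from scipy.linalg import block_diag

def unfold(A, n):
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def soft_threshold(X, tau):                      # S_tau in Equation (17)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):                                 # Omega_tau in Equation (16)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def class_graph(Xl, beta, lam, mu=0.1, rho=1.1, mu_max=1e3, max_iter=100, tol=1e-4):
    """Xl: i1 x i2 x d x Ml tensor of the l-th class's training patches.
    Returns the Ml x Ml sparse and low-rank weight matrix W^(l)."""
    Ml = Xl.shape[-1]
    H = unfold(Xl, 3) @ unfold(Xl, 3).T          # H^(l), contraction on all modes but 4
    Z = np.zeros((Ml, Ml)); J = np.zeros((Ml, Ml)); W = np.zeros((Ml, Ml))
    D1 = np.zeros((Ml, Ml)); D2 = np.zeros((Ml, Ml))
    for _ in range(max_iter):
        Z = svt(W + D1 / mu, beta / mu)                          # Equation (16)
        J = soft_threshold(W + D2 / mu, lam / mu)                # Equation (17)
        np.fill_diagonal(J, 0.0)
        W = np.linalg.solve(H + 2 * mu * np.eye(Ml),             # Equation (18)
                            H + mu * (Z + J) - (D1 + D2))
        D1 = D1 + mu * (W - Z)                                   # multiplier updates
        D2 = D2 + mu * (W - J)
        mu = min(rho * mu, mu_max)
        if max(np.abs(W - Z).max(), np.abs(W - J).max()) < tol:
            break
    return W

# The block-diagonal matrix of Equation (13) then follows from the per-class blocks,
# e.g. (X_class and c are hypothetical names for the per-class tensors and class count):
# W_global = block_diag(*[class_graph(X_class[l], 0.01, 0.1) for l in range(c)])
```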

3.2. Tensor Locality Preserving Projection

The aim of tensor LPP is to find transformation matrices $\{\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_N\}$ that project the high-dimensional data $\underline{\mathbf{X}}_i$ into the low-dimensional representation $\hat{\underline{\mathbf{X}}}_i$, where $\hat{\underline{\mathbf{X}}}_i = \underline{\mathbf{X}}_i \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_N \mathbf{U}_N$.
The optimization problem for tensor LPP can be expressed as
$$\begin{aligned} \arg\min J(\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_N) &= \sum_{i,j}\big\|\hat{\underline{\mathbf{X}}}_i - \hat{\underline{\mathbf{X}}}_j\big\|^2\, \mathbf{W}_{ij} = \sum_{i,j}\big\|\underline{\mathbf{X}}_i \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N - \underline{\mathbf{X}}_j \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N\big\|^2\, \mathbf{W}_{ij} \\ &\quad \text{s.t. } \sum_i \big\|\underline{\mathbf{X}}_i \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N\big\|^2\, \mathbf{C}_{ii} = 1 \end{aligned}$$
where $\mathbf{C}_{ii} = \sum_j \mathbf{W}_{ij}$. It can be seen that the corresponding tensors $\hat{\underline{\mathbf{X}}}_i$ and $\hat{\underline{\mathbf{X}}}_j$ in the embedded tensor space are expected to be close to each other if the original tensors $\underline{\mathbf{X}}_i$ and $\underline{\mathbf{X}}_j$ are highly similar.
To solve the optimization problem (19), an iterative scheme is employed [33]. First, assuming that $\{\mathbf{U}_1, \ldots, \mathbf{U}_{n-1}, \mathbf{U}_{n+1}, \ldots, \mathbf{U}_N\}$ are known, let $\hat{\underline{\mathbf{X}}}_{i,(n)} = \underline{\mathbf{X}}_i \times_1 \mathbf{U}_1 \cdots \times_{n-1} \mathbf{U}_{n-1} \times_{n+1} \mathbf{U}_{n+1} \cdots \times_N \mathbf{U}_N$. Using the properties of tensors and the trace, the objective function (19) is rewritten as
$$\begin{aligned} \arg\min J_n(\mathbf{U}_n) &= \sum_{i,j}\big\|\hat{\underline{\mathbf{X}}}_{i,(n)} \times_n \mathbf{U}_n - \hat{\underline{\mathbf{X}}}_{j,(n)} \times_n \mathbf{U}_n\big\|^2\, \mathbf{W}_{ij} = \sum_{i,j}\big\|\mathbf{U}_n\hat{\mathbf{X}}_{i(n)} - \mathbf{U}_n\hat{\mathbf{X}}_{j(n)}\big\|^2\, \mathbf{W}_{ij} \\ &= \sum_{i,j}\mathrm{tr}\Big(\mathbf{U}_n\big(\hat{\mathbf{X}}_{i(n)} - \hat{\mathbf{X}}_{j(n)}\big)\big(\hat{\mathbf{X}}_{i(n)} - \hat{\mathbf{X}}_{j(n)}\big)^T\mathbf{W}_{ij}\,\mathbf{U}_n^T\Big) = \mathrm{tr}\Big(\mathbf{U}_n\Big(\sum_{i,j}\big(\hat{\mathbf{X}}_{i(n)} - \hat{\mathbf{X}}_{j(n)}\big)\big(\hat{\mathbf{X}}_{i(n)} - \hat{\mathbf{X}}_{j(n)}\big)^T\mathbf{W}_{ij}\Big)\mathbf{U}_n^T\Big), \\ &\quad \text{s.t. } \mathrm{tr}\Big(\mathbf{U}_n\Big(\sum_i\hat{\mathbf{X}}_{i(n)}\hat{\mathbf{X}}_{i(n)}^T\mathbf{C}_{ii}\Big)\mathbf{U}_n^T\Big) = 1, \end{aligned}$$
where $\hat{\mathbf{X}}_{i(n)}$ denotes the $n$-mode unfolding of the tensor $\hat{\underline{\mathbf{X}}}_{i,(n)}$. Finally, the optimal solution of problem (20) consists of the eigenvectors corresponding to the first $B_n$ smallest nonzero eigenvalues of the following generalized eigenvalue problem
$$\Big(\sum_{i,j}\big(\hat{\mathbf{X}}_{i(n)} - \hat{\mathbf{X}}_{j(n)}\big)\big(\hat{\mathbf{X}}_{i(n)} - \hat{\mathbf{X}}_{j(n)}\big)^T\mathbf{W}_{ij}\Big)\mathbf{u} = \lambda\Big(\sum_i\hat{\mathbf{X}}_{i(n)}\hat{\mathbf{X}}_{i(n)}^T\mathbf{C}_{ii}\Big)\mathbf{u}$$
Let $\boldsymbol{\Phi} = \sum_{i,j}\big(\hat{\mathbf{X}}_{i(n)} - \hat{\mathbf{X}}_{j(n)}\big)\big(\hat{\mathbf{X}}_{i(n)} - \hat{\mathbf{X}}_{j(n)}\big)^T\mathbf{W}_{ij}$ and $\boldsymbol{\Psi} = \sum_i\hat{\mathbf{X}}_{i(n)}\hat{\mathbf{X}}_{i(n)}^T\mathbf{C}_{ii}$; then, problem (21) can be transformed into
$$\boldsymbol{\Phi}\mathbf{u} = \lambda\boldsymbol{\Psi}\mathbf{u}$$
To solve this problem, the function eig(·) provided in MATLAB (R2013a, The MathWorks, Natick, MA, USA) is adopted, i.e., $[\mathbf{u}, \boldsymbol{\Lambda}] = \mathrm{eig}(\boldsymbol{\Phi}, \boldsymbol{\Psi})$, and the eigenvectors in $\mathbf{u}$ corresponding to the first $B_n$ smallest nonzero eigenvalues in $\boldsymbol{\Lambda}$ are chosen to form the projection matrix. The other projection matrices can be obtained in a similar manner. The complete TSLGDA algorithm is outlined in Algorithm 1.
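For readers working outside MATLAB, a minimal NumPy/SciPy sketch of this per-mode step is given below. It is our own illustration of one pass of the alternating scheme in Equations (20)–(22); the function names and the small ridge added to Ψ for numerical stability are assumptions on our part:

```python
import numpy as np
from scipy.linalg import eigh

def unfold(A, n):
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def mode_n_product(A, U, n):
    out = list(A.shape); out[n] = U.shape[0]
    rest = [s for i, s in enumerate(out) if i != n]
    return np.moveaxis((U @ unfold(A, n)).reshape([out[n]] + rest), 0, n)

def tlpp_mode(samples, W, Us, n, Bn):
    """One TLPP step: the mode-n projection with all other modes' U_m held fixed.
    samples: list of M same-sized tensors; W: M x M graph weight matrix; Us: list of
    current projections (entries may be None if not yet estimated); Bn: target size."""
    C = W.sum(axis=1)
    def partial(Xk):                       # apply every fixed projection except mode n
        Y = Xk
        for m, Um in enumerate(Us):
            if m != n and Um is not None:
                Y = mode_n_product(Y, Um, m)
        return unfold(Y, n)
    Xs = [partial(Xk) for Xk in samples]
    In = Xs[0].shape[0]
    Phi = np.zeros((In, In)); Psi = np.zeros((In, In))
    for i, Xi in enumerate(Xs):
        Psi += C[i] * (Xi @ Xi.T)
        for j, Xj in enumerate(Xs):
            if W[i, j] != 0.0:
                Dij = Xi - Xj
                Phi += W[i, j] * (Dij @ Dij.T)
    evals, evecs = eigh(Phi, Psi + 1e-8 * np.eye(In))     # Phi u = lambda Psi u
    keep = [k for k in np.argsort(evals) if evals[k] > 1e-10][:Bn]
    return evecs[:, keep].T                                # B_n x I_n
```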
Algorithm 1: Tensor Sparse and Low-Rank Graph-Based Discriminant Analysis for Classification.
Input: Training patches $\underline{\mathbf{X}} = [\underline{\mathbf{X}}^{(1)}, \underline{\mathbf{X}}^{(2)}, \ldots, \underline{\mathbf{X}}^{(c)}]$, testing patches $\underline{\mathbf{Y}}$, regularization parameters $\beta$ and $\lambda$,
            reduced dimensionality $\{B_1, B_2, B_3\}$.
Initialize: $\mathbf{Z}_0^{(l)} = \mathbf{J}_0^{(l)} = \mathbf{W}_0^{(l)} = \mathbf{0}$, $\mathbf{D}_{1,0} = \mathbf{D}_{2,0} = \mathbf{0}$, $\mu_0 = 0.1$, $\mu_{max} = 10^3$, $\rho_0 = 1.1$, $\varepsilon_1 = 10^{-4}$, $\varepsilon_2 = 10^{-3}$,
            maxIter = 100, $t = 0$.
1.   for $l = 1, 2, \ldots, c$ do
2.         repeat
3.             Compute $\mathbf{Z}_{t+1}^{(l)}$, $\mathbf{J}_{t+1}^{(l)}$, and $\mathbf{W}_{t+1}^{(l)}$ according to (16)–(18).
4.             Update the Lagrangian multipliers:
                   $\mathbf{D}_{1,t+1} = \mathbf{D}_{1,t} + \mu_t(\mathbf{W}_{t+1}^{(l)} - \mathbf{Z}_{t+1}^{(l)})$,   $\mathbf{D}_{2,t+1} = \mathbf{D}_{2,t} + \mu_t(\mathbf{W}_{t+1}^{(l)} - \mathbf{J}_{t+1}^{(l)})$.
5.             Update $\mu$: $\mu_{t+1} = \min(\rho\mu_t, \mu_{max})$, where
                   $\rho = \begin{cases}\rho_0, & \text{if } \mu_t \max\big(\|\mathbf{W}_{t+1}^{(l)} - \mathbf{W}_t^{(l)}\|_F, \|\mathbf{Z}_{t+1}^{(l)} - \mathbf{Z}_t^{(l)}\|_F, \|\mathbf{J}_{t+1}^{(l)} - \mathbf{J}_t^{(l)}\|_F\big)/\|\underline{\mathbf{X}}^{(l)}\|_F < \varepsilon_2, \\ 1, & \text{otherwise.}\end{cases}$
6.             Check the convergence conditions: $\|\mathbf{W}_{t+1}^{(l)} - \mathbf{Z}_{t+1}^{(l)}\| < \varepsilon_1$, $\|\mathbf{W}_{t+1}^{(l)} - \mathbf{J}_{t+1}^{(l)}\| < \varepsilon_1$.
7.              $t \leftarrow t + 1$.
8.         until the convergence conditions are satisfied or $t >$ maxIter.
9.   end for
10. Construct the block-diagonal weight matrix $\mathbf{W}$ according to (13).
11. Compute the projection matrices $\{\mathbf{U}_1, \mathbf{U}_2, \mathbf{U}_3\}$ according to (21).
12. Compute the reduced features:
       $\hat{\underline{\mathbf{X}}} = \underline{\mathbf{X}} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3$,   $\hat{\underline{\mathbf{Y}}} = \underline{\mathbf{Y}} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3$.
13. Determine the class labels of $\hat{\underline{\mathbf{Y}}}$ with the NN classifier.
14. Output: The class labels of the test patches.

4. Experiments and Discussions

In this section, three hyperspectral datasets are used to verify the performance of the proposed method. The proposed TSLGDA algorithm is compared with several state-of-the-art approaches, including unsupervised methods (e.g., PCA [8], MPCA [25]) and supervised methods (e.g., LDA [9], LFDA [10], SGDA [19], GDA-SS [15], SLGDA [23], and G-LTDA (local tensor discriminant analysis with Gabor filters) [29]). SGDA is implemented using the SPAMS (SPArse Modeling Software) toolbox [38]. The nearest neighbor (NN) classifier is used to classify the projected features obtained by these DR methods. The class-specific accuracy, overall accuracy (OA), average accuracy (AA), and kappa coefficient (κ) are reported for quantitative assessment after ten runs. All experiments are implemented on a personal computer with an Intel Core i5-4590 CPU (Santa Clara, CA, USA).

4.1. Experimental Datasets

The first dataset [39] was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over northwest Indiana's Indian Pines test site in June 1992. The AVIRIS sensor covers the wavelength range of 0.4–2.45 μm with 220 spectral bands. After removing 20 water-absorption bands (bands 104–108, 150–163, and 220), a total of 200 bands is used in the experiments. The image, with 145 × 145 pixels, represents a rural scene with 16 different land-cover classes. The numbers of training and testing samples in each class are listed in Table 1.
The second dataset [39] is the University of Pavia scene, collected by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over Pavia, Italy. After removing 12 noisy bands, the image has 103 bands with a spectral coverage from 0.43 to 0.86 μm, and it covers a region of 610 × 340 pixels. There are nine ground-truth classes, from which we randomly select training and testing samples as shown in Table 1.
The third dataset [39] was also collected by the AVIRIS sensor, over the Salinas Valley on the Central Coast of California, in 1998. The image comprises 512 × 217 pixels with a spatial resolution of 3.7 m and retains only 204 bands after 20 water-absorption bands are removed. Table 2 lists the 16 land-cover classes and the number of training and testing samples.

4.2. Parameter Tuning

For the proposed method, four important parameters (i.e., the regularization parameters β and λ, the window size, and the reduced spectral dimension), which can be divided into three groups, need to be determined before proceeding to the following experiments. β and λ control the effect of the low-rank term and the sparse term in the objective function, respectively, and are tuned together, while the window size and the reduced spectral dimension form another two groups that can be determined separately. When analyzing one group of parameters, the other groups are fixed at their chosen values. Following many existing DR methods [22,23,24] and tensor-based studies [26,28], the window size is first set to 9 × 9 for the Indian Pines and Salinas datasets and 7 × 7 for the University of Pavia dataset; the initial value of the reduced spectral dimension is set to 30 for all three datasets, for which the performance essentially reaches a steady state.

4.2.1. Regularization Parameters for TSLGDA

With the initial values of the window size and spectral dimension fixed, β and λ are first tuned to achieve better classification performance. Figure 2 shows the overall classification accuracy with respect to different β and λ, obtained by fivefold cross validation for the three experimental datasets. It can be clearly seen that the OA reaches its maximum for certain combinations of β and λ. Accordingly, for the Indian Pines dataset, the optimal values of (β, λ) are set to (0.01, 0.1), which is also an appropriate choice for the University of Pavia dataset, while (0.001, 0.1) is chosen for the Salinas data.

4.2.2. Window Size for Tensor Representation

For the tensor-based DR methods, i.e., MPCA and TSLGDA, the window size (or patch size) is another important parameter. Note that small windows may fail to cover enough spatial information, whereas large windows may contain multiple classes, resulting in complicated analysis and a heavy computational burden. Therefore, the window size is searched in the range {3 × 3, 5 × 5, 7 × 7, 9 × 9}. β and λ are fixed at the tuned values, while the spectral dimensions are still set to their initial values for the three datasets. Figure 3 presents the classification performance of MPCA and TSLGDA with different window sizes for the experimental datasets. It can be seen that the window size for both MPCA and TSLGDA can be chosen as 9 × 9 for the Indian Pines and Salinas datasets, while the optimal values are 5 × 5 and 7 × 7, respectively, for the University of Pavia dataset. This may be because the former two datasets represent rural scenes containing large homogeneous regions, whereas the University of Pavia data are acquired over an urban area with small homogeneous regions. To evaluate the classification performance using the low-dimensional data, the 1-NN classifier is adopted in this paper.

4.2.3. The Reduced Spectral Dimension for TSLGDA

According to [28], {1, 1} is set as the reduced dimensionality of the first two dimensions (i.e., the two spatial dimensions). The third dimension (i.e., the spectral dimension) is examined by keeping the tuned values of β, λ, and the window size fixed. Figure 4 shows the overall classification accuracy with respect to the spectral dimension for the three hyperspectral datasets. Obviously, due to the spatial information contained in the tensor structure, the tensor-based DR methods (i.e., MPCA, TSLGDA) outperform the vector-based DR methods (i.e., PCA, SGDA, GDA-SS, SLGDA). According to [29,37], G-LTDA automatically obtains the optimal reduced dimensions during the optimization procedure; therefore, the spectral dimension of G-LTDA is not discussed here. For the Indian Pines dataset, the performance of all considered methods increases with the spectral dimension and then stabilizes at its maximum value. Similar results can also be observed for the University of Pavia and Salinas datasets. In all cases, TSLGDA outperforms the other DR methods even when the spectral dimension is as low as 5. In the following assessment, {1, 1, 30} and {1, 1, 20} dimensions are used to conduct classification for the two AVIRIS datasets and the ROSIS dataset, respectively.

4.3. Classification Results

4.3.1. Classification Accuracy

Table 3, Table 4 and Table 5 present the class-specific accuracy, OA, AA, and kappa coefficient for the three experimental datasets, respectively. Obviously, the proposed method provides better results than the other compared methods on almost all classes; meanwhile, its OA, AA, and kappa coefficient are also better than those of the other methods. Specifically, compared to all considered methods, TSLGDA yields about 2% to 30%, 5% to 20%, and 2% to 12% gains in OA with limited training sets for the three datasets, respectively. Even for classes with few labeled training samples, such as classes 1, 7, and 9 in the Indian Pines data, the proposed TSLGDA algorithm offers a great improvement in performance as well. Besides TSLGDA, MPCA and G-LTDA also obtain much higher accuracies than the vector-based methods, which effectively demonstrates the advantage of tensor-based techniques. In addition, SLGDA yields better results than SGDA (about 3%, 1%, and 0.6% gains) by simultaneously exploiting the properties of sparsity and low-rankness, while GDA-SS is superior to SGDA owing to its spectral similarity measurement based on spectral characteristics when constructing the graph.

4.3.2. Classification Maps

In order to show the classification results more intuitively, the classification maps of all considered methods are provided in Figure 5, Figure 6 and Figure 7 for the three experimental datasets, respectively. From Figure 5, it can be clearly seen that the proposed method obtains much smoother classification regions than the other methods, especially for class 1 (Alfalfa), class 2 (Corn-notill), class 3 (Corn-mintill), and class 12 (Soybean-clean), whose spectral characteristics are highly correlated with those of other classes. Similar results can also be observed in Figure 6 and Figure 7, where class 1 (Asphalt), class 6 (Bare Soil), and class 8 (Self-blocking bricks) in the second dataset, and class 8 (Grapes untrained) and class 15 (Vineyard untrained) in the third dataset, are labeled more precisely. These observations are consistent with the quantitative results listed in Table 3, Table 4 and Table 5.

4.3.3. The Influence of Training Size

To show the influence of the training size, the considered DR methods are tested with different numbers of training samples per class. The results are given in Figure 8, from which we can see that the OA of all methods improves as the number of training samples increases for the three datasets. Due to the spatial structure information contained in the tensor, the proposed method always performs better than the other methods in all cases. In addition, with the label information, the supervised DR methods (i.e., SGDA, GDA-SS, SLGDA, G-LTDA, TSLGDA) achieve better results than the corresponding unsupervised DR methods (i.e., PCA, MPCA).

4.3.4. The Analysis of Computational Complexity

For the comparison of computational complexity, we take the Indian Pines data as an example. Table 6 shows the execution times of all considered methods, from which it can be clearly seen that the traditional methods (e.g., PCA, LDA, LFDA) run faster than the other, more recently proposed methods. In addition, due to the complicated tensor computation, the tensor-based DR methods (e.g., MPCA, G-LTDA, TSLGDA) cost more time than the vector-based methods (e.g., SGDA, GDA-SS, SLGDA). Although TSLGDA has the highest computational complexity, it yields the best classification performance. In practice, general-purpose graphics processing units (GPUs) could be adopted to greatly accelerate the TSLGDA algorithm.

5. Conclusions

In this paper, we have proposed a tensor sparse and low-rank graph-based discriminant analysis method (TSLGDA) for dimensionality reduction of hyperspectral imagery. The hyperspectral data cube is treated as a third-order tensor, from which sub-tensors (local patches) centered at the training samples are extracted to construct the sparse and low-rank graph. On the one hand, by imposing both sparse and low-rank constraints on the objective function, the proposed method is capable of capturing the local and global structure simultaneously. On the other hand, due to the spatial structure information introduced by the tensor data, the proposed method can improve the graph structure and enhance the discriminative ability of the reduced features. Experiments conducted on three hyperspectral datasets have consistently confirmed the effectiveness of the proposed TSLGDA algorithm, even for small training sizes. Compared to some state-of-the-art methods, the overall classification accuracy of TSLGDA in the low-dimensional space improves by about 2% to 30%, 5% to 20%, and 2% to 12% for the three experimental datasets, respectively, at the cost of increased computational complexity.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61371165 and Grant 61501018, and by the Frontier Intersection Basic Research Project for the Central Universities under Grant A0920502051714-5. The authors would like to thank Prof. David A. Landgrebe from Purdue University for providing the AVIRIS image of Indian Pines and Prof. Paolo Gamba from the University of Pavia for providing the ROSIS dataset. The authors would also like to thank Dr. Zisha Zhong for sharing the code of the Gabor filters and giving some useful suggestions. Last but not least, we would like to thank the editors and the anonymous reviewers for their detailed comments and suggestions, which greatly helped us to improve the clarity and presentation of our manuscript.

Author Contributions

All of the authors made significant contributions to the work. Lei Pan and Heng-Chao Li designed the research model, analyzed the results and wrote the paper. Yang-Jun Deng provided codes about tensor processing. Fan Zhang and Xiang-Dong Chen reviewed the manuscript. Qian Du contributed to the editing and review of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, Z.; Li, J.; Liu, L. Tensor block-sparsity based representation for spectral-spatial hyperspectral image classification. Remote Sens. 2016, 8, 636. [Google Scholar] [CrossRef]
  2. Peng, B.; Li, W.; Xie, X.M.; Du, Q.; Liu, K. Weighted-fusion-based representation classifiers for hyperspectral imagery. Remote Sens. 2015, 7, 14806–14826. [Google Scholar] [CrossRef]
  3. Yu, S.Q.; Jia, S.; Xu, C.Y. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2017, 219, 88–98. [Google Scholar] [CrossRef]
  4. Jimenez, L.O.; Landgrebe, D.A. Supervised classification in high-dimensional space: Geometrical, statistical, and asymptotical properties of multivariate data. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 1998, 28, 39–54. [Google Scholar] [CrossRef]
  5. Chang, C.-I.; Safavi, H. Progressive dimensionality reduction by transform for hyperspectral imagery. Pattern Recognit. 2011, 44, 2760–2773. [Google Scholar] [CrossRef]
  6. Yuan, Y.; Lin, J.Z.; Wang, Q. Dual-clustering-based hyperspectral band selection by contextual analysis. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1431–1445. [Google Scholar] [CrossRef]
  7. Wang, Q.; Lin, J.Z.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef] [PubMed]
  8. Jolliffe, I.T. Principal Component Analysis; Springer-Verlag: New York, NY, USA, 2002. [Google Scholar]
  9. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of hyperspectral images with regularized linear discriminant analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
  10. Li, W.; Prasad, S.; Fowler, J.E.; Bruce, L.M. Locality-preserving dimensionality reduction and classification for hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1185–1198. [Google Scholar] [CrossRef]
  11. Tan, K.; Zhou, S.Y.; Du, Q. Semisupervised discriminant analysis for hyperspectral imagery with block-sparse graph. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1765–1769. [Google Scholar] [CrossRef]
  12. Chatpatanasiri, R.; Kijsirikul, B. A unified semi-supervised dimensionality reduction framework for manifold learning. Neurocomputing 2010, 73, 1631–1640. [Google Scholar] [CrossRef]
  13. Yan, S.C.; Xu, D.; Zhang, B.Y.; Zhang, H.-J.; Yang, Q.; Lin, S. Graph embedding and extensions: A general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 40–51. [Google Scholar] [CrossRef] [PubMed]
  14. He, X.F.; Cai, D.; Yan, S.C.; Zhang, H.-J. Neighborhood preserving embedding. In Proceedings of the 2005 IEEE Conference on Computer Vision (ICCV), Beijing, China, 17–20 October 2005; pp. 1208–1213. [Google Scholar]
  15. Feng, F.B.; Li, W.; Du, Q.; Zhang, B. Dimensionality reduction of hyperspectral image with graph-based discriminant analysis considering spectral similarity. Remote Sens. 2017, 9, 323. [Google Scholar] [CrossRef]
  16. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
  17. Wright, J.; Ma, Y.; Mairal, J.; Sapiro, G.; Huang, T.S.; Yan, S.C. Sparse representation for computer vision and pattern recognition. Proc. IEEE 2010, 98, 1031–1044. [Google Scholar] [CrossRef]
  18. Cheng, B.; Yang, J.C.; Yan, S.C.; Fu, Y.; Huang, T.S. Learning with l1-graph for image analysis. IEEE Trans. Image Process. 2015, 23, 2241–2253. [Google Scholar]
  19. Ly, N.H.; Du, Q.; Fowler, J.E. Sparse graph-based discriminant analysis for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3872–3884. [Google Scholar]
  20. He, W.; Zhang, H.Y.; Zhang, L.P.; Philips, W.; Liao, W.Z. Weighted sparse graph based dimensionality reduction for hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 686–690. [Google Scholar] [CrossRef]
  21. Ly, N.H.; Du, Q.; Fowler, J.E. Collaborative graph-based discriminant analysis for hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2688–2696. [Google Scholar] [CrossRef]
  22. Li, W.; Du, Q. Laplacian regularized collaborative graph for discriminant analysis of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7066–7076. [Google Scholar] [CrossRef]
  23. Li, W.; Liu, J.B.; Du, Q. Sparse and low-rank graph for discriminant analysis of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4094–4105. [Google Scholar] [CrossRef]
  24. Xue, Z.H.; Du, P.J.; Li, J.; Su, H.J. Simultaneous sparse graph embedding for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6114–6132. [Google Scholar] [CrossRef]
  25. Lu, H.P.; Plataniotis, K.N.; Venetsanopoulos, A.N. MPCA: Multilinear principal component analysis of tensor objects. IEEE Trans. Neural Netw. 2008, 19, 18–39. [Google Scholar] [PubMed]
  26. Guo, X.; Huang, X.; Zhang, L.F.; Zhang, L.P.; Plaza, A.; Benediktsson, J.A. Support tensor machines for classification of hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3248–3264. [Google Scholar] [CrossRef]
  27. An, J.L.; Zhang, X.R.; Jiao, L.C. Dimensionality reduction based on group-based tensor model for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1497–1501. [Google Scholar] [CrossRef]
  28. Zhang, L.P.; Zhang, L.F.; Tao, D.C.; Huang, X. Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction. IEEE Trans. Geosci. Remote Sens. 2013, 51, 242–256. [Google Scholar] [CrossRef]
  29. Zhong, Z.S.; Fan, B.; Duan, J.Y.; Wang, L.F.; Ding, K.; Xiang, S.M.; Pan, C.H. Discriminant tensor spectral-spatial feature extraction for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1028–1032. [Google Scholar] [CrossRef]
  30. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
  31. Dalla Mura, M.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Morphological attribute profiles for the analysis of very high resolution images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762. [Google Scholar] [CrossRef]
  32. Rajadell, O.; Garcia-Sevilla, P.; Pla, F. Spectral-spatial pixel characterization using Gabor filters for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2013, 10, 860–864. [Google Scholar] [CrossRef]
  33. Zhao, H.T.; Sun, S.Y. Sparse tensor embedding based multispectral face recognition. Neurocomputing 2014, 133, 427–436. [Google Scholar] [CrossRef]
  34. Lai, Z.H.; Xu, Y.; Chen, Q.C.; Yang, J.; Zhang, D. Multilinear sparse principal component analysis. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1942–1950. [Google Scholar] [CrossRef] [PubMed]
  35. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  36. Cai, J.-F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 24, 1956–1982. [Google Scholar] [CrossRef]
  37. Nie, F.P.; Xiang, S.M.; Song, Y.Q.; Zhang, C.S. Extracting the optimal dimensionality for local tensor discriminant analysis. Pattern Recognit. 2009, 42, 105–114. [Google Scholar] [CrossRef]
  38. SPArse Modeling Software. Available online: http://spams-devel.gforge.inria.fr/index.html (accessed on 20 January 2017).
  39. Hyperspectral Remote Sensing Scenes. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 20 January 2017).
Figure 1. Visual illustration of n-mode vectors, n-mode unfolding, and n-mode product of a third-order tensor from a hyperspectral image.
Figure 2. Parameter tuning of β and λ for the proposed TSLGDA algorithm using three datasets: (a) Indian Pines; (b) University of Pavia; (c) Salinas.
Figure 3. Parameter tuning of window size for MPCA and TSLGDA using three datasets: (a) Indian Pines; (b) University of Pavia; (c) Salinas.
Figure 4. Overall accuracy versus the reduced spectral dimension for different methods using three datasets: (a) Indian Pines; (b) University of Pavia; (c) Salinas.
Figure 5. Classification maps of different methods for the Indian Pines dataset: (a) ground truth; (b) training set; (c) origin; (d) PCA; (e) LDA; (f) LFDA; (g) SGDA; (h) GDA-SS; (i) SLGDA; (j) MPCA; (k) G-LTDA; and (l) TSLGDA.
Figure 6. Classification maps of different methods for the University of Pavia dataset: (a) ground truth; (b) training set; (c) origin; (d) PCA; (e) LDA; (f) LFDA; (g) SGDA; (h) GDA-SS; (i) SLGDA; (j) MPCA; (k) G-LTDA; and (l) TSLGDA.
Figure 7. Classification maps of different methods for the Salinas dataset: (a) ground truth; (b) training set; (c) origin; (d) PCA; (e) LDA; (f) LFDA; (g) SGDA; (h) GDA-SS; (i) SLGDA; (j) MPCA; (k) G-LTDA; and (l) TSLGDA.
Figure 8. Overall classification accuracy and standard deviation versus different numbers of training samples per class for all methods using three datasets: (a) Indian Pines; (b) University of Pavia; (c) Salinas.
Table 1. Number of training and testing samples for the Indian Pines and University of Pavia datasets.

Indian Pines:
Class | Name | Training | Testing
1 | Alfalfa | 5 | 41
2 | Corn-notill | 143 | 1285
3 | Corn-mintill | 83 | 747
4 | Corn | 24 | 213
5 | Grass-pasture | 48 | 435
6 | Grass-trees | 73 | 657
7 | Grass-pasture-mowed | 3 | 25
8 | Hay-windrowed | 48 | 430
9 | Oats | 2 | 18
10 | Soybean-notill | 97 | 875
11 | Soybean-mintill | 246 | 2209
12 | Soybean-clean | 59 | 534
13 | Wheat | 21 | 184
14 | Woods | 127 | 1138
15 | Buildings-Grass-Trees-Drive | 39 | 347
16 | Stone-Steel-Towers | 9 | 84
Total | | 1027 | 9222

University of Pavia:
Class | Name | Training | Testing
1 | Asphalt | 40 | 6591
2 | Meadows | 40 | 18,609
3 | Gravel | 40 | 2059
4 | Trees | 40 | 3024
5 | Painted metal sheets | 40 | 1305
6 | Bare Soil | 40 | 4989
7 | Bitumen | 40 | 1290
8 | Self-blocking bricks | 40 | 3642
9 | Shadows | 40 | 907
Total | | 360 | 42,416
Table 2. Number of training and testing samples for the Salinas dataset.

Class | Name | Training | Testing
1 | Brocoli-green-weeds-1 | 40 | 1969
2 | Brocoli-green-weeds-2 | 75 | 3651
3 | Fallow | 40 | 1936
4 | Fallow-rough-plow | 28 | 1366
5 | Fallow-smooth | 54 | 2624
6 | Stubble | 79 | 3880
7 | Celery | 72 | 3507
8 | Grapes-untrained | 225 | 11,046
9 | Soil-vinyard-develop | 124 | 6079
10 | Corn-senesced-green-weeds | 66 | 3212
11 | Lettuce-romaine-4wk | 21 | 1047
12 | Lettuce-romaine-5wk | 39 | 1888
13 | Lettuce-romaine-6wk | 18 | 898
14 | Lettuce-romaine-7wk | 21 | 1049
15 | Vinyard-untrained | 145 | 7123
16 | Vinyard-vertical-trellis | 36 | 1771
Total | | 1083 | 53,046
Table 3. Classification accuracy (%) and standard deviation of different methods for the Indian Pines data when the reduced dimension is 30.

No. | Origin | PCA | LDA | LFDA | SGDA | GDA-SS | SLGDA | MPCA | G-LTDA | TSLGDA
1 | 39.02 ± 8.27 | 54.15 ± 11.1 | 33.66 ± 17.8 | 44.88 ± 15.5 | 65.04 ± 7.45 | 49.59 ± 12.2 | 48.78 ± 6.90 | 71.34 ± 9.63 | 92.20 ± 4.69 | 91.71 ± 8.02
2 | 55.92 ± 2.68 | 52.96 ± 1.53 | 57.28 ± 2.13 | 67.78 ± 3.56 | 69.31 ± 2.37 | 74.24 ± 3.95 | 73.04 ± 1.93 | 81.09 ± 2.74 | 96.47 ± 1.01 | 97.32 ± 0.68
3 | 49.83 ± 2.68 | 50.15 ± 2.34 | 58.34 ± 2.57 | 66.75 ± 2.82 | 62.65 ± 1.76 | 69.57 ± 5.56 | 67.00 ± 0.28 | 82.26 ± 2.74 | 93.98 ± 2.34 | 97.51 ± 0.91
4 | 42.07 ± 7.75 | 40.19 ± 4.56 | 38.12 ± 4.00 | 54.93 ± 7.69 | 49.14 ± 5.40 | 58.06 ± 8.24 | 62.68 ± 12.3 | 87.91 ± 4.65 | 96.53 ± 3.93 | 97.37 ± 1.90
5 | 82.95 ± 2.93 | 84.47 ± 4.58 | 81.20 ± 3.87 | 88.25 ± 2.41 | 89.55 ± 1.74 | 92.03 ± 1.34 | 93.32 ± 0.98 | 91.13 ± 2.00 | 93.15 ± 1.44 | 97.00 ± 2.50
6 | 90.75 ± 1.00 | 93.06 ± 2.95 | 93.36 ± 1.47 | 94.64 ± 1.59 | 95.38 ± 0.61 | 96.91 ± 0.89 | 96.27 ± 0.11 | 97.53 ± 1.14 | 94.76 ± 2.94 | 99.27 ± 0.46
7 | 81.60 ± 8.29 | 72.00 ± 13.6 | 76.00 ± 12.3 | 79.20 ± 22.5 | 88.00 ± 4.00 | 88.00 ± 8.00 | 88.00 ± 5.66 | 94.00 ± 7.66 | 95.20 ± 7.15 | 96.80 ± 3.35
8 | 96.28 ± 1.78 | 93.02 ± 1.52 | 95.26 ± 2.71 | 99.12 ± 1.47 | 99.53 ± 0.40 | 97.91 ± 2.02 | 99.19 ± 0.49 | 98.37 ± 1.66 | 97.81 ± 0.67 | 99.86 ± 0.31
9 | 26.67 ± 4.65 | 34.44 ± 12.0 | 25.56 ± 16.5 | 43.33 ± 9.94 | 50.00 ± 33.8 | 37.04 ± 16.9 | 25.00 ± 11.8 | 54.17 ± 19.4 | 78.89 ± 15.4 | 93.33 ± 7.24
10 | 66.06 ± 2.04 | 63.91 ± 3.49 | 65.40 ± 3.61 | 69.04 ± 3.05 | 69.64 ± 5.81 | 73.64 ± 3.02 | 74.03 ± 0.32 | 84.12 ± 1.32 | 95.93 ± 1.35 | 96.52 ± 1.56
11 | 71.75 ± 3.00 | 71.41 ± 2.00 | 73.65 ± 1.81 | 72.43 ± 1.83 | 78.18 ± 1.42 | 79.45 ± 1.23 | 79.52 ± 2.08 | 90.30 ± 0.78 | 96.32 ± 1.41 | 98.53 ± 0.59
12 | 43.41 ± 6.34 | 41.46 ± 2.55 | 48.63 ± 3.25 | 67.20 ± 1.56 | 67.29 ± 2.19 | 74.78 ± 4.59 | 76.83 ± 1.99 | 73.73 ± 2.38 | 93.60 ± 1.70 | 96.17 ± 1.75
13 | 91.41 ± 2.44 | 94.02 ± 2.40 | 93.59 ± 1.11 | 98.70 ± 0.62 | 96.01 ± 0.63 | 97.83 ± 1.63 | 98.64 ± 1.15 | 98.23 ± 1.12 | 91.85 ± 4.21 | 99.46 ± 0.67
14 | 90.04 ± 1.96 | 89.65 ± 2.10 | 89.44 ± 2.16 | 93.83 ± 1.56 | 94.58 ± 0.89 | 94.00 ± 1.18 | 96.05 ± 0.87 | 95.78 ± 0.40 | 97.72 ± 0.66 | 99.67 ± 0.43
15 | 37.98 ± 2.18 | 36.54 ± 2.30 | 41.15 ± 3.73 | 61.04 ± 2.89 | 48.90 ± 1.92 | 56.20 ± 3.20 | 56.48 ± 2.85 | 88.26 ± 4.69 | 95.91 ± 1.62 | 98.67 ± 1.16
16 | 88.43 ± 6.30 | 88.67 ± 3.02 | 91.08 ± 3.47 | 89.64 ± 5.56 | 92.37 ± 3.03 | 91.27 ± 2.99 | 93.98 ± 1.70 | 93.07 ± 4.33 | 84.29 ± 8.68 | 97.35 ± 1.32
OA | 69.25 ± 1.16 | 68.52 ± 0.88 | 70.86 ± 0.76 | 76.60 ± 0.82 | 77.65 ± 1.44 | 80.51 ± 0.31 | 80.76 ± 0.08 | 88.34 ± 0.51 | 95.67 ± 0.49 | 98.08 ± 0.30
AA | 65.89 ± 1.19 | 66.26 ± 1.62 | 66.36 ± 2.30 | 74.42 ± 1.79 | 75.97 ± 2.37 | 76.91 ± 2.38 | 76.80 ± 1.98 | 86.33 ± 1.17 | 93.41 ± 0.56 | 97.28 ± 0.85
κ | 64.90 ± 1.30 | 64.04 ± 0.98 | 66.73 ± 0.92 | 73.32 ± 0.93 | 74.40 ± 1.68 | 77.70 ± 0.38 | 78.01 ± 0.14 | 86.70 ± 0.59 | 95.07 ± 0.56 | 97.81 ± 0.34
Table 4. Classification accuracy (%) and standard deviation of different methods for the University of Pavia data when the reduced dimension is 20.

No. | Origin | PCA | LDA | LFDA | SGDA | GDA-SS | SLGDA | MPCA | G-LTDA | TSLGDA
1 | 56.13 ± 1.99 | 55.98 ± 2.90 | 64.77 ± 2.11 | 60.56 ± 5.24 | 47.44 ± 2.00 | 52.88 ± 6.58 | 52.84 ± 1.98 | 84.20 ± 1.49 | 72.41 ± 2.03 | 91.15 ± 1.46
2 | 69.68 ± 5.59 | 70.30 ± 3.27 | 68.75 ± 3.44 | 77.05 ± 4.42 | 82.15 ± 2.71 | 78.88 ± 2.80 | 80.92 ± 3.74 | 84.60 ± 3.31 | 89.24 ± 0.93 | 92.59 ± 2.68
3 | 68.02 ± 3.95 | 67.34 ± 1.49 | 69.90 ± 3.10 | 66.47 ± 3.94 | 63.83 ± 10.5 | 64.27 ± 3.28 | 61.17 ± 3.26 | 80.24 ± 3.01 | 89.48 ± 5.68 | 86.83 ± 2.44
4 | 90.21 ± 4.43 | 86.98 ± 3.70 | 88.92 ± 2.23 | 91.33 ± 2.01 | 90.73 ± 2.25 | 91.26 ± 2.10 | 92.54 ± 0.07 | 92.20 ± 1.85 | 71.28 ± 4.90 | 96.04 ± 2.23
5 | 99.39 ± 0.38 | 99.49 ± 0.23 | 99.51 ± 0.25 | 99.88 ± 0.10 | 99.73 ± 0.18 | 99.79 ± 0.08 | 99.66 ± 0.27 | 99.72 ± 0.26 | 98.41 ± 1.10 | 100 ± 0.00
6 | 59.11 ± 2.25 | 61.68 ± 6.60 | 66.35 ± 6.62 | 65.36 ± 7.09 | 59.47 ± 5.18 | 65.07 ± 2.72 | 63.97 ± 0.50 | 77.99 ± 4.68 | 95.04 ± 2.35 | 93.06 ± 3.12
7 | 83.36 ± 4.59 | 83.22 ± 3.57 | 86.34 ± 2.25 | 75.78 ± 1.97 | 82.25 ± 5.40 | 79.04 ± 3.64 | 81.71 ± 1.75 | 89.22 ± 2.09 | 98.26 ± 1.37 | 97.50 ± 0.90
8 | 68.06 ± 2.72 | 66.89 ± 4.34 | 68.24 ± 3.24 | 60.81 ± 4.18 | 61.16 ± 8.92 | 64.67 ± 4.21 | 65.46 ± 2.87 | 76.30 ± 3.07 | 93.31 ± 1.32 | 86.07 ± 3.27
9 | 95.94 ± 1.52 | 95.90 ± 1.36 | 97.00 ± 1.82 | 83.95 ± 4.64 | 84.04 ± 6.01 | 87.81 ± 2.20 | 85.17 ± 1.01 | 99.49 ± 0.32 | 88.00 ± 2.23 | 98.39 ± 1.03
OA | 69.47 ± 2.16 | 69.65 ± 0.88 | 71.38 ± 1.10 | 73.04 ± 0.70 | 72.59 ± 0.68 | 73.01 ± 1.47 | 73.80 ± 1.91 | 84.30 ± 1.05 | 86.92 ± 0.42 | 92.33 ± 0.93
AA | 76.66 ± 0.52 | 76.42 ± 0.70 | 78.86 ± 0.92 | 75.69 ± 1.55 | 74.53 ± 1.82 | 75.96 ± 0.74 | 75.94 ± 0.25 | 87.11 ± 0.71 | 88.38 ± 0.43 | 93.52 ± 0.53
κ | 61.22 ± 2.30 | 61.43 ± 0.88 | 63.79 ± 1.19 | 65.31 ± 0.83 | 64.39 ± 0.89 | 65.22 ± 1.74 | 66.10 ± 2.14 | 79.57 ± 1.24 | 82.88 ± 0.50 | 89.93 ± 1.17
Table 5. Classification accuracy (%) and standard deviation of different methods for the Salinas data when the reduced dimension is 30.

No. | Origin | PCA | LDA | LFDA | SGDA | GDA-SS | SLGDA | MPCA | G-LTDA | TSLGDA
1 | 98.07 ± 0.44 | 98.73 ± 0.80 | 98.98 ± 0.81 | 99.44 ± 0.10 | 99.49 ± 0.13 | 99.39 ± 0.14 | 99.61 ± 0.23 | 98.00 ± 0.98 | 96.94 ± 1.63 | 99.92 ± 0.15
2 | 98.68 ± 0.38 | 98.90 ± 0.25 | 98.88 ± 0.29 | 99.23 ± 0.17 | 99.54 ± 0.28 | 99.25 ± 0.21 | 99.50 ± 0.37 | 99.47 ± 0.55 | 98.73 ± 0.81 | 99.98 ± 0.03
3 | 96.20 ± 0.25 | 96.85 ± 0.61 | 95.13 ± 1.05 | 99.16 ± 0.25 | 99.28 ± 0.05 | 99.59 ± 0.15 | 99.57 ± 0.17 | 98.17 ± 0.19 | 93.65 ± 1.88 | 99.97 ± 0.06
4 | 99.24 ± 0.08 | 99.39 ± 0.35 | 99.51 ± 0.18 | 99.12 ± 0.46 | 99.41 ± 0.13 | 99.12 ± 0.41 | 99.15 ± 0.30 | 99.71 ± 0.87 | 93.92 ± 3.27 | 98.41 ± 0.68
5 | 94.55 ± 0.66 | 93.45 ± 1.85 | 95.63 ± 0.81 | 98.79 ± 0.09 | 98.64 ± 0.87 | 98.42 ± 0.62 | 99.03 ± 0.12 | 97.95 ± 1.28 | 96.50 ± 1.76 | 98.87 ± 1.33
6 | 99.67 ± 0.16 | 99.63 ± 0.25 | 99.56 ± 0.11 | 99.79 ± 0.21 | 99.77 ± 0.05 | 99.70 ± 0.13 | 99.87 ± 0.13 | 99.24 ± 1.27 | 98.74 ± 0.52 | 100 ± 0.00
7 | 98.87 ± 0.53 | 99.40 ± 0.11 | 99.34 ± 0.24 | 99.43 ± 0.24 | 99.44 ± 0.09 | 99.64 ± 0.30 | 99.64 ± 0.08 | 98.18 ± 0.35 | 96.21 ± 2.39 | 99.99 ± 0.02
8 | 72.41 ± 2.03 | 73.59 ± 2.33 | 74.13 ± 0.49 | 73.01 ± 3.40 | 76.25 ± 4.74 | 78.11 ± 0.42 | 78.86 ± 1.50 | 90.80 ± 0.19 | 97.93 ± 0.60 | 97.73 ± 0.22
9 | 97.82 ± 0.01 | 97.91 ± 0.88 | 98.79 ± 0.50 | 98.92 ± 0.18 | 99.10 ± 0.19 | 98.78 ± 1.46 | 99.65 ± 0.12 | 99.54 ± 0.07 | 98.71 ± 1.07 | 100 ± 0.00
10 | 87.70 ± 4.21 | 89.62 ± 0.33 | 91.68 ± 1.05 | 95.24 ± 0.44 | 96.07 ± 1.28 | 94.88 ± 1.65 | 95.42 ± 1.12 | 94.77 ± 0.67 | 94.96 ± 2.25 | 99.77 ± 0.37
11 | 93.82 ± 1.38 | 96.85 ± 1.92 | 93.47 ± 4.81 | 95.03 ± 2.28 | 96.49 ± 3.75 | 95.61 ± 2.83 | 97.29 ± 3.54 | 94.58 ± 1.72 | 90.58 ± 4.90 | 100 ± 0.00
12 | 99.75 ± 0.16 | 99.93 ± 0.12 | 99.45 ± 0.46 | 99.95 ± 0.09 | 99.91 ± 0.06 | 99.95 ± 0.07 | 99.82 ± 0.17 | 99.44 ± 0.98 | 97.17 ± 1.53 | 100 ± 0.00
13 | 97.29 ± 0.17 | 96.14 ± 1.56 | 97.14 ± 0.17 | 98.36 ± 0.73 | 97.84 ± 0.89 | 97.94 ± 0.08 | 98.59 ± 0.84 | 99.74 ± 0.28 | 95.01 ± 2.11 | 100 ± 0.00
14 | 92.49 ± 1.53 | 93.89 ± 0.87 | 95.00 ± 0.98 | 94.91 ± 1.63 | 96.91 ± 1.39 | 95.23 ± 2.02 | 97.23 ± 0.25 | 94.97 ± 2.23 | 93.16 ± 5.57 | 99.87 ± 0.15
15 | 62.04 ± 1.48 | 58.38 ± 2.25 | 64.37 ± 1.98 | 69.36 ± 4.08 | 67.05 ± 5.23 | 67.51 ± 1.65 | 66.31 ± 1.88 | 88.63 ± 0.62 | 96.22 ± 1.10 | 96.77 ± 1.47
16 | 94.75 ± 1.41 | 94.44 ± 0.85 | 98.00 ± 0.58 | 98.78 ± 0.40 | 98.57 ± 0.31 | 98.76 ± 0.16 | 99.30 ± 0.46 | 96.95 ± 1.68 | 91.91 ± 7.30 | 100 ± 0.00
OA | 86.97 ± 0.63 | 86.96 ± 0.49 | 88.23 ± 0.27 | 89.34 ± 0.79 | 89.86 ± 0.45 | 90.13 ± 0.42 | 90.43 ± 0.07 | 95.27 ± 0.04 | 96.73 ± 0.89 | 98.98 ± 0.15
AA | 92.71 ± 0.58 | 92.94 ± 0.23 | 93.69 ± 0.40 | 94.91 ± 0.43 | 95.24 ± 0.38 | 95.12 ± 0.24 | 95.55 ± 0.18 | 96.70 ± 0.06 | 95.65 ± 1.41 | 99.46 ± 0.08
κ | 85.50 ± 0.70 | 85.48 ± 0.53 | 86.90 ± 0.30 | 88.15 ± 0.88 | 89.02 ± 0.49 | 88.33 ± 0.46 | 89.34 ± 0.08 | 94.74 ± 0.05 | 96.35 ± 0.99 | 98.86 ± 0.16
Table 6. Execution time (in seconds) of different methods for the Indian Pines data with different training sizes.

Methods | 6% | 8% | 10% | 12% | 14%
PCA | 1.23 | 1.49 | 1.86 | 2.35 | 2.54
LDA | 1.23 | 1.51 | 1.88 | 2.34 | 2.54
LFDA | 1.24 | 1.57 | 1.93 | 2.40 | 2.62
SGDA | 10.60 | 14.11 | 18.53 | 23.90 | 29.30
GDA-SS | 1.13 | 1.36 | 1.67 | 2.15 | 2.45
SLGDA | 3.24 | 4.81 | 7.20 | 10.19 | 13.09
MPCA | 115.94 | 150.00 | 161.06 | 182.37 | 203.94
G-LTDA | 30.96 | 40.24 | 49.86 | 62.41 | 74.83
TSLGDA | 183.91 | 225.06 | 281.19 | 349.44 | 456.84
