Article

Deep Fully Convolutional Embedding Networks for Hyperspectral Images Dimensionality Reduction

1 School of Electronics and Information, Northwestern Polytechnical University, 127 West Youyi Road, Xi’an 710072, Shaanxi, China
2 Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi’an 710071, Shaanxi, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(4), 706; https://doi.org/10.3390/rs13040706
Submission received: 31 December 2020 / Revised: 6 February 2021 / Accepted: 6 February 2021 / Published: 15 February 2021

Abstract

Due to the superior spatial–spectral feature extraction capability of the convolutional neural network (CNN), CNN-based methods show great potential for dimensionality reduction (DR) of hyperspectral images (HSIs). However, most CNN-based methods are supervised, while the class labels of HSIs are limited and difficult to obtain. Although a few unsupervised CNN-based methods have been proposed recently, they focus on data reconstruction and neglect the exploration of discriminability, which is usually the primary goal of DR. To address these issues, we propose a deep fully convolutional embedding network (DFCEN), which not only considers data reconstruction but also introduces a specific learning task for enhancing feature discriminability. DFCEN has an end-to-end symmetric network structure that is the key to unsupervised learning. Moreover, a novel objective function containing two terms, the reconstruction term and the embedding term of the specific task, is established to supervise the learning of DFCEN towards improving the completeness and discriminability of the low-dimensional data. In particular, the specific task is designed to explore and preserve the relationships among samples in HSIs. In addition, because of the limited training samples, the inherent complexity, and the presence of noise in HSIs, a preprocessing step in which a few noisy spectral bands are removed is adopted to improve the effectiveness of the unsupervised DFCEN. Experimental results on three well-known hyperspectral datasets and two classifiers illustrate that the low-dimensional features of DFCEN are highly separable and that DFCEN achieves promising classification performance compared with other DR methods.

1. Introduction

With the rapid development of modern technology, hyperspectral imaging has been widely used in many fields beyond satellite and airborne remote sensing platforms, such as geology [1], ecology [2], geomorphology [3], atmospheric science [4], and forensic science [5]. Hyperspectral sensors capture hundreds of narrow, contiguous spectral bands from visible to infrared wavelengths reflected or emitted from the scene. The resulting 3D hyperspectral images (HSIs) offer high spectral resolution and fine spatial resolution of the imaged scene, allowing more information to be obtained about the objects under study. However, because of the high spectral dimensionality, the interpretation and analysis of hyperspectral images face several challenges. (1) Radiometric noise in some bands limits the precision of image processing [6]. (2) Redundant bands reduce the quality of image analysis, since adjacent spectral bands are often correlated and not all bands are valuable for image processing [7]. (3) These redundant bands also incur large computational and storage costs [8]. (4) The Hughes phenomenon arises: with limited samples, the higher the data dimensionality, the poorer the classification performance [9]. These issues make dimensionality reduction (DR) an essential task for hyperspectral image processing.
Many classic algorithms have been used for HSI DR, such as principal component analysis (PCA) [10], Laplacian eigenmaps (LE) [11], locally linear embedding (LLE) [11], isometric feature mapping (ISOMAP) [12], and linear discriminant analysis (LDA) [13]. Although based on different concepts, these classical algorithms all attempt to explore and maintain the relationships among samples in HSIs, which is beneficial for improving the separability of the low-dimensional features. However, several problems arise when they are applied to HSI DR. Firstly, ISOMAP, LE, and LLE suffer from the out-of-sample problem. To address this issue, locality preserving projection (LPP) [14] and neighborhood preserving embedding (NPE) [15] were proposed. Nevertheless, LPP, NPE, PCA, and LDA are linear transformations, which are ill-suited for HSIs because HSIs, arising from the complex light scattering of natural objects, are inherently nonlinear [16]. Spatial feature extraction, which has been shown to considerably improve HSI representation, is another common limitation of these classical algorithms. Moreover, these algorithms capture only the shallow features of HSIs via a single mapping and cannot extract deep, complex features iteratively.
In recent years, deep learning, one of the most popular learning paradigms, has been applied to various fields; it can yield more nonlinear and more abstract deep representations of data through multiple processing layers [17]. Spatial feature extraction is generally achieved with convolutional neural networks (CNNs), which exploit a set of trainable filters to capture local spatial features from receptive fields but usually require supervised information. Many studies have applied CNNs to HSIs [18]. Paoletti et al. [19] proposed a new deep convolutional neural network for fast hyperspectral image classification. Zhong et al. [20] proposed a supervised spectral–spatial residual network for HSIs based on 3D convolutional layers. Han et al. [21] proposed a different-scale two-stream convolutional network for HSIs. These CNN-based methods can extract superior hyperspectral image features for classification, but they generally require enough labeled samples for supervised learning. In fact, labeling each pixel in an HSI is arduous and time-consuming and generally requires a human expert. As a result, labeled samples for HSIs are scarce and limited, and even unavailable in some scenarios. To address this issue, a few unsupervised CNN-based methods have been proposed for HSIs. Mou et al. [22] proposed a deep residual conv-deconv network for unsupervised spectral–spatial feature learning. Zhang et al. [23] proposed a novel modified generative adversarial network for unsupervised feature extraction in HSIs. Recently, Zhang et al. [24] proposed a symmetric all-convolutional neural-network-based unsupervised feature extraction method for HSIs. However, these unsupervised CNN-based approaches are usually based on data reconstruction and lack the exploration of discriminability, which is usually the primary goal of DR.
To overcome the drawbacks mentioned above, we propose an unsupervised deep fully convolutional embedding network (DFCEN) for dimensionality reduction of HSIs. Different from conventional CNN-based networks, DFCEN replaces the fixed down-sampling (up-sampling) of pooling layers with learnable convolutional (deconvolutional) layers to improve the validity of the representation. Meanwhile, the parameter sharing of convolutional layers is conducive to spatial feature extraction and reduces the number of parameters compared with fully connected layers. For convenience of explanation, DFCEN can be divided into two parts: a convolutional subnetwork that encodes the high-dimensional data into a low-dimensional space, and a deconvolutional subnetwork that recovers the original high-dimensional data from the low-dimensional features. This network structure lays the foundation for unsupervised learning.
To address the shortcoming of the above unsupervised CNN-based approaches, we introduce into DFCEN a specific learning task for enhancing feature discriminability. Considering both the completeness and discriminability of the low-dimensional data, we design a novel objective function containing two terms: a reconstruction term and an embedding term for the specific learning task. The former makes the low-dimensional features retain the completeness and original intrinsic information of the HSIs; the key point of the latter is how to design a specific learning task that enhances the discriminability and separability of the low-dimensional features. The relationships among samples are of considerable value; they are central to the classical DR algorithms described above and have been shown to be conducive to HSI DR. In this paper, the DR concepts of two classical algorithms, LLE and LE, serve as references for the specific learning task in the embedding term. Furthermore, to balance the contributions of the two terms to DR, an adjustable trade-off parameter is added to the objective function. In addition, to reduce the training time, we use convolutional autoencoders (CAEs) for pretraining to obtain good initial learning parameters for DFCEN.
Specifically, the contributions of this paper are as follows.
  • An end-to-end symmetric fully convolutional network, DFCEN, is proposed for HSI DR, which is the foundation of unsupervised learning. Owing to the symmetry of DFCEN, symmetric layers in the convolutional and deconvolutional subnetworks share the same structure, so the two subnetworks can share the same pretraining parameters, which saves pretraining time.
  • A novel objective function with two terms, each constraining a different layer, is designed for DFCEN. This allows DFCEN to explore not only completeness but also discriminability, in contrast to previous unsupervised CNN-based approaches.
  • This is the first work to introduce LLE and LE into an unsupervised fully convolutional network, which simultaneously solves their out-of-sample, linear-transformation, and spatial feature extraction problems. Other DR concepts can also be implemented in the embedding term, as long as they can be expressed in the form of an objective function.
  • Because of the limited training samples, the inherent complexity, and the presence of noise bands in HSIs, DFCEN, as an unsupervised network, is sensitive to the input data. A preprocessing strategy of removing noise bands is therefore adopted, which is shown to effectively improve the DFCEN representation of HSIs.
This paper is organized as follows. In Section 2, we introduce the background and related works. The proposed deep fully convolutional embedding network is described in detail in Section 3. Section 4 presents the experimental results on three datasets, which demonstrate the superiority of the proposed DR method. A conclusion is presented in Section 5.

2. Background and the Related Works

2.1. Mutual Information

Mutual information (MI) measures the statistical dependence between two random variables [25]. Treating the spectral bands and the ground truth map $G$ (shown in Figures 7b, 8b and 9b) as random variables, MI can be used to evaluate the relative utility of each band for classification [8]. Given two random variables $a$ and $b$ with marginal probability distributions $p(a)$ and $p(b)$ and joint probability distribution $p(a,b)$, MI is defined as
$$\mathrm{MI}(a,b) = \sum_{a \in \mathcal{A}} \sum_{b \in \mathcal{B}} p(a,b) \log \frac{p(a,b)}{p(a)\, p(b)}.$$
The higher the MI value between a band and $G$, the greater the contribution of this band to classification. In practical applications, $G$ usually cannot be obtained. The work in [8] used an estimated ground truth map $\hat{G} = \frac{1}{|E|} \sum_{I_j \in E} I_j$ to evaluate the contribution of each band to classification, where $I_j$ is a spectral band and $E$ is a set of bands with the highest entropy. Letting a random variable $a$ take values in a set $\mathcal{A}$ with probability distribution $p(a)$, the entropy is defined by $H(a) = -\sum_{a \in \mathcal{A}} p(a) \log p(a)$ [26].
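For concreteness, the following minimal Python sketch estimates the MI between two bands from gray-level histograms in the spirit of the estimates above; the function name, the bin count, and the histogram-based probability estimates are illustrative choices rather than the exact implementation of [8].

```python
import numpy as np

def mutual_information(band_a, band_b, levels=256):
    """Estimate MI(a, b) from joint gray-level histograms (illustrative sketch)."""
    # Joint histogram -> joint probability p(a, b)
    joint, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(), bins=levels)
    p_ab = joint / joint.sum()
    # Marginal probabilities p(a) and p(b)
    p_a = p_ab.sum(axis=1, keepdims=True)   # shape (levels, 1)
    p_b = p_ab.sum(axis=0, keepdims=True)   # shape (1, levels)
    nz = p_ab > 0                           # avoid log(0) on empty bins
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```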

2.2. Locally Linear Embedding

Locally linear embedding (LLE) is an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs [27]. The local geometry is characterized by linear coefficients that reconstruct each data point from its neighbors [28]. For a data set $X = \{x_1, x_2, \dots, x_m\}$, assuming that $x_i$ can be reconstructed by a linear combination of its neighborhood samples $x_k, x_l, x_s$, that is, $x_i = w_{ik} x_k + w_{il} x_l + w_{is} x_s$, the low-dimensional data should maintain the same linear relationship, $z_i = w_{ik} z_k + w_{il} z_l + w_{is} z_s$. The linear reconstruction coefficients are obtained by the following optimization
$$\min_{w_{ij}} \sum_{i=1}^{m} \Big\| x_i - \sum_{j \in Q_i} w_{ij} x_j \Big\|_2^2 \quad \mathrm{s.t.} \quad \sum_{j \in Q_i} w_{ij} = 1,$$
where $Q_i$ is the set of the $k$ nearest neighbor samples of $x_i$ under the Euclidean distance. The coefficients $w_{ij}$ have the closed-form solution
$$w_{ij} = \frac{\sum_{h \in Q_i} C_{jh}^{-1}}{\sum_{l,s \in Q_i} C_{ls}^{-1}},$$
where $C_{jk} = (x_i - x_j)^T (x_i - x_k)$. The coefficient $w_{ij}$ summarizes the contribution of $x_j$ to the reconstruction of $x_i$. According to LLE, the extracted features should preserve the neighborhood geometric manifold [29]; therefore, the embedding cost function is
$$\min_{z_1, \dots, z_m} \sum_{i=1}^{m} \Big\| z_i - \sum_{j \in Q_i} w_{ij} z_j \Big\|_2^2 \quad \mathrm{s.t.} \; Z = A^T X, \; \sum_{i=1}^{n} z_i = 0, \; \frac{1}{n} A A^T = I,$$
where $z_i$ is the low-dimensional data point corresponding to $x_i$ and $Z = \{z_1, z_2, \dots, z_m\}$ is the low-dimensional representation. LLE maps its inputs into a single global coordinate system of lower dimensionality; by exploring the reconstruction relationship between each sample and its nearest neighbors, it preserves the manifold structure of the data.
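In practice, the closed form of Equation (3) is usually computed per sample by solving the small linear system $Cw = \mathbf{1}$ and normalizing, which yields the same weights. A minimal Python sketch, where the function name and the regularization constant are illustrative assumptions:

```python
import numpy as np

def lle_weights(x_i, neighbors, reg=1e-3):
    """Reconstruction weights of x_i over its k nearest neighbors (Eqs. (2)-(3))."""
    # Local Gram matrix C_jk = (x_i - x_j)^T (x_i - x_k)
    diffs = x_i[None, :] - neighbors            # shape (k, D)
    C = diffs @ diffs.T                         # shape (k, k)
    C += reg * np.trace(C) * np.eye(len(C))     # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(len(C)))     # solve C w = 1
    return w / w.sum()                          # enforce sum_j w_ij = 1
```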

2.3. Laplacian Eigenmaps

Laplacian eigenmaps (LE) [11] has the remarkable property of preserving the local neighborhood structure of data. LE constructs the relationships among data from a local perspective and reconstructs the local structure and features of the data by building an adjacency graph [30]. If two data instances $x_i$ and $x_j$ are very similar, they should be as close as possible in the target subspace after dimensionality reduction. The intuition is that points related to each other (points connected in the graph) should remain as close as possible in the low-dimensional space.
A $k$-nearest-neighbor graph or an $\varepsilon$-ball neighborhood graph is constructed, and the weights of the edges between vertices are assigned using the Gaussian kernel function or the 0–1 weighting method [31]. Given a dataset $X = \{x_1, x_2, \dots, x_n\}$ with $n$ samples, each sample $x_i \in X$ has $m$ features. Let $y_1, y_2, \dots, y_n$ be the $d$-dimensional representations of $X$; that is, each $y_i$ is a $d$-dimensional row vector. With LE, the lower-dimensional representation of $X$ is obtained by solving the following optimization problem
$$\min_{y_1, y_2, \dots, y_n} \sum_{i}^{n} \sum_{j}^{n} \| y_i - y_j \|^2 M_{ij},$$
where $M = (M_{ij})_{n \times n}$ is the weight matrix of the $k$-nearest-neighbor graph. The weight matrix $M$ is calculated from the Euclidean distances between samples and is defined as
$$M_{ij} = \begin{cases} e^{-\frac{\| x_i - x_j \|^2}{t}}, & x_j \in Q_i \\ 0, & x_j \notin Q_i, \end{cases}$$
where $Q_i$ is the set of the $k$ nearest neighbor samples of $x_i$ under the Euclidean distance and $t$ is an adjustable parameter. LE explores and preserves the relationship between each sample and its nearest neighbors.
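A minimal Python sketch of building the weight matrix of Equation (6) is given below; the dense pairwise-distance computation and the final symmetrization of the graph are illustrative choices (the text does not prescribe them), and the defaults $k = 19$ and $t = 1$ are placeholders.

```python
import numpy as np

def le_weight_matrix(X, k=19, t=1.0):
    """Heat-kernel weight matrix M of Eq. (6) for a sample matrix X of shape (n, D)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared distances
    M = np.zeros_like(d2)
    for i in range(len(X)):
        # k nearest neighbors by Euclidean distance, excluding the point itself
        idx = np.argsort(d2[i])[1:k + 1]
        M[i, idx] = np.exp(-d2[i, idx] / t)
    return np.maximum(M, M.T)  # symmetrize the adjacency graph
```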

2.4. Convolutional Autoencoder

The convolutional autoencoder adopts convolutional layers instead of fully connected layers; its principle is otherwise the same as that of the autoencoder [32]. Figure 1 shows the structure of the 2D convolutional autoencoder, which comprises an encoder and a decoder. The encoder encodes the input data and maps the features to the hidden-layer space, and the decoder then decodes the hidden-layer features (the reconstruction process) to obtain reconstructed samples of the input [33]. For an input $X \in \mathbb{R}^{s_1 \times s_1 \times d_1}$, the encoder is defined as
$$h = s(\mathrm{conv2}(X, \theta)), \quad h \in \mathbb{R}^{s_2 \times s_2 \times d_2},$$
where $\mathrm{conv2}(\cdot)$ represents the 2D convolution and $\theta$ denotes the learning parameters of the encoder. $h$ is the output of the hidden layer of the 2D convolutional autoencoder and $s(\cdot)$ is the activation function. Based on $h$, the decoder is defined as
$$X' = s(\mathrm{dconv2}(h, \theta')), \quad X' \in \mathbb{R}^{s_1 \times s_1 \times d_1},$$
where $\mathrm{dconv2}(\cdot)$ represents the 2D deconvolution and $\theta'$ denotes the learning parameters of the decoder. $X'$ stands for the output of the reconstruction layer and has the same shape as the input data $X$. The cost function can be defined as
$$L(X; \theta, \theta') = \| X - X' \|^2.$$
Compared with the traditional autoencoder, the convolutional autoencoder is more effective at extracting spatial features from images [34].
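For concreteness, a minimal PyTorch sketch of a 2D convolutional autoencoder in the sense of Equations (7)-(9) follows; the framework, the class name, and the kernel-size parameter are illustrative assumptions, and the Softplus activation anticipates the $s(x) = \log(1 + e^x)$ used by DFCEN in Section 3.3.

```python
import torch
import torch.nn as nn

class Conv2dAE(nn.Module):
    """Minimal 2D convolutional autoencoder: one conv encoder, one deconv decoder."""
    def __init__(self, d1, d2, k=3):
        super().__init__()
        self.act = nn.Softplus()                              # s(x) = log(1 + e^x)
        self.enc = nn.Conv2d(d1, d2, kernel_size=k)           # conv2(X, theta)
        self.dec = nn.ConvTranspose2d(d2, d1, kernel_size=k)  # dconv2(h, theta')

    def forward(self, x):
        h = self.act(self.enc(x))        # hidden feature maps, Eq. (7)
        return self.act(self.dec(h)), h  # reconstruction X', Eq. (8)

# Cost L(X; theta, theta') = ||X - X'||^2, Eq. (9)
loss_fn = nn.MSELoss()
```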

3. The Proposed Method

In this section, we introduce the proposed method in detail; the flowchart is shown in Figure 2. Owing to changes in atmospheric conditions, occlusion by clouds, changes in lighting, and other environmental disturbances, some noise bands in HSIs increase the difficulty of feature extraction and classification. As an unsupervised network, DFCEN is sensitive to these noisy spectral bands because of the limited training samples and the complex intrinsic features of HSIs. For this reason, a simple band selection based on mutual information is first adopted to identify and remove the noise bands. Then, the relationships among samples are obtained for the specific learning task, which in this paper is based on LLE and LE. Next, training samples suited to DFCEN are generated through data preprocessing. Afterwards, DFCEN learns from the training samples and the relationships among samples. Eventually, the low-dimensional features from DFCEN are classified by classifiers.

3.1. Data Preprocessing

Data preprocessing includes data standardization, data denoising, and data expansion. Data standardization scales the pixel values of each spectral band to 0–1, since it is not appropriate to directly process raw HSI data with large pixel values. Data denoising selects and removes the noisy spectral bands that may disturb feature extraction and classification. MI can evaluate the contribution of each band to classification [8]; owing to its computational simplicity, it is adopted here to identify the bands that contribute little to classification as noise bands. Each band $I_j$ in an HSI is considered a random variable whose probability distribution is estimated as $p(I_j) = \frac{h(I_j)}{m \times n}$, where $h(I_j)$ is the gray-level histogram of the $j$th band with $m \times n$ pixels. The joint probability distribution of any two bands is estimated by $p(I_i, I_j) = \frac{H(I_i, I_j)}{m \times n}$, where $H(I_i, I_j)$ is the joint gray-level histogram of the $i$th and $j$th bands.
Figure 3 shows the MI values of each band in the three datasets. As can be seen, the two curves fluctuate almost identically, so noise bands with low MI can be found and removed in an unsupervised way according to the red dotted line. For raw HSI data $X \in \mathbb{R}^{M \times N \times D_1}$, where $M$ and $N$ are the spatial dimensions and $D_1$ is the original number of spectral bands, the corresponding denoised data can be expressed as $X \in \mathbb{R}^{M \times N \times D_2}$, where $D_2 < D_1$ is the number of bands remaining after removing the noise bands. In practice, we removed only 30 noise bands for the Indian Pines dataset, none for the Pavia University dataset, and 8 for the Salinas dataset. To further verify the validity of removing the noise bands before DFCEN, we take the Indian Pines dataset as an example and compare the classification accuracy of different dimensionality reduction algorithms before and after removing the noise bands. In Table 1, NBS means that the algorithm acts directly on the raw data, while BS means the noise bands are removed before the dimensionality reduction algorithm. Table 1 shows that for the two unsupervised neural-network-based methods, DFCEN and SAE, removing the noise bands is conducive to improving classification accuracy; it also slightly improves the other dimensionality reduction algorithms.
Spatial features have been proven to improve the representation of HSIs and increase interpretation accuracy [35,36]. For each pixel, the neighborhood pixels carry some of the most important spatial information, which is fed to DFCEN in the form of a neighborhood window centered on each pixel. With this in mind, the input size of DFCEN is designed as $s \times s \times D_2$, where $s$ is the size of the neighborhood window and $D_2$ is the number of bands. However, the neighborhood windows of pixels at the image boundary are incomplete. These boundary pixels cannot be ignored, since our goal is to reduce the dimensionality of every pixel in the HSI, and simply filling the neighborhood windows of boundary pixels with 0 is also inappropriate. To handle this problem better, we implement a data expansion strategy based on the Manhattan distance to fill the neighborhood windows of the boundary pixels. Figure 4 shows the process of expanding the data by two layers, where the dark color is the original data and the light color is the filled data. For a pixel $p \in \mathbb{R}^{1 \times D_2}$ in a denoised HSI $x \in \mathbb{R}^{MN \times D_2}$ ($MN$ is the number of pixels), its neighborhood window is a training sample $t \in \mathbb{R}^{s \times s \times D_2}$ that is fed to the proposed DFCEN. As a result, a training sample set $T \in \mathbb{R}^{s \times s \times D_2 \times MN}$ with $MN$ samples can be generated from a denoised HSI $x \in \mathbb{R}^{M \times N \times D_2}$.
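The preprocessing pipeline can be sketched as follows. Reflect padding is used here as a simple stand-in for the Manhattan-distance-based expansion of Figure 4 (both fill boundary windows with nearby real pixel values), and the function name and the per-band min-max standardization are illustrative assumptions.

```python
import numpy as np

def make_training_samples(hsi, s=5):
    """Build the s x s x D2 neighborhood-window samples fed to DFCEN."""
    M, N, D2 = hsi.shape
    r = s // 2
    # Standardize the pixel values of each band to 0~1
    mn = hsi.min(axis=(0, 1), keepdims=True)
    mx = hsi.max(axis=(0, 1), keepdims=True)
    hsi = (hsi - mn) / (mx - mn + 1e-12)
    # Expand the image boundary (reflect padding as a stand-in for Figure 4)
    padded = np.pad(hsi, ((r, r), (r, r), (0, 0)), mode='reflect')
    # One s x s x D2 window per pixel: MN training samples in total
    return np.stack([padded[i:i + s, j:j + s, :]
                     for i in range(M) for j in range(N)])
```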

3.2. Structure of DFCEN

DFCEN is composed of convolutional and deconvolutional layers, with no pooling or fully connected layers. Accordingly, DFCEN can be divided into two parts: a convolutional subnetwork and a deconvolutional subnetwork. In the convolutional subnetwork, the input data are propagated through multiple convolutional layers to a perception layer; in the deconvolutional subnetwork, this perception layer is propagated through multiple deconvolutional layers to an output layer whose size is the same as that of the input layer.
Figure 5 shows the network structure of DFCEN. The red box gives the name and structure of each layer, while the green box gives the names of the learning parameters and the filter sizes. It is worth emphasizing that DFCEN is a symmetric, end-to-end network whose number of layers can be set or changed for specific data or tasks. For the sake of explanation, we take the 7-layer DFCEN shown in Figure 5 as an example and describe its structure in detail below.
In the convolutional subnetwork, a training sample $t \in \mathbb{R}^{s \times s \times D_2}$ is first fed to DFCEN, where $D_2$ is also the number of channels of the input layer. The output of the input layer is sent to the first convolutional layer $C_1$ through $d_1$ filters of size $f_1 \times f_1$. The output of $C_1$ contains $d_1$ feature maps $p_1 \in \mathbb{R}^{s_1 \times s_1 \times d_1}$, which are then transmitted to the second convolutional layer $C_2$ via $d_2$ filters of size $f_2 \times f_2$. Next, $d_2$ feature maps $p_2 \in \mathbb{R}^{s_2 \times s_2 \times d_2}$ are obtained after $C_2$ is activated and are sent to the last convolutional layer $C_3$ through $d$ filters of size $s_2 \times s_2$. The last convolutional layer of the convolutional subnetwork is also the central layer $C_T$ of the whole DFCEN. Eventually, the low-dimensional feature of interest, $p_c \in \mathbb{R}^{1 \times 1 \times d}$, is generated after applying the activation function to $C_T$.
In the deconvolutional subnetwork, the low-dimensional feature $p_c \in \mathbb{R}^{1 \times 1 \times d}$ (the output of the convolutional subnetwork) from $C_T$ is up-sampled layer by layer through multiple deconvolutional layers. At first, $p_c$ is sent to the first deconvolutional layer $DC_1$ with $d_2$ filters of size $s_2 \times s_2$. Then, $d_2$ feature maps $p_4 \in \mathbb{R}^{s_2 \times s_2 \times d_2}$ are obtained after the activation function and transferred to the second deconvolutional layer $DC_2$ through $d_1$ filters of size $f_2 \times f_2$. Next, after activating $DC_2$, $d_1$ feature maps $p_5 \in \mathbb{R}^{s_1 \times s_1 \times d_1}$ are obtained and transferred to the last deconvolutional layer $DC_3$ (which is also the output layer of the whole DFCEN) with $D_2$ filters of size $f_1 \times f_1$. In the end, the output $q \in \mathbb{R}^{s \times s \times D_2}$ of the whole DFCEN, whose size is the same as the input of DFCEN, is generated after $DC_3$ is activated.
In fact, the size and number of the filters (learning parameters) are identical for symmetric layers of the convolutional and deconvolutional subnetworks, and the same rule applies to the number and size of the feature maps of each layer. In particular, the numbers of feature maps per layer satisfy $D_2 > d_1 > d_2 > d$, where $d$ is the target dimensionality of dimensionality reduction and $D_2$ is the dimensionality of the input data. Meanwhile, the sizes of the feature maps per layer satisfy $s > s_1 > s_2 > 1$, where $s$ is the size of the input data; the size of $C_T$ must be 1, since it represents the low-dimensional features of a single pixel. For this reason, the filter between $C_T$ and its preceding layer must have the same size as that layer; in Figure 5, the preceding layer of $C_T$ is $C_2$. In brief, DFCEN is a symmetric fully convolutional network with a central layer of size 1, in which the convolutional subnetwork reduces the dimensionality and size of the data layer by layer, while the deconvolutional subnetwork restores them layer by layer. The network structure therefore makes feature extraction with DFCEN an unsupervised process, as long as the embedding term in the objective function requires no class label information.
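A minimal PyTorch sketch of this 7-layer structure follows, instantiated with the Indian Pines settings of Section 4.2 ($D_2 = 170$, $d_1 = 100$, $d_2 = 50$, $d = 30$, $s = 5$, filters 3-2-2 mirrored); the framework and class name are our own illustrative choices.

```python
import torch
import torch.nn as nn

class DFCEN(nn.Module):
    """Sketch of the 7-layer DFCEN of Figure 5 for 5 x 5 x D2 inputs (channel-first)."""
    def __init__(self, D2=170, d1=100, d2=50, d=30):
        super().__init__()
        self.act = nn.Softplus()                     # s(x) = log(1 + e^x)
        # Convolutional subnetwork: 5x5 -> 3x3 -> 2x2 -> 1x1
        self.c1 = nn.Conv2d(D2, d1, kernel_size=3)   # C1
        self.c2 = nn.Conv2d(d1, d2, kernel_size=2)   # C2
        self.ct = nn.Conv2d(d2, d, kernel_size=2)    # C_T: filter size = size of C2
        # Deconvolutional subnetwork mirrors the encoder: 1x1 -> 2x2 -> 3x3 -> 5x5
        self.dc1 = nn.ConvTranspose2d(d, d2, kernel_size=2)   # DC1
        self.dc2 = nn.ConvTranspose2d(d2, d1, kernel_size=2)  # DC2
        self.dc3 = nn.ConvTranspose2d(d1, D2, kernel_size=3)  # DC3 (output layer)

    def forward(self, t):
        p_c = self.act(self.ct(self.act(self.c2(self.act(self.c1(t))))))
        q = self.act(self.dc3(self.act(self.dc2(self.act(self.dc1(p_c))))))
        return p_c.flatten(1), q   # low-dimensional feature (B, d) and reconstruction
```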

3.3. Objective Function of DFCEN

As discussed in Section 1, DFCEN supports not only unsupervised feature extraction based on data reconstruction but also task-specific learning that is conducive to dimensionality reduction and classification. The objective function of DFCEN consists of two terms: the embedding term for the specific learning task and the reconstruction term. The embedding term can be changed or designed according to the specific concept or task and is dedicated to improving the discriminant ability of the low-dimensional features. As shown in Figure 5, the embedding term constrains the low-dimensional output of the central layer $C_T$, so it acts only on the parameter updates of the convolutional subnetwork. For a training sample set $T \in \mathbb{R}^{s \times s \times D_2 \times MN} = \{t_1, t_2, \dots, t_{MN}\}$ with $t_i \in \mathbb{R}^{s \times s \times D_2}$, the output of $C_T$ in Figure 5 is expressed as follows
$$p_c(t_i, \Theta_d) = s(\mathrm{conv2}(s(\mathrm{conv2}(s(\mathrm{conv2}(t_i, \theta_1)), \theta_2)), \theta_3)),$$
where $\Theta_d = \{\theta_1, \theta_2, \theta_3\}$ denotes the learning parameters of the convolutional subnetwork, $\mathrm{conv2}(\cdot)$ denotes the 2D convolution, and $s(\cdot)$ is the activation function $s(x) = \log(1 + e^x)$. $p_c(t_i, \Theta_d)$ is also the low-dimensional representation produced by DFCEN.
In order to enhance the separability and discriminability of the low-dimensional features, we explore and maintain the relationships among samples as the specific learning task. In this paper, two classical manifold learning algorithms, LLE and LE, are introduced into the embedding term of DFCEN.

3.3.1. LLE-Based Embedding Term

LLE aims at preserving, in the mapping space, the original reconstruction relationship between each sample and its neighbors, under the assumption that a sample can be reconstructed by a linear combination of its neighborhood samples. The linear reconstruction is described in Equation (2), and the original reconstruction coefficients can be calculated according to Equation (3). For an HSI dataset $x \in \mathbb{R}^{M \times N \times D_2}$, the relationship coefficients form a matrix $W \in \mathbb{R}^{MN \times MN}$ with entries $w_{ij}$. Since $W$ only characterizes the relationship between each sample and its $k$ nearest neighbors, it can also be written as
$$w_{ij} = \begin{cases} \dfrac{\sum_{h \in Q_i} \left[ (x_i - x_j)^T (x_i - x_h) \right]^{-1}}{\sum_{l,s \in Q_i} \left[ (x_i - x_l)^T (x_i - x_s) \right]^{-1}}, & x_j \in Q_i \\ 0, & x_j \notin Q_i. \end{cases}$$
$Q_i$ is the set of the $k$ nearest neighbor samples of $x_i$. The number of selected neighbors $k$ is much smaller than the total number of samples $MN$, namely $k \ll MN$; therefore, the relationship coefficient matrix $W$ is sparse.
Referring to LLE, the embedding term should constrain the low-dimensional representation to maintain the original reconstruction relationship. Hence, for a training sample set $T \in \mathbb{R}^{s \times s \times D_2 \times MN}$, the LLE-based embedding term can be defined as follows
$$L_{\mathrm{ED\_LLE}}(T, \Theta_d) = \min_{\Theta_d} \frac{1}{MN} \sum_{i=1}^{MN} \Big\| p_c(t_i, \Theta_d) - \sum_{j=1}^{MN} w_{ij}\, p_c(t_j, \Theta_d) \Big\|_F^2,$$
where $w_{ij}$ is the original reconstruction coefficient calculated according to Equation (11), which is a constant for the LLE-based embedding term; $p_c(t_i, \Theta_d)$ is the output of $C_T$ in DFCEN; $\Theta_d$ denotes the learning parameters of the convolutional subnetwork; and $MN$ is the number of training samples in $T$. $\| \cdot \|_F^2$ is the squared Frobenius norm, i.e., the sum of the squares of all elements.
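A minimal PyTorch sketch of Equation (12) over a mini-batch follows; the function name and the restriction of $W$ to the batch are illustrative assumptions (the paper defines the weights over all $MN$ samples).

```python
import torch

def lle_embedding_loss(p_c, W):
    """Eq. (12) on a batch: p_c is (B, d) central-layer features,
    W the (B, B) slice of precomputed LLE reconstruction weights."""
    recon = W @ p_c                                # sum_j w_ij * p_c(t_j)
    return ((p_c - recon) ** 2).sum(dim=1).mean()  # mean squared F-norm
```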

3.3.2. LE-Based Embedding Term

LE constructs the relationships among samples from a local perspective and reconstructs the local structure and features in the low-dimensional space. An adjacency graph based on the Euclidean distance is constructed to characterize the relationships among samples; it is also called the weight matrix and is defined in Equation (6). When a sample $x_j$ does not belong to the $k$ nearest neighbors of a sample $x_i$, the weight coefficient $M_{ij}$ between them is 0. In fact, for an HSI dataset $x \in \mathbb{R}^{M \times N \times D_2}$, because $k \ll MN$, the adjacency graph matrix $M$ is also sparse. In practice, LE requires that samples related to each other (points connected in the adjacency graph) remain as close as possible in the low-dimensional space, as formulated in Equation (5).
Referring to LE, for samples that are related in the original space, the embedding term should constrain their low-dimensional representations to be as close as possible. As a result, for a training sample set $T \in \mathbb{R}^{s \times s \times D_2 \times MN}$, the LE-based embedding term can be defined as follows
$$L_{\mathrm{ED\_LE}}(T, \Theta_d) = \min_{\Theta_d} \frac{1}{MN} \sum_{i=1}^{MN} \sum_{j=1}^{MN} \big\| p_c(t_i, \Theta_d) - p_c(t_j, \Theta_d) \big\|_F^2 M_{ij},$$
where $M_{ij}$ is the adjacency graph coefficient in the original space, which is also a constant.
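Analogously, a minimal batch-wise sketch of Equation (13), with the same illustrative caveats as above:

```python
import torch

def le_embedding_loss(p_c, M):
    """Eq. (13) on a batch: pull together central-layer features of samples
    that are adjacent in the original space (M is the batch weight matrix)."""
    d2 = torch.cdist(p_c, p_c) ** 2   # ||p_c(t_i) - p_c(t_j)||^2 for all pairs
    return (d2 * M).sum() / len(p_c)
```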

3.3.3. Reconstruction Term

As shown in Figure 5, the reconstruction term constrains the output of the whole DFCEN, so it acts on all learning parameter updates. It ensures that the low-dimensional features can be restored to the input data. For a training sample set $T \in \mathbb{R}^{s \times s \times D_2 \times MN}$, the output of DFCEN in Figure 5 is expressed as follows
$$q(t_i, \Theta) = s(\mathrm{dconv2}(s(\mathrm{dconv2}(s(\mathrm{dconv2}(p_c(t_i, \Theta_d), \theta_4)), \theta_5)), \theta_6)),$$
where $\Theta = \{\Theta_d, \theta_4, \theta_5, \theta_6\}$ represents all learning parameters of DFCEN and $\{\theta_4, \theta_5, \theta_6\}$ are the parameters of the deconvolutional subnetwork. $\mathrm{dconv2}(\cdot)$ denotes the 2D deconvolution and $s(\cdot)$ is the activation function. $p_c(t_i, \Theta_d)$ is the output of the convolutional subnetwork.
The reconstruction term aims at maintaining the original intrinsic information by restoring the low-dimensional features to the original input data. After the low-dimensional representation $p_c$ is propagated through the deconvolutional layers, the reconstructed data $q$ are obtained, and the reconstruction term minimizes the error between the reconstructed data and the original input. For a training sample set $T \in \mathbb{R}^{s \times s \times D_2 \times MN}$, the reconstruction term can be described as follows
$$L_{\mathrm{RT}}(T, \Theta) = \min_{\Theta} \frac{1}{MN} \sum_{i=1}^{MN} \big\| t_i - q(t_i, \Theta) \big\|_F^2,$$
where $q(t_i, \Theta)$ is the output of DFCEN and $\Theta$ denotes all learning parameters.

3.3.4. Objective Function

The embedding and reconstruction terms have been introduced above. The embedding term constrains the low-dimensional output of the central layer to maintain the original relationships among samples, while the reconstruction term ensures that the low-dimensional features can be reconstructed back to the high-dimensional input data. To balance the effects of these two terms on dimensionality reduction, a trade-off parameter is added to the objective function. As a result, for a training sample set $T \in \mathbb{R}^{s \times s \times D_2 \times MN}$, the objective function of DFCEN can be written as
$$L(T, \Theta) = L_{\mathrm{RT}}(T, \Theta) + \lambda L_{\mathrm{ED}}(T, \Theta_d),$$
where $\lambda$ is an adjustable trade-off parameter, $L_{\mathrm{RT}}(T, \Theta)$ is the reconstruction term, and $L_{\mathrm{ED}}(T, \Theta_d)$ is the embedding term.
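Putting the two terms together, one gradient step on Equation (16) might look as follows in a framework with automatic differentiation, which stands in for the hand-derived gradients of Section 3.4; the function reuses the lle_embedding_loss sketch above (the LE variant is analogous), and all names are illustrative.

```python
import torch

def train_step(model, batch, W, lam, optimizer):
    """One gradient step on L = L_RT + lambda * L_ED (Eq. (16))."""
    p_c, q = model(batch)                                    # DFCEN forward pass
    loss = ((batch - q) ** 2).flatten(1).sum(dim=1).mean()   # L_RT, Eq. (15)
    loss = loss + lam * lle_embedding_loss(p_c, W)           # lambda * L_ED, Eq. (12)
    optimizer.zero_grad()
    loss.backward()                                          # autograd gradients
    optimizer.step()
    return loss.item()
```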

3.4. Learning of DFCEN

The learning of DFCEN optimizes the network parameters $\Theta$ according to the objective function formulated in Equation (16). In this paper, we adopt gradient descent to optimize the learning parameters. The update formula for $\Theta$ is $\Theta = \Theta - \Delta\Theta$, where $\Delta\Theta$ is the partial derivative of the objective function with respect to $\Theta$:
$$\Delta\Theta = \frac{\partial L_{\mathrm{RT}}(T, \Theta)}{\partial \Theta} + \lambda \frac{\partial L_{\mathrm{ED}}(T, \Theta_d)}{\partial \Theta}.$$
In the following, we calculate these two partial derivatives separately. For a training sample $t_i$, the partial derivative of the reconstruction term can be formulated as
$$\frac{\partial L_{\mathrm{RT}}(t_i, \Theta)}{\partial \Theta} = \frac{\partial}{\partial \Theta} \big\| t_i - q(t_i, \Theta) \big\|_F^2 = \frac{\partial}{\partial \Theta} \mathrm{tr}\big( (t_i - q(t_i, \Theta))^T (t_i - q(t_i, \Theta)) \big) = 2 \big( q(t_i, \Theta) - t_i \big) \frac{\partial q(t_i, \Theta)}{\partial \Theta},$$
Here $\frac{\partial q(t_i, \Theta)}{\partial \Theta}$ is the partial derivative of the output layer (the last layer) with respect to all network parameters $\Theta = \{\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, \theta_6\}$. For the 7-layer DFCEN shown in Figure 5, $\{\theta_1, \theta_2, \theta_3\}$ are the parameters of the convolutional subnetwork and $\{\theta_4, \theta_5, \theta_6\}$ those of the deconvolutional subnetwork. For $\{\theta_1, \theta_2, \theta_3\}$, the partial derivative with respect to the $l$th layer parameters $\theta_l$ can be calculated as
$$\frac{\partial q(t_i, \theta_l)}{\partial \theta_l} = \mathrm{rot180}\big( \mathrm{conv2}( p_{l-1}, \mathrm{rot180}( s'(L_l) ) ) \big),$$
where $p_{l-1}$ denotes the feature maps of the $(l-1)$th layer and $L_l$ is the $l$th layer of DFCEN; when $l = 1$, $p_{l-1}$ is the input data $t_i$. The derivation process can be consulted in [37]. $\mathrm{rot180}(\cdot)$ represents a rotation by 180 degrees, $\mathrm{conv2}(\cdot)$ is a 2D convolution, and $s'$ is the derivative of the activation function, $s'(x) = \frac{e^x}{1 + e^x}$. For $\{\theta_4, \theta_5, \theta_6\}$, the partial derivative is calculated as
$$\frac{\partial q(t_i, \theta_l)}{\partial \theta_l} = \mathrm{rot180}\big( \mathrm{dconv2}( p_{l-1}, \mathrm{rot180}( s'(L_l) ) ) \big),$$
where $\mathrm{dconv2}(\cdot)$ is a 2D deconvolution.
The embedding term is only responsible for updating the parameters $\Theta_d = \{\theta_1, \theta_2, \theta_3\}$ of the convolutional subnetwork. For a training sample $t_i$, the partial derivative of the LLE-based embedding term with respect to $\Theta_d$ can be formulated as
$$\frac{\partial L_{\mathrm{ED\_LLE}}(t_i, \Theta_d)}{\partial \Theta_d} = \frac{\partial}{\partial \Theta_d} \Big\| p_c(t_i, \Theta_d) - \sum_{j=1}^{MN} w_{ij}\, p_c(t_j, \Theta_d) \Big\|_F^2 = 2 \Big( p_c(t_i, \Theta_d) - \sum_{j=1}^{MN} w_{ij}\, p_c(t_j, \Theta_d) \Big) \cdot \Big( \frac{\partial p_c(t_i, \Theta_d)}{\partial \Theta_d} - \sum_{j=1}^{MN} w_{ij}\, \frac{\partial p_c(t_j, \Theta_d)}{\partial \Theta_d} \Big).$$
Here $w_{ij}$ is a constant, and $\frac{\partial p_c(t_i, \Theta_d)}{\partial \Theta_d}$ is the partial derivative of the central layer $C_T$ with respect to the parameters $\Theta_d$ of the convolutional subnetwork; it can be expressed in the form of Equation (19). The partial derivative of the LE-based embedding term can be formulated as
$$\frac{\partial L_{\mathrm{ED\_LE}}(t_i, \Theta_d)}{\partial \Theta_d} = \frac{\partial}{\partial \Theta_d} \big\| p_c(t_i, \Theta_d) - p_c(t_j, \Theta_d) \big\|_2^2 M_{ij} = 2 \big( p_c(t_i, \Theta_d) - p_c(t_j, \Theta_d) \big) \Big( \frac{\partial p_c(t_i, \Theta_d)}{\partial \Theta_d} - \frac{\partial p_c(t_j, \Theta_d)}{\partial \Theta_d} \Big) M_{ij},$$
where $M_{ij}$ is also a constant.
To reduce the training time, we use convolutional autoencoders (CAEs) to pretrain the network and obtain good initial parameters. Owing to the symmetry of DFCEN, the parameter structure between layers of the convolutional subnetwork is the same as that between the corresponding layers of the deconvolutional subnetwork, so symmetric layers of the two subnetworks can be initialized with the same parameters. As a result, the 7-layer DFCEN shown in Figure 5 requires only 3 CAEs for pretraining, which saves time. Figure 6 shows the pretraining process: only after the first CAE has been trained can the second be trained, and so on. The parameters in Figure 6, corresponding to those in Figure 5, initialize DFCEN. The activation function of the CAEs is the same as that of DFCEN.
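A sketch of this greedy layer-wise pretraining follows, reusing the Conv2dAE sketch from Section 2.4; the training loop, optimizer, and hyperparameters are illustrative assumptions.

```python
import torch

def pretrain(caes, batches, epochs=10, lr=1e-3):
    """Greedy layer-wise CAE pretraining as in Figure 6: each trained CAE
    initializes one symmetric layer pair of DFCEN."""
    inputs = batches
    for cae in caes:
        opt = torch.optim.Adam(cae.parameters(), lr=lr)
        for _ in range(epochs):
            for x in inputs:
                x_rec, _ = cae(x)
                loss = torch.mean((x - x_rec) ** 2)   # reconstruction cost, Eq. (9)
                opt.zero_grad()
                loss.backward()
                opt.step()
        # hidden features of this CAE become the next CAE's training input
        inputs = [cae(x)[1].detach() for x in inputs]
    return caes

# e.g., for the Indian Pines DFCEN of Section 4.2 (kernel sizes mirror the filters):
# caes = [Conv2dAE(170, 100, k=3), Conv2dAE(100, 50, k=2), Conv2dAE(50, 30, k=2)]
```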

4. Experimental Study

4.1. Description of Data Sets

The first dataset, the Indian Pines dataset, covering the Indian Pines region in northwestern Indiana, USA, was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in 1992. The spatial resolution of this image is 20 m. It has 220 original spectral bands in the 0.4–2.5 μm spectral region, and each band contains 145 × 145 pixels. Owing to noise and water absorption, 20 spectral bands are discarded and the remaining 200 bands are used. This dataset contains background (10,776 pixels) and 16 ground-truth classes (10,249 pixels); the number of pixels per class ranges from 20 to 2455. The color image and the labeled image with 16 classes are shown in Figure 7.
The second dataset, the Pavia University dataset, covers the University of Pavia in northern Italy and was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor. Its spectral range is 0.4–0.82 μm. After removing 12 noise bands from the original 115 spectral bands, 103 bands are employed in this paper. The spatial resolution is 1.3 m and each band has 610 × 340 pixels. This dataset consists of 9 ground-truth classes with 42,776 pixels and background with 164,624 pixels. Figure 8 shows the color image and the labeled image with 9 classes.
The third dataset, the Salinas dataset, covering Salinas Valley, CA, was acquired by the AVIRIS sensor in 1998 with a spatial resolution of 3.7 m. There are 224 original bands spanning 0.4–2.45 μm. Each band has 512 × 217 pixels, including 16 ground-truth classes with 56,975 pixels and background with 54,129 pixels. After removing 20 bands severely affected by noise, the remaining 204 bands are used for the experiments. The color image and the labeled image with 16 classes are shown in Figure 9.

4.2. Experimental Setup

For the sake of clarity, the proposed DFCEN with the LLE-based embedding term is denoted DFCEN_LLE below, while that with the LE-based embedding term is written as DFCEN_LE. The network structure of DFCEN for the three datasets is designed empirically on the basis of the structure described in Section 3.2; DFCEN_LLE and DFCEN_LE share the same network structure for experimental convenience. The following structures target a dimensionality of 30. For the Indian Pines dataset, the network structure is 170–100–50–30–50–100–170 and the filter sizes per layer are 3 × 3–2 × 2–2 × 2–2 × 2–2 × 2–3 × 3. For the Pavia University dataset, the network structure is 103–70–30–70–103 and the filter size in all layers is 3 × 3. For the Salinas dataset, the network structure is 196–110–60–30–60–110–196 and the filter size per layer is also 3 × 3.
To prove its effectiveness, DFCEN is compared with several dimensionality reduction algorithms: LE [11], LLE [11], SAE, spatial-domain local pixel NPE (LPNPE) [38], spatial and spectral regularized local discriminant embedding (SSRLDE) [38], SSMRPE [39], and spatial–spectral local discriminant projection (SSLDP) [40]. The first three are spectral-based methods, while the latter four make use of both spatial and spectral information for dimensionality reduction of HSIs. The raw HSI data are also used for comparison. SAE is a neural-network-based algorithm; its network structures are 170–100–50–30–170 for the Indian Pines dataset, 103–70–30–103 for the Pavia University dataset, and 196–110–60–30–196 for the Salinas dataset. LPNPE [38] minimizes the distance within the spatial local pixel neighborhood. SSRLDE [38] preserves not only the spectral-domain local Euclidean neighborhood class relations but also the spatial-domain local pixel neighborhood structures. SSMRPE [39] shares the same DR concept as LLE. SSLDP [40] designs a weighted within-neighborhood scatter to reveal the similarity of spatial neighbors. Among these, SSRLDE [38] and SSLDP [40] are supervised and require class labels to implement dimensionality reduction, while the others are unsupervised.
For fairness of comparison, the numbers of nearest neighbor samples $k$ for LE and LLE are the same as those of DFCEN_LE and DFCEN_LLE in the following experiments. We also use the optimal parameters reported in the source literature for LPNPE [38], SSRLDE [38], SSMRPE [39], and SSLDP [40]. In all the experiments below, all algorithms, including DFCEN, use the raw data (i.e., not filtered to denoise and smooth pixels). For this reason, the comparative results in this paper differ from those in the source literature, which usually use denoised and smoothed pixels.
Moreover, two classifiers, support vector machine (SVM) and k-nearest neighbor (KNN), are employed to classify the dimensionality reduction results; the number of nearest neighbors in KNN is set to 1. In all experiments, we randomly divide each HSI dataset into training and test sets. It should be emphasized that the training set is used to train both the dimensionality reduction models and the classifiers for the supervised algorithms, whereas it is used only to train the classifiers for the unsupervised algorithms; for the unsupervised methods, all samples in an HSI dataset are utilized to train the dimensionality reduction models. Overall classification accuracy (OA), average classification accuracy (AA), and the kappa coefficient κ are used to evaluate classification performance. To robustly evaluate the results of the different dimensionality reduction algorithms, each experiment is repeated 10 times.

4.3. Parameters Analysis

Both DFCEN_LE and DFCEN_LLE have three parameters that need to be set manually: the nearest neighbor number $k$, the spatial window size $s$, and the trade-off parameter $\lambda$. To analyze the influence of the three parameters on dimensionality reduction, we conduct parameter tuning experiments on the three HSI datasets: 10% of samples in each class are randomly selected as the training set, and the remaining samples form the test set for the two classifiers. Figure 10 shows the classification accuracy of DFCEN with different parameters on the Indian Pines dataset, where the parameter ranges are $k = \{1, 3, \dots, 29\}$, $s = \{1, 3, \dots, 9\}$, and $\lambda = \{0, 0.1, 0.2, \dots, 1\}$, and the fixed values $k = 19$, $s = 5$, $\lambda = 0.4$ are used when analyzing the other two parameters.
From Figure 10, the effects of the three parameters on DFCEN_LE and DFCEN_LLE are almost the same. The classification accuracy increases significantly with $s$ when $k$ or $\lambda$ is fixed, which means that spatial information is important for DR; however, accuracy tends to decline when $s$ grows too large, because a large spatial window may contain heterogeneous samples that interfere with the extraction of spatially homogeneous information. Meanwhile, the classification accuracy increases with $\lambda$ and $k$ when $s$ is fixed. In particular, increasing $\lambda$ from zero leads to a significant improvement in classification, which demonstrates that the specific learning task (embodied in the embedding term) of exploring and preserving the relationships among samples can effectively enhance the discriminability and separability of the low-dimensional features, and that the proposed DFCEN is meaningful. Based on this simple parameter tuning experiment, the three parameters of DFCEN_LE and DFCEN_LLE on the three datasets are set as shown in Table 2.

4.4. Convergence and Discriminant Analysis

To illustrate the convergence of DFCEN, the learning curves of the embedding and reconstruction terms of DFCEN_LLE and DFCEN_LE on the three datasets are presented in Figure 11, in which the parameters have been initialized by CAEs. The x-axis represents the number of learning parameter updates, one per batch of 50 samples. Each curve shows the error values of the two terms of the objective function after each iteration (i.e., after all samples have been learned). Panels (a)–(c) and (g)–(i) concern DFCEN_LLE, for which both terms remain convergent on all three datasets and reach small error values after repeated iterations. Panels (d)–(f) and (j)–(l) concern DFCEN_LE: the error values of both terms on the Indian Pines and Salinas datasets remain consistently convergent as the number of iterations increases, but the error value of the reconstruction term in the early learning stage on the Pavia University dataset does not converge and instead increases. The reason is probably the high trade-off parameter λ (λ = 1, as shown in Table 2) and overfitting during pretraining, where the objective function of the CAEs coincides with the reconstruction term. Nevertheless, both terms on the Pavia University dataset eventually converge to small error values. Accordingly, DFCEN_LLE and DFCEN_LE achieve good convergence, and the resulting low-dimensional features preserve both the original relationships among samples and the original intrinsic information of the HSIs.
To analyze the discriminability and separability of the low-dimensional features from DFCEN, t-SNE is used to visualize the low-dimensional data of DFCEN in comparison with the raw data. The 2-dimensional features obtained by t-SNE on the three datasets are shown in Figure 12, where different colors stand for different classes. Figure 12 shows all class samples for the Indian Pines and Pavia University datasets and a random 80% for the Salinas dataset because of the large number of class samples. As these visualizations show, the dimensionality reduction results from DFCEN are more discriminative than the raw HSI data: the separability among different classes in the low-dimensional space is significantly improved compared with the original space. The reason is that DFCEN not only maintains the original intrinsic information but also preserves the original relationships among samples. In particular, DFCEN_LLE preserves the original reconstruction relationship between each sample and its k nearest neighbors, while DFCEN_LE keeps each sample as close as possible to its k nearest neighbors; since each sample and its neighbors belong to the same class with high probability, the same classes from DFCEN are clustered together and different classes are effectively separated in Figure 12.

4.5. Classification Performance

In this subsection, we examine the classification performance of the dimensionality reduction results on the three datasets. SVM and KNN are both used to classify the results so as to reduce the influence of the classifier. First, to analyze the classification performance under different conditions, we randomly select 5%, 10%, and 15% of samples from each class as the training set and test on the remaining samples. The training and test sets are applied to all algorithms in the manner described in Section 4.2.
Table 3 shows the overall classification accuracy of the dimensionality reduction results (dim = 30) of the different algorithms on the three datasets, where each OA value is the average of 10 experiments under the same classification conditions. From Table 3, the OA values of all dimensionality reduction algorithms improve as the proportion of training samples increases, since more training data provide more class information for the classifiers and the supervised dimensionality reduction algorithms. The highest OA value under each classification condition is marked in bold.
As can be seen, the spatial–spectral algorithms, LPNPE [38], SSRLDE [38], SSMRPE [39], SSLDP [40], and DFCEN, are superior to the spectral-based algorithms, LE, LLE, and SAE, which indicates that spatial features are beneficial to the dimensionality reduction of HSIs. The neural-network-based methods, SAE and DFCEN, are superior to the traditional dimensionality reduction algorithms, which shows that neural networks are well suited for dimensionality reduction of HSIs. Compared with the other algorithms in this paper, the dimensionality reduction results of DFCEN have the best classification performance on all three datasets under both classifiers. In particular, DFCEN achieves superior classification accuracy even when only 5% of the samples are used to train the classifiers.
Second, to analyze the per-class classification performance of the different algorithms, 10% of samples per class are randomly selected for training and the rest for testing. The individual class classification accuracies, OA, AA, and κ on the three datasets are shown in Table 4, Table 5 and Table 6, with the highest value of each item marked in bold. Figure 13, Figure 14 and Figure 15 show the corresponding classification maps of the different algorithms on the three datasets. From Table 4, Table 5 and Table 6, the supervised algorithms SSRLDE and SSLDP give unsatisfactory classification results on the Indian Pines and Pavia University datasets because no pixel filtering is applied, which indicates that they are very sensitive to noisy pixels. Meanwhile, the two unsupervised algorithms DFCEN_LLE and DFCEN_LE achieve the highest classification accuracy in most classes, as well as the best OA, AA, and κ. Especially for class 9 in Indian Pines, class 3 in Pavia University, and class 15 in Salinas, DFCEN obtains high classification accuracy where the other algorithms perform poorly because these classes are difficult to classify. In terms of OA, DFCEN is approximately 4% better than the second-best algorithm.
Figure 13, Figure 14 and Figure 15 visually show the classification maps of the DR results (dim = 30) of the different algorithms. It can be observed that DFCEN produces markedly more uniform classification regions, because it not only preserves the intrinsic information of HSIs but also explores and maintains the relationships between samples and their nearest neighbors. Especially for classes 3, 9, 10, 12, and 15 of the Indian Pines dataset, classes 6 and 7 of the Pavia University dataset, and classes 8 and 15 of the Salinas dataset (these classes are circled in white in Figure 13, Figure 14 and Figure 15), DFCEN performs much better than the other methods under both classifiers.
Third, to analyze the influence of dimensionality on each algorithm, Figure 16 shows how the OA of the two classifiers on the three datasets changes as the dimensionality ranges from 5 to 50 in steps of 5. The OAs of most algorithms improve as the dimensionality increases and tend to stabilize beyond a certain point: a higher feature dimensionality provides more information for classification, but the benefit saturates as the dimensionality continues to grow. Moreover, in Figure 16, the spatial–spectral DR methods, LPNPE, SSRLDE, SSMRPE, SSLDP, and DFCEN, are generally superior to the spectral-based methods, LE, LLE, and SAE. In particular, DFCEN achieves almost the best classification across the different dimensionalities compared with the other algorithms.
Figure 16 also shows that the classification OAs of LE and LLE are relatively poor, yet DFCEN_LE and DFCEN_LLE perform satisfactorily when the concepts of LE and LLE are introduced into DFCEN. The reasons may be summarized as follows: (1) the fully convolutional network of DFCEN can effectively capture the spatial–spectral information of HSIs through layer-by-layer feature extraction, and (2) the reconstruction term, acting as a regularizer for the embedding term, constrains the low-dimensional features to retain the intrinsic information.

5. Conclusions

In this paper, a novel unsupervised network, DFCEN, was proposed for HSI dimensionality reduction. Unlike existing unsupervised CNN-based methods that focus only on data reconstruction, DFCEN is designed not only to ensure data reconstruction but also to realize the learning of a specific task. In DFCEN, the convolutional subnetwork performs dimensionality reduction and specific task learning, while the deconvolutional subnetwork performs data reconstruction. A novel objective function was proposed, comprising two terms: the embedding term of the specific task and the reconstruction term for data reconstruction. The former enhances the discriminant ability of the low-dimensional features and the latter maintains the original intrinsic information. In this paper, exploring and maintaining the relationships among samples serves as the specific task for improving dimensionality reduction performance, and the dimensionality reduction concepts of LLE and LE are introduced into DFCEN. Experimental results on three hyperspectral datasets demonstrate the superior classification performance of the dimensionality reduction results from DFCEN_LLE and DFCEN_LE.
In future work, different dimensionality reduction concepts and objective functions designed for specific requirements will be applied to DFCEN, and the combination of LE and LLE will be explored. In addition, we will try to apply DFCEN to other areas.

Author Contributions

Conceptualization, N.L. and M.Z.; methodology, N.L.; software, N.L.; validation, N.L., M.Z. and T.W.; formal analysis, N.L.; investigation, N.L.; resources, D.Z.; data curation, J.S.; writing—original draft preparation, N.L.; writing—review and editing, N.L.; visualization, N.L.; supervision, M.G.; project administration, D.Z.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62076204), the Natural Science Foundation of Shaanxi Province (Grant Nos. 2018JQ6003 and 2018JQ6030), and the China Postdoctoral Science Foundation (Grant Nos. 2017M613204 and 2017M623246).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Murphy, R.J.; Monteiro, S.T.; Schneider, S. Evaluating Classification Techniques for Mapping Vertical Geology Using Field-Based Hyperspectral Sensors. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3066–3080.
2. Ryan, J.P.; Davis, C.O.; Tufillaro, N.B.; Kudela, R.M.; Gao, B.C. Application of the hyperspectral imager for the coastal ocean to phytoplankton ecology studies in Monterey Bay, CA, USA. Remote Sens. 2014, 6, 1007–1025.
3. Pi, W.; Du, J.; Liu, H.; Zhu, X. Desertification Glassland Classification and Three-Dimensional Convolution Neural Network Model for Identifying Desert Grassland Landforms with Unmanned Aerial Vehicle Hyperspectral Remote Sensing Images. J. Appl. Spectrosc. 2020, 87, 309–318.
4. Ofner, J.; Kamilli, K.A.; Eitenberger, E.; Friedbacher, G.; Lendl, B.; Held, A.; Lohninger, H. Chemometric analysis of multisensor hyperspectral images of precipitated atmospheric particulate matter. Anal. Chem. 2015, 87, 9413–9420.
5. de la Ossa, M.Á.F.; Amigo, J.M.; García-Ruiz, C. Detection of residues from explosive manipulation by near infrared hyperspectral imaging: A promising forensic tool. Forensic Sci. Int. 2014, 242, 228–235.
6. Guo, X.; Huang, X.; Zhang, L.; Zhang, L. Hyperspectral image noise reduction based on rank-1 tensor decomposition. ISPRS J. Photogramm. Remote Sens. 2013, 83, 50–63.
7. Jia, X.; Kuo, B.C.; Crawford, M.M. Feature Mining for Hyperspectral Image Classification. Proc. IEEE 2013, 101, 676–697.
8. Guo, B.; Gunn, S.R.; Damper, R.I.; Nelson, J.D.B. Band Selection for Hyperspectral Image Classification Using Mutual Information. IEEE Geosci. Remote Sens. Lett. 2006, 3, 522–526.
9. Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63.
10. Li, Y.; Qu, J.; Dong, W.; Zheng, Y. Hyperspectral pansharpening via improved PCA approach and optimal weighted fusion strategy. Neurocomputing 2018, 315, 371–380.
11. Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396.
12. Li, W.; Zhang, L.; Zhang, L.; Du, B. GPU parallel implementation of isometric mapping for hyperspectral classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1532–1536.
13. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images With Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873.
14. He, X.; Niyogi, P. Locality preserving projections. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2004; pp. 153–160.
15. He, X.; Cai, D.; Yan, S.; Zhang, H.J. Neighborhood preserving embedding. In Proceedings of the Tenth IEEE ICCV, Beijing, China, 17–20 October 2005; Volume 2, pp. 1208–1213.
16. Han, T.; Goodenough, D.G. Investigation of Nonlinearity in Hyperspectral Imagery Using Surrogate Data Methods. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2840–2847.
17. Bengio, Y.; Courville, A.C.; Vincent, P. Unsupervised feature learning and deep learning: A review and new perspectives. CoRR 2012.
18. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317.
19. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147.
20. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
21. Han, M.; Cong, R.; Li, X.; Fu, H.; Lei, J. Joint spatial-spectral hyperspectral image classification based on convolutional neural network. Pattern Recognit. Lett. 2020, 130, 38–45.
22. Mou, L.; Ghamisi, P.; Zhu, X.X. Unsupervised spectral–spatial feature learning via deep residual Conv–Deconv network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 391–406.
23. Zhang, M.; Gong, M.; Mao, Y.; Li, J.; Wu, Y. Unsupervised feature extraction in hyperspectral images based on wasserstein generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2669–2688.
24. Zhang, M.; Gong, M.; He, H.; Zhu, S. Symmetric All Convolutional Neural-Network-Based Unsupervised Feature Extraction for Hyperspectral Images Classification. IEEE Trans. Cybern. 2020.
25. Estévez, P.A.; Tesmer, M.; Perez, C.A.; Zurada, J.M. Normalized mutual information feature selection. IEEE Trans. Neural Netw. 2009, 20, 189–201.
26. Chang, C.I.; Kuo, Y.M.; Chen, S.; Liang, C.C.; Ma, K.Y.; Hu, P.F. Self-Mutual Information-Based Band Selection for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020.
27. Pan, Y.; Ge, S.S.; Al Mamun, A. Weighted locally linear embedding for dimension reduction. Pattern Recognit. 2009, 42, 798–811.
28. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
29. Wang, M.; Yu, J.; Niu, L.; Sun, W. Unsupervised feature extraction for hyperspectral images using combined low rank representation and locally linear embedding. In Proceedings of the 2017 IEEE ICASSP, New Orleans, LA, USA, 5–9 March 2017; pp. 1428–1431.
30. Li, B.; Li, Y.R.; Zhang, X.L. A survey on Laplacian eigenmaps based manifold learning methods. Neurocomputing 2019, 335, 336–351.
31. Ma, M.; Deng, T.; Wang, N.; Chen, Y. Semi-supervised rough fuzzy Laplacian Eigenmaps for dimensionality reduction. Int. J. Mach. Learn. Cybern. 2019, 10, 397–411.
32. Seyfioğlu, M.S.; Özbayoğlu, A.M.; Gürbüz, S.Z. Deep convolutional autoencoder for radar-based classification of similar aided and unaided human activities. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 1709–1723.
33. Azarang, A.; Manoochehri, H.E.; Kehtarnavaz, N. Convolutional autoencoder-based multispectral image fusion. IEEE Access 2019, 7, 35673–35683.
34. Palsson, B.; Ulfarsson, M.O.; Sveinsson, J.R. Convolutional Autoencoder for Spectral–Spatial Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 59, 535–549.
35. Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2013, 101, 652–675.
36. Plaza, A.; Martínez, P.; Pérez, R.; Plaza, J. Spatial/spectral endmember extraction by multidimensional morphological operations. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2025–2041.
37. Bouvrie, J. Notes on Convolutional Neural Networks. Available online: http://cogprints.org/5869/ (accessed on 13 February 2021).
38. Zhou, Y.; Peng, J.; Chen, C.P. Dimension reduction using spatial and spectral regularized local discriminant embedding for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1082–1095.
39. Huang, H.; Shi, G.; He, H.; Duan, Y.; Luo, F. Dimensionality reduction of hyperspectral imagery based on spatial-spectral manifold learning. IEEE Trans. Cybern. 2019, 50, 2604–2616.
40. Huang, H.; Duan, Y.; He, H.; Shi, G.; Luo, F. Spatial-spectral local discriminant projection for dimensionality reduction of hyperspectral image. ISPRS J. Photogramm. Remote Sens. 2019, 156, 77–93.
Figure 1. The structure of the 2D convolutional autoencoder.
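For readers who want a concrete starting point, the block below sketches a 2D convolutional autoencoder of the kind shown in Figure 1. It is a minimal illustration in PyTorch; the layer widths, kernel sizes, and activation choices are our assumptions, not the exact architecture used in the paper.

```python
import torch.nn as nn

class ConvAutoencoder2D(nn.Module):
    """Minimal 2D convolutional autoencoder: conv encoder, deconv decoder.

    Layer widths and kernel sizes are illustrative only.
    """
    def __init__(self, in_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)        # low-dimensional feature maps
        return self.decoder(z), z  # reconstruction and embedding
```

Training such a network typically minimizes the mean squared error between the input patch and its reconstruction, which corresponds to the reconstruction term of the objective.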
Figure 2. Flowchart of the proposed method.
Figure 3. MI values of each spectral band with the Ground Truth map and the estimated ground map on three datasets.
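Figure 3 is obtained by scoring every band against a reference map with mutual information (MI). Below is a minimal sketch of such a scoring step, assuming the reference map is an integer label image; the use of scikit-learn's mutual_info_score and the binning scheme are our own choices rather than details fixed by the paper.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def band_mi_scores(cube, label_map, n_bins=64):
    """Mutual information between each spectral band and a reference map.

    cube:      HSI array of shape (H, W, B)
    label_map: integer reference map of shape (H, W) (ground truth or estimate)
    """
    labels = label_map.ravel()
    scores = []
    for b in range(cube.shape[-1]):
        band = cube[..., b].ravel()
        # Discretize the continuous band values before computing MI.
        binned = np.digitize(band, np.histogram_bin_edges(band, bins=n_bins))
        scores.append(mutual_info_score(labels, binned))
    return np.array(scores)
```

Bands whose MI values fall well below the rest of the curve would then be treated as noisy and removed before training.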
Figure 4. Data expansion strategy, illustrated for a neighborhood window of size 5.
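The expansion in Figure 4 replaces each pixel by the spatial patch centered on it. A minimal sketch follows, where reflect-padding at the image borders is an assumption (the paper's exact border handling is not restated here).

```python
import numpy as np

def expand_to_patches(cube, window=5):
    """Extract a (window x window x B) neighborhood patch for every pixel.

    cube: HSI array of shape (H, W, B); returns (H*W, window, window, B).
    """
    r = window // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    H, W, B = cube.shape
    patches = np.empty((H * W, window, window, B), dtype=cube.dtype)
    for i in range(H):
        for j in range(W):
            patches[i * W + j] = padded[i:i + window, j:j + window, :]
    return patches
```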
Figure 5. The structure of the proposed deep fully convolutional embedding network.
Figure 6. Pretraining process. Each dashed box represents a convolutional autoencoder. θ1–θ6 correspond to the parameters in Figure 5.
Figure 7. Indian Pines dataset: (a) the color image, (b) the Ground Truth map.
Figure 8. Pavia University dataset: (a) the color image, (b) the Ground Truth map.
Figure 9. Salinas dataset: (a) the color image, (b) the Ground Truth map.
Figure 10. Classification overall accuracy with respect to different parameters of the deep fully convolutional embedding network (DFCEN) on the Indian Pines dataset with two classifiers.
Figure 11. The learning curves of the embedding and reconstruction terms of DFCEN_LLE and DFCEN_LE on three datasets.
Figure 12. The two-dimensional features obtained by t-SNE from the raw data and the low-dimensional features of DFCEN on three datasets: (a–c) Indian Pines, (d–f) Pavia University, (g–i) Salinas.
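The 2D embeddings in Figure 12 can be reproduced with any t-SNE implementation. Below is a sketch using scikit-learn, where the perplexity and PCA initialization are our assumptions rather than settings reported in the paper.

```python
from sklearn.manifold import TSNE

def tsne_2d(features, perplexity=30, seed=0):
    """Project (N, D) features (raw spectra or DFCEN outputs) to 2D for plotting."""
    return TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=seed).fit_transform(features)
```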
Figure 13. Classification maps of different methods with two classifiers on the Pavia University dataset (dim = 30): (a–j) KNN, (k–t) SVM; (i,j) and (s,t) are the classification results of DFCEN.
Figure 14. Classification maps of different methods with two classifiers on the Salinas dataset (dim = 30): (a–j) KNN, (k–t) SVM; (i,j) and (s,t) are the classification results of the proposed DFCEN.
Figure 15. Classification maps of different methods with two classifiers on the Indian Pines dataset (dim = 30): (a–j) k-nearest neighbor (KNN), (k–t) support vector machine (SVM); (i,j) and (s,t) are the classification results of DFCEN.
Figure 16. Classification overall accuracy versus reduced dimensionality (dim = 5–50) on three datasets with SVM and KNN classifiers.
Table 1. Classification accuracy of different dimensionality reduction (DR) algorithms with and without band selection on the Indian Pines dataset (NBS: no band selection; BS: band selection).

| Classifier | Bands | RAW | LE | LLE | SAE | LPNPE | SSRLDE | SSMRPE | SSLDP | DFCEN_LE | DFCEN_LLE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SVM | NBS | 76.1 | 78.0 | 61.3 | 75.0 | 86.2 | 83.1 | 78.9 | 74.1 | 88.4 | 89.9 |
| SVM | BS | 83.1 | 78.1 | 60.5 | 81.6 | 85.3 | 84.6 | 81.1 | 75.7 | 90.3 | 91.7 |
| KNN | NBS | 68.7 | 77.5 | 57.6 | 61.4 | 85.4 | 81.6 | 77.7 | 72.5 | 85.0 | 87.5 |
| KNN | BS | 72.4 | 77.2 | 68.8 | 70.5 | 80.5 | 78.9 | 74.7 | 67.1 | 86.9 | 89.3 |
Table 2. The three parameter settings (λ, s, k) of DFCEN_LLE and DFCEN_LE on three datasets.

| Dataset | DFCEN_LLE λ | DFCEN_LLE s | DFCEN_LLE k | DFCEN_LE λ | DFCEN_LE s | DFCEN_LE k |
|---|---|---|---|---|---|---|
| Indian Pines | 0.5 | 5 | 20 | 0.3 | 5 | 15 |
| Pavia U | 0.5 | 5 | 90 | 1 | 5 | 400 |
| Salinas | 0.3 | 7 | 120 | 0.5 | 7 | 600 |
Table 3. Classification accuracy of the dimensionality reduction results (dim = 30) of different algorithms using SVM and KNN classifiers with different proportions of training samples on three datasets.

| Dataset | Training | Classifier | RAW | LE | LLE | SAE | LPNPE | SSRLDE | SSMRPE | SSLDP | DFCEN_LE | DFCEN_LLE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Indian Pines | 5% | SVM | 75.40 | 74.68 | 59.29 | 77.81 | 82.47 | 78.26 | 73.78 | 70.58 | 84.55 | 86.30 |
| | | KNN | 65.00 | 73.48 | 56.08 | 66.59 | 81.71 | 80.45 | 74.97 | 70.41 | 81.01 | 81.85 |
| | 10% | SVM | 76.06 | 77.99 | 61.26 | 81.64 | 86.24 | 83.11 | 78.93 | 74.07 | 90.25 | 91.18 |
| | | KNN | 68.70 | 77.48 | 57.85 | 70.52 | 85.39 | 81.56 | 77.66 | 72.45 | 86.74 | 88.52 |
| | 15% | SVM | 83.90 | 78.88 | 62.03 | 83.58 | 88.13 | 85.81 | 81.48 | 75.01 | 92.60 | 93.28 |
| | | KNN | 70.63 | 79.04 | 58.91 | 72.25 | 87.11 | 82.62 | 79.66 | 72.96 | 89.89 | 91.81 |
| Pavia U | 5% | SVM | 93.56 | 80.20 | 89.33 | 92.72 | 89.63 | 89.10 | 88.05 | 78.55 | 96.09 | 96.32 |
| | | KNN | 84.96 | 73.95 | 81.22 | 82.88 | 91.69 | 90.35 | 86.05 | 77.74 | 94.00 | 93.45 |
| | 10% | SVM | 94.49 | 81.09 | 90.40 | 93.49 | 91.13 | 90.57 | 89.70 | 80.03 | 97.25 | 97.05 |
| | | KNN | 86.63 | 74.60 | 82.49 | 84.02 | 92.41 | 91.43 | 87.11 | 77.95 | 95.73 | 95.37 |
| | 15% | SVM | 94.82 | 81.31 | 90.99 | 93.80 | 92.11 | 91.37 | 90.69 | 80.81 | 97.76 | 97.57 |
| | | KNN | 87.37 | 74.84 | 83.41 | 84.72 | 92.84 | 92.01 | 87.66 | 78.86 | 96.39 | 96.26 |
| Salinas | 5% | SVM | 93.37 | 85.85 | 90.14 | 92.32 | 92.82 | 91.82 | 93.51 | 92.54 | 96.12 | 96.87 |
| | | KNN | 86.93 | 81.95 | 86.01 | 88.15 | 94.13 | 91.74 | 90.96 | 93.58 | 95.53 | 97.11 |
| | 10% | SVM | 94.04 | 86.23 | 90.82 | 93.00 | 93.88 | 93.23 | 94.12 | 93.01 | 96.82 | 97.64 |
| | | KNN | 88.13 | 82.84 | 86.76 | 89.18 | 94.51 | 92.14 | 91.85 | 93.96 | 96.84 | 98.29 |
| | 15% | SVM | 94.58 | 86.48 | 91.11 | 93.20 | 94.42 | 93.94 | 94.46 | 93.24 | 97.09 | 98.02 |
| | | KNN | 88.66 | 83.36 | 87.37 | 89.74 | 94.85 | 92.41 | 92.00 | 94.22 | 97.53 | 98.69 |
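The protocol behind Table 3 (classify the reduced features with SVM and KNN at a fixed training proportion, then report accuracy) is easy to reproduce in outline. Below is a sketch with scikit-learn; the RBF kernel and k = 5 neighbors are assumptions, since the classifier hyperparameters are not restated here.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def evaluate_dr_features(X, y, train_fraction=0.10, seed=0):
    """Overall accuracy of SVM and KNN on reduced features X with labels y."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_fraction, stratify=y, random_state=seed)
    results = {}
    for name, clf in [("SVM", SVC(kernel="rbf")),
                      ("KNN", KNeighborsClassifier(n_neighbors=5))]:
        results[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    return results
```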
Table 4. Classification accuracy of each class (dim = 30) for the Indian Pines dataset via SVM and KNN classifiers.

| Class | Classifier | RAW | LE | LLE | SAE | LPNPE | SSRLDE | SSMRPE | SSLDP | DFCEN_LE | DFCEN_LLE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | SVM | 19.5 | 19.5 | 34.1 | 75.6 | 68.3 | 90.0 | 58.5 | 62.5 | 56.1 | 85.4 |
| C1 | KNN | 43.9 | 48.8 | 22.0 | 29.3 | 80.5 | 70.0 | 53.7 | 62.5 | 46.3 | 73.2 |
| C2 | SVM | 77.7 | 72.8 | 40.5 | 80.4 | 86.4 | 78.1 | 74.6 | 76.0 | 89.6 | 85.9 |
| C2 | KNN | 56.0 | 70.6 | 43.3 | 66.5 | 80.2 | 75.2 | 69.3 | 69.9 | 81.2 | 83.1 |
| C3 | SVM | 68.3 | 43.8 | 9.9 | 67.3 | 80.1 | 69.3 | 62.8 | 63.5 | 90.2 | 87.3 |
| C3 | KNN | 53.7 | 62.1 | 33.1 | 52.3 | 76.0 | 67.2 | 62.5 | 53.7 | 76.2 | 79.1 |
| C4 | SVM | 56.8 | 21.1 | 24.9 | 62.0 | 73.2 | 79.3 | 57.7 | 45.5 | 77.9 | 78.9 |
| C4 | KNN | 41.3 | 30.0 | 35.2 | 41.8 | 73.7 | 62.0 | 64.3 | 54.9 | 52.6 | 67.6 |
| C5 | SVM | 90.3 | 77.2 | 73.1 | 88.7 | 93.8 | 94.3 | 89.7 | 86.4 | 97.0 | 98.2 |
| C5 | KNN | 79.1 | 81.6 | 72.4 | 77.2 | 94.9 | 90.8 | 90.6 | 79.5 | 94.9 | 96.1 |
| C6 | SVM | 93.6 | 98.0 | 95.9 | 93.6 | 98.2 | 93.9 | 94.5 | 94.1 | 96.8 | 99.1 |
| C6 | KNN | 93.8 | 91.5 | 80.8 | 93.3 | 97.9 | 96.5 | 94.5 | 89.8 | 98.9 | 98.3 |
| C7 | SVM | 88.0 | 92.0 | 68.0 | 64.0 | 100 | 90.9 | 84.0 | 72.7 | 88.0 | 96.0 |
| C7 | KNN | 88.0 | 92.0 | 44.0 | 80.0 | 92.0 | 81.8 | 92.0 | 86.4 | 88.0 | 100 |
| C8 | SVM | 97.9 | 97.7 | 90.2 | 98.4 | 99.8 | 98.6 | 99.3 | 99.3 | 98.1 | 99.3 |
| C8 | KNN | 94.0 | 93.0 | 89.5 | 95.1 | 100 | 99.3 | 97.0 | 99.8 | 99.3 | 100 |
| C9 | SVM | 5.6 | 11.1 | 0 | 50.0 | 88.9 | 85.7 | 44.4 | 21.4 | 100 | 83.3 |
| C9 | KNN | 16.7 | 22.2 | 44.4 | 33.3 | 66.7 | 100 | 38.9 | 35.7 | 100 | 100 |
| C10 | SVM | 71.3 | 72.2 | 25.5 | 71.5 | 81.9 | 74.9 | 74.6 | 58.7 | 85.4 | 91.0 |
| C10 | KNN | 61.6 | 74.6 | 40.8 | 58.9 | 81.8 | 78.5 | 73.6 | 58.1 | 90.1 | 92.1 |
| C11 | SVM | 83.9 | 86.3 | 87.5 | 85.8 | 83.0 | 82.1 | 78.1 | 78.2 | 87.5 | 89.7 |
| C11 | KNN | 71.5 | 81.8 | 60.3 | 72.7 | 85.8 | 86.9 | 79.6 | 83.8 | 88.0 | 87.9 |
| C12 | SVM | 71.9 | 56.6 | 21.5 | 70.6 | 83.5 | 81.1 | 59.9 | 67.8 | 86.3 | 88.8 |
| C12 | KNN | 40.8 | 50.6 | 27.2 | 57.7 | 85.2 | 70.0 | 63.9 | 69.9 | 70.4 | 73.2 |
| C13 | SVM | 93.5 | 85.3 | 90.2 | 96.7 | 99.5 | 95.1 | 96.7 | 96.7 | 99.5 | 100 |
| C13 | KNN | 95.1 | 96.2 | 85.3 | 88.0 | 98.9 | 96.2 | 96.2 | 97.8 | 98.9 | 98.9 |
| C14 | SVM | 95.7 | 94.5 | 94.4 | 89.6 | 95.0 | 92.9 | 93.3 | 96.1 | 96.9 | 96.1 |
| C14 | KNN | 85.4 | 92.0 | 90.9 | 86.9 | 94.3 | 91.3 | 90.9 | 95.3 | 95.9 | 97.1 |
| C15 | SVM | 61.1 | 78.4 | 15.9 | 53.3 | 78.1 | 69.2 | 56.8 | 43.8 | 83.0 | 83.0 |
| C15 | KNN | 36.6 | 69.7 | 33.7 | 34.0 | 74.1 | 67.1 | 55.9 | 36.0 | 70.9 | 83.6 |
| C16 | SVM | 83.3 | 97.6 | 85.7 | 81.0 | 86.9 | 91.7 | 85.7 | 84.5 | 86.9 | 97.6 |
| C16 | KNN | 85.7 | 98.8 | 88.1 | 83.3 | 88.1 | 92.9 | 90.5 | 84.5 | 94.0 | 92.9 |
| OA | SVM | 80.5 | 77.7 | 61.3 | 81.3 | 87.0 | 83.1 | 78.6 | 77.2 | 90.3 | 91.1 |
| OA | KNN | 67.5 | 77.2 | 58.0 | 70.5 | 86.3 | 82.7 | 78.1 | 76.2 | 86.5 | 88.5 |
| AA | SVM | 72.4 | 69.0 | 53.6 | 76.8 | 87.3 | 85.4 | 75.7 | 71.7 | 88.7 | 91.2 |
| AA | KNN | 65.2 | 72.2 | 55.7 | 65.7 | 85.6 | 82.9 | 75.8 | 72.3 | 84.1 | 88.9 |
| κ | SVM | 78.5 | 74.4 | 54.3 | 78.6 | 85.2 | 80.8 | 75.5 | 73.8 | 88.9 | 89.9 |
| κ | KNN | 63.7 | 74.0 | 52.0 | 66.3 | 84.4 | 80.3 | 75.0 | 72.5 | 84.6 | 86.9 |
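The summary rows of Tables 4–6 follow standard definitions: OA (overall accuracy) is the fraction of correctly classified samples, AA (average accuracy) is the mean of the per-class accuracies, and κ is Cohen's kappa coefficient. A minimal sketch computing all three from the confusion matrix:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def oa_aa_kappa(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred).astype(float)
    n = cm.sum()
    oa = np.trace(cm) / n                           # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))      # mean of per-class accuracies
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2   # expected chance agreement
    kappa = (oa - pe) / (1 - pe)                    # Cohen's kappa
    return oa, aa, kappa
```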
Table 5. Classification accuracy of each class (dim = 30) for the Pavia University dataset via SVM and KNN classifiers.

| Class | Classifier | RAW | LE | LLE | SAE | LPNPE | SSRLDE | SSMRPE | SSLDP | DFCEN_LE | DFCEN_LLE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | SVM | 94.9 | 84.8 | 90.7 | 93.9 | 91.5 | 90.1 | 91.3 | 79.5 | 97.5 | 97.5 |
| C1 | KNN | 87.4 | 80.1 | 80.2 | 85.4 | 91.7 | 89.5 | 84.5 | 78.4 | 93.9 | 94.5 |
| C2 | SVM | 98.4 | 97.0 | 97.2 | 97.6 | 96.9 | 96.1 | 96.4 | 93.1 | 99.2 | 99.0 |
| C2 | KNN | 94.4 | 83.4 | 94.7 | 92.1 | 97.4 | 97.7 | 95.9 | 94.5 | 99.6 | 99.6 |
| C3 | SVM | 80.7 | 31.1 | 71.7 | 75.9 | 71.6 | 71.9 | 70.2 | 56.4 | 92.3 | 90.8 |
| C3 | KNN | 65.2 | 40.4 | 56.5 | 60.5 | 78.3 | 78.1 | 63.5 | 59.4 | 87.8 | 86.7 |
| C4 | SVM | 95.3 | 77.9 | 91.1 | 93.1 | 92.2 | 92.8 | 90.4 | 69.1 | 98.5 | 97.1 |
| C4 | KNN | 84.0 | 74.8 | 74.7 | 84.0 | 92.5 | 87.2 | 85.6 | 64.9 | 92.7 | 92.5 |
| C5 | SVM | 99.7 | 98.6 | 99.8 | 99.3 | 99.8 | 99.9 | 99.8 | 99.8 | 100 | 100 |
| C5 | KNN | 98.8 | 99.1 | 99.5 | 99.5 | 99.6 | 99.8 | 99.8 | 99.8 | 100 | 99.9 |
| C6 | SVM | 87.3 | 31.2 | 77.3 | 84.0 | 85.6 | 83.0 | 81.0 | 73.2 | 93.0 | 93.6 |
| C6 | KNN | 66.1 | 46.0 | 63.7 | 59.7 | 88.0 | 80.7 | 77.6 | 72.6 | 87.7 | 86.3 |
| C7 | SVM | 87.5 | 70.7 | 72.5 | 82.6 | 72.3 | 71.4 | 67.3 | 44.9 | 90.6 | 87.5 |
| C7 | KNN | 81.5 | 58.4 | 69.4 | 80.7 | 88.6 | 88.1 | 74.7 | 51.5 | 94.0 | 92.3 |
| C8 | SVM | 88.1 | 86.0 | 87.2 | 89.4 | 79.2 | 79.4 | 77.5 | 55.8 | 94.0 | 96.1 |
| C8 | KNN | 81.9 | 68.2 | 71.6 | 81.9 | 80.2 | 85.1 | 72.5 | 54.7 | 93.5 | 92.4 |
| C9 | SVM | 99.9 | 99.6 | 99.6 | 99.8 | 99.8 | 99.8 | 99.9 | 74.9 | 99.9 | 99.9 |
| C9 | KNN | 99.6 | 99.6 | 99.8 | 100 | 100 | 99.9 | 98.1 | 89.4 | 99.8 | 99.3 |
| OA | SVM | 94.2 | 81.1 | 90.7 | 93.0 | 91.0 | 90.2 | 89.8 | 80.2 | 97.1 | 97.0 |
| OA | KNN | 86.3 | 74.5 | 83.0 | 84.3 | 92.5 | 91.4 | 87.1 | 80.9 | 95.6 | 95.2 |
| AA | SVM | 92.4 | 75.2 | 87.5 | 90.6 | 87.7 | 87.2 | 86.0 | 71.8 | 96.1 | 95.7 |
| AA | KNN | 84.3 | 72.2 | 78.9 | 82.6 | 90.7 | 89.6 | 83.6 | 73.9 | 94.3 | 93.7 |
| κ | SVM | 92.4 | 74.0 | 87.5 | 90.6 | 88.0 | 87.0 | 86.3 | 73.5 | 96.2 | 96.1 |
| κ | KNN | 82.0 | 66.1 | 77.1 | 79.0 | 90.0 | 88.6 | 82.8 | 74.3 | 94.1 | 93.7 |
Table 6. Classification accuracy of each class (dim = 30) for the Salinas dataset via SVM and KNN classifiers.

| Class | Classifier | RAW | LE | LLE | SAE | LPNPE | SSRLDE | SSMRPE | SSLDP | DFCEN_LE | DFCEN_LLE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | SVM | 99.8 | 97.5 | 98.6 | 99.0 | 99.9 | 99.4 | 99.4 | 99.9 | 100 | 100 |
| C1 | KNN | 98.3 | 97.1 | 98.4 | 98.6 | 99.9 | 99.2 | 99.5 | 99.9 | 99.5 | 100 |
| C2 | SVM | 99.9 | 98.8 | 99.2 | 99.8 | 99.9 | 99.8 | 99.9 | 99.9 | 100 | 100 |
| C2 | KNN | 99.7 | 98.3 | 99.5 | 99.7 | 100 | 99.8 | 100 | 100 | 99.9 | 99.9 |
| C3 | SVM | 99.9 | 96.9 | 97.8 | 99.6 | 99.7 | 99.2 | 99.7 | 99.8 | 99.7 | 99.7 |
| C3 | KNN | 98.8 | 95.7 | 85.2 | 99.0 | 99.9 | 99.7 | 99.8 | 100 | 99.9 | 99.7 |
| C4 | SVM | 99.4 | 98.6 | 99.4 | 99.5 | 97.4 | 98.7 | 99.8 | 99.2 | 100 | 100 |
| C4 | KNN | 99.0 | 97.5 | 98.3 | 99.5 | 99.2 | 99.3 | 99.9 | 99.8 | 99.8 | 99.5 |
| C5 | SVM | 99.2 | 96.6 | 98.7 | 98.2 | 98.7 | 99.3 | 99.2 | 98.8 | 100 | 99.9 |
| C5 | KNN | 98.5 | 97.3 | 95.9 | 98.0 | 99.1 | 99.2 | 99.5 | 98.5 | 99.4 | 100 |
| C6 | SVM | 99.8 | 99.5 | 100 | 99.8 | 99.9 | 100 | 100 | 99.9 | 100 | 100 |
| C6 | KNN | 99.8 | 99.2 | 99.9 | 99.7 | 99.9 | 100 | 99.9 | 99.9 | 100 | 100 |
| C7 | SVM | 99.8 | 99.3 | 99.8 | 99.9 | 99.9 | 99.8 | 100 | 99.9 | 99.9 | 100 |
| C7 | KNN | 99.6 | 98.0 | 99.9 | 99.3 | 99.9 | 99.9 | 100 | 99.9 | 100 | 100 |
| C8 | SVM | 90.3 | 81.1 | 86.2 | 89.7 | 90.9 | 87.5 | 88.4 | 88.7 | 93.1 | 95.0 |
| C8 | KNN | 75.1 | 66.3 | 72.0 | 76.2 | 87.7 | 82.1 | 80.9 | 88.7 | 92.6 | 94.3 |
| C9 | SVM | 99.9 | 98.6 | 99.8 | 100 | 99.6 | 99.1 | 99.7 | 100 | 99.9 | 99.8 |
| C9 | KNN | 99.4 | 98.3 | 99.2 | 99.5 | 99.9 | 99.8 | 99.9 | 100 | 99.9 | 99.8 |
| C10 | SVM | 96.9 | 86.8 | 90.7 | 94.7 | 98.3 | 97.7 | 98.8 | 97.9 | 99.4 | 99.3 |
| C10 | KNN | 90.6 | 81.6 | 89.6 | 90.9 | 98.3 | 97.1 | 98.0 | 97.6 | 98.5 | 99.2 |
| C11 | SVM | 98.9 | 87.2 | 96.1 | 96.0 | 98.9 | 98.4 | 99.7 | 99.4 | 98.1 | 99.9 |
| C11 | KNN | 94.9 | 87.3 | 91.1 | 97.5 | 97.8 | 99.6 | 99.8 | 99.5 | 100 | 100 |
| C12 | SVM | 99.3 | 98.1 | 99.2 | 99.9 | 99.6 | 98.6 | 99.9 | 100 | 100 | 100 |
| C12 | KNN | 99.3 | 95.2 | 97.1 | 99.9 | 100 | 99.9 | 100 | 100 | 100 | 100 |
| C13 | SVM | 97.9 | 97.5 | 98.4 | 99.0 | 95.8 | 98.7 | 99.6 | 99.5 | 100 | 100 |
| C13 | KNN | 97.6 | 96.1 | 97.5 | 96.1 | 99.2 | 98.8 | 99.0 | 99.5 | 100 | 100 |
| C14 | SVM | 97.0 | 91.4 | 92.7 | 95.1 | 96.1 | 96.4 | 97.6 | 97.0 | 99.7 | 99.9 |
| C14 | KNN | 93.8 | 91.3 | 94.1 | 95.6 | 98.2 | 96.7 | 98.4 | 96.7 | 99.6 | 99.3 |
| C15 | SVM | 73.5 | 44.1 | 61.6 | 63.9 | 73.2 | 74.8 | 76.4 | 67.3 | 85.3 | 91.5 |
| C15 | KNN | 60.5 | 47.4 | 60.3 | 64.1 | 81.4 | 73.0 | 68.6 | 77.8 | 87.7 | 95.3 |
| C16 | SVM | 98.8 | 92.7 | 99.2 | 98.5 | 99.0 | 99.0 | 99.6 | 98.6 | 98.8 | 100 |
| C16 | KNN | 98.2 | 91.5 | 99.0 | 96.9 | 99.8 | 99.5 | 99.5 | 98.6 | 99.4 | 99.9 |
| OA | SVM | 93.7 | 86.2 | 90.8 | 92.2 | 94.0 | 93.4 | 94.1 | 92.9 | 96.5 | 97.7 |
| OA | KNN | 87.9 | 82.9 | 86.7 | 89.0 | 94.7 | 92.2 | 91.6 | 94.3 | 96.6 | 98.1 |
| AA | SVM | 96.9 | 91.5 | 94.8 | 95.8 | 96.7 | 96.6 | 97.4 | 96.6 | 98.4 | 99.1 |
| AA | KNN | 93.9 | 89.9 | 92.3 | 94.4 | 97.5 | 96.5 | 96.4 | 97.3 | 98.5 | 99.2 |
| κ | SVM | 93.3 | 84.6 | 89.7 | 91.4 | 93.3 | 92.6 | 93.5 | 92.1 | 96.1 | 97.5 |
| κ | KNN | 86.9 | 80.9 | 85.2 | 87.8 | 94.0 | 91.3 | 90.6 | 93.6 | 96.2 | 97.8 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
