Article

Dimensionality Reduction by Similarity Distance-Based Hypergraph Embedding

1 Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
2 Science and Technology on Integrated Information System Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
3 China Academy of Information and Communications Technology, Beijing 100191, China
* Author to whom correspondence should be addressed.
Atmosphere 2022, 13(9), 1449; https://doi.org/10.3390/atmos13091449
Submission received: 30 June 2022 / Revised: 30 August 2022 / Accepted: 2 September 2022 / Published: 7 September 2022
(This article belongs to the Special Issue Climate Modeling and Dynamics)

Abstract

Dimensionality reduction (DR) is an essential pre-processing step for hyperspectral image processing and analysis. However, the complex relationships among several sample clusters, which reveal more intrinsic information about samples but cannot be reflected by a simple graph or Euclidean distance, deserve attention. For this purpose, we propose a novel similarity distance-based hypergraph embedding method (SDHE) for hyperspectral image DR. Unlike conventional graph embedding-based methods that only consider the affinity between two samples, SDHE takes advantage of hypergraph embedding to describe complex sample relationships in high order. In addition, we propose a novel similarity distance instead of Euclidean distance to measure the affinity between samples, because the similarity distance not only discovers complicated geometrical structure information but also makes use of local distribution information. Finally, based on the similarity distance, SDHE aims to find the optimal projection that preserves the local distribution information of sample sets in a low-dimensional subspace. Experimental results on three hyperspectral image data sets demonstrate that SDHE performs more effectively than other state-of-the-art DR methods, improving classification accuracy by at least 2% on average.

1. Introduction

Hyperspectral remote sensing images play a significant role in earth observation and climate models. Each collected pixel is a high-dimensional sample that consists of a broad range of electromagnetic spectral band information [1,2]. Nevertheless, the high correlation between adjacent bands not only leads to information redundancy but also incurs tremendous time and space complexity, and the high dimensionality also makes hyperspectral image analysis a challenging task as a consequence of the Hughes phenomenon [3]. As Chang et al. proposed in [4], up to 94% of the electromagnetic spectral band information can be redundant, on the premise that adequate valuable information can still be extracted for machine learning. In view of the aforementioned issues, hyperspectral data dimensionality reduction (DR) turns out to be a crucial part of data processing [5,6], usually by projecting the original high-dimensional data into a low-dimensional space while maintaining as much valuable information as possible.
Supervised DR methods seek to increase the between-class separability and decrease the within-class divergence; examples include linear discriminant analysis (LDA) [7], nonparametric weighted feature extraction (NWFE) [8], and local Fisher discriminant analysis (LFDA) [9]. LDA maintains global discriminant information according to the available labels and is proven to work well when samples from the same class follow a Gaussian distribution. As an extension of LDA, LFDA was proposed to eliminate the limitations of LDA, namely that the reduced dimensionality must be less than the number of sample classes and that local structural information is ignored.
However, in many practical applications, labeling samples exactly is labor intensive, computationally expensive, and time-consuming due to the limitations of experimental conditions, especially for hyperspectral remote sensing images [10]. Consequently, much research focuses on unsupervised cases. Locality preserving projection (LPP) [11] and principal component analysis (PCA) [12] are representative unsupervised DR methods. Unlike LPP, which aims to preserve the local manifold structure of the data, PCA maintains the global structure of the data by maximizing sample variance.
A great deal of research demonstrates that high-dimensional data can be described by, or are close to, a smooth manifold in a low-dimensional space [13,14,15], which has motivated DR methods based on manifold learning. Laplacian eigenmaps (LE) [14] maintain the local manifold structure by constructing an undirected graph that indicates the pairwise relationships of samples. Locally linear embedding (LLE) [15] reconstructs samples in a low-dimensional space while maintaining their local linear representation coefficients, under the assumption that local samples follow a certain linear representation in a manifold patch. Yan et al. summarized relevant DR approaches and proposed a general graph embedding framework [16], which contains a series of variant graph embedding models, including neighborhood preserving embedding (NPE) [17], LPP, and several extended versions of LPP [11,18,19]. For these graph embedding-based DR models, researchers usually utilize Euclidean distance to construct adjacency graphs [20], where vertices indicate samples and the weighted edges reflect pairwise affinities between two samples. Consequently, there are two basic problems to be addressed.
  • Conventional graph embedding-based DR methods, for example LPP, aim to preserve the local adjacent relationships of samples by constructing a weight matrix that only takes the affinity between pairwise samples into account. However, the weight matrix fails to reflect the complex relationships of samples in high order [21], leading to a loss of information.
  • When employed to calculate the similarity between two samples, the usual Euclidean distance is related only to the two samples themselves; it hardly considers the influence of their neighboring samples [22,23] and ignores the distribution information of samples, which usually plays an important role in further data processing.
Accordingly, we propose a novel similarity distance-based hypergraph embedding method (SDHE) for unsupervised DR to address the two issues above. Unlike conventional graph embedding-based models that only describe the affinity between two samples, SDHE is based on hypergraph embedding, which can take advantage of the complicated sample relationships in high order [24,25,26]. In addition, a novel similarity distance is defined instead of Euclidean distance to measure the affinity between samples, because the similarity distance can not only discover complex geometrical structure information but also make use of the local distribution information of samples.
The remainder of our work is organized as follows. In Section 2, some related work is introduced, including the classic graph embedding model (LPP) and hypergraph embedding learning. Section 3 proposes our similarity distance-based hypergraph embedding method (SDHE) for dimensionality reduction in detail. In Section 4, we adopt three real hyperspectral images to evaluate the performance of SDHE in comparison with other related DR methods. Finally, Section 5 provides the conclusions.

2. Related Work

2.1. Notations of Unsupervised Dimensionality Reduction Problem

We focus on the unsupervised dimensionality reduction problem. The data set is denoted as $V = [v_1, v_2, \ldots, v_n] \in \mathbb{R}^{d \times n}$, where $v_i \in \mathbb{R}^d$ represents the $i$-th sample with $d$ feature values and $n$ denotes the total number of samples. In order to obtain a discriminative low-dimensional representation $y_i \in \mathbb{R}^m$ ($m < d$) for each $v_i$, an optimal projection matrix $P \in \mathbb{R}^{d \times m}$ is to be learned. We denote $y_i = P^T v_i$, or $Y = P^T V$, where $Y = [y_1, y_2, \ldots, y_n] \in \mathbb{R}^{m \times n}$ is the data in the transformed space.

2.2. Locality Preserving Projection (LPP)

As shown in [27], numerous high-dimensional observations contain low-dimensional manifold structures, which motivates us to solve DR problems by extracting local metric information hidden in the low-dimensional manifold. Graph embedding has been proposed to present certain statistical or geometric characteristics of samples via constructing a graph embedding model [16]. In particular, LPP utilizes the K nearest neighbors (KNN) algorithm to construct an adjacency graph so that the local neighborhood structure is considered in feature space [17]. The basic derivation of Formulas (1)–(4) comes from [17].
LPP is formulated to find a projection matrix $P \in \mathbb{R}^{d \times m}$ by minimizing
$$\frac{1}{2}\sum_{i,j=1}^{n} W_{i,j}\,\|y_i - y_j\|_2^2 = \frac{1}{2}\sum_{i,j=1}^{n} W_{i,j}\,\|P^T v_i - P^T v_j\|_2^2 = \mathrm{trace}\big(P^T V (D - W) V^T P\big) = \mathrm{trace}\big(P^T V L V^T P\big) \quad (1)$$
where $D$ is a diagonal matrix with diagonal entries $D_{i,i} = \sum_{j=1}^{n} W_{i,j}$, and $L = D - W$ is the Laplacian matrix. The symmetric weight matrix $W$ is defined on an adjacency graph, in which each entry $W_{i,j}$ corresponds to a weighted edge denoting the similarity between two samples. The most popular way to define $W_{i,j}$ is:
$$W_{i,j} = \begin{cases} \exp\!\big(-\|v_i - v_j\|_2^2 / t\big) & v_i \text{ and } v_j \text{ are neighbors} \\ 0 & \text{otherwise} \end{cases} \quad (2)$$
where $t$ denotes the heat kernel parameter; $W_{i,j}$ increases monotonically as the distance between $v_i$ and $v_j$ decreases.
Therefore, if samples $v_i$ and $v_j$ are among the K nearest neighbors of each other, the mapped samples $y_i$ and $y_j$ are close to each other in the transformed space as well, due to the heavy penalty incurred by $W_{i,j}$. Usually a constraint $P^T V D V^T P = I$ is imposed to ensure a meaningful solution, where $I$ denotes the identity matrix. Then the final optimization problem can be written as follows:
$$\min_{P}\ \mathrm{trace}\big(P^T V L V^T P\big) \quad \text{s.t.}\ \ P^T V D V^T P = I \quad (3)$$
The optimal projection matrix can be obtained by solving the following generalized eigenvalue problem:
$$V L V^T P = V D V^T P \Lambda \quad (4)$$
where $P$ denotes the eigenvector matrix of $(V D V^T)^{-1} V L V^T$ and $\Lambda$ denotes the eigenvalue matrix whose diagonal entries are the eigenvalues corresponding to $P$.
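To make the procedure concrete, the following is a minimal NumPy/SciPy sketch of LPP as formulated in Equations (1)–(4); the function name lpp, the ridge term reg, and the dense KNN search are our own illustrative choices, not part of the cited references.

```python
# Minimal LPP sketch (assumes dense data with one sample per column of V).
import numpy as np
from scipy.linalg import eigh

def lpp(V, m, k=5, t=1.0, reg=1e-6):
    d, n = V.shape
    # Pairwise squared Euclidean distances between samples (columns of V).
    sq = np.sum(V**2, axis=0)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * V.T @ V
    np.fill_diagonal(dist2, np.inf)
    # Symmetric KNN adjacency with heat-kernel weights, Equation (2).
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist2[i])[:k]
        W[i, nbrs] = np.exp(-dist2[i, nbrs] / t)
    W = np.maximum(W, W.T)                 # make the graph undirected
    D = np.diag(W.sum(axis=1))
    L = D - W                              # graph Laplacian
    A = V @ L @ V.T
    B = V @ D @ V.T + reg * np.eye(d)      # small ridge keeps B positive definite
    # Equations (3)-(4): the smallest generalized eigenvalues give the projection.
    vals, vecs = eigh(A, B)
    return vecs[:, :m]                     # P with shape (d, m)
```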

2.3. Hypergraph Embedding

Since hypergraph theory was proposed, hypergraph learning has made promising progress in many applications in recent years; the basic derivation of Formulas (5)–(8) comes from [26,28,29]. As an extension of the classic graph, a hypergraph facilitates the representation of a data structure by capturing adjacent sample relationships in high order, which overcomes the limitation of a classic graph that each edge only considers the affinity between pairwise samples. Unlike a classic graph, where a weighted edge links two vertices, a hyperedge consists of several nodes in a certain neighborhood. Figure 1 shows an example of a classic graph and a hypergraph.
The hypergraph $G = (V, E, w)$ is constructed as follows. Here, $V = [v_1, v_2, \ldots, v_n] \in \mathbb{R}^{d \times n}$ denotes the vertex set corresponding to the samples, and $E = [E_1, E_2, \ldots, E_n]$ denotes the hyperedge set, in which each hyperedge is assigned a positive weight $w(E_i)$. For a certain vertex, its $K$ nearest neighbors (let $K = 2$ in Figure 1b) are found to make up a hyperedge, and an incidence matrix $H \in \mathbb{R}^{n \times n}$ is defined to express the affiliation between vertices and hyperedges as follows:
$$H_{i,j} = h(v_i, E_j) = \begin{cases} 1 & \text{if } v_i \in E_j \\ 0 & \text{otherwise} \end{cases} \quad (5)$$
Then each hyperedge is assigned a weight computed by:
$$w_i = w(E_i) = \sum_{v_j \in E_i} \exp\!\big(-\|v_j - v_i\|_2^2 / h\big) \quad (6)$$
where $h$ is the Gaussian kernel parameter. According to the incidence matrix $H$ and the hyperedge weights $w(E)$, the vertex degree of each vertex $v_i \in V$ is defined as:
$$d_i = d(v_i) = \sum_{j=1}^{n} w_j H_{i,j} \quad (7)$$
and the hyperedge degree $\delta_i$ of each hyperedge $E_i \in E$ is defined as:
$$\delta_i = \delta(E_i) = \sum_{j=1}^{n} H_{j,i} \quad (8)$$
Namely, $\delta_i$ denotes the number of vertices that belong to hyperedge $E_i$.
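As a concrete illustration of Equations (5)–(8), here is a small sketch of the hypergraph construction in NumPy; the helper name build_hypergraph and the dense distance computation are our own assumptions for illustration.

```python
# Hypergraph construction sketch: each hyperedge E_j groups sample j with its
# K nearest neighbors; V has shape (d, n) with one sample per column.
import numpy as np

def build_hypergraph(V, K=5, h=1.0):
    d, n = V.shape
    sq = np.sum(V**2, axis=0)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * V.T @ V
    np.fill_diagonal(dist2, 0.0)
    H = np.zeros((n, n))        # Equation (5): H[i, j] = 1 if vertex i is in hyperedge j
    w = np.zeros(n)             # Equation (6): weight of each hyperedge
    for j in range(n):
        members = np.argsort(dist2[j])[:K + 1]   # vertex j itself plus its K neighbors
        H[members, j] = 1.0
        w[j] = np.sum(np.exp(-dist2[j, members] / h))
    vertex_deg = H @ w          # Equation (7): d_i = sum_j w_j * H[i, j]
    edge_deg = H.sum(axis=0)    # Equation (8): number of vertices per hyperedge
    return H, w, vertex_deg, edge_deg
```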

3. Proposed Method

In this section, we propose a novel unsupervised DR method called similarity distance-based hypergraph embedding (SDHE). Below we first give a kind of hypergraph embedding-based similarity, then construct a novel similarity distance, and finally, propose a similarity distance-based hypergraph embedding model for DR.

3.1. Hypergraph Embedding-Based Similarity

It is reasonable to describe a high-order similarity relationship with a hypergraph rather than a simple graph, because each hyperedge connects more than two vertices and these vertices share one weighted hyperedge; i.e., the samples in the same hyperedge are regarded as a whole. A hyperedge $E_i$ consists of the sample $v_i$ together with its $K$ nearest neighbors, and an incidence matrix $H \in \mathbb{R}^{n \times n}$ is defined by Equation (5) to represent the affiliation between vertices and hyperedges. Then a positive weight $w_i$ is assigned to the hyperedge $E_i$ according to Equation (6); the weight of hyperedge $E_i$ is calculated by summing the pairwise relationships between sample $v_i$ and its $K$ nearest neighbors.
However, the hyperedge weight relies heavily on the parameter $K$. If $K$ is too small, the hypergraph approaches a simple graph and cannot sufficiently depict high-order sample relationships. Conversely, if $K$ is too large, one hyperedge connects too many vertices that share a common weight, which fails to reflect each vertex's own similarity characteristics. It is also worth noting that outliers share hyperedge weights with other vertices, so hypergraph embedding is sensitive to outliers (usually noise). We therefore alleviate this sensitivity by constructing a robust similarity. The similarity $s_{i,j}$ between two arbitrary samples $v_i$ and $v_j$ is defined as follows:
$$s_{i,j} = \sum_{E_k \in E,\; v_i, v_j \in V} w(E_k)\, h(v_i, E_k)\, h(v_j, E_k) = \sum_{k=1}^{n} w_k H_{i,k} H_{j,k} \quad (9)$$
where the notations H and w have been defined in Equations (5) and (6), respectively.
According to Equation (9), the similarity between samples $v_i$ and $v_j$ is calculated by summing the weights of all the common hyperedges to which they both belong. The weight of a common hyperedge is associated with the local sample distribution; next, we explain how this works. On the one hand, each hyperedge connects $K + 1$ vertices, so the weight $w_i$ of hyperedge $E_i$ becomes larger if these $K + 1$ vertices are distributed compactly, and vice versa. If an outlier and its $K$ nearest neighbors make up a hyperedge, then the hyperedge has a smaller weight because the distribution of these vertices is more scattered. In other words, an outlier contributes little to the weight of a hyperedge, making the similarity measure more robust. On the other hand, each vertex can belong to several different hyperedges. When two vertices are very close to each other, they participate in more of the same hyperedges and have a higher similarity, as we expect. Considering the sample distribution means that, for a training set of a given size (especially a small one), we can mine more valuable information according to the local structure and distribution relationships in the hypergraph. Our experiments on different data sets confirm this conclusion, as shown in Section 4.
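In matrix form, Equation (9) amounts to weighting the incidence matrix by the hyperedge weights; a short sketch (reusing the hypothetical build_hypergraph helper above) might look as follows.

```python
# Hypergraph-based similarity, Equation (9): s_ij sums the weights of all
# hyperedges that contain both v_i and v_j, i.e. S = H * diag(w) * H^T.
import numpy as np

def hypergraph_similarity(H, w):
    return H @ np.diag(w) @ H.T
```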

3.2. Similarity Distance Construction

Euclidean distance is the most popular tool for measuring the similarity between samples in graph embedding-based DR methods [30]. However, it is not very accurate for analyzing hyperspectral image problems. For example, as depicted in Figure 2a, three samples $v_1$, $v_2$ and $v_3$ come from three different classes, and $v_1$ is close to $v_2$ but far from $v_3$ in Euclidean distance. Accordingly, $v_1$ and $v_2$ are more likely to be misclassified into the same class when we ignore some complex structure and distribution information, which probably increases the classification error. For another example, as depicted in Figure 2b, the Euclidean distance from $v_1$ to $v_2$ equals that from $v_1$ to $v_3$, but $v_1$ and $v_2$ are more likely to belong to the same class according to the distribution of samples, which cannot be reflected intuitively by Euclidean distance. We are therefore motivated to propose a novel similarity distance to replace Euclidean distance.
It is natural that if two samples have high similarity, they are likely to come from the same class, even though we know nothing about their exact labels in the unsupervised DR problem. That is to say, when two similar samples are mapped to the low-dimensional space, they ought to be close to each other according to their similarity in the original feature space. Directly using the similarity to represent the distance relationship encounters the problem of non-uniform measurement, so we normalize the similarity by defining the relative similarity $r_{i,j}$ as follows:
$$r_{i,j} = \frac{s_{i,j} - s_{min}}{s_{max} - s_{min}} \quad (10)$$
where $s_{i,j}$ has been defined in Equation (9), and $s_{min}$ and $s_{max}$ denote the minimum and maximum elements of the similarity matrix $S$, respectively. Thus $s_{i,j} = s_{max}$ corresponds to $r_{i,j} = 1$ and $s_{i,j} = s_{min} = 0$ corresponds to $r_{i,j} = 0$. As a normalized version of $s_{i,j}$, $r_{i,j}$ reflects the probability that samples $i$ and $j$ belong to the same class. Moreover, the relative similarity matrix consisting of the entries $r_{i,j}$ is sparse because for the majority of entries $r_{i,j} = s_{i,j} = 0$, i.e., there exists no hyperedge that contains samples $i$ and $j$ simultaneously.
Based on the relative similarity $r_{i,j}$, a novel similarity distance $ED_{i,j}$ is defined for measuring the location relationship of samples as follows:
$$ED_{i,j} = 1 - \log(r_{i,j}) \quad (11)$$
where $0 < r_{i,j} \le 1$ and $ED_{i,j} \ge 1$. In particular, if $r_{i,j} = 0$, we define $ED_{i,j} = +\infty$. The similarity distance is symmetric, i.e., $ED_{i,j} = ED_{j,i}$.
To give an intuitive sense of the similarity distance, Figure 2b provides a diagram to explain how it works. Despite the equal Euclidean distances from $v_1$ to $v_2$ and from $v_1$ to $v_3$, the sample distribution around $v_1$ and $v_2$ is denser than that around $v_1$ and $v_3$. According to Equation (6), a denser distribution leads to a larger hyperedge weight and corresponds to a larger similarity. A larger similarity means a smaller similarity distance, so the similarity distance between $v_1$ and $v_2$ is smaller than that between $v_1$ and $v_3$. Obviously, this result accords with our intuitive judgment.
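The normalization and distance mapping of Equations (10)–(11) are straightforward to vectorize; the sketch below (with the hypothetical function name similarity_distance) assumes S is the dense similarity matrix from Equation (9).

```python
# Relative similarity and similarity distance, Equations (10)-(11).
import numpy as np

def similarity_distance(S):
    s_min, s_max = S.min(), S.max()
    r = (S - s_min) / (s_max - s_min)     # Equation (10): entries in [0, 1]
    ED = np.full_like(S, np.inf)          # r = 0  ->  ED = +infinity
    pos = r > 0
    ED[pos] = 1.0 - np.log(r[pos])        # Equation (11): ED >= 1 since r <= 1
    return ED
```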
One advantage of Euclidean distance is that it is simple and easy to compute, but this also limits the amount of information it can carry. Because the geometrical structure of hyperspectral data in high-dimensional feature space is complex and hard to learn, Euclidean distance cannot effectively reflect the interactions between samples. By using the similarity distance, however, we can discover crucial information that is not directly exhibited through geometrical distance and make progress in analyzing the hyperspectral image DR problem.

3.3. Similarity Distance-Based Hypergraph Embedding Model

As described in the above two sections, we extract a similarity from the hypergraph embedding and then use this similarity to construct the similarity distance. Now we propose our similarity distance-based hypergraph embedding (SDHE) model for DR, whose basic idea is to find a projection matrix $P$ that projects the original high-dimensional data into a low-dimensional manifold space while preserving the similarity distance among samples.
Similar to LPP, a penalty factor $EW_{i,j}$ is defined to balance the similarity distance between samples $i$ and $j$ in the transformed space as follows:
$$EW_{i,j} = \exp\!\big(-ED_{i,j}^2 / t\big) \quad (12)$$
where $ED_{i,j}$ is formulated in Equation (11) and $t$ is a positive heat kernel parameter. Thus, the optimization problem of SDHE is formulated to minimize
$$\begin{aligned} \frac{1}{2}\sum_{i,j=1}^{n} EW_{i,j}\,\|P^T v_i - P^T v_j\|_2^2 &= \frac{1}{2}\sum_{i,j=1}^{n} (P^T v_i)^T EW_{i,j}\, P^T v_i + \frac{1}{2}\sum_{i,j=1}^{n} (P^T v_j)^T EW_{i,j}\, P^T v_j - \sum_{i,j=1}^{n} (P^T v_i)^T EW_{i,j}\, P^T v_j \\ &= \sum_{i=1}^{n} (P^T v_i)^T D_{i,i}\, P^T v_i - \sum_{i,j=1}^{n} (P^T v_i)^T EW_{i,j}\, P^T v_j \\ &= \mathrm{trace}\big(P^T V D V^T P\big) - \mathrm{trace}\big(P^T V (EW) V^T P\big) \\ &= \mathrm{trace}\big(P^T V (D - EW) V^T P\big) = \mathrm{trace}\big(P^T V L V^T P\big) \end{aligned} \quad (13)$$
where $D$ is a diagonal matrix with diagonal entries $D_{i,i} = \sum_{j=1}^{n} EW_{i,j}$, and $L = D - EW$ is the Laplacian matrix.
Therefore, if samples $v_i$ and $v_j$ have a small similarity distance in the original feature space, the mapped samples $y_i$ and $y_j$ will also be close to each other in the transformed feature space due to the heavy penalty incurred by $EW_{i,j}$. In order to avoid a degenerate solution, the final optimization problem is formulated as follows by adding a regularization term.
$$\max_{P}\ \frac{\mathrm{trace}\big(P^T V D V^T P\big)}{\mathrm{trace}\big(P^T V L V^T P\big)} \quad (14)$$
This is a trace-ratio problem, which can be reduced to the following generalized eigenvalue problem:
$$V D V^T P = \lambda V L V^T P \quad (15)$$
where $\lambda$ represents a generalized eigenvalue. The optimal projection matrix $P = [P_1, P_2, \ldots, P_m]$ is acquired by choosing the eigenvectors corresponding to the $m$ largest eigenvalues.
An outline of SDHE is summarized in Algorithm 1.
Algorithm 1: SDHE
Require:
 Training samples $V = [v_1, v_2, \ldots, v_n] \in \mathbb{R}^{d \times n}$,
 dimensionality of the transformed space $m$,
 the number of nearest neighbors $K$,
 the Gaussian kernel parameters $h$ and $t$.
Ensure:
 The optimal projection matrix $P^* \in \mathbb{R}^{d \times m}$.
Step 1: Construct the hypergraph using the $K$ nearest neighbors algorithm and obtain the incidence matrix $H_{i,j}$ according to Equation (5);
Step 2: Calculate the weight $w_i$ of each hyperedge according to Equation (6);
Step 3: Calculate the similarity $s_{i,j} = \sum_{k=1}^{n} w_k H_{i,k} H_{j,k}$;
Step 4: Translate the similarity $s_{i,j}$ into the relative similarity $r_{i,j}$: $s_{min} = \min(s_{i,j})$, $s_{max} = \max(s_{i,j})$, $r_{i,j} = (s_{i,j} - s_{min}) / (s_{max} - s_{min})$;
Step 5: Construct the similarity distance $ED_{i,j} = 1 - \log(r_{i,j})$;
Step 6: Construct the penalty factor $EW_{i,j} = \exp(-ED_{i,j}^2 / t)$;
Step 7: Calculate $D$ and $L$;
Step 8: Solve the generalized eigenvalue problem $V D V^T P = \lambda V L V^T P$;
Step 9: $P^* = [P_1, P_2, \ldots, P_m]$ consists of the eigenvectors corresponding to the $m$ largest eigenvalues.
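For readers who want to trace the steps in code, the following is a compact sketch of Algorithm 1 that reuses the hypothetical helpers build_hypergraph, hypergraph_similarity and similarity_distance sketched in the earlier sections; it is an illustrative reconstruction under those assumptions, not the authors' released implementation.

```python
# SDHE sketch following Algorithm 1; V has shape (d, n), one sample per column.
import numpy as np
from scipy.linalg import eigh

def sdhe(V, m, K=5, h=1.0, t=1.0, reg=1e-6):
    d, n = V.shape
    H, w, _, _ = build_hypergraph(V, K=K, h=h)     # Steps 1-2
    S = hypergraph_similarity(H, w)                # Step 3
    ED = similarity_distance(S)                    # Steps 4-5
    EW = np.exp(-ED**2 / t)                        # Step 6 (EW = 0 where ED is infinite)
    D = np.diag(EW.sum(axis=1))                    # Step 7
    L = D - EW
    A = V @ D @ V.T
    B = V @ L @ V.T + reg * np.eye(d)              # small ridge keeps B well conditioned
    vals, vecs = eigh(A, B)                        # Step 8: A p = lambda B p
    order = np.argsort(vals)[::-1][:m]             # Step 9: m largest eigenvalues
    return vecs[:, order]

# Usage sketch: P = sdhe(V_train, m=30, K=5, h=32.0, t=0.0313); Y = P.T @ V_test
```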

4. Results and Discussion

In this section, the validity of our proposed SDHE method was tested on three hyperspectral data sets and compared with some related DR methods. The DR effectiveness was evaluated according to classification accuracy, which was calculated by the nearest neighbor (NN) classifier after each DR method was applied to the data set.

4.1. Hyperspectral Image Data Sets

Our experiments were conducted by employing three standard hyperspectral image data sets as follows; more details are shown in Section 4.3.

4.1.1. Pavia University

The Pavia University scene was gathered by the reflective optics system imaging spectrometer (ROSIS) optical sensor over Pavia, northern Italy. It is a 610 × 610 pixel image with 103 spectral bands, divided into 9 ground-truth classes after some invalid samples were removed.

4.1.2. Salinas

The Salinas scene was acquired by the airborne visible/infrared imaging spectrometer (AVIRIS) sensor over Salinas Valley, Southern California, in 1998. This area consists of 512 × 217 pixels with 224 spectral bands. After discarding 20 water absorption bands, it contains 16 classes of observations with 204 spectral bands.

4.1.3. Kennedy Space Center

The Kennedy Space Center (KSC) data was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) instrument over the KSC, Florida, in 1996. It consists of 13 classes of observations with 176 spectral bands after we discarded uncalibrated and noisy bands that cover the region of water absorption features.

4.2. Experimental Setup

4.2.1. Training Set and Testing Set

Considering the distinct scales and distributions of the data sets above, we randomly chose 15, 20, or 25 samples per class from the Pavia University, Salinas, and KSC scenes to make up the training sets. Naturally, the remaining samples were regarded as testing sets. In addition, a random 10-fold validation method was adopted; that is, the partition process was repeated 10 times independently to weaken the influence of random bias.

4.2.2. Data Pre-Processing

As Camps-Valls et al. proposed in [31], we utilized spatial mean filtering to enhance hyperspectral data classification. For example, for a pixel $x_i$ with coordinates $(p_i, q_i)$, we denote its local pixel neighborhood $N(x_i)$ as follows:
$$N(x_i) = \{\, x(p, q) \mid p \in [p_i - a, p_i + a],\ q \in [q_i - a, q_i + a] \,\}, \quad a = 0, 1, 2, \ldots \quad (16)$$
For pixels at the edge of the image, the samples were mirrored before applying the spatial mean filtering. Then every pixel has a spatial neighborhood $N(x)$ containing $(2a + 1)^2$ pixels, where $2a + 1$ indicates the width of the spatial filtering window. Finally, each pixel $x$ is represented by:
$$\hat{x} = \frac{1}{(2a + 1)^2} \sum_{s=1}^{(2a + 1)^2} x_s \quad (17)$$
In our experiments, we set $a = 2$ for all the hyperspectral images; that is, the width of the spatial neighborhood is 5, as depicted in Figure 3. In addition, the filtered data are normalized by min-max scaling, as is common practice.
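The pre-processing in Equations (16)–(17) is a plain spatial mean filter followed by min-max scaling; a small sketch is given below, assuming the hyperspectral cube is stored as a (rows, cols, bands) array and that border pixels are mirrored as described above (the function names are our own).

```python
# Spatial mean filtering over a (2a+1) x (2a+1) window, mirrored at the edges,
# applied independently to every spectral band, followed by min-max scaling.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_mean_filter(cube, a=2):
    # Filter only the two spatial axes; the band axis keeps window size 1.
    return uniform_filter(cube, size=(2 * a + 1, 2 * a + 1, 1), mode="mirror")

def min_max_scale(cube):
    lo, hi = cube.min(), cube.max()
    return (cube - lo) / (hi - lo)
```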

4.2.3. Comparison and Evaluation

In order to evaluate the effectiveness of the different DR methods, the testing set is transformed into low-dimensional data using the optimal projection matrix learned from the training set. For comparison, two classical unsupervised DR methods, PCA [12] and LPP [11], two state-of-the-art unsupervised DR methods, BH and SH [32], and two supervised DR methods, LFDA [9] and NWFE [8], were compared with our proposed SDHE method. As a baseline, the raw data (RAW) are also classified directly without DR. In our experiments, the nearest neighbor (NN) classifier is adopted for classification, and we report the overall accuracy (OA), average accuracy (AA), and kappa coefficient (KC) together with their standard deviations (STD) to evaluate these DR methods.
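For completeness, a minimal sketch of how OA, AA and KC can be computed from predicted and true labels follows; it assumes integer class labels in the range 0..n_classes-1 and is our own illustrative helper, not taken from the paper.

```python
# Overall accuracy (OA), average accuracy (AA) and kappa coefficient (KC),
# computed from a confusion matrix C where C[i, j] counts true class i
# predicted as class j.
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    total = C.sum()
    oa = np.trace(C) / total                                 # fraction correctly classified
    aa = np.mean(np.diag(C) / C.sum(axis=1))                 # mean per-class accuracy
    pe = np.sum(C.sum(axis=0) * C.sum(axis=1)) / total**2    # chance agreement
    kc = (oa - pe) / (1 - pe)
    return oa, aa, kc
```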

4.2.4. Parameter Selection

It is essential to select appropriate parameters for the different DR methods in our experiments. The number of nearest neighbors $K$ is selected from the set $\{3, 5, 7, 9, 11\}$, and the Gaussian kernel parameters $h$ and $t$ are each selected from the set $\{2^{-8}, 2^{-7}, \ldots, 2^{7}, 2^{8}\}$. In order to decrease the influence of random bias, we repeat each single experiment 10 times, with every combination of parameters and randomly divided training and testing sets. The optimal combination of parameters is the one associated with the highest mean overall accuracy (OA).
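The selection procedure described above is an exhaustive grid search with repeated random splits; a sketch is given below, where score is a placeholder callable (our assumption) that trains SDHE with the given parameters on one random split and returns the overall accuracy.

```python
# Grid search over K, h and t with repeated random train/test splits.
import itertools
import numpy as np

def select_parameters(score, n_repeats=10):
    Ks = [3, 5, 7, 9, 11]
    grid = [2.0**p for p in range(-8, 9)]          # {2^-8, ..., 2^8} for h and t
    best_params, best_oa = None, -np.inf
    for K, h, t in itertools.product(Ks, grid, grid):
        oa = np.mean([score(K, h, t, seed=r) for r in range(n_repeats)])
        if oa > best_oa:
            best_params, best_oa = (K, h, t), oa
    return best_params, best_oa
```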

4.3. Experimental Results

To provide further detail on our data sets, Table 1, Table 2 and Table 3 present the ground truth classes and their individual sample numbers for Pavia University, Salinas, and KSC, respectively.
First, we randomly choose 20 samples per class to form the training set, and the remaining samples are regarded as the testing set. Then we learn a projection matrix from the training set and conduct DR on the testing set using this projection matrix. The reduced dimensionality is fixed at 30, which turns out to be a relatively stable setting for all the related DR methods in our experiments. Finally, the nearest neighbor (NN) classifier is adopted for classification, and these processes are repeated 10 times to obtain the mean classification accuracy with the corresponding standard deviation. Below we display our experimental results in the form of tables and figures together with relevant discussion.
The experimental results for the three hyperspectral data sets are displayed in Table 4, Table 5 and Table 6, and the bolded experimental values indicate the best performance among all the competitive DR methods. In addition, the optimal parameters for our SDHE are $K = 5$, $h = 32$, $t = 0.0313$ for Pavia University, $K = 5$, $h = 16$, $t = 0.0313$ for Salinas, and $K = 3$, $h = 0.0156$, $t = 0.1250$ for KSC.
As listed in Table 4, Table 5 and Table 6, our proposed SDHE achieves markedly higher classification accuracy than the other competitive DR methods in AA, OA, and KC. Note that both BH and SH, like our SDHE, belong to hypergraph embedding DR methods, but they perform comparatively poorly because they ignore the sample distribution information and Euclidean distance cannot reveal intrinsic similarity. Moreover, the results of RAW, NWFE, and PCA are very similar to each other, which suggests that the feature spaces produced by NWFE or PCA do not improve classification effectiveness but only reduce the redundancy of the high-dimensional data to make data processing more efficient; even so, they still outperform the other DR methods for KSC. All the competitive DR methods except SDHE reach very similar classification accuracy for Salinas.
For individual classes, SDHE also prevailed over the other related DR methods in 6 of 9 classes for Pavia University, 5 of 16 classes for Salinas, and 11 of 13 classes for KSC. Remarkably, SDHE was notably superior to the others in the classes with comparatively low classification accuracy, especially the 1st, 2nd and 3rd classes in Pavia University, the 8th and 15th classes in Salinas, and the 4th, 5th and 6th classes in KSC. Although SDHE was inferior to the others in several classes, the accuracy gaps between the different DR methods for these classes were narrow.
In order to present the classification effectiveness of the different DR methods intuitively, the samples of the testing set were assigned pseudo labels predicted by the NN classifier after each DR method was applied. The results are portrayed via classification maps in Figure 4, Figure 5 and Figure 6. In each figure, subfigure (a) shows the ground truth of the original hyperspectral data set, and subfigures (b–h) show the performance of BH, LFDA, LPP, NWFE, PCA, SH and our proposed SDHE, respectively. Higher classification accuracy means fewer misclassified samples in the corresponding subfigure, and the key regions are highlighted by a white circle in subfigure (h) of Figure 4 and Figure 5. Obviously, there are fewer misclassified samples with our proposed SDHE than with the others.
To study the DR effectiveness for different sizes of training sets, we randomly selected 15, 20, or 25 samples per class as the training set, with the remaining samples regarded as the testing set. The related experimental results, including OA (%) and STD, for the three hyperspectral data sets are listed in Table 7, Table 8 and Table 9.
To further consider the influence of the reduced dimensionality on classification accuracy, Figure 7, Figure 8 and Figure 9 plot curves for the same training sets whose OAs are listed in Table 7, Table 8 and Table 9. With the x-axis denoting the reduced dimensionality, the performance of the different DR methods is depicted in Figure 7, Figure 8 and Figure 9. In each figure, subfigures (a–c) correspond to training sets of 15, 20, and 25 samples per class, respectively.
According to Table 7, Table 8 and Table 9, regardless of whether the training set contains 15, 20, or 25 samples per class, SDHE always performs best among all the related DR methods. We found that the smaller the training set, the greater the advantage of SDHE over the other DR methods. Note that when the training set consisted of 15 samples per class, LFDA not only performed poorly in OA but also had a much higher STD, which means the performance of LFDA was sensitive to small training sets because the local within-class scatter matrix was likely to be singular or ill-conditioned; however, the classification accuracy of LFDA increased rapidly with the size of the training set. As listed in Table 8, the mean OA decreases slightly as the training set grows from 20 to 25 samples per class for Salinas, because the candidate parameter values are discrete, which limits the optimal accuracy the model can achieve; this is a normal phenomenon.
According to Figure 7, Figure 8 and Figure 9, SDHE still outperforms the other DR methods for the different sizes of training sets. As the reduced dimensionality increases, the classification accuracy rises rapidly at first and then reaches a steady level, which supports the earlier choice of analyzing the results at a dimensionality of 30. It is worth mentioning that the smaller the training set, the more pronounced the advantage of SDHE, because the hypergraph and the similarity distance help to mine more hidden information. Empirically, when the reduced dimensionality is more than 15, our SDHE shows a remarkable advantage.

5. Conclusions

Three main contributions of our work are listed as follows:
  • A novel similarity distance is proposed via hypergraph construction. Compared with Euclidean distance, it makes better use of sample structure and distribution information, because it considers not only the adjacent relationships between samples but also the mutual affinity of samples in high order.
  • The proposed similarity distance is employed to optimize the DR problem; i.e., our proposed SDHE aims to maintain the similarity distance in a low-dimensional space. In this way, the similarity, which captures the structure and distribution information between samples, is inherited in the transformed space.
  • When applied to the classification task on three different hyperspectral images, our SDHE is shown to perform more effectively, especially when the size of the training set is comparatively small. As shown in Table 7, Table 8 and Table 9, our method improves OA, AA, and KC by at least 2% on average on the different data sets.
Furthermore, our work uses a graph to mine the intrinsic geometric information of the data. Graph data itself is a kind of structured data and differs from our setting. For graph learning, there are many ways to perform dimensionality reduction on graphs, such as weight pruning, vertex pruning, and joint weight and vertex pruning [33]. In addition, compared with graphs, where each sample is a structure, the input in our work is a vector; if the input is a tensor, tensor decompositions are suitable [34]. Compared with neural networks [35], which often require large-scale computation, our method is more like a single-layer neural network with a special objective function, which has the advantage of effectively utilizing lightweight computing resources.
We have proposed a similarity distance-based hypergraph embedding method (SDHE) for unsupervised dimensionality reduction. First, the hypergraph embedding technique is employed to discover the complicated affinity of samples in high order. Then we take advantage of the affiliation between vertices and hyperedges to construct a similarity matrix that includes the local distribution information of samples. Finally, based on the hypergraph embedding and the similarity matrix, a novel similarity distance is proposed as a substitute for Euclidean distance, which better reflects the complicated geometric structure information of the data. The experimental results on three hyperspectral image data sets demonstrate that our proposed SDHE performs more effectively than other popular DR methods. In future work, we plan to extend the similarity distance to semi-supervised model learning, which can combine discriminative analysis with structure and distribution information, and hope to make good progress in remote sensing for climate models.

Author Contributions

Methodology, S.F.; writing-original draft preparation, W.Q.; writing-review and editing, X.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Special Project for the Introduced Talents Team of Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou) (GML2019ZD0603) and the Chinese Academy of Sciences (No. E1YD5906).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tan, K.; Wang, X.; Zhu, J.; Hu, J.; Li, J. A novel active learning approach for the classification of hyperspectral imagery using quasi-Newton multinomial logistic regression. Int. J. Remote Sens. 2018, 39, 3029–3054.
  2. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
  3. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
  4. Chang, C.-I. Hyperspectral Data Exploitation: Theory and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2007.
  5. Yu, C.; Lee, L.-C.; Chang, C.-I.; Xue, B.; Song, M.; Chen, J. Band-specified virtual dimensionality for band selection: An orthogonal subspace projection approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2822–2832.
  6. Wang, Q.; Meng, Z.; Li, X. Locality adaptive discriminant analysis for spectral–spatial classification of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2077–2081.
  7. Fan, Z.; Xu, Y.; Zuo, W.; Yang, J.; Tang, J.; Lai, Z.; Zhang, D. Modified principal component analysis: An integration of multiple similarity subspace models. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1538–1552.
  8. Kuo, B.-C.; Li, C.-H.; Yang, J.-M. Kernel nonparametric weighted feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1139–1155.
  9. Sugiyama, M. Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis. J. Mach. Learn. Res. 2007, 8, 1027–1061.
  10. Zhong, Z.; Fan, B.; Duan, J.; Wang, L.; Ding, K.; Xiang, S.; Pan, C. Discriminant tensor spectral–spatial feature extraction for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2014, 12, 1028–1032.
  11. Wang, R.; Nie, F.; Hong, R.; Chang, X.; Yang, X.; Yu, W. Fast and orthogonal locality preserving projections for dimensionality reduction. IEEE Trans. Image Process. 2017, 26, 5019–5030.
  12. Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2016, 374, 20150202.
  13. Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289.
  14. Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396.
  15. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
  16. Yan, S.; Xu, D.; Zhang, B.; Zhang, H.-J. Graph embedding: A general framework for dimensionality reduction. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 830–837.
  17. He, X.; Cai, D.; Yan, S.; Zhang, H.-J. Neighborhood preserving embedding. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, Beijing, China, 17–21 October 2005; pp. 1208–1213.
  18. Zhong, F.; Zhang, J.; Li, D. Discriminant locality preserving projections based on L1-norm maximization. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 2065–2074.
  19. Soldera, J.; Behaine, C.A.R.; Scharcanski, J. Customized orthogonal locality preserving projections with soft-margin maximization for face recognition. IEEE Trans. Instrum. Meas. 2015, 64, 2417–2426.
  20. Goyal, P.; Ferrara, E. Graph embedding techniques, applications, and performance: A survey. Knowl.-Based Syst. 2018, 151, 78–94.
  21. Yu, J.; Tao, D.; Wang, M. Adaptive hypergraph learning and its application in image classification. IEEE Trans. Image Process. 2012, 21, 3262–3272.
  22. Sun, Y.; Wang, S.; Liu, Q.; Hang, R.; Liu, G. Hypergraph embedding for spatial-spectral joint feature extraction in hyperspectral images. Remote Sens. 2017, 9, 506.
  23. Du, W.; Qiang, W.; Lv, M.; Hou, Q.; Zhen, L.; Jing, L. Semi-supervised dimension reduction based on hypergraph embedding for hyperspectral images. Int. J. Remote Sens. 2018, 39, 1696–1712.
  24. Xiao, G.; Wang, H.; Lai, T.; Suter, D. Hypergraph modelling for geometric model fitting. Pattern Recognit. 2016, 60, 748–760.
  25. Armanfard, N.; Reilly, J.P.; Komeili, M. Local feature selection for data classification. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 1217–1227.
  26. Zhang, Z.; Bai, L.; Liang, Y.; Hancock, E. Joint hypergraph learning and sparse regression for feature selection. Pattern Recognit. 2017, 63, 291–309.
  27. Tenenbaum, J.B.; Silva, V.D.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323.
  28. Zhang, L.; Gao, Y.; Hong, C.; Feng, Y.; Zhu, J.; Cai, D. Feature correlation hypergraph: Exploiting high-order potentials for multimodal recognition. IEEE Trans. Cybern. 2013, 44, 1408–1419.
  29. Du, D.; Qi, H.; Wen, L.; Tian, Q.; Huang, Q.; Lyu, S. Geometric hypergraph learning for visual tracking. IEEE Trans. Cybern. 2016, 47, 4182–4195.
  30. Feng, F.; Li, W.; Du, Q.; Zhang, B. Dimensionality reduction of hyperspectral image with graph-based discriminant analysis considering spectral similarity. Remote Sens. 2017, 9, 323.
  31. Camps-Valls, G.; Gomez-Chova, L.; Muñoz-Marí, J.; Vila-Francés, J.; Calpe-Maravilla, J. Composite kernels for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2006, 3, 93–97.
  32. Yuan, H.; Tang, Y.Y. Learning with hypergraph for hyperspectral image feature extraction. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1695–1699.
  33. Stanković, L.; Mandic, D.; Daković, M.; Brajović, M.; Scalzo, B.; Li, S.; Constantinides, A.G. Data analytics on graphs Part I: Graphs and spectra on graphs. Found. Trends Mach. Learn. 2020, 13, 1–157.
  34. Cichocki, A.; Mandic, D.; De Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C.; Phan, H.A. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163.
  35. Stanković, L.; Mandic, D.; Daković, M.; Brajović, M.; Scalzo, B.; Li, S.; Constantinides, A.G. Data analytics on graphs Part III: Machine learning on graphs, from graph topology to applications. Found. Trends Mach. Learn. 2020, 13, 332–530.
Figure 1. (a) Classic graph built by two nearest neighbors. (b) Hypergraph built by two nearest neighbors.
Figure 2. Diagrammatic presentation of comparison between Euclidean distance and similarity distance. (a) Three samples are from three different classes; (b) Three samples are from two classes.
Figure 3. Spatial neighborhood of data pre-processing. The red dashed line indicates the width of the selected pixels’ spatial neighborhood.
Figure 4. Classification maps for Pavia University. (a) Ground truth, (b) BH, (c) LFDA, (d) LPP, (e) NWFE, (f) PCA, (g) SH, (h) the proposed SDHE. The different colors indicate the different classes.
Figure 5. Classification maps for Salinas. (a) Ground truth, (b) BH, (c) LFDA, (d) LPP, (e) NWFE, (f) PCA, (g) SH, (h) the proposed SDHE. The different colors indicate the different classes.
Figure 6. Classification maps for KSC. (a) Ground truth, (b) BH, (c) LFDA, (d) LPP, (e) NWFE, (f) PCA, (g) SH, (h) the proposed SDHE. The different colors indicate the different classes.
Figure 7. The OA (%) with the change of reduced dimensionality for Pavia University. (a) Indicates the training set of 15 samples per class, (b) indicates the training set of 20 samples per class, (c) indicates the training set of 25 samples per class.
Figure 8. The OA (%) with the change of reduced dimensionality for Salinas. (a) Indicates the training set of 15 samples per class, (b) indicates the training set of 20 samples per class, (c) indicates the training set of 25 samples per class.
Figure 9. The OA (%) with the change of reduced dimensionality for KSC. (a) Indicates the training set of 15 samples per class, (b) indicates the training set of 20 samples per class, (c) indicates the training set of 25 samples per class.
Table 1. Ground truth classes and their individual sample numbers for Pavia University.

Number | Class | Samples
1 | Asphalt | 6631
2 | Meadows | 18,649
3 | Gravel | 2099
4 | Trees | 3064
5 | Painted metal sheets | 1345
6 | Bare Soil | 5029
7 | Bitumen | 1330
8 | Self-Blocking Bricks | 3682
9 | Shadows | 947
Total | | 42,776
Table 2. Ground truth classes and their individual sample numbers for Salinas.

Number | Class | Samples
1 | Brocoil-green-weeds-1 | 2009
2 | Brocoil-green-weeds-2 | 3726
3 | Fallow | 1976
4 | Fallow-rough-plow | 1394
5 | Fallow-smooth | 2678
6 | Stubble | 3959
7 | Celery | 3579
8 | Grapes-untrained | 11,271
9 | Soil-vinyard-develop | 6203
10 | Corn-senesced-green-weeds | 3278
11 | Lettuce-romaine-4wk | 1068
12 | Lettuce-romaine-5wk | 1927
13 | Lettuce-romaine-6wk | 916
14 | Lettuce-romaine-7wk | 1070
15 | Vinyard-untrained | 7268
16 | Vinyard-vertical-trellis | 1807
Total | | 54,129
Table 3. Ground truth classes and their individual sample numbers for KSC.

Number | Class | Samples
1 | Scrub | 761
2 | Willow swamp | 243
3 | CP hammock | 256
4 | CP/Oak hammock | 252
5 | Slash pine | 161
6 | Oak/Broadleaf hammock | 229
7 | Hardwood swamp | 105
8 | Graminoid marsh | 431
9 | Spartina marsh | 520
10 | Cattail marsh | 404
11 | Salt marsh | 419
12 | Mud flats | 503
13 | Water | 927
Total | | 5211
Table 4. Classification accuracy (%) at 30-dimensionality for Pavia University with the training set of 20 samples per class.

Class | RAW | BH | LFDA | LPP | NWFE | PCA | SH | SDHE
1 | 67.82 ± 5.17 | 61.51 ± 5.88 | 70.34 ± 6.08 | 56.46 ± 6.69 | 67.99 ± 5.15 | 67.84 ± 5.16 | 57.88 ± 5.01 | 73.68 ± 8.17
2 | 63.54 ± 5.55 | 60.97 ± 5.78 | 75.98 ± 4.01 | 69.00 ± 8.34 | 63.56 ± 5.52 | 63.54 ± 5.55 | 69.40 ± 8.23 | 79.54 ± 5.75
3 | 62.37 ± 5.10 | 53.46 ± 6.34 | 65.97 ± 6.19 | 50.26 ± 5.25 | 62.46 ± 5.17 | 62.34 ± 5.13 | 48.84 ± 5.79 | 69.74 ± 6.78
4 | 86.70 ± 3.13 | 84.76 ± 4.87 | 89.01 ± 4.86 | 89.14 ± 3.28 | 86.76 ± 3.11 | 86.70 ± 3.13 | 89.08 ± 3.94 | 87.53 ± 6.02
5 | 99.52 ± 0.36 | 100.00 ± 0 | 99.92 ± 0.11 | 100.00 ± 0 | 99.52 ± 0.36 | 99.52 ± 0.36 | 100.00 ± 0 | 99.77 ± 0.40
6 | 75.52 ± 6.47 | 72.68 ± 4.49 | 74.53 ± 8.99 | 74.13 ± 4.98 | 75.58 ± 6.47 | 75.52 ± 6.47 | 73.33 ± 5.36 | 86.70 ± 3.70
7 | 80.05 ± 3.58 | 81.23 ± 5.15 | 84.25 ± 7.46 | 66.92 ± 9.20 | 79.98 ± 3.66 | 80.02 ± 3.57 | 67.10 ± 4.81 | 88.93 ± 5.30
8 | 74.10 ± 5.79 | 61.16 ± 5.08 | 59.92 ± 6.00 | 52.92 ± 5.70 | 74.22 ± 5.84 | 74.09 ± 5.79 | 54.36 ± 5.37 | 74.18 ± 8.81
9 | 99.08 ± 0.46 | 99.15 ± 0.44 | 98.79 ± 0.71 | 98.91 ± 0.70 | 99.09 ± 0.47 | 99.08 ± 0.46 | 98.88 ± 0.80 | 99.26 ± 0.35
OA | 70.52 ± 2.77 | 66.45 ± 2.48 | 75.49 ± 2.21 | 68.35 ± 3.53 | 70.58 ± 2.77 | 70.52 ± 2.77 | 68.71 ± 3.07 | 80.45 ± 3.68
AA | 78.74 ± 1.40 | 74.99 ± 1.22 | 79.86 ± 1.28 | 73.08 ± 1.85 | 78.80 ± 1.42 | 78.74 ± 1.40 | 73.21 ± 1.61 | 84.37 ± 2.96
KC | 60.88 ± 3.68 | 55.48 ± 3.29 | 67.48 ± 2.94 | 58.00 ± 4.69 | 60.96 ± 3.68 | 60.87 ± 3.68 | 58.47 ± 4.08 | 74.06 ± 4.88
Table 5. Classification accuracy (%) at 30-dimensionality for Salinas with the training set of 20 samples per class.

Class | RAW | BH | LFDA | LPP | NWFE | PCA | SH | SDHE
1 | 98.49 ± 0.59 | 99.79 ± 0.43 | 98.93 ± 1.51 | 99.43 ± 0.60 | 98.49 ± 0.59 | 98.49 ± 0.59 | 99.61 ± 0.38 | 99.50 ± 0.76
2 | 99.61 ± 0.46 | 99.88 ± 0.24 | 99.78 ± 0.50 | 99.09 ± 1.61 | 99.62 ± 0.46 | 99.61 ± 0.46 | 99.59 ± 0.63 | 99.90 ± 0.16
3 | 97.07 ± 1.70 | 98.56 ± 1.41 | 98.30 ± 1.28 | 99.36 ± 0.99 | 97.11 ± 1.67 | 97.06 ± 1.70 | 99.18 ± 0.81 | 99.16 ± 1.55
4 | 97.90 ± 1.60 | 98.53 ± 1.43 | 98.15 ± 0.64 | 99.02 ± 0.58 | 97.93 ± 1.57 | 97.89 ± 1.62 | 98.96 ± 0.61 | 98.52 ± 0.91
5 | 93.92 ± 1.26 | 96.58 ± 0.87 | 93.67 ± 2.02 | 96.86 ± 0.92 | 93.91 ± 1.28 | 93.92 ± 1.27 | 96.90 ± 0.85 | 95.64 ± 1.79
6 | 99.55 ± 0.55 | 99.84 ± 0.43 | 99.74 ± 0.54 | 99.97 ± 0.07 | 99.55 ± 0.55 | 99.55 ± 0.55 | 99.96 ± 0.07 | 99.77 ± 0.43
7 | 98.76 ± 0.54 | 99.47 ± 0.55 | 99.64 ± 0.37 | 99.70 ± 0.20 | 98.76 ± 0.55 | 98.76 ± 0.54 | 99.68 ± 0.21 | 99.54 ± 0.30
8 | 68.68 ± 3.58 | 62.60 ± 5.21 | 67.41 ± 5.72 | 65.74 ± 5.01 | 68.64 ± 3.54 | 68.64 ± 3.59 | 65.45 ± 5.33 | 75.46 ± 4.40
9 | 98.70 ± 0.55 | 99.87 ± 0.20 | 98.80 ± 2.19 | 99.54 ± 1.30 | 98.71 ± 0.54 | 98.70 ± 0.55 | 99.78 ± 0.56 | 99.80 ± 0.22
10 | 86.22 ± 4.13 | 94.67 ± 1.89 | 92.28 ± 2.52 | 95.28 ± 1.88 | 86.27 ± 4.14 | 86.22 ± 4.13 | 95.43 ± 1.69 | 92.38 ± 1.77
11 | 95.29 ± 2.19 | 98.46 ± 1.17 | 98.44 ± 1.26 | 98.94 ± 0.83 | 95.33 ± 2.20 | 95.29 ± 2.19 | 98.89 ± 0.72 | 98.38 ± 1.41
12 | 99.94 ± 0.08 | 99.51 ± 0.51 | 98.22 ± 1.83 | 99.63 ± 0.44 | 99.95 ± 0.08 | 99.94 ± 0.08 | 99.27 ± 1.48 | 99.44 ± 1.51
13 | 98.67 ± 1.78 | 99.01 ± 1.33 | 98.95 ± 0.87 | 99.23 ± 0.91 | 98.65 ± 1.77 | 98.67 ± 1.78 | 99.11 ± 1.13 | 99.74 ± 0.44
14 | 95.70 ± 2.87 | 97.00 ± 1.77 | 97.15 ± 2.28 | 96.86 ± 2.38 | 95.71 ± 2.85 | 95.70 ± 2.87 | 96.84 ± 2.44 | 97.88 ± 1.88
15 | 73.89 ± 4.75 | 72.64 ± 5.66 | 65.79 ± 6.05 | 69.82 ± 5.47 | 73.80 ± 4.83 | 73.87 ± 4.75 | 70.47 ± 6.15 | 78.80 ± 3.70
16 | 96.56 ± 1.82 | 98.82 ± 0.58 | 98.63 ± 0.44 | 99.13 ± 0.37 | 96.56 ± 1.82 | 96.55 ± 1.82 | 99.17 ± 0.33 | 98.04 ± 0.73
OA | 87.98 ± 0.76 | 87.67 ± 0.74 | 87.24 ± 1.53 | 87.99 ± 0.79 | 87.97 ± 0.75 | 87.97 ± 0.76 | 88.07 ± 0.78 | 91.01 ± 1.37
AA | 93.68 ± 0.35 | 94.70 ± 0.22 | 93.99 ± 0.85 | 94.85 ± 0.32 | 93.69 ± 0.35 | 93.68 ± 0.35 | 94.89 ± 0.33 | 95.75 ± 0.59
KC | 86.61 ± 0.85 | 86.27 ± 0.83 | 85.79 ± 1.71 | 86.62 ± 0.88 | 86.59 ± 0.84 | 86.59 ± 0.85 | 86.71 ± 0.87 | 89.98 ± 1.53
Table 6. Classification accuracy (%) at 30-dimensionality for KSC with the training set of 20 samples per class.

Class | RAW | BH | LFDA | LPP | NWFE | PCA | SH | SDHE
1 | 94.55 ± 3.90 | 90.20 ± 6.86 | 86.13 ± 7.04 | 92.47 ± 3.33 | 94.55 ± 3.90 | 94.55 ± 3.90 | 92.46 ± 4.31 | 95.03 ± 2.75
2 | 90.45 ± 4.25 | 89.78 ± 4.99 | 89.06 ± 5.86 | 91.75 ± 4.59 | 90.49 ± 4.25 | 90.45 ± 4.25 | 91.84 ± 5.23 | 94.39 ± 3.84
3 | 92.63 ± 1.60 | 88.18 ± 7.74 | 86.86 ± 6.80 | 86.44 ± 5.88 | 92.63 ± 1.62 | 92.58 ± 1.61 | 86.10 ± 9.55 | 95.89 ± 3.22
4 | 61.51 ± 5.50 | 54.05 ± 6.12 | 72.76 ± 8.26 | 43.97 ± 7.92 | 61.72 ± 5.64 | 61.42 ± 5.53 | 51.98 ± 6.95 | 81.64 ± 4.18
5 | 72.84 ± 4.93 | 74.47 ± 7.36 | 90.00 ± 6.43 | 68.01 ± 7.00 | 72.70 ± 4.98 | 72.70 ± 5.04 | 71.28 ± 11.8 | 94.04 ± 3.89
6 | 80.86 ± 2.85 | 84.74 ± 6.36 | 90.38 ± 7.89 | 83.11 ± 6.25 | 80.86 ± 2.85 | 80.81 ± 2.87 | 82.49 ± 5.68 | 94.59 ± 3.30
7 | 99.18 ± 1.83 | 97.29 ± 4.50 | 96.82 ± 5.98 | 97.65 ± 3.11 | 99.18 ± 1.83 | 99.18 ± 1.83 | 97.65 ± 2.68 | 99.53 ± 0.94
8 | 88.44 ± 3.88 | 85.23 ± 8.67 | 90.24 ± 3.39 | 91.05 ± 6.76 | 88.44 ± 3.88 | 88.44 ± 3.88 | 89.81 ± 6.01 | 96.45 ± 3.02
9 | 96.20 ± 2.13 | 95.14 ± 3.29 | 93.60 ± 4.68 | 96.82 ± 2.86 | 96.20 ± 2.13 | 96.18 ± 2.16 | 96.66 ± 3.34 | 99.92 ± 0.13
10 | 93.54 ± 4.47 | 94.48 ± 2.39 | 92.60 ± 2.17 | 95.10 ± 2.66 | 93.72 ± 4.51 | 93.52 ± 4.44 | 95.78 ± 2.07 | 99.14 ± 1.22
11 | 98.97 ± 1.29 | 99.22 ± 0.77 | 97.72 ± 2.85 | 99.25 ± 0.68 | 98.97 ± 1.29 | 98.97 ± 1.29 | 99.10 ± 1.08 | 99.17 ± 1.43
12 | 92.88 ± 4.37 | 80.70 ± 6.26 | 79.36 ± 5.98 | 84.16 ± 7.35 | 93.21 ± 4.36 | 92.88 ± 4.37 | 83.35 ± 6.64 | 94.95 ± 2.92
13 | 100.00 ± 0 | 98.64 ± 0.82 | 98.69 ± 0.85 | 97.76 ± 2.48 | 100.00 ± 0 | 100.00 ± 0 | 98.24 ± 0.77 | 99.99 ± 0.03
OA | 92.38 ± 1.18 | 89.60 ± 2.43 | 90.32 ± 2.07 | 90.11 ± 1.76 | 92.44 ± 1.14 | 92.37 ± 1.17 | 90.46 ± 1.83 | 96.61 ± 0.78
AA | 89.39 ± 1.08 | 87.09 ± 2.34 | 89.56 ± 1.79 | 86.73 ± 1.66 | 89.44 ± 1.05 | 89.36 ± 1.07 | 87.44 ± 1.92 | 95.75 ± 0.74
KC | 91.49 ± 1.32 | 88.39 ± 2.72 | 89.19 ± 2.31 | 88.95 ± 1.96 | 91.55 ± 1.28 | 91.47 ± 1.31 | 89.35 ± 2.05 | 96.22 ± 0.87
Table 7. Classification accuracy (%) at 30-dimensionality for the different sizes of training sets in Pavia University.

Method | 15 samples per class | 20 samples per class | 25 samples per class
RAW | 70.43 ± 1.75 | 70.52 ± 2.77 | 73.47 ± 1.18
BH | 56.14 ± 1.72 | 66.45 ± 2.48 | 71.65 ± 3.09
LFDA | 57.16 ± 8.41 | 75.49 ± 2.21 | 80.76 ± 1.25
LPP | 57.92 ± 2.57 | 68.35 ± 3.54 | 74.68 ± 2.51
NWFE | 70.46 ± 1.75 | 70.58 ± 2.77 | 73.56 ± 1.18
PCA | 70.42 ± 1.75 | 70.52 ± 2.77 | 73.47 ± 1.18
SH | 57.43 ± 1.94 | 68.71 ± 3.08 | 74.60 ± 2.94
SDHE | 78.41 ± 5.13 | 80.45 ± 3.67 | 82.56 ± 2.87
Table 8. Classification accuracy (%) at 30-dimensionality for the different sizes of training sets in Salinas.

Method | 15 samples per class | 20 samples per class | 25 samples per class
RAW | 86.77 ± 1.83 | 87.98 ± 0.76 | 88.00 ± 0.92
BH | 81.32 ± 1.29 | 87.67 ± 0.74 | 89.40 ± 0.76
LFDA | 75.27 ± 3.32 | 87.24 ± 1.53 | 88.99 ± 0.98
LPP | 81.87 ± 1.05 | 87.99 ± 0.79 | 89.97 ± 1.11
NWFE | 86.78 ± 1.85 | 87.97 ± 0.75 | 88.00 ± 0.92
PCA | 86.76 ± 1.83 | 87.97 ± 0.76 | 87.98 ± 0.92
SH | 81.86 ± 1.57 | 88.07 ± 0.78 | 89.90 ± 1.24
SDHE | 89.43 ± 1.07 | 91.01 ± 1.37 | 90.78 ± 0.87
Table 9. Classification accuracy (%) at 30-dimensionality for the different sizes of training sets in KSC.

Method | 15 samples per class | 20 samples per class | 25 samples per class
RAW | 91.16 ± 0.58 | 92.38 ± 1.18 | 93.39 ± 0.57
BH | 73.23 ± 2.35 | 89.60 ± 2.43 | 93.99 ± 0.61
LFDA | 60.05 ± 11.54 | 90.32 ± 2.07 | 94.56 ± 1.03
LPP | 74.05 ± 2.88 | 90.11 ± 1.76 | 93.91 ± 0.92
NWFE | 91.11 ± 0.58 | 92.44 ± 1.14 | 93.38 ± 0.56
PCA | 91.15 ± 0.58 | 92.37 ± 1.17 | 93.38 ± 0.58
SH | 73.73 ± 1.80 | 90.46 ± 1.83 | 94.53 ± 0.70
SDHE | 95.88 ± 1.04 | 96.61 ± 0.92 | 97.49 ± 0.44
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
