Article

End-to-End Convolutional Network and Spectral-Spatial Transformer Architecture for Hyperspectral Image Classification

1 School of Materials Science Engineering, Wuhan Institute of Technology, Wuhan 430079, China
2 College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
3 Hyperspectral Computing Laboratory, Department of Technology of Computers and Communications, Escuela Politécnica, University of Extremadura, E-10071 Cáceres, Spain
4 School of Information Engineering, Nanchang Institute of Technology, Nanchang 330099, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(2), 325; https://doi.org/10.3390/rs16020325
Submission received: 6 November 2023 / Revised: 1 January 2024 / Accepted: 10 January 2024 / Published: 12 January 2024
(This article belongs to the Special Issue Recent Advances in Remote Sensing Image Processing Technology)

Abstract
Although convolutional neural networks (CNNs) have proven successful for hyperspectral image classification (HSIC), their limited receptive field makes it difficult to characterize long-range global dependencies among HSI pixels and across spectral bands. The transformer can compensate well for this shortcoming, but compared with CNNs it lacks image-specific inductive biases (i.e., localization and translation equivariance) and contextual position information. To overcome these challenges, we introduce a simply structured, end-to-end convolutional network and spectral–spatial transformer (CNSST) architecture for HSIC. Our CNSST architecture consists of two essential components: a simple 3D-CNN-based hierarchical feature fusion network and a spectral–spatial transformer that introduces inductive bias information. The former employs a 3D-CNN-based hierarchical feature fusion structure to establish the correlation between spectral and spatial (SAS) information while capturing richer inductive bias and more discriminative local spectral–spatial hierarchical feature information, while the latter aims to establish the global dependency among HSI pixels while enhancing the acquisition of local information by introducing inductive bias information. Specifically, the spectral and inductive bias information is incorporated into the transformer’s multi-head self-attention mechanism (MHSA), thus making the attention spectrally aware and location-aware. Furthermore, a Lion optimizer is exploited to boost the classification performance of our newly developed CNSST. Substantial experiments conducted on four publicly accessible hyperspectral datasets unequivocally showcase that our proposed CNSST outperforms other state-of-the-art approaches.


1. Introduction

Hyperspectral images (HSIs) consist of hundreds of contiguous narrow spectral bands extending across the electromagnetic spectrum, from visible to near-infrared wavelengths [1], resulting in abundant SAS information. Effectively classifying SAS features is critical in HSI processing, which aims at categorizing the content of each pixel using a set of pre-defined classes. In recent years, HSIC has seen widespread adoption across various domains, including urban planning [2], military reconnaissance [3], agriculture monitoring [4], and ocean monitoring [5].
The advancement of deep learning (DL) in artificial intelligence has considerably improved the processing of remote sensing images. When compared with traditional machine learning techniques such as support vector machines (SVMs) [6], morphological profiles [7], k-nearest neighbors [8], or random forests [9], DL-based approaches exhibit a powerful feature extraction capability, thus being able to learn discriminative and high-level semantic information. Therefore, DL-based techniques are extensively employed for HSIC [10]. For instance, a deep stacked autoencoder network has been suggested for the classification of HSIs by focusing on learning spectral features [11]. Chen et al. [12] employed a multi-layer deep neural network and a single restricted Boltzmann machine to capture the spectral characteristics within HSI data. However, these approaches solely utilize spectral information and overlook the importance of spatial-contextual information for enhancing classification performance. Hence, joint SAS feature extraction methods have been proposed to extract additional contextual semantic information from complex spatial structures, thus enhancing the model’s classification performance. Yang et al. [13] presented a two-branch SAS characteristic extraction network that employed a 1D-CNN for spectral characteristic extraction and a 2D-CNN for spatial characteristic extraction. The learned SAS information is concatenated and fed into a fully connected (FC) layer, which extracts spectral–spatial characteristics to facilitate further classification. Yet, the 2D-CNN architecture could potentially result in the loss of spectral information within HSI. To proficiently capture SAS features, a 3D-CNN coupled with a regularization model has been proposed [14]. Roy et al. [15] combined a 2D-CNN and a 3D-CNN, first acquiring jointly represented spectral–spatial characteristics from spectral bands using the 3D-CNN, and then further learning spatial feature representations using the 2D-CNN. Guo et al. [16] proposed a dual-view spectral and global spatial feature fusion network that utilized an encoder–decoder structure with channel and spatial attention to fully mine the global spatial characteristics, while utilizing a dual-view spectral feature aggregation model with view attention to learn the diversity of the spectral characteristics, achieving a relatively good classification performance.
Although the above CNN-based approaches achieve relatively good results in classification tasks, they do not exploit the hierarchical SAS feature information across various layers. Furthermore, an excessive depth of convolutional layers may cause gradient vanishing and explosion problems. The densely connected convolutional network (DenseNet) offers an effective solution to mitigate these issues; it does so by promoting the maximal flow of information among different convolutional layers through connectivity operations, effectively fusing the hierarchical features between different layers [17]. Based on this, a comprehensive deep multi-layer fusion DenseNet using 2D and 3D dense blocks was presented in [18], which effectively improved the exploitation of HSI hierarchical signatures and handled the gradient vanishing problem. In [19], a fast dense spectral–spatial convolution network (FDSSC) was introduced, which combines two separate dense blocks and increases the network’s depth, allowing for a more straightforward utilization of feature information across different layers. By combining the advantages of the CNN and the graph convolutional network (GCN), Zhou et al. [20] proposed an attention fusion network based on multiscale convolution and multihop graph convolution to extract multi-level complex SAS features of HSI. Liang et al. [21] presented a framework that integrates a multiscale DenseNet with bidirectional recurrent neural networks, adopting the multiscale DenseNet (instead of traditional CNNs) to strengthen the utilization of spatial characteristics across different convolutional layers. Despite the powerful ability of the above DenseNet-based approaches to retrieve SAS characteristics in HSI classification tasks, they still suffer from the limitation that CNNs typically only consider local SAS information between features, while ignoring global SAS information (failing to establish global dependencies across long-range distances among HSI pixels).
Recently, vision transformers have witnessed a surge in popularity within numerous facets of computer vision, including target recognition, image classification, and instance segmentation [22,23]. Transformers are primarily composed of numerous self-attention and feed-forward layers that inherit a global receptive field, which allows them to efficiently establish long-range dependencies among HSI pixels, compensating for the weakness of CNNs in global feature extraction. Hence, vision transformers have attracted widespread attention in HSIC, in which the MHSA serves as the transformer’s primary feature extractor for learning the remote locations of HSI pixels and the global dependencies between spectral bands [24,25]. Furthermore, the transformer emphasizes prominent features while suppressing less significant information. He et al. [26] were pioneers in developing a bidirectional encoder representation transformer-based model for establishing global dependencies in HSIC. This approach primarily relies on the MHSA layer, where each head encodes a global contextual semantic-aware representation of the HSI for discriminative SAS characteristics. Hong et al. [27] proposed a framework for learning the long-range dependence information between spectral signatures using group spectral embedding and transformer encoders by treating HSI data as sequential information, while fusing “soft” residuals across layers to mitigate the loss of critical signature information in the process of hierarchical propagation. Xue et al. [28] introduced a local transformer model in combination with the spatial partition restore network, which can effectively acquire the HSI global contextual dependencies and dynamically acquire the spatial attention weights through the local transformer to adapt to the intrinsic changes in HSI spatial pixels, thus augmenting the model’s ability to retrieve spatial–contextual pixel characteristics. Mei et al. [29] introduced a group-aware hierarchical transformer (GAHT) for HSIC, which incorporates a new group pixel embedding module that highlights local relationships in each HSI spectral channel, thus modeling global–local dependencies from a spectral–spatial point of view.
Although the above transformer-based models exhibit excellent abilities to model long-range dependencies among HSI pixels, they still suffer from some limitations in terms of extracting HSI characteristic information: (1) MHSA falls short in effectively considering both the positional and spectral information of the input HSI blocks when establishing the global dependencies of the HSI, so the network underexploits the positional information among HSI pixels, and (2) some discriminative local SAS characteristic information that is helpful for HSIC purposes is not sufficiently exploited. Given that CNNs exhibit strong local characteristic learning abilities, a convolutional transformer (CT) network was proposed in [30], first employing central position coding to merge the spectral signatures and pixel positions to obtain the spatial positional signatures of the HSI patches, and then utilizing the CT block (containing two 2D-CNNs with 3 × 3 convolutional kernel sizes) to acquire the local–global characteristic information of HSIs, which significantly improved this model’s local–global feature acquisition ability. The spectral–spatial feature tokenization transformer (SSFTT) was introduced in [31], which converts the SAS characteristics learned by a simple 3D-CNN and 2D-CNN layer into semantic tokens, and inputs them into a transformer encoder to perform spectral–spatial characteristic representation. Although the above methodologies try to employ CNNs to strengthen the local characteristic extraction capabilities of the network, their simple CNN structures fail to adequately extract hierarchical features in various network layers. In this regard, Yan et al. [32] proposed a hybrid convolutional and ViT network classification approach, where one branch uses hybrid convolution and ViT to boost the capability of acquiring local–global spatial characteristics, and the other branch utilizes 3D-CNNs to retrieve spectral characteristics. However, the separate extraction of SAS characteristics by a hybrid convolutional transformer spatial branch and a 3D-CNN-based spectral branch may ignore the intrinsic correlation between SAS signatures. A local semantic feature aggregation-based transformer approach was proposed in [33], which utilizes 3D-CNNs to simultaneously extract shallow spectral–spatial characteristics, and then merges pixel-labeled features using a local pixel aggregation operation to provide multi-scale characteristic neighborhood representations for HSIC. A two-branch bottleneck spectral–spatial transformer (BS2T) method was introduced in [34], which utilizes two 3D-CNN DenseNet structures to separately abstract SAS properties to boost the extraction of localized characteristics, as well as two transformers for establishing the long-range dependencies between HSI pixels. However, this may result in the model failing to adequately leverage the correlation between SAS information (the architecture contains two 3D-CNN hierarchical structures and two transformers, and is relatively complex). Zu et al. [35] proposed exploiting a cascaded convolutional feature token to obtain joint spectral–spatial information and incorporate certain inductive bias properties of CNNs into the transformer. The densely connected transformer is then utilized to improve characteristic propagation, significantly boosting the model’s performance.
Inspired by the above, we propose a simply structured, end-to-end convolutional network and spectral–spatial transformer (CNSST) architecture for HSIC. It comprises two primary modules, a 3D-CNN-based hierarchical feature fusion network and a spectral–spatial transformer that introduces inductive bias information (i.e., localization, contextual position, and translation equivariance), which are used to boost the extraction of local feature information and to establish global dependencies, respectively. Regarding local spectral–spatial feature extraction, to acquire SAS hierarchical characteristic representations with richer inductive bias information, a 3D-CNN-based hierarchical network strategy is utilized to capture SAS information simultaneously, so as to establish the correlation between the SAS information of HSI pixels and to obtain richer inductive bias along with more discriminative spectral–spatial joint feature information. Meanwhile, the hierarchical feature fusion structure is utilized to boost the utilization of the HSI semantic feature information across different convolutional layers. In the spectral–spatial transformer network, the SAS hierarchical characteristics containing rich inductive bias information are introduced into the MHSA to make up for the insufficient inductive bias in the image features acquired by the transformer. This allows the transformer not only to effectively establish long-range dependencies among HSI pixels, but also to enhance the model’s location-aware and spectral-aware capabilities. Moreover, a Lion optimizer is exploited to enhance the performance of the model. A summary of the primary contributions of this research is as follows:
  • We propose a simply structured, end-to-end convolutional network and spectral–spatial transformer (CNSST) architecture based on a 3D-CNN hierarchical feature fusion network and a spectral–spatial transformer that introduces rich inductive bias information in the HSI classification process.
  • To obtain feature representations with richer inductive bias information, a 3D-CNN-based hierarchical network is utilized to capture SAS information simultaneously in order to establish the correlation between these two sources of information in HSI pixels, while the hierarchical structure is exploited to improve the utilization of the HSI semantic feature information in various convolutional layers.
  • Spectral–spatial hierarchical features containing rich inductive bias information are introduced into MHSA, which enables the transformer to effectively establish long-range dependencies among HSI pixels, and to be more location-aware and spectral-aware. Moreover, a Lion optimizer is exploited to boost the categorization performance of the network.
The rest of this article is structured as follows. The related work is briefly described in Section 2. Section 3 provides an in-depth description of the general framework of the CNSST. Section 4 shows the experimental analysis and discussion. Finally, Section 5 wraps up the paper with concluding remarks and hints at future research directions.

2. Related Work

This section introduces the basic modules used in our CNSST architecture, including 3D-CNNs, the hierarchical DenseNet, and the self-attention mechanism.

2.1. 3D-CNNs for HSI Classification

In general, 1D-CNNs can be applied to extract the spectral signatures of HSI pixels, while 2D-CNNs are normally utilized to obtain spatial information. Yet, HSIs contain abundant SAS information, which means that the separate extraction of SAS characteristics may ignore the correlation between certain spatial characteristics and specific spectral characteristics. Compared with traditional pixel-based approaches, which solely employ a single pixel for network training, 3D-CNN-based approaches employ the target pixel and its neighboring pixels as inputs, which makes it possible to capture the rich spatial information surrounding the target pixel and to fully leverage the correlation between SAS information. The input size of the 3D-CNN-based approach is p × p × b, where p × p and b stand for the number of neighboring pixels and spectral bands, respectively. Consequently, the 3D-CNN is utilized as the foundational structure of the proposed CNSST model to obtain SAS feature information and fully capitalize on the correlation between certain spatial features and specific spectral features.
The inclusion of batch normalization in 3D-CNN modules is a common tool used in DL models to make the learning process fast and to reduce the dependence on initial values [36]. As a result, a batch normalization (BN) layer is incorporated into each 3D-CNN layer to increase numerical stability and suppress overfitting. As demonstrated in Figure 1, n_i feature maps (FMs) of size p_i × p_i × b_i are input into a 3D-CNN layer containing m_{i+1} channels of size α_{i+1} × α_{i+1} × d_{i+1}, resulting in n_{i+1} output FMs of size p_{i+1} × p_{i+1} × b_{i+1}. The kth output FM of the (i+1)-th 3D-CNN layer with BN can be computed as follows:
X_k^{i+1} = A_f\left( \sum_{j=1}^{n_i} \frac{X_j^i - E_x(X_j^i)}{\sqrt{V_f(X_j^i)}} \ast H_k^{i+1} + b_k^{i+1} \right),
where A_f(·) represents the activation function (AF) employed to introduce nonlinear properties that boost the representation ability of the network. The jth input FM of the (i+1)-th layer is denoted as X_j^i ∈ R^{p × p × b}, while E_x(·) and V_f(·) correspond to the expectation and variance of the input feature tensor, respectively. H_k^{i+1} and b_k^{i+1} stand for the weight parameters and bias values of the (i+1)-th 3D-CNN layer, respectively, while ∗ stands for the convolution operation.
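For readers who prefer code, the following is a minimal PyTorch sketch of one such BN + 3D convolution + activation layer corresponding to Equation (1); the channel counts, kernel size, and padding are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class Conv3DBNBlock(nn.Module):
    """Sketch of one 3D-CNN layer with batch normalization, mirroring Eq. (1):
    the input feature maps are normalized, convolved, and passed through an AF."""
    def __init__(self, in_channels, out_channels, kernel_size=(3, 3, 7)):
        super().__init__()
        padding = tuple(k // 2 for k in kernel_size)   # 'same'-style padding (assumption)
        self.bn = nn.BatchNorm3d(in_channels)          # normalization of the input FMs
        self.conv = nn.Conv3d(in_channels, out_channels, kernel_size, padding=padding)
        self.act = nn.Mish()                           # Mish AF, as adopted later in the paper

    def forward(self, x):
        # x: (batch, channels, p, p, b) hyperspectral patches
        return self.act(self.conv(self.bn(x)))

# Example: 8 patches of 11 x 11 pixels with 200 bands and a single input channel
x = torch.randn(8, 1, 11, 11, 200)
print(Conv3DBNBlock(1, 12)(x).shape)  # torch.Size([8, 12, 11, 11, 200])
```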

2.2. Hierarchical DenseNet

Traditional CNNs merely propagate FMs forward from one convolutional layer to the next, and cannot train the network using information from different layers. Typically, increasing the number of convolutional layers tends to enhance the network performance; however, an excessive number of layers may result in gradient vanishing and explosion problems. The hierarchical DenseNet effectively mitigates these issues. It connects each layer directly to the subsequent ones and combines features in the channel dimension by concatenation to ensure maximum information flow between layers. Every convolutional layer receives the information from the preceding layers as input and subsequently transmits its FMs to the succeeding layers [17]. The architecture of the hierarchical DenseNet is depicted in Figure 2. The dense block serves as the fundamental unit of the hierarchical DenseNet. Assuming that the lth layer’s output FM is x_l, the output of the l-th dense block layer may be represented as follows:
x_l = H_l\left( \left[ x_0, x_1, \ldots, x_{l-1} \right] \right),
where H_l(·) denotes a functional module that comprises BN layers, convolution layers, and Mish AF layers. Additionally, x_0, x_1, …, x_{l-1} denote the output FMs of the previous dense block layers.
The architecture of the dense block employed in our model is presented in Figure 3. Specifically, each convolutional layer consists of m kernels of shape α × α × d, and each layer then produces m FMs of size p × p × b. The number of FMs corresponds to the number of convolutional kernels, and a linear correlation exists between the number of channels in each layer and the number of convolutional layers. The number of channels m_j in the jth layer of the dense block takes the value (j − 1) × m + b, with b representing the number of channels of the input FMs.
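As an illustration of the dense connectivity in Equation (2), the following is a hedged PyTorch sketch of a 3D dense block; the growth rate, kernel size, and layer ordering are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class Dense3DBlock(nn.Module):
    """Sketch of a 3D dense block: layer j receives the concatenation of the input
    and all previous outputs, so its input has (j - 1) * m + b channels and it
    contributes m (the growth rate) new feature maps."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4, kernel_size=(3, 3, 7)):
        super().__init__()
        padding = tuple(k // 2 for k in kernel_size)
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm3d(in_channels + j * growth_rate),
                nn.Mish(),
                nn.Conv3d(in_channels + j * growth_rate, growth_rate,
                          kernel_size, padding=padding),
            )
            for j in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))  # dense connectivity
        return torch.cat(features, dim=1)
```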

2.3. Self-Attention Mechanism

Attention mechanisms have their origins in the investigation of the human visual nervous system, which has always been able to selectively concentrate on the significant parts of all information, while ignoring other irrelevant parts. The same is true for the attention mechanism in DL. The self-attention mechanism (SA) has revolutionized various natural language processing (NLP) tasks by capturing dependencies and relationships among various elements in a sequence. It enables models to assess the significance of different elements dynamically, resulting in improved performance on tasks, including text translation [37], sentiment analysis [38], and NLP [39]. In the domain of HSIC, SA has also been widely exploited [26,27,28,29,30,31]. The SA can be represented as:
\mathrm{Attention}(Q, K, V) = S\left( \frac{QK^T}{\sqrt{d_K}} \right) V,
where S(·) denotes the softmax AF. Q, K, V, and d_K represent the query, key, value, and the dimension of the key K, respectively. The query holds the information to be extracted, the key serves as the index, and the value encapsulates the feature to be fetched. Attention is computed by obtaining the correlation between the query and the key, yielding the attention map, which is then utilized to weight the values. Figure 4 illustrates the detailed architecture of the SA. In HSIC, SA exhibits superior discrimination. Ge et al. [40] combined multiscale pyramidal convolutional blocks and polarized attention blocks to retrieve SAS characteristics from HSIs. Xia et al. [41] introduced a lightweight residual structure to replace the standard residual structure; it incorporates an SA, enabling adaptive fusion of the input and output FMs, thereby further enhancing the feature extraction capability of the residual structure. The authors of [42] developed a novel high-order self-attention network that utilizes the SA module to capture long-range dependencies within scenes, facilitating the extraction of high-level semantic features. In the proposed method, to enhance the transformer’s location and spectral awareness, a novel MHSA with position coding is used to characterize the spatial location correlation and spectral–spatial correlation among hierarchical spectral–spatial features that contain rich inductive bias information.
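The scaled dot-product attention of Equation (3) can be sketched in a few lines of PyTorch; the token count and feature dimension in the example below are arbitrary.

```python
import torch
import torch.nn.functional as F

def self_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_K)) V, as in Eq. (3)."""
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # query-key correlations
    return F.softmax(scores, dim=-1) @ V           # attention-weighted sum of the values

# Example: 121 tokens (an 11 x 11 patch flattened) with 64-dimensional features
Q = K = V = torch.randn(1, 121, 64)
print(self_attention(Q, K, V).shape)  # torch.Size([1, 121, 64])
```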

3. Methodology

The structure of the proposed CNSST model is schematically depicted in Figure 5. The CNSST architecture is formed by two primary components: a 3D-CNN-based hierarchical feature fusion network and a spectral–spatial transformer network that introduces inductive bias properties information. In terms of the 3D-CNN-based hierarchical feature fusion network, the 3D-CNN-based hierarchical network strategy is employed to capture SAS information simultaneously, so as to establish the correlation among the SAS information of the HSI and to obtain more abundant inductive bias. Moreover, the hierarchical DenseNet feature fusion structure is utilized to promote the utilization of the HSI semantic characteristic information in the respective convolutional layers, aiming to achieve a spectral–spatial hierarchical signature representation with richer inductive bias information. The spectral–spatial transformer network is employed to establish long-range dependencies between HSI pixels and to reinforce the local characteristic extraction capability. Specifically, the spectral–spatial hierarchical signatures containing rich inductive bias information are introduced into the multi-head spectral–spatial self-attention module to make up for the shortcomings of insufficient inductive bias in the image features acquired by the transformer, as well as to make the model more location-aware and spectral-aware. Finally, spectral–spatial feature fusion is conducted by the FC layer and then the probability prediction of each class is conducted by the Softmax AF. Moreover, a Lion optimizer is exploited to improve the categorization performance of the model. Next, we describe each of the modules in detail.

3.1. 3D-CNN-Based Hierarchical Dense Spectral-Spatial Feature Fusion Network

In this section, we provide a detailed description of the 3D-CNN-based hierarchical dense spectral–spatial feature fusion network module in CNSST. As shown in Figure 6, the structure primarily consists of a 3D-CNN-based hierarchical DenseNet spectral–spatial block. Unlike methods that obtain SAS characteristics through separate spectral and spatial branches, here a 3D-CNN-based hierarchical DenseNet is adopted to extract the spectral–spatial characteristics simultaneously, in order to establish the correlation between SAS information while capturing richer inductive bias and more discriminative local SAS hierarchical characteristic information. When pixels containing abundant spectral–spatial characteristic information are introduced into the proposed structure, the proposed model with multiple nonlinear layers can effectively provide hierarchical feature representations. Furthermore, the utilization of multiple convolutional layers enables the CNN to learn features more discriminatively under sparsity constraints. Regarding the network parameter settings, assuming that the input FM is of size H × W × D with n channels and the convolution layer comprises m_o kernels of size a_o × a_o × d_o, each layer calculates FMs as follows:
H_o = \frac{H + 2Pad - a_o}{s_o} + 1,
W_o = \frac{W + 2Pad - a_o}{s_o} + 1,
D_o = \frac{D + 2Pad - d_o}{s_o} + 1,
where H_o, W_o, and D_o represent the corresponding sizes of the produced FMs. The parameter Pad denotes the padding applied during the resizing of the output FM, while s_o signifies the stride of the filter used. Moreover, the corresponding number of channels within the resulting feature map can be expressed as n + (j − 1) × m_o, in which j pertains to the jth convolutional layer under consideration.
Specifically, the 3D-CNN-based hierarchical dense spectral–spatial feature fusion model mainly consists of 4 convolutional layers, where each layer has a kernel size of 3 × 3 × 7 and the number of channels m_o is set to 12. In addition, we added a dropout layer between the last BN layer and the global average pooling layer to prevent overfitting. The AF can enhance the efficiency of back-propagation and facilitate the network’s convergence. As shown in Figure 6, we adopted the self-regularized non-monotonic AF Mish, which can preserve negative inputs as negative outputs, thereby effectively trading off information preservation and network sparsity. In the end, the local spectral–spatial characteristics extracted from the hierarchical dense spectral–spatial feature fusion network, which contain rich inductive bias and more discriminative characteristics, are used as the inputs to the spectral–spatial transformer block.
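A hedged PyTorch sketch of this fusion module is given below; it follows the stated settings (four densely connected 3D convolution layers, 3 × 3 × 7 kernels, 12 channels per layer, Mish, dropout before global average pooling), while the padding, dropout rate, and the exact output handed to the transformer are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalFusionNet(nn.Module):
    """Sketch of the hierarchical dense spectral-spatial feature fusion module."""
    def __init__(self, in_channels=1, growth_rate=12, num_layers=4, dropout=0.5):
        super().__init__()
        self.layers = nn.ModuleList()
        for j in range(num_layers):
            c_in = in_channels + j * growth_rate          # dense connectivity grows the channels
            self.layers.append(nn.Sequential(
                nn.Conv3d(c_in, growth_rate, (3, 3, 7), padding=(1, 1, 3)),
                nn.BatchNorm3d(growth_rate),
                nn.Mish(),
            ))
        c_out = in_channels + num_layers * growth_rate
        self.bn = nn.BatchNorm3d(c_out)
        self.drop = nn.Dropout3d(dropout)                 # dropout between the last BN and pooling
        self.pool = nn.AdaptiveAvgPool3d(1)               # global average pooling

    def forward(self, x):                                 # x: (batch, 1, p, p, bands)
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        f = self.drop(self.bn(torch.cat(feats, dim=1)))
        return f, self.pool(f).flatten(1)                 # hierarchical FMs and a pooled descriptor
```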

3.2. Spectral–Spatial Transformer Network

As illustrated in Figure 5, the proposed spectral–spatial transformer block primarily contains a spectral–spatial MHSA module, as well as feature contraction and expansion modules. The feature maps containing rich inductive bias and hierarchical spectral–spatial characteristics are fed into the spectral–spatial transformer module, which then utilizes the spectral–spatial self-attention and positional coding modules to establish the global long-range dependencies of the spectral–spatial characteristics in HSI pixels. To be specific, in the spectral–spatial transformer block, spectral–spatial hierarchical characteristics of size H_o × W_o × D_o are first fed into the feature contraction module, which consists of convolutions with kernels of size 1 × 1 and BN operations. Following that, the new characteristics obtained after feature contraction are input into the MHSA module to establish long-range dependencies between the HSI pixels. Finally, a convolution kernel of size 1 × 1 is employed in the feature expansion module for the dimensionality change, so that the output features can be adapted to the structure of the network and better combined between different levels of FMs.
Generally, positional coding is employed as a constraint to boost the attention sensitivity to positional information in transformer-based approaches [34,35]. Relative distance-aware position coding has great potential for describing the spatial content location of image pixels. The reason for this is that the attention considers not only the contextual feature information, but also the relative distances between the different positional features in pixels, which can effectively establish the correlation between the image feature information and positional awareness [43]. Hence, in our proposed CNSST, we used 2D relative position self-attention to realize the relative position encoding of HSI pixel features. The 2D relative height information L_h and relative width information L_w are computed for each HSI pixel feature to obtain a new spectral–spatial feature F_N containing the relative position information.
In addition, MHSA is a mapping process that converts a query and a set of key–value pairs into an output. In this process, each input (query, key, and value) is represented as a vector and the output is a weighted sum of the values. The architecture of the MHSA in the spectral–spatial transformer is presented in Figure 7. To enhance the location-awareness and spectral-awareness of the proposed CNSST, MHSA with relative position coding is utilized to jointly describe the spatial–positional and spectral–spatial correlations between the HSI pixel patches. Firstly, the HSI pixel features F are processed by three convolutional layers to yield three new groups of features Q, K, V ∈ R^{H × W × D}. Meanwhile, the entire set of hierarchical spatial features on the channel is mapped to global features, utilizing the global pooling operation to produce the spectral signatures F_o of F_N, which are then introduced into the attention mechanism, where the spectral–spatial attention A_M can be represented as follows:
A_M = \mathrm{Attention}(Q, K, V, F_o) = S\left( \frac{(L_h + L_w)\,QK^T}{\sqrt{d_k}} \right) F_o V,
where S(·) denotes the softmax AF. L_h and L_w stand for the height and width information of the 2D relative position encoding, respectively. Q, K, V, F_o, and d_k correspond to the query, key, value, spectral signatures, and the dimension of the key K, respectively. These weight matrices and parameters are utilized to calculate the MHSA, and the outcomes from each attention head are concatenated to obtain the output of the MHSA with H heads, which can be expressed as follows:
\mathrm{MHSA}(Q, K, V, F_o) = \mathrm{Concat}(A_{M_1}, A_{M_2}, \ldots, A_{M_H}) W,
where W signifies the matrix parameters obtained from the linear layers, and H signifies the number of heads.
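The sketch below illustrates one plausible PyTorch reading of Equations (7) and (8): multi-head self-attention over a 2D feature map with additive relative height/width position terms and a pooled spectral descriptor used as a gating signal. The exact way CNSST combines L_h, L_w, and F_o inside the attention may differ, so this is a simplified approximation rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralSpatialMHSA(nn.Module):
    """Simplified multi-head self-attention with 2D relative position terms (L_h, L_w)
    and a global-pooled spectral descriptor F_o; a sketch, not the paper's exact module."""
    def __init__(self, dim, heads=4, height=11, width=11):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.d_head = heads, dim // heads
        self.to_q = nn.Conv2d(dim, dim, 1)
        self.to_k = nn.Conv2d(dim, dim, 1)
        self.to_v = nn.Conv2d(dim, dim, 1)
        # learnable relative position embeddings along the height and width axes
        self.rel_h = nn.Parameter(torch.randn(heads, self.d_head, height) * 0.02)
        self.rel_w = nn.Parameter(torch.randn(heads, self.d_head, width) * 0.02)
        self.out = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                              # x: (B, dim, H, W)
        B, C, H, W = x.shape
        assert H == self.rel_h.shape[-1] and W == self.rel_w.shape[-1]
        f_o = x.mean(dim=(2, 3))                       # spectral descriptor via global pooling
        q, k, v = (t(x).reshape(B, self.heads, self.d_head, H * W)
                   for t in (self.to_q, self.to_k, self.to_v))
        content = torch.einsum('bhdi,bhdj->bhij', q, k)              # Q K^T term
        pos = (self.rel_h.unsqueeze(-1) + self.rel_w.unsqueeze(-2))  # L_h + L_w grid
        pos = pos.reshape(1, self.heads, self.d_head, H * W).expand(B, -1, -1, -1)
        position = torch.einsum('bhdi,bhdj->bhij', q, pos)           # content-position term
        attn = F.softmax((content + position) / self.d_head ** 0.5, dim=-1)
        out = torch.einsum('bhij,bhdj->bhdi', attn, v).reshape(B, C, H, W)
        out = out * f_o.sigmoid().view(B, C, 1, 1)     # spectral-aware gating with F_o (assumption)
        return self.out(out)

# Example: a batch of 2 feature maps with 64 channels on an 11 x 11 grid
x = torch.randn(2, 64, 11, 11)
print(SpectralSpatialMHSA(64, heads=4, height=11, width=11)(x).shape)  # torch.Size([2, 64, 11, 11])
```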

3.3. Lion Optimizer

The optimizer plays a significant role in training DL models; its primary aim is to help the model gradually learn and update the parameters so that it fits the data better and its loss function decreases. The Lion optimizer is a simple and efficient optimization algorithm, and it has achieved excellent performance in image classification, computer vision, and other areas [44]. Unlike traditional optimizers that store first- and second-order moments, Lion merely tracks momentum and utilizes a sign operation to calculate parameter updates, thereby not only boosting the performance of the model, but also reducing the memory overhead. To improve the categorization performance of CNSST, the Lion optimizer is applied to the CNSST model instead of the traditional Adam optimizer. The Lion optimizer’s computational procedure can be expressed as follows:
g_t = \nabla_{\vartheta} f(\vartheta_{t-1}),
\vartheta_t = \vartheta_{t-1} - \psi_t \left\{ \mathrm{sign}\left[ \rho_1 m_{t-1} + (1 - \rho_1) g_t \right] + \lambda \vartheta_{t-1} \right\},
m_t = \rho_2 m_{t-1} + (1 - \rho_2) g_t,
where g_t = ∇_ϑ f(ϑ_{t−1}) denotes the gradient of the loss function with respect to the weights ϑ_{t−1} for the current sample. Equation (10) represents the decoupled weight decay update, in which ψ_t denotes the step size and sign(·) denotes the sign function. ρ_1 and ρ_2 denote the decay rates of the two interpolation terms, and their corresponding default values are set to 0.9 and 0.99. m_t is the momentum vector at the t-th iteration, which Equation (11) updates by interpolating between the previous momentum and the current gradient.
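A compact sketch of one Lion step corresponding to Equations (9)–(11) is shown below; this is a didactic reimplementation under the stated defaults, not the library code actually used for training.

```python
import torch

def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion update: the step direction is the sign of an interpolation between the
    stored momentum and the current gradient, plus decoupled weight decay (Eq. (10));
    the momentum is then refreshed with the second interpolation factor (Eq. (11))."""
    update = torch.sign(beta1 * momentum + (1 - beta1) * grad) + weight_decay * param
    new_param = param - lr * update
    new_momentum = beta2 * momentum + (1 - beta2) * grad
    return new_param, new_momentum

# Example on a single parameter tensor
p, g, m = torch.randn(3), torch.randn(3), torch.zeros(3)
p, m = lion_step(p, g, m)
```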

4. Experiments and Analysis

To assess the efficacy of the proposed CNSST approach, intensive experiments are performed on four well-known HSIC datasets. Next, we describe the datasets utilized and the experimental settings, and then compare the CNSST against several state-of-the-art models and analyze the results to demonstrate its validity.

4.1. Datasets Description

In the experimental evaluations, four HSIC datasets are adopted to assess the proposed CNSST approach. These datasets include the University of Pavia (UP), Salinas Scene (SV), Indian Pines (IP), and Zaoyuan region (ZY). The corresponding pseudo-color and ground-truth images for these four datasets are depicted in Figure 8. Details about the categories and samples of the corresponding datasets are provided in Table 1, Table 2, Table 3 and Table 4. The details are as follows:
UP: It was acquired utilizing the ROSIS-3 sensor through an aerial survey performed over the Pavia region, Italy. It includes 610 × 340 pixels, containing a combined count of 42,776 labeled samples distributed among 9 distinct classes. Notably, this dataset encompasses 103 spectral bands, spanning a wavelength range from 0.43 μm to 0.86 μm.
SV: It was gathered utilizing the AVIRIS sensor, equipped with 224 spectral bands, over Salinas Valley, USA. The dimensions of the images within this dataset are 512 × 217 pixels. It contains 54,129 labeled samples distributed among 16 distinct classes and encompasses 204 bands in the wavelength range of 0.4 μm to 2.5 μm.
IP: It was gathered utilizing the AVIRIS sensor over the region of Indiana, USA. It includes 16 distinct classes in total and spans wavelengths from 0.4 μm to 2.5 μm. The scene comprises 145 × 145 pixels and 220 spectral bands, and a combined count of 10,249 labeled samples is available within this dataset.
ZY: It was collected by the OMIS sensor over the Zaoyuan region, China. The scene contains 137 × 202 pixels and 80 spectral bands, with the first 64 spectral bands in the range of 0.4 μm to 1.1 μm and the last 16 covering the region of 1.06 μm to 1.7 μm. The available ground-truth map contains only 23,821 labeled samples and 8 landcover classes.
Figure 8. Pseudo-color and ground-truth images of the four datasets. Pseudo-color images of the UP, SV, IP, and ZY datasets are depicted in (a,c,e,g), while the corresponding ground-truth maps are displayed in (b,d,f,h).
Table 1. Details of the categories and sample numbers for the UP dataset.
Category  Name                   Total Number    Category  Name                   Total Number
N1        Asphalt                6631            N6        Bare Soil              5029
N2        Meadows                18,649          N7        Bitumen                1330
N3        Gravel                 2099            N8        Self-Blocking Bricks   3682
N4        Trees                  3064            N9        Shadows                947
N5        Painted metal sheets   1345
Table 2. Details of the categories and sample numbers for the SV dataset.
Category  Name                      Total Number    Category  Name                        Total Number
N1        Broccoli-green-weeds-1    2009            N9        Soil-vinyard-develop        6203
N2        Broccoli-green-weeds-2    3726            N10       Corn-senesced-green-weeds   3278
N3        Fallow                    1976            N11       Lettuce-romaine-4wk         1068
N4        Fallow-rough-plow         1394            N12       Lettuce-romaine-5wk         1927
N5        Fallow-smooth             2678            N13       Lettuce-romaine-6wk         916
N6        Stubble                   3959            N14       Lettuce-romaine-7wk         1070
N7        Celery                    3579            N15       Vinyard-untrained           7268
N8        Grapes-untrained          11,271          N16       Vinyard-vertical-trellis    1807
Table 3. Details of the categories and sample numbers for the IP dataset.
Category  Name                 Total Number    Category  Name                           Total Number
N1        Alfalfa              46              N9        Oats                           20
N2        Corn-notill          1428            N10       Soybean-notill                 972
N3        Corn-mintill         830             N11       Soybean-mintill                2455
N4        Corn                 237             N12       Soybean-clean                  593
N5        Grass-pasture        483             N13       Wheat                          205
N6        Grass-trees          730             N14       Woods                          1265
N7        Grass-pasture-mowed  28              N15       Buildings-Grass-Trees-Drives   386
N8        Hay-windrowed        478             N16       Stone-Steel-Towers             93
Table 4. Details of the categories and sample numbers for the ZY dataset.
Category  Name            Total Number    Category  Name             Total Number
N1        Vegetable       2625            N5        Corn             1425
N2        Grape           1302            N6        Terrace/Grass    1484
N3        Dry vegetable   3442            N7        Bush-Lespedeza   1808
N4        Pear            10,243          N8        Peach            1492

4.2. Experimental Settings

To better compare the classification performance (experimental classification accuracy and classification visual maps) of the different methods, during the selection of experimental training data, 1% of the labeled samples are uniformly chosen for training from the UP and SV datasets, which contain a substantial number of labeled samples (42,776 and 54,129, respectively), while the remainder is used for testing. However, for the IP and ZY datasets, which have relatively fewer labeled samples (10,249 and 23,821, respectively), 10% and 2.5% of the samples are respectively chosen for training, while the rest serve for testing. It is worth noting that all experimental samples were chosen randomly. To evaluate the CNSST model’s performance, we assessed the outcomes using three well-established metrics: overall accuracy (OA), average accuracy (AA), and the Kappa coefficient (Ka). Every phase of model training and testing was performed on a computer system equipped with 64 GB of RAM, an RTX 3070Ti GPU, and the PyTorch framework.
In addition, we performed a comparative analysis of the CNSST model against several state-of-the-art classification approaches, including SVM [6], SSRN [45], CDCNN [46], FDSSC [19], DBMA [47], SF [27], SSFTT [31], GAHT [29], and BS2T [34]. The CNSST framework takes the original 3D HSI as input, without any pre-processing for dimensionality reduction. For optimizing the performance of CNSST, the optimal experimental parameters are empirically adopted (summarized in the sketch below). The batch size, number of epochs, and learning rate are set to 64, 200, and 0.0001, respectively. The convolution kernel size is set to 3 × 3 × 7, and there are a total of 5 convolution layers in the architecture (the hierarchical dense spectral–spatial feature fusion block consists of 4 layers, with each layer having 12 convolutional kernel channels). After repeating the test twenty times for each experimental method, the final classification outcome is determined by taking the average of the results from each test.
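For reference, the settings listed above can be collected into a single configuration dictionary; the key names are our own, and the Lion optimizer itself is assumed to come from an external implementation.

```python
# Hedged summary of the training configuration described in Section 4.2
config = {
    "batch_size": 64,
    "epochs": 200,
    "learning_rate": 1e-4,
    "patch_size": 11,            # 11 x 11 spatial neighborhood (chosen below)
    "kernel_size": (3, 3, 7),
    "dense_layers": 4,
    "growth_rate": 12,           # convolutional kernel channels per dense layer
    "optimizer": "Lion",         # external implementation, e.g. a lion-pytorch package
    "repetitions": 20,           # results are averaged over twenty runs
}
```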
The spatial patch size has a significant influence on HSIC. As the size of the spatial patch in the CNN increases, the model can cover more pixel information. This helps to enhance the HSIC accuracy because a larger patch can collect more HSI characteristics and contextual information. However, too large a spatial patch may also introduce too much irrelevant pixel information, which can cause confusion and misclassification [21]. Hence, the spatial patch sizes were set to 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, and 15 × 15 to explore their influence on the categorization performance. The OA outcomes of the CNSST approach on the UP, SV, IP and ZY datasets at various spatial sizes are reported in Figure 9. According to the classification accuracies under different spatial patch sizes on the four datasets, the patch size of the proposed CNSST was set to 11 × 11; a sketch of how such patches can be extracted is given below.
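As a concrete illustration of how such patches are formed around each labeled pixel, the following is a hedged NumPy sketch; the mirror padding at the image borders is an assumption, since the paper does not specify its border handling.

```python
import numpy as np

def extract_patch(cube, row, col, patch_size=11):
    """Extract a patch_size x patch_size x bands neighborhood centered on a target
    pixel, mirror-padding the borders; a common way to build CNN inputs for HSIC."""
    half = patch_size // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    return padded[row:row + patch_size, col:col + patch_size, :]

# Example: an HSI cube of 145 x 145 pixels with 200 bands (roughly the IP scene)
cube = np.random.rand(145, 145, 200).astype(np.float32)
print(extract_patch(cube, 72, 72).shape)  # (11, 11, 200)
```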

4.3. Experiment Outcomes and Discussion Analysis

The results, categorized using various approaches for the UP dataset, are demonstrated in Table 5, with the highest category-specific precision highlighted in bold. It is observed that CNSST has the highest categorization accuracy with 99.30%, 99.08%, and 99.07% for OA, AA and Ka, respectively. The OA categorization accuracy of SVM is 88.69%, which is 8.39%, 8.96%, 7.31%, 8.54%, 9.18%, 10.24%, and 10.61% lower than the DL-based SSRN, FDSSC, DBMA, SSFTT, GAHT, BS2T, and CNSST approaches, respectively. The reason is that the DL-based approaches (except for CDCNN and SF) can automatically extract the SAS characteristic information of HSI pixels and are superior in their characteristic extraction capability to the traditional SVM approach based on manual feature extraction. However, the classification accuracies of the DL-based methods, CDCNN and SF, are only 87.90% and 88.67% (similar to the classification accuracies of SVM and lower classification accuracies relative to other DL-based methods). The reason may be that there are limitations in the network structure design of CDCNN based on ResNet and multi-scale convolution, which results in CDCNN’s poor characteristic extraction capacity. The SF approach merely utilizes the group spectral embedding and transform encoder to acquire long-range dependency information, which fails to adequately use the local spectral–spatial feature information of HSI. In contrast, the classification accuracies of SSFTT, BS2T, and CNSST are 8.56%, 10.26%, and 10.63% higher, respectively, than that of SF, because they are not only able to utilize the transformer to efficiently establish long-range dependencies between HSI pixels, but also utilize CNN to efficiently augment the model’s ability to capture the local spectral–spatial characteristic information. Moreover, the accuracies of BS2T and CNSST are 98.93% and 99.30%, respectively, which are both higher than SSFTT. This is because SSFTT merely adopts one 3D-CNN and one 2D-CNN layer for extracting the local spectral–spatial signature information, which fails to extract the local signature information of HSI at a deeper level. However, BS2T and CNSST adopt the DenseNet-based structure, which can efficiently exploit the hierarchical local signature information from different convolutional layers, while also capturing the long-range dependency between HSI pixels with the transformer.
The classification maps for the various approaches on UP are depicted in Figure 10. FDSSC, BS2T, and CNSST have relatively fewer misclassified pixels and better intra-class homogeneity, generating relatively smoother classification visual maps. Meanwhile, the visual maps of the other methods have relatively more misclassified labels and poorer homogeneity. This may be because FDSSC, using 3D-CNN dense SAS networks with various kernel sizes, can adequately capture different hierarchical levels of detailed information on spectral–spatial characteristics. Meanwhile, BS2T and CNSST not only exploit the 3D-CNN DenseNet’s ability to efficiently extract local hierarchical features, but also the transformer’s ability to model the long-range global characteristics of HSI pixels, and thus their categorization performance is better than that of FDSSC. In addition, the categorization accuracy of the proposed CNSST is 0.37% higher than that of BS2T, and CNSST has significantly fewer misclassification labels than BS2T in the lower left corner of the classification map. This is because BS2T employs a two-branch DenseNet structure to acquire the SAS characteristics of the HSI separately, which fails to efficiently build up the correlation between SAS characteristics and may result in the loss of characteristic information. In contrast, the proposed CNSST employs a single 3D-CNN-based hierarchical DenseNet structure to capture SAS information simultaneously, which not only establishes a correlation between the SAS information of the HSI pixels, but also obtains richer inductive bias and more discriminative spectral–spatial joint feature information; this information (inductive bias and contextual positional information) is then input into the transformer, which enables the model to be more position-aware and spectral-aware. In addition, CNSST also utilizes the new Lion optimizer to boost its categorization performance.
From Table 6, it can be seen that CNSST still achieves the optimal categorization accuracies of OA, AA and Ka, which are 99.35%, 99.52%, and 99.28%, respectively. Also, the classification accuracies of all the individual categories reached more than 99.04%, except for Vinyard-untrained (category N15) and Fallow-roughplow (category N4), which had classification accuracies of 97.78% and 98.13%, respectively. The classification accuracy of FDSSC based on 3D-CNN hierarchical DenseNet is 2.06% and 11.15% higher than SSRN and CDCNN based on the simple 3D-CNN structure, respectively. Similarly, the classification accuracies of CNSST and BS2T are significantly superior to SF, SSFTT, and GAHT in the transformer-based approaches. This further illustrates that the hierarchical DenseNet can effectively capture the characteristic information at different hierarchical levels, and has more powerful characteristic capture capabilities than methods based on simple CNN architectures. Moreover, the categorization accuracy of CNSST is 0.9% higher than BS2T on OA. This also illustrates that CNSST can effectively establish the correlation between the SAS feature information, reduce the loss of information, and obtain rich inductive bias information and spectral–spatial joint feature information. This information is then input into the spectral–spatial transformer with position encoding, which can effectively enhance the model’s spectral–spatial feature extraction capabilities.
The classification maps of the various approaches on SV are depicted in Figure 11. As FDSSC, based on a 3D-CNN hierarchical DenseNet, can fully exploit the SAS characteristic information of the various convolutional layers, it significantly outperforms SSRN, CDCNN, and DBMA in the classification maps. Among the transformer-based approaches, the SF, SSFTT and GAHT approaches suffer from obvious misclassified pixels and relatively poor homogeneity. The reason for this is that SF merely exploits the transformer to capture long-range dependence information, and GAHT merely utilizes the group-aware hierarchical transformer to constrain MHSA to the local spatial–spectral context; approaches based on the transformer structure alone are limited in obtaining localized characteristic information. Although SSFTT adopts a 3D-CNN and a 2D-CNN to enhance the extraction of local feature information, its structure is relatively simple, resulting in a limited capacity for local feature extraction. Comparatively, the BS2T and CNSST approaches, which combine the advantages of the transformer and DenseNet, perform better in the classification visual maps. Moreover, CNSST has fewer misclassified labels and better smoothness than BS2T. This further illustrates the effectiveness of CNSST in establishing the correlation between SAS feature information, reducing information loss and enhancing the spectral–spatial transformer feature extraction.
From Table 7, the proposed CNSST approach still achieves the highest accuracy of 98.84% on OA. SVM, CDCNN, and SF have the lowest accuracies on OA, which are 79.72%, 74.10%, and 87.46%, respectively. The classification maps of the various approaches on IP are depicted in Figure 12. It is also obvious that their maps contain a lot of noise and mislabels. This further indicates the limitations of the network structure design of the ResNet and multiscale CNN-based CDCNN, whose feature extraction capability is poor, even lower than that of the traditional hand-crafted SVM. Furthermore, SF, which is based on group spectral embedding and a transformer encoder, fails to efficiently utilize the local feature information of the HSI pixels, even though it can acquire the long-range dependency information among HSI pixels. The DenseNet-based FDSSC achieves a classification accuracy of 98.17% with relatively few misclassified pixels in the classification visual map. However, the 3D-CNN DenseNet-based FDSSC fails to exploit the long-distance dependency between HSI pixels. BS2T and CNSST combine the strengths of both the hierarchical DenseNet and transformers, and effectively realize the extraction of local–global SAS features. Moreover, CNSST not only outperforms BS2T by 0.35% in classification accuracy, but also has fewer misclassified labels on the classification visual maps and is relatively smoother. This proves the effectiveness of CNSST in enhancing the correlation between SAS feature information and in introducing rich inductive bias information into the transformer with position coding to strengthen the local–global feature extraction of the model.
From Table 8, it is obvious that the classification accuracy achieved by the proposed CNSST approach is still the highest, with OA, AA, and Ka of 98.27%, 97.70%, and 97.73%, respectively. In terms of OA, the classification accuracy of CNSST is higher than those of the GAHT, SSFTT, SF, and FDSSC approaches by 0.84%, 1.66%, 4.28%, and 0.86%, respectively. In addition to the classification results for the test labeled pixels in the reference map, we also considered background pixels (i.e., pixels that were not assigned any labels) in the classification tests on the ZY dataset to show the consistency of the classification results in the classification visual map. From Figure 13, the CNSST method has significantly fewer misclassified labels than the other methods and better preserves edge detail information. This further demonstrates that the CNSST approach, combining the hierarchical DenseNet and transformers, can more adequately realize local–global SAS feature extraction for HSI pixels. Moreover, the proposed CNSST method significantly outperforms BS2T both in terms of the classification accuracy and the classification visual map, which further demonstrates the validity of CNSST in strengthening the correlation between SAS features as well as introducing location information and rich inductive bias information into the transformer to reinforce the feature extraction capability of the model.

4.4. Performance with Various Percentages of Training Samples

To further verify the sample sensitivity of the CNSST method, a comparison experiment of the different methods at varying sample proportions was conducted. In the experiments, labeled samples amounting to 0.5%, 0.75%, 1%, 2%, and 3% were randomly chosen from the UP and SV datasets for training. Similarly, 6%, 7%, 8%, 9%, and 10% of the samples were randomly chosen from the IP dataset. For the ZY dataset, 0.5%, 1%, 1.5%, 2%, and 2.5% of the samples were randomly selected (a sketch of such per-class random splitting is given below). The classification accuracies of the various approaches with various percentages of training samples on the UP, SV, IP and ZY datasets are presented in Figure 14. Notably, the classification accuracies of the SVM and CDCNN methods on the ZY dataset under small samples are too low (even below 80.0%); therefore, some curves in subfigure (d) are not shown, for a better visual comparison. As depicted in Figure 14, the categorization accuracy of all models rises with the increase in training samples. With a decrease in training samples, the classification accuracies of all models continue to decrease, and the curves of the other models (apart from CNSST) show relatively large variations on the UP and SV datasets. However, CNSST changes relatively smoothly and still maintains the optimal categorization accuracy on all four datasets. This also demonstrates that CNSST is relatively insensitive to the proportion of training samples and has relatively good robustness.
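The per-class random split used in these experiments can be sketched as follows; this is a hypothetical helper written for illustration, not the authors' code.

```python
import numpy as np

def split_train_test(labels, train_fraction=0.01, seed=0):
    """Randomly draw a fixed fraction of labeled pixels per class for training
    (as in Section 4.4); the remainder is used for testing."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(labels):
        if cls == 0:                       # 0 is typically the unlabeled background
            continue
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_fraction * idx.size)))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)
```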

4.5. Parameter Sizes and Runtimes

The parameter sizes and runtimes of the various methods on the four datasets are shown in Table 9, where Par denotes the size of the parameters. It is evident that the good classification performance of our proposed CNSST method on the four datasets comes at the expense of the computational complexity of the model. Notably, despite the relatively large number of parameters of the CNSST model, its training time is not the longest. This is because, to reduce the training cost of the model and avoid overfitting, the proposed CNSST method employs an early stopping strategy. Furthermore, the batch size of the proposed CNSST is 64, while that of the BS2T and FDSSC methods is 16. A larger batch size usually implies that more samples can be processed in parallel during forward propagation, thus effectively reducing the inference time.

4.6. Ablation Experiments

To better validate the efficacy of the modules in the proposed CNSST approach, ablation experiments were conducted. The ablation experiment outcomes on the various datasets are presented in Figure 15. Among them, SCNSST indicates that the proposed CNSST does not utilize hierarchical dense blocks to obtain hierarchical spectral–spatial characteristics from different convolutional layers, but rather utilizes a simple 3D-CNN structure (as shown in Figure 5, only the second stage is used, while the first stage is replaced with a conventional CNN network). The no-RPE variant means that the proposed CNSST does not utilize relative position encoding in the transformer. The no-RPT variant means that the proposed CNSST does not utilize the transformer with relative position encoding at all (as seen in Figure 5, solely employing the first stage without the second stage). Finally, no-Lion indicates that the traditional Adam optimizer is employed in the proposed CNSST instead of the new Lion optimizer.
From Figure 15, the CNSST approach achieves a significantly higher categorization accuracy than SCNSST, no-RPE, and no-RPT, which further demonstrates that CNSST with the hierarchical DenseNet can adequately exploit the spectral–spatial joint characteristic information at various levels and acquire richer inductive bias information. Secondly, introducing this information into the transformer with 2D relative position encoding allows for a better characterization of the spatial position information of HSI pixels and strengthens the position-aware and spectral-aware capabilities of the model. Moreover, the spectral–spatial transformer with relative position encoding can effectively establish long-range dependencies between HSI pixels and enhance the feature extraction capabilities of the model. Finally, the classification outcome of the proposed CNSST outperforms that of no-Lion, which further demonstrates the effectiveness of the new Lion optimizer employed in this work in enhancing the model’s categorization performance.

5. Conclusions

In this study, we propose an end-to-end, yet structurally simple CNSST framework for spectral–spatial HSI classification, which organically integrates a 3D-CNN-based hierarchical feature fusion network with a spectral–spatial transformer structure that introduces inductive bias properties information. On the one hand, the 3D-CNN-based hierarchical network is utilized to establish the correlation between SAS information and capture richer inductive bias and spectral–spatial hierarchical feature information, effectively introducing abundant inductive bias in the hierarchical network into the transformer. On the other hand, the spectral and inductive bias information is synthesized into the MHSA of the spectral–spatial transformer to empower it with both spectral and positional awareness, which enables the transformer to not only efficiently utilize the long-range dependencies between HSI pixels, but also to improve the capture of local characteristic information. Experimental results performed on four HSIC datasets demonstrate that CNSST outperforms other state-of-the-art networks in both quantitative and visualization analyses, and maintains an excellent classification performance with small samples. Furthermore, extensive ablation experiments also further prove the effectiveness of the different components of CNSST, including the Lion optimizer, in improving HSIC performance.
However, the good classification results of the CNSST approach come at the cost of relatively high computational complexity. Future work will investigate lightweight designs to decrease the computational cost of the model, as well as self-supervised or semi-supervised spectral–spatial transformer networks for HSIC to alleviate the model's dependence on labeled samples.

Author Contributions

S.L., L.L. and S.Z. conceived the experiments. S.L., L.L., S.Z., Y.Z., A.P. and X.W. executed the experiments and wrote and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under Grant 62361042, the Training Program for Academic and Technical Leaders of Jiangxi Province under Grant 20225BCJ23019, the Jiangxi Provincial Natural Science Foundation under Grants 20224ACB202002, 20224BAB202007, and 20232BAB202039, and the China Scholarship Council.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN        Convolutional neural network
HSIC       Hyperspectral image classification
HSI        Hyperspectral image
CNSST      Convolutional network and spectral-spatial transformer
SAS        Spectral and spatial
MHSA       Multi-head self-attention mechanism
DL         Deep learning
FC         Fully connected
SVM        Support vector machines
DenseNet   Densely connected convolutional network
GAHT       Group-aware hierarchical transformer
FDSSC      Fast dense spectral-spatial convolution framework
CT         Convolutional transformer
SSFTT      Spectral–spatial feature tokenization transformer
BS2T       Bottleneck spectral–spatial transformer
BN         Batch normalization
FM         Feature map
AF         Activation function
SA         Self-attention mechanism
NLP        Natural language processing
UP         University of Pavia
SV         Salinas scene
IP         Indian Pines
OA         Overall accuracy
AA         Average accuracy
Ka         Kappa coefficient

Figure 1. Configuration of 3D-CNN with a BN layer.
Figure 2. Structure of the hierarchical DenseNet.
Figure 3. Configuration of the dense block employed in the CNSST approach, where BN + Mish represents a BN layer and a Mish AF layer.
Figure 4. The architecture of the self-attention mechanism, where Q, K, V, and d_K represent the query, key, value, and the dimension of the key K, respectively. The query holds the information to be extracted, the key serves as the index, and the value encapsulates the feature to be fetched. Softmax denotes the AF.
Figure 5. General framework of CNSST for HSIC. The network is organized into two stages (the hierarchical DenseNet block and the spectral–spatial transformer block). The previous stage is utilized to extract the local SAS feature properties of the HSI pixels and to obtain more abundant inductive bias. In the latter stage, a spectral–spatial transformer is employed to effectively establish long-range dependencies among HSI pixels, and to improve their location-awareness and spectral-awareness capabilities.
Figure 6. Structure of the 3D-CNN-based hierarchical Dense spectral–spatial feature fusion network.
Figure 7. The architecture of MHSA in the spectral–spatial transformer.
Figure 9. OA of the CNSST approach on UP, SV, IP, and ZY datasets for various spatial patch sizes.
Figure 10. Classification maps of various approaches on UP dataset. (a) SVM (OA = 88.69%). (b) SSRN (OA = 97.08%). (c) CDCNN (OA = 87.90%). (d) FDSSC (OA = 97.65%). (e) DBMA (OA = 96.00%). (f) SF (OA = 88.67%). (g) SSFTT (OA = 97.23%). (h) GAHT (OA = 97.87%). (i) BS2T (OA = 98.93%). (j) CNSST (OA = 99.30%).
Figure 11. Classification maps of various approaches on SV dataset. (a) SVM (OA = 88.90%). (b) SSRN (OA = 94.75%). (c) CDCNN (OA = 85.66%). (d) FDSSC (OA = 96.81%). (e) DBMA (OA = 96.12%). (f) SF (OA = 91.38%). (g) SSFTT (OA = 93.56%). (h) GAHT (OA = 96.36%). (i) BS2T (OA = 98.45%). (j) CNSST (OA = 99.35%).
Figure 12. Classification maps of various approaches on IP dataset. (a) SVM (OA = 79.72%). (b) SSRN (OA = 98.06%). (c) CDCNN (OA = 74.10%). (d) FDSSC (OA = 98.17%). (e) DBMA (OA = 95.06%). (f) SF (OA = 87.46%). (g) SSFTT (OA = 97.00%). (h) GAHT (OA = 98.41%). (i) BS2T (OA = 98.49%). (j) CNSST (OA = 98.84%).
Figure 13. Classification maps of various approaches on the ZY dataset. (a) SVM (OA = 88.24%). (b) SSRN (OA = 95.56%). (c) CDCNN (OA = 87.69%). (d) FDSSC (OA = 97.41%). (e) DBMA (OA = 96.77%). (f) SF (OA = 93.99%). (g) SSFTT (OA = 96.61%). (h) GAHT (OA = 97.43%). (i) BS2T (OA = 98.01%). (j) CNSST (OA = 98.27%).
Figure 14. Accuracy of various approaches with various percentages of training samples on four datasets. (a) UP dataset. (b) SV dataset. (c) IP dataset. (d) ZY dataset.
Figure 15. Outcome of ablation experiments on various datasets.
Table 5. Classification accuracy (%) achieved by various approaches on the UP dataset with 1% training samples in each category. The bold denotes the highest value.
Category   SVM   SSRN   CDCNN   FDSSC   DBMA   SF   SSFTT   GAHT   BS2T   CNSST
N188.6299.1690.0098.8697.2388.0997.4797.8599.3497.60
N292.0698.4794.7399.3798.5899.7599.1199.5499.4199.98
N373.6491.3042.1985.1385.6697.5888.3189.4692.1499.19
N493.9599.9897.4799.0898.6591.0696.8697.3798.3499.21
N596.4499.8997.8499.7899.3199.5699.89100.099.5798.80
N684.9197.9784.0999.0298.4599.5997.9797.8799.0299.79
N773.6597.7070.2099.9692.7656.2591.1893.3699.7699.77
N881.7286.1475.2893.1086.5094.6392.2895.0291.2998.67
N999.9399.6188.0997.6897.1694.0999.5499.6995.8998.78
OA88.69 ± 0.7697.08 ± 0.7187.90 ± 1.4697.65 ± 1.2196.00 ± 1.0788.67 ± 1.0197.23 ± 0.3797.87 ± 0.1098.93 ± 0.1499.30 ± 0.16
AA87.21 ± 1.3496.69 ± 0.9882.21 ± 2.1796.89 ± 1.7794.89 ± 1.3083.80 ± 1.6895.85 ± 0.5896.68 ± 0.1498.31 ± 0.1899.08 ± 0.12
Ka84.89 ± 1.0696.13 ± 0.9583.92 ± 1.9396.89 ± 1.6194.69 ± 1.4284.88 ± 1.3796.33 ± 0.4997.17 ± 0.1498.59 ± 0.1799.07 ± 0.22
Table 6. Classification accuracy (%) achieved by various approaches on the SV dataset with 1% training samples in each category. The bold denotes the highest value.
Category   SVM   SSRN   CDCNN   FDSSC   DBMA   SF   SSFTT   GAHT   BS2T   CNSST
N199.7899.9740.00100.0100.094.4199.1799.96100.0100.0
N298.9797.7474.9797.0099.9799.4499.84100.099.95100.0
N391.1798.7792.6398.7798.8995.6196.9998.5299.5499.70
N497.7595.8795.2196.5594.8593.3699.4498.7597.5398.13
N595.7494.0292.0999.6798.8092.7396.7798.8799.8999.90
N699.9099.8398.6899.9899.2799.0799.7199.9299.9499.99
N797.64100.096.6399.9799.9898.5699.5699.8999.9099.33
N873.8187.7676.3495.5692.3482.6389.7793.0497.6499.18
N998.4899.6098.3899.7499.8597.2699.0899.9199.95100.0
N1088.2297.9587.7899.5098.1990.8794.5797.4098.9299.39
N1191.0997.2989.0497.2093.2688.8295.5498.5499.94100.00
N1296.3799.3590.2299.7298.7098.7299.8999.9099.94100.00
N1393.8696.0092.5099.7399.9293.4098.0898.5999.4199.95
N1496.1997.9597.2798.8495.8297.2594.6897.8298.0099.04
N1576.2397.2962.1890.6790.0982.2876.5087.4594.4697.78
N1698.1199.3599.11100.0100.093.2096.5897.6799.98100.00
OA88.90 ± 0.8094.75 ± 1.0285.66 ± 3.4496.81 ± 1.5896.12 ± 1.4391.38 ± 0.3493.56 ± 0.7496.36 ± 0.3698.45 ± 0.4099.35 ± 0.20
AA93.33 ± 0.3597.11 ± 0.8786.45 ± 5.1098.20 ± 0.6197.33 ± 1.0993.60 ± 0.6596.01 ± 0.4797.89 ± 0.1998.92 ± 0.1899.52 ± 0.11
Ka87.60 ± 0.9094.15 ± 1.1483.96 ± 3.9496.45 ± 1.7695.68 ± 1.5990.41 ± 0.3892.82 ± 0.8395.95 ± 0.4098.27 ± 0.4499.28 ± 0.23
Table 7. Classification accuracy (%) achieved by various approaches on the IP dataset with 10% training samples in each category. The bold denotes the highest value.
Category   SVM   SSRN   CDCNN   FDSSC   DBMA   SF   SSFTT   GAHT   BS2T   CNSST
N161.3388.9448.2598.0297.6635.6782.7078.9196.5896.04
N271.1598.6073.5399.2491.8481.1096.4698.8298.6199.24
N375.1896.9672.5598.7595.9383.2897.2298.5299.2498.88
N459.4393.1970.0398.2195.7072.8494.9497.4797.3397.42
N590.4399.2794.1898.3897.5887.4095.1897.3098.8899.66
N688.1299.5695.0699.4698.7397.6399.4599.7999.2299.86
N785.3297.0367.5084.9884.5276.3699.09100.057.7162.98
N889.6199.6888.2199.9499.0197.1299.31100.0100.0100.0
N973.5889.8761.9586.6688.9951.2577.592.598.82100.0
N1074.8596.7670.9197.0092.7985.3796.6897.9997.4397.65
N1177.5698.1166.6597.7895.4290.2196.8598.8098.9599.48
N1271.2798.1464.8797.2693.9770.1894.0297.0598.3798.82
N1391.52100.098.5698.1099.6597.3199.8799.5199.39100.0
N1491.6898.8587.1399.1498.4995.9499.2698.9999.0998.27
N1575.8798.2982.2496.7292.6485.8294.2395.2197.5398.92
N1697.2496.8797.2194.2997.6797.0298.6398.6493.6897.82
OA79.72 ± 0.7598.06 ± 0.6474.10 ± 3.6698.17 ± 0.7795.06 ± 2.0887.46 ± 0.6297.00 ± 0.6798.41 ± 0.2998.49 ± 0.1398.84 ± 0.12
AA79.63 ± 1.9796.88 ± 0.5577.43 ± 5.3596.50 ± 1.2994.92 ± 1.0181.53 ± 1.5295.09 ± 0.3696.15 ± 1.2595.69 ± 0.4696.56 ± 0.32
Ka76.75 ± 0.8697.79 ± 0.7369.90 ± 4.8797.92 ± 0.8894.37 ± 1.6485.70 ± 0.7096.58 ± 0.8698.19 ± 0.3398.28 ± 0.1598.68 ± 0.14
Table 8. Classification accuracy (%) achieved by various approaches on the ZY dataset with 2.5% training samples in each category. The bold denotes the highest value.
Category   SVM   SSRN   CDCNN   FDSSC   DBMA   SF   SSFTT   GAHT   BS2T   CNSST
N188.1799.4587.4999.3496.0093.5096.4696.7697.9798.49
N290.0493.4090.1588.3298.1594.9697.0896.6792.4296.50
N388.5995.0683.7698.7797.7595.7797.3698.4198.8997.80
N492.3498.0392.2598.5597.6496.8397.8198.6399.4999.14
N567.0088.8767.9794.9294.5792.1592.0096.1393.4497.05
N688.0897.0788.8997.6195.1195.2695.9196.9097.8997.76
N794.5196.9994.3796.6896.7494.9096.4195.6096.7496.91
N869.9896.3568.1397.0493.3476.6593.4993.8097.7898.03
OA88.24 ± 0.5995.56 ± 0.9187.69 ± 0.8197.41 ± 0.8796.77 ± 0.3593.99 ± 0.5096.61 ± 0.4997.43 ± 0.2598.01 ± 0.1698.27 ± 0.23
AA84.84 ± 1.1395.65 ± 1.5184.13 ± 1.7496.40 ± 0.8196.16 ± 0.9892.50 ± 1.0695.81 ± 1.0196.61 ± 0.7596.83 ± 0.5497.70 ± 0.37
Ka84.53 ± 0.7595.48 ± 1.1883.79 ± 1.1096.59 ± 1.1395.76 ± 0.4492.15 ± 0.7195.55 ± 0.6396.64 ± 0.3197.40 ± 0.1897.73 ± 0.17
Table 9. The parameter sizes and runtimes of the various methods on four datasets.
Dataset / Metric   SVM   SSRN   CDCNN   FDSSC   DBMA   SF   SSFTT   GAHT   BS2T   CNSST
UPPar/M-0.2170.6280.6510.3210.1640.4840.9271.4902.957
Train/s20.79702.334.213563.2144.32373.93133.10309.911102.3528.23
Test/s8.7835.7917.2584.46122.8390.5622.4971.65391.64226.48
SVPara/M-0.3701.0821.2510.6180.3030.9500.9731.6744.578
Train/s38.201083.345.875655.5429.43693.82344.03453.241470.71223.2
Test/s14.0572.8628.32185.38277.86100.4528.2544.59413.92557.11
IPPar/M0000.3641.0641.2270.6060.3430.9321.3661.6664.513
Train/s374.232273.473.298275.7772.291258.9593.18512.342607.22140.59
Test/s7.5211.144.4031.2242.7618.795.996.2763.0184.51
ZYPar/M-0.1800.5250.5070.2570.1390.3781.2241.4472.568
Train/s61.24802.625.033472.4241.00440.18165.29261.301890.1469.25
Test/s9.3315.986.2738.8054.889.162.486.61213.8995.64