Article

Multiscale Feature-Learning with a Unified Model for Hyperspectral Image Classification

Tahir Arshad, Junping Zhang, Inam Ullah, Yazeed Yasin Ghadi, Osama Alfarraj and Amr Gafar
1 School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
2 Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
3 Department of Computer Science, Al Ain University, Abu Dhabi P.O. Box 112612, United Arab Emirates
4 Computer Science Department, Community College, King Saud University, Riyadh 11437, Saudi Arabia
5 Mathematics and Computer Science Department, Faculty of Science, Menofia University, Shebin Elkom 6131567, Egypt
* Author to whom correspondence should be addressed.
Sensors 2023, 23(17), 7628; https://doi.org/10.3390/s23177628
Submission received: 4 August 2023 / Revised: 20 August 2023 / Accepted: 1 September 2023 / Published: 3 September 2023
(This article belongs to the Special Issue Machine Learning Based Remote Sensing Image Classification)

Abstract

In the realm of hyperspectral image classification, the pursuit of heightened accuracy and comprehensive feature extraction has led to the formulation of an advanced architectural paradigm. This study proposes a unified model that synergistically leverages the capabilities of three distinct branches: a swin transformer, a convolutional neural network, and an encoder–decoder. The main objective is to facilitate multiscale feature learning, a pivotal facet of hyperspectral image classification, with each branch specializing in a unique facet of multiscale feature extraction. The swin transformer, recognized for its competence in distilling long-range dependencies, captures structural features across different scales; simultaneously, the convolutional neural network undertakes localized feature extraction, engendering nuanced spatial information preservation. The encoder–decoder branch undertakes comprehensive analysis and reconstruction, fostering the assimilation of both multiscale spectral and spatial intricacies. To evaluate our approach, we conducted experiments on publicly available datasets and compared the results with state-of-the-art methods. Our proposed model obtains the best classification results compared to the others. Specifically, overall accuracies of 96.87%, 98.48%, and 98.62% were obtained on the Xuzhou, Salinas, and LK datasets, respectively.

1. Introduction

Hyperspectral image data are acquired through hyperspectral sensors, capturing both spatial and spectral information from the visible to the infrared spectrum for each pixel [1]. These images provide detailed spatial characteristics of objects along with their continuous diagnostic spectra [2]. Due to this valuable combination of multiscale spectral and spatial information, hyperspectral data find applications in various domains such as agriculture [3,4], mineralogy [5], earth observation [6], and other related fields [7,8,9]. How to classify hyperspectral images and extract multiscale features effectively is therefore a hot topic for researchers. Various data processing techniques have been explored to effectively utilize acquired hyperspectral images, including unmixing, detection, and classification [10]. In previous studies, traditional machine learning algorithms were employed for HSI classification, including k-nearest neighbor [11], logistic regression [12], Bayesian estimation [13], and support vector machines [14]. However, it was observed that these conventional classification approaches often resulted in misclassification. In addition, several methods for dimensionality reduction and spectral information extraction have been developed, such as principal component analysis [15], independent component analysis [16], and linear discriminant analysis [17]. However, these methods tend to overlook the spatial correlation among pixels in the spatial dimension, which is crucial for optimal spatial feature extraction. To address this limitation, various mathematical operators have been developed, such as the morphological profile [18], the extended morphological operator [19], and the extended multiattribute profile [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47].
In recent years, deep learning models, especially convolutional neural networks, have shown significant advantages over traditional methods in extracting more relevant and discriminative multiscale features. In [21], a stacked autoencoder and a deep belief network were applied to extract multiscale features; these methods required a 1D feature vector as input. In [22], a 2D-CNN was proposed that operates on data after principal component analysis-based dimensionality reduction. In [23], a more effective method for extracting spatial–spectral features with 3D-CNNs was proposed. In [24], HybridCNN was proposed, combining 3D- and 2D-CNN characteristics to reduce computational complexity and improve classification accuracy. In [25], the authors developed a two-stream residual deep feature fusion convolutional neural network in which two branches are fused to extract multiscale spatial–spectral features: one branch is used for global feature extraction, and the other for local feature extraction. Moreover, recurrent neural networks [26], generative adversarial networks [27], graph neural networks [28], and capsule networks [29] have been proposed. Conversely, encoder–decoder architectures are often employed in unsupervised multiscale feature learning to extract and reconstruct features from hyperspectral images [43]. Nonetheless, it is widely acknowledged that deep learning methodologies demand a substantial amount of labeled samples and an extensive number of training epochs, posing significant challenges in hyperspectral image classification.
Recently, the vision transformer [30] has exhibited better performance in the domain of computer vision. The transformer uses a self-attention mechanism to extract global dependencies. Attention mechanisms are also widely used in HSI classification. In [31], a spectral–spatial attention network was designed to extract discriminative features from the HSI cube. Much work has been carried out to apply the vision transformer to hyperspectral image classification. In [32], the spectral–spatial transformer (SST) was proposed; the authors used a VGGNet-like model for spectral and spatial feature extraction and modeled the relationships with a dense transformer. In [33], the authors proposed a new model called SpectralFormer, which learns groupwise spectral information and designs cross-layer transformer encoders. In [34], the authors introduced a spectral–spatial feature tokenization transformer for HSI classification; it uses 3D- and 2D-CNN models for multiscale spatial and spectral features, in addition to a Gaussian weighted tokenizer. In [35], the authors combined a convolutional network with a transformer in a model called CT Mixer and introduced a novel local–global multi-head self-attention. In [36], the authors proposed two pure transformer branches: a spectral branch and a spatial branch. For the spectral branch, a vision transformer is used to extract spectral features, and for the spatial branch, a swin transformer is used to extract spatial features; at the end, a branch fusion strategy is used to learn joint features. Inspired by [42], depending on the desired information, different types of features can be extracted, such as pixel-based and structure-based features. However, finding an efficient and universal approach to fuse these features optimally remains a challenge due to the subtle relationships within the data.
In [48], the authors proposed a model for deep multiscale feature learning for distorted image quality assessment: a two-branch network for distorted images and residual maps, consisting of spatial pyramid pooling and a feature pyramid, aiming to learn hierarchical multiscale features from images. In addition, the authors of [49] proposed DeepCervix, a hybrid deep feature fusion technique based on deep learning; in this method, various pre-trained deep learning models, such as VGG16, VGG19, XceptionNet, and ResNet50, are used to capture multiscale information and enhance classification performance. In [50], the authors proposed a multiscale feature fusion model based on ResNet50 and VGG16; they extract multiscale feature vectors from the last layer before the softmax layer of these two models. In [51], the authors proposed the DeepFusion model to extract structural similarity features and sub-structure features and feed them into an interaction feature fusion module to encode interaction features. In [52], the authors proposed a deep feature fusion classification network (DFFCNet) that uses EfficientNetV2 as a backbone network together with channel attention and a spatial attention module to fuse the features. These methods adopt concatenation or addition to fuse the features of the branches, which increases dimensionality and computational cost. Although these methods perform well and obtain multiscale features through fusion, they are not capable of extracting high-level multiscale features.
Therefore, to take advantage of different models and their abilities to extract multiscale features, and because it is difficult to obtain satisfactory results using only one type of feature, we propose a deep architecture built around a swin transformer [34]. The proposed model has three branches for multiscale feature extraction. First, we use a fully connected encoder–decoder with an attention module to reconstruct the spectral features. Second, we use a convolutional neural network to extract multiscale spatial and spectral features. Third, we use a swin transformer to extract high-level multiscale features.
The main contributions of this paper are as follows:
  • Various deep multiscale feature learning models were identified with distinct feature extraction abilities, including an encoder–decoder for reconstruction features, CNNs for spatial features, and transformers for long-range or structural features.
  • The proposed model combines and fuses these diverse multiscale features to create a comprehensive representation of hyperspectral images.
  • We developed an effective weight fusion strategy to merge multiscale features, optimizing the integration process.
  • We highlighted the model’s ability to capture and utilize valuable spatial–spectral information, leading to improved accuracy in classification results compared to existing approaches.
The rest of the paper is organized as follows: Section 2 explains the methods, Section 3 presents the dataset and experimental evaluation, Section 4 shows the results, Section 5 provides the discussion, and Section 6 concludes the paper.

2. Methods

Figure 1 presents the proposed model. This section explains the structure of the model and how it works. In this study, we attempt to extract multiscale features from an HSI cube and consider feature extraction methods at different levels. We then develop a three-branch network: a CNN for low- and mid-level features, a swin transformer for high-level features, and an encoder–decoder for reconstruction features.
(1) Swin Transformer
The pivotal disparities between the swin transformer and vision transformer (ViT) reside in their fundamental feature mapping strategies. ViT yields singular low-resolution feature maps due to its employment of a global self-attention mechanism, which consequently results in a quadratic computational complexity concerning input image dimension. On the other hand, swin transformers employ a novel approach of merging image patches to construct hierarchical feature maps, offering an ingenious solution that curtails the computational intricacy to a linear scale with respect to input image dimension. This approach utilizes local windows for self-attention computation, affording enhanced efficiency and scalability in processing images of varying sizes. The characteristics of the swin transformer make the model suitable for vision tasks such as image classification, image detection, and image segmentation. Swin transformers can extract multiscale features in the spatial dimension.
The swin transformer works by dividing the input image into non-overlapping patches. First, we apply a dimensionality reduction technique to the model input to reduce redundancy in the data as well as computational complexity; without dimensionality reduction, overfitting can also occur, so this step is important. Let x ∈ R^(m×n×b) be the input of the model, where m × n represents the height and width of the HSI image and b represents the number of bands. The dimensionality reduction layer y maps the b bands to three components, so that y(x) ∈ R^(m×n×3). The reduced input is then converted into patch tokens and fed to the swin transformer. The model consists of shifted-window self-attention, an MLP layer, and layer normalization layers. A patch-merging layer is adopted to reduce the number of tokens as the model becomes deeper and generates spatial features. The core component of the swin transformer is shifted-window self-attention, which is applied here to the image classification task. Figure 2 shows the blocks of the swin transformer.
The window partition process is computed as follows:
ẑ^l = W-MSA(LN(z^(l−1))) + z^(l−1),
z^l = MLP(LN(ẑ^l)) + ẑ^l,
ẑ^(l+1) = SW-MSA(LN(z^l)) + z^l,
z^(l+1) = MLP(LN(ẑ^(l+1))) + ẑ^(l+1),
where ẑ^l represents the output of the window-based multi-head self-attention (W-MSA) module and z^l represents the MLP output at the l-th block; SW-MSA denotes shifted-window multi-head self-attention and LN denotes layer normalization. The window-shifting process is shown in Figure 3.
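To make the window mechanism concrete, the following minimal NumPy sketch (our illustration, not the authors' code) shows how an (H, W, C) feature map is partitioned into local windows and how the cyclic half-window shift used by SW-MSA moves the window boundaries; the 7 × 7 window matches the default size used in our experiments.

```python
import numpy as np

def window_partition(x, window_size):
    """Split an (H, W, C) feature map into (num_windows, ws*ws, C) token groups."""
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size, W // window_size, window_size, C)
    x = x.transpose(0, 2, 1, 3, 4)                      # gather each window's pixels
    return x.reshape(-1, window_size * window_size, C)  # tokens per local window

def cyclic_shift(x, window_size):
    """Roll the map by half a window so the next attention layer (SW-MSA)
    mixes information across the previous window boundaries."""
    s = window_size // 2
    return np.roll(x, shift=(-s, -s), axis=(0, 1))

# Toy usage: a 28 x 28 map with 3 channels and 7 x 7 windows.
feat = np.random.rand(28, 28, 3)
tokens = window_partition(feat, 7)                        # (16, 49, 3)
shifted_tokens = window_partition(cyclic_shift(feat, 7), 7)
print(tokens.shape, shifted_tokens.shape)
```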
(2) Convolutional Neural Network (CNN)
The second model is the CNN, which has achieved great success in the computer vision domain. It can extract multiscale spatial–spectral features simultaneously. This characteristic facilitates the differentiation of ground object materials in classification tasks. Let the input image cube be X ∈ R^(H×W×C), where H × W represents the height and width of the image and C represents the channels of the image. The block diagram of the CNN block is shown in Figure 4.
After principal component analysis (PCA) is used to reduce the dimensionality of the data and remove noise and redundancy, the number of channels C is decreased to D. The reduced data are then converted into small patches P ∈ R^(S×S×D) centered at spatial location (a, b), each covering a spatial window of size S × S. Given M convolution kernels with weights w_i applied to the input features, the output can be computed as
Y = δ(w_i ∗ P),
where δ denotes the activation function and ∗ denotes the convolution operation. After the convolution layer, which extracts more discriminative features, a max-pooling layer is used to reduce the spatial size, where MP denotes the max-pooling operation:
p = MP(Y).
After the spatial size is reduced, a batch normalization layer is used to normalize the incoming batches, which helps to train the model faster. After the normalization layer, a rectified linear unit (ReLU) activation function is used to introduce non-linearity into the output neurons. One convolution block thus consists of a convolution layer, a max-pooling layer, a batch normalization layer, and a ReLU layer. In this paper, three blocks are used with 8, 16, and 32 filters, respectively; a stride of one and a 3 × 3 kernel size are used in all blocks.
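As a rough illustration of this branch (a sketch based on the description above, not the released implementation), the following Keras snippet stacks three Conv–MaxPool–BatchNorm–ReLU blocks with 8, 16, and 32 filters; the 13 × 13 patch size matches Section 3.4, while the 30 input components are an assumed PCA setting.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def cnn_branch(patch_size=13, n_components=30):
    inputs = tf.keras.Input(shape=(patch_size, patch_size, n_components))
    x = inputs
    for filters in (8, 16, 32):
        # one convolution block: Conv -> MaxPool -> BatchNorm -> ReLU
        x = layers.Conv2D(filters, kernel_size=3, strides=1, padding="same")(x)
        x = layers.MaxPooling2D(pool_size=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    x = layers.Flatten()(x)  # feature vector handed to the weight-fusion stage
    return models.Model(inputs, x, name="cnn_branch")

cnn_branch().summary()
```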
(3) Encoder–Decoder (ED)
The third part of the model is the encoder–decoder, an unsupervised feature extraction method. Generally, the encoder–decoder is implemented in one of two ways: fully connected [34] or fully convolutional. In this paper, we choose the fully connected form with a band attention module (BAM).
The fundamental principle behind this type of encoder–decoder is to reconstruct the band information. This involves the retrieval of complete spectral details using a limited set of informative bands. Figure 1 shows that the overall architecture encompasses three key components: the band attention module (BAM), band reconstruction weights (BRW), and reconstruction network (RecNet). Figure 5 shows the internal process of encoder–decoder.
The band attention module (BAM) is a function g that takes the input X and produces non-negative weights w ∈ R^(1×1×b):
w = g(X; θ_b),
where θ_b denotes the trainable parameters of the BAM. To enforce the non-negativity of the learned weights, a sigmoid function is incorporated into the output layer of the BAM module using the following formulation:
φ(w) = 1 / (1 + e^(−w)).
In order to establish an interaction between the initial inputs and their corresponding weights, a band-wise multiplication operation, denoted as BRW, is applied. This operation can be succinctly described as follows:
y = X ⊙ w.
Subsequently, we proceed with RecNet to reconstruct the initial spectral bands from their reweighted counterparts. Analogously, RecNet is characterized as a function h, which accepts the reweighted tensor y as input and produces the corresponding prediction
X̂ = h(y; θ_r).
The reconstruction block simply consists of an MLP with the same number of hidden neurons and a ReLU activation function for the reconstruction of features. Figure 6 shows the basic diagram of the encoder–decoder process.
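A hedged Keras sketch of how such a branch can be wired (our reading of the BAM–BRW–RecNet description; the 200-band input and 128 hidden units are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def encoder_decoder_branch(n_bands=200, hidden=128):
    x_in = tf.keras.Input(shape=(n_bands,), name="spectrum")
    # BAM: learn per-band weights w = sigmoid(g(X; theta_b)), kept non-negative
    w = layers.Dense(hidden, activation="relu")(x_in)
    w = layers.Dense(n_bands, activation="sigmoid", name="band_weights")(w)
    # BRW: band-wise multiplication y = X ⊙ w
    y = layers.Multiply(name="brw")([x_in, w])
    # RecNet: MLP that reconstructs the full spectrum from the reweighted bands
    h = layers.Dense(hidden, activation="relu")(y)
    x_rec = layers.Dense(n_bands, name="reconstruction")(h)
    # Expose both the reconstruction (for the unsupervised loss) and the
    # reweighted features (passed on to the weight-fusion stage).
    return models.Model(x_in, [x_rec, y], name="encoder_decoder_branch")

ed = encoder_decoder_branch()
ed.compile(optimizer="adam", loss={"reconstruction": "mse"})  # reconstruction loss only
ed.summary()
```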
(4) Weight Fusion
The three branches, namely the swin transformer, the CNN, and the fully connected encoder–decoder, extract features that present different characteristics of the data. The goal of the weight fusion technique is to give each branch an appropriate level of importance when classifying the result. First, we assess the importance of each branch, which can be based on the relevance of the information it captures. Once the importance scores are determined, we multiply each branch's features by the corresponding weight and fuse them by a summing operation. The weight fusion is calculated as:
F_1 = λ × F_CNN + (1 − λ) × F_ED,
where F_1 denotes the fused features of the CNN and ED branches and λ denotes a weighting parameter in the range [0, 1].
F_2 = λ × F_1 + (1 − λ) × F_Transformer,
where F_2 is the final output after fusing the features of all three branches; that is, F_1 is combined with the transformer branch features in the same weighted manner.
(5) Classifier
Figure 7 shows the multilayer perceptron (MLP) classifier, a neural network model. It consists of multiple layers of interconnected nodes, where each node in a layer is connected to all nodes in the subsequent layer.
The input of the MLP classifier is the multiscale feature vector received from the weight fusion block; each node corresponds to specific weighted features and calculates a weighted sum. This weighted sum is then passed through a ReLU activation function, which helps the network learn complex relationships within the data. The second fully connected layer performs a linear transformation, and a softmax function converts the output values into probabilities representing the likelihood of each class. The class with the highest probability is predicted as the final classification.
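The sketch below (illustrative only; the feature dimensions, the hidden width, and a fixed λ = 0.5 are assumptions) ties the two weighted sums F_1 and F_2 to the MLP classifier head described above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def fusion_and_classifier(feat_dim=128, n_classes=9, lam=0.5):
    f_cnn = tf.keras.Input(shape=(feat_dim,), name="cnn_features")
    f_ed = tf.keras.Input(shape=(feat_dim,), name="ed_features")
    f_swin = tf.keras.Input(shape=(feat_dim,), name="swin_features")

    # F1 = λ·F_CNN + (1 − λ)·F_ED, then F2 = λ·F1 + (1 − λ)·F_Transformer
    f1 = lam * f_cnn + (1.0 - lam) * f_ed
    f2 = lam * f1 + (1.0 - lam) * f_swin

    # MLP head: ReLU hidden layer followed by a softmax over the classes
    x = layers.Dense(64, activation="relu")(f2)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model([f_cnn, f_ed, f_swin], out, name="fusion_classifier")

fusion_and_classifier().summary()
```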
In summary, we use principal component analysis (PCA) as a preprocessing step to reduce the noise and redundancy in the data. After preprocessing, the input image data are embedded into three different branches: a swin transformer to extract high-level features, a convolutional neural network to extract low-level features, and a fully connected encoder–decoder to extract reconstruction features from the hyperspectral data. The details of how each branch works are described in the subsections above. After the multiscale features are extracted, the weight fusion technique is applied to fuse them at very low computational cost. Finally, the classifier is used to classify the image.

3. Dataset and Experimental Evaluation

3.1. Dataset

In this paper, the performance of the model is evaluated using three widely used hyperspectral datasets, including the Xuzhou dataset, the WHU-Hi-LongKou dataset, and the Salinas dataset. The details regarding the three datasets can be found in Table 1, Table 2 and Table 3.
The Xuzhou dataset includes false-color and ground-truth maps. This dataset was acquired by a HYSPEX spectral camera over the Xuzhou peri-urban site. The spatial resolution of this dataset is a high 0.73 m/pixel. After the removal of noisy bands, 436 spectral bands from 415 nm to 825 nm are used. The Xuzhou dataset can be employed for mineral classification.
The WHU-Hi-LongKou dataset was acquired by a Nano-Hyperspec imaging sensor in Longkou Town, Hubei Province, China. The size of the imagery is 550 × 400 pixels. There are 270 bands from 400 to 1000 nm, and the spatial resolution of the UAV-borne hyperspectral imagery is approximately 0.463 m. The WHU-Hi-LongKou dataset is used for fine crop classification.
The other dataset used in this experiment is Salinas, which was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. It has a spatial dimension of 512 × 217 pixels with a 3.7 m spatial resolution. This dataset comprises 224 bands; however, 20 bands sensitive to water absorption were excluded, resulting in 204 bands used for the experiments. There are 54,129 labeled pixels in total, spanning 16 different land-cover categories. Table 3 shows the false color and ground truth of this dataset.

3.2. Experimental Evaluation

To quantify the performance of the proposed model, the classification results are evaluated in terms of three metrics: overall accuracy, average accuracy, and the Kappa coefficient. The formulations of these metrics are as follows:
Overall accuracy (OA) computes the number of correctly classified pixels over the total number of pixels:
OA = (number of correctly classified pixels) / (total number of pixels).
Average accuracy (AA) is a crucial assessment metric [36] that offers an insightful evaluation of classification proficiency. This metric computes the average accuracy achieved across all categories:
AA = (1/n) Σ_{i=1}^{n} x_i,
where n is the number of classes and x_i is the classification accuracy of the i-th class (the proportion of correctly assigned pixels in that class). In addition, the Kappa coefficient is calculated as follows [37]:
Kappa = (OA − p_e) / (1 − p_e),
where Kappa determines the agreement between the predicted classification map and the ground truth map, and p_e represents the agreement between the model classification map and the ground truth map expected by chance. Usually, Kappa ≥ 0.8 indicates good agreement, while Kappa ≤ 0.4 indicates poor performance of the model.
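For reference, a small sketch of how these three metrics can be computed from a confusion matrix (our illustration of the formulas above):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    oa = np.trace(cm) / cm.sum()              # correctly classified / all pixels
    per_class = np.diag(cm) / cm.sum(axis=1)  # x_i: accuracy of each class
    aa = per_class.mean()
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Toy usage with three classes.
y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0])
print(classification_metrics(y_true, y_pred, 3))
```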

3.3. Experimental Setting

In this experiment, we used the Adam optimizer, and categorical cross-entropy was chosen as the loss function to train the proposed model. The learning rate was set to 0.001 and the weight decay to 0.0001. For the training and testing samples, we used 1% of the samples for training and 99% for testing on the Xuzhou and Salinas datasets. For the LK dataset, we used 0.5% of the samples for training and 99.5% for testing. The batch size and the number of epochs were set to 64 and 100, respectively. All experiments were run on an NVIDIA RTX 3060 GPU with 64 GB RAM, using Python 3.8 with the TensorFlow library. For the swin transformer, we set the patch size to 4 × 4; when the spatial size could not be evenly divided, zero padding was applied. We also set three channels after dimensionality reduction, and the window size was set to the default of seven.
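The following snippet sketches this training configuration (a stand-in Keras model and random tensors are used so it runs; AdamW from recent TensorFlow releases stands in for Adam with weight decay, and all shapes are placeholders for the fused three-branch model and the sampled training patches).

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Stand-in model and data; replace with the unified three-branch model
# and the 1% (or 0.5%) training patches in practice.
unified_model = tf.keras.Sequential([
    tf.keras.Input(shape=(13, 13, 30)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(9, activation="softmax"),
])
x_train = np.random.rand(256, 13, 13, 30).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 9, 256), 9)

optimizer = tf.keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=1e-4)
unified_model.compile(optimizer=optimizer,
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
unified_model.fit(x_train, y_train, batch_size=64, epochs=100, verbose=0)
```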

3.4. Model Parameters Selection

In this section, we discuss parameters that impact the classification accuracy, such as the number of principal component analysis (PCA) components and the input patch (window) size.
(1) Impact of principal component analysis (PCA)
Extracting spectral information from the bands is a very challenging task due to redundant and noisy data, and it is computationally expensive. Researchers have found that reducing the dimensionality is an effective way to extract the information contained in the bands. To mitigate the aforementioned issue, we conducted comparative experiments on the three datasets using 10, 20, 30, 40, and 50 PCA components to determine how many components yield the optimal information. Figure 8 shows the overall accuracy for different numbers of PCA components.
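As an illustration of this preprocessing step (a sketch, not the exact pipeline used), the cube can be flattened to pixels, reduced with scikit-learn's PCA, and reshaped; the band count below mimics Salinas.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(cube, n_components=30):
    """Reduce an (H, W, B) hyperspectral cube to (H, W, n_components) with PCA."""
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B)                      # one spectrum per row
    reduced = PCA(n_components=n_components).fit_transform(pixels)
    return reduced.reshape(H, W, n_components)

cube = np.random.rand(100, 100, 204)                  # e.g., a 204-band scene
print(reduce_bands(cube, n_components=30).shape)      # (100, 100, 30)
```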
(2) Impact of patch size
Different input patch sizes affect the classification accuracy, so the selection of the input patch size is important. Figure 9 shows the overall accuracy for different patch sizes. We conducted experiments on all datasets to determine the optimal input size and chose 13 × 13 as the input patch size of the model. We found that, beyond a certain point, increasing the patch size no longer improved the accuracy.
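A brief sketch of how such patches can be cut around each labeled pixel after padding the reduced cube (an illustrative helper, not the authors' code; the 13 × 13 size matches the choice above):

```python
import numpy as np

def extract_patches(cube, labels, patch_size=13):
    """Return (N, S, S, D) patches centred on every labeled pixel plus their labels."""
    m = patch_size // 2
    padded = np.pad(cube, ((m, m), (m, m), (0, 0)), mode="reflect")
    coords = np.argwhere(labels > 0)                 # keep only labeled pixels
    patches = np.stack([padded[r:r + patch_size, c:c + patch_size]
                        for r, c in coords])
    return patches, labels[labels > 0] - 1           # zero-based class indices

cube = np.random.rand(50, 50, 30)                    # reduced cube
labels = np.random.randint(0, 5, size=(50, 50))      # 0 = unlabeled background
X, y = extract_patches(cube, labels)
print(X.shape, y.shape)
```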

4. Results

To compare the classification results of the proposed model, comparative experiments were conducted and the models used were SVM [37], 2D-CNN [38], 3D-CNN [39], Hybrid [22], DFFN [41], Bam-CM [42], ViT [26], SwinT [33], SSFTT [30], and CT Mixer [31].
The classification results for the Xuzhou dataset are compared in Table 4. The SVM-based method obtained 84.39% classification accuracy, which remains competitive with some single deep models while using fewer parameters; this means that traditional classification methods still have some advantages in specific cases. On the other hand, the OAs of the state-of-the-art 3D-CNN, Hybrid, ViT, SwinT, and CT Mixer methods were 94.19%, 94.41%, 94.73%, 92.50%, and 95.72%, respectively, whereas our proposed model obtained 96.87% classification accuracy on the Xuzhou dataset with 1% training samples.
Additionally, the observation reflects the effectiveness of the proposed model in seamlessly integrating features extracted from different models. As a result, it significantly enhances OA classification performance.
Figure 10 shows the classification maps of the different methods on the Xuzhou dataset. As shown in the figure, more training samples and more model layers can produce better results with less noise. ViT obtains excellent classification accuracy with less noise and intra-class smoothness. In addition, the proposed model not only obtains multiscale spectral–spatial features but also includes high-level semantic features from the transformer and reconstruction features from the encoder–decoder; it therefore produces a better classification map with more detailed information. The classification results for the Salinas dataset are compared in Table 5. From the table, one can first see that the shallow traditional multiscale feature learning method is inferior to the deep learning models in terms of OA, AA, and Kappa coefficient. Among the plain CNN models, the 2D-CNN achieved a better result than the 3D-CNN owing to its spatial feature extraction capability. The proposed model also outperforms the other multiscale feature learning models in terms of OA. Furthermore, SSFTT and CT Mixer performed better than the other deep models; both use combined CNN and transformer models for feature extraction, and this combination is a strong choice for capturing local and global information. Our proposed model extracted better multiscale features than the other single models and obtained the highest classification accuracy in several categories.
Figure 11 shows the classification maps of the different methods; as can be seen, the SSFTT model and the proposed model have less noise, and our classification map is almost identical to the ground truth image. As the level of noise increases, the accuracy of the classification maps tends to decrease. This observation highlights the importance of noise reduction and the need to minimize noise effects to improve the accuracy of classification results. On the Xuzhou dataset, the proposed model improves OA by 12.48%, 15.31%, 2.68%, 2.46%, 7.38%, 5.80%, 2.14%, 4.37%, 3.47%, and 1.15% compared to SVM, 2D-CNN, 3D-CNN, Hybrid, DFFN, Bam-CM, ViT, SwinT, SSFTT, and CT Mixer, respectively. Traditional methods are not able to fully extract multiscale spectral–spatial features. The 2D-CNN can extract both spectral and spatial features, but it cannot extract semantic, high-level features. The transformer model is able to learn semantic features; moreover, a combined CNN and transformer, such as CT Mixer, can extract more discriminative features and improve classification accuracy. The comparison results for the WHU-Hi-LongKou dataset with different methods are shown in Table 6. In this experiment, the traditional SVM method performed better than some of the deep models. The 2D-CNN network also performed well owing to its spatial feature extraction capability, despite some class variability. Among the other state-of-the-art methods, the vision transformer also performed well by extracting long-range features. Our proposed model performed significantly better in terms of OA, AA, and Kappa coefficient.
Figure 12 shows the classification maps of the different methods. As can be seen, SVM, 2D-CNN, ViT, and our proposed model produce competitive classification maps with less noise, and our proposed method achieves high accuracy in several classes. If the number of training samples were increased, deeper models could achieve even higher accuracy, although at the cost of increased computational complexity. Our proposed model shows impressive classification performance owing to its weight fusion design strategy, which makes full use of the multiscale features from the different branches; the reconstruction features from the encoder–decoder further enhance the fused features. Our proposed model thus systematically combines multiscale features, and it can be seen from Table 4, Table 5 and Table 6 that it obtains better results than the other state-of-the-art methods.

5. Discussion

This paper developed a three-branch unified model to extract multiscale features. Hyperspectral image data contain different kinds of features, such as texture, structural features, and the shape and size of land-cover objects. Owing to the different characteristics of each model, we used three different deep learning models to extract multiscale features at different levels and fused these features with a weight fusion technique that assigns different weights to different features. As can be seen from Table 4, Table 5 and Table 6, the classical SVM method, several deep learning methods (2D-CNN, 3D-CNN, Hybrid, DFFN, and Bam-CM), and recently introduced vision transformer-based methods (ViT, Swin Transformer, SSFTT, and CT Mixer) were considered for comparison.
Our experiments show that the proposed method achieves the best classification results in terms of overall accuracy (OA), average accuracy (AA), and Kappa coefficient (k) on the three publicly available datasets. Taking the WHU-Hi-LongKou dataset as an example, the Hybrid and SSFTT models are not able to extract the features of class C3, whereas our proposed method achieved satisfactory results for classes C4 and C5. Our model achieved more than 90% accuracy in all classes except class C9, which exhibits its robustness and discriminative power. Compared with the CNN-based and transformer-based models, the OA of the proposed method improved by 0.91%, 4.84%, 3.66%, and 2.72% on the LK dataset relative to the DFFN, Bam-CM, SSFTT, and CT Mixer models, respectively. The proposed model combines the advantages of each branch to extract different kinds of multiscale features at each level and thereby improve classification performance. Moreover, the proposed method adopts a comprehensive feature fusion technique that improves the model's capability to extract multiscale information. Other deep feature fusion models, such as DeepCervix, cannot capture semantic high-level features; they also use concatenation and addition to fuse multiscale features, and these fusion techniques may introduce redundant information, especially if the branches capture similar multiscale features. Because of class imbalance and small training samples, such models are susceptible to overfitting. In contrast, our proposed model achieved satisfactory classification results with small training samples. The weight fusion technique facilitates the integration of multiscale features extracted by the different modules, leading to a more comprehensive and refined multiscale feature representation. By integrating multiple modules and fusion techniques, the model offers a holistic approach to hyperspectral image classification, potentially improving its ability to handle complex real-world scenarios. In terms of feature fusion, weight fusion allows the model to assign different importance to features from different branches, emphasizing the more relevant features; furthermore, it controls the dimensionality and reduces the computational cost.

5.1. Ablation Study

To highlight the effectiveness of the proposed model, an investigation was conducted on the Xuzhou dataset to examine the impact of different combinations of network components. The findings are presented in Table 7; the results demonstrate that our proposed fusion technique achieves superior performance compared to the other combination approaches.

5.2. Different Models on Different Training Samples over the Xuzhou Dataset

Model performance can be effectively evaluated by examining the classification accuracy across different numbers of training samples. To assess this, we randomly selected 1%, 2%, 3%, and 5% of the data for training and used the remainder for testing. Figure 13 shows that the classification accuracy increases as the number of training samples increases. These results show that the proposed model remains effective even with a small number of training samples.

6. Conclusions

In this paper, we proposed a deep learning model based on a parallel branch structure for hyperspectral image classification tasks. The model can mine multiscale features, such as reconstructive spectral features and low-level and high-level features, and fuse them with a weight fusion strategy. Due to the high dimensionality of the data, we used the PCA algorithm to reduce data redundancy. Our experimental results show that the proposed model performs well in terms of OA, AA, and Kappa coefficient with a small number of training samples, and it has fewer parameters than other multiscale feature learning models. In future work, we aim to explore lighter architectures with even fewer training samples.

Author Contributions

Conceptualization, T.A.; Data curation, Y.Y.G. and A.G.; Formal analysis, J.Z. and I.U.; Funding acquisition, O.A. and A.G.; Investigation, Y.Y.G.; Methodology, T.A.; Project administration, I.U., O.A. and A.G.; Resources, I.U.; Software, T.A.; Supervision, J.Z.; Validation, T.A. and J.Z.; Visualization, O.A. and A.G.; Writing—original draft, T.A.; Writing—review and editing, I.U. and O.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Researchers Supporting Project Number (RSP2023R102), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

The data can be obtained from: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 12 September 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, X.X.W.; Ran, Q.; Du, Q.; Gao, L.; Zhang, B. Multisource remote sensing data classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 937–949. [Google Scholar]
  2. Della, C.; Bekit, A.; Lampe, B.; Chang, C.-I. Hyperspectral image classification via compressive sensing. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8290–8303. [Google Scholar] [CrossRef]
  3. Camino, C.; González-Dugo, V.; Hernández, P.; Sillero, J.C.; Zarco-Tejada, P.J. Improved nitrogen retrievals with airborne derived fluorescence and plant traits quantified from VNIR-SWIR hyperspectral imagery in the context of precision agriculture. Int. J. Appl. Earth Observ. Geoinf. 2018, 70, 105–117. [Google Scholar] [CrossRef]
  4. Murphy, R.J.; Whelan, B.; Chlingaryan, A.; Sukkarieh, S. Quantifying leaf-scale variations in water absorption in lettuce from hyperspectral imagery: A laboratory study with implications for measuring leaf water content in the context of precision agriculture. Precis. Agricult. Aug. 2019, 20, 767–787. [Google Scholar] [CrossRef]
  5. Murphy, R.; Schneider, S.; Monteiro, S. Consistency of measurements of wavelength position from hyperspectral imagery: Use of the ferric iron crystal field absorption at ∼900 nm as an indicator of mineralogy. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2843–2857. [Google Scholar] [CrossRef]
  6. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in Hyperspectral image classification: Earth monitoring with statistical learning methods. IEEE Signal Process. Mag. 2014, 31, 45–54. [Google Scholar] [CrossRef]
  7. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Select. Topics Appl. Earth Observ. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  8. Khan, R.; Yang, Q.; Ullah, I.; Rehman, A.U.; Tufail, A.B.; Noor, A.; Rehman, A.; Cengiz, K. 3D convolutional neural networks based automatic modulation classification in the presence of channel noise. IET Commun. 2022, 16, 497–509. [Google Scholar] [CrossRef]
  9. Khalil, H.; Rahman, S.U.; Ullah, I.; Khan, I.; Alghadhban, A.J.; Al-Adhaileh, M.H.; Ali, G.; ElAffendi, M. A UAV-Swarm-Communication Model Using a Machine-Learning Approach for Search-and-Rescue Applications. Drones 2022, 6, 372. [Google Scholar] [CrossRef]
  10. Cariou, C.; Chehdi, K. A new k-nearest neighbor density-based clustering method and its application to hyperspectral images. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 6161–6164. [Google Scholar]
  11. Haut, J.; Paoletti, M.; Paz-Gallardo, A.; Plaza, J.; Plaza, A. Cloud implementation of logistic regression for hyperspectral image classification. In Proceedings of the 17th International Conference on Computational and Mathematical Methods in Science and Engineering, Cádiz, Spain, 4–8 July 2017; Volume 3, pp. 1063–2321. [Google Scholar]
  12. SahIn, Y.E.; Arisoy, S.; Kayabol, K. Anomaly detection with Bayesian Gauss background model in hyperspectral images. In Proceedings of the 26th Signal Processing Communications Application Conference (SIU), Izmir, Turkey, 2–5 May 2018. [Google Scholar]
  13. Chen, Y.-N.; Thaipisutikul, T.; Han, C.-C.; Liu, T.-J.; Fan, K.-C. Feature line embedding based on support vector machine for hyperspectral image classification. Remote Sens. 2021, 13, 130. [Google Scholar] [CrossRef]
  14. Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear versus nonlinear PCA for the classification of hyperspectral data based on the extended morphological profiles. IEEE Geosci. Remote Sens. Lett. 2012, 9, 447–451. [Google Scholar] [CrossRef]
  15. Villa, A.; Benediktsson, J.A.; Chanussot, J.; Jutten, C. Hyperspectral image classification with independent component discriminant analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4865–4876. [Google Scholar] [CrossRef]
  16. Fu, L.; Li, Z.; Ye, Q.; Yin, H.; Liu, Q.; Chen, X.; Fan, X.; Yang, W.; Yang, G. Learning robust discriminant subspace based on joint L2, p- and L2, s -norm distance metrics. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 130–144. [Google Scholar] [CrossRef] [PubMed]
  17. Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and spatial classification of hyperspectral data using SVMs and morphological profiles. IEEE Trans. Geosci. Remote Sens. 2007, 46, 3804–3814. [Google Scholar] [CrossRef]
  18. Dalla, M.M.; Villa, A.; Benediktsson, J.A.; Chanussot, J.; Bruzzone, L. Classification of hyperspectral images by using extended morphological attribute profiles and independent component analysis. IEEE Geosci. Remote Sens. Lett. 2011, 8, 542–546. [Google Scholar] [CrossRef]
  19. Ma, X.; Wang, H.; Geng, J. Spectral-spatial classification of hyperspectral image based on deep auto-encoder. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 4073–4085. [Google Scholar] [CrossRef]
  20. Shao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar]
  21. Yue, J.; Zhao, W.; Mao, S.; Liu, H. Spectral–spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 2015, 6, 468–477. [Google Scholar] [CrossRef]
  22. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  23. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef]
  24. Xian, L.; Ding, M.; Pižurica, A. Deep feature fusion via two-stream convolutional neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2615–2629. [Google Scholar]
  25. Zhang, X.; Sun, Y.; Jiang, K.; Li, C.; Jiao, L.; Zhou, H. Spatial sequential recurrent neural networks for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 4141–4155. [Google Scholar] [CrossRef]
  26. Feng, J.; Yu, H.; Wang, L.; Zhang, X.; Jiao, L. Classification of hyperspectral images based on multiclass spatial–spectral generative adversarial networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5329–5343. [Google Scholar] [CrossRef]
  27. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph convolutional networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
  28. Zhu, K.; Chen, Y.; Ghamisi, P.; Jia, X.; Benediktsson, J.A. Deep convolutional capsule network for hyperspectral image spectral and spectral-spatial classification. Remote Sens. 2019, 11, 223. [Google Scholar] [CrossRef]
  29. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16× 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  30. Sun, H.; Zheng, X.; Lu, X.; Wu, S. Spectral–spatial attention network for hyperspectral image classifiation. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3232–3245. [Google Scholar] [CrossRef]
  31. Chen, X.H.Y.; Lin, Z. Spatial–spectral transformer for hyperspectral image classification. Remote Sens. 2021, 13, 498. [Google Scholar]
  32. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking hyperspectral image classification with transformers. arXiv 2021, arXiv:2107.02988. [Google Scholar] [CrossRef]
  33. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–spatial feature tokenization transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214. [Google Scholar] [CrossRef]
  34. Zhang, J.; Meng, Z.; Zhao, F.; Liu, H.; Chang, Z. Convolution transformer mixer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. Lett. 2022, 19, 6014205. [Google Scholar] [CrossRef]
  35. Xin, H.; Chen, Y.; Li, Q. Two-Branch Pure Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. Lett. 2022, 19, 6015005. [Google Scholar]
  36. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted Windows. arXiv 2021, arXiv:2103.14030. [Google Scholar]
  37. Cai, Y.; Liu, X.; Cai, Z. BS-nets: An end-to-end framework for band selection of hyperspectral image. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1969–1984. [Google Scholar] [CrossRef]
  38. Nyasaka, D.; Wang, J.; Tinega, H. Learning hyperspectral feature extraction and classification with resnext network. arXiv 2020, arXiv:2002.02585. [Google Scholar]
  39. Song, H.; Yang, W.; Dai, S.; Yuan, H. Multi-source remote sensing image classification based on two-channel densely connected convolutional networks. Math. Biosci. Eng. 2020, 17, 7353–7378. [Google Scholar] [CrossRef] [PubMed]
  40. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  41. Hamida, A.B.; Benoit, A.; Lambert, P.; Amar, C.B. 3-D deep learning approach for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar] [CrossRef]
  42. Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral image classification with deep feature fusion network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184. [Google Scholar] [CrossRef]
  43. Dong, H.; Zhang, L.; Zou, B. Band attention convolutional networks for hyperspectral image classification. arXiv 2019, arXiv:1906.04379. [Google Scholar]
  44. Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
  45. Khan, S.; Ullah, I.; Ali, F.; Shafiq, M.; Ghadi, Y.Y.; Kim, T. Deep learning-based marine big data fusion for ocean environment monitoring: Towards shape optimization and salient objects detection. Front. Marine Sci. 2023, 9, 1094915. [Google Scholar] [CrossRef]
  46. Mazhar, T.; Irfan, H.M.; Haq, I.; Ullah, I.; Ashraf, M.; Shloul, T.A.; Ghadi, Y.Y.; Imran Elkamchouchi, D.H. Analysis of Challenges and Solutions of IoT in Smart Grids Using AI and Machine Learning Techniques: A Review. Electronics 2023, 12, 242. [Google Scholar] [CrossRef]
  47. Mazhar, T.; Irfan, H.M.; Khan, S.; Haq, I.; Ullah, I.; Iqbal, M.; Hamam, H. Analysis of Cyber Security Attacks and Its Solutions for the Smart Grid Using Machine Learning and Blockchain Methods. Future Internet 2023, 15, 83. [Google Scholar] [CrossRef]
  48. Zhou, W.; Chen, Z. Deep multi-scale features learning for distorted image quality assessment. In Proceedings of the 2021 IEEE International Symposium on Circuits and Systems (ISCAS), Daegu, Republic of Korea, 22–28 May 2021. [Google Scholar]
  49. Pal, A.; Xue, Z.; Antani, S. Deep cervix model development from heterogeneous and partially labeled image datasets. In Proceedings of the Frontiers of ICT in Healthcare: Proceedings of EAIT, Kolkata, India, 30–31 March 2022; pp. 679–688. [Google Scholar]
  50. Rahaman, M.M.; Li, C.; Yao, Y.; Kulwa, F.; Wu, X.; Li, X.; Wang, Q. DeepCervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques. Comput. Biology Med. 2021, 136, 04649. [Google Scholar] [CrossRef] [PubMed]
  51. Li, Y.; Wang, Q.; Liang, X.; Jiao, L. A novel deep feature fusion network for remote sensing scene classification. In Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 5484–5487. [Google Scholar]
  52. Song, T.; Zhang, X.; Ding, M.; Rodriguez-Paton, A.; Wang, S.; Wang, G. DeepFusion: A deep learning based multi-scale feature fusion method for predicting drug-target interactions. Methods 2022, 204, 269–277. [Google Scholar] [CrossRef]
Figure 1. Proposed model illustration.
Figure 2. Illustration of swin transformer blocks.
Figure 3. Illustration of shifted window partition in swin transformers [33].
Figure 4. Illustration of convolutional neural block.
Figure 5. Illustration of the encoder–decoder.
Figure 6. Block diagram of the encoder–decoder.
Figure 7. Illustration of classifier.
Figure 8. OA classification result (%) at different PCA components.
Figure 9. OA classification result (%) at different patch sizes.
Figure 10. Classification maps obtained by different methods for the Xuzhou dataset. (a) SVM, (b) 2D, (c) 3D, (d) Hybrid, (e) DFFN, (f) Bam-CM, (g) ViT, (h) ST, (i) CT Mixer, (j) SSFTT, and (k) proposed.
Figure 11. Classification maps of different methods for Salinas dataset. (a) SVM, (b) 2D, (c) 3D, (d) Hybrid, (e) DFFN, (f) Bam-CM, (g) ViT, (h) ST, (i) CT Mixer, (j) SSFTT, and (k) proposed.
Figure 12. Classification maps of different methods for the WHU-Hi-LongKou dataset. (a) SVM, (b) 2D, (c) 3D, (d) Hybrid, (e) DFFN, (f) Bam-CM, (g) ViT, (h) ST, (i) CT Mixer, (j) SSFTT, and (k) proposed.
Figure 13. OA with 1–5% of the training samples on the Xuzhou dataset.
Table 1. Xuzhou dataset labeled samples.
No | Class | Train | Samples
C1 | BareLand-1 | 263 | 26,396
C2 | Lakes | 40 | 4027
C3 | Coals | 27 | 2783
C4 | Cement | 52 | 5214
C5 | Crops-1 | 131 | 13,184
C6 | Trees | 24 | 2436
C7 | Bareland-2 | 70 | 6990
C8 | Crops-2 | 47 | 4777
C9 | Red tiles | 30 | 3070
Total | | 684 | 68,877
Table 2. WHU-Hi-LongKou dataset labeled samples.
No | Class | Train | Samples
C1 | Corn | 172 | 34,511
C2 | Cotton | 41 | 8374
C3 | Sesame | 15 | 3031
C4 | Broad-leaf soybean | 316 | 63,212
C5 | Narrow-leaf soybean | 20 | 4151
C6 | Rice | 59 | 11,854
C7 | Water | 335 | 67,056
C8 | Road and house | 35 | 7124
C9 | Mixed weeds | 26 | 5229
Total | | 1019 | 204,542
Table 3. Salinas dataset labeled samples.
No | Class | Train | Samples
C1 | Broccoli-g-w-1 | 20 | 2009
C2 | Broccoli-g-w-2 | 37 | 3726
C3 | Fallow | 19 | 1976
C4 | Fallow-rough-plow | 13 | 1394
C5 | Fallow-smooth | 26 | 2678
C6 | Stubble | 39 | 3959
C7 | Celery | 35 | 3579
C8 | Grapes-untrained | 112 | 11,271
C9 | Soil-vineyard-develop | 62 | 6203
C10 | Corn-senesced-g-w | 32 | 3278
C11 | Lettuce-romaine-4wk | 10 | 1068
C12 | Lettuce-romaine-5wk | 19 | 1927
C13 | Lettuce-romaine-6wk | 9 | 916
C14 | Lettuce-romaine-7wk | 10 | 1070
C15 | Vineyard-untrained | 72 | 7268
C16 | Vineyard-vertical-trellis | 18 | 1807
Total | | 533 | 54,129
Table 4. Classification result (%) on the Xuzhou dataset with 1% samples.
Class ID | SVM | 2D | 3D | Hybrid | DFFN | Bam-CM | ViT | SwinT | SSFTT | CT Mixer | Proposed
C1 | 92.41 | 77.04 | 96.96 | 94.52 | 90.80 | 96.52 | 94.36 | 96.95 | 98.07 | 97.97 | 96.61
C2 | 88.15 | 94.95 | 98.19 | 96.48 | 97.86 | 97.36 | 95.24 | 94.45 | 96.16 | 98.79 | 97.34
C3 | 6.32 | 90.23 | 86.38 | 91.72 | 53.75 | 83.52 | 89.83 | 82.57 | 45.15 | 76.58 | 93.01
C4 | 86.04 | 73.28 | 96.55 | 91.88 | 99.07 | 85.76 | 94.78 | 75.26 | 88.19 | 93.70 | 93.43
C5 | 87.64 | 81.79 | 99.56 | 93.51 | 100 | 93.02 | 98.28 | 94.33 | 98.74 | 96.30 | 96.75
C6 | 58.29 | 84.57 | 8.87 | 92.66 | 68.32 | 37.60 | 87.84 | 84.61 | 70.27 | 75.70 | 88.68
C7 | 76.10 | 79.36 | 99.34 | 95.80 | 100 | 89.69 | 95.16 | 92.60 | 94.37 | 97.28 | 98.15
C8 | 79.33 | 91.28 | 97.48 | 99.36 | 56.98 | 87.56 | 98.42 | 93.74 | 97.50 | 100 | 97.50
C9 | 77.84 | 95.65 | 96.15 | 91.83 | 81.67 | 94.43 | 97.30 | 86.17 | 93.61 | 96.38 | 95.50
OA | 84.39 | 81.56 | 94.19 | 94.41 | 89.49 | 91.07 | 94.73 | 92.50 | 93.40 | 95.72 | 96.87
AA | 72.45 | 85.35 | 86.61 | 94.20 | 83.16 | 85.05 | 93.36 | 90.44 | 91.56 | 92.52 | 94.66
Kappa | 80.05 | 77.57 | 92.62 | 92.95 | 86.70 | 88.64 | 93.12 | 88.96 | 90.26 | 94.55 | 95.76
Table 5. Classification result (%) on the Salinas dataset with 1% samples.
Class ID | SVM | 2D | 3D | Hybrid | DFFN | Bam-CM | ViT | SwinT | SSFTT | CT Mixer | Proposed
C1 | 92.13 | 99.79 | 95.47 | 98.13 | 99.49 | 98.49 | 99.34 | 53.79 | 100 | 99.94 | 98.84
C2 | 94.40 | 100 | 99.24 | 100 | 99.91 | 99.70 | 100 | 98.37 | 100 | 100 | 100
C3 | 54.66 | 99.69 | 98.77 | 99.33 | 99.79 | 97.39 | 99.74 | 84.15 | 99.18 | 100 | 99.94
C4 | 97.48 | 92.17 | 98.69 | 96.44 | 74.78 | 5.65 | 100 | 99.49 | 99.92 | 99.42 | 98.95
C5 | 82.75 | 97.01 | 82.57 | 98.07 | 100 | 93.54 | 96.60 | 98.37 | 99.24 | 99.66 | 99.81
C6 | 99.51 | 100 | 100 | 100 | 99.51 | 98.49 | 99.97 | 100 | 100 | 100 | 100
C7 | 90.66 | 100 | 99.57 | 99.91 | 99.91 | 99.15 | 99.85 | 96.58 | 99.94 | 100 | 99.15
C8 | 57.68 | 92.89 | 80.86 | 98.03 | 100 | 88.69 | 89.41 | 89.71 | 96.51 | 94.41 | 96.45
C9 | 86.77 | 100 | 99.67 | 100 | 99.88 | 99.67 | 99.90 | 99.96 | 100 | 100 | 100
C10 | 65.30 | 97.19 | 79.93 | 95.00 | 96.51 | 85.11 | 93.77 | 96.91 | 99.13 | 99.22 | 98.90
C11 | 20.10 | 99.62 | 98.95 | 98.77 | 100 | 44.27 | 100 | 83.63 | 99.05 | 100 | 99.43
C12 | 93.73 | 99.84 | 92.13 | 100 | 65.25 | 75.62 | 98.53 | 97.69 | 99.73 | 98.74 | 99.94
C13 | 15.20 | 24.80 | 75.52 | 57.99 | 32.41 | 42.22 | 79.16 | 96.58 | 95.58 | 6.72 | 99.00
C14 | 91.36 | 99.52 | 74.12 | 90.17 | 96.60 | 70.91 | 96.69 | 97.45 | 62.13 | 99.71 | 98.67
C15 | 58.23 | 67.94 | 83.11 | 85.15 | 25.86 | 65.81 | 87.88 | 84.82 | 98.69 | 99.97 | 98.98
C16 | 18.30 | 99.16 | 97.31 | 99.44 | 99.60 | 87.75 | 100 | 93.46 | 99.94 | 99.55 | 100
OA | 73.47 | 92.35 | 89.99 | 96.06 | 86.63 | 85.10 | 95.09 | 92.17 | 98.12 | 97.10 | 98.48
AA | 69.89 | 91.85 | 90.99 | 94.78 | 86.84 | 78.28 | 96.30 | 91.93 | 96.81 | 93.58 | 98.94
Kappa | 70.25 | 91.46 | 89.92 | 95.60 | 84.96 | 83.32 | 94.54 | 91.27 | 97.90 | 96.78 | 98.31
Table 6. Classification result (%) on the WHU-Hi-LongKou dataset with 0.5% samples.
Class ID | SVM | 2D | 3D | Hybrid | DFFN | Bam-CM | ViT | SwinT | SSFTT | CT Mixer | Proposed
C1 | 98.20 | 99.96 | 99.55 | 99.88 | 99.93 | 99.35 | 99.94 | 99.86 | 97.05 | 99.48 | 99.91
C2 | 82.08 | 93.90 | 78.08 | 97.56 | 99.74 | 75.67 | 90.83 | 76.10 | 97.77 | 99.81 | 96.65
C3 | 80.02 | 85.41 | 83.72 | 2.18 | 96.75 | 30.13 | 90.51 | 95.09 | 1.39 | 86.04 | 92.24
C4 | 95.45 | 99.51 | 98.40 | 98.63 | 98.19 | 98.72 | 98.99 | 97.10 | 98.66 | 96.20 | 99.65
C5 | 64.81 | 67.62 | 71.50 | 80.36 | 60.55 | 39.22 | 75.18 | 86.31 | 78.66 | 67.40 | 90.00
C6 | 97.99 | 98.72 | 99.50 | 91.72 | 98.70 | 92.71 | 98.32 | 98.44 | 98.97 | 96.63 | 97.97
C7 | 98.55 | 100 | 99.66 | 99.87 | 98.61 | 99.90 | 99.99 | 99.89 | 98.97 | 99.50 | 99.99
C8 | 89.84 | 93.10 | 89.00 | 97.43 | 97.27 | 77.10 | 96.81 | 92.46 | 55.01 | 76.00 | 93.58
C9 | 83.45 | 86.89 | 86.00 | 68.87 | 90.90 | 53.08 | 88.83 | 87.79 | 72.23 | 69.88 | 82.08
OA | 95.83 | 98.07 | 96.83 | 95.61 | 97.71 | 93.78 | 98.16 | 97.05 | 94.96 | 95.90 | 98.62
AA | 87.81 | 91.68 | 89.49 | 85.28 | 93.41 | 73.99 | 93.27 | 92.56 | 85.86 | 87.88 | 94.68
Kappa | 94.49 | 97.46 | 95.81 | 94.22 | 97.00 | 91.70 | 97.58 | 96.13 | 92.06 | 94.63 | 98.15
Table 7. OA (%) result of ablation study of different combinations of a model over the Xuzhou dataset.
Methods | Encoder–Decoder | CNN | SwinT | ED + CNN | ED + SwinT | CNN + SwinT | Proposed
OA (%) | 88.90 | 92.93 | 93.00 | 94.00 | 93.65 | 94.78 | 96.87
AA (%) | 85.27 | 92.91 | 89.49 | 92.79 | 91.96 | 92.75 | 94.66
Kappa (%) | 85.89 | 91.14 | 91.10 | 92.45 | 92.00 | 93.38 | 95.76
