Advanced Global Prototypical Segmentation Framework for Few-Shot Hyperspectral Image Classification
Abstract
1. Introduction
- To capture global information, a patch-free feature extractor based on an FCN is proposed. The extractor takes the entire HSI as input, and features for all pixels are obtained in a single forward pass. Since this process resembles semantic segmentation, we refer to the proposed feature extractor as the segmentation network (SegNet). Because SegNet operates on the whole HSI, it avoids both the restricted receptive field and the redundant computation caused by dividing the HSI into fixed-size overlapping patches. As a result, SegNet offers a significantly larger receptive field than patch-based methods;
- Building upon the data characteristics of HSI and the architecture of SegNet, we propose an FLC structure that fuses the rich detail features from the encoder with the semantic information in the decoder to enhance SegNet's feature representation capability. Furthermore, we design a multi-scale position attention module, the ASPP-PA module, which connects the ASPP module and the PA module to fuse information across different scales and allocate more attention to critical areas;
- To better adapt to few-shot scenarios, we integrate the global prototypical representation learning strategy with supervised contrastive learning (CL) and propose an advanced global prototypical representation learning strategy. It learns a global prototypical feature vector for each class in the HSI as the representative of that class and optimizes the network through a triplet constraint. With CL incorporated, features of the same class become more similar while features of different classes become less similar, enabling SegNet to map the HSI into a more easily classifiable feature space.
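To illustrate the patch-free idea in the first contribution, the sketch below uses a toy stand-in for SegNet (the layer sizes and depths are illustrative assumptions, not the authors' actual architecture): the whole HSI cube goes in once, and per-pixel features come out of a single forward pass, instead of thousands of per-patch passes.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal patch-free FCN: the entire HSI goes in, per-pixel features come out."""
    def __init__(self, in_bands: int, feat_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),  # downsample
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, feat_dim, 3, padding=1),                 # restore H x W
        )

    def forward(self, x):  # x: (1, bands, H, W) -- the whole image at once
        return self.decoder(self.encoder(x))

# One forward pass over the full cube replaces per-patch inference.
hsi = torch.randn(1, 103, 64, 68)     # a small cube with 103 bands (illustrative)
features = TinySegNet(103)(hsi)       # per-pixel feature map, same H x W
```

Because no fixed-size patch is ever cut out, the effective receptive field is bounded only by the network depth, not by a patch size chosen in advance.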
2. Related Works
2.1. Extraction of Global Information from HSI
2.2. HSI Classification in Few-Shot Scenarios
3. Proposed Method
3.1. Overall Framework
3.2. SegNet
3.2.1. Encoder
3.2.2. Fusion of Lateral Connection (FLC) Structure
3.2.3. Atrous Spatial Pyramid Pooling-Position Attention (ASPP-PA) Module
- (1) Atrous Spatial Pyramid Pooling (ASPP) Module
- (2) Position Attention (PA) Module
3.2.4. Decoder
3.3. Advanced Global Prototypical Representation Learning Strategy
3.3.1. Global Prototypical Representation Learning Strategy
3.3.2. Introducing Supervised CL
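A minimal NumPy sketch of the core of this strategy: each class is represented by a prototype (here a simple mean feature vector), and a triplet-style margin constraint pulls each sample toward its own prototype and away from the others. The margin value and Euclidean distance are assumptions for illustration:

```python
import numpy as np

def class_prototypes(feats, labels):
    """One prototypical feature vector per class (mean of that class's features)."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def triplet_margin(anchor, positive, negative, margin=1.0):
    """Triplet constraint: the anchor should be closer to its own class
    prototype (positive) than to another class's prototype (negative),
    by at least `margin`; the loss is zero once the constraint holds."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Two well-separated classes in a 2-D feature space.
feats = np.array([[0., 0.], [0., 2.], [10., 10.], [10., 12.]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labels)            # {0: [0, 1], 1: [10, 11]}
loss = triplet_margin(feats[0], protos[0], protos[1])
print(protos[0], loss)                              # → [0. 1.] 0.0
```

Minimizing this loss over all samples is what drives same-class features together and different-class features apart, which is the effect the supervised CL term reinforces.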
4. Experimental Results and Analysis
4.1. Experimental Datasets
- Training Dataset:
- Chikusei Dataset: The spectral range of this dataset is 343–1080 nm, with a spatial resolution of approximately 2.5 m. The image size is 2571 × 2335 pixels, with 128 spectral bands. The Chikusei dataset is divided into 19 land cover classes. Figure 4 shows the false-color image and the ground truth of the Chikusei dataset.
- Test Dataset:
- Indian Pines (IP) Dataset: The spectral range of this dataset is 400–2500 nm, with a spatial resolution of approximately 20 m. The image size is 145 × 145 pixels, with 200 spectral bands. This dataset contains 16 land cover classes. Figure 5 shows the false-color image and the ground truth of the Indian Pines dataset.
- Salinas (SA) Dataset: The spectral range of this dataset is 400–2500 nm, with a spatial resolution of approximately 3.7 m. The image size is 512 × 217 pixels, with 224 original spectral bands; after removing the bands with severe water vapor absorption, 204 bands remain. This dataset contains 16 land cover classes. Figure 6 shows the false-color image and the ground truth of the Salinas dataset.
- University of Pavia (UP) Dataset: The spectral range of this dataset is 430–860 nm, with a spatial resolution of approximately 1.3 m. The image size is 610 × 340 pixels, with 115 original spectral bands; after removing 12 noisy bands, 103 bands remain. This dataset contains 9 land cover classes. Figure 7 shows the false-color image and the ground truth of the University of Pavia dataset.
4.2. Experimental Settings
- Running Platform: The experiments were conducted on a computer with an Intel® Core™ i7-6700K CPU @ 4.00 GHz, 32 GB of RAM, and an NVIDIA TITAN X (Pascal) 12 GB graphics card (Santa Clara, CA, USA). The PyTorch 1.9.1 deep learning framework was used for training and testing.
- Evaluation Metrics: The overall accuracy (OA), average accuracy (AA), and Kappa coefficient were used for evaluation. OA is the ratio of correctly classified pixels to the total number of test pixels in the HSI. AA is the average of the per-class accuracies. The Kappa coefficient measures the consistency between the classification result and the ground truth, ranging from 0 to 1: a value of 1 indicates complete consistency, a value greater than 0.75 indicates satisfactory consistency, and a value less than 0.4 indicates poor consistency [42]. Larger values of all three metrics indicate better model performance. All experiments were run ten times, and the average values are reported.
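The three metrics above can all be computed from a single confusion matrix; a small self-contained sketch:

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    """Compute OA, AA, and the Kappa coefficient from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                          # correct / all test pixels
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))         # mean per-class accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Tiny worked example: 6 test pixels, 3 classes, one error in class 0.
y_true = np.array([0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2])
oa, aa, kappa = oa_aa_kappa(y_true, y_pred, 3)
print(round(oa, 3), round(aa, 3), round(kappa, 3))     # → 0.833 0.889 0.739
```

Note that AA weights every class equally regardless of its pixel count, which is why OA and AA can diverge noticeably on datasets with imbalanced classes such as Indian Pines.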
4.3. Analysis of Hyperparameters
4.4. Analysis of Few-Shot Classification Performance in the AGPS Framework
4.5. Analysis of Inference Speed and Computational Cost in the AGPS Framework
4.6. Analysis of Feature Separability
4.7. Ablation Analysis of the AGPS Framework
4.8. Analysis of the Impact of Different Numbers of Labeled Samples on the Performance of the AGPS Framework
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
- Wang, J.; Ma, A.; Zhong, Y.; Zheng, Z.; Zhang, L. Cross-sensor domain adaptation for high spatial resolution urban land-cover mapping: From airborne to spaceborne imagery. Remote Sens. Environ. 2022, 277, 113058.
- Shimoni, M.; Haelterman, R.; Perneel, C. Hypersectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117.
- Wang, D.; Du, B.; Zhang, L. Fully contextual network for hyperspectral scene parsing. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5501316.
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
- Tong, F.; Zhang, Y. Exploiting spectral–spatial information using deep random forest for hyperspectral imagery classification. IEEE Geosci. Remote Sens. Lett. 2021, 19, 5509505.
- Deng, L.; Cao, G.; Xu, L.; Xu, H.; Pan, Q.; Ding, L.; Shang, Y. Hyperspectral image classification based on spectral spatial feature extraction and deep rotation forest ensemble with AdaBoost. In Proceedings of the Fourteenth International Conference on Graphics and Image Processing (ICGIP 2022), Nanjing, China, 21–23 October 2022; Volume 12705, pp. 396–405.
- Wang, X. Hyperspectral image classification powered by khatri-rao decomposition-based multinomial logistic regression. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5530015.
- Pathak, D.K.; Kalita, S.K.; Bhattacharya, D.K. Hyperspectral image classification using support vector machine: A spectral spatial feature based approach. Evol. Intell. 2022, 15, 1809–1823.
- Wang, Q.; Miao, Y.; Chen, M.; Yuan, Y. Spatial-Spectral Clustering with Anchor Graph for Hyperspectral Image. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5542413.
- Wang, Y.; Yu, W.; Fang, Z. Multiple Kernel-Based SVM Classification of Hyperspectral Images by Combining Spectral, Spatial, and Semantic Information. Remote Sens. 2020, 12, 120.
- Thiyaneswaran, B.; Anguraj, K.; Kumarganesh, S.; Thangaraj, K. Early detection of melanoma images using gray level co-occurrence matrix features and machine learning techniques for effective clinical diagnosis. Int. J. Imaging Syst. Technol. 2021, 31, 682–694.
- Vatsavayi, V.K.; Bobbili, C.; Jyothi, V. Models for Exploring the Benefits of using Discrete Wavelet Transformation in HSI. In Proceedings of the International Conference on Signal Processing and Integrated Networks, Noida, India, 21–22 March 2024; pp. 22–27.
- He, L.; Liu, C.; Li, J.; Li, Y.; Li, S.; Yu, Z. Hyperspectral Image Spectral–Spatial-Range Gabor Filtering. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4818–4836.
- Diakite, A.; Gui, J.; Fu, X. Extended Morphological Profile Cube for Hyperspectral Image Classification. TechRxiv 2023.
- Anand, R.; Veni, S.; Geetha, P.; Subramoniam, S.R. Extended morphological profiles analysis of airborne hyperspectral image classification using machine learning algorithms. Int. J. Intell. Netw. 2021, 2, 1–6.
- Dong, Y.; Liu, Q.; Du, B.; Zhang, L. Weighted feature fusion of convolutional neural network and graph attention network for hyperspectral image classification. IEEE Trans. Image Process. 2022, 31, 1559–1572.
- Liu, Q.; Dong, Y.; Zhang, Y.; Luo, H. A Fast Dynamic Graph Convolutional Network and CNN Parallel Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5530215.
- Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A Fast Dense Spectral–Spatial Convolution Network Framework for Hyperspectral Images Classification. Remote Sens. 2018, 10, 1068.
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
- Alkhatib, M.Q.; Al-Saad, M.; Aburaed, N.; Almansoori, S.; Zabalza, J.; Marshall, S.; Al-Ahmad, H. Tri-CNN: A three branch model for hyperspectral image classification. Remote Sens. 2023, 15, 316.
- Yang, J.; Du, B.; Zhang, L. From center to surrounding: An interactive learning framework for hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2023, 197, 145–166.
- Zhang, Y.; Zhang, M.; Li, W.; Wang, S.; Tao, R. Language-aware domain generalization network for cross-scene hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5501312.
- Zhang, C.; Yue, J.; Qin, Q. Global prototypical network for few-shot hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4748–4759.
- Zheng, Z.; Zhong, Y.; Ma, A.; Zhang, L. FPGA: Fast Patch-Free Global Learning Framework for Fully End-to-End Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5612–5626.
- Sun, H.; Zheng, X.; Lu, X. A supervised segmentation network for hyperspectral image classification. IEEE Trans. Image Process. 2021, 30, 2810–2825.
- Liu, B.; Yu, X.; Yu, A.; Zhang, P.; Wan, G.; Wang, R. Deep Few-Shot Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2290–2304.
- Li, Z.; Guo, H.; Chen, Y.; Liu, C.; Du, Q.; Fang, Z.; Wang, Y. Few-Shot Hyperspectral Image Classification With Self-Supervised Learning. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5517917.
- Rai, A.; Lall, B.; Zalani, A.; Prakash, R.; Srivastava, S. Enforcement of DNN with LDA-PCA-ELM for PIE Invariant Few-Shot Face Recognition. In Proceedings of the International Conference on Pattern Recognition and Machine Intelligence, Kolkata, India, 12–15 December 2023; pp. 791–801.
- Huang, Q.; Zhang, H.; Xue, M.; Song, J.; Song, M. A Survey of Deep Learning for Low-shot Object Detection. ACM Comput. Surv. 2023, 56, 1–37.
- Li, X.; Yang, X.; Ma, Z.; Xue, J.-H. Deep metric learning for few-shot image classification: A Review of recent developments. Pattern Recognit. 2023, 138, 109381.
- Billion, P.P.; Prusa, J.D.; Khoshgoftaar, T.M. Low-shot learning and class imbalance: A survey. J. Big Data 2024, 11, 1.
- Liu, Q.; Peng, J.; Ning, Y.; Chen, N.; Sun, W.; Du, Q.; Zhou, Y. Refined Prototypical Contrastive Learning for Few-Shot Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14.
- Xie, Z.; Duan, P.; Liu, W.; Kang, X.; Wei, X.; Li, S. Feature Consistency-Based Prototype Network for Open-Set Hyperspectral Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 9286–9296.
- Shi, M.; Ren, J. A lightweight dense relation network with attention for hyperspectral image few-shot classification. Eng. Appl. Artif. Intell. 2023, 126, 106993.
- Di, X.; Xue, Z.; Zhang, M. Active learning-driven siamese network for hyperspectral image classification. Remote Sens. 2023, 15, 752.
- Zhang, S.; Chen, Z.; Wang, D.; Wang, Z.J. Deep cross-domain few-shot learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
- Li, D.; Shen, Y.; Kong, F.; Liu, J.; Wang, Q. Spectral–Spatial Prototype Learning-Based Nearest Neighbor Classifier for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5502215.
- Liu, Q.; Peng, J.; Chen, N.; Sun, W.; Ning, Y.; Du, Q. Category-Specific Prototype Self-Refinement Contrastive Learning for Few-Shot Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5524416.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
- Xia, M.; Yuan, G.; Yang, L.; Xia, K.; Ren, Y.; Shi, Z.; Zhou, H. Few-Shot Hyperspectral Image Classification Based on Convolutional Residuals and SAM Siamese Networks. Electronics 2023, 12, 3415.
- Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67.
- Cheng, Y.; Zhang, W.; Wang, H.; Wang, X. Causal Meta-transfer Learning for Cross-domain Few-shot Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5521014.
- Xue, Z.; Zhou, Y.; Du, P. S3Net: Spectral–spatial Siamese network for few-shot hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531219.
No. | SVM | 3D-CNN | FPGA | DFSL | DFSL + SVM | DFSL + NN | DCFSL | CMTL | S3Net | CRSSNet | Ours |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 72.20 | 95.12 | 97.83 | 92.68 | 96.75 | 96.75 | 87.80 | 95.12 | 100.00 | 100.00 | 100.00 |
2 | 34.27 | 37.70 | 49.23 | 39.85 | 36.38 | 38.65 | 45.40 | 51.09 | 76.47 | 75.32 | 58.98 |
3 | 39.18 | 19.77 | 42.65 | 56.85 | 38.34 | 42.79 | 57.21 | 88.61 | 57.93 | 62.39 | 56.13 |
4 | 50.34 | 32.51 | 72.99 | 71.55 | 77.16 | 68.10 | 84.91 | 80.17 | 99.44 | 99.66 | 98.92 |
5 | 69.75 | 88.45 | 68.08 | 64.64 | 73.92 | 71.20 | 78.45 | 65.90 | 91.61 | 89.41 | 77.67 |
6 | 66.36 | 73.65 | 68.90 | 89.52 | 86.25 | 76.18 | 88.97 | 94.35 | 93.48 | 90.97 | 80.17 |
7 | 89.13 | 81.82 | 100.00 | 100.00 | 97.10 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
8 | 68.73 | 53.35 | 94.97 | 73.36 | 81.82 | 74.84 | 73.57 | 58.99 | 98.75 | 97.93 | 100.00 |
9 | 86.87 | 100.00 | 100.00 | 100.00 | 75.56 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
10 | 37.49 | 41.35 | 75.61 | 51.09 | 52.22 | 47.98 | 65.25 | 76.32 | 65.64 | 58.13 | 91.69 |
11 | 33.96 | 66.71 | 68.46 | 52.28 | 59.96 | 57.95 | 59.14 | 58.98 | 67.01 | 68.99 | 73.65 |
12 | 31.43 | 37.40 | 54.63 | 29.08 | 36.56 | 38.21 | 43.37 | 71.97 | 64.52 | 60.82 | 82.09 |
13 | 86.50 | 85.71 | 99.02 | 98.00 | 98.00 | 97.50 | 99.00 | 100.00 | 91.50 | 90.30 | 99.90 |
14 | 62.93 | 62.57 | 90.11 | 82.30 | 84.63 | 83.44 | 90.16 | 81.75 | 99.58 | 99.01 | 91.13 |
15 | 28.08 | 56.42 | 95.33 | 54.07 | 74.10 | 62.23 | 62.47 | 87.93 | 92.76 | 95.33 | 99.84 |
16 | 90.91 | 90.36 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 98.86 | 99.32 | 99.20 | 100.00 |
OA% | 45.85 | 54.76 | 68.82 | 59.55 | 61.69 | 59.65 | 66.40 | 71.35 | 78.60 | 78.03 | 78.93 |
AA% | 59.24 | 63.93 | 79.86 | 72.20 | 73.05 | 72.24 | 77.23 | 81.88 | 87.37 | 86.72 | 88.13 |
Kappa% | 39.68 | 48.72 | 64.71 | 54.43 | 56.78 | 54.55 | 63.10 | 67.97 | 75.98 | 75.33 | 76.28 |
No. | SVM | 3D-CNN | FPGA | DFSL | DFSL + SVM | DFSL + NN | DCFSL | CMTL | S3Net | CRSSNet | Ours |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 97.57 | 95.29 | 99.90 | 94.66 | 73.92 | 95.63 | 99.55 | 99.25 | 100.00 | 100.00 | 100.00 |
2 | 87.43 | 97.20 | 89.77 | 99.00 | 96.85 | 99.09 | 99.71 | 99.52 | 99.99 | 99.97 | 99.99 |
3 | 82.95 | 91.45 | 100.00 | 72.35 | 96.28 | 94.01 | 93.68 | 99.95 | 99.99 | 100.00 | 99.85 |
4 | 99.11 | 97.31 | 99.35 | 93.38 | 99.11 | 99.54 | 99.45 | 87.40 | 99.96 | 99.83 | 96.04 |
5 | 94.29 | 91.24 | 95.59 | 83.84 | 80.72 | 90.58 | 90.39 | 94.09 | 95.04 | 95.57 | 98.09 |
6 | 98.36 | 98.80 | 97.62 | 99.37 | 91.63 | 98.47 | 99.27 | 100.00 | 99.67 | 99.55 | 96.15 |
7 | 94.39 | 99.69 | 99.97 | 98.68 | 97.73 | 99.81 | 99.04 | 99.52 | 100.00 | 100.00 | 100.00 |
8 | 59.99 | 66.40 | 49.64 | 68.26 | 82.33 | 77.74 | 72.61 | 85.76 | 88.77 | 88.91 | 90.29 |
9 | 96.09 | 96.25 | 99.78 | 97.42 | 94.44 | 91.13 | 99.74 | 99.77 | 99.85 | 99.71 | 99.30 |
10 | 71.45 | 70.72 | 74.64 | 76.54 | 80.96 | 60.98 | 84.51 | 95.69 | 93.20 | 96.49 | 91.23 |
11 | 91.25 | 93.15 | 100.00 | 98.02 | 93.38 | 95.99 | 98.17 | 99.53 | 99.57 | 99.45 | 100.00 |
12 | 97.22 | 99.65 | 95.38 | 98.49 | 97.94 | 93.13 | 99.04 | 96.20 | 97.65 | 97.97 | 99.98 |
13 | 97.30 | 92.63 | 100.00 | 92.97 | 95.79 | 99.34 | 98.97 | 95.39 | 91.13 | 93.12 | 87.70 |
14 | 91.84 | 93.56 | 98.69 | 98.78 | 98.87 | 98.06 | 97.99 | 99.72 | 98.22 | 98.21 | 100.00 |
15 | 60.52 | 68.02 | 95.04 | 81.34 | 91.13 | 77.54 | 74.12 | 69.89 | 96.66 | 94.24 | 99.95 |
16 | 81.45 | 81.41 | 96.45 | 88.12 | 90.57 | 85.05 | 90.62 | 99.06 | 97.00 | 98.06 | 98.39 |
OA% | 80.71 | 84.20 | 85.88 | 86.22 | 86.95 | 87.05 | 88.53 | 91.73 | 96.13 | 96.20 | 96.61 |
AA% | 87.58 | 89.56 | 93.25 | 90.10 | 90.08 | 91.01 | 93.55 | 95.05 | 97.29 | 97.63 | 97.31 |
Kappa% | 78.61 | 82.46 | 84.41 | 84.74 | 85.51 | 85.63 | 87.27 | 90.79 | 95.70 | 95.73 | 96.23 |
No. | SVM | 3D-CNN | FPGA | DFSL | DFSL + SVM | DFSL + NN | DCFSL | CMTL | S3Net | CRSSNet | Ours |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 88.98 | 59.82 | 75.58 | 69.53 | 73.43 | 69.19 | 81.06 | 89.77 | 90.78 | 80.05 | 91.56 |
2 | 83.91 | 63.05 | 66.68 | 84.46 | 89.25 | 84.63 | 87.74 | 92.96 | 85.38 | 89.48 | 95.57 |
3 | 39.98 | 68.91 | 59.64 | 67.91 | 48.09 | 57.47 | 63.33 | 56.59 | 86.07 | 88.79 | 85.63 |
4 | 60.22 | 77.31 | 78.23 | 76.72 | 84.72 | 89.99 | 92.56 | 92.87 | 96.38 | 91.73 | 77.69 |
5 | 95.44 | 90.77 | 100.00 | 100.00 | 99.65 | 100.00 | 99.01 | 100.00 | 99.66 | 100.00 | 100.00 |
6 | 37.12 | 63.4 | 95.66 | 48.09 | 67.81 | 71.23 | 74.58 | 64.89 | 75.04 | 84.17 | 91.45 |
7 | 40.62 | 87.64 | 85.41 | 69.81 | 64.48 | 70.62 | 77.74 | 87.02 | 99.91 | 99.99 | 100.00 |
8 | 68.17 | 57.27 | 72.70 | 80.61 | 67.37 | 58.13 | 62.42 | 87.33 | 87.23 | 86.29 | 76.91 |
9 | 99.03 | 95.57 | 99.37 | 86.31 | 92.92 | 96.92 | 98.22 | 94.37 | 98.13 | 93.94 | 82.38 |
OA% | 64.12 | 65.74 | 75.25 | 76.24 | 79.63 | 77.75 | 83.05 | 86.97 | 87.16 | 88.00 | 91.08 |
AA% | 68.18 | 73.72 | 82.16 | 75.94 | 76.41 | 77.57 | 81.85 | 85.09 | 90.95 | 90.49 | 89.02 |
Kappa% | 55.59 | 57.37 | 69.10 | 68.59 | 73.05 | 71.11 | 77.46 | 85.08 | 83.42 | 84.34 | 88.21 |
 | S3Net | CRSSNet | DCFSL | AGPS
---|---|---|---|---
IP-Test time (s) | 5.47 | 13.58 | 1.91 | 0.17
IP-GFLOPs | 777.807 | 5790.635 | 438.894 | 16.235
UP-Test time (s) | 17.11 | 32.92 | 7.19 | 0.63
UP-GFLOPs | 1724.580 | 12,161.884 | 1810.940 | 139.328
SA-Test time (s) | 29.21 | 68.01 | 10.53 | 0.79
SA-GFLOPs | 4134.996 | 30,777.663 | 2334.755 | 72.740
 | Baseline | NO FLC | NO ASPP-PA | NO CL | AGPS
---|---|---|---|---|---
IP (OA%) | 65.23 | 72.97 | 76.87 | 74.26 | 78.93 |
UP (OA%) | 62.68 | 64.08 | 88.24 | 89.16 | 91.08 |
SA (OA%) | 85.84 | 89.99 | 93.45 | 93.75 | 96.61 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Xia, K.; Yuan, G.; Xia, M.; Li, X.; Gui, J.; Zhou, H. Advanced Global Prototypical Segmentation Framework for Few-Shot Hyperspectral Image Classification. Sensors 2024, 24, 5386. https://doi.org/10.3390/s24165386