
Search Results (1,139)

Search Parameters:
Keywords = hyperspectral imaging (HSI)

20 pages, 3459 KB  
Article
Diagnosis of Potassium Content in Rubber Leaves Based on Spatial–Spectral Feature Fusion at the Leaf Scale
by Xiaochuan Luo, Rongnian Tang, Chuang Li and Cheng Qian
Remote Sens. 2025, 17(17), 2977; https://doi.org/10.3390/rs17172977 - 27 Aug 2025
Abstract
Hyperspectral imaging (HSI) technology has attracted extensive attention in the field of nutrient diagnosis for rubber leaves. However, the mainstream method of extracting leaf average spectra ignores the spatial information in hyperspectral images and dilutes the response characteristics of nutrient-sensitive local areas of leaves, thereby limiting modeling accuracy. This study proposes a spatial–spectral feature fusion method based on leaf-scale sub-region segmentation. It introduces a clustering algorithm to divide leaf pixel spectra into several subclasses and segments sub-regions on the leaf surface based on the clustering results. By optimizing the modeling contribution weights of leaf sub-regions, it improves the modeling and generalization accuracy of potassium diagnosis for rubber leaves. Verification experiments show that the proposed spatial–spectral feature fusion method outperforms average spectral modeling. Specifically, after pixel-level MSC preprocessing, with the spectra of rubber leaf pixel regions clustered into nine subsets, the diagnostic accuracy of potassium content in rubber leaves reached 0.97, compared with 0.87 for average spectral modeling. Precision, macro-F1, and macro-recall likewise all reached 0.97, again surpassing average spectral modeling. The proposed method also outperforms the spatial–spectral feature fusion method that integrates texture features. Visualization of the leaf sub-region weights showed that strengthening the modeling contribution of leaf edge regions improves the diagnostic accuracy of potassium in rubber leaves, which is consistent with the response pattern of leaves to potassium. Full article
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)
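The pipeline this abstract outlines (pixel-level MSC, clustering pixel spectra into subclasses, then weighting sub-region spectra) can be sketched in a few lines of NumPy. This is a toy reconstruction on synthetic data, not the authors' code; the leaf spectra, the tiny k-means, and the uniform weights are all illustrative stand-ins:

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each pixel spectrum
    on the mean spectrum, then remove the fitted offset and slope."""
    ref = spectra.mean(axis=0)
    out = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)   # s ~ intercept + slope * ref
        out[i] = (s - intercept) / slope
    return out

def kmeans(X, k, iters=50, seed=0):
    """Tiny k-means over the rows of X (pixel spectra)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Synthetic leaf: 200 pixel spectra over 50 bands sharing one shape,
# with per-pixel multiplicative/additive scatter.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0.0, 3.0, 50)) + 1.5
pixels = (rng.uniform(0.8, 1.2, (200, 1)) * base
          + rng.uniform(-0.1, 0.1, (200, 1))
          + rng.normal(0.0, 0.02, (200, 50)))

pixels_msc = msc(pixels)
labels = kmeans(pixels_msc, k=9)               # nine subclasses, as in the paper
present = np.unique(labels)
sub_means = np.stack([pixels_msc[labels == c].mean(axis=0) for c in present])
weights = np.full(len(present), 1.0 / len(present))  # uniform stand-in for learned weights
fused = weights @ sub_means                    # one fused spectrum per leaf
```

In the paper the contribution weights are optimized rather than uniform; replacing `weights` with learned values is where the reported gain over plain average-spectrum modeling comes from.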

24 pages, 5558 KB  
Review
Advances in Hyperspectral Imaging Technology for Grain Quality and Safety Detection: A Review
by Yuting Liang, Zhihua Li, Jiyong Shi, Ning Zhang, Zhou Qin, Liuzi Du, Xiaodong Zhai, Tingting Shen, Roujia Zhang, Xiaobo Zou and Xiaowei Huang
Foods 2025, 14(17), 2977; https://doi.org/10.3390/foods14172977 - 26 Aug 2025
Abstract
This review provides an overview of recent advancements in hyperspectral imaging (HSI) technology for grain quality and safety detection, focusing on its impact on global food security and economic stability. Traditional methods for grain quality assessment are labor-intensive, time-consuming, and destructive, whereas HSI offers a non-destructive, efficient, and rapid alternative by integrating spatial and spectral data. Over the past five years, HSI has made significant strides in several key areas, including disease detection, quality assessment, physicochemical property analysis, pesticide residue identification, and geographic origin determination. Despite its potential, challenges such as high costs, complex data processing, and the lack of standardized models limit its widespread adoption. This review highlights these advancements, identifies current limitations, and discusses the future implications of HSI in enhancing food safety, traceability, and sustainability in the grain industry. Full article

18 pages, 6210 KB  
Article
A Non-Destructive System Using UVE Feature Selection and Lightweight Deep Learning to Assess Wheat Fusarium Head Blight Severity Levels
by Xiaoying Liang, Shuo Yang, Lin Mu, Huanrui Shi, Zhifeng Yao and Xu Chen
Agronomy 2025, 15(9), 2051; https://doi.org/10.3390/agronomy15092051 - 26 Aug 2025
Abstract
Fusarium head blight (FHB), a globally significant agricultural disaster, causes annual losses of dozens of millions of tons of wheat. Toxins produced by FHB, such as deoxynivalenol, further pose serious threats to human and livestock health. Consequently, rapid and non-destructive determination of FHB severity is crucial for implementing timely and precise control measures, thereby ensuring wheat supply security. This study therefore adopts hyperspectral imaging (HSI) combined with a lightweight deep learning model. First, wheat ears were inoculated with Fusarium fungi at the spike's midpoint, and HSI data were acquired, yielding 1660 samples representing varying disease severities. By integrating multiplicative scatter correction (MSC) and uninformative variable elimination (UVE), features are extracted from the spectral data in a way that minimizes feature dimensionality while preserving high classification accuracy. Finally, a lightweight FHB severity discrimination model based on MobileNetV2 was developed and deployed as an easy-to-use analysis system. Analysis revealed that the UVE-selected characteristic bands for FHB severity predominantly fell within 590–680 nm (related to chlorophyll degradation), within 930–1043 nm (related to water stress), and at 738 nm (related to cell wall polysaccharide decomposition). This distribution aligns with the synergistic effect of rapid chlorophyll degradation and structural damage accompanying disease progression. The resulting MobileNetV2 model achieved a mean average precision (mAP) of 99.93% on the training set and 98.26% on the independent test set. Crucially, it maintains an 8.50 MB parameter size and processes data 2.36 times faster, making it well suited for field-deployed equipment by balancing accuracy and operational efficiency.
This advancement empowers agricultural workers to implement timely control measures, dramatically improving precision alongside optimized field deployment. Full article
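Uninformative variable elimination, as used here alongside MSC, rates each band by the stability of its regression coefficient and discards bands that score no better than artificial noise variables. Below is a simplified sketch using bootstrapped least squares in place of the PLS/cross-validation procedure typically used for UVE; the synthetic data and the noise scaling are illustrative assumptions:

```python
import numpy as np

def uve_select(X, y, n_noise=None, n_boot=30, seed=0):
    """Simplified UVE: append artificial noise bands, refit a linear model
    on bootstrap resamples, and keep only the real bands whose coefficient
    stability |mean| / std beats the best-scoring noise band."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    n_noise = p if n_noise is None else n_noise
    Xa = np.hstack([X, rng.normal(size=(n, n_noise)) * 1e-3])
    coefs = np.empty((n_boot, p + n_noise))
    for b in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)
        coefs[b] = np.linalg.lstsq(Xa[idx], y[idx], rcond=None)[0]
    stability = np.abs(coefs.mean(axis=0)) / (coefs.std(axis=0) + 1e-12)
    return np.where(stability[:p] > stability[p:].max())[0]

# Synthetic "spectra": 120 samples x 20 bands, only bands 4 and 11 informative
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 20))
y = 3.0 * X[:, 4] - 2.0 * X[:, 11] + 0.05 * rng.normal(size=120)
kept = uve_select(X, y)
```

On data like this the informative bands survive while most uninformative ones fall below the noise cutoff, which is the dimensionality reduction the abstract refers to.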

19 pages, 4004 KB  
Article
Spectral-Spatial Fusion for Soybean Quality Evaluation Using Hyperspectral Imaging
by Md Bayazid Rahman, Ahmad Tulsi and Abdul Momin
AgriEngineering 2025, 7(9), 274; https://doi.org/10.3390/agriengineering7090274 - 25 Aug 2025
Abstract
Accurate postharvest quality evaluation of soybeans is essential for preserving product value and meeting industry standards. Traditional inspection methods are often inconsistent, labor-intensive, and unsuitable for high-throughput operations. This study presents a non-destructive soybean classification approach using a simplified reflectance-mode hyperspectral imaging system equipped with a single light source, eliminating the complexity and maintenance demands of dual-light configurations used in prior studies. A spectral–spatial data fusion strategy was developed to classify harvested soybeans into four categories: normal, split, diseased, and foreign materials such as stems and pods. The dataset consisted of 1140 soybean samples distributed across these four categories, with spectral reflectance features and spatial texture attributes extracted from each sample. These features were combined to form a unified feature representation for use in classification. Among multiple machine learning classifiers evaluated, Linear Discriminant Analysis (LDA) achieved the highest performance, with approximately 99% accuracy, 99.05% precision, 99.03% recall and 99.03% F1-score. When evaluated independently, spectral features alone resulted in 98.93% accuracy, while spatial features achieved 78.81%, highlighting the benefit of the fusion strategy. Overall, this study demonstrates that a single-illumination HSI system, combined with spectral–spatial fusion and machine learning, offers a practical and potentially scalable approach for non-destructive soybean quality evaluation, with applicability in automated industrial processing environments. Full article
(This article belongs to the Special Issue Latest Research on Post-Harvest Technology to Reduce Food Loss)
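The spectral–spatial fusion strategy described above amounts to concatenating per-sample spectral statistics with spatial texture attributes into one feature vector. A minimal sketch follows, with hypothetical feature choices (band-wise mean/std reflectance, gradient energy, and an intensity histogram) standing in for the paper's exact attributes:

```python
import numpy as np

def spectral_features(cube):
    """Per-band mean and standard deviation of reflectance over all pixels."""
    flat = cube.reshape(-1, cube.shape[-1])
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def spatial_features(cube):
    """Crude texture attributes of the band-averaged intensity image:
    gradient energy plus an 8-bin intensity histogram."""
    img = cube.mean(axis=-1)
    gy, gx = np.gradient(img)
    hist, _ = np.histogram(img, bins=8, range=(0.0, 1.0), density=True)
    return np.concatenate([[np.mean(gx ** 2 + gy ** 2)], hist])

def fused_features(cube):
    """Spectral-spatial fusion: concatenate both feature groups."""
    return np.concatenate([spectral_features(cube), spatial_features(cube)])

# Hypothetical soybean sample: a 16x16-pixel crop with 30 bands
cube = np.random.default_rng(3).random((16, 16, 30))
f = fused_features(cube)            # 60 spectral + 9 spatial = 69 features
```

The fused vector `f` would then be fed to a classifier such as LDA (e.g., scikit-learn's `LinearDiscriminantAnalysis`), which the study found most accurate.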

28 pages, 7371 KB  
Article
Deep Fuzzy Fusion Network for Joint Hyperspectral and LiDAR Data Classification
by Guangen Liu, Jiale Song, Yonghe Chu, Lianchong Zhang, Peng Li and Junshi Xia
Remote Sens. 2025, 17(17), 2923; https://doi.org/10.3390/rs17172923 - 22 Aug 2025
Abstract
Recently, Transformers have made significant progress in the joint classification task of HSI and LiDAR due to their efficient modeling of long-range dependencies and adaptive feature learning mechanisms. However, existing methods face two key challenges: first, the feature extraction stage does not explicitly model category ambiguity; second, the feature fusion stage lacks a dynamic perception mechanism for inter-modal differences and uncertainties. To this end, this paper proposes a Deep Fuzzy Fusion Network (DFNet) for the joint classification of hyperspectral and LiDAR data. DFNet adopts a dual-branch architecture, integrating CNN and Transformer structures, respectively, to extract multi-scale spatial–spectral features from hyperspectral and LiDAR data. To enhance the model’s discriminative robustness in ambiguous regions, both branches incorporate fuzzy learning modules that model class uncertainty through learnable Gaussian membership functions. In the modality fusion stage, a Fuzzy-Enhanced Cross-Modal Fusion (FECF) module is designed, which combines membership-aware attention mechanisms with fuzzy inference operators to achieve dynamic adjustment of modality feature weights and efficient integration of complementary information. DFNet, through a hierarchical design, realizes uncertainty representation within and fusion control between modalities. The proposed DFNet is evaluated on three public datasets, and the extensive experimental results indicate that the proposed DFNet considerably outperforms other state-of-the-art methods. Full article
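The learnable Gaussian membership functions used by DFNet's fuzzy learning modules assign each feature a soft degree of belonging to every class. A minimal sketch, where the centers and widths are hypothetical stand-ins for learned parameters:

```python
import numpy as np

def gaussian_membership(x, centers, widths):
    """Fuzzy membership of each feature value in each class: one Gaussian
    bump per class, with (normally learnable) center and width."""
    return np.exp(-((x[..., None] - centers) ** 2) / (2.0 * widths ** 2))

# Three classes with hypothetical stand-ins for learned parameters
centers = np.array([0.2, 0.5, 0.8])
widths = np.array([0.1, 0.1, 0.1])
mu = gaussian_membership(np.array([0.45]), centers, widths)
# the class whose center is nearest the feature value gets the largest membership
```

Because memberships degrade smoothly rather than switching hard at a boundary, downstream fusion can weight ambiguous pixels by how uncertain they are, which is the behavior the abstract attributes to the fuzzy modules.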

21 pages, 4917 KB  
Article
A High-Capacity Reversible Data Hiding Scheme for Encrypted Hyperspectral Images Using Multi-Layer MSB Block Labeling and ERLE Compression
by Yijie Lin, Chia-Chen Lin, Zhe-Min Yeh, Ching-Chun Chang and Chin-Chen Chang
Future Internet 2025, 17(8), 378; https://doi.org/10.3390/fi17080378 - 21 Aug 2025
Abstract
In the context of secure and efficient data transmission over the future Internet, particularly for remote sensing and geospatial applications, reversible data hiding (RDH) in encrypted hyperspectral images (HSIs) has emerged as a critical technology. This paper proposes a novel RDH scheme specifically designed for encrypted HSIs, offering enhanced embedding capacity without compromising data security or reversibility. The approach introduces a multi-layer block labeling mechanism that leverages the similarity of most significant bits (MSBs) to accurately locate embeddable regions. To minimize auxiliary information overhead, we incorporate an Extended Run-Length Encoding (ERLE) algorithm for effective label map compression. The proposed method achieves embedding rates of up to 3.79 bits per pixel per band (bpppb), while ensuring high-fidelity reconstruction, as validated by strong PSNR metrics. Comprehensive security evaluations using NPCR, UACI, and entropy confirm the robustness of the encryption. Extensive experiments across six standard hyperspectral datasets demonstrate the superiority of our method over existing RDH techniques in terms of capacity, embedding rate, and reconstruction quality. These results underline the method’s potential for secure data embedding in next-generation Internet-based geospatial and remote sensing systems. Full article
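ERLE compresses the MSB label map before embedding. The paper's extended variant is not spelled out here, but its basis, plain run-length encoding of a binary label map, is easy to sketch:

```python
def rle_encode(bits):
    """Plain run-length encoding of a binary label map; the paper's ERLE
    extends this idea with a more compact code for long runs."""
    runs, cur, count = [], bits[0], 1
    for b in bits[1:]:
        if b == cur:
            count += 1
        else:
            runs.append((cur, count))
            cur, count = b, 1
    runs.append((cur, count))
    return runs

def rle_decode(runs):
    """Exact inverse: expand each (value, run length) pair."""
    return [v for v, n in runs for _ in range(n)]

label_map = [1, 1, 1, 1, 0, 0, 1, 1, 1, 0]
runs = rle_encode(label_map)   # [(1, 4), (0, 2), (1, 3), (0, 1)]
```

Lossless round-tripping of the label map is what preserves the reversibility guarantee: the decoder must recover exactly which blocks carry payload bits.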

24 pages, 7251 KB  
Article
WTCMC: A Hyperspectral Image Classification Network Based on Wavelet Transform Combining Mamba and Convolutional Neural Networks
by Guanchen Liu, Qiang Zhang, Xueying Sun and Yishuang Zhao
Electronics 2025, 14(16), 3301; https://doi.org/10.3390/electronics14163301 - 20 Aug 2025
Abstract
Hyperspectral images are rich in spectral and spatial information. However, their high dimensionality and complexity pose significant challenges for effective feature extraction. Specifically, the performance of existing models for hyperspectral image (HSI) classification remains constrained by spectral redundancy among adjacent bands, misclassification at object boundaries, and significant noise in hyperspectral data. To address these challenges, we propose WTCMC—a novel hyperspectral image classification network based on wavelet transform combining Mamba and convolutional neural networks. To establish robust shallow spatial–spectral relationships, we introduce a shallow feature extraction module (SFE) at the initial stage of the network. To enable the comprehensive and efficient capture of both spectral and spatial characteristics, our architecture incorporates a low-frequency spectral Mamba module (LFSM) and a high-frequency multi-scale convolution module (HFMC). The wavelet transform suppresses noise for LFSM and enhances fine spatial and contour features for HFMC. Furthermore, we devise a spectral–spatial complementary fusion module (SCF) that selectively preserves the most discriminative spectral and spatial features. Experimental results demonstrate that the proposed WTCMC network attains overall accuracies (OA) of 98.94%, 98.67%, and 97.50% on the Pavia University (PU), Botswana (BS), and Indian Pines (IP) datasets, respectively, outperforming the compared state-of-the-art methods. Full article
(This article belongs to the Section Artificial Intelligence)
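The wavelet transform that feeds LFSM (low-frequency) and HFMC (high-frequency) splits a signal into a smooth approximation and detail coefficients. A one-level Haar transform, the simplest case, illustrates the split; the abstract does not state which wavelet the paper actually uses:

```python
import numpy as np

def haar_dwt_1d(x):
    """One-level Haar transform: pairwise average (low-pass / approximation)
    and pairwise difference (high-pass / detail)."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def haar_idwt_1d(lo, hi):
    """Exact inverse of the one-level Haar transform."""
    out = np.empty(2 * len(lo))
    out[0::2] = (lo + hi) / np.sqrt(2.0)
    out[1::2] = (lo - hi) / np.sqrt(2.0)
    return out

# A piecewise-constant "spectrum": every adjacent pair is equal,
# so all detail (high-frequency) coefficients vanish.
sig = np.array([4.0, 4.0, 6.0, 6.0, 1.0, 1.0, 3.0, 3.0])
lo, hi = haar_dwt_1d(sig)
```

Noise concentrates in the detail band while the approximation keeps the broad spectral shape, which is why the network routes the low-pass output to its spectral (Mamba) branch and the high-pass output to its multi-scale convolution branch.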

26 pages, 36602 KB  
Article
FE-MCFN: Fuzzy-Enhanced Multi-Scale Cross-Modal Fusion Network for Hyperspectral and LiDAR Joint Data Classification
by Shuting Wei, Mian Jia and Junyi Duan
Algorithms 2025, 18(8), 524; https://doi.org/10.3390/a18080524 - 18 Aug 2025
Abstract
With the rapid advancement of remote sensing technologies, the joint classification of hyperspectral image (HSI) and LiDAR data has become a key research focus in the field. Classification is hampered by the inherent uncertainties of hyperspectral images, such as the "same spectrum, different materials" and "same material, different spectra" phenomena, as well as by the complexity of spectral features; furthermore, existing multimodal fusion approaches often fail to fully leverage the complementary advantages of hyperspectral and LiDAR data. To address these issues, we propose a fuzzy-enhanced multi-scale cross-modal fusion network (FE-MCFN) designed to achieve joint classification of hyperspectral and LiDAR data. The FE-MCFN enhances convolutional neural networks through the application of fuzzy theory and effectively integrates global contextual information via a cross-modal attention mechanism. The fuzzy learning module utilizes a Gaussian membership function to assign weights to features, thereby adeptly capturing uncertainties and subtle distinctions within the data. To maximize the complementary advantages of multimodal data, a fuzzy fusion module grounded in fuzzy rules integrates multimodal features across various scales while taking into account both local features and global information, ultimately enhancing the model's classification performance. Experimental results obtained from the Houston2013, Trento, and MUUFL datasets demonstrate that the proposed method outperforms current state-of-the-art classification techniques, thereby validating its effectiveness and applicability across diverse scenarios. Full article
(This article belongs to the Section Databases and Data Structures)

20 pages, 13547 KB  
Article
Hyperspectral Image Denoising via Low-Rank Tucker Decomposition with Subspace Implicit Neural Representation
by Cheng Cheng, Dezhi Sun, Yaoyuan Yang, Zhoucheng Guo and Jiangjun Peng
Remote Sens. 2025, 17(16), 2867; https://doi.org/10.3390/rs17162867 - 18 Aug 2025
Abstract
Hyperspectral image (HSI) denoising is an important preprocessing step for downstream applications. Fully characterizing the spatial-spectral priors of HSI is crucial for denoising tasks. In recent years, denoising methods based on low-rank subspaces have garnered widespread attention. In the low-rank matrix factorization framework, the restoration of HSI can be formulated as a task of recovering two subspace factors. However, hyperspectral images are inherently three-dimensional tensors, and transforming the tensor into a matrix for operations inevitably disrupts the spatial structure of the data. To address this issue and better capture the spatial-spectral priors of HSI, this paper proposes a modeling approach named low-rank Tucker decomposition with subspace implicit neural representation (LRTSINR). This data-driven and model-driven joint modeling mechanism has the following two advantages: (1) Tucker decomposition allows for the characterization of the low-rank properties across multiple dimensions of the HSI, leading to a more accurate representation of spectral priors; (2) Implicit neural representation enables the adaptive and precise characterization of the subspace factor continuity under Tucker decomposition. Extensive experiments demonstrate that our method outperforms a series of competing methods. Full article
(This article belongs to the Section Remote Sensing Image Processing)
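Tucker decomposition factors the HSI cube into a small core tensor and one factor matrix per mode. A truncated HOSVD, the standard non-iterative way to compute such a decomposition, can be sketched in NumPy; the tensor sizes are illustrative and this omits the paper's implicit-neural-representation component entirely:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD, a simple non-iterative Tucker decomposition:
    per-mode SVD gives the factor matrices, projection gives the core."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core by every factor matrix to rebuild the tensor."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# A synthetic 6x5x4 "mini-HSI cube" with multilinear rank (2, 2, 2)
rng = np.random.default_rng(4)
G = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(d, 2)) for d in (6, 5, 4))
T = tucker_reconstruct(G, [A, B, C])

core, factors = hosvd(T, ranks=(2, 2, 2))
```

When the truncation ranks match the data's multilinear rank, as in this synthetic example, the reconstruction is exact; for noisy HSI, truncating the ranks below the noise level is what supplies the low-rank denoising prior the abstract describes.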

21 pages, 3126 KB  
Article
CViT Weakly Supervised Network Fusing Dual-Branch Local-Global Features for Hyperspectral Image Classification
by Wentao Fu, Xiyan Sun, Xiuhua Zhang, Yuanfa Ji and Jiayuan Zhang
Entropy 2025, 27(8), 869; https://doi.org/10.3390/e27080869 - 15 Aug 2025
Abstract
In hyperspectral image (HSI) classification, feature learning and label accuracy play a crucial role. In actual hyperspectral scenes, however, noisy labels are unavoidable and seriously impact the performance of methods. While deep learning has achieved remarkable results in HSI classification tasks, its noise-resistant performance usually comes at the cost of feature representation capabilities. High-dimensional and deep convolution can capture rich deep semantic features, but with high complexity and resource consumption. To deal with these problems, we propose a CViT Weakly Supervised Network (CWSN) for HSI classification. Specifically, a lightweight 1D-2D two-branch network is used for local generalization and enhancement of spatial–spectral features. Then, the fusion and characterization of local and global features are achieved through the CNN-Vision Transformer (CViT) cascade strategy. The experimental results on four benchmark HSI datasets show that CWSN has good anti-noise ability and ensures the robustness and versatility of the network facing both clean and noisy training sets. Compared to other methods, the CWSN has better classification accuracy. Full article
(This article belongs to the Section Signal and Data Analysis)

20 pages, 7578 KB  
Article
Cross Attention Based Dual-Modality Collaboration for Hyperspectral Image and LiDAR Data Classification
by Khanzada Muzammil Hussain, Keyun Zhao, Yang Zhou, Aamir Ali and Ying Li
Remote Sens. 2025, 17(16), 2836; https://doi.org/10.3390/rs17162836 - 15 Aug 2025
Abstract
Advancements in satellite sensor technology have enabled access to diverse remote sensing (RS) data from multiple platforms. Hyperspectral Image (HSI) data offers rich spectral detail for material identification, while LiDAR captures high-resolution 3D structural information, making the two modalities naturally complementary. By fusing HSI and LiDAR, we can mitigate the limitations of each and improve tasks like land cover classification, vegetation analysis, and terrain mapping through more robust spectral–spatial feature representation. However, traditional multi-scale feature fusion models often struggle to align features effectively, which can lead to redundant outputs and diminished spatial clarity. To address these issues, we propose the Cross Attention Bridge for HSI and LiDAR (CAB-HL), a novel dual-path framework that employs a multi-stage cross-attention mechanism to guide the interaction between spectral and spatial features. In CAB-HL, features from each modality are refined across three progressive stages using cross-attention modules, which enhance contextual alignment while preserving the distinctive characteristics of each modality. These fused representations are subsequently integrated and passed through a lightweight classification head. Extensive experiments on three benchmark RS datasets demonstrate that CAB-HL consistently outperforms existing state-of-the-art models, confirming its ability to learn deep joint representations for multimodal classification tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
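Cross-attention, the core of CAB-HL's bridge modules, lets tokens from one modality query tokens of the other. A single-head sketch with random stand-ins for the learned projection matrices (the token counts and dimensions are illustrative, not the paper's configuration):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """Single-head cross-attention: one modality's tokens form the queries,
    the other modality's tokens supply the keys and values."""
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # (n_q, n_kv)
    return scores @ V

rng = np.random.default_rng(5)
hsi_tokens = rng.normal(size=(16, 32))    # 16 spatial tokens, 32-d HSI features
lidar_tokens = rng.normal(size=(16, 8))   # matching LiDAR tokens, 8-d features
Wq, Wk, Wv = (rng.normal(size=(d, 24)) for d in (32, 8, 8))  # stand-ins for learned projections
fused = cross_attention(hsi_tokens, lidar_tokens, Wq, Wk, Wv)
```

Each HSI token ends up as an attention-weighted mixture of LiDAR values, which is the "contextual alignment" role the abstract assigns to the bridge; running the same block in the other direction gives the LiDAR-queries-HSI path.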

27 pages, 9913 KB  
Article
BioLiteNet: A Biomimetic Lightweight Hyperspectral Image Classification Model
by Bo Zeng, Suwen Chao, Jialang Liu, Yanming Guo, Yingmei Wei, Huimin Yi, Bin Xie, Yaowen Hu and Lin Li
Remote Sens. 2025, 17(16), 2833; https://doi.org/10.3390/rs17162833 - 14 Aug 2025
Abstract
Hyperspectral imagery (HSI) has demonstrated significant potential in remote sensing applications because of its abundant spectral and spatial information. However, current mainstream hyperspectral image classification models are generally characterized by high computational complexity, structural intricacy, and a strong reliance on training samples, which poses challenges in meeting application demands under resource-constrained conditions. To this end, a lightweight hyperspectral image classification model inspired by bionic design, named BioLiteNet, is proposed, aimed at enhancing overall performance in terms of both accuracy and computational efficiency. The model is composed of two key modules: BeeSenseSelector (Channel Attention Screening) and AffScaleConv (Scale-Adaptive Convolutional Fusion). The former mimics the selective attention mechanism observed in honeybee vision to dynamically select critical spectral channels, while the latter enables efficient fusion of spatial and spectral features through multi-scale depthwise separable convolution. On multiple hyperspectral benchmark datasets, BioLiteNet demonstrates outstanding classification performance while maintaining exceptionally low computational costs. Experimental results show that BioLiteNet maintains high classification accuracy across different datasets even when using only a small number of labeled samples. Specifically, it achieves overall accuracies (OA) of 90.02% ± 0.97%, 88.20% ± 5.26%, and 78.64% ± 7.13% on the Indian Pines, Pavia University, and WHU-Hi-LongKou datasets using just 5% of samples, 10% of samples, and 25 samples per class, respectively. Moreover, BioLiteNet consistently requires fewer computational resources than the other compared models.
The results indicate that the lightweight hyperspectral image classification model proposed in this study significantly reduces the requirements for computational resources and storage while ensuring classification accuracy, making it well-suited for remote sensing applications under resource constraints. The experimental results further support these findings by demonstrating its robustness and practicality, thereby offering a novel solution for hyperspectral image classification tasks. Full article
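The multi-scale depthwise separable convolution in AffScaleConv owes its low cost to splitting a standard convolution into a per-channel spatial filter and a 1x1 channel mixer. A single-scale NumPy sketch with illustrative shapes (the real module stacks several kernel sizes):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise conv (one k x k kernel per input channel, 'valid' padding)
    followed by a pointwise 1x1 conv that mixes channels."""
    C, H, W = x.shape
    k = dw_kernels.shape[-1]
    Ho, Wo = H - k + 1, W - k + 1
    dw = np.zeros((C, Ho, Wo))
    for c in range(C):                    # depthwise: channels stay separate
        for i in range(Ho):
            for j in range(Wo):
                dw[c, i, j] = (x[c, i:i + k, j:j + k] * dw_kernels[c]).sum()
    return np.tensordot(pw_weights, dw, axes=1)   # pointwise channel mixing

rng = np.random.default_rng(6)
x = rng.normal(size=(4, 8, 8))            # 4 input channels, 8x8 patch
dw_kernels = rng.normal(size=(4, 3, 3))   # one 3x3 kernel per channel
pw_weights = rng.normal(size=(6, 4))      # expand 4 -> 6 output channels
y = depthwise_separable_conv(x, dw_kernels, pw_weights)
```

For these shapes the split needs 4·3·3 + 6·4 = 60 weights versus 6·4·3·3 = 216 for a full convolution, which is the source of the savings that lightweight models of this kind rely on.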

19 pages, 2896 KB  
Article
Multimodal Prompt Tuning for Hyperspectral and LiDAR Classification
by Zhengyu Liu, Xia Yuan, Shuting Yang, Guanyiman Fu, Chunxia Zhao and Fengchao Xiong
Remote Sens. 2025, 17(16), 2826; https://doi.org/10.3390/rs17162826 - 14 Aug 2025
Abstract
The joint classification of hyperspectral imaging (HSI) and Light Detection and Ranging (LiDAR) data holds significant importance for various practical uses, including urban mapping, mineral prospecting, and ecological observation. Achieving robust and transferable feature representations is essential to fully leverage the complementary properties of HSI and LiDAR modalities. However, existing methods are often constrained to scene-specific training and lack generalizability across datasets, limiting their discriminative power. To tackle this challenge, we introduce a new dual-phase approach for the combined classification of HSI and LiDAR data. Initially, a transformer-driven network is trained on various HSI-only datasets to extract universal spatial–spectral features. In the second stage, LiDAR data is incorporated as a task-specific prompt to adapt the model to HSI-LiDAR scenes and enable effective multimodal fusion. Through extensive testing on three benchmark datasets, our framework proves highly effective, outperforming all competing approaches. Full article

14 pages, 2118 KB  
Article
Joint Spectral–Spatial Representation Learning for Unsupervised Hyperspectral Image Clustering
by Xuanhao Liu, Taimao Wang and Xiaofeng Wang
Appl. Sci. 2025, 15(16), 8935; https://doi.org/10.3390/app15168935 - 13 Aug 2025
Abstract
Hyperspectral image (HSI) clustering has attracted significant attention due to its broad applications in agricultural monitoring, environmental protection, and other fields. However, the integration of high-dimensional spectral and spatial information remains a major challenge, often resulting in unstable clustering and poor generalization under noisy or redundant conditions. To address these challenges, we propose a Joint Spectral–Spatial Representation Learning (JSRL) framework for robust hyperspectral image clustering. We first perform spectral clustering to generate pseudo-labels and guide a residual Graph Attention Network (GAT) that jointly refines pixel-level spectral and spatial features. We then aggregate pixels into superpixels and employ a Variational Graph Autoencoder (VGAE) to learn structure-aware representations, further optimized via a quantum-behaved particle swarm optimization (QPSO) strategy. This hierarchical architecture not only mitigates spectral redundancy and reinforces spatial coherence, but also enables more robust and generalizable clustering across diverse HSI scenarios. Extensive experiments on multiple benchmark HSI datasets demonstrate that JSRL consistently achieves state-of-the-art performance, highlighting its robustness and generalization capability across diverse clustering scenarios. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
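The first stage of JSRL generates pseudo-labels by spectral clustering. A self-contained sketch (Gaussian affinity graph, normalized Laplacian embedding, then k-means with farthest-point initialization) on two synthetic pixel groups; the GAT/VGAE refinement and QPSO stages are beyond a few lines and are omitted:

```python
import numpy as np

def spectral_pseudo_labels(X, k, sigma=1.0, iters=50):
    """Pseudo-labels via spectral clustering: Gaussian affinity graph,
    normalized Laplacian embedding, then k-means on the embedding."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    D = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(D[:, None] * D[None])  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                             # ascending eigenvalues
    emb = vecs[:, :k]
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    centers = emb[[0]]
    for _ in range(k - 1):                # farthest-point initialization
        dist = ((emb[:, None] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, emb[dist.argmax()]])
    for _ in range(iters):                # plain k-means updates
        labels = ((emb[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = emb[labels == c].mean(axis=0)
    return labels

# Two well-separated synthetic pixel groups (20 pixels each, 5 "bands")
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)), rng.normal(3.0, 0.1, (20, 5))])
labels = spectral_pseudo_labels(X, k=2)
```

In the paper these pseudo-labels supervise the GAT that refines pixel-level features, so their quality bounds what the later stages can recover.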

20 pages, 4191 KB  
Article
A Deep Transfer Contrastive Learning Network for Few-Shot Hyperspectral Image Classification
by Gan Yang and Zhaohui Wang
Remote Sens. 2025, 17(16), 2800; https://doi.org/10.3390/rs17162800 - 13 Aug 2025
Abstract
Over recent decades, the hyperspectral image (HSI) classification landscape has undergone significant transformations driven by advances in deep learning (DL). Despite substantial progress, few-shot scenarios remain a significant challenge, primarily due to the high cost of manual annotation and the unreliability of visual interpretation. Traditional DL models require massive datasets to learn sophisticated feature representations, hindering their full potential in data-scarce contexts. To tackle this issue, a deep transfer contrastive learning network is proposed. A spectral data augmentation module is incorporated to expand limited sample pairs. Subsequently, a spatial–spectral feature extraction module is designed to fuse the learned feature information. The weights of the spatial feature extraction network are initialized with knowledge transferred from source-domain pretraining, while the spectral residual network acquires rich spectral information. Furthermore, contrastive learning is integrated to enhance discriminative representation learning from scarce samples, effectively mitigating obstacles arising from the high inter-class similarity and large intra-class variance inherent in HSIs. Experiments on four public HSI datasets demonstrate that our method achieves competitive performance against state-of-the-art approaches. Full article
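The contrastive component pulls two augmented views of the same spectrum together and pushes other samples apart. Here is a sketch using the common NT-Xent loss with a hypothetical spectral augmentation (random gain plus band noise); the abstract does not specify the paper's exact augmentations or loss, so both are assumptions:

```python
import numpy as np

def spectral_augment(x, rng, gain=0.05, noise=0.01):
    """Hypothetical spectral augmentation: random multiplicative gain
    plus small per-band additive noise."""
    return x * (1.0 + rng.uniform(-gain, gain)) + rng.normal(0.0, noise, x.shape)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss: row i of z1 and row i of z2 are the
    positive pair; every other row (from either view) is a negative."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)        # a sample is not its own negative
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logp = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -logp.mean()

rng = np.random.default_rng(8)
spectra = rng.random((8, 30))                       # 8 samples, 30 bands
v1 = np.stack([spectral_augment(s, rng) for s in spectra])
v2 = np.stack([spectral_augment(s, rng) for s in spectra])
loss = nt_xent(v1, v2)
```

Minimizing this loss on embeddings (rather than raw spectra, as here) is what encourages the discriminative, class-separated representations the abstract credits for mitigating high inter-class similarity.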
