
Sensors for Hyperspectral Imaging: Technologies, Methods and Data Processing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 7840

Special Issue Editor


Dr. Subrata Chakraborty
Guest Editor
1. School of Science and Technology, Faculty of SABL, University of New England, Armidale, NSW 2351, Australia
2. Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), University of Technology Sydney, Ultimo, NSW 2007, Australia
3. Griffith Business School, Griffith University, Brisbane, QLD 4111, Australia
Interests: optimisation models; data analytics; machine learning; image processing

Special Issue Information

Dear Colleagues,

Hyperspectral imaging is a key imaging modality with significant application potential across diverse domains, addressing problems that lie beyond the reach of visual, thermal, and multispectral imaging. In this Special Issue, we aim to bring together novel tools and technologies for acquiring, processing, and analysing hyperspectral images, and to compile applications of hyperspectral imaging across different domains. This Special Issue has a two-fold focus:

  • New tools and techniques for image acquisition and processing, including (but not limited to):
    • Band selection, data compression, information fusion, and data visualization;
    • Model/algorithm development using machine learning/deep learning;
    • New sensor technology and hardware.
  • Applications of hyperspectral imaging in different domains, including (but not limited to):
    • Agriculture: disease, pests, food quality, land condition assessment, etc.;
    • Environment: forests, lakes, and damage assessment;
    • Defence: surveillance, search and rescue, and targeting;
    • Medical: lesion and tissue condition analysis;
    • Other: mining, archaeology, and geological surveys.

Dr. Subrata Chakraborty
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • hyperspectral sensors
  • hyperspectral image processing
  • hyperspectral applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)

Research

31 pages, 19893 KiB  
Article
A Low-Measurement-Cost-Based Multi-Strategy Hyperspectral Image Classification Scheme
by Yu Bai, Dongmin Liu, Lili Zhang and Haoqi Wu
Sensors 2024, 24(20), 6647; https://doi.org/10.3390/s24206647 - 15 Oct 2024
Viewed by 373
Abstract
The cost of hyperspectral image (HSI) classification primarily stems from the annotation of image pixels. In real-world classification scenarios, the measurement and annotation process is both time-consuming and labor-intensive. Therefore, reducing the number of labeled pixels while maintaining classification accuracy is a key research focus in HSI classification. This paper introduces a multi-strategy triple network classifier (MSTNC) to address the issue of limited labeled data in HSI classification by improving learning strategies. First, we use the contrast learning strategy to design a lightweight triple network classifier (TNC) with low sample dependence. Due to the construction of triple sample pairs, the number of labeled samples can be increased, which is beneficial for extracting intra-class and inter-class features of pixels. Second, an active learning strategy is used to label the most valuable pixels, improving the quality of the labeled data. To address the difficulty of sampling effectively under extremely limited labeling budgets, we propose a new feature-mixed active learning (FMAL) method to query valuable samples. Fine-tuning is then used to help the MSTNC learn a more comprehensive feature distribution, reducing the model’s dependence on accuracy when querying samples. Therefore, the sample quality is improved. Finally, we propose an innovative dual-threshold pseudo-active learning (DSPAL) strategy, filtering out pseudo-label samples with both high confidence and uncertainty. Extending the training set without increasing the labeling cost further improves the classification accuracy of the model. Extensive experiments are conducted on three benchmark HSI datasets. Across various labeling ratios, the MSTNC outperforms several state-of-the-art methods. In particular, under extreme small-sample conditions (five samples per class), the overall accuracy reaches 82.97% (IP), 87.94% (PU), and 86.57% (WHU). Full article
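For readers unfamiliar with pseudo-label selection, the sketch below illustrates the general dual-threshold idea (keep only unlabeled pixels whose predictions are simultaneously high-confidence and low-entropy). It is a generic illustration with made-up thresholds and random data, not the MSTNC/DSPAL implementation, whose exact selection criteria are described in the paper.

```python
# A minimal, generic sketch of dual-threshold pseudo-label selection, assuming
# softmax probabilities from an already-trained classifier; thresholds and
# criteria here are illustrative, not the DSPAL settings from the paper.
import numpy as np

def select_pseudo_labels(probs, conf_thresh=0.95, entropy_thresh=0.3):
    """probs: (N, C) softmax outputs for N unlabeled pixels over C classes."""
    confidence = probs.max(axis=1)                           # highest class probability
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # predictive uncertainty
    keep = (confidence >= conf_thresh) & (entropy <= entropy_thresh)
    pseudo_labels = probs.argmax(axis=1)
    return np.where(keep)[0], pseudo_labels[keep]

# Example: 1000 unlabeled pixels, 9 classes (random stand-in data).
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=np.ones(9) * 0.3, size=1000)
idx, labels = select_pseudo_labels(probs)
print(f"kept {idx.size} of {probs.shape[0]} pixels as pseudo-labels")
```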

21 pages, 5400 KiB  
Article
Hybrid Sparse Transformer and Wavelet Fusion-Based Deep Unfolding Network for Hyperspectral Snapshot Compressive Imaging
by Yangke Ying, Jin Wang, Yunhui Shi and Nam Ling
Sensors 2024, 24(19), 6184; https://doi.org/10.3390/s24196184 - 24 Sep 2024
Viewed by 657
Abstract
Recently, deep unfolding network methods have significantly progressed in hyperspectral snapshot compressive imaging. Many approaches directly employ Transformer models to boost the feature representation capabilities of algorithms. However, they often fall short of leveraging the full potential of self-attention mechanisms. Additionally, current methods lack adequate consideration of both intra-stage and inter-stage feature fusion, which hampers their overall performance. To tackle these challenges, we introduce a novel approach that hybridizes the sparse Transformer and wavelet fusion-based deep unfolding network for hyperspectral image (HSI) reconstruction. Our method includes the development of a spatial sparse Transformer and a spectral sparse Transformer, designed to capture spatial and spectral attention of HSI data, respectively, thus enhancing the Transformer’s feature representation capabilities. Furthermore, we incorporate wavelet-based methods for both intra-stage and inter-stage feature fusion, which significantly boosts the algorithm’s reconstruction performance. Extensive experiments across various datasets confirm the superiority of our proposed approach. Full article
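The distinction between spatial and spectral attention referred to in the abstract can be illustrated with a toy reshaping of an HSI cube. The snippet below is a minimal sketch using standard multi-head attention on random data; it omits the sparsity, masking, wavelet fusion, and deep unfolding machinery of the proposed network, and all sizes are assumptions.

```python
# A minimal sketch of the spatial-vs-spectral attention idea on a small HSI patch.
import torch
import torch.nn as nn

B, H, W, C = 2, 8, 8, 28                 # batch, height, width, spectral bands (toy sizes)
cube = torch.randn(B, H, W, C)

# Spatial attention: each pixel is a token, its spectrum is the embedding.
spatial_tokens = cube.reshape(B, H * W, C)                              # (B, 64, 28)
spatial_attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
spatial_out, _ = spatial_attn(spatial_tokens, spatial_tokens, spatial_tokens)

# Spectral attention: each band is a token, its spatial map is the embedding.
spectral_tokens = cube.reshape(B, H * W, C).transpose(1, 2)             # (B, 28, 64)
spectral_attn = nn.MultiheadAttention(embed_dim=H * W, num_heads=4, batch_first=True)
spectral_out, _ = spectral_attn(spectral_tokens, spectral_tokens, spectral_tokens)

print(spatial_out.shape, spectral_out.shape)  # torch.Size([2, 64, 28]) torch.Size([2, 28, 64])
```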

23 pages, 7913 KiB  
Article
A Dual-Branch Fusion of a Graph Convolutional Network and a Convolutional Neural Network for Hyperspectral Image Classification
by Pan Yang and Xinxin Zhang
Sensors 2024, 24(14), 4760; https://doi.org/10.3390/s24144760 - 22 Jul 2024
Viewed by 611
Abstract
Semi-supervised graph convolutional networks (SSGCNs) have been proven to be effective in hyperspectral image classification (HSIC). However, limited training data and spectral uncertainty restrict the classification performance, and the computational demands of a graph convolution network (GCN) present challenges for real-time applications. To overcome these issues, a dual-branch fusion of a GCN and convolutional neural network (DFGCN) is proposed for HSIC tasks. The GCN branch uses an adaptive multi-scale superpixel segmentation method to build fusion adjacency matrices at various scales, which improves the graph convolution efficiency and node representations. Additionally, a spectral feature enhancement module (SFEM) enhances the transmission of crucial channel information between the two graph convolutions. Meanwhile, the CNN branch uses a convolutional network with an attention mechanism to focus on detailed features of local areas. By combining the multi-scale superpixel features from the GCN branch and the local pixel features from the CNN branch, this method leverages complementary features to fully learn rich spatial–spectral information. Our experimental results demonstrate that the proposed method outperforms existing advanced approaches in terms of classification efficiency and accuracy across three benchmark data sets. Full article
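As a rough illustration of the dual-branch idea (graph convolution over superpixel nodes alongside a CNN over the raw patch, with the two feature sets concatenated for classification), a toy model is sketched below. The adjacency construction, multi-scale segmentation, SFEM, and attention modules of the DFGCN are not reproduced, and all layer sizes are arbitrary assumptions.

```python
# A highly simplified dual-branch sketch: one graph-convolution layer over
# superpixel node features plus a small CNN over the raw patch.
import torch
import torch.nn as nn

class TinyDualBranch(nn.Module):
    def __init__(self, bands, n_classes):
        super().__init__()
        self.gcn_w = nn.Linear(bands, 32)            # weight of one graph convolution
        self.cnn = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, node_feats, adj_norm, patch):
        # GCN branch: A_hat @ X @ W, then mean-pool the node features.
        g = torch.relu(adj_norm @ self.gcn_w(node_feats)).mean(dim=1)   # (B, 32)
        # CNN branch: local spatial features from the raw patch.
        c = self.cnn(patch).flatten(1)                                   # (B, 32)
        return self.head(torch.cat([g, c], dim=1))

B, bands, n_nodes = 4, 103, 50
model = TinyDualBranch(bands, n_classes=9)
node_feats = torch.randn(B, n_nodes, bands)                  # mean spectra of superpixels
adj_norm = torch.softmax(torch.randn(B, n_nodes, n_nodes), dim=-1)  # stand-in for a normalized adjacency
patch = torch.randn(B, bands, 15, 15)                        # pixel neighborhood around the target pixel
print(model(node_feats, adj_norm, patch).shape)              # torch.Size([4, 9])
```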

21 pages, 12118 KiB  
Article
Advanced Hyperspectral Image Analysis: Superpixelwise Multiscale Adaptive T-HOSVD for 3D Feature Extraction
by Qiansen Dai, Chencong Ma and Qizhong Zhang
Sensors 2024, 24(13), 4072; https://doi.org/10.3390/s24134072 - 22 Jun 2024
Viewed by 946
Abstract
Hyperspectral images (HSIs) possess an inherent three-order structure, prompting increased interest in extracting 3D features. Tensor analysis and low-rank representations, notably truncated higher-order SVD (T-HOSVD), have gained prominence for this purpose. However, determining the optimal order and addressing sensitivity to changes in data distribution remain challenging. To tackle these issues, this paper introduces an unsupervised Superpixelwise Multiscale Adaptive T-HOSVD (SmaT-HOSVD) method. Leveraging superpixel segmentation, the algorithm identifies homogeneous regions, facilitating the extraction of local features to enhance spatial contextual information within the image. Subsequently, T-HOSVD is adaptively applied to the obtained superpixel blocks for feature extraction and fusion across different scales. SmaT-HOSVD harnesses superpixel blocks and low-rank representations to extract 3D features, effectively capturing both spectral and spatial information of HSIs. By integrating optimal-rank estimation and multiscale fusion strategies, it acquires more comprehensive low-rank information and mitigates sensitivity to data variations. Notably, when trained on subsets comprising 2%, 1%, and 1% of the Indian Pines, University of Pavia, and Salinas datasets, respectively, SmaT-HOSVD achieves impressive overall accuracies of 93.31%, 97.21%, and 99.25%, while maintaining excellent efficiency. Future research will explore SmaT-HOSVD’s applicability in deep-sea HSI classification and pursue additional avenues for advancing the field. Full article
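The truncated HOSVD at the core of this approach can be sketched in a few lines of NumPy, as below. This generic version operates on a single random block with fixed ranks; the superpixel segmentation, adaptive rank estimation, and multiscale fusion that define SmaT-HOSVD are not shown.

```python
# A generic truncated HOSVD (T-HOSVD) sketch on a single hyperspectral block.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    Tm = np.moveaxis(T, mode, 0)
    out = M @ Tm.reshape(Tm.shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + Tm.shape[1:]), 0, mode)

def truncated_hosvd(T, ranks):
    # Leading left singular vectors of each mode unfolding, then the core tensor.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

# Toy block: 20 x 20 spatial pixels, 103 bands; keep ranks (10, 10, 15).
block = np.random.rand(20, 20, 103)
core, factors = truncated_hosvd(block, ranks=(10, 10, 15))
low_rank = core
for mode, U in enumerate(factors):
    low_rank = mode_product(low_rank, U, mode)       # low-rank reconstruction
print(core.shape, low_rank.shape)                     # (10, 10, 15) (20, 20, 103)
```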

23 pages, 4735 KiB  
Article
Pixel-Level Recognition of Trace Mycotoxins in Red Ginseng Based on Hyperspectral Imaging Combined with 1DCNN-Residual-BiLSTM-Attention Model
by Biao Liu, Hongxu Zhang, Jieqiang Zhu, Yuan Chen, Yixia Pan, Xingchu Gong, Jizhong Yan and Hui Zhang
Sensors 2024, 24(11), 3457; https://doi.org/10.3390/s24113457 - 27 May 2024
Viewed by 685
Abstract
Red ginseng is widely used in food and pharmaceuticals due to its significant nutritional value. However, during processing and storage, red ginseng is susceptible to mold growth and mycotoxin production, raising safety concerns. This study proposes a novel approach using hyperspectral imaging technology and a 1D-convolutional neural network-residual-bidirectional-long short-term memory attention mechanism (1DCNN-ResBiLSTM-Attention) for pixel-level mycotoxin recognition in red ginseng. The “Red Ginseng-Mycotoxin” (R-M) dataset is established, and optimal parameters for the 1D-CNN, residual bidirectional long short-term memory (ResBiLSTM), and 1DCNN-ResBiLSTM-Attention models are determined. The models achieved testing accuracies of 98.75%, 99.03%, and 99.17%, respectively. To simulate real detection scenarios with potential interfering impurities during the sampling process, a “Red Ginseng-Mycotoxin-Interfering Impurities” (R-M-I) dataset was created. The testing accuracy of the 1DCNN-ResBiLSTM-Attention model reached 96.39%, and it successfully predicted pixel-wise classification for other unknown samples. This study introduces a novel method for real-time mycotoxin monitoring in traditional Chinese medicine, with important implications for the on-site quality control of herbal materials. Full article
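The overall architecture named in the title (a 1D convolutional front end, a bidirectional LSTM with a residual connection, and attention pooling over the spectrum) can be sketched as below. The layer sizes, band count, and class names are illustrative assumptions, not the configuration reported in the paper.

```python
# A compact, illustrative pixel-spectrum classifier in the spirit of
# 1DCNN-ResBiLSTM-Attention; sizes and class labels are placeholders.
import torch
import torch.nn as nn

class SpectrumClassifier(nn.Module):
    def __init__(self, n_bands, n_classes, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(hidden, 1)                   # attention scores over bands
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                                  # x: (B, n_bands) reflectance spectra
        f = self.conv(x.unsqueeze(1)).transpose(1, 2)      # (B, n_bands, hidden)
        h, _ = self.bilstm(f)
        h = h + f                                          # residual connection
        w = torch.softmax(self.attn(h), dim=1)             # (B, n_bands, 1)
        pooled = (w * h).sum(dim=1)                        # attention-weighted pooling
        return self.head(pooled)

model = SpectrumClassifier(n_bands=288, n_classes=3)       # e.g. toxin vs. tissue vs. impurity (hypothetical)
print(model(torch.randn(8, 288)).shape)                    # torch.Size([8, 3])
```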

40 pages, 21076 KiB  
Article
A Study on Dimensionality Reduction and Parameters for Hyperspectral Imagery Based on Manifold Learning
by Wenhui Song, Xin Zhang, Guozhu Yang, Yijin Chen, Lianchao Wang and Hanghang Xu
Sensors 2024, 24(7), 2089; https://doi.org/10.3390/s24072089 - 25 Mar 2024
Viewed by 1093
Abstract
With the rapid advancement of remote-sensing technology, the spectral information obtained from hyperspectral remote-sensing imagery has become increasingly rich, facilitating detailed spectral analysis of Earth’s surface objects. However, the abundance of spectral information presents certain challenges for data processing, such as the “curse of dimensionality” leading to the “Hughes phenomenon”, “strong correlation” due to high resolution, and “nonlinear characteristics” caused by varying surface reflectances. Consequently, dimensionality reduction of hyperspectral data emerges as a critical task. This paper begins by elucidating the principles and processes of hyperspectral image dimensionality reduction based on manifold theory and learning methods, in light of the nonlinear structures and features present in hyperspectral remote-sensing data, and formulates a dimensionality reduction process based on manifold learning. Subsequently, this study explores the capabilities of feature extraction and low-dimensional embedding for hyperspectral imagery using manifold learning approaches, including principal components analysis (PCA), multidimensional scaling (MDS), and linear discriminant analysis (LDA) for linear methods; and isometric mapping (Isomap), locally linear embedding (LLE), Laplacian eigenmaps (LE), Hessian locally linear embedding (HLLE), local tangent space alignment (LTSA), and maximum variance unfolding (MVU) for nonlinear methods, based on the Indian Pines hyperspectral dataset and Pavia University dataset. Furthermore, the paper investigates the optimal neighborhood computation time and overall algorithm runtime for feature extraction in hyperspectral imagery, varying by the choice of neighborhood k and intrinsic dimensionality d values across different manifold learning methods. Based on the outcomes of feature extraction, the study examines the classification experiments of various manifold learning methods, comparing and analyzing the variations in classification accuracy and Kappa coefficient with different selections of neighborhood k and intrinsic dimensionality d values. Building on this, the impact of selecting different bandwidths t for the Gaussian kernel in the LE method and different Lagrange multipliers λ for the MVU method on classification accuracy, given varying choices of neighborhood k and intrinsic dimensionality d, is explored. Through these experiments, the paper investigates the capability and effectiveness of different manifold learning methods in feature extraction and dimensionality reduction within hyperspectral imagery, as influenced by the selection of neighborhood k and intrinsic dimensionality d values, identifying the optimal neighborhood k and intrinsic dimensionality d value for each method. A comparison of classification accuracies reveals that the LTSA method yields superior classification results compared to other manifold learning approaches. The study demonstrates the advantages of manifold learning methods in processing hyperspectral image data, providing an experimental reference for subsequent research on hyperspectral image dimensionality reduction using manifold learning methods. Full article
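For orientation, the comparison of linear and nonlinear embeddings described above maps directly onto scikit-learn, as in the sketch below. The data here are random stand-ins for the Indian Pines and Pavia University cubes, and k and d correspond to the neighborhood size and intrinsic dimensionality swept in the study.

```python
# A brief scikit-learn sketch of the linear/nonlinear embedding comparison
# (PCA vs. Isomap, LLE, and LTSA) with placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding

X = np.random.rand(1000, 200)     # 1000 pixels, 200 spectral bands (stand-in data)
k, d = 12, 10                     # neighborhood size and target dimensionality

embeddings = {
    "PCA": PCA(n_components=d).fit_transform(X),
    "Isomap": Isomap(n_neighbors=k, n_components=d).fit_transform(X),
    "LLE": LocallyLinearEmbedding(n_neighbors=k, n_components=d).fit_transform(X),
    "LTSA": LocallyLinearEmbedding(n_neighbors=k, n_components=d, method="ltsa").fit_transform(X),
}
for name, Z in embeddings.items():
    print(name, Z.shape)          # each is (1000, d), ready for a downstream classifier
```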

26 pages, 11607 KiB  
Article
Advancing Hyperspectral Image Analysis with CTNet: An Approach with the Fusion of Spatial and Spectral Features
by Dhirendra Prasad Yadav, Deepak Kumar, Anand Singh Jalal, Bhisham Sharma, Julian L. Webber and Abolfazl Mehbodniya
Sensors 2024, 24(6), 2016; https://doi.org/10.3390/s24062016 - 21 Mar 2024
Viewed by 1204
Abstract
Hyperspectral image classification remains challenging despite its potential due to the high dimensionality of the data and its limited spatial resolution. To address the issues of limited data samples and low spatial resolution, this research paper presents a two-scale module-based CTNet (convolutional transformer network) for the enhancement of spatial and spectral features. In the first module, a virtual RGB image is created from the HSI dataset to improve the spatial features using a pre-trained ResNeXt model trained on natural images, whereas in the second module, PCA (principal component analysis) is applied to reduce the dimensions of the HSI data. After that, spectral features are improved using an EAVT (enhanced attention-based vision transformer). The EAVT contains a multiscale enhanced attention mechanism to capture the long-range correlation of the spectral features. Furthermore, a joint module with the fusion of spatial and spectral features is designed to generate an enhanced feature vector. Through comprehensive experiments, we demonstrate the performance and superiority of the proposed approach over state-of-the-art methods. We obtained AA (average accuracy) values of 97.87%, 97.46%, 98.25%, and 84.46% on the PU, PUC, SV, and Houston13 datasets, respectively. Full article
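The two pre-processing steps described for CTNet (building a virtual RGB image so a network pre-trained on natural images can be reused, and reducing the spectral dimension with PCA) can be sketched as follows. The band grouping and component count are assumptions for illustration only, not the paper's settings.

```python
# An illustrative sketch of the virtual-RGB and PCA pre-processing steps.
import numpy as np
from sklearn.decomposition import PCA

hsi = np.random.rand(145, 145, 103)          # stand-in cube (H, W, bands)

# Virtual RGB: average three coarse band groups as pseudo blue/green/red channels.
thirds = np.array_split(np.arange(hsi.shape[2]), 3)
virtual_rgb = np.stack([hsi[:, :, idx].mean(axis=2) for idx in thirds], axis=2)

# Spectral reduction: PCA over pixels, keeping the leading components.
pixels = hsi.reshape(-1, hsi.shape[2])
reduced = PCA(n_components=30).fit_transform(pixels).reshape(145, 145, 30)

print(virtual_rgb.shape, reduced.shape)      # (145, 145, 3) (145, 145, 30)
```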

19 pages, 4631 KiB  
Article
Spatial and Spectral Reconstruction of Breast Lumpectomy Hyperspectral Images
by Lynn-Jade S. Jong, Jelmer G. C. Appelman, Henricus J. C. M. Sterenborg, Theo J. M. Ruers and Behdad Dashtbozorg
Sensors 2024, 24(5), 1567; https://doi.org/10.3390/s24051567 - 28 Feb 2024
Cited by 1 | Viewed by 1504
Abstract
(1) Background: Hyperspectral imaging has emerged as a promising margin assessment technique for breast-conserving surgery. However, to be implemented intraoperatively, it should be both fast and capable of yielding high-quality images to provide accurate guidance and decision-making throughout the surgery. As there exists a trade-off between image quality and data acquisition time, higher resolution images come at the cost of longer acquisition times and vice versa. (2) Methods: Therefore, in this study, we introduce a deep learning spatial–spectral reconstruction framework to obtain a high-resolution hyperspectral image from a low-resolution hyperspectral image combined with a high-resolution RGB image as input. (3) Results: Using the framework, we demonstrate the ability to perform fast data acquisition during surgery while maintaining high image quality, even in complex scenarios where challenges arise, such as blur due to motion artifacts, dead pixels on the camera sensor, noise from the sensor’s reduced sensitivity at spectral extremities, and specular reflections caused by smooth surface areas of the tissue. (4) Conclusions: This creates the opportunity to facilitate accurate margin assessment through intraoperative hyperspectral imaging. Full article
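The input/output arrangement of this reconstruction task (a low-resolution HSI cube plus a co-registered high-resolution RGB image in, a high-resolution HSI cube out) is sketched below with a small placeholder fusion network; it is not the authors' framework, and all shapes and layers are illustrative assumptions.

```python
# A schematic sketch of RGB-guided HSI super-resolution: upsample the
# low-resolution cube, concatenate the RGB guide, and map back to all bands.
import torch
import torch.nn as nn
import torch.nn.functional as F

bands, scale = 100, 4
lr_hsi = torch.randn(1, bands, 64, 64)       # low-resolution hyperspectral cube
hr_rgb = torch.randn(1, 3, 256, 256)         # co-registered high-resolution RGB image

fusion = nn.Sequential(
    nn.Conv2d(bands + 3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, bands, kernel_size=3, padding=1),
)

up_hsi = F.interpolate(lr_hsi, scale_factor=scale, mode="bilinear", align_corners=False)
hr_hsi = fusion(torch.cat([up_hsi, hr_rgb], dim=1))   # predicted high-resolution HSI
print(hr_hsi.shape)                                    # torch.Size([1, 100, 256, 256])
```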
