
Deep Learning Methods for Hyperspectral Image Processing with Limited Labeled Samples

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 9085

Special Issue Editors


Guest Editor
School of Computer & Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: machine learning; pattern recognition; multisource fusion; semantic segmentation and their applications in remote sensing

Guest Editor
School of Geography and Remote Sensing, Nanjing University of Information Science & Technology, Nanjing, China
Interests: deep learning; image classification; image fusion

Guest Editor
School of Information Engineering, Nanjing Audit University, Nanjing, China
Interests: deep learning; hyper/multispectral image processing

Guest Editor
Ecole Centrale Marseille, Marseille, France
Interests: signal and image processing; source localization; hyperspectral imaging

Special Issue Information

Dear Colleagues,

Owing to their abundant spectral and spatial information, hyperspectral images play a significant role in many applications, such as mineral exploration, precision agriculture, and climate monitoring. In recent years, deep learning has attracted much attention in the field of hyperspectral image processing because of its powerful non-linear fitting ability. However, most existing deep learning models are based on supervised learning, which demands a considerable number of labeled samples to achieve satisfactory performance. The commonly used labeling methods for hyperspectral images, including field investigation and visual interpretation, are costly, time-consuming, and error-prone, which limits the number of available training samples.

This Special Issue invites manuscripts that propose new deep learning techniques to address the small-sample problem in hyperspectral image processing. The topics of interest include, but are not limited to, the following:

  • Image Super-resolution
  • Image Fusion
  • Image Denoising
  • Image Classification
  • Image Segmentation
  • Object Detection

Dr. Renlong Hang
Prof. Dr. Yong Xie
Dr. Feng Zhou
Prof. Dr. Caroline Fossati
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • hyperspectral image processing
  • deep learning
  • small training samples
  • low-level tasks
  • high-level tasks

Published Papers (4 papers)


Research

24 pages, 6543 KiB  
Article
Graph-Based Domain Adaptation Few-Shot Learning for Hyperspectral Image Classification
by Yanbing Xu, Yanmei Zhang, Tingxuan Yue, Chengcheng Yu and Huan Li
Remote Sens. 2023, 15(4), 1125; https://doi.org/10.3390/rs15041125 - 18 Feb 2023
Cited by 4 | Viewed by 2087
Abstract
Due to a lack of labeled samples, deep learning methods generally tend to have poor classification performance in practical applications. Few-shot learning (FSL), as an emerging learning paradigm, has been widely utilized in hyperspectral image (HSI) classification with limited labeled samples. However, existing FSL methods generally ignore the domain shift problem in cross-domain scenes and rarely explore the associations between samples in the source and target domains. To tackle these issues, a graph-based domain adaptation FSL (GDAFSL) method is proposed for HSI classification with limited training samples, which utilizes graphs to guide the domain adaptation learning process in a unified framework. First, a novel deep residual hybrid attention network (DRHAN) is designed to efficiently extract discriminative embedded features for few-shot HSI classification. Then, a graph-based domain adaptation network (GDAN), which combines graph construction with a domain adversarial strategy, is proposed to fully explore the domain correlation between source and target embedded features. By utilizing the fully explored domain correlations to guide the domain adaptation process, a domain-invariant feature metric space is learned for few-shot HSI classification. Comprehensive experiments conducted on three public HSI datasets demonstrate that GDAFSL is superior to state-of-the-art methods with small sample sizes.
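The metric-space idea underlying few-shot classifiers such as GDAFSL can be illustrated with a minimal prototypical sketch: each class is summarized by the mean of its few support embeddings, and a query is assigned to the nearest prototype. The feature vectors and class names below are hypothetical toy values; the actual method learns its embedding space with DRHAN and aligns domains with GDAN.

```python
import math

def prototypes(support):
    # support: {class_label: [embedding vectors]} -> mean embedding per class
    return {c: [sum(dim) / len(vs) for dim in zip(*vs)]
            for c, vs in support.items()}

def classify(query, protos):
    # assign the query to the class whose prototype is nearest (Euclidean)
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(protos, key=lambda c: dist(query, protos[c]))

# hypothetical 2-D embeddings for two classes, two support samples each
support = {
    "road":  [[0.9, 0.1], [1.1, 0.0]],
    "water": [[0.0, 1.0], [0.2, 0.8]],
}
protos = prototypes(support)
print(classify([0.8, 0.2], protos))  # -> road
```

FSL methods differ mainly in how the embedding space is learned; the nearest-prototype decision rule itself stays this simple.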

25 pages, 25289 KiB  
Article
Active Learning-Driven Siamese Network for Hyperspectral Image Classification
by Xiyao Di, Zhaohui Xue and Mengxue Zhang
Remote Sens. 2023, 15(3), 752; https://doi.org/10.3390/rs15030752 - 28 Jan 2023
Cited by 8 | Viewed by 1928
Abstract
Hyperspectral image (HSI) classification has recently been successfully explored using deep learning (DL) methods. However, DL models rely heavily on a large number of labeled samples, which are laborious to obtain. Therefore, finding a way to train DL models efficiently with limited labeled samples is a hot topic in the field of HSI classification. In this paper, an active learning-based Siamese network (ALSN) is proposed to address the limited-labeled-samples problem in HSI classification. First, we designed a dual learning-based Siamese network (DLSN), which consists of a contrastive learning module and a classification module. Second, given that active learning struggles to sample effectively under an extremely limited labeling budget, we proposed an adversarial uncertainty-based active learning (AUAL) method to query valuable samples and to promote DLSN to learn a more complete feature distribution through fine-tuning. Finally, an active learning architecture based on inter-class uncertainty (ICUAL) is proposed to construct a lightweight sample-pair training set, fully extracting the inter-class information of sample pairs and improving classification accuracy. Experiments on three generic HSI datasets strongly demonstrate the effectiveness of ALSN for HSI classification, with performance improvements over other related DL methods.
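The contrastive module in a Siamese network such as DLSN pulls same-class embedding pairs together and pushes different-class pairs at least a margin apart. A minimal sketch of the standard contrastive loss follows; the margin value and the toy embeddings are illustrative assumptions, not the paper's exact settings.

```python
import math

def contrastive_loss(za, zb, same_class, margin=1.0):
    """Standard contrastive loss for one pair of embeddings."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(za, zb)))
    # same-class pairs are penalized by their squared distance;
    # different-class pairs only when they fall inside the margin
    return d ** 2 if same_class else max(0.0, margin - d) ** 2

print(contrastive_loss([0.0, 0.0], [3.0, 4.0], same_class=True))   # -> 25.0
print(contrastive_loss([0.0, 0.0], [3.0, 4.0], same_class=False))  # -> 0.0
```

Training on pairs rather than single samples is what lets a Siamese design squeeze more supervision out of a small labeled set: n labeled samples yield O(n²) labeled pairs.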

24 pages, 8487 KiB  
Article
Low-Rank Constrained Attention-Enhanced Multiple Spatial–Spectral Feature Fusion for Small Sample Hyperspectral Image Classification
by Fan Feng, Yongsheng Zhang, Jin Zhang and Bing Liu
Remote Sens. 2023, 15(2), 304; https://doi.org/10.3390/rs15020304 - 4 Jan 2023
Cited by 5 | Viewed by 2488
Abstract
Hyperspectral images contain rich features in both the spectral and spatial domains, which creates opportunities for the accurate recognition of similar materials and promotes various fine-grained remote sensing applications. Although deep learning models have been extensively investigated for hyperspectral image classification (HSIC), classification performance under small-sample conditions remains limited, a longstanding problem. The features extracted by complex network structures with large model sizes are redundant to some extent and prone to overfitting. This paper proposes a low-rank constrained attention-enhanced multiple feature fusion network (LAMFN). First, factor analysis is used to extract a small number of components that describe the original data using covariance information, serving as spectral feature preprocessing. Then, a lightweight attention-enhanced 3D convolution module is used for deep feature extraction, and position-sensitive information is supplemented using 2D coordinate attention. These widely varying spatial–spectral feature groups are fused through a simple composite residual structure. Finally, low-rank second-order pooling is adopted to enhance the selectivity of the convolutional features and achieve classification. Extensive experiments were conducted on four representative hyperspectral datasets with different spatial–spectral characteristics: Indian Pines (IP), Pavia Center (PC), Houston (HU), and WHU-HongHu (WHU). The contrast methods include several recently proposed advanced models, including residual CNNs, attention-based CNNs, and transformer-based models. Using only five samples per class for training, LAMFN achieved overall accuracies of 78.15%, 97.18%, 81.35%, and 87.93% on the above datasets, an improvement of 0.82%, 1.12%, 1.67%, and 0.89% over the second-best model. The running time of LAMFN is moderate: for example, its training time on the WHU dataset was 29.1 s, whereas the contrast models ranged from 3.0 s to 341.4 s. In addition, ablation experiments and comparisons with several advanced semi-supervised learning methods further validated the effectiveness of the proposed model designs.
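Second-order pooling, the final step described above, summarizes a set of convolutional feature vectors by their channel covariances rather than by a simple mean, which preserves feature interactions. A minimal covariance-pooling sketch follows; LAMFN's low-rank constraint is omitted, and the toy feature vectors are hypothetical.

```python
def second_order_pool(feats):
    """Covariance pooling: n d-dim feature vectors -> d x d covariance matrix."""
    n, d = len(feats), len(feats[0])
    mean = [sum(f[k] for f in feats) / n for k in range(d)]
    return [
        [sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in feats) / n
         for j in range(d)]
        for i in range(d)
    ]

# two 2-channel feature vectors from a (hypothetical) feature map
print(second_order_pool([[1.0, 0.0], [0.0, 1.0]]))
# -> [[0.25, -0.25], [-0.25, 0.25]]
```

A low-rank constraint, as in LAMFN, would factor this d x d matrix to cut parameters and regularize against overfitting under small-sample training.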

18 pages, 9741 KiB  
Article
Strip Attention Networks for Road Extraction
by Hai Huan, Yu Sheng, Yi Zhang and Yuan Liu
Remote Sens. 2022, 14(18), 4516; https://doi.org/10.3390/rs14184516 - 9 Sep 2022
Cited by 11 | Viewed by 1872
Abstract
In recent years, deep learning methods have been widely used for road extraction from remote sensing images. However, existing deep learning semantic segmentation networks generally show poor continuity in road segmentation, due to the high inter-class similarity between roads and the buildings surrounding them in remote sensing images, as well as the presence of shadows and occlusion. To deal with this problem, this paper proposes strip attention networks (SANet) for extracting roads from remote sensing images. First, a strip attention module (SAM) is designed to extract the contextual information and spatial position information of roads. Second, a channel attention fusion module (CAF) is designed to fuse low-level and high-level features. The network is trained and tested on the CITY-OSM, DeepGlobe road extraction, and CHN6-CUG datasets. The test results indicate that SANet exhibits excellent road segmentation performance and better addresses the problem of poor road segmentation continuity than other networks.
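One way to read the strip attention idea: roads are long, thin structures, so long-range context can be aggregated by pooling along entire rows and columns instead of square windows. The single-channel sketch below illustrates that strip-pooling pattern; it is an assumption of what SAM resembles, not the paper's exact module.

```python
def strip_pool(fmap):
    """Average-pool an H x W map along rows and columns, then fuse additively."""
    H, W = len(fmap), len(fmap[0])
    rows = [sum(r) / W for r in fmap]                                 # one value per row
    cols = [sum(fmap[i][j] for i in range(H)) / H for j in range(W)]  # one value per column
    # broadcast both strip contexts back to H x W and fuse
    return [[rows[i] + cols[j] for j in range(W)] for i in range(H)]

print(strip_pool([[1.0, 2.0],
                  [3.0, 4.0]]))
# -> [[3.5, 4.5], [5.5, 6.5]]
```

Because every output cell mixes context from its full row and full column, a road fragment broken by a shadow can still receive signal from the rest of the road along the same strip, which is what improves segmentation continuity.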
