Article

Learn to Few-Shot Segment Remote Sensing Images from Irrelevant Data

1 Department of Aerospace Science and Technology, Space Engineering University, Beijing 101416, China
2 China Astronaut Research and Training Center, Beijing 100094, China
3 National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(20), 4937; https://doi.org/10.3390/rs15204937
Submission received: 21 August 2023 / Revised: 27 September 2023 / Accepted: 11 October 2023 / Published: 12 October 2023
(This article belongs to the Special Issue Remote Sensing Image Classification and Semantic Segmentation)

Abstract

Few-shot semantic segmentation (FSS) aims to segment novel classes with only a few labeled samples. FSS generally assumes that the base and novel classes belong to the same domain, which limits its application in many areas. In particular, since annotation is time-consuming, processing remote sensing images with conventional FSS is not cost-effective. To address this issue, we designed a feature transformation network (FTNet) for learning to few-shot segment remote sensing images from irrelevant data (FSS-RSI). The main idea is to train the network on irrelevant, already labeled data but perform inference on remote sensing images; in other words, the training and testing data belong to neither the same domain nor the same categories. FTNet contains two main modules: a feature transformation module (FTM) and a hierarchical transformer module (HTM). The FTM transforms features into a domain-agnostic high-level anchor, and the HTM hierarchically enhances the matching between support and query features. Moreover, to promote the development of FSS-RSI, we established a new benchmark that other researchers may use. Our experiments demonstrate that our model outperforms the cutting-edge few-shot semantic segmentation method by 25.39% and 21.31% in the one-shot and five-shot settings, respectively.
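The two modules named in the abstract can be illustrated with a minimal sketch. The module names (FTM, HTM) follow the paper, but the specific operations below — centering features on a shared anchor for domain invariance, and iterative cross-attention between support and query features for hierarchical matching — are illustrative assumptions, not the authors' actual layer designs.

```python
import numpy as np

def ftm(features, anchor):
    """Feature Transformation Module (sketch): re-express features relative
    to a domain-agnostic anchor by centering and L2-normalizing them."""
    shifted = features - anchor                        # remove domain-specific offset
    norms = np.linalg.norm(shifted, axis=-1, keepdims=True) + 1e-8
    return shifted / norms                             # unit-length, anchor-relative

def htm(support, query, levels=2):
    """Hierarchical Transformer Module (sketch): repeated cross-attention
    matching that enhances query features with support cues."""
    q = query
    for _ in range(levels):
        attn = q @ support.T                           # similarity, shape (N_q, N_s)
        attn = np.exp(attn - attn.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)       # softmax over support pixels
        q = q + attn @ support                         # residual aggregation
    return q

rng = np.random.default_rng(0)
anchor = rng.normal(size=64)                           # shared high-level anchor
support = ftm(rng.normal(size=(16, 64)), anchor)       # support-pixel features
query = ftm(rng.normal(size=(32, 64)), anchor)         # query-pixel features
enhanced = htm(support, query)
print(enhanced.shape)                                  # (32, 64)
```

Because both domains are mapped into the same anchor-relative space before matching, the matching step never sees raw domain statistics — this is one plausible reading of how training on irrelevant data can transfer to remote sensing imagery.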
Keywords: meta-learning; cross-domain segmentation; few-shot semantic segmentation; transformer
Share and Cite

MDPI and ACS Style

Sun, Q.; Chao, J.; Lin, W.; Xu, Z.; Chen, W.; He, N. Learn to Few-Shot Segment Remote Sensing Images from Irrelevant Data. Remote Sens. 2023, 15, 4937. https://doi.org/10.3390/rs15204937

AMA Style

Sun Q, Chao J, Lin W, Xu Z, Chen W, He N. Learn to Few-Shot Segment Remote Sensing Images from Irrelevant Data. Remote Sensing. 2023; 15(20):4937. https://doi.org/10.3390/rs15204937

Chicago/Turabian Style

Sun, Qingwei, Jiangang Chao, Wanhong Lin, Zhenying Xu, Wei Chen, and Ning He. 2023. "Learn to Few-Shot Segment Remote Sensing Images from Irrelevant Data" Remote Sensing 15, no. 20: 4937. https://doi.org/10.3390/rs15204937

APA Style

Sun, Q., Chao, J., Lin, W., Xu, Z., Chen, W., & He, N. (2023). Learn to Few-Shot Segment Remote Sensing Images from Irrelevant Data. Remote Sensing, 15(20), 4937. https://doi.org/10.3390/rs15204937

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.
