Article

RSAM-Seg: A SAM-Based Model with Prior Knowledge Integration for Remote Sensing Image Semantic Segmentation

1 College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
2 Jiangsu Provincial Geological Surveying and Mapping Brigade, Nanjing 211102, China
3 College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(4), 590; https://doi.org/10.3390/rs17040590
Submission received: 24 December 2024 / Revised: 28 January 2025 / Accepted: 5 February 2025 / Published: 8 February 2025
(This article belongs to the Special Issue Advanced AI Technology for Remote Sensing Analysis)

Abstract

High-resolution remote sensing satellites have revolutionized remote sensing research, yet accurately segmenting specific targets from complex satellite imagery remains challenging. While the Segment Anything Model (SAM) has emerged as a promising universal segmentation model, its direct application to remote sensing imagery yields suboptimal results. To address these limitations, we propose RSAM-Seg, a novel deep learning model adapted from SAM and designed specifically for remote sensing applications. Our model incorporates two key components: Adapter-Scale and Adapter-Feature modules. The Adapter-Scale modules, integrated within Vision Transformer (ViT) blocks, enhance model adaptability through learnable transformations, while the Adapter-Feature modules, positioned between ViT blocks, generate image-informed prompts by incorporating task-specific information. Extensive experiments across four binary and two multi-class segmentation scenarios demonstrate the superior performance of RSAM-Seg, which achieves an F1 score of 0.815 in cloud detection, 0.834 in building segmentation, and 0.755 in road extraction, consistently outperforming established architectures such as U-Net, DeepLabV3+, and Segformer. Moreover, RSAM-Seg shows improvements of up to 56.5% in F1 score compared to the original SAM. RSAM-Seg also maintains robust performance in few-shot learning scenarios, achieving an F1 score of 0.656 with only 1% of the training data and rising to 0.815 when the full training set is available. Furthermore, RSAM-Seg can detect areas missing from the ground truth of certain datasets, highlighting its potential for ground-truth completion.
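
For illustration only, the adapter idea summarized in the abstract can be sketched in a few lines of PyTorch-style Python. This is a minimal sketch, not the authors' released implementation: the module names (AdapterScale, AdapterFeature), the bottleneck and prompt dimensions, and the pooling strategy are all assumptions made for clarity.

# Illustrative sketch of the two adapter types described in the abstract
# (not the authors' code): a lightweight bottleneck adapter ("Adapter-Scale")
# inserted inside a ViT block, and a between-block module ("Adapter-Feature")
# that maps intermediate features to an image-informed prompt embedding.
import torch
import torch.nn as nn


class AdapterScale(nn.Module):
    """Learnable bottleneck transformation added inside a ViT block."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adapter: original tokens plus a low-rank correction.
        return x + self.up(self.act(self.down(x)))


class AdapterFeature(nn.Module):
    """Maps features between ViT blocks to an image-informed prompt."""

    def __init__(self, dim: int, prompt_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, prompt_dim))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Pool patch tokens into a single prompt embedding per image.
        return self.proj(tokens.mean(dim=1))


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 768)       # (batch, patches, embed dim)
    refined = AdapterScale(768)(tokens)     # inside-block adaptation
    prompt = AdapterFeature(768)(refined)   # between-block prompt
    print(refined.shape, prompt.shape)      # (2, 196, 768) and (2, 256)

The residual form keeps the frozen SAM backbone's behavior intact at initialization while the small bottleneck layers learn the remote-sensing-specific correction, which is the general rationale behind adapter-based fine-tuning.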
Keywords: remote sensing; image segmentation; segment anything model; prompt adapter technique
Share and Cite

MDPI and ACS Style

Zhang, J.; Li, Y.; Yang, X.; Jiang, R.; Zhang, L. RSAM-Seg: A SAM-Based Model with Prior Knowledge Integration for Remote Sensing Image Semantic Segmentation. Remote Sens. 2025, 17, 590. https://doi.org/10.3390/rs17040590

AMA Style

Zhang J, Li Y, Yang X, Jiang R, Zhang L. RSAM-Seg: A SAM-Based Model with Prior Knowledge Integration for Remote Sensing Image Semantic Segmentation. Remote Sensing. 2025; 17(4):590. https://doi.org/10.3390/rs17040590

Chicago/Turabian Style

Zhang, Jie, Yunxin Li, Xubing Yang, Rui Jiang, and Li Zhang. 2025. "RSAM-Seg: A SAM-Based Model with Prior Knowledge Integration for Remote Sensing Image Semantic Segmentation" Remote Sensing 17, no. 4: 590. https://doi.org/10.3390/rs17040590

APA Style

Zhang, J., Li, Y., Yang, X., Jiang, R., & Zhang, L. (2025). RSAM-Seg: A SAM-Based Model with Prior Knowledge Integration for Remote Sensing Image Semantic Segmentation. Remote Sensing, 17(4), 590. https://doi.org/10.3390/rs17040590

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
