Article

Multi-View Feature Fusion and Rich Information Refinement Network for Semantic Segmentation of Remote Sensing Images

by Jiang Liu, Shuli Cheng and Anyu Du *
School of Computer Science and Technology, Xinjiang University, Ürümqi 830046, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(17), 3184; https://doi.org/10.3390/rs16173184
Submission received: 24 June 2024 / Revised: 16 August 2024 / Accepted: 27 August 2024 / Published: 28 August 2024
(This article belongs to the Special Issue Image Enhancement and Fusion Techniques in Remote Sensing)

Abstract

Semantic segmentation is currently a hot topic in remote sensing image processing, with extensive applications in land planning and surveying. Many recent studies combine Convolutional Neural Networks (CNNs), which extract local information, with Transformers, which capture global information, to obtain richer features. However, the fused features are often not sufficiently rich and lack detailed refinement. To address this issue, we propose a novel method called the Multi-View Feature Fusion and Rich Information Refinement Network (MFRNet). Our model is equipped with the Multi-View Feature Fusion Block (MAFF) to merge various types of information, including local, non-local, channel, and positional information. Within MAFF, we introduce two innovative components: the Sliding Heterogeneous Multi-Head Attention (SHMA), which extracts local, non-local, and positional information using a sliding window, and the Multi-Scale Hierarchical Compressed Channel Attention (MSCA), which leverages bar-shaped pooling kernels and stepwise compression to obtain reliable channel information. Additionally, we introduce the Efficient Feature Refinement Module (EFRM), which enhances segmentation accuracy through interaction between the outputs of the Long-Range Information Perception Branch and the Local Semantic Information Perception Branch. We evaluate our model on the ISPRS Vaihingen and Potsdam datasets; extensive comparison experiments with state-of-the-art models verify that MFRNet outperforms them.
Keywords: semantic segmentation; feature refinement; remote sensing; multi-view feature fusion
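
As a rough illustration of the bar-shaped pooling idea attributed to MSCA in the abstract above, the sketch below builds a minimal strip-pooling channel-attention block in PyTorch. It is a reconstruction from the abstract's wording only: the class name StripPoolChannelAttention, the reduction ratio, and the way the two strip descriptors are aggregated are all hypothetical choices, not the authors' implementation.

```python
# Illustrative sketch only: a strip-pooling channel-attention block inspired by the
# MSCA description in the abstract (bar-shaped pooling kernels + stepwise channel
# compression). Module and variable names are assumptions, not the authors' code.
import torch
import torch.nn as nn


class StripPoolChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Bar-shaped pooling kernels: pool each feature map along one spatial axis.
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # H x 1 strips
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # 1 x W strips
        mid = max(channels // reduction, 8)
        # Stepwise compression of the channel descriptor before re-expansion.
        self.mlp = nn.Sequential(
            nn.Linear(channels, mid),
            nn.ReLU(inplace=True),
            nn.Linear(mid, channels),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Average each set of strip-pooled features into one per-channel descriptor.
        desc = self.pool_h(x).mean(dim=2).flatten(1) + self.pool_w(x).mean(dim=3).flatten(1)
        weights = self.sigmoid(self.mlp(desc)).view(b, c, 1, 1)
        return x * weights  # re-weight channels


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    attn = StripPoolChannelAttention(64)
    print(attn(feats).shape)  # torch.Size([2, 64, 32, 32])
```

A full MSCA would presumably combine several pooling scales and a hierarchical compression schedule; this sketch keeps only the strip pooling and a single squeeze-and-excitation-style re-weighting to convey the general mechanism.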

Share and Cite

MDPI and ACS Style

Liu, J.; Cheng, S.; Du, A. Multi-View Feature Fusion and Rich Information Refinement Network for Semantic Segmentation of Remote Sensing Images. Remote Sens. 2024, 16, 3184. https://doi.org/10.3390/rs16173184

AMA Style

Liu J, Cheng S, Du A. Multi-View Feature Fusion and Rich Information Refinement Network for Semantic Segmentation of Remote Sensing Images. Remote Sensing. 2024; 16(17):3184. https://doi.org/10.3390/rs16173184

Chicago/Turabian Style

Liu, Jiang, Shuli Cheng, and Anyu Du. 2024. "Multi-View Feature Fusion and Rich Information Refinement Network for Semantic Segmentation of Remote Sensing Images" Remote Sensing 16, no. 17: 3184. https://doi.org/10.3390/rs16173184

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
