Article

UMMFF: Unsupervised Multimodal Multilevel Feature Fusion Network for Hyperspectral Image Super-Resolution

Zhongmin Jiang, Mengyao Chen and Wenju Wang
College of Publishing, University of Shanghai for Science and Technology, Shanghai 200093, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(17), 3282; https://doi.org/10.3390/rs16173282
Submission received: 2 July 2024 / Revised: 26 August 2024 / Accepted: 27 August 2024 / Published: 4 September 2024
(This article belongs to the Special Issue Image Enhancement and Fusion Techniques in Remote Sensing)

Abstract

Unsupervised hyperspectral super-resolution algorithms suffer from low precision and limited applicability because they make inadequate use of the complementary information carried by different modalities and produce biased estimates of the degradation parameters. To address these issues, this paper proposes the Unsupervised Multimodal Multilevel Feature Fusion network (UMMFF) for hyperspectral image super-resolution. The approach employs a gated cross-retention module to learn patterns shared across modalities; this module suppresses intermodal differences while preserving spatial–spectral correlations, thereby facilitating information interaction. A multilevel spatial–channel attention mechanism and a parallel fusion decoder extract features at three levels (low, medium, and high), enriching the information drawn from the multimodal images. In addition, a blind estimation network based on an independent prior and an implicit neural representation is designed to estimate the degradation parameters accurately. On the Washington DC, Salinas, and Botswana datasets, UMMFF outperformed existing state-of-the-art methods on the primary performance metrics PSNR and ERGAS: the PSNR values improved by 18.03%, 8.55%, and 5.70%, respectively, while the ERGAS values decreased by 50.00%, 75.39%, and 53.27%. These results indicate that UMMFF adapts well across datasets and yields high-precision reconstructions.
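The exact formulation of the gated cross-retention module is given in the paper itself. Purely as an illustration of the general pattern the abstract names (gated cross-modal mixing between the hyperspectral and auxiliary-image streams), a minimal PyTorch sketch might look as follows; the class name, tensor shapes, and the use of standard multi-head attention in place of retention are assumptions made here for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class GatedCrossFusion(nn.Module):  # hypothetical name, not the paper's module
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        # Cross-attention: the HSI stream queries the auxiliary (e.g., MSI) stream.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # A learned gate decides how much cross-modal information to admit.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, hsi_tokens, msi_tokens):
        # hsi_tokens, msi_tokens: (batch, tokens, dim)
        cross, _ = self.cross_attn(hsi_tokens, msi_tokens, msi_tokens)
        g = self.gate(torch.cat([hsi_tokens, cross], dim=-1))
        # Gated residual update: intermodal differences are suppressed by the gate
        # while the HSI stream's own spatial-spectral content is preserved.
        return self.norm(hsi_tokens + g * cross)

The gated residual form mirrors the abstract's stated goal of suppressing intermodal differences without discarding spatial–spectral correlations.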
Keywords: hyperspectral image super-resolution; multimodal multilevel feature fusion; gate-cross keeping shared encoder; multilevel parallel fusion decoder; prior-knowledge implicit estimation
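For reference, the two metrics quoted above have standard definitions; the sketch below implements them as commonly used in fusion-based super-resolution work (mean per-band PSNR, and ERGAS with the spatial upsampling factor). Whether PSNR is averaged per band or computed over the full cube is an evaluation-protocol detail defined in the paper, so the per-band convention here is an assumption.

import numpy as np

def mpsnr(ref: np.ndarray, est: np.ndarray, peak: float = 1.0) -> float:
    # Mean PSNR over spectral bands; ref and est have shape (bands, H, W).
    vals = []
    for k in range(ref.shape[0]):
        mse = np.mean((ref[k].astype(np.float64) - est[k].astype(np.float64)) ** 2)
        vals.append(10.0 * np.log10(peak ** 2 / mse))
    return float(np.mean(vals))

def ergas(ref: np.ndarray, est: np.ndarray, scale: int) -> float:
    # ERGAS = 100/scale * sqrt(mean over bands of (RMSE_k / mean_k)^2); lower is better.
    bands = ref.shape[0]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((ref[k] - est[k]) ** 2))
        acc += (rmse / np.mean(ref[k])) ** 2
    return 100.0 / scale * np.sqrt(acc / bands)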

Share and Cite

MDPI and ACS Style

Jiang, Z.; Chen, M.; Wang, W. UMMFF: Unsupervised Multimodal Multilevel Feature Fusion Network for Hyperspectral Image Super-Resolution. Remote Sens. 2024, 16, 3282. https://doi.org/10.3390/rs16173282

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
