Article

AMM-FuseNet: Attention-Based Multi-Modal Image Fusion Network for Land Cover Mapping

School of Computer Science and Informatics, Cardiff University, Cardiff CF24 4AG, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(18), 4458; https://doi.org/10.3390/rs14184458
Submission received: 20 July 2022 / Revised: 26 August 2022 / Accepted: 5 September 2022 / Published: 7 September 2022

Abstract

Land cover mapping provides spatial information on the physical properties of the Earth’s surface for various classes such as wetlands, artificial surfaces and constructions, vineyards, and water bodies. Reliable land cover information is crucial for developing solutions to a variety of environmental problems, such as the destruction of important wetlands/forests and the loss of fish and wildlife habitats. This has made land cover mapping one of the most widespread applications in remote sensing computational imaging. However, due to the differences between modalities in terms of resolution, content, and sensors, integrating the complementary information that multi-modal remote sensing imagery exhibits into a robust and accurate system remains challenging, and classical segmentation approaches generally do not give satisfactory results for land cover mapping. In this paper, we propose a novel dynamic deep network architecture, AMM-FuseNet, which promotes the use of multi-modal remote sensing images for land cover mapping. The proposed network exploits a hybrid approach combining a channel attention mechanism with densely connected atrous spatial pyramid pooling (DenseASPP). In the experimental analysis, to verify the validity of the proposed method, we test AMM-FuseNet on three datasets and compare it to six state-of-the-art models: DeepLabV3+, PSPNet, UNet, SegNet, DenseASPP, and DANet. In addition, we demonstrate that AMM-FuseNet retains accuracy under minimal training supervision (a reduced number of training samples) better than the state of the art, losing less accuracy even with only 1/20 of the training samples.
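The hybrid design named in the abstract pairs channel attention with DenseASPP. As a rough illustration only, the PyTorch sketch below shows a squeeze-and-excitation style channel attention block and a densely connected ASPP block applied to concatenated optical/SAR features; all layer sizes, module names, and the fusion-by-concatenation step are illustrative assumptions, not the authors' exact AMM-FuseNet configuration.

# Minimal sketch (assumptions: PyTorch; illustrative channel counts and rates).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling,
    a small bottleneck MLP, and a sigmoid gate that re-weights channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise re-weighting


class DenseASPP(nn.Module):
    """Densely connected atrous convolutions: each branch sees the input
    concatenated with all previous branch outputs, which enlarges the
    receptive field while reusing features."""

    def __init__(self, in_channels: int, branch_channels: int = 64,
                 rates=(3, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList()
        channels = in_channels
        for r in rates:
            self.branches.append(nn.Sequential(
                nn.Conv2d(channels, branch_channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            ))
            channels += branch_channels  # dense concatenation grows the input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for branch in self.branches:
            features.append(branch(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


# Toy usage: attend over optical and SAR feature maps separately,
# fuse by concatenation, then apply DenseASPP to the fused features.
if __name__ == "__main__":
    optical = torch.randn(1, 64, 128, 128)   # e.g. optical-branch features
    sar = torch.randn(1, 64, 128, 128)       # e.g. SAR-branch features
    att = ChannelAttention(64)
    fused = torch.cat([att(optical), att(sar)], dim=1)  # 128 channels
    out = DenseASPP(128)(fused)
    print(out.shape)  # torch.Size([1, 384, 128, 128])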
Keywords: multi-modal fusion; channel attention; land cover mapping
Graphical Abstract

Share and Cite

MDPI and ACS Style

Ma, W.; Karakuş, O.; Rosin, P.L. AMM-FuseNet: Attention-Based Multi-Modal Image Fusion Network for Land Cover Mapping. Remote Sens. 2022, 14, 4458. https://doi.org/10.3390/rs14184458


