Article

RAN: Infrared and Visible Image Fusion Network Based on Residual Attention Decomposition

School of Information Science and Engineering, Yunnan University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2856; https://doi.org/10.3390/electronics13142856
Submission received: 8 July 2024 / Revised: 17 July 2024 / Accepted: 18 July 2024 / Published: 19 July 2024

Abstract

Infrared and visible image fusion (IVIF) is a research direction currently attracting much attention in the field of image processing. Its goal is to combine an infrared image and a visible image into a single fused image that retains the advantageous features of each source. Research in this field aims to improve image quality, enhance target recognition, and broaden the application areas of image processing. To advance research in this area, we propose an image fusion method based on a Residual Attention Network (RAN). Applying this network to the image fusion task allows the residual attention mechanism to better capture critical background and detail information in the images, significantly improving the quality and effectiveness of image fusion. Experimental results on public datasets show that our method performs excellently on multiple key metrics. Compared to existing methods, it improves the standard deviation (SD) by 35.26%, spatial frequency (SF) by 109.85%, average gradient (AG) by 96.93%, and structural similarity (SSIM) by 23.47%. These significant improvements validate the superiority of the proposed residual attention network for image fusion and open up new possibilities for enhancing the performance and adaptability of fusion networks.
Keywords: image fusion; residual attention module; dense block; infrared image; visible image
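The SD, SF, and AG metrics cited in the abstract have standard no-reference definitions in the image fusion literature. Below is a minimal sketch of how they are typically computed on a fused grayscale image, assuming a 2-D NumPy array input; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def standard_deviation(img):
    # SD: spread of pixel intensities; higher values indicate greater contrast.
    return float(np.std(img.astype(np.float64)))

def spatial_frequency(img):
    # SF: combined energy of horizontal and vertical first differences.
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def average_gradient(img):
    # AG: mean magnitude of local gradients; reflects edge and detail sharpness.
    img = img.astype(np.float64)
    gx = img[:-1, 1:] - img[:-1, :-1]  # horizontal difference
    gy = img[1:, :-1] - img[:-1, :-1]  # vertical difference
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

SSIM, by contrast, is a full-reference metric computed against each source image (e.g. via `skimage.metrics.structural_similarity`), so it is not sketched here.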

Share and Cite

MDPI and ACS Style

Yu, J.; Lu, G.; Zhang, J. RAN: Infrared and Visible Image Fusion Network Based on Residual Attention Decomposition. Electronics 2024, 13, 2856. https://doi.org/10.3390/electronics13142856

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
