Article

VQ-InfraTrans: A Unified Framework for RGB-IR Translation with Hybrid Transformer

School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Street, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(24), 5661; https://doi.org/10.3390/rs15245661
Submission received: 27 September 2023 / Revised: 22 November 2023 / Accepted: 4 December 2023 / Published: 7 December 2023
(This article belongs to the Special Issue Computer Vision and Image Processing in Remote Sensing)

Abstract

Infrared (IR) images, which carry rich spectral information, are essential in many fields. Most current RGB-IR translation work relies on conditional generative models trained to synthesize IR images for specific devices and scenes. However, such models only establish an empirical mapping between RGB and IR images within a single dataset and therefore cannot handle the multi-scene, multi-band (0.7–3 μm and 8–15 μm) transfer task. To address this challenge, we propose VQ-InfraTrans, a comprehensive framework for translating images from the visible spectrum to the infrared spectrum. The framework supports both unconditional and conditional RGB-IR transfer within a single multi-mode pipeline, enabling diverse and flexible image transformations. Instead of training an individual model for each condition or dataset, we propose a two-stage transfer framework that integrates these diverse requirements into one unified model, combining a composite encoder-decoder based on VQ-GAN with a multi-path transformer to translate multi-modal images from RGB to infrared. To reduce the large errors that arise when transferring specific targets because of their radiance, we further develop a hybrid editing module that precisely maps spectral transfer information for local targets. Qualitative and quantitative comparisons show substantial improvements over prior algorithms: the structural similarity index (SSIM) is improved by 2.24% and the peak signal-to-noise ratio (PSNR) by 2.71%.
Keywords: infrared image; image-to-image translation; multi-modal controls; vector quantization; transformer
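The two-stage design described in the abstract (a VQ-GAN-style encoder-decoder that discretizes images into codebook tokens, followed by a transformer that maps RGB tokens to IR tokens) can be illustrated with a minimal PyTorch sketch. All module names, sizes, and the single-path transformer below are illustrative assumptions rather than the authors' released implementation; the composite encoder/decoder, the multi-path transformer, and the hybrid editing module are omitted for brevity.

# Minimal sketch of a two-stage VQ + transformer RGB-to-IR pipeline (assumed names and sizes).
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Stage 1 (VQ-GAN style): snap encoder features to the nearest entry of a learned codebook."""
    def __init__(self, num_codes: int = 1024, code_dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z: torch.Tensor):
        # z: (B, C, H, W) feature map from an image encoder; flatten spatial positions.
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)
        # Nearest-codebook-entry lookup by Euclidean distance gives discrete tokens.
        indices = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        z_q = self.codebook(indices).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return z_q, indices.reshape(b, h * w)

class RGB2IRTransformer(nn.Module):
    """Stage 2: a transformer that predicts IR code indices from RGB code indices."""
    def __init__(self, num_codes: int = 1024, dim: int = 256, depth: int = 4):
        super().__init__()
        self.embed = nn.Embedding(num_codes, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_codes)

    def forward(self, rgb_tokens: torch.Tensor) -> torch.Tensor:
        # rgb_tokens: (B, L) discrete indices from the RGB encoder/quantizer.
        x = self.body(self.embed(rgb_tokens))
        return self.head(x)  # (B, L, num_codes) logits over IR codebook entries

if __name__ == "__main__":
    quantizer = VectorQuantizer()
    translator = RGB2IRTransformer()
    fake_features = torch.randn(2, 256, 16, 16)   # stand-in for encoder output
    _, tokens = quantizer(fake_features)
    ir_logits = translator(tokens)
    print(ir_logits.shape)                        # torch.Size([2, 256, 1024])

In this reading, the predicted IR token logits would be decoded back to an infrared image by the stage-1 decoder; conditioning and local target editing would act on the token sequence before decoding.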

Share and Cite

MDPI and ACS Style

Sun, Q.; Wang, X.; Yan, C.; Zhang, X. VQ-InfraTrans: A Unified Framework for RGB-IR Translation with Hybrid Transformer. Remote Sens. 2023, 15, 5661. https://doi.org/10.3390/rs15245661

AMA Style

Sun Q, Wang X, Yan C, Zhang X. VQ-InfraTrans: A Unified Framework for RGB-IR Translation with Hybrid Transformer. Remote Sensing. 2023; 15(24):5661. https://doi.org/10.3390/rs15245661

Chicago/Turabian Style

Sun, Qiyang, Xia Wang, Changda Yan, and Xin Zhang. 2023. "VQ-InfraTrans: A Unified Framework for RGB-IR Translation with Hybrid Transformer" Remote Sensing 15, no. 24: 5661. https://doi.org/10.3390/rs15245661

APA Style

Sun, Q., Wang, X., Yan, C., & Zhang, X. (2023). VQ-InfraTrans: A Unified Framework for RGB-IR Translation with Hybrid Transformer. Remote Sensing, 15(24), 5661. https://doi.org/10.3390/rs15245661

