DDGANSE: Dual-Discriminator GAN with a Squeeze-and-Excitation Module for Infrared and Visible Image Fusion
Abstract
1. Introduction
2. Methods
2.1. Overall Framework
2.2. Generator of DDGANSE
2.3. Discriminator of DDGANSE
2.4. Loss Function
3. Experiments
3.1. Data
3.2. Training Details
Algorithm 1: Training procedure for DDGANSE.
1: for M epochs do
2:   for m steps do
3:     for r times do
4:       Select b visible patches;
5:       Select b infrared patches;
6:       Select b fused patches;
7:       Update the parameters of the discriminator by the Adam optimizer: ∇D(LD);
8:     end for
9:     Select b visible patches;
10:    Select b infrared patches;
11:    Update the parameters of the generator by the Adam optimizer: ∇G(LG);
12:   end for
13: end for
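Algorithm 1's control flow is an alternating schedule: in each of the m steps, the discriminator is updated r times on freshly sampled (visible, infrared, fused) patch batches before the generator is updated once. A minimal Python sketch of that schedule (the function names and stub losses are ours, not the authors' implementation; the callables stand in for the real Adam updates of LD and LG):

```python
import random

def train_ddganse(update_discriminator, update_generator,
                  sample_patches, epochs=1, steps=1, d_iters=1, batch=4):
    """Run the DDGANSE-style alternating loop of Algorithm 1."""
    for _ in range(epochs):                  # "for M epochs do"
        for _ in range(steps):               # "for m steps do"
            for _ in range(d_iters):         # "for r times do": D trains more often
                vis = sample_patches("visible", batch)
                ir = sample_patches("infrared", batch)
                fused = sample_patches("fused", batch)
                update_discriminator(vis, ir, fused)   # Adam step on grad_D(LD)
            vis = sample_patches("visible", batch)
            ir = sample_patches("infrared", batch)
            update_generator(vis, ir)                  # Adam step on grad_G(LG)

# Count the updates to check the schedule: M=2, m=3, r=2
calls = {"D": 0, "G": 0}
train_ddganse(lambda v, i, f: calls.__setitem__("D", calls["D"] + 1),
              lambda v, i: calls.__setitem__("G", calls["G"] + 1),
              lambda kind, b: [random.random()] * b,
              epochs=2, steps=3, d_iters=2)
print(calls)  # → {'D': 12, 'G': 6}
```

The stubs only count calls; in the actual method each discriminator update would descend LD on real/fused patches and each generator update would descend LG.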
3.3. Performance Metrics
3.4. Results for the TNO Dataset
4. Conclusions
Author Contributions
Funding
Informed Consent Statement
Conflicts of Interest
References
- Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178.
- Das, S.; Zhang, Y. Color night vision for navigation and surveillance. Transp. Res. Rec. J. Transp. Res. Board 2000, 1708, 40–46.
- Li, H.; Wu, X.-J.; Durrani, T. NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models. IEEE Trans. Instrum. Meas. 2020, 69, 9645–9656.
- Yang, Y.; Que, Y.; Huang, S.; Lin, P. Multiple Visual Features Measurement with Gradient Domain Guided Filtering for Multisensor Image Fusion. IEEE Trans. Instrum. Meas. 2017, 66, 691–703.
- Yang, Y.; Zhang, Y.; Huang, S.; Zuo, Y.; Sun, J. Infrared and Visible Image Fusion Using Visual Saliency Sparse Representation and Detail Injection Model. IEEE Trans. Instrum. Meas. 2021, 70, 1–15.
- Zhang, Q.; Liu, Y.; Blum, R.S.; Han, J.; Tao, D. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review. Inf. Fusion 2018, 40, 57–75.
- Kong, W.; Lei, Y.; Zhao, H. Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization. Infrared Phys. Technol. 2014, 67, 161–172.
- Zhang, X.; Ma, Y.; Fan, F.; Zhang, Y.; Huang, J. Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition. J. Opt. Soc. Am. A 2017, 34, 1400–1410.
- Zhao, J.; Chen, Y.; Feng, H.; Xu, Z.; Li, Q. Infrared image enhancement through saliency feature analysis based on multi-scale decomposition. Infrared Phys. Technol. 2014, 62, 86–93.
- Ma, J.; Zhou, Z.; Wang, B.; Zong, H. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Phys. Technol. 2017, 82, 8–17.
- Li, S.; Yang, B.; Hu, J. Performance comparison of different multi-resolution transforms for image fusion. Inf. Fusion 2011, 12, 74–84.
- Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164.
- Wang, H.; Li, S.; Song, L.; Cui, L.; Wang, P. An Enhanced Intelligent Diagnosis Method Based on Multi-Sensor Image Fusion via Improved Deep Learning Network. IEEE Trans. Instrum. Meas. 2020, 69, 2648–2657.
- Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion 2016, 31, 100–109.
- Zhao, J.; Cui, G.; Gong, X.; Zang, Y.; Tao, S.; Wang, D. Fusion of visible and infrared images using global entropy and gradient constrained regularization. Infrared Phys. Technol. 2017, 81, 201–209.
- Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf. Fusion 2019, 48, 11–26.
- Li, H.; Wu, X.-J. DenseFuse: A Fusion Approach to Infrared and Visible Images. IEEE Trans. Image Process. 2019, 28, 2614–2623.
- Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Image fusion with convolutional sparse representation. IEEE Signal Process. Lett. 2016, 23, 1882–1886.
- Li, H.; Wu, X.; Kittler, J. MDLatLRR: A Novel Decomposition Method for Infrared and Visible Image Fusion. IEEE Trans. Image Process. 2020, 29, 4733–4746.
- Li, H.; Wu, X.; Durrani, T. Infrared and visible image fusion with ResNet and zero-phase component analysis. Infrared Phys. Technol. 2019, 102, 103039.
- Zhong, J.; Yang, B.; Li, Y.; Zhong, F.; Chen, Z. Image Fusion and Super-Resolution with Convolutional Neural Network. In Proceedings of the Informatics and Intelligent Applications, Cairo, Egypt, 24–26 October 2016; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2016; pp. 78–88.
- Toet, A. Image fusion by a ratio of low-pass pyramid. Pattern Recognit. Lett. 1989, 9, 245–253.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680.
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023.
- Ma, J.; Xu, H.; Jiang, J.; Mei, X.; Zhang, X.-P. DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995.
- Wang, Z.; Bovik, A. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
- Nencini, F.; Garzelli, A.; Baronti, S.; Alparone, L. Remote sensing image fusion using the curvelet transform. Inf. Fusion 2007, 8, 143–156.
- Lewis, J.J.; O’Callaghan, R.J.; Nikolov, S.G.; Bull, D.R.; Canagarajah, N. Pixel- and region-based image fusion with complex wavelets. Inf. Fusion 2007, 8, 119–130.
- Gan, W.; Wu, X.; Wu, W.; Yang, X.; Ren, C.; He, X.; Liu, K. Infrared and visible image fusion with the use of multi-scale edge-preserving decomposition and guided image filter. Infrared Phys. Technol. 2015, 72, 37–51.
- Liu, Y.; Chen, X.; Cheng, J.; Peng, H.; Wang, Z. Infrared and visible image fusion with convolutional neural networks. Int. J. Wavelets Multiresolut. Inf. Process. 2018, 16, 1850018.
- Minahil, S.; Kim, J.-H.; Hwang, Y. Patch-Wise Infrared and Visible Image Fusion Using Spatial Adaptive Weights. Appl. Sci. 2021, 11, 9255.
- Ma, J.; Zhang, H.; Shao, Z.; Liang, P.; Xu, W. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion. IEEE Trans. Instrum. Meas. 2020, 70, 1–14.
- Zhang, H.; Xu, H.; Xiao, Y.; Guo, X.; Ma, J. Rethinking the Image Fusion: A Fast Unified Image Fusion Network based on Proportional Maintenance of Gradient and Intensity. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; The AAAI Press: Palo Alto, CA, USA, 2020; Volume 34, pp. 12797–12804.
- Li, H.; Wu, X.-J.; Kittler, J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images. Inf. Fusion 2021, 73, 72–86.
- Li, Q.; Lu, L.; Li, Z.; Wu, W.; Liu, Z.; Jeon, G.; Yang, X. Coupled GAN with Relativistic Discriminators for Infrared and Visible Images Fusion. IEEE Sens. J. 2021, 21, 7458–7467.
- Burt, P.; Adelson, E. The Laplacian Pyramid as a Compact Image Code. IEEE Trans. Commun. 1983, 31, 532–540.
- Van Aardt, J.; Roberts, J.W.; Ahmed, F. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J. Appl. Remote Sens. 2008, 2, 023522.
- Aslantas, V.; Bendes, E. A new image quality metric for image fusion: The sum of the correlations of differences. AEU-Int. J. Electron. Commun. 2015, 69, 1890–1896.
- Deshmukh, M.; Bhosale, U. Image fusion and image quality assessment of fused images. Int. J. Image Process. 2010, 4, 484.
- Rao, Y.-J. In-fibre Bragg grating sensors. Meas. Sci. Technol. 1997, 8, 355.
G layer 1 | G layer 2 | G layer 3 | G layer 4 | D layer 1 | D layer 2 | D layer 3 | D layer 4 | SSIM | SCD | CC | EN
---|---|---|---|---|---|---|---|---|---|---|---
5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 0.63 | 1.65 | 0.45 | 6.89 |
5 × 5 | 5 × 5 | 3 × 3 | 3 × 3 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 0.67 | 1.66 | 0.47 | 6.74 |
3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 0.69 | 1.79 | 0.46 | 6.99 |
3 × 3 | 3 × 3 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 0.67 | 1.54 | 0.50 | 6.99 |
5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 0.64 | 1.67 | 0.49 | 6.89 |
5 × 5 | 5 × 5 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 0.73 | 1.79 | 0.52 | 7.09 |
3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 0.69 | 1.78 | 0.48 | 6.84 |
3 × 3 | 3 × 3 | 5 × 5 | 5 × 5 | 3 × 3 | 3 × 3 | 3 × 3 | 3 × 3 | 0.70 | 1.81 | 0.52 | 6.94 |
Method | LPP | LP | CVT | DTCWT | GTF | CNN | FusionGAN | GANMcC | PMGI | DDCGAN | RFN-Nest | RCGAN | Ours |
---|---|---|---|---|---|---|---|---|---|---|---|---|---
SSIM | 0.63 | 0.59 | 0.76 | 0.79 | 0.71 | 0.83 | 0.81 | 0.84 | 0.72 | 0.56 | 0.83 | 0.77 | 0.86 |
PSNR | 16.8 | 17.4 | 16.2 | 17.8 | 16.8 | 18.4 | 16.6 | 17.5 | 18.1 | 16.3 | 18.5 | 18.8 | 19.6 |
EN | 6.84 | 6.85 | 6.74 | 6.69 | 6.98 | 7.29 | 6.51 | 6.96 | 6.99 | 6.09 | 6.84 | 6.89 | 7.09 |
SCD | 1.64 | 1.71 | 1.66 | 1.67 | 1.05 | 1.73 | 1.54 | 1.73 | 1.56 | 1.39 | 1.83 | 1.63 | 1.79 |
CC | 0.45 | 0.47 | 0.47 | 0.48 | 0.30 | 0.43 | 0.43 | 0.50 | 0.46 | 0.34 | 0.51 | 0.49 | 0.52 |
SD | 0.12 | 0.13 | 0.12 | 0.11 | 0.18 | 0.19 | 0.07 | 0.12 | 0.11 | 0.05 | 0.08 | 0.09 | 0.13 |
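Two of the scalar metrics in the comparison above can be computed directly from pixel values. The sketch below (our own illustration, not the authors' evaluation code) computes EN as the Shannon entropy of the fused image's 8-bit histogram and CC as the Pearson correlation of the fused image with each source; averaging CC over the two sources is an assumption of this sketch, as papers differ on that convention.

```python
import numpy as np

def entropy(img):
    """EN: Shannon entropy, in bits, of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

def correlation(fused, vis, ir):
    """CC: mean Pearson correlation of the fused image with each source."""
    def cc(a, b):
        a = a.astype(np.float64).ravel()
        b = b.astype(np.float64).ravel()
        return float(np.corrcoef(a, b)[0, 1])
    return 0.5 * (cc(fused, vis) + cc(fused, ir))

# Toy example on random "source" images and a naive average fusion
rng = np.random.default_rng(0)
vis = rng.integers(0, 256, (64, 64))
ir = rng.integers(0, 256, (64, 64))
fused = (vis + ir) // 2
print(round(entropy(fused), 2), round(correlation(fused, vis, ir), 2))
```

Higher EN indicates a richer intensity distribution in the fused result, and higher CC indicates the fused image preserves more of both sources, which is how the table's EN and CC columns are read.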
Method | DDGANSE (Ours) | Without Two Discriminators | Without One Discriminator | Without SE |
---|---|---|---|---
SSIM | 0.86 | 0.63 | 0.57 | 0.73 |
PSNR | 19.60 | 18.05 | 18.47 | 18.82 |
EN | 7.09 | 6.59 | 6.67 | 7.11 |
SCD | 1.79 | 1.47 | 1.51 | 1.73 |
CC | 0.52 | 0.46 | 0.51 | 0.41 |
SD | 0.13 | 0.09 | 0.10 | 0.11 |
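The "Without SE" column in the ablation above removes the squeeze-and-excitation module. A minimal NumPy sketch of an SE block in the spirit of Hu et al. (the weights here are random placeholders, not trained parameters, and the layer shapes are our own illustration):

```python
import numpy as np

def se_block(feat, w1, w2):
    """Reweight channels of a (C, H, W) feature map.

    squeeze: global average pool over H, W -> (C,)
    excite:  two FC layers (ReLU, then sigmoid) -> per-channel gates in (0, 1)
    scale:   multiply each channel of the input by its gate
    """
    z = feat.mean(axis=(1, 2))                  # squeeze
    s = np.maximum(w1 @ z, 0.0)                 # FC + ReLU (bottleneck C/r)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # FC + sigmoid
    return feat * gates[:, None, None]          # scale

rng = np.random.default_rng(1)
C, r = 8, 2                                     # channels, reduction ratio
feat = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C))           # squeeze FC: (C/r, C)
w2 = rng.standard_normal((C, C // r))           # excite FC: (C, C/r)
out = se_block(feat, w1, w2)
print(out.shape)  # → (8, 16, 16)
```

Because each gate lies in (0, 1), the block can only attenuate channels, which is the channel-attention effect the SSIM and SCD gains over "Without SE" are attributed to.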
Method | LP | CVT | DTCWT | GTF | CNN | FusionGAN
---|---|---|---|---|---|---
Runtime | 0.089 | 1.224 | 0.394 | 4.369 | 51.250 | 0.159

Method | GANMcC | PMGI | DDCGAN | RFN-Nest | RCGAN | DDGANSE (Ours)
---|---|---|---|---|---|---
Runtime | 0.331 | 0.369 | 0.469 | 0.332 | 0.452 | 0.321
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, J.; Ren, J.; Li, H.; Sun, Z.; Luan, Z.; Yu, Z.; Liang, C.; Monfared, Y.E.; Xu, H.; Hua, Q. DDGANSE: Dual-Discriminator GAN with a Squeeze-and-Excitation Module for Infrared and Visible Image Fusion. Photonics 2022, 9, 150. https://doi.org/10.3390/photonics9030150