MEFSR-GAN: A Multi-Exposure Feedback and Super-Resolution Multitask Network via Generative Adversarial Networks
Abstract
1. Introduction
- Introduction of an end-to-end multi-exposure super-resolution generative adversarial network (MEFSR-GAN): This paper presents the first use of a GAN framework to simultaneously perform multi-exposure fusion (MEF) and super-resolution (SR) in a unified model. MEFSR-GAN improves both MEF and SR performance under extreme exposure conditions;
- Development of a multi-exposure feedback block (MEFB): We propose a novel MEFB specifically designed for low-resolution images with over-exposure and under-exposure. The MEFB processes the under- and over-exposed inputs in parallel and incorporates a channel attention mechanism to optimize feature extraction and improve model generalization (a structural sketch of such a block follows this list);
- Proposal of a dual discriminator network: To address the difficulty of training on extremely exposed images, we introduce a dual discriminator network that guides the generator toward stable and distinct feature representations, producing images that closely resemble the ground truth;
- State-of-the-art results on the SICE and PQA-MEF datasets: Experimental results on the SICE and PQA-MEF datasets demonstrate that MEFSR-GAN outperforms recent MEF and SR methods, achieving state-of-the-art performance in both qualitative and quantitative evaluations.
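To make the MEFB and channel attention ideas above concrete, the following is a minimal PyTorch sketch of a feedback block with squeeze-and-excitation style channel attention. The layer widths, reduction ratio, and residual wiring are illustrative assumptions, not the authors' exact design:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (reduction ratio assumed)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))  # rescale channels by learned weights

class FeedbackBlock(nn.Module):
    """Simplified feedback block: fuses current features with feedback
    features from the previous iteration, then applies channel attention.
    The structure is an assumption based on the paper's description."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.compress = nn.Conv2d(2 * channels, channels, 1)  # merge feedback + input
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attention = ChannelAttention(channels)

    def forward(self, x, feedback):
        h = self.compress(torch.cat([x, feedback], dim=1))
        return self.attention(self.body(h)) + h  # residual connection

if __name__ == "__main__":  # smoke test with assumed shapes
    f = torch.randn(1, 64, 32, 32)
    block = FeedbackBlock(64)
    out = block(f, torch.zeros_like(f))  # zero feedback at the first iteration
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In the full network, two such branches (one per exposure) would run in parallel and exchange feedback features across iterations, per the description above.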
2. Related Work
2.1. Single-Image Super-Resolution
2.2. Multi-Exposure Fusion
3. Multi-Exposure Feedback and Super-Resolution Generative Adversarial Networks (MEFSR-GAN)
3.1. Network Architecture
3.2. Multi-Exposure Feedback Block (MEFB)
3.3. Discriminator Network
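As a rough illustration of the dual-discriminator idea, the sketch below shows an SRGAN-style discriminator of which two independent instances (one per exposure branch) could be trained; the depth, widths, and normalization choices are our assumptions, not the paper's exact architecture:

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, stride):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class Discriminator(nn.Module):
    """Assumed SRGAN-style discriminator; two independent instances would
    form the dual discriminator, one guiding each exposure branch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 64, 1), conv_block(64, 64, 2),
            conv_block(64, 128, 1), conv_block(128, 128, 2),
            conv_block(128, 256, 1), conv_block(256, 256, 2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 1))

    def forward(self, x):
        return self.head(self.features(x))  # raw logits (no sigmoid; see Section 3.4.1)

# Two instances guide the generator's two branches:
# d_under, d_over = Discriminator(), Discriminator()
```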
3.4. Loss Function
3.4.1. Adversarial Loss
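Since the reference list cites the relativistic discriminator of Jolicoeur-Martineau, a plausible (assumed) formulation is the relativistic average adversarial loss popularized by ESRGAN, where \(x_r\) is a real image, \(x_f\) a generated one, \(C(\cdot)\) the raw discriminator output, and \(\sigma\) the sigmoid:

```latex
D_{\mathrm{Ra}}(x_r, x_f) = \sigma\big( C(x_r) - \mathbb{E}_{x_f}[C(x_f)] \big)

\mathcal{L}_{\mathrm{adv}}^{G} = - \mathbb{E}_{x_r}\big[ \log\big(1 - D_{\mathrm{Ra}}(x_r, x_f)\big) \big]
                                 - \mathbb{E}_{x_f}\big[ \log D_{\mathrm{Ra}}(x_f, x_r) \big]
```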
3.4.2. Content Loss
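A minimal, assumed form of the content loss is a pixel-wise L1 distance between the fused super-resolved output and the high-resolution ground truth, where \(I_{\mathrm{ue}}\) and \(I_{\mathrm{oe}}\) denote the under- and over-exposed low-resolution inputs (symbols introduced here for illustration):

```latex
\mathcal{L}_{\mathrm{content}} = \big\lVert G(I_{\mathrm{ue}}, I_{\mathrm{oe}}) - I_{\mathrm{gt}} \big\rVert_1
```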
3.4.3. Perceptual Loss
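Following the VGG network cited in the references (Simonyan and Zisserman), perceptual loss is typically computed between feature maps \(\phi_j\) of a fixed pre-trained VGG; the layer index \(j\) and the choice of norm here are assumptions:

```latex
\mathcal{L}_{\mathrm{perc}} = \frac{1}{C_j H_j W_j}
  \big\lVert \phi_j\big(G(I_{\mathrm{ue}}, I_{\mathrm{oe}})\big) - \phi_j(I_{\mathrm{gt}}) \big\rVert_2^2
```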
3.4.4. Structural Similarity Index Measure Loss
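The SSIM index of Wang et al. (cited in the references) is defined below; the corresponding loss is commonly taken as one minus the index:

```latex
\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
                           {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},
\qquad
\mathcal{L}_{\mathrm{SSIM}} = 1 - \mathrm{SSIM}\big(G(I_{\mathrm{ue}}, I_{\mathrm{oe}}), I_{\mathrm{gt}}\big)
```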
3.4.5. Mixed Loss Function
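A hedged sketch of how the four terms might be combined into the mixed loss; the λ weights below are placeholders, not the paper's tuned coefficients:

```python
def mixed_loss(l_adv, l_content, l_perc, l_ssim,
               lam_adv=5e-3, lam_content=1.0, lam_perc=1.0, lam_ssim=0.1):
    """Weighted sum of the four loss terms; coefficients are illustrative only."""
    return (lam_adv * l_adv + lam_content * l_content
            + lam_perc * l_perc + lam_ssim * l_ssim)
```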
4. Results
4.1. Experimental Setup
4.1.1. Dataset
4.1.2. Training Details
4.1.3. Comparison Methods
4.2. Quantitative Comparison Results
4.3. Qualitative Comparison Results
4.4. Ablation Study
4.4.1. Effect of Attention Mechanism
4.4.2. Effect of the Number of MEFBs
4.4.3. Effect of Output Weight
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Lei, F.; Crow, W.T.; Shen, H.; Su, C.-H.; Holmes, T.R.; Parinussa, R.M.; Wang, G. Assessment of the impact of spatial heterogeneity on microwave satellite soil moisture periodic error. Remote Sens. Environ. 2018, 205, 85–99.
- Lee, S.-H.; Kim, T.-E.; Choi, J.-S. Correction of radial distortion using a planar checkerboard pattern and its image. IEEE Trans. Consum. Electron. 2009, 55, 27–33.
- Hsu, W.-Y.; Jian, P.-W. Detail-Enhanced Wavelet Residual Network for Single Image Super-Resolution. IEEE Trans. Instrum. Meas. 2022, 71, 5016913.
- Wu, W.; Yang, X.; Liu, K.; Liu, Y.; Yan, B.; Hua, H. A new framework for remote sensing image super-resolution: Sparse representation-based method by processing dictionaries with multi-type features. J. Syst. Archit. 2016, 64, 63–75.
- Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; Van Gool, L. DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 3297–3305.
- Timofte, R.; De Smet, V.; Van Gool, L. A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution. In Proceedings of the 12th Asian Conference on Computer Vision (ACCV), Singapore, 1–5 November 2014; Volume 9006, pp. 111–126.
- Zhou, Y.; Deng, W.; Tong, T.; Gao, Q. Guided Frequency Separation Network for Real-World Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Seattle, WA, USA, 13–19 June 2020; pp. 1722–1731.
- Ji, X.; Cao, Y.; Tai, Y.; Wang, C.; Li, J.; Huang, F. Real-World Super-Resolution via Kernel Estimation and Noise Injection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Seattle, WA, USA, 13–19 June 2020; pp. 1914–1923.
- Maeda, S. Unpaired Image Super-Resolution using Pseudo-Supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 288–297.
- Yuan, Y.; Liu, S.; Zhang, J.; Zhang, Y.; Dong, C.; Lin, L. Unsupervised Image Super-Resolution Using Cycle-in-Cycle Generative Adversarial Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 814–823.
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307.
- Dong, C.; Loy, C.C.; Tang, X. Accelerating the Super-Resolution Convolutional Neural Network. In Computer Vision – ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 9906, pp. 391–407.
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.P.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
- Jinno, T.; Okuda, M. Multiple Exposure Fusion for High Dynamic Range Image Acquisition. IEEE Trans. Image Process. 2012, 21, 358–365.
- Jia, W.; Song, Z.; Li, Z. Multi-Scale Exposure Fusion via Content Adaptive Edge-Preserving Smoothing Pyramids. IEEE Trans. Consum. Electron. 2022, 68, 317–326.
- Lefevre, S.; Tuia, D.; Wegner, J.D.; Produit, T.; Nassar, A.S. Toward Seamless Multiview Scene Analysis from Satellite to Street Level. Proc. IEEE 2017, 105, 1884–1899.
- Yan, Q.; Sun, J.; Li, H.; Zhu, Y.; Zhang, Y. High dynamic range imaging by sparse representation. Neurocomputing 2017, 269, 160–169.
- Yang, Z.; Chen, Y.; Le, Z.; Ma, Y. GANFuse: A novel multi-exposure image fusion method based on generative adversarial networks. Neural Comput. Appl. 2021, 33, 6133–6145.
- Abed, F.; Khan, I.R.; Rahardja, S. A New Four-Channel Format for Encoding of HDR Images. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2018, E101A, 512–515.
- Shermeyer, J.; Van Etten, A. The Effects of Super-Resolution on Object Detection Performance in Satellite Imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Long Beach, CA, USA, 15–20 June 2019; pp. 1432–1441.
- Deng, X.; Zhang, Y.; Xu, M.; Gu, S.; Duan, Y. Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution. IEEE Trans. Image Process. 2021, 30, 3098–3112.
- Hassan, M.; Wang, Y.; Pang, W.; Wang, D.; Li, D.; Zhou, Y.; Xu, D. IPAS-Net: A deep-learning model for generating high-fidelity shoeprints from low-quality images with no natural references. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 2743–2757.
- He, Z.; Jin, Z.; Zhao, Y. SRDRL: A Blind Super-Resolution Framework With Degradation Reconstruction Loss. IEEE Trans. Multimed. 2021, 24, 2877–2889.
- Li, Y.; Wang, Y.; Li, Y.; Jiao, L.; Zhang, X.; Stolkin, R. Single image super-resolution reconstruction based on genetic algorithm and regularization prior model. Inf. Sci. 2016, 372, 196–207.
- Irani, M.; Peleg, S. Improving resolution by image registration. CVGIP Graph. Models Image Process. 1991, 53, 231–239.
- Ur, H.; Gross, D. Improved resolution from subpixel shifted pictures. CVGIP Graph. Models Image Process. 1992, 54, 181–186.
- Schultz, R.; Stevenson, R. A Bayesian approach to image expansion for improved definition. IEEE Trans. Image Process. 1994, 3, 233–242.
- Schultz, R.; Stevenson, R. Extraction of high-resolution frames from video sequences. IEEE Trans. Image Process. 1996, 5, 996–1011.
- Elad, M.; Feuer, A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 1997, 6, 1646–1658.
- Lertrattanapanich, S.; Bose, N. High resolution image formation from low resolution frames using Delaunay triangulation. IEEE Trans. Image Process. 2002, 11, 1427–1441.
- Freeman, W.; Jones, T.; Pasztor, E. Example-based super-resolution. IEEE Comput. Graph. Appl. 2002, 22, 56–65.
- Yang, J.; Wright, J.; Huang, T.; Ma, Y. Image super-resolution as sparse representation of raw image patches. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
- Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image Super-Resolution Via Sparse Representation. IEEE Trans. Image Process. 2010, 19, 2861–2873.
- Timofte, R.; De Smet, V.; Van Gool, L. Anchored Neighborhood Regression for Fast Example-Based Super-Resolution. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 1920–1927.
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 184–199.
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
- Kim, J.; Lee, J.K.; Lee, K.M. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1637–1645.
- Tai, Y.; Yang, J.; Liu, X.; Xu, C. MemNet: A Persistent Memory Network for Image Restoration. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4549–4557.
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Volume 11211, pp. 294–310.
- Chen, X.; Wang, X.; Zhou, J.; Qiao, Y.; Dong, C. Activating More Pixels in Image Super-Resolution Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 22367–22377.
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Montreal, QC, Canada, 11–17 October 2021; pp. 1833–1844.
- Chen, Y.; Liu, S.; Wang, X. Learning continuous image representation with local implicit image function. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 8628–8638.
- Burt, P.J.; Adelson, E.H. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 1983, 31, 532–540.
- Mertens, T.; Kautz, J.; Van Reeth, F. Exposure fusion. In Proceedings of the 15th Pacific Conference on Computer Graphics and Applications (PG’07), Maui, HI, USA, 29 October–2 November 2007; pp. 382–390.
- Goshtasby, A.A.; Nikolov, S. Image fusion: Advances in the state of the art. Inf. Fusion 2007, 8, 114–118.
- Burt, P.J.; Kolczynski, R.J. Enhanced image capture through fusion. In Proceedings of the 1993 (4th) International Conference on Computer Vision, Berlin, Germany, 11–14 May 1993; pp. 173–182.
- Goshtasby, A.A. Fusion of multi-exposure images. Image Vis. Comput. 2005, 23, 611–618.
- Ma, K.; Wang, Z. Multi-exposure image fusion: A patch-wise approach. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 1717–1721.
- Liu, Y.; Liu, S.; Wang, Z. Multi-focus image fusion with dense SIFT. Inf. Fusion 2015, 23, 139–155.
- Lee, S.H.; Park, J.S.; Cho, N.I. A multi-exposure image fusion based on the adaptive weights reflecting the relative pixel intensity and global gradient. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 1737–1741.
- Prabhakar, K.R.; Srikar, V.S.; Babu, R.V. DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4724–4732.
- Zhang, Y.; Liu, Y.; Sun, P.; Yan, H.; Zhao, X.; Zhang, L. IFCNN: A general image fusion framework based on convolutional neural network. Inf. Fusion 2020, 54, 99–118.
- Li, H.; Zhang, L. Multi-exposure fusion with CNN features. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 1723–1727.
- Xu, H.; Ma, J.; Zhang, X.-P. MEF-GAN: Multi-Exposure Image Fusion via Generative Adversarial Networks. IEEE Trans. Image Process. 2020, 29, 7203–7216.
- Xu, H.; Ma, J.; Le, Z.; Jiang, J.; Guo, X. FusionDN: A unified densely connected network for image fusion. In Proceedings of the 34th AAAI Conference on Artificial Intelligence/32nd Innovative Applications of Artificial Intelligence Conference/10th AAAI Symposium on Educational Advances in Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12484–12491.
- Xu, H.; Ma, J.; Jiang, J.; Guo, X.; Ling, H. U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 502–518.
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 63–79.
- Haris, M.; Shakhnarovich, G.; Ukita, N. Deep Back-Projection Networks for Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 1664–1673.
- Li, Z.; Yang, J.; Liu, Z.; Yang, X.; Jeon, G.; Wu, W. Feedback Network for Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3862–3871.
- Jolicoeur-Martineau, A. The relativistic discriminator: A key element missing from standard GAN. arXiv 2018, arXiv:1807.00734.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Cai, J.; Gu, S.; Zhang, L. Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images. IEEE Trans. Image Process. 2018, 27, 2049–2062.
- Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356.
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2472–2481.
- Ma, K.; Duanmu, Z.; Zhu, H.; Fang, Y.; Wang, Z. Deep Guided Learning for Fast Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2020, 29, 2808–2819.
- Li, H.; Ma, K.; Yong, H.; Zhang, L. Fast Multi-Scale Structural Patch Decomposition for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2020, 29, 5805–5816.
- Ma, K.; Duanmu, Z.; Yeganeh, H.; Wang, Z. Multi-exposure image fusion by optimizing a structural similarity index. IEEE Trans. Comput. Imaging 2018, 4, 60–72.
SR + MEF

| Methods | IFCNN (PSNR / SSIM / MEF-SSIM) | MEF-Net (PSNR / SSIM / MEF-SSIM) | Fast SPD (PSNR / SSIM / MEF-SSIM) | U2Fusion (PSNR / SSIM / MEF-SSIM) |
|---|---|---|---|---|
| EDSR | 21.003 / 0.812 / 0.808 | 15.127 / 0.765 / 0.794 | 17.088 / 0.757 / 0.829 | 16.119 / 0.723 / 0.810 |
| SRFBN | 20.949 / 0.808 / 0.803 | 15.106 / 0.760 / 0.786 | 17.052 / 0.752 / 0.819 | 16.098 / 0.718 / 0.819 |
| RDN | 20.910 / 0.796 / 0.790 | 15.093 / 0.753 / 0.774 | 17.063 / 0.743 / 0.798 | 16.088 / 0.714 / 0.774 |
| SwinIR | 21.034 / 0.816 / 0.811 | 15.153 / 0.770 / 0.799 | 17.149 / 0.763 / 0.840 | 16.131 / 0.727 / 0.816 |

MEF + SR

| Methods | IFCNN (PSNR / SSIM / MEF-SSIM) | MEF-Net (PSNR / SSIM / MEF-SSIM) | Fast SPD (PSNR / SSIM / MEF-SSIM) | U2Fusion (PSNR / SSIM / MEF-SSIM) |
|---|---|---|---|---|
| EDSR | 20.251 / 0.777 / 0.756 | 15.106 / 0.763 / 0.787 | 16.908 / 0.739 / 0.796 | 16.031 / 0.686 / 0.757 |
| SRFBN | 20.286 / 0.776 / 0.754 | 15.090 / 0.759 / 0.781 | 16.935 / 0.739 / 0.796 | 16.022 / 0.684 / 0.754 |
| RDN | 20.347 / 0.774 / 0.757 | 15.071 / 0.752 / 0.771 | 16.962 / 0.736 / 0.793 | 16.028 / 0.683 / 0.751 |
| SwinIR | 20.082 / 0.772 / 0.749 | 15.118 / 0.766 / 0.790 | 16.867 / 0.739 / 0.791 | 16.035 / 0.688 / 0.759 |

| Method | PSNR | SSIM | MEF-SSIM |
|---|---|---|---|
| CF-Net | 22.669 | 0.857 | 0.848 |
| Ours | 24.821 | 0.896 | 0.855 |
SR + MEF

| Methods | IFCNN | MEF-Net | Fast SPD | U2Fusion |
|---|---|---|---|---|
| EDSR | 0.774 | 0.801 | 0.869 | 0.867 |
| SRFBN | 0.782 | 0.806 | 0.870 | 0.870 |
| RDN | 0.787 | 0.810 | 0.861 | 0.875 |
| SwinIR | 0.762 | 0.796 | 0.864 | 0.861 |

MEF + SR

| Methods | IFCNN | MEF-Net | Fast SPD | U2Fusion |
|---|---|---|---|---|
| EDSR | 0.744 | 0.800 | 0.842 | 0.875 |
| SRFBN | 0.752 | 0.804 | 0.847 | 0.877 |
| RDN | 0.761 | 0.808 | 0.849 | 0.875 |
| SwinIR | 0.732 | 0.794 | 0.828 | 0.874 |

| Method | MEF-SSIM |
|---|---|
| CF-Net | 0.851 |
| Ours | 0.843 |
SR + MEF

| Methods | IFCNN (PSNR / SSIM / MEF-SSIM) | MEF-Net (PSNR / SSIM / MEF-SSIM) | Fast SPD (PSNR / SSIM / MEF-SSIM) | U2Fusion (PSNR / SSIM / MEF-SSIM) |
|---|---|---|---|---|
| EDSR | 19.813 / 0.662 / 0.643 | 14.727 / 0.627 / 0.618 | 16.559 / 0.616 / 0.627 | 15.562 / 0.587 / 0.633 |
| SRFBN | 19.814 / 0.673 / 0.657 | 14.722 / 0.630 / 0.623 | 16.568 / 0.623 / 0.643 | 15.579 / 0.588 / 0.636 |
| RDN | 19.632 / 0.620 / 0.595 | 14.270 / 0.507 / 0.526 | 15.972 / 0.512 / 0.543 | 15.418 / 0.526 / 0.584 |
| SwinIR | 20.054 / 0.711 / 0.702 | 14.828 / 0.665 / 0.668 | 16.726 / 0.657 / 0.701 | 15.689 / 0.623 / 0.685 |

MEF + SR

| Methods | IFCNN (PSNR / SSIM / MEF-SSIM) | MEF-Net (PSNR / SSIM / MEF-SSIM) | Fast SPD (PSNR / SSIM / MEF-SSIM) | U2Fusion (PSNR / SSIM / MEF-SSIM) |
|---|---|---|---|---|
| EDSR | 19.020 / 0.646 / 0.616 | 14.718 / 0.626 / 0.615 | 16.448 / 0.601 / 0.616 | 15.532 / 0.555 / 0.590 |
| SRFBN | 18.990 / 0.651 / 0.619 | 14.705 / 0.629 / 0.619 | 16.454 / 0.606 / 0.621 | 15.540 / 0.555 / 0.592 |
| RDN | 19.050 / 0.626 / 0.603 | 14.677 / 0.604 / 0.596 | 16.420 / 0.580 / 0.595 | 15.550 / 0.546 / 0.582 |
| SwinIR | 18.525 / 0.651 / 0.614 | 14.760 / 0.656 / 0.650 | 16.254 / 0.612 / 0.623 | 15.584 / 0.568 / 0.605 |

| Method | PSNR | SSIM | MEF-SSIM |
|---|---|---|---|
| CF-Net | 20.965 | 0.722 | 0.678 |
| Ours | 21.928 | 0.729 | 0.743 |
SR + MEF

| Methods | IFCNN | MEF-Net | Fast SPD | U2Fusion |
|---|---|---|---|---|
| EDSR | 0.788 | 0.807 | 0.835 | 0.850 |
| SRFBN | 0.791 | 0.806 | 0.845 | 0.843 |
| RDN | 0.791 | 0.823 | 0.823 | 0.869 |
| SwinIR | 0.739 | 0.776 | 0.820 | 0.809 |

MEF + SR

| Methods | IFCNN | MEF-Net | Fast SPD | U2Fusion |
|---|---|---|---|---|
| EDSR | 0.749 | 0.804 | 0.815 | 0.848 |
| SRFBN | 0.750 | 0.803 | 0.818 | 0.845 |
| RDN | 0.767 | 0.822 | 0.825 | 0.875 |
| SwinIR | 0.693 | 0.772 | 0.774 | 0.846 |

| Method | MEF-SSIM |
|---|---|
| CF-Net | 0.766 |
| Ours | 0.748 |
| Metric | Without Attention Mechanism | With Attention Mechanism |
|---|---|---|
| PSNR | 24.746 | 24.821 |
| SSIM | 0.881 | 0.896 |
| Metric | Num = 2 | Num = 3 | Num = 4 |
|---|---|---|---|
| PSNR | 23.503 | 24.821 | 22.689 |
| SSIM | 0.885 | 0.896 | 0.876 |
| Output Weight 1 | Output Weight 2 | PSNR | SSIM |
|---|---|---|---|
| 0.3 | 0.7 | 24.801 | 0.895 |
| 0.4 | 0.6 | 24.817 | 0.895 |
| 0.5 | 0.5 | 24.821 | 0.896 |
| 0.6 | 0.4 | 24.813 | 0.896 |
| 0.7 | 0.3 | 24.793 | 0.895 |