Low-Light Image Enhancement Using CycleGAN-Based Near-Infrared Image Generation and Fusion
Abstract
1. Introduction
- To address the lack of paired visible–NIR image datasets and to improve training quality, a two-stage training scheme is adopted. The first stage uses CycleGAN to generate fake visible images from unpaired data, while the second stage fine-tunes the network with paired fake visible and real NIR images, strengthening domain translation and structural consistency;
- The two-stage image fusion blends luminance information from the visible and NIR images. In the first stage, luminance differences are computed and combined to emphasize detail; in the second stage, a bilateral filter suppresses noise and refines the blending, followed by gamma correction to enhance global tone (a minimal sketch follows this list);
- Additional local tone processing is achieved using CLAHE to improve local contrast, while color adjustment restores natural chromatic balance by compensating for distortions caused during luminance enhancement.
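The exact fusion weights and filter parameters are not reproduced in this outline, so the snippet below is only a minimal sketch of the two-stage luminance blending described above, assuming 8-bit luminance arrays `y_vis` and `y_nir` and OpenCV's bilateral filter; `alpha` and `gamma` are illustrative placeholders, not the authors' tuned values.

```python
import cv2
import numpy as np

def fuse_luminance(y_vis: np.ndarray, y_nir: np.ndarray,
                   alpha: float = 0.5, gamma: float = 0.8) -> np.ndarray:
    """Two-stage luminance fusion sketch: difference-based blending,
    then bilateral smoothing and gamma correction."""
    y_vis = y_vis.astype(np.float32) / 255.0
    y_nir = y_nir.astype(np.float32) / 255.0

    # Stage 1: add NIR detail in proportion to how much it differs
    # from the visible-band luminance.
    fused = y_vis + alpha * (y_nir - y_vis)

    # Stage 2: bilateral filtering suppresses noise while preserving
    # edges; gamma < 1 lifts the global tone of dark regions.
    fused = np.clip(fused, 0.0, 1.0)
    fused = cv2.bilateralFilter(fused, d=9, sigmaColor=0.1, sigmaSpace=7)
    fused = np.power(fused, gamma)

    return (fused * 255.0).astype(np.uint8)
```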
2. Related Works
2.1. Cycle-Consistent Generative Adversarial Networks
2.2. Image Fusion
3. Proposed Method
- Extraction of hidden information: Sequential unpaired-to-paired transformations are conducted to generate a synthetic NIR image, revealing details hidden in dark environments;
- Enhanced visibility: Dual blending is utilized to synthesize the NIR and VIS images, improving overall visibility;
- Detail refinement: Gamma correction and CLAHE are applied to refine image details, enhance contrast, and improve brightness.
- Color compensation: The final step applies color compensation to produce a visually balanced, color-accurate image (see the sketch after this list).
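To make the last two steps concrete, the sketch below applies CLAHE to the L channel in LAB space and then rescales the chroma channels toward a more saturated balance. This is an illustration under assumed defaults (`clip_limit`, `tile`, and `chroma_gain` are hypothetical), not the paper's exact color-compensation model.

```python
import cv2
import numpy as np

def refine_and_compensate(bgr: np.ndarray, clip_limit: float = 2.0,
                          tile: int = 8, chroma_gain: float = 1.1) -> np.ndarray:
    """CLAHE-based local tone refinement plus simple chroma compensation."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l_ch, a_ch, b_ch = cv2.split(lab)

    # CLAHE equalizes the histogram per tile, raising local contrast
    # without over-amplifying noise globally.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
    l_ch = clahe.apply(l_ch)

    # Luminance boosting tends to wash out color; scale the a/b channels
    # away from the neutral point (128) to compensate.
    def boost(c: np.ndarray) -> np.ndarray:
        return np.clip((c.astype(np.float32) - 128.0) * chroma_gain + 128.0,
                       0, 255).astype(np.uint8)

    return cv2.cvtColor(cv2.merge((l_ch, boost(a_ch), boost(b_ch))),
                        cv2.COLOR_LAB2BGR)
```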
3.1. Unpaired and Paired Dataset Training Using CycleGAN
3.2. Visible–NIR Fusion
3.3. Color Space Transformation and Compensation
4. Simulations
4.1. Comparative Experiments
4.2. Quantitative Evaluations
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Kwon, H.-J.; Lee, S.-H. Visible and Near-Infrared Image Acquisition and Fusion for Night Surveillance. Chemosensors 2021, 9, 75. [Google Scholar] [CrossRef]
- Park, C.-W.; Kwon, H.-J.; Lee, S.-H. Illuminant Adaptive Wideband Image Synthesis Using Separated Base-Detail Layer Fusion Maps. Appl. Sci. 2022, 12, 9441. [Google Scholar] [CrossRef]
- Sukthankar, R.; Stockton, R.G.; Mullin, M.D. Smarter presentations: Exploiting homography in camera-projector systems. In Proceedings of the Eighth IEEE International Conference on Computer Vision, ICCV 2001, Vancouver, BC, Canada, 7–14 July 2001; pp. 247–253. [Google Scholar] [CrossRef]
- Kuang, J.; Johnson, G.M.; Fairchild, M.D. iCAM06: A refined image appearance model for HDR image rendering. J. Vis. Commun. Image Represent. 2007, 18, 406–414. [Google Scholar] [CrossRef]
- Ma, C.; Yeo, T.S.; Liu, Z.; Zhang, Q.; Guo, Q. Target imaging based on ℓ1–ℓ0 norms homotopy sparse signal recovery and distributed MIMO antennas. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 3399–3414. [Google Scholar] [CrossRef]
- Kwon, H.-J.; Lee, S.-H. Contrast Sensitivity Based Multiscale Base–Detail Separation for Enhanced HDR Imaging. Appl. Sci. 2020, 10, 2513. [Google Scholar] [CrossRef]
- Go, Y.-H.; Lee, S.-H.; Lee, S.-H. Multiexposed Image-Fusion Strategy Using Mutual Image Translation Learning with Multiscale Surround Switching Maps. Mathematics 2024, 12, 3244. [Google Scholar] [CrossRef]
- Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic Tone Reproduction for Digital Images. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2; ACM: New York, NY, USA, 2023; pp. 661–670. [Google Scholar] [CrossRef]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar] [CrossRef]
- Kim, Y.-J.; Son, D.-M.; Lee, S.-H. Retinex Jointed Multiscale CLAHE Model for HDR Image Tone Compression. Mathematics 2024, 12, 1541. [Google Scholar] [CrossRef]
- Son, D.-M.; Kwon, H.-J.; Lee, S.-H. Visible and Near Infrared Image Fusion Using Base Tone Compression and Detail Transform Fusion. Chemosensors 2022, 10, 124. [Google Scholar] [CrossRef]
- Elad, M. On the origin of the bilateral filter and ways to improve it. IEEE Trans. Image Process. 2002, 11, 1141–1151. [Google Scholar] [CrossRef]
- Yan, B.; Guo, W. A novel identification method for CPPU-treated kiwifruits based on images. J. Sci. Food Agric. 2019, 99, 6234–6240. [Google Scholar] [CrossRef] [PubMed]
- Im, C.-G.; Son, D.-M.; Kwon, H.-J.; Lee, S.-H. Tone Image Classification and Weighted Learning for Visible and NIR Image Fusion. Entropy 2022, 24, 1435. [Google Scholar] [CrossRef] [PubMed]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661. [Google Scholar] [CrossRef]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
- Szekeres, B.J.; Gyöngyössy, M.N.; Botzheim, J. A ResNet-9 Model for Insect Wingbeat Sound Classification. In Proceedings of the 2023 IEEE Symposium Series on Computational Intelligence (SSCI), Mexico City, Mexico, 5–8 December 2023; pp. 587–592. [Google Scholar] [CrossRef]
- Liu, G.; Yan, S. Latent Low-Rank Representation for subspace segmentation and feature extraction. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1615–1622. [Google Scholar] [CrossRef]
- Zarmehi, N.; Marvasti, F. Removal of sparse noise from sparse signals. Signal Process. 2019, 158, 91–99. [Google Scholar] [CrossRef]
- Borstelmann, A.; Haucke, T.; Steinhage, V. The Potential of Diffusion-Based Near-Infrared Image Colorization. Sensors 2024, 24, 1565. [Google Scholar] [CrossRef]
- Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251. [Google Scholar] [CrossRef]
- Yang, Z.; Chen, Z. Learning From Paired and Unpaired Data: Alternately Trained CycleGAN for Near Infrared Image Colorization. In Proceedings of the 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), Macau, China, 1–4 December 2020; pp. 467–470. [Google Scholar] [CrossRef]
- Su, H.; Jung, C.; Yu, L. Multi-Spectral Fusion and Denoising of Color and Near-Infrared Images Using Multi-Scale Wavelet Analysis. Sensors 2021, 21, 3610. [Google Scholar] [CrossRef]
- Radke, R.J.; Andra, S.; Al-Kofahi, O.; Roysam, B. Image change detection algorithms: A systematic survey. IEEE Trans. Image Process. 2005, 14, 294–307. [Google Scholar] [CrossRef]
- Zhang, H.; Ma, J. IID-MEF: A multi-exposure fusion network based on intrinsic image decomposition. Inf. Fusion 2023, 95, 326–340. [Google Scholar] [CrossRef]
- Lee, S.-H.; Kwon, H.-J.; Lee, S.-H. Enhancing Lane-Tracking Performance in Challenging Driving Environments through Parameter Optimization and a Restriction System. Appl. Sci. 2023, 13, 9313. [Google Scholar] [CrossRef]
- Lee, G.-Y.; Lee, S.-H.; Kwon, H.-J.; Sohng, K.-I. Visual sensitivity correlated tone reproduction for low dynamic range images in the compression field. Opt. Eng. 2014, 53, 113111. [Google Scholar] [CrossRef]
- Musa, P.; Al Rafi, F.; Lamsani, M. A Review: Contrast-Limited Adaptive Histogram Equalization (CLAHE) methods to help the application of face recognition. In Proceedings of the 2018 Third International Conference on Informatics and Computing (ICIC), Palembang, Indonesia, 17–18 October 2018; pp. 1–6. [Google Scholar] [CrossRef]
- Bartleson, C.J. Predicting corresponding colors with changes in adaptation. Color Res. Appl. 1979, 4, 143–155. [Google Scholar] [CrossRef]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
- Sa, I.; Lim, J.Y.; Ahn, H.S.; MacDonald, B. DeepNIR: Datasets for Generating Synthetic NIR Images and Improved Fruit Detection System Using Deep Learning Techniques. Sensors 2022, 22, 4721. [Google Scholar] [CrossRef]
- Loh, Y.P.; Chan, C.S. Getting to know low-light images with the Exclusively Dark dataset. Comput. Vis. Image Underst. 2019, 178, 30–42. [Google Scholar] [CrossRef]
- Meylan, L.; Susstrunk, S. High dynamic range image rendering with a retinex-based adaptive filter. IEEE Trans. Image Process. 2006, 15, 2820–2830. [Google Scholar] [CrossRef]
- Meylan, L. Tone Mapping for High Dynamic Range Images; EPFL: Lausanne, Switzerland, 2006. [Google Scholar]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
- Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 2014, 29, 856–863. [Google Scholar] [CrossRef]
- Vu, C.T.; Chandler, D.M. S3: A Spectral and Spatial Sharpness Measure. In Proceedings of the 2009 First International Conference on Advances in Multimedia, Colmar, France, 20–25 July 2009; pp. 37–43. [Google Scholar] [CrossRef]
- Hassen, R.; Wang, Z.; Salama, M.M.A. Image Sharpness Assessment Based on Local Phase Coherence. IEEE Trans. Image Process. 2013, 22, 2798–2810. [Google Scholar] [CrossRef]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a ‘Completely Blind’ Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
- Yang, S.; Wu, T.; Shi, S.; Lao, S.; Gong, Y.; Cao, M.; Wang, J.; Yang, Y. MANIQA: Multi-dimension attention network for no-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1191–1200. [Google Scholar] [CrossRef]
- Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740. [Google Scholar]
- Computer Vision Test Images. Available online: https://www.cs.cmu.edu/afs/cs/project/cil/ftp/html/v-images.html (accessed on 5 May 2024).
| Metric | iCAM06 | L1L0 | Kwon et al. [6] | Reinhard [8] | RetinexNet | Kim et al. [10] | Proposed |
|---|---|---|---|---|---|---|---|
| BRISQUE (↓) | 25.272 | 26.424 | 22.602 | 25.946 | 20.895 | 22.098 | 20.473 ¹ |
| SSEQ (↓) | 21.316 | 23.052 | 16.311 | 23.842 | 21.995 | 18.165 | 16.475 ² |
| S3 (↑) | 0.160 | 0.139 | 0.232 | 0.173 | 0.178 | 0.225 | 0.245 ¹ |
| LPC_SI (↑) | 0.902 | 0.888 | 0.925 | 0.917 | 0.919 | 0.921 | 0.948 ¹ |
| NIQE (↓) | 3.427 | 3.325 | 4.530 | 3.289 | 4.309 | 3.319 | 3.001 ¹ |
| MANIQA (↑) | 0.2246 | 0.2361 | 0.2392 | 0.2353 | 0.2851 | 0.2288 | 0.3043 ¹ |
| CNNIQA (↓) | 20.311 | 20.920 | 30.122 | 24.135 | 19.254 | 22.006 | 18.802 ¹ |
| | iCAM06 | L1L0 | Kwon et al. [6] | Reinhard [8] | RetinexNet | Kim et al. [10] | Proposed |
|---|---|---|---|---|---|---|---|
| Processing time | 5.39 s | 5.57 s | 22.54 s | 1.10 s ¹ | 6.40 s | 26.78 s | 2.29 s ² |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lee, M.-H.; Go, Y.-H.; Lee, S.-H.; Lee, S.-H. Low-Light Image Enhancement Using CycleGAN-Based Near-Infrared Image Generation and Fusion. Mathematics 2024, 12, 4028. https://doi.org/10.3390/math12244028