A Zero-Reference Low-Light Image-Enhancement Approach Based on Noise Estimation
Abstract
1. Introduction
- Proposing an improved high-order curve expression for mapping low-light images to enhanced images, which involves adding a noise term to reduce the noise in the enhanced image;
- Proposing a color-map-based noise-estimation method, fusing noise maps with low-light images to enhance the input features of the depth curve estimation network;
- Designing a reference-free noise loss function and adjusting the depth-curve-estimation network structure to denoise the enhanced images without reference supervision while reducing color distortion;
- Achieving state-of-the-art (SOTA) results on several reference-free low-light datasets and successfully applying the method to real coal mine image enhancement.
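The first contribution builds on the quadratic light-enhancement curve of Zero-DCE [13], LE(x) = I(x) + α(x)·I(x)(1 − I(x)), applied iteratively with a per-pixel parameter map, and extends it with a noise term. A minimal NumPy sketch of this idea follows; the iterative curve is Zero-DCE's, while the subtracted `noise` map only illustrates where a noise term could enter (the paper's exact formulation and the network-predicted maps are not reproduced here):

```python
import numpy as np

def enhance_curve(img, alphas, noise):
    """Iteratively apply the quadratic enhancement curve
    LE_n = LE_{n-1} + A_n * LE_{n-1} * (1 - LE_{n-1}),
    then subtract an estimated noise map (illustrative placement only).

    img    : H x W x 3 array with values in [0, 1]
    alphas : list of per-pixel curve-parameter maps A_n in [-1, 1]
    noise  : per-pixel noise estimate to subtract from the result
    """
    le = img.astype(np.float64)
    for a in alphas:
        le = le + a * le * (1.0 - le)  # each pass brightens dark regions
    return np.clip(le - noise, 0.0, 1.0)

# Toy usage: brighten a uniform dark image with a constant curve
# parameter over 8 iterations (Zero-DCE also uses 8) and zero noise.
dark = np.full((4, 4, 3), 0.2)
alphas = [np.full_like(dark, 0.8)] * 8
out = enhance_curve(dark, alphas, noise=np.zeros_like(dark))
```

Because each iteration adds a term proportional to LE(1 − LE), dark pixels are lifted strongly while values near 0 or 1 barely move, which keeps the mapping monotonic and range-preserving.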
2. Related Works and Methods
2.1. Related Works
2.2. Method
2.2.1. Improved Higher-Order Curve Expressions
2.2.2. Noise Estimation and Fusion
2.2.3. Depth-Curve-Estimation Network
2.2.4. Zero-Reference Loss Function
3. Results
3.1. Implementation Details
3.1.1. Dataset
3.1.2. Experiment Settings
3.2. Benchmark Evaluations
3.2.1. Validation on the Full-Reference Dataset LOL
| Reference | Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|---|---|---|---|---|
| With | KinD (ACM MM, 2019) [40] | 18.970 | 0.782 | 0.257 |
| | KinD++ (IJCV, 2021) [38] | 20.870 | 0.804 | 0.175 |
| | LLFlow (AAAI, 2022) [9] | 24.999 | 0.870 | 0.117 |
| | LLFlow-SKF (CVPR, 2023) [39] | 26.798 | 0.879 | 0.105 |
| | LIME (TIP, 2016) [17] | 16.760 | 0.560 | 0.350 |
| | DRBN (CVPR, 2020) [28] | 19.137 | 0.784 | 0.252 |
| | EnlightenGAN (TIP, 2021) [10] | 17.483 | 0.652 | 0.322 |
| Zero | RetinexNet (BMVC, 2018) [31] | 16.770 | 0.462 | 0.474 |
| | Zero-DCE (CVPR, 2020) [13] | 14.861 | 0.562 | 0.335 |
| | Ours | 19.457 (best) | 0.786 (best) | 0.235 (best) |
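The PSNR column above follows the standard definition 10·log10(MAX²/MSE); it can be reproduced with a few lines of NumPy. SSIM and LPIPS require reference implementations (e.g., scikit-image's `structural_similarity` and the `lpips` package), so only PSNR is sketched here:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Example: a uniform error of 0.1 on [0, 1] images gives MSE = 0.01,
# so PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((8, 8))
test = np.full((8, 8), 0.1)
print(round(psnr(ref, test), 2))  # 20.0
```

Higher PSNR and SSIM indicate closer agreement with the ground-truth LOL image, while lower LPIPS indicates smaller perceptual distance, which is why the arrows in the header point in opposite directions.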
3.2.2. Validation on Non-Reference Datasets
3.3. Ablation Studies
3.3.1. Contribution of the Noise Fusion Module
3.3.2. Contribution of Noise Loss
3.3.3. The Impact of Adjusting the Depth-Curve-Estimation Network
3.4. Application of Low-Light Images in Mines and Indoor Squares
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
SOTA | State of the art |
GAN | Generative adversarial network |
CNN | Convolutional neural network |
PSNR | Peak signal-to-noise ratio |
SSIM | Structural similarity |
LPIPS | Learned perceptual image patch similarity |
US | User study |
PI | Perceptual index |
References
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Chu, Q.; Ouyang, W.; Li, H.; Wang, X.; Liu, B.; Yu, N. Online multi-object tracking using CNN-based single object tracker with spatial-temporal attention mechanism. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4836–4845. [Google Scholar]
- Zhong, Z.; Lei, M.; Cao, D.; Fan, J.; Li, S. Class-specific object proposals re-ranking for object detection in automatic driving. Neurocomputing 2017, 242, 187–194. [Google Scholar] [CrossRef]
- Wang, Z.C.; Zhao, Y.Q. An image enhancement method based on the coal mine monitoring system. Adv. Mater. Res. 2012, 468, 204–207. [Google Scholar] [CrossRef]
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646. [Google Scholar]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Learning enriched features for real image restoration and enhancement. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 492–511. [Google Scholar]
- Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3291–3300. [Google Scholar]
- Wang, R.; Zhang, Q.; Fu, C.W.; Shen, X.; Zheng, W.S.; Jia, J. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6849–6857. [Google Scholar]
- Wang, Y.; Wan, R.; Yang, W.; Li, H.; Chau, L.P.; Kot, A. Low-light image enhancement with normalizing flow. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; Volume 36, pp. 2604–2612. [Google Scholar]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
- Wolf, V.; Lugmayr, A.; Danelljan, M.; Van Gool, L.; Timofte, R. Deflow: Learning complex image degradations from unpaired data with conditional flows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 94–103. [Google Scholar]
- Yuan, L.; Sun, J. Automatic exposure correction of consumer photographs. In Proceedings of the Computer Vision—ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 771–785. [Google Scholar]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789. [Google Scholar]
- Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–129. [Google Scholar] [CrossRef] [PubMed]
- Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10561–10570. [Google Scholar]
- Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790. [Google Scholar]
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef] [PubMed]
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2223–2232. [Google Scholar]
- Battiato, S.; Bosco, A.; Castorina, A.; Messina, G. Automatic image enhancement by content dependent exposure correction. EURASIP J. Adv. Signal Process. 2004, 2004, 613282. [Google Scholar] [CrossRef]
- Bhukhanwala, S.A.; Ramabadran, T.V. Automated global enhancement of digitized photographs. IEEE Trans. Consum. Electron. 1994, 40, 1–10. [Google Scholar] [CrossRef]
- Lee, K.; Kim, S.; Kim, S.D. Dynamic range compression based on statistical analysis. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3157–3160. [Google Scholar]
- Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic tone reproduction for digital images. Semin. Graph. Pap. Push. Boundaries 2023, 2, 661–670. [Google Scholar]
- Albu, F.; Vertan, C.; Florea, C.; Drimbarean, A. One scan shadow compensation and visual enhancement of color images. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3133–3136. [Google Scholar]
- Liu, J.; Xu, D.; Yang, W.; Fan, M.; Huang, H. Benchmarking low-light image enhancement and beyond. Int. J. Comput. Vis. 2021, 129, 1153–1184. [Google Scholar] [CrossRef]
- Cao, P.; Chen, P.; Niu, Q. Multi-label image recognition with two-stream dynamic graph convolution networks. Image Vis. Comput. 2021, 113, 104238. [Google Scholar] [CrossRef]
- Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3063–3072. [Google Scholar]
- Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure image. IEEE Trans. Image Process. 2018, 27, 2026–2049. [Google Scholar] [CrossRef] [PubMed]
- Wang, W.; Lai, Q.; Fu, H.; Shen, J.; Ling, H.; Yang, R. Salient object detection in the deep learning era: An in-depth survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3239–3259. [Google Scholar] [CrossRef] [PubMed]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
- Xu, P.; Hospedales, T.M.; Yin, Q.; Song, Y.Z.; Xiang, T.; Wang, L. Deep learning for free-hand sketch: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 285–312. [Google Scholar] [CrossRef] [PubMed]
- Li, C.; Guo, J.; Guo, C. Emerging from water: Underwater image color correction based on weakly supervised color transfer. IEEE Signal Process. Lett. 2018, 25, 323–327. [Google Scholar] [CrossRef]
- Yu, R.; Liu, W.; Zhang, Y.; Qu, Z.; Zhao, D.; Zhang, B. Deepexposure: Learning to expose photos with asynchronously reinforced adversarial learning. In Proceedings of the Advances in Neural Information Processing Systems 31 (NeurIPS 2018), Montreal, QC, Canada, 3–8 December 2018; Volume 31. [Google Scholar]
- Ma, K.; Zeng, K.; Wang, Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef] [PubMed]
- Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation. In Proceedings of the 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 965–968. [Google Scholar]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595. [Google Scholar]
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond brightening low-light images. Int. J. Comput. Vis. 2021, 129, 1013–1037. [Google Scholar] [CrossRef]
- Wu, Y.; Pan, C.; Wang, G.; Yang, Y.; Wei, J.; Li, C.; Shen, H.T. Learning Semantic-Aware Knowledge Guidance for Low-Light Image Enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 1662–1671. [Google Scholar]
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; Volume 129, pp. 1632–1640. [Google Scholar]
- Wang, Y.; Cao, Y.; Zha, Z.J.; Zhang, J.; Xiong, Z.; Zhang, W.; Wu, F. Progressive retinex: Mutually reinforced illumination-noise perception network for low-light image enhancement. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 2015–2023. [Google Scholar]
- Ren, X.; Li, M.; Cheng, W.H.; Liu, J. Joint enhancement and denoising method via sequential decomposition. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar]
| Method | Runtime (s) | Platform |
|---|---|---|
| RetinexNet (BMVC, 2018) [31] | 0.1700 | TensorFlow 2.0 (GPU) |
| EnlightenGAN (TIP, 2021) [10] | 0.0125 | PyTorch 1.8.1 (GPU) |
| Zero-DCE (CVPR, 2020) [13] | 0.0037 | PyTorch 1.8.1 (GPU) |
| Ours | 0.0052 | PyTorch 1.8.1 (GPU) |
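Per-image runtimes like those above are sensitive to warm-up and timer placement; a minimal timing harness along these lines (pure Python, with a placeholder function standing in for any of the listed models, not the paper's actual benchmarking code) yields comparable wall-clock averages. For GPU models, `torch.cuda.synchronize()` would additionally need to be called before each clock read, since CUDA kernels launch asynchronously:

```python
import time

def benchmark(fn, inputs, warmup=3, runs=10):
    """Average wall-clock seconds per call of `fn`, measured over
    `runs` timed invocations after `warmup` untimed warm-up calls."""
    for _ in range(warmup):
        fn(inputs)  # warm caches / JIT / allocator before timing
    start = time.perf_counter()
    for _ in range(runs):
        fn(inputs)
    return (time.perf_counter() - start) / runs

# Placeholder "model": in practice fn would run the enhancement network
# on a preloaded image tensor.
avg = benchmark(lambda x: [v * 2 for v in x], list(range(1000)))
print(f"{avg:.6f} s/call")
```

`time.perf_counter()` is preferred over `time.time()` here because it is monotonic and has the highest available resolution for interval measurement.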
| Method | NPE | LIME | MEF | DICM | Average |
|---|---|---|---|---|---|
| DeepUPE (ACM MM, 2019) [41] | 3.62/3.01 | 3.51/2.73 | 3.43/2.92 | 3.18/3.12 | 3.44/2.95 |
| LIME (TIP, 2016) [17] | 3.72/2.98 | 3.92/2.94 | 3.68/3.17 | 3.22/3.26 | 3.64/3.09 |
| DRBN (CVPR, 2020) [26] | 3.78/2.83 | 3.76/2.83 | 3.13/2.72 | 3.44/3.20 | 3.53/2.90 |
| EnlightenGAN (TIP, 2021) [10] | 3.81/2.85 | 3.78/2.77 | 3.69/2.37 | 3.46/3.06 | 3.69/2.78 |
| RetinexNet (BMVC, 2018) [31] | 3.20/3.14 | 2.23/3.01 | 2.69/2.73 | 2.72/3.12 | 2.71/3.00 |
| JED (ISCAS, 2018) [42] | 3.65/3.05 | 3.50/3.01 | 2.93/3.61 | 3.47/3.43 | 3.39/3.28 |
| SRIE (CVPR, 2016) [16] | 3.58/2.64 | 3.46/2.61 | 3.17/2.58 | 3.40/3.15 | 3.40/2.75 |
| Zero-DCE (CVPR, 2020) [13] | 3.77/2.76 | 3.78/2.68 | 4.01/2.37 | 3.44/2.97 | 3.75/2.70 |
| Ours | 3.92/2.71 | 3.84/2.58 | 4.62/2.25 | 3.83/2.79 | 4.05/2.58 |

Each cell reports US/PI (user-study score, higher is better / perceptual index, lower is better).
| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|---|---|---|---|
| w/o Noise Fusion | 18.725 | 0.764 | 0.253 |
| w/o Noise Loss | 18.573 | 0.759 | 0.261 |
| w/o Adjust Network | 18.209 | 0.753 | 0.267 |
| Our full model | 19.457 | 0.786 | 0.235 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Cao, P.; Niu, Q.; Zhu, Y.; Li, T. A Zero-Reference Low-Light Image-Enhancement Approach Based on Noise Estimation. Appl. Sci. 2024, 14, 2846. https://doi.org/10.3390/app14072846