Guided Filter-Inspired Network for Low-Light RAW Image Enhancement
Abstract
1. Introduction
- From the perspective of decomposing and fusing contextual and textural information, we propose a novel approach to the LRIE task. The proposed method incorporates the guided filtering mechanism into the network structure, enabling a more principled and effective fusion for this task.
- To better preserve details, we propose an ImGF module that fuses the input source images in the image domain, with its branches emphasizing the contextual and the textural information of the low-light RAW image, respectively. In addition, we introduce an FeGF module that extends the GF-like operation to the feature domain, allowing a more effective fusion of the multi-scale features extracted from the input sources and thus better noise reduction. Together, these modules promote more effective fusion by better exploiting the complementary information of the source images.
- By combining the ImGF and FeGF modules, along with a multi-scale feature extraction module, we construct the GFNet for the LRIE task, achieving state-of-the-art performance on the SID benchmark.
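As a reference point for the filtering mechanism the network builds on, the following is a minimal NumPy sketch of the classical guided filter of He et al. (the locally linear model q = a·I + b, solved per window). Function names, the box-filter implementation, and the default radius/regularization values are our own illustrative choices, not the paper's:

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over (2r+1)x(2r+1) windows via 2-D cumulative sums."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")          # replicate borders
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/col for window sums
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=8, eps=1e-2):
    """Filter p under guidance I: q = mean(a) * I + mean(b),
    with a = cov(I, p) / (var(I) + eps) and b = mean(p) - a * mean(I)."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    var_I = box_filter(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_filter(a, r) * I + box_filter(b, r)
```

The regularization eps controls the edge-preserving behavior: where var(I) >> eps the filter passes the guide's structure through (a ≈ 1), and in flat regions it averages (a ≈ 0), which is the decomposition-and-fusion intuition the ImGF/FeGF modules build on.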
2. Related Work
3. Guided Filter-Inspired Network
3.1. Preliminary and Motivation
3.2. Overall Network Structure and Workflow
3.3. Architecture Details
3.3.1. SMFE Module for Multi-Scale Feature Extraction
3.3.2. ImGF Module for Image-Domain Fusion
3.3.3. FeGF Module for Feature-Domain Fusion
4. Experiments
4.1. Experiments on the LRIE Task
4.1.1. Settings
4.1.2. Results
4.2. Ablation Study
4.3. Test in Real-World Scenarios
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Guided Fusion Network
Appendix B. Additional Evaluation Results
| Method | PSNR ↑ Mean ± CI | PSNR Std | SSIM ↑ Mean ± CI | SSIM Std | LPIPS ↓ Mean ± CI | LPIPS Std | ∆E ↓ Mean ± CI | ∆E Std | Param. (M) ↓ | FLOPs (G) ↓ | Time (s) ↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| GF | 16.03 ± 1.51 | 0.20 | 0.519 ± 0.05 | 0.20 | 0.410 ± 0.03 | 0.12 | 18.76 ± 2.50 | 8.06 | - | - | 0.001 |
| FGF | 28.86 ± 1.04 | 3.65 | 0.772 ± 0.04 | 0.13 | 0.311 ± 0.04 | 0.15 | 5.04 ± 0.49 | 1.73 | 0.035 | 2.584 | 0.003 |
| SID | 29.39 ± 1.95 | 5.12 | 0.794 ± 0.06 | 0.16 | 0.240 ± 0.06 | 0.16 | 5.25 ± 0.88 | 2.32 | 7.761 | 54.98 | 0.009 |
| LRDE | 29.54 ± 1.03 | 3.63 | 0.798 ± 0.04 | 0.13 | 0.253 ± 0.04 | 0.13 | 4.72 ± 0.49 | 1.72 | 8.621 | 138.2 | 0.014 |
| SGN | 29.46 ± 1.19 | 4.17 | 0.800 ± 0.04 | 0.13 | 0.249 ± 0.04 | 0.14 | 4.94 ± 0.56 | 1.97 | 4.113 | 57.59 | 0.010 |
| EEMEFN | 30.07 ± 1.25 | 4.38 | 0.801 ± 0.04 | 0.13 | 0.247 ± 0.04 | 0.14 | 4.66 ± 0.60 | 2.10 | 85.24 | 702.1 | 0.021 |
| RED | 28.66 ± 2.00 | 7.04 | 0.790 ± 0.10 | 0.35 | 0.312 ± 0.11 | 0.39 | 20.17 ± 2.20 | 5.78 | 0.785 | 5.173 | 0.005 |
| DBLE | 29.70 ± 1.40 | 4.93 | 0.796 ± 0.05 | 0.18 | 0.247 ± 0.07 | 0.25 | 4.76 ± 1.00 | 2.52 | 15.01 | 100.5 | 0.015 |
| GFNet (Ours) | 30.48 ± 1.24 | 4.06 | 0.805 ± 0.04 | 0.13 | 0.245 ± 0.04 | 0.14 | 4.50 ± 0.49 | 1.72 | 9.742 | 72.47 | 0.012 |
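The tables report each metric as a mean with a confidence interval and a standard deviation over the test set. Assuming per-image scores and a normal-approximation 95% interval (z = 1.96; the exact convention is not stated here), these summary statistics could be computed as follows:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((np.asarray(ref, np.float64) - np.asarray(est, np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_with_ci(scores, z=1.96):
    """Mean, half-width of a normal-approximation 95% CI, and sample std."""
    scores = np.asarray(scores, dtype=np.float64)
    m = scores.mean()
    s = scores.std(ddof=1)                 # sample standard deviation
    ci = z * s / np.sqrt(scores.size)      # half-width of the interval
    return m, ci, s
```

For example, a set of per-image PSNRs would be summarized as `m ± ci` in the "Mean ± CI" column and `s` in the "Std" column.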
| Method | PSNR ↑ Mean ± CI | PSNR Std | SSIM ↑ Mean ± CI | SSIM Std | LPIPS ↓ Mean ± CI | LPIPS Std | ∆E ↓ Mean ± CI | ∆E Std |
|---|---|---|---|---|---|---|---|---|
| GF | 11.77 ± 0.81 | 2.56 | 0.467 ± 0.05 | 0.16 | 0.515 ± 0.04 | 0.11 | 27.07 ± 1.78 | 5.62 |
| FGF | 27.40 ± 1.28 | 4.05 | 0.713 ± 0.07 | 0.21 | 0.418 ± 0.06 | 0.17 | 6.51 ± 1.03 | 3.26 |
| SID | 28.24 ± 1.30 | 4.13 | 0.745 ± 0.07 | 0.22 | 0.333 ± 0.06 | 0.20 | 6.30 ± 0.97 | 3.07 |
| LRDE | 26.64 ± 2.00 | 5.04 | 0.713 ± 0.11 | 0.29 | 0.380 ± 0.10 | 0.25 | 6.92 ± 1.80 | 3.23 |
| SGN | 28.03 ± 1.33 | 4.21 | 0.742 ± 0.07 | 0.22 | 0.343 ± 0.06 | 0.20 | 6.36 ± 1.00 | 3.16 |
| EEMEFN | 28.66 ± 1.23 | 4.49 | 0.743 ± 0.07 | 0.22 | 0.315 ± 0.06 | 0.18 | 6.20 ± 0.92 | 3.92 |
| RED | 16.15 ± 2.80 | 4.85 | 0.500 ± 0.20 | 0.70 | 0.595 ± 0.18 | 0.63 | 16.32 ± 1.50 | 5.83 |
| DBLE | 24.05 ± 2.50 | 4.79 | 0.669 ± 0.15 | 0.53 | 0.372 ± 0.12 | 0.42 | 8.38 ± 2.20 | 3.74 |
| GFNet (Ours) | 29.39 ± 1.37 | 4.35 | 0.747 ± 0.07 | 0.22 | 0.352 ± 0.06 | 0.20 | 5.69 ± 0.99 | 3.15 |
| Variant | PSNR ↑ | SSIM ↑ | Param. (M) ↓ |
|---|---|---|---|
| (a) w/o BF | 29.96 | 0.792 | 9.742 |
| (b) BF ⇒ Gauss | 30.04 | 0.796 | 9.742 |
| (c) BF ⇒ UNet | 30.62 | 0.804 | 11.07 |
| (d) ImGF ⇒ Conv | 30.13 | 0.800 | 9.727 |
| (e) FeGF ⇒ Conv | 29.99 | 0.797 | 9.742 |
| (f) ImGF&FeGF ⇒ Conv | 29.49 | 0.792 | 9.727 |
| (g) GFNet (Ours) | 30.61 | 0.808 | 9.742 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, X.; Zhao, Q. Guided Filter-Inspired Network for Low-Light RAW Image Enhancement. Sensors 2025, 25, 2637. https://doi.org/10.3390/s25092637