Priori Knowledge Makes Low-Light Image Enhancement More Reasonable
Abstract
1. Introduction
- This paper presents a parametric enhancement function in which the priori enhancement probability is adaptively adjusted based on each pixel's brightness (a curve sketch follows this list).
- This paper proposes GA Block, which allows each pixel in the enhanced image to be computed from all the pixels in the low-light image, making the enhanced image appear more natural.
- This paper proposes priori channels that indicate the target brightness of the enhanced image, allowing the enhancement result to be freely adjusted.
- This paper presents comprehensive experiments whose results demonstrate that the proposed Priori DCE significantly improves the quality of the enhanced images.
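The enhancement functions referred to above follow the deep-curve-estimation line of work of Zero-DCE [1]. For orientation, below is a minimal sketch of the Zero-DCE-style per-pixel quadratic curve that such parameter maps drive; the priori-adjusted variant proposed in this paper differs in how the parameter maps are produced, and all names in the sketch are our own assumptions rather than the authors' released code.

```python
import torch

def apply_curve(x: torch.Tensor, alphas: list[torch.Tensor]) -> torch.Tensor:
    """Zero-DCE-style iterative quadratic curve [1].

    x:      (B, 3, H, W) low-light image with values in [0, 1].
    alphas: per-iteration parameter maps in [-1, 1], same shape as x,
            produced by a parameter generation model.
    """
    for a in alphas:
        # LE(x) = x + a * x * (1 - x): monotonic and keeps values in [0, 1].
        x = x + a * x * (1.0 - x)
    return x

# Hypothetical usage: constant parameter maps over 8 iterations, as in [1].
low = torch.rand(1, 3, 256, 256)
maps = [torch.full_like(low, 0.6) for _ in range(8)]
enhanced = apply_curve(low, maps)
```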
2. Related Works
2.1. Plain Methods
2.2. Retinex
2.3. Deep Learning
3. Methodology
3.1. Pipeline
- The reference image serves as the enhancement target for a low-light image of size $H \times W \times 3$, and the process of deriving the priori channels from it is illustrated in Figure 2. The reference image is first split by channel into Chan R, Chan G, and Chan B. The mean value of each channel is then calculated, and the priori channels (Priori R, Priori G, and Priori B) are generated from these mean values. This process is expressed by Equation (1); a minimal sketch follows this list.
- The parameter generation model processes the low-light image together with its priori channels, generating parameters that define a set of enhancement functions for each pixel of the low-light image.
- The low-light image is enhanced with the obtained enhancement functions to produce the corresponding enhanced image.
- The loss between the enhanced image and the reference image is computed, and backpropagation updates the parameters of the parameter generation model.
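As a concrete reading of Equation (1), the sketch below builds each priori channel as a full-resolution map filled with the corresponding reference-channel mean. PyTorch, the tensor names, and the constant-fill interpretation are our assumptions, not the authors' implementation.

```python
import torch

def priori_channels(reference: torch.Tensor) -> torch.Tensor:
    """Build priori channels from a reference image.

    reference: (3, H, W) RGB tensor with values in [0, 1].
    Returns a (3, H, W) tensor where each channel is filled with the
    mean of the corresponding reference channel (Equation (1)).
    """
    # Per-channel means over the spatial dimensions: shape (3, 1, 1).
    means = reference.mean(dim=(1, 2), keepdim=True)
    # Broadcast each mean back to a full H x W priori channel.
    return means.expand_as(reference)

# Hypothetical usage: concatenate the low-light image with its priori
# channels before feeding the parameter generation model.
low = torch.rand(3, 256, 256)
ref = torch.rand(3, 256, 256)
model_input = torch.cat([low, priori_channels(ref)], dim=0)  # (6, H, W)
```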
3.2. Parameter Generation
- A linear transformation of the input feature at point p is performed by the top fully connected layer (Linear) to obtain the query feature, as shown in Equation (3).
- The bottom fully connected layer linearly transforms the same feature, mapping its C channels to an 8-channel positional feature, as shown in Equation (5). The eight values encode two coordinates for each of four reference points related to point p: the cyan, green, pink, and yellow points.
- The weight of each reference-point feature is calculated from the cosine similarity between that feature and the query feature, as shown in Equation (6).
- The weighted sum of the four reference-point features is computed to obtain the output feature at the red point p, as shown in Equation (7); see the sketch after this list.
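The following is a minimal sketch of the GA Block aggregation in Equations (3)–(7), assuming PyTorch, offsets expressed in normalized grid coordinates, and a softmax over the cosine similarities; the layer names, the offset parameterization, and the exact normalization of Equation (6) are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GABlockSketch(nn.Module):
    """Illustrative sketch of the GA Block aggregation (Eqs. (3)-(7)).

    For each pixel p, a 'top' linear layer produces a query feature, a
    'bottom' linear layer predicts 8 numbers = (x, y) offsets for four
    reference points, features are sampled at those points, and the
    output is their cosine-similarity-weighted sum.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.linear_top = nn.Linear(channels, channels)  # query feature, Eq. (3)
        self.linear_below = nn.Linear(channels, 8)       # 4 points x (dx, dy), Eq. (5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        b, c, h, w = x.shape
        feat = x.permute(0, 2, 3, 1)                     # (B, H, W, C)
        query = self.linear_top(feat)                    # (B, H, W, C)
        offsets = self.linear_below(feat).view(b, h, w, 4, 2)

        # Base sampling grid in normalized [-1, 1] coordinates (x, y order).
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        base = torch.stack((gx, gy), dim=-1)             # (H, W, 2)

        samples, sims = [], []
        for k in range(4):
            # Offset the grid to the k-th reference point and sample there.
            grid = base.unsqueeze(0) + offsets[..., k, :]          # (B, H, W, 2)
            s = F.grid_sample(x, grid, align_corners=True)         # (B, C, H, W)
            s = s.permute(0, 2, 3, 1)                              # (B, H, W, C)
            samples.append(s)
            sims.append(F.cosine_similarity(s, query, dim=-1))     # (B, H, W)

        # Softmax over the four reference points (our assumption for Eq. (6)).
        weights = torch.softmax(torch.stack(sims, dim=-1), dim=-1)  # (B, H, W, 4)
        out = torch.zeros_like(query)
        for k in range(4):
            out = out + weights[..., k : k + 1] * samples[k]        # Eq. (7)
        return out.permute(0, 3, 1, 2)                              # (B, C, H, W)
```

Because the reference points are predicted offsets rather than a fixed neighborhood, each output pixel can draw on distant regions of the feature map, which is how the GA Block lets every enhanced pixel depend on the whole low-light image.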
3.3. Enhancement Function
3.3.1. Priori Probability
3.3.2. The Shortcomings of the Current Method
3.3.3. Solutions
3.4. Training
4. Experiments
4.1. Experimental Details
4.1.1. Implementation Details
4.1.2. Datasets
4.1.3. Metrics
4.2. Ablation Study
4.2.1. The Impact of Loss Function Strategies
4.2.2. The Impact of Hyperparameter k
4.2.3. Model Structure
4.2.4. The Impact of Scale Factor
4.3. Performance Evaluation
4.3.1. Reference Evaluation
4.3.2. Unreference Evaluation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1777–1786.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Meinhardt, T.; Kirillov, A.; Leal-Taixé, L.; Feichtenhofer, C. TrackFormer: Multi-Object Tracking with Transformers. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 8834–8844.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Cheng, H.; Shi, X. A simple and effective histogram equalization approach to image enhancement. Digit. Signal Process. 2004, 14, 158–170.
- Land, E.H.; McCann, J.J. Lightness and Retinex Theory. J. Opt. Soc. Am. 1971, 61, 1–11.
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond Brightening Low-light Images. Int. J. Comput. Vis. 2021, 129, 1013–1037.
- Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
- Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. In Proceedings of the British Machine Vision Conference, Newcastle, UK, 3–6 September 2018; British Machine Vision Association: Durham, UK, 2018.
- Yang, W.; Wang, W.; Huang, H.; Wang, S.; Liu, J. Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement. IEEE Trans. Image Process. 2021, 30, 2072–2086.
- Hai, J.; Xuan, Z.; Yang, R.; Hao, Y.; Zou, F.; Lin, F.; Han, S. R2RNet: Low-light image enhancement via Real-low to Real-normal Network. J. Vis. Commun. Image Represent. 2023, 90, 103712.
- Cai, J.; Gu, S.; Zhang, L. Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images. IEEE Trans. Image Process. 2018, 27, 2049–2062.
- Gharbi, M.; Chen, J.; Barron, J.T.; Hasinoff, S.W.; Durand, F. Deep bilateral learning for real-time image enhancement. ACM Trans. Graph. 2017, 36, 1–12.
- Wang, R.; Zhang, Q.; Fu, C.W.; Shen, X.; Zheng, W.S.; Jia, J. Underexposed Photo Enhancement Using Deep Illumination Estimation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 6842–6850.
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the Darkness: A Practical Low-light Image Enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, MM ’19, Nice, France, 21–25 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1632–1640.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976.
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251.
- Liu, Y.; Wang, Z.; Zeng, Y.; Zeng, H.; Zhao, D. PD-GAN: Perceptual-Details GAN for Extremely Noisy Low Light Image Enhancement. In Proceedings of the ICASSP 2021—2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 1840–1844.
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
- Fan, G.D.; Fan, B.; Gan, M.; Chen, G.Y.; Chen, C.L.P. Multiscale Low-Light Image Enhancement Network with Illumination Constraint. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7403–7417.
- Li, C.; Guo, C.; Loy, C.C. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4225–4238.
- Luo, W.; Li, Y.; Urtasun, R.; Zemel, R. Understanding the effective receptive field in deep convolutional neural networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, Barcelona, Spain, 5–10 December 2016; Curran Associates Inc.: Red Hook, NY, USA, 2016; pp. 4905–4913.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2Net: A New Multi-Scale Backbone Architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 652–662.
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer Nature: Cham, Switzerland, 2020; pp. 213–229.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 6000–6010.
- Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; Dai, J. Deformable DETR: Deformable Transformers for End-to-End Object Detection. In Proceedings of the 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Vienna, Austria, 3–7 May 2021.
- Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable Convolutional Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 764–773.
- Zhu, X.; Hu, H.; Lin, S.; Dai, J. Deformable ConvNets V2: More Deformable, Better Results. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 9300–9308.
- Wang, W.; Wei, C.; Yang, W.; Liu, J. GLADNet: Low-Light Enhancement Network with Global Awareness. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018; pp. 751–755.
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vision Graph. Image Process. 1987, 39, 355–368.
- Bennett, E.P.; McMillan, L. Video enhancement using per-pixel virtual exposures. In Proceedings of the ACM SIGGRAPH 2005 Papers, SIGGRAPH ’05, Los Angeles, CA, USA, 31 July–4 August 2005; Association for Computing Machinery: New York, NY, USA, 2005; pp. 845–852.
- Yuan, L.; Sun, J. Automatic Exposure Correction of Consumer Photographs. In Proceedings of the Computer Vision—ECCV 2012, Florence, Italy, 7–13 October 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 771–785.
- Rahman, Z.; Jobson, D.; Woodell, G. Multi-scale retinex for color image enhancement. In Proceedings of the 3rd IEEE International Conference on Image Processing, Lausanne, Switzerland, 19 September 1996; Volume 3, pp. 1003–1006.
- Jobson, D.; Rahman, Z.; Woodell, G. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
- Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–3 October 2023; pp. 12504–12513.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer Nature: Cham, Switzerland, 2015; pp. 234–241.
- Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019.
- Lee, C.; Lee, C.; Kim, C.S. Contrast Enhancement Based on Layered Difference Representation of 2D Histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384.
- Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
- Ma, L.; Jin, D.; An, N.; Liu, J.; Fan, X.; Luo, Z.; Liu, R. Bilevel Fast Scene Adaptation for Low-Light Image Enhancement. Int. J. Comput. Vis. 2023.
- Wang, Y.; Wan, R.; Yang, W.; Li, H.; Chau, L.P.; Kot, A. Low-Light Image Enhancement with Normalizing Flow. Proc. AAAI Conf. Artif. Intell. 2022, 36, 2604–2612.
| Loss | LOLv1 PSNR↑ | LOLv1 SSIM↑ | LOLv1 NIQE↓ | LOLv2-Real PSNR↑ | LOLv2-Real SSIM↑ | LOLv2-Real NIQE↓ | LOLv2-Synthetic PSNR↑ | LOLv2-Synthetic SSIM↑ | LOLv2-Synthetic NIQE↓ |
|---|---|---|---|---|---|---|---|---|---|
|  | 25.015 | 75.102 | 4.444 | 25.825 | 73.812 | 5.357 | 28.326 | 91.380 | 3.869 |
| + | 25.775 | 81.222 | 3.572 | 26.828 | 80.791 | 4.381 | 29.488 | 93.598 | 3.911 |
| Priori Channels | GA | Priori Probability | LOLv1 PSNR↑ | LOLv1 SSIM↑ | LOLv1 NIQE↓ | LOLv2-Real PSNR↑ | LOLv2-Real SSIM↑ | LOLv2-Real NIQE↓ | LOLv2-Synthetic PSNR↑ | LOLv2-Synthetic SSIM↑ | LOLv2-Synthetic NIQE↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| baseline |  |  | 21.002 | 77.737 | 3.675 | 22.028 | 76.907 | 4.290 | 21.495 | 89.793 | 3.879 |
| ✓ |  |  | 22.408 | 76.945 | 3.823 | 23.877 | 76.815 | 4.382 | 22.453 | 90.580 | 3.818 |
| ✓ | ✓ |  | 24.376 | 79.162 | 3.819 | 25.074 | 78.529 | 4.394 | 29.115 | 93.661 | 3.895 |
| ✓ | ✓ | ✓ | 25.775 | 81.222 | 3.572 | 26.828 | 80.791 | 4.381 | 29.488 | 93.598 | 3.911 |
| Method | MACs (G) | Params (M) | FPS | LOLv1 PSNR↑ | LOLv1 SSIM↑ | LOLv1 NIQE↓ | LOLv2-Real PSNR↑ | LOLv2-Real SSIM↑ | LOLv2-Real NIQE↓ | LOLv2-Synthetic PSNR↑ | LOLv2-Synthetic SSIM↑ | LOLv2-Synthetic NIQE↓ | LSRW-Huawei PSNR↑ | LSRW-Huawei SSIM↑ | LSRW-Huawei NIQE↓ | LSRW-Nikon PSNR↑ | LSRW-Nikon SSIM↑ | LSRW-Nikon NIQE↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Low | – | – | – |  |  | 5.72 |  |  | 6.01 |  |  | 4.09 |  |  | 3.16 |  |  | 3.45 |
| Reference | – | – | – |  |  | 4.25 |  |  | 4.73 |  |  | 4.19 |  |  | 3.44 |  |  | 4.24 |
| BL [48] | 150.799 | 1.606 | 146.895 | 10.31 | 40.13 | 7.31 | 12.89 | 43.53 | 7.73 | 13.58 | 61.44 | 4.74 | 11.78 | 31.24 | 3.06 | 13.43 | 36.19 | 3.85 |
| DeepUPE [18] | 45.935 | 0.079 | 2426.394 | 12.71 | 45.04 | 7.79 | 14.60 | 47.02 | 8.23 | 13.82 | 60.50 | 4.37 | 13.63 | 36.25 | 3.00 | 13.36 | 35.97 | 3.64 |
| EnlightenGAN [24] | – | – | 42.983 | 17.48 | 65.15 | 4.89 | 18.64 | 67.67 | 5.50 | 16.57 | 77.15 | 3.83 | 17.85 | 48.92 | 2.94 | 15.92 | 42.09 | 3.18 |
| GLADNet [35] | 200.6 | 12.15 | 79.839 | 19.72 | 68.20 | 6.80 | 19.82 | 68.47 | 7.73 | 18.11 | 82.59 | 3.99 | 19.00 | 49.45 | 2.96 | 16.63 | 44.07 | 3.36 |
| KinD [19] | 61.01 | 114.35 | 24.385 | 17.64 | 77.13 | 3.90 | 20.58 | 81.78 | 4.14 | 17.27 | 75.78 | 4.25 | 17.03 | 49.88 | 2.64 | 15.47 | 44.04 | 3.46 |
| KinD++ [7] | 1050 | 17.42 | 14.844 | 17.75 | 75.82 | 4.01 | 17.66 | 76.09 | 4.20 | 17.48 | 78.57 | 4.76 | 16.97 | 41.15 | 3.02 | 14.74 | 36.80 | 3.72 |
| LIME [8] | – | – | 3.463 | 16.05 | 48.60 | 8.79 | 17.16 | 48.02 | 9.31 | 16.37 | 73.74 | 4.76 | 17.13 | 39.31 | 3.44 | 14.64 | 34.99 | 3.61 |
| MELLEN-IC [25] | 2532 | 8.275 | 1.432 | 17.23 | 75.44 | 3.31 | 20.75 | 78.98 | 3.32 | 21.57 | 88.08 | 3.98 | 18.22 | 53.48 | 2.64 | 16.71 | 45.08 | 3.41 |
| Retinexformer [42] | – | – | 9.293 | 25.15 | 84.34 | 2.97 | 22.79 | 83.86 | 3.59 | 25.67 | 92.82 | 3.94 | 16.25 | 49.48 | 2.60 | 15.56 | 42.38 | 3.27 |
| RetinexNet [13] | – | – | 9.342 | 16.77 | 42.50 | 9.73 | 16.10 | 40.71 | 10.56 | 17.14 | 75.64 | 5.69 | 16.82 | 38.50 | 4.33 | 13.49 | 28.94 | 4.27 |
| Zero DCE [1] | 517.129 | 28.539 | 24.091 | 14.86 | 56.24 | 8.22 | 18.06 | 57.95 | 8.77 | 17.76 | 81.40 | 4.36 | 16.41 | 46.19 | 3.15 | 15.05 | 41.37 | 3.40 |
| Zero DCE++ [26] | 0.109 | 0.594 | 123.976 | 17.04 | 56.25 | 8.46 | 18.14 | 55.18 | 9.06 | 18.64 | 83.52 | 4.55 | 18.12 | 45.55 | 3.27 | 15.10 | 39.20 | 3.56 |
| LLFlow [49] | – | – | 0.761 | 24.06 | 86.02 | 4.07 | 26.43 | 90.26 | 4.53 | 19.22 | 82.41 | 4.66 | 20.09 | 55.07 | 2.88 | 16.88 | 45.66 | 3.73 |
| Priori DCE | 834.2 | 36.21 | 11.673 | 25.77 | 81.22 | 3.57 | 26.83 | 80.79 | 4.38 | 29.49 | 93.60 | 3.91 | 21.39 | 56.76 | 3.13 | 18.33 | 48.52 | 3.40 |
| Method | DICM | LIME | MEF | NPE | Avg. | DICM | LIME | MEF | NPE | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| Low | 3.317 | 3.566 | 3.256 | 3.187 | 3.332 |  |  |  |  |  |
| BL [48] | 4.078 | 4.216 | 3.383 | 4.424 | 4.025 | 1.534 | 1.933 | 0.665 | 1.877 | 1.502 |
| DeepUPE [18] | 3.542 | 3.793 | 3.199 | 3.591 | 3.531 | 0.985 | 2.094 | 0.546 | 1.258 | 1.221 |
| EnlightenGAN [24] | 3.056 | 3.380 | 2.895 | 3.368 | 3.175 | 0.823 | 1.514 | 0.466 | 1.241 | 1.011 |
| GLADNet [35] | 3.276 | 3.902 | 3.179 | 3.271 | 3.407 | 0.955 | 2.608 | 0.625 | 0.986 | 1.293 |
| KinD [19] | 3.351 | 4.357 | 3.378 | 3.269 | 3.589 | 1.017 | 3.794 | 0.464 | 0.934 | 1.552 |
| KinD++ [7] | 3.280 | 4.853 | 3.471 | 3.636 | 3.810 | 1.000 | 4.466 | 0.473 | 1.220 | 1.790 |
| LIME [8] | 3.471 | 3.835 | 3.488 | 3.470 | 3.566 | 1.179 | 2.364 | 0.804 | 1.258 | 1.401 |
| MELLEN-IC [25] | 2.911 | 3.503 | 3.097 | 3.087 | 3.150 | 0.784 | 1.673 | 0.415 | 0.783 | 0.914 |
| Retinexformer [42] | 3.353 | 3.705 | 3.139 | 3.174 | 3.343 | 0.972 | 1.468 | 0.753 | 1.014 | 1.052 |
| RetinexNet [13] | 4.315 | 4.916 | 4.904 | 4.388 | 4.631 | 1.715 | 3.557 | 1.475 | 1.496 | 2.061 |
| Zero DCE [1] | 3.430 | 3.786 | 3.309 | 3.433 | 3.489 | 1.195 | 2.093 | 0.861 | 1.223 | 1.343 |
| Zero DCE++ [26] | 3.543 | 4.092 | 3.568 | 3.603 | 3.701 | 1.259 | 2.489 | 0.953 | 1.249 | 1.488 |
| LLFlow [49] | 3.368 | 3.891 | 3.515 | 3.556 | 3.583 | 0.885 | 2.007 | 0.542 | 0.877 | 1.078 |
| Priori DCE (Ours) | 3.155 | 3.153 | 2.848 | 2.968 | 3.031 | 0.872 | 1.174 | 0.603 | 0.708 | 0.839 |