GS-AGC: An Adaptive Glare Suppression Algorithm Based on Regional Brightness Perception
Abstract
1. Introduction
- A regional luminance perception algorithm was used to delineate the luminance regions of single and multiple light sources, guiding the subsequent image processing (a minimal illustrative sketch follows this list).
- The GS-AGC algorithm, based on regional brightness perception, was proposed to effectively suppress glare from single and multiple light sources in complex real-life scenes. The algorithm balances the brightness of low-light images and offers reference ideas for related research.
- Building on this foundation, the GS-AGC algorithm was accelerated in hardware: dedicated acceleration modules were designed and implemented on a field programmable gate array (FPGA).
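As a concrete starting point, the following is a minimal Python sketch of how regional brightness perception could segment an image into dark, normal, and glare regions; the thresholds, function name, and OpenCV-based implementation are assumptions for illustration and do not reproduce the authors' exact method.

```python
# Hypothetical sketch of regional brightness perception: label dark, normal
# and glare regions from the V (value) channel of an HSV image. The thresholds
# t_low / t_high and the function name are illustrative, not the paper's values.
import cv2
import numpy as np

def perceive_brightness_regions(bgr, t_low=60, t_high=200):
    """Return a label map (0 = dark, 1 = normal, 2 = glare) and the light-source count."""
    v = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2]
    labels = np.ones_like(v, dtype=np.uint8)       # default: normal region
    labels[v < t_low] = 0                          # dark region
    labels[v > t_high] = 2                         # glare region
    # Connected components distinguish single from multiple light sources.
    n_labels, _ = cv2.connectedComponents((labels == 2).astype(np.uint8))
    return labels, n_labels - 1                    # subtract the background label
```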
2. Related Work
3. An Adaptive Glare Suppression Algorithm Based on Regional Brightness Perception
3.1. Regional Brightness Perception
Algorithm 1 Brightness threshold acquisition
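The listing below is an illustrative stand-in for the brightness-threshold acquisition step: it derives a dark and a bright threshold from the luminance histogram so that fixed fractions of pixels fall into each extreme region. The fractions and the function name are assumptions, not the paper's exact procedure.

```python
# Illustrative stand-in for Algorithm 1; the authors' exact rule is not
# reproduced here. Dark/bright thresholds are taken from the luminance
# histogram so that fixed fractions of pixels fall into the extreme regions.
import numpy as np

def acquire_brightness_thresholds(luma, dark_frac=0.20, bright_frac=0.05):
    """luma: 2-D uint8 luminance image. Returns (t_low, t_high)."""
    hist = np.bincount(luma.ravel(), minlength=256).astype(np.float64)
    cdf = np.cumsum(hist) / hist.sum()
    t_low = int(np.searchsorted(cdf, dark_frac))           # below this: dark region
    t_high = int(np.searchsorted(cdf, 1.0 - bright_frac))  # above this: glare region
    return t_low, t_high
```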
3.2. GS-AGC
Algorithm 2 GS-AGC
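As a rough illustration of the GS-AGC mapping, the sketch below applies gamma > 1 to glare-region pixels and gamma < 1 to dark-region pixels; the fixed gamma values are placeholders for the adaptive values the algorithm would actually compute.

```python
# Minimal sketch of the GS-AGC idea under stated assumptions: glare-region
# pixels receive gamma > 1 (suppression) and dark-region pixels gamma < 1
# (enhancement). The fixed gamma values below are illustrative only.
import numpy as np

def gs_agc(luma, labels, gamma_dark=0.6, gamma_glare=1.8):
    """luma: uint8 luminance; labels: 0 dark, 1 normal, 2 glare (see Section 3.1)."""
    x = luma.astype(np.float32) / 255.0
    out = x.copy()
    out[labels == 0] = np.power(x[labels == 0], gamma_dark)   # brighten dark regions
    out[labels == 2] = np.power(x[labels == 2], gamma_glare)  # suppress glare regions
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```

In the full algorithm the per-region gamma values would be chosen adaptively rather than fixed as here; the reference list suggests a golden-section search may be involved in that choice.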
3.3. FPGA Implementation of the Algorithm
3.3.1. Image Processing Framework on FPGA Platforms
3.3.2. Modular Design of the Algorithm
- (1) Memory access module
- (2) Parameter cache module
- (3) Image processing module
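A purely illustrative software analogue of these three modules is given below to show how they would interact in a frame-processing pipeline; the class and method names are hypothetical, and the actual implementation is register-transfer-level logic on the FPGA.

```python
# Purely illustrative software analogue of the three modules; the real design
# is RTL on a ZYNQ FPGA, and all class and method names here are hypothetical.
class MemoryAccess:
    """Streams image rows in and writes processed rows back."""
    def __init__(self, image):
        self.image = image
    def read_row(self, r):
        return self.image[r]
    def write_row(self, r, row):
        self.image[r] = row

class ParameterCache:
    """Holds per-frame parameters (thresholds and gamma values) computed once."""
    def __init__(self, t_low, t_high, gamma_dark, gamma_glare):
        self.t_low, self.t_high = t_low, t_high
        self.gamma_dark, self.gamma_glare = gamma_dark, gamma_glare

class ImageProcessing:
    """Applies the region-wise gamma mapping one row at a time."""
    def process_row(self, row, p):
        out = list(row)
        for i, v in enumerate(row):
            x = v / 255.0
            if v < p.t_low:
                out[i] = int(255 * x ** p.gamma_dark)    # brighten dark pixels
            elif v > p.t_high:
                out[i] = int(255 * x ** p.gamma_glare)   # suppress glare pixels
        return out
```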
Algorithm 3 Calculate loop
Input: in_channel, rows, cols. Output: gamma_val_1
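The sketch below mirrors the stated interface of Algorithm 3 (inputs in_channel, rows, cols; output gamma_val_1). Its body, which derives the gamma value from the channel mean with the common log(0.5)/log(mean) heuristic, is an assumption and may differ from the loop actually implemented in hardware.

```python
# Hedged reconstruction of the "calculate loop" interface of Algorithm 3. The
# body is an assumption: it derives gamma_val_1 from the mean of the input
# channel using the common gamma = log(0.5) / log(mean) heuristic, which may
# differ from the loop implemented in hardware.
import math

def calculate_loop(in_channel, rows, cols):
    total = 0
    for r in range(rows):            # loop structure mirrors the rows/cols
        for c in range(cols):        # inputs listed for Algorithm 3
            total += in_channel[r][c]
    mean = max(total / (rows * cols * 255.0), 1e-6)    # normalised mean in (0, 1]
    gamma_val_1 = math.log(0.5) / math.log(mean) if mean < 1.0 else 1.0
    return gamma_val_1
```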
4. Experimental Results and Analysis
4.1. Experimental Preparation
4.1.1. Experimental Dataset
4.1.2. Simulation Experiment Environment
4.2. Hardware Experimental Verification
4.3. Experimental Results
4.3.1. Global Image Quality Evaluation
4.3.2. Local Image Quality Evaluation
4.3.3. Road Pedestrian Detection
4.3.4. Hardware Energy Consumption and Real-Time Comparison Test
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Jiang, X.; Yao, H.; Liu, D. Nighttime image enhancement based on image decomposition. SIViP 2019, 13, 189–197. [Google Scholar] [CrossRef]
- Tao, R.; Zhou, T.; Qiao, J. Improved Retinex for low illumination image enhancement of nighttime traffic. In Proceedings of the 2022 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI), Shijiazhuang, China, 22–24 July 2022; pp. 226–229. [Google Scholar] [CrossRef]
- Mandal, G.; Bhattacharya, D.; De, P. Real-time automotive night-vision system for drivers to inhibit headlight glare of the oncoming vehicles and enhance road visibility. J. Real-Time Image Process. 2021, 18, 2193–2209. [Google Scholar] [CrossRef]
- Wang, W.; Wang, A.; Liu, C. Variational single nighttime image haze removal with a gray haze-line prior. IEEE Trans. Image Process. 2022, 31, 1349–1363. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Yan, Z.; Tan, J.; Li, Y. Multi-purpose Oriented Single Nighttime Image Haze Removal Based on Unified Variational Retinex Model. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 1643–1657. [Google Scholar] [CrossRef]
- Liu, Y.; Yan, Z.; Wu, A.; Ye, T.; Li, Y. Nighttime Image Dehazing Based on Variational Decomposition Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 21–24 June 2022; pp. 640–649. [Google Scholar]
- Tan, L.; Wang, S.; Zhang, L. Nighttime Haze Removal Using Saliency-Oriented Ambient Light and Transmission Estimation. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 1764–1768. [Google Scholar]
- Yang, C.; Ke, X.; Hu, P.; Li, Y. NightDNet: A Semi-Supervised Nighttime Haze Removal Frame Work for Single Image. In Proceedings of the 2021 3rd International Academic Exchange Conference on Science and Technology Innovation (IAECST), Guangzhou, China, 10–12 December 2021; pp. 716–719. [Google Scholar]
- Afifi, M.; Derpanis, K.G.; Ommer, B.; Brown, M.S. Learning multi-scale photo exposure correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 9157–9167. [Google Scholar]
- Zhou, Z.; Shi, Z.; Ren, W. Linear Contrast Enhancement Network for Low-illumination Image Enhancement. IEEE Trans. Instrum. Meas. 2022, 72, 5002916. [Google Scholar] [CrossRef]
- Yu, N.; Li, J.; Hua, Z. FLA-Net: Multi-stage modular network for low-light image enhancement. Vis. Comput. 2023, 39, 1251–1270. [Google Scholar] [CrossRef]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
- Yu, W.; Zhao, L.; Zhong, T. Unsupervised Low-Light Image Enhancement Based on Generative Adversarial Network. Entropy 2023, 25, 932. [Google Scholar] [CrossRef] [PubMed]
- Kozłowski, W.; Szachniewicz, M.; Stypułkowski, M.; Zięba, M. Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming. arXiv 2023, arXiv:2310.09633. [Google Scholar]
- Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. Band representation-based semi-supervised low-light image enhancement: Bridging the gap between signal fidelity and perceptual quality. IEEE Trans. Image Process. 2021, 30, 3461–3473. [Google Scholar] [CrossRef]
- Li, W.; Xiong, B.; Ou, Q.; Long, X.; Zhu, J.; Chen, J.; Wen, S. Zero-Shot Enhancement of Low-Light Image Based on Retinex Decomposition. arXiv 2023, arXiv:2311.02995. [Google Scholar]
- Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. arXiv 2021, arXiv:2103.00860. [Google Scholar] [CrossRef] [PubMed]
- Jin, Y.; Yang, W.; Tan, R.T. Unsupervised night image enhancement: When layer decomposition meets light-effects suppression. In Proceedings of the Computer Vision–ECCV 2022, 17th European Conference, Tel Aviv, Israel, 23–27 October 2022; Proceedings, Part XXXVII. Springer Nature: Cham, Switzerland, 2022; pp. 404–421. [Google Scholar]
- Sharma, A.; Tan, R.T. Nighttime visibility enhancement by increasing the dynamic range and suppression of light effects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 11977–11986. [Google Scholar]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 14–19 June 2020; pp. 1780–1789. [Google Scholar]
- Jeong, I.; Lee, C. An optimization-based approach to gamma correction parameter estimation for low-light image enhancement. Multimed. Tools Appl. 2021, 80, 18027–18042. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar]
- Stark, J.A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans. Image Process. 2000, 9, 889–896. [Google Scholar] [CrossRef]
- Braun, G.J. Visual display characterization using flicker photometry techniques. In Human Vision and Electronic Imaging VIII; Proc. SPIE 2003, 5007, 199–209. [Google Scholar]
- Haines, E.; Hoffman, N. Real-Time Rendering; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
- Li, C.; Tang, S.; Yan, J.; Zhou, T. Low-light image enhancement via pair of complementary gamma functions by fusion. IEEE Access 2020, 8, 169887–169896. [Google Scholar] [CrossRef]
- Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Normalised gamma transformation-based contrast-limited adaptive histogram equalisation with colour correction for sand–dust image enhancement. IET Image Process. 2020, 14, 747–756. [Google Scholar] [CrossRef]
- Bee, M.A.; Vélez, A.; Forester, J.D. Sound level discrimination by gray treefrogs in the presence and absence of chorus-shaped noise. J. Acoust. Soc. Am. 2012, 131, 4188–4195. [Google Scholar] [CrossRef]
- Drösler, J. An n-dimensional Weber law and the corresponding Fechner law. J. Math. Psychol. 2000, 44, 330–335. [Google Scholar] [CrossRef]
- Wang, W.; Chen, Z.; Yuan, X. Simple low-light image enhancement based on Weber–Fechner law in logarithmic space. Signal Process. Image Commun. 2022, 106, 116742. [Google Scholar] [CrossRef]
- Wang, W.; Chen, Z.; Yuan, X.; Wu, X. Adaptive image enhancement method for correcting low-illumination images. Inf. Sci. 2019, 496, 25–41. [Google Scholar] [CrossRef]
- Kim, D. Weighted Histogram Equalization Method adopting Weber-Fechner’s Law for Image Enhancement. J. Korea Acad.-Ind. Coop. Soc. 2014, 15, 4475–4481. [Google Scholar]
- Gonzalez, R.C.; Wintz, P. Digital Image Processing; Addison-Wesley Longman Publishing Co., Inc.: Reading, MA, USA, 1987. [Google Scholar]
- Drago, F.; Myszkowski, K.; Annen, T.; Chiba, N. Adaptive logarithmic mapping for displaying high contrast scenes. Comput. Graph. Forum 2003, 22, 419–426. [Google Scholar] [CrossRef]
- Thai, B.C.; Mokraoui, A.; Matei, B. Contrast enhancement and details preservation of tone mapped high dynamic range images. J. Vis. Commun. Image Represent. 2019, 58, 589–599. [Google Scholar] [CrossRef]
- Zhang, Y.; Di, X.; Zhang, B.; Ji, R.; Wang, C. Better than reference in low-light image enhancement: Conditional re-enhancement network. IEEE Trans. Image Process. 2021, 31, 759–772. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640. [Google Scholar]
- Wang, Y.; Yu, Y.; Yang, W.; Guo, L.; Chau, L.P.; Kot, A.C.; Wen, B. Exposurediffusion: Learning to expose for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 12438–12448. [Google Scholar]
- Huang, S.C.; Cheng, F.C.; Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2012, 22, 1032–1041. [Google Scholar] [CrossRef]
- Roh, S.; Choi, M.C.; Jeong, D.K. A Maximum Eye Tracking Clock-and-Data Recovery Scheme with Golden Section Search (GSS) Algorithm in 28-nm CMOS. In Proceedings of the 2021 18th International SoC Design Conference (ISOCC), Jeju Island, Republic of Korea, 6–9 October 2021; pp. 47–48. [Google Scholar]
- Alrajoubi, H.; Oncu, S. PV Fed Water Pump System with Golden Section Search and Incremental Conductance Algorithms. In Proceedings of the 2023 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Bucharest, Romania, 29–30 June 2023; pp. 1–4. [Google Scholar]
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 14–19 June 2020; pp. 2636–2645. [Google Scholar]
- Fan, C.M.; Liu, T.J.; Liu, K.H. Half wavelet attention on M-Net+ for low-light image enhancement. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 3878–3882. [Google Scholar]
- Gao, M.-J.; Dang, H.-S.; Wei, L.-L.; Wang, H.-L.; Zhang, X.-D. Combining global and local variation for image quality assessment. Acta Autom. Sin. 2020, 46, 2662–2671. [Google Scholar] [CrossRef]
- Varga, D. Saliency-Guided Local Full-Reference Image Quality Assessment. Signals 2022, 3, 483–496. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv5, Improved real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 14–19 June 2020. [Google Scholar]
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision–ECCV 2014, 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part V 13; Springer International Publishing: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
- Yang, W.; Yuan, Y.; Ren, W.; Liu, J.; Scheirer, W.J.; Wang, Z.; Zhang, T.; Zhong, Q.; Xie, D.; Pu, S.; et al. Advancing image understanding in poor visibility environments: A collective benchmark study. IEEE Trans. Image Process. 2020, 29, 5737–5752. [Google Scholar] [CrossRef]
- Yang, S.; Luo, P.; Loy, C.C.; Tang, X. Wider face: A face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5525–5533. [Google Scholar]
- Wang, W.; Xu, X. Lightweight CNN-Based Low-Light-Image Enhancement System on FPGA Platform. Neural Process. Lett. 2023, 55, 8023–8039. [Google Scholar] [CrossRef]
| Algorithm | PSNR | Entropy | VIF | MSE |
|---|---|---|---|---|
| Jin Y | 27.84 | 7.19 | 0.80 | 106.69 |
| HWMNet | 27.88 | 6.85 | 0.80 | 105.84 |
| Zero-DCE | 27.20 | 7.20 | 0.79 | 98.27 |
| AGC | 27.76 | 7.62 | 0.80 | 108.87 |
| Dark channel prior | 26.33 | 6.33 | 0.71 | 97.03 |
| Histogram equalization | 28.13 | 7.16 | 0.80 | 93.17 |
| Ours | 28.91 | 7.37 | 0.81 | 83.40 |
| Algorithm | PSNR | Entropy | VIF | MSE |
|---|---|---|---|---|
| Jin Y | 27.43 | 7.29 | 0.80 | 117.25 |
| HWMNet | 27.87 | 6.17 | 0.68 | 106.07 |
| Zero-DCE | 27.58 | 6.91 | 0.81 | 113.49 |
| AGC | 27.41 | 6.72 | 0.80 | 118.05 |
| Dark channel prior | 27.61 | 6.58 | 0.79 | 118.55 |
| Histogram equalization [22] | 27.82 | 6.55 | 0.81 | 107.32 |
| Ours | 28.10 | 7.74 | 0.81 | 102.59 |
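For reference, the global metrics reported in the two tables above can be computed as sketched below (VIF is omitted for brevity). The sketch assumes 8-bit greyscale inputs, and the authors' exact evaluation settings may differ.

```python
# How the global metrics in the two tables above can be computed (PSNR, MSE,
# entropy); VIF is omitted for brevity. Assumes 8-bit greyscale images and may
# differ from the authors' exact evaluation settings.
import numpy as np

def mse(ref, test):
    return float(np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(ref, test):
    m = mse(ref, test)
    return float('inf') if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)

def entropy(img):
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```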
| Dataset | Evaluation Metrics | Jin Y | HWMNet | Zero-DCE | AGC | Dark Channel Prior | Histogram Equalization | Ours |
|---|---|---|---|---|---|---|---|---|
| BDD | GLV-SIM | 0.91 | 0.93 | 0.95 | 0.96 | 0.93 | 0.96 | 0.96 |
| BDD | SG-ESSIM | 0.82 | 0.84 | 0.91 | 0.87 | 0.83 | 0.91 | 0.92 |
| LLVIP | GLV-SIM | 0.96 | 0.95 | 0.94 | 0.97 | 0.91 | 0.95 | 0.96 |
| LLVIP | SG-ESSIM | 0.89 | 0.88 | 0.88 | 0.90 | 0.87 | 0.88 | 0.91 |
| Platform | CPU | FPGA |
|---|---|---|
| Processor model | Intel Core i7 | ZYNQ |
| Process technology (nm) | 10 | 28 |
| Clock frequency (GHz) | 2.30 | 0.05 |
| Single-frame processing time (ms) | 680 | 19 |
| Frames per second (FPS) | 1.47 | 52 |
| Power consumption (W) | 49.50 | 1.67 |
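As a back-of-the-envelope check on these figures, multiplying the reported power by the single-frame processing time gives the energy per frame; the ratio below is derived here and is not a number reported in the paper.

```python
# Back-of-the-envelope energy per frame implied by the table above
# (power multiplied by single-frame processing time); not a figure reported by the authors.
cpu_energy_per_frame = 49.50 * 0.680     # ~33.7 J per frame on the Intel Core i7
fpga_energy_per_frame = 1.67 * 0.019     # ~0.032 J per frame on the ZYNQ FPGA
print(cpu_energy_per_frame / fpga_energy_per_frame)   # roughly 1000x lower energy per frame on FPGA
```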