An H-GrabCut Image Segmentation Algorithm for Indoor Pedestrian Background Removal
Abstract
1. Introduction
2. Method Construction
2.1. Range Adaptive Fast Median Filtering
2.2. YOLO-V5 Pedestrian Target Detection
2.3. MSRCR Image Enhancement Algorithm Combined with Bilateral Filtering
2.4. GrabCut Algorithm Combining Spatial Information, Chromaticity Information, and Texture Information
2.5. Algorithms Run Pseudo-Code
- Apply a range adaptive fast median filter with a 3 × 3 window to the raw input image, producing the filtered image.
- (Lines 4 and 5) Pass the filtered image, together with the pedestrian bounding-box positions returned by YOLO-V5, to the cropping function, which cuts out a pedestrian image containing only a small background area.
- (Line 15) Perform initial k-means clustering on the enhanced image to obtain the cluster centers, and compute the distance between the UV components of each pixel and the chromaticity factors of the cluster centers.
- (Line 16) Convert the enhanced image from the RGB color space to the YUV color space and compute the two-dimensional information entropy of its Y channel (Equation (25)).
- (Line 17) Incorporate the pixel spatial information, chromaticity information, and texture features of the image into the k-means clustering, and initialize the parameters of the Gaussian mixture model (GMM) with the LBP components of the defined background pixels, potential foreground pixels, and potential background pixels (Equation (27)).
- (Line 19) Combine the mask image with the filtered output image to generate the segmented image. Minimal sketches of the enhancement, feature-extraction, and segmentation steps are given after the pseudo-code listing.
Algorithm 1 BIL-MSRCR and H-GrabCut
Input: Camera_Image, b, and the bounding-box coordinates (x, y, w, h) returned by YOLO-V5
Output: Mask_Image and the segmented image Seg_Image
1: Initialize the number of Retinex scales scale;
2: Initialize the number of bilateral filtering iterations N;
3: while Camera_Image and the YOLO-V5 detection result are not empty do
4:   I_filter = Range_filtering(Camera_Image);
5:   I_crop = Cut_Process(I_filter, x, y, w, h);
6:   for i = 0 to scale step 1:
7:     I_log = Convert_log(I_crop);
8:     for k = 0 to N step 1:
9:       I_bil = BIL_Process(I_crop, i, k);
10:      I_bil_log = Convert_log(I_bil, b);
11:      I_sub = Sub(I_log, I_bil_log, i);
12:      I_enh = Convert_Scale(I_sub);
13:    end for
14:  end for
15:  D = Extract_YUV(I_enh);
16:  H = Extract_entropy(I_enh);
17:  L = Extract_LBP(I_enh);
18:  Mask_Image = HGrabcut_process(I_enh, D, H, L);
19:  Seg_Image = add(Mask_Image, I_filter);
20: end while
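The enhancement loop (Lines 6–14) follows the usual multi-scale Retinex pattern, with a bilateral filter standing in for the Gaussian surround so that edges are preserved. The Python/OpenCV fragment below is a minimal sketch of that pattern, not the authors' BIL-MSRCR implementation: the scale values, the sigma parameters, and the omission of the MSRCR color-restoration and gain/offset step are simplifications made here.

```python
import cv2
import numpy as np

def bil_retinex(image, sigmas=(15, 80, 250), sigma_color=75):
    """Multi-scale Retinex with a bilateral filter as the surround function.
    A sketch of the general idea only; parameter values are illustrative."""
    img = image.astype(np.float32) + 1.0          # avoid log(0)
    log_img = np.log(img)                         # Convert_log on the input crop
    acc = np.zeros_like(log_img)
    for sigma_s in sigmas:                        # one pass per Retinex scale
        # BIL_Process: edge-preserving estimate of the illumination component
        illum = cv2.bilateralFilter(image, d=9,
                                    sigmaColor=sigma_color, sigmaSpace=sigma_s)
        # Sub: reflectance = log(image) - log(illumination)
        acc += log_img - np.log(illum.astype(np.float32) + 1.0)
    acc /= len(sigmas)
    # Convert_Scale: stretch the log-domain result back to an 8-bit image
    return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```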
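Lines 15–17 gather the cues that H-GrabCut feeds into its model initialization: the distance of each pixel's UV chromaticity to the k-means cluster centers, the two-dimensional entropy of the Y channel (Equation (25)), and LBP texture. The sketch below shows one plausible shape for these computations using OpenCV and scikit-image; the number of clusters, the 3 × 3 neighborhood used for the entropy histogram, and the uniform-LBP settings are assumptions, and the paper's exact definitions in Equations (25) and (27) take precedence.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def extract_features(enhanced_bgr, k=2):
    """Rough sketch of the cues in Lines 15-17: chromaticity distance to
    k-means centers in UV, 2-D entropy of the Y channel, and LBP texture."""
    yuv = cv2.cvtColor(enhanced_bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)

    # Line 15: k-means on the UV chromaticity plane, then per-pixel distance
    uv = np.float32(np.stack([u.ravel(), v.ravel()], axis=1))
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(uv, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    D = np.linalg.norm(uv - centers[labels.ravel()], axis=1).reshape(y.shape)

    # Line 16: 2-D entropy of Y from the joint histogram of (pixel, 3x3 mean)
    mean_y = cv2.blur(y, (3, 3))
    hist = cv2.calcHist([y, mean_y], [0, 1], None, [256, 256], [0, 256, 0, 256])
    p = hist / hist.sum()
    H = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    # Line 17: uniform LBP texture map of the Y channel
    L = local_binary_pattern(y, P=8, R=1, method="uniform")
    return D, H, L
```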
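Lines 18 and 19 run the improved GrabCut and then apply the resulting mask to the filtered image. The fragment below uses OpenCV's stock cv2.grabCut seeded with the YOLO-V5 rectangle directly on the filtered frame, which optimizes a color-only GMM; it therefore illustrates the workflow of Lines 18–19 but not the chromaticity, entropy, and texture terms that distinguish H-GrabCut, and the function name and five-iteration count are choices made here.

```python
import cv2
import numpy as np

def segment_pedestrian(filtered_img, box):
    """GrabCut seeded with a YOLO-V5 bounding box (x, y, w, h).
    Stock OpenCV GrabCut, used here only to illustrate Lines 18-19."""
    mask = np.zeros(filtered_img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GMM state for background
    fgd_model = np.zeros((1, 65), np.float64)   # internal GMM state for foreground
    cv2.grabCut(filtered_img, mask, box, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)
    # definite/probable foreground becomes 1, everything else 0
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    segmented = filtered_img * fg[:, :, np.newaxis]   # Line 19: mask applied to filtered image
    return fg * 255, segmented

# Example call with a hypothetical detection box:
# mask_img, seg_img = segment_pedestrian(filtered_frame, (x, y, w, h))
```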
3. Results
3.1. Experimental Analysis of Image Enhancement
3.2. Experimental Analysis of Segmentation Algorithm
3.3. Contrast Experiment
3.4. Segmentation Result Analysis
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).