Object Detection for Construction Waste Based on an Improved YOLOv5 Model
Abstract
1. Introduction
2. Materials and Methods
2.1. YOLOv5 Architecture
2.1.1. YOLOv5 Backbone
2.1.2. YOLOv5 Neck
2.1.3. YOLOv5 Head
2.2. Improved YOLOv5
2.2.1. CBAM-CSPDarknet53
2.2.2. SimSPPF (Simplified SPPF)
2.2.3. Multi-Scale Detection
2.3. Dataset Construction and Evaluation Index
2.3.1. Dataset
2.3.2. Model Performance Evaluation Index
2.3.3. Experimental Platform and Parameter Setting
3. Results
3.1. Comparison of Experimental Results on a Public Dataset
3.2. Ablation Study
3.3. Contrast Experiment
4. Conclusions
- (1)
- An improved YOLOv5 can be obtained through fourth-scale feature fusion, a shallow detection layer, CBAM and SimSPPF. Adding CBAM and SimSPPF to the backbone of YOLOv5 strengthens the features of small and mutually occluded objects and improves detection performance and accuracy, thereby enhancing the generalization ability and robustness of the model. A fourth-scale feature fusion branch is added to the feature fusion part of the Neck and a shallow detection layer is added to the Head, both of which aid in detecting small and mutually occluded objects.
- (2)
- Compared with the conventional Faster-RCNN, YOLOv3, YOLOv4 and YOLOv7 models, the proposed model achieves higher detection accuracy, reaching an mAP of 0.9480, which verifies the accuracy and effectiveness of the improved YOLOv5 model.
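The excerpt names CBAM and SimSPPF but does not spell out their structure. Below is a minimal PyTorch sketch: CBAM follows the sequential channel-then-spatial attention of Woo et al. (ECCV 2018), and SimSPPF is sketched under the assumption that "simplified" means replacing YOLOv5's SiLU activations with ReLU in the SPPF block; all layer sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: a shared MLP over global avg- and max-pooled features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """Spatial attention: a 7x7 conv over channel-wise mean and max maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        mx, _ = torch.max(x, dim=1, keepdim=True)
        avg = torch.mean(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([mx, avg], dim=1)))


class CBAM(nn.Module):
    """Sequential channel-then-spatial attention; output shape equals input shape."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)


class SimSPPF(nn.Module):
    """SPPF with three cascaded 5x5 max-pools (stride 1, so spatial size is kept);
    the 1x1 convs use ReLU instead of SiLU -- the assumed 'simplification'."""
    def __init__(self, c_in: int, c_out: int, pool: int = 5):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = nn.Sequential(nn.Conv2d(c_in, c_mid, 1, bias=False),
                                 nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True))
        self.cv2 = nn.Sequential(nn.Conv2d(c_mid * 4, c_out, 1, bias=False),
                                 nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(pool, stride=1, padding=pool // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))


x = torch.randn(1, 64, 32, 32)
print(CBAM(64)(x).shape)          # torch.Size([1, 64, 32, 32])
print(SimSPPF(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
```

Both modules are drop-in: CBAM can be inserted after a backbone stage without changing tensor shapes, and SimSPPF replaces the SPPF block at the end of the backbone.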
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Configuration Name | Parameter |
|---|---|
| Operating System | Windows 10 |
| GPU | NVIDIA GeForce RTX 3060 Ti |
| CPU | Intel(R) Core(TM) i5-10400F @ 2.90 GHz |
| Memory | 16 GB |
| Deep Learning Framework | PyTorch 1.8.0 |
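When reproducing the experiments, a quick sanity check confirms the framework version and whether the listed GPU is visible to PyTorch (a generic check, not part of the paper's code):

```python
import torch

# Report the training environment: framework version and visible accelerator.
print("PyTorch:", torch.__version__)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("GPU: none detected; training would fall back to CPU")
```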
| Method | Training Dataset | Test Dataset | mAP |
|---|---|---|---|
| YOLOv5_Y | VOC07+12 | VOC-Test07 | 0.8620 |
| Ours | VOC07+12 | VOC-Test07 | 0.8731 |
| Method | AP (Brick) | AP (Wood) | AP (Stone) | AP (Plastic) | mAP | F1 (Brick) | F1 (Wood) | F1 (Stone) | F1 (Plastic) |
|---|---|---|---|---|---|---|---|---|---|
| YOLOv5_Y | 0.8711 | 0.9138 | 0.9158 | 0.8959 | 0.8991 | 0.84 | 0.82 | 0.89 | 0.85 |
| YOLOv5_C | 0.9141 | 0.9572 | 0.9511 | 0.9581 | 0.9451 | 0.89 | 0.90 | 0.93 | 0.87 |
| YOLOv5_S | 0.9075 | 0.9565 | 0.9430 | 0.9447 | 0.9379 | 0.88 | 0.92 | 0.92 | 0.90 |
| YOLOv5_D | 0.9119 | 0.9581 | 0.9534 | 0.9605 | 0.9460 | 0.89 | 0.91 | 0.92 | 0.91 |
| YOLOv5_CS | 0.9132 | 0.9551 | 0.9506 | 0.9680 | 0.9467 | 0.89 | 0.92 | 0.94 | 0.90 |
| YOLOv5_CD | 0.9215 | 0.9485 | 0.9422 | 0.9545 | 0.9417 | 0.89 | 0.91 | 0.93 | 0.92 |
| YOLOv5_SD | 0.9095 | 0.9555 | 0.9412 | 0.9718 | 0.9445 | 0.88 | 0.93 | 0.93 | 0.92 |
| Ours | 0.9222 | 0.9659 | 0.9555 | 0.9485 | 0.9480 | 0.89 | 0.91 | 0.93 | 0.92 |
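The mAP column in the ablation table is the arithmetic mean of the per-class APs, and each F1 value is the harmonic mean of that class's precision and recall. A minimal sketch of both calculations, using the baseline YOLOv5_Y row as input:

```python
def mean_ap(per_class_ap):
    """mAP: arithmetic mean of per-class average precision."""
    return sum(per_class_ap) / len(per_class_ap)


def f1_score(precision, recall):
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)


# Per-class APs (Brick, Wood, Stone, Plastic) from the baseline YOLOv5 row.
ap = [0.8711, 0.9138, 0.9158, 0.8959]
print(f"mAP = {mean_ap(ap):.4f}")  # close to the 0.8991 reported in the table
```

Verifying the reported mAP this way is a useful consistency check when transcribing ablation results.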
| Method | AP (Brick) | AP (Wood) | AP (Stone) | AP (Plastic) | mAP | F1 (Brick) | F1 (Wood) | F1 (Stone) | F1 (Plastic) |
|---|---|---|---|---|---|---|---|---|---|
| Ours | 0.9222 | 0.9659 | 0.9555 | 0.9485 | 0.9480 | 0.89 | 0.91 | 0.93 | 0.92 |
| YOLOv7 | 0.9006 | 0.9416 | 0.9250 | 0.9388 | 0.9265 | 0.88 | 0.93 | 0.91 | 0.92 |
| YOLOv5_Y | 0.8711 | 0.9138 | 0.9158 | 0.8959 | 0.8991 | 0.84 | 0.82 | 0.89 | 0.85 |
| YOLOv4 | 0.8063 | 0.8972 | 0.8809 | 0.8782 | 0.8656 | 0.74 | 0.85 | 0.85 | 0.84 |
| YOLOv3 | 0.7016 | 0.8157 | 0.8239 | 0.8058 | 0.7868 | 0.69 | 0.77 | 0.77 | 0.79 |
| Faster-RCNN | 0.7933 | 0.8967 | 0.8886 | 0.9021 | 0.8702 | 0.75 | 0.85 | 0.83 | 0.88 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhou, Q.; Liu, H.; Qiu, Y.; Zheng, W. Object Detection for Construction Waste Based on an Improved YOLOv5 Model. Sustainability 2023, 15, 681. https://doi.org/10.3390/su15010681