Research on Intelligent Identification Technology for Bridge Cracks
Abstract
1. Introduction
2. Intelligent Recognition Technology for Bridge Cracks
2.1. Dataset Construction
2.2. YOLOv8 Object Detection Algorithm
2.3. DeepLabV3+ Semantic Segmentation Algorithm
2.4. Crack Multi-Task Integration Algorithm
3. Model Training
3.1. Evaluation Metrics for Detection Model
3.2. YOLOv8 Model Training
3.3. DeepLabv3+ Model Training
3.4. Crack Width Calculation
4. Experimental Validation
4.1. Experimental Environment
4.2. Model Validation
4.3. Edge Device Deployment
5. Practical Application
5.1. Overview
5.2. Crack Detection on Donghai Bridge
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- 2022 Transportation Industry Development Statistical Bulletin. China Waterw. Transp. 2023, 29–33.
- Yue, Q.; Xu, G.; Liu, X. Crack Intelligent Recognition and Bridge Monitoring Methods. China J. Highw. Transp. 2024, 37, 16–28.
- Liu, Y.F.; Fan, J.S.; Nie, J.G.; Kong, S.Y.; Qi, Y. Review and prospect of digital-image-based crack detection of structure surface. J. Civ. Eng. 2021, 54, 79–98.
- Zou, J.; Yang, J.; Li, H.; Shuai, C.; Huang, D.; Jiang, S. Bridge apparent damage detection based on the improved YOLO v3 in complex background. J. Railw. Sci. Eng. 2021, 18, 3257–3266.
- Wei, C.; Zhu, X.; Zhang, D. Lightweight grid bridge crack detection technology based on depth classification. Comput. Eng. Des. 2022, 43, 2334–2341.
- Peng, Y.; Liu, M.; Wan, Z.; Jiang, W.; He, W. A Dual Deep Network Based on the Improved YOLO for Fast Bridge Surface Defect Detection. Acta Autom. Sin. 2022, 48, 1018–1032.
- Huantong, G.; Zhenyu, L.; Jun, J.; Zichen, F.; Jiaxing, L. Embedded road crack detection algorithm based on improved YOLOv8. Comput. Appl. 2024, 44, 1613–1618.
- Hui, L.; Ibrahim, A.; Hindi, R. Computer Vision-Based Concrete Crack Identification Using MobileNetV2 Neural Network and Adaptive Thresholding. Infrastructures 2025, 10, 42.
- Yu, J.; Li, F.; Xue, X.; Zhu, P.; Wu, X. Intelligent Identification of Bridge Structural Cracks Based on Unmanned Aerial Vehicle and Mask R-CNN. China J. Highw. Transp. 2021, 34, 80–90.
- Xuebing, Z.; Junjie, W. Bridge Crack Detection Based on Improved DeeplabV3+ and Migration Learning. Comput. Eng. Appl. 2023, 59, 262–269.
- Yu, J.; Liu, B.; Yin, D.; Gao, W.; Xie, Y. Intelligent Identification and Measurement of Bridge Cracks Based on YOLOv5 and U-Net3+. J. Hunan Univ. (Nat. Sci.) 2023, 50, 65–73.
- Hammouch, W.; Chouiekh, C.; Khaissidi, G.; Mrabti, M. Crack Detection and Classification in Moroccan Pavement Using Convolutional Neural Network. Infrastructures 2022, 7.
- Di Benedetto, A.; Fiani, M.; Gujski, L.M. U-Net-Based CNN Architecture for Road Crack Segmentation. Infrastructures 2023, 8, 90.
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLO; GitHub: San Francisco, CA, USA, 2023.
- Dorafshan, S.; Thomas, R.J.; Maguire, M. SDNET2018: An annotated image dataset for non-contact concrete crack detection using deep convolutional neural networks. Data Brief 2018, 21, 1664–1668.
- Jin, T.; Ye, X.; Li, Z. Establishment and evaluation of conditional GAN-based image dataset for semantic segmentation of structural cracks. Eng. Struct. 2023, 285, 116058.
- Wang, C.Y.; Mark Liao, H.Y.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1571–1580.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
- Chen, L.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807.
- Mao, A.; Mohri, M.; Zhong, Y. Cross-Entropy Loss Functions: Theoretical Analysis and Applications. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007.
- Ye, X.W.; Jin, T.; Li, Z.X.; Ma, S.Y.; Ding, Y.; Ou, Y.H. Structural Crack Detection from Benchmark Data Sets Using Pruned Fully Convolutional Networks. J. Struct. Eng. 2021, 147, 04721008.
- Liu, Y. Multi-Scale Structural Damage Assessment Based on Model Updating and Image Processing. Ph.D. Thesis, Tsinghua University, Beijing, China, 2015.
| Model | Accuracy (%) | Params (M) | FLOPs (G) | FPS | F1-Score (%) |
|---|---|---|---|---|---|
| YOLOv8n | 78.79 | 3.01 | 8.2 | 52.99 | 88.14 |
| YOLOv8s | 76.73 | 11.14 | 28.6 | 50.37 | 86.83 |
| YOLOv8m | 76.54 | 25.86 | 79.1 | 48.29 | 86.71 |
| YOLOv8l | 78.34 | 43.63 | 165.4 | 45.50 | 87.85 |
| YOLOv8x | 74.61 | 68.15 | 258.1 | 36.21 | 85.45 |
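For readers who want to see how the detection stage is driven in practice, the sketch below loads a YOLOv8n checkpoint with the Ultralytics API and runs it on a single image. The weights file name and image path are placeholders, not released artifacts from this study; thresholds are illustrative defaults.

```python
from ultralytics import YOLO

# "crack_yolov8n.pt" is a placeholder for weights trained on the crack
# dataset described in Section 2.1; it is not a published checkpoint.
model = YOLO("crack_yolov8n.pt")

# Detect crack regions in a bridge surface image; each box delimits a
# region that can later be passed to the segmentation stage.
results = model.predict("bridge_surface.jpg", conf=0.25, imgsz=640)
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"crack region ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), conf={float(box.conf):.2f}")
```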
| Dataset | Backbone | Loss Function | mAP (%) | mIoU (%) | Dice (%) |
|---|---|---|---|---|---|
| BCL | MobileNetV2 | Cross-Entropy | 86.08 | 78.41 | 86.48 |
| | MobileNetV2 | Focal | 69.19 | 65.40 | 74.32 |
| | ResNet-101 | Cross-Entropy | 87.31 | 79.46 | 87.29 |
| | ResNet-101 | Focal | 87.41 | 79.44 | 87.28 |
| | Xception-65 | Cross-Entropy | 89.55 | 79.43 | 87.27 |
| | Xception-65 | Focal | 73.31 | 66.74 | 75.77 |
| BCL2.0 | MobileNetV2 | Cross-Entropy | 84.20 | 77.30 | 85.15 |
| | MobileNetV2 | Focal | 67.85 | 64.25 | 73.18 |
| | ResNet-101 | Cross-Entropy | 87.45 | 81.50 | 87.50 |
| | ResNet-101 | Focal | 86.97 | 76.37 | 84.74 |
| | Xception-65 | Cross-Entropy | 88.30 | 79.20 | 86.85 |
| | Xception-65 | Focal | 64.65 | 60.35 | 67.95 |
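The two loss functions compared above follow their standard definitions from the cited literature. As a reference, for a pixel whose predicted probability of its true class is p_t, with a class-balancing weight α_t and focusing parameter γ (γ = 2 is a common default; the exact hyperparameters used in this study are not restated here):

```latex
% Per-pixel classification losses; p_t is the predicted probability of the
% pixel's true class, alpha_t a class-balancing weight, gamma the focusing
% parameter (gamma = 2 is a common default, an assumption, not the paper's setting).
\mathcal{L}_{\mathrm{CE}}(p_t)    = -\log(p_t)
\qquad
\mathcal{L}_{\mathrm{Focal}}(p_t) = -\alpha_t \,(1 - p_t)^{\gamma}\,\log(p_t)
```

The focal term (1 − p_t)^γ down-weights well-classified background pixels, which is why it is often considered for crack segmentation, where crack pixels are a small minority; for these datasets and backbones, however, the table shows plain cross-entropy generally performing better.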
| Hardware Platform | Item | Configuration/Version |
|---|---|---|
| Jetson Nano | Operating System | Ubuntu 18.04 |
| | CPU | ARM Cortex-A57 MPCore |
| | GPU | Maxwell architecture, 128 CUDA cores |
| | Memory | 4 GB |
| | CUDA | 10.2 |
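A minimal export sketch for putting a trained detector on a device like the Jetson Nano is shown below. It assumes the Ultralytics export API and the trtexec tool that ships with JetPack; it is illustrative only and not necessarily the deployment pipeline used in this study, and the weights file name is a placeholder.

```python
from ultralytics import YOLO

# Convert the trained detector to ONNX on a workstation, then build the
# TensorRT engine on the Jetson itself so it matches the device's
# CUDA/TensorRT versions.
model = YOLO("crack_yolov8n.pt")  # placeholder weights file
model.export(format="onnx", imgsz=640, simplify=True)

# On the Jetson Nano (standard JetPack install path), for example:
#   /usr/src/tensorrt/bin/trtexec --onnx=crack_yolov8n.onnx \
#       --saveEngine=crack_yolov8n.engine --fp16
```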
| Equipment | Parameter | Value/Type |
|---|---|---|
| USB Camera | Resolution (pixels) | 1280 × 720 |
| | Maximum Frame Rate | 30 fps |
| | Interface Type | USB 2.0 |
| | Dynamic Range | 56 dB |
| | Signal-to-Noise Ratio | 62 dB |
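Frames from a camera with these specifications can be grabbed with OpenCV; the short sketch below assumes the camera enumerates as device 0 and simply requests the resolution and frame rate listed above before handing a frame to the detector.

```python
import cv2

# Open the USB camera (device index 0 is an assumption) and request the
# 1280x720 @ 30 fps mode from the table above.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
if ok:
    cv2.imwrite("frame.jpg", frame)  # pass this frame to the crack detector
cap.release()
```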
| Model | ID | Precision (%) | Recall (%) | F1-Score (%) | IoU (%) | Time |
|---|---|---|---|---|---|---|
| BCL Original | 1 | 97.75 | 62.92 | 76.56 | 62.03 | 176 |
| | 2 | 97.49 | 59.67 | 74.03 | 58.77 | 176 |
| | 3 | 75.51 | 72.91 | 74.19 | 58.96 | 176 |
| | 4 | 68.39 | 77.45 | 72.64 | 57.03 | 176 |
| | 5 | 87.42 | 40.44 | 55.30 | 38.21 | 176 |
| | 6 | 95.80 | 72.66 | 82.65 | 70.44 | 176 |
| BCL Multi-Task | 1 | 98.22 | 56.78 | 71.96 | 66.78 | 42 |
| | 2 | 99.36 | 60.85 | 75.48 | 60.61 | 48 |
| | 3 | 85.74 | 64.84 | 73.58 | 59.53 | 83 |
| | 4 | 88.10 | 89.09 | 88.59 | 79.52 | 41 |
| | 5 | 87.60 | 56.32 | 68.56 | 52.16 | 85 |
| | 6 | 96.07 | 75.35 | 84.46 | 73.10 | 62 |
| BCL2.0 Original | 1 | 68.82 | 74.12 | 71.37 | 55.49 | 181 |
| | 2 | 91.47 | 42.92 | 58.42 | 41.27 | 181 |
| | 3 | 77.19 | 82.00 | 79.52 | 66.00 | 181 |
| | 4 | 28.36 | 86.21 | 42.68 | 27.13 | 181 |
| | 5 | 49.80 | 71.57 | 58.73 | 41.58 | 181 |
| | 6 | 87.19 | 68.40 | 76.66 | 62.15 | 181 |
| BCL2.0 Multi-Task | 1 | 87.56 | 81.75 | 84.55 | 73.24 | 46 |
| | 2 | 77.94 | 85.36 | 81.48 | 68.75 | 51 |
| | 3 | 84.96 | 81.59 | 83.24 | 71.29 | 85 |
| | 4 | 53.95 | 94.58 | 68.71 | 52.33 | 44 |
| | 5 | 64.84 | 84.57 | 73.40 | 57.98 | 90 |
| | 6 | 90.87 | 78.68 | 84.34 | 72.91 | 70 |
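The pixel-level metrics reported above (precision, recall, F1-score, IoU) follow their usual definitions. The following sketch, assuming binary NumPy masks, shows one way to compute them; it is illustrative and not the evaluation code used in this study.

```python
import numpy as np

def mask_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Pixel-level precision, recall, F1 and IoU for binary crack masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # crack pixels correctly found
    fp = np.logical_and(pred, ~gt).sum()   # background flagged as crack
    fn = np.logical_and(~pred, gt).sum()   # crack pixels missed
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return precision, recall, f1, iou
```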
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).