Proceeding Paper

Automated Damage Detection on Concrete Structures Using Computer Vision and Drone Imagery †

by Timothy Malche 1, Sumegh Tharewal 2 and Rajesh Kumar Dhanaraj 2,*

1 Department of Computer Applications, Manipal University Jaipur, Jaipur 303007, India
2 Symbiosis Institute of Computer Studies and Research (SICSR), Symbiosis International (Deemed) University (SIU), Pune 411016, India
* Author to whom correspondence should be addressed.
† Presented at the 10th International Electronic Conference on Sensors and Applications (ECSA-10), 15–30 November 2023; Available online: https://ecsa-10.sciforum.net/.
Eng. Proc. 2023, 58(1), 60; https://doi.org/10.3390/ecsa-10-16059
Published: 15 November 2023

Abstract

The manual inspection of concrete structures, such as tall buildings, bridges, and large infrastructure, can be time-consuming and costly, and damage assessment is a crucial task that requires close-range inspection of all surfaces. The proposed system uses a computer vision model to identify various types of damage on these structures. The model was trained on a large dataset of drone footage, which was annotated manually to ensure accuracy. It was then tested on new data, and the results showed that it could detect and identify structural damage on concrete structures with 94% accuracy. The system is much faster and more efficient than manual inspection, reducing the time and cost required for damage assessment. The proposed system has the potential to revolutionize the way damage assessment is performed on concrete structures. It can help to preserve and protect these valuable assets by enabling the early detection of damage and facilitating timely repairs.

1. Introduction

The manual inspection of concrete structures, such as tall buildings, bridges, and large infrastructure, is a time-consuming and risky process for human employees. Drones with camera sensor nodes have shown potential in gathering close-range footage, but the challenge lies in rapidly analyzing enormous volumes of data to detect and diagnose structural deterioration. This study addresses these challenges by presenting an automated solution based on the Internet of Things (IoT), computer vision, and deep learning. The primary issue addressed in this research is the need for a more efficient and reliable way of identifying structural damage on concrete structures.
The traditional manual inspection technique is time-consuming and expensive, making timely repairs and maintenance difficult. As a result, an automated solution is necessary to speed up the damage assessment process while reducing the risks to human personnel. The proposed system focuses on detecting various types of damage, such as cracks, Alkali-Silica Reactions (ASRs), and concrete degradation, on concrete structures using drone-captured video footage [1,2]. The system’s scope includes developing a Convolutional Neural Network (CNN) architecture tailored to this specific task and implementing a seamless process for automatically obtaining video data from drones.
The primary objective of this work is to create and implement an automated damage detection system capable of identifying structural damage on concrete structures in an efficient and accurate manner. The technology intends to expedite the inspection process by utilizing IoT, computer vision, and deep learning techniques, enabling proactive maintenance and preservation activities. The novelty of the proposed system is a custom-designed CNN architecture that is optimized for detecting damage on concrete structures and a system architecture based on IoT to automatically capture data and perform analysis and reporting. The performance of the proposed automated damage detection system was evaluated using a diverse dataset of drone-captured video footage containing various types of damage on concrete structures. The CNN architecture demonstrated impressive results, achieving an accuracy of 94% in correctly identifying different types of structural damage.
The approach involves capturing close-range footage of the infrastructure with drones and processing the footage using a computer vision model to identify and classify damage. The proposed method can provide an efficient and cost-effective way to detect damage to concrete structures and help preserve these important historical structures for future generations. The working of the system is shown in Figure 1.

2. Related Work

The comprehensive analysis in [3] explores the critical issue of damage detection in civil engineering, emphasizing the transformational potential of computer vision and deep learning algorithms. The research in [4] presents a Deep Convolutional Neural Network-based Damage Locating (DCNN-DL) approach for steel frame inspection that outperforms existing techniques with 99.3% accuracy. The approach uses the DenseNet architecture to identify and locate damaged regions, providing a fast, real-time solution for visual damage evaluation in civil structures. The research in [5] investigates the use of computer vision algorithms in conjunction with remote cameras and unmanned aerial vehicles (UAVs) for non-contact civil infrastructure evaluation. The study in [6] presents a structural identification framework for bridge health monitoring that makes use of computer-vision-based measurements. It employs a novel damage indicator, the displacement unit influence surface, and successfully identifies and localizes simulated damage on a large-scale bridge model, demonstrating its use in structural health evaluation. The study in [7] addresses gaps in the current literature on computer-vision-based crack diagnosis for civil infrastructure by providing a complete evaluation of qualitative and quantitative methodologies, including deep-learning-based approaches.
Despite extensive research into image-based damage identification and quantification, the technology is still in its early phases, with limitations and gaps that call for further investigation. Even with efforts to improve the reliability of image-based approaches, achieving full automation in damage assessment and categorization remains a substantial challenge.

3. System Design

The proposed system involves the use of a drone to capture image data, which are then fed to an object detection model to identify different types of structural damage in concrete structures. In this section, we provide a detailed methodology for building and deploying the system, which involves collecting images of cracks in buildings, labeling and training the model used, and deploying the model using a Python 3.x script.

3.1. Dataset Collection

The first step in building the system is to collect images of cracks in buildings. These images are taken using a drone, which captures high-resolution images of the building’s surface. The images are then annotated with labels indicating the type of damage using the Roboflow platform. The labeling process involves drawing bounding boxes around the damaged area and assigning them a label indicating the type of damage, such as cracks, ASR, or concrete degradation.
Images of different types of cracks were collected. The images were resized to 800 × 800 pixels, a standard transformation applied before training. The image dataset was then split into training, validation, and testing groups:
  • 2527 images used for training.
  • 2149 images used for validation.
  • 279 images used for testing.
Using the Roboflow platform’s rectangular bounding box tool, each image in the dataset was labelled for object detection. The labelling for various images is shown in Figure 2. A minimal sketch of the resizing and splitting steps is given below.
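For reproducibility, the preprocessing step can be approximated with a short Python script. The sketch below is illustrative only: the folder names, paths, and split ratios are assumptions, since the actual resizing, splitting, and annotation in this work were performed on the Roboflow platform.

```python
# Illustrative preprocessing sketch (not the exact pipeline used in this work,
# which was carried out on the Roboflow platform): resize raw drone frames to
# 800 x 800 pixels and split them into train/valid/test folders.
# Folder names, paths, and split ratios below are placeholder assumptions.
import random
from pathlib import Path

import cv2  # OpenCV for image I/O and resizing

SRC_DIR = Path("raw_images")   # hypothetical folder of drone-captured frames
OUT_DIR = Path("dataset")      # output root: dataset/train, dataset/valid, dataset/test
SPLITS = {"train": 0.80, "valid": 0.15, "test": 0.05}  # illustrative ratios
IMG_SIZE = (800, 800)          # standard transformation applied before training

def preprocess_and_split() -> None:
    images = sorted(SRC_DIR.glob("*.jpg"))
    random.seed(42)            # fixed seed so the split is reproducible
    random.shuffle(images)

    n = len(images)
    n_train = int(n * SPLITS["train"])
    n_valid = int(n * SPLITS["valid"])
    groups = {
        "train": images[:n_train],
        "valid": images[n_train:n_train + n_valid],
        "test": images[n_train + n_valid:],
    }

    for split, files in groups.items():
        out = OUT_DIR / split / "images"
        out.mkdir(parents=True, exist_ok=True)
        for path in files:
            img = cv2.imread(str(path))
            if img is None:
                continue       # skip unreadable files
            cv2.imwrite(str(out / path.name), cv2.resize(img, IMG_SIZE))

if __name__ == "__main__":
    preprocess_and_split()
```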

3.2. Model Training

The annotated data are then used to train the object detection model, which is based on the YOLO deep learning algorithm.
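As an indication of how such a training run can be launched, the following sketch uses the Ultralytics Python API with the settings listed later in Table 2 (800-pixel images, learning rate 0.01, batch size 16, 200 epochs). The dataset configuration file name is a placeholder, and the YOLOv5 and YOLOv7 models in this study would be trained with their own repository scripts rather than this exact API.

```python
# Illustrative YOLOv8 training sketch using the Ultralytics API with the
# hyperparameters reported in Table 2. The dataset YAML path is a placeholder
# for a Roboflow-exported dataset in YOLO format; this is a sketch, not
# necessarily the exact command used in the study.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")        # pre-trained weights (see Table 1)
model.train(
    data="concrete_damage.yaml",  # hypothetical dataset configuration file
    imgsz=800,                    # training image size
    epochs=200,
    batch=16,
    lr0=0.01,                     # initial learning rate
)
metrics = model.val()             # evaluate on the validation split
```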
The trained model is deployed using a Python script, which receives a live camera stream and runs the object detection model to detect cracks. The Python script uses the Model API to query the hosted version of the model and return the results. The API provides a simple interface for sending images or video to the model and receiving the output in a standardized format. The methodology of the system is presented in Figure 3.
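A minimal deployment sketch is shown below. It assumes the trained model is hosted on Roboflow’s inference endpoint; the stream URL, model identifier, version, and API key are placeholders, and the drawing logic is illustrative rather than the exact script used in this work.

```python
# Minimal deployment sketch: read frames from the drone's WiFi camera stream,
# send them to a hosted detection endpoint, and draw the returned boxes.
# The stream URL, model ID, version, and API key are placeholders; the exact
# Model API and post-processing used in the paper may differ.
import base64

import cv2
import requests

STREAM_URL = "http://192.168.4.1:8080/video"                 # hypothetical WiFi camera stream
MODEL_URL = "https://detect.roboflow.com/concrete-damage/1"  # hypothetical hosted model
API_KEY = "YOUR_API_KEY"

def detect(frame) -> list:
    """Send one frame to the hosted model and return its predictions."""
    _, buf = cv2.imencode(".jpg", frame)
    img_b64 = base64.b64encode(buf).decode("utf-8")
    resp = requests.post(
        MODEL_URL,
        params={"api_key": API_KEY, "confidence": 40, "overlap": 30},
        data=img_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("predictions", [])

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for p in detect(frame):
        # Each prediction gives the box centre, size, class label, and confidence.
        x1, y1 = int(p["x"] - p["width"] / 2), int(p["y"] - p["height"] / 2)
        x2, y2 = int(p["x"] + p["width"] / 2), int(p["y"] + p["height"] / 2)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.putText(frame, f'{p["class"]} {p["confidence"]:.2f}',
                    (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    cv2.imshow("damage detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```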
The selection of hardware components plays a crucial role in this research, as they enable the deployment and testing of the computer vision model. Specifically, a drone and a camera module are utilized to implement the model. Figure 4 illustrates the setup of the drone used in this research, showing the integration of the camera module and the other components needed for seamless data capture and transmission.
The system incorporates several essential hardware components for its operation. It utilizes a Quadcopter Drone Kit, WiFi Camera Module, a Roboflow account, and a Python development environment.

4. Results and Discussion

In this section, the results of experiments using different object detection models for detecting physical damage in concrete structures are discussed. Three models, YOLOv8, YOLOv7, and YOLOv5, are used to evaluate the performance of the system on a test dataset of annotated images in the context of concrete structure monitoring and protection.
The performance of the YOLOv8, YOLOv7, and YOLOv5 models has been evaluated on the test dataset of annotated images. The models were trained using the dataset from the Roboflow platform. As transfer learning is used in this research, the following tables, Table 1 and Table 2, show the pre-trained YOLO models used and the training settings.
The evaluation metrics used were precision, recall, and F1 score. Equations (1)–(4) [8] were used to evaluate the performance of the models and measure their accuracy, precision, recall, and F1 score:
Accuracy = (TP + TN)/(TP + TN + FP + FN)  (1)
Precision = TP/(TP + FP)  (2)
Recall = TP/(TP + FN)  (3)
F1 Score = (2 × Precision × Recall)/(Precision + Recall)  (4)
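For concreteness, the sketch below shows how these metrics follow from raw true/false positive and negative counts; the counts in the example are arbitrary and only illustrate the arithmetic, not the actual evaluation results.

```python
# Small helper illustrating Equations (1)-(4): compute the evaluation metrics
# from raw true-positive / true-negative / false-positive / false-negative counts.
def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Example with arbitrary counts chosen only to show the arithmetic.
print(evaluation_metrics(tp=85, tn=100, fp=7, fn=15))
```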
Table 3 shows the results of the evaluation for each model.
As shown in Table 3, the YOLOv8 model achieved the highest performance in terms of precision, recall, and F1 score, with a precision of 0.93, a recall of 0.85, and an F1 score of 0.89. The YOLOv7 model also performed well, with a precision of 0.92, a recall of 0.81, and an F1 score of 0.86. The YOLOv5 model achieved a precision of 0.90, a recall of 0.75, and an F1 score of 0.82.
The following Table 4 compares the mAP scores for each class across the YOLOv8, YOLOv7, and YOLOv5 models:
As shown in the table, the YOLOv8 model has the highest mAP score for all classes except for “Heaving cracks”, where the YOLOv7 model performs slightly better. The YOLOv7 model also shows good performance for “Crazing and Crusting Cracks”, “Settling cracks”, and “Corrosion of Reinforcement”. The YOLOv5 model generally shows lower mAP scores for all classes compared to the other two models, but still performs relatively well for “Crazing and Crusting Cracks” and “Corrosion of Reinforcement”. The following Table 5 compares the inferencing time for all three models.
As shown in the table, the YOLOv8 model has the highest mAP score and a relatively fast inference time, making it the best overall option for detecting structural damage. The YOLOv7 model has a moderate mAP score but a slightly slower inference time compared to the YOLOv8 model. The YOLOv5 model has the fastest inference time but the lowest mAP score, making it a suitable option for applications where speed is a priority over accuracy. Figure 5 depicts a detection result from the YOLOv8 model.

5. Conclusions

This research demonstrates the effectiveness of using drone-captured imagery and computer vision techniques for the inspection of physical damage in concrete structures such as buildings and bridges. By employing object detection models such as YOLOv8, YOLOv7, and YOLOv5, various types of damage, including plastic shrinkage cracks, crazing and crusting cracks, settling cracks, expansion cracks, heaving cracks, overloading cracks, and corrosion of reinforcement, have been successfully identified. The evaluation of the models has shown promising results, with high mAP scores across different classes.
This research has highlighted the potential of leveraging computer vision and drone technology in damage assessment, providing a safer, cost-effective, and efficient alternative to traditional manual inspections. By automating the detection process, we reduce the need for manual evaluation, which can be time-consuming and prone to human error. The integration of these technologies allows for comprehensive and detailed inspections, facilitating early detection and timely intervention to mitigate further damage to concrete structures.

Author Contributions

T.M., S.T. and R.K.D. conceptualized the idea for this manuscript. T.M. provided the resources. S.T. designed and developed the hardware for data acquisition. R.K.D. carried out the investigation and data curation of the acquired data. T.M. validated the data and results of the acquired data. S.T. prepared the original draft. R.K.D. reviewed and edited the original draft. T.M. and S.T. supervised the work. R.K.D. administered the work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the first author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Lu, C.; Bu, S.; Zheng, Y.; Kosa, K. Deterioration of concrete mechanical properties and fracture of steel bars caused by alkali-silica reaction: A review. Structures 2022, 35, 893–902.
2. Kovler, K.; Chernov, V. Types of damage in concrete structures. In Failure, Distress and Repair of Concrete Structures; Woodhead Publishing: Sawston, UK, 2009; pp. 32–56.
3. Lingxin, Z.; Junkai, S.; Baijie, Z. A review of the research and application of deep learning-based computer vision in structural damage detection. Earthq. Eng. Eng. Vib. 2022, 21, 1–21.
4. Kim, B.; Yuvaraj, N.; Park, H.W.; Preethaa, K.S.; Pandian, R.A.; Lee, D.E. Investigation of steel frame damage based on computer vision and deep learning. Autom. Constr. 2021, 132, 103941.
5. Spencer, B.F., Jr.; Hoskere, V.; Narazaki, Y. Advances in computer vision-based civil infrastructure inspection and monitoring. Engineering 2019, 5, 199–222.
6. Khuc, T.; Catbas, F.N. Structural identification using computer vision–based bridge health monitoring. J. Struct. Eng. 2018, 144, 04017202.
7. Deng, J.; Singh, A.; Zhou, Y.; Lu, Y.; Lee, V.C.S. Review on computer vision-based crack detection and quantification methodologies for civil structures. Constr. Build. Mater. 2022, 356, 129238.
8. Cao, C.; Wang, B.; Zhang, W.; Zeng, X.; Yan, X.; Feng, Z.; Liu, Y.; Wu, Z. An improved faster R-CNN for small object detection. IEEE Access 2019, 7, 106838–106846.
Figure 1. System Architecture.
Figure 2. Annotated images in the dataset. Bounding box for cracks detection.
Figure 3. Methodology of the system.
Figure 4. Drone with WiFi Camera Module.
Figure 5. Inferencing result showing detection of cracks on wall.
Table 1. YOLO pre-trained models.

Model  | Weights    | Layers | Parameters | Gradients  | GFLOPs
YOLOv8 | yolov8m.pt | 295    | 25,856,899 | 25,856,883 | 79.1
YOLOv7 | yolov7.pt  | 407    | 37,194,710 | 37,194,710 | 105.1
YOLOv5 | yolov5m.pt | 291    | 20,871,318 | 20,871,318 | 48.2
Table 2. Training settings.

Model  | Image Size | Learning Rate | Batch Size | Epochs
YOLOv8 | 800        | 0.01          | 16         | 200
YOLOv7 | 640        | 0.01          | 16         | 200
YOLOv5 | 800        | 0.01          | 16         | 200
Table 3. Performance matrix of different YOLO models.

Model  | Precision | Recall | F1 Score
YOLOv8 | 0.93      | 0.85   | 0.89
YOLOv7 | 0.92      | 0.81   | 0.86
YOLOv5 | 0.90      | 0.75   | 0.82
Table 4. Class-wise accuracy of different YOLO models.

Class                       | YOLOv8 mAP Score | YOLOv7 mAP Score | YOLOv5 mAP Score
Plastic shrinkage cracks    | 0.89             | 0.80             | 0.66
Crazing and Crusting Cracks | 0.93             | 0.85             | 0.73
Settling cracks             | 0.90             | 0.84             | 0.70
Expansion cracks            | 0.91             | 0.83             | 0.72
Heaving cracks              | 0.78             | 0.85             | 0.65
Overloading cracks          | 0.92             | 0.83             | 0.68
Corrosion of Reinforcement  | 0.93             | 0.86             | 0.71
Table 5. Accuracy and speed comparison of YOLO models.

Model  | mAP Score | Inference Time (ms)
YOLOv8 | 0.91      | 25
YOLOv7 | 0.85      | 32
YOLOv5 | 0.71      | 18