Article

Localization of Cracks in Concrete Structures Lacking Reference Objects and Feature Points Using an Unmanned Aerial Vehicle

Seung-Chan Baek, Jintak Oh, Hyun-Jung Woo, In-Ho Kim and Sejun Jang
1 Department of Architecture, Kyungil University, Gyeongsan 38428, Republic of Korea
2 School of Architecture, Civil, Environmental and Energy Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
3 Department of Civil Engineering, Kunsan National University, Gunsan 54150, Republic of Korea
4 Department of Architecture and Building Engineering, Kunsan National University, Gunsan 54150, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9918; https://doi.org/10.3390/app13179918
Submission received: 7 August 2023 / Revised: 29 August 2023 / Accepted: 30 August 2023 / Published: 1 September 2023
(This article belongs to the Special Issue Structural Health Monitoring of Civil Structures and Infrastructures)

Abstract

Information on the location of cracks in concrete structures is an important factor enabling appropriate maintenance or reinforcement measures to be taken. Most studies related to concrete cracks are limited to crack detection and identification, and studies related to crack location information are insufficient. The novelty of this study lies in developing application technology for crack localization by proposing a methodology that can estimate the location of concrete cracks even when reference objects or feature points are lacking, using an unmanned aerial vehicle and image processing techniques. For the development and verification of the proposed method, aerial photography and image acquisition were performed by mounting a laser pointer model on an unmanned aerial vehicle. To build the analysis data, image distortion correction and feature point extraction were performed using the homography matrix and the scale-invariant feature transform algorithm. Spatial information was established using the point cloud and image stitching techniques, and crack locations were estimated by generating crack expression data via layer merging. The proposed method was validated by comparison with field-measured data. In the future, the proposed methodology can be utilized to supplement and improve conventional methods for the visual inspection of infrastructure and facilities.

1. Introduction

Cracks are the earliest signs of structural deterioration that can reduce the lifespan and reliability of concrete structures and lead to severe environmental damage. To ensure the longevity of such structures and predict potential failures, evaluation and monitoring are required [1]. Typical damage types that occur in concrete structures include cracks, peeling/exfoliation, efflorescence/leakage, and material separation. Among these, cracks are the most common type. When a crack occurs in a concrete structure, it can lead to fatal losses such as structural defects, reduced durability, exterior damage, corrosion of steel bars, and impaired waterproofing performance [2]. If cracks are accessible, they can be measured with a simple instrument such as a tape measure or a crack magnifying glass. However, if access is difficult, the length and area must be estimated by visual inspection, which may differ from the actual crack size [3]. Furthermore, it is challenging to assess the progression of cracks when reviewing their history through visual inspection alone. When measurement results differ, the accuracy of the calculated crack size is compromised, and without determining the progression, the reliability of the outcome is not guaranteed, as an appropriate repair plan for the crack cannot be formulated [4]. Since most crack investigation tasks rely on inspection by the naked eye, the results prepared by individual inspectors can differ, and considerable inspection manpower and cost are required [5].
Therefore, as an alternative to field inspection by visual inspection using manpower, studies on image-based safety inspection using unmanned aerial vehicles, image processing, and deep learning technology are being conducted to increase the objectivity and efficiency of crack investigation work. Jeong et al. [6] used Convolutional-Neural-Network (CNN)-based machine learning to identify damage in concrete bridges from images captured by an unmanned aerial vehicle. Liu et al. [7] conducted a crack detection study that involved image distortion correction and 3D model reconstruction to assess cracks in bridge piers using images obtained from unmanned aerial vehicles. Cho et al. [8] proposed a safety inspection process for collaborative housing based on unmanned aerial vehicles using the Business Process Modeling Notation (BPMN) technique and verified its practicality through on-site application. Deng et al. [9] studied computer vision-based crack detection and quantification methodologies for civil structures and suggested the possibility of automated visual inspection using image processing and deep learning. Avendaño et al. [10] presented an inspection framework that combines data collection, crack detection, and quantification of essential crack parameters; the framework utilizes machine learning and image analysis algorithms for image-based inspection of concrete cracks using unmanned aerial vehicles. Jung et al. [11] proposed a concrete crack detection method that employs deep learning and image processing techniques to identify cracks in concrete structures, aiming to enhance the efficiency and objectivity of crack investigation work. Yu et al. [12] investigated vision-based concrete crack detection using a hybrid framework and proposed an automated vision-based method for identifying the surface condition of concrete structures. Maslan et al. [13] developed an automatic detection and evaluation system for runway surface cracks using UAVs and deep CNNs. Orinaitė et al. [14] studied the use of machine learning for the detection of underwater concrete cracks and verified the efficiency and accuracy of the proposed approach. Liu et al. [15] investigated computer vision-based concrete crack detection using U-Net and found it to be more robust and effective than CNN-based methods. Park et al. [16] studied the detection and quantification of concrete cracks using deep learning and structured light. Arbaoui et al. [17] proposed a methodology for the detection and monitoring of concrete cracks in material samples and specimens using deep learning-based multi-resolution analysis. Jang et al. [18] studied deep learning-based concrete crack detection using hybrid image scanning that combines vision and infrared thermography images. Kim et al. [19] developed a deep learning-based image analysis technique for crack detection and feature analysis in small-scale infrastructure images.
If safety inspections are conducted based on these image-based studies, the objectivity of the inspection results can be enhanced by minimizing the subjective intervention of the inspector. In addition, constraints on inspection time and location can be relaxed and existing inspection and investigation processes improved, which is expected to reduce the manpower and costs associated with inspections. However, most image-based safety inspection studies focus mainly on crack detection, that is, on the presence and identification of cracks through the development of crack detection algorithms and deep learning-based models, and research on crack location estimation remains relatively insufficient.
The location information of cracks is a crucial factor in conducting effective safety inspections of actual facilities. It enables the understanding of crack size, shape, and distribution, which is vital for implementing appropriate maintenance and reinforcement measures [20]. However, as mentioned earlier, most crack-related research has focused on identification and representation, while studies on crack location estimation are relatively scarce. Zoubir et al. [21] studied the identification and localization of concrete bridge defects based on D-CNN and transfer learning; the trained model achieved a high defect identification accuracy of 97.13%, but for defect localization it provided only an approximate pixel-level location. Kim et al. [22] evaluated concrete cracks using two stereo vision setups with different focal lengths; cracks were detected using a framework based on crack candidate regions, and crack locations were expressed only approximately through the construction of a 3D model using the structure-from-motion process. Studies related to crack location information have thus also concentrated on crack detection and identification, with location expressed only at an approximate level. Woo et al. [23] proposed a methodology for estimating the location of concrete cracks with millimeter-level accuracy using image processing techniques that utilize a reference object. However, this methodology cannot be applied when the facility contains no identifiable reference points for defining the reference object used to estimate crack locations. Moreover, the accuracy of the image processing technique is limited when an inadequate number of reference objects are available as feature points in the image.
The purpose of this study is to address the limitations of previous crack location estimation research based on image processing techniques that utilize reference objects, and to conduct a new image-based concrete crack location estimation study that improves on existing methods. When image processing techniques are used for crack location estimation, a lack of feature points can destabilize the construction of spatial information by methods such as image stitching and point cloud techniques that rely on feature points. A lack of reference objects can likewise degrade the performance of the algorithms used and increase errors, affecting the accuracy and reliability of the results. Therefore, in this study, we propose a methodology for estimating the location of concrete cracks in facilities lacking reference objects and feature points. In addition, this study is expected to be novel in developing crack localization technology at a time when most studies focus on crack detection and identification.

2. Materials and Methods

2.1. Overview

In general, images without reference objects may contain different objects or scenes, making it difficult to match feature points. To find an appropriate match between different objects, a more sophisticated feature point extraction and matching algorithm is needed. Non-reference images can have various transformations and distortions, which can make accurate position estimation challenging when stitching images or creating 3D point clouds. Therefore, in this experiment, we attempted to address these limitations by applying Laser Points to the images. Laser Points were used to correct images with various deformations and distortions and served as corresponding points for image correction using a homography matrix.
Through this approach, the accuracy of image matching and location estimation can be improved: images are corrected using distortion correction and matching algorithms, and the location of cracks is then estimated. Finally, a position estimation experiment was conducted using the spatial information constructed from aerial images obtained by operating an unmanned aerial vehicle equipped with the laser pointer model, together with image processing techniques. Figure 1 depicts the flow chart for estimating the location of cracks in the exterior walls of buildings using unmanned aerial vehicles when reference objects are absent from the images. The proposed method comprises the following steps: (1) Data acquisition: UAV-based aerial photography. (2) Construction of analysis data using image processing techniques: (a) distorted image correction using the homography matrix and (b) feature point extraction using the SIFT algorithm. (3) Crack localization: (a) construction of spatial information using image processing techniques, (b) merging of spatial information layers based on the point cloud technique and the image stitching technique, and (c) crack localization and data validation.

2.2. Data Acquisition

When acquiring data using unmanned aerial vehicle-based aerial photography, a flight plan must be established after thoroughly considering the filming conditions. Moreover, as the data quality obtained using aerial photography has a significant impact on image processing and spatial information construction, prior planning is crucial. Therefore, in this study, the flight plan was established considering flight safety and the quality of the photographed data.
Before initiating a UAV flight, the pilot must pre-assess factors that could impact flight safety, such as the structural system of the target building, the layout of surrounding terrain and buildings, and the presence of wires near the building. Subsequently, aerial photography is conducted in a location free from potential safety concerns. When capturing images of a building’s walls using a UAV, vertical flight to vary the altitude and horizontal flight to change the UAV’s position are employed. For such aerial photography, a rotary-wing UAV is more suitable than a fixed-wing UAV due to its ability for hovering flight and unrestricted altitude changes during flight. Additionally, in this study, close-up photography of the target building was necessary to capture aerial images containing cracks. To manage unexpected situations with a collision risk, experienced pilots conducted manual flights.
In obtaining aerial photographic data for estimating the location of concrete cracks, several factors must be considered, including flight stability, data quality, and overlap of acquired images. The resolution of an aerial image is influenced by the performance of the mounted camera sensor and the shooting distance. Woo et al. [24] used an unmanned aerial vehicle with a resolution of 20 Megapixels (MP) to define structural cracks in the exterior walls of concrete buildings. Aerial images were acquired by setting the shooting distance to 2 m. Jeong et al. [25] conducted crack detection using aerial images with a resolution of 12 MP taken from a shooting distance of 5 to 10 m. However, they confirmed that the detection had low accuracy and a limited detection range. Liu et al. [7] used a UAV to detect facility cracks and acquired aerial images with a resolution of 20.8 MP by setting the shooting distance to 1~2 m. Kim and Cho [26] used a UAV for automatic vision-based detection of cracks on concrete surfaces and set the shooting distance to 2 m to acquire aerial images with a resolution of 20 MP. Therefore, this study aimed to acquire aerial images with a resolution of 20 MP. The shooting distance was set to 2 m, considering safety.
The image overlap has a significant impact on the construction of spatial information based on Point Cloud techniques for concrete crack location estimation [27]. Yonas et al. [28] constructed spatial information for crack detection in aging bridges by setting the image overlap to 60~70%. Zhu et al. [29] utilized aerial images with 75% image overlap for deep learning-based crack detection on roads. Yuhan et al. [30] set the image overlap to 50% to detect defects in buildings and infrastructure. Kim et al. [31] used aerial images with image overlap of more than 60% to identify cracks in aging concrete bridges using UAVs. Accordingly, this study set the image overlap to at least 65% for data acquisition to build spatial information and estimate the location of concrete cracks.
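To illustrate how an overlap target translates into a flight plan, the sketch below derives the maximum spacing between adjacent shots from the per-image footprint on the wall. This is a hedged illustration only: the sensor size, focal length, and stand-off distance used here are assumed example values (the camera parameters actually used are reported later, in Section 3.1), and the calculation is not part of the paper's workflow.

```python
# Sketch: derive flight-line spacing for a target image overlap.
# The camera parameters below are assumptions for illustration only.

def image_footprint(sensor_size_mm: float, focal_mm: float, distance_mm: float) -> float:
    """Wall coverage of one image along one axis, in mm (pinhole approximation)."""
    return sensor_size_mm / focal_mm * distance_mm

def shot_spacing(footprint_mm: float, overlap: float) -> float:
    """Maximum UAV displacement between shots that still keeps the given overlap."""
    return footprint_mm * (1.0 - overlap)

if __name__ == "__main__":
    # Assumed example: 17.3 mm x 9.7 mm sensor, 15 mm lens, 2 m stand-off distance.
    horiz = image_footprint(17.3, 15.0, 2000.0)   # ~2307 mm of wall per image
    vert = image_footprint(9.7, 15.0, 2000.0)     # ~1293 mm of wall per image
    print(f"max horizontal spacing for 65% overlap: {shot_spacing(horiz, 0.65):.0f} mm")
    print(f"max vertical spacing for 65% overlap:   {shot_spacing(vert, 0.65):.0f} mm")
```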

2.3. Construction of Analysis Data Using Image Processing Technique

The construction of analysis data using image processing techniques involves two methods: (a) Distorted image correction using a homography matrix and (b) feature point extraction using the SIFT algorithm.
Aerial photographs of building exteriors taken with UAVs can contain various distortions. These distortions are caused by factors such as the UAV's position and attitude during aerial photography and the calibration of the camera sensor [32]. Image distortion can be converted back toward its original form by applying correction techniques; homography is one such matrix-based transformation method and is used for perspective conversion. A homography represents a projective transformation in 3D space and maps a plane of a 2D image onto another plane. In four-point homography estimation, four corresponding point pairs are selected, each pair consisting of a point in the original image and its counterpart in the transformed image. The homography matrix is estimated from these correspondence points, and the image is transformed accordingly [33]. In this study, laser points were used as the correspondence points for the homography matrix and applied to the distortion correction of images lacking reference objects and feature points. The use of these laser points can reduce the errors that occur when correcting distorted images that lack reference objects and feature points.
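The four-point correction described above can be sketched with OpenCV as follows. This is a minimal illustration, not the exact implementation used in the study: the pixel coordinates of the four detected laser points and their target positions (which would come from the known laser pointer geometry) are placeholder values.

```python
import cv2
import numpy as np

# Pixel coordinates of the four laser points detected in a raw aerial image
# (placeholder values for illustration).
src_pts = np.array([[612, 480], [1930, 455], [1948, 1602], [598, 1621]], dtype=np.float32)

# Target coordinates of the same points after correction, derived from the
# laser pointer model's known geometry (placeholder values): an ideal rectangle.
dst_pts = np.array([[600, 450], [1950, 450], [1950, 1600], [600, 1600]], dtype=np.float32)

img = cv2.imread("aerial_image.jpg")

# Estimate the 3x3 homography from the four correspondences and warp the image.
H, _ = cv2.findHomography(src_pts, dst_pts)
corrected = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))

cv2.imwrite("aerial_image_corrected.jpg", corrected)
```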
Next, feature points were extracted and matched using the SIFT algorithm so that image processing techniques could be applied to cracks in concrete buildings using the images whose distortion had been corrected with the homography matrix. The SIFT algorithm is a feature point extraction algorithm widely used in image processing and computer vision. SIFT transforms images at various scales to detect feature points: candidate feature points are selected across scales and orientations using a Laplacian of Gaussian (LoG)-style response, which is approximated in practice by a Difference of Gaussian (DoG) filter, and key points corresponding to local extrema with sufficient contrast are retained. Feature descriptors are then calculated for the detected feature points, and feature point matching is performed by comparing the descriptors of each feature point using the Euclidean distance [34].
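A minimal sketch of this detection-and-matching step using OpenCV is shown below. It assumes two overlapping, distortion-corrected images on disk (file names are placeholders) and uses brute-force matching with Euclidean (L2) distance, as described above; it is an illustration of the technique rather than the study's exact code.

```python
import cv2

# Load two distortion-corrected, overlapping aerial images as grayscale.
img1 = cv2.imread("corrected_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("corrected_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute 128-dimensional descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Euclidean (L2) distance; cross-checking keeps
# only mutually best matches between the two descriptor sets.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Visualize the strongest matches for a quick qualitative check.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None,
                      flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2.imwrite("sift_matches.jpg", vis)
```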

2.4. Crack Localization

Crack localization involves three methods: (a) Construction of spatial information using image processing techniques, (b) Merging of spatial information layers based on point cloud technique and image stitching technique, and (c) Crack localization and Data validation.
In this study, spatial information was constructed for estimating the location of cracks in concrete building exterior walls using the image stitching and point cloud techniques. The point cloud technique is one of the methods for constructing 3D data and represents objects or environments as a set of points; each point represents a position in 3D space, and the combination of points forms an overall model [35]. The typical process of spatial information construction based on point cloud techniques consists of three steps: (1) initial processing, (2) point cloud and mesh, and (3) Digital Surface Model (DSM) and orthomosaic. In the initial processing, the Scale Invariant Feature Transform (SIFT) algorithm is used to identify key points as feature points in images that contain location information. Matching is then performed to find corresponding key points in different images, and the internal and external parameters of the imaging sensor are calibrated. Next, the generated key points undergo point densification to construct the point cloud. Based on the constructed point cloud, a 3D textured mesh can be created, allowing the construction of a 3D model. Using this 3D model as a foundation, the DSM and orthomosaic can be generated [36]. Aerial images acquired using unmanned aerial vehicles (UAVs) contain embedded location information. The spatial information constructed based on the point cloud technique using these aerial images has high accuracy and realism because it includes this location information [37]. However, spatial information built with the point cloud technique has lower image quality than the raw data because the aerial images are divided and reconstructed into points during the generation of the 3D model [38]. As a result, while the spatial information constructed based on the point cloud technique retains location information, it has limitations in defining cracks because millimeter-level cracks are difficult to detect in it.
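The point-cloud-to-mesh part of this pipeline (step 2 above) can be illustrated with the open-source Open3D library. This is only a generic sketch: the study itself used commercial photogrammetry software (see Section 3.3), and the input file name and reconstruction parameters here are assumptions.

```python
import open3d as o3d

# Load a densified point cloud exported from the photogrammetry step
# (file name is a placeholder).
pcd = o3d.io.read_point_cloud("dense_point_cloud.ply")

# Estimate point normals, which Poisson surface reconstruction requires.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Reconstruct a surface mesh (the basis of a textured 3D model) from the point cloud.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

o3d.io.write_triangle_mesh("building_mesh.ply", mesh)
```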
The image stitching technique combines multiple images by matching common feature points to create a single image, allowing for the acquisition of high-resolution images such as panoramas. Feature-based image stitching involves extracting geometric features such as corners, edges, and lines from the input images and comparing them with the reference image to find corresponding points [39]. Such features are robust to brightness variations. However, extracting too many feature points increases the computational workload and slows processing, and if incorrect feature points are extracted, errors may occur during matching. Therefore, in feature-based image stitching methods, feature point extraction is the most crucial factor to consider [40]. Typical image stitching-based spatial information construction consists of five steps: (1) feature point detection and matching, (2) correspondence point matching, (3) estimation of the transformation model, (4) image blending, and (5) image generation. First, a feature point detection algorithm is used to detect common feature points among the input images. The detected feature points are then used to match corresponding points between images; correspondence points connect feature points in one image with feature points in another. Next, image transformation algorithms use the relative position, rotation, and scale information between images to estimate the transformation model based on the correspondence points. Through this process, multiple images are combined according to the transformation model to generate a single stitched spatial information image [41].
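A compact way to sketch this five-step pipeline is OpenCV's high-level Stitcher class, which internally performs feature detection and matching, transform estimation, warping, and blending. This is an illustrative stand-in, not the workflow used in the study; the input folder is a placeholder, and the SCANS mode is an assumption chosen here because facade images are approximately planar.

```python
import cv2
import glob

# Load the distortion-corrected, overlapping wall images (paths are placeholders).
paths = sorted(glob.glob("corrected/*.jpg"))
images = [cv2.imread(p) for p in paths]

# OpenCV's Stitcher bundles the feature-based pipeline described above:
# feature detection/matching, transform estimation, warping, and blending.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode suits near-planar facades
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("wall_stitched.jpg", panorama)
else:
    print(f"stitching failed with status code {status}")
```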
Generally, spatial information generated using image stitching has a resolution high enough to allow cracks to be identified. However, the resulting image does not include the geotags of the original images used in the stitching process. Geotags refer to the geographical location information contained in the images. During stitching, the images are transformed and combined to create a new image, and in this process the individual metadata of each image, including its geotags, is typically lost [42].
In this study, the spatial information layer based on the point cloud technique, which contains position information but in which cracks are difficult to identify, was merged with the spatial information layer based on the image stitching technique, which has a resolution high enough to identify cracks but lacks position information. By combining these layers via layer merging, we performed crack location estimation on the building's exterior. Additionally, to validate the accuracy of the crack location information estimated using the proposed methodology, we compared the estimated crack positions with crack position information obtained by field measurements.

3. Experimental Results

3.1. Data Acquisition

In this experiment, a laser pointer model was developed that can be attached to an unmanned aerial vehicle's camera sensor. Aerial photography was conducted on the exterior wall of a concrete building located in Buk-gu, Daegu, Republic of Korea. The exterior wall selected as the test subject has a feature that could be defined as a reference object, there are no surrounding obstacles to aerial photography, and parts of the exterior wall were judged to have insufficient feature points. The building was therefore selected as a suitable target for the methodology presented in this study, and the experiment was conducted on it. An overview of the target area where aerial photography was performed is shown in Figure 2.
The UAV and camera sensor used in this study are the Inspire 2 and Zenmuse X5S from DJI (Shenzhen, China). The upward infrared sensor can detect surrounding obstacles up to 5 m away, the downward vision system ranges from 0.1 to 5 m, and the forward vision system operates between 0.7 and 30 m. The field of view (FOV) covers a horizontal range of 60° and a vertical range of 54°. The camera lens has a focal length of 15 mm, and the 4/3″ CMOS image sensor measures 17.3 mm × 9.7 mm in the 16:9 mode. The 16:9 image resolution is 5280 × 2970, and the camera can capture photos with a maximum resolution of 20.8 MP. Additionally, the laser pointer model developed for mounting on the camera sensor was designed and manufactured using a 3D printer, as shown in Figure 3.
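For orientation, the camera parameters above imply an approximate pixel footprint on the wall at the 2 m stand-off distance used in the flights. The short calculation below is a hedged sketch based on the pinhole model; the effective resolution achieved in practice is not reported in the study.

```python
# Approximate pixel footprint on the wall (ground sampling distance, GSD)
# implied by the reported camera parameters and a 2 m shooting distance.
sensor_width_mm = 17.3   # 16:9 active sensor width
image_width_px = 5280
focal_length_mm = 15.0
distance_mm = 2000.0     # UAV-to-wall stand-off

gsd_mm_per_px = sensor_width_mm * distance_mm / (focal_length_mm * image_width_px)
print(f"GSD ~ {gsd_mm_per_px:.2f} mm/pixel")  # roughly 0.44 mm per pixel
```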
For the flight path, the exterior wall of the target building was planned in a single grid format, and the angle of the exterior wall of the building and the camera sensor were set vertically. The UAV shooting path planned in this study is shown in Figure 4.
The aerial photography was conducted on 6 May 2023, based on the flight plan prepared in advance. The photography was carried out using manual flight control, considering the positioning of the Laser Pointer model-equipped UAV in relation to the exterior wall of the building. During a total flight time of 10 min, 107 images were acquired with the following camera settings: ISO—100, Aperture—F/2, Shutter speed—1/120 s, and Shooting angle—0°. The distance between the UAV and the target building’s exterior wall was set to approximately 2 m, considering the projection and position estimation of the Laser Pointer. While obtaining aerial photographs with uniform image overlap using manual flight control can be challenging, a minimum of 65% image overlap was achieved among the acquired aerial photographs.
The analysis of the aerial images from the aerial photography revealed a clear identification of the cracks in the building and the Laser Point. Based on this, the study utilized the aerial images with the displayed Laser Point to construct spatial information using image processing techniques aimed at the localization of concrete cracks.

3.2. Construction of Analysis Data Using Image Processing Technique

In this study, aerial photography was performed using a UAV equipped with the laser pointer model to estimate the location of cracks on the exterior wall of a building without a reference object. To correct the distortion of the aerial images, four laser points identified in each raw aerial image were selected, and image correction was performed using a homography matrix. The four points and the corresponding points set for the homography transformation are shown in Figure 5.
Correspondence points were set for the distorted laser points of the raw image using the model design information, and image distortion correction was performed on the original aerial images using the homography matrix. Figure 6 shows the result of the image correction.
Next, feature point detection and matching were performed using the SIFT algorithm to confirm the usability of the distortion-corrected images, and the results are shown in Figure 7.
As a result of the experiment, it was confirmed that the image generated using the image distortion correction and feature point extraction method proposed in this study can be sufficiently utilized for image processing techniques.

3.3. Crack Localization

In this study, a background model of the target building was generated using the point cloud technique based on aerial images with recorded location information. The spatial information construction using the point cloud technique was carried out with the PIX4Dmapper software (PIX4D, Prilly, Switzerland). Figure 8 shows the spatial information construction process for the background model generation.
A total of 107 aerial images were used to build spatial information using the Point Cloud technique for creating a background model, and the location accuracy of the constructed spatial information was RMSE X = 0.65 m, RMSE Y = 0.33 m, and RMSE Z = 0.40 m.
Next, the spatial information of the target building was constructed using the image stitching technique to generate high-resolution images and acquire crack expression information. A feature-based stitching approach that utilizes the feature points of each image was applied, using the data set previously constructed through image distortion correction and feature point extraction. The spatial information of the target building created with the image stitching technique to acquire crack expression information is shown in Figure 9.
Next, the analysis data were constructed by merging the point-cloud-based background model, which contains the location information, with the stitched image data, which express the crack locations in high resolution. The merging of the spatial information layers was performed using AutoCAD 2023 (Autodesk, San Francisco, CA, USA) by matching the relative coordinates of the stitched image data, in which the crack locations are expressed, against coordinate values obtained from the orthomosaic of the background model. First, standardization was performed by adjusting the orthomosaic to a 1:1000 scale. Then, to overlay the orthomosaic and the stitched image data of the target building, the lower left corner of the building was set as the reference point (0, 0), and the relative distance from this reference point was converted into (x, y) coordinates to estimate relative location information. The superimposed analysis data built via layer merging are shown in Figure 10.
Using the analysis data established by merging the layers, four cracks were identified in the building at the target site, and their locations were estimated. To estimate the crack locations, the lower left end of the target building was set as the reference point (0, 0) in the constructed analysis data, and the crack locations were converted into relative distance coordinates (x, y) from this reference point. Specifically, for each crack we determined the horizontal coordinates of its start point (x1) and end point (x2) and the vertical coordinates of its start point (y1) and end point (y2), measured from the reference point (0, 0) of the building, and expressed the crack start and end points as (x1, y1) and (x2, y2) coordinates. The estimated relative locations of the concrete cracks based on the scale are shown in Table 1.
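The coordinate conversion described above can be sketched as follows. This is a hedged illustration under assumed values: the pixel positions of the reference point and crack endpoints, and the millimetre-per-pixel scale of the merged layer, are placeholders rather than values from this study.

```python
from typing import Tuple

def to_relative_mm(px: Tuple[float, float],
                   reference_px: Tuple[float, float],
                   mm_per_px: float) -> Tuple[float, float]:
    """Convert a pixel position in the merged layer to (x, y) millimetres
    measured from the building's lower-left reference point.

    Image rows grow downward, so the vertical axis is flipped.
    """
    dx_px = px[0] - reference_px[0]
    dy_px = reference_px[1] - px[1]
    return dx_px * mm_per_px, dy_px * mm_per_px

# Placeholder values: reference point and crack endpoints picked in the merged
# image, and an assumed scale of 5 mm per pixel.
reference = (120.0, 2480.0)
crack_start_px = (515.0, 2316.0)
crack_end_px = (734.0, 2320.0)

start_mm = to_relative_mm(crack_start_px, reference, mm_per_px=5.0)
end_mm = to_relative_mm(crack_end_px, reference, mm_per_px=5.0)
print(f"crack from {start_mm} to {end_mm} (mm)")
```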

3.4. Data Validation

In this section, the crack locations estimated through the experiments are verified; the experiments aimed to determine the position of concrete cracks when the facility has no feature points that can be defined as reference objects, or when feature points in the image are lacking. Field measurements were performed to verify the crack location information estimated based on the scale. Ground truth values were measured for a total of four cracks, and the crack location information acquired by measurement is shown in Table 2.
Finally, the accuracy of the estimated crack location information was analyzed to verify the applicability of the crack location estimation methodology presented in this experiment. The accuracy analysis compared the previously estimated crack location information with the values obtained from the field measurements. The comparison between the crack locations estimated in the study and the measured crack locations is shown in Table 3.
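A per-crack accuracy figure of this kind can be sketched as the root mean square of the coordinate errors. The snippet below is an illustration under that assumption (the paper does not spell out the exact RMSE formula), and the error values used are placeholders, not results from Table 3.

```python
import math
from typing import Sequence

def rmse(errors: Sequence[float]) -> float:
    """Root mean square of a set of coordinate errors (e.g., the x and y
    differences between estimated and measured crack positions, in mm)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Placeholder errors for one crack: difference between the estimated and
# field-measured relative positions along x and y, in millimetres.
dx_error, dy_error = 100.0, 20.0
print(f"per-crack RMSE: {rmse([dx_error, dy_error]):.2f} mm")  # 72.11 mm
```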

4. Discussion

In this study, a methodology was proposed for the localization of concrete cracks using unmanned aerial vehicle-based aerial images when reference objects and feature points are lacking within the images. The use of unmanned aerial vehicles for image-based safety inspection allows data to be obtained from hard-to-reach areas and minimizes the subjective intervention of inspectors, ensuring the objectivity of the investigation results. Moreover, it relaxes constraints on the timing and location of inspections, reducing the manpower and cost of conventional inspections and surveys. Because of these advantages, research utilizing unmanned aerial vehicles for safety inspections of buildings and other facilities has recently been actively conducted. However, current safety inspection studies, such as that of Deng et al. [9], focus only on the detection of cracks. Although there are previous studies related to the location of cracks, they also concentrated on crack detection and identification, and crack locations were expressed only roughly [43,44,45,46,47]. In addition, Woo et al. [23] studied the localization of cracks using an image processing technique based on a reference object, but that methodology cannot be used when the facility whose crack locations are to be estimated has no feature point that can be defined as a reference object. The methodology proposed in this study overcomes these limitations, and through experimentation and validation it has been confirmed that crack localization is possible even when reference objects and feature points are lacking.
In this study, crack localization was performed by applying unmanned aerial vehicles and image processing techniques, which have recently attracted attention in safety inspection. Compared with previous studies, this study is differentiated in that it provides a methodology that can localize cracks even when reference objects in the target building are insufficient or absent. In addition, by applying these emerging technologies, the utilization of unmanned aerial vehicles in safety inspection is expected to increase.
However, this study has the limitation that a separate laser pointer model had to be designed and attached to the unmanned aerial vehicle to supplement the missing reference objects and feature points.
Many researchers have conducted studies using unmanned aerial vehicles and deep learning to detect cracks. It is anticipated that the fusion of this study with crack detection technology in the future could make a significant contribution to safety inspections.

5. Conclusions

In this study, a new image-based concrete crack localization method was developed to address the limitations of the existing crack localization research that relied on reference objects in image processing techniques. The accuracy of crack localization in facilities lacking reference objects and feature points was verified by comparing the estimated positions with the measured ground truth values.
A total of 107 aerial images were acquired using an unmanned aerial vehicle equipped with a laser pointer model, and the analysis data were established through image correction and feature point extraction based on the homography matrix and the SIFT algorithm. Next, layer merging was performed between the spatial information constructed with the point cloud technique and the spatial information constructed with the image stitching technique, and the locations of cracks on the exterior of the building were estimated using the merged analysis data. Four cracks were defined in the experiment and localized. The estimated crack positions were compared with the ground truth values obtained from field measurements, revealing an RMSE ranging from 80.80 to 108.95 mm.
The concrete crack localization methodology proposed in this study was applied to a building with a typical box-shaped form, and it is judged to be readily applicable to buildings with such morphological characteristics. If future work addresses the localization of cracks in atypical concrete buildings, it should become possible to estimate crack locations regardless of the shape of the building.

Author Contributions

Conceptualization, S.-C.B., I.-H.K. and S.J.; methodology, S.-C.B., I.-H.K. and S.J.; software, S.-C.B.; validation, S.-C.B., I.-H.K. and S.J.; formal analysis, H.-J.W. and J.O.; investigation, S.-C.B.; resources, I.-H.K. and S.J.; data curation, S.-C.B. and J.O.; writing—original draft preparation, S.-C.B.; writing—review and editing, I.-H.K. and S.J.; visualization, H.-J.W. and J.O.; supervision, I.-H.K. and S.J.; project administration, S.-C.B., I.-H.K. and S.J.; funding acquisition, I.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. NRF-2021R1A6A1A03045185). This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. NRF-2022R1C1C1005963).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ali, L.; Alnajjar, F.; Jassmi, H.A.; Gocho, M.; Khan, W.; Serhani, M.A. Performance Evaluation of Deep CNN-Based Crack Detection and Localization Techniques for Concrete Structures. Sensors 2021, 21, 1688. [Google Scholar] [CrossRef]
  2. Ali, R.; Chuah, J.H.; Talip, M.S.A.; Mokhtar, N.; Shoaib, M.A. Structural Crack Detection using Deep Convolutional Neural Networks. Autom. Constr. 2022, 133, 103989. [Google Scholar] [CrossRef]
  3. Yao, Y.; Tung, S.E.; Glisic, B. Crack Detection and Characterization techniques—An Overview. Struct. Control Health Monit. 2014, 21, 1387–1413. [Google Scholar] [CrossRef]
  4. Kim, A.; Kim, D.; Byun, Y.; Lee, S. Crack Detection of Concrete Structure using Deep Learning and Image Processing Method in Geotechnical Engineering. J. Korean Geotech. Soc. 2018, 34, 145–154. [Google Scholar]
  5. Nam, W.; Jung, H.; Park, K.; Kim, C.; Kim, G. Development of Deep Learning-Based Damage Detection Prototype for Concrete Bridge Condition Evaluation. KSCE J. Civ. Environ. Eng. Res. 2022, 42, 107–116. [Google Scholar]
  6. Jeong, E.; Seo, J.; Wacker, J.P. UAV-Aided Bridge Inspection Protocol through Machine Learning with Improved Visibility Images. Expert Syst. Appl. 2022, 197, 116791. [Google Scholar] [CrossRef]
  7. Liu, Y.; Nie, X.; Fan, J.; Liu, X. Image-based Crack Assessment of Bridge Piers using Unmanned Aerial Vehicles and Three-dimensional Scene Reconstruction. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 511–529. [Google Scholar] [CrossRef]
  8. Cho, J.; Shin, H.; Ahn, Y.; Lee, S. Proposal of Regular Safety Inspection Process in the Apartment Housing using a Drone. Korea Inst. Ecol. Archit. Environ. 2019, 19, 121–127. [Google Scholar]
  9. Deng, J.; Singh, A.; Zhou, Y.; Lu, Y.; Lee, V.C. Review on Computer Vision-Based Crack Detection and Quantification Methodologies for Civil Structures. Constr. Build. Mater. 2022, 356, 129238. [Google Scholar] [CrossRef]
  10. Avendaño, J.C.; Leander, J.; Karoumi, R. Image based inspection of concrete cracks using UAV photography. In Bridge Safety, Maintenance, Management, Life-Cycle, Resilience and Sustainability; CRC Press: Boca Raton, FL, USA, 2022; pp. 1973–1978. [Google Scholar]
  11. Jung, S.; Lee, S.; Park, C.; Cho, S.; Yu, J. A Method for Detecting Concrete Cracks using Deep-Learning and Image Processing. J. Archit. Inst. Korea Struct. Constr. 2019, 35, 163–170. [Google Scholar]
  12. Yu, Y.; Samali, B.; Rashidi, M.; Mohammadi, M.; Nguyen, T.N.; Zhang, G. Vision-Based Concrete Crack Detection using a Hybrid Framework Considering Noise Effect. J. Build. Eng. 2022, 61, 105246. [Google Scholar] [CrossRef]
  13. Maslan, J.; Cicmanec, L. A System for the Automatic Detection and Evaluation of the Runway Surface Cracks Obtained by Unmanned Aerial Vehicle Imagery Using Deep Convolutional Neural Networks. Appl. Sci. 2023, 13, 6000. [Google Scholar] [CrossRef]
  14. Orinaitė, U.; Karaliūtė, V.; Pal, M.; Ragulskis, M. Detecting Underwater Concrete Cracks with Machine Learning: A Clear Vision of a Murky Problem. Appl. Sci. 2023, 13, 7335. [Google Scholar] [CrossRef]
  15. Liu, Z.; Cao, Y.; Wang, Y.; Wang, W. Computer Vision-Based Concrete Crack Detection using U-Net Fully Convolutional Networks. Autom. Constr. 2019, 104, 129–139. [Google Scholar] [CrossRef]
  16. Park, S.E.; Eem, S.; Jeon, H. Concrete Crack Detection and Quantification using Deep Learning and Structured Light. Constr. Build. Mater. 2020, 252, 119096. [Google Scholar] [CrossRef]
  17. Arbaoui, A.; Ouahabi, A.; Jacques, S.; Hamiane, M. Concrete Cracks Detection and Monitoring Using Deep Learning-Based Multiresolution Analysis. Electronics 2021, 10, 1772. [Google Scholar] [CrossRef]
  18. Jang, K.; Kim, N.; An, Y. Deep Learning–based Autonomous Concrete Crack Evaluation through Hybrid Image Scanning. Struct. Health Monit. 2019, 18, 1722–1737. [Google Scholar] [CrossRef]
  19. Kim, J.J.; Kim, A.-R.; Lee, S.-W. Artificial Neural Network-Based Automated Crack Detection and Analysis for the Inspection of Concrete Structures. Appl. Sci. 2020, 10, 8105. [Google Scholar] [CrossRef]
  20. Choi, D.; Bell, W.; Kim, D.; Kim, J. UAV-Driven Structural Crack Detection and Location Determination using Convolutional Neural Networks. Sensors 2021, 21, 2650. [Google Scholar] [CrossRef]
  21. Zoubir, H.; Rguig, M.; El Aroussi, M.; Chehri, A.; Saadane, R.; Jeon, G. Concrete Bridge Defects Identification and Localization Based on Classification Deep Convolutional Neural Networks and Transfer Learning. Remote Sens. 2022, 14, 4882. [Google Scholar] [CrossRef]
  22. Kim, H.; Sim, S.; Spencer, B.F. Automated Concrete Crack Evaluation using Stereo Vision with Two Different Focal Lengths. Autom. Constr. 2022, 135, 104136. [Google Scholar] [CrossRef]
  23. Woo, H.; Seo, D.; Kim, M.; Park, M.; Hong, W.; Baek, S. Localization of Cracks in Concrete Structures using an Unmanned Aerial Vehicle. Sensors 2022, 22, 6711. [Google Scholar] [CrossRef] [PubMed]
  24. Woo, H.; Hong, W.; Oh, J.; Baek, S. Defining Structural Cracks in Exterior Walls of Concrete Buildings using an Unmanned Aerial Vehicle. Drones 2023, 7, 149. [Google Scholar] [CrossRef]
  25. Jeong, D.; Lee, J.; Ju, Y. Photogrammetric Crack Detection Method in Building using Unmanned Aerial Vehicle. J. Archit. Inst. Korea Struct. Constr. 2019, 35, 11–19. [Google Scholar]
  26. Kim, B.; Cho, S. Automated Vision-Based Detection of Cracks on Concrete Surfaces using a Deep Learning Technique. Sensors 2018, 18, 3452. [Google Scholar] [CrossRef] [PubMed]
  27. Bang, S.; Kim, H.; Kim, H. UAV-Based Automatic Generation of High-Resolution Panorama at a Construction Site with a Focus on Preprocessing for Image Stitching. Autom. Constr. 2017, 84, 70–80. [Google Scholar] [CrossRef]
  28. Ayele, Y.Z.; Aliyari, M.; Griffiths, D.; Droguett, E.L. Automatic Crack Segmentation for UAV-Assisted Bridge Inspection. Energies 2020, 13, 6250. [Google Scholar] [CrossRef]
  29. Zhu, J.; Zhong, J.; Ma, T.; Huang, X.; Zhang, W.; Zhou, Y. Pavement Distress Detection using Convolutional Neural Networks with Images Captured Via UAV. Autom. Constr. 2022, 133, 103991. [Google Scholar] [CrossRef]
  30. Jiang, Y.; Han, S.; Bai, Y. Building and Infrastructure Defect Detection and Visualization using Drone and Deep Learning Technologies. J. Perform. Constr. Facil. 2021, 35, 04021092. [Google Scholar] [CrossRef]
  31. Kim, I.; Jeon, H.; Baek, S.; Hong, W.; Jung, H. Application of Crack Identification Techniques for an Aging Concrete Bridge Inspection using an Unmanned Aerial Vehicle. Sensors 2018, 18, 1881. [Google Scholar] [CrossRef]
  32. Juan, L.; Gwun, O. A Comparison of Sift, Pca-Sift and Surf. Int. J. Image Process. 2009, 3, 143–152. [Google Scholar]
  33. Ahn, H.; Rhee, S. Fast Image Stitching Based on Improved Surf Algorithm using Meaningful Features. KIPS Trans. Part B 2012, 19, 93–98. [Google Scholar] [CrossRef]
  34. Adel, E.; Elmogy, M.; Elbakry, H. Image Stitching System Based on ORB Feature Based Technique and Compensation Blending. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 55–62. [Google Scholar] [CrossRef]
  35. Tang, P.; Huber, D.; Akinci, B.; Lipman, R.; Lytle, A. Automatic Reconstruction of as-Built Building Information Models from Laser-Scanned Point Clouds: A Review of Related Techniques. Autom. Constr. 2010, 19, 829–843. [Google Scholar] [CrossRef]
  36. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ Photogrammetry: A Low-Cost, Effective Tool for Geoscience Applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef]
  37. Turner, D.; Lucieer, A.; Watson, C. An Automated Technique for Generating Georectified Mosaics from Ultra-High Resolution Unmanned Aerial Vehicle (UAV) Imagery, Based on Structure from Motion (SfM) Point Clouds. Remote Sens. 2012, 4, 1392–1410. [Google Scholar] [CrossRef]
  38. Zeybek, M.; Şanlıoğlu, İ. Point Cloud Filtering on UAV Based Point Cloud. Measurement 2019, 133, 99–111. [Google Scholar] [CrossRef]
  39. Im, J.; Lee, E.; Kim, H.; Kim, K. Images Grouping Technology Based on Camera Sensors for Efficient Stitching of Multiple Images. J. Broadcast Eng. 2017, 22, 713–723. [Google Scholar]
  40. Zhu, Z.; Fu, J.; Yang, J.; Zhang, X. Panoramic Image Stitching for Arbitrarily Shaped Tunnel Lining Inspection. Comput. Aided Civ. Infrastruct. Eng. 2016, 31, 936–953. [Google Scholar] [CrossRef]
  41. Chen, C.; Klette, R. Image Stitching—Comparisons and New Techniques. In Proceedings of the Computer Analysis of Images and Patterns: 8th International Conference, CAIP’99, Ljubljana, Slovenia, 1–3 September 1999; Proceedings 8. pp. 615–622. [Google Scholar]
  42. Zhao, Q.; Ma, Y.; Zhu, C.; Yao, C.; Feng, B.; Dai, F. Image Stitching Via Deep Homography Estimation. Neurocomputing 2021, 450, 219–229. [Google Scholar] [CrossRef]
  43. Goszczyńska, B.; Świt, G.; Trąmpczyński, W.; Krampikowska, A.; Tworzewska, J.; Tworzewski, P. Experimental Validation of Concrete Crack Identification and Location with Acoustic Emission Method. Arch. Civ. Mech. Eng. 2012, 12, 23–28. [Google Scholar] [CrossRef]
  44. Golding, V.P.; Gharineiat, Z.; Munawar, H.S.; Ullah, F. Crack Detection in Concrete Structures Using Deep Learning. Sustainability 2022, 14, 8117. [Google Scholar] [CrossRef]
  45. Yang, J.; Li, H.; Zou, J.; Jiang, S.; Li, R.; Liu, X. Concrete Crack Segmentation Based on UAV-Enabled Edge Computing. Neurocomputing 2022, 485, 233–241. [Google Scholar] [CrossRef]
  46. Zollini, S.; Alicandro, M.; Dominici, D.; Quaresima, R.; Giallonardo, M. UAV Photogrammetry for Concrete Bridge Inspection Using Object-Based Image Analysis (OBIA). Remote Sens. 2020, 12, 3180. [Google Scholar] [CrossRef]
  47. Li, H.-Y.; Huang, C.-Y.; Wang, C.-Y. Measurement of Cracks in Concrete Bridges by Using Unmanned Aerial Vehicles and Image Registration. Drones 2023, 7, 342. [Google Scholar] [CrossRef]
Figure 1. Flow chart for estimating the location of cracks in the exterior walls of buildings using unmanned aerial vehicles when reference objects are absent from the image: (a) UAV-based aerial photography; (b) distorted image correction using homography matrix; (c) feature point extraction using SIFT algorithm; (d) construction of spatial information using image processing techniques; (e) merging of spatial information layers based on point cloud technique and image stitching technique; (f) crack localization and data validation.
Figure 2. Overview of the target site.
Figure 3. Laser Pointer model and mounting on the camera sensor.
Figure 4. UAV shooting path plan.
Figure 5. Four points and corresponding points set for the homography matrix.
Figure 6. Result of image correction using homography matrix.
Figure 7. Result of feature point detection and matching using SIFT algorithm.
Figure 8. Process of spatial information construction using Point Cloud technique.
Figure 9. Result of spatial information construction using the image stitching technique.
Figure 10. Analysis data after merging spatial information layers.
Table 1. Relative locations of the concrete cracks based on the scale (location from start point to end point of cracks).
Classification | Relative Location (X1, Y1), (X2, Y2) (Unit: mm)
Crack 1 | (1974, 821), (3072, 798)
Crack 2 | (5585, 2005), (5553, 3873)
Crack 3 | (11,048, 5914), (11,302, 7246)
Crack 4 | (11,864, 7273), (12,512, 7219)
Table 2. Values of measured ground truth distance to cracks (location from start point to end point of cracks).
Classification | Ground Truth Values (Δx, Δy) (Unit: mm)
Crack 1 | (1032, 21)
Crack 2 | (30, −1699)
Crack 3 | (−238, −1212)
Crack 4 | (−609, 49)
Table 3. Comparison of estimated relative positions and measured relative position values (relative position (Δx, Δy), unit: mm).
Classification | Coordinate | Ground Truth | Estimate | Error | RMSE
Crack 1 | x | 1211 | 1098 | 111 | 80.80
Crack 1 | y | 40 | 23 | −7 |
Crack 2 | x | −20 | −32 | −12 | 104.29
Crack 2 | y | −1721 | −1868 | −113 |
Crack 3 | x | 231 | 254 | 23 | 101.72
Crack 3 | y | −1474 | −1332 | −119 |
Crack 4 | x | 494 | 648 | 94 | 108.95
Crack 4 | y | 49 | 54 | 10 |
