Article

Drone-Assisted Image Processing Scheme using Frame-Based Location Identification for Crack and Energy Loss Detection in Building Envelopes

1 Indoor Air Quality Research Center, Korea Institute of Civil Engineering and Building Technology, Goyang-Si 10223, Korea
2 Department of Aerospace and Software Engineering, Gyeongsang National University, 501 Jinjudaero, Jinju 52828, Korea
3 Department of AI Convergence Engineering, Gyeongsang National University, 501 Jinjudaero, Jinju 52828, Korea
* Author to whom correspondence should be addressed.
Energies 2021, 14(19), 6359; https://doi.org/10.3390/en14196359
Submission received: 27 July 2021 / Revised: 16 September 2021 / Accepted: 28 September 2021 / Published: 5 October 2021
(This article belongs to the Special Issue Energy-Saving, Comfort, and Healthier Strategies for Smart Buildings)

Abstract

This paper presents improved methods to detect cracks and thermal leakage in building envelopes using unmanned aerial vehicles (UAVs), i.e., drones, equipped with video camcorders and/or infrared cameras. Three widely used contour detectors, the Sobel, Laplacian, and Canny algorithms, were compared to find a solution with high accuracy and low computational overhead. Furthermore, a frame-based location identification scheme was developed to extend the existing approach by finding the current location of each drone-captured image frame. The resulting drone-assisted scheme is simpler and more automated, achieves higher accuracy and speed, and uses less battery energy. Furthermore, this paper found that a cost-effective drone with attached equipment generated accurate results, making an expensive drone unnecessary. The new scheme will contribute to automated anomaly detection, energy auditing, and commissioning for sustainably built environments.

1. Introduction

The use of unmanned aerial systems (UASs, also called drones) has been growing because they can be useful in achieving the project goals of sustainably built environments. Drones can quickly and precisely perform their missions with low operational costs and safety risks, particularly when they are used with video recording and photography [1]. As built environments age, drones play a significant role in detecting structural damage and thermal energy leakage in building envelopes such as walls, windows, and roofs. For example, the Korean Ministry of Land, Infrastructure and Transport reports that approximately 36% of Korea's infrastructure was built more than 30 years ago [2].
Researchers began to actively exploit drones to identify damage through image processing [3,4,5,6,7]. In the field of crack inspection in concrete walls, researchers worked mostly on increasing the accuracy of contour detection by exploiting different schemes with complex pre-processing of the captured image. Choi and Kim [3] suggested a drone-assisted scheme that allowed users to modify a threshold to adjust an image size through an image acquisition system. They used the Canny edge detection algorithm to find cracks inside or outside of a building. Noh et al. [4] suggested a drone-assisted image processing method to find cracks larger than 0.3 mm on the surface of a bridge. They segmented an image with fuzzy c-means clustering and removed noise through mask filtering with three different sizes. Their approach was important because images taken from a drone are typically not close to the concrete surface, and it enhanced the accuracy of detection from drone-assisted images. Dixit and Wagatsuma [5] used morphological component analysis on a manually acquired image of a concrete bridge to identify the texture features. They used the dual-tree complex wavelet transform and anisotropic diffusion to remove noise from the image. Then, they used the Sobel edge detector to find the fine cracks. Their results showed that anisotropic diffusion outperformed the dual-tree complex wavelet transform. These results were also important because coarse images taken from a drone can be accurately analyzed through this approach.
In addition, Seo et al. [6] suggested a drone-enabled methodology and application for bridge inspection. They developed a five-stage methodology based on an extensive literature review and demonstrated their efficient and cost-effective approach with a field investigation. Their results showed that the drone-enabled methodology can identify various damage types, such as cracks, spalling, corrosion, and moisture, on different materials including concrete, steel, and timber, using photogrammetric software and visual inspection. Morgenthal et al. [7] also presented a framework for an automated drone system to inspect large bridges. The drone's modern cameras generated high-resolution image data of the bridge surface, and intelligent flight planning was developed to account for the quality of the captured images. Using photogrammetry and machine learning, typical damage patterns were identified.
These previous studies used images from drones to identify anomalous damage such as cracks, and several contributed to an automated detection process or scheme. However, none of them included an automated image location process or compared different detection methods for anomalous damage.
In addition, studies in fields such as biology [8] and geology [9] have covered the use of drones with infrared cameras. Moreover, the construction industry needs to increase the use of infrared drones because of advantages such as thermal pattern analysis and 3D photogrammetry modeling [1,10]. Traditionally, infrared thermography has been widely used in building energy audits [11,12], in qualitative (i.e., walk-through audit) or qualitative/quantitative (i.e., standard and simulation audit) approaches [11]. In particular, drone-assisted infrared thermography is helpful for quantifying heat energy losses in the building envelope because it provides reliable and fast inspection of large areas. Infrared drones can be useful for the quantitative approach as well as the qualitative approach when the spatial resolution determined by the drone's flight distance and supplementary sensor data on weather conditions are considered [12,13,14,15].
Recently, Rakha and Gorodetsky reviewed drone-assisted applications in thermography and 3D photogrammetry for analyzing building performance [1]. They found that infrared images captured by a drone can significantly improve traditional energy auditing methods. Their case study showed that infrared drones can provide useful images of thermal leakage; however, several conditions should be carefully controlled during the pre-flight, in-flight, and post-flight steps for better inspection. Entrop and Vasenev developed a protocol for building thermography research based on a literature study and several test flights [10]. Thermal leakage in the building envelope and a photovoltaic panel on the roof were investigated using the protocol. From the multiple test flights, they found that the distance between the drone and the building, the velocity of the drone, and the flight paths should be carefully adjusted to the research conditions. They also found that inside and outside temperatures, wind, and precipitation influence the results. Ellenberg et al. showed that infrared drones can detect delamination in bridge decks from thermography [16]. They developed a post-processing algorithm using the Canny edge detector combined with the Hough transform and suggested a method to identify the location of delamination; however, many images were required to find the location. They concluded that their method provided rapid screening but should be supported by other refined methods such as ground inspection. Infrared drones can also be used for the calibration of urban-scale microclimate models [17]. Fabbri and Costanzo proposed a novel calibration approach using measurements of urban-scale surface temperatures from drone-assisted infrared images, comparing the measured surface temperatures with simulated surface temperatures from ENVI-met.
In summary, although the previous studies showed rapid and improved drone-assisted approaches for their research purposes using experimental data, they did not provide a fully automated, easy-to-use procedure based on drone imagery that includes automated location identification for anomaly detection in building envelopes. In addition, most of their approaches were not simplified and thus required additional and/or manual steps. Furthermore, the drones (e.g., DJI Inspire 1, DJI Phantom 2 and 4, etc.) and cameras (e.g., GoPro 4, 1080P HD/12-megapixel camera, Sony Alpha 7R, etc.) used in the previous studies were not cost-effective, and the costs were not detailed. Battery usage and computational loads during flight were also not studied. Finally, they did not compare different detection methods for anomalous damage.
Therefore, in this paper, an automated drone-assisted image processing scheme was developed to probe a building envelope using a cost-effective drone and attached equipment. The following objectives were achieved: (1) the most battery-efficient routing path was determined by considering battery usage with respect to flight direction; (2) three different contour detectors (i.e., Sobel [18], Laplacian [19,20], and Canny [21]) were compared to find an accurate scheme with low computational overhead; and (3) the relative position of each frame and image was identified using the FPS (frames per second) and the angle of view of the drone's camcorder.
An overview of the developed scheme is presented in the second section, which also introduces the drone developed for this paper, the most battery-efficient routing path, the frame-based location identification, and the three contour detectors. In the third section, the results from the developed scheme are discussed. In Section 4, the cost of the drone with the attached equipment is summarized. Section 5 presents the discussion, and Section 6 concludes this paper.

2. Methods

This section includes a brief description of the drone used for this study and the overall procedure of the drone-assisted image scheme. The proposed framework is illustrated in Figure 1. Its components are the ground control system, the drone with camera, the wall inspection program, and the report generator. The ground control system manages the routes of the drone. Once the route information is fed into the drone, it flies along the wall and records video. The video feed is used as input to the inspection program, which generates a report including the image and location of each crack.

2.1. Hardware Design

In this paper, we chose the DJI F450 quadcopter as the base frame with four 920 KV motors because of its small form factor and reliability. The CATIA software (CATIA V5, Dassault Systèmes, Vélizy-Villacoublay, France) [22] was used to create 3D models of the drone as well as the propeller guards and landing gear (Figure 2). The propeller guards and landing gear were created using a 3D printer to protect the four propellers on the drone arms and the battery and camcorder located at the center of the drone, respectively. Finally, the drone equipped with the battery and camcorder was produced with the propeller guards and landing gear, as shown in Figure 3. The camcorder's horizontal and vertical angles of view were 170° and 60°, respectively, and it had a video resolution of 2.7K at 30 FPS. In addition, a FLIR thermal camera was mounted on the drone. The hardware components are described in Section 4.
Since the drone used in the experiment was custom made, its reliability and safety had to be analyzed. The total load of the drone was 1.497 kg, and the calculated thrust generated by the four motors could lift up to 3.8 kg. Another important analysis was the stress and deformation of the frame, because the thrust deforms the arms of the drone. We used SAMCEF [23] to analyze the deformation of the four arms and the stress at the center when the drone was in flight (Figure 4). The simulation results showed that the maximum deformation of the four arms was 0.39 mm and the maximum stress at the center was 0.95 MPa, while the arm could withstand a stress of 270 MPa. The deformation and stress results assured the safety of the drone. In addition, the stability of the drone in flight was tested: the drone returned to a hovering state within one second after the roll/pitch was maximized. The cost of developing the drone used in this study is also identified in Section 4, which shows that the total cost was significantly lower than in previous studies.
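As a quick sanity check (a sketch, not from the paper), the lift margin implied by the reported load and thrust figures can be computed directly:

```python
# Hypothetical helper reproducing the lift-margin arithmetic from the
# reported figures: 1.497 kg total load vs. 3.8 kg of maximum thrust.

def thrust_to_weight_ratio(max_thrust_kg, total_load_kg):
    """Ratio of maximum available thrust to total takeoff weight."""
    return max_thrust_kg / total_load_kg

ratio = thrust_to_weight_ratio(3.8, 1.497)
# A ratio of roughly 2.5 comfortably exceeds the common 2:1 rule of
# thumb for stable multirotor flight.
```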

2.2. Software Components

Video captured by the drone was transferred to a ground station PC with an Intel CPU, 4 GB DRAM, and Windows 10. The anomaly detection system was built on Visual C++. An XBee Explorer dongle was used for communication with the drone. We used APM as the flight controller, connected to the ground control system through MAVLink [24].
The camera module captured video of the wall, and a Raspberry Pi transferred the images to the ground station. The ground station PC ran the automated framework to process the images and generate the report. The average run time to process the images and generate the report was about 0.7 s. The report lists the coordinates of identified cracks along with the captured images of cracks and thermal leakage for further investigation.
Figure 5 illustrates the automated framework for anomaly contour detection. First, once the video data of the building envelope were acquired, the captured frames were pre-processed to remove noise using Gaussian blur and binarization and then processed by the contour detection algorithms. The contour detector then identified cracks on a wall and/or window. Since not all contours are cracks, based on the guideline provided by the Korea Land and Housing Corporation, contours on a concrete wall with a width larger than 0.3 mm were identified as cracks. For infrared thermal images, the highest-temperature contours were considered thermal leakage, under the assumptions that the weather conditions during the research did not introduce noticeable bias in detecting thermal leakage by the highest temperature and that the building surface temperature was not significantly affected by the wall structure.
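The crack-width filtering step described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's Visual C++ implementation; the function name and the mm-per-pixel scale are assumptions:

```python
# Hypothetical post-filtering step after contour detection: keep only
# contours whose physical width exceeds the 0.3 mm guideline. The
# function name and the mm-per-pixel scale are illustrative, not from
# the paper.

def filter_cracks(contour_widths_px, mm_per_px, min_width_mm=0.3):
    """Keep contour widths (in pixels) that map to >= min_width_mm."""
    return [w for w in contour_widths_px if w * mm_per_px >= min_width_mm]

# At an assumed scale of 0.1 mm per pixel, widths of 2, 4, and 12 px
# correspond to 0.2, 0.4, and 1.2 mm; the first is rejected as noise.
kept = filter_cracks([2, 4, 12], mm_per_px=0.1)
```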
Second, a frame-based location identification for the contours was developed. Using the images with cracks and/or thermal leakage, a relative position of the frame captured via video frame rate was identified. Then, the location information was embedded on to the image.
The minimum and maximum values for hysteresis thresholding in Canny were heuristically set to 120 and 350, respectively. The thresholds and kernel sizes of the Sobel and Laplacian algorithms were also determined heuristically (thresholds of 70 and 150 and kernel sizes of 3 and 5, respectively).
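A minimal sketch of the hysteresis-thresholding idea behind these Canny parameters is shown below. It operates on a 2D gradient-magnitude grid and is a simplified stand-in for the full Canny pipeline, not the authors' implementation:

```python
from collections import deque

# Canny-style hysteresis thresholding with the paper's values
# (low = 120, high = 350): magnitudes above `high` are strong edges,
# and magnitudes above `low` are kept only if 8-connected to one.

def hysteresis(mag, low=120, high=350):
    rows, cols = len(mag), len(mag[0])
    keep = [[False] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if mag[r][c] >= high:          # strong edge: always kept
                keep[r][c] = True
                queue.append((r, c))
    while queue:                            # grow into weak neighbors
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and not keep[rr][cc] and mag[rr][cc] >= low):
                    keep[rr][cc] = True
                    queue.append((rr, cc))
    return keep
```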

2.3. Contour Detection for Efficient Battery Utilization

Battery power is one of the most important factors in operating a drone for any mission. However, to the best of our knowledge, no previous studies have considered battery usage in executing a flight plan. The average flight time given by the manufacturer is based on the drone hovering in a fixed position, i.e., the idle use case. Since the building wall is a two-dimensional surface, there are four ways to exhaustively and completely inspect it: flying the drone (1) horizontally, (2) vertically, (3) diagonally, or (4) randomly. The battery drains at different rates depending on how the drone is operated. In wall crack inspection, it is inevitable that a drone has to travel upwards; however, thrust and acceleration are the two motions that drain the battery most quickly. Thus, it is critical to minimize thrust and acceleration while navigating and inspecting walls. To make the drone travel its maximum distance with the lowest possible battery usage, we developed the flight plan in Figure 6, which minimizes prolonged upward thrust.
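The horizontal-first flight plan of Figure 6 can be sketched as a waypoint generator. The function below is illustrative (the name and frame-center convention are assumptions); it alternates long horizontal sweeps with short climbs of one frame height:

```python
# Sketch of a horizontal-first (boustrophedon) flight plan in the
# spirit of Figure 6. W, H are the wall dimensions and w, h the wall
# area visible in one frame (all in meters); waypoints are the frame
# centers of each sweep.

def flight_waypoints(W, H, w, h):
    """Waypoints starting at the lower-left frame center (w/2, h/2)."""
    waypoints = []
    z = h / 2
    left, right = w / 2, W - w / 2
    going_right = True
    while z <= H - h / 2 + 1e-9:
        x0, x1 = (left, right) if going_right else (right, left)
        waypoints += [(x0, z), (x1, z)]    # one long horizontal sweep
        z += h                              # short climb between sweeps
        going_right = not going_right
    return waypoints
```

For a 10 m × 4 m wall with a 4 m × 2 m frame, this yields two sweeps joined by a single short climb, so upward thrust is never prolonged.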

2.4. Frame-Based Location Identification

Once a frame with a detected anomaly is identified, users need to know where this frame was taken in relative coordinates, i.e., (x, z). We set the base point (0, 0) at the lower-left corner of the building, as shown in Figure 6. To calculate the position of a frame, we consider the distance moved per frame and the x-axis coordinate with respect to odd and even orders of the routing turn. It was assumed that the drone flies at a constant speed, 1 m perpendicular to the wall, and that the dimensions of the building (W, H) are given.
The horizontal and vertical lengths are given by Equations (1) and (2), respectively.
w = (p_x / PPI) × 0.0254    (1)
h = (p_z / PPI) × 0.0254    (2)
Here, PPI (pixels per inch) is used to measure the dimensions of the wall visible in a frame. The numbers of pixels along the x- and z-axes are denoted p_x and p_z, respectively, and the factor 0.0254 converts inches to meters.
The dimension of the wall visible in a frame is (w, h), and the starting point of the video is (0.5w, 0.5h). The moving distances along the x- and z-axes are (W − w) and h, respectively. Thus, in a single turn, the drone moves (W − w) m along the x-axis and then h m upward along the z-axis.
The distances moved per frame along the x- and z-axes, denoted D_x and D_z, are obtained by dividing the moving distance by the total number of frames, as in Equations (3) and (4), respectively.
D_x = (W − w) / F_x    (3)
D_z = h / F_z    (4)
Here, the total numbers of frames along the x- and z-axes are denoted F_x and F_z, respectively, each being the product of the FPS and the number of seconds traveled in that direction.
To calculate the position from the frame number, the direction of motion must be considered, since the drone travels back and forth along the x-axis as shown in Figure 6. The number of turns, T, is counted to determine whether a frame falls within an even or odd turn, as in Equation (5).
T = ⌊N / (F_x + F_z)⌋ + 1    (5)
Here, N denotes the current frame number.
C denotes the number of frames within a turn, as in Equation (6).
C = N mod (F_x + F_z)    (6)
Thus, if T mod 2 = 0, the order of the turn is even and Equation (7) is used; if T mod 2 = 1, the order of the turn is odd and Equation (8) is used. z can be identified regardless of the order of the turn, using Equation (9).
x = D_x × (F_x − C)    (7)
x = D_x × C    (8)
z = h × T + D_z    (9)
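The location computation of Equations (1)–(9) can be expressed as a short function. The sketch below follows the text's symbols (W, w, h, F_x, F_z, N) and implements Equation (9) exactly as printed; the function signatures themselves are illustrative, not from the paper:

```python
# Runnable sketch of the frame-based location identification.

def frame_extent(p_x, p_z, ppi):
    """Eqs. (1)-(2): wall area (w, h) visible in one frame, in meters."""
    return p_x / ppi * 0.0254, p_z / ppi * 0.0254

def frame_position(N, F_x, F_z, W, w, h):
    """Estimated (x, z) coordinates of frame number N."""
    D_x = (W - w) / F_x        # Eq. (3): x-distance moved per frame
    D_z = h / F_z              # Eq. (4): z-distance moved per frame
    T = N // (F_x + F_z) + 1   # Eq. (5): current turn number
    C = N % (F_x + F_z)        # Eq. (6): frame index within the turn
    if T % 2 == 0:
        x = D_x * (F_x - C)    # Eq. (7): even turn (return sweep)
    else:
        x = D_x * C            # Eq. (8): odd turn (forward sweep)
    z = h * T + D_z            # Eq. (9), as printed in the text
    return x, z
```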

2.5. Contour Detector Methods

The Sobel [5,18], Laplacian [19,20], and Canny [21] contour detectors were compared using crack images from the drone. In addition, the Canny contour detector was applied to infrared thermal images of the building envelopes. The three contour detectors were built on Visual C++, and their results are shown in Section 3.3. For completeness, we summarize the concepts of each algorithm below; details can be found in [5,19,21].
The Sobel operator can smooth random noise in an image using an averaging factor and can enhance edge elements so that they appear bright and thick. It uses an orthogonal gradient operator and a first-order differential operator, convolving the image in the horizontal and vertical directions with a small, separable, integer-valued filter. The orthogonal gradient components can be calculated using Equations (10) and (11).
S_x = {f(x+1, y−1) + 2f(x+1, y) + f(x+1, y+1)} − {f(x−1, y−1) + 2f(x−1, y) + f(x−1, y+1)}    (10)
S_y = {f(x−1, y+1) + 2f(x, y+1) + f(x+1, y+1)} − {f(x−1, y−1) + 2f(x, y−1) + f(x+1, y−1)}    (11)
At position (x, y), the pixel value of an image can be represented by a continuous function f(x, y). The gradient of this function can be expressed as a vector, as in Equation (12).
∇f(x, y) = [S_x, S_y]^T = [∂f/∂x, ∂f/∂y]^T    (12)
The magnitude of the gradient vector can be expressed using Equation (13).
mag(∇f) = |∇f| = [S_x² + S_y²]^(1/2)    (13)
For a digital image, Equation (13) can be simplified using Equations (14) and (15), where ∅ is the directional angle between the vectors S_x and S_y.
mag(∇f) ≈ |S_x| + |S_y|    (14)
∅(x, y) = arctan(S_x / S_y)    (15)
The partial derivatives are calculated at each pixel location. Using the gradient operator, S_x and S_y are combined into convolution templates. To conduct the convolution, two kernels (templates) are applied at every point: one kernel has a maximum response to vertical edges, and the other has a maximum response to horizontal edges. The output point uses the maximum value of the two convolutions, and the edge amplitude image is created. The convolution is conducted using Equations (16)–(18).
g_1(x, y) = Σ_{k=−1}^{1} Σ_{l=−1}^{1} S_1(k, l) f(x + k, y + l)    (16)
g_2(x, y) = Σ_{k=−1}^{1} Σ_{l=−1}^{1} S_2(k, l) f(x + k, y + l)    (17)
g(x, y) = [g_1²(x, y) + g_2²(x, y)]^(1/2)    (18)
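The Sobel computation of Equations (10), (11), and (16)–(18) can be reproduced in a few lines of pure Python. The kernels below encode the weights of Equations (10) and (11); the indexing convention (img[x][y], row x, column y) is an assumption for illustration:

```python
import math

# Pure-Python Sobel response at a single pixel, following Eqs. (10),
# (11), and (16)-(18). `img` is a 2D list of intensities.

S1 = [[-1, -2, -1],   # k = -1 row: subtracted terms of Eq. (10)
      [ 0,  0,  0],
      [ 1,  2,  1]]   # k = +1 row: added terms of Eq. (10)
S2 = [[-1,  0,  1],
      [-2,  0,  2],   # l = -1 / +1 columns: Eq. (11)
      [-1,  0,  1]]

def convolve_at(img, x, y, kernel):
    """Eqs. (16)-(17): 3x3 convolution of img with kernel at (x, y)."""
    return sum(kernel[k + 1][l + 1] * img[x + k][y + l]
               for k in (-1, 0, 1) for l in (-1, 0, 1))

def sobel_magnitude(img, x, y):
    """Eq. (18): gradient magnitude from the two kernel responses."""
    g1 = convolve_at(img, x, y, S1)
    g2 = convolve_at(img, x, y, S2)
    return math.hypot(g1, g2)
```

For a horizontal step edge (a uniform bright bottom row), the S1 kernel responds strongly while the S2 response is zero, as expected.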
The Laplacian operator is a second-order differential operator, defined in n-dimensional Euclidean space as the divergence (∇·) of the gradient (∇f). Because it is a second-derivative operator, it requires a more careful approach to noise; scattered, broken edge pixels can appear in the results. To reduce these low-quality pixels, a low-pass filter is essential before Laplacian edge detection, and the Gaussian low-pass filter has proven effective for image denoising [25]. This approach is called the Laplacian of Gaussian (LoG) operator, given in Equation (19), where G_σ(x, y) is a Gaussian kernel function with standard deviation σ.
Δ(G_σ ∗ I) = [∂²G_σ(x, y)/∂x² + ∂²G_σ(x, y)/∂y²] ∗ I(x, y)    (19)
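The LoG operator of Equation (19) has a standard closed form that can be evaluated directly; the sketch below is illustrative and not from the paper. Its zero crossings, which occur at a radius of σ√2, are what the subsequent edge detection looks for:

```python
import math

# Closed-form Laplacian-of-Gaussian (LoG) kernel value, the filter
# behind Eq. (19); this standard formula combines Gaussian smoothing
# and the Laplacian into a single operator.

def log_kernel(x, y, sigma):
    """LoG value at offset (x, y) for standard deviation sigma."""
    r2 = x * x + y * y
    return (-1.0 / (math.pi * sigma ** 4)
            * (1 - r2 / (2 * sigma ** 2))
            * math.exp(-r2 / (2 * sigma ** 2)))

# The kernel is negative at the center, radially symmetric, and
# crosses zero at radius sigma * sqrt(2): edges appear as zero
# crossings of the filtered image.
```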
The Canny operator is based on three standards: the signal-to-noise standard, the location accuracy standard, and the monolateral response standard. First, the image is denoised using a Gaussian function. Second, the maxima of the first differential determine the edge points closest to the real edge. Finally, both the maximum and minimum values (i.e., strong and weak edges) of the first differential are matched with the zero-cross points of the second differential to strongly suppress responses from unreal edges. Therefore, the Canny operator can effectively avoid noise. The three standards are given in Equations (20)–(22):
Signal-to-noise (SNR) standard
SNR = |∫_{−ω}^{ω} G(−x) h(x) dx| / (σ [∫_{−ω}^{ω} h²(x) dx]^(1/2))    (20)
where G(x) is the input image, h(x) is the impulse response of a filter of width ω, and σ is the unbiased variance of the Gaussian noise.
Location accuracy standard
L = |∫_{−ω}^{ω} G′(−x) h′(x) dx| / (σ [∫_{−ω}^{ω} h′²(x) dx]^(1/2))    (21)
where L quantitatively describes the accuracy of the edge detection; a larger value means better accuracy.
Monolateral response standard
D(f′) = 2π [∫_{−∞}^{+∞} h′²(x) dx / ∫_{−∞}^{+∞} h″²(x) dx]^(1/2)    (22)
where D(f′), the mean distance between zero-cross points of the derivative of the operator's impulse response, should be as large as possible so that a single edge yields only one response.

3. Results

In this section, the results for the efficient routing path, the frame-based location identification, and the three contour detectors are described and discussed.

3.1. Battery Utilization

The drone used a 14.8 V (5200 mAh) lithium polymer battery and was able to fly for 17 min on average. The drone flew perpendicular to the wall at a distance of 1 m, capturing a 4 m × 2 m area per frame. Distances of 3 m, 5 m, and 7 m were also tested, as shown in Figure 7. We considered the resolution and distortion of the captured image when deciding the drone's distance from the wall. Note that the resolution and distortion of captured images depend primarily on the specification of the camera and its lens, so the distance must be decided with respect to that specification. For this study, the images from the 1 m distance were used. The maximum distance for accurate images with few errors was 5 m; the 7 m distance did not provide a reliable image because the camera resolution is fixed while the captured area grows, so a larger wall area must be compressed into each pixel.
The flight plan shown in Figure 6 was used to designate the waypoints of the drone. The battery power was measured after round-trip flights of 10 m along vertical and horizontal routes; the total round-trip distance in each direction was 100 m. The tests were repeated several times to obtain the average power dissipation in each direction. Figure 8 shows that the power dissipation of the vertical route was greater than that of the horizontal route. On the vertical route, the voltage decreased by 0.36 V on average, about 2.9% of the total; on the horizontal route, it decreased by only 0.11 V on average, about 0.8% of the total. The horizontal movement was thus at least 3.3 times more efficient than the vertical movement. A simple linear regression showed that vertically and horizontally oriented flights could travel about 4.1 km and 13.5 km, respectively. This travel distance could cover the inspection of three 40-story apartment buildings or 140 two-story houses, assuming average distances between the apartment buildings and between the houses of 50 m and 3 m, respectively.
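The efficiency comparison above reduces to simple arithmetic, sketched here with the reported averages (0.36 V and 0.11 V drop per 100 m round trip):

```python
# Reproducing the horizontal-vs-vertical comparison from the measured
# averages: 0.36 V drop per 100 m vertical round trip vs. 0.11 V per
# 100 m horizontal round trip.

def efficiency_ratio(vertical_drop_v, horizontal_drop_v):
    """How many times farther the horizontal route goes per volt."""
    return vertical_drop_v / horizontal_drop_v

ratio = efficiency_ratio(0.36, 0.11)  # about 3.3, matching the text
```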

3.2. Accuracy of Frame-Based Location Identification

The accuracy of the frame-based position was tested (Figure 9). The drone started flying at coordinates (2.5, 0), flew 4 m up to (2.5, 4), and moved 5 m to the right to (7.5, 4). Then, the drone flew up 1 m, reaching the final point (7.5, 5). The actual coordinates were compared against the coordinates estimated via FPS at each turn. The average error rates of the frame-based position on the x- and z-axes were 1.3% and 0.17%, respectively. It was found that maintaining a constant speed was important for calculating an accurate position.

3.3. Contour Detector for Crack Detection

The Sobel [5,18], Laplacian [19,20], and Canny [21] contour detectors were compared to find the best approach for inspecting anomalous damage in building envelopes with low computational overhead. The criteria for contour detector accuracy were low false negative and false positive rates. Although many tests were conducted, two images were selected for this study.
Figure 10 shows the results of the three contour detectors. The building used for the crack detection was built in 2000 within Gyeongsang National University, Jinju, South Korea; the purpose of this two-story building is to accommodate and incubate startup companies of various sizes. Possible causes of the cracks on the building surface [26] include irregular stress from long-term overload [27], deformation and corrosion from weather conditions [28], decreased bearing capacity [29], and damage from earthquakes [30]. The dimensions of the cracks numbered in Input Image 1 of Figure 10 were as follows: (1, 4, and 5) circular, with a radius of 3 to 5 mm; (2) 20 cm long and 1 cm wide; and (3) triangular, with a base of 2 cm and a height of 2 cm. The dimensions (length and width) of the cracks shown in Input Image 2 of Figure 11 were as follows: (1) 11 cm and 4 mm; (2) 9 cm and 8 mm; (3) 15 cm and 10 mm; (4) 3 cm and 6 mm; (5) 12 cm and 3 mm; and (6) 7 cm and 3 mm.
In the case of the Sobel detector, the detection results were not reliable when the texture of the wall was not smooth (see Figure 11). Sobel required substantial preprocessing to reduce noise and, as a result, showed a high false positive rate. The Laplacian detector also showed a higher false positive rate in Figure 10. In contrast, the Canny detector successfully identified all the edges on the wall regardless of the texture or the dimensions of the cracks.
Figure 12 shows the results of the three contour detectors adopted in the proposed crack inspection program. In the automated scheme, the dimension of a crack had to be larger than 0.3 mm × 0.3 mm to be counted as a crack. An image of 627 × 239 pixels (46.4 KB) was used to measure the run time of the executed algorithms. Since only Canny found the cracks under the given criteria, low-frequency filters and binarization were added for a fair comparison of the performance of the anomaly detection system (see Figure 12).
The false negative rates for Sobel, Laplacian, and Canny were 0.5, 0.5, and 0, respectively. The average run times over 10 runs of the Sobel, Laplacian, and Canny contour detectors were 20.3 ms, 40.1 ms, and 11.4 ms, respectively. In terms of low computational overhead and accuracy, the Canny contour detector was selected for the anomaly detection system.

3.4. Thermal Leakage Detection

In addition, the Canny detector was used to analyze thermal images captured with an infrared camera. The aim of applying the Canny detector to thermal images was to provide an easy solution for locating leakage in the built environment due to cold bridges, missing insulation, moisture ingress, etc. More specifically, we were interested in quickly finding the areas where the thermal leakage was highest. The building used for the thermal leakage study was a two-story office building built in the 1980s in Boise, Idaho, USA. A particular aspect of this building is that its windows were replaced with new, lower-SHGC (solar heat gain coefficient) windows in 2015 [31]. Despite the window retrofit, the thermal images show high thermal transmittance through the window frames.
For this experiment, both the camcorder and the infrared camera were mounted on the drone, so the automated scheme could be applied with the frame-based location identification. Figure 13 shows the results from the Canny detector, which effectively detected the highest temperatures (i.e., thermal leakage through the window frames) in the images. The images were taken when the outside temperature was −4.0 °C and the sky was cloudy, conditions favorable for thermal imaging.

4. Cost Analysis

This paper proposes a cost-effective custom drone to inspect cracks and thermal leakage on buildings autonomously. The cost-effectiveness can be compared using total cost of ownership (TCO) analysis, which is the sum of capital expenditure (CAPEX) and operational expenditure (OPEX). CAPEX includes the cost of the drone, camera, and battery. OPEX contains the cost of assembling the drone, replacing the parts, maneuvering the drone, processing the acquired image, and depreciation of the drone. TCO analysis justifies the need for a custom drone with crack and thermal leakage inspection capabilities.
To compare TCOs of different drones, we chose mid- and high-price-range custom drones to match the prices of consumer drones (i.e., the DJI Phantom 4 Pro and Inspire 2). We chose a GoPro 4 as the camera for the mid-price custom drone and a Sony A7R II with a 24–70 mm lens for the high-price custom drone. For the consumer drones, we chose a Zenmuse X5S as the camera for the Inspire 2; the Phantom 4 Pro uses a built-in camera. Table 1 summarizes the breakdown of the cost structure of the custom drone proposed in this paper. The motors with electronic speed controllers (ESCs) and the Pixhawk platform accounted for the first (57.2%) and second (11.8%) largest shares of the equipment cost, respectively. Note that we excluded the cost of the FLIR thermal camera from CAPEX because it is not a default option for consumer drones. The total CAPEX for all drones is shown in Table 2. The custom-high drone and the Inspire 2 are in the same price tier (approximately $5000), and the custom-mid drone and the Phantom 4 Pro are in the same tier (approximately $2000).
A professional must assemble all the parts for a custom drone. Although assembly is not too difficult, it takes about an hour for novice personnel, so we assumed $200 to hire personnel to build a custom drone. We also assumed two replacements each for the frame, landing gear, propellers, and camera lens, and a battery replacement every year. Since parts on a consumer drone are not replaceable, we instead considered a care plan for the body and the camera, which provides replacements for various accidents for a fee. Since the lens is not replaceable on a module camera or a GoPro product, we assumed replacing the camera itself rather than the lens. For the custom-high drone, we chose to replace the lens because it is not fixed to the body.
Unlike the proposed custom drone, consumer drones require a professional to fly them and post-process the acquired images. To account for the cost of hiring a professional, we assumed the professional works 2 h per day, five days per week, for two weeks at a fee of USD 300 per hour. For the custom drone, we assumed the same amount of time at USD 100 per hour. We also assumed that the same proficiency is required to process the images to find the cracks. For the sake of comparison, we assumed manual inspection of the images; the cost can be reduced further with the proposed scheme. As the TCO analysis in Table 2 shows, automatically inspecting cracks in a building has competitive advantages over consumer drones.
Using a high-resolution camera to inspect the wall may reduce overhead by shortening the route a drone must scan, because it can capture images farther from the wall while retaining the information required to detect cracks. However, using a better camera causes several complications. First, a high-end camera cannot interact with embedded boards such as the Raspberry Pi; even if it could be connected, processing the acquired images would require more computational power. Second, the heavier the peripherals attached to a drone, the more battery it consumes and the shorter its flight time; Table 3 lists the weight and battery time of the drones. Third, flying the drone higher requires more skilled pilots, which increases the hiring cost. Another critical factor is the post-processing cost. Regardless of the camera used to capture the wall images, the images must be post-processed to determine the cracks in the wall. The distance of the drone from the wall and the battery-optimized route are customizable factors that can be adjusted to the specification of a drone; however, detecting a crack and identifying its coordinates on the wall is time-consuming, tedious, and error-prone work. Thus, we expect the automation scheme proposed in this paper to reduce the time spent operating the drone and processing the acquired images, and thereby to significantly reduce the cost of ownership.

5. Discussion

Other, more sophisticated and elaborate schemes allow more accurate detection of cracks in the built environment. For example, the global positioning system (GPS) could be used for crack localization instead of the frame-based location identification. To evaluate this alternative, we set a drone to hovering mode and measured GPS readings for 30 min. Although the drone was hovering, the distance between the actual location and the readings was 6.1 m on average. Even over a short period (i.e., 1 min), the GPS readings fluctuated constantly and gave an error of 2.3 m. A more expensive yet reliable solution for identifying the location would be an ensemble of more accurate modules. However, the cost-effective solution developed in this study was viable because it did not require expensive peripherals.
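The hover error above can be quantified by averaging great-circle distances between the known hover point and the logged GPS fixes. A minimal sketch follows; the coordinates are hypothetical stand-ins, not the actual flight log.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical hover point and noisy fixes (NOT the actual flight log).
true_lat, true_lon = 35.1530, 128.0980
fixes = [(35.15303, 128.09803), (35.15297, 128.09795), (35.15305, 128.09808)]

errors = [haversine_m(true_lat, true_lon, la, lo) for la, lo in fixes]
mean_err = sum(errors) / len(errors)
print(f"mean GPS error: {mean_err:.1f} m")
```

With metre-scale noise like this, the mean error lands in the single-digit-metre range, consistent with the magnitude of drift observed in the hover test.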
The default scenario for built environment inspection includes human engagement for visual confirmation. Since inspections are performed periodically, identifying the location, size, and pattern of a crack is, in many cases, enough to determine its progression. Moreover, thermal defects are 3D in nature, and both the interior and exterior of the defective area must be thoroughly inspected by trained and qualified personnel. Since the drone was used as auxiliary equipment for the inspection, we believe the developed cost-effective solution will suffice for this inspection scenario. For detecting thermal leakage using drones, weather conditions, such as sky conditions, outside air temperature, and daylight and solar radiation, are important [1]. For example, the emissivity of building materials can be influenced by solar radiation and cloud conditions [32]. In addition, a difference of 10 °C between inside and outside air temperatures may be required for better thermography results [33]. Thus, stable and desirable weather conditions are necessary to achieve the research goal using a drone equipped with an infrared camera. If these conditions are met, drones equipped with infrared cameras can quickly identify thermal leakage from the envelopes of many buildings and help human inspectors save time.
These drones are applicable to crack and thermal leakage detection in buildings and to post-earthquake inspection of any built environment. In earthquake inspection scenarios, time is of the essence: the authorities can deploy several drones to inspect high-rise buildings for cracks and leakage to reduce safety risks. This approach can also reduce the operational cost and time spent inspecting buildings.
The limitations of this study include the weather conditions during the drone flight. In low-light environments, when the sunlight is weak, the three contour detectors were not effective. In addition, wind speeds should be considered to mitigate the effect of adverse wind on the drone; an additional feedback control algorithm is needed to maintain its stability and velocity.
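To make the frame-based alternative to GPS concrete, the sketch below maps a video frame index to wall coordinates under a constant-speed, back-and-forth (boustrophedon) sweep assumption. The fps, speed, and route geometry are illustrative placeholders, not the settings used in the experiments.

```python
# Sketch of frame-based location identification: map a frame index to
# (x, y) on the wall, assuming a constant-speed boustrophedon sweep.
def frame_position(frame_idx, fps, speed_mps, sweep_len_m, row_height_m):
    dist = frame_idx / fps * speed_mps           # distance along route
    row, along = divmod(dist, sweep_len_m)       # sweep number, offset
    row = int(row)
    x = along if row % 2 == 0 else sweep_len_m - along  # odd rows reversed
    y = row * row_height_m                       # height gained per sweep
    return x, y

# Frame 900 of a 30 fps video at 0.5 m/s: 15 m into the route.
x, y = frame_position(900, fps=30, speed_mps=0.5,
                      sweep_len_m=10, row_height_m=1.0)
print(x, y)  # 5.0 1.0
```

Because the position is derived from the frame index alone, this mapping avoids the metre-scale drift of consumer GPS modules, at the cost of assuming a stable flight speed.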

6. Conclusions

As the built environment ages and natural disasters such as earthquakes increase, fatigue accumulates, causing internal or external cracks on building surfaces that may lead to greater disasters. With limited manpower and resources, investigating buildings for cracks is both expensive and time-consuming. By using drones, we can improve the cost structure of wall inspection and provide a time-efficient solution for crack inspection of the built environment. The solution can also be used to inform a managing agency of danger signals observed during inspection. The contribution of this paper is twofold. First, it offers the ingredients of autonomous building inspection, which opens many doors to sustaining the built environment. Second, it offers a low-cost and easily maintainable solution for wall inspection.
This paper presented improved approaches to detect cracks and thermal leakage in building envelopes using drones with video camcorders and/or infrared cameras. First, an efficient routing coordinate was found from several tests. Second, an automated scheme using frame-based location identification was developed to effectively find the current location of the drone-assisted image frame. Third, three widely used contour detectors (the Sobel, Laplacian, and Canny algorithms) were compared to find a better solution with low computational overhead. In addition, the Canny detector was applied to anomaly detection in thermal images.
The results showed that the new simplified drone-assisted scheme provided automation, higher accuracy, and better speed while using less battery energy. Furthermore, this study found that the developed cost-effective drone with its attached equipment generated accurate results without the need for an expensive drone.
Despite its limitations, the drone-assisted scheme developed in this paper will be valuable for automating the entire procedure of detecting anomalous damage in building envelopes with low battery use, low computational load, and low cost. This new scheme will contribute to fully automated anomaly detection, energy auditing, and commissioning for sustainably built environments, including numerous residential and commercial buildings.

Author Contributions

Conceptualization, S.O. and S.L.; methodology, S.O., S.H. and S.L.; supervision, S.L.; writing—original draft, S.H.; writing—review and editing, S.O. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (no. 2019R1G1A1100455). This study was also carried out with the support of the R&D Program for Forest Science Technology (2021344B10-2123-CD01) provided by the Korea Forest Service (Korea Forestry Promotion Institute).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rakha, T.; Gorodetsky, A. Review of Unmanned Aerial System (UAS) applications in the built environment: Towards automated building inspection procedures using drones. Autom. Constr. 2018, 93, 252–264. [Google Scholar] [CrossRef]
  2. Ministry of Land, Infrastructure and Transport. Statistics of Infrastructure Built More than 30 Years in Cities and Provinces. Available online: http://www.blcm.go.kr/stat/customizedStatic/CustomizedStaticSupplyList.do (accessed on 1 March 2021).
  3. Choi, S.; Kim, E. Building crack inspection using small UAV. In Proceedings of the 2015 17th International Conference on Advanced Communication Technology (ICACT), Phoenix Park, PyeongChang, Korea, 1–3 July 2015; pp. 235–238. [Google Scholar]
  4. Noh, Y.; Koo, D.; Kang, Y.-M.; Park, D.; Lee, D. Automatic crack detection on concrete images using segmentation via fuzzy C-means clustering. In Proceedings of the 2017 International Conference on Applied System Innovation (ICASI), Sapporo, Japan, 13–17 May 2017; pp. 877–880. [Google Scholar] [CrossRef]
  5. Dixit, A.; Wagatsuma, H. Comparison of Effectiveness of Dual Tree Complex Wavelet Transform and Anisotropic Diffusion in MCA for Concrete Crack Detection. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 2681–2686. [Google Scholar]
  6. Seo, J.; Duque, L.; Wacker, J. Drone-enabled bridge inspection methodology and application. Autom. Constr. 2018, 94, 112–126. [Google Scholar] [CrossRef]
  7. Morgenthal, G.; Hallermann, N.; Kersten, J.; Taraben, J.; Debus, P.; Helmrich, M.; Rodehorst, V. Framework for automated UAS-based structural condition assessment of bridges. Autom. Constr. 2018, 97, 77–95. [Google Scholar] [CrossRef]
  8. Scholten, C.N.; Kamphuis, A.J.; Vredevoogd, K.J.; Lee-Strydhorst, K.G.; Atma, J.L.; Shea, C.B.; Lamberg, O.N.; Proppe, D.S. Real-time thermal imagery from an unmanned aerial vehicle can locate ground nests of a grassland songbird at rates similar to traditional methods. Biol. Conserv. 2019, 233, 241–246. [Google Scholar] [CrossRef]
  9. Harvey, M.C.; Rowland, J.V.; Luketina, K.M. Drone with thermal infrared camera provides high resolution georeferenced imagery of the Waikite geothermal area, New Zealand. J. Volcanol. Geotherm. Res. 2016, 325, 61–69. [Google Scholar] [CrossRef]
  10. Entrop, A.G.; Vasenev, A. Infrared drones in the construction industry: Designing a protocol for building thermography procedures. Energy Procedia 2017, 132, 63–68. [Google Scholar] [CrossRef]
  11. Lucchi, E. Applications of the infrared thermography in the energy audit of buildings: A review. Renew. Sustain. Energy Rev. 2018, 82, 3077–3090. [Google Scholar] [CrossRef]
  12. Nardi, I.; Lucchi, E.; de Rubeis, T.; Ambrosini, D. Quantification of heat energy losses through the building envelope: A state-of-the-art analysis with critical and comprehensive review on infrared thermography. Build. Environ. 2018, 146, 190–205. [Google Scholar] [CrossRef]
  13. Ariwoola, R.T. Use of Drone and Infrared Camera for a Campus Building Envelope Study; East Tennessee State University: Johnson City, TN, USA, 2016. [Google Scholar]
  14. Fox, M.; Goodhew, S.; De Wilde, P. Building defect detection: External versus internal thermography. Build. Environ. 2016, 105, 317–331. [Google Scholar] [CrossRef]
  15. Fox, M.; Coley, D.; Goodhew, S.; de Wilde, P. Thermography methodologies for detecting energy related building defects. Renew. Sustain. Energy Rev. 2014, 40, 296–310. [Google Scholar] [CrossRef]
  16. Ellenberg, A.; Kontsos, A.; Moon, F.; Bartoli, I. Bridge deck delamination identification from unmanned aerial vehicle infrared imagery. Autom. Constr. 2016, 72, 155–165. [Google Scholar] [CrossRef]
  17. Fabbri, K.; Costanzo, V. Drone-assisted infrared thermography for calibration of outdoor microclimate simulation models. Sustain. Cities Soc. 2019, 52, 101855. [Google Scholar] [CrossRef]
  18. Dixit, A.; Wagatsuma, H. Investigating the effectiveness of the sobel operator in the MCA-based automatic crack detection. In Proceedings of the 2018 4th International Conference on Optimization and Applications (ICOA), Mohammedia, Morocco, 26–27 April 2018; pp. 1–6. [Google Scholar]
  19. Li, J.; Wang, N.; Liu, Y.; Yang, Y. A Study of Crack Detection Algorithm. In Proceedings of the 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, China, 18–20 September 2015; pp. 1184–1187. [Google Scholar]
  20. Ding, K.; Xiao, L.; Weng, G. Active contours driven by region-scalable fitting and optimized Laplacian of Gaussian energy for image segmentation. Signal Process. 2017, 134, 224–233. [Google Scholar] [CrossRef]
  21. Zhao, H.; Qin, G.; Wang, X. Improvement of canny algorithm based on pavement edge detection. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; Volume 2, pp. 964–967. [Google Scholar]
  22. Dassault Systems CATIA. Available online: https://www.3ds.com/products-services/catia (accessed on 1 August 2021).
  23. Siemens LMS Samcef Solver Suite. Available online: https://acam.at/wp-content/uploads/Samcef_Solversuite.pdf (accessed on 1 August 2021).
  24. Meier, L. MAVLink Developer Guide. Available online: https://mavlink.io/en/ (accessed on 1 August 2021).
  25. Jain, R.; Kasturi, R.; Schunck, B.G. Machine Vision; McGraw-Hill: New York, NY, USA, 1995; Volume 5. [Google Scholar]
  26. Wu, X.; Liu, X. Building crack identification and total quality management method based on deep learning. Pattern Recognit. Lett. 2021, 145, 225–231. [Google Scholar] [CrossRef]
  27. Tianhua, D.; Wei, L.; Chao, Z.; Jianjian, D.; Weimin, D.; Xianlin, Z. Eggshell crack identification based on Welch power spectrum and generalized regression neural network (GRNN). Food Sci. 2015, 14, 156–160. [Google Scholar] [CrossRef]
  28. Huang, X.; Yang, M.; Feng, L.; Gu, H.; Su, H.; Cui, X.; Cao, W. Crack detection study for hydraulic concrete using PPP-BOTDA. Smart Struct. Syst. 2017, 20, 75–83. [Google Scholar] [CrossRef]
  29. Brooks, W.S.M.; Lamb, D.A.; Irvine, S.J.C. IR Reflectance Imaging for Crystalline Si Solar Cell Crack Detection. IEEE J. Photovoltaics 2015, 5, 1271–1275. [Google Scholar] [CrossRef]
  30. Kayırga, O.M.; Altun, F. Investigation of earthquake behavior of unreinforced masonry buildings having different opening sizes: Experimental studies and numerical simulation. J. Build. Eng. 2021, 40, 102666. [Google Scholar] [CrossRef]
  31. Oh, S.; Gardner, J. Impact of Window Replacement on Yanke Building Energy Consumption. 2017. Available online: https://works.bepress.com/john_gardner/17/ (accessed on 1 August 2021).
  32. Siebert, S.; Teizer, J. Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system. Autom. Constr. 2014, 41, 1–14. [Google Scholar] [CrossRef]
  33. Borrmann, D.; Nuechter, A.; Ðakulović, M.; Maurović, I.; Petrović, I.; Osmanković, D.; Velagić, J. A mobile robot based system for fully automated thermal 3D mapping. Adv. Eng. Informatics 2014, 28, 425–440. [Google Scholar] [CrossRef]
Figure 1. Wall crack inspection framework.
Figure 2. CATIA modeling for the drone along with the propeller guard (left) and the landing gear (right).
Figure 3. The drone equipped with the battery, video camcorder, propeller guards, and landing gears.
Figure 4. SAMCEF analysis for the deformation of the four arms (left) and the stress of the center (right).
Figure 5. Flowchart for the overall anomaly contour detection.
Figure 6. Battery-utilization-optimized routing coordinate.
Figure 7. Detection results by distance: (a) 1 m from the surface, (b) 3 m from the surface, (c) 5 m from the surface, and (d) 7 m from the surface. As the drone flies further away from the building, the distortion increases and the resolution of the crack decreases.
Figure 8. Boxplot of battery utilization with respect to direction.
Figure 9. Boxplot of accuracy of frame-based location identification.
Figure 10. Comparison of the contour detectors with 5 cracks.
Figure 11. Comparison of the contour detectors with 6 cracks.
Figure 12. Comparison of the crack detectors on the images.
Figure 13. Results from the thermal images using the Canny contour detector.
Table 1. Specifications and cost for the drone and attached equipment.

| Item | Unit Cost | Pieces | Total Cost | Percentage |
|---|---|---|---|---|
| Pixhawk Platform | 163 USD | 1 | 163 USD | 11.8% |
| GPS for Pixhawk Platform | 48 USD | 1 | 48 USD | 3.5% |
| Motor and Electronic Speed Control (ESC) | 197 USD | 4 | 788 USD | 57.2% |
| DJI F450 Frame | 25 USD | 1 | 25 USD | 1.8% |
| Carbon Fiber Propeller | 16 USD | 2 | 32 USD | 2.3% |
| Power Module | 16 USD | 1 | 16 USD | 1.2% |
| Battery and Lipo Battery Voltage Tester | 74 USD | 1 | 74 USD | 5.4% |
| Landing Gear | 2 USD | 1 | 2 USD | 0.1% |
| Raspberry Pi 3 Model | 16 USD | 1 | 16 USD | 1.2% |
| Camera Module | 74 USD | 1 | 74 USD | 5.4% |
| Controller | 90 USD | 1 | 90 USD | 6.5% |
| Battery Charger | 49 USD | 1 | 49 USD | 3.6% |
| Total | | | 1377 USD | 100.0% |
Table 2. Total Cost of Ownership of a Drone (4 years).

| Cost | Category | Specification | Custom-Ours (USD) | Custom-Mid (USD) | Custom-High (USD) | DJI Phantom 4 Pro (USD) | DJI Inspire 2 (USD) |
|---|---|---|---|---|---|---|---|
| CAPEX | Drone | Body (frame, controller, etc.) | 1229 | 1500 | 2500 | 2049 | 3299 |
| | Camera | Camera module | 74 | – | – | – | – |
| | | Go Pro 4 | – | 249 | – | – | – |
| | | Go Pro mount | – | 15 | – | – | – |
| | | Sony A7R II | – | – | 1198 | – | – |
| | | Lens 24–70 mm | – | – | 398 | – | – |
| | | Mount | – | – | 500 | – | – |
| | | Zenmuse X5S | – | – | – | – | 2049 |
| | Battery | Minimum requirement | 74 | 150 | 250 | – | – |
| | Total CAPEX | | 1377 | 1914 | 4846 | 2049 | 5348 |
| OPEX | Assembly | Personnel ($200 per hour) | 200 | 200 | 200 | – | – |
| | Replacement (2 years) | Landing gear replacement (×2) | 8 | 20 | 20 | – | – |
| | | Frame replacement (×2) | 100 | 200 | 200 | – | – |
| | | Camera (cost ×2 × 2 years) | 296 | 996 | 1592 | – | – |
| | | Battery (×1) | 100 | 300 | 500 | 370 | 358 |
| | | Propellers replacement (×2) | 64 | 100 | 100 | 52 | 100 |
| | Care plan (drone) | 1 year care | – | – | – | 159 | 339 |
| | | 1 year care extension | – | – | – | 129 | – |
| | | 1st replacement | – | – | – | 99 | 209 |
| | | 2nd replacement | – | – | – | 149 | 329 |
| | Care plan (camera) | 1 year | – | – | – | – | 205 |
| | | 1st replacement | – | – | – | – | 149 |
| | | 2nd replacement | – | – | – | – | 219 |
| | Operation (2 h, 5 d, 2 w) | Automatic drone control ($100) | 2000 | 2000 | 2000 | – | – |
| | | Manual drone control ($300) | – | – | – | 6000 | 6000 |
| | Processing (2 h, 5 d, 2 w) | Automatic inspection ($100) | 2000 | 2000 | 2000 | – | – |
| | | Manual inspection ($300) | – | – | – | 6000 | 6000 |
| | Depreciation cost | 4 years | 344 | 479 | 1212 | 5121 | 1337 |
| | Total OPEX | | 5112 | 6295 | 7824 | 13,470 | 15,245 |
| TCO | CAPEX + OPEX | 2 years | 6489 | 8209 | 12,670 | 15,519 | 20,593 |
Table 3. Weight and battery time of drones.

| Drones | Custom-Ours | Custom-Mid | Custom-High | DJI Phantom 4 Pro | DJI Inspire 2 |
|---|---|---|---|---|---|
| Weight | 0.5 kg | 1.5 kg | 4.5 kg | 1.4 kg | 4 kg |
| Battery time | ~18 min | ~20 min | ~20 min | ~30 min | ~25 min |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Oh, S.; Ham, S.; Lee, S. Drone-Assisted Image Processing Scheme using Frame-Based Location Identification for Crack and Energy Loss Detection in Building Envelopes. Energies 2021, 14, 6359. https://doi.org/10.3390/en14196359

