Article

An Overall Deformation Monitoring Method of Structure Based on Tracking Deformation Contour

1
State Key Laboratory of Mountain Bridge and Tunnel Engineering, Chongqing Jiaotong University, Chongqing 400074, China
2
College of Civil and Transportation Engineering, Shenzhen University, Shenzhen 518060, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(21), 4532; https://doi.org/10.3390/app9214532
Submission received: 4 September 2019 / Revised: 18 October 2019 / Accepted: 20 October 2019 / Published: 25 October 2019

Featured Application

This study addresses the lack of sufficient data supporting structural damage identification, a general issue in traditional single-point measurement methods. A novel method is proposed for structural deformation monitoring based on digitalized photogrammetry, with improved efficiency and reduced cost. The method can be applied to the overall deformation monitoring of engineering structures, such as bridges. Furthermore, the method can provide a solid foundation for the estimation of the structural health state.

Abstract

In structural deformation monitoring, traditional methods are mainly based on deformation data measured at several individual points. As a result, only discrete deformations, not the overall deformation, can be obtained, which hinders researchers from a complete, all-round understanding of the structural behavior. At the same time, the area surrounding the monitored structure is usually complicated, which notably increases the difficulty of acquiring deformation data. To address these issues, a digital image-based method is proposed for overall structural deformation monitoring, utilizing image perspective transformation and edge detection. Owing to constraints on camera placement, the lens is usually not orthogonal to the measured structure. As a result, the obtained image cannot be used to extract deformation data directly. Thus, a perspective transformation algorithm is used to obtain the orthogonal projection image of the test beam under the condition of inclined photography, which enables the direct extraction of deformation data from the original image. Meanwhile, edge detection operators are used to detect the edge of the structure's orthogonal projection image, to further characterize the key features of the structural deformation. Using the operator, the complete deformation data of the structural edge are obtained by locating and calibrating the edge pixels. Based on the above, a series of load tests has been carried out on a steel–concrete composite beam to validate the proposed method, alongside traditional dial deformation gauges. It has been found that the extracted edge lines have an obvious sawtooth effect due to the illumination environment. The sawtooth effect makes the extracted edge lines fluctuate slightly around the actual contour of the structure. To this end, a fitting method is applied to minimize the fluctuation and obtain a smooth approximation of the actual deflection curve. The deformation data obtained by the proposed method have been compared with those measured by the dial meters, indicating that the measurement error of the proposed method is less than 5%. Moreover, since the overall deformation data are continuously measured by the proposed method, it can better reflect the overall deformation of the structure, and hence the structural health state, when compared with traditional "point" measurements.

1. Introduction

During their service life, engineering structures are subjected to various inherent deterioration processes, such as corrosion, fatigue, and material creep. As a result, the deformation of the degraded structure will deviate from the original one. For this reason, structural deformation is commonly used as an important index in structural health monitoring [1,2]. For instance, the external load and deformation of a structural system generally follow the relation below:
\{d\} = [K]^{-1} \{f\}
where {d} stands for the deformation state, [K] is the stiffness matrix of the structure, and {f} represents the effect induced by the external load. When any damage or deterioration occurs in the structure, the stiffness matrix [K] changes correspondingly, which in turn leads to an inevitable change in the deformation state {d}. Therefore, the change in the deformation state can be utilized to evaluate the health state of the structure. The present study focuses on the direct extraction of the overall deformation, rather than approximating it through data measured at several discrete points. The major advantage of overall deformation data is the elimination of errors in structural health evaluation caused by insufficient measurement.
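As a toy illustration of this relation (Python/NumPy; the two-degree-of-freedom stiffness and load values below are invented for illustration, not data from this study), the sketch solves {d} = [K]^{-1}{f} and shows how a simulated local stiffness loss changes the deformation state:

```python
import numpy as np

# Hypothetical 2-DOF stiffness matrix [K] (kN/mm) and load vector {f} (kN);
# the values are illustrative only.
K_intact = np.array([[200.0, -50.0],
                     [-50.0, 150.0]])
f = np.array([10.0, 0.0])

# {d} = [K]^{-1}{f}: solve the linear system instead of inverting explicitly
d_intact = np.linalg.solve(K_intact, f)

# Simulated local damage: 20% stiffness loss in the first DOF
K_damaged = K_intact.copy()
K_damaged[0, 0] *= 0.8

d_damaged = np.linalg.solve(K_damaged, f)

# The deformation state changes measurably, so {d} can serve as a damage index
print(d_intact, d_damaged)
```

The damaged system deflects more under the same load, which is exactly the change in {d} that the monitoring method aims to capture.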
Traditionally, structural deformation can be measured by leveling, total stations, GPS, vibration sensors, and other equipment. At present, these methods can accurately and rapidly measure the deformation of structures. However, only a limited number of key points on the target structure can be measured with them, which often leads to insufficient data and, moreover, insensitivity to structural deterioration [3]. An obvious direct solution is to greatly increase the number of sensors installed on the structure; however, this is both time-consuming and costly, and thus not applicable in engineering practice. Alternatively, full-field structural morphology measurement based on digital images is an ideal solution, taking advantage of both structural damage identification methods and digital image processing technology. Therefore, it is crucial to effectively utilize structural image features to extract the full-field deformation information of the structure.
In recent years, digital image processing technology has increasingly been employed to measure the overall deformation of structures. As a remote sensing technique, photogrammetry does not need any contact with the object, which is a great advantage in the deformation monitoring of structures. "Photogrammetry" consists of setting up a base station in a stable area in front of the target and then photographing the target, so as to recover the shape and motion state of the target from the images [4]. According to the imaging distance, photogrammetry can be divided into "space photogrammetry", "close-range photogrammetry", and "microscopic photogrammetry" [5]. In structural deformation monitoring, close-range photogrammetry has broad prospects for development [6]; "close-range photogrammetry" means that the distance between the base station and the measured structure is within 300 m [7]. Feng et al. [8] present a comprehensive review of the recent development of computer vision-based sensors for structural displacement measurement and their applications in SHM. Important issues critical to successful measurement are discussed in detail, including how to convert pixel displacements to physical displacements, how to achieve sub-pixel resolution, and what causes measurement errors and how to mitigate them. However, the article also clearly points out that, in many respects, vision-based sensor technology is still in its infancy. The majority of studies have focused on measurements of small-scale laboratory structures, or on field measurements of large structures at a limited number of points over a short period of time. Kromanis et al. [9] introduce a low-cost robotic camera system (RCS) for accurate measurement of structural response. The low-cost RCS provides very accurate vertical displacements, with a measurement error of 1.4%. Artese et al. [10] proposed a bridge monitoring system combining a camera and a laser indicator; the inclination of the elastic line is measured by analyzing single frames of an HD video of the laser-beam imprint projected on a flat target. The inclination of the elastic line at the support was obtained with a precision of 0.01 mrad. Ghorban et al. [11] measured the overall deformation of a masonry wall subjected to cyclic loads using 3D image correlation technology; the displacement, rotation, and interface slip between the reinforced concrete column and the masonry were measured. Wang et al. [12] used close-range photogrammetry to monitor the displacement of tunnel caverns. The measured results were compared with the values obtained by a mechanical convergence meter, and the difference between the two methods is no more than ±2 mm at a measuring distance of 8 m; this accuracy meets the requirements of general tunnel deformation monitoring. Reference [13] studied the application of a sub-pixel displacement measurement method to soil strain monitoring; based on spatial correlation function iteration, the sub-pixel displacement of soils was measured. Zang et al. [14] applied close-range photogrammetry to measure bridge deflection and proved that a desirable accuracy, i.e., ±1 mm, can be achieved. However, the accuracy is greatly affected by the positioning of the artificial marking points required by the method [15]. Although the above studies validated the feasibility of close-range photogrammetry for structural deformation monitoring, these methods are still unable to measure the overall deformation of a structure.
In order to explore the feasibility of photogrammetry for monitoring the overall deformation of structures, Detchev et al. [16] explored the use of consumer-grade cameras and projectors for the deformation monitoring of structural elements, proposing a low-cost digital-camera deformation monitoring system. Static load tests of concrete beams were carried out in the laboratory, and the experiments proved that it was possible to detect sub-millimeter overall deformations with the equipment used and the geometry of the setup. However, this technology requires the structure surface to have rich texture and a random pattern to be projected onto it, which is difficult to achieve in actual bridge deformation monitoring. On the other hand, close-range photogrammetry requires the measuring equipment to be located at the orthogonal projection position of the measured surface, which is usually difficult in engineering practice. Taking bridge structures as an example, the surroundings near a bridge are usually complex, with mountains, rivers, and trees, which makes it difficult for the camera to maintain an orthogonal projection position with respect to the bridge being measured.
In the deformation monitoring of structures, environmental factors must be considered. The complex geographical conditions around a structure often mean that the photogrammetric camera cannot work in the ideal measuring position. Therefore, it is necessary to develop a new photogrammetric method for measuring the overall deformation of structures with the camera in an inclined position. In view of the actual needs of structural deformation monitoring, this paper studies an overall deformation monitoring method under the condition of tilt photography. Steel truss–concrete composite beam specimens were fabricated in the laboratory, and static deformation images of the specimens were obtained by oblique photography. The overall deformation of the specimens was then obtained by perspective transformation and edge detection, and the error sources of the method were analyzed. This research is a comprehensive application of photogrammetry and digital image processing technology in the field of structural deformation monitoring. Its research foundation has been carefully verified and published in several publications [17,18]. The results can alleviate the problem of insufficient deformation data in damage identification. In addition, compared with traditional photogrammetric methods, this study also highlights the advantage of flexible camera placement in actual measurement work.

2. Orthogonal Projection and Global Deformation Acquisition Method of Structures

2.1. Perspective Transformation of Digital Image

Transforming the positions of image pixels while leaving the image content unchanged is called image geometric transformation [19]. It mainly includes translation, rotation, scaling, reflection, and shearing. Usually, a compound transformation, such as the perspective transformation, can be decomposed into a series of basic transformations. According to the perspective principle [20], when photographed under non-orthogonal projection, the image of the measured structure is deformed. As a result, the true shape of the structure can be obtained only when the camera is at the orthogonal projection position of the measured surface. The mathematical transformation from an oblique projection center to an orthogonal projection center is called the perspective transformation. Figure 1 illustrates the basic model of perspective transformation.
A 3D Cartesian coordinate system can be established in which the projection center of the camera is selected as the origin, called the camera coordinate system. The image plane is set as the x-y plane, and the focus is located at [0, 0, f]^T (f > 0). A 2D Cartesian coordinate system can be established on the plane of the measured object, called the measuring coordinate system. The origin of the measuring coordinate system is [x_0, y_0, z_0]^T in the camera coordinate system. The unit vector in the x-axis direction is [u_1, u_2, u_3]^T, the unit vector in the y-axis direction is [v_1, v_2, v_3]^T, and these vectors satisfy the following relation:
\begin{cases} u_1 v_1 + u_2 v_2 + u_3 v_3 = 0 \\ u_1^2 + u_2^2 + u_3^2 = v_1^2 + v_2^2 + v_3^2 = 1 \end{cases} \quad (1)
The points of coordinates [ u , v ] T in the measuring plane can be expressed in the camera coordinate system as the vector below,
u \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} + v \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} + \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix}. \quad (2)
Assuming that the coordinate of the corresponding point in the imaging plane is [x, y, 0]^T, there exists k ∈ ℝ such that the following expression holds,
u \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} + v \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} + \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ f \end{bmatrix} = k \left( \begin{bmatrix} x \\ y \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ f \end{bmatrix} \right). \quad (3)
Separating the components of the preceding formula yields
k \begin{bmatrix} x \\ y \end{bmatrix} = u \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + v \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} + \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = \begin{bmatrix} u_1 & v_1 & x_0 \\ u_2 & v_2 & y_0 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \quad (4)
k f = f - u u_3 - v v_3 - z_0 = \begin{bmatrix} -u_3 & -v_3 & f - z_0 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}. \quad (5)
Equation (5) can be rewritten as the following,
k = \begin{bmatrix} -\dfrac{u_3}{f} & -\dfrac{v_3}{f} & \dfrac{f - z_0}{f} \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}. \quad (6)
Combining Equations (6) and (4), it leads to
k \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} u_1 & v_1 & x_0 \\ u_2 & v_2 & y_0 \\ -\frac{u_3}{f} & -\frac{v_3}{f} & \frac{f - z_0}{f} \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}. \quad (7)
For convenience, a parameter matrix M is introduced, as shown below,
M = \begin{bmatrix} u_1 & v_1 & x_0 \\ u_2 & v_2 & y_0 \\ -\frac{u_3}{f} & -\frac{v_3}{f} & \frac{f - z_0}{f} \end{bmatrix}. \quad (8)
If the focus [0, 0, f]^T is not on the measuring plane, the matrix M is nonsingular. Under normal working conditions the focus does not lie on the measuring plane, so M can usually be treated as a nonsingular matrix. When the camera moves to a new position to capture the target structure, its focal length and spatial position change. Equivalently, the camera imaging plane can be regarded as fixed while the focal length and the actual spatial position of the structure change. Let the coordinates of the camera focus change to [0, 0, f′]^T and the origin of the measuring plane change to [x_0′, y_0′, z_0′]^T. The unit vectors of the x′, y′ axes in the measuring plane become [u_1′, u_2′, u_3′]^T and [v_1′, v_2′, v_3′]^T. Similarly, there exists k′ ∈ ℝ such that the coordinate point [u, v]^T on the measuring plane and its corresponding imaging point [x′, y′, 0]^T satisfy:
k' \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} u_1' & v_1' & x_0' \\ u_2' & v_2' & y_0' \\ -\frac{u_3'}{f'} & -\frac{v_3'}{f'} & \frac{f' - z_0'}{f'} \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}. \quad (9)
The parameter matrix M’ is denoted as:
M' = \begin{bmatrix} u_1' & v_1' & x_0' \\ u_2' & v_2' & y_0' \\ -\frac{u_3'}{f'} & -\frac{v_3'}{f'} & \frac{f' - z_0'}{f'} \end{bmatrix}. \quad (10)
Comparison of Equations (7) and (9) leads to
k' \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = M' \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = k M' M^{-1} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (11)
assuming:
M' M^{-1} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix}. \quad (12)
Accordingly, the following expansions are introduced:
\begin{cases} k' x' = k (m_{11} x + m_{12} y + m_{13}) \\ k' y' = k (m_{21} x + m_{22} y + m_{23}) \\ k' = k (m_{31} x + m_{32} y + m_{33}) \end{cases} \quad (13)
Therefore, there is:
\begin{cases} x' = \dfrac{m_{11} x + m_{12} y + m_{13}}{m_{31} x + m_{32} y + m_{33}} \\[2mm] y' = \dfrac{m_{21} x + m_{22} y + m_{23}}{m_{31} x + m_{32} y + m_{33}} \end{cases} \quad (14)
The coordinates (x, y) of an imaging point in the original image are thus transformed into the new imaging point (x′, y′) after the perspective transformation. Based on the above analysis, the proposed process can convert an oblique structural image into its orthogonal-projection image, which provides the technological foundation for monitoring the overall deformation of the structure.
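A minimal numerical sketch of this mapping is given below (Python/NumPy; the four point correspondences are invented for illustration). It estimates the 3×3 matrix M′M^(-1) from four reference points by direct linear transformation and then applies the rational mapping from (x, y) to (x′, y′) derived above; in practice, a library routine such as OpenCV's warpPerspective performs the same mapping over a whole image:

```python
import numpy as np

def perspective_transform(points, H):
    """Apply a 3x3 homography H (playing the role of M' M^{-1}) to Nx2 pixel
    coordinates: x' = (m11 x + m12 y + m13)/(m31 x + m32 y + m33), same for y'."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def homography_from_points(src, dst):
    """Estimate H from 4 point correspondences (e.g. corners of a known
    rectangle on the beam face) by direct linear transformation (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)          # null vector of A, reshaped to 3x3
    return H / H[2, 2]

# Oblique-image corners of a rectangular panel and their orthogonal positions
src = [(10.0, 12.0), (95.0, 20.0), (90.0, 80.0), (8.0, 70.0)]
dst = [(0.0, 0.0), (100.0, 0.0), (100.0, 60.0), (0.0, 60.0)]
H = homography_from_points(src, dst)
print(perspective_transform(np.array(src), H).round(6))
```

With four exact correspondences the mapping reproduces the target rectangle; real measurements would use surveyed reference points on the structure.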

2.2. Edge Detection

The edge of structural image is an important carrier of overall deformation information. Edge detection is a method to analyze the main features of images [21]; it can greatly reduce the amount of data, eliminate information not related to deformation monitoring, and retain the basic attributes of structure. The basic task of edge detection is to recognize the step change of the gray value of the structure edge in the image, which can be further used to obtain the feature edge of the structure. According to [22], the step edge is related to the peak value of the first-order derivative of the gray level of the image, and the degree of change of the gray value can be expressed by gradient. The gradient of image function is a vector with direction and size, as shown below:
G(x, y) = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f}{\partial x} \\[2mm] \dfrac{\partial f}{\partial y} \end{bmatrix}. \quad (15)
It can be seen that the magnitude of the vector G(x, y) gives the maximum rate of change of the gray value of the function f(x, y), and its direction is that of the maximum change.
The amplitude of the gradient can be expressed as:
|G(x, y)| = \sqrt{G_x^2 + G_y^2}. \quad (16)
In this paper, the absolute value is used to approximate the gradient amplitude:
|G(x, y)| \approx \max(|G_x|, |G_y|). \quad (17)
The direction of the gradient can be derived as:
\alpha(x, y) = \arctan\left( \dfrac{G_y}{G_x} \right) \quad (18)
where α is the angle between the direction vector and the x-axis.
From the above formulas, it follows that the degree of change in the gray levels can be detected by a discrete approximation of the gradient. At the edge of the structure, the gray value changes abruptly [23], producing a maximum of the gradient function. To this end, the edge can be extracted through these features.
From the above algorithm, we can see that the essence of the image edge is the point of discontinuous gray level, or where the gray level changes dramatically. The drastic change of gray level of edge means that near the edge point, the signal has high frequency components in the spatial domain. Therefore, the edge detection method is essentially to detect the high-frequency component of the signal, but it is difficult to distinguish the high-frequency component of the gray signal from the environmental noise of the actual structure photogrammetry, which makes it difficult to accurately extract the edge information of the structure. Taking one-dimensional signal of the structure image as an example, as shown in Figure 2, if point A is regarded as the edge point of the signal and there is a jump in the signal, then whether there is an edge at point B and point C needs to be treated with caution. In fact, point B and point C are probably the combination of the signal and some noise.
Edges with continuous gradients, such as point A in Figure 2, are very rare in actual structure images. Most of the structural edge points will be accompanied by environmental noise, forming a large number of complex edge points such as points B and C. Therefore, it is necessary to study the false edges caused by noise in order to ensure the accuracy of structural deformation monitoring.
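To make the gradient-based edge criterion concrete, the following sketch (Python/NumPy, applied to a synthetic two-gray-level image, not the test-beam data) computes Gx and Gy by central differences, uses the max(|Gx|, |Gy|) approximation of the gradient magnitude, and thresholds it. Note that the detected edge comes out as a band two pixels wide, a small-scale version of the bandwidth effect the paper discusses for real images:

```python
import numpy as np

def gradient_edges(img, thresh):
    """Detect step edges from the image gradient G = [Gx, Gy], approximating
    the gradient magnitude by max(|Gx|, |Gy|) as in the text."""
    img = img.astype(float)
    Gx = np.zeros_like(img)
    Gy = np.zeros_like(img)
    Gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # central difference in x
    Gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0  # central difference in y
    mag = np.maximum(np.abs(Gx), np.abs(Gy))
    direction = np.arctan2(Gy, Gx)                  # gradient direction alpha
    return mag > thresh, direction

# Synthetic image: bright background (gray 200) over a dark "beam" (gray 40)
img = np.full((8, 8), 200.0)
img[4:, :] = 40.0          # horizontal structure edge between rows 3 and 4
edges, _ = gradient_edges(img, thresh=50.0)
print(np.where(edges.any(axis=1))[0])   # rows flagged as containing edge pixels
```

On this noise-free image the flagged rows bracket the true edge; on a real photograph, high-frequency noise would produce additional spurious responses, as discussed above.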

3. Static Test of the Beam

A static test has been carried out on a steel truss–concrete composite beam, to validate the proposed method, as shown in Figure 3.
The specimen is simply supported by two hinge bearings at both ends, as shown in Figure 3a. Two hydraulic jacks have been used to apply a two-point bending load on the specimen. Three dial meters have been placed at the quarter-span points and the midspan of the specimen to measure the structural deflection.
The specimen has been loaded step by step from 0 to 600 kN, with an increment of 100 kN. The loading protocol is shown in Figure 4. It is worth stating that the measurement at each step has been made two minutes after the target load is reached, to allow the deformation of the specimen to fully develop.
During the test, digital images of the specimen have also been collected using a low-cost digital camera (Canon EOS 5DS R); the camera and lens parameters are shown in Table 1. The spatial position of the camera in this experiment has been set at a non-orthogonal projection position to simulate normal conditions in engineering practice, as aforementioned. During the whole process, the spatial position and azimuth of the camera have been kept fixed to ensure the consistency of the projection centers of the structure images. In order to prevent the camera from being disturbed, a remote controller has been used to set the camera parameters and trigger the shutter.
As with any measurement, systematic error exists due to the physical limitations of the applied hardware. Specifically, the accuracy of the photogrammetry-based method depends more strongly on the capability of the hardware than traditional methods do. Therefore, the calibration of the photogrammetric equipment is an essential part of the measurement. Generally, the largest part of the error in photogrammetric hardware originates from lens distortion [24]. To this end, the checkerboard lattice calibration method [25,26] has been applied to calibrate the lens of the photogrammetric camera. The calibration has been conducted with a total of 25 checkerboard lattice images, and the lens distortion parameters have been obtained. Based on these, the photogrammetric images obtained in this paper have been corrected. The calibration process is shown in Figure 5.
The results of camera calibration parameters are shown in Table 2.
In the calibration, the distortion correction formula [27] has been used to correct the structural image element by element. As a result, an ideal image without lens distortion has been obtained, which can then be used for the extraction of structural deformation. The distortion correction effect is shown in Figure 6.
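The element-wise correction can be sketched as follows (Python/NumPy). The simple two-coefficient radial-distortion model, the coefficient values, and the principal point used here are illustrative assumptions for the sketch only; the actual formula of [27] and the calibrated parameters of Table 2 may differ:

```python
import numpy as np

def undistort_points(pts, k1, k2, cx, cy):
    """Radial distortion correction under a common polynomial model:
    x_u = cx + (x_d - cx)(1 + k1 r^2 + k2 r^4), with r the distance of the
    distorted point from the principal point (cx, cy). The coefficients
    k1, k2 would come from a checkerboard calibration."""
    pts = np.asarray(pts, float)
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    r2 = dx**2 + dy**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return np.column_stack([cx + dx * scale, cy + dy * scale])

# Illustrative pixel coordinates and coefficients (not the Table 2 values)
pts = [(100.0, 100.0), (500.0, 300.0)]
corrected = undistort_points(pts, k1=-1e-7, k2=0.0, cx=320.0, cy=240.0)
print(corrected)
```

Applying such a mapping to every pixel (or, equivalently, remapping the image grid) yields the distortion-free image used for deformation extraction.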
In reality, the landform around a structure makes it difficult to obtain an orthogonal projection image, so the structure can often only be photographed obliquely from the side. According to practical application requirements, this situation is simulated in the test, i.e., the camera photographs the test beam at a fixed tilt angle. As shown in Figure 7a, the distance between the camera and the ground is about 3.5 m, and the distance between the camera and the test beam is about 3.0 m. Consequently, there is a large horizontal angle between the optical axis of the camera and the normal direction of the vertical plane of the test beam, as well as an elevation angle in the vertical direction. This photogrammetric arrangement simulates the possible inclination angle of the camera in an actual structure survey; unlike orthographic projection, tilt photography causes the structure image to be affected by the perspective relationship and to present near-large, far-small imaging features, which affects the extraction of structural deformation information. Figure 7 shows the image acquisition result for the specimen. It is worth noting that the part of the specimen near the right bearing is blocked by the reaction frame, which reflects the usual monitoring conditions of actual structures. Thus, it is crucial to obtain the deformation data of the sheltered part of the structure, which is also a key part of the present study.

4. Result and Discussion

4.1. Picture Perspective Transform of Test Beam

According to the principle of perspective transformation, the image of the test beam obtained under each load grade is processed; the result of image processing under one of the load grades is shown in Figure 8. It can be seen that, through perspective transformation, the image of the test beam changes from oblique projection to orthogonal projection, while all the details of the specimen are well preserved. It is worth noting that the remaining parts of the image apart from the specimen are distorted by the perspective transformation. However, this distortion does not affect the acquisition of structural deformation data, since the test aims only at the overall deformation data of the specimen.

4.2. Edge Contour Extraction of Structures

Several types of operators can be employed for edge detection, including the Sobel operator [28], Prewitt operator [29], Roberts operator [30], and Log operator [31], in which different methods are used to solve for the gradient extremum. In this paper, these operators have been applied to detect the edge of the specimen using the image after perspective transformation, as shown in Figure 9. Compared with the Log operator, the edge detection results of the other operators are not satisfactory due to the lack of edge information, which in turn has a negative impact on the accuracy. The advantage of the Log operator over the other methods is that a Gaussian spatial filter is employed to smooth the original image, which minimizes the influence of noise on edge detection. The Log operator is a second-order edge detection operator [32], as shown in Equations (19) and (20):
\nabla^2 f = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2} \quad (19)
\begin{cases} \dfrac{\partial^2 f}{\partial x^2} = f(i, j+1) - 2 f(i, j) + f(i, j-1) \\[2mm] \dfrac{\partial^2 f}{\partial y^2} = f(i+1, j) - 2 f(i, j) + f(i-1, j) \end{cases} \quad (20)
Based on the second-order differential of the image, the extreme points can be generated at the abrupt position of the gray value. According to these extreme points, the edge of the structure can be determined.
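A compact sketch of this second-order procedure is given below (Python/NumPy, on a synthetic step-edge image, not the test-beam data): the image is smoothed with a Gaussian filter, the discrete Laplacian of Equation (20) is applied, and the edge is located at the zero crossing of the result:

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian from Eq. (20): second differences in x and y."""
    img = img.astype(float)
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[1:-1, 2:] - 2 * img[1:-1, 1:-1] + img[1:-1, :-2]
                       + img[2:, 1:-1] - 2 * img[1:-1, 1:-1] + img[:-2, 1:-1])
    return lap

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian filter: the Log operator smooths before
    differentiating to suppress noise-induced false edges."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)

# Step edge: the Laplacian of the smoothed image changes sign at the edge
img = np.full((9, 9), 200.0)
img[5:, :] = 40.0                      # true edge between rows 4 and 5
lap = laplacian(gaussian_smooth(img))
col = lap[:, 4]                        # one image column across the edge
# interior row index just above the zero crossing of the Laplacian
sign_change_rows = np.where(np.diff(np.sign(col[1:-1])) != 0)[0] + 1
print(sign_change_rows)
```

The zero crossing falls between the bright and dark rows, i.e., exactly at the gray-value step, which is why the Log operator yields a one-pixel-wide edge in Figure 10.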
As shown in Figure 9, the distribution of the light intensity over the specimen is inconsistent, and the gray value at some edges does not change significantly, resulting in discontinuities in the detection of some edges. Discontinuous edges like those in Figure 9 are common in images of actual structures. On the other hand, the distribution of edge pixels obtained by the various edge algorithms also differs. The distribution density of the edge pixels is shown in Figure 10.
As shown in Figure 10, the edge features obtained by the above operators are quite different. For instance, the edge distributions of some operators are three pixels wide, while for others, e.g., the Log operator, the edges are concentrated in one pixel. Because of the large scatter in the pixel distribution, it is difficult to determine the exact edge position of the structure. Naturally, an efficient edge detection operator should have a centralized pixel distribution and good pixel continuity. After comparison, it is found that the Log operator can extract the edges of the structure relatively intact while maintaining a relatively centralized distribution of edge pixels. Thus, the Log operator has been selected for extracting the edge of the structure.
The discontinuous edges and scattered edge distributions described above are similar to the detection issues encountered in real structures, which are induced by the measurement environment. To deal with this kind of problem, the following data processing and analysis procedure has been employed. As an important carrier of the deformation, the edge of the structure is the key content of this paper. For the specimens in this paper, the upper and lower edges of the bridge deck and the lower edges of the specimens can be used as characteristic contours to analyze the overall deformation of the structure. From Figure 2, it can be seen that noise caused by the environment confuses the edge location; points with a continuous gradient change are very rare in actual structure images, and the presence of image noise can lead to the generation of pseudo edges. On the other hand, the real signal at the edge of the structure may also be smoothed out by the Gaussian spatial filter, which causes the edge of the structure to be discontinuous and edge information to be missing, as shown in Figure 11. Figure 11a shows that the lower edge contour of the bridge deck is relatively continuous. Therefore, the lower edge of the bridge deck is used as the characteristic contour of the test beam to extract the overall deformation of the structure; the position of the extracted edge contour is shown in Figure 11b.
Because the space position of the camera is fixed and the same perspective transformation method is used in each load grade, the edge contour before and after deformation can be directly extracted and compared without looking for fixed points in each load grade. The pixel coordinates of the lower edge of the bridge deck are extracted, and the original edge contour of the structure is obtained under each load grade, as shown in Figure 12.
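The per-column extraction of the lower-edge contour from a binary edge map can be sketched as below (Python/NumPy, with a toy edge map invented for illustration): for every pixel column, the bottom-most edge pixel is taken as the lower-edge row, and columns with no detected edge (e.g. the part occluded by the reaction frame) are marked as missing:

```python
import numpy as np

def lower_edge_contour(edge_map):
    """For each pixel column, return the row index of the bottom-most edge
    pixel (the lower edge of the bridge deck); NaN where no edge is found."""
    rows, cols = edge_map.shape
    contour = np.full(cols, np.nan)
    for c in range(cols):
        hits = np.flatnonzero(edge_map[:, c])
        if hits.size:
            contour[c] = hits[-1]    # bottom-most edge pixel in this column
    return contour

# Toy edge map: an edge along row 5 that dips to row 6 mid-span,
# with columns 7-8 "occluded" (no edge detected there)
edge = np.zeros((10, 10), dtype=bool)
edge[5, :4] = True
edge[6, 4:7] = True
edge[5, 9] = True
contour = lower_edge_contour(edge)
print(contour)
```

Repeating this for the image of each load grade yields the family of pixel-coordinate contours plotted in Figure 12, including the gaps where the edge is obscured.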
The result shows that the deflection of the specimen increases with the load grade. Since the edge line of the specimen is obscured near the right bearing by the reaction frame, the edge information there is partially missing. By contrast, the information on the left edge is well established. Considering the parallel shift of the contour line near the left bearing, it can be inferred that a rigid-body displacement of the specimen exists under each load condition. When loading from 0 to 100 kN, the rigid-body displacement is largest; this large displacement is due to the fact that the bearings are not in close contact with the reaction frame before loading. After the load reaches 100 kN, the bearings are in tight contact with the reaction frame. However, because the reaction frame is not ideally stiff and deforms slightly, a small amount of rigid-body displacement still exists under all load conditions. Besides, it can be found that the edge contour of the specimen is piecewise continuous under every single load condition, with step changes occurring at the four connections between the segments. During fabrication, the specimen was first divided into five segments and each segment was manufactured independently. The separated segments were then assembled at the four points, resulting in inevitable assembly errors. As a result, the shape of the specimen changes abruptly at those assembly points, which in turn leads to the step changes reflected in the edge contour.

4.3. Deformation Curve Obtained by Overlapping Difference of Contour Line

By calibrating each pixel in the image, the physical size of each pixel can be obtained. The size of each pixel is the theoretical limit of the accuracy of the proposed photogrammetric method. In the steel truss bridge tested, the vertical members are aligned with the pixel columns, as shown in Figure 13, so their calibration is relatively simple. As shown in Table 3, by calibrating 13 visible vertical members, the measurement accuracy of this experiment is determined to be 1.12 mm. Using this calibration value, the pixel contours in Figure 12 can be transformed into actual deformation values.
The edge line shown in Figure 12 is notably discontinuous at the connecting points due to the assembly errors analyzed above. However, this discontinuity is not caused by structural deformation and does not change with the applied load; its influence can therefore be eliminated by the overlapping difference method. Since the bearings contact the reaction frame closely after 100 kN, the edge line of this initial condition can be subtracted from the edge lines of the other load grades. In this way, the load-displacement curves for 100–600 kN are derived, as shown in Figure 14.
It can be found that the load-displacement curve exhibits a zigzag feature. The reason is that the edge of the structure in the image is composed of one or more pixels, as shown by the red pixels in Figure 15; these pixels are arranged side by side and form a band of finite width. The final edge position (shown in deep red in Figure 15) is determined by the gradient of the gray values of all pixels within this band. Under the influence of the illumination conditions, the detected edge deviates from the actual edge, as shown by the dotted line in Figure 15; as a result, the zigzag phenomenon appears in the load-displacement curve of Figure 14. Based on the above analysis, it can be inferred that the deformation extracted from the image is distributed around the actual deformation of the structure. To this end, the load-deformation curve is fitted with a polynomial to approximate the actual deformation, as shown in Figure 16. For comparison, the curves measured by the three dial gauges are shown in Figure 17.
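The fitting step can be sketched with a small least-squares polynomial fit. This is an illustration only (the helper name `polyfit`, the degree, and the sample data are invented here, not taken from the original program); it shows how a low-order polynomial smooths pixel-quantized samples that fluctuate around the true deflection curve.

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations.
    Returns coefficients from lowest to highest order."""
    n = deg + 1
    # Normal-equation system A c = b for the Vandermonde design matrix.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (b[i] - s) / A[i][i]
    return coeffs

# Zigzag samples scattered around a straight "true" line y = 2 + 0.5 x.
xs = [0, 1, 2, 3, 4, 5]
ys = [2.1, 2.4, 3.1, 3.4, 4.1, 4.4]
c0, c1 = polyfit(xs, ys, 1)
print(f"fitted line: y = {c0:.3f} + {c1:.3f} x")
```

The fitted line passes between the alternating high and low samples, which is the effect exploited in Figure 16 to recover the underlying deflection curve.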

4.4. Error Analysis of Photogrammetry

The overall deformation of the structural edge extracted by photogrammetry is compared with the dial gauge readings in Figure 18, and the numerical comparison is given in Table 4. From Figure 18 and Table 4, it can be seen that the structural deformation data obtained by photogrammetry are consistent with the data measured by the dial gauges, with a maximum error of less than 5%. Compared with traditional methods, photogrammetry has a much wider observation range: the proposed method can obtain the displacement of any section without extra instrumentation, and thus better reflects the overall displacement and deformation of the structure.
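The error statistics of Table 4 can be cross-checked as follows (a sketch using values transcribed from the table; the variable names are illustrative): the relative error of each photogrammetric reading R2 against the dial gauge reading R1, and the root-mean-square error over all 20 measurement points.

```python
# (R1 dial gauge / mm, R2 photogrammetry / mm) for each load grade and section.
readings = [
    (7.50, 7.25), (15.95, 15.50), (19.09, 18.75), (16.90, 16.19),   # 200 kN
    (7.74, 7.45), (22.13, 22.69), (27.37, 27.80), (23.49, 23.78),   # 300 kN
    (8.82, 8.47), (28.42, 27.56), (35.45, 34.15), (29.89, 28.94),   # 400 kN
    (8.72, 8.36), (34.34, 33.14), (43.46, 42.18), (36.08, 36.91),   # 500 kN
    (10.33, 9.84), (41.76, 42.81), (52.75, 51.26), (43.28, 42.19),  # 600 kN
]

errors_pct = [abs(r2 - r1) / r1 * 100 for r1, r2 in readings]
rmse = (sum((r2 - r1) ** 2 for r1, r2 in readings) / len(readings)) ** 0.5
print(f"max relative error: {max(errors_pct):.2f} %")  # largest value stays below 5 %
print(f"overall RMSE: {rmse:.2f} mm")                  # close to the 0.82 mm in Table 4
```

This reproduces the "less than 5%" bound (the worst point is the 600 kN left-support reading) and the overall RMSE of roughly 0.8 mm.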

5. Conclusions

Based on digital image processing technology, this paper presents a novel method for monitoring the overall deformation of structures. Using the proposed method, a series of deformation measurement experiments was carried out on a steel–concrete composite beam. The main conclusions are as follows:
(1)
Due to limitations on camera sites, orthogonal projection images are usually difficult to acquire for large engineering structures such as bridges. To deal with this issue, the perspective transformation method is applied to obtain the orthogonal projection of the structure from the originally inclined images. The experimental results show that the orthogonal projection image obtained in this way correctly reflects the overall deformation of the structure.
(2)
In order to characterize the key features of structural deformation, edge detection operators are utilized to obtain the edge contour of the structure from the rectified orthogonal images. The overall deflection curve of the structure is then obtained by locating and calibrating the edge pixels.
(3)
The edge line of the structure, acquired from pixel positions, shows a notable zigzag effect. Further investigation suggests that the zigzag effect can be mainly attributed to the illumination environment: since the image edge of the structure has a certain bandwidth, the final position of the edge pixels within that bandwidth is affected by illumination, which results in fluctuation around the actual deflection curve. To this end, a fitting method is used to minimize the fluctuation and obtain an approximation of the actual deflection curve. Comparison with the data measured by the dial gauges shows that the error of the proposed method is less than 5%.
(4)
Since the proposed method is based on digital images, its accuracy depends on the quality of the available images, even when advanced image processing methods are utilized. A major limitation is that the overall deformation cannot be directly obtained when parts of the measured structure are obscured. In such situations, postprocessing methods, such as fitting, can be applied to approximate the data for the blocked parts.
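The perspective transformation summarized in conclusion (1) can be sketched as a projective mapping. This is a minimal illustration: the 3×3 homography matrix H below is invented for demonstration, whereas in the actual method it would be estimated from four reference points on the structure whose true positions are known.

```python
# Invented homography for illustration; in practice H is solved from four
# point correspondences between the inclined image and the orthogonal plane.
H = [
    [1.2,  0.1,  -30.0],
    [0.05, 1.1,  -12.0],
    [1e-4, 2e-4,   1.0],
]

def warp_point(H, u, v):
    """Map pixel (u, v) of the inclined image to (x, y) in the rectified view."""
    x, y, w = (row[0] * u + row[1] * v + row[2] for row in H)
    return x / w, y / w  # the homogeneous divide produces the perspective effect

print(warp_point(H, 100, 200))
```

Applying this mapping to every pixel (or, inversely, sampling the source image at warped coordinates) yields the orthogonal projection image from which edge contours are extracted.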

Author Contributions

X.C. conceived this study, designed the computational algorithms, wrote the program code, and wrote the manuscript. Z.Z. proposed valuable suggestions and guided the experiments. G.D. and X.D. acquired the test images and performed the experiments. X.J. carried out the measurements and analyzed the experimental data.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 51778094 and 51708068).

Acknowledgments

Special thanks to J.L. Heng at Southwest Jiaotong University.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Perspective transformation.
Figure 2. Illustration of the edge point and noise point.
Figure 3. The tested steel–concrete composite beam (unit: mm); (a) elevational view; (b) sectional view; (c) detail size.
Figure 4. Test loading prototype.
Figure 5. Camera calibration intersection photography.
Figure 6. Camera calibration distortion adjustment; (a) original image; (b) image after distortion correction.
Figure 7. Field layout of the static load test; (a) camera placement position; (b) collecting shape data of the experimental beam.
Figure 8. Perspective transformation of the tested beam; (a) after perspective transformation; (b) integral drawing of the specimen after projection transformation.
Figure 9. The applied edge detection operators; (a) Sobel operator edge detection; (b) Prewitt operator edge detection; (c) Roberts operator edge detection; (d) LoG operator edge detection.
Figure 10. Distribution density of edge pixels by various edge detection operators; (a) edge width distribution of the Sobel operator; (b) edge width distribution of the Prewitt operator; (c) edge width distribution of the Roberts operator; (d) edge width distribution of the LoG operator.
Figure 11. Feature contour extraction; (a) edge of the test beam affected by the environment; (b) sketch map of the edge extraction position.
Figure 12. Edge line of the lower edge of the bridge deck.
Figure 13. Pixel calibration schematic.
Figure 14. Load-displacement curves under various load grades.
Figure 15. Difference between the pixel edge and the actual edge.
Figure 16. The load-displacement curve after fitting.
Figure 17. The load-displacement curve measured by dial gauges.
Figure 18. Error comparison chart.
Table 1. Parametric table of camera and lens.
| Number of Pixels | Size of Sensor | Data Interface | Aspect Ratio | Photo-Sensor |
| 50.6 million | 36 × 24 mm | USB 3.0 | 3:2 | CMOS |
| Image Amplitude | Pixel Size | Lens Type | Focal Length | Lens Relative Aperture |
| 8688 × 5792 | 4.14 μm | EF 24–70 mm f/2.8L | 50 mm | F2.8–F22 |
Table 2. Camera calibration parameters.
| Parameter | Value |
| Principal point coordinate x0 | −0.0769 mm |
| Principal point coordinate y0 | 0.0045 mm |
| Camera principal distance f | 50.9339 mm |
| Radial distortion coefficient K1 | 1.9644 × 10−8 |
| Radial distortion coefficient K2 | 5.6287 × 10−6 |
| Decentering distortion coefficient P1 | 1.4683 × 10−5 |
| Decentering distortion coefficient P2 | −3.8601 × 10−6 |
| Pixel size | 0.004096 mm |
| Image size | 5792 × 8668 |
Table 3. Pixel size calibration table of vertical members.
| Member No. | Number of Pixels | Real Length of Member (mm) | Calibration Value (mm/px) |
| 1 | 344 | 387 | 1.12 |
| 2 | 342 | 379 | 1.10 |
| 3 | 340 | 380 | 1.11 |
| 4 | 345 | 377 | 1.09 |
| 5 | 339 | 376 | 1.10 |
| 6 | 292 | 332 | 1.12 |
| 7 | 346 | 375 | 1.08 |
| 8 | 421 | 473 | 1.12 |
| 9 | 330 | 376 | 1.14 |
| 10 | 295 | 331 | 1.22 |
| 11 | 335 | 377 | 1.12 |
| 12 | 351 | 376 | 1.07 |
| 13 | 342 | 381 | 1.11 |
Average calibration value: 1.12 mm/px.
Table 4. Comparison of displacement measurement error (S = |R2 − R1|; overall RMSE = 0.82 mm).
| Loading Condition (kN) | Dial Gauge Position | Dial Gauge Value R1 (mm) | Photogrammetric Value R2 (mm) | S (mm) | Error S/R1 (%) |
| 200 | Left support | 7.50 | 7.25 | 0.25 | 3.33 |
| 200 | L/4 | 15.95 | 15.50 | 0.45 | 2.82 |
| 200 | 2L/4 | 19.09 | 18.75 | 0.34 | 1.78 |
| 200 | 3L/4 | 16.90 | 16.19 | 0.71 | 4.20 |
| 300 | Left support | 7.74 | 7.45 | 0.29 | 3.75 |
| 300 | L/4 | 22.13 | 22.69 | 0.56 | 2.53 |
| 300 | 2L/4 | 27.37 | 27.80 | 0.43 | 1.57 |
| 300 | 3L/4 | 23.49 | 23.78 | 0.29 | 1.23 |
| 400 | Left support | 8.82 | 8.47 | 0.35 | 3.97 |
| 400 | L/4 | 28.42 | 27.56 | 0.86 | 3.03 |
| 400 | 2L/4 | 35.45 | 34.15 | 1.30 | 3.67 |
| 400 | 3L/4 | 29.89 | 28.94 | 0.95 | 3.18 |
| 500 | Left support | 8.72 | 8.36 | 0.36 | 4.13 |
| 500 | L/4 | 34.34 | 33.14 | 1.20 | 3.49 |
| 500 | 2L/4 | 43.46 | 42.18 | 1.28 | 2.95 |
| 500 | 3L/4 | 36.08 | 36.91 | 0.83 | 2.30 |
| 600 | Left support | 10.33 | 9.84 | 0.49 | 4.74 |
| 600 | L/4 | 41.76 | 42.81 | 1.05 | 2.51 |
| 600 | 2L/4 | 52.75 | 51.26 | 1.49 | 2.82 |
| 600 | 3L/4 | 43.28 | 42.19 | 1.09 | 2.52 |
