1. Introduction and Related Works
Displacement measurement is an important technique required in diverse industries, such as construction, machinery, and robotics. In particular, displacement measurement of critical structural positions in large structures is a vital component of structural health monitoring (SHM). Extensive research has been conducted in the field of displacement measurement, including how well structures are designed to effectively respond to external disturbances such as earthquakes and heavy winds, as well as the level of structural risk that could lead to a building collapsing [1].
Existing contact-type sensors designed to measure displacement have limitations. For example, the displacement of large structures, such as bridges, is difficult to measure due to installation challenges at critical structural positions [2], limitations on the maximum measurable displacement [3], and difficulties in establishing stationary reference points near the structure [4]. In addition to these technical aspects, existing displacement measurement systems also suffer from high costs and a short life-cycle, which limits their cost effectiveness.
Emerging research is focused on displacement measurement technologies that rely on cameras instead of contact-type displacement sensors to measure structural displacement. This is accomplished by using vision-based measurement technologies that recognize and measure the displacement of markers attached at critical structural positions.
An effective SHM system includes displacement measurements at both low and high frequencies. At extremely low frequencies, displacement measurement focuses on the excess lag and the load of a structure over time, while at high frequencies it focuses on structural stress caused by external environmental disturbances, such as earthquakes or heavy winds. While SHM is concerned with displacement at either frequency, monitoring at low frequencies should take priority: the critical points of a structure can move slowly over time, and monitoring at an extremely low frequency can provide early warning signs of building collapse, hence preventing such a collapse from occurring. In this respect, a previous approach measuring displacement at an extremely low frequency through a vision-based displacement measurement system was introduced, so that the risk of a bridge collapsing could be detected in advance [5].
The accuracy of vision-based displacement measurement technology is the most critical aspect for commercialization and is at the core of recent displacement measurement research [6], which statistically considers uncertainty arising from various effects, including camera calibration. Advanced template matching was applied to increase the accuracy of vision-based displacement measurement, which, unlike conventional template matching, detected objects with sub-pixel accuracy [7]. Sladek showed that the accuracy of displacement measurement can be improved by correcting for camera lens distortion and scale distortion [8]. Meanwhile, a previous study [9] showed that the accuracy of displacement measurement could be increased by using the random sample consensus (RANSAC) algorithm to accurately extrapolate two orthogonal lines drawn on the marker.
Moreover, Fukuda developed a stand-alone vision-based displacement measurement system, based on a connected laptop and camcorder, which allowed displacement measurement to be achieved through a wireless Internet connection and further reinforced the cost and installation advantages of vision-based displacement measurement systems [10].
Outside of these studies, Park [1] and Lee [11] successfully applied a vision-based displacement measurement system to high-rise buildings [1], while Kim developed a video-based algorithm to measure the tension of hanger cables connecting a bridge to the pier [2].
The latest advances in vision-based sensors and their applications have also been introduced. A cost-effective vision-based structural health monitoring approach has been presented [12], in which displacement is accurately measured by considering the tilt between the target and the marker. Cable tension measurement based on a vision-based sensor has also been successfully demonstrated [13].
Despite these advantages, conventional studies on vision-based displacement systems have relied on images generated from a fixed viewing angle camera, which necessitated a trade-off between the field of view (FOV) and spatial resolution. In other words, any effort to increase the FOV by raising the number of markers resulted in a decrease in spatial resolution. Similarly, any effort to increase the measurement accuracy by increasing the spatial resolution resulted in a lower number of markers and decreased FOV. Hence, conventional studies on displacement measurement relied on 1–2 markers per camera to measure displacement in large structures or focused on smaller structures for experiments.
The existing methods are limited in their measurable range because they rely on a fixed camera with a narrow, constant FOV. This paper proposes a vision-based displacement measurement system based on a pan-tilt-zoom (PTZ) camera, which extends the measurable range of displacement monitoring.
To perform an early estimation of collapse risk for a building structure, we assume that displacement occurs at a very low frequency and that the markers move slightly along the plane to which they are attached. The displacement of each marker is monitored sequentially using the visual information printed on the marker.
The algorithm underlying the PTZ displacement measurement system can be summarized as follows. First, template matching is applied to find the region of interest (ROI) of the marker. Then, by fitting a RANSAC-based ellipse model to the coordinates of the marker contours within the ROI, the center point of the marker is detected with sub-pixel accuracy. To preserve the accuracy of displacement measurement, the center point is then mapped onto a perspective-distortion-corrected plane, which we call the standard plane.
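As a concrete illustration of the center-detection step, the following Python sketch fits an ellipse to noisy contour points with a simple RANSAC loop and recovers the sub-pixel center from the conic coefficients. It is a minimal stand-in for our C/C++ implementation: the synthetic contour, the Sampson-distance threshold, and the iteration count are illustrative assumptions, and the template-matching and standard-plane mapping steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_conic(pts):
    # Fit a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0 via the SVD null vector.
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M)
    return vt[-1]

def sampson_dist(conic, pts):
    # First-order approximation of the geometric distance to the conic.
    a, b, c, d, e, f = conic
    x, y = pts[:, 0], pts[:, 1]
    val = a*x*x + b*x*y + c*y*y + d*x + e*y + f
    gx = 2*a*x + b*y + d
    gy = b*x + 2*c*y + e
    return np.abs(val) / np.maximum(np.hypot(gx, gy), 1e-12)

def ransac_ellipse_center(pts, iters=200, thresh=1.0):
    best_inliers = None
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        conic = fit_conic(sample)
        inliers = sampson_dist(conic, pts) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    conic = fit_conic(pts[best_inliers])  # refit on the consensus set
    a, b, c, d, e, _ = conic
    # The ellipse center is where the conic's gradient vanishes.
    cx, cy = np.linalg.solve([[2*a, b], [b, 2*c]], [-d, -e])
    return cx, cy

# Synthetic contour of a projected circle (an ellipse) + noise + outliers.
t = np.linspace(0, 2*np.pi, 60, endpoint=False)
pts = np.column_stack([100.5 + 30*np.cos(t), 60.25 + 20*np.sin(t)])
pts += rng.normal(0, 0.3, pts.shape)
pts[:8] += rng.uniform(5, 15, (8, 2))   # spurious contour points

cx, cy = ransac_ellipse_center(pts)
print(cx, cy)   # close to the true center (100.5, 60.25)
```

RANSAC makes the center estimate robust to spurious contour fragments (shadows, partial occlusion), which is why a consensus step precedes the final least-squares conic fit.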
This paper is organized as follows:
Section 2 describes a displacement measurement system based on a PTZ camera and explains the algorithm underpinning our proposed system;
Section 3 describes the experimental results on diverse structures;
Section 4 concludes the paper.
3. Experimental Results
The purpose of this paper is to verify the effectiveness of a displacement measurement system based on images generated by a PTZ camera. We evaluate the proposed system on markers moving at a low frequency to observe the accuracy of the displacement measurements. The markers used in the tests are listed in Table 1.
The test environment setup is described in Figure 12. Nine markers are distributed on several structures at various distances. The PTZ camera rotates, sequentially captures the markers, and transmits the projected images via a wireless channel. A Hanwha Techwin PTZ camera (model SNP-6321H) was used to capture the marker images, and our proposed software performs the image-processing algorithm on the host side, implemented in C/C++. The SNP-6321H has a CMOS sensor with a maximum resolution of 1920 × 1080 pixels and a video transmission rate of 30 frames per second (FPS); the focal length of its optical lens varies over the range of 4.44–142.6 mm, with 32× optical zoom and automatic focus. The angle of view (AOV) ranges from 62.8° (Wide) to 2.23° (Tele) in the horizontal direction and from 36.8° (Wide) to 1.26° (Tele) in the vertical direction. The camera is installed on a tripod to monitor three buildings with nine markers. The PTZ camera sequentially obtains the images by rotating and zooming to the PTZ coordinates stored as preset data for each marker; it then wirelessly transfers the data to the host computer.
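For intuition on how the AOV translates into spatial resolution, the following back-of-the-envelope calculation estimates the sampling interval on the target plane at the nearest and farthest marker distances used in our tests. It assumes (as an idealization) that the full horizontal AOV is spread uniformly over the 1920 sensor columns and ignores lens distortion.

```python
import math

def mm_per_pixel(distance_m, aov_deg, pixels=1920):
    """Target-plane sampling interval for a camera at distance_m with
    horizontal angle of view aov_deg spread over `pixels` columns."""
    fov_width_mm = 2 * distance_m * 1000 * math.tan(math.radians(aov_deg) / 2)
    return fov_width_mm / pixels

# Nearest marker, no zoom (widest AOV of the SNP-6321H).
print(round(mm_per_pixel(5, 62.8), 2))    # → 3.18 mm/pixel
# Farthest marker, full 32x zoom (narrowest AOV).
print(round(mm_per_pixel(72, 2.23), 2))   # → 1.46 mm/pixel
```

This illustrates the benefit of the PTZ approach: at full telephoto, a marker 72 m away is sampled even more finely than a 5 m marker at the widest setting.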
The camera can acquire images at 30 frames per second for each marker, but in this experiment the camera captures 10 images per second and transfers them to the host computer, in consideration of the bandwidth of the wireless channel. This means that the time allowed to analyze one image is 100 ms; our implemented system performs the entire displacement measurement algorithm in 73 ms. We analyze the 10 captured images for each marker within one second, and rotating the camera to the next marker takes one second, so the processing time for each marker is 2 s. The sequential analysis of the remaining markers is performed iteratively by rotating the camera. Therefore, one round of analysis for all nine markers described in Table 1 takes a total of 18 s. We performed a total of 10 rounds of analysis for the experiment in Table 2.
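The timing budget can be summarized with a short sanity-check calculation (a sketch reproducing the arithmetic in the text; the variable names are ours):

```python
markers = 9
frames_per_second = 10        # images transferred per second (wireless budget)
capture_time_s = 1            # the 10 frames are analyzed within one second
rotation_time_s = 1           # pan/tilt/zoom move to the next preset

per_image_budget_ms = 1000 / frames_per_second
round_time_s = markers * (capture_time_s + rotation_time_s)

print(per_image_budget_ms)    # → 100.0 (ms per image, vs. 73 ms measured)
print(round_time_s)           # → 18 (s per round over all nine markers)
```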
The tests were conducted on nine markers at distances from the camera of 5 m to 72 m, with images captured using the H.264 codec at 1920 × 1080 resolution, as shown in Table 1 and Table 2. Each marker was printed on 210 mm × 297 mm (A4-size) paper and consisted of five red circles spaced 69 mm from the center. The results for markers 1–9 are shown below; we measured the trajectory of displacement over a specific period of time.
Figure 13a shows the implemented host interface for monitoring the displacement of a target marker in real time, including the measurement result when the target marker does not move, as shown in Figure 13b. After the user selects the target marker and manually focuses on the monitored region, the detection algorithm performs center-position detection, feature extraction, and perspective distortion correction. As shown on the left of Figure 13a, we first check the amount of error in the case of no displacement to verify the experimental environment, including stable camera installation. Figure 13b shows a result of long-term displacement measurement.
As shown in Figure 13b, the error in the y-direction is larger than that in the x-direction. The marker installed on the target wall is projected onto a different plane in the camera, which causes spatial distortion in the positions of the projected circles, because the image sensor captures the circles at relatively different spatial resolutions. Given this distortion angle, the displacement error measured in the y-direction is slightly larger than that in the x-direction.
Figure 14 shows the displacement measurement results for each marker at different distances and angles. The evaluation results indicate that our approach exhibits small variation with respect to the distance between the camera and the markers.
The evaluation results are summarized as follows:
A. Although the distance between the camera and the target marker increases, the variation in spatial resolution in the displacement measurement remains relatively small
Existing vision-based displacement measurement approaches that use a lens with fixed magnification suffer a large degradation of spatial resolution as the distance between the camera and the marker increases. Consequently, they require a more accurate lens specification to cover the maximum distance or to extend the measurable range. Because our approach can select a specific zoom level and lens angle of the PTZ camera according to each marker's location, our implemented system maintains a stable spatial resolution even when the distance increases greatly, as shown in Table 2.
The measured displacement, captured multiple times, is statistically summarized in terms of the RMS error with the following equation:

\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(d_i - GT\right)^2}

where d_i is the i-th measured displacement and the ground truth (GT) is the actual displacement value that we manually applied as a reference, for which 5 cm, 10 cm, 15 cm, 20 cm, and 25 cm are used in the experiment in Figure 14. The number of measurements N for each GT is 100.
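A minimal Python sketch of this RMS computation (the function name and the sample readings are illustrative, not taken from our implementation):

```python
import numpy as np

def rms_error(measurements_mm, gt_mm):
    """Root-mean-square error of repeated displacement measurements
    against the manually applied ground-truth (GT) displacement."""
    d = np.asarray(measurements_mm, dtype=float)
    return float(np.sqrt(np.mean((d - gt_mm) ** 2)))

# Hypothetical readings (mm) for a manually applied 50 mm displacement.
print(round(rms_error([48.0, 52.0, 49.0], 50.0), 2))  # → 1.73
```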
B. Extending the measurable range by monitoring the target at a slightly lower sampling rate
As explained, we adopted a PTZ-camera-based vision sensing approach to monitor markers installed at various locations. Our approach extends the measurable range for monitoring the target structure compared to the existing methods. Table 3 shows a comparison in terms of the available measurement range, the allowed sampling rate, and the effective resolution.
The existing approaches monitor the displacement of a static target on a single static spatial plane, whereas our approach presents a case study with multiple markers on structures distributed at various distances of 5–72 m. The individual planes of the multiple markers are sequentially projected onto the standard plane of the PTZ-based vision system using the proposed distortion cancellation technique. Because our approach monitors all markers with one PTZ camera, the total sampling rate for each target decreases; however, this drawback can be ignored under the assumption of extremely slow movement of the building structure.
There are also various factors that cause additional errors in the measurement result relative to the actual displacement, which was manually controlled in the test environment and measured with a conventional distance-measuring instrument. Considering application-specific requirements, we assume that the proposed system must recognize centimeter-order movement of the target wall; our experimental environment could measure displacement at the millimeter scale. Camera calibration, which reduces the errors caused by lens distortion, must be performed for more accurate measurement at the specific zoom levels, distances, and angles of the multiple targets. The experiment was conducted on multiple targets at various distances and over a wide range of angles. The current approach still lacks a statistical analysis and calibration for camera lens distortion; this paper focuses on demonstrating the feasibility of displacement measurement using a PTZ-based vision system. Further analysis of the quantitative relationships causing measurement errors is needed to improve the overall accuracy and resolution in future work.