Review

Computer Vision-Based Bridge Inspection and Monitoring: A Review

1 College of Civil Engineering, Hunan University, Changsha 410082, China
2 Key Laboratory for Damage Diagnosis of Engineering Structures of Hunan Province, College of Civil Engineering, Hunan University, Changsha 410082, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(18), 7863; https://doi.org/10.3390/s23187863
Submission received: 17 August 2023 / Revised: 7 September 2023 / Accepted: 8 September 2023 / Published: 13 September 2023
(This article belongs to the Special Issue Recent Advances in Structural Health Monitoring and Damage Detection)

Abstract:
Bridge inspection and monitoring are usually used to evaluate the status and integrity of bridge structures to ensure their safety and reliability. Computer vision (CV)-based methods have the advantages of being low cost, simple to operate, remote, and non-contact, and have been widely used in bridge inspection and monitoring in recent years. Therefore, this paper reviews three significant aspects of CV-based methods, including surface defect detection, vibration measurement, and vehicle parameter identification. Firstly, the general procedure for CV-based surface defect detection is introduced, and its application for the detection of cracks, concrete spalling, steel corrosion, and multi-defects is reviewed, followed by the robot platforms for surface defect detection. Secondly, the basic principle of CV-based vibration measurement is introduced, followed by the application of displacement measurement, modal identification, and damage identification. Finally, the CV-based vehicle parameter identification methods are introduced and their application for the identification of temporal and spatial parameters, weight parameters, and multi-parameters are summarized. This comprehensive literature review aims to provide guidance for selecting appropriate CV-based methods for bridge inspection and monitoring.

1. Introduction

Bridges play a key role in the transportation infrastructure system [1,2,3]. A bridge structure is subjected to the coupling effects of multiple factors such as wind, earthquakes, impact, and vehicle load. In addition, the degradation of the material itself will cause varying degrees of damage during the service period. Therefore, ensuring the safety and reliability of bridge structures is one of the important tasks in the operation, maintenance, and management of civil infrastructure systems [4,5,6].
Bridge inspection and monitoring are usually used to evaluate the status and integrity of bridge structures, and mainly include visual inspection, nondestructive detection, and health monitoring. Traditional visual inspection has difficulty meeting the inspection requirements of modern bridges due to its relatively subjective decision-making process, low detection efficiency, and low safety. To improve the efficiency of bridge surface defect detection and promote the intelligence of the industry, many researchers have used cameras to collect images of bridge surface defects [7,8,9,10] and have combined deep learning (DL) and computer vision (CV) techniques to analyze the camera-recorded images intelligently [11,12,13,14,15,16]. In addition, vibration measurement is very important for evaluating the integrity and safety of bridges. The traditional vibration measurement method obtains the dynamic response of a bridge through sensors installed on the bridge. However, this is a contact method, with disadvantages such as difficult sensor installation, high equipment cost, and low testing efficiency. To overcome these problems, some scholars have proposed non-contact vibration measurement methods [17,18].
In recent years, with the continuous advancement of CV techniques and image acquisition equipment, CV-based bridge inspection and monitoring methods have emerged and have been validated in practical engineering applications [19,20,21,22,23]. They have attracted the attention of researchers and engineers due to their advantages, such as being low-cost, simple to operate, remote, and non-contact [24]. CV-based inspection and monitoring methods have been widely used in a variety of tasks for structural health monitoring (SHM), such as surface defect detection in bridges [25,26,27], vibration measurements [28,29,30], and vehicle parameter identification [31,32].
Some scholars have published related reviews of CV-based SHM. Zhou et al. [33] introduced popular DL-based crack segmentation algorithms and summarized some publicly available large-scale crack image datasets and popular performance metrics for crack detection. Deng et al. [34] reviewed the current state-of-the-art CV-based crack identification and quantification systems. Jeong et al. [35] summarized CV-based crack detection algorithms and reviewed the application of unmanned aerial vehicles (UAVs) for crack detection. Zhuang et al. [36] reviewed the applications of CV techniques to structural deformation monitoring from two aspects: the impact of physical factors and the impact of target-tracking algorithms. Poorghasem et al. [37] summarized robot-based approaches to structural vibration measurement from the two aspects of hardware and software, and discussed their challenges and opportunities. However, most of the above literature reviews focused on crack detection and deformation monitoring of civil engineering structures; there is a lack of systematic reviews on bridge inspection and monitoring based on CV techniques. To fill this gap, this study reviews state-of-the-art applications of CV techniques for bridge inspection and monitoring. The principles and general procedures of surface defect detection, vibration measurement, and vehicle parameter identification based on CV techniques are introduced, followed by a review of their applications. This review will help researchers to gain a rapid and systematic understanding of bridge inspection and monitoring based on CV techniques, and an idea of the challenges that still need to be overcome.
This review is organized as follows: Section 2 describes the principle of CV-based surface defect identification and summarizes its application to bridge inspection. Section 3 reviews the principle of CV-based vibration measurement and its application to the identification of displacement, modal parameters, and structural damage. Section 4 introduces the CV-based vehicle parameter identification methods and reviews their application to the identification of vehicles’ temporal, spatial, and weight parameters. Finally, Section 5 provides conclusions and future research prospects.

2. CV-Based Surface Defect Detection

Surface defect detection is an essential part of bridge inspection for the condition assessment, maintenance, and management of bridges. In recent years, CV-based methods for surface defect detection have developed rapidly and become a research hotspot [3,38,39,40].

2.1. General Procedure

Figure 1 shows the general procedure of CV-based surface defect detection, consisting of five steps, i.e., image acquisition, defect detection, quantification, assessment, and decision [41].

2.1.1. Image Acquisition

The acquisition equipment for bridge surface defect images in the existing work is mainly divided into four categories: industrial cameras, commercial cameras, RGB-D cameras, and 3D laser scanners [41]. Industrial/commercial cameras are widely used for surface defect image acquisition on bridges due to their ease of deployment. However, industrial/commercial cameras cannot acquire depth information about surface defects. To solve this problem, some scholars have proposed using an RGB-D camera to obtain depth information about surface defects. In addition, 3D laser scanning technology is also used to obtain surface defect images of bridges.
The collection platforms for bridge surface defect images are divided into three categories: handheld, vehicle-mounted, and UAV-based [41]. Smartphones are commonly used handheld devices. Compared with the handheld method, vehicle-mounted cameras have the advantages of intelligence and high efficiency. However, a vehicle-mounted camera cannot be directly used to collect images of surface defects on piers, pylons, and cables. To solve this problem, some scholars have proposed UAV-based surface defect image acquisition systems.

2.1.2. Defect Detection

The detection algorithms for CV-based surface defect detection are mainly categorized into two types: image processing algorithms and DL algorithms. The widely used image processing algorithms include the threshold algorithm, edge algorithm, and region algorithm; a minimal example combining these classical methods follows the list below.
  • Threshold algorithm. The threshold method is the most commonly used image segmentation algorithm. Its principle is to divide the image into several different classes overall or locally based on one or more gray thresholds to separate the target and the background [42]. The threshold segmentation methods mainly include the gray histogram threshold method, the Otsu algorithm, and the iterative threshold segmentation method [43].
  • Edge algorithm. The edge algorithm is a segmentation technology based on image discontinuity, which is often completed by convolution based on edge differential operators. According to different calculation methods of gradients, traditional edge detection operators can be divided into two categories: first-order derivatives (Sobel gradient operator, Canny gradient operator, Prewitt gradient operator) and second-order derivatives (Laplace operator). The Sobel operator and Canny operator are most widely used in the field of crack extraction [44,45].
  • Region algorithm. Traditional edge detection methods are prone to producing broken segments when extracting tiny cracks, so some scholars have studied region-based crack extraction algorithms that recover complete, connected cracks. The seed-growing algorithm is a common region-based algorithm [46].
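To make these steps concrete, the following is a minimal sketch, in Python with OpenCV, of a classical crack extraction pipeline combining Otsu thresholding, Canny edge detection, and a simple region filter; the thresholds and the minimum-area value are illustrative assumptions rather than values from the reviewed studies.

```python
# A hedged sketch of a classical crack extraction pipeline (threshold + edge
# + region filtering); all parameter values are illustrative assumptions.
import cv2
import numpy as np

def extract_cracks(image_path: str) -> np.ndarray:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Threshold algorithm: Otsu's method separates dark cracks from the
    # lighter concrete background; the binary result is inverted because
    # cracks are darker than the surrounding surface.
    _, otsu_mask = cv2.threshold(gray, 0, 255,
                                 cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Edge algorithm: the Canny detector locates crack boundaries.
    edges = cv2.Canny(gray, 50, 150)

    # Region step: keep connected components large enough to be cracks,
    # discarding isolated noise pixels (a crude stand-in for seed growing).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(otsu_mask)
    mask = np.zeros_like(otsu_mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] > 100:  # assumed minimum crack area
            mask[labels == i] = 255

    # Combine the region-filtered mask with dilated edges.
    return cv2.bitwise_and(mask, cv2.dilate(edges, np.ones((3, 3), np.uint8)))
```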
The DL-based surface defect detection method does not require pre-definition of defect features or image preprocessing. Instead, features are learned automatically from a large number of training samples, on the basis of which surface defects are identified and extracted. The DL algorithm can effectively eliminate the influence of structural surface noise and interference and extract surface defects more accurately. DL-based surface defect detection algorithms are divided into three categories: image classification algorithms, target detection algorithms, and image segmentation algorithms [47]; a minimal segmentation sketch follows the list below.
  • Image classification algorithm. Convolutional neural networks (CNNs) are the current mainstream image classification algorithm. This type of algorithm was first proposed by LeCun et al. [48] and applied to the recognition of handwritten digits. Its network structure consists of multiple convolutional layers and several fully connected layers. The input image passes through convolution, pooling, activation, and other operations before entering the fully connected layers, which finally output the classification result. Commonly used network structures include AlexNet [49], GoogLeNet [50], VGGNet [51], and ResNet [52].
  • Target detection algorithm. The target detection algorithm must both categorize the object of interest and surround it with a rectangular bounding box in the image. Target detection algorithms are divided into two categories: the first is the candidate region-based algorithm, represented by R-CNN [53,54]; the second is the regression-based algorithm, represented by YOLO [55,56]. The latter is called a single-stage object detector because it has no candidate region generation step and directly outputs the classification and location of the object. In terms of performance, single-stage object detectors are therefore computationally faster and can achieve real-time detection, but their accuracy is worse than that of two-stage object detectors.
  • Image segmentation algorithm. The fully convolutional network (FCN) algorithm can achieve pixel-level segmentation of crack images. This method was first proposed by Long et al. [57]. Its network structure replaces the fully connected layers of the CNN with convolutional layers, so that it can accept input images of any size. In crack extraction, the image segmentation algorithm can directly complete the whole process of extracting cracks from the original image [58,59]. The encoder–decoder structure represented by U-Net [60,61] is more widely used in this field, while the dilated/atrous convolution structure represented by Deeplab [62] is less used.
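As a concrete illustration of the encoder–decoder structure just described, below is a hedged, minimal PyTorch sketch of a fully convolutional crack segmentation network; the layer widths are illustrative assumptions, and the networks in the reviewed literature are considerably deeper.

```python
# Minimal encoder-decoder sketch for pixel-level crack segmentation in
# PyTorch; layer sizes are illustrative assumptions, not a published design.
import torch
import torch.nn as nn

class TinyCrackNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: convolutions extract features while halving resolution.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Decoder: transposed convolutions restore full resolution, so the
        # network accepts any input size and outputs a per-pixel crack score.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.dec(self.enc(x))  # logits; apply sigmoid for probability

# Usage: a batch of RGB crack images -> per-pixel crack probability map.
model = TinyCrackNet()
probs = torch.sigmoid(model(torch.randn(1, 3, 256, 256)))  # (1, 1, 256, 256)
```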

2.1.3. Quantification

The quantification of surface defects is performed to determine the location, direction, length, and width of the defects. The methods for defect length measurement include the Euclidean distance method, chain code method, and skeleton line method [63]. The methods for defect width measurement include the average width method, centerline method, inscribed circle method, edge line minimum distance method, gray value method, and edge gradient method [64]. The methods for defect area measurement include the pixel equivalent method and approximate estimation method.
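The skeleton line and inscribed circle ideas can be sketched compactly; the following is a minimal example using scikit-image and SciPy, where the pixel size is an assumed calibration input.

```python
# A hedged sketch of skeleton-line length measurement and an
# inscribed-circle-style width measurement; pixel_size is an assumed
# physical pixel size obtained from calibration.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def quantify_crack(binary_mask: np.ndarray, pixel_size: float):
    """binary_mask: crack pixels == 1. Returns (length, mean_width)."""
    skeleton = skeletonize(binary_mask.astype(bool))

    # Skeleton line method: crack length ~ number of skeleton pixels.
    length = skeleton.sum() * pixel_size

    # The distance transform gives, at each skeleton pixel, the radius of
    # the largest inscribed circle; width is roughly twice that radius.
    dist = distance_transform_edt(binary_mask)
    mean_width = 2.0 * dist[skeleton].mean() * pixel_size
    return length, mean_width
```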

2.1.4. Assessment and Decision

After obtaining the location and size of bridge surface defects through identification and quantification algorithms, it is necessary to combine historical bridge condition data with bridge specifications to evaluate the operational status of the bridge. The inspection and monitoring data then provide a scientific basis for decision making in bridge safety monitoring, operation, maintenance, and management [65].

2.2. Application of Surface Defect Detection in Bridges

Surface defect detection is of great significance to the safe operation and maintenance of bridges. For different types of defects, many researchers have developed corresponding detection methods using CV techniques.

2.2.1. Concrete Cracks

Cracks are among the most common surface defects of concrete bridges, reflecting the safety and durability of bridge structures. To ensure the safe operation of bridges, it is necessary to conduct regular detection of concrete cracks, which requires accurate and rapid concrete crack detection methods. Li et al. [66] developed an automatic bridge crack detection system based on a UAV and Faster R-CNN. Dung et al. [59] proposed a concrete crack detection method based on a deep fully convolutional network (FCN); however, this method can only identify cracks and cannot quantify them automatically. Zhang et al. [67] proposed an automatic pixel-level crack detection method based on an improved U-net network (i.e., Crack U-net). However, Crack U-net cannot accurately detect cracks in low-resolution (LR) images. To solve this problem, Xiang et al. [68] proposed an automatic tiny-crack detection method based on super-resolution (SR) reconstruction and semantic segmentation, as shown in Figure 2. The SR image reconstructed using a DL-based model is input into the proposed semantic segmentation network for crack segmentation, and the crack length and width are measured using an improved medial axis transformation method. Due to the lack of open-source crack datasets (the open-source bridge surface defect datasets are listed in Table 1), fully supervised segmentation methods require the manual labeling of a large amount of data, which is time-consuming. To solve this problem, Wang et al. [69] proposed a semi-supervised semantic segmentation network for crack detection. However, this method has a poor ability to detect tiny cracks. To address the difficulty of segmenting tiny cracks, Chu et al. [70] proposed a multi-scale feature fusion network with an attention mechanism to capture the local features of tiny cracks.
After identifying the cracks using crack detection algorithms, it is necessary to analyze and quantify the cracks, including their length and width. Dare et al. [75] used manual selection of the start and end points of the cracks to connect the cracks into a polyline, and calculated the crack width according to the threshold. Fujita et al. [76] applied median filtering and multi-scale linear filtering for noise reduction in the crack region, and calculated the crack width through a local adaptive threshold. Luo et al. [77] calculated the crack width using the minimum distance between two crack edges. Kim et al. [78] used the two closest edge points to the crack skeleton point to measure the crack width. Flah et al. [79] used the DL algorithm and the improved Otsu algorithm to quantify the crack length, width, and orientation angle. Liu et al. [80] combined VGG-16 and morphology to segment cracks at the pixel level and quantify their lengths and widths. Miao et al. [81] quantified the width and direction of cracks using the TaHE threshold algorithm after histogram equalization. Jahanshahi et al. [82] used the correlation values between a strip kernel and a subset of binary crack images to calculate crack width and orientation.
Most of the current research has focused on crack identification and segmentation algorithms. However, the development of cracks often reveals the damage mechanism of the structure, and the tracking and monitoring of existing cracks could be useful for analyzing the performance of bridge structures. Therefore, further research is required to determine how to accurately screen out and monitor cracks that affect structural safety.

2.2.2. Concrete Spalling

Concrete spalling can result in exposed rebar, and the exposed rebar is prone to rust after long exposure to air. Therefore, early detection of concrete spalling is necessary to ensure the integrity of the bridge structure. In recent years, many scholars have used CV techniques to detect concrete spalling. Cao et al. [83] proposed an automatic concrete spalling detection method based on LogitBoost classification and regression tree modeling. Santos et al. [84] proposed an automatic identification method for concrete spalling in reinforced concrete bridges based on AlexNet transfer learning, with an accuracy of 99.1%. Hoang et al. [85] used a Gabor filter to extract texture information of concrete spalling and a logistic regression model based on state-of-the-art adaptive moment estimation to detect concrete spalling. However, the method was unable to detect minor spalling. To solve this problem, Hoang et al. [86] developed a method for detecting minor spalling of concrete surfaces based on image texture analysis and a novel jellyfish search optimizer. Nguyen et al. [87] enhanced the prediction performance of the extreme gradient boosting machine (XGBoost) using the meta-heuristic Aquila optimizer, and employed XGBoost and a deep convolutional neural network (DCNN) to categorize concrete spalling into shallow and deep spalls. Abdelkader et al. [88] proposed an entropy-based method for detecting and evaluating the severity of concrete bridge spalling.

2.2.3. Steel Structure Corrosion

In environments such as marine, hot and humid, acid rain, and salt spray conditions, the surfaces of steel structures are extremely prone to corrosion. The main corrosion types in steel bridges include surface corrosion and cable/suspender corrosion. In recent years, with the rapid development of CV and UAV technology, many scholars have applied CV techniques and DL algorithms to detect steel structure corrosion. Forkan et al. [89] proposed a framework for detecting corrosion of steel bridges based on UAV vision. Dong et al. [90] proposed a multi-vision scanning system for detecting corrosion on cable surfaces and used a panoramic image stitching algorithm to identify corrosion defects on the surface of the cables. Hou et al. [91] proposed an automatic detection method for surface defects of cables based on the Mask R-CNN network; compared with other networks, the proposed method has higher applicability and accuracy. To improve the accuracy and speed of cable surface defect detection, Huang et al. [92] proposed an intelligent CNN-based cable damage detection method, which can automatically extract cable surface image features. Khayatazad et al. [93] developed a corrosion detection algorithm for steel structures by combining two visual parameters, roughness and color, which can be used to effectively locate corrosion areas. To realize the precise location of corrosion defects, Meng et al. [94] developed a lightweight DL model for cable corrosion detection by combining intelligent image recognition and magnetic memory technology. This method can precisely identify and localize cable corrosion defects, with a detection accuracy as high as 97.18%.

2.2.4. Multi-Defects

There are various types of bridge surface defects, and different defects may overlap. However, a traditional CNN can only detect a single defect type. To solve this problem, Dunphy et al. [95] utilized transfer learning-based generative adversarial networks for multi-defect detection. Hüthwohl et al. [96] proposed a multi-classifier for reinforced concrete bridge defects that can reliably classify multiple defect types with an accuracy of 85%. Cha et al. [97] proposed a Faster R-CNN-based multi-defect detection method for steel bridges, which can detect concrete cracks, steel corrosion at two levels (medium and high), bolt corrosion, and steel delamination. Compared with the traditional CNN, Faster R-CNN has higher computational efficiency. To further improve the accuracy of multi-defect detection, Kim et al. [98] proposed a concrete damage detection method based on Mask R-CNN. The method can detect cracks, efflorescence, rebar exposure, and spalling with an accuracy of more than 90% in outdoor environmental tests. Zhang et al. [99] proposed a multi-defect detection method for bridges by combining YOLOv3 and transfer learning; the testing accuracy was greatly improved compared with the traditional YOLOv3. Li et al. [100] developed a multi-defect detection method for concrete bridges based on the FCN network, which can detect cracks, spalling, efflorescence, and holes and has good applicability in practice. However, the above methods cannot achieve real-time detection of multiple defects. To solve this problem, Ali et al. [101] developed a real-time UAV-based bridge multi-defect detection system utilizing an improved Faster R-CNN.
Researchers have proposed many CV-based methods to detect bridge surface defects. However, many problems in this field have not been adequately addressed. Some relevant challenges are summarized as follows:
(1)
Due to the lack of open-source datasets of bridge surface defects, researchers need to perform time-consuming and labor-intensive data preparation. The establishment of public datasets with generalization capability is urgently needed to enable automated bridge surface defect detection [41].
(2)
CV-based surface defect detection systems are susceptible to their own hardware limitations and to external environmental factors, both of which can adversely affect detection performance [65].
(3)
It is difficult for existing DL algorithms to realize real-time detection of bridge surface defects. The computational efficiency needs to be improved, although some improvement has been realized through lightweight DL models [102].
(4)
Most current research focuses on how to utilize advanced algorithms to identify surface defects of bridges, while the assessment of defect severity requires further research.

2.3. Robotic Platforms

With the rapid development of artificial intelligence and robotics, various robotic platforms for bridge inspection have been developed. La et al. [103] integrated an advanced nondestructive testing technique into an automated robot to achieve accurate detection of bridge deck cracks. However, the robot was unable to detect cracks at the bottom of the bridge. Xie et al. [104] developed a new vehicle-based robotic inspection system with the ability to detect surface defects on the bridge deck and under the bridge. However, the ground-based inspection vehicle could not be directly used for defect detection of hard-to-reach structural components such as cables, bridge towers, piers, and bridge bearings. Therefore, further development of more complex robotic platforms with multiple locomotion capabilities is required. Leibbrandt et al. [105] designed a wall-climbing robot that can be attached to the bottom of a bridge to detect cracks. For the piers and abutments of a bridge, a wall-climbing robot can be utilized for crack detection [106,107,108]. In addition, Jang et al. [109] developed a ring-shaped wall-climbing robot (see Figure 3) that can automatically detect cracks on bridge piers at height, avoiding the safety hazards of working at height. Boomeri et al. [110] designed a cable-climbing robot with adaptive force control capability for detecting surface defects of cables. The existing robotic platforms for bridge surface defect detection are summarized in Table 2.
Compared with traditional visual inspection methods, wall-climbing robots have the advantage of reducing worker safety risks and maintenance costs. However, the load-carrying capacity of the wall-climbing robot platform is relatively low, so few sensors can be integrated into the platform for defect detection. In addition, different wall-climbing robots need to be designed for different bridge components. Cable-climbing robots have been widely used to detect surface defects in cables; however, a cable-climbing robot can only inspect one cable at a time, which limits detection efficiency. Compared with ground-mobile robots, wall-climbing robots, and cable-climbing robots, UAVs have the unique advantage of being able to approach hard-to-reach regions of bridges. One bottleneck in applying UAVs to engineering practice is the short flight duration permitted by lithium-ion batteries. Another problem is how to locate the UAV in environments where GPS is unavailable.
Different robot platforms for surface defect detection have been developed for different bridge components. The development of a robot platform for surface defect detection with many applicable scenarios and low cost will be the focus of future research.

3. CV-Based Vibration Measurement

Vibration measurement is a key part of bridge SHM. The vibration displacement and modal parameters of a bridge can be obtained through vibration measurement. The modal parameters are important indicators of the health of a bridge, and damage identification and performance evaluation can be performed through changes in the modal parameters. The traditional vibration measurement method depends on contact sensors, with disadvantages including high cost, difficult sensor installation, low precision, and poor real-time performance, which make it difficult to meet the demand for real-time monitoring of bridge dynamic response. In recent years, CV-based vibration measurement methods have developed rapidly. Due to their non-contact nature, full-field measurement capability, and strong real-time performance, they have been widely used in displacement measurement and modal identification.
This section introduces the general procedure of CV-based vibration measurement, followed by a review of applications in displacement measurement, modal identification, and damage identification.

3.1. General Procedure

The CV-based vibration measurement mainly includes four steps: camera calibration, feature extraction, target tracking, and displacement calculation [65,123], as indicated in Figure 4.

3.1.1. Camera Calibration

The main purpose of camera calibration is to explore the projection relationship from 3D world coordinates to 2D image coordinates, thereby realizing the transformation of each point in the image to the 3D world. In the process of projection, the image will have central perspective distortion [125] and radial distortion [126], and camera calibration is also implemented to correct the image distortion.
  • General camera calibration. Camera calibration estimates the intrinsic parameters, distortion coefficients, and extrinsic parameters of the camera. The intrinsic parameters and distortion coefficients are determined by the camera and its lens, while the extrinsic parameters are determined by the position and orientation of the camera. During the calibration process, both the intrinsic and extrinsic matrices need to be estimated.
The coordinates in the 3D world are projected to the 2D image coordinates through the camera, and the transformation expression is as follows:
$$ s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{1} $$
The simplified expression of Equation (1) is as follows:
$$ s \mathbf{x} = \mathbf{K} \left[ \mathbf{R} \mid \mathbf{t} \right] \mathbf{X} \tag{2} $$
where $s$ is the scale factor, $\mathbf{x} = (x, y, 1)^T$ are the 2D image coordinates, $\mathbf{X} = (X, Y, Z, 1)^T$ are the 3D world coordinates, $\mathbf{K}$ is the camera intrinsic matrix, and $\mathbf{R}$ and $\mathbf{t}$ are the camera extrinsic parameters. In the intrinsic matrix, $f_x$ and $f_y$ are the focal lengths of the lens in the horizontal and vertical directions, $c_x$ and $c_y$ are the offsets of the principal axis in the horizontal and vertical directions, and $\gamma$ is the skew factor of the lens.
It can be seen from Equations (1) and (2) that the intrinsic parameters of the camera are related to the camera and the lens, while the extrinsic parameters are related to the relative position of the camera and the objects. The intrinsic parameters will not change unless the lens focal length or other hardware parameters change. However, the extrinsic parameters of the camera must be calibrated for each application scenario. A black-and-white chessboard is often used for camera calibration. Commercial software such as MATLAB 2019a and NI Vision 2020 and the open-source library OpenCV 4.2.0 provide toolkits that enable rapid calibration.
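As one example of such a toolkit, the following is a minimal, hedged OpenCV calibration sketch following the chessboard procedure described above; the 9 × 6 board, 25 mm square size, and image folder are illustrative assumptions.

```python
# Minimal chessboard calibration sketch with OpenCV; board geometry and
# file paths are assumed example values.
import glob
import cv2
import numpy as np

board = (9, 6)    # inner corners per row/column (assumed)
square = 25.0     # square size in mm (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in glob.glob("calib/*.jpg"):   # hypothetical image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix K, distortion coefficients, and per-view
# extrinsic parameters (rotation and translation vectors) of Equation (1).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```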
  • Homography matrix method. When using a camera for displacement measurement in a 2D structure plane, engineers simplify the camera calibration process in terms of the camera, lens, and motion characteristics of the tested structure. That is, the homography matrix method is used for camera calibration [127], and Equation (1) can be simplified as
$$ s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{3} $$

$$ \mathbf{H} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \tag{4} $$

where $\mathbf{H}$ is the homography matrix.
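In practice, $\mathbf{H}$ can be estimated from at least four point correspondences between the structure plane and the image; the following is a minimal sketch in Python with OpenCV, where all coordinates are illustrative assumptions.

```python
# A hedged sketch of homography-based calibration (Equations (3)-(4)):
# four known plane points (in mm) matched to their pixel locations.
import cv2
import numpy as np

world = np.array([[0, 0], [1000, 0], [1000, 500], [0, 500]], np.float32)
pixels = np.array([[112, 406], [893, 398], [881, 71], [120, 64]], np.float32)

H, _ = cv2.findHomography(world, pixels)   # maps structure plane -> image

# Convert a tracked image point back to plane coordinates via the inverse.
pt = cv2.perspectiveTransform(np.array([[[500.0, 250.0]]], np.float32),
                              np.linalg.inv(H))
```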
  • Scaling factor method. The above method can be further simplified by using the pinhole camera model to estimate the scale factor if the lens distortion is small enough to be ignored. An illustration of scale factor estimation is shown in Figure 5.
When the optical axis of the camera is perpendicular to the structure plane, that is, the optical axis is collinear with the normal of the structure plane, the scale factor S is determined by
$$ S = \frac{D}{d} = \frac{Z}{f} d_{pixel} \tag{5} $$

where $D$ is the physical length of the selected object in the structure plane, $d$ is the length in pixels of its corresponding image region, $f$ is the focal length, $Z$ is the distance from the camera lens to the structure plane, and $d_{pixel}$ is the physical size of a pixel. The physical displacement $V$ is then recovered from the displacement $v$ in pixels of the image as $V = S v$.
When the optical axis of the camera is not perpendicular to the structure plane, i.e., there is an angle $\theta$ between the optical axis and the normal to the structure plane, as shown in Figure 6, Jiang et al. [29] proposed a pinhole camera model that estimates structural displacement while compensating for the effect of the angle $\theta$. The scale factor $S$ in Equation (5) is then determined by

$$ S = \frac{Z}{f \cos^2 \theta} d_{pixel} \tag{6} $$
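A short sketch of this pixel-to-physical conversion, with all numerical inputs as assumed example values:

```python
# Pixel-to-physical displacement conversion using the scale factors of
# Equations (5) and (6); all numerical inputs are assumed example values.
import math

def scale_factor(Z_mm: float, f_mm: float, pixel_size_mm: float,
                 theta_rad: float = 0.0) -> float:
    """Equation (6); reduces to Equation (5) when the optical axis is
    perpendicular to the structure plane (theta = 0)."""
    return Z_mm / (f_mm * math.cos(theta_rad) ** 2) * pixel_size_mm

# Example: camera 30 m from the girder, 85 mm lens, 5.86 um pixels, 10 deg tilt.
S = scale_factor(Z_mm=30000, f_mm=85, pixel_size_mm=0.00586,
                 theta_rad=math.radians(10))
physical_displacement = 12.4 * S   # 12.4 px of tracked motion -> mm
```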

3.1.2. Feature Extraction

Image features are the basis for target tracking. When using CV technology to measure structural vibration, it is necessary to select and extract image features to facilitate the selection of subsequent tracking algorithms. Common image features include grayscale features [128], feature points [129], gradient features [130,131], shape features [132], color features [133], color or grayscale histograms, and image convolution blocks [134].

3.1.3. Target Tracking

The task of target tracking is to track the tested structure or its markers according to the selected image features, so as to determine the position of features in each frame of the video or image sequence, and then obtain the vibration time history of the structure. Typical tracking methods used in displacement measurement include template matching [135], feature point matching [136], full-field optical flow [137], sparse optical flow [138], geometric matching [132], particle image velocimetry [139], color matching [133], and DL-based target-tracking algorithms [140].
A comparison of different target-tracking algorithms is shown in Table 3. The field conditions listed in Table 3 are the weather conditions of the experiment. The measurement error of the template matching algorithm is the largest, followed by the optical flow method, while the error of the feature matching method is the smallest.
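As an illustration of the template matching family in Table 3, the following is a minimal normalized cross-correlation tracker in Python with OpenCV; the video file, template region, and the omission of sub-pixel refinement are simplifying assumptions.

```python
# A hedged sketch of template-matching target tracking via normalized
# cross-correlation; file names and the template patch are assumptions.
import cv2

cap = cv2.VideoCapture("bridge.mp4")          # hypothetical video
ok, first = cap.read()
template = first[200:260, 300:360]            # assumed target patch (rows, cols)

track = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Correlation map over the frame; its peak is the target location.
    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    track.append(max_loc)                     # (x, y) pixel position per frame
cap.release()
# A real system would add sub-pixel refinement (e.g., a parabolic fit
# around the correlation peak) before converting pixels to displacement.
```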

3.1.4. Displacement Calculation

The pixel displacement of the measured structure in the image is obtained via image tracking. After that, the intrinsic and extrinsic parameter matrices in Equation (1), the homography matrix in Equation (4), or the scale factor in Equation (5) is used to convert the pixel displacement in the image into the physical displacement in the real world.

3.2. Displacement Measurement

The CV-based displacement measurement method has advantages such as being remote and non-contact, and has thus developed rapidly in the field of bridge engineering in recent years. Many CV-based displacement measurement systems have been developed for different application scenarios [128,151,152,153,154,155,156,157].

3.2.1. Displacement Measurement Based on a Fixed Camera

In the 1990s, Stephen et al. [158] began to apply non-contact vision sensing technology to the vibration displacement measurement of the Humber Bridge in the UK. Subsequently, many scholars applied template-matching algorithms, optical flow methods, digital image correlation (DIC), and DL-based target-tracking algorithms to bridge vibration displacement measurement. Guo et al. [159] proposed a dynamic displacement measurement method based on the Lucas–Kanade template tracking algorithm, and the effectiveness of the method in dynamic displacement measurement was verified by the measurement results of a viaduct. Ye et al. [160] proposed a multi-point displacement measurement system based on a color pattern-matching algorithm, and verified the accuracy and reliability of the system through field testing of arch bridges. To overcome the influence of environmental and operating conditions, Wang et al. [161] proposed a novel gradient-based matching method using a voting technique, which can provide reliable tracking of moving targets with different degrees of feature loss. To realize target-free measurement of vibration displacement, Dong et al. [130] proposed a displacement measurement method based on the optical flow method. Zhu et al. [162] developed a structural dynamic displacement measurement method integrating a multi-resolution depth feature framework, as shown in Figure 7. This method uses the structure's natural texture as the tracking region, which effectively avoids the installation problem of artificial targets. Dong et al. [123] converted the displacement obtained by feature matching into acceleration, and used the acceleration to evaluate the vibration serviceability of footbridges. Subsequently, Dong et al. [163] successfully identified a bridge's displacement influence line using a template-matching algorithm based on the normalized cross-correlation coefficient. Javh et al. [164] applied the gradient-based optical flow method to identify the full-field displacement of steel beams, with a displacement identification error of less than 0.001 pixels. Subsequently, in order to reduce costs, Javh et al. [165] developed a measurement system that can measure high-frequency vibration using a low-speed camera. Xie et al. [166] proposed a new edge detection operator to measure the vibration displacement of cables under different illuminations; compared with the Canny operator, this operator is more robust under varying illumination. Miao et al. [167] proposed a powerful and accurate phase-based 2D motion estimation method with strong anti-noise performance.
In addition, some CV-based methods for 3D displacement measurement of bridges have been developed. Helfrick et al. [168] applied 3D DIC to the measurement of structural vibration displacement for the first time. The results show that 3D DIC has great potential in the full-field vibration measurement of structures. Warren et al. [169] used a 3D point tracking algorithm to determine the 3D displacement of the structure. Pan et al. [170] reviewed the application of stereo-DIC in the full-field 3D displacement measurement of structures. Barone et al. [171] proposed a low-speed single-camera stereo-DIC system to obtain the 3D full-field vibration displacement of a cantilever beam. Shao et al. [172] developed a target-free full-field 3D displacement measurement based on a binocular vision system.
Existing target-tracking algorithms are only suitable for scenarios with large vibration displacement amplitudes and struggle to capture small vibrations of bridge structures. In practice, the vibration displacement amplitude of rigid structures such as small- and medium-span bridges, short cables, and short suspenders under environmental excitation is very small, and it is difficult for general target-tracking algorithms to accurately obtain the displacement time history of bridge structures from such small vibrations.

3.2.2. UAV-Based Displacement Measurement

In recent years, UAVs have been introduced into the displacement measurement of bridges as mobile platforms with cameras installed. UAV technology has become an effective alternative for long-distance measurement. However, the vibration of the UAV itself is coupled with the motion of the target, which reduces the measurement accuracy of the displacement. To solve this problem, many studies have sought to eliminate UAV vibration by fixing targets, adding sensors to the UAV, or applying high-pass filter signal processing. Han et al. [173] developed a vibration displacement measurement system based on UAV vision and used laser light to eliminate the vibration of the UAV. Tian et al. [174] proposed a cable vibration displacement identification method based on a UAV and a line segment detector, and used the displacement subtraction of two points to eliminate the UAV vibration. To further improve the identification accuracy of the cable displacement, Zhang et al. [175] used the empirical mode decomposition algorithm to eliminate the UAV vibration. Liu et al. [176] developed a bridge vibration displacement measurement system based on a UAV and DIC, and corrected the deviation caused by UAV vibration through a homography matrix. Jiang et al. [177] proposed a bridge vibration displacement measurement method based on a DL-based target-tracking method and a dual-camera UAV system. In their method, a deformation point and a stable point on the bridge are captured simultaneously by dual cameras with telephoto and wide-angle lenses, the vibration displacements of the bridge and the UAV are measured simultaneously, and the UAV displacement is eliminated using the homography between the two cameras.

3.2.3. Displacement Measurement Based on Visual Displacement and Acceleration

As displacement measurement based on the CV technique is usually limited to a low sampling rate, some scholars have proposed fusing vision-based displacement and acceleration to estimate displacement. Smyth et al. [178] proposed a multi-rate Kalman filtering method that identifies displacements by aggregating displacements sampled at a low frequency with accelerations sampled at a high frequency. However, this Kalman filtering method is only applicable when the vision and acceleration measurements are synchronized and the ratio of the sampling rates is an integer. To overcome this problem, Ma et al. [179,180] improved the multi-rate Kalman filtering method and proposed an adaptive multi-rate Kalman filtering method for the fusion of vision displacement and acceleration. Park et al. [181] applied complementary filtering to fuse vision displacement and acceleration to estimate displacement. Wu et al. [182] verified the feasibility of using complementary filtering for the fusion of vision displacement and acceleration in field applications. However, the above methods are only suitable for obtaining high-fidelity displacements at sparse locations and cannot be used to obtain the full-field displacement of the bridge. To address this problem, Wu et al. [183] proposed a sparse accelerometer-aided CV technique and used the modal superposition method to obtain the full-field dynamic displacement through acceleration at sparse locations.
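To convey the basic fusion idea (not the adaptive multi-rate filter of [179,180]), the following is a hedged sketch of a linear Kalman filter in Python/NumPy whose state is predicted at the acceleration sampling rate and updated whenever a vision displacement sample arrives; the noise levels and integer rate ratio are simplifying assumptions.

```python
# A minimal sketch of multi-rate displacement/acceleration fusion with a
# linear Kalman filter; q, r, and the integer rate ratio are assumptions.
import numpy as np

def fuse(acc, vision, dt, ratio, q=1e-4, r=1e-2):
    """acc: acceleration sampled at 1/dt; vision: displacement sampled at
    1/(ratio*dt); returns displacement estimated at the high rate."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition [d, v]
    B = np.array([0.5 * dt**2, dt])            # acceleration input
    H = np.array([[1.0, 0.0]])                 # vision measures displacement
    x, P = np.zeros(2), np.eye(2)
    out = []
    for k, a in enumerate(acc):
        x = F @ x + B * a                      # predict with acceleration
        P = F @ P @ F.T + q * np.eye(2)
        if k % ratio == 0 and k // ratio < len(vision):
            y = vision[k // ratio] - H @ x     # innovation from vision sample
            S = H @ P @ H.T + r
            K = (P @ H.T) / S                  # Kalman gain (2x1)
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```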
The existing bridge displacement monitoring methods based on CV techniques have problems such as a limited camera sampling rate and insufficient resolution in engineering applications. The development of a bridge displacement monitoring method based on the fusion of vision displacement and acceleration requires further research.

3.3. Modal Identification

Modal parameters (i.e., natural frequency, mode shape and damping ratio) are important indicators to reflect the structure status. The changes in the modal parameters can be used to identify bridge structural damage and evaluate the performance of the bridge structure [143,184,185,186,187]. In this section, a CV-based method for identifying bridge modal parameters is reviewed.

3.3.1. Frequency Identification

Yoon et al. [188] investigated the potential of consumer-grade cameras for structural modal identification using a CV-based target-free method and an eigensystem realization algorithm. Feng et al. [151] proposed a bridge frequency identification method based on an upsampled cross-correlation template matching algorithm, and verified the feasibility of using vision technology to identify bridge frequencies through field tests. Ji et al. [189] used the Canny edge detection operator for image processing, reconstructed the displacement response from the image sequence, and then obtained the vibration frequency of the structure via the wavelet transform. Dong et al. [128] proposed a multi-point synchronous method for measuring structural dynamic displacement, and identified the natural frequencies of simply supported rectangular steel beams via FFT. Xu et al. [141] used the zero-mean cross-correlation template-matching algorithm to monitor the dynamic displacement of a cable-stayed bridge under crowd loading, and analyzed the changes in the instantaneous frequency and vibration amplitude of the bridge when large crowds crossed it. Ngeljaratan et al. [190] used DIC to monitor the vibration displacement of a steel truss pedestrian bridge and obtained its vibration frequency through signal processing. Feng et al. [21] proposed a non-contact cable natural frequency test method based on the orientation code matching algorithm. Hoskere et al. [148] proposed using UAVs to obtain the natural frequencies and mode shapes of full-scale infrastructure, which addressed the difficulty of modal identification for full-scale infrastructure. However, the above methods struggle to measure the small vibrations of bridge structures. To solve this problem, Luo et al. [191] proposed a method for measuring small vibrations of cables based on a broad-band phase-based motion magnification and line tracking algorithm, as shown in Figure 8. The results show that using the broad-band phase-based motion magnification algorithm to magnify small vibrations can significantly improve the identification accuracy of the cable frequency.
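For the FFT-based step mentioned above, a minimal sketch of identifying the dominant natural frequency from a vision-measured displacement history might look as follows; the sampling rate and test signal are assumed example inputs.

```python
# Identifying the dominant natural frequency from a displacement time
# history via FFT; the signal and sampling rate are assumed examples.
import numpy as np

def natural_frequency(displacement: np.ndarray, fs: float) -> float:
    d = displacement - displacement.mean()        # remove the static offset
    spectrum = np.abs(np.fft.rfft(d))
    freqs = np.fft.rfftfreq(len(d), d=1.0 / fs)
    return freqs[spectrum[1:].argmax() + 1]       # skip the DC bin

# Example: a 2 Hz vibration sampled at 60 fps for 30 s.
t = np.arange(0, 30, 1 / 60)
print(natural_frequency(np.sin(2 * np.pi * 2 * t), fs=60))  # ~2.0 Hz
```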

3.3.2. Mode Shape Identification

In recent years, the rapidly developed phase-based motion magnification (PMM) algorithm has been used for bridge mode shape identification [192]. Chen et al. [193] used the PMM algorithm and an edge detection algorithm to obtain the mode shapes of cantilever beams, and performed an application study on a steel truss bridge [149]. Yang et al. [194,195] used the PMM algorithm as a post-processing step to visualize the mode shapes by magnifying the motion in a narrow frequency band around each natural frequency. Molina-Viedma et al. [196,197] combined the PMM algorithm and DIC to identify the mode shapes of a cantilever beam. The results show that the combination of the PMM algorithm and DIC is suitable for identifying structural modal parameters under high-frequency vibration, which compensates for the low precision of DIC when identifying structural high-frequency vibration displacement. To solve the difficulties posed by modal identification of full-scale bridge structures, Hoskere et al. [148] proposed a UAV-vision-based mode shape identification method for full-scale suspension bridges. Han et al. [198] proposed a portable laser-and-camera system (as shown in Figure 9) for bridge displacement measurement, and used the Data-SSI method to identify bridge mode shapes. The system is easily assembled in the field and avoids the problems associated with the use of telephoto lenses.
In addition, full-field mode shape identification has also been investigated. Willems et al. [199] exploited the high spatial density of CV-based measurements to identify full-field mode shapes. Javh et al. [200] used the Least-Squares Complex-Frequency method combined with the Least-squares Frequency-domain method to identify the high-frequency full-field mode shape of steel beams. Zaletelj et al. [201] used frequency-domain triangulation to identify full-field modal shapes. Gorjup et al. [202] developed a new single-camera multi-view operating deflection shape (ODS) measurement system. This system uses only a single monochrome high-speed camera to achieve full-field 3D ODS measurements. Bhowmick et al. [203] used the optical flow method to track the pixel-level edge points of the structure, and obtained the full-field mode shape of the cantilever beam through the dynamic mode decomposition method. However, this method cannot obtain sub-pixel edges of structures. To solve this problem, Kong et al. [204] proposed a full-field mode shape identification method based on sub-pixel edge detection and edge tracking.
Most of the methods in the existing literature can only obtain the mode shape profile of the bridge and cannot perform quantitative analysis of the mode shape. In addition, the existing research mainly focuses on the identification of discrete mode shapes of bridges, and the identification of full-field mode shapes is in need of further research.

3.3.3. Damping Identification

In the dynamic design of bridges, the damping ratio of the bridge needs to be estimated to evaluate the performance of the bridge under dynamic loads (i.e., seismic and wind loads). Monitoring of the bridge damping ratio is also crucial during bridge operation. Siringoringo et al. [18] used the PMM algorithm, discretized centroid searching algorithm, and dynamic mode decomposition to identify the damping of cantilever beams. Subsequently, Wangchuk et al. [205] applied this method to identify the damping ratio of the cable.

3.4. Damage Identification

Changes in the boundary conditions and stiffness of a bridge structure cause changes in its modal parameters, through which damage to the bridge can be identified.
With regard to bridge damage detection based on CV technology, Feng et al. [22] measured the displacement of multiple points of a test girder, and identified the damage to the test girder through the changes in the mode shape observed before and after the damage. Xu et al. [206] projected a visible laser line onto the damaged girder, extracted the curvature mode by tracking the laser line in the modal test, and determined the location and size of the damage according to the change in the curvature mode. Cha et al. [207] used the unscented Kalman filter to denoise the displacement obtained via the phase-based optical flow, and detected the boundary damage of the cantilever girder by identifying the stiffness and damping coefficient, as shown in Figure 10. Khuc et al. [208] determined the relationship between load (input) and response (output) based on CV technology and used the unit influence surface (UIS) to identify the damage to the bridge structure. Zhang et al. [209] proposed a novel vibration-based method to detect structural damage by combining phase-based motion estimation and CNNs. Shu et al. [210] proposed a damage identification method based on the DL algorithm. The method combines a data-driven approach with finite element model updating to quantify the damage location and damage level of structures. Hu et al. [211] proposed a hybrid method for damage detection and condition assessment of hinged joints of hollow slab bridges based on physical models and vision measurements.
Most of the research on bridge damage detection based on the CV technique has been limited to laboratory model tests. The use of CV techniques for damage detection on real bridges requires further research.

4. CV-Based Vehicle Parameter Identification

Vehicle parameters are important evidence that reflects the stress state and traffic density of the bridge. The acquisition of the temporal and spatial distribution of vehicles on the bridge and the real-time monitoring of overloaded vehicles are essential in the field of bridge maintenance, management, and reinforcement.
This section introduces CV-based vehicle detection and tracking methods, followed by a review of applications in temporal and spatial, weight parameter and multi-parameter identification.

4.1. Vehicle Detection and Tracking Methods

The detection and tracking of moving vehicles obtains vehicle information through feature extraction and identification of vehicle targets in motion; this process is shown in Figure 11. Moving vehicle detection methods based on CV and DL can be divided into three stages: moving object detection methods, object instance detection/segmentation methods, and fine-grained detection methods.

4.1.1. Moving Object Detection Methods

Moving object detection has a wide range of applications in the field of intelligent transportation and is a prerequisite for obtaining vehicle information. By judging whether there is a moving vehicle in the image, the target is identified and its location is determined. Information such as vehicle speed and inter-vehicle distance can also be obtained through tracking analysis. Vehicle moving object detection aims to extract moving vehicle targets from a video sequence for the subsequent tracking, identification, and analysis of moving vehicles. Moving object detection methods mainly include the inter-frame difference method [213,214,215,216,217], background subtraction method [218,219], and optical flow method [220].
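As a concrete illustration of the background subtraction approach, below is a minimal sketch in Python with OpenCV's MOG2 model; the video file and the minimum vehicle area are illustrative assumptions.

```python
# A hedged sketch of background-subtraction-based moving vehicle detection;
# the video file and area threshold are assumed example values.
import cv2

cap = cv2.VideoCapture("traffic.mp4")             # hypothetical video
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                  # foreground mask
    # MOG2 marks shadows as gray (127); keep only confident foreground.
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) > 500]         # assumed minimum vehicle area
cap.release()
```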

4.1.2. Object Instance Detection/Segmentation Methods

After the moving object detection methods described above have been used to identify and track the target object, a target instance detection method is needed to conduct an in-depth analysis of the video frames. This analysis extracts multi-level feature information of the target vehicle, such as the vehicle type, the spatial position of the vehicle on the bridge, the vehicle spacing, the license plate, and other feature parameters. Compared with traditional methods, DL-based target instance detection greatly improves detection accuracy and efficiency and has become the current mainstream approach. Target instance detection methods are mainly divided into single-stage methods [221,222], two-stage methods [223,224], and multi-method fusion [225,226].

4.1.3. Fine-Grained Detection Methods

Although DL-based target detection can extract multi-level feature information of moving targets, differences in appearance within the same vehicle type are very small in video images, and these subtle differences make accurate identification and tracking difficult. Fine-grained image identification, also known as sub-category image classification, has been highlighted as a promising technique in recent years. A more detailed sub-category division is useful for identifying subtle differences that cannot be resolved by normal detection methods. According to the amount of manual annotation required, fine-grained models can be divided into strongly supervised fine-grained image identification models [227,228] and weakly supervised fine-grained image identification models [229,230,231].
Strongly supervised fine-grained image identification has high classification accuracy, but requires a large amount of manual annotation information, and has poor practicability. Weakly supervised fine-grained image identification is an end-to-end identification mode, which has high identification accuracy and does not require human intervention, but the identification efficiency is relatively low. Commonly used strongly supervised fine-grained models include the Part-based R-CNN model and Pose Normalized CNN model. Weakly supervised fine-grained image identification models include Two-Level Attention models, Constellations models, and Bilinear CNN models.

4.2. Temporal and Spatial Parameters

4.2.1. Temporal Parameter Identification

The time-related parameter of the vehicle mainly refers to the vehicle’s moving speed. The identification of vehicle speed can be achieved using the object detection method.
With regard to vehicle speed detection, some scholars have used background subtraction and optical flow methods to detect moving vehicle targets. Jeyabharathi et al. [232] proposed a dynamic background subtraction method based on the Diagonal Hexadecimal Pattern, which used an improved dynamic background subtraction and target-tracking algorithm to detect vehicle speed. The results showed that the method is simple to operate, insensitive to light, and has strong real-time performance; however, its detection accuracy is low. Doğan et al. [233] found sufficient vehicle feature reference points in each frame of the image and used sparse optical flow technology to estimate the real-time speed of a single vehicle or multiple vehicles. Hua et al. [234] presented a detect-then-track paradigm vehicle tracking model, in which data-driven and optical flow tracking algorithms were employed to achieve vehicle speed estimation. The optical flow method requires no a priori information, achieves high accuracy, and can recover both the motion of the target vehicle and the dynamic information of the background. However, the algorithm has poor noise resistance and is computationally complex, making it incapable of tracking moving targets in real time.
Some scholars combined multiple methods to realize real-time vehicle speed recognition. Lan et al. [235] employed an improved three-frame difference method to extract the contour of the moving vehicle and used the gray constraint optical flow algorithm to obtain the optical flow value of the vehicle contour, and then identified the vehicle speed based on the ratio of the moving vehicle pixel to the road width. Javadi et al. [236,237] applied the frame rate of the camera, the positions of the four intrusion lines, and the motion pattern vector to establish the probability density function of the vehicle speed estimation, as shown in Figure 12. Compared with a vehicle equipped with GPS, the average error is within 1.77%. This shows that the vehicle speed measurement combined with multiple methods has high accuracy and robustness.
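In the spirit of the sparse optical flow approaches above, the following is a hedged sketch using Shi–Tomasi corners and Lucas–Kanade tracking in Python with OpenCV; the video file and the pixels-to-metres scale derived from an assumed lane width are illustrative.

```python
# A hedged sketch of sparse-optical-flow speed estimation; the video file
# and the pixel-to-metre scale (from an assumed lane width) are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")             # hypothetical video
fps = cap.get(cv2.CAP_PROP_FPS)
m_per_px = 3.5 / 120.0                            # assumed: 3.5 m lane = 120 px

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=10)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    # Median per-frame displacement of tracked points -> speed in km/h.
    # A real system would first group points by vehicle.
    disp_px = np.median(np.linalg.norm((nxt - pts)[good], axis=2))
    speed_kmh = disp_px * m_per_px * fps * 3.6
    prev_gray, pts = gray, nxt[good].reshape(-1, 1, 2)
cap.release()
```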

4.2.2. Spatial Parameter Identification

Vehicle spatial parameters include the lane in which the vehicle is located, the longitudinal and lateral positions of the vehicle on the bridge, and the distance between vehicles.
  • Lane detection. With the rapid development of computer technology, DL algorithms have been widely used in lane detection, and detection accuracy has improved greatly [238]. Li et al. [239] combined a multitask deep convolutional network with a recurrent neural network (RNN) to simultaneously detect the position and attributes of lanes in the image. Kim et al. [240] combined a CNN with the random sample consensus (RANSAC) algorithm to eliminate the influence of edge noise, such as roadside trees and fences, in the road scene and enhanced the robustness of lane detection. Lee et al. [241] combined road vanishing-point features with DL to form an end-to-end vanishing point guided network (VPGNet) for lane and road marking detection. Guided by vanishing points, the network can effectively detect lanes and road markings on rainy days and under low-illumination conditions.
  • Vehicle location identification. For identifying vehicle positions on the bridge, Chen et al. [242] employed template matching and PFA to track moving vehicles on the bridge and identify the temporal and spatial distribution of bridge traffic in real time. The results showed that the hybrid feature-based and area-based template matching approach could improve matching accuracy for lane-changing vehicles. Brown et al. [243] proposed a system that can identify a vehicle on a bridge and track its location through multiple video frames; vehicles can be tracked along a bridge with acceptable location errors. Ojio et al. [244] determined the position and axle spacing of the vehicle from camera surveillance video and used the Lucas–Kanade algorithm to track its motion.
  • Vehicle distance measurement. Regarding vehicle distance detection, Kim et al. [245] proposed a new stereo-vision vehicle distance estimation method that combines two distance cues, vehicle position and vehicle width (a minimal pinhole-model sketch of the width cue follows this list). Park et al. [246] proposed using the size and position of vehicles in the image to estimate virtual lanes, which can be used for inter-vehicle distance estimation when lane markings are not visible. To guarantee vehicle safety and reduce collision accidents, Chen et al. [247] proposed a forward vehicle distance detection algorithm based on switching between a single camera and dual cameras. By switching between the two, features such as the shadow under the vehicle, the vertical edges of the vehicle body, and the taillights are extracted to locate the vehicle ahead.
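The vehicle-width cue used above reduces to the pinhole relation Z = fW/w. The following is a minimal sketch under that idealization; the focal length, the assumed physical vehicle width, and the measured bounding-box width are illustrative numbers, not values from [245].

```python
def distance_from_width(f_px: float, real_width_m: float, width_px: float) -> float:
    """Pinhole-model range estimate: Z = f * W / w.

    f_px          camera focal length in pixels (from calibration)
    real_width_m  assumed physical width of the leading vehicle (m)
    width_px      apparent width of its bounding box in the image (pixels)
    """
    return f_px * real_width_m / width_px

# Illustrative: 1200 px focal length, 1.8 m wide car imaged at 90 px
print(distance_from_width(1200.0, 1.8, 90.0))   # -> 24.0 m to the vehicle ahead
```

The weakness of the width cue alone is the assumed physical width, which is why [245] fuses it with a position-based estimate from the stereo geometry.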

4.3. Weight Parameters

The rapid and accurate identification of vehicle weight is of great importance for the management and control of vehicle overload and the evaluation of road/bridge usage. The Bridge Weigh-in-Motion (BWIM) technique combined with cameras has been widely used in traffic load monitoring. BWIM is used to obtain vehicle load information, and cameras are used to determine the temporal and spatial distribution of vehicle loads on the bridge. In recent years, some scholars have also applied CV techniques to the identification of vehicle weight parameters [243,248].

4.3.1. Weight Identification Based on Bridge Response

The non-contact BWIM method [249,250] uses the bridge itself as a scale to measure vehicle weights without installing any sensors on the bridge. It has the advantages of convenient installation without interrupting traffic, no damage to the road surface, and fast real-time weighing. Ojio et al. [244] proposed a CV-based non-contact BWIM method in which two cameras work together: one measures sub-millimeter bridge deflections, and the other monitors traffic and determines axle spacing. Ding et al. [251] proposed a vehicle load and load centroid measurement system based on CV and the vertical displacement of the vehicle body; the load value is resolved from the vertical characteristic distance recognized by a side-view camera. Chen et al. [242] presented a method to identify the temporal and spatial distribution of vehicle loads on long-span bridges, in which the vehicle weight information is obtained through the BWIM system and the vehicle loads are then tracked using CV techniques; the effectiveness and accuracy of the algorithm were verified by a field test on Hangzhou Bay Bridge. Micu et al. [252] employed adaptive thresholding and morphological reconstruction to extract vehicle length information from traffic videos and used statistical methods to establish the correlation between vehicle length and the weight measured by the BWIM system. Zhou et al. [253] divided the vehicle database into nine vehicle-type axle-weight distribution intervals and established the relationship between vehicle types and the corresponding weight information; the vehicle type was accurately identified using a deep convolutional neural network (DCNN) so that the corresponding vehicle weight could be obtained. Dan et al. [254] proposed an information-fusion-based load identification method applicable to bridges of different lengths, in which a pavement-based weigh-in-motion (WIM) system installed at the entrance of the bridge determines the vehicle weights, and traffic-flow videos acquired by multiple cameras arranged along the bridge are used to calculate each vehicle's trajectory and location.
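Most BWIM formulations ultimately reduce to a least-squares fit of axle weights to the measured bridge response through the influence line (Moses's algorithm), with the camera supplying the vehicle speed and axle spacing. The following is a minimal sketch of that core step under idealized assumptions (a known influence line on a 1 m grid, constant speed, and a single vehicle on the span); all names and numbers are illustrative.

```python
import numpy as np

def bwim_axle_weights(measured, infl_line, speed, axle_offsets, fs):
    """Least-squares axle-weight estimate (idealized Moses's algorithm).

    measured      bridge response time history (e.g., girder strain or moment)
    infl_line     unit-load influence line sampled on a 1 m grid along the span
    speed         vehicle speed from the camera (m/s)
    axle_offsets  offsets of each axle behind the first axle (m)
    fs            sampling rate of the response (Hz)
    """
    t = np.arange(len(measured)) / fs
    x_il = np.arange(len(infl_line), dtype=float)
    # B[i, j] = influence ordinate under axle j at time step i
    B = np.stack([np.interp(speed * t - d, x_il, infl_line, left=0.0, right=0.0)
                  for d in axle_offsets], axis=1)
    weights, *_ = np.linalg.lstsq(B, np.asarray(measured, float), rcond=None)
    return weights   # one value per axle

# Illustrative 2-axle crossing of a 30 m simply supported span at 10 m/s
L, fs, v = 30.0, 100.0, 10.0
x = np.arange(int(L) + 1, dtype=float)
il = x * (L - x) / L                        # midspan moment influence line
t = np.arange(int(fs * (L + 5.0) / v)) / fs
resp = (80.0 * np.interp(v * t, x, il, left=0, right=0)
        + 120.0 * np.interp(v * t - 4.0, x, il, left=0, right=0))
print(bwim_axle_weights(resp, il, v, [0.0, 4.0], fs))   # -> [ 80. 120.]
```

In the camera-assisted variants reviewed above, the speed and axle offsets come from the video, and in the fully non-contact case the response itself is a camera-measured deflection.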

4.3.2. Weight Identification Based on Tire Deformation

Some scholars have taken the vehicle tire as the target of load identification and attempted to identify the vehicle weight parameters from it. Feng et al. [255] introduced an innovative CV-based vehicle weigh-in-motion (WIM) method based on simple physics: the tire–roadway contact force equals the contact pressure multiplied by the contact area. The area can be estimated by measuring tire deformation parameters, such as the tire–roadway contact length and the tire vertical deflection, using CV techniques, while the tire pressure can be obtained from onboard sensors. Feng et al. [256] applied edge detection and optical character recognition (OCR) to identify the marking texts on the tire sidewall, from which the manufacturer-recommended inflation pressure as well as the tire brand, model, and size can be obtained; this shows that CV techniques such as edge detection and OCR can enhance measurement and recognition accuracy. Kong et al. [32] proposed a non-contact vehicle weight identification method based on a tire–road contact model and CV techniques, as presented in Figure 13. The theoretical tire–road contact model was established based on improved Hertz contact theory, and CV techniques, including image segmentation and character recognition, were used to identify the tire deformation and inflation pressure. Subsequently, Kong et al. [31] analyzed the tire–road contact mechanism and conducted numerical analyses to develop tire contact force equations; the methodology of identifying the tire–road contact force by combining the derived equations with CV techniques was verified by field experiments on passenger cars and trucks, and the predicted results agreed well with the measurements. Compared with traditional methods, this approach based on tire mechanics and CV is accurate, efficient, easy to operate, and low cost, and it requires no sensor placement; it thus provides a new approach to vehicle weighing.
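The physics in [255] reduces to F = pA per tire. Below is a minimal sketch of that relation, approximating the contact patch as a shape-corrected rectangle formed by the CV-measured contact length and the tire section width; the numbers and the shape factor are illustrative assumptions, whereas the cited works [31,32] use refined tire–road contact models.

```python
def tire_load_kN(pressure_kPa: float, contact_len_m: float,
                 tire_width_m: float, shape_factor: float = 1.0) -> float:
    """Vertical tire load from inflation pressure and contact patch area.

    F = p * A: the CV pipeline measures the contact length from a side-view
    image, while the tire width and recommended pressure can be read from
    the sidewall markings via OCR. shape_factor corrects the rectangular
    patch approximation for its rounded edges.
    """
    area_m2 = shape_factor * contact_len_m * tire_width_m
    return pressure_kPa * area_m2            # kPa * m^2 = kN

# Illustrative: 240 kPa tire with a 0.16 m contact length and 0.215 m width
print(tire_load_kN(240.0, 0.16, 0.215))      # ~8.3 kN on this tire
# Gross vehicle weight is then approximated by summing over all tires.
```

This also makes the error sources explicit: the weight estimate is only as good as the measured contact length, the pressure reading, and the contact-patch model.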

4.4. Vehicle Multi-Parameters

In practical applications, it is necessary to obtain multiple vehicle parameters simultaneously, rather than only one of the single parameters discussed above, to meet the needs of systems such as intelligent transportation and SHM. For example, the type, length, number of axles, speed, trajectory, spacing, axle weight, and total weight of the vehicles on a bridge are of great significance for bridge load statistics, condition monitoring, and safety assessment.
Considering vehicle multi-parameter identification, Zaurin et al. [257,258,259,260] proposed a new CV-based SHM method for bridge systems; an experimental study showed that vehicles could be effectively detected, classified, and tracked from surveillance video. Zhang et al. [212] used CV techniques and the Faster R-CNN algorithm to identify vehicle type, vehicle length, number of axles, vehicle speed, and lane. Jian et al. [261] proposed a traffic sensing methodology that combines a DL-based CV technique with influence line theory; the results showed that the proposed method could automatically identify vehicle load and speed with promising efficiency. Pan et al. [262] adopted the histogram of oriented gradients (HOG) and a random forest (RF) classifier for fast vehicle classification and determined the vehicle speed and the vehicle–barrier separation distance. Xia et al. [263] combined a network of strain sensors with CV to monitor the traffic load on short- and medium-span bridges; field monitoring showed that the method can identify key parameters such as the weight, speed, number, type, and trajectory of vehicles in a complex traffic environment. Dan et al. [264] proposed an improved full-bridge traffic load monitoring (TLM) method based on the YOLO-v3 convolutional neural network, with which the vehicle trajectory and the body and tail contours of the vehicle can be identified more accurately. Appathurai et al. [265] developed a novel moving vehicle detection (MVD) system based on a hybridization of an artificial neural network and an oppositional gravitational search optimization algorithm (ANN-OGSA), which can be used for vehicle tracking, counting, speed measurement, and classification. A vehicle multi-parameter identification system based on CV and DL is an important component of bridge SHM and intelligent transportation systems and a direction for future development.
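A recurring step in such multi-parameter pipelines is mapping detector outputs from image pixels to bridge-deck coordinates so that speed, trajectory, and lane can be read off in metres. The sketch below does this with a plane homography fitted from four reference points; all coordinates, the lane width, and the frame interval are placeholder assumptions rather than values from the cited studies.

```python
import cv2
import numpy as np

# Four image points (px) and their surveyed deck coordinates (m) - placeholders
img_pts = np.float32([[420, 710], [860, 705], [600, 420], [760, 418]])
deck_pts = np.float32([[0.0, 0.0], [3.6, 0.0], [0.0, 40.0], [3.6, 40.0]])
H = cv2.getPerspectiveTransform(img_pts, deck_pts)

def to_deck(u: float, v: float) -> tuple[float, float]:
    """Project a detection's ground-contact pixel to deck coordinates (m)."""
    pt = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)[0, 0]
    return float(pt[0]), float(pt[1])

# Bottom-centre of a tracked vehicle's bounding box in two frames 0.1 s apart
x0, y0 = to_deck(640, 600)
x1, y1 = to_deck(642, 570)
speed_ms = np.hypot(x1 - x0, y1 - y0) / 0.1   # m/s along the deck
lane_idx = int(x1 // 3.6)                     # lane index from lateral position
print(f"speed {speed_ms:.1f} m/s, lane {lane_idx}")
```

With detections expressed in deck coordinates, the same track simultaneously yields the temporal parameters (speed), the spatial parameters (lane, longitudinal position, spacing), and, when fused with a WIM or BWIM weight, the moving load input for the bridge.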

5. Conclusions

This paper reviews the recent developments in CV-based bridge inspection and monitoring technology, including surface defect detection, vibration measurement, and vehicle parameter identification. The main conclusions and future challenges are as follows.
(1)
Due to the lack of open-source datasets of bridge surface defects, researchers must perform time-consuming and labor-intensive data preparation. The establishment of public datasets with good generalization capability is urgently needed to advance automated bridge surface defect detection [41].
(2)
CV-based surface defect detection systems are susceptible to their own hardware limitations and to external environmental factors, both of which can adversely affect detection performance [65].
(3)
Existing DL algorithms still struggle to achieve real-time detection of bridge surface defects. Their computational efficiency needs further improvement, although lightweight DL models have brought some gains [102].
(4)
Most current research focuses on using advanced algorithms to identify surface defects on bridges, while the assessment of defect severity requires further research.
(5)
Various robot platforms for surface defect detection have been developed for different bridge components. The development of a low-cost platform applicable to many scenarios should be a focus of future research.
(6)
Existing target-tracking algorithms are mainly suitable for scenarios with large vibration displacement amplitudes. However, the vibration amplitudes of stiff structures such as small- and medium-span bridges, short cables, and short suspenders under environmental excitation are very small, and it is difficult for general target-tracking algorithms to accurately obtain the displacement time histories of such small vibrations.
(7)
Existing CV-based bridge displacement monitoring methods suffer from limited camera sampling rates and insufficient resolution in engineering applications. The development of bridge displacement monitoring methods based on the fusion of vision-based displacement and acceleration measurements requires further research.
(8)
Most methods in the existing literature can only obtain the mode shape profile of a bridge and cannot analyze the mode shapes quantitatively. In addition, existing research mainly focuses on identifying discrete mode shapes of bridges, and the identification of full-field mode shapes needs further exploration.
(9)
CV-based vehicle parameter identification has mainly focused on single-parameter identification, and studies on multi-parameter identification are relatively few. With the continuous progress of CV technology, comprehensive multi-parameter identification of vehicles is a trend that will continue to grow.
(10)
Current bridge SHM systems mostly measure vehicles and bridges independently, and bridge condition assessment is mostly achieved only by analyzing the output responses of the bridge, lacking accurate input (load) information. Simultaneously obtaining the parameters of the bridge structure and of the vehicles on it is an important direction for the future development of bridge SHM systems.

Author Contributions

Conceptualization, K.L. and X.K.; methodology, K.L. and X.K.; software, K.L., H.T. and J.Z.; validation, H.T. and J.H.; formal analysis, J.Z., J.H. and J.L.; investigation, K.L., X.K., H.T. and J.H.; resources, J.L., J.H. and H.T.; data curation, K.L. and X.K.; writing—original draft preparation, K.L., X.K., J.Z. and J.H.; writing—review and editing, K.L., X.K. and J.L.; visualization, J.H.; supervision, X.K.; project administration, J.H. and X.K.; funding acquisition, X.K., J.H. and K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 52008160 and 52108434), the Science Foundation for Excellent Young Scholars of Hunan Province (Grant No. 2021JJ20015), and the Postgraduate Scientific Research Innovation Project of Hunan Province (Grant No. QL20220091).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dong, C.; Bas, S.; Catbas, F.N. A portable monitoring approach using cameras and computer vision for bridge load rating in smart cities. J. Civ. Struct. Health Monit. 2020, 10, 1001–1021. [Google Scholar] [CrossRef]
  2. Kim, I.; Jung, H.; Yoon, S.; Park, J.W. Dynamic Response Measurement and Cable Tension Estimation Using an Unmanned Aerial Vehicle. Remote Sens. 2023, 15, 4000. [Google Scholar] [CrossRef]
  3. Spencer, B.F.; Hoskere, V.; Narazaki, Y. Advances in computer vision-based civil infrastructure inspection and monitoring. Engineering 2019, 5, 199–222. [Google Scholar] [CrossRef]
  4. Hou, R.; Xia, Y. Review on the new development of vibration-based damage identification for civil engineering structures: 2010–2019. J. Sound Vib. 2021, 491, 115741. [Google Scholar] [CrossRef]
  5. Kao, S.P.; Chang, Y.C.; Wang, F.L. Combining the YOLOv4 deep learning model with UAV imagery processing technology in the extraction and quantization of cracks in bridges. Sensors 2023, 23, 2572. [Google Scholar] [CrossRef]
  6. Avci, O.; Abdeljaber, O.; Kiranyaz, S.; Hussein, M.; Gabbouj, M.; Inman, D.J. A review of vibration-based damage detection in civil structures: From traditional methods to Machine Learning and Deep Learning applications. Mech. Syst. Signal Process. 2021, 147, 107077. [Google Scholar] [CrossRef]
  7. Al-Kaff, A.; Martin, D.; Garcia, F.; de la Escalera, A.; Armingol, J.M. Survey of computer vision algorithms and applications for unmanned aerial vehicles. Expert Syst. Appl. 2018, 92, 447–463. [Google Scholar] [CrossRef]
  8. Liu, Y.; Yeoh, J.K.; Chua, D.K. Deep learning-based enhancement of motion blurred UAV concrete crack images. J. Comput. Civ. Eng. 2020, 34, 04020028. [Google Scholar] [CrossRef]
  9. Xu, Z.; Wang, Y.; Hao, X.; Fan, J. Crack Detection of Bridge Concrete Components Based on Large-Scene Images Using an Unmanned Aerial Vehicle. Sensors 2023, 23, 6271. [Google Scholar] [CrossRef]
  10. Kim, B.; Choi, S.W.; Hu, G.; Lee, D.E.; Serfa Juan, R.O. An Automated Image-Based Multivariant Concrete Defect Recognition Using a Convolutional Neural Network with an Integrated Pooling Module. Sensors 2022, 22, 3118. [Google Scholar] [CrossRef]
  11. Yeum, C.M.; Dyke, S.J.; Ramirez, J. Visual data classification in post-event building reconnaissance. Eng. Struct. 2018, 155, 16–24. [Google Scholar] [CrossRef]
  12. Khan, M.A.M.; Kee, S.H.; Pathan, A.S.K.; Nahid, A.A. Image Processing Techniques for Concrete Crack Detection: A Scientometrics Literature Review. Remote Sens. 2023, 15, 2400. [Google Scholar] [CrossRef]
  13. Zhou, M.; Lu, W.; Xia, J.; Wang, Y. Defect Detection in Steel Using a Hybrid Attention Network. Sensors 2023, 23, 6982. [Google Scholar] [CrossRef]
  14. Zhang, C.; Chen, Y.; Tang, L.; Chu, X.; Li, C. CTCD-Net: A Cross-Layer Transmission Network for Tiny Road Crack Detection. Remote Sens. 2023, 15, 2185. [Google Scholar] [CrossRef]
  15. Kim, I.H.; Jeon, H.; Baek, S.C.; Hong, W.H.; Jung, H.J. Application of crack identification techniques for an aging concrete bridge inspection using an unmanned aerial vehicle. Sensors 2018, 18, 1881. [Google Scholar] [CrossRef] [PubMed]
  16. Bayomi, N.; Nagpal, S.; Rakha, T.; Fernandez, J.E. Building envelope modeling calibration using aerial thermography. Energy Build. 2021, 233, 110648. [Google Scholar] [CrossRef]
  17. Mehrabi, A.B.; Farhangdoust, S. A laser-based noncontact vibration technique for health monitoring of structural cables: Background, success, and new developments. Adv. Acous. Vib. 2018, 2018, 8640674. [Google Scholar] [CrossRef]
  18. Siringoringo, D.M.; Wangchuk, S.; Fujino, Y. Noncontact operational modal analysis of light poles by vision-based motion-magnification method. Eng. Struct. 2021, 244, 112728. [Google Scholar] [CrossRef]
  19. Ye, X.; Dong, C.; Liu, T. A review of machine vision-based structural health monitoring: Methodologies and applications. J. Sens. 2016, 2016, 7103039. [Google Scholar] [CrossRef]
  20. Feng, D.; Feng, M.Q. Identification of structural stiffness and excitation forces in time domain using noncontact vision-based displacement measurement. J. Sound Vib. 2017, 406, 15–28. [Google Scholar] [CrossRef]
  21. Feng, D.; Scarangello, T.; Feng, M.Q.; Ye, Q. Cable tension force estimate using novel noncontact vision-based sensor. Measurement 2017, 99, 44–52. [Google Scholar] [CrossRef]
  22. Wang, W.; Cui, D.; Ai, C.; Zaheer, Q.; Wang, J.; Qiu, S.; Xiong, J. Target-free recognition of cable vibration in complex backgrounds based on computer vision. Mech. Syst. Signal Process. 2023, 197, 110392. [Google Scholar] [CrossRef]
  23. Yu, S.; Zhang, J.; Su, Z.; Jiang, P. Fast and robust vision-based cable force monitoring method free from environmental disturbances. Mech. Syst. Signal Process. 2023, 201, 110617. [Google Scholar] [CrossRef]
  24. Dabous, S.A.; Feroz, S. Condition monitoring of bridges with non-contact testing technologies. Autom. Constr. 2020, 116, 103224. [Google Scholar] [CrossRef]
  25. Xiang, C.; Gan, V.J.; Guo, J.; Deng, L. Semi-supervised learning framework for crack segmentation based on contrastive learning and cross pseudo supervision. Measurement 2023, 217, 113091. [Google Scholar] [CrossRef]
  26. Xiang, C.; Guo, J.; Cao, R.; Deng, L. A crack-segmentation algorithm fusing transformers and convolutional neural networks for complex detection scenarios. Autom. Constr. 2023, 152, 104894. [Google Scholar] [CrossRef]
  27. Jiang, T.; Frøseth, G.T.; Rønnquist, A. A robust bridge rivet identification method using deep learning and computer vision. Eng. Struct. 2023, 283, 115809. [Google Scholar] [CrossRef]
  28. Wang, M.; Ao, W.K.; Bownjohn, J.; Xu, F. Completely non-contact modal testing of full-scale bridge in challenging conditions using vision sensing systems. Eng. Struct. 2022, 272, 114994. [Google Scholar] [CrossRef]
  29. Jiang, T.; Rønnquist, A.; Song, Y.; Frøseth, G.T.; Nåvik, P. A detailed investigation of uplift and damping of a railway catenary span in traffic using a vision-based line-tracking system. J. Sound Vib. 2022, 527, 116875. [Google Scholar] [CrossRef]
  30. Jiang, T.; Frøseth, G.T.; Rønnquist, A.; Fagerholt, E. A robust line-tracking photogrammetry method for uplift measurements of railway catenary systems in noisy backgrounds. Mech. Syst. Signal Process. 2020, 144, 106888. [Google Scholar] [CrossRef]
  31. Kong, X.; Wang, T.; Zhang, J.; Deng, L.; Zhong, J.; Cui, Y.; Xia, S. Tire contact force equations for vision-based vehicle weight identification. Appl. Sci. 2022, 12, 4487. [Google Scholar] [CrossRef]
  32. Kong, X.; Zhang, J.; Wang, T.; Deng, L.; Cai, C.S. Non-contact vehicle weighing method based on tire-road contact model and computer vision techniques. Mech. Syst. Signal Process. 2022, 174, 109093. [Google Scholar] [CrossRef]
  33. Zhou, S.; Canchila, C.; Song, W. Deep learning-based crack segmentation for civil infrastructure: Data types, architectures, and benchmarked performance. Autom. Constr. 2023, 146, 104678. [Google Scholar] [CrossRef]
  34. Deng, J.; Singh, A.; Zhou, Y.; Lu, Y.; Lee, V.C.S. Review on computer vision-based crack detection and quantification methodologies for civil structures. Constr. Build. Mater. 2022, 356, 129238. [Google Scholar] [CrossRef]
  35. Jeong, E.; Seo, J.; Wacker, J. Literature review and technical survey on bridge inspection using unmanned aerial vehicles. J. Perform. Constr. Facil. 2020, 34, 04020113. [Google Scholar] [CrossRef]
  36. Poorghasem, S.; Bao, Y. Review of robot-based automated measurement of vibration for civil engineering structures. Measurement 2022, 207, 112382. [Google Scholar] [CrossRef]
  37. Zhuang, Y.; Chen, W.; Jin, T.; Chen, B.; Zhang, H.; Zhang, W. A review of computer vision-based structural deformation monitoring in field environments. Sensors 2022, 22, 3789. [Google Scholar] [CrossRef]
  38. Yoon, H.; Hoskere, V.; Park, J.W.; Spencer, B.F., Jr. Cross-correlation-based structural system identification using unmanned aerial vehicles. Sensors 2017, 17, 2075. [Google Scholar] [CrossRef] [PubMed]
  39. Chen, W.; He, Z.; Zhang, J. Online monitoring of crack dynamic development using attention-based deep networks. Autom. Constr. 2023, 154, 105022. [Google Scholar] [CrossRef]
  40. Seo, J.; Duque, L.; Wacker, J.P. Field application of UAS-based bridge inspection. Transport Res. Rec. 2018, 2672, 72–81. [Google Scholar] [CrossRef]
  41. Ai, D.; Jiang, G.; Lam, S.K.; He, P.; Li, C. Computer vision framework for crack detection of civil infrastructure—A review. Eng. Appl. Artif. Intell. 2023, 117, 105478. [Google Scholar] [CrossRef]
  42. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man. Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  43. Li, Q.; Liu, X. Novel approach to pavement image segmentation based on neighboring difference histogram method. In Proceedings of the 2008 Congress on Image and Signal Processing, Sanya, China, 27–30 May 2008; pp. 792–796. [Google Scholar] [CrossRef]
  44. Lim, R.S.; La, H.M.; Shan, Z.; Sheng, W. Developing a crack inspection robot for bridge maintenance. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 6288–6293. [Google Scholar] [CrossRef]
  45. Zhao, H.; Qin, G.; Wang, X. Improvement of canny algorithm based on pavement edge detection. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; pp. 964–967. [Google Scholar] [CrossRef]
  46. Li, Q.; Zou, Q.; Zhang, D.; Mao, Q. FoSA: F* seed-growing approach for crack-line detection from pavement images. Image Vision Comput. 2011, 29, 861–872. [Google Scholar] [CrossRef]
  47. Hsieh, Y.A.; Tsai, Y.J. Machine learning for crack detection: Review and model performance comparison. J. Comput. Civ. Eng. 2020, 34, 04020038. [Google Scholar] [CrossRef]
  48. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  49. Sapijaszko, G.; Mikhael, W.B. An overview of recent convolutional neural network algorithms for image recognition. In Proceedings of the 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS), Windsor, ON, Canada, 5–8 August 2018; pp. 743–746. [Google Scholar] [CrossRef]
  50. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–10 June 2015; pp. 1–9. [Google Scholar] [CrossRef]
  51. Bang, S.; Park, S.; Kim, H.; Kim, H. Encoder-decoder network for pixel-level road crack detection in black-box images. Comput. Aided Civ. Infrastruct. Eng. 2019, 34, 713–727. [Google Scholar] [CrossRef]
  52. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar] [CrossRef]
  53. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
  54. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar] [CrossRef]
  55. Kou, X.; Liu, S.; Cheng, K.; Qian, Y. Development of a YOLO-V3-based model for detecting defects on steel strip surface. Measurement 2021, 182, 109454. [Google Scholar] [CrossRef]
  56. Zhang, Y.; Huang, J.; Cai, F. On bridge surface crack detection based on an improved YOLO v3 algorithm. IFAC-PapersOnLine 2020, 53, 8205–8210. [Google Scholar] [CrossRef]
  57. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef]
  58. Schmugge, S.J.; Rice, L.; Nguyen, N.R.; Lindberg, J.; Grizzi, R.; Joffe, C.; Shin, M.C. Detection of cracks in nuclear power plant using spatial-temporal grouping of local patches. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016. [Google Scholar] [CrossRef]
  59. Dung, C.V. Autonomous concrete crack detection using deep fully convolutional neural network. Autom. Constr. 2019, 99, 52–58. [Google Scholar] [CrossRef]
  60. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  61. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar] [CrossRef]
  62. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar] [CrossRef]
  63. Payab, M.; Abbasina, R.; Khanzadi, M. A brief review and a new graph-based image analysis for concrete crack quantification. Arch. Comput. Methods Eng. 2019, 26, 347–365. [Google Scholar] [CrossRef]
  64. Adhikari, R.S.; Moselhi, O.; Bagchi, A. Image-based retrieval of concrete crack properties for bridge inspection. Autom. Constr. 2014, 39, 180–194. [Google Scholar] [CrossRef]
  65. Dong, C.Z.; Catbas, F.N. A review of computer vision-based structural health monitoring at local and global levels. Struct. Health Monit. 2021, 20, 692–743. [Google Scholar] [CrossRef]
  66. Li, R.; Yu, J.; Li, F.; Yang, R.; Wang, Y.; Peng, Z. Automatic bridge crack detection using Unmanned aerial vehicle and Faster R-CNN. Constr. Build. Mater. 2023, 362, 129659. [Google Scholar] [CrossRef]
  67. Zhang, L.; Shen, J.; Zhu, B. A research on an improved Unet-based concrete crack detection algorithm. Struct. Health Monit. 2021, 20, 1864–1879. [Google Scholar] [CrossRef]
  68. Xiang, C.; Wang, W.; Deng, L.; Shi, P.; Kong, X. Crack detection algorithm for concrete structures based on super-resolution reconstruction and segmentation network. Autom. Constr. 2022, 140, 104346. [Google Scholar] [CrossRef]
  69. Wang, W.; Su, C. Semi-supervised semantic segmentation network for surface crack detection. Autom. Constr. 2021, 128, 103786. [Google Scholar] [CrossRef]
  70. Chu, H.; Wang, W.; Deng, L. Tiny-Crack-Net: A multiscale feature fusion network with attention mechanisms for segmentation of tiny cracks. Comput. Aided Civ. Infrastruct. Eng. 2022, 37, 1914–1931. [Google Scholar] [CrossRef]
  71. Xu, H.; Su, X.; Wang, Y.; Cai, H.; Cui, K.; Chen, X. Automatic bridge crack detection using a convolutional neural network. Appl. Sci. 2019, 9, 2867. [Google Scholar] [CrossRef]
  72. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016. [Google Scholar] [CrossRef]
  73. Dorafshan, S.; Thomas, R.J.; Maguire, M. SDNET2018: An annotated image dataset for non-contact concrete crack detection using deep convolutional neural networks. Data Brief 2018, 21, 1664–1668. [Google Scholar] [CrossRef] [PubMed]
  74. Liu, Y.; Yao, J.; Lu, X.; Xie, R.; Li, L. DeepCrack: A deep hierarchical feature learning architecture for crack segmentation. Neurocomputing 2019, 338, 139–153. [Google Scholar] [CrossRef]
  75. Dare, P.; Hanley, H.; Fraser, C.; Riedel, B.; Niemeier, W. An operational application of automatic feature extraction: The measurement of cracks in concrete structures. Photogramm. Rec. 2002, 17, 453–464. [Google Scholar] [CrossRef]
  76. Fujita, Y.; Hamamoto, Y. A robust automatic crack detection method from noisy concrete surfaces. Mach. Vision Appl. 2011, 22, 245–254. [Google Scholar] [CrossRef]
  77. Luo, Q.; Ge, B.; Tian, Q. A fast adaptive crack detection algorithm based on a double-edge extraction operator of FSM. Constr. Build. Mater. 2019, 204, 244–254. [Google Scholar] [CrossRef]
  78. Kim, H.; Lee, J.; Ahn, E.; Cho, S.; Shin, M.; Sim, S.H. Concrete crack identification using a UAV incorporating hybrid image processing. Sensors 2017, 17, 2052. [Google Scholar] [CrossRef]
  79. Flah, M.; Suleiman, A.R.; Nehdi, M.L. Classification and quantification of cracks in concrete structures using deep learning image-based techniques. Cem. Concr. Compos. 2020, 114, 103781. [Google Scholar] [CrossRef]
  80. Liu, Y.; Yeoh, J.K. Robust pixel-wise concrete crack segmentation and properties retrieval using image patches. Autom. Constr. 2021, 123, 103535. [Google Scholar] [CrossRef]
  81. Miao, Z.; Ji, X.; Okazaki, T.; Takahashi, N. Pixel-level multicategory detection of visible seismic damage of reinforced concrete components. Comput. Aided Civ. Infrastruct. Eng. 2021, 36, 620–637. [Google Scholar] [CrossRef]
  82. Jahanshahi, M.R.; Masri, S.F. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation. Smart Mater. Struct. 2013, 22, 035019. [Google Scholar] [CrossRef]
  83. Cao, M.T.; Nguyen, N.M.; Chang, K.T.; Tran, X.L.; Hoang, N.D. Automatic recognition of concrete spall using image processing and metaheuristic optimized LogitBoost classification tree. Adv. Eng. Softw. 2021, 159, 103031. [Google Scholar] [CrossRef]
  84. Santos, R.; Ribeiro, D.; Lopes, P.; Cabral, R.; Calçada, R. Detection of exposed steel rebars based on deep-learning techniques and unmanned aerial vehicles. Autom. Constr. 2022, 139, 104324. [Google Scholar] [CrossRef]
  85. Hoang, N.D. Image processing-based spall object detection using gabor filter, texture analysis, and adaptive moment estimation (Adam) optimized logistic regression models. Adv. Civ. Eng. 2020, 2020, 8829715. [Google Scholar] [CrossRef]
  86. Hoang, N.D.; Huynh, T.C.; Tran, V.D. Concrete spalling severity classification using image texture analysis and a novel jellyfish search optimized machine learning approach. Adv. Civ. Eng. 2021, 2021, 5551555. [Google Scholar] [CrossRef]
  87. Nguyen, H.; Hoang, N.D. Computer vision-based classification of concrete spall severity using metaheuristic-optimized extreme gradient boosting machine and deep convolutional neural network. Autom. Constr. 2022, 140, 104371. [Google Scholar] [CrossRef]
  88. Mohammed, A.E.; Moselhi, O.; Marzouk, M.; Zayed, T. Entropy-based automated method for detection and assessment of spalling severities in reinforced concrete bridges. J. Perform. Constr. Facil. 2021, 35, 04020132. [Google Scholar] [CrossRef]
  89. Forkan, A.R.M.; Kang, Y.B.; Jayaraman, P.P.; Liao, K.; Kaul, R.; Morgan, G.; Ranjan, R.; Sinha, S. CorrDetector: A framework for structural corrosion detection from drone images using ensemble deep learning. Expert Syst. Appl. 2022, 193, 116461. [Google Scholar] [CrossRef]
  90. Dong, Y.; Pan, Y.; Wang, D.; Cheng, T. Corrosion detection and evaluation for steel wires based on a multi-vision scanning system. Constr. Build. Mater. 2022, 322, 125877. [Google Scholar] [CrossRef]
  91. Hou, S.; Dong, B.; Wang, H.; Wu, G. Inspection of surface defects on stay cables using a robot and transfer learning. Autom. Constr. 2020, 119, 103382. [Google Scholar] [CrossRef]
  92. Huang, X.; Liu, Z.; Zhang, X.; Kang, J.; Zhang, M.; Guo, Y. Surface damage detection for steel wire ropes using deep learning and computer vision techniques. Measurement 2020, 161, 107843. [Google Scholar] [CrossRef]
  93. Khayatazad, M.; De Pue, L.; De Waele, W. Detection of corrosion on steel structures using automated image processing. Dev. Built Environ. 2020, 3, 100022. [Google Scholar] [CrossRef]
  94. Meng, Q.; Zhang, Y.; Wang, H.; Huang, X.; Wang, Z. A Detection Method for Bridge Cables Based on Intelligent Image Recognition and Magnetic-Memory Technology. J. Perform. Constr. Facil. 2022, 36, 04022059. [Google Scholar] [CrossRef]
  95. Dunphy, K.; Sadhu, A.; Wang, J. Multiclass damage detection in concrete structures using a transfer learning-based generative adversarial networks. Struct. Control Health Monit. 2022, 29, e3079. [Google Scholar] [CrossRef]
  96. Hüthwohl, P.; Lu, R.; Brilakis, I. Multi-classifier for reinforced concrete bridge defects. Autom. Constr. 2019, 105, 102824. [Google Scholar] [CrossRef]
  97. Cha, Y.J.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Büyüköztürk, O. Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types. Comput. Aided Civ. Infrastruct. Eng. 2018, 33, 731–747. [Google Scholar] [CrossRef]
  98. Kim, B.; Cho, S. Automated multiple concrete damage detection using instance segmentation deep learning model. Appl. Sci. 2020, 10, 8008. [Google Scholar] [CrossRef]
  99. Zhang, C.; Chang, C.; Jamshidi, M. Concrete bridge surface damage detection using a single-stage detector. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 389–409. [Google Scholar] [CrossRef]
  100. Li, S.; Zhao, X.; Zhou, G. Automatic pixel-level multiple damage detection of concrete structure using fully convolutional network. Comput. Aided Civ. Infrastruct. Eng. 2019, 34, 616–634. [Google Scholar] [CrossRef]
  101. Ali, R.; Kang, D.; Suh, G.; Cha, Y.J. Real-time multiple damage mapping using autonomous UAV and deep faster region-based neural networks for GPS-denied structures. Autom. Constr. 2021, 130, 103831. [Google Scholar] [CrossRef]
  102. Peng, X.; Zhong, X.; Zhao, C.; Chen, A.; Zhang, T. A UAV-based machine vision method for bridge crack recognition and width quantification through hybrid feature learning. Constr. Build. Mater. 2021, 299, 123896. [Google Scholar] [CrossRef]
  103. La, H.; Gucunski, N.; Kee, S.; Yi, J.; Senlet, T.; Nguyen, L. Autonomous robotic system for bridge deck data collection and analysis. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 1950–1955. [Google Scholar] [CrossRef]
  104. Xie, R.; Yao, J.; Liu, K.; Lu, X.; Liu, Y.; Xia, M.; Zeng, Q. Automatic multi-image stitching for concrete bridge inspection by combining point and line features. Autom. Constr. 2018, 90, 265–280. [Google Scholar] [CrossRef]
  105. Leibbrandt, A.; Caprari, G.; Angst, U.; Siegwart, R.Y.; Flatt, R.J.; Elsener, B. Climbing robot for corrosion monitoring of reinforced concrete structures. In Proceedings of the 2012 2nd International Conference on Applied Robotics for the Power Industry (CARPI), Zurich, Switzerland, 11–13 September 2012; pp. 10–15. [Google Scholar] [CrossRef]
  106. Guan, D.; Yan, L.; Yang, Y.; Xu, W. A small climbing robot for the intelligent inspection of nuclear power plants. In Proceedings of the 2014 4th IEEE International Conference on Information Science and Technology, Shenzhen, China, 26–28 April 2014; pp. 484–487. [Google Scholar] [CrossRef]
  107. Jung, S.; Song, S.; Kim, S.; Park, J.; Her, J.; Roh, K.; Myung, H. Toward Autonomous Bridge Inspection: A framework and experimental results. In Proceedings of the 2019 16th International Conference on Ubiquitous Robots (UR), Jeju, Republic of Korea, 24–27 June 2019; pp. 208–211. [Google Scholar] [CrossRef]
  108. Liu, Q.; Liu, Y. An approach for auto bridge inspection based on climbing robot. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013; pp. 2581–2586. [Google Scholar] [CrossRef]
  109. Jang, K.; An, Y.K.; Kim, B.; Cho, S. Automated crack evaluation of a high-rise bridge pier using a ring-type climbing robot. Comput. Aided Civ. Infrastruct. Eng. 2021, 36, 14–29. [Google Scholar] [CrossRef]
  110. Boomeri, V.; Tourajizadeh, H. Design, modeling, and control of a new manipulating climbing robot through infrastructures using adaptive force control method. Robotica 2020, 38, 2039–2059. [Google Scholar] [CrossRef]
  111. Sutter, B.; Lelevé, A.; Pham, M.T.; Gouin, O.; Jupille, N.; Kuhn, M.; Lulé, P.; Michaud, P.; Rémy, P. A semi-autonomous mobile robot for bridge inspection. Autom. Constr. 2018, 91, 111–119. [Google Scholar] [CrossRef]
  112. Liu, Y.; Dai, Q.; Liu, Q. Adhesion-adaptive control of a novel bridge-climbing robot. In Proceedings of the 2013 International Conference on Cyber Technology in Automation, Control and Intelligent Systems, Nanjing, China, 26–29 May 2013; pp. 102–107. [Google Scholar] [CrossRef]
  113. Peel, H.; Luo, S.; Cohn, A.G.; Fuentes, R. Localisation of a mobile robot for bridge bearing inspection. Autom. Constr. 2018, 94, 244–256. [Google Scholar] [CrossRef]
  114. Nguyen, S.T.; La, H.M. A climbing robot for steel bridge inspection. J. Intell. Robot Syst. 2021, 102, 75. [Google Scholar] [CrossRef]
  115. Xu, F.; Wang, X.; Wu, H. Inspection method of cable-stayed bridge using magnetic flux leakage detection: Principle, sensor design, and signal processing. J. Mech. Sci. Technol. 2012, 26, 661–669. [Google Scholar] [CrossRef]
  116. Yun, H.B.; Kim, S.H.; Wu, L.; Lee, J.J. Development of inspection robots for bridge cables. Sci. World J. 2013, 2013, 967508. [Google Scholar] [CrossRef]
  117. Cho, K.H.; Jin, Y.H.; Kim, H.M.; Moon, H.; Koo, J.C.; Choi, H.R. Multifunctional robotic crawler for inspection of suspension bridge hanger cables: Mechanism design and performance validation. IEEE-ASME Trans. Mech. 2016, 22, 236–246. [Google Scholar] [CrossRef]
  118. Kang, D.; Cha, Y.J. Autonomous UAVs for structural health monitoring using deep learning and an ultrasonic beacon system with geo-tagging. Comput. Aided Civ. Infrastruct. Eng. 2018, 33, 885–902. [Google Scholar] [CrossRef]
  119. Seo, J.; Duque, L.; Wacker, J. Drone-enabled bridge inspection methodology and application. Autom. Constr. 2018, 94, 112–126. [Google Scholar] [CrossRef]
  120. Ellenberg, A.; Kontsos, A.; Moon, F.; Bartoli, I. Bridge deck delamination identification from unmanned aerial vehicle infrared imagery. Autom. Constr. 2016, 72, 155–165. [Google Scholar] [CrossRef]
  121. Sanchez-Cuevas, P.J.; Ramon-Soria, P.; Arrue, B.; Ollero, A.; Heredia, G. Robotic system for inspection by contact of bridge beams using UAVs. Sensors 2019, 19, 305. [Google Scholar] [CrossRef] [PubMed]
  122. Jiang, S.; Zhang, J. Real-time crack assessment using deep neural networks with wall-climbing unmanned aerial system. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 549–564. [Google Scholar] [CrossRef]
  123. Dong, C.; Bas, S.; Catbas, F.N. Investigation of vibration serviceability of a footbridge using computer vision-based methods. Eng. Struct. 2020, 224, 111224. [Google Scholar] [CrossRef]
  124. Yu, L.; Lubineau, G. A smartphone camera and built-in gyroscope based application for non-contact yet accurate off-axis structural displacement measurements. Measurement 2021, 167, 108449. [Google Scholar] [CrossRef]
  125. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2001; pp. 1333–1341. [Google Scholar] [CrossRef]
  126. Smith, W.J. Modern Optical Engineering: The Design of Optical Systems, 4th ed.; McGraw-Hill Education: New York, NY, USA, 2008; pp. 154–196. Available online: https://www.accessengineeringlibrary.com/content/book/9780071476874 (accessed on 16 August 2023).
  127. Dong, C.Z.; Celik, O.; Catbas, F.N.; O’Brien, E.J.; Taylor, S. Structural displacement monitoring using deep learning-based full field optical flow methods. Struct. Infrastruct. Eng. 2020, 16, 51–71. [Google Scholar] [CrossRef]
  128. Dong, C.; Ye, X.; Jin, T. Identification of structural dynamic characteristics based on machine vision technology. Measurement 2018, 126, 405–416. [Google Scholar] [CrossRef]
  129. Khuc, T.; Catbas, F.N. Completely contactless structural health monitoring of real-life structures using cameras and computer vision. Struct. Control Health Monit. 2017, 24, e1852. [Google Scholar] [CrossRef]
  130. Dong, C.Z.; Celik, O.; Catbas, F.N. Marker-free monitoring of the grandstand structures and modal identification using computer vision methods. Struct. Health Monit. 2019, 18, 1491–1509. [Google Scholar] [CrossRef]
  131. Celik, O.; Dong, C.Z.; Catbas, F.N. A computer vision approach for the load time history estimation of lively individuals and crowds. Comput. Struct. 2018, 200, 32–52. [Google Scholar] [CrossRef]
  132. Ehrhart, M.; Lienhart, W. Development and evaluation of a long range image-based monitoring system for civil engineering structures. Struct. Health Monit. Insp. Adv. Mater. Aerosp. Civ. Infrastruct. 2015, 9437, 123–135. [Google Scholar] [CrossRef]
  133. Ye, X.; Dong, C.; Liu, T. Image-based structural dynamic displacement measurement using different multi-object tracking algorithms. Smart Struct. Syst. 2016, 17, 935–956. [Google Scholar] [CrossRef]
  134. Bertinetto, L.; Valmadre, J.; Henriques, J.F.; Vedaldi, A.; Torr, P.H. Fully-convolutional siamese networks for object tracking. In Proceedings of the Computer Vision-ECCV 2016 Workshops, Amsterdam, The Netherlands, 8–10 and 15–16 October 2016; pp. 850–865. [Google Scholar] [CrossRef]
  135. Zhang, K.; Zhang, L.; Liu, Q.; Zhang, D.; Yang, M.H. Fast visual tracking via dense spatio-temporal context learning. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; pp. 127–141. [Google Scholar] [CrossRef]
  136. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 2004, 60, 91–110. [Google Scholar] [CrossRef]
  137. Khaloo, A.; Lattanzi, D. Pixel-wise structural motion tracking from rectified repurposed videos. Struct. Control Health Monit. 2017, 24, e2009. [Google Scholar] [CrossRef]
  138. Tian, T.Y.; Tomasi, C.; Heeger, D.J. Comparison of approaches to egomotion computation. In Proceedings of the CVPR IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 18–20 June 1996. [Google Scholar] [CrossRef]
  139. Tian, Y.; Zhang, J.; Yu, S. Rapid impact testing and system identification of footbridges using particle image velocimetry. Comput. Aided Civ. Infrastruct. Eng. 2019, 34, 130–145. [Google Scholar] [CrossRef]
  140. Wang, Q.; Zhang, L.; Bertinetto, L.; Hu, W.; Torr, P.H. Fast online object tracking and segmentation: A unifying approach. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Los Angeles, CA, USA, 15–21 June 2019. [Google Scholar] [CrossRef]
  141. Xu, Y.; Brownjohn, J.; Kong, D. A non-contact vision-based system for multipoint displacement monitoring in a cable-stayed footbridge. Struct. Control Health Monit. 2018, 25, e2155. [Google Scholar] [CrossRef]
  142. Xu, Y.; Brownjohn, J.M.W.; Huseynov, F. Accurate deformation monitoring on bridge structures using a cost-effective sensing system combined with a camera and accelerometers: Case study. J. Bridge Eng. 2019, 24, 05018014. [Google Scholar] [CrossRef]
  143. Feng, D.; Feng, M.Q. Experimental validation of cost-effective vision-based structural health monitoring. Mech. Syst. Signal Process. 2017, 88, 199–211. [Google Scholar] [CrossRef]
  144. Khuc, T.; Catbas, F.N. Computer vision-based displacement and vibration monitoring without using physical target on structures. Struct. Infrastruct. Eng. 2017, 13, 505–516. [Google Scholar] [CrossRef]
  145. Lydon, D.; Lydon, M.; Taylor, S.; Del Rincon, J.M.; Hester, D.; Brownjohn, J. Development and field testing of a vision-based displacement system using a low cost wireless action camera. Mech. Syst. Signal Process. 2019, 121, 343–358. [Google Scholar] [CrossRef]
  146. Shariati, A.; Schumacher, T. Eulerian-based virtual visual sensors to measure dynamic displacements of structures. Struct. Control Health Monit. 2017, 24, e1977. [Google Scholar] [CrossRef]
  147. Fioriti, V.; Roselli, I.; Tatì, A.; Romano, R.; De Canio, G. Motion Magnification Analysis for structural monitoring of ancient constructions. Measurement 2018, 129, 375–380. [Google Scholar] [CrossRef]
  148. Hoskere, V.; Park, J.W.; Yoon, H.; Spencer, B.F., Jr. Vision-based modal survey of civil infrastructure using unmanned aerial vehicles. J. Struct. Eng. 2019, 145, 04019062. [Google Scholar] [CrossRef]
  149. Chen, J.G.; Adams, T.M.; Sun, H.; Bell, E.S.; Büyüköztürk, O. Camera-based vibration measurement of the world war I memorial bridge in Portsmouth, New Hampshire. J. Struct. Eng. 2018, 144, 04018207. [Google Scholar] [CrossRef]
  150. Yoon, H.; Shin, J.; Spencer, B.F., Jr. Structural displacement measurement using an unmanned aerial system. Comput. Aided Civ. Infrastruct. Eng. 2018, 33, 183–192. [Google Scholar] [CrossRef]
  151. Feng, D.; Feng, M.Q.; Ozer, E.; Fukuda, Y. A vision-based sensor for noncontact structural displacement measurement. Sensors 2015, 15, 16557–16575. [Google Scholar] [CrossRef]
  152. Xu, Y.; Brownjohn, J.M. Review of machine-vision based methodologies for displacement measurement in civil structures. J. Civ. Struct. Health Monit. 2018, 8, 91–110. [Google Scholar] [CrossRef]
  153. Zhao, X.; Liu, H.; Yu, Y.; Xu, X.; Hu, W.; Li, M.; Ou, J. Bridge displacement monitoring method based on laser projection-sensing technology. Sensors 2015, 15, 8444–8463. [Google Scholar] [CrossRef]
  154. Artese, S.; Achilli, V.; Zinno, R. Monitoring of bridges by a laser pointer: Dynamic measurement of support rotations and elastic line displacements: Methodology and first test. Sensors 2018, 18, 338. [Google Scholar] [CrossRef] [PubMed]
  155. Lee, J.; Lee, K.C.; Cho, S.; Sim, S.H. Computer vision-based structural displacement measurement robust to light-induced image degradation for in-service bridges. Sensors 2017, 17, 2317. [Google Scholar] [CrossRef]
  156. Vicente, M.A.; Gonzalez, D.C.; Minguez, J.; Schumacher, T. A novel laser and video-based displacement transducer to monitor bridge deflections. Sensors 2018, 18, 970. [Google Scholar] [CrossRef]
  157. Feng, M.Q.; Fukuda, Y.; Feng, D.; Mizuta, M. Nontarget vision sensor for remote measurement of bridge dynamic response. J. Bridge Eng. 2015, 20, 04015023. [Google Scholar] [CrossRef]
  158. Stephen, G.A.; Brownjohn, J.M.W.; Taylor, C.A. Measurements of static and dynamic displacement from visual monitoring of the Humber Bridge. Eng. Struct. 1993, 15, 197–208. [Google Scholar] [CrossRef]
  159. Guo, J.; Zhu, C.A. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm. Mech. Syst. Signal Process. 2016, 66, 425–436. [Google Scholar] [CrossRef]
  160. Ye, X.W.; Yi, T.H.; Dong, C.Z.; Liu, T.; Bai, H. Multi-point displacement monitoring of bridges using a vision-based approach. Wind. Struct. 2015, 20, 315–326. [Google Scholar] [CrossRef]
  161. Wang, M.; Ao, W.K.; Bownjohn, J.; Xu, F. A novel gradient-based matching via voting technique for vision-based structural displacement measurement. Mech. Syst. Signal Process. 2022, 171, 108951. [Google Scholar] [CrossRef]
  162. Zhu, J.; Zhang, C.; Lu, Z.; Li, X. A multi-resolution deep feature framework for dynamic displacement measurement of bridges using vision-based tracking system. Measurement 2021, 183, 109847. [Google Scholar] [CrossRef]
  163. Dong, C.Z.; Bas, S.; Catbas, F.N. A completely non-contact recognition system for bridge unit influence line using portable cameras and computer vision. Smart Struct. Syst. 2019, 24, 617–630. [Google Scholar] [CrossRef]
  164. Javh, J.; Slavič, J.; Boltežar, M. The subpixel resolution of optical-flow-based modal analysis. Mech. Syst. Signal Process. 2017, 88, 89–99. [Google Scholar] [CrossRef]
  165. Javh, J.; Slavič, J.; Boltežar, M. Measuring full-field displacement spectral components using photographs taken with a DSLR camera via an analogue Fourier integral. Mech. Syst. Signal Process. 2018, 100, 17–27. [Google Scholar] [CrossRef]
  166. Xie, K.; Lei, D.; Du, W.; Bai, P.; Zhu, F.; Liu, F. A new operator based on edge detection for monitoring the cable under different illumination. Mech. Syst. Signal Process. 2023, 187, 109926. [Google Scholar] [CrossRef]
  167. Miao, Y.; Kong, Y.; Jeon, J.Y.; Nam, H.; Park, G. A novel marker for robust and accurate phase-based 2D motion estimation from noisy image data. Mech. Syst. Signal Process. 2023, 187, 109931. [Google Scholar] [CrossRef]
  168. Helfrick, M.N.; Niezrecki, C.; Avitabile, P.; Schmidt, T. 3D digital image correlation methods for full-field vibration measurement. Mech. Syst. Signal Process. 2011, 25, 917–927. [Google Scholar] [CrossRef]
  169. Warren, C.; Niezrecki, C.; Avitabile, P. FRF measurements and mode shapes determined using image-based 3D point-tracking. In Proceedings of the 29th IMAC on Structural Dynamics, New York, NY, USA, 26–29 May 2023; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar] [CrossRef]
  170. Pan, B.; Yu, L.; Zhang, Q. Review of single-camera stereo-digital image correlation techniques for full-field 3D shape and deformation measurement. Sci. China Technol. Sci. 2018, 61, 2–20. [Google Scholar] [CrossRef]
  171. Barone, S.; Neri, P.; Paoli, A.; Razionale, A.V. Low-frame-rate single camera system for 3D full-field high-frequency vibration measurements. Mech. Syst. Signal Process. 2019, 123, 143–152. [Google Scholar] [CrossRef]
  172. Shao, Y.; Li, L.; Li, J.; An, S.; Hao, H. Computer vision based target-free 3D vibration displacement measurement of structures. Eng. Struct. 2021, 246, 113040. [Google Scholar] [CrossRef]
  173. Han, Y.; Wu, G.; Feng, D. Vision-based displacement measurement using an unmanned aerial vehicle. Struct. Control Health Monit. 2022, 29, e3025. [Google Scholar] [CrossRef]
  174. Tian, Y.; Zhang, C.; Jiang, S.; Zhang, J.; Duan, W. Noncontact cable force estimation with unmanned aerial vehicle and computer vision. Comput. Aided Civ. Infrastruct. Eng. 2021, 36, 73–88. [Google Scholar] [CrossRef]
  175. Zhang, C.; Tian, Y.; Zhang, J. Complex image background segmentation for cable force estimation of urban bridges with drone-captured video and deep learning. Struct. Control Health Monit. 2022, 29, e2910. [Google Scholar] [CrossRef]
  176. Liu, G.; He, C.; Zou, C.; Wang, A. Displacement measurement based on UAV images using SURF-enhanced camera calibration algorithm. Remote Sens. 2022, 14, 6008. [Google Scholar] [CrossRef]
  177. Jiang, S.; Zhang, J.; Gao, C. Bridge Deformation Measurement Using Unmanned Aerial Dual Camera and Learning-Based Tracking Method. Struct. Control Health Monit. 2023, 2023, 4752072. [Google Scholar] [CrossRef]
  178. Smyth, A.; Wu, M. Multi-rate Kalman filtering for the data fusion of displacement and acceleration response measurements in dynamic system monitoring. Mech. Syst. Signal Process. 2007, 21, 706–723. [Google Scholar] [CrossRef]
179. Ma, Z.; Choi, J.; Liu, P.; Sohn, H. Structural displacement estimation by fusing vision camera and accelerometer using hybrid computer vision algorithm and adaptive multi-rate Kalman filter. Automat. Constr. 2022, 140, 104338.
180. Ma, Z.; Choi, J.; Sohn, H. Real-time structural displacement estimation by fusing asynchronous acceleration and computer vision measurements. Comput. Aided Civ. Infrastruct. Eng. 2022, 37, 688–703.
181. Park, J.W.; Moon, D.S.; Yoon, H.; Gomez, F.; Spencer, B.F., Jr.; Kim, J.R. Visual–inertial displacement sensing using data fusion of vision-based displacement with acceleration. Struct. Control Health Monit. 2018, 25, e2122.
182. Wu, T.; Tang, L.; Shao, S.; Zhang, X.; Liu, Y.; Zhou, Z.; Qi, X. Accurate structural displacement monitoring by data fusion of a consumer-grade camera and accelerometers. Eng. Struct. 2022, 262, 114303.
183. Wu, T.; Tang, L.; Li, X.; Zhang, X.; Liu, Y.; Zhou, Z. Sparse accelerometer-aided computer vision technology for the accurate full-field displacement estimation of beam-type bridge structures. Measurement 2023, 212, 112532.
184. Feng, D.; Feng, M.Q. Vision-based multipoint displacement measurement for structural health monitoring. Struct. Control Health Monit. 2016, 23, 876–890.
185. Gao, X.; Ji, X.; Zhang, Y.; Zhuang, Y.; Cai, E. Structural displacement estimation by a hybrid computer vision approach. Mech. Syst. Signal Process. 2023, 204, 110754.
186. Chen, Z.; Ruan, X.; Zhang, Y. Vision-Based Dynamic Response Extraction and Modal Identification of Simple Structures Subject to Ambient Excitation. Remote Sens. 2023, 15, 962.
187. Feng, D.; Feng, M.Q. Computer vision for SHM of civil infrastructure: From dynamic response measurement to damage detection—A review. Eng. Struct. 2018, 156, 105–117.
188. Yoon, H.; Elanwar, H.; Choi, H.; Golparvar-Fard, M.; Spencer, B.F., Jr. Target-free approach for vision-based structural system identification using consumer-grade cameras. Struct. Control Health Monit. 2016, 23, 1405–1416.
189. Ji, Y.; Chang, C. Nontarget stereo vision technique for spatiotemporal response measurement of line-like structures. J. Eng. Mech. 2008, 134, 466–474.
190. Ngeljaratan, L.; Moustafa, M.A. Structural health monitoring and seismic response assessment of bridge structures using target-tracking digital image correlation. Eng. Struct. 2020, 213, 110551.
191. Luo, K.; Kong, X.; Wang, X.; Jiang, T.; Frøseth, G.T.; Rønnquist, A. Cable vibration measurement based on broad-band phase-based motion magnification and line tracking algorithm. Mech. Syst. Signal Process. 2023, 200, 110575.
192. Wadhwa, N.; Rubinstein, M.; Durand, F.; Freeman, W.T. Phase-based video motion processing. ACM Trans. Graph. 2013, 32, 1–10.
193. Chen, J.G.; Wadhwa, N.; Cha, Y.J.; Durand, F.; Freeman, W.T.; Buyukozturk, O. Modal identification of simple structures with high-speed video using motion magnification. J. Sound Vib. 2015, 345, 58–71.
194. Yang, Y.; Dorn, C.; Mancini, T.; Talken, Z.; Nagarajaiah, S.; Kenyon, G.; Farrar, C.; Mascareñas, D. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements. J. Sound Vib. 2017, 390, 232–256.
195. Yang, Y.; Dorn, C.; Mancini, T.; Talken, Z.; Kenyon, G.; Farrar, C.; Mascareñas, D. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification. Mech. Syst. Signal Process. 2017, 85, 567–590.
196. Molina-Viedma, A.J.; Felipe-Sesé, L.; López-Alba, E.; Díaz, F. High frequency mode shapes characterisation using Digital Image Correlation and phase-based motion magnification. Mech. Syst. Signal Process. 2018, 102, 245–261.
197. Molina-Viedma, A.J.; Felipe-Sesé, L.; López-Alba, E.; Díaz, F.A. 3D mode shapes characterisation using phase-based motion magnification in large structures using stereoscopic DIC. Mech. Syst. Signal Process. 2018, 108, 140–155.
198. Han, Y.; Wu, G.; Feng, D. Structural modal identification using a portable laser-and-camera measurement system. Measurement 2023, 214, 112768.
199. Willems, T.; Egner, F.S.; Wang, Y.; Kirchner, M.; Desmet, W.; Naets, F. Time-domain model identification of structural dynamics from spatially dense 3D vision-based measurements. Mech. Syst. Signal Process. 2023, 182, 109553.
200. Javh, J.; Slavič, J.; Boltežar, M. High frequency modal identification on noisy high-speed camera data. Mech. Syst. Signal Process. 2018, 98, 344–351.
201. Zaletelj, K.; Gorjup, D.; Slavič, J.; Boltežar, M. Multi-level curvature-based parametrization and model updating using a 3D full-field response. Mech. Syst. Signal Process. 2023, 187, 109927.
202. Gorjup, D.; Slavič, J.; Boltežar, M. Frequency domain triangulation for full-field 3D operating-deflection-shape identification. Mech. Syst. Signal Process. 2019, 133, 106287.
203. Bhowmick, S.; Nagarajaiah, S. Spatiotemporal compressive sensing of full-field Lagrangian continuous displacement response from optical flow of edge: Identification of full-field dynamic modes. Mech. Syst. Signal Process. 2022, 164, 108232.
204. Kong, X.; Yi, J.; Wang, X.; Luo, K.; Hu, J. Full-Field Mode Shape Identification Based on Subpixel Edge Detection and Tracking. Appl. Sci. 2023, 13, 747.
205. Wangchuk, S.; Siringoringo, D.M.; Fujino, Y. Modal analysis and tension estimation of stay cables using noncontact vision-based motion magnification method. Struct. Control Health Monit. 2022, 29, e2957.
206. Xu, Y. Photogrammetry-based structural damage detection by tracking a visible laser line. Struct. Health Monit. 2020, 19, 322–336.
207. Cha, Y.J.; Chen, J.G.; Büyüköztürk, O. Output-only computer vision based damage detection using phase-based optical flow and unscented Kalman filters. Eng. Struct. 2017, 132, 300–313.
208. Khuc, T.; Catbas, F.N. Structural identification using computer vision–based bridge health monitoring. J. Struct. Eng. 2018, 144, 04017202.
209. Zhang, T.; Shi, D.; Wang, Z.; Zhang, P.; Wang, S.; Ding, X. Vibration-based structural damage detection via phase-based motion estimation using convolutional neural networks. Mech. Syst. Signal Process. 2022, 178, 109320.
210. Shu, J.; Zhang, C.; Chen, X.; Niu, Y. Model-informed deep learning strategy with vision measurement for damage identification of truss structures. Mech. Syst. Signal Process. 2023, 196, 110327.
211. Hu, H.; Wang, J.; Dong, C.Z.; Chen, J.; Wang, T. A hybrid method for damage detection and condition assessment of hinge joints in hollow slab bridges using physical models and vision-based measurements. Mech. Syst. Signal Process. 2023, 183, 109631.
212. Zhang, B.; Zhou, L.; Zhang, J. A methodology for obtaining spatiotemporal information of the vehicles on bridges based on computer vision. Comput. Aided Civ. Infrastruct. Eng. 2019, 34, 471–487.
213. Singla, N. Motion detection based on frame difference method. Int. J. Inform. Comput. Technol. 2014, 4, 1559–1565.
214. Bai, Z.; Gao, Q.; Yu, X. Moving object detection based on adaptive loci frame difference method. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; Available online: https://ieeexplore.ieee.org/document/8816624 (accessed on 29 August 2019).
215. Liang, R.; Yan, L.; Gao, P.; Qian, X.; Zhang, Z.; Sun, H. Aviation video moving-target detection with inter-frame difference. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; Available online: https://ieeexplore.ieee.org/document/5646303 (accessed on 29 November 2010).
216. Zhang, Y.; Wang, X.; Qu, B. Three-frame difference algorithm research based on mathematical morphology. Procedia Eng. 2012, 29, 2705–2709.
217. Zuo, F.; Gao, S. Moving Object Detection and Tracking Based on WADM. In Proceedings of the 2009 International Symposium on Computer Network and Multimedia Technology, Wuhan, China, 18–20 January 2009; Available online: https://ieeexplore.ieee.org/document/5374584 (accessed on 8 January 2010).
218. Barnich, O.; Van Droogenbroeck, M. ViBe: A universal background subtraction algorithm for video sequences. IEEE Trans. Image Process. 2010, 20, 1709–1724.
219. Cioppa, A.; Braham, M.; Van Droogenbroeck, M. Asynchronous semantic background subtraction. J. Imaging 2020, 6, 50.
220. Horn, B.K.P.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203.
221. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 14–16 October 2016.
222. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
223. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988.
224. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. Available online: https://ieeexplore.ieee.org/document/7485869 (accessed on 6 June 2016).
225. Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; Brox, T. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
226. Guo, R.; Shen, X.; Dong, X.; Zhang, X. Multi-focus image fusion based on fully convolutional networks. Front. Inform. Technol. Electron. Eng. 2020, 21, 1019–1033.
227. Branson, S.; Van Horn, G.; Belongie, S.; Perona, P. Bird species categorization using pose normalized deep convolutional nets. arXiv 2014, arXiv:1406.2952.
228. Zhang, N.; Donahue, J.; Girshick, R.; Darrell, T. Part-based R-CNNs for fine-grained category detection. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014.
229. Xiao, T.; Xu, Y.; Yang, K.; Peng, Y.; Zhang, Z. The application of two-level attention models in deep convolutional neural network for fine-grained image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014.
230. Simon, M.; Rodner, E.; Denzler, J. Part detector discovery in deep convolutional neural networks. In Proceedings of the Computer Vision–ACCV 2014: 12th Asian Conference on Computer Vision, Singapore, 1–5 November 2014.
231. Simon, M.; Rodner, E. Neural activation constellations: Unsupervised part model discovery with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
232. Jeyabharathi, D.; Dejey, D. Vehicle Tracking and Speed Measurement system (VTSM) based on novel feature descriptor: Diagonal Hexadecimal Pattern (DHP). J. Visual Commun. Image Represent. 2016, 40, 816–830.
233. Doğan, S.; Temiz, M.S.; Külür, S. Real time speed estimation of moving vehicles from side view images from an uncalibrated video camera. Sensors 2010, 10, 4805–4824.
234. Hua, S.; Kapoor, M.; Anastasiu, D.C. Vehicle tracking and speed estimation from traffic videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018.
235. Lan, J.; Li, J.; Hu, G.; Ran, B.; Wang, L. Vehicle speed measurement based on gray constraint optical flow algorithm. Optik 2014, 125, 289–295.
236. Javadi, S.; Dahl, M.; Pettersson, M.I. Vehicle speed measurement model for video-based systems. Comput. Electr. Eng. 2019, 76, 238–248.
237. Dahl, M.; Javadi, S. Analytical modeling for a video-based vehicle speed measurement framework. Sensors 2020, 20, 160.
238. Huval, B.; Wang, T.; Tandon, S.; Kiske, J.; Song, W.; Pazhayampallil, J.; Andriluka, M.; Rajpurkar, P.; Migimatsu, T.; Cheng-Yue, R.; et al. An empirical evaluation of deep learning on highway driving. arXiv 2015, arXiv:1504.01716.
239. Li, J.; Mei, X.; Prokhorov, D.; Tao, D.C. Deep neural network for structural prediction and lane detection in traffic scene. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 690–703.
240. Kim, J.; Lee, M. Robust lane detection based on convolutional neural network and random sample consensus. Lect. Notes Comput. Sci. 2014, 8834, 454–461.
241. Lee, S.; Kim, J.; Shin Yoon, J.; Shin, S.; Bailo, O.; Kim, N.; Lee, T.; Hong, S.H.; Han, S.; So Kweon, I. VPGNet: Vanishing point guided network for lane and road marking detection and recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
242. Chen, Z.; Li, H.; Bao, Y.; Li, N.; Jin, Y. Identification of spatio-temporal distribution of vehicle loads on long-span bridges using computer vision technology. Struct. Control Health Monit. 2016, 23, 517–534.
243. Brown, R.; Wicks, A. Vehicle tracking for bridge load dynamics using vision techniques. Struct. Health Monit. 2016, 7, 83–90.
244. Ojio, T.; Carey, C.H.; Obrien, E.J.; Doherty, C.; Taylor, S.E. Contactless bridge weigh-in-motion. J. Bridge Eng. 2016, 21, 04016032.
245. Kim, G.; Cho, J.S. Vision-based vehicle detection and inter-vehicle distance estimation for driver alarm system. Opt. Rev. 2012, 19, 388–393.
246. Park, K.Y.; Hwang, S.Y. Robust range estimation with a monocular camera for vision-based forward collision warning system. Sci. World J. 2014, 2014, 923632.
247. Chen, M.; Tian, D.; Xiao, X.; Bian, C. Research on the forward distance detection algorithm based on the camera switching. IOP Conf. Ser. Mater. Sci. Eng. 2019, 533, 012043.
248. Ye, X.; Ni, Y.; Wai, T.; Wang, K.; Zhang, X.; Xu, F. A vision-based system for dynamic displacement measurement of long-span bridges: Algorithm and verification. Smart Struct. Syst. 2013, 12, 363–379.
249. He, W.; Deng, L.; Shi, H.; Cai, C.S.; Yu, Y. Novel virtual simply supported beam method for detecting the speed and axles of moving vehicles on bridges. J. Bridge Eng. 2017, 22, 04016141.
250. He, W.; Ling, T.; Obrien, E.J.; Deng, L. Virtual axle method for bridge weigh-in-motion systems requiring no axle detector. J. Bridge Eng. 2019, 24, 04019086.
251. Ding, Y.; Zhou, D.; Wang, Z.; Li, M.; Yu, S. Overload and load centroid recognition method based on vertical displacement of body. In Proceedings of the 2019 34th Youth Academic Annual Conference of Chinese Association of Automation (YAC), Jinzhou, China, 6–8 June 2019.
252. Micu, E.A.; Malekjafarian, A.; Obrien, E.J.; Quilligan, M.; McKinstray, R.; Augus, E.; Lydon, M.; Catbas, F.N. Evaluation of the extreme traffic load effects on the Forth Road Bridge using image analysis of traffic data. Adv. Eng. Softw. 2019, 137, 102711.
253. Zhou, Y.; Pei, Y.; Li, Z.; Fang, L.; Zhao, Y.; Yi, W. Vehicle weight identification system for spatiotemporal load distribution on bridges based on non-contact machine vision technology and deep learning algorithms. Measurement 2020, 159, 107801.
254. Dan, D.; Ge, L.; Yan, X. Identification of moving loads based on the information fusion of weigh-in-motion system and multiple camera machine vision. Measurement 2019, 144, 155–166.
255. Feng, M.Q.; Leung, R.Y.; Eckersley, C.M. Non-contact vehicle weigh-in-motion using computer vision. Measurement 2020, 153, 107415.
256. Feng, M.Q.; Leung, R.Y. Application of computer vision for estimation of moving vehicle weight. IEEE Sens. J. 2021, 21, 11588–11597.
257. Zaurin, R.; Catbas, F.N. Structural health monitoring using video stream, influence lines, and statistical analysis. Struct. Health Monit. 2011, 10, 309–332.
258. Zaurin, R.; Catbas, F.N. Integration of computer imaging and sensor data for structural health monitoring of bridges. Smart Mater. Struct. 2009, 19, 015019.
259. Catbas, F.N.; Zaurin, R.; Gul, M.; Gokce, H.B. Sensor networks, computer imaging, and unit influence lines for structural health monitoring: Case study for bridge load rating. J. Bridge Eng. 2012, 17, 662–670.
260. Zaurin, R.; Khuc, T.; Catbas, F.N. Hybrid sensor-camera monitoring for damage detection: Case study of a real bridge. J. Bridge Eng. 2016, 21, 05016002.
261. Jian, X.; Xia, Y.; Lozano-Galant, J.A.; Sun, L.M. Traffic sensing methodology combining influence line theory and computer vision techniques for girder bridges. J. Sens. 2019, 2019, 3409525.
262. Pan, Y.; Wang, D.; Shen, X.; Xu, Y.; Pan, Z. A novel computer vision-based monitoring methodology for vehicle-induced aerodynamic load on noise barrier. Struct. Control Health Monit. 2018, 25, e2271.
263. Xia, Y.; Jian, X.; Yan, B.; Su, D. Infrastructure safety oriented traffic load monitoring using multi-sensor and single camera for short and medium span bridges. Remote Sens. 2019, 11, 2651.
264. Ge, L.; Dan, D.; Li, H. An accurate and robust monitoring method of full-bridge traffic load distribution based on YOLO-v3 machine vision. Struct. Control Health Monit. 2020, 27, e2636.
265. Appathurai, A.; Sundarasekar, R.; Raja, C.; Alex, E.J.; Palagan, C.A.; Nithya, A. An efficient optimal neural network-based moving vehicle detection in traffic video surveillance system. Circuits Syst. Signal Process. 2020, 39, 734–756.
Figure 1. CV-based surface defect detection process for bridges [41].
Figure 2. Framework for crack segmentation and feature quantification based on SR images [68].
Figure 3. Inspection of bridge pier cracks using a ring-type climbing robot [109].
Figure 4. The flow chart of CV-based bridge displacement identification [124]: (a) CV-based displacement measurement principle; (b) basic steps for determining displacement.
Figure 5. Scaling factor calculation diagram.
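As a brief worked supplement to the Figure 5 diagram (in generic notation, not the exact derivation of any cited work), the scale factor (SF) that converts tracked image motion into structural displacement is usually obtained in one of two standard ways: from a known physical dimension on the structure, or from the pinhole camera model when the standoff distance and camera intrinsics are known; Figure 6 below treats the case where the optical axis is not perpendicular to the structural plane.

```latex
% Scale factor (SF, mm/pixel) for CV-based displacement measurement:
% a minimal summary in assumed generic notation.
\begin{align}
  \mathrm{SF} &= \frac{L_{\mathrm{known}}}{l_{\mathrm{pixel}}}
  % a known dimension L (mm) on the structure spans l pixels in the image
  \\
  \mathrm{SF} &= \frac{D\,\mu}{f}
  % pinhole model: D = camera-to-target distance, f = focal length,
  % mu = physical pixel size; valid when the optical axis is
  % perpendicular to the structural plane
  \\
  x(t) &= \mathrm{SF}\cdot\Delta u(t)
  % structural displacement from the tracked image displacement (pixels)
\end{align}
```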
Figure 6. A pinhole camera model for estimating the scale factor when the optical axis is not perpendicular to the structural plane [29].
Figure 7. Framework for target-free displacement measurement [162].
Figure 8. A framework for cable vibration measurement based on CV techniques [191].
Figure 9. Bridge mode shape identification based on a laser-and-camera system [198].
Figure 10. Principles of video-based damage detection [207].
Figure 11. Framework for vehicle parameter identification [212].
Figure 12. Vehicle speed detection [236].
Figure 13. The vehicle weight identification system based on CV techniques [32].
Table 1. Open-source datasets of surface defects in bridges.

| Name | Dataset Description | Application Fields |
| --- | --- | --- |
| Aft_Original_Crack_DataSet_Second [71] | 2068 bridge crack images with 1024 × 1024 pixels, all containing cracks | Semantic segmentation and object detection |
| Concrete Crack Images for Classification [72] | 40,000 concrete surface crack images with 227 × 227 pixels, some without cracks | Semantic segmentation and image classification |
| SDNET2018 [73] | 56,092 images with 256 × 256 pixels of bridge cracks, defects, and holes, some without cracks | Image classification |
| Crack dataset [71] | 6069 images with 224 × 224 pixels of bridge cracks, all containing cracks | Semantic segmentation and image classification |
| Deep Crack [74] | 8592 images with 256 × 256 pixels of concrete surface cracks | Semantic segmentation |
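As a usage illustration only (this sketch is not from the dataset publishers), a classification dataset such as Concrete Crack Images for Classification [72] is commonly consumed by mapping class folders to integer labels; the data/{cracked,uncracked}/ folder layout assumed below is hypothetical.

```python
# Minimal sketch of loading a crack-classification dataset [72].
# Assumption: downloaded images are organized as data/cracked/ and
# data/uncracked/ (a hypothetical layout chosen for this example).
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((227, 227)),  # the dataset's native patch size
    transforms.ToTensor(),
])

# ImageFolder assigns one integer label per subdirectory name.
dataset = datasets.ImageFolder(root="data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape)     # torch.Size([32, 3, 227, 227])
print(dataset.classes)  # e.g., ['cracked', 'uncracked']
```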
Table 2. Summary of robot platforms with different functions.

| Type of Robots | Author | Locomotion | Sensors | Defect Types |
| --- | --- | --- | --- | --- |
| Mobile robots | La et al. [103] | Ground-based movement | Camera | Crack inspection |
| | Xie et al. [104] | Ground-based movement | Camera | Crack inspection |
| Wall-climbing robots | Sutter et al. [111] | Vertical climbing | Cameras | Concrete crack inspection |
| | Liu et al. [112] | Vertical climbing | Camera | Concrete crack inspection of bridge piers and towers |
| | Peel et al. [113] | Vertical wall climbing | Camera | Bearing inspection of bridges |
| | Nguyen and La [114] | Inclined wall climbing | Camera, Hall-effect sensors, IR, IMU, eddy current | Fatigue crack inspection of steel structures |
| Cable-climbing robots | Xu et al. [115] | Inclined cable climbing | Cameras, MFL sensors | Inspection of cable surface and inner defects |
| | Yun et al. [116] | Inclined cable climbing | Three cameras | Inspection of cable surface defects |
| | Cho et al. [117] | Vertical cable climbing | Camera | Surface defect inspection of vertical hangers |
| Aerial robots | Kang and Cha [118] | Aerial flying | Camera | Surface crack detection of concrete structures |
| | Seo et al. [119] | Aerial flying | Camera | Concrete spalling and steel corrosion inspection |
| | Ellenberg et al. [120] | Aerial flying | Infrared cameras | Subsurface delamination inspection |
| | Sanchez-Cuevas et al. [121] | Aerial flying and physical contact | Camera, supersonic sensors | Crack depth measurement |
| Multipurpose drones | Jiang et al. [122] | Aerial flying and wall climbing | Cameras | Real-time crack length and width measurement |
Table 3. Comparison of different target-tracking algorithms.

| Method | Structure | Measurement Distance (m) | Parameters | Maximum Error | RMSE (mm) | Field Condition |
| --- | --- | --- | --- | --- | --- | --- |
| *Template matching* | | | | | | |
| OCM [21] | Cable | | Frequency | 2.81% | | Good weather |
| Zero mean NCC [141] | Pedestrian Baker bridge | 55.3 | Frequency | 8% | | Overcast |
| Zero mean NCC [142] | Railway bridge | 6.9 | Displacement | | 0.35 | Sunny |
| OCM [143] | Manhattan bridge | 300 | Displacement | | | Good weather |
| *Feature matching* | | | | | | |
| SIFT [144] | Bridge | 9.7 | Frequency | 2.5% | | Partial shading |
| Hessian + SURF [145] | Suspension bridge | 71.2 | Displacement | | 0.03 | Low-speed winds |
| *Optical flow* | | | | | | |
| Virtual visual method [146] | Pedestrian bridge | | Displacement | | | Good weather |
| Motion magnification [147] | Arched bridge | 30 | Frequency | 6.79% | | Sunny |
| Harris corner detector + KLT [148] | Pedestrian bridge | 3–4 | Frequency | 1.51% | | Low-speed wind |
| Motion magnification [149] | Bridge | 80 | Frequency | 6.25% | | Light drizzle |
| Harris corner detection + KLT [150] | Steel truss bridge | 4.6 | Displacement | | 2.14 | Indoor |
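To make the comparison in Table 3 concrete, the sketch below shows the simplest tracker family in the table, zero-mean NCC template matching (OpenCV's TM_CCOEFF_NORMED score), applied to extract a displacement time history. The video file name, patch coordinates, and scale factor are hypothetical placeholders; this illustrates the generic technique, not the implementation of any cited study.

```python
# Minimal sketch: zero-mean NCC template matching for displacement tracking.
# Assumptions: OpenCV is installed; "bridge.mp4", the patch location, and
# SCALE_FACTOR are hypothetical values for illustration.
import cv2

SCALE_FACTOR = 0.5  # mm/pixel, calibrated as in Figure 5 (assumed value)

cap = cv2.VideoCapture("bridge.mp4")    # hypothetical recording
ok, first = cap.read()
assert ok, "could not read the first frame"
gray0 = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# Template: a high-contrast patch on the structure, picked in frame 0.
x0, y0, w, h = 400, 300, 40, 40         # assumed patch location and size
template = gray0[y0:y0 + h, x0:x0 + w]

displacement_mm = []                    # vertical displacement history
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    score = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)  # (x, y) of the best match
    displacement_mm.append((max_loc[1] - y0) * SCALE_FACTOR)
cap.release()
```

Picking the correlation peak at integer pixels limits the resolution to one pixel times the scale factor; the sub-millimetre RMSE values reported in Table 3 generally require an additional subpixel refinement of the peak location (for example, local interpolation of the correlation surface) on top of this basic scheme.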
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
