Article

Identification of Underwater Structural Bridge Damage and BIM-Based Bridge Damage Management

College of Transportation Engineering, Dalian Maritime University, Dalian 116026, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(3), 1348; https://doi.org/10.3390/app13031348
Submission received: 19 December 2022 / Revised: 9 January 2023 / Accepted: 10 January 2023 / Published: 19 January 2023
(This article belongs to the Special Issue BIM-Based Digital Constructions)

Abstract:
The number of bridges in operation continues to grow, and as bridges age in service their structural safety declines. The substructure is a key component of a bridge, yet there are few studies on the safety management and damage identification of underwater bridge substructures. Deep learning is a research focus in the field of target detection; this paper lightens YOLO-v4 to achieve accurate and intelligent identification of concrete cracks, and combines it with a point cloud algorithm to provide three-dimensional measurements of damaged areas. Finally, BIM is combined with the underwater structure identification method, and an integrated management system for underwater bridge structures is implemented based on Revit. Detailed bridge damage management comprises (1) 3D visualization of the detailed bridge model view, (2) establishment of a bridge damage database, (3) bridge damage management, and (4) management of the comprehensive underwater bridge inspection cycle.

1. Introduction

Transportation is an essential service industry and an important part of the modern economic system, and bridges, as the leading engineering works of transportation infrastructure, play a vital role in the transportation system. In recent years, bridge accidents caused by aging and improper maintenance management have been frequent, and bridge maintenance management has attracted the attention of many countries [1,2,3,4]. Statistics from the U.S. Federal Highway Administration show that about 600,000 bridges have been built in the United States, of which about 25.8% are defective [5]. To store the large amount of archival data on the nation's bridges, the U.S. began researching bridge management systems (BMS) early on [6,7,8]. Other countries, such as China, the UK, and Germany, are also gradually researching bridge management systems to achieve rapid inspection and quantitative assessment of structural damage of bridges in service [9,10,11,12,13]. Bridge management systems can improve the efficiency of bridge management. However, traditional management systems usually store bridge inspection results separately, so that inspection information is divorced from the bridge entities, and they lack accurate, practical assessment and maintenance recommendations.
Construction industry analyst Jerry Laiserin coined the term Building Information Modelling (BIM) in 2002 [14]. Since then, scholars worldwide have extensively researched BIM's information integration and engineering applications [15,16,17,18,19]. The core of BIM technology is the establishment of virtual 3D models of construction projects: digital technology is used to build a complete engineering information base that helps integrate building information over the whole life cycle. In the study of bridge health and maintenance, many scholars have combined BIM technology with bridge health monitoring and management based on the concept of whole-life bridge construction and proposed BIM-based bridge management systems. For example, Davila Delgado modelled a structural performance inspection system in a building information modelling environment, allowing sensor data to be visualised directly on the BIM model [20]; McGuire et al. built a BIM model to connect and analyse data related to bridge inspection, assessment, and management to help decision makers better manage bridge health information [21]; Zhou marked the damage information of a steel arch bridge in a 3D model drawn in Revit to achieve rapid automatic assessment of the overall technical condition of the bridge and significantly speed up inspection [22]. BIM technology, with its convenient interoperability, data integration, and visual representation, has greatly improved the efficiency of bridge inspection and structural assessment. However, bridge management systems are often set up to cover every structure of the bridge in a uniform way, without considering the complex environment that distinguishes underwater structures from above-water structures, such as sewage, sediment, special damage types, and others [23,24,25,26,27,28].
Underwater structures are an essential part of bridges, and their condition significantly affects the safety of their use. As a result of long-term service and external environmental factors, damage to underwater structures has gradually emerged, such as the uneven settlement of foundations due to water scour, cracking and spalling of concrete, and others [29,30,31]. The New York State Department of Transportation’s underwater inspection of local bridges in service revealed that more than half of the bridge collapses were due to large structural damage to the substructure [32]. Underwater structural damage poses a significant threat to the bridges’ overall structural safety and durability, and scientific monitoring and assessment of the structural health of bridges are urgently needed to ensure the safe operation of bridges. Therefore, it is necessary to develop a bridge underwater structural health management system for the particular characteristics of underwater structures.
With the rapid development of computer technology, bridge inspection techniques have become progressively more intelligent, with the emergence of non-destructive testing techniques such as image recognition and machine learning. Some researchers have automated the processing of bridge damage images through deep learning, where algorithms with convolutional neural networks (CNNs) at their core can extract features from large numbers of images to detect various damages [33,34,35,36]; YOLO-v4 is one such CNN. Many scholars have combined YOLO-v4 with underwater target recognition to explore its effectiveness in different underwater environments [37,38,39]. For example, Gasparovic et al. tested six CNN detectors at different underwater depths and showed that YOLO-v4 outperformed the other models in detecting underwater pipeline objects under poor light visibility [40]; Li et al. found that YOLO-v4 could identify underwater cracks quickly and accurately under different water quality conditions (clear water, turbid water, and deep water), showing that its powerful image enhancement techniques can adapt to underwater conditions [41]. Therefore, this study optimizes YOLO-v4 and proposes a more lightweight recognition method, Lite-YOLO-v4, to reduce hardware performance requirements and improve the accuracy of underwater structural damage recognition for bridges.
There are many different representations of 3D data, such as depth images, point clouds, and meshes. As a form of 3D data representation, point clouds retain the 3D shape and geometric features of the original object surface in 3D space without requiring discretization [42]. For example, Wang et al. analysed the application of 3D point cloud acquisition technology in the construction industry, including the creation of original BIM models for projects and the quality inspection of various buildings and infrastructures [43]; Vetrivel et al. used 3D point clouds to capture image features from multiple aerial platform views to generate models that improve the detection of post-disaster building damage [44]; Lague et al. used ground scanners combined with 3D point cloud acquisition techniques to comparatively measure surface changes, improving the accuracy of 3D models in multiple complex situations and reducing uncertainty in point cloud imaging [45]; Harwin et al. created dense point cloud data from close-range UAV image capture and showed that the accuracy of 3D data obtained from point clouds can reach the thousandths level [46]. Several studies have thus demonstrated that 3D point cloud acquisition techniques are suitable for high-precision image identification and 3D information reconstruction; therefore, this paper uses underwater cameras for multi-directional image acquisition and a method based on 3D point cloud data acquisition to obtain 3D information about underwater damage.
The safety of a bridge's underwater structure affects the overall durability of the bridge, and the complex underwater environment makes ordinary damage detection methods inapplicable to underwater working conditions; a health management system therefore needs to be designed around the characteristics of the underwater environment. In summary, this study establishes a BIM-based health management system for underwater bridge structures, incorporating deep learning and point cloud data collection techniques to improve the accuracy of underwater damage identification and the 3D measurement of damaged areas. By correlating inspection data with the model, the bridge management system can display the location, appearance images, and dimensions of damage to the bridge substructure, enhancing the link between the model and the inspection data and realising automatic identification and 3D visualisation of the underwater structural damage of the bridge.

2. Commissioning the Camera

2.1. Selection of Depth Camera

An ordinary camera outputs an object in three-dimensional space as a two-dimensional picture, which provides no reliable basis for determining the distance between the camera and the object in the image. Depth cameras differ from ordinary cameras in that they obtain the 3D coordinates of points on the object and then calculate the distance between each point and the camera, enabling the modelling of 3D objects.
Depth cameras can be divided into binocular cameras, structured light cameras, and TOF (Time-of-Flight) cameras, depending on their operating principles [47,48,49]. The working principles of the different cameras are compared in detail in Table 1.
RGB binocular cameras perform passive ranging: they rely on the visual difference between the two lenses and indirectly calculate depth information by matching image feature points using the triangulation principle. They can achieve millimetre-level accuracy at close measuring distances and have low power consumption, meeting the needs of outdoor use. Structured light cameras perform active ranging by projecting a known coded pattern onto the object and inferring depth from the deformation of that pattern. TOF cameras send continuous pulsed light waves to the target, use a sensor to receive the light reflected back from the object, and determine the distance by measuring the round-trip flight time of the light; they are generally used for longer-distance measurement, offer a higher frame rate, and consume more power while scanning comprehensively.
In order to obtain better 3D information, a binocular camera is selected in this paper. It has good environmental adaptability and strong anti-interference ability when acquiring depth information outdoors, and its use of the triangulation principle effectively improves the quality of the depth information.
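The triangulation relation underlying binocular depth recovery can be sketched as follows. For a rectified stereo pair, depth is inversely proportional to the disparity between matched feature points; the focal length, baseline, and disparity values below are illustrative assumptions, not the parameters of the camera used in this study.

```python
# Depth from disparity for a rectified stereo (binocular) pair:
# Z = f * B / d, where f is the focal length (px), B the baseline (m),
# and d the horizontal disparity between matched feature points (px).

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth (m) of a matched point; disparity must be positive."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / disparity_px

# Illustrative values: 513 px focal length, 62.7 mm baseline, 32 px disparity.
z = depth_from_disparity(513.0, 0.0627, 32.0)
print(round(z, 3))  # ~1.005 m
```

Because disparity appears in the denominator, depth resolution degrades with distance, which is why binocular rigs achieve their best (millimetre-level) accuracy at close range.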

2.2. Camera Underwater Calibration Experiments

When the camera acquires underwater damage images, light refracts through three media: water, the lens, and air. The refraction changes the effective focal length of the camera, introducing large errors into the camera parameters and affecting the accuracy of later measurements, so the camera must be calibrated underwater to correct its parameters.
The camera calibration method proposed by Zhang is simpler and less expensive than traditional methods [50]. Therefore, this paper adopts Zhang's calibration method for the underwater camera calibration experiments.
The calibration experiments were carried out in a 1.5 m cubic water tank. To ensure calibration accuracy, an aluminium high-precision diffuse reflection calibration plate was used, divided into 20 mm × 20 mm squares. Figure 1a shows the experimental water tank and calibration plate.
The experimental procedure is as follows:
(1)
A total of 15 images of the underwater calibration plate were acquired by simultaneously photographing the plate from different angles and distances with the left and right infrared lenses of the binocular camera in the water tank. The spatial position of the camera relative to the calibration plate at the time of acquisition is shown in Figure 2. The above steps were repeated using the colour lens alone, and a further 15 images of the plate were acquired.
(2)
The corner points are extracted from the acquired images and the parameters are solved. The corner point extraction is shown in Figure 3: the centre of each green circle is an extracted corner point, the yellow square is the origin of the corner point coordinate system, and X and Y indicate the coordinate axes.
The intrinsic matrix of the left camera is obtained as:

$$M_L = \begin{bmatrix} 513.6847 & 0 & 326.1383 \\ 0 & 513.9499 & 247.5797 \\ 0 & 0 & 1 \end{bmatrix}$$

with distortion coefficients:

$$(k_1, k_2, k_3, p_1, p_2) = (0.4002,\ 0.2833,\ 0.1084,\ 0.0102,\ 0.0029)$$

The intrinsic matrix of the right camera is:

$$M_R = \begin{bmatrix} 513.2454 & 0 & 324.5766 \\ 0 & 513.6073 & 247.4670 \\ 0 & 0 & 1 \end{bmatrix}$$

with distortion coefficients:

$$(k_1, k_2, k_3, p_1, p_2) = (0.3890,\ 0.3749,\ 0.4866,\ 0.0106,\ 7.5427\times10^{-4})$$

The rotation matrix between the two cameras is:

$$R = \begin{bmatrix} 1 & 4.2220\times10^{-5} & 0.0034 \\ 4.0376\times10^{-5} & 1 & 0.0011 \\ 0.0034 & 0.0011 & 1 \end{bmatrix}$$

and the translation matrix is:

$$T = \begin{bmatrix} 62.6907 & 0.0370 & 0.2397 \end{bmatrix}$$

The colour camera is calibrated separately by the same steps. Its intrinsic matrix is:

$$M_C = \begin{bmatrix} 1377.56 & 0 & 946.56 \\ 0 & 1377.42 & 532.41 \\ 0 & 0 & 1 \end{bmatrix}$$

with distortion coefficients:

$$(k_1, k_2, k_3, p_1, p_2) = (0.1089,\ 0.1374,\ 0.086,\ 7.5427\times10^{-4},\ 0.3241)$$
(3)
The re-projection error is the positional difference between the actual projection point and the theoretical projection point. To ensure the accuracy of later image measurements, the re-projection error should be limited to a certain range: images with large errors are eliminated, and a re-projection error of less than 0.5 pixels is taken as the ideal range. The re-projection error of the calibration result in this paper is 0.05, as shown in Figure 4.
(4)
The obtained underwater parameters of the camera are written into the camera and the image is corrected. The camera distortion around the calibration plate is shown in Figure 1b; after correction, the normal image is restored, as shown in Figure 1c.
(5)
The three-dimensional appearance of underwater damages is obtained by using calibrated cameras. At the same time, the align functions in the camera development tool are used to align the colour map with the depth map to obtain more accurate point cloud information.
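The re-projection error used in step (3) can be computed by projecting a 3D point through the intrinsic matrix and comparing the result against the observed pixel. The sketch below uses the left-camera intrinsics reported above but omits the distortion terms for brevity; the 3D point and observed pixel coordinates are made-up illustrative values, not data from the calibration set.

```python
import math

# Pinhole projection with the left-camera intrinsics reported above
# (distortion terms omitted for brevity).
FX, FY, CX, CY = 513.6847, 513.9499, 326.1383, 247.5797

def project(point_3d):
    """Project a 3D point (camera coordinates, metres) to pixel coordinates."""
    x, y, z = point_3d
    return (FX * x / z + CX, FY * y / z + CY)

def reprojection_error(point_3d, observed_px):
    """Euclidean distance (px) between projected and observed pixels."""
    u, v = project(point_3d)
    return math.hypot(u - observed_px[0], v - observed_px[1])

proj = project((0.10, -0.05, 1.0))
err = reprojection_error((0.10, -0.05, 1.0), (377.53, 221.92))
print(round(err, 3))  # ~0.044 px
```

Averaging this quantity over all detected corner points in all calibration images gives the overall re-projection error that is thresholded at 0.5 pixels above.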

3. YOLO Algorithm and Point Cloud Algorithm

To detect and identify cracks and other damages of bridge underwater structure, Lite-YOLO-v4 and three-dimensional point cloud damage identification algorithms are integrated into the bridge underwater structure health management system. Lite-YOLO-v4, a lightweight convolution neural network with the YOLO-v4 network as the main body, has been improved from YOLO-v4 to ensure the accuracy and improve the detection efficiency. Based on a 3D point cloud measurement algorithm for bridge underwater damages, the BIM system is combined with 3D point cloud technology to capture 3D geometric information of structural damages by 3D point cloud technology, the acquired 3D point cloud is studied and processed, and the detailed information of damages is calculated.

3.1. YOLO-v4 Network Improvement and Architecture [41]

The YOLO-v4 algorithm regards target detection as a regression problem and inputs the image containing the target into the model. The algorithm marks the position of the target in the image and outputs it according to the target type. The YOLO-v4 network needs to calculate a lot of parameters in the process of image feature extraction. In order to meet the real-time requirements of mobile devices, the system uses an improved algorithm based on the YOLO-v4 network, Lite-YOLO-v4. The specific improvement measures of the algorithm are as follows:
(1)
The backbone feature extraction network of YOLO-v4, CSPDarknet53, is replaced by the lightweight MobileNetv3, and the scale of the MobileNetv3 feature layers is modified so that their dimensions connect with the subsequent YOLO-v4 network. The extracted preliminary feature layers are then input into the enhanced feature extraction network for feature fusion.
(2)
The PANet network in the YOLO-v4 feature fusion stage uses a large number of ordinary 3 × 3 convolutions. These are replaced with 3 × 3 depthwise separable convolutions to reduce the amount of computation.
(3)
After PANet fusion, three feature output layers are generated for detecting small, medium, and large targets. Most underwater cracks are large targets, so only the large-target output layer can be trained effectively, leaving the medium and small output layers untrained. Therefore, the original three feature output layers are fused into a single output layer.
The improved YOLO-v4 network model, Lite-YOLO-v4, is structured as shown in Figure 5.
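The parameter saving from replacing ordinary convolutions with depthwise separable ones (improvement (2)) can be quantified with simple counting; the 256-channel layer width below is an illustrative assumption, not a figure from this paper.

```python
# Parameter counts for a k x k convolution layer (bias terms ignored):
# standard:            k*k * c_in * c_out
# depthwise separable: k*k * c_in (depthwise) + c_in * c_out (1x1 pointwise)

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

# An illustrative 3x3 layer with 256 input and 256 output channels:
std = standard_conv_params(3, 256, 256)    # 589,824
sep = separable_conv_params(3, 256, 256)   # 67,840
print(std, sep, round(std / sep, 1))       # roughly an 8.7x reduction
```

For 3 × 3 kernels the reduction factor approaches 9 as the channel count grows, which is why this substitution is a standard lightweighting step.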
In order to verify the damage identification performance of the improved network, the model is compared with the CenterNet, YOLO-v4, YOLO-v4-tiny, MobileNetV3-YOLO-v4, YOLO-v5l, YOLO-v5m, and YOLO-v5s algorithms to verify its recognition accuracy and image acquisition performance under small-crack conditions. The loss function curves of the eight models are shown in Figure 6.
It can be seen that all eight models train well, with no overfitting, so they can be used for comparative experiments. The models are evaluated against the indicators listed in Table 2.
The results show that the Lite-YOLO-v4 algorithm in this paper improves the detection speed and network performance while reducing the amount of calculation, and can identify cracks in complex underwater environments.

3.2. Calculation of Concrete Defects Based on 3D Point Cloud Technology

The emergence of three-dimensional point cloud technology provides technical support for underwater detection. The system adopts a bridge underwater disease measurement algorithm based on a 3D point cloud. First, the calibrated camera is used to obtain the 3D point cloud of the underwater disease; the point cloud is filtered and denoised to obtain the length, width, and depth of the disease, and the data are imported into the database so that the disease information can be viewed in the system. Furthermore, a high-order aberration correction polynomial is used to eliminate the aberration in the underwater images captured by the 3D point cloud camera, completing the underwater calibration of the camera and determining the correspondence between 3D points in the underwater space and image pixels. Finally, the Poisson surface reconstruction algorithm is used to reconstruct the disease, and Heron's formula and a 3D convex hull algorithm are used to obtain the area and volume of the pier disease.
The specific flowchart is as follows in Figure 7:
The depth camera can obtain the three-dimensional coordinate values of each point of the damage, calculate the distance between the damage and the camera, and then realise the modelling of the three-dimensional damage in order to grasp the real value of damage [47,48].
The three-dimensional image processing of underwater diseases differs from other three-dimensional image processing: the images have low resolution and are easily affected by water flow and swimming aquatic organisms, producing noise points in the collected point cloud data and reducing measurement accuracy. Statistical filtering denoising is therefore adopted: filtering parameters are set in the algorithm to remove the noise point cloud, eliminating the influence of adverse factors such as water flow and aquatic organisms and improving the effective image resolution. To clearly show the denoising effect, a region of the point cloud is shown in detail before and after denoising in Figure 8.
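Statistical filtering of the kind described can be sketched as follows: each point's mean distance to its k nearest neighbours is compared against a global mean-plus-standard-deviation threshold, and far-off points are discarded. The neighbourhood size and threshold ratio here are illustrative assumptions, not the parameters used by the system.

```python
import math
import statistics

def statistical_filter(points, k=3, std_ratio=1.0):
    """Remove points whose mean distance to their k nearest neighbours
    exceeds (global mean + std_ratio * global std) of that statistic."""
    mean_knn = []
    for p in points:
        d = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(d[:k]) / k)
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    thresh = mu + std_ratio * sigma
    return [p for p, m in zip(points, mean_knn) if m <= thresh]

# A small planar patch plus one stray point far off the surface,
# standing in for noise from water flow or aquatic organisms:
cloud = [(x * 0.01, y * 0.01, 0.0) for x in range(5) for y in range(5)]
cloud.append((0.02, 0.02, 0.5))
clean = statistical_filter(cloud, k=3, std_ratio=1.0)
print(len(cloud), len(clean))  # 26 -> 25: the stray point is removed
```

Production implementations typically use a spatial index (k-d tree) for the neighbour search rather than the brute-force loop above.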
The denoised point cloud image is divided into three parts, numbered a, b, and c in the order from left to right, as shown in the following Figure 9. This paper uses the concrete disease analysis numbered c as an example of the system calculation process.
A nearly flat section surrounds the disease, so the pit edge is judged by the gradient change of the data points in the Z (vertical) direction. A threshold for the Z-direction gradient change is preset: data points whose gradient change is below the threshold are plane points, and points exceeding the threshold are pit edge points. The three-dimensional point cloud of the disease area is obtained by removing the plane points. The three-dimensional projection of the disease is shown in Figure 10.
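The Z-gradient thresholding used to separate plane points from pit edge points can be sketched on a single scan line as follows; the grid spacing and gradient threshold are illustrative assumptions.

```python
# Classify grid-sampled points: a point is a pit edge point when the
# z change to its next neighbour exceeds a preset gradient threshold.

def split_plane_and_pit(profile, spacing=0.01, grad_threshold=0.2):
    """profile: list of z values sampled along one scan line (metres).
    Returns (plane_indices, edge_indices) based on an |dz/dx| threshold."""
    plane, edge = [], []
    for i in range(len(profile) - 1):
        grad = abs(profile[i + 1] - profile[i]) / spacing
        (edge if grad > grad_threshold else plane).append(i)
    return plane, edge

# Flat section, a 5 mm deep pit, then flat again:
z_line = [0.0, 0.0, 0.0, -0.005, -0.005, -0.005, 0.0, 0.0]
plane_idx, edge_idx = split_plane_and_pit(z_line)
print(edge_idx)  # [2, 5] -- the two walls of the pit
```

On a full point cloud the same test is applied per point against its neighbours in both horizontal directions, and the retained plane points are discarded to isolate the defect.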
Due to some plane fluctuations around the disease, irregular points are distributed around the disease point cloud, as shown in Figure 11. To accurately extract the point cloud of the disease area, further processing of the image is considered, and the three-dimensional point cloud image is projected onto the two-dimensional plane. The result is shown in Figure 11.
Defects are selected by framing them in the two-dimensional plane point cloud, the data points inside the disease are extracted, and the image of the complete defect area is reconstructed. The convex hull algorithm (convexHull) built into MATLAB is used to construct a convex polyhedron from the point cloud to calculate the defect volume.
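The volume computation behind such a convex hull approach can be sketched as follows: once the hull's triangular faces are known (in practice supplied by a hull routine such as MATLAB's convexHull), the hull is decomposed into tetrahedra against the centroid and their volumes summed. This is a minimal sketch under that assumption, not the system's implementation.

```python
def hull_volume(vertices, faces):
    """Volume of a convex polyhedron given its triangulated surface.
    Each face forms a tetrahedron with the centroid; for a convex body
    the centroid lies inside, so |det|/6 per tetrahedron sums to the
    total volume."""
    n = len(vertices)
    ox = sum(v[0] for v in vertices) / n
    oy = sum(v[1] for v in vertices) / n
    oz = sum(v[2] for v in vertices) / n
    total = 0.0
    for ia, ib, ic in faces:
        ax, ay, az = (c - o for c, o in zip(vertices[ia], (ox, oy, oz)))
        bx, by, bz = (c - o for c, o in zip(vertices[ib], (ox, oy, oz)))
        cx, cy, cz = (c - o for c, o in zip(vertices[ic], (ox, oy, oz)))
        det = (ax * (by * cz - bz * cy)
               - ay * (bx * cz - bz * cx)
               + az * (bx * cy - by * cx))
        total += abs(det) / 6.0
    return total

# Unit cube: 8 vertices, 12 triangular hull faces -> volume 1.
cube = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
cube_faces = [(0,1,2),(0,2,3),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
              (3,2,6),(3,6,7),(0,3,7),(0,7,4),(1,2,6),(1,6,5)]
print(round(hull_volume(cube, cube_faces), 9))  # 1.0
```

The absolute value per tetrahedron makes the result independent of face orientation, which is convenient when the hull routine does not guarantee consistent winding.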
To obtain the area of disease, the Poisson surface reconstruction algorithm is used to map the point cloud data to the triangular mesh data. Each spatial point is connected to a triangular mesh, and the area of each triangle is calculated and summed to obtain the area of the disease, thereby performing surface reconstruction on the disease data points.
Heron's formula is as follows, where a, b, and c are the lengths of the triangle's edges and p is its semi-perimeter:
$$S = \sqrt{p(p-a)(p-b)(p-c)}, \qquad p = \frac{a+b+c}{2}$$
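Summing Heron's formula over a triangular mesh, as described above, amounts to computing each triangle's area from its edge lengths and accumulating the total. A minimal sketch:

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its edge lengths (Heron's formula)."""
    p = (a + b + c) / 2
    return math.sqrt(p * (p - a) * (p - b) * (p - c))

def mesh_area(vertices, triangles):
    """Sum Heron areas over a triangular mesh (vertex-index triples)."""
    total = 0.0
    for i, j, k in triangles:
        a = math.dist(vertices[i], vertices[j])
        b = math.dist(vertices[j], vertices[k])
        c = math.dist(vertices[k], vertices[i])
        total += heron_area(a, b, c)
    return total

# A unit square split into two right triangles: total area 1.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(mesh_area(verts, tris))  # 1.0 (up to rounding)
```

In the system, the triangles come from the Poisson surface reconstruction of the disease point cloud rather than from a hand-built list as here.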
The Z coordinate axis is perpendicular to the disease surface, and the X coordinate axis is parallel to it; within the plane parallel to the disease surface, the Y coordinate axis is obtained by rotating the X axis counterclockwise by 90 degrees. The difference between the extreme values of the disease data points in the X and Y directions is defined as the length and width of the disease. The coordinate values of the 10 disease data points with the largest absolute Z coordinates were selected, and their mean value was taken as the disease depth. The calculation results are shown in Table 3.
All defects are calculated according to the above procedure, and the calculated values are compared with the actual values. The following formula calculates the relative error:
$$\mathrm{Relative\ error} = \frac{\mathrm{Measured\ value} - \mathrm{True\ value}}{\mathrm{True\ value}} \times 100\%$$
It can be seen from Table 3 that the average error for disease a is 5.36%, a relatively large overall error, while the average error for diseases b and c is 3.17%. It is worth noting that defect a is small and its groove plane is tortuous, so the three-dimensional points inside the groove are sparse, whereas the surfaces of diseases b and c are flat and their internal three-dimensional points are dense. This is why the former error is relatively large and the latter is small: measurement accuracy is affected by the scanning angle and the depth of the disease. Shooting perpendicular to the defect yields more data points and makes the system calculation more accurate.
The comparison results show that the accuracy of the system's algorithm is not determined by the shape of the disease itself: the flatter the observed disease point cloud plane, the denser the point cloud data and the higher the calculation accuracy. According to the error analysis, the point cloud algorithm achieves high accuracy for the automatic measurement of the three-dimensional information of the disease.

4. Bridge Underwater Structural Health Management System Based on BIM Technology

Structural health monitoring (SHM) of bridges has made great progress, but external inspection of underwater bridge structures is still lacking. There are few applications for underwater structure monitoring, and some existing stand-alone detection methods cannot be well combined with SHM systems. The characteristics of BIM can make up for this deficiency: BIM is an open platform with information integration and engineering characteristics that can provide a visual, extensible digital environment for health monitoring and effectively improve the visualization and sharing of monitoring information. The two damage identification algorithms above are integrated into the bridge underwater health inspection system, and the high extensibility of the BIM platform is used to visualize underwater structural defect detection. Users can view disease information on the bridge substructure in the system, including the depth, length, width, and volume of each disease, as well as 3D images of the disease. This greatly helps researchers understand the health status of bridges.

4.1. Systematic Introduction

Based on the secondary development of BIM technology, the underwater disease health management system is established. In order to meet the needs of underwater monitoring, the underwater disease detection module, the underwater structure model visualization module, and the underwater structure damage data management module are added to the traditional health monitoring system. While realizing the visualization of bridge structure information by establishing a bridge information model, the visualization of underwater disease information is realised, which is convenient for researchers to understand the health damage of the bridge and take timely disease repair measures to facilitate the maintenance and repair of the bridge.
The system framework mainly includes three parts, as shown in Figure 12: bridge information model, underwater structure detection, and damage data management.

4.1.1. Bridge Information Model

Based on Revit, a 3D model containing the basic information of the bridge was established. The underwater structural part is an important part of the bridge, so the system models it alongside the superstructure to visualize both. The established bridge structure and pier foundation models are imported into the system; family libraries for each component of the bridge model are then established in Revit, and each component is named and numbered. The disease system is connected to the model through the Revit API. When the project in the folder is opened, the system clearly displays the structural information of the bridge, and subsequent disease information is input into the bridge model through the interface to visualize the detection data and disease damage.
Based on the established three-dimensional bridge infrastructure model, the selected bridge infrastructure model can be monitored and analysed after the bridge disease (cracks, concrete shedding, etc.) detection information is imported into the system. To implement this function, the system developed a visualisation plug-in using the Revit API in the Visual Studio development environment. The later bridge maintenance is performed by querying the detection information, and the detection information is managed and processed. The bridge damage module allows users to view bridge damage detection data. Through the bridge disease analysis, the evaluation of the structural state is realised. Through the two data nodes of bridge name and detection time, the creation of a bridge disease detection project and bridge disease management is realised.

4.1.2. Underwater Structure Detection Module

The underwater part of the bridge structure has long faced water erosion and microbial attack, and traditional cameras cannot accurately scan disease information. The underwater vehicle used in the system is equipped with a binocular camera and offers good environmental adaptability and strong anti-interference ability; the binocular camera is sealed by a self-designed sealing structure so that it can scan the disease.
As described in Section 3, the improved point cloud algorithm and the Lite-YOLO-v4 network can process disease information accurately, so both are used in the underwater detection module to obtain accurate disease information. Obtaining accurate disease information while visualizing the disease helps managers make judgments. For crack diseases, the crack length, shape, and width can be clearly detected and the crack location determined; for structural defects, the depth, length, width, volume, and other three-dimensional information of the disease are obtained, its three-dimensional image is drawn, and its position is marked.
The system uses the underwater structure detection plug-in, based on Revit software and C# programming language, to extract the disease information obtained by the camera, and then processes the extracted diseases through two algorithms, and integrates various damages into the established bridge Revit model. The integrated application of Revit and underwater structural damage monitoring can check the information and display the inspection results in the BIM model while obtaining the disease information.

4.1.3. Structural Damage Database

To facilitate damage management, a bridge damage database is established through three steps: data demand analysis, conceptual structure design, and logical structure design. Managers can organise and optimise the disease information in the database, delete accidental errors, and update the database with the latest disease information. Once a disease is confirmed, repair measures can be taken. By searching the database for disease information and bridge disease situations, potential risks can be judged in advance and early warnings issued to eliminate safety hazards.
The bridge structure damage database is implemented in SQL Server 2012, and communication between the detection data and the Revit API is implemented in C#. The overall architecture is layered: the outermost layer holds the disease inspection, including the bridge name, disease detection time, and model file; the next layer holds the structural detection results, namely disease photos, disease locations, and concrete damage. After the system identifies bridge damage, the structural damage photos can be downloaded and uploaded to the bridge damage module to form the structural damage database.
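The production database is SQL Server 2012 accessed from C#; the following sketch uses Python's built-in sqlite3 module only to illustrate the two-layer schema described above (an inspection layer referencing a structure detection layer). All table and column names are illustrative assumptions.

```python
import sqlite3

# In-memory SQLite stand-in for the SQL Server damage database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inspection (           -- outermost layer
    id             INTEGER PRIMARY KEY,
    bridge_name    TEXT NOT NULL,
    detection_time TEXT NOT NULL,
    model_file     TEXT
);
CREATE TABLE structure_detection (  -- next layer
    id                 INTEGER PRIMARY KEY,
    inspection_id      INTEGER REFERENCES inspection(id),
    photo_path         TEXT,
    damage_location    TEXT,
    damage_description TEXT
);
""")
conn.execute(
    "INSERT INTO inspection VALUES (1, 'Bridge A', '2022-12-01', 'bridgeA.rvt')")
conn.execute(
    "INSERT INTO structure_detection VALUES "
    "(1, 1, 'crack_01.jpg', 'Pier 2, -1.5 m', 'concrete damage')")

# Join the two layers to list every detection of a given inspection.
row = conn.execute("""
    SELECT i.bridge_name, s.damage_location
    FROM inspection i
    JOIN structure_detection s ON s.inspection_id = i.id
""").fetchone()
```

The foreign key from detections to inspections mirrors the outer-to-inner layering of the described architecture.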

4.2. Underwater Disease Health Management System

The Lite-YOLO-v4 algorithm adopted by the system can effectively identify cracks in various complex underwater environments while reducing the amount of calculation, and it adds the crack identification results to the bridge damage database to complete the whole process of identifying and managing underwater bridge cracks. Taking a bridge as an example, the workflow of the underwater disease health management system was verified. The specific process is divided into the following steps:
First, establish the bridge and foundation structure model and refine the bridge components.
The BIM model of the bridge is established in Revit to visualise the whole structure. In bridge detection, each bridge's basic structure and data differ, so the BIM model must be built from each bridge's actual information. In the BIM model, a family library of the bridge's basic components is established, and the structure and components are named and numbered according to the naming rules to facilitate the subsequent attachment of diseases and data retrieval.
Taking a bridge as an example, the bridge structure model established in the system is shown in Figure 13. The model visualisation accurately represents the bridge structure information. Compared with a traditional CAD plan, the three-dimensional model lets the bridge manager obtain detailed structural information more easily, which is convenient for later damage identification and disease repair.
A good underwater structure model is particularly important for underwater disease management, since marking diseases depends heavily on it. To clearly locate underwater diseases on the bridge, the underwater structure must be modelled in detail. The pier foundation model established for the example bridge is shown in Figure 14.
Next, an underwater robot is used to identify structural diseases. The robot scans the underwater structure with its binocular camera to obtain the initial disease situation, and the Lite-YOLO-v4 algorithm and the 3D point cloud identification method then process and reconstruct the captured disease pictures to clarify the disease information. The two algorithms adapt to a variety of complex underwater environments, accurately detect damage to the underwater bridge structure, and save the results in the system. For crack diseases, the crack length, width, and shape are obtained to support later treatment and repair. For point cloud images of defects such as potholes, the system accurately obtains detailed information such as depth and area and generates intuitive records for subsequent maintenance. The system sorts the diseases by type, size, and location to facilitate the later removal of the few unreasonable detections, as well as disease viewing and modification.
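As a simplified illustration of how depth, area, and volume might be derived for a pothole-type defect, the sketch below integrates a gridded depth map. The paper's actual method uses Poisson surface reconstruction and a convex hull algorithm, so this grid integration is an assumed stand-in for illustration only.

```python
from typing import List, Tuple


def defect_metrics(depth_map: List[List[float]],
                   cell_area_cm2: float) -> Tuple[float, float, float]:
    """Approximate pothole metrics from a gridded depth map, where each
    cell holds the depth (cm) below the reconstructed reference surface
    and zero means intact concrete.

    Returns (max depth in cm, projected area in cm^2, volume in cm^3).
    """
    cells = [d for row in depth_map for d in row if d > 0]
    depth = max(cells)                  # deepest point of the defect
    area = len(cells) * cell_area_cm2   # projected defect area
    volume = sum(cells) * cell_area_cm2 # depth integrated over cells
    return depth, area, volume
```

Finer grid cells trade computation for accuracy; the hull-based method in the paper avoids this discretisation entirely.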
The processed disease information is then sorted and divided, and each disease is marked in the bridge model at its location. Each disease record contains four parts: the disease location, the disease dimensions (length, width, depth), the detection conclusion, and the underwater detection and identification photos. The inspection photos can be enlarged to show disease details and help review the test results. The added disease information is shown in Figure 15.
Finally, a bridge inspection management system is established so that each bridge inspection can be checked. Each inspection records the inspection time, method, and content, the inspection results, the assessment grade of the underwater structure, and the maintenance scheme, as shown in Figure 16 and Figure 17.

5. Results and Discussion

In this paper, an underwater bridge health monitoring system combining the Lite-YOLO-v4 algorithm and a 3D point cloud damage identification algorithm is applied to bridge detection. The second part introduces and compares the types of depth cameras; a binocular structured-light camera mounted on an underwater robot is selected to capture images of bridge diseases. The principle and calibration of underwater camera imaging are described, and a high-order distortion correction formula is introduced to address the underwater imaging distortion problem. The camera parameters are solved and written into the camera, establishing the correspondence between three-dimensional points in the underwater space and image pixels. Distorted images can be corrected with the calibration parameters to output normal images, and the calibrated camera can acquire disease appearance data underwater.
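To illustrate how high-order radial distortion terms are applied and inverted, the sketch below uses the generic Brown radial model with coefficients k1, k2, k3 on normalised image coordinates; the paper's underwater correction formula may contain additional or different terms, so this is an assumption for illustration only.

```python
def radial_distort(x: float, y: float,
                   k1: float, k2: float, k3: float) -> tuple:
    """Apply the high-order radial distortion model to an ideal
    normalised point: x_d = x * (1 + k1*r^2 + k2*r^4 + k3*r^6)."""
    r2 = x * x + y * y
    f = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * f, y * f


def undistort(xd: float, yd: float,
              k1: float, k2: float, k3: float, iters: int = 20) -> tuple:
    """Invert the model by fixed-point iteration: start from the
    distorted point and repeatedly divide by the distortion factor."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        x, y = xd / f, yd / f
    return x, y
```

For moderate coefficients the iteration converges quickly, which is why the same scheme is commonly used inside calibration libraries.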
The third part introduces the Lite-YOLO-v4 algorithm and the 3D point cloud damage identification algorithm. In the YOLO-v4 algorithm, the classification and output layers of MobileNetv3 are removed, and it replaces CSPDarknet53 as the main feature extraction network. In addition, many ordinary convolutions in the PANet structure are replaced with depthwise separable convolutions, and multi-feature fusion is performed on the prior boxes. The improved network's parameters and calculation amount are significantly reduced, and the model size shrinks to about one fifth of the original. For the acquired 3D point cloud of a disease, image noise is removed by a statistical filtering algorithm with suitable filtering parameters; the Poisson surface reconstruction algorithm then reconstructs the defect surface, the depth and width of the defect are obtained from the three-dimensional data points, and the defect volume is calculated with a convex hull algorithm. Comparison of the crack information and the calculated three-dimensional disease information shows that the two algorithms compute underwater disease geometry with high accuracy and have practical application potential.
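The parameter saving from depthwise separable convolutions can be checked with a short calculation: a k x k standard convolution needs k^2 * C_in * C_out weights, while a depthwise convolution followed by a 1 x 1 pointwise convolution needs k^2 * C_in + C_in * C_out. The sketch below (bias terms omitted) is illustrative and not tied to the exact Lite-YOLO-v4 layer sizes.

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights of an ordinary k x k convolution (bias omitted)."""
    return k * k * c_in * c_out


def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Weights of a k x k depthwise convolution (one filter per input
    channel) followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out
```

For a typical 3 x 3 layer with 256 input and output channels, the separable form uses roughly one eighth of the weights, which is the mechanism behind the reported shrinkage of the PANet layers.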
The fourth part establishes a bridge underwater structure health management system based on BIM technology; integrating BIM with underwater disease detection greatly improves the workflow of underwater bridge structure detection. The system is built on Revit's secondary development tools in the C# language and uses the Revit API to achieve 3D visualisation of structures and defects, with SQL Server as the storage database for the acquired disease information, meeting the health detection system's damage data management needs. Revit is used to establish the bridge model and its related information, the underwater robot scans for diseases, and the Lite-YOLO-v4 and 3D point cloud damage identification algorithms then produce clear images of the underwater diseases. The detailed disease information is added to the system, which verifies that the system has good application prospects in bridge detection.
In addition, through the Revit API, the bridge underwater health management system can be given more diversified functions to further improve and optimise the system for bridge detection and maintenance.

Author Contributions

Conceptualization, X.L. and T.Z.; methodology, Q.M.; software, H.S.; validation, R.S. and M.W.; formal analysis, R.S. and H.S.; data curation, M.W. and R.S.; writing—original draft preparation, R.S., Q.M. and M.W.; writing—review and editing, Q.M. and M.W.; visualisation, H.S.; supervision, X.L. and T.Z.; project administration, X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to acknowledge Chen Zhang and Taiyi Song for their contributions to this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Calibration plate design and results in experimental use: (a) calibration plate; (b) before correction, the edges of the calibration plate bow inward in a curved shape; (c) after correction, the edges of the calibration plate are straight.
Figure 2. Spatial distribution of the calibration plate.
Figure 3. Corner point extraction of the calibration board.
Figure 4. Re-projection error.
Figure 5. Structure of the Lite-YOLO-v4 crack detection algorithm.
Figure 6. Model loss function plots: (a) training set loss comparison curves; (b) validation set loss comparison curves.
Figure 7. Three-dimensional point cloud processing flow chart.
Figure 8. Comparison of details before and after denoising: (a) point cloud before denoising; (b) point cloud after denoising.
Figure 9. Simulated concrete disease point clouds: (a) No. 1 simulated concrete disease; (b) No. 2 simulated concrete disease; (c) No. 3 simulated concrete disease.
Figure 10. Removal of flat points to obtain the 3D point cloud of the diseased area.
Figure 11. 2D planar point cloud and complete defect area image.
Figure 12. System framework.
Figure 13. Bridge overall model.
Figure 14. Bridge foundation detection model database establishment.
Figure 15. Added disease information: (a) new crack damage added; (b) new defect damage added.
Figure 16. Inspection record table.
Figure 17. Adding a bridge underwater structure inspection record.
Table 1. Performance Comparison of Three Types of Cameras.

| Camera Category | RGB Binocular | Structured Light | TOF |
| --- | --- | --- | --- |
| Operating principle | Triangulation with feature point matching | Projects laser patterns onto objects to enhance feature matching | Measures the emission and reflection times of light |
| Ranging method | Passive | Active | Active |
| Measurement range | Near range, limited by baseline, generally within 10 m | Suited to near range, 0.1–10 m | Longer distances, 1–100 m |
| Power consumption | Relatively low | Moderate | High |
| Frame rate | High to low | Generally 30 FPS | Up to hundreds of FPS |
Table 2. Values of the Eight Model Indicators Under the Crack Dataset.

| Model | Recall | Precision | mAP | FPS | Model Size/MB | Training Time/Epoch |
| --- | --- | --- | --- | --- | --- | --- |
| YOLO-v4 | 53.61% | 95.62% | 84.20% | 9 | 244 | 15 min |
| CenterNet | 48.53% | 94.31% | 77.10% | 15 | 125 | 8 min |
| YOLO-v4-tiny | 40.03% | 87.64% | 66.86% | 27 | 22.5 | 1.5 min |
| Mobilenetv3-YOLO-v4 | 41.03% | 91.20% | 75.13% | 14 | 152 | 9 min |
| Lite-YOLO-v4 | 47.98% | 93.97% | 77.07% | 25 | 44.3 | 2 min |
| YOLO-V5l | 53.69% | 92.96% | 82.49% | 15 | 178 | 7 min |
| YOLO-V5m | 52.25% | 89.04% | 80.41% | 18 | 81.5 | 4 min |
Table 3. Calculation and Error of Damage Geometry Information.

| Damage Number | | a | b | c |
| --- | --- | --- | --- | --- |
| Volume/cm³ | true value | 8.08 | 362.69 | 420.28 |
| | calculated value | 8.52 | 372.49 | 439.61 |
| | relative error e | 5.5% | 2.7% | 4.6% |
| Area/cm² | true value | 19.78 | 202.03 | 214.11 |
| | calculated value | 20.71 | 207.28 | 227.60 |
| | relative error e | 4.7% | 2.6% | 6.3% |
| Depth/cm | true value | 1.05 | 4.85 | 6.14 |
| | calculated value | 1.11 | 5.01 | 6.29 |
| | relative error e | 6.2% | 3.3% | 2.5% |
| X-axis width/cm | true value | 4.54 | 13.30 | 13.13 |
| | calculated value | 4.78 | 13.41 | 13.55 |
| | relative error e | 5.4% | 0.8% | 3.2% |
| Y-axis width/cm | true value | 4.45 | 12.95 | 12.99 |
| | calculated value | 4.67 | 13.38 | 13.31 |
| | relative error e | 5.0% | 3.3% | 2.4% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
