Article

Three-Dimensional Reconstruction of Railway Bridges Based on Unmanned Aerial Vehicle–Terrestrial Laser Scanner Point Cloud Fusion

School of Civil Engineering, Central South University, Changsha 410075, China
*
Author to whom correspondence should be addressed.
Buildings 2023, 13(11), 2841; https://doi.org/10.3390/buildings13112841
Submission received: 9 October 2023 / Revised: 4 November 2023 / Accepted: 9 November 2023 / Published: 13 November 2023

Abstract

Single-source data collection for bridges has two complementary weaknesses: unmanned aerial vehicle (UAV) oblique photography alone captures incomplete imagery of near-ground structures, such as bridge piers, and of local features, such as suspension cables, while a single terrestrial laser scanner (TLS) struggles to acquire point cloud data for the top structures of bridges and produces point clouds without textural information. To address these limitations, this study establishes a high-precision, complete, and realistic bridge model by integrating UAV image data with TLS point cloud data. Using a large-scale dual-track railway bridge as a case study, aerial surveys were conducted with a DJI Phantom 4 RTK: 564 images circling the bridge arches, 508 images for orthorectification, and 491 close-range side views. All images, POS data, and ground control point information were then imported into Context Capture 2023 for aerial triangulation and multi-view dense image matching to generate dense point clouds of the bridge. In parallel, terrestrial laser scanning was performed from six stations placed on and beneath the bridge, and the point cloud from each station was registered in Trimble Business Center 5.5.2 using common feature points. Noise points were removed with statistical filtering. The UAV image point cloud and the TLS point cloud were fused using the iterative closest point (ICP) algorithm, after which a TIN model and texture mapping were produced in Context Capture 2023. The effectiveness of the integrated modeling was verified by comparing its geometric accuracy and completeness against a model built from UAV images alone. The integrated model was used to generate cross-sectional profiles of the dual-track bridge with detailed boundary dimensions. Structural inspection revealed honeycombed surfaces and seepage on the bridge piers, as well as rusted paint and cracks on the arch ribs. The geometric accuracy of the integrated model is 1.2 cm, 0.8 cm, and 0.9 cm in the X, Y, and Z directions, respectively, and the overall 3D model accuracy is 1.70 cm. The method provides a technical reference for the three-dimensional point cloud reconstruction of bridges. Through 3D reconstruction, railway operators can better monitor and assess the condition of bridge structures, identify potential defects and damage promptly, and take the maintenance and repair measures needed to ensure structural safety.

1. Introduction

Creating a detailed realistic model provides comprehensive information about a bridge structure, aiding in monitoring structural health and safety. It helps engineers and maintenance personnel better understand the condition of the bridge, identify potential issues early, and take the maintenance and repair measures needed to ensure the bridge's safety and reliability. A fine-grained realistic model of a bridge also offers vital information and resources for fields including engineering, cultural heritage preservation, education, and research [1,2,3].
Zhou et al. [4] used UAV oblique photography to reconstruct a realistic three-dimensional model of a bridge and extract its alignment; the difference between the UAV-measured alignment and that obtained by traditional leveling was 3.4 cm. Yu et al. [5] applied UAV oblique photography with the structure-from-motion multi-view stereo (SfM-MVS) method to reconstruct realistic three-dimensional models, achieving a precision better than 3.0 cm, suitable for the automated detection of road slopes in urban and highway environments. Chen et al. [6] utilized close-range UAV photography to capture image data and create a realistic three-dimensional model of slopes with an accuracy of 2.0 cm. However, UAV oblique photography alone has limitations in acquiring data close to the ground: it often yields poor texture quality on the undersides of structures, and its modeling accuracy leaves room for improvement.
In recent years, TLSs have emerged as high-precision, non-contact measurement and mapping tools that use laser beams to acquire three-dimensional point cloud data of the ground and its surroundings. These scanners are applied in many fields, including geographic information systems (GIS), architectural measurement, environmental monitoring, and forestry. Guo et al. [7] proposed a pseudo-single-point deformation monitoring method that uses point cloud geographic attributes, intensity, color information, and geometric features to establish a TLS-based monitoring system. Zhou et al. [8] used mobile laser scanning to acquire bridge point cloud data, with errors within 7% of traditional measurement methods. However, a single terrestrial laser scanner cannot capture point cloud data from the top of target objects, which leaves voids in the 3D reconstruction, and the reconstructed models lack texture features.
Terrestrial laser scanning and UAV photography therefore complement each other, providing more comprehensive and accurate geographical data: terrestrial laser scanning offers high-precision information on ground elevation and structures, while UAV photography captures large areas in high-resolution images. Combining them yields a more complete geographic information dataset. Tong et al. [9] proposed an integrated framework for UAV photography and terrestrial laser scanning and applied it in open-pit mining areas, producing digital surface models (DSMs) and digital orthophoto maps (DOMs) with sub-meter accuracy. Zhang et al. [10] used the CMVS-PMVS method for dense point cloud reconstruction of arch bridges, improving completeness by approximately 31.9% after merging the results with 3D laser scanning models. Zhang et al. [11], taking the Queen's Palace ancient site as a case study, fused terrestrial laser scanning point clouds with UAV image-based visual point clouds, aiding the site's digital archiving and structural stability monitoring. Ren [12] introduced a graph-cut algorithm and a point-cloud-filtering algorithm to guide the fusion of terrestrial laser scanning point clouds with UAV aerial point clouds, achieving a 5.42% accuracy improvement and a 2.94% gain in model completeness.
Double-track railway superstructures are a critical component of transportation infrastructure. Because of their structural complexity, covering the entire scene with a single data acquisition method is challenging, which compromises the integrity of the resulting realistic 3D model. The application of point cloud data fusion technology is therefore of significant practical value. This study explored the fusion of point cloud data collected by UAV photogrammetry and TLS to create a 3D reality model of a bridge, which was then used to inspect the bridge's condition. The effectiveness of the method was validated through a comparative analysis against the reconstruction accuracy and texture integrity of a UAV-only model.

2. Reconstruction of a Three-Dimensional Model of a Bridge

2.1. UAV Oblique Photography Technology

Oblique photogrammetry captures images from both vertical and oblique perspectives while also recording accurate geographical location information. The workflow is divided into aerial photography, control point acquisition, aerial triangulation, model construction, and model inspection and refinement [13,14]. The oblique photography data were modeled in Context Capture; the processing workflow is summarized in Figure 1.

2.1.1. Multi-View Image Acquisition

Ground control points (GCPs) are used to improve the accuracy of aerial triangulation in photogrammetry. Some GCPs are reserved as checkpoints, and the differences between model-derived coordinates and the control point coordinates established by RTK measurement in the CGCS2000 coordinate system serve as the accuracy assessment indicator. Control points should be selected at intersections of small linear features with good angles of intersection or at the corners of features, and the targets must be clear, distinct, and suitable for GPS observation. When flying the UAV, the flying altitude, side overlap, and forward overlap must be set reasonably to ensure flight safety while maintaining image resolution and complete coverage of the bridge. The parameters are explained as follows:
  • Heading overlap and lateral overlap
Heading (forward) overlap is the degree of overlap between two adjacent photos along a flight line, computed from the overlap of the ground projections of successive images. Lateral (side) overlap is the degree of overlap between photos on adjacent flight lines, computed from the overlap in ground projection width between the two lines. A schematic diagram is shown in Figure 2.
  • Image resolution
The image resolution of UAV photographs directly affects the resolution of the orthophoto or 3D model. The primary factors determining ground resolution are flight altitude, camera focal length, and pixel size. The relationship between flight altitude and resolution can be expressed as:
$$H = \frac{f \times \mathrm{GSD}}{a}$$
where f is the focal length of the lens, a is the pixel size, H is the flight altitude, and GSD is the ground sampling distance.
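As a quick check of this relation, the sketch below back-computes the flying height from the survey parameters reported in Section 3.2.1 (f = 8.8 mm, GSD = 3 cm). The pixel size of roughly 2.4 μm is an assumed value inferred from the reported 110 m flying height, not a figure stated in the paper.

```python
# Worked example of the flight-altitude/GSD relation H = f * GSD / a.
# f and GSD come from Section 3.2.1; the pixel size a is an assumption
# back-computed from the reported 110 m flying height.
f = 8.8e-3     # focal length [m]
gsd = 0.03     # ground sampling distance [m]
a = 2.4e-6     # pixel size [m] (assumed)

H = f * gsd / a
print(f"Required flying height: {H:.0f} m")  # -> 110 m
```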

2.1.2. Feature Point Extraction and Matching

The scale-invariant feature transform (SIFT) algorithm is used to extract and match feature points in the UAV images. A Gaussian function is applied to convolve and downsample the multi-view images, constructing a Gaussian image pyramid [15,16]. The SIFT algorithm detects extrema by building the corresponding image scale space, in which the Gaussian function serves as the linear kernel for scale transformation; Gaussian blurring implements the scale space and introduces a key parameter σ. By continuously varying σ, a sequence of images across scales is obtained. The image scale space L(x, y, σ) is generated from the original image I(x, y) using the Gaussian function G(x, y, σ):
$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$$

$$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$$
where the asterisk (∗) denotes convolution and σ is the scale factor that determines the level of image smoothing: a larger σ means a greater degree of smoothing and a blurrier image.
The Gaussian pyramid divides the image into several octaves, each of which contains multiple layers. The difference-of-Gaussian (DoG) scale space is formed by differencing adjacent images within the same octave. To judge extrema in the scale space, each pixel is compared with the points in its neighborhood across adjacent scales, which identifies the positions of key points. The DoG pyramid is obtained by subtracting adjacent upper and lower layers of the Gaussian pyramid:
$$D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma)$$
By extracting the scale space extrema as feature points and assigning them orientations, the topological relationship in image space is established using the position and orientation system (POS) data of the UAV images. The process is illustrated in Figure 3.
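For illustration, the following sketch runs SIFT keypoint extraction and matching on two overlapping UAV images with OpenCV. The file names are placeholders, and this is a generic stand-in for the matching pipeline described above, not the actual implementation inside Context Capture.

```python
# Hedged sketch: SIFT feature extraction + descriptor matching between two
# overlapping UAV images (file names are placeholders).
import cv2

img1 = cv2.imread("uav_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("uav_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                      # DoG pyramid + keypoint detection
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```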

2.1.3. Aerial Triangulation

Collinearity equations are constructed from the relationship between the photogrammetric center, the ground control points, and the matched corresponding image points. In the bundle block adjustment method, the adjustment unit is the bundle of rays formed by a single image, with the image point coordinates as the original observations. The collinearity condition links each image point, the corresponding object point, and the camera station at the moment of exposure. By adjusting the rotation and translation of each ray in space against the image control point coordinates, the exterior orientation parameters and ground point coordinates are obtained [17,18]. The core model of the bundle block adjustment is the collinearity equation, shown below:
$$\begin{cases} x = -f \, \dfrac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)} \\[1.5ex] y = -f \, \dfrac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)} \end{cases}$$
where $(x, y)$ are the image plane coordinates with the principal point as the origin; $(X, Y, Z)$ are the ground coordinates of the object point; $(X_S, Y_S, Z_S)$ are the coordinates of the photographic center in the ground coordinate system; $f$ is the focal length (an interior orientation parameter); and $a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, c_3$ are the elements of the rotation matrix.
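A minimal sketch of the collinearity equations in code is given below, projecting a ground point into the image plane for an assumed exterior orientation. All numerical values are hypothetical; a real bundle block adjustment solves for these parameters by least squares over many such observations rather than assuming them.

```python
# Hedged sketch: forward projection via the collinearity equations.
import numpy as np

def project(point, cam_center, R, f):
    """Image coordinates (x, y) of a ground point, collinearity condition."""
    dX = point - cam_center          # (X - Xs, Y - Ys, Z - Zs)
    u = R @ dX                       # rotate into the image frame
    x = -f * u[0] / u[2]
    y = -f * u[1] / u[2]
    return x, y

R = np.eye(3)                        # nadir-looking camera (assumed)
cam = np.array([0.0, 0.0, 110.0])    # camera station 110 m above ground
pt = np.array([10.0, 5.0, 0.0])      # hypothetical ground point
print(project(pt, cam, R, f=8.8e-3)) # image-plane coordinates in metres
```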

2.2. Terrestrial Laser Scanning Technology

2.2.1. Operational Principle

TLS primarily uses pulsed laser ranging to perform non-contact scanning of target objects and acquire point cloud data [19]. Its working principle is the time-of-flight (TOF) method: the instrument records the horizontal angle α and vertical angle β of the laser beam and computes the distance S between the scanned point and the instrument from the round-trip time of the emitted pulse (see Figure 4). TLS typically uses a local coordinate system in which the X and Y axes span the horizontal plane, the Y axis points along the scanner's direction, and the Z axis is vertical. The coordinates of a scanned target point P can therefore be expressed as:
$$\begin{cases} X_P = S \cos\beta \cos\alpha \\ Y_P = S \cos\beta \sin\alpha \\ Z_P = S \sin\beta \end{cases}$$
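This polar-to-Cartesian relation translates directly into code; the range and angles in the sample call below are hypothetical values for illustration.

```python
# Direct transcription of the TLS coordinate relation above: recover a scan
# point from range S, horizontal angle alpha, and vertical angle beta.
import numpy as np

def polar_to_xyz(S, alpha, beta):
    x = S * np.cos(beta) * np.cos(alpha)
    y = S * np.cos(beta) * np.sin(alpha)
    z = S * np.sin(beta)
    return np.array([x, y, z])

# Hypothetical sample point: 25 m range, 30 deg horizontal, 12 deg vertical.
print(polar_to_xyz(S=25.0, alpha=np.radians(30), beta=np.radians(12)))
```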

2.2.2. Deployment of Scanning Stations

Under the bridge, an untargeted, free station setup was used for scanning, ensuring an overlap of no less than 30% between adjacent stations. Stations were placed according to the size and position of the scanned object and the site conditions. A schematic diagram of the under-bridge station setup is shown in Figure 5.
On the upper part of the bridge, the railway CPIII control points served as back-sight points for a control-based scanning setup. A GPR1 prism placed on a CPIII control point enables resection, and the terrestrial laser scanner automatically identifies the elevation of the prism center.

2.2.3. TLS Workflow

Field data collection for terrestrial laser scanning mainly involves on-site reconnaissance, developing a scanning strategy, and collecting the data. During planning, the number of scanning stations, their positions, and the sampling density must be determined. Before proceeding to office processing, it is crucial to check that the collected point clouds meet the modeling requirements. The field workflow is shown in Figure 6.

2.3. Point Cloud Processing

2.3.1. Point Cloud Denoising

During UAV flights, data may be captured over non-model areas, which results in noisy image point clouds. TLS devices likewise generate uneven point cloud datasets containing system noise and target noise, both of which are irrelevant information. Statistical filters can remove obvious outliers, which are characterized by their sparse distribution in space; denser regions carry the valuable information.
Based on this distribution characteristic of outliers, a region is considered invalid if its point cloud density falls below a given range. A statistical analysis is conducted over the neighborhood of each point, assuming that the distances between points in the cloud follow a Gaussian distribution whose shape is determined by the mean μ and standard deviation σ [20,21].
Let the coordinates of the nth point in the cloud be $P_n(x_n, y_n, z_n)$. The distance from this point to any other point $P_m(x_m, y_m, z_m)$ is:

$$S_i = \sqrt{(x_n - x_m)^2 + (y_n - y_m)^2 + (z_n - z_m)^2}$$

Iterating over the cloud, the average distance between each point and the others is:

$$\mu = \frac{1}{n} \sum_{i=1}^{n} S_i$$

and the standard deviation is:

$$\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (S_i - \mu)^2}$$
The denoising algorithm takes two thresholds: the number of neighbors k and a standard deviation multiplier std. If the average distance from a point to its k neighbors falls within the range $(\mu - \sigma \cdot std,\ \mu + \sigma \cdot std)$, the point is retained; otherwise it is classified as an outlier and removed. The procedure is summarized in Figure 7.
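A minimal sketch of this filter is shown below using Open3D, whose built-in statistical outlier removal applies the same k-neighbor mean-distance test (points whose mean neighbor distance exceeds μ + std·σ are discarded). The file names and threshold values are placeholders, not the settings used in this study.

```python
# Hedged sketch: statistical outlier removal with Open3D.
import open3d as o3d

pcd = o3d.io.read_point_cloud("bridge_raw.pcd")   # placeholder file name
filtered, inlier_idx = pcd.remove_statistical_outlier(
    nb_neighbors=20,   # k: neighbours used for the mean-distance statistic
    std_ratio=2.0,     # std: standard-deviation multiplier
)
print(f"kept {len(inlier_idx)} of {len(pcd.points)} points")
o3d.io.write_point_cloud("bridge_denoised.pcd", filtered)
```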

2.3.2. Point Cloud Registration

Data fusion processing mainly consists of two aspects: one is the point cloud registration between different TLS stations, and the other is the fusion of UAV image point clouds with TLS point clouds. For the registration of point clouds from multiple TLS stations, manual registration using common feature points in Trimble Business Center software can achieve the required model accuracy. As for the fusion of UAV aerial point clouds with TLS point clouds, the iterative closest point (ICP) algorithm is used to merge the two datasets.
The registration algorithm takes the UAV image point cloud $p_s$ (source) and the terrestrial laser scanning point cloud $p_t$ (target) as input and outputs a rigid transformation T, comprising a rotation and a translation, that maximizes the overlap between $p_s$ and $p_t$. ICP iteratively refines this rigid transformation between the two original point clouds [22]. The steps of the ICP algorithm are as follows:
(1)
Apply the current transformation $T_0$ to each point $p_s^i$ in the source cloud $p_s$, yielding the transformed point $p_s'^i$.
(2)
For each transformed point $p_s'^i$, search the target cloud $p_t$ for its closest point $p_t^i$, forming a corresponding point pair.
(3)
Solve for the optimal incremental transformation $\Delta T$:
$$\Delta T = \arg\min_{R, t} \frac{1}{|p_s|} \sum_{i=1}^{|p_s|} \left\| p_t^i - (R p_s^i + t) \right\|^2$$
(4)
Judge convergence from the change in error between consecutive iterations and the iteration count. If converged, output $T = \Delta T \cdot T_0$; otherwise set $T_0 = \Delta T \cdot T_0$ and return to step (1).
Based on the above principles, it is possible to quickly select and eliminate the points with a lower registration accuracy, retaining the high-precision points. This allows for the removal of points with larger errors and optimizes the quality of the remaining key points. The fused bridge point cloud is then imported into Context Capture software to create a high-precision reality model.
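For reference, the sketch below performs the UAV-to-TLS registration with Open3D's ICP implementation, mirroring the steps listed above. The file names, the 0.5 m correspondence radius, and the identity initial transform are illustrative assumptions; in practice a coarse alignment would seed the iteration.

```python
# Hedged sketch: UAV-to-TLS point cloud registration with Open3D ICP.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("uav_image_cloud.pcd")  # p_s: UAV cloud
target = o3d.io.read_point_cloud("tls_cloud.pcd")        # p_t: TLS cloud

T0 = np.eye(4)  # initial transformation (identity; coarse alignment is better)
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.5,   # closest-point pairing radius [m]
    init=T0,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50),
)
print(result.transformation)           # rigid transform T = [R | t]
source.transform(result.transformation)  # bring UAV cloud into the TLS frame
```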

3. Application of Engineering Case Studies

3.1. Engineering Overview

The experimental area is located in Nanning City, China, on a key controlling railway bridge project. The railway line is classified as Grade I, with dual ballasted tracks spaced 4.6 m apart. Data collection was carried out in favorable, clear weather, at a temperature of 22 °C and a wind force of 1–2 on the Beaufort scale.

3.2. Experimental Data

3.2.1. UAV Data Acquisition

Control points were measured using real-time kinematic (RTK) technology with a Zhonghaida V5 mobile station (Guangzhou Zhonghaida Satellite Navigation Technology Co., Ltd., Guangzhou, China). The static positioning accuracy was ±(2.5 + 0.5 × 10⁻⁶ · D) mm in plan and ±(5 + 0.5 × 10⁻⁶ · D) mm in height, where D is the distance between the measured points. The mathematical basis for field data collection was the National Geodetic Coordinate System (CGCS2000), with a Gauss–Krüger projection central meridian of 108° and an orthometric height datum. Seven control points were evenly distributed within the survey area, all of which also served as model accuracy checkpoints. Their coordinates are provided in Table 1.
The DJI Phantom 4 RTK (DJI Innovations Technology Co., Ltd., Shenzhen, China) is equipped with a centimeter-level navigation and positioning system and a high-performance imaging system. It integrates a new RTK module with strong resistance to magnetic interference, providing real-time centimeter-level positioning data and significantly enhancing the absolute accuracy of the image metadata (see Figure 8).
The camera focal length was 8.8 mm, the ground resolution of the images was 3 cm, and the relative flying height was 110 m. Multi-view image data of the target bridge were collected with a side overlap of 75% and a forward overlap of 80%. With the DJI Phantom 4 RTK, we conducted a circular flight around the arch top, capturing 564 images; flew orthomosaic and cross-flight missions across the survey area, obtaining 508 images; and captured 491 images of the bridge's sides at close range.

3.2.2. TLS Data Acquisition

The Trimble SX12 (Trimble Inc., Sunnyvale, CA, USA) is a next-generation imaging scanner that integrates scanning, total station, and digital photogrammetry functions in one device. Used with a control panel running Windows 10, it can perform station setup, high-speed scanning, topographic measurement, and image capture (see Figure 9). During field scanning, the resolution was set to 1 mm at 10 m, with six stations set up approximately 20 m apart. In total, 216,801,944 points were scanned.
When setting up the scanner, a quadrilateral station layout was used to cover the scanning area effectively, improving efficiency and reducing point cloud errors. With the Trimble SX12, the extent of the scan can be framed freely around the object of interest: if coverage is insufficient, the angles are widened to encompass the object; if there is redundancy, the range is reduced to save time. After the range was selected, the main structure of the bridge was scanned densely to obtain detailed point cloud data.

3.3. Point Cloud Data Processing

The main tasks of point cloud processing are point cloud denoising and point cloud registration. The processing results are as follows:

3.3.1. Noise Point Cloud Removal

The denoising process combines statistical filtering with manual removal. Statistical filtering primarily removes occluded points and outliers, while manual editing eliminates non-modeled objects such as surrounding buildings, vehicles, and pedestrians (see Figure 10).

3.3.2. Two Types of Point Cloud Fusion

The terrestrial laser scanning point clouds are imported into Trimble Business Center, and the stations are registered one by one using the overlapping areas between adjacent stations. The ICP algorithm then iteratively optimizes the alignment through a least-squares approach, achieving a precise rigid transformation between the UAV point cloud and the terrestrial laser scanning point cloud and thereby fusing the two datasets. The fused point cloud is imported into Context Capture to generate a photorealistic 3D model (see Figure 11).

3.3.3. Three-Dimensional Real-World Model

The local texture features of the UAV-only model are incomplete, as seen in Figure 12, where the red box highlights a suspension rod. After integrating the terrestrial laser scanning data, the three-dimensional model shows more realistic and complete textures.

3.4. Drone Bridge Damage Inspection

Realistic three-dimensional modeling from the combination of UAV oblique photography and TLS was used to inspect the surface quality of the bridge deck, piers, and arch ribs. The results of the bridge damage inspection are shown in Figure 13.
The concrete surface of some areas on the bridge piers exhibited localized honeycombing and pitting, as seen in Figure 13a. Inspection of the piers revealed water seepage from the upper to lower parts, as depicted in Figure 13a,b. The localized water seepage on the pier surface was compromising durability, necessitating reinforced drainage measures. Through drone aerial photography and on-site surveys, rusting and cracking were observed in the paint coating of the arch ribs, as shown in Figure 13c,d.

3.5. Generation of Planar Cross-Sectional Profiles

First, the fused UAV-TLS point cloud is imported into AutoCAD 2020. Clipping is then performed along the XOY, XOZ, and YOZ planes, and the contour areas are fitted. Once the desired cross-section positions are selected, the cross-section contour is drawn from the point cloud: polylines are used to approximate the shape of the points and are adjusted gradually to ensure accuracy. From the resulting contours, specific elevation profiles are created with the AutoCAD 2020 drawing tools, and dimensions, labels, and annotations are added so that the profile clearly represents the structure and details of the bridge. Finally, the drawing is exported (see Figure 14); a sketch of the slicing step is given below.
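The snippet below illustrates the slicing step under stated assumptions: it extracts the points lying within a thin slab around a chosen cross-section plane, which can then be traced into a contour. The file name, bridge axis, station, and slab half-width are hypothetical.

```python
# Hedged sketch: extract a planar cross-section slab from the fused cloud.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("bridge_fused.pcd")  # placeholder file name
pts = np.asarray(pcd.points)

x0, half_width = 120.0, 0.05          # section at x = 120 m, +/- 5 cm slab
mask = np.abs(pts[:, 0] - x0) < half_width
section = pts[mask]                   # (y, z) of these points trace the profile
print(f"{len(section)} points in the cross-section slab")
```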

3.6. Model Quality Assessment

Taking the RTK-measured control point coordinates as the true values, the corresponding coordinates are extracted from each model, and the root mean square error of the control points is used to evaluate model accuracy. The precision in the X, Y, and Z directions is calculated with the following formulas [23,24]:
$$S_U = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (U_i - R_i)^2}, \quad U = X, Y, Z$$

$$S_F = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (F_i - R_i)^2}, \quad F = X, Y, Z$$

$$S = \sqrt{S_X^2 + S_Y^2 + S_Z^2}$$
In the equations, $U_i$ denotes the three-dimensional coordinates of the ith control point from the UAV-only model; $F_i$ denotes the coordinates of the ith control point from the fused TLS-UAV model; $R_i$ denotes the true coordinates of the ith control point measured by RTK; and N is the number of control points. The results are given in Table 2.
The calculations show that the accuracy in each individual direction is better than 1.2 cm, indicating high model accuracy. The three-dimensional accuracy of the UAV model fused with TLS is 1.70 cm, a 31.5% improvement over UAV-only modeling.
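The computation above is straightforward to reproduce; the sketch below evaluates the per-axis RMSE and the combined 3D value on placeholder data (the RTK rows reuse values from Table 1 for illustration, while the model coordinates are hypothetical).

```python
# Hedged sketch: per-axis RMSE of model checkpoints against RTK truth,
# plus the combined 3D value. Arrays are placeholders, not study data.
import numpy as np

model = np.array([[566945.10, 2433871.08, -9.87],     # hypothetical model coords
                  [566986.25, 2433808.00, -10.28]])
rtk   = np.array([[566945.112, 2433871.073, -9.864],  # RTK reference (Table 1)
                  [566986.237, 2433807.995, -10.286]])

rmse_xyz = np.sqrt(np.mean((model - rtk) ** 2, axis=0))  # S_X, S_Y, S_Z
rmse_3d = np.sqrt(np.sum(rmse_xyz ** 2))                 # S
print(rmse_xyz, rmse_3d)
```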

4. Conclusions

The combination of UAV-TLS methods in this study enables the creation of a realistic 3D model of a large-scale operational railway bridge. It significantly reduces labor costs and serves as a valuable auxiliary tool in bridge defect detection.
(1)
The proposed data acquisition method, fusing UAV photogrammetry with terrestrial laser scanning, addresses the incomplete data collection of UAV-only bridge surveys and the missing texture information of TLS-only scans.
(2)
By utilizing a statistical filtering method to remove noise from point clouds and employing the ICP registration algorithm to fuse TLS point clouds with drone aerial point clouds, the real-world three-dimensional model achieves an accuracy of 1.70 cm, representing a 31.5% improvement.
(3)
We created a high-precision, comprehensive, and realistically textured 3D model of a railway bridge, allowing us to inspect the condition of piers, arch ribs, and other structural elements. Additionally, we accurately obtained elevation and cross-sectional drawings of the bridge.

Author Contributions

Conceptualization, J.L. and Y.P.; methodology, J.L.; software, J.L. and Z.T.; validation, J.L., Z.T. and Z.L.; formal analysis, J.L.; investigation, Y.P.; resources, Y.P.; data curation, Z.T.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; visualization, Y.P.; supervision, Y.P.; project administration, Y.P.; funding acquisition, Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available through email upon request to the corresponding author.

Acknowledgments

The author sincerely thanks Peng Yipu for his invaluable assistance in establishing the model used in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Luhmann, T.; Chizhova, M.; Gorkovchuk, D. Fusion of UAV and terrestrial photogrammetry with laser scanning for 3D reconstruction of historic churches in Georgia. Drones 2020, 4, 53. [Google Scholar] [CrossRef]
  2. Yu, M.-S.; Yao, X.-Y.; Deng, N.-C.; Hao, T.; Wang, L.; Wang, H. Optimal cable force adjustment for long-span concrete-filled steel tube arch bridges: Real-Time Correction and Reliable Results. Buildings 2023, 13, 2214. [Google Scholar] [CrossRef]
  3. Shan, Z.; Qiu, L.-J.; Chen, H.-H.; Zhou, J. Coupled analysis of safety risks in bridge construction based on N-K model and SNA. Buildings 2023, 13, 2178. [Google Scholar] [CrossRef]
  4. Zhou, Y.; Li, J.; Liu, P.; Zhou, X.-F.; Pan, H.; Du, Z. Research on bridge condition evaluation by integrating UAV line shape measurement and vibration test. J. Hunan Univ. 2023, 1–11. [Google Scholar]
  5. Yu, J.-Y.; Xue, X.-K.; Chen, C.-F.; Chen, R.; He, K.; Li, F. Three-dimensional reconstruction and disaster identification of highway slope using unmanned aerial vehicle-based oblique photography technique. China J. Highw. Transp. 2022, 35, 77–86. [Google Scholar]
  6. Chen, C.-F.; He, K.-Y.; Yu, J.-Y.; Mao, F.; Xue, X.; Li, F. Identification of discontinuities of high steep slope based on UAV nap-of-the-object photography. J. Hunan Univ. 2022, 49, 145–154. [Google Scholar]
  7. Guo, X.-T.; Huang, T.; Jia, Y.; Zhang, R. Landslide deformation monitoring using TLS technology and pseudo-single point monitoring model. Bull. Surv. Mapp. 2021, 6, 106–111. [Google Scholar]
  8. Zhou, Z.-X.; Jiang, T.-J.; Tang, L.; Chu, X.; Yang, J. Application of mobile three-dimensional laser scanning system in bridge deformation monitoring. J. Appl. Basic Sci. Eng. 2018, 26, 1078–1091. [Google Scholar]
  9. Tong, X.; Liu, X.; Chen, P.; Liu, S.; Luan, K.; Li, L.; Liu, S.; Liu, X.; Xie, H.; Jin, Y.; et al. Integration of UAV-based photogrammetry and terrestrial laser scanning for the three-dimensional mapping and monitoring of open-pit mine areas. Remote Sens. 2015, 7, 6635–6662. [Google Scholar] [CrossRef]
  10. Zhang, Y.-T.; Sun, B.-Y.; Mo, C.-H.; Xue, W. 3D reconstruction of arch bridge based on fusion of unmanned aerial vehicle (UAV) and 3D laser scanning. Sci. Technol. Eng. 2023, 23, 2274–2281. [Google Scholar]
  11. Zhang, Z.-J.; Cheng, X.-J.; Cao, Y.-J.; Wang, F.; Yu, Y. Three-dimensional reconstruction of ancient relics using a combination of laser and visual point clouds. Chin. Laser 2020, 47, 273–282. [Google Scholar]
  12. Ren, D.-W. Dense Matching of UAV Oblique Images Combined with Terrestrial Laser Scanning Point Cloud and Point Cloud Fusion Technology. Master’s Thesis, Wuhan University, Wuhan, China, 2022. [Google Scholar]
  13. Wright, R.; Gomez, A.; Zimmer, V.A.; Toussaint, N.; Khanal, B.; Matthew, J.; Skelton, E.; Kainz, B.; Rueckert, D.; Hajnal, J.V.; et al. Fast fetal head compounding from multi-view 3D ultrasound. Med. Image Anal. 2023, 89, 102793. [Google Scholar] [CrossRef] [PubMed]
  14. Bei, W.-J.; Fan, X.-T.; Jian, H.-D.; Du, X.; Yan, D. Geoglue: Feature matching with self-supervised geometric priors for high-resolution UAV images. Int. J. Digit. Earth 2023, 16, 1246–1275. [Google Scholar] [CrossRef]
  15. Bas, S.; Ok, A.O. A new productive framework for point-based matching of oblique aircraft and UAV-based images. Photogramm. Rec. 2021, 36, 252–284. [Google Scholar] [CrossRef]
  16. Cheng, M.-L.; Matsuoka, M.; Liu, W.; Yamazaki, F. Near-real-time gradually expanding 3D land surface reconstruction in disaster areas by sequential drone imagery. Autom. Constr. 2022, 135, 104105. [Google Scholar] [CrossRef]
  17. Akbari, Y.; Almaadeed, N.; Al-Maadeed, S.; Elharrouss, O. Applications, databases and open computer vision research from drone videos and images: A survey. Artif. Intell. Rev. 2021, 54, 3887–3938. [Google Scholar] [CrossRef]
  18. Burgett, J.; Lytle, B.; Bausman, D.; Shaffer, S.; Stuckey, E. Accuracy of drone-based surveys: Structured evaluation of a UAS-based land survey. J. Infrastruct. Syst. 2021, 27, 05021005. [Google Scholar] [CrossRef]
  19. Nap, M.E.; Chiorean, S.; Cira, C.I.; Manso-Callejo, M.Á.; Păunescu, V.; Șuba, E.E.; Sălăgean, T. Non-destructive measurements for 3D modeling and monitoring of large buildings using terrestrial laser scanning and unmanned aerial systems. Sensors 2023, 23, 5678. [Google Scholar] [CrossRef]
  20. Buades, A.; Coll, B.; Morel, J.M. Nonlocal image and movie denoising. Int. J. Comput. Vis. 2008, 76, 123–139. [Google Scholar] [CrossRef]
  21. Macháň, R.; Kapusta, P.; Hof, M. Statistical filtering in fluorescence microscopy and fluorescence correlation spectroscopy. Anal. Bioanal. Chem. 2014, 406, 4797–4813. [Google Scholar] [CrossRef]
  22. Qiao, J.-W.; Wang, J.-J.; Xu, W.-S.; Lu, Y.-P.; Hu, Y.-W. Research on laser point cloud stitching based on iterative closest point algorithms. J. Shandong Univ. Sci. Technol. 2020, 34, 46–50. [Google Scholar]
  23. Gomes, P.G.; Caceres, C.A.; Takahashi, M.G.; Amorim, A.; Galo, M. Assessment of UAV-based digital surface model and the effects of quantity and distribution of ground control points. Int. J. Remote Sens. 2021, 42, 65–83. [Google Scholar] [CrossRef]
  24. Zhang, M.; Chen, T.; Gu, X.; Kuai, Y.; Wang, C.; Chen, D.; Zhao, C. UAV-borne hyperspectral estimation of nitrogen content in tobacco leaves based on ensemble learning methods. Comput. Electron. Agric. 2023, 211, 108008. [Google Scholar] [CrossRef]
Figure 1. Overall workflow of modeling with oblique aerial photography data.
Figure 2. Schematic diagram of heading and lateral overlaps. (a) Heading overlap diagram; (b) lateral overlap diagram.
Figure 3. Process flowchart of feature point extraction and matching.
Figure 4. The principle of TLS. (a) Pulse laser ranging, where green represents emission, and red represents reception; (b) geometric relationship.
Figure 5. Schematic diagram of scanning station setup. (S* represents the *th scanning station.)
Figure 6. The field workflow for TLS.
Figure 7. Statistical filtering flowchart.
Figure 8. UAV measurements with different flight paths. (a) UAV field operations flight; (b) drone orbit flight, cross flight, and close-range photography (xk01 represents control point 1, and so on).
Figure 9. Field operations of TLS. (a) Pier bottom station scanning; (b) bridge side station scanning; (c) bridge deck station scanning.
Figure 10. Before and after point cloud denoising comparison. (a) The point cloud before denoising (The red box indicates noise points.); (b) the point cloud after denoising.
Figure 11. Fusion of TLS point cloud and UAV point cloud. (a) Before the fusion of TLS and UAV; (b) after the fusion of TLS and UAV.
Figure 12. Fused 3D real-world model. (a) Single drone 3D real-world modeling; (b) drone integrated with terrestrial-based laser scanning 3D real-world modeling; (c,d) partial texture results of the integrated modeling.
Figure 13. Fused 3D real-world model. (a,b) Pier damage; (c,d) arch rib damage. (The red box indicates the locations with existing defect or pathology.)
Figure 14. Acquisition of bridge cross-sectional elevation profile. (a) Arch bridge elevation profile. (b) Arch bridge plan view. (c) Arch bridge cross-sectional profile.
Table 1. Coordinate information of control points.
Control Point | X/m | Y/m | Z/m
xk01 | 566,945.112 | 2,433,871.073 | −9.864
xk02 | 566,986.237 | 2,433,807.995 | −10.286
xk03 | 566,986.607 | 2,433,878.805 | −9.990
xk04 | 567,066.452 | 2,433,763.583 | −19.302
xk05 | 567,088.825 | 2,433,668.017 | −9.581
xk06 | 567,120.988 | 2,433,671.596 | −8.724
xk07 | 567,006.905 | 2,433,767.596 | −16.656
Table 2. Root mean square error (RMSE) of ground control points.
Model | RMSE X/cm | RMSE Y/cm | RMSE Z/cm | RMSE 3D/cm
UAV | 1.8 | 1.1 | 1.3 | 2.48
UAV + TLS | 1.2 | 0.8 | 0.9 | 1.70