Article

Flight Path Setting and Data Quality Assessments for Unmanned-Aerial-Vehicle-Based Photogrammetric Bridge Deck Documentation

1 School of Information Science and Engineering, Hunan Institute of Science and Technology, Yueyang 414015, China
2 School of Civil Engineering, University College Dublin, D04C1P1 Dublin, Ireland
3 College of Mechanical Engineering, Hunan Institute of Science and Technology, Yueyang 414015, China
4 Center for Urban Science and Progress, Tandon School of Engineering, New York University, New York, NY 10012, USA
5 School of Civil Engineering, Technical University Delft, 2628 CD Delft, The Netherlands
6 School of Computer Science, University College Dublin, D04C1P1 Dublin, Ireland
* Author to whom correspondence should be addressed.
Sensors 2023, 23(16), 7159; https://doi.org/10.3390/s23167159
Submission received: 14 July 2023 / Revised: 2 August 2023 / Accepted: 7 August 2023 / Published: 14 August 2023
(This article belongs to the Section Remote Sensors)

Abstract
Imagery from Unmanned Aerial Vehicles (UAVs) can be used to generate three-dimensional (3D) point cloud models. However, final data quality is impacted by the flight altitude, camera angle, overlap rate, and data processing strategies. Typically, both overview images and redundant close-range images are collected, which significantly increases the data collection and processing time. To investigate the relationship between input resources and output quality, a suite of seven metrics is proposed, including total points, average point density, uniformity, yield rate, coverage, geometry accuracy, and time efficiency. When applied in the field to a full-scale structure, the UAV altitude and camera angle most strongly affected data density and uniformity. A 66% overlap rate was needed for successful 3D reconstruction. Conducting multiple flight paths improved local geometric accuracy more than increasing the overlap rate did. The highest coverage achieved was 77%, due to the formation of semi-irregular gridded gaps between point groups as an artefact of the Structure from Motion process. No single set of flight parameters was optimal for every data collection goal. Hence, understanding the impact of flight path parameters is crucial to optimal UAV data collection.

1. Introduction

To ensure ongoing serviceability and safety, bridges must be inspected periodically as per local regulations (e.g., AASHTO, 1970 [1]; RAIU, 2010 [2]). Although many methods have been developed to support bridge inspection, visual inspection by on-site inspectors dominates. However, visual inspection has many shortcomings, including the following: (1) subjective results; (2) access only via heavy and/or specialty equipment; (3) traffic closures; (4) the requirement for highly skilled, trained inspectors; (5) safety risks for inspectors; and (6) time-consuming and expensive processes. These aspects are particularly challenging in the absence of as-built drawings or an existing 3D model.
Bridge documentation and inspection have been conducted using cameras (Xie et al., 2018) [3] and/or laser scanners (Truong-Hong and Laefer, 2015 [4]; Gyetvai et al., 2018 [5]), and even microwave radar interferometry (Zhang et al., 2018) [6] and synthetic aperture radar. Three-dimensional point clouds can be produced either directly through laser scanning or indirectly by assembling two-dimensional images. However, the quality of these point clouds is highly dependent on the view angles and offset distances. For example, when the camera or scanner is set on the bridge deck or river bank, incomplete coverage of the entire structure may occur due to the fixed field of view and positioning logistics. Low-cost UAVs equipped with cameras provide workarounds and offer many benefits, such as non-contact measurement, avoidance of traffic closures, and use of non-specialized equipment (Atole et al., 2017) [7], while providing better data coverage in hard-to-reach areas like beneath the deck or the upper portions of a bridge’s pylons (Chen et al., 2019) [8].
As an alternative to laser scanners, low-cost UAVs equipped with a single digital camera can generate dense and accurate point clouds when coupled with state-of-the-art computer-vision-based methods. Such capabilities have accelerated the adoption of UAV-based data capture for a wide range of infrastructure needs including building modelling (Byrne and Laefer, 2016) [9], dam inspection (Sizeng Zhao et al., 2021) [10], construction site monitoring (Hoegner et al., 2016) [11], and road surface evaluation (Chen and Dou, 2018) [12]. However, there are many factors that directly influence the model’s final accuracy, resolution, and completeness. While these are known to include the camera positioning, number of images collected, overlap extent, and image quality, the interaction between these factors with respect to their impact on the final 3D model quality has yet to be quantified. Additionally, to explore their capability for comprehensive documentation and to devise optimization strategies, a series of reliable and systematic evaluation metrics are required to evaluate the results. To date, these have yet to be established. Therefore, this paper introduces four data quality evaluation metrics for bridge deck or roadway point clouds and investigates the interaction between flight path parameters and the quality of the reconstructed point cloud using those metrics and with respect to a terrestrial laser scanner.

2. Background

In recent years, with the improvement in design, control, and navigation technologies, UAVs are becoming cheaper and more easily accessible (Chen et al., 2016) [13]. In addition to conventional fixed-wing UAVs, newer designs developed for low-altitude, close-range inspection are increasingly available. For example, multirotor UAVs with outstanding hovering capabilities and better safety tolerances for rotor failure are already being used to a limited extent for civil infrastructure inspection (Liu et al., 2014) [14]. The incorporation of navigation sensors, such as Inertial Measurement Units (IMUs) (Li et al., 2015) [15], and obstacle detection sensors, such as optical flow cameras (Honegger et al., 2013) [16] and ultrasonic sensors (Papa and Del, 2015) [17], is further improving UAV reliability. For data collection purposes, laser scanners (Chisholm et al., 2013) [18] and digital cameras (Ferrick et al., 2012) [19] are commonly used separately and together both with and without UAVs. Examples are shown in Table 1.
Laser scanning provides high-quality 3D point clouds, but the equipment is comparatively expensive. Imagery is arguably more cost effective but not without difficulties, especially as many applications, including full documentation and crack detection, require 3D data (Chen et al., 2011) [36]. To obtain depth information from 2D images, they must be stitched together to form either stereoscopic images or a point cloud. For UAV inspection, the latter is commonly achieved using the SfM (Structure from Motion) method. The approach relies on having overlapping images taken from multiple viewpoints to enable the formation of a 3D point cloud. The process starts by detecting key points in each image through which images can be linked. This procedure can be accomplished by applying feature detectors like the scale invariant feature transform (SIFT) (Lowe, 1999) [37] or the speeded up robust features (SURF) method (Bay et al., 2008) [38]. Then, the 3D structure and camera motion can be estimated from the extracted features to improve triangulation. Subsequently, a sparse bundle adjustment (Lourakis and Argyros, 2009) [39] can be used to optimize the camera’s position and generate a sparse point cloud to represent the object. Finally, point density can be intensified by applying multi-view stereo (MVS) techniques (Yasutaka and Hernández, 2015) [40]. Many of these procedures and related algorithms appear singly or in combination in many off-the-shelf software products, including Agisoft Photoscan, Pix4D, OpenMVG, and VisualSfM. For the sake of simplicity, in this paper the term SfM will be used to denote the entire reconstruction procedure.
While SfM is well established, the presence of cars, shadows, and specific terrains can complicate the subsequent data processing. The resulting 3D point clouds are also impacted by camera settings, lens distortion, flying height, quality and quantity of images, distribution of perspectives in those images, and capture angles (Smith and Vericat, 2015) [41]. Recent efforts have investigated the impact of these factors on the quality and quantity of SfM-generated point clouds. For example, Byrne et al. (2017) [42] studied the effects of camera mode and lens settings on point density. This study showed that the lens distortion under a wide view mode generated a point cloud only half as dense as the one derived from images with no distortion. Similarly, poor data density and distortions were observed by Chen et al. (2017) [43] under laboratory conditions when the angle of incidence was high. That study recommended combining images from different oblique angles (e.g., 45° with 60°) to minimize the density and distortion issues that appear when they are processed separately. A similar recommendation was made previously by James et al. (2014) [44], where the addition of oblique or parallel images reduced the error in the digital elevation models by as much as two orders of magnitude. However, all images may not be equally valuable. For example, Dandois et al. (2015) [45] found that denser point clouds were more easily produced on cloudy days due to the absence of the unwanted shadows produced on sunny days. However, Chen et al. (2017) demonstrated that under laboratory conditions direct light increased the contrast in the images, which improved model accuracy, thereby implying that sunny days will lead to more accurate point clouds even though they may be less dense than those collected on cloudy days. Han et al. (2023) [35] conducted a study on the influence of UAV flight paths on the geometric accuracy of the final model.
However, it is important to note that the geometric accuracy of the point cloud does not solely represent the point cloud quality. In real-world engineering scenarios, the point cloud quality typically requires evaluation from various perspectives, including volume density, completeness, geometric accuracy, and time taken, among others.
Although the aforementioned studies have recognized the effect of some variables related to camera calibration or data post-processing, a systematic understanding of how flight path parameters affect final point cloud quality has yet to be established, especially at lower altitudes (below 50 m) and in the presence of buildings and other infrastructure. Furthermore, there has yet to emerge a standard evaluation process for SfM point clouds. While some studies, such as those by Byrne et al. (2017) [46] and Slocum and Parrish (2017) [47], used the final number of points in a point cloud or point density as a proxy for quality, this is not widely performed, and while a few researchers (e.g., (Dandois et al., 2015)) have considered the geometric accuracy of the reconstructed point cloud with respect to GPS and GCP, these properties do not address data completeness or uniformity. To address these knowledge gaps, a systematic evaluation method is proposed in this paper to quantitatively study image-based point clouds. The usefulness of these metrics is then demonstrated as a means to determine the impact of flight parameters on 3D point cloud reconstructions.

3. Quality-Based Evaluation Metrics

To determine the quality of reconstructed point clouds for bridge deck and road surface documentation, a quintet of new quality-based point cloud evaluation metrics is herein proposed that covers the following aspects: (i) point average density, (ii) uniformity, (iii) completeness, (iv) overall point yield, and (v) geometric accuracy. Having these objective metrics will then enable more informed decisions about UAV flight path planning with respect to the required outputs. Each of these metrics is described in this section and then implemented in the next section as part of an actual field study.

3.1. Point Density and Uniformity

The first two proposed evaluation metrics are overall point density and point uniformity. Point cloud density is an indicator of data resolution. When the overall density is too low, small details will not appear in the dataset and may preclude damage identification because of poor data availability. Conversely, overly dense point clouds will have redundant data, thereby unnecessarily requiring storage space and slowing analyses. Non-uniform point clouds will include both high-density and low-density areas.
These defects influence the quality of subsequent data processing and the affiliated outputs as well. This may include the performance of neighbor search algorithms and feature estimation processes, further data simplification (Moenning and Dodgson, 2003) [48], surface reconstruction (Huang et al., 2009 [49]; Holz and Behnke, 2014 [50]), and multi-dataset registration (Holz and Behnke, 2014; Huang et al., 2009). In addition, employing algorithms that specify a minimum-density threshold (Zolanvari and Laefer, 2016 [51]; Truong-Hong et al., 2013 [52]) may be especially challenging as even quantification of the minimum density would require significant resources to establish. Unfortunately, in real surveys, both TLS (terrestrial laser scanning) point clouds and imagery-derived point clouds (referred to here as SfM point clouds) are non-uniform.
To overcome these aforementioned problems, identifying the parameters that most affect the density and uniformity of a point cloud is necessary. Due to different data capturing mechanisms, parameters impacting the TLS and SfM point clouds differ. The non-uniformity of TLS data is directly linked to offset distance and the angle of incidence, as well as data capture speed. Specifically, smaller offsets and incidence angles tend to produce higher densities and lower differences in data distributions and can be represented as largely linear relationships (Laefer et al., 2009 [53]; Quagliarini et al., 2017 [54]). However, the main factors contributing to non-uniformity in SfM point clouds are less understood, and the explicit relationship between image resolution and overlapping rate has yet to be studied systematically.
To identify the critical data capture parameters that affect data densities and uniformities in SfM point clouds collected from UAVs, a volume density calculation is proposed; volume density is more representative as the surfaces are not entirely flat. The approach considers point distribution across a sphere. As shown in Figure 1, for each point Pi, the number of neighbour points inside a specified spherical neighbourhood (Ni) with a radius R is calculated using a k-Nearest Neighbour (kNN) algorithm (Fukunaga and Hostetler, 1973) [55]. The volume density of Pi is equal to Ni divided by the neighbourhood volume. As such, the general density can be represented by the statistical characteristics of each point. As shown in Equation (1), the average density (AD) will represent the overall density of the point cloud, while a standard deviation (SD) (Equation (2)) is used to evaluate its uniformity level. As density may vary greatly between datasets, a direct comparison of the SD is not meaningful. Thus, the term relative standard deviation (RSD) is introduced in the form of Equation (3) as an indication of the uniformity. Lower RSD values represent more uniform datasets.
$AD = \frac{1}{n}\sum_{i=1}^{n}\frac{N_i}{\frac{4}{3}\pi R^3}$ (1)
$SD = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{N_i}{\frac{4}{3}\pi R^3} - AD\right)^2}$ (2)
$RSD = \frac{SD}{AD} \times 100\%$ (3)
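For illustration, Equations (1)–(3) can be sketched in code using a k-d tree for the spherical neighbourhood counts. Python with NumPy and SciPy is assumed; the function name, the choice of radius, and the exclusion of each query point from its own count are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_metrics(points, radius):
    """Compute AD, SD, and RSD (Equations (1)-(3)) for an (n, 3) point array.

    For each point P_i, the neighbours N_i inside a sphere of the given
    radius are counted with a k-d tree; the per-point volume density is
    that count divided by the sphere volume.
    """
    tree = cKDTree(points)
    # query_ball_point returns each point's neighbours within `radius`;
    # the point itself is included in the result, so subtract one.
    counts = np.array([len(idx) - 1
                       for idx in tree.query_ball_point(points, radius)])
    sphere_volume = (4.0 / 3.0) * np.pi * radius ** 3
    densities = counts / sphere_volume
    ad = densities.mean()            # average density, Eq. (1)
    sd = densities.std()             # standard deviation, Eq. (2)
    rsd = sd / ad * 100.0            # relative standard deviation (%), Eq. (3)
    return ad, sd, rsd
```

The radius R should match the scale of the features of interest; the paper leaves it as a parameter, so it is left free here as well.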

3.2. Completeness

The third metric relates to completeness. Incompleteness in SfM point clouds commonly relates to insufficient coverage, insufficient overlap, inability to discern textures in the images, and overall poor image quality. As shown in Figure 2, the missing data appear as either missing patches or randomly distributed empty spots. In contrast, incompleteness in TLS datasets is usually caused by high angles of incidence or line-of-sight interference, both common artefacts of site access issues.
To quantify the completeness of a point cloud, a mesh-based area calculation method is introduced. Since the bridge deck upper surface is nearly a flat plane, to make the calculation more efficient, a 2D mesh is used. The process involves first projecting the data points onto a normal plane. Then, a triangulation mesh is built from the projected data points based on x and y coordinates across an entire plane. Next, the threshold radius α is applied to control the searching radius for the mesh generation. For any point C within the radius α, if a neighbour point exists, a triangular mesh will be generated, as shown in Figure 3. The mesh is then used to calculate the area. Thus, by controlling the threshold α, the areas with and without coverage can be calculated.
To choose an appropriate α, the average distance of any point to its nearest neighbours must be measured. In this algorithm, points are randomly taken from the original data as querying points and used in a kNN search to find the closest point to each query point. Then, the average Euclidean distance (βave) of all pairs of query points and their closest neighbours is calculated. If the α value is close to or equal to βave, then the mesh will omit the incomplete areas and only represent the real data coverage. Instead, if α is set much larger than βave, the mesh will connect all points and measure the entirety of the pavement. By comparing these two meshes, the degree of coverage for each dataset can be measured and compared, as shown in Figure 4. The completeness index (CI) is equal to the percentage of the area covered by the points compared to the entirety of the area enveloped inside the boundary, as shown in Equation (4).
$CI = \frac{A_s}{A} \times 100\%$ (4)
In an ideal world, the smallest known feature or damage could be used, but that would require a priori knowledge or extensive pre-processing and localized surface generation prior to implementation of this check.
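One plausible way to realize this mesh-based coverage check is with a Delaunay triangulation in which triangles having any edge longer than α are discarded as gaps; this is an illustrative reading of the method, not the authors' exact procedure. The sketch below (assuming NumPy and SciPy; function names are hypothetical) also includes a βave estimate from a random sample of query points.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def average_nn_distance(points_xy, sample=500, seed=0):
    """Estimate beta_ave: mean distance from randomly sampled query points
    to their nearest neighbour in the projected cloud."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points_xy), min(sample, len(points_xy)), replace=False)
    d, _ = cKDTree(points_xy).query(points_xy[idx], k=2)
    return d[:, 1].mean()  # column 0 is the query point itself

def completeness_index(points_xy, alpha):
    """Completeness index CI (Equation (4)) from 2D-projected points.

    Triangles in a Delaunay mesh whose longest edge exceeds alpha are
    treated as gaps; CI is the covered area A_s over the total
    triangulated area A, as a percentage.
    """
    tri = Delaunay(points_xy)
    verts = points_xy[tri.simplices]                   # (m, 3, 2) vertices
    edges = np.linalg.norm(verts - np.roll(verts, 1, axis=1), axis=2)
    # Triangle areas via the 2D cross product.
    a = verts[:, 1] - verts[:, 0]
    b = verts[:, 2] - verts[:, 0]
    areas = 0.5 * np.abs(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    covered = areas[edges.max(axis=1) <= alpha].sum()  # A_s
    return covered / areas.sum() * 100.0               # A = full hull area
```

With α close to βave only genuinely covered area is counted, while a very large α recovers the full envelope, mirroring the two-mesh comparison described above.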

3.3. Geometric Accuracy

The fourth evaluation metric is geometric accuracy, which is important for engineering inspection, especially for applications such as deformation monitoring and quantifiable damage assessment. In a surveying context, Lucieer et al. (2014) [56] compared a UAV-based SfM point cloud to checks with GPS readings through a set of 24 GCPs for landslide mapping. Similarly, Mosbrucker et al. (2017) [57] used LiDAR-derived digital terrain models as the ground truth along with 103 control points for topographic mapping. Such methods rely on GCPs for the large-scale global accuracy assessment and demonstrated a range of SfM-point cloud accuracy from 0.05 m to 0.97 m for the applications and equipment considered in those studies. This type of approach works well for topographic surveying, as the goal is to compare the positioning of data points to known positions in the real world.
For documentation, inspection, and modelling, however, the accuracy must be tied to the geometric object under evaluation. For small-scale surveys, Palmer et al. (2015) [58] used TLS data as the ground truth. In that process, fixed features of the structure (e.g., beam length) were used for comparison. However, picking the same points from different datasets for measurement is hard to achieve reliably given the discrete nature of the data capture and is arguably fraught with hard-to-quantify errors. To overcome these problems, Byrne et al. (2017) [42] proposed a point-to-point distance evaluation based on an average point-to-point distance. However, the problem remains that the geometry is not itself being checked in the absence of measured drawings, which are rarely available. Moreover, because this point-to-point distance calculation is based on closest-neighbour searching, non-uniform data distribution will introduce errors into the result as well.
To resolve the problems mentioned above, a cross-section evaluation method for the accuracy assessment is proposed herein. First of all, each SfM dataset and the TLS ground truth point cloud were aligned using the ICP algorithm (Besl and McKay, 1992) [59]. Then, a cross-section (with a thickness of 5 cm in the x-direction) of the bridge deck from each dataset was manually extracted, as shown in Figure 5. After that, those points were projected to the Y–Z plane and separated into multiple intervals in the y-direction (Figure 6a). In each interval, the average Z value was calculated. By linking those points, the local surface was assembled (Figure 6b). Lastly, by measuring the difference between each SfM dataset and the TLS dataset, the Pearson correlation coefficient could be calculated through Equation (5). In that equation, c o v A , B is the covariance between two sets of mean values along the cross-section from different datasets. The terms σ A and σ B are the standard deviations of each set of mean values.
$\rho(A,B) = \frac{\mathrm{cov}(A,B)}{\sigma_A \sigma_B}$ (5)
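The interval-averaging and correlation steps can be sketched as follows. This assumes NumPy, with the slice already projected onto the Y–Z plane; the function names and the NaN handling for empty intervals are illustrative, and the 5 cm slice extraction and ICP alignment are taken as already done.

```python
import numpy as np

def cross_section_profile(points_yz, y_min, y_max, n_bins):
    """Mean Z per Y-interval for a cross-section slice projected onto the
    Y-Z plane; intervals without points are left as NaN."""
    edges = np.linspace(y_min, y_max, n_bins + 1)
    bins = np.digitize(points_yz[:, 0], edges) - 1
    profile = np.full(n_bins, np.nan)
    for b in range(n_bins):
        z = points_yz[bins == b, 1]
        if z.size:
            profile[b] = z.mean()
    return profile

def profile_correlation(profile_a, profile_b):
    """Pearson correlation coefficient (Equation (5)) over the intervals
    populated in both profiles."""
    ok = ~(np.isnan(profile_a) | np.isnan(profile_b))
    return np.corrcoef(profile_a[ok], profile_b[ok])[0, 1]
```

Averaging within intervals before correlating suppresses per-point noise, so the coefficient reflects agreement of the local surface shape rather than of individual samples.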

3.4. Data Density Yield

In some studies, the total points appearing in a reconstructed point cloud are used as a proxy to compare the quality of different reconstruction methods. For infrastructure documentation applications, such a broad approach may not encapsulate the true quality of the output, as points appearing in the background or in non-essential areas may contribute little. To determine the extent that captured data appear in the relevant portion of the point cloud, a density conversion rate ( D C R ) metric is proposed as a direct indicator of the yield. As shown in Equation (6), the A D A O I is the average volume density of the area of interest (AOI), in this case the bridge deck. In this equation, P N is the total number of points included in the dataset. The D C R indicates the relative value of the overall point cloud with respect to an area of interest (e.g., the bridge deck). Lower D C R values indicate a lower yield percentage with respect to all data collected.
$DCR = \frac{AD_{AOI}}{PN}$ (6)
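Given a boolean mask marking the area of interest, Equation (6) reduces to a small computation. The sketch below (assuming NumPy/SciPy; the function name and the AOI-masking interface are illustrative) reuses the same spherical volume density as Equations (1)–(3), restricted to the AOI points.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_conversion_rate(points, aoi_mask, radius):
    """DCR (Equation (6)): the average volume density inside the area of
    interest (AOI) divided by the total number of points in the cloud."""
    tree = cKDTree(points)  # neighbours are searched in the full cloud
    counts = np.array([len(i) - 1
                       for i in tree.query_ball_point(points[aoi_mask], radius)])
    ad_aoi = (counts / ((4.0 / 3.0) * np.pi * radius ** 3)).mean()
    return ad_aoi / len(points)
```

A cloud padded with background points that add nothing over the bridge deck keeps the same AD_AOI but a larger PN, so its DCR drops, which is exactly the yield behaviour the metric is meant to expose.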

4. Field Study

To demonstrate the applicability and usefulness of the aforementioned metrics, a field study was undertaken. Such an approach provides insight for understanding the interaction of flight path parameter selection for bridge documentation. In this case only the bridge deck was considered as the target object.

4.1. Scope

The field study considered three common UAV flight path parameters: altitude, oblique angle, and overlapping rate. The Blessington bridge, a concrete bridge in Co. Wicklow, Ireland, was selected as the case study. This bridge was selected because it is outside the Dublin airport flight control area, with clear surroundings and light vehicular traffic, which facilitated both the UAV flights and the TLS data collection. More information about the site is presented in Section 4.3.

4.2. Methodology

The overall methodology is shown in Figure 7, in which the workflow for obtaining and processing the experimental data from the UAV is shown in parallel to the acquisition and processing of the ground truth data.
The procedure includes data acquisition, processing, and evaluation of the reconstructed point cloud (Figure 7). In regard to the UAV data acquisition, multiple flight paths were designed to help determine the influence of specific parameters on the final 3D model reconstruction. As shown in Figure 8, flight paths 1–5 were situated directly over the bridge deck. These were flown at vertical offsets of 10 m, 15 m, 20 m, 30 m, and 40 m. In each of these configurations, the camera was positioned directly above the bridge deck, and the oblique angle (the angle between the camera centre line to the bridge deck’s normal direction) was 0°. These flights considered the impact of elevation.
To determine the effect of the oblique angle, two flight paths were undertaken (flight path 9, Figure 8). These were conducted along each side of the bridge, with one oriented at 45° and one at 30° from the bridge deck. In both cases, the offset distance was approximately 15 m from the deck centre, thus capturing different vantage points. For flight paths 1–9, the image overlapping rate was above 80%. In addition to those images, path 10 was flown along the same route as path 1, with an overlapping rate higher than 90%. Table 2, Table 3, Table 4 and Table 5 demonstrate how each of these flight paths was used singly and in combination to create 15 distinct datasets. Dataset Groups A and B used images from single flight paths 1–9 to analyse the effect of altitude and angles. Dataset Group C used multiple flight paths to evaluate the effect of different combination strategies. Dataset Group D used images from flight path 10. Dataset D-I used all images as the input, while D-II and D-III were generated using only every second (D-II) or third (D-III) image in the acquisition sequence. Thus, three different datasets in Group D were created to check the effect of the overlapping rate with respect to image acquisition speed.
After the image acquisition, a standard SfM 3D reconstruction process and noise reduction process was applied using methods previously introduced by the authors (Chen et al., 2017; Chen et al., 2018 [60]). Then, to ensure that the same section of the bridge was compared, each 3D point cloud was aligned with its accompanying TLS dataset through the ICP algorithm (Besl and McKay, 1992). After data alignment, the bridge deck was extracted in each dataset to evaluate the quality and accuracy of each model on a local (per-point) basis. For this, five metrics were employed in the form of point density (Equation (1)), point uniformity (Equation (3)), completeness of the reconstruction (Equation (4)), geometric accuracy (Equation (5)), and data yield (Equation (6)).
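The authors performed this alignment in software (the ICP algorithm of Besl and McKay, via CloudCompare); as a sketch only, a minimal point-to-point ICP can be written as below. NumPy and SciPy are assumed, correspondences come from nearest-neighbour queries, and each update is the closed-form SVD (Kabsch) rigid transform. Real pipelines add outlier rejection and convergence checks that this sketch omits.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_align(source, target, iterations=50):
    """Minimal point-to-point ICP: rigidly align `source` to `target`.

    Each iteration matches every source point to its nearest target point,
    then applies the closed-form (Kabsch/SVD) rotation and translation
    that best fits those correspondences.
    """
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)            # nearest-neighbour correspondences
        tgt = target[idx]
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        h = (src - mu_s).T @ (tgt - mu_t)   # 3x3 cross-covariance
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        src = (r @ (src - mu_s).T).T + mu_t
    return src
```

Because ICP only refines an initial pose, it works here thanks to the GPS tagging that places each SfM cloud roughly in the TLS frame before refinement.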

4.3. Experimental Set Up

To investigate flight path optimization, an experiment was conducted using the Blessington bridge in County Wicklow, Ireland. The bridge is constructed of reinforced concrete, is approximately 130 m long and 8 m wide, and is typically situated 10 m above the water level (Figure 9).
A DJI Phantom 4 quadrotor was used for the experiment. The UAV was equipped with a 4K camera (3000 × 4000 pixels) and a 3-axis gimbal, as shown in Figure 10 and Figure 11. The total cost for the system was about EUR 1500. The take-off, image capture, and landing operations were manually controlled by a remote pilot through a first-person-view camera, with a mandated second operator to help ensure obstacle avoidance.

4.4. Data Processing

The 3D reconstruction process was performed in the commercial software PhotoScan (Agisoft, 2017) with GPS tagging. In the software, both the image alignment accuracy and the dense point reconstruction quality were set to high. The reconstructions were processed on a Dell XPS 15 laptop (i7 CPU, 16 GB RAM); the results are reported in Section 6. Point cloud registration, manual bridge deck extraction, and density calculations were achieved through the open-source software CloudCompare 2.11.3 (CloudCompare, 2017) [61], and Equations (3)–(7) were implemented in MATLAB.

4.5. TLS Data Collection

The TLS data to be used for benchmarking were collected with a Leica ScanStation P20 terrestrial laser scanner (Figure 12 and Figure 13). The bridge deck was captured from a total of 10 scan stations (see Figure 14) along the side path of the bridge. The resolution was set as 6.1 mm at 10 m, resulting in a sampling step of 5 mm. That data collection took approximately 3 h by one surveyor, including logistics and scanner set up. The scanning itself only required about 7 min per scan station, including data and target capture. Scan co-registration was performed using Leica’s proprietary software Cyclone (V9.1). The final dataset contained approximately 270 million points. The local geometric accuracy was measured using TLS as the ground truth. TLS data have high resolution and accuracy at close distances via a single scan; multiple long-distance surveys, as would be required for global accuracy, would have cumulative global errors introduced by the registration process.

5. Results

5.1. Collected Data

The image acquisition process was conducted in the early morning to minimize vehicular-based occlusions. During the UAV imagery acquisition process, 526 images were captured across the 10 flight paths. Of the 55 min required for imagery data collection, 14 min were for site checks, take-offs, reversals, and landings (see Table 6 for more details). The highest ground resolution (GR) achieved was 3.71 mm/pixel. The individual flights ranged from 2 to 10 min, yielding as few as 21 and as many as 143 images at data capture rates of 8.7 to 15.7 images per minute, but at a constant overlapping rate. More details are shown in Table 6.

5.2. Error Sources

A key aim of this paper is to provide a better understanding of how different UAV flight paths impact the quality of imagery-based point clouds for the inspection of bridge decks and similar infrastructure. To that end, flight paths were designed with pre-specified altitudes and offset distances from the bridge. However, the equipment’s on-board GPS system has an advertised hover accuracy of ±0.5 m in the vertical direction and ±1.5 m in the horizontal direction (DJI, 2019). Furthermore, the field conditions included wind effects. By checking the camera pose estimation results, the presence of drift was verified. For example, flight path 1 and flight path 10 were intended to have identical altitudes of 10 m. In reality, the average capture distance for path 1 was 9.5 m, while that for path 10 was 10.5 m. While such differences affected the ground resolution of the captured images, the general trends being reported herein were not impacted.
Characteristics of the SfM point clouds derived from those images are shown in Table 7. Generally, the total processing time scaled with the quantity of input images (Figure 15). However, dataset C-III required less processing time than the less-populated dataset D-I. A possible reason is that the multiple flight paths were parallel to each other. Thus, overlapping between images occurred in both the horizontal and vertical directions, which appears to have decreased the feature matching time, an effect also noted by Byrne et al. (2017) in a different experimental arrangement.

5.3. Density and Uniformity Comparison

As expected, the TLS dataset had a point density with a radial distribution, with the scanner at its centre producing higher-density point areas closer to the scanner. Lower-density strips are an artefact of cars or pedestrians passing in front of the scanner. Within allowable time constraints, these were minimized by re-performing the scans. In contrast, the SfM point cloud exhibits a largely uniform point distribution across the study area interspersed with waves of slightly lower density strips, as shown in Figure 16. This comparative homogeneity of the data offers a constant data resolution across the entire structure and reduces post-processing difficulties, as previously mentioned. To further understand how the flight path setting interacted with general point density and uniformity, the volume-based density calculation method introduced in Section 3.1 was applied to all 15 datasets (Table 7). The results are shown in Table 8.
As expected, the results shown in Table 8 demonstrate a significant correlation between flight altitude and data density, with lower flights also generating more uniform datasets (approximately a 1% improvement in RSD per metre). The linear overlapping rate also affected the density: in the D Test series, as the overlap rate increased from 66% to 90%, the data density increased by about 10%, while in the A Test series the density increased by more than 227% when the altitude decreased from 40 m to 10 m.
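To make the density and uniformity metrics concrete, the following is a minimal sketch of a volume-based local density calculation and its relative standard deviation (RSD), in the spirit of the method referenced from Section 3.1. The neighbourhood radius, the brute-force distance computation, and the synthetic input are illustrative assumptions, not the paper's exact implementation (which is not reproduced in this section).

```python
import numpy as np

def density_and_rsd(points, radius=0.1):
    """Return (mean local volume density in pts/unit^3, RSD in %).

    Local density is estimated per point as the neighbour count inside a
    sphere of the given radius, divided by the sphere volume. Brute-force
    pairwise distances are fine for small clouds; a KD-tree would be used
    for realistic SfM/TLS datasets.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (d <= radius).sum(axis=1)          # includes the point itself
    volume = (4.0 / 3.0) * np.pi * radius ** 3
    densities = counts / volume
    # RSD: std/mean of local densities; lower RSD = more uniform cloud
    return densities.mean(), 100.0 * densities.std() / densities.mean()
```

A perfectly gridded cloud still shows a non-zero RSD because boundary points have fewer neighbours, which is one reason edge regions of a survey read as less uniform.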
As shown in Table 9, when comparing B Test series outputs to A Test series outputs, datasets obtained with narrower oblique angles at the same altitude led to denser point clouds than those collected with wider ones. Also shown in Table 9, datasets with similar ground resolutions (B-I, B-III, and A-II) exhibited similar average densities.
Importantly, the C Test series showed that adding more flight paths from various angles decreased, rather than increased, the final point cloud density. Based on work by Byrne et al., the extra images may provide richer geometric information, allowing better detection of invalid points or noise and their subsequent removal as part of the reconstruction process. This concept of quality over quantity is further explored in Section 6.1.
To better understand the RSD changes in the SfM point clouds, a density map was generated (Figure 17) for the A Test series. At each flight altitude, a different data density pattern appears. Close-ups (10× and 20×) illustrate that those patterns segment the point cloud into numerous irregular grids, with points missing along the boundary of each grid (Figure 17). The grid size and the gap width are highly related to flight altitude (Figure 17). A probable reason is that, with increasing altitude, the ground resolution of each pixel increases correspondingly. As the pixel is the smallest unit for feature detection, ground resolution directly affects the feature matching process (Verhoeven et al., 2015; Apollonio et al., 2014) [62,63]. After feature matching, the dense reconstruction process uses MVS algorithms to generate denser patches around matched seed features (Shao et al., 2016) [64]. Detailed inspection of the data shows that around each patch is a gap where no data exist, which reduces the overall average density. The size of these gaps increases with altitude (Figure 17).

5.4. Completeness Comparison

The aforementioned gaps are treated as incomplete areas and quantified by Equation (4). The average point-to-point distance βave of each dataset was selected as the threshold for mesh generation; areas where this threshold was exceeded were considered incomplete. Table 10 demonstrates that, for the particular equipment and the specific bridge in this field study, the single-path datasets ranged in completeness from just over 66% to nearly 77%, with lower altitudes generally producing better results. The fixed angle and varying altitudes in Group A produced a U-shaped distribution in completeness: with increasing altitude, the completeness level dropped quickly at first, then above 20 m it increased again, but more slowly. The highest completeness was achieved by the lowest flight path centred over the pavement’s centre. When the 80% overlapping rate in A-I was increased to 90% at the same altitude in D-I, the completeness rate nudged slightly higher, but certainly nothing close to proportional to the additional quantity of data collected and processed.
Dataset Group B showed that, depending upon the oblique angle, much greater completeness can be achieved with significantly less data: in this case, the completeness was nearly 10% higher even in the absence of nearly a third of the data. Interestingly, when flight patterns were mixed, other complexities arose, as shown in Group C, where the completeness levels were lower than in the other groups, as measured herein. The multiple flight paths caused a mixing of the grid layouts, thereby resulting in a range of gap sizes (Figure 17). When processed according to the procedure described in Section 3.2, a small threshold was selected and then used to calculate the completeness rate. Consequently, the non-uniform gaps introduce an artefact into the dataset that influences the calculation; this must be considered a limitation of this newly proposed metric.
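The gap-driven completeness idea can be sketched with a simplified grid-occupancy surrogate: rasterize the survey footprint into cells sized near the threshold and count the fraction containing at least one point. This is an illustrative stand-in, not the paper's mesh-based Equation (4); the cell size and bounds are assumptions.

```python
import numpy as np

def completeness_rate(points_xy, cell=0.05, bounds=None):
    """Fraction of grid cells over the survey area containing >= 1 point.

    points_xy : (N, 2) planimetric coordinates of the point cloud.
    cell      : grid cell size, playing the role of the distance threshold.
    bounds    : (xmin, ymin, xmax, ymax) of the survey area; defaults to
                the data's bounding box.
    """
    if bounds is None:
        (xmin, ymin), (xmax, ymax) = points_xy.min(0), points_xy.max(0)
    else:
        xmin, ymin, xmax, ymax = bounds
    nx = max(1, int(np.ceil((xmax - xmin) / cell)))
    ny = max(1, int(np.ceil((ymax - ymin) / cell)))
    ix = np.clip(((points_xy[:, 0] - xmin) / cell).astype(int), 0, nx - 1)
    iy = np.clip(((points_xy[:, 1] - ymin) / cell).astype(int), 0, ny - 1)
    occupied = len(set(zip(ix.tolist(), iy.tolist())))
    return occupied / (nx * ny)
```

Note how the threshold choice dominates the result: a small cell makes even semi-regular SfM gaps register as incompleteness, which mirrors the limitation discussed above for mixed grid layouts.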

5.5. Geometry Accuracy Comparison

A geometric accuracy assessment was conducted by comparing a cross-section of each SfM point cloud to the equivalent portion of the TLS point cloud. In this test, the cross-section was divided into 200 intervals. The mean altitude of each interval was calculated and compared through the method introduced in Section 4.2.
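The interval-wise comparison above can be sketched as follows: bin each cross-section into 200 intervals along its length, average the elevation per interval, and compare interval means between the SfM and TLS sections. The use of the mean absolute difference as the summary statistic is an assumption for illustration; Section 4.2's exact comparison statistic is not reproduced here.

```python
import numpy as np

def cross_section_difference(sfm_xz, tls_xz, n_intervals=200):
    """Mean absolute elevation difference between two cross-sections.

    Each input is an (N, 2) array of (distance-along-section, elevation)
    pairs. Intervals lacking data in either cloud are skipped.
    """
    lo = min(sfm_xz[:, 0].min(), tls_xz[:, 0].min())
    hi = max(sfm_xz[:, 0].max(), tls_xz[:, 0].max())
    edges = np.linspace(lo, hi, n_intervals + 1)

    def interval_means(xz):
        idx = np.clip(np.digitize(xz[:, 0], edges) - 1, 0, n_intervals - 1)
        means = np.full(n_intervals, np.nan)
        for i in range(n_intervals):
            sel = xz[idx == i, 1]
            if sel.size:
                means[i] = sel.mean()          # mean altitude of interval i
        return means

    a, b = interval_means(sfm_xz), interval_means(tls_xz)
    valid = ~np.isnan(a) & ~np.isnan(b)        # compare only shared intervals
    return np.abs(a[valid] - b[valid]).mean()
```

Averaging within intervals suppresses per-point noise, so this comparison measures systematic geometric deviation rather than individual point error.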
As visible in Figure 18 and Table 11, the geometric accuracy was affected by all parameters. As expected, the Group A test series showed a linear improvement with lower altitudes. Similarly, the overlapping rate and oblique angle had direct effects on the accuracy. Using multiple view angles increased the geometric accuracy compared to processing each angle separately. In summary, as expected, the best overall results were achieved at lower altitudes, smaller angles, and higher overlapping rates.

5.6. Data Yield

The DCR results are shown in Table 12. According to the Group A tests, lower altitudes have higher DCRs, as would be expected, since peripheral information such as the river or its banks is not captured. Test Series B shows that an oblique angle also decreases the DCR, because the oblique angle captures more of the bridge’s side view. In Test Series C, even though the total numbers of points (PN) were similar across the data series, the average point density (AD) and the final data yield (DCR) differed significantly; capturing the bridge’s side data negatively impacted these two metrics. Test Series D shows that the higher overlapping rate improved the PN significantly. However, the AD did not change much, and the DCR decreased as the overlapping rate increased, meaning that a higher overlapping rate decreased the efficiency of point utilization.
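One plausible reading of the yield idea, sketched below, is the share of reconstructed points that actually land on the target deck area, with peripheral points (river, banks, side views) counting against the yield. The axis-aligned bounding box is a simplifying assumption; the paper's DCR definition may use the exact deck polygon.

```python
import numpy as np

def data_capture_ratio(points_xy, deck_bounds):
    """Share of points inside the target (deck) footprint.

    points_xy   : (N, 2) planimetric coordinates of the point cloud.
    deck_bounds : (xmin, ymin, xmax, ymax) approximating the deck area
                  (a hypothetical axis-aligned box for illustration).
    """
    xmin, ymin, xmax, ymax = deck_bounds
    inside = ((points_xy[:, 0] >= xmin) & (points_xy[:, 0] <= xmax) &
              (points_xy[:, 1] >= ymin) & (points_xy[:, 1] <= ymax))
    return inside.mean()   # fraction of "useful" points
```

Under this reading, a high-overlap flight can raise the total point count while the yield falls, exactly the PN-up/DCR-down pattern observed in Test Series D.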

6. Discussion

6.1. Flight Path Optimization

To provide guidance for flight path planning, each category of data analysed in Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 was normalized by the highest value achieved across the 15 datasets and compiled in Table 13. Those datasets with the best performance in at least one metric were further analysed in a seven-pronged radar map to show a more holistic performance across the various metrics (Figure 19). Unlike previous research using the total point number or the average density as the sole standard for evaluating reconstruction performance, herein seven different metrics are proposed and compared. In both Table 13 and Figure 19, highly distinctive patterns can be observed.
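The normalization step can be sketched as below: each metric column is scaled by its maximum across the datasets so every radar axis runs from 0 to 1. The optional inversion for cost-type metrics (e.g., processing time, where lower is better) is an assumption added for illustration; the paper does not spell out its handling of such metrics.

```python
import numpy as np

def normalize_metrics(table, invert=()):
    """Scale each metric to [0, 1] with 1.0 marking the best dataset.

    table  : dict mapping metric name -> list of values (one per dataset).
    invert : metric names where lower is better (hypothetical handling);
             these are flipped so the cheapest dataset scores 1.0.
    """
    out = {}
    for name, values in table.items():
        v = np.asarray(values, dtype=float)
        if name in invert:
            out[name] = v.min() / v      # lower is better -> best = 1.0
        else:
            out[name] = v / v.max()      # higher is better -> best = 1.0
    return out
```

Each dataset's normalized row then plots directly as one polygon on the radar map, making trade-offs (e.g., density vs. time efficiency) visible at a glance.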
As such, proper flight path selection must be informed by the survey’s purpose. For example, Dataset A-I, which had the closest survey distance (9.5 m) and no offset oblique angle, produced the highest average density, yield rate, and uniformity, demonstrating that this flight path can generate a well-distributed point cloud. It also struck a good balance among completeness, geometric accuracy, and time efficiency. Dataset D-I illustrated that by adding more images to increase the overlapping rate, the completeness and accuracy levels could be raised. However, this improvement is costly: to improve the completeness by 1% compared to D-II, D-I tripled the time cost in image acquisition and post-processing. In some surveys, rapid assessment through shorter flight times and limited processing periods is important, such as after natural disasters. In those cases, if the bridge deck area is the focus of concern, then flying at a higher altitude directly over the bridge (e.g., the path A group) may be the most appropriate choice, at the cost of accuracy and density.
The evaluation concepts of accuracy and completeness, as well as point yield, introduced in this paper, provide a more holistic and, arguably, more rigorous approach to UAV-based imagery acquisition for bridge documentation. In fact, the experimental work demonstrates that maximizing point density may actually be counterproductive to obtaining cost-effective and comprehensive point clouds depending upon the position of the UAV with respect to the areas targeted for documentation.

6.2. UAV Photogrammetry vs. TLS

TLS is often proposed as an alternative solution for bridge documentation. For quality evaluation purposes, this section compares the best SfM point clouds achieved in the experiments with those achieved by TLS. As mentioned in other studies (Hallermann et al., 2015; Chen et al., 2018 [65]), the advantages of UAV imagery data collection include high efficiency and low costs. In this experiment, even with multiple flight paths (10 paths), the entire flight time was less than one hour, only a third of the TLS data collection time. In this instance, the post-processing times were almost the same for the UAV images’ SfM reconstruction and the TLS data’s co-registration (Table 14). However, as illustrated in Figure 19, the SfM post-processing time is highly dependent upon image quantity. If the datasets contain large amounts of sky and water, image matching becomes harder and more time consuming. However, if the imagery is collected via video, a limited number of frames can be automatically selected to restrict the image matching process during reconstruction, as explained by Byrne et al. (2017). In contrast, the TLS data co-registration time is largely linear, more predictable, and can be minimized by reducing the number of scan station locations. Additionally, the UAV-SfM system used in this study cost only 10% of the TLS system budget and generated a competitive result (Table 14). However, this figure does not include UAV training, permitting, or insurance costs.
This paper’s experimental results for documenting a bridge deck demonstrated that a well-designed flight path can achieve two-thirds of the average density of the TLS result, with a geometric difference as small as 3 mm. While this figure is important, what is arguably of greater concern for further post-processing is the uniformity of the point cloud. The UAV-SfM point cloud is much more uniform than the TLS result (RSD of 5.56–25.6% vs. 73.12%), with almost no low-density pockets. Moreover, with the designed metric and strict threshold, the completeness level of TLS is only 7.49%; that is, only 7.49% of the entire survey area was covered by well-distributed, high-density points. In contrast, the UAV-SfM method easily achieved more than 50%. The higher completeness and better uniformity of the SfM point cloud have many benefits for inspections, such as (1) fewer unknowns and (2) a greater ability to obtain consistent post-processed objects, as the input is more uniform. However, the UAV method is highly vulnerable to the weather. Wind especially affects flight path quality by causing the camera to shake and the UAV to drift, both of which impact the final quality. Sunlight was also shown to have some impacts. Thus, when designing a proper flight path for a specified quality, those issues should be considered ahead of time.

7. Conclusions

To optimize UAV-SfM bridge deck inspection or similar applications, flight path design and data capture considerations in terms of altitude, angle of capture, overlapping rate, and combined flight paths were explored. To evaluate the various outcomes, this paper proposed a suite of seven evaluation metrics to check the variance of point cloud quality and overall efficiency in the form of the total number of points, average density, uniformity, yield rate, completeness, geometry accuracy, and time efficiency. In the presented case study of the Blessington Bridge, bridge deck geometry was acquired from 10 different flight paths, from which 15 groups of point cloud datasets were generated through an SfM method. Evaluation of these 15 point clouds established that both altitude and oblique angle significantly affected the point density and uniformity.
Several major conclusions can be drawn from this study. First, irrespective of the individual and combined parameters, the SfM process resulted in point groupings in semi-irregular grids with clearly identifiable gaps between point groups. The size of both the grids and the gaps increased at higher flight altitudes. Multiple flight paths resulted in a combination of the individual grid patterns from the specific flight paths, which decreased the general completeness rate but improved the overall geometric accuracy. The best completeness (77%) was achieved by a single flight path with the lowest altitude (9.5 m) and an 80% overlapping rate. Next, while the overlapping rate strongly affects the total number of points, it only weakly impacts the average density of the portion of the point cloud representing the deck surface, thereby negatively impacting the time efficiency without strongly improving the data yield rate. However, in this study a minimum overlapping rate of 66% was found to be needed to successfully achieve the SfM reconstruction process.
Additionally, this research suggests that there is no unique solution for UAV bridge deck surveys, because of the complex relationship between the flight path settings and the specific survey objectives (e.g., accuracy, completeness, and economy), which strongly influence the optimal data capture strategy (Figure 19). For example, if high accuracy is the goal, using a lower altitude, smaller angle, and higher overlapping rate can achieve better results than other flight path combinations. Finally, in the case study presented herein, the UAV-SfM method demonstrated some critical advantages over TLS documentation, including time efficiency, general cost, and data uniformity, but at the expense of point density and some accuracy.

Author Contributions

S.C.: formulation or evolution of overarching research goals and aims, supervision. X.Z.: writing—original draft. D.F.L.: writing—review and editing, resources, formal analysis. L.T.-H.: data curation. E.M.: formal analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the European Union’s Horizon 2020 Research and Innovation programme Marie Skłodowska-Curie grant (grant number 642453; Recipient of fund: Siyuan Chen); the University College Dublin seed funding program (grant number SF1404; Recipient of fund: Siyuan Chen); research on road detection methods based on UAV image reconstruction technology (item number 20B266; Recipient of fund: Siyuan Chen); and research on monitoring technology and application of bank collapse based on 3D reconstruction (item number XSKJ2021000-13; Recipient of fund: Siyuan Chen).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and codes that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

We declare that none of the work contained in this manuscript is published in any language or currently under consideration at any other journal, and there are no conflicts of interest to declare.

References

  1. AASHTO. Manual for Maintenance Inspection of Bridges; AASHTO: Washington, DC, USA, 1970. [Google Scholar]
  2. Railway Accident Investigation Unit. Malahide Viaduct Collapse on the Dublin to Belfast Line, on the 21st August 2009. Irish Railways. 2010. Available online: https://www.railwaysarchive.co.uk/docsummary.php?docID=3506 (accessed on 1 August 2023).
  3. Xie, R.; Yao, J.; Liu, K.; Lu, X.; Liu, Y.; Xia, M.; Zeng, Q. Automation in Construction Automatic multi-image stitching for concrete bridge inspection by combining point and line features. Autom. Constr. 2018, 90, 265–280. [Google Scholar] [CrossRef]
  4. Truong-Hong, L.; Laefer, D.F. Documentation of Bridges by Terrestrial Laser Scanner. In Proceedings of the IABSE Symposium Report; International Association for Bridge and Structural Engineering: Zürich, Switzerland, 2015; Volume 105, pp. 1–8. [Google Scholar]
  5. Gyetvai, N.; Truong-Hong, L.; Laefer, D.F. Laser scan-based structural assessment of wrought iron bridges: Guinness Bridge, Ireland. Proc. Inst. Civ. Eng. Eng. Hist. Herit. 2018, 171, 76–89. [Google Scholar] [CrossRef] [Green Version]
  6. Qu, Y.; Huang, J.; Zhang, X. Rapid 3D Reconstruction for Image Sequence Acquired. Sensors 2018, 18, 225. [Google Scholar] [CrossRef] [Green Version]
  7. Atole, R.R.; Bello, L.C.S.; Lirag, J.R.S. Eyes in the Sky: A Review of Civilian Unmanned Aerial Vehicles (UAVs). Int. J. Comput. Appl. 2017, 173, 36–41. [Google Scholar]
  8. Chen, S.; Laefer, D.F.; Mangina, E.; Zolanvari, S.M.I.; Byrne, J. UAV Bridge Inspection through Evaluated 3D Reconstructions. J. Bridge Eng. 2019, 24, 05019001. [Google Scholar] [CrossRef] [Green Version]
  9. Byrne, J.; Laefer, D. Variables effecting photomosaic reconstruction and ortho-rectification from aerial survey datasets. arXiv 2016, arXiv:1611.03318. [Google Scholar]
  10. Zhao, S.; Kang, F.; Li, J.; Ma, C. Structural health monitoring and inspection of dams based on UAV photogrammetry with image 3D reconstruction. Autom. Constr. 2021, 130, 103832. [Google Scholar] [CrossRef]
  11. Hoegner, L.; Tuttas, S.; Stilla, U. 3D building reconstruction and construction site monitoring from RGB and TIR image sets. In Proceedings of the 2016 12th IEEE International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 27–28 October 2016; pp. 305–308. [Google Scholar]
  12. Chen, Z.; Dou, A. Road damage extraction from post-earthquake uav images assisted by vector data. The International Archives of the Photogrammetry. Remote Sens. Spat. Inf. Sci. 2018, 42, 211–216. [Google Scholar]
  13. Chen, S.; Laefer, D.F.; Mangina, E. State of Technology Review of Civilian UAVs. Recent Pat. Eng. 2016, 10, 160–174. [Google Scholar] [CrossRef] [Green Version]
  14. Liu, P.; Chen, A.Y.; Huang, Y.-N.; Han, J.-Y.; Lai, J.-S.; Kang, S.-C.; Wu, T.-H.; Wen, M.-C.; Tsai, M.-H. A review of rotorcraft Unmanned Aerial Vehicle (UAV) developments and applications in civil engineering. Smart Struct. Syst. 2014, 13, 1065–1094. [Google Scholar] [CrossRef]
  15. Li, R.; Liu, J.; Zhang, L.; Hang, Y. LIDAR/MEMS IMU integrated navigation (SLAM) method for a small UAV in indoor environments. In Proceedings of the 2014 DGON Inertial Sensors and Systems (ISS), Karlsruhe, Germany, 16–17 September 2014; pp. 1–15. [Google Scholar]
  16. Honegger, D.; Meier, L.; Tanskanen, P.; Pollefeys, M. An open source and open hardware embedded metric optical flow CMOS camera for indoor and outdoor applications. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1736–1741. [Google Scholar] [CrossRef]
  17. Papa, U.; Del Core, G. Design of sonar sensor model for safe landing of an UAV. In Proceedings of the 2015 IEEE Metrology for Aerospace (MetroAeroSpace), Benevento, Italy, 4–5 June 2015; pp. 346–350. [Google Scholar]
  18. Chisholm, R.A.; Cui, J.; Lum, S.K.Y.; Chen, B.M. UAV LiDAR for below-canopy forest surveys. J. Unmanned Veh. Syst. 2013, 1, 61–68. [Google Scholar] [CrossRef] [Green Version]
  19. Ferrick, A.; Fish, J.; Venator, E.; Lee, G.S. UAV obstacle avoidance using image processing techniques. In Proceedings of the 2012 IEEE International Conference on Technologies for Practical Robot Applications (TePRA), Woburn, MA, USA, 23–24 April 2012; pp. 73–78. [Google Scholar] [CrossRef]
  20. Ruggles, S.; Clark, J.; Franke, K.W.; Wolfe, D.; Reimschiissel, B.; Martin, R.A.; Okeson, T.J.; Hedengren, J.D. Comparison of SfM computer vision point clouds of a landslide derived from multiple small UAV platforms and sensors to a TLS-based model. J. Unmanned Veh. Syst. 2016, 4, 246–265. [Google Scholar] [CrossRef]
  21. Wallace, L.; Lucieer, A.; Malenovsky, Z.; Turner, D.; Vopenka, P. Assessment of Forest Structure Using Two UAV Techniques: A Comparison of Airborne Laser Scanning and Structure from Motion (SfM) Point Clouds. Forests 2016, 7, 62. [Google Scholar] [CrossRef] [Green Version]
  22. Bolourian, N. Point Cloud-based Deep Learning and UAV Path Planning for Surface Defect Detection of Concrete Bridges. Ph.D. Thesis, Concordia University, River Forest, IL, USA, 2022. [Google Scholar]
  23. Hallermann, N.; Morgenthal, G. Visual inspection strategies for large bridges using Unmanned Aerial Vehicles (UAV). ISBM 2014, 2014, 661–667. [Google Scholar]
  24. Ellenberg, A.; Kontsos, A.; Moon, F.; Bartoli, I. Bridge related damage quantification using unmanned aerial vehicle imagery. Struct. Control Health Monit. 2016, 23, 1168–1179. [Google Scholar] [CrossRef]
  25. Kim, H.; Lee, J.; Ahn, E.; Cho, S.; Shin, M.; Sim, S. Concrete Crack Identification using UAV incorporating Hybrid Image Processing. Sensors 2017, 17, 2052. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Reagan, D.; Sabato, A. Feasibility of using digital image correlation for unmanned aerial vehicle structural health monitoring of bridges. Struct. Health Monit. 2017, 17, 1–32. [Google Scholar] [CrossRef]
  27. Escobar-Wolf, R.; Oommen, T.; Brooks, C.N.; Dobson, R.J.; Ahlborn, T.M. Unmanned Aerial Vehicle (UAV)-Based Assessment of Concrete Bridge Deck Delamination Using Thermal and Visible Camera Sensors: A Preliminary Analysis. Res. Nondestruct. Eval. 2017, 29, 1–16. [Google Scholar] [CrossRef]
  28. Omar, T.; Nehdi, M.L. Automation in Construction Remote sensing of concrete bridge decks using unmanned aerial vehicle infrared thermography. Autom. Constr. 2017, 83, 1–12. [Google Scholar] [CrossRef]
  29. Bartczak, E.T.; Bassier, M.; Vergauwen, M. Case Study for Uas-Assisted Bridge Inspections. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 33–39. [Google Scholar] [CrossRef]
  30. Hallermann, N.; Morgenthal, G.; Rodehorst, V. Unmanned Aerial Systems (UAS)—Case Studies of Vision Based Monitoring of Ageing Structures. In Proceedings of the International Symposium Non-Destructive Testing in Civil Engineering (NDT-CE), Berlin, Germany, 15–17 September 2015; pp. 15–17. [Google Scholar]
  31. Hallermann, N.; Universit, B.; Morgenthal, G.; Universit, B. Unmanned Aerial Systems (UAS)—Survey and monitoring based on high-quality airborne photos high-quality airborne photos. In Proceedings of the IABSE Conference: Structural Engineering: Providing Solutions to Global Challenges, Geneva, Switzerland, 23–25 September 2015; pp. 1279–1286. [Google Scholar]
  32. Khaloo, A.; Lattanzi, D.; Cunningham, K.; Dell’andrea, R.; Riley, M. Unmanned aerial vehicle inspection of the Placer River Trail Bridge through image-based 3D modelling. Struct. Infrastruct. Eng. 2017, 14, 124–136. [Google Scholar] [CrossRef]
  33. Calì, M.; Ambu, R. Advanced 3D Photogrammetric Surface Reconstruction of Extensive Objects by UAV Camera Image Acquisition. Sensors 2018, 18, 2815. [Google Scholar] [CrossRef] [Green Version]
  34. Wang, F.; Zou, Y.; Castillo, E.D.R.; Lim, J.B.P. Optimal UAV Image Overlap for Photogrammetric 3D Reconstruction of Bridges. In Proceedings of the CIB World Building Congress, Melbourne, Australia, 27–30 June 2022; Volume 1101. [Google Scholar] [CrossRef]
  35. Han, Y.; Feng, D.; Wu, W.; Yu, X.; Wu, G.; Liu, J. Geometric shape measurement and its application in bridge construction based on UAV and terrestrial laser scanner. Autom. Constr. 2023, 151, 104880. [Google Scholar] [CrossRef]
  36. Chen, S.-E.; Rice, C.; Boyle, C. Hauser Small-Format Aerial Photography for Highway-Bridge Monitoring. J. Perform. Constr. Facil. 2011, 25, 105–112. [Google Scholar] [CrossRef]
  37. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar] [CrossRef]
  38. Herbert, B.; Andreas, E.; Tinne, T.; Luc, V.G. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  39. Lourakis, M.I.A.; Argyros, A.A. SBA: A Software Package for Generic Sparse Bundle Adjustment. ACM Trans. Math. Softw. 2009, 36, 1–30. [Google Scholar] [CrossRef]
  40. Yasutaka, F.; Hernández, C. Multi-View Stereo: A Tutorial. Foundations and Trends® in Computer Graphics and Vision 9(1–2). Available online: https://carlos-hernandez.org/papers/fnt_mvs_2015.pdf (accessed on 1 August 2023).
  41. Smith, M.W.; Vericat, D. From experimental plots to experimental landscapes: Topography, erosion and deposition in sub-humid badlands from Structure-from-Motion photogrammetry. Earth Surf. Process. Landf. 2015, 40, 1656–1671. [Google Scholar] [CrossRef] [Green Version]
  42. Byrne, J.; O’Keeffe, E.; Lennon, D.; Laefer, D.F. 3D Reconstructions Using Unstabilized Video Footage from an Unmanned Aerial Vehicle. J. Imaging 2017, 3, 15. [Google Scholar] [CrossRef] [Green Version]
  43. Chen, S.; Laefer, D.F.; Byrne, J.; Natanzi, A.S. The effect of angles and distance on image-based, three-dimensional reconstructions. In European Safety and Reliability ESREL2017; Taylor & Francis Group: Portorož, Slovenia, 2017; pp. 2757–2761. [Google Scholar]
  44. James, M.R.; Robson, S.; Centre, L.E.; Engineering, G. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420. [Google Scholar] [CrossRef] [Green Version]
  45. Dandois, J.; Olano, M.; Ellis, E. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure. Remote Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef] [Green Version]
  46. Byrne, J.; Laefer, D.F.; O’Keeffe, E. Maximizing feature detection in aerial unmanned aerial vehicle datasets. J. Appl. Remote Sens. 2017, 11, 025015. [Google Scholar] [CrossRef]
  47. Slocum, R.K.; Parrish, C.E. Simulated Imagery Rendering Workflow for UAS-Based Photogrammetric 3D Reconstruction Accuracy Assessments. Remote Sens. 2017, 9, 396. [Google Scholar] [CrossRef] [Green Version]
  48. Moenning, C.; Dodgson, N.A. A new point cloud simplification algorithm. In Proceedings of the 3rd IASTED International Conference on Visualization, Imaging, and Image Processing, Benalmádena, Spain, 8–10 September 2003; p. 6. [Google Scholar]
  49. Huang, H.; Li, D.; Zhang, H.; Ascher, U.; Cohen-Or, D. Consolidation of unorganized point clouds for surface reconstruction. ACM Trans. Graph. 2009, 28, 176. [Google Scholar] [CrossRef]
  50. Holz, D.; Behnke, S. Registration of Non-Uniform Density 3D Point Clouds using Approximate Surface Reconstruction. In Proceedings of the Conference ISR ROBOTIK 2014, Berlin, Germany, 2–3 June 2014; pp. 475–481. [Google Scholar]
  51. Zolanvari, S.M.I.; Laefer, D.F. Slicing Method for curved façade and window extraction from point clouds. ISPRS J. Photogramm. Remote Sens. 2016, 119, 334–346. [Google Scholar] [CrossRef]
  52. Truong-Hong, L.; Laefer, D.F.; Hinks, T.; Carr, H. Combining an angle criterion with voxelization and the flying voxel method in reconstructing building models from LiDAR data. Comput.-Aided Civ. Infrastruct. Eng. 2013, 28, 112–129. [Google Scholar] [CrossRef] [Green Version]
  53. Laefer, D.F.; Fitzgerald, M.; Maloney, E.M.; Coyne, D.; Lennon, D.; Morrish, S. Lateral image degradation in terrestrial laser scanning Author(s). Struct. Eng. Int. 2009, 19, 184–189. [Google Scholar] [CrossRef]
  54. Quagliarini, E.; Clini, P.; Ripanti, M. Fast, low cost and safe methodology for the assessment of the state of conservation of historical buildings from 3D laser scanning: The case study of Santa Maria in Portonovo (Italy). J. Cult. Herit. 2017, 24, 175–183. [Google Scholar] [CrossRef]
  55. Fukunaga, K.; Hostetler, L.D. Optimization of k-Nearest-Neighbor Density Estimates. IEEE Trans. Inf. Theory 1973, 19, 320–326. [Google Scholar] [CrossRef]
  56. Lucieer, A.; de Jong, S.M.; Turner, D. Mapping landslide displacements using Structure from Motion (SfM) and image correlation of multi-temporal UAV photography. Prog. Phys. Geogr. 2014, 38, 97–116. [Google Scholar] [CrossRef]
  57. Mosbrucker, A.R.; Major, J.J.; Spicer, K.R.; Pitlick, J. Camera system considerations for geomorphic applications of SfM photogrammetry. Earth Surf. Process. Landf. 2017, 42, 969–986. [Google Scholar] [CrossRef] [Green Version]
  58. Palmer, L.M.; Franke, K.W.; Abraham Martin, R.; Sines, B.E.; Rollins, K.M.; Hedengren, J.D. Application and Accuracy of Structure from Motion Computer Vision Models with Full-Scale Geotechnical Field Tests; American Society of Civil Engineers: Reston, VA, USA, 2015; pp. 2432–2441. [Google Scholar] [CrossRef] [Green Version]
  59. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef] [Green Version]
  60. Chen, S.; Truong-Hong, L.; Laefer, D.F.; Mangina, E. Automated Bridge Deck Evaluation through UAV Derived Point Cloud. In Proceedings of the 2018 Civil Engineering Research in Ireland Conference, Dublin, Ireland, 30 August 2018; pp. 735–740. [Google Scholar]
  61. CloudCompare. CloudCompare Stereo, V2.9, Open Source Project. 2017. Available online: http://www.danielgm.net/cc/ (accessed on 15 January 2022).
  62. Verhoeven, G.; Karel, W.; Doneus, M.; Trinks, I.; Pfeifer, N.; Cloud, P. Mind your grey tones—Examining the influence of decolourization methods on interest point extraction and matching for architectural image-based modelling. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Avila, Spain, 25 February 2015; pp. 25–27. [Google Scholar] [CrossRef] [Green Version]
  63. Apollonio, F.I.; Ballabeni, A.; Gaiani, M.; Remondino, F. Evaluation of feature-based methods for automated network orientation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 47–54. [Google Scholar] [CrossRef] [Green Version]
  64. Shao, Z.; Yang, N.; Xiao, X.; Zhang, L.; Peng, Z. A Multi-View Dense Point Cloud Generation Algorithm Based on Low-Altitude Remote Sensing Images. Remote Sens. 2016, 8, 381. [Google Scholar] [CrossRef] [Green Version]
  65. Chen, S.; Truong-hong, L.; Keeffe, E.O.; Laefer, D.F.; Mangina, E. Outlier detection of point clouds generating from low cost UAVs for bridge inspection. In Proceedings of the Sixth International Symposium on Life-Cycle Civil Engineering, Ghent, Belgium, 28–31 October 2018; pp. 1969–1975. [Google Scholar]
Figure 1. Volume density.
Figure 2. Incomplete dataset in SfM roadway point cloud.
Figure 3. Mesh generation.
Figure 4. Workflow for completeness evaluation.
Figure 5. Cross-section extraction.
Figure 6. Cross-section comparison: (a) projected points; (b) fitted curves.
Figure 7. Flowchart of the data processing procedure.
Figure 8. Flight path design.
Figure 9. Blessington bridge and surrounding environs: (a) satellite image; (b) aerial image.
Figure 10. UAV with 4k camera.
Figure 11. Image acquisition (UAV is shown above front right support).
Figure 12. P20 laser scanner.
Figure 13. TLS data acquisition.
Figure 14. Distribution of 10 TLS stations.
Figure 15. Image number and post-processing time.
Figure 16. Data density maps of TLS and SfM point cloud.
Figure 17. Density maps for the test series: (a) bridge deck data coverage; (b) bridge deck data coverage.
Figure 18. Correlation matrix of accuracy assessment: (a) correlation matrix of Group A; (b) correlation matrix of Group B; (c) correlation matrix of Group C; (d) correlation matrix of Group D.
Figure 19. Radar map for performance comparison.
Table 1. Sampling of UAV inspection-related research.

| UAV Type | Sensor | Data Output | Purpose | Researcher |
|---|---|---|---|---|
| Multirotor | Laser scanner | 2D point cloud | Forest survey: tree diameter measurements | (Chisholm et al., 2013) [18] |
| Multirotor | Laser scanner | 3D point cloud | Building survey: house dimension measurement | (Roca et al., 2015) [20] |
| Multirotor | Laser scanner and camera | 3D point cloud | Forest survey: quality comparison between laser and SfM for canopy measurement | (Wallace et al., 2016) [21] |
| Multirotor | Laser scanner | 3D point cloud | Point-cloud-based deep learning and UAV path planning for surface defect detection of concrete bridges | (Bolourian et al., 2022) [22] |
| Multirotor | Camera | 2D images | Bridge survey: 3D model generation for bridge | (Hallermann and Morgenthal, 2014) [23] |
| Multirotor | Camera | 2D images | Bridge inspection: crack detection from distortion images | (Ellenberg et al., 2016) [24] |
| Multirotor | Camera | 2D images | Building inspection: crack detection from hybrid image | (Kim et al., 2017) [25] |
| Multirotor | Camera | 2D images | Bridge inspection: using Digital Image Correlation (DIC) technique for crack measurement | (Reagan and Sabato, 2017) [26] |
| Multirotor | Camera (thermal) | 2D images | Bridge inspection: delamination mapping for concrete bridge deck | (Escobar-Wolf et al., 2017) [27] |
| Multirotor | Camera (thermal) | 2D images | Bridge inspection: delamination measurement for concrete bridge deck | (Omar and Nehdi, 2017) [28] |
| Multirotor | Camera (thermal) | 2D images | Bridge inspection: study for UAS-assisted bridge inspections | (Bartczak, 2023) [29] |
| Multirotor | Camera | 3D point cloud | Building survey: 3D model generation for aging structures | (Hallermann et al., 2015) [30] |
| Multirotor | Camera | 3D point cloud | Bridge survey: 3D model generation for historical bridge | (Hallermann et al., 2015) [31] |
| Multirotor | Camera | 3D point cloud | Bridge survey: 3D model generation for timber truss bridge | (Khaloo et al., 2017) [32] |
| Multirotor | Camera | 3D point cloud | Building survey: 3D model generation from image in sequence | (Qu et al., 2018) [6] |
| Multirotor | Camera | 3D point cloud | Bridge survey: 3D model accuracy evaluation | (Calì and Ambu, 2018) [33] |
| Multirotor and fixed-wing | Camera | 3D point cloud | Road inspection: accuracy comparison of TLS and SfM | (Ruggles et al., 2016) [20] |
| Multirotor | Camera | 3D point cloud | Optimal UAV image overlap for photogrammetric 3D reconstruction of bridges | (Wang et al., 2022) [34] |
| Multirotor | Camera | 3D point cloud | Geometric shape measurement and its application in bridge construction based on UAV and terrestrial laser scanner | (Han et al., 2023) [35] |
Table 2. Altitude comparison.

| Dataset Name | Input Data Source | Designed Altitude (m) |
|---|---|---|
| A-I | Path 1 | 10 |
| A-II | Path 2 | 15 |
| A-III | Path 3 | 20 |
| A-IV | Path 4 | 30 |
| A-V | Path 5 | 40 |

All flights were flown directly above the bridge deck's centre line.
Table 3. Oblique angle comparison.

| Dataset Name | Input Data Source | Designed Angle (degree) | Designed Altitude (m) | Designed Offset (m) | Distance to Centre Line (m) |
|---|---|---|---|---|---|
| B-I | Path 6 | −45 | 10 | 10 | 14.14 |
| B-II | Path 7 | −30 | 15 | 10 | 20 |
| B-III | Path 8 | 45 | 10 | 10 | 14.14 |
| B-IV | Path 9 | 30 | 15 | 10 | 20 |
Table 4. Path combination comparison.

| Dataset Name | Input Data Source | Path Combination |
|---|---|---|
| C-I | Path 2 + 7 | Top + one side |
| C-II | Path 7 + 9 | Two sides |
| C-III | Path 2 + 7 + 9 | Top + two sides |
Table 5. Overlapping rate comparison.

| Dataset Name | Input Data Source | Overlapping Rate |
|---|---|---|
| D-I | Path 10 | 90% |
| D-II | 50% of Path 10 data | 83% |
| D-III | 33% of Path 10 data | 66% |
| D-IV | 25% of Path 10 data | 50% |
Table 6. UAV image acquisition result.

| Flight Path | Designed Altitude (m) | Designed Oblique Angle (degree) | Images Acquired | Ground Resolution (mm/pixel) | Flying Time (min) | Data Capture Rate (images/min) |
|---|---|---|---|---|---|---|
| 1 | 10 | 0 | 56 | 3.71 | 5 | 11.2 |
| 2 | 15 | 0 | 54 | 5.88 | 4 | 13.5 |
| 3 | 20 | 0 | 31 | 8.71 | 3 | 10.3 |
| 4 | 30 | 0 | 26 | 13.4 | 3 | 8.7 |
| 5 | 40 | 0 | 21 | 18.5 | 2 | 10.5 |
| 6 | 10 | −45 | 50 | 6.49 | 4 | 12.5 |
| 7 | 15 | −30 | 45 | 7.26 | 3 | 15.0 |
| 8 | 10 | 45 | 53 | 5.73 | 4 | 13.3 |
| 9 | 15 | 30 | 47 | 7.44 | 3 | 15.7 |
| 10 | 10 | 0 | 143 | 4.09 | 10 | 14.3 |
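The last column of Table 6 is simply images acquired divided by flying time, and ground resolution follows the standard ground-sample-distance (GSD) relation. A minimal sketch of both computations (the camera parameters passed to `ground_resolution_mm` below are illustrative placeholders, not the specifications of the camera used in the study):

```python
def capture_rate(images: int, minutes: float) -> float:
    """Data capture rate: images acquired per minute of flight."""
    return images / minutes

def ground_resolution_mm(altitude_m: float, focal_mm: float,
                         sensor_width_mm: float, image_width_px: int) -> float:
    """Standard GSD relation: GSD = H * sensor_width / (f * image_width)."""
    return altitude_m * 1000.0 * sensor_width_mm / (focal_mm * image_width_px)

# Flight path 1 from Table 6: 56 images in 5 min
rate = round(capture_rate(56, 5), 1)          # 11.2 images/min

# Doubling the altitude doubles the GSD (coarser ground resolution)
g10 = ground_resolution_mm(10, 8.8, 13.2, 4000)   # hypothetical camera
g20 = ground_resolution_mm(20, 8.8, 13.2, 4000)
```

This reproduces, for example, paths 1 and 10 (56/5 = 11.2 and 143/10 = 14.3 images/min) and shows why the low 10 m paths deliver millimetre-level ground resolution.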
Table 7. Three-dimensional point cloud reconstruction results.

| Dataset Name | Input Data Source | Number of Images | Image Acquisition Time (min) | Image Matching Time (min) | Total Points Number | Reconstruction Time (min) | Total Time (min) |
|---|---|---|---|---|---|---|---|
| A-I | Path 1 | 56 | 5 | 2.9 | 42,849,707 | 5.3 | 13.3 |
| A-II | Path 2 | 54 | 4 | 2.8 | 29,810,855 | 5.4 | 12.3 |
| A-III | Path 3 | 31 | 3 | 1.4 | 16,800,275 | 2.6 | 7.0 |
| A-IV | Path 4 | 26 | 3 | 1.2 | 12,725,012 | 2.1 | 6.3 |
| A-V | Path 5 | 21 | 2 | 0.8 | 9,550,978 | 1.9 | 4.6 |
| B-I | Path 6 | 50 | 4 | 1.8 | 22,761,870 | 4.2 | 10.0 |
| B-II | Path 7 | 45 | 3 | 1.6 | 23,503,173 | 4.3 | 9.0 |
| B-III | Path 8 | 53 | 4 | 1.7 | 28,807,016 | 4.9 | 10.5 |
| B-IV | Path 9 | 47 | 3 | 2.0 | 24,261,326 | 4.5 | 9.5 |
| C-I | Path 2 + 7 | 106 | 7 | 3.4 | 46,732,039 | 10.9 | 21.3 |
| C-II | Path 7 + 9 | 103 | 6 | 5.0 | 41,138,599 | 11.0 | 21.9 |
| C-III | Path 2 + 7 + 9 | 159 | 10 | 4.9 | 47,549,256 | 22.4 | 37.3 |
| D-I | Path 10 | 143 | 10 | 20.3 | 56,929,560 | 31.1 | 61.3 |
| D-II * | 1/2 of Path 10 images | 72 | 5 | 4.7 | 47,219,880 | 9.8 | 19.5 |
| D-III | 1/3 of Path 10 images | 48 | 3.3 | 2.0 | 38,213,629 | 3.0 | 8.2 |
| D-IV | 1/4 of Path 10 images | 36 | 2.5 | Failed | Failed | Failed | Failed |

* Every other image was used to achieve the reduced dataset.
Table 8. Data density and uniformity comparison.

| Test A | Designed Altitude (m) | AD (points/m³) | RSD |
|---|---|---|---|
| A-I | 10 | 298,474 | 5.56% |
| A-II | 15 | 118,192 | 9.34% |
| A-III | 20 | 54,765 | 14.17% |
| A-IV | 30 | 21,905 | 20.38% |
| A-V | 40 | 13,117 | 25.61% |

| Test B | Designed Angle (degree) | Designed Altitude (m) | AD (points/m³) | RSD |
|---|---|---|---|---|
| B-I | −45 | 10 | 111,861 | 8.40% |
| B-II | −30 | 15 | 77,574 | 10.27% |
| B-III | 45 | 10 | 127,153 | 7.58% |
| B-IV | 30 | 15 | 75,954 | 8.50% |

| Test C | Combination | AD (points/m³) | RSD |
|---|---|---|---|
| C-I | Top + one side | 219,979 | 16.35% |
| C-II | Two sides (±30°) | 75,205 | 13.22% |
| C-III | Top + two sides | 187,203 | 12.87% |

| Test D | Overlapping Rate | AD (points/m³) | RSD |
|---|---|---|---|
| D-I | 90% | 277,159 | 5.80% |
| D-II | 83% | 263,239 | 6.02% |
| D-III | 66% | 251,963 | 5.96% |

| Benchmark | Stations | AD (points/m³) | RSD |
|---|---|---|---|
| TLS | 10 | 4,163,133 | 73.12% |

| Best Results | Path Setting | AD (points/m³) | RSD |
|---|---|---|---|
| A-I | 10 m | 298,474 | 5.56% |
| B-III | 45 degrees | 127,153 | 7.58% |
| C-I | Top + one side | 219,979 | 16.35% |
| D-I | 90% overlap | 277,159 | 5.80% |
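The AD and RSD figures in Table 8 can be reproduced for any point cloud by voxelizing it and summarizing the per-voxel counts: AD is the mean volume density over occupied voxels, and RSD is the standard deviation of the per-voxel densities divided by their mean (lower RSD = more uniform cloud). A minimal NumPy sketch under the assumption of a cubic voxel grid (the 1 m voxel size and synthetic cloud are illustrative only):

```python
import numpy as np

def density_stats(points: np.ndarray, voxel: float = 1.0):
    """Return (AD, RSD): average volume density in points/m^3 over
    occupied voxels, and relative standard deviation (std/mean)."""
    idx = np.floor(points / voxel).astype(np.int64)     # voxel index per point
    _, counts = np.unique(idx, axis=0, return_counts=True)
    density = counts / voxel**3                         # points per cubic metre
    return float(density.mean()), float(density.std() / density.mean())

# Synthetic uniform cloud: 40,000 points over a 4 m cube (64 voxels)
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 4.0, size=(40_000, 3))
ad, rsd = density_stats(pts)    # AD near 625 points/m^3, small RSD
```

A near-uniform synthetic cloud yields a small RSD, mirroring why the nadir 10 m flight (A-I, RSD 5.56%) reads as far more uniform than the TLS benchmark (RSD 73.12%), whose density falls off with range from each station.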
Table 9. Comparison of oblique angle impact.

| Datasets | Altitude (m) | Angle (degree) | GR (mm/pixel) | AD (points/m³) | RSD |
|---|---|---|---|---|---|
| A-I | 10 | 0 | 3.71 | 298,474 | 5.56% |
| B-I | 10 | −45 | 6.49 | 111,861 | 8.40% |
| B-III | 10 | 45 | 5.73 | 127,153 | 7.58% |
| A-II | 15 | 0 | 5.88 | 118,192 | 9.34% |
| B-II | 15 | −30 | 7.26 | 77,574 | 10.27% |
| B-IV | 15 | 30 | 7.44 | 75,954 | 8.50% |
Table 10. Comparison of completeness.

| Altitude Change Effect Datasets | Altitude (m) | Density (points/m³) | βave (mm) | Completeness |
|---|---|---|---|---|
| A-I * | 10 | 298,474 | 5.4 | 76.80% |
| A-II | 15 | 118,192 | 8.5 | 69.30% |
| A-III | 20 | 54,765 | 12.8 | 66.41% |
| A-IV | 30 | 21,905 | 19.4 | 68.25% |
| A-V | 40 | 13,117 | 26.8 | 68.75% |

| Angle Change Datasets | Angle (degree) | Density (points/m³) | βave (mm) | Completeness |
|---|---|---|---|---|
| B-I | −45 | 111,861 | 8.3 | 56.10% |
| B-II | −30 | 77,574 | 10.3 | 64.72% |
| B-III | 45 | 127,153 | 7.5 | 59.85% |
| B-IV | 30 | 75,954 | 10.2 | 62.49% |

| Flight Path Arrangement Effect Datasets | Path Location | Density (points/m³) | βave (mm) | Completeness |
|---|---|---|---|---|
| C-I | Top + one side | 219,979 | 5.7 | 48.52% |
| C-II | Two sides | 75,205 | 10.5 | 59.54% |
| C-III | Top + two sides | 187,203 | 6.1 | 51.59% |

| Overlapping Rate Effect Datasets | Overlapping | Density (points/m³) | βave (mm) | Completeness |
|---|---|---|---|---|
| D-I | 90% | 277,159 | 5.2 | 77.26% |
| D-II | 83% | 263,239 | 5.6 | 76.84% |
| D-III | 66% | 251,963 | 5.9 | 76.09% |

| Benchmark Datasets | Stations | Density (points/m³) | βave (mm) | Completeness |
|---|---|---|---|---|
| TLS | 10 | 4,163,133 | 1.2 | 7.49% |

* 80% overlapping rate.
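Completeness in Table 10 reflects how much of the deck surface is actually populated with points after meshing (the workflow of Figure 4). A much-simplified proxy for that idea is the fraction of occupied cells in a grid laid over the deck footprint; the sketch below illustrates the principle only and is not the paper's mesh-based procedure (the cell size, extents, and synthetic points are illustrative assumptions):

```python
import numpy as np

def grid_coverage(points_xy: np.ndarray, extent, cell: float = 0.1) -> float:
    """Fraction of footprint grid cells containing at least one
    projected point -- a simplified stand-in for completeness."""
    (x0, x1), (y0, y1) = extent
    nx = int(np.ceil((x1 - x0) / cell))
    ny = int(np.ceil((y1 - y0) / cell))
    ix = np.clip(((points_xy[:, 0] - x0) / cell).astype(int), 0, nx - 1)
    iy = np.clip(((points_xy[:, 1] - y0) / cell).astype(int), 0, ny - 1)
    occupied = np.zeros((nx, ny), dtype=bool)
    occupied[ix, iy] = True            # mark each cell that received a point
    return float(occupied.mean())

rng = np.random.default_rng(1)
dense = rng.uniform(0.0, 1.0, size=(5_000, 2))    # dense synthetic patch
c = grid_coverage(dense, ((0.0, 1.0), (0.0, 1.0)))
```

Under such a cell test, even a very dense cloud caps out below 100% once semi-regular gaps form between point groups, which is the effect limiting the best SfM result here to 77.26%.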
Table 11. Comparison of geometric accuracy based on flight parameters.

| Altitude Effect Datasets | Altitude (m) | Correlation Coefficient |
|---|---|---|
| A-I | 10 | 0.9962 |
| A-II | 15 | 0.9952 |
| A-III | 20 | 0.9849 |
| A-IV | 30 | 0.9711 |
| A-V | 40 | 0.955 |

| Angle Effect Datasets | Angle (degree) | Correlation Coefficient |
|---|---|---|
| B-I | −45 | 0.9935 |
| B-II | −30 | 0.9903 |
| B-III | 45 | 0.9936 |
| B-IV | 30 | 0.9919 |

| Flight Path Effect Datasets | Path Location | Correlation Coefficient |
|---|---|---|
| C-I | Top + one side | 0.9962 |
| C-II | Two sides | 0.9945 |
| C-III | Top + two sides | 0.9978 |

| Overlapping Rate Effect Datasets | Overlapping | Correlation Coefficient |
|---|---|---|
| D-I | 90% | 0.9986 |
| D-II | 83% | 0.9969 |
| D-III | 66% | 0.9945 |
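The coefficients in Table 11 quantify how closely a cross-section profile extracted from each SfM cloud tracks the TLS benchmark profile (Figures 5 and 6): a Pearson correlation between the two fitted curves sampled at the same stations. A sketch of that comparison (the sine-shaped profile and noise level are invented for illustration, not data from the study):

```python
import numpy as np

def profile_correlation(sfm_z: np.ndarray, tls_z: np.ndarray) -> float:
    """Pearson correlation between two elevation profiles sampled
    at identical cross-section stations."""
    return float(np.corrcoef(sfm_z, tls_z)[0, 1])

x = np.linspace(0.0, 10.0, 200)
tls_profile = np.sin(x)                                    # benchmark curve
noise = 0.05 * np.random.default_rng(2).normal(size=x.size)
sfm_profile = np.sin(x) + noise                            # noisy reconstruction
r = profile_correlation(sfm_profile, tls_profile)          # close to 1
```

Small random deviations barely lower the coefficient, which is why even the 40 m flight (A-V) still scores 0.955 while the low, multi-path datasets cluster above 0.99.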
Table 12. Data yield rate.

| Test A | Altitude (m) | PN | AD | DCR |
|---|---|---|---|---|
| A-I | 10 | 42,849,707 | 298,474 | 6.97 × 10⁻³ |
| A-II | 15 | 29,810,855 | 118,192 | 3.96 × 10⁻³ |
| A-III | 20 | 16,800,275 | 54,765 | 3.26 × 10⁻³ |
| A-IV | 30 | 12,725,012 | 21,905 | 1.72 × 10⁻³ |
| A-V | 40 | 9,550,978 | 13,117 | 1.37 × 10⁻³ |

| Test B | Angle (degree) | PN | AD | DCR |
|---|---|---|---|---|
| B-I | −45 | 22,761,870 | 111,861 | 4.91 × 10⁻³ |
| B-II | −30 | 23,503,173 | 77,574 | 3.30 × 10⁻³ |
| B-III | 45 | 28,807,016 | 127,153 | 4.41 × 10⁻³ |
| B-IV | 30 | 24,261,326 | 75,954 | 3.13 × 10⁻³ |

| Test C | Combination | PN | AD | DCR |
|---|---|---|---|---|
| C-I | Top + one side | 46,732,039 | 219,979 | 4.71 × 10⁻³ |
| C-II | Two sides (±30°) | 41,138,599 | 75,205 | 1.83 × 10⁻³ |
| C-III | Top + two sides | 47,549,256 | 187,203 | 3.94 × 10⁻³ |

| Test D | Overlapping Rate | PN | AD | DCR |
|---|---|---|---|---|
| D-I | 90% | 56,929,560 | 277,159 | 4.87 × 10⁻³ |
| D-II | 83% | 47,219,880 | 263,239 | 5.57 × 10⁻³ |
| D-III | 66% | 38,213,629 | 251,963 | 6.59 × 10⁻³ |

| Benchmark | Stations | PN | AD | DCR |
|---|---|---|---|---|
| TLS | 10 | 270,276,582 | 4,163,133 | 1.54 × 10⁻² |
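Numerically, the DCR column of Table 12 is AD divided by PN (this holds for every row), i.e., how much usable density each reconstructed point contributes; a dataset that generates many points but spreads them thinly scores low. The tabulated values can be checked directly:

```python
def dcr(average_density: float, point_number: int) -> float:
    """Data yield rate: AD / PN."""
    return average_density / point_number

# Dataset A-I from Table 12: 298,474 / 42,849,707
a1 = f"{dcr(298_474, 42_849_707):.2e}"     # "6.97e-03", matching the table

# TLS benchmark: 4,163,133 / 270,276,582
tls = f"{dcr(4_163_133, 270_276_582):.2e}" # "1.54e-02"
```

This makes the trade-off explicit: the two-side oblique combination C-II produced 41 million points but the lowest SfM yield (1.83 × 10⁻³), while the reduced-overlap D-III achieved the highest (6.59 × 10⁻³).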
Table 13. Normalized results.

| Dataset Name | Points Number | Average Density | Yield Rate | Uniformity | Completeness | Geometric Accuracy | Time Efficiency |
|---|---|---|---|---|---|---|---|
| A-I | 0.70 | 1.00 | 1.00 | 1.00 | 0.98 | 0.94 | 0.85 |
| A-II | 0.43 | 0.37 | 0.46 | 0.81 | 0.72 | 0.92 | 0.87 |
| A-III | 0.15 | 0.15 | 0.34 | 0.57 | 0.62 | 0.69 | 0.96 |
| A-IV | 0.07 | 0.03 | 0.06 | 0.26 | 0.69 | 0.37 | 0.97 |
| A-V | 0.00 | 0.00 | 0.00 | 0.00 | 0.70 | 0.00 | 1.00 |
| B-I | 0.28 | 0.35 | 0.63 | 0.86 | 0.26 | 0.88 | 0.91 |
| B-II | 0.29 | 0.23 | 0.34 | 0.77 | 0.56 | 0.81 | 0.92 |
| B-III | 0.41 | 0.40 | 0.54 | 0.90 | 0.39 | 0.89 | 0.90 |
| B-IV | 0.31 | 0.22 | 0.31 | 0.85 | 0.49 | 0.85 | 0.91 |
| C-I | 0.78 | 0.72 | 0.60 | 0.46 | 0.00 | 0.94 | 0.71 |
| C-II | 0.67 | 0.22 | 0.08 | 0.62 | 0.38 | 0.91 | 0.70 |
| C-III | 0.80 | 0.61 | 0.46 | 0.64 | 0.11 | 0.98 | 0.42 |
| D-I | 1.00 | 0.93 | 0.62 | 0.99 | 1.00 | 1.00 | 0.00 |
| D-II | 0.80 | 0.88 | 0.75 | 0.98 | 0.99 | 0.96 | 0.74 |
| D-III | 0.60 | 0.84 | 0.93 | 0.98 | 0.96 | 0.91 | 0.94 |
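Each column of Table 13 appears to be min–max normalized across the 15 datasets, so the weakest dataset on a metric scores 0.00 and the strongest 1.00 (time efficiency is inverted: the slowest run, D-I, scores 0.00). A sketch reproducing the Average Density column for the Group A datasets, whose extremes happen to coincide with the global ones:

```python
def minmax(values):
    """Min-max normalize a metric across datasets: min -> 0.0, max -> 1.0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Average density (points/m^3) for A-I..A-V, taken from Table 8
ad = [298_474, 118_192, 54_765, 21_905, 13_117]
normalized = [round(v, 2) for v in minmax(ad)]
# -> [1.0, 0.37, 0.15, 0.03, 0.0], matching Table 13's Average Density column
```

The same scaling explains the zeros elsewhere in the table, e.g., C-I has the lowest completeness (48.52% in Table 10) and so scores 0.00 in that column.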
Table 14. UAV-SfM vs. TLS.

| Features | UAV-SfM | TLS |
|---|---|---|
| Data acquisition time | 10 min for image capturing (dataset D-I) | 170 min for 10 scan locations |
| Point cloud generation time | 52 min for point cloud generation (dataset D-I) | 50 min for registration of the 10 datasets and irrelevant point removal |
| Point numbers | 57 million points | 270 million points |
| Equipment cost | USD 5000, including the UAV platform and reconstruction software | USD 50,000, including the Leica P-20 and post-processing software |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite (MDPI and ACS Style)

Chen, S.; Zeng, X.; Laefer, D.F.; Truong-Hong, L.; Mangina, E. Flight Path Setting and Data Quality Assessments for Unmanned-Aerial-Vehicle-Based Photogrammetric Bridge Deck Documentation. Sensors 2023, 23, 7159. https://doi.org/10.3390/s23167159
