Article

A Moving 3D Laser Scanner for Automated Underbridge Inspection

1 Politecnico di Milano, Dipartimento di Meccanica, Via La Masa 1, 20156 Milano, Italy
2 Dipartimento di Ingegneria Industriale e dell'Informazione, Università degli Studi di Pavia, Via Ferrata 5, 27100 Pavia, Italy
3 Gexcel-Geomatics & Excellence, Via Branze 45, 25123 Brescia, Italy
* Author to whom correspondence should be addressed.
Machines 2017, 5(4), 32; https://doi.org/10.3390/machines5040032
Submission received: 27 October 2017 / Revised: 14 December 2017 / Accepted: 18 December 2017 / Published: 19 December 2017

Abstract

Recent research has shown that the underbridge geometry can be reconstructed by mounting a 3D laser scanner on a motorized cart travelling on a walkway located under the bridge. The walkway is moved by a truck, and the accuracy of the bridge model depends on the accuracy of the trajectory of the scanning head with respect to a fixed reference system. In this paper, we describe a vision-based measurement system that identifies the relative motion of the cart carrying the 3D laser scanner with respect to the walkway. The orientation of the walkway with respect to the bridge is determined using inclinometers and a camera that detects the position of a laser spot, while the position of the truck with respect to the bridge is measured using a conventional odometer. The accuracy of the proposed system was initially evaluated by numerical simulations and subsequently verified by experiments in laboratory conditions. The complete system was then tested by comparing the geometry of buildings reconstructed using the proposed system with the geometry obtained with a static scan. Results showed that the error is smaller than 6 mm; given the satisfying quality of the point clouds obtained, it is also possible to detect small defects on the surface.

1. Introduction

This study presents a machine that creates a point cloud using a laser scanner in continuous motion with respect to the observed target. Although the work has many potential applications, we developed the method for the reconstruction of the geometry under the bridge deck and the identification of local defects. In this field, 3D laser scanners are used to model the surface of civil structures [1,2,3,4], but the need for high-density point clouds often requires overlapping static scans obtained from different scanner positions. When the structure to be monitored is a bridge, several limitations preclude the use of multiple static scans [5,6]. The underside of the bridge deck is often not accessible, and the inspection is performed by skilled operators using special trucks. Since many existing highway bridges were built in the 1960s [5], and given that the surveys are extremely expensive [7], inspection automation is the focus of several studies [4,5,6,7,8,9,10,11,12].

1.1. Bridge Inspection Techniques

The general methods for the automatic survey of structures [8] cannot be used under the bridge deck, so fit-for-purpose methods are often adopted. Metallic bridges can be monitored using non-destructive techniques [9], while concrete bridges are mainly monitored using visual analyses, given the difficulty of creating inspection systems capable of detecting defects of a few millimeters on constructions of hundreds of meters. One of the most promising techniques is the analysis of images, as proposed by Yu et al. [4], but great limitations derive from the difficulty of associating the exact position with each image and from the amount of work required to derive 3D information using photogrammetry. In this field, Jiang et al. reviewed close-range photogrammetry applications in bridge measurement [13]; early applications used special cameras and targets to align the different images. Nowadays, computerized analytical tools allow reliable image alignment with minimal effort. However, although different works [14,15,16] have focused on methods for managing the images, these methods provide geometrical information about the bridge that is less accurate than that obtained with laser scanners [17]. Image methods were also used to measure the vertical bridge deflection [12,18], but these techniques can only detect defects resulting in large changes of structural parameters.

1.2. Limitations of the Existing Techniques and Proposed Approach

Literature studies show that a laser scanner located on the ground can be used, together with image processing, to assess the presence of cracks on concrete bridges [2]. However, as evidenced by the studies, this solution is possible only if the scanning head is close enough to the deck to obtain an adequate point cloud resolution. This practically precludes detecting cracks with commercial laser scanners if the height of the bridge is greater than 20–30 m. The only possible way of obtaining a point cloud with high resolution is to mount the laser scanner on a moving cart. With this technique, the reconstruction of the point cloud is based on the knowledge of the scanner position and orientation: differently from any commercially available system, in this method the point cloud is created from the instantaneous position of the laser and the instantaneous distance of each measured surface point from the scanning head. In other words, the point cloud is not created by overlapping static scans, but is rather created with proprietary algorithms that require the instantaneous position of the scanning head. Given the typical dimensions of highway bridges, it is not possible to stop the truck and perform several static scans. Consequently, both the truck and the scanning head are continuously moving; the mathematical formulation of the problem and the preliminary experimental results were presented in references [5,6,7,19]. Numerical analyses described in reference [19] showed that the poor quality of the point cloud was mainly due to the non-linear terms in the rotation matrices used to identify the position of the cart with respect to the bridge (the linearization-induced errors on the point cloud accuracy were close to 20 mm). Results also showed that a small error in the identification of the scanning head tilt leads to large errors in the point cloud. In order to increase the accuracy of the reconstruction, we focused on methods for the identification of the position of vehicles moving on bounded trajectories.

1.3. Identification of the Laser Position

In the literature, object egomotion has been measured with different approaches [20,21,22,23,24,25]; the technique that has grown most rapidly is Visual Odometry (VO) [20], the process in which the motion of a vehicle (or subject) is estimated from the images acquired by one or multiple cameras. The estimation of a vehicle's motion from images was pioneered by Moravec in the 1980s [25], and the term VO was introduced in 2004 by Nistér, owing to its similarity to wheel odometry, which incrementally estimates the motion of a vehicle by integrating wheel rotations [20,24]. As outlined by Scaramuzza and Fraundorfer in their review on VO [24], the technique is effective only if there is sufficient illumination in the environment and the static scene has enough features to allow the identification of the relative motion; the frame rate must be high enough for consecutive images to overlap. VO can provide a relative position error ranging from 0.1 to 2%. This capability makes VO an interesting alternative to conventionally used techniques (global positioning system, inertial measurement units, and laser odometry). The early VO studies were motivated by the NASA Mars exploration program to measure the rovers' motion and, in general, this technique is the preferred choice in environments where the global positioning system is not available or does not provide the required accuracy. The other techniques used for the localization of objects also have several limitations in our application. GPS cannot be used because of its limited accuracy and because of the poor signal quality on the surface immediately below the bridge deck. Inertial measurement units were the baseline solution at the beginning of the project, but the long measurement duration induced relevant drift problems. Vision-based measurement systems, such as pattern matching techniques tracking the position of the cart using fixed cameras, or trinocular stereoscopic systems using markers, were viable solutions, but the worsening accuracy of 3D reconstruction at increasing distances, evidenced in tests performed in controlled conditions [26], was not acceptable for our application. Simultaneous Localization and Mapping (SLAM) techniques focusing on the cart position on the walkway were difficult to implement because the relative motion between the cart, the walkway, the truck frame, and the bridge implies that different parts of the images move in different directions.
In this work, we describe the system used to identify the relative position of the cart transporting the 3D laser scanner with respect to the origin of the by-bridge walkway. The cart position measurement system described in this paper uses laser distance meters and cameras to identify the relative position between the cart and the walkway. The measurement method is described in Section 2. Experimental results are presented in Section 3 and discussed in Section 4. The conclusions of the paper are drawn in Section 5.

2. Method

The position of the scanning head can be identified by applying roto-translations, starting from the position of the truck:

$$[M_{0N}] = [M_{01}][M_{12}][M_{23}] \cdots [M_{(N-1)N}]$$

where

$$[M_{01}] = \begin{bmatrix} R_{01} & T_{01} \\ 0 & 1 \end{bmatrix}$$

R01 and T01 are, respectively, the rotation matrix (3 × 3) and the translation vector (3 × 1) that describe the position of a reference system with respect to the previous one. As anticipated in Section 1.2, the rotation matrix R cannot be linearized for small angles, i.e., it cannot be approximated by:

$$R_{01} \approx \begin{bmatrix} 1 & -\gamma & \beta \\ \gamma & 1 & -\alpha \\ -\beta & \alpha & 1 \end{bmatrix}$$
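As an illustration of how this chain of roto-translations can be evaluated, the following minimal NumPy sketch composes full (non-linearized) 4 × 4 homogeneous matrices and maps a scanner point into the fixed reference system; the pose values and the three-link chain are placeholders for illustration, not the actual system geometry.

```python
import numpy as np

def rotation(alpha, beta, gamma):
    """Full (non-linearized) rotation matrix, composed as Rz(gamma) Ry(beta) Rx(alpha)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, ca, -sa], [0.0, sa, ca]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    Rz = np.array([[cg, -sg, 0.0], [sg, cg, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ Ry @ Rx

def homogeneous(R, T):
    """4 x 4 homogeneous roto-translation [R T; 0 1]."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

# Hypothetical chain truck -> walkway -> cart -> scanning head
# (angles in rad, translations in m); the values are placeholders.
poses = [
    (0.00, 0.01, 0.00, [12.3, 0.0, 0.0]),   # M01: truck position along the bridge
    (0.02, 0.00, 0.01, [3.5, 0.1, -0.05]),  # M12: cart position on the walkway
    (0.00, 0.00, 0.00, [0.0, 0.0, 0.4]),    # M23: scanner boresight offset
]
M_0N = np.linalg.multi_dot(
    [homogeneous(rotation(a, b, g), np.array(T)) for a, b, g, T in poses]
)

# A point measured in scanner coordinates, mapped to the fixed reference system
p_scanner = np.array([1.0, 2.0, 0.5, 1.0])  # homogeneous coordinates
p_fixed = M_0N @ p_scanner
print(p_fixed[:3])
```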
The system that measures the cart position with respect to the walkway is set up with:
  • a laser distance meter and two laser pointers located at the beginning of the by-bridge walkway;
  • a camera on the cart which observes the three spots on the projection plane;
  • two cameras on the cart observing sideward and downwards; and
  • an encoder for the closed-loop control of the cart motor.
The scheme of the measurement chain is shown in Figure 1; the three laser beams generate three spots on the projection plane. The central spot is generated by the laser distance meter, while the other two are used as optical rails. The three lasers are aligned with the cart motion direction, so that the displacement of the three spots on the projection plane is limited. The downward and lateral cameras observe, respectively, a metering tape fixed to the walkway and the walkway handrail.
With the proposed setup, the cart position is identified as follows (a minimal sketch of the spot-based computations is given after this list):
  • the fore-and-aft motion is determined by the laser range finder located at the beginning of the walkway;
  • the lateral and vertical motion of the cart are measured by the spot camera, which identifies the translation from the position of the central spot;
  • the cart roll is measured by the spot camera, observing the rotation of the two external laser spots;
  • the cart pitch and yaw are measured respectively by the lateral and the vertical cameras, observing linear objects parallel to the cart motion.
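As an illustration of the spot-based part of this scheme, the following minimal Python sketch derives the lateral and vertical translations from the central spot and the roll from the two external spots; it assumes that the spot coordinates have already been converted to physical units by the camera calibration, and all names and values are hypothetical.

```python
import numpy as np

def cart_pose_from_spots(left, center, right, center_ref, roll_ref=0.0):
    """Estimate the cart translation and roll from the three laser spots.

    left, center, right: (y, z) spot coordinates on the projection plane [mm];
    center_ref: (y, z) of the central spot at the reference (zero) position;
    roll_ref: roll of the external-spot line at the reference position [rad]."""
    dy = center[0] - center_ref[0]   # lateral displacement
    dz = center[1] - center_ref[1]   # vertical displacement
    # Roll: rotation of the line joining the two external spots
    roll = np.arctan2(right[1] - left[1], right[0] - left[0]) - roll_ref
    return dy, dz, roll

# Example with hypothetical spot positions (mm)
dy, dz, roll = cart_pose_from_spots(
    left=(-100.0, 0.4), center=(0.3, 0.5), right=(100.0, 1.1),
    center_ref=(0.0, 0.0))
print(f"dy = {dy:.2f} mm, dz = {dz:.2f} mm, roll = {np.degrees(roll):.3f} deg")
```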
In the actual method implementation, there is no data fusion between the information of the different measurement systems: the lateral camera, for instance, can also be used to identify the vertical cart displacement but, at this initial stage, we decided to keep the method as simple as possible. As in any vision system, the quality of the image is crucial for obtaining reliable measurements. In our case, the biggest problem is probably related to the large variation of lighting and viewing conditions of the scene, since the cameras may be exposed to direct sunlight at the beginning and at the end of the bridge, while below the bridge the lighting condition may be very poor.
The phase-shift laser scanner is positioned above the cart. The laser scanner can work in spherical mode and in helical mode. In the first mode, the laser scanner acquires the 3D coordinates of the visible points around the scanner head in a field of view of 310° (vertical) × 360° (horizontal). In the second mode, the horizontal axis is fixed and the laser acquires 310° vertical sections; the combination of the cart movement and the scanner vertical rotation provides the 3D acquisition. The density of the cloud depends on the speeds of the cart and of the truck and on the rotation speed of the scanning head. To use the scanner in helical mode, given the cart trajectory, the relative position between the cart and the laser scanner head must be determined; a boresight technique was used for this purpose. Four targets detectable by the laser scanner were rigidly fixed to the cart. The following paragraphs describe the measurement subsystems and the actions taken to obtain reliable measurements.

2.1. Experimental Setup

The three cameras used for the identification of the cart roll, pitch and yaw are manufactured by IDS (uEye UI-5240CP-M-GC); the image resolution is 1280 × 1024 pixels and the maximum frame rate is 25 Hz. A LabVIEW-based software application running on an embedded PC captures the images, which are analyzed offline in order to tune the algorithms in case of non-standard lighting conditions.

2.1.1. Laser Pointers and Camera

The cart position along the X axis (direction of motion, almost perpendicular to the projection plane) is detected by the laser range finder (FAE LS121 FA, range 100 m, resolution 0.1 mm). The cart lateral and vertical motions (Y and Z axes in Figure 2a), as well as the cart roll, are measured by analyzing the image captured by the camera: the translations are derived from the position of the central spot, while the two external spots are used to identify the cart roll. The scheme of the measurement setup is shown in Figure 2.
Since the camera sensor is not parallel to the observation plane, the system was calibrated by observing a grid with known geometry (dot diameter 6 mm, grid pitch 25 mm), so that the result of the measurement is an array of spot coordinates in physical units. In order to obtain good images independently of the sunlight conditions, the camera observing the three spots is equipped with a bandpass filter from 635 to 646 nm, given that the three lasers have a wavelength of 639 nm (red). The alignment between the three lasers strongly affects the measurement accuracy; consequently, we developed a fit-for-purpose calibration procedure (described in Section 2.2).
The coordinates of the three laser spots are identified using a blob detection algorithm, based on classical image thresholding paired with a blob analysis. The threshold level was set to 30 (8-bit grayscale image) and the lookup region of interest (ROI) is rearranged dynamically, since between one acquisition and the next the spot movement should not exceed 30 pixels. This value corresponds to a displacement lower than 15 mm in 40 ms and was obtained from experiments performed by fixing an accelerometer on the cart and analyzing the maximum velocity. This procedure reduced the image processing time, which is in the order of a few milliseconds per frame.
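A minimal sketch of such a threshold-plus-blob pipeline with a dynamically re-centered ROI is given below (OpenCV-based; the threshold and the 30-pixel margin follow the values quoted above, while everything else is illustrative and not the actual LabVIEW implementation).

```python
import cv2
import numpy as np

THRESHOLD = 30    # 8-bit grayscale threshold quoted in the text
ROI_MARGIN = 30   # maximum expected spot motion between frames [pixels]

def spot_centroid(frame, roi):
    """Find the largest-blob centroid inside roi = (x, y, w, h); returns the
    centroid in full-frame coordinates and the ROI re-centered for the next frame."""
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w]
    _, mask = cv2.threshold(patch, THRESHOLD, 255, cv2.THRESH_BINARY)
    # Connected-component (blob) analysis; label 0 is the background
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None, roi  # no blob found: keep the previous ROI
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    cx, cy = centroids[largest]
    cx, cy = cx + x, cy + y  # back to full-frame coordinates
    new_roi = (max(0, int(cx) - ROI_MARGIN), max(0, int(cy) - ROI_MARGIN),
               2 * ROI_MARGIN, 2 * ROI_MARGIN)
    return (cx, cy), new_roi

# Synthetic usage: a dark frame with one bright spot
frame = np.zeros((1024, 1280), np.uint8)
cv2.circle(frame, (640, 512), 4, 255, -1)
(cx, cy), roi = spot_centroid(frame, (610, 482, 60, 60))
print(cx, cy, roi)
```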

2.1.2. Lateral and Vertical Camera

The vertical camera observes a roller meter below the cart; also in this case, the camera was equipped with an infrared lighting system and an infrared filter. The camera was calibrated by acquiring a calibration grid, in order to measure the displacements and rotations in physical coordinates. The position of the roller meter coincided with the walkway axis (maximum error smaller than 2 mm); a scheme of the measurement method is shown in Figure 3.
The yaw angle is measured by a custom edge detection algorithm: the grayscale image is divided into 5-pixel-wide columns, and the data of each column are processed using a moving average performed on a 15 × 5 pixel window. All the points where the grey-level variation is larger than 10 are used for the edge detection; the cart yaw angle is measured by fitting these points in the least-squares sense. The slope is bounded between ±5°, and the region of interest for the threshold calculation is limited to 20 rows above or below the edge computed at the previous step; these values were chosen after the analysis of the maximum yaw in operative conditions. With this approach, the image processing time is approximately 15 ms.
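A minimal sketch of this column-wise edge extraction and least-squares fit follows (the same scheme, with different window sizes and thresholds, applies to the pitch measurement described below); the array shapes and names are illustrative.

```python
import numpy as np

COL_W = 5      # column width [pixels]
ROWS_AVG = 15  # moving-average length along the rows
GRAD_THR = 10  # grey-level variation threshold

def yaw_from_image(img, prev_rows=None, search=20):
    """Fit the tape edge in a grayscale image (2D uint8 array) and return
    its slope [deg]; prev_rows optionally restricts the search to a
    +/- `search` row band around the edge found at the previous frame."""
    h, w = img.shape
    xs, ys = [], []
    for i, c in enumerate(range(0, w - COL_W, COL_W)):
        strip = img[:, c:c + COL_W].mean(axis=1)        # 5-pixel column average
        smooth = np.convolve(strip, np.ones(ROWS_AVG) / ROWS_AVG, mode="same")
        grad = np.abs(np.diff(smooth))                  # grey-level variation
        lo, hi = 0, grad.size
        if prev_rows is not None:
            lo = max(0, int(prev_rows[i]) - search)
            hi = min(grad.size, int(prev_rows[i]) + search)
        band = grad[lo:hi]
        if band.size and band.max() > GRAD_THR:
            ys.append(lo + int(np.argmax(band)))        # edge row in this column
            xs.append(c + COL_W / 2.0)
    if len(xs) < 2:
        return None
    slope, _ = np.polyfit(xs, ys, 1)                    # least-squares line
    return float(np.degrees(np.arctan(slope)))
```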
A similar system was used for the measurement of the pitch angle. The reference line is the walkway handrail, and the edge used for the identification of the pitch is the one between the handrail and the background. Starting from the top of the image, the derivative of the intensity was computed on the image averaged over a 5 × 5 window. Preliminary analyses showed that, with the proposed experimental setup, the edge is the best line fitting the points exceeding the level of 20 grayscale units/pixel. The angle is constrained between ±9°, and the region of interest for the calculation of this line is limited to 80 pixels above or below the previously calculated line. The image processing time is approximately 30 ms. Examples of the images used for the identification of the cart pitch and yaw are shown in Figure 4.

2.2. Laser Scanner Boresight

The laser scanner (Faro Cam2) boresight calibration consists in finding the roto-translation between the laser reference system and the cart reference system, as shown in the left part of Figure 5. The transformation is obtained by scanning a set of four non-aligned markers fixed on the cart, both with the laser scanner mounted on the cart and with another laser scanner that observes the cart and the markers.
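The roto-translation between the two sets of marker centers can be recovered with a standard least-squares rigid registration (the Kabsch/Procrustes algorithm); a minimal sketch, under the assumption that the marker centers have already been extracted from both scans and put in correspondence:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform mapping points P onto Q (both N x 3),
    via the Kabsch algorithm; returns (R, t) such that Q ~ (R @ P.T).T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Four non-aligned marker centers (hypothetical coordinates, m) seen in the
# cart frame (P) and in the on-board scanner frame (Q)
P = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.0], [0.0, 0.0, 0.3]])
ang = np.radians(5.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
Q = (R_true @ P.T).T + np.array([0.1, -0.2, 0.4])
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))
```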
An example of a 360° view acquired by the FARO laser scanner located on the cart is shown in Figure 6.

2.3. System Calibration and Uncertainty Budget

The system calibration is necessary both for the transformation of the image coordinates into physical coordinates (camera calibration) and for the compensation of the bias errors due to the non-idealities of the measurement system (system calibration). The camera calibration was performed by acquiring the image of the calibration grid and compensating for the perspective and non-linear (optical) distortions of the cameras; the standard non-linear compensation algorithm of LabVIEW was used in all the analyses.
The system calibration procedure included the experimental evaluation of the measurement uncertainty and the compensation of the bias errors [27,28]. The latter were significant only in the "laser spot and cameras" subsystem, where the lasers' misalignment results in a drift of the cart position and a linearly increasing roll angle. The calibration was performed by comparing the tilt measured by the laser spots and camera with that measured by a reference inclinometer (dual-axis SEIKA SBG2U, full scale ±10°, linearity deviation lower than 0.01°) at different distances (from 1 to 10 m). The error due to the laser misalignment was derived by plotting the difference between the angle measured by the inclinometer and that measured by the vision system as a function of the distance. The linear component of the trend (approximately 2° after 10 m in our prototype) was subtracted from the measurements performed in operative conditions; as discussed later, the error is large because of the large mechanical tolerances with which the three lasers were mounted. The three laser beams were therefore not parallel: given that the error is repeatable, it can be compensated for and therefore does not limit the method accuracy.
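A minimal sketch of this bias compensation, assuming a table of (distance, reference roll, vision roll) calibration samples; the names and values are illustrative, with the error growing roughly linearly to about 2° at 10 m as in the prototype.

```python
import numpy as np

# Calibration data: distance [m], roll measured by the reference inclinometer
# and by the vision system [deg] (synthetic values for illustration)
dist = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
roll_ref = np.array([0.10, 0.12, 0.08, 0.11, 0.09, 0.10])
roll_vis = roll_ref + 0.2 * dist + 0.02 * np.random.default_rng(0).normal(size=dist.size)

# Fit the linear component of the misalignment-induced error vs distance
slope, offset = np.polyfit(dist, roll_vis - roll_ref, 1)

def compensate_roll(roll_measured, distance):
    """Subtract the calibrated linear bias from an operative measurement."""
    return roll_measured - (slope * distance + offset)

print(compensate_roll(roll_vis[-1], dist[-1]), "vs reference", roll_ref[-1])
```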
The uncertainties of the different components of the measurement chain (defined as per the ISO GUM [27]) were evaluated as the standard deviation measured in repeatability conditions (given that all the systematic errors outlined in the calibration are compensated). The uncertainty of the laser distance meter was verified at distances of 2.5, 5, 10 and 15 m; test results evidenced a standard uncertainty of 0.3 mm.
The uncertainty of the displacements measured by the cameras was evaluated under repeatability conditions, i.e., by observing the spots and edges while the cart was not moving. The uncertainty of the displacement measurement performed by the laser spot and vision system was 0.1 mm, corresponding to 1/5 of the pixel size (0.5 mm). The resulting uncertainty of the roll angle is 0.04°.
The uncertainty of the pitch and yaw angles was measured by imposing known rotations of ±5° to an aluminum profile and using the edge detection algorithms described in this paper. The standard uncertainty was 0.03°; this value is probably an underestimation of the value obtainable in operating conditions, since the background did not vary during the calibration tests. Furthermore, the algorithm assumes that the edge to be detected is an ideal line; in the current method implementation, the lack of linearity of the edge reduces the method accuracy. The uncertainties reported in this section are summarized in Table 1.

2.4. Experiments and Data Analysis

The system (cart + laser scanner) was used to acquire the known geometry of two environments, an indoor corridor and an external facade. In both cases, the geometry reconstructed by the Faro Cam2 laser scanner located on the moving cart (mobile acquisition) was compared to the geometry measured by the same Faro laser scanner placed in a fixed position (static, tripod-mounted acquisition) along the corridor and at the center of the façade.
The error of the proposed method was quantified by creating a reference mesh (triangulated model) from the static scan and calculating, for each 3D point acquired in the mobile mode, the distance to the closest triangle of the reference mesh. Results are presented as images of the 3D scan and as descriptive statistics of the error (root mean square, average, maximum). Although images do not provide a quantitative indication of quality, they are the result of the final application of the system (identification of the defects under the bridge deck) and consequently the point cloud rendering is a parameter of paramount importance.
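A minimal sketch of this error computation on synthetic data, approximating the point-to-mesh distance with the nearest-vertex distance (the exact point-to-triangle distance follows the same scheme with a more elaborate distance query):

```python
import numpy as np
from scipy.spatial import cKDTree

# Vertices of the reference (static) scan and points of the mobile scan;
# synthetic data standing in for real point clouds, coordinates in mm
rng = np.random.default_rng(0)
ref_vertices = rng.uniform(0, 1000, size=(50_000, 3))
mobile_points = ref_vertices[:20_000] + rng.normal(0, 3, size=(20_000, 3))

# Nearest-vertex distance as a proxy for the point-to-mesh distance
tree = cKDTree(ref_vertices)
d, _ = tree.query(mobile_points)

# Descriptive statistics as reported in Tables 2 and 3
print(f"mean = {d.mean():.1f} mm")
print(f"RMS  = {np.sqrt(np.mean(d ** 2)):.1f} mm")
print(f"95th = {np.percentile(d, 95):.1f} mm")
print(f"max  = {d.max():.1f} mm")
```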

3. Results

3.1. Indoor Tests

The first series of tests was performed by scanning indoor corridors of the Gexcel and Politecnico di Milano offices. In these conditions, the ground surface was extremely regular and the cart roll, pitch and yaw were negligible; the only pieces of information used were the distance from the laser distance meter and the translation in the YZ plane (Figure 2a). The cart nominal speed was 1.2 m/s, a value 20% larger than the ideal speed identified by imposing a point density of 1600 points/dm² with the resolution set to ¼ (helical scan rate of 95 Hz). The results of the inspection are shown in Figure 7. The color indicates the difference between the results of a static scan and the results obtained with the proposed method with the laser in motion.
Errors were generally small when the observed surface was parallel to the direction of motion of the cart: given the use of the helical scan mode, the point density on a surface is maximal when the cart moves parallel to it. This aspect is clarified in Figure 8b, where the point density on horizontal lines is much larger than that on vertical lines.
Descriptive statistics summarizing the errors of the surface scans not perpendicular to the direction of motion are presented in Table 2.
Tests were repeated with different directions of motion of the cart, moving on trajectories that were not parallel to the walls, as shown in Figure 9. The figure shows the difference between three scans of the corridor obtained with the cart moving on different trajectories. The red and yellow scan lines show the point clouds obtained moving from right to left, while the purple dots were obtained with the cart moving from left to right. Results show a good agreement between the different scans, with differences compatible with those indicated in Table 2.

3.2. Outdoor Tests

The second series of tests was performed outdoors, in conditions closer to those expected under bridges; the setup for the method validation is shown in Figure 10. The Faro Cam2 scanning head was mounted on the instrumented cart, which moved back and forth over non-flat terrain simulating the by-bridge walkway. A roller meter was fixed on the ground and two linear metal rods (lateral rails) were fixed on a fence. The cart moved for approximately 15 m and then returned to its original position. The cart nominal speed was 1.2 m/s, as in the previous tests.
Results of the mobile acquisition scan are summarized in Figure 11, which shows the point cloud obtained before the tilt compensation (parts a and b) and after the compensation (part c) of all the systematic errors. One can notice that the progressive twist of the façade is recovered after the laser misalignment compensation.
Descriptive statistics summarizing the errors of the surface scans not perpendicular to the direction of motion are presented in Table 3. The RMS value of the error (distance between the static and the dynamic scan) was lower than 6 mm. The error was larger on the higher part of the façade and was constant at different distances from the measurement origin (position of the laser pointers). The possible causes of the planarity error evidenced in this section are discussed, together with the method limitations, in Section 4.

4. Discussion

Results showed that the RMS error in indoor and outdoor tests is compatible with that of the existing methods in the literature [29]. The uncertainty is large in comparison with the uncertainty of the cart position; numerical simulations [19] showed that the error is mainly due to the combination of the cart tilt error and the large distance between the cart and the observed surface (3 to 10 m). The position error increases linearly with the distance between the scanning head and the measured surface. In real working conditions, the error is expected to be smaller, as the distance from the bridge surface is lower than 2 m. Conversely, the walkway oscillation may worsen the results obtained in these preliminary tests. In the real by-bridge usage, the accuracy can be increased by considering only the points at a limited distance from the scanning head, given that the simultaneous cart and truck motion allows the same point to be observed from different cart/truck positions.
The accuracy of the angle measurements can be increased by improving the image quality. Increasing the image resolution is not possible without significantly increasing the instrumentation cost, since this choice would limit the camera frame rate and consequently the cart speed (which strongly affects the cart vibration [5,7]). The uncertainty of the edge detection algorithms can be reduced by increasing the contrast between the edges and the background; consequently, the tests in actual working conditions were performed after painting the walkway surface with a special opaque paint, as shown in the lower part of Figure 10.
Given that the most limiting factor is the accuracy of the roll angle, the latter could be improved by adopting a procedure similar to the one used to measure the pitch and yaw angles, i.e., by replacing the two laser pointers with a laser line. In order to obtain an adequate contrast on the projection plane, the laser aperture should be large enough to fill the entire projection plane at the smallest distance (approximately 1 m) and the laser should be powerful enough to ensure a sufficient contrast at a distance of 20 m. A similar result can be obtained by replacing the two laser pointers with an array of pointers and adopting a least-squares procedure to identify the roll angle with better accuracy. With our experimental setup, the parallelism of the three lasers was limited by the poor planarity of the optical bench, and using all three lasers instead of only the two external ones did not increase the measurement accuracy significantly.
The position uncertainty can also be reduced by adopting data fusion procedures [30,31,32], given that the lateral and vertical displacements can be detected by the vertical and lateral cameras respectively, and the odometry can be performed by analyzing the images of the vertical camera (which observes the roller meter). The theoretical uncertainty reduction in the case of an average of two measures with similar uncertainty is a factor of √2 [33,34]; however, the accuracy increase in the final application (reconstruction of the underbridge geometry) would be limited, given that the angles can be measured by only one camera at a time.
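For reference, the factor follows from the standard propagation of uncertainty for the mean of two uncorrelated measurements with equal standard uncertainty u:

$$u_{\bar{x}} = \sqrt{\left(\tfrac{1}{2}\right)^2 u^2 + \left(\tfrac{1}{2}\right)^2 u^2} = \frac{u}{\sqrt{2}}$$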
Many limitations of the proposed measurement system derive from the uneven surface on which the cart moves, and the easiest solution would be to ensure a smoother motion of the cart. For obvious safety reasons, it is impossible to modify the structure of the walkway, which is telescopic and is folded onto the truck during transport. Since the by-bridge is used for ordinary road maintenance, the non-slip aluminum floor is often in poor condition, and the telescopic structure of the walkway prevents the use of linear guides (rails) that would ensure a limited roll, pitch and yaw of the cart.
The results presented in this paper showed that the mechanical design of the entire structure can be optimized by ensuring the parallelism between the laser pointers and by modifying the design of the cart, introducing passive or active suspension systems to limit the vibration of the 3D scanning head. These improvements deserve forthcoming studies, given that the accuracy of 6 mm over a 15 × 10 m surface was judged sufficient for the identification of macroscopic structural damage.
Preliminary tests performed under a bridge with the inspection truck and a moving Faro CAM2 scanner evidenced the validity of the proposed method.

5. Conclusions

This paper described an original technique for the identification of the motion of a cart moving on bounded trajectories. The laser scanner is meant to be mounted on a truck for underbridge inspection, to replace the visual inspection currently performed by operators. The RMS errors in the reconstruction of a corridor and of a building façade (15 × 10 m) were respectively 4 and 6 mm; these values are promising for the final application of the system, given that they were obtained with the cart moving at a speed 20% higher than the speed at which the cart will travel during the underbridge inspection.
The analysis of the uncertainty budget showed that the dominant factor limiting the accuracy of the point cloud is the accuracy in the identification of the cart roll. The latter can be improved by refining the optical layout of the laser pointers or by replacing the laser pointers with a laser line. In our tests, the lasers were manually aligned and the lack of parallelism was numerically compensated; nevertheless, the adoption of a high-quality optical bench with finely adjustable laser alignment would increase the roll angle measurement accuracy. The error in the pitch and yaw angles was less critical, being derived from the measurements of the cameras observing the walkway features.
The main limitations of the proposed method relate to the complexity of the experimental setup, which requires the installation of cameras and laser pointers on the metallic frame of the special truck. Also, the procedure required for locating the cart on the walkway, as described in this paper, is rather long; however, in the current state of the art there are no systems allowing the identification of the underbridge geometry independently of the bridge height and of the presence of water. Future works will focus on the validation of the system in real usage conditions and on the optimization of the mechanical design to increase the accuracy of the roll measurement.

Acknowledgments

The authors gratefully acknowledge SINECO SPA for the financial support of this activity.

Author Contributions

Hermes Giberti and Federico Cheli conceived the method. Remo Sala designed the experiments. Silvio Giancola, Hermes Giberti and Marco Tarabini performed the experiments. Marco Tarabini and Hermes Giberti analyzed the data. Matteo Sgrenzaroli completed the algorithm for the cloud computation and error compensation. The paper was written by Marco Tarabini and Silvio Giancola.

Conflicts of Interest

The authors declare no conflict of interest.

References and Note

  1. Bobkowka, K.; Nykiel, G.; Tysiąc, P. DMI measurements impact on a position estimation with lack of GNSS signals during Mobile Mapping. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2017; p. 012010. [Google Scholar]
  2. Valença, J.; Puente, I.; Júlio, E.; González-Jorge, H.; Arias-Sánchez, P. Assessment of cracks on concrete bridges using image processing supported by laser scanning survey. Constr. Build. Mater. 2017, 146, 668–678. [Google Scholar] [CrossRef]
  3. Nagrodzka-Godycka, K.; Szulwic, J.; Ziólkowski, P. The method of analysis of damage reinforced concrete beams using terrestrial laser scanning. In Proceedings of the 14th SGEM GeoConference on Informatics, Geoinformatics and Remote Sensing, Albena, Bulgaria, 17–26 June 2014; Volume 3, pp. 335–342. [Google Scholar]
  4. Yu, S.; Jang, J.; Han, C. Auto inspection system using a mobile robot for detecting concrete cracks in a tunnel. Autom. Constr. 2007, 16, 255–261. [Google Scholar] [CrossRef]
  5. Zanoni, A.; Maninetti, G.; Cheli, F.; Garozzo, M. Development of a Computer Vision Tracking System for Automated 3D Reconstruction of Concrete Bridges. In Proceedings of the ASME 2014 12th Biennial Conference on Engineering Systems Design and Analysis, American Society of Mechanical Engineers, Copenhagen, Denmark, 25–27 July 2014; p. V003T15A012. [Google Scholar]
  6. Giancola, S.; Giberti, H.; Sala, R.; Tarabini, M.; Cheli, F.; Garozzo, M. A non-contact optical technique for vehicle tracking along bounded trajectories. J. Phys. Conf. Ser. 2015, 1–13. [Google Scholar] [CrossRef]
  7. Giberti, H.; Zanoni, A.; Mauri, M.; Gammino, M. Preliminary study on automated concrete bridge inspection. In Proceedings of the ASME 2014 12th Biennial Conference on Engineering Systems Design and Analysis, Copenhagen, Denmark, 25–27 June 2014; p. V003T15A011. [Google Scholar]
  8. Mills, J.; Barber, D. Geomatics techniques for structural surveying. J. Surv. Eng. 2004, 130, 56–64. [Google Scholar] [CrossRef]
  9. McCrea, A.; Chamberlain, D.; Navon, R. Automated inspection and restoration of steel bridges—A critical review of methods and enabling technologies. Autom. Constr. 2002, 11, 351–373. [Google Scholar] [CrossRef]
  10. Hugenschmidt, J. Concrete bridge inspection with a mobile GPR system. Constr. Build. Mater. 2002, 16, 147–154. [Google Scholar] [CrossRef]
  11. Adhikari, R.; Moselhi, O.; Bagchi, A. Image-based retrieval of concrete crack properties for bridge inspection. Autom. Constr. 2014, 39, 180–194. [Google Scholar] [CrossRef]
  12. Busca, G.; Cigada, A.; Mazzoleni, P.; Tarabini, M.; Zappa, E. Static and Dynamic Monitoring of Bridges by Means of Vision-Based Measuring System; Springer: New York, NY, USA, 2013; pp. 83–92. [Google Scholar]
  13. Jiang, R.; Jáuregui, D.V.; White, K.R. Close-range photogrammetry applications in bridge measurement: Literature review. Measurement 2008, 41, 823–834. [Google Scholar] [CrossRef]
  14. Brilakis, I.; Fathi, H.; Rashidi, A. Progressive 3D reconstruction of infrastructure with videogrammetry. Autom. Constr. 2011, 20, 884–895. [Google Scholar] [CrossRef]
  15. Zhu, Z.; German, S.; Brilakis, I. Detection of large-scale concrete columns for automated bridge inspection. Autom. Constr. 2010, 19, 1047–1055. [Google Scholar] [CrossRef]
  16. Abudayyeh, O.; Al Bataineh, M.; Abdel-Qader, I. An imaging data model for concrete bridge inspection. Adv. Eng. Softw. 2004, 35, 473–480. [Google Scholar] [CrossRef]
  17. Janowski, A.; Nagrodzka-Godycka, K.; Szulwic, J.; Ziolkowski, P. Remote sensing and photogrammetry techniques in diagnostics of concrete structures. Comput. Concr. 2016, 18, 405–420. [Google Scholar] [CrossRef]
  18. Jáuregui, D.V.; White, K.R.; Woodward, C.B.; Leitch, K.R. Noncontact photogrammetric measurement of vertical bridge deflection. J. Bridge Eng. 2003, 8, 212–222. [Google Scholar] [CrossRef]
  19. Giberti, H.; Tarabini, M.; Cheli, F.; Garozzo, M. Accuracy Enhancement of a Device for Automated Underbridge Inspections. In Structural Health Monitoring, Damage Detection & Mechatronics; Springer: Cham, Switzerland, 2016. [Google Scholar]
  20. Nistér, D.; Naroditsky, O.; Bergen, J. Visual odometry. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; Volume 1, pp. I-652–I-659. [Google Scholar]
  21. Stein, G.P.; Mano, O.; Shashua, A. A robust method for computing vehicle ego-motion. In Proceedings of the Intelligent Vehicles Symposium, Dearborn, MI, USA, 5 October 2000; pp. 362–368. [Google Scholar]
  22. Rolland, J.P.; Davis, L.; Baillot, Y. A survey of tracking technology for virtual environments. Fundam. Wearable Comput. Augment. Real. 2001, 1, 67–112. [Google Scholar]
  23. Portugal-Zambrano, C.E.; Mena-Chalco, J.P. Robust range finder through a laser pointer and a webcam. Electron. Notes Theor. Comput. Sci. 2011, 281, 143–157. [Google Scholar] [CrossRef]
  24. Scaramuzza, D.; Fraundorfer, F. Visual odometry [tutorial]. Robot. Autom. Mag. IEEE 2011, 18, 80–92. [Google Scholar] [CrossRef]
  25. Moravec, H.P. Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover; Stanford Univ Ca Dept of Computer Science: Stanford, CA, USA, 1980. [Google Scholar]
  26. Fossati, F.; Sala, R.; Basso, A.; Rocchi, M.G.D. A multicamera displacement measurement system for wind engineering testing. In Proceedings of the 5th European & African Conference on Wind Engineering, Florence, Italy, 19–23 July 2009. [Google Scholar]
  27. Joint Committee for Guides in Metrology (JCGM). Guide 98-3/Suppl. 1:2008, Evaluation of Measurement Data—Supplement 1 to the Guide to the Expression of Uncertainty in Measurement—Propagation of Distributions Using a Monte Carlo Method; JCGM, 2008.
  28. Moschioni, G.; Saggin, B.; Tarabini, M.; Hald, J.; Morkholt, J. Use of design of experiments and Monte Carlo method for instruments optimal design. Meas. J. Int. Meas. Confed. 2013, 46, 976–984. [Google Scholar] [CrossRef]
  29. Müller, M.; Surmann, H.; Pervölz, K.; May, S. The accuracy of 6D SLAM using the AIS 3D laser scanner. In Proceedings of the 2006 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Heidelberg, Germany, 3–6 September 2006; pp. 389–394. [Google Scholar]
  30. Hall, D.L.; Llinas, J. An introduction to multisensor data fusion. Proc. IEEE 1997, 85, 6–23. [Google Scholar] [CrossRef]
  31. Taniguchi, M.; Tresp, V. Averaging regularized estimators. Neural Comput. 1997, 9, 1163–1178. [Google Scholar] [CrossRef]
  32. Ferrari, D.; Giberti, H. A genetic algorithm approach to the kinematic synthesis of a 6-DoF parallel manipulator. In Proceedings of the 2014 IEEE Conference on Control Applications (CCA), Juan Les Antibes, France, 8–10 October 2014; pp. 222–227. [Google Scholar]
  33. Moschioni, G.; Saggin, B.; Tarabini, M. 3-D Sound Intensity Measurements: Accuracy Enhancements with Virtual-Instrument-Based Technology. IEEE Trans. Instrum. Meas. 2008, 57, 1820–1829. [Google Scholar] [CrossRef]
  34. Silvestri, M.; Confalonieri, M.; Ferrario, A. Piezoelectric actuators for micro positioning stages in automated machines: Experimental characterization of open loop implementations. FME Trans. 2017, 45, 331–338. [Google Scholar] [CrossRef]
Figure 1. Scheme of the measurement chain.
Figure 2. Laser spots on the projection plane (a) and scheme of the cart rotation measurement (b).
Figure 3. Scheme of the system for the measurement of the cart yaw (a) and extraction of the yaw angle from the image (b).
Figure 4. Pictures for the identification of the cart yaw (a) and pitch (b).
Figure 5. Laser scanner reference systems on the cart (a) and 3D scan of the set of four markers located on the cart, as seen from the laser scanner (b).
Figure 6. 360° view of the scan performed in the GEXCEL laboratories.
Figure 7. Reconstruction of the corridor.
Figure 8. Differences between the static scan and a dynamic scan in indoor tests: 3D view (a) and top view (b).
Figure 9. Results of dynamic scans performed with two different directions of motion of the cart.
Figure 10. Rear (a) and frontal (b) views of the experimental setup used for the method validation.
Figure 11. Images of the facade reconstructed by the moving cart without the laser misalignment compensation (a,b) and after the misalignment compensation (c).
Table 1. Summary of the uncertainties obtained with the proposed method.

Quantity                   Uncertainty
Displacement x (motion)    0.3 mm
Displacement y (lateral)   0.1 mm
Displacement z (vertical)  0.1 mm
Roll                       0.04°
Pitch                      0.03°
Yaw                        0.03°
Table 2. Descriptive statistics of the error of the dynamic scan in indoor tests.

Quantity          Value (mm)
Mean error        2.1
RMS error         3.9
95th percentile   6.1
Maximum error     7.2
Table 3. Descriptive statistics of the error of the dynamic scan in outdoor tests.

Quantity          Value (mm)
Mean error        3.1
RMS error         5.8
95th percentile   10.1
Maximum error     27.2
