Article

Automatic Point-Cloud Registration for Quality Control in Building Works

by
Rocio Mora
,
Jose Antonio Martín-Jiménez
,
Susana Lagüela
and
Diego González-Aguilera
*
Department of Cartographic and Land Engineering, Higher Polytechnic School of Avila, University of Salamanca, Hornos Caleros 50, 05003 Ávila, Spain
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(4), 1465; https://doi.org/10.3390/app11041465
Submission received: 24 December 2020 / Revised: 31 January 2021 / Accepted: 3 February 2021 / Published: 5 February 2021
(This article belongs to the Section Civil Engineering)

Abstract

Total and automatic digitalization of indoor spaces in 3D implies a great advance for building maintenance and construction tasks, which currently require on-site visits and manual work. Terrestrial laser scanners (TLS) have been widely used for these tasks, although the acquisition methodology with TLS systems is time consuming, and each point cloud is acquired in a different coordinate system, so the user has to post-process the data to clean it and obtain a single point cloud of the whole scenario. This paper presents a solution for the automatic data acquisition and registration of point clouds from indoor scenes, designed for point clouds acquired with a terrestrial laser scanner (TLS) mounted on an unmanned ground vehicle (UGV). The methodology developed allows the generation of one complete dense 3D point cloud consisting of all the acquired point clouds registered in the same coordinate system, reaching an accuracy below 1 cm in section dimensions and below 1.5 cm in wall thickness, which makes it valid for quality control in building works. Two study cases corresponding to building works were chosen for the validation of the method, showing the applicability of the methodology developed for tasks related to the control of the evolution of the construction.

1. Introduction

The demand for 3D models of building interiors has grown in recent years due to (1) the need to have as-built 3D models available for tasks related to planning and maintenance in buildings and (2) the proliferation of the building information modelling (BIM) standard [1,2,3,4]. The most common techniques for the generation of 3D models have been photogrammetry [5,6,7] and laser scanning [8,9]. Regarding the first, evolution has come with advances in computer vision and structure-from-motion techniques, which made possible the computation of the orientation and self-calibration parameters of a high number of images without a priori knowledge of position and orientation. However, aspects such as the presence of homogeneous (i.e., textureless) surfaces, changes in illumination, the need to scale the model and the computational cost have been the main drawbacks when generating quality 3D models of indoor scenes with photogrammetry. With respect to the second technique, terrestrial laser scanners (TLS) allow the generation of high-precision 3D models [10,11,12,13], but the acquisition requires meticulous planning, together with a high number of acquisition sets, and the application of an efficient registration technique to merge all the point clouds acquired into a single point cloud within a unique coordinate system. This issue has been studied extensively in recent years, and several solutions have been developed. Table 1 shows a comparison between different approaches according to five aspects: (i) the need for artificial targets, (ii) the need for RGB images, (iii) the number of point clouds that can be aligned with each method (i.e., only two point clouds or several point clouds), (iv) the minimum overlap between scans and (v) the sensitivity to noise. For example, Ref. [14] developed a fully automatic registration method for point clouds using special targets attached to the objects. The algorithm searches for the specific templates of these targets using radiometric and geometric information (shape, size and planarity) and evaluates invariant parameters between each scan to calculate the registration. Although this method works, it depends on the placement of artificial targets in the scene, which is not always possible and requires preliminary work and a detailed study of the environment. Other studies have evaluated the registration using shape patterns between scans, such as planes [15]. In that approach, two point clouds are coarsely aligned using plane extraction to find tie points. This method is time-consuming and works only for pairs of point clouds, so it is not directly applicable to a set of scans. Ref. [16] implemented a registration method between scans based on linear and planar shapes to find homologous points and apply a coarse alignment. In [17], 2D images were used in combination with 3D models to improve point cloud registration. The two-dimensional images are used to add texture to the point clouds and to provide information about common features, found with the SURF descriptor. With this, the transformation matrix is computed and refined with different strategies such as RANSAC. This method reaches sufficient results only when the overlap between scans is good.
To solve this problem, Ref. [18] presented a novel methodology to register outdoor and indoor point clouds using a two-step process: first, common features are extracted using the RGB information acquired by a digital camera, and then the alignment is refined using iterative closest point (ICP) or plane-matching techniques, which work properly even when the overlap between scans is poor. Both methods are robust and suitable for indoor and outdoor scenarios, but a hybrid calibrated system composed of an RGB camera and a laser scanner is mandatory. Moreover, both methods are based on the use of RGB images and textures, so they are not applicable to scans without that information. A fully image-based methodology was proposed by [19]. In that work, a four-step process is performed: (1) creation of a scan network; (2) coarse alignment; (3) filtering of outliers and (4) fine alignment based on the minimum loop expansion (MLE) method. The authors use panoramic images associated with each scan to identify common features between them, so it is not possible to use this methodology for scans without these images or textures. Along similar lines, Ref. [20] proposed a methodology to organise the scans in a network based on the correspondences detected between pairs of scans. Once the scans are connected, a coarse registration is applied, and finally a fine registration performs the alignment process. Although the process is effective, the accuracies achieved are between 9 and 32 mm, values similar to those currently achieved with mobile laser scanner (MLS) systems.
MLS systems allow the generation of 3D models in a single acquisition, with the direct result of a single point cloud and, consequently, one coordinate system [21,22]. However, the metric quality of the 3D models depends on the knowledge of the trajectory of the MLS, which is complicated to establish in indoor scenes due to the low coverage of the GNSS signal. Simultaneous localization and mapping (SLAM) techniques are an efficient alternative for the computation of the trajectory but do not guarantee the precision required in applications such as quality control during construction works. The reason is that most MLS systems are equipped with low-cost, lower-performance laser devices in order to cope with the weight requirements [23], such as the Velodyne HDL-32 or the Hokuyo UTM in the Zeb-Revo system, which present a maximum nominal precision between 2 and 3 cm [24], not comparable with the millimetre precision of TLS systems [25]. Di Filippo et al. [21] used an MLS to create 3D models of existing buildings in a short time. However, the errors are between 1 and 3 cm, which is not acceptable for control tasks in building works, where the maximum deviation allowed for transversal sections in columns [26] is ±10 mm.
Given the higher precision of TLS compared to MLS, TLS has been used for quality control of built environments on repeated occasions, in combination with artificial referencing systems such as artificial targets for the semi-automatic registration of the different point clouds acquired [22]. This requires the intervention of the user in some tasks, especially in the planning of the acquisition and the registration of the point clouds into the same coordinate system.
Regarding user intervention with MLS, the systems in the market that are adequate for indoor mapping must be pushed or carried by the user [23]. Thus, their use is limited in scenarios with elements of risk, such as obstacles and hanging pieces, as well as in difficult-access areas. This limits their use in most “under construction” scenarios.
Aiming to minimize user intervention and achieve better results with SLAM techniques and high-quality point clouds, Ref. [27] presented a SLAM solution for robotic indoor mapping registration in which static and dynamic scans are used together to achieve optimal results. Static scans provide high-quality point clouds, and dynamic scans provide the trajectory and the 3D point clouds of the areas hidden between static scans. This technique allows the generation of high-quality point clouds in an automatic way using SLAM algorithms, but the global accuracy of the process is between 25 and 50 mm, exceeding the limits allowed for quality control of works in construction [26]. Around 3 mm of this global error comes from the nominal accuracy of the LiDAR used (a SICK 2D laser scanner), so it can be deduced that more than 20 mm of error is introduced by the data processing.
Thus, the generation of quality, complete 3D point clouds of building interiors requires the high precision of TLS combined with a dynamic measurement procedure able to cover the whole scene. However, a methodology that exploits the advantages of this combination has not been established yet. For this reason, this paper focuses on two main goals: (i) minimizing the user interaction during the infield works and the processing tasks and (ii) establishing an automatic workflow to register all the scans without any target in the scene and without user intervention. Taking this into account, this paper presents a new methodology for the automatic registration of point clouds acquired with an unmanned ground vehicle (UGV). Nevertheless, the approach is applicable to point clouds coming from other photogrammetric or laser scanning sensors. The results are evaluated for their use in the quality control of the different phases of a construction work.
Regarding point cloud registration, different approaches have been developed in the scientific community.
The pre-alignment of point clouds is usually performed based on detectors and descriptors, that is, algorithms that detect points of interest in 3D and describe them according to their geometric or radiometric characteristics. There are several 3D detectors and descriptors, each focusing on different characteristics [28,29]. Thus, the optimal detectors and descriptors depend on the particularities of each case [30], and their selection affects the result.
The scale invariant feature transform (SIFT) detector [31] has been widely used in automatic registration tasks, since it is scale-invariant. Although it was initially developed to work with 2D information from images, it was adapted by [32] to operate in 3D, using the principal curvature of the points in place of the intensity of the pixels. Ref. [33] used the SIFT detector to automatically register 3D point clouds without artificial targets, but the test was performed on a single building without surroundings. Ref. [34] developed a methodology in which the 3D point cloud and the associated RGB/intensity images were used together to compute the registration. Their method was highly sensitive to the size of the overlapping area and depended on the acquisition of images associated with the scans. Moreover, the SIFT detector is computationally expensive, and its performance is optimal for the registration of two point clouds [30] but not for registering a higher number of point clouds, as is the case in this study. Similar results are obtained with the normal aligned radial feature (NARF) detector, which takes advantage of the key points on borders and significant geometric structures [30]. The set of resultant key points is sparse, so it works only for the alignment of pairs of point clouds individually.
The Harris 3D detector finds corners and edges through the analysis of the relation of each point with its neighbours. A set of point rings is generated for each group of points, and their centroid is determined. A paraboloid is fitted to the neighbouring points around each centroid, and PCA is applied to compute the least-squares surface of the paraboloid. Finally, the changes in the surface are evaluated through the analysis of the derivatives of the surface, in combination with a continuous Gaussian function [35,36]. The Harris 3D detector is invariant to rotation, scale, variations in luminosity and the presence of noise.
Once the feature points have been detected, a description process is performed to extract the corresponding points between the point clouds. The description process analyses each point in relation to its neighbours, focusing on its geometric or radiometric properties. The signature of histograms of orientations (SHOT) descriptor [37,38] divides a sphere centred at the feature point into several bins and collects a histogram of normal angles in each bin to build the descriptor. Other descriptors, such as USC or 3DSC [39], also consider a sphere superimposed on the feature point and divide it into smaller segments. In these cases, the number of points contained in each segment is computed and weighted inversely by the segment density. These descriptors are not robust enough in noisy environments [39]. Ref. [40] developed the point feature histogram (PFH) descriptor, which, besides searching for correspondences between points, classifies them into primitive groups such as corners, edges or planes [29].
It should be highlighted that the scenarios of construction works present similarities and repetitive patterns, especially in the early stages of the works, where the predominant elements are pillars, walls, doors and windows, all presenting similar shapes. Moreover, the scans of these scenarios are affected by illumination changes. All these factors complicate the detection and characterization of corresponding points between point clouds, since two different points from different point clouds can be assigned very similar descriptions and be incorrectly considered as corresponding due to their similar characteristics. In order to avoid all the previous drawbacks and to develop a methodology that is easy to use, the detector selected was Harris 3D [41,42], in combination with the point feature histogram (PFH) descriptor.
The paper is structured as follows: after the Introduction, Section 2 presents the methodology developed and the technology used; Section 3 shows the results obtained after the application of the methodology to the quality control of construction works, based on statistical analysis with robust estimators; Section 4 presents the conclusions drawn from the work.

2. Materials and Methods

2.1. Materials

Unmanned Ground Vehicle (UGV) and Terrestrial Laser Scanner (TLS)

Data acquisition for this work was performed with a TLS, a Faro Focus 3D 330, mounted on a UGV controlled by an automatic route planner [43]. The path planner uses a 2D drawing of the building as input data and calculates the number of stations needed and the position of each scan point according to the overlap and the accuracy established [43]. Using this approach, the UGV can work autonomously, without supervision, following the planned path and operating at night to avoid interruptions of the building tasks. The TLS Faro Focus 3D 330 (Figure 1) is used as a static scanning system through a Stop&Go process. In the Stop&Go mode [44], the TLS captures data when the UGV is stopped, making a conventional static TLS measurement. Once the scan is finished, the UGV continues its route to the next target point, where another scan is performed. Thus, the system is fully autonomous and acquires data with a precision of ±2 mm at 10 m.
The Faro Focus 3D 330 consists of an infrared laser scanner that measures coordinates directly using the phase difference of the laser beam, with a range of 0.60–330 m and a wavelength of 1550 nm. The field of view covers 300° vertically and 360° horizontally, with a resolution of 0.009° and a measurement rate of 122,000–976,000 points per second, also registering radiometric information for each point. The nominal precision of the TLS is ±2 mm at 10 m distance, in normal illumination conditions on a 90% reflective surface, with a beam divergence of 0.19 mrad. All these characteristics ensure that the quality standards in construction can be achieved and determined its selection as the scanning system. The Faro Focus system was mounted on a UGV, a Robotnik Guardian, with a payload of 100 kg, a speed of 3 m/s, an autonomy between 3 and 10 h in normal performance and the capacity to detect occlusions and obstacles using two SICK S300 Expert laser scanners integrated at the front and the back of the UGV. Additionally, the path planning developed by [43] was incorporated to give the UGV full autonomy in the acquisition tasks.

2.2. Method

The method developed for the automatic registration follows an execution line structured in different steps (Figure 2). The laser data acquired with the robotic system (Step 1) are subjected to automatic pre-processing (Step 2). The pre-processing consists of a coarse noise filtering based on the analysis of the coordinate histogram, followed by a homogenization of the resolution of the resultant point clouds through voxelization [45] and a filtering process. Then, the registration strategy developed is applied (Steps 3 and 4): (i) coarse registration of each point cloud based on 3D detectors and descriptors (Step 3) [46]; (ii) fine registration with the iterative closest point (ICP) technique (Step 4) [47]. The result is one 3D point cloud joining all the acquired point clouds into one unique coordinate system, obtained with no user interaction. The last step of the methodology is the quality control of the construction works, focused on certain constructive elements (Step 5).
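For illustration only, the following Python sketch chains the main steps of this pipeline using the open-source Open3D library (it is not the authors' implementation); file names and parameter values are hypothetical, and the coarse alignment of Step 3 is reduced to an identity placeholder, since its components are sketched in Section 2.2.3.
```python
import numpy as np
import open3d as o3d

def register_scans(paths, voxel_size=0.005, icp_dist=0.02):
    """Sketch of the pipeline: pre-process each scan (Step 2), then
    register every scan against the growing merged cloud (Steps 3-4)."""
    clouds = []
    for path in paths:
        pcd = o3d.io.read_point_cloud(path)           # Step 1: one acquired scan
        pcd, _ = pcd.remove_statistical_outlier(      # Step 2: SOR noise filtering
            nb_neighbors=20, std_ratio=2.0)
        pcd = pcd.voxel_down_sample(voxel_size)       # Step 2: homogenization
        clouds.append(pcd)

    merged = clouds[0]
    for pcd in clouds[1:]:
        init = np.eye(4)  # placeholder for the Step 3 Harris 3D/PFH coarse alignment
        icp = o3d.pipelines.registration.registration_icp(
            pcd, merged, icp_dist, init,              # Step 4: fine ICP registration
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        pcd.transform(icp.transformation)
        merged += pcd                                 # single coordinate system
    return merged
```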

2.2.1. Path Planning

The UGV is equipped with an automatic path planner able to calculate the optimal route for the data acquisition. The workflow of the route planner [43] is shown in Figure 3.
This planner takes the 2D CAD files of each floor, extracted from the BIM model, as input data. Constructive elements, such as walls, columns or floors, are grouped in layers. Then, each layer is discretized into points with the same resolution (a configurable parameter), and a binary occupancy map is generated. A classification of positions into non-navigable (e.g., holes and constructive elements) and navigable is performed. The next step estimates the candidate positions where the scanner device can make a scan by performing a grid distribution. The dimensions of the UGV are taken into consideration in this step, leaving a security margin of 1 m around each building element. The output of this step is a map with candidate positions, construction element points and non-candidate positions. Before designing the optimal route, a visibility analysis is performed on each candidate position in order to know the theoretical area of building elements that would be captured. The visibility analysis is made by a ray-tracing algorithm in combination with the maximum scanner range. The scanner range is a criterion to ensure good density and quality of the acquired data and is used as the limit of each candidate position analysis. For the study case, the maximum scanner range was 10 m. Finally, an optimization of the candidate positions is performed using the backtracking algorithm [48]. This algorithm minimizes the number of scan points while ensuring the minimum coverage of the elements selected. For the study case, the minimum coverage was 90%. Once the positions have been optimized, the optimal route is created by solving the travelling salesman problem with the ant colony optimization algorithm [49]. The optimal route is the path with the minimum number of scans that covers the whole scenario with the setup parameters defined.
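As a schematic illustration of the candidate-position stage only (not the actual planner of [43]), the sketch below rasterizes the discretized element points into a binary occupancy map, inflates the obstacles by the 1 m security margin and returns the remaining navigable cells; the grid extent and cell size are assumed example values.
```python
import numpy as np
from scipy.ndimage import binary_dilation

def candidate_positions(element_pts, cell=0.1, margin=1.0, extent=(50.0, 50.0)):
    """Toy candidate-position stage: rasterize the discretized building
    elements into a binary occupancy map, inflate obstacles by the
    security margin and return the remaining cells as candidates."""
    nx, ny = int(extent[0] / cell), int(extent[1] / cell)
    occ = np.zeros((nx, ny), dtype=bool)
    idx = np.clip((element_pts / cell).astype(int), 0, [nx - 1, ny - 1])
    occ[idx[:, 0], idx[:, 1]] = True                 # non-navigable cells
    r = int(np.ceil(margin / cell))                  # 1 m security margin in cells
    blocked = binary_dilation(occ, iterations=r)     # inflated obstacle map
    return np.argwhere(~blocked) * cell              # candidate scan positions (m)
```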
During the execution of the route, the path planner constantly works to ensure the correct performance of the route. In fact, if any scan point is not reachable due to an obstacle (very common in building works), the path planner re-calculates an alternative scan point and, thus, the resulting route. The calculation of this alternative trajectory is carried out on the fly, as soon as the impossibility of reaching a scan point is detected. As a result, the UGV can acquire the data automatically, guaranteeing the setup conditions without user interaction.

2.2.2. Pre-Processing

Commonly, the point clouds acquired by 3D laser scanners contain noise, that is, points measured with positional errors. These errors are due to vibrations, reflections, far points or moving elements during the scan works, so it is important to filter them to prevent their influence on the subsequent calculations.
The first operation is an automatic noise elimination using the statistical outlier removal (SOR) filter [50]. The SOR filter automatically eliminates the noise present in the point clouds by performing a proximity analysis of each point with respect to its neighbouring points (Figure 4). For this case, the fast statistical outlier removal (FSOR) proposed by Balta et al. [51] was used to minimize the processing time and optimize resources.
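A minimal NumPy/SciPy transcription of the SOR criterion is given below; the neighbourhood size k and the standard-deviation ratio are illustrative values, and the FSOR acceleration of [51] (spatial subdivision of the cloud) is omitted.
```python
import numpy as np
from scipy.spatial import cKDTree

def sor_filter(points, k=20, std_ratio=2.0):
    """Statistical outlier removal: discard points whose mean distance to
    their k nearest neighbours deviates from the global mean of that
    measure by more than std_ratio standard deviations."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)        # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)            # per-point proximity measure
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d < threshold]
```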
After cleaning the point cloud, a passthrough filter [52,53] based on the coordinate histogram of the point clouds is applied. Because the Faro Focus captures data in the range of 0.60–330 m, the system measures points that are too far from the scan station and consequently useless for the application (Figure 5a,c). This automatic filter removes the farthest points by computing the coordinate distribution along the different axes (x, y, z). For each axis, the coordinates are converted to a scalar field, and its distribution histogram is filtered according to the position with the highest accumulation of points. This procedure was applied in the three directions, obtaining a first filtered point cloud (Figure 5b,d).
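A minimal sketch of such a histogram-based passthrough filter could look as follows; the retention rule (growing a contiguous interval around the histogram peak until it holds a given fraction of the points) is an assumption, since the exact cut-off criterion is not detailed here.
```python
import numpy as np

def passthrough_axis(points, axis, bins=200, keep_fraction=0.99):
    """Keep the coordinate interval around the histogram peak that
    accumulates `keep_fraction` of the points along one axis."""
    coords = points[:, axis]
    hist, edges = np.histogram(coords, bins=bins)
    lo = hi = int(np.argmax(hist))                # bin with highest accumulation
    kept = hist[lo]
    while kept < keep_fraction * len(coords) and (lo > 0 or hi < bins - 1):
        grow_lo = hist[lo - 1] if lo > 0 else -1         # candidate bin below
        grow_hi = hist[hi + 1] if hi < bins - 1 else -1  # candidate bin above
        if grow_hi >= grow_lo:
            hi += 1
            kept += hist[hi]
        else:
            lo -= 1
            kept += hist[lo]
    mask = (coords >= edges[lo]) & (coords <= edges[hi + 1])
    return points[mask]

# applied along the three directions in turn:
# for ax in range(3): cloud = passthrough_axis(cloud, ax)
```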
Finally, a homogenization of the point cloud is performed. It is well known that 3D point clouds acquired with a laser scanner present heterogeneous point density: the further from the station point, the lower the point density. The voxel-grid filter [53,54] allows the generation of a uniform dataset by dividing the space into cubes of custom size and replacing the points in each cube by their centroid. The voxelization [45] is performed in an automatic way: the size of the voxel depends on the density of the point cloud, which is calculated through the study of the mean distance of each point to its closest neighbours. In this way, the process is completely automated, as well as adaptive to the data of each scenario.
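The adaptive voxelization step can be sketched with Open3D as follows; deriving the voxel size from the mean nearest-neighbour distance follows the description above, while the scale factor applied to that distance is an assumed value.
```python
import numpy as np
import open3d as o3d

def adaptive_voxel_filter(pcd, scale=3.0):
    """Voxel-grid homogenization with a data-driven voxel size: the mean
    distance of each point to its nearest neighbour, times a factor."""
    nn_dist = np.asarray(pcd.compute_nearest_neighbor_distance())
    voxel_size = scale * nn_dist.mean()          # adapts to each scenario
    return pcd.voxel_down_sample(voxel_size)     # one centroid per occupied voxel
```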

2.2.3. Alignment

Once the point clouds are pre-processed, their pre-alignment is performed based on detectors and descriptors. In the proposed methodology, the detector selected was Harris 3D [41,42] in combination with the point feature histogram (PFH) descriptor. Finally, an ICP alignment was applied.
The Harris 3D detector presents good results in construction scenarios, since its characteristics are optimal for the identification of points in pillars, vertical walls and ceilings, through their corners and edges. It is also fast and detects large sets of interest points with good correspondence [55]. The Harris 3D detector allows for several variations depending on the evaluation of the trace and the determinant (det) of the covariance matrix (Cov) [41]. In all the variations, the response of the key points r(x, y, z) is estimated, but different criteria are applied in each case. The original Harris 3D detector uses Equation (1), while Harris 3D Lowe follows Equation (2) and the Harris 3D Noble variant (Figure 6) applies Equation (3), where the trace of the covariance matrix is not squared.
$r(x, y, z) = \det(\mathrm{Cov}(x, y, z)) - k \left( \mathrm{trace}(\mathrm{Cov}(x, y, z)) \right)^2$ (1)
$r(x, y, z) = \dfrac{\det(\mathrm{Cov}(x, y, z))}{\left( \mathrm{trace}(\mathrm{Cov}(x, y, z)) \right)^2}$ (2)
$r(x, y, z) = \dfrac{\det(\mathrm{Cov}(x, y, z))}{\mathrm{trace}(\mathrm{Cov}(x, y, z))}$ (3)
These changes among the Harris 3D variants modify the evaluation of the relation between the determinant and the trace of the covariance matrix, which affects the result of the point detection [42]. The Harris 3D method includes a constant k that depends on the point cloud data, while Harris 3D Lowe and Harris 3D Noble do not consider this factor and are therefore independent of the input data [56]. This fact led to the selection of Harris 3D Noble as the optimal point detector in the methodology developed.
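As an illustration of Equation (3), the sketch below computes the Noble response per point with NumPy/SciPy. Two simplifications are assumptions: the covariance matrix is built here from the neighbourhood coordinates (implementations such as PCL's HarrisKeypoint3D build it from the surface normals instead), and the non-maximum suppression used to extract the final key points is omitted.
```python
import numpy as np
from scipy.spatial import cKDTree

def harris3d_noble(points, radius=0.10, eps=1e-12):
    """Noble response r = det(Cov) / trace(Cov), Equation (3), evaluated
    on the covariance matrix of each point's spherical neighbourhood."""
    tree = cKDTree(points)
    responses = np.zeros(len(points))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:
            continue                              # degenerate neighbourhood
        cov = np.cov(points[idx].T)               # 3x3 covariance matrix
        responses[i] = np.linalg.det(cov) / (np.trace(cov) + eps)
    return responses  # key points would be the local maxima of this response
```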
Since the Harris 3D detector focuses mainly on corners and edges, and since the PFH descriptor is more robust against changes in viewpoint and delivers reliable information between point clouds [30,41], the latter was chosen for this work: different points of view, illumination changes and good overlap between scans are all common in these scenarios.
The 3D PFH descriptor describes each point in relation to its neighbours, generalizing the normal vectors (n) and the mean curvature through three angular variables (α, ϕ, θ) and representing these values in a histogram. The histogram has a unique and invariant signature for each point. Given two points, p and q, a reference system can be created consisting of three unit vectors (u, v, w) [57]. Thus, the difference between the normal vectors at points p (np) and q (nq) is determined with three angular variables (Equations (4)–(6)), where d represents the distance between p and q.
$\alpha = \arccos(v \cdot n_q)$ (4)
$\phi = \arccos\left( u \cdot \dfrac{q - p}{d} \right)$ (5)
$\theta = \arctan(w \cdot n_q,\ u \cdot n_q)$ (6)
where:
  • $u$ is the surface normal at $p$ ($u = n_p$)
  • $v = u \times \dfrac{q - p}{d}$
  • $w = u \times v$
  • $d = \lVert q - p \rVert_2$
The three angles are combined with the distance d, and together they form the final descriptor, built as the concatenation of the histograms of the four variables.
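A direct NumPy transcription of Equations (4)–(6) for a single pair of oriented points, assuming unit-length normals, is given below.
```python
import numpy as np

def pfh_pair_features(p, q, n_p, n_q):
    """Angular features (alpha, phi, theta) and distance d between two
    oriented points, following Equations (4)-(6)."""
    pq = q - p
    d = np.linalg.norm(pq)                        # d = ||q - p||_2
    u = n_p                                       # surface normal at p
    v = np.cross(u, pq / d)
    w = np.cross(u, v)
    alpha = np.arccos(np.clip(np.dot(v, n_q), -1.0, 1.0))   # Equation (4)
    phi = np.arccos(np.clip(np.dot(u, pq / d), -1.0, 1.0))  # Equation (5)
    theta = np.arctan2(np.dot(w, n_q), np.dot(u, n_q))      # Equation (6)
    return alpha, phi, theta, d
```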
The 3D PFH descriptor is applied to all the points identified as feature points by the Harris 3D Noble detector, in such a way that each point is associated with a description, and the coincidence of descriptions is used to determine correspondences between points from different point clouds. The identification of corresponding points allows the computation of the transformation of the point clouds to the same coordinate reference system. In addition, the description of each point allows its classification into different primitives, such as edges, corners or planes.
The invariance to rotation, scale, variations in luminosity and the presence of noise is the reason for the selection of the Harris 3D–PFH combination for the study of indoor building scenes. This results in the efficient detection of points of interest in corners and borders and the assignment of correspondences between feature points of different point clouds. However, in cases with repetitive elements such as pillars, some erroneous correspondences can appear. In order to minimize this effect, the methodology proposed includes a restriction of the maximum distance between corresponding points. This restriction avoids erroneous point correspondences produced by the similarity between different zones.
The filter that discriminates according to the distance between points is graphically explained in Figure 7a. It works as follows: corresponding points for each point of interest are searched for within a distance threshold. This search, together with the initial results of the detector/descriptor combination, allows us to refine the registration between point clouds, thus improving the results. This step is reiterated until the root mean square error (RMSE) falls below a threshold established according to the criteria for quality control in the works. The purpose of this study is to check and validate sections of the building based on variations and translations of the built elements; the tolerances allowed are ±10 mm for the first case and ±24 mm for the second [26]. To ensure that all the data meet these tolerances, the quality threshold is set to 5 mm.
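A sketch of this distance gating is shown below, under the assumption that the descriptor matching stage returns candidate correspondences as an array of index pairs; the threshold value is illustrative.
```python
import numpy as np

def gate_correspondences(src_pts, dst_pts, pairs, max_dist=1.0):
    """Reject descriptor matches whose 3D separation exceeds a distance
    threshold, suppressing false matches between similar but distant
    elements (e.g., twin pillars); returns the kept pairs and their RMSE."""
    dist = np.linalg.norm(src_pts[pairs[:, 0]] - dst_pts[pairs[:, 1]], axis=1)
    keep = dist < max_dist
    rmse = np.sqrt(np.mean(dist[keep] ** 2)) if keep.any() else np.inf
    return pairs[keep], rmse  # reiterate until rmse < the 5 mm quality threshold
```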
Once the Harris 3D detector and the PFH descriptor have been computed and a first feature-based coarse alignment has been applied to all the point clouds, some of them are correctly aligned, but others are not. This is due to the similarity between scans, which can cause imperfections in the final alignment. However, the point clouds are close to their real positions, and the generation of one unique 3D point cloud formed by all the point clouds can be completed with a final adjustment of the registration, which eliminates the errors. The final adjustment is performed using the ICP algorithm [58]. ICP is an iterative process in which the distance error between point clouds is minimized until the point clouds are perfectly aligned. Each iteration consists of three steps: (i) finding pairs of corresponding points; (ii) estimating the transformation that minimizes the distance between corresponding points; (iii) applying the transformation to the point clouds. Since ICP is an iterative algorithm, a good initial transformation is mandatory to make its convergence possible [59]. This initial estimation comes from the coarse alignment, which already provides a good estimation of the point cloud positions.
It should be highlighted that the registration method based on the compatibility between normal vectors (Figure 7b) is not applicable in these scenarios, due to the presence of repetitive structures with similar characteristics (squared pillars, vertical walls, doors, among others) that result in similar normal-vector-based descriptions despite being different elements. This invalidity of the normal vector method was verified by applying it to the study cases, where the ICP algorithm failed to converge.
The ICP algorithm can be applied in a point-to-point or in a point-to-plane mode. The first performs the adjustment between point clouds by minimizing the distance between each point in the reference point cloud and its corresponding point in the point cloud to register. The second adjusts the point cloud by minimizing the distance between each point in the reference point cloud and the tangent plane at its corresponding point in the point cloud to register. Studies have shown that point-to-point ICP is more robust to Gaussian noise than point-to-plane ICP [60], although the latter requires fewer iterations and is more efficient [60]. Since the point clouds are cleaned and filtered from noise before starting the registration process, the point-to-point ICP method was chosen for this work.
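With a library such as Open3D, the fine registration step in point-to-point mode reads as follows; the file names, the correspondence distance and the identity initialization (which in this work would come from the coarse Harris 3D/PFH alignment) are placeholders.
```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_i.ply")    # hypothetical file names
target = o3d.io.read_point_cloud("merged.ply")
init = np.eye(4)  # here, the output of the coarse Harris 3D/PFH alignment

# Point-to-point estimation minimizes ||T(p) - q||^2 over corresponding
# pairs; point-to-plane would minimize ((T(p) - q) . n_q)^2 instead and
# require normals on the target (target.estimate_normals()).
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, init,                   # 0.05 m: illustrative gate
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(result.transformation)           # apply the fine alignment
print(result.fitness, result.inlier_rmse)         # convergence diagnostics
```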
The main advantage of the double-step Harris 3D/PFH and point-to-point ICP registration is that there is no need for user interaction or for placing targets in the scene to serve as points of interest, because the points of interest in the proposed method are naturally present in the scene.

3. Results and Discussion

3.1. Case Study

The validation of the proposed methodology is performed through its application to two different scenarios of a residential building located in Badalona (Spain). The building has an asymmetric U-shaped floor plan (Figure 8a). At the time of the acquisition, the building was under construction, with a life expectancy of 50 years. The floor plan covers 25,000 m² and is vertically distributed into two basements dedicated to parking, one base floor dedicated to commercial offices and upper floors dedicated to housing. The building is divided into two blocks, north and south, with 14 and 13 floors, respectively.
The foundations of the building are constructed with rectangular bases, 200 × 45 cm and 260 × 45 cm in size, individually or in groups of two elements, with capped piles for load transmission. The retaining walls are made of concrete with temporary anchoring.
At the time of data acquisition, different parts of the building were at different stages of construction, allowing the testing of the proposed methodology in different environments and complexities of construction works. Two floors were selected for the data acquisition: (i) Floor 1 in the north block, which covers 810 m² (Figure 8b), and (ii) the ground floor in the south block, with 845 m² (Figure 8c). These floors were selected due to their construction state and geometrical complexity, which can be considered representative of construction works. During acquisition, walls, pillars, holes (for the lift and drainpipes), floors and ceilings were present in the scenes, as well as construction materials such as scaffolds and boards.

3.2. Data Acquisition

Data acquisition was performed with the UGV described in Section 2.1. For the first study case, seven stations were needed, in the Stop&Go strategy, for the complete acquisition (Figure 9b). For the second study case, a total of 11 stations were needed to cover all the spaces (Figure 9d). More stations were necessary in the second study case, since this floor was in a more advanced state of construction and there were more occlusions between elements. The positions of the stations were optimized by the path planner programmed in the UGV [43] in order to minimize the trajectories (Figure 9a,c) while acquiring all the information required. Since both floors were in the construction phase (Figure 9b), the trajectory was required to include all the pillars in the data acquisition. The resolution and quality of the acquired data were adjusted as a compromise between obtaining all the information needed and minimizing the acquisition time and the quantity of data generated. Thus, a 60% overlap between stops was established, together with a scanning resolution of 4 mm at 20 m. The overlap percentage was selected to optimize the automatic registration process in terms of precision and reliability. In addition, given the minimum acquisition range of the TLS (0.6 m), all points at a distance below 0.8 m from the TLS were discarded from the acquisition (Table 2).
The time required per scan position was approximately 3 min. Adding the time required for the trajectory between stops gives a total acquisition time of 25 and 45 min for the first and second study cases, respectively.
For the first and second study cases, the result of the acquisition was six and eleven point clouds, respectively, with 24 million points each. To have a homogeneous and clean point cloud for the next steps, the raw point clouds were filtered as described in Section 2.2. In this application, the voxel size was set to 0.5 cm, resulting in point clouds of about 7 million points each with a homogeneous data distribution. The UGV can operate at night and can go over piles of rubble as well as muddy and wet surfaces thanks to its caterpillar tracks.

3.3. Automatic Registration

The automatic registration procedure described in Section 2.2 was applied to the six and eleven TLS point clouds acquired for each study case, after the application of the filtering procedure. That is, (i) one coarse registration based on the Harris 3D Noble detector and the PFH descriptor, with which the point clouds are positioned close to their real position (Figure 10b) in the shared coordinate system, and (ii) one fine registration procedure based on ICP for the positioning of the point clouds with millimetre precision (Figure 10c). The final 3D point clouds, after removing duplicated points, are formed by 44 million points and 73 million points for the first and second study cases, respectively.
Regarding processing time, the automatic procedure required a total of 23 and 48 min for the first and second study cases, respectively, on a Mountain Studio3D i5 Ivy PC with an Intel Core i5-3570K processor and 32 GB RAM, running Windows 10 64-bit, with no need for user interaction.

3.4. Quality Control

The BIM models (Figure 8b,c) were used as reference for the quality control. More precisely, the registration of the BIM and the point clouds was done manually using a local datum established by the company, which corresponds to the definition of a local origin and the directions of the X and Y axes. The Z axis was always taken along the vertical. After that, two different approaches were used to analyse the quality: (i) the models were compared with the designed building information model (BIM) to find visual discrepancies between the as-built and BIM models; (ii) the accuracy of the point clouds was checked against the BIM models. In particular, the accuracy of the point clouds was checked using two different dimensional analyses: (i) dimensional control of the pillars and (ii) distance between construction elements. For the dimensional control, the dimensions obtained from the automatically registered as-built model were compared to the dimensions measured in the reference BIM.
The reference BIM provides a theoretical model very close to the final result expected for the construction, and it is updated over time, which enables quality control throughout the whole process. The BIM reflects the geometry, metrical properties, material properties and construction phases of each element, together with the final layout of the building [61]. This way, the BIM is considered as a theoretical model in this case and provides the reference measurements for the structural elements chosen to validate the methodology in each phase.
However, the point clouds registered with the proposed automatic procedure contain a high amount of information, and their statistical distribution is often non-normal [62]. Thus, the statistical analysis performed was based on nonparametric estimators around the median, m, such as the normalized median absolute deviation (NMAD) (Equation (7)) and the bi-weight mid-variance (BWMV) (Equation (8)). These parameters allow us to perform a comparison and, consequently, a more precise and reliable quality control.
$\mathrm{NMAD} = 1.4826 \cdot \mathrm{MAD}$ (7)
$\mathrm{BWMV} = \dfrac{n \sum_{i=1}^{n} a_i (x_i - m)^2 (1 - U_i^2)^4}{\left( \sum_{i=1}^{n} a_i (1 - U_i^2)(1 - 5 U_i^2) \right)^2}$ (8)
where:
$a_i = \begin{cases} 1, & \text{if } |U_i| < 1 \\ 0, & \text{if } |U_i| \geq 1 \end{cases}$ (9)
$U_i = \dfrac{x_i - m}{9 \cdot \mathrm{MAD}}$ (10)
with the median absolute deviation (MAD), defined by Equation (11), being the median (m) of the absolute deviations from the median of the data ($m_x$), and $x_i$ the measurements performed on the registered point cloud:
$\mathrm{MAD} = m\left( \left| x_i - m_x \right| \right)$ (11)
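These estimators are straightforward to compute; a NumPy transcription of Equations (7)–(11) is given below (assuming MAD > 0).
```python
import numpy as np

def robust_stats(x):
    """Median, NMAD and bi-weight mid-variance, Equations (7)-(11)."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    mad = np.median(np.abs(x - m))                # Equation (11)
    nmad = 1.4826 * mad                           # Equation (7)
    u = (x - m) / (9.0 * mad)                     # Equation (10)
    a = (np.abs(u) < 1.0).astype(float)           # Equation (9)
    n = len(x)
    num = n * np.sum(a * (x - m) ** 2 * (1.0 - u ** 2) ** 4)
    den = np.sum(a * (1.0 - u ** 2) * (1.0 - 5.0 * u ** 2)) ** 2
    return m, nmad, num / den                     # Equation (8): BWMV
```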
Table 3 shows the results of bias and dispersion for the discrepancies of TLS vs. BIM for the two study cases.
Considering that the approach developed provides values admissible for quality control tasks, a comparison between measurements on the point cloud registered with the method and measurements on the reference BIM was performed. The elements where the errors in registration are most evident are the pillars. Table 4 shows the admissible tolerances for the most relevant structural elements in a building according to [26].
The validation focuses on the dimensions and cross sections of the pillars. These elements are predefined, making them adequate as control elements. Seven pillars of the first study case and seven pillars of the second study case were subjected to measurements of their width at three different heights from the floor: low (0.20 m), medium (1.20 m) and high (2.50 m). Each measurement was performed five times, and robust statistical estimators were calculated from these measurements. Table 5 shows the theoretical measurements (performed on the BIM) for each pillar and the measurements performed on the as-built model resulting from our methodology. Table 6 shows the robust statistical parameters calculated from the dimensional analysis (median and BWMV). Given that the tolerance for pillar section control is established at 8–10 mm (Table 4), the results obtained with the proposed methodology are applicable in this type of scenario.
An additional test of the dimensions of one pillar in each study case was performed by fitting planes to each face of the pillars (Figure 11). This test serves as an evaluation method for the quality of the results. This additional evaluation is necessary because the point cloud is formed by discrete points, and point-wise measurements can yield non-robust results, since they depend on the points used for the measurements.
The metric verification in the planar case consists of computing the size of the pillars by measuring the distance between opposite sides of the pillars and comparing the distances measured with the real dimensions of the pillars (Table 7). Thus, the width of the pillar is denoted as d1 (distance between planes 3 and 4), and the length of the pillar as d2 (distance between planes 1 and 2), as shown in Figure 11c.
Given that the pillars are the elements most influenced by the results of the registration, and given the results obtained for them, the proposed automatic registration methodology presents adequate quality and precision for its application in the quality control of construction works: the methodology allows for the measurement of the pillars with an error below 10 mm and the performance of any additional measurement from the point cloud.

4. Conclusions

This paper presents a new methodology for the automatic acquisition and registration of point clouds of indoor scenarios in construction works. The main contributions of the presented approach are two-fold: (i) the possibility of automatically capturing point clouds based on the Stop&Go method and using a UGV; (ii) a fully automatic point cloud registration methodology that works with any photogrammetric or laser scanning point clouds.
The proposed methodology features automatic data acquisition with a UGV equipped with a TLS and an automatic route planner able to calculate the best route while producing no interference with the construction works. The different point clouds acquired are automatically registered using a detector/descriptor combination. The integral methodology can be completely autonomous and automatic, with no need for artificial elements that disturb the construction scenario and no need for user interaction. The precision obtained in the automatic registration was evaluated through metric verifications on the resulting point cloud. The errors are below 1 cm in all the measurements performed. This precision validates the use of the methodology in quality control scenarios in construction works for the identification and control of construction elements.
The methodology presented for point cloud registration consists of three steps: (i) pre-processing of the data, filtering and homogenizing the point clouds; (ii) a coarse registration; and (iii) a fine registration. The data were acquired with an autonomous unmanned ground vehicle with the Faro laser scanner mounted on it, applying the Stop&Go method. The UGV can operate at night and with no user intervention thanks to the path planner. The resultant point cloud has enough quality, density and precision for quality control in building works (less than 1 cm in section dimensions and less than 1.5 cm in wall thickness).
For the coarse registration, a combination of the Harris 3D detector and the PFH descriptor was used, since this combination allows the best detection of feature points on corners, edges and planes. These parametric forms are the most common in building scenarios and provide robust results. The detector/descriptor method can be applied to any set of point clouds, captured with any laser scanning system, whether using a UGV or not.
In addition, the chosen detector/descriptor-based method uses only geometric information to compute the coarse alignment. This means that it is not necessary to have radiometric information (RGB colour), making the methodology more scalable and suitable for many more study cases. Along this line, the Harris 3D/PFH combination was chosen because it does not need images to perform the calculations, as other detectors and descriptors, such as SIFT or SURF, do. Being independent of image capturing makes the methodology scalable and accessible for many more situations.
Despite its independence from the laser scanning system, the equipment used has some limitations. For example, it has not been designed to climb stairs, although it can go over small bumps such as planks. In addition to the TLS for data acquisition, the UGV is equipped with two SICK S300 Expert laser scanners, integrated at the front and the back of the UGV, that allow it to detect objects and holes. The combination of this system with the path planner makes it totally autonomous. The path planner incorporates an algorithm to recalculate the optimal route in those cases where the target point is inaccessible, which occurs frequently in building construction environments, where materials or debris can appear unexpectedly. In order to minimize interference with the traffic of personnel, the data acquisition can be carried out at night, since the laser system is an active system and does not need light to operate.
Regarding future works, the first research line will improve the coarse registration for those cases with a considerable number of outliers. Furthermore, the recognition of artificial targets could be included to improve those cases where the coarse registration is not good enough. The second research line will focus on the automatic extraction of the semantics of the different construction elements from the point cloud and either the classification of the elements or the conversion of the as-built model into a surface 3D model or a BIM. The use of a BIM as reference was possible for this work, but usually a BIM model is not available, so the natural evolution of the as-built model generated automatically with the proposed methodology is the generation of the BIM.

Author Contributions

Conceptualization, D.G.-A. and R.M.; methodology, D.G.-A., S.L. and R.M.; software, R.M. and J.A.M.-J.; validation, S.L., D.G.-A. and R.M.; formal analysis, J.A.M.-J. and S.L.; writing—original draft preparation, S.L. and D.G.-A.; all the authors contributed to writing—review and editing; visualization. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Junta de Castilla y León and the Fondo Social Europeo for the financial support given through programs for human resources (EDU/1100/2017).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Special thanks to the Cátedra Iberdrola VIII Centenario–University of Salamanca for funding given to personnel resources, and the Ministerio de Economia, Industria y Competitividad–Gobierno de España (RTC-2016-5257-7).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Azhar, S.; Khalfan, M.; Maqsood, T. Building information modelling (BIM): Now and beyond. Constr. Econ. Build. 2012, 12, 15–28. [Google Scholar] [CrossRef] [Green Version]
  2. Real Decreto 1515/2018, de 28 de diciembre, por el que se crea la Comisión interministerial para la incorporación de la metodología BIM en la contratación pública. Available online: https://www.boe.es/eli/es/rd/2018/12/28/1515 (accessed on 28 December 2020).
  3. Jung, W.; Lee, G. The status of BIM adoption on six continents. Int. J. Civ. Environ. Struct. Constr. Archit. Eng. 2015, 9, 444–448. [Google Scholar]
  4. Kreider, R.G.; Messner, J.I. The Uses of BIM: Classifying and Selecting BIM Uses; State College Pennsylvania: State College, PA, USA, 2013; p. 22. [Google Scholar]
  5. Remondino, F.; Guarnieri, A.; Vettore, A. 3D modeling of close-range objects: Photogrammetry or laser scanning? In Videometrics VIII; International Society for Optics and Photonics: San José, CA, USA, 2005; Volume 5665, p. 56650M. [Google Scholar]
  6. Alidoost, F.; Arefi, H. An image-based technique for 3D building reconstruction using multi-view UAV images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 43. [Google Scholar] [CrossRef] [Green Version]
  7. Guarnieri, A.; Vettore, A.; Remondino, F.; Church, O.P. Photogrammetry and Ground-Based Laser Scanning: Assessment of Metric Accuracy of the 3D Model of Pozzoveggiani Church. 2004. Available online: https://www.fig.net/resources/proceedings/fig_proceedings/athens/papers/ts26/TS26_4_Guarnieri_et_al.pdf (accessed on 5 February 2021).
  8. Bernat, M.; Janowski, A.; Rzepa, S.; Sobieraj, A.; Szulwic, J. Studies on the use of terrestrial laser scanning in the maintenance of buildings belonging to the cultural heritage. In Proceedings of the 14th Geoconference on Informatics, Geoinformatics and Remote Sensing, SGEM, Albena, Bulgaria, 17–26 June 2014; Volume 3, pp. 307–318. [Google Scholar]
  9. Pritchard, D.; Sperner, J.; Hoepner, S.; Tenschert, R. Terrestrial laser scanning for heritage conservation: The Cologne Cathedral documentation project. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 213–220. [Google Scholar] [CrossRef] [Green Version]
  10. Sánchez-Aparicio, L.J.; Del Pozo, S.; Ramos, L.F.; Arce, A.; Fernandes, F.M. Heritage site preservation with combined radiometric and geometric analysis of TLS data. Autom. Constr. 2018, 85, 24–39. [Google Scholar] [CrossRef]
  11. Sánchez-Aparicio, L.J.; Bautista-De Castro, Á.; Conde, B.; Carrasco, P.; Ramos, L.F. Non-destructive means and methods for structural diagnosis of masonry arch bridges. Autom. Constr. 2019, 104, 360–382. [Google Scholar] [CrossRef]
  12. Del Pozo, S.; Herrero-Pascual, J.; Felipe-García, B.; Hernández-López, D.; Rodríguez-Gonzálvez, P.; González-Aguilera, D. Multispectral radiometric analysis of façades to detect pathologies from active and passive remote sensing. Remote Sens. 2016, 8, 80. [Google Scholar] [CrossRef] [Green Version]
  13. Cabo, C.; Del Pozo, S.; Rodríguez-Gonzálvez, P.; Ordóñez, C.; González-Aguilera, D. Comparing terrestrial laser scanning (TLS) and wearable laser scanning (WLS) for individual tree modeling at plot level. Remote Sens. 2018, 10, 540. [Google Scholar] [CrossRef] [Green Version]
  14. Akca, D. Full Automatic Registration of Laser Scanner Point Clouds; ETH Zurich: Zurich, Switzerland, 2003. [Google Scholar]
  15. Theiler, P.W.; Schindler, K. Automatic registration of terrestrial laser scanner point clouds using natural planar surfaces. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 3, 173–178. [Google Scholar] [CrossRef] [Green Version]
  16. Chen, S.; Nan, L.; Xia, R.; Zhao, J.; Wonka, P. PLADE: A Plane-Based Descriptor for Point Cloud Registration with Small Overlap. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2530–2540. [Google Scholar] [CrossRef]
  17. Gai, M.; Cho, Y.K.; Xu, Q. Target-free automatic point clouds registration using 2D images. In Proceedings of the ASCE International Workshop on Computing in Civil Engineering, Los Angeles, CA, USA, 23–25 June 2013; pp. 865–872. [Google Scholar]
  18. Kim, P.; Chen, J.; Cho, Y.K. Automated point cloud registration using visual and planar features for construction environments. J. Comput. Civ. Eng. 2018, 32, 04017076. [Google Scholar] [CrossRef]
  19. Ge, X.; Hu, H.; Wu, B. Image-Guided Registration of Unordered Terrestrial Laser Scanning Point Clouds for Urban Scenes. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9264–9276. [Google Scholar] [CrossRef]
  20. Weinmann, M.; Jutzi, B. Fully automatic image-based registration of unorganised TLS data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, W12. [Google Scholar]
  21. Di Filippo, A.; Sánchez-Aparicio, L.; Barba, S.; Martín-Jiménez, J.; Mora, R.; González Aguilera, D. Use of a Wearable Mobile Laser System in Seamless Indoor 3D Mapping of a Complex Historical Site. Remote Sens. 2018, 10, 1897. [Google Scholar] [CrossRef] [Green Version]
  22. Sánchez-Aparicio, L.J.; Conde, B.; Maté-González, M.A.; Mora, R.; Sánchez-Aparicio, M.; García-Álvarez, J.; González-Aguilera, D. A Comparative Study between WMMS and TLS for the Stability Analysis of the San Pedro Church Barrel Vault by Means of the Finite Element Method. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 4215, 1047–1054. [Google Scholar] [CrossRef] [Green Version]
  23. Otero, R.; Lagüela, S.; Garrido, I.; Arias, P. Mobile indoor mapping technologies: A review. Autom. Constr. 2020, 120, 103399. [Google Scholar] [CrossRef]
  24. Geo-SLAM Zeb-Revo. Available online: http://geoslam.com/hardware-products/zeb-revo/ (accessed on 5 February 2021).
  25. Faro Focus 3D. Features, Benefits & Technical Specifications. Available online: https://knowledge.faro.com/Hardware/3D_Scanners/Focus/Technical_Specification_Sheet_for_the_Laser_Scanner_Focus_3D (accessed on 5 February 2021).
  26. EHE-08. Instrucción de Hormigón Estructural Secretaría General Técnica; Ministerio de Fomento: Madrid, España, 2008. [Google Scholar]
  27. Kim, P.; Chen, J.; Cho, Y.K. SLAM-driven robotic mapping and registration of 3D point clouds. Autom. Constr. 2018, 89, 38–48. [Google Scholar] [CrossRef]
  28. Tombari, F.; Salti, S.; Di Stefano, L. Performance evaluation of 3D keypoint detectors. Int. J. Comput. Vis. 2013, 102, 198–220. [Google Scholar] [CrossRef]
  29. Chen, Z.; Czarnuch, S.; Smith, A.; Shehata, M. Performance evaluation of 3D keypoints and descriptors. In International Symposium on Visual Computing; Springer: Cham, Switzerland, 2016; pp. 410–420. [Google Scholar]
  30. Hänsch, R.; Weber, T.; Hellwich, O. Comparison of 3D interest point detectors and descriptors for point cloud fusion. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 57. [Google Scholar] [CrossRef] [Green Version]
  31. Lowe, G. SIFT-the scale invariant feature transform. Int. J. 2004, 2, 91–110. [Google Scholar]
  32. Rusu, R.B.; Cousins, S. 3D is here: Point cloud library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar]
  33. Böhm, J.; Becker, S. Automatic marker-free registration of terrestrial laser scans using reflectance. In Proceedings of the 8th Conference on Optical 3D Measurement Techniques, Zurich, Switzerland, 9–12 July 2007; pp. 9–12. [Google Scholar]
  34. Moussa, W.; Abdel-Wahab, M.; Fritsch, D. An automatic procedure for combining digital images and laser scanner data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, B5. [Google Scholar] [CrossRef] [Green Version]
  35. Sipiran, I.; Bustos, B. Harris 3D: A robust extension of the Harris operator for interest point detection on 3D meshes. Vis. Comput. 2011, 27, 963. [Google Scholar] [CrossRef]
  36. Pratikakis, I.; Spagnuolo, M.; Theoharis, T.; Veltkamp, R. A robust 3D interest points detector based on Harris operator. In Eurographics Workshop on 3D Object Retrieval; 2010; Volume 5. Available online: https://users.dcc.uchile.cl/~isipiran/papers/SB10b.pdf (accessed on 4 February 2021).
  37. Tombari, F.; Salti, S.; Di Stefano, L. A combined texture-shape descriptor for enhanced 3D feature matching. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 809–812. [Google Scholar]
  38. Aldoma, A.; Marton, Z.C.; Tombari, F.; Wohlkinger, W.; Potthast, C.; Zeisl, B.; Rusu, R.B.; Gedikli, S.; Vincze, M. Tutorial: Point cloud library: Three-dimensional object recognition and 6 DOF pose estimation. IEEE Robot. Autom. Mag. 2012, 19, 80–91. [Google Scholar] [CrossRef]
  39. Salih, Y.; Malik, A.S.; Walter, N.; Sidibé, D.; Saad, N.; Meriaudeau, F. Noise robustness analysis of point cloud descriptors. In International Conference on Advanced Concepts for Intelligent Vision Systems; Springer: Cham, Switzerland, 2013; pp. 68–79. [Google Scholar]
  40. Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Aligning point cloud views using persistent feature histograms. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3384–3391. [Google Scholar]
  41. Filipe, S.; Alexandre, L.A. A comparative evaluation of 3D keypoint detectors in a RGB-D object dataset. In Proceedings of the 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5–8 January 2014; Volume 1, pp. 476–483. [Google Scholar]
  42. Orts-Escolano, S.; Morell, V.; García-Rodríguez, J.; Cazorla, M. Point cloud data filtering and downsampling using growing neural gas. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–8. [Google Scholar]
  43. Díaz-Vilariño, L.; Frías, E.; Balado, J.; González-Jorge, H. Scan planning and route optimization for control of execution of as-designed BIM. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-4, 143–148. [Google Scholar] [CrossRef] [Green Version]
  44. Heikkilä, R.; Kivimäki, T.; Mikkonen, M.; Lasky, T.A. Stop & Go Scanning for Highways—3D Calibration Method for a Mobile Laser Scanning System. In Proceedings of the 27th International Symposium on Automation and Robotics in Construction, Bratislava, Slovakia, 25–27 June 2010; pp. 40–48. [Google Scholar]
  45. Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100. [Google Scholar] [CrossRef]
  46. Gesto-Diaz, M.; Tombari, F.; Gonzalez-Aguilera, D.; Lopez-Fernandez, L.; Rodriguez-Gonzalvez, P. Feature matching evaluation for multimodal correspondence. ISPRS J. Photogramm. Remote Sens. 2017, 129, 179–188. [Google Scholar] [CrossRef]
  47. Holz, D.; Ichim, A.E.; Tombari, F.; Rusu, R.B.; Behnke, S. Registration with the point cloud library: A modular framework for aligning in 3-D. IEEE Robot. Autom. Mag. 2015, 22, 110–124. [Google Scholar] [CrossRef]
  48. Golomb, S.W.; Baumert, L.D. Backtrack programming. J. ACM 1965, 12, 516–524. [Google Scholar] [CrossRef]
  49. Dorigo, M. Ant Colonies for the Traveling Salesman Problem; Université Libre de Bruxelles: Brussels, Belgium, 1997; Volume 1, pp. 53–66. [Google Scholar]
  50. Han, X.F.; Jin, J.S.; Wang, M.J.; Jiang, W.; Gao, L.; Xiao, L. A review of algorithms for filtering the 3D point cloud. Signal Process. Image Commun. 2017, 57, 103–112. [Google Scholar] [CrossRef]
  51. Balta, H.; Velagic, J.; Bosschaerts, W.; De Cubber, G.; Siciliano, B. Fast statistical outlier removal-based method for large 3D point clouds of outdoor environments. IFAC Pap. OnLine 2018, 51, 348–353. [Google Scholar] [CrossRef]
  52. Miknis, M.; Davies, R.; Plassmann, P.; Ware, A. Near real-time point cloud processing using the PCL. In Proceedings of the 2015 International Conference on Systems, Signals and Image Processing (IWSSIP), London, UK, 10–12 September 2015; pp. 153–156. [Google Scholar]
  53. Miknis, M.; Davies, R.; Plassmann, P.; Ware, A. Efficient point cloud pre-processing using the point cloud library. Int. J. Image Process. 2016, 10, 63–72. [Google Scholar]
  54. Ximin, Z.; Xiaoqing, Y.; Wanggen, W.; Junxing, M.; Qingmin, L.; Libing, L. The Simplification of 3D Color Point Cloud Based on Voxel. In Proceedings of the IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013), Shanghai, China, 19–20 August 2013. [Google Scholar]
  55. Yu, T.H.; Woodford, O.J.; Cipolla, R. A performance evaluation of volumetric 3D interest point detectors. Int. J. Comput. Vis. 2013, 102, 180–197. [Google Scholar] [CrossRef] [Green Version]
  56. Liu, J.; Jakas, A.; Al-Obaidi, A.; Liu, Y. A comparative study of different corner detection methods. In Proceedings of the 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), Daejeon, Korea, 15–18 December 2009; pp. 509–514. [Google Scholar]
  57. Alexandre, L.A. 3D descriptors for object and category recognition: A comparative evaluation. In Proceedings of the Workshop on Color-Depth Camera Fusion in Robotics at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 7 October 2012; Volume 20, p. 7. [Google Scholar]
  58. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; International Society for Optics and Photonics: Bellingham, WA, USA, 1992; Volume 1611, pp. 586–606. [Google Scholar]
  59. Sharp, G.C.; Lee, S.W.; Wehe, D.K. ICP registration using invariant features. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 90–102. [Google Scholar] [CrossRef] [Green Version]
  60. Li, P.; Wang, R.; Wang, Y.; Tao, W. Evaluation of the ICP Algorithm in 3D Point Cloud Registration. IEEE Access 2020, 8, 68030–68048. [Google Scholar] [CrossRef]
  61. Barnes, P.; Davies, N. BIM in Principle and in Practice; ICE Publishing: London, UK, 2015. [Google Scholar]
  62. Rodríguez-Gonzálvez, P.; Garcia-Gago, J.; Gomez-Lahoz, J.; González-Aguilera, D. Confronting passive and active sensors with non-gaussian statistics. Sensors 2014, 14, 13759–13777. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. System used for data acquisition: (a) terrestrial laser scanner FARO Focus3D X 330; (b) unmanned ground vehicle Guardian from Robotnik; (c) hybrid system: terrestrial laser scanner (TLS) mounted on the unmanned ground vehicle.
Figure 2. Methodology proposed for automatic registration of point clouds.
Figure 3. Path-planning workflow.
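The exact planner is not reproduced here; the paper's references cover backtracking [48] and ant-colony optimization [49] for the travelling-salesman formulation of the route. As a minimal illustrative stand-in, not the authors' implementation, the following C++ sketch orders scan positions with a greedy nearest-neighbour heuristic; the struct, function name and 2D metric are our assumptions.

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct ScanPoint { double x, y; };

// Greedy nearest-neighbour ordering of scan positions (assumes at least one
// position); an illustrative stand-in for the route optimizers of [48,49].
std::vector<int> orderScans(const std::vector<ScanPoint>& pts, int start = 0) {
  std::vector<bool> visited(pts.size(), false);
  std::vector<int> route{start};
  visited[start] = true;
  for (size_t step = 1; step < pts.size(); ++step) {
    int cur = route.back(), best = -1;
    double bestD = std::numeric_limits<double>::max();
    for (size_t j = 0; j < pts.size(); ++j) {
      if (visited[j]) continue;
      double d = std::hypot(pts[j].x - pts[cur].x, pts[j].y - pts[cur].y);
      if (d < bestD) { bestD = d; best = static_cast<int>(j); }
    }
    route.push_back(best);
    visited[best] = true;
  }
  return route;
}
```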
Figure 4. Fast statistical outlier removal (FSOR) filter applied over the point cloud. (a) Detail of raw point cloud with noise; (b) result after applying FSOR filter.
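As a sketch of this denoising step with the Point Cloud Library [32]: PCL ships the standard statistical outlier removal on which the fast variant (FSOR, [51]) builds, so the standard filter is shown; the file name and both parameter values are assumptions.

```cpp
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr clean(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile<pcl::PointXYZ>("scan.pcd", *cloud);  // hypothetical file

  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(cloud);
  sor.setMeanK(50);             // neighbours per point for the mean-distance estimate (assumed)
  sor.setStddevMulThresh(1.0);  // reject points beyond mean + 1·stddev (assumed)
  sor.filter(*clean);
  return 0;
}
```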
Figure 5. Passthrough filter based on the coordinate histograms. (a) Top view of raw scanned data with scatter. (b) Top view of scanned data after filtering by the X-coordinate histogram. (c) Front view of raw scanned data with scatter. (d) Result after the coordinate-histogram filter.
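A minimal sketch of the cropping step using PCL's PassThrough filter [32]; the bounds stand in for limits read from the coordinate histograms, and the helper name is ours.

```cpp
#include <pcl/filters/passthrough.h>
#include <pcl/point_types.h>
#include <string>

// Crop scatter outside the building envelope along one axis; lo/hi stand for
// bounds derived from the corresponding coordinate histogram (assumed values).
void cropAxis(const pcl::PointCloud<pcl::PointXYZ>::Ptr& in,
              pcl::PointCloud<pcl::PointXYZ>::Ptr& out,
              const std::string& axis, float lo, float hi) {
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(in);
  pass.setFilterFieldName(axis);  // "x", "y" or "z"
  pass.setFilterLimits(lo, hi);
  pass.filter(*out);
}
```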
Figure 6. Harris 3D Noble detector: performance mode.
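A sketch of the keypoint detection with PCL's Harris 3D detector [32,35] set to the Noble response; the support radius and response threshold are assumed values, not those of the paper.

```cpp
#include <pcl/keypoints/harris_3d.h>
#include <pcl/point_types.h>

using Harris = pcl::HarrisKeypoint3D<pcl::PointXYZ, pcl::PointXYZI>;

void detectKeypoints(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                     pcl::PointCloud<pcl::PointXYZI>& keypoints) {
  Harris harris;
  harris.setMethod(Harris::NOBLE);   // Noble response variant
  harris.setNonMaxSupression(true);  // keep only local response maxima (PCL's spelling)
  harris.setRadius(0.05f);           // support radius in metres (assumed)
  harris.setThreshold(0.01f);        // response threshold (assumed)
  harris.setInputCloud(cloud);
  harris.compute(keypoints);
}
```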
Figure 7. Methods for filtering correspondences with the ICP algorithm: (a) method based on distance; (b) method based on normal vectors, whose application was discarded.
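A sketch of the ICP refinement with distance-based correspondence rejection (Figure 7a) using PCL [32,47]; the 5 cm cap, iteration count and convergence epsilon are assumptions.

```cpp
#include <pcl/registration/icp.h>
#include <pcl/point_types.h>

// Refine a coarse alignment with ICP; pairs farther apart than the
// correspondence-distance cap are discarded, as in Figure 7a.
Eigen::Matrix4f refine(const pcl::PointCloud<pcl::PointXYZ>::Ptr& source,
                       const pcl::PointCloud<pcl::PointXYZ>::Ptr& target) {
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaxCorrespondenceDistance(0.05);  // 5 cm rejection threshold (assumed)
  icp.setMaximumIterations(100);           // assumed
  icp.setTransformationEpsilon(1e-8);      // convergence criterion (assumed)
  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned);
  return icp.getFinalTransformation();
}
```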
Figure 8. Building selected as case study: (a) 2D plans of the buildings; (b) building information modelling (BIM) model of the first floor of the north block, selected as the first study case; (c) BIM model of the ground floor of the south block, selected for the second study case.
Figure 9. Optimal route followed by the unmanned ground vehicle (UGV) for data acquisition: (a) optimal route and scan points in the first study case; (b) resulting Faro point cloud of the first study case; (c) optimal route and scan points of the second study case; (d) resulting Faro point cloud of the second study case.
Figure 10. Automatic registration procedure: (a) initial state of the point clouds (all referenced to the origin (0,0,0), with coordinates relative to the position of the TLS in each acquisition); (b) state of the point clouds after the initial registration based on Harris 3D Noble–PFH, with a detail of the pillars where the error in registration is noticeable; (c) final result of the ICP registration, with a detail of the pillar shown in (b), which was correctly registered during this last step. Each colour represents a different point cloud.
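A sketch of the coarse stage of Figure 10b with PCL [32]: PFH descriptors [40] computed at the Harris keypoints, followed by a sample-consensus initial alignment. The conversion of keypoints to PointXYZ is assumed done beforehand, and all radii and SAC parameters are our assumptions.

```cpp
#include <pcl/features/normal_3d.h>
#include <pcl/features/pfh.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/point_types.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// PFH descriptors at the keypoints of one scan, evaluated against the full cloud.
void describe(const Cloud::Ptr& full, const Cloud::Ptr& keypoints,
              pcl::PointCloud<pcl::PFHSignature125>::Ptr& descriptors) {
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(full);
  ne.setRadiusSearch(0.05);  // normal-estimation radius (assumed)
  ne.compute(*normals);

  pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
  pfh.setInputCloud(keypoints);
  pfh.setSearchSurface(full);  // neighbourhoods taken from the full cloud
  pfh.setInputNormals(normals);
  pfh.setRadiusSearch(0.10);   // descriptor radius, larger than the normal radius (assumed)
  pfh.compute(*descriptors);
}

// Coarse alignment from keypoint/descriptor matches (stage shown in Figure 10b).
Eigen::Matrix4f coarseAlign(const Cloud::Ptr& srcKp, const Cloud::Ptr& tgtKp,
                            const pcl::PointCloud<pcl::PFHSignature125>::Ptr& srcD,
                            const pcl::PointCloud<pcl::PFHSignature125>::Ptr& tgtD) {
  pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                       pcl::PFHSignature125> sac;
  sac.setInputSource(srcKp);  sac.setSourceFeatures(srcD);
  sac.setInputTarget(tgtKp);  sac.setTargetFeatures(tgtD);
  sac.setMinSampleDistance(0.05f);        // assumed
  sac.setMaxCorrespondenceDistance(0.2);  // assumed
  Cloud out;
  sac.align(out);
  return sac.getFinalTransformation();
}
```

The transformation returned by coarseAlign would then seed the ICP refinement sketched after Figure 7.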
Figure 11. (a) Point cloud of a pillar obtained after the automatic registration proposed; (b) plane adjustment for each pillar face; (c) control measurements performed on the pillar.
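A sketch of the per-face plane adjustment of Figure 11b using RANSAC plane segmentation in PCL [32]; the 5 mm inlier threshold and function name are assumptions.

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/point_types.h>

// RANSAC plane fit for the points of one pillar face; returns the
// coefficients {a, b, c, d} of ax + by + cz + d = 0.
pcl::ModelCoefficients fitFace(const pcl::PointCloud<pcl::PointXYZ>::Ptr& face) {
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.005);  // 5 mm inlier band (assumed)
  seg.setInputCloud(face);
  pcl::PointIndices inliers;
  pcl::ModelCoefficients coeffs;
  seg.segment(inliers, coeffs);
  return coeffs;
}
// With unit normals oriented the same way, the section dimension between two
// opposite, near-parallel faces is approximately |d1 - d2|.
```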
Table 1. Comparison between different registration approaches, including our proposed method.

| | Target Based [14] | Plane Based [15,16] | Image Based [17] | Image Based + Network [18,19,20] | Proposed Method |
| --- | --- | --- | --- | --- | --- |
| Target need | X | | | | |
| Image need | | | X | X | |
| Only pairwise registration | | X | X | | |
| Minimum overlap required | X | X | X | X | X |
| Noise sensitive | X | X | X | X | X |
Table 2. Scanning parameters for the acquisition with TLS.

| Parameter | Value |
| --- | --- |
| Mean laser range (m) | 8 |
| Security distance (m) | 0.8 |
| Scanning overlap (%) | 60 |
| Scanning resolution | 4 mm at 20 m |
Table 3. Bias and dispersion results of the TLS vs. BIM for the two study cases.

| | Bias: Median (m) | Dispersion: NMAD (m) | Dispersion: BWMV (m) |
| --- | --- | --- | --- |
| Study Case 1 | 0.007 | ±0.007 | ±0.00005 |
| Study Case 2 | 0.008 | ±0.008 | ±0.00007 |
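For reference, the NMAD dispersion statistic reported above is 1.4826 times the median absolute deviation from the median, a robust stand-in for the standard deviation; a self-contained sketch (function names ours):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

double median(std::vector<double> v) {
  std::sort(v.begin(), v.end());
  const size_t n = v.size();
  return n % 2 ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

// NMAD = 1.4826 * median(|x_i - median(x)|)
double nmad(const std::vector<double>& x) {
  const double m = median(x);
  std::vector<double> dev;
  dev.reserve(x.size());
  for (double xi : x) dev.push_back(std::fabs(xi - m));
  return 1.4826 * median(dev);
}
```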
Table 4. Tolerances in the different structural elements according to [26]. “D” represents the transversal dimension of the elements.

| Construction element | Tolerance |
| --- | --- |
| Cross section (D < 30) | +10/−8 mm |
| Cross section (30 < D < 100) | +12/−10 mm |
| Cross section (100 < D) | +24/−20 mm |
| Vertical deviation of outer edges of columns | ±6 mm |
| Wall thickness < 25 cm | +12/−10 mm |
| Wall thickness > 25 cm | +16/−10 mm |
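A small helper sketching how an as-built deviation could be checked against the cross-section bands above; the function name and the assumption that D is expressed in centimetres are ours, not from [26].

```cpp
// True when the as-built deviation (mm) lies inside the EHE-08 cross-section
// band of Table 4 for a transversal dimension D (assumed to be in cm).
bool withinCrossSectionTolerance(double D_cm, double deviation_mm) {
  double plus, minus;
  if (D_cm < 30)       { plus = 10.0; minus = 8.0;  }
  else if (D_cm < 100) { plus = 12.0; minus = 10.0; }
  else                 { plus = 24.0; minus = 20.0; }
  return deviation_mm <= plus && deviation_mm >= -minus;
}
```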
Table 5. Results of the control measurements performed on the different pillars: seven for the first study case and seven for the second.

| | Element | BIM Width (mm) | As-Built Width (mm) |
| --- | --- | --- | --- |
| Study Case 1 | Pillar 1 | 500 | 500 |
| | Pillar 2 | 450 | 450 |
| | Pillar 3 | 350 | 351 |
| | Pillar 5 | 450 | 449 |
| | Pillar 6 | 450 | 450 |
| | Pillar 7 | 350 | 348 |
| | Pillar 8 | 600 | 600 |
| Study Case 2 | Pillar 1 | 600 | 597 |
| | Pillar 2 | 450 | 452 |
| | Pillar 3 | 450 | 451 |
| | Pillar 4 | 450 | 448 |
| | Pillar 5 | 450 | 448 |
| | Pillar 6 | 400 | 398 |
| | Pillar 7 | 500 | 498 |
Table 6. Bias and dispersion data for the analysed data in both study cases.

| | Element | Bias: Median (mm) | Dispersion: NMAD (mm) | Dispersion: BWMV (mm) |
| --- | --- | --- | --- | --- |
| Study Case 1 | Pillar 1 | 503 | ±1.5 | ±0.5 |
| | Pillar 2 | 450 | ±4.4 | ±9.6 |
| | Pillar 3 | 352 | ±3.0 | ±9.0 |
| | Pillar 5 | 449 | ±3.0 | ±9.8 |
| | Pillar 6 | 451 | ±1.5 | ±7.6 |
| | Pillar 7 | 348 | ±1.5 | ±3.3 |
| | Pillar 8 | 600 | ±1.5 | ±2.3 |
| Study Case 2 | Pillar 1 | 597 | ±1.5 | ±2.9 |
| | Pillar 2 | 452 | ±1.5 | ±1.6 |
| | Pillar 3 | 451 | ±1.5 | ±1.8 |
| | Pillar 4 | 447 | ±1.5 | ±1.6 |
| | Pillar 5 | 448 | ±2.9 | ±4.4 |
| | Pillar 6 | 398 | ±2.9 | ±7.7 |
| | Pillar 7 | 497 | ±1.5 | ±3.0 |
Table 7. Results of the control measurements performed on pillar 3 (study case 1) and pillar 9 (study case 2). The measurements correspond to distances between opposite sides of the pillar (Figure 11).

| Id | BIM Width (mm) | BIM Length (mm) | As-Built Width, d1 (mm) | As-Built Length, d2 (mm) | Error Width (mm) | Error Length (mm) |
| --- | --- | --- | --- | --- | --- | --- |
| Pillar 3 (Study Case 1) | 350.0 | 500.0 | 350.9 | 499.6 | 0.9 | 0.4 |
| Pillar 9 (Study Case 2) | 360.0 | 500.0 | 360.1 | 500.2 | 0.1 | 0.2 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
