Article

Surface Reconstruction from SLAM-Based Point Clouds: Results from the Datasets of the 2023 SIFET Benchmark

1 Polytechnic Department of Engineering and Architecture (DPIA), University of Udine, Via delle Scienze, 206, 33100 Udine, Italy
2 Department of Engineering and Architecture (DIA), University of Trieste, Via Alfonso Valerio, 6/1, 34127 Trieste, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(18), 3439; https://doi.org/10.3390/rs16183439
Submission received: 25 July 2024 / Revised: 31 August 2024 / Accepted: 12 September 2024 / Published: 16 September 2024

Abstract

The rapid technological development that geomatics has been experiencing in recent years is leading to increasing ease, productivity and reliability of three-dimensional surveys, with portable laser scanner systems based on Simultaneous Localization and Mapping (SLAM) technology gradually replacing traditional techniques in certain applications. Although the performance of such systems in terms of point cloud accuracy and noise level has been thoroughly investigated in the literature, fewer works evaluate their use for surface reconstruction, cartographic production, and as-built Building Information Model (BIM) creation. The objective of this study is to assess the suitability of SLAM devices for surface modeling in an urban/architectural environment. To this end, analyses are carried out on the datasets acquired by three commercial portable laser scanners in the context of a benchmark organized in 2023 by the Italian Society of Photogrammetry and Topography (SIFET). In addition to the conventional point cloud assessment, we propose a comparison between the reconstructed mesh and a ground-truth model, employing a model-to-model methodology. The outcomes are promising, with the average distance between models ranging from 0.2 to 1.4 cm. However, the surfaces modeled from the terrestrial laser scanning point cloud show a level of detail that is still unmatched by SLAM systems.

1. Introduction

In light of the vast existing built heritage, the need for Building Information Models (BIM) that accurately represent reality is an increasingly frequent and pressing demand in the Architecture, Engineering and Construction (AEC) sector. The use of these models is fundamental in various application areas such as construction management, renovation and restoration planning, structural design, energy efficiency, change detection and as-built vs. as-designed analyses. When the BIM methodology is applied to historical heritage buildings or, more generally, to the field of cultural heritage, it is referred to as HBIM, i.e., Historic (Heritage) Building Information Modeling, a term coined by [1]. The process by which such models are obtained is referred to as scan-to-BIM/HBIM, a true reverse engineering procedure that enables the transition from 3D point clouds to BIM models of existing buildings. As described in [2], the scan-to-BIM/HBIM workflow typically comprises several steps: (i) identifying the required information, (ii) determining the data quality and the appropriate scanning methods to achieve the desired goals, (iii) performing data acquisition, and, finally, (iv) modeling the BIM objects.
The literature proposes numerous works dealing with the scan-to-BIM/HBIM process from various perspectives. Several studies have been conducted on heritage buildings with the aim of defining in detail the scan-to-BIM/HBIM workflow, with a particular focus on the last phase, i.e., the modeling step [3,4,5]. Other works address the utilization of HBIM models generated through the aforementioned process for diverse applications, including the preventive conservation of historical buildings [6,7], the architectural enhancement and promotion of historical sites [8], and the use of such models as a reference for planning renovation works [9]. Furthermore, Finite Element Methods (FEM) for structural analysis and simulations can be applied to BIM models, as shown by several studies (e.g., [10,11]). To this end, additional processing steps are included in the modeling workflow, giving rise to a novel cloud-to-BIM-to-FEM pipeline [12].
Different degrees of detail and reliability of information can characterize the BIM elements. Therefore, some authors [13,14,15] have focused on defining specific parameters for the assessment of a BIM model, taking into account not only the level of detail (LOD) but also the relationship between the accuracy of the survey, the geometric level and the level of accuracy (LOA) of each BIM object. The grade of generation (GOG) must also be considered: GOG 1–8 define simple and basic modeling methods (including, e.g., extrusion, sweeping and revolving), while GOG 9–10 correspond to complex NURBS-based modeling functionalities. However, a literature analysis reveals that there is no standardized scan-to-BIM workflow: the process is usually case-dependent and, above all, still appears to be a laborious, manual and time-consuming one.
In this context, the development of automatic or semi-automatic modeling support functions is the focus of ongoing research efforts. In particular, the classification and segmentation of point clouds is a very active area of research, with current approaches employing artificial intelligence techniques such as machine and deep learning. Indeed, many authors [16,17,18,19] agree that the preliminary step of semantic segmentation serves to facilitate subsequent modeling phases. As an example, Avena et al. [20] proposed an innovative methodology to support the scan-to-BIM/HBIM process, integrating visual programming languages with Python 3D libraries, with the aim of achieving automation in the digitization of cultural heritage from previously segmented and classified 3D survey data. A similar methodology is also followed in [16], which proposed a semi-automated scan-to-BIM approach that relies on a popular machine learning algorithm. Other contributions towards the automation of the scan-to-BIM/HBIM process, with the objective of reducing human intervention, are those of [21,22]. These works suggest an alternative methodology, designated mesh-to-HBIM, which entails the transfer of a mesh surface into a semantic HBIM model. In particular, ref. [22] proposed a workflow that begins with the point cloud and its segmentation. Subsequently, the mesh is generated and, following a refinement process, is converted into an HBIM model using algorithms developed in a visual programming language. This workflow is particularly suited to the processing of complex elements, with the intermediate step of mesh modeling forming part of the complete scan-to-BIM/HBIM process.
Besides the final modeling stage, the data acquisition step is also a pivotal aspect of the entire scan-to-BIM/HBIM process. A review of the literature reveals that the most common surveying technologies supporting the scan-to-BIM workflow are photogrammetry and Terrestrial Laser Scanning (TLS) [23,24,25]. However, a number of issues, including the limited acquisition speed of these systems and the need to improve the ease and productivity of surveying activities, have prompted a rapid technological evolution of the geomatics sector in recent years. As a result, photogrammetry and TLS have recently been joined by Light Detection and Ranging (LiDAR) systems mounted on vehicles or Unmanned Aerial Vehicles (UAV), and, above all, by Mobile Laser Scanners (MLS), which are revolutionizing mapping operations of indoor spaces in particular. Such devices comprise a laser scanner sensor and an Inertial Measurement Unit (IMU) and are frequently integrated with cameras that permit the coloring of the acquired point clouds. More specifically, MLSs can be classified as either Portable Laser Scanners (PLS), when carried by an operator in a handheld manner, or Wearable Laser Scanners (WLS), when worn by the operator via a backpack [26]. Through an MLS, mapping operations are carried out by a surveyor simply walking through the area of interest while monitoring the progress of the survey in real time [27]. The characteristics and versatility of these instruments make them optimal for the fast and efficient survey of indoor environments, as well as industrial sites, confined spaces, and, more generally, complex environments, including caves and mines [28,29]. Moreover, due to their compactness and ease of use, portable systems are suitable for outdoor applications, including forest inventory [30] and cultural heritage documentation, often in combination with other geomatics techniques in a multi-scale and multi-sensor approach [31,32].
The technology underlying the operation of MLSs is the Simultaneous Localization and Mapping (SLAM) method, which allows the localization of the sensor while simultaneously building a map of the environment, thus making mobile mapping possible even when the Global Navigation Satellite System (GNSS) signal is not available. Originally developed by the robotics community to enable the navigation of a robot in an unknown space by measuring a limited number of landmarks, SLAM algorithms were later adapted to the needs of geomatics to accurately estimate the trajectory of a laser scanner and, at the same time, create a detailed 3D point cloud of the surroundings. The SLAM problem is a complex issue that has been addressed over the years using a variety of approaches [33], and a comprehensive account of the developed solutions is beyond the scope of this work. However, it is worth mentioning that SLAM methods often rely on loop closures [34], i.e., re-surveyed areas are exploited to avoid drift error propagation in the trajectory. For this reason, following closed-loop paths is the most effective practice for obtaining accurate point clouds during a survey with an MLS [35].
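To make the role of loop closures more tangible, the following minimal sketch (in Python, using the GTSAM library purely for illustration; none of the devices discussed here are known to rely on it) builds a small 2D pose graph in which the closing edge of a square trajectory corrects the drift accumulated by noisy odometry. All keys, poses and noise values are hypothetical.

```python
import numpy as np
import gtsam

# Noise models for odometry and prior constraints (illustrative sigmas: x, y, theta)
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))

graph = gtsam.NonlinearFactorGraph()
# Anchor the first pose to remove the gauge freedom of the graph
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
# Odometry factors: relative motions along a square path (2 m sides, 90 deg turns)
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
graph.add(gtsam.BetweenFactorPose2(3, 4, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
# Loop closure: pose 4 re-observes the start area, constraining the accumulated drift
graph.add(gtsam.BetweenFactorPose2(4, 1, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

# Drifted initial guesses, as would result from integrating noisy odometry
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(2, gtsam.Pose2(2.1, 0.1, 1.6))
initial.insert(3, gtsam.Pose2(2.2, 2.1, 3.2))
initial.insert(4, gtsam.Pose2(0.1, 2.2, 4.7))

# Non-linear least squares distributes the loop-closure correction over the whole path
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```

The same principle, extended to 3D poses and LiDAR/IMU constraints, is what allows the drift of an MLS trajectory to be bounded when closed-loop paths are followed.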

1.1. Aim of the Paper

Given the increasing popularity of SLAM-based systems, the question that now arises is whether the modeling and surface reconstruction operations, originally devised for data acquired through traditional techniques, can also be successfully replicated on point clouds acquired through SLAM technology. However, as will be shown in detail in Section 1.2, the advantages of such devices are accompanied by a reduction in accuracy, precision, and level of detail. The primary objective of this study is therefore twofold: firstly, to assess the quality and level of detail of the point clouds provided by the SLAM systems; and secondly, to evaluate their use for the subsequent surface modeling phase.
The evaluation methodology is divided into two main parts. The first, described in Section 2.2.1, comprises the analysis of SLAM-based point clouds by comparison with a TLS point cloud, which is assumed to be the ground truth. The second part, outlined in Section 2.2.2, involves the reconstruction of polygon meshes on the SLAM data, which can be regarded as an intermediate product in a semi-automatic process for the creation of an HBIM model of complex elements. The resulting mesh models are then compared, following a model-to-model approach, against a ground-truth model built from the TLS point cloud. The investigation is carried out on the datasets acquired by three commercial SLAM devices in the context of the benchmark proposed for the 65th National Congress of the Italian Society of Photogrammetry and Topography (SIFET). The benchmark was specifically focused on the survey of an urban/architectural scenario using portable laser scanner instrumentation based on SLAM, with the objective of investigating the potential of this technology for the creation of products such as plans, elevations and sections.
The paper is structured as follows. First, an overview of the current applications of SLAM-based laser scanners is provided, reviewing the performance analyses of such devices that can be found in the literature. Section 2 describes the proposed evaluation methodology and the case study to which it is applied. The outcomes of the assessment carried out and the discussion of the results are presented in Section 3 and Section 4, respectively. Finally, Section 5 draws the conclusions and outlines future perspectives.

1.2. State of the Art

Since the appearance of SLAM-based laser scanners, numerous studies have analyzed their performance and explored the potential fields of use of these instruments, which encompass both simple and complex scenarios, ranging from indoor and outdoor environments to urban settings. For instance, in [36], two SLAM devices were tested for surveying the interior of a building and mapping a square. The work described in [37] quantitatively analyzed three commercial devices for the survey of a historic building. In [38], the performance of a SLAM system was evaluated in terms of portability of the instrument, acquisition speed, and data accuracy for the documentation of a historical architectural complex, including the underground parts of the building. Moreover, in [35], the accuracy and noise level of the point clouds obtained through a handheld device in different outdoor scenarios were evaluated. More recently, in [39,40], the performance of different portable laser scanners based on SLAM technology was tested in a complex urban scenario, such as the city of Venice. In general, the works cited concur in emphasizing the efficiency of SLAM-based instruments, which ensure the acquisition of complete point clouds in reduced times, with an accuracy level ranging from a few centimeters to 10 cm. Nevertheless, these research papers also point out the general lower density and higher noise produced by such devices, which negatively impact the ability to reconstruct architectural details. The observed outcomes can be attributed, on the one hand, to the intrinsic characteristics of the employed sensors (e.g., range accuracy, angular resolution, rotation rate) and, on the other hand, to the underlying SLAM algorithms.
A number of papers have investigated the use of SLAM-based mapping devices directly on-site for construction progress monitoring. As highlighted in [41,42,43], the current monitoring approach is a manual one, conducted by an operator, and is both time-consuming and costly. Therefore, Vassena et al. [42] proposed a monitoring method based on 4D BIM data (where the fourth dimension refers to the temporal information) that includes the model built in the design phase and the data acquired during periodic surveys carried out using a portable laser scanner. More specifically, the authors proposed the comparison of the datasets from two different epochs in the Sitemotion platform. This was performed with the objective of highlighting any congruities between the two situations or, alternatively, of identifying any differences between them in the event that the design and the actual construction status do not match, for example, due to delays or errors during construction. In a similar study [43], the authors reported that the progress monitoring approach classified items as completed works if the distance between the design model and the SLAM-based data referring to the as-built situation was less than a tolerance set at 4 cm. Furthermore, it was observed that for certain instances, the platform was unable to correctly identify a match due to the high noise level of the data acquired by the portable system, which exceeded the permitted tolerance of 4 cm.
As already mentioned in Section 1, several authors [28,32,44,45] have emphasized the importance of integrating SLAM devices with other survey techniques. The combined use of portable laser scanning and UAV-based surveys (employing both photogrammetric and LiDAR technologies) enables the rapid acquisition of a comprehensive dataset, even in highly complex contexts where traditional scanning techniques such as TLS would be impractical due to time constraints. Moreover, as demonstrated in [46], the integration of a general scan of the environment acquired through a SLAM device with detailed scans of specific areas obtained by means of TLS represents a viable approach when both completeness and a high level of detail of some elements is required.
An application that has not yet been thoroughly investigated and for which there are only a few examples in the literature is the use of SLAM-based point clouds as a basis for structural analyses. In this regard, one can mention the work by Sánchez-Aparicio et al. [47], which evaluated the potential of mobile mapping systems for the deflection analysis of historic wooden floors. The authors concentrated on the assessment of different filters for reducing the noise of the point clouds in order to obtain data of high quality, which is a fundamental aspect of structural analyses. It was demonstrated that, despite extensive filtering, these systems cannot be considered reliable techniques for the deflection analysis of beams: for slabs of length L = 5–8 m, they can only identify deflections greater than L/200 (i.e., 2.5–4 cm). However, in a previous study [48], a plane deformation analysis of a historical masonry structure was conducted, comparing the results obtained from a SLAM device and a TLS. The analysis revealed that the SLAM system acquisition time was 7.5 times shorter than that of the TLS. Nonetheless, the structural analysis results differed by only 3%, suggesting that portable laser scanners can also be employed for structural evaluations in cases where the complexity of the environment makes the survey through TLS challenging and inefficient. Other works that relied on SLAM technology with the subsequent aim of using the obtained point clouds to perform advanced numerical simulations for the structural analysis of historical buildings are [49,50]. Nevertheless, it can be asserted that the studies in this field are still in their infancy, and further in-depth analyses should be carried out on a significant number of case studies.
Despite a flourishing literature on the performance assessment of SLAM devices, to date, only a limited number of studies have analyzed these systems not only on the basis of the point cloud obtained but also for the subsequent modeling or vectorization phases, a step frequently required in many applications. Among these, it is worth citing the contribution of [51], in which the authors proposed the generation of NURBS from point clouds acquired with a portable laser scanner, in order to obtain the parametric model of various architectural elements characterized by a complex geometry. The objective of the study described in [52] was to propose an optimized scan-to-BIM workflow for extracting a 3D model from data acquired with an indoor mobile mapping system. Also, the work presented in [53] proposed a preliminary test of BIM modeling based on data provided by SLAM instrumentation, which yielded encouraging results. Finally, a recent significant work on this topic is reported in [54], which provides a complete workflow encompassing the entire survey process with several geomatics techniques (TLS, MLS, UAV photogrammetry) and extends the analysis and the evaluation of the results to the modeling phases. It was found that point clouds generated by portable systems are particularly useful for defining certain geometries, such as walls, and for inserting specific elements, such as columns and beams. Nevertheless, the noise and low density do not ensure adequate resolution for the detailed modeling of architectural and structural elements of small dimensions. As a result, the authors concluded that the scale of representation achievable for models obtained from SLAM point clouds is 1:100/1:200.
This paucity of work on surface modeling and the extraction of map products from SLAM data has prompted our interest in further investigating this area. In particular, it was observed that in most of the case studies documented in the literature, the comparison between data acquired with SLAM systems and ground-truth data (obtained, e.g., through a TLS) is limited to an assessment of point clouds. This analysis typically involves the calculation of cloud-to-cloud distances (C2C), density evaluation and, to a lesser extent, roughness analysis. When the focus is on surface reconstruction, a direct comparison between models (parametric or polygon mesh) is necessary. A model-to-model (M2M) analysis, previously applied in [53], is therefore further investigated in this paper in order to provide a more in-depth evaluation of the potential of SLAM-based surveying.

2. Materials and Methods

The following section (Section 2.1) presents the case study, together with the three portable SLAM systems and the TLS used to acquire the analyzed datasets and the ground truth, respectively. Next, Section 2.2 outlines the methodological approach that was employed to evaluate both the point clouds and the reconstructed meshes.

2.1. Materials

The data used in this paper are part of the 2023 SIFET Benchmark, which focused on the survey of an urban/architectural context using portable laser scanner systems based on SLAM technology. The case study is the Piazza Grande in Arezzo (Italy) (Figure 1a), a square that is unique for its original shape and conformation.
In fact, the shape is that of a quadrilateral, similar to a trapezoid, with the two almost parallel sides measuring 73 m (the longer) and 59 m (the shorter), while the remaining two sides are 63 m and 52 m long. The inclined plane on which the square lies has a height difference of approximately 10 m between the highest and lowest points, from south-west to north-east, and it is overlooked by several buildings from different historical periods with very uneven heights. In addition to the survey of the entire square, which constitutes Dataset A, a second data acquisition was performed for the palace called Palazzo della Fraternita dei Laici (Dataset B, Figure 1b). Construction of the palace began during the 14th century, and it features a distinctive architectural hybrid, with a facade that combines Gothic and Renaissance elements [56]. More specifically, the facade, approximately 32 m long and 15 m high, and the area in front of it, characterized by a 7 m deep balcony overlooking the square and a flight of steps, form Dataset B. Dataset A represents an appropriate setting for a SLAM-based survey, due to the dimensions and shape of the square and the considerable number of buildings facing it. On the other hand, the building facade is a particularly demanding scenario for a SLAM-based survey, given the multitude of architectural details.
For both cases A and B, three surveys were conducted using different commercial PLSs (Figure 2), namely the Leica BLK2GO (Dataset A-B-1), the GeoSLAM ZEB HORIZON (Dataset A-B-2) and the Stonex X120GO (Dataset A-B-3).
The Leica BLK2GO [57], developed by Leica Geosystems AG (Figure 2a), is a handheld imaging laser scanner based on SLAM technology. It is designed for both indoor and outdoor use and comprises three 4.8 Mpx 300° × 135° panoramic cameras for visual SLAM, a high-resolution 12 Mpx 90° × 120° camera for image capture, a LiDAR sensor and an IMU. The system emits laser signals with a wavelength of 830 nm, provides a 360° horizontal and 270° vertical field of view, and is capable of capturing up to 420,000 pts/s. The scanner has a range of 0.5–25 m, with a distance measurement precision of ±3 mm, and provides an indoor accuracy of ±10 mm.
The GeoSLAM ZEB HORIZON [58] by GeoSLAM, now acquired by FARO Technologies (Figure 2b), is a handheld LiDAR scanner with a measuring range of 100 m and the ability to acquire up to 300,000 pts/s. It has the same field of view as the Leica BLK2GO, while the laser wavelength is 903 nm. The relative accuracy is up to 6 mm. This device is integrated with the ZEB Vision accessory for point cloud coloring.
The Stonex X120GO [59], distributed by STONEX Srl (Figure 2c), is a SLAM laser scanner equipped with a rotating head that can acquire up to 320,000 pts/s. It has a minimum range of 0.5 m and a maximum range of 120 m. Like the GeoSLAM ZEB HORIZON, it has a 360° horizontal and 270° vertical field of view, and its relative accuracy is up to 6 mm. Three 5 Mpx cameras cover a 200° horizontal and 100° vertical field of view, capturing panoramic images and information for the coloring of point clouds. Table 1 provides a summary of the technical specifications of the three devices.
The surveys were conducted autonomously by the respective companies, who were left free to define the scanning configuration and the trajectory to follow during the acquisitions. The data obtained from the various surveys were processed and filtered by the companies’ experts using proprietary software, so that these clouds represent the best possible result obtainable from such SLAM systems. Finally, the point clouds in .e57 format were made available to the SIFET working group and, consequently, to the benchmark participants. No information was provided regarding the trajectories followed during the surveys.
The point clouds of the 2023 SIFET Benchmark therefore represent the starting data for this work, and the analyses that will be presented are all results of this research activity. As a preliminary step, points relating to disturbing elements (such as moving people or vehicles), outliers, and data from areas close to the square that are not of interest were manually removed by the authors of this paper before any evaluation was carried out. Table 2 reports the total number of points after the point cloud cleaning operations for both Datasets A and B, while Figure 3a–c show the SLAM-based point clouds of Dataset B.
To assess the performance of the SLAM devices, a ground-truth point cloud was obtained by the SIFET working group using the Leica RTC360 laser scanner [60]. This is a static terrestrial high-speed 3D laser scanner that comes with an integrated HDR spherical imaging sensor and a visual inertial system for real-time registration. The employed TLS has a 360° horizontal field of view and a 300° vertical field of view, a minimum range of 0.5 m and a maximum range of up to 130 m, and is capable of capturing 2,000,000 pts/s. According to the manufacturer’s specifications, this TLS can achieve millimeter accuracy at a distance of several meters, with 1.9 mm accuracy at 10 m and 5.3 mm accuracy at 40 m. A total of 37 scans were acquired uniformly across the square. These scans were then registered by means of the Leica Cyclone Register 360 software, exploiting primarily the well-known Iterative Closest Point (ICP) algorithm [61], with an average residual error of approximately 3 mm. The TLS cloud was then georeferenced based on the topographic points, again within the Cyclone Register 360 environment, with a mean residual error also in the order of 3 mm. Finally, the point clouds acquired by the SLAM devices were aligned to the ground-truth one via ICP, with mean residuals in the range 3.6–5.7 mm, which can be considered more than satisfactory. All these numerical elaborations were carried out by the 2023 SIFET Benchmark working group.
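As an illustration of this kind of fine registration, the hedged sketch below aligns a SLAM cloud to the TLS reference with point-to-plane ICP using the open-source Open3D library. This is not the software used by the benchmark working group (registration was performed in Leica Cyclone Register 360): the file names, the 5 cm correspondence threshold and the normal-estimation radius are assumptions made for the example.

```python
import numpy as np
import open3d as o3d

# Hypothetical file names; the benchmark clouds were distributed in .e57 format and are
# assumed here to have been converted to a format Open3D reads directly (e.g., .ply).
source = o3d.io.read_point_cloud("slam_dataset_B1.ply")   # SLAM point cloud to align
target = o3d.io.read_point_cloud("tls_ground_truth.ply")  # TLS reference cloud

# Point-to-plane ICP requires normals on the target cloud
target.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Fine registration: 5 cm correspondence threshold, identity initial guess
# (the clouds are assumed to be already roughly georeferenced)
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("ICP fitness:", result.fitness)
print("Inlier RMSE (m):", result.inlier_rmse)
source.transform(result.transformation)  # apply the estimated rigid transformation
```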
The TLS data were also subjected to cleaning operations by the authors of the present work, resulting in 225 million points for the Piazza Grande and 99 million points for the palace (Figure 3d), significantly more than in the SLAM datasets. Finally, it should be noted that the SLAM acquisitions and the TLS survey were conducted at different times, as evidenced by a ramp and a tourist totem present only in the TLS point cloud (Figure 3d) and not in the SLAM point clouds. Consequently, the areas containing objects that were not present in both survey periods were excluded from the subsequent analyses.

2.2. Methods

The main purpose of the 2023 SIFET Benchmark was to provide participants with SLAM point clouds so as to have them produce (horizontal) plans and (vertical) sections, i.e., the classical numerical products of architectural surveying, using any software available to them (link to the 2023 SIFET Benchmark call: https://www.sifet.org/wp-content/uploads/2023/06/rev-Call_BENCHMARK_SIFET2023_EXTENDED.pdf (accessed on 29 August 2024)). Given the dimensions and characteristics of Dataset A, the participants were only required to produce plans and sections as polylines directly from the point cloud, considering, for instance, slices of points lying on the cutting planes. Dataset B, instead, focuses only on the most interesting building, and in this case the participants were asked to produce the facade mesh and, from this, two sections (at a given position/elevation). Analyses on the results provided by the participants were then carried out by the benchmark working group, firstly in the form of dimensional checks on the main elements (e.g., openings, pillars, beams, steps). Moreover, the distances between the polylines of the sections and the corresponding slice of the TLS point cloud were evaluated, and a cloud-to-model comparison was conducted between the TLS ground-truth cloud and the meshes obtained from the SLAM data. The results of the 2023 SIFET Benchmark will be published in the future.
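As a hint of how section polylines can be derived from a point cloud, the sketch below extracts the thin slab of points around a horizontal cutting plane; the slab can then be projected to 2D and vectorized into the plan polylines. The slice elevation and thickness are illustrative values, not those prescribed by the benchmark call.

```python
import numpy as np

def extract_horizontal_slice(points: np.ndarray, z_plane: float,
                             half_thickness: float = 0.01) -> np.ndarray:
    """Return the points lying within +/- half_thickness of a horizontal cutting plane.

    points: (N, 3) array of XYZ coordinates; z_plane and half_thickness in metres.
    """
    mask = np.abs(points[:, 2] - z_plane) <= half_thickness
    return points[mask]

# Example: a 2 cm thick slice at 1.20 m above the local reference height
cloud = np.random.rand(100_000, 3) * 20.0  # placeholder for a loaded point cloud
plan_slice = extract_horizontal_slice(cloud, z_plane=1.20, half_thickness=0.01)
print(plan_slice.shape)
```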
In the present work, instead, the evaluation methodology can be divided into two distinct steps. The first phase concerns a qualitative and quantitative analysis of the point clouds, while the second phase is focused on the surface reconstruction of Dataset B and the evaluation of the obtained models through a model-to-model analysis. These two steps are described in detail in Section 2.2.1 and Section 2.2.2.

2.2.1. Analysis of the Point Clouds

Firstly, the accuracy and precision of the point clouds provided by the SLAM devices were evaluated by estimating the absolute cloud-to-cloud (C2C) distances to the ground truth (acquired through the TLS) for each survey. The surface density was also calculated on both Datasets A and B by counting the number of points in a neighborhood of given radius and dividing by the neighborhood area. This analysis can help evaluate the ability of the survey method to capture local geometric features [38], thereby providing an indication of the level of detail characterizing the point cloud. Indeed, a low surface density in areas rich in complex geometric elements results in a significant loss of information, which will also affect the modeling phase. Furthermore, the noise level (which, in turn, also influences the achievable level of detail) was evaluated locally using the roughness feature: the roughness value for each point is defined as the distance between the point and the plane of best fit, estimated on the nearest neighbors. All these analyses were conducted using the open-source software CloudCompare (version 2.13.beta) [62], setting the radius of the sphere centered on each point, which defines the neighborhood size, to 5 cm.
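The metrics were computed in CloudCompare; the NumPy/SciPy sketch below reproduces their definitions in an approximate form (nearest-neighbor C2C distance, surface density over a spherical neighborhood, and roughness as the distance to the local best-fit plane), without claiming to match CloudCompare's exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_distances(compared: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Absolute cloud-to-cloud distance: nearest-neighbor distance from each
    point of the compared (SLAM) cloud to the reference (TLS) cloud."""
    dist, _ = cKDTree(reference).query(compared, k=1)
    return dist

def surface_density(points: np.ndarray, radius: float = 0.05) -> np.ndarray:
    """Number of neighbors within a sphere of the given radius, divided by the
    neighborhood area pi*r^2 (points per square metre)."""
    tree = cKDTree(points)
    counts = np.array([len(idx) for idx in tree.query_ball_point(points, r=radius)])
    return counts / (np.pi * radius ** 2)

def roughness(points: np.ndarray, radius: float = 0.05) -> np.ndarray:
    """Distance of each point from the least-squares plane fitted to its neighbors."""
    tree = cKDTree(points)
    values = np.full(len(points), np.nan)
    for i, neighbors in enumerate(tree.query_ball_point(points, r=radius)):
        if len(neighbors) < 4:      # not enough points to fit a plane reliably
            continue
        nbh = points[neighbors]
        centroid = nbh.mean(axis=0)
        # The plane normal is the right singular vector of the smallest singular value
        _, _, vt = np.linalg.svd(nbh - centroid)
        normal = vt[-1]
        values[i] = abs(np.dot(points[i] - centroid, normal))
    return values
```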

2.2.2. Mesh Modeling and Model-to-Model Evaluation

In addition to the analyses carried out on the point clouds, we also investigated the modeling step on Dataset B. Indeed, the majority of applications necessitate the input of a model, as point clouds are not final products but rather intermediate ones. Consequently, it is of paramount importance to analyze the actual quality of this product, as the modeling process may introduce simplifications, noise or artifacts with respect to the starting point cloud.
Rather than using parametric modeling, we employed mesh generation on the point clouds, as requested and performed within the 2023 SIFET Benchmark. First of all, mesh modeling was selected over parametric modeling due to the numerous complex elements that characterize Dataset B. Additionally, as previously discussed, the mesh can serve as an intermediate product in the scan-to-BIM process, facilitating semi-automatic modeling and reducing the necessity for manual operator intervention, which in turn also reduces the time required. The process of manual modeling is highly operator-dependent, with a high degree of subjectivity that may vary depending on the specific dataset being modeled. In this work, the use of mesh modeling eliminates the potential for bias that can arise from manual modeling, ensuring a completely balanced comparison between all the tested devices.
In more detail, mesh reconstruction was performed using the open-source software MeshLab (version 2023.12) [63], resorting to the Screened Poisson Surface Reconstruction method [64], an algorithm that creates watertight surfaces and exhibits resilience to noisy data and artifacts that can arise, e.g., from misregistration [65]. MeshLab was preferred over other software solutions due to its superior flexibility, the availability of different mesh modeling algorithms and the possibility of setting the parameters of these algorithms. In contrast to some commercial black-box software, in fact, the ability to adjust these parameters enabled the optimization of the results. Surface reconstruction was carried out on Dataset B both on the point clouds derived from the SLAM devices and on the one acquired by the TLS, in order to obtain a ground-truth model. This permitted a direct model-to-model (M2M, or mesh-to-mesh) comparison, rather than comparing the SLAM-based models to the TLS point cloud.
Regarding the parameters of the Poisson algorithm, the same values were applied to all the datasets, with the exception of the mesh generated on the GeoSLAM ZEB HORIZON point cloud. In particular, the following parameters were manually tuned, with the aim of obtaining results that are as detailed as possible and, at the same time, free of noise. The Reconstruction Depth, i.e., the maximum depth of the octree used for surface reconstruction, turned out to be the most significant control parameter. Running at depth d corresponds to solving on a voxel grid whose resolution is no larger than 2^d × 2^d × 2^d [63]. The computation time of the algorithm grows rapidly with this parameter. In our experiments, a value of 12 was specified. The Scale Factor, which is the ratio between the diameter of the cube used for reconstruction and the diameter of the samples’ bounding cube [63], was set to 1.1, i.e., the default value. However, a Scale Factor of 1.1 resulted in a highly noisy and rough mesh for the GeoSLAM ZEB HORIZON dataset only, rendering it incomparable with the other models. In order to provide greater coherence between the meshes, the value of this parameter was thus increased to 2.5. Following a series of trials, it was established that the remaining parameters of the Poisson algorithm should be retained at their default values.
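Reconstruction was carried out interactively in MeshLab; for a scriptable alternative, the hedged sketch below runs the same Screened Poisson algorithm through Open3D, whose depth and scale parameters play roles analogous to MeshLab's Reconstruction Depth and Scale Factor (the correspondence is approximate, and the input file name is hypothetical).

```python
import open3d as o3d

# Hypothetical input: a cleaned Dataset B point cloud exported to .ply
pcd = o3d.io.read_point_cloud("dataset_B3_cleaned.ply")

# Poisson reconstruction requires consistently oriented normals
if not pcd.has_normals():
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(30)

# depth controls the octree resolution (12 in our MeshLab runs); scale expands the
# reconstruction cube relative to the samples' bounding box (1.1 by default, 2.5 used
# for the noisier GeoSLAM ZEB HORIZON dataset)
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=12, scale=1.1)

o3d.io.write_triangle_mesh("dataset_B3_mesh.ply", mesh)
```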
The comparisons were then made between the reference mesh (built from the TLS data) and the meshes generated on the point clouds acquired with the SLAM devices, thus following a mesh-to-mesh approach. This analysis was performed using MeshLab’s Distance from Reference Mesh filter, which measures the (signed) distance from each vertex of one mesh to the nearest point on the surface of the reference mesh.
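A comparable signed vertex-to-surface distance can also be computed outside MeshLab, for instance with Open3D's raycasting module, as in the sketch below. Note that its sign convention (negative inside the closed reference surface) may differ from MeshLab's normal-based convention, and the file names are placeholders.

```python
import numpy as np
import open3d as o3d

# Placeholder file names for the SLAM-based mesh and the TLS reference mesh
mesh_slam = o3d.io.read_triangle_mesh("mesh_slam_B1.ply")
mesh_ref = o3d.io.read_triangle_mesh("mesh_tls_reference.ply")

# Build a raycasting scene on the (watertight) reference mesh
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh_ref))

# Signed distance from every vertex of the compared mesh to the reference surface
query = o3d.core.Tensor(np.asarray(mesh_slam.vertices), dtype=o3d.core.Dtype.Float32)
signed_dist = scene.compute_signed_distance(query).numpy()

print(f"mean signed distance: {signed_dist.mean():.4f} m")
print(f"std. deviation:       {signed_dist.std():.4f} m")
print(f"mean absolute:        {np.abs(signed_dist).mean():.4f} m")
```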
It should be noted that the mesh generation was not carried out on Dataset A, as the M2M evaluation would have lacked meaningfulness due to the presence of gaps and occlusions on the point clouds, especially in the north-eastern porched side of the square, resulting in reconstruction failures and artifacts on both the SLAM and TLS meshes. The presence of additional elements, such as bar tables, curtains, ornamental plants, and furniture in general, also contributed to the aforementioned issues.

3. Results

The following sections present the results of the investigations conducted. As previously stated, the point cloud assessment was performed on both cases A and B, with the C2C comparisons on the entire Piazza Grande enabling the verification of SLAM accuracy on large datasets. Conversely, the facade of the Palazzo della Fraternita dei Laici is well suited to surface reconstruction analyses, given its numerous and complex architectural elements.

3.1. Analysis of the Point Clouds

The results of the C2C analysis performed on the surveys of the entire square are shown in Figure 4 and Table 3. From a visual inspection, considerable differences can be observed for Dataset A-1 in the upper left part of the cloud (green points in Figure 4a, corresponding to distances in the range 10–15 cm). Although the C2C analysis yields overall good results (mean value equal to 3.0 cm), the observed local deviations contribute to the highest average error among the three datasets. In contrast, Dataset A-2 does not exhibit visible errors due to local deformations. This is corroborated by the low mean C2C distance (2.0 cm). On the other hand, the higher noise level affecting this point cloud can be perceived in Figure 4b: the most visible points are those furthest away from the mean surface, which is why the cloud appears in colors tending more towards green (i.e., towards higher distance values). Finally, the C2C distances observed for Dataset A-3 are minimal, with an average error of 1.3 cm. Please note that for all three datasets, outlier distance values of up to 30 cm (colored in red) are visible in Figure 4, due to the presence of disturbing objects (e.g., bar tables, non-static objects such as people or vehicles) in the SLAM point clouds that were not manually removed. However, the limited number of such points did not significantly influence the statistics shown in Table 3.
The analysis on Dataset A therefore provides a global indication of the differences with respect to the ground-truth point cloud, thus highlighting the overall accuracy of the SLAM methods in an urban context of significant size. On the contrary, the C2C assessment performed on the Palazzo della Fraternita dei Laici (Figure 5 and Table 4) allows for a more in-depth local analysis. Dataset B-1 reflects the outcomes previously obtained for the entire square, with significant discrepancies on the left side of the palace. In the case of the GeoSLAM ZEB HORIZON point cloud, instead, it now becomes evident that a deviation exists in the right-hand zone of the horizontal surface of the balcony, which may be indicative of a local deformation. In contrast, Dataset B-3 shows the lowest differences with respect to the ground truth, with an average error of 6 mm. As previously described in Section 2.1, it is worth mentioning that the gap visible in Figure 5 in the center of the point clouds is a consequence of the manual removal of an area comprising objects present only in the TLS data, which were acquired at a different time from the SLAM surveys.
As outlined in Section 2.2.1, surface density and roughness assessment was also performed on both cases A and B. Figure 6, Figure 7, Figure 8 and Figure 9 present the results in graphical form while Table 5 shows the mean and standard deviation values.
Upon analysis of the statistics, it can be noted that the four devices provided point clouds with markedly different densities. As expected, the TLS point cloud shows a high average point density for both Datasets A and B, which is, however, on average slightly lower than that of the Leica BLK2GO dataset, although a more uniform point distribution can be appreciated (evidenced in Figure 6 and by the lower standard deviation value). In contrast to the SLAM data, which are acquired by walking relatively close to the object, the TLS survey is influenced by the location of the stations, which may have an impact on the average density value. On the other hand, the GeoSLAM ZEB HORIZON and the Stonex X120GO datasets are characterized by significantly lower density values, especially for the Palazzo della Fraternita dei Laici point sets. These differences among the point clouds obtained through the handheld devices may have various origins, including the sensor technical specifications, the SLAM algorithms employed, but also the distinct approaches used by the operators in post-processing the point clouds with the aim of achieving the best possible results. Although there is no certain information about the post-processing steps applied to the acquired data, it can be reasonably assumed that Datasets B-2 and B-3 underwent a filtering process. Furthermore, the loss of uniformity evident in the upper right area of the TLS cloud (Figure 7d) is a consequence of a net that prevented the acquisition of that area in a satisfactory manner.
The roughness analysis at planar surfaces is indicative of the noise level that characterizes the point clouds. Among the SLAM surveys, Datasets A-3 and B-3 show the lowest roughness values, comparable to the TLS ones. This aspect is also clearly visible from Figure 8 and Figure 9, which demonstrate that the noise level is higher for the data acquired with the Leica BLK2GO and the GeoSLAM ZEB HORIZON devices.

3.2. Mesh Modelling and Model-to-Model Evaluation

Figure 10 shows the triangular meshes reconstructed from the SLAM and TLS point clouds for Dataset B. As envisaged, it can be observed from Table 6 that the number of faces and vertices is correlated with the point cloud surface density, with the exception of Dataset B-2. In that case, in fact, the different value applied to the Scale Factor parameter produced a lower number of faces and vertices with respect to the dataset characterized by a similar density, namely Dataset B-3. Indeed, as previously described in Section 2.2.2, the mesh constructed from Dataset B-2 with the Scale Factor parameter set to 1.1 resulted in a highly noisy output, rendering it challenging to interpret and compare. Consequently, it was necessary to modify the parameter value. As illustrated in Figure 10b, the final outcome is visually comparable to the other datasets. However, a closer inspection reveals that the lower part of the facade was smoothed by the reconstruction algorithm, while the upper portion still exhibits residual noise. Increasing this parameter further to totally eliminate the noise would have resulted in a loss of detail and extreme simplification of the mesh. Therefore, a Scale Factor of 2.5 was chosen in order to achieve a fair balance between the two.
Subsequent to the modeling operation, a comparison was conducted between the SLAM-based models and the ground-truth mesh derived from TLS. In order to avoid biased results of the M2M analysis, it was necessary to remove the portions corresponding to temporary objects present in the TLS data, as was performed for the C2C assessment. The outcomes of the model-to-model evaluation are presented in Figure 11 and Table 6, and are in line with the results obtained from the C2C analysis. However, the signed M2M distance allows us to better capture local deformations or systematic trends. Indeed, Figure 11a reveals a greater deviation with respect to the ground truth on the left side of the facade for Dataset B-1; in particular, it can be observed that, proceeding from the left to the right side of the facade, there is a shift from positive to negative distance values, which could be traced back to a relative rotation between the data. As already highlighted by the C2C analysis, a local deformation is also visible in Dataset B-2, on the right-hand side at the horizontal surface of the balcony. In any case, average M2M errors between 0.2 and 1.4 cm (Table 6) show that all SLAM datasets provided very similar results. Moreover, as can be noted in Figure 11, for all three datasets, high M2M values are observed at the opening on the right-hand side, at the top of the facade and at the windows. This is due to the way the Poisson algorithm attempts to fill any gaps in the final mesh, producing some artifacts.

4. Discussion

From the analysis of Dataset A, it emerges that the average discrepancy between the SLAM-based point clouds and the TLS one is less than 10 cm for all three datasets (mean C2C values between 1.3 cm and 3.0 cm), which corroborates the findings of other studies (e.g., [36,37]). The higher error on a portion of Dataset A-1 (10–15 cm) is likely due to trajectory drift and local deformations that the SLAM algorithm failed to avoid. Nevertheless, these values remain lower than those observed in previous works (e.g., [35] reports local errors of up to 25 cm), indicating, on the one hand, the suitability of the context of Piazza Grande for a SLAM-based survey. On the other hand, these outcomes demonstrate that the employed devices and processing algorithms were effective in the trajectory estimation and map generation.
As expected, the presence of local deformations is even less evident in the case of Dataset B (see also the detail shown in Figure 12), which is more limited in extent and therefore less prone to drift.
The primary issue identified in the existing literature on these systems is the high noise level and the lower level of detail that can be achieved, which is undoubtedly inferior to that achievable with TLS or photogrammetry. The density and roughness analyses presented in this work reinforce this conclusion. In addition to the global results, which are reported in Section 3.1, it is of particular interest to focus on two architectural elements, namely a portion of the stringcourse frame and a helical semicolumn, depicted in Figure 13. This reveals that Dataset B-3 is characterized by a notable loss of detail (and low point density), despite being less noisy on planar surfaces. Obviously, details are much sharper in the TLS point cloud, followed by Dataset B-1, where the column and the decorative elements are accurately reconstructed. In the case of Dataset B-1, a satisfactory trade-off is achieved between the level of detail and the noise of the point cloud. Usually, works in the literature concentrate on roughness analysis to evaluate the capacity of SLAM devices to reconstruct details (e.g., [32,66]). However, this case study shows that roughness values alone are not sufficient and that a density analysis should also be included. Dataset B-3 is illustrative: despite roughness values comparable to those of the TLS, the low density of the point cloud severely limits the level of detail.
The current limitations of SLAM devices in terms of the level of detail they can provide are even more evident after the mesh reconstruction phase. In this regard, the complex helical semicolumn shown in Figure 14, taken as an example for the roughness assessment in Figure 13, is quite illustrative. The completeness, precision and level of detail of the TLS mesh are significantly superior to those observed in the other models. Indeed, the higher noise and the lower density that characterize the point clouds acquired through the SLAM systems influence the modeling phase. In particular, the helical semicolumn is barely discernible in the SLAM meshes, whereas it is complete and sharp in the TLS case. Dataset B-3, which is less noisy at the point cloud level, also yielded a globally slightly more refined mesh. However, its low density negatively affected the ability to detect and reconstruct several decoration details, as for the semicolumn element. On the other hand, Dataset B-1 is denser, and some parts of the column (Figure 14a) have been reconstructed with greater completeness and detail than in the other cases (Figure 14b,c). This is partially visible from the M2M analysis (Figure 15) conducted on the portion of the mesh corresponding to the semicolumn and its adjacent elements (to better highlight local discrepancies between the TLS and the SLAM-based models of the column, caused by the different level of detail of the data, an ICP alignment was performed on these subsets prior to the M2M assessment; in this way, the influence of the global accuracy of the original point clouds on this analysis was limited).
Summarizing, the level of detail that can be achieved for models obtained from SLAM data is undoubtedly inferior to that of models reconstructed from TLS point clouds. In particular, the findings of this study corroborate and reinforce the observations made by other researchers (e.g., [54,67]): the scale of representation that can be achieved is 1:100–1:200.
Finally, it is worth mentioning that this work is limited to the analysis of automatic mesh modeling from point clouds. Consequently, the final step in the generation of real BIM models is missing. Future investigations will also focus on manual parametric modeling, with the objective of achieving a comprehensive understanding of the scan-to-BIM workflow applied to SLAM-based data.

5. Conclusions

In this work, a modeling feasibility study was conducted on point clouds acquired with portable laser scanners based on SLAM technology. The advent of these devices is making surveying procedures increasingly easy and fast in a multitude of scenarios and environments. However, there is still a gap in the literature regarding the subsequent post-processing phases and the use of the SLAM data, particularly in the context of surface reconstruction. Indeed, it is important to note that most applications require a model as an input, and the modeling process potentially introduces simplifications, noise or artifacts with respect to the starting point cloud.
Thanks to the extensive datasets acquired for the benchmark proposed in 2023 by the Italian Society of Photogrammetry and Topography (SIFET), it was possible to study the performance of three different SLAM devices by analyzing the acquired point clouds and, above all, carrying out modeling tests directly on them. The availability of data acquired by TLS made it possible to compare the results obtained with a ground-truth dataset. Overall, the experiments demonstrate that SLAM devices are suitable for model reconstruction up to a scale of 1:100–1:200, thereby corroborating the findings reported in the literature.
Based on the outcomes and considerations of this work, we are planning to further analyze the feasibility of modeling from SLAM data to achieve a SLAM-to-BIM/HBIM process, including semantic segmentation of the point cloud as a key step. Additionally, the idea is to go beyond the HBIM model derived from SLAM data and move to finite element modeling for structural analysis, with a view to a complete SLAM-to-BIM/HBIM-to-FEM workflow.

Author Contributions

Conceptualization, A.M., E.M., A.B. and D.V.; methodology, A.M. and E.M.; validation, E.M.; formal analysis, A.M. and E.M.; investigation, A.M.; data curation, A.M.; writing—original draft preparation, A.M.; writing—review and editing, E.M., A.B. and D.V.; visualization, A.M.; supervision, A.B. and D.V.; project administration, D.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in this study were provided by the Italian Society of Photogrammetry and Topography (SIFET).

Acknowledgments

The authors would like to thank the Italian Society of Photogrammetry and Topography (SIFET) and the 2023 SIFET Benchmark working group for the use of the datasets. We would also like to extend our thanks to the anonymous reviewers whose valuable feedback helped to improve the quality of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AEC  Architecture, Engineering and Construction
BIM  Building Information Modeling
C2C  Cloud-to-Cloud
FEM  Finite Element Method
GNSS  Global Navigation Satellite System
GOG  Grade of Generation
HBIM  Historic Building Information Modeling
ICP  Iterative Closest Point
IMU  Inertial Measurement Unit
LiDAR  Light Detection and Ranging
LOA  Level of Accuracy
LOD  Level of Detail
MLS  Mobile Laser Scanner
MMS  Mobile Mapping System
M2M  Model-to-Model (Mesh-to-Mesh)
PLS  Portable Laser Scanner
SIFET  Italian Society of Photogrammetry and Topography
SLAM  Simultaneous Localization and Mapping
TLS  Terrestrial Laser Scanner
UAV  Unmanned Aerial Vehicle
WLS  Wearable Laser Scanner

References

  1. Murphy, M.; McGovern, E.; Pavia, S. Historic building information modelling (HBIM). Struct. Surv. 2009, 27, 311–327. [Google Scholar] [CrossRef]
  2. Wang, Q.; Guo, J.; Kim, M.K. An application oriented Scan-to-BIM framework. Remote Sens. 2019, 11, 365. [Google Scholar] [CrossRef]
  3. Rocha, G.; Mateus, L.; Fernández, J.; Ferreira, V. A Scan-to-BIM methodology applied to heritage buildings. Heritage 2020, 3, 47–67. [Google Scholar] [CrossRef]
  4. Barrile, V.; Bernardo, E.; Bilotta, G. An experimental HBIM processing: Innovative tool for 3D model reconstruction of morpho-typological phases for the cultural heritage. Remote Sens. 2022, 14, 1288. [Google Scholar] [CrossRef]
  5. Visintini, D.; Marcon, E.; Pantò, G.; Canevese, E.; De Gottardo, T.; Bertani, I. Advanced 3D modeling versus Building Information Modeling: The case study of Palazzo Ettoreo in Sacile (Italy). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 1137–1143. [Google Scholar] [CrossRef]
  6. Mora, R.; Sánchez-Aparicio, L.J.; Maté-González, M.Á.; García-Álvarez, J.; Sanchez-Aparicio, M.; González-Aguilera, D. An historical building information modelling approach for the preventive conservation of historical constructions: Application to the historical library of Salamanca. Autom. Constr. 2021, 121, 103449. [Google Scholar] [CrossRef]
  7. Costantino, D.; Pepe, M.; Restuccia, A. Scan-to-HBIM for conservation and preservation of Cultural Heritage building: The case study of San Nicola in Montedoro church (Italy). Appl. Geomat. 2023, 15, 607–621. [Google Scholar] [CrossRef]
  8. Parrinello, S.; Sanseverino, A.; Fu, H. HBIM modelling for the architectural valorisation via a maintenance digital eco-system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 1157–1164. [Google Scholar] [CrossRef]
  9. Garramone, M.; Jovanovic, D.; Oreni, D.; Barazzetti, L.; Previtali, M.; Roncoroni, F.; Mandelli, A.; Scaioni, M. Basilica di San Giacomo in Como (Italy): Drawings and HBIM to manage archeological, conservative and structural activities. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 653–660. [Google Scholar] [CrossRef]
  10. Ursini, A.; Grazzini, A.; Matrone, F.; Zerbinatti, M. From scan-to-BIM to a structural finite elements model of built heritage for dynamic simulation. Autom. Constr. 2022, 142, 104518. [Google Scholar] [CrossRef]
  11. Abbate, E.; Invernizzi, S.; Spanò, A. HBIM parametric modelling from clouds to perform structural analyses based on finite elements: A case study on a parabolic concrete vault. Appl. Geomat. 2022, 14, 79–96. [Google Scholar] [CrossRef]
  12. Barazzetti, L.; Banfi, F.; Brumana, R.; Gusmeroli, G.; Previtali, M.; Schiantarelli, G. Cloud-to-BIM-to-FEM: Structural simulation with accurate historic BIM from laser scans. Simul. Model. Pract. Theory 2015, 57, 71–87. [Google Scholar] [CrossRef]
  13. Banfi, F. BIM orientation: Grades of generation and information for different type of analysis and management process. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 57–64. [Google Scholar] [CrossRef]
  14. Banfi, F. HBIM generation: Extending geometric primitives and BIM modelling tools for heritage structures and complex vaulted systems. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 139–148. [Google Scholar] [CrossRef]
  15. Brumana, R.; Cantini, L.; Previtali, M.; Della Torre, S. HBIM level of detail-geometry-accuracy and survey analysis for architectural preservation. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W11, 2019 GEORES 2019—2nd International Conference of Geomatics and Restoration, Milan, Italy, 8–10 May 2019; pp. 293–299. [Google Scholar]
  16. Croce, V.; Caroti, G.; Piemonte, A.; De Luca, L.; Véron, P. H-BIM and artificial intelligence: Classification of architectural heritage for semi-automatic Scan-to-BIM reconstruction. Sensors 2023, 23, 2497. [Google Scholar] [CrossRef] [PubMed]
  17. Croce, V.; Caroti, G.; De Luca, L.; Jacquot, K.; Piemonte, A.; Véron, P. From the semantic point cloud to heritage-building information modeling: A semiautomatic approach exploiting machine learning. Remote Sens. 2021, 13, 461. [Google Scholar] [CrossRef]
  18. Pierdicca, R.; Paolanti, M.; Matrone, F.; Martini, M.; Morbidoni, C.; Malinverni, E.S.; Frontoni, E.; Lingua, A.M. Point cloud semantic segmentation using a deep learning framework for cultural heritage. Remote Sens. 2020, 12, 1005. [Google Scholar] [CrossRef]
  19. Matrone, F.; Grilli, E.; Martini, M.; Paolanti, M.; Pierdicca, R.; Remondino, F. Comparing machine and deep learning methods for large 3D heritage semantic segmentation. ISPRS Int. J. Geo-Inf. 2020, 9, 535. [Google Scholar] [CrossRef]
  20. Avena, M.; Patrucco, G.; Remondino, F.; Spanò, A. A scalable approach for automating Scan-to-BIM processes in the heritage field. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2024, 48, 25–31. [Google Scholar] [CrossRef]
  21. Yang, X.; Lu, Y.C.; Murtiyoso, A.; Koehl, M.; Grussenmeyer, P. HBIM modeling from the surface mesh and its extended capability of knowledge representation. ISPRS Int. J. Geo-Inf. 2019, 8, 301. [Google Scholar] [CrossRef]
  22. Vieira, M.M.; Gonçalves, J.E.; de O Silva, D.M.; Mesquita, E.F.; Lima, J.M. Semi-automatic scan-to-BIM procedure applied to architectural ornaments of Nossa Senhora do Rosário Church, Aracati-CE. J. Build. Pathol. Rehabil. 2024, 9, 1–14. [Google Scholar] [CrossRef]
  23. Rashdi, R.; Martínez-Sánchez, J.; Arias, P.; Qiu, Z. Scanning technologies to Building Information Modelling: A review. Infrastructures 2022, 7, 49. [Google Scholar] [CrossRef]
  24. Liu, J.; Azhar, S.; Willkens, D.; Li, B. Static Terrestrial Laser Scanning (TLS) for Heritage Building Information Modeling (HBIM): A systematic review. Virtual Worlds 2023, 2, 90–114. [Google Scholar] [CrossRef]
  25. Yang, S.; Xu, S.; Huang, W. 3D point cloud for cultural heritage: A scientometric survey. Remote Sens. 2022, 14, 5542. [Google Scholar] [CrossRef]
  26. Di Stefano, F.; Chiappini, S.; Gorreja, A.; Balestra, M.; Pierdicca, R. Mobile 3D scan LiDAR: A literature review. Geomat. Nat. Hazards Risk 2021, 12, 2387–2429. [Google Scholar] [CrossRef]
  27. Otero, R.; Lagüela, S.; Garrido, I.; Arias, P. Mobile indoor mapping technologies: A review. Autom. Constr. 2020, 120, 103399. [Google Scholar] [CrossRef]
28. Grasso, N.; Dabove, P.; Piras, M. The use of SLAM and UAV technology in geological field for monitoring: The case study of the Bossea cave. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 73–79. [Google Scholar] [CrossRef]
29. Trybała, P.; Kasza, D.; Wajs, J.; Remondino, F. Comparison of low-cost handheld LiDAR-based SLAM systems for mapping underground tunnels. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 517–524. [Google Scholar] [CrossRef]
30. Balenović, I.; Liang, X.; Jurjević, L.; Hyyppä, J.; Seletković, A.; Kukko, A. Hand-held personal laser scanning—Current status and perspectives for forest inventory application. Croat. J. For. Eng. 2021, 42, 165–183. [Google Scholar]
  31. Chiabrando, F.; Sammartano, G.; Spanò, A.; Spreafico, A. Hybrid 3D models: When geomatics innovations meet extensive built heritage complexes. ISPRS Int. J. Geo-Inf. 2019, 8, 124. [Google Scholar] [CrossRef]
  32. Maset, E.; Valente, R.; Iamoni, M.; Haider, M.; Fusiello, A. Integration of photogrammetry and portable mobile mapping technology for 3D modeling of cultural heritage sites: The case study of the Bziza temple. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 831–837. [Google Scholar] [CrossRef]
  33. Grisetti, G.; Kümmerle, R.; Stachniss, C.; Burgard, W. A tutorial on Graph-based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43. [Google Scholar] [CrossRef]
  34. Tiozzo Fasiolo, D.; Scalera, L.; Maset, E. Comparing LiDAR and IMU-based SLAM approaches for 3D robotic mapping. Robotica 2023, 41, 2588–2604. [Google Scholar] [CrossRef]
  35. Maset, E.; Cucchiaro, S.; Cazorzi, F.; Crosilla, F.; Fusiello, A.; Beinat, A. Investigating the performance of a handheld mobile mapping system in different outdoor scenarios. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 43, 103–109. [Google Scholar] [CrossRef]
  36. Nocerino, E.; Menna, F.; Remondino, F.; Toschi, I.; Rodríguez-Gonzálvez, P. Investigation of indoor and outdoor performance of two portable mobile mapping systems. In Proceedings of the Videometrics, Range Imaging, and Applications XIV, Munich, Germany, 26–27 June 2017; Volume 10332, pp. 125–139. [Google Scholar]
  37. Tucci, G.; Visintini, D.; Bonora, V.; Parisi, E.I. Examination of indoor mobile mapping systems in a diversified internal/external test field. Appl. Sci. 2018, 8, 401. [Google Scholar] [CrossRef]
  38. Tanduo, B.; Teppati Losè, L.; Chiabrando, F. Documentation of complex environments in cultural heritage sites. A SLAM-based survey in the Castello del Valentino basement. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 489–496. [Google Scholar] [CrossRef]
  39. Tanduo, B.; Martino, A.; Balletti, C.; Guerra, F. New tools for urban analysis: A SLAM-based research in Venice. Remote Sens. 2022, 14, 4325. [Google Scholar] [CrossRef]
  40. Martino, A.; Breggion, E.; Balletti, C.; Guerra, F.; Renghini, G.; Centanni, P. Digitization approaches for urban cultural heritage: Last generation MMS within Venice outdoor scenarios. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 265–272. [Google Scholar] [CrossRef]
  41. Ibrahimkhil, M.H.; Shen, X.; Barati, K.; Wang, C.C. Dynamic progress monitoring of masonry construction through mobile SLAM mapping and as-built modeling. Buildings 2023, 13, 930. [Google Scholar] [CrossRef]
  42. Vassena, G.P.; Perfetti, L.; Comai, S.; Mastrolembo Ventura, S.; Ciribini, A.L. Construction progress monitoring through the integration of 4D BIM and SLAM-based mapping devices. Buildings 2023, 13, 2488. [Google Scholar] [CrossRef]
  43. Sgrenzaroli, M.; Ortiz Barrientos, J.; Vassena, G.; Sanchez, A.; Ciribini, A.; Mastrolembo Ventura, S.; Comai, S. Indoor Mobile Mapping Systems and (BIM) digital models for construction progress monitoring. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 121–127. [Google Scholar] [CrossRef]
44. Sammartano, G. Rilievi integrati UAV e terrestri, basati su tecnologia SLAM, a Pescara del Tronto [Integrated UAV and terrestrial surveys based on SLAM technology in Pescara del Tronto]. Atti Rass. Tec. 2019, 3, 186–192. [Google Scholar]
  45. Perfetti, L.; Vassena, G.; Fassi, F. Preliminary survey of historic buildings with wearable mobile mapping systems and UAV photogrammetry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 1217–1223. [Google Scholar] [CrossRef]
  46. Keitaanniemi, A.; Virtanen, J.P.; Rönnholm, P.; Kukko, A.; Rantanen, T.; Vaaja, M.T. The combined use of SLAM laser scanning and TLS for the 3D indoor mapping. Buildings 2021, 11, 386. [Google Scholar] [CrossRef]
  47. Sánchez-Aparicio, L.J.; Villanueva-Llauradó, P.; Sanz-Honrado, P.; Aira-Zunzunegui, J.R.; Pinilla Melo, J.; González-Aguilera, D.; Oliveira, D. Evaluation of a SLAM-based point cloud for deflection analysis in historic timber floors. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 1411–1418. [Google Scholar] [CrossRef]
48. Sánchez Aparicio, L.J.; Conde Carnero, B.; González, M.; Mora, R.; Sánchez Aparicio, M.; García Álvarez, J.; González Aguilera, D. A comparative study between WMMS and TLS for the stability analysis of the San Pedro Church barrel vault by means of the finite element method. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 1047–1054. [Google Scholar] [CrossRef]
  49. Sánchez-Aparicio, L.J.; Mora, R.; Conde, B.; Maté-González, M.Á.; Sánchez-Aparicio, M.; González-Aguilera, D. Integration of a wearable mobile mapping solution and advance numerical simulations for the structural analysis of historical constructions: A case of study in San Pedro Church (Palencia, Spain). Remote Sens. 2021, 13, 1252. [Google Scholar] [CrossRef]
  50. Rodríguez-Martín, M.; Sánchez-Aparicio, L.J.; Maté-González, M.Á.; Muñoz-Nieto, Á.L.; Gonzalez-Aguilera, D. Comprehensive generation of historical construction CAD models from data provided by a wearable mobile mapping system: A case study of the Church of Adanero (Ávila, Spain). Sensors 2022, 22, 2922. [Google Scholar] [CrossRef]
  51. Sammartano, G.; Previtali, M.; Banfi, F. Parametric generation in HBIM workflows for SLAM-based data: Discussing expectations on suitability and accuracy. In Proceedings of the Arqueológica 2.0 & Geores, Valencia, Spain, 26–28 April 2021; pp. 374–388. [Google Scholar]
  52. Roggeri, S.; Vassena, G.P.M.; Tagliabue, L. Scan-to-BIM efficient approach to extract BIM models from high productive indoor mobile mapping survey. Proc. Int. Struct. Eng. Constr. 2022, 9, 1–6. [Google Scholar] [CrossRef]
  53. Matellon, A.; Maset, E.; Visintini, D.; Beinat, A. Feasibility and accuracy of as-built modelling from SLAM-based point clouds: Preliminary results. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 273–278. [Google Scholar] [CrossRef]
  54. Matrone, F.; Colucci, E.; Ugliotti, F.; Del Giudice, M. From an integrated survey with MMS to a Scan-to-BIM process for educational purposes. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 279–286. [Google Scholar] [CrossRef]
  55. Regione Toscana. Ortofoto OFC-RT2023, 20 cm, 1:5000. 2024. Available online: https://www502.regione.toscana.it/geoscopio/ortofoto.html (accessed on 5 June 2024).
  56. Discover Arezzo. 2024. Available online: https://www.discoverarezzo.com/scopri-arezzo/piazza-grande/ (accessed on 2 July 2024).
  57. Leica Geosystems. Leica BLK2GO Handheld Imaging Laser Scanner. 2024. Available online: https://leica-geosystems.com/en-gb/products/laser-scanners/autonomous-reality-capture/leica-blk2go-handheld-imaging-laser-scanner (accessed on 5 June 2024).
  58. GeoSLAM (A FARO solution). GeoSLAM ZEB HORIZON. 2024. Available online: https://www.faro.com/en/Products/Hardware/GeoSLAM-ZEB-Horizon-RT (accessed on 5 June 2024).
  59. Stonex. X120GO SLAM Laser Scanner. 2024. Available online: https://www.stonex.it/project/x120go-slam-laser-scanner/ (accessed on 5 June 2024).
  60. Leica Geosystems. Leica RTC360 3D Laser Scanner. 2024. Available online: https://leica-geosystems.com/en-gb/products/laser-scanners/scanners/leica-rtc360 (accessed on 5 June 2024).
  61. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, USA, 12–15 November 1991; Volume 1611, pp. 586–606. [Google Scholar]
  62. CloudCompare. 2024. Available online: https://www.danielgm.net/cc/ (accessed on 5 June 2024).
  63. MeshLab. 2024. Available online: https://www.meshlab.net (accessed on 5 June 2024).
64. Kazhdan, M.; Hoppe, H. Screened Poisson surface reconstruction. ACM Trans. Graph. 2013, 32, 1–13. [Google Scholar] [CrossRef]
  65. Ismail, F.; Shukor, S.A.; Rahim, N.; Wong, R. Surface reconstruction from unstructured point cloud data for building Digital Twin. Int. J. Adv. Comput. Sci. Appl. 2023, 14. [Google Scholar] [CrossRef]
  66. Campi, M.; Falcone, M.; Sabbatini, S. Towards continuous monitoring of architecture. Terrestrial laser scanning and mobile mapping system for the diagnostic phases of the cultural heritage. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 121–127. [Google Scholar] [CrossRef]
  67. Sammartano, G.; Spanò, A. Point clouds by SLAM-based mobile mapping systems: Accuracy and geometric content validation in multisensor survey and stand-alone acquisition. Appl. Geomat. 2018, 10, 317–339. [Google Scholar] [CrossRef]
Figure 1. The case study: (a) Orthophoto of Piazza Grande in Arezzo (Italy) (image source: [55]) and (b) a view of the facade of the Palazzo della Fraternita dei Laici (the red box in (a) shows its location).
Figure 2. The three commercial SLAM devices used for data acquisition. (a) Leica BLK2GO [57]. (b) GeoSLAM ZEB HORIZON [58]. (c) Stonex X120GO [59].
Figure 3. The point clouds of the Palazzo della Fraternita dei Laici acquired with the SLAM devices (a–c) and the ground truth obtained from the TLS (d).
Figure 4. Cloud-to-cloud distances of Dataset A computed between the SLAM-based point clouds and the TLS one. (a) Dataset A-1—Leica BLK2GO. (b) Dataset A-2—GeoSLAM ZEB HORIZON. (c) Dataset A-3—Stonex X120GO.
Figure 5. Cloud-to-cloud distances of Dataset B computed between the SLAM-based point clouds and the TLS one. (a) Dataset B-1—Leica BLK2GO. (b) Dataset B-2—GeoSLAM ZEB HORIZON. (c) Dataset B-3—Stonex X120GO.
Figure 6. Surface density analysis for Dataset A (the same color bar is applied to all images). (a) Dataset A-1—Leica BLK2GO. (b) Dataset A-2—GeoSLAM ZEB HORIZON. (c) Dataset A-3—Stonex X120GO. (d) Ground truth—TLS RTC360.
Figure 7. Surface density analysis for Dataset B (the same color bar is applied to all images). (a) Dataset B-1—Leica BLK2GO. (b) Dataset B-2—GeoSLAM ZEB HORIZON. (c) Dataset B-3—Stonex X120GO. (d) Ground truth—TLS RTC360.
Figure 8. Roughness of the point clouds of Dataset A. (a) Dataset A-1—Leica BLK2GO. (b) Dataset A-2—GeoSLAM ZEB HORIZON. (c) Dataset A-3—Stonex X120GO. (d) Ground truth—TLS RTC360.
Figure 9. Roughness of the point clouds of Dataset B. (a) Dataset B-1—Leica BLK2GO. (b) Dataset B-2—GeoSLAM ZEB HORIZON. (c) Dataset B-3—Stonex X120GO. (d) Ground truth—TLS RTC360.
Figure 10. Polygon mesh obtained from the SLAM-based (a–c) and the TLS (d) point clouds.
Figure 11. Mesh-to-mesh (M2M) distances between the surfaces modeled from the SLAM point clouds and the ground-truth mesh derived from the TLS data. (a) Dataset B-1—Leica BLK2GO. (b) Dataset B-2—GeoSLAM ZEB HORIZON. (c) Dataset B-3—Stonex X120GO.
Figure 12. Detail of the Cloud-to-Cloud (C2C) distances calculated on the helical semicolumn. (a) Dataset B-1—Leica BLK2GO. (b) Dataset B-2—GeoSLAM ZEB HORIZON. (c) Dataset B-3—Stonex X120GO. (d) Color bar.
Figure 13. Detail of a portion of the stringcourse frame and a helical semicolumn, colored according to the local roughness. (a) Dataset B-1—Leica BLK2GO. (b) Dataset B-2—GeoSLAM ZEB HORIZON. (c) Dataset B-3—Stonex X120GO. (d) Ground truth—TLS RTC360.
Figure 14. Detail of the mesh reconstructed on the SLAM and TLS point clouds. (a) Dataset B-1—Leica BLK2GO. (b) Dataset B-2—GeoSLAM ZEB HORIZON. (c) Dataset B-3—Stonex X120GO. (d) Ground truth—TLS RTC360.
Figure 15. Mesh-to-Mesh (M2M) distances calculated on the helical semicolumn. (a) Dataset B-1—Leica BLK2GO. (b) Dataset B-2—GeoSLAM ZEB HORIZON. (c) Dataset B-3—Stonex X120GO. (d) Color bar.
Table 1. Technical specifications of the SLAM systems used in this study.

| | Leica BLK2GO | GeoSLAM ZEB HORIZON | Stonex X120GO |
| Min range [m] | 0.5 | – | 0.5 |
| Max range [m] | 25 | 100 | 120 |
| Pts/s | 420,000 | 300,000 | 320,000 |
| N. of channels | – | 16 | 16 |
| Weight * [kg] | 0.65 | 1.4 | 1.6 |
| FOV [°] | 360 × 270 | 360 × 270 | 360 × 270 |
| Wavelength [nm] | 830 | 903 | – |
| Relative accuracy ** [mm] | 10 | 6 | 6 |
| N. of cameras | 3 | external | 3 |
| Camera resolution [Mpx] | 4.8 | – | 5 |
| Camera FOV [°] | 300 × 135 | – | 200 × 100 |

* Excluding batteries; ** local accuracy, related to the accuracy of the LiDAR sensor measurements and dependent on the environment.
Table 2. Number of points of the analyzed datasets (after outlier removal and clipping in the area of interest).

| | Leica BLK2GO | GeoSLAM ZEB HORIZON | Stonex X120GO | Leica RTC360 |
| Dataset A | 150 M | 82 M | 81 M | 225 M |
| Dataset B | 78 M | 21 M | 14 M | 99 M |
Table 3. Values of deviations (C2C) between the SLAM point clouds and the TLS ground truth for Dataset A.

| | Dataset A-1 (Leica BLK2GO) | Dataset A-2 (GeoSLAM ZEB HORIZON) | Dataset A-3 (Stonex X120GO) |
| mean [m] | 0.030 | 0.020 | 0.013 |
| SD [m] | 0.033 | 0.031 | 0.031 |
Table 4. Values of deviations (C2C) between the SLAM point clouds and the TLS ground truth for Dataset B.

| | Dataset B-1 (Leica BLK2GO) | Dataset B-2 (GeoSLAM ZEB HORIZON) | Dataset B-3 (Stonex X120GO) |
| mean [m] | 0.018 | 0.028 | 0.006 |
| SD [m] | 0.016 | 0.023 | 0.011 |
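The C2C deviations gathered in Tables 3 and 4 summarise, through their mean and standard deviation, the nearest-neighbour distances from each SLAM point to the TLS reference cloud, the kind of comparison offered by CloudCompare [62]. Purely as an illustrative sketch, and not the benchmark workflow itself, the same statistics could be reproduced with the open-source Open3D library; the file names below are placeholders.

```python
import numpy as np
import open3d as o3d  # illustrative choice; the cited tool for such analyses is CloudCompare [62]

# Placeholder file names: substitute the actual SLAM cloud and the TLS ground truth
slam = o3d.io.read_point_cloud("slam_dataset_B1.ply")
tls = o3d.io.read_point_cloud("tls_ground_truth.ply")

# C2C: distance from every SLAM point to its nearest neighbour in the TLS cloud
d = np.asarray(slam.compute_point_cloud_distance(tls))

print(f"C2C mean = {d.mean():.3f} m, SD = {d.std():.3f} m")
```

A plain nearest-neighbour distance slightly overestimates the true point-to-surface deviation where the reference cloud is sparse; CloudCompare mitigates this with optional local surface models (e.g., least-squares plane or quadric fits of the nearest neighbours).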
Table 5. Surface density and roughness calculated on the SLAM-based and TLS point clouds.

| Dataset | Cloud | Surface density mean [pts/m²] | Surface density SD [pts/m²] | Roughness mean [m] | Roughness SD [m] |
| A | Leica BLK2GO | 30,776 | 32,014 | 0.005 | 0.005 |
| A | GeoSLAM ZEB H. | 8295 | 5407 | 0.008 | 0.007 |
| A | Stonex X120GO | 12,331 | 11,522 | 0.002 | 0.002 |
| A | TLS RTC360 | 19,326 | 7909 | 0.003 | 0.004 |
| B | Leica BLK2GO | 88,981 | 56,416 | 0.005 | 0.004 |
| B | GeoSLAM ZEB H. | 16,245 | 8343 | 0.009 | 0.007 |
| B | Stonex X120GO | 19,292 | 12,825 | 0.002 | 0.003 |
| B | TLS RTC360 | 77,832 | 30,281 | 0.003 | 0.004 |
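Both indicators in Table 5 are neighbourhood-based: surface density counts the points falling within a spherical kernel and normalises by the corresponding disk area, while roughness is the distance of each point from the plane that best fits its neighbours. The sketch below follows these definitions but is not the authors' exact procedure; the 5 cm kernel radius and the file name are assumptions, and in practice the loop would be vectorised or run on a subsample.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("slam_dataset_B1.ply")  # placeholder file name
pts = np.asarray(pcd.points)
tree = o3d.geometry.KDTreeFlann(pcd)
r = 0.05  # kernel radius [m]; assumed value

density = np.zeros(len(pts))
roughness = np.full(len(pts), np.nan)
for i, p in enumerate(pts):
    k, idx, _ = tree.search_radius_vector_3d(p, r)
    density[i] = k / (np.pi * r ** 2)  # neighbours per unit disk area [pts/m^2]
    if k >= 4:  # a plane fit needs a few neighbours
        nb = pts[np.asarray(idx)]
        c = nb.mean(axis=0)
        # plane normal = direction of smallest variance (last right singular vector)
        _, _, vt = np.linalg.svd(nb - c, full_matrices=False)
        roughness[i] = abs(np.dot(p - c, vt[-1]))

print(f"density: mean {density.mean():.0f}, SD {density.std():.0f} pts/m^2")
print(f"roughness: mean {np.nanmean(roughness):.4f}, SD {np.nanstd(roughness):.4f} m")
```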
Table 6. Main characteristics of the computed meshes and M2M results (the differences between SLAM-based models and the TLS-based model are reported).

| Mesh | N. of faces | N. of vertices | M2M mean [m] | M2M RMS [m] |
| Leica BLK2GO | 50 M | 25 M | 0.003 | 0.033 |
| GeoSLAM ZEB HORIZON | 10 M | 5 M | 0.014 | 0.057 |
| Stonex X120GO | 32 M | 16 M | 0.002 | 0.030 |
| TLS Leica RTC360 | 44 M | 22 M | – | – |
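Table 6 relates the size of the reconstructed meshes to their model-to-model (M2M) deviation from the TLS-based reference surface. As a hedged sketch rather than the authors' pipeline, the snippet below shows how a Screened Poisson reconstruction [64] and an M2M-style comparison (approximated here by densely sampling both meshes and taking nearest-neighbour distances) could be scripted with Open3D; the file names, the octree depth and the sample sizes are assumptions.

```python
import numpy as np
import open3d as o3d

# Placeholder inputs: a SLAM point cloud and a reference mesh built from the TLS data
pcd = o3d.io.read_point_cloud("slam_dataset_B1.ply")
ref_mesh = o3d.io.read_triangle_mesh("tls_reference_mesh.ply")

# Poisson reconstruction requires consistently oriented normals
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Screened Poisson surface reconstruction [64]; depth=11 is an assumed setting
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=11)

# Trim low-support triangles that Poisson extrapolates far from the data
dens = np.asarray(densities)
mesh.remove_vertices_by_mask(dens < np.quantile(dens, 0.01))

# M2M approximation: sample both surfaces and compute nearest-neighbour distances
src_pts = mesh.sample_points_uniformly(number_of_points=2_000_000)
ref_pts = ref_mesh.sample_points_uniformly(number_of_points=2_000_000)
d = np.asarray(src_pts.compute_point_cloud_distance(ref_pts))
print(f"M2M mean = {d.mean():.3f} m, RMS = {np.sqrt(np.mean(d**2)):.3f} m")
```

Denser sampling drives this approximation towards the true surface-to-surface distance; dedicated tools such as CloudCompare [62] can instead measure the distance to the reference triangles directly.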
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
