Article

Evaluation of Different LiDAR Technologies for the Documentation of Forgotten Cultural Heritage under Forest Environments

by
Miguel Ángel Maté-González
1,2,3,*,
Vincenzo Di Pietra
1 and
Marco Piras
1
1
Department of Environment, Land and Infrastructure Engineering, Politecnico di Torino, 10129 Torino, Italy
2
Department of Topographic and Cartography Engineering, Escuela Técnica Superior de Ingenieros en Topografía, Geodesia y Cartografía, Universidad Politécnica de Madrid, Mercator 2, 28031 Madrid, Spain
3
Department of Cartographic and Land Engineering, Higher Polytechnic School of Ávila, Universidad de Salamanca, Hornos Caleros 50, 05003 Ávila, Spain
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(16), 6314; https://doi.org/10.3390/s22166314
Submission received: 29 July 2022 / Revised: 18 August 2022 / Accepted: 19 August 2022 / Published: 22 August 2022
(This article belongs to the Special Issue UAV Lidar System: Performance Assessment and Application)

Abstract

In the present work, three LiDAR technologies (the Faro Focus 3D X130 terrestrial laser scanner, TLS; the Kaarta Stencil 2-16 mobile mapping system, MMS; and the DJI Zenmuse L1 airborne LiDAR sensor, ALS) have been tested and compared in order to assess their performance in surveying built heritage in vegetated areas. Each of these devices has its own limits of usability, and different methods to capture and generate 3D point clouds need to be applied. In addition, a methodology was needed to position all the point clouds in the same reference system. While the TLS scans and the MMS data have been geo-referenced using a set of vertical markers and spheres measured by a GNSS receiver in RTK mode, the ALS model has been geo-referenced by the GNSS receiver integrated in the unmanned aerial system (UAS), which presents different characteristics and accuracies. The resulting point clouds have been analyzed and compared, focusing on the number of points acquired by each system, the point density, and the nearest neighbor distance.

1. Introduction

Architectural heritage is a resource of multiple dimensions (cultural, social, territorial, and economic) that greatly enriches and protects the societies in which the sites are located [1]. These heritages, symbols of regions, cities, and towns, are linked to their social sentiment and cultural identity and are often the basis for the development of activities on which their economy is based, such as tourism [2]. Monumental heritage is a reflection of the history of a territory, capable of tracing the passage of different civilizations. Its value is unquestionable, but its conservation and maintenance are not always feasible. They are very fragile elements that have suffered the effects and consequences of historical and natural events that have modified, altered, and, in the worst of cases, even destroyed them [3]. The conservation of this built heritage is one of the aspects that any advanced society must inevitably address. Today, apart from the deterioration due to the passage of time and the impact of meteorological agents and the effects of climate change, these assets are exposed to other constant threats, such as: (a) their abandonment (due to depopulation or lack of funds for their maintenance) or loss of functionality; (b) destructive actions towards these assets (vandalism and armed conflicts); (c) accidental destructive events (such as the fire that happened at Notre Dame Cathedral in Paris); (d) other types of hazards that are difficult to predict, such as natural catastrophes, earthquakes, among others; (e) threats linked to human activities, such as poor planning, management, or maintenance of these assets (uncontrolled tourism, corrective repairs, etc.) [4]. All these factors make the protection, management, research, dissemination, improvement, conservation, and safeguarding of architectural heritage at a global level a highly complex task [4].
Considering these circumstances, different international organizations and institutions (UNESCO or ICOMOS, among other international bodies), together with national and regional administrations, have promoted international charters [5,6], norms, and laws that attempt to deal with this problem. In almost all of them, apart from speaking of the values and importance of cultural heritage for today’s society and the need to preserve this historical legacy of past generations, they promote research and knowledge transfer from other fields of science, engineering, and other disciplines for the adaptation or creation of new methodologies and techniques of analysis, which favor and improve the current methods of intervention, conservation, and management of cultural heritage.
Among this range of techniques and methodologies that improve studies and research on cultural heritage, geomatics plays a key role in capturing, processing, analyzing, interpreting, modeling, and disseminating digital geospatial information [3,7,8,9,10,11]. These sciences and technologies are particularly useful in the cultural heritage sector as they provide information on the current state of heritage assets and make it possible to highlight and quantify the changes that these assets may undergo in space and time [7,8,9,10,11,12,13,14].
Currently, the documentation, conservation, restoration, and dissemination actions carried out in the cultural heritage field avail the use of 3D recording strategies that allow the creation of digital replicas of assets in an accurate and fast way. These 3D recording strategies provide important information with metric properties of cultural heritage geometrically, structurally, dimensionally, and figuratively [3,7,8,9,10,11,12,13,14]. Thus, we have managed to improve graphic representation resources with more detailed planimetry and with the option of generating a 3D model from which we can obtain longitudinal and transversal profiles, virtual tours, calculation of surfaces, volumes, orthophotographs, monitoring of the object under study to check its deterioration, etc. [3,7,8,9,10,11,12,13,14]. Products can be used to support in-depth analysis of the architectural, engineering, and artistic features of the objects. In this context, 3D recording strategies have become indispensable tools and methodologies when carrying out studies on assets, conservation or restoration work, or their dissemination [12,13,14].
Nowadays, it is possible to find numerous investigations and works in which these techniques are applied in architectures of cultural and historical relevance. These heritage assets, in most cases, are located in cities or towns and their surroundings [7,8,9,10,11,12,13,14], where around 55% of the world’s population is concentrated [15]. However, it is rare to find projects dealing with the management and safeguarding of cultural heritage located in remote rural areas, such as mountain areas or formerly inhabited places, which are currently abandoned. As a general rule, the population in the 1960s left these territories (in which there were hardly any services) and moved to the cities, looking for new opportunities for their families, taking advantage of the restructuring of large industries [16]. This led to the disappearance of many historic mountain villages, which based their economy on artisanal activities, such as the exploitation of wood or the extraction of raw materials from the subsoil (mining) [17]. At present, many of these villages and the remains of the industrial craft activities they carried out are in a deplorable state of preservation (some of them have disappeared) and are invaded by vegetation and have been absorbed by wooded areas, making their 3D documentation very complicated.
In recent years, the geomatics sector has undergone a significant transformation, which has made it possible to improve 3D digitizing methods, opening up the possibility of researching and progressing in the documentation of these complex scenarios. Firstly, the improvement of static scanning systems and the appearance of new dynamic scanners, known as mobile mapping systems (MMSs) [10,12,13,14], have made it possible to speed up the reconstruction of complex scenarios with a simple walk. This is thanks to the combination of an inertial measurement unit (IMU), a LiDAR scanning system, and the application of SLAM (simultaneous localization and mapping) techniques and visual odometry for data processing. MMSs are mostly based on ROS (Robot Operating System), which is widely used in robotic navigation and autonomous driving. These systems make it possible to perform dead-reckoning position estimation without the need for a GNSS receiver and implement novel algorithms for registering point clouds and extracting maps [10,12,13,14]. In terms of data quality, these devices usually offer centimeter accuracy, in contrast with last-generation terrestrial laser scanners (TLSs), which reach millimeter or sub-centimeter accuracy [10,12,13,14]. Moreover, point cloud resolution depends on the acquisition rate, the distance to the object at any given time, and the number of passages. Although these devices are more suitable for indoor use, where their productivity and efficiency are higher, they have been successfully used for reconstructing outdoor scenarios, such as archaeological sites, churches, and civil engineering elements, among others [10,12,13,14].
Secondly, the rise of commercial off-the-shelf (COTS) drones, together with a substantial improvement in their performance and the miniaturization of electronic components, has made it possible to install an airborne LiDAR sensor (ALS) on a flying platform, ensuring greater flight autonomy, precise positioning in flight (thanks to GNSS, global navigation satellite systems, or INS, inertial navigation systems), and a lower cost compared to classic alternatives (such as satellite images or photogrammetric flights).
In relation to the above, it may seem strange not to mention photogrammetric methods, which have revolutionized 3D documentation in cultural heritage [7,18,19,20]. It is true that new developments in photogrammetry (SfM, structure from motion, computer vision, etc. [21]) have made it possible for almost anyone with little knowledge in the field, equipped with an image-capturing device (cell phone, tablet, camera, or commercial drone with camera), following a simple data collection protocol and with the help of commercial or free software [21], to obtain point clouds or 3D models of objects or simple scenarios [3,22,23]. However, it is well established that, to date, these techniques, which use images as input, cannot provide information beyond what appears in them [21]. Since we are dealing with cultural properties covered by vegetation and absorbed by wooded areas, if we used these methods, we would be documenting all the biological parts that cover and alter the heritage element and not the element itself.
This research aims to compare the results obtained using several active systems of massive data capture: a TLS, an MMS, and a drone-borne ALS, which allow the documentation of cultural heritage even when it is covered by biological invasions [24]. Consequently, it is necessary to analyze in depth the possibilities offered by these systems for the documentation of this type of scenario, as well as their limitations, and to evaluate their advantages and disadvantages as a possible step toward strengthening the management of endangered cultural heritage. Section 2 introduces the case study and Section 3 defines the materials and methods used to digitize it. Section 4 describes the experimental results obtained and includes the discussion, and, finally, some conclusions drawn from the present work are presented in Section 5.

2. Case Study

2.1. Monte Pietraborga, Trana, Provincia di Torino, Piemonte, Italy

Monte Pietraborga is a small, low-altitude mountain system that is part of the Western Italian Alps (Central Cozie Alps). It is composed of several ridges and peaks (the highest being about 926 m). It is located at the head of the Sangone Valley, within the municipalities of Trana (mostly), Sangano, and Piossasco, in the Metropolitan City of Torino in the Piedmont region, a few kilometers southwest of Torino (Figure 1). From the plain where the city of Turin is located, the skyline of Monte Pietraborga and its neighbor Monte San Giorgio (837 m at its highest peak) can be seen. Its advanced position with respect to the plain makes it an important visual reference point for the surrounding area.
The rocks that compose the mountain are part of the Massiccio Ultrabasico di Lanzo, a geological formation of ancient origin. The predominant rock, peridotite, is very rich in magnesium, and this also influences the nature of the soils formed by its degradation. Thanks to this substrate, the vegetation on the mountain is quite lush, with chestnut, oak, and hazel standing out, among other species.
In Monte Pietraborga, several historical villages are almost or completely uninhabited, among which the village of Pratovigero, located on the northwest slope, stands out. Access to this village is from Trana, by a poorly preserved dirt road. Other villages are Prese di Piossasco and Prese di Sangano, made up of small groups of houses located at different points on the southern slope of Monte Pietraborga. Access to them is very difficult due to the poor preservation of the roads, some of which are nowadays blocked by obstacles, such as fallen trees. Historical documents confirm that some of these centers have been inhabited since the 10th century.
Apart from the aforementioned population centers, there are other heritage elements of historical importance on Monte Pietraborga, such as: (i) the Cappella Madonna della Neve, built in 1700 (in a poor state of preservation); (ii) several fountains and springs; (iii) an area with remains of Celtic vestiges (dolmens and menhirs arranged in a circle, known as Sito dei Celti, and dating back to the period between 4000 and 2800 BC).
Most of the people who lived in this mountainous environment were engaged in primary sector activities: (i) they had domestic livestock and cultivated the fields to feed the animals (rye, fodder, etc.) and for their own consumption (wheat, potatoes, turnips); (ii) they collected the fruits provided by the mountain flora, such as hazelnuts, walnuts, and chestnuts, and other cultivated fruits, such as pears, apples, and cherries; or (iii) they were dedicated to the exploitation of the resources and raw materials of the area (wood, coal, serpentinite).
Since the beginning of the twentieth century, due to the absence of fundamental services, the inhabitants of these territories abandoned them and moved to the city of Turin in search of new opportunities for their families, taking advantage of the restructuring of the large industries [17,25,26]. This exodus worsened after the end of the Second World War.
Today, most of the heritage assets of this territory are abandoned and in a poor state of conservation, mostly invaded by vegetation or absorbed by wooded areas.

Study Area Selection

In order to select a suitable site for testing the different geomatic sensors, a study was carried out to locate all the buildings or constructive elements covered by vegetation on Monte Pietraborga. The data used for this study were: (i) the vector layer of the buildings registered in the Cadastre of the Province of Turin (Figure 2a, elements in red); and (ii) the high-resolution raster layers of forest cover density provided by the COPERNICUS service of the European Union (https://land.copernicus.eu/, accessed on 22 July 2022) (Figure 2b). The first step was to compute the centroids of the vector layer of the buildings registered in the Cadastre. Subsequently, the extracted points were assigned the value of the tree cover density raster layer. Finally, a selection of points was made, keeping only those with a tree cover density greater than 70% (points in yellow in Figure 2).
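The selection procedure described above (sample a density raster at building centroids, then threshold) can be sketched as follows. This is a minimal, hypothetical illustration with synthetic data: the function name, raster layout (row 0 at the top), and toy values are assumptions, not the actual GIS workflow used by the authors.

```python
import numpy as np

def select_candidate_sites(centroids, density_raster, origin, cell_size, threshold=70.0):
    """Assign each building centroid the tree-cover-density value of the
    raster cell it falls in, and keep only centroids above the threshold.

    centroids      : (N, 2) array of (x, y) map coordinates
    density_raster : 2D array of tree cover density in percent (row 0 = top)
    origin         : (x_min, y_max) of the raster's top-left corner
    cell_size      : raster resolution in map units
    """
    cols = ((centroids[:, 0] - origin[0]) // cell_size).astype(int)
    rows = ((origin[1] - centroids[:, 1]) // cell_size).astype(int)
    density = density_raster[rows, cols]
    return centroids[density > threshold], density

# Toy example: a 4x4 density raster (10 m cells) and three centroids
raster = np.array([[80, 20, 90, 10],
                   [75, 85, 30, 40],
                   [10, 95, 60, 50],
                   [20, 30, 40, 90]], dtype=float)
pts = np.array([[5.0, 35.0],    # cell (row 0, col 0): 80% cover
                [15.0, 25.0],   # cell (row 1, col 1): 85% cover
                [25.0, 15.0]])  # cell (row 2, col 2): 60% cover
selected, dens = select_candidate_sites(pts, raster, origin=(0.0, 40.0), cell_size=10.0)
```

In this toy case only the two centroids exceeding 70% tree cover survive the filter, mirroring the yellow points in Figure 2.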
Among those points, the site that best fits the scope of our research was selected after a field survey. Figure 3 shows the selected heritage element. It is a historical building typical of the traditional architecture of the Monte Pietraborga territory, located near the village of Pratovigero. At present, the building is abandoned and in ruins. As can be seen, the forest has engulfed it entirely, and part of it is covered by the surrounding vegetation.

3. Material and Methods

In order to evaluate the use of different laser scanner technologies for the specific purpose of documenting cultural heritage engulfed by an extremely vegetated environment, three different sensors, among the most widely used in the market today, were employed. The abandoned building analyzed in this study was the subject of both a terrestrial and an aerial multi-sensor topographic measurement campaign. Specifically, the survey was performed first with a terrestrial laser scanner (TLS), the Faro Focus 3D X130, a static point cloud acquisition tool based on the co-registration of different portions of the building acquired from different station points. Next, an MMS was used, the Kaarta Stencil 2, a handheld multi-sensor system consisting of a 16-beam laser scanner, a tracker-type camera, an inertial measurement unit (IMU), and a computation processor integrating advanced simultaneous localization and mapping (SLAM) algorithms. Finally, with the aim of validating its performance in an extreme environment, DJI's new Zenmuse L1 ALS was used for an aerial-platform laser survey. This LiDAR sensor combines data from an RGB sensor and an IMU in a stabilized 3-axis gimbal. While the TLS scans and the MMS data have been geo-referenced using a set of vertical markers and spheres measured by a GNSS receiver in RTK mode, the ALS model has been geo-referenced by the GNSS receiver integrated in the unmanned aerial system (UAS), which presents different characteristics and accuracies. Therefore, this inconsistency in the final data must be considered in the following analysis.

3.1. Methodological Approach for the Validation Analysis

The digital data of the architectural asset were evaluated according to typical metrics and different strategies used in the literature for point cloud analysis, using as ground truth the TLS cloud, which is, by sensor characteristics, the most accurate and dense (about an order of magnitude denser than the other systems). To this end, the third-party software CloudCompare [27] was used to exploit different algorithms and tools for point cloud analysis and cloud-to-cloud comparison.
Specifically, the number of points of each dataset has been reported for the original on-field survey and for a common portion of the environment (mainly the building and a small portion of the surroundings). The three LiDAR products have also been compared in terms of point density, defined as the number of points per unit volume in a given point cloud. The density is mainly determined by the acquisition techniques and post-processing filtering, the scanner resolution, and the scanning geometry [28]. In this research, the density was computed considering a volume defined by a 3D sphere with a radius of 0.5 m, as expressed in Equation (1).
ρ = N / ((4/3) π r³)    (1)
where ρ is the point density and N equals the number of neighbors detected in a 3D sphere of radius r = 0.5 m around the given point.
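Equation (1) can be reproduced with a few lines of code. The sketch below is a deliberately simple brute-force implementation (CloudCompare uses spatial indexing internally); the function name and the synthetic cube of points are assumptions for illustration only.

```python
import numpy as np

def volume_density(cloud, core_points, radius=0.5):
    """Point density as in Equation (1): number of neighbours N inside a
    3D sphere of the given radius, divided by the sphere volume (4/3)πr³."""
    volume = (4.0 / 3.0) * np.pi * radius ** 3
    densities = np.empty(len(core_points))
    for i, p in enumerate(core_points):
        # brute-force neighbour count; a k-d tree would be used on real clouds
        n = np.count_nonzero(np.linalg.norm(cloud - p, axis=1) <= radius)
        densities[i] = n / volume
    return densities

# Example: ~10,000 points uniformly distributed in a 1 m cube,
# so the expected density is on the order of 10,000 points/m³
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(10_000, 3))
d = volume_density(cloud, cloud[:5], radius=0.1)
```

Points near the cube faces naturally report lower densities, since part of their search sphere is empty, which is the same boundary effect that affects density maps of real surveys.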
Finally, a cloud-to-cloud distance analysis was performed on selected areas of the building, using the TLS data as ground reference and comparing both the MMS and the ALS results. The cloud-to-cloud distance was computed using the multiscale model-to-model cloud comparison (M3C2) algorithm [29], already implemented in CloudCompare. This approach estimates the discrepancies between two point clouds defined in the same reference system. It is important to highlight that, when two clouds have been geo-referenced with coherent measurements and the same technique, the cloud-to-cloud comparison provides direct information about the performance of the acquisition system. On the other hand, when the geo-referencing procedures of the two clouds differ and different acquisition sensors provide the relative measurements, the cloud-to-cloud comparison can be affected by the geo-referencing accuracy. Therefore, considering the different geo-referencing procedures used in this work (GCP-based vs. direct RTK), the L1 data must be evaluated with respect not only to the measurement accuracy but also to the geo-referencing accuracy. Accordingly, in the first case (a), the M3C2 was applied directly to the L1 data without any pre-processing, while, in the second case (b), the M3C2 algorithm was preceded by a relative fine registration of the L1 data to the TLS data using an iterative closest point (ICP) procedure [30]. All the analyses to validate the three systems are summarized in Figure 4. The tests focused on the ability of these instruments to operate in extremely vegetated environments; in this regard, the methodological approaches used will be made explicit in the Results section.
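The fine registration step used before the second M3C2 run can be illustrated with a minimal point-to-point ICP. This is a textbook sketch, not CloudCompare's implementation: the function names, the brute-force nearest-neighbour search, and the synthetic misalignment are all assumptions chosen for clarity.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (the SVD/Kabsch solution used inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=20):
    """Minimal point-to-point ICP: match each source point to its nearest
    target point, solve the rigid transform, apply it, and iterate."""
    src = source.copy()
    for _ in range(iterations):
        # nearest-neighbour correspondences (brute force for clarity)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src

# Example: recover a small rotation + translation applied to a random cloud
rng = np.random.default_rng(1)
target = rng.uniform(size=(200, 3))
angle = np.deg2rad(3.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.03, -0.02, 0.01])
aligned = icp(source, target)
```

As in the L1-to-TLS case, ICP only refines an already approximate alignment; it would not recover the gross geo-referencing offset on its own if the initial guess were far off.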

Multiscale Model-to-Model Cloud Comparison (M3C2)

The M3C2 algorithm enables fast computation of the local mean distance between two large point clouds with irregular surfaces. For this purpose, it calculates a local mean cloud-to-cloud distance for each point in the reference cloud, called a "core point", by using a search cylinder projected along a locally oriented normal vector [31,32] (Figure 5).
The parameters used to calibrate the M3C2 algorithm were selected following the literature [29,31,32,33]. Starting from the typical values selected for cultural heritage clouds, the parameters were tuned in an iterative approach. Table 1 reports the parameters from the first and last steps of the calibration.
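The core idea of M3C2 (a local normal estimated at a given scale, then a signed comparison of the two clouds inside a cylinder along that normal) can be sketched for a single core point as follows. This is a heavily simplified illustration, not the full algorithm of Lague et al.: the parameter values and the synthetic planar patches are assumptions, and the real method also propagates roughness into a confidence interval.

```python
import numpy as np

def m3c2_distance(core, ref_cloud, cmp_cloud, normal_scale=0.5,
                  cyl_radius=0.25, cyl_length=2.0):
    """Simplified M3C2 distance at one core point: estimate the local normal
    on the reference cloud by PCA, then compare the mean positions of the two
    clouds inside a cylinder oriented along that normal."""
    # 1) local normal from the reference neighbourhood (smallest PCA axis)
    nbhd = ref_cloud[np.linalg.norm(ref_cloud - core, axis=1) <= normal_scale]
    _, _, Vt = np.linalg.svd(nbhd - nbhd.mean(axis=0))
    normal = Vt[-1]

    # 2) mean offset along the normal inside the search cylinder, per cloud
    def mean_offset(cloud):
        rel = cloud - core
        along = rel @ normal                        # signed distance along axis
        radial = np.linalg.norm(rel - np.outer(along, normal), axis=1)
        sel = (np.abs(along) <= cyl_length / 2) & (radial <= cyl_radius)
        return along[sel].mean() if sel.any() else np.nan

    return mean_offset(cmp_cloud) - mean_offset(ref_cloud)

# Example: two noisy planar patches offset by 5 cm along z
rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, size=(4000, 2))
ref = np.column_stack([xy, rng.normal(0.0, 0.002, 4000)])
cmp_cloud = ref + np.array([0.0, 0.0, 0.05])
dist = m3c2_distance(np.zeros(3), ref, cmp_cloud)
```

The sign of the result depends on the (arbitrary) orientation of the PCA normal, which is why normal orientation is one of the parameters that must be fixed when calibrating the real algorithm.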

3.2. Terrestrial Laser Scanner

To obtain a detailed model of the building and its immediate surroundings, a terrestrial laser scanner was used. More specifically, the lightweight Faro Focus 3D X130 terrestrial laser scanner was used (Figure 6). This device measures distances using the phase-shift principle within a range of 0.6 to 120 m, with a measurement rate of 976,000 points per second, an angular resolution of 0.009°, and a beam divergence of 0.19 mrad. Concerning its accuracy, this laser allows scenes to be captured with ±2 mm precision under normal lighting conditions.

3.3. Mobile Mapping Systems

The MMS used in the present work was the Kaarta Stencil 2 mobile mapping system [34,35,36], which relies on LiDAR and IMU data for localization. The Kaarta Stencil 2 is a stand-alone, lightweight SLAM instrument with an integrated system for mapping and real-time position estimation. In addition, it is a compact handheld device, which allows quick and easy 3D mapping on the move. The system uses a Velodyne VLP-16 connected to a low-cost MEMS IMU and a processing computer for real-time mapping (Figure 7). The VLP-16 has a 360° horizontal field of view, a 30° vertical aperture, and 16 scan lines. A small tripod was used to support the system. The device can be connected to other sensors, such as a GNSS receiver, or to an external monitor via a wired USB connection (which allows the results to be viewed in real time). The data acquisitions were captured using the Kaarta Stencil 2 default configuration parameters, set in order to use the instrument in structured outdoor environments (Table 2).
The device is carried by an operator, whose movement provides the third dimension required to generate the 3D point cloud. The 3D point cloud is generated by combining the information coming from the scanning head and the IMU sensor, using to this end the LOAM (LiDAR odometry and mapping) approach [34,35,36]. LOAM runs two algorithms in parallel, allowing a practically real-time solution: an odometry algorithm estimates the velocity of the scanner, thereby correcting the errors in the point cloud, while a mapping algorithm matches and registers the point cloud to create a map. It must be taken into consideration that this is an incremental procedure in which each segment is aligned with respect to the previous one. The error accumulation derived from the incremental procedure is minimized by a global registration on the basis that the starting and ending points are the same (closed-loop solution).
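The closed-loop principle mentioned above can be illustrated with a toy drift-distribution scheme: if the path is known to start and end at the same physical point, the accumulated closure error can be spread back along the trajectory. This is a deliberately naive linear correction for illustration only; real SLAM pipelines such as the one in the Stencil 2 solve loop closure as a graph optimization, not as shown here.

```python
import numpy as np

def distribute_loop_closure(trajectory):
    """Spread the loop-closure error along an incremental trajectory,
    assuming the path starts and ends at the same physical point.

    trajectory : (N, 3) estimated positions; trajectory[0] is the true start.
    """
    closure_error = trajectory[-1] - trajectory[0]     # accumulated drift
    # cumulative path length used as the interpolation weight (0 -> 1)
    steps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(steps)])
    weights = s / s[-1]
    return trajectory - weights[:, None] * closure_error

# Example: a circular walking loop whose estimate drifts ~5 cm by closure time
t = np.linspace(0.0, 2.0 * np.pi, 100)
loop = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
drift = np.linspace(0.0, 1.0, 100)[:, None] * np.array([0.04, -0.03, 0.02])
corrected = distribute_loop_closure(loop + drift)
```

After correction, the trajectory closes exactly and the interior points return close to the true circle, which is the intuition behind planning a closed loop during acquisition.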
This sensor accurately maps large indoor and outdoor spaces quickly and easily, with a range of 2–100 m and a LiDAR accuracy of ±30 mm (in outdoor environments). The data rate is 300,000 points per second at up to 10 Hz. This device is also equipped with a camera that allows a video to be recorded while the laser is capturing the scene. The manufacturer guarantees an accuracy of 1–3 cm for a 20-min scan with the closing of a single loop.

3.4. Drone + LiDAR Sensor

The UAV chosen for this research is a DJI Matrice 300 RTK. This DJI device allows a wide range of sensors to be carried on board, enabling different tasks in agriculture, construction, inspection, security, and mapping. The Matrice 300 RTK is among the best drones on the market thanks to its compatibility with multiple payloads, such as the high-resolution (45-megapixel) DJI Zenmuse P1 camera or the DJI Zenmuse L1 LiDAR (time-of-flight) sensor. The basic characteristics of this UAV are detailed in Table 3.
The LiDAR sensor DJI Zenmuse L1 combines data from an RGB sensor and the IMU unit in a stabilized 3-axis gimbal, thus providing a true color point cloud from the RGB sensor (Figure 8). When used with the Matrice 300 RTK and DJI Terra software, the L1 forms a complete solution that provides real-time 3D data throughout the day, efficiently capturing the details of complex structures and providing highly accurate reconstructed models. Table 4 shows the technical characteristics of the sensor.

4. Results and Discussion

Prior to the data collection with the different LiDAR sensors, a network of registration spheres and target cards (horizontal and vertical) was placed around the scene with the aim of aligning the different point clouds generated in a common global coordinate system (Figure 9). Five markers, distributed throughout the scene, were placed horizontally on the ground in areas clear of wooded vegetation (Figure 9a), and the coordinates of these points were obtained with a GNSS network real-time kinematic (NRTK) survey, for which a geodetic receiver, the multi-frequency, multi-constellation Leica GS18, was used, receiving differential corrections from the network service of continuously operating reference stations (CORS) provided by the Piedmont region (SPIN network). The marker location accuracy was 1.2 cm in planimetry and 2.5 cm in altimetry. Starting from these points, the global coordinates of the remaining vertical target cards were obtained with a total station (Leica MS50, Wetzlar, Germany) using the topographic radiation method (Figure 9b). These points were obtained with millimeter precision.
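The topographic radiation (polar) method reduces each total station observation to coordinate increments from the station. A minimal sketch, with hypothetical station coordinates and observation values chosen only for illustration:

```python
import numpy as np

def radiate(station, azimuth_deg, zenith_deg, slope_dist):
    """Polar (radiation) method: coordinates of a target observed from a
    total station of known coordinates, given the azimuth, the zenith
    angle, and the slope distance to the target."""
    az = np.deg2rad(azimuth_deg)
    ze = np.deg2rad(zenith_deg)
    horiz = slope_dist * np.sin(ze)      # horizontal distance
    dE = horiz * np.sin(az)              # East increment
    dN = horiz * np.cos(az)              # North increment
    dH = slope_dist * np.cos(ze)         # height increment
    return station + np.array([dE, dN, dH])

# Hypothetical example: a target 25 m away, azimuth 45°, zenith angle 85°
station = np.array([394500.0, 4985000.0, 620.0])   # assumed E, N, H values
target = radiate(station, azimuth_deg=45.0, zenith_deg=85.0, slope_dist=25.0)
```

This sketch ignores instrument/prism heights and atmospheric corrections, which a real reduction from the MS50 observations would include.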
All these control points have been used in three phases of the work: (i) as ground control points to check whether the global reference system of the point cloud obtained by the airborne LiDAR sensor on the RTK drone coincides with the global reference system of the ground data acquisition; (ii) as control points to compare the data obtained from the TLS with the data of the MMS; and (iii) to align the different TLS scans with the classical point-based co-registration procedure.
Regarding point (i), the estimated control points (markers, vertical targets, and spheres) allow us to check whether the independent geo-referencing procedure of the RTK drone (cm-level accuracy) is consistent with the GNSS survey on the ground. The RTK positioning performed directly in flight by the DJI system is mandatory when no ground markers are visible from the drone's point of view due to the massive presence of trees and vegetation. This is especially interesting in this work, since the building is located in a wooded area where it is difficult to position ground control points.
Regarding the vertical targets, two of them were fixed to the building, while the others were placed on topographic tripods, on which the spheres were later mounted. It should be noted that the central point of each vertical target coincides with the center of the corresponding sphere (with a precision of ±1 mm) (Figure 9d). These devices were calibrated in the topography laboratory of the Department of Environment, Land and Infrastructure Engineering (DIATI) of the Politecnico di Torino. In total, six registration spheres with a diameter of 200 mm were placed along the scene, guaranteeing their visibility from different positions (Figure 9c).

4.1. TLS, MMS, and ALS Survey

In order to digitize the entire building and its surroundings, nine scans were made around the object, as shown in Figure 10 (yellow triangles). The high number of scans required to complete the building survey is due to the difficulty of observing the object directly in a wooded area. The alignment of these scan stations was carried out by means of the target-based registration method. This method allows different point clouds to be aligned through the use of geometrical features coming from artificial targets, such as planar targets or registration spheres. For the present case study, a target-based registration approach was adopted in which the centroid of each registration sphere was used as a control point for the alignment between point clouds. Within this framework, the centroid of each sphere was extracted by the RANSAC shape detector algorithm [37]. As a result, it was possible to align all the point clouds with an accuracy of ±3 mm. The resulting point cloud is composed of 47,368,018 points. The field data acquisition of the nine scans took about 100 min (around 8 min per scan), and the post-processing needed to merge the different point clouds took approximately 150 min.
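The RANSAC sphere extraction step can be sketched as follows: random minimal samples of four points each define a candidate sphere, the candidate with the most inliers wins, and a final least-squares fit on those inliers yields the centroid used for registration. This is a generic textbook sketch with synthetic data, not the specific detector of [37]; all names and parameter values are assumptions.

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares sphere fit: returns (center, radius)."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def ransac_sphere(pts, n_iter=200, tol=0.005, seed=0):
    """RANSAC sphere detection: repeatedly fit a sphere to 4 random points,
    keep the model with the most inliers, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 4, replace=False)]
        c, r = fit_sphere(sample)
        inliers = np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_sphere(pts[best_inliers])

# Example: a noisy 200 mm-diameter sphere (as used on site) plus 30% outliers
rng = np.random.default_rng(3)
dirs = rng.normal(size=(700, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
sphere_pts = np.array([1.0, 2.0, 0.5]) + 0.1 * dirs + rng.normal(0, 0.001, (700, 3))
outliers = rng.uniform(-1.0, 3.0, size=(300, 3))
center, radius = ransac_sphere(np.vstack([sphere_pts, outliers]))
```

Once each sphere centroid is recovered this way in every scan, the scans can be aligned by matching corresponding centroids, which is what the target-based registration does.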
In addition, data acquisition was carried out with the Kaarta Stencil 2 mobile mapping system. Prior to the acquisition with the MMS device, an on-site inspection was carried out with the aim of designing the most appropriate data acquisition protocol, taking into account the suggestions proposed by Di Filippo et al. [13], among which the following stand out: (i) ensuring accessibility to all the areas, (ii) removing obstacles along the way, and (iii) planning a closed loop in order to compensate for error accumulation. During the data acquisition, a closed-loop path was followed with the aim of compensating for possible error accumulation (Figure 10). In order to ensure a homogeneous density of the point cloud, the walking speed was kept constant, paying special attention to transition areas.
Taking these considerations into account, a single loop was necessary to digitize the building and its surroundings, investing a total of 12 min.
Data acquisition with the Kaarta Stencil 2 device first required PC initialization. Along with this power-up, the IMU was started to establish the reference coordinate system, together with the tracking camera and the LiDAR sensor. At the end of the acquisition phase with the Kaarta Stencil 2, information about the configuration settings, the 3D point cloud characteristics, and the estimated trajectory is stored in a folder automatically created by the MMS processor for every survey operation. Subsequently, precise processing of the field data was carried out in the laboratory, which took 40 min. The total resulting point cloud is composed of 63,256,457 points.
The UAS LiDAR-based survey was performed using a DJI Matrice 300 RTK drone piloted in manual mode to prevent collisions in such a harsh environment. The real-time kinematic feature allows connecting the drone to any RTK server via the NTRIP protocol and, therefore, providing differential corrections to the GNSS positioning system in real time. In a highly vegetated environment, where GCPs are not visible from the UAS, this feature enables direct geo-referencing with a high level of accuracy (a few centimeters). This is the case in our study, where all the acquired data and derived products were geo-referenced directly, without exploiting the GCPs under the canopy.
These data were acquired with the DJI Zenmuse L1 commercial system, a portable multi-sensor platform composed of a Livox LiDAR module, a CMOS RGB imaging sensor, and an IMU. As the L1 sensor supports multi-return LiDAR acquisition, the parameters were set to acquire three return signals (the maximum value) with a sampling rate of 240,000 pts/s to test its ability to penetrate vegetation. The raw data acquired consist of 5473 × 3648 RGB JPEG images and a proprietary data format for the LiDAR, inertial, and GNSS positioning data. The raw LiDAR data are converted into the standard point cloud format (.las) with the DJI Terra software, which also processes the RTK and IMU data together with the images to colorize the LiDAR point cloud. The obtained point cloud is composed of 8,768,243 points with associated RGB and intensity channels, as well as scan angle, time of acquisition, and return number. The UAS flight survey required about 15 min and the post-processing approximately 60 min. Figure 11 shows the obtained point cloud in RGB visualization and with return-number classification, which makes evident the difficulty of penetrating the vegetation in such a challenging scenario, where the building stands beneath a dense forest and is covered by the foliage. For this reason, the GCPs and the markers used in the previous analysis were not used, and direct geo-referencing of the data was performed.
Analyzing the results of the raw data acquisition for each sensor, the highest number of points is provided by the Faro Focus TLS, as expected. The Faro LiDAR sensor acquires 976,000 pts/s with an angular resolution of 0.009 deg in each direction, the Velodyne VLP-16 integrated into the Kaarta Stencil acquires 300,000 pts/s with an angular resolution of 0.1 deg in azimuth and 0.4 deg in zenith, and, finally, the DJI L1 sensor acquires 240,000 pts/s. These characteristics are reflected in the number of points of each dataset, as described in Figure 12. Regarding the density, computed as the number of neighbors in a volume of 1 m3, the Kaarta Stencil data are the densest. The reason lies in the acquisition methodology: while the TLS point cloud is obtained by co-registering a number of scans acquired from fixed positions, the MMS point cloud is obtained by summing the continuous acquisitions performed during the movement. This means that the density of the data can be increased by simply observing the same area several times along the walking path.
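The density metric used throughout this comparison (number of neighbors inside a given volume) can be sketched as follows. This is a brute-force Python illustration with function names of our own, whereas the actual computation was performed in CloudCompare:

```python
import math

def number_of_neighbors(points, radius):
    """Brute-force count, for every point, of the other points that fall
    inside a 3D sphere of the given radius (the NoN metric)."""
    r2 = radius * radius
    counts = []
    for i, p in enumerate(points):
        counts.append(sum(
            1 for j, q in enumerate(points)
            if i != j and
            (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2 <= r2
        ))
    return counts

def volume_density(points, radius):
    """NoN converted to pts/m^3 by dividing by the sphere volume; a radius
    of (3/(4*pi))**(1/3) ~ 0.62 m makes the sphere enclose exactly 1 m^3."""
    vol = (4.0 / 3.0) * math.pi * radius ** 3
    return [c / vol for c in number_of_neighbors(points, radius)]
```

In practice, a spatial index (octree or k-d tree) replaces the O(n²) loop for clouds of tens of millions of points.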

4.2. Data Comparison Analysis

4.2.1. TLS vs. MMS

The data comparison analysis was performed on a selected portion of the LiDAR survey, in particular on the building covered by the vegetation. The first analysis compared the volume density of the TLS cloud with that of the MMS cloud to investigate the performance of the two systems in the area of interest. The density, expressed in pts/m3, was computed from Equation (1), and the results are reported in Table 5. The density distributions evidence the difference between the two on-field data acquisition methods (Figure 13). As expected, the density values derived from the TLS survey present a heavy-tailed distribution, indicating denser data in the overlapping areas and sparser data where the object is observed from just one scan position. The MMS, on the other hand, shows a Gaussian mixture distribution of the density values, with areas of high, medium, and lower density. Moreover, the mean density value is greater for the MMS cloud (Table 5). This is typical of an iterative LiDAR acquisition performed by a pedestrian, who cannot maintain a constant walking speed and therefore causes variations in the density and number of the acquired points.
In order to align the point cloud obtained by the MMS with the TLS point clouds, another target-based registration phase was carried out, using the registration spheres to this end (Figure 9). The root mean square error (RMSE) of this registration phase was 0.01 m. As stated before, the registration spheres were mounted on topographic tripods with special platforms that allow geo-referencing the centroid of each sphere. Thanks to this, it was possible to geo-reference the point cloud by means of a six-parameter Helmert transformation (three translations and three rotations), placing both LiDAR models in the same coordinate system.
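For illustration, the six-parameter transformation (three rotations, three translations) can be estimated from as few as three homologous sphere centroids. The sketch below is a minimal frame-alignment approach in Python, not the least-squares adjustment used in practice, and all function names are our own:

```python
import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _norm(a):
    n = math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
    return (a[0] / n, a[1] / n, a[2] / n)

def frame(p0, p1, p2):
    """Orthonormal frame (rows e1, e2, e3) built from three non-collinear points."""
    e1 = _norm(_sub(p1, p0))
    e3 = _norm(_cross(e1, _sub(p2, p0)))
    e2 = _cross(e3, e1)
    return (e1, e2, e3)

def rigid_from_3pts(src, dst):
    """Rotation R (3x3, row-major) and translation t with R @ src_i + t = dst_i
    for three homologous points (exact, noise-free case)."""
    A = frame(*src)
    B = frame(*dst)
    # Frame coordinates must agree: A @ ds = B @ dd  =>  R = B^T @ A.
    R = [[sum(B[k][i] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    Rs0 = [sum(R[i][j] * src[0][j] for j in range(3)) for i in range(3)]
    t = tuple(dst[0][i] - Rs0[i] for i in range(3))
    return R, t
```

With more than three (noisy) centroids, a least-squares solution (e.g., Horn's or Kabsch's method) would be used instead, which is what sphere-based geo-referencing software effectively computes.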
A point-to-point comparison was carried out to obtain a more in-depth evaluation of the potential of the MMS solution for mapping these spaces. For this stage, the multiscale model-to-model cloud comparison (M3C2) algorithm was used [29]. This approach, implemented in the open-source software CloudCompare v2.10 [27], allows estimating the discrepancies between the MMS and TLS point clouds (Figure 14). In order to obtain reliable results, the two point clouds were segmented by eliminating the non-common areas. The comparison of the two point clouds provided an RMSE of 0.01 m (Figure 15), in line with the RMSE obtained during the alignment phase.

4.2.2. TLS vs. ALS

As for the MMS data, the L1 point cloud was compared with the TLS point cloud, using the latter as a reference. Firstly, the clouds were compared in terms of number of points and density metrics. Point density was computed as the number of neighbors (NoN) detected in a 3D sphere of radius 0.5 m. The analysis was performed in the building area of the survey, identifying a common portion of the clouds (Figure 16). In this case, the L1 data present a Gaussian density distribution with almost 30 times fewer points than the TLS data (Table 6).
Following the density analysis, a point-to-point comparison was carried out, again exploiting the M3C2 algorithm implemented in CloudCompare, as homologous points cannot be defined. The M3C2 distance computation calculates the changes between two point clouds along the normal direction of a mean surface, at a scale consistent with the local surface variations. If the reference cloud is dense enough, the nearest neighbor distance is close to the “true” distance to the underlying surface. The first point-to-point comparison was performed considering the entire region of interest, i.e., the building under the canopy, with the TLS point cloud used as a reference. Subsequently, the point-to-point comparison was performed on different portions of the building (façade and planar sections) to provide M3C2 distance statistics also along the principal directions of the cloud. This allows analyzing the data acquired by the L1 sensor in relation to the vegetation density of the surrounding environment, which can affect the scanned surfaces differently. Moreover, dividing the analysis into different portions yields less error-prone statistics. Figure 17 shows the different regions of interest.
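The cylinder-based projection at the heart of M3C2 (see also Figure 5) can be sketched, heavily simplified, in Python. The real algorithm [29] also estimates the normal at scale D via local PCA and derives confidence intervals; this didactic version, with names of our own, takes the normal as given:

```python
def m3c2_distance(core, normal, cloud_a, cloud_b, d=0.5, depth=1.0):
    """Simplified M3C2 at one core point: mean signed offset along `normal`
    of the points of each cloud falling inside a cylinder of diameter d and
    half-length `depth` centred on `core`. Returns the signed distance
    a -> b, or None if either cylinder is empty. `normal` must be a unit vector."""
    def mean_offset(cloud):
        offsets = []
        for p in cloud:
            v = tuple(p[i] - core[i] for i in range(3))
            along = sum(v[i] * normal[i] for i in range(3))       # axial coordinate
            radial2 = sum(v[i] * v[i] for i in range(3)) - along ** 2
            if abs(along) <= depth and radial2 <= (d / 2.0) ** 2:
                offsets.append(along)
        return sum(offsets) / len(offsets) if offsets else None
    a = mean_offset(cloud_a)
    b = mean_offset(cloud_b)
    return None if a is None or b is None else b - a
```

Running this over every core point of the reference cloud yields the signed distance field that CloudCompare renders in Figures 14 and 18.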
The geo-referenced TLS model and the L1 point cloud were both projected into the UTM cartographic representation (East, North, UTM WGS84-32N ETRF2000), with the elevation transformed from ellipsoidal to orthometric height using the regular interpolation grids provided by the Istituto Geografico Militare (IGM) (https://www.igmi.org/, accessed on 20 July 2022). Considering the different geo-referencing procedures (GCP-based vs. direct RTK), the L1 data must be evaluated with reference not only to the measurement accuracy but also to the geo-referencing accuracy. In the first case, M3C2 was applied directly to the L1 data without any pre-processing (a), while, in the second case, the M3C2 algorithm was preceded by a relative fine registration of the L1 data to the TLS data using an iterative closest point (ICP) procedure (b). For both analyses, a common portion of the surveyed area was selected and segmented. The parameters of the alignment process were optimized by minimizing the ICP alignment error. In particular, the number of iterations of the algorithm was fixed to the default value of 20, the optimum threshold for minimizing the root mean square (RMS) difference was found to be 10^−5 cm, and the final overlap was set to 60%. With these parameters, 106,957 points out of 120,748 were selected for registration, and the final RMS was 0.042 cm. The results of the M3C2 distance analysis are statistically summarized in Table 7 and visually reported in Figure 18.
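The fine registration step can be illustrated with a translation-only ICP sketch, which matches the dominant error source in this case (a quasi-constant geo-referencing offset). CloudCompare's ICP also estimates rotation and applies overlap filtering, which this minimal Python version, with names of our own, omits:

```python
import math

def icp_translation(source, target, iters=20, tol=1e-7):
    """Translation-only ICP sketch: at each iteration, match every source
    point to its nearest target point (brute force) and shift the whole
    source cloud by the mean residual. Returns the accumulated translation
    and the RMS of the last residuals."""
    src = [list(p) for p in source]
    total = [0.0, 0.0, 0.0]
    rms = float("inf")
    for _ in range(iters):
        residuals = []
        for p in src:
            q = min(target, key=lambda t: (t[0] - p[0]) ** 2 +
                                          (t[1] - p[1]) ** 2 +
                                          (t[2] - p[2]) ** 2)
            residuals.append((q[0] - p[0], q[1] - p[1], q[2] - p[2]))
        shift = [sum(r[i] for r in residuals) / len(residuals) for i in range(3)]
        for p in src:
            for i in range(3):
                p[i] += shift[i]
        for i in range(3):
            total[i] += shift[i]
        new_rms = math.sqrt(sum(r[0] ** 2 + r[1] ** 2 + r[2] ** 2
                                for r in residuals) / len(residuals))
        if abs(rms - new_rms) < tol:
            break  # converged: RMS no longer improving
        rms = new_rms
    return tuple(total), rms
```

A full six-parameter ICP alternates the same nearest-neighbor matching with a rigid-body (rotation plus translation) estimate at each iteration.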
The M3C2 distance analysis (Table 8) clearly shows the potential of the L1 sensor to penetrate the vegetation while still producing an accurate survey. The point cloud obtained by the direct geo-referencing process presents some deviation, with 50% of the points at a distance of less than 24 cm. This value decreases markedly after the ICP computation, reaching a distance of 1 cm. In this case, 95% of the points lie at a distance of less than 7 cm, which indicates an overall good performance of this system. It must be highlighted that the comparison was made in a highly vegetated environment; therefore, these errors are affected by noise related to the environment.
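The 50% and 95% figures quoted above are distance percentiles. Assuming the M3C2 distances are available as a plain list, they could be computed with the Python standard library as follows (function name is our own):

```python
import statistics

def distance_summary(distances):
    """Median (p50) and 95th percentile (p95) of the absolute
    cloud-to-cloud distances, the two statistics quoted in the text."""
    d = sorted(abs(x) for x in distances)
    cuts = statistics.quantiles(d, n=100, method="inclusive")
    return {"p50": statistics.median(d), "p95": cuts[94]}
```

The same summary run before and after ICP makes the improvement directly comparable (e.g., p50 dropping from ~24 cm to ~1 cm in this survey).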
Finally, to provide insight into the penetration capability of the L1 sensor, the three signal returns were analyzed again, computing the number of points and the cloud density for each beam (Table 9). As can be seen, the L1 is able to gather data beneath the canopy, also reaching a portion of the building, albeit with some voids.

5. Conclusions

In the present work, three LiDAR technologies (Faro Focus 3D X130 TLS, Kaarta Stencil 2–16 MMS, and DJI Zenmuse L1 ALS) have been tested and compared in order to assess their performance in surveying built heritage in vegetated areas. Each LiDAR surveying technique applies a different on-field and post-processing methodology due to differences in the technological features. In the field, the rapidity and ease of data acquisition are fundamental aspects, while, in post-processing, computing cost and workflow automation are essential. In this regard, moving LiDAR technologies have a great advantage over TLS: they can be easily deployed in the field, and the data acquisition can be performed in a few minutes. Moreover, the processing cost is lower thanks to iterative co-registration algorithms, which update the scans in a dead-reckoning fashion. In more detail, MMSs are more versatile and can be mounted on aerial and ground vehicles or carried by a pedestrian, making surveying possible in more scenarios. ALS, on the other hand, by taking advantage of the moving agent represented by the UAV, can cover large expanses of land, but only returns an overhead view. In our specific scenario, i.e., built heritage submerged by dense vegetation, the ALS demonstrated good overall penetration ability, generating a point cloud representative of the object, albeit with some gaps. The ground-based systems performed much better: both TLS and MMS allowed the laser scanner to be positioned or moved around the building, choosing the best vantage points or paths for the specific task. However, a limitation of these terrestrial tools lies in their inability to acquire the upper portions of the buildings, thus generating incomplete object geometries.
Regarding the technical performance of the three LiDAR models, the DJI Zenmuse L1 has the lowest performance in terms of level of detail, density, number of points, and related noise. Although the Kaarta Stencil MMS performs worse than the Faro Focus TLS in terms of the density and resolution of the acquired data, it compensates through its acquisition methodology, in which the user's movement can increase or decrease the density and amount of acquired data. On the other hand, the MMS lacks an imaging sensor able to colorize the point cloud, while the L1 acquires visible images that also allow a photogrammetric survey.
Concerning the processing, the MMS dataset was the fastest in processing time thanks to the high-level SLAM and loop-closure algorithms implemented in the post-processing routine. The ALS dataset can also be processed relatively quickly and, thanks to the UAS RTK positioning, automatically. The TLS required both more human effort and more processing time (for point cloud registration, filtering, and segmentation).
The resulting point clouds have been analyzed and compared, focusing on the number of points acquired by the different systems, the density, and the nearest neighbor distance. The TLS survey is more accurate and provides a higher number of points, but its overall mean density is lower than that of the MMS cloud. The ALS cloud is less dense and has almost 30 times fewer points than the TLS data.
The M3C2 cloud-to-cloud distance computation algorithm was applied to compare the MMS and ALS with the TLS used as ground truth. The results highlight that the direct geo-referencing from the RTK positioning performed by the UAV receiver is not enough to obtain reliable data, as the mean difference between the reference TLS cloud (geo-referenced with the geodetic GNSS survey) and the ALS cloud is about 30 cm. Therefore, a ground GNSS survey should always be performed, using markers or reflective targets visible from the DJI L1 sensor as GCPs. Despite this, the geo-referencing performed on the fly can be an excellent starting point for quickly applying co-registration algorithms. In this work, ICP was applied to the aerial survey results, yielding an average distance between point clouds of less than 1 cm. The L1 sensor can exploit the three different return signals to acquire data in forests and densely vegetated areas. In this work, it has been observed that the system is able to gather data beneath the canopy, also reaching portions of the building.

Author Contributions

Conceptualization, V.D.P. and M.Á.M.-G.; methodology, V.D.P. and M.Á.M.-G.; validation, V.D.P., M.Á.M.-G. and M.P.; formal analysis, V.D.P. and M.Á.M.-G.; writing—original draft preparation, V.D.P. and M.Á.M.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been carried out in the framework of a research project funded by the European Union through a postdoctoral fellowship to one of the authors within the actions of Marie Skłodowska-Curie Individual Fellowships, H2020-MSCA-IF-2019 (grant agreement ID: 894785; AVATAR project “Application of Virtual Anastylosis Techniques for Architectural Research”, http://avatar.polito.it/, accessed on 20 July 2022).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Department of Environmental Engineering, Territory and Infrastructure (DIATI) of the Politecnico di Torino for allowing us to use their tools and facilities.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Worthing, D.; Bond, S. Managing Built Heritage: The Role of Cultural Significance; John Wiley & Sons: Hoboken, NJ, USA, 2008.
  2. Du Cros, H. A new model to assist in planning for sustainable cultural heritage tourism. Int. J. Tour. Res. 2001, 3, 165–170.
  3. Maté-González, M.Á.; Rodríguez-Hernández, J.; Blázquez, C.S.; Torralba, L.T.; Sánchez-Aparicio, L.J.; Hernández, J.F.; Tejedor, T.R.H.; García, J.F.F.; Piras, M.; Díaz-Sánchez, C.; et al. Challenges and Possibilities of Archaeological Sites Virtual Tours: The Ulaca Oppidum (Central Spain) as a Case Study. Remote Sens. 2022, 14, 524.
  4. Machat, C.; Ziesemer, J. Heritage at Risk. World Report 2016–2019 on Monuments and Sites in Danger; Hendrik Bäßler Verlag, 2020. Available online: https://www.icomos.de/icomos/pdf/hr20_2016_2019.pdf (accessed on 22 July 2021).
  5. ICOMOS. The Nara Document on Authenticity. 1994. Available online: https://www.icomos.org/charters/nara-e.pdf (accessed on 22 July 2021).
  6. UNESCO. Declaration concerning the Intentional Destruction of Cultural Heritage. 2003. Available online: https://international-review.icrc.org/sites/default/files/irrc_854_unesco_eng.pdf (accessed on 22 July 2021).
  7. Fernández-Hernandez, J.; González-Aguilera, D.; Rodríguez-Gonzálvez, P.; Mancera-Taboada, J. Image-based modelling from unmanned aerial vehicle (UAV) photogrammetry: An effective, low-cost tool for archaeological applications. Archaeometry 2015, 57, 128–145.
  8. Del Pozo, S.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; Onrubia-Pintado, J.; González-Aguilera, D. Sensor fusion for 3D archaeological documentation and reconstruction: Case study of “Cueva Pintada” in Galdar, Gran Canaria. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 373–379.
  9. Masciotta, M.G.; Sanchez-Aparicio, L.J.; Oliveira, D.V.; González-Aguilera, D. Integration of Laser Scanning Technologies and 360° Photography for the Digital Documentation and Management of Cultural Heritage Buildings. Int. J. Archit. Herit. 2022, 1–20.
  10. Maté-González, M.Á.; Sánchez-Aparicio, L.J.; Sáez Blázquez, C.; Carrasco García, P.; Álvarez-Alonso, D.; de Andrés-Herrero, M.; García-Davalillo, J.C.; González-Aguilera, D.; Ruiz, M.H.; Bordehore, L.J.; et al. On the combination of remote sensing and geophysical methods for the digitalization of the San Lázaro Middle Paleolithic rock shelter (Segovia, Central Iberia, Spain). Remote Sens. 2019, 11, 2035.
  11. Previtali, M.; Brumana, R.; Banfi, F. Existing infrastructure cost effective informative modelling with multisource sensed data: TLS, MMS and photogrammetry. Appl. Geomat. 2020, 14, 21–40.
  12. Rodríguez-Martín, M.; Sánchez-Aparicio, L.J.; Maté-González, M.Á.; Muñoz-Nieto, Á.L.; Gonzalez-Aguilera, D. Comprehensive Generation of Historical Construction CAD Models from Data Provided by a Wearable Mobile Mapping System: A Case Study of the Church of Adanero (Ávila, Spain). Sensors 2022, 22, 2922.
  13. Di Filippo, A.; Sánchez-Aparicio, L.J.; Barba, S.; Martín-Jiménez, J.A.; Mora, R.; Aguilera, D.G. Use of a Wearable Mobile Laser System in Seamless Indoor 3D Mapping of a Complex Historical Site. Remote Sens. 2018, 10, 1897.
  14. Mora, R.; Sánchez-Aparicio, L.J.; Maté-González, M.Á.; García-Álvarez, J.; Sánchez-Aparicio, M.; González-Aguilera, D. An historical building information modelling approach for the preventive conservation of historical constructions: Application to the Historical Library of Salamanca. Autom. Constr. 2021, 121, 103449.
  15. Cleland, J. World population growth; past, present and future. Environ. Resour. Econ. 2013, 55, 543–554.
  16. Bagnasco, A.; Fondevila, E.R. La reestructuración de la gran industria y los procesos sociopolíticos en la ciudad: Turín, por ejemplo. Reis 1987, 38, 45–73.
  17. Belluso, E.; Compagnoni, R.; Ferraris, G. Occurrence of asbestiform minerals in the serpentinites of the Piemonte zone, western Alps. In Giornata di Studio in Ricordo del Prof. Stefano Zucchet; Tipolitografia Edicta: Torino, Italy, 1995; pp. 57–66.
  18. Stek, T.D. Drones over Mediterranean landscapes. The potential of small UAV's (drones) for site detection and heritage management in archaeological survey projects: A case study from Le Pianelle in the Tappino Valley, Molise (Italy). J. Cult. Herit. 2016, 22, 1066–1071.
  19. Colloredo-Mansfeld, M.; Laso, F.J.; Arce-Nazario, J. Drone-based participatory mapping: Examining local agricultural knowledge in the Galapagos. Drones 2020, 4, 62.
  20. Mancini, F.; Piras, M.; Ruotsalainen, L.; Vacca, G.; Lingua, A. The impact of innovative and emerging technologies on the surveying activities. Appl. Geomat. 2020, 12, 1–2.
  21. Gonzalez-Aguilera, D.; López-Fernández, L.; Rodriguez-Gonzalvez, P.; Hernandez-Lopez, D.; Guerrero, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; et al. GRAPHOS—Open-source software for photogrammetric applications. Photogramm. Rec. 2018, 33, 11–29.
  22. Maté-González, M.Á.; Yravedra, J.; González-Aguilera, D.; Palomeque-González, J.F.; Domínguez-Rodrigo, M. Micro-photogrammetric characterization of cut marks on bones. J. Archaeol. Sci. 2015, 62, 128–142.
  23. Guarnieri, A.; Vettore, A.; Remondino, F. Photogrammetry and ground-based laser scanning: Assessment of metric accuracy of the 3D model of Pozzoveggiani Church. In Proceedings of the FIG Working Week 2004, Athens, Greece, 22–27 May 2004.
  24. Wulder, M.A.; White, J.C.; Nelson, R.F.; Næsset, E.; Ørka, H.O.; Coops, N.C.; Hilker, T.; Bater, C.W.; Gobakken, T. Lidar sampling for large-area forest characterization: A review. Remote Sens. Environ. 2012, 121, 196–209.
  25. Repossi, E.; Gennaro, V. I minerali delle serpentine di Piossasco (Piemonte). In Atti della Reale Accademia dei Lincei, Serie 6, Rendiconti, Classe di Scienze Fisiche, Matematiche e Naturali, 4, 2° Semestre; Accademia Nazionale dei Lincei: Rome, Italy, 1926; pp. 150–153.
  26. Gennaro, V. I minerali delle serpentine di Piossasco (Piemonte). Atti della R. Accad. delle Sci. Torino 1931, 66, 433–458.
  27. CloudCompare Project. Available online: https://www.danielgm.net/cc/ (accessed on 22 July 2022).
  28. Van Genechten, B. Theory and Practice on Terrestrial Laser Scanning: Training Material Based on Practical Applications; Universidad Politecnica de Valencia Editorial: Valencia, Spain, 2008.
  29. Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (NZ). ISPRS J. Photogramm. Remote Sens. 2013, 82, 10–26.
  30. Shi, X.; Liu, T.; Han, X. Improved Iterative Closest Point (ICP) 3D point cloud registration algorithm based on point cloud filtering and adaptive fireworks for coarse registration. Int. J. Remote Sens. 2020, 41, 3197–3220.
  31. Barnhart, T.B.; Crosby, B.T. Comparing Two Methods of Surface Change Detection on an Evolving Thermokarst Using High-Temporal-Frequency Terrestrial Laser Scanning, Selawik River, Alaska. Remote Sens. 2013, 5, 2813–2837.
  32. DiFrancesco, P.M.; Bonneau, D.; Hutchinson, D.J. The implications of M3C2 projection diameter on 3D semi-automated rockfall extraction from sequential terrestrial laser scanning point clouds. Remote Sens. 2020, 12, 1885.
  33. Holst, C.; Klingbeil, L.; Esser, F.; Kuhlmann, H. Using point cloud comparisons for revealing deformations of natural and artificial objects. In Proceedings of the 7th International Conference on Engineering Surveying (INGEO), Lisbon, Portugal, 18–20 October 2017; pp. 18–20.
  34. Zhang, J.; Grabe, V.; Hamner, B.; Duggins, D.; Singh, S. Compact, real-time localization without reliance on infrastructure. Third Annu. Microsoft Indoor Localization Compet. 2016, 18, 24–31.
  35. Di Stefano, F.; Torresani, A.; Farella, E.M.; Pierdicca, R.; Menna, F.; Remondino, F. 3D surveying of underground built heritage: Opportunities and challenges of mobile technologies. Sustainability 2021, 13, 13289.
  36. Giordan, D.; Godone, D.; Baldo, M.; Piras, M.; Grasso, N.; Zerbetto, R. Survey Solutions for 3D Acquisition and Representation of Artificial and Natural Caves. Appl. Sci. 2021, 11, 6482.
  37. Thrun, S. Simultaneous Localization and Mapping. In Robotics and Cognitive Approaches to Spatial Mapping; Springer: Berlin/Heidelberg, Germany, 2007; pp. 13–41.
Figure 1. Monte Pietraborga location (Trana, Provincia di Torino, Piemonte, Italy).
Figure 2. (a) In red, the surface of the buildings registered in the cadastre of the Città Metropolitana di Torino. In yellow, the centroids of the buildings that are covered by vegetation; (b) Tree cover density, dominant leaf type, and forest type products for reference year 2018 in 10 m resolution (High-Resolution Layers—Forest).
Figure 3. (a) Location of the heritage element chosen for testing the different geomatic sensors (375,016.508 East; 4,986,232.369 North—WGS 1984, UTM Zone 32 N). On the right, from top to bottom: (i) the first white box shows the orthophotography of the chosen site, which, as can be seen, is completely covered by vegetation; (ii) the second frame shows the overlapping layer of buildings, with the yellow dots of the survey (indicating that the building is under a wooded area); (iii) the third frame shows the High-Resolution Layers—Forest layer, on which the buildings layer and the layer of the dots under the forest have been overlaid; (b) image of the heritage element.
Figure 4. Validation analysis workflow to compare FARO Focus 3D X130 TLS, Kaarta Stencil 2–16 MMS, and DJI Zenmuse L1 ALS.
Figure 5. Cylinder projection distance M3C2. The point normal for i is calculated using the scale, D. A cylinder with a diameter d and a user-specified maximum length is used to select points in Cb and Ca (point clouds to be compared) for the calculation of i1 and i2, respectively. LM3C2 is the distance between i1 and i2 and is stored as an attribute of i. The local and apparent roughness of Cb and Ca are calculated as σ1 and σ2, respectively, which are used to calculate the confidence interval of the spatial variable i [31,32].
Figure 6. Faro Focus 3D X130 Terrestrial Laser Scanner device used for the detailed digitalization of the building and its environment. On the left, the laser scanner is shown during data collection and on the right the equipment in detail.
Figure 7. Kaarta Stencil 2–16 wearable mobile mapping system used: main components and a photo taken during the data acquisition.
Figure 8. DJI Matrice 300 RTK with DJI Zenmuse L1 ALS.
Figure 9. Network of registration spheres and target cards (horizontal and vertical), plan view. (a) Horizontal target cards; (b) vertical target cards; (c) spheres; (d) image of the calibration system.
Figure 10. TLS scan stations and path followed during the data acquisition with the WMMS.
Figure 11. Point cloud acquired by the DJI L1 sensor, visualized in RGB (left) and in return number classification (right).
Figure 12. Number of cloud points for each original dataset acquired and volume density in 1 m3. All the data are extracted from an area of 3000 m2.
Figure 13. Point cloud density expressed in pts/m3 of TLS data and relative histogram (a) with respect to Kaarta Stencil WMMS data (b).
Figure 14. Discrepancies obtained from the comparison of the TLS and MMS point clouds. Most of the MMS point cloud shows a blue color, indicating that the discrepancies between the MMS and the TLS are under or near 1 cm. Yellow, red, and blue areas indicate vegetation zones (with minimal wind, they can move and generate discrepancies between the point clouds).
Figure 15. (a) TLS point cloud; (b) MMS point cloud; (c) comparison between the TLS and the MMS point clouds—the TLS point cloud is in purple and the MMS point cloud is in blue. The AA’ plane cuts the area of the building horizontally, and the BB’ plane cuts the building vertically.
Figure 16. Point cloud density, with the relative histogram, for the TLS data (a) and the L1 data (b).
Figure 17. Region analysis for L1 point cloud comparison.
Figure 18. M3C2 analysis of the building along different sections, with direct geo-referencing and after ICP. The Faro Focus 3D X130 TLS is used as the reference and the DJI Zenmuse L1 as the compared dataset.
Table 1. Parameters used for running the M3C2 algorithm. The "first" column shows the values suggested by the literature review, while the "last" column reports the tuned values used after some iterations.
M3C2 Parameters

Parameter | First | Last
NormalScale | 0.25 | 0.251872
NormalMode | 0 | 1
NormalMinScale | 0.125936 | 0.125936
NormalStep | 0.125936 | 0.125936
NormalMaxScale | 0.503744 | 0.503744
NormalUseCorePoints | true | false
NormalPreferedOri | 0 | 6
SearchScale | 0.25 | 0.2
SearchDepth | 0.25 | 4
SubsampleRadius | 0.05 | 0.125936
SubsampleEnabled | true | false
RegistrationError | 0.003 | 0.003
RegistrationErrorEnabled | true | false
UseSinglePass4Depth | false | false
PositiveSearchOnly | false | false
UseMedian | false | false
UseMinPoints4Stat | false | false
MinPoints4Stat | 5 | 5
ProjDestIndex | 1 | 2
UseOriginalCloud | false | true
ExportStdDevInfo | true | true
ExportDensityAtProjScale | true | true
MaxThreadCount | 16 | 16
UsePrecisionMaps | false | false
PM1Scale | 1 | 1
PM2Scale | 1 | 1
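The tuned values in Table 1 live in the plain key=value parameter file consumed by CloudCompare's qM3C2 plugin. A minimal sketch of generating such a file (the file name, the subset of keys written, and the `[General]` section header reflect our reading of files saved by the GUI, not a documented format guarantee):

```python
# Illustrative subset of the tuned ("last") values from Table 1.
TUNED = {
    "NormalScale": 0.251872,
    "NormalUseCorePoints": "false",
    "SearchScale": 0.2,
    "SearchDepth": 4,
    "SubsampleEnabled": "false",
    "RegistrationError": 0.003,
    "MaxThreadCount": 16,
}

def write_m3c2_params(path, params):
    """Serialize an M3C2 parameter dictionary, one key=value per line."""
    lines = ["[General]"] + [f"{k}={v}" for k, v in params.items()]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")
    return lines

lines = write_m3c2_params("m3c2_params.txt", TUNED)
```

Such a file can then be used for a headless comparison along the lines of `CloudCompare -SILENT -O tls.laz -O als.laz -M3C2 m3c2_params.txt`; the exact switch name should be verified against the installed qM3C2 version.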
Table 2. Kaarta Stencil 2 input parameters set for the outdoor environment.
Parameter | Value
Resolution of the point cloud in the map file (m) | voxelSize = 0.4 m
Resolution of the point cloud for scan matching and display (m) | cornerVoxelSize = 0.2 m; surfVoxelSize = 0.4 m; surroundVoxelSize = 0.6 m
Minimum distance of the points to be used for the mapping (m) | blindRadius = 2 m
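The voxelSize parameters in Table 2 control map resolution through voxel-grid filtering: one representative point is kept per occupied cubic cell. A minimal sketch of that filter (our own implementation for illustration, not Kaarta's code; the function name is ours):

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.4):
    """Voxel-grid filter: keep one centroid per occupied cubic voxel."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel_size).astype(np.int64)          # voxel index of each point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)   # voxel id per point
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, pts)                               # per-voxel coordinate sums
    counts = np.bincount(inverse, minlength=n_voxels).astype(float)
    return sums / counts[:, None]                               # per-voxel centroids

# Two points share the first 0.4 m voxel; the third is isolated.
cloud = [[0.05, 0.05, 0.05], [0.10, 0.10, 0.10], [1.0, 1.0, 1.0]]
reduced = voxel_downsample(cloud, voxel_size=0.4)  # 3 points -> 2 centroids
```

A larger voxelSize trades spatial detail for a lighter map, which is why the map file (0.4 m) is coarser than the corner-feature cloud (0.2 m) used for scan matching.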
Table 3. Basic characteristics of the UAV DJI Matrice 300 RTK.
Weight | Approx. 6.3 kg (with one gimbal)
Max. transmitting distance (Europe) | 8 km
Max. flight time | 55 min
Dimensions | 810 × 670 × 430 mm
Max. payload | 2.7 kg
Max. speed | 82 km/h
GNSS | GPS + GLONASS + BeiDou + Galileo
Accuracy in hovering flight (P mode, with GPS) | Vertical: ±0.1 m (vision system activated), ±0.5 m (GPS activated), ±0.1 m (RTK activated); Horizontal: ±0.3 m (vision system activated), ±1.5 m (GPS activated), ±0.1 m (RTK activated)
RTK positioning accuracy | With RTK activated and locked: 1 cm + 1 ppm (horizontal); 1.5 cm + 1 ppm (vertical)
Table 4. Basic characteristics of the DJI Zenmuse L1 laser scanner.
Dimensions | 152 × 110 × 169 mm
Weight | 930 ± 10 g
Maximum measurement distance | 450 m at 80% reflectivity; 190 m at 10% reflectivity
Recording speed | Single return: max. 240,000 points/s; multiple return: max. 480,000 points/s
System accuracy (1σ) | Horizontal: 10 cm per 50 m; Vertical: 5 cm per 50 m
Distance measurement accuracy (1σ) | 3 cm per 100 m
Beam divergence | 0.28° (vertical) × 0.03° (horizontal)
Maximum registered reflections | 3
RGB camera sensor size | 1 in
RGB camera effective pixels | 20 Mpix (5472 × 3078)
IMU | Refresh rate = 200 Hz; accelerometer range = ±8 g
Table 5. Volume density and number of neighbors comparison in a volume of 1 m3 between the reference point cloud (Faro Focus 3D X130) and the compared one (Kaarta Stencil 2–16).
Volume Density (Radius = 0.5 m)

LiDAR | N° of points | Mean (pts/m³) | St. deviation (pts/m³)
FARO TLS | 3,838,723 | 30,943.20 | 16,340.60
Kaarta Stencil | 4,762,039 | 42,419.30 | 38,283.40

Number of Neighbors (Radius = 0.5 m)

LiDAR | N° of points | Mean (points) | St. deviation (points)
FARO TLS | 3,838,723 | 16,181.60 | 8577.41
Kaarta Stencil | 4,762,039 | 22,210.70 | 20,045.10
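The figures in Tables 5 and 6 come from a radius search around every point: the neighbor count inside a 0.5 m sphere, and that count converted to a volume density. A sketch of the computation (the function name is ours, and whether a given tool counts the query point itself in the neighborhood may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbor_stats(points, radius=0.5):
    """Neighbor count inside a sphere of `radius` around every point,
    plus the derived volume density in pts/m3. The query point itself
    is excluded from its own neighbor count."""
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    counts = np.array([len(ids) - 1 for ids in tree.query_ball_point(pts, r=radius)])
    volume = 4.0 / 3.0 * np.pi * radius ** 3   # sphere volume in m3 (~0.524 for r = 0.5)
    density = counts / volume                  # pts/m3
    return counts.mean(), counts.std(), density.mean(), density.std()

# Three collinear points 0.4 m apart: only the middle one has two neighbors.
n_mean, n_std, d_mean, d_std = neighbor_stats([[0, 0, 0], [0.4, 0, 0], [0.8, 0, 0]])
```

On a dense TLS cloud this brute-force per-point query is expensive; a KD-tree, as used here, keeps it tractable.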
Table 6. Volume density and number of neighbors comparison in a volume of 1 m3 between the reference point cloud (Faro Focus 3D X130) and the compared one (DJI Zenmuse L1).
Volume Density (Radius = 0.5 m)

LiDAR | N° of points | Mean (pts/m³) | St. deviation (pts/m³)
FARO TLS | 3,838,723 | 30,943.20 | 16,340.60
DJI L1 | 120,748 | 939.02 | 315.00

Number of Neighbors (Radius = 0.5 m)

LiDAR | N° of points | Mean (points) | St. deviation (points)
FARO TLS | 3,838,723 | 16,181.60 | 8577.41
DJI L1 | 120,748 | 491.13 | 165.56
Table 7. Results of M3C2 algorithm. Statistical values of the deviations between FARO Focus 3D X130 and DJI Zenmuse L1 (with direct geo-referencing and after ICP).
M3C2 Distance

Alignment | N° of valid points | Min (m) | Max (m) | Mean (m) | St. deviation (m)
Area of Interest
Direct Georef. | 112,863 | 0.00 | 4.11 | 0.36 | 0.47
ICP | 117,943 | 0.00 | 1.29 | 0.06 | 0.00
1_Southern Section
Direct Georef. | 34,034 | 0.00 | 1.25 | 0.27 | 0.18
ICP | 34,960 | 0.00 | 1.25 | 0.07 | 0.14
2_Northern Section
Direct Georef. | 34,216 | 0.00 | 0.87 | 0.27 | 0.17
ICP | 34,179 | 0.00 | 0.95 | 0.03 | 0.06
3_Western Section
Direct Georef. | 27,720 | 0.00 | 0.72 | 0.24 | 0.15
ICP | 28,211 | 0.00 | 0.48 | 0.016 | 0.03
4_Plane Section
Direct Georef. | 28,985 | 0.00 | 1.21 | 0.27 | 0.17
ICP | 29,035 | 0.00 | 0.98 | 0.05 | 0.11
Table 8. M3C2 distance analysis between TLS and ALS for the building area. Distance values at the 5th, 50th, and 95th percentiles of the points.
M3C2 Distance Analysis

Alignment | 5% | 50% | 95%
Area of Interest
Direct Georef. | 0.02 | 0.24 | 1.10
ICP | 0.00 | 0.01 | 0.07
1_Southern Section
Direct Georef. | 0.02 | 0.26 | 0.57
ICP | 0.00 | 0.02 | 0.37
2_Northern Section
Direct Georef. | 0.02 | 0.28 | 0.51
ICP | 0.00 | 0.01 | 0.15
3_Western Section
Direct Georef. | 0.02 | 0.24 | 0.46
ICP | 0.00 | 0.01 | 0.06
4_Plane Section
Direct Georef. | 0.02 | 0.27 | 0.53
ICP | 0.00 | 0.01 | 0.28
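The 5%, 50%, and 95% values in Table 8 are percentiles of the per-point M3C2 distance field. With the distances exported from CloudCompare as a scalar column, they can be summarized as follows (function name and the synthetic input are illustrative; invalid points, where M3C2 found no projection, are exported as NaN):

```python
import numpy as np

def m3c2_percentiles(distances, probs=(5, 50, 95)):
    """Summarize a per-point M3C2 distance field by absolute-value
    percentiles, ignoring invalid (NaN) points."""
    d = np.abs(np.asarray(distances, dtype=float))
    d = d[~np.isnan(d)]                  # drop points with no valid projection
    return {p: float(np.percentile(d, p)) for p in probs}

# Synthetic distance field with one invalid point.
stats = m3c2_percentiles([0.00, 0.01, 0.02, 0.05, np.nan])
```

Reporting percentiles instead of the mean keeps the summary robust to the heavy-tailed outliers that vegetation produces in the distance field.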
Table 9. Density and point number statistics of different L1 signal returns.
Return | N° of points | N° of neighbors (r = 0.5 m): mean / st. dev. (points) | Density (r = 0.5 m): mean / st. dev. (pts/m³)
Beam 1 | 3,476,674 | 229.47 / 157.33 | 438.25 / 300.49
Beam 2 | 1,976,398 | 150.57 / 118.80 | 287.58 / 226.89
Beam 3 | 654,926 | 125.98 / 105.84 | 240.61 / 202.14
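Table 9 splits the L1 cloud by return number (beam 1 = first return, and so on up to the sensor's three registered reflections). With a LAS/LAZ file loaded, e.g., via laspy, the `return_number` dimension gives this split directly; a sketch with a synthetic return-number array (the helper function is ours):

```python
import numpy as np

def split_by_return(points, return_numbers, max_return=3):
    """Group points by LiDAR return number (1 = first return)."""
    pts = np.asarray(points)
    rn = np.asarray(return_numbers)
    return {r: pts[rn == r] for r in range(1, max_return + 1)}

# Synthetic cloud: 6 points with returns 1,1,1,2,2,3,
# mimicking the L1's maximum of three registered reflections.
pts = np.arange(18, dtype=float).reshape(6, 3)
groups = split_by_return(pts, [1, 1, 1, 2, 2, 3])
```

As in Table 9, later returns form progressively sparser clouds, since only part of each pulse penetrates the canopy to produce a second or third echo.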
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
