Article

Usage of a Conventional Device with LiDAR Implementation for Mesh Model Creation

Department of Geodesy, Faculty of Civil Engineering, University of Žilina, 010 26 Žilina, Slovakia
* Author to whom correspondence should be addressed.
Buildings 2024, 14(5), 1279; https://doi.org/10.3390/buildings14051279
Submission received: 28 March 2024 / Revised: 22 April 2024 / Accepted: 28 April 2024 / Published: 1 May 2024

Abstract

Conventional devices such as mobile phones and tablets are increasingly used in everyday measurement practice. This trend coincides with the growing popularity of building information modeling (BIM), which has encouraged the exploration of various 3D object capture methods. The technological boom has also produced a surge of applications working with different 3D model formats, including mesh models, point clouds, and TIN models. Among these, the use of mesh models is growing particularly rapidly. The main advantages of mesh models are their efficiency, scalability, flexibility, sense of detail, user-friendliness, and compatibility. The idea of this paper is to use a conventional device, specifically an iPad Pro equipped with light detection and ranging (LiDAR) technology, for creating mesh models. The data capture methods employed by various applications are compared to evaluate the precision of the final models. The accuracy of the 3D models generated by each application is assessed by comparing the spatial coordinates of identical points distributed irregularly across the entire surface of the chosen object. Several of the currently most-used applications were utilized for data collection. In general, 3D representations of an object or area may be visualized, analyzed, and further processed in several formats, such as TIN models, point clouds, or mesh models. Mesh models visualize the object by mirroring the solid design of the real object, thus approximating reality most closely. This fact, along with the automated post-processing after data acquisition, the ability to capture and visualize both convex and concave objects, and the possibility of using this type of 3D visualization for 3D printing, motivated the decision to test and analyze mesh models. Consequently, the mesh models were created via automatic post-processing, i.e., without external intervention. As a result, each application automatically pre-defines its own arbitrary coordinate system, and this research must deal with the resulting obstacles in order to provide a valid and credible comparative analysis. Various criteria may be applied to the comparison of mesh models, including objective qualitative and quantitative parameters as well as subjective ones. The aim of this research is not to analyze the data acquisition process in detail, but to assess the capabilities of the applications for basic users.

1. Introduction

The current pace of construction places demands on the time, method, and cost of measuring objects without sacrificing objective accuracy. Some modern technologies open new avenues and possibilities for appropriate methods of data collection and post-processing while maintaining the above conditions. This is the case, for example, of ground-based photogrammetry technology, which can be performed using the right application and a commonly used camera built into a personal mobile phone. This possibility is taken up a level by the presence of devices with built-in LiDAR technology (e.g., iPad Pro, iPhone Pro). In the case of achieving the required accuracy classes, the need to measure using standard surveying instruments could be eliminated. It would also be possible to reduce the time of processing the measured data and creating the resulting model for further purposes. The results could be further analyzed, used, and, if necessary, become part of, e.g., BIM or the basis for the creation of a digital twin of the object.
LiDAR is a remote sensing technology that uses laser light to measure distances and create highly detailed 3D maps of the environment [1,2,3]. It has found its application in many practical and scientific fields, whether in the construction industry, in environmental studies and geohazard solutions, or in interdisciplinary issues such as the occurrence of drinking water [4,5,6,7].
The different approaches to data collection using LiDAR technology allow us to achieve resulting accuracies at different levels depending on many factors such as the LiDAR type, distance from the object of measurement, atmospheric conditions, integrated GNSS/IMU system, the instrument and carrier used and the associated constraints, physical influences (carrier velocity), etc. LiDAR boasts an impressive accuracy, reaching up to 2–3 cm for horizontal coordinates (X, Y) and 3–5 cm for height (Z) when combined with GNSS data for georeferencing and IMU data for orientation [8]. It is important to note that such high-accuracy measurements are typically associated with large-scale surveys like topographic mapping [9].
In 2020, Apple introduced a LiDAR scanner built into its mobile devices. Debuting in the iPad Pro and later included in the iPhone 12 Pro series, the scanner is positioned near the camera lenses. It houses a sensor that emits laser pulses towards the environment. When these pulses bounce off objects and return to a receiver, the device can accurately calculate distances. Initially, the focus was on enhancing photography, particularly for better autofocus and low-light performance. Apple’s LiDAR surpassed its competitors’ time-of-flight (ToF) sensors in terms of sophistication.
While initially aimed at improving photos, LiDAR’s applications have expanded. It can create highly accurate 3D models, though its accuracy can be influenced by the size, shape, and other factors of the object being scanned. Studies have shown that LiDAR sensors excel at creating high-resolution models of small objects with side lengths exceeding 10 cm, achieving an impressive accuracy of ±1 cm [10,11,12,13].
This paper investigates the feasibility of using terrestrial LiDAR technology for creating 3D models of the small- to medium-scale convex objects that are commonly encountered in civil engineering practice. These objects could include prefabricated building components, pipe segments, and other regular or semi-regular shapes. The implementation of LiDAR technology into daily devices [14] (iPads etc.) requires the overall minimization of measuring and computational components. It is therefore necessary to test the accuracy, precision, application, and suitability of the implemented LiDAR technology. This research will employ automatically generated mesh models. These models, generated by widely used applications like Scaniverse, 3D Scanner App, or Polycam, will be evaluated to assess their suitability for civil engineering applications.

Problem Overview

Numerous studies have explored and analyzed the use of operational LiDAR systems in conventional instruments across various domains. Previous research has examined LiDAR technology applications in non-invasive remote sensing across diverse environments (forestry, urban areas, etc.) for various target groups [15,16,17]. There have been notable advancements in laser scanning techniques, expanding from the original terrestrial or aerial setups (including UAVs and satellites) to dynamic terrestrial perspectives. Mobile laser scanning from an automotive carrier has found tremendous application in mapping processes. Notably, a comprehensive review provided in reference [15] encompasses research on mobile laser scanning, focusing on applications in urban environments [18,19,20,21], urban monitoring [22,23,24,25], and city modeling for digital twin creation [26,27]. Previous studies have addressed challenges and tasks related to data analysis techniques such as classification, filtration, and fusion, yielding insights and hypotheses [15]. The applications included parameter measurements (heights, volumes, areas) at varying precisions and object measurements for potential usage, e.g., in integrating the green and digital construction industries. These authors mainly tried to bridge the gap between traditional conventional methods of data capturing and methods with the potential to be widely used in the future.
The widespread utilization of spatial data derived from laser scanning has quickly transitioned its usage from scientific and professional domains to everyday life, integrating into conventional devices like mobile phones and tablets. This accessibility motivates users to explore various applications designed for 3D modeling, thereby enabling them to create their own 3D models. These emerging opportunities have captured the attention of numerous researchers, with papers addressing different tasks and challenges across diverse fields.
A considerable body of published articles focuses on analyzing the integration of LiDAR technology into iPads, particularly concerning forestry-related issues [28,29,30,31,32,33]. In reference [29], the authors focused on the determination of tree diameter parameters and compared the cross-sections resulting from the individual point clouds produced by selected applications: 3D Scanner App, Polycam, and SiteScape (3D Scanner App, LAAN LABS, New York, NY, USA; Polycam, Polycam Inc., New York, NY, USA; SiteScape, FARO, Lake Mary, FL, USA). The best results were provided by the 3D Scanner App (RMSE = 0.036 m). Another approach, for trunk flare diameter estimation, was proposed in reference [30], where the authors compared point clouds from an iPad Pro and a FARO terrestrial laser scanner. The final statements confirmed similar results (RMSE of 0.087 m for the iPad Pro and 0.070 m for the FARO scanner). Similar research was performed in references [32,33]. A credible overview of data acquisition methods for scientific forest management, including the iPad device, is presented in reference [31].
The task of analyzing and assessing the implemented LiDAR technology in an iPad Pro has been introduced into industrial [34] and urban environments [35], into civil engineering with regard to cultural heritage [36,37,38], and also into less common fields such as medicine [39] and sports [40]. In reference [34], the authors compared the iPad LiDAR to an industrial 3D scanning system. Lego bricks of different shapes and colors were the measured objects, and the conclusion was that the LiDAR technology proved impractical for scanning small objects and that no results could be obtained to determine the accuracy. Three different mobile devices with LiDAR scanning solutions were used in reference [35]. The usage of LiDAR in iOS devices for architectural and heritage purposes is presented in reference [36]. According to that paper, the Apple LiDAR sensor can be used for the creation of 3D models for applications and for the metric documentation of architectural and cultural heritage objects that are not particularly complex in form and texture.
The novelty of this paper lies in its focus on testing various popular applications available on the App Store which boast high ratings and offer the option of exporting data without fees. Previous researchers have primarily presented findings related to the analysis of 3D point clouds from multiple perspectives (industry, environment, urban areas, etc.). This paper introduces a fresh perspective on mesh models and their potential applications and analysis, extending beyond casual use. As the creation of 3D model representations gains traction, one of this paper’s challenges is to evaluate the usage and accuracy of mesh models through diverse approaches. This challenge stems from the increasing popularity of 3D printing, utilized not only in civil engineering but also for personal endeavors. Analyzing and comparing various applications that are capable of creating 3D models at a 1:1 scale using implemented LiDAR technology is crucial. Such assessments can serve as a foundation for users interested in 3D printing existing components without designing them from scratch, leveraging the LiDAR technology in their devices to obtain ready-to-print mesh models while considering accuracy and other relevant factors.

2. Materials and Methods

These tests aimed to verify the potential of LiDAR technology, in combination with various applications, for creating 3D models of small, regular, and slightly complex convex objects under realistic conditions. Common measurement conditions were deliberately preserved (so as not to idealize conditions such as weathering and other influences). A partial objective was also to compare the 3D object model created using LiDAR technology with a 3D model created using graphical sectional photogrammetry.
To evaluate the effectiveness of the different applications in creating 3D models, two comparison approaches, subjective and technical, were employed. Subjective parameters include, for example, the overall impression of the resulting model, the color sharpness, the detail of the object edges, the complexity of the individual parts of the object, the data acquisition stage, the size of the resulting file, the possibilities of exporting the results for further use, and the costs, including the acquisition device and the application used for creating the model. As for the pragmatic, technical approach, it is possible to compare the dimensions of the object, the identity of the edges, the shape of the object, and so on. For this work, the method of comparing the coordinates of uniquely identified points placed irregularly on the object was chosen [41,42,43,44]. After the detailed points were attached to the object, they were surveyed using the conventional spatial polar method, and their spatial coordinates were determined in a relative coordinate system. The entire object, with the detailed points in place, was then scanned with each of the selected applications according to their instructions, so a 3D model of the measured object was created multiple times. Part of the post-processing was the identification of the detailed points in each model's relative coordinate system. The next step was the transformation of these coordinates into the coordinate system created by the geodetic survey. The result was coordinate pairs: one in the geodetic reference coordinate system and one transformed from the 3D model of the selected application. These pairs form transformation vectors. The scale factor, the mean model transformation error, the coordinate shifts of individual detailed points, the transformation parameter vector, and the file size were selected for the processing evaluation. The last evaluation criterion was the nature of these transformation vectors, especially their spatial size. The set of vectors was analyzed in terms of their size in several centimeter-wide intervals, determining the absolute frequency of these vectors. It is up to each user to decide which of the parameters they consider meritorious and which they prefer based on subjective priorities.

2.1. Measured Object

The case study object was a monument to German reprisals, located in the cadastral district of Žilina. It was chosen for its suitability: a convex shape with slight fragmentation, a manageable size for various data acquisition technologies, and easy accessibility. The monument commemorates young heroes who fought against Nazi forces. It consists of a large stone statue (5.35 m × 3.45 m × 2.90 m, L × W × H) on a concrete base, with inscriptions and symbols engraved on the surface. The Monument to the German Reprisals (Figure 1), located in Chrasť Forest Park, is a national cultural monument. In 2009, the city of Žilina repaired and cleaned the monument and the memorial plaque.

2.2. Data Collection and Post-Processing

Following the selection of the monument and a site assessment, a network of detailed observation points was established. These points were square markers, each assigned a unique numerical identifier. The points were randomly distributed across the monument's edges, sides, and body. A total of 70 observation points (example shown in Figure 2, left) were identified and attached. These points were then measured using a conventional geodetic method, specifically the spatial polar method within a local coordinate system. While this established method guarantees accuracy, using a local coordinate system prioritizes convenience and simplifies the measurement process. It is important to note that this choice (not using a global reference system) increases the final network accuracy, because, by the law of error propagation, the mean errors of an existing reference point field are not propagated into the measured coordinates.
The spatial polar method in the local coordinate system was used as a conventional, proven method. A Trimble VX universal measuring station (Figure 2, right) was employed for the measurements (angular accuracy 1'', length accuracy 2 mm + 2 ppm for prism-less measurement). The spatial polar method is the most widely used method for determining the spatial coordinates of a measured point with a geodetic measuring instrument, i.e., a total station or universal measurement station; it is also known as tachymetry. The principle of this method is based on measuring the horizontal and vertical (zenith) angles and a distance (horizontal or slope) towards the required point, signalized by special prisms or measured without signalization. The spatial polar method consists of two different calculations: the polar method for the X, Y coordinates and trigonometric height determination for the Z coordinate. If a geodetic point field does not exist at the site, it must be created to provide the reference coordinate system for the target points. When a local coordinate system is required (to prevent the mean errors of the geodetic point field from propagating into the coordinates of the measured points), arbitrary spatial coordinates for the station position and an arbitrary orientation are chosen. The polar method and trigonometric height determination scheme is illustrated in Figure 3 and mathematically defined by Formulas (1) and (2). Due to the desired distribution of the detailed points, measurements were required from two separate positions, mutually linked via orientation points.
The corresponding relations for the calculation of spatial coordinates using the spatial polar method are as follows:

$$X_B = X_A + s \cdot \cos(\omega), \qquad Y_B = Y_A + s \cdot \sin(\omega) \tag{1}$$

$$Z_B = H_B = H_A + h_i + s \cdot \tan(\beta) - h_j = H_A + h_i + s \cdot \cot(z) - h_j \tag{2}$$

where $s$ is the horizontal distance (for the slope distance $s'$, the height term becomes $s' \cdot \sin(\beta) = s' \cdot \cos(z)$), $\omega$ is the orientation-corrected horizontal direction, $\beta$ is the vertical angle, $z$ is the zenith angle, $h_i$ is the instrument height, and $h_j$ is the target height.
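For illustration, a minimal numerical sketch of Formulas (1) and (2), assuming a horizontal distance and a horizontal direction already reduced by the orientation (the function and variable names are ours, not from the paper):

```python
import math

def spatial_polar_point(xa, ya, ha, hi, s, omega, beta, hj):
    """Compute target coordinates from station A by the spatial polar method.

    xa, ya, ha -- station coordinates and station height
    hi, hj     -- instrument and target (prism) heights
    s          -- measured horizontal distance to the target
    omega      -- orientation-corrected horizontal direction (radians)
    beta       -- vertical angle above the horizon (radians)
    """
    xb = xa + s * math.cos(omega)           # Equation (1)
    yb = ya + s * math.sin(omega)
    zb = ha + hi + s * math.tan(beta) - hj  # Equation (2)
    return xb, yb, zb

# Example: a point 12.500 m away, direction 30 gon, elevation angle 5 gon
gon = math.pi / 200.0  # the paper reports angles in gons
print(spatial_polar_point(0.0, 0.0, 100.0, 1.55, 12.500, 30 * gon, 5 * gon, 1.30))
```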
The choice of a local coordinate system was purposeful, mainly to prevent possible systematic or gross errors of an existing point field from accumulating and influencing the results. The local coordinate system was set using the Trimble VX total station, thus creating the reference system to which the outputs from the tested applications could be compared. In line with this approach, the GNSS technique was not applied for the coordinate system establishment. The methodology scheme is presented in Figure 4.
In addition to the LiDAR measurements, the monument was also scanned using an iPad Pro 3 (2021) and an iPhone 14 (to evaluate photogrammetry for creating a 3D model) [42,43]. We chose the most popular scanning applications, including Scaniverse, 3D Scanner App, and Polycam. Notably, Polycam was used for both LiDAR and photogrammetric processing. This resulted in four models: the Scaniverse model, the 3D Scanner App model, the Polycam LiDAR model, and the Polycam photogrammetry model [45,46,47,48,49]. The scanning applications enabled us to export the results of the scanning process in several spatial representations, including a mesh model, a point cloud, and a TIN model (Figure 5).
All of the representations were used in the following analysis. Based on the comparison of the mentioned 3D representations of the object, the following statements supported the choice of the mesh model to be analyzed and evaluated:
- The significantly smaller size of the mesh model formats leads to easier processing and work in general;
- The markers are more easily recognized on the mesh model than within the point cloud;
- Both the point cloud and the mesh model correctly express the shape of the monument, whether the object is concave or convex.
All the models were exported in identical .obj formats for further analysis. Mesh models, the most common type of 3D model, are frequently saved in the .obj format. The scheme of the 3D coordinate extraction data acquisition process is illustrated in Figure 6.
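For readers who want to inspect the exported files outside Agisoft Metashape, a small sketch using the open-source trimesh library (our illustrative choice; the study itself did not use it, and the file name is hypothetical):

```python
import trimesh

# Hypothetical file name; the study exported one .obj file per application.
# force="mesh" concatenates multi-part .obj files into a single mesh.
mesh = trimesh.load("scaniverse_model.obj", force="mesh")

# Basic sanity checks on the exported mesh model
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
print(f"bounding box extents: {mesh.extents}")  # units depend on the app's arbitrary system
print(f"watertight (3D-printable without repair): {mesh.is_watertight}")
```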
The *.obj models were then imported into the Agisoft Metashape program for further processing (Figure 7). Within each model, the spatial coordinates S1–S4 of the detailed observation points were identified and extracted. However, a challenge arose: the origin and orientation of the coordinate system embedded within each model were not clearly defined. The axes’ orientation and coordinate systems were part of the automatic results of the final mesh model exported from the various applications. While the Cartesian system could be determined, the specific axis directions and starting point remained ambiguous.
The Helmert transformation is based on the unique identification of a sufficient number of identical points and forms a sequence of relations commonly used in geodesy for the transfer between two Cartesian systems. If redundant identical points are available, the least squares method may be applied to the transformation. The transformation may be linear or non-linear. For this case study, a linear one was used, so the result included the components T_X, T_Y, and T_Z of the translation vector T, the components ω_X, ω_Y, and ω_Z of the rotation matrix R, and a scale factor q. The rotation matrix has the following shape:
$$R = \begin{pmatrix} 1 & \omega_Z & -\omega_Y \\ -\omega_Z & 1 & \omega_X \\ \omega_Y & -\omega_X & 1 \end{pmatrix}, \qquad q \cdot R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \tag{3}$$
After introducing the substitutions, α is the angle of rotation about the X-axis, β is the angle of rotation about the Y-axis, and γ is the angle of rotation about the Z-axis. Then, the matrix q·R in Equation (3) can be written in the following form:
$$q \cdot R = \begin{pmatrix} q\cos\beta\cos\gamma & q\cos\beta\sin\gamma & -q\sin\beta \\ q(\sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma) & q(\sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma) & q\sin\alpha\cos\beta \\ q(\cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma) & q(\cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma) & q\cos\alpha\cos\beta \end{pmatrix} \tag{4}$$
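As a sanity check of the convention, a short sketch (ours, assuming the composition R = Rx(α)·Ry(β)·Rz(γ) as written in Equation (4)) that composes q·R from chosen angles and recovers the parameters with the extraction formulas given below in Equation (11):

```python
import numpy as np

def build_qR(alpha, beta, gamma, q):
    """Compose q*R = q * Rx(alpha) @ Ry(beta) @ Rz(gamma), matching Equation (4)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return q * np.array([
        [cb * cg,                cb * sg,                -sb],
        [sa * sb * cg - ca * sg, sa * sb * sg + ca * cg, sa * cb],
        [ca * sb * cg + sa * sg, ca * sb * sg - sa * cg, ca * cb],
    ])

qR = build_qR(0.02, -0.01, 0.03, 1.001)      # illustrative small rotations
alpha = np.arctan2(qR[1, 2], qR[2, 2])       # Equation (11)
gamma = np.arctan2(qR[0, 1], qR[0, 0])
beta = np.arctan(-qR[0, 2] * np.sin(alpha) / qR[1, 2])  # requires alpha != 0
q = -qR[0, 2] / np.sin(beta)
print(alpha, beta, gamma, q)                 # ~0.02, ~-0.01, ~0.03, ~1.001
```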
The fundamental transformation equation is as follows:
$$X' = T + q \cdot R \cdot X \tag{5}$$
where the vector X' consists of the coordinates X', Y', and Z' after the transformation, and the vector X consists of the coordinates X, Y, and Z before the transformation. The computation of the spatial transformation key is based on redundant identical point coordinates and the least squares method (LSM).
The input into the LSM is as follows:
$$\begin{pmatrix} X' \\ Y' \\ Z' \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & X & Y & Z & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & X & Y & Z & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & X & Y & Z \end{pmatrix} \begin{pmatrix} T_X & T_Y & T_Z & r_{11} & r_{12} & r_{13} & r_{21} & r_{22} & r_{23} & r_{31} & r_{32} & r_{33} \end{pmatrix}^{T} \tag{6}$$
The correction equations, which form the basis for the partial derivatives, are as follows:
$$\begin{aligned} v_1 &= T_X + r_{11}X_1 + r_{12}Y_1 + r_{13}Z_1 - X'_1 \\ v_2 &= T_Y + r_{21}X_1 + r_{22}Y_1 + r_{23}Z_1 - Y'_1 \\ v_3 &= T_Z + r_{31}X_1 + r_{32}Y_1 + r_{33}Z_1 - Z'_1 \\ &\;\;\vdots \\ v_{3n-2} &= T_X + r_{11}X_n + r_{12}Y_n + r_{13}Z_n - X'_n \\ v_{3n-1} &= T_Y + r_{21}X_n + r_{22}Y_n + r_{23}Z_n - Y'_n \\ v_{3n} &= T_Z + r_{31}X_n + r_{32}Y_n + r_{33}Z_n - Z'_n \end{aligned} \tag{7}$$
The partial derivatives with respect to the unknown parameters form the design matrix A:
$$A_{(3n,12)} = \begin{pmatrix} 1 & 0 & 0 & X_1 & Y_1 & Z_1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & X_1 & Y_1 & Z_1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & X_1 & Y_1 & Z_1 \\ 1 & 0 & 0 & X_2 & Y_2 & Z_2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & X_2 & Y_2 & Z_2 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & X_2 & Y_2 & Z_2 \\ \vdots & & & & & & & & & & & \vdots \\ 1 & 0 & 0 & X_n & Y_n & Z_n & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & X_n & Y_n & Z_n & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & X_n & Y_n & Z_n \end{pmatrix} \tag{8}$$
The vector of absolute terms consists of the coordinates of the identical points in the reference system:
$$l_{(3n,1)} = \begin{pmatrix} X'_1 & Y'_1 & Z'_1 & X'_2 & Y'_2 & Z'_2 & \cdots & X'_n & Y'_n & Z'_n \end{pmatrix}^{T} \tag{9}$$
The elements of the transformation key are computed by the method of least squares using the following relation:
$$dx_{(12,1)} = (A^{T}PA)^{-1}A^{T}P\,l = \begin{pmatrix} T_X & T_Y & T_Z & r_{11} & r_{12} & r_{13} & r_{21} & r_{22} & r_{23} & r_{31} & r_{32} & r_{33} \end{pmatrix}^{T} \tag{10}$$
The rotation angles and the scale factor are then computed from the elements of the matrix in Formula (4) using the following equations:
$$\omega_X = \alpha = \arctan\left(\frac{r_{23}}{r_{33}}\right), \quad \omega_Y = \beta = \arctan\left(\frac{-r_{13}\sin\alpha}{r_{23}}\right), \quad \omega_Z = \gamma = \arctan\left(\frac{r_{12}}{r_{11}}\right), \quad q = \frac{-r_{13}}{\sin\beta}, \quad m = \frac{1}{q} \tag{11}$$
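A compact numerical sketch of the whole adjustment of Equations (6)-(11), assuming equal weights unless a weight matrix P is supplied (NumPy is our illustrative choice; the paper does not specify its computation environment):

```python
import numpy as np

def helmert_lsm(src, dst, P=None):
    """Linear 12-parameter adjustment per Equations (6)-(10), then recovery of
    the rotation angles and scale factor as in Equation (11).

    src -- (n, 3) identical-point coordinates in the model's arbitrary system
    dst -- (n, 3) coordinates of the same points in the geodetic reference system
    P   -- optional (3n, 3n) weight matrix; identity (equal weights) if omitted
    """
    n = src.shape[0]
    A = np.zeros((3 * n, 12))
    l = dst.reshape(-1)                             # vector of absolute terms, Eq. (9)
    for i, (x, y, z) in enumerate(src):             # design matrix rows, Eq. (8)
        A[3 * i, 0], A[3 * i, 3:6] = 1.0, (x, y, z)
        A[3 * i + 1, 1], A[3 * i + 1, 6:9] = 1.0, (x, y, z)
        A[3 * i + 2, 2], A[3 * i + 2, 9:12] = 1.0, (x, y, z)
    if P is None:
        P = np.eye(3 * n)
    dx = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)  # Eq. (10)
    T, qR = dx[:3], dx[3:].reshape(3, 3)

    alpha = np.arctan2(qR[1, 2], qR[2, 2])          # Eq. (11); beta formula below
    gamma = np.arctan2(qR[0, 1], qR[0, 0])          #   assumes alpha != 0
    beta = np.arctan2(-qR[0, 2] * np.sin(alpha), qR[1, 2])
    q = np.linalg.norm(qR[0])                       # equals -r13/sin(beta); norm form stays stable for small beta

    v = A @ dx - l                                  # corrections, Eq. (7)
    sigma = np.sqrt(v @ v / (3 * n - 12)) if 3 * n > 12 else float("nan")  # mean model error
    return T, q, (alpha, beta, gamma), sigma
```

With exactly four well-distributed identical points, the 12 equations determine the 12 unknowns without redundancy, which corresponds to the approximate-value estimation described below; the adjustment over all 70 markers then refines those values.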
The quantification of the transformation key parameters is not the last of the calculations resulting from the LSM. The subsequent calculations serve to assess accuracy and quality and provide objective evaluation criteria when comparing the results of the case study: the calculation of the corrections, the mean error of each model's identical points, the maximum coordinate shift between identical points after transformation, the maximum size of the transformation vector, and the multiplicity of the transformation vectors by size within predefined intervals.
Owing to the uncertainty of the various 3D models' axis directions, the LSM could not be applied to the calculations directly; a direct adjustment led to an incorrect transformation (rotation, translation, scale factor, and origin position). The problem was solved by estimating approximate values of the transformation parameters from four identical points defined in both the transformed and the reference coordinate systems. These approximate values were subsequently adjusted using the LSM.

3. Results

The selected approach compares the 3D models generated by the various methods through a two-pronged evaluation: visual assessment and objective analysis [50,51,52]. Based on the Helmert transformation, cloud-to-cloud analysis represents another way to define and assess the results of the task and to compare the pre-selected applications for the creation of 3D models. The visual assessment provides a user-centric perspective, considering aspects like file size, export options, and application cost. This subjective approach acknowledges the importance of user experience. The objective analysis, on the other hand, focuses on the core quality and precision of the mesh models generated by the LiDAR measurement system (LMS) and the subsequent calculations.

3.1. Cloud-to-Cloud Analysis

As mentioned in the Materials and Methods section, the seven-parameter Helmert transformation was applied to solve the problem of the different local coordinate systems defined on-the-fly by the individual applications. Thus, each 3D model's coordinate system definition (origin, translation, rotation, and scale factor) was referred to the pre-defined reference coordinate system established by the measurement with the Trimble VX geodetic total station. In this reference system, the spatial positions of the observed points were defined. Following the established procedure using clearly identifiable identical points (Figure 2, left), the seven-parameter Helmert transformation converted the identifiable identical points with coordinates in the local coordinate systems into the reference one. Cloud-to-cloud (C2C) analysis could consequently be performed. In the pre-set methodology, the input data for the C2C were the individual post-transformation identical points. Uniformly distributed identical points across the object of interest represent discrete comparable components that can be used to mediate the C2C analysis. Thus, in this task, the individual clouds to be compared were represented by the identifiable identical points in the unified coordinate system.
A customized C2C analysis was performed separately in the individual planes XY, XZ, and YZ, and its spatial analysis results are shown in Table 1. The observed point coordinates, defined on the individual 3D representations, differ in relation to the reference coordinates and also mutually, in spite of the performed transformation. These deviations enable the C2C analysis to be visualized. Separate comparisons with regard to the individual planes XY, XZ, and YZ also enable the analysis of possible excesses in the individual planes and keep the visualization transparent. The deviations are visualized as vectors with their origin at the reference identical point position and their direction leading towards the position of the identical point in the transformed system. The analysis and visualization in the individual planes, defined by the selected axis pairs, include all the vectors representing the clouds via deviations at discrete identical points. The visualization of the deviations is illustrated in Figure 8a-c. Different colors were chosen for the vector visualization of the analyzed applications: Scaniverse (red), 3D Scanner App (blue), Polycam LiDAR approach (green), and Polycam photogrammetry approach (magenta).
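A sketch of how such a per-plane deviation plot can be produced, assuming the post-transformation identical points are available as NumPy arrays (matplotlib is our illustrative choice; axis indices and colors mirror Figure 8):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_plane_deviations(reference, transformed, axes=(0, 1), label="XY", color="red"):
    """Draw the C2C deviation vectors in one coordinate plane (cf. Figure 8).

    reference, transformed -- (n, 3) identical-point coordinates in the unified system
    """
    i, j = axes
    d = transformed - reference                  # deviation vector at each identical point
    plt.quiver(reference[:, i], reference[:, j], d[:, i], d[:, j],
               angles="xy", scale_units="xy", scale=0.05,  # small scale exaggerates cm-level vectors
               color=color)
    plt.xlabel(f"{label[0]} [m]")
    plt.ylabel(f"{label[1]} [m]")
    plt.title(f"C2C deviations, {label} plane")

# One panel per plane, as in Figure 8a-c (hypothetical arrays ref, scaniverse_pts):
# plot_plane_deviations(ref, scaniverse_pts, (0, 1), "XY", "red"); plt.show()
```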
Based on the C2C analysis, along with the objective comparative approach described in Section 3.3, the individual 3D representations of the object provided by the different specialized applications were compared, visualized, and quantified. A significant difference can be observed in the 3D Scanner App results, whose deviations from the reference model reach the largest values. On the other hand, the Polycam application provides the best results and the smallest deviations.

3.2. Subjective Comparative Approach

Homogenous conditions (consistent weather, temperature, time of day, lighting conditions, and observation time) allowed for a straightforward visual comparison using any software that supports opening and 3D object manipulation.
At first glance, the generated mesh models exhibit a generally high level of detail, closely resembling the original object (Figure 9). The textured mesh models provide a realistic visual representation of the object. However, some limitations are evident in capturing the finer details of the object and its surrounding elements.
Particular attention was paid to the detail of the edges, the complexity of the model including all the additional components/objects (wreath, pot, etc.), texture sharpness, detail recognition, and the occurrence of distortion.
Figure 10 reveals the discrepancies between the final mesh models in more detail. Notably, the shape of the displayed part of the object is not the same. This difference can be seen on the upper edge of the memorial plaque. The mesh model provided by the 3D Scanner App shows this edge as simplified and smoothed. Another shortcoming of this output can be observed in the written text, which has distorted lines.
The selection of this monument as a case study was partly driven by the presence of various components beyond the main structure. Specifically, we aimed to evaluate how the applications captured the chains connecting the side pillars. Figure 9 also highlights potential limitations in handling such intricate details.
The ability of the applications to capture and model irregular and discrete objects and parts is displayed in Figure 11. Two of the four applications managed to capture and model the chains between the pillars; however, the final representation cannot be considered accurate. The position of the chains is shifted, and in the Scaniverse model, the chains became part of the object below. As for modelling the chains using the Polycam photogrammetry approach, only a part of the chains was captured. There is no trace of the chains in the other two models.
Another important characteristic to be compared is edge sharpness and the ability to preserve the (almost) perpendicular elements and parts. The ability to accurately capture sharp edges and perpendicular features significantly contributes to the model’s realism. Since these qualities are readily apparent upon visual inspection, they serve as a good indicator of the overall quality and fidelity of the model to the real object. As illustrated in Figure 12, the Scaniverse mesh model excels at preserving sharp edges and rectangular shapes. In contrast, the 3D Scanner App mesh model exhibits the most noticeable distortions in these areas. Interestingly, the Polycam application offers two distinct approaches, with the LiDAR-based method producing superior results in terms of sharpness.

3.3. Objective Comparative Approach

Objective parameters can be considered as those that have a mathematical essence and can be expressed numerically. The mesh models can be compared in different ways. The course of edges, spatial changes of the object, deviations from predefined axes, and the dimensions of selected elements (lines, surfaces, points) placed on the object can be compared.
It was predefined that the accuracy of the mesh models would be mediated by the 70 randomly placed detailed observation points. These were subsequently input into the seven-parameter Helmert transformation with the application of the LSM. This resulted in transformed coordinates, which were then compared with their reference positions. The pairs of points formed transformation vectors. These were analyzed and are considered the basis for an objective evaluation of the individual models.
Table 1 highlights that the translation shift (three parameters) and rotation aspects are not directly indicative of 3D model precision. A crucial parameter is the scale factor, where a value of 1 represents no distortion. The Scaniverse, 3D Scanner App, and Polycam (LiDAR) models all achieved values close to 1, suggesting that their sizes closely matched the real object. Notably, the photogrammetric model created using Polycam exhibited a significant scale distortion (half the actual size) despite offering a distance measurement feature.
The mean model error, calculated from the LSM, numerically defines the model’s average accuracy based on the corrections applied to the observation points. Interestingly, the processed models displayed similar precisions, with the Polycam application (regardless of the method used) achieving the highest mean accuracy (lowest mean error).
Other comparative parameters resulting from the LSM are based on the sizes of the transformation vectors and their multiplicity analysis. The transformation vector sizes reached values from nearly 0.000 m up to 0.110 m. The multiplicity of the vectors by size varied among the tested 3D mesh models. The largest transformation vector was determined for the mesh model created using the 3D Scanner App; the largest number of transformation vectors exceeding 0.070 m (more than 15% of the vectors) was also found for this model. The maximum values of the transformation vectors for Polycam were 0.034/0.048 m, and the frequency of vectors below 0.010 m was 45.7%/45.7%, and below 0.020 m up to 87.1%/95.7%, for the LiDAR/photogrammetric data acquisition, respectively.
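The multiplicity analysis itself reduces to binning the spatial vector sizes into centimeter intervals; a minimal sketch with assumed 1 cm intervals (the values below are illustrative, not the study's data):

```python
import numpy as np

# dev3d: spatial sizes of the transformation vectors for one application [m]
dev3d = np.array([0.004, 0.008, 0.012, 0.017, 0.021, 0.034, 0.048])

edges = np.arange(0.00, 0.12, 0.01)          # assumed 1 cm bins covering 0.00-0.11 m
counts, _ = np.histogram(dev3d, bins=edges)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.2f}-{hi:.2f} m: {c:2d} vectors ({100.0 * c / dev3d.size:.1f}%)")
```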

4. Discussion

The aim of this paper was to discuss the possible usage of conventional devices with built-in LiDAR systems for practical applications in civil engineering in the form of final mesh models [53,54,55]. The case study presented in this article investigated the possibilities, limitations, and accuracy of capturing an object using various 3D modeling methods. The paper employed a two-pronged approach: visual assessment and objective evaluation using detailed observation markers. LiDAR measurements and photogrammetry using commercially available mobile applications (Scaniverse, 3D Scanner App, and Polycam) were chosen for data acquisition. Upon initial inspection, the generated mesh models seemed to have good quality and a high level of detail, closely resembling the texture of the original monument. The textured mesh models provided a realistic visual representation of the monument, with the possibility of being used in civil engineering. However, there were evident limitations in capturing the finer details of the monument and the surrounding elements, such as the chains connecting the side pillars. This problem can be addressed by more careful data collection during the measurement. The mesh model resulting from the 3D Scanner App displayed the most significant simplification and smoothing of the edges, particularly noticeable on the memorial plaque's upper edge.
The objective evaluation focused on the models' accuracy in representing the real object's dimensions. The scale factor, a crucial parameter where a value of 1 indicates no distortion, was analyzed. The LiDAR mesh models achieved scale factors close to 1, suggesting that their sizes closely matched the actual dimensions of the monument. This highlights a potential limitation of photogrammetric techniques, especially when mobile devices are used. All the processed models were tested based on the mean mesh model error, a metric calculated using the LSM and the detailed markers randomly placed on the case study monument. While further investigation is needed, Polycam's superior performance could be attributed to factors like its robust processing algorithms or potential hardware advantages in the LiDAR implementation for the mesh model.
This paper’s findings concerning the higher accuracy of LiDAR mesh models in capturing object dimensions align with previous studies [1,2,3,4,5,14] that reported similar results when comparing LiDAR and photogrammetric techniques. This study is limited to using a single, regular monument and a relatively small set of mobile applications. Future research could benefit from testing a broader range of objects with varying shapes and complexities, exploring advanced processing techniques to potentially improve the accuracy of the photogrammetric models and comparing the accuracy of different mobile applications under various environmental conditions. Another possibility is to not use a mesh model as a final version of the 3D object interpretation, but instead to try to use formats such as point cloud or TIN models. This case study demonstrates the potential of both LiDAR and photogrammetry methods for the 3D modeling of monuments. LiDAR offers superior accuracy in capturing object dimensions, while photogrammetry using mobile devices provides a user-friendly and potentially cost-effective alternative. However, users should be aware of potential limitations in the photogrammetric model scale, particularly when dealing with complex objects. Further research is needed to explore advanced processing techniques and the impact of environmental conditions on accuracy. Overall, 3D modeling technologies offer valuable tools for documenting and preserving cultural heritage objects.
The results of the research lead to the conclusion that conventional devices with implemented LiDAR technology may be used for capturing objects for subsequent applications. The capturing process is understandable for basic users, data collection is quite easy, and the post-processing is performed automatically by the applications in the background. The data acquisition step is the most time-consuming part of the whole process. However, although the applications are well designed for the capturing process, disadvantages may arise with irregular or non-standard object shapes, where the applications may suffer complications resulting in incorrect data evaluation. With regard to the prices of the devices and applications used, they can be considered available to the broad community. The analysis of the chosen 3D representation of the measured object in the form of the mesh model offers contributions for any user interested in 3D model creation along with the expected precision of the selected application. Moreover, these users may use the created mesh model directly for 3D printing, if required, without processing a point cloud into a printable format in other software.

5. Conclusions

The aim of a future case study will be to modify the methodology of this case study: we plan to compare the point clouds from the conventional applications to a reference point cloud resulting from LiDAR technology with GNSS and IMU. The post-processing steps must include point cloud denoising, outlier removal, editing, filtering, and classification. In that case, cloud-to-cloud transformation and cloud pre-processing will have to be addressed. These data will then be used as the input data for the creation of mesh models outside the automated environment of the analyzed applications. Consequently, the secondary mesh models may be analyzed and compared to the automatically created ones. For a complete and credible analysis in general, it is necessary to analyze objects with different shapes (convex/concave analysis) in order to assess the correctness of the data acquisition, the processing, and the suitability of the used application from multiple points of view.

Author Contributions

Conceptualization, D.S. and J.C.; methodology, R.S. and J.I.; software, J.C.; validation, D.S., J.C. and R.S.; formal analysis, J.I.; investigation, D.S.; resources, R.S.; data curation, J.C. and J.I.; writing—original draft preparation, D.S. and J.C.; writing—review and editing, J.I.; visualization, D.S. and R.S.; supervision, J.I.; project administration, J.I.; funding acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Research Project No. 1/0630/21 of the Slovak Scientific Grant Agency (VEGA).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dong, P.; Chen, Q. LiDAR Remote Sensing and Applications, 1st ed.; Taylor & Francis Group: Boca Raton, FL, USA, 2017; pp. 19–26. [Google Scholar] [CrossRef]
  2. Jaboyedoff, M.; Oppikofer, T.; Abellán, A.; Derron, M.H.; Loye, A.; Metzger, R.; Pedrazzini, A. Use of LIDAR in landslide investigations: A review. Nat. Hazards 2012, 61, 5–28. [Google Scholar] [CrossRef]
  3. Dubayah, R.O.; Drake, J.B. Lidar Remote Sensing for Forestry. J. For. 2000, 98, 44–46. [Google Scholar] [CrossRef]
  4. Lefsky, M.; Cohen, W.; Parker, G.; Harding, D. Lidar Remote Sensing for Ecosystem Studies: Lidar, an emerging remote sensing technology that directly measures the three-dimensional distribution of plant canopies, can accurately estimate vegetation structural attributes and should be of particular interest to forest, landscape, and global ecologists. BioScience 2002, 52, 19–30. [Google Scholar]
  5. Mêda, P.; Calvetti, D.; Sousa, H. Exploring the Potential of iPad-LiDAR Technology for Building Renovation Diagnosis: A Case Study. Buildings 2023, 13, 456. [Google Scholar] [CrossRef]
  6. Godfroy, J.; Lejot, J.; Demarchi, L.; Bizzi, S.; Michel, K.; Piégay, H. Combining Hyperspectral, LiDAR, and Forestry Data to Characterize Riparian Forests along Age and Hydrological Gradients. Remote Sens. 2023, 15, 17. [Google Scholar] [CrossRef]
  7. Rebelo, C.; Rodrigues, A.M.; Tenedório, J.A.; Goncalves, J.A.; Marnoto, J. Building 3D city models: Testing and comparing Laser scanning and low-cost UAV data using FOSS technologies. In Proceedings of the International Conference on Computational Science and Its Applications, Banff, AB, Canada, 22–25 June 2015; Springer: Cham, Switzerland; pp. 367–379. [Google Scholar] [CrossRef]
  8. Jung, M.; Jung, J. A Scalable Method to Improve Large-Scale Lidar Topographic Differencing Results. Remote Sens. 2023, 15, 4289. [Google Scholar] [CrossRef]
  9. Bačová, D.; Ižvoltová, J.; Šedivý, Š.; Chromčák, J. Different Approach for the Structure Inclination Determination. Buildings 2023, 13, 637. [Google Scholar] [CrossRef]
  10. Kovarik, K.; Mužík, J.; Gago, F.; Sitányiová, D. Modified local singular boundary method for solution of two-dimensional diffusion equation. Eng. Anal. Bound. Elem. 2022, 143, 525–534. [Google Scholar] [CrossRef]
  11. Koreň, M.; Mokroš, M.; Bucha, T. Accuracy of Tree Diameter Estimation from Terrestrial Laser Scanning by Circle-Fitting Methods. Int. J. Appl. Earth Obs. Geoinf. 2017, 63, 122–128. [Google Scholar] [CrossRef]
  12. Pérez, J.J.; Senderos, M.; Casado, A.; Leon, I. Field Work’s Optimization for the Digital Capture of Large University Campuses, Combining Various Techniques of Massive Point Capture. Buildings 2022, 12, 380. [Google Scholar] [CrossRef]
  13. Kovarik, K.; Mužík, J.; Gago, F.; Sitányiová, D. The local boundary knots method for solution of Stokes and the biharmonic equation. Eng. Anal. Bound. Elem. 2023, 155, 1149–1159. [Google Scholar] [CrossRef]
  14. Nowak, R.; Orłowicz, R.; Rutkowski, R. Use of TLS (LiDAR) for Building Diagnostics with the Example of a Historic Building in Karlino. Buildings 2020, 10, 24. [Google Scholar] [CrossRef]
  15. Wang, Y.; Chen, Q.; Zhu, Q.; Liu, L.; Li, C.; Zheng, D. A Survey of Mobile Laser Scanning Applications and Key Techniques over Urban Areas. Remote Sens. 2019, 11, 1540. [Google Scholar] [CrossRef]
  16. Yadav, M.; Singh, A.K.; Lohani, B. Extraction of road surface from mobile LiDAR data of complex road environment. Int. J. Remote Sens. 2017, 38, 4655–4682. [Google Scholar] [CrossRef]
  17. Jaakkola, A.; Hyyppä, J.; Hyyppä, H.; Kukko, A. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping. Sensors 2008, 8, 5238–5249. [Google Scholar] [CrossRef] [PubMed]
  18. Hartfield, K.A.; Landau, K.I.; van Leeuwen, W.J.D. Fusion of high resolution aerial multispectral and lidar data: Land cover in the context of urban mosquito habitat. Remote Sens. 2011, 3, 2364–2383. [Google Scholar] [CrossRef]
  19. Yan, W.Y.; Shaker, A.; El-Ashmawy, N. Urban land cover classification using airborne lidar data: A review. Remote Sens. Environ. 2015, 158, 295–310. [Google Scholar] [CrossRef]
  20. Zou, X.; Zhao, G.; Li, J.; Yang, Y.; Fang, Y. Object based image analysis combining high spatial resolution imagery and laser point clouds for urban land cover. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 733–739. [Google Scholar] [CrossRef]
  21. Matikainen, L.; Karila, K.; Hyyppä, J.; Litkey, P.; Puttonen, E.; Ahokas, E. Object-based analysis of multispectral airborne laser scanner data for land cover classification and map updating. ISPRS J. Photogramm. Remote Sens. 2017, 128, 298–313. [Google Scholar] [CrossRef]
  22. Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M.; Beetz, M. Towards 3d point cloud based object maps for household environments. Robot. Auton. Syst. 2008, 56, 927–941. [Google Scholar] [CrossRef]
  23. Boyko, A.; Funkhouser, T. Extracting roads from dense point clouds in large scale urban environment. Isprs J. Photogramm. Remote Sens. 2011, 66, S2–S12. [Google Scholar] [CrossRef]
  24. Jeong, J.; Yoon, T.S.; Park, J.B. Multimodal sensor-based semantic 3d mapping for a large-scale environment. Expert Syst. Appl. 2018, 105, 1–10. [Google Scholar] [CrossRef]
  25. Soilan, M.; Riveiro, B.; Sanchez-Rodriguez, A.; Arias, P. Safety assessment on pedestrian crossing environments using mls data. Accid. Anal. Prev. 2018, 111, 328–337. [Google Scholar] [CrossRef] [PubMed]
  26. Guo, L.; Chehata, N.; Mallet, C.; Boukir, S. Relevance of airborne lidar and multispectral image data for urban scene classification using random forests. ISPRS J. Photogramm. Remote Sens. 2011, 66, 56–66. [Google Scholar] [CrossRef]
  27. Chen, G.; Weng, Q.; Hay, G.J.; He, Y. Geographic object-based image analysis (geobia): Emerging trends and future opportunities. Gisci. Remote Sens. 2018, 55, 159–182. [Google Scholar] [CrossRef]
  28. Guenther, M.; Heenkenda, M.K.; Morris, D.; Leblon, B. Tree Diameter at Breast Height (DBH) Estimation Using an iPad Pro LiDAR Scanner: A Case Study in Boreal Forests, Ontario, Canada. Forests 2024, 15, 214. [Google Scholar] [CrossRef]
  29. Gollob, C.; Ritter, T.; Kraßnitzer, R.; Tockner, A.; Nothdurft, A. Measurement of Forest Inventory Parameters with Apple iPad Pro and Integrated LiDAR Technology. Remote Sens. 2021, 13, 3129. [Google Scholar] [CrossRef]
  30. Bobrowski, R.; Winczek, M.; Silva, L.P.; Cuchi, T.; Szostak, M.; Wężyk, P. Promising Uses of the iPad Pro Point Clouds: The Case of the Trunk Flare Diameter Estimation in the Urban Forest. Remote Sens. 2022, 14, 4661. [Google Scholar] [CrossRef]
  31. Yan, X.; Chai, G.; Han, X.; Lei, L.; Wang, G.; Jia, X.; Zhang, X. SA-Pmnet: Utilizing Close-Range Photogrammetry Combined with Image Enhancement and Self-Attention Mechanisms for 3D Reconstruction of Forests. Remote Sens. 2024, 16, 416. [Google Scholar] [CrossRef]
  32. Çakir, G.Y.; Post, C.J.; Mikhailova, E.A.; Schlautman, M.A. 3D LiDAR Scanning of Urban Forest Structure Using a Consumer Tablet. Urban Sci. 2021, 5, 88. [Google Scholar] [CrossRef]
  33. Brach, M.; Tracz, W.; Krok, G.; Gąsior, J. Feasibility of Low-Cost LiDAR Scanner Implementation in Forest Sampling Techniques. Forests 2023, 14, 706. [Google Scholar] [CrossRef]
  34. Vogt, M.; Rips, A.; Emmelmann, C. Comparison of iPad Pro®’s LiDAR and TrueDepth Capabilities with an Industrial 3D Scanning Solution. Technologies 2021, 9, 25. [Google Scholar] [CrossRef]
  35. Costantino, D.; Vozza, G.; Pepe, M.; Alfio, V.S. Smartphone LiDAR Technologies for Surveying and Reality Modelling in Urban Scenarios: Evaluation Methods, Performance and Challenges. Appl. Syst. Innov. 2022, 5, 63. [Google Scholar] [CrossRef]
  36. Vacca, G. 3D Survey with Apple LiDAR Sensor—Test and Assessment for Architectural and Cultural Heritage. Heritage 2023, 6, 1476–1501. [Google Scholar] [CrossRef]
  37. Łabędź, P.; Skabek, K.; Ozimek, P.; Rola, D.; Ozimek, A.; Ostrowska, K. Accuracy Verification of Surface Models of Architectural Objects from the iPad LiDAR in the Context of Photogrammetry Methods. Sensors 2022, 22, 8504. [Google Scholar] [CrossRef]
  38. Teppati Losè, L.; Spreafico, A.; Chiabrando, F.; Giulio Tonolo, F. Apple LiDAR Sensor for 3D Surveying: Tests and Results in the Cultural Heritage Domain. Remote Sens. 2022, 14, 4157. [Google Scholar] [CrossRef]
  39. Callegari, E.; Agnolucci, J.; Angiola, F.; Fais, P.; Giorgetti, A.; Giraudo, C.; Viel, G.; Cecchetto, G. The Precision, Inter-Rater Reliability, and Accuracy of a Handheld Scanner Equipped with a Light Detection and Ranging Sensor in Measuring Parts of the Body—A Preliminary Validation Study. Sensors 2024, 24, 500. [Google Scholar] [CrossRef] [PubMed]
  40. Oberhofer, K.; Knopfli, C.; Achermann, B.; Lorenzetti, S.R. Feasibility of Using Laser Imaging Detection and Ranging Technology for Contactless 3D Body Scanning and Anthropometric Assessment of Athletes. Sports 2024, 12, 92. [Google Scholar] [CrossRef]
  41. Sun, M.; Zhuo, S.; Chiang, P.Y. Multi-Scale Histogram-Based Probabilistic Deep Neural Network for Super-Resolution 3D LiDAR Imaging. Sensors 2023, 23, 420. [Google Scholar] [CrossRef]
  42. Lokugam Hewage, C.N.; Laefer, D.F.; Vo, A.-V.; Le-Khac, N.-A.; Bertolotto, M. Scalability and Performance of LiDAR Point Cloud Data Management Systems: A State-of-the-Art Review. Remote Sens. 2022, 14, 5277. [Google Scholar] [CrossRef]
  43. Che, D.; Su, M.; Ma, B.; Chen, F.; Liu, Y.; Wang, D.; Sun, Y. A Three-Dimensional Triangle Mesh Integration Method for Oblique Photography Model Data. Buildings 2023, 13, 2266. [Google Scholar] [CrossRef]
  44. Brenner, C. Building reconstruction from images and laser scanning. Int. J. Appl. Earth Obs. Geoinf. 2005, 6, 187–198. [Google Scholar] [CrossRef]
  45. Salagean-Mohora, I.; Anghel, A.A.; Frigura-Iliasa, F.M. Photogrammetry as a Digital Tool for Joining Heritage Documentation in Architectural Education and Professional Practice. Buildings 2023, 13, 319. [Google Scholar] [CrossRef]
  46. Adamopoulos, E.; Rinaudo, F. Combining Multiband Imaging, Photogrammetric Techniques, and FOSS GIS for Affordable Degradation Mapping of Stone Monuments. Buildings 2021, 11, 304. [Google Scholar] [CrossRef]
  47. Scaniverse. Available online: https://scaniverse.com (accessed on 16 October 2023).
  48. 3D Scanner App. Available online: https://3dscannerapp.com (accessed on 16 October 2023).
  49. Polycam. Available online: https://poly.cam (accessed on 16 October 2023).
  50. Wang, M.; Wang, C.C.; Sepasgozar, S.; Zlatanova, S. A Systematic Review of Digital Technology Adoption in Off-Site Construction: Current Status and Future Direction towards Industry 4.0. Buildings 2020, 10, 204. [Google Scholar] [CrossRef]
  51. Sepasgozar, S.M.E. Differentiating Digital Twin from Digital Shadow: Elucidating a Paradigm Shift to Expedite a Smart, Sustainable Built Environment. Buildings 2021, 11, 151. [Google Scholar] [CrossRef]
  52. Pozo-Antonio, J.S.; Puente, I.; Pereira, M.F.C.; Rocha, C.S.A. Quantification and mapping of deterioration patterns on granite surfaces by means of mobile LiDAR data. Measurement 2019, 140, 227–236. [Google Scholar] [CrossRef]
  53. Florkowska, L.; Bryt-Nitarska, I.; Gawałkiewicz, R.; Kruczkowski, J. Monitoring and Assessing the Dynamics of Building Deformation Changes in Landslide Areas. Buildings 2020, 10, 3. [Google Scholar] [CrossRef]
  54. Suchocki, C.; Katzer, J.; Rapinski, J. Terrestrial Laser Scanner as a Tool for Assessment of Saturation and Moisture Movement in Building Materials. Period. Polytech. Civil. Eng. 2018, 62, 694–699. [Google Scholar] [CrossRef]
  55. Drešček, U.; Kosmatin Fras, M.; Tekavec, J.; Lisec, A. Spatial ETL for 3D Building Modelling Based on Unmanned Aerial Vehicle Data in Semi-Urban Areas. Remote Sens. 2020, 12, 1972. [Google Scholar] [CrossRef]
Figure 1. Monument to German reprisals.
Figure 2. Detailed observation points shape (left), Trimble VX series used for measurement (right).
Figure 3. Methodology sketch for the polar method (a) and trigonometric height determination (b), together presenting the spatial polar method.
Figure 4. Methodology scheme.
Figure 5. Different 3D model representations: mesh model (a), point cloud (b), TIN model (c).
Figure 6. Scheme of the 3D coordinate extraction data acquisition process.
Figure 7. Definition of detailed observation points in Agisoft Metashape. Blue flags with red centroids define the marker positions.
Figure 8. The C2C analysis visualization mediated by the deviations at the identical points. Red vectors indicate the Scaniverse application, blue vectors the 3D Scanner App, green vectors the Polycam LiDAR approach, and magenta vectors the Polycam photogrammetry approach. The individual plane analyses are illustrated in (a) (XY plane), (b) (XZ plane), and (c) (YZ plane).
Figure 9. Comparison of the overall impression of the resulting 3D models: (a) Scaniverse, (b) 3D Scanner App, (c) Polycam LiDAR approach, (d) Polycam photogrammetry approach.
Figure 10. Focusing on the detail-capturing ability, shape of the object reconstruction, and distortion occurrence of the resulting 3D models: (a) Scaniverse, (b) 3D Scanner App, (c) Polycam LiDAR approach, (d) Polycam photogrammetry approach.
Figure 11. Focusing on the challenging-component capture abilities of (a) Scaniverse, (b) 3D Scanner App, (c) Polycam LiDAR approach, (d) Polycam photogrammetry approach.
Figure 12. Focusing on edge sharpness and the ability to preserve perpendicular parts: (a) Scaniverse, (b) 3D Scanner App, (c) Polycam LiDAR approach, (d) Polycam photogrammetry approach.
Table 1. Objective parameters of the analyzed 3D models.

| Determined Parameter | Scaniverse | 3DScannerApp | Polycam_L | Polycam_F |
|---|---|---|---|---|
| Transformation key: translation T_X, T_Y, T_Z [m] | 194.826, 107.111, 42.219 | 194.826, 107.414, 50.284 | 194.582, 107.068, 50.362 | 194.121, 107.867, 50.657 |
| Transformation key: rotation ω_X, ω_Y, ω_Z [g] | 0.1963, 0.4279, 14.0166 | 0.0611, 0.1138, 91.0204 | 99.8253, 85.5179, 0.7382 | 96.2330, 85.6259, 5.0785 |
| Scale factor q | 0.99238 | 1.01177 | 0.99590 | 1.99661 |
| Model mean error σ [m] | 0.011 | 0.018 | 0.006 | 0.007 |
| Max. coordinate shift [m] | 0.053 | 0.065 | 0.034 | 0.048 |
| Max. transformation vector [m] | 0.065 | 0.109 | 0.037 | 0.051 |

The multiplicity of the transformation vectors by size is shown as interval histograms in the original article (images not reproducible here).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
