Article

Geomatic Data Fusion for 3D Tree Modeling: The Case Study of Monumental Chestnut Trees

Mattia Balestra, Enrico Tonelli, Alessandro Vitali, Carlo Urbinati, Emanuele Frontoni and Roberto Pierdicca
1 Department of Agricultural, Food and Environmental Sciences, Università Politecnica delle Marche, 60131 Ancona, Italy
2 Department of Political Sciences, Communication and International Relations, Università di Macerata, 62100 Macerata, Italy
3 Department of Construction, Civil Engineering and Architecture, Università Politecnica delle Marche, 60131 Ancona, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(8), 2197; https://doi.org/10.3390/rs15082197
Submission received: 8 March 2023 / Revised: 12 April 2023 / Accepted: 19 April 2023 / Published: 21 April 2023
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)

Abstract

In recent years, advancements in remote and proximal sensing technology have driven innovation in environmental and land surveys. The integration of various geomatic devices, such as single-lens reflex (SLR) cameras, UAVs equipped with RGB cameras and mobile laser scanners (MLS), allows detailed and precise surveys of monumental trees. With this data fusion method, we reconstructed three monumental 3D tree models, allowing the computation of tree metric variables such as diameter at breast height (DBH), total height (TH), crown basal area (CBA), crown volume (CV) and wood volume (WV), and even providing information on the tree shape and its overall condition. We processed the point clouds in software such as CloudCompare, 3D Forest, R and MATLAB, whereas the photogrammetric processing was conducted with Agisoft Metashape. Three-dimensional tree models enhance the accessibility of the data and allow for a wide range of potential applications, including the development of a tree information model (TIM), providing detailed data for monitoring tree health, growth, biomass and carbon sequestration. The encouraging results provide a basis for extending the virtualization of these monumental trees to a larger scale for conservation and monitoring.


1. Introduction

Over the past two decades, innovation has played a major role in the evolution of environmental and land surveys due to the fast advancement of both devices and processing technology [1]. The new techniques involve both remote and proximal sensing. On the one hand, remote sensing allows researchers to gather data from afar, avoiding physical access to the site. This permits affordable surveys even in locations with limited or no accessibility [2]. Furthermore, remote applications allow data to be retrieved across various spatial scales, from local sites [3,4] to entire regions [5] and even global surveys [6,7]. On the other hand, proximal sensing devices are placed directly on or close to the area being surveyed, allowing researchers to gather highly detailed and accurate data that can be used to monitor changes over time. The combination of such technologies has promoted more comprehensive surveys, opening new opportunities and perspectives for integrated research activities.
Data fusion methods combine data from multiple sources to produce a more comprehensive view of a scene, attempting to overcome the limitations that measurement and sampling processes impose on the amount of information that can be extracted from a single dataset [8]. Geomatic data fusion techniques allow the development of new, effective conservation and management strategies by enhancing the understanding of the processes affecting the biotic components of ecosystems. The combination of Light Detection and Ranging (LiDAR) outputs and optical imagery has been applied in different areas, including but not limited to registration, orthophotograph generation, image pan-sharpening, classification, 3D reconstruction, forest inventory and land use/land cover classification [9,10].
A synergy between forestry and geomatics can shed new light on natural dynamics that are still only partially understood. Data fusion among the various outputs of LiDAR and photogrammetry technologies allows researchers to observe plot-level forest structure alterations and attributes (i.e., mean DBH, individual height, stem density, basal area, volume) [11,12] and can even be used to monitor phenological changes such as the growth of branches and leaves on single trees [13]. For instance, one study estimated forest above-ground biomass by integrating airborne and terrestrial laser scanning (ALS and TLS) with multispectral images [14], while others combined ALS with mobile laser scanning (MLS) to explore the impact of canopy cover on tree attribute estimation [15]. A recent study integrated geomatic devices to assess the effects of vegetation burning, showing how the combined use of ALS and MLS can support post-fire forest structure assessment [16]. Other studies reported that it is possible to predict tree-specific wood quality indicators [17] or to estimate above-ground biomass [18] by fusing ALS and TLS. Several tools are now available to process point cloud data in forestry: R and MATLAB are dedicated to statistical inference and validation; 3D Forest is a suite for tree parameter extraction; and CloudCompare and Agisoft Metashape are designed for the management and processing of 3D point clouds [19].
The use of these technologies has greatly improved the ability to understand and monitor the dynamics of forests and individual trees and has widened the spectrum of opportunities for forest analysis and management. An MLS uses a LiDAR unit to generate detailed point clouds of the surveyed area, with the position of each point in the cloud being recorded in the device's relative reference system. Moreover, these devices are usually lightweight, providing new opportunities in the field of forestry. Due to their compactness, easy handling and ease of operation in various environments, MLSs are a useful tool for collecting detailed and accurate data on the structure of trees and forests, especially in areas with difficult access or where traditional survey techniques may not be practical [20]. In summer, however, the laser beam emitted from below the tree canopy is often intercepted by leaves, preventing the accurate detection of the tops of individual trees [21] and making tree height estimation uncertain. It is therefore advisable to fuse different acquisition procedures to overcome the devices' limitations and to take advantage of their strengths. Some authors have used aerial photogrammetry to enhance forest inventory data content and to update forest inventory frameworks, as it is cost-effective and provides complementary data compared with laser devices [22,23]. Indeed, an unmanned aerial vehicle (UAV) equipped with a camera can capture high-resolution images of the upper part of the canopy, which can then be reconstructed through photogrammetry to obtain a UAV dense cloud. The combination of UAV dense clouds with MLS point clouds results in a complete 3D tree model from which tree metric data can be extracted, such as diameter at breast height (DBH), total height (TH), crown basal area (CBA), crown volume (CV) and wood volume (WV). Furthermore, to provide a virtual preservation method, the 3D tree model is also integrated with a trunk reconstruction performed with Structure-from-Motion (SfM) techniques using a close-range device, a single-lens reflex (SLR) camera, which provides a colored SLR dense cloud of the trunk. Finally, the 3D tree model is obtained from the fusion of the UAV and SLR dense clouds with the MLS point cloud.
According to Italian national law, monumental trees may be defined upon distinct criteria such as exceptional shape or large size, old age, cultural heritage value and genius loci [24,25]. Monumental trees may have exceptional conservation value and notable cultural, botanical or ecological importance [26,27,28], and they need to be monitored and studied thoroughly, as they are threatened by climate change and related disturbances [29,30,31]. Some of these trees in the Marche region (Central Italy) occur in traditional chestnut groves, a staple of the rural/mountain landscape and local economy that has provided edible fruits and many other non-wood forest products for centuries. Here, we describe a successful fusion approach using MLS and SfM outputs for the analysis and monitoring of monumental trees, aimed at their valorization and long-term conservation in the framework of ongoing regional and national networks (AMI, Italian Monumental Trees). To efficiently plan inventories or monitoring, it is important to collect these data [32,33] over the years, as they provide useful information on the health status of the trees and support proper management. New methodologies such as the one proposed here limit the operators' subjectivity and provide 3D tree models containing useful information to estimate, for instance, the tree carbon sequestration potential [34]. This innovative approach makes the field measurements more objective and replicable, supporting the conservation of monumental trees. There is indeed growing interest in the assessment of, and payment for, the ecosystem services (PES) provided by agro-silvo-pastoral elements and landscapes [24,35]. This is particularly true for monumental trees, given their high ecological and socio-cultural value [36,37,38].
In this paper, we aimed to generate 3D tree models of three monumental chestnut trees (Castanea sativa Mill.) through a geomatic data fusion approach, in both leaf-on and leaf-off phenological phases, and to estimate their main structural parameters by computing tree metric data.

2. Materials and Methods

2.1. Monumental Tree Locations and Features

The monumental chestnut trees (MT1, MT2, MT3) selected for our experiment are located within a traditional chestnut orchard in the southern Marche region (Central Italy) (Figure 1). All trees have a DBH greater than 143 cm, the minimum legal threshold for classifying a chestnut tree as monumental [36].
Traditional chestnut orchards usually have a low tree density to favor fruit production, and the target trees were selected for both their architecture and their accessibility. These monumental trees are located at a suitable distance from other trees, making their complex architecture (trunk, limbs and branches) detectable with most of the instruments used. They were first mapped and then measured both in summer, when the trees were in full vigor with foliage on, and in winter, without foliage, when their woody framework is more easily detected by the geomatic devices used.

2.2. Geomatic Device Surveys

We conducted the surveys in both seasons on all trees using the same data acquisition methods. We applied an ad hoc workflow to generate the 3D tree models, including field data collection followed by the processing and georeferencing of each device's output in dedicated software, which enabled us to merge the resulting files (Figure 2).
For the summer surveys, we used a Total Station (TOPCON GPT 7000) to acquire several ground control points (GCPs) in a local reference system within the chestnut grove, since the trees were in full vigor and full canopy and the dense foliage prevented the GNSS receiver from reliably tracking the satellite signals. In winter, with the trees in the leaf-off phase, the satellite signal was no longer obstructed and we could also obtain georeferenced GCPs. The coordinates were collected in WGS 84 UTM 33N, using the Topcon HiPer VR GNSS antenna with the RTK method, with an accuracy of 1 cm (https://www.topconpositioning.com/, accessed on 11 April 2023).
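For reference, GCP coordinates acquired as geographic latitude/longitude can be projected into WGS 84 / UTM zone 33N (EPSG:32633) with standard GIS tools; the minimal R sketch below uses the sf package, with made-up GCP values for illustration.

```r
library(sf)

# Hypothetical GCP list in geographic coordinates (WGS 84, EPSG:4326)
gcps <- data.frame(id  = c("T1", "T2", "T3"),
                   lon = c(13.30, 13.31, 13.32),
                   lat = c(42.90, 42.91, 42.92))

gcps_sf  <- st_as_sf(gcps, coords = c("lon", "lat"), crs = 4326)
gcps_utm <- st_transform(gcps_sf, 32633)   # WGS 84 / UTM zone 33N
st_coordinates(gcps_utm)                   # easting/northing in meters
```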
Kaarta Stencil-2 is an MLS instrument equipped with a LiDAR unit (Velodyne VLP-16), which we used to obtain the point clouds of our target trees. The Velodyne VLP-16 sensor is equipped with multiple lasers, allowing a wide field of view (30° vertical and 360° horizontal) to be scanned in a single pass. The Kaarta Stencil-2 has an inertial measurement unit, which measures the acceleration and angular velocity of the MLS instrument and allows its movement to be accurately tracked during the scan, and an internal processor used to process and analyze the collected data. The integration of these three components permits real-time localization of the instrument [39] while mapping the surrounding environment, speeding up the survey [40]. As an MLS unit, the Kaarta Stencil-2 is lightweight and can be operated in handheld mode [1].
We followed a circular path around each of the three monumental chestnut trees, so that the MLS device scanned the entire tree from multiple angles, allowing the creation of detailed MLS point clouds of the trees (Figure 3). Along the path, in open areas, we placed three highly reflective targets that were surveyed as GCPs with the Total Station or the GNSS device. These targets are clearly visible in the MLS point clouds and can therefore be used to correctly scale and reference the datasets. We processed the raw MLS point cloud data with the open-source CloudCompare v2.13 software (http://www.cloudcompare.org/, accessed on 15 February 2023) [41]. In all six surveys, we used the CloudCompare tools to manually remove the points belonging to the surrounding trees and obtain a clean MLS point cloud of each monumental chestnut tree. This operation was supported by both the "Cloth Simulation Filter" [42] and the "Statistical Outlier Remover" [43,44] functions in CloudCompare: the former was used to separate the above-ground points from the ground points, whereas the latter was used to remove the noise points generated during acquisition. After normalizing the MLS point clouds, we extracted each monumental tree's height as the maximum point distance along the Z axis.
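These filtering and normalization steps were carried out interactively in CloudCompare. As a minimal scripted sketch of an equivalent workflow, under the assumption of using the R package lidR (not the tool employed by the authors) and with a hypothetical file name and parameter values, the same operations could be reproduced as follows:

```r
library(lidR)   # also requires the RCSF package for csf()

las <- readLAS("MT1_winter_MLS.las")              # MLS point cloud exported from CloudCompare

# Ground/off-ground separation with the Cloth Simulation Filter (CSF)
las <- classify_ground(las, algorithm = csf())

# Statistical Outlier Removal: flag acquisition noise, then drop it
las <- classify_noise(las, algorithm = sor(k = 10, m = 3))
las <- filter_poi(las, Classification != LASNOISE)

# Height normalization and tree height as the maximum normalized Z
nlas        <- normalize_height(las, algorithm = tin())
tree_height <- max(nlas$Z)
```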
In the summer surveys, however, the laser pulses of the MLS were intercepted by the dense canopies of the monumental trees. This occlusion effect did not allow us to correctly scan the tops of the canopies [21,45], and the scans therefore remained incomplete. Using a combination of manual and automated segmentation, we divided each of the three MLS point clouds into an "MLS tree skeleton" and an "MLS tree canopy". The former includes the stem and the main branches up to the canopy insertion point, whereas the latter includes the secondary branches and leaves up to the highest point collected by the MLS. We segmented the MLS point clouds using both CloudCompare and R [46]. In CloudCompare, the "Number of neighbors" and "Verticality" filters (Tools > Other > Compute geometric features) were used to emphasize geometric features of the MLS point clouds, such as the tree trunks and the crowns. The "Number of neighbors" filter estimates the surface density by counting the N neighbors of each point within a sphere of radius R [47], while the "Verticality" filter analyzes the verticality of the surfaces in a point cloud: it identifies their orientation relative to the vertical axis by analyzing the angles between this axis and the plane normals, and then classifies the points according to their verticality, enhancing the tree trunks [48]. With these features highlighted, the user can manually and randomly select portions of the MLS point clouds representing the tree skeleton and the canopy and assign them the proper label. Indeed, these filters only assist in identifying the tree skeleton and the canopy; they cannot accurately separate the two parts, given the complex tree structure, so user intervention is required to select a sample of points and classify them into one category or the other. Using linear discriminant analysis (LDA) [49] within the 'caret' package [50] in R, which allows large amounts of data to be classified quickly [51], the non-labeled points were then classified on the basis of the labeled points, resulting in fully segmented MLS point clouds. During the winter surveys, with no leaves intercepting the LiDAR beam, we scanned the monumental trees in the leaf-off phase and, in post-processing, removed the smaller twigs and limbs, again using the "Number of neighbors" and "Verticality" filters. The resulting MLS point clouds represent the actual MLS tree skeletons, which were later used to extract the WV.
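The classification step can be sketched compactly. Assuming the per-point geometric features computed in CloudCompare are exported to a table (the file and column names below are hypothetical), the LDA classification with the 'caret' package in R could look like this:

```r
library(caret)

feat <- read.csv("MT1_features.csv")            # X, Y, Z, n_neighbors, verticality, label

labelled   <- subset(feat, label != "")         # points manually labelled "skeleton"/"canopy"
unlabelled <- subset(feat, label == "")
labelled$label <- factor(labelled$label)

# Train a linear discriminant analysis (LDA) classifier on the labelled sample
fit <- train(label ~ n_neighbors + verticality,
             data      = labelled,
             method    = "lda",
             trControl = trainControl(method = "cv", number = 5))

# Classify the remaining points and write them back for re-import into CloudCompare
unlabelled$label <- predict(fit, newdata = unlabelled)
write.csv(rbind(labelled, unlabelled), "MT1_segmented.csv", row.names = FALSE)
```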
The three monumental trees have a complex structure and a wide crown, and they are partially isolated from the other trees in the chestnut grove. This partial isolation allowed the MLS to properly scan the entire tree structure in the leaf-off phenological phase. In summer, however, fusion with the UAV dense cloud was necessary to obtain the complete structure. We used a DJI Mavic Mini drone (https://www.dji.com/nl/mavic-mini, accessed on 11 April 2023) to acquire nadiral RGB images. This UAV has a 1/2.3" CMOS sensor with 12 MP and an f/2.8 aperture with an electronic shutter; its takeoff weight is 249 g and its maximum speed is between 8 and 13 m/s. We set the flight at 30 m above the ground and, in the summer surveys, the upper part of the canopies was reconstructed with the SfM algorithm [52], which involves capturing a series of overlapping images of the surveyed object from different viewpoints. We carried out the photogrammetric processing with the Agisoft Metashape Professional (AMP) software, version 1.8.4 (https://www.agisoft.com/, accessed on 10 April 2023). With the UAV surveys, we collected 71 images for MT1, 94 images for MT2 and 60 images for MT3. During the SfM process, we utilized the GCPs to align the images, resulting in the successful generation of UAV dense clouds of the canopy tops of the monumental trees.
Moreover, to generate the dense clouds of the monumental trees' trunks, in the winter surveys we used an SLR Sony Alpha 77 camera as a close-range photogrammetry device. The results are high-definition SLR dense clouds of the trunks, whose pronounced buttresses and visible cankers can be reconstructed by close-range photogrammetry. The operators walked two circular paths around each monumental tree, also photographing the GCPs placed close to the trunks, in order to collect useful overlapping images. We acquired 63, 78 and 96 frames (for MT1, MT2 and MT3, respectively), which were combined to generate the three SLR dense clouds up to the level of the first main branches. We generated the SLR dense clouds and the UAV dense clouds with the AMP software (Figure 4), using the "Estimate image quality" algorithm in the "Photos" menu to discard the poor-quality images. We set the accuracy to "high" in the alignment phase, leaving the default limits of 40,000 key points and 4000 tie points. Afterwards, we optimized the resulting sparse point clouds with the "gradual outlier selection" process and scaled the point clouds through the guided marker positioning system. We generated the dense clouds using the "high" quality setting and both the "aggressive" and "mild" filtering modes: we preferred the latter to preserve the smallest details in the SLR surveys, while the former was used in the UAV surveys, where the conservation of the smaller details was unnecessary [53].
The point–pair registration of the point clouds obtained through LiDAR and photogrammetry results in a sub-decimeter level of accuracy. The goal of point–pair registration is to determine a transformation matrix that enables the conversion of point clouds acquired by different sensors into a common coordinate system [54]. Both the photogrammetric dense clouds and the MLS point clouds were thus co-registered with the GCPs, which were also used to compute the errors. The MLS residuals of observation, computed in CloudCompare, were within the tolerance of the instrument's declared precision, whereas the photogrammetric residuals of observation were computed as the reprojection error directly by the AMP software.
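For illustration, the core of a point–pair registration is the estimation of the rotation R and translation t that map one set of corresponding GCPs onto the other. The sketch below shows the classical SVD-based (Kabsch/Horn) solution in R; it is a conceptual example assuming known correspondences, not the exact procedure implemented in CloudCompare or AMP, which can also estimate a scale factor.

```r
# 'src' and 'dst' are n x 3 matrices of corresponding GCP coordinates in the
# source point cloud and in the target reference system.
estimate_rigid <- function(src, dst) {
  c_src <- colMeans(src)
  c_dst <- colMeans(dst)
  H <- t(sweep(src, 2, c_src)) %*% sweep(dst, 2, c_dst)   # 3 x 3 cross-covariance
  s <- svd(H)
  R <- s$v %*% t(s$u)
  if (det(R) < 0) {               # guard against a reflection solution
    s$v[, 3] <- -s$v[, 3]
    R <- s$v %*% t(s$u)
  }
  t_vec <- as.numeric(c_dst - R %*% c_src)
  list(R = R, t = t_vec)
}

# Applying the transformation to a whole cloud stored as an n x 3 matrix 'pts':
# tr <- estimate_rigid(src, dst)
# pts_registered <- sweep(pts %*% t(tr$R), 2, tr$t, "+")
```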
The GCPs collected during the MLS, UAV and SLR surveys made it possible to correctly align and fuse the different point clouds of the monumental trees and to obtain the 3D tree models.

2.3. Point Cloud Processing

We developed a semi-automatic procedure for tree metric data extraction and tree volume computation. Using the "Cross section" tool in CloudCompare, we sliced each SLR dense cloud at 1.30 m and exported this tree portion as a single point cloud to determine the DBH. From the slice's contour we generated a polyline and, with the "PoissonRecon" plugin, we reconstructed the mesh [55]; we then computed its area with the "measure the surface" command in CloudCompare. Assuming a circular cross-section, each DBH was determined from the measured area with the reverse formula of the circle area (DBH = 2·√(area/π)).

Once we had merged the MLS point cloud with the UAV and SLR dense clouds, we exported the 3D tree model of each monumental tree in the .pcd file format from CloudCompare and imported it into the open-source 3D Forest software [56,57] (https://www.3dforest.eu/, accessed on 10 April 2023). The latter enabled the extraction of various tree parameters, including the TH and the CBA, by orthogonally projecting the point cloud onto a plane and applying the concave hull algorithm [58].

To determine the WV, we used the MLS tree skeletons acquired during the winter MLS surveys. AdQSM [59], a novel quantitative structure model (QSM) originally developed to reconstruct 3D meshes from terrestrial laser scanning data [60,61], was applied to the MLS tree skeletons, enabling the reconstruction of a mesh representing the 3D branch geometry. AdQSM is based on the combination of a point cloud segmentation and a voxel-based model, which allows a precise reconstruction of the tree structure [59,62]. The "Height Segmentation" and "Cloud Parameter" values were set to 0.50 and 0.003, respectively, in accordance with the values recommended by the AdQSM authors. The exported tree skeleton meshes were then imported into CloudCompare, and the WV was computed with the mesh volume measurement tool.

Regarding the computation of the CV, the "Alpha Shape" algorithm is a commonly used method in the literature for extracting such features [63,64,65]. This algorithm performs a geometric reconstruction of surfaces based on the extraction of contours from a set of unordered points [64], resulting in a mesh. The algorithm uses a parameter, referred to as "alpha", to control the number of points included within the enclosed polyhedron mesh; the predefined "alpha" value therefore determines the level of detail of the mesh [66]. Alpha ranges between 0 and 1, and decreasing its value leads to the inclusion of fewer points, resulting in a more detailed and smoother mesh, which is more realistic than meshes created with alpha values close to 1 [67,68]. The three MLS tree canopies fused with the UAV dense clouds were processed in MATLAB [69] (educational license provided by our university) using the Alpha Shape algorithm with an alpha value of 0.25, resulting in three crown meshes.
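Two of these computations lend themselves to a compact numeric sketch. The R snippet below shows the "reverse formula" for the DBH and a crown volume computed with a 3D alpha shape; the 'alphashape3d' package is used here as a stand-in for the MATLAB alphaShape employed above (an assumption, not the authors' exact implementation), 'crown_xyz' is a hypothetical n × 3 matrix of fused canopy points, and note that in alphashape3d the alpha parameter is expressed in coordinate units (meters) rather than on a normalized 0–1 scale.

```r
# DBH from the cross-sectional area A (m^2) of the 1.30 m slice, assuming a
# circular section: A = pi * (DBH / 2)^2, hence DBH = 2 * sqrt(A / pi).
dbh_from_area <- function(A) 2 * sqrt(A / pi)
dbh_from_area(2.11)   # a slice area of ~2.11 m^2 gives a DBH of ~1.64 m (cf. MT1)

# Crown volume from the fused canopy points via a 3D alpha shape
library(alphashape3d)
crown_ashape <- ashape3d(crown_xyz, alpha = 1.0)   # alpha here is in meters
cv <- volume_ashape3d(crown_ashape)                # crown volume in m^3
```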

3. Results

During the field survey, DBH was measured with a diameter–circumference tape, obtaining the following values: MT1 = 166 cm, MT2 = 194 cm, MT3 = 145 cm. The following tree height values were estimated by a laser hypsometer (TruPulse® 360B, Laser Technology Inc., Centennial, CO, USA): MT1 = 18.5 m; MT2 = 21.2 m; MT3 = 24.5 m. The tree age of MT1, MT2 and MT3, estimated using an increment borer, is at least 200 years.
The geomatic devices used and the two tree phenological phases surveyed allowed us to obtain 3D tree models from which accurate tree metric data could be extracted. Table 1 reports the errors of the SfM process, in meters and pixels, for both the creation of the SLR dense clouds and the generation of the UAV dense clouds. Table 2 reports the errors, in meters, of the MLS point cloud registration process for both the winter and summer surveys, as well as the acquisition time and trajectory length of each survey.

3D Models and Tree Metric Data Extraction

The geomatic data fusion approach used here provided the 3D tree models (Figure 5). These models are the final products of the fusion process, from which we extracted the various tree metrics.
After obtaining the meshes of the three SLR dense cloud slices (Figure 6), we computed the related DBHs. Our procedure yielded a DBH of 164 cm for the MT1 trunk, 187 cm for MT2 and 148 cm for MT3. The mean absolute difference between the traditional and proposed methods corresponded to 4 cm, equal to a mean percentage difference of 2.3%.
Importing the three 3D tree models into 3D Forest provided the following measures: TH MT1 = 18.75 m; TH MT2 = 19.57 m; TH MT3 = 23.18 m; CBA MT1 = 258.67 m2; CBA MT2 = 198.55 m2; CBA MT3 = 156.46 m2 (Figure 7). The mean absolute difference between the traditional and proposed methods for TH corresponded to 1.07 m, equal to a mean percentage difference of 4.8%.
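As a quick arithmetic check, the reported mean differences can be reproduced from the field and model values given above:

```r
# Field (traditional) vs. model (proposed) values for the three trees
dbh_field <- c(166, 194, 145);    dbh_model <- c(164, 187, 148)        # cm
th_field  <- c(18.5, 21.2, 24.5); th_model  <- c(18.75, 19.57, 23.18)  # m

mean(abs(dbh_field - dbh_model))               # 4 cm
mean(abs(dbh_field - dbh_model) / dbh_field)   # ~0.023 -> 2.3%
mean(abs(th_field - th_model))                 # ~1.07 m
mean(abs(th_field - th_model) / th_field)      # ~0.048 -> 4.8%
```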
Using the meshes reconstructed by AdQSM from the MLS tree skeletons acquired in the winter MLS surveys (Figure 8), we measured the standing WV of the three monumental trees (MT1, MT2 and MT3). Each mesh was reimported into CloudCompare as an .obj file and the WV was computed through the mesh volume computation function, obtaining 39.83 m3, 41.63 m3 and 39.05 m3, respectively.
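The WV was obtained with CloudCompare's mesh volume tool; as a hedged alternative for a scripted cross-check (an assumption, not part of the authors' workflow), the volume of the closed AdQSM mesh could also be computed in R with the Rvcg package. The file name is hypothetical and the mesh must be watertight for the volume to be meaningful.

```r
library(Rvcg)
skeleton_mesh <- vcgImport("MT1_skeleton_AdQSM.obj")   # AdQSM mesh exported as .obj
wv <- vcgVolume(skeleton_mesh)                         # wood volume in m^3
```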
From the three reconstructed crown meshes (Figure 9), we estimated the following CVs for the three 3D tree models (MT1, MT2 and MT3): 328.39 m3, 393.31 m3 and 406.48 m3, respectively.

4. Discussion

Our geomatic data fusion approach for the detection and modeling of three monumental chestnut trees allowed the estimation of the tree metrics of the three individuals. Moreover, the 3D tree models obtained are useful datasets to be preserved over time. The 3D tree models are the final products of the fusion and therefore have to be considered a result of this work in their own right. They provide a representation useful for monitoring the evolution and decay of the trees, and they can be shared with different users and various software tools or digital visualizers [70]. Our data fusion approach, combining point clouds from ground-based devices with the output of a UAV equipped with an RGB camera, allowed us to overcome the limitations of single-device surveys, as several authors have pointed out [23,71,72]. In addition, the three monumental trees have peculiar structures and their modeling required us to integrate heterogeneous data from different sources [73].
Our modeling results are comparable with the DBH and TH obtained in the field by traditional measurements. Another study demonstrated that the combination of UAV aerial photogrammetry with point clouds obtained by LiDAR can generate a single 3D model of a forest study area with a precision of 3 cm [74]. Similarly, the use of close-range photogrammetry for sparse point cloud generation showed that the parameters extracted from the 3D Forest models have an accuracy greater than 90% [75]. In a previous work [76], close-range photogrammetry generated dense and accurate point clouds suitable for DBH estimation, with an RMSE ranging from 4.41 cm to 5.98 cm. In our case, traditional field measurements of DBH and especially TH could be biased by the large size and the complex canopy structure of the trees, favoring the application of UAV surveys for the accurate detection of the canopy top. Several papers have nonetheless shown the reliability and accuracy of the SfM approach in measuring tree trunk diameters and TH [77,78,79].
Usually, the estimation of CV and WV is conducted with tree/forest yield tables, but these are unsuitable for irregular structures such as those of monumental trees [80]. We computed the WV using AdQSM; in the literature, this computation achieved an rRMSE of 32.78% in [62] and of 49.40% in [81]. AdQSM enables a comprehensive understanding of tree structure, as confirmed by another study that used this algorithm to extract forest structure parameters and demonstrated its usefulness in studying forest windbreaks [82]. However, even though it proved to be an effective method to model the complex and highly variable structure of monumental trees, the resulting meshes do not take into account the presence of trunk cavities, hollows and breakages, and therefore the WV must be interpreted as the volume occupied by the tree rather than the actual amount of wood.
Regarding the CV, the Alpha Shape algorithm is widely used in the literature for the computation of this parameter [66,83,84,85]. We selected the alpha value on the basis of the results obtained by other authors: the authors of [67] used an alpha of 0.25, obtaining R2 = 0.84 in the CV computation, while the authors of [86] obtained R2 = 0.87.
The heterogeneity of the datasets can be managed for updating and visualizing 3D tree models as a first step towards a tree information model (TIM), extending the concept of the better-known building information model (BIM). In the near future, these types of datasets could be associated with other arboricultural field observations, creating a multi-scale tree decision support system [87]. Comprehensive and updated information, such as that presented here, would be of great support, especially in urban forestry and landscape restoration, where individual trees are of increasing importance for adaptive management and for the supply of multiple ecosystem services in cities and rural landscapes [88].

5. Conclusions

The combination of geomatic techniques (MLS, UAV and SLR photogrammetry) allowed us to overcome some instrumental limitations and to create 3D tree models. The 3D tree models of the three monumental trees enhanced data accessibility and increased the range of potential applications. In this case, we utilized these 3D tree models to extract metric data that are difficult to measure directly in the field using traditional methods. Our results, together with replication on other specimens in various conditions, provide a basis for extending the virtualization of these monumental trees to the landscape or regional scale. This can foster monitoring for the conservation of these important habitat trees by providing a comprehensive database, a sort of "Monumental Trees Network" (building on the existing regional and national inventories). Moreover, this type of dataset can also be used to train artificial intelligence algorithms or for virtual reality studies, enhancing the possibilities and the knowledge associated with the agroforestry sector and with the ecosystem services provided by monumental individuals. In conclusion, the proposed methodology needs to be extended to other monumental trees, training the algorithms on different tree species, dimensions and physiognomies.

Author Contributions

Manuscript conceptualization, M.B. and R.P.; methodology, M.B.; software, M.B.; validation, M.B. and R.P.; formal analysis, M.B.; investigation, M.B.; resources, M.B., R.P. and A.V.; data curation, M.B.; writing—original draft preparation, M.B.; writing—review and editing, M.B., A.V., E.T., E.F. and C.U.; visualization, M.B.; manuscript supervision, R.P. and C.U.; research idea and project supervision, C.U.; funding acquisition, C.U. All authors have read and agreed to the published version of the manuscript.

Funding

This study is part of a larger research project funded by ASSAM—Agenzia Servizi Settore Agroalimentare delle Marche (now AMAP—Agenzia per l’innovazione nel Settore Agroalimentare e della Pesca), funding number: prot. N. 0046740 28 July 2020 Classif. III/19.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We wish to thank several persons that contributed to the accomplishment of this study: Stefano Chiappini, Francesco Malandra, Massimo Prosdocimi, Leonardo Lori and Piergiorgio Fioravanti for data collection and technical support; Ascenzio Santini for logistic support during field activities.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Di Stefano, F.; Chiappini, S.; Gorreja, A.; Balestra, M.; Pierdicca, R. Mobile 3D scan LiDAR: A literature review. Geomat. Nat. Hazards Risk 2021, 12, 2387–2429. [Google Scholar] [CrossRef]
  2. Boccardo, P.; Giulio Tonolo, F. Remote sensing role in emergency mapping for disaster response. In Engineering Geology for Society and Territory-Volume 5: Urban Geology, Sustainable Planning and Landscape Exploitation; Springer: Berlin/Heidelberg, Germany, 2015; pp. 17–24. [Google Scholar]
  3. Dechesne, C.; Mallet, C.; Le Bris, A.; Gouet-Brunet, V. Semantic segmentation of forest stands of pure species combining airborne lidar data and very high resolution multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2017, 126, 129–145. [Google Scholar] [CrossRef]
  4. Apostol, B.; Petrila, M.; Lorenţ, A.; Ciceu, A.; Gancz, V.; Badea, O. Species discrimination and individual tree detection for predicting main dendrometric characteristics in mixed temperate forests by use of airborne laser scanning and ultra-high-resolution imagery. Sci. Total Environ. 2020, 698, 134074. [Google Scholar] [CrossRef] [PubMed]
  5. Hościło, A.; Lewandowska, A. Mapping forest type and tree species on a regional scale using multi-temporal Sentinel-2 data. Remote Sens. 2019, 11, 929. [Google Scholar] [CrossRef]
  6. Lang, N.; Kalischek, N.; Armston, J.; Schindler, K.; Dubayah, R.; Wegner, J.D. Global canopy height regression and uncertainty estimation from GEDI LIDAR waveforms with deep ensembles. Remote Sens. Environ. 2022, 268, 112760. [Google Scholar] [CrossRef]
  7. Potapov, P.; Li, X.; Hernandez-Serna, A.; Tyukavina, A.; Hansen, M.C.; Kommareddy, A.; Pickens, A.; Turubanova, S.; Tang, H.; Silva, C.E. Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sens. Environ. 2021, 253, 112165. [Google Scholar] [CrossRef]
  8. Ghamisi, P.; Rasti, B.; Yokoya, N.; Wang, Q.; Hofle, B.; Bruzzone, L.; Bovolo, F.; Chi, M.; Anders, K.; Gloaguen, R. Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art. IEEE Geosci. Remote Sens. Mag. 2019, 7, 6–39. [Google Scholar] [CrossRef]
  9. Zhang, J.; Lin, X. Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion 2017, 8, 1–31. [Google Scholar] [CrossRef]
  10. Suwardhi, D.; Fauzan, K.N.; Harto, A.B.; Soeksmantono, B.; Virtriana, R.; Murtiyoso, A. 3D Modeling of Individual Trees from LiDAR and Photogrammetric Point Clouds by Explicit Parametric Representations for Green Open Space (GOS) Management. ISPRS Int. J. Geo-Inf. 2022, 11, 174. [Google Scholar] [CrossRef]
  11. Cao, L.; Liu, H.; Fu, X.; Zhang, Z.; Shen, X.; Ruan, H. Comparison of UAV LiDAR and digital aerial photogrammetry point clouds for estimating forest structural attributes in subtropical planted forests. Forests 2019, 10, 145. [Google Scholar] [CrossRef]
  12. Tupinambá-Simões, F.; Pascual, A.; Guerra-Hernández, J.; Ordóñez, C.; de Conto, T.; Bravo, F. Assessing the Performance of a Handheld Laser Scanning System for Individual Tree Mapping—A Mixed Forests Showcase in Spain. Remote Sens. 2023, 15, 1169. [Google Scholar] [CrossRef]
  13. Liu, Q.; Li, S.; Li, Z.; Fu, L.; Hu, K. Review on the applications of UAV-based LiDAR and photogrammetry in forestry. Sci. Silvae Sin. 2017, 53, 134–148. [Google Scholar]
  14. Lian, X.; Zhang, H.; Xiao, W.; Lei, Y.; Ge, L.; Qin, K.; He, Y.; Dong, Q.; Li, L.; Han, Y. Biomass Calculations of Individual Trees Based on Unmanned Aerial Vehicle Multispectral Imagery and Laser Scanning Combined with Terrestrial Laser Scanning in Complex Stands. Remote Sens. 2022, 14, 4715. [Google Scholar] [CrossRef]
  15. Qi, Y.; Coops, N.C.; Daniels, L.D.; Butson, C.R. Comparing tree attributes derived from quantitative structure models based on drone and mobile laser scanning point clouds across varying canopy cover conditions. ISPRS J. Photogramm. Remote Sens. 2022, 192, 49–65. [Google Scholar] [CrossRef]
  16. Qi, Y.; Coops, N.C.; Daniels, L.D.; Butson, C.R. Assessing the effects of burn severity on post-fire tree structures using the fused drone and mobile laser scanning point clouds. Front. Environ. Sci. 2022, 10, 1362. [Google Scholar] [CrossRef]
  17. Pyörälä, J.; Saarinen, N.; Kankare, V.; Coops, N.C.; Liang, X.; Wang, Y.; Holopainen, M.; Hyyppä, J.; Vastaranta, M. Variability of wood properties using airborne and terrestrial laser scanning. Remote Sens. Environ. 2019, 235, 111474. [Google Scholar] [CrossRef]
  18. Wang, Y.; Pyörälä, J.; Liang, X.; Lehtomäki, M.; Kukko, A.; Yu, X.; Kaartinen, H.; Hyyppä, J. In situ biomass estimation at tree and plot levels: What did data record and what did algorithms derive from terrestrial and aerial point clouds in boreal forest. Remote Sens. Environ. 2019, 232, 111309. [Google Scholar] [CrossRef]
  19. Mokroš, M.; Mikita, T.; Singh, A.; Tomaštík, J.; Chudá, J.; Wężyk, P.; Kuželka, K.; Surový, P.; Klimánek, M.; Zięba-Kulawik, K. Novel low-cost mobile mapping systems for forest inventories as terrestrial laser scanning alternatives. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102512. [Google Scholar] [CrossRef]
  20. Di Pietra, V.; Grasso, N.; Piras, M.; Dabove, P. Characterization of a mobile mapping system for seamless navigation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B1-2, 223–227. [Google Scholar] [CrossRef]
  21. Bauwens, S.; Bartholomeus, H.; Calders, K.; Lejeune, P. Forest inventory with terrestrial LiDAR: A comparison of static and hand-held mobile laser scanning. Forests 2016, 7, 127. [Google Scholar] [CrossRef]
  22. Goodbody, T.R.H.; Coops, N.C.; White, J.C. Digital aerial photogrammetry for updating area-based forest inventories: A review of opportunities, challenges, and future directions. Curr. For. Rep. 2019, 5, 55–75. [Google Scholar] [CrossRef]
  23. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from motion photogrammetry in forestry: A review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef]
  24. Bauhus, J.; Puettmann, K.; Messier, C. Silviculture for old-growth attributes. For. Ecol. Manage. 2009, 258, 525–537. [Google Scholar] [CrossRef]
  25. Nolan, V.; Reader, T.; Gilbert, F.; Atkinson, N. The Ancient Tree Inventory: A summary of the results of a 15 year citizen science project recording ancient, veteran and notable trees across the UK. Biodivers. Conserv. 2020, 29, 3103–3129. [Google Scholar] [CrossRef]
  26. Jim, C.Y. Urban Heritage Trees: Natural-Cultural Significance Informing Management and Conservation. In Greening Cities: Forms and Functions; Springer: Berlin/Heidelberg, Germany, 2017; pp. 279–305. [Google Scholar]
  27. Skarpaas, O.; Blumentrath, S.; Evju, M.; Sverdrup-Thygeson, A. Prediction of biodiversity hotspots in the Anthropocene: The case of veteran oaks. Ecol. Evol. 2017, 7, 7987–7997. [Google Scholar] [CrossRef]
  28. Wetherbee, R.; Birkemoe, T.; Sverdrup-Thygeson, A. Veteran trees are a source of natural enemies. Sci. Rep. 2020, 10, 18485. [Google Scholar] [CrossRef]
  29. Wilkaniec, A.; Borowiak-Sobkowiak, B.; Irzykowska, L.; Breś, W.; Świerk, D.; Pardela, Ł.; Durak, R.; Środulska-Wielgus, J.; Wielgus, K. Biotic and abiotic factors causing the collapse of Robinia pseudoacacia L. veteran trees in urban environments. PLoS ONE 2021, 16, e0245398. [Google Scholar] [CrossRef]
  30. Lonsdale, D. Review of oak mildew, with particular reference to mature and veteran trees in Britain. Arboric. J. 2015, 37, 61–84. [Google Scholar] [CrossRef]
  31. Jacobsen, R.M.; Birkemoe, T.; Evju, M.; Skarpaas, O.; Sverdrup-Thygeson, A. Veteran trees in decline: Stratified national monitoring of oaks in Norway. For. Ecol. Manag. 2023, 527, 120624. [Google Scholar] [CrossRef]
  32. Maravelakis, E.; Bilalis, N.; Mantzorou, I.; Konstantaras, A.; Antoniadis, A. 3D modelling of the oldest olive tree of the world. Int. J. Comput. Eng. Res. 2012, 2, 2250–3005. [Google Scholar]
  33. Krebs, P.; Koutsias, N.; Conedera, M. Modelling the eco-cultural niche of giant chestnut trees: New insights into land use history in southern Switzerland through distribution analysis of a living heritage. J. Hist. Geogr. 2012, 38, 372–386. [Google Scholar] [CrossRef]
  34. Velasco, E.; Chen, K.W. Carbon storage estimation of tropical urban trees by an improved allometric model for aboveground biomass based on terrestrial laser scanning. Urban For. Urban Green. 2019, 44, 126387. [Google Scholar] [CrossRef]
  35. Achim, A.; Moreau, G.; Coops, N.C.; Axelson, J.N.; Barrette, J.; Bédard, S.; Byrne, K.E.; Caspersen, J.; Dick, A.R.; D’Orangeville, L. The changing culture of silviculture. Forestry 2022, 95, 143–152. [Google Scholar] [CrossRef]
  36. Farina, A.; Canini, L. Alberi Monumentali D’Italia; MASAF, Rodorigo Editore: Roma, Italy, 2013; ISBN 9788899544348. Available online: https://www.politicheagricole.it/flex/cm/pages/ServeBLOB.php/L/IT/IDPagina/13577 (accessed on 7 March 2023).
  37. Cousseran, F. Guide des Arbres Remarquables de France; Edisud: St Rémy de Provence, France, 2009; ISBN 2744908215. [Google Scholar]
  38. Croft, A. Ancient and Other Veteran Trees: Further Guidance on Management; Ancient Tree Forum: London, UK, 2013. [Google Scholar]
  39. Chang, L.; Niu, X.; Liu, T.; Tang, J.; Qian, C. GNSS/INS/LiDAR-SLAM integrated navigation system based on graph optimization. Remote Sens. 2019, 11, 1009. [Google Scholar] [CrossRef]
  40. Ullrich, A.; Pfennigbauer, M. Advances in lidar point cloud processing. In Proceedings of the Laser Radar Technology and Applications XXIV, Baltimore, MD, USA, 16–17 April 2019; SPIE: Bellingham, WA, USA, 2019; Volume 11005, pp. 157–166. [Google Scholar]
  41. CloudCompare v2.13 Software. Available online: http://www.cloudcompare.org/ (accessed on 15 February 2023).
  42. Sun, J.; Zhang, Z.; Long, B.; Qin, S.; Yan, Y.; Wang, L.; Qin, J. Method for determining cloth simulation filtering threshold value based on curvature value of fitting curve. Int. J. Grid Util. Comput. 2021, 12, 276–286. [Google Scholar] [CrossRef]
  43. Zhang, Y.; Wu, H.; Yang, W. Forests growth monitoring based on tree canopy 3D reconstruction using UAV aerial photogrammetry. Forests 2019, 10, 1052. [Google Scholar] [CrossRef]
  44. Salehi, M.; Rashidi, L. A Survey on Anomaly detection in Evolving Data: [with Application to Forest Fire Risk Prediction]. ACM SIGKDD Explor. Newsl. 2018, 20, 13–23. [Google Scholar] [CrossRef]
  45. Gollob, C.; Ritter, T.; Wassermann, C.; Nothdurft, A. Influence of scanner position and plot size on the accuracy of tree detection and diameter estimation using terrestrial laser scanning on forest inventory plots. Remote Sens. 2019, 11, 1602. [Google Scholar] [CrossRef]
  46. R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing; 2013; ISBN 3-900051-07-0. Available online: http://www.R-project.org/ (accessed on 11 April 2023).
  47. Hamal, S.N.G.; Ulvi, A. 3D modeling of underwater objects using photogrammetric techniques and software comparison. Intercont. Geoinf. Days 2021, 3, 164–167. [Google Scholar]
  48. Hristova, H.; Abegg, M.; Fischer, C.; Rehush, N. Monocular Depth Estimation in Forest Environments. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 1017–1023. [Google Scholar] [CrossRef]
  49. Balakrishnama, S.; Ganapathiraju, A. Linear discriminant analysis-a brief tutorial. Inst. Signal Inf. Process. 1998, 18, 1–8. [Google Scholar]
  50. Kuhn, M.; Wing, J.; Weston, S.; Williams, A.; Keefer, C.; Engelhardt, A.; Cooper, T.; Mayer, Z.; Kenkel, B.; R Core Team. Package ‘caret’. R J. 2020, 223, 7. [Google Scholar]
  51. Zhu, M.; Martinez, A.M. Subclass discriminant analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1274–1286. [Google Scholar] [PubMed]
  52. Zhang, W.; Shao, J.; Jin, S.; Luo, L.; Ge, J.; Peng, X.; Zhou, G. Automated marker-free registration of multisource forest point clouds using a coarse-to-global adjustment strategy. Forests 2021, 12, 269. [Google Scholar] [CrossRef]
  53. Tinkham, W.T.; Swayze, N.C. Influence of Agisoft Metashape parameters on UAS structure from motion individual tree detection from canopy height models. Forests 2021, 12, 250. [Google Scholar] [CrossRef]
  54. Zhang, Y.; Qiao, D.; Xia, C.; He, Q. A Point Cloud Registration Method Based on Histogram and Vector Operations. Electronics 2022, 11, 4172. [Google Scholar] [CrossRef]
  55. Kazhdan, M.; Chuang, M.; Rusinkiewicz, S.; Hoppe, H. Poisson surface reconstruction with envelope constraints. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2020; Volume 39, pp. 173–182. [Google Scholar]
  56. Trochta, J.; Kruček, M.; Vrška, T.; Král, K. 3D Forest: An application for descriptions of three-dimensional forest structures using terrestrial LiDAR. PLoS ONE 2017, 12, e0176871. [Google Scholar] [CrossRef]
  57. Liang, X.; Hyyppä, J.; Kaartinen, H.; Lehtomäki, M.; Pyörälä, J.; Pfeifer, N.; Holopainen, M.; Brolly, G.; Francesco, P.; Hackenberg, J. International benchmarking of terrestrial laser scanning approaches for forest inventories. ISPRS J. Photogramm. Remote Sens. 2018, 144, 137–179. [Google Scholar] [CrossRef]
  58. Krůček, M.; Trochta, J.; Cibulka, M.; Král, K. Beyond the cones: How crown shape plasticity alters aboveground competition for space and light—Evidence from terrestrial laser scanning. Agric. For. Meteorol. 2019, 264, 188–199. [Google Scholar] [CrossRef]
  59. Fan, G.; Nan, L.; Dong, Y.; Su, X.; Chen, F. AdQSM: A new method for estimating above-ground biomass from TLS point clouds. Remote Sens. 2020, 12, 3089. [Google Scholar] [CrossRef]
  60. Hackenberg, J.; Spiecker, H.; Calders, K.; Disney, M.; Raumonen, P. SimpleTree—An efficient open source tool to build tree models from TLS clouds. Forests 2015, 6, 4245–4294. [Google Scholar] [CrossRef]
  61. Calders, K.; Newnham, G.; Burt, A.; Murphy, S.; Raumonen, P.; Herold, M.; Culvenor, D.; Avitabile, V.; Disney, M.; Armston, J. Nondestructive estimates of above-ground biomass using terrestrial laser scanning. Methods Ecol. Evol. 2015, 6, 198–208. [Google Scholar] [CrossRef]
  62. Dong, Y.; Fan, G.; Zhou, Z.; Liu, J.; Wang, Y.; Chen, F. Low Cost Automatic Reconstruction of Tree Structure by AdQSM with Terrestrial Close-Range Photogrammetry. Forests 2021, 12, 1020. [Google Scholar] [CrossRef]
  63. Miranda-Fuentes, A.; Llorens, J.; Gamarra-Diezma, J.L.; Gil-Ribes, J.A.; Gil, E. Towards an optimized method of olive tree crown volume measurement. Sensors 2015, 15, 3671–3687. [Google Scholar] [CrossRef]
  64. Vauhkonen, J.; Tokola, T.; Packalén, P.; Maltamo, M. Identification of Scandinavian commercial species of individual trees from airborne laser scanning data using alpha shape metrics. For. Sci. 2009, 55, 37–47. [Google Scholar]
  65. Vauhkonen, J.; Korpela, I.; Maltamo, M.; Tokola, T. Imputation of single-tree attributes using airborne laser scanning-based height, intensity, and alpha shape metrics. Remote Sens. Environ. 2010, 114, 1263–1276. [Google Scholar] [CrossRef]
  66. Zhu, Z.; Kleinn, C.; Nölke, N. Assessing tree crown volume—A review. For. An Int. J. For. Res. 2021, 94, 18–35. [Google Scholar] [CrossRef]
  67. Wang, K.; Zhou, J.; Zhang, W.; Zhang, B. Mobile LiDAR scanning system combined with canopy morphology extracting methods for tree crown parameters evaluation in orchards. Sensors 2021, 21, 339. [Google Scholar] [CrossRef]
  68. Ahongshangbam, J.; Khokthong, W.; Ellsaesser, F.; Hendrayanto, H.; Hoelscher, D.; Roell, A. Drone-based photogrammetry-derived crown metrics for predicting tree and oil palm water use. Ecohydrology 2019, 12, e2115. [Google Scholar] [CrossRef]
  69. MathWorks, I. MATLAB: The Language of Technical Computing: Computation, Visualization, Programming. Installation Guide for UNIX Version 5; Math Works Incorporated: Natick, MA, USA, 1996. [Google Scholar]
  70. Rahaman, H.; Champion, E.; Bekele, M. From photo to 3D to mixed reality: A complete workflow for cultural heritage visualisation and experience. Digit. Appl. Archaeol. Cult. Herit. 2019, 13, e00102. [Google Scholar] [CrossRef]
  71. Gollob, C.; Ritter, T.; Nothdurft, A. Forest Inventory with Long Range and High-Speed Personal Laser Scanning (PLS) and Simultaneous Localization and Mapping (SLAM) Technology. Remote Sens. 2020, 12, 1509. [Google Scholar] [CrossRef]
  72. Xu, C.; Morgenroth, J.; Manley, B. Integrating data from discrete return airborne LiDAR and optical sensors to enhance the accuracy of forest description: A review. Curr. For. Rep. 2015, 1, 206–219. [Google Scholar] [CrossRef]
  73. Remondino, F. Heritage recording and 3D modeling with photogrammetry and 3D scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef]
  74. Aicardi, I.; Dabove, P.; Lingua, A.M.; Piras, M. Integration between TLS and UAV photogrammetry techniques for forestry applications. IForest 2017, 10, 41–47. [Google Scholar] [CrossRef]
  75. Zhu, R.; Guo, Z.; Zhang, X. Forest 3d reconstruction and individual tree parameter extraction combining close-range photo enhancement and feature matching. Remote Sens. 2021, 13, 1633. [Google Scholar] [CrossRef]
  76. Mokroš, M.; Liang, X.; Surový, P.; Valent, P.; Čerňava, J.; Chudý, F.; Tunák, D.; Saloň, Š.; Merganič, J. Evaluation of close-range photogrammetry image collection methods for estimating tree diameters. ISPRS Int. J. Geo-Inf. 2018, 7, 93. [Google Scholar] [CrossRef]
  77. Bayati, H.; Najafi, A.; Vahidi, J.; Gholamali Jalali, S. 3D reconstruction of uneven-aged forest in single tree scale using digital camera and SfM-MVS technique. Scand. J. For. Res. 2021, 36, 210–220. [Google Scholar] [CrossRef]
  78. Hunčaga, M.; Chudá, J.; Tomaštík, J.; Slámová, M.; Koreň, M.; Chudý, F. The comparison of stem curve accuracy determined from point clouds acquired by different terrestrial remote sensing methods. Remote Sens. 2020, 12, 2739. [Google Scholar] [CrossRef]
  79. Piermattei, L.; Karel, W.; Wang, D.; Wieser, M.; Mokroš, M.; Surový, P.; Koreň, M.; Tomaštík, J.; Pfeifer, N.; Hollaus, M. Terrestrial structure from motion photogrammetry for deriving forest inventory data. Remote Sens. 2019, 11, 950. [Google Scholar] [CrossRef]
  80. Weiskittel, A.R.; Hann, D.W.; Kershaw, J.A., Jr.; Vanclay, J.K. Forest Growth and Yield Modeling; John Wiley & Sons: Hoboken, NJ, USA, 2011; ISBN 1119971500. [Google Scholar]
  81. Hui, Z.; Cai, Z.; Liu, B.; Li, D.; Liu, H.; Li, Z. A Self-Adaptive Optimization Individual Tree Modeling Method for Terrestrial LiDAR Point Clouds. Remote Sens. 2022, 14, 2545. [Google Scholar] [CrossRef]
  82. An, L.; Wang, J.; Xiong, N.; Wang, Y.; You, J.; Li, H. Assessment of Permeability Windbreak Forests with Different Porosities Based on Laser Scanning and Computational Fluid Dynamics. Remote Sens. 2022, 14, 3331. [Google Scholar] [CrossRef]
  83. Chiappini, S.; Giorgi, V.; Neri, D.; Galli, A.; Marcheggiani, E.; Malinverni, E.S.; Pierdicca, R.; Balestra, M. Innovation in olive-growing by Proximal sensing LiDAR for tree volume estimation. In Proceedings of the 2022 IEEE Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Perugia, Italy, 3–5 November 2022; IEEE: Manhattan, NY, USA, 2022; pp. 213–217. [Google Scholar]
  84. Yan, Z.; Liu, R.; Cheng, L.; Zhou, X.; Ruan, X. A Concave Hull Methodology for Calculating the Crown Volume of Individual Trees Based on Vehicle-Borne LiDAR Data. Remote Sens. 2019, 11, 623. [Google Scholar] [CrossRef]
  85. Korhonen, L.; Vauhkonen, J.; Virolainen, A.; Hovi, A.; Korpela, I. Estimation of tree crown volume from airborne lidar data using computational geometry. Int. J. Remote Sens. 2013, 34, 7236–7248. [Google Scholar] [CrossRef]
  86. Liu, X.; Wang, Y.; Kang, F.; Yue, Y.; Zheng, Y. Canopy parameter estimation of citrus grandis var. Longanyou based on Lidar 3d point clouds. Remote Sens. 2021, 13, 1859. [Google Scholar] [CrossRef]
  87. Matasov, V.; Belelli Marchesini, L.; Yaroslavtsev, A.; Sala, G.; Fareeva, O.; Seregin, I.; Castaldi, S.; Vasenev, V.; Valentini, R. IoT monitoring of urban tree ecosystem services: Possibilities and challenges. Forests 2020, 11, 775. [Google Scholar] [CrossRef]
  88. Song, P.; Kim, G.; Mayer, A.; He, R.; Tian, G. Assessing the ecosystem services of various types of urban green spaces based on i-Tree Eco. Sustainability 2020, 12, 1630. [Google Scholar] [CrossRef]
Figure 1. Study area location and a view of the three monumental chestnut trees in the winter season.
Figure 2. Complete workflow of the experiment: the data collection with passive (images) and active (LiDAR) sensors, and georeferencing with GNSS or Total Station coordinates. The data processing with dedicated software produced the 3D tree models that allowed us to estimate the main structural parameters (DBH = diameter at breast height, WV = wood volume, H = total tree height, CV = crown volume).
Figure 3. (a) The MLS survey of a target tree (MT1) and its surroundings in winter. (b) Top view of the Kaarta Stencil-2 point cloud around one tree (black line) with the loop-closed trajectory. The line color changes with the trajectory time, from red at the beginning to white at the end. The blue and green colors represent the point intensity recorded during the MLS point cloud acquisition.
Figure 4. Detailed view of the MT2 dense clouds obtained through the AMP software. (a) SLR dense cloud representing the monumental trunk with pronounced buttresses and visible defects and (b) the UAV dense cloud representing the canopy top which was not detectable with the MLS in the summer survey.
Figure 5. A view of the 3D tree models of the three monumental trees (MT1, MT2 and MT3) obtained by fusing the SLR dense cloud, the MLS point cloud and the UAV dense cloud.
Figure 6. Mesh of the DBH sections from the SLR dense clouds in the three monumental trees (MT1, MT2 and MT3).
Figure 7. The 3D Forest output of the three 3D tree models (MT1, MT2 and MT3) with the TH (m) and the CBA (m2).
Figure 8. Mesh representation of the three MLS tree skeletons in the winter surveys (MT1, MT2 and MT3) by AdQSM (visualization in CloudCompare).
Figure 9. The three crown meshes reconstructed with the alpha shape (α: 0.25) algorithm.
Table 1. SfM errors in both meters and pixels for the three monumental tree surveys using UAV and SLR cameras.

Tree ID | Device           | Output          | Error [m] | Error [pix]
MT1     | SLR Sony Alpha77 | SLR dense cloud | 0.007465  | 6.576
MT2     | SLR Sony Alpha77 | SLR dense cloud | 0.007528  | 7.669
MT3     | SLR Sony Alpha77 | SLR dense cloud | 0.007391  | 4.384
MT1     | DJI Mavic Mini   | UAV dense cloud | 0.068557  | 0.731
MT2     | DJI Mavic Mini   | UAV dense cloud | 0.029017  | 0.555
MT3     | DJI Mavic Mini   | UAV dense cloud | 0.078677  | 0.910
Table 2. Errors in meters, scan time and trajectory length of the MLS point clouds in both winter and summer surveys.

Tree ID | Season | Errors [m] | Scan Time (h:m:s) | Trajectory Length [m]
MT1     | Summer | 0.045934   | 00:04:13          | 155.8
MT2     | Summer | 0.070326   | 00:02:23          | 103.5
MT3     | Summer | 0.038402   | 00:04:08          | 180.8
MT1     | Winter | 0.055897   | 00:04:32          | 190.6
MT2     | Winter | 0.056023   | 00:05:10          | 176.8
MT3     | Winter | 0.021010   | 00:03:46          | 121.4