
Comparative Analysis of Novel View Synthesis and Photogrammetry for 3D Forest Stand Reconstruction and Extraction of Individual Tree Parameters

The Academy of Digital China, Fuzhou University, Fuzhou 350108, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(9), 1520; https://doi.org/10.3390/rs17091520
Submission received: 6 March 2025 / Revised: 11 April 2025 / Accepted: 23 April 2025 / Published: 25 April 2025
(This article belongs to the Section AI Remote Sensing)

Abstract

The accurate and efficient 3D reconstruction of trees benefits urban forest resource assessment and management. Close-range photogrammetry (CRP) is widely used for 3D reconstruction of forest scenes, but practical forestry applications still face challenges such as low reconstruction efficiency and poor reconstruction quality. Recently, novel view synthesis (NVS) technologies, such as neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS), have shown great potential for the 3D reconstruction of plants from a limited number of images. However, existing research typically focuses on small plants in orchards or on individual trees, and it remains uncertain whether these techniques can be effectively applied to larger, more complex stands or forest scenes. In this study, we collected sequential images of urban forest plots with varying levels of complexity using imaging devices of different resolutions (smartphone cameras and a UAV). The plots included one with sparse, leafless trees and another with dense foliage and heavier occlusion. We then performed dense reconstruction of the forest stands using the NeRF and 3DGS methods and compared the resulting point cloud models with those obtained through photogrammetric reconstruction and laser scanning. The results show that, compared with photogrammetry, the NVS methods have a significant advantage in reconstruction efficiency. The photogrammetric method is suitable for relatively simple stands but adapts poorly to complex ones, yielding tree point cloud models with excessive canopy noise and erroneously reconstructed trees with duplicated trunks and canopies. In contrast, NeRF adapts better to complex stands and yields tree point clouds of the highest quality, with more detailed trunk and canopy information, although it can produce reconstruction errors in ground areas when the input views are limited.
The 3DGS method has a relatively poor capability to generate dense point clouds, producing models with low point density, particularly sparse points in the trunk areas, which reduces the accuracy of diameter at breast height (DBH) estimation. Tree height and crown diameter can be extracted from the point clouds reconstructed by all three methods, with NeRF achieving the highest tree height accuracy; however, DBH extracted from photogrammetric point clouds remains more accurate than that from NeRF point clouds. Moreover, tree parameters extracted from reconstructions of higher-resolution, multi-perspective drone images are more accurate than those from ground-level smartphone images. These findings confirm that NVS methods have significant application potential for the 3D reconstruction of urban forests.
Keywords: 3D reconstruction; photogrammetry; neural radiance fields (NeRF); 3D Gaussian splatting (3DGS); computer vision; deep learning; urban forest
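The individual tree parameters discussed above (DBH and tree height) are commonly derived from a reconstructed point cloud by slicing the stem at breast height and fitting a circle, and by taking the vertical extent of the cloud. The abstract does not specify the authors' extraction procedure; the following is a minimal sketch of one standard approach, assuming a single-tree point cloud in metres with the z-axis vertical. The function name, slice width, and algebraic (Kåsa) circle fit are illustrative choices, not the paper's method.

```python
import numpy as np

def extract_dbh_and_height(points, breast_height=1.3, slice_half_width=0.05):
    """Estimate DBH (m) from a thin stem slice at breast height via a
    least-squares circle fit, and tree height as the cloud's vertical extent.

    points: (N, 3) array of x, y, z coordinates in metres, z-axis vertical.
    """
    z = points[:, 2]
    height = z.max() - z.min()  # tree height: top of crown minus ground level

    # Select points in a thin horizontal slice around breast height (1.3 m
    # above the lowest point, taken here as a proxy for the ground).
    stem_slice = points[np.abs(z - z.min() - breast_height) < slice_half_width]
    x, y = stem_slice[:, 0], stem_slice[:, 1]

    # Algebraic (Kasa) circle fit: solve x^2 + y^2 = a*x + b*y + c
    # in the least-squares sense; centre = (a/2, b/2), r^2 = c + a^2/4 + b^2/4.
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    radius = np.sqrt(c + (a / 2) ** 2 + (b / 2) ** 2)

    return 2.0 * radius, height
```

In practice the slice from a noisy NeRF or 3DGS point cloud would need outlier filtering (e.g. RANSAC) before fitting, which is precisely why the sparse trunk points reported for 3DGS degrade DBH accuracy.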

Share and Cite

MDPI and ACS Style

Tian, G.; Chen, C.; Huang, H. Comparative Analysis of Novel View Synthesis and Photogrammetry for 3D Forest Stand Reconstruction and Extraction of Individual Tree Parameters. Remote Sens. 2025, 17, 1520. https://doi.org/10.3390/rs17091520


