Search Results (112)

Search Parameters:
Keywords = structure-from-motion/multi-view-stereo

21 pages, 4674 KB  
Article
CLCFM3: A 3D Reconstruction Algorithm Based on Photogrammetry for High-Precision Whole Plant Sensing Using All-Around Images
by Atsushi Hayashi, Nobuo Kochi, Kunihiro Kodama, Sachiko Isobe and Takanari Tanabata
Sensors 2025, 25(18), 5829; https://doi.org/10.3390/s25185829 - 18 Sep 2025
Viewed by 247
Abstract
This research aims to develop a novel technique to acquire a large amount of high-density, high-precision 3D point cloud data for plant phenotyping using photogrammetry technology. The complexity of plant structures, characterized by overlapping thin parts such as leaves and stems, makes it difficult to reconstruct accurate 3D point clouds. One challenge in this regard is occlusion, where points in the 3D point cloud cannot be obtained due to overlapping parts, preventing accurate point capture. Another is the generation of erroneous points in non-existent locations due to image-matching errors along object outlines. To overcome these challenges, we propose a 3D point cloud reconstruction method named closed-loop coarse-to-fine method with multi-masked matching (CLCFM3). This method repeatedly executes a process that generates point clouds locally to suppress occlusion (multi-matching) and a process that removes noise points using a mask image (masked matching). Furthermore, we propose the closed-loop coarse-to-fine method (CLCFM) to improve the accuracy of structure from motion, which is essential for implementing the proposed point cloud reconstruction method. CLCFM solves loop closure by performing coarse-to-fine camera position estimation. By facilitating the acquisition of high-density, high-precision 3D data on a large number of plant bodies, as is necessary for research activities, this approach is expected to enable comparative analysis of visible phenotypes in the growth process of a wide range of plant species based on 3D information. Full article
(This article belongs to the Section Remote Sensors)
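The masked-matching step described above can be sketched as follows. This is a minimal illustration of the principle (reject reconstructed points whose reprojection misses the plant mask in most views), not the authors' implementation; the camera matrices, masks, and vote threshold are hypothetical.

```python
import numpy as np

def filter_points_by_masks(points, projections, masks, min_votes=2):
    """Keep 3D points whose reprojection falls on the plant mask in at
    least `min_votes` views; other points are treated as matching noise."""
    votes = np.zeros(len(points), dtype=int)
    homog = np.hstack([points, np.ones((len(points), 1))])
    for P, mask in zip(projections, masks):
        uvw = homog @ P.T
        uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        idx = np.nonzero(inside)[0]
        votes[idx] += mask[v[idx], u[idx]]
    return points[votes >= min_votes]

# toy setup: two identical cameras that map (x, y, z) -> (x, y)
P = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 0., 1.]])
masks = [np.ones((10, 10), dtype=int)] * 2
pts = np.array([[2., 3., 1.],     # projects inside both masks -> kept
                [20., 20., 1.]])  # projects outside the images -> rejected
kept = filter_points_by_masks(pts, [P, P], masks)
```

The real method votes across many all-around views, so a noise point generated by an image-matching error along an outline rarely survives the mask check in enough views.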

25 pages, 1596 KB  
Review
A Survey of 3D Reconstruction: The Evolution from Multi-View Geometry to NeRF and 3DGS
by Shuai Liu, Mengmeng Yang, Tingyan Xing and Ran Yang
Sensors 2025, 25(18), 5748; https://doi.org/10.3390/s25185748 - 15 Sep 2025
Viewed by 1130
Abstract
Three-dimensional (3D) reconstruction technology is not only a core technology in computer vision and graphics, but also a key force driving the flourishing development of many cutting-edge applications such as virtual reality (VR), augmented reality (AR), autonomous driving, and the digital earth. With the rise of novel view synthesis technologies such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), 3D reconstruction is facing unprecedented development opportunities. This article introduces the basic principles of traditional 3D reconstruction methods, including Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques, and analyzes the limitations of these methods in dealing with complex scenes and dynamic environments. Focusing on implicit 3D scene reconstruction techniques related to NeRF, this paper explores the advantages and challenges of using deep neural networks to learn and generate high-quality 3D scene renderings from limited viewpoints. Based on the principles and characteristics of 3DGS-related technologies that have emerged in recent years, the latest progress and innovations in rendering quality, rendering efficiency, sparse-view input support, and dynamic 3D reconstruction are analyzed. Finally, the main challenges and opportunities facing current 3D reconstruction and novel view synthesis technology are discussed in depth, together with possible future technological breakthroughs and development directions. This article aims to provide a comprehensive perspective for researchers in 3D reconstruction technology in fields such as digital twins and smart cities, while opening up new ideas and paths for future technological innovation and widespread application. Full article
(This article belongs to the Section Sensing and Imaging)

22 pages, 6994 KB  
Article
Dynamic Quantification of PISHA Sandstone Rill Erosion Using the SFM-MVS Method Under Laboratory Rainfall Simulation
by Yuhang Liu, Sui Zhang, Jiwei Wang, Rongyan Gao, Jiaxuan Liu, Siqi Liu, Xuebing Hu, Jianrong Liu and Ruiqiang Bai
Atmosphere 2025, 16(9), 1045; https://doi.org/10.3390/atmos16091045 - 2 Sep 2025
Viewed by 549
Abstract
Soil erosion is a critical ecological challenge in semi-arid regions of China, particularly in the Yellow River Basin, where Pisha sandstone slopes undergo rapid degradation. Rill erosion, driven by rainfall and overland flow, destabilizes slopes and accelerates ecosystem degradation. To address this, we developed a multi-view stereo observation system that integrates Structure-from-Motion (SFM) and multi-view stereo (MVS) for high-precision, dynamic monitoring of rill erosion. Laboratory rainfall simulations were conducted under four inflow rates (2–8 L/min), corresponding to rainfall intensities of 30–120 mm/h. The erosion process was divided into four phases: infiltration and particle rolling, splash and sheet erosion, incipient rill incision, and mature rill networks, with erosion concentrated in the middle and lower slope sections. The SFM-MVS system achieved planimetric and vertical errors of 3.1 mm and 3.7 mm, respectively, providing approximately 25% higher accuracy and nearly 50% faster processing compared with LiDAR and UAV photogrammetry. Infiltration stabilized at approximately 6.2 mm/h under low flows (2 L/min) but declined to less than 4 mm/h under high flows (≥6 L/min), leading to intensified rill incision and coarse-particle transport (up to 21.4% of sediment). These results demonstrate that the SFM-MVS system offers a scalable and non-invasive method for quantifying erosion dynamics, with direct implications for field monitoring, ecological restoration, and soil conservation planning. Full article
(This article belongs to the Special Issue Research About Permafrost–Atmosphere Interactions (2nd Edition))
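Erosion monitoring with SfM-MVS typically quantifies change by differencing successive DEMs of the slope surface. A minimal sketch of that step, with hypothetical DEM values and cell size (not the authors' data or code):

```python
import numpy as np

def erosion_volume(dem_before, dem_after, cell_area):
    """Net eroded volume (m^3) from two co-registered DEMs:
    counts only cells where the surface lowered between surveys."""
    dz = dem_before - dem_after
    lowered = np.clip(dz, 0.0, None)   # lowering only; deposition ignored
    return float(lowered.sum() * cell_area)

# hypothetical 3x3 DEMs (metres) on 0.01 m^2 grid cells
before = np.ones((3, 3))
after = before.copy()
after[1, 1] -= 0.2                     # a 0.2 m deep rill pixel
vol = erosion_volume(before, after, cell_area=0.01)   # 0.2 m * 0.01 m^2
```

In practice the DEM pair must be co-registered to within the system's reported errors (3.1 mm planimetric, 3.7 mm vertical here), otherwise registration bias masquerades as erosion.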

28 pages, 4026 KB  
Article
Multi-Trait Phenotypic Analysis and Biomass Estimation of Lettuce Cultivars Based on SFM-MVS
by Tiezhu Li, Yixue Zhang, Lian Hu, Yiqiu Zhao, Zongyao Cai, Tingting Yu and Xiaodong Zhang
Agriculture 2025, 15(15), 1662; https://doi.org/10.3390/agriculture15151662 - 1 Aug 2025
Viewed by 561
Abstract
To address the problems of traditional methods that rely on destructive sampling, the poor adaptability of fixed equipment, and the susceptibility of single-view measurements to occlusion, a non-destructive and portable device for three-dimensional phenotyping and biomass detection in lettuce was developed. Based on the Structure-from-Motion Multi-View Stereo (SFM-MVS) algorithms, a high-precision three-dimensional point cloud model was reconstructed from multi-view RGB image sequences, and 12 phenotypic parameters, such as plant height and crown width, were accurately extracted. Regression analyses of plant height, crown width, and crown height yielded R2 values of 0.98, 0.99, and 0.99, with RMSE values of 2.26 mm, 1.74 mm, and 1.69 mm, respectively. On this basis, four biomass prediction models were developed using Adaptive Boosting (AdaBoost), Support Vector Regression (SVR), Gradient Boosting Decision Tree (GBDT), and Random Forest Regression (RFR). The results indicated that the RFR model based on the projected convex hull area, point cloud convex hull surface area, and projected convex hull perimeter performed best, with an R2 of 0.90, an RMSE of 2.63 g, and an RMSEn of 9.53%, indicating that RFR was able to accurately estimate lettuce biomass. This research achieves three-dimensional reconstruction and accurate biomass prediction of greenhouse-grown lettuce, and provides a portable, lightweight solution for crop growth monitoring in protected cultivation. Full article
(This article belongs to the Section Crop Production)
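The best-performing model here is a random forest regression on three convex-hull features. A minimal sketch of that modeling pattern with entirely synthetic (hypothetical) features and biomass values, using scikit-learn's `RandomForestRegressor`:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# hypothetical features: projected hull area, hull surface area, hull perimeter
X = rng.uniform([50, 100, 30], [400, 900, 90], size=(120, 3))
# synthetic biomass (g): mostly driven by the two area features, plus noise
y = 0.05 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0.0, 1.0, 120)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:90], y[:90])                 # train on 90 plants
pred = model.predict(X[90:])              # evaluate on 30 held-out plants
r2 = r2_score(y[90:], pred)
rmse = mean_squared_error(y[90:], pred) ** 0.5
```

The paper additionally reports a normalized RMSE (RMSEn); dividing `rmse` by the mean of the held-out targets gives the same style of metric.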

17 pages, 610 KB  
Review
Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review
by Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen and Radoslav Miltchev
Lights 2025, 1(1), 1; https://doi.org/10.3390/lights1010001 - 14 Jul 2025
Viewed by 770
Abstract
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality and how different approaches attempt to mitigate these effects. Furthermore, we uncover fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction quality, and offers future research directions that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems. Full article

22 pages, 64906 KB  
Article
Comparative Assessment of Neural Radiance Fields and 3D Gaussian Splatting for Point Cloud Generation from UAV Imagery
by Muhammed Enes Atik
Sensors 2025, 25(10), 2995; https://doi.org/10.3390/s25102995 - 9 May 2025
Viewed by 2156
Abstract
Point clouds continue to be the main data source in 3D modeling studies with unmanned aerial vehicle (UAV) images. Structure-from-Motion (SfM) and MultiView Stereo (MVS) have high time costs for point cloud generation, especially in large data sets. For this reason, state-of-the-art methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have emerged as powerful alternatives for point cloud generation. This paper explores the performance of NeRF and 3DGS methods in generating point clouds from UAV images. For this purpose, the Nerfacto, Instant-NGP, and Splatfacto methods developed in the Nerfstudio framework were used. The obtained point clouds were evaluated by taking the point cloud produced with the photogrammetric method as reference. In this study, the effects of image size and iteration number on the performance of the algorithms were investigated in two different study areas. According to the results, Splatfacto demonstrates promising capabilities in addressing challenges related to scene complexity, rendering efficiency, and accuracy in UAV imagery. Full article
(This article belongs to the Special Issue Stereo Vision Sensing and Image Processing)
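Evaluating a NeRF- or 3DGS-derived point cloud against a photogrammetric reference is commonly done with nearest-neighbour cloud-to-cloud distances. A minimal sketch of that metric (the paper does not specify its exact evaluation code; the toy clouds below are hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rmse(evaluated, reference):
    """RMSE of nearest-neighbour distances from each evaluated point
    (e.g. NeRF- or 3DGS-derived) to the reference photogrammetric cloud."""
    dists, _ = cKDTree(reference).query(evaluated)
    return float(np.sqrt(np.mean(dists ** 2)))

reference = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
evaluated = reference + 0.1          # uniform 0.1-unit offset on each axis
rmse = cloud_to_cloud_rmse(evaluated, reference)   # sqrt(3) * 0.1
```

Note the metric is asymmetric (evaluated-to-reference); robust comparisons usually report both directions or a trimmed mean to dampen outlier points.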

20 pages, 10100 KB  
Article
A Method for Identifying Picking Points in Safflower Point Clouds Based on an Improved PointNet++ Network
by Baojian Ma, Hao Xia, Yun Ge, He Zhang, Zhenghao Wu, Min Li and Dongyun Wang
Agronomy 2025, 15(5), 1125; https://doi.org/10.3390/agronomy15051125 - 2 May 2025
Cited by 2 | Viewed by 933
Abstract
To address the challenge of precise picking point localization in morphologically diverse safflower plants, this study proposes PointSafNet—a novel three-stage 3D point cloud analysis framework with distinct architectural and methodological innovations. In Stage I, we introduce a multi-view reconstruction pipeline integrating Structure from Motion (SfM) and Multi-View Stereo (MVS) to generate high-fidelity 3D plant point clouds. Stage II develops a dual-branch architecture employing Star modules for multi-scale hierarchical geometric feature extraction at the organ level (filaments and fruit balls), complemented by a Context-Anchored Attention (CAA) mechanism to capture long-range contextual information. This synergistic feature learning approach addresses morphological variations, achieving 86.83% segmentation accuracy (surpassing PointNet++ by 7.37%) and outperforming conventional point cloud models. Stage III proposes an optimized geometric analysis pipeline combining dual-centroid spatial vectorization with Oriented Bounding Box (OBB)-based proximity analysis, resolving picking coordinate localization across diverse plants with 90% positioning accuracy and 68.82% mean IoU (13.71% improvement). The experiments demonstrate that PointSafNet systematically integrates 3D reconstruction, hierarchical feature learning, and geometric reasoning to provide visual guidance for robotic harvesting systems in complex plant canopies. The framework’s dual emphasis on architectural innovation and geometric modeling offers a generalizable solution for precision agriculture tasks involving morphologically diverse safflowers. Full article
(This article belongs to the Section Precision and Digital Agriculture)
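Stage III's OBB-based proximity analysis relies on fitting oriented bounding boxes to segmented organs. A common way to obtain an OBB is from the principal axes of the point set; the sketch below is that generic PCA construction on a hypothetical elongated "filament" cloud, not the authors' exact procedure:

```python
import numpy as np

def oriented_bounding_box(points):
    """PCA-based OBB: returns centre, axes (rows, orthonormal), extents."""
    mean = points.mean(axis=0)
    centred = points - mean
    # principal axes = right singular vectors of the centred cloud
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    local = centred @ vt.T               # coordinates in the OBB frame
    mins, maxs = local.min(axis=0), local.max(axis=0)
    extents = maxs - mins
    centre = mean + ((mins + maxs) / 2) @ vt
    return centre, vt, extents

# hypothetical planar, x-elongated organ cloud
pts = np.array([[0., 0., 0.], [4., 0., 0.], [2., 1., 0.], [2., -1., 0.]])
centre, axes, extents = oriented_bounding_box(pts)   # extents ~ (4, 2, 0)
```

SVD orders the axes by variance, so `extents[0]` is the organ's dominant dimension; proximity between organs can then be tested against box faces instead of raw point sets.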

30 pages, 33973 KB  
Article
Research on Rapid and Accurate 3D Reconstruction Algorithms Based on Multi-View Images
by Lihong Yang, Hang Ge, Zhiqiang Yang, Jia He, Lei Gong, Wanjun Wang, Yao Li, Liguo Wang and Zhili Chen
Appl. Sci. 2025, 15(8), 4088; https://doi.org/10.3390/app15084088 - 8 Apr 2025
Viewed by 1479
Abstract
Three-dimensional reconstruction entails the development of mathematical models of three-dimensional objects that are suitable for computational representation and processing. This technique constructs realistic 3D models from images and has significant practical applications across various fields. This study proposes a rapid and precise multi-view 3D reconstruction method to address the challenges of low reconstruction efficiency and inadequate, poor-quality point cloud generation in incremental structure-from-motion (SfM) algorithms in multi-view geometry. The methodology involves capturing a series of overlapping images of a campus scene. We employed the scale-invariant feature transform (SIFT) algorithm to extract feature points from each image, applied the KD-Tree algorithm for inter-image matching, and enhanced autonomous threshold adjustment by utilizing the random sample consensus (RANSAC) algorithm to eliminate mismatches, thereby improving feature-matching accuracy and the number of matched point pairs. Additionally, we developed a feature-matching strategy based on similarity, which optimizes the pairwise matching process within the incremental structure-from-motion algorithm. This approach decreased the number of matches and enhanced both algorithmic efficiency and model reconstruction accuracy. For dense reconstruction, we utilized the patch-based multi-view stereo (PMVS) algorithm, which reconstructs the scene as a set of small surface patches (facets). The results indicate that our proposed method achieves a higher number of reconstructed feature points and improves algorithmic efficiency by approximately ten times compared to the original incremental reconstruction algorithm. Consequently, the generated point cloud data are more detailed and the textures are clearer, demonstrating that our method is an effective solution for three-dimensional reconstruction. Full article
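The RANSAC mismatch-rejection step above is worth a concrete sketch. Real pipelines fit a fundamental matrix or homography to SIFT correspondences; the toy below uses a pure 2D translation as the model so the principle (sample a minimal set, count inliers, refit) stays visible in a few lines. Data and model are hypothetical, not the paper's:

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, tol=1.0, seed=0):
    """RANSAC on putative matches with a pure-translation model.
    Returns the refit translation and the inlier mask (mismatches rejected)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the model on all inliers of the best hypothesis
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (30, 2))            # keypoints in image A
dst = src + np.array([5.0, -3.0])             # true inter-image shift
dst[:5] = rng.uniform(0, 100, (5, 2))         # five gross mismatches
t, inliers = ransac_translation(src, dst)
```

With a fundamental-matrix model the minimal sample is 7 or 8 matches and the residual is the epipolar distance, but the sample/score/refit loop is identical.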

21 pages, 4483 KB  
Article
DEM Generation Incorporating River Channels in Data-Scarce Contexts: The “Fluvial Domain Method”
by Jairo R. Escobar Villanueva, Jhonny I. Pérez-Montiel and Andrea Gianni Cristoforo Nardini
Hydrology 2025, 12(2), 33; https://doi.org/10.3390/hydrology12020033 - 14 Feb 2025
Cited by 1 | Viewed by 1924
Abstract
This paper presents a novel methodology to generate Digital Elevation Models (DEMs) in flat areas, incorporating river channels from relatively coarse initial data. The technique primarily utilizes filtered dense point clouds derived from SfM-MVS (Structure from Motion-Multi-View Stereo) photogrammetry of available crewed aerial imagery datasets. The methodology operates under the assumption that the aerial survey was carried out during low-flow or drought conditions so that the dry (or almost dry) riverbed is detected, although in an imprecise way. Direct interpolation of the detected elevation points yields unacceptable river channel bottom profiles (often exhibiting unrealistic artifacts) and even distorts the floodplain. In our Fluvial Domain Method, channel bottoms are represented as “highways”, perhaps overlooking their (unknown) detailed morphology but gaining general topographic consistency. For instance, we observed an 11.7% discrepancy in the river channel long profile (with respect to the measured cross-sections) and a 0.38 m RMSE in the floodplain (with respect to the GNSS-RTK measurements). Unlike conventional methods that utilize active sensors (satellite and airborne LiDAR) or classic topographic surveys—each with precision, cost, or labor limitations—the proposed approach offers a more accessible, cost-effective, and flexible solution that is particularly well suited to cases with scarce base information and financial resources. However, the method’s performance is inherently limited by the quality of input data and the simplification of complex channel morphologies; it is most suitable for cases where high-resolution geomorphological detail is not critical or where direct data acquisition is not feasible. The resulting DEM, incorporating a generalized channel representation, is well suited for flood hazard modeling. A case study of the Ranchería river delta in the Northern Colombian Caribbean demonstrates the methodology. Full article
(This article belongs to the Special Issue Hydrological Modeling and Sustainable Water Resources Management)
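The "highway" idea—replacing a noisy, artifact-ridden channel-bottom profile with a generalized, topographically consistent one—can be sketched as a linear long-profile fit with a monotonicity constraint. This is an interpretation of the concept on hypothetical data, not the published algorithm:

```python
import numpy as np

def highway_profile(dist, elev):
    """Generalized channel long profile: a linear fit of bed elevation
    vs. downstream distance, forced to be non-increasing downstream."""
    slope, intercept = np.polyfit(dist, elev, 1)
    prof = intercept + slope * dist
    return np.minimum.accumulate(prof)    # bed never rises downstream

# hypothetical noisy channel-bottom samples from the SfM-MVS point cloud
dist = np.array([0., 10., 20., 30., 40.])       # downstream distance (m)
elev = np.array([5.0, 4.6, 5.3, 4.1, 3.8])      # noisy bed elevations (m)
prof = highway_profile(dist, elev)
```

The interpolation artifact at 20 m (a bed higher than both neighbours) disappears, at the cost of the detailed morphology—exactly the trade-off the abstract describes.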

20 pages, 7029 KB  
Article
Three-Dimensional Reconstruction, Phenotypic Traits Extraction, and Yield Estimation of Shiitake Mushrooms Based on Structure from Motion and Multi-View Stereo
by Xingmei Xu, Jiayuan Li, Jing Zhou, Puyu Feng, Helong Yu and Yuntao Ma
Agriculture 2025, 15(3), 298; https://doi.org/10.3390/agriculture15030298 - 30 Jan 2025
Cited by 6 | Viewed by 1534
Abstract
Phenotypic traits of fungi and their automated extraction are crucial for evaluating genetic diversity, breeding new varieties, and estimating yield. However, research on the high-throughput, rapid, and non-destructive extraction of fungal phenotypic traits using 3D point clouds remains limited. In this study, a smartphone was used to capture multi-view images of shiitake mushrooms (Lentinula edodes) from three different heights and angles, employing the YOLOv8x model to segment the primary image regions. The segmented images were reconstructed in 3D using Structure from Motion (SfM) and Multi-View Stereo (MVS). To automatically segment individual mushroom instances, we developed a CP-PointNet++ network integrated with clustering methods, achieving an overall accuracy (OA) of 97.45% in segmentation. The computed phenotypic traits correlated strongly with manual measurements, yielding R2 > 0.8 and nRMSE < 0.09 for the pileus transverse and longitudinal diameters, R2 = 0.53 and RMSE = 3.26 mm for the pileus height, R2 = 0.79 and nRMSE = 0.12 for stipe diameter, and R2 = 0.65 and RMSE = 4.98 mm for the stipe height. Using these parameters, yield estimation was performed using PLSR, SVR, RF, and GRNN machine learning models, with GRNN demonstrating superior performance (R2 = 0.91). This approach is also adaptable to extracting phenotypic traits of other fungi, providing valuable support for fungal breeding initiatives. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
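The best yield model here is a GRNN, which at its core is Gaussian-kernel-weighted averaging of training targets (a Nadaraya–Watson estimator with one smoothing parameter). A minimal numpy sketch on hypothetical phenotype-to-yield data; the bandwidth and data are illustrative only:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network prediction: each query output is
    the Gaussian-kernel-weighted average of all training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# hypothetical single-feature phenotype -> yield mapping
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([10.0, 20.0, 30.0, 40.0])
pred = grnn_predict(X, y, np.array([[2.0]]), sigma=0.3)   # ~20.0
```

Unlike RF or SVR, a GRNN has no iterative training—only `sigma` is tuned—which is one reason it suits small phenotyping datasets.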

25 pages, 14926 KB  
Article
Plant Height Estimation in Corn Fields Based on Column Space Segmentation Algorithm
by Huazhe Zhang, Nian Liu, Juan Xia, Lejun Chen and Shengde Chen
Agriculture 2025, 15(3), 236; https://doi.org/10.3390/agriculture15030236 - 22 Jan 2025
Cited by 1 | Viewed by 1608
Abstract
Plant genomics has progressed significantly due to advances in information technology, but phenotypic measurement technology has not kept pace, hindering plant breeding. As maize is one of China’s three main grain crops, accurately measuring plant height is crucial for assessing crop growth and productivity. This study addresses the challenges of plant segmentation and inaccurate plant height extraction in maize populations under field conditions. A three-dimensional dense point cloud was reconstructed using the structure from motion–multi-view stereo (SFM-MVS) method, based on multi-view image sequences captured by an unmanned aerial vehicle (UAV). To improve plant segmentation, we propose a column space approximate segmentation algorithm, which combines the column space method with the enclosing box technique. The proposed method achieved a segmentation accuracy exceeding 90% in dense canopy conditions, significantly outperforming traditional algorithms, such as region growing (80%) and Euclidean clustering (75%). Furthermore, the extracted plant heights demonstrated a high correlation with manual measurements, with R2 values ranging from 0.8884 to 0.9989 and RMSE values as low as 0.0148 m. However, the scalability of the method for larger agricultural operations may face challenges due to computational demands when processing large-scale datasets and potential performance variability under different environmental conditions. Addressing these issues through algorithm optimization, parallel processing, and the integration of additional data sources such as multispectral or LiDAR data could enhance its scalability and robustness. The results demonstrate that the method can accurately reflect the heights of maize plants, providing a reliable solution for large-scale, field-based maize phenotyping. The method has potential applications in high-throughput monitoring of crop phenotypes and precision agriculture. Full article
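Once a plant is segmented out of the canopy cloud, height extraction typically reduces to differencing a high elevation percentile against ground level, so stray points do not inflate the estimate. A generic sketch of that step on a hypothetical point column (the paper's exact extraction rule is not specified here):

```python
import numpy as np

def plant_height(points_z, ground_z=None, upper_pct=99.5):
    """Plant height = high percentile of point elevations minus ground level;
    percentiles make the estimate robust to isolated noise points."""
    z = np.asarray(points_z)
    if ground_z is None:
        ground_z = np.percentile(z, 1)   # lowest points approximate the soil
    return float(np.percentile(z, upper_pct) - ground_z)

# hypothetical maize point column: soil near 0 m, canopy top near 2.1 m
z = np.concatenate([np.full(50, 0.01), np.linspace(0.1, 2.1, 200)])
h = plant_height(z)                      # just under 2.1 m
```

Using the 100th percentile instead would make a single misreconstructed point dominate the height, which is why percentile cutoffs are the usual choice in UAV phenotyping.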

21 pages, 11620 KB  
Article
Performance Evaluation and Optimization of 3D Gaussian Splatting in Indoor Scene Generation and Rendering
by Xinjian Fang, Yingdan Zhang, Hao Tan, Chao Liu and Xu Yang
ISPRS Int. J. Geo-Inf. 2025, 14(1), 21; https://doi.org/10.3390/ijgi14010021 - 7 Jan 2025
Cited by 1 | Viewed by 5807
Abstract
This study addresses the prevalent challenges of inefficiency and suboptimal quality in indoor 3D scene generation and rendering by proposing a parameter-tuning strategy for 3D Gaussian Splatting (3DGS). Through a systematic quantitative analysis of various performance indicators under differing resolution conditions, threshold settings for the average magnitude of spatial position gradients, and adjustments to the scaling learning rate, the optimal parameter configuration for the 3DGS model, specifically tailored for indoor modeling scenarios, is determined. Firstly, utilizing a self-collected dataset, a comprehensive comparison was conducted among COLMAP (V3.7), an open-source software package based on Structure from Motion and Multi-View Stereo (SfM-MVS); Context Capture (V10.2) (abbreviated as CC), a software package utilizing oblique photography algorithms; Neural Radiance Fields (NeRF); and the currently renowned 3DGS algorithm. The key dimensions of focus included the number of images, rendering time, and overall rendering effectiveness. Subsequently, based on this comparison, rigorous qualitative and quantitative evaluations were further conducted on the overall performance and detail processing capabilities of the 3DGS algorithm. Finally, to meet the specific requirements of indoor scene modeling and rendering, targeted parameter tuning was performed on the algorithm. The results demonstrate significant performance improvements in the optimized 3DGS algorithm: the PSNR metric increases by 4.3%, and the SSIM metric improves by 0.2%. The experimental results demonstrate that the improved 3DGS algorithm exhibits superior expressiveness in indoor scene rendering. Full article
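The PSNR metric reported above is straightforward to compute and worth pinning down, since "a 4.3% increase" is relative to a dB value. A standard definition on hypothetical 8-bit images:

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((img_a.astype(float) - img_b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((4, 4), 100.0)
b = a + 10.0                  # uniform error of 10 grey levels -> MSE = 100
value = psnr(a, b)            # 10 * log10(255^2 / 100) ~ 28.13 dB
```

SSIM, the paper's second metric, is structural rather than pixelwise and needs windowed means/variances; library implementations (e.g. in image-processing toolkits) are the usual route there.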

17 pages, 9384 KB  
Article
Multi-Spectral Point Cloud Constructed with Advanced UAV Technique for Anisotropic Reflectance Analysis of Maize Leaves
by Kaiyi Bi, Yifang Niu, Hao Yang, Zheng Niu, Yishuo Hao and Li Wang
Remote Sens. 2025, 17(1), 93; https://doi.org/10.3390/rs17010093 - 30 Dec 2024
Viewed by 1073
Abstract
Reflectance anisotropy in remote sensing images can complicate the interpretation of spectral signatures, and extracting precise structural information for these pixels is a promising approach. Low-altitude unmanned aerial vehicle (UAV) systems can capture high-resolution imagery down to centimeter-level detail, potentially simplifying the characterization of leaf anisotropic reflectance. We proposed a novel maize point cloud generation method that combines an advanced UAV cross-circling oblique (CCO) photography route with the Structure from Motion-Multi-View Stereo (SfM-MVS) algorithm. A multi-spectral point cloud was then generated by fusing multi-spectral imagery with the point cloud using a DSM-based approach. The Rahman–Pinty–Verstraete (RPV) model was finally applied to establish maize leaf-level anisotropic reflectance models. Our results indicated a high degree of similarity between measured and estimated maize structural parameters (R2 = 0.89 for leaf length and 0.96 for plant height) based on accurate point cloud data obtained from the CCO route. Most data points clustered around the principal plane due to a constant angle between the sun and view vectors, resulting in a limited range of view azimuths. Leaf reflectance anisotropy was characterized by the RPV model with R2 ranging from 0.38 to 0.75 for five wavelength bands. These findings hold significant promise for promoting the decoupling of plant structural information and leaf optical characteristics within remote sensing data. Full article
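Fitting the RPV model per leaf is a small nonlinear least-squares problem. The sketch below assumes the standard three-parameter RPV form (amplitude rho0, Minnaert-like shape parameter k, Henyey–Greenstein asymmetry Theta), with the hot-spot term omitted for brevity; all angles and data are synthetic, and the fit simply recovers known parameters to show the mechanics:

```python
import numpy as np
from scipy.optimize import curve_fit

def rpv(angles, rho0, k, theta_hg):
    """Simplified three-parameter RPV reflectance (hot-spot term omitted):
    Minnaert-like angular term times a Henyey-Greenstein phase function."""
    sza, vza, raa = angles                       # radians
    mu0, mu = np.cos(sza), np.cos(vza)
    cosg = mu0 * mu + np.sin(sza) * np.sin(vza) * np.cos(raa)
    minnaert = (mu0 * mu * (mu0 + mu)) ** (k - 1.0)
    hg = (1.0 - theta_hg ** 2) / (1.0 + 2.0 * theta_hg * cosg
                                  + theta_hg ** 2) ** 1.5
    return rho0 * minnaert * hg

rng = np.random.default_rng(0)
sza = np.full(50, np.deg2rad(30.0))              # fixed solar zenith
vza = rng.uniform(0.0, np.deg2rad(45.0), 50)     # varied view zeniths
raa = rng.uniform(0.0, np.pi, 50)                # varied relative azimuths
truth = (0.3, 0.8, -0.1)                         # typical vegetation-like values
refl = rpv((sza, vza, raa), *truth)              # noiseless synthetic BRF
popt, _ = curve_fit(rpv, (sza, vza, raa), refl, p0=(0.2, 1.0, 0.0))
```

A negative Theta (backscattering) is typical for leaves; with real multi-spectral point cloud samples, one such fit per band yields the per-band R2 values the abstract reports.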

28 pages, 19500 KB  
Article
Empirical Evaluation and Simulation of GNSS Solutions on UAS-SfM Accuracy for Shoreline Mapping
by José A. Pilartes-Congo, Chase Simpson, Michael J. Starek, Jacob Berryhill, Christopher E. Parrish and Richard K. Slocum
Drones 2024, 8(11), 646; https://doi.org/10.3390/drones8110646 - 6 Nov 2024
Cited by 2 | Viewed by 1997 | Correction
Abstract
Uncrewed aircraft systems (UASs) and structure-from-motion/multi-view stereo (SfM/MVS) photogrammetry are efficient methods for mapping terrain at local geographic scales. Traditionally, indirect georeferencing using ground control points (GCPs) is used to georeference the UAS image locations before further processing in SfM software. However, this is a tedious practice and unsuitable for surveying remote or inaccessible areas. Direct georeferencing is a plausible alternative that requires no GCPs; it relies on global navigation satellite system (GNSS) technology to georeference the UAS image locations. This research combined field experiments and simulation to investigate GNSS-based post-processed kinematic (PPK) positioning as a means to eliminate or reduce reliance on GCPs for shoreline mapping and charting. The study also conducted a brief comparison of real-time network (RTN) and precise point positioning (PPP) performance for the same purpose. Ancillary experiments evaluated the effects of PPK base station distance and GNSS sample rate on the accuracy of derived 3D point clouds and digital elevation models (DEMs). Vertical root mean square errors (RMSEz), scaled to the 95% confidence interval under an assumption of normally distributed errors, needed to be within 0.5 m to satisfy National Oceanic and Atmospheric Administration (NOAA) requirements for nautical charting. Simulations used a Monte Carlo approach and empirical tests to examine the influence of GNSS performance on the quality of derived 3D point clouds. RTN and PPK results consistently yielded RMSEz values within 10 cm, thus satisfying the NOAA requirements. PPP did not meet the accuracy requirements but showed promising results that warrant further investigation. PPK experiments using higher GNSS sample rates did not always provide the best accuracies. GNSS performance and model accuracies improved when base stations were located within 30 km of the survey site. Without GCPs, point cloud accuracy showed a direct relationship with GNSS performance, with R2 values reaching up to 0.97. Full article
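The 95% scaling mentioned in the abstract follows directly from the normal-error assumption: for one-dimensional vertical errors, RMSEz is multiplied by 1.96 to obtain the 95% confidence value. A minimal sketch, with illustrative residual values (not data from the study):

```python
import math

def rmse_z(errors):
    """Vertical RMSE from a list of elevation residuals (meters)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def rmse_z_95(errors):
    """Scale RMSEz to the 95% confidence level, assuming normally
    distributed vertical errors (factor 1.96, as in the NSSDA convention)."""
    return 1.96 * rmse_z(errors)

# Illustrative residuals (meters) between a derived DEM and check points
residuals = [0.03, -0.04, 0.05, -0.02]
print(round(rmse_z_95(residuals), 4))  # -> 0.072, well within the 0.5 m threshold
```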

16 pages, 9232 KB  
Article
DSM Reconstruction from Uncalibrated Multi-View Satellite Stereo Images by RPC Estimation and Integration
by Dong-Uk Seo and Soon-Yong Park
Remote Sens. 2024, 16(20), 3863; https://doi.org/10.3390/rs16203863 - 17 Oct 2024
Viewed by 1757
Abstract
In this paper, we propose a 3D Digital Surface Model (DSM) reconstruction method for uncalibrated Multi-view Satellite Stereo (MVSS) images, where Rational Polynomial Coefficient (RPC) sensor parameters are not available. While recent investigations have introduced several techniques to reconstruct high-precision, high-density DSMs from MVSS images, they inherently depend on geo-corrected RPC sensor parameters. However, RPC parameters from satellite sensors can be erroneous due to inaccurate sensor data. In addition, with the growing availability of data on the internet, uncalibrated satellite images without RPC parameters are easy to obtain. This study proposes a novel method to reconstruct a 3D DSM from uncalibrated MVSS images by estimating and integrating RPC parameters. To do this, we first employ structure from motion (SfM) and a 3D homography-based geo-referencing method to reconstruct an initial DSM. Second, we sample 3D points from the initial DSM as references and reproject them to the 2D image space to determine 3D–2D correspondences. Using these correspondences, we directly calculate all RPC parameters. To overcome memory shortages when processing large satellite images, we also propose an RPC integration method: the image space is partitioned into multiple tiles, RPC estimation is performed independently in each tile, and all tiles' RPCs are then integrated into a final RPC representing the geometry of the whole image space. Finally, the integrated RPC is used to run a true MVSS pipeline to obtain the 3D DSM. The experimental results show that the proposed method achieves a 1.455 m Mean Absolute Error (MAE) in height map reconstruction on multi-view satellite benchmark datasets. We also show that the proposed method can reconstruct a geo-referenced 3D DSM from uncalibrated, freely available Google Earth imagery. Full article
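Estimating rational polynomial coefficients from 3D–2D correspondences reduces to a linear least-squares problem once the denominator's constant term is fixed to 1: each correspondence contributes one equation that is linear in the remaining coefficients. A minimal sketch under a simplifying assumption: a first-order rational model with 7 coefficients stands in for the full cubic RPC (the paper's actual formulation); all coefficient values are synthetic, for illustration only:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rational(points, rows):
    """Fit r = (a0 + a1 X + a2 Y + a3 Z) / (1 + b1 X + b2 Y + b3 Z).

    Multiplying through by the denominator and rearranging gives
    a0 + a1 X + a2 Y + a3 Z - r (b1 X + b2 Y + b3 Z) = r,
    linear in the 7 unknowns; solved here via the normal equations.
    """
    A, rhs = [], []
    for (X, Y, Z), r in zip(points, rows):
        A.append([1.0, X, Y, Z, -r * X, -r * Y, -r * Z])
        rhs.append(r)
    n = 7
    AtA = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
    Atb = [sum(row[i] * r for row, r in zip(A, rhs)) for i in range(n)]
    return solve(AtA, Atb)

# Synthetic check: generate image coordinates from known coefficients
# over a grid of normalized ground points, then recover the coefficients.
a_true, b_true = [0.1, 2.0, -1.0, 0.5], [0.01, -0.02, 0.03]
pts = [(x, y, z) for x in (-1, 0, 1) for y in (-1, 0, 1) for z in (-1, 0, 1)]
obs = [(a_true[0] + a_true[1] * X + a_true[2] * Y + a_true[3] * Z)
       / (1 + b_true[0] * X + b_true[1] * Y + b_true[2] * Z) for X, Y, Z in pts]
coeffs = fit_rational(pts, obs)
print([round(c, 6) for c in coeffs])  # -> [0.1, 2.0, -1.0, 0.5, 0.01, -0.02, 0.03]
```

The full RPC uses cubic polynomials in latitude, longitude, and height (78 unknowns per image coordinate), but the linearization trick and the per-tile fitting described in the abstract follow the same pattern.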
