Search Results (598)

Search Parameters:
Keywords = cloud points matching

25 pages, 1904 KB  
Article
Integrated LiDAR-Based Localization and Navigable Region Detection for Autonomous Berthing of Unmanned Surface Vessels
by Haichao Wang, Yong Yin, Liangxiong Dong and Helang Lai
J. Mar. Sci. Eng. 2025, 13(11), 2079; https://doi.org/10.3390/jmse13112079 (registering DOI) - 31 Oct 2025
Abstract
Autonomous berthing of unmanned surface vehicles (USVs) requires high-precision positioning and accurate detection of navigable regions in complex port environments. This paper presents an integrated LiDAR-based approach to address these challenges. A high-precision 3D point cloud map of the berth is first constructed by fusing LiDAR data with real-time kinematic (RTK) measurements. The USV pose is then estimated by matching real-time LiDAR scans to the prior map, achieving robust, RTK-independent localization. For safe navigation, a novel navigable region detection algorithm is proposed, which combines point cloud projection, inner-boundary extraction, and target clustering. This method accurately identifies quay walls and obstacles, generating reliable navigable areas and ensuring collision-free berthing. Field experiments conducted in Ling Shui Port, Dalian, China, validate the proposed approach. Results show that the map-based positioning reduces absolute trajectory error (ATE) by 55.29% and relative trajectory error (RTE) by 38.71% compared to scan matching, while the navigable region detection algorithm provides precise and stable navigable regions. These outcomes demonstrate the effectiveness and practical applicability of the proposed method for autonomous USV berthing. Full article
(This article belongs to the Special Issue New Technologies in Autonomous Ship Navigation)
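For readers who want to experiment with the scan-to-map matching step summarized above, the sketch below aligns a live LiDAR scan against a prior point cloud map with plain point-to-point ICP (NumPy/SciPy). It is a generic baseline with made-up thresholds, not the authors' localization pipeline.

```python
# Generic scan-to-map alignment via point-to-point ICP with an SVD pose update.
# NOT the paper's method; a minimal illustration with placeholder parameters.
import numpy as np
from scipy.spatial import cKDTree

def icp_scan_to_map(scan, map_pts, iters=30, max_corr_dist=1.0):
    """scan: (N, 3) live LiDAR scan; map_pts: (M, 3) prior map. Returns a 4x4 pose."""
    T = np.eye(4)
    tree = cKDTree(map_pts)              # the prior map is indexed once
    src = scan.copy()
    for _ in range(iters):
        dist, idx = tree.query(src)      # nearest map point for every scan point
        keep = dist < max_corr_dist      # reject distant, likely wrong matches
        p, q = src[keep], map_pts[idx[keep]]
        p0, q0 = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(p0.T @ q0)          # closed-form rotation (Kabsch)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = q.mean(0) - R @ p.mean(0)
        src = src @ R.T + t              # apply the incremental update
        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = R, t
        T = dT @ T
    return T
```

In practice the prior map would be downsampled and the pose seeded from the previous estimate rather than from the identity.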
21 pages, 5023 KB  
Article
Robust 3D Target Detection Based on LiDAR and Camera Fusion
by Miao Jin, Bing Lu, Gang Liu, Yinglong Diao, Xiwen Chen and Gaoning Nie
Electronics 2025, 14(21), 4186; https://doi.org/10.3390/electronics14214186 - 27 Oct 2025
Viewed by 361
Abstract
Autonomous driving relies on multimodal sensors to acquire environmental information for supporting decision making and control. While significant progress has been made in 3D object detection regarding point cloud processing and multi-sensor fusion, existing methods still suffer from shortcomings—such as sparse point clouds of foreground targets, fusion instability caused by fluctuating sensor data quality, and inadequate modeling of cross-frame temporal consistency in video streams—which severely restrict the practical performance of perception systems. To address these issues, this paper proposes a multimodal video stream 3D object detection framework based on reliability evaluation. Specifically, it dynamically perceives the reliability of each modal feature by evaluating the Region of Interest (RoI) features of cameras and LiDARs, and adaptively adjusts their contribution ratios in the fusion process accordingly. Additionally, a target-level semantic soft matching graph is constructed within the RoI region. Combined with spatial self-attention and temporal cross-attention mechanisms, the spatio-temporal correlations between consecutive frames are fully explored to achieve feature completion and enhancement. Verification on the nuScenes dataset shows that the proposed algorithm achieves an optimal performance of 67.3% and 70.6% in terms of the two core metrics, mAP and NDS, respectively—outperforming existing mainstream 3D object detection algorithms. Ablation experiments confirm that each module plays a crucial role in improving overall performance, and the algorithm exhibits better robustness and generalization in dynamically complex scenarios. Full article
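The reliability-weighted fusion idea described above can be pictured with a toy gating function: each region of interest carries a reliability score per modality, and the fused feature is a softmax-weighted mix. The NumPy sketch below is purely illustrative; the paper's network learns these scores, and all names here are placeholders.

```python
# Toy reliability-gated fusion of per-RoI camera and LiDAR features (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_roi_features(cam_feat, lidar_feat, cam_score, lidar_score):
    """cam_feat, lidar_feat: (N_roi, C); *_score: (N_roi,) reliability logits."""
    w = softmax(np.stack([cam_score, lidar_score], axis=1), axis=1)   # (N_roi, 2)
    # Each RoI mixes the two modalities according to their estimated reliability.
    return w[:, :1] * cam_feat + w[:, 1:] * lidar_feat

rng = np.random.default_rng(0)
fused = fuse_roi_features(rng.normal(size=(8, 256)), rng.normal(size=(8, 256)),
                          rng.normal(size=8), rng.normal(size=8))
print(fused.shape)   # (8, 256)
```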

18 pages, 4377 KB  
Article
GeoAssemble: A Geometry-Aware Hierarchical Method for Point Cloud-Based Multi-Fragment Assembly
by Caiqin Jia, Yali Ren, Zhi Wang and Yuan Zhang
Sensors 2025, 25(21), 6533; https://doi.org/10.3390/s25216533 - 23 Oct 2025
Viewed by 281
Abstract
Three-dimensional fragment assembly technology has significant application value in fields such as cultural relic restoration, medical image analysis, and industrial quality inspection. To address the common challenges of limited feature representation ability and insufficient assembling accuracy in existing methods, this paper proposes a geometry-aware hierarchical fragment assembly framework (GeoAssemble). The core contributions of our work are threefold: first, the framework utilizes DGCNN to extract local geometric features while integrating centroid relative positions to construct a multi-dimensional feature representation, thereby enhancing the identification quality of fracture points; secondly, it designs a two-stage matching strategy that combines global shape similarity coarse matching with local geometric affinity fine matching to effectively reduce matching ambiguity; finally, we propose an auxiliary transformation estimation mechanism based on the geometric center of fracture point clouds to robustly initialize pose parameters, thereby improving both alignment accuracy and convergence stability. Experiments conducted on both synthetic and real-world fragment datasets demonstrate that this method significantly outperforms baseline methods in matching accuracy and exhibits higher robustness in multi-fragment scenarios. Full article
(This article belongs to the Section Sensing and Imaging)
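The abstract's centroid-based transformation initialization can be illustrated with the standard closed-form (Kabsch-style) rigid fit between matched fracture points, where the translation comes from the centroid offset. This is a generic sketch that assumes correspondences are already given; it is not the GeoAssemble implementation.

```python
# Closed-form rigid transform from matched fracture points, anchored at the
# fracture surfaces' geometric centers. Generic sketch, not the paper's code.
import numpy as np

def rigid_from_matches(src, dst):
    """src, dst: (N, 3) corresponding fracture points. Returns R (3x3), t (3,)."""
    c_src, c_dst = src.mean(0), dst.mean(0)          # geometric centers
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src                            # centroid-to-centroid shift
    return R, t
```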

23 pages, 6492 KB  
Article
MAC-I2P: I2P Registration with Modality Approximation and Cone–Block–Point Matching
by Yunda Sun, Lin Zhang and Shengjie Zhao
Appl. Sci. 2025, 15(20), 11212; https://doi.org/10.3390/app152011212 - 20 Oct 2025
Viewed by 284
Abstract
The misaligned geometric representation between images and point clouds and the different data densities limit the performance of I2P registration. The former hinders the learning of cross-modal features, and the latter leads to low-quality 2D–3D matching. To address these challenges, we propose a novel I2P registration framework called MAC-I2P, which is composed of a modality approximation module and a cone–block–point matching strategy. By generating pseudo-RGBD images, the module mitigates geometrical misalignment and converts 2D images into 3D space. In addition, it voxelizes the point cloud so that the features of the image and the point cloud can be processed in a similar way, thereby enhancing the repeatability of cross-modal features. Taking into account the different data densities and perception ranges between images and point clouds, the cone–block–point matching relaxes the strict one-to-one matching criterion by gradually refining the matching candidates. As a result, it effectively improves the 2D–3D matching quality. Notably, MAC-I2P is supervised by multiple matching objectives and optimized in an end-to-end manner, which further strengthens the cross-modal representation capability of the model. Extensive experiments conducted on KITTI Odometry and Oxford Robotcar demonstrate the superior performance of our MAC-I2P. Our approach surpasses the current state-of-the-art (SOTA) by 8∼63.2% in relative translation error (RTE) and 19.3∼38.5% in relative rotation error (RRE). The ablation experiments also confirm the effectiveness of each proposed component. Full article
(This article belongs to the Special Issue Computer Vision, Robotics and Intelligent Systems)
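One common way to realize the "pseudo-RGBD" idea mentioned above is to project the LiDAR points through the camera model into a sparse depth channel. The sketch below shows that projection step only, with placeholder intrinsics and extrinsics; it is not MAC-I2P's modality approximation module.

```python
# Build a sparse depth channel by projecting LiDAR points through a pinhole camera.
import numpy as np

def project_to_depth(points_lidar, T_cam_lidar, K, h, w):
    """points_lidar: (N, 3); T_cam_lidar: 4x4; K: 3x3 intrinsics. Returns (h, w) depth."""
    pts = points_lidar @ T_cam_lidar[:3, :3].T + T_cam_lidar[:3, 3]
    z = pts[:, 2]
    front = z > 0.1                                   # keep points ahead of the camera
    uvw = pts[front] @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (v[ok], u[ok]), z[front][ok])   # keep the closest point per pixel
    depth[np.isinf(depth)] = 0.0                          # 0 marks pixels with no LiDAR return
    return depth
```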

22 pages, 7596 KB  
Article
Orthographic Video Map Generation Considering 3D GIS View Matching
by Xingguo Zhang, Xiangfei Meng, Li Zhang, Xianguo Ling and Sen Yang
ISPRS Int. J. Geo-Inf. 2025, 14(10), 398; https://doi.org/10.3390/ijgi14100398 - 13 Oct 2025
Viewed by 398
Abstract
Converting tower-mounted videos from perspective to orthographic view is beneficial for their integration with maps and remote sensing images and can provide a clearer and more real-time data source for earth observation. This paper addresses the issue of low geometric accuracy in orthographic video generation by proposing a method that incorporates 3D GIS view matching. Firstly, a geometric alignment model between video frames and 3D GIS views is established through camera parameter mapping. Then, feature point detection and matching algorithms are employed to associate image coordinates with corresponding 3D spatial coordinates. Finally, an orthographic video map is generated based on the color point cloud. The results show that (1) for tower-based video, a 3D GIS constructed from publicly available DEMs and high-resolution remote sensing imagery can meet the spatialization needs of large-scale tower-mounted video data. (2) The feature point matching algorithm based on deep learning effectively achieves accurate matching between video frames and 3D GIS views. (3) Compared with the traditional method, such as the camera parameters method, the orthographic video map generated by this method has advantages in terms of geometric mapping accuracy and visualization effect. In the mountainous area, the RMSE of the control points is reduced from 137.70 m to 7.72 m. In the flat area, it is reduced from 13.52 m to 8.10 m. The proposed method can provide a near-real-time orthographic video map for smart cities, natural resource monitoring, emergency rescue, and other fields. Full article
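The geometric core of the view-matching step above, fitting a homography to matched video-frame and 3D GIS view keypoints and warping the frame, can be prototyped with OpenCV as below. The matches are assumed to come from a separate detector/matcher, and the RANSAC threshold is a guess, not a value from the paper.

```python
# Fit a homography from pre-matched keypoints and warp the video frame into the GIS view.
import numpy as np
import cv2

def warp_frame_to_view(frame, pts_frame, pts_view, view_size):
    """pts_frame, pts_view: (N, 2) float32 matched keypoints; view_size: (width, height)."""
    H, inlier_mask = cv2.findHomography(pts_frame, pts_view, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(frame, H, view_size)   # frame rendered in the GIS view
    return warped, H, inlier_mask
```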

20 pages, 12119 KB  
Article
An Improved Two-Step Strategy for Accurate Feature Extraction in Weak-Texture Environments
by Qingjia Lv, Yang Liu, Peng Wang, Xu Zhang, Caihong Wang, Tengsen Wang and Huihui Wang
Sensors 2025, 25(20), 6309; https://doi.org/10.3390/s25206309 - 12 Oct 2025
Viewed by 373
Abstract
To address the challenge of feature extraction and reconstruction in weak-texture environments, and to provide data support for environmental perception in mobile robots operating in such environments, a Feature Extraction and Reconstruction in Weak-Texture Environments solution is proposed. The solution enhances environmental features through laser-assisted marking and employs a two-step feature extraction strategy in conjunction with binocular vision. First, a fast localization method (FLM) for feature points, based on an improved SURF algorithm with multiple constraints, is proposed to quickly locate the initial positions of the feature points. Then, a robust correction method (RCM) based on light-strip grayscale consistency is proposed to refine these estimates and obtain the precise positions of the feature points. Finally, a sparse 3D (three-dimensional) point cloud is generated through feature matching and reconstruction. At a working distance of 1 m, the spatial modeling achieves an accuracy of ±0.5 mm, a relative error of 2‰, and an effective extraction rate exceeding 97%. While ensuring both efficiency and accuracy, the solution demonstrates strong robustness against interference. It effectively supports robots in performing tasks such as precise positioning, object grasping, and posture adjustment in dynamic, weak-texture environments. Full article
(This article belongs to the Section Sensors and Robotics)
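The final reconstruction step, turning matched binocular feature points into a sparse 3D cloud, reduces for a rectified stereo pair to the textbook depth-from-disparity relation Z = f·B/d. The sketch below shows only that relation with placeholder calibration values; it is not the paper's two-step FLM/RCM pipeline.

```python
# Triangulate matched feature points from a rectified stereo pair (Z = f*B/d).
import numpy as np

def stereo_points_to_3d(uv_left, uv_right, fx, fy, cx, cy, baseline):
    """uv_left, uv_right: (N, 2) pixel coords of the same features. Returns (N, 3)."""
    disparity = uv_left[:, 0] - uv_right[:, 0]        # rectified: same row, shifted column
    disparity = np.clip(disparity, 1e-6, None)        # avoid division by zero
    Z = fx * baseline / disparity
    X = (uv_left[:, 0] - cx) * Z / fx
    Y = (uv_left[:, 1] - cy) * Z / fy
    return np.stack([X, Y, Z], axis=1)
```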

15 pages, 4146 KB  
Article
A Coarse-to-Fine Framework with Curvature Feature Learning for Robust Point Cloud Registration in Spinal Surgical Navigation
by Lijing Zhang, Wei Wang, Tianbao Liu, Jiahui Guo, Bo Wu and Nan Zhang
Bioengineering 2025, 12(10), 1096; https://doi.org/10.3390/bioengineering12101096 - 12 Oct 2025
Viewed by 454
Abstract
In surgical navigation-assisted pedicle screw fixation, cross-source registration of pre- and intra-operative point clouds faces challenges such as significant initial pose differences and low overlap ratios. Classical algorithms based on feature descriptors have high computational complexity and are less robust to noise, leading to a decrease in accuracy and navigation performance. To address these problems, this paper proposes a coarse-to-fine registration framework. In the coarse registration stage, a Point Matching algorithm based on Curvature Feature Learning (CFL-PM) is proposed. Through CFL-PM and Farthest Point Sampling (FPS), coarse registration of the overlapping regions between the two point clouds is achieved. In the fine registration stage, the Iterative Closest Point (ICP) algorithm is used for further optimization. The proposed method effectively addresses the challenges of noise, initial pose, and low overlap ratio. In noise-free point cloud registration experiments, the average rotation and translation errors reached 0.34° and 0.27 mm. Under noisy conditions, the average rotation error of the coarse registration is 7.28° and the average translation error is 9.08 mm. Experiments on pre- and intra-operative point cloud datasets demonstrate that the proposed algorithm outperforms the compared algorithms in registration accuracy, speed, and robustness. Therefore, the proposed method can achieve the precise alignment required for surgical navigation-assisted pedicle screw fixation. Full article
(This article belongs to the Section Biosignal Processing)
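Farthest Point Sampling (FPS), which the coarse stage above relies on, is simple enough to show in full; the straightforward O(N·k) NumPy version below is a generic implementation, not the authors' code.

```python
# Plain farthest point sampling (FPS): spread k sample points across the cloud.
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """points: (N, 3); returns indices of k well-spread samples."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = np.empty(k, dtype=int)
    chosen[0] = rng.integers(n)                      # arbitrary starting point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for i in range(1, k):
        chosen[i] = int(np.argmax(dist))             # farthest from the chosen set so far
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[i]], axis=1))
    return chosen
```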

21 pages, 4537 KB  
Article
A Registration Method for ULS-MLS Data in High-Canopy-Density Forests Based on Feature Deviation Metric
by Houyu Liang, Xiang Zhou, Tingting Lv, Qingwang Liu, Zui Tao and Hongming Zhang
Remote Sens. 2025, 17(20), 3403; https://doi.org/10.3390/rs17203403 - 11 Oct 2025
Viewed by 284
Abstract
The integration of unmanned aerial vehicle-based laser scanning (ULS) and mobile laser scanning (MLS) enables the detection of forest three-dimensional structure in high-density canopy areas and has become an important tool for monitoring and managing forest ecosystems. However, MLS faces difficulties in positioning due to canopy occlusion, making integration challenging. Due to the variations in observation platforms, ULS and MLS point clouds exhibit significant structural discrepancies and limited overlapping areas, necessitating effective methods for feature extraction and correspondence establishment between these features to achieve high-precision registration and integration. Therefore, we propose a registration algorithm that introduces a Feature Deviation Metric to enable feature extraction and correspondence construction for forest point clouds in complex regional environments. The algorithm first extracts surface point clouds using the hidden point algorithm. Then, it applies the proposed dual-threshold method to cluster individual tree features in ULS, using cylindrical detection to construct a Feature Deviation Metric from the feature points and surface point clouds. Finally, an optimization algorithm is employed to match the optimal Feature Deviation Metric for registration. Experiments were conducted in eight stratified mixed tropical rainforest plots with complex mixed-species canopies in Malaysia and six structurally simple, high-canopy-density pure forest plots in northern China. Our algorithm achieved an average RMSE of 0.17 m in the eight tropical rainforest plots, with an average canopy density of 0.93, and an RMSE of 0.05 m in the six northern Chinese forest plots, with an average canopy density of 0.75, demonstrating high registration capability. Additionally, we conducted comparative and adaptability analyses, and the results indicate that the proposed model exhibits high accuracy, efficiency, and stability in high-canopy-density forest areas. Moreover, it shows promise for high-precision ULS-MLS registration in a wider range of forest types in the future. Full article
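A rough stand-in for the per-tree feature extraction described above is to slice the cloud near breast height, cluster the slice into stems, and keep each cluster centroid as a tree-position feature. The sketch below uses plain DBSCAN for this with guessed parameters, rather than the paper's dual-threshold clustering and cylindrical detection.

```python
# Cluster a breast-height slice of a forest point cloud into stems; centroids act as features.
import numpy as np
from sklearn.cluster import DBSCAN

def stem_feature_points(cloud, z_low=1.2, z_high=1.4, eps=0.3, min_samples=20):
    """cloud: (N, 3). Returns (T, 2) horizontal centroids of detected stems."""
    slice_pts = cloud[(cloud[:, 2] > z_low) & (cloud[:, 2] < z_high), :2]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(slice_pts)
    centers = [slice_pts[labels == c].mean(0) for c in np.unique(labels) if c != -1]
    return np.asarray(centers)
```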

19 pages, 7416 KB  
Article
LiDAR SLAM for Safety Inspection Robots in Large Scale Public Building Construction Sites
by Chunyong Feng, Junqi Yu, Jingdan Li, Yonghua Wu, Ben Wang and Kaiwen Wang
Buildings 2025, 15(19), 3602; https://doi.org/10.3390/buildings15193602 - 8 Oct 2025
Viewed by 530
Abstract
LiDAR-based Simultaneous Localization and Mapping (SLAM) plays a key role in enabling inspection robots to achieve autonomous navigation. However, at installation construction sites of large-scale public buildings, existing methods often suffer from point-cloud drift, large z-axis errors, and inefficient loop closure detection, limiting their robustness and adaptability in complex environments. To address these issues, this paper proposes an improved algorithm, LeGO-LOAM-LPB (Large-scale Public Building), built upon the LeGO-LOAM framework. The method enhances feature quality through point-cloud preprocessing, stabilizes z-axis pose estimation by introducing ground-residual constraints, improves matching efficiency with an incremental k-d tree, and strengthens map consistency via a two-layer loop closure detection mechanism. Experiments conducted on a self-developed inspection robot platform in both simulated and real construction sites of large-scale public buildings demonstrate that LeGO-LOAM-LPB significantly improves positioning accuracy, reducing the root mean square error by 41.55% compared with the original algorithm. The results indicate that the proposed method offers a more precise and robust SLAM solution for safety inspection robots in construction environments and shows strong potential for engineering applications. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
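The ground-residual idea mentioned above can be pictured with a least-squares ground plane z = ax + by + c fitted to points already classified as ground; the residuals of that plane are what a z-axis constraint would penalize. The NumPy sketch below is a generic illustration, not the LeGO-LOAM-LPB formulation.

```python
# Least-squares ground plane fit and the signed z residuals a ground constraint would use.
import numpy as np

def fit_ground_plane(ground_pts):
    """ground_pts: (N, 3) points pre-classified as ground. Returns (a, b, c)."""
    A = np.c_[ground_pts[:, 0], ground_pts[:, 1], np.ones(len(ground_pts))]
    coeffs, *_ = np.linalg.lstsq(A, ground_pts[:, 2], rcond=None)
    return coeffs                                     # plane parameters a, b, c

def ground_residuals(pts, coeffs):
    a, b, c = coeffs
    return pts[:, 2] - (a * pts[:, 0] + b * pts[:, 1] + c)
```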

34 pages, 9527 KB  
Article
High-Resolution 3D Thermal Mapping: From Dual-Sensor Calibration to Thermally Enriched Point Clouds
by Neri Edgardo Güidi, Andrea di Filippo and Salvatore Barba
Appl. Sci. 2025, 15(19), 10491; https://doi.org/10.3390/app151910491 - 28 Sep 2025
Viewed by 510
Abstract
Thermal imaging is increasingly applied in remote sensing to identify material degradation, monitor structural integrity, and support energy diagnostics. However, its adoption is limited by the low spatial resolution of thermal sensors compared to RGB cameras. This study proposes a modular pipeline to generate thermally enriched 3D point clouds by fusing RGB and thermal imagery acquired simultaneously with a dual-sensor unmanned aerial vehicle system. The methodology includes geometric calibration of both cameras, image undistortion, cross-spectral feature matching, and projection of radiometric data onto the photogrammetric model through a computed homography. Thermal values are extracted using a custom parser and assigned to 3D points based on visibility masks and interpolation strategies. Calibration achieved 81.8% chessboard detection, yielding subpixel reprojection errors. Among twelve evaluated algorithms, LightGlue retained 99% of its matches and delivered a reprojection accuracy of 18.2% at 1 px, 65.1% at 3 px, and 79% at 5 px. A case study on photovoltaic panels demonstrates the method's capability to map thermal patterns with low temperature deviation from ground-truth data. Developed entirely in Python, the workflow integrates with Agisoft Metashape or other software. The proposed approach enables cost-effective, high-resolution thermal mapping with applications in civil engineering, cultural heritage conservation, and environmental monitoring. Full article
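Once the cross-spectral homography is known, assigning a radiometric value to each visible point amounts to mapping its RGB-frame pixel into the thermal frame and interpolating the raster. The sketch below shows that single step with bilinear interpolation; the variable names and the RGB-to-thermal mapping direction are assumptions, not details from the paper.

```python
# Sample a thermal raster at RGB pixel locations mapped through a known homography.
import numpy as np

def sample_thermal(uv_rgb, H_rgb2thermal, thermal):
    """uv_rgb: (N, 2) pixel coords; thermal: (h, w) radiometric image. Returns (N,) values."""
    ones = np.ones((len(uv_rgb), 1))
    uvw = np.hstack([uv_rgb, ones]) @ H_rgb2thermal.T
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    h, w = thermal.shape
    u = np.clip(u, 0, w - 1.001)
    v = np.clip(v, 0, h - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    top = (1 - du) * thermal[v0, u0] + du * thermal[v0, u0 + 1]       # bilinear blend
    bot = (1 - du) * thermal[v0 + 1, u0] + du * thermal[v0 + 1, u0 + 1]
    return (1 - dv) * top + dv * bot
```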

20 pages, 14512 KB  
Article
Dual-Attention-Based Block Matching for Dynamic Point Cloud Compression
by Longhua Sun, Yingrui Wang and Qing Zhu
J. Imaging 2025, 11(10), 332; https://doi.org/10.3390/jimaging11100332 - 25 Sep 2025
Viewed by 457
Abstract
The irregular and highly non-uniform spatial distribution inherent to dynamic three-dimensional (3D) point clouds (DPCs) severely hampers the extraction of reliable temporal context, rendering inter-frame compression a formidable challenge. Inspired by two-dimensional (2D) image and video compression methods, existing approaches attempt to model the temporal dependence of DPCs through a motion estimation/motion compensation (ME/MC) framework. However, these approaches represent only preliminary applications of this framework; point consistency between adjacent frames is insufficiently explored, and temporal correlation requires further investigation. To address this limitation, we propose a hierarchical ME/MC framework that adaptively selects the granularity of the estimated motion field, thereby ensuring a fine-grained inter-frame prediction process. To further enhance motion estimation accuracy, we introduce a dual-attention-based KNN block-matching (DA-KBM) network. This network employs a bidirectional attention mechanism to more precisely measure the correlation between points, using closely correlated points to predict inter-frame motion vectors and thereby improve inter-frame prediction accuracy. Experimental results show that the proposed DPC compression method achieves a significant improvement (a gain of 70%) in the BD-Rate metric on the 8iFVBv2 dataset compared with the standardized Video-based Point Cloud Compression (V-PCC) v13 method, and a 16% gain over the state-of-the-art deep learning-based inter-mode method. Full article
(This article belongs to the Special Issue 3D Image Processing: Progress and Challenges)
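As a baseline for the motion-estimation step discussed above, each point in the current frame can simply look up its k nearest neighbours in the previous frame and take the mean offset as a motion vector. The sketch below shows this crude KNN matching only; it is far simpler than the paper's dual-attention block matching.

```python
# Crude per-point motion estimate between consecutive point cloud frames via KNN.
import numpy as np
from scipy.spatial import cKDTree

def knn_motion_vectors(frame_prev, frame_curr, k=8):
    """frame_prev: (M, 3); frame_curr: (N, 3). Returns (N, 3) motion vectors."""
    tree = cKDTree(frame_prev)
    _, idx = tree.query(frame_curr, k=k)              # (N, k) neighbour indices
    matched = frame_prev[idx].mean(axis=1)            # matched "block" in the previous frame
    return frame_curr - matched                        # displacement from previous to current
```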

26 pages, 3429 KB  
Article
I-VoxICP: A Fast Point Cloud Registration Method for Unmanned Surface Vessels
by Qianfeng Jing, Mingwang Bai, Yong Yin and Dongdong Guo
J. Mar. Sci. Eng. 2025, 13(10), 1854; https://doi.org/10.3390/jmse13101854 - 25 Sep 2025
Viewed by 436
Abstract
The accurate positioning and state estimation of surface vessels are prerequisites for autonomous navigation. Recently, the rapid development of 3D LiDARs has promoted the autonomy of both land and aerial vehicles, which has attracted the interest of researchers in the maritime community. However, in traditional maritime surface multi-scenario applications, LiDAR scan matching suffers from low scanning and matching efficiency and insufficient positional accuracy when dealing with large-scale point clouds, so it has difficulty meeting the real-time demands of low-computing-power platforms. In this paper, we use ICP-SVD for point cloud alignment on the Stanford dataset and in outdoor dock scenarios, and propose an optimization scheme (iVox + ICP-SVD) that incorporates the voxel structure iVox. Experiments show that the average search time of iVox is 72.23% and 96.8% lower than that of the ikd-tree and kd-tree, respectively. Executed on an NVIDIA Jetson Nano (four ARM Cortex-A57 cores @ 1.43 GHz), the algorithm processes 18 k downsampled points in 56 ms on average and 65 ms in the worst case—i.e., ≤15 Hz—so every scan is completed before the next 10–20 Hz LiDAR sweep arrives. During a 73 min continuous harbor trial, the CPU temperature stabilized at 68 °C without thermal throttling, confirming that the reported latency is a sustainable, field-proven upper bound rather than a laboratory best case. This dramatically improves retrieval efficiency while effectively maintaining matching accuracy. As a result, the overall alignment process is significantly accelerated, providing an efficient and reliable solution for real-time point cloud processing. Full article
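The speed-up reported above comes from replacing tree search with a sparse voxel lookup. A tiny voxel-hash index in that spirit is sketched below: points are bucketed by integer voxel coordinate and a query scans only the 27 surrounding voxels. This illustrates the general idea only; the iVox structure itself is considerably more elaborate.

```python
# Minimal sparse-voxel index for approximate nearest-neighbour lookup (iVox-flavoured sketch).
import numpy as np
from collections import defaultdict

class VoxelHashIndex:
    def __init__(self, points, voxel_size=1.0):
        self.res = voxel_size
        self.pts = np.asarray(points)
        self.buckets = defaultdict(list)
        keys = np.floor(self.pts / voxel_size).astype(int)
        for i, key in enumerate(map(tuple, keys)):     # bucket point indices by voxel
            self.buckets[key].append(i)

    def nearest(self, q):
        """Return the index of (approximately) the closest stored point to q, or None."""
        kq = np.floor(np.asarray(q) / self.res).astype(int)
        cand = [i for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                for i in self.buckets.get((kq[0] + dx, kq[1] + dy, kq[2] + dz), [])]
        if not cand:                                   # nothing within one voxel ring
            return None
        d = np.linalg.norm(self.pts[cand] - q, axis=1)
        return cand[int(np.argmin(d))]
```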

18 pages, 2691 KB  
Article
YOLOv8-DMC: Enabling Non-Contact 3D Cattle Body Measurement via Enhanced Keypoint Detection
by Zhi Weng, Wenwen Hao, Caili Gong and Zhiqiang Zheng
Animals 2025, 15(18), 2738; https://doi.org/10.3390/ani15182738 - 19 Sep 2025
Viewed by 492
Abstract
Accurate and non-contact measurement of cattle body dimensions is essential for precision livestock management. This study presents YOLOv8-DMC, a lightweight deep learning model optimized for anatomical keypoint detection in side-view images of cattle. The model integrates three attention modules—DRAMiTransformer, MHSA-C2f, and CASimAM—to improve robustness under occlusion and lighting variability. Following keypoint prediction, 16-neighborhood depth completion and pass-through filtering are applied to generate clean, colored point clouds, enabling precise 3D localization of keypoints by matching them to valid depth values. The model achieves AP@0.5 of 0.931 and AP@[0.50:0.95] of 0.868 on a dataset of over 7000 images, improving baseline accuracy by 2.14% and 3.09%, respectively, with only 0.35 M additional parameters and 0.9 GFLOPs of added complexity. For real-world validation, strictly lateral-view RGB-D images from 137 cattle were collected together with ground-truth manual measurements. Compared with the manual measurements, the average relative errors are 2.43% for body height, 2.26% for hip height, 3.65% for body length, and 4.48% for cannon circumference. The system supports deployment on edge devices, providing an efficient and accurate solution for 3D cattle measurement in real-world farming conditions. Full article
(This article belongs to the Section Cattle)
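The keypoint-to-3D step described above (find a valid depth for each detected keypoint, then back-project with the camera intrinsics) can be sketched as below. The small median-window fallback is a stand-in for the paper's 16-neighborhood depth completion, and the window size and intrinsics handling are assumptions.

```python
# Lift 2D keypoints to 3D with an RGB-D frame; fall back to a local median depth if missing.
import numpy as np

def keypoints_to_3d(keypoints, depth, fx, fy, cx, cy, win=2):
    """keypoints: (N, 2) (u, v) pixels inside the image; depth: (h, w) metres, 0 = invalid."""
    out = []
    for u, v in np.round(keypoints).astype(int):
        z = depth[v, u]
        if z <= 0:                                    # fill from the surrounding window
            patch = depth[max(v - win, 0):v + win + 1, max(u - win, 0):u + win + 1]
            valid = patch[patch > 0]
            z = np.median(valid) if valid.size else np.nan
        out.append(((u - cx) * z / fx, (v - cy) * z / fy, z))   # pinhole back-projection
    return np.asarray(out)
```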

21 pages, 4674 KB  
Article
CLCFM3: A 3D Reconstruction Algorithm Based on Photogrammetry for High-Precision Whole Plant Sensing Using All-Around Images
by Atsushi Hayashi, Nobuo Kochi, Kunihiro Kodama, Sachiko Isobe and Takanari Tanabata
Sensors 2025, 25(18), 5829; https://doi.org/10.3390/s25185829 - 18 Sep 2025
Viewed by 523
Abstract
This research aims to develop a novel technique to acquire a large amount of high-density, high-precision 3D point cloud data for plant phenotyping using photogrammetry technology. The complexity of plant structures, characterized by overlapping thin parts such as leaves and stems, makes it difficult to reconstruct accurate 3D point clouds. One challenge in this regard is occlusion, where points in the 3D point cloud cannot be obtained due to overlapping parts, preventing accurate point capture. Another is the generation of erroneous points in non-existent locations due to image-matching errors along object outlines. To overcome these challenges, we propose a 3D point cloud reconstruction method named closed-loop coarse-to-fine method with multi-masked matching (CLCFM3). This method repeatedly executes a process that generates point clouds locally to suppress occlusion (multi-matching) and a process that removes noise points using a mask image (masked matching). Furthermore, we propose the closed-loop coarse-to-fine method (CLCFM) to improve the accuracy of structure from motion, which is essential for implementing the proposed point cloud reconstruction method. CLCFM solves loop closure by performing coarse-to-fine camera position estimation. By facilitating the acquisition of high-density, high-precision 3D data on a large number of plant bodies, as is necessary for research activities, this approach is expected to enable comparative analysis of visible phenotypes in the growth process of a wide range of plant species based on 3D information. Full article
(This article belongs to the Section Remote Sensors)
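The "masked matching" noise removal described above can be approximated in a single view by projecting candidate 3D points into the image and discarding those that fall outside the plant silhouette mask. The sketch below shows that one-view test with assumed pinhole parameters; CLCFM3 applies the idea across many masked views during matching.

```python
# Keep only 3D points whose projection lands inside a plant silhouette mask for one view.
import numpy as np

def filter_by_mask(points, K, T_cam_world, mask):
    """points: (N, 3) world coords; K: 3x3 intrinsics; mask: (h, w) bool silhouette."""
    pts_c = points @ T_cam_world[:3, :3].T + T_cam_world[:3, 3]
    front = pts_c[:, 2] > 0                            # only points in front of the camera
    uvw = pts_c[front] @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    h, w = mask.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points), dtype=bool)
    idx_front = np.flatnonzero(front)
    keep[idx_front[ok]] = mask[v[ok], u[ok]]           # inside the silhouette -> keep
    return points[keep]
```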

23 pages, 5510 KB  
Article
Research on Intelligent Generation of Line Drawings from Point Clouds for Ancient Architectural Heritage
by Shuzhuang Dong, Dan Wu, Weiliang Kong, Wenhu Liu and Na Xia
Buildings 2025, 15(18), 3341; https://doi.org/10.3390/buildings15183341 - 15 Sep 2025
Viewed by 472
Abstract
Addressing the inefficiency, subjective errors, and limited adaptability of existing methods for surveying complex ancient structures, this study presents an intelligent hierarchical algorithm for generating line drawings guided by structured architectural features. Leveraging point cloud data, our approach integrates prior semantic and structural knowledge of ancient buildings to establish a multi-granularity feature extraction framework encompassing local geometric features (normal vectors, curvature, Simplified Point Feature Histograms-SPFH), component-level semantic features (utilizing enhanced PointNet++ segmentation and geometric graph matching for specialized elements), and structural relationships (adjacency analysis, hierarchical support inference). This framework autonomously achieves intelligent layer assignment, line type/width selection based on component semantics, vectorization optimization via orthogonal and hierarchical topological constraints, and the intelligent generation of sectional views and symbolic annotations. We implemented an algorithmic toolchain using the AutoCAD Python API (pyautocad version 0.5.0) within the AutoCAD 2023 environment. Validation on point cloud datasets from two representative ancient structures—Guanchang No. 11 (Luoyuan County, Fujian) and Li Tianda’s Residence (Langxi County, Anhui)—demonstrates the method’s effectiveness in accurately identifying key components (e.g., columns, beams, Dougong brackets), generating engineering-standard line drawings with significantly enhanced efficiency over traditional approaches, and robustly handling complex architectural geometries. This research delivers an efficient, reliable, and intelligent solution for digital preservation, restoration design, and information archiving of ancient architectural heritage. Full article
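The local geometric features listed above (normal vectors and a curvature measure) are conventionally estimated by PCA of each point's k-nearest-neighbour covariance: the smallest eigenvector gives the normal and the eigenvalue ratio gives a curvature proxy. The sketch below implements that standard recipe, not the paper's exact feature set (SPFH is omitted).

```python
# Estimate per-point normals and a curvature proxy via PCA of local neighbourhoods.
import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(points, k=16):
    """points: (N, 3). Returns normals (N, 3) and surface variation (N,)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    curvature = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                  # 3x3 covariance of the neighbourhood
        evals, evecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
        normals[i] = evecs[:, 0]                      # smallest-variance direction = normal
        curvature[i] = evals[0] / evals.sum()         # surface variation in [0, 1/3]
    return normals, curvature
```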
