Editorial

A Brief Introduction to Intelligent Point Cloud Processing, Sensing, and Understanding: Part II

Miaohui Wang 1 and Sukun Tian 2

1 Guangdong Key Laboratory of Intelligent Information Processing, College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518052, China
2 School of Stomatology, Peking University, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(5), 1310; https://doi.org/10.3390/s25051310
Submission received: 17 February 2025 / Accepted: 19 February 2025 / Published: 21 February 2025

1. Introduction

The point cloud, which represents the three-dimensional (3D) digital world, is one of the fundamental data carriers in many emerging applications [1], including autonomous driving, robotics, and the geospatial field. Advancements in sensor technologies have facilitated the acquisition of point clouds from diverse platforms [2], perspectives, and spectra. This progress has underscored the necessity of addressing the generation [3], processing [4], analysis [5], and quality evaluation [6] of point cloud data.
The demand for intelligent point cloud processing and analysis has surged across various industries [7]. For example, in construction and architecture, accurate 3D models enhance design, construction, and maintenance [8], improving project outcomes and reducing costs. Integrating point cloud data into physical object systems ensures more reliable digital representations of structures [9]. In autonomous driving, real-time point cloud processing is critical for vehicle perception [10], where LiDAR sensors help vehicles detect and navigate their environment, ensuring safety and reliability. Additionally, industries such as manufacturing, cultural heritage preservation, and urban planning rely on point cloud data for quality control, digital restoration, and city modeling. The incorporation of artificial intelligence (AI) has revolutionized point cloud applications [11], where AI-driven technology helps to automate object recognition, segmentation, and classification, significantly improving efficiency and accuracy [12].
This Special Issue serves as a comprehensive collection of recent advances in point cloud processing across various sensors. A total of ten contributions from the Republic of Korea, the UK, the People's Republic of China, Belgium, and the USA were ultimately accepted for publication. These contributions delve into diverse aspects of point clouds, including dataset establishment, registration, detection, performance evaluation, receptive field analysis, and plant feature extraction. We provide a brief introduction to each of the collected contributions in the following section.

2. Overview of Contributions

Contribution 1 introduces an innovative framework to enhance LiDAR-based datasets for advanced driver-assistance systems (ADASs), improving the representation of distant objects by generating synthetic object points from real point clouds. The proposed framework consists of three key modules: (1) position determination identifies optimal locations and orientations for synthetic objects by employing ground filtering to separate ground and non-ground points, selecting candidate positions, resolving collisions, and determining object poses; (2) object generation converts LiDAR data from Cartesian to spherical coordinates to generate synthetic points, where a spherical point-tracking technique and a “point wall” method mitigate excessive data loss and preserve object shape fidelity; (3) synthetic annotation automatically labels object points with attributes such as position, size, pose, and occlusion, ensuring consistency with datasets such as KITTI-360. Experimental results show that integrating these synthetic objects into training datasets enhances the performance of 3D detection models.
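To make the coordinate handling of the object generation module concrete, the following is a minimal numpy sketch of converting LiDAR points from Cartesian to spherical coordinates, assuming a sensor-centered frame; the `shift_object_in_range` helper is a hypothetical stand-in for relocating an object cluster in range, not the authors' point-tracking or point-wall implementation.

```python
import numpy as np

def cartesian_to_spherical(points: np.ndarray) -> np.ndarray:
    """Convert (N, 3) LiDAR points (x, y, z) to (range, azimuth, elevation)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                 # distance from the sensor
    azimuth = np.arctan2(y, x)                         # horizontal angle, radians
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    return np.stack([r, azimuth, elevation], axis=1)

def shift_object_in_range(spherical: np.ndarray, new_range: float) -> np.ndarray:
    """Hypothetical helper: move an object cluster to a new range while keeping
    its angular footprint, mimicking how a distant synthetic object could be
    derived from a nearby real scan."""
    shifted = spherical.copy()
    shifted[:, 0] += new_range - shifted[:, 0].min()
    return shifted
```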
Contribution 2 advances 3D data processing by introducing a point cloud-specific attention mechanism to enhance convolutional neural networks (CNNs). Traditional CNNs struggle with point clouds due to their unstructured and unordered nature. To address these issues, the authors propose a channel attention mechanism tailored for point clouds, integrating it into the ConvPoint benchmark. The resulting module enhances feature emphasis, improving the network’s focus on critical information. Experimental results show that incorporating the attention mechanism increases the mean intersection over union (mIoU) score, outperforming the base ConvPoint framework. This enhancement enables more effective processing of complex point clouds, with applications in autonomous driving, robotics, and virtual reality.
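A minimal numpy sketch of channel attention over an unordered point feature set, in the squeeze-and-excitation spirit, is shown below; the random weights, reduction ratio, and sigmoid gating are illustrative assumptions, not the ConvPoint-integrated module itself.

```python
import numpy as np

def channel_attention(features: np.ndarray, reduction: int = 4, seed: int = 0) -> np.ndarray:
    """Reweight an (N_points, C) feature map by per-channel attention scores."""
    rng = np.random.default_rng(seed)
    n, c = features.shape
    # Squeeze: order-invariant global pooling over the unordered points.
    pooled = features.mean(axis=0)                              # (C,)
    # Excitation: a tiny bottleneck MLP (random weights stand in for learned ones).
    w1 = rng.standard_normal((c, c // reduction)) * 0.1
    w2 = rng.standard_normal((c // reduction, c)) * 0.1
    hidden = np.maximum(pooled @ w1, 0.0)                       # ReLU
    scores = 1.0 / (1.0 + np.exp(-(hidden @ w2)))               # sigmoid gate, (C,)
    # Scale: emphasize informative channels for every point.
    return features * scores

feats = np.random.default_rng(1).standard_normal((1024, 64))
print(channel_attention(feats).shape)  # -> (1024, 64)
```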
Contribution 3 proposes a novel algorithm for efficiently extracting phenotypic traits of rice plants using terrestrial laser scanning (TLS) data. Because traditional manual measurements are labor-intensive and time-consuming, the study leverages TLS data to automatically extract key phenotypic features, including crown diameter, stem perimeter, plant height, surface area, volume, and projected leaf area. The extraction process employs several point cloud processing methods: (1) a neighborhood search algorithm calculates crown diameter and stem perimeter by establishing geometric relationships between point clouds; (2) an alpha-shape algorithm reconstructs the plant’s 3D surface to determine surface area and volume; (3) extended hierarchical density-based spatial clustering groups plant stem point clouds to obtain the tiller number. Together, these algorithms enhance the efficiency and accuracy of phenotypic data collection, offering a valuable resource for rice breeding and growth monitoring.
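For the surface and volume step, a hedged sketch follows. SciPy has no built-in alpha-shape, so `ConvexHull` (the alpha-shape limit as alpha grows) stands in here; a true alpha-shape would hug concave plant geometry more tightly.

```python
import numpy as np
from scipy.spatial import ConvexHull

def plant_surface_and_volume(points: np.ndarray) -> tuple:
    """Approximate surface area and volume of an (N, 3) plant point cloud."""
    hull = ConvexHull(points)
    return hull.area, hull.volume

def projected_leaf_area(points: np.ndarray) -> float:
    """Hull area of the points projected onto the ground (x, y) plane.
    Note: for a 2D hull, SciPy's .volume is the enclosed area."""
    return ConvexHull(points[:, :2]).volume

pts = np.random.default_rng(0).standard_normal((500, 3))
area, vol = plant_surface_and_volume(pts)
print(f"surface ~ {area:.2f}, volume ~ {vol:.2f}, projected ~ {projected_leaf_area(pts):.2f}")
```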
Contribution 4 introduces a flexible and adaptive framework that enhances feature learning through the innovative use of receptive field space (RFS) and attention mechanisms. Since traditional methods rely on manually defined local neighborhoods, they may be inflexible and may fail to capture both local details and global dependencies. To address these problems, the authors propose constructing an RFS mechanism that extracts effective features across multiple receptive field ranges, allowing adaptive scale selection for each point. Moreover, they develop an RFS attention method, which dynamically adjusts the network’s focus across receptive field ranges, enhancing feature representation capability. This mechanism is integrated into a network architecture for point cloud classification and segmentation. Experimental results show that the proposed RFS effectively captures both local and global features, leading to improved 3D point cloud analysis.
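The receptive-field-space idea can be illustrated with a toy sketch that pools neighbor features at several radii and softmax-weights the scales per point; the radii, mean pooling, and energy-based scoring are assumptions for illustration, whereas the paper learns its attention weights.

```python
import numpy as np
from scipy.spatial import cKDTree

def multi_scale_fuse(points: np.ndarray, feats: np.ndarray,
                     radii=(0.1, 0.3, 0.9)) -> np.ndarray:
    """Fuse (N, C) features over several receptive field ranges for (N, 3) points."""
    tree = cKDTree(points)
    pooled_scales = []
    for r in radii:
        pooled = np.empty_like(feats)
        for i, nbrs in enumerate(tree.query_ball_point(points, r)):
            pooled[i] = feats[nbrs].mean(axis=0)   # each point is its own neighbor
        pooled_scales.append(pooled)
    scales = np.stack(pooled_scales, axis=1)       # (N, S, C)
    # Score each scale by feature energy; softmax gives a soft per-point scale choice.
    logits = np.linalg.norm(scales, axis=2)        # (N, S)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return (w[:, :, None] * scales).sum(axis=1)    # (N, C)
```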
Contribution 5 presents a comprehensive evaluation of six point cloud registration methods for aligning computer-aided design (CAD) models with real-world 3D scans. Unlike prior studies relying on synthetic datasets, this study utilizes point clouds from the Cranfield benchmark, incorporating CAD-sampled models and 3D scans of physical objects. The authors introduce real-world complexities such as noise and outliers, providing a more rigorous assessment. They evaluate three classical registration methods (i.e., GO-ICP, RANSAC, FGR) and three learning-based approaches (i.e., PointNetLK, RPMNet, ROPNet) using metrics such as recall, accuracy, computation time, and robustness to noise and partial data. The study provides valuable findings that highlight the strengths and limitations of classical and learning-based registration techniques on real-world data, offering practical guidance for domain researchers in selecting methods based on accuracy, efficiency, and noise resilience.
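The flavor of such an evaluation can be reproduced with off-the-shelf tooling. The sketch below uses Open3D's stock point-to-point ICP as a stand-in for the six compared methods, on an arbitrary synthetic "CAD vs. noisy scan" pair with an assumed distance threshold; fitness and inlier RMSE play the role of the recall/accuracy metrics.

```python
import numpy as np
import open3d as o3d

rng = np.random.default_rng(0)
cad = rng.random((500, 3))                       # stand-in for a CAD-sampled model
theta = np.deg2rad(5.0)                          # small ground-truth misalignment
gt = np.eye(4)
gt[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
gt[:3, 3] = [0.05, -0.02, 0.03]
scan = cad @ gt[:3, :3].T + gt[:3, 3] + rng.normal(0, 0.002, (500, 3))  # noisy "scan"

source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cad))
target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan))

result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.1,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Report recall-style metrics under the estimated transform.
report = o3d.pipelines.registration.evaluate_registration(
    source, target, 0.1, result.transformation)
print(f"fitness={report.fitness:.3f}, inlier_rmse={report.inlier_rmse:.4f}")
```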
Contribution 6 introduces a refined point cloud registration method that leverages geometric constraints and a dual-criteria evaluation process. It is noted that traditional point-to-point methods often suffer from inaccuracies due to erroneous matches and noise. The proposed approach enhances reliability by requiring only two correspondences (i.e., instead of the conventional three) to generate a transformation matrix, reducing computational complexity. Specifically, keypoints are detected to establish initial correspondences, with high-quality matches selected for improved alignment. Rotation and translation matrices are then computed using centroids and local reference frames. The optimal transformation matrix is determined based on the overlap ratio and inlier count. Experimental results demonstrate that integrating geometric constraints with a comprehensive evaluation strategy significantly enhances both accuracy and efficiency.
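The centroid-plus-SVD machinery underlying such transformation estimation can be sketched as follows. This is the textbook Kabsch solver for general correspondences plus an inlier-count check; the paper's two-correspondence variant additionally exploits local reference frames, which are omitted here.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Best-fit rotation and translation mapping (k, 3) src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)           # correspondence centroids
    h = (src - cs).T @ (dst - cd)                         # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))                # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    m = np.eye(4)
    m[:3, :3] = rot
    m[:3, 3] = cd - rot @ cs
    return m

def inlier_count(src: np.ndarray, dst: np.ndarray, m: np.ndarray, tau: float = 0.01) -> int:
    """One half of a dual-criteria check: matches the transform brings within tau."""
    moved = src @ m[:3, :3].T + m[:3, 3]
    return int((np.linalg.norm(moved - dst, axis=1) < tau).sum())
```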
Contribution 7 presents an innovative LiDAR-based method for dynamic target detection, where motion states are evaluated by analyzing positional and geometric differences in point cloud clusters across consecutive frames. To accurately pair clusters representing the same target, a double registration algorithm is introduced, where a coarse registration is performed via iterative closest point (ICP) for initial pose estimation, followed by fine registration using random sample consensus and a four-parameter transformation for precise inter-frame alignment. This dual-step process standardizes coordinate systems, facilitating cluster association. Based on these paired clusters, the study constructs a classification feature system and employs the XGBoost decision tree for motion state evaluation. To improve training efficiency, a Spearman rank correlation-based bidirectional search reduces feature dimensionality, optimizing the classification subset. Meanwhile, a double Boyer–Moore voting–sliding window algorithm refines detection accuracy.
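A small sketch of Spearman-based feature screening before XGBoost training is shown below; the thresholds and the greedy relevance-then-redundancy order are illustrative assumptions rather than the paper's bidirectional search.

```python
import numpy as np
from scipy.stats import spearmanr

def select_features(X: np.ndarray, y: np.ndarray,
                    relevance_min: float = 0.2, redundancy_max: float = 0.8) -> list:
    """Keep features correlated with the motion label but not with each other."""
    n_features = X.shape[1]
    # Rank features by |Spearman rho| against the label, strongest first.
    relevance = np.empty(n_features)
    for j in range(n_features):
        rho, _ = spearmanr(X[:, j], y)
        relevance[j] = abs(rho)
    kept = []
    for j in np.argsort(-relevance):
        if relevance[j] < relevance_min:
            break                         # remaining candidates are weaker still
        # Drop features largely redundant with one already kept.
        if all(abs(spearmanr(X[:, j], X[:, k])[0]) < redundancy_max for k in kept):
            kept.append(int(j))
    return kept
```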
Contribution 8 introduces a robust solution for 3D object detection by effectively leveraging distance information and preserving critical point features, thereby enhancing the accuracy and reliability of scene understanding in complex environments. Specifically, the authors propose a set abstraction enhancement (SAE) network to address challenging issues raised by sparse and irregular point cloud data, which incorporates three key modules: an initial feature fusion module, a keypoint feature enhancement module, and a revised group aggregation module. By emphasizing distance information, the proposed network enhances the representation of distant objects, mitigating the decline in reflectivity that occurs with increased range. Moreover, it reinforces the intrinsic features of keypoints before they are combined with aggregated features, ensuring that essential semantic information is retained. Finally, the semantic coherence of the aggregated features improves the network’s ability to differentiate between objects. Experimental results demonstrate that the integration of distance features and the enhancement of keypoint characteristics contribute to the more accurate detection of objects at varying distances, addressing the challenges posed by point cloud sparsity and reflectivity attenuation.
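The distance emphasis can be pictured with a one-function sketch that appends each point's normalized range to its feature vector so later stages can compensate for reflectivity falloff; this simple concatenation is an assumption, not the SAE network's fusion module.

```python
import numpy as np

def fuse_distance_feature(points: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """Concatenate normalized sensor range onto (N, C) point features -> (N, C+1)."""
    dist = np.linalg.norm(points, axis=1, keepdims=True)   # (N, 1) range per point
    dist /= max(float(dist.max()), 1e-9)                   # scale to [0, 1]
    return np.concatenate([feats, dist], axis=1)
```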
Contribution 9 advances point cloud registration by addressing challenges in low-overlap environments, where traditional methods struggle due to their reliance on abundant, repeatable keypoints for accurate correspondence extraction. To address this, the authors propose a graph convolutional attention-based robust point cloud registration network (RRGA-Net), which optimizes correspondences among sparse keypoints through a multi-layer channel sampling mechanism and a template matching module. By forming patches through feature weight filtering, the proposed network captures more comprehensive contextual features, which is crucial for effective registration in low-overlap scenarios. Moreover, the integration of self-attention mechanisms allows the network to dynamically adjust weights based on relationships between points, enhancing the capture of both local and global features. Experimental results demonstrate that RRGA-Net exhibits robust performance, particularly excelling in low-overlap scenarios.
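To indicate the kind of self-attention weighting involved, here is a toy scaled dot-product attention over a point feature set in numpy; the random projections stand in for learned ones, and nothing here reflects RRGA-Net's graph convolution or template matching.

```python
import numpy as np

def point_self_attention(feats: np.ndarray, d_k: int = 32, seed: int = 0) -> np.ndarray:
    """Scaled dot-product self-attention over an (N, C) point feature set."""
    rng = np.random.default_rng(seed)
    n, c = feats.shape
    wq, wk, wv = (rng.standard_normal((c, d_k)) / np.sqrt(c) for _ in range(3))
    q, k, v = feats @ wq, feats @ wk, feats @ wv
    logits = q @ k.T / np.sqrt(d_k)                 # (N, N) pairwise relation scores
    logits -= logits.max(axis=1, keepdims=True)     # stabilize the softmax
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)         # each row sums to one
    return attn @ v                                 # (N, d_k) context-aware features
```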
Contribution 10 presents a novel framework for accurately measuring cuboid and cylindrical objects using point cloud data from time-of-flight (ToF) sensors. ToF sensors often produce low-resolution, noisy data with self-occlusions and multipath interference, distorting object shape and size. To address these issues, the authors propose an enhanced superquadric fitting technique designed for noisy and incomplete point clouds. The proposed framework first performs ground plane rectification using fiducial markers to align the ground horizontally, followed by segmentation to isolate the objects of interest. A superquadric shape is then fitted using non-linear least squares regression. The proposed method is tested on objects of known dimensions placed on various surfaces, including aluminum foil, black/white posterboard, and black felt. Experimental results demonstrate that the enhanced superquadric fitting, particularly the bounding method, significantly improves accuracy, making this approach valuable for precise object measurement applications.
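Finally, the core fitting step can be sketched with SciPy's non-linear least squares. The inside-outside residual below is the textbook superellipsoid formulation, and the bounds and initialization are assumptions; the paper's enhanced (e.g., bounding) variant goes beyond this.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_residuals(params, pts):
    """Inside-outside residual of an axis-aligned superellipsoid at each point."""
    a, b, c, e1, e2 = params
    x, y, z = np.abs(pts[:, 0] / a), np.abs(pts[:, 1] / b), np.abs(pts[:, 2] / c)
    f = (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)
    return f ** (e1 / 2) - 1.0            # zero exactly on the fitted surface

def fit_superquadric(pts):
    """Fit size (a, b, c) and shape (e1, e2); small e's -> cuboid, e2 ~ 1 -> cylinder."""
    x0 = np.concatenate([np.abs(pts).max(axis=0), [1.0, 1.0]])  # ellipsoid start
    res = least_squares(superquadric_residuals, x0, args=(pts,),
                        bounds=([1e-3] * 3 + [0.1, 0.1], [np.inf] * 3 + [2.0, 2.0]))
    return res.x

# Synthetic test: points snapped to the faces of a 0.4 x 0.3 x 0.2 box.
rng = np.random.default_rng(0)
half = np.array([0.2, 0.15, 0.1])
pts = rng.uniform(-1.0, 1.0, (400, 3)) * half
face = rng.integers(0, 3, 400)
rows = np.arange(400)
pts[rows, face] = np.where(pts[rows, face] >= 0, half[face], -half[face])
print(np.round(fit_superquadric(pts), 3))   # sizes near (0.2, 0.15, 0.1), small e1/e2
```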

3. Conclusions

The evolution of point cloud acquisition and analysis has been propelled by AI advancements. This Special Issue compiles a diverse portfolio of contributions that address critical challenges in point cloud processing, sensing, and understanding. The selected studies push the boundaries of current knowledge by offering innovative solutions to existing challenges and unlocking new 3D applications. We anticipate that these developments can further expand the applications of point clouds across various industries, offering new opportunities and valuable insights for both researchers and practitioners to drive research and innovation in this field.

Author Contributions

Original draft preparation, M.W.; review and editing, M.W. and S.T. All authors have read and agreed to the published version of this manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors express their sincere gratitude to the Journal Office of the MDPI journal Sensors for the kindness and assistance shown in the preparation of this Editorial.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Contributions

  • Kim, K.; Lee, S.; Kakani, V.; Li, X.; Kim, H. Point Cloud Wall Projection for Realistic Road Data Augmentation. Sensors 2024, 24, 8144.
  • Umar, S.; Taherkhani, A. PointCloud-At: Point Cloud Convolutional Neural Networks with Attention for 3D Data Processing. Sensors 2024, 24, 6446.
  • Wang, K.; Pu, X.; Li, B. Automated Phenotypic Trait Extraction for Rice Plant Using Terrestrial Laser Scanning Data. Sensors 2024, 24, 4322.
  • Jiang, Z.; Tao, H.; Liu, Y. Receptive Field Space for Point Cloud Analysis. Sensors 2024, 24, 4274.
  • Denayer, M.; De Winter, J.; Bernardes, E.; Vanderborght, B.; Verstraten, T. Comparison of Point Cloud Registration Techniques on Scanned Physical Objects. Sensors 2024, 24, 2142.
  • Kang, C.; Geng, C.; Lin, Z.; Zhang, S.; Zhang, S.; Wang, S. Point Cloud Registration Method Based on Geometric Constraint and Transformation Evaluation. Sensors 2024, 24, 1853.
  • Xu, A.; Gao, J.; Sui, X.; Wang, C.; Shi, Z. LiDAR Dynamic Target Detection Based on Multidimensional Features. Sensors 2024, 24, 1369.
  • Zhang, Z.; Bao, Z.; Tian, Q.; Lyu, Z. SAE3D: Set Abstraction Enhancement Network for 3D Object Detection Based Distance Features. Sensors 2024, 24, 26.
  • Qian, J.; Tang, D. RRGA-Net: Robust Point Cloud Registration Based on Graph Convolutional Attention. Sensors 2023, 23, 9651.
  • Rodriguez, B.; Rangarajan, P.; Zhang, X.; Rajan, D. Dimensioning Cuboid and Cylindrical Objects Using Only Noisy and Partially Observed Time-of-Flight Data. Sensors 2023, 23, 8673.

References

  1. Wang, M.; Yue, G.; Xiong, J.; Tian, S. Intelligent Point Cloud Processing, Sensing, and Understanding. Sensors 2024, 24, 283. [Google Scholar] [CrossRef] [PubMed]
  2. Fang, J.; Zhou, D.; Zhao, J.; Wu, C.; Tang, C.; Xu, C.Z.; Zhang, L. LiDAR-CS dataset: LiDAR point cloud dataset with cross-sensors for 3D object detection. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; pp. 14822–14829. [Google Scholar]
  3. Zhou, C.; Zhong, F.; Hanji, P.; Guo, Z.; Fogarty, K.; Sztrajman, A.; Gao, H.; Oztireli, C. Frepolad: Frequency-rectified point latent diffusion for point cloud generation. In Proceedings of the Springer European Conference on Computer Vision (ECCV), Milan, Italy, 29 September–4 October 2024; pp. 434–453. [Google Scholar]
  4. Fugacci, U.; Romanengo, C.; Falcidieno, B.; Biasotti, S. Reconstruction and preservation of feature curves in 3D point cloud processing. Comput.-Aided Des. 2024, 167, 103649. [Google Scholar] [CrossRef]
  5. Zhou, X.; Liang, D.; Xu, W.; Zhu, X.; Xu, Y.; Zou, Z.; Bai, X. Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 14707–14717. [Google Scholar]
  6. Xie, W.; Liu, Y.; Wang, K.; Wang, M. LLM-guided Cross-Modal Point Cloud Quality Assessment: A Graph Learning Approach. IEEE Signal Process. Lett. 2024, 31, 2250–2254. [Google Scholar] [CrossRef]
  7. Wang, M.; Huang, R.; Xie, W.; Ma, Z.; Ma, S. Compression Approaches for LiDAR Point Clouds and Beyond: A Survey. ACM Trans. Multimed. Comput. Commun. Appl. 2025, 1–30. [Google Scholar] [CrossRef]
  8. Choi, M.; Kim, S.; Kim, S. Semi-automated visualization method for visual inspection of buildings on BIM using 3D point cloud. J. Build. Eng. 2024, 81, 108017. [Google Scholar] [CrossRef]
  9. Li, Y.; Xiao, Z.; Li, J.; Shen, T. Integrating vision and laser point cloud data for shield tunnel digital twin modeling. Autom. Constr. 2024, 157, 105180. [Google Scholar] [CrossRef]
  10. Wang, M.; Huang, R.; Liu, Y.; Li, Y.; Xie, W. suLPCC: A novel LiDAR point cloud compression framework for scene understanding tasks. IEEE Trans. Ind. Inform. 2025, 1–12. [Google Scholar] [CrossRef]
  11. Zheng, Y.; Li, Y.; Yang, S.; Lu, H. Global-PBNet: A novel point cloud registration for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22312–22319. [Google Scholar] [CrossRef]
  12. Zhu, Q.; Fan, L.; Weng, N. Advancements in point cloud data augmentation for deep learning: A survey. Pattern Recognit. 2024, 153, 110532. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
