Article

Point Cloud Denoising in Outdoor Real-World Scenes Based on Measurable Segmentation

College of Geoscience and Surveying Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(13), 2347; https://doi.org/10.3390/rs16132347
Submission received: 22 April 2024 / Revised: 21 June 2024 / Accepted: 24 June 2024 / Published: 27 June 2024
(This article belongs to the Section Engineering Remote Sensing)

Abstract

With the continuous advancements in three-dimensional scanning technology, point clouds are fundamental data in various fields such as autonomous driving, 3D urban modeling, and the preservation of cultural heritage. However, inherent inaccuracies in instruments and external environmental interference often introduce noise and outliers into point cloud data, posing numerous challenges for advanced processing tasks such as registration, segmentation, classification, and 3D reconstruction. To effectively address these issues, this study proposes a hierarchical denoising strategy based on finite measurable segmentation in spherical space, taking into account the performance differences in horizontal and vertical resolutions of LiDAR systems. The effectiveness of this method was validated through a denoising experiment conducted on point cloud data collected from real outdoor environments. The experimental results indicate that this denoising strategy not only effectively eliminates noise but also more accurately preserves the original detail features of the point clouds, demonstrating significant advantages over conventional denoising techniques. Overall, this study introduces a novel and effective method for denoising point cloud data in outdoor real-world scenes.

1. Introduction

With the rapid advancement of three-dimensional sensing technologies, point cloud data have become an indispensable component in various applications [1]. From the 3D reconstruction of ancient ruins to contemporary autonomous vehicles, point clouds offer an intuitive and precise representation of three-dimensional data. This superior form of data representation has found extensive applications in virtual reality [2,3,4], augmented reality [5,6], autonomous vehicles [7,8,9], geological surveys [10,11,12,13], and many other domains. However, due to the inherent limitations of 3D scanning and sensing technologies, coupled with environmental noise, scattering, refraction, and other interference factors, the use of point cloud data often comes with significant noise and outliers.
Such noise and outliers pose substantial challenges to further processing point cloud data [14]. For instance, in point cloud registration tasks, noise and outliers can lead to incorrect registration results, subsequently affecting the accuracy of the entire 3D model. For feature extraction and object recognition tasks, noise and outliers can obscure features, leading to misidentification [15]. Thus, denoising point clouds is a crucial step in any point cloud processing pipeline [16]. Effective denoising not only enhances data quality but also establishes a more robust foundation for subsequent processing steps. However, it is worth noting that the diversity of point cloud data implies there is not a singular denoising strategy suitable for all applications. Different applications and scenes often demand specific treatments and optimizations.
Over the past few years, researchers have made considerable progress on point cloud denoising methods. Cheng et al. [17] proposed a denoising algorithm based on mesh principal component analysis and ground splicing that effectively removes noise from point cloud data. However, this method may lose features of objects outside the plane while removing noise and requires precise parameter adjustment to achieve the best denoising effect. Sun et al. [18] proposed a structure-aware denoising method for real-world noisy point clouds, effectively preserving point cloud structures while removing noise by combining external and internal prior knowledge, demonstrating excellent comprehensive performance. Nonetheless, this method may face challenges when dealing with specific complex structures in point clouds. Zheng et al. [19] proposed a single-stage adaptive multi-scale point cloud noise filtering algorithm based on feature information, which, through an efficient kd-tree structure and improved normal vector estimation method, can quickly and accurately process both large-scale and small-scale noise while maintaining the geometric features of the point cloud. Despite its effectiveness, this algorithm involves multiple parameters, is sensitive to parameter settings, and has a high level of computational complexity, which results in longer processing times for large point clouds. Shi et al. [20] proposed a three-dimensional point cloud denoising algorithm based on a gravitational feature function. This algorithm effectively removes sparse and dense drift noises in point cloud data by calculating the gravitational force between the centroid of the point cloud and the spherical neighborhood of points. However, this method may incorrectly identify small objects in scenes with large size differences as noise, indicating limitations in specific application scenarios. Charron et al. [21] proposed a dynamic radius outlier removal (DROR) filter, considering the changing point cloud density with increasing distance from the sensor, to effectively remove noise caused by snow while retaining scene details. Even so, this method faces challenges in processing points close to the sensor and avoiding the erroneous deletion of non-snow points, requiring careful adjustment of parameters to adapt to different scenes. Nevertheless, a common challenge faced by these methods is how to balance denoising performance and computational efficiency. Processing large-scale, high-density point cloud data requires substantial computational resources and time, which is unacceptable in some real-time applications. Furthermore, excessive denoising may lead to the loss of important information within the data, thus diminishing the representational ability of the point cloud. Therefore, ensuring both denoising performance and computational efficiency while maintaining data quality is crucial.
To address these challenges, this study introduces an innovative hierarchical denoising method that leverages the high resolution of LiDAR in the horizontal direction and the low resolution in the vertical direction. This method decomposes point cloud data into multiple subsets, allowing for the selection of suitable denoising parameters for different spatial regions of the point cloud subsets, thereby enabling precise control over noise and outliers.

2. Methodology

2.1. Calculation of the Maximum Radius

The maximum radius of a point cloud is the radius of a sphere surrounding the entire point cloud. For a point cloud composed of $n$ points, the calculation formula for the maximum radius $R_{\max}$ is as follows:

$$R_{\max} = \max_{i = 1, 2, \ldots, n} d\left(p_i, \bar{p}\right)$$

where $\bar{p}$ is the mean value of the points $p_i$ and the center of the point cloud.
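For clarity, a minimal NumPy sketch of this computation is given below; the `(n, 3)` array layout and the use of the Euclidean distance for $d(\cdot,\cdot)$ are assumptions, since the text does not fix them explicitly.

```python
import numpy as np

def max_radius(points):
    """Compute R_max (Formula (1)) and the point cloud center p_bar.

    points: (n, 3) array of x, y, z coordinates (assumed layout).
    """
    center = points.mean(axis=0)                          # p_bar: centroid of the cloud
    distances = np.linalg.norm(points - center, axis=1)   # d(p_i, p_bar), Euclidean (assumed)
    return distances.max(), center                        # R_max and the center
```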

2.2. Calculation of the Number of Concentric Spheres

To compute the number of concentric spheres under the framework of spherical space finite measurable segmentation, the maximum bounding sphere for the entire point cloud is constructed based on the calculated maximum radius $R_{\max}$. Subsequently, the number of concentric spheres (i.e., the number of point cloud subsets) is determined using the obtained maximum radius $R_{\max}$. The specific formula for calculating the number of concentric spheres $N$ is as follows:

$$N = \left\lceil \lambda R_{\max} \right\rceil$$

where $\lambda$ represents an adjustment coefficient and $\lceil \cdot \rceil$ represents the ceiling function.
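A direct translation of Formula (2) is shown below; the adjustment coefficient $\lambda$ is left as a user-supplied argument, since the study does not report a specific value for it.

```python
import math

def num_concentric_spheres(r_max, lam):
    """N = ceil(lambda * R_max), Formula (2); lam is the adjustment coefficient."""
    return max(1, math.ceil(lam * r_max))   # at least one subset
```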

2.3. Finite Measurable Segmentation of Point Clouds in Spherical Space

2.3.1. Equidistant Segmentation (ES) and Proportional Segmentation (PS)

Assuming the outermost bounding sphere of the entire point cloud constitutes a measurable space $(X, \mathcal{F})$, when this measurable space is subjected to equidistant or proportional finite measurable segmentation, the spherical space is partitioned into a series of concentric spheres denoted as $S_n$, $n = 1, 2, \ldots, N$, with the property $S_n \in \mathcal{F}$. The radii of the concentric spheres generated under these two segmentation methods differ, resulting in varied distributions of the point cloud at different hierarchies within the spherical space. Equidistant segmentation divides the spherical space surrounding the entire point cloud into several subsets $S_n$ with equal differences between the radii of adjacent subsets. Proportional segmentation divides the spherical space surrounding the entire point cloud into several subsets $S_n$ with an equal ratio between the radii of adjacent subsets. The radius $R_n$ corresponding to the subset $S_n$ is calculated as follows:

$$R_n = \begin{cases} n \dfrac{R_{\max}}{N}, & \text{ES} \\[4pt] D \left( \dfrac{R_{\max}}{D} \right)^{n/N}, & \text{PS} \end{cases} \qquad (n = 1, 2, \ldots, N)$$
where $n$ represents the sequence number of the point cloud subsets under the equidistant and proportional segmentations in the spherical space, and $D$ represents the detection blind area of the LiDAR, whose range is calculated as follows:

$$D = h \tan\left( \frac{\pi}{2} - \theta_v \right)$$

where $h$ represents the installation height of the LiDAR and $\theta_v$ represents the vertical field of view angle of the LiDAR.
For any point $p_i$ in the point cloud, if its distance to the center $\bar{p}$ of the point cloud satisfies

$$R_{n-1} \le d\left(p_i, \bar{p}\right) \le R_n, \qquad n = 1, 2, \ldots, N$$

then $p_i$ is a part of the subset $S_n$. Both equidistant and proportional segmentations are employed to ensure the finite measurable segmentation of the point cloud in spherical space.
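The two segmentation rules and the subset assignment of Formula (5) can be sketched as follows. The proportional radii follow the reconstruction of Formula (3) above, i.e., a geometric progression from the blind-area radius $D$ to $R_{\max}$; this reading, and the shell-indexing convention, are assumptions rather than details fixed by the text.

```python
import numpy as np

def subset_radii(r_max, n_subsets, d_blind, mode="ES"):
    """Radii R_1..R_N of the concentric spheres (Formula (3), as reconstructed)."""
    n = np.arange(1, n_subsets + 1)
    if mode == "ES":                                          # equal radius differences
        return n * (r_max / n_subsets)
    return d_blind * (r_max / d_blind) ** (n / n_subsets)     # PS: equal radius ratios

def assign_to_subsets(points, center, radii):
    """Index of the subset S_n each point falls into (Formula (5))."""
    d = np.linalg.norm(points - center, axis=1)
    return np.searchsorted(radii, d)                          # 0-based shell index per point
```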

2.3.2. Distribution of Point Clouds in Different Hierarchical Spaces

The standard deviation (SD) serves as a statistical metric to characterize variability within a dataset. By setting the radii of concentric spheres, the study assesses the segmentation effects of point clouds at various hierarchical levels within spherical spaces. The standard deviation within each hierarchy and the total standard deviation (TSD) were calculated to evaluate the consistency of point cloud distributions across different hierarchies and the entire dataset. At a specific hierarchy, a smaller standard deviation indicates higher uniformity in the point cloud distribution, whereas a larger standard deviation suggests poorer uniformity. The total standard deviation acts as a comprehensive metric for assessing the uniformity of the entire point cloud distribution. Specifically, a smaller total standard deviation indicates that the uniformity of the point cloud distribution is consistent across different hierarchies, while a larger total standard deviation indicates a significant variability in uniformity between hierarchies. The calculation formulas for the standard deviation at different hierarchies $\sigma_n$ and the total standard deviation $\sigma_{\mathrm{TSD}}$ are as follows:

$$\sigma_n = \sqrt{ \frac{1}{m} \sum_{i=1}^{m} \left( NN_i - \overline{NN} \right)^2 }$$

$$\sigma_{\mathrm{TSD}} = \sqrt{ \frac{1}{N-1} \sum_{n=1}^{N} \left( \sigma_n - \bar{\sigma} \right)^2 }$$

where $m$ represents the number of points in the same hierarchy, $N$ represents the number of point cloud subsets, $NN_i$ represents the nearest-neighbor distance of point $p_i$ within that hierarchy, $\overline{NN}$ represents the mean value of $NN_i$, and $\bar{\sigma}$ represents the mean value of $\sigma_n$.
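The per-hierarchy standard deviation and the TSD of Formulas (6) and (7) can be computed as in the sketch below; the use of SciPy's k-d tree for the nearest-neighbor distances $NN_i$ is an implementation choice, not something the paper prescribes.

```python
import numpy as np
from scipy.spatial import cKDTree

def hierarchy_uniformity(points, labels, n_subsets):
    """sigma_n per hierarchy (Formula (6)) and the total standard deviation (Formula (7))."""
    sigmas = []
    for n in range(n_subsets):
        subset = points[labels == n]
        if len(subset) < 2:
            continue                                            # skip empty or single-point shells
        nn = cKDTree(subset).query(subset, k=2)[0][:, 1]        # NN_i: distance to the nearest other point
        sigmas.append(np.sqrt(np.mean((nn - nn.mean()) ** 2)))  # sigma_n for this hierarchy
    sigmas = np.asarray(sigmas)
    tsd = np.sqrt(np.sum((sigmas - sigmas.mean()) ** 2) / (len(sigmas) - 1))
    return sigmas, tsd
```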

2.4. Calculation of Point Cloud Hierarchical Denoising Parameters

2.4.1. Calculation of Point Cloud Hierarchical Denoising Parameters under Equidistant Segmentation

The spherical space enclosing the point cloud set is divided into finite measurable segments with a fixed radius difference through equidistant segmentation. For the series of subspaces generated by this segmentation, different parameters (radius threshold and neighboring-point threshold) are selected to perform adaptive denoising on the point cloud. Among these parameters, the radius threshold $R_{e\_thresh}$ is calculated as follows:

$$R_{e\_thresh} = M \, \delta_h \, R_n$$
where $M$ represents the magnification factor, $\delta_h$ denotes the theoretical distance between adjacent points in the horizontal direction of the LiDAR per unit distance, and $R_n$ represents the radius of the concentric sphere under equidistant segmentation, whose detailed expression is given in Formula (3). The specific calculation formulas for $M$ and $\delta_h$ are as follows:

$$M = \frac{V_{\deg}}{2 H_{\deg}}$$

$$\delta_h = 2 \sin\left( \frac{H_{rad}}{2} \right)$$

where $V_{\deg}$ denotes the vertical angular resolution of the LiDAR, $H_{\deg}$ denotes the horizontal angular resolution of the LiDAR, and $H_{rad}$ represents the horizontal angular resolution of the LiDAR measured in radians.
In the context of equidistant segmentation in spherical space, the threshold $T_{u\_nn}$ for the number of neighboring points at different hierarchies is calculated as follows:

$$T_{u\_nn} = \max\left( 5, \; 10 R_{e\_thresh} + 1 \right)$$
To concisely and structurally present the logic and steps of the algorithm for hierarchically denoising point clouds using equidistant segmentation in spherical space, the flowchart of the algorithm is provided in Figure 1.
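As a complementary sketch, the per-shell parameter calculation under equidistant segmentation may be written as below. The exact forms of $M$ and $T_{u\_nn}$ follow the reconstructions of Formulas (9) and (11) above and should be treated as a reading of the original rather than a definitive implementation; rounding the neighbor count to an integer is an added implementation detail.

```python
import math

def es_denoise_params(r_n, v_deg, h_deg):
    """Radius threshold R_e_thresh and neighbor threshold T_u_nn for one ES subset."""
    m = v_deg / (2.0 * h_deg)                             # magnification factor M (Formula (9), as reconstructed)
    delta_h = 2.0 * math.sin(math.radians(h_deg) / 2.0)   # horizontal spacing per unit distance (Formula (10))
    r_thresh = m * delta_h * r_n                          # R_e_thresh (Formula (8))
    t_nn = max(5, int(10 * r_thresh + 1))                 # T_u_nn (Formula (11)), rounded to an integer count
    return r_thresh, t_nn
```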

2.4.2. Calculation of Point Cloud Hierarchical Denoising Parameters under Proportional Segmentation

Compared with equidistant segmentation, which uses a fixed radius difference, the calculation of adaptive denoising parameters for point clouds under proportional segmentation, which uses a fixed radius ratio, differs slightly. The calculation of the denoising radius threshold $R_{p\_thresh}$ is similar to Formula (8); the only difference is that the radius $R_n$ of the point cloud subset is obtained from the proportional segmentation method. The calculations of the magnification factor $M$ and the theoretical horizontal spacing $\delta_h$ between adjacent data points per unit distance remain consistent with Formulas (9) and (10), respectively. The process for hierarchical point cloud denoising under proportional segmentation of spherical space is structurally similar to that under equidistant segmentation. The threshold $T_{p\_nn}$ for the number of neighboring points used to denoise point clouds in the different hierarchical subspaces under proportional segmentation is calculated as follows:

$$T_{p\_nn} = \max\left( 5, \; \left\lceil 10 \log_{10}\left( 1 + 12 R_{p\_thresh} \right) \right\rceil \right)$$

where the functions $\max(\cdot)$ and $\lceil \cdot \rceil$ retain their previously defined interpretations.
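Putting the pieces together, the proportional-segmentation variant can be sketched shell by shell with Open3D's radius-outlier removal. The base-10 logarithm and the ceiling in $T_{p\_nn}$ follow the reconstruction of Formula (12) above, and treating `remove_radius_outlier` as the per-shell denoising primitive of the proposed method is an assumption made for illustration.

```python
import math
import numpy as np
import open3d as o3d

def hierarchical_denoise_ps(points, center, radii, v_deg, h_deg):
    """Hierarchical denoising sketch under proportional segmentation:
    each spherical shell is filtered with its own radius / neighbor thresholds."""
    labels = np.searchsorted(radii, np.linalg.norm(points - center, axis=1))
    m = v_deg / (2.0 * h_deg)                             # magnification factor M (as reconstructed)
    delta_h = 2.0 * math.sin(math.radians(h_deg) / 2.0)   # horizontal spacing per unit distance
    kept = []
    for n, r_n in enumerate(radii):
        shell = points[labels == n]
        if len(shell) == 0:
            continue
        r_thresh = m * delta_h * r_n                                     # R_p_thresh
        t_nn = max(5, math.ceil(10 * math.log10(1 + 12 * r_thresh)))     # T_p_nn (as reconstructed)
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(shell))
        pcd, _ = pcd.remove_radius_outlier(nb_points=t_nn, radius=r_thresh)
        kept.append(np.asarray(pcd.points))
    return np.vstack(kept) if kept else np.empty((0, 3))
```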

3. Experiment

3.1. Experimental Setup

3.1.1. Dataset Description

The point cloud dataset used in this study was collected from an outdoor real-world scene using the WLR-720 model LiDAR produced by VanJee Corporation, located in Beijing, China. This device is equipped with 16 scanning beams and has an effective ranging capability of up to 120 m (ranging from 0.5 m to 70 m at 10% reflectivity). It can perform a 360° full-range scan horizontally at a frequency of 20 Hz. Vertically, its field of view is 30°, spanning from −16° to 14°, thus providing extensive coverage of the vertical space. The WLR-720 achieves a resolution of 0.1° horizontally and 2° vertically, with a ranging accuracy of ±2 cm (typical value).
In this experiment, 1,489,400 data points were collected using the LiDAR system in outdoor real-world scenes. The acquired point cloud data comprehensively document the three-dimensional spatial coordinates of each data point, as well as the reflectance information of various targets. The scan encompassed a variety of subjects including buildings, trees, ground surfaces, vehicles, and pedestrians, among others. Figure 2 displays the raw point cloud data collected from the outdoor real-world scenes.

3.1.2. Experimental Environment and Tools

To verify the effectiveness of the point cloud denoising method proposed in this study, relevant experiments were conducted within specific hardware and software settings. The configuration of the experimental platform is as follows:
(1)
Hardware environment
  • CPU: High-performance Intel® Core™ i9-14900 with a clock speed of up to 5.8 GHz, providing robust data processing capabilities.
  • Graphics Card: NVIDIA® GeForce RTX™ 4060, supporting efficient graphics processing.
  • Memory: 32 GB of Samsung DDR5, ensuring ample data processing and storage capacity.
(2)
Software environment
The experiments were performed on a Windows 10 operating system. Data processing and analysis primarily relied on the Python 3.8.8 environment and its associated scientific computing libraries. The tools used included the Spyder 5.4.3 integrated development environment, Pandas 1.4.3 and NumPy 1.22.0 for data processing, SciPy 1.7.3 for scientific calculations, and Open3D 0.13.0 for visualization and processing of point cloud datasets.

3.2. Experimental Result

3.2.1. Visualization Comparison of Point Cloud before and after Hierarchical Denoising

(1)
Spatial distribution of data points in different ranges
To thoroughly analyze the spatial distribution characteristics of the raw point cloud, a histogram was constructed (see Figure 3), which displays the distribution of point distances relative to the center of the LiDAR system. In this histogram, the horizontal axis is divided into intervals of 10 m, representing the distances from the data points to the center of the LiDAR; the vertical axis indicates the number of data points within each distance interval.
The histogram clearly illustrates a significant phenomenon: as distance increases, the density of point clouds decreases, a trend consistent with the intrinsic measurement characteristics of LiDAR (see Figure 4). Specifically, within a 10 m radius near the center of the LiDAR, the point cloud density is at its highest, reflecting its high resolution and excellent detection capabilities for nearby targets. As the distance extends, the density of point clouds gradually diminishes, primarily due to the sparse distribution of objects within the scanned area. At greater distances, the density significantly reduces due to a combination of factors including the device’s maximum effective detection range, laser beam divergence, atmospheric scattering, and the low reflectivity of target surfaces. The histogram provides an intuitive understanding of the LiDAR’s detection capabilities and the characteristics of its scanning area, which is crucial for the subsequent processing and analysis of point cloud data in outdoor real-world scenes.
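The distance histogram of Figure 3 can be reproduced with a few lines of NumPy; the 10 m bin width comes from the text, while the use of Matplotlib (not listed among the study's tools) is an assumption made purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_distance_histogram(points, lidar_center):
    """Histogram of point distances to the LiDAR center in 10 m bins (cf. Figure 3)."""
    d = np.linalg.norm(points - lidar_center, axis=1)
    bins = np.arange(0.0, d.max() + 10.0, 10.0)   # 10 m intervals, as in the text
    plt.hist(d, bins=bins, edgecolor="black")
    plt.xlabel("Distance to LiDAR center (m)")
    plt.ylabel("Number of points")
    plt.show()
```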
(2)
Denoising of point cloud data under equidistant segmentation in spherical space
Point cloud data collected in outdoor real-world scenes often exhibit sparsity and non-uniform distributions. When conventional denoising techniques are applied to such data, multiple parameter adjustments are typically required, and the results often fail to meet ideal denoising criteria. To address this issue, this study introduces a hierarchical denoising method based on equidistant segmentation within spherical space, which has been tested using point cloud data gathered from outdoor scenes. According to Formula (2), this study sets three different quantities of point cloud subsets: 3, 6, and 9. The denoising performance for each is illustrated in Figure 5, Figure 6 and Figure 7, respectively.
Figure 5, Figure 6 and Figure 7 illustrate the hierarchical denoising of point clouds under the equidistant segmentation in spherical space. The average performance of points removed from each point cloud subset under the equidistant segmentation is shown in Figure 8a–c, respectively.
An analysis of Figure 8a–c reveals that the denoising ratio varies significantly across different ranges as the number of point cloud subsets increases. Specifically, within approximately 13 m of the LiDAR center, the denoising ratio shows a gradual decline. Conversely, at distances greater than about 77 m from the LiDAR center, especially when the number of point cloud subsets is 9, the denoising ratio significantly increases. This phenomenon is attributed to the sparsity and extremely low density of point clouds in these ranges, as confirmed by the statistical distribution of point cloud quantities shown in Figure 3 and the trend of point cloud density with distance illustrated in Figure 4. Further comparative analysis from Figure 5b to Figure 7b indicates that with an increase in the number of subsets, the noise removal becomes more precise, particularly in areas far from the radar center. This improvement is due to the denoising algorithm which operates based on the proximity and number of points, thereby enhancing the overall denoising effect. Experimental results suggest that appropriately increasing the number of subsets can effectively optimize the performance of the denoising algorithm.
(3)
Denoising of point cloud data under proportional segmentation in spherical space
Raw data acquired from LiDAR typically exhibit spatial variations in point cloud density: regions near the center of the scanner have higher densities, resulting in more densely packed point clouds, whereas areas further from the center are less dense and thus sparser. Given that the proposed denoising method performs better on uniformly dense point clouds, a hierarchical denoising algorithm based on proportional segmentation in spherical space has been further developed. According to Formula (2), similar to the equidistant segmentation, the number of point cloud subsets was set at 3, 6, and 9 for the denoising experiments, with the results displayed in Figure 9, Figure 10 and Figure 11, respectively.
Figure 9, Figure 10 and Figure 11 illustrate the hierarchical denoising of point clouds under the proportional segmentation in spherical space. The average performance of points removed from each point cloud subset under the proportional segmentation is shown in Figure 12a–c, respectively.
An analysis of Figure 12a–c reveals that under proportional segmentation conditions, ranges closer to the center of the LiDAR generate a larger number of point cloud subsets. The substantial number of points in the range, coupled with an increased number of subsets, facilitates more precise control over noise removal. Particularly, when the subset count reaches 9, 6 subsets are formed within a 40 m radius of the LiDAR center. Compared to when there are only 3 or 6 subsets, the noise removal ratio in this region significantly improves, achieving more meticulous noise reduction. Beyond a 40 m radius from the LiDAR center, the noise removal ratio for 6 or 9 subsets significantly exceeds that for only 3 subsets. This phenomenon is mainly due to the sparse distribution and lower density of point clouds in that range. Further, analysis from Figure 9b to Figure 11b indicates that with 9 subsets, regardless of the proximity to the LiDAR center, the overall denoising effect of the point clouds is optimal.

3.2.2. Quantitative Evaluation of Point Cloud Distribution under Equidistant and Proportional Segmentations

(1)
The radius of point cloud subsets under equidistant and proportional segmentations
To enhance the clarity and intuition of the variations in the radii of point cloud subsets under equidistant and proportional segmentations, this study illustrates the changes in radii for point clouds consisting of 3, 6, and 9 subsets under these segmentation conditions. The corresponding graphical results are displayed in Figure 13a–c, respectively.
As illustrated in Figure 13a, within a 40 m range of the LiDAR, both segmentation methods produce an equal number of subsets. Figure 13b,c shows that within the same distance, equidistant segmentation results in fewer subsets, while proportional segmentation yields more subsets, offering a finer segmentation. Approximately 94% of the point clouds are located within this range, yet with equidistant segmentation at 9 subsets, only 3 are generated. In areas with a vast number of points, fewer partitions mean that denoising parameters may lack specificity, adversely affecting the denoising outcome. In regions further from the center of the LiDAR (beyond 40 m), equidistant segmentation generates more subsets. Since the denoising algorithm analyzes based on the proximity and number of points, the sparsity of points in these regions might lead to inadvertent data deletion.
(2)
The overall standard deviation of point clouds after denoising
Standard deviation is commonly used to describe the variability of a dataset. In this study, the total standard deviation is utilized to evaluate the effectiveness of hierarchical segmentation of point clouds in spherical spaces through equidistant and proportional segmentations. Specifically, a lower total standard deviation indicates higher uniformity in the distribution of point clouds across different levels; conversely, a higher total standard deviation suggests significant disparities in uniformity among the levels. The uniformity of this distribution is quantified by the segmentation results of point clouds in spherical spaces, with the total standard deviation calculations presented in Table 1.
The analysis of Table 1 reveals that under the condition of an equal number of point cloud subsets, the total standard deviation for the proportional segmentation method is significantly lower than that for the equidistant segmentation method. This finding indicates that proportional segmentation in spherical spaces results in a more uniform distribution of point clouds across different hierarchies. Consequently, proportional segmentation not only optimizes the hierarchical division of point clouds in spherical spaces but also enhances the effectiveness of hierarchical denoising. These results highlight the importance of selecting an appropriate segmentation strategy when processing point cloud data.

3.2.3. Point Cloud Denoising Effectiveness Assessment

(1)
The denoising performance of different denoising methods in outdoor real-world scenes
In outdoor real-world scenes, point cloud denoising operations should be conducted while preserving scene details and features of distant objects. Typically, the point cloud data acquired in outdoor real-world scenes exhibit significant variations in density. The point spacing between data points near the center of the LiDAR and those far from the center can differ by several tens to hundreds of times. Conventional point cloud denoising methods use fixed parameters for denoising, which cannot dynamically adjust based on changes in point cloud density. As a result, the denoising performance of these methods is highly sensitive to point cloud density and point spacing, making them suboptimal for denoising in outdoor real-world scenes. This study proposes a method that segments the point cloud into spatially hierarchical subsets, grouping point clouds with similar densities into the same subset. Each point cloud subset is then assigned denoising parameters appropriate for its density, thereby achieving more effective noise removal without losing scene details. To further evaluate the performance of the proposed method in removing noise from point clouds in outdoor real-world scenes, comparisons were made with ROR [22], SOR [22], DROR [21], and LIOR [23]. The results are presented in Figure 14.
In Figure 14a–g, (1) represents the car’s outer contour and the ground. Upon observation, it can be noted that there are some noise points at the edges of the car’s contour and on the ground. After applying the denoising method proposed in this study, the noise points around the car’s contour and on the ground were significantly reduced, as shown in Figure 14f,g-(1). The DROR and LIOR methods performed poorly, leaving a small amount of noise points after implementation. The ROR and SOR methods showed the worst performance, with noise points remaining quite evident after application. In Figure 14a–g, (2) represents the side façade of a building. Similarly, careful observation reveals that after the proposed adaptive parameter denoising under proportional measurable segmentation, the distant building retains a relatively complete geometric feature structure while being denoised, as shown in Figure 14g-(2). Other methods performed poorly, especially when an equidistant measurable segmentation method was applied to the point cloud set. The geometric structural features of distant objects were not well preserved during denoising. In Figure 14a–g, (3) represents the ground cover of junipers, with many noise points present within and around them. After applying the denoising method proposed in this study, the noise points in the point cloud were significantly reduced, especially when the proportional measurable segmentation method was adopted, as shown in Figure 14g-(3). The ROR, SOR, DROR, and LIOR methods showed average performance, with some noise points remaining unremoved in the point cloud after implementation. From the above analysis, it can be concluded that when the point cloud set undergoes adaptive parameter denoising based on proportional measurable segmentation, noise can be maximally removed while preserving the geometric structural features of objects, regardless of their proximity to or distance from the LiDAR center.
(2)
Comparing the efficiency of point cloud denoising methods
Given the typically large volume of point cloud data collected by LiDAR in outdoor environments, time efficiency becomes critically important during the denoising process. Therefore, this study has conducted a statistical analysis of the time required by various denoising methods under the same computational conditions (as described in Section 3.1.2), with specific results presented in Table 2.
An analysis of Table 2 reveals that among the methods for denoising point cloud data from outdoor scenes, the SOR method requires the least time, while the ROR method takes the most. The DROR method consumes slightly less time than ROR but is still more time-consuming than LIOR. The denoising time of the proposed method is greater than that of SOR but shorter than those of the other three methods. Compared to the DROR method, this approach reduces the processing time by over 60%, and by about 15% compared to the LIOR method. This layered adaptive parameter denoising method, based on measurable segmentation, processes the point cloud by dividing it into smaller sections, reducing the data volume per section and thereby diminishing the computational load per process. This distributed computational load implies that each section can be processed faster, especially when utilizing multicore or parallel processing. Moreover, by delineating the extent of each layer, the method allows the algorithm to dynamically adjust the denoising parameters according to the characteristics of each layer's data, thereby more finely addressing the varying characteristics of different regions, such as point density and noise levels, reducing redundant computations of neighbor points, and ultimately decreasing the overall processing time.
(3)
Quantitative evaluation of different denoising methods
In the evaluation of noise removal efficacy within real-world outdoor scenes, acquiring high-quality real point cloud datasets often presents a challenge. To ensure a fair comparison between the denoising method proposed in this study and other approaches, such as LIOR, DROR, SOR, and ROR, this study has adopted the normalized median variance as the metric to quantify the performance of different denoising techniques. This approach is based on each point in the denoised point cloud, where it seeks neighboring points in the raw dataset and calculates the variance of these points. This variance measures the consistency and stability of point distributions within local areas. The median, rather than the mean, of these variances is used as the metric to effectively reduce the impact of outliers on the evaluation results. Subsequently, this median variance is normalized to ensure consistency and fairness in the evaluation results across different scales of data. Figure 15 illustrates a comparison of the denoising performance achieved by five different filters.
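A sketch of this evaluation metric is given below under stated assumptions: the neighborhood size k, the use of neighbor distances (rather than neighbor coordinates) as the quantity whose variance is taken, and min-max normalization across the compared filters are all choices not fixed by the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def median_local_variance(raw_points, denoised_points, k=10):
    """For each denoised point, take the variance of the distances to its k nearest
    neighbors in the raw cloud, then return the median over all points.
    k = 10 is an assumed neighborhood size."""
    dists, _ = cKDTree(raw_points).query(denoised_points, k=k)
    return float(np.median(dists.var(axis=1)))

def normalize_across_filters(scores):
    """Min-max normalize the median variances of the compared filters (assumed scheme)."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())
```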
Figure 15 presents a quantitative comparison of the performance of five denoising filters, assessed using the normalized median variance difference. A smaller normalized value indicates that the denoised point cloud retains a higher degree of local spatial consistency with the raw point cloud. The results suggest that the denoising method proposed in this study exhibits the smallest normalized difference, indicating superior denoising performance in real-world outdoor scenes. In comparison, the LIOR, DROR, and SOR filters perform moderately well, while the ROR filter shows the least desirable denoising outcomes.

4. Discussion

4.1. Efficacy of Point Cloud Hierarchical Denoising with Spherical Space Measurable Segmentation

The hierarchical denoising method for point clouds, through measurable segmentation of spherical space, has demonstrated its efficacy in point cloud data processing. This approach employs a step-by-step denoising strategy, dynamically adjusting the denoising parameters to align with the distance between data points and the LiDAR center, making it particularly suitable for noisy, sparse, and non-uniform point cloud data while effectively preserving the detailed features of the point clouds. Moreover, it offers a range of adjustable parameters, such as the horizontal and vertical resolutions of LiDAR and the number of concentric spheres, allowing users to customize the method based on the specific characteristics of the data, thereby enhancing its applicability. Most critically, the method supports the visualization of concentric spheres and point clouds both before and after denoising, providing researchers and users with a powerful tool to visually assess the denoising outcomes.

4.2. Limitations of Point Cloud Hierarchical Denoising with Spherical Space Measurable Segmentation

While the denoising method proposed in this study demonstrates certain advantages, it also possesses inherent limitations. Firstly, when processing large-scale point cloud datasets, the sequential handling of multiple concentric spheres increases computation time, which may render this method unsuitable for applications requiring real-time processing. Secondly, the performance of this method depends on the assumption that the point cloud data are concentrically distributed. Deviations from this assumption could adversely affect performance. Therefore, in practical applications, a precise understanding of the distribution characteristics of point cloud data in outdoor scenes is especially critical.

4.3. The Practicality Extension and Computational Efficiency Optimization of the Hierarchical Denoising Method

The point cloud denoising method proposed in this study demonstrates certain advantages over traditional techniques [17], particularly in preserving fine details, especially when dealing with unevenly distributed noise, offering new perspectives and solutions for point cloud processing. To enhance the efficiency and applicability of this method, it is essential to explore and implement computational optimization strategies, especially to reduce the computational time required for processing large point cloud datasets [24]. Additionally, the introduction of an automatic parameter adjustment mechanism allows for the automation of the denoising parameter selection, optimized according to the characteristics of the point cloud distribution, further enhancing the practicality of the method.

5. Conclusions

This study presents a method for the hierarchical denoising of a point cloud within a finite measurable segmentation framework in spherical space, and its efficacy is empirically demonstrated. By gradually increasing the radii of concentric spheres and finely tuning the parameters, the method successfully eliminated noise and outliers in the point cloud. The denoised data closely resembled the geometric shapes of the original scene, significantly enhancing the quality and usability of the data. However, this study also highlights the importance of parameter selection, urging researchers to choose appropriate parameters based on their specific datasets and application scenes to achieve optimal denoising results. Future work may include more sophisticated methods for adaptive parameterization and performance optimization in various applications. Overall, the hierarchical denoising method under finite measurable segmentation in spherical space provides an effective solution for point cloud denoising, with potential for further research and innovative improvements for practical applications.

Author Contributions

L.W. and Y.C. contributed to the study’s conception and design. L.W. designed the experiments, analyzed the data, and wrote the first draft. Y.C. proposed key ideas and gave suggestions for the manuscript. H.X. participated in data curation and proposed suggestions for modifications to the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Fundamental Research Funds for the Central Universities under Grant 2010YD06.

Data Availability Statement

The datasets generated and analyzed during the current study are not publicly available due to considerations and constraints specific to the nature of the research. However, the datasets are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to thank Wenbin Sun of the College of Geoscience and Surveying Engineering, China University of Mining and Technology (Beijing), for providing comments and writing suggestions. We would also like to thank the anonymous reviewers and the editors for their comments.

Conflicts of Interest

The authors declare no competing interests.

References

  1. Yang, B.; Haala, N.; Dong, Z. Progress and Perspectives of Point Cloud Intelligence. Geo-Spat. Inf. Sci. 2023, 26, 189–205. [Google Scholar] [CrossRef]
  2. Wirth, F.; Quchl, J.; Ota, J.; Stiller, C. PointAtMe: Efficient 3D Point Cloud Labeling in Virtual Reality. In Proceedings of the 2019 30th IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1693–1698. [Google Scholar]
  3. Valenzuela-Urrutia, D.; Muñoz-Riffo, R.; Ruiz-del-Solar, J. Virtual Reality-Based Time-Delayed Haptic Teleoperation Using Point Cloud Data. J. Intell. Robot. Syst. 2019, 96, 387–400. [Google Scholar] [CrossRef]
  4. Tredinnick, R.; Broecker, M.; Ponto, K. Progressive Feedback Point Cloud Rendering for Virtual Reality Display. In Proceedings of the 2016 IEEE Virtual Reality Conference (VR), Greenville, SC, USA, 19–23 March 2016; pp. 301–302. [Google Scholar]
  5. Alexiou, E.; Upenik, E.; Ebrahimi, T. Towards Subjective Quality Assessment of Point Cloud Imaging in Augmented Reality. In Proceedings of the 2017 19th IEEE International Workshop on Multimedia Signal Processing (MMSP), Luton, UK, 16–18 October 2017. [Google Scholar]
  6. Ma, K.; Lu, F.; Chen, X.W. Robust Planar Surface Extraction from Noisy and Semi-dense 3D Point Cloud for Augmented Reality. In Proceedings of the 2016 International Conference on Virtual Reality and Visualization (ICVRV), Hangzhou, China, 24–26 September 2016; pp. 453–458. [Google Scholar]
  7. Lin, X.H.; Wang, F.H.; Yang, B.S.; Zhang, W.W. Autonomous Vehicle Localization with Prior Visual Point Cloud Map Constraints in GNSS-Challenged Environments. Remote Sens. 2021, 13, 506. [Google Scholar] [CrossRef]
  8. Chen, S.H.; Liu, B.A.; Feng, C.; Vallespi-Gonzalez, C.; Wellington, C. 3D Point Cloud Processing and Learning for Autonomous Driving: Impacting Map Creation, Localization, and Perception. IEEE Signal Process. Mag. 2021, 38, 68–86. [Google Scholar] [CrossRef]
  9. Xiao, Z.P.; Dai, B.; Li, H.D.; Wu, T.; Xu, X.; Zeng, Y.J.; Chen, T.T. Gaussian Process Regression-Based Robust Free Space Detection for Autonomous Vehicle by 3-D Point Cloud and 2-D Appearance Information Fusion. Int. J. Adv. Robot. Syst. 2017, 14, 172988141771705. [Google Scholar] [CrossRef]
  10. Abellan, A.; Derron, M.-H.; Jaboyedoff, M. “Use of 3D Point Clouds in Geohazards” Special Issue: Current Challenges and Future Trends. Remote Sens. 2016, 8, 130. [Google Scholar] [CrossRef]
  11. Cai, X.H.; Lü, Q.; Zheng, J.; Liao, K.W.; Liu, J. An Efficient Adaptive Approach to Automatically Identify Rock Discontinuity Parameters Using 3D Point Cloud Model from Outcrops. Geol. J. 2023, 58, 2195–2210. [Google Scholar] [CrossRef]
  12. Jung, M.Y.; Jung, J.H. A Scalable Method to Improve Large-Scale Lidar Topographic Differencing Results. Remote Sens. 2023, 15, 4289. [Google Scholar] [CrossRef]
  13. Mammoliti, E.; Di Stefano, F.; Fronzi, D.; Mancini, A.; Malinverni, E.S.; Tazioli, A. A Machine Learning Approach to Extract Rock Mass Discontinuity Orientation and Spacing, from Laser Scanner Point Clouds. Remote Sens. 2022, 14, 2365. [Google Scholar] [CrossRef]
  14. Javaheri, A.; Brites, C.; Pereira, F.; Ascenso, J. Subjective and Objective Quality Evaluation of 3D Point Cloud Denoising Algorithms. In Proceedings of the 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Hong Kong, China, 10–14 July 2017; pp. 1–6. [Google Scholar]
  15. Mian, A.; Bennamoun, M.; Owens, R. On the Repeatability and Quality of Keypoints for Local Feature-Based 3D Object Retrieval from Cluttered Scenes. Int. J. Comput. Vis. 2010, 89, 348–361. [Google Scholar] [CrossRef]
  16. Digne, J. Similarity Based Filtering of Point Clouds. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, 16–21 June 2012; pp. 73–79. [Google Scholar]
  17. Cheng, D.; Zhao, D.; Zhang, J.; Wei, C.; Tian, D. PCA-Based Denoising Algorithm for Outdoor Lidar Point Cloud Data. Sensors 2021, 21, 3703. [Google Scholar] [CrossRef] [PubMed]
  18. Sun, G.; Chu, C.; Mei, J.; Li, W.; Su, Z. Structure-Aware Denoising for Real-world Noisy Point Clouds with Complex Structures. Comput.-Aided Des. 2022, 149, 103275. [Google Scholar] [CrossRef]
  19. Zheng, Z.; Zha, B.; Zhou, Y.; Huang, J.; Xuchen, Y.; Zhang, H. Single-Stage Adaptive Multi-Scale Point Cloud Noise Filtering Algorithm Based on Feature Information. Remote Sens. 2022, 14, 367. [Google Scholar] [CrossRef]
  20. Shi, C.; Wang, C.; Liu, X.; Sun, S.; Xiao, B.; Li, X.; Li, G. Three-Dimensional Point Cloud Denoising via a Gravitational Feature Function. Appl. Opt. 2022, 61, 1331–1343. [Google Scholar] [CrossRef]
  21. Charron, N.; Phillips, S.; Waslander, S.L. De-noising of Lidar Point Clouds Corrupted by Snowfall. In Proceedings of the 2018 15th Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 8–10 May 2018; pp. 254–261. [Google Scholar]
  22. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar]
  23. Park, J.-I.; Park, J.; Kim, K.-S. Fast and Accurate Desnowing Algorithm for LiDAR Point Clouds. IEEE Access 2020, 8, 160202–160212. [Google Scholar] [CrossRef]
  24. Elseberg, J.; Borrmann, D.; Nüchter, A. Efficient Processing of Large 3D Point Clouds. In Proceedings of the 2011 XXIII International Symposium on Information, Communication and Automation Technologies, Sarajevo, Bosnia and Herzegovina, 27–29 October 2011; pp. 1–7. [Google Scholar]
Figure 1. Flowchart of hierarchical point cloud denoising under the equidistant segmentation of spherical space.
Figure 2. Raw point clouds in outdoor real-world scenes. (a) Plan view. (b) Side view.
Figure 3. Spatial distribution of data points in different ranges.
Figure 4. The variation of point cloud density with distance from the center of the LiDAR. Here, $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_3$ represent the sizes of the neighborhood radii.
Figure 5. Hierarchical denoising of point cloud under the equidistant segmentation of spherical space ($N = 3$). (a) Equidistant segmentation of the raw point cloud in spherical space. (b) Point cloud data after denoising.
Figure 6. Hierarchical denoising of point cloud under the equidistant segmentation of spherical space ($N = 6$). (a) Equidistant segmentation of the raw point cloud in spherical space. (b) Point cloud data after denoising.
Figure 7. Hierarchical denoising of point cloud under the equidistant segmentation of spherical space ($N = 9$). (a) Equidistant segmentation of the raw point cloud in spherical space. (b) Point cloud data after denoising.
Figure 8. The average performance of points removed from each point cloud subset under equidistant segmentation. The red number represents the point cloud subset number (from inside out). (a) $N = 3$. (b) $N = 6$. (c) $N = 9$.
Figure 9. Hierarchical denoising of point cloud under the proportional segmentation of spherical space ($N = 3$). (a) Proportional segmentation of the raw point cloud in spherical space. (b) Point cloud data after denoising.
Figure 10. Hierarchical denoising of point cloud under the proportional segmentation of spherical space ($N = 6$). (a) Proportional segmentation of the raw point cloud in spherical space. (b) Point cloud data after denoising.
Figure 11. Hierarchical denoising of point cloud under the proportional segmentation of spherical space ($N = 9$). (a) Proportional segmentation of the raw point cloud in spherical space. (b) Point cloud data after denoising.
Figure 12. The average performance of points removed from each point cloud subset under proportional segmentation. The red number represents the point cloud subset number (from inside out). (a) $N = 3$. (b) $N = 6$. (c) $N = 9$.
Figure 13. The radius curves of point cloud subsets under equidistant and proportional segmentations. (a) $N = 3$. (b) $N = 6$. (c) $N = 9$.
Figure 14. The denoising performance of different point cloud denoising methods in outdoor real scenes. Among them are (a) raw point cloud; (b) point cloud after denoising by ROR method; (c) point cloud after denoising by SOR method; (d) point cloud after denoising by DROR method; (e) point cloud after denoising by LIOR method; (f) point cloud after removing noise under equidistant measurable segmentation; (g) point cloud after removing noise under proportional measurable segmentation.
Figure 15. Quantitative assessment of denoising performance among different filters.
Table 1. Total standard deviation under equidistant and proportional segmentations of spherical space.
Number of Subsets    Segmentation Strategy    TSD
3                    ES                       1.266
3                    PS                       0.947
6                    ES                       1.177
6                    PS                       0.683
9                    ES                       0.983
9                    PS                       0.416
Table 2. Time statistics for different denoising methods.
Method                  ROR      SOR      DROR     LIOR     This Approach
Denoising time (s)      8.154    0.503    7.731    3.287    2.793
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
