Article

Algorithm for Point Cloud Dust Filtering of LiDAR for Autonomous Vehicles in Mining Area

1 Department of Vehicle Engineering, School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China
2 Beijing ROCK-AI Autonomous Driving Technology Co., Ltd., Beijing 100027, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(7), 2827; https://doi.org/10.3390/su16072827
Submission received: 29 December 2023 / Revised: 13 March 2024 / Accepted: 21 March 2024 / Published: 28 March 2024
(This article belongs to the Special Issue Advances in Intelligent and Sustainable Mining)

Abstract

With the ongoing transformation toward the “smart mine” in the mineral industry, the use of sensors in autonomous trucks has become very common. However, truck driving causes the point cloud collected by Light Detection and Ranging (LiDAR) to contain dust points, leading to a significant decline in detection performance and making failures at the perceptual level likely. To solve this problem, this study proposes a LiDAR point cloud denoising method based on the quantitative analysis of laser reflection intensity and spatial structure. The method uses laser reflectivity as the benchmark template, constructs an initial confidence-level template and initially screens out the sparse dust point cloud. The results are then analyzed through the Euclidean distance of adjacent points, and the confidence level in the corresponding template is reduced for rescreening. The experimental results show that our method can significantly filter dust point cloud particles while retaining the rich environmental information of the data. The computational load caused by filtering is far lower than that of other methods, and the overall operating efficiency of the system shows no significant delay.

1. Introduction

Open-pit mines are entering a stage of high-level, deep integration of informatization and industrialization. Intelligentization, automation and greening are the technological innovation directions of future mine development. With the help of new technology, developing intelligent and even unmanned mining technology, innovating mining modes and reducing the number of operators is the trend of the times.
Transportation in an open-pit mining area differs from general road transportation. The mining area environment is harsh, the roads are narrow and the road surface is potholed; the vehicle body is wide, the blind area is large and the requirements for driver qualification are high; some mining areas lie in ultra-deep or high-altitude regions, which raises the safety risk further. Moreover, mining enterprises face problems such as labor shortages and high labor costs. The application of autonomous driving can effectively alleviate the production capacity delays caused by insufficient manpower supply and flexibly adjust production capacity according to market demand, which helps ensure the overall sustainable development of labor and industry [1,2,3,4,5]. In addition, the fixed transportation routes and closed roads of mining areas facilitate the application of automatic driving in this field.
Open-pit mine automatic driving technology is composed of driving environment perception, path planning and vehicle control technology. The path planning algorithm calculates the driving track for the transport vehicle, and the on-board sensors detect and identify obstacles on the driving track to control the vehicle and avoid obstacles. Automatic driving technology largely depends on the accuracy of environmental perception. At present, the sensing devices in automatic driving technology mainly include Light Detection and Ranging (LiDAR), cameras, etc. LiDAR has the advantages of a long detection distance and high detection accuracy. However, there are some problems in applying this sensor in the complex working environment of the mining area. The surface of the open-pit mining area is composed of loose sediment, and a large amount of fugitive dust is generated when wheels drive over it. As shown in Figure 1, vehicle driving is impacted by smoke and dust raised by surrounding vehicles, so the collected point cloud data contain considerable noise, which seriously affects the results of environmental perception. Therefore, filtering smoke and dust noise from LiDAR point cloud data in the mining area plays an important role in the subsequent perception effect.
The main challenge in the field of point cloud denoising is to remove noise while retaining clear features, as shown in Figure 2. Local Optimal Projection (LOP)-based methods, such as weighted LOP [1] and continuous LOP [2], use local operators without normal values to remove noise, but they are often criticized for excessive smoothing [3] and are not designed to retain sharp features. Methods based on moving least squares (MLS), such as Algebraic Point Set Surface (APSS) [4] and robust implicit moving least squares (RIMLS) [5], are designed to maintain a clear structure. However, due to their strong dependence on initial normal estimation, they are generally more sensitive to outliers and noise than LOP-based methods. Methods based on spatial-domain filtering are classical algorithms for the denoising task, such as the pass-through filter (PTF), the statistical method and Gaussian filtering. These algorithms use point cloud spatial coordinate information, including neighborhood points and the distance distribution angle, to design corresponding filtering rules. However, because of the single feature calculation method and the inflexible threshold setting, their sensitivity to outliers is not high. Several other point cloud denoising algorithms have been proposed that combine sparse representations and the tensor-voting method, but they rely on regularization, and the penalty on noise is too heavy, which leads to excessive smoothing.
In recent years, modern methods such as deep learning and machine learning have been introduced into the field of noise removal [6,7]. This expands the research scope of many fields related to point cloud denoising, including data collection, data preprocessing and algorithm research. Producing datasets or acquiring public datasets is the biggest difficulty in using deep learning models, but this problem is gradually being solved. Although these models achieve high performance and accuracy, this comes at the cost of poor interpretability [8]. The computing power required to process the large real-time data streams provided by computer vision and other sensors often exceeds the processing capacity of existing data center infrastructure, so the inherent benefits are delayed or unavailable. To handle these large data streams in a timely manner, a robust ETL pipeline is necessary. These shortcomings are also common to deep learning models, which additionally require higher data resolution and more computing resources for training, deployment and monitoring [9,10,11,12].
In this study, we propose a point cloud dust filtering method based on the quantitative analysis of laser reflection intensity and spatial structure to solve the above problems. This method focuses on removing noise and outliers caused by dust while retaining sharp features in the point cloud. Our method includes four processing steps: (1) using the principle of random sampling to process the original point cloud and remove the background; (2) taking laser reflectivity as the benchmark template, an initial confidence-level template is constructed to screen out sparse dust point clouds; (3) the non-ground points obtained from the initial screening are analyzed by the Euclidean distance of adjacent points, and the point cloud clusters with large spatial structure information variance are judged as suspected non-rigid bodies, reducing the confidence value in the corresponding template; (4) the dense dust point cloud is filtered according to the modified confidence-level template.
Our work is motivated by three observations: (1) Most environmental interference, such as dust, rain and fog, is confirmed to be filtered out of the point cloud data. (2) Some scattered points on real rigid targets are deleted by mistake, but these are usually limited to corner cases, such as window glass and occluded surfaces, and do not compromise the integrity of the target data. All test vehicles and personnel are detected, and there is no obvious deformation in the three-dimensional structure of the data. (3) The computational load caused by filtering is only 9% higher than that without filtering, and the overall operating efficiency of the system shows no significant delay.
The point cloud dust filtering method we proposed retains the main components of the main data and removes the noise. Experiments on datasets collected in mining areas show that our algorithm is superior to the most advanced methods in filtering effect and operating efficiency. The three principal contributions of this study are as follows:
  • We propose a method for laser reflectivity and 3D feature analysis of point clouds, which can accurately distinguish rigid point clouds from non-rigid point clouds.
  • We propose an algorithm to quantify the transparency and density of target point clouds and comprehensively weigh them. Using the optical and physical characteristics of non-rigid objects (such as rain, fog and dust), which are obviously different from rigid objects (such as mountains, human bodies and car bodies), we can identify noise targets.
  • We develop a point cloud dust filtering algorithm, which is robust enough for large-scale dust noise and outliers, and is superior to the most advanced point cloud denoising methods.

2. Related Work

2.1. Traditional Filtering Algorithm Based on Optimization Idea

Traditional filtering algorithms generally need to first construct an objective function term and related regularization constraint terms for the original point cloud and complete the filtering process through multiple iterations of a convex optimization process. Spatial filtering [13] is mainly based on the spatial coordinate information of the point cloud, generally designing filtering rules from the perspective of neighborhood points and distance distribution. The pass-through filter filters point cloud spatial information at the dimension level; the statistical method assumes that the local spatial neighborhoods of outliers have a sparse point distribution and uses a distance measure to detect them; Gaussian filtering specifies an active neighborhood, calculates the Euclidean distance within the neighborhood for each point and implements noise reduction by weighted averaging.
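The statistical method just described can be illustrated with a minimal Python sketch; the neighbor count k and the std_ratio cut-off are assumed example values, not parameters taken from any of the cited works:

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """Keep points whose mean distance to their k nearest neighbors
    stays within std_ratio standard deviations of the global mean."""
    # Pairwise distances (fine for small clouds; use a KD-tree at scale).
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    # Mean distance to the k nearest neighbors (index 0 is the point itself).
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]
```

A sparse outlier far from a dense cluster has a much larger mean neighbor distance than the cluster points, so it falls above the threshold and is dropped.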
The moving least squares method [14] is mainly used to accurately approximate the curve or surface represented by an unordered input point set by constructing a fitting function defined by a coefficient vector and basis functions. Levin [15] introduced the concept of MLS projection early on, and Alexa et al. [16] introduced this type of method into the field of point clouds. Shen et al. [17] were among the first to define the implicit MLS (IMLS) surface and applied it to unordered polygon sets. Kolluri [18] used this type of IMLS method in the field of point set reconstruction.
Lipman et al. [19] earlier proposed the Locally Optimal Projection operator for surface fitting, which does not depend on any local parameters and acts directly on point sets. Huang et al. [20] constructed the WLOP (weighted LOP) operator to filter the original input point cloud. Huang et al. [21] also proposed edge aware resampling to avoid directly calculating normal values at discontinuous boundaries.
Regarding the methods of sparsization and low rank classes, Avron et al. [22] proposed a sparse reconstruction framework based on L1 regularization for point cloud denoising. Xu et al. [23] introduced L0 regularization into image filtering. He and Schaefer [24] transferred L0 regularization to grid processing, and they solved the problem that L0 regularization is difficult to optimize by introducing auxiliary variables and based on the idea of alternate optimization.

2.2. Point Cloud Denoising Algorithm Based on Deep Learning

The method based on deep learning needs to build a neural network framework for end-to-end learning and select appropriate training and test data sets. Roveri et al. [6] introduced convolutional neural networks and first proposed an end-to-end point cloud denoising framework, PointProNets. Zhang et al. [25] proposed a more streamlined Pointfilter framework. In the field of point cloud processing, Wang et al. [26] earlier proposed a dynamic graph convolution model suitable for high-order tasks, such as point cloud classification and segmentation. The above methods are supervised learning, which requires a large-scale training dataset containing pairs of noise point clouds and clean point clouds. Casajus et al. [27] introduced the idea of unsupervised learning for point cloud denoising. Based on this idea, Regava et al. [28] proposed a point denoise model for outlier detection.

3. Materials and Methods

3.1. Overview

The following paragraphs describe in detail the complete method of filtering dust from the point clouds received by the 360°-coverage LiDAR sensor. First, we improve the random sample consensus method to quickly extract the ground surface points, and then use laser reflectivity as the benchmark template to construct the initial confidence template. Finally, we quantify the transparency and density of the target point cloud and, combined with the Euclidean distance of adjacent points, reduce the confidence values in the template. Each subsection is conceptually divided into three parts: a brief rationale for the algorithm selection along with definitions of new terms, an overview of the algorithm according to the pseudocode diagrams and a discussion of implementation details.

3.2. Modified Random Sample Consensus

Random sample consensus is an iterative method used to estimate the parameters of a mathematical model from a set of data containing outliers [29]. It assumes that all data are composed of inliers and outliers: inliers can be explained by a model with a specific set of parameter values, while outliers do not fit the model in any case. In this study, we randomly select a sample subset, use a minimum-variance estimation algorithm to compute the model parameters for this subset and calculate the deviation of all samples from the model. A threshold is set in advance and compared with the deviation: when the deviation is less than the threshold, the sample point is an inlier; otherwise, it is an outlier. The current number of inliers is recorded, and the process is repeated [30]. The best model parameters so far, i.e., those with the largest number of inliers, are recorded at each repetition, and the corresponding inlier count is best_inliers. At the end of each iteration, an evaluation factor is calculated from the expected error rate, best_inliers, the total number of samples and the current number of iterations to determine whether to stop. The number of iterations can be obtained from Formula (1), where k is the number of iterations of the algorithm, n is the minimum number of data points needed to fit the model, p is the probability that the points randomly selected in an iteration are all inliers (and thus the probability that the algorithm produces a useful result), w is the probability of selecting an inlier in a single draw, w^n is the probability that all n points are inliers and 1 − w^n is the probability that at least one of the n points is an outlier, which means a bad model would be estimated from that sample.
k = log(1 − p) / log(1 − w^n),
In order to obtain more reliable parameters, the standard deviation or a multiple of it can be added to k. The standard deviation of k is defined by Formula (2):
SD(k) = √(1 − w^n) / w^n,
In theory, this method can eliminate the influence of outliers and obtain a globally optimal parameter estimate. However, the random sample consensus algorithm must distinguish between outliers and inliers in each iteration, so a threshold value needs to be set in advance. This model targets the relatively special mining area scenario, where the eigenvalues fluctuate greatly. Therefore, this study sets a dynamic threshold according to the dynamic change in the samples. Through this modified random sample consensus method, this study removes the ground points and background points from the mining area scene.
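Formula (1) and the dynamic-threshold idea can be sketched in Python as follows; tying the inlier threshold to the vertical spread of the frame is our own illustrative stand-in for the paper's dynamic threshold rule, and rel_thresh is an assumed parameter:

```python
import numpy as np

def ransac_iterations(p=0.99, w=0.6, n=3):
    """Formula (1): iterations k such that, with probability p, at least
    one random sample of n points is all inliers, given inlier ratio w."""
    return int(np.ceil(np.log(1.0 - p) / np.log(1.0 - w ** n)))

def fit_ground_plane(points, p=0.99, w=0.6, rel_thresh=0.05):
    """RANSAC plane fit returning an inlier mask for the ground.
    The inlier threshold is derived dynamically from the vertical
    spread of the frame (an illustrative choice, not the paper's rule)."""
    rng = np.random.default_rng(0)
    threshold = rel_thresh * (points[:, 2].max() - points[:, 2].min())
    best = np.zeros(len(points), dtype=bool)
    for _ in range(ransac_iterations(p, w, n=3)):
        a, b, c = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # skip degenerate (collinear) samples
            continue
        dist = np.abs((points - a) @ (normal / norm))
        inliers = dist <= threshold
        if inliers.sum() > best.sum():       # keep the model with best_inliers
            best = inliers
    return best
```

With p = 0.99, w = 0.6 and n = 3, Formula (1) gives k = 19 iterations, which is enough to separate a flat ground patch from raised obstacle points.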

3.3. Confidence Interval Template

The confidence interval is the reference value used to measure whether a LiDAR target point cloud corresponds to a real object. It estimates the credibility of each three-dimensional imaging point provided by the LiDAR and is usually determined from the reflected light intensity during the imaging of that point [31]. In this study, the initial confidence interval template is equivalent to the set of reflection intensities or reflectivities of each point cloud frame, both of which are parameters of the LiDAR itself. In each return from the LiDAR, the reflection intensity, apart from being derived from speed and time, is in fact the ratio of the echo power to the transmission power of the laser point. According to the existing optical model, the reflected laser intensity is better described by Formula (3):
P_R = (P_E · D_R² · ρ · cos α) / (4R²) · η_Sys · η_Atm,
where P_R is the echo power, P_E is the transmission power, D_R is the receiver aperture, ρ is the object surface reflectivity, α is the angle of incidence (the angle between the incident light and the normal of the object surface) and η_Sys and η_Atm are the system transmission rate and atmospheric transmission rate, respectively. In Formula (3), R is the range to the point, computed as the Euclidean distance, which measures the absolute distance between two points in a multidimensional space. In two-dimensional and three-dimensional space, the Euclidean distance is the actual distance between two points; for n-dimensional space it is given by Formula (4):
d = √( Σ_{i=1}^{n} (x_i − y_i)² ),
We can see that the reflectivity of the laser spot is inversely proportional to the square of the distance and depends on the angle of incidence at the object. Because the gaps between dust particles are large and their reflective surfaces are small, a dust target has poor light reflection ability and a deep reflective thickness. Therefore, compared with a rigid target, the confidence interval of a dust point cloud is generally low, and setting an appropriate threshold T can eliminate a small amount of dust point cloud. The main algorithm function of the confidence interval is Algorithm A1 (c.f. Appendix A). In addition, compensation should be added to distant point clouds because of the irradiation angle from the vehicle: beyond a certain distance, the active reflection intensity of the LiDAR decreases as distance increases, and this distance needs to be determined by analyzing the actual data.
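A minimal sketch of this reflectivity-based confidence template with distance compensation might look as follows; the threshold T, the compensation onset distance and the gain are illustrative values, whereas the paper determines the onset distance from actual data:

```python
import numpy as np

def initial_confidence(points, intensity, T=0.15,
                       comp_start=60.0, comp_gain=0.002):
    """Build the initial confidence template from laser reflectivity.
    Beyond comp_start metres the confidence is boosted to offset the
    1/R^2 intensity fall-off of Formula (3). T, comp_start and
    comp_gain are illustrative, not calibrated, values."""
    r = np.linalg.norm(points, axis=1)           # range R of each point
    conf = intensity.astype(float).copy()        # reflectivity as base confidence
    far = r > comp_start
    conf[far] += comp_gain * (r[far] - comp_start) ** 2   # distance compensation
    return conf, conf >= T                       # points below T are dust candidates
```

A weakly reflecting nearby point falls below T and is screened out as sparse dust, while a distant rigid-body point with the same raw intensity is rescued by the compensation term.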
In this study, the confidence interval template is generated based on the reflection intensity, and then the data are corrected through the density analysis. The density analysis will be detailed in the next section. Finally, it is used as the reference template for dust filtering, which is equivalent to the identification of rigid and non-rigid bodies by integrating the reflection intensity of the reference point cloud itself and the optical and geometric characteristics of the adjacent points, replacing a single standard or simple multiple filtering.

3.4. Quantification of Point Cloud Transparency and Density

The transparency and density of point clouds are the cues by which the human visual system and brain distinguish rigid bodies from dust. The purpose of this technical solution is to analyze these features through the characteristic attribute data of point clouds. In this study, the feature attribute data are analyzed along the horizontal axis and the vertical axis. Taking the horizontal-axis analysis as an example, the LiDAR adopted has 64 lines and a resolution of 1024, so each line's horizontal plane contains 1024 sampling positions along the horizontal axis, and the horizontal-axis analysis is performed once for each line.
1. Initialize the counter and starting point i to zero, and set the center distance floating threshold and the cumulative confidence threshold.
The center distance floating threshold D_T measures the stability of target point cloud positioning. Because a dust particle cluster is translucent and loose, the point cloud it generates is very discrete, and the spacing between adjacent points is large and irregular. The surface concentration of dust is not completely uniform: where the local particle density is high, the laser reflection is strong and the point cloud images close to the surface; in areas with sparse particles, the laser reflection is weak and the point cloud images far from the surface. In addition, the irregular surface shape of the dust itself makes the point cloud more discrete and irregular. In contrast, point cloud imaging on a rigid object surface is continuous due to its compactness; even on a rough and pitted surface, the imaging fluctuates but remains continuous. The measurement basis of the center distance floating threshold is shown in Figure 3.
The confidence threshold C_T works as follows: first, the algorithm's calculation results are compared against the center distance floating threshold D_T; then the confidence C(x, y) of each point is corrected and compared with the confidence threshold. Low-confidence points below the threshold C_T are determined to be dust points and deleted directly. Thus, D_T measures the stability of the point cloud and is used to correct the confidence C of each point, and C_T then measures whether the confidence C of each point meets the standard expected of a rigid body, filtering out non-conforming points.
2. For each point i in the point cloud, with coordinate point(i), the distance from point i to the LiDAR center is:
D_i = √( point(i).x² + point(i).y² ),
The main function logic of the horizontal-axis analysis is shown in Algorithm A2 (c.f. Appendix A). The vertical-axis analysis is performed in the same way: the scanning direction is changed to the 64 lines at the same angle in the longitudinal direction, performed over the 1024 resolution angles. The calculation method is the same as for the horizontal axis, but the cumulative threshold C_T needs to be reduced in a targeted manner.
The horizontal axis and vertical axis analysis aims to correct the confidence interval by comparing the density and uniformity of the point cloud in the horizontal and vertical directions. The point cloud reflected by the dust has low confidence interval, and the confidence interval of the rigid body point cloud with neat structure and dense points is further improved by the algorithm. Then, after setting an appropriate cumulative confidence interval threshold, the low confidence interval dust target point cloud can be deleted. The overall steps of the scheme and the detailed framework of the specific horizontal and vertical axis analysis are shown in Figure 4.
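One horizontal-axis pass over a single laser ring can be sketched as below. This is a simplified stand-in for Algorithm A2 in Appendix A, not a reproduction of it; D_T, C_T and the confidence penalty are assumed example values:

```python
import numpy as np

def horizontal_axis_filter(ring_points, conf, D_T=0.3, C_T=0.4, penalty=0.2):
    """One pass over a single laser ring (simplified stand-in for
    Algorithm A2; D_T, C_T and penalty are assumed values).
    Consecutive points whose center distance jumps by more than D_T
    are treated as unstable (dust-like) and have their confidence
    reduced; points whose corrected confidence falls below C_T are
    flagged for deletion."""
    # Formula (5): distance from each point to the LiDAR center.
    d = np.hypot(ring_points[:, 0], ring_points[:, 1])
    corrected = conf.astype(float).copy()
    jump = np.abs(np.diff(d)) > D_T          # floating center distance
    # Penalize both endpoints of each unstable gap.
    corrected[:-1][jump] -= penalty
    corrected[1:][jump] -= penalty
    return corrected, corrected >= C_T
```

A rigid surface yields a smooth distance profile and keeps its confidence, while the discrete, jumping profile of a dust cluster is penalized on both sides of every gap and drops below C_T.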

4. Results

4.1. Data Description

The test datasets include 390 GB of Innovusion Falcon Kinetic radar data (about 300 K frames), 612 GB of Ouster OS1 BH-64 radar data (about 200 K frames) and 170 GB of RoboSense M1 radar data (about 190 K frames). As shown in Figure 5, the radar is positioned at a height of 3.12 m above the ground, in the center of the truck's roof. In addition, the actual vehicle verification in the Dananhu open-pit mining area of Hami, Xinjiang, lasted approximately 3 months (1 convoy with 4 vehicles). It primarily took place during the hottest and most arid period in the field, spanning July to September, with each vehicle operating for 4–7 h daily.
Our algorithm/scheme is aimed at the braking problem of unmanned vehicles caused by dust in the practice of open-pit mining and stripping operations. Consequently, our primary objective is to enhance operational efficiency in dusty conditions and minimize false braking incidents resulting from misidentifying dust as an obstacle. In the experimental setup, manual observation is primarily conducted to assess the effectiveness of dust information filtration in field data samples. Figure 6 presents visualized experimental data samples, where BEV (bird’s-eye-view) projection and road surface segmentation are performed on the radar point cloud. Ground points are represented by green color, while non-ground points are depicted in red, providing an intuitive demonstration of the algorithm’s filtering efficacy.

4.2. Implementation Results of the Proposed Algorithm

The purpose of our modified random consensus sampling method is to distinguish the ground and obstacles (including dust, retaining walls and vehicles) to ensure that the computing power of subsequent algorithms is not wasted on the ground points.
The quantitative analysis of algorithm accuracy follows a standard procedure similar to that used for semantic segmentation models in the AI domain. The basic steps are as follows:
  • The data of the running vehicles in the mining area are recorded and sent back to the laboratory via the cloud server.
  • The data are split frame by frame, and the data frames with dust are screened out.
  • The ground points are removed, and the point cloud data of vehicle and dust are left.
  • The annotation tool is used to mark the target point cloud as positive and negative samples. P (positive) is the rigid target point, N (negative) is the non-rigid noise point such as dust and the annotated data frame is considered as the Ground Truth for validation.
  • Execute the dust filtering algorithm to process the same batch of data frames and also record the unfiltered rigid body points as P points, and the filtered points are counted as N points.
  • Compare the difference between data processing results and Ground Truth of the algorithm to complete quantitative analysis.
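The comparison in the last step can be sketched as a simple precision/recall computation over the annotated labels; this is a generic metric sketch, not the authors' exact evaluation script:

```python
def precision_recall(ground_truth, predicted):
    """Compare algorithm output against annotated Ground Truth labels
    ('P' = rigid target point, 'N' = dust/noise point), treating 'P'
    as the positive class."""
    tp = sum(1 for g, q in zip(ground_truth, predicted) if g == 'P' and q == 'P')
    fp = sum(1 for g, q in zip(ground_truth, predicted) if g == 'N' and q == 'P')
    fn = sum(1 for g, q in zip(ground_truth, predicted) if g == 'P' and q == 'N')
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

An over-aggressive filter lowers recall (rigid points wrongly removed), while a lenient one lowers precision (dust points surviving as P), which is the trade-off discussed below.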
Furthermore, the AI point cloud segmentation model RandLa-Net was included as the reference group, with 3/4 of the annotated samples allocated for training and the remaining used for testing. The quantization results of the AI segmentation model were also obtained to facilitate comparison.
The precision quantization results should be understood as serving data comparison purposes only. As shown in Table 1 and Table 2, because rigid-body targets are compact and dense while dust points are relatively discrete, the number of P points in the sample is significantly higher than that of N points. Therefore, increasing the confidence threshold only slightly increases the proportion of over-filtered rigid-body targets but has a greater impact on the overall quantization accuracy. In practice, however, filtering out low-reflection target points caused by factors such as dust shielding, angle and material does not degrade obstacle detection effectiveness, whereas missing some dense dust points leads to more noticeable negative effects. Hence, raising the confidence threshold is preferred during actual mining and stripping operations to achieve better results. Although the AI segmentation model's data indices closely resemble those of the algorithm presented in this study, its missed and false detections exhibit unpredictable randomness, which can significantly affect vehicle operation efficiency.
As shown in Figure 7, from top to bottom are the visualization effects of three steps: original point cloud data, reflectance filtered data results and the final results of Euclidean distance analysis of neighboring points. According to Figure 7, we can see that the dust cloud has been significantly filtered based on our approach.
The conventional approach involves utilizing the reflection intensity, reflectance ratio or Euclidean distance clustering methods for point cloud filtering. The reflection intensity method exhibits high sensitivity to distance and experiences an exponential decay effect in terms of intensity differences between distant and nearby points. Consequently, it necessitates the establishment of multiple threshold levels based on various angular distances.
The reflectance ratio method is an enhanced version of the reflection intensity method, as it incorporates distance correction compensation based on the reflection intensity. However, in dense areas like the one depicted in the second picture of Figure 7, Group B, the filter may not be entirely effective, resulting in some point cloud clusters retaining non-rigid pseudo-obstacle points. Although these points are scattered and lack the precise structure of real rigid targets, they can still be identified and eliminated using a neighbor point Euclidean distance analysis approach during step three.
The Euclidean distance clustering method is straightforward yet rudimentary. Firstly, it fails to exploit the structural information of point cloud adjacency, so each point must compare distances with all other points within the frame, which becomes a significant computational burden when a single frame exceeds 1000 points. Secondly, relying solely on Euclidean distance calculations cannot cleanly filter out dust cloud clusters: in areas where dust particles aggregate, closely arranged points in spatial proximity may form unfiltered pseudo-targets.
In our proposed method, the reflectance ratio filter is utilized to eliminate discrete points with low confidence, followed by an analysis of spatial structure relationships within the point cluster using the Euclidean distance method of neighboring points. Subsequently, a secondary confidence filter is applied to reduce the overall confidence of point cloud clusters exhibiting loose spatial structure relationships (by averaging individual point confidences). This approach achieves a balance between speed and effectiveness while avoiding inadvertent deletion.
In conclusion, the conventional reflectance ratio is taken as the fundamental confidence parameter, and a basic filter eliminates discrete points with low confidence. As depicted in Figure 7, the proportion of dust in this region varies from 50% to 90% across different scenes. Subsequently, an analysis based on the Euclidean distance between neighboring points is applied to decrease the confidence value of densely populated dust points, achieving real-time removal of persistent dust points, while genuine targets such as vehicles and stones are not erroneously filtered. Compared with the standard Euclidean distance method, there are two advantages: first, the computational load increases only linearly, rather than quadratically, with the point count; second, considering the continuity characteristics of neighbor point structures increases recognition accuracy.
The dynamic targets in the actual operational scenario include wide-body trucks, refueling trucks, sprinkler trucks and others, all of similar size. In this study, the wide-body truck, which appears most frequently at the scene, is taken as the primary target. The point cloud data collection is detailed in Figure 6, Figure 7 and Figure 8. On a transportation road in a dry mining area without water sprinklers, a wide-body truck serves as the target vehicle, running at speeds of 20 to 40 km/h. A radar-equipped wide-body truck of the same type (as shown in Figure 5) follows within 20 m as the data acquisition vehicle. The wide-body truck measures 3.4 m in width, 9 m in length and 3.9 m in height. Figure 6 illustrates a flat area of approximately 26 × 26 m ahead of the vehicle from a bird's-eye-view (BEV) perspective. Group A in Figure 7 presents point cloud visualization results from an oblique downward perspective approximately 13 m behind and 6 m above the target vehicle, covering a width of about 8 m. Group B shows views from radar perspectives about 16 m ahead of the target vehicle, with an image width of around 10 m.
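The BEV projection of Figure 6 can be approximated by binning points into a planar grid; the cell size below is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def bev_occupancy(points, extent=26.0, resolution=0.5):
    """Project 3D points into a bird's-eye-view occupancy grid.

    Covers a square of `extent` x `extent` metres ahead of the sensor
    (roughly the 26 x 26 m region shown in Figure 6); `resolution`
    is an assumed cell size for illustration.
    """
    cells = int(extent / resolution)
    grid = np.zeros((cells, cells), dtype=np.int32)
    # x forward in [0, extent), y lateral in [-extent/2, extent/2)
    ix = (points[:, 0] / resolution).astype(int)
    iy = ((points[:, 1] + extent / 2) / resolution).astype(int)
    valid = (ix >= 0) & (ix < cells) & (iy >= 0) & (iy < cells)
    np.add.at(grid, (ix[valid], iy[valid]), 1)  # count points per cell
    return grid
```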
The visual effects of point cloud filtering from different angles at different times are illustrated in Figure 8, where green represents rigid points, red represents points that are filtered out and yellow represents points filtered out after adjusting the confidence based on neighboring points' Euclidean distances. In Figure 8A, the observation point is positioned behind the target vehicle; in Figure 8B, it is behind its left side; and in Figure 8C, it is also behind its left side but closer and higher.
During the test period in the actual mining scenario of Dananhu No. 2 open-pit mine, the entire transportation distance of the unmanned vehicle was 1.64 km from the loading yard to the unloading yard, with a closed-loop process covering approximately 3.28 km. The frequency of single-vehicle transportation operations varied from 20 to 30 rounds per day.
Before the implementation of our algorithm, up to 37 false brakes (the highest recorded number) occurred within a single transportation cycle, with an average of 18.3. Notably, braking events caused by the misidentification of dust as obstacles are strongly associated with on-site road water-spraying operations; however, owing to the strong randomness of ground dryness, water-spraying volume and evaporation rate, quantifying this association remains challenging.
After the implementation of our algorithm, the rate of false braking events in a single transportation cycle approached zero, while false braking during the joint operation of four vehicles occurred 0 to 5 times per day, with an average of 2.1 occurrences. (A false brake is recorded when the autonomous driving system triggers braking in response to a perceived obstacle, but the on-board safety officer determines that no actual obstacle is ahead of the vehicle.)

5. Discussion and Conclusions

In this study, a point cloud filtering method based on the quantitative analysis of laser reflection intensity and spatial structure is proposed to deal with dust interference in mining-area scenes. Because of the large spacing between dust particles, dust partially reflects and partially transmits an incident laser beam, giving it translucent visual characteristics. Consequently, the reflected light captured and imaged by the radar forms a sparse three-dimensional point cloud, clearly different from the imaging of a conventional rigid body surface. The basic principle of the method is to distinguish real rigid obstacles from false obstacle clouds based on these features.
The experimental results show that the developed algorithm can filter dust particles from the original point cloud data. Compared with AI-based methods, it is simple, has a low computational cost and provides satisfactory dust-removal performance. The proposed method can therefore be applied to LiDAR sensors installed on transport vehicles and machinery in intelligent mines. Because the road surface in the field is uneven and untreated, with continuous uphill and downhill slopes, the speed limit for unmanned vehicles is set at 18 km/h. Based on actual task records, the dust filtering algorithm reduced the driving time on the main transport path (covering both no-load and heavy-load driving) by approximately two-thirds compared to before its deployment; consequently, the duration of a single operation cycle decreased by about half.
Taking a real-world scenario as an example, deploying the point cloud filtering algorithm resulted in a significant improvement in transportation efficiency. Specifically, no-load driving efficiency increased by 176.84%, heavy-load driving efficiency increased by 227.92% and overall working efficiency over a complete process cycle improved by 82.3% (minor optimizations to other processes, such as loading, unloading and queuing, had minimal impact on overall efficiency). Although the results varied slightly among vehicles due to differences in hardware equipment, the general trend remained consistent.
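These figures are internally consistent: an efficiency gain of x% corresponds to a driving-time reduction of 1 − 1/(1 + x/100), which recovers the roughly two-thirds reduction noted above. A quick check:

```python
def time_reduction(efficiency_gain_pct):
    """Fraction of time saved when throughput rises by the given percentage."""
    return 1 - 1 / (1 + efficiency_gain_pct / 100)

# No-load: +176.84% efficiency -> ~64% less driving time
print(round(time_reduction(176.84), 3))  # → 0.639
# Heavy-load: +227.92% efficiency -> ~70% less driving time
print(round(time_reduction(227.92), 3))  # → 0.695
# Both are close to the "approximately two-thirds" reduction cited above.
```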
The key to this research is to identify and filter dust by analyzing the feature differences between dust point clouds and rigid-object point clouds. In the future, researchers will have other options for the implementation details of the algorithm. At present, this study takes dust as the example object; by adjusting the algorithm, other non-rigid objects with translucent visual characteristics, such as smoke, water mist and rain curtains, could also be filtered. In addition, this study currently modifies the confidence through horizontal-axis and vertical-axis analysis to filter low-confidence dust. In the future, convolution statistics and other methods could be used to analyze point cloud density and exclude sparse point cloud targets.
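As one possible direction, the density-based analysis mentioned above could count returns per voxel and discard sparsely occupied cells. This is a hypothetical sketch, not part of the proposed method, and the voxel size and threshold are illustrative assumptions:

```python
import numpy as np

def density_filter(points, voxel=0.5, min_pts=4):
    """Hypothetical density-based sparse-point filter.

    Dust yields sparse returns, so voxels containing fewer than
    `min_pts` points are treated as dust and removed. Voxel size
    and threshold are illustrative, not tuned values.
    """
    keys = np.floor(points / voxel).astype(np.int64)
    # count how many points fall in each occupied voxel
    _, inv, counts = np.unique(keys, axis=0,
                               return_inverse=True,
                               return_counts=True)
    return points[counts[inv] >= min_pts]
```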

Author Contributions

X.J. and Y.X. conceived the ideas and designed the entire experimental work; W.Y. and C.N. carried out the coding and experiments; X.J., C.N. and W.Y. wrote the manuscript; Y.M. supervised the research. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the National Key Research and Development Program of China (Grant No. 2019YFC0605300).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

Authors Xianyao Jiang, Yi Xie, Chongning Na and Wenyang Yu were employed by the company Beijing ROCK-AI Autonomous Driving Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A

Algorithm A1: Confidence interval template
[pseudocode provided as an image in the original publication]
Algorithm A2: Horizontal axis analysis
[pseudocode provided as an image in the original publication]

Figure 1. Image with dust behind the vehicle in mine area.
Figure 2. Application scenario of dust filtering algorithm.
Figure 3. Example of single line imaging points of LiDAR point cloud for rigid and non-rigid objects.
Figure 4. Overall step diagram and horizontal and vertical axis analysis flow chart of the scheme.
Figure 5. Position of radar on the autonomous truck.
Figure 6. Visualization of BEV projection. The red points are obstacles recognized by the algorithm; the green points are background. The left figure shows the irregular shape of the pseudo-obstacle points when the smoke is not filtered; the right figure shows that only the car body and small clouds of dust connected to it are identified as obstacles after the implementation of our algorithm.
Figure 7. Visualization effect for point cloud data before/after filtering. Top: raw data frame; middle: preliminary results of point cloud filtering; bottom: final result processed with the Euclidean distance method of neighboring points.
Figure 8. Colored visualization of point cloud data. Green represents rigid points, red represents points that are filtered out and yellow represents points filtered out after adjusting the confidence based on neighboring points' Euclidean distances. In (A), the observation point is positioned behind the target vehicle; in (B), it is behind its left side; in (C), it is also behind its left side but closer and higher.
Table 1. Location: Dananhu No. 2 open-pit mine, Hami, Xinjiang. Type of radar: Robosense M1. Sample size: 2610 frames.

Confidence Threshold   Average Accuracy   Precision   Recall     F1 Score
8                      0.929609           0.955934    0.965730   0.960807
12                     0.870912           0.974288    0.877283   0.923245
20                     0.781711           0.974686    0.774107   0.862894
All                    0.890680           0.887993    0.974890   0.929415

Table 2. Location: Wujiata open-pit mine, Ordos, Inner Mongolia. Type of radar: Ouster OS1 (the confidence threshold is slightly adjusted due to the inconsistency between the Robosense and Ouster radars). Sample size: 1402 frames.

Confidence Threshold   Average Accuracy   Precision   Recall     F1 Score
10                     0.949873           0.972472    0.975742   0.974104
15                     0.919074           0.993265    0.922579   0.956618
25                     0.893920           0.998473    0.891232   0.941810
All                    0.924753           0.947004    0.972856   0.959756
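The F1 scores in Tables 1 and 2 are the standard harmonic mean of precision and recall, which can be verified row by row:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# First row of Table 1 (threshold 8):
print(round(f1_score(0.955934, 0.965730), 6))  # → 0.960807
```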

Share and Cite

Jiang, X.; Xie, Y.; Na, C.; Yu, W.; Meng, Y. Algorithm for Point Cloud Dust Filtering of LiDAR for Autonomous Vehicles in Mining Area. Sustainability 2024, 16, 2827. https://doi.org/10.3390/su16072827