Article

LIDAR Point Cloud Augmentation for Dusty Weather Based on a Physical Simulation

Haojie Lian, Pengfei Sun, Zhuxuan Meng, Shengze Li, Peng Wang and Yilin Qu

1 Key Laboratory of In-Situ Property-Improving Mining of Ministry of Education, Taiyuan University of Technology, Taiyuan 030024, China
2 Academy of Military Science, Beijing 100091, China
3 School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
4 Unmanned Vehicle Innovation Center, Ningbo Institute of NPU, Ningbo 315048, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(1), 141; https://doi.org/10.3390/math12010141
Submission received: 11 November 2023 / Revised: 18 December 2023 / Accepted: 28 December 2023 / Published: 31 December 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract

LIDAR is central to the perception systems of autonomous vehicles, but its performance is sensitive to adverse weather. An object detector trained by deep learning on LIDAR point clouds from clear weather cannot achieve satisfactory accuracy in adverse weather. Since collecting LIDAR data in adverse weather such as dust storms is a formidable task, we propose a novel data augmentation framework based on physical simulation. Our model takes into account the finite laser pulse width and beam divergence. Discrete dust particles are distributed randomly around the LIDAR sensor. The attenuation effects of the scatterers are represented implicitly with extinction coefficients, while the coincidentally returned echoes from multiple particles are evaluated by explicitly superimposing the power reflected from each particle. Based on the above model, the position and intensity of real point clouds collected in clear weather are modified to emulate dusty weather. Numerical experiments demonstrate the effectiveness of the method.

1. Introduction

Light Detection and Ranging (LIDAR) is an active remote sensing system that uses electromagnetic waves in the optical range to measure distance and generate point clouds of objects. As a leading technology of environment perception, LIDAR is widely used in geography, geomatics, atmospheric physics, urban planning, agriculture, military and defense [1], etc. With the rapid emergence of autonomous driving, LIDAR has become an indispensable tool for surveying and mapping [2,3], localization [4], and 3D object detection [5,6].
LIDAR performance is vulnerable to adverse weather, such as rain, snow, fog, and dust storms [7], as shown in Figure 1. Small particles, including raindrops, snowflakes, and ice particles, can absorb and scatter the laser beam [8], which has a two-fold effect on LIDAR data (Figure 2): (a) it attenuates the laser beam’s energy and reduces the detection range; and (b) the echoes reflected from the scatterers may introduce false positive points, which is particularly pronounced for snowflakes and dust particles. As a consequence, when a deep learning-based object detection model trained with data collected in clear weather is used under adverse weather conditions, its accuracy and reliability degrade significantly [9]. For instance, in the DARPA Urban Challenge, the winning vehicle Boss erroneously identified dust as objects during the competition [10]. According to the research of [11], the incidence of traffic accidents increases significantly in adverse weather, posing a substantial threat to public safety.
Collecting LIDAR point clouds under adverse weather conditions is time-consuming, labor-intensive, and highly risky, although some progress has been made along this line, including LIBRE [12], Seeing Through Fog [13], CADC [14], WADS [15], ACDC [16], Boreas [17], and so on. The lack of a large amount of high-quality training data in adverse weather impairs the reliability of deep learning-based object detectors. Physical simulation provides an effective data augmentation tool for transforming and expanding original clear-weather datasets to adverse weather [18]. Breakthroughs in this field can be seen in the contributions of [7,19,20,21], which create synthetic LIDAR point cloud datasets in fog, snowfall, and rain through physical simulation (Figure 3).
In this study, we propose a novel physics-based framework for the data augmentation of LIDAR point clouds in adverse weather and apply it to dusty weather. The algorithm is inspired by and extends the pioneering work of [7,19,20,21]. We consider a finite pulse width and use extinction coefficients to implicitly represent the attenuation effects of the scattering media. The coincidentally returned power from the particles is evaluated by explicitly summing the reflections from multiple particles at different locations. The algorithm is detailed in Section 3, and its differences from the work of [19,20,21] are discussed at the end of that section. Numerical examples are given in Section 4, followed by the conclusion in Section 5.

2. Related Work

2.1. LIDAR Model in Adverse Weather

Ref. [22] proposed a mathematical model for predicting the influence of rain on LIDAR. In their work, a Dirac-shaped pulse with infinitesimally small width is assumed. The intensity attenuation is characterized by the extinction coefficient, and range errors are modeled by a normal distribution with errors below 2%. The model is integrated into the Mississippi State University (MSU) Autonomous Vehicle Simulator and validated for different rain rates. Ref. [23] introduced a general framework for modeling LIDAR systems influenced by rain, fog, and snow. They considered a finite pulse width by approximating the pulse with a sin² function; the received signal power is calculated as a convolution between the transmitted signal and the spatial impulse response function. Solid objects are regarded as hard targets, with their spatial response characterized by a differential reflectivity, whereas the scatterers are regarded as soft targets that generate distributed scattering described by the backscattering coefficient of each range bin. In [24], a statistical model is adopted for generating noise filters. Each beam consists of multiple rays that are offset from each other to mimic beam divergence. The number of intersections between the rays and raindrops is counted, and its ratio to the total number of rays per beam determines the likelihood of the beam hitting a raindrop; a hit-ratio threshold is set beyond which a point is moved.

2.2. LIDAR Point Cloud Data Augmentation

The influence of adverse weather can be simulated on LIDAR point clouds captured in clear weather, without cumbersome data collection in realistic scenarios. Such a data augmentation technique can generate a significant amount of synthetic data for enhancing the training of object detection models. Ref. [25] uses a ray tracing method based on the model of [24] to simulate rain and snow phenomena. For foggy weather, they employ a probabilistic method to delete or move a point: the intensity of moved points varies in the range of 0–32% of the maximum intensity, and the intensity of unaltered points is computed according to the Beer–Lambert law. Ref. [26] combines the physical simulation of [25] with geometric data augmentation techniques, including random translation, random scaling, local scaling, random flipping, and label filtering; additionally, the Optuna hyperparameter optimization framework is used to optimize the augmentation parameters. LISA [21] is a versatile approach for simulating a variety of adverse weather conditions, including snow, rain, and fog; however, LISA assumes an infinitely small pulse width, so the signal can only hit particles at the same distance simultaneously. Based on the physical model of [23], Ref. [20] introduced a data augmentation method for foggy weather that incorporates beam divergence, a finite pulse width, and a crossover function. Furthermore, the authors of [19] proposed a snowfall simulation algorithm that also considers the influence of ground wetness. Ref. [19] models snowflakes as non-overlapping opaque spheres and samples them explicitly; the occlusion effect of a particle on the target is characterized by the ratio of its occlusion angle to the beam divergence.

3. Dust Simulation on Real Point Cloud

LIDAR measures the distance to a solid object using either a pulsed laser or a frequency-modulated continuous wave. The present work considers only pulsed LIDAR, which ranges by time-of-flight (ToF) measurement. The ToF principle calculates the distance by

$$R = \frac{c\,(t' - t)}{2} \qquad (1)$$

where c is the speed of light, t is the time of pulse emission, t′ is the time at which the reflected wave is received, and R is the maximum distance the pulse front has traveled by time t′; for example, an echo received 500 ns after emission corresponds to R = (3 × 10⁸ m/s × 5 × 10⁻⁷ s)/2 = 75 m.
In practice the pulse wave has a finite width and can be modeled by a sin² function:

$$P_T(t) = \begin{cases} P_0 \sin^2\!\left(\dfrac{\pi t}{2\tau_H}\right), & 0 \le t \le 2\tau_H \\[4pt] 0, & \text{otherwise,} \end{cases} \qquad (2)$$

where P_T is the power of the emitted pulse, which varies with the emission time t; P_0 is the peak power; and τ_H is the full width at half maximum. For the sin² pulse, τ_H equals half of the total pulse duration.
In clear weather, the power of the pulse reflected from a solid object at distance R_tar is expressed by

$$P(R_{\mathrm{tar}}) = \frac{C_A P_0\,\beta_{\mathrm{tar}}\,\xi(R_{\mathrm{tar}})}{R_{\mathrm{tar}}^2}\,P_T(\tau_H) \qquad (3)$$
where P is the power received by the LIDAR sensor; C_A is the system constant, which depends on the speed of light, the optical aperture area of the detector, and the overall system efficiency; and β_tar is the reflectivity of the object. Note that the time origin is set to t = 0 and the peak value is attained at t = τ_H. ξ in (3) is a crossover function, standing for the fraction of the area illuminated by the LIDAR transmitter (A_T) that is also observed by the receiver (A_R) (Figure 4):

$$\xi(R) = \begin{cases} \dfrac{A_R(R) \cap A_T(R)}{A_T(R)}, & \text{if } A_R(R) \cap A_T(R) < A_T(R) \\[4pt] 1, & \text{else.} \end{cases} \qquad (4)$$
The crossover function can be approximated by a linear model [19,20], as shown in Figure 4.
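As a sketch, the linear approximation of the crossover function can be coded as below; the near-field bounds r1 and r2 are illustrative assumptions that depend on the sensor geometry:

```python
def crossover_linear(r, r1=1.0, r2=10.0):
    """Linear model of the crossover function xi(R) [19,20].

    Below r1 the receiver sees none of the illuminated area (xi = 0);
    beyond r2 the transmitted and received fields fully overlap (xi = 1);
    xi grows linearly in between. r1 and r2 are illustrative values.
    """
    if r <= r1:
        return 0.0
    if r >= r2:
        return 1.0
    return (r - r1) / (r2 - r1)
```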
In dusty weather, the energy of the pulse is attenuated as it propagates through the scattering medium (dust particles), so the signal reflected from the target object becomes

$$P(R_{\mathrm{tar}}) = \frac{C_A P_0\,\beta_{\mathrm{tar}}\,\xi(R_{\mathrm{tar}})}{R_{\mathrm{tar}}^2}\,P_T(\tau_H)\,T^2(R_{\mathrm{tar}}) \qquad (5)$$
in which T is the one-way transmittance at distance R,

$$T(R) = \exp\!\left(-\int_0^R \alpha(r)\,dr\right) \qquad (6)$$
where α is the extinction coefficient, the sum of the scattering coefficient and the absorption coefficient. For a homogeneous medium, α is constant and independent of the distance R, so the transmittance reduces to

$$T(R) = \exp(-\alpha R). \qquad (7)$$
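In code, the homogeneous-medium transmittance of Equation (7) is a one-liner; the extinction coefficient 0.01 m⁻¹ below matches the value used in the blowing-sand experiments of Section 4:

```python
import numpy as np

def transmittance(r, alpha=0.01):
    """One-way transmittance T(R) = exp(-alpha * R), Equation (7).

    alpha is the extinction coefficient in 1/m; 0.01 is the value used
    in the paper's blowing-sand experiments.
    """
    return np.exp(-alpha * np.asarray(r, dtype=float))

# Two-way attenuation of a hard-target echo at 50 m: T(50)^2 ~ 0.37
print(transmittance(50.0) ** 2)
```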
In addition to reducing the pulse energy, dust particles reflect the signal and introduce noisy points. Owing to the finite pulse width, a portion of the medium within a beam segment is illuminated at any instant, and multiple particles can reflect the wave simultaneously. The power returned to the LIDAR at any instant is therefore the superposition of multiple echoes:

$$P_{\mathrm{scatter}}(R) = \sum_{d \in S} \frac{C_A P_0\,\beta_d\,\xi(R_d)}{R_d^2}\, P_T\!\left(\frac{2(R - R_d)}{c} + \tau_H\right) T^2(R_d)\,\frac{\theta_d}{\Theta}, \quad S = \{\, d \mid R - c\tau_H \le R_d \le R \,\} \qquad (8)$$

where d indexes the discrete particles and β_d is the reflectance of the dth particle at distance R_d. The fraction of the beam cross section occupied by the dth particle is θ_d/Θ, in which θ_d is the angle occluded by the particle and Θ is the divergence angle of the beam. S is the set of particles, associated with distance R, that return the signal coincidentally.
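The superposition of Equation (8) can be sketched directly in Python using the helper functions above; the particle tuples, the default constants, and the lumping of C_A and P_0 into plain arguments are illustrative assumptions of this sketch:

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def scatter_power(R, particles, c_a=1.0, p0=1.0, tau_h=10e-9, alpha=0.01,
                  theta_beam=0.003):
    """Superimposed echo power from dust particles (Equation (8)), a sketch.

    particles  : iterable of (R_d, beta_d, theta_d) tuples
    theta_beam : beam divergence Theta in radians (0.003 as in Section 4)
    Uses the pulse_power, crossover_linear and transmittance sketches
    defined above; all default values are illustrative.
    """
    total = 0.0
    for r_d, beta_d, theta_d in particles:
        if not (R - C * tau_h <= r_d <= R):
            continue  # particle lies outside the illuminated beam segment
        # phase of the pulse that this particle reflects back at range R
        t_echo = 2.0 * (R - r_d) / C + tau_h
        total += (c_a * p0 * beta_d * crossover_linear(r_d) / r_d ** 2
                  * pulse_power(t_echo, 1.0, tau_h)  # unit-amplitude pulse shape
                  * transmittance(r_d, alpha) ** 2
                  * theta_d / theta_beam)
    return total
```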
To complete the LIDAR equation, the extinction coefficient needs to be evaluated:

$$\alpha = \int_0^{\infty} \sigma_{\mathrm{ext}}(r)\,N(r)\,dr \qquad (9)$$
where σ_ext is the extinction cross section and N(r) is the particle size distribution function. σ_ext can be expressed as

$$\sigma_{\mathrm{ext}} = \pi r^2\, Q_{\mathrm{ext}} \qquad (10)$$

where Q_ext is the extinction efficiency, which is calculated with Mie scattering theory:

$$Q_{\mathrm{ext}}(x, m) = \frac{2}{x^2} \sum_{n=1}^{\infty} (2n+1)\,\mathrm{Re}\{a_n + b_n\} \qquad (11)$$

in which x = 2πr/λ is the size parameter, m is the complex refractive index, and a_n and b_n are the Mie coefficients. In practice, the extinction coefficient α is often determined directly from experiments.
The particle size distribution function in Equation (9) is given by

$$N(r) = N_0\, p(r) \qquad (12)$$
where N_0 is the total number of particles per unit volume (the particle concentration) and p(r) is the probability density function. The log-normal distribution is commonly used for modeling particle size distributions [27]:

$$p(r) = \frac{1}{r \ln(\sigma_g)\sqrt{2\pi}} \exp\!\left(-\frac{(\ln r - \ln r_m)^2}{2\ln^2(\sigma_g)}\right) \qquad (13)$$

where r is the particle radius, σ_g is the geometric standard deviation, and r_m is the mean particle radius. p(r) satisfies the normalization property

$$\int_0^{\infty} p(r)\,dr = 1. \qquad (14)$$
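Combining Equations (9)–(13), the extinction coefficient can be estimated by Monte-Carlo integration over the log-normal size distribution. The sketch below assumes a constant extinction efficiency Q_ext as a placeholder, whereas the paper obtains Q_ext from Mie theory or experiments; the concentration N_0 is likewise illustrative:

```python
import numpy as np

def extinction_coefficient(n0, r_m, sigma_g, q_ext=2.0,
                           n_samples=100_000, seed=0):
    """Monte-Carlo estimate of alpha = N0 * E[pi r^2 Q_ext], Equations (9)-(13).

    n0      : particle concentration (particles per m^3), illustrative
    r_m     : mean particle radius of the log-normal distribution (m)
    sigma_g : geometric standard deviation of the distribution
    q_ext   : extinction efficiency; a constant placeholder here, while
              the paper computes it from Mie theory (Equation (11)).
    """
    rng = np.random.default_rng(seed)
    # numpy parameterizes the log-normal by the mean and sigma of ln(r)
    radii = rng.lognormal(mean=np.log(r_m), sigma=np.log(sigma_g), size=n_samples)
    return n0 * np.mean(np.pi * radii ** 2 * q_ext)

# e.g., blowing sand: mean radius 20 microns (Section 4.1), illustrative N0
alpha = extinction_coefficient(n0=1.0e7, r_m=20e-6, sigma_g=2.0)
```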
We add that our model is based on the following assumptions:
  • Single scattering, i.e., a signal is reflected only once before it returns to the LIDAR receiver.
  • The discretely distributed particles are impenetrable hard spheres.
  • The sizes of the dust particles are smaller than the wavelength.
Based on the aforementioned model, our data augmentation algorithm can be summarized as follows:
  • Sample dust particles around the LIDAR for each layer according to the log-normal size distribution (Equation (13)).
  • Divide the range of R_tar into distance intervals.
  • Reduce the original intensity of the signal reflected from the solid object by introducing the dust attenuation coefficient into the LIDAR equation (Equation (5)).
  • Evaluate the total power reflected from the scattering particles at each distance by explicitly superimposing the echoes from the multiple particles within the beam segment (R − cτ_H, R).
  • By comparing the power reflected from the solid object with that from the dust particles at different distances, remove or relocate the original point accordingly.
The pseudocode of the algorithm is given in Algorithm 1, followed by a condensed implementation sketch.
Algorithm 1 Dusty hybrid Monte-Carlo approach
 1: Input: LIDAR point cloud in clear weather (pc)
 2: Output: Noisy point cloud in dusty weather
 3: for l in n_l layers do
 4:     Randomly sample N_0 scatterers conforming to the log-normal distribution (Equation (13))
 5:     for (x, y, z) in pc do
 6:         R_tar ← √(x² + y² + z²)
 7:         Retrieve the signal intensity P_ini at R_tar from the dataset
 8:         Infer the system constant C_A by solving Equation (3)
 9:         Evaluate the beam divergence and crossover function
10:         Evaluate the extinction coefficient α from the size distribution and particle density (Equation (9))
11:         for R in (0, 1, 2, …, R_tar) do
12:             if R == R_tar then
13:                 Evaluate the back-reflected power from the solid object using Equation (5)
14:             else
15:                 Detect the collection of particles in the range (R − cτ_H, R)
16:                 Evaluate the echo of the scatterers using Equation (8)
17:             end if
18:         end for
19:         P* ← max P(R)
20:         R* ← argmax P(R)
21:         if P* < P_tol then
22:             remove the point
23:         end if
24:         if P* == P(R_tar) then
25:             return (x, y, z, P(R_tar))
26:         else
27:             (x*, y*, z*) ← (R*/R_tar)(x, y, z)
28:             return (x*, y*, z*, P(R*))
29:         end if
30:     end for
31: end for
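A condensed Python sketch of the per-point core of Algorithm 1, built from the helper sketches above, is given below. The power threshold p_tol, the 1 m range step, and the folding of the target reflectivity β_tar into the inferred constant are simplifying assumptions of this sketch, not choices documented in the paper:

```python
import numpy as np

def augment_point(x, y, z, p_ini, particles, alpha=0.01,
                  tau_h=10e-9, p_tol=1e-6):
    """Condensed per-point core of Algorithm 1 (a sketch).

    particles : (R_d, beta_d, theta_d) samples for the current layer.
    p_tol and the 1 m range step are illustrative; folding the target
    reflectivity beta_tar into c_ap0 is a simplification of this sketch.
    Uses the crossover_linear, transmittance and scatter_power sketches
    defined earlier.
    """
    r_tar = float(np.sqrt(x * x + y * y + z * z))
    # Invert Equation (3) to infer the lumped constant C_A * P_0 * beta_tar
    # from the clear-weather intensity.
    c_ap0 = p_ini * r_tar ** 2 / max(crossover_linear(r_tar), 1e-12)

    # Scatterer echoes on a 1 m range grid (Equation (8)).
    powers = {r: scatter_power(r, particles, c_a=c_ap0, p0=1.0,
                               tau_h=tau_h, alpha=alpha)
              for r in np.arange(1.0, np.floor(r_tar) + 1.0)}
    # Attenuated hard-target echo (Equation (5)).
    powers[r_tar] = (c_ap0 * crossover_linear(r_tar) / r_tar ** 2
                     * transmittance(r_tar, alpha) ** 2)

    r_star = max(powers, key=powers.get)
    if powers[r_star] < p_tol:
        return None                          # point removed
    if r_star == r_tar:
        return x, y, z, powers[r_tar]        # only the intensity is attenuated
    s = r_star / r_tar                       # relocate the point towards the sensor
    return s * x, s * y, s * z, powers[r_star]
```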
Since our work is based on the algorithms proposed by [19,20,21], it is worth highlighting the main differences from them:
  • Compared to LISA [21], our approach considers a finite pulse width by approximating the Dirac delta function with a sin² function, as in [19,20]. This implies that a pulse can illuminate a collection of particles at different distances simultaneously, and the received power is a superposition of multiple echoes returned to the receiver coincidentally.
  • Unlike [20], which represents backscattering effects using a backscattering coefficient associated with each range bin, our simulation calculates the echoes explicitly by superimposing the coincident reflections from different particles.
  • In the work of [19], each particle reflects only a fraction of the beam divergence angle and lets the remaining portion of the beam reach the target, thereby modeling the occlusion between particles and the target. Such an operation implies that the geometric cross section equals the extinction cross section and neglects diffraction. In our work, the attenuation effect is represented implicitly by the extinction coefficient at each range bin.
The aforementioned differences are summarized in Table 1.

4. Experiments and Results

The Seeing Through Fog (STF) dataset is a multimodal dataset encompassing various weather conditions [13]. We conduct dust simulation experiments on the clear-day subset of this dataset. Before running the simulation algorithm, the distributions of sand dust particles must be precomputed in 64 two-dimensional planes (circles with a radius of 80 m), corresponding to the 64 channels of the Velodyne HDL64 S3D sensor used in the STF dataset [19]. In our experiments, weather conditions with airborne sand dust are classified into three categories: floating dust, blowing sand, and dust storm [28]. The particle size distribution of airborne sand dust varies under different weather conditions, so we use the mean particle radius as a variable to simulate LIDAR point cloud data under each condition. In addition, we discuss and analyze the impact of the beam divergence angle and the half-power pulse width on the simulation results. We further compare our simulation algorithm with preceding adverse-weather LIDAR simulation algorithms. Finally, we train several mainstream 3D object detection models and compare them with a clear-sky baseline to validate the effectiveness of our simulation algorithm. The computer specifications used in this study are as follows:
  • CPU: Intel Core i9-10900K @ 3.70 GHz;
  • RAM: 64 GB DDR4 RAM;
  • GPU: NVIDIA GeForce RTX 3090;
  • Hard-disk drive: 1 TB SSD + 2 TB HDD.

4.1. Qualitative Results

According to the research of [29,30], the mean radii of the log-normal distributions of the sampled sand dust particles in floating dust, blowing sand, and dust storms are taken as 15, 20, and 25 microns, respectively. Ref. [31] measured the particle concentration of airborne sand dust in Yinchuan, China, under the three weather conditions. We represent the concentration by the proportion of the sampled particle area to the total plane area, and set the sampling area ratios for the three weather conditions to 1:2:4, as sketched below.
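A minimal sketch of the per-layer particle sampling implied by this setup follows; only the mean radii, the 80 m disc, and the 1:2:4 occupancy ratios come from the paper, while the absolute occupancy values and σ_g are illustrative assumptions:

```python
import numpy as np

# Mean radii (Section 4.1) and 1:2:4 area-occupancy ratios; the base
# occupancy value 1e-9 is an illustrative assumption.
CONDITIONS = {
    "floating dust": {"r_m": 15e-6, "occupancy": 1e-9},
    "blowing sand":  {"r_m": 20e-6, "occupancy": 2e-9},
    "dust storm":    {"r_m": 25e-6, "occupancy": 4e-9},
}

def sample_layer(condition, sigma_g=2.0, disc_radius=80.0, seed=0):
    """Sample particle positions and radii in the 80 m disc of one layer."""
    rng = np.random.default_rng(seed)
    cfg = CONDITIONS[condition]
    target_area = cfg["occupancy"] * np.pi * disc_radius ** 2
    radii, positions, total_area = [], [], 0.0
    while total_area < target_area:
        r = rng.lognormal(np.log(cfg["r_m"]), np.log(sigma_g))
        # area-uniform position in the disc
        rho = disc_radius * np.sqrt(rng.random())
        phi = 2.0 * np.pi * rng.random()
        radii.append(r)
        positions.append((rho * np.cos(phi), rho * np.sin(phi)))
        total_area += np.pi * r ** 2
    return np.array(positions), np.array(radii)

positions, radii = sample_layer("blowing sand")
```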
In Figure 5, we compare the simulated data under the different dusty weather conditions (floating dust, blowing sand, and dust storm) with the original data captured in clear weather. Compared to the original point cloud, in our simulated data the points in the distance become sparser, a growing number of “clutter points” appear close to the sensor, and the point cloud color shifts towards red overall. These changes indicate that an increasing number of original target points are misidentified by the LIDAR, manifested primarily as forward shifts in position and attenuation of the reflection intensity. As shown in Figure 5, the outlines of pedestrians slightly farther away become increasingly unclear; the cyclist towards the rear virtually disappears in the dust storm scene; and the reflected signal intensity of the cars declines noticeably, making their contours indiscernible in dust storm weather. This phenomenon is attributed to the increase in the quantity and size of sand dust particles in the air, which occlude the targets more heavily. The larger the extinction coefficient, the more significant the attenuation of the LIDAR pulse intensity, which ultimately reduces the measurement range of the LIDAR.
The half-power pulse width is a crucial system parameter whose value is determined by the specific application and design requirements. In medium-range applications (such as autonomous driving and traffic monitoring), where a balance between distance resolution and energy propagation must be struck, the half-power pulse width may vary from a few nanoseconds to several hundred nanoseconds. We therefore examined the impact of different half-power pulse widths within this range, with the results depicted in Figure 6. When the half-power pulse width is set to 5 ns or 10 ns, no significant difference is observed in the simulation results. When it is set to 100 ns, there is a visible increase in the number of “clutter points”, but no evident change in the overall reflected power intensity of the point cloud. This indicates that when the half-power pulse width is sufficiently large, more dust particles are illuminated by the laser beam, leading to more of these particles being misidentified by the LIDAR. On the other hand, a larger pulse width implies that the laser pulse lasts longer and its energy is distributed over a broader spatial range; in such a scenario, the intensity attenuation of the laser beam during propagation is relatively slow, so the overall reflected power intensity does not change greatly.
The beam divergence angle influences the resolution and measurement accuracy of LIDAR. We tested three beam divergence angles: 0.003 radians [19], 0.006 radians [32], and 0.01 radians [33]; the simulation results are illustrated in Figure 7. From the provided examples, it is evident that as the beam divergence angle increases, the number of clutter points also increases; additionally, the reflection intensity of the targets decreases, and the color transitions from blue to red. With a smaller divergence angle, the laser beam is more focused, exhibits higher penetration capability, and thus delivers higher pulse intensity at the object point. Conversely, a larger divergence angle includes more dust particles within the laser beam, increasing the level of occlusion at the object points.
Finally, we compare our simulation algorithm with two other LIDAR simulators, LISA [21] and the algorithm of [19] (Figure 8). LISA produces the smallest number of “clutter points”, and our simulation algorithm generates the most. Regarding the overall point cloud reflectance, LISA’s simulated data have the lowest intensity, followed by our algorithm, while Hahner’s algorithm [19] exhibits the highest reflected power intensity. In comparison, our algorithm excels in simulating the occlusion effects of scattered particles on object points while offering a more rational calculation of the reflectance intensity. This not only enables our algorithm to capture occlusion effects on object points more accurately, but also keeps the simulated laser reflectance intensity credible for real-world scenarios.

4.2. Quantitative Results

Over 300 samples, we conducted a statistical analysis of the simulated data under floating dust, blowing sand, and dust storm conditions. We recorded the time required for each execution of the simulation algorithm (time), the number of points removed because their reflectance fell below the threshold (removed points), the number of points with changed positions (scattered points), and the number of points with only intensity changes (attenuated points), and then calculated the mean of each quantity. The statistical results are summarized in Table 2, providing a visual representation of the performance of our simulation algorithm under the various dusty weather conditions.
These results indicate a gradual increase in the execution time of our simulation algorithm from floating dust to dust storm conditions; simultaneously, the number of affected target points rises. This further confirms the reliability and effectiveness of our simulation algorithm in simulating LIDAR perception under various dusty weather conditions. Specifically, as the quantity of sampled sand dust particles increases, the attenuation of the LIDAR pulse intensity and the extent of the impact on targets become more pronounced.
The selection of training and test sets is very important for training a high-performance detection model [34]. Owing to the absence of publicly available large-scale dusty point cloud datasets, following [19], our training and testing were conducted on two clear-weather subsets of the STF dataset [13], which contain 4396 and 1816 samples, respectively. We apply our blowing sand simulation to all samples in both the training and test sets. We focus on relaxed intersection-over-union (IoU) thresholds, present results using the official KITTI evaluation framework, and report average precision (AP) at 40 recall positions, as suggested in [35]. For the 3D object detection methods, we choose PV-RCNN [36] and PointRCNN [37] to validate the effectiveness of our simulation algorithm.
Hyperparameters such as the learning rate and regularization strategy play a key role in model training [38]. We utilized the OpenPCDet framework [39] with default configurations to train these models. Taking PV-RCNN as an example, the initial learning rate in OpenPCDet is set to 0.01 and adjusted over time; the optimizer is Adam-onecycle, and corner loss regularization is applied to avoid overfitting [38]. Each model was trained from scratch for 80 epochs. OpenPCDet [39] saves a model checkpoint after each training epoch; we tested all checkpoints and recorded the results of the model with the best overall performance. The detection results are presented in Table 3.
We listed these models in descending order based on their performance. Each model predicts three classes of objects (car, pedestrian, cyclist). Our simulation algorithm exhibited improvements across all models and all three classes of objects. Particularly noteworthy is the significant enhancement for PV-RCNN [36], which showed an increase of 10.23% in mean average precision (mAP) over those three classes compared to the clear-sky baseline. In comparison, PointRCNN [37] also demonstrated some performance improvement, albeit with a small growth rate of only 1.72%. The performance improvement observed in PV-RCNN may stem from its higher sensitivity to our simulated data, allowing it to better learn the features of the objects and improve the model’s generalization capability.
A model trained only with clear-weather data lacks information about dusty environments. Our simulation allows the object detector to be trained in a synthetic dusty environment, injecting more information into the model so that it achieves higher accuracy in adverse weather. We note that object detection based on simulated LIDAR point clouds is an inverse problem, like tomography; to improve detection performance with few measurements, prior physical knowledge such as partial differential equations (PDEs) can be incorporated into the detector.

5. Conclusions

In this study, we introduce a novel framework for augmenting LIDAR point clouds in dusty weather based on physical simulation. In the presented approach, dust particles are distributed discretely around the LIDAR, and the finite laser pulse width and beam divergence are considered in the model. The attenuation effects of the dust particles on the target object and on other particles are represented implicitly by extinction coefficients. The coincidentally returned power is evaluated explicitly by superimposing the echoes from multiple particles. Based on the above simulation, the position and intensity of the original real point clouds are modified.
By analyzing physically simulated point clouds under different dusty weather conditions, namely floating dust, blowing sand, and dust storm, we verify the reliability and effectiveness of our algorithm. The synthetic data generated by our simulator can be used as training data to enhance the accuracy and robustness of an object detector in dusty weather, thus alleviating the difficulty of collecting real dusty point clouds. Although our method is applied here to dusty weather, it can also be used for other adverse weather conditions (snowfall, rain, etc.) with minor modifications. Our work holds promise for practical applications in autonomous driving and robotics, and we expect it to advance further research on intelligent perception systems in adverse weather conditions.
In the future, the following directions are worth pursuing:
  • We will combine our algorithm with numerical simulation in electromagnetic analysis, for example boundary element methods [40,41], and extend it to simulating sonar images in complex underwater environments [42,43].
  • The adaptivity and generalization errors of the algorithm will be studied. In addition, we will introduce physical knowledge represented by partial differential equations (PDEs) into the object detector to improve its performance with few sparse measurements.
  • The algorithm will be extensively tested and applied to downstream tasks such as classification [44], tracking, segmentation [45], recognition, and detection [46,47].

Author Contributions

Conceptualization, H.L., Z.M., S.L. and Y.Q.; Methodology, H.L. and Y.Q.; Software, H.L., P.S. and Z.M.; Validation, P.S., S.L., P.W. and Y.Q.; Formal Analysis, H.L., P.S. and P.W.; Investigation, H.L., P.S. and Z.M.; Resources, S.L. and P.W.; Data Curation, P.S., S.L. and P.W.; Writing—Original Draft Preparation, H.L., P.S. and Y.Q.; Writing—Review & Editing, H.L., P.S. and Y.Q.; Visualization, P.S.; Supervision, H.L. and Y.Q.; Project Administration, H.L. and Y.Q.; Funding Acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 52274222.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: https://www.uni-ulm.de/en/in/driveu/projects/dense-datasets#c811669 (accessed on 10 November 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Sun, W.; Wei, Z.; Sun, H.; He, H. Review on the Application of Airborne LiDAR in Active Tectonics of China: Dushanzi Reverse Fault in the Northern Tian Shan. Front. Earth Sci. 2022, 10, 895758.
2. Diab, A.; Kashef, R.; Shaker, A. Deep Learning for LiDAR Point Cloud Classification in Remote Sensing. Sensors 2022, 22, 7868.
3. Zheng, S.; Wang, J.; Rizos, C.; Ding, W.; El-Mowafy, A. Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis. Remote Sens. 2023, 15, 1156.
4. Chen, J.; Zhao, X.; Su, Z. 3D LiDAR-Based Localization Methods: An Overview. In Proceedings of the International Conference on Guidance, Navigation and Control, Tianjin, China, 5–7 August 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1624–1636.
5. Mao, J.; Shi, S.; Wang, X.; Li, H. 3D object detection for autonomous driving: A comprehensive survey. Int. J. Comput. Vis. 2023, 131, 1909–1963.
6. Gundu, S.R.; Panem, C.; Vijaylaxmi, J.; Dave, A. Advanced Rival Combatant LIDAR-Guided Directed Energy Weapon Application System Using Hybrid Machine Learning. Robot. Process. Autom. 2023, 33–46.
7. Sun, P.; Sun, C.; Wang, R.; Zhao, X. Object detection based on roadside LiDAR for cooperative driving automation: A review. Sensors 2022, 22, 9316.
8. Dreissig, M.; Scheuble, D.; Piewak, F.; Boedecker, J. Survey on LiDAR Perception in Adverse Weather Conditions. arXiv 2023, arXiv:2304.06312.
9. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177.
10. Urmson, C.; Anhalt, J.; Bagnell, D.; Baker, C.; Bittner, R.; Clark, M.; Dolan, J.; Duggins, D.; Galatali, T.; Geyer, C.; et al. Autonomous driving in urban environments: Boss and the urban challenge. J. Field Robot. 2008, 25, 425–466.
11. Islam, M.M.; Alharthi, M.; Alam, M.M. The impacts of climate change on road traffic accidents in Saudi Arabia. Climate 2019, 7, 103.
12. Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The multiple 3D LiDAR dataset. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1094–1101.
13. Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11682–11692.
14. Pitropov, M.; Garcia, D.E.; Rebello, J.; Smart, M.; Wang, C.; Czarnecki, K.; Waslander, S. Canadian adverse driving conditions dataset. Int. J. Robot. Res. 2021, 40, 681–690.
15. Kurup, A.; Bos, J. DSOR: A scalable statistical filter for removing falling snow from LiDAR point clouds in severe winter weather. arXiv 2021, arXiv:2109.07078.
16. Sakaridis, C.; Dai, D.; Van Gool, L. ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10765–10775.
17. Burnett, K.; Yoon, D.J.; Wu, Y.; Li, A.Z.; Zhang, H.; Lu, S.; Qian, J.; Tseng, W.K.; Lambert, A.; Leung, K.Y.; et al. Boreas: A multi-season autonomous driving dataset. Int. J. Robot. Res. 2023, 42, 33–42.
18. Rebuffi, S.A.; Gowal, S.; Calian, D.A.; Stimberg, F.; Wiles, O.; Mann, T.A. Data augmentation can improve robustness. Adv. Neural Inf. Process. Syst. 2021, 34, 29935–29948.
19. Hahner, M.; Sakaridis, C.; Bijelic, M.; Heide, F.; Yu, F.; Dai, D.; Van Gool, L. LiDAR snowfall simulation for robust 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16364–16374.
20. Hahner, M.; Sakaridis, C.; Dai, D.; Van Gool, L. Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 15283–15292.
21. Kilic, V.; Hegde, D.; Sindagi, V.; Cooper, A.B.; Foster, M.A.; Patel, V.M. LiDAR light scattering augmentation (LISA): Physics-based simulation of adverse weather conditions for 3D object detection. arXiv 2021, arXiv:2107.07004.
22. Goodin, C.; Carruth, D.; Doude, M.; Hudson, C. Predicting the Influence of Rain on LIDAR in ADAS. Electronics 2019, 8, 89.
23. Rasshofer, R.H.; Spies, M.; Spies, H. Influences of weather phenomena on automotive laser radar systems. Adv. Radio Sci. 2011, 9, 49–60.
24. Hasirlioglu, S.; Riener, A. A Model-Based Approach to Simulate Rain Effects on Automotive Surround Sensor Data. In Proceedings of the 2018 IEEE International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018.
25. Teufel, S.; Volk, G.; Bernuth, A.V.; Bringmann, O. Simulating Realistic Rain, Snow, and Fog Variations for Comprehensive Performance Characterization of LiDAR Perception. In Proceedings of the 2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring), Helsinki, Finland, 19–22 June 2022.
26. Teufel, S.; Gamerdinger, J.; Volk, G.; Gerum, C.; Bringmann, O. Enhancing Robustness of LiDAR-Based Perception in Adverse Weather using Point Cloud Augmentations. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–6.
27. Chen, H.Y.; Ku, C.C. Calculation of wave attenuation in sand and dust storms by the FDTD and turning bands methods at 10–100 GHz. IEEE Trans. Antennas Propag. 2012, 60, 2951–2960.
28. Ma, J.; Chen, F. Discussion of causes and observations of blowing sand and floating dust. Meteorol. Sci. Technol. Zhejiang 2003, 22, 44–46.
29. Hoffmann, C.; Funk, R.; Sommer, M.; Li, Y. Temporal variations in PM10 and particle size distribution during Asian dust storms in Inner Mongolia. Atmos. Environ. 2008, 42, 8422–8431.
30. Chen, W.; Fryrear, D. Sedimentary characteristics of a haboob dust storm. Atmos. Res. 2002, 61, 75–85.
31. Shao, J.; Mao, J. Dust particle size distributions during spring in Yinchuan, China. Adv. Meteorol. 2016, 2016, 6940502.
32. Sadiku, M.N.; Musa, S.M.; Nelatury, S.R. Free space optical communications: An overview. Eur. Sci. J. 2016, 12, 55–68.
33. Salhi, M.; Boudriga, N. Multi-array spherical LiDAR system for drone detection. In Proceedings of the 2020 22nd International Conference on Transparent Optical Networks (ICTON), Bari, Italy, 19–23 July 2020; pp. 1–5.
34. Uçar, M.K.; Nour, M.; Sindi, H.; Polat, K. The effect of training and testing process on machine learning in biomedical datasets. Math. Probl. Eng. 2020, 2020, 2836236.
35. Simonelli, A.; Bulo, S.R.; Porzi, L.; López-Antequera, M.; Kontschieder, P. Disentangling monocular 3D object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1991–1999.
36. Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; Li, H. PV-RCNN: Point-voxel feature set abstraction for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10529–10538.
37. Shi, S.; Wang, X.; Li, H. PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 770–779.
38. Tian, Y.; Zhang, Y. A comprehensive survey on regularization strategies in machine learning. Inf. Fusion 2022, 80, 146–166.
39. Shi, S. OpenPCDet: An Open-Source Toolbox for 3D Object Detection from Point Clouds. Ph.D. Thesis, The Chinese University of Hong Kong, Hong Kong, China, 2020.
40. Chen, L.; Lian, H.; Xu, Y.; Li, S.; Liu, Z.; Atroshchenko, E.; Kerfriden, P. Generalized isogeometric boundary element method for uncertainty analysis of time-harmonic wave propagation in infinite domains. Appl. Math. Model. 2023, 114, 360–378.
41. Chen, L.; Wang, Z.; Lian, H.; Ma, Y.; Meng, Z.; Li, P.; Ding, C.; Bordas, S.P. Reduced order isogeometric boundary element methods for CAD-integrated shape optimization in electromagnetic scattering. Comput. Methods Appl. Mech. Eng. 2024, 419, 116654.
42. Chen, L.; Lian, H.; Liu, Z.; Gong, Y.; Zheng, C.; Bordas, S. Bi-material topology optimization for fully coupled structural-acoustic systems with isogeometric FEM–BEM. Eng. Anal. Bound. Elem. 2022, 135, 182–195.
43. Chen, L.; Zhao, J.; Lian, H.; Yu, B.; Atroshchenko, E.; Li, P. A BEM broadband topology optimization strategy based on Taylor expansion and SOAR method—Application to 2D acoustic scattering problems. Int. J. Numer. Methods Eng. 2023, 124, 5151–5182.
44. Yang, T.; Fu, D.; Hao, L. Supervised laplacian graph multiple kernel classification. In Proceedings of the 2016 55th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Tsukuba, Japan, 20–23 September 2016; pp. 1461–1465.
45. Yu, X.; Zhou, Z.; Gao, Q.; Li, D.; Ríha, K. Infrared image segmentation using growing immune field and clone threshold. Infrared Phys. Technol. 2018, 88, 184–193.
46. Zhou, Z.; Zhang, B.; Yu, X. Immune coordination deep network for hand heat trace extraction. Infrared Phys. Technol. 2022, 127, 104400.
47. Sony, S. Towards Multiclass Damage Detection and Localization Using Limited Vibration Measurements. Ph.D. Thesis, The University of Western Ontario, London, ON, Canada, 2021.
Figure 1. LIDAR perception. (a) Under ideal conditions, LIDAR pulses do not experience scattering or intensity attenuation upon reaching the target. (b) When scattering media are present in the air, the LIDAR pulse is scattered, resulting in an attenuation of the target’s reflection intensity; the sensor may even incorrectly identify scattering particles as targets.
Figure 2. In the presence of scattering particles in the air, a fraction of the emitted pulse will be scattered, with some of the scattered pulses diverging away from the detector, some converging towards the detector, and a fraction of the emitted pulse penetrating through the scattering particles.
Figure 3. The RGB image above represents the scene from the perspective of the camera. The lower left corner shows the original point cloud for this scene, while the lower right corner displays the simulated snowfall point cloud for the same scene.
Figure 4. LIDAR sensor schematic with dual-beam configuration.
Figure 5. Comparison of LIDAR simulations under different dusty weather conditions. For all dusty weather conditions, the half-power pulse width and beam divergence angle are set to 10 ns and 0.003 radians, respectively. All point cloud colors are encoded according to the Jet colormap rule, where blue represents high values and red represents low values. In the point cloud under clear weather conditions, we provide 3D bounding boxes of real objects as a reference.
Figure 6. Comparison of different half-power pulse widths based on our blowing sand simulation with α set to 0.01. The beam divergence angle is kept constant at 0.003 radians. All point cloud colors are still encoded according to the Jet colormap rules, and we provide 3D bounding boxes of real objects under clear weather conditions.
Figure 7. Comparison of different beam divergence angles based on our blowing sand simulation with α set to 0.01. The half-power pulse width is kept constant at 10 ns. All point cloud colors are still encoded according to the Jet colormap rules, and we provide 3D bounding boxes of real objects under clear weather conditions.
Figure 8. Comparison of different simulation methods. LISA [21] adheres to all its default parameters, while Hahner’s simulation [19] substitutes particle distribution with our blowing sand distribution, maintaining the rest according to the default parameters. Our results were based on blowing sand simulation with α , half-power pulse width, and beam divergence angle set to 0.01, 10 ns and 0.003 radians, respectively. All point cloud colors are still encoded according to the Jet colormap rules, and we provide 3D bounding boxes of real objects under clear weather conditions.
Table 1. Algorithm comparison. “Implicit” means the (attenuation or backscattering) effects of the particles are represented collectively by a parameter associated with a beam segment (extinction or backscattering coefficients), whereas “Explicit” means the effects are considered particle by particle.

Properties     | LISA [21] | [20]     | [19]     | Ours
Pulse width    | 0         | finite   | finite   | finite
Attenuation    | implicit  | implicit | explicit | implicit
Backscattering | explicit  | implicit | explicit | explicit
Table 2. Performance analysis of our simulation algorithm under different dusty weather conditions.

Weather Conditions | Time (s) | Removed Points | Scattered Points | Attenuated Points
floating dust      | 6.8      | 1845           | 145              | 1412
blowing sand       | 8.2      | 1920           | 332              | 2012
dust storm         | 12.4     | 2063           | 724              | 2562
Table 3. Comparison of our simulation method with the clear-sky baseline for 3D object detection on the test set. We report moderate-difficulty 3D average precision (AP) for the three classes.

Method         | Simulation | Car 3D AP | Pedestrian 3D AP | Cyclist 3D AP | mAP over Classes
PV-RCNN [36]   | None       | 51.89     | 22.30            | 24.47         | 32.89
PV-RCNN [36]   | Ours-dust  | 63.21     | 34.00            | 32.14         | 43.12
PointRCNN [37] | None       | 49.06     | 16.29            | 22.78         | 29.38
PointRCNN [37] | Ours-dust  | 50.73     | 22.42            | 20.13         | 31.10