LiDAR Point Cloud Augmentation for Adverse Conditions Using Conditional Generative Model
Abstract
1. Introduction
- Segmentation maps of adverse effects are produced via a designated 3D clustering algorithm, serving as conditional guides for generative models.
- A novel early data fusion approach was developed to integrate raw and segmentation data, proving highly effective in directing where adverse effects are created (a minimal fusion sketch follows this list).
- High robustness in paired adverse-data generation is demonstrated by producing quasi-natural adverse effects across large domain gaps in traffic layouts and environments. Experiments validate a notable improvement in detection performance with the proposed data augmentation scheme.
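As a rough sketch of the early-fusion idea in the second bullet, the snippet below concatenates a projected LiDAR range image with its adverse-effect segmentation map along the channel axis. The function name, array shapes, and binary-guide step are assumptions for illustration, not the paper's exact fusion operator.

```python
import numpy as np

def early_fusion(range_image, seg_map):
    """Fuse a projected LiDAR range image with its adverse-effect
    segmentation map by channel concatenation (hypothetical sketch).

    range_image: (H, W) float array of per-pixel LiDAR depths.
    seg_map:     (H, W) integer array of adverse-effect labels
                 (nonzero marks pixels belonging to adverse clusters).
    Returns an (H, W, 2) array suitable as conditional-generator input."""
    guide = (seg_map != 0).astype(np.float32)  # binary conditional guide
    depth = range_image.astype(np.float32)
    return np.stack([depth, guide], axis=-1)

# Usage with dummy data standing in for a real projected scan:
rng = np.random.default_rng(0)
depth = rng.uniform(0.0, 120.0, size=(64, 1024))      # range image
labels = (rng.random((64, 1024)) > 0.95).astype(int)  # sparse "snow" mask
fused = early_fusion(depth, labels)                   # shape (64, 1024, 2)
```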
2. Related Works
2.1. Adverse Effect Synthesis
2.2. Existing Adverse Effects Enrichment
3. Methods
3.1. Cluster Classification and Segmentation Map
- Noise number: Points without any neighbor points within a designated range (solitary points) are considered noise points, mostly snowflakes. A decrease in the noise number is one of the most direct indicators of a low adverse-effect presence. The count $N$ of the points that have fewer than $k$ neighbors within a given radius $r$ is

  $N = \left| \left\{ p \in P : \left| \left\{ q \in P \setminus \{p\} : \lVert p - q \rVert \le r \right\} \right| < k \right\} \right|$
- Cluster number: A main output of the algorithm, representing groups of data points that are closely related based on their reachability. The cluster number can be simply denoted as C [30].
- Reachability distance: The smallest distance required to connect point A to point B via a path of points that satisfy the density criteria. Normally, the average reachability distance rises along with larger cluster numbers. For points $A$ and $B$, the reachability distance can be defined as

  $d_{\text{reach}}(A, B) = \max\bigl(d_{\text{core}}(B),\, d(A, B)\bigr),$

  where $d_{\text{core}}(B)$ is the core distance of $B$ and $d(A, B)$ is the Euclidean distance between the two points [27].
- Inter-cluster distances (ICDs): Identify the centroid, or average point, of each cluster, then compute the distance between every possible pair of centroids. A decrease in the average of these distances indicates more clusters and a more concentrated cluster distribution, which in this study is interpreted as a sign of strong adverse effects. For clusters $i$ and $j$ with centroids $c_i$ and $c_j$:

  $\mathrm{ICD}(i, j) = \lVert c_i - c_j \rVert,$

  averaged over all $C(C-1)/2$ centroid pairs.
- Size of clusters: Essentially the average number of points each cluster holds. Under conditions dominated by scattered snow, the snow noise points tend to form numerous small-scale clusters, which reduces the average cluster size [30]. For cluster $i$ with $n_i$ points, the average size $S$ over $C$ clusters could be

  $S = \frac{1}{C} \sum_{i=1}^{C} n_i$
- Silhouette score: Measures the cohesion within clusters and the separation between clusters. A silhouette score close to 1 indicates good clustering quality, while a score close to −1 indicates poor clustering. A lower silhouette score is commonly observed in adverse, snowy conditions due to greater overlap between clusters. For a point $x$ in cluster $A$, the silhouette score is calculated as

  $s(x) = \frac{b(x) - a(x)}{\max\{a(x),\, b(x)\}},$

  where $a(x)$ is the mean distance from $x$ to the other points of $A$ and $b(x)$ is the smallest mean distance from $x$ to the points of any other cluster.
- Davies–Bouldin index (DBI): Measures the ratio of within-cluster scatter to between-cluster separation and assesses the quality of the overall cluster separation. A lower DBI indicates better clustering, with zero being the ideal value. Adverse conditions with many noise points or swirl clusters exhibit higher DBI values. For $C$ clusters with centroids $c_i$ and within-cluster scatter $\sigma_i$ (the average distance of the points in cluster $i$ to $c_i$), the index is calculated as

  $\mathrm{DBI} = \frac{1}{C} \sum_{i=1}^{C} \max_{j \ne i} \frac{\sigma_i + \sigma_j}{d(c_i, c_j)}$

  (a computational sketch of all seven indicators follows this list).
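As an illustration only, the following minimal Python sketch computes the seven indicators above for a single LiDAR frame using scikit-learn's OPTICS implementation; the function name, the `min_samples` default, and the choice to drop noise points before scoring are assumptions, not the authors' released code.

```python
import numpy as np
from sklearn.cluster import OPTICS
from sklearn.metrics import silhouette_score, davies_bouldin_score

def clustering_indicators(points, min_samples=5):
    """Compute the seven adverse-effect indicators for one LiDAR frame.
    `points` is an (N, 3) array of x, y, z coordinates."""
    opt = OPTICS(min_samples=min_samples).fit(points)
    labels = opt.labels_                          # -1 marks noise points

    noise_number = int(np.sum(labels == -1))      # solitary points (snow)
    cluster_ids = np.unique(labels[labels >= 0])
    C = len(cluster_ids)                          # cluster number

    # Mean finite reachability distance (unreachable points are inf).
    reach = opt.reachability_[np.isfinite(opt.reachability_)]
    mean_reach = float(reach.mean()) if reach.size else 0.0

    # Centroids, mean pairwise inter-cluster distance, mean cluster size.
    centroids = np.array([points[labels == c].mean(axis=0) for c in cluster_ids])
    sizes = np.array([np.sum(labels == c) for c in cluster_ids])
    if C >= 2:
        d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
        icd = float(d[np.triu_indices(C, k=1)].mean())
    else:
        icd = 0.0

    # Silhouette and Davies-Bouldin over clustered (non-noise) points only.
    mask = labels >= 0
    sil = silhouette_score(points[mask], labels[mask]) if C >= 2 else float("nan")
    dbi = davies_bouldin_score(points[mask], labels[mask]) if C >= 2 else float("nan")

    return {"noise": noise_number, "clusters": C, "reachability": mean_reach,
            "icd": icd, "size": float(sizes.mean()) if C else 0.0,
            "silhouette": float(sil), "dbi": float(dbi)}
```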
Algorithm 1. Point cloud segmentation through a 3D clustering algorithm based on OPTICS.
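A minimal sketch of OPTICS-based per-point segmentation, assuming scikit-learn's `OPTICS` with its noise label −1 standing in for scattered snow; the function name and parameter values are illustrative, not the paper's exact Algorithm 1.

```python
import numpy as np
from sklearn.cluster import OPTICS

def segment_point_cloud(points, min_samples=5, xi=0.05):
    """Label every LiDAR point with OPTICS: -1 marks density-isolated
    noise (largely snowflakes in adverse frames), labels >= 0 mark
    clusters. Rasterizing these labels alongside the raw scan yields a
    segmentation map usable as a conditional guide."""
    return OPTICS(min_samples=min_samples, xi=xi).fit_predict(points)

# Usage on a dummy frame standing in for a real scan:
frame = np.random.default_rng(1).uniform(-50, 50, size=(2000, 3))
labels = segment_point_cloud(frame)
snow_candidates = frame[labels == -1]   # solitary / noise points
structure = frame[labels >= 0]          # clustered environment points
```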
3.2. Conditional Guide Data Fusion
3.3. Architecture and Loss Functions
3.3.1. Custom Loss
3.3.2. Identity Loss
3.3.3. Overall Loss Function
3.4. Violations and Solutions in LiDAR Data Augmentation
4. Experiments and Results
4.1. Reproduction of Real Adverse Conditions
4.1.1. Qualitative Results
4.1.2. Quantitative Results and Ablation Study
4.1.3. Detection Rate Improvement
4.2. Synthetic Adverse Conditions
4.2.1. Qualitative Results
4.2.2. 3D Clustering Results for Nagoya Snow Synthesis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| LiDAR | Light detection and ranging |
| GAN | Generative Adversarial Networks |
| CADC | Canadian Adverse Driving Conditions |
| OPTICS | Ordering points to identify the clustering structure |
| ICD | Inter-cluster distance |
| SSIM | Structural Similarity Index |
| DROR | Dynamic Radius Outlier Removal |
| DSOR | Dynamic Statistical Outlier Removal |
| PGM | Polar Grid Map |
| DBI | Davies–Bouldin index |
| BEV | Bird's Eye View |
| CUT | Contrastive Unpaired Translation |
| AP | Average precision |
References
- Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
- Jokela, M.; Kutila, M.; Pyykönen, P. Testing and validation of automotive point-cloud sensors in adverse weather conditions. Appl. Sci. 2019, 9, 2341. [Google Scholar] [CrossRef]
- Charron, N.; Phillips, S.; Waslander, S.L. De-noising of Lidar point clouds corrupted by snowfall. In Proceedings of the Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 9–11 May 2018; pp. 254–261. [Google Scholar]
- Le, M.H.; Cheng, C.H.; Liu, D.G. An Efficient Adaptive Noise Removal Filter on Range Images for LiDAR Point Clouds. Electronics 2023, 12, 2150. [Google Scholar] [CrossRef]
- Bergius, J. LiDAR Point Cloud De-Noising for Adverse Weather. Ph.D. Thesis, Halmstad University, Halmstad, Sweden, 2022. [Google Scholar]
- Zhang, Y.; Ding, M.; Yang, H.; Niu, Y.; Feng, Y.; Ohtani, K.; Takeda, K. L-DIG: A GAN-Based Method for LiDAR Point Cloud Processing under Snow Driving Conditions. Sensors 2023, 23, 8660. [Google Scholar] [CrossRef] [PubMed]
- Hahner, M.; Sakaridis, C.; Bijelic, M.; Heide, F.; Yu, F.; Dai, D.; Van Gool, L. Lidar snowfall simulation for robust 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16364–16374. [Google Scholar]
- Heinzler, R.; Piewak, F.; Schindler, P.; Stork, W. CNN-based lidar point cloud de-noising in adverse weather. IEEE Robot. Autom. Lett. 2020, 5, 2514–2521. [Google Scholar] [CrossRef]
- Rasshofer, R.H.; Spies, M.; Spies, H. Influences of weather phenomena on automotive laser radar systems. Adv. Radio Sci. 2011, 9, 49–60. [Google Scholar] [CrossRef]
- Wallace, A.M.; Halimi, A.; Buller, G.S. Full waveform lidar for adverse weather conditions. IEEE Trans. Veh. Technol. 2020, 69, 7064–7077. [Google Scholar] [CrossRef]
- Guo, A.; Feng, Y.; Chen, Z. LiRTest: Augmenting LiDAR point clouds for automated testing of autonomous driving systems. In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, Virtual, 18–22 July 2022; pp. 480–492. [Google Scholar]
- Piroli, A.; Dallabetta, V.; Walessa, M.; Meissner, D.; Kopp, J.; Dietmayer, K. Robust 3D Object Detection in Cold Weather Conditions. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 5–9 June 2022; pp. 287–294. [Google Scholar]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Yang, H.; Carballo, A.; Takeda, K. Disentangled Bad Weather Removal GAN for Pedestrian Detection. In Proceedings of the 2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring), Helsinki, Finland, 19–22 June 2022; pp. 1–6. [Google Scholar]
- Jaw, D.W.; Huang, S.C.; Kuo, S.Y. DesnowGAN: An efficient single image snow removal framework using cross-resolution lateral connection and GANs. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 1342–1350. [Google Scholar] [CrossRef]
- Sallab, A.E.; Sobh, I.; Zahran, M.; Essam, N. LiDAR Sensor modeling and Data augmentation with GANs for Autonomous driving. arXiv 2019, arXiv:1905.07290. [Google Scholar]
- Sobh, I.; Amin, L.; Abdelkarim, S.; Elmadawy, K.; Saeed, M.; Abdeltawab, O.; Gamal, M.; El Sallab, A. End-to-end multi-modal sensors fusion system for urban automated driving. In Proceedings of the NIPS Workshop on Machine Learning for Intelligent Transportation Systems, Montreal, QC, Canada, 3–8 December 2018. [Google Scholar]
- Lee, J.; Shiotsuka, D.; Nishimori, T.; Nakao, K.; Kamijo, S. GAN-Based LiDAR Translation between Sunny and Adverse Weather for Autonomous Driving and Driving Simulation. Sensors 2022, 22, 5287. [Google Scholar] [CrossRef] [PubMed]
- Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The multiple 3D LiDAR dataset. In Proceedings of the Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1094–1101. [Google Scholar]
- Von Bernuth, A.; Volk, G.; Bringmann, O. Simulating photo-realistic snow and fog on existing images for enhanced CNN training and evaluation. In Proceedings of the Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 41–46. [Google Scholar]
- Zhang, K.; Li, R.; Yu, Y.; Luo, W.; Li, C. Deep Dense Multi-Scale Network for Snow Removal Using Semantic and Depth Priors. IEEE Trans. Image Process. 2021, 30, 7419–7431. [Google Scholar] [CrossRef] [PubMed]
- Uřičář, M.; Sistu, G.; Rashed, H.; Vobecky, A.; Kumar, V.R.; Krizek, P.; Burger, F.; Yogamani, S. Let’s Get Dirty: GAN Based Data Augmentation for Camera Lens Soiling Detection in Autonomous Driving. In Proceedings of the Winter Conference on Applications of Computer Vision (WACV), Virtual, 5–9 January 2021; pp. 766–775. [Google Scholar]
- Chen, Z.; Wang, Y.; Yang, Y.; Liu, D. PSD: Principled Synthetic-to-Real Dehazing Guided by Physical Priors. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 7180–7189. [Google Scholar]
- Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11682–11692. [Google Scholar]
- Kurup, A.; Bos, J. DSOR: A Scalable Statistical Filter for Removing Falling Snow from LiDAR Point Clouds in Severe Winter Weather. arXiv 2021, arXiv:2109.07078. [Google Scholar]
- Pitropov, M.; Garcia, D.E.; Rebello, J.; Smart, M.; Wang, C.; Czarnecki, K.; Waslander, S. Canadian adverse driving conditions dataset. Int. J. Robot. Res. 2021, 40, 681–690. [Google Scholar] [CrossRef]
- Ankerst, M.; Breunig, M.M.; Kriegel, H.P.; Sander, J. OPTICS: Ordering points to identify the clustering structure. ACM SIGMOD Rec. 1999, 28, 49–60. [Google Scholar] [CrossRef]
- El Yabroudi, M.; Awedat, K.; Chabaan, R.C.; Abudayyeh, O.; Abdel-Qader, I. Adaptive DBSCAN LiDAR Point Cloud Clustering For Autonomous Driving Applications. In Proceedings of the 2022 IEEE International Conference on Electro Information Technology (eIT), Mankato, MN, USA, 19–21 May 2022; pp. 221–224. [Google Scholar]
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD Proceedings; AAAI Press: Washington, DC, USA, 1996; Volume 96, pp. 226–231. [Google Scholar]
- Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. (TODS) 2017, 42, 19. [Google Scholar] [CrossRef]
- Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Comput. Surv. (CSUR) 1999, 31, 264–323. [Google Scholar] [CrossRef]
- Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
- Davies, D.L.; Bouldin, D.W. A cluster separation measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, PAMI-1, 224–227. [Google Scholar] [CrossRef]
- Zhang, Y.; Ding, M.; Yang, H.; Niu, Y.; Feng, Y.; Ge, M.; Carballo, A.; Takeda, K. LiDAR Point Cloud Translation Between Snow and Clear Conditions Using Depth Images and GANs. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–7. [Google Scholar]
- Mertan, A.; Duff, D.J.; Unal, G. Single image depth estimation: An overview. Digit. Signal Process. 2022, 123, 103441. [Google Scholar] [CrossRef]
- Eigen, D.; Puhrsch, C.; Fergus, R. Depth map prediction from a single image using a multi-scale deep network. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; Volume 27. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Park, T.; Efros, A.A.; Zhang, R.; Zhu, J.Y. Contrastive Learning for Unpaired Image-to-Image Translation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020. [Google Scholar]
- Betz, T.; Karle, P.; Werner, F.; Betz, J. An analysis of software latency for a high-speed autonomous race car—A case study in the indy autonomous challenge. SAE Int. J. Connect. Autom. Veh. 2023, 6, 283–296. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Simonelli, A.; Bulo, S.R.; Porzi, L.; López-Antequera, M.; Kontschieder, P. Disentangling monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1991–1999. [Google Scholar]
- Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; Li, H. PV-RCNN: Point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10529–10538. [Google Scholar]
- Yan, Y.; Mao, Y.; Li, B. SECOND: Sparsely embedded convolutional detection. Sensors 2018, 18, 3337. [Google Scholar] [CrossRef] [PubMed]
3D average precision (AP) for each detector under different augmentation methods:

| Augmentation Method | PV-RCNN [42] | SECOND [43] |
|---|---|---|
| None | 43.11 | 37.08 |
| DROR | 38.69 | 35.31 |
| Ours | 45.57 | 38.23 |
| Items | Nagoya | Synthesized Snow |
|---|---|---|
| Noise number | 1204.67 | 2631.46 |
| Cluster number | 480.25 | 1073.52 |
| Reachability distance | 0.2470 | 0.3952 |
| Inter-cluster distance | 59.30 | 49.45 |
| Size of clusters | 28.2638 | 13.3046 |
| Davies–Bouldin index | 2.3653 | 4.4149 |
| Silhouette score | −0.2170 | −0.2927 |