Tackling Heterogeneous Light Detection and Ranging-Camera Alignment Challenges in Dynamic Environments: A Review for Object Detection
Abstract
1. Introduction
1.1. Motivation
1.2. Existing Surveys
| Year | Surveys | Representation | Alignment | Datasets | Challenges | Main Topic |
|---|---|---|---|---|---|---|
| 2019 | [18] | ⊙ | × | √ | × | Multimodal detection methods. |
| 2020 | [19] | ⊙ | × | √ | × | LiDAR-based deep networks. |
| 2021 | [20] | √ | × | ⊙ | √ | LiDAR-based detection. |
| 2021 | [21] | × | × | √ | × | Electric vehicle detection. |
| 2021 | [22] | ⊙ | × | √ | √ | LiDAR-based detection. |
| 2022 | [23] | ⊙ | × | √ | √ | Case study detection methods. |
| 2022 | [24] | × | × | √ | × | LiDAR-based detection. |
| 2023 | [7] | √ | ⊙ | √ | √ | Multimodal detection methods. |
| 2023 | [25] | √ | ⊙ | √ | √ | Image-based detection. |
| 2023 | [6] | × | × | √ | √ | Multimodal detection methods. |
| 2024 | [26] | ⊙ | ⊙ | √ | √ | Multimodal detection methods. |
| 2024 | [27] | ⊙ | ⊙ | √ | √ | Multimodal detection methods. |
|  | This study | √ | √ | √ | √ | Representation and alignment. |
1.3. Contributions
- What are 3D object detection and heterogeneous alignment, and which autonomous driving datasets support them?
- How many studies on data representation and heterogeneous alignment methods for 3D object detection have been conducted between 2019 and 2024?
- How do we categorize 3D object detection data representation and heterogeneous alignment methods?
- What are the challenges, limitations, and recommendations for future research on 3D object detection?
- Provide an analytical comparison of 3D object detection and heterogeneous alignment methods, focusing on research articles published between 2019 and 2024.
- Summarize the latest research trends and propose a classification scheme for heterogeneous data alignment in 3D object detection.
- Highlight critical challenges in the heterogeneous alignment of 3D object detection and potential avenues for future exploration in this domain.
1.4. Organization
2. Background
2.1. Object Detection
2.1.1. Sensory
2.1.2. Camera-Based
2.1.3. LiDAR-Based
2.1.4. Fusion-Based
2.1.5. Discussion and Analysis
2.2. Datasets
2.2.1. Definition and Comparative Analysis
2.2.2. Discussion and Analysis
2.3. Heterogeneous Alignment Discussion and Analysis
3. Protocol and Strategies for Studies
3.1. Literature Search and Screening Strategies
3.2. Classification and Analytical Framework
4. Heterogeneous Data Representation Approaches
4.1. Categorization Summary
4.2. Discussion and Analysis
5. Heterogeneous Alignment Techniques
5.1. Geometric Alignment
5.2. Feature Alignment
5.3. Learning Alignment
5.4. Discussion and Analysis
6. Challenges and Future Directions
6.1. Data Representation
6.2. Datasets
6.3. Multimodal Alignment
6.4. Data Enhancement
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Abd Rahman, A.H.; Sulaiman, R.; Sani, N.S.; Adam, A.; Amini, R. Evaluation of Peer Robot Communications Using Cryptoros. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 658–663. [Google Scholar] [CrossRef]
- Abd Rahman, A.H.; Ariffin, K.A.Z.; Sani, N.S.; Zamzuri, H. Pedestrian Detection Using Triple Laser Range Finders. Int. J. Electr. Comput. Eng. (IJECE) 2017, 7, 3037–3045. [Google Scholar] [CrossRef]
- Shahrim, K.A.; Abd Rahman, A.H.; Goudarzi, S. Hazardous Human Activity Recognition in Hospital Environment Using Deep Learning. IAENG Int. J. Appl. Math. 2022, 52, 748–753. [Google Scholar]
- Wang, L.; Zhang, X.; Song, Z.; Bi, J.; Zhang, G.; Wei, H.; Tang, L.; Yang, L.; Li, J.; Jia, C.; et al. Multimodal 3d Object Detection in Autonomous Driving: A Survey and Taxonomy. IEEE Trans. Intell. Veh. 2023, 8, 3781–3798. [Google Scholar] [CrossRef]
- Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
- Wang, Y.; Mao, Q.; Zhu, H.; Deng, J.; Zhang, Y.; Ji, J.; Li, H.; Zhang, Y. Multimodal 3d Object Detection in Autonomous Driving: A Survey. Int. J. Comput. Vis. 2023, 131, 2122–2152. [Google Scholar] [CrossRef]
- Mao, J.; Shi, S.; Wang, X.; Li, H. 3d Object Detection for Autonomous Driving: A Comprehensive Survey. Int. J. Comput. Vis. 2023, 131, 1909–1963. [Google Scholar] [CrossRef]
- Wang, Z.; Wu, Y.; Niu, Q. Multi-Sensor Fusion in Automated Driving: A Survey. IEEE Access 2019, 8, 2847–2868. [Google Scholar] [CrossRef]
- Feng, D.; Haase-Schütz, C.; Rosenbaum, L.; Hertlein, H.; Glaeser, C.; Timm, F.; Wiesbeck, W.; Dietmayer, K. Deep Multimodal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1341–1360. [Google Scholar] [CrossRef]
- Cui, Y.; Chen, R.; Chu, W.; Chen, L.; Tian, D.; Li, Y.; Cao, D. Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review. IEEE Trans. Intell. Transp. Syst. 2021, 23, 722–739. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-View 3d Object Detection Network for Autonomous Driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Advances in Neural Information Processing Systems 30; Neural Information Processing Systems Foundation, Inc. (NeurIPS): San Diego, CA, USA, 2017. [Google Scholar]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep Learning on Point Sets for 3d Classification and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Pang, S.; Morris, D.; Radha, H. Clocs: Camera-Lidar Object Candidates Fusion for 3d Object Detection. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020. [Google Scholar]
- Yan, Y.; Mao, Y.; Li, B. Second: Sparsely Embedded Convolutional Detection. Sensors 2018, 18, 3337. [Google Scholar] [CrossRef] [PubMed]
- He, C.; Zeng, H.; Huang, J.; Hua, X.-S.; Zhang, L. Structure Aware Single-Stage 3d Object Detection from Point Cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Arnold, E.; Al-Jarrah, O.Y.; Dianati, M.; Fallah, S.; Oxtoby, D.; Mouzakitis, A. A Survey on 3d Object Detection Methods for Autonomous Driving Applications. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3782–3795. [Google Scholar] [CrossRef]
- Wu, Y.; Wang, Y.; Zhang, S.; Ogai, H. Deep 3d Object Detection Networks Using Lidar Data: A Review. IEEE Sens. J. 2020, 21, 1152–1171. [Google Scholar] [CrossRef]
- Fernandes, D.; Silva, A.; Névoa, R.; Simões, C.; Gonzalez, D.; Guevara, M.; Novais, P.; Monteiro, J.; Melo-Pinto, P. Point-Cloud Based 3d Object Detection and Classification Methods for Self-Driving Applications: A Survey and Taxonomy. Inf. Fusion 2021, 68, 161–191. [Google Scholar] [CrossRef]
- Dai, D.; Chen, Z.; Bao, P.; Wang, J. A Review of 3d Object Detection for Autonomous Driving of Electric Vehicles. World Electr. Veh. J. 2021, 12, 139. [Google Scholar] [CrossRef]
- Zamanakos, G.; Tsochatzidis, L.; Amanatiadis, A.; Pratikakis, I. A Comprehensive Survey of Lidar-Based 3d Object Detection Methods with Deep Learning for Autonomous Driving. Comput. Graph. 2021, 99, 153–181. [Google Scholar] [CrossRef]
- Qian, R.; Lai, X.; Li, X. 3d Object Detection for Autonomous Driving: A Survey. Pattern Recognit. 2022, 130, 108796. [Google Scholar] [CrossRef]
- Hasan, M.; Hanawa, J.; Goto, R.; Suzuki, R.; Fukuda, H.; Kuno, Y.; Kobayashi, Y. Lidar-Based Detection, Tracking, and Property Estimation: A Contemporary Review. Neurocomputing 2022, 506, 393–405. [Google Scholar] [CrossRef]
- Ma, X.; Ouyang, W.; Simonelli, A.; Ricci, E. 3d Object Detection from Images for Autonomous Driving: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 46, 3537–3556. [Google Scholar] [CrossRef]
- Pravallika, A.; Hashmi, M.F.; Gupta, A. Deep Learning Frontiers in 3d Object Detection: A Comprehensive Review for Autonomous Driving. IEEE Access 2024, 12, 173936–173980. [Google Scholar] [CrossRef]
- Song, Z.; Liu, L.; Jia, F.; Luo, Y.; Jia, C.; Zhang, G.; Yang, L.; Wang, L. Robustness-Aware 3d Object Detection in Autonomous Driving: A Review and Outlook. IEEE Trans. Intell. Transp. Syst. 2024, 25, 15407–15436. [Google Scholar] [CrossRef]
- Liu, W.; Zhang, T.; Ma, Y.; Wei, L. 3d Street Object Detection from Monocular Images Using Deep Learning and Depth Information. J. Adv. Comput. Intell. Intell. Inform. 2023, 27, 198–206. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-Cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-Cnn: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef] [PubMed]
- Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-Level Control through Deep Reinforcement Learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef]
- Carr, P.; Sheikh, Y.; Matthews, I. Monocular Object Detection Using 3d Geometric Primitives. In Proceedings of the Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Proceedings, Part I. [Google Scholar]
- Andriluka, M.; Roth, S.; Schiele, B. Monocular 3d Pose Estimation and Tracking by Detection. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010. [Google Scholar]
- Chen, L.; Zou, Q.; Pan, Z.; Lai, D.; Zhu, L.; Hou, Z.; Wang, J.; Cao, D. Surrounding Vehicle Detection Using an Fpga Panoramic Camera and Deep Cnns. IEEE Trans. Intell. Transp. Syst. 2019, 21, 5110–5122. [Google Scholar] [CrossRef]
- Lee, C.-H.; Lim, Y.-C.; Kwon, S.; Lee, J.-H. Stereo Vision–Based Vehicle Detection Using a Road Feature and Disparity Histogram. Opt. Eng. 2011, 50, 027004-04-23. [Google Scholar] [CrossRef]
- Kemsaram, N.; Das, A.; Dubbelman, G. A Stereo Perception Framework for Autonomous Vehicles. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020. [Google Scholar]
- Kim, K.; Woo, W. A Multi-View Camera Tracking for Modeling of Indoor Environment. In Proceedings of the Pacific-Rim Conference on Multimedia, Tokyo, Japan, 30 November–3 December 2004. [Google Scholar]
- Park, J.Y.; Chu, C.W.; Kim, H.W.; Lim, S.J.; Park, J.C.; Koo, B.K. Multi-View Camera Color Calibration Method Using Color Checker Chart. U.S. Patent 12/334,095, 18 June 2009. [Google Scholar]
- Zhou, Y.; Wan, G.; Hou, S.; Yu, L.; Wang, G.; Rui, X.; Song, S. Da4ad: End-to-End Deep Attention-Based Visual Localization for Autonomous Driving. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXVIII. [Google Scholar]
- Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and Sensing for Autonomous Vehicles under Adverse Weather Conditions: A Survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision Meets Robotics: The Kitti Dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
- Huang, X.; Cheng, X.; Geng, Q.; Cao, B.; Zhou, D.; Wang, P.; Lin, Y.; Yang, R. The Apolloscape Dataset for Autonomous Driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Patil, A.; Malla, S.; Gang, H.; Chen, Y.-T. The H3d Dataset for Full-Surround 3d Multi-Object Detection and Tracking in Crowded Urban Scenes. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S.L. Joint 3d Proposal Generation and Object Detection from View Aggregation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar]
- Meyer, G.P.; Charland, J.; Hegde, D.; Laddha, A.; Vallespi-Gonzalez, C. Sensor Fusion for Joint 3d Object Detection and Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
- Geiger, A.; Lenz, P.; Urtasun, R. Are We Ready for Autonomous Driving? The Kitti Vision Benchmark Suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012. [Google Scholar]
- Choi, Y.; Kim, N.; Hwang, S.; Park, K.; Yoon, J.S.; An, K.; Kweon, I.S. Kaist Multi-Spectral Day/Night Data Set for Autonomous and Assisted Driving. IEEE Trans. Intell. Transp. Syst. 2018, 19, 934–948. [Google Scholar] [CrossRef]
- Li, G.; Jiao, Y.; Knoop, V.L.; Calvert, S.C.; Van Lint, J.W.C. Large Car-Following Data Based on Lyft Level-5 Open Dataset: Following Autonomous Vehicles Vs. Human-Driven Vehicles. In Proceedings of the 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, 24–28 September 2023. [Google Scholar]
- Chang, M.-F.; Lambert, J.; Sangkloy, P.; Singh, J.; Bak, S.; Hartnett, A.; Wang, D.; Carr, P.; Lucey, S.; Ramanan, D.; et al. Argoverse: 3d Tracking and Forecasting with Rich Maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. Nuscenes: A Multimodal Dataset for Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B. Scalability in Perception for Autonomous Driving: Waymo Open Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Pham, Q.-H.; Sevestre, P.; Pahwa, R.S.; Zhan, H.; Pang, C.H.; Chen, Y.; Mustafa, A.; Chandrasekhar, V.; Lin, J. A* 3d Dataset: Towards Autonomous Driving in Challenging Environments. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
- Xiao, P.; Shao, Z.; Hao, S.; Zhang, Z.; Chai, X.; Jiao, J.; Li, Z.; Wu, J.; Sun, K.; Jiang, K. Pandaset: Advanced Sensor Suite Dataset for Autonomous Driving. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021. [Google Scholar]
- Wang, Z.; Ding, S.; Li, Y.; Fenn, J.; Roychowdhury, S.; Wallin, A.; Martin, L.; Ryvola, S.; Sapiro, G.; Qiu, Q. Cirrus: A Long-Range Bi-Pattern Lidar Dataset. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021. [Google Scholar]
- Mao, J.; Niu, M.; Jiang, C.; Liang, H.; Chen, J.; Liang, X.; Li, Y.; Ye, C.; Zhang, W.; Li, Z. One Million Scenes for Autonomous Driving: Once Dataset. arXiv 2021, arXiv:2106.11037. [Google Scholar]
- Ma, J.; Wang, X.; Duan, H.; Wang, R. 3d Object Detection Based on the Fusion of Projected Point Cloud and Image Features. In Proceedings of the 2022 6th International Conference on Electronic Information Technology and Computer Engineering, Xiamen, China, 21–23 October 2022. [Google Scholar]
- Xu, X.; Zhang, L.; Yang, J.; Cao, C.; Tan, Z.; Luo, M. Object Detection Based on Fusion of Sparse Point Cloud and Image Information. IEEE Trans. Instrum. Meas. 2021, 70, 1–12. [Google Scholar] [CrossRef]
- Ouyang, Z.; Cui, J.; Dong, X.; Li, Y.; Niu, J. Saccadefork: A Lightweight Multi-Sensor Fusion-Based Target Detector. Inf. Fusion 2022, 77, 172–183. [Google Scholar] [CrossRef]
- Hong, D.-S.; Chen, H.-H.; Hsiao, P.-Y.; Fu, L.-C.; Siao, S.-M. Crossfusion Net: Deep 3d Object Detection Based on Rgb Images and Point Clouds in Autonomous Driving. Image Vis. Comput. 2020, 100, 103955. [Google Scholar] [CrossRef]
- Rjoub, G.; Wahab, O.A.; Bentahar, J.; Bataineh, A.S. Improving Autonomous Vehicles Safety in Snow Weather Using Federated Yolo Cnn Learning. In Proceedings of the International Conference on Mobile Web and Intelligent Information Systems, Virtual Event, 23–25 August 2021. [Google Scholar]
- Liu, L.; He, J.; Ren, K.; Xiao, Z.; Hou, Y. A Lidar–Camera Fusion 3d Object Detection Algorithm. Information 2022, 13, 169. [Google Scholar] [CrossRef]
- Liu, Q.; Li, X.; Zhang, X.; Tan, X.; Shi, B. Multi-View Joint Learning and Bev Feature-Fusion Network for 3d Object Detection. Appl. Sci. 2023, 13, 5274. [Google Scholar] [CrossRef]
- Yong, Z.; Xiaoxia, Z.; Nana, D. Research on 3d Object Detection Method Based on Convolutional Attention Mechanism. J. Phys. Conf. Ser. 2021, 1848, 012097. [Google Scholar] [CrossRef]
- Liu, Z.; Cheng, J.; Fan, J.; Lin, S.; Wang, Y.; Zhao, X. Multimodal Fusion Based on Depth Adaptive Mechanism for 3d Object Detection. IEEE Trans. Multimed. 2023. [Google Scholar] [CrossRef]
- Zhu, A.; Xiao, Y.; Liu, C.; Cao, Z. Robust Lidar-Camera Alignment with Modality Adapted Local-to-Global Representation. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 59–73. [Google Scholar] [CrossRef]
- Wu, Y.; Zhu, M.; Liang, J. Psnet: Lidar and Camera Registration Using Parallel Subnetworks. IEEE Access 2022, 10, 70553–70561. [Google Scholar] [CrossRef]
- Wang, K.; Zhou, T.; Zhang, Z.; Chen, T.; Chen, J. Pvf-Dectnet: Multimodal 3d Detection Network Based on Perspective-Voxel Fusion. Eng. Appl. Artif. Intell. 2023, 120, 105951. [Google Scholar] [CrossRef]
- Carranza-García, M.; Galán-Sales, F.J.; Luna-Romera, J.M.; Riquelme, J.C. Object Detection Using Depth Completion and Camera-Lidar Fusion for Autonomous Driving. Integr. Comput.-Aided Eng. 2022, 29, 241–258. [Google Scholar] [CrossRef]
- Chen, Z.; Li, Z.; Zhang, S.; Fang, L.; Jiang, Q.; Zhao, F.; Zhou, B.; Zhao, H. Autoalign: Pixel-Instance Feature Aggregation for Multi-Modal 3d Object Detection. arXiv 2022, arXiv:2201.06493. [Google Scholar]
- Song, Z.; Jia, C.; Yang, L.; Wei, H.; Liu, L. Graphalign++: An Accurate Feature Alignment by Graph Matching for Multimodal 3d Object Detection. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 2619–2632. [Google Scholar] [CrossRef]
- Song, Z.; Wei, H.; Bai, L.; Yang, L.; Jia, C. Graphalign: Enhancing Accurate Feature Alignment by Graph Matching for Multimodal 3d Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 4–6 October 2023. [Google Scholar]
- Song, Z.; Yang, L.; Xu, S.; Liu, L.; Xu, D.; Jia, C.; Jia, F.; Wang, L. Graphbev: Towards Robust Bev Feature Alignment for Multimodal 3d Object Detection. arXiv 2024, arXiv:2403.11848. [Google Scholar]
- Chen, C.; Fragonara, L.Z.; Tsourdos, A. Roifusion: 3d Object Detection from Lidar and Vision. IEEE Access 2021, 9, 51710–51721. [Google Scholar] [CrossRef]
- Rishav, R.; Battrawy, R.; Schuster, R.; Wasenmüller, O.; Stricker, D. Deeplidarflow: A Deep Learning Architecture for Scene Flow Estimation Using Monocular Camera and Sparse Lidar. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020. [Google Scholar]
- Pang, S.; Morris, D.; Radha, H. Fast-Clocs: Fast Camera-Lidar Object Candidates Fusion for 3d Object Detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022. [Google Scholar]
- Melotti, G.; Premebida, C.; Gonçalves, N. Multimodal Deep-Learning for Object Recognition Combining Camera and Lidar Data. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Ponta Delgada, Portugal, 15–17 April 2020. [Google Scholar]
- Dou, J.; Xue, J.; Fang, J. Seg-Voxelnet for 3d Vehicle Detection from Rgb and Lidar Data. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Ku, J.; Pon, A.D.; Walsh, S.; Waslander, S.L. Improving 3d Object Detection for Pedestrians with Virtual Multi-View Synthesis Orientation Estimation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019. [Google Scholar]
- Zhang, H.; Yang, D.; Yurtsever, E.; Redmill, K.A.; Özgüner, Ü. Faraway-Frustum: Dealing with Lidar Sparsity for 3d Object Detection Using Fusion. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021. [Google Scholar]
- Wang, Z.; Jia, K. Frustum Convnet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3d Object Detection. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019. [Google Scholar]
- Paigwar, A.; Sierra-Gonzalez, D.; Erkent, Ö.; Laugier, C. Frustum-Pointpillars: A Multi-Stage Approach for 3d Object Detection Using Rgb Camera and Lidar. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021. [Google Scholar]
- Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum Pointnets for 3d Object Detection from Rgb-D Data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Shin, K.; Kwon, Y.P.; Tomizuka, M. Roarnet: A Robust 3d Object Detection Based on Region Approximation Refinement. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019. [Google Scholar]
- Xu, D.; Anguelov, D.; Jain, A. Pointfusion: Deep Sensor Fusion for 3d Bounding Box Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Zhang, X.; Wang, L.; Zhang, G.; Lan, T.; Zhang, H.; Zhao, L.; Li, J.; Zhu, L.; Liu, H. Ri-Fusion: 3d Object Detection Using Enhanced Point Features with Range-Image Fusion for Autonomous Driving. IEEE Trans. Instrum. Meas. 2022, 72, 1–13. [Google Scholar] [CrossRef]
- Zhang, Z.; Liang, Z.; Zhang, M.; Zhao, X.; Li, H.; Yang, M.; Tan, W.; Pu, S. Rangelvdet: Boosting 3d Object Detection in Lidar with Range Image and Rgb Image. IEEE Sens. J. 2021, 22, 1391–1403. [Google Scholar] [CrossRef]
- Yin, T.; Zhou, X.; Krähenbühl, P. Multimodal Virtual Point 3d Detection. Adv. Neural Inf. Process. Syst. 2021, 34, 16494–16507. [Google Scholar]
- Huang, T.; Liu, Z.; Chen, X.; Bai, X. Epnet: Enhancing Point Features with Image Semantics for 3d Object Detection. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XV. [Google Scholar]
- Zhang, Z.; Shen, Y.; Li, H.; Zhao, X.; Yang, M.; Tan, W.; Pu, S.; Mao, H. Maff-Net: Filter False Positive for 3d Vehicle Detection with Multimodal Adaptive Feature Fusion. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 8–12 October 2022. [Google Scholar]
- Fei, J.; Chen, W.; Heidenreich, P.; Wirges, S.; Stiller, C. Semanticvoxels: Sequential Fusion for 3d Pedestrian Detection Using Lidar Point Cloud and Semantic Segmentation. In Proceedings of the 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Karlsruhe, Germany, 14–16 September 2020. [Google Scholar]
- Wang, G.; Tian, B.; Zhang, Y.; Chen, L.; Cao, D.; Wu, J. Multi-View Adaptive Fusion Network for 3d Object Detection. arXiv 2020, arXiv:2011.00652. [Google Scholar]
- Xie, L.; Xiang, C.; Yu, Z.; Xu, G.; Yang, Z.; Cai, D.; He, X. Pi-Rcnn: An Efficient Multi-Sensor 3d Object Detector with Point-Based Attentive Cont-Conv Fusion Module. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020. [Google Scholar]
- Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. Pointpainting: Sequential Fusion for 3d Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Liang, M.; Yang, B.; Chen, Y.; Hu, R.; Urtasun, R. Multi-Task Multi-Sensor Fusion for 3d Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
- Simon, M.; Amende, K.; Kraus, A.; Honer, J.; Samann, T.; Kaulbersch, H.; Milz, S.; Michael Gross, H. Complexer-Yolo: Real-Time 3d Object Detection and Tracking on Semantic Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
- Sindagi, V.A.; Zhou, Y.; Tuzel, O. Mvx-Net: Multimodal Voxelnet for 3d Object Detection. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Wang, Z.; Zhao, Z.; Jin, Z.; Che, Z.; Tang, J.; Shen, C.; Peng, Y. Multi-Stage Fusion for Multi-Class 3d Lidar Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021. [Google Scholar]
- Chen, X.; Zhang, T.; Wang, Y.; Wang, Y.; Zhao, H. Futr3d: A Unified Sensor Fusion Framework for 3d Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
- Bai, X.; Hu, Z.; Zhu, X.; Huang, Q.; Chen, Y.; Fu, H.; Tai, C.-L. Transfusion: Robust Lidar-Camera Fusion for 3d Object Detection with Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Zhang, Y.; Chen, J.; Huang, D. Cat-Det: Contrastively Augmented Transformer for Multimodal 3d Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Li, Y.; Yu, A.W.; Meng, T.; Caine, B.; Ngiam, J.; Peng, D.; Shen, J.; Lu, Y.; Zhou, D.; Le, Q.V. Deepfusion: Lidar-Camera Deep Fusion for Multimodal 3d Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Chen, Z.; Li, Z.; Zhang, S.; Fang, L.; Jiang, Q.; Zhao, F. Deformable Feature Aggregation for Dynamic Multimodal 3d Object Detection. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022. [Google Scholar]
- Li, Y.; Qi, X.; Chen, Y.; Wang, L.; Li, Z.; Sun, J.; Jia, J. Voxel Field Fusion for 3d Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Wang, C.; Ma, C.; Zhu, M.; Yang, X. Pointaugmenting: Cross-Modal Augmentation for 3d Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
- Wang, Z.; Zhan, W.; Tomizuka, M. Fusing Bird’s Eye View Lidar Point Cloud and Front View Camera Image for 3d Object Detection. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018. [Google Scholar]
- Zhu, M.; Ma, C.; Ji, P.; Yang, X. Cross-Modality 3d Object Detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2021. [Google Scholar]
- Xu, S.; Zhou, D.; Fang, J.; Yin, J.; Bin, Z.; Zhang, L. Fusionpainting: Multimodal Fusion with Adaptive Attention for 3d Object Detection. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021. [Google Scholar]
- Zhu, H.; Deng, J.; Zhang, Y.; Ji, J.; Mao, Q.; Li, H.; Zhang, Y. Vpfnet: Improving 3d Object Detection with Virtual Point Based Lidar and Stereo Data Fusion. IEEE Trans. Multimed. 2022, 25, 5291–5304. [Google Scholar] [CrossRef]
- Chen, Y.; Li, Y.; Zhang, X.; Sun, J.; Jia, J. Focal Sparse Convolutional Networks for 3d Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Yang, H.; Liu, Z.; Wu, X.; Wang, W.; Qian, W.; He, X.; Cai, D. Graph R-Cnn: Towards Accurate 3d Object Detection with Semantic-Decorated Local Graph. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022. [Google Scholar]
- Wu, X.; Peng, L.; Yang, H.; Xie, L.; Huang, C.; Deng, C.; Liu, H.; Cai, D. Sparse Fuse Dense: Towards High Quality 3d Detection with Depth Completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Liang, T.; Xie, H.; Yu, K.; Xia, Z.; Lin, Z.; Wang, Y.; Tang, T.; Wang, B.; Tang, Z. Bevfusion: A Simple and Robust Lidar-Camera Fusion Framework. Adv. Neural Inf. Process. Syst. 2022, 35, 10421–10434. [Google Scholar]
- Liu, Z.; Tang, H.; Amini, A.; Yang, X.; Mao, H.; Rus, D.L.; Han, S. Bevfusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023. [Google Scholar]
- Yoo, J.H.; Kim, Y.; Kim, J.; Choi, J.W. 3d-Cvf: Generating Joint Camera and Lidar Features Using Cross-View Spatial Feature Fusion for 3d Object Detection. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXVII. [Google Scholar]
- Liang, M.; Yang, B.; Wang, S.; Urtasun, R. Deep Continuous Fusion for Multi-Sensor 3d Object Detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Lu, H.; Chen, X.; Zhang, G.; Zhou, Q.; Ma, Y.; Zhao, Y. Scanet: Spatial-Channel Attention Network for 3d Object Detection. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019. [Google Scholar]
- Deng, J.; Czarnecki, K. Mlod: A Multi-View 3d Object Detection Based on Robust Feature Fusion Method. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019. [Google Scholar]
- Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Qiao, Y.; Dai, J. Bevformer: Learning Bird’s-Eye-View Representation from Multi-Camera Images Via Spatiotemporal Transformers. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022. [Google Scholar]
- Wen, Q.; Sun, L.; Yang, F.; Song, X.; Gao, J.; Wang, X.; Xu, H. Time Series Data Augmentation for Deep Learning: A Survey. arXiv 2020, arXiv:2002.12478. [Google Scholar]
- Lu, W.; Zhao, D.; Premebida, C.; Zhang, L.; Zhao, W.; Tian, D. Improving 3d Vulnerable Road User Detection with Point Augmentation. IEEE Trans. Intell. Veh. 2023, 8, 3489–3505. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. A Survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 1–48. [Google Scholar] [CrossRef]
- Zoph, B.; Cubuk, E.D.; Ghiasi, G.; Lin, T.-Y.; Shlens, J.; Le, Q.V. Learning Data Augmentation Strategies for Object Detection. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020. Proceedings, Part XXVII 16 2020. [Google Scholar]
- Xie, Q.; Dai, Z.; Hovy, E.; Luong, T.; Le, Q. Unsupervised Data Augmentation for Consistency Training. Adv. Neural Inf. Process. Syst. 2020, 33, 6256–6268. [Google Scholar]
- Zhao, T.; Liu, Y.; Neves, L.; Woodford, O.; Jiang, M.; Shah, N. Data Augmentation for Graph Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021. [Google Scholar]
Dataset | Year | Dataset Size | Class Range | LiDAR Sensors/Sensor Types | LiDAR Sensors | Camera Sensors | Sensor Types | LiDAR Frames | Image Frames | Classes | Location |
---|---|---|---|---|---|---|---|---|---|---|---|
KITTI [48] | 2012 | <50 k | 5–10 | 1/3 | 1 | 2 | 3 | 15 k | 15 k | 8 | Germany |
KAIST [49] | 2018 | <50 k | <5 | 1/2 | 1 | 2 | 2 | 8.9 k | 8.9 k | 3 | Korea |
ApolloScape [43] | 2018 | 50 k–1 M | 5–10 | 2/3 | 2 | 2 | 3 | 20 k | 144 k | 6 | China |
H3D [44] | 2019 | 50 k–1 M | 5–10 | 1/3 | 1 | 2 | 3 | 27 k | 83 k | 8 | United States |
Lyft L5 [50] | 2019 | 50 k–1 M | 5–10 | 2/2 | 2 | 6 | 2 | 46 k | 323 k | 9 | United States |
Argoverse [51] | 2019 | 50 k–1 M | >10 | 1/2 | 1 | 7 | 2 | 44 k | 490 k | 15 | United States |
nuScenes [52] | 2019 | >1 M | >10 | 1/3 | 1 | 6 | 3 | 400 k | 1.4 M | 23 | Singapore and United States |
Waymo [53] | 2019 | >1 M | <5 | 5/2 | 5 | 5 | 2 | 230 k | 1 M | 4 | United States |
A*3D [54] | 2019 | 50 k–1 M | 5–10 | 1/2 | 1 | 6 | 2 | 39 k | 39 k | 7 | Singapore |
PandaSet [55] | 2020 | 50 k–1 M | >10 | 2/3 | 2 | 6 | 3 | 8.2 k | 49 k | 28 | United States |
Cirrus [56] | 2021 | <50 k | 5–10 | 2/2 | 2 | 1 | 2 | 6.2 k | 6.2 k | 8 | China |
ONCE [57] | 2021 | >1 M | 5–10 | 1/2 | 1 | 7 | 2 | 1 M | 7 M | 5 | China |
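All of the datasets above pair LiDAR sweeps with calibrated camera images, and the geometric link between the two modalities is a rigid extrinsic transform followed by a pinhole projection. The sketch below illustrates that projection in plain Python; the calibration values (`Tr_velo_to_cam`, `K`) are illustrative placeholders, not taken from any of the listed datasets.

```python
# Minimal sketch: projecting a LiDAR point into a camera image, KITTI-style.
# All calibration numbers below are hypothetical, for illustration only.

def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

# Hypothetical LiDAR-to-camera extrinsics (rotation | translation), 3x4.
Tr_velo_to_cam = [
    [0.0, -1.0,  0.0,  0.00],
    [0.0,  0.0, -1.0, -0.08],
    [1.0,  0.0,  0.0, -0.27],
]
# Hypothetical camera intrinsics (fx, fy, cx, cy).
K = [
    [721.5,   0.0, 609.6],
    [  0.0, 721.5, 172.9],
    [  0.0,   0.0,   1.0],
]

def project_lidar_point(p_velo):
    """Map a LiDAR point (x, y, z) to pixel coordinates (u, v),
    or None if the point lies behind the image plane."""
    x, y, z = mat_vec(Tr_velo_to_cam, list(p_velo) + [1.0])  # homogeneous transform
    if z <= 0:
        return None  # not visible to the camera
    u, v, w = mat_vec(K, [x, y, z])
    return (u / w, v / w)

uv = project_lidar_point((10.0, 0.0, -1.0))  # a point 10 m ahead of the sensor
```

Misestimating either matrix shifts every projected point, which is why the alignment methods surveyed later treat calibration error as a first-class problem.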
Inclusion Criteria | Exclusion Criteria |
---|---|
Addresses LiDAR-camera representation and alignment | Addresses only point cloud representation or other modalities |
Follows a machine learning approach | Not based on machine learning methods |
Published as a journal article or conference paper | Published elsewhere |
Published between 2019 and 2024 | Focuses on mapping as the main topic |
Source | Initially Retrieved | After Removing Duplication | After First Screening | After Second Screening | After Third Screening |
---|---|---|---|---|---|
WoS | 23 | 8 | 5 | 4 | 3 |
SpringerLink | 15 | 13 | 4 | 1 | 0 |
IEEE Xplore | 27 | 6 | 4 | 2 | 2 |
ProQuest | 119 | 114 | 29 | 10 | 3 |
ACM Digital Library | 11 | 11 | 3 | 1 | 1 |
Scopus | 100 | 100 | 52 | 17 | 9 |
ScienceDirect | 11 | 11 | 6 | 2 | 2 |
Google Scholar | 108 | 5 | 3 | 2 | 1 |
Author | Country and Year | Dataset | Contribution | Applicability/Strength | Limitations/Trends |
---|---|---|---|---|---|
[58] | China 2022 | KITTI | Image fusion with point cloud projection features. | Real-time detection. | Computation cost. |
[59] | China 2021 | KITTI | Sparse point cloud and image feature fusion. | Real-time detection. | Computation cost. |
[60] | China 2022 | KITTI; local dataset | Bilateral filtering and Delaunay point cloud densification. | Dynamic environment. | Low accuracy of small objects. |
[61] | China 2020 | KITTI | Multilevel network of attentional mechanisms. | Dynamic environment. | Low accuracy of small objects. |
[62] | China 2019 | Real data | FL and YOLO CNN. | Diverse environment. | Safety. |
[63] | China 2022 | KITTI | 2D feature extraction. | Occlusion environment. | Low accuracy of small objects. |
[64] | China 2022 | KITTI | Multiview and BEV feature. | Small-object detection. | Computation cost. |
[65] | China 2021 | KITTI; SUN-RGB | Multilevel network of attentional mechanisms. | Occlusion environment. | Computation cost. |
[66] | China 2022 | KITTI | Deep attention mechanism. | Occlusion environment. | Computation cost. |
[67] | China 2023 | KITTI | Modality adaptation. | Across different scenes. | Lightweight models. |
[68] | China 2022 | KITTI | The multiscale feature aggregation module. | Across different scenes. | Real-time. |
[69] | China 2023 | KITTI | Increasing the weight of image information in fusion. | Stronger image feature contribution. | Diverse environment. |
[70] | China 2022 | Waymo | Enhancing sparse LiDAR data. | Various lighting conditions. | Computational complexity. |
[71] | China 2023 | KITTI; nuScenes | Pixel-level Cross-Attention Feature Alignment (CAFA). | Small objects. | Computational costs. |
[72] | China 2023 | KITTI; nuScenes | Graph Feature Alignment (GFA). | Small objects. | Real-time. |
[73] | China 2022 | KITTI; nuScenes | Self-Attention Feature Alignment (SAFA) modules. | Diverse conditions. | Computational complexity. |
[74] | China 2024 | nuScenes | Graph matching to resolve global misalignment. | Small-object detection. | Dynamic environment. |
[75] | UK 2021 | KITTI | Region-of-interest deep learning. | Low computation cost. | Low accuracy of small objects. |
[76] | Germany 2020 | KITTI; FlyingThings3D | Multiscale. | Dynamic environment. | Computation cost. |
[77] | USA 2022 | KITTI; nuScenes | Light weight. | Real-time detection. | Low accuracy of small objects. |
[78] | Brazil 2020 | KITTI | Late fusion. | Occlusion environment. | Computation cost. |
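Several of the surveyed works (e.g., [78]) rely on late fusion, combining independently produced camera and LiDAR detections rather than intermediate features. A minimal sketch of one common variant, IoU matching with score averaging, is shown below; the box format `(x1, y1, x2, y2)` and the threshold are illustrative assumptions, not the exact procedure of any cited paper.

```python
# Minimal sketch of IoU-based late fusion of 2D detections from two modalities.
# Detections are (box, score) pairs; boxes are axis-aligned (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def late_fuse(cam_dets, lidar_dets, thr=0.5):
    """Pair each camera detection with its best-overlapping LiDAR detection;
    the fused confidence is the mean of both scores."""
    fused = []
    for box_c, score_c in cam_dets:
        best = max(lidar_dets, key=lambda d: iou(box_c, d[0]), default=None)
        if best is not None and iou(box_c, best[0]) >= thr:
            fused.append((box_c, (score_c + best[1]) / 2.0))
    return fused

cam = [((10, 10, 50, 50), 0.9)]   # camera branch output
lid = [((12, 12, 52, 52), 0.7)]   # projected LiDAR branch output
out = late_fuse(cam, lid)
```

Because each branch runs to completion before fusion, this style tolerates one degraded sensor but cannot recover detections that both branches miss, which is the occlusion limitation several rows of the table note.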
References | Input Data | Dimension | Source |
---|---|---|---|
[45,67,71,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112] | Raw Image | 2D | Image |
[81,82,84,85,86,89,90,91,94,99,100,102,106,107,108] | Raw Point Cloud | 3D | Point Cloud |
[96,113] | Pseudo-Point Cloud | 3D | Image |
[87,88] | Front (Range) View | 2D | Point Cloud |
[67,83,92,94,95,97,98,100,101,103,104,105,109,110,111,112,113,114,115] | Voxel | 3D | Point Cloud |
[114,115,116,117] | BEV Feature | 2D | Image |
[12,45,96,107,117,118,119] | BEV Map | 2D | Point Cloud |
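Of the point cloud representations in the table, the BEV map is the simplest to construct: points are rasterized onto a ground-plane grid and each occupied cell stores a statistic such as maximum height. A minimal sketch in plain Python, with illustrative grid parameters (the ranges and cell size are assumptions, not values from any cited work):

```python
# Minimal sketch: rasterizing a point cloud into a sparse 2D BEV height map,
# the "BEV Map" representation listed in the table above.

def points_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Return (bev, (rows, cols)), where bev maps each occupied (row, col)
    cell to the maximum point height z observed in that cell."""
    rows = int((x_range[1] - x_range[0]) / cell)
    cols = int((y_range[1] - y_range[0]) / cell)
    bev = {}
    for x, y, z in points:
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue  # drop points outside the grid
        r = int((x - x_range[0]) / cell)
        c = int((y - y_range[0]) / cell)
        bev[(r, c)] = max(bev.get((r, c), float("-inf")), z)  # height channel
    return bev, (rows, cols)

cloud = [(10.2, 0.1, -1.5), (10.3, 0.2, -0.4), (55.0, 0.0, 0.0)]
bev, shape = points_to_bev(cloud)  # the first two points share one cell
```

Practical pipelines add further channels per cell (intensity, density) and use dense arrays, but the quantization step is the same; it is also the source of the small-object accuracy losses that the BEV-based methods in the table report.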
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, Y.; Abd Rahman, A.H.; Nor Rashid, F.’A.; Razali, M.K.M. Tackling Heterogeneous Light Detection and Ranging-Camera Alignment Challenges in Dynamic Environments: A Review for Object Detection. Sensors 2024, 24, 7855. https://doi.org/10.3390/s24237855