Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review
Abstract
1. Introduction
2. Sensor Technology in Autonomous Vehicles
2.1. Camera
2.2. LiDAR
- 1D or one-dimensional sensors measure only the distance information (x-coordinates) of objects in the surroundings.
- 2D or two-dimensional sensors provide additional information about the angle (y-coordinates) of the targeted objects.
- 3D or three-dimensional sensors fire laser beams across the vertical axis to measure the elevation (z-coordinates) of objects in the surroundings; a short range-to-Cartesian conversion sketch follows this list.
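To make the extra measurement axes concrete, the minimal sketch below converts raw (range, azimuth, elevation) returns into Cartesian x/y/z points; the function name and the chosen axis convention are illustrative assumptions rather than a vendor-specified formula, and a 2D scan is simply the special case of zero elevation.

```python
import numpy as np

def polar_to_cartesian(ranges_m, azimuths_deg, elevations_deg):
    """Convert LiDAR returns (range, azimuth, elevation) into x, y, z points.

    A 1D sensor fixes both angles, a 2D sensor sweeps azimuth only
    (elevation = 0), and a 3D sensor varies both angles.
    """
    r = np.asarray(ranges_m, dtype=float)
    az = np.radians(np.asarray(azimuths_deg, dtype=float))
    el = np.radians(np.asarray(elevations_deg, dtype=float))
    x = r * np.cos(el) * np.cos(az)  # forward distance
    y = r * np.cos(el) * np.sin(az)  # lateral offset
    z = r * np.sin(el)               # height above the sensor origin
    return np.stack([x, y, z], axis=-1)

# Example: a single return at 10 m range, 15 degrees azimuth, 2 degrees elevation.
print(polar_to_cartesian([10.0], [15.0], [2.0]))
```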
Type | Company | Model | Channels or Layers | FPS (Hz) | Acc. (m) | RNG (m) | VFOV (°) | HFOV (°) | HR (°) | VR (°) | λ (nm) | Ø (mm) | ROS Drv. | Ref.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Mechanical/Spinning LiDARs | Velodyne | VLP-16 | 16 | 5–20 | ±0.03 | 1…100 | 30 | 360 | 0.1–0.4 | 2 | 903 | 103.3 | [67] | [51,68,69,70] |
VLP-32C | 32 | 5–20 | ±0.03 | 1…200 | 40 | 360 | 0.1–0.4 | 0.33 1 | 903 | 103 | ||||
HDL-32E | 32 | 5–20 | ±0.02 | 2…100 | 41.33 | 360 | 0.08–0.33 | 1.33 | 903 | 85.3 | ||||
HDL-64E | 64 | 5–20 | ±0.02 | 3…120 | 26.8 | 360 | 0.09 | 0.33 | 903 | 223.5 | ||||
VLS-128 Alpha Prime | 128 | 5–20 | ±0.03 | max 245 | 40 | 360 | 0.1–0.4 | 0.11 1 | 903 | 165.5 | - | |||
Hesai | Pandar64 | 64 | 10,20 | ±0.02 | 0.3…200 | 40 | 360 | 0.2,0.4 | 0.167 1 | 905 | 116 | [71] | [72] | |
Pandar40P | 40 | 10,20 | ±0.02 | 0.3…200 | 40 | 360 | 0.2,0.4 | 0.167 1 | 905 | 116 | [73] | |||
Ouster | OS1–64 Gen 1 | 64 | 10,20 | ±0.03 | 0.8…120 | 33.2 | 360 | 0.7,0.35, 0.17 | 0.53 | 850 | 85 | [74] | [75,76] | |
OS1-16 Gen 1 | 16 | 10,20 | ±0.03 | 0.8…120 | 33.2 | 360 | 0.53 | 850 | 85 | |||||
RoboSense | RS-Lidar32 | 32 | 5,10,20 | ±0.03 | 0.4…200 | 40 | 360 | 0.1–0.4 | 0.33 1 | 905 | 114 | [77] | [78] | |
LeiShen | C32-151A | 32 | 5,10,20 | ±0.02 | 0.5…70 | 32 | 360 | 0.09, 0.18,0.36 | 1 | 905 | 120 | [79] | [80] | |
C16-700B | 16 | 5,10,20 | ±0.02 | 0.5…150 | 30 | 360 | 2 | 905 | 102 | [81] | [82] | |||
Hokuyo | YVT-35LX-F0 | - | 20 3 | ±0.05 3 | 0.3…35 3 | 40 | 210 | - | - | 905 | ◊ | [83] | [84] | |
Solid State LiDARs | IBEO | LUX 4L Standard | 4 | 25 | 0.1 | 50 2 | 3.2 | 110 | 0.25 | 0.8 | 905 | ◊ | [85] | [86] |
LUX HD | 4 | 25 | 0.1 | 50 2 | 3.2 | 110 | 0.25 | 0.8 | 905 | ◊ | [87] | |||
LUX 8L | 8 | 25 | 0.1 | 30 2 | 6.4 | 110 | 0.25 | 0.8 | 905 | ◊ | [88] | |||
SICK | LD-MRS400102S01 HD | 4 | 50 | - | 30 2 | 3.2 | 110 | 0.125…0.5 | - | ◊ | [85] | [89] | ||
LD-MRS800001S01 | 8 | 50 | - | 50 2 | 6.4 | 110 | 0.125…0.5 | - | ◊ | [90] | ||||
Cepton | Vista P60 | - | 10 | - | 200 | 22 | 60 | 0.25 | 0.25 | 905 | ◊ | [91] | [92] | |
Vista P90 | - | 10 | - | 200 | 27 | 90 | 0.25 | 0.25 | 905 | ◊ | [93] | |||
Vista X90 | - | 40 | - | 200 | 25 | 90 | 0.13 | 0.13 | 905 | ◊ | [94] |
2.3. Radar
3. Sensor Calibration and Sensor Fusion for Object Detection
3.1. Sensor Calibrations
3.1.1. Intrinsic Calibration Overview
- Photogrammetric calibration. This approach uses known calibration points observed on a calibration object (usually a planar pattern) whose geometry in 3D world space is known with high precision; a minimal sketch of this approach follows this list.
- Self-calibration. This approach utilizes the correspondence between the captured images from a moving camera in a static scene to estimate the camera intrinsic and extrinsic parameters.
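As a minimal illustration of the photogrammetric approach above, the sketch below estimates the intrinsic matrix and distortion coefficients from several checkerboard views with OpenCV, following the planar-pattern formulation popularized by Zhang and used by the ROS camera_calibration tool; the image folder, board dimensions, and square size are illustrative assumptions.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner-corner count of the printed checkerboard (assumed)
SQUARE_M = 0.025      # square edge length in metres (assumed)

# 3D corner coordinates in the board frame (z = 0 plane), identical for every view.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_M

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):  # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Returns the RMS reprojection error, the intrinsic matrix K, the distortion
# coefficients, and the per-view board pose (rotation and translation vectors).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix K:\n", K)
```

The per-view rotation and translation vectors also give the pose of the calibration board relative to the camera for each image.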
3.1.2. Extrinsic Calibration Overview
- Pose and Structure Estimation (PSE). It treats the true board locations as latent variables and uses their estimates to optimize the transformations towards a precise estimate of all calibration-target poses.
- Minimally Connected Pose Estimation (MCPE). It relies on a reference sensor and estimates the transformation from every other sensing modality to that single reference frame (see the sketch after this list).
- Fully Connected Pose Estimation (FCPE). It estimates the transformations between all sensing modalities jointly and enforces a loop-closure constraint to ensure consistency.
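To illustrate the MCPE idea of referring every modality to one reference sensor, the minimal sketch below chains 4x4 homogeneous transforms with NumPy; the example extrinsics (a camera and a radar expressed relative to a LiDAR reference frame) are made-up values for demonstration, not calibration results.

```python
import numpy as np

def make_transform(yaw_deg, translation_xyz):
    """Build a 4x4 homogeneous transform from a yaw rotation and a translation."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation_xyz
    return T

# Hypothetical extrinsics, each expressed in the LiDAR reference frame (MCPE style).
T_lidar_camera = make_transform(yaw_deg=1.5, translation_xyz=[0.10, -0.05, -0.20])
T_lidar_radar = make_transform(yaw_deg=-0.8, translation_xyz=[1.20, 0.00, -0.60])

# A point detected in the camera frame is mapped into the radar frame by going
# through the shared reference: radar <- lidar <- camera.
T_radar_camera = np.linalg.inv(T_lidar_radar) @ T_lidar_camera

p_camera = np.array([2.0, 0.5, 0.0, 1.0])  # homogeneous point in the camera frame
print("Point in radar frame:", (T_radar_camera @ p_camera)[:3])
```

FCPE differs in that all pairwise transforms are estimated jointly, with the product around any closed loop of sensors constrained to the identity.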
- Ensure that the edges of the circles have sufficient contrast with the background, particularly when calibrating the cameras outdoors, as was necessary in our case. However, it is recommended in [146] that sensor calibration be performed indoors to avoid strong wind, which may overturn the calibration board.
- Ensure that the camera lenses are protected from rain droplets to reduce noise when calibrating the sensors outdoors, particularly during rainy and blustery weather conditions.
- Additional or modified scripts may be required to match the ROS sensor message types expected by the board detector nodes, depending on the ROS sensor drivers employed. For instance, [146] utilized a Continental ARS430 radar with the AutonomouStuff-provided ROS messages, which output the detections in an AutonomouStuff sensor message array format [101], whereas the ROS driver for SmartMicro radars outputs the detections as PointCloud2 messages [113]. Table 6 summarizes the sensor message types required as input by each board detector node of the extrinsic calibration tool; a minimal message-conversion sketch follows this list.
- Ensure that the edges of the four circles are detected (covered) with sufficient points within the LiDAR point cloud. We examined and compared the elevation angles of the Velodyne VLP-32C with those of the Velodyne HDL-64E ([162], utilized in [146]). The results indicated that the vertical laser points of the HDL-64E are distributed uniformly between −24.9° and 2°, whereas those of the VLP-32C are concentrated around the optical center, between −25° and 15°, as shown in Figure 10. Hence, the position and orientation of the LiDAR relative to the calibration board may have a significant effect on the reported locations of the circles detected within the LiDAR data.
- It is suggested in [146] to position the calibration board in a spacious area and to capture at least ten calibration board locations within the FoV of all sensors. However, it is not recommended to hold the calibration board by hand, as this can affect the radar detections of the corner reflector.
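As an example of the message-type adaptation mentioned in the list above, the sketch below republishes radar detections as sensor_msgs/PointCloud2 so that a PointCloud2-based board detector can consume them; this is a minimal ROS 1 (rospy) sketch in which the topic names and the detection field layout (detection.position.x/y/z) are assumptions that must be matched to the actual radar driver in use.

```python
import rospy
from sensor_msgs import point_cloud2
from sensor_msgs.msg import PointCloud2
from radar_msgs.msg import RadarDetectionArray  # message package assumed from the driver


class RadarToPointCloud:
    """Republish radar detections as PointCloud2 for a PointCloud2-based detector node."""

    def __init__(self):
        # Topic names are placeholders; adapt them to the radar driver in use.
        self.pub = rospy.Publisher("/radar_converter/points", PointCloud2, queue_size=1)
        rospy.Subscriber("/radar_converter/detections", RadarDetectionArray, self.callback)

    def callback(self, msg):
        # Field names below are assumptions about the detection message layout.
        points = [(d.position.x, d.position.y, d.position.z) for d in msg.detections]
        self.pub.publish(point_cloud2.create_cloud_xyz32(msg.header, points))


if __name__ == "__main__":
    rospy.init_node("radar_to_pointcloud")
    RadarToPointCloud()
    rospy.spin()
```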
3.1.3. Temporal Calibration Overview
- Styrofoam or cardboard to fabricate the triangular planar pattern,
- Printed AprilTag marker, approximately 17 cm in length, located at the front of the triangular planar pattern, and
- Cardboard to assemble a trihedral corner reflector, whose three inner sides are overlaid with aluminum foil; the reflector is attached at the rear of the triangular planar pattern.
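Independently of the physical target described above, temporal calibration ultimately comes down to relating the time bases of the individual data streams. A common first step in ROS is to pair measurements by header timestamp; the minimal rospy sketch below uses message_filters.ApproximateTimeSynchronizer for this, with the camera topic taken from Table 6 and the queue size and slop tolerance chosen as illustrative assumptions.

```python
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2


def synced_callback(image_msg, cloud_msg):
    # Called only when both header stamps fall within the configured slop window.
    dt = (image_msg.header.stamp - cloud_msg.header.stamp).to_sec()
    rospy.loginfo("Paired camera/LiDAR frames, residual stamp offset %.3f s", dt)


if __name__ == "__main__":
    rospy.init_node("camera_lidar_sync")
    image_sub = message_filters.Subscriber("/ueye/left/image_rect_color", Image)
    cloud_sub = message_filters.Subscriber("/velodyne_points", PointCloud2)
    # queue_size buffers unmatched messages; slop is the maximum allowed stamp difference (s).
    sync = message_filters.ApproximateTimeSynchronizer(
        [image_sub, cloud_sub], queue_size=10, slop=0.05)
    sync.registerCallback(synced_callback)
    rospy.spin()
```

Approximate matching of this kind does not remove a constant sensor latency; estimating such a fixed offset is the task addressed by dedicated spatio-temporal calibration methods.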
3.2. Sensor Fusion
3.2.1. Sensor Fusion Approaches
3.2.2. Sensor Fusion Techniques and Algorithms
- ResNet, or Residual Networks, is a residual learning framework that facilitates deep networks training [195].
- SSD, or Single-Shot Multibox Detector, is a method that discretizes bounding boxes into a set of boxes with different sizes and aspect ratios per feature map location to detect objects of varying sizes [196]. It mitigates YOLO's limited accuracy on small and variably scaled objects (an inference sketch follows this list).
- CenterNet [197] represents the state-of-the-art monocular camera 3D object detection algorithm, which leverages key-point estimation to find the center points of bounding boxes and regresses from the center points to all other object properties, including size, 3D location, orientation, and pose.
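As a concrete example of running one of these single-stage detectors on camera frames, the sketch below performs SSD inference with torchvision's COCO-pretrained SSD300-VGG16 model; this is a minimal sketch assuming torchvision 0.13 or newer for the weights API, the input image path is hypothetical, and it illustrates camera-only detection rather than the fused pipelines described in the cited works.

```python
import torch
from torchvision.io import ImageReadMode, read_image
from torchvision.models.detection import SSD300_VGG16_Weights, ssd300_vgg16

# Load a COCO-pretrained SSD300 detector together with its matching preprocessing.
weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()

# Hypothetical camera frame on disk; any RGB image will do.
image = read_image("camera_frame.png", ImageReadMode.RGB)
batch = [preprocess(image)]

with torch.no_grad():
    detections = model(batch)[0]  # dict with 'boxes', 'labels', and 'scores'

# Keep confident detections only and print class names with their bounding boxes.
keep = detections["scores"] > 0.5
for box, label in zip(detections["boxes"][keep], detections["labels"][keep]):
    print(weights.meta["categories"][int(label)], [round(v, 1) for v in box.tolist()])
```

In a fusion pipeline, the resulting 2D boxes would then be associated with LiDAR or radar detections projected into the image plane using the calibrated extrinsics from Section 3.1.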
3.2.3. Challenges of Sensor Fusion for Safe and Reliable Environment Perception
4. Conclusions and Future Research Recommendations
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- World Health Organization. Global Status Report on Road Safety; WHO: Geneva, Switzerland, 2018; ISBN 978-9241565684. [Google Scholar]
- Road | Mobility and Transport. Available online: https://ec.europa.eu/transport/themes/its/road_it (accessed on 20 November 2020).
- Autonomous Vehicle Market to Garner Growth 63.5%. Available online: https://www.precedenceresearch.com/autonomous-vehicle-market (accessed on 19 November 2020).
- Glon, R.; Edelstein, S. The History of Self-Driving Cars. 2020. Available online: https://www.digitaltrends.com/cars/history-of-self-driving-cars-milestones/ (accessed on 18 November 2020).
- Wiggers, K. Waymo’s Autonomous Cars Have Driven 20 Million Miles on Public Roads. 2020. Available online: https://venturebeat.com/2020/01/06/waymos-autonomous-cars-have-driven-20-million-miles-on-public-roads/ (accessed on 18 November 2020).
- Jaguar Land Rover to Partner with Autonomous Car Hub in Shannon. 2020. Available online: https://www.irishtimes.com/business/transport-and-tourism/jaguar-land-rover-to-partner-with-autonomous-car-hub-in-shannon-1.4409884 (accessed on 25 November 2020).
- Shuttleworth, J. SAE Standard News: J3016 Automated-Driving Graphic Update. 2019. Available online: https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic (accessed on 18 November 2020).
- Autopilot. Available online: https://www.tesla.com/en_IE/autopilot (accessed on 23 November 2020).
- Footage Audi A8: Audi AI Traffic Jam Pilot. Available online: https://www.audi-mediacenter.com/en/audimediatv/video/footage-audi-a8-audi-ai-traffic-jam-pilot-3785#:~:text=The%20Audi%20AI%20traffic%20jam,%2Fh%20(37.3%20mph) (accessed on 23 November 2020).
- Edelstein, S. Audi Gives up on Level 3 Autonomous Driver-Assist System in A8. 2020. Available online: https://www.motorauthority.com/news/1127984_audi-gives-up-on-level-3-autonomous-driver-assist-system-in-a8 (accessed on 23 November 2020).
- Sage, A. Waymo Unveils Self-Driving Taxi Service in Arizona for Paying Customers. 2018. Available online: https://www.reuters.com/article/us-waymo-selfdriving-focus/waymo-unveils-self-driving-taxi-service-in-arizona-for-paying-customers-idUSKBN1O41M2 (accessed on 23 November 2020).
- Mozaffari, S.; Al-Jarrah, O.Y.; Dianati, M.; Jennings, P.; Mouzakitis, A. Deep Learning-Based Vehicle Behavior Prediction for Autonomous Driving Applications: A Review. IEEE Trans. Intell. Transp. Syst. 2020, 1–15. [Google Scholar] [CrossRef]
- Mehra, A.; Mandal, M.; Narang, P.; Chamola, V. ReViewNet: A Fast and Resource Optimized Network for Enabling Safe Autonomous Driving in Hazy Weather Conditions. IEEE Trans. Intell. Transp. Syst. 2020, 1–11. [Google Scholar] [CrossRef]
- Gonzalez-de-Santos, P.; Fernández, R.; Sepúlveda, D.; Navas, E.; Emmi, L.; Armada, M. Field Robots for Intelligent Farms—Inhering Features from Industry. Agronomy 2020, 10, 1638. [Google Scholar] [CrossRef]
- Velasco-Hernandez, G.; Yeong, D.J.; Barry, J.; Walsh, J. Autonomous Driving Architectures, Perception and Data Fusion: A Review. In Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP 2020), Cluj-Napoca, Romania, 3–5 September 2020. [Google Scholar]
- Giacalone, J.; Bourgeois, L.; Ancora, A. Challenges in aggregation of heterogeneous sensors of Autonomous Driving Systems. In Proceedings of the 2019 IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11–13 March 2019. [Google Scholar]
- Liu, X.; Baiocchi, O. A comparison of the definitions for smart sensors, smart objects and Things in IoT. In Proceedings of the 2016 IEEE 7th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 13–15 October 2016. [Google Scholar]
- Wojciechowicz, T. Smart Sensor vs Base Sensor—What’s the Difference? | Symmetry Blog. Available online: https://www.semiconductorstore.com/blog/2018/Smart-Sensor-vs-Base-Sensor-Whats-the-Difference-Symmetry-Blog/3538/#:~:text=By%20using%20a%20smart%20sensor,achieve%20on%20a%20base%20sensor (accessed on 26 November 2020).
- Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Sensors 2020, 20, 4220. [Google Scholar] [CrossRef] [PubMed]
- Campbell, S.; O’Mahony, N.; Krpalcova, L.; Riordan, D.; Walsh, J.; Murphy, A.; Conor, R. Sensor Technology in Autonomous Vehicles: A review. In Proceedings of the 2018 29th Irish Signals and Systems Conference (ISSC), Belfast, UK, 21–22 June 2018. [Google Scholar]
- Wang, Z.; Wu, Y.; Niu, Q. Multi-Sensor Fusion in Automated Driving: A Survey. IEEE Access 2019, 8, 2847–2868. [Google Scholar] [CrossRef]
- Yeong, D.J.; Barry, J.; Walsh, J. A Review of Multi-Sensor Fusion System for Large Heavy Vehicles Off Road in Industrial Environments. In Proceedings of the 2020 31st Irish Signals and Systems Conference (ISSC), Letterkenny, Ireland, 11–12 June 2020. [Google Scholar]
- Jusoh, S.; Almajali, S. A Systematic Review on Fusion Techniques and Approaches Used in Applications. IEEE Access 2020, 8, 14424–14439. [Google Scholar] [CrossRef]
- Castanedo, F. A Review of Data Fusion Techniques. Sci. World J. 2013, 2013, 19. [Google Scholar] [CrossRef]
- Kuutti, S.; Bowden, R.; Jin, Y.; Barber, P.; Fallah, S. A Survey of Deep Learning Applications to Autonomous Vehicle Control. IEEE Trans. Intell. Transp. Syst. 2021, 22, 712–733. [Google Scholar] [CrossRef]
- Hu, J.-W.; Zheng, B.-Y.; Wang, C.; Zhao, C.-H.; Hou, X.-L.; Pan, Q.; Xu, Z. A Survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments. Front. Inform. Technol. Electron. Eng. 2020, 21, 675–692. [Google Scholar] [CrossRef]
- Mobile Robot Sensors. Available online: http://www.robotiksistem.com/robot_sensors.html (accessed on 24 November 2020).
- Robotic Autonomy Summer Camp. Available online: http://www.cs.cmu.edu/~rasc/Download/AMRobots4.pdf (accessed on 24 November 2020).
- Woo, A.; Fidan, B.; Melek, W.W. Localization for Autonomous Driving. In Handbook of Position Location: Theory, Practice, and Advances, 2nd ed.; Zekavat, S., Buehrer, R.M., Eds.; Wiley-IEEE Press: Hoboken, NJ, USA, 2019; pp. 1051–1087. ISBN 978-1-119-43458-0. [Google Scholar]
- Shahian Jahromi, B.; Tulabandhula, T.; Cetin, S. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles. Sensors 2019, 19, 4357. [Google Scholar] [CrossRef] [Green Version]
- Guo, X. Feature-Based Localization Methods for Autonomous Vehicles. Ph.D. Thesis, Freien Universität Berlin, Berlin, Germany, 2017. [Google Scholar]
- Wendt, Z.; Jeremy Cook, S. Saved by the Sensor: Vehicle Awareness in the Self-Driving Age. 2019. Available online: https://www.machinedesign.com/mechanical-motion-systems/article/21836344/saved-by-the-sensor-vehicle-awareness-in-the-selfdriving-age (accessed on 25 November 2020).
- Joglekar, A.; Joshi, D.; Khemani, R.; Nair, S.; Sahare, S. Depth Estimation Using Monocular Camera. IJCSIT 2011, 2, 1758–1763. [Google Scholar]
- Bhoi, A. Monocular Depth Estimation: A Survey. arXiv 2019, arXiv:1901.09402v1. [Google Scholar]
- Garg, R.; Wadhwa, N.; Ansari, S.; Barron, J.T. Learning Single Camera Depth Estimation using Dual-Pixels. arXiv 2019, arXiv:1904.05822v3. [Google Scholar]
- Cronin, C.; Conway, A.; Walsh, J. State-of-the-Art Review of Autonomous Intelligent Vehicles (AIV) Technologies for the Automotive and Manufacturing Industry. In Proceedings of the 2019 30th Irish Signals and System Conference (ISSC), Maynooth, Ireland, 17–18 June 2019. [Google Scholar]
- Orbbec—Intelligent computing for everyone everywhere. Available online: https://orbbec3d.com/ (accessed on 4 December 2020).
- Harapanahalli, S.; O’Mahony, N.; Velasco-Hernandez, G.; Campbell, S.; Riordan, D.; Walsh, J. Autonomous Navigation of mobile robots in factory environment. Procedia Manuf. 2019, 38, 1524–1531. [Google Scholar] [CrossRef]
- Stereo_Image_Proc—ROS Wiki. Available online: http://wiki.ros.org/stereo_image_proc (accessed on 4 December 2020).
- 3D Camera Survey—ROS-Industrial. Available online: https://rosindustrial.org/news/2016/1/13/3d-camera-survey (accessed on 23 November 2020).
- Roboception 3D Stereo Sensor. Available online: https://roboception.com/wp-content/uploads/2020/06/202006_3D_StereoSensor.pdf (accessed on 23 November 2020).
- MultiSense S7—Carnegie Robotics LLC. Available online: https://carnegierobotics.com/multisense-s7 (accessed on 23 November 2020).
- Knabe, C.; Griffin, R.; Burton, J.; Cantor-Cooke, G.; Dantanarayana, L.; Day, G.; Ebeling-Koning, O.; Hahn, E.; Hopkins, M.; Neal, J.; et al. Team VALOR’s ESCHER: A Novel Electromechanical Biped for the DARPA Robotics Challenge. J. Field Robot. 2017, 34, 1–27. [Google Scholar] [CrossRef]
- MultiSense S21B—Carnegie Robotics LLC. Available online: https://carnegierobotics.com/multisense-s21b (accessed on 23 November 2020).
- N-Series Model Listing | Ensenso. Available online: https://www.ensenso.com/support/modellisting/?id=N35-606-16-BL (accessed on 24 November 2020).
- FRAMOS Industrial Depth Camera D435e—Starter Kit | FRAMOS. Available online: https://www.framos.com/en/framos-depth-camera-d435e-starter-kit-22805 (accessed on 25 November 2020).
- Karmin 3D Stereo Camera—Nerian Vision Technologies. Available online: https://nerian.com/products/karmin3-3d-stereo-camera/ (accessed on 26 November 2020).
- Compare Intel RealSense Depth Cameras (Tech specs and Review). Available online: https://www.intelrealsense.com/compare-depth-cameras/ (accessed on 27 November 2020).
- Bumblebee®2 FireWire | FLIR Systems. Available online: https://www.flir.eu/support/products/bumblebee2-firewire/#Overview (accessed on 27 November 2020).
- Bumblebee® XB3 FireWire | FLIR Systems. Available online: https://www.flir.eu/support/products/bumblebee-xb3-firewire/#Overview (accessed on 27 November 2020).
- Rosero, L.A.; Osório, F.S. Calibration and multi-sensor fusion for on-road obstacle detection. In Proceedings of the 2017 Latin American Robotics Symposium (LARS) and 2017 Brazilian Symposium on Robotics (SBR), Curitiba, Brazil, 8–11 November 2017. [Google Scholar]
- Yahiaoui, M.; Rashed, H.; Mariotti, L.; Sistu, G.; Clancy, I.; Yahiaoui, L.; Yogamani, S. FisheyeMODNet: Moving Object Detection on Surround-view Cameras for Autonomous Driving. In Proceedings of the IMVIP 2019: Irish Machine Vision & Image Processing, Technological University Dublin, Dublin, Ireland, 28–30 August 2019. [Google Scholar] [CrossRef]
- Yogamani, S.; Hughes, C.; Horgan, J.; Sistu, G.; Varley, P.; O’Dea, D.; Uricar, M.; Milz, S.; Simon, M.; Amende, K.; et al. WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving. arXiv 2019, arXiv:1905.01489v2. [Google Scholar]
- Heng, L.; Choi, B.; Cui, Z.; Geppert, M.; Hu, S.; Kuan, B.; Liu, P.; Nguyen, R.; Yeo, Y.C.; Geiger, A.; et al. Project AutoVision: Localization and 3D Scene Perception for an Autonomous Vehicle with a Multi-Camera System. arXiv 2019, arXiv:1809.05477v2. [Google Scholar]
- O’Mahony, C.; Campbell, S.; Krpalkova, L.; Riordan, D.; Walsh, J.; Murphy, A.; Ryan, C. Computer Vision for 3D Perception A review. In Proceedings of the 2018 Intelligent Systems Conference (IntelliSys), London, UK, 6–7 September 2018. [Google Scholar]
- Petit, F. The Beginnings of LiDAR—A Time Travel Back in History. Available online: https://www.blickfeld.com/blog/the-beginnings-of-lidar/#:~:text=Lidar%20technology%20emerged%20already%20in,such%20as%20autonomous%20driving%20today (accessed on 20 December 2020).
- The Automotive LiDAR Market. Available online: http://www.woodsidecap.com/wp-content/uploads/2018/04/Yole_WCP-LiDAR-Report_April-2018-FINAL.pdf (accessed on 15 December 2020).
- A Guide to Lidar Wavelengths. Available online: https://velodynelidar.com/blog/guide-to-lidar-wavelengths/ (accessed on 15 December 2020).
- Wojtanowski, J.; Zygmunt, M.; Kaszczuk, M.; Mierczyk, Z.; Muzal, M. Comparison of 905nm and 1550nm semiconductor laser rangefinders’ performance deterioration due to adverse environmental conditions. Opto-Electron. Rev. 2014, 22, 183–190. [Google Scholar] [CrossRef]
- Kutila, M.; Pyykönen, P.; Ritter, W.; Sawade, O.; Schäufele, B. Automotive LIDAR sensor development scenarios for harsh weather conditions. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016. [Google Scholar]
- What is LiDAR Technology? Available online: https://blog.generationrobots.com/en/what-is-lidar-technology/#:~:text=For%20a%202D%20LiDAR%20only,on%20X%20and%20Y%20axes.&text=For%20a%203D%20LiDAR%2C%20the,X%2C%20Y%20and%20Z%20axes (accessed on 17 December 2020).
- Kodors, S. Point Distribution as True Quality of LiDAR Point Cloud. Balt. J. Mod. Comput. 2017, 5, 362–378. [Google Scholar] [CrossRef]
- Royo, S.; Ballesta-Garcia, M. An Overview of Lidar Imaging Systems for Autonomous Vehicles. Appl. Sci. 2019, 9, 4093. [Google Scholar] [CrossRef] [Green Version]
- Carballo, A.; Lambert, J.; Monrroy-Cano, A.; Wong, D.R.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The Multiple 3D LiDAR Dataset. arXiv 2020, arXiv:2003.06129v2. [Google Scholar]
- LIBRE: LiDAR Benchmark Reference dataset. Available online: https://sites.google.com/g.sp.m.is.nagoya-u.ac.jp/libre-dataset (accessed on 23 December 2020).
- Zhao, X.; Yang, Z.; Schwertfeger, S. Mapping with Reflection—Detection and Utilization of Reflection in 3D Lidar Scans. In Proceedings of the 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Abu Dhabi, United Arab Emirates, 4–6 November 2020. [Google Scholar]
- Velodyne—ROS Wiki. Available online: http://wiki.ros.org/velodyne (accessed on 28 December 2020).
- Products | AutonomouStuff. Available online: https://autonomoustuff.com/products?para1=LiDAR%20Laser%20Scanners&para2=0&para3=Velodyne (accessed on 28 December 2020).
- Sualeh, M.; Kim, G.-W. Dynamic Multi-LiDAR Based Multiple Object Detection and Tracking. Sensors 2019, 19, 1474. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Herzog, M.; Dietmayer, K. Training a Fast Object Detector for LiDAR Range Images Using Labeled Data from Sensors with Higher Resolution. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019. [Google Scholar]
- HesaiTechnology/HesaiLidar_General_ROS: ROS driver for PandarXT PandarQT Pandar64 Pandar40P Pandar40M Pandar20A Pandar20B. Available online: https://github.com/HesaiTechnology/HesaiLidar_General_ROS (accessed on 28 December 2020).
- Pandar64—HESAI. Available online: https://www.hesaitech.com/en/Pandar64 (accessed on 28 December 2020).
- Pandar40—HESAI. Available online: https://www.hesaitech.com/en/Pandar40 (accessed on 28 December 2020).
- Ouster-Lidar/Ouster_Example: Ouster Sample Code. Available online: https://github.com/ouster-lidar/ouster_example (accessed on 28 December 2020).
- OS1 (Serial Number Beginning with “os1-“) Mid-Range High Resolution Imaging Lidar. Available online: http://data.ouster.io/downloads/OS1-gen1-lidar-sensor-datasheet.pdf (accessed on 28 December 2020).
- Muckenhuber, S.; Holzer, H.; Bockaj, Z. Automotive Lidar Modelling Approach Based on Material Properties and Lidar Capabilities. Sensors 2020, 20, 3309. [Google Scholar] [CrossRef]
- RoboSense-LiDAR/ros_Rslidar: ROS driver for RS-LiDAR-16 and RS-LiDAR-32. Available online: https://github.com/RoboSense-LiDAR/ros_rslidar (accessed on 28 December 2020).
- RS-LiDAR-32—RoboSense LiDAR—Autonomous Vehicles, Robots, V2R. Available online: http://www.robosense.ai/en/rslidar/RS-LiDAR-32 (accessed on 28 December 2020).
- LSC32/lslidar_c32 at Master Leishen-Lidar/LSC32. Available online: https://github.com/leishen-lidar/LSC32/tree/master/lslidar_c32 (accessed on 28 December 2020).
- LSC16/lslidar_c16 at Master Leishen-Lidar/LSC32. Available online: https://github.com/leishen-lidar/LSC16/tree/master/lslidar_c16 (accessed on 28 December 2020).
- 32-Channel LiDAR C32-LeiShenLiDAR/Laser Scanner. Available online: http://www.lslidar.com/product/leida/MX/768ea27b-22d2-46eb-9c5d-e81425ef6f11.html (accessed on 28 December 2020).
- Leishen lslidar-C16 16 channels lidar—Autoware—ROS Discourse. Available online: https://discourse.ros.org/t/leishen-lslidar-c16-16-channels-lidar/10055 (accessed on 28 December 2020).
- hokuyo3—ROS Wiki. Available online: http://wiki.ros.org/hokuyo3d (accessed on 30 October 2020).
- Scanning Rangefinder Distance Data Output/YVT-35LX Product Details | HOKUYO AUTOMATIC CO., LTD. Available online: https://www.hokuyo-aut.jp/search/single.php?serial=224 (accessed on 30 October 2020).
- Sick_Ldmrs_Laser—ROS Wiki. Available online: http://wiki.ros.org/sick_ldmrs_laser (accessed on 28 October 2020).
- Ibeo Standard Four Layer Multi-Echo LUX Sensor | AutonomouStuff. Available online: https://autonomoustuff.com/products/ibeo-lux-standard (accessed on 28 October 2020).
- Ibeo Standard Eight Layer/Multi-Echo LUX Sensor | AutonomouStuff. Available online: https://autonomoustuff.com/products/ibeo-lux-8l (accessed on 28 October 2020).
- DATA SHEET ibeo LUX 4L / ibeo LUX 8L / ibeo LUX HD. Available online: https://hexagondownloads.blob.core.windows.net/public/AutonomouStuff/wp-content/uploads/2019/05/ibeo_LUX_datasheet_whitelabel.pdf (accessed on 28 October 2020).
- LD-MRS LD-MRS400102S01 HD, Online Data Sheet. Available online: https://hexagondownloads.blob.core.windows.net/public/AutonomouStuff/wp-content/uploads/2019/05/LD-MRS400102S01-HD_1052961_en-compressed.pdf (accessed on 29 October 2020).
- LD-MRS LD-MRS800001S01, Online Data Sheet. Available online: https://hexagondownloads.blob.core.windows.net/public/AutonomouStuff/wp-content/uploads/2019/05/LD-MRS800001S01_1069408_en-Branded.pdf (accessed on 29 October 2020).
- Ceptontech/Cepton_sdk_Redist: Cepton SDK Redistribution Channel. Available online: https://github.com/ceptontech/cepton_sdk_redist (accessed on 12 November 2020).
- Cepton | Products. Available online: https://www.cepton.com/products.html (accessed on 12 November 2020).
- Cepton Vista™-Edge Smart Lidar for Smart Security. Available online: https://www.cepton.com/downloads/Vista-Edge-product-brief_0904.pdf (accessed on 12 November 2020).
- Cepton | Vista®-X90. Available online: https://www.cepton.com/vista-x90.html (accessed on 12 November 2020).
- Jia, Y.; Guo, L.; Xin, W. Real-time control systems. In Transportation Cyber-Physical Systems, 1st ed.; Deka, L., Chowdhury, M., Eds.; Elsevier: Amsterdam, The Netherlands, 2018; pp. 81–113. [Google Scholar]
- Radartutorial. Available online: https://www.radartutorial.eu/11.coherent/co06.en.html (accessed on 28 December 2020).
- Radar Systems—Doppler Effect—Tutorialspoint. Available online: https://www.tutorialspoint.com/radar_systems/radar_systems_doppler_effect.htm (accessed on 28 December 2020).
- Detecting Static Objects in View Using—Electrical Engineering Stack Exchange. Available online: https://electronics.stackexchange.com/questions/236484/detecting-static-objects-in-view-using-radar (accessed on 29 December 2020).
- Determining the Mounting Position of Automotive Radar Sensors | Rohde & Schwarz. Available online: https://www.rohde-schwarz.com/applications/determining-the-mounting-position-of-automotive-radarsensors-application-card_56279-661795.html (accessed on 28 December 2020).
- Walling, D.H. The Design of an Autonomous Vehicle Research Platform. Master’s Thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA, 3 August 2017. [Google Scholar]
- Astuff/Astuff_Sensor_Msgs: A Set of Messages Specific to Each Sensor Supported by AutonomouStuff. Available online: https://github.com/astuff/astuff_sensor_msgs/tree/master (accessed on 13 November 2020).
- Unizg-fer-Lamor / Radar_Interface—Bitbucket. Available online: https://bitbucket.org/unizg-fer-lamor/radar_interface/src/master/ (accessed on 13 November 2020).
- lf2653/Myrepository: Ros Driver for Continental ARS 408 Radar. Available online: https://github.com/lf2653/myrepository (accessed on 13 November 2020).
- Smartmicro Automotive Radar UMRR-96 Type 153 | AutonomouStuff. Available online: https://autonomoustuff.com/products/smartmicro-automotive-radar-umrr-96 (accessed on 20 February 2020).
- Narula, L.; LaChapelle, D.M.; Murrian, M.J.; Wooten, J.M.; Humphreys, T.E.; Toldi, E.d.; Morvant, G.; Lacambre, J.-B. TEX-CUP: The University of Texas Challenge for Urban Positioning. In Proceedings of the 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, OR, USA, 20–23 April 2020. [Google Scholar]
- Stanislas, L.; Thierry, P. Characterisation of the Delphi Electronically Scanning Radar for robotics applications. In Proceedings of the Australasian Conference on Robotics and Automation 2015; Li., H., Kim, J., Eds.; Australian Robotics and Automation Association: Sydney, Australia, 2015; pp. 1–10. [Google Scholar]
- Automotive Radar Comparison—System Plus Consulting. Available online: https://www.systemplus.fr/wp-content/uploads/2018/10/SP18368-Automotive-Radar-Comparison-2018-Sample-2.pdf (accessed on 30 December 2020).
- Aptiv SRR2 Rear and Side Detection System | AutonomouStuff. Available online: https://autonomoustuff.com/products/aptiv-srr2 (accessed on 13 November 2020).
- Aptiv ESR 2.5 | AutonomouStuff. Available online: https://autonomoustuff.com/products/aptiv-esr-2-5-24v (accessed on 13 November 2020).
- Continental ARS 408-21 | AutonomouStuff. Available online: https://autonomoustuff.com/products/continental-ars-408-21 (accessed on 13 November 2020).
- Xu, F.; Wang, H.; Hu, B.; Ren, M. Road Boundaries Detection based on Modified Occupancy Grid Map Using Millimeter-wave Radar. Mob. Netw. Appl. 2020, 25, 1496–1503. [Google Scholar] [CrossRef]
- Weber, C.; von Eichel-Streiber, J.; Rodrigo-Comino, J.; Altenburg, J.; Udelhoven, T. Automotive Radar in a UAV to Assess Earth Surface Processes and Land Responses. Sensors 2020, 20, 4463. [Google Scholar] [CrossRef]
- Automotive Radar | Smartmicro. Available online: https://www.smartmicro.com/automotive-radar (accessed on 13 June 2020).
- Bruns, T.; Software Engineer—Smartmicro, Braunschweig, Germany; Yeong, D.J.; Institute of Technology, Tralee, Kerry, Ireland. Personal communication, 2020.
- Parker, M. Chapter 19—Pulse Doppler Radar. In Digital Signal Processing 101: Everything You Need to Know to Get Started, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2017; pp. 241–251. [Google Scholar]
- Lee, R.S.; Inside Sales Manager, AutonomouStuff—Hexagon, Stockholm, Sweden; Yeong, D.J.; Institute of Technology, Tralee, Kerry, Ireland. Personal communication, 2020.
- Jain, A.; Zhang, L.; Jiang, L. High-Fidelity Sensor Calibration for Autonomous Vehicles. 2019. Available online: https://medium.com/lyftself-driving/high-fidelity-sensor-calibration-for-autonomous-vehicles-6af06eba4c26 (accessed on 13 October 2020).
- Bouain, M.; Ali, K.M.A.; Berdjag, D.; Fakhfakh, N.; Atitallah, R.B. An Embedded Multi-Sensor Data Fusion Design for Vehicle Perception Tasks. J. Commun. 2018, 13, 8–14. [Google Scholar] [CrossRef]
- Lesson 3: Sensor Calibration—A Necessary Evil—Module 5: Putting It together—An Autonomous Vehicle State Estimator | Coursera. Available online: https://www.coursera.org/lecture/state-estimation-localization-self-driving-cars/lesson-3-sensor-calibration-a-necessary-evil-jPb2Y (accessed on 15 June 2020).
- Tzafestas, S.G. Introduction to Mobile Robot Control, 1st ed.; Elsevier: Waltham, MA, USA, 2014; pp. 479–530. [Google Scholar]
- Montag, A.; Technical Solutions Engineer—EMEA Velodyne Europe, Rüsselsheim, Germany; Yeong, D.J.; Institute of Technology, Tralee, Kerry, Ireland. Personal Communication, 2020.
- Mirzaei, F.M. Extrinsic and Intrinsic Sensor Calibration. Ph.D. Thesis, University of Minnesota, Minneapolis, MN, USA, 2013. [Google Scholar]
- Nouira, H.; Deschaud, J.E.; Goulette, F. Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LiDAR System. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2019; pp. 359–366. [Google Scholar]
- De la Escalera, A.; Armingol, J.M. Automatic Chessboard Detection for Intrinsic and Extrinsic Camera Parameter Calibration. Sensors 2010, 10, 2027–2044. [Google Scholar] [CrossRef]
- Jackman, B.; Sarraj, A.; Walsh, F. Self-Calibration of Fish-Eye Camera for Advanced Assistance Systems. In Proceedings of the ICCV 2018: 20th International Conference on Connected Vehicles, Zurich, Switzerland, 15–16 January 2018. [Google Scholar]
- Liu, Z.; Wu, Q.; Wu, S.; Pan, X. Flexible and accurate camera calibration using grid spherical images. Opt. Express 2017, 25, 15269–15285. [Google Scholar] [CrossRef] [PubMed]
- Xiao, Y.; Ruan, X.; Chai, J.; Zhang, X.; Zhu, X. Online IMU Self-Calibration for Visual-Inertial Systems. Sensors 2019, 19, 1624. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Camera_Calibration—ROS Wiki. Available online: http://wiki.ros.org/camera_calibration (accessed on 23 July 2020).
- Glennie, C.; Lichti, D.D. Static Calibration and Analysis of the Velodyne HDL-64E S2 for High Accuracy Mobile Scanning. Remote Sens. 2010, 2, 1610–1624. [Google Scholar] [CrossRef] [Green Version]
- Lecture 1: The Pinhole Camera Model. Available online: http://opilab.utb.edu.co/computer-vision/alllectures.pdf (accessed on 7 January 2021).
- Pinhole Camera Model | HediVision. Available online: https://hedivision.github.io/Pinhole.html (accessed on 7 January 2021).
- Burger, W.; Burge, M.J. 1.4 Image Acquisition. In Digital Image Processing—An Algorithmic Introduction Using Java, 2nd ed.; Gries, D., Schneider, F.B., Eds.; Springer: London, UK, 2016; pp. 4–11. [Google Scholar]
- Burger, W. Zhang’s Camera Calibration Algorithm: In-Depth Tutorial and Implementation; HGB16-05; University of Applied Sciences Upper Austria, School of Informatics, Communications and Media, Dept. of Digital Media: Hagenberg, Austria, 2016; pp. 1–6. [Google Scholar]
- Camera Calibration and 3D Reconstruction—OpenCV 2.4.13.7 documentation. Available online: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html (accessed on 16 October 2020).
- Camera Model: Intrinsic Parameters—Hoàng-Ân Lê. Available online: https://lhoangan.github.io/camera-params/ (accessed on 8 January 2021).
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004; pp. 1–19. [Google Scholar]
- What is Camera Calibration? —MATLAB & Simulink. Available online: https://www.mathworks.com/help/vision/ug/camera-calibration.html (accessed on 7 January 2021).
- Dissecting the Camera Matrix, Part 3: The Intrinsic Matrix. Available online: http://ksimek.github.io/2013/08/13/intrinsic/ (accessed on 7 January 2021).
- Pedersen, M.; Bengtson, S.H.; Gade, R.; Madsen, N.; Moeslund, T.B. Camera Calibration for Underwater 3D Reconstruction Based on Ray Tracing Using Snell’s Law. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
- Muhovič, J.; Perš, J. Correcting Decalibration of Stereo Cameras in Self-Driving Vehicles. Sensors 2020, 20, 3241. [Google Scholar] [CrossRef]
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
- Wang, J.; Shi, F.; Zhang, J.; Liu, Y. A new calibration model for lens distortion. Pattern Recognit. 2008, 41, 607–615. [Google Scholar] [CrossRef]
- Velas, M.; Spanel, M.; Materna, Z.; Herout, A. Calibration of RGB Camera with Velodyne LiDAR. J. WSCG 2014, 2014, 135–144. [Google Scholar]
- Schöller, G.; Schnettler, M.; Krämmer, A.; Hinz, G.; Bakovic, M.; Güzet, M.; Knoll, A. Targetless Rotational Auto-Calibration of Radar and Camera for Intelligent Transportation Systems. arXiv 2019, arXiv:1904.08743. [Google Scholar]
- An, P.; Ma, T.; Yu, K.; Fang, B.; Zhang, J.; Fu, W.; Ma, J. Geometric calibration for LiDAR-camera system fusing 3D-2D and 3D-3D point correspondences. Opt. Express 2020, 28, 2122–2141. [Google Scholar] [CrossRef]
- Domhof, J.; Kooij, J.F.P.; Gavrila, D.M. An Extrinsic Calibration Tool for Radar, Camera and Lidar. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- tudelft-iv/multi_sensor_calibration. Available online: https://github.com/tudelft-iv/multi_sensor_calibration (accessed on 16 July 2020).
- Peršić, J.; Marković, I.; Petrović, I. Extrinsic 6DoF calibration of a radar-LiDAR-camera system enhanced by radar cross section estimates evaluation. Rob. Auton. Syst. 2019, 114, 217–230. [Google Scholar] [CrossRef]
- Peršić, J.; Marković, I.; Petrović, I. Extrinsic 6DoF calibration of 3D LiDAR and radar. In Proceedings of the 2017 European Conference on Mobile Robots (ECMR), Paris, France, 6–8 September 2017. [Google Scholar]
- Mishra, S.; Pandey, G.; Saripalli, S. Extrinsic Calibration of a 3D-LIDAR and a Camera. arXiv 2020, arXiv:2003.01213v2. [Google Scholar]
- Jeong, J.; Cho, L.Y.; Kim, A. Road is Enough! Extrinsic Calibration of Non-overlapping Stereo Camera and LiDAR using Road Information. arXiv 2019, arXiv:1902.10586v2. [Google Scholar] [CrossRef] [Green Version]
- Huang, J.K.; Grizzle, J.W. Improvements to Target-Based 3D LiDAR to Camera Calibration. IEEE Access 2020, 8, 134101–134110. [Google Scholar] [CrossRef]
- UMich-BipedLab/extrinsic_lidar_camera_calibration: This is a package for extrinsic calibration between a 3D LiDAR and a camera, described in paper: Improvements to Target-Based 3D LiDAR to Camera Calibration. This package is used for Cassie Blue’s 3D LiDAR semantic mapping and automation. Available online: https://github.com/UMich-BipedLab/extrinsic_lidar_camera_calibration (accessed on 15 January 2021).
- Beltrán, J.; Guindel, C.; García, F. Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups. arXiv 2021, arXiv:2101.04431. [Google Scholar]
- velo2cam_calibration—ROS Wiki. Available online: http://wiki.ros.org/velo2cam_calibration (accessed on 15 January 2021).
- Dhall, A.; Chelani, K.; Radhakrishnan, V.; Krishna, K.M. LiDAR-Camera Calibration using 3D-3D Point correspondences. arXiv 2017, arXiv:1705.09785. [Google Scholar]
- Ankitdhall/Lidar_Camera_Calibration: ROS Package to Find a Rigid-Body Transformation between a LiDAR and a Camera for “LiDAR-Camera Calibration Using 3D-3D Point Correspondences”. Available online: https://github.com/ankitdhall/lidar_camera_calibration#usage (accessed on 16 July 2020).
- But_Calibration_Camera_Velodyne—ROS Wiki. Available online: http://wiki.ros.org/but_calibration_camera_velodyne (accessed on 16 July 2020).
- Yin, L.; Luo, B.; Wang, W.; Yu, H.; Wang, C.; Li, C. CoMask: Corresponding Mask-Based End-to-End Extrinsic Calibration of the Camera and LiDAR. Remote Sens. 2020, 12, 1925. [Google Scholar] [CrossRef]
- Autoware Camera-LiDAR Calibration Package—Autoware 1.9.0 Documentation. Available online: https://autoware.readthedocs.io/en/feature-documentation_rtd/DevelopersGuide/PackagesAPI/sensing/autoware_camera_lidar_calibrator.html (accessed on 15 January 2021).
- Guindel, C.; Beltrán, J.; Martín, D.; García, F. Automatic extrinsic calibration for lidar-stereo vehicle sensor setups. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 674–679. [Google Scholar]
- Products | Velodyne Lidar. Available online: https://velodynelidar.com/products/ (accessed on 18 January 2021).
- Sensor_Msgs—ROS Wiki. Available online: http://wiki.ros.org/sensor_msgs (accessed on 18 January 2021).
- Message_Filters—ROS Wiki. Available online: http://wiki.ros.org/message_filters (accessed on 17 July 2020).
- Chapter 9: Time Synchronization. Available online: https://www3.nd.edu/~cpoellab/teaching/cse40815/Chapter9.pdf (accessed on 22 March 2020).
- Kelly, J.; Sukhatme, G.S. A General Framework for Temporal Calibration of Multiple Proprioceptive and Exteroceptive Sensors. In Experiment Robotics; Khatib, O., Kumar, V., Sukhatme, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; Volume 79, pp. 195–209. [Google Scholar]
- Abdelmohsen, Y.K. Camera-LIDAR Detection Fusion. Bachelor’s Thesis, German University in Cairo, New Cairo City, Egypt, 2020. [Google Scholar]
- Olson, E. A passive solution to the sensor synchronization problem. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010. [Google Scholar]
- Peršić, J.; Petrović, L.; Marković, I.; Petrović, I. Spatio-Temporal Multisensor Calibration Based on Gaussian Processes Moving Object Tracking. arXiv 2019, arXiv:1904.04187. [Google Scholar]
- Unizg-fer-Lamor / Calirad—Bitbucket. Available online: https://bitbucket.org/unizg-fer-lamor/calirad/src/master/ (accessed on 15 May 2020).
- Spatiotemporal Multisensor Calibration via Gaussian Process Moving Target Tracking—YouTube. Available online: https://www.youtube.com/watch?v=vqTR6zMIKJs&ab_channel=LAMOR (accessed on 15 May 2020).
- Peršić, J.; University of Zagreb, Zagreb, Croatia; Yeong, D.J.; Munster Technological University, Tralee, Ireland. Personal Communication, 2020.
- Lee, C.-L.; Hsueh, Y.-H.; Wang, C.-C.; Lin, W.-C. Extrinsic and Temporal Calibration of Automotive Radar and 3D LiDAR. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020. [Google Scholar]
- Rangesh, A.; Yuen, K.; Satzoda, R.K.; Rajaram, R.N.; Gunaratne, P.; Trivedi, M.M. A Multimodal, Full-Surround Vehicular Testbed for Naturalistic Studies and Benchmarking: Design, Calibration and Deployment. arXiv 2019, arXiv:1709.07502v4. [Google Scholar]
- Lundquist, C. Sensor Fusion for Automotive Applications; Linköping University: Linköping, Sweden, 2011. [Google Scholar]
- Pollach, M.; Schiegg, F.; Knoll, A. Low Latency and Low-Level Sensor Fusion for Automotive Use-Cases. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
- Gu, S.; Zhang, Y.; Yang, J.; Alvarez, J.M.; Kong, H. Two-View Fusion based Convolutional Neural Network for Urban Road Detection. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019. [Google Scholar]
- Nobis, F.; Geisslinger, M.; Weber, M.; Betz, J.; Lienkamp, M. A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection. In Proceedings of the 2019 Sensor Data Fusion: Trends, Solutions, Applications (SDF), Bonn, Germany, 15–17 October 2019. [Google Scholar]
- Self-Driving Made Real—NAVYA. Available online: https://navya.tech/fr (accessed on 25 January 2021).
- Banerjee, K.; Notz, D.; Windelen, J.; Gavarraju, S.; He, M. Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018. [Google Scholar]
- Yoo, J.H.; Kim, Y.; Kim, J.; Choi, J.W. 3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-View Spatial Feature Fusion for 3D Object Detection. arXiv 2020, arXiv:2004.12636v2. [Google Scholar]
- Li, Y.; Jha, D.K.; Ray, A.; Wettergren, T.A. Feature level sensor fusion for target detection in dynamic environments. In Proceedings of the 2015 American Control Conference (ACC), Chicago, IL, USA, 1–3 July 2015. [Google Scholar]
- Visteon | Current Sensor Data Fusion Architectures: Visteon’s Approach. Available online: https://www.visteon.com/current-sensor-data-fusion-architectures-visteons-approach/ (accessed on 28 January 2021).
- Brena, R.F.; Aguileta, A.A.; Trejo, L.A.; Molino-Minero-Re, E.; Mayora, O. Choosing the Best Sensor Fusion Method: A Machine-Learning Approach. Sensors 2020, 20, 2350. [Google Scholar] [CrossRef]
- Rosique, F.; Navarro, P.J.; Fernández, C.; Padilla, A. A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research. Sensors 2019, 19, 648. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Ali, M.A.H.; Mailah, M.; Jabbar, W.A.; Moiduddin, K.; Ameen, W.; Alkhalefah, H. Autonomous Road Roundabout Detection and Navigation System for Smart Vehicles and Cities Using Laser Simulator–Fuzzy Logic Algorithms and Sensor Fusion. Sensors 2020, 20, 3694. [Google Scholar] [CrossRef]
- Kim, J.; Kim, J.; Cho, J. An advanced object classification strategy using YOLO through camera and LiDAR sensor fusion. In Proceedings of the 2019 13th International Conference on Signal Processing and Communication Systems (ICSPCS), Gold Coast, Australia, 16–18 December 2019. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv 2016, arXiv:1506.02640v5. [Google Scholar]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, M.H. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Lee, K.W.; Yoon, H.S.; Song, J.M.; Park, K.R. Convolutional Neural Network-Based Classification of Driver’s Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Sensors 2018, 18, 957. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sindagi, V.A.; Zhou, Y.; Tuzel, O. MVX-Net: Multimodal VoxelNet for 3D Object Detection. arXiv 2019, arXiv:1904.01649. [Google Scholar]
- Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. arXiv 2017, arXiv:1711.06396. [Google Scholar]
- Xu, D.; Anguelov, D.; Jain, A. PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation. arXiv 2018, arXiv:1711.10871v2. [Google Scholar]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. arXiv 2017, arXiv:1612.00593v2. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. arXiv 2015, arXiv:1512.02325. [Google Scholar]
- Zhou, X.; Wang, D.; Krähenbühl, P. Objects as Points. arXiv 2019, arXiv:1904.07850v2. [Google Scholar]
- O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Velasco-Hernandez, G.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep Learning vs. Traditional Computer Vision. arXiv 2019, arXiv:1910.13796. [Google Scholar]
- Bhanushali, D.R. Multi-Sensor Fusion for 3D Object Detection. Master’s Thesis, Rochester Institute of Technology, New York, NY, USA, 2020. [Google Scholar]
- Shi, W.; Bao, S.; Tan, D. FFESSD: An Accurate and Efficient Single-Shot Detector for Target Detection. Appl. Sci. 2019, 9, 4276. [Google Scholar] [CrossRef] [Green Version]
- Nabati, R.; Qi, H. CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection. arXiv 2020, arXiv:2011.04841v1. [Google Scholar]
- Roth, M.; Jargot, D.; Gavrila, D.M. Deep End-to-end 3D Person Detection from Camera and Lidar. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019. [Google Scholar]
- Zhou, Y.; Sun, P.; Zhang, Y.; Anguelov, D.; Gao, J.; Ouyang, T.; Guo, J.; Ngiam, J.; Vasudevan, V. End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds. arXiv 2019, arXiv:1910.06528v2. [Google Scholar]
- Elfring, J.; Appeldoorn, R.; van den Dries, S.; Kwakkernaat, M. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving. Sensors 2016, 16, 1668. [Google Scholar] [CrossRef] [Green Version]
- Floudas, N.; Polychronopoulos, A.; Aycard, O.; Burlet, J.; Ahrholdt, M. High Level Sensor Data Fusion Approaches for Object Recognition in Road Environment. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007. [Google Scholar]
- Kim, S.; Song, W.; Kim, S. Double Weight-Based SAR and Infrared Sensor Fusion for Automatic Ground Target Recognition with Deep Learning. Remote Sens. 2018, 10, 72. [Google Scholar] [CrossRef] [Green Version]
- Miller, R. Rolling Zettabytes: Quantifying the Data Impact of Connected Cars. Available online: https://datacenterfrontier.com/rolling-zettabytes-quantifying-the-data-impact-of-connected-cars/ (accessed on 1 February 2021).
- Liu, S.; Tang, J.; Zhang, Z.; Gaudiot, J.-L. CAAD: Computer Architecture for Autonomous Driving. arXiv 2017, arXiv:1702.01894. [Google Scholar]
- Knight, W. An Ambitious Plan to Build a Self-Driving Borg. Available online: https://www.technologyreview.com/2016/10/10/157091/an-ambitious-plan-to-build-a-self-driving-borg/ (accessed on 1 February 2021).
- Wiggers, K. Roboflow: Popular autonomous vehicle data set contains critical flaws | VentureBeat. Available online: https://venturebeat.com/2020/02/14/report-popular-autonomous-vehicle-data-set-contains-critical-flaws/ (accessed on 1 February 2021).
- Ren, K.; Zheng, T.; Qin, Z.; Liu, X. Adversarial Attacks and Defenses in Deep Learning. Engineering 2020, 6, 346–360. [Google Scholar] [CrossRef]
- Ma, X.; Niu, Y.; Gu, L.; Wang, Y.; Zhao, Y.; Bailey, J.; Lu, F. Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems. arXiv 2020, arXiv:1907.10456v2. [Google Scholar]
- Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
- Rawat, P. Environment Perception for Autonomous Driving: A 1/10 Scale Implementation of Low-Level Sensor Fusion Using Occupancy Grid Mapping. Master’s Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, March 2019. [Google Scholar]
- Kiran, B.R.; Sobh, I.; Talpaert, V.; Mannion, P.; Al Sallab, A.A.; Yogamani, S.; Pérez, P. Deep Reinforcement Learning for Autonomous Driving: A Survey. arXiv 2021, arXiv:2002.00444v2. [Google Scholar]
Reference | Summary |
---|---
Velasco-Hernandez et al. [15] | An overview of the AD architectures—technical and functional architectures depending on the domain of their definition. Further, the authors highlight the perception stage of self-driving solutions as a component, detailing the sensing component and sensor fusion techniques to perform localization, mapping, and obstacle detection. |
Fayyad et al. [19] | An overview of the state-of-the-art deep learning sensor fusion techniques and algorithms for perception, localization, and mapping. |
Campbell et al. [20] | A summary of sensor technologies, including their strengths and weaknesses, that were commonly used to develop an autonomous vehicle. Moreover, the authors examined some of the sensor fusion techniques that can be employed in both indoor and outdoor environments, and algorithms for obstacle detection, navigation, and environment modelling. |
Wang et al. [21] | A discussion of sensor technology and their performance in various conditions. The authors surveyed and presented a detailed summary of the multi-sensor fusion strategies in recent studies and techniques to establish motion model and data association in multi-target tracking. |
Yeong et al. [22] | A summary of advantages and disadvantages of perception-based sensors and the architecture of multi-sensor setup for obstacle detection in industrial environments. Moreover, the authors highlighted some of the challenges to temporal synchronize multiple data streams in AD applications. |
Jusoh, S. & Almajali, S. [23] | A discussion of the current state-of-the-art multi-sensor fusion techniques and approaches for various applications such as obstacle detection, localization, and mapping, in three major domains, namely robotics, military, and healthcare. |
Castanedo, F. [24] | A discussion of the classification of data fusion techniques based on several criteria and providing a comprehensive overview of the most employed methods and algorithms for data association, state estimation, and decision fusion tasks. |
Kuutti et al. [25] | An overview of deep learning approaches and methods for autonomous vehicle control, and the challenges to deep learning-based vehicle control. The authors considered these approaches for three categories of tasks: lateral (steering), longitudinal (acceleration and braking), and simultaneous lateral and longitudinal control, and discussed the relevant methods in detail. |
Hu et al. [26] | A discussion of the perception-based sensors for intelligent ground vehicles in off-road environment and a comprehensive review of the current state-of-the-art multi-sensor fusion approaches. In addition, the author summarized the main considerations of on-board multi-sensor configurations and reviewed the architectural structure of perception systems and applications for obstacle detection in diverse environments. |
Company | Model | Baseline (mm) | HFOV (°) | VFOV (°) | FPS (Hz) | Range (m) | Img Res (MP) | Depth Range (m) | Depth Res (MP) | Depth FPS (Hz) | Ref
---|---|---|---|---|---|---|---|---|---|---|---
Roboception | RC Visard 160 | 160 | 61 * | 48 * | 25 | 0.5–3 | 1.2 | 0.5–3 | 0.03–1.2 | 0.8–25 | [40,41] |
Carnegie Robotics® | MultiSense™ S7 1 | 70 | 80 | 49/80 | 30 max | - | 2/4 | 0.4 min | 0.5–2 | 7.5–30 | [40,42,43] |
MultiSense™ S21B 1 | 210 | 68–115 | 40–68 | 30 max | - | 2/4 | 0.4 min | 0.5–2 | 7.5–30 | [40,44] | |
Ensenso | N35-606-16-BL | 100 | 58 | 52 | 10 | 4 max | 1.3 | - | [40,45] | ||
Framos | D435e | 55 | 86 | 57 | 30 | 0.2–10 | 2 | 0.2 min | 0.9 | 30 | [40,46] |
Nerian | Karmin3 2 | 50/100/250 | 82 | 67 | 7 | - | 3 | 0.23/0.45/1.14 min | 2.7 | - | [40,47] |
Intel RealSense | D455 | 95 | 86 | 57 | 30 | 20 max | 3 | 0.4 min | ≤1 | ≤90 | [40,48] |
D435 | 50 | 86 | 57 | 30 | 10 max | 3 | 0.105 min | ≤1 | ≤90 | ||
D415 | 55 | 65 | 40 | 30 | 10 max | 3 | 0.16 min | ≤1 | ≤90 | ||
Flir® | Bumblebee2 3 | 120 | 66 | - | 48/20 | - | 0.3/0.8 | - | [40,49] | ||
Bumblebee XB3 3 | 240 | 66 | - | 16 | - | 1.2 | [50,51] |
Specification | Aptiv Delphi ESR 2.5 | Aptiv Delphi SRR2 | Continental ARS 408-21 | SmartMicro UMRR-96 T-153 1
---|---|---|---|---
Freq (GHz) | 76.5 | 76.5 | 76…77 | 79 (77…81) |
HFOV (°) | ±75 | |||
Short-Range | ±9 | ≥130 | ||
Mid-Range | ±45 | ≥130 | ||
Long-Range | ±10 | ±60 | ≥100 (squint beam) | |
VFOV (°) | 4.4 | 10 | 20 | 15 |
Short-Range | 14 | |||
Long-Range | ||||
Range (m) | 1–60 | 0.5–80 2 | ||
Short-Range | 1–175 2 | 0.2–70/100 | 0.15–19.3 3 | |
Mid-Range | 0.4–55 3 | |||
Long-Range | 0.2–250 | 0.8–120 3 | ||
Range Acc (m) | - | ±0.5 noise and ±0.5% bias | - | |
Short-Range | <0.15 or 1% (bigger of) | |||
Mid-Range | <0.30 or 1% (bigger of) | |||
Long-Range | <0.50 or 1% (bigger of) | |||
Vel Range (km/h) | - | - | -400…+200 4 | |
Short-Range | −400…+140 4 | |||
Mid-Range | −340…+140 4 | |||
Long-Range | −340…+140 4 | |||
IO Interfaces | CAN/Ethernet 5 | PCAN | CAN | CAN/Automotive Ethernet |
ROS Drivers | [101,102] | [103] | [104] | |
Reference | [51,105,106,107,108,109] | [110,111,112] | [113] |
Ref | S | M | L | R | Platform | Toolbox | Calibration Target |
---|---|---|---|---|---|---|---|
[145] 1 | ✓ | * | ✓ | ✓ | ROS | [146] | Styrofoam planar with four circular holes and a copper plate trihedral corner reflector. |
[148] | ~ | ✓ | ✓ | ✓ | - | - | Checkerboard triangular pattern with trihedral corner retroreflector. |
[152] | ✓ | ✖ | ✓ | ✖ | MATLAB | [153] | LiDARTag 2 and AprilTag 2. |
[154] 3 | ✓ | * | ✓ | ✖ | ROS | [155] | Planar with four circular holes and four ArUco markers 4 around the planar corners. |
[156] | ✓ | * | ✓ | ✖ | ROS | [157] | ArUco marker on one corner of the hollow rectangular planar cardboard marker. |
[143] | ~ | ✓ | ✓ | ✖ | ROS | [158] | 3D marker with four circular holes pattern. |
[159] | ~ | ✓ | ✓ | ✖ | ROS | [160] | Planar checkerboard pattern. |
Detector | Subscribed Topic Name | ROS Sensor Message Types |
---|---|---|
LiDAR | /velodyne_points | sensors_msgs::PointCloud2 |
Stereo | /ueye/left/image_rect_color /ueye/left/camera_info /ueye/right/camera_info /ueye/disparity | sensor_msgs::Image sensor_msgs::CameraInfo sensor_msgs::CameraInfo stereo_msgs::DisparityImage |
Monocular | /ueye/left/image_rect_color /ueye/left/camera_info | sensor_msgs::Image sensor_msgs::CameraInfo |
Radar | /radar_converter/detections | radar_msgs::RadarDetectionArray 1 |
Factors | Camera | LiDAR | Radar | Fusion |
---|---|---|---|---|
Range | ~ | ~ | ✓ | ✓ |
Resolution | ✓ | ~ | ✖ | ✓ |
Distance Accuracy | ~ | ✓ | ✓ | ✓ |
Velocity | ~ | ✖ | ✓ | ✓ |
Color Perception, e.g., traffic lights | ✓ | ✖ | ✖ | ✓ |
Object Detection | ~ | ✓ | ✓ | ✓ |
Object Classification | ✓ | ~ | ✖ | ✓ |
Lane Detection | ✓ | ✖ | ✖ | ✓ |
Obstacle Edge Detection | ✓ | ✓ | ✖ | ✓ |
Illumination Conditions | ✖ | ✓ | ✓ | ✓ |
Weather Conditions | ✖ | ~ | ✓ | ✓ |
(a)
Sensor Fusion Approaches | Descriptions | Strengths | Weaknesses
---|---|---|---
High-Level Fusion (HLF) | Each sensor carries out detection or tracking algorithm separately and subsequently combines the result into one global decision. | Lower complexity and requires less computational load and communication resources. Further, HLF enables standardizing the interface towards the fusion algorithm and does not necessitate an in-depth understanding of the signal processing algorithms involved. | Provides inadequate information as classifications with a lower confidence value are discarded. Furthermore, fine-tuning the fusion algorithms has a negligible impact on the data accuracy or latency. |
Low-Level Fusion (LLF) | Sensor data are integrated at the lowest level of abstraction (raw data) to be of better quality and more informative. | Sensor information is retained and provides more accurate data (a lower signal-to-noise ratio) than the individual sensors operating independently. As a result, it has the potential to improve the detection accuracy. In addition, LLF reduces latency where the domain controller does not have to wait for the sensor to process the data before acting upon it. This can help to speed up the performance—of particular importance in time-critical systems. | Generates large amount of data that could be an issue in terms of memory or communication bandwidth. Further, LLF requires precise calibration of sensors to accurately fuse their perceptions and it may pose a challenge to handle incomplete measurements. Although multi-source data can be fused to the maximum extent, there is data redundancy, which results in low fusion efficiency. |
Mid-Level Fusion (MLF) | Extracts contextual descriptions or features from each sensor data (raw measurements) and subsequently fuses the features from each sensor to produce a fused signal for further processing. | Generates small information spaces and requires less computation load than LLF approaches. Further, MLF approach provides a powerful feature vector and the features selection algorithms that detect corresponding features and features subsets can improve the recognition accuracy. | Requires large training sets to find the most significant feature subset. It requires precise sensor calibration before extracting and fusing the features from each sensor. |
(b)
Algorithms | Descriptions | Advantages and Drawbacks | Reference
---|---|---|---
YOLO | You Only Look Once (YOLO) is a single-stage detector that predicts bounding boxes and produces class probabilities with confidence scores on an image in a single CNN 1. | - | [19,187,188]
SSD | Single-Shot Multibox Detector (SSD) is a single-stage CNN detector that discretizes bounding boxes into a set of boxes with different sizes and aspect ratios to detect obstacles of varying sizes. | - | [19,196,200]
VoxelNet | A generic 3D obstacle detection network that unifies feature extraction and bounding-box prediction into a single-stage, end-to-end trainable deep network; in other words, a voxelized method for obstacle detection using point cloud data. | - | [192,202]
PointNet | A permutation-invariant deep neural network that learns global features from unordered point clouds (two-stage detection). | - | [194,202]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).