Mobile Robot Navigation with Enhanced 2D Mapping and Multi-Sensor Fusion
Abstract
1. Introduction
- Point Cloud Projection Broadcasting: We implemented a system that projects RGB-D camera point clouds into 2D laser scans, preparing the data for seamless integration with the other sensors by using transformation matrices and existing ROS packages; see Figure 1b (an illustrative sketch follows this list).
- Modular Multi-Sensor Fusion with Noise Filtering: The system fuses multiple sensors in parallel (early fusion) and is designed for adaptability and modularity, so it continues functioning if one sensor fails. A noise-filtering block after each sensor improves data quality and reliability; see Figure 1a,b (a sketch is given under Section 3.1.2).
- Enhanced Gmapping with Adaptive Resampling: We improved Gmapping by adding adaptive resampling and degeneracy handling. Resampling occurs only when needed, ensuring a well-balanced particle distribution with proper weights (see Figure 2).
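To illustrate the projection idea summarized above, the following minimal sketch flattens a point cloud (assumed to already be expressed in the robot's base frame) into 2D laser-scan ranges by keeping points inside a height band and taking the nearest return per bearing. In the actual system this role is handled by existing ROS packages (e.g., pointcloud_to_laserscan); the function name `project_to_scan`, the height limits, and the angular resolution below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_to_scan(points_xyz, angle_min=-np.pi, angle_max=np.pi,
                    angle_increment=np.deg2rad(0.5),
                    z_min=0.05, z_max=1.0, range_max=8.0):
    """Flatten a 3D point cloud (already in the robot base frame) into
    2D laser-scan ranges: keep points inside a height band, bin them by
    bearing, and keep the closest range per angular bin (illustrative only)."""
    n_bins = int(np.ceil((angle_max - angle_min) / angle_increment))
    ranges = np.full(n_bins, np.inf)

    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    keep = (z >= z_min) & (z <= z_max)            # height band filter
    r = np.hypot(x[keep], y[keep])                # planar range
    theta = np.arctan2(y[keep], x[keep])          # bearing

    valid = (r > 0.0) & (r <= range_max)
    bins = ((theta[valid] - angle_min) / angle_increment).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    np.minimum.at(ranges, bins, r[valid])         # nearest obstacle per bin
    return ranges

# Example: a single obstacle point 2 m straight ahead at 0.5 m height
scan = project_to_scan(np.array([[2.0, 0.0, 0.5]]))
print(np.isfinite(scan).sum(), "occupied bins")
```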
2. Related Work
2.1. Feature-Based SLAM Approaches
2.2. Direct SLAM Approaches
3. Methodology
3.1. Multi-Sensor Fusion Framework
3.1.1. RGB-D Camera and LiDAR Fusion
3.1.2. Sensor Data Integration
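As a rough illustration of the sensor data integration described in the Introduction (per-sensor noise filtering followed by early fusion into a single virtual scan, as done with tools such as ira_laser_tools), the sketch below assumes all sensors report ranges on a common angular grid. The sliding-median filter, the closest-return merge rule, and the helper names `denoise_scan` and `merge_scans` are illustrative assumptions rather than the exact pipeline.

```python
import numpy as np

def denoise_scan(ranges, window=3):
    """Per-sensor noise-filtering block: a simple sliding median that
    suppresses isolated spikes in the range readings (illustrative)."""
    ranges = np.asarray(ranges, dtype=float)
    half = window // 2
    padded = np.pad(ranges, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(ranges))])

def merge_scans(scans):
    """Early fusion of several 2D scans on the same angular grid: take the
    closest valid return per bearing, so a failed sensor (all-inf ranges)
    simply drops out and the remaining sensors keep the system running."""
    stacked = np.vstack([denoise_scan(s) for s in scans])
    return np.nanmin(np.where(np.isinf(stacked), np.nan, stacked), axis=0)

# Two sensors over the same 5 bearings; sensor B has a spike and a partial dropout
scan_a = [2.0, 2.1, 9.9, 2.0, 2.2]        # spike at index 2
scan_b = [np.inf, 2.0, 2.1, np.inf, 2.1]  # missing returns (inf)
print(merge_scans([scan_a, scan_b]))
```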
3.2. Gmapping Enhancements
3.2.1. Adaptive Resampling
1. Compute the cumulative sum of particle weights: $c_i = \sum_{j=1}^{i} w^{(j)}$, where the $w^{(j)}$ are normalized so that $c_N = 1$.
2. Generate $N$ random values $u_k$ from a uniform distribution $\mathcal{U}(0, 1)$.
3. Select a particle for each random value $u_k$ as the index $i$ satisfying $c_{i-1} < u_k \leq c_i$.
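A minimal Python sketch of the procedure above, adding the effective-sample-size trigger that makes the resampling adaptive (resampling is skipped while the weights remain well balanced). The threshold of $N/2$ and the name `adaptive_resample` are illustrative assumptions.

```python
import numpy as np

def adaptive_resample(particles, weights, threshold_ratio=0.5, rng=None):
    """Resample only when the effective sample size drops below a threshold,
    then follow the three steps above: cumulative weights, N uniform draws,
    and selection of the particle whose cumulative weight brackets each draw."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                # normalize so c_N = 1
    n = len(weights)

    n_eff = 1.0 / np.sum(weights ** 2)               # effective sample size
    if n_eff >= threshold_ratio * n:                 # weights still well balanced:
        return list(particles), weights              # skip resampling

    cumulative = np.cumsum(weights)                  # step 1: cumulative sum c_i
    draws = rng.uniform(0.0, 1.0, size=n)            # step 2: N uniform values u_k
    indices = np.searchsorted(cumulative, draws)     # step 3: c_{i-1} < u_k <= c_i
    new_particles = [particles[i] for i in indices]
    new_weights = np.full(n, 1.0 / n)                # reset weights after resampling
    return new_particles, new_weights

# Degenerate example: one particle carries almost all the weight, so resampling fires
poses = [(0.0, 0.0), (1.0, 0.2), (0.9, -0.1), (1.1, 0.1)]
print(adaptive_resample(poses, [0.94, 0.02, 0.02, 0.02]))
```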
3.2.2. Degeneracy Handling
Algorithm 1: Enhanced Gmapping Algorithm
3.2.3. System Integration
4. Experiment Setup
4.1. Experiment Environment Design
- Simple Environment: This setup has only a few obstacles, such as tables, keyboards, and sofas. The paths are wide and clear, making navigation easy for the robot. It provides a basic test of movement without major challenges.
- Moderate Environment: This setup adds more obstacles, like lamp holders and standing humans, to make navigation harder. The paths are narrower, requiring the robot to move carefully and adjust its route when needed. This helps test how well the robot can handle slightly more difficult spaces.
- Complex Environment: This setup is the most challenging. More obstacles are placed while the robot is moving, and the paths are made even narrower. This forces the robot to make precise movements and smart decisions to avoid collisions. It tests how well the robot adapts to unpredictable situations.
4.2. Comparison Formulas Preparation
4.3. Simulation
4.3.1. Rviz and Gazebo Object Projection
4.3.2. Mapping and Localization
A. Gmapping and AMCL
B. RTAB-Map
4.3.3. SLAM and Navigation
4.3.4. Path Planning
1. Global Path Planning
2. Local Path Planning
4.4. Implementation
- Traveled Distance: Using our EGM, the total traveled distance was 14.95 m, compared to 16.10 m with the original GM. This reduction indicates improved estimation of the robot's position from the particle distribution combined with the fused sensor information.
- Time Required: The journey with EGM was completed in 64.70 s, whereas GM required 70.90 s.
- Goal Achievement: Both methods successfully reached all goal points (100% success rate), confirming that EGM matches classical GM in goal achievement.
- Overlap Ratio: The overall overlap ratio with EGM was 68%, while GM had 33.7%. The higher overlap ratio of EGM suggests that the particles were more focused, reducing unnecessary dispersion and improving positioning precision.
- Average Error: The cumulative localization error with EGM was 0.85 m, while GM had a higher error of 1.56 m. This 45.5% reduction demonstrates the effectiveness of adaptive resampling in maintaining accurate position estimates (a quick arithmetic check follows this list).
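As a quick consistency check, the percentages follow directly from the figures reported above:

$$\frac{16.10 - 14.95}{16.10} \approx 7.1\%\ \text{(distance)}, \qquad \frac{70.90 - 64.70}{70.90} \approx 8.7\%\ \text{(time)}, \qquad \frac{1.56 - 0.85}{1.56} \approx 45.5\%\ \text{(average error)}.$$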
5. Results and Discussion
5.1. Evaluation Metrics
5.2. Sensor Configuration Analysis
1. AS (C + L): All sensors, incorporating both cameras and LiDARs.
2. FB (C + L): Front and back cameras and LiDARs.
3. LRFB (C): Just left, right, front, and back cameras.
4. FB (L): Just front and back LiDAR sensors.
5.3. Environmental Configuration Analysis
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Sensor Configuration | AS (C + L) | | | FB (C + L) | | | LRFB (C) | | | FB (L) | |
---|---|---|---|---|---|---|---|---|---|---|---|---
Segment | D (m) | T (s) | G | D (m) | T (s) | G | D (m) | T (s) | G | D (m) | T (s) | G
S0 | 6.0 | 26.6 | ✓ | 5.9 | 25.5 | ✓ | 5.9 | 36.0 | ✓ | 6.5 | 41.0 | ✓ |
S1 | 4.8 | 19.6 | ✓ | 4.6 | 19.2 | ✓ | 4.7 | 19.4 | ✓ | 7.0 | 35.2 | ✗ |
S2 | 5.1 | 23.5 | ✓ | 5.1 | 23.4 | ✓ | 6.2 | 28.7 | ✓ | 6.1 | 36.3 | ✓ |
S3 | 6.4 | 27.0 | ✓ | 6.1 | 26.5 | ✓ | 5.5 | 18.9 | ✓ | 5.1 | 37.2 | ✗ |
S4 | 4.8 | 20.3 | ✓ | 5.9 | 27.2 | ✓ | 7.3 | 22.3 | ✓ | 6.1 | 38.1 | ✗ |
S5 | 4.0 | 16.5 | ✓ | 3.9 | 16.4 | ✓ | 3.9 | 24.3 | ✓ | 5.0 | 22.2 | ✓ |
S6 | 6.5 | 27.2 | ✓ | 7.4 | 32.2 | ✓ | 6.6 | 34.3 | ✓ | 7.1 | 24.2 | ✗ |
S7 | 5.6 | 22.3 | ✓ | 5.5 | 22.4 | ✓ | 4.6 | 22.4 | ✓ | 6.1 | 26.3 | ✗ |
S8 | 4.5 | 18.3 | ✓ | 5.1 | 18.4 | ✓ | 4.5 | 26.4 | ✗ | 5.1 | 35.1 | ✓ |
S9 | 4.1 | 22.4 | ✓ | 5.5 | 34.3 | ✓ | 6.8 | 42.3 | ✓ | 5.1 | 29.3 | ✗ |
Total, Avg | 51.8 | 223.7 | 100% | 54.3 | 245.5 | 100% | 56.0 | 275.0 | 90% | 59.2 | 325.0 | 40% |
Parameter | AS (C + L) | FB (C + L) | LRFB (C) | FB (L)
---|---|---|---|---
Charge Decrease (%) | 5.3 | 5.5 | 4.67 | 6.2
Voltage Decrease (%) | 2.6 | 2.8 | 2.3 | 2.2
CPU Usage (%) | 0.01 | 0.09 | 0.19 | 0.57
Average Linear Velocity (m/s) | 0.15, Stable | 0.15, Stable | 0.15, Not Stable | 0.15, Not Stable
Average Angular Velocity (rad/s) | 0.5, Stable | 0.5, Stable | 0.5, Not Stable | 0.5, Not Stable
Environment | Simple Environment | | | | | | Moderate Environment | | | | | | Complex Environment | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Metric | D (m) | | T (s) | | Goal | | D (m) | | T (s) | | Goal | | D (m) | | T (s) | | Goal |
Segment | EGM | RTAB | EGM | RTAB | EGM | RTAB | EGM | RTAB | EGM | RTAB | EGM | RTAB | EGM | RTAB | EGM | RTAB | EGM | RTAB
Localization (L) | 0.00 | 12.59 | 0.00 | 63.01 | - | - | 0.00 | 3.14 | 0.00 | 15.74 | - | - | 0.00 | 3.14 | 0.00 | 15.78 | - | - |
S0 | 5.96 | 5.73 | 24.92 | 27.78 | ✓ | ✓ | 4.01 | 3.74 | 23.06 | 20.32 | ✓ | ✓ | 5.70 | 7.54 | 30.08 | 38.67 | ✓ | ✓ |
S1 | 5.03 | 4.41 | 19.93 | 18.22 | ✓ | ✓ | 4.06 | 4.21 | 19.71 | 20.45 | ✓ | ✓ | 8.08 | 9.09 | 39.06 | 47.39 | ✓ | ✓ |
S2 | 5.19 | 5.10 | 24.10 | 22.93 | ✓ | ✓ | 5.73 | 4.87 | 26.22 | 21.03 | ✓ | ✓ | 6.20 | 6.29 | 23.73 | 23.60 | ✓ | ✓ |
S3 | 6.37 | 6.10 | 26.74 | 25.42 | ✓ | ✓ | 3.76 | 4.24 | 17.42 | 18.94 | ✓ | ✓ | 7.03 | 7.11 | 28.62 | 28.92 | ✓ | ✓ |
S4 | 4.74 | 5.08 | 20.84 | 60.04 | ✓ | ✗ | 6.96 | 7.41 | 28.44 | 29.81 | ✓ | ✓ | 5.45 | 5.52 | 22.72 | 23.04 | ✓ | ✗ |
S5 | 3.96 | 4.93 | 16.60 | 36.40 | ✓ | ✓ | 6.95 | 8.01 | 28.32 | 33.34 | ✓ | ✗ | 4.50 | 4.60 | 20.14 | 20.43 | ✓ | ✓ |
S6 | 6.66 | 0.00 | 27.50 | 14.06 | ✓ | ✗ | 7.06 | 7.28 | 26.80 | 28.61 | ✓ | ✓ | 5.88 | 5.90 | 20.74 | 20.73 | ✓ | ✓ |
S7 | 5.55 | 3.80 | 22.43 | 15.90 | ✓ | ✓ | 4.58 | 3.80 | 18.15 | 14.13 | ✓ | ✓ | 11.29 | 11.43 | 41.64 | 42.16 | ✓ | ✓ |
S8 | 4.48 | 3.97 | 18.63 | 16.52 | ✓ | ✓ | 4.70 | 4.64 | 18.20 | 18.16 | ✓ | ✓ | 8.76 | 8.59 | 35.44 | 31.95 | ✓ | ✗ |
S9 | 5.10 | 5.21 | 28.34 | 27.37 | ✓ | ✓ | 6.50 | 5.18 | 32.51 | 34.40 | ✓ | ✓ | 3.68 | 4.16 | 25.12 | 42.33 | ✓ | ✓ |
Total | 53.04 | 56.92 | 230.04 | 327.65 | 100% | 80% | 54.31 | 56.52 | 238.83 | 254.93 | 100% | 90% | 66.65 | 73.37 | 287.29 | 335.00 | 100% | 80% |
Environment | Simple Environment | | Moderate Environment | | Complex Environment |
---|---|---|---|---|---|---
Parameter | EGM | RTAB | EGM | RTAB | EGM | RTAB
Charge Decrease (%) | 5.31 | 4.43 | 5.43 | 5.34 | 6.66 | 7.02
Voltage Decrease (%) | 2.65 | 2.22 | 2.72 | 2.67 | 3.33 | 3.51
CPU Usage (%) | 0.04 | 0.14 | 0.4 | 0.84 | 0.55 | 0.92
Average Linear Velocity (m/s) | 0.15, Stable | 0.15, Stable | 0.15, Stable | 0.15, Stable | 0.15, Stable | 0.15, Not Stable
Average Angular Velocity (rad/s) | 0.5, Stable | 0.5, Not Stable | 0.5, Stable | 0.5, Stable | 0.5, Stable | 0.5, Stable
Segment Length (m) | Traveled Distance (m) | | Time (s) | | Goal Achievement | | Overlap Ratio (%) | | Average Error (m) |
---|---|---|---|---|---|---|---|---|---|---
 | EGM | GM | EGM | GM | EGM | GM | EGM | GM | EGM | GM
2.0 | 2.10 | 2.25 | 8.80 | 9.50 | ✓ | ✓ | 48.00 | 35.00 | 0.05 | 0.06
2.5 | 2.65 | 2.80 | 11.10 | 12.61 | ✓ | ✓ | 68.00 | 33.00 | 0.15 | 0.30
1.0 | 1.10 | 1.25 | 5.20 | 6.50 | ✓ | ✓ | 67.00 | 36.00 | 0.10 | 0.20
1.5 | 1.65 | 1.75 | 9.85 | 10.55 | ✓ | ✓ | 71.00 | 32.00 | 0.15 | 0.35
3.2 | 3.45 | 3.55 | 14.55 | 15.37 | ✓ | ✓ | 75.00 | 32.00 | 0.25 | 0.37
3.8 | 4.00 | 4.50 | 15.20 | 16.37 | ✓ | ✓ | 78.00 | 34.00 | 0.15 | 0.28
Total/Avg, 14.0 | 14.95 | 16.10 | 64.70 | 70.90 | 100% | 100% | 68.00 | 33.70 | 0.85 | 1.56