A Framework for Autonomous UAV Navigation Based on Monocular Depth Estimation
Abstract
1. Introduction
2. Related Work
2.1. Localization
2.2. Monocular Depth Estimation
2.3. Path Planning
2.4. Autonomous UAV Navigation
3. Materials and Methods
3.1. Simulation Environment
3.2. System Architecture
- Depth estimation module (DEM). This module estimates depth images from the RGB camera feed provided by the simulation environment. For this task, the Depth Anything V2 model was used.
- Mapper module (MM). The MM builds and iteratively updates an occupancy-map-based 3D representation of the environment, using the depth images supplied by the DEM together with the camera position and orientation retrieved from the simulation environment.
- Navigation module (NM). This module finds viable trajectories to specified goal points in the continuously updated 3D map. Its path-finding logic is based on the A* algorithm, and its output is fed back to the simulation environment.
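To make the data flow between the three modules concrete, the following Python sketch wires them together in a single control loop. Every name in it (classes, methods, attributes) is an illustrative assumption, not the authors' implementation:

```python
# Hypothetical glue code illustrating the DEM -> MM -> NM data flow.
class NavigationPipeline:
    def __init__(self, depth_model, mapper, planner, sim):
        self.depth_model = depth_model  # DEM: e.g., a Depth Anything V2 wrapper
        self.mapper = mapper            # MM: occupancy-grid map builder
        self.planner = planner          # NM: A*-based path finder
        self.sim = sim                  # simulation interface (e.g., an AirSim client)

    def step(self, goal):
        rgb = self.sim.get_rgb_image()                 # RGB feed from the simulator
        pose = self.sim.get_camera_pose()              # camera position + orientation
        depth = self.depth_model.predict(rgb)          # DEM: RGB -> depth image
        self.mapper.integrate(depth, pose)             # MM: update the 3D occupancy map
        path = self.planner.plan(pose.position, goal,  # NM: A* over the current map
                                 self.mapper.grid)
        self.sim.follow(path)                          # feed the path back to the simulator
```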
3.3. Depth Estimation
3.3.1. Preparing Synthetic Data for Depth Estimation Models
- Number of coordinates, defining how many points are generated per height level;
- Map boundaries, specifying the range in which the drone can be placed;
- Height list, specifying the predefined heights (1, 3, 5, 7, and 9 m) for the generated coordinates along the z-axis;
- Camera IDs, identifying the cameras used to capture depth and RGB images. Cameras were positioned at 0, 45, 90, 135, 180, 225, 270, and 315 degrees, allowing images to be captured from all sides at each coordinate.
Algorithm 1. Data generation script

```text
Input: Number of coordinates num_coord; map boundaries x_start, x_end, y_start, y_end;
       height list z_heights. Constants: camera ID list camera_ids.
Output: Saved images.
 1: Generate random coordinates:
 2: Initialize coordinate list coord_list
 3: For z in z_heights:
 4:   For i = 1 to num_coord:
 5:     coord_list.add(random(x_start, x_end), random(y_start, y_end), z)
 6:   End For
 7: End For
 8: Start and stabilize the drone in the environment
 9: For coordinate in coord_list:
10:   Spawn and hover the drone at coordinate
11:   For camera in camera_ids:
12:     Save the depth and RGB image from the camera
13:   End For
14: End For
```
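A runnable version of Algorithm 1 could look roughly like the sketch below, using the AirSim Python client. The camera names, the coordinate ranges, and the `DepthPlanar` image type (named `DepthPlanner` in older AirSim releases) are assumptions; the eight cameras would have to be defined in the simulator's settings.json:

```python
import random
import airsim  # AirSim Python client

NUM_COORD = 10                                          # points per height level (assumed)
X_START, X_END, Y_START, Y_END = -100, 100, -100, 100   # map boundaries (assumed)
Z_HEIGHTS = [1, 3, 5, 7, 9]                             # predefined heights in metres
CAMERA_IDS = [f"cam_{a}" for a in range(0, 360, 45)]    # assumed names from settings.json

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()                            # start and stabilize the drone

# Steps 1-7: generate the random coordinate list.
coord_list = [(random.uniform(X_START, X_END), random.uniform(Y_START, Y_END), z)
              for z in Z_HEIGHTS for _ in range(NUM_COORD)]

# Steps 9-14: hover at each coordinate and capture from every camera.
for x, y, z in coord_list:
    # AirSim uses an NED frame, so a height of z metres is a z-coordinate of -z.
    pose = airsim.Pose(airsim.Vector3r(x, y, -z), airsim.to_quaternion(0, 0, 0))
    client.simSetVehiclePose(pose, ignore_collision=True)
    for cam in CAMERA_IDS:
        responses = client.simGetImages([
            airsim.ImageRequest(cam, airsim.ImageType.Scene, False, False),
            airsim.ImageRequest(cam, airsim.ImageType.DepthPlanar, True),
        ])
        # ... decode `responses` and write the RGB/depth pair to disk (omitted)
```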
3.3.2. Process for Fine-Tuning Depth Estimation Model
3.4. Occupancy Grid Map
Algorithm 2. Depth Image to Point Cloud Conversion

```text
Input: Depth image dimg; field of view fov; image dimensions (H, W); stride stride;
       minimum depth dmin; maximum depth dmax.
Output: Point cloud pcd in the ENU coordinate system.
 1: Initialize empty point cloud pcd
 2: Compute horizontal FOV in radians: hfov_rad = fov · π/180.0
 3: Calculate focal lengths: fx = fy = (W/2.0)/tan(hfov_rad/2.0)
 4: Set principal point coordinates: cx = W/2.0, cy = H/2.0
 5: For v = 0 to H step stride:
 6:   For u = 0 to W step stride:
 7:     Retrieve depth value z = dimg[v, u]
 8:     If dmin ≤ z < dmax then:
 9:       Compute x = (u − cx) · z/fx
10:       Compute y = (v − cy) · z/fy
11:       Rotate point [z, −x, −y] into the ENU frame and add it to pcd
12:     End If
13:   End For
14: End For
15: Return point cloud pcd
```
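A direct NumPy transcription of Algorithm 2 is sketched below. The default values for `stride`, `dmin`, and `dmax`, and the identity-rotation fallback, are assumptions; in practice `R` would come from the camera's orientation in the ENU frame:

```python
import numpy as np

def depth_to_pointcloud(dimg, fov_deg, stride=4, dmin=0.1, dmax=50.0, R=None):
    """Back-project a depth image into a point cloud (sketch of Algorithm 2)."""
    H, W = dimg.shape
    hfov_rad = np.deg2rad(fov_deg)                     # step 2
    fx = fy = (W / 2.0) / np.tan(hfov_rad / 2.0)       # step 3
    cx, cy = W / 2.0, H / 2.0                          # step 4
    R = np.eye(3) if R is None else R                  # camera-to-ENU rotation

    pts = []
    for v in range(0, H, stride):                      # steps 5-6
        for u in range(0, W, stride):
            z = dimg[v, u]
            if dmin <= z < dmax:                       # step 8: depth validity gate
                x = (u - cx) * z / fx                  # step 9
                y = (v - cy) * z / fy                  # step 10
                # Camera axes (x right, y down, z forward) -> ENU-style [z, -x, -y]
                pts.append(R @ np.array([z, -x, -y]))  # step 11
    return np.asarray(pts)
```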
3.5. Path Planning and Collision Avoidance
- A new obstacle is detected inside set E;
- Replanning is also performed if the drone drifts away from the planned path by more than a specified distance d, i.e., if the distance between the drone's current position and the nearest point of the planned path exceeds d;
- Intermediate goal point I is reached. Point I is the voxel with the best known distance heuristic to goal G, selected after the planner fails to find a complete path to G within the allotted number of iterations.
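As a rough illustration of the three triggers listed above, a per-step replanning check could look like the following sketch. All names, and the nearest-waypoint drift test, are assumptions rather than the authors' implementation:

```python
import numpy as np

def needs_replan(path, drone_pos, newly_occupied, expansion_set, d, intermediate_goal):
    """Return True when any of the three replanning conditions fires.

    `path` is the current waypoint list, `newly_occupied` the set of voxels that
    became occupied this step, `expansion_set` the set E around the path, `d` the
    drift threshold, and `intermediate_goal` the point I (or None).
    """
    # 1. A new obstacle was detected inside set E.
    if newly_occupied & expansion_set:
        return True
    # 2. The drone drifted farther than d from the planned path
    #    (approximated here by the distance to the nearest waypoint).
    drift = np.linalg.norm(np.asarray(path) - np.asarray(drone_pos), axis=1).min()
    if drift > d:
        return True
    # 3. The intermediate goal I was reached, so planning toward G resumes.
    if intermediate_goal is not None and np.allclose(drone_pos, intermediate_goal, atol=d):
        return True
    return False
```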
4. Results
4.1. Fine-Tuning of Depth Estimation Models
4.2. Autonomous Navigation Simulation Results
4.2.1. General Experimentation Results
4.2.2. Acquired AirSim Depth Images vs. Fine-Tuned Depth Estimation
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Environment | Voxel Size (m) | AirSim Depth Images | Fine-Tuned Model | Pretrained Model
---|---|---|---|---
AirSimNH | 0.5 | 2.72 | 2.77 | 3.10
AirSimNH | 1 | 2.99 | 3.03 | 3.15
AirSimNH | 2 | 3.32 | 3.15 | 3.39
AirSimNH | Overall | 3.01 | 2.98 | 3.21
Blocks | 0.5 | 3.88 | 3.59 | 4.00
Blocks | 1 | 4.32 | 3.86 | 4.30
Blocks | 2 | 4.35 | 4.00 | 4.38
Blocks | Overall | 4.19 | 3.82 | 4.22
MSBuild2018 | 0.5 | 3.03 | 3.57 | 3.67
MSBuild2018 | 1 | 3.64 | 3.79 | 3.91
MSBuild2018 | 2 | 4.08 | 3.91 | 4.17
MSBuild2018 | Overall | 3.58 | 3.76 | 3.92
Appendix B
References
- Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned Aerial Vehicles (UAVs): A Survey on Civil Applications and Key Research Challenges. IEEE Access 2019, 7, 48572–48634.
- Gyagenda, N.; Hatilima, J.V.; Roth, H.; Zhmud, V. A Review of GNSS-Independent UAV Navigation Techniques. Rob. Auton. Syst. 2022, 152, 104069.
- Yang, N.; Wang, R.; Gao, X.; Cremers, D. Challenges in Monocular Visual Odometry: Photometric Calibration, Motion Bias and Rolling Shutter Effect. arXiv 2018, arXiv:1705.04300.
- Arampatzakis, V.; Pavlidis, G.; Mitianoudis, N.; Papamarkos, N. Monocular Depth Estimation: A Thorough Review. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 2396–2414.
- Wang, C.; Ma, H.; Chen, W.; Liu, L.; Meng, M.Q.-H. Efficient Autonomous Exploration with Incrementally Built Topological Map in 3-D Environments. IEEE Trans. Instrum. Meas. 2020, 69, 9853–9865.
- Faria, M.; Ferreira, A.S.; Pérez-Leon, H.; Maza, I.; Viguria, A. Autonomous 3D Exploration of Large Structures Using an UAV Equipped with a 2D LIDAR. Sensors 2019, 19, 4849.
- Vanegas, F.; Gaston, K.J.; Roberts, J.; Gonzalez, F. A Framework for UAV Navigation and Exploration in GPS-Denied Environments. In Proceedings of the 2019 IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2019; pp. 1–6.
- Chang, Y.; Cheng, Y.; Manzoor, U.; Murray, J. A Review of UAV Autonomous Navigation in GPS-Denied Environments. Rob. Auton. Syst. 2023, 170, 104533.
- Shah, S.; Dey, D.; Lovett, C.; Kapoor, A. AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles. In Proceedings of the Field and Service Robotics, Zürich, Switzerland, 13–15 September 2017.
- Lu, Y.; Xue, Z.; Xia, G.-S.; Zhang, L. A Survey on Vision-Based UAV Navigation. Geo-Spat. Inf. Sci. 2018, 21, 21–32.
- Arafat, M.Y.; Alam, M.M.; Moh, S. Vision-Based Navigation Techniques for Unmanned Aerial Vehicles: Review and Challenges. Drones 2023, 7, 89.
- Horn, B.K.P.; Schunck, B.G. Determining Optical Flow. Artif. Intell. 1981, 17, 185–203.
- Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the IJCAI’81: 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; Volume 2, pp. 674–679.
- Santos-Victor, J.; Sandini, G.; Curotto, F.; Garibaldi, S. Divergent Stereo for Robot Navigation: Learning from Bees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 15–17 June 1993; pp. 434–439.
- Agrawal, P.; Ratnoo, A.; Ghose, D. Inverse Optical Flow Based Guidance for UAV Navigation through Urban Canyons. Aerosp. Sci. Technol. 2017, 68, 163–178.
- Couturier, A.; Akhloufi, M.A. A Review on Absolute Visual Localization for UAV. Rob. Auton. Syst. 2021, 135, 103666.
- Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 15–17 September 1988; pp. 23.1–23.6.
- Rosten, E.; Drummond, T. Machine Learning for High-Speed Corner Detection. In Proceedings of the Computer Vision—ECCV, Graz, Austria, 7–13 May 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 430–443.
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary Robust Independent Elementary Features. In Proceedings of the Computer Vision—ECCV, Heraklion, Crete, Greece, 5–11 September 2010; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 778–792.
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Proceedings of the Computer Vision—ECCV, Graz, Austria, 7–13 May 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An Efficient Alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
- Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees. Auton. Robot. 2013, 34, 189–206.
- Dryanovski, I.; Morris, W.; Xiao, J. Multi-Volume Occupancy Grids: An Efficient Probabilistic 3D Mapping Model for Micro Aerial Vehicles. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 1553–1559.
- Herrera-Granda, E.P.; Torres-Cantero, J.C.; Peluffo-Ordóñez, D.H. Monocular Visual SLAM, Visual Odometry, and Structure from Motion Methods Applied to 3D Reconstruction: A Comprehensive Survey. Heliyon 2024, 10, e37356.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
- Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067.
- Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-Scale Direct Monocular SLAM. In Proceedings of the Computer Vision—ECCV, Zurich, Switzerland, 6–12 September 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 834–849.
- Yang, L.; Kang, B.; Huang, Z.; Zhao, Z.; Xu, X.; Feng, J.; Zhao, H. Depth Anything V2. arXiv 2024, arXiv:2406.09414.
- Godard, C.; Mac Aodha, O.; Firman, M.; Brostow, G.J. Digging into Self-Supervised Monocular Depth Prediction. In Proceedings of the International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Guizilini, V.; Ambrus, R.; Pillai, S.; Raventos, A.; Gaidon, A. 3D Packing for Self-Supervised Monocular Depth Estimation. arXiv 2020, arXiv:1905.02693.
- Dhulkefl, E.; Durdu, A.; Terzioğlu, H. Dijkstra Algorithm Using UAV Path Planning. Konya J. Eng. Sci. 2020, 8, 92–105.
- Zhibo, Z. A Review of Unmanned Aerial Vehicle Path Planning Techniques. Appl. Comput. Eng. 2024, 33, 234–241.
- Mandloi, D.; Arya, R.; Verma, A.K. Unmanned Aerial Vehicle Path Planning Based on A* Algorithm and Its Variants in 3D Environment. Int. J. Syst. Assur. Eng. Manag. 2021, 12, 990–1000.
- Luo, Y.; Lu, J.; Zhang, Y.; Qin, Q.; Liu, Y. 3D JPS Path Optimization Algorithm and Dynamic-Obstacle Avoidance Design Based on Near-Ground Search Drone. Appl. Sci. 2022, 12, 7333.
- Suanpang, P.; Jamjuntr, P. Optimizing Autonomous UAV Navigation with D* Algorithm for Sustainable Development. Sustainability 2024, 16, 7867.
- Zhu, X.; Yan, B.; Yue, Y. Path Planning and Collision Avoidance in Unknown Environments for USVs Based on an Improved D* Lite. Appl. Sci. 2021, 11, 7863.
- Yang, K.; Gan, S.K.; Sukkarieh, S. A Gaussian Process-Based RRT Planner for the Exploration of an Unknown and Cluttered Environment with a UAV. Adv. Robot. 2013, 27, 431–443.
- Yan, F.; Zhuang, Y.; Xiao, J. 3D PRM Based Real-Time Path Planning for UAV in Complex Environment. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 11–14 December 2012; pp. 1135–1140.
- Candeloro, M.; Lekkas, A.M.; Hegde, J.; Sørensen, A.J. A 3D Dynamic Voronoi Diagram-Based Path-Planning System for UUVs. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey, Monterey, CA, USA, 19–23 September 2016; pp. 1–8.
- Bi, Y.; Lan, M.; Li, J.; Zhang, K.; Qin, H.; Lai, S.; Chen, B.M. Robust Autonomous Flight and Mission Management for MAVs in GPS-Denied Environments. In Proceedings of the 2017 11th Asian Control Conference (ASCC), Gold Coast, Australia, 17–20 December 2017; pp. 67–72.
- Sampedro, C.; Rodriguez-Ramos, A.; Bavle, H.; Carrio, A.; de la Puente, P.; Campoy, P. A Fully-Autonomous Aerial Robot for Search and Rescue Applications in Indoor Environments Using Learning-Based Techniques. J. Intell. Robot. Syst. 2019, 95, 601–627.
- Mohta, K.; Watterson, M.; Mulgaonkar, Y.; Liu, S.; Qu, C.; Makineni, A.; Saulnier, K.; Sun, K.; Zhu, A.; Delmerico, J.; et al. Fast, Autonomous Flight in GPS-Denied and Cluttered Environments. J. Field Robot. 2018, 35, 101–120.
- Bachrach, A.; Prentice, S.; He, R.; Roy, N. RANGE–Robust Autonomous Navigation in GPS-Denied Environments. J. Field Robot. 2011, 28, 644–666.
- Leishman, R.C.; McLain, T.W.; Beard, R.W. Relative Navigation Approach for Vision-Based Aerial GPS-Denied Navigation. J. Intell. Robot. Syst. 2014, 74, 97–111.
- Li, Q.; Li, D.-C.; Wu, Q.; Tang, L.; Huo, Y.; Zhang, Y.; Cheng, N. Autonomous Navigation and Environment Modeling for MAVs in 3-D Enclosed Industrial Environments. Comput. Ind. 2013, 64, 1161–1177.
- Perez-Grau, F.J.; Ragel, R.; Caballero, F.; Viguria, A.; Ollero, A. An Architecture for Robust UAV Navigation in GPS-Denied Areas. J. Field Robot. 2018, 35, 121–145.
- Valenti, F.; Giaquinto, D.; Musto, L.; Zinelli, A.; Bertozzi, M.; Broggi, A. Enabling Computer Vision-Based Autonomous Navigation for Unmanned Aerial Vehicles in Cluttered GPS-Denied Environments. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3886–3891.
- Schmid, K.; Lutz, P.; Tomić, T.; Mair, E.; Hirschmüller, H. Autonomous Vision-Based Micro Air Vehicle for Indoor and Outdoor Navigation. J. Field Robot. 2014, 31, 537–570.
- Lutz, P.; Müller, M.G.; Maier, M.; Stoneman, S.; Tomić, T.; von Bargen, I.; Schuster, M.J.; Steidle, F.; Wedler, A.; Stürzl, W.; et al. ARDEA—An MAV with Skills for Future Planetary Missions. J. Field Robot. 2020, 37, 515–551.
- Lin, Y.; Gao, F.; Qin, T.; Gao, W.; Liu, T.; Wu, W.; Yang, Z.; Shen, S. Autonomous Aerial Navigation Using Monocular Visual-Inertial Fusion. J. Field Robot. 2018, 35, 23–51.
- Esrafilian, O.; Taghirad, H.D. Autonomous Flight and Obstacle Avoidance of a Quadrotor by Monocular SLAM. In Proceedings of the 2016 4th International Conference on Robotics and Mechatronics (ICROM), Tehran, Iran, 26–28 October 2016; pp. 240–245.
- von Stumberg, L.; Usenko, V.; Engel, J.; Stückler, J.; Cremers, D. From Monocular SLAM to Autonomous Drone Exploration. In Proceedings of the 2017 European Conference on Mobile Robots (ECMR), Paris, France, 6–8 September 2017; pp. 1–8.
- Laina, I.; Rupprecht, C.; Belagiannis, V.; Tombari, F.; Navab, N. Deeper Depth Prediction with Fully Convolutional Residual Networks. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 239–248.
- Bresenham, J.E. Algorithm for Computer Control of a Digital Plotter. IBM Syst. J. 1965, 4, 25–30.
- Hart, P.E.; Nilsson, N.J.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107.
- Tordesillas, J.; Lopez, B.T.; Everett, M.; How, J.P. FASTER: Fast and Safe Trajectory Planner for Navigation in Unknown Environments. IEEE Trans. Robot. 2022, 38, 922–938.
- Sun, W.; Sun, P.; Ding, W.; Zhao, J.; Li, Y. Gradient-Based Autonomous Obstacle Avoidance Trajectory Planning for B-Spline UAVs. Sci. Rep. 2024, 14, 14458.
GPU | CPU | RAM
---|---|---
EVGA GeForce RTX 3070 Ti; 8 GB GDDR6X @ 1575 MHz | AMD Ryzen 7 5800X; 8 cores @ 3.8 GHz | Kingston Fury; 2 × 32 GB @ 3200 MHz
Environment | Scenario | Start (X, Y, Z) | End (X, Y, Z) | Distance (m) | Snapshot
---|---|---|---|---|---
AirSimNH | 1 | 0, 0, 0 | −130, −115, 3 | 173.59 | a
AirSimNH | 2 | 0, 0, 0 | 130, 130, 3 | 183.87 | b
AirSimNH | 3 | 0, 0, 0 | −115, 130, 3 | 173.59 | c
Blocks | 1 | 0, 0, 0 | 70, 150, 3 | 165.56 | d
Blocks | 2 | 0, 0, 0 | 70, −135, 3 | 152.10 | e
Blocks | 3 | 0, 0, 0 | −105, −150, 3 | 183.12 | f
MSBuild2018 | 1 | 0, 0, 0 | 30, 260, 3 | 261.74 | g
MSBuild2018 | 2 | 0, 0, 0 | 30, −280, 3 | 281.62 | h
MSBuild2018 | 3 | 0, 0, 0 | −400, 30, 3 | 401.14 | i
Environment | Pre-Trained Model | Fine-Tuned Model
---|---|---
AirSimNH | 120.45% | 33.41%
Blocks | 70.09% | 8.04%
MSBuild2018 | 121.94% | 32.86%
Environment | Method | Reached Goal | Distance to Goal (m)
---|---|---|---
AirSimNH | Depth Images from AirSim | 35 of 90 | 117.40 (55 unsuccessful)
AirSimNH | Estimated Depth Images (fine-tuned) | 3 of 90 | 135.28 (87 unsuccessful)
AirSimNH | Estimated Depth Images (pre-trained) | 0 of 90 | 171.20 (90 unsuccessful)
Blocks | Depth Images from AirSim | 79 of 90 | 67.75 (11 unsuccessful)
Blocks | Estimated Depth Images (fine-tuned) | 65 of 90 | 95.95 (25 unsuccessful)
Blocks | Estimated Depth Images (pre-trained) | 0 of 90 | 152.34 (90 unsuccessful)
MSBuild2018 | Depth Images from AirSim | 12 of 90 | 187.80 (78 unsuccessful)
MSBuild2018 | Estimated Depth Images (fine-tuned) | 0 of 90 | 230.50 (90 unsuccessful)
MSBuild2018 | Estimated Depth Images (pre-trained) | 0 of 90 | 311.25 (90 unsuccessful)
Columns prefixed "AirSim:" report runs using AirSim depth images; columns prefixed "Estimated:" report runs using estimated depth images.

Voxel Size (m) | Scenario | AirSim: Reached Goal | AirSim: Collisions | AirSim: Distance | AirSim: Time | AirSim: Distance to Goal | Estimated: Reached Goal | Estimated: Collisions | Estimated: Distance | Estimated: Time | Estimated: Distance to Goal
---|---|---|---|---|---|---|---|---|---|---|---
0.5 | 1 | 3 | 12.00 | 226.75 | 136.42 | 129.56 | 0 | - | - | - | 120.26
0.5 | 2 | 3 | 0.67 | 228.47 | 132.29 | 125.93 | 3 | 5.00 | 282.79 | 176.08 | 126.85
0.5 | 3 | 0 | - | - | - | 142.63 | 0 | - | - | - | 128.47
1.0 | 1 | 4 | 3.00 | 345.22 | 212.07 | 41.13 | 0 | - | - | - | 124.08
1.0 | 2 | 7 | 0.43 | 288.95 | 174.66 | 78.63 | 0 | - | - | - | 133.49
1.0 | 3 | 3 | 4.67 | 439.83 | 271.87 | 131.08 | 0 | - | - | - | 147.58
2.0 | 1 | 9 | 0.22 | 346.49 | 225.01 | 72.87 | 0 | - | - | - | 130.44
2.0 | 2 | 6 | 0 | 348.58 | 224.92 | 172.17 | 0 | - | - | - | 140.68
2.0 | 3 | 0 | - | - | - | 162.58 | 0 | - | - | - | 165.69
Columns prefixed "AirSim:" report runs using AirSim depth images; columns prefixed "Estimated:" report runs using estimated depth images.

Voxel Size (m) | Scenario | AirSim: Reached Goal | AirSim: Collisions | AirSim: Distance | AirSim: Time | AirSim: Distance to Goal | Estimated: Reached Goal | Estimated: Collisions | Estimated: Distance | Estimated: Time | Estimated: Distance to Goal
---|---|---|---|---|---|---|---|---|---|---|---
0.5 | 1 | 7 | 0.71 | 178.51 | 95.62 | 122.58 | 10 | 0 | 196.28 | 112.91 | -
0.5 | 2 | 10 | 0 | 160.09 | 83.25 | - | 9 | 0 | 160.16 | 88.51 | 106.87
0.5 | 3 | 8 | 2.63 | 221.09 | 122.61 | 129.52 | 8 | 0.63 | 222.76 | 128.11 | 90.84
1.0 | 1 | 9 | 0 | 179.57 | 97.30 | 124.76 | 7 | 0 | 207.58 | 118.11 | 129.32
1.0 | 2 | 10 | 0 | 160.64 | 85.98 | - | 7 | 0 | 161.32 | 92.70 | 102.68
1.0 | 3 | 7 | 0.71 | 226.13 | 125.64 | 109.51 | 5 | 0 | 226.93 | 133.94 | 58.17
2.0 | 1 | 8 | 0.25 | 188.67 | 118.40 | 123.39 | 4 | 0 | 350.40 | 217.18 | 119.89
2.0 | 2 | 10 | 0 | 166.80 | 106.18 | - | 9 | 0 | 186.23 | 119.91 | 96.14
2.0 | 3 | 10 | 0 | 209.12 | 131.73 | - | 9 | 0 | 235.37 | 153.67 | 159.64
Columns prefixed "AirSim:" report runs using AirSim depth images; columns prefixed "Estimated:" report runs using estimated depth images.

Voxel Size (m) | Scenario | AirSim: Reached Goal | AirSim: Collisions | AirSim: Distance | AirSim: Time | AirSim: Distance to Goal | Estimated: Reached Goal | Estimated: Collisions | Estimated: Distance | Estimated: Time | Estimated: Distance to Goal
---|---|---|---|---|---|---|---|---|---|---|---
0.5 | 1 | 6 | 2.33 | 376.99 | 217.90 | 53.96 | 0 | - | - | - | 104.55
0.5 | 2 | 4 | 25.00 | 472.27 | 283.29 | 176.02 | 0 | - | - | - | 222.09
0.5 | 3 | 0 | - | - | - | 218.00 | 0 | - | - | - | 254.27
1.0 | 1 | 1 | 43.00 | 376.29 | 224.41 | 174.06 | 0 | - | - | - | 216.62
1.0 | 2 | 0 | - | - | - | 160.91 | 0 | - | - | - | 246.09
1.0 | 3 | 0 | - | - | - | 234.73 | 0 | - | - | - | 268.21
2.0 | 1 | 4 | 14.25 | 375.22 | 220.60 | 158.99 | 0 | - | - | - | 227.49
2.0 | 2 | 0 | - | - | - | 236.57 | 0 | - | - | - | 247.53
2.0 | 3 | 0 | - | - | - | 276.91 | 0 | - | - | - | 287.67