Three-Dimensional Landing Zone Segmentation in Urbanized Aerial Images from Depth Information Using a Deep Neural Network–Superpixel Approach
Abstract
1. Introduction
2. Related Works
2.1. Landing Zone Detection with RGB Images
2.2. Landing Zone Detection with 3D Information
3. Methodology
3.1. Zone Identification
3.1.1. Superpixel Segmentation
3.2. Feature Extraction of Potential Landing Zones
3.2.1. Pinhole Model
3.2.2. Principal Component Analysis
3.2.3. Height Homogenization
3.2.4. Feature Extraction
3.3. Landing Zone Segmentation
3.3.1. Training Dataset
3.3.2. Accessible Landing Zone
3.3.3. DNN for Landing Zone Segmentation
3.4. Diagram of Our Methodology
4. Discussion
4.1. Metrics
4.2. Landing Segmentation Approaches
4.3. Segmentation Result
Processing Time
4.4. Three-Dimensional Landing Zone Segmentation Result
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Appendix A.1. ROC Curves and Corresponding AUC
Appendix A.2. ROC Curves, Corresponding AUC and Confusion Matrices
Appendix A.3. Confusion Matrices
Appendix A.4. Landing Zone on VPAIR, VALID, and ITTG Datasets and 3D Segmentation
References
- Putranto, H.Y.; Irfansyah, A.N.; Attamimi, M. Identification of Safe Landing Areas with Semantic Segmentation and Contour Detection for Delivery UAV. In Proceedings of the 2022 9th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE), Semarang, Indonesia, 25–29 August 2022; pp. 254–257. [Google Scholar] [CrossRef]
- Balestrieri, E.; Daponte, P.; De Vito, L.; Picariello, F.; Tudosa, I. Sensors and Measurements for UAV Safety: An Overview. Sensors 2021, 21, 8253. [Google Scholar] [CrossRef] [PubMed]
- Maturana, D.; Scherer, S. 3D Convolutional Neural Networks for landing zone detection from LiDAR. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3471–3478. [Google Scholar] [CrossRef]
- Safadinho, D.; Ramos, J.; Ribeiro, R.; Filipe, V.; Barroso, J.; Pereira, A. System to Detect and Approach Humans from an Aerial View for the Landing Phase in a UAV Delivery Service. In Ambient Intelligence–Software and Applications, Proceedings of the 10th International Symposium on Ambient Intelligence, ISAmI 2019, Ávila, Spain, 26–28 June 2019; Novais, P., Lloret, J., Chamoso, P., Carneiro, D., Navarro, E., Omatu, S., Eds.; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; Volume 1006. [Google Scholar] [CrossRef]
- Mittal, M.; Mohan, R.; Burgard, W.; Valada, A. Vision-Based Autonomous UAV Navigation and Landing for Urban Search and Rescue. In Robotics Research; Asfour, T., Yoshida, E., Park, J., Christensen, H., Khatib, O., Eds.; ISRR 2019; Springer Proceedings in Advanced Robotics; Springer: Cham, Switzerland, 2022; Volume 20. [Google Scholar] [CrossRef]
- Loureiro, G.; Dias, A.; Martins, A.; Almeida, J. Emergency Landing Spot Detection Algorithm for Unmanned Aerial Vehicles. Remote Sens. 2021, 13, 1930. [Google Scholar] [CrossRef]
- Kaljahi, M.A.; Shivakumara, P.; Idris, M.Y.I.; Anisi, M.H.; Lu, T.; Blumenstein, M.; Noor, N.M. An automatic zone detection system for safe landing of UAVs. Expert Syst. Appl. 2019, 122, 319–333. [Google Scholar] [CrossRef]
- Chen, J.; Du, W.; Lin, J.; Borhan, U.M.; Lin, Y.; Du, B. Emergency UAV Landing on Unknown Field Using Depth-Enhanced Graph Structure. IEEE Trans. Autom. Sci. Eng. 2024, 22, 4434–4445. [Google Scholar] [CrossRef]
- Lee, M.-F.R.; Nugroho, A.; Le, T.-T.; Bahrudin; Bastida, S.N. Landing Area Recognition using Deep Learning for Unmanned Aerial Vehicles. In Proceedings of the 2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS), Taipei, Taiwan, 19–21 August 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Truong, N.Q.; Nguyen, P.H.; Nam, S.H.; Park, K.R. Deep Learning-Based Super-Resolution Reconstruction and Marker Detection for Drone Landing. IEEE Access 2019, 7, 61639–61655. [Google Scholar] [CrossRef]
- Nguyen, P.H.; Arsalan, M.; Koo, J.H.; Naqvi, R.A.; Truong, N.Q.; Park, K.R. LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone. Sensors 2018, 18, 1703. [Google Scholar] [CrossRef]
- Castellano, G.; Castiello, C.; Mencar, C.; Vessio, G. Crowd Detection for Drone Safe Landing Through Fully-Convolutional Neural Networks. In SOFSEM 2020: Theory and Practice of Computer Science; Chatzigeorgiou, A., Dondi, R., Herodotou, H., Kapoutsis, C., Manolopoulos, Y., Papadopoulos, G.A., Sikora, F., Eds.; SOFSEM 2020; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12011. [Google Scholar] [CrossRef]
- García-Pulido, J.A.; Pajares, G.; Dormido, S.; Cruz, J.M.d. Recognition of a landing platform for unmanned aerial vehicles by using computer vision-based techniques. Expert Syst. Appl. 2017, 76, 152–165. [Google Scholar] [CrossRef]
- Rabah, M.; Rohan, A.; Talha, M.; Nam, K.; Kim, S.H. Autonomous Vision-based Target Detection and Safe Landing for UAV. Int. J. Control Autom. Syst. 2018, 16, 3013–3025. [Google Scholar] [CrossRef]
- Patruno, C.; Nitti, M.; Petitti, A.; Stella, E.; D’Orazio, T. A Vision-Based Approach for Unmanned Aerial Vehicle Landing. J. Intell. Robot Syst. 2019, 95, 645–664. [Google Scholar] [CrossRef]
- García-Pulido, J.A.; Pajares, G.; Dormido, S. UAV Landing Platform Recognition Using Cognitive Computation Combining Geometric Analysis and Computer Vision Techniques. Cogn. Comput. 2023, 15, 392–412. [Google Scholar] [CrossRef]
- Dergachov, K.; Bahinskii, S.; Piavka, I. The Algorithm of UAV Automatic Landing System Using Computer Vision. In Proceedings of the 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT), Kyiv, Ukraine, 14–18 May 2020. [Google Scholar] [CrossRef]
- Guo, X.; Denman, S.; Fookes, C.; Mejias, L.; Sridharan, S. Automatic UAV Forced Landing Site Detection Using Machine Learning. In Proceedings of the 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Wollongong, Australia, 25–27 November 2014. [Google Scholar] [CrossRef]
- Muhammad Faheem, R.; Aziz, S.; Khalid, A.; Bashir, M.; Yasin, A. UAV Emergency Landing Site Selection System using Machine Vision. J. Mach. Intell. 2015, 1, 13–20. [Google Scholar] [CrossRef]
- Mukadam, K.; Sinh, A.; Karani, R. Detection of landing areas for unmanned aerial vehicles. In Proceedings of the 2016 International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 12–13 August 2016. [Google Scholar] [CrossRef]
- Lin, S.; Jin, L.; Chen, Z. Real-Time Monocular Vision System for UAV Autonomous Landing in Outdoor Low-Illumination Environments. Sensors 2021, 21, 6226. [Google Scholar] [CrossRef] [PubMed]
- Yan, L.; Qi, J.; Wang, M.; Wu, C.; Xin, J. A Safe Landing Site Selection Method of UAVs Based on LiDAR Point Clouds. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 6497–6502. [Google Scholar] [CrossRef]
- Ikura, M.; Miyashita, L.; Ishikawa, M. Real-time Landing Gear Control System Based on Adaptive 3D Sensing for Safe Landing of UAV. In Proceedings of the 2020 IEEE/SICE International Symposium on System Integration (SII), Honolulu, HI, USA, 12–15 January 2020. [Google Scholar] [CrossRef]
- Alam, M.S.; Oluoch, J. A survey of safe landing zone detection techniques for autonomous unmanned aerial vehicles (UAVs). Expert Syst. Appl. 2021, 179, 115091. [Google Scholar] [CrossRef]
- Zeybek, M. Classification of UAV point clouds by random forest machine learning algorithm. Turk. J. Eng. 2021, 5, 48–57. [Google Scholar] [CrossRef]
- Lane, S.; Kira, Z.; James, R.; Carr, D.; Tuell, G. Landing zone identification for autonomous UAV applications using fused hyperspectral imagery and LIDAR point clouds. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIV, Proceedings SPIE 10644, Orlando, FL, USA, 17–19 April 2018; Volume 106440D. [Google Scholar] [CrossRef]
- Liu, F.; Shan, J.; Xiong, B.; Fang, Z. A Real-Time and Multi-Sensor-Based Landing Area Recognition System for UAVs. Drones 2022, 6, 118. [Google Scholar] [CrossRef]
- Matsumoto, T.; Premachandra, C. Depth Sensor Application in Ground Unevenness Estimation for UAV Emergency Landing. In Proceedings of the 2023 IEEE Sensors Applications Symposium (SAS), Ottawa, ON, Canada, 18–20 July 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Maturana, D.; Scherer, S. VoxNet: A 3D Convolutional Neural Network for real-time object recognition. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015. [Google Scholar] [CrossRef]
- Marcu, A.; Costea, D.; Licăreţ, V.; Pîrvu, M.; Sluşanschi, E.; Leordeanu, M. SafeUAV: Learning to Estimate Depth and Safe Landing Areas for UAVs from Synthetic Data. In Computer Vision–ECCV 2018 Workshops; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11130, pp. 43–58. [Google Scholar] [CrossRef]
- Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
- Vedaldi, A.; Soatto, S. Quick Shift and Kernel Methods for Mode Seeking. In Proceedings of the Computer Vision–ECCV 2008: 10th European Conference on Computer Vision, Marseille, France, 12–18 October 2008; pp. 705–718. [Google Scholar] [CrossRef]
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels; EPFL Technical Report; EPFL: Lausanne, Switzerland, 2010. Available online: https://www.iro.umontreal.ca/~mignotte/IFT6150/Articles/SLIC_Superpixels.pdf (accessed on 11 November 2024).
- de Jesús Osuna-Coutiño, J.A.; Martinez-Carranza, J. Volumetric structure extraction in a single image. Vis. Comput. 2022, 38, 2899–2921. [Google Scholar] [CrossRef]
- Nisticò, G.; Verwichte, E.; Nakariakov, V.M. 3D Reconstruction of Coronal Loops by the Principal Component Analysis. Entropy 2013, 15, 4520–4539. [Google Scholar] [CrossRef]
- Maćkiewicz, A.; Ratajczak, W. Principal components analysis (PCA). Comput. Geosci. 1993, 19, 303–342. [Google Scholar] [CrossRef]
- Schleiss, M.; Rouatbi, F.; Cremers, D. VPAIR-Aerial Visual Place Recognition and Localization in Large-scale Outdoor Environments. arXiv 2022, arXiv:2205.11567. [Google Scholar]
- Chen, L.; Liu, F.; Zhao, Y.; Wang, W.; Yuan, X.; Zhu, J. VALID: A Comprehensive Virtual Aerial Image Dataset. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Al-Rawabdeh, A.; He, F.; Moussa, A.; El-Sheimy, N.; Habib, A. Using an Unmanned Aerial Vehicle-Based Digital Imaging System to Derive a 3D Point Cloud for Landslide Scarp Recognition. Remote Sens. 2016, 8, 95. [Google Scholar] [CrossRef]
- Suthaharan, S. Support Vector Machine. In Integrated Series in Information Systems; Springer: Boston, MA, USA, 2016; pp. 207–235. [Google Scholar] [CrossRef]
- LaValley, M.P. Logistic Regression. Circ. J. 2008, 117, 2395–2399. [Google Scholar] [CrossRef] [PubMed]
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar] [CrossRef]
VPAIR Dataset [37]

| Approach | PR | RE/SE | F1 | AC | SP | JI | AUC | PAM |
|---|---|---|---|---|---|---|---|---|
| PCA | 0.6898 | 0.9446 | 0.7973 | 0.8992 | 0.8871 | 0.6629 | 0.8367 | 0.7208 |
| SVM | 0.8748 | 0.9595 | 0.9152 | 0.9627 | 0.9635 | 0.8436 | 0.9319 | 0.8728 |
| L. Regression | 0.8566 | 0.9698 | 0.9097 | 0.9596 | 0.9569 | 0.8344 | 0.9242 | 0.8667 |
| YOLO-RGB | 0.7152 | 0.5691 | 0.6338 | 0.8620 | 0.9398 | 0.4639 | 0.8033 | 0.5225 |
| YOLO-Depth | 0.6824 | 0.5862 | 0.6306 | 0.8559 | 0.9275 | 0.4605 | 0.7882 | 0.5182 |
| CNN | 0.8519 | 0.9850 | 0.9099 | 0.9612 | 0.9552 | 0.8421 | 0.9247 | 0.8747 |
| DNN-S. Net | 0.9339 | 0.9170 | 0.9254 | 0.9690 | 0.9828 | 0.8612 | 0.9560 | 0.8823 |
VALID Synthetic Dataset [38]

| Approach | PR | RE/SE | F1 | AC | SP | JI | AUC | PAM |
|---|---|---|---|---|---|---|---|---|
| PCA | 0.8566 | 0.9922 | 0.9194 | 0.8611 | 0.3395 | 0.8509 | 0.8864 | 0.5787 |
| SVM | 0.9066 | 0.9998 | 0.9509 | 0.9176 | 0.5902 | 0.9065 | 0.9528 | 0.7484 |
| L. Regression | 0.9004 | 0.9998 | 0.9475 | 0.9115 | 0.5603 | 0.9003 | 0.9497 | 0.7304 |
| YOLO-RGB | 0.9429 | 0.7359 | 0.8266 | 0.7533 | 0.8228 | 0.7045 | 0.6911 | 0.5394 |
| YOLO-Depth | 0.9446 | 0.8681 | 0.9047 | 0.8539 | 0.7974 | 0.8260 | 0.7738 | 0.6710 |
| CNN | 0.9730 | 0.9919 | 0.9822 | 0.9718 | 0.8889 | 0.9657 | 0.9706 | 0.9163 |
| DNN-S. Net | 0.9791 | 0.9873 | 0.9832 | 0.9731 | 0.9163 | 0.9670 | 0.9635 | 0.9229 |
ITTG Dataset

| Approach | PR | RE/SE | F1 | AC | SP | JI | AUC | PAM |
|---|---|---|---|---|---|---|---|---|
| PCA | 0.9361 | 0.9226 | 0.9293 | 0.9592 | 0.9742 | 0.8679 | 0.9523 | 0.8788 |
| SVM | 0.9027 | 0.9773 | 0.9385 | 0.9628 | 0.9569 | 0.8841 | 0.9465 | 0.8973 |
| L. Regression | 0.9178 | 0.9700 | 0.9432 | 0.9661 | 0.9645 | 0.8925 | 0.9526 | 0.9040 |
| YOLO-RGB | 0.8633 | 0.9167 | 0.8892 | 0.9337 | 0.9406 | 0.8004 | 0.9141 | 0.8172 |
| YOLO-Depth | 0.8734 | 0.8970 | 0.8851 | 0.9324 | 0.9468 | 0.7938 | 0.9154 | 0.8101 |
| CNN | 0.8743 | 0.9762 | 0.9201 | 0.9482 | 0.9336 | 0.8465 | 0.9257 | 0.8642 |
| DNN-S. Net | 0.9340 | 0.9552 | 0.9445 | 0.9674 | 0.9724 | 0.8948 | 0.9577 | 0.9048 |
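For reference, the sketch below shows one way to compute the pixel-wise metrics reported in the three tables above (PR, RE/SE, F1, AC, SP, and JI) from a binary segmentation mask and its ground truth. This is an illustrative reconstruction under the assumption of per-pixel binary evaluation, not the authors' evaluation code; AUC additionally requires per-pixel scores rather than hard labels, and PAM follows the paper's own definition, so both are omitted here.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise binary segmentation metrics.

    pred, gt: boolean arrays of the same shape, True = landing zone.
    Returns precision (PR), recall/sensitivity (RE/SE), F1, accuracy (AC),
    specificity (SP), and Jaccard index (JI).
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)

    tp = np.sum(pred & gt)      # landing-zone pixels correctly segmented
    fp = np.sum(pred & ~gt)     # background labelled as landing zone
    fn = np.sum(~pred & gt)     # landing-zone pixels that were missed
    tn = np.sum(~pred & ~gt)    # background correctly rejected

    eps = 1e-12                 # guard against division by zero on empty masks
    pr = tp / (tp + fp + eps)
    re = tp / (tp + fn + eps)
    f1 = 2 * pr * re / (pr + re + eps)
    ac = (tp + tn) / (tp + tn + fp + fn + eps)
    sp = tn / (tn + fp + eps)
    ji = tp / (tp + fp + fn + eps)
    return {"PR": pr, "RE/SE": re, "F1": f1, "AC": ac, "SP": sp, "JI": ji}
```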
| Approach | VPAIR | VALID | ITTG |
|---|---|---|---|
| PCA | 3.28 | 8.07 | 16.47 |
| SVM | 0.9392 | 2.2790 | 3.96 |
| Logistic Regression | 0.9259 | 2.2262 | 3.95 |
| YOLO-RGB | 0.0523 | 0.0595 | 0.0583 |
| YOLO-Depth | 0.0471 | 0.0503 | 0.0595 |
| DNN-Superpixel Net | 0.9454 | 2.2538 | 4.02 |
| Dataset | Error | PCA | SVM | Logistic Regression | YOLOv7-RGB | YOLOv7-DEPTH | DNN-Superpixel Net |
|---|---|---|---|---|---|---|---|
| VPAIR | RMS(X) | 4.9031 | 2.1394 | 3.6114 | 5.0379 | 4.6755 | 2.1 |
| | RMS(Y) | 8.4476 | 4.5299 | 5.7991 | 4.6323 | 5.8089 | 2.6234 |
| | RMS(Z) | 4.5261 | 1.1875 | 3.2211 | 1.125 | 1.3859 | 1.3923 |
| | Ave. RMS | 5.9589 | 2.6189 | 4.2105 | 3.5984 | 3.9568 | 2.0386 |
| VALID | RMS(X) | 0.6632 | 0.9662 | 1.0599 | 7.0259 | 5.2428 | 0.4032 |
| | RMS(Y) | 0.5467 | 0.7323 | 0.7035 | 2.6387 | 7.8879 | 0.1366 |
| | RMS(Z) | 0.7784 | 0.3422 | 0.3294 | 0.8817 | 0.6714 | 0.0739 |
| | Ave. RMS | 0.6628 | 0.6802 | 0.6976 | 3.5154 | 4.6007 | 0.2046 |
| ITTG | RMS(X) | 0.523 | 0.1604 | 0.1437 | 0.3571 | 0.5766 | 0.1311 |
| | RMS(Y) | 0.372 | 0.168 | 0.1441 | 0.131 | 0.3972 | 0.1433 |
| | RMS(Z) | 0.097 | 0.0594 | 0.0641 | 0.0851 | 0.0711 | 0.0419 |
| | Ave. RMS | 0.3307 | 0.1293 | 0.1173 | 0.1911 | 0.3483 | 0.1054 |
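As a companion to the table above, the following minimal sketch computes the per-axis RMS errors and their average between estimated and ground-truth 3D landing-zone coordinates. The (N, 3) array layout is an assumption made for illustration; the plain mean of the three per-axis values reproduces the "Ave. RMS" rows in the table.

```python
import numpy as np

def rms_errors(est_xyz, gt_xyz):
    """Per-axis RMS error and their mean between estimated and ground-truth
    3D landing-zone coordinates, given as arrays of shape (N, 3): X, Y, Z."""
    diff = np.asarray(est_xyz, dtype=float) - np.asarray(gt_xyz, dtype=float)
    rms_x, rms_y, rms_z = np.sqrt(np.mean(diff ** 2, axis=0))  # RMS per axis
    return {"RMS(X)": rms_x, "RMS(Y)": rms_y, "RMS(Z)": rms_z,
            "Ave. RMS": (rms_x + rms_y + rms_z) / 3.0}
```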