Search Results (49)

Search Parameters:
Keywords = vision-based autonomous landing

18 pages, 3941 KB  
Article
Method of Collaborative UAV Deployment: Carrier-Assisted Localization with Low-Resource Precision Touchdown
by Krzysztof Kaliszuk, Artur Kierzkowski and Bartłomiej Dziewoński
Electronics 2025, 14(13), 2726; https://doi.org/10.3390/electronics14132726 - 7 Jul 2025
Viewed by 461
Abstract
This study presents a cooperative unmanned aerial system (UAS) designed to enable precise autonomous landings in unstructured environments using low-cost onboard vision technology. This approach involves a carrier UAV with a stabilized RGB camera and a neural inference system, as well as a lightweight tailsitter payload UAV with an embedded grayscale vision module. The system relies on visually recognizable landing markers and does not require additional sensors. Field trials comprising full deployments achieved an 80% success rate in autonomous landings, with vertical touchdown occurring within a 1.5 m radius of the target. These results confirm that vision-based marker detection using compact neural models can effectively support autonomous UAV operations in constrained conditions. This architecture offers a scalable alternative to the high complexity of SLAM or terrain-mapping systems. Full article
(This article belongs to the Special Issue Unmanned Aircraft Systems with Autonomous Navigation, 2nd Edition)
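
A minimal sketch of the geometry behind this kind of marker-guided touchdown follows: converting a detected marker's pixel offset into a lateral correction in metres under a downward-facing pinhole camera model. The focal length, image size, and altitude are illustrative placeholders, not values from the paper.

```python
import numpy as np

def marker_offset_to_ground(marker_px, image_size_px, focal_px, altitude_m):
    """Convert a detected marker's pixel position into a lateral ground offset (m).

    Assumes a downward-facing pinhole camera; all parameters are illustrative.
    marker_px     -- (u, v) pixel coordinates of the detected marker centre
    image_size_px -- (width, height) of the image in pixels
    focal_px      -- focal length expressed in pixels
    altitude_m    -- height above the landing surface
    """
    u, v = marker_px
    cx, cy = image_size_px[0] / 2.0, image_size_px[1] / 2.0
    # Pinhole model: ground displacement = altitude * pixel offset / focal length.
    dx = altitude_m * (u - cx) / focal_px   # right of the camera axis
    dy = altitude_m * (v - cy) / focal_px   # along the image's vertical axis
    return np.array([dx, dy])

# Example: marker seen 60 px right of centre from 10 m up, 800 px focal length.
print(marker_offset_to_ground((460, 240), (800, 480), 800.0, 10.0))  # ~[0.75, 0.0]
```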

27 pages, 1880 KB  
Article
UAV-Enabled Video Streaming Architecture for Urban Air Mobility: A 6G-Based Approach Toward Low-Altitude 3D Transportation
by Liang-Chun Chen, Chenn-Jung Huang, Yu-Sen Cheng, Ken-Wen Hu and Mei-En Jian
Drones 2025, 9(6), 448; https://doi.org/10.3390/drones9060448 - 18 Jun 2025
Viewed by 861
Abstract
As urban populations expand and congestion intensifies, traditional ground transportation struggles to satisfy escalating mobility demands. Unmanned Electric Vertical Take-Off and Landing (eVTOL) aircraft, as a key enabler of Urban Air Mobility (UAM), leverage low-altitude airspace to alleviate ground traffic while offering environmentally sustainable solutions. However, supporting high bandwidth, real-time video applications, such as Virtual Reality (VR), Augmented Reality (AR), and 360° streaming, remains a major challenge, particularly within bandwidth-constrained metropolitan regions. This study proposes a novel Unmanned Aerial Vehicle (UAV)-enabled video streaming architecture that integrates 6G wireless technologies with intelligent routing strategies across cooperative airborne nodes, including unmanned eVTOLs and High-Altitude Platform Systems (HAPS). By relaying video data from low-congestion ground base stations to high-demand urban zones via autonomous aerial relays, the proposed system enhances spectrum utilization and improves streaming stability. Simulation results validate the framework’s capability to support immersive media applications in next-generation autonomous air mobility systems, aligning with the vision of scalable, resilient 3D transportation infrastructure. Full article
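
The routing idea in this abstract (steering streams from under-loaded ground stations to crowded cells through cooperative aerial relays) can be illustrated with a max-bottleneck path search over link capacities. The graph, node names, and capacities below are invented for illustration and are not taken from the paper.

```python
import heapq

def widest_path(capacity, src, dst):
    """Return (bottleneck_capacity, path) for the path with the largest bottleneck.

    capacity: dict mapping node -> {neighbour: link capacity (e.g. Mbps)}.
    A Dijkstra-style search on the bottleneck metric; purely illustrative of
    relay selection, ignoring latency, interference, and 6G-specific metrics.
    """
    best = {src: float("inf")}
    prev = {}
    heap = [(-float("inf"), src)]
    visited = set()
    while heap:
        neg_width, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, cap in capacity.get(node, {}).items():
            width = min(-neg_width, cap)
            if width > best.get(nbr, 0):
                best[nbr] = width
                prev[nbr] = node
                heapq.heappush(heap, (-width, nbr))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return best[dst], path[::-1]

# Hypothetical topology: a lightly loaded base station relays via an eVTOL or a HAPS.
links = {
    "bs_low_load": {"evtol_1": 400, "haps": 250},
    "evtol_1": {"urban_cell": 300},
    "haps": {"urban_cell": 500},
}
print(widest_path(links, "bs_low_load", "urban_cell"))  # (300, path via evtol_1)
```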

35 pages, 111295 KB  
Article
A Visual Guidance and Control Method for Autonomous Landing of a Quadrotor UAV on a Small USV
by Ziqing Guo, Jianhua Wang, Xiang Zheng, Yuhang Zhou and Jiaqing Zhang
Drones 2025, 9(5), 364; https://doi.org/10.3390/drones9050364 - 12 May 2025
Viewed by 1768
Abstract
Unmanned Surface Vehicles (USVs) are commonly used as mobile docking stations for Unmanned Aerial Vehicles (UAVs) to ensure sustained operational capabilities. Conventional vision-based techniques based on horizontally-placed fiducial markers for autonomous landing are not only susceptible to interference from lighting and shadows but are also restricted by the limited Field of View (FOV) of the visual system. This study proposes a method that integrates an improved minimum snap trajectory planning algorithm with an event-triggered vision-based technique to achieve autonomous landing on a small USV. The trajectory planning algorithm ensures trajectory smoothness and controls deviations from the target flight path, enabling the UAV to approach the USV despite the visual system’s limited FOV. To avoid direct contact between the UAV and the fiducial marker while mitigating the interference from lighting and shadows on the marker, a landing platform with a vertically placed fiducial marker is designed to separate the UAV landing area from the fiducial marker detection region. Additionally, an event-triggered mechanism is used to limit excessive yaw angle adjustment of the UAV to improve its autonomous landing efficiency and stability. Experiments conducted in both terrestrial and river environments demonstrate that the UAV can successfully perform autonomous landing on a small USV in both stationary and moving scenarios. Full article
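
The event-triggered element described here (correcting yaw only when the misalignment grows past a trigger threshold, rather than continuously) can be sketched in a few lines. The threshold and gain are placeholders, not values from the paper.

```python
import math

def event_triggered_yaw_rate(current_yaw, desired_yaw,
                             threshold=math.radians(10), gain=0.5):
    """Yaw-rate command that is issued only when the yaw error exceeds a threshold.

    Between trigger events the command is zero, avoiding the constant small yaw
    adjustments that slow down the landing.  Threshold and gain are illustrative.
    """
    # Wrap the error to (-pi, pi] so the shortest rotation is used.
    error = (desired_yaw - current_yaw + math.pi) % (2 * math.pi) - math.pi
    if abs(error) < threshold:
        return 0.0           # no event: hold the current yaw
    return gain * error      # event: proportional yaw-rate command

print(event_triggered_yaw_rate(0.05, 0.10))   # 0.0, below the trigger threshold
print(event_triggered_yaw_rate(0.0, 0.6))     # ~0.3, a triggered correction
```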

23 pages, 24213 KB  
Article
Optical Image Generation Through Digital Terrain Models for Autonomous Lunar Navigation
by Michele Ceresoli, Stefano Silvestrini and Michèle Lavagna
Aerospace 2025, 12(2), 92; https://doi.org/10.3390/aerospace12020092 - 27 Jan 2025
Cited by 1 | Viewed by 1715
Abstract
In recent years, Vision-Based Navigation (VBN) techniques have emerged as a fundamental component to enable autonomous spacecraft operations, particularly in challenging environments such as planetary landings, where ground control may be limited or unavailable. Developing and testing VBN algorithms requires the availability of a large number of realistic images of the application scenario; however, these are rarely available. This paper presents a novel rendering software tool to generate accurate synthetic optical images of the lunar surface by leveraging high-resolution Digital Terrain Models (DTMs). Unlike traditional ray-tracing algorithms, the method iteratively propagates camera rays to determine their intersection with the terrain surface defined by a Digital Elevation Model (DEM). The color information is then retrieved from the corresponding Digital Orthophoto Model (DOM) through the knowledge of the ray impact points, bypassing the need for the costly computation of shadows, reflections, and refraction effects. The rendering performance is demonstrated through a comprehensive selection of images of the lunar surface under different illumination conditions and camera orientations. Full article
(This article belongs to the Special Issue Space Navigation and Control Technologies)
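
The central rendering step (propagating each camera ray over the DEM until it drops below the terrain and then sampling the DOM at the impact point) can be sketched roughly as follows. The grids, step size, and camera geometry are synthetic stand-ins, not the paper's data or exact iteration scheme.

```python
import numpy as np

def march_ray(dem, dom, origin, direction, cell_size=1.0, step=0.5, max_steps=4000):
    """March one camera ray over a heightfield and return the colour at the hit point.

    dem  -- 2D array of terrain heights (row ~ y, col ~ x), cell_size metres per cell
    dom  -- 2D array of the same shape holding per-cell colour/albedo values
    origin, direction -- 3D ray start point and direction
    Returns the DOM value where the ray first falls below the terrain, or None if
    it leaves the tile.  Illustrative only: no interpolation, shading, or shadows.
    """
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(max_steps):
        p = p + step * d
        col, row = int(p[0] / cell_size), int(p[1] / cell_size)
        if not (0 <= row < dem.shape[0] and 0 <= col < dem.shape[1]):
            return None                     # ray left the DEM tile
        if p[2] <= dem[row, col]:
            return dom[row, col]            # hit: sample the orthophoto value
    return None

# Tiny synthetic tile: flat terrain at z = 0 with a brighter patch in the middle.
dem = np.zeros((100, 100))
dom = np.full((100, 100), 0.2)
dom[40:60, 40:60] = 0.9
print(march_ray(dem, dom, origin=[50.0, 10.0, 30.0], direction=[0.0, 0.6, -0.4]))
```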

22 pages, 40791 KB  
Article
Autonomous Landing Guidance for Quad-UAVs Based on Visual Image and Altitude Estimation
by Lingxia Mu, Shaowei Cao, Youmin Zhang, Xielong Zhang, Nan Feng and Yuan Zhang
Drones 2025, 9(1), 57; https://doi.org/10.3390/drones9010057 - 15 Jan 2025
Cited by 2 | Viewed by 2068
Abstract
In this paper, an autonomous landing guidance strategy is proposed for quad-UAVs, including landing marker detection, altitude estimation, and adaptive landing command generation. A double-layered nested marker is designed to ensure that the marker can be captured at both high and low altitudes. A deep learning-based marker detection method is designed where the intersection over union is replaced by the normalized Wasserstein distance in the computation of non-maximum suppression to improve the detection accuracy. The UAV altitude measured by an inertial measurement unit is fused with vision-based altitude estimation data to improve the accuracy during the landing process. An image-based visual servoing method is designed to guide the UAV's approach to the landing marker. Both simulation and flight experiments are conducted to verify the proposed strategy. Full article
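
For readers unfamiliar with the normalized Wasserstein distance used here in place of intersection over union during non-maximum suppression, a minimal sketch of the commonly cited closed form (boxes modelled as 2D Gaussians) follows. The constant C is a dataset-dependent scale and the boxes are made up for illustration; this is not the paper's implementation.

```python
import math

def normalized_wasserstein_distance(box_a, box_b, c=12.8):
    """Similarity in (0, 1] between two boxes (cx, cy, w, h) modelled as 2D Gaussians.

    Uses the closed-form 2nd-order Wasserstein distance between the Gaussians and
    maps it with exp(-W2 / C).  C is a dataset-dependent constant (placeholder here).
    Unlike IoU, the score stays informative for small, non-overlapping boxes such
    as a distant landing marker.
    """
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    w2_squared = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
                  + ((wa - wb) / 2.0) ** 2 + ((ha - hb) / 2.0) ** 2)
    return math.exp(-math.sqrt(w2_squared) / c)

# Two small boxes with zero overlap (IoU = 0) still get a graded similarity score.
print(normalized_wasserstein_distance((100, 100, 8, 8), (110, 100, 8, 8)))  # ~0.46
```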

36 pages, 17153 KB  
Article
YOLO-RWY: A Novel Runway Detection Model for Vision-Based Autonomous Landing of Fixed-Wing Unmanned Aerial Vehicles
by Ye Li, Yu Xia, Guangji Zheng, Xiaoyang Guo and Qingfeng Li
Drones 2024, 8(10), 571; https://doi.org/10.3390/drones8100571 - 10 Oct 2024
Cited by 2 | Viewed by 3799
Abstract
In scenarios where global navigation satellite systems (GNSSs) and radio navigation systems are denied, vision-based autonomous landing (VAL) for fixed-wing unmanned aerial vehicles (UAVs) becomes essential. Accurate and real-time runway detection in VAL is vital for providing precise positional and orientational guidance. However, existing research faces significant challenges, including insufficient accuracy, inadequate real-time performance, poor robustness, and high susceptibility to disturbances. To address these challenges, this paper introduces a novel single-stage, anchor-free, and decoupled vision-based runway detection framework, referred to as YOLO-RWY. First, an enhanced data augmentation (EDA) module is incorporated to perform various augmentations, enriching image diversity, and introducing perturbations that improve generalization and safety. Second, a large separable kernel attention (LSKA) module is integrated into the backbone structure to provide a lightweight attention mechanism with a broad receptive field, enhancing feature representation. Third, the neck structure is reorganized as a bidirectional feature pyramid network (BiFPN) module with skip connections and attention allocation, enabling efficient multi-scale and across-stage feature fusion. Finally, the regression loss and task-aligned learning (TAL) assigner are optimized using efficient intersection over union (EIoU) to improve localization evaluation, resulting in faster and more accurate convergence. Comprehensive experiments demonstrate that YOLO-RWY achieves AP50:95 scores of 0.760, 0.611, and 0.413 on synthetic, real nominal, and real edge test sets of the landing approach runway detection (LARD) dataset, respectively. Deployment experiments on an edge device show that YOLO-RWY achieves an inference speed of 154.4 FPS under FP32 quantization with an image size of 640. The results indicate that the proposed YOLO-RWY model possesses strong generalization and real-time capabilities, enabling accurate runway detection in complex and challenging visual environments, and providing support for the onboard VAL systems of fixed-wing UAVs. Full article
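
The EIoU regression term mentioned in this abstract has a widely used closed form; a rough standalone sketch is given below with hand-made boxes. It illustrates the loss only and does not reproduce the paper's training pipeline or the TAL assigner.

```python
def eiou_loss(pred, target):
    """Efficient IoU loss for axis-aligned boxes given as (x1, y1, x2, y2).

    EIoU = 1 - IoU + centre-distance term + width term + height term, with the
    extra terms normalised by the enclosing box's squared diagonal, width, height.
    A sketch of the standard formulation, not the paper's exact code.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target
    inter_w = max(0.0, min(px2, tx2) - max(px1, tx1))
    inter_h = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = inter_w * inter_h
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / (union + 1e-9)

    cw = max(px2, tx2) - min(px1, tx1)        # enclosing box width
    ch = max(py2, ty2) - min(py1, ty1)        # enclosing box height
    centre_dist2 = ((px1 + px2 - tx1 - tx2) ** 2 + (py1 + py2 - ty1 - ty2) ** 2) / 4.0
    w_term = ((px2 - px1) - (tx2 - tx1)) ** 2 / (cw ** 2 + 1e-9)
    h_term = ((py2 - py1) - (ty2 - ty1)) ** 2 / (ch ** 2 + 1e-9)
    return 1.0 - iou + centre_dist2 / (cw ** 2 + ch ** 2 + 1e-9) + w_term + h_term

print(eiou_loss((10, 10, 60, 30), (12, 12, 64, 30)))  # small loss for a close match
```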

17 pages, 5154 KB  
Article
A Two-Step Controller for Vision-Based Autonomous Landing of a Multirotor with a Gimbal Camera
by Sangbaek Yoo, Jae-Hyeon Park and Dong Eui Chang
Drones 2024, 8(8), 389; https://doi.org/10.3390/drones8080389 - 9 Aug 2024
Viewed by 1334
Abstract
This article presents a novel vision-based autonomous landing method utilizing a multirotor and a gimbal camera, which is designed to be applicable from any initial position within a broad space by addressing the problems of a field of view and singularity to ensure stable performance. The proposed method employs a two-step controller based on integrated dynamics for the multirotor and the gimbal camera, where the multirotor approaches the landing site horizontally in the first step and descends vertically in the second step. The multirotor and the camera converge simultaneously to the desired configuration because we design the stabilizing controller for the integrated dynamics of the multirotor and the gimbal camera. The controller requires only one feature point and decreases unnecessary camera rolling. The effectiveness of the proposed method is demonstrated through simulation and real environment experiments. Full article
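
The two-step structure (first null the horizontal error toward the landing site, then descend vertically) can be caricatured as a small velocity-command switch. The gains, tolerance, and command interface are illustrative assumptions; the paper's controller instead acts on the integrated multirotor and gimbal dynamics.

```python
def two_step_landing_command(rel_pos, horiz_tol=0.3, kp_xy=0.6, kp_z=0.4, vz_max=0.8):
    """Return a (vx, vy, vz) velocity command from the pad-relative position.

    rel_pos = (dx, dy, dz): landing-site position minus multirotor position,
    with dz negative when the pad is below.  Step 1 approaches horizontally at
    constant altitude; step 2 descends once the horizontal error is small.
    """
    dx, dy, dz = rel_pos
    horiz_err = (dx ** 2 + dy ** 2) ** 0.5
    if horiz_err > horiz_tol:
        return kp_xy * dx, kp_xy * dy, 0.0            # step 1: horizontal approach
    vz = max(-vz_max, min(vz_max, kp_z * dz))          # step 2: capped descent
    return kp_xy * dx, kp_xy * dy, vz

print(two_step_landing_command((4.0, -2.0, -10.0)))    # far away: no descent yet
print(two_step_landing_command((0.1, 0.05, -3.0)))     # centred: descend
```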

10 pages, 4470 KB  
Article
Demonstration of a Low-SWaP Terminal for Ground-to-Air Single-Mode Fiber Coupled Laser Links
by Ayden McCann, Alex Frost, Skevos Karpathakis, Benjamin Dix-Matthews, David Gozzard, Shane Walsh and Sascha Schediwy
Photonics 2024, 11(7), 633; https://doi.org/10.3390/photonics11070633 - 2 Jul 2024
Cited by 3 | Viewed by 1975
Abstract
Free space optical technology promises to revolutionize point-to-point communications systems. By taking advantage of their vastly higher frequencies, coherent optical systems outperform their radio counterparts by orders of magnitude in achievable data throughput, while simultaneously lowering the required size, weight, and power (SWaP), making them ideal for mobile applications. However, the widespread implementation of this technology has been largely hindered by the effects of atmospheric turbulence, often necessitating complex higher-order adaptive optics systems that are largely unsuitable for deployment on mobile platforms. By employing tip/tilt beam-stabilization, we present the results of a bespoke low-SWaP optical terminal that demonstrated single-mode fiber (SMF) coupling. This was achieved by autonomously acquiring and tracking the targets using a combination of aircraft transponder and machine vision feedback to a root-mean-square (RMS) tracking error of 29.4 µrad and at angular rates of up to 0.83 deg/s. To the authors’ knowledge, these works constitute the first published SMF coupled optical link to a full-sized helicopter, and we describe derived quantities relevant to the future refinement of such links. The ability to achieve SMF coupling without the constraints of complex adaptive optics systems positions this technology as a versatile quantum-capable communications solution for land-, air-, and sea-based platforms ranging across commercial, scientific, and military operators. Full article
(This article belongs to the Special Issue Next-Generation Free-Space Optical Communication Technologies)
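
The headline pointing figure (29.4 µrad RMS) is the root-mean-square of the residual tracking error; a short sketch of how such a figure is computed from logged tip/tilt residuals follows, with synthetic data standing in for the real logs.

```python
import numpy as np

def rms_tracking_error_urad(residuals_rad):
    """Root-mean-square of angular tracking residuals, returned in microradians.

    residuals_rad: 1D array of pointing errors in radians, e.g. the offset of the
    beacon spot from the tip/tilt loop's set-point at each sample.
    """
    residuals = np.asarray(residuals_rad, dtype=float)
    return float(np.sqrt(np.mean(residuals ** 2))) * 1e6

# Synthetic residuals: zero-mean jitter with a 30 microradian standard deviation.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 30e-6, size=10_000)
print(f"{rms_tracking_error_urad(samples):.1f} urad RMS")   # close to 30 urad
```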

29 pages, 3844 KB  
Article
VALNet: Vision-Based Autonomous Landing with Airport Runway Instance Segmentation
by Qiang Wang, Wenquan Feng, Hongbo Zhao, Binghao Liu and Shuchang Lyu
Remote Sens. 2024, 16(12), 2161; https://doi.org/10.3390/rs16122161 - 14 Jun 2024
Cited by 10 | Viewed by 3519
Abstract
Visual navigation, characterized by its autonomous capabilities, cost effectiveness, and robust resistance to interference, serves as the foundation for vision-based autonomous landing systems. These systems rely heavily on runway instance segmentation, which accurately divides runway areas and provides precise information for unmanned aerial vehicle (UAV) navigation. However, current research primarily focuses on runway detection but lacks relevant runway instance segmentation datasets. To address this research gap, we created the Runway Landing Dataset (RLD), a benchmark dataset that focuses on runway instance segmentation mainly based on X-Plane. To overcome the challenges of large-scale changes and input image angle differences in runway instance segmentation tasks, we propose a vision-based autonomous landing segmentation network (VALNet) that uses band-pass filters, where a Context Enhancement Module (CEM) guides the model to learn adaptive “band” information through heatmaps, while an Orientation Adaptation Module (OAM) with a triple-channel architecture fully utilizes rotation information, enhancing the model’s ability to capture input image rotation transformations. Extensive experiments on RLD demonstrate that the new method significantly improves performance. The visualization results further confirm the effectiveness and interpretability of VALNet in the face of large-scale changes and angle differences. This research not only advances the development of runway instance segmentation but also highlights the potential application value of VALNet in vision-based autonomous landing systems. Additionally, RLD is publicly available. Full article
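
The “band” information that the CEM learns through heatmaps is, loosely, a band-pass view of the image; a hand-built difference-of-Gaussians filter gives a feel for what that means on a runway-like scene. This is only a stand-in for the learned module, and the blur scales below are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_heatmap(image, sigma_fine=1.0, sigma_coarse=6.0):
    """Difference-of-Gaussians band-pass response of a single-channel image.

    Keeps structures between the two blur scales (roughly runway-edge sized here)
    while suppressing fine noise and slow illumination gradients, then normalises
    the result to [0, 1] so it can act as a heatmap.  Illustrative only.
    """
    img = np.asarray(image, dtype=float)
    band = gaussian_filter(img, sigma_fine) - gaussian_filter(img, sigma_coarse)
    band -= band.min()
    return band / (band.max() + 1e-9)

# Synthetic 128x128 frame with a bright runway-like stripe.
frame = np.zeros((128, 128))
frame[:, 58:70] = 1.0
heat = bandpass_heatmap(frame)
print(heat.shape, round(float(heat.max()), 3))
```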

25 pages, 5518 KB  
Article
High-Altitude Precision Landing by Smartphone Video Guidance Sensor and Sensor Fusion
by Joao Leonardo Silva Cotta, Hector Gutierrez, Ivan R. Bertaska, John P. Inness and John Rakoczy
Drones 2024, 8(2), 37; https://doi.org/10.3390/drones8020037 - 25 Jan 2024
Cited by 2 | Viewed by 3822
Abstract
This paper describes the deployment, integration, and demonstration of the Smartphone Video Guidance Sensor (SVGS) as a novel technology for autonomous 6-DOF proximity maneuvers and high-altitude precision landing of UAVs via sensor fusion. The proposed approach uses a vision-based photogrammetric position and attitude sensor (SVGS) to support the precise automated landing of a UAV from an initial altitude above 100 m to ground, guided by an array of landing beacons. SVGS information is fused with other on-board sensors at the flight control unit to estimate the UAV’s position and attitude during landing relative to a ground coordinate system defined by the landing beacons. While the SVGS can provide mm-level absolute positioning accuracy depending on range and beacon dimensions, the proper operation of the SVGS requires a line of sight between the camera and the beacon, and readings can be disturbed by environmental lighting conditions and reflections. SVGS readings can therefore be intermittent, and their update rate is not deterministic since the SVGS runs on an Android device. The sensor fusion of the SVGS with on-board sensors enables an accurate and reliable update of the position and attitude estimates during landing, providing improved performance compared to state-of-the-art automated landing technology based on an infrared beacon, but its implementation must address the challenges mentioned above. The proposed technique also shows significant advantages compared with state-of-the-art sensors for High-Altitude Landing, such as those based on LIDAR. Full article
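
The fusion problem described above, an accurate but intermittent vision fix blended with continuous on-board propagation, amounts to a Kalman update that is simply skipped whenever no SVGS reading arrives. A one-dimensional sketch with invented noise figures is shown below; the flight controller's actual estimator is multi-dimensional and not reproduced here.

```python
def fuse_step(x, p, u, q, z=None, r=None):
    """One predict/update cycle of a scalar Kalman filter (e.g. for altitude).

    x, p -- state estimate and its variance
    u    -- change propagated from on-board sensors (e.g. integrated IMU) this step
    q    -- process noise variance added by that propagation
    z, r -- optional vision fix and its variance; omitted when no SVGS reading
            arrived this cycle (its update rate is not deterministic).
    All noise values in the example are illustrative, not from the paper.
    """
    x, p = x + u, p + q                      # predict with on-board propagation
    if z is not None:                        # correct only when a vision fix exists
        k = p / (p + r)
        x, p = x + k * (z - x), (1.0 - k) * p
    return x, p

x, p = 100.0, 4.0                                        # 100 m, 2 m std-dev
x, p = fuse_step(x, p, u=-0.5, q=0.05)                   # IMU-only step
x, p = fuse_step(x, p, u=-0.5, q=0.05, z=98.6, r=0.2)    # step with an SVGS fix
print(round(x, 2), round(p, 3))                          # pulled toward the fix
```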

19 pages, 7874 KB  
Article
An Autonomous Tracking and Landing Method for Unmanned Aerial Vehicles Based on Visual Navigation
by Bingkun Wang, Ruitao Ma, Hang Zhu, Yongbai Sha and Tianye Yang
Drones 2023, 7(12), 703; https://doi.org/10.3390/drones7120703 - 12 Dec 2023
Cited by 5 | Viewed by 5363
Abstract
In this paper, we examine potential methods for autonomously tracking and landing multi-rotor unmanned aerial vehicles (UAVs), a complex yet essential problem. Autonomous tracking and landing control technology utilizes visual navigation, relying solely on vision and landmarks to track targets and achieve autonomous landing. This technology improves the UAV’s environment perception and autonomous flight capabilities in GPS-free scenarios. In particular, we are researching tracking and landing as a cohesive unit, devising a switching plan for various UAV tracking and landing modes, and creating a flight controller that has an inner and outer loop structure based on relative position estimation. The inner and outer nested markers aid in the autonomous tracking and landing of UAVs. Optimal parameters are determined via optimized experiments on the measurements of the inner and outer markers. An indoor experimental platform for tracking and landing UAVs was established. Tracking performance was verified by tracking three trajectories of an unmanned ground vehicle (UGV) at varying speeds, and landing accuracy was confirmed through static and dynamic landing experiments. The experimental results show that the proposed scheme has good dynamic tracking and landing performance. Full article
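
The inner and outer loop structure mentioned here is the usual cascade: the outer loop turns the relative-position error into a velocity set-point, and the inner loop turns the velocity error into an attitude or thrust command. A stripped-down single-axis sketch with made-up gains follows; it is not the controller tuned in the paper.

```python
def cascaded_axis_command(pos_err, vel, kp_outer=1.2, kp_inner=2.5, v_limit=2.0):
    """Single-axis outer-position / inner-velocity cascade.

    pos_err -- target position minus UAV position along one axis (m)
    vel     -- current UAV velocity along that axis (m/s)
    Outer loop: saturated velocity set-point proportional to the position error.
    Inner loop: command proportional to the velocity error.  Gains and the
    saturation limit are illustrative placeholders.
    """
    v_sp = max(-v_limit, min(v_limit, kp_outer * pos_err))   # outer loop
    return kp_inner * (v_sp - vel)                            # inner loop command

print(cascaded_axis_command(pos_err=3.0, vel=0.4))   # saturated approach phase
print(cascaded_axis_command(pos_err=0.2, vel=0.3))   # gentle correction near the pad
```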

15 pages, 3214 KB  
Article
Vision-Based Deep Reinforcement Learning of UAV-UGV Collaborative Landing Policy Using Automatic Curriculum
by Chang Wang, Jiaqing Wang, Changyun Wei, Yi Zhu, Dong Yin and Jie Li
Drones 2023, 7(11), 676; https://doi.org/10.3390/drones7110676 - 13 Nov 2023
Cited by 14 | Viewed by 5424
Abstract
Collaborative autonomous landing of a quadrotor Unmanned Aerial Vehicle (UAV) on a moving Unmanned Ground Vehicle (UGV) presents challenges due to the need for accurate real-time tracking of the UGV and the adjustment of the landing policy. To address these challenges, we propose a progressive learning framework for generating an optimal landing policy based on vision without the need for communication between the UAV and the UGV. First, we propose the Landing Vision System (LVS) to offer rapid localization and pose estimation of the UGV. Then, we design an Automatic Curriculum Learning (ACL) approach to learn the landing tasks under different conditions of UGV motions and wind interference. Specifically, we introduce a neural network-based difficulty discriminator to schedule the landing tasks according to their levels of difficulty. Our method achieves a higher landing success rate and accuracy compared with the state-of-the-art TD3 reinforcement learning algorithm. Full article
(This article belongs to the Special Issue Cooperation of Drones and Other Manned/Unmanned Systems)
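
The scheduling idea behind the curriculum (train next on tasks whose predicted difficulty is neither trivial nor currently hopeless) can be caricatured as a weighted sampler over the discriminator's difficulty scores. The task names, scores, and weighting function below are dummy values, not the paper's discriminator.

```python
import math
import random

def pick_next_task(task_difficulty, target=0.5, sharpness=8.0, rng=random):
    """Sample a training task, favouring those whose predicted difficulty is
    closest to an intermediate target value.

    task_difficulty: dict task_name -> predicted difficulty in [0, 1], standing in
    for the neural difficulty discriminator's output.
    """
    names = list(task_difficulty)
    weights = [math.exp(-sharpness * (task_difficulty[n] - target) ** 2) for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Dummy difficulty estimates for three landing conditions (illustrative only).
difficulty = {"static_ugv": 0.1, "moving_ugv": 0.45, "moving_ugv_wind": 0.9}
random.seed(0)
print([pick_next_task(difficulty) for _ in range(5)])   # mostly 'moving_ugv'
```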

26 pages, 3661 KB  
Article
A Unmanned Aerial Vehicle (UAV)/Unmanned Ground Vehicle (UGV) Dynamic Autonomous Docking Scheme in GPS-Denied Environments
by Cheng Cheng, Xiuxian Li, Lihua Xie and Li Li
Drones 2023, 7(10), 613; https://doi.org/10.3390/drones7100613 - 29 Sep 2023
Cited by 16 | Viewed by 8449
Abstract
This study designs a navigation and landing scheme for an unmanned aerial vehicle (UAV) to autonomously land on an arbitrarily moving unmanned ground vehicle (UGV) in GPS-denied environments based on vision, ultra-wideband (UWB) and system information. In the approaching phase, an effective multi-innovation forgetting gradient (MIFG) algorithm is proposed to estimate the position of the UAV relative to the target using historical data (estimated distance and relative displacement measurements). Using these estimates, a saturated proportional navigation controller is developed, by which the UAV can approach the target, making the UGV enter the field of view (FOV) of the camera deployed in the UAV. Then, a sensor fusion estimation algorithm based on an extended Kalman filter (EKF) is proposed to achieve accurate landing. Finally, a numerical example and a real experiment are used to support the theoretical results. Full article
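
The saturated proportional navigation step in the approaching phase can be sketched as a velocity command proportional to the estimated relative position, clipped to a speed limit so the heading is preserved under saturation. The gain and limit are placeholders, and the MIFG estimator that would feed the command is not reproduced.

```python
import numpy as np

def saturated_proportional_command(rel_pos_est, gain=0.8, v_max=3.0):
    """Velocity command toward the estimated UGV position, saturated at v_max.

    rel_pos_est -- estimated (x, y) position of the UGV relative to the UAV
                   (a stand-in for the output of the relative-position estimator).
    Scaling the whole vector keeps the commanded heading when clamping the speed.
    """
    cmd = gain * np.asarray(rel_pos_est, dtype=float)
    speed = np.linalg.norm(cmd)
    if speed > v_max:
        cmd *= v_max / speed
    return cmd

print(saturated_proportional_command((10.0, -4.0)))   # clamped to 3 m/s
print(saturated_proportional_command((1.0, 0.5)))     # unsaturated near the target
```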

14 pages, 9332 KB  
Article
Image-Based Lunar Hazard Detection in Low Illumination Simulated Conditions via Vision Transformers
by Luca Ghilardi and Roberto Furfaro
Sensors 2023, 23(18), 7844; https://doi.org/10.3390/s23187844 - 13 Sep 2023
Cited by 4 | Viewed by 2260
Abstract
Hazard detection is fundamental for a safe lunar landing. State-of-the-art autonomous lunar hazard detection relies on 2D image-based and 3D Lidar systems. The lunar south pole is challenging for vision-based methods. The low sun inclination and the terrain rich in topographic features create large areas in shadow, hiding the terrain features. The proposed method utilizes a vision transformer (ViT) model, which is a deep learning architecture based on the transformer blocks used in natural language processing, to solve this problem. Our goal is to train the ViT model to extract terrain feature information from low-light RGB images. The results show good performance, especially at high altitudes, beating UNet, one of the most popular convolutional neural networks, in every scenario. Full article
(This article belongs to the Section Sensing and Imaging)
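
A minimal sketch of wiring a vision transformer to a binary safe/hazard decision is shown below using the timm library; the architecture name, input size, and two-class head are assumptions for illustration, and none of the paper's lunar-surface training or map-level evaluation is reproduced.

```python
import torch
import timm

# Structural sketch only: an untrained ViT backbone with a two-class (safe/hazard) head.
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=2)
model.eval()

# Stand-in for one low-light RGB terrain patch resized to the model's input resolution.
patch = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    logits = model(patch)
    hazard_prob = torch.softmax(logits, dim=1)[0, 1].item()

print(f"hazard probability (untrained, illustrative): {hazard_prob:.2f}")
```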

21 pages, 9353 KB  
Article
A Vision-Based Autonomous Landing Guidance Strategy for a Micro-UAV by the Modified Camera View
by Lingxia Mu, Qingliang Li, Ban Wang, Youmin Zhang, Nan Feng, Xianghong Xue and Wenzhe Sun
Drones 2023, 7(6), 400; https://doi.org/10.3390/drones7060400 - 16 Jun 2023
Cited by 12 | Viewed by 5055
Abstract
Autonomous landing is one of the key technologies for unmanned aerial vehicles (UAVs), as it can improve task flexibility in various fields. In this paper, a vision-based autonomous landing strategy is proposed for a quadrotor micro-UAV based on a novel camera view angle conversion method, fast landing marker detection, and an autonomous guidance approach. The front-view camera of the micro-UAV is first modified by a new strategy to obtain a top-down view. By this means, the landing marker can be captured by the onboard camera of the micro-UAV and is then detected by the YOLOv5 algorithm in real time. The central coordinate of the landing marker is estimated and used to generate the guidance commands for the flight controller. After that, the guidance commands are sent by the ground station to perform the landing task of the UAV. Finally, flight experiments using a DJI Tello UAV are conducted both outdoors and indoors. The original UAV platform is modified using the proposed camera view angle-changing strategy so that the top-down view can be achieved for performing the landing mission. The experimental results show that the proposed landing marker detection algorithm and landing guidance strategy can complete the autonomous landing task of the micro-UAV efficiently. Full article
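
The guidance step described above (turning the detected marker's position in the image into commands for the flight controller) can be sketched as a proportional rule on the normalised offset of the bounding-box centre, with a small deadband to avoid jitter at hover. The detector output format, gains, deadband, and sign conventions are illustrative assumptions.

```python
def guidance_from_bbox(bbox, frame_w, frame_h, k=0.4, deadband=0.05):
    """Turn a landing-marker bounding box into horizontal guidance commands.

    bbox -- (x1, y1, x2, y2) in pixels, e.g. one detection from the onboard detector
    Returns (cmd_right, cmd_forward) in normalised units; a deadband around the
    image centre suppresses tiny corrections.  Values and conventions are illustrative.
    """
    x1, y1, x2, y2 = bbox
    # Offset of the marker centre from the image centre, normalised to [-1, 1].
    off_x = ((x1 + x2) / 2.0 - frame_w / 2.0) / (frame_w / 2.0)
    off_y = ((y1 + y2) / 2.0 - frame_h / 2.0) / (frame_h / 2.0)
    cmd_right = k * off_x if abs(off_x) > deadband else 0.0
    cmd_forward = -k * off_y if abs(off_y) > deadband else 0.0   # image y grows downward
    return cmd_right, cmd_forward

print(guidance_from_bbox((500, 300, 560, 360), frame_w=960, frame_h=720))
```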
