Search Results (8)

Search Parameters:
Keywords = runway-line detection

19 pages, 11033 KB  
Article
Deep Learning-Based Navigation System for Automatic Landing Approach of Fixed-Wing UAVs in GNSS-Denied Environments
by Ying-Xi Lin and Ying-Chih Lai
Aerospace 2025, 12(4), 324; https://doi.org/10.3390/aerospace12040324 - 10 Apr 2025
Cited by 6 | Viewed by 2791
Abstract
The Global Navigation Satellite System (GNSS) is widely used in UAV (unmanned aerial vehicle) applications that require precise positioning or navigation. However, GNSS signals can be blocked in certain environments and are susceptible to jamming and spoofing, which degrades navigation performance. In this study, a deep learning-based navigation system for the automatic landing of fixed-wing UAVs in GNSS-denied environments is proposed as an alternative navigation system. Most vision-based runway landing systems focus on runway detection and localization while neglecting the integration of the localization solution into flight control and guidance laws to form a complete real-time automatic landing system. This study addresses these problems by combining runway detection and localization methods, YOLOv8 and CNN (convolutional neural network) regression, to demonstrate the robustness of deep learning approaches. Moreover, a line detection method is employed to accurately align the UAV with the runway, effectively resolving issues related to runway contours. In the control phase, the guidance law and controller are designed to ensure stable flight of the UAV. Based on a deep learning model framework, this study conducts experiments within a simulation environment, verifying system stability under various assumed conditions and thereby avoiding the risks of real-world testing. The simulation results demonstrate that the UAV can achieve automatic landing on 3-degree and 5-degree glide slopes, whether directly aligned with the runway or deviating from it, with trajectory tracking errors within 10 m.
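The glide-slope tracking this abstract reports (errors within 10 m on 3-degree and 5-degree slopes) can be illustrated with simple trigonometry: the desired altitude at a given along-track distance, and the vertical deviation from it. This is a minimal sketch with hypothetical geometry and function names, not the paper's guidance law.

```python
import math

def glide_slope_altitude(distance_to_threshold_m: float,
                         glide_slope_deg: float = 3.0,
                         threshold_alt_m: float = 0.0) -> float:
    """Desired altitude on a straight glide slope, given the along-track
    distance to the runway threshold (hypothetical flat-earth geometry)."""
    return threshold_alt_m + distance_to_threshold_m * math.tan(
        math.radians(glide_slope_deg))

def tracking_error(actual_alt_m: float,
                   distance_to_threshold_m: float,
                   glide_slope_deg: float = 3.0) -> float:
    """Vertical deviation from the glide path; the abstract reports
    keeping such trajectory errors within 10 m."""
    return actual_alt_m - glide_slope_altitude(distance_to_threshold_m,
                                               glide_slope_deg)
```

For example, on a 3-degree slope the desired altitude 1000 m from the threshold is about 52.4 m, so an aircraft at 60 m is roughly 7.6 m above the path.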

20 pages, 21698 KB  
Article
An Enhanced Aircraft Carrier Runway Detection Method Based on Image Dehazing
by Chenliang Li, Yunyang Wang, Yan Zhao, Cheng Yuan, Ruien Mao and Pin Lyu
Appl. Sci. 2024, 14(13), 5464; https://doi.org/10.3390/app14135464 - 24 Jun 2024
Cited by 2 | Viewed by 2051
Abstract
Carrier-based Unmanned Aerial Vehicle (CUAV) landing is an extremely critical link in the overall chain of CUAV operations on ships. Vision-based landing location methods have advantages such as low cost and high accuracy. However, an aircraft carrier at sea may encounter complex weather conditions such as haze, which can lead to vision-based landing failures. This paper proposes a runway line recognition and localization method based on haze-removal enhancement to solve this problem. Firstly, a haze-removal algorithm using a multi-mechanism, multi-architecture network model is introduced; compared with traditional algorithms, the proposed model not only consumes less GPU memory but also achieves superior image restoration results. Building on this, we employed the random sample consensus method to reduce the error in runway line localization. Additionally, extensive experiments conducted in the AirSim simulation environment show that our pipeline effectively addresses the decreased accuracy of runway line detection algorithms in hazy maritime conditions, improving runway line localization accuracy by approximately 85%.
(This article belongs to the Collection Advances in Automation and Robotics)
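The random-sample-consensus step the authors use to suppress localization error can be sketched in a few lines. This is a generic RANSAC line fit with hypothetical names and thresholds, not the paper's implementation.

```python
import random

def fit_line(p, q):
    """Line through two distinct points as (a, b, c) with
    a*x + b*y + c = 0, normalized so |a*x + b*y + c| is point-line distance."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = (a * a + b * b) ** 0.5
    return a / n, b / n, -(a * x1 + b * y1) / n

def ransac_line(points, iters=200, tol=2.0, seed=0):
    """Minimal RANSAC: repeatedly fit a line to two random points and
    keep the line supported by the most inliers within `tol` pixels."""
    rng = random.Random(seed)
    best_line, best_inliers = None, []
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if p == q:
            continue  # degenerate sample (duplicate point)
        a, b, c = fit_line(p, q)
        inliers = [(x, y) for x, y in points if abs(a * x + b * y + c) <= tol]
        if len(inliers) > len(best_inliers):
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers
```

Outlier pixels far from the dominant runway line are simply excluded from the consensus set, which is what makes the estimate robust.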

25 pages, 8137 KB  
Article
Research on Unmanned Aerial Vehicle (UAV) Visual Landing Guidance and Positioning Algorithms
by Xiaoxiong Liu, Wanhan Xue, Xinlong Xu, Minkun Zhao and Bin Qin
Drones 2024, 8(6), 257; https://doi.org/10.3390/drones8060257 - 12 Jun 2024
Cited by 12 | Viewed by 3701
Abstract
Considering the weak interference resistance and generalization ability of traditional UAV visual landing navigation algorithms, this paper proposes a deep-learning-based approach for airport runway line detection and the fusion of visual information with an IMU for localization. Firstly, a coarse positioning algorithm based on YOLOX is designed for airport runway localization; to meet the accuracy and inference-speed requirements of the landing guidance system, the regression loss functions, probability prediction loss functions, activation functions, and feature extraction networks are designed accordingly. Secondly, a deep-learning-based runway line detection algorithm comprising feature extraction, classification prediction, and segmentation networks is designed; to create an effective detection network, we propose an efficient loss function and network evaluation methods. Finally, a visual/inertial navigation system is established based on constant deformation for visual localization, and the relative positioning results are fused and optimized with Kalman filter algorithms. Simulation and flight experiments demonstrate that the proposed algorithm exhibits significant advantages in localization accuracy, real-time performance, and generalization ability, and can provide accurate positioning information during UAV landing.
(This article belongs to the Special Issue Path Planning, Trajectory Tracking and Guidance for UAVs)
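The visual/inertial fusion step can be illustrated with a single scalar Kalman predict/update cycle: the IMU supplies the motion increment, the vision system supplies the position fix. This is a textbook sketch under simplified 1D assumptions, not the paper's filter.

```python
def kalman_fuse(x, P, u, Q, z, R):
    """One predict/update cycle of a scalar Kalman filter.
    x, P -- prior position estimate and its variance
    u, Q -- IMU-derived displacement and its process-noise variance
    z, R -- visual position measurement and its noise variance
    Returns the fused estimate and its reduced variance."""
    # Predict: propagate the state with the inertial increment.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend in the visual fix, weighted by relative uncertainty.
    K = P_pred / (P_pred + R)          # Kalman gain in [0, 1]
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

The fused variance is always smaller than the predicted one, which is why combining the two noisy sources outperforms either alone.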

20 pages, 25890 KB  
Article
Charge-Coupled Frequency Response Multispectral Inversion Network-Based Detection Method of Oil Contamination on Airport Runway
by Shuanfeng Zhao, Zhijian Luo, Li Wang, Xiaoyu Li and Zhizhong Xing
Sensors 2024, 24(12), 3716; https://doi.org/10.3390/s24123716 - 7 Jun 2024
Viewed by 1698
Abstract
Aircraft failures can result in the leakage of fuel, hydraulic oil, or other lubricants onto the runway during landing or taxiing. Damage to fuel tanks or oil lines during hard landings or accidents can also contribute to these spills, and improper maintenance or operational errors may leave oil traces on the runway before take-off or after landing. Identifying oil spills in airport runway videos is crucial to flight safety and accident investigation. Conventional RGB-based detection struggles to differentiate between oil spills and sewage due to their similar coloration; since oil and sewage have distinct spectral absorption patterns, precise detection can instead be performed on multispectral images. In this study, we developed a method for spectrally enhancing RGB images of oil spills on airport runways to generate HSI images, facilitating oil spill detection in conventional RGB imagery. To this end, we employed the MST++ spectral reconstruction network model to reconstruct RGB images into multispectral images, yielding better oil-detection accuracy than other models. Additionally, we utilized the Fast R-CNN oil spill detection model, obtaining a 5% increase in Intersection over Union (IoU) for HSI images. Compared with RGB images, this approach improved detection accuracy and completeness by 25.3% and 26.5%, respectively. These findings demonstrate the superior precision and accuracy of spectrally reconstructed HSI images in oil spill detection compared with traditional RGB images. With the spectral reconstruction technique, we can effectively exploit the spectral information inherent in oil spills, thereby enhancing detection accuracy. Future research could explore further optimization and conduct extensive validation in real airport environments. In conclusion, this spectral reconstruction-based technique for detecting oil spills on airport runways offers a novel and efficient approach that upholds both efficacy and accuracy, and its wide-scale adoption in airport operations holds great potential for improving aviation safety and environmental protection.
(This article belongs to the Section Environmental Sensing)
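The Intersection over Union metric on which the abstract reports a 5% gain is computed, for axis-aligned boxes, as the overlap area divided by the union area. A standard implementation sketch:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0, disjoint boxes 0.0, and partial overlaps fall in between, which makes the metric a convenient single number for comparing detectors.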

25 pages, 5912 KB  
Article
Implicit Neural Mapping for a Data Closed-Loop Unmanned Aerial Vehicle Pose-Estimation Algorithm in a Vision-Only Landing System
by Xiaoxiong Liu, Changze Li, Xinlong Xu, Nan Yang and Bin Qin
Drones 2023, 7(8), 529; https://doi.org/10.3390/drones7080529 - 12 Aug 2023
Cited by 4 | Viewed by 2646
Abstract
Due to the low cost, interference resistance, and concealment of vision sensors, vision-based landing systems have received a lot of research attention. However, vision sensors are used only as auxiliary components in visual landing systems because of their limited accuracy. To solve the problem of inaccurate position estimation by vision-only sensors during landing, a novel data closed-loop pose-estimation algorithm with an implicit neural map is proposed. First, we propose a method to estimate the UAV pose from the runway's line features, using a flexible coarse-to-fine runway-line-detection method. Then, we propose a mapping and localization method based on the neural radiance field (NeRF), which provides a continuous representation and can correct the initial estimated pose well. Finally, we develop a closed-loop data annotation system based on a high-fidelity implicit map, which significantly improves annotation efficiency. The experimental results show that the proposed algorithm performs well in various scenarios and achieves state-of-the-art accuracy in pose estimation.

27 pages, 3739 KB  
Article
Visual Navigation Algorithm for Night Landing of Fixed-Wing Unmanned Aerial Vehicle
by Zhaoyang Wang, Dan Zhao and Yunfeng Cao
Aerospace 2022, 9(10), 615; https://doi.org/10.3390/aerospace9100615 - 17 Oct 2022
Cited by 20 | Viewed by 4269
Abstract
In recent years, visual navigation has been considered an effective mechanism for achieving autonomous landing of Unmanned Aerial Vehicles (UAVs). Nevertheless, given the limitations of visual cameras, the effectiveness of visual algorithms is significantly constrained by lighting conditions. Therefore, a novel vision-based navigation scheme is proposed for the night-time autonomous landing of fixed-wing UAVs. Firstly, to address the difficulty of detecting the runway in low-light images, a visible and infrared image fusion strategy is adopted: objective functions between the fused and visible image and between the fused and infrared image are established, the fusion problem is cast as the optimization of this objective, and the optimum is found by gradient descent to obtain the fused image. Secondly, to improve runway detection on the enhanced image, a runway detection algorithm based on an improved Faster region-based convolutional neural network (Faster R-CNN) is proposed: the runway ground-truth boxes of the dataset are statistically analyzed, and the size and number of anchors are redesigned to match the runway detection background. Finally, a method for estimating the UAV's relative attitude and position with respect to the landing runway is proposed: new coordinate reference systems are established, and six landing parameters (three attitudes and three positions) are calculated by Orthogonal Iteration (OI). Simulation results reveal that the proposed algorithm achieves a 1.85% improvement in AP on runway detection, and the reprojection errors of rotation and translation for pose estimation are 0.675 and 0.581%, respectively.
(This article belongs to the Special Issue Vision-Based UAV Navigation)
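The "optimize an objective over the fused image by gradient descent" idea can be illustrated with a toy per-pixel version: a quadratic penalty pulling the fused value toward both the visible and the infrared intensity. The weights, learning rate, and functional form here are hypothetical stand-ins, not the paper's actual objective.

```python
def fuse_pixels(visible, infrared, w_vis=0.5, w_ir=0.5, lr=0.2, steps=200):
    """Fuse two intensity lists by gradient descent on the per-pixel
    objective  F(f) = w_vis*(f - v)**2 + w_ir*(f - i)**2.
    The minimizer is the weighted average of the two sources."""
    fused = [0.0] * len(visible)
    for _ in range(steps):
        # Gradient of F w.r.t. each fused pixel, then a descent step.
        fused = [f - lr * (2 * w_vis * (f - v) + 2 * w_ir * (f - i))
                 for f, v, i in zip(fused, visible, infrared)]
    return fused
```

With equal weights the iteration converges to the pixel-wise mean of the visible and infrared inputs; richer objectives (gradient-preserving terms, etc.) would change the fixed point but not the descent machinery.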

22 pages, 7259 KB  
Article
Low Complexity Lane Detection Methods for Light Photometry System
by Jakub Suder, Kacper Podbucki, Tomasz Marciniak and Adam Dąbrowski
Electronics 2021, 10(14), 1665; https://doi.org/10.3390/electronics10141665 - 13 Jul 2021
Cited by 22 | Viewed by 4873
Abstract
The aim of the paper was to analyze effective solutions for accurate lane detection on roads, focusing on the detection of airport runways and taxiways in order to drive a light-measurement trailer correctly. Three video-based line-extraction techniques were used, each suited to specific environmental conditions: (i) line detection using edge detection, a Scharr mask, and the Hough transform; (ii) finding the optimal path using a hyperbola-fitting line detection algorithm based on edge detection; and (iii) detection of horizontal markings using image segmentation in the HSV color space. The developed solutions were tuned and tested on embedded devices such as the Raspberry Pi 4B and NVIDIA Jetson Nano.
(This article belongs to the Section Computer Science & Engineering)
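The Hough-transform step in technique (i) can be sketched with a minimal voting accumulator over edge points in the (rho, theta) parameter space. This is the classic scheme with hypothetical parameters; the Scharr edge extraction that would produce the points is assumed upstream.

```python
import math

def hough_lines(points, width, height, n_theta=180, top_k=1):
    """Vote each edge point (x, y) into (rho, theta) cells; cells with
    many votes correspond to lines supported by many edge points."""
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, t)] = votes.get((rho, t), 0) + 1
    # Return the top_k (rho, theta_index) cells by vote count.
    return sorted(votes, key=votes.get, reverse=True)[:top_k]
```

For a column of edge points along x = 5, the winning cell is rho = 5 at theta near 0 (a vertical line); real pipelines such as OpenCV's `cv2.HoughLines` implement the same accumulator with a dense array.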

22 pages, 5727 KB  
Article
An Airport Knowledge-Based Method for Accurate Change Analysis of Airport Runways in VHR Remote Sensing Images
by Wei Ding and Jidong Wu
Remote Sens. 2020, 12(19), 3163; https://doi.org/10.3390/rs12193163 - 26 Sep 2020
Cited by 15 | Viewed by 5904
Abstract
Due to the complexity of airport backgrounds and runway structure, the performance of most runway extraction methods is limited. Furthermore, military applications attach great importance to the semantic changes of certain objects in an airport, yet few studies have addressed this subject. To address these issues, this paper proposes an accurate runway change analysis method comprising two stages: airport runway extraction and runway change analysis. In the former stage, airport knowledge such as chevron markings and runway edge markings is first applied in combination with multiple runway features to improve accuracy; the proposed method accomplishes airport runway extraction automatically. In the latter, semantic information and vector results of runway changes are obtained simultaneously by comparing bi-temporal runway extraction results. In six test images with about 0.5-m spatial resolution, the average completeness of runway extraction is nearly 100%, and the average quality is nearly 89%. In addition, a final experiment using two sets of bi-temporal very high-resolution (VHR) images of runway changes demonstrated that the semantic results obtained by our method are consistent with the real situation, with a final accuracy of over 80%. Overall, airport knowledge, especially chevron markings and runway edge markings, is critical to runway recognition/detection, and multiple runway features, such as shape and parallel-line features, further improve the completeness and accuracy of runway extraction. Finally, a small step has been taken in the study of runway semantic changes, which cannot be accomplished by change detection alone.
(This article belongs to the Section Remote Sensing Image Processing)
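The "compare bi-temporal extraction results to get semantic change" idea can be illustrated with a set-based toy version, treating each extraction result as a set of runway pixel coordinates. The change labels here are hypothetical stand-ins, not the paper's taxonomy.

```python
def runway_change(mask_t1, mask_t2):
    """Compare two runway masks (sets of (row, col) pixels) from different
    dates and derive a coarse semantic change label plus the changed pixels."""
    added = mask_t2 - mask_t1      # pixels that became runway
    removed = mask_t1 - mask_t2    # pixels that stopped being runway
    if not mask_t1 and mask_t2:
        label = "newly built"
    elif mask_t1 and not mask_t2:
        label = "demolished"
    elif added or removed:
        label = "modified"
    else:
        label = "unchanged"
    return label, added, removed
```

Plain change detection would only yield `added`/`removed` pixel sets; attaching a semantic label to the comparison is the extra step the abstract argues for.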