Search Results (27)

Search Parameters:
Keywords = camera gimbal control

36 pages, 20759 KB  
Article
Autonomous UAV Landing and Collision Avoidance System for Unknown Terrain Utilizing Depth Camera with Actively Actuated Gimbal
by Piotr Łuczak and Grzegorz Granosik
Sensors 2025, 25(19), 6165; https://doi.org/10.3390/s25196165 - 5 Oct 2025
Abstract
Autonomous landing capability is crucial for fully autonomous UAV flight. Currently, most solutions use color imaging from a downward-facing camera, lidar sensors, dedicated landing spots, beacons, or a combination of these approaches. Classical strategies can be limited by the absence of color data when lidar is used, limited obstacle perception when only color imaging is used, the low field of view of a single RGB-D sensor, or the requirement that the landing spot be prepared in advance. In this paper, a new approach is proposed in which an RGB-D camera mounted on a gimbal is used. The gimbal is actively actuated to counteract the limited field of view, while color images and depth information are provided by the RGB-D camera. Furthermore, a combined UAV-and-gimbal-motion strategy is proposed to counteract the low maximum range of depth perception and provide static obstacle detection and avoidance, while preserving safe operating conditions for low-altitude flight near potential obstacles. The system is built on the PX4 flight stack, a CubeOrange flight controller, and a Jetson Nano onboard computer. It was flight-tested in simulation and statically tested on a real vehicle. The results confirm the correctness of the system architecture and the possibility of deployment in real conditions.
(This article belongs to the Special Issue UAV-Based Sensing and Autonomous Technologies)
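The sweep idea above, actively slewing a narrow-FOV depth camera to cover a wider landing footprint, can be illustrated with a small geometry sketch. The helper below is hypothetical (the paper does not publish its actuation schedule): it only shows how altitude, footprint size, and the camera's vertical FOV determine how many gimbal tilt settings a sweep needs.

```python
import math

def gimbal_sweep_angles(altitude_m, footprint_m, vfov_deg, overlap=0.2):
    """Tilt angles (degrees from nadir) at which a depth camera's
    vertical FOV tiles a ground footprint centred under the UAV.

    Hypothetical helper for illustration only; the paper's actuation
    schedule is not published."""
    half = footprint_m / 2.0
    # Angle from nadir out to the footprint edge, as seen from the UAV.
    edge = math.degrees(math.atan2(half, altitude_m))
    step = vfov_deg * (1.0 - overlap)        # effective coverage per view
    n = max(1, math.ceil(2 * edge / step))   # views needed for the sweep
    # Centre the n views symmetrically about straight-down.
    return [step * (i - (n - 1) / 2.0) for i in range(n)]

# A 58-degree-VFOV camera at 10 m needs two tilt settings for a 20 m pad.
angles = gimbal_sweep_angles(altitude_m=10.0, footprint_m=20.0, vfov_deg=58.0)
```

With a smaller footprint relative to altitude, a single nadir view suffices and the function returns one zero tilt.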

17 pages, 5134 KB  
Article
Monocular Camera Pose Estimation and Calibration System Based on Raspberry Pi
by Chung-Wen Hung, Ting-An Chang, Xuan-Ni Chen and Chun-Chieh Wang
Electronics 2025, 14(18), 3694; https://doi.org/10.3390/electronics14183694 - 18 Sep 2025
Viewed by 231
Abstract
Numerous imaging-based methods have been proposed for artifact monitoring and preservation, yet most rely on fixed-angle cameras or robotic platforms, leading to high cost and complexity. In this study, a portable monocular camera pose estimation and calibration framework is presented to capture artifact images from consistent viewpoints over time. The system is implemented on a Raspberry Pi integrated with a controllable three-axis gimbal, enabling untethered operation. Three methodological innovations are proposed. First, ORB feature extraction combined with a quadtree-based distribution strategy is employed to ensure uniform keypoint coverage and robustness under varying illumination conditions. Second, on-device processing is achieved using a Raspberry Pi, eliminating dependence on external power or high-performance hardware. Third, unlike traditional fixed setups or multi-degree-of-freedom robotic arms, real-time, low-cost calibration is provided, maintaining pose alignment accuracy consistently within three pixels. Through these innovations, a technically robust, computationally efficient, and highly portable solution for artifact preservation has been demonstrated, making it suitable for deployment in museums, exhibition halls, and other resource-constrained environments.
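The quadtree-based keypoint distribution mentioned in the abstract can be sketched in a few lines. The function below is a simplified illustration, not the authors' implementation: it recursively splits the image into quadrants and keeps the strongest-response keypoint per leaf, which is how such strategies achieve uniform coverage. The (x, y, response) tuple format is an assumption of this sketch.

```python
import math

def distribute_quadtree(kps, x0, y0, x1, y1, target):
    """Keep a spatially uniform subset of keypoints by recursive quadrant
    splitting, retaining the strongest response per leaf.

    kps: list of (x, y, response) tuples (assumed format, not the
    paper's data structure)."""
    if not kps:
        return []
    if target <= 1 or len(kps) == 1:
        return [max(kps, key=lambda kp: kp[2])]  # strongest survives
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    quads = [[], [], [], []]
    for kp in kps:
        quads[2 * (kp[1] >= my) + (kp[0] >= mx)].append(kp)
    bounds = [(x0, y0, mx, my), (mx, y0, x1, my),
              (x0, my, mx, y1), (mx, my, x1, y1)]
    live = [i for i in range(4) if quads[i]]
    share = math.ceil(target / len(live))  # split the budget across quads
    out = []
    for i in live:
        out.extend(distribute_quadtree(quads[i], *bounds[i], share))
    return out[:target]
```

Given a dense cluster of detections in one corner plus a few isolated ones, the selection returns one keypoint per occupied region instead of several near-duplicates.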

17 pages, 5154 KB  
Article
A Two-Step Controller for Vision-Based Autonomous Landing of a Multirotor with a Gimbal Camera
by Sangbaek Yoo, Jae-Hyeon Park and Dong Eui Chang
Drones 2024, 8(8), 389; https://doi.org/10.3390/drones8080389 - 9 Aug 2024
Viewed by 1428
Abstract
This article presents a novel vision-based autonomous landing method for a multirotor with a gimbal camera, designed to be applicable from any initial position within a broad space by addressing field-of-view and singularity problems to ensure stable performance. The proposed method employs a two-step controller based on the integrated dynamics of the multirotor and the gimbal camera: the multirotor approaches the landing site horizontally in the first step and descends vertically in the second. Because the stabilizing controller is designed for the integrated dynamics, the multirotor and the camera converge simultaneously to the desired configuration. The controller requires only one feature point and reduces unnecessary camera rolling. The effectiveness of the proposed method is demonstrated through simulation and real-environment experiments.

29 pages, 20581 KB  
Article
Manipulating Camera Gimbal Positioning by Deep Deterministic Policy Gradient Reinforcement Learning for Drone Object Detection
by Ming-You Ma, Yu-Hsiang Huang, Shang-En Shen and Yi-Cheng Huang
Drones 2024, 8(5), 174; https://doi.org/10.3390/drones8050174 - 28 Apr 2024
Cited by 3 | Viewed by 3121
Abstract
The object recognition technology of unmanned aerial vehicles (UAVs) equipped with “You Only Look Once” (YOLO) has been validated in actual flights. However, the challenge lies in efficiently using camera gimbal control to swiftly capture images of YOLO-identified target objects in aerial search missions, and enhancing the UAV’s energy efficiency and search effectiveness is imperative. This study establishes a simulation environment in the Unity software for target tracking by controlling the gimbal, and develops deep deterministic policy-gradient (DDPG) reinforcement learning to train the gimbal to execute effective tracking actions. The simulation outcomes indicate that when actions are appropriately rewarded or penalized in the form of scores, the reward value consistently converges within the range of 19–35, implying that a successful strategy leads to consistently high rewards. Consequently, a refined set of training procedures is devised, enabling the gimbal to accurately track the target while minimizing unnecessary tracking actions, thus enhancing tracking efficiency. Training in a simulated environment offers numerous benefits: the training draws on a dataset composed of actual flight photographs, and offline training can be conducted at any time, free of constraints of time and space. This approach therefore effectively enables the training and enhancement of the gimbal’s action strategies. The findings demonstrate that a coherent set of action strategies can be proficiently cultivated with DDPG reinforcement learning, and that these strategies empower the UAV’s gimbal to rapidly and precisely track designated targets. The approach also provides convenient opportunities to gather more flight-scenario training data in the future, enabling immediate retraining and helping to improve the system’s energy consumption.
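A reward of the kind described, scoring the gimbal for keeping the target centred while penalising needless actions, might look as follows. The weights, frame size, and lost-target penalty are illustrative assumptions; the paper's actual reward terms are not reproduced here.

```python
def gimbal_reward(err_px, action_deg, frame_half_diag=400.0,
                  w_track=1.0, w_effort=0.05, lost_penalty=-10.0):
    """Reward-shaping sketch for DDPG gimbal target tracking.

    err_px: pixel distance from the target to the image centre.
    action_deg: magnitude of the commanded gimbal step.
    All weights and the frame size are hypothetical, not the paper's."""
    if err_px > frame_half_diag:          # target left the frame
        return lost_penalty
    tracking = w_track * (1.0 - err_px / frame_half_diag)
    effort = w_effort * abs(action_deg)   # discourage needless slewing
    return tracking - effort
```

A perfectly centred target with no actuation scores highest; large corrections and off-centre targets score progressively lower, and losing the target is penalised outright.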

14 pages, 15837 KB  
Article
Improving Optical Flow Sensor Using a Gimbal for Quadrotor Navigation in GPS-Denied Environment
by Jonathan Flores, Ivan Gonzalez-Hernandez, Sergio Salazar, Rogelio Lozano and Christian Reyes
Sensors 2024, 24(7), 2183; https://doi.org/10.3390/s24072183 - 28 Mar 2024
Cited by 3 | Viewed by 2961
Abstract
This paper proposes a new sensor using optical flow to stabilize a quadrotor when a GPS signal is not available. Normally, optical flow varies with the attitude of the aerial vehicle. This produces positive feedback on the attitude control that destabilizes the orientation of the vehicle. To avoid this, we propose a novel sensor using an optical flow camera with a 6DoF IMU (Inertial Measurement Unit) mounted on a two-axis anti-shake stabilizer mobile aerial gimbal. We also propose a robust algorithm based on Sliding Mode Control for stabilizing the optical flow sensor downwards independently of the aerial vehicle attitude. This method improves the estimation of the position and velocity of the quadrotor. We present experimental results to show the performance of the proposed sensor and algorithms.
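A minimal sketch of Sliding Mode Control for one gimbal axis, assuming (for illustration only) a double-integrator axis model with a bounded sinusoidal disturbance; a boundary-layer saturation replaces the discontinuous sign function to avoid chattering:

```python
import math

def simulate_smc(theta0=1.0, t_end=3.0, dt=0.001, lam=5.0, k=10.0, phi=0.05):
    """Boundary-layer sliding mode control of a single gimbal axis.

    Illustrative assumption: the axis is a double integrator,
    theta_ddot = u + d, with disturbance |d| <= 0.5. The paper's
    actual model and gains are not reproduced here."""
    theta, omega, t = theta0, 0.0, 0.0
    while t < t_end:
        d = 0.5 * math.sin(2.0 * t)            # unknown bounded disturbance
        s = omega + lam * theta                # sliding surface s = e' + lam*e
        sat = max(-1.0, min(1.0, s / phi))     # boundary layer vs. sign(s)
        u = -lam * omega - k * sat             # equivalent + reaching control
        omega += (u + d) * dt                  # Euler integration of the axis
        theta += omega * dt
        t += dt
    return theta

final_err = simulate_smc()
```

Choosing the switching gain k larger than the disturbance bound forces the state onto the surface s = 0, after which the error decays exponentially at rate lam despite the disturbance.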

14 pages, 2722 KB  
Article
A Study on Robust Finite-Time Visual Servoing with a Gyro-Stabilized Surveillance System
by Thinh Huynh and Young-Bok Kim
Actuators 2024, 13(3), 82; https://doi.org/10.3390/act13030082 - 21 Feb 2024
Cited by 2 | Viewed by 2488
Abstract
This article presents the design and validation of a novel visual servoing scheme for a surveillance system. In this system, a two-axis gimbal mechanism rotates a camera that provides visual information on the tracked target for the control system. The control objective is to bring the target’s projection to the center of the image plane with the smallest steady-state error and a smooth transient response, even with the unpredictable motion of the target and the influence of external disturbances. To fulfill these tasks, the proposed control scheme consists of two parts: (1) an observer that simultaneously estimates the matched and unmatched disturbances; and (2) a motion control law that guarantees finite-time stability and visual servoing performance. Finally, experiments are conducted for validation and evaluation. The proposed control system demonstrates consistent performance and outperforms previous approaches.
(This article belongs to the Section Control Systems)

18 pages, 29460 KB  
Article
A Deep Learning Approach of Intrusion Detection and Tracking with UAV-Based 360° Camera and 3-Axis Gimbal
by Yao Xu, Yunxiao Liu, Han Li, Liangxiu Wang and Jianliang Ai
Drones 2024, 8(2), 68; https://doi.org/10.3390/drones8020068 - 18 Feb 2024
Cited by 4 | Viewed by 3502
Abstract
Intrusion detection is often used in scenarios such as airports and essential facilities. With UAVs equipped with optical payloads, intrusion detection from an aerial perspective can be realized; however, due to the limited field of view of the camera, it is difficult to achieve large-scale continuous tracking of intrusion targets. In this study, we propose an intrusion target detection and tracking method based on the fusion of a 360° panoramic camera and a 3-axis gimbal, and design a detection model covering five types of intrusion targets. A multi-rotor UAV platform was built, and, from field flight tests, 3043 images taken by the 360° panoramic camera and 3-axis gimbal in various environments were collected to produce an intrusion data set. Considering the applicability of the YOLO family to intrusion target detection, this paper proposes an improved YOLOv5s-360ID model based on the original YOLOv5-s model. The model improves the anchor boxes of YOLOv5-s according to the characteristics of intrusion targets, using the K-Means++ clustering algorithm to regenerate anchor boxes matched to the small-target detection task, and introduces the EIoU loss function to replace the original CIoU bounding-box regression loss, making the detection model more efficient while ensuring high detection accuracy. The detection model was deployed on the UAV platform to complete flight verification in an actual scene. The experimental results showed that the mean average precision (mAP) of YOLOv5s-360ID was 75.2%, compared with 72.4% for the original YOLOv5-s model, and that the real-time detection frame rate was 31 FPS, validating the model’s real-time performance. The gimbal tracking control algorithm for intrusion targets was also validated, demonstrating that the system can enhance the detection and tracking range of intrusion targets.
(This article belongs to the Section Drone Design and Development)
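Anchor re-estimation with K-Means++ seeding, as used above, can be sketched as follows. This is not the authors' code: it clusters (width, height) box sizes with plain Euclidean distance for brevity, whereas detection pipelines often prefer a 1 − IoU distance.

```python
import random

def kmeans_anchors(boxes, k=3, iters=20, seed=0):
    """Cluster (w, h) box sizes into k anchors with k-means++ seeding.

    Sketch of anchor re-estimation as done in YOLO pipelines; Euclidean
    distance is used here for brevity."""
    rng = random.Random(seed)
    d2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # --- k-means++ initialisation: seed far-apart centres ---
    centers = [rng.choice(boxes)]
    while len(centers) < k:
        dists = [min(d2(b, c) for c in centers) for b in boxes]
        r, acc = rng.random() * sum(dists), 0.0
        for b, w in zip(boxes, dists):
            acc += w
            if acc >= r:
                centers.append(b)
                break
    # --- Lloyd iterations: assign boxes, then recompute centroids ---
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for b in boxes:
            groups[min(range(k), key=lambda i: d2(b, centers[i]))].append(b)
        centers = [
            (sum(b[0] for b in g) / len(g), sum(b[1] for b in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return sorted(centers)
```

Run on the widths and heights of a labelled training set, the sorted output becomes the anchor list in the model configuration.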

16 pages, 9569 KB  
Article
Enhancing UAV Visual Landing Recognition with YOLO’s Object Detection by Onboard Edge Computing
by Ming-You Ma, Shang-En Shen and Yi-Cheng Huang
Sensors 2023, 23(21), 8999; https://doi.org/10.3390/s23218999 - 6 Nov 2023
Cited by 16 | Viewed by 3923
Abstract
A camera system combined with an unmanned aerial vehicle (UAV) onboard edge computer should provide efficient object detection, a high frame rate on the object of interest, and wide gimbal-camera search capability for finding an emergency landing platform and for future reconnaissance-area missions. This paper proposes an approach to enhance the visual capabilities of such a system using You Only Look Once (YOLO)-based object detection (OD) accelerated with TensorRT, an automated visual-tracking gimbal camera control system, and multithreaded programming for image transmission to the ground station. With lightweight edge computing (EC), a satisfactory mean average precision (mAP) and a higher frames-per-second (FPS) rate were achieved via YOLO accelerated with TensorRT onboard the UAV. Four YOLO models were compared for recognizing objects of interest for landing spots, first at the home university; the dataset trained with YOLOv4-tiny was then successfully applied to another field more than 100 km away, demonstrating the system’s capability to accurately recognize a different landing point in new and unknown environments. The proposed approach substantially reduces data transmission and processing time to ground stations with automated visual-tracking gimbal control, and yields rapid OD, with the NVIDIA Jetson Xavier NX shown to be feasible by deploying YOLOs at more than 35 FPS on the UAV. The enhanced visual landing and future reconnaissance capabilities of real-time UAVs were demonstrated.

19 pages, 3517 KB  
Article
Quadrotor UAV Dynamic Visual Servoing Based on Differential Flatness Theory
by Ahmed Alshahir, Mohammed Albekairi, Kamel Berriri, Hassen Mekki, Khaled Kaaniche, Shahr Alshahr, Bassam A. Alshammari and Anis Sahbani
Appl. Sci. 2023, 13(12), 7005; https://doi.org/10.3390/app13127005 - 10 Jun 2023
Cited by 1 | Viewed by 2245
Abstract
In this paper, we propose 2D dynamic visual servoing (Dynamic IBVS), in which a quadrotor UAV tracks a moving target using a single downward-facing perspective camera. As an application, we propose the tracking of a car-type vehicle. In this case, data related to the altitude and the lateral angles have no importance for the visual system: to perform the tracking, we only need the longitudinal displacements (along the x and y axes) and the orientation about the z-axis. However, those data are necessary for the quadrotor’s guidance problem. Thanks to the concept of differential flatness, we demonstrate that if we can extract the displacements along the three axes and the orientation about the yaw angle (the vertical axis) of the quadrotor, we can control all the other variables of the system. For this, we consider a camera equipped with a vertical stabilizer that keeps it in a vertical position during its movement (a gimbaled camera), while other specialized sensors measure altitude and the lateral angles. In classic 2D visual servoing, computing the quadrotor’s kinematic twist in no way guarantees the physical realization of the commands, given that the quadrotor is an under-actuated system: the setpoint has dimension six, while the quadrotor is controlled by only four inputs. In addition, the dynamics of a quadrotor are generally very fast, which requires a high-frequency control law, and the complexity of the image-processing stage can cause delays in motion control, which can lead to target loss. A new dynamic 2D visual servoing method (Dynamic IBVS) is therefore proposed. This method generates in real time the movements necessary for the quadrotor to track the target (vehicle) using a single point of the target as visual information; this point can be the target’s center of gravity or any other part of it. A flatness-based control is proposed, which guarantees the controllability of the system and ensures the asymptotic convergence of the generated trajectory in the image plane. Numerical simulations are presented to show the effectiveness of the proposed control strategy.
(This article belongs to the Special Issue Control and Position Tracking for UAVs)

14 pages, 1168 KB  
Article
A Hierarchical Stabilization Control Method for a Three-Axis Gimbal Based on Sea–Sky-Line Detection
by Zhanhua Xin, Shihan Kong, Yuquan Wu, Gan Zhan and Junzhi Yu
Sensors 2022, 22(7), 2587; https://doi.org/10.3390/s22072587 - 28 Mar 2022
Cited by 10 | Viewed by 4086
Abstract
Obtaining a stable video sequence for cameras on surface vehicles is always a challenging problem due to the severe disturbances in heavy sea environments. Aiming at this problem, this paper proposes a novel hierarchical stabilization method based on real-time sea–sky-line detection. More specifically, a hierarchical image stabilization control method that combines mechanical image stabilization with electronic image stabilization is adopted. With respect to the mechanical image stabilization method, a gimbal with three degrees of freedom (DOFs) and with a robust controller is utilized for the primary motion compensation. In addition, the electronic image stabilization method based on sea–sky-line detection in video sequences accomplishes motion estimation and compensation. The Canny algorithm and Hough transform are utilized to detect the sea–sky line. Noticeably, an image-clipping strategy based on prior information is implemented to ensure real-time performance, which can effectively improve the processing speed and reduce the equipment performance requirements. The experimental results indicate that the proposed method for mechanical and electronic stabilization can reduce the vibration by 74.2% and 42.1%, respectively.
(This article belongs to the Special Issue Perception, Planning and Control of Marine Robots)
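A much-simplified stand-in for the Canny-plus-Hough sea-sky-line detector, workable only when the horizon is roughly level: pick the image row with the strongest mean vertical gradient. The synthetic frame and its dimensions are this sketch's own, not the paper's pipeline.

```python
import numpy as np

def detect_horizon_row(gray):
    """Return the row with the strongest mean vertical gradient.

    Simplified stand-in for Canny + Hough sea-sky-line detection; it
    assumes a near-horizontal horizon. The paper's image-clipping idea
    would restrict this search to a band around the previous estimate."""
    grad = np.abs(np.diff(gray.astype(float), axis=0)).mean(axis=1)
    return int(np.argmax(grad)) + 1  # diff[i] lies between rows i and i+1

# Synthetic frame: bright sky above row 60, dark sea below it.
frame = np.full((120, 160), 40, dtype=np.uint8)
frame[:60] = 200
row = detect_horizon_row(frame)
```

In the full method, the detected line's offset and tilt between frames drive the electronic motion compensation after the gimbal has removed the coarse motion.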

21 pages, 16283 KB  
Article
Autonomous Landing of a Quadrotor on a Moving Platform via Model Predictive Control
by Kaiyang Guo, Pan Tang, Hui Wang, Defu Lin and Xiaoxi Cui
Aerospace 2022, 9(1), 34; https://doi.org/10.3390/aerospace9010034 - 11 Jan 2022
Cited by 14 | Viewed by 6013
Abstract
Landing on a moving platform is an essential requirement to achieve high-performance autonomous flight with various vehicles, including quadrotors. We propose an efficient and reliable autonomous landing system, based on model predictive control, which can accurately land in the presence of external disturbances. To detect and track the landing marker, a fast two-stage algorithm is introduced for the gimbaled camera, while a model predictive controller with variable sampling time is used to predict and calculate the entire landing trajectory based on the estimated platform information. As the quadrotor approaches the target platform, the sampling time is gradually shortened to feed a re-planning process that refines the landing trajectory continuously and rapidly, improving the overall accuracy and computing efficiency. At the same time, a cascade incremental nonlinear dynamic inversion control method is adopted to track the planned trajectory and improve robustness against external disturbances. We carried out both simulations and outdoor flight experiments to demonstrate the effectiveness of the proposed landing system. The results show that the quadrotor can land rapidly and accurately even under external disturbance, and that the terminal position, speed and attitude satisfy the requirements of a smooth landing mission.
(This article belongs to the Section Aeronautics)

18 pages, 5583 KB  
Article
A Study on Vision-Based Backstepping Control for a Target Tracking System
by Thinh Huynh, Minh-Thien Tran, Dong-Hun Lee, Soumayya Chakir and Young-Bok Kim
Actuators 2021, 10(5), 105; https://doi.org/10.3390/act10050105 - 19 May 2021
Cited by 11 | Viewed by 4469
Abstract
This paper proposes a new method to control the pose of a camera mounted on a two-axis gimbal system for visual servoing applications. In these applications, the camera should be stable while its line-of-sight points at a target located within the camera’s field of view. One of the most challenging aspects of these systems is the coupling in the gimbal kinematics as well as in the imaging geometry. Such factors must be considered in the control system design process to achieve better control performance. The novelty of this study is that the couplings in both the mechanism’s kinematics and the imaging geometry are decoupled simultaneously by a new technique, so popular control methods can be easily implemented and good tracking performance obtained. The proposed control configuration includes a calculation of the gimbal’s desired motion that takes the coupling influence into account, and a control law derived by the backstepping procedure. Simulation and experimental studies were conducted, and their results validate the efficiency of the proposed control system. Moreover, comparison studies between the proposed control scheme, image-based pointing control, and decoupled control demonstrate the superiority of the proposed approach, which requires fewer measurements and yields smoother transient responses.
(This article belongs to the Special Issue Visual Servoing of Mobile Robots)

10 pages, 2306 KB  
Article
Design of a Free Space Optical Communication System for an Unmanned Aerial Vehicle Command and Control Link
by Yiqing Zhang, Yuehui Wang, Yangyang Deng, Axin Du and Jianguo Liu
Photonics 2021, 8(5), 163; https://doi.org/10.3390/photonics8050163 - 14 May 2021
Cited by 16 | Viewed by 4854
Abstract
An electromagnetically immune Free Space Optical Communication (FSOC) system for an Unmanned Aerial Vehicle (UAV) command and control link is introduced in this paper. The system uses a scheme of omnidirectional receiving and ground scanning-transmitting, and achieves strong anti-turbulence ability through a large-area detector and a short-focus lens. The omnidirectional-communication design improves anti-vibration and link-establishment ability, and purely static reception exerts no momentum effect on the platform. The receiver is miniaturized by dispensing with a gimbal mirror system, beacon camera system, Four-Quadrant Photodetector (QPD) and multi-level lens system. The system realizes omnidirectional reception, and the communication probability within 1 s is greater than 99.99%. This design strengthens the FSOC system so that it can be applied to UAV command and control, satellite-submarine communication and other settings where the size of the platform is restricted.

24 pages, 12172 KB  
Article
Development of Non Expensive Technologies for Precise Maneuvering of Completely Autonomous Unmanned Aerial Vehicles
by Luca Bigazzi, Stefano Gherardini, Giacomo Innocenti and Michele Basso
Sensors 2021, 21(2), 391; https://doi.org/10.3390/s21020391 - 8 Jan 2021
Cited by 14 | Viewed by 4141
Abstract
In this paper, solutions for precise maneuvering of an autonomous small (e.g., 350-class) Unmanned Aerial Vehicle (UAV) are designed and implemented through smart modifications of inexpensive mass-market technologies. This class of vehicle can carry only a light payload, and therefore only a limited number of sensors and computing devices can be installed on board. To make the prototype capable of moving autonomously along a fixed trajectory, a “cyber-pilot”, able on demand to replace the human operator, has been implemented on an embedded control board; it overrides the commands through a custom hardware signal mixer. The drone localizes itself in the environment without ground assistance by using a camera, possibly mounted on a 3-Degrees-Of-Freedom (DOF) gimbal suspension. A computer vision system processes the video stream, picking out land markers with known absolute position and orientation. This information is fused with accelerations from a 6-DOF Inertial Measurement Unit (IMU) to generate a “virtual sensor” that provides refined estimates of the pose, absolute position, speed and angular velocities of the drone. Given the importance of this sensor, several fusion strategies have been investigated. The resulting data are finally fed to a control algorithm featuring a number of uncoupled digital PID controllers that work to drive the displacement from the desired trajectory to zero.
(This article belongs to the Special Issue Sensors for Unmanned Aircraft Systems and Related Technologies)
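The uncoupled digital PID controllers mentioned above can each be an instance of a textbook discrete PID. The gains, time step, and toy plant below are illustrative assumptions, not values from the paper:

```python
class PID:
    """Textbook discrete PID controller; several independent instances
    can drive uncoupled axes as described above (illustrative gains)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, None

    def update(self, err):
        self.integral += err * self.dt
        # Backward-difference derivative; zero on the first sample.
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: displacement responds directly to the velocity command.
pid, x, dt = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01), 1.0, 0.01
for _ in range(1500):            # 15 s of simulated flight
    x += pid.update(-x) * dt     # error = setpoint (0) - x
```

One such controller per axis keeps the design simple; the coupling between axes is left to the outer trajectory layer, mirroring the uncoupled structure described in the abstract.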

24 pages, 4182 KB  
Article
Remote Estimation of Target Height from Unmanned Aerial Vehicle (UAV) Images
by Andrea Tonini, Paula Redweik, Marco Painho and Mauro Castelli
Remote Sens. 2020, 12(21), 3602; https://doi.org/10.3390/rs12213602 - 2 Nov 2020
Cited by 6 | Viewed by 4615
Abstract
This paper focuses on how the height of a target can be swiftly estimated from images acquired by a digital camera installed on a moving platform, such as an unmanned aerial vehicle (UAV). A pinhole camera model after distortion compensation was adopted for this purpose, since it requires neither extensive processing nor vanishing lines. The pinhole model has been widely employed for similar purposes in past studies, but mainly for fixed camera installations. This study analyzes how to tailor the pinhole model to gimballed cameras mounted on UAVs, considering both camera parameters and flight parameters, and indicates a solution that corrects only the few pixels actually needed, limiting the processing load. Finally, an extensive analysis was conducted to quantify the uncertainty associated with the height estimate. The results highlight relationships between UAV-to-target distance, camera pose, and height uncertainty that allow practical exploitation of the proposed approach. The model was tested with real data in both controlled and uncontrolled environments; the results confirm the suitability of the proposed method and the outcomes of the uncertainty analysis. This research can open consumer UAVs to innovative applications in urban surveillance.
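The gimballed-pinhole idea admits a compact trigonometric sketch: knowing the camera altitude, its pitch below the horizontal, and the image rows of a target's base and top, the target height follows from two ray intersections with the ground plane. Variable names and conventions here are this sketch's own, not the paper's:

```python
import math

def target_height(H, pitch_deg, fy, cy, v_top, v_bottom):
    """Estimate target height from one image of a downward-pitched
    gimballed camera (pinhole model, flat ground assumed).

    H: camera altitude (m); pitch_deg: optical-axis depression below
    horizontal; fy, cy: focal length and principal-point row (pixels);
    v_top, v_bottom: image rows of the target's top and base."""
    pitch = math.radians(pitch_deg)
    ang_bottom = pitch + math.atan((v_bottom - cy) / fy)  # ray to base
    ang_top = pitch + math.atan((v_top - cy) / fy)        # ray to top
    ground_dist = H / math.tan(ang_bottom)   # camera to target base
    return H - ground_dist * math.tan(ang_top)

# Round-trip check: synthetic 10 m target, 60 m away, camera at 50 m.
fy, cy, H, pitch = 800.0, 360.0, 50.0, 30.0
v_b = cy + fy * math.tan(math.atan(50 / 60) - math.radians(pitch))
v_t = cy + fy * math.tan(math.atan(40 / 60) - math.radians(pitch))
h = target_height(H, pitch, fy, cy, v_t, v_b)
```

The uncertainty behaviour the paper analyzes falls out of the same geometry: small row errors translate into larger height errors as the depression angles flatten, i.e., as the UAV-to-target distance grows.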
