Article

Robust Vision-Based Control of a Rotorcraft UAV for Uncooperative Target Tracking

Research Center of Satellite Technology, Harbin Institute of Technology, Harbin 150080, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(12), 3474; https://doi.org/10.3390/s20123474
Submission received: 8 May 2020 / Revised: 6 June 2020 / Accepted: 14 June 2020 / Published: 19 June 2020
(This article belongs to the Special Issue Aerial Vision and Sensors)

Abstract

This paper investigates the problem of using an unmanned aerial vehicle (UAV) to track and hover above an uncooperative target, such as an unvisited area or an object that is newly discovered. A vision-based strategy integrating the metrology and the control is employed to achieve target tracking and hovering observation. First, by introducing a virtual camera frame, the reprojected image features can change independently of the rotational motion of the vehicle. The image centroid and an optimal observation area on the virtual image plane are exploited to regulate the relative horizontal and vertical distance. Then, the optic flow and gyro measurements are utilized to estimate the relative UAV-to-target velocity. Further, a gain-switching proportional-derivative (PD) control scheme is proposed to compensate for the external interference and model uncertainties. The closed-loop system is proven to be exponentially stable, based on the Lyapunov method. Finally, simulation results are presented to demonstrate the effectiveness of the proposed vision-based strategy in both hovering and tracking scenarios.

1. Introduction

Unmanned aerial vehicles (UAVs) have received growing interest, due to their advantages of vertical take off and landing, rapid maneuverability, and low cost. With improvements in sensing devices, batteries, materials, and other technologies, UAVs have sufficient payload and flight endurance, supporting many applications such as transportation, real-time monitoring, search and rescue, and security and surveillance [1,2,3,4]. There have been a variety of studies related to missions using autonomous hovering and tracking technologies [5,6,7]. In [5], a finite-time controller was proposed to drive a quadrotor hovering above a target with a limited duration. In [6], a novel fuzzy PID-type iterative learning control was developed for trajectory tracking of a quadrotor under the effects of external disturbances and uncertain factors of the system. The problem of energy-efficient path planning and simultaneously anticipating disturbances has been addressed in [7]. However, the available studies have mainly focused on hovering above or tracking a target with a known trajectory and definite position and velocity information. Tracking passive, uncooperative, or even unknown targets, such as vehicles in traffic accidents and fire areas in groves or forests, is still a challenging problem for UAV control. These cases usually occur suddenly and detailed geometric information (e.g., dimensions and size) of the target is not available. On the other hand, rough information (e.g., shape and structure) or a simplified model of the target can be stored onboard, which can help the vehicle to identify and lock on to the target.
A wide range of state-of-the-art sensors can be mounted on airborne platforms, and multi-sensor fusion is a promising trend in UAV navigation and control [8]. Inertial measurement units (IMUs) and Global Positioning Systems (GPS) are commonly mounted on UAVs to provide position, orientation, and velocity information [9]. However, GPS cannot determine the relative position between the UAV and a target, unless the target’s position information is accurately known. Moreover, GPS becomes unreliable and ineffective in the presence of obstacles or external disturbances, such as urban canyon effects or electromagnetic interference [10,11]. On the other hand, a large number of UAVs, especially small-scale quadrotors, are equipped with cameras which can provide adequate visual information and act as an alternative option to GPS. The notion of introducing visual information into the control system is referred to as visual servoing [12]. According to different feedback information, visual servoing can be classified into two approaches: position-based visual servoing (PBVS) and image-based visual servoing (IBVS).
PBVS methods rely on relative pose estimation techniques, including the PnP algorithm, epipolar geometry, ICP algorithm, or the fusion of vision and other sensors. In [13], a monocular semi-direct visual odometry algorithm was applied to obtain position information and an adaptive sliding mode controller based on the backstepping technique was presented for quadrotor tracking control. In [14], visual feedback was exploited to estimate the position and attitude of a UAV based on the triangulation method, and a classical PID control was designed for indoor UAV flight. In [15], a vision-based control scheme was proposed for a quadrotor, in order to track a moving target. To compensate for the image dynamic uncertainties, a neural network controller based on a radial basis function was introduced into the closed-loop system. However, the methods mentioned above require a priori knowledge of the target’s geometric model, which does not apply to a newly discovered uncooperative target. Further, 3D reconstruction based on generic features (instead of specific artificial markers) is usually a challenging task, and the sophisticated image processing involved requires long computation times, which may result in delays in the actuation. The IBVS approach, on the other hand, allows for designing a control law directly in the 2D image space and omits the relative pose estimation process. Thus, the geometric information of the target is not required in this approach. The applications of IBVS for UAV control are largely found in recent studies, in which image features have been used as the position cue. To decouple the rotational motion of the vehicle from the change in the image features, several methods have been adopted in IBVS control of a UAV, including spherical projection [16,17], homography-based methods [18,19], virtual spring methods [20,21], and virtual camera methods [22,23]. It is worth noting that the reference image (or image trajectory) is usually designated ahead of the control process, which is typically referred to as the “teach-by-showing” framework. However, this process may not be practical in some specific missions, such as when a UAV is deployed to track a newly discovered target [24].
When tracking an uncooperative target without an accurate geometric model, one effective solution is Simultaneous Localization and Mapping (SLAM). SLAM is a real-time process by which a vehicle computes its localization and simultaneously builds a map of an unknown environment [25]. SLAM techniques are often employed to provide the pose estimation for the PBVS control of the UAV. In [26], a novel estimator based on an Extended Kalman Filter was proposed for multi-sensor fusion, including AHRS, GPS, and a monocular camera. Moreover, a visual SLAM-based scheme has been adopted to provide an accurate estimation of the translational position and velocity, as described in [27]. In [28], a strategy based on visual-inertial SLAM was employed for collision avoidance in a GPS-denied environment. In general, visual SLAM requires long computation times and large amounts of onboard resources, thus typically struggling to provide the rapid pose estimation needed by the UAV guidance system.
This paper presents a simple vision-based strategy to track an uncooperative target which is newly discovered by the UAV. The proposed strategy demonstrates a strict interaction between vision-based metrology and the resulting control. It is assumed that the target’s image is acquired and identified by matching the visual data extracted from the monocular camera with similar information stored on board. Then, a set of image features are well-defined by the image processing unit for the following process. Image centroid is employed for the metrology in the horizontal direction, in spite of the fact that the geometric model is unknown to the UAV. To eliminate the coupling effect of the UAV rotational motion and the change in the image features, we introduce a virtual camera frame. After reprojecting the image features onto the virtual image plane, we can directly estimate the relative horizontal distance between the UAV and the target, based on the perspective projection model. At the same time, the flying height of the UAV can be adjusted by specifying an optimal observation area on the virtual image plane. Motivated by the work in [29], the relative velocity is estimated by a Partial Velocity Evaluation method, which is reliable and computationally effective. Robust methods for the optic flow measurement have been reported in [30,31]. Considering the deviation of the measured mass and inertia matrix from the true values, as well as the disturbances resulting from the wind and other external factors, traditional PID algorithms may not be competent in the UAV controller design, as described in [32,33]. To compensate for the model and disturbance uncertainties and improve robust performance of the system, a gain-switching proportional-derivative (PD) control scheme is proposed in this paper. In this control strategy, the stability and robustness properties of the UAV nonlinear system are guaranteed based on the Lyapunov stability theory. Unlike the sliding mode control, which can be found in [34,35], the transient behavior of the system can be characterized analytically.
The remaining parts of this paper are organized as follows. The problem formulation is given in Section 2, the visual measurement for uncooperative target tracking is discussed thoroughly in Section 3, the gain switching PD controllers are designed in Section 4, the corresponding simulation results are depicted in Section 5 and, finally, the paper is concluded in Section 6.

2. Problem Formulation

The problem addressed in this paper corresponds to UAV tracking and hovering scenarios in which the target is newly discovered. The target, which is referred to as “uncooperative”, lacks accurate dimension/size information and real-time communication with the vehicle. It can be identified by the rough information of generic features stored onboard, such as shape and structure. The UAV studied here is a quadrotor equipped with multiple sensors, including an IMU, an ultrasonic sensor, and a monocular camera. A vision-based strategy integrating the metrology and the resulting control is employed to achieve target tracking and hovering observation. The quadrotor is a typical underactuated mechanism with more degrees of freedom than actuations. To analyze the tracking maneuvers, we first define several reference coordinate frames and present the equations of the motion of the quadrotor, then give an overall control framework of tracking and hovering observation.

2.1. Quadrotor Model

The quadrotor considered in this paper consists of a rigid cross-frame equipped with four rotors, as shown in Figure 1. The two rotors on the diagonal rotate in the same direction, while the adjacent rotors rotate in opposite directions. The six degrees of freedom of the quadrotor’s position and attitude can be controlled by adjusting the rotation speeds of the four motors. Two coordinate frames are introduced to describe the equations of motion of the quadrotor. An inertial frame I is fixed to some point O_i on the earth, with a basis { X_i , Y_i , Z_i }, whose elements are oriented north, east, and down, respectively. A body-fixed frame B is attached to the center of mass O_b of the quadrotor. The unit vectors of the body-fixed frame are represented by { X_b , Y_b , Z_b }, which are oriented forward, right, and down, respectively.
Consider a quadrotor with mass m and inertia matrix $J \in \mathbb{R}^{3 \times 3}$. The translational dynamics in the inertial frame and the rotational dynamics in the body frame are given as follows.
For the translational dynamics,
$$\dot{\xi} = v, \qquad m\dot{v} = u_1 R E_3 + F_g + F_d,$$
and for the rotational dynamics,
$$\dot{\Phi} = W(\Phi)\,\omega, \qquad J\dot{\omega} = -\omega \times J\omega + \tau + \tau_d,$$
where ξ = [ x , y , z ] T and v = [ v x , v y , v z ] T are position and linear velocities of the quadrotor, respectively, expressed in the inertial frame; E 3 = [ 0 , 0 , 1 ] T is the unit vector in the body frame; the attitude of the vehicle, Φ , is given by three Euler angles φ , θ , and ψ denoting the roll, pitch, and yaw, respectively; and ω = [ ω x , ω y , ω z ] T is the quadrotor’s angular velocity expressed in the body frame. The corresponding rotation matrix from the body frame to the inertial frame is denoted by R , and the matrix associated with the Euler angles and the angular velocity can be written as
$$W(\Phi) = \begin{bmatrix} 1 & \sin\varphi\tan\theta & \cos\varphi\tan\theta \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi/\cos\theta & \cos\varphi/\cos\theta \end{bmatrix}.$$
Assumption 1.
If the quadrotor does not perform maneuvers that are too aggressive, the roll and pitch angles will both be very small (less than 15°). Then, the matrix $W(\Phi)$ can be replaced with a unit matrix, and $\dot{\varphi}, \dot{\theta}, \dot{\psi}, \ddot{\varphi}, \ddot{\theta}, \ddot{\psi}$ can be regarded as approximately equal to $\omega_x, \omega_y, \omega_z, \dot{\omega}_x, \dot{\omega}_y, \dot{\omega}_z$, respectively.
According to the working principle of the quadrotor (see Figure 1), by adjusting the rotational speed of each group of rotors, the quadrotor can generate a thrust force u 1 and torque vectors τ = [ u 2 , u 3 , u 4 ] T , which can be described as
$$u_1 = b\left(\omega_1^2 + \omega_2^2 + \omega_3^2 + \omega_4^2\right), \quad u_2 = bl\left(\omega_4^2 - \omega_2^2\right), \quad u_3 = bl\left(\omega_3^2 - \omega_1^2\right), \quad u_4 = d\left(\omega_2^2 + \omega_4^2 - \omega_1^2 - \omega_3^2\right),$$
where ω 1 , ω 2 , ω 3 , and ω 4 are the speeds of four motors, respectively; b and d represent lift and drag coefficients, respectively; and l is the distance from the center of each rotor to the center of mass of the quadrotor.
The gravity vector of the quadrotor is denoted by F g . As the quadrotor may be disturbed by the wind and other external factors, some unstructured forces and moments of the translational and rotational dynamics, which are described as F d and τ d , are introduced into the system.
Assumption 2.
The external disturbances $F_d$ and $\tau_d$ are assumed to be bounded; that is, $\|F_d\| \le (F_d)_{\max}$ and $\|\tau_d\| \le (\tau_d)_{\max}$, where $(F_d)_{\max}$ and $(\tau_d)_{\max}$ are positive constants and $\|\cdot\|$ denotes the standard Euclidean vector norm and induced matrix norm.
Assumption 3.
The mass is $m = \bar{m} + \Delta m$, where $\bar{m}$ and $\Delta m$ are the nominal and uncertain parts of the mass, respectively. The inertia matrix is $J = \bar{J} + \Delta J$, where $\bar{J}$ and $\Delta J$ are the nominal and uncertain parts of the inertia matrix, respectively. $\Delta m$ and $\Delta J$ satisfy the inequalities $|\Delta m| \le c_m$ and $\|\Delta J\| \le c_J$, where $c_m$ and $c_J$ are positive constants.
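For illustration, the actuator mapping in Equation (4) and the rotational dynamics in Equation (2) can be sketched as follows. This is a minimal Python sketch rather than the simulation code of this paper; the lift and drag coefficients b, d and the arm length l are assumed placeholder values, while the inertia matrix is the one quoted in Section 5.

```python
import numpy as np

b, d, l = 2.98e-6, 1.14e-7, 0.2          # lift coeff., drag coeff., arm length (assumed values)
J = np.diag([0.0096, 0.0098, 0.016])     # inertia matrix from Section 5 (kg*m^2/rad)

def rotor_speeds_to_wrench(w):
    """Map rotor speeds [w1, w2, w3, w4] to thrust u1 and torques [u2, u3, u4] (Eq. (4))."""
    u1 = b * np.sum(w ** 2)
    u2 = b * l * (w[3] ** 2 - w[1] ** 2)
    u3 = b * l * (w[2] ** 2 - w[0] ** 2)
    u4 = d * (w[1] ** 2 + w[3] ** 2 - w[0] ** 2 - w[2] ** 2)
    return u1, np.array([u2, u3, u4])

def attitude_dynamics(omega, tau, tau_d=np.zeros(3)):
    """Angular acceleration from Eq. (2): J * omega_dot = -omega x (J omega) + tau + tau_d."""
    return np.linalg.solve(J, -np.cross(omega, J @ omega) + tau + tau_d)
```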

2.2. Control Framework of Tracking and Hovering Observation

A flowchart of the proposed vision-based UAV tracking strategy is presented as follows (see Figure 2). During the cruise of the UAV, when a sensitive target is captured and identified by the onboard camera, the tracking and hovering observation process starts. First, a set of feature points on the target are extracted. By using the center point of the feature points and their pixel velocities, we can determine the relative distance and velocity, respectively, between the UAV and target. Then, the relative distance and velocity are input into the controller as error states. Based on the gain-switching PD control, the target centroid is expected to coincide with the center of the image plane, and the target image is required to be within the optimal observation area.

3. Visual Measurement Using a Virtual Camera Frame

When a sensitive target is first detected and determined by the camera during the quadrotor flight, its projection on the image plane can be described by a set of parameters, including slope, curvature, area, image centroid, and some parameters related to the shape of the target [36]. The selected target for tracking is assumed to be horizontal to the X i Y i plane of the inertial frame, and its projection onto the image plane is a compact area with large values of shape parameters, such as sphericity or rectangularity.

3.1. Relative Distance Estimation in Horizontal Direction

As the target is uncooperative and cannot send any position information to the quadrotor, the relative distance measurement relies entirely on the vision-based method. However, the target does not have specific markers and its detailed geometry is unknown to the quadrotor, such that it is challenging to use PnP solvers or Template Matching approaches for relative pose acquisition. In this work, we utilize an effective and simple method based on the image error to estimate the relative horizontal distance between the quadrotor and the target. On the other hand, the quadrotor is an underactuated system with only four independently controllable degrees of freedom. Rolling and translational motions along the Y b axis are coupled, as are pitching and translational motions along the X b axis, which means that the vehicle will inevitably tilt when maneuvering horizontally. Thus, the image features of the target will change with not only the translational, but also the rotational, motion of the quadrotor, which makes it more complicated to estimate the relative distance and velocity.
Without loss of generality, the camera frame considered in this paper, C = { X c , Y c , Z c } , is assumed to coincide with the body frame B . To solve the problem mentioned above, we introduce a virtual camera coordinate frame V , which has the same origin as the frame B ( C ) . The corresponding virtual image plane X v Y v is always parallel to the X i Y i plane and with the same yaw angle as the frame C ; that is, its roll and pitch angles remain zero (see Figure 3).
The coordinates of a point P in the inertial frame and camera frame are denoted by i P = [ i P x , i P y , i P z ] T and c P = [ c P x , c P y , c P z ] T , respectively. They have the following geometric relationship,
$${}^{c}P = R^{T}\left({}^{i}P - {}^{i}O_c\right),$$
where i O c denotes the coordinates of the origin O c in the inertial frame.
The pixel coordinates of the point P on the image plane are given by perspective projection equations [37]:
$$u = f\,\frac{{}^{c}P_x}{{}^{c}P_z} + u_0, \qquad n = f\,\frac{{}^{c}P_y}{{}^{c}P_z} + n_0,$$
in which f is the focal length of the camera and [ u 0 , n 0 ] T is the coordinate of the image plane center.
Now, reproject the image coordinates [ u , n ] T onto the virtual image plane using a matrix, R θ φ , associated with a rotation in the roll angle around X i and in the pitch angle around Y i :
$$\begin{bmatrix} {}^{v}u \\ {}^{v}n \end{bmatrix} = \frac{f}{\bar{R}_{\theta\varphi}^{3}\, p} \begin{bmatrix} \bar{R}_{\theta\varphi}^{1}\, p \\ \bar{R}_{\theta\varphi}^{2}\, p \end{bmatrix},$$
in which p = [ u , n , f ] T and R ¯ θ φ 1 , R ¯ θ φ 2 , and R ¯ θ φ 3 are the row vectors of the matrix R θ φ .
It is assumed that N ( N 3 ) non-collinear points are fixed on the selected target. The image centroid of the target is computed as
$${}^{v}u_g = \frac{1}{N}\sum_{k=1}^{N} {}^{v}u_k, \qquad {}^{v}n_g = \frac{1}{N}\sum_{k=1}^{N} {}^{v}n_k.$$
In this paper, we need to drive the quadrotor directly above the target, such that the desired image feature is determined as the center of the virtual image plane, [ v u 0 , v n 0 ] T , as shown in Figure 3. The reprojection of the feature points onto the virtual camera frame decouples the pitch and roll motion of the vehicle through the change of coordinates [ v u , v n ] T . Therefore, the relative horizontal distance can be estimated directly, using the deviation of the image centroid of the target from the center of the image plane:
$$\Delta x = \frac{{}^{v}z\, a}{f}, \qquad \Delta y = \frac{{}^{v}z\, b}{f},$$
where $a = {}^{v}u_g - {}^{v}u_0$ and $b = {}^{v}n_g - {}^{v}n_0$ are image errors defined in image space, and ${}^{v}z$ is the vertical distance of the virtual camera frame, obtained from the ultrasonic sensor measurement.
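A minimal sketch of this reprojection and of the horizontal error computation (Equations (7)–(9)) is given below. It assumes that pixel coordinates are first centered on the principal point and that $R_{\theta\varphi}$ is composed of the roll and pitch rotations; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def roll_pitch_rotation(phi, theta):
    """Roll-pitch part of the body-to-inertial rotation (levels the camera frame)."""
    cf, sf, ct, st = np.cos(phi), np.sin(phi), np.cos(theta), np.sin(theta)
    Rx = np.array([[1, 0, 0], [0, cf, -sf], [0, sf, cf]])
    Ry = np.array([[ct, 0, st], [0, 1, 0], [-st, 0, ct]])
    return Ry @ Rx

def reproject_to_virtual(pts_px, f, u0, n0, phi, theta):
    """Reproject pixel features (N x 2) onto the virtual image plane (Eq. (7))."""
    R_tp = roll_pitch_rotation(phi, theta)
    virt = []
    for u, n in pts_px:
        p = np.array([u - u0, n - n0, f])     # ray through the feature, camera frame
        q = R_tp @ p                          # same ray expressed in the virtual frame
        virt.append(f * q[:2] / q[2])         # perspective re-normalization
    return np.asarray(virt)

def horizontal_error(virt_pts, vz, f):
    """Centroid deviation -> relative horizontal distance (Eqs. (8)-(9))."""
    a, b = virt_pts.mean(axis=0)              # image errors w.r.t. the virtual image center
    return vz * a / f, vz * b / f             # (dx, dy), with vz from the ultrasonic sensor
```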

3.2. Relative Distance Estimation in Vertical Direction

When tracking an uncooperative target, the relative horizontal distance estimated in the above section should converge to and remain at zero. The height control is relatively flexible, depending on different observation requirements. The simplest approach is to keep the quadrotor flying at a constant height; however, this cannot guarantee that the target is observed in an optimal area of the image plane. Intuitively, the image size of the target varies with the height of the camera. The quadrotor should fly neither too high, which would make the target image too small to distinguish, nor too low, which could push the target out of the field of view.
Now, introduce a circle as the optimal observation area on the virtual image plane centered at O 1 with radius r o p t . Then, construct a circle that passes through the farthest feature point from the image centroid of the target. The radius of the constructed circle, denoted by r, is defined as
$$r = \max_{k=1,\dots,N}\sqrt{\left({}^{v}u_k - {}^{v}u_g\right)^2 + \left({}^{v}n_k - {}^{v}n_g\right)^2}.$$
According to Equation (10), the constructed circle will cover all of the feature points of the selected compact target. Given that the relative horizontal error converges to zero, the vertical distance of the quadrotor can be controlled by adjusting r to be equal to or less than r o p t , which guarantees the target image being kept in the optimal observation area. Therefore, the desired value of radius r satisfies the following conditions,
$$r_d = \sigma\, r_{opt}, \qquad \sigma \le 1,$$
where σ is the radius scaling factor and σ = 1 indicates that the optimal observation area circumscribes the target image.
From Equations (10) and (11), the desired flying height of the quadrotor can be written as
$$z_d = {}^{v}z\cdot\frac{r}{r_d} = {}^{v}z\cdot\frac{r}{\sigma\, r_{opt}}.$$
Then, we can define the position error in the vertical direction as
$$\Delta z = {}^{v}z - z_d.$$
To control the translational motion of the quadrotor, we define the full position error as Δ ξ = [ Δ x , Δ y , Δ z ] T , which is desired to converge to [ 0 , 0 , 0 ] T .
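Continuing the sketch above, the vertical error follows directly from Equations (10)–(13); this is again a hedged illustration, with $r_{opt}$ and $\sigma$ the design parameters defined in this subsection.

```python
import numpy as np

def vertical_error(virt_pts, vz, r_opt, sigma=1.0):
    """Enclosing-circle radius, desired height, and vertical error (Eqs. (10)-(13))."""
    centroid = virt_pts.mean(axis=0)
    r = np.max(np.linalg.norm(virt_pts - centroid, axis=1))   # Eq. (10): enclosing radius
    z_d = vz * r / (sigma * r_opt)                            # Eq. (12): desired height
    return r, z_d, vz - z_d                                   # Eq. (13): dz = vz - z_d
```

In the simulations of Section 5, for instance, $r_{opt} = 40$ pixels and $\sigma$ is set to 1 or 0.8.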

3.3. Relative Velocity Estimation

Generally, the control actions require knowledge of not only the position error but also the velocity error. The quadrotor is equipped with an IMU providing the angular velocity and acceleration of the vehicle. If the target is stationary, the relative velocity can be obtained directly by integrating the measured acceleration information. However, while the precision of the gyro is satisfactory for the needs of the vehicle maneuvers, UAV-equipped accelerometers are usually not accurate enough for evaluating the platform velocity [29]. Acquiring the velocity error is more complicated when the uncooperative target is moving. In this paper, we use the optic flow and gyro measurements to estimate the relative quadrotor-to-target velocity, a procedure referred to here as Partial Velocity Evaluation.
Taking the first derivative of Equation (7), we obtain the dynamics of an image feature v p k = [ v u k , v n k ] T [22]:
$${}^{v}\dot{p}_k = L_{v_k}\left({}^{v}v - {}^{v}v_{p_k}\right) + L_{\psi_k}\dot{\psi},$$
in which ${}^{v}v$ and ${}^{v}v_{p_k}$ are the velocities of the quadrotor and of the point $P_k$, respectively, expressed in the virtual camera frame, and the matrices $L_{v_k}$ and $L_{\psi_k}$ are defined as
$$L_{v_k} = \begin{bmatrix} -\dfrac{f}{{}^{v}P_z} & 0 & \dfrac{{}^{v}u_k}{{}^{v}P_z} \\[6pt] 0 & -\dfrac{f}{{}^{v}P_z} & \dfrac{{}^{v}n_k}{{}^{v}P_z} \end{bmatrix}, \qquad L_{\psi_k} = \begin{bmatrix} {}^{v}n_k \\ -{}^{v}u_k \end{bmatrix}.$$
Define a vector of image features, ${}^{v}p = [{}^{v}p_1^T, \dots, {}^{v}p_N^T]^T \in \mathbb{R}^{2N}$, and assume that the velocities of the N points are approximately equal to the target velocity, ${}^{v}v_{p_k} = {}^{v}v_t$. Then, Equation (14) can be extended as
$${}^{v}\dot{p} = L_v\left({}^{v}v - {}^{v}v_p\right) + L_\psi\dot{\psi},$$
in which $L_v = [L_{v_1}^T, \dots, L_{v_N}^T]^T \in \mathbb{R}^{2N \times 3}$ and $L_\psi = [L_{\psi_1}^T, \dots, L_{\psi_N}^T]^T \in \mathbb{R}^{2N}$.
The finite time difference of image features, v p ˙ , can be computed by directly measuring the optic flow of the visual features in images. Denoting by Δ v the relative quadrotor-to-target velocity expressed in the inertial frame, we have
$$\Delta v = R_\psi L_v^{+}\left({}^{v}\dot{p} - L_\psi\dot{\psi}\right),$$
in which $L_v^{+} \in \mathbb{R}^{3 \times 2N}$ is the Moore–Penrose pseudo-inverse of the matrix $L_v$ and $R_\psi$ is the rotation matrix corresponding to the yaw angle, related by $R = R_\psi R_{\theta\varphi}$.
The relative translational velocity $\Delta v$ (expressed in the inertial frame) and the pixel velocities ${}^{v}\dot{p}$ are related by the interaction matrices $L_{v_k}$ and $L_{\psi_k}$. At least two points on the target are required to determine the three components of the vector $\Delta v$. Because the image features are reprojected onto the virtual image plane, the individual depth of each feature point is not required, which is an advantage over traditional visual odometry.
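The Partial Velocity Evaluation step of Equations (14)–(16) can be sketched as follows. The signs of the interaction matrix entries follow the standard IBVS convention, and all features are assumed to share the depth measured by the ultrasonic sensor; this should be read as an illustration under those assumptions, not as the authors' implementation.

```python
import numpy as np

def relative_velocity(virt_pts, flow, vz, f, psi, psi_dot):
    """Relative quadrotor-to-target velocity from optic flow and yaw rate (Eq. (16)).

    virt_pts: (N, 2) features on the virtual image plane
    flow:     (N, 2) measured pixel velocities of those features
    """
    virt_pts = np.asarray(virt_pts, dtype=float)
    flow = np.asarray(flow, dtype=float)
    L_v, L_psi = [], []
    for vu, vn in virt_pts:
        # all features are assumed to lie at the depth vz measured by the ultrasonic sensor
        L_v.append([-f / vz, 0.0, vu / vz])
        L_v.append([0.0, -f / vz, vn / vz])
        L_psi.extend([vn, -vu])
    L_v = np.asarray(L_v)                                   # (2N, 3) interaction matrix
    L_psi = np.asarray(L_psi)                               # (2N,)
    rel_v_virt = np.linalg.pinv(L_v) @ (flow.reshape(-1) - L_psi * psi_dot)
    R_psi = np.array([[np.cos(psi), -np.sin(psi), 0.0],
                      [np.sin(psi),  np.cos(psi), 0.0],
                      [0.0,          0.0,         1.0]])
    return R_psi @ rel_v_virt                               # expressed in the inertial frame
```

Consistent with the remark above, at least two non-degenerate features are needed for the pseudo-inverse to be well conditioned.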

4. Gain-Switching PD Controller Design

The control objective in this work is to drive a quadrotor tracking an uncooperative target at an appropriate height for optimal observation. The quadrotor, as an underactuated system, has six degrees of freedom with only four control inputs. Therefore, the control of a quadrotor is typically divided into an outer position loop and an inner attitude loop. The outer loop provides the reference attitude signal to the inner loop, while the inner loop tracks the orientation reference of the vehicle. A block diagram of the UAV control loop structure is shown in Figure 4. The objective is equivalent to designing a control input for translational motion based on visual feedback, such that the controlled underactuated system with model uncertainties can guarantee $\Delta\xi \to 0$ and $\Delta v \to 0$, then designing a control input for rotational motion with the knowledge of reference Euler angles derived from the outer loop.
PD controllers are commonly used in UAV control. Given the uncertainties in the dynamics of the vehicle, a traditional PD controller can produce large steady-state errors or even affect the overall stability of the system. To improve the robustness of the PD controller, we introduce a gain-switching term that handles the uncertainties while avoiding excessively large gains.

4.1. Control of the Translational Motion

The translational dynamics (1) can be rewritten, in matrix form, as
$$\begin{bmatrix}\ddot{x}\\ \ddot{y}\\ \ddot{z}\end{bmatrix} = \frac{u_1}{m}\begin{bmatrix}\cos\varphi\sin\theta\cos\psi + \sin\varphi\sin\psi\\ \cos\varphi\sin\theta\sin\psi - \sin\varphi\cos\psi\\ \cos\varphi\cos\theta\end{bmatrix} + \begin{bmatrix}0\\ 0\\ g\end{bmatrix} + \begin{bmatrix}f_{dx}\\ f_{dy}\\ f_{dz}\end{bmatrix}.$$
The translational motion of the quadrotor is actually accomplished by both the thrust and orientation of the body. Let us define a set of virtual control inputs for the translational motion,
$$u_x = u_1\left(\cos\varphi\sin\theta\cos\psi + \sin\varphi\sin\psi\right), \qquad u_y = u_1\left(\cos\varphi\sin\theta\sin\psi - \sin\varphi\cos\psi\right), \qquad u_z = u_1\cos\varphi\cos\theta.$$
From Equations (17) and (18), we have
$$m\dot{v} = u + F_g + F_d.$$
The control law for the translational motion is proposed as
$$u = u_R + u_P + u_D + u_\varepsilon = -\bar{F}_g + \left(-K_P\,\bar{m}\,\Delta\xi\right) + \left(-K_D\,\bar{m}\,\Delta v\right) + \left(-K_\varepsilon\,\bar{m}\,n(\varepsilon, s)\right),$$
where quantities with overbar symbols indicate that a priori estimates are used, which may deviate from the true value; Δ ξ and Δ v are obtained by visual measurement, as detailed in Section 3; F ¯ g is the model compensation term designed to eliminate nonlinear elements of the dynamics; and K P , K D , and K ε represent the proportional, differential, and gain-switching coefficient matrices, respectively. The switching function n ( ε , s ) can be any piecewise-continuous function with the following properties,
$$s^{T} n(\varepsilon, s) = \|n(\varepsilon, s)\|\,\|s\|, \qquad \|n(\varepsilon, s)\| \ge 1 - \frac{\varepsilon}{\|s\|}, \quad \forall\, s \ne 0.$$
In this paper, the switching function is chosen as
$$n(\varepsilon, s) = \begin{cases} \dfrac{s}{\varepsilon}, & \|s\| < \varepsilon \\[6pt] \dfrac{s}{\|s\|}, & \|s\| \ge \varepsilon, \end{cases}$$
where ε is the gain switching threshold and s is the error feedback term given by
$$s = k_P\,\Delta\xi + k_D\,\Delta v.$$
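A compact sketch of the translational control law (20)–(23) is given below. The gain matrices are passed in (the values used in this paper are listed in Table 1), and the sign of the gravity-compensation term is written so as to reproduce the error dynamics (24); this is an interpretation of Equation (20) rather than a verbatim transcription.

```python
import numpy as np

def switching_function(s, eps):
    """n(eps, s): linear inside the threshold, unit-norm outside (Eq. (22))."""
    norm_s = np.linalg.norm(s)
    return s / eps if norm_s < eps else s / norm_s

def translational_control(d_xi, d_v, m_bar, K_P, K_D, K_eps, k_P, k_D, eps, g=9.81):
    """Virtual control u = [u_x, u_y, u_z]^T from the visual error states (Eq. (20))."""
    s = k_P * d_xi + k_D * d_v                        # error feedback term, Eq. (23)
    F_g_bar = np.array([0.0, 0.0, m_bar * g])         # nominal gravity (z-axis pointing down)
    n = switching_function(s, eps)
    return -F_g_bar - m_bar * (K_P @ d_xi + K_D @ d_v + K_eps @ n)
```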
Substituting Equation (20) into (19) yields the error dynamics for the translational motion:
$$m\,\Delta\dot{v} = -K_P\, m\,\Delta\xi - K_D\, m\,\Delta v - K_\varepsilon\, n + h,$$
where the disturbance function, denoted by h , can be written as:
$$h = -K_P\,\delta m\,\Delta\xi - K_D\,\delta m\,\Delta v - \delta F_g - m\dot{v}_t + F_d,$$
where $\delta m = \bar{m} - m$ and $\delta F_g = \bar{F}_g - F_g$ denote the deviations between the estimated values and the true values, and $\dot{v}_t$ is the target acceleration, which is assumed to be unknown but bounded.
Remark 1.
The disturbance function h represents all sources of uncertainty in the translational error dynamics, including the model uncertainty resulting from the inaccurately measured mass of the quadrotor ($-K_P\,\delta m\,\Delta\xi$ and $-K_D\,\delta m\,\Delta v$), the gravitational error $\delta F_g$, the unknown target motion $m\dot{v}_t$, and the unstructured forces $F_d$. Under Assumptions 2 and 3, the disturbance function h is bounded.
Now, the properties of the gain-switching coefficient will be addressed. The matrix K ε is symmetric and positive definite, and must satisfy the following conditions,
$$\frac{\|K_\varepsilon\, n\|}{\bar{m} + c_m} \ge k_\varepsilon \|n\|, \qquad k_\varepsilon \ge \frac{\|h\|}{\bar{m} - c_m},$$
where k ε is a positive bounding scalar designated to ensure that the term u ε provides greater acceleration than that resulting from the disturbances in h .
The proportional and differential coefficient matrices K P and K D are selected to satisfy the following constraints with k P and k D ,
$$\frac{\|K_P\,\bar{m}\,\Delta\xi\|}{\bar{m} + c_m} \ge k_P \|\Delta\xi\|, \qquad \frac{\|K_D\,\bar{m}\,\Delta v\|}{\bar{m} + c_m} \ge k_D \|\Delta v\|,$$
in which k P and k D are feedback gains designated as the following functions of the desired rate of convergence, α :
$$k_P = 2\alpha^2, \qquad k_D = 2\alpha, \qquad \alpha > 0.$$
These equations guarantee that the norm of K P and K D is large enough, such that the commanded force specified by u P and u D delivers, at a minimum, the acceleration specified by the vector u ε .
Remark 2.
Compared with traditional PD control, the proposed PD control consists of not only the proportional and differential terms u P and u D , which can eliminate the position and velocity errors, but also a gain-switching term u ε , which acts as a robustifying term based on the switching function n ( ε , s ) . By using Equation (26), appropriate robust control parameters can be selected to restrain the uncertainties h . In addition, the transient behavior of the system can be characterized analytically, based on the desired rate of convergence in feedback gains (28).
Referring to Equation (24) and defining the system state $e_t = [\Delta\xi^T, \Delta v^T]^T$, the tracking dynamics model for translational motion can be written as
$$\begin{bmatrix}\Delta\dot{\xi}\\ \Delta\dot{v}\end{bmatrix} = \begin{bmatrix}\Delta v\\ \dfrac{1}{m}\left(-K_P\, m\,\Delta\xi - K_D\, m\,\Delta v - K_\varepsilon\, n(\varepsilon, s) + h\right)\end{bmatrix}.$$
Lemma 1.
Consider a system x ˙ ( t ) = f ( t , x ( t ) ) . If there exist a continuously differentiable function V ( x ) and scalars c 1 , c 2 > 0 , which satisfy [38]
$$(\mathrm{i})\ \ c_1\|x\|^2 \le V(x) \le c_2\|x\|^2, \qquad (\mathrm{ii})\ \ \dot{V}(x) \le -2\alpha\left[V(x) - V^*\right] \ \ \forall\, x \in \{x \mid V(x) > V^*\},\ V^* > 0,$$
then the system is (globally and uniformly) exponentially convergent to B ( q ) with rate α, where
$$B(q) \triangleq \left\{x \in \mathbb{R}^n : \|x\| \le q\right\}, \qquad q \triangleq \left(\frac{V^*}{c_1}\right)^{1/2}.$$
Theorem 1.
Consider the closed-loop system for the translational motion in Equation (29) with the controller designed by Equation (20). If the corresponding parameters are assigned as in Equations (26)–(28), then the state $e_t$ will exponentially converge to zero.
Proof. 
The Lyapunov function candidate is designated as
$$V(e_t) = \frac{1}{2}\, e_t^{T} A\, e_t,$$
where
$$A = \begin{bmatrix} k_L I & k_P I \\ k_P I & k_D I \end{bmatrix}$$
and I is a unit matrix. The constant parameter k L is also a function of α :
$$k_L = 4\alpha^3.$$
By using the values k P , k D , and k L in Equations (28) and (32), the matrix A is positive definite. Thus, V ( e t ) is a positive definite function satisfying
$$\lambda_{\min}\|e_t\|^2 \le V(e_t) \le \lambda_{\max}\|e_t\|^2.$$
Condition ( i ) in Lemma 1 is satisfied with c 1 = λ min and c 2 = λ max , where λ min and λ max are the smallest and largest eigenvalues of A .
Taking the time derivative of V ( e t ) and substituting Equation (29) into it, we have
$$\dot{V}(e_t) = k_L\,\Delta\xi^{T}\Delta v + k_P\,\Delta v^{T}\Delta v - k_P\left[\Delta\xi^{T} K_P\,\Delta\xi + \Delta\xi^{T} K_D\,\Delta v\right] - k_D\left[\Delta v^{T} K_D\,\Delta v + \Delta v^{T} K_P\,\Delta\xi\right] + \frac{1}{m}\, s^{T}\left(h - K_\varepsilon\, n(\varepsilon, s)\right).$$
Using the parameter constraints in Equations (26) and (27), it follows that
$$\dot{V}(e_t) \le -k_P^2\,\Delta\xi^{T}\Delta\xi + \left(k_P - k_D^2\right)\Delta v^{T}\Delta v + \left(k_L - 2 k_P k_D\right)\Delta\xi^{T}\Delta v + s^{T}\left[\frac{1}{m}\, h - k_\varepsilon\, n(\varepsilon, s)\right].$$
Substituting Equations (28) and (32) into (35) leads to the following,
$$\dot{V}(e_t) \le -2\alpha V(e_t) + E,$$
where
$$E = s^{T}\left[\frac{1}{m}\, h - k_\varepsilon\, n(\varepsilon, s)\right].$$
By using the properties of n in (21), Equation (37) yields the following expression,
$$E \le \|s\|\,\|h^*\| - k_\varepsilon\,\|n\|\,\|s\|,$$
where $h^* = h/m$.
Equation (22) requires that $n = s/\varepsilon$ for $\|s\| < \varepsilon$. Moreover, recalling (26), which states that $k_\varepsilon \ge \|h^*\|$, Equation (38) shows, in this case, that
$$E \le k_\varepsilon \cdot \Delta,$$
where $\Delta = \left(\varepsilon^2 - \|s\|^2\right)/\varepsilon > 0$.
When $\|s\| \ge \varepsilon$ and $n = s/\|s\|$, it follows that
$$E \le 0.$$
Combining Equations (39) and (40), we can obtain a global upper bound
$$E \le \bar{E} = k_\varepsilon \cdot \Delta.$$
Substituting the bound of Equation (41) into the expression for $\dot{V}(e_t)$ in Equation (36) leads to
$$\dot{V}(e_t) \le -2\alpha\left[V(e_t) - V^*\right],$$
in which $V^* = \bar{E}/(2\alpha)$.
Inequality (42) ensures that the state of the system e t can exponentially converge to a small ball around the origin defined by V ( e t ) < V * . Thus, the condition ( i i ) of Lemma 1 is satisfied. □
It is worth noting that the control law given by Equation (20) yields virtual control inputs for translational motion. Using Equation (18), we can compute the command thrust u 1 , as well as the desired roll and pitch angles for the attitude controller:
$$u_1 = \sqrt{u_x^2 + u_y^2 + u_z^2}, \qquad \varphi_d = \arcsin\frac{u_x\sin\psi_d - u_y\cos\psi_d}{u_1}, \qquad \theta_d = \arctan\frac{u_x\cos\psi_d + u_y\sin\psi_d}{u_z},$$
where ψ d is the reference value of yaw. As the yaw angle is independent of the outer loop, we can prescribe it as its initial value or another constant.
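A short sketch of Equation (43), recovering the thrust command and the reference roll and pitch angles from the virtual control vector, is given below; arctan2 is used in place of arctan for numerical robustness, which is an implementation choice rather than something stated in the paper.

```python
import numpy as np

def attitude_reference(u, psi_d):
    """Thrust u1 and desired roll/pitch from the virtual control u = [u_x, u_y, u_z] (Eq. (43))."""
    u_x, u_y, u_z = u
    u1 = np.linalg.norm(u)
    phi_d = np.arcsin((u_x * np.sin(psi_d) - u_y * np.cos(psi_d)) / u1)
    theta_d = np.arctan2(u_x * np.cos(psi_d) + u_y * np.sin(psi_d), u_z)
    return u1, phi_d, theta_d
```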

4.2. Control of the Rotational Motion

In the inner attitude loop, a similar gain-switching PD control law is presented:
$$\tau = \tau_R + \tau_P + \tau_D + \tau_\varepsilon = \omega \times \bar{J}\omega + \left(-K_P\,\bar{J}\,\Delta\Phi\right) + \left(-K_D\,\bar{J}\,\Delta\omega\right) + \left(-K_\varepsilon\,\bar{J}\,n(\varepsilon, s)\right),$$
where $\Delta\Phi = \Phi - \Phi_d$ and $\Delta\omega = \omega - \omega_d$ denote the quadrotor attitude error and the equivalent rate error, respectively, and $\Phi_d = [\varphi_d, \theta_d, \psi_d]^T$ are the desired Euler angles output from the outer loop. The desired angular velocity $\omega_d$ is taken to be zero.
The switching function n ( ε , s ) is chosen the same as before, and the error feedback term s is given by
$$s = k_P\,\Delta\Phi + k_D\,\Delta\omega.$$
Referring to Equation (2) and defining the system state $e_r = [\Delta\Phi^T, \Delta\omega^T]^T$, the tracking dynamics model for rotational motion, under Assumption 1, can be written as
$$\begin{bmatrix}\Delta\dot{\Phi}\\ \Delta\dot{\omega}\end{bmatrix} = \begin{bmatrix}\Delta\omega\\ J^{-1}\left(-K_P\, J\,\Delta\Phi - K_D\, J\,\Delta\omega - K_\varepsilon\, J\, n(\varepsilon, s) + h\right)\end{bmatrix},$$
where the disturbance function h is given by
$$h = -K_P\,\delta J\,\Delta\Phi - K_D\,\delta J\,\Delta\omega + \left(\omega \times \bar{J}\omega - \omega \times J\omega\right) - J\dot{\omega}_t + \tau_d.$$
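The inner-loop law (44)–(45) mirrors the translational one and can be sketched analogously (J_bar is the nominal inertia matrix and the gains correspond to the rotational column of Table 1; the switching function is repeated so that the snippet is self-contained).

```python
import numpy as np

def switching_function(s, eps):
    """n(eps, s): linear inside the threshold, unit-norm outside (Eq. (22))."""
    norm_s = np.linalg.norm(s)
    return s / eps if norm_s < eps else s / norm_s

def rotational_control(d_Phi, d_omega, omega, J_bar, K_P, K_D, K_eps, k_P, k_D, eps):
    """Torque command tau from the attitude error d_Phi and rate error d_omega (Eq. (44))."""
    s = k_P * d_Phi + k_D * d_omega                   # error feedback term, Eq. (45)
    n = switching_function(s, eps)
    gyro_comp = np.cross(omega, J_bar @ omega)        # feedforward of omega x (J_bar omega)
    return gyro_comp - K_P @ (J_bar @ d_Phi) - K_D @ (J_bar @ d_omega) - K_eps @ (J_bar @ n)
```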
Theorem 2.
Consider the closed-loop system for the rotational motion in Equation (46) with the controller designed by Equation (44). If the corresponding parameters are assigned by
$$\frac{\|K_P\,\bar{J}\,\Delta\Phi\|}{\lambda_{\max}(J)} \ge k_P\|\Delta\Phi\|, \qquad \frac{\|K_D\,\bar{J}\,\Delta\omega\|}{\lambda_{\max}(J)} \ge k_D\|\Delta\omega\|,$$
$$\frac{\|K_\varepsilon\|}{\lambda_{\max}(J)} \ge k_\varepsilon\|n(\varepsilon, s)\|, \qquad k_\varepsilon \ge \frac{\|h\|}{\lambda_{\min}(J)},$$
$$k_P = 2\alpha^2, \qquad k_D = 2\alpha,$$
then, the state e r will exponentially converge to zero.
Proof. 
The stability analysis of the system for rotational motion is similar to that for translational motion, so it is not described in detail here. □

5. Simulation Results

In this section, MATLAB simulations are presented to validate the performance of the proposed vision-based control scheme. We considered two scenarios in this work: hovering above a stationary target and tracking a moving target. In both scenarios, the target to be observed was assumed to be uncooperative and without detailed geometry information. The physical parameters of the simulated quadrotor were m = 2.1 kg and J = diag { 0.0096 , 0.0098 , 0.016 } kg · m 2 / rad . The nominal part of the quadrotor’s mass and moment of inertia were m ¯ = 2.1 kg and J ¯ = diag { 0.0081 , 0.0081 , 0.0142 } kg · m 2 / rad . The focal length divided by pixel size of the camera was set as 213, and the image resolution was 160 × 120 pixels with principal point located at [ 80 , 60 ] . Considering that the quadrotor may be affected by wind disturbances in the environment, we applied some sinusoidal (cosinusoidal) forces and torques to the vehicle with the following values,
$$F_d = 0.5\cdot\left[\sin(2\pi t),\ \cos(\pi t),\ \sin(2\pi t)\right]^{T}, \qquad \tau_d = 0.01\cdot\left[\sin(2\pi t),\ \cos(\pi t),\ \sin(2\pi t)\right]^{T}.$$

5.1. Scenario 1: Hovering Observation

In some cases, such as traffic and fire accidents, we usually drive a UAV equipped with optical sensors to approach the scene of the accident and hover over the damaged vehicle or burning object. To provide distinct images and effective air support for the subsequent rescue, it is required to adjust the height of the UAV, keeping the designated target within the optimal area of the camera’s field of view. The proposed vision-based control scheme in this work is applicable to the above missions.
In the simulation, the target was set as a rectangular object. Its visual features included its four vertexes, located at [±0.2, ±0.25, 0] m and expressed in the inertial frame. The quadrotor initial position and attitude were [1, 0.8, 4] m and [0, 0, 0.2] rad, respectively. For the optimal observation of the target, we defined a reference circle of radius 40 pixels on the virtual image plane, and the radius scaling factor σ was designated to be 1. The corresponding desired height was 1.7048 m, which can be computed using Equation (12), as verified below. The control gains used in the simulation are listed in Table 1.
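As a consistency check of the desired height: the farthest vertex of the rectangular target lies $\sqrt{0.2^2 + 0.25^2} \approx 0.320$ m from its centroid, so with the focal-length-to-pixel-size ratio $f = 213$ and $r_{opt} = 40$ pixels, Equation (12) gives

$$z_d = \frac{f\sqrt{0.2^2 + 0.25^2}}{\sigma\, r_{opt}} = \frac{213 \times 0.3202}{1 \times 40} \approx 1.7048\ \mathrm{m}.$$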
The transient response of the system mainly depends on the feedback gains $k_P$ and $k_D$ of the translational and rotational loops, which are determined by the constraint Equations (28) and (50). Based on trial and error, the desired rates of convergence for translational and rotational motion were selected as $\sqrt{2}/2$ and 2, respectively. Then, the proportional and differential matrices $K_P$ and $K_D$ of the two loops could be determined by using Equations (27) and (48). To guarantee the robustness of the system in the presence of wind and modelling errors, the control gains in the gain-switching terms, $k_\varepsilon$ and $K_\varepsilon$, must be selected to be larger than the effect resulting from the unstructured disturbances $h$ of each loop, based on Equations (26) and (49). The gain-switching thresholds $\varepsilon$ of the two loops were tuned repeatedly to restrain the chattering phenomenon of the control efforts.
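For reference, substituting these convergence rates into Equations (28) and (50) yields the proportional and derivative feedback gains of the two loops:

$$\alpha = \frac{\sqrt{2}}{2}:\quad k_P = 2\alpha^2 = 1,\quad k_D = 2\alpha = \sqrt{2}; \qquad \alpha = 2:\quad k_P = 2\alpha^2 = 8,\quad k_D = 2\alpha = 4.$$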
The simulation results are illustrated in Figure 5, Figure 6 and Figure 7. The translational motion of the quadrotor is shown in Figure 5. The horizontal position errors converged to less than 0.05 m, with a transient response time of about 5 s. The error in the vertical direction was relatively large, about 0.12 m, mainly resulting from the uncertainty of the quadrotor mass. Figure 6 describes the rotational motion of the quadrotor. The Euler angles were kept within a small range, satisfying Assumption 1. The properties of the target centroid in image space are illustrated in Figure 7. It is shown that the vehicle could rapidly and smoothly approach the target, and the control accuracy of the target centroid on the image plane was kept within 5 pixels. The proposed system, therefore, is satisfactory for continuous hovering observation of the target.
For comparison, we ran a simulation of traditional PD control using the same initial conditions and gains. The results are reported in Figure 8 and Figure 9, and indicate that the proposed gain-switching PD controller achieved better steady-state performance than the traditional PD controller, with lower oscillation and a smaller error bound of the target centroid in image space. Further, we relocated the quadrotor at two different initial positions to compare the steady-state behavior of the gain-switching PD controller with that of the traditional PD controller. The mean, amplitude, and accuracy (3σ) of the position error are listed in Table 2. The results show that the proposed controller was effective under different initial conditions. Compared with the traditional PD controller, the performance of the proposed controller was greatly improved, with the position-control accuracy increasing by a factor of 4–5.
To evaluate the imaging effects of the target and the corresponding flying height of the quadrotor under different observation requirements, we ran another simulation ignoring the external disturbance and model uncertainties. In the simulation, three criteria were set: flying at a constant height (i.e., Δ z = 0 ), radius scaling factor σ = 1 , and σ = 0.8 . As plotted in Figure 10, the results showed that introducing a reference circle on the virtual image plane and adjusting the radius scaling factor can regulate the imaging effect, according to different observation requirements. To illustrate that the proposed strategy is applicable to different kinds of targets, we ran another simulation to hover above an irregularly shaped target. The Cartesian coordinates of the feature points that enclose the target were: [ 0.25 , 0.05 , 0 ] m, [ 0.10 , 0.20 , 0 ] m, [ 0.20 , 0.15 , 0 ] m, [ 0.25 , 0.10 , 0 ] m, [ 0.10 , 0.20 , 0 ] m, [ 0.15 , 0.15 , 0 ] m. The imaging effect under different observation requirements was shown in Figure 11. Similar to the rectangular target, the proposed control strategy can meet the hovering and observation needs of the target.

5.2. Scenario 2: Tracking a Moving Target

We also considered some more challenging cases, such as when the target vehicle is moving or when a fire area changes in real time. In these cases, the UAV is required not only to position itself over the target, but also to follow its (unknown) trajectory. Compared with the traditional PD control, the robust controller proposed in this paper is more competent in this challenging mission. In this simulation, the target had the same geometric characteristics as in Scenario 1 but followed three different trajectories on flat ground: circular movement, S-type movement, and linear movement. The quadrotor initial position was set to [1, 1.5, 3] m. To evaluate the performance of the proposed controllers in a more realistic environment, we added noise to the measurement information: white noise with covariances of 0.5 and $10^{-4}$ was added to the visual data (image features and their pixel velocities) and to the angular rates, respectively.
The tracking performance for the circular, S-type, and linear movements is illustrated in Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17. The trajectories of the quadrotor and the target in a 3D environment are plotted in Figure 12, Figure 14 and Figure 16, and show satisfactory tracking performance in spite of the presence of disturbance and noise. Meanwhile, Figure 13, Figure 15 and Figure 17 depict the position tracking error of the two controllers for the different target movements. The results indicate that the proposed control strategy applies to different target maneuvers and shows better robustness and higher tracking accuracy than the traditional PD controller. The only exception was some fluctuation during the linear tracking (Figure 16 and Figure 17), which was caused by the vehicle's reaction time when the target suddenly changed direction.

6. Conclusions

In this paper, we have developed a vision-based control scheme of a quadrotor for target tracking in the absence of location information and geometric features of the target. After transforming the image features to a virtual camera frame, optical-based metrology is exploited to estimate the relative distance and velocity. At the same time, the height of the quadrotor and image size can be adjusted by regulating the optimal observation area and radius scaling factor. Considering the presence of external interference and model uncertainties, we presented a gain-switching proportional-derivative (PD) control strategy to improve the robustness of the system. Two case studies, corresponding to hovering and tracking scenarios, are presented in this work. The simulation results indicated that the proposed vision-based scheme performed better in both hovering and tracking missions, compared with the traditional PD control.
In future work, we are going to add a field of view constraint to the system, as the proposed algorithm cannot guarantee that all visual features are always kept inside the field of view of the camera. We also plan to implement the proposed control scheme on a real quadrotor.

Author Contributions

S.Z. provided the main idea and supervised the whole process. X.Z. performed the simulation and finished the draft manuscript. B.Z. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors wish to thank the support from Lei Gao (Lunar Exploration and Space Engineering Center, Beijing), who gave us valuable suggestions from the technical and engineering perspective.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liang, X.; Fang, Y.; Sun, N.; Lin, H. Nonlinear hierarchical control for unmanned quadrotor transportation systems. Aerosp. Sci. Technol. 2018, 65, 3395–3405. [Google Scholar] [CrossRef]
  2. Kim, D.; Liu, M.Y.; Lee, S.; Kamat, V.R. Remote proximity monitoring between mobile construction resources using camera-mounted UAVs. Autom. Constr. 2019, 99, 168–182. [Google Scholar] [CrossRef]
  3. Yao, P.; Xie, Z.; Ren, P. Optimal UAV route planning for coverage search of stationary target in river. IEEE Trans. Control Syst. Technol. 2019, 27, 822–829. [Google Scholar] [CrossRef]
  4. Kim, I.H.; Jeon, H.; Baek, S.C.; Hong, W.H.; Jung, H.J. Application of crack identification techniques for an aging concrete bridge inspection using an unmanned aerial vehicle. Sensors 2018, 18, 1881. [Google Scholar] [CrossRef] [Green Version]
  5. Zhu, W.W.; Du, H.B.; Cheng, Y.Y.; Chu, Z.B. Hovering control for quadrotor aircraft based on finite-time control algorithm. Nonlinear Dyn. 2017, 88, 2359–2369. [Google Scholar] [CrossRef]
  6. Dong, J.; He, B. Novel fuzzy PID-type iterative learning control for quadrotor UAV. Sensors 2019, 19, 24. [Google Scholar] [CrossRef] [Green Version]
  7. Wai, R.J.; Prasetia, A.S. Adaptive neural network control and optimal path planning of UAV surveillance system with energy consumption prediction. IEEE Access 2019, 7, 126137–126153. [Google Scholar] [CrossRef]
  8. Jin, X.B.; Sun, S.; Wei, H.; Yang, F.B. Advances in Multi-Sensor Information Fusion: Theory and Applications 2017. Sensors 2018, 18, 1162. [Google Scholar] [CrossRef] [Green Version]
  9. Mahony, R.; Kumar, V.; Corke, P. Multirotor aerial vehicles: Modeling, estimation, and control of quadrotor. IEEE Robot. Autom. Mag. 2012, 19, 20–32. [Google Scholar] [CrossRef]
  10. Agrawal, P.; Ratnoo, A.; Ghose, D. Inverse optical flow based guidance for UAV navigation through urban canyons. Aerosp. Sci. Technol. 2017, 68, 163–178. [Google Scholar] [CrossRef]
  11. Hrabar, S.; Sukhatme, G.S.; Corke, P.; Usher, K.; Roberts, J. Combined optic-flow and stereo-based navigation of urban canyons for a UAV. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 3309–3316. [Google Scholar]
  12. Chaumette, F.; Hutchinson, S. Visual servo control, part I: Basic approaches. IEEE Robot. Autom. Mag. 2006, 13, 82–90. [Google Scholar] [CrossRef]
  13. Zhao, B.; Tang, Y.; Wu, C.; Du, W. Vision-based tracking control of quadrotor with backstepping sliding mode control. IEEE Access 2018, 6, 72439–72448. [Google Scholar] [CrossRef]
  14. Oh, H.; Won, D.Y.; Huh, S.S.; Shim, D.H.; Tahk, M.J.; Tsourdos, A. Indoor UAV control using multi-camera visual feedback. J. Intell. Robot. Syst. 2011, 61, 57–84. [Google Scholar] [CrossRef] [Green Version]
  15. Shirzadeh, M.; Asl, H.J.; Amirkhani, A.; Jalali, A.A. Vision-based control of a quadrotor utilizing artificial neural networks for tracking of moving targets. Eng. Appl. Artif. Intell. 2017, 58, 34–48. [Google Scholar] [CrossRef]
  16. Herisse, B.; Hamel, T.; Mahony, R.; Russotto, F.X. Landing a VTOL unmanned aerial vehicle on a moving platform using optical flow. IEEE Trans. Robot. 2012, 28, 77–89. [Google Scholar] [CrossRef]
  17. Serra, P.; Cunha, R.; Hamel, T.; Cabecinhas, D.; Silvestre, C. Landing of a Quadrotor on a Moving Target Using Dynamic Image-Based Visual Servo Control. IEEE Trans. Robot. 2016, 32, 1524–1535. [Google Scholar] [CrossRef]
  18. Benhimane, S.; Malis, E. Homography-based 2D visual tracking and servoing. IEEE Trans. Robot. 2007, 26, 661–676. [Google Scholar] [CrossRef]
  19. De Plinval, H.; Morin, P.; Mouyon, P.; Hamel, T. Visual servoing for underactuated VTOL UAVs: A linear, homography-based framework. Int. J. Robot. Res. 2014, 24, 2285–2308. [Google Scholar] [CrossRef] [Green Version]
  20. Ozawa, R.; Chaumette, F. Dynamic visual servoing with image moments for a quadrotor using a virtual spring approach. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 5670–5676. [Google Scholar]
  21. Ozawa, R.; Chaumette, F. Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach. Adv. Robot. 2013, 27, 683–696. [Google Scholar] [CrossRef] [Green Version]
  22. Asl, H.J.; Oriolo, G.; Bolandi, H. An adaptive scheme for image-based visual servoing of an underactuated UAV. Int. J. Robot. Autom. 2014, 29, 92–104. [Google Scholar] [CrossRef] [Green Version]
  23. Li, J.A.; Xie, H.; Ma, R.; Low, K.H. Output feedback image-based visual servoing of rotorcrafts. J. Intell. Robot. Syst. 2019, 93, 277–287. [Google Scholar] [CrossRef] [Green Version]
  24. Guo, D.; Wang, H.; Leang, K.K. Nonlinear vision-based observer for visual servo control of an aerial robot in global positioning system denied environments. J. Mech. Robot. 2018, 10, 061018. [Google Scholar] [CrossRef]
  25. Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations. Prog. Aeosp. Sci. 2017, 93, 53–72. [Google Scholar] [CrossRef]
  26. Munguía, R.; Urzua, S.; Bolea, Y.; Grau, A. Vision-based SLAM system for unmanned aerial vehicles. Sensors 2016, 16, 372. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Zhang, X.; Xian, B.; Zhao, B.; Zhang, Y. Autonomous flight control of a nano quadrotor helicopter in a gps-denied environment using on-board vision. IEEE Trans. Ind. Electron. 2015, 62, 6392–6403. [Google Scholar] [CrossRef]
  28. Fu, C.H.; Olivares-Mendez, M.A.; Suarez-Fernandez, R.; Campoy, P. Monocular visual-inertial SLAM-based collision avoidance strategy for fail-safe UAV using fuzzy logic controllers. J. Intell. Robot. Syst. 2014, 73, 513–533. [Google Scholar] [CrossRef] [Green Version]
  29. Sabatini, M.; Palmerini, G.B.; Monti, R.; Gasbarri, P. Image based control of the “PINOCCHIO” experimental free flying platform. Acta Astronaut. 2014, 94, 480–492. [Google Scholar] [CrossRef]
  30. Chen, Z.Y.; Chen, W.C.; Liu, X.M.; Song, C. Fault-tolerant optical flow sensor/SINS integrated navigation scheme for MAV in a GPS-denied environment. J. Sens. 2018, 2018, 9678505. [Google Scholar] [CrossRef] [Green Version]
  31. Cheng, H.W.; Chen, T.L.; Tien, C.H. Motion estimation by hybrid optical flow technology for UAV landing in an unvisited area. Sensors 2019, 19, 1380. [Google Scholar] [CrossRef] [Green Version]
  32. Moreno-Valenzuela, J.; Perez-Alcocer, R.; Guerrero-Medina, M.; Dzul, A. Nonlinear PID-type controller for quadrotor trajectory tracking. IEEE-ASME Trans. Mechatron. 2018, 23, 2436–2447. [Google Scholar] [CrossRef]
  33. Goodarzi, F.; Lee, D.; Lee, T. Geometric nonlinear PID control of a quadrotor UAV on SE (3). In Proceedings of the European Control Conference (ECC), Zurich, Switzerland, 17–19 July 2013; pp. 3845–3850. [Google Scholar]
  34. Huang, T.; Huang, D.; Wang, Z.; Shah, A. Robust tracking control of a quadrotor UAV based on adaptive sliding mode controller. Complexity 2019, 2019, 7931632. [Google Scholar] [CrossRef]
  35. Tan, J.; Fan, Y.; Yan, P.; Wang, C.; Feng, H. Sliding mode fault tolerant control for unmanned aerial vehicle with sensor and actuator faults. Sensors 2019, 19, 643. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008. [Google Scholar]
  37. Corke, P. Robotics, Vision and Control: Fundamental Algorithms in MATLAB, 2nd ed.; Springer: Berlin, Germany, 2017. [Google Scholar]
  38. Thurman, S.W.; Flashner, H. Robust digital autopilot design for spacecraft equipped with pulse-operated thrusters. J. Guid. Control Dyn. 1996, 19, 1047–1055. [Google Scholar] [CrossRef]
Figure 1. Quadrotor model and frame definitions.
Figure 2. Flowchart of the vision-based unmanned aerial vehicle (UAV) tracking strategy.
Figure 3. Camera frame C and virtual camera frame V with their corresponding image planes. The perspective projection of point P on the virtual image plane.
Figure 4. Block diagram of the UAV control system.
Figure 5. Scenario 1: Time evolution of the quadrotor Cartesian coordinates and velocity.
Figure 6. Scenario 1: Time evolution of the quadrotor attitude and angular velocity.
Figure 7. Scenario 1: (a) Target centroid trajectory on the image plane. (b) Target centroid deviation from the image center.
Figure 8. Scenario 1: Time evolution of the quadrotor Cartesian coordinates and attitude (proportional-derivative (PD) controller).
Figure 9. Scenario 1: (a) Target centroid trajectory on the image plane; and (b) Target centroid deviation from the image center (PD controller).
Figure 10. Scenario 1: Imaging effect of the rectangular target under different observation requirements.
Figure 11. Scenario 1: Imaging effect of the irregularly shaped target under different observation requirements.
Figure 12. Scenario 2: Circular trajectory of the quadrotor and the target in a 3D environment under the proposed controller.
Figure 13. Scenario 2: Time evolution of the position tracking error under the two controllers (circular movement).
Figure 14. Scenario 2: S-type trajectory of the quadrotor and the target in a 3D environment under the proposed controller.
Figure 15. Scenario 2: Time evolution of the position tracking error under the two controllers (S-type movement).
Figure 16. Scenario 2: Linear trajectory of the quadrotor and the target in a 3D environment under the proposed controller.
Figure 17. Scenario 2: Time evolution of the position tracking error under the two controllers (linear movement).
Table 1. Control gains in the simulation.

Translational loop               Rotational loop
k_P = 1                          k_P = 8
k_D = 1.44                       k_D = 4
k_ε = 0.3                        k_ε = 1
K_P = diag{1.2, 1.2, 1.3}        K_P = diag{10, 10, 10}
K_D = diag{1.5, 1.5, 1.6}        K_D = diag{5, 5, 5}
K_ε = m·diag{0.3, 0.3, 0.5}      K_ε = k_ε·J
ε = 0.1                          ε = 0.3
Table 2. The steady-state performance (position error) of the two methods.

                                  Traditional PD Control                 Gain-Switching PD Control
                                  Mean         Amplitude   Accuracy      Mean         Amplitude   Accuracy
Initial position [1, 0.8, 4] m
  x (m)                           8.9 × 10⁻⁴   0.1475      0.3209        3.7 × 10⁻⁴   0.0290      0.0628
  y (m)                           −0.0051      0.1360      0.2985        1.5 × 10⁻⁴   0.0345      0.0731
  z (m)                           −0.5211      0.0164      0.5495        −0.1323      0.0082      0.1484
Initial position [1, 2, 5] m
  x (m)                           9.0 × 10⁻⁴   0.1475      0.3209        3.7 × 10⁻⁴   0.0291      0.0628
  y (m)                           −0.0051      0.1360      0.2984        1.6 × 10⁻⁴   0.0345      0.0731
  z (m)                           −0.5212      0.0165      0.5495        −0.1324      0.0083      0.1485
Initial position [0.6, 0.8, 1.5] m
  x (m)                           8.8 × 10⁻⁴   0.1475      0.3208        3.7 × 10⁻⁴   0.0290      0.0627
  y (m)                           −0.0052      0.1360      0.2985        1.6 × 10⁻⁴   0.0345      0.0731
  z (m)                           −0.5211      0.0164      0.5494        −0.1322      0.0082      0.1483
