Article

Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller

by Carlos Lopez-Franco 1,2,*, Javier Gomez-Avila 1, Alma Y. Alanis 1, Nancy Arana-Daniel 1 and Carlos Villaseñor 1

1 Centro Universitario de Ciencias Exactas e Ingenierías, Universidad de Guadalajara, Blvd. Marcelino García Barragán 1421, Guadalajara C.P. 44430, Jalisco, Mexico
2 Avenida Revolución 1500 Modulo “R”, Colonia Universitaria, Guadalajara C.P. 44430, Jalisco, Mexico
* Author to whom correspondence should be addressed.
Sensors 2017, 17(8), 1865; https://doi.org/10.3390/s17081865
Submission received: 1 July 2017 / Revised: 4 August 2017 / Accepted: 10 August 2017 / Published: 12 August 2017

Abstract

In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, two major drawbacks arise when working with UAVs: high nonlinearities, and an unknown position in 3D space, since no on-board sensor can measure the vehicle's position with respect to a global coordinate frame. In this paper, we present a real-time implementation of a servo control that integrates vision sensors with a neural proportional integral derivative (PID) controller, yielding an image based visual servo (IBVS) control for a hexarotor that estimates the position of the robot and uses a velocity vector as the reference to control it. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms, and well-defined interfaces, as well as the design of different processing stages with their respective communication architecture; these issues, among others, are what make real-time implementations a difficult task. To show the effectiveness of the sensor integration and of the control algorithm on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the on-board computer of an Asctec Firefly, and both simulation and experimental results are included.

1. Introduction

The use of Unmanned Aerial Vehicles (UAVs) has increased over the last few decades. UAVs have shown satisfactory flight and navigation capabilities, which are very important in applications such as surveillance, mapping, and search and rescue. The ability to move freely in 3D space represents a great advantage over ground vehicles, especially when the robot must travel long distances or move in dangerous environments, as in search and rescue tasks. Commonly, UAVs have four rotors; however, having more than four gives them a higher lifting capacity. The hexarotor has some advantages over the highly popular quadrotor, such as increased load capacity, higher speed and improved safety, because the two extra rotors allow the UAV to land even if it loses one of its motors. However, the hexarotor is a highly nonlinear and underactuated system: it has fewer control inputs than degrees of freedom, and its Lagrangian dynamics contain feedforward nonlinearities; in other words, there are some acceleration directions that can only be produced by a combination of the actuators.
In contrast with ground vehicles, a UAV cannot use sensors such as encoders to estimate its position. A good alternative is to use visual information as a reference, given the large amount of information a camera provides relative to its low power consumption and weight. Since the position of a hexarotor cannot be known from common on-board sensors such as Inertial Measurement Units (IMUs), some works use off-board sensor systems [1,2,3,4,5]; however, this kind of control limits the application to indoor navigation and adds noise and delays because of the communication between the robot and the ground station.
For this reason, visual control of UAVs has been widely studied. Although stereo vision is extensively used in mapping applications [6,7], when applied to UAV navigation, as in [8,9], it requires 3D reconstruction or optical flow, which are computationally expensive algorithms. In this approach, monocular vision is used, and the feature position error in the image plane is related to the robot velocity vector that reduces this error [10,11,12,13,14,15]. Consequently, we can set the position of the robot based on camera information and control its navigation, and not only in indoor environments [16,17]. Classical Image Based Visual Servo (IBVS) control stabilizes attitude and position separately [18], which is not possible for underactuated systems. In [18], an image-based approach is used for an underactuated system, but the depth distance to the features is approximated.
In [19], a PID controller is implemented on a hexarotor and comparisons between quaternions and Euler angles are made. In [20], the authors propose a visual servoing algorithm combined with a proportional derivative (PD) controller. However, PID approaches are not effective on highly nonlinear systems with model uncertainties such as the hexarotor [21,22]; accordingly, another approach is required. In this paper, we propose the use of a Neural Network based PID. The advantage of using neural networks to control nonlinear systems is that the controller inherits the adaptability and learning capabilities of the neural network [23], making the system able to adapt to actuator faults such as loss of effectiveness, as described in [24], and overcoming disadvantages of the traditional PID [25] such as system uncertainties, communication time delays, parametric uncertainties, external disturbances, actuator saturation and unmodeled system dynamics, among others. If, to all of these issues, we add the complexity of integrating servo control algorithms with vision sensors and a neural PID in a real-time implementation, well-designed coordination between all of the elements is required, with different processing stages and their respective communication architectures (software and hardware).
The rest of the paper is structured as follows: Section 2 describes the robot and its dynamics. In Section 3, the visual servo control approach is introduced. Section 4 presents the relationship between the error signals from the visual algorithm and the control signals of the hexarotor. In Section 5, the design of the PID controller and the weight adjustment are shown. Section 6 and Section 7 present the simulation and experimental results of the proposed approach and its comparison with the conventional PID controller. Finally, the conclusions are given in Section 8.

2. Hexarotor Dynamic Modeling

The hexarotor consists of six arms connected symmetrically to the central hub. At the end of each arm, a propeller driven by a brushless Direct Current (DC) motor is attached. Each propeller produces an upward thrust and, since they are located outside the center of gravity, differential thrust is used to rotate the hexarotor. In addition, the rotation of the propellers also produces a torque in the opposite direction of the rotation of the motors; therefore, there must be two groups of rotors spinning in the opposite direction for the purpose of making this reaction torque equal to zero.
The pose of a hexarotor is given by its position $\zeta = [x, y, z]^T$ and its orientation $\eta = [\phi, \theta, \psi]^T$ in the three Euler angles roll, pitch and yaw, respectively. For the sake of simplicity, $\sin(\cdot)$ and $\cos(\cdot)$ will be abbreviated $s_\cdot$ and $c_\cdot$. The transformation from the world frame O to the body frame (Figure 1) is given by

$$
\begin{bmatrix} x_B \\ y_B \\ z_B \end{bmatrix} =
\begin{bmatrix}
c_\theta c_\psi & c_\theta s_\psi & -s_\theta \\
s_\phi s_\theta c_\psi - c_\phi s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & s_\phi c_\theta \\
c_\phi s_\theta c_\psi + s_\phi s_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi & c_\phi c_\theta
\end{bmatrix}
\begin{bmatrix} x_W \\ y_W \\ z_W \end{bmatrix}. \tag{1}
$$
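For illustration, the world-to-body transformation of (1) can be assembled numerically as follows (a minimal sketch in Python with NumPy; the function name and argument order are ours, not from the paper):

```python
import numpy as np

def world_to_body(phi, theta, psi):
    """Rotation matrix of Eq. (1): maps world-frame vectors into the
    body frame for roll phi, pitch theta and yaw psi (radians)."""
    c, s = np.cos, np.sin
    return np.array([
        [c(theta)*c(psi),                        c(theta)*s(psi),                        -s(theta)],
        [s(phi)*s(theta)*c(psi) - c(phi)*s(psi), s(phi)*s(theta)*s(psi) + c(phi)*c(psi),  s(phi)*c(theta)],
        [c(phi)*s(theta)*c(psi) + s(phi)*s(psi), c(phi)*s(theta)*s(psi) - s(phi)*c(psi),  c(phi)*c(theta)],
    ])
```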
The dynamic model of the robot expressed in the body frame in Newton–Euler formalism is obtained as in [26].
$$
\begin{bmatrix} m I_{3 \times 3} & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} \dot{V} \\ \dot{\omega} \end{bmatrix} +
\begin{bmatrix} \omega \times m V \\ \omega \times I \omega \end{bmatrix} =
\begin{bmatrix} F \\ \tau \end{bmatrix}, \tag{2}
$$
where $I$ is the $3 \times 3$ inertia matrix, $V$ the linear velocity vector and $\omega$ the body angular velocity. The equations of motion for the helicopter of Figure 1 can be written as in [27]:
$$
\dot{\zeta} = v, \qquad
\dot{v} = -g e_3 + \frac{b}{m} R e_3 \sum_{i=1}^{6} \Omega_i^2, \qquad
\dot{R} = R \hat{\omega}, \qquad
I \dot{\omega} = -\omega \times I \omega - J_r (\omega \times e_3)\, \Omega + \tau_a, \tag{3}
$$
where $\zeta$ is the position vector, $R$ the rotation matrix from the body frame to the world frame, $\dot{R}$ represents the rotation dynamics, $\hat{\omega}$ is the skew-symmetric matrix of $\omega$, $\Omega$ the rotor speed, $I$ the body inertia, $J_r$ the rotor inertia, $b$ the thrust factor and $\tau_a$ the torque applied to the body frame due to the rotors. Since we are dealing with a hexarotor, this torque vector differs from the well-known quadrotor torque vector and, for a structure like the one in Figure 2, it can be written as
$$
\tau_a = \begin{bmatrix}
b l \left( -\Omega_2^2 + \Omega_5^2 + \tfrac{1}{2}\left( -\Omega_1^2 - \Omega_3^2 + \Omega_4^2 + \Omega_6^2 \right) \right) \\
\tfrac{\sqrt{3}}{2}\, b l \left( -\Omega_1^2 + \Omega_3^2 + \Omega_4^2 - \Omega_6^2 \right) \\
d \left( -\Omega_1^2 + \Omega_2^2 - \Omega_3^2 + \Omega_4^2 - \Omega_5^2 + \Omega_6^2 \right)
\end{bmatrix}, \tag{4}
$$
where l is the distance from the center of gravity of the robot to the rotor and b is the thrust factor. The full dynamic model is
$$
\begin{aligned}
\ddot{x} &= \left( \cos\phi \sin\theta \cos\psi + \sin\phi \sin\psi \right) \frac{U_1}{m}, \\
\ddot{y} &= \left( \cos\phi \sin\theta \sin\psi - \sin\phi \cos\psi \right) \frac{U_1}{m}, \\
\ddot{z} &= -g + \left( \cos\phi \cos\theta \right) \frac{U_1}{m}, \\
\ddot{\phi} &= \dot{\theta} \dot{\psi} \left( \frac{I_y - I_z}{I_x} \right) - \frac{J_r}{I_x} \dot{\theta} \Omega + \frac{l}{I_x} U_2, \\
\ddot{\theta} &= \dot{\phi} \dot{\psi} \left( \frac{I_z - I_x}{I_y} \right) + \frac{J_r}{I_y} \dot{\phi} \Omega + \frac{l}{I_y} U_3, \\
\ddot{\psi} &= \dot{\phi} \dot{\theta} \left( \frac{I_x - I_y}{I_z} \right) + \frac{l}{I_z} U_4,
\end{aligned} \tag{5}
$$
where $U_1$, $U_2$, $U_3$, $U_4$ and $\Omega$ represent the system inputs, which in the case of the hexarotor are obtained as follows:
$$
\begin{aligned}
U_1 &= b \left( \Omega_1^2 + \Omega_2^2 + \Omega_3^2 + \Omega_4^2 + \Omega_5^2 + \Omega_6^2 \right), \\
U_2 &= b l \left( -\Omega_2^2 + \Omega_5^2 + \tfrac{1}{2}\left( -\Omega_1^2 - \Omega_3^2 + \Omega_4^2 + \Omega_6^2 \right) \right), \\
U_3 &= \tfrac{\sqrt{3}}{2}\, b l \left( -\Omega_1^2 + \Omega_3^2 + \Omega_4^2 - \Omega_6^2 \right), \\
U_4 &= d \left( -\Omega_1^2 + \Omega_2^2 - \Omega_3^2 + \Omega_4^2 - \Omega_5^2 + \Omega_6^2 \right),
\end{aligned} \tag{6}
$$
where d is the drag factor.
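For reference, the mapping from the six rotor speeds to the four control inputs of (6) can be computed in one pass (a sketch under our sign reconstruction of (6); the values of b, l and d here are placeholders, not the parameters of the actual airframe):

```python
import numpy as np

def control_inputs(omega, b=1.0e-5, l=0.215, d=1.0e-7):
    """Inputs U1..U4 of Eq. (6) from rotor speeds omega = [w1..w6].
    b: thrust factor, l: arm length, d: drag factor (placeholder values)."""
    w2 = np.asarray(omega, dtype=float) ** 2   # thrust scales with speed squared
    U1 = b * w2.sum()                          # collective thrust
    U2 = b * l * (-w2[1] + w2[4] + 0.5 * (-w2[0] - w2[2] + w2[3] + w2[5]))  # roll
    U3 = np.sqrt(3) / 2 * b * l * (-w2[0] + w2[2] + w2[3] - w2[5])          # pitch
    U4 = d * (-w2[0] + w2[1] - w2[2] + w2[3] - w2[4] + w2[5])               # yaw
    return U1, U2, U3, U4
```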

3. Visual Servo Control

In this paper, we use an Image Based Visual Servo control approach in the eye-in-hand configuration: the camera is mounted on the robot, and the movement of the hexarotor induces camera motion [28].
The purpose of the vision based control is to minimize the error
$$
e(t) = s(m(t), a) - s^*, \tag{7}
$$
where $s$ is the vector of captured features, computed from a vector of 2D image-plane point coordinates $m(t)$ and a set of known camera parameters $a$ (e.g., the camera intrinsic parameters). The vector $s^*$ contains the desired values. Since the error $e(t)$ is defined in the image space while the robot moves in 3D space, it is necessary to relate changes in the image features to the hexarotor displacement. The image Jacobian [29] (also known as the interaction matrix) captures the relation between feature and robot velocities, as shown in
$$
\dot{s} = L_s \mathbf{v}_c, \tag{8}
$$
where $\dot{s}$ is the variation of the feature positions, $L_s$ is the interaction matrix and $\mathbf{v}_c = (v_c, \omega_c)$ denotes the camera translational ($v_c$) and rotational ($\omega_c$) velocities. Considering $\mathbf{v}_c$ as the control input, we can try to ensure an exponential decrease of the error with
$$
\mathbf{v}_c = -\lambda L_s^{+} e, \tag{9}
$$
where $\lambda$ is a positive constant, $L_s^{+} \in \mathbb{R}^{6 \times k}$ is the pseudo-inverse of $L_s$, $k$ is the number of features and $e$ the feature error.
To calculate the interaction matrix, consider a 3D point $\mathbf{X}$ with coordinates $(X, Y, Z)$ in the camera frame; its projection onto the image plane is the point $\mathbf{x}$ with coordinates $(x, y)$, defined as
$$
x = X/Z = \frac{u - c_u}{f \alpha}, \qquad y = Y/Z = \frac{v - c_v}{f}, \tag{10}
$$
where $(u, v)$ are the coordinates of the point in the image expressed in pixel units, $(c_u, c_v)$ are the coordinates of the principal point, $\alpha$ is the ratio of pixel dimensions and $f$ the focal length. Differentiating (10), we have
$$
\dot{x} = \frac{\dot{X} - x \dot{Z}}{Z}, \qquad \dot{y} = \frac{\dot{Y} - y \dot{Z}}{Z}. \tag{11}
$$
The relation between a fixed 3D point and the camera spatial velocity is stated as follows:
$$
\dot{\mathbf{X}} = -v_c - \omega_c \times \mathbf{X}. \tag{12}
$$
Then, we can write the derivatives of the 3D coordinates as
$$
\dot{X} = -v_x - \omega_y Z + \omega_z Y, \qquad
\dot{Y} = -v_y - \omega_z X + \omega_x Z, \qquad
\dot{Z} = -v_z - \omega_x Y + \omega_y X. \tag{13}
$$
Substituting (13) in (11), we can state the pixel coordinates variation as follows:
$$
\begin{aligned}
\dot{x} &= -\frac{v_x}{Z} + \frac{x v_z}{Z} + x y\, \omega_x - (1 + x^2)\, \omega_y + y\, \omega_z, \\
\dot{y} &= -\frac{v_y}{Z} + \frac{y v_z}{Z} + (1 + y^2)\, \omega_x - x y\, \omega_y - x\, \omega_z,
\end{aligned} \tag{14}
$$
which can be written
$$
\dot{\mathbf{x}} = L_x \mathbf{v}_c \tag{15}
$$
with
$$
L_x = \begin{bmatrix}
-\frac{1}{Z} & 0 & \frac{x}{Z} & x y & -(1 + x^2) & y \\
0 & -\frac{1}{Z} & \frac{y}{Z} & 1 + y^2 & -x y & -x
\end{bmatrix}, \tag{16}
$$
where $Z$ is the actual distance from the vision sensor to the feature; for this reason, most IBVS algorithms need to approximate this depth. In our case, we use an RGB-D sensor and this distance is known. To control the six degrees of freedom (DoF), at least three points are necessary [28]. In that particular case, we would have three interaction matrices $L_{x1}$, $L_{x2}$, $L_{x3}$, one for each feature, and the complete interaction matrix is then
$$
L_x = \begin{bmatrix} L_{x1} \\ L_{x2} \\ L_{x3} \end{bmatrix}. \tag{17}
$$
When using three points, there are some configurations for which $L_x$ is singular, and there are four global minima [30]. More precisely, there are four camera poses such that $e = 0$, and these four poses are impossible to differentiate [31]. With this in mind, it is usual to consider more points [28].
On the other hand, only one pose achieves $s = s^*$ when four points are used. Moreover, the pseudo-inverse or the transpose of the interaction matrix can be used interchangeably to solve for $\mathbf{v}_c$ in (15) [32,33].
In this paper, four points are used. In addition, our pattern does not move, and because of the nature of the hexarotor, we assume that the pattern will never appear rotated, since any rotation in roll or pitch produces a translation. In other words, the hexarotor is an underactuated system: there are some acceleration directions that can only be produced by a combination of the actuators. Most of the time this is due to having fewer actuators than degrees of freedom; in the hexarotor, however, no actuator can by itself produce a translational acceleration in the x and y directions. In consequence, it is not possible for this kind of robot to be static and tilted at the same time, and therefore the rotational velocities related to roll and pitch in $\mathbf{v}_c$ are set to 0.
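To make the loop concrete, the sketch below implements the four-point control law under the assumptions above (normalized coordinates from (10), depths Z from the RGB-D sensor); the function names and the gain value are ours, chosen for illustration, and zeroing the roll/pitch rates after the pseudo-inverse is a simplification of the underactuation argument:

```python
import numpy as np

def interaction_rows(x, y, Z):
    """Two rows of Eq. (16) for one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0/Z, 0.0,     x/Z, x*y,        -(1.0 + x**2),  y],
        [0.0,    -1.0/Z,  y/Z, 1.0 + y**2, -x*y,          -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.3):
    """Camera velocity v_c = -lambda * pinv(L_x) * e, Eqs. (9) and (17),
    stacked for four features. features, desired: (4, 2) arrays."""
    L = np.vstack([interaction_rows(x, y, Z)
                   for (x, y), Z in zip(np.asarray(features), depths)])  # 8 x 6
    e = (np.asarray(features) - np.asarray(desired)).ravel()             # feature error
    v = -lam * np.linalg.pinv(L) @ e   # (vx, vy, vz, wx, wy, wz)
    v[3:5] = 0.0                       # roll/pitch rates zeroed (underactuation)
    return v
```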

4. Control of Hexarotor

The hexarotor has four control inputs $U_i$: $U_1$, which represents the translation along the z-axis; $U_2$, the roll torque; $U_3$, the pitch torque; and $U_4$, the yaw torque. The visual algorithm acts as a proportional controller, where $\lambda$ in (9) works as a proportional gain. When combined with the Artificial Neural Network (ANN) based PID, we can adapt not only this proportional gain but also the derivative and integral gains. Since the system is underactuated, we can use the translational velocities $[\dot{x}, \dot{y}]$ computed by IBVS as the roll and pitch torque inputs, and the error will be reduced. This is shown in Figure 3.
The velocity mapping block in Figure 3 translates the velocity vector from IBVS into hexarotor inputs, i.e., $v_x \to$ roll, $v_y \to$ thrust, $v_z \to$ pitch and $\omega_\psi \to$ yaw. In our case, $\omega_\phi = 0$ and $\omega_\theta = 0$, since we assume the pattern will never be rotated because the system is underactuated.
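In code, this mapping is a direct reindexing of the IBVS output (a tiny sketch; the dictionary keys are our labels for the four inputs of Figure 3):

```python
def map_velocities(v):
    """Velocity mapping of Figure 3: v = (vx, vy, vz, wx, wy, wz) from IBVS,
    with wx = wy = 0 by the underactuation assumption."""
    return {"roll": v[0],    # v_x -> roll (U2)
            "thrust": v[1],  # v_y -> thrust (U1)
            "pitch": v[2],   # v_z -> pitch (U3)
            "yaw": v[5]}     # w_psi -> yaw (U4)
```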

5. Neural Network Based PID

Consider the control loop with unitary feedback shown in Figure 4. The conventional digital Proportional-Integral-Derivative (PID) controller with unitary feedback is described in [34], and its control law is given by
$$
U(z) = \left( K_P + \frac{K_I}{1 - z^{-1}} + K_D \left( 1 - z^{-1} \right) \right) E(z), \tag{18}
$$
where $E(z)$ is the error, calculated as the difference between the reference signal and the system output, $R(z) - Y(z)$. The terms $K_P$, $K_I$ and $K_D$ are the proportional, integral and derivative gains, respectively. These gains are related as follows:
$$
K_P = K - \frac{K_I}{2}, \qquad K_I = \frac{K T}{T_i}, \qquad K_D = \frac{K T_d}{T}, \tag{19}
$$
where $K$ is the gain, $T$ is the sample time, $T_i$ is the integration time and $T_d$ the derivative time. Applying the inverse Z-transform to (18), the PID sequence $u(k)$ is given by
$$
u(k) = u(k-1) + K_P\, e(k) + K_I \left[ e(k) - 2 e(k-1) + e(k-2) \right] + K_D \left[ e(k) - e(k-1) \right]. \tag{20}
$$
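For reference, the incremental sequence of (20) takes only a few lines to implement (a sketch; the gains must come from (19) or from tuning, and the class name is ours):

```python
class IncrementalPID:
    """Discrete PID in the incremental form of Eq. (20)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u = 0.0              # previous control value u(k-1)
        self.e1 = self.e2 = 0.0   # past errors e(k-1), e(k-2)

    def step(self, e):
        # u(k) = u(k-1) + KP*e(k) + KI*[e(k)-2e(k-1)+e(k-2)] + KD*[e(k)-e(k-1)]
        self.u += (self.kp * e
                   + self.ki * (e - 2.0 * self.e1 + self.e2)
                   + self.kd * (e - self.e1))
        self.e2, self.e1 = self.e1, e
        return self.u
```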
Although the conventional PID is widely used to control these vehicles because of its simplicity and performance, it is not intended to control highly nonlinear systems such as hexarotors.
In order to handle these nonlinearities, an ANN based PID controller is used. The purpose of the ANN is not only to deal with the nonlinear system, but also to adjust the PID controller gains so that it can also handle uncertainties in the model. The topology of the PID-ANN used is shown in Figure 5.
In Figure 5, the vector $e_i(k)$ represents the proportional error, the derivative of the error and the integral of the error. They are defined as follows:
$$
e_1(k) = e(k), \qquad
e_2(k) = e(k) - 2 e(k-1) + e(k-2), \qquad
e_3(k) = e(k) - e(k-1). \tag{21}
$$
Accordingly, the control law of the conventional PID can be rewritten as
$$
u(k) = u(k-1) + K_P\, e_1(k) + K_I\, e_2(k) + K_D\, e_3(k). \tag{22}
$$
The neuron input is defined as
$$
I = \sum_{i=1}^{3} e_i(k)\, w_i(k), \tag{23}
$$
where $w_i(k)$ are the weights of the network, which are incremented by
$$
\Delta w_i = \eta_i\, e_i(k)\, e(k)\, u(k), \tag{24}
$$
with a learning factor $\eta_i$. The new value of $w_i(k)$ will be
$$
w_i(k) = w_i(k-1) + \Delta w_i(k). \tag{25}
$$
The Euclidean norm will be used to limit the values of $w_i(k)$ as
$$
w_i(k) = \frac{w_i(k)}{\sqrt{\sum_{j=1}^{3} w_j^2(k)}}. \tag{26}
$$
The activation function of the neuron is the hyperbolic tangent; therefore, the output will be
$$
\Phi(I) = A\, \frac{1 - e^{-I b}}{1 + e^{-I b}}, \tag{27}
$$
where $A$ is a gain factor that scales the maximum value of the activation function, whose range is $[-1, 1]$, and $b$ is a scalar used to avoid saturation of the neuron. The control law of the ANN based PID is expressed as follows:
$$
u(k) = u(k-1) + \Phi(I), \tag{28}
$$
and there is one PID-ANN module for every input $U_i$ in (6). Each $U_i$ encodes the rotor speed combination necessary to achieve a specific motion, i.e., if the robot needs to move in more than one direction at the same time, more than one $U_i$ will take a value different from zero.
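A compact sketch of one such module, under our reading of Eqs. (21)–(28), is given below; the initial weights, the learning rates and the use of the previous control value $u(k-1)$ in the weight update of (24) (which avoids a circular dependency on the not-yet-computed $u(k)$) are our assumptions:

```python
import numpy as np

class PIDANN:
    """Single-neuron PID of Section 5: one module per control input U_i."""
    def __init__(self, eta=(0.01, 0.01, 0.01), A=1.0, b=1.0):
        self.w = np.array([1.0, 0.0, 0.0])  # initial PID gains (weights), assumed
        self.eta = np.asarray(eta)          # per-gain learning rates, Eq. (24)
        self.A, self.b = A, b               # output scale and saturation factor
        self.u = 0.0                        # previous control value u(k-1)
        self.e1 = self.e2 = 0.0             # past errors e(k-1), e(k-2)

    def step(self, e):
        ei = np.array([e,                               # e_1(k), Eq. (21)
                       e - 2.0 * self.e1 + self.e2,     # e_2(k)
                       e - self.e1])                    # e_3(k)
        self.w += self.eta * ei * e * self.u            # weight increment, Eq. (24)
        self.w /= np.linalg.norm(self.w) or 1.0         # Euclidean normalization, Eq. (26)
        I = self.w @ ei                                 # neuron input, Eq. (23)
        phi = self.A * (1 - np.exp(-I * self.b)) / (1 + np.exp(-I * self.b))  # Eq. (27)
        self.u += phi                                   # control law, Eq. (28)
        self.e2, self.e1 = self.e1, e
        return self.u
```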

6. Simulation Results

Simulations are implemented in Matlab (Matlab R2016a, The MathWorks Inc., Natick, MA, USA) using the Robotics Toolbox [35]. For the visual servo algorithm, four points are used. In the first experiment, the robot starts on the ground and has to reach a certain position given by these 2D points. To test the algorithms, we simulate uncertainty of the system by changing two parameters separately in two simulations.
In the first simulation, at second 10, the mass of the robot is increased by 50%. It can be seen that the conventional PID controller is unable to keep the position. The results are shown in Figure 6.
In the second simulation, the mass of the system remains constant, but the moment of inertia $I_x$ is increased from $I_x = 0.0820$ to $I_x = 0.550$. Figure 7 shows the results of using the conventional PID when the moment of inertia is increased. The results show that the control input delivered to the robot is excessively high (Figure 7b), making the system unable to follow the reference (Figure 7c).
In the following simulations, the PID-ANN controls the system under the same conditions. The mass is incremented at second 10. It can be seen in Figure 8 that the controller adapts to this mass increment and keeps the reference.
Finally, Figure 9 shows the results of changing the moment of inertia $I_x$ while the mass remains constant. In contrast with the conventional PID, the PID-ANN is able to keep its position.

7. Experimental Results

The hexarotor used in the experiments is the Asctec Firefly (Ascending Technologies, Krailling, Germany). The actual configuration of the experiment is shown in Figure 10. The vision sensor has been changed: we use the Intel RealSense R200 camera (Intel, Santa Clara, CA, USA), which provides RGB and infrared depth sensing with an indoor range from 0.4 m to 2.8 m. This change in the vision sensor modifies the robot's mass and moment of inertia; this uncertainty can be absorbed by the neural network.
Vision information is highly noisy and carries a high computational cost even when working with low-resolution images (in this case, 640 × 480). The more time the algorithm spends in the image capture and processing phase, the larger the error between what the robot sees and its actual position. Computer vision algorithms such as optical flow approaches require tracking a set of n features using some kind of descriptor; other approaches use stereo vision, which requires 3D reconstruction. We propose to use only four points to reduce this time.
It is important to note that coordination between the vision sensors, the neural network and the system model, working at different processing stages with their respective communication architectures, is crucial to achieve a real-time implementation. A QR code is used as the pattern, and we track it with the ZBar bar code reader library; this pattern is chosen because of its robustness to rotations and illumination changes. The algorithms are implemented on the on-board computer of the hexarotor.
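The corner extraction can be done along these lines (a sketch using OpenCV and the pyzbar binding to the ZBar library; the paper does not detail its exact interface, and the registered depth image passed in for sampling Z is our assumption):

```python
import cv2
from pyzbar.pyzbar import decode, ZBarSymbol

def qr_corner_features(frame, depth_m):
    """Corners of the first detected QR code as IBVS features, plus their
    depths Z sampled from a registered depth image (meters, same size as
    frame). Returns (corners, depths), or (None, None) if no code is seen."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    codes = decode(gray, symbols=[ZBarSymbol.QRCODE])
    if not codes:
        return None, None
    corners = [(p.x, p.y) for p in codes[0].polygon]    # four (u, v) pixels
    depths = [float(depth_m[v, u]) for (u, v) in corners]
    return corners, depths
```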
In the first experiment, the moment of inertia and mass of the system are changed, and a previously tuned conventional PID controller is compared with the proposed algorithm. Figure 11 shows the results when the pattern is fixed at a certain position and the hexarotor hovers. As can be seen in Figure 11b, the conventional controller cannot stabilize the system at a fixed position when the model changes. Table 1 shows the Root Mean Square Error (RMSE) and the Average Absolute Deviation (AAD) in pixel units. The pair $(x_i, y_i)$ is the location of feature $i$ ($i = 1, 2, 3, 4$) in image coordinates.
In Figure 11b, the solid increasing lines represent the x position of the four features in image coordinates (pixel units). As can be seen, if a conventional PID is not correctly tuned for this specific system, its position diverges. On the other hand, when the system is controlled by the PID-ANN, its position does not diverge (Figure 11d) even when the controller has not been previously tuned.
Once the ANN-PID has demonstrated its effectiveness over the PID controller, the experiment is repeated, but now the QR pattern moves. As shown in Figure 12, the hexarotor does not lose sight of the objective.

8. Conclusions

In this paper, we proposed a Neural Network based PID controller with visual feedback to control a hexarotor. The hexarotor is equipped with an RGB-D sensor that allows for estimating the feature error; this error is used to compute the camera velocities. The proposed approach is able to deal with delays due to image processing, system uncertainties, noise and changes in the model, since the ANN continuously adapts the PID gains. In contrast with conventional PID controllers, which must be tuned for a specific system, the ANN can deal with nonlinearities and changes in the system.

Acknowledgments

The authors thank the support of CONACYT Mexico, through Projects CB256769 and CB258068 (Project supported by Fondo Sectorial de Investigacion para la Educacion).

Author Contributions

All of the experiments reported in this paper were designed and performed by Javier Gomez-Avila and Carlos Villaseñor. The data and results presented in this work were analyzed and validated by Carlos Lopez-Franco, Alma Y. Alanis and Nancy Arana-Daniel. All authors contributed to the writing and editing of the presented manuscript.

Conflicts of Interest

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Abbreviations

The following abbreviations are used in this manuscript:
UAV: Unmanned Aerial Vehicle
IBVS: Image Based Visual Servo
VTOL: Vertical Take-Off and Landing
PID: Proportional Integral Derivative
ANN: Artificial Neural Network
IMU: Inertial Measurement Unit
PD: Proportional Derivative
RMSE: Root Mean Square Error
AAD: Average Absolute Deviation

References

  1. Bouabdallah, S.; Siegwart, R. Full control of a quadrotor. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 153–158. [Google Scholar]
  2. Achtelik, M.; Zhang, T.; Kuhnlenz, K.; Buss, M. Visual tracking and control of a quadcopter using a stereo camera system and inertial sensors. In Proceedings of the International Conference on Mechatronics and Automation, Changchun, China, 9–12 August 2009; pp. 2863–2869. [Google Scholar]
  3. Klose, S.; Wang, J.; Achtelik, M.; Panin, G.; Holzapfel, F.; Knoll, A. Markerless, vision-assisted flight control of a quadrocopter. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 18–20 October 2010; pp. 5712–5717. [Google Scholar]
  4. Zhang, T.; Kang, Y.; Achtelik, M.; Kuhnlenz, K.; Buss, M. Autonomous hovering of a vision/IMU guided quadrotor. In Proceedings of the International Conference on Mechatronics and Automation, Changchun, China, 9–12 August 2009; pp. 2870–2875. [Google Scholar]
  5. Angeletti, G.; Valente, J.P.; Iocchi, L.; Nardi, D. Autonomous indoor hovering with a quadrotor. In Proceedings of the Workshop SIMPAR, Venice, Italy, 3–4 November 2008; pp. 472–481. [Google Scholar]
  6. Stefanik, K.V.; Gassaway, J.C.; Kochersberger, K.; Abbott, A.L. UAV-based stereo vision for rapid aerial terrain mapping. GISci. Remote Sens. 2011, 48, 24–49. [Google Scholar] [CrossRef]
  7. Kim, J.H.; Kwon, J.W.; Seo, J. Multi-UAV-based stereo vision system without GPS for ground obstacle mapping to assist path planning of UGV. Electron. Lett. 2014, 50, 1431–1432. [Google Scholar] [CrossRef]
  8. Salazar, S.; Romero, H.; Gómez, J.; Lozano, R. Real-time stereo visual servoing control of an UAV having eight-rotors. In Proceedings of the 2009 6th International Conference on IEEE Electrical Engineering, Computing Science and Automatic Control, Toluca, Mexico, 10–13 October 2009; pp. 1–11. [Google Scholar]
  9. Hrabar, S.; Sukhatme, G.S.; Corke, P.; Usher, K.; Roberts, J. Combined optic-flow and stereo-based navigation of urban canyons for a UAV. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), Edmonton, AB, Canada, 2–6 August 2005; pp. 3309–3316. [Google Scholar]
  10. Altug, E.; Ostrowski, J.P.; Mahony, R. Control of a quadrotor helicopter using visual feedback. In Proceedings of the IEEE International Conference on Robotics and Automation, Washington, DC, USA, 11–15 May 2002; Volume 1, pp. 72–77. [Google Scholar]
  11. Romero, H.; Benosman, R.; Lozano, R. Stabilization and location of a four rotor helicopter applying vision. In Proceedings of the American Control Conference, Minneapolis, MN, USA, 14–16 June 2006. [Google Scholar]
  12. Saripalli, S.; Montgomery, J.F.; Sukhatme, G.S. Visually guided landing of an unmanned aerial vehicle. IEEE Trans. Robot. Autom. 2003, 19, 371–380. [Google Scholar] [CrossRef]
  13. Wu, A.D.; Johnson, E.N.; Proctor, A.A. Vision-aided inertial navigation for flight control. J. Aerosp. Comput. Inf. Commun. 2005, 2, 348–360. [Google Scholar] [CrossRef]
  14. Azinheira, J.R.; Rives, P.; Carvalho, J.R.; Silveira, G.F.; De Paiva, E.C.; Bueno, S.S. Visual servo control for the hovering of all outdoor robotic airship. In Proceedings of the International Conference on Robotics and Automation, Washington, DC, USA, 11–15 May 2002; Volume 3, pp. 2787–2792. [Google Scholar]
  15. Bourquardez, O.; Chaumette, F. Visual servoing of an airplane for auto-landing. In Proceedings of the International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 1314–1319. [Google Scholar]
  16. Mejias, L.; Saripalli, S.; Campoy, P.; Sukhatme, G.S. Visual servoing of an autonomous helicopter in urban areas using feature tracking. J. Field Robot. 2006, 23, 185–199. [Google Scholar] [CrossRef] [Green Version]
  17. Serres, J.; Dray, D.; Ruffier, F.; Franceschini, N. A vision-based autopilot for a miniature air vehicle: Joint speed control and lateral obstacle avoidance. Auton. Robots 2008, 25, 103–122. [Google Scholar] [CrossRef]
  18. Hamel, T.; Mahony, R. Visual servoing of an under-actuated dynamic rigid-body system: An image-based approach. IEEE Trans. Robot. Autom. 2002, 18, 187–198. [Google Scholar] [CrossRef]
  19. Alaimo, A.; Artale, V.; Milazzo, C.L.R.; Ricciardello, A. PID controller applied to hexacopter flight. J. Intell. Robot. Syst. 2014, 73, 261–270. [Google Scholar] [CrossRef]
  20. Ceren, Z.; Altuğ, E. Vision-based servo control of a quadrotor air vehicle. In Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), Daejeon, Korea, 15–18 December 2009; pp. 84–89. [Google Scholar]
  21. Rivera-Mejía, J.; Léon-Rubio, A.; Arzabala-Contreras, E. PID based on a single artificial neural network algorithm for intelligent sensors. J. Appl. Res. Technol. 2012, 10, 262–282. [Google Scholar]
  22. Yang, J.; Lu, W.; Liu, W. PID controller based on the artificial neural network. In Proceedings of the International Symposium on Neural Networks, Dalian, China, 19–21 August 2004; pp. 144–149. [Google Scholar]
  23. Ge, S.S.; Zhang, J.; Lee, T.H. Adaptive neural network control for a class of MIMO nonlinear systems with disturbances in discrete-time. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2004, 34, 1630–1645. [Google Scholar] [CrossRef]
  24. Freddi, A.; Longhi, S.; Monteriù, A.; Prist, M. Actuator fault detection and isolation system for an hexacopter. In Proceedings of the 2014 IEEE/ASME 10th International Conference on Mechatronic and Embedded Systems and Applications (MESA), Senigallia, Italy, 10–12 September 2014; pp. 1–6. [Google Scholar]
  25. Sheng, Q.; Xianyi, Z.; Changhong, W.; Gao, X.; Zilong, L. Design and implementation of an adaptive PID controller using single neuron learning algorithm. In Proceedings of the 4th World Congress on Intelligent Control and Automation, Shanghai, China, 10–14 June 2002; Volume 3, pp. 2279–2283. [Google Scholar]
  26. Moussid, M.; Sayouti, A.; Medromi, H. Dynamic modeling and control of a hexarotor using linear and nonlinear methods. Int. J. Appl. Inf. Syst. 2015, 9. [Google Scholar] [CrossRef]
  27. Bouabdallah, S.; Murrieri, P.; Siegwart, R. Design and control of an indoor micro quadrotor. In Proceedings of the International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; Volume 5, pp. 4393–4398. [Google Scholar]
  28. Chaumette, F.; Hutchinson, S. Visual servo control. I. Basic approaches. IEEE Robot. Autom. Mag. 2006, 13, 82–90. [Google Scholar] [CrossRef]
  29. Weiss, L.; Sanderson, A.; Neuman, C. Dynamic sensor-based control of robots with visual feedback. IEEE J. Robot. Autom. 1987, 3, 404–417. [Google Scholar] [CrossRef]
  30. Michel, H.; Rives, P. Singularities in the Determination of the Situation of a Robot Effector from the Perspective View of 3 Points. Ph.D. Thesis, INRIA, Valbonne, France, 1993. [Google Scholar]
  31. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  32. Chaumette, F. Potential problems of stability and convergence in image-based and position-based visual servoing. In The Confluence of Vision and Control; Springer: Berlin, Germany, 1998; pp. 66–78. [Google Scholar]
  33. Wu, Z.; Sun, Y.; Jin, B.; Feng, L. An Approach to Identify Behavior Parameter in Image-based Visual Servo Control. Inf. Technol. J. 2012, 11, 217. [Google Scholar]
  34. Ogata, K. Discrete-Time Control Systems; Prentice Hall: Englewood Cliffs, NJ, USA, 1995; Volume 2. [Google Scholar]
  35. Corke, P.I. Robotics, Vision & Control: Fundamental Algorithms in Matlab; Springer: Berlin, Germany, 2011. [Google Scholar]
Figure 1. Structure of hexarotor and coordinate frames.
Figure 2. Geometry of hexarotor.
Figure 3. Block diagram of our Image Based Visual Servo (IBVS) control algorithm combined with the Artificial Neural Network based PID (PID-ANN). The PID-ANN block consists of four modules (one for each degree of freedom). The output of the IBVS block is the velocity vector, with angular velocity equal to 0 for roll and pitch, since we assume the pattern will never be rotated in those angles. The translational velocities $v_x$, $v_y$, $v_z$ from the IBVS block are mapped to $U_2$, $U_3$ and $U_1$, respectively, in order to be added to the PID-ANN output. These $U_i$ control actions are translated into translational displacements of the robot.
Figure 4. Control loop for conventional PID in discrete time.
Figure 5. PID-ANN topology. There is one PID-ANN module for every DoF. $e(k)$ represents the error in that DoF; $e_i(k)$ is a vector containing the error, the derivative of the error and the integral of the error; and $\eta_i$ is the learning rate, which must be selected heuristically to be small enough to avoid saturation of the neuron, and is not necessarily the same for every gain (proportional, derivative and integral). The Euclidean norm block normalizes the neuron weights $w_i$ in order to avoid divergence, since the neuron weights are the PID controller gains. The output of the neuron is the control input $u(k)$, which is translated into thrust ($U_1$), roll ($U_2$), pitch ($U_3$) or yaw ($U_4$), depending on the state error at the input of the ANN.
Figure 6. Simulation using the conventional PID. At 10 s, the mass of the system is increased by 50% and $\lambda = 0.3$. (a) Cartesian velocities; (b) control input; (c) features error.
Figure 7. Simulation using the conventional PID. The moment of inertia is increased; the mass remains constant. It can be seen that the robot is not stable when $I_x$ changes. (a) Cartesian velocities; (b) control input; (c) features error.
Figure 8. Simulation using the PID-ANN. At 10 s, the mass of the system is increased by 50% and $\lambda = 0.3$. The system remains at the desired position. (a) Cartesian velocities; (b) control input; (c) features error.
Figure 9. Simulation using the ANN based PID. The moment of inertia is increased; the mass remains constant. It can be seen that the robot remains stable at the desired position when $I_x$ changes. (a) Cartesian velocities; (b) control input; (c) features error.
Figure 10. Actual experiment configuration. The corners of the Quick Response (QR) code represent the 3D features.
Figure 11. Experimental results when mass and moment of inertia changed. The pattern remains at the same position during the test. (a) control input conventional PID; (b) error conventional PID; (c) control input ANN-PID; (d) error ANN-PID.
Figure 12. Experimental results at hover position when mass and moment of inertia changed. The pattern is moving during the test. (a) control input ANN-PID; (b) error ANN-PID.
Table 1. Controllers' comparison between the conventional Proportional Integral Derivative (PID) controller and the Artificial Neural Network based PID controller (ANNPID).

Root Mean Square Error (pixel units)

|        | x1       | y1     | x2      | y2      | x3       | y3     | x4       | y4      |
|--------|----------|--------|---------|---------|----------|--------|----------|---------|
| PID    | 1741.9   | 393.1  | 1712.0  | 339.1   | 1677.5   | 329.2  | 1723.9   | 380.1   |
| ANNPID | 212.4166 | 6.5491 | 91.1447 | 57.4298 | 824.0855 | 9.7654 | 861.8207 | 62.6225 |

Average Absolute Deviation (pixel units)

|        | x1      | y1     | x2      | y2      | x3      | y3     | x4      | y4     |
|--------|---------|--------|---------|---------|---------|--------|---------|--------|
| PID    | 55.2177 | 9.6437 | 54.0636 | 10.5457 | 52.4827 | 10.704 | 53.069  | 9.6577 |
| ANNPID | 43.2281 | 7.4259 | 42.5686 | 11.3874 | 46.6832 | 11.484 | 47.5576 | 7.5503 |
