Article

A Mixed-Initiative Formation Control Strategy for Multiple Quadrotors

by George C. Karras 1, Charalampos P. Bechlioulis 2,*, George K. Fourlas 1 and Kostas J. Kyriakopoulos 3

1 Department of Computer Science and Telecommunications, School of Science, University of Thessaly, 35131 Lamia, Greece
2 Department of Electrical and Computer Engineering, Faculty of Engineering, University of Patras, 26504 Patras, Greece
3 Control Systems Laboratory, School of Mechanical Engineering, National Technical University of Athens, 15773 Athens, Greece
* Author to whom correspondence should be addressed.
Robotics 2021, 10(4), 116; https://doi.org/10.3390/robotics10040116
Submission received: 14 July 2021 / Revised: 22 October 2021 / Accepted: 23 October 2021 / Published: 26 October 2021
(This article belongs to the Section Aerospace Robotics and Autonomous Systems)

Abstract:
In this paper, we present a mixed-initiative motion control strategy for multiple quadrotor aerial vehicles. The proposed approach incorporates formation specifications and motion-planning commands as well as inputs by a human operator. More specifically, we consider a leader–follower aerial robotic system, which autonomously attains a specific geometrical formation, by regulating the distances among neighboring agents while avoiding inter-robot collisions. The desired formation is realized by a decentralized prescribed performance control strategy, resulting in a low computational complexity implementation with guaranteed robustness and accurate formation establishment. The multi-robot system is safely guided towards goal configurations, by employing a properly defined navigation function that provides appropriate motion commands to the leading vehicle, which is the only one that has knowledge of the workspace and the goal configurations. Additionally, the overall framework incorporates human commands that dictate the motion of the leader via a teleoperation interface. The resulting mixed-initiative control system has analytically guaranteed stability and convergence properties. A realistic simulation study, considering a team of five quadrotors operating in a cluttered environment, was carried out to demonstrate the performance of the proposed strategy.

1. Introduction

In recent years, multi-rotor aerial robots have been established as a popular solution for a variety of autonomous or semi-autonomous aerial tasks. Concurrently, as sensor and actuator technology rapidly advances, multi-rotor aerial robots evolve with improved flight endurance, maneuverability and payload efficiency. In critical aerial missions such as search and rescue, load transportation, precision agriculture, field surveillance and monitoring, the coordinated deployment of a multi-agent system outperforms single-agent operation in terms of perception and data collection, speed of completion and fault tolerance. On the other hand, many of the aforementioned missions (e.g., search and rescue) require enhanced cognitive capabilities for efficient decision-making. However, cognitive robotic skills are still in their early stages; hence, mixed-initiative strategies that allow a human operator with enhanced cognitive and decision-making capabilities to provide high-level commands and affect the motion of a robotic swarm online may significantly improve the safety and effectiveness of a mission.

1.1. Related Work

In general, the formation of multi-robot systems can be either rigid or flexible with respect to the inter-robot distance specifications [1,2,3]. Two main formation strategies can be distinguished: (a) the leader–follower configuration and (b) the behavior-based approach [4]. In a leader–follower scheme, the motion of the system is dictated by the leader, while the following vehicles scatter at relatively close ranges, extending the searching area. A leader–follower scheme is energy efficient and usually avoids instabilities related to information dissemination [5,6]. Nevertheless, in its nominal form, robustness issues may arise, since if the leader fails, the overall system collapses [7]. Formation control of multi-rotors is considered a rather interesting topic by the robotics community [8,9] and spans a variety of applications such as evasive maneuvering [10], collision-free trajectory tracking [11], aggressive formation flight [12], target interception [13] and enclosing [14]. Frequently, the primary goal of a formation strategy is to maintain a specific structure of the involved agents, while simultaneously conforming with a set of constraints [15,16,17]. In this respect, numerous control protocols can be found in the related literature, including backstepping approaches [18,19], model predictive control [20], feedback linearization [21], linear quadratic control [22], game theory approaches [23], prescribed performance methods [24] and iterative learning control [25]. A comparative study on formation control is presented in [26]. Although swarm control strategies can be employed for the autonomous operation of many Unmanned Aerial Vehicles (UAVs), effective and task-driven swarm navigation (e.g., search and rescue, autonomous exploration) is still in its early stages, due to the limited cognitive skills of robots. In this vein, mixed-initiative control strategies that allow a human operator to provide high-level commands and affect the initial plan of a UAV swarm, based on his/her enhanced cognitive and decisional capabilities, may significantly improve the perceptual information acquired during a specific aerial mission. A substantial number of works have investigated human-robot interaction in terms of sensor readings and command interfaces, ranging from simple joystick commands using visual feedback from onboard cameras to more sophisticated gesture-based controls employing virtual reality [27,28,29]. However, sensing and direct teleoperation control in human-robot collaboration account for just a small part of the overall challenges faced when dealing with such missions. Human-robot collaboration should be perceptual and cognitive at the same time; humans should make rapid decisions for the robots, by interpreting in real time the sensor feedback (e.g., video, audio, thermal imaging) provided by the robots. Therefore, there is a great need for shared autonomy between humans and robots to allow fast and efficient decision-making.
In a mixed-initiative autonomy mode, the robot can accept varying levels of human intervention, even from multimodal frameworks [28,29,30], hence relieving the operator from the tedious task of direct control. In fact, experimental procedures indicate that collaborative control can increase the performance of the overall system even in highly complicated workspaces [31]. Moreover, mixed-initiative control schemes can be employed successfully in cases where the robotic agent is assigned various tasks such as navigation operations (e.g., take-off, hover, waypoint flight, landing), but also tasks that require proximity and contact with surfaces (e.g., docking, undocking, sample picking, wall inspection) [32,33]; in such cases, force feedback plays an important role in the mixed-initiative process. In any case, the element of trust is of utmost importance in human-robot collaboration; hence, methods towards formulating decision-analytical measures of trust in various cooperative tasks have also been studied [32,33,34]. In the same spirit, the robot should be able to recognize when help is actually needed from the operator and to refuse commands that may pose a threat to itself or its surroundings [32,35]. At this point, we should highlight that fusing motion-planning and control with mixed-initiative strategies is a rather delicate undertaking, since stability and convergence of the overall scheme are not always guaranteed. In this direction, very few studies have been conducted that blend human inputs with the robot guidance laws in a way that preserves the guaranteed convergence properties of the resulting scheme [36].

1.2. Contribution

In this work, and contrary to the related literature, a mixed-initiative formation control strategy is proposed for multiple quadrotor aerial vehicles that incorporates, for the first time, formation specifications and motion-planning commands as well as human inputs. (Preliminary results on prescribed performance distance-based formation control, without incorporating human commands, were published in the Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS’20), Athens, Greece [37]. The main theoretical contribution of the present work is the stability analysis of the integrated system, which incorporates the trajectory tracking of the leader, the prescribed performance distance-based formation strategy of the agents and, finally, the mixed-initiative scheme introducing human commands in the loop. To the best of our knowledge, similar mixed-initiative approaches accompanied by a rigorous stability analysis have not yet appeared in the related literature.) We summarize the main contributions of this paper as follows:
  • Low complexity, decentralized formation with prescribed performance: The proposed formation control protocol is model-free, and the control output does not require complex calculations; hence, it is ideal for real-time implementation on embedded computing units. It is decentralized, since each vehicle requires only position measurements that are easy to acquire via its onboard sensors, without the necessity for information exchange, while each control scheme runs on the local computing system of each agent. Moreover, the mathematical formulation is quite lean, avoiding the exhaustive numerical methods that are usually required by other approaches tackling similar problems, such as Model Predictive Control or Reinforcement Learning. Finally, the transient and steady state response is predefined via the selection of specific performance functions.
  • Safety in navigation: The multi-agent system is safely guided towards position goal configurations, while simultaneously avoiding obstacles, by employing a properly defined navigation function which calculates the motion commands for the leader vehicle.
  • Flexibility: The overall framework incorporates human commands for the desired motion of the leader via a teleoperation interface. Hence, the overall system is implicitly guided by the mixed-initiative motion of the leader combined with predefined distance formation specifications, which proves rather efficient in cases where either an alternate path should be followed or an ad-hoc release from local minima is required, especially during obstacle avoidance. It should be noted that the proposed strategy allows the human operator to intervene in the motion of the formation without, however, compromising safety (e.g., collisions and connectivity breaks) or performance.

1.3. Outline

The rest of the paper is organized as follows: In Section 2, some preliminary information regarding the quadrotor equations of motion, the formation of the multi-agent system and the mathematical description of the workspace is presented along with the problem formulation. Section 3 describes rigorously the proposed method consisting of the decentralized prescribed performance formation controller, the navigation function for the guidance of the leader as well as the overall mixed-initiative scheme that incorporates the human inputs. Section 4 demonstrates the performance of the proposed architecture via realistic simulation tests. Finally, Section 5 concludes the paper.

2. Preliminaries and Problem Formulation

In this section, we briefly present the quadrotor equations of motion and the low-level control architecture. Next, we provide some preliminary information regarding the distance-based formation control as well as the mathematical description of the operating workspace. Finally, we formulate the problem at hand for the envisioned multi-robot system.

2.1. Quadrotor Equations of Motion

The quadrotor actuation system consists of four motors, where opposite pairs rotate clockwise and counterclockwise, respectively. Thus, the dynamic motion of the system is induced via the torque and thrust applied by each actuation module (motor and propeller) [38,39].
We define a body-fixed frame $B_i = \{e_{B_x^i}, e_{B_y^i}, e_{B_z^i}\}$ located at the i-th vehicle's center of gravity, as depicted in Figure 1, and the inertial frame $I = \{e_{I_x}, e_{I_y}, e_{I_z}\}$ attached at a fixed position $O_I$ inside the vehicles' workspace. According to the Newton-Euler method [39], the dynamic equations of motion for each vehicle may be formulated as follows:

$$ {}^{I}\dot{p}_{B_i} = {}^{I}v_{B_i}, \qquad {}^{I}\dot{v}_{B_i} = m_i^{-1}\,{}^{I}R_{B_i} F_{B_i}, \qquad \dot{\omega}_{B_i} = J_{B_i}^{-1} M_{B_i} \tag{1} $$

where ${}^{I}p_{B_i} = [{}^{I}x_{B_i}\;\; {}^{I}y_{B_i}\;\; {}^{I}z_{B_i}]^T$ and ${}^{I}v_{B_i} = [{}^{I}v_{x_{B_i}}\;\; {}^{I}v_{y_{B_i}}\;\; {}^{I}v_{z_{B_i}}]^T$ are the position and linear velocities of vehicle i with respect to $I$, $\omega_{B_i} = [p_{B_i}\;\; q_{B_i}\;\; r_{B_i}]^T$ is the angular velocity expressed in $B_i$, $m_i$ is the mass, ${}^{I}R_{B_i}$ is the rotation matrix from frame $B_i$ to $I$ and $J_{B_i}$ is the inertia matrix expressed in $B_i$. The overall forces and torques acting on the i-th vehicle are given by the following equations (refer to [38,39] for more details):

$$ F_{B_i} = F_{M_i} + F_{d_i} + F_{g_i}, \qquad M_{B_i} = M_{M_i} + M_{d_i} \tag{2} $$
where:
  • $F_{d_i} = C_{d,F}\, {}^{B_i}R_{I}\, \|{}^{I}v_{B_i} - {}^{I}v_{w_{B_i}}\| \,({}^{I}v_{B_i} - {}^{I}v_{w_{B_i}})$ are the drag forces, where $C_{d,F}$ denotes the drag force coefficient and ${}^{I}v_{w_{B_i}}$ is the wind velocity vector
  • $M_{d_i} = C_{d,M}\, \|\dot{\omega}_{B_i}\|\, \dot{\omega}_{B_i}$ are the drag moments and $C_{d,M}$ denotes the drag moment coefficient
  • $F_{g_i} = m_i\, {}^{B_i}R_{I}\, [0\;\; 0\;\; g_e]^T$ is the gravity vector, with $g_e$ being the gravitational acceleration
  • $F_{M_i} = [0\;\; 0\;\; T_1 + T_2 + T_3 + T_4]^T$ is the vector of the motor thrusts
  • $M_{M_i} = [(T_4 - T_2)\, l_m,\;\; (T_1 - T_3)\, l_m,\;\; -M_1 + M_2 - M_3 + M_4]^T$ is the motor torque vector, where $l_m$ is the distance of each motor as shown in Figure 1
  • $T_i = C_T \bar{\omega}_i^2,\; i = 1, \ldots, 4$ is the thrust of each individual thruster, $C_T > 0$ denotes the thrust coefficient and $\bar{\omega}_i$ is the speed of the i-th thruster
  • $M_i = C_Q \bar{\omega}_i^2,\; i = 1, \ldots, 4$ is the thruster reaction torque, while $C_Q$ denotes the torque coefficient
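To make the role of the equations of motion (1) concrete, the following minimal Python sketch integrates them with a forward-Euler step; the integration scheme, time step and the first-order attitude propagation are illustrative assumptions and do not reproduce the simulator used later in the paper.

```python
import numpy as np

def quadrotor_step(p, v, R, omega, F_B, M_B, m, J, dt):
    """One forward-Euler step of the translational/rotational dynamics (1).

    p, v     : position and linear velocity in the inertial frame I
    R        : rotation matrix from the body frame B_i to I
    omega    : angular velocity expressed in B_i
    F_B, M_B : total force and torque acting on the vehicle, expressed in B_i
    m, J     : mass and 3x3 inertia matrix
    """
    p_next = p + dt * v                                # I_p_dot = I_v
    v_next = v + dt * (R @ F_B) / m                    # I_v_dot = m^-1 * I_R_B * F_B
    omega_next = omega + dt * np.linalg.solve(J, M_B)  # omega_dot = J^-1 * M_B
    # first-order attitude propagation with the body angular velocity (illustrative)
    S = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    R_next = R @ (np.eye(3) + dt * S)
    return p_next, v_next, R_next, omega_next
```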

2.2. Quadrotor Low-Level Control

Regarding the low-level control of each vehicle, we adopt the architecture proposed in [39], which is also implemented in the employed simulation environment. It consists of a fast inner loop responsible for controlling the rotational dynamics and an outer loop handling the translational ones. The overall scheme is implemented using a set of cascaded PI and PID controllers. The proposed architecture guarantees that, for motions close to the hovering state, individual control of each Degree of Freedom (DoF) is feasible. The reference inputs to the low-level control scheme are the desired linear velocities and yaw rate in $B_i$, as well as the desired altitude and yaw $z_{d_i}, \psi_{d_i}$ with respect to $I$. The calculated commanded thrust and torques are then converted to motor voltages by the servo control system. For more information regarding the adopted low-level control strategy, please refer to [39].
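As an illustration of the outer translational loop of such a cascaded scheme, the sketch below implements a simple PI velocity controller; the class name, gains and interface are hypothetical and do not reproduce the actual controller of [39].

```python
import numpy as np

class OuterVelocityLoop:
    """Illustrative outer-loop PI velocity controller: tracks desired body-frame
    linear velocities and outputs a commanded acceleration that an inner-loop
    attitude/thrust controller (not shown) would realize. Gains are arbitrary."""

    def __init__(self, kp=1.5, ki=0.3, dt=0.01):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.int_err = np.zeros(3)

    def command(self, v_des, v_meas):
        err = np.asarray(v_des) - np.asarray(v_meas)
        self.int_err += err * self.dt          # integral term of the PI law
        return self.kp * err + self.ki * self.int_err
```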

2.3. Formation of the Multi-Agent System

In this work, we consider N + 1 identical quadrotors in a leader–follower configuration. The leading vehicle is guided under a mixed-initiative control framework, consisting of a Navigation Function-based scheme combined with a human teleoperation system. The leader is the only agent that has global knowledge of the environment (i.e., global positioning, position of obstacles and desired waypoints). For the formation task, the quadrotors use only onboard sensor data, more specifically inter-robot ranges and headings. As in [2], we investigate a formation case where all vehicles should regulate their distances to each other to achieve the desired formation geometry.
We proceed with the modeling of the formation using undirected graphs, as depicted in Figure 2. Let us define a graph $G(V, E)$ which consists of $l$ edges and $N+1$ vertices. Each vertex corresponds to a quadrotor; hence, $V = \{0, 1, \ldots, N\}$ is the set of vertices and $E \subset V \times V$ is the set of edges, with index 0 referring to the leading quadrotor. We define $N_i(E) = \{j \in V \,|\, (i,j) \in E\}$ as the set of neighbors of the i-th quadrotor. To simplify the presentation, we define $p_i \triangleq {}^{I}p_{B_i}$, where $p_i \in \mathbb{R}^3$, $i = 0, 1, \ldots, N$ is the position of each vehicle, and $\bar{p} \triangleq \mathrm{col}(p_i) \in \mathbb{R}^{3(N+1)}$ represents the realization of $G$ in $\mathbb{R}^3$. Thus, the framework of $G$ is defined by the pair $F \triangleq (G, \bar{p})$. Considering an arbitrary sequence of edges in $E$, a rigidity function $\Phi_G : \mathbb{R}^{3(N+1)} \to \mathbb{R}^{l}$ related to $F$ can be formulated as:

$$ \Phi_G(\bar{p}) = [\ldots, \|p_i - p_j\|, \ldots]^T, \quad (i,j) \in E \tag{3} $$

To achieve the operational specifications, we synthesize a minimally and infinitesimally rigid framework F, with the following rigidity matrix R:

$$ R(\bar{p}) = \frac{\partial \Phi_G(\bar{p})}{\partial \bar{p}} \tag{4} $$

with each row of $R(\bar{p})$ formulated as follows:

$$ \left[ O_{1\times m}^T, \ldots, \frac{(p_i - p_j)^T}{\|p_i - p_j\|}, \ldots, O_{1\times m}^T, \ldots, \frac{(p_j - p_i)^T}{\|p_i - p_j\|}, \ldots, O_{1\times m}^T \right] \tag{5} $$

where $O_{1\times m}^T$ is a vector of length m containing zeros. Since the framework F is infinitesimally rigid, it holds that:

$$ \mathrm{rank}[R(\bar{p})] = 3(N+1) - 6 \tag{6} $$

and consequently the corresponding graph consists of at least $3(N+1) - 6$ edges, i.e., $|E| \geq 3(N+1) - 6$.
We invoke the following Lemma from [13], which will be employed next in the stability analysis of the proposed decentralized control scheme (Section 3.2).
Lemma 1.
If the framework $F = (G, \bar{p})$ is minimally and infinitesimally rigid, then the matrix $R(\bar{p}) R(\bar{p})^T$ is positive definite.
Detailed information about distance-based formation can be found in [40].
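For illustration, the rigidity matrix (4)-(5) and the rank test (6) can be evaluated numerically as in the following minimal NumPy sketch; the function names are ours and the implementation is only an illustrative aid:

```python
import numpy as np

def rigidity_matrix(p_bar, edges):
    """Rigidity matrix R(p_bar) of (4)-(5) for a framework in R^3.

    p_bar : (N+1, 3) array of agent positions
    edges : list of (i, j) index pairs defining the graph edges
    """
    p_bar = np.asarray(p_bar, dtype=float)
    n, l = p_bar.shape[0], len(edges)
    R = np.zeros((l, 3 * n))
    for k, (i, j) in enumerate(edges):
        unit = (p_bar[i] - p_bar[j]) / np.linalg.norm(p_bar[i] - p_bar[j])
        R[k, 3 * i:3 * i + 3] = unit     # d||p_i - p_j|| / d p_i
        R[k, 3 * j:3 * j + 3] = -unit    # d||p_i - p_j|| / d p_j
    return R

def is_infinitesimally_rigid(p_bar, edges):
    """Rank test (6): rank(R) must equal 3(N+1) - 6."""
    p_bar = np.asarray(p_bar, dtype=float)
    return np.linalg.matrix_rank(rigidity_matrix(p_bar, edges)) == 3 * p_bar.shape[0] - 6
```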

2.4. Description of the Workspace

In this work, we consider a team of N + 1 quadrotors which should be autonomously navigated within a workspace, while simultaneously avoiding scattered obstacles and the workspace boundaries. Hence, we consider a team of quadrotors operating in a workspace $\mathcal{W} \subset \mathbb{R}^3$ with boundary $\partial\mathcal{W} = \{p_i \in \mathbb{R}^3 : p_i \in \mathrm{cl}(\mathcal{W}) \setminus \mathrm{int}(\mathcal{W})\}$. We may consider that the overall vehicle formation and the obstacles are modeled by spheres. Let $B(p_c, r_c)$ be a sphere that surrounds the vehicle formation, where $p_c \in \mathbb{R}^3$ is the formation's centroid, which coincides with the leader's position, and $r_c$ is the radius selected accordingly to encapsulate all vehicles. Moreover, we may define $M$ spherical obstacles $\pi_m = B(p_{\pi_m}, r_{\pi_m})$, $m \in \{1, \ldots, M\}$, where $p_{\pi_m} \in \mathbb{R}^3$ is the center and $r_{\pi_m} > 0$ the radius of obstacle $\pi_m$. According to the spherical world representation as described in [41], the next inequality holds for two obstacles $m, m' \in \{1, \ldots, M\}$:

$$ \|p_{\pi_m} - p_{\pi_{m'}}\| > 2 r_c + r_{\pi_m} + r_{\pi_{m'}} \tag{7} $$
which means that, for any pair of obstacles, there exists sufficient space for the entire volume of the formation to travel between them. A graphical representation of the workspace is given in Figure 3. Finally, more complex geometries can be dealt with by employing topologically equivalent transformations, as described in [41] or [42].
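A short sketch of how the separation condition (7) can be verified for a given obstacle layout follows; the function name and interface are illustrative:

```python
import numpy as np

def obstacles_are_separated(obstacles, r_c):
    """Pairwise separation check (7): the gap between any two obstacles must
    exceed the diameter 2*r_c of the sphere enclosing the formation.

    obstacles : list of (center, radius) pairs
    r_c       : radius of the sphere B(p_c, r_c) enclosing the formation
    """
    for idx, (p_m, r_m) in enumerate(obstacles):
        for p_k, r_k in obstacles[idx + 1:]:
            gap = np.linalg.norm(np.asarray(p_m, float) - np.asarray(p_k, float))
            if gap <= 2 * r_c + r_m + r_k:
                return False
    return True
```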

2.5. Problem Statement

In the proposed mixed-initiative formation control strategy, we identify the following specifications:
  • We consider a leader–follower scheme consisting of N + 1 quadrotor vehicles.
  • The N + 1 multi-robot system should be safely guided towards specific waypoints inside a workspace with internal obstacles.
  • The N + 1 multi-robot system should always retain an enclosing formation around the leader. Hence, each vehicle should uphold a set of distance specifications $d_{ij}$, $(i,j) \in E$ among the agents with prescribed performance, as depicted in Figure 2.
  • The leader’s desired position coincides with the formation centroid.
  • Only the leader vehicle has global knowledge of the workspace along with the location of the waypoint goals.
  • A human operator may influence the motion behavior of the multi-robot system without compromising the obstacle avoidance and the distance formation properties.
Hence, given that the initial framework $(G, \bar{p}(0))$ is minimally and infinitesimally rigid, we will proceed with the design of a mixed-initiative formation control strategy capable of accomplishing the desired formation as well as tracking a collision-free trajectory towards the goal waypoints. Furthermore, we consider that the only sensor measurements required for the operation of the proposed scheme are acquired from the sensor suite of each vehicle, including the relative distance/heading measurements among neighbors; thus, no explicit information is exchanged by the aerial agents at any time. Only for the case of the leader, we assume that it has global knowledge of the workspace and the position of the desired waypoints. Finally, we consider that the motion trajectory of the multi-robot system may be modified by a human operator via a teleoperation (joystick) interface that sends desired velocity reference signals to the leader. The overall control architecture is presented in Figure 4.

3. Methodology

In this section, we provide the analytical derivation of the mixed-initiative control scheme. First, we design the Navigation Function for the leading robot and show how it is combined with the human input commands, resulting in a smooth trajectory profile. Next, we proceed with the analytical formulation of the distance-based formation controller, which incorporates smooth reference trajectories for the leader. Finally, we provide a stability analysis for the overall mixed-initiative scheme.

3.1. Mixed-Initiative Control

We consider that the leading vehicle has global knowledge of the goal waypoints and obstacles’ location. Hence, its motion control objective is to safely guide the overall formation towards the goal waypoints while avoiding collisions with the scattered obstacles. On the other hand, the followers will implicitly navigate along with the leader as imposed by the decentralized distance-based formation scheme described in Section 3.2.
We denote by $p_L$ and $p_{L_d}$ the current and desired position of the leading quadrotor, respectively. Next, we calculate a feasible trajectory within the workspace $\mathcal{W}$, by a properly designed Navigation Function [41], as follows:

$$ \phi_L(p_L; p_{L_d}) = \frac{\gamma(p_L - p_{L_d})}{\left[ \gamma^{k}(p_L - p_{L_d}) + \beta(p_L) \right]^{1/k}} \tag{8} $$

where $\phi_L : \mathcal{W} \setminus \bigcup_{m=1}^{M} B(p_{\pi_m}, r_{\pi_m} + r_c) \to [0, 1)$ is the potential that determines a safe motion vector field inside the free workspace $\mathcal{W} \setminus \bigcup_{m=1}^{M} B(p_{\pi_m}, r_{\pi_m} + r_c)$. We define $k > 1$ as a design constant, $\gamma(p_L - p_{L_d}) > 0$ with $\gamma(0) = 0$ as the potential field attractive to the goal $p_{L_d}$, and $\beta(p_L) > 0$ as the repulsive potential field owing to the obstacles and workspace boundaries, with:

$$ \lim_{p_L \to \{\text{Boundary} \,\cup\, \text{Obstacles}\}} \beta(p_L) = 0 \tag{9} $$

As shown in [41], $\phi_L(p_L; p_{L_d})$ has a global minimum at $p_{L_d}$ and no other local minima for sufficiently large values of k. Hence, an obstacle-free path from any initial location (except for a set of measure zero [41]) to the goal can be generated by following the negated gradient of $\phi_L(p_L; p_{L_d})$. As a result, the desired velocity profile of the leading vehicle can be designed as follows:

$$ v_{L_d}(t) = -K_{NF}\, \nabla_{p_L} \phi_L(p_L(t), p_{L_d}) \tag{10} $$

where $K_{NF} > 0$ is a positive gain.
Following [36], we proceed with the formulation of a mixed-initiative desired velocity profile, incorporating the human motion intention commands as follows:

$$ v_{L_{mi_d}}(t) = v_{L_d}(t) + r(p_L)\, u_t \tag{11} $$

In (11), $u_t$ denotes the bounded linear velocity command vector provided by the human operator via a joystick teleoperation interface. The weighting function $r(p_L)$ becomes zero when the leader is close to the workspace and obstacle boundaries, while it equals one when the leader is far away from them, to assure safe interference with the motion planner (11). According to [36], we select the following smooth function:

$$ r(p_L) = \frac{\upsilon\!\left(\beta(p_L) - r_s\right)}{\upsilon\!\left(\beta(p_L) - r_s\right) + \upsilon\!\left(\varepsilon + r_s - \beta(p_L)\right)} \tag{12} $$

where $\varepsilon > 0$, $r_s \geq 0$, $\beta(p_L)$ is the obstacle function employed in the Navigation Function $\phi_L$, and the function $\upsilon(t)$ is defined as follows:

$$ \upsilon(t) \triangleq \begin{cases} e^{-1/t}, & t > 0 \\ 0, & t \leq 0 \end{cases} \tag{13} $$
Finally, according to the stability analysis provided in [36], the proposed interconnection exhibits global input-to-state stability properties with respect to the human commands.
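The following sketch illustrates how the blending (11)-(13) can be evaluated, given the gradient of the Navigation Function and the obstacle function value at the leader's position; only $K_{NF} = 0.75$ matches the value used later in Section 4, while the thresholds $r_s$ and $\varepsilon$ below are illustrative assumptions, and the gradient itself is assumed to be computed elsewhere.

```python
import numpy as np

def upsilon(t):
    """Smooth switch (13): exp(-1/t) for t > 0, zero otherwise."""
    return np.exp(-1.0 / t) if t > 0 else 0.0

def blending_weight(beta_pL, r_s, eps):
    """Weighting r(p_L) of (12): 0 when beta(p_L) <= r_s (near obstacles/boundary),
    1 when beta(p_L) >= r_s + eps (far away), smooth in between."""
    num = upsilon(beta_pL - r_s)
    return num / (num + upsilon(eps + r_s - beta_pL))

def mixed_initiative_velocity(grad_phi, u_human, beta_pL, K_NF=0.75, r_s=0.5, eps=1.0):
    """Leader velocity (11): negated Navigation Function gradient (10) plus the
    human command weighted by r(p_L)."""
    v_autonomous = -K_NF * np.asarray(grad_phi, dtype=float)
    return v_autonomous + blending_weight(beta_pL, r_s, eps) * np.asarray(u_human, dtype=float)
```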

3.2. Distance-Based Formation Control

At first, we define the distance errors for each edge of the rigid graph as:

$$ e_{ij}(t) = \|p_i(t) - p_j(t)\| - d_{ij}, \quad (i,j) \in E \tag{14} $$

Then, we define the distance specifications that should be met by the multi-robot system, and more specifically the inter-robot collision avoidance and the sensor range limitations. Regarding collision avoidance, the distance of neighboring agents should be kept greater than a safety distance $\underline{d} < d_{ij}$. Concerning the sensor range limitations, the distance of the agents should be maintained below a radius $\bar{d} > d_{ij}$ to ensure connectivity and allow them the required proximity to achieve reliable relative localization. Given an initial vehicle configuration where the above specifications hold, the decentralized control objective is expressed as follows:

$$ -\underline{\rho}_{ij}(t) < e_{ij}(t) < \bar{\rho}_{ij}(t), \quad (i,j) \in E \tag{15} $$

for all $t \geq 0$, where $\underline{\rho}_{ij}(t)$, $\bar{\rho}_{ij}(t)$ denote strictly positive and decreasing performance functions [43] that satisfy $\lim_{t\to\infty} \underline{\rho}_{ij}(t) \triangleq \underline{\rho}_{ij_\infty} > 0$ and $\lim_{t\to\infty} \bar{\rho}_{ij}(t) \triangleq \bar{\rho}_{ij_\infty} > 0$, respectively. By selecting $\underline{\rho}_{ij}(0) = d_{ij} - \underline{d}$ and $\bar{\rho}_{ij}(0) = \bar{d} - d_{ij}$, satisfying (15) for all time guarantees inter-robot collision avoidance as well as connectivity maintenance, due to the decreasing behavior of $\underline{\rho}_{ij}(t)$, $\bar{\rho}_{ij}(t)$. Furthermore, by appropriately selecting the steady state value and decreasing rate of $\underline{\rho}_{ij}(t)$, $\bar{\rho}_{ij}(t)$, we can impose steady state and transient performance specifications on the errors $e_{ij}(t)$. More specifically, we select the following decaying performance functions:

$$ \underline{\rho}_{ij}(t) = \left( d_{ij} - \underline{d} - \rho \right) \exp(-\lambda t) + \rho \tag{16} $$

$$ \bar{\rho}_{ij}(t) = \left( \bar{d} - d_{ij} - \rho \right) \exp(-\lambda t) + \rho \tag{17} $$

where $\rho$ and $\lambda$ are the maximum allowable steady state error and the decaying rate, respectively.
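A minimal sketch of the performance functions (16)-(17) and the bound check (15) follows; the default numerical values mirror the ones used later in Section 4 ($\underline{d} = 0.5$, $\bar{d} = 4.5$, maximum steady state error 0.55, decay $\exp(-t/5)$), but they are only illustrative here.

```python
import numpy as np

def rho_lower(t, d_ij, d_min=0.5, rho_ss=0.55, lam=1.0 / 5.0):
    """Lower performance function (16): starts at d_ij - d_min, decays to rho_ss."""
    return (d_ij - d_min - rho_ss) * np.exp(-lam * t) + rho_ss

def rho_upper(t, d_ij, d_max=4.5, rho_ss=0.55, lam=1.0 / 5.0):
    """Upper performance function (17): starts at d_max - d_ij, decays to rho_ss."""
    return (d_max - d_ij - rho_ss) * np.exp(-lam * t) + rho_ss

def within_prescribed_bounds(e_ij, t, d_ij):
    """Prescribed performance condition (15) for a single edge (i, j)."""
    return -rho_lower(t, d_ij) < e_ij < rho_upper(t, d_ij)
```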
Remark 1.
The proposed strategy does not include any specific design for the orientation of the vehicles. Instead, the low-level orientation controller of each vehicle can be employed (see Section 2.2), to either retain a fixed heading, or track a specific reference profile independently.

3.2.1. Control Design

We proceed with the design, selecting as control outputs the linear body velocities of each vehicle participating in the formation. The respective outputs will be fed as reference signals to each quadrotor's low-level controller, as described in Section 2.2. We consider a desired smooth and bounded velocity profile $v_{L_{mi_d}}(t)$, as defined in (11), and any initial configuration close to the desired formation within the acceptable collision avoidance and connectivity bounds. At first, we select the performance functions $\underline{\rho}_{ij}(t)$, $\bar{\rho}_{ij}(t)$, $(i,j) \in E$ as in (16) and (17) and the parameter values $\rho$ and $\lambda$ in order to prescribe the desired performance specifications of the steady state error and the speed of convergence.
Next, for all the vehicles participating in the formation, we design the following linear velocity control scheme:
$$ v_I = -k_E R^T \Xi_E E + I_L\, v_{mi_d}, \quad k_E > 0 \tag{18} $$

which is expressed in the inertial frame, where $R \in \mathbb{R}^{l \times 3(N+1)}$ is the rigidity matrix as defined in (4), $v_{mi_d}(t) \triangleq [v_{L_{mi_d}}^T(t), v_{L_{mi_d}}^T(t), \ldots, v_{L_{mi_d}}^T(t)]^T \in \mathbb{R}^{3(N+1)}$, $I_L \triangleq \mathrm{diag}(I_{3\times3}, O_{3\times3}, \ldots, O_{3\times3}) \in \mathbb{R}^{3(N+1) \times 3(N+1)}$ is a selection matrix activating the mixed-initiative control only for the leading vehicle, and $E$ is the vector of modulated errors given as follows:

$$ E \triangleq \mathrm{col}\left( \ln \frac{1 + \dfrac{e_{ij}(t)}{\underline{\rho}_{ij}(t)}}{1 - \dfrac{e_{ij}(t)}{\bar{\rho}_{ij}(t)}} \right)_{(i,j) \in E} \in \mathbb{R}^{l} \tag{19} $$

and $\Xi_E$ is a diagonal matrix containing the partial derivatives of the modulated errors with respect to the distance errors:

$$ \Xi_E \triangleq \frac{\partial E}{\partial\, \mathrm{col}(e_{ij})} = \mathrm{diag}\left( \frac{\underline{\rho}_{ij}(t) + \bar{\rho}_{ij}(t)}{\left( \underline{\rho}_{ij}(t) + e_{ij}(t) \right)\left( \bar{\rho}_{ij}(t) - e_{ij}(t) \right)} \right)_{(i,j) \in E} \in \mathbb{R}^{l \times l} \tag{20} $$

The linear velocity profile is then expressed in the body frame of each vehicle via the respective rotation matrix:

$$ v_d \triangleq [v_{0_d}^T, v_{1_d}^T, \ldots, v_{N_d}^T]^T = \left( {}^{I}R_B \right)^{-1} v_I \tag{21} $$

where ${}^{I}R_B = \mathrm{diag}\left( {}^{I}R_{B_0}, {}^{I}R_{B_1}, \ldots, {}^{I}R_{B_N} \right)$. Finally, the body velocity commands (21) are directed as references to the low-level control architecture (Section 2.2) of each quadrotor.
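Putting (18)-(21) together, the following sketch computes the inertial-frame velocity command for all agents from the current positions, desired distances and current performance bounds; it reuses the `rigidity_matrix` helper from the sketch in Section 2.3 and is a simplified, centralized illustration of the decentralized law (in practice each agent evaluates only its own components using local relative measurements).

```python
import numpy as np

def formation_velocity_command(p_bar, edges, d_des, rho_lo, rho_up, v_L_mi, k_E=0.1):
    """Inertial-frame velocity command (18) for all N+1 agents (centralized sketch).

    p_bar   : (N+1, 3) array of agent positions (index 0 is the leader)
    edges   : list of (i, j) pairs of the rigid graph
    d_des   : dict mapping (i, j) -> desired distance d_ij
    rho_lo, rho_up : dicts mapping (i, j) -> current values of the performance functions
    v_L_mi  : mixed-initiative leader velocity from (11), length-3 array
    """
    p_bar = np.asarray(p_bar, dtype=float)
    R = rigidity_matrix(p_bar, edges)                    # helper from the Section 2.3 sketch
    E = np.zeros(len(edges))
    xi = np.zeros(len(edges))
    for k, (i, j) in enumerate(edges):
        e = np.linalg.norm(p_bar[i] - p_bar[j]) - d_des[(i, j)]
        lo, up = rho_lo[(i, j)], rho_up[(i, j)]
        E[k] = np.log((1.0 + e / lo) / (1.0 - e / up))   # modulated error (19)
        xi[k] = (lo + up) / ((lo + e) * (up - e))        # diagonal entry of Xi_E (20)
    v_I = -k_E * R.T @ (xi * E)                          # prescribed performance feedback
    v_I[0:3] += np.asarray(v_L_mi, dtype=float)          # mixed-initiative term, leader only
    return v_I.reshape(-1, 3)                            # row i: inertial velocity of agent i
```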

3.2.2. Stability Analysis

We proceed with the stability analysis of the proposed controller, which is summarized as follows:
Theorem 1.
Consider a group of N + 1 quadrotors in a leader–follower formation modeled by (1), at an initial minimally and infinitesimally rigid configuration. The decentralized control protocol proposed in Section 3.2.1 guarantees that the group safely navigates within a cluttered environment while simultaneously: (i) achieving a predefined rigid formation shape around the leading vehicle with prescribed transient and steady state performance, (ii) avoiding inter-agent collisions and connectivity breaks, and (iii) incorporating human input commands without affecting the global stability of the system.
Proof. 
Since the initial distance errors satisfy $-\underline{\rho}_{ij}(0) < e_{ij}(0) < \bar{\rho}_{ij}(0)$, for all $j \in N_i$ and $i = 0, 1, \ldots, N$, Theorem 54 (p. 476) in [44] guarantees that $-\underline{\rho}_{ij}(t) < e_{ij}(t) < \bar{\rho}_{ij}(t)$ for a maximal interval $[0, \tau_f)$ with $\tau_f \in (0, +\infty]$. Subsequently, arguments will be invoked to prove that $\tau_f = +\infty$. Therefore, consider the following positive definite function of the modulated distance errors E:

$$ V = \tfrac{1}{2} E^T E \tag{22} $$

Differentiating (22) with respect to time, we obtain:

$$ \dot{V} = E^T \frac{\partial E}{\partial\, \mathrm{col}(e_{ij})}\, \mathrm{col}\!\left( \dot{e}_{ij}(t) \right) + E^T \frac{\partial E}{\partial\, \mathrm{col}(\bar{\rho}_{ij})}\, \mathrm{col}\!\left( \dot{\bar{\rho}}_{ij}(t) \right) + E^T \frac{\partial E}{\partial\, \mathrm{col}(\underline{\rho}_{ij})}\, \mathrm{col}\!\left( \dot{\underline{\rho}}_{ij}(t) \right) \tag{23} $$

Employing the facts that $\frac{\partial E}{\partial\, \mathrm{col}(e_{ij})} = \Xi_E$, $\frac{\partial E}{\partial\, \mathrm{col}(\bar{\rho}_{ij})} = \Xi_E \Xi_{\bar{\rho}_{ij}}$ and $\frac{\partial E}{\partial\, \mathrm{col}(\underline{\rho}_{ij})} = \Xi_E \Xi_{\underline{\rho}_{ij}}$ for some bounded diagonal matrices $\Xi_{\underline{\rho}_{ij}}, \Xi_{\bar{\rho}_{ij}}$, and that $\mathrm{col}\!\left( \dot{e}_{ij}(t) \right) = R \left( {}^{I}R_B \right) v_d$, where $R$ is the rigidity matrix and $v_d$ is defined in (21), we have:

$$ \dot{V} = E^T \Xi_E R\, v_I + E^T \Xi_E \left( \Xi_{\bar{\rho}_{ij}}\, \mathrm{col}\!\left( \dot{\bar{\rho}}_{ij}(t) \right) + \Xi_{\underline{\rho}_{ij}}\, \mathrm{col}\!\left( \dot{\underline{\rho}}_{ij}(t) \right) \right) \tag{24} $$

Finally, invoking (18), we obtain:

$$ \dot{V} = -k_E E^T \Xi_E R R^T \Xi_E E + E^T \Xi_E \left( R\, I_L\, v_{mi_d}(t) + \Xi_{\bar{\rho}_{ij}}\, \mathrm{col}\!\left( \dot{\bar{\rho}}_{ij}(t) \right) + \Xi_{\underline{\rho}_{ij}}\, \mathrm{col}\!\left( \dot{\underline{\rho}}_{ij}(t) \right) \right) \tag{25} $$

Notice that all terms in the right parenthesis are bounded by construction or by assumption. Moreover, Lemma 1 guarantees that the square matrix $R R^T$ is positive definite. Hence, we may conclude that the modulated error vector E in (19) remains bounded. As a result, inequalities (15) are strictly satisfied for all $t \in [0, \tau_f)$, thus concluding that $\tau_f = +\infty$ by Proposition C.3.6 (p. 481) in [44]. Furthermore, since E was proven bounded, the velocity profiles $v_I$ and consequently $v_d$ also remain bounded. Lastly, the formation centroid $p_c(t) \triangleq \frac{1}{N+1} \sum_{i=0}^{N} p_i(t)$, which practically coincides with the leader's position for sufficiently small allowable steady state errors (i.e., $\rho \ll 1$), obeys $\dot{p}_c = v_{L_{mi_d}}$, from which we conclude that the overall formation navigates safely towards the desired waypoint, while incorporating the human commands without compromising the operational specifications. □

4. Results and Discussion

4.1. System Description

The performance of the overall mixed-initiative scheme is demonstrated via a set of simulation scenarios conducted in ROS and Gazebo [45,46]. The package hector_quadrotor [47] was incorporated and modified accordingly to synthesize a multi-vehicle waypoint tracking scenario with simultaneous obstacle avoidance. We consider $N + 1 = 5$ identical quadrotors ($Q_i$), with $Q_0$ designating the leader that has global knowledge of the environment (goals, obstacle configurations and workspace boundaries), as depicted in Figure 3. The N = 4 followers use only relative range measurements, without explicit information exchange, which is a practical and realistic assumption that can be realized via a common onboard sensor suite (e.g., acoustic).

4.2. Simulation Results

We conducted two different mission scenarios, namely Scenario 1 and Scenario 2. In both cases, the goal was to autonomously guide the multi-robot system to a set of desired waypoints, located inside a bounded workspace with scattered obstacles. Our strategy (Figure 4) implements a leader–follower configuration, where only the leading quadrotor has knowledge of the goal waypoint locations and the workspace constraints. The Navigation Function presented in (10) is responsible for calculating the velocity profile for the leader to safely reach the desired waypoints. The followers are implicitly guided along with the leader under the influence of the decentralized formation controller, which is responsible for maintaining the desired inter-agent distance specifications. Hence, the motion control of each agent is calculated via the implementation of (18). During Scenario 1, the human operator does not provide any input to the system (i.e., $u_t = 0$); thus, (11) is simplified to (10). On the other hand, during Scenario 2 and at specific time instances, the human applies additional velocity commands to the leader via a joystick device (i.e., $u_t \neq 0$); hence, the velocity inputs are mixed accordingly via the implementation of (11). The goal of mixing the control inputs in Scenario 2 is to demonstrate that the overall trajectory of the formation can be influenced at any time by the human operator, without compromising safety (e.g., collisions) or performance (e.g., connectivity breaks). The orientation of the vehicles is kept constant in both scenarios; thus, no angular velocity control input is applied.
The workspace is considered bounded and modeled as a sphere $B_w = B(p_w, r_w)$, with $p_w = [0, 0, 0]^T$ and $r_w = 40$. We also follow a spherical representation for the obstacles $\pi_m = B(p_{\pi_m}, r_{\pi_m})$, $m \in \{1, \ldots, 3\}$, where $p_{\pi_1} = [10, 0, 4]$, $p_{\pi_2} = [20, 20, 8]$, $p_{\pi_3} = [30, 0, 12]$ are the centers and $r_{\pi_1} = r_{\pi_2} = r_{\pi_3} = 2$ the radii, respectively. We assume that the multi-agent system is enclosed inside a virtual sphere $B_c = B(p_c, r_c)$, with $p_c$ being the formation centroid and $r_c = 3.5$. The desired waypoints are located at $wp_1 = [20, 4, 10]$, $wp_2 = [30, 8, 10]$, $wp_3 = [20, 28, 10]$. We consider that each waypoint has been reached when the following inequality holds: $\|p_c - wp_i\| \leq 1$.
We choose the formation to be that of a triangular right pyramid, where the followers are located at the pyramid's apexes and the leader at the centroid. We consider a pyramid with height $h = 2.45$ and base length $a = 3.0$. Hence, all quadrotors must conform to the necessary distance specifications to achieve and maintain the triangular formation. In particular, the desired inter-agent distances are set as $d_{10} = (1/3)h$, $d_{12} = a$, $d_{13} = a$, $d_{14} = a$, $d_{23} = a$, $d_{24} = a$, $d_{30} = (2/3)h$, $d_{34} = a$, $d_{40} = (2/3)h$. We choose the safety inter-distance to avoid collisions as $d_{ij} > \underline{d} = 0.5$, and the distance for maintaining connectivity as $d_{ij} < \bar{d} = 4.5$, $(i,j) \in E$. Furthermore, the specifications for minimum convergence speed and maximum steady state error (during formation) are set as $\exp(-t/5)$ and $0.55$, respectively. This intuitively means that the errors should reach close to zero in approximately 20 s. We selected the performance functions as in (16) and (17) to satisfy the specific performance requirements. The framework is minimally and infinitesimally rigid since the graph $G = \left( \{0, 1, 2, 3, 4\},\; \{(1,0), (1,2), (1,3), (1,4), (2,3), (2,4), (3,0), (3,4), (4,0)\} \right)$ has exactly $3 \times 5 - 6 = 9$ edges. The gains are set as $k = 5$, $k_E = 0.1$ and $K_{NF} = 0.75$.
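For concreteness, the simulation setup above can be encoded and sanity-checked as in the short sketch below, which reuses the `obstacles_are_separated` helper from the Section 2.4 sketch; the script is illustrative only.

```python
import numpy as np

# Formation graph and desired inter-agent distances of the triangular right pyramid
# (base length a, height h) described above.
a, h = 3.0, 2.45
edges = [(1, 0), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 0), (3, 4), (4, 0)]
d_des = {(1, 0): h / 3.0, (1, 2): a, (1, 3): a, (1, 4): a,
         (2, 3): a, (2, 4): a, (3, 0): 2.0 * h / 3.0, (3, 4): a, (4, 0): 2.0 * h / 3.0}
assert len(edges) == 3 * 5 - 6  # a minimally rigid framework in R^3 with 5 vertices has 9 edges

# Obstacle layout and formation radius; the separation condition (7) should hold.
r_c = 3.5
obstacles = [([10.0, 0.0, 4.0], 2.0), ([20.0, 20.0, 8.0], 2.0), ([30.0, 0.0, 12.0], 2.0)]
print(obstacles_are_separated(obstacles, r_c))  # expected output: True
```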
The evolution of Scenario 1 is depicted in Figure 5 at 12 sequential time instances. The trajectory of the vehicles in 3D is depicted in Figure 6. In this Scenario, no human input was commanded to the leader; hence, the latter was guided only by the Navigation Function and the formation controller. The 3D trajectory of the leader vehicle versus the centroid of the formation is depicted in Figure 7. The errors of the inter-distances converge close to zero within the prescribed time, while the upper and lower bound specifications are constantly satisfied, as shown in Figure 8. The control velocity commands (inertial frame), as dictated by (18), are depicted in Figure 9, Figure 10 and Figure 11.
The evolution of Scenario 2 is depicted in Figure 12 at 12 sequential time instances. The trajectory of the vehicles in 3D is depicted in Figure 13. In this Scenario, and at specific time instances, the human operator commands additional velocities to the leader, as depicted in Figure 14. Hence, the latter was guided by the mixed-initiative control scheme (Equation (11)) and the formation controller. The 3D trajectory of the leader vehicle versus the centroid of the formation is shown in Figure 15. We may notice that the trajectories of the vehicles and the formation centroid were clearly affected by the application of the human commands. More specifically, we can observe that the route from the starting position to $wp_1$, as well as the route from $wp_2$ to $wp_3$, are completely different in comparison to Scenario 1, due to the influence of the human commands. Nevertheless, the errors of the inter-distances converge close to zero within the prescribed time, while the upper and lower bound specifications are constantly satisfied, as shown in Figure 16. As indicated by the results, the human operator intervened in the motion of the formation (i.e., the resulting trajectories are not topologically equivalent to those of Scenario 1), without however compromising safety (e.g., collisions) or performance (e.g., connectivity breaks). The control velocity commands (inertial frame), as dictated by (18), are depicted in Figure 17, Figure 18 and Figure 19.

5. Conclusions

This work presented a mixed-initiative motion control strategy for multiple quadrotors. The proposed approach simultaneously incorporates formation specifications, motion-planning commands as well as inputs by a human operator. More specifically, we consider a leader–follower aerial system, which autonomously attains a specific geometrical formation, by regulating the distances among neighboring agents and simultaneously avoiding inter-robot collisions. The desired formation is realized by a decentralized prescribed performance control strategy, which is computationally efficient, robust and straightforward to implement. The multi-agent system is safely guided towards goal configurations by employing a properly defined navigation function which calculates the motion commands for the leading vehicle. In the proposed strategy, only the leader has knowledge of the workspace and the goal configurations. Additionally, the overall framework incorporates human commands for the desired motion of the leader via a teleoperation interface. Hence, the overall system is implicitly guided by the mixed-initiative motion of the leader in combination with the maintenance of the distance formation specifications. The resulting mixed-initiative control system was evaluated via realistic simulation tests.
In our future research, we will proceed with the experimental validation of the proposed control strategy, to assess the efficacy of the overall architecture in the presence of external disturbances and sensor noise. Moreover, we aim to extend the proposed work to more complex workspaces (e.g., dense obstacle environments or workspaces of generic geometry) by employing Harmonic Maps and replacing the Navigation Function with a Harmonic Potential Field controller [42,48]. Finally, we aim to employ our mixed-initiative control strategy for training motion planners to avoid local minima.

Author Contributions

Conceptualization, G.C.K., C.P.B., G.K.F. and K.J.K.; methodology, C.P.B. and G.C.K.; software, G.C.K. and G.K.F.; validation, G.C.K. and G.K.F.; formal analysis, C.P.B. and G.C.K.; writing—original draft preparation, G.C.K.; writing—review and editing, C.P.B., G.K.F. and K.J.K.; supervision, K.J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kwon, J.; Chwa, D. Hierarchical Formation Control Based on a Vector Field Method for Wheeled Mobile Robots. IEEE Trans. Robot. 2012, 28, 1335–1345. [Google Scholar] [CrossRef]
  2. Oh, K.; Park, M.; Ahn, H. A survey of multi-agent formation control. Automatica 2015, 53, 424–440. [Google Scholar] [CrossRef]
  3. Eren, T. Formation shape control based on bearing rigidity. Int. J. Control 2012, 85, 1361–1379. [Google Scholar] [CrossRef]
  4. Balch, T.; Arkin, R.C. Behavior-based formation control for multirobot teams. IEEE Trans. Robot. Autom. 1998, 14, 926–939. [Google Scholar] [CrossRef] [Green Version]
  5. Ni, W.; Cheng, D. Leader-following consensus of multi-agent systems under fixed and switching topologies. Syst. Control Lett. 2010, 59, 209–217. [Google Scholar] [CrossRef]
  6. Baillieul, J.; Antsaklis, P.J. Control and Communication Challenges in Networked Real-Time Systems. Proc. IEEE 2007, 95, 9–28. [Google Scholar] [CrossRef]
  7. Ali, Q.; Gageik, N.; Montenegro, S. A Review on Distributed Control of Cooperating Mini UAVS. Int. J. Artif. Intell. Appl. 2014, 5, 1–13. [Google Scholar] [CrossRef]
  8. Nathan, P.T.; Almurib, H.A.F.; Kumar, T.N. A review of autonomous multi-agent quad-rotor control techniques and applications. In Proceedings of the 2011 4th International Conference on Mechatronics (ICOM), Kuala Lumpur, Malaysia, 17–19 May 2011; pp. 1–7. [Google Scholar] [CrossRef]
  9. Hou, Z.; Wang, W.; Zhang, G.; Han, C. A survey on the formation control of multiple quadrotors. In Proceedings of the 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Korea, 28 June–1 July 2017; pp. 219–225. [Google Scholar] [CrossRef]
  10. Shim, D.H.; Sastry, S. An Evasive Maneuvering Algorithm for UAVs in See-and-Avoid Situations. In Proceedings of the 2007 American Control Conference, New York, NY, USA, 9–13 July 2007; pp. 3886–3891. [Google Scholar] [CrossRef]
  11. Turpin, M.; Michael, N.; Kumar, V. Capt: Concurrent assignment and planning of trajectories for multiple robots. Int. J. Robot. Res. 2014, 33, 98–112. [Google Scholar] [CrossRef]
  12. Turpin, M.; Michael, N.; Kumar, V. Trajectory design and control for aggressive formation flight with quadrotors. Auton. Robot. 2012, 33. [Google Scholar] [CrossRef]
  13. Cai, X.; de Queiroz, M. Formation Maneuvering and Target Interception for Multi-Agent Systems via Rigid Graphs. Asian J. Control 2015, 17, 1174–1186. [Google Scholar] [CrossRef]
  14. Ju, S.; Wang, J.; Dou, L. Enclosing Control for Multiagent Systems With a Moving Target of Unknown Bounded Velocity. IEEE Trans. Cybern. 2021, 2021, 1–10. [Google Scholar] [CrossRef]
  15. Xue, Z.; Zeng, J. Formation Control Numerical Simulations of Geometric Patterns for Unmanned Autonomous Vehicles with Swarm Dynamical Methodologies. In Proceedings of the 2009 International Conference on Measuring Technology and Mechatronics Automation, Zhangjiajie, China, 11–12 April 2009; Volume 1, pp. 477–482. [Google Scholar] [CrossRef]
  16. Tabuada, P.; Pappas, G.J.; Lima, P. Feasible formations of multi-agent systems. In Proceedings of the 2001 American Control Conference, (Cat. No.01CH37148), Arlington, VA, USA, 25–27 June 2001; Volume 1, pp. 56–61. [Google Scholar]
  17. Roldao, V.; Cunha, R.; Cabecinhas, D.; Silvestre, C.; Oliveira, P. A leader-following trajectory generator with application to quadrotor formation flight. Robot. Auton. Syst. 2014, 62, 1597–1609. [Google Scholar] [CrossRef]
  18. Min, Y.X.; Cao, K.; Sheng, H.H.; Tao, Z. Formation tracking control of multiple quadrotors based on backstepping. In Proceedings of the 2015 34th Chinese Control Conference (CCC), Hangzhou, China, 28–30 July 2015; pp. 4424–4430. [Google Scholar] [CrossRef]
  19. Lee, K.; Choi, Y.; Park, J. Backstepping Based Formation Control of Quadrotors with the State Transformation Technique. Appl. Sci. 2017, 7, 1170. [Google Scholar] [CrossRef] [Green Version]
  20. Kuriki, Y.; Namerikawa, T. Formation control with collision avoidance for a multi-UAV system using decentralized MPC and consensus-based control. In Proceedings of the 2015 European Control Conference (ECC), Linz, Austria, 15–17 July 2015; pp. 3079–3084. [Google Scholar] [CrossRef]
  21. Zhao, W. Quadcopter formation flight control combining MPC and robust feedback linearization. J. Frankl. Inst. 2014, 351, 1335–1355. [Google Scholar] [CrossRef]
  22. Rinaldi, F.; Chiesa, S. Linear Quadratic Control for Quadrotors UAVs Dynamics and Formation Flight. J. Intell. Robot. Syst. 2013, 70, 203–220. [Google Scholar] [CrossRef]
  23. Kazerooni, E.S.; Khorasani, K. Semi-Decentralized Optimal Control Technique for a Leader-Follower Team of Unmanned Systems with Partial Availability of the Leader Command. In Proceedings of the 2007 IEEE International Conference on Control and Automation, Rome, Italy, 10–14 April 2007; pp. 475–480. [Google Scholar] [CrossRef]
  24. Hua, C.; Chen, J.; Li, Y. Leader-follower finite-time formation control of multiple quadrotors with prescribed performance. Int. J. Syst. Sci. 2017, 48, 2499–2508. [Google Scholar] [CrossRef]
  25. Zhao, Z.; Wang, J.; Chen, Y.; Ju, S. Iterative learning-based formation control for multiple quadrotor unmanned aerial vehicles. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420911520. [Google Scholar] [CrossRef] [Green Version]
  26. Gulzar, M.; Rizvi, S.; Javed, M.Y.; Munir, U.; Asif, H. Multi-Agent Cooperative Control Consensus: A Comparative Review. Electronics 2018, 7, 22. [Google Scholar] [CrossRef] [Green Version]
  27. Hughes, S.; Lewis, M. Robotic camera control for remote exploration. In Proceedings of the 2004 Conference on Human Factors in Computing Systems—CHI 04, Vienna, Austria, 24–29 April 2004. [Google Scholar]
  28. Hughes, S.; Manojlovich, J.; Lewis, M.; Gennari, J. Camera control and decoupled motion for teleoperation. In Proceedings of the 2003 IEEE International Conference on Systems, Man and Cybernetics, Washington, DC, USA, 5–8 October 2003. [Google Scholar]
  29. Bevacqua, G.; Cacace, J.; Finzi, A.; Lippiello, V. Mixed-initiative planning and execution for multiple drones in search and rescue missions. In Proceedings of the ICAPS 15: International Conference on Automated Planning and Scheduling, Jerusalem, Israel, 7–11 June 2015; pp. 315–323. [Google Scholar]
  30. Lewis, M.; Wang, J.; Hughes, S.; Liu, X. Experiments with attitude: Attitude displays for teleoperation. In Proceedings of the 2003 IEEE International Conference on Systems, Man and Cybernetics, Washington, DC, USA, 8 October 2003. [Google Scholar]
  31. Bruemmer, D.J.; Few, D.A.; Boring, R.L.; Marble, J.L.; Walton, M.C.; Nielsen, C.W. Shared Understanding for Collaborative Control. IEEE Trans. Syst. Man Cybern. 2005, 35, 412–442. [Google Scholar] [CrossRef]
  32. Hardin, B.; Goodrich, M.A. On using mixed-initiative control: A perspective for managing large- scale robotic teams. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction—HRI 09, La Jolla, CA, USA, 9–13 March 2009; p. 165. [Google Scholar]
  33. Cacace, J.; Finzi, A.; Lippiello, V. A mixed-initiative control system for an Aerial Service Vehicle supported by force feedback. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014. [Google Scholar]
  34. Freedy, A.; DeVisser, E.; Weltman, G.; Coeyman, N. Measurement of trust in human-robot collaboration. In Proceedings of the 2007 International Symposium on Collaborative Technologies and Systems, Orlando, FL, USA, 21–25 May 2007. [Google Scholar]
  35. Bruemmer, D.J.; Marble, J.L.; Dudenhoeffer, D.D.; Anderson, M.; McKay, M.D. Mixed-initiative control for remote characterization of hazardous environments. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 6–9 January 2003. [Google Scholar]
  36. Loizou, S.G.; Kumar, V. Mixed Initiative Control of Autonomous Vehicles. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 1431–1436. [Google Scholar] [CrossRef]
  37. Karras, G.C.; Bechlioulis, C.P.; Fourlas, G.K.; Kyriakopoulos, K.J. Formation Control and Target Interception for Multiple Multi-rotor Aerial Vehicles. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 9–12 June 2020; pp. 85–92. [Google Scholar] [CrossRef]
  38. Mahony, R.; Kumar, V.; Corke, P. Multirotor Aerial Vehicles: Modeling, Estimation, and Control of Quadrotor. IEEE Robot. Autom. Mag. 2012, 19, 20–32. [Google Scholar] [CrossRef]
  39. Meyer, J.; Sendobry, A.; Kohlbrecher, S.; Klingauf, U.; von Stryk, O. Comprehensive Simulation of Quadrotor UAVs Using ROS and Gazebo. In Simulation, Modeling, and Programming for Autonomous Robots. SIMPAR 2012. Lecture Notes in Computer Science; Noda, I., Ando, N., Brugali, D., Kuffner, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 400–411. [Google Scholar]
  40. Fidan, B.; Hendrickx, J.M. Rigid Graph Control Architectures for Autonomous Formations. IEEE Control Syst. 2008, 28, 48–63. [Google Scholar]
  41. Koditschek, D.; Rimon, E. Robot navigation functions on manifolds with boundary. Adv. Appl. Math. 1990, 11, 412–442. [Google Scholar] [CrossRef] [Green Version]
  42. Vlantis, P.; Vrohidis, C.; Bechlioulis, C.P.; Kyriakopoulos, K.J. Robot Navigation in Complex Workspaces Using Harmonic Maps. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1726–1731. [Google Scholar] [CrossRef]
  43. Bechlioulis, C.P.; Rovithakis, G.A. Robust adaptive control of feedback linearizable MIMO nonlinear systems with prescribed performance. IEEE Trans. Autom. Control 2008, 53, 2090–2099. [Google Scholar] [CrossRef]
  44. Sontag, E.D. Mathematical Control Theory; Springer: London, UK, 1998. [Google Scholar]
  45. ROS. Robot Operating System. 2009. Available online: http://www.ros.org/ (accessed on 24 March 2021).
  46. Koenig, N.; Howard, A. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2149–2154. [Google Scholar]
  47. Hector_Quadrotor. 2012. Available online: http://wiki.ros.org/hector_quadrotor (accessed on 15 March 2021).
  48. Vlantis, P.; Vrohidis, C.; Bechlioulis, C.P.; Kyriakopoulos, K.J. Orientation-aware motion planning in complex workspaces using adaptive harmonic potential fields. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8592–8598. [Google Scholar] [CrossRef]
Figure 1. Quadrotor reference frames.
Figure 2. Formation graph of the 5 quadrotors.
Figure 3. The formation should be safely navigated inside the constrained workspace avoiding any internal obstacles.
Figure 4. The overall control architecture.
Figure 5. Scenario 1—The progress of the waypoint tracking scenario without the incorporation of human inputs. Twelve consecutive snapshot instances are depicted. The gray circle indicates the location of the multi-agent aerial system.
Figure 6. Scenario 1—The motion of the system in 3D space during the waypoint tracking. The green spheres denote the desired waypoints while the red ones denote the obstacles.
Figure 7. Scenario 1—The 3D trajectories of the leader and the system’s centroid during waypoint tracking. The green spheres denote the desired waypoints while the red ones denote the obstacles.
Figure 8. Scenario 1—Distance formation errors.
Figure 9. Scenario 1—The linear velocity commands along the x-axis of the inertial frame.
Figure 10. Scenario 1—The linear velocity commands along the y-axis of the inertial frame.
Figure 11. Scenario 1—The linear velocity commands along the z-axis of the inertial frame.
Figure 12. Scenario 2—The progress of the waypoint tracking scenario including human inputs. Twelve consecutive snapshot instances are depicted. The gray circle indicates the location of the multi-agent aerial system.
Figure 13. Scenario 2—The motion of the system in 3D space during the waypoint tracking. The green spheres denote the desired waypoints while the red ones denote the obstacles.
Figure 14. Scenario 2—Human velocity commands.
Figure 15. Scenario 2—The 3D trajectories of the leader and the system’s centroid during waypoint tracking. The green spheres denote the desired waypoints while the red ones denote the obstacles.
Figure 16. Scenario 2—Distance formation errors.
Figure 17. Scenario 2—The linear velocity commands along the x-axis of the inertial frame.
Figure 18. Scenario 2—The linear velocity commands along the y-axis of the inertial frame.
Figure 19. Scenario 2—The linear velocity commands along the z-axis of the inertial frame.