Article

Decentralized Mesh-Based Model Predictive Control for Swarms of UAVs

by Salvatore Rosario Bassolillo 1, Egidio D’Amato 2,*, Immacolata Notaro 1, Luciano Blasi 1 and Massimiliano Mattei 1

1 Department of Engineering, University of Campania Luigi Vanvitelli, 81031 Aversa (CE), Italy
2 Department of Science and Technology, University of Naples Parthenope, 80143 Naples, Italy
* Author to whom correspondence should be addressed.
Sensors 2020, 20(15), 4324; https://doi.org/10.3390/s20154324
Submission received: 29 June 2020 / Revised: 26 July 2020 / Accepted: 28 July 2020 / Published: 3 August 2020
(This article belongs to the Special Issue Sensors for Aerial Unmanned Systems)

Abstract

This paper deals with the design of a decentralized guidance and control strategy for a swarm of unmanned aerial vehicles (UAVs), with the objective of maintaining a given connection topology with assigned mutual distances while flying to a target area. In the absence of obstacles, the assigned topology, based on an extended Delaunay triangulation concept, implements regular and connected formation shapes. In the presence of obstacles, this technique is combined with a model predictive control (MPC) that allows forming independent sub-swarms, optimizing the formation spreading to avoid obstacles and collisions between neighboring vehicles. A custom numerical simulator was developed in a Matlab/Simulink environment to prove the effectiveness of the proposed guidance and control scheme in several 2D operational scenarios with obstacles of different sizes and an increasing number of aircraft.

1. Introduction

Research on UAVs has attracted increasing attention for at least two decades, exploring several topics, such as design, development and deployment. Military and civil applications are growing exponentially thanks to the great versatility and adaptability of this kind of platform to a large number of missions. Furthermore, costs keep decreasing due to the technological developments driven by greater attention and wider use.
However, a mission cannot always be completed effectively by a single UAV, as it may require the cooperation of several UAVs to perform complex tasks. This need may also derive from the choice to use smaller and lighter vehicles to reduce costs and increase reliability. In fact, the cost of a single aircraft increases exponentially with the take-off weight [1,2,3,4], whereas the use of several smaller UAVs can also be effective in reducing production and even operating costs [5,6]. The reduction of the mission time can be another important reason to use more than one aircraft whenever the limited endurance of a single UAV forces it to take off and land several times for refueling. Furthermore, the robustness of the overall mission can be enhanced by creating redundancy and reducing the payload weight per aircraft [7,8,9]. Finally, by involving several smaller vehicles with improved mobility in confined spaces, it is possible to survey larger areas than with a single bigger aircraft, even one equipped with higher-quality sensors.
In the literature, several examples can be found regarding swarms of UAVs and related applications. An extended research field focuses on the swarming behavior, called flocking, known in nature as the collective behavior of a large number of agents able to interact and share a common objective [10,11,12,13,14,15,16,17,18,19]. Flocking techniques deal with the ability of a group of autonomous agents (be they aircraft or robots) to move like a swarm of birds or insects.
Tracking a static or moving target has been studied in [20], where multiple UAVs searched for a source of pollution while keeping a safe distance from each other. An example of a group of aircraft looking for targets using random walk paths can be found in [21], where an ant colony-like heuristic algorithm was considered: by depositing a virtual pheromone, the probability that other aircraft pass along the same path was increased. Cooperative surveillance is another investigated task involving a swarm of UAVs: in [22], an evolutionary-based optimization technique was used to find the optimal position of UAVs in the airspace that maximizes the coverage of the area of interest, whereas in [23] a distributed algorithm was considered, taking into account communication constraints. In [24], a cognitive adaptive optimization algorithm was presented, having the goal of maximizing the surveilled area with a team of aircraft. In [25], a behavior-based algorithm leading a swarm of UAVs during a surveillance mission was presented.
However, the key point for an effective use of a swarm is the capability of the aircraft to properly collaborate in order to accomplish the mission objectives by coordinating their tasks.
An example of a collaboration concept consists of constrained flight between multiple UAVs, as in the case of several drones that work linked together to transport heavy loads [26,27,28,29,30].
Another example of a collaboration concept is the use of fleets of fixed-wing aircraft to improve the aerodynamic efficiency during flight [31]. This concept falls into the more general problem of managing the aircraft’s relative positions to optimize performance in terms of fuel consumption or time spent to achieve mission goals [32,33,34,35,36,37].
Generally speaking, the output of a swarm control algorithm consists of the appropriate control signals to maintain the shape of the swarm while satisfying the assigned performance constraints [38]. For example, flocking research focuses on consensus-based formation control [15], which uses the relative position of the vehicles to realize a certain shape of the flight formation. Formation control algorithms focus mainly on stability and performance constraints. The classic flocking control model is based on three heuristic rules [39]: cohesion, separation and alignment. The concept of using flexible swarm shapes has been developed only in recent years.
Another important research field is cooperative motion planning, which is based on collaboration among vehicles with different points of view. Taking into account mission goals (usually in terms of waypoints) and environmental constraints, such as obstacles or no-fly zones, aircraft are driven along optimal trajectories, while benefiting from coordination [40]. Cooperative motion planning algorithms mainly focus on the safety distance from obstacles, computational time, swarm shape and trajectory smoothness [41,42].
To maintain the given shape of the flight formation, several control architectures have been proposed: leader–follower, virtual structure and behavior-based.
The leader–follower approach considers a leader working as a reference aircraft in the swarm with full information about the other vehicles. Some papers introduce the idea of a virtual leader in order to overcome the issues of centralization [43,44,45].
A virtual structure architecture was proposed in [46]. According to this approach, the vehicles keep a rigid geometric relationship to each other in order to maintain a reference shape [47]. This concept considers the formation shape as a rigid body, maintained by minimizing the position error between the virtual structure and the actual aircraft position.
Behavior-based formation control was first proposed in [48]. It deals with the formation control problem by using a hybrid vector-weighted control function, which is able to compute the command action based on several kinds of formation missions. Applications of this approach to large-scale robot formations can be found in [49,50,51,52].
Within the flocking research field, one of the most popular approaches to control flocking behavior is the use of artificial potential fields and consensus [38,53,54]. Flock aggregation and collision avoidance are often dealt with using a potential field, whose design is still an active field of research [55].
Another common approach is based on the use of the nonlinear dynamic inversion technique (NDI). Through the use of nonlinear functions, the NDI control technique attempts to cancel out the inherent dynamics of a plant and enforce the dynamics of a reference model. The most important benefit is the ability to linearize systems. The resulting model can be controlled by any technique, such as sliding mode control (SMC) [56,57], pole placement [58,59] and H∞ control [60]. The authors of [61] dealt with a decentralized algorithm for a swarm of UAVs that uses the potential field approach combined with a sliding mode control technique. Optimal control is another option that can be used to compute the control command to achieve swarm objectives while preserving performance and environmental constraints. A stochastic optimal control approach was presented in [62], whereas linear and nonlinear MPCs were presented in [63,64,65].
In several papers, authors used machine learning heuristics to tackle the problem of controlling a swarm. For example, [66] described a task assignment algorithm using a self-reinforcement learning model to specialize the vehicles for the different task types, balancing the workload; another example is the use of the ant colony algorithm to organize task assignment: in [36], a swarming technique was developed to perform distributed target recognition using UAVs. In [67], multiple UAVs were involved in looking for targets that display evading behavior in order to avoid detection. In [68], a genetic algorithm was used to optimize the tuning of the consensus control parameters of a swarm of quadrotors. In that work, UAVs try to find an efficient geometric flight configuration in order to optimize their sensing capabilities. Nature-inspired optimization algorithms were also used in reconfiguration problems [69,70].
Scalability and decentralization represent two other important issues in swarm management strategy [61]. They are strictly related to the swarm architecture and control approach used to maintain the formation. While several leader–follower architectures are centralized on the leader, computing the desired positions for each aircraft, flocking techniques are usually decentralized, in order to mimic the natural behavior.
In this paper, the design of a decentralized flocking strategy is dealt with for a fleet of UAVs flying at a given altitude to reach a target position while maintaining an assigned connection topology with desired mutual distances.
The starting point of the proposed approach is the triangle formation algorithm (TFA), previously described in [71,72]. Such a technique represents an interactive control algorithm used to organize the swarm into an equilateral triangle-based formation. However, TFA does not assign a precise topology to the formation and may have some issues in the presence of obstacles, allowing the formation of sub-swarms that may not rendezvous with the main formation.
This raises an important open question concerning the spreading of the formation in the presence of obstacles. Several approaches use a virtual leader, allowing the followers to spread between obstacles without creating sub-swarms with a prescribed topology [73,74,75].
In this paper, we propose a new strategy, based on an extension of the Delaunay triangulation algorithm: in the absence of obstacles, the assigned topology guarantees a single regular-meshed flight formation, whereas in the presence of obstacles, it allows the formation to split into independent sub-formations to avoid collisions, forcing them to rendezvous after obstacles.
The application of the Delaunay triangulation in the presence of obstacles represents a novel aspect requiring the definition of a visibility property. Furthermore, while the idea of triangular mesh-based formation has already been exploited [72], the new feature consists in the capability to generate sub-swarms without the definition of any physical or virtual leader.
Another original point concerns trajectory tracking and collision avoidance: TFA computes the desired position of the aircraft to accomplish the triangle-based formation. This is tracked by a proportional navigation-based control algorithm that cannot take into account performance and environmental constraints. In [72], an artificial potential field was also used to take obstacles into account. Our approach is to use a potential field [76,77] to compute the aircraft’s desired heading angle and speed to keep the desired mutual distances between aircraft, whereas an MPC-based control algorithm performs the trajectory tracking and the obstacle collision avoidance.
In our previous paper [78], we presented an MPC strategy in the absence of obstacles with prescribed rules between aircraft. Based on this work, we propose an original reactive MPC aimed at improving the performance of the anti-collision scheme using practical geometrical constraints leading to a computationally tractable algorithm.
The paper is organized as follows:
  • In Section 2, the problem is precisely stated and the general scheme of the proposed strategy is illustrated;
  • Section 3 focuses on the swarm topology based on the Delaunay triangulation;
  • Section 4 describes the potential-based guidance algorithm driving the swarm to a regular-meshed flight formation;
  • Section 5 focuses on the MPC-based trajectory tracking algorithm with collision avoidance;
  • Finally, in Section 6, numerical simulation results are shown for several operational scenarios.

2. Problem Statement and General Architecture

Consider a swarm of N vehicles flying at fixed speed and altitude to simulate a cruise or surveillance mission phase.
As the focus of this paper is mainly on swarm guidance and collision avoidance, we neglected any atmospheric disturbance as well as any change in aircraft properties (i.e., mass variation due to fuel consumption).
We then assume that each aircraft is equipped with its own flight control system (FCS), so the i-th vehicle’s closed loop dynamics can be described well by the following simple kinematic equations:
$$\begin{aligned}
\dot{x}_i(t) &= V_i(t)\cos\psi_i(t) \\
\dot{y}_i(t) &= V_i(t)\sin\psi_i(t) \\
\dot{V}_i(t) &= u_{V_i}(t) \\
\dot{\psi}_i(t) &= u_{\psi_i}(t)
\end{aligned}$$
where $P_i(t) = [x_i(t), y_i(t)]^T$ is the position at time t, $V_i(t)$ is the speed, $\psi_i(t)$ is the heading angle, and $u_{\psi_i}(t)$ and $u_{V_i}(t)$ are the input control signals in terms of turning rate and acceleration along the flight trajectory, respectively.
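To make the vehicle model concrete, the following minimal Python sketch propagates the kinematic equations above with a forward-Euler scheme; the sampling time and the commanded values are illustrative placeholders and are not taken from the paper.

```python
import numpy as np

def propagate_kinematics(state, u_V, u_psi, dt=0.1):
    """Forward-Euler step of the closed-loop kinematic model.

    state = [x, y, V, psi]; u_V is the commanded acceleration along the
    trajectory and u_psi the commanded turn rate (illustrative dt).
    """
    x, y, V, psi = state
    x += V * np.cos(psi) * dt
    y += V * np.sin(psi) * dt
    V += u_V * dt
    psi += u_psi * dt
    return np.array([x, y, V, psi])

# Example: straight flight at 15 m/s with a gentle turn command.
s = np.array([0.0, 0.0, 15.0, 0.0])
for _ in range(10):
    s = propagate_kinematics(s, u_V=0.0, u_psi=0.05)
print(s)
```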
The goal of the fleet is to reach a given target position, namely $P_f = [x_f, y_f]^T$, preserving the mutual distance between pairs of aircraft.
Figure 1 shows the architecture of the guidance algorithm implemented on each aircraft. It is based on the following components:
  • The so-called swarm awareness algorithm (SAA) aims to compute the connection topology $(\mathcal{P}, \bar{Q}_i)$ using an extended Delaunay triangulation technique that takes into account the presence of obstacles.
  • The swarm guidance algorithm (SGA) is based on a potential field technique to compute the desired heading angle and speed $(\tilde{\psi}_i, \tilde{V}_i)$ so as to maintain the mutual distances between aircraft and reach the target position $P_f$.
  • The trajectory tracking and collision avoidance algorithm (TTCAA) is based on a model predictive controller and provides the reference signals $u_{\psi_i}$ and $u_{V_i}$ to the FCS, taking into account environmental constraints (obstacles, no-fly zones, other aircraft) and possible constraints on the aircraft state and input (e.g., minimum/maximum speed, maximum turn rate).
In order to account for a digital implementation, the algorithm works with a fixed sampling period $\Delta t$. In the following, the discrete time $t_k = t_{k-1} + \Delta t$ is referred to as $k$.
The proposed strategy is based on the assumption that each vehicle can measure or estimate its current position, speed and heading. Moreover, it is equipped with an ADS-B or a similar device to share these estimates with the other vehicles.

3. Swarm Awareness Algorithm

The SAA is based on the Delaunay triangulation [79] with the addition of a visibility concept.
Definition 1.
Given the set $\mathcal{P}(k) = \{P_l(k),\ l = 1, 2, \ldots, N\}$ of positions of the swarm vehicles, the Delaunay triangulation $\mathcal{D}(\mathcal{P}(k)) = \{T_{jml} : P_j(k), P_m(k), P_l(k) \in \mathcal{P}(k)\}$ is a triangular partition of the swarm such that no position $P_i(k) \in \mathcal{P}(k)$ is inside the circumcircle of any triangle $T_{jml}(k) \in \mathcal{D}(\mathcal{P})$ ($j, m, l \neq i$).
Definition 2.
Given a Delaunay triangulation, the associated Delaunay triangulation graph (DTG) $\mathcal{G}(k) = \{V(k), A(k)\}$ is composed of the node set $V(k) = \mathcal{P}(k)$ and the arc set $A(k)$ containing all the pairs $\langle P_m(k), P_l(k) \rangle$ such that the segment $\overline{P_m P_l}$ is an edge of a Delaunay triangle.
Definition 3.
A vehicle m is a neighbor of vehicle l at time k if the pair $\langle P_m(k), P_l(k) \rangle \in A(k)$. The set of neighboring aircraft of the j-th vehicle is:
$$Q_j(k) = \{ m : \langle P_m(k), P_j(k) \rangle \in A(k) \}$$
In the presence of obstacles, which are modeled as flat geometrical figures, a visibility concept is also introduced in order to implement an obstacle-avoidance strategy based on the possible splitting of the swarm into independent sub-swarms.
Definition 4.
At time k, aircraft j is visible from aircraft i if the segment $\overline{P_i(k) P_j(k)}$ does not intersect any obstacle. The set of visible neighbors of aircraft i is
$$\bar{Q}_i(k) = \{ j \in Q_i : \Phi_{ij}(k) = 1 \}$$
where $\Phi_{ij}(k)$ is the visibility operator
$$\Phi_{ij}(k) = \Phi_{ji}(k) = \begin{cases} 1 & \text{if } i \text{ is visible from } j \\ 0 & \text{otherwise.} \end{cases}$$
According to the visibility concept, in the presence of obstacles, the DTG is modified by considering only the arcs present in $\bigcup_{i=1}^{N} \bar{Q}_i(k)$. For this reason, the DTG may become disconnected, and several subgraphs without common nodes may arise. However, once the obstacles have been overcome, the DTG becomes connected again, resulting in a rendezvous of the formation.
On each vehicle i, the SAA calculates the DTG and $\bar{Q}_i$ at each time step k.
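As an illustration of how the SAA could be implemented, the following Python sketch computes the Delaunay neighbors of each vehicle and removes the arcs blocked by obstacles. Circular obstacles and the segment–circle intersection test are simplifying assumptions made only for this example, and the library calls are one possible choice rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def segment_hits_circle(p, q, center, radius):
    """True if the segment p-q intersects a circular obstacle."""
    d = q - p
    f = p - center
    t = -np.dot(f, d) / (np.dot(d, d) + 1e-12)   # parameter of the closest point
    t = np.clip(t, 0.0, 1.0)
    closest = p + t * d
    return np.linalg.norm(closest - center) <= radius

def visible_neighbors(positions, obstacles):
    """Return {i: set of visible Delaunay neighbors of i}.

    positions: (N, 2) array; obstacles: list of (center, radius) pairs.
    """
    tri = Delaunay(positions)
    neigh = {i: set() for i in range(len(positions))}
    for simplex in tri.simplices:            # each simplex is a triangle (i, j, k)
        for a in simplex:
            for b in simplex:
                if a == b:
                    continue
                blocked = any(segment_hits_circle(positions[a], positions[b], c, r)
                              for c, r in obstacles)
                if not blocked:
                    neigh[a].add(b)
    return neigh

# Example: 5 vehicles, one circular obstacle.
P = np.array([[0, 0], [100, 0], [50, 80], [150, 90], [200, 10]], dtype=float)
obs = [(np.array([75.0, 40.0]), 20.0)]
print(visible_neighbors(P, obs))
```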

4. The Swarm Guidance Algorithm

The SGA computes the guidance law to guarantee a correct position of the aircraft in the swarm while flying toward the target point. It is based on a local interaction between visible neighboring aircraft and a potential field approach [80,81] forcing the swarm vehicles to settle on the nodes of a regular triangular mesh with a given mutual distance $\bar{d}$.
At each discrete time step, the SGA computes a desired heading $\tilde{\psi}_i(k)$ and speed $\tilde{V}_i(k)$, which are then passed to the trajectory tracking algorithm described in the next section. In other terms, the SGA assures that vehicle i moves toward a target direction $\Gamma_i(k) = [\cos\tilde{\psi}_i(k), \sin\tilde{\psi}_i(k)]^T$ with a given speed.
As shown in Figure 2, let $d_{i,j}(k) = \| P_j(k) - P_i(k) \|$ be the distance between two aircraft, i and j, visible to each other. An isotropic artificial potential field forces aircraft i to stay on a circumference with radius $\bar{d}$ centered on $P_j(k)$, for all visible aircraft j. To achieve this goal, the desired direction $\gamma_{ij}(k) = [x_{\gamma_{ij}}(k), y_{\gamma_{ij}}(k)]^T$ derives from the attractive–repulsive action defined as:
$$x_{\gamma_{ij}}(k) = \begin{cases} -k_1 \left( \bar{d}/d_{i,j}(k) - 1 \right) \cos\alpha_{ij}(k) & \text{if } d_{i,j}(k) \le \bar{d} \\ -k_2 \left( d_{i,j}(k)/\bar{d} - 1 \right) \cos\!\left(\alpha_{ij}(k) + \pi\right) & \text{if } d_{i,j}(k) > \bar{d} \end{cases}$$
$$y_{\gamma_{ij}}(k) = \begin{cases} -k_1 \left( \bar{d}/d_{i,j}(k) - 1 \right) \sin\alpha_{ij}(k) & \text{if } d_{i,j}(k) \le \bar{d} \\ -k_2 \left( d_{i,j}(k)/\bar{d} - 1 \right) \sin\!\left(\alpha_{ij}(k) + \pi\right) & \text{if } d_{i,j}(k) > \bar{d} \end{cases}$$
where $k_1$ and $k_2$ are positive coefficients used to tune the repulsive/attractive components of the overall potential field, so that aircraft i is pushed away from aircraft j when closer than $\bar{d}$ and pulled toward it when farther than $\bar{d}$.
In order to establish a hierarchy in the propagation of the potential field action, a relative position operator $\Xi_{ij}(k)$ is then defined, establishing whether aircraft j is flying ahead of ($\Xi_{ij}(k) = 1$) or behind ($\Xi_{ij}(k) = 0$) aircraft i:
$$\Xi_{ij}(k) = \begin{cases} 1 & \text{if } |\Delta\alpha_{ij}(k)| \le \pi/2 \\ 0 & \text{if } |\Delta\alpha_{ij}(k)| > \pi/2 \end{cases}$$
where
- $\Delta\alpha_{ij}(k) = \lfloor \alpha_{ij}(k) - \psi_i(k) \rfloor_{-\pi}^{\pi}$;
- the operator $\lfloor \cdot \rfloor_{-\pi}^{\pi}$ wraps the angle into the interval $[-\pi, \pi]$;
- $\alpha_{ij}(k) = \left\lfloor \tan^{-1}\dfrac{y_j(k) - y_i(k)}{x_j(k) - x_i(k)} \right\rfloor_{0}^{2\pi}$ is the angle between the vector $P_j(k) - P_i(k)$ and the horizontal axis; and
- $\psi_i(k)$ is the heading of the i-th aircraft.
The final desired direction $\Gamma_i(k) = [x_{\Gamma_i}(k), y_{\Gamma_i}(k)]^T$ of the i-th aircraft is computed as a weighted average of the actions from the visible neighboring aircraft, combined with a potential action driving toward the target point:
$$\Gamma_i(k) = \frac{\sum_{j \in \bar{Q}_i(k)} \Xi_{ij}(k)\,\gamma_{ij}(k)}{\mathrm{card}\big(\bar{Q}_i(k)\big)} + k_{Target}\,\frac{P_f - P_i(k)}{\left\| P_f - P_i(k) \right\|}$$
where $\mathrm{card}(\bar{Q}_i(k))$ represents the cardinality of the set $\bar{Q}_i(k)$. The positive constant $k_{Target}$ is chosen to balance the attraction to the target against all the other actions.
The resulting heading angle is finally computed as follows:
$$\tilde{\psi}_i(k) = \left\lfloor \tan^{-1}\frac{y_{\Gamma_i}(k)}{x_{\Gamma_i}(k)} \right\rfloor_{0}^{2\pi}$$
Additionally, the desired speed $\tilde{V}_i(k)$ is a combination of the reference cruise speed $V_c$ and a repulsive or attractive action from the neighboring vehicles, $k_V > 0$ being a tuning parameter:
$$\tilde{V}_i(k) = V_c + k_V \sum_{j \in \bar{Q}_i(k)} \big( 2\,\Xi_{ij}(k) - 1 \big)\,\frac{d_{i,j}(k) - \bar{d}}{\left| d_{i,j}(k) - \bar{d} \right|}\,\big\| \gamma_{ij}(k) \big\|$$
In this way, vehicle i is attracted toward vehicle j if the mutual distance $d_{i,j}(k) > \bar{d}$ and vice versa.
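The following Python sketch summarizes one possible implementation of the SGA equations above for a single vehicle; the gain values are illustrative placeholders, and the sign convention simply follows the attraction/repulsion behavior stated in the text.

```python
import numpy as np

def sga_step(i, pos, visible, P_f, d_bar, psi_i,
             k1=1.0, k2=1.0, k_target=0.5, k_V=0.3, V_c=15.0):
    """Swarm guidance: desired heading and speed for vehicle i.

    pos: (N, 2) positions; visible: indices of visible Delaunay neighbors of i;
    P_f: target point; d_bar: desired mutual distance; psi_i: current heading.
    Gain values are illustrative placeholders, not taken from the paper.
    """
    gamma_sum = np.zeros(2)
    speed_corr = 0.0
    n_used = max(len(visible), 1)
    for j in visible:
        delta = pos[j] - pos[i]
        d_ij = np.linalg.norm(delta)
        alpha = np.arctan2(delta[1], delta[0])
        # attractive-repulsive action: push away when closer than d_bar,
        # pull towards the neighbour when farther than d_bar
        if d_ij <= d_bar:
            mag, direction = k1 * (d_bar / d_ij - 1.0), alpha + np.pi
        else:
            mag, direction = k2 * (d_ij / d_bar - 1.0), alpha
        gamma = mag * np.array([np.cos(direction), np.sin(direction)])
        # hierarchy: only neighbours flying ahead shape the desired direction
        d_alpha = (alpha - psi_i + np.pi) % (2 * np.pi) - np.pi
        xi = 1.0 if abs(d_alpha) <= np.pi / 2 else 0.0
        gamma_sum += xi * gamma
        # speed correction: slow down / speed up with respect to neighbours
        speed_corr += (2 * xi - 1.0) * np.sign(d_ij - d_bar) * np.linalg.norm(gamma)
    to_target = (P_f - pos[i]) / np.linalg.norm(P_f - pos[i])
    Gamma = gamma_sum / n_used + k_target * to_target
    psi_des = np.arctan2(Gamma[1], Gamma[0])
    V_des = V_c + k_V * speed_corr
    return psi_des, V_des
```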

5. Trajectory Tracking and Collision Avoidance Algorithm

The TTCAA runs on each aircraft i and is composed of the following parts (see Figure 3):
  • A Reference Trajectory Generation (RTG): on the basis of the inputs from the SGA, it computes the reference trajectory for the MPC;
  • A Collision Avoidance Algorithm (CAA): it adds constraints to the MPC problem whenever a potential collision with other vehicles or obstacles is detected;
  • A Model Predictive Control (MPC): it computes the acceleration vector needed to follow the reference trajectory and avoid any potential collision;
  • An MPC Output Post Processing (OPP): it converts the acceleration into the control signals for the FCS, namely the acceleration along the flight trajectory $u_{V_i}$ and the turn rate $u_{\psi_i}$.
According to the MPC strategy, at each time step k, the algorithm computes the optimal control action $u_i^*(k|k)$ as the first element of the optimal sequence $U_i^*(k) = [u_i^{*T}(k|k), u_i^{*T}(k+1|k), \ldots, u_i^{*T}(k+n_c|k)]^T$, obtained by solving an optimal control problem over a finite prediction horizon $[k, k+n_p]$ [82].
Generally speaking, the MPC requires the solution of the following problem.
Problem 1.
Constrained discrete-time control optimization problem. Find the optimal sequence $U_i^*(k)$, composed of $n_c$ control moves with $n_c \le n_p$, such that:
$$U_i^*(k) = \operatorname*{argmin}_{U_i}\; J\big(\xi_i(k|k), U_i(k), k\big)$$
subject to
$$\eta\big(\xi_i(k+p|k), u_i(k+p|k), k\big) \le 0 \quad p \in [0, \ldots, n_p]$$
$$\xi_i(k+p+1|k) = f\big(\xi_i(k+p|k), u_i(k+p|k), k\big) \quad p \in [0, \ldots, n_p], \qquad \xi_i(k|k) = \xi_i(k)$$
where:
- $J(\cdot,\cdot,\cdot)$ is the cost function to be minimized;
- $\xi_i(k+p+1|k)$ is the predicted state at time step $k+p+1$, $p \in [0, \ldots, n_p]$. This is computed using (12), which represents the dynamics of the aircraft, together with the initial condition $\xi_i(k)$, which is measured or estimated;
- $u_i(k+p|k)$ is the control signal at time instant $k+p$, $p \in [0, \ldots, n_p]$;
- Equation (11) defines static constraints involving states and inputs.
In particular, to predict the aircraft state trajectory, a linear kinematic model is considered, whose state-space representation is:
$$\xi_i(k+1) = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \xi_i(k) + \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ \Delta t & 0 \\ 0 & \Delta t \end{bmatrix} u_i(k) = A\,\xi_i(k) + B\,u_i(k)$$
where the state vector $\xi_i(k) = [x_i(k), y_i(k), v_{i,x}(k), v_{i,y}(k)]^T$ is composed of the position $P_i(k) = [x_i(k), y_i(k)]^T$ and the velocity vector $V_i(k) = [v_{i,x}(k), v_{i,y}(k)]^T$ of the i-th aircraft, and $u_i(k) = [a_{i,x}(k), a_{i,y}(k)]^T$ is the acceleration vector that must be optimized in order to allow the aircraft to follow the reference trajectory $\xi_i^{ref}(k) = [x_i^{ref}(k+p+1|k), y_i^{ref}(k+p+1|k), v_{i,x}^{ref}(k+p+1|k), v_{i,y}^{ref}(k+p+1|k)]^T$ ($p = 0, \ldots, n_p$). This trajectory can be computed by integrating the following equations
$$\begin{aligned} x_i^{ref}(k+p+1|k) &= x_i^{ref}(k+p|k) + v_{i,x}^{ref}(k+p|k)\,\Delta t \\ y_i^{ref}(k+p+1|k) &= y_i^{ref}(k+p|k) + v_{i,y}^{ref}(k+p|k)\,\Delta t \\ v_{i,x}^{ref}(k+p+1|k) &= \tilde{V}_i(k)\cos\tilde{\psi}_i(k) \\ v_{i,y}^{ref}(k+p+1|k) &= \tilde{V}_i(k)\sin\tilde{\psi}_i(k) \end{aligned}$$
over the prediction horizon $[k, k+n_p]$, starting from the initial condition $\xi_i^{ref}(k|k) = [x_i(k), y_i(k), \tilde{V}_i(k)\cos\tilde{\psi}_i(k), \tilde{V}_i(k)\sin\tilde{\psi}_i(k)]^T$, where $x_i(k)$ and $y_i(k)$ represent the measured current position, and $\tilde{V}_i(k)$ and $\tilde{\psi}_i(k)$ are provided by the SGA.
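A minimal sketch of this RTG recursion, with a sampling time and horizon length chosen only for illustration, could read as follows.

```python
import numpy as np

def reference_trajectory(P_i, V_des, psi_des, n_p, dt):
    """RTG: integrate the reference recursion over the prediction horizon.

    Returns an (n_p + 1, 4) array of [x, y, vx, vy] reference states starting
    from the measured position P_i and the SGA outputs (V_des, psi_des).
    """
    vx = V_des * np.cos(psi_des)
    vy = V_des * np.sin(psi_des)
    ref = np.zeros((n_p + 1, 4))
    ref[0] = [P_i[0], P_i[1], vx, vy]
    for p in range(n_p):
        ref[p + 1, 0] = ref[p, 0] + vx * dt
        ref[p + 1, 1] = ref[p, 1] + vy * dt
        ref[p + 1, 2:] = vx, vy
    return ref

# Example: 2 s horizon with dt = 0.2 s (illustrative values)
print(reference_trajectory(np.array([0.0, 0.0]), 15.0, np.pi / 4, n_p=10, dt=0.2))
```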
In order to obtain an online numerical solution to the MPC Problem 1 with Quadratic Programming (QP) techniques, the objective function is defined as quadratic in the state and input variables:
$$J\big(\xi_i(k|k), U_i, k\big) = \sum_{p=0}^{n_p} e_{\xi_i}(k+p+1|k)^T\, Q\, e_{\xi_i}(k+p+1|k) + \sum_{p=0}^{n_c} u_i(k+p|k)^T\, R\, u_i(k+p|k)$$
The first term of the objective function J accounts for the difference between the predicted and the reference trajectory, $e_{\xi_i}(\tau|k) = \xi_i(\tau|k) - \xi_i^{ref}(\tau|k)$, whereas the second one aims to limit the control effort. The weighting matrices Q and R are positive definite.
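To show how the quadratic cost and the linear prediction model combine into a standard QP, the following sketch condenses the prediction over the horizon and builds the Hessian and linear term of the objective. It is an unconstrained illustration with $n_c = n_p$ and purely illustrative weights, not the full MIQP solved in the paper.

```python
import numpy as np

def condense_mpc(A, B, Q, R, xi0, ref, n_p):
    """Build H, f such that J = U^T H U + 2 f^T U + const for the model
    xi(k+1) = A xi(k) + B u(k), with U = [u_0; ...; u_{n_p}] (n_c = n_p here).

    ref: (n_p + 1, nx) reference states for steps k+1 ... k+n_p+1.
    """
    nx, nu = B.shape
    N = n_p + 1
    F = np.zeros((N * nx, nx))          # free response:   X = F xi0 + G U
    G = np.zeros((N * nx, N * nu))
    Ap = np.eye(nx)
    for i in range(N):
        Ap = Ap @ A
        F[i * nx:(i + 1) * nx, :] = Ap
        for j in range(i + 1):
            G[i * nx:(i + 1) * nx, j * nu:(j + 1) * nu] = \
                np.linalg.matrix_power(A, i - j) @ B
    Qb = np.kron(np.eye(N), Q)
    Rb = np.kron(np.eye(N), R)
    e0 = F @ xi0 - ref.reshape(-1)      # tracking error of the free response
    H = G.T @ Qb @ G + Rb
    f = G.T @ Qb @ e0
    return H, f

# Double-integrator model of Section 5 with dt = 0.2 s (illustrative values)
dt = 0.2
A = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([np.zeros((2, 2)), dt * np.eye(2)])
Q = np.diag([1.0, 1.0, 0.1, 0.1])
R = 0.01 * np.eye(2)
xi0 = np.array([0.0, 0.0, 10.0, 0.0])
ref = np.tile([1.0, 1.0, 10.0, 0.0], (6, 1))    # n_p = 5 reference states
H, f = condense_mpc(A, B, Q, R, xi0, ref, n_p=5)
U_unconstrained = -np.linalg.solve(H, f)         # optimal moves without constraints
```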
On the other hand, an effort has to be made to formulate the constraints as linear functions of the optimization variables.

5.1. Acceleration and Speed Limits

Control laws must be consistent with the aircraft acceleration and speed limits.
Speed is limited by the following nonlinear constraints:
$$V_{i,min} \le V_i(k+p+1|k) \le V_{i,max} \quad p = 0, \ldots, n_p$$
where $V_{i,min}$ and $V_{i,max}$ are the minimum and maximum allowed speeds and $V_i = \sqrt{v_{i,x}^2 + v_{i,y}^2}$.
Equation (16) defines a set of non-convex and nonlinear constraints. In fact, as shown in Figure 4, the velocity vector always has to be kept within the region between two concentric circles $C_{V_{i,min}}$ and $C_{V_{i,max}}$, with radii equal to $V_{i,min}$ and $V_{i,max}$, respectively. For this reason, $C_{V_{i,max}} \setminus C_{V_{i,min}}$ represents the set of admissible $V_i$.
In order to define a linear and convex constraint, let us consider the two polytopes $P_{V_{i,min}}$ and $P_{V_{i,max}}$ with $n_v \ge 3$ vertices, such that $C_{V_{i,min}}$ is inscribed in $P_{V_{i,min}}$ and $P_{V_{i,max}}$ is inscribed in $C_{V_{i,max}}$ (see Figure 4). Therefore, the set of admissible velocity vectors, defined as $C_{V_{i,max}} \setminus C_{V_{i,min}}$, is approximated by the set difference $P_{V_{i,max}} \setminus P_{V_{i,min}}$.
Let $\mathcal{P}_i = \{P_s^i,\ s = 1, 2, \ldots, n_v\}$ be the set of polytopes needed to define $P_{V_{i,max}} \setminus P_{V_{i,min}}$, where:
$$P_s^i = \left\{ V_i(k+p+1|k) \in \mathbb{R}^2 : \mu_{i,s}^v\, V_i(k+p+1|k) \le \nu_{i,s}^v \right\}$$
with $\mu_{i,s}^v$ and $\nu_{i,s}^v$ a constant matrix and vector, respectively.
Consequently, the set of non-convex linear constraints $V_i(k+p+1|k) \in \mathcal{P}_i$ can be converted into a convex one by using the so-called big-M reformulation [83], which results in a MIQP (Mixed Integer Quadratic Programming) reformulation of the problem. This approach requires increasing the number of optimization variables by introducing $n_v$ additional binary variables $\delta_{i,s}^v(k+p+1|k) \in \{0,1\}$ ($s = 1, 2, \ldots, n_v$) at each time step $k+p$ ($p = 0, \ldots, n_p$). Velocity constraints (16) are then reformulated as follows:
$$\begin{aligned} \mu_{i,1}^v\, V_i(k+p+1|k) &\le \nu_{i,1}^v + M\big(1 - \delta_{i,1}^v(k+p+1|k)\big) \\ \mu_{i,2}^v\, V_i(k+p+1|k) &\le \nu_{i,2}^v + M\big(1 - \delta_{i,2}^v(k+p+1|k)\big) \\ &\;\;\vdots \\ \mu_{i,n_v}^v\, V_i(k+p+1|k) &\le \nu_{i,n_v}^v + M\big(1 - \delta_{i,n_v}^v(k+p+1|k)\big) \\ \sum_{s=1}^{n_v} \delta_{i,s}^v(k+p+1|k) &= 1 \end{aligned} \qquad p = 0, \ldots, n_p$$
where $M = [M_x, M_y]^T$, with $M_x$ and $M_y$ sufficiently large numbers.
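One possible way to build the polytopic data ($\mu_{i,s}^v$, $\nu_{i,s}^v$) and the corresponding big-M disjunction is sketched below; the regular-polygon construction and the value of M are assumptions made only for this example.

```python
import numpy as np

def regular_polygon_halfspaces(radius, n_v, circumscribed=True):
    """Half-space form A v <= b of a regular n_v-gon centred at the origin.

    circumscribed=True  -> polygon contains the circle of given radius
                           (edges tangent to it, used for the inner bound);
    circumscribed=False -> polygon inscribed in the circle (used for the
                           outer bound, so the speed limit is never exceeded).
    """
    ang = np.arange(n_v) * 2 * np.pi / n_v
    A = np.stack([np.cos(ang), np.sin(ang)], axis=1)      # outward edge normals
    b = np.full(n_v, radius if circumscribed else radius * np.cos(np.pi / n_v))
    return A, b

def velocity_bigM_blocks(V_min, V_max, n_v=8, M=1e4):
    """Return (mu_s, nu_s) for each disjunct P_s approximating the admissible
    annulus, plus M, ready to be written as
        mu_s V <= nu_s + M (1 - delta_s),   sum_s delta_s = 1.
    """
    A_out, b_out = regular_polygon_halfspaces(V_max, n_v, circumscribed=False)
    A_in, b_in = regular_polygon_halfspaces(V_min, n_v, circumscribed=True)
    blocks = []
    for s in range(n_v):
        # inside the outer polygon AND outside edge s of the inner polygon
        mu_s = np.vstack([A_out, -A_in[s:s + 1]])
        nu_s = np.concatenate([b_out, [-b_in[s]]])
        blocks.append((mu_s, nu_s))
    return blocks, M

blocks, M = velocity_bigM_blocks(V_min=10.0, V_max=20.0)
print(blocks[0][0].shape)   # (n_v + 1, 2) rows of the first disjunct
```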
On the other hand, acceleration constraints can be defined as follows:
$$\begin{bmatrix} a_{i,min}^{\parallel}(k) \\ a_{i,min}^{\perp}(k) \end{bmatrix} \le \begin{bmatrix} a_i^{\parallel}(k+p|k) \\ a_i^{\perp}(k+p|k) \end{bmatrix} \le \begin{bmatrix} a_{i,max}^{\parallel}(k) \\ a_{i,max}^{\perp}(k) \end{bmatrix} \quad p = 0, \ldots, n_p$$
where the superscripts $\parallel$ and $\perp$ indicate the tangential and normal components to the flight trajectory, respectively. The values of the maximum and minimum accelerations, $a_{i,min}^{\parallel}(k)$, $a_{i,min}^{\perp}(k)$, $a_{i,max}^{\parallel}(k)$ and $a_{i,max}^{\perp}(k)$, are related to the available thrust and the turn rate capabilities of the aircraft. In particular, $a_i^{\parallel}(k)$ is produced by the excess of thrust with respect to the drag force in the wings-level forward flight approximation, whereas the normal acceleration limit is related to the minimum turning radius $r_i$ as follows:
$$a_i^{\perp}(k) \le \frac{V_i(k)^2}{r_i}$$
In order to obtain linear constraints, accelerations have to be translated into the global reference system (flat Earth-fixed reference frame). Let $T_i$ be the rotation matrix from the local reference frame to the global one. Such a matrix is precomputed at time k and assumed constant over the prediction horizon, $T_i(k+p|k) = T_i(k|k)$. Consequently, the acceleration constraints can be formulated as the following set of linear inequalities:
$$T_i(k|k) \begin{bmatrix} a_{i,min}^{\parallel}(k) \\ a_{i,min}^{\perp}(k) \end{bmatrix} \le \begin{bmatrix} a_{i,x}(k+p|k) \\ a_{i,y}(k+p|k) \end{bmatrix} \le T_i(k|k) \begin{bmatrix} a_{i,max}^{\parallel}(k) \\ a_{i,max}^{\perp}(k) \end{bmatrix} \quad p = 0, \ldots, n_p$$
Let us finally define an enlarged vector of design variables $\Omega_i(k) = [u_i(k|k)^T, \ldots, u_i(k+n_c|k)^T, \delta_{i,s}^v(k+1|k), \ldots, \delta_{i,s}^v(k+n_p+1|k)]^T$. Problem 1 is then reformulated as follows:
Problem 2.
At each time step k, find the optimal sequence of design variables $\Omega_i(k)$ that minimizes the cost function (15) in the presence of the linear constraints (18) and (21).
Problem 2 turns out to be a Mixed Integer Quadratic Programming problem, for which efficient numerical solvers are available [84,85].
According to the receding horizon paradigm, the output of the MPC is the control signal at $p = 0$, $u_i^*(k|k) = [a_{i,x}^*(k|k), a_{i,y}^*(k|k)]^T$.
The OPP block computes the speed change $u_{V_i}$ and the turn rate $u_{\psi_i}$ to be passed to the aircraft FCS as follows:
$$u_{V_i}(k) = \frac{v_{i,x}(k)\,a_{i,x}^*(k|k) + v_{i,y}(k)\,a_{i,y}^*(k|k)}{\sqrt{v_{i,x}(k)^2 + v_{i,y}(k)^2}}, \qquad u_{\psi_i}(k) = \frac{v_{i,x}(k)\,a_{i,y}^*(k|k) - v_{i,y}(k)\,a_{i,x}^*(k|k)}{v_{i,x}(k)^2 + v_{i,y}(k)^2}$$
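The OPP conversion can be illustrated with the short sketch below, which projects the optimal acceleration onto the current velocity direction and computes the induced turn rate.

```python
import numpy as np

def opp(v, a_star):
    """Convert the optimal planar acceleration into FCS commands.

    v = [vx, vy] current velocity, a_star = [ax*, ay*] MPC output.
    Returns (u_V, u_psi): acceleration along the path and turn rate.
    """
    vx, vy = v
    ax, ay = a_star
    speed_sq = vx**2 + vy**2
    u_V = (vx * ax + vy * ay) / np.sqrt(speed_sq)   # projection on the velocity
    u_psi = (vx * ay - vy * ax) / speed_sq          # rate of change of heading
    return u_V, u_psi

print(opp(np.array([15.0, 0.0]), np.array([0.5, 1.0])))   # approx (0.5, 0.067)
```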

5.2. Collision Avoidance Algorithm and Constraints

The Collision Avoidance Algorithm (CAA) is designed to guarantee that each aircraft in the formation is able to avoid collision with the other vehicles or with any existing obstacle.
Consider $N_o = N_{ob} + (N-1)$ equivalent obstacles $C_j(k)$ ($N_{ob}$ real obstacles and $N-1$ aircraft of the swarm). Assume that each obstacle j is modeled as a circumference of radius $\bar{r}_j$, centered at the point $\bar{C}_j(k) = [\bar{x}_j(k), \bar{y}_j(k)]^T$.
The CAA verifies whether any distance between the i-th aircraft and the obstacles is less than or equal to a fixed safety distance $d_{safety}$, evaluating the set of possible colliding obstacles $O_i(k)$.
Definition 5.
The set of possible colliding obstacles $O_i(k)$ at time k is defined as:
$$O_i(k) = \left\{ C_j(k) : \left\| P_i(k) - \bar{C}_j(k) \right\| - \bar{r}_j - \bar{r}_i \le d_{safety} \right\}$$
where $\bar{r}_i$ and $\bar{r}_j$ are the equivalent radii of the i-th aircraft and the j-th obstacle, respectively.
In accordance with the reactive scheme [86,87], only the nearest colliding obstacle within the set $O_i(k)$ is taken into account by the MPC [8,78], by adding a limited number of constraints to Problem 2.
Without loss of generality, assume that the forthcoming potential collision for vehicle i is going to occur with aircraft $\bar{l}$, as shown in Figure 5.
For all the time instants in the prediction horizon $k+p$ ($p = 1, \ldots, n_p$), the UAV position is constrained to fall within a region delimited by two straight lines. This guarantees the desired safety distance $d_{safety}$.
The first boundary line $r_i^1(k+p|k)$ is normal to the vector $P_{\bar{l}}(k+p|k) - P_i(k+p|k)$ and tangent to the circumference $C_{\bar{l}}$ of radius $\bar{r}_{\bar{l}}$, centered at the predicted position $P_{\bar{l}}(k+p|k) = [\hat{x}_{\bar{l}}(k+p|k), \hat{y}_{\bar{l}}(k+p|k)]^T$.
Since there is no cooperation between vehicles, the obstacle's future positions $P_{\bar{l}}(k+p|k) = [\hat{x}_{\bar{l}}(k+p|k), \hat{y}_{\bar{l}}(k+p|k)]^T$ over the prediction horizon ($p = 1, \ldots, n_p + 1$) are estimated assuming that both speed and heading are constant, i.e., $V_{\bar{l}}(k+p|k) = V_{\bar{l}}(k|k)$ and $\psi_{\bar{l}}(k+p|k) = \psi_{\bar{l}}(k|k)$.
Line $r_i^1(k+p|k)$ is defined as follows:
$$r_i^1(k+p|k):\quad c_{\alpha,i}(k+p|k)\,\big(x - \hat{x}_{\bar{l}}(k+p|k)\big) + s_{\alpha,i}(k+p|k)\,\big(y - \hat{y}_{\bar{l}}(k+p|k)\big) + \bar{r}_{\bar{l}} = 0$$
with
$$\big[c_{\alpha,i}(k+p|k),\, s_{\alpha,i}(k+p|k)\big]^T = \frac{P_{\bar{l}}(k+p|k) - P_i(k)}{\left\| P_{\bar{l}}(k+p|k) - P_i(k) \right\|}$$
Another line delimiting the region that can be occupied by the aircraft trajectory is the one reported as $r_i^2(k+p|k)$ in Figure 5. This line is tangent to the obstacle $C_{\bar{l}}$ and passes through a point $P_i^{ref*} = P_i^{ref}(k+p^*|k) = \big(x_i^{ref}(k+p^*|k), y_i^{ref}(k+p^*|k)\big)$ on the reference trajectory ($p^* \ge 1$), which is ahead of the current aircraft position. This choice to look forward gives enough space and time to maneuver in order to avoid the forbidden regions.
For each time step $k+p$ ($p = 1, \ldots, n_p$), consider the vector $P_{\bar{l}}(k+p|k) - P_i^{ref}(k+p^*|k)$. Its direction with respect to the x axis is defined by the angle $\beta_{i\bar{l}}(k+p|k)$.
Compute the angles:
$$\phi_i'(k+p|k) = \arcsin\!\left( \bar{r}_{\bar{l}} \,\big/\, \big\| P_{\bar{l}}(k+p|k) - P_i^{ref}(k+p^*|k) \big\| \right)$$
$$\phi_i''(k+p|k) = -\arcsin\!\left( \bar{r}_{\bar{l}} \,\big/\, \big\| P_{\bar{l}}(k+p|k) - P_i^{ref}(k+p^*|k) \big\| \right)$$
The CAA selects the angle $\phi_i(k+p|k) \in \{\phi_i'(k+p|k), \phi_i''(k+p|k)\}$ minimizing the deviation from the desired heading $\tilde{\psi}_i(k)$.
Line $r_i^2(k+p|k)$ is defined as follows:
$$r_i^2(k+p|k):\quad s_{\theta,i}(k+p|k)\,\big(x - x_i^{ref}(k+p^*|k)\big) - c_{\theta,i}(k+p|k)\,\big(y - y_i^{ref}(k+p^*|k)\big) = 0$$
where $c_{\theta,i}(k+p|k) = \cos\theta_i(k+p|k)$ and $s_{\theta,i}(k+p|k) = \sin\theta_i(k+p|k)$, with $\theta_i(k+p|k) = \lfloor \beta_{i\bar{l}}(k+p|k) + \phi_i(k+p|k) \rfloor_{-\pi}^{\pi}$.
It is worth noticing that the constraints defined by $r_i^2(k+p|k)$ are not strictly needed to perform collision avoidance, which could be managed in terms of speed optimization alone. However, this kind of constraint increases the overall performance of the algorithm, implementing the anti-collision action well in advance of the approaching obstacle. For example, without this constraint, in a symmetric condition with the aircraft reference trajectory pointing at the center of the obstacle, the vehicle would tend to decrease its speed to the minimum before performing a turn induced by the potential action spreading from the obstacle.
On the other hand, near the obstacle, this constraint may lead to an infeasible problem, since the aircraft's initial position may not lie in the feasible region. For this reason, the boundary line $r_i^2(k+p|k)$ is deactivated under certain conditions, according to the following activation operator:
$$\Lambda_{i,\bar{l}}(k+p|k) = \begin{cases} 1 & \text{if } \big| \beta_{i\bar{l}}(k+p|k) - \psi_i(k) \big| \le \pi/3 \\ 0 & \text{otherwise} \end{cases}$$
The following inequalities define the admissible region over the prediction horizon $[k, k+n_p]$ as constraints to be added to the MPC problem:
$$c_{\alpha,i}(k+p|k)\,\big(x - \hat{x}_{\bar{l}}(k+p|k)\big) + s_{\alpha,i}(k+p|k)\,\big(y - \hat{y}_{\bar{l}}(k+p|k)\big) + \bar{r}_{\bar{l}} \le 0$$
$$\Lambda_{i,\bar{l}}(k+p|k)\,\Big[ s_{\theta,i}(k+p|k)\,\big(x - x_i^{ref}(k+p^*|k)\big) - c_{\theta,i}(k+p|k)\,\big(y - y_i^{ref}(k+p^*|k)\big) \Big] \le 0$$
for each time step $k+p$, with $p = 0, \ldots, n_p$.
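A possible implementation of the CAA boundary lines is sketched below; it returns half-plane rows for $r_i^1$ and, when activated, $r_i^2$ at a single prediction step. The half-plane orientations follow the reconstruction given above and should be regarded as one consistent sign choice rather than the authors' exact formulation.

```python
import numpy as np

def caa_constraints(P_i, P_ref_star, P_obs, r_obs, psi_des, psi_i):
    """Half-plane coefficients of the boundary lines r1 and (optionally) r2.

    Returns rows (a, b) meaning a . [x, y] <= b for the admissible region.
    P_obs is the obstacle position predicted at constant speed and heading.
    """
    # r1: tangent to the obstacle, normal to the aircraft-obstacle direction
    n = (P_obs - P_i) / np.linalg.norm(P_obs - P_i)
    a1 = n
    b1 = n @ P_obs - r_obs            # i.e. n . (p - P_obs) + r_obs <= 0
    # r2: line through the look-ahead reference point, tangent to the obstacle
    rel = P_obs - P_ref_star
    beta = np.arctan2(rel[1], rel[0])
    phi = np.arcsin(np.clip(r_obs / np.linalg.norm(rel), -1.0, 1.0))
    # pick the tangent direction closest to the desired heading
    cands = [beta + phi, beta - phi]
    theta = min(cands, key=lambda t: abs((psi_des - t + np.pi) % (2 * np.pi) - np.pi))
    a2 = np.array([np.sin(theta), -np.cos(theta)])
    b2 = a2 @ P_ref_star              # one possible side of the line r2
    # deactivate r2 when the obstacle lies far from the current heading direction
    active = abs((beta - psi_i + np.pi) % (2 * np.pi) - np.pi) <= np.pi / 3
    return [(a1, b1)] + ([(a2, b2)] if active else [])

rows = caa_constraints(P_i=np.array([0.0, 0.0]), P_ref_star=np.array([10.0, 0.0]),
                       P_obs=np.array([60.0, 0.0]), r_obs=15.0,
                       psi_des=0.0, psi_i=0.0)
print(rows)
```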

6. Numerical Results

To test and validate the proposed guidance system architecture, several numerical simulations were carried out using an ad-hoc simulator developed in a Matlab/Simulink environment. All of the operational scenarios are compliant with homogeneous swarms of micro/mini quadrotor-type UAVs having 10–15 min of flight endurance [88]. The proposed procedure was implemented using Operator Splitting Quadratic Program (OSQP) [89,90] to solve the MPC optimization problem.
The simulation parameters are presented in Table 1.
The position of each aircraft is monitored, checking whether collisions are avoided and whether the shape of the flight formation is re-established after passing the obstacles. A numerical indicator, namely $d_{mean}$, is also calculated to check the algorithm's effectiveness in keeping the swarm on a triangular mesh. It represents the average value of the distance between every pair of visible aircraft in the Delaunay triangulation and is defined as:
$$d_{mean}(k) = \frac{\sum_{i=1}^{N} \sum_{j \in \bar{Q}_i(k)} d_{i,j}(k)}{\sum_{i=1}^{N} \mathrm{card}\big(\bar{Q}_i(k)\big)}$$
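For reference, the indicator can be computed directly from the visibility sets, as in the short sketch below; each arc contributes twice, once per direction, consistently with the double sum in the definition.

```python
import numpy as np

def d_mean(positions, visible):
    """Average distance over the visible Delaunay arcs.

    visible: {i: set of visible neighbors of i}, e.g. from the SAA sketch.
    Each arc is counted in both directions, as in the double sum above.
    """
    num, den = 0.0, 0
    for i, neigh in visible.items():
        for j in neigh:
            num += np.linalg.norm(positions[i] - positions[j])
            den += 1
    return num / den if den else np.nan

# Example: three vehicles on an (almost) equilateral triangle of side 50 m
P = np.array([[0.0, 0.0], [50.0, 0.0], [25.0, 43.3]])
vis = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(d_mean(P, vis))   # approximately 50.0
```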
To highlight the features of the proposed guidance algorithm, three scenarios were considered. Each scenario lies in a box of 1 × 1 km, simulating an environment with different obstacle arrangement and size.
Results with three, five and 10 flying vehicles are presented in Scenario #1, and 10 vehicles are used in Scenarios #2 and #3.
Aircraft initial conditions were assigned such that the swarm's initial mesh is not already composed of equilateral triangles as desired. Simulations were repeated 10 times for each scenario to prevent the choice of initial conditions from affecting the analysis of the results.

6.1. Scenario #1—Three Aircraft

The first scenario is made up of three circular obstacles of different sizes, placed between the fleet's starting position and the target point $P_f$. Figure 6 shows the resulting trajectories obtained with three aircraft; markers are used to show the vehicle positions at the fixed time instants t = 0 s, t = 30 s, t = 65 s, t = 95 s and t = 160 s, whereas green lines are used to highlight the DTG arcs. Figure 7 shows the distances between vehicle 1 and the other aircraft of the swarm. Solid lines indicate the distances of vehicle 1 from any other visible aircraft; whenever the aircraft are no longer visible, dashed gray lines are used.
During the flight, in the presence of obstacles, the swarm split into two parts (see the aircraft positions at time instants t = 32.9 s, t = 59.7 s and t = 86.5 s in Figure 8). Since aircraft 1 and 2 overcame the obstacles passing on the same side, their mutual distance $d_{1,2}$ remained always close to the desired distance. On the other hand, aircraft 3 passed on the other side of the obstacles and, consequently, the distances $d_{1,3}$ and $d_{2,3}$ greatly deviated from the desired distance $\bar{d}$ near the obstacles. The goal of keeping a uniform distance between aircraft was achieved while avoiding any collision. As we can see, vehicle 2 almost always belonged to $\bar{Q}_1$, the set of aircraft visible from vehicle 1. Vehicle 3, passing on the other side of the obstacles, was not visible from aircraft 1 during some time intervals. However, after passing the obstacles (t > 95.9 s), the DTG was connected again and the aircraft tended to restore an equilateral-triangle-based flight formation with edges equal to the desired distance $\bar{d}$.
Figure 9 shows the mean distance $d_{mean}$. As expected, this distance increased with respect to the desired one only in short time intervals, where the presence of obstacles led the aircraft to deviate more from the desired mutual distances.
Time histories of the control signals $u_\psi$ and $u_V$ are presented in Figure 10. It is worth noticing that, at the beginning of the simulation, the random initial positions of the aircraft required them to accelerate and turn to establish the desired swarm shape. Furthermore, the control signals presented peaks before passing the obstacles, at the activation of the collision avoidance, and after them, in order to re-compose the swarm.

6.2. Scenario #1—Five Aircraft

The goal of keeping a uniform distance between aircraft was also achieved with a higher number of vehicles. Figure 11 shows the resulting trajectories obtained with five aircraft together with their positions (markers) and the DTG edges (green lines) at several time instants ( t = 0 s, t = 34 s, t = 60 s, t = 100 s and t = 160 s).
As shown in Figure 12, near the obstacles the formation split into two parts, passing on opposite sides of the obstacles; e.g., at time instant t = 68.1 s the DTG was not connected and was composed of two distinct sub-graphs made up of vehicles {3, 4, 5} and {1, 2}, respectively.
In Figure 13, the distances between vehicle 1 and the others are shown. Aircraft 4 never belonged to the set of vehicles visible from UAV 1, $\bar{Q}_1$, resulting in a distance $d_{1,4}$ (indicated with a dashed line) that was always greater than the desired distance. On the other hand, aircraft 2 was almost always visible from UAV 1, resulting in a distance approximately equal to the desired one during the flight, except in short time intervals where collision avoidance prevented the vehicles from keeping the desired relative positioning. Vehicle 3 passed on the opposite side of the obstacles with respect to aircraft 1 and tried to re-compose the swarm after them. Finally, at t > 100 s, $d_{1,3}$ clearly tended to the desired distance $\bar{d}$.
It is worth noting that the DTG, and consequently the swarm configuration, changes over time: for instance, at t ≈ 10 s, as shown in Figure 13, aircraft 1 belonged to two triangles, one with vehicles 3 and 5 and the other with vehicles 2 and 3, while at approximately t > 100 s it belonged only to a triangle with vehicles 2 and 3.
Figure 14 shows the mean distance $d_{mean}$ between vehicles over the DTG. Although such a distance showed slight deviations in short time intervals, due to changes in the swarm configuration in the presence of obstacles, it always tended to the desired distance $\bar{d}$.

6.3. Scenario #1—10 Aircraft

Figure 15 shows the trajectories of a swarm of 10 aircraft, highlighting their positions (markers) and edges of DTG (green lines) at several time instants ( t = 0 s, t = 35 s, t = 75 s, t = 110 s and t = 160 s).
In Figure 16, snapshots of the swarm configuration at time instants t = 51 s, t = 63.6 s, t = 73 s, t = 80.2 s, t = 100.9 s and t = 106.6 s are shown. It is worth noting that, although the vehicles passed on both sides of the obstacles, virtually splitting the swarm into two parts, the DTG remained connected, preserving the ability to create a regular mesh also around the obstacles.
In Figure 17, the distances of each vehicle from aircraft 1 are shown, highlighting that UAVs 4, 6, 7, 8, 9 and 10 never belonged to Q ¯ 1 (their distances are represented as colored dashed lines), whereas the remaining aircraft showed a continuously changing swarm configuration, finding an equilibrium only after the last obstacle ( t > 120 s).
Figure 18 shows the distances of each vehicle from aircraft 10. Vehicles 5, 7 and 9 always belonged to Q ¯ 10 (their distances are represented as solid lines), whereas aircraft 4 and 8 belonged to the set of visible neighbors only at the beginning of the simulation.
Looking at Figure 17 and Figure 18, it is clear that the aircraft tried to compose regular triangle-based sub-swarms with their visible neighbors, also during the obstacle avoidance, keeping the mutual distances as close as possible to the desired distance.
Figure 19 presents the average distance between visible aircraft. As shown, aircraft were able to keep a mutual average distance very close to the desired one with slight deviations only when overcoming obstacles.

6.4. Scenario #2—10 Aircraft

Scenario #2 is made up of four larger obstacles placed between the starting and the target points, in order to spread the swarm before a narrow channel that forces the aircraft to re-group. After that, the remaining big obstacle forces the swarm to divide again before reaching the target. This kind of scenario is an interesting test to stress the collision avoidance algorithm and prove the swarm's capability to rendezvous after the obstacles. Any sub-swarm created due to the presence of the obstacles is automatically re-linked to the overall DTG to re-establish the aircraft formation.
Figure 20 shows trajectories of the vehicles, highlighting the positions and the DTG at several time instants ( t = 0 s, t = 100 s, t = 180 s, t = 280 s and t = 350 s).
As expected, after the first obstacle, the swarm divided into four sub-swarms, composed of vehicles { 1 , 3 , 5 , 6 , 10 } , { 7 , 8 , 9 } , { 2 } and { 4 } (see aircraft positions at time instant t = 49.9 s in Figure 21). To pass the channel between the following obstacles the swarm had to re-organize its shape (see the snapshots at time instants t = 89.9 s and t = 121.6 s in Figure 21). After that, another split happened in order to overcome the last obstacle; the new sub-swarms, composed of vehicles { 2 , 4 , 5 , 6 , 7 , 8 } and { 1 , 3 , 9 , 10 } , merged at the end of the obstacle avoidance. The result highlights the freedom of the swarm to compose different sub-swarms without assigning a prescribed structure, which is difficult to obtain using a leader (or virtual leader)–follower paradigm.
Figure 22 and Figure 23 show the aircraft distances from aircraft 1 and 7. A continuous change of swarm configuration can be noticed from the repeated switching between solid and dashed lines.
It is worth noting that, after the obstacles, the swarm tended to a configuration with mutual distances between visible aircraft equal to the desired one; e.g., aircraft 1 created equilateral triangles with aircraft 2, 3 and 4. This trend is confirmed by Figure 24, which shows the average distance between neighboring visible aircraft.

6.5. Scenario #3—10 Aircraft

The third scenario proves the capability of the swarm to rapidly adapt its shape to the environment. The first group of obstacles consists of two large circles, creating a narrow corridor before the third obstacle, which is placed to force the swarm to spread. The second group is composed of an area full of smaller obstacles, forcing the swarm to split into several sub-swarms.
Figure 25 shows trajectories of the vehicles, highlighting their positions and the DTG arcs at several time instants ( t = 0 s, t = 80 s, t = 170 s, and t = 280 s).
At the beginning of the simulation, the swarm was forced to compact itself to pass between two big obstacles simulating the presence of a narrow crossing channel; then, the reduced size of the third obstacle permitted the swarm to recompose a compact shape and pass it, losing the DTG connectivity at some time instants, as reported in Figure 26 at t = 104.8 s. Once the aircraft reached the area full of obstacles, they scattered in order to overcome the obstacles, creating several time-varying sub-swarms, as highlighted in Figure 26 at $t \in [160.8, 196.4]$ s.
This behavior is also confirmed by Figure 27 and Figure 28, which show the distances of each vehicle visible from aircraft 1 and 6, respectively. As shown in Figure 27, in the time interval $t \in [160, 180]$ s, the set of visible aircraft $\bar{Q}_1$ was often empty, leaving the vehicle to fly and overcome the obstacles without any swarm rule.
Finally, the UAVs re-established the formation in the absence of further obstacles. Figure 29 shows the average distance between visible aircraft, with a clear tendency to become equal to the desired distance $\bar{d}$.
During the validation test campaign, the proposed algorithm showed the properties expected and discussed in the introduction. In the absence of obstacles, the aircraft were able to fly together without creating a particular formation shape but maintaining mutual distances equal to the desired one. In the presence of obstacles, such behavior resulted in a compact formation that split only near the obstacles to avoid collisions.
The absence of a fixed structure and a fixed leader or virtual leader makes the procedure really decentralized, lowering typical implementation and management difficulties [37,91]. During the flight, vehicles can freely change the topology of the swarm, creating sub-swarms as needed and re-composing the overall formation when possible.
It is worth noting that the use of the Delaunay triangulation makes the swarm cooperatively solve a particular packing problem [92] without a fixed external boundary. The objective of this packing problem is to find the shape of the swarm that can be exactly divided into equilateral triangles whose vertices represent the vehicle positions. The result tended to a hexagonal packing, which is the most efficient in terms of occupied area [93].
The swarm packing was tested in the absence of obstacles in order to show the packing density, which is defined as follows:
$$\mathrm{density} = \frac{N\,\pi\,\bar{d}^2}{4\,A_c}$$
where A c is the area of the region containing all the circles with a diameter equal to the desired distance. Figure 30 shows that the packing density was almost constant also in the presence of a swarm with a large number of vehicles and tended to be approximately equal to 1. Although in a packing problem the hexagonal packing density represents a maximum, here the circles can be overlapped, i.e., vehicles may have mutual distances shorter than the desired one. For this reason, the density reached values around 1.
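A possible way to estimate the packing density is sketched below; approximating $A_c$ with the area of the convex hull of points sampled on the circles is an assumption made only for this example.

```python
import numpy as np
from scipy.spatial import ConvexHull

def packing_density(positions, d_bar, n_samples=32):
    """Approximate packing density of the formation.

    A_c is approximated by the area of the convex hull of points sampled on
    the circles of diameter d_bar centred at the vehicle positions; this is
    only one possible way to measure the containing region.
    """
    ang = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    ring = 0.5 * d_bar * np.stack([np.cos(ang), np.sin(ang)], axis=1)
    pts = (positions[:, None, :] + ring[None, :, :]).reshape(-1, 2)
    A_c = ConvexHull(pts).volume          # in 2D, .volume is the area
    N = len(positions)
    return N * np.pi * d_bar**2 / 4.0 / A_c

# Example: 3 vehicles on an equilateral triangle with side d_bar
d_bar = 50.0
P = d_bar * np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(packing_density(P, d_bar))
```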
Another important feature is that the proposed approach involves a reasonably low computational burden. As shown in Figure 31, the MPC, which is the most demanding part of the algorithm from a computational point of view, never required more than 100 ms on an Intel i7-based laptop for the proposed numerical simulations. This result suggests the possible implementation of the algorithm even on slower CPU architectures, lowering the guidance algorithm sampling frequency.

7. Conclusions

In this paper, a decentralized guidance system for swarms of aircraft is proposed. It is based on both a Delaunay triangulation topology, to realize a regular mesh of the formation, and a potential field technique to compute the desired position for each aircraft. The tracking of the desired positions and collision avoidance is then based on a receding horizon model predictive control to optimize the overall performance of the swarm. Vehicles need only to exchange information about position, speed and heading, whereas their on-board control algorithms run independently. Since the fleet formation is not pre-assigned, in the presence of obstacles the swarm can split into several independent sub-swarms to avoid collisions. However, far from the obstacles, the proposed approach allows the vehicles to re-join and re-establish the flight formation. The vehicles tend to self-organize the swarm in order to assume a regular triangle-based formation.
Numerical results, involving different scenarios and fleet configurations, showed the effectiveness of the guidance and control system and underlined several features of the proposed technique.
In particular, the algorithm makes the UAV formation easily adaptable to different operational environments by changing the swarm configuration and, consequently, the DTG connectivity over time.
This feature is difficult to obtain using an assigned structure or virtual structure as in a leader–follower paradigm. Furthermore, using our approach, the final configuration represents the solution to an optimization problem similar to the packing problem. The use of triangles and, in particular, Delaunay triangulation, gives the swarm the ability to increase the efficiency in terms of packing density. Finally, in terms of future applications, the relatively low computational burden would allow an implementation of the proposed approach also on an embedded low-cost control board.

Author Contributions

Conceptualization, methodology, investigation, formal analysis, writing, S.R.B., E.D., I.N., L.B. and M.M. Data curation, visualization, software development, S.R.B., E.D. and I.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the REGIONE CAMPANIA, SCAVIR Project “Advanced Configurations Studies for an Innovative Regional Aircraft” CUP B43D18000210007.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Raymer, D. Aircraft Design: A Conceptual Approach; American Institute of Aeronautics and Astronautics, Inc.: Reston, VA, USA, 2012.
  2. Roskam, J. Airplane Design; DARcorporation: Lawrence, KS, USA, 1985.
  3. Gudmundsson, S. General Aviation Aircraft Design: Applied Methods and Procedures; Butterworth-Heinemann: Oxford, UK, 2013.
  4. Valerdi, R. Cost metrics for unmanned aerial vehicles. In Proceedings of the AIAA 16th Lighter-Than-Air Systems Technology Conference and Balloon Systems Conference, Arlington, VA, USA, 26–28 September 2005; p. 7102.
  5. Maza, I.; Ollero, A.; Casado, E.; Scarlatti, D. Classification of Multi-UAV Architectures. In Handbook of Unmanned Aerial Vehicles; Valavanis, K.P., Vachtsevanos, G.J., Eds.; Springer: Dordrecht, The Netherlands, 2015; pp. 953–975.
  6. Kushleyev, A.; Mellinger, D.; Powers, C.; Kumar, V. Towards a swarm of agile micro quadrotors. Auton. Robot. 2013, 35, 287–300.
  7. Dudek, G.; Jenkin, M.; Milios, E.; Wilkes, D. A taxonomy for multi-agent robotics. Auton. Robot. 1996, 3.
  8. Cao, Y.U.; Fukunaga, A.S.; Kahng, A. Cooperative mobile robotics: Antecedents and directions. Auton. Robot. 1997, 4, 7–27.
  9. Schneider-Fontan, M.; Mataric, M. Territorial multi-robot task division. IEEE Trans. Robot. Autom. 1998, 14, 815–822.
  10. Yang, J.; Thomas, A.G.; Singh, S.; Baldi, S.; Wang, X. A semi-physical platform for guidance and formations of fixed-wing unmanned aerial vehicles. Sensors 2020, 20, 1136.
  11. Guo, X.; Lu, J.; Alsaedi, A.; Alsaadi, F.E. Bipartite consensus for multi-agent systems with antagonistic interactions and communication delays. Phys. A Stat. Mech. Its Appl. 2018, 495, 488–497.
  12. Lee, W.; Kim, D. Autonomous shepherding behaviors of multiple target steering robots. Sensors 2017, 17, 2729.
  13. Forestiero, A. Bio-inspired algorithm for outliers detection. Multimed. Tools Appl. 2017, 76, 25659–25677.
  14. Wang, J.; Ahn, I.S.; Lu, Y.; Yang, T.; Staskevich, G. A distributed estimation algorithm for collective behaviors in multiagent systems with applications to unicycle agents. Int. J. Control. Autom. Syst. 2017, 15, 2829–2839.
  15. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Autom. Control 2006, 51, 401–420.
  16. Su, H.; Wang, X.; Lin, Z. Flocking of multi-agents with a virtual leader. IEEE Trans. Autom. Control 2009, 54, 293–307.
  17. Sun, F.; Turkoglu, K. Distributed real-time non-linear receding horizon control methodology for multi-agent consensus problems. Aerosp. Sci. Technol. 2017, 63, 82–90.
  18. Vásárhelyi, G.; Virágh, C.; Somorjai, G.; Tarcai, N.; Szörényi, T.; Nepusz, T.; Vicsek, T. Outdoor flocking and formation flight with autonomous aerial robots. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 3866–3873.
  19. Bennet, D.J.; MacInnes, C.; Suzuki, M.; Uchiyama, K. Autonomous three-dimensional formation flight for a swarm of unmanned aerial vehicles. J. Guid. Control. Dyn. 2011, 34, 1899–1908.
  20. Varela, G.; Caamaño, P.; Orjales, F.; Deibe, A.; Lopez-Pena, F.; Duro, R.J. Swarm intelligence based approach for real time UAV team coordination in search operations. In Proceedings of the IEEE 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011; pp. 365–370.
  21. Alfeo, A.L.; Cimino, M.G.; De Francesco, N.; Lazzeri, A.; Lega, M.; Vaglini, G. Swarm coordination of mini-UAVs for target search using imperfect sensors. Intell. Decis. Technol. 2018, 12, 149–162.
  22. Saska, M.; Vonásek, V.; Chudoba, J.; Thomas, J.; Loianno, G.; Kumar, V. Swarm distribution and deployment for cooperative surveillance by micro-aerial vehicles. J. Intell. Robot. Syst. 2016, 84, 469–492.
  23. Acevedo, J.J.; Arrue, B.C.; Maza, I.; Ollero, A. Cooperative large area surveillance with a team of aerial mobile robots for long endurance missions. J. Intell. Robot. Syst. 2013, 70, 329–345.
  24. Renzaglia, A.; Doitsidis, L.; Chatzichristofis, S.A.; Martinelli, A.; Kosmatopoulos, E.B. Distributed Multi-Robot Coverage using Micro Aerial Vehicles. In Proceedings of the 21st Mediterranean Conference on Control and Automation, Chania, Greece, 25–28 June 2013; pp. 963–968.
  25. Garcia-Aunon, P.; del Cerro, J.; Barrientos, A. Behavior-Based Control for an Aerial Robotic Swarm in Surveillance Missions. Sensors 2019, 19, 4584.
  26. Kosuge, K.; Sato, M. Transportation of a single object by multiple decentralized-controlled nonholonomic mobile robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’99), Kyongju, Korea, 17–21 October 1999; Volume 3, pp. 1681–1686.
  27. Tartaglione, G.; D’Amato, E.; Ariola, M.; Rossi, P.S.; Johansen, T.A. Model predictive control for a multi-body slung-load system. Robot. Auton. Syst. 2017, 92, 1–11.
  28. Bernard, M.; Kondak, K. Generic slung load transportation system using small size helicopters. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 3258–3264.
  29. Michael, N.; Fink, J.; Kumar, V. Cooperative manipulation and transportation with aerial robots. Auton. Robot. 2011, 30, 73–86.
  30. Palunko, I.; Cruz, P.; Fierro, R. Agile Load Transportation: Safe and Efficient Load Manipulation with Aerial Robots. IEEE Robot. Autom. Mag. 2012, 19, 69–79.
  31. Bangash, Z.A.; Sanchez, R.P.; Ahmed, A.; Khan, M.J. Aerodynamics of Formation Flight. J. Aircr. 2006, 43, 907–912.
  32. Ariola, M.; Mattei, M.; D’Amato, E.; Notaro, I.; Tartaglione, G. Model predictive control for a swarm of fixed wing uavs. In Proceedings of the 30th Congress of the International Council of the Aeronautical Sciences (ICAS), Daejeon, Korea, 25–30 September 2016.
  33. Yun, B.; Chen, B.M.; Lum, K.Y.; Lee, T.H. Design and implementation of a leader-follower cooperative control system for unmanned helicopters. J. Control Theory Appl. 2010, 8, 61–68.
  34. Bayraktar, S.; Fainekos, G.; Pappas, G. Experimental cooperative control of fixed-wing unmanned aerial vehicles. In Proceedings of the 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No.04CH37601), Nassau, Bahamas, 14–17 December 2004; Volume 4, pp. 4292–4298.
  35. Paul, T.; Krogstad, T.R.; Gravdahl, J.T. Modelling of UAV formation flight using 3D potential field. Simul. Model. Pract. Theory 2008, 16, 1453–1462.
  36. Dasgupta, P. A Multiagent Swarming System for Distributed Automatic Target Recognition Using Unmanned Aerial Vehicles. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2008, 38, 549–563.
  37. D’Amato, E.; Mattei, M.; Notaro, I. Bi-level flight path planning of UAV formations with collision avoidance. J. Intell. Robot. Syst. 2019, 93, 193–211.
  38. Oh, K.K.; Park, M.C.; Ahn, H.S. A survey of multi-agent formation control. Automatica 2015, 53, 424–440. [Google Scholar] [CrossRef]
  39. Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 27–31 July 1987; pp. 25–34. [Google Scholar]
  40. Bellingham, J.S.; Tillerson, M.; Alighanbari, M.; How, J.P. Cooperative path planning for multiple UAVs in dynamic and uncertain environments. In Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, NV, USA, 10–13 December 2002; Volume 3, pp. 2816–2822. [Google Scholar]
  41. Garrido, S.; Moreno, L.; Lima, P.U. Robot formation motion planning using fast marching. Robot. Auton. Syst. 2011, 59, 675–683. [Google Scholar] [CrossRef] [Green Version]
  42. Gómez, J.V.; Lumbier, A.; Garrido, S.; Moreno, L. Planning robot formations with fast marching square including uncertainty conditions. Robot. Auton. Syst. 2013, 61, 137–152. [Google Scholar] [CrossRef] [Green Version]
  43. Yang, T.T.; Liu, Z.Y.; Chen, H.; Pei, R. Formation control of mobile robots: State and open problems. Zhineng Xitong Xuebao (CAAI Trans. Intell. Syst.) 2007, 2, 21–27. [Google Scholar]
  44. Peng, Z.; Wen, G.; Rahmani, A.; Yu, Y. Leader–follower formation control of nonholonomic mobile robots based on a bioinspired neurodynamic based approach. Robot. Auton. Syst. 2013, 61, 988–996. [Google Scholar] [CrossRef]
  45. Consolini, L.; Morbidi, F.; Prattichizzo, D.; Tosques, M. Leader–follower formation control of nonholonomic mobile robots with input constraints. Automatica 2008, 44, 1343–1349. [Google Scholar] [CrossRef]
  46. Tan, K.H.; Lewis, M.A. Virtual structures for high-precision cooperative mobile robotic control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’96), Osaka, Japan, 8 November 1996; Volume 1, pp. 132–139. [Google Scholar]
  47. Lewis, M.A.; Tan, K.H. High precision formation control of mobile robots using virtual structures. Auton. Robot. 1997, 4, 387–403. [Google Scholar] [CrossRef]
  48. Balch, T.; Arkin, R.C. Behavior-based formation control for multirobot teams. IEEE Trans. Robot. Autom. 1998, 14, 926–939. [Google Scholar] [CrossRef] [Green Version]
  49. Balch, T.; Hybinette, M. Behavior-based coordination of large-scale robot formations. In Proceedings of the Fourth International Conference on MultiAgent Systems, Boston, MA, USA, 10–12 July 2000; pp. 363–364. [Google Scholar]
  50. Balch, T.; Hybinette, M. Social potentials for scalable multi-robot formations. In Proceedings of the Millennium Conference, IEEE International Conference on Robotics and Automation, Symposia Proceedings (Cat. No. 00CH37065) (2000 ICRA), San Francisco, CA, USA, 24–28 April 2000; Volume 1, pp. 73–80. [Google Scholar]
  51. Marino, A.; Parker, L.E.; Antonelli, G.; Caccavale, F. A decentralized architecture for multi-robot systems based on the null-space-behavioral control with application to multi-robot border patrolling. J. Intell. Robot. Syst. 2013, 71, 423–444. [Google Scholar] [CrossRef]
  52. Baizid, K.; Giglio, G.; Pierri, F.; Trujillo, M.A.; Antonelli, G.; Caccavale, F.; Viguria, A.; Chiaverini, S.; Ollero, A. Behavioral control of unmanned aerial vehicle manipulator systems. Auton. Robot. 2017, 41, 1203–1220. [Google Scholar] [CrossRef] [Green Version]
  53. Barve, A.; Nene, M.J. Survey of Flocking Algorithms in multi-agent Systems. Int. J. Comput. Sci. Issues (IJCSI) 2013, 10, 110. [Google Scholar]
  54. Chung, S.J.; Paranjape, A.A.; Dames, P.; Shen, S.; Kumar, V. A survey on aerial swarm robotics. IEEE Trans. Robot. 2018, 34, 837–855. [Google Scholar] [CrossRef] [Green Version]
  55. Vásárhelyi, G.; Virágh, C.; Somorjai, G.; Nepusz, T.; Eiben, A.E.; Vicsek, T. Optimized flocking of autonomous drones in confined environments. Sci. Robot. 2018, 3, eaat3536. [Google Scholar] [CrossRef] [Green Version]
  56. Cordeiro, T.F.; Ferreira, H.C.; Ishihara, J.Y. Robust and Synchronous Nonlinear Controller for Autonomous Formation Flight of Fixed Wing UASs. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; pp. 378–387. [Google Scholar]
  57. Singh, S.N.; Zhang, R.; Chandler, P.; Banda, S. Decentralized nonlinear robust control of UAVs in close formation. Int. J. Robust Nonlinear Control. IFAC-Affil. J. 2003, 13, 1057–1078. [Google Scholar] [CrossRef]
  58. Campa, G.; Gu, Y.; Seanor, B.; Napolitano, M.R.; Pollini, L.; Fravolini, M.L. Design and flight-testing of non-linear formation control laws. Control Eng. Pract. 2007, 15, 1077–1092. [Google Scholar] [CrossRef]
  59. Cordeiro, T.F.; Ferreira, H.C.; Ishihara, J.Y. Non linear controller and path planner algorithm for an autonomous variable shape formation flight. In Proceedings of the IEEE 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 1493–1502. [Google Scholar]
  60. Rezaee, H.; Abdollahi, F.; Talebi, H.A. H Based Motion Synchronization in Formation Flight With Delayed Communications. IEEE Trans. Ind. Electron. 2014, 61, 6175–6182. [Google Scholar] [CrossRef]
  61. Han, K.; Lee, J.; Kim, Y. Unmanned aerial vehicle swarm control using potential functions and sliding mode control. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2008, 222, 721–730. [Google Scholar] [CrossRef]
  62. Quintero, S.A.; Collins, G.E.; Hespanha, J.P. Flocking with fixed-wing UAVs for distributed sensing: A stochastic optimal control approach. In Proceedings of the IEEE 2013 American Control Conference, Washington, DC, USA, 17–19 June 2013; pp. 2025–2031. [Google Scholar]
  63. Zhihao, C.; Longhong, W.; Jiang, Z.; Kun, W.; Yingxun, W. Virtual target guidance-based distributed model predictive control for formation control of multiple UAVs. Chin. J. Aeronaut. 2020, 33, 1037–1056. [Google Scholar]
  64. Xi, W.; Baras, J.S. MPC based motion control of car-like vehicle swarms. In Proceedings of the IEEE 2007 Mediterranean Conference on Control & Automation, Athens, Greece, 27–29 June 2007; pp. 1–6. [Google Scholar]
  65. Peng, Z.; Li, B.; Chen, X.; Wu, J. Online route planning for UAV based on model predictive control and particle swarm optimization algorithm. In Proceedings of the IEEE 10th World Congress on Intelligent Control and Automation, Beijing, China, 6–8 July 2012; pp. 397–401. [Google Scholar]
  66. Zhang, D.; Xie, G.; Yu, J.; Wang, L. Adaptive task assignment for multiple mobile robots via swarm intelligence approach. Robot. Auton. Syst. 2007, 55, 572–588. [Google Scholar] [CrossRef]
  67. Altshuler, Y.; Yanovsky, V.; Wagner, I.A.; Bruckstein, A.M. Efficient cooperative search of smart targets using uav swarms. Robotica 2008, 26, 551. [Google Scholar] [CrossRef] [Green Version]
  68. Najm, A.A.; Ibraheem, I.K.; Azar, A.T.; Humaidi, A.J. Genetic Optimization-Based Consensus Control of Multi-Agent 6-DoF UAV System. Sensors 2020, 20, 3576. [Google Scholar] [CrossRef]
  69. Duan, H.; Luo, Q.; Shi, Y.; Ma, G. Hybrid particle swarm optimization and genetic algorithm for multi-UAV formation reconfiguration. IEEE Comput. Intell. Mag. 2013, 8, 16–27. [Google Scholar] [CrossRef]
  70. Bai, C.; Duan, H.; Li, C.; Zhang, Y. Dynamic multi-UAVs formation reconfiguration based on hybrid diversity-PSO and time optimal control. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 775–779. [Google Scholar]
  71. Li, X.; Chen, H. An interactive control algorithm used for equilateral triangle formation with robotic sensors. Sensors 2014, 14, 7229–7247. [Google Scholar] [CrossRef] [Green Version]
  72. Li, X. A Triangular Formation Strategy for Collective Behaviors of Robot Swarm. In Computational Science and Its Applications—ICCSA 2009; Gervasi, O., Taniar, D., Murgante, B., Laganà, A., Mun, Y., Gavrilova, M.L., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5592, pp. 897–911. [Google Scholar] [CrossRef]
  73. Elkaim, G.H.; Kelbley, R.J. A lightweight formation control methodology for a swarm of non-holonomic vehicles. In Proceedings of the 2006 IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2006. [Google Scholar]
  74. Defoort, M.; Polyakov, A.; Demesure, G.; Djemai, M.; Veluvolu, K. Leader-follower fixed-time consensus for multi-agent systems with unknown non-linear inherent dynamics. IET Control Theory Appl. 2015, 9, 2165–2170. [Google Scholar] [CrossRef] [Green Version]
  75. Ying, Z.; Xu, L. Leader-follower formation control and obstacle avoidance of multi-robot based on artificial potential field. In Proceedings of the IEEE 27th Chinese Control and Decision Conference (2015 CCDC), Qingdao, China, 23–25 May 2015; pp. 4355–4360. [Google Scholar]
  76. Barnes, L.; Fields, M.; Valavanis, K. Unmanned ground vehicle swarm formation control using potential fields. In Proceedings of the 2007 Mediterranean Conference on Control Automation, Athens, Greece, 27–29 June 2007; pp. 1–8. [Google Scholar] [CrossRef]
  77. Beaulieu, A.; Givigi, S.N.; Ouellet, D.; Turner, J.T. Model-Driven Development Architectures to Solve Complex Autonomous Robotics Problems. IEEE Syst. J. 2018, 12, 1404–1413. [Google Scholar] [CrossRef]
  78. D’Amato, E.; Mattei, M.; Notaro, I. Distributed Reactive Model Predictive Control for Collision Avoidance of Unmanned Aerial Vehicles in Civil Airspace. J. Intell. Robot. Syst. 2020, 97, 185–203. [Google Scholar] [CrossRef]
  79. Sloan, S. A fast algorithm for constructing Delaunay triangulations in the plane. Adv. Eng. Softw. (1978) 1987, 9, 34–55. [Google Scholar] [CrossRef]
  80. Kownacki, C.; Ambroziak, L. Local and asymmetrical potential field approach to leader tracking problem in rigid formations of fixed-wing UAVs. Aerosp. Sci. Technol. 2017, 68, 465–474. [Google Scholar] [CrossRef]
  81. Sun, J.; Tang, J.; Lao, S. Collision avoidance for cooperative UAVs with optimized artificial potential field algorithm. IEEE Access 2017, 5, 18382–18390. [Google Scholar] [CrossRef]
  82. Rawlings, J.B.; Mayne, D.Q. Model predictive control: Theory and design. Nob Hill Pub. 2009. [Google Scholar]
  83. Griva, I.; Nash, S.G.; Sofer, A. Linear and Nonlinear Optimization; Siam: Philadelphia, PA, USA, 2009; Volume 108. [Google Scholar]
  84. Lazimy, R. Mixed-integer quadratic programming. Math. Program. 1982, 22, 332–349. [Google Scholar] [CrossRef]
  85. Takapoui, R.; Moehle, N.; Boyd, S.; Bemporad, A. A simple effective heuristic for embedded mixed-integer quadratic programming. Int. J. Control 2020, 93, 2–12. [Google Scholar] [CrossRef]
  86. D’Amato, E.; Notaro, I.; Mattei, M. Reactive Collision Avoidance using Essential Visibility Graphs. In Proceedings of the 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT), Paris, France, 23–26 April 2019; pp. 522–527. [Google Scholar]
  87. Huang, H.; Savkin, A.; Li, X. Reactive Autonomous Navigation of UAVs for Dynamic Sensing Coverage of Mobile Ground Targets. Sensors 2020, 20, 3720. [Google Scholar] [CrossRef]
  88. Moschetta, J.M. The aerodynamics of micro air vehicles: Technical challenges and scientific issues. Int. J. Eng. Syst. Model. Simul. 48 2014, 6, 134–148. [Google Scholar] [CrossRef]
  89. Stellato, B.; Banjac, G.; Goulart, P.; Bemporad, A.; Boyd, S. OSQP: An Operator Splitting Solver for Quadratic Programs. Math. Program. Comput. 2020. [Google Scholar] [CrossRef] [Green Version]
  90. Stellato, B.; Naik, V.V.; Bemporad, A.; Goulart, P.; Boyd, S. Embedded mixed-integer quadratic optimization using the OSQP solver. In Proceedings of the European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018. [Google Scholar] [CrossRef]
  91. Perez-Montenegro, C.; Scanavino, M.; Bloise, N.; Capello, E.; Guglieri, G.; Rizzo, A. A mission coordinator approach for a fleet of uavs in urban scenarios. Transp. Res. Procedia 2018, 35, 110–119. [Google Scholar] [CrossRef]
  92. Pach, J.; Agarwal, P.K. Combinatorial Geometry; John Wiley & Sons: Hoboken, NJ, USA, 2011; Volume 37. [Google Scholar]
  93. Chang, H.C.; Wang, L.C. A simple proof of Thue’s Theorem on circle packing. arXiv 2010, arXiv:1009.4322. [Google Scholar]
Figure 1. On-board guidance and control algorithm architecture (time information has been omitted).
Figure 2. Reference angles for potential field computation. The red arrow represents the i-th vehicle's flight direction. The repulsive force exerted by vehicle j drives vehicle i to stay on the circumference with radius equal to the desired distance d̄.
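To make the distance-keeping idea of Figure 2 concrete, the following Python sketch computes a radial force that pushes vehicle i back onto the circle of radius d̄ around neighbor j: repulsive when the pair is too close, attractive when it is too far. The linear force law, the gain and the function name are illustrative assumptions, not the paper's exact potential.

```python
import numpy as np

def distance_keeping_force(p_i, p_j, d_des, gain=1.0):
    """Illustrative radial force driving vehicle i toward the circle of
    radius d_des centered on neighbor j (repulsive inside, attractive outside).
    Generic potential-field sketch, not the paper's exact control law."""
    delta = p_i - p_j                 # vector from j to i
    dist = np.linalg.norm(delta)
    if dist < 1e-6:                   # avoid division by zero for coincident points
        return np.zeros(2)
    direction = delta / dist          # unit vector pointing away from j
    # Positive magnitude pushes i away from j (dist < d_des),
    # negative magnitude pulls i back toward the circle (dist > d_des).
    magnitude = gain * (d_des - dist)
    return magnitude * direction

# Example: neighbor at the origin, vehicle 30 m away, desired distance 40 m
f = distance_keeping_force(np.array([30.0, 0.0]), np.zeros(2), d_des=40.0)
print(f)   # points away from the neighbor, since the pair is closer than 40 m
```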
Figure 3. Trajectory Tracking and Collision Avoidance Algorithm.
Figure 4. Velocity vector domain and its polytopic approximation. Blue circles represent the maximum and minimum allowed speed.
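The polytopic approximation of the velocity domain in Figure 4 can be mimicked with a handful of linear inequalities. In the sketch below the maximum-speed circle is replaced by an inscribed regular polygon and the minimum speed is enforced only along the current heading; the number of sides and the handling of the non-convex inner bound are assumptions of this example, not necessarily the construction used in the paper.

```python
import numpy as np

def velocity_polytope(heading, v_min=3.0, v_max=10.0, n_sides=8):
    """Return (A, b) with A @ v <= b approximating the annular speed set of
    Figure 4. Assumptions: the V_max circle is replaced by an inscribed
    regular polygon and V_min is imposed only along the current heading."""
    h = np.array([np.cos(heading), np.sin(heading)])
    thetas = np.linspace(0.0, 2.0 * np.pi, n_sides, endpoint=False)
    A_max = np.column_stack([np.cos(thetas), np.sin(thetas)])   # face normals
    b_max = np.full(n_sides, v_max * np.cos(np.pi / n_sides))   # inscribed polygon
    A_min = -h.reshape(1, 2)                                    # h . v >= v_min
    b_min = np.array([-v_min])
    return np.vstack([A_max, A_min]), np.concatenate([b_max, b_min])

A, b = velocity_polytope(heading=0.0)
v = np.array([5.0, 0.0])            # cruise speed along the heading
print(np.all(A @ v <= b))           # True: v lies inside the polytope
```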
Figure 5. Collision avoidance constraints (time information has been omitted). The obstacle is described by a circumference centered at the point P_j; the tangent lines r_i,1 and r_i,2 define the admissible region for the i-th vehicle's future position.
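The tangent-line construction of Figure 5 turns obstacle avoidance into linear constraints on the predicted positions. The snippet below computes the two tangent lines r_i,1 and r_i,2 from the current position to the obstacle circle and returns them as half-planes whose feasible side excludes the obstacle; how one of the two half-planes is selected at each MPC step is left open and is an assumption of the sketch.

```python
import numpy as np

def tangent_halfplanes(p_i, p_obs, r_obs):
    """Compute the two tangent lines from position p_i to the obstacle circle
    (center p_obs, radius r_obs), each expressed as a half-plane
    n . (x - p_i) >= 0 whose feasible side excludes the circle.
    Geometric sketch of the construction in Figure 5."""
    d_vec = p_obs - p_i
    d = np.linalg.norm(d_vec)
    if d <= r_obs:
        raise ValueError("vehicle already inside the obstacle circle")
    alpha = np.arcsin(r_obs / d)          # half-aperture of the tangent cone
    u = d_vec / d                         # unit vector from vehicle to obstacle
    halfplanes = []
    for s in (+1.0, -1.0):
        rot = np.array([[np.cos(s * alpha), -np.sin(s * alpha)],
                        [np.sin(s * alpha),  np.cos(s * alpha)]])
        t = rot @ u                       # direction of tangent line r_i,1 / r_i,2
        n = np.array([-t[1], t[0]])       # unit normal to the tangent line
        if n @ d_vec > 0:                 # flip so the obstacle lies on n.(x-p_i) <= 0
            n = -n
        halfplanes.append(n)
    return halfplanes                     # feasible positions satisfy n.(x-p_i) >= 0

n1, n2 = tangent_halfplanes(np.array([0.0, 0.0]), np.array([50.0, 0.0]), r_obs=20.0)
print(n1 @ np.array([50.0, 0.0]))         # negative: obstacle center is excluded
```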
Figure 6. Scenario #1—three aircraft: UAVs’ trajectories. Red circles represent the obstacles; colored markers indicate positions at different time instants; green straight lines highlight DTG arcs.
Figure 7. Scenario #1—three aircraft: distances between UAV 1 and the other swarm vehicles. Solid lines are used for visible aircraft and dashed gray lines are employed whenever they are no longer visible. The desired distance is highlighted using a dotted red line.
Figure 8. Scenario #1—three aircraft: snapshots of UAVs’ positions at different time instants. Colored markers indicate UAVs’ actual positions; straight green lines show the DTG arcs. Several swarm configurations during obstacle avoidance are highlighted.
Figure 9. Scenario #1—three aircraft: mean distance between visible aircraft (blue solid line) compared with desired distance (red dotted line).
Figure 10. Scenario #1—three aircraft: control signal computed by TTCAA.
Figure 11. Scenario #1—five aircraft: UAVs’ trajectories. Red circles represent obstacles; colored markers indicate positions at different time instants; green straight lines highlight DTG arcs.
Figure 12. Scenario #1—five aircraft: snapshots of UAVs’ positions at different time instants. Colored markers indicate UAVs’ actual positions; straight green lines show the DTG arcs. During obstacle avoidance the DTG may become disconnected, separating the formation into two distinct sub-graphs.
Figure 13. Scenario #1—five aircraft: distances between UAV 1 and the other swarm vehicles. Solid lines are used for visible aircraft and dashed gray lines are employed whenever they are no longer visible, whereas dashed colored lines are used for never-visible UAVs. The desired distance is highlighted using a dotted red line.
Figure 14. Scenario #1—five aircraft: mean distance between visible aircraft (blue solid line) compared with desired distance (dotted red line).
Figure 15. Scenario #1—10 aircraft: UAV trajectories. Red circles represent obstacles; colored markers indicate positions at different time instants; green straight lines highlight DTG arcs.
Figure 16. Scenario #1—10 aircraft: snapshots of aircraft positions at different time instants. Colored markers indicate UAVs’ current positions; straight green lines show the arcs of the modified DTG. Chosen time instants highlight swarm connectivity during obstacle avoidance.
Figure 17. Scenario #1—10 aircraft: distances between UAV 1 and the other swarm vehicles. Solid lines are used for visible aircraft and dashed gray lines are employed whenever they are no longer visible, whereas dashed colored lines are used for never-visible UAVs. The desired distance is highlighted using a dotted red line.
Figure 18. Scenario #1—10 aircraft: distances between UAV 10 and the other swarm vehicles. Solid lines are used for visible aircraft and dashed gray lines are employed whenever they are no longer visible, whereas dashed colored lines are used for never-visible UAVs. The desired distance is highlighted using a dotted red line.
Figure 19. Scenario #1—10 aircraft: mean distance between visible aircraft (blue solid line), compared with desired distance (red dotted line).
Figure 20. Scenario #2—10 aircraft: UAV trajectories. Red circles represent obstacles; colored markers indicate positions at different time instants; green straight lines highlight DTG arcs.
Figure 21. Scenario #2—10 aircraft: snapshots of UAVs’ positions at different time instants. Colored markers indicate UAVs’ actual positions; straight green lines show the DTG arcs. Chosen time instants highlight how the swarm is able to adapt its shape to the environment. At time instants t = 49.9 s and t = 210.9 s the DTG was not connected, and the swarm split into several sub-swarms.
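The sub-swarm splitting shown in Figure 21 corresponds to the DTG losing connectivity. A minimal way to detect it is to prune Delaunay arcs longer than a visibility range d_max (a parameter introduced here only for illustration) and extract the connected components of the remaining graph, as in the following sketch.

```python
import numpy as np
from scipy.spatial import Delaunay

def dtg_subswarms(positions, d_max):
    """Build the Delaunay triangulation of the current positions, keep only
    arcs shorter than d_max (assumed pruning rule), and return the connected
    components of the resulting graph, i.e., the sub-swarms."""
    tri = Delaunay(positions)
    n = len(positions)
    adj = [set() for _ in range(n)]
    for simplex in tri.simplices:                       # collect surviving edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = simplex[a], simplex[b]
            if np.linalg.norm(positions[i] - positions[j]) <= d_max:
                adj[i].add(j)
                adj[j].add(i)
    seen, components = set(), []
    for start in range(n):                              # depth-first search
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v] - seen)
        components.append(sorted(comp))
    return components

pos = np.array([[0, 0], [40, 0], [20, 35], [300, 0], [340, 0]], dtype=float)
print(dtg_subswarms(pos, d_max=100.0))   # two sub-swarms: [0, 1, 2] and [3, 4]
```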
Figure 22. Scenario #2—10 aircraft: distances between UAV 1 and the other swarm vehicles. Solid lines are used for visible aircraft and dashed gray lines are employed whenever they are no longer visible, whereas dashed colored lines are used for never-visible UAVs. The desired distance is highlighted using a dotted red line.
Figure 23. Scenario #2—10 aircraft: distances between UAV 7 and the other swarm vehicles. Solid lines are used for visible aircraft and dashed gray lines are employed whenever they are no longer visible, whereas dashed colored lines are used for never-visible UAVs. The desired distance is highlighted using a dotted red line.
Figure 24. Scenario #2—10 aircraft: mean distance between visible aircraft (blue solid line) compared with desired distance (red dotted line).
Figure 25. Scenario #3—10 aircraft: UAV trajectories. Red circles represent obstacles; colored markers indicate the UAVs’ positions at different time instants; green straight lines highlight DTG arcs.
Figure 26. Scenario #3—10 aircraft: snapshots of UAV positions at different time instants. Colored markers indicate UAVs’ actual positions; straight green lines show the DTG arcs. Snapshots highlight how the DTG connectivity rapidly changes in the presence of obstacles.
Figure 27. Scenario #3—10 aircraft: distances between UAV 1 and the other swarm vehicles. Solid lines are used for visible aircraft and dashed gray lines are employed whenever they are no longer visible, whereas dashed colored lines are used for never-visible UAVs. The desired distance is highlighted using a dotted red line.
Figure 28. Scenario #3—10 aircraft: distances between UAV 6 and the other swarm vehicles. Solid lines are used for visible aircraft and dashed gray lines are employed whenever they are no longer visible, whereas dashed colored lines are used for never-visible UAVs. The desired distance is highlighted using a dotted red line.
Figure 29. Scenario #3—10 aircraft: mean distance between visible aircraft (blue solid line) compared with desired distance (red dotted line).
Figure 30. Minimum (blue bars) and maximum (red bars) swarm packing densities in the absence of obstacles. n_Δ denotes the number of triangles in the DTG. Values close to 1 highlight the efficiency of the Delaunay-based swarm guidance strategy.
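The quantity n_Δ in Figure 30 is simply the number of triangles in the DTG. The sketch below counts those triangles with a standard Delaunay routine and compares the mean arc length with the desired distance d̄; this ratio is only an illustrative regularity check and is not claimed to be the packing-density metric plotted in the figure.

```python
import numpy as np
from scipy.spatial import Delaunay

def dtg_regularity(positions, d_des=40.0):
    """Count the DTG triangles and compare the DTG arc lengths with the
    desired distance d_des. Illustrative check only; the packing-density
    metric of Figure 30 is defined in the paper and may differ."""
    tri = Delaunay(positions)
    edges = set()
    for s in tri.simplices:
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((s[a], s[b]))))
    lengths = np.array([np.linalg.norm(positions[i] - positions[j]) for i, j in edges])
    return len(tri.simplices), lengths.mean() / d_des

# Three UAVs sitting on an equilateral triangle with 40 m sides
pos = np.array([[0.0, 0.0], [40.0, 0.0], [20.0, 20.0 * np.sqrt(3)]])
n_tri, ratio = dtg_regularity(pos)
print(n_tri, round(ratio, 3))   # 1 triangle, ratio 1.0 for a perfectly regular mesh
```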
Figure 31. Occurrences of the MPC computational time for the proposed simulations.
Table 1. Simulation parameters.

Description | Value
Cruise speed V_c [m/s] | 5
Minimum speed V_min [m/s] | 3
Maximum speed V_max [m/s] | 10
Minimum normal acceleration a_min [m/s²] | -5
Maximum normal acceleration a_max [m/s²] | 5
Minimum tangential acceleration a_min [m/s²] | -10
Maximum tangential acceleration a_max [m/s²] | 10
Safety distance d_safety [m] | 15
Aircraft size r̄_j, j = 1, …, N [m] | 5
Desired distance d̄ [m] | 40
Weight matrix for tracking error Q | diag([10 10 10 10])
Weight matrix for control effort R | diag([1 1])
Number of steps of the prediction horizon n_p | 10
Number of steps of the control horizon n_c | 5
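The weights and horizons of Table 1 slot directly into a standard quadratic MPC formulation. The sketch below assembles such a problem with CVXPY for an assumed discrete-time 2D double-integrator model with sample time Ts = 0.1 s, a constant reference state and a single infinity-norm acceleration bound; these modeling choices, and the hold-last-input rule beyond the control horizon, are assumptions of the example rather than the paper's formulation.

```python
import numpy as np
import cvxpy as cp

# Assumed prediction model: 2D double integrator, state [x, y, vx, vy], input [ax, ay]
Ts = 0.1
A = np.block([[np.eye(2), Ts * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.block([[0.5 * Ts**2 * np.eye(2)],
              [Ts * np.eye(2)]])

Q = np.diag([10.0, 10.0, 10.0, 10.0])   # tracking-error weight (Table 1)
R = np.diag([1.0, 1.0])                 # control-effort weight (Table 1)
n_p, n_c = 10, 5                        # prediction / control horizons (Table 1)

x0 = np.array([0.0, 0.0, 5.0, 0.0])     # current state (assumed)
x_ref = np.array([50.0, 0.0, 5.0, 0.0]) # constant reference (assumed)

x = cp.Variable((4, n_p + 1))
u = cp.Variable((2, n_p))

cost, constr = 0, [x[:, 0] == x0]
for k in range(n_p):
    cost += cp.quad_form(x[:, k + 1] - x_ref, Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               # simplified acceleration bound (Table 1 distinguishes normal
               # and tangential limits of 5 and 10 m/s²)
               cp.norm(u[:, k], "inf") <= 10.0]
    if k >= n_c:                        # hold the last input beyond the control horizon
        constr += [u[:, k] == u[:, n_c - 1]]

cp.Problem(cp.Minimize(cost), constr).solve(solver=cp.OSQP)
print(u.value[:, 0])                    # first control move to apply
```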
