Article

Model Predictive Control for Cooperative Transportation with Feasibility-Aware Policy

1 Department of Information Engineering, University of Padova, 35131 Padova, Italy
2 Department of Management and Engineering, University of Padova, 36100 Vicenza, Italy
* Authors to whom correspondence should be addressed.
Robotics 2021, 10(3), 84; https://doi.org/10.3390/robotics10030084
Submission received: 25 May 2021 / Revised: 25 June 2021 / Accepted: 27 June 2021 / Published: 30 June 2021
(This article belongs to the Topic Motion Planning and Control for Robotics)

Abstract
The transportation of large payloads can be made possible by Multi-Robot Systems (MRSs) implementing cooperative strategies. In this work, we focus on the coordinated MRS trajectory planning task, exploiting a Model Predictive Control (MPC) framework that addresses both the acting robots and the transported load. In this context, the main challenge is the possible occurrence of a temporary mismatch among the agents' actions, with consequent formation errors that can cause severe damage to the carried load. To mitigate this risk, the coordination scheme may leverage a leader–follower approach, in which a hierarchical strategy trades off task accomplishment against the dynamics and environment constraints. Nonetheless, particularly in narrow spaces or cluttered environments, the leader's optimal choice may lead to trajectories that are infeasible for the follower and the load. To this end, we propose a feasibility-aware leader–follower strategy, where the leader computes a reference trajectory and the follower accounts for its own constraints and those of the load; moreover, the follower is able to communicate trajectory infeasibility to the leader, which reacts by temporarily switching to a conservative policy. The consistent MRS co-design is enabled by the MPC formulation adopted for both the leader and the follower: here, the prediction capability of MPC is key to guaranteeing a correct and efficient execution of the coordinated leader–follower action. The approach is formally stated and discussed, and a numerical campaign is conducted to validate and assess the proposed scheme in different scenarios of growing complexity.

1. Introduction

A multi-robot system (MRS) is a set of autonomous agents able to sense the environment, make decisions, communicate information, and cooperate in order to achieve goals that are beyond the capabilities and knowledge of each individual component [1]. The strength of this architecture is indeed the possibility of accomplishing complex global tasks through the realization of simple local rules by the interacting robotic agents. An MRS is thus mainly characterized by a cooperative nature that is not altered by the addition and/or removal of a subset of elements. In this sense, it is robust and scalable [2]: robustness refers to the ability of a system to tolerate the failure of one or more agents, while scalability originates from the system's modularity.
Thanks to these features, in the last decade the MRS architecture has emerged as an effective technology in military, civil, and industrial contexts, where it is employed in a wide range of application fields: from the more traditional surveillance, monitoring, and tracking tasks with both fixed and mobile robotic agents [3,4,5], to the modern physical interaction tasks involving the cooperative grasping, transportation, and manipulation of loads [6,7]. Nonetheless, the MRS scenario also yields new methodological challenges that need to be addressed and solved. Indeed, moving the focus from the single robotic agent to the robotic system implies the redefinition of the operation and control space, which involves not only the non-controllable surrounding environment but also the presence of neighboring robotic agents that may or may not be collaborative. In this sense, architectural issues related to distributed or hierarchical strategies in controlling the MRS have to be considered [8,9,10,11], with an impact on both the information sharing and the control interplay among robots, in particular when physical robot-to-robot interaction is required, as in transportation/manipulation tasks. In this context, current MRS research effort is focused on the development of learning and optimal decision-making algorithms to guarantee general improvements in efficiency, scalability, adaptivity, and robustness in the presence of partial information availability and dynamic environments [7,12,13].
Along this line, the attention in this work concerns a transportation task performed by cooperative mobile robots constituted by autonomous ground vehicles (AGVs), each equipped with a generic manipulator whose end-effector (EE) consists of a (soft) gripper that allows it to grasp the load of interest. Specifically, a model predictive control (MPC) architecture is adopted in conjunction with a leader–follower strategy to solve the cooperative tasks, namely trajectory planning toward the desired goal with obstacle avoidance for the leader, and maintenance of the transportation formation along a feasible collision-free path for the follower [14,15].

1.1. Related Works

As explained in [6,16], when performing a cooperative transportation task, three main approaches can be distinguished in terms of how the load holding is realized: pushing-only, caging, and grasping. According to pushing-only strategies, the transport is achieved by pushing the load while avoiding any physical attachment of the robots (see, e.g., [17,18,19] and the references therein). Similarly, caging techniques envisage that the robots arrange themselves around the load, embracing and holding it tightly during the motion [20]. On the other hand, grasping strategies rest on the presence of EEs that allow the robots to be physically (more or less elastically) attached to the load; this is the solution adopted in this paper's case study.
As a first note in this respect, it is important to highlight that a fully decentralized approach may prove difficult to realize when the transported load cannot bear the internal stresses induced by formation mismatches at the EE level. For this reason, centralized control schemes are typically proposed to face the problem of cooperative transportation. In [21], for example, a centralized optimization approach is proposed, based on the calculation of a convex hull in space-time where no collisions can arise, and deadlocks are prevented thanks to the global nature of the solution (differently from what can occur with a distributed local approach [22]). Indeed, this method allows for reconfiguration using the redundancy available to mobile manipulators and is mainly focused on collision avoidance. Additionally, in [23] a centralized method is proposed for an AGV transportation system, with particular attention to the force distribution and internal force analysis of the transported load, so as to yield an effective design of the control algorithm.
Interestingly, to address this issue by design, most of the existing solutions exploiting grasping strategies are based on a leader–follower paradigm, wherein one robotic agent (the leader) is in charge of guiding the rest of the MRS (the followers). Within the leader–follower framework, both centralized and decentralized approaches can be found: in the former case, the followers follow the information given by a static leader, while in the latter, the followers actively follow the leader's motion. Several control techniques are utilized in leader–follower formations to ensure that the followers accurately track the leader [24], from the more classical linear techniques (e.g., proportional–integral–derivative (PID) control) to nonlinear ones (e.g., sliding mode control), to name a few. With no aim of being exhaustive, some references exploring different control solutions to the single-load cooperative transportation problem are reported in the following. In [25], simulations and experiments are performed with two robots transporting a large and fragile load using a PID-based leader–follower control, proving the effectiveness of the approach in satisfying the load handling requirements.
In [26], a dynamic control architecture for a team of two robots is presented, where the control is designed using the attractor dynamics approach. Simulations and real experiments in unknown and changing environments show the stable behaviour of the two-robot system in accomplishing the joint transportation task.
The leader–follower control paradigm is also adopted in [14,27,28] to design model predictive control (MPC) based distributed solutions for the cooperative transportation task. MPC schemes predict the future state of the plant over a finite horizon using a mathematical model and solve an optimal control problem to compute the control action, handling environment and plant constraints directly in the problem formulation (for a general overview of MPC we refer to [29]). This last feature motivates the current extensive use of this technique in solving path planning tasks, especially in cluttered environments. The trajectory generation problem with collision avoidance guarantees is, for instance, addressed in [30,31,32] for intelligent vehicles and in [33,34,35] for aerial platforms. In general, the use of MPC has become very popular due to its intrinsic flexibility in the formulation and the possibility of easy adaptation to different objectives. The employment of the MPC approach for mobile manipulators can be found in [36], where a decentralized nonlinear MPC (NMPC) is proposed with proofs of feasibility and convergence, and physical experiments are presented. A similar NMPC application is studied and simulated in [37], which includes the avoidance of collisions and of singular configurations for the manipulators. More recent studies, like [38], propose data-driven MPC solutions, where each robot employs sensor data to predict the future behavior of the others, instead of relying on mutual communication. Two observations are now in order. First, it is worthwhile to notice that it is also possible to generate trajectories for the single agent (e.g., the leader) in a much cheaper way than with the above MPC scheme, by exploiting popular algorithms such as A* or Rapidly-exploring Random Trees (RRT) [39], followed by a trajectory smoothing procedure.
It is nonetheless clear that these approaches only provide the geometric path, without considering constraints on the travel time or the energy spent to move the robot from the initial point to the goal. Because of these requirements, MPC approaches remain preferable [40]. Moreover, while in most of the MPC literature collision avoidance is included in the problem formulation as hard constraints, some authors adopt the Potential Field Method [41] for trajectory generation, which can be easily adapted for both the leader and the follower and can also deal with additional communication constraints [42]. Local minima may appear with this latter approach, where the robots easily get stuck in the presence of many obstacles, but recent advances propose ways to overcome this drawback [43].

1.2. Contribution

Given these premises, the main contribution of this paper is the design and implementation of an MPC-based leader–follower scheme that performs local trajectory planning for each robotic agent while also performing obstacle avoidance and maintaining the grasp on the transported load. In particular, the proposed method improves the standard leader–follower approach, as in [5], by accounting for the feasibility of the trajectory imposed by the leader on the follower. Specifically, the approach relies on a direct MPC formulation based on the convexification of the optimization space with linear constraints that are easily customizable depending on the load shape. Formation errors and infeasibility in the trajectory calculation, caused by the combination of the highly reduced optimization space and the leader–follower approach, are mitigated via a novel recovery policy.

1.3. Paper Structure

The paper is organized as follows. After the formal statement of the problem in Section 2, the main technical sections (Section 3, Section 4 and Section 5) discuss in detail the MPC approach for the leader–follower MRS, how obstacles are accounted for in the multi-robot optimization framework, and the feasibility-aware policy that is put in place to accommodate all constraints and requirements while ensuring task accomplishment. Then, in Section 6, a numerical validation is proposed to highlight the main features of the approach and assess its efficacy in performing the cooperative task in different complex scenarios. Finally, Section 7 provides a summary of the study, together with some conclusive remarks.

1.4. Notation

In this work, vectors in $\mathbb{R}^m$ and matrices in $\mathbb{R}^{m \times n}$ are indicated with bold lowercase and bold uppercase letters, respectively. Calligraphic letters are reserved for sets. $\mathbb{1}_m \in \mathbb{R}^m$ denotes the m-dimensional (column) vector having all entries equal to one, while $\mathbf{0}_{m \times m} \in \mathbb{R}^{m \times m}$ and $\mathbf{I}_m \in \mathbb{R}^{m \times m}$ identify the null and the identity square matrices of dimension m, respectively. Finally, $\otimes$ indicates the standard Kronecker product.

2. Problem Formulation

Hereafter, we consider two autonomous ground robots, each made of a mobile base and a manipulator. These are required to transport a load in an environment populated by static and/or dynamic obstacles. The robots are supposed to grasp the load realizing a compliant contact. Within this context, the attention is focused on the development of a cooperative trajectory planning solution based on a leader–follower scheme for the two autonomous robots, which ensures the fulfillment of the transportation task in a decentralized way. According to [15], since the grasping between the EE and the load is assumed to be compliant, the interaction wrench turns out to be proportional to the formation error. Hence, without loss of generality, the following assumptions are made for both robotic agents (leader and follower):
  • all measurements, state and input quantities, as well as the environment obstacles, are referred to the position of the EE grasping the transported load; to allow control feasibility, the obstacle constraints on the EEs are compatible with those on the mobile base;
  • once a reference trajectory for the EE has been computed, a low-level controller is tasked to manage the position of the mobile base and the joint configuration of the manipulator such that the position of the EE is able to track the trajectory; such a controller is assumed to have full authority over the EE degrees of freedom, so that the latter can be modeled as a double integrator [44] and the load is maintained at a constant height: the transportation problem can therefore be formulated in the two-dimensional plane;
  • the robot is equipped with sensing capabilities, so that it is aware of the environment and able to self-localize with respect to a common inertial frame; it is also able to communicate and exchange data with its transportation companion, through an ideal channel.
Given these assumptions, the combination of mobile base and manipulator is generic, and hereafter all quantities will be referred to the EE of the leader/follower. In the considered scenario, we denote with $x_L(t) \in \mathbb{R}^2$ and $x_F(t) \in \mathbb{R}^2$ the positions of the leader and the follower EE, respectively.
When accounting for the leader–follower paradigm, the cooperative trajectory planning problem consists in the definition of a desired trajectory $x_{L,d}(t) \in \mathbb{R}^2$ that drives the leader toward a final target position $x_{L,d}^* \in \mathbb{R}^2$, while the follower is required to maintain a certain distance $d_{L,F} \in \mathbb{R}_+$ from the leader, so that the load is not stretched and the grasp is not detached, i.e., its desired trajectory $x_{F,d}(t) \in \mathbb{R}^2$ is such that

$\| x_{F,d}(t) - x_L(t) \| = d_{L,F}.$   (1)
An interesting remark is now in order. During the transportation phase that we consider, the design of safe trajectories (from initial to target position) for leader and follower, connected through the load grasping, implicitly regulates the load orientation. Indeed, this is motivated by the fact that the obstacle avoidance task for the two robots and the load is of primary importance with respect to the load orientation. Conversely, once the transportation phase is completed (and the target position is reached) there may be the additional need to re-orient the load for placing it at rest: this can be done through an explicit control action not induced by the obstacles and in any case referred to the grasping robots. This latter design is out of the scope of this study.
In the definition of the trajectories, it is also necessary to take the presence of obstacles into account. We assume that the environment is populated by a set of M static and/or dynamic obstacles, e.g., fixed structures and/or humans/other mobile robots whose dynamics are comparable with those of the transporting robots. For each j-th obstacle, $j = 1, \dots, M$, whose position is denoted by $o_j(t) \in \mathbb{R}^2$, a radius $r_j \in \mathbb{R}_+$ is defined as the minimum distance preventing a collision. In order to realistically face the obstacle avoidance task, we model both the leader and the follower as extended rigid bodies and we identify a set of $N_v$ vertices accounting for the physical structure of the robots: these vertices denote the points of interest of the robots for which collisions must be avoided to allow for the controlled motion. As a consequence, obstacle avoidance is guaranteed when

$\| v_{\ell,i}(t) - o_j(t) \| > r_j, \quad \ell = L, F, \quad i = 1, \dots, N_v, \quad j = 1, \dots, M,$   (2)

where $v_{\ell,i}(t) = x_\ell(t) + s_{\ell,i} \in \mathbb{R}^2$ denotes the absolute position of the i-th vertex (in the common inertial frame), $s_{\ell,i} \in \mathbb{R}^2$ being the relative position of the i-th vertex with respect to the leader/follower position. These vertices can also be defined on the load (hereafter identified by the subscript $\ell = l$) and, since the follower needs to keep a desired distance from the leader with no constraints on the mutual orientation, as in (1), the relative positions of the vertices on the load must be defined as a function $h(\cdot)$ of the leader and follower positions as

$v_{l,i}(t) := h(x_L(t), x_F(t)) \in \mathbb{R}^2, \quad i = 1, \dots, N_v.$   (3)
An example of such configuration and the vertices definition is graphically provided later on, in Figure 1.
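Since the map $h(\cdot)$ is left generic here, the following is only a minimal sketch of one possible choice, assuming a bar-like load: vertex pairs are placed at given fractions of the leader–follower segment and offset laterally by a half-width. The function name and its parametrization are our illustrative assumptions, not part of the formulation.

```python
import numpy as np

def load_vertices(x_L, x_F, fractions, half_width):
    # Illustrative h(x_L, x_F): place load vertex pairs at the given
    # fractions of the leader-follower segment, offset by +/- half_width
    # along the unit normal to the segment.
    x_L, x_F = np.asarray(x_L, float), np.asarray(x_F, float)
    d = x_F - x_L
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to the segment
    verts = []
    for a in fractions:
        mid = x_L + a * d
        verts.append(mid + half_width * n)
        verts.append(mid - half_width * n)
    return np.array(verts)  # shape (2 * len(fractions), 2)
```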
Given these premises, the final goal of the leader–follower approach is that the two agents travel along trajectories $x_\ell(t)$ that are close to the desired references $x_{\ell,d}(t)$. Moreover, the goal of this work consists in the design of a trajectory generation method for the MRS composed of leader and follower, guaranteeing both the load grasping (i.e., the fulfillment of (1)) and the obstacle avoidance (i.e., the fulfillment of (2)). Formally, we aim to determine the desired trajectories $x_{L,d}(t)$ and $x_{F,d}(t)$ so that

$\lim_{t \to \infty} x_{L,d}(t) = x_{L,d}^*,$   (4)

$\| x_{F,d}(t) - x_L(t) \| = d_{L,F},$   (5)

$\| v_{\ell,i}(t) - o_j(t) \| > r_j, \quad \ell = L, F, l, \quad i = 1, \dots, N_v, \quad j = 1, \dots, M.$   (6)

In the remainder of the paper, the subscripts L and F are omitted for any quantity when the given observations are valid for both EEs. Moreover, without loss of generality, we assume $x_{L,d}^* = \mathbf{0}_2 \in \mathbb{R}^2$ in the rest of the work.

3. MPC-Based Strategy

To compute collision-free trajectories for the leader and the follower, namely $x_{L,d}(t)$ and $x_{F,d}(t)$ satisfying (4)–(6), we adopt an MPC-based formulation. Inspired also by the leader–follower framework proposed in [45], the EE positions are assumed to be described by a double integrator model; hence, denoting with $u_L(t) \in \mathbb{R}^2$ and $u_F(t) \in \mathbb{R}^2$ the leader and follower acceleration inputs, it holds that

$\ddot{x}_L(t) = u_L(t) \quad \text{and} \quad \ddot{x}_F(t) = u_F(t).$   (7)

Introducing also $v_L(t) = \dot{x}_L(t)$ and $v_F(t) = \dot{x}_F(t)$ as the leader and follower velocities, the full state of the EEs can be described by

$q_L(t) := [x_L(t)^\top \ v_L(t)^\top]^\top \in \mathbb{R}^4 \quad \text{and} \quad q_F(t) := [x_F(t)^\top \ v_F(t)^\top]^\top \in \mathbb{R}^4,$   (8)

which allows us to write the dynamics of both agents as the linear system

$\dot{q}(t) = A_c q(t) + B_c u(t), \quad \text{with} \quad A_c = \begin{bmatrix} 0_{2 \times 2} & I_2 \\ 0_{2 \times 2} & 0_{2 \times 2} \end{bmatrix} \in \mathbb{R}^{4 \times 4}, \quad B_c = \begin{bmatrix} 0_{2 \times 2} \\ I_2 \end{bmatrix} \in \mathbb{R}^{4 \times 2}.$   (9)

The system model (9) can be discretized with sampling time $T_s \in \mathbb{R}_+$, thus obtaining

$q(t+1) = A q(t) + B u(t), \quad \text{with} \quad A = e^{A_c T_s} \in \mathbb{R}^{4 \times 4}, \quad B = \int_0^{T_s} e^{A_c \tau} d\tau \, B_c \in \mathbb{R}^{4 \times 2},$   (10)

where, with a slight abuse of notation, it holds that $t = h T_s$, $h \in \mathbb{N}$.
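Since $A_c$ is nilpotent ($A_c^2 = 0$), the exponential series in (10) truncates, and the discretization admits a closed form; a short NumPy sketch (the value of $T_s$ is illustrative):

```python
import numpy as np

Ts = 0.1  # sampling time (illustrative value)

# Continuous-time double integrator (9): q = [x; v], two planar coordinates.
Ac = np.block([[np.zeros((2, 2)), np.eye(2)],
               [np.zeros((2, 2)), np.zeros((2, 2))]])
Bc = np.vstack([np.zeros((2, 2)), np.eye(2)])

# Ac is nilpotent (Ac @ Ac = 0), so the matrix exponential truncates:
#   A = e^{Ac Ts} = I + Ac Ts
#   B = (I Ts + Ac Ts^2 / 2) Bc
A = np.eye(4) + Ac * Ts
B = (np.eye(4) * Ts + Ac * Ts**2 / 2) @ Bc
```

The result is the familiar form with position blocks $T_s^2/2$ and velocity blocks $T_s$ in $B$.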
Given these premises, the cooperative trajectory planning problem is addressed by considering an MPC approach based on the iterative solution of the following optimization problem at each time instant t:

$\hat{u}^*(\cdot) = \operatorname{argmin}_{\hat{u}(\cdot)} \ J(\hat{q}(\cdot), \hat{u}(\cdot))$   (11)

$\text{subject to} \quad \hat{q}(k+1) = A \hat{q}(k) + B \hat{u}(k) \quad \text{(model dynamics)}$   (12)

$\hat{q}(0) = q(t) \quad \text{(initial condition)}$   (13)

$\hat{q}(k) \in \mathcal{X}(t) \quad \text{(state constraint)}$   (14)

$\hat{u}(k) \in \mathcal{U} \quad \text{(input constraint)}$   (15)

where $k = 0, \dots, N$, with N being the total number of prediction horizon samples. In (11)–(15), the vectors $\hat{q}(k) := [\hat{x}(k)^\top \ \hat{v}(k)^\top]^\top \in \mathbb{R}^4$ and $\hat{u}(k) \in \mathbb{R}^2$ represent the state and input predictions, respectively, while $\mathcal{X}(t)$ and $\mathcal{U}$ denote the constraint sets. Given the predicted trajectory $\hat{x}(\cdot)$, we generate online at time t the desired trajectory for both the leader and the follower by considering the first sample of the MPC prediction, namely

$x_d(t) = \hat{x}(0).$   (16)
Notice that two different MPC problems need to be considered for the leader and the follower, since both the objective and the constraints differ between the two EEs. Moreover, while recalling that completely decentralized approaches are particularly challenging in cooperative transportation tasks based on grasping [6], the strategy outlined here rests on the following points and results in a decentralized scheme.
  • Exploiting the described MPC approach, the leader aims at iteratively solving the problem of reaching the target position $x_{L,d}^*$, while performing obstacle avoidance (according to (6) with $\ell = L$) without accounting for the follower. The MPC scheme provides a prediction of the leader trajectory to communicate to the follower.
  • After this data exchange, the follower adopts the MPC strategy to compute its trajectory, aiming at maintaining the distance $d_{L,F}$ with respect to the leader's predicted trajectory and at performing obstacle avoidance for itself and the transported load.
Note that obstacle avoidance for the follower and the load is not considered by the leader, so as to avoid a centralized procedure; this may lead the leader to compute a trajectory that turns out to be infeasible from the follower side. However, such a case can be detected and handled via the introduction of a proper recovery policy, as explained in Section 5.
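A minimal sketch of one planning step of this interplay, with the two MPC problems and the recovery policy of Section 5 left as placeholder callbacks (all names are ours, for illustration only):

```python
def planning_step(q_L, q_F, solve_leader_mpc, solve_follower_mpc, fallback_policy):
    """One receding-horizon step of the decentralized scheme.

    The leader plans toward the target ignoring the follower and communicates
    its predicted trajectory; the follower then plans against it. If the
    follower problem is infeasible (signalled here by None), the leader
    falls back to a conservative policy, standing in for the recovery
    scheme of Section 5. The three callbacks are placeholders for the
    optimization problems (11)-(15).
    """
    leader_plan = solve_leader_mpc(q_L)                   # leader prediction over N samples
    follower_plan = solve_follower_mpc(q_F, leader_plan)  # tracks d_LF vs. the prediction
    if follower_plan is None:                             # infeasible from the follower side
        leader_plan = fallback_policy(q_L)                # conservative leader re-plan
        follower_plan = solve_follower_mpc(q_F, leader_plan)
    return leader_plan, follower_plan
```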
Hereafter, taking into account the optimization problem (11)–(15), we provide a definition of the cost functions $J_L(\cdot)$, $J_F(\cdot)$ and of the constraint sets $\mathcal{X}_L(t)$, $\mathcal{X}_F(t)$, $\mathcal{U}_L$, $\mathcal{U}_F$ for the leader–follower MPC problem. In particular, the state constraints are time-varying and such that

$\mathcal{X}_\ell(t) := \mathcal{X}_v \cap \mathcal{X}_{\ell,o}(t), \quad \ell = L, F,$   (17)

where $\mathcal{X}_v$ is the (same) constraint set on both the leader and follower velocity, and $\mathcal{X}_{\ell,o}(t)$ is imposed by the collision avoidance requirement clarified in Section 4.

3.1. Leader Objective Function

The principal goal of the leader is to determine a suitable trajectory such that the error with respect to the final target position $x_{L,d}^*$ is minimized. For this reason, accounting for the MPC framework (11)–(15), we consider the quadratic cost function

$J_L(\hat{q}_L(\cdot), \hat{u}_L(\cdot)) := \sum_{k=0}^{N-1} \left( \hat{q}_L(k)^\top W \hat{q}_L(k) + \hat{u}_L(k)^\top R \hat{u}_L(k) \right) + \hat{q}_L(N)^\top Z \hat{q}_L(N),$   (18)

where the gain matrices $W \in \mathbb{R}^{4 \times 4}$, $Z \in \mathbb{R}^{4 \times 4}$, and $R \in \mathbb{R}^{2 \times 2}$ allow one to regulate the behavior of the leader in terms of convergence speed towards the target position and control effort.
The function (18) can be rewritten in compact form as

$J_L(\bar{\hat{q}}_L, \bar{\hat{u}}_L) = \hat{q}_L(0)^\top W \hat{q}_L(0) + \bar{\hat{q}}_L^\top \bar{W} \bar{\hat{q}}_L + \bar{\hat{u}}_L^\top \bar{R} \bar{\hat{u}}_L,$   (19)

by introducing the stacked versions of states, inputs, and corresponding weight matrices, namely

$\bar{\hat{q}}_L := [\hat{q}_L(1)^\top \cdots \hat{q}_L(N)^\top]^\top \in \mathbb{R}^{4N}, \quad \bar{\hat{u}}_L := [\hat{u}_L(0)^\top \cdots \hat{u}_L(N-1)^\top]^\top \in \mathbb{R}^{2N},$   (20)

$\bar{W} = \begin{bmatrix} I_{N-1} \otimes W & 0_{4(N-1) \times 4} \\ 0_{4 \times 4(N-1)} & Z \end{bmatrix} \in \mathbb{R}^{4N \times 4N}, \quad \bar{R} = I_N \otimes R \in \mathbb{R}^{2N \times 2N}.$   (21)
On the other hand, from the system dynamics (10), for both the leader and the follower, it follows that

$\bar{\hat{q}} = \bar{T} \hat{q}(0) + \bar{S} \bar{\hat{u}},$   (22)

where

$\bar{T} = \begin{bmatrix} \bar{T}_1 \\ \vdots \\ \bar{T}_N \end{bmatrix} = \begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix} \in \mathbb{R}^{4N \times 4}, \quad \bar{S} = \begin{bmatrix} \bar{S}_1 \\ \vdots \\ \bar{S}_N \end{bmatrix} = \begin{bmatrix} B & 0_{4 \times 2} & \cdots & 0_{4 \times 2} \\ AB & B & \cdots & 0_{4 \times 2} \\ \vdots & \vdots & \ddots & \vdots \\ A^{N-1}B & A^{N-2}B & \cdots & B \end{bmatrix} \in \mathbb{R}^{4N \times 2N}.$   (23)
As a consequence, we can rewrite (18) as

$J_L(\hat{q}_L(0), \bar{\hat{u}}_L) = \frac{1}{2} \bar{\hat{u}}_L^\top H \bar{\hat{u}}_L + \hat{q}_L(0)^\top F \bar{\hat{u}}_L + \frac{1}{2} \hat{q}_L(0)^\top Y \hat{q}_L(0),$   (24)

with

$H = 2 (\bar{R} + \bar{S}^\top \bar{W} \bar{S}) \in \mathbb{R}^{2N \times 2N},$   (25)

$F = 2 \bar{T}^\top \bar{W} \bar{S} \in \mathbb{R}^{4 \times 2N},$   (26)

$Y = 2 (W + \bar{T}^\top \bar{W} \bar{T}) \in \mathbb{R}^{4 \times 4}.$   (27)
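As a sketch of the above condensation, the stacked matrices (23) and the terms (25)–(27) can be assembled as follows (a direct, non-optimized construction for illustration):

```python
import numpy as np

def condensed_qp_matrices(A, B, W, Z, R, N):
    """Build the stacked prediction matrices (23) and the condensed
    cost terms H, F, Y of (25)-(27) for the leader QP."""
    n, m = B.shape  # here n = 4 states, m = 2 inputs
    # Tbar stacks A, A^2, ..., A^N; Sbar is block lower-triangular Toeplitz.
    Tbar = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    Sbar = np.zeros((n * N, m * N))
    for row in range(N):
        for col in range(row + 1):
            Sbar[row*n:(row+1)*n, col*m:(col+1)*m] = (
                np.linalg.matrix_power(A, row - col) @ B)
    # Block-diagonal weights (21): terminal weight Z replaces W in last block.
    Wbar = np.kron(np.eye(N), W)
    Wbar[-n:, -n:] = Z
    Rbar = np.kron(np.eye(N), R)
    H = 2 * (Rbar + Sbar.T @ Wbar @ Sbar)
    F = 2 * Tbar.T @ Wbar @ Sbar
    Y = 2 * (W + Tbar.T @ Wbar @ Tbar)
    return H, F, Y
```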

3.2. Follower Objective Function

The objective of the follower consists in minimizing the distance error with respect to the leader while tracking the shortest path. Since the problem is decentralized, the requirement (5) has to be treated as a desired condition instead of a hard constraint, thus implying the minimization of the squared formation error, defined as

$e_{L,F}(t) := \| x_L(t) - x_F(t) \|^2 - d_{L,F}^2 = \| P_x (q_L(t) - q_F(t)) \|^2 - d_{L,F}^2,$   (28)

where $P_x := [I_2 \ 0_{2 \times 2}] \in \mathbb{R}^{2 \times 4}$ and $d_{L,F}$ was introduced in (1). In detail, the follower aims at solving an MPC optimization problem wherein the cost function is given by a weighted sum of two terms:
  • a term accounting for the minimization of the error (28) computed in correspondence to the predicted states $\hat{q}_L(\cdot)$ and $\hat{q}_F(\cdot)$, namely

    $J_{F,1}(\hat{q}_L(\cdot), \hat{q}_F(\cdot)) := \sum_{k=1}^{N} \beta^k \left( \| P_x (\hat{q}_L(k) - \hat{q}_F(k)) \|^2 - d_{L,F}^2 \right)^2 = \sum_{k=1}^{N} \beta^k \hat{e}_{L,F}^2(k),$   (29)

    where $\beta \in (0, 1] \subset \mathbb{R}$ is a decaying weight factor;
  • a term accounting for the minimization of the travelled distance in the prediction horizon, namely

    $J_{F,2}(\hat{q}_L(\cdot), \hat{q}_F(\cdot)) := \sum_{k=0}^{N-1} \| P_x (\hat{q}_F(k+1) - \hat{q}_F(k)) \|^2.$   (30)
Therefore, the cost function associated to the follower results in

$J_F(\hat{q}_L(\cdot), \hat{q}_F(\cdot)) = c \, J_{F,1}(\hat{q}_L(\cdot), \hat{q}_F(\cdot)) + J_{F,2}(\hat{q}_L(\cdot), \hat{q}_F(\cdot)),$   (31)

where the parameter $c \in \mathbb{R}_+$ allows one to differently prioritize the two aforementioned minimization goals.
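For illustration, the cost (31) can be evaluated on given leader and follower predictions as follows (a plain sketch; the solver would minimize this quantity over the follower inputs):

```python
import numpy as np

def follower_cost(q_hat_L, q_hat_F, d_LF, beta, c):
    """Evaluate the follower cost (31) = c*J_F1 + J_F2 on leader and
    follower predicted states, each of shape (N+1, 4) with q = [x; v]."""
    Px = np.hstack([np.eye(2), np.zeros((2, 2))])  # position selector (28)
    x_L = q_hat_L @ Px.T   # predicted positions, shape (N+1, 2)
    x_F = q_hat_F @ Px.T
    N = x_L.shape[0] - 1
    # J_F1 (29): beta-weighted squared formation errors over k = 1..N
    J1 = sum(beta**k * (np.sum((x_L[k] - x_F[k])**2) - d_LF**2)**2
             for k in range(1, N + 1))
    # J_F2 (30): squared travelled distance over the horizon
    J2 = sum(np.sum((x_F[k+1] - x_F[k])**2) for k in range(N))
    return c * J1 + J2
```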
Similarly to the leader case, it is possible to rewrite the function (31) as a function of the follower initial state $\hat{q}_F(0)$, of the follower predicted control input $\bar{\hat{u}}_F$, and of the predicted leader trajectory $\hat{q}_L(k)$, $k = 1, \dots, N$. In doing this, we introduce the vectors $\bar{\hat{q}}_F \in \mathbb{R}^{4N}$ and $\bar{\hat{u}}_F \in \mathbb{R}^{2N}$, whose definitions are analogous to those of $\bar{\hat{q}}_L$ and $\bar{\hat{u}}_L$ in (20). Then, it holds:

$J_{F,1}(\hat{q}_F(0), \bar{\hat{u}}_F, \bar{\hat{q}}_L) = \sum_{k=1}^{N} \beta^k \left( \| P_x (\hat{q}_L(k) - \bar{T}_k \hat{q}_F(0) - \bar{S}_k \bar{\hat{u}}_F) \|^2 - d_{L,F}^2 \right)^2,$   (32)

where $\bar{S}_k \in \mathbb{R}^{4 \times 2N}$ and $\bar{T}_k \in \mathbb{R}^{4 \times 4}$ are introduced in (23). Moreover, we can further rewrite $J_{F,2}(\hat{q}_L(\cdot), \hat{q}_F(\cdot))$ as

$J_{F,2}(\hat{q}_F(0), \bar{\hat{u}}_F) = \left( \bar{S}' \bar{\hat{u}}_F + \bar{T}' \hat{q}_F(0) \right)^\top \bar{P}_x^\top \bar{P}_x \left( \bar{S}' \bar{\hat{u}}_F + \bar{T}' \hat{q}_F(0) \right),$   (33)

where $\bar{P}_x = I_N \otimes P_x \in \mathbb{R}^{2N \times 4N}$ and

$\bar{S}' = \begin{bmatrix} B & 0_{4 \times 2} & \cdots & 0_{4 \times 2} \\ (A - I_4) B & B & \cdots & 0_{4 \times 2} \\ \vdots & \vdots & \ddots & \vdots \\ A^{N-2} (A - I_4) B & A^{N-3} (A - I_4) B & \cdots & B \end{bmatrix} \in \mathbb{R}^{4N \times 2N}, \quad \bar{T}' = \begin{bmatrix} A - I_4 \\ A (A - I_4) \\ \vdots \\ A^{N-1} (A - I_4) \end{bmatrix} \in \mathbb{R}^{4N \times 4}.$   (34)
Note that, differently from the leader case, the objective function J F ( · ) is not quadratic.

3.3. Constraint Sets Definition

To conclude the section, we discuss the constraints that we want to consider on the state and control input of both the leader and the follower. From a practical point of view, these are imposed by the actuation limits, which imply bounds on both the velocity and the acceleration of the EEs.
Formally, within the framework (11)–(15), we can define the input constraint set as

$\mathcal{U} := \{ \hat{u}(k) \in \mathbb{R}^2 : u_{min} \leq \hat{u}(k) \leq u_{max} \},$   (35)
where the inequality between vectors is considered component-wise. Equivalently, for each k-th prediction sample, $k = 1, \dots, N$, it holds that

$G_a \hat{u}(k) \leq u_{lim}, \quad \text{with} \quad G_a = \begin{bmatrix} -I_2 \\ I_2 \end{bmatrix} \in \mathbb{R}^{4 \times 2}, \quad u_{lim} = \begin{bmatrix} -u_{min} \\ u_{max} \end{bmatrix} \in \mathbb{R}^4,$   (36)

and, considering the whole prediction horizon, we have that

$\bar{G}_a \bar{\hat{u}} \leq \bar{u}_{lim}, \quad \text{with} \quad \bar{G}_a = I_N \otimes G_a \in \mathbb{R}^{4N \times 2N}, \quad \bar{u}_{lim} = \mathbb{1}_N \otimes u_{lim} \in \mathbb{R}^{4N}.$   (37)
A similar reasoning can be carried out for the constraints affecting the velocity component of the state. Indeed, we consider the velocity constraint set from (17) as

$\mathcal{X}_v := \{ \hat{q}(k) = [\hat{x}(k)^\top \ \hat{v}(k)^\top]^\top \in \mathbb{R}^4 : v_{min} \leq \hat{v}(k) \leq v_{max} \},$   (38)

which can be rewritten in terms of the control input over the prediction horizon as

$\mathcal{X}_v = \{ \bar{\hat{u}} \in \mathbb{R}^{2N} : \bar{G}_v \bar{S} \bar{\hat{u}} \leq \bar{v}_{lim} - \bar{G}_v \bar{T} q(0) \},$   (39)

with $\bar{G}_v = I_N \otimes G_v \in \mathbb{R}^{4N \times 4N}$ and $\bar{v}_{lim} = \mathbb{1}_N \otimes v_{lim} \in \mathbb{R}^{4N}$, given

$G_v = \begin{bmatrix} 0_{2 \times 2} & -I_2 \\ 0_{2 \times 2} & I_2 \end{bmatrix} \in \mathbb{R}^{4 \times 4}, \quad v_{lim} = \begin{bmatrix} -v_{min} \\ v_{max} \end{bmatrix} \in \mathbb{R}^4.$   (40)

Both the input constraint set (35) and the state constraint set (38) are convex.
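A possible construction of the stacked matrices in (36)–(40) via Kronecker products is sketched below (illustrative only; note that (39) additionally requires $\bar{S}$ and $\bar{T}$ from (23) to map the velocity bounds onto the inputs):

```python
import numpy as np

def box_constraint_matrices(u_min, u_max, v_min, v_max, N):
    """Stack the box constraints (36)-(40): per-sample G_a u <= u_lim and
    its horizon-wide Kronecker version, likewise for the velocity bounds
    selected out of the state q = [x; v]."""
    I2 = np.eye(2)
    Ga = np.vstack([-I2, I2])                   # encodes u_min <= u <= u_max
    ulim = np.concatenate([-u_min, u_max])
    Gv = np.block([[np.zeros((2, 2)), -I2],
                   [np.zeros((2, 2)),  I2]])    # selects v out of q
    vlim = np.concatenate([-v_min, v_max])
    Ga_bar = np.kron(np.eye(N), Ga)             # shape (4N, 2N), cf. (37)
    ulim_bar = np.kron(np.ones(N), ulim)        # shape (4N,)
    Gv_bar = np.kron(np.eye(N), Gv)             # shape (4N, 4N), cf. (39)
    vlim_bar = np.kron(np.ones(N), vlim)        # shape (4N,)
    return Ga_bar, ulim_bar, Gv_bar, vlim_bar
```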

4. Obstacle Avoidance

In this work, we deal with an environment populated by static and/or dynamic obstacles; thus, the cooperative trajectory planning problem has to be faced while also ensuring collision avoidance for the leader, the follower, and the transported load. Specifically, as introduced in Section 2, in this work we consider M obstacles. Each j-th obstacle, $j = 1, \dots, M$, whose position is identified by $o_j(t) \in \mathbb{R}^2$, is associated with a certain radius $r_j \in \mathbb{R}_+$. It is reasonable to include in the definition of the obstacle radius $r_j$ also possible position measurement errors: formally, by defining the robot position errors as $\varepsilon_{x,\ell}(t) < \bar{\varepsilon}_x$, $\ell = L, F$, for each obstacle $j = 1, \dots, M$ characterized by a virtual radius $\tilde{r}_j$, we can consider the effective radius $r_j := \tilde{r}_j + \bar{\varepsilon}_x$. Moreover, we recall that collision is avoided with respect to a set of $N_v$ vertices $v_{\ell,i}(t) \in \mathbb{R}^2$, $i = 1, \dots, N_v$; to ease the discussion and without loss of generality, we will assume hereafter that (i) the cardinality $N_v$ of the vertices set is the same for leader, follower, and load, namely that they can be modeled with an equal number of boundary points, although different in shape and dimensions; and (ii) this definition of vertices (and radius) is adequate to allow for obstacle detection in the considered scenario.
Inspired by the strategy proposed in [46], for both the leader and the follower, we aim at identifying the largest area that does not contain any obstacle and can be defined with linear constraints. With reference to Figure 1a, in correspondence to any j-th obstacle, we can first consider the point on the circle centered in $o_j(t)$ closest to the EE position $x(t)$, given by

$w_j(t) = \lambda_j(t) x(t) + (1 - \lambda_j(t)) o_j(t) \in \mathbb{R}^2, \quad \text{with} \quad \lambda_j(t) = \frac{r_j}{\| x(t) - o_j(t) \|} \in \mathbb{R}.$   (41)
Then, for the EE placed in $x(t)$, it is possible to describe the largest half-plane that does not contain the j-th obstacle: this is identified by the line orthogonal to the segment joining $x(t)$ and $w_j(t)$ and passing through $w_j(t)$ (green shaded areas in Figure 1). In light of this fact, avoidance of the j-th obstacle is guaranteed when the following inequality is satisfied for each i-th vertex, $i = 1, \dots, N_v$:

$g_{o,j}(t)^\top v_{\ell,i}(t) \leq b_{o,j}(t),$   (42)

where $\ell = L, F, l$ and

$g_{o,j}(t) = w_j(t) - x(t) \in \mathbb{R}^2, \quad b_{o,j}(t) = (w_j(t) - x(t))^\top w_j(t) \in \mathbb{R}.$   (43)
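The construction (41)–(43) can be sketched as follows (valid when the EE lies outside the safety circle):

```python
import numpy as np

def obstacle_halfplane(x, o_j, r_j):
    """Closest point w_j on the safety circle (41) and the half-plane
    parameters g_{o,j}, b_{o,j} of (43); a vertex v is collision-free
    w.r.t. obstacle j when g_{o,j}^T v <= b_{o,j} (42)."""
    x, o_j = np.asarray(x, float), np.asarray(o_j, float)
    lam = r_j / np.linalg.norm(x - o_j)   # requires the EE outside the circle
    w = lam * x + (1.0 - lam) * o_j       # point on the circle closest to x
    g = w - x                             # outward normal of the half-plane
    b = g @ w                             # offset: line through w, orthogonal to g
    return w, g, b
```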
Given these premises, the collision free area for any robot corresponds to the polyhedron resulting from the intersection of the M half-planes defined as in (42) by taking into account all the obstacles. In detail, we assume that collision avoidance is achieved if each of the N v vertices belongs to the resulting convex set. Formally, this is ensured by the fulfillment of the following inequality for any i = 1 N v :
$$G_o(t)\, v_{\ell,i}(t) \leq b_o(t),$$
where $\ell = L, F$, and
$$G_o(t) = \begin{bmatrix} g_{o,1}(t)^\top \\ \vdots \\ g_{o,M}(t)^\top \end{bmatrix} \in \mathbb{R}^{M \times 2} \quad \text{and} \quad b_o(t) = \begin{bmatrix} b_{o,1}(t) \\ \vdots \\ b_{o,M}(t) \end{bmatrix} \in \mathbb{R}^{M}.$$
In order to account for collision avoidance while solving the cooperative trajectory planning task, we include the constraint (44) in the computation of both $\hat{q}_L(\cdot)$ and $\hat{q}_F(\cdot)$. Note that any obstacle position can vary over time but, in the MPC formulation, it is assumed to be fixed over the prediction horizon.
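As an illustration, the closest-point and half-plane construction above can be sketched in a few lines of numpy; the function names (`halfplane_constraint`, `collision_free_polyhedron`) are ours and purely illustrative, not taken from any released implementation.

```python
import numpy as np

def halfplane_constraint(x, o_j, r_j):
    # Closest point w_j on the obstacle circle to the EE position x, and the
    # separating half-plane g^T v <= b that excludes the j-th obstacle.
    lam = r_j / np.linalg.norm(x - o_j)          # lambda_j(t)
    w = lam * x + (1.0 - lam) * o_j              # point on the circle closest to x
    g = w - x                                    # g_{o,j}(t), pointing toward the obstacle
    b = g @ w                                    # b_{o,j}(t) = g^T w_j
    return g, b

def collision_free_polyhedron(x, obstacles):
    # Stack the M half-planes into G_o (M x 2) and b_o (M,), as in (44):
    # a vertex v is collision free when all entries of G_o v - b_o are <= 0.
    rows = [halfplane_constraint(x, o, r) for o, r in obstacles]
    G = np.vstack([g for g, _ in rows])
    b = np.array([b for _, b in rows])
    return G, b
```

A vertex $v$ is then declared safe with respect to all obstacles when `np.all(G @ v <= b)` holds.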

4.1. On the Leader Side

Focusing on the leader, the obstacle avoidance requirement is guaranteed if the constraint (44) is satisfied for any element of the set $\{v_{L,i}(t),\ i = 1, \dots, N_v\}$. Considering the EE initial state and dynamics, it is possible to verify that, for any $i = 1, \dots, N_v$, the vector $\bar{\hat{v}}_{L,i} := [\hat{v}_{L,i}(1)^\top \, \cdots \, \hat{v}_{L,i}(N)^\top]^\top \in \mathbb{R}^{2N}$ is such that
$$\bar{\hat{v}}_{L,i} = \bar{P}_x \left( \bar{T}\, \hat{q}_L(0) + \bar{S}\, \bar{\hat{u}}_L \right) + \bar{s}_{L,i},$$
where $\bar{s}_{L,i} = \mathbb{1}_N \otimes s_{L,i} \in \mathbb{R}^{2N}$. As a consequence, accounting for (45), obstacle avoidance on the leader side is ensured when the following condition is fulfilled for each $i = 1, \dots, N_v$:
$$\mathcal{X}_{L,o,i}(t) := \left\{ \bar{\hat{u}}_L \in \mathbb{R}^{2N} : \bar{G}_o(t)\, \bar{P}_x \bar{S}\, \bar{\hat{u}}_L \leq \bar{b}_o(t) - \bar{G}_o(t) \left( \bar{P}_x \bar{T}\, \hat{q}_L(0) + \bar{s}_{L,i} \right) \right\},$$
where
$$\bar{G}_o(t) = I_N \otimes G_o(t) \in \mathbb{R}^{MN \times 2N} \quad \text{and} \quad \bar{b}_o(t) = \mathbb{1}_N \otimes b_o(t) \in \mathbb{R}^{MN}.$$
It follows that the leader constraint for obstacle avoidance (17) at time $t$ is given by
$$\mathcal{X}_{L,o}(t) := \bigcap_{i=1}^{N_v} \mathcal{X}_{L,o,i}(t),$$
which is a convex set, being the intersection of (convex) half-spaces representing the collision-free area.
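The horizon-wise stacking in (48) amounts to two Kronecker products, as in this minimal numpy sketch (the function name is an illustrative choice of ours):

```python
import numpy as np

def stack_over_horizon(G_o, b_o, N):
    # Replicate the per-step polyhedron {v : G_o v <= b_o} over the N-step
    # prediction horizon, as in (48): G_bar = I_N (kron) G_o acts block-wise
    # on the stacked vertex trajectory in R^{2N}.
    G_bar = np.kron(np.eye(N), G_o)              # shape (M*N, 2*N)
    b_bar = np.kron(np.ones(N), b_o)             # shape (M*N,)
    return G_bar, b_bar
```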

4.2. On the Follower Side

The reasoning carried out for the leader also holds for the follower. Specifically, given $i = 1, \dots, N_v$, the following obstacle avoidance condition can be derived:
$$\mathcal{X}_{F,o,i}(t) := \left\{ \bar{\hat{u}}_F \in \mathbb{R}^{2N} : \bar{G}_o(t)\, \bar{P}_x \bar{S}\, \bar{\hat{u}}_F \leq \bar{b}_o(t) - \bar{G}_o(t) \left( \bar{P}_x \bar{T}\, \hat{q}_F(0) + \bar{s}_{F,i} \right) \right\},$$
where $\bar{s}_{F,i} = \mathbb{1}_N \otimes s_{F,i} \in \mathbb{R}^{2N}$, while $\bar{G}_o(t)$ and $\bar{b}_o(t)$ are defined as in (48).
The follower is also in charge of preventing the transported load from collisions. This requirement, depending on both the leader and follower positions, yields the nonlinear constraint derived in the following. Consider the unit vector
$$y(t) := \frac{x_F(t) - x_L(t)}{\|x_F(t) - x_L(t)\|} \in \mathbb{R}^2$$
and the set $\{s_{o,i} \in \mathbb{R}^2,\ i = 1, \dots, N_v\}$ whose elements are the relative positions of the load vertices with respect to the follower position in an arbitrary fixed configuration, e.g., corresponding to the case $y(t) = [1 \;\; 0]^\top$. Given the actual positions of leader and follower, to ensure obstacle avoidance by the transported load, we have to consider the position $v_{o,i}(t)$ of each $i$-th vertex of the load, $i = 1, \dots, N_v$. In light of the given premises, it holds that
$$v_{o,i}(t) = h_{o,i}(x_L(t), x_F(t)) = x_F(t) + R_{L,F}(t)\, s_{o,i},$$
where
$$R_{L,F}(t) = \begin{bmatrix} y(t) & y^{\perp}(t) \end{bmatrix} \in SO(2), \quad \text{with} \quad y^{\perp}(t) = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} y(t).$$
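Under these definitions, the mapping from nominal load-vertex offsets to world coordinates can be sketched as follows; `load_vertices` is a hypothetical helper name, not part of the authors' code.

```python
import numpy as np

def load_vertices(x_L, x_F, s_load):
    # World-frame load vertices: y is the unit vector from leader to follower,
    # R_LF = [y, y_perp] in SO(2) rotates the nominal offsets s_load (defined
    # for the configuration y = [1, 0]^T), and x_F translates them.
    d = x_F - x_L
    y = d / np.linalg.norm(d)
    y_perp = np.array([[0.0, -1.0], [1.0, 0.0]]) @ y   # 90-degree rotation of y
    R = np.column_stack((y, y_perp))                   # rotation matrix R_{L,F}(t)
    return [x_F + R @ s for s in s_load]
```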
According to (42), collision with the $j$-th obstacle is then avoided if, for any $i = 1, \dots, N_v$, we have that
$$g_{o,j}(t)^\top v_{o,i}(t) \leq b_{o,j}(t),$$
and this constraint has to be included in the computation of $\hat{x}_F(t)$. As a consequence, observing that, for any $k = 1, \dots, N$, it holds that
$$\hat{v}_{o,i}(k) = \hat{x}_F(k) + R_{L,F}(t)\, s_{o,i} = P_x \left( \bar{T}_k\, \hat{q}_F(0) + \bar{S}_k\, \bar{\hat{u}}_F \right) + R_{L,F}(t)\, s_{o,i},$$
the constraint set can be rewritten in a more compact fashion, accounting for the whole prediction horizon and the complete set of obstacles, as:
$$\mathcal{X}_{o,o,i}(t) = \left\{ \bar{\hat{u}}_F \in \mathbb{R}^{2N} : \bar{G}_o\, \bar{P}_x \bar{S}\, \bar{\hat{u}}_F \leq \bar{b}_o(t) - \bar{G}_o \left( \bar{P}_x \bar{T}\, \hat{q}_F(0) + \bar{s}^R_{o,i} \right) \right\},$$
where $\bar{s}^R_{o,i} = \mathbb{1}_N \otimes \left( R_{L,F}(t)\, s_{o,i} \right) \in \mathbb{R}^{2N}$. Finally, the (convex) obstacle avoidance constraint set at sample time $t$ for the follower is given by the following intersection:
$$\mathcal{X}_{F,o}(t) := \bigcap_{i=1}^{N_v} \left( \mathcal{X}_{F,o,i}(t) \cap \mathcal{X}_{o,o,i}(t) \right).$$

5. Feasibility-Aware Recovery Policy

The decentralized strategy proposed in the previous sections entails that the trajectories of the leader and the follower are generated separately, accounting for the diverse constraints characterizing the two robots, whose relative positions with respect to the transported load are different. This approach may lead to undesired situations wherein the trajectory computed by the leader yields an infeasible optimization problem for the follower, or the resulting follower trajectory produces a formation error that exceeds the inherent elasticity of the grasp structure, thus causing load stresses and eventually load breakage or grasp detachment. Nonetheless, within the decentralized approach, it is possible to exploit the reciprocal communication between leader and follower to detect follower-infeasibility conditions and to carry out a recovery policy.
Follower-infeasibility conditions generally occur in the case of sharp bends or particularly cluttered environments. In light of this fact, in this section we propose a heuristic according to which the follower is able to assess whether its trajectory in the next $k_{rec} \in \mathbb{N}$, $k_{rec} \leq N$, samples would produce a formation error larger than a predefined threshold $\varepsilon \in \mathbb{R}_+$, and to communicate such information to the leader. Then, the leader corrects its behavior by generating a different (typically slower) trajectory, compatible with the follower (and load) dynamics, according to the following conservative optimization functional
$$J_{L,rec}(\hat{q}_L(\cdot)) := \sum_{k=0}^{N-1} \left\| P_x \left( \hat{q}_L(k+1) - \hat{q}_L(k) \right) \right\|^2 = \sum_{k=0}^{N-1} \left\| \hat{x}_L(k+1) - \hat{x}_L(k) \right\|^2,$$
which temporarily replaces (24) in the MPC computation. Algorithm 1, executed at each time instant $t$, formalizes the introduced heuristic.
Algorithm 1. Recovery policy.
1:  $\bar{\hat{x}}_L \leftarrow$ MPC leader prediction considering $J_L(\cdot)$ in (18)
2:  $\bar{\hat{x}}_F \leftarrow$ MPC follower prediction considering $J_F(\cdot)$ in (31)
3:  if $\left| \|\hat{x}_L(k) - \hat{x}_F(k)\|^2 - d_{L,F}^2 \right| > \varepsilon^2$ for any $k = 1, \dots, k_{rec}$ then
4:    $\bar{\hat{x}}_{L,rec} \leftarrow$ MPC leader prediction considering $J_{L,rec}(\cdot)$ in (58)
5:    $\bar{\hat{x}}_{F,rec} \leftarrow$ MPC follower prediction considering $J_F(\cdot)$ in (31) computed for $\bar{\hat{x}}_{L,rec}$
6:    $x_{L,d}(t) \leftarrow \hat{x}_{L,rec}(0)$
7:    $x_{F,d}(t) \leftarrow \hat{x}_{F,rec}(0)$
8:  else
9:    $x_{L,d}(t) \leftarrow \hat{x}_L(0)$
10:   $x_{F,d}(t) \leftarrow \hat{x}_F(0)$
11: end if
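A minimal numpy sketch of the switching logic follows; here `x_L_pred` and `x_F_pred` stand for the stacked MPC position predictions (one row per step), and the function names are our own illustrative choices, not taken from the authors' implementation. When `recovery_needed` returns True, the leader would re-solve its MPC with the conservative cost (58), here `J_L_rec`, in place of (24).

```python
import numpy as np

def J_L_rec(x_L_pred):
    # Conservative leader cost (58): sum of squared displacements between
    # consecutive predicted EE positions; minimizing it slows the leader down.
    diffs = np.diff(x_L_pred, axis=0)
    return float(np.sum(diffs ** 2))

def recovery_needed(x_L_pred, x_F_pred, d_LF, eps, k_rec):
    # Feasibility check of the recovery policy: trigger recovery when the
    # predicted formation error exceeds eps^2 within the next k_rec samples.
    for k in range(min(k_rec, len(x_L_pred))):
        e_LF = np.sum((x_L_pred[k] - x_F_pred[k]) ** 2) - d_LF ** 2
        if abs(e_LF) > eps ** 2:
            return True
    return False
```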

6. Numerical Results

This section is devoted to the validation of the proposed MPC solution for the trajectory planning problem within the context of cooperative transportation in a cluttered environment, as described in Section 2. It has already been highlighted how this application involves the accomplishment of three distinct and simultaneous goals, which need to be consistently accommodated by the control action: the leader is required to reach the target position $x^*_{L,d}$; the follower is required to maintain the distance $d_{L,F}$ from the leader; both agents are required to avoid the obstacles, and the follower also needs to prevent the transported load from collisions.
In the light of this fact, to assess the performance of the proposed solution, we account for the following three metrics:
  • the leader position $x_L(t)$ with respect to the target position $x^*_{L,d}$, under the assumption $x^*_{L,d} = 0_2$;
  • the quadratic formation error $e_{L,F}(t) = \|x_L(t) - x_F(t)\|^2 - d_{L,F}^2$ introduced in (28);
  • the index $d_{\ell,min} \in \mathbb{R}$ defined as
    $$d_{\ell,min} = \min_{\substack{i = 1, \dots, N_v \\ j = 1, \dots, M}} \|v_{\ell,i}(t) - o_j(t)\| - r_j, \qquad \ell = L, F, o,$$
    corresponding to the distance between the closest pair of $i$-th vertex and $j$-th obstacle boundary. Note that this index accounts for the obstacle radius, so collision is avoided for positive values of $d_{\ell,min}$.
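This clearance index can be evaluated directly from the vertex and obstacle data, e.g., with the following illustrative helper (a sketch, not taken from the authors' code):

```python
import numpy as np

def d_min(vertices, obstacles):
    # Clearance index (59): signed distance between the closest vertex and
    # obstacle boundary; positive values certify collision avoidance.
    return min(float(np.linalg.norm(v - o) - r)
               for v in vertices for o, r in obstacles)
```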
Hereafter, we report the results for three different obstacles configurations:
(i)
M = 2 large static identical obstacles, forming a narrow passage;
(ii)
M = 3 static obstacles, of different size;
(iii)
M = 24 obstacles, where one obstacle can move, while the others are fixed but characterized by different radii. Some of these obstacles, in particular, are placed contiguously to each other so as to form a narrow corridor.
For all the configurations, the leader and follower parameters are reported in Table 1.
Configuration (i)
Figure 2 reports the outcome of one of the performed numerical simulations, providing an overview of the scenario, the task evolution, and the behavior of the performance metrics. The two columns refer to the cases in which the recovery policy of Section 5 is employed (right) or not (left). For both cases, we report the planar trajectory of the leader and follower (first panel, at the top), the trend of their position components (second panel), the trend of the formation error $e_{L,F}(t)$ (third panel), and the trend of the index $d_{\ell,min}$ for $\ell = L, F, o$ (fourth panel).
The reported plots allow us to assess the performance of the proposed MPC-based trajectory generation strategy and, in particular, to appreciate the benefit deriving from the introduction of the feasibility-aware recovery policy. Indeed, it is possible to observe that, both with and without the recovery policy, the leader converges towards the target position (second panel in Figure 2). However, the employment of the recovery policy is motivated by the rapid increase of the leader–follower formation error at approximately t = 6 s, in correspondence to the narrow passage formed by the two obstacles: the modification of the MPC optimization function for the leader is effective in producing a smaller formation error (third panel), at the cost of a slightly slower convergence. Finally, the plots in the bottom panel confirm that, in both cases, the agents and the load are able to keep the desired distance from the obstacles. As a further remark, we notice that the most challenging region to traverse is the one closest to the obstacles, where the obstacle avoidance constraints change more quickly.
Configuration (ii)
This configuration differs from the previous one due to the presence of an additional (static) obstacle, whose position requires an aggressive maneuver by the leader–follower system.
The behavior in this scenario is highlighted by the results reported in Figure 3. As in the previous case, from the first and second panels, we observe that the leader reaches its target position both with (Figure 3b) and without (Figure 3a) the recovery policy. As pointed out before, near the obstacles the linear constraints change very quickly and, in this case, it is more difficult for the follower to keep the formation error low: the constraints imposed by the two large obstacles, together with the additional one lying along the predicted MRS trajectory, lead to a peak in the $e_{L,F}$ metric, since the follower needs to re-arrange its position accounting for both its own occupancy and that of the load. In this case, the feasibility-aware recovery policy turns out to be fundamental to limit the formation error: from the third panel, we notice that $e_{L,F}$ is drastically reduced by slowing down the formation when needed. Finally, obstacle avoidance is guaranteed in this case as well. Note that at around t = 8 s the load is very close to the constraint limit, which nonetheless does not pose a risk of physical collision as long as the radius also accounts for a certain safety margin.
Configuration (iii)
The last configuration here presented includes 23 static obstacles (two larger than the others) and a dynamic obstacle, for a total of M = 24. In particular, the static obstacles are arranged so as to form a corridor and two narrow passages, one of which is located at the target position; the moving obstacle, instead, travels along an unknown trajectory in the area in front of the target, as shown in Figure 4.
In this case, only the control with the recovery policy is discussed: the plots in Figure 4 show four snapshots of the MRS behavior, specifically at t = 8 s, t = 13.5 s, t = 22.5 s, and t = 30 s, while the performance indices reported in Figure 5 confirm the effectiveness of the proposed trajectory planning solution. As a general comment, in such a complex scenario the implementation of the feasibility-aware recovery policy ensures both the minimization of the formation error (note the different scale w.r.t. the plots in Figure 2 and Figure 3) and the collision avoidance, while fulfilling the velocity constraints. In detail, four dynamic phases deserve some comments:
  • t = 0–12 s: the MRS travels along the corridor while abiding by several constraints, with an impact on the performance index (59), which remains low; moreover, in this phase, the combined control action of leader and follower on the load results in a relatively higher formation error.
  • t = 12–20 s: the MRS goes through the narrow passage formed by the two large obstacles, getting very close to the upper obstacle at t = 14.5 s and to the lower one at t = 18 s.
  • t = 20–25 s: the motion proceeds in the open area, where the main issue is the presence of the mobile obstacle, which intersects the estimated MRS trajectory at t = 21 s, causing the leader to react by planning a motion correction slightly upwards.
  • t = 25–30 s: the final phase is characterized by the last maneuver of the MRS and by the action of the follower on the load, which is rotated in order to enter the goal area horizontally (hence the noisy behavior of the formation error).

7. Conclusions

Motivated by the cutting-edge research and application trends within MRS scenarios, we presented a decentralized strategy to perform cooperative transportation with two mobile ground robots. The procedure is based on a leader–follower scheme and exploits an MPC approach running in parallel on both agents: the leader, based on its own state, has the task of designing a potentially effective collision-free trajectory for the whole MRS towards the goal. The follower, instead, has to reconcile the leader command with its own and the load's constraints to ensure the feasibility of the overall transportation task. In case the trajectory computed by the leader is not feasible for the follower/load movement, the leader is able to switch to a conservative recovery policy that ensures task completion. The proposed approach is formally discussed and the overall control procedure is validated in a simulation framework that includes different scenarios, highlighting the effectiveness of the technique and the role of the proposed recovery policy.
Interestingly, several directions worth studying stem from the methodology presented in this work, mainly regarding the control of the interaction between the transporting robots and the load, the regulation of the load orientation at the target position, and the management of non-convex obstacles and highly dynamic environments.

Author Contributions

Conceptualization, B.E., M.P., and T.A.; methodology, B.E., M.P., T.A., N.L., G.M., and A.C.; formal analysis, N.L., G.M., and A.C.; writing—original draft preparation, B.E., M.P., and T.A.; writing—review and editing, N.L., G.M., and A.C.; project administration, A.C.; funding acquisition, A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by MIUR (Italian Ministry of Education, University, and Research) under the initiative “Departments of Excellence”, and by the University of Padova under the BIRD-SEED TSTARK and CAR projects.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rizk, Y.; Awad, M.; Tunstel, E.W. Cooperative heterogeneous multi-robot systems: A survey. ACM Comput. Surv. (CSUR) 2019, 52, 1–31. [Google Scholar] [CrossRef]
  2. Engelbrecht, D.; Steyn, N.; Djouani, K. Adaptive Virtual Impedance Control of a Mobile Multi-Robot System. Robotics 2021, 10, 19. [Google Scholar] [CrossRef]
  3. Julian, B.J.; Angermann, M.; Schwager, M.; Rus, D. Distributed robotic sensor networks: An information-theoretic approach. Int. J. Robot. Res. 2012, 31, 1134–1154. [Google Scholar] [CrossRef] [Green Version]
  4. Khan, A.; Rinner, B.; Cavallaro, A. Cooperative robots to observe moving targets. IEEE Trans. Cybern. 2016, 48, 187–198. [Google Scholar] [CrossRef] [PubMed]
  5. Lissandrini, N.; Michieletto, G.; Antonello, R.; Galvan, M.; Franco, A.; Cenedese, A. Cooperative optimization of UAVs formation visual tracking. Robotics 2019, 8, 52. [Google Scholar] [CrossRef] [Green Version]
  6. Tuci, E.; Alkilabi, M.H.; Akanyeti, O. Cooperative object transport in multi-robot systems: A review of the state-of-the-art. Front. Robot. AI 2018, 5, 59. [Google Scholar] [CrossRef] [PubMed]
  7. Feng, Z.; Hu, G.; Sun, Y.; Soon, J. An overview of collaborative robotic manipulation in multi-robot systems. Annu. Rev. Control. 2020, 49, 113–127. [Google Scholar] [CrossRef]
  8. Sabattini, L.; Secchi, C.; Cocetti, M.; Levratti, A.; Fantuzzi, C. Implementation of coordinated complex dynamic behaviors in multirobot systems. IEEE Trans. Robot. 2015, 31, 1018–1032. [Google Scholar] [CrossRef]
  9. Liu, Z.; Zhou, S.; Wang, H.; Shen, Y.; Li, H.; Liu, Y.H. A hierarchical framework for coordinating large-scale robot networks. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6672–6677. [Google Scholar]
  10. Bartolomei, L.; Karrer, M.; Chli, M. Multi-robot Coordination with Agent-Server Architecture for Autonomous Navigation in Partially Unknown Environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), Glasgow, UK, 23–28 August 2020. [Google Scholar]
  11. Chen, F.; Dimarogonas, D.V. Leader-follower Formation Control with Prescribed Performance Guarantees. IEEE Trans. Control. Netw. Syst. 2020, 8, 450–461. [Google Scholar] [CrossRef]
  12. Draganjac, I.; Miklić, D.; Kovačić, Z.; Vasiljević, G.; Bogdan, S. Decentralized control of multi-AGV systems in autonomous warehousing applications. IEEE Trans. Autom. Sci. Eng. 2016, 13, 1433–1447. [Google Scholar] [CrossRef]
  13. Yan, L.; Stouraitis, T.; Vijayakumar, S. Decentralized Ability-Aware Adaptive Control for Multi-robot Collaborative Manipulation. IEEE Robot. Autom. Lett. 2021, 6, 2311–2318. [Google Scholar] [CrossRef]
  14. Verginis, C.K.; Nikou, A.; Dimarogonas, D.V. Communication-based decentralized cooperative object transportation using nonlinear model predictive control. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018; pp. 733–738. [Google Scholar]
  15. Lissandrini, N.; Verginis, C.; Roque, P.; Cenedese, A.; Dimarogonas, D.V. Decentralized Nonlinear MPC for Robust Cooperative Manipulation by Heterogeneous Aerial-Ground Robots. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020. [Google Scholar]
  16. Inglett, J.E.; Rodríguez-Seda, E.J. Object transportation by cooperative robots. In Proceedings of the SoutheastCon 2017, Concord, NC, USA, 30 March–2 April 2017; pp. 1–6. [Google Scholar]
  17. Alkilabi, M.; Narayan, A.; Tuci, E. Cooperative object transport with a swarm of e-puck robots: Robustness and scalability of evolved collective strategies. Swarm Intell. 2017, 11, 185–209. [Google Scholar] [CrossRef]
  18. Chen, J.; Gauci, M.; Li, W.; Kolling, A.; Groß, R. Occlusion-based cooperative transport with a swarm of miniature mobile robots. IEEE Trans. Robot. 2015, 31, 307–321. [Google Scholar] [CrossRef]
  19. Neumann, M.A.; Chin, M.H.; Kitts, C.A. Object manipulation through explicit force control using cooperative mobile multi-robot systems. In Proceedings of the World Congress on Engineering and Computer Science, San Francisco, CA, USA, 22–24 October 2014; Volume 1. [Google Scholar]
  20. Pereira, G.A.; Campos, M.F.; Kumar, V. Decentralized algorithms for multi-robot manipulation via caging. Int. J. Robot. Res. 2004, 23, 783–795. [Google Scholar] [CrossRef] [Green Version]
  21. Alonso-Mora, J.; Baker, S.; Rus, D. Multi-robot formation control and object transport in dynamic environments via constrained optimization. Int. J. Robot. Res. 2017, 36, 1000–1021. [Google Scholar] [CrossRef]
  22. Alonso-Mora, J.; Montijano, E.; Nägeli, T.; Hilliges, O.; Schwager, M.; Rus, D. Distributed multi-robot formation control in dynamic environments. Auton. Robot. 2019, 43, 1079–1100. [Google Scholar] [CrossRef] [Green Version]
  23. Huzaefa, F.; Liu, Y.C. Centralized control architecture for cooperative object transportation using multiple omnidirectional AGVs. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 6526–6532. [Google Scholar]
  24. Rashid, M.; Yakub, F.; Zaki, S.; Ali, M.; Mamat, N.; Putra, S.; Roslan, S.; Shah, H.; Aras, M. Comprehensive review on controller for leader-follower robotic system. Indian J. Geo-Mar. Sci. 2019, 48, 985–1007. [Google Scholar]
  25. Rizzo, C.; Lagraña, A.; Serrano, D. GEOMOVE: Detached AGVs for Cooperative Transportation of Large and Heavy Loads in the Aeronautic Industry. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Azores, Portugal, 15–17 April 2020; pp. 126–133. [Google Scholar]
  26. Machado, T.; Malheiro, T.; Monteiro, S.; Erlhagen, W.; Bicho, E. Attractor dynamics approach to joint transportation by autonomous robots: Theory, implementation and validation on the factory floor. Auton. Robot. 2019, 43, 589–610. [Google Scholar] [CrossRef]
  27. Yang, X.; Watanabe, K.; Izumi, K.; Kiguchi, K. A decentralized control system for cooperative transportation by multiple non-holonomic mobile robots. Int. J. Control 2004, 77, 949–963. [Google Scholar] [CrossRef]
  28. Machado, T.; Malheiro, T.; Monteiro, S.; Erlhagen, W.; Bicho, E. Multi-constrained joint transportation tasks by teams of autonomous mobile robots using a dynamical systems approach. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 3111–3117. [Google Scholar]
  29. Bemporad, A. Model predictive control design: New trends and tools. In Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, CA, USA, 13–15 December 2006; pp. 6678–6683. [Google Scholar]
  30. Yi, B.; Bender, P.; Bonarens, F.; Stiller, C. Model predictive trajectory planning for automated driving. IEEE Trans. Intell. Veh. 2018, 4, 24–38. [Google Scholar] [CrossRef]
  31. Zuo, Z.; Yang, X.; Li, Z.; Wang, Y.; Han, Q.; Wang, L.; Luo, X. MPC-based Cooperative Control Strategy of Path Planning and Trajectory Tracking for Intelligent Vehicles. IEEE Trans. Intell. Veh. 2020. [Google Scholar] [CrossRef]
  32. Wu, H.; Si, Z.; Li, Z. Trajectory Tracking Control for Four-Wheel Independent Drive Intelligent Vehicle Based on Model Predictive Control. IEEE Access 2020, 8, 73071–73081. [Google Scholar] [CrossRef]
  33. Bemporad, A.; Rocchi, C. Decentralized linear time-varying model predictive control of a formation of unmanned aerial vehicles. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, 12–15 December 2011; pp. 7488–7493. [Google Scholar]
  34. Marzat, J.; Bertrand, S.; Eudes, A.; Sanfourche, M.; Moras, J. Reactive MPC for autonomous MAV navigation in indoor cluttered environments: Flight experiments. IFAC-PapersOnLine 2017, 50, 15996–16002. [Google Scholar] [CrossRef]
  35. Luis, C.E.; Vukosavljev, M.; Schoellig, A.P. Online trajectory generation with distributed model predictive control for multi-robot motion planning. IEEE Robot. Autom. Lett. 2020, 5, 604–611. [Google Scholar] [CrossRef] [Green Version]
  36. Wang, Y. Cooperative Transportation of Mobile Manipulators With Collision Avoidance. Dissertation. 2018. Available online: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231841 (accessed on 28 June 2021).
  37. Nikou, A.; Verginis, C.; Heshmati-Alamdari, S.; Dimarogonas, D.V. A nonlinear model predictive control scheme for cooperative manipulation with singularity and collision avoidance. In Proceedings of the 2017 25th Mediterranean Conference on Control and Automation (MED), Valletta, Malta, 3–6 July 2017; pp. 707–712. [Google Scholar]
  38. Zhu, H.; Claramunt, F.M.; Brito, B.; Alonso-Mora, J. Learning Interaction-Aware Trajectory Predictions for Decentralized Multi-Robot Motion Planning in Dynamic Environments. IEEE Robot. Autom. Lett. 2021, 6, 2256–2263. [Google Scholar] [CrossRef]
  39. Li, B.; Kong, Q.; Zhang, Y.; Shao, Z.; Wang, Y.; Peng, X.; Yan, D. On-road Trajectory Planning with Spatio-temporal RRT* and Always-feasible Quadratic Program. In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, China, 20–21 August 2020; pp. 942–947. [Google Scholar]
  40. Ramos, O.E. Optimal Control for Time and Energy Minimization in the Trajectory Generation of a Mobile Robot. In Proceedings of the 2019 IEEE XXVI International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Lima, Peru, 12–14 August 2019; pp. 1–4. [Google Scholar]
  41. Azzabi, A.; Nouri, K. Path planning for autonomous mobile robot using the Potential Field method. In Proceedings of the 2017 International Conference on Advanced Systems and Electric Technologies (IC_ASET), Hammamet, Tunisia, 4–17 January 2017; pp. 389–394. [Google Scholar]
  42. Coquet, C.; Arnold, A.; Bouvet, P.-J. Control of a Robotic Swarm Formation to Track a Dynamic Target with Communication Constraints: Analysis and Simulation. Appl. Sci. 2021, 11, 3179. [Google Scholar] [CrossRef]
  43. Azzabi, A.; Nouri, K. An advanced potential field method proposed for mobile robot path planning. Trans. Inst. Meas. Control 2019, 41, 3132–3144. [Google Scholar] [CrossRef]
  44. Tognon, M.; Gabellieri, C.; Pallottino, L.; Franchi, A. Aerial co-manipulation with cables: The role of internal force for equilibria, stability, and passivity. IEEE Robot. Autom. Lett. 2018, 3, 2577–2583. [Google Scholar] [CrossRef] [Green Version]
  45. Franchi, A.; Giordano, P.R.; Michieletto, G. Online Leader Selection for Collective Tracking and Formation Control: The Second-Order Case. IEEE Trans. Control. Netw. Syst. 2019, 6, 1415–1425. [Google Scholar] [CrossRef]
  46. Mousavi, M.A.; Heshmati, Z.; Moshiri, B. LTV-MPC based path planning of an autonomous vehicle via convex optimization. In Proceedings of the 2013 21st Iranian Conference on Electrical Engineering (ICEE), Mashhad, Iran, 14–16 May 2013; pp. 1–7. [Google Scholar]
Figure 1. Two different configurations of the MRS. Leader, follower and load are indicated with the polygonal colored area, while the avoidance circle due to the obstacle presence corresponds to the dashed line. (a) Initial position of MRS elements: definition of vertices and safe area (shaded in green). (b) The rotation of the load causes the MRS to move and the safe area changes accordingly.
Figure 2. Performance of the proposed MPC-based trajectory planning solution in correspondence of configuration ( i ) involving M = 2 obstacles.
Figure 3. Performance of the proposed MPC-based trajectory planning solution in correspondence of configuration ( i i ) involving M = 3 obstacles.
Figure 4. Leader and follower EE trajectory in correspondence of configuration ( i i i ) involving multiple obstacles. Snapshots are taken at t = 8 s, t = 13.5 s, t = 22.5 s, and t = 30 s.
Figure 5. Trend of the performance metrics in correspondence of configuration ( i i i ) involving multiple obstacles. The control approach implements the feasibility-aware policy.
Table 1. Numerical scenario parameters.

Scenario (Leader | Follower)
  • $u_{max}$ [m/s$^2$]: $0.8 \cdot \mathbb{1}_2$ | $2 \cdot \mathbb{1}_2$
  • $v_{max}$ [m/s]: $0.2 \cdot \mathbb{1}_2$ | $1 \cdot \mathbb{1}_2$
  • $d_{F,L}$ [m]: $1$ | $1$
  • $N_v$: $20$ | $20$
  • $\{s_{i}\}_{i=1}^{N_v}$ [m]: $0.32\,[\cos(0.3\,i)\;\;\sin(0.3\,i)]^\top$ | $[0\;\;0]^\top$
  • $\{s_{o,i}\}_{i=1}^{N_v}$ [m] (load): $[0.2-d_{F,L}\;\;0.2]^\top$, $[0.2\;\;0.2]^\top$, $[0.2\;\;-0.2]^\top$, $[0.2-d_{F,L}\;\;-0.2]^\top$, and $[0\;\;0]^\top$ for $i = 5, \dots, N_v$
Leader objective function
  • $W = I_4$
  • $R = 0.9\,I_2$
  • $Z = I_4$
Follower objective function
  • $c = 5000$
  • $\beta = 0.95$
Leader recovery objective function
  • $\varepsilon$ [m]: $0.01$
  • $k_{rec}$: $3$

Share and Cite

Elaamery, B.; Pesavento, M.; Aldovini, T.; Lissandrini, N.; Michieletto, G.; Cenedese, A. Model Predictive Control for Cooperative Transportation with Feasibility-Aware Policy. Robotics 2021, 10, 84. https://doi.org/10.3390/robotics10030084

