1. Introduction
Robotic and autonomous systems often exhibit nonlinear dynamics and operate in uncertain and disturbed environments. Planning and executing a trajectory is one of the most common ways for an autonomous system to achieve a mission. However, the presence of uncertainties and disturbances, together with the nonlinear dynamics, brings significant challenges to safe planning and execution of a trajectory. Built upon contraction theory and disturbance estimation, this paper presents a trajectory-centric learning control approach that allows for using different model learning tools to learn uncertain dynamics while providing guaranteed tracking performance in the form of exponential trajectory convergence throughout the learning phase. Our approach hinges on control contraction metrics (CCMs) and uncertainty estimation.
Related Work: Control design methods for uncertain systems can be roughly classified into adaptive/robust approaches and learning-based approaches. Robust approaches, such as H∞ control [1], sliding mode control [2], and robust/tube model predictive control (MPC) [3,4], usually consider parametric uncertainties or bounded disturbances and design controllers to achieve certain performance despite the presence of such uncertainties. Disturbance-observer (DOB)-based control and related methods such as active disturbance rejection control (ADRC) [5] usually lump parametric uncertainties, unmodeled dynamics, and external disturbances together as a "total disturbance", estimate it via an observer such as a DOB or an extended state observer (ESO) [5], and then compute control actions to compensate for the estimated disturbance [6]. On the other hand, adaptive control methods, such as model reference adaptive control (MRAC) [7] and L1 adaptive control [8], rely on online adaptation to estimate parametric or non-parametric uncertainties and use the estimated values in the control design to provide stability and performance guarantees. While significant progress has been made in the linear setting, trajectory tracking for nonlinear uncertain systems with analytically quantified transient performance guarantees, which is required for safety guarantees of robotic and autonomous systems, has seen less success.
Contraction theory [9] provides a powerful tool for analyzing general nonlinear systems in a differential framework and is focused on studying the convergence between pairs of state trajectories towards each other, i.e., incremental stability. It has recently been extended for constructive control design, e.g., via control contraction metrics (CCMs) [10,11]. Compared to incremental Lyapunov function approaches for studying incremental stability, contraction metrics present an intrinsic characterization of incremental stability (i.e., invariant under change of coordinates); additionally, the search for a CCM can be achieved using sum of squares (SOS) optimization or semidefinite programming [12] and DNN optimization [13,14]. For nonlinear uncertain systems, CCM has been integrated with adaptive and robust control to address parametric [15] and non-parametric uncertainties [16,17]. The issue of bounded disturbances in contraction-based control has been tackled through robust analysis [12] or synthesis [18,19]. For more work related to contraction theory and CCMs for nonlinear stability analysis and control synthesis, readers can refer to a recent tutorial paper [20] and the references therein.
On the other hand, recent years have witnessed an increased use of machine learning (ML) to learn dynamics models, which are then incorporated into control-theoretic approaches to generate the control law. For model-based learning control with safety and/or transient performance guarantees, most existing research relies on quantifying the learned model error and robustly handling such an error in the controller design, or on analyzing its effect on the control performance [21,22,23]. As a result, researchers have largely relied on Gaussian process regression (GPR) to learn uncertain dynamics, due to its inherent ability to quantify the learned model error [21,22]. Additionally, in almost all existing research, the control performance is directly determined by the quality of the learned model, i.e., a poorly learned model naturally leads to poor control performance. Deep neural networks (DNNs) were used to approximate state-dependent uncertainties in adaptive control designs in [24,25]. However, these results provide at most asymptotic (i.e., no transient) performance guarantees, and they investigate pure control problems without considering planning. Moreover, they either consider linear nominal systems or leverage feedback linearization to cancel the (estimated) nonlinear dynamics, which can only be done for fully actuated systems. In contrast, this paper considers the planning–control pipeline and does not attempt to cancel the nonlinearity, thereby allowing the systems to be underactuated. In [23,26], the authors used DNNs for batch-wise learning of uncertain dynamics from scratch; however, good tracking performance cannot be achieved when the learned uncertainty model is poor.
Statement of Contributions: We propose a contraction-based learning control architecture for nonlinear systems with matched uncertainties (depicted in Figure 1). The proposed architecture allows for using different ML tools, e.g., DNNs, to learn the uncertainties while guaranteeing exponential trajectory convergence under certain conditions throughout the learning phase. It leverages a disturbance estimator with a pre-computable estimation error bound (EEB) and a robust Riemann energy condition to compute the control signal. It is empirically shown that learning can help improve the robustness of the controller and facilitate better trajectory planning. We demonstrate the efficacy of the proposed approach using a planar quadrotor example.
This work builds on [16] with several key differences. The authors of [16] introduce a robust tracking controller utilizing CCM and disturbance estimation without involving model learning. In contrast, this work adapts the controller to handle scenarios where machine learning tools are used to learn the unknown dynamics, offering tracking performance guarantees throughout the learning phase. Additionally, this work empirically showcases the advantages of integrating learning to improve trajectory planning and strengthen the robustness of the closed-loop system, aspects that are not explored in [16].
Notations. Let ℝⁿ and ℝ^{m×n} denote the n-dimensional real vector space and the space of real m-by-n matrices, respectively; 0 and I denote a zero matrix and an identity matrix of compatible dimensions, respectively; and ‖·‖_∞ and ‖·‖ denote the ∞-norm and 2-norm of a vector/matrix, respectively. Given a vector y, let yᵢ denote its ith element. For a vector y and a matrix-valued function M(x), let ∂_y M denote the directional derivative of M along y. For symmetric matrices P and Q, P ≻ Q (P ⪰ Q) means P − Q is positive definite (semidefinite); P ≺ Q stands for Q ≻ P. Finally, we use ⊖ to denote the Minkowski set difference.
2. Preliminaries and Problem Setting
Consider a nonlinear control-affine system

ẋ = f(x) + B(x)(u + d(x)),   (1)

where x(t) ∈ X ⊂ ℝⁿ and u(t) ∈ U ⊂ ℝᵐ are the state and input vectors, respectively, f : ℝⁿ → ℝⁿ and B : ℝⁿ → ℝ^{n×m} are known functions that are locally Lipschitz, and d : ℝⁿ → ℝᵐ is an unknown function denoting the matched model uncertainties; B(x) is assumed to have full column rank for all x ∈ X. Additionally, X is a compact set that includes the origin, and U is the control constraint set defined as U ≜ {u ∈ ℝᵐ : u̲ᵢ ≤ uᵢ ≤ ūᵢ, i = 1, …, m}, where u̲ᵢ and ūᵢ represent the lower and upper bounds of the ith control channel, respectively.
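To make the structure of Equation (1) concrete, the following sketch simulates a hypothetical instance of the uncertain control-affine dynamics under a simple feedback law; the functions f, B, and d below are illustrative assumptions, not the system used in this paper.

```python
import numpy as np

# Hypothetical instance of the control-affine dynamics (1):
#   x_dot = f(x) + B(x) (u + d(x)),
# with f, B, and d chosen for demonstration only.
def f(x):                       # known drift dynamics
    return np.array([x[1], -np.sin(x[0])])

def B(x):                       # known input matrix (full column rank)
    return np.array([[0.0], [1.0]])

def d(x):                       # matched uncertainty (unknown to the controller)
    return np.array([0.3 * np.cos(x[0])])

def simulate(x0, u_fn, dt=1e-3, steps=5000):
    """Forward-Euler rollout of (1) under a state-feedback law u_fn."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (f(x) + B(x) @ (u_fn(x) + d(x)))
        traj.append(x.copy())
    return np.array(traj)

# A simple stabilizing feedback that ignores d, so a small offset remains.
traj = simulate([1.0, 0.0], lambda x: np.array([-2.0 * x[0] - 2.0 * x[1]]))
print(traj[-1])
```

Because the feedback does not compensate for d, the state settles near, but not exactly at, the origin; this residual offset is precisely what the disturbance estimation introduced later removes.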
Remark 1. The matched uncertainty assumption is common in adaptive control [7] or disturbance observer-based control [6] and is made in existing related work.

Assumption 1. There exist known positive constants l_d, b_d, and b_B such that, for any x, y ∈ X, the following holds:

‖d(x) − d(y)‖ ≤ l_d‖x − y‖, ‖d(x)‖ ≤ b_d, ‖B(x)‖ ≤ b_B.   (2)

Remark 2. Assumption 1 does not assume that the system states stay in X (and thus that d(x(t)) is bounded for all t). We will prove the boundedness of x later in Theorem 1. Assumption 1 merely indicates that d is locally Lipschitz with a known bound l_d on the Lipschitz constant and is bounded by a prior known constant b_d in the compact set X.
Assumption 1 is not very restrictive, as the local Lipschitz bound l_d for d can be conservatively estimated from prior knowledge. Additionally, given the local Lipschitz constant bound l_d and the compact set X, we can always derive a uniform bound using the Lipschitz property if a bound on ‖d(x̄)‖ at some point x̄ ∈ X is known. For example, supposing ‖d(x̄)‖ ≤ b̄, we have ‖d(x)‖ ≤ b̄ + l_d max_{x∈X}‖x − x̄‖ for all x ∈ X. In practice, leveraging prior knowledge about the system can result in a tighter bound than the one based on Lipschitz continuity. Thus, we assume a uniform bound. Under Assumption 1, it will be shown later (in Section 2.4) that the pointwise value of d(x(t)) at any time t can be estimated with pre-computable estimation error bounds (EEBs).
2.1. Learning Uncertain Dynamics
Given a collection of data points {(xⱼ, d(xⱼ))}, j = 1, …, N, with N denoting the number of data points, the uncertain function d(x) can be learned using ML tools. For demonstration purposes, we choose to use DNNs, due to their significant potential in dynamics learning attributed to their expressive power and the fact that they have rarely been explored for dynamics learning with safety and/or performance guarantees. Denoting the learned function as d̂(x) and the model error as d̃(x), the actual dynamics (1) can be rewritten as

ẋ = f(x) + B(x)(u + d̂(x) + d̃(x)),   (3)

where d̃(x) ≜ d(x) − d̂(x). The learned dynamics can now be represented as

ẋ = f(x) + B(x)(u + d̂(x)).   (4)

Remark 3. The above setting includes the special case of no learning, corresponding to d̂(x) ≡ 0.
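For illustration, the sketch below stands in for the DNN training step with a simple least-squares fit on polynomial features; the "true" uncertainty d, the set X, and the feature choice are hypothetical and are used only to generate and fit the data set {(xⱼ, d(xⱼ))}.

```python
import numpy as np

# Minimal sketch of learning the matched uncertainty d from data. The paper
# uses DNNs; here a least-squares fit on polynomial features stands in for
# the ML tool. The function d below is a hypothetical ground truth used only
# to generate the data set.
rng = np.random.default_rng(0)

def d(x):                                   # hypothetical true uncertainty
    return 0.3 * np.cos(x[:, 0]) + 0.1 * x[:, 1]

def features(x):                            # simple polynomial features
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0]**2, x[:, 0] * x[:, 1], x[:, 1]**2])

# Data set of N points sampled from the compact set X = [-1, 1]^2.
X_data = rng.uniform(-1.0, 1.0, size=(200, 2))
y_data = d(X_data)

theta, *_ = np.linalg.lstsq(features(X_data), y_data, rcond=None)
d_hat = lambda x: features(x) @ theta       # learned model

# Empirical model error d_tilde = d - d_hat on a dense grid of X.
g = np.linspace(-1.0, 1.0, 41)
X_grid = np.array([[a, b] for a in g for b in g])
err = np.max(np.abs(d(X_grid) - d_hat(X_grid)))
print(err)
```

Here the empirical maximum error on a dense grid plays the role of a crude uniform error bound; the assumption introduced next formalizes this requirement.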
Note that the performance guarantees provided by the proposed framework are agnostic to the model learning tools used, as long as the following assumption can be satisfied.
Assumption 2. We are able to obtain a uniform error bound for the learned function d̂, i.e., we can compute a constant ε such that

‖d(x) − d̂(x)‖ ≤ ε, ∀x ∈ X.   (5)

Remark 4. Assumption 2 can be easily satisfied when using a broad class of ML tools. For instance, when Gaussian processes are used, a uniform error bound (UEB) can be computed using the approach in [27]. When using DNNs, we can use spectral-normalized DNNs (SN-DNNs) [28] (to enforce that d̂ has a Lipschitz bound in X) and compute the UEB as follows. Obviously, since both d and d̂ have local Lipschitz bounds in X, the model error d̃ = d − d̂ has a local Lipschitz bound l_d̃ in X. As a result, given any point x ∈ X, we have

‖d̃(x)‖ ≤ ‖d̃(xⱼ)‖ + l_d̃‖x − xⱼ‖,

where xⱼ is one of the N data points. The preceding inequality implies that Equation (5) holds with ε = max_{x∈X} min_{1≤j≤N} (‖d̃(xⱼ)‖ + l_d̃‖x − xⱼ‖).

2.2. Problem Setting
The learned dynamics Equation (4) (including the special case of d̂ ≡ 0) can be incorporated into a motion planner or trajectory optimizer to plan a desired trajectory x⋆(t) that minimizes a specific cost function. Suppose Assumptions 1 and 2 hold. The focus of the paper includes
- (i) designing a feedback controller to track the desired state trajectory with guaranteed tracking performance despite the presence of the model error d̃(x);
- (ii) empirically demonstrating the benefits of learning in improving the robustness and reducing the cost associated with the actual trajectory.

In the following, we present some preliminaries on CCMs and uncertainty estimation used to build our solution.
2.3. CCM for the Nominal Dynamics
CCM extends contraction analysis [9] to controlled dynamic systems, where the analysis simultaneously seeks a controller and a metric that characterizes the contraction properties of the closed-loop system [10]. According to [10], a symmetric positive-definite matrix-valued function W(x) serves as a strong CCM for the nominal (uncertainty-free) system

ẋ = f(x) + B(x)u   (6)

in X, if there exist positive constants α₁, α₂, and λ such that

α₁I ⪯ W(x) ⪯ α₂I,   (7)

B_⊥ᵀ(−∂_f W + (∂f/∂x)W + W(∂f/∂x)ᵀ + 2λW)B_⊥ ≺ 0,   (8)

B_⊥ᵀ(∂_{b_j}W − (∂b_j/∂x)W − W(∂b_j/∂x)ᵀ)B_⊥ = 0,   (9)

hold for all x ∈ X and j = 1, …, m, where B_⊥(x) is a matrix whose columns span the null space of B(x)ᵀ and b_j denotes the jth column of B.
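For intuition, when f(x) = Ax with constant B and a constant dual metric W, the directional-derivative terms vanish, the killing condition (9) holds trivially, and condition (8) reduces to a matrix inequality that can be checked numerically. The sketch below does this for an illustrative double integrator; the system and metric are assumptions, not taken from this paper.

```python
import numpy as np

# For f(x) = A x, constant B, and a constant dual metric W, condition (8)
# reduces to  B_perp^T (A W + W A^T + 2*lambda*W) B_perp < 0, and the
# killing condition (9) holds trivially. A, B, W below are illustrative.
def null_basis(M):
    """Columns spanning the null space of M, via SVD."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:].T

def is_strong_ccm(A, B, W, lam):
    # Condition (7): W uniformly positive definite.
    if np.min(np.linalg.eigvalsh(W)) <= 0:
        return False
    B_perp = null_basis(B.T)                  # columns span null(B^T)
    G = B_perp.T @ (A @ W + W @ A.T + 2.0 * lam * W) @ B_perp
    return bool(np.max(np.linalg.eigvalsh(G)) < 0)  # condition (8)

A = np.array([[0.0, 1.0], [0.0, 0.0]])        # double integrator
B = np.array([[0.0], [1.0]])
W = np.array([[1.0, -2.0], [-2.0, 5.0]])      # candidate constant metric
print(is_strong_ccm(A, B, W, lam=1.0))        # True for this choice
```

In the general nonlinear case these pointwise inequalities become state-dependent, which is why the paper's references synthesize W via SOS programming rather than a single eigenvalue test.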
Assumption 3. There exists a strong CCM W(x) for the nominal system Equation (6) in X, i.e., W(x) satisfies Equations (7)–(9).

Remark 5. Similar to the synthesis of Lyapunov functions, given the dynamics, a strong CCM can be systematically synthesized using convex optimization, more specifically, sum of squares programming [10,12,18].

Given a CCM
, a feasible trajectory (x⋆(t), u⋆(t)) satisfying the nominal dynamics Equation (6), and the actual state x(t) at time t, the control signal can be constructed as follows [10,12]. At any t, compute a minimal-energy path (termed a geodesic) γ connecting x⋆(t) and x(t), e.g., using the pseudospectral method [29]. Note that the geodesic is always a straight line segment if the metric is constant. Next, compute the Riemann energy of the geodesic, defined as E ≜ ∫₀¹ γ_s(s)ᵀ M(γ(s)) γ_s(s) ds, where M ≜ W⁻¹ and γ_s ≜ ∂γ/∂s. Finally, by interpreting the Riemann energy as an incremental control Lyapunov function, we can construct a control signal u such that

Ė(x⋆, x, u⋆, u) ≤ −2λE(x⋆, x),   (10)

where Ė denotes the time derivative of the Riemann energy evaluated along the nominal and actual trajectories, with the dependence on t omitted for brevity. In practice, one may want to compute u with a minimal deviation from u⋆ such that Equation (10) holds, which can be achieved by setting u = u⋆ + k⋆ with k⋆ obtained via solving a quadratic programming (QP) problem [10,12]:

k⋆ = argmin_k ‖k‖² subject to Ė(x⋆, x, u⋆, u⋆ + k) ≤ −2λE(x⋆, x)   (11)

at each time t. Problem Equation (11) is commonly referred to as the pointwise minimum-norm control problem and possesses an analytic solution [30]. The performance guarantees provided by the CCM-based controller are summarized in Lemma 1 below.
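Since the energy rate Ė is affine in the input for control-affine systems, the constraint in Equation (11) has the generic form a + bᵀk ≤ 0, and the analytic minimum-norm solution can be sketched as follows; the scalars in the example are hypothetical stand-ins for the Riemann-energy terms.

```python
import numpy as np

# The constraint in a pointwise minimum-norm problem like Equation (11) is
# affine in the decision variable k:
#   minimize ||k||^2  subject to  a + b^T k <= 0.
# The analytic solution keeps k = 0 whenever the constraint is inactive and
# otherwise projects onto the constraint boundary (assumes b != 0 when a > 0).
def min_norm_control(a, b):
    b = np.asarray(b, dtype=float)
    if a <= 0.0:                     # condition already met by k = 0
        return np.zeros_like(b)
    return -(a / (b @ b)) * b        # smallest k achieving a + b^T k = 0

k = min_norm_control(1.5, [1.0, 2.0])
print(k)                             # [-0.3, -0.6]; check: 1.5 + b^T k = 0
```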
Lemma 1 ([10]). Suppose Assumption 3 holds for the nominal system Equation (6) with positive constants α₁, α₂, and λ. Then, a control law satisfying Equation (10) universally exponentially stabilizes the system Equation (6), which can be expressed mathematically as

E(x⋆(t), x(t)) ≤ e^{−2λt} E(x⋆(0), x(0)),

which in turn implies ‖x(t) − x⋆(t)‖ ≤ √(α₂/α₁) e^{−λt} ‖x(0) − x⋆(0)‖.

The following lemma from [12] establishes a bound on the tracking control effort from solving Equation (11).

Lemma 2 ([12], Theorem 5.2). For all (x⋆, x) such that E(x⋆, x) ≤ Ē for a scalar constant Ē, the control effort from solving Equation (11) is bounded by an explicit function of Ē involving the largest eigenvalue λ̄(·) of metric-related matrices and the smallest non-zero singular value σ̲(·) of B; see ([12], Theorem 5.2) for the exact expression.

2.4. Uncertainty Estimation with Computable Error Bounds
We leverage a disturbance estimator described in [16] to estimate the value of the uncertainty d(x(t)) at each time instant. More importantly, an estimation error bound can be pre-computed and systematically reduced by tuning a parameter of the estimator. The estimator comprises a state predictor and an update law. The state predictor is defined by

x̂̇ = f(x) + B(x)u + σ̂ − a x̃, x̂(0) = x(0),   (15)

where x̃ ≜ x̂ − x denotes the prediction error and a > 0 is a scalar constant. The estimate, σ̂(t), is given by a piecewise-constant update law:

σ̂(t) = σ̂(iT) = −(a/(e^{aT} − 1)) x̃(iT), t ∈ [iT, (i + 1)T),   (16)

where T is an estimation sampling time and i = 0, 1, 2, …. Finally, the value of d(x(t)) at time t is estimated as

d̂(t) = B⁺(x(t)) σ̂(t),   (17)

where B⁺(x) is the pseudoinverse of B(x). The following lemma establishes the estimation error bound associated with the disturbance estimator defined by (15) and (16). Lemma 3 can be considered a simplified version of (Lemma 4, [16]) in the sense that the uncertainty considered in [16] can explicitly depend on both time and states, i.e., is represented as d(t, x), whereas the uncertainty in this paper depends on states only. The proof is similar to that in [16] and is omitted for brevity.

Lemma 3 ([16]). Given the dynamics (1) subject to Assumption 1 and the disturbance estimator in (15) and (16), suppose x(τ) ∈ X for all τ ∈ [0, t]. Then, the estimation error can be bounded as

‖d(x(t)) − d̂(t)‖ ≤ γ(T), ∀t ∈ [T, ∞),   (19)

where γ(T) is a computable bound constructed from the constant ϕ and the constants l_d, b_d, and b_B from Assumption 1, and satisfies lim_{T→0} γ(T) = 0. Moreover, d̂(t) remains uniformly bounded for any t ≥ 0.

Remark 6. According to Lemma 3, the estimation error after a single sampling interval can be arbitrarily reduced by decreasing T.
Remark 7. As explained in (Remark 5, [16]), the error bound can be quite conservative, primarily due to the conservatism introduced in computing ϕ and the related constants. For practical implementation, one could benefit from empirical studies, such as conducting simulations with a selection of user-defined uncertainty functions, to identify a more refined bound than the one defined in Equation (19).

4. Simulation Results
We validate the proposed learning control approach on a 2D quadrotor introduced in [12]. We selected this example because the 2D quadrotor, although simpler than a 3D quadrotor, presents significant control challenges due to its nonlinear and unstable dynamics. Additionally, it offers a suitable scenario for demonstrating the applicability of the proposed control architecture, specifically, maintaining safety and reducing energy consumption in the presence of disturbances. MATLAB code can be found at https://github.com/boranzhao/de-ccm-w-learning (accessed on 12 May 2024). The dynamics of the vehicle are the standard planar-quadrotor equations with state x = (p_x, p_z, φ, v_x, v_z, φ̇), where p_x and p_z represent the positions in the x and z directions, respectively, v_x and v_z denote the lateral velocity and the velocity along the thrust axis in the body frame, and φ is the angle between the x direction of the body frame and the x direction of the inertial frame. The input vector u contains the thrust forces produced by the two propellers; m and J represent the mass and the moment of inertia about the out-of-plane axis, respectively; l denotes the distance between each propeller and the vehicle center; and d signifies the unknown disturbances exerted on the propellers. Specific parameter values were assigned to the mass m (in kg), the inertia J, and the arm length l (in m). We choose the disturbance d to vary with the vehicle's location according to a disturbance intensity field, whose value at a specific location is denoted by the color at this location in Figure 2. We consider three navigation tasks with different start and target points, while avoiding the three circular obstacles, as illustrated in Figure 2. The planned trajectories were computed using OptimTraj [31] to minimize a cost function J defined over the horizon [0, t_f], where t_f denotes the arrival time. The actual start points for Tasks 1–3 were intentionally set to be different from the desired ones used for trajectory planning.
4.1. Control Design
For computing a CCM, we parameterized the dual metric W and set the convergence rate λ to 0.8; we additionally enforced uniform upper and lower bound constraints on W. More details about synthesizing the CCM and computing the geodesic can be found in [18]. All subsequent computations and simulations except DNN training (which was done in Python using PyTorch) were performed in MATLAB R2021b. For estimating the disturbance using Equations (15)–(17), we fixed the estimator gain a. It is straightforward to confirm that the constants in Assumption 1 satisfy Equation (2), with the bound related to B being trivial due to the constant nature of B. By discretizing the space X into a grid, one can determine the constant ϕ in Lemma 3. Based on Equation (19), achieving an error bound of 0.1 requires a very small estimation sampling time T. However, as mentioned in Remark 7, the error bound calculated from Equation (19) might be overly conservative. Through simulations, we determined that a larger sampling time was sufficient to achieve the desired error bound, and we set T accordingly.
4.2. Performance and Robustness across Learning Transients
Figure 2 (top) illustrates the planned and actual trajectories under the proposed controller utilizing the RRE condition and disturbance estimation (referred to as DE-CCM) in the presence of no, moderate, and good learned models for the uncertain dynamics. For these results, we did not low-pass filter the estimated uncertainty, which is equivalent to setting the filter transfer function to 1 in Equation (24). Spectral-normalized DNNs [28] (Remark 4) with four inputs, two outputs, and four hidden layers were used for model learning. For training data collection, we planned and executed nine trajectories with different start and end points, as shown in Figure 3. The data collected during the execution of these trajectories were used to train the moderate model. However, these nine trajectories were still not enough to fully explore the state space, and sufficient exploration of the state space is necessary to learn a good uncertainty model. Thanks to the performance guarantee, the DE-CCM controller facilitates safe exploration, as demonstrated in Figure 3. For illustration purposes, we directly used the true uncertainty model to generate the data and used the generated data for training, which yielded the good model.
As depicted in Figure 2, the actual trajectories generated by DE-CCM exhibited the expected convergence towards the desired trajectories during the learning phase across all three tasks. The minor deviations observed between the actual and desired trajectories under DE-CCM can be attributed to the finite step size used in the ODE solver employed for the simulations (refer to Remark 14). The planned trajectory for Task 2 in the moderate learning case seemed unusual near the end point; this is because the learned model was not accurate in that area due to a lack of sufficient exploration. Nevertheless, with the DE-CCM controller, the quadrotor was still able to track the trajectory.

Figure 4 depicts the trajectories of the true, learned, and estimated disturbances in the presence of no and good learning for Task 1; the trajectories for Tasks 2 and 3 are similar and thus omitted. One can see that the estimated disturbances were always fairly close to the true disturbances. Also, the area with high disturbance intensity was avoided under good learning, which explains the smaller disturbance encountered.

Figure 5 shows the trajectories of the Riemann energy E in the presence of no and good learning for Task 1; the trajectories for Tasks 2 and 3 are similar and thus omitted. It is evident that E under DE-CCM decreased exponentially across all scenarios, irrespective of the quality of the learned model. For comparison, we implemented three additional controllers, namely, a vanilla CCM controller that disregards the uncertainty or learned model error, an adaptive CCM (Ad-CCM) controller from [15] with true and polynomial regressors, and a robust CCM (RCCM) controller that can be seen as a special case of a DE-CCM controller with the uncertainty estimate equal to zero and the EEB equal to the disturbance bound b_d. The Ad-CCM controller [15] needs a parametric structure for the uncertainty in the form of d(x) = Φ(x)θ, with Φ(x) being a known regressor matrix and θ being the vector of unknown parameters. For the no-learning case, we assume that we know the regressor matrix for the original uncertainty d(x). For the learning cases, since we do not know the regressor matrix for the learned model error d̃(x), we used a second-order polynomial regressor matrix.

Figure 2 (bottom) shows the tracking control performance yielded by these additional controllers for Task 1 under the different learning scenarios. The observed trajectories resulting from the CCM controller showed significant deviations from the planned trajectories and occasionally encountered collisions with obstacles, except in the case of good learning. Additionally, Ad-CCM yielded poor tracking performance in the moderate learning case. The poor performance could be attributed to the fact that the uncertainty may not have a parametric structure or, even if it does, the selected regressor may not be sufficient to represent it. RCCM achieved performance similar to that of the proposed method but showed weaker robustness against control input delays, as demonstrated later.
4.3. Improved Planning and System Robustness with Learning
Figure 6 shows the costs J associated with the actual trajectories achieved by DE-CCM under different learning qualities. As expected, the good model helped plan better trajectories, leading to reduced costs for all three tasks. It is not a surprise that the poor and moderate models led to a temporary increase in the costs for some tasks. In practice, we may not use a poorly learned model directly for planning trajectories; DE-CCM ensures that, in case one really does so, the planned trajectories can still be tracked well.

The role of the low-pass filter in protecting system robustness in the no-learning case is illustrated in Appendix A. We next tested the robustness of RCCM and DE-CCM against input delays under different learning scenarios. Note that, unlike linear systems, for which gain and phase margins are commonly used as robustness criteria, we often evaluate the robustness of nonlinear systems in terms of their tolerance of input delays. Under an input delay of Δt, the plant dynamics Equation (1) become ẋ(t) = f(x(t)) + B(x(t))(u(t − Δt) + d(x(t))). For these experiments, we leveraged a low-pass filter to filter the estimated disturbance following Equation (24). In principle, the EEB of 0.1 used in the previous experiments would not hold anymore, as the presence of the filter would lead to a larger error bound according to Equation (27). However, we kept using the same EEB of 0.1, as the theoretical bound according to Equation (27) could be quite conservative. The Riemann energy, which indicates the tracking performance for all states, is shown in Figure 7. One can see that, under both delay cases, DE-CCM achieved smaller and less oscillatory Riemann energy than RCCM, indicating better robustness and tracking performance.
Additionally, the robustness of DE-CCM against input delays in the presence of good learning is significantly improved compared to the no-learning case, which illustrates the benefits of incorporating learning. This can be explained as follows. The input delay may cause the disturbance estimate to be highly oscillatory and induce a large discrepancy between the estimate and the true model error. The low-pass filter can remove the high-frequency oscillatory component of the estimate. Under good learning, according to Equation (24), the learned model approaches the true uncertainty; as a result, the filtered disturbance estimate defined in Equation (24) can be much closer to the true uncertainty, leading to improved robustness and performance compared to the no- and moderate-learning cases.
5. Conclusions
This paper presents a disturbance estimation-based contraction control architecture that allows for using model learning tools (e.g., a neural network) to learn uncertain dynamics while guaranteeing exponential trajectory convergence during learning transients under certain conditions. The architecture uses a disturbance estimator to estimate the value of the uncertainty, i.e., the difference between the nominal dynamics and the actual dynamics, with pre-computable estimation error bounds (EEBs), at each time instant. The learned dynamics, the estimated disturbances, and the EEBs are then incorporated into a robust Riemann energy condition, which is used to compute the control signal that guarantees exponential convergence to the desired trajectory throughout the learning phase. We also show that learning can facilitate better trajectory planning and improve the robustness of the closed-loop system, e.g., against input delays. The proposed framework is validated on a planar quadrotor example.
Future directions could involve addressing broader uncertainties, especially unmatched uncertainties prevalent in practical systems, minimizing the conservatism of the estimation error bound, and demonstrating the efficacy of the proposed control framework with alternative model learning tools.