Article

Trajectory Tracking of Nonlinear Systems with Convex Input Constraints Based on Tracking Control Lyapunov Functions

Department of Robotics and Mechatronics, Tokyo Denki University, 5 Senju-Asahi-cho, Adachi-ku, Tokyo 120-8551, Japan
Appl. Sci. 2024, 14(11), 4377; https://doi.org/10.3390/app14114377
Submission received: 5 April 2024 / Revised: 13 May 2024 / Accepted: 16 May 2024 / Published: 22 May 2024
(This article belongs to the Special Issue Advanced Control Systems and Applications)

Abstract

Trajectory tracking control of input-constrained systems is an essential problem in many control applications, including robotics. In this paper, we propose a constrained tracking controller for input affine nonlinear systems with convex input constraints based on tracking control Lyapunov functions (TCLFs). To deal with general convex input constraints, we first solve a convex optimization problem that minimizes the time derivative of the TCLF subject to the convex input constraints; we refer to its optimal solution as the minimizing input. Then, the proposed tracking controller is constructed by using the minimizing input and an appropriate scaling function. We prove that the proposed controller locally achieves trajectory tracking and satisfies the given convex input constraints. Finally, we demonstrate the effectiveness of the proposed controller by numerical simulations of a wheeled mobile robot.

1. Introduction

In nonlinear control theory, control Lyapunov functions (CLFs) [1] and CLF-based controller designs [2,3] are among the standard approaches to asymptotic stabilization. Once a CLF of a controlled system is obtained, we can constructively design asymptotically stabilizing state feedback controllers. Moreover, the designed controllers minimize meaningful cost functionals and guarantee robustness in the sense of stability margins.
When applying control theory to real-world control systems, it is critical to consider existing input constraints due to physical limitations or safety reasons. In the CLF-based framework, asymptotic stabilization of input-constrained systems is also well-studied. Specifically, for systems with norm input constraints, some stabilizing state feedback controllers have been proposed [4,5,6,7,8]. Although the norm input constraints are an important class, they are too restrictive for a controller design that utilizes input transformations, including trajectory tracking control, as described later.
As a more general class containing the norm input constraints, convex input constraints, where the control inputs are constrained to known compact convex sets, are also of interest [9]. In [10,11,12], global asymptotic stabilization of nonlinear systems with convex input constraints based on CLFs is studied. Since bounded controllers may not achieve global stabilization, a locally asymptotically stabilizing controller is also proposed in [13]. The basic idea of the controller design of [13] is to find the state feedback that minimizes the time derivative of a given CLF, which we call the minimizing input. Under the convex input constraints, the minimizing input is characterized as the optimal solution of a convex optimization problem. This approach is also extended to a discontinuous stabilizing controller design [14] and an inverse optimal controller design [15].
Trajectory tracking is an essential control problem in many control applications, including robotics. Specifically, trajectory tracking of autonomous vehicles has been extensively studied, and many tracking controller design methods have been proposed (see, e.g., [16,17,18,19,20,21]). Moreover, tracking controller design for input-constrained vehicles is also discussed [22,23,24]. From the viewpoint of nonlinear control theory, trajectory tracking is studied mainly for specified classes of nonlinear control systems, such as feedback linearizable systems [25], port Hamiltonian systems [26,27], and incrementally passive systems [28]. Although nonlinear model predictive control (NMPC) can provide a general framework for constrained trajectory tracking [29], computational costs for real-time implementation are still an issue. Consequently, a general low-computational cost framework for constrained tracking is required.
Recently, tracking control Lyapunov functions (TCLFs) [16,30,31], an extension of CLFs to trajectory tracking control, have attracted attention. When there are no input constraints, some TCLF-based tracking controllers have been proposed by extending the CLF-based stabilizing feedback controllers [32,33,34]. However, the CLF-based constrained stabilizing controllers cannot be directly extended to trajectory tracking of input-constrained systems. This is due to the input transformations used to transform the original nonlinear systems into the corresponding error systems. In particular, the norm constraints are not invariant under the input transformations, i.e., even if the original systems have norm input constraints, the input constraints for the transformed error systems are no longer simple norm constraints. Therefore, to design a constrained trajectory-tracking controller, we need to consider a class of input constraints that is invariant under input transformations.
Based on the above background, this article proposes a general controller design framework for trajectory tracking of nonlinear systems with convex input constraints based on TCLFs by extending the controller design method of [13]. The primary contributions of this article are summarized as follows:
  • To extend the controller design method of [13] to trajectory tracking, we first show that the convex input constraints are invariant under the input transformations, i.e., the input constraints for the transformed error systems are also convex.
  • We formulate a convex optimization problem to characterize the minimizing input for the error systems with convex input constraints.
  • Based on the minimizing input, we propose a continuous constrained tracking controller by introducing an appropriate scaling function.
  • Through numerical simulations, we can confirm that the proposed tracking controller achieves trajectory tracking and satisfies the given input constraints. Compared to a Sontag-type (unconstrained) tracking controller [30,31], we can also confirm that the proposed controller reduces the tracking error more efficiently.
The rest of the article is organized as follows. Section 2 introduces the basic definition and results on trajectory tracking of nonlinear systems, specifically based on the Lyapunov-based approach. The problem addressed in this article is formulated in Section 3. In Section 4, we derive the minimizing input, which minimizes the time derivative of a TCLF subject to a given convex input constraint. Based on the minimizing input, we propose the constrained tracking controller in Section 5. Then, Section 6 confirms the effectiveness of the proposed controller through numerical simulations. Finally, brief conclusions are given in Section 7.

Notation

Let $\mathbb{R}$ and $\mathbb{R}^n$ be the set of real numbers and the $n$-dimensional real vector space, respectively. Let $B^n(y_0, r) := \{\, y \in \mathbb{R}^n \mid \| y - y_0 \| < r \,\} \subset \mathbb{R}^n$. For a set $C \subset \mathbb{R}^n$, its closure, boundary, and interior are denoted by $\bar{C}$, $\partial C$, and $\mathring{C}$, respectively. Let $2^C$ be the power set of $C$, i.e., the set of all subsets of $C$. The set $C$ is said to be as follows:
  • Convex, if for any $y_1, y_2 \in C$ and $\lambda \in [0, 1]$, $(1 - \lambda) y_1 + \lambda y_2 \in C$ holds;
  • Strictly convex, if for any $y_1, y_2 \in C$ with $y_1 \neq y_2$ and $\lambda \in (0, 1)$, $(1 - \lambda) y_1 + \lambda y_2 \in \mathring{C}$ holds [35].
A function $f : \mathbb{R}^n \to \mathbb{R}$ is said to be convex if for any $y_1, y_2 \in \mathbb{R}^n$ and $\lambda \in [0, 1]$, the inequality $f((1 - \lambda) y_1 + \lambda y_2) \le (1 - \lambda) f(y_1) + \lambda f(y_2)$ holds. Moreover, $f$ is said to be strictly convex if the above inequality is strict whenever $y_1 \neq y_2$ and $\lambda \in (0, 1)$.

2. Trajectory Tracking of Nonlinear Control Systems

In this section, we introduce the basics of Lyapunov-based trajectory tracking control of nonlinear systems.
Let us consider the following input affine nonlinear system:
$\dot{x} = f(x) + g(x) u = f(x) + \sum_{i=1}^{m} g_i(x) u_i,$ (1)
where $x \in \mathbb{R}^n$ is the state and $u \in \mathbb{R}^m$ is the control input. The mappings $f, g_i : \mathbb{R}^n \to \mathbb{R}^n$ are assumed to be locally Lipschitz continuous.
Assumption 1 
(Desired trajectory). Let $x_d : [0, \infty) \to \mathbb{R}^n; \ t \mapsto x_d(t)$ be a $C^2$ desired trajectory for system (1) such that
$\dot{x}_d(t) = f(x_d(t)) + g(x_d(t)) u_r(t), \quad \forall t \in [0, \infty),$ (2)
where $u_r : [0, \infty) \to \mathbb{R}^m; \ t \mapsto u_r(t)$ is the corresponding $C^1$ reference input.
Throughout the paper, we suppose that $\dot{x}_d(t)$, $\ddot{x}_d(t)$, $u_r(t)$, and $\dot{u}_r(t)$ are bounded on $[0, \infty)$.
By using the tracking error $e(t) := x(t) - x_d(t)$ and the new control input $\tilde{u} := u - u_r(t)$, system (1) can be transformed into the following time-varying error system:
$\dot{e} = \tilde{f}(t, e) + \tilde{g}(t, e)\, \tilde{u},$ (3)
where
$\tilde{f}(t, e) := f(e + x_d(t)) - f(x_d(t)) + \left[ g(e + x_d(t)) - g(x_d(t)) \right] u_r(t),$ (4)
$\tilde{g}(t, e) := g(e + x_d(t)).$ (5)
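For completeness, substituting $u = \tilde{u} + u_r(t)$ into (1) and subtracting (2) verifies (3)–(5) directly:
$\dot{e} = \dot{x} - \dot{x}_d(t) = f(x) + g(x)\left( \tilde{u} + u_r(t) \right) - f(x_d(t)) - g(x_d(t)) u_r(t) = \underbrace{f(e + x_d(t)) - f(x_d(t)) + \left[ g(e + x_d(t)) - g(x_d(t)) \right] u_r(t)}_{\tilde{f}(t, e)} + \underbrace{g(e + x_d(t))}_{\tilde{g}(t, e)}\, \tilde{u}.$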
Then, the trajectory tracking control problem of system (1) is to design a time-varying continuous state feedback controller $\tilde{u} = \tilde{k}(t, e)$ such that the origin $e = 0$ of the error system (3) is asymptotically stable.
Remark 1. 
Since $e(t) = 0$ is equivalent to $x(t) = x_d(t)$, the asymptotic stability of $e = 0$ of the error system (3) implies that the following holds for the original system (1):
  • For every initial condition $x_0 \in \mathbb{R}^n$, $x(t)$ is bounded for all $t \in [0, \infty)$;
  • $x(t)$ converges to $x_d(t)$ as $t \to \infty$.
Moreover, a trajectory-tracking controller achieving the above conditions is given by the following input transformation:
$u = k(t, x) = \tilde{k}(t, x - x_d) + u_r(t),$ (6)
where $\tilde{u} = \tilde{k}(t, e)$ is an asymptotically stabilizing controller for the error system (3).
To design an asymptotically stabilizing controller $\tilde{u} = \tilde{k}(t, e)$, the tracking control Lyapunov function (TCLF), defined as follows, plays an important role:
Definition 1 
(TCLF [30,31]). Let $D \subseteq \mathbb{R}^n$ be a neighborhood of $e = 0$. Then, a tracking control Lyapunov function (TCLF) for the error system (3) defined on $D$ is a $C^1$ function $V : [0, \infty) \times D \to \mathbb{R}$ satisfying the following conditions:
(B1)
There exist positive definite proper functions $\underline{V}, \overline{V} : D \to \mathbb{R}$ such that
$\underline{V}(e) \le V(t, e) \le \overline{V}(e), \quad \forall t \ge 0, \ \forall e \in D;$ (7)
(B2)
There exists a positive definite function $Q : D \to \mathbb{R}$ such that
$\inf_{\tilde{u}} \left[ \dfrac{\partial V}{\partial t} + L_{\tilde{f}} V + L_{\tilde{g}} V \cdot \tilde{u} \right] < -Q(e), \quad \forall e \neq 0,$ (8)
where
$L_{\tilde{f}} V(t, e) = \dfrac{\partial V}{\partial e} \tilde{f}(t, e), \quad L_{\tilde{g}} V(t, e) = \dfrac{\partial V}{\partial e} \tilde{g}(t, e) = \left[ L_{\tilde{g}_1} V, \ldots, L_{\tilde{g}_m} V \right], \quad L_{\tilde{g}_i} V(t, e) = \dfrac{\partial V}{\partial e} \tilde{g}_i(t, e) \quad (i = 1, \ldots, m).$ (9)
Moreover, if the conditions (B1)–(B2) hold on $D = \mathbb{R}^n$, $V(t, e)$ is said to be a global TCLF.
By using a TCLF V ( t , e ) , we can construct the following Sontag-type trajectory-tracking controller:
Theorem 1 
(Theorem 5.5 of [30]). Let $V(t, e)$ be a TCLF for the error system (3). Then, the state feedback controller
$\tilde{u} = \tilde{k}(t, e) := -p_1(t, e) \left( L_{\tilde{g}} V \right)^{\mathsf{T}},$ (10)
$p_1(t, e) := \begin{cases} \dfrac{\omega_1 + \sqrt{\omega_1^2 + \| L_{\tilde{g}} V \|^4}}{\| L_{\tilde{g}} V \|^2} & (L_{\tilde{g}} V \neq 0) \\ 0 & (L_{\tilde{g}} V = 0), \end{cases}$ (11)
$\omega_1(t, e) := \dfrac{\partial V}{\partial t} + L_{\tilde{f}} V,$ (12)
locally asymptotically stabilizes the origin $e = 0$ of (3). Moreover, if $V(t, e)$ is a global TCLF, the controller (10) globally asymptotically stabilizes the origin.
Remark 2. 
Although the state feedback controller (10) achieves trajectory tracking, it may not be able to satisfy existing input constraints. This is confirmed by computer simulations in Section 6.4.
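For illustration, a minimal numerical sketch of the Sontag-type feedback (10)–(12) could be written as follows; the values omega1 and LgV (i.e., $\omega_1(t, e)$ and $L_{\tilde{g}} V(t, e)$) are assumed to be computed by the user for the system at hand.

```python
import numpy as np

def sontag_tracking_input(omega1, LgV, eps=1e-12):
    """Sontag-type tracking feedback (10)-(12).

    omega1 : float          value of w1(t, e) = dV/dt + L_f~ V
    LgV    : (m,) ndarray   row vector L_g~ V(t, e)
    Returns the unconstrained input u~ = -p1 * LgV^T.
    """
    norm2 = float(np.dot(LgV, LgV))        # ||LgV||^2
    if norm2 < eps:                         # LgV = 0 branch of (11)
        return np.zeros_like(LgV)
    p1 = (omega1 + np.sqrt(omega1**2 + norm2**2)) / norm2
    return -p1 * LgV
```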

3. Problem Formulation

In this article, we focus on trajectory tracking of the following input-constrained nonlinear system:
$\dot{x} = f(x) + g(x) u, \quad u \in U(x) \subset \mathbb{R}^m,$ (13)
where U ( x ) is a convex input constraint defined as follows:
Definition 2 
(Convex input constraint). A continuous set-valued map $U : \mathbb{R}^n \to 2^{\mathbb{R}^m}$ is said to be a convex input constraint if the following conditions hold:
(H1)
$U(x)$ is defined by $U(x) := \{\, u \in \mathbb{R}^m \mid G(x, u) \le 0 \,\}$, where the mapping $G : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ is $C^1$ on $\mathbb{R}^n \times \mathbb{R}^m$ and strictly convex with respect to $u$;
(H2)
$u = 0 \in \mathring{U}(x), \ \forall x \in \mathbb{R}^n$;
(H3)
There exists a compact set $\overline{U} \subset \mathbb{R}^m$ such that $U(x) \subseteq \overline{U}, \ \forall x \in \mathbb{R}^n$.
Since level sets of strictly convex functions are strictly convex sets, $U(x)$ is a compact strictly convex set with a $C^1$ boundary for each fixed $x \in \mathbb{R}^n$.
Example 1 
(Norm input constraint). The following norm input constraint is a simple example of a convex input constraint:
$\| u \| = \sqrt{u^{\mathsf{T}} u} \le \alpha,$ (14)
where $\alpha > 0$ is a constant. This input constraint is represented as $U = \{\, u \in \mathbb{R}^m \mid G_1(x, u) \le 0 \,\}$, where $G_1(x, u)$ is given by
$G_1(x, u) = \| u \| - \alpha.$ (15)
Note that the expression of $U(x)$ is not unique. In fact, by using
$G_2(x, u) = \| u \|^2 - \alpha^2,$ (16)
the constraint (14) is also represented by
$U = \{\, u \in \mathbb{R}^m \mid G_2(x, u) \le 0 \,\}.$ (17)
Example 2. 
Let us consider the following state-dependent constraint for $u \in \mathbb{R}$:
$-\underline{a}(x) \le u \le \overline{a}(x),$ (18)
where $\underline{a}(x)$ and $\overline{a}(x)$ are continuous non-negative functions.
If the constraint is symmetric, i.e., there exists $a(x)$ such that $\underline{a}(x) = \overline{a}(x) = a(x)$, the constraint (18) reduces to the following (state-dependent) norm constraint:
$| u | \le a(x).$ (19)
In the same manner as Example 1, this constraint is also represented in the form of (H1) with $G(x, u) = | u | - a(x)$. Note, however, that asymmetric constraints cannot be represented in the form of a simple norm constraint.
In contrast, we can represent asymmetric constraints as convex constraints. Let $b(x)$ be the function defined by
$b(x) = -\dfrac{\overline{a}(x) - \underline{a}(x)}{2}.$ (20)
Then, the constraint (18) becomes
$-\dfrac{\overline{a}(x) + \underline{a}(x)}{2} \le u + b(x) \le \dfrac{\overline{a}(x) + \underline{a}(x)}{2}.$ (21)
This is equivalent to
$| u + b(x) | \le \dfrac{\overline{a}(x) + \underline{a}(x)}{2},$ (22)
and hence, the corresponding function $G(x, u)$ of (H1) is given by
$G(x, u) = | u + b(x) | - \dfrac{\overline{a}(x) + \underline{a}(x)}{2}.$ (23)
Note that (23) is not a simple norm constraint, but it is a convex constraint on $u$. This example shows that convex constraints can deal with asymmetric input constraints. Note also that this type of asymmetric state-dependent constraint appears in some practical problems, such as vehicle tracking control subject to tire slip angle constraints (see, e.g., [36]).
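As a quick numerical sanity check of this conversion (a minimal sketch with hypothetical constant bounds $\underline{a} = 1$ and $\overline{a} = 3$ chosen only for illustration), one can verify that (18) and (22) select exactly the same inputs:

```python
import numpy as np

# Hypothetical asymmetric bounds for illustration: -1 <= u <= 3.
a_lower, a_upper = 1.0, 3.0
b = (a_lower - a_upper) / 2.0            # b(x) of (20), here a constant
half_width = (a_upper + a_lower) / 2.0   # right-hand side of (22)

u_grid = 0.25 * np.arange(-20, 21)       # exactly representable grid points
original  = (-a_lower <= u_grid) & (u_grid <= a_upper)   # constraint (18)
converted = np.abs(u_grid + b) <= half_width             # constraint (22)
assert np.array_equal(original, converted)
```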
We introduce the following additional assumption to consider the input-constrained trajectory tracking problem:
Assumption 2. 
Let $x_d(t)$ and $u_r(t)$ be a reference trajectory and the corresponding reference input satisfying Assumption 1, respectively. Then, we assume that the following holds:
$u_r(t) \in \mathring{U}(x), \quad \forall (t, x) \in [0, \infty) \times \mathbb{R}^n.$ (24)
According to the discussion in Section 2, system (13) can be transformed into the following constrained error system:
$\dot{e} = \tilde{f}(t, e) + \tilde{g}(t, e)\, \tilde{u}, \quad \tilde{u} \in \tilde{U}(t, e),$ (25)
where
$\tilde{U}(t, e) := \{\, \tilde{u} \in \mathbb{R}^m \mid \tilde{G}(t, e, \tilde{u}) \le 0 \,\},$ (26)
$\tilde{G}(t, e, \tilde{u}) := G(e + x_d(t), \tilde{u} + u_r(t)).$ (27)
It should be noted that the original convex input constraint $u \in U(x)$ is transformed into the time-varying input constraint $\tilde{u} \in \tilde{U}(t, e)$ due to the input transformation $\tilde{u} = u - u_r(t)$. In the following, we refer to this constraint as the time-varying convex input constraint.
Remark 3. 
If the original input constraint is $\| u \| \le \alpha$, the transformed constraint for $\tilde{u}$ becomes
$\| \tilde{u} + u_r(t) \| \le \alpha.$ (28)
Note that (28) is a “center-shifted” norm constraint [13] and is not a simple norm constraint for $\tilde{u}$. This shows that norm input constraints are not invariant under the input transformation $\tilde{u} = u - u_r(t)$. The important point here is that the constraint (28) is still convex in $\tilde{u}$. This is the motivation for introducing convex input constraints in this article.
Under Assumption 2, we can prove the following lemma, which shows the invariance of convex input constraints under the input transformation:
Lemma 1. 
Let $U(x)$ be a convex input constraint, and let Assumption 2 hold. Then, the time-varying convex input constraint $\tilde{U}(t, e)$ defined by (26) satisfies the following conditions:
(H1’)
The function $\tilde{G}(t, e, \tilde{u})$ defined by (27) is $C^1$ on $[0, \infty) \times \mathbb{R}^n \times \mathbb{R}^m$ and is strictly convex with respect to $\tilde{u}$;
(H2’)
$\tilde{u} = 0 \in \mathring{\tilde{U}}(t, e), \ \forall (t, e) \in [0, \infty) \times \mathbb{R}^n$;
(H3’)
There exists a compact set $\overline{U} \subset \mathbb{R}^m$ such that $\tilde{U}(t, e) \subseteq \overline{U}, \ \forall (t, e) \in [0, \infty) \times \mathbb{R}^n$.
Proof. 
(H1’) $\tilde{G}$ is clearly $C^1$, since both $G$ and $u_r$ are $C^1$ with respect to their arguments. Next, we prove the strict convexity of $\tilde{G}$. For any $\tilde{u}_1, \tilde{u}_2 \in \mathbb{R}^m$ with $\tilde{u}_1 \neq \tilde{u}_2$ and any constant $\lambda \in (0, 1)$, we have
$\begin{aligned} \tilde{G}(t, e, \lambda \tilde{u}_1 + (1 - \lambda) \tilde{u}_2) &= G(e + x_d(t), u_r(t) + \lambda \tilde{u}_1 + (1 - \lambda) \tilde{u}_2) \\ &= G(e + x_d(t), \lambda (u_r(t) + \tilde{u}_1) + (1 - \lambda)(u_r(t) + \tilde{u}_2)) \\ &< \lambda G(e + x_d(t), u_r(t) + \tilde{u}_1) + (1 - \lambda) G(e + x_d(t), u_r(t) + \tilde{u}_2) \\ &= \lambda \tilde{G}(t, e, \tilde{u}_1) + (1 - \lambda) \tilde{G}(t, e, \tilde{u}_2), \end{aligned}$ (29)
where the inequality follows from the strict convexity of $G$.
(H2’) Owing to Assumption 2, $G(x, u_r(t)) < 0$ holds for any $(t, x)$. By choosing $x = e + x_d(t) \in \mathbb{R}^n$,
$G(e + x_d(t), u_r(t) + 0) = \tilde{G}(t, e, 0) < 0 \ \Rightarrow\ 0 \in \mathring{\tilde{U}}(t, e).$ (30)
(H3’) According to (H3) of Definition 2, there exists a constant $r > 0$ such that
$U(x) \subseteq \{\, u \in \mathbb{R}^m \mid \| u \| \le r \,\}, \quad \forall x \in \mathbb{R}^n,$ (31)
and this implies
$u \in U(x) \ \Rightarrow\ \| u \| = \| \tilde{u} + u_r(t) \| \le r.$ (32)
By using $\tilde{u} = (\tilde{u} + u_r(t)) - u_r(t)$ and the triangle inequality, we can obtain
$\| \tilde{u} \| \le \| \tilde{u} + u_r(t) \| + \| u_r(t) \| \le r + \bar{u}_r,$ (33)
where $\bar{u}_r = \sup_{t \in [0, \infty)} \| u_r(t) \|$. Since $u \in U(x)$ is equivalent to $\tilde{u} \in \tilde{U}(t, e)$, we conclude that
$\tilde{U}(t, e) \subseteq \overline{U} := \{\, \tilde{u} \in \mathbb{R}^m \mid \| \tilde{u} \| \le r + \bar{u}_r \,\}, \quad \forall (t, e) \in [0, \infty) \times \mathbb{R}^n. \qquad \square$ (34)
Remark 4. 
By Lemma 1, the set U ˜ ( t , e ) R m is compact and strictly convex for each fixed ( t , e ) .
To consider TCLF-based tracking controller design, we impose the following assumption:
Assumption 3.
Let $D \subseteq \mathbb{R}^n$ be a neighborhood of $e = 0$ of the constrained error system (25). We suppose that there exist a $C^1$ function $V : [0, \infty) \times D \to \mathbb{R}$ and a positive definite function $Q : D \to \mathbb{R}$ satisfying (B1) of Definition 1 and the following condition:
$\min_{\tilde{u} \in \tilde{U}(t, e)} \left[ \dfrac{\partial V}{\partial t} + L_{\tilde{f}} V + L_{\tilde{g}} V \cdot \tilde{u} \right] < -Q(e), \quad \forall (t, e) \in [0, \infty) \times \left( D \setminus \{ 0 \} \right).$ (35)
Under Assumptions 1–3, the constrained trajectory tracking control problem considered in this article is to design a time-varying state feedback controller $\tilde{u} = \tilde{k}(t, e)$ for the error system (25) such that the following hold:
  • The origin $e = 0$ of (25) is locally asymptotically stable on a neighborhood $W \subseteq D$;
  • The input constraint is satisfied, i.e., $\tilde{k}(t, e) \in \tilde{U}(t, e), \ \forall (t, e) \in [0, \infty) \times \mathbb{R}^n$.
Remark 5. 
Let $\tilde{u} = \tilde{k}(t, e)$ be a state feedback controller satisfying the above conditions. Then, as mentioned in Remark 1, the transformed controller $u = k(t, x) = \tilde{k}(t, x - x_d(t)) + u_r(t)$ guarantees the following:
  • $x(0) - x_d(0) \in W \ \Rightarrow\ x(t) \to x_d(t) \ (t \to \infty)$;
  • The original convex input constraint is satisfied, i.e., $k(t, x) \in U(x), \ \forall (t, x) \in [0, \infty) \times \mathbb{R}^n$.

4. Minimizing Input

In this section, we consider the minimizing input, which minimizes V ˙ ( t , e , u ˜ ) subject to u ˜ U ˜ ( t , e ) , as the first step of the proposed controller design.

4.1. Characterization by Convex Optimization

For each fixed $(t, e) \in [0, \infty) \times \mathbb{R}^n$, $\omega_1(t, e)$ can be viewed as a constant. Hence, a control input that minimizes $\dot{V}(t, e, \tilde{u}) = \omega_1(t, e) + L_{\tilde{g}} V(t, e) \cdot \tilde{u}$ subject to $\tilde{u} \in \tilde{U}(t, e)$ is given as an optimal solution of the following convex optimization problem:
$\text{Minimize } L_{\tilde{g}} V(t, e) \cdot \tilde{u}, \quad \text{subject to } \tilde{u} \in \tilde{U}(t, e), \qquad (t, e) \in [0, \infty) \times \mathbb{R}^n.$ (36)
We define the optimal value function $\phi(t, e)$ and the optimal solution map for problem (36) as follows:
$\phi(t, e) := \min_{\tilde{u} \in \tilde{U}(t, e)} L_{\tilde{g}} V(t, e) \cdot \tilde{u},$ (37)
$\Phi(t, e) := \{\, \tilde{u} \in \tilde{U}(t, e) \mid L_{\tilde{g}} V(t, e) \cdot \tilde{u} = \phi(t, e) \,\}.$ (38)
Remark 6. 
Note that, when $L_{\tilde{g}} V(t, e) = 0$, $\phi(t, e) = 0$ and $\Phi(t, e) = \tilde{U}(t, e)$ hold, i.e., every input $\tilde{u} \in \tilde{U}(t, e)$ attains the minimum value $L_{\tilde{g}} V(t, e) \cdot \tilde{u} = 0$.
The following lemma guarantees the existence of an optimal solution:
Lemma 2 
(Existence of an optimal solution). The optimal value function $\phi(t, e)$ given by (37) is finite for any fixed $(t, e) \in [0, \infty) \times \mathbb{R}^n$.
Proof. 
Note that $\tilde{U}(t, e)$ is compact for any fixed $(t, e) \in [0, \infty) \times \mathbb{R}^n$ (see Lemma 1), and the objective function $L_{\tilde{g}} V(t, e) \cdot \tilde{u}$ is continuous with respect to $\tilde{u}$. Hence, $L_{\tilde{g}} V(t, e) \cdot \tilde{u}$ attains its minimum on $\tilde{U}(t, e)$ by the extreme value theorem. □
To discuss the optimality condition, we introduce the Lagrangian function $L : [0, \infty) \times \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}$ defined as follows [37,38,39]:
$L(t, e, \tilde{u}, \lambda) := L_{\tilde{g}} V(t, e) \cdot \tilde{u} + \lambda \tilde{G}(t, e, \tilde{u}),$ (39)
where λ R is a Lagrange multiplier. Then, a necessary and sufficient condition for optimal solutions is given by the following Karush–Kuhn–Tucker (KKT) condition [37,38,39]:
Lemma 3 
(KKT condition). Consider the optimization problem (36) for any fixed $(t, e) \in [0, \infty) \times \mathbb{R}^n$. Then, $\bar{u}_{t,e} \in \Phi(t, e)$ (i.e., $\bar{u}_{t,e}$ is an optimal solution) if and only if there exists a Lagrange multiplier $\lambda_{t,e} \in \mathbb{R}$ satisfying the following conditions:
$\dfrac{\partial L}{\partial \tilde{u}}(t, e, \bar{u}_{t,e}, \lambda_{t,e}) = L_{\tilde{g}} V(t, e) + \lambda_{t,e} \dfrac{\partial \tilde{G}}{\partial \tilde{u}}(t, e, \bar{u}_{t,e}) = 0,$ (40)
$\lambda_{t,e}\, \tilde{G}(t, e, \bar{u}_{t,e}) = 0, \quad \lambda_{t,e} \ge 0, \quad \tilde{G}(t, e, \bar{u}_{t,e}) \le 0.$ (41)
Proof. 
See, e.g., Section 5.5.3 of [38] or Corollary 28.3.1 of [39]. □
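For intuition, problem (36) can also be solved numerically at a fixed $(t, e)$. The following sketch uses scipy.optimize.minimize with the SLSQP method; LgV, G_tilde, and the example shifted-ball data are placeholders standing for $L_{\tilde{g}} V(t, e)$, $\tilde{G}(t, e, \cdot)$, and an illustrative constraint, respectively.

```python
import numpy as np
from scipy.optimize import minimize

def minimizing_input_numeric(LgV, G_tilde, u0):
    """Numerically solve problem (36): minimize LgV . u~  s.t.  G~(u~) <= 0.

    LgV     : (m,) ndarray, the row vector L_g~ V(t, e)
    G_tilde : callable, u~ -> float, the constraint function of (26)-(27)
    u0      : (m,) ndarray, a feasible initial guess (e.g., u~ = 0 by (H2'))
    """
    res = minimize(
        lambda u: float(np.dot(LgV, u)),                    # linear objective of (36)
        u0,
        method="SLSQP",
        constraints=[{"type": "ineq", "fun": lambda u: -G_tilde(u)}],  # G~(u~) <= 0
    )
    return res.x

# Illustrative shifted-ball constraint ||u~ + u_r|| <= beta (cf. Remark 3)
beta, u_r = 6.0, np.array([1.0, 0.5])
G_tilde = lambda u: float(np.dot(u + u_r, u + u_r)) - beta**2
u_star = minimizing_input_numeric(np.array([2.0, -1.0]), G_tilde, np.zeros(2))
```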
As an application of Lemma 3, here, we derive the optimal solution in the case of a norm input constraint:
Lemma 4. 
Let us consider the norm input constraint:
$\tilde{U} = \{\, \tilde{u} \in \mathbb{R}^m \mid \| \tilde{u} \| \le \alpha \,\},$ (42)
where $\alpha > 0$. Then, for any fixed $(t, e)$ such that $L_{\tilde{g}} V(t, e) \neq 0$, the optimal solution of problem (36) is given by
$\bar{u}_{t,e} = -\alpha \dfrac{L_{\tilde{g}} V(t, e)^{\mathsf{T}}}{\| L_{\tilde{g}} V(t, e) \|}.$ (43)
Proof. 
The norm constraint (42) is represented by $\tilde{U} = \{\, \tilde{u} \mid \tilde{G}(\tilde{u}) \le 0 \,\}$, where $\tilde{G}(\tilde{u})$ is given as follows (see Example 1):
$\tilde{G}(\tilde{u}) = \| \tilde{u} \|^2 - \alpha^2.$ (44)
Substituting (44) into the KKT condition (40), we have
$L_{\tilde{g}} V(t, e) + 2 \lambda_{t,e} \bar{u}_{t,e}^{\mathsf{T}} = 0 \ \Rightarrow\ \bar{u}_{t,e} = -\dfrac{1}{2 \lambda_{t,e}} L_{\tilde{g}} V(t, e)^{\mathsf{T}}.$ (45)
Since $L_{\tilde{g}} V(t, e) \neq 0$ implies $\lambda_{t,e} > 0$, the condition (41) reduces to $\tilde{G}(\bar{u}_{t,e}) = 0$. By solving this equation, we can obtain
$\lambda_{t,e} = \dfrac{\| L_{\tilde{g}} V(t, e) \|}{2 \alpha},$ (46)
and this completes the proof. □
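A minimal sketch of the closed-form solution (43), with LgV assumed to be supplied as an array:

```python
import numpy as np

def minimizing_input_norm(LgV, alpha, eps=1e-12):
    """Closed-form optimal solution (43) of problem (36) under ||u~|| <= alpha."""
    norm = np.linalg.norm(LgV)
    if norm < eps:                 # LgV = 0: every feasible input is optimal (Remark 6)
        return np.zeros_like(LgV)
    return -alpha * LgV / norm     # points opposite to the gradient direction
```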
Remark 7. 
Since $0 \in \mathring{\tilde{U}}(t, e)$ for each $(t, e)$ by (H2’) of Lemma 1, there exists a constant $\delta > 0$ such that
$\bar{B}^m(0, \delta) \subset \tilde{U}(t, e), \quad \forall (t, e) \in [0, \infty) \times \mathbb{R}^n.$ (47)
This shows that the given convex constraint set $\tilde{U}(t, e)$ can be locally approximated by a norm constraint. Hence, Lemma 4 is crucial for the local analysis of the closed-loop system.

4.2. Definition and Properties

To define the minimizing input formally, we first prove the uniqueness of the optimal solution in the case of $L_{\tilde{g}} V(t, e) \neq 0$:
Proposition 1. 
Consider the convex optimization problem (36) for any fixed $(t, e) \in [0, \infty) \times \mathbb{R}^n$ such that $L_{\tilde{g}} V(t, e) \neq 0$. Then, $\bar{u}_{t,e} \in \tilde{U}(t, e)$ satisfying the KKT conditions (40) and (41) is unique. In other words, $\Phi(t, e) = \{ \bar{u}_{t,e} \}$ holds.
To prove Proposition 1, we employ the following lemma:
Lemma 5. 
Consider the optimization problem (36) for any fixed $(t, e)$ such that $L_{\tilde{g}} V(t, e) \neq 0$. Then, the following holds:
$\Phi(t, e) \subset \partial \tilde{U}(t, e).$ (48)
Proof. 
Let $\bar{u}_{t,e} \in \Phi(t, e)$. We suppose that $\bar{u}_{t,e} \in \mathring{\tilde{U}}(t, e)$.
Then, there exists a constant $\delta > 0$ such that
$u' := \bar{u}_{t,e} - \delta L_{\tilde{g}} V(t, e)^{\mathsf{T}} \in \tilde{U}(t, e).$ (49)
Since $\bar{u}_{t,e}$ is an optimal solution,
$\min_{\tilde{u} \in \tilde{U}(t, e)} L_{\tilde{g}} V(t, e) \cdot \tilde{u} = L_{\tilde{g}} V(t, e) \cdot \bar{u}_{t,e}$ (50)
holds. By considering $u'$, however,
$L_{\tilde{g}} V(t, e) \cdot u' = L_{\tilde{g}} V(t, e) \cdot \left[ \bar{u}_{t,e} - \delta L_{\tilde{g}} V(t, e)^{\mathsf{T}} \right] = L_{\tilde{g}} V(t, e) \cdot \bar{u}_{t,e} - \delta \| L_{\tilde{g}} V(t, e) \|^2 < L_{\tilde{g}} V \cdot \bar{u}_{t,e}.$ (51)
This contradicts the fact that $\bar{u}_{t,e} \in \Phi(t, e)$. □
Proof of Proposition 1. 
We suppose that $\bar{u}_1, \bar{u}_2 \in \Phi(t, e)$ and $\bar{u}_1 \neq \bar{u}_2$. Let $\beta \in (0, 1)$ be a constant. Then, for $u' = \beta \bar{u}_1 + (1 - \beta) \bar{u}_2 \in \tilde{U}(t, e)$, we can obtain
$L_{\tilde{g}} V(t, e) \cdot u' = L_{\tilde{g}} V(t, e) \cdot \left[ \beta \bar{u}_1 + (1 - \beta) \bar{u}_2 \right] = \beta L_{\tilde{g}} V(t, e) \cdot \bar{u}_1 + (1 - \beta) L_{\tilde{g}} V(t, e) \cdot \bar{u}_2 = \min_{\tilde{u} \in \tilde{U}(t, e)} L_{\tilde{g}} V(t, e) \cdot \tilde{u}.$ (52)
This shows that $u'$ is also an optimal solution. However, since the set $\tilde{U}(t, e) \subset \mathbb{R}^m$ is strictly convex by (H1’) of Lemma 1,
$u' = \beta \bar{u}_1 + (1 - \beta) \bar{u}_2 \in \mathring{\tilde{U}}(t, e).$ (53)
This contradicts Lemma 5. □
Thanks to Proposition 1, we can formally define the minimizing input as follows:
Definition 3 
(Minimizing input). The following state feedback $\bar{k} : [0, \infty) \times \mathbb{R}^n \to \mathbb{R}^m$ is said to be the minimizing input for the error system (3) with respect to a given TCLF $V(t, e)$:
$\bar{k}(t, e) = \begin{cases} \operatorname*{argmin}_{\tilde{u} \in \tilde{U}(t, e)} L_{\tilde{g}} V \cdot \tilde{u} & (L_{\tilde{g}} V(t, e) \neq 0) \\ 0 & (L_{\tilde{g}} V(t, e) = 0). \end{cases}$ (54)
Remark 8. 
When L g ˜ V ( t , e ) = 0 , any u ˜ U ˜ ( t , e ) becomes an optimal solution (see Remark 6). The reason we set k ¯ ( t , e ) = 0 in Definition 3 is to design a trajectory-tracking controller that is continuous even when L g ˜ V ( t , e ) = 0 in Section 5.
In the following, we introduce two important properties of k ¯ ( t , e ) on the set
$X_1 := \{\, (t, e) \mid L_{\tilde{g}} V(t, e) \neq 0 \,\} \subset [0, \infty) \times \mathbb{R}^n.$ (55)
Lemma 6. 
The minimizing input (54) satisfies
$L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) < 0, \quad \forall (t, e) \in X_1.$ (56)
Proof. 
Fix $(t, e) \in X_1$ arbitrarily. According to Lemma 1, there exists a constant $\delta > 0$ such that
$\bar{B}^m(0, \delta) \subset \tilde{U}(t, e).$ (57)
Thanks to Lemma 4, we have
$L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) = \min_{\tilde{u} \in \tilde{U}(t, e)} L_{\tilde{g}} V(t, e) \cdot \tilde{u} \le \min_{\tilde{u} \in \bar{B}^m(0, \delta)} L_{\tilde{g}} V(t, e) \cdot \tilde{u} = L_{\tilde{g}} V(t, e) \cdot \left( -\delta \dfrac{L_{\tilde{g}} V(t, e)^{\mathsf{T}}}{\| L_{\tilde{g}} V(t, e) \|} \right) = -\delta \| L_{\tilde{g}} V(t, e) \| < 0. \qquad \square$ (58)
Lemma 7 
(Continuity of $\bar{k}(t, e)$). The minimizing input (54) is continuous on $X_1$.
Proof. 
According to Lemma 1, $\tilde{U}(t, e)$ is a continuous set-valued map. Then, by applying Theorem 1 of [40], the optimal solution map $\Phi(t, e)$ defined by (38) is upper semicontinuous (see, e.g., [41]) on $[0, \infty) \times \mathbb{R}^n$.
On the other hand, by Proposition 1, $\Phi(t, e)$ is a singleton for each $(t, e) \in X_1$. More precisely, we have
$\Phi(t, e) = \{ \bar{u}_{t,e} \}, \quad \forall (t, e) \in X_1,$ (59)
where $\bar{u}_{t,e} = \operatorname*{argmin} \{\, L_{\tilde{g}} V \cdot \tilde{u} \mid \tilde{u} \in \tilde{U}(t, e) \,\}$. This means that we can view $\Phi(t, e)$ as the single-valued map $\bar{k}(t, e)$ on $X_1$. Since the upper semicontinuity of a set-valued map reduces to ordinary continuity when the map is single-valued, we conclude that $\bar{k}(t, e)$ is continuous on $X_1$. □

5. Continuous-Trajectory-Tracking Controller Design

In Section 4, we designed the minimizing input $\bar{k}(t, e)$ by solving the convex optimization problem (36). Although $\bar{k}(t, e)$ is important for characterizing an asymptotically stabilizable domain (see Definition 4 below), we cannot apply it directly to system (25) for the following reasons:
  • $\bar{k}(t, e)$ is discontinuous at points $(t, e)$ where $L_{\tilde{g}} V(t, e) = 0$;
  • $\| \bar{k}(t, e) \|$ may remain large even if $\| e \|$ is sufficiently small.
In this section, we introduce an appropriate scaling function for k ¯ ( t , e ) to address these issues. We then propose a continuous trajectory-tracking controller.

5.1. Main Theorem

Firstly, we characterize an asymptotically stabilizable domain of system (25) subject to the input constraint u ˜ U ˜ ( t , e ) . According to Assumption 3, there exists a positive definite proper function V ¯ ( e ) such that V ( t , e ) V ¯ ( e ) for all ( t , e ) . In this paper, we consider the following asymptotically stabilizable domain, which is defined by using the sub-level set of V ¯ ( e ) :
Definition 4 
(Asymptotically stabilizable domain). Let $D \subseteq \mathbb{R}^n$ be the domain of Assumption 3, and let $r > 0$ be a constant such that $\{\, e \in \mathbb{R}^n \mid \| e \| \le r \,\} \subset D$. Let $\alpha_{\max} \in \left( 0, \min_{\| e \| = r} \overline{V}(e) \right)$ be the maximum constant such that
$\overline{V}(e) < \alpha_{\max} \ \Rightarrow\ \omega_1(t, e) + L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) < -Q(e), \quad \forall t \in [0, \infty),$ (60)
where $\bar{k}(t, e)$ is the minimizing input given by (54). Then, we define the asymptotically stabilizable domain of system (3) subject to $\tilde{u} \in \tilde{U}(t, e)$ as follows:
$W = \{\, e \in \mathbb{R}^n \mid \overline{V}(e) < \alpha_{\max} \,\} \subseteq D.$ (61)
To design a continuous trajectory-tracking controller satisfying $\tilde{u} \in \tilde{U}(t, e)$, we introduce the following scaling function $\mu : [0, \infty) \times \mathbb{R}^n \to \mathbb{R}$ proposed in [13]:
$\mu(t, e) = \begin{cases} \dfrac{P(t, e) + | P(t, e) | + C(\| L_{\tilde{g}} V \|)}{2 + C(\| L_{\tilde{g}} V \|)} & (L_{\tilde{g}} V(t, e) \neq 0) \\ 0 & (L_{\tilde{g}} V(t, e) = 0), \end{cases}$ (62)
where $P : X_1 \to \mathbb{R}$ is given by
$P(t, e) = -\dfrac{\omega_1(t, e) + Q(e)}{L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e)},$ (63)
and $Q(e)$ is the positive-definite function of Assumption 3. Moreover, $C : \mathbb{R} \to \mathbb{R}$ is an arbitrary continuous positive-definite proper function.
Remark 9. 
The construction of the scaling function $\mu(t, e)$ is almost the same as the one proposed in [13]. The only difference is that the function $P(t, e)$ contains the positive-definite function $Q(e)$ to deal with the stabilization of time-varying systems.
Then, by using the scaling function μ ( t , e ) , we can formulate the following main theorem of this article:
Theorem 2. 
Consider the error system (25) subject to $\tilde{u} \in \tilde{U}(t, e)$. Let $W \subseteq D$ be the domain of Definition 4, $\bar{k}(t, e)$ the minimizing input given by (54), and $\mu(t, e)$ the scaling function defined by (62). Then, the state feedback controller
$\tilde{u} = \tilde{k}(t, e) = \mu(t, e)\, \bar{k}(t, e)$ (64)
locally asymptotically stabilizes the origin $e = 0$ on $W$ and satisfies the input constraint $\tilde{u} \in \tilde{U}(t, e)$.
The proof of Theorem 2 will be given in Section 5.2.
Corollary 1. 
Consider the nonlinear system (1) subject to a convex input constraint $u \in U(x)$. Then, the state feedback controller
$u = k(t, x) := \tilde{k}(t, x - x_d(t)) + u_r(t)$ (65)
locally achieves trajectory tracking and satisfies the input constraint
$k(t, x) \in U(x), \quad \forall (t, x) \in [0, \infty) \times \mathbb{R}^n.$ (66)
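A minimal sketch of how (62)–(65) could be assembled in code is given below; the functions omega1_fn, LgV_fn, Q_fn, k_bar_fn, x_d, and u_r are assumed to be provided by the user for the system at hand, and the default choice of C is the quadratic function used later in Section 6.

```python
import numpy as np

def scaling_mu(omega1, LgV, Q_val, k_bar, C=lambda s: s**2, eps=1e-12):
    """Scaling function (62)-(63); C is a continuous positive-definite proper function."""
    if np.linalg.norm(LgV) < eps:
        return 0.0
    c = C(np.linalg.norm(LgV))
    P = -(omega1 + Q_val) / float(np.dot(LgV, k_bar))   # (63); LgV . k_bar < 0 on X1
    return (P + abs(P) + c) / (2.0 + c)                  # (62)

def tracking_input(t, x, x_d, u_r, omega1_fn, LgV_fn, Q_fn, k_bar_fn):
    """Proposed constrained tracking controller (64)-(65)."""
    e = x - x_d(t)
    w1, LgV, Qv = omega1_fn(t, e), LgV_fn(t, e), Q_fn(e)
    k_bar = k_bar_fn(t, e)                               # minimizing input (54)
    mu = scaling_mu(w1, LgV, Qv, k_bar)
    return mu * k_bar + u_r(t)                           # u = k~(t, e) + u_r(t)
```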

5.2. Proof of Theorem 2

To prove Theorem 2, we employ the following lemmas:
Lemma 8. 
The function $P(t, e)$ defined by (63) satisfies
$P(t, e) < 1, \quad \forall (t, e) \in \left( [0, \infty) \times W \right) \cap X_1.$ (67)
Proof. 
According to Definition 4, the following holds for any $(t, e) \in [0, \infty) \times W$:
$\omega_1(t, e) + L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) < -Q(e).$ (68)
Moreover, since $(t, e) \in X_1$ implies $L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) < 0$ by Lemma 6, it follows from (68) that
$\omega_1(t, e) + Q(e) < -L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) \ \Rightarrow\ -\dfrac{\omega_1(t, e) + Q(e)}{L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e)} < 1. \qquad \square$ (69)
Lemma 9. 
The scaling function $\mu(t, e)$ given by (62) satisfies the following conditions:
(1) 
$\mu(t, e) \in [0, 1), \ \forall (t, e) \in [0, \infty) \times W$;
(2) 
The following holds, except at $e = 0$:
$\lim_{\| L_{\tilde{g}} V \| \to 0} \mu(t, e) = 0.$ (70)
Proof. 
Since $\mu(t, e) = 0$ when $L_{\tilde{g}} V(t, e) = 0$, we consider condition (1) for $(t, e) \in \left( [0, \infty) \times W \right) \cap X_1$. In this case, we have
$0 \le P(t, e) + | P(t, e) | < 2$ (71)
by Lemma 8. This shows that condition (1) holds.
Next, we prove condition (2). For $(t, e)$ such that $L_{\tilde{g}} V(t, e) = 0$ and $e \neq 0$, the following holds by condition (B2) of Definition 1:
$\omega_1(t, e) + Q(e) < 0.$ (72)
This implies the existence of a neighborhood $\Xi \subset [0, \infty) \times W$ of $(t, e)$ such that
$P(t, e) + | P(t, e) | = 0, \quad \forall (t, e) \in \Xi.$ (73)
Hence, we can see that the following holds on $\Xi$:
$\lim_{\| L_{\tilde{g}} V \| \to 0} \mu(t, e) = \lim_{\| L_{\tilde{g}} V \| \to 0} \dfrac{C(\| L_{\tilde{g}} V \|)}{2 + C(\| L_{\tilde{g}} V \|)} = 0. \qquad \square$ (74)
Lemma 10. 
The state feedback controller $\tilde{u} = \tilde{k}(t, e)$ given by (64) is continuous on $[0, \infty) \times \left( \mathbb{R}^n \setminus \{ 0 \} \right)$.
Proof. 
According to Lemma 7, $\bar{k}(t, e)$ is continuous on $X_1$. Since $\omega_1(t, e)$, $L_{\tilde{g}} V(t, e)$, and $Q(e)$ are continuous, $\mu(t, e)$ is also continuous on $X_1$. Hence, $\tilde{k}(t, e) = \mu(t, e) \bar{k}(t, e)$ is clearly continuous on $X_1$. Moreover, by using Lemma 9, we can prove continuity at $(t, e)$ such that $L_{\tilde{g}} V(t, e) = 0$ and $e \neq 0$ as
$\lim_{\| L_{\tilde{g}} V \| \to 0} \tilde{k}(t, e) = \lim_{\| L_{\tilde{g}} V \| \to 0} \mu(t, e) \bar{k}(t, e) = 0. \qquad \square$ (75)
Remark 10. 
To prove the continuity of $\tilde{k}(t, e)$ at $e = 0$, an additional assumption is required. More precisely, the following must hold uniformly in $t \in [0, \infty)$:
$\lim_{e \to 0} | P(t, e) | = 0.$ (76)
Lemma 11. 
The state feedback controller $\tilde{u} = \tilde{k}(t, e)$ given by (64) satisfies the input constraint $\tilde{u} \in \tilde{U}(t, e)$ for all $(t, e) \in [0, \infty) \times W$.
Proof. 
Note that the minimizing input $\bar{k}(t, e)$ satisfies $\bar{k}(t, e) \in \tilde{U}(t, e)$, and $\mu(t, e) \in [0, 1)$ holds on $[0, \infty) \times W$ by Lemma 9. Since $\tilde{U}(t, e)$ is a convex set containing $\tilde{u} = 0$ (see Lemma 1), we conclude that
$\tilde{k}(t, e) = \mu(t, e) \bar{k}(t, e) + \left[ 1 - \mu(t, e) \right] \cdot 0 \in \tilde{U}(t, e). \qquad \square$ (77)
Lemma 12. 
The state feedback controller (64) asymptotically stabilizes the origin $e = 0$ of the error system (25) on $W \subseteq D$.
Proof. 
We can calculate the time derivative V ˙ ( t , e , k ˜ ( t , e ) ) as follows:
$\begin{aligned} \dot{V}(t, e, \tilde{k}(t, e)) &= \omega_1(t, e) + \mu(t, e)\, L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) \\ &= \omega_1(t, e) + \dfrac{P(t, e) + | P(t, e) | + C(\| L_{\tilde{g}} V \|)}{2 + C(\| L_{\tilde{g}} V \|)}\, L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) \\ &= \dfrac{1}{2 + C(\| L_{\tilde{g}} V \|)} \Big[ -\big( \omega_1(t, e) + Q(e) \big) - \big| \omega_1(t, e) + Q(e) \big| + C(\| L_{\tilde{g}} V \|)\, L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) + \big( 2 + C(\| L_{\tilde{g}} V \|) \big) \omega_1(t, e) \Big] \\ &= \dfrac{1}{2 + C(\| L_{\tilde{g}} V \|)} \Big[ \omega_1(t, e) - Q(e) - \big| \omega_1(t, e) + Q(e) \big| + C(\| L_{\tilde{g}} V \|) \big( \omega_1(t, e) + L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) \big) \Big]. \end{aligned}$ (78)
Here, we note that $e \in W \setminus \{ 0 \}$ implies $\omega_1(t, e) + L_{\tilde{g}} V(t, e) \cdot \bar{k}(t, e) < -Q(e)$ (see Definition 4). Moreover, the following inequality holds:
$\omega_1(t, e) - Q(e) - \left| \omega_1(t, e) + Q(e) \right| \le -2 Q(e).$ (79)
By using these inequalities, an upper bound of $\dot{V}(t, e, \tilde{k}(t, e))$ on $W$ is given by
$\dot{V}(t, e, \tilde{k}(t, e)) \le \dfrac{1}{2 + C(\| L_{\tilde{g}} V \|)} \left[ -2 Q(e) - C(\| L_{\tilde{g}} V \|) Q(e) \right] = -Q(e).$ (80)
Then, the Lyapunov stability theorem for time-varying systems (see, e.g., Theorem 4.9 of [42] or Theorem 3.2 of [43]) guarantees asymptotic stability on W. □
Proof of Theorem 2 
According to Lemma 11, k ˜ ( t , e ) satisfies the input constraint U ˜ ( t , e ) . In addition, the local asymptotic stability of e = 0 on W is proven by Lemma 12. This completes the proof. □

6. Numerical Example: Trajectory Tracking of a Wheeled Mobile Robot

In this section, we confirm the effectiveness of the proposed controller (64) through a numerical example of a wheeled mobile robot.

6.1. Kinematic Model of the Mobile Robot

Let us consider the wheeled mobile robot shown in Figure 1.
The kinematic model of the robot is given by
$\dot{x} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} \cos x_3 & 0 \\ \sin x_3 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} := g(x) u,$ (81)
where $(x_1, x_2) \in \mathbb{R}^2$ is the position and $x_3 \in \mathbb{R}$ is the orientation. The control inputs $u_1, u_2 \in \mathbb{R}$ are the translational and angular velocities, respectively. In the following, we consider the robot model (81) subject to the following input constraint:
$\| u \|^2 = u_1^2 + u_2^2 \le \beta^2,$ (82)
where $\beta > 0$ is a constant.
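For reference, the kinematics (81) translate directly into code (a plain transcription; the state is x = [x1, x2, x3] and the input is u = [u1, u2]):

```python
import numpy as np

def unicycle_dynamics(x, u):
    """Kinematic model (81) of the wheeled mobile robot: x_dot = g(x) u."""
    v, w = u                                  # translational / angular velocity
    return np.array([v * np.cos(x[2]),
                     v * np.sin(x[2]),
                     w])
```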
We suppose that a $C^2$ desired trajectory $x_d(t) = [x_{1d}(t), x_{2d}(t), x_{3d}(t)]^{\mathsf{T}}$ and the corresponding $C^1$ reference input $u_r(t) = [u_{1r}(t), u_{2r}(t)]^{\mathsf{T}}$ satisfying
$\dot{x}_{1d} = u_{1r} \cos x_{3d}(t), \quad \dot{x}_{2d} = u_{1r} \sin x_{3d}(t), \quad \dot{x}_{3d} = u_{2r}$ (83)
are given. Moreover, we also assume that $u_{1r}(t) > \gamma, \ \forall t \in [0, \infty)$ holds for some constant $\gamma > 0$. Then, system (81) can be transformed into the following error system:
$\dot{e} = \tilde{f}(t, e) + \tilde{g}(t, e)\, \tilde{u}, \quad \tilde{f}(t, e) = \begin{bmatrix} u_{1r} [\cos(e_3 + x_{3d}) - \cos x_{3d}] \\ u_{1r} [\sin(e_3 + x_{3d}) - \sin x_{3d}] \\ 0 \end{bmatrix}, \quad \tilde{g}(t, e) = \begin{bmatrix} \cos(e_3 + x_{3d}) & 0 \\ \sin(e_3 + x_{3d}) & 0 \\ 0 & 1 \end{bmatrix}.$ (84)
The input constraint for u ˜ is given by
$\tilde{u} \in \tilde{U}(t) = \{\, \tilde{u} \in \mathbb{R}^2 \mid \tilde{G}(t, \tilde{u}) \le 0 \,\},$ (85)
$\tilde{G}(t, \tilde{u}) = \| \tilde{u} + u_r(t) \|^2 - \beta^2 = [\tilde{u}_1 + u_{1r}(t)]^2 + [\tilde{u}_2 + u_{2r}(t)]^2 - \beta^2.$ (86)
As mentioned in Remark 3, (85) is not a simple norm constraint, but a convex constraint on u ˜ . Hence, we can apply the proposed controller design method.

6.2. Proposed Controller Design

In this subsection, we construct the proposed controller (64) for the error system (84) subject to the convex constraint (85).
We employ the following function V ( t , e ) as a TCLF for (84) [32]:
$V(t, e) = \dfrac{3}{2} (e_1^2 + e_2^2) + \dfrac{1}{2} h^2(t, e),$ (87)
$h(t, e) = (2 \dot{x}_{1d} - e_1) \sin(e_3 + x_{3d}(t)) - (2 \dot{x}_{2d} - e_2) \cos(e_3 + x_{3d}(t)).$ (88)
By using $V(t, e)$ and the mappings $\tilde{f}(t, e)$ and $\tilde{g}(t, e)$ of (84), we can calculate $\omega_1(t, e)$ and $L_{\tilde{g}} V(t, e)$ as follows:
$\omega_1(t, e) = 3 u_{1r}(t) \left[ e_1 I_1(t, e) + e_2 I_2(t, e) \right] + h(t, e) \left[ u_{1r}(t) \sin e_3 + J(t, e) \right],$ (89)
$L_{\tilde{g}} V(t, e) = \left[ L_{\tilde{g}_1} V, \ L_{\tilde{g}_2} V \right] = \left[ 3 e_1 \cos(e_3 + x_{3d}(t)) + 3 e_2 \sin(e_3 + x_{3d}(t)), \ \ h(t, e) \cdot h_{e_3}(t, e) \right],$ (90)
where
$I_1(t, e) = \cos(e_3 + x_{3d}(t)) - \cos x_{3d}(t), \quad I_2(t, e) = \sin(e_3 + x_{3d}(t)) - \sin x_{3d}(t),$ (91)
$J(t, e) = \dot{h}(t, e) = h_{x_{3d}}(t, e)\, \dot{x}_{3d}(t) + \left[ 2 \ddot{x}_{1d}(t) + 2 \dot{x}_{2d}(t) \dot{x}_{3d}(t) \right] \sin(e_3 + x_{3d}(t)) + \left[ 2 \ddot{x}_{2d}(t) + 2 \dot{x}_{1d}(t) \dot{x}_{3d}(t) \right] \cos(e_3 + x_{3d}(t)),$ (92)
$h_{e_3}(t, e) = h_{x_{3d}}(t, e) = (2 \dot{x}_{1d} - e_1) \cos(e_3 + x_{3d}(t)) + (2 \dot{x}_{2d} - e_2) \sin(e_3 + x_{3d}(t)).$ (93)
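As an illustration, the TCLF (87)–(88) and the components (90) and (93) can be evaluated as follows; the interface ref(t), returning the reference heading $x_{3d}(t)$ and the reference velocities $\dot{x}_{1d}(t)$, $\dot{x}_{2d}(t)$, is an assumption of this sketch, and $\omega_1$ of (89) follows the same pattern and is omitted.

```python
import numpy as np

def tclf_quantities(t, e, ref):
    """TCLF (87)-(88) and L_g~ V of (90), (93) for the mobile robot.

    ref(t) is assumed to return (x3d, dx1d, dx2d), i.e., the reference heading
    and the reference velocities appearing in (88).
    """
    x3d, dx1d, dx2d = ref(t)
    e1, e2, e3 = e
    s, c = np.sin(e3 + x3d), np.cos(e3 + x3d)

    h    = (2.0 * dx1d - e1) * s - (2.0 * dx2d - e2) * c          # (88)
    V    = 1.5 * (e1**2 + e2**2) + 0.5 * h**2                      # (87)
    h_e3 = (2.0 * dx1d - e1) * c + (2.0 * dx2d - e2) * s           # (93)
    LgV  = np.array([3.0 * e1 * c + 3.0 * e2 * s, h * h_e3])       # (90)
    return V, h, LgV
```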
Then, we design the minimizing input $\bar{k}(t, e)$. Since $L_{\tilde{g}} V(t, e) = 0$ implies $\bar{k}(t, e) = 0$ (see (54)), we fix $(t, e)$ such that $L_{\tilde{g}} V(t, e) \neq 0$. According to Lemma 3 and the function $\tilde{G}(t, \tilde{u})$ given by (86), the necessary and sufficient condition for the optimal solution $\bar{u}_{t,e}$ is
$L_{\tilde{g}} V(t, e) + \lambda_{t,e} \dfrac{\partial \tilde{G}}{\partial \tilde{u}}(t, \bar{u}_{t,e}) = L_{\tilde{g}} V(t, e) + 2 \lambda_{t,e} \left[ \bar{u}_{t,e} + u_r(t) \right]^{\mathsf{T}} = 0.$ (94)
By solving (94) with respect to $\bar{u}_{t,e}$, we have
$\bar{u}_{t,e} = -\left( \dfrac{1}{2 \lambda_{t,e}} L_{\tilde{g}} V(t, e)^{\mathsf{T}} + u_r(t) \right).$ (95)
Substituting (95) into the condition $\tilde{G}(t, \bar{u}_{t,e}) = 0$, we can obtain
$\lambda_{t,e} = \dfrac{\| L_{\tilde{g}} V(t, e) \|}{2 \beta}.$ (96)
Hence, the minimizing input (54) is calculated as follows:
$\bar{k}(t, e) = \begin{cases} -\left( \beta \dfrac{L_{\tilde{g}} V(t, e)^{\mathsf{T}}}{\| L_{\tilde{g}} V(t, e) \|} + u_r(t) \right) & (L_{\tilde{g}} V(t, e) \neq 0) \\ 0 & (L_{\tilde{g}} V(t, e) = 0). \end{cases}$ (97)
Then, we can construct the proposed tracking controller $\tilde{u} = \tilde{k}(t, e)$ by using (62)–(64), (89)–(93), and (97). Finally, the trajectory-tracking controller $u = k(t, x)$ for the original system (81) is obtained by applying the input transformation (65).
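A minimal transcription of the minimizing input (97), assuming LgV and u_r are available as two-dimensional arrays:

```python
import numpy as np

def minimizing_input_robot(LgV, u_r, beta, eps=1e-12):
    """Minimizing input (97) for the shifted-ball constraint (85)-(86)."""
    norm = np.linalg.norm(LgV)
    if norm < eps:
        return np.zeros(2)
    return -(beta * LgV / norm + u_r)   # then k_bar + u_r lies on the boundary of U~(t)
```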
Remark 11. 
To characterize the asymptotically stabilizable domain W of Definition 4, for example, the following function is available as V ¯ ( e ) :
$\overline{V}(e) = 2 (e_1^2 + e_2^2) + c_1 \sqrt{e_1^2 + e_2^2} + c_2 \sin^2 e_3,$ (98)
where $c_1 = 2 \sup_{t \in [0, \infty)} u_{1r}(t)$ and $c_2 = c_1^2 / 2$. Note, however, that we do not calculate $W$ itself in this article. A specific computation method for $W$ based on $\overline{V}(e)$ and $\bar{k}(t, e)$ is left as future work.

6.3. Simulation Conditions

To confirm the effectiveness of the proposed controller, we perform a computer simulation with the following circular trajectory:
$x_d(t) = \begin{bmatrix} r \sin \omega t \\ -r \cos \omega t \\ \omega t \end{bmatrix}, \quad u_r(t) = \begin{bmatrix} r \omega \\ \omega \end{bmatrix},$ (99)
where $r > 0$ is the radius of the circle and $\omega$ is the angular velocity. Here, we set $r = 2.0$ [m] and $\omega = \pi/4$ [rad/s]. The initial condition of the simulation is $e(0) = [3, -2, \pi/3]^{\mathsf{T}}$ (or equivalently, $x(0) = [3, -4, \pi/3]^{\mathsf{T}}$). Moreover, $\beta$ in (86), which characterizes the input constraint, is set to $\beta = 6.0$. The functions $C(\| L_{\tilde{g}} V(t, e) \|)$ and $Q(e)$ in the proposed controller are
$C(\| L_{\tilde{g}} V(t, e) \|) = \| L_{\tilde{g}} V(t, e) \|^2,$ (100)
$Q(e) = 0.25 \left( e_1^2 + e_2^2 + 0.25 \sin^2 e_3 \right).$ (101)
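For reproducibility, a minimal closed-loop simulation skeleton under these conditions could look as follows; the controller argument is assumed to implement $u = k(t, x)$ of (65) (e.g., by composing the sketches given earlier), unicycle_dynamics is the kinematics sketch from Section 6.1, and the step size and horizon are illustrative choices.

```python
import numpy as np

r, w, beta = 2.0, np.pi / 4.0, 6.0
dt, T = 0.001, 20.0          # illustrative step size and horizon

x_d = lambda t: np.array([r * np.sin(w * t), -r * np.cos(w * t), w * t])   # (99)
u_r = lambda t: np.array([r * w, w])

def simulate(controller, x0=np.array([3.0, -4.0, np.pi / 3.0])):
    """Forward-Euler simulation of (81) under u = controller(t, x)."""
    x, log = x0.copy(), []
    for k in range(int(T / dt)):
        t = k * dt
        u = controller(t, x)                      # u = k(t, x) from (65)
        x = x + dt * unicycle_dynamics(x, u)      # kinematics (81), cf. the sketch above
        log.append((t, x.copy(), u.copy()))
    return log
```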
For comparison, we also apply the following two existing controllers:
  • Sontag-type (unconstrained) tracking controller:
We can construct the unconstrained tracking controller by substituting ω 1 ( t , e ) and L g ˜ V ( t , e ) given by (89) and (90) into (10) and (11).
  • Constrained tracking controller based on norm input constraints:
The following norm constraint is available as a sufficient condition for the convex constraint (85):
$\tilde{U}_2(t) := \{\, \tilde{u} \in \mathbb{R}^2 \mid \| \tilde{u} \| \le \beta - \| u_r(t) \| \,\} \subseteq \tilde{U}(t).$ (102)
Then, we can construct the following constrained tracking controller based on the asymptotically stabilizing controller proposed in [6]:
$\tilde{k}_2(t, e) = \mu_2(t, e)\, \bar{k}_2(t, e), \qquad \bar{k}_2(t, e) = -\left( \beta - \| u_r(t) \| \right) \dfrac{L_{\tilde{g}} V(t, e)^{\mathsf{T}}}{\| L_{\tilde{g}} V(t, e) \|}, \qquad \mu_2(t, e) = \begin{cases} \dfrac{P_2(t, e) + | P_2(t, e) | + C(\| L_{\tilde{g}} V \|)}{2 + C(\| L_{\tilde{g}} V \|)} & (L_{\tilde{g}} V(t, e) \neq 0) \\ 0 & (L_{\tilde{g}} V(t, e) = 0), \end{cases} \qquad P_2(t, e) = \dfrac{\omega_1(t, e) + Q(e)}{\left( \beta - \| u_r(t) \| \right) \| L_{\tilde{g}} V(t, e) \|}.$ (103)

6.4. Simulation Results

Simulation results with the proposed controller (64) are illustrated in Figure 2. From Figure 2a–c, we can confirm that the tracking error e converges to 0 quickly, and hence trajectory tracking is achieved.
For comparison, we also ran computer simulations with the other tracking controllers mentioned in Section 6.3. The comparative simulation results are summarized in Figure 3. From Figure 3a, we can confirm that the controllers (10) and (103) also achieve trajectory tracking. According to Figure 3c, we can also see that the Sontag-type controller (10) violates the convex input constraint (85) for ũ (i.e., the original norm constraint (82) for u), while the proposed controller satisfies it. Although the (unconstrained) Sontag-type controller (10) achieves the best performance in terms of tracking error convergence, the proposed controller (64) also achieves good performance under the input constraint (see Figure 3b). To improve the convergence speed of the proposed controller in a small neighborhood of e = 0, it is important to design the function C in (64) more carefully.
To see the advantage of considering convex input constraints, we then compare the proposed controller with the norm-constrained tracking controller (103). According to Figure 3a,b, we can confirm that the proposed controller achieves better tracking performance than (103). This is because the proposed controller is based on the minimization of V ˙ ( t , e , u ˜ ) subject to the original convex constraint (85), while the controller (103) is based on the conservative norm constraint (102). In other words, the proposed convex constraint-based approach can use the control input more efficiently than the norm-based one. We can observe this in the enlarged section of Figure 3c.

7. Conclusions

In this article, we have proposed a TCLF-based trajectory-tracking controller for nonlinear systems with convex input constraints. As a first step, we designed the minimizing input, which is given as the optimal solution of the convex optimization problem. Then, we constructed the continuous tracking controller by introducing the scaling function. We proved that the proposed controller achieves trajectory tracking locally and satisfies the given convex input constraints.
We confirmed the effectiveness of the proposed controller through a numerical example of a wheeled mobile robot. Together with the fact that convex constraints can handle asymmetric constraints (as discussed in Example 2), we believe that the proposed method can provide a general helpful framework for tracking control of constrained systems.
However, several issues need to be addressed before the proposed method can be applied to actual control problems. First, since we assumed the existence of a TCLF, general design methods for TCLFs are required. It is also still unclear which TCLF achieves better control performance. In applications, the robustness of control systems is also crucial. One promising approach to this problem is to guarantee stability margins [3]. In this context, an analysis of the inverse optimality [15] of the proposed controller is needed. To deal with uncertainties and unknown disturbances, a more general framework incorporating adaptive control or disturbance attenuation is also required.
In parallel with solving the above issues, we plan to apply the proposed method to an electric wheelchair (WHILL Model CR, WHILL Inc., Tokyo, Japan) and perform trajectory tracking experiments.

Funding

This work was partly supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number JP20K14769.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sontag, E.D. A ‘Universal’ Construction of Artstein’s Theorem on Nonlinear Stabilization. Syst. Control Lett. 1989, 13, 117–123. [Google Scholar] [CrossRef]
  2. Freeman, R.A.; Kokotović, P. Robust Nonlinear Control Design: State-Space and Lyapunov Techniques; Birkhäuser: Boston, MA, USA, 1996. [Google Scholar]
  3. Sepulchre, R.; Janković, M.; Kokotović, P.V. Constructive Nonlinear Control; Springer: London, UK, 1997. [Google Scholar]
  4. Lin, Y.; Sontag, E.D. A Universal Formula for Stabilization with Bounded Controls. Syst. Control Lett. 1991, 16, 393–397. [Google Scholar] [CrossRef]
  5. Malisoff, M.; Sontag, E.D. Universal Formulas for Feedback Stabilization with Respect to Minkowski Balls. Syst. Control Lett. 2000, 4, 247–260. [Google Scholar] [CrossRef]
  6. Kidane, N.; Nakamura, H.; Yamashita, Y.; Nishitani, H. Controller for a Nonlinear System with an Input Constraint by Using a Control Lyapunov Function I. IFAC Proc. Vol. 2005, 38, 747–752. [Google Scholar] [CrossRef]
  7. Kidane, N.; Nakamura, H.; Yamashita, Y.; Nishitani, H. Controller for a Nonlinear System with an Input Constraint by Using a Control Lyapunov Function II. IFAC Proc. Vol. 2005, 38, 753–758. [Google Scholar] [CrossRef]
  8. Nakamura, N.; Nakamura, H.; Yamashita, Y.; Nishitani, H. Inverse Optimal Control for Nonlinear Systems with Input Constraints. In Proceedings of the European Control Conference 2007, Kos, Greece, 2–5 July 2007; pp. 5376–5382. [Google Scholar]
  9. Sontag, E.D. Control-Lyapunov Functions. In Open Problems in Mathematical Systems and Control Theory; Blondel, V.D., Sontag, E.D., Vidyasagar, M., Willems, J.C., Eds.; Springer: London, UK, 1999; pp. 211–216. [Google Scholar]
  10. Suárez, R.; Solís-Daun, J.; Aguirre, B. Global CLF Stabilization for Systems with Compact Convex Control Value Sets. In Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, FL, USA, 4–7 December 2001; pp. 3838–3843. [Google Scholar]
  11. Solís-Daun, J. Global CLF Stabilization of Nonlinear Systems. Part I: A Geometric Approach—Compact Strictly Convex CVS. SIAM J. Control Optim. 2013, 51, 2152–2175. [Google Scholar] [CrossRef]
  12. Solís-Daun, J. Global CLF Stabilization of Nonlinear Systems. Part II: An Approximation Approach—Closed CVS. SIAM J. Control Optim. 2015, 53, 645–669. [Google Scholar] [CrossRef]
  13. Satoh, Y.; Nakamura, H.; Nakamura, N.; Katayama, H.; Nishitani, H. Control Formula for Nonlinear Systems Subject to Convex Input Constraints using Control Lyapunov Functions. In Proceedings of the 47th IEEE Conference on Decision and Control, Cancun, Mexico, 9–11 December 2008; pp. 2512–2519. [Google Scholar]
  14. Satoh, Y.; Nakamura, H.; Kimura, S. Discontinuous Control of Nonlinear Systems with Convex Input Constraint via Locally Semiconcave Control Lyapunov Functions. IFAC Proc. Vol. 2014, 47, 8629–8635. [Google Scholar] [CrossRef]
  15. Satoh, Y.; Nakamura, H.; Ohtsuka, T. Inverse Optimal Controller for Nonlinear Systems with Convex Input Constraints. IFAC PapersOnLine 2016, 49, 742–747. [Google Scholar] [CrossRef]
  16. Aguiar, A.P.; Hespanha, J.P. Trajectory-Tracking and Path-Following of Underactuated Autonomous Vehicles with Parametric Modeling Uncertainty. IEEE Trans. Autom. Control 2008, 52, 1362–1379. [Google Scholar] [CrossRef]
  17. Ren, W.; Beard, R. Trajectory tracking for unmanned air vehicles with velocity and heading rate constraints. IEEE Trans. Autom. Control 2004, 12, 706–716. [Google Scholar] [CrossRef]
  18. Luo, W.; Chu, Y.C.; Ling, K.V. Inverse optimal adaptive control for attitude tracking of spacecraft. IEEE Trans. Autom. Control 2005, 50, 1639–1654. [Google Scholar]
  19. Shi, Y.; Shen, C.; Fang, H.; Li, H. Advanced Control in Marine Mechatronic Systems: A Survey. IEEE/ASME Trans. Mechatron. 2017, 22, 1121–1131. [Google Scholar] [CrossRef]
  20. Li, S.; Zheng, L.; Zhixin, Y.; Zhang, B.; Zhang, N. Dynamic Trajectory Planning and Tracking for Autonomous Vehicle with Obstacle Avoidance Based on Model Predictive Control. IEEE Access 2019, 7, 132074–132086. [Google Scholar] [CrossRef]
  21. Lu, G.; Zheng, L.; Cai, Y.; Chen, N.; Long, F.; Ren, Y. Trajectory Generation and Tracking Control for Aggressive Tail-Sitter Flights. Int. J. Robot. Res. 2023, 43, 241–280. [Google Scholar] [CrossRef]
  22. Shen, C.; Bai, J.; Cai, Y.; Shi, B.; Chen, Y. Trajectory Tracking Control for Wheeled Mobile Robot Subject to Generalized Torque Constraints. Trans. Inst. Meas. Control 2022, 45, 1258–1270. [Google Scholar] [CrossRef]
  23. Li, J.W. Adaptive Tracking and Stabilization of Nonholonomic Mobile Robots with Input Saturation. IEEE Trans. Autom. Control 2022, 67, 6173–6179. [Google Scholar] [CrossRef]
  24. Huang, P.; Gao, Y.; Zhang, Z. Tracking Controller of Extended Chained Nonholonomic Systems with Matched Disturbance and Input Saturation. IET Control Theory Appl. 2023, 18, 710–724. [Google Scholar] [CrossRef]
  25. Chen, J.; Behal, A.; Dawson, D.M. Robust Feedback Control for a Class of Uncertain MIMO Nonlinear Systems. IEEE Trans. Autom. Control 2007, 53, 591–596. [Google Scholar] [CrossRef]
  26. Fujimoto, K.; Sakurama, K.; Sugie, T. Trajectory tracking control of port-controlled Hamiltonian systems via generalized canonical transformations. Automatica 2007, 39, 2059–2069. [Google Scholar] [CrossRef]
  27. Yaghmaei, A.; Yazdanpanah, M.J. Trajectory tracking for a class of contractive port Hamiltonian systems. Automatica 2017, 83, 331–336. [Google Scholar] [CrossRef]
  28. Chengshuai, W.; van der Schaft, A.; Chen, J. Robust trajectory tracking for incrementally passive nonlinear systems. Automatica 2019, 107, 595–599. [Google Scholar]
  29. Faulwasser, T. Optimization-Based Solutions to Constrained Trajectory-Tracking and Path-Following Problems; Shaker Verlag GmbH: Herzogenrath, Germany, 2013. [Google Scholar]
  30. Krstić, M.; Deng, H. Stabilization of Nonlinear Uncertain Systems; Springer: London, UK, 1998. [Google Scholar]
  31. Nakamura, H. Global Nonsmooth Control Lyapunov Function Design for Path-Following Problem via Minimum Projection Method. IFAC PapersOnLine 2016, 49, 600–605. [Google Scholar] [CrossRef]
  32. Kubo, R.; Fujii, Y.; Nakamura, H. Control Lyapunov Function Design for Trajectory Tracking Problems of Wheeled Mobile Robot. IFAC PapersOnLine 2020, 53, 6177–6182. [Google Scholar] [CrossRef]
  33. Ikeda, R.; Hayashi, T.; Nakamura, H. Design of Constructive Tracking Control for Differentially Flat Systems via Minimum Projection Method. In Proceedings of the 2020 IEEE Conference on Control Technology and Applications (CCTA), Montréal, QC, Canada, 24–26 August 2021; pp. 827–832. [Google Scholar]
  34. Satoh, Y.; Iwashita, M.; Sakata, O. Robust Adaptive Trajectory Tracking of Nonlinear Systems Based on Input-to-State Stability Tracking Control Lyapunov Functions. IFAC PapersOnLine 2021, 54, 388–393. [Google Scholar] [CrossRef]
  35. Ha, T.X.D.; Jahn, J. Characterizations of strictly convex sets by the uniqueness of support points. Optimization 2019, 68, 1321–1335. [Google Scholar] [CrossRef]
  36. Lee, J.; Yim, S. Path Tracking Control with Constraint on Tire Slip Angles under Low-Friction Road Conditions. Appl. Sci. 2024, 14, 1066. [Google Scholar] [CrossRef]
  37. Bertsekas, D.P. Nonlinear Programming, 2nd ed.; Athena Scientific: Cambridge, MA, USA, 1999. [Google Scholar]
  38. Boyd, S.; Vandenberghe, C. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  39. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1997. [Google Scholar]
  40. Robinson, S.M.; Day, R.H. A Sufficient Condition for Continuity of Optimal Sets in Mathematical Programming. J. Math. Anal. Appl. 1974, 45, 506–511. [Google Scholar] [CrossRef]
  41. Fiacco, A.V.; Ishizuka, Y. Sensitivity and Stability Analysis for Nonlinear Programming. Ann. Oper. Res. 1990, 27, 215–236. [Google Scholar] [CrossRef]
  42. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  43. Bacciotti, A.; Rosier, L. Liapunov Functions and Stability in Control Theory, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
Figure 1. Model of the wheeled mobile robot.
Figure 2. Simulation results with the proposed controller (64). (a) Tracking error; (b) trajectory; (c) control input.
Figure 3. Comparative simulation results. (a) Trajectory; (b) norm of the tracking error; (c) norm of the control input.
