Article

Direct Uncertainty Minimization Framework for System Performance Improvement in Model Reference Adaptive Control

by Benjamin C. Gruenwald 1, Tansel Yucelen 1,* and Jonathan A. Muse 2
1 Laboratory for Autonomy, Control, Information, and Systems (LACIS), Department of Mechanical Engineering, University of South Florida, 4202 East Fowler Ave., Tampa, FL 33620, USA
2 Air Force Research Laboratory, Wright Patterson Air Force Base, Dayton, OH 45433, USA
* Author to whom correspondence should be addressed.
Machines 2017, 5(1), 9; https://doi.org/10.3390/machines5010009
Submission received: 3 January 2017 / Accepted: 28 February 2017 / Published: 8 March 2017
(This article belongs to the Special Issue Robotic Machine Tools)

Abstract:
In this paper, a direct uncertainty minimization framework is developed and demonstrated for model reference adaptive control laws. The proposed framework consists of a novel architecture involving modification terms in the adaptive control law and the update law. In particular, these terms are constructed through a gradient minimization procedure in order to achieve improved closed-loop system performance with adaptive control laws. The proposed framework is first developed for adaptive control laws with linear reference models and then generalized to adaptive control laws with nonlinear reference models. Two illustrative numerical examples are included to demonstrate the efficacy of the proposed framework.

1. Introduction

Research in adaptive control algorithms is motivated primarily by their capability to estimate and suppress the effect of system uncertainties resulting from imperfect system modeling, degraded modes of operation, abrupt changes in dynamics, damaged control surfaces and sensor failures, to name but a few examples. Although government and industry agree on the potential of these algorithms for providing safety and reducing system development costs, a major issue remains their poor transient performance.
To address this problem, the authors of [1,2,3,4,5,6,7,8] present modifications to adaptive update laws. In particular, the work in [1,2,3] uses filtered versions of the control input and state; [4,5,6] uses a moving time window of the system uncertainty; and [7,8] uses recorded and instantaneous data concurrently. In contrast to these approaches, the authors of [9,10,11] present an approach called artificial basis functions that adds modification terms not only to the update law, but also to the adaptive controller and show that the system error can be suppressed during the transient system response. The common denominator of the approaches in [1,2,3,4,5,6,7,8,9,10,11] is that they introduce additional mechanisms to model reference adaptive control laws that capture a form of the system uncertainty in order to suppress its effect.
In this paper, we introduce a novel framework called direct uncertainty minimization for model reference adaptive control laws. Unlike the approaches in [1,2,3,4,5,6,7,8], the proposed framework consists of an architecture involving modification terms in both the adaptive controller and the update law, such that these terms are activated when the system error is nonzero and vanish as the system reaches its steady state. In addition, this new framework directly allows one to suppress the effect of system uncertainty on the transient system response through a gradient minimization procedure and, hence, leads to improved system performance. Furthermore, unlike the approaches in [9,10,11], the proposed framework is computationally less expensive, and it can enforce the system error to remain approximately within an a priori given, user-defined error performance bound. The proposed framework is first developed for adaptive control laws with linear reference models and then generalized to adaptive control laws with nonlinear reference models. This generalization adopts tools and methods from [12,13].
The organization of this paper is as follows. Section 2 highlights the notation used in this paper and states necessary mathematical preliminaries. Section 3 introduces the proposed direct uncertainty minimization framework, while Section 4 generalizes the results of Section 3 to a class of nonlinear reference models. Two illustrative numerical examples are provided in Section 5 to demonstrate the efficacy of the proposed approach to model reference adaptive control, and conclusions are finally summarized in Section 6.

2. Notation and Mathematical Preliminaries

We use fairly standard notation: R denotes the set of real numbers, R^n the set of n × 1 real column vectors, R^{n×m} the set of n × m real matrices, R_+ (resp. R̄_+) the set of positive (resp. non-negative) real numbers, R_+^{n×n} (resp. R̄_+^{n×n}) the set of n × n positive-definite (resp. non-negative-definite) real matrices, and D^{n×n} the set of n × n real diagonal matrices. In addition, (·)^T denotes transpose, (·)^{-1} denotes inverse, tr(·) denotes the trace operator, ||·||_2 denotes the Euclidean norm, ||·||_F denotes the Frobenius matrix norm and "≜" denotes equality by definition. Furthermore, we write λ_min(A) (resp. λ_max(A)) for the minimum (resp. maximum) eigenvalue of the Hermitian matrix A.
We next state necessary preliminaries on the model reference adaptive control problem. For this purpose, consider the uncertain dynamical system given by:
ẋ_p(t) = A_p x_p(t) + B_p Λ u(t) + B_p δ_p(x_p(t)),   x_p(0) = x_{p0},   (1)
where x_p(t) ∈ R^{n_p} is the state vector available for feedback, u(t) ∈ R^m is the control input restricted to the class of admissible controls consisting of measurable functions, δ_p : R^{n_p} → R^m is an uncertainty, A_p ∈ R^{n_p×n_p} is a known system matrix, B_p ∈ R^{n_p×m} is a known control input matrix with B_p^T B_p nonsingular, Λ ∈ R_+^{m×m} ∩ D^{m×m} is an unknown control effectiveness matrix and the pair (A_p, B_p) is controllable. The next assumption is standard in the adaptive control literature [14,15,16].
Assumption 1.
The uncertainty in Equation (1) is parameterized as:
δ_p(x_p(t)) = W_p^T σ_p(x_p(t)),   x_p(t) ∈ R^{n_p},   (2)
where W_p ∈ R^{s×m} is an unknown weight matrix and σ_p : R^{n_p} → R^s is a known basis function of the form σ_p(x_p(t)) = [σ_{p1}(x_p(t)), σ_{p2}(x_p(t)), …, σ_{ps}(x_p(t))]^T.
For addressing command following, let c(t) ∈ R^{n_c} be a given piecewise continuous command and x_c(t) ∈ R^{n_c} be the integrator state given by the dynamics:
ẋ_c(t) = E_p x_p(t) − c(t),   x_c(0) = x_{c0},   (3)
where E_p ∈ R^{n_c×n_p} selects a subset of x_p(t) to follow c(t). Based on the above construction, Equations (1) and (3) are now augmented as:
ẋ(t) = A x(t) + B Λ u(t) + B W_p^T σ_p(x_p(t)) + B_r c(t),   x(0) = x_0,   (4)
where x(t) ≜ [x_p^T(t), x_c^T(t)]^T ∈ R^n, n = n_p + n_c, is the augmented state vector, x_0 = [x_{p0}^T, x_{c0}^T]^T ∈ R^n, and:
A ≜ [A_p, 0_{n_p×n_c}; E_p, 0_{n_c×n_c}] ∈ R^{n×n},   (5)
B ≜ [B_p^T, 0_{n_c×m}^T]^T ∈ R^{n×m},   (6)
B_r ≜ [0_{n_p×n_c}^T, −I_{n_c×n_c}]^T ∈ R^{n×n_c}.   (7)
Consider now the feedback control law given by:
u(t) = u_n(t) + u_a(t),   (8)
where u_n(t) and u_a(t) are the nominal feedback control law and the adaptive feedback control law, respectively. Let the nominal feedback control law be further given by:
u_n(t) = −K x(t),   K ∈ R^{m×n},   (9)
such that A_r ≜ A − B K is Hurwitz. Using Equations (8) and (9) in Equation (4) yields:
ẋ(t) = A_r x(t) + B_r c(t) + B Λ [u_a(t) + W^T σ(x(t))],   (10)
where:
W ≜ [Λ^{-1} W_p^T, (Λ^{-1} − I)]^T ∈ R^{(s+m)×m}   (11)
is an unknown aggregated weight matrix and:
σ(x(t)) ≜ [σ_p^T(x_p(t)), x^T(t) K^T]^T ∈ R^{s+m}   (12)
is a known aggregated basis function. Considering Equation (10), the adaptive control law is given by:
u_a(t) = −Ŵ^T(t) σ(x(t)),   (13)
where Ŵ(t) ∈ R^{(s+m)×m} is the estimate of W satisfying the weight update law:
Ŵ̇(t) = γ σ(x(t)) e^T(t) P B,   Ŵ(0) = Ŵ_0.   (14)
In Equation (14), γ ∈ R_+ is the learning rate, e(t) ≜ x(t) − x_r(t) is the system error state vector with x_r(t) ∈ R^n being the reference state vector satisfying the reference model dynamics:
ẋ_r(t) = A_r x_r(t) + B_r c(t),   x_r(0) = x_{r0},   (15)
and P ∈ R_+^{n×n} is the symmetric solution of the Lyapunov equation:
0 = A_r^T P + P A_r + R,   R ∈ R_+^{n×n}.   (16)
Now, using Equation (13) in Equation (10) yields:
ẋ(t) = A_r x(t) + B_r c(t) − B Λ W̃^T(t) σ(x(t)),   (17)
and the system error dynamics are given using Equations (15) and (17) as:
ė(t) = A_r e(t) − B Λ W̃^T(t) σ(x(t)),   e(0) = e_0,   (18)
where W̃(t) ≜ Ŵ(t) − W ∈ R^{(s+m)×m} and e_0 ≜ x_0 − x_{r0}.
Remark 1.
The update law given by Equation (14) can be derived using Lyapunov analysis by considering the Lyapunov function candidate (see, for example, [14,15,16]):
V(e, W̃) = e^T P e + γ^{-1} tr[(W̃ Λ^{1/2})^T (W̃ Λ^{1/2})].   (19)
Note that V(0, 0) = 0 and V(e, W̃) > 0 for all (e, W̃) ≠ (0, 0). Now, differentiating Equation (19) yields:
V̇(e(t), W̃(t)) = −e^T(t) R e(t) − 2 e^T(t) P B Λ W̃^T(t) σ(x(t)) + 2 γ^{-1} tr[W̃^T(t) Ŵ̇(t) Λ],   (20)
where using Equation (14) in Equation (20) results in:
V̇(e(t), W̃(t)) = −e^T(t) R e(t) ≤ 0,   t ∈ R̄_+,   (21)
which guarantees that the system error state vector e(t) and the weight error W̃(t) are Lyapunov stable and, hence, bounded for all t ∈ R̄_+. Since σ(x(t)) is bounded for all t ∈ R̄_+, it follows from Equation (18) that ė(t) is bounded, and hence, V̈(e(t), W̃(t)) is bounded for all t ∈ R̄_+. Now, it follows from Barbalat's lemma [17] that:
lim_{t→∞} V̇(e(t), W̃(t)) = 0,   (22)
which consequently shows that e(t) → 0 as t → ∞.
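As a quick sanity check of Equations (13)–(18), the following minimal script simulates a scalar uncertain system under the standard update law (14). All numerical values (plant A = 1, B = 1, Λ = 0.5, uncertainty δ_p(x) = 2x, gains K = 3, γ = 20, R = 1) are illustrative assumptions, not values from the paper.

```python
# Minimal scalar simulation of the standard MRAC laws, Equations (13)-(18).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A, B, lam, w = 1.0, 1.0, 0.5, 2.0             # plant (1) with delta_p(x) = w*x
K = 3.0
A_r = A - B * K                               # A_r = -2, Hurwitz
R = np.array([[1.0]])
# Lyapunov equation (16): 0 = A_r^T P + P A_r + R
P = solve_continuous_lyapunov(np.array([[A_r]]).T, -R)[0, 0]

gamma, dt, T = 20.0, 1e-3, 30.0
sigma = lambda x: np.array([x, K * x])        # aggregated basis, Equation (12)
W_hat = np.zeros(2)                           # estimate of W = [w/lam, 1/lam - 1] = [4, 1]
x, xr = 1.0, 0.0                              # regulation case: c(t) = 0, e(0) = 1
for _ in range(int(T / dt)):
    e = x - xr
    u = -K * x - W_hat @ sigma(x)             # nominal (9) plus adaptive (13)
    W_hat += dt * gamma * sigma(x) * (e * P * B)   # update law (14)
    x += dt * (A * x + B * lam * u + B * w * x)    # uncertain plant (1)
    xr += dt * (A_r * xr)                          # reference model (15)
e = x - xr
```

With the regulation command c(t) = 0, the system error decays to zero as Remark 1 predicts, even though the weight estimates need not converge to their ideal values.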
Remark 2.
In this paper, we assume that the uncertainty can be perfectly parameterized as in Equation (2), which implies that the structure of the uncertainty is known. To elucidate this point, consider an example with the uncertainty δ_p(x_p(t)) = α_1 x_{p1}(t) + α_2 x_{p1}²(t) + α_3 x_{p2}(t), where x_p^T(t) = [x_{p1}(t), x_{p2}(t)] is the state vector and α_1, α_2 and α_3 are some unknown parameters. In this case, it follows from the parameterization in Equation (2) that W_p^T = [α_1, α_2, α_3] and σ_p^T(x_p(t)) = [x_{p1}(t), x_{p1}²(t), x_{p2}(t)]. That is, provided that one knows the structure of the uncertainty as in this representative example, the basis function can be easily formed. For situations when one does not know the structure of the uncertainty and the uncertainty in Equation (1) cannot be perfectly parameterized, Assumption 1 can be relaxed by considering [18,19]:
δ_p(t, x_p(t)) = W_p^T(t) σ_p(x_p(t)) + ε_p(t, x_p(t)),   x_p(t) ∈ D_{x_p},   (23)
where W_p(t) ∈ R^{s×m} is an unknown time-varying weight matrix satisfying ||W_p(t)||_F ≤ w and ||Ẇ_p(t)||_F ≤ ẇ, with w ∈ R_+ and ẇ ∈ R_+ being unknown scalars, σ_p : D_{x_p} → R^s is a known basis function of the form σ_p(x_p(t)) = [1, σ_{p1}(x_p(t)), σ_{p2}(x_p(t)), …, σ_{p,s−1}(x_p(t))]^T, ε_p : R̄_+ × D_{x_p} → R^m is the system modeling error satisfying ||ε_p(t, x_p(t))||_2 ≤ ϵ, with ϵ ∈ R_+ being an unknown scalar, and D_{x_p} is a compact subset of R^{n_p}. In this case, the update law given by Equation (14) can be replaced by, for example,
Ŵ̇(t) = γ Proj[Ŵ(t), σ(x(t)) e^T(t) P B],   Ŵ(0) = Ŵ_0,   (24)
to guarantee the uniform boundedness of the system error state vector e ( t ) and the weight error W ˜ ( t ) , where Proj denotes the projection operator [20].
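To make the parameterization of Assumption 1 concrete, the following sketch evaluates the representative uncertainty from Remark 2 both directly and through the weight/basis split of Equation (2); the α values are arbitrary stand-ins for the unknown parameters.

```python
# Numerical check of the parameterization in Remark 2: for the uncertainty
# delta_p(x) = a1*x1 + a2*x1^2 + a3*x2, the split into W_p and sigma_p of
# Equation (2) reproduces delta_p exactly.
import numpy as np

alpha = np.array([0.7, -1.3, 2.1])                       # unknown-in-practice parameters
W_p = alpha.reshape(3, 1)                                # W_p in R^{s x m}, s = 3, m = 1
sigma_p = lambda xp: np.array([xp[0], xp[0]**2, xp[1]])  # known basis function

xp = np.array([0.4, -0.9])
delta_direct = alpha[0]*xp[0] + alpha[1]*xp[0]**2 + alpha[2]*xp[1]
delta_param = float(W_p.T @ sigma_p(xp))
```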

3. Direct Uncertainty Minimization for Adaptive System Performance Improvement: Linear Reference Model Case

For the model reference adaptive control framework introduced in Section 2, we now develop the direct uncertainty minimization mechanism to improve transient system response. In particular, we first modify the adaptive feedback control law given by Equation (13) as:
u_a(t) = −Ŵ^T(t) σ(x(t)) − ϕ(t),   (25)
where ϕ(t) ∈ R^m is the system performance improvement term that satisfies:
ϕ(t) = ϕ(0) + k (B^T B)^{-1} B^T [(e(t) − e(0)) − ∫₀ᵗ A_r e(τ) dτ],   (26)
with k ∈ R_+ being a design parameter. Using Equation (25), the system error dynamics in Equation (18) become:
ė(t) = A_r e(t) − B Λ [W̃^T(t) σ(x(t)) + ϕ(t)],   e(0) = e_0.   (27)
Notice that the ideal system error dynamics have the form:
ė(t) = A_r e(t),   e(0) = e_0,   (28)
under nominal conditions with ϕ(t) ≡ 0 when there is no system uncertainty or control uncertainty. Motivated by this observation, the mismatch term W̃^T(t) σ(x(t)) + ϕ(t) in Equation (27) has to be minimized during the transient system response to improve system performance. In the next theorem, we show that the proposed system performance improvement term given by Equation (26) achieves this objective through a gradient minimization procedure.
Theorem 1.
The modification term of the adaptive feedback control law in Equation (26) is the negative gradient of the cost function given by:
J(·) = (k/2) ||Λ^{1/2} (W̃^T(t) σ(x(t)) + ϕ(t))||_2².   (29)
Proof. 
The negative gradient of the cost function given by Equation (29) with respect to ϕ(t) has the form:
−∂J(·)/∂ϕ(t) = −k Λ (W̃^T(t) σ(x(t)) + ϕ(t)),   (30)
which can be rewritten using Equation (27) as:
−∂J(·)/∂ϕ(t) = k (B^T B)^{-1} B^T [ė(t) − A_r e(t)].   (31)
In Equation (31), note that B^T B = B_p^T B_p is nonsingular by its definition in Section 2. To construct the modification term of the adaptive feedback control law in Equation (26), let:
ϕ̇(t) = −∂J(·)/∂ϕ(t) = k (B^T B)^{-1} B^T [ė(t) − A_r e(t)],   (32)
where Equation (26) is a direct consequence of Equation (32) using integration by parts. ☐
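A practical consequence of Equation (26) is that ϕ(t) is implementable from e(t) alone, with no differentiation of the error. The sketch below integrates Equation (32) using only increments of a synthetic error trajectory and confirms that it reproduces the integrated form (26); here A_r, B, k and e(t) are arbitrary illustrative choices.

```python
# Equivalence of Equations (26) and (32) on a synthetic e(t) trajectory:
# accumulating phi from increments of e (Equation (32)) matches the
# integrated form (26), which never requires e-dot explicitly.
import numpy as np

A_r = np.array([[0.0, 1.0], [-2.0, -3.0]])     # Hurwitz reference matrix
B = np.array([[0.0], [1.0]])
k_gain = 5.0
M = np.linalg.inv(B.T @ B) @ B.T               # (B^T B)^{-1} B^T

dt, N = 1e-4, 20000
t = dt * np.arange(N + 1)
e = np.stack([np.exp(-t) * np.sin(2*t), np.exp(-t) * np.cos(2*t)], axis=1)

phi32 = np.zeros(1)                            # Euler integration of Equation (32)
integral = np.zeros(2)                         # running value of int_0^t A_r e dtau
for n in range(N):
    de = e[n + 1] - e[n]                       # error increment, no derivative needed
    phi32 += k_gain * (M @ (de - dt * (A_r @ e[n])))
    integral += dt * (A_r @ e[n])
# Equation (26): phi(t) = phi(0) + k (B^T B)^{-1} B^T [(e(t) - e(0)) - int A_r e]
phi26 = k_gain * (M @ ((e[N] - e[0]) - integral))
```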
Remark 3.
The proposed modification term of the adaptive feedback control law in Equation (26) allows the system error to be shaped by suppressing the mismatch term W̃^T(t) σ(x(t)) + ϕ(t) in Equation (27) through gradient minimization, since it is constructed to be the negative gradient of Equation (29) with respect to ϕ(t). Therefore, by adjusting k in Equation (26), the uncertain dynamical system response and the reference model response can be made close to each other for all time, including the transient phase. See Section 5 for illustrative numerical examples.
Next, to maintain closed-loop system stability under the modified adaptive control signal given by Equation (25), we now modify the update law given by Equation (14) as:
Ŵ̇(t) = γ σ(x(t)) [e^T(t) P B + ξ ϕ^T(t)],   Ŵ(0) = Ŵ_0,   (33)
with ξ ≜ k/a and a ∈ R_+ being a design parameter.
Remark 4.
Note that the structure of Equation (26) is much simpler than the structure of (44) in [11], in that the former involves neither dependence on Ŵ(t) nor additional integration terms. Furthermore, the same conclusion holds when Equation (33) is compared with (31)–(33) of [11], where the latter requires an extra differential equation in addition to the modification terms. Thus, the approach proposed here is much less computationally expensive than those of [9,10,11].
Now, we are ready to state the following theorem, which shows the asymptotic stability of the pair ( e ( t ) , ϕ ( t ) ) , as well as the Lyapunov stability of W ˜ ( t ) .
Theorem 2.
Consider the uncertain dynamical system given by Equation (1) subject to Assumption 1, the reference model given by Equation (15) and the feedback control law given by Equation (25) with Equations (26) and (33). In addition, let ξ be chosen such that:
λ_min(R) − (1/ξ) ||P B||_F² Λ* > 0   (34)
holds, where ||Λ||_F ≤ Λ* (here, Λ* ∈ R_+ is a known, possibly conservative bound on the control effectiveness). Then, the solution (e(t), ϕ(t), W̃(t)) of the closed-loop dynamical system is Lyapunov stable for all initial conditions and t ∈ R̄_+, with lim_{t→∞} e(t) = 0 and lim_{t→∞} ϕ(t) = 0.
Proof. 
To show Lyapunov stability of the solution (e(t), ϕ(t), W̃(t)), consider the Lyapunov function candidate given by:
V(e, ϕ, W̃) = e^T P e + a^{-1} ϕ^T ϕ + γ^{-1} tr[(W̃ Λ^{1/2})^T (W̃ Λ^{1/2})].   (35)
Note that V(0, 0, 0) = 0 and V(e, ϕ, W̃) > 0 for all (e, ϕ, W̃) ≠ (0, 0, 0). Differentiating Equation (35) along the closed-loop dynamical system trajectories yields:
V̇(e(t), ϕ(t), W̃(t)) = −e^T(t) R e(t) − 2ξ ϕ^T(t) Λ ϕ(t) − 2 e^T(t) P B Λ^{1/2} Λ^{1/2} ϕ(t).   (36)
Using Young's inequality [21] for the last term in Equation (36) gives:
−2 e^T(t) P B Λ^{1/2} Λ^{1/2} ϕ(t) ≤ |2 e^T(t) P B Λ^{1/2} Λ^{1/2} ϕ(t)| ≤ (1/μ) e^T(t) P B Λ B^T P e(t) + μ ϕ^T(t) Λ ϕ(t),   (37)
with μ ∈ R_+. Now, setting μ = ξ and using Equation (37) in Equation (36) yields:
V̇(e(t), ϕ(t), W̃(t)) ≤ −e^T(t) R e(t) + (1/ξ) e^T(t) P B Λ B^T P e(t) − ξ ϕ^T(t) Λ ϕ(t) ≤ −λ_min(R) ||e(t)||_2² + (1/ξ) ||e(t)||_2² ||P B||_F² Λ* − ξ λ_min(Λ) ||ϕ(t)||_2² = −[λ_min(R) − (1/ξ) ||P B||_F² Λ*] ||e(t)||_2² − ξ λ_min(Λ) ||ϕ(t)||_2².   (38)
Using the condition Equation (34) in Equation (38), it follows that V̇(e(t), ϕ(t), W̃(t)) ≤ 0, which guarantees the Lyapunov stability of the solution (e(t), ϕ(t), W̃(t)). Since this implies the boundedness of e(t), ϕ(t) and W̃(t) for all t ∈ R̄_+, it follows from Equations (27) and (32) that ė(t) and ϕ̇(t) are bounded for all t ∈ R̄_+, and hence, V̈(e(t), ϕ(t), W̃(t)) is bounded for all t ∈ R̄_+. It now follows from Barbalat's lemma [17] that:
lim_{t→∞} V̇(e(t), ϕ(t), W̃(t)) = 0,   (39)
which shows that lim_{t→∞} e(t) = 0 and lim_{t→∞} ϕ(t) = 0. ☐
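The closed-loop behavior established in Theorem 2 can be reproduced on a scalar example. The sketch below implements Equations (25), (26) and (33) with illustrative values (plant A = 1, B = 1, Λ = 0.5, δ_p(x) = 2x, k = 5, a = 1, γ = 20), chosen so that condition (34) holds; these numbers are our assumptions, not the paper's. Both e(t) and ϕ(t) settle near zero, consistent with the theorem.

```python
# Scalar closed-loop sketch of the direct uncertainty minimization laws,
# Equations (25), (26) and (33). phi is propagated from increments of e(t),
# so no derivative of the error is required.
import numpy as np

A, B, lam, w = 1.0, 1.0, 0.5, 2.0       # plant (1) with delta_p(x) = w*x
K = 3.0; A_r = A - B*K                  # A_r = -2, Hurwitz
R = 1.0; P = R / 4.0                    # solves 0 = 2*A_r*P + R, Equation (16)
gamma, k_mod, a = 20.0, 5.0, 1.0
xi = k_mod / a                          # condition (34): 1 - (1/5)*0.0625*0.5 > 0
sigma = lambda x: np.array([x, K*x])    # aggregated basis, Equation (12)
W_hat, phi = np.zeros(2), 0.0
x, xr = 1.0, 0.0                        # regulation case: c(t) = 0
dt, T = 1e-3, 30.0
e = x - xr
for _ in range(int(T/dt)):
    ua = -W_hat @ sigma(x) - phi        # modified adaptive law (25)
    u = -K*x + ua
    W_hat += dt * gamma * sigma(x) * (e*P*B + xi*phi)   # modified update law (33)
    x += dt * (A*x + B*lam*u + B*w*x)                   # uncertain plant (1)
    xr += dt * (A_r*xr)                                 # reference model (15)
    e_new = x - xr
    phi += k_mod * ((e_new - e) - dt*A_r*e)             # Equation (26), incrementally
    e = e_new
```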
From a practical standpoint, if e(t) is sufficiently small, then the design parameter ξ, which affects both modification terms in Equations (25) and (33), can be chosen small such that Equation (34) holds. However, as e(t) becomes large, ξ may need to be increased accordingly to put more weight on minimizing the cost function given by Equation (29) and, hence, to enforce the system error to remain approximately within a priori given, user-defined performance bounds. To achieve this practical objective, we can let ξ(t) = k(t)/a, where ξ(t) ∈ [ξ_min, ξ_max], ξ_min ∈ R_+, ξ_max ∈ R_+, and consider the cost function given by:
J(·) = (k(t)/2) ||Λ^{1/2} (W̃^T(t) σ(x(t)) + ϕ(t))||_2².   (40)
Choosing the modification term in Equation (25) as the negative gradient of Equation (40), i.e., ϕ̇(t) = −∂J(·)/∂ϕ(t), and following steps similar to those in the proof of Theorem 1, it follows by integration by parts that:
ϕ(t) = ϕ(0) + a [ξ(t) (B^T B)^{-1} B^T e(t) − ξ(0) (B^T B)^{-1} B^T e(0) − ∫₀ᵗ ξ̇(τ) (B^T B)^{-1} B^T e(τ) dτ − ∫₀ᵗ ξ(τ) (B^T B)^{-1} B^T A_r e(τ) dτ].   (41)
Notice that in this case, the modified update law becomes:
Ŵ̇(t) = γ σ(x(t)) [e^T(t) P B + ξ(t) ϕ^T(t)],   Ŵ(0) = Ŵ_0,   (42)
and the condition Equation (34) needs to be replaced with:
λ_min(R) − (1/ξ_min) ||P B||_F² Λ* > 0,   (43)
where ||Λ||_F ≤ Λ* (here, Λ* ∈ R_+ is a known bound on the control effectiveness). In addition, we choose:
ξ̇(t) = −γ_ξ [f(e) (ξ(t) − ξ_min) + (1 − f(e)) (ξ(t) − ξ_max)],   ξ(0) = ξ_0 ∈ [ξ_min, ξ_max],   (44)
where γ_ξ ∈ R_+ and f(e) ∈ [0, 1] is a continuously differentiable function that is close to one when e(t) is sufficiently small and close to zero otherwise. It follows from Equation (44) that ξ(t) ∈ [ξ_min, ξ_max] and that ξ(t) approaches ξ_min (resp., ξ_max) when f(e) = 1 (resp., f(e) = 0). A candidate f(e) has the form f(e) = 1 − [1 − sech(c_1 ||e(t)||_P)]^{c_2}, with ||e(t)||_P ≜ √(e^T(t) P e(t)); it is depicted in Figure 1 for c_1 = 5 (chosen to drive ξ(t) to ξ_max when ||e(t)||_P is larger than 0.5) and c_2 = 10.
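The scheduling logic above can be sketched as follows, using c_1 = 5 and c_2 = 10 from the text; the values of ξ_min, ξ_max, γ_ξ and the held error level are illustrative assumptions.

```python
# Sketch of the candidate gate f(e) = 1 - [1 - sech(c1*||e||_P)]^c2 and the
# xi(t) dynamics of Equation (44): xi slides toward xi_max while the error
# is large (f near zero) and toward xi_min when the error is small.
import numpy as np

c1, c2 = 5.0, 10.0
f = lambda eP: 1.0 - (1.0 - 1.0/np.cosh(c1*eP))**c2   # eP stands for ||e(t)||_P

xi_min, xi_max, gamma_xi = 10.0, 100.0, 50.0
xi, dt = xi_min, 1e-3
for _ in range(2000):                                  # hold a large error, ||e||_P = 2
    fe = f(2.0)
    xi += dt * (-gamma_xi * (fe*(xi - xi_min) + (1.0 - fe)*(xi - xi_max)))
```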

4. Generalization to a Class of Nonlinear Reference Models

In most of the model reference adaptive control literature, it is common to design a reference model with linear dynamics as given by Equation (15). While this is practical for several applications, the control designer may prefer to use a nonlinear reference model to better capture the desired closed-loop system performance for many robotics and flight control applications. By adopting the tools and methods from [12,13], we now generalize the results in Section 3 such that the proposed direct uncertainty minimization adaptive control architecture can be used to suppress the effect of the system uncertainty on the transient system response and drive the states of a nonlinear uncertain dynamical system to the states of a class of nonlinear reference models.
For this purpose, we recast the uncertain dynamical system given by Equation (1) as the more general class of affine-in-control nonlinear system dynamics given by:
ẋ_p(t) = f_p(x_p(t)) + B_p Λ u(t) + B_p δ_p(x_p(t)),   x_p(0) = x_{p0},   (45)
where x_p(t) ∈ R^{n_p} is the state vector, u(t) ∈ R^m is the control input restricted to the class of admissible controls consisting of measurable functions, f_p : R^{n_p} → R^{n_p} is a known system function satisfying f_p(0) = 0, B_p ∈ R^{n_p×m} is a known control input matrix, Λ ∈ R_+^{m×m} ∩ D^{m×m} is an unknown control effectiveness matrix and δ_p : R^{n_p} → R^m is the system uncertainty. It is implicitly assumed that the required properties for the existence and uniqueness of solutions are satisfied, such that the controllable uncertain dynamical system in Equation (45) has a unique solution forward in time [17,22].
Once again, to address command following, Equation (45) can be augmented with the integrator state dynamics given by Equation (3) in the following form subject to Assumption 1:
ẋ(t) = f(x(t), c(t)) + B Λ u(t) + B W_p^T σ_p(x_p(t)),   x(0) = x_0,   (46)
where x(t) ≜ [x_p^T(t), x_c^T(t)]^T ∈ R^n, n = n_p + n_c, is the augmented state vector, x_0 = [x_{p0}^T, x_{c0}^T]^T ∈ R^n, c(t) ∈ R^{n_c} is a given bounded command, B is given by Equation (6) and f : R^n × R^{n_c} → R^n is the aggregated system function with the integrator state dynamics that satisfies f(0, 0) = 0 and:
f(x(t), c(t)) = [f_p^T(x_p(t)), (E_p x_p(t) − c(t))^T]^T.   (47)
Next, consider the nonlinear reference model given by:
ẋ_r(t) = f_r(x_r(t), c(t)),   x_r(0) = x_{r0},   (48)
where x_r(t) ∈ R^n is the reference state vector and f_r : R^n × R^{n_c} → R^n is the reference model function that satisfies f_r(0, 0) = 0 and:
f_r(x_r(t), c(t)) ≜ f(x_r(t), c(t)) − B k(x_r(t)),   (49)
with k : R^n → R^m being a feedback law such that x_r(t) is bounded for all t ∈ R̄_+. In addition, it is implicitly assumed that Equation (48) has a unique solution forward in time.
Let the nominal control law be given by:
u_n(t) = −k(x(t)),   (50)
such that with Equation (8), Equation (46) can be rewritten as:
ẋ(t) = f(x(t), c(t)) − B k(x(t)) + B Λ [u_a(t) + Λ^{-1} W_p^T σ_p(x_p(t)) + (Λ^{-1} − I) k(x(t))] = f_r(x(t), c(t)) + B Λ [u_a(t) + W_o^T σ_o(x(t))],   (51)
where W_o ≜ [Λ^{-1} W_p^T, (Λ^{-1} − I)]^T ∈ R^{(s+m)×m} and σ_o(x(t)) ≜ [σ_p^T(x_p(t)), k^T(x(t))]^T ∈ R^{s+m}. The system error dynamics then follow from Equations (48) and (51) as:
ė(t) = f_r(x(t), c(t)) − f_r(x_r(t), c(t)) + B Λ [u_a(t) + W_o^T σ_o(x(t))],   e(0) = e_0.   (52)
Note that there exists a known signal v(x(t), x_r(t), c(t)) ∈ R^m, which can be used as a feedback linearization term, such that:
A_r e(t) = f_r(x(t), c(t)) − f_r(x_r(t), c(t)) + B v(·)   (53)
holds, and hence, Equation (52) can be written as:
ė(t) = A_r e(t) + B Λ [u_a(t) + W_o^T σ_o(x(t)) − Λ^{-1} v(·)] = A_r e(t) + B Λ [u_a(t) + W^T σ(·)],   (54)
with W ≜ [W_o^T, −Λ^{-1}]^T ∈ R^{(s+2m)×m} being the unknown aggregated weight matrix and σ(·) ≜ [σ_o^T(x(t)), v^T(·)]^T ∈ R^{s+2m} being the known aggregated basis function.
Now, consider the adaptive feedback control law given by:
u_a(t) = −Ŵ^T(t) σ(·) − ϕ(t),   (55)
where ϕ(t) ∈ R^m satisfies Equation (41) with Equation (44) and Ŵ(t) ∈ R^{(s+2m)×m} satisfies:
Ŵ̇(t) = γ σ(·) [e^T(t) P B + ξ(t) ϕ^T(t)],   Ŵ(0) = Ŵ_0.   (56)
Using Equation (55) in Equation (54), it follows that the system error dynamics can be written as:
ė(t) = A_r e(t) − B Λ [W̃^T(t) σ(·) + ϕ(t)],   (57)
where W̃(t) ≜ Ŵ(t) − W ∈ R^{(s+2m)×m}.
Remark 5.
It should be noted that the term v ( · ) acts similar to a feedback linearization signal, which is an important feature in generalizing the direct uncertainty minimization framework for the considered class of nonlinear reference models. By appropriately selecting v ( · ) , when possible, for the given application, such that Equation (53) holds, and then embedding v ( · ) into the unknown weight matrix W and the known basis function σ ( · ) , the resulting system error dynamics given by Equation (57) has an identical structure to the system error dynamics given by Equation (27) in Section 3 for the linear reference model. It then follows that the analysis and synthesis of the direct uncertainty minimization mechanism and stability analysis presented in Section 3 directly translates to the case in which nonlinear reference models are used.
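As a minimal illustration of Equation (53), consider an assumed scalar reference model f_r(x, c) = A_r x − x³ + c with B = 1 (our example, not one from the paper); the cubic mismatch between the system and reference trajectories is exactly absorbed by the choice v = x³ − x_r³.

```python
# Scalar sanity check of Equation (53): with the assumed nonlinear reference
# model f_r(x, c) = A_r*x - x^3 + c and B = 1, the feedback-linearization-like
# signal v = x^3 - x_r^3 makes A_r*e = f_r(x,c) - f_r(x_r,c) + B*v hold exactly.
import numpy as np

A_r, B = -2.0, 1.0
f_r = lambda x, c: A_r*x - x**3 + c

rng = np.random.default_rng(0)
x, x_r, c = rng.standard_normal(3)     # arbitrary state, reference state, command
e = x - x_r
v = x**3 - x_r**3                      # known signal, computable from x and x_r
lhs = A_r * e
rhs = f_r(x, c) - f_r(x_r, c) + B * v
```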
Theorem 3.
Consider the nonlinear uncertain dynamical system given by Equation (45) subject to Assumption 1, the nonlinear reference model given by Equation (48) and the feedback control law given by Equation (55) with Equations (41), (44) and (56). In addition, let ξ_min be chosen such that Equation (43) holds. Then, the solution (e(t), ϕ(t), W̃(t)) of the closed-loop dynamical system is Lyapunov stable for all initial conditions and t ∈ R̄_+, with lim_{t→∞} e(t) = 0 and lim_{t→∞} ϕ(t) = 0.
Proof. 
As a consequence of the discussion highlighted in Remark 5, the proof is similar to the proof of Theorem 2, and hence, is omitted. ☐

5. Illustrative Numerical Examples

To demonstrate the efficacy of the proposed direct uncertainty minimization framework, we now present two examples in the following two subsections. We first investigate the application to a hypersonic vehicle using a linear reference model. The second example considers a wing rock dynamics model for an aircraft with a nonlinear reference model, where the purpose of the nonlinear reference model is to limit the pilot authority for envelope protection.

5.1. Example 1: Application to a Hypersonic Vehicle Model

For this example, we first formulate a state space model of a generic hypersonic vehicle (GHV). We then explain how the model is decoupled into longitudinal and lateral dynamics, for which separate controllers are designed. Each controller has a nominal and an adaptive component, and the simulation results illustrate the nominal control performance, a standard adaptive control performance and the proposed adaptive control performance.
For the configuration with an altitude of 80,000 feet and a Mach number of six, a linearized model under nominal conditions ( δ p ( x p ( t ) ) = 0 and Λ = I ) is obtained in the form of Equation (1) with:
A_p = [ 3.70×10^3   7.17×10^1   0           3.18×10^1   2.67×10^4   8.81×10^1   0           0           1.77×10^15
        5.35×10^7   2.39×10^1   1           2.95×10^12  2.23×10^7   1.06×10^3   0           0           3.18×10^21
        2.79×10^5   4.26        1.19×10^1   0           3.94×10^5   1.47        0           0           0
        4.76×10^8   1.31×10^13  1           4.45×10^14  1.33×10^11  1.08×10^19  4.44×10^16  9.58×10^16  2.58×10^18
        5.53×10^10  5.87×10^3   0           5.87×10^3   0           0           0           0           3.26×10^13
        5.99×10^16  3.14×10^11  0           3.04×10^19  9.74×10^16  6.97×10^2   1.04×10^2   9.99×10^1   5.35×10^3
        1.47×10^10  4.45×10^6   0           0           1.00×10^11  1.31×10^3   2.03        7.54×10^3   0
        5.29×10^12  3.98×10^8   0           0           1.28×10^12  2.07        1.55×10^3   5.31×10^2   0
        8.08×10^28  2.04×10^22  1.01×10^20  1.17×10^16  1.73×10^31  2.38×10^4   8.54×10^1   8.84×10^3   3.00×10^6 ],
B_p = [ 6.53×10^3   1.24×10^13  2.98×10^3
        1.33×10^4   2.44×10^13  1.17×10^7
        1.84×10^1   1.60×10^13  2.48×10^4
        0           0           0
        0           0           0
        1.40×10^16  2.47×10^5   2.18×10^4
        5.90×10^11  8.04        10.3
        8.56×10^14  3.17×10^2   2.85×10^1
        0           0           0 ],
with the state vector being defined as x p ( t ) = [ V ( t ) , α ( t ) , q ( t ) , θ ( t ) , h ( t ) , β ( t ) , p ( t ) , r ( t ) , ϕ ( t ) ] T , where V ( t ) denotes the total velocity, α ( t ) denotes the angle of attack, q ( t ) denotes the pitch rate, θ ( t ) denotes the pitch angle, h ( t ) denotes the altitude, β ( t ) denotes the sideslip angle, p ( t ) denotes the roll rate, r ( t ) denotes the yaw rate and ϕ ( t ) denotes the roll angle. The control input vector is defined as u ( t ) = [ δ e ( t ) , δ a ( t ) , δ r ( t ) ] T , where δ e ( t ) denotes the elevator deflection, δ a ( t ) denotes the aileron deflection and δ r ( t ) denotes the rudder deflection. To control the model described above, we decouple the system into its longitudinal and lateral dynamics, design nominal and adaptive controllers for the decoupled system and then combine the separate controllers to control the overall coupled GHV model (see Figure 2 and Figure 3).

5.1.1. Longitudinal Control Design

For the decoupled longitudinal dynamics, we consider the state vector defined as x_p^lo(t) = [α(t), q(t)]^T, with the respective system matrices:
A_p^lo = [ 2.39×10^1   1
           4.26        1.19×10^1 ],
B_p^lo = [ 1.33×10^4
           1.84×10^1 ].
LQR theory is used to design the nominal controller with E_p^lo = [1, 0] such that a desired angle of attack command is followed. The controller gain matrix K^lo is obtained using the highlighted augmented formulation (Equations (5) and (6)), along with the weighting matrices Q^lo = diag[20000, 25000, 400000] to penalize x^lo(t) and R^lo = 12.5 to penalize u^lo(t), resulting in the gain matrix:
K^lo = [ 1.65×10^2   6.09×10^1   1.79×10^2 ].
The solution to A_r^{lo,T} P^lo + P^lo A_r^lo + R_1^lo = 0, where A_r^lo ≜ A^lo − B^lo K^lo, is calculated using R_1^lo = diag[1, 1, 100] for both the standard adaptive control design and the proposed controller. For the proposed design, we use Equations (25), (41) and (42) and resort to Equation (44) for enforcing ||e^lo(t)||_{P^lo} ≤ 0.5. Additionally, note that ξ_min = 10 is selected to satisfy Equation (43), and we choose a = 2. To visualize the overall longitudinal control design, a block diagram is provided in Figure 2.
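The longitudinal LQR computation can be sketched as follows. The signs of the A_p^lo and B_p^lo entries below are assumptions (a typical short-period pattern; the extracted values above carry no signs), so the resulting gain only illustrates the procedure and will not reproduce the K^lo reported above.

```python
# LQR sketch for the augmented longitudinal design of Section 5.1.1, using
# the augmented formulation of Equations (5) and (6). Entry signs are
# assumed, so K only illustrates the procedure; the closed loop A - B*K is
# nonetheless guaranteed Hurwitz for any stabilizable pair.
import numpy as np
from scipy.linalg import solve_continuous_are

A_p = np.array([[-2.39e-1, 1.0],
                [ 4.26,   -1.19e-1]])      # assumed signs
B_p = np.array([[-1.33e-4],
                [-1.84e-1]])               # assumed signs
E_p = np.array([[1.0, 0.0]])               # track the angle of attack

A = np.block([[A_p, np.zeros((2, 1))],     # augmented A, Equation (5)
              [E_p, np.zeros((1, 1))]])
B = np.vstack([B_p, np.zeros((1, 1))])     # augmented B, Equation (6)
Q = np.diag([20000.0, 25000.0, 400000.0])  # state weights from the text
R = np.array([[12.5]])                     # control weight from the text

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # K = R^{-1} B^T P
A_r = A - B @ K                            # closed-loop (reference) matrix
```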

5.1.2. Lateral Control Design

The decoupled lateral dynamics follow similarly. Specifically, we consider the state vector defined as x_p^la(t) = [β(t), p(t), r(t), ϕ(t)]^T, with the respective system matrices:
A_p^la = [ 6.97×10^2   1.04×10^2   9.99×10^1   5.35×10^3
           1.31×10^3   2.03        7.54×10^3   0
           2.07        1.55×10^3   5.31×10^2   0
           2.38×10^4   8.54×10^1   8.84×10^3   3.00×10^6 ],
B_p^la = [ 2.47×10^5   2.18×10^4
           8.04        10.3
           3.17×10^2   2.85×10^1
           0           0 ].
LQR theory is used to design the nominal controller with:
E p la = 1 0 0 0 0 0 0 1
such that a desired sideslip angle command and roll angle command are followed. The controller gain matrix K la is obtained using the highlighted augmented formulation along with the weighting matrices Q la = diag [ 100 , 100 , 100 , 100 , 400000 , 2500 ] to penalize x la ( t ) and R la = diag [ 1.25 , 50 ] to penalize u la ( t ) , resulting in the following gain matrix:
K_{la} = \begin{bmatrix} 2.78 \times 10^{2} & 9.08 & 3.62 \times 10^{1} & 3.15 \times 10^{1} & 1.21 \times 10^{2} & 4.37 \times 10^{1} \\ 8.70 \times 10^{1} & 1.52 \times 10^{1} & 2.72 \times 10^{1} & 1.30 & 8.74 \times 10^{1} & 1.51 \end{bmatrix}.
The solution to A_{r_{la}}^T P_{la} + P_{la} A_{r_{la}} + R_{1_{la}} = 0, where A_{r_{la}} ≜ A_{la} − B_{la} K_{la}, is calculated using R_{1_{la}} = diag[1, 1, 1, 1, 100, 100] for both the standard adaptive control design and the proposed controller. For the proposed design, we use Equations (25), (41) and (42) and resort to Equation (44) for enforcing ‖e_{la}(t)‖_{P_{la}} ≤ 0.5. Additionally, note that ξ_min = 10 is selected to satisfy Equation (43), and we choose a = 2. Similar to the previous section, a block diagram is provided in Figure 3 to visualize the control design using the decoupled lateral dynamics to control the overall uncertain system.

5.1.3. Nominal System without Uncertainty

The longitudinal and lateral controllers are augmented and applied to the overall coupled system. We first consider the case with no uncertainty in the system to show the nominal performance of the control designs. Figure 4 shows the nominal control performance. It can also be seen from this figure that the error signals are not identically zero, which is expected due to the coupling effects neglected in the decoupled designs.

5.1.4. Uncertainty in Control Effectiveness and Stability Derivatives

We now consider the case in which the control effectiveness matrix is unknown, along with the stability derivatives C_{m_α} and C_{n_β}. For this purpose, we let Λ = 0.5 I, increase C_{m_α} and decrease C_{n_β}. Figure 5 shows the response with the nominal controller alone, which becomes unstable.
A standard adaptive control design is implemented first. For the standard adaptive controllers, we select the basis functions σ_{lo}(x_{lo}(t)) = [x_{lo}^T(t) K_{lo}^T, α(t)]^T and σ_{la}(x_{la}(t)) = [x_{la}^T(t) K_{la}^T, β(t)]^T for the longitudinal and lateral controllers, respectively. Figure 6 and Figure 7 show the standard adaptive control response. Specifically, Figure 6 shows that for a low learning gain, the transient performance in the sideslip angle and the angle of attack is poor, and the control surface deflection angles exceed practical working limits. To improve the performance, the learning gain is increased, as shown in Figure 7. Both the tracking performance and the control response improve; however, as seen in the bottom part of the figure, the standard adaptive controller is unable to enforce a pre-defined bound on the error.
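The specific update law of Equations (25) and (41) is not restated in this section, but the low-gain versus high-gain trade-off just described can be reproduced with a textbook scalar model reference adaptive controller. Everything below (the scalar plant, reference model, learning gain gamma, and update θ̂' = γ x e) is a standard illustrative sketch, not the paper's aircraft controller.

```python
import math

# Scalar plant x' = theta*_p x + u with unknown theta*_p; reference model
# xr' = -xr + c. The control u = -theta_hat*x - x + c yields the error
# dynamics e' = -e + (theta*_p - theta_hat) x with e = x - xr, and the
# standard Lyapunov-based update is theta_hat' = gamma * x * e.
theta_star, gamma, dt, c = 2.0, 10.0, 1e-3, 1.0
x = xr = theta_hat = 0.0
errs = []
for _ in range(int(30.0 / dt)):          # 30 s of forward-Euler integration
    e = x - xr
    u = -theta_hat * x - x + c
    x += dt * (theta_star * x + u)
    xr += dt * (-xr + c)
    theta_hat += dt * gamma * x * e      # standard MRAC weight update
    errs.append(abs(e))
final_err, peak_err = errs[-1], max(errs)
```

Raising gamma speeds adaptation at the cost of a more aggressive control signal, which loosely mirrors the behavior seen between Figure 6 and Figure 7.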
To improve performance further and enforce a user-defined bound on the error, the proposed adaptive controller is then implemented using the same basis functions as the standard adaptive control design. Figure 8 and Figure 9 show the proposed controller performance using the gain varying control. Specifically, Figure 8 illustrates the superior tracking performance, and Figure 9 shows that the guaranteed bound ‖e(t)‖_P ≤ 0.5 is enforced for both the longitudinal and lateral dynamics.

5.2. Example 2: Wing Rock Dynamics with Nonlinear Reference Model

We now consider the nonlinear dynamical system representing a controlled wing rock dynamics model given by:
\begin{bmatrix} \dot{x}_{p_1}(t) \\ \dot{x}_{p_2}(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_{p_1}(t) \\ x_{p_2}(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \bigl( \Lambda u(t) + \delta_p(x_p(t)) \bigr), \qquad \begin{bmatrix} x_{p_1}(0) \\ x_{p_2}(0) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
where x p 1 represents the roll angle in radians and x p 2 represents the roll rate in radians per second. In Equation (67), δ p ( x p ) represents an uncertainty of the form δ p ( x p ) = α 1 x p 1 + α 2 x p 2 + α 3 | x p 1 | x p 2 + α 4 | x p 2 | x p 2 + α 5 x p 1 3 , where α i , i = 1 , , 5 , are unknown parameters that are derived from the aircraft aerodynamic coefficients. For this numerical example, we set α 1 = 0.5 , α 2 = 1.0 , α 3 = 1.0 , α 4 = 1.0 , α 5 = 0.5 and Λ = 0.5 .
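As a quick sanity check on the uncertainty model, the regressor can be evaluated directly. The helper below uses the parameter magnitudes listed above; the signs of the α_i in the original design may differ, so treat the values as illustrative, and the function name is ours.

```python
def delta_p(x1, x2, alpha=(0.5, 1.0, 1.0, 1.0, 0.5)):
    """Wing rock uncertainty delta_p(x_p) = a1*x1 + a2*x2 + a3*|x1|*x2
    + a4*|x2|*x2 + a5*x1**3, with x1 the roll angle (rad) and x2 the
    roll rate (rad/s); parameter magnitudes as listed in the text."""
    a1, a2, a3, a4, a5 = alpha
    return a1 * x1 + a2 * x2 + a3 * abs(x1) * x2 + a4 * abs(x2) * x2 + a5 * x1 ** 3
```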
Note that for this example, the wing rock dynamics are linear, such that f_p(x_p(t)) in Equation (45) is written as A_p x_p(t). As a result, we let E_p = [1, 0] such that the roll angle command is followed and use LQR theory with the augmented formulation (Equations (5) and (6)), along with the weighting matrices Q = diag[50, 1, 100] and R = 1, to obtain the gain matrix K = [12.30, 5.06, 10.0]. In addition, we adopt the same nominal control structure to limit pilot authority as in [13] and design the nonlinear reference model as:
\dot{x}_r(t) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} x_r(t) - \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} k(x_r(t)) + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} c(t), \qquad x_r(0) = 0,
with k ( x r ( t ) ) = K [ x r 1 ( t ) , x r 2 ( t ) , Φ ( x r ( t ) ) x r 3 ( t ) ] T , c ( t ) = c d ( t ) Φ ( x r ( t ) ) and:
\Phi(x_r(t)) = \tanh \bigl( 5 \, \bigl| \, | x_{r_1}(t) | - 2 \, \bigr| \bigr).
Note that c_d(t) is a desired command applied by the pilot and Φ(x_r(t)) is a nonlinear function that limits the pilot authority by constraining the absolute value of the roll angle to remain less than or equal to two. Motivated by the structure of the nonlinear reference model, the feedback linearization term is designed as:
v(\cdot) = -K e(t) + K [x_1(t), x_2(t), \Phi(x(t)) x_3(t)]^T - K [x_{r_1}(t), x_{r_2}(t), \Phi(x_r(t)) x_{r_3}(t)]^T,
such that Equation (53) holds. Using this, we select the basis function as:
\sigma(\cdot) = [x_{p_1}, x_{p_2}, |x_{p_1}| x_{p_2}, |x_{p_2}| x_{p_2}, x_{p_1}^3, x^T(t) K^T, v^T(\cdot)]^T,
and we set R_1 = I_{3×3} for both the standard adaptive controller and the proposed adaptive controller. Furthermore, for the proposed design, we use Equations (55), (41) and (56) and resort to Equation (44) for enforcing ‖e(t)‖_P ≤ 0.5. Additionally, note that ξ_min = 1 is selected to satisfy Equation (43), and we choose a = 2.
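The pilot-authority limiter used in this design can be sketched directly, reading it as Φ(x_r) = tanh(5 | |x_{r1}| − 2 |): the function vanishes exactly when the roll angle magnitude reaches two (cutting the pilot command) and saturates near one away from that boundary. The function name below is ours, and the formula is our reading of the limiter above.

```python
import math

def pilot_authority(x_r1):
    """Authority-limiting function Phi = tanh(5 * | |x_r1| - 2 |):
    zero when the roll angle magnitude equals 2 rad, and close to 1
    when the roll angle is well inside the allowed region."""
    return math.tanh(5.0 * abs(abs(x_r1) - 2.0))
```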
Figure 10 shows the standard adaptive control response. It can be seen from the figure that even though the roll angle command is reasonably followed, the roll rate and the control response have undesirable high-frequency content, which can cause instability. In addition, as seen in the bottom part of the figure, the standard adaptive controller is unable to enforce a pre-defined bound on the error.
To improve performance and enforce a user-defined bound on the error, the proposed adaptive controller is then implemented. Figure 11 and Figure 12 show the proposed controller performance using the gain varying control. It is clear from Figure 11 that the proposed adaptive controller achieves superior command-following performance, and Figure 12 shows that the system error remains within the a priori given, user-defined performance bound.

6. Conclusions

We proposed a direct uncertainty minimization approach that introduces modification terms in the adaptive control law and the update law, constructed through a gradient minimization procedure, to suppress the effect of system uncertainty on the transient response and thereby improve closed-loop performance. In addition, a varying gain on the modification term was shown to keep the system error approximately within a priori given, user-defined performance bounds. The approach was then generalized to incorporate a nonlinear reference model, which better captures the desired closed-loop behavior for a class of nonlinear uncertain dynamical systems. Two illustrative numerical examples demonstrated the efficacy of the proposed adaptive control framework. Future research will include generalizations of the proposed framework to output feedback adaptive control, as well as applications to large-scale dynamical systems.

Acknowledgments

This research was supported by the Air Force Research Laboratory Aerospace Systems Directorate under the Universal Technology Corporation Grant 15-S2606-04-C27.

Author Contributions

The research documented in this paper was conducted by Benjamin C. Gruenwald, with periodic support and guidance in theory development and simulations from Tansel Yucelen and Jonathan A. Muse.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Duarte, M.A.; Narendra, K.S. Combined direct and indirect approach to adaptive control. IEEE Trans. Autom. Control 1989, 34, 1071–1075.
2. Slotine, J.J.E.; Li, W. Composite adaptive control of robot manipulators. Automatica 1989, 25, 509–519.
3. Lavretsky, E. Combined/composite model reference adaptive control. IEEE Trans. Autom. Control 2009, 54, 2692.
4. Volyanskyy, K.Y.; Calise, A.J.; Yang, B.J. A novel Q-modification term for adaptive control. In Proceedings of the American Control Conference, Minneapolis, MN, USA, 14–16 June 2006.
5. Volyanskyy, K.Y.; Calise, A.J.; Yang, B.J.; Lavretsky, E. An error minimization method in adaptive control. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Keystone, CO, USA, 21–24 August 2006.
6. Volyanskyy, K.Y.; Haddad, W.M.; Calise, A.J. A new neuroadaptive control architecture for nonlinear uncertain dynamical systems: Beyond σ- and e-modifications. IEEE Trans. Neural Netw. 2009, 20, 1707–1723.
7. Chowdhary, G.; Johnson, E.N. Theory and flight-test validation of a concurrent-learning adaptive controller. J. Guid. Control Dyn. 2011, 34, 592–607.
8. Chowdhary, G.; Yucelen, T.; Mühlegg, M.; Johnson, E.N. Concurrent learning adaptive control of linear systems with exponentially convergent bounds. Int. J. Adapt. Control Signal Process. 2013, 27, 280–301.
9. Yucelen, T.; Johnson, E.N. Artificial basis functions in adaptive control for transient performance improvement. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Boston, MA, USA, 19–22 August 2013.
10. Gruenwald, B.; Yucelen, T.; Fravolini, M. Performance oriented adaptive architectures with guaranteed bounds. In Proceedings of the AIAA Infotech@Aerospace Conference, Kissimmee, FL, USA, 5–9 January 2015.
11. Gruenwald, B.; Yucelen, T. On transient performance improvement of adaptive control architectures. Int. J. Control 2015, 88, 2305–2315.
12. Yucelen, T.; Gruenwald, B.; Muse, J.; De La Torre, G. Adaptive control with nonlinear reference systems. In Proceedings of the American Control Conference, Chicago, IL, USA, 1–3 July 2015.
13. Arabi, E.; Gruenwald, B.C.; Yucelen, T.; Nguyen, N.T. A set-theoretic model reference adaptive control architecture for disturbance rejection and uncertainty suppression with strict performance guarantees. Int. J. Control 2017, in press.
14. Narendra, K.S.; Annaswamy, A.M. Stable Adaptive Systems; Courier Corporation: Mineola, NY, USA, 2012.
15. Lavretsky, E.; Wise, K.A. Robust Adaptive Control; Springer: New York, NY, USA, 2013.
16. Ioannou, P.A.; Sun, J. Robust Adaptive Control; Courier Corporation: Mineola, NY, USA, 2012.
17. Khalil, H.K. Nonlinear Systems; Prentice Hall: Upper Saddle River, NJ, USA, 1996.
18. Lewis, F.L.; Liu, K.; Yesildirek, A. Neural net robot controller with guaranteed tracking performance. IEEE Trans. Neural Netw. 1995, 6, 703–715.
19. Lewis, F.L.; Yesildirek, A.; Liu, K. Multilayer neural-net robot controller with guaranteed tracking performance. IEEE Trans. Neural Netw. 1996, 7, 388–399.
20. Pomet, J.B.; Praly, L. Adaptive nonlinear regulation: Estimation from the Lyapunov equation. IEEE Trans. Autom. Control 1992, 37, 729–740.
21. Krstic, M.; Kanellakopoulos, I.; Kokotovic, P.V. Nonlinear and Adaptive Control Design; Wiley: New York, NY, USA, 1995.
22. Haddad, W.M.; Chellaboina, V. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach; Princeton University Press: Princeton, NJ, USA, 2008.
Figure 1. A candidate f ( e ) for Equation (44).
Figure 2. Block diagram of separated longitudinal control design.
Figure 3. Block diagram of separated lateral control design.
Figure 4. Nominal controller performance without uncertainty.
Figure 5. Nominal controller performance with uncertainty in Λ, C m α and C n β .
Figure 6. Standard adaptive controller performance with uncertainty in Λ, C m α and C n β ( Γ lo = I 2 × 2 and Γ la = I 3 × 3 ).
Figure 7. Standard adaptive controller performance with uncertainty in Λ, C m α and C n β ( Γ lo = 100 I 2 × 2 and Γ la = 100 I 3 × 3 ).
Figure 8. Proposed gain varying adaptive control performance with uncertainty in Λ, C m α and C n β ( Γ lo = I 2 × 2 and Γ la = diag [ 0.1 , 1 , 1 ] , ξ 0 = 10 and a = 2 ).
Figure 9. System error bounds and adaptation gain for Figure 8.
Figure 10. Standard adaptive controller performance ( γ = 1 ).
Figure 11. Proposed gain varying adaptive control performance ( γ = 1 and γ ξ = 1 , ξ 0 = 1 and a = 2 ).
Figure 12. System error bounds and adaptation gain for Figure 11.
