Article

One New Property of a Class of Linear Time-Optimal Control Problems

Department of Control Systems, Faculty of Electronics and Automatics, Technical University—Sofia, Branch Plovdiv, 4000 Plovdiv, Bulgaria
Mathematics 2023, 11(16), 3486; https://doi.org/10.3390/math11163486
Submission received: 10 June 2023 / Revised: 9 August 2023 / Accepted: 10 August 2023 / Published: 11 August 2023
(This article belongs to the Special Issue Mathematical Modelling, Simulation, and Optimal Control)

Abstract

The following paper deals with a new property of linear time-optimal control problems for systems with real eigenvalues. This property unveils the possibility of synthesizing the time-optimal control without describing the switching hyper-surfaces. Furthermore, the novel technique offers an alternative solution to the classic example of the time-optimal control of a double integrator system.

1. Introduction

Since the first studies of Feldbaum [1,2] and Pontryagin's maximum principle [3], the theory of the linear time-optimal control problem has matured: the main theoretical issues have been thoroughly studied and answered [4,5,6,7,8,9]. This established state of knowledge provides the foundation for further exploration and advancement. Achieving a transition from one system state to another in minimum time with maximum utilization of the available system resources, within the constraints on both the control inputs and the state-space variables, and in the form of a synthesis, still presents an attractive topic for further research.
In synchrony with the above mentioned, the authors in the recently published book [10] state, “there has been tremendous progress in numerical methods in optimal control over the past fifteen years that has led to the solutions of some specific and very difficult problems” and, in particular, the introduction of geometrical methods, more specifically—“a first illustration of the power of geometric methods that go well beyond the conditions of the maximum principle and lead to deep results about the structure of optimal solutions”. The geometric approach to the optimal control of a double integrator is also discussed in [11,12].
In a recent publication on the topic [13], the authors say that, “this paper has proposed a global time optimal control law for triple integrator with input saturation and full state constraints”, and in terms of the results, “An analytical state feedback form control law has been synthesized based on the switching surfaces and curves”.
The authors also mention “there are plenty of researches trying to solve the problem analytically, while there is still no complete time optimal analytical solution for systems higher than second order”.
This is noteworthy considering Pontryagin's original sources. In reference [3] (Chapter 3, § 20, § 21, Example 3), the author and his colleagues describe the solution of the linear time-optimal control problem for a system fulfilling the condition of normality, with real non-positive eigenvalues and one control input, as follows.
The time-optimal control for such a linear system has at most $n$ (the order of the system) intervals of constancy, i.e., the number of switchings is at most $(n-1)$; the state-space of the system is separated into manifolds $M_n, M_{n-1}, \dots, M_1$ of dimensions, respectively, $1, 2, \dots, n$. The manifold $M_n$ consists of all the points for which the time-optimal control has one interval of constancy. Supposing $|u| \le 1$, the trajectory of the system under the control $+1$ ending at the state-space origin is defined as $M_n^+$, while the trajectory ending at the origin under the control $-1$ is defined as $M_n^-$. Together, $M_n^+$ and $M_n^-$ compose the switching curve $M_n$. The final stage of the time-optimal process is a movement along $M_n^+$ or $M_n^-$. All the trajectories of the system under the control $+1$ ending at a point of the curve $M_n^-$ fill the surface $M_{n-1}^+$. Analogously, all the trajectories under the control $-1$ ending at a point of the curve $M_n^+$ fill the surface $M_{n-1}^-$. Combining $M_{n-1}^+$ and $M_{n-1}^-$, we obtain the switching surface $M_{n-1}$, so the last two stages of each time-optimal process lie in $M_{n-1}$. The rest of the manifolds are constructed in the same manner. The manifold $M_i$ is of dimension $(n-i+1)$; $M_{i+1}$ lies entirely in $M_i$ and divides it into two areas $M_i^+$ and $M_i^-$; $M_i^+$ consists of all the trajectories under the control $+1$ ending at a point of $M_{i+1}^-$, while $M_i^-$ consists of all the trajectories under the control $-1$ ending at a point of $M_{i+1}^+$. The last manifold $M_1$ coincides with the whole state-space of the system. The synthesizing function is depicted as:
$$u(x) = \begin{cases} +1 & \text{in all areas } M_i^+, \\ -1 & \text{in all areas } M_i^-. \end{cases}$$
So, in order to synthesize the time-optimal control for a given system fulfilling the above conditions, one needs to describe properly the switching surfaces $M_i^+$ and $M_i^-$.
Despite the progress in the field, it remains appealing to find a new solution of the problem discussed above by Pontryagin and others that does not require directly describing the respective manifolds $M_i^+$ and $M_i^-$; this calls for a deeper investigation of the state-space geometric properties of this time-optimal control problem.
A novel method for synthesizing the time-optimal control for a class of controllable linear systems of any order with real non-positive simple eigenvalues and one input is developed and further explored in the dissertation [14] and the following papers [15,16,17]. It is founded on some new state-space properties of the considered linear time-optimal control problem and the exclusion of switching surfaces description serves as its main advantage. The study [18] illustrates an example of a possible application of the method in practice.
Therefore, it is worthwhile to try to expand the developed synthesis of the linear time-optimal control without the description of switching surfaces and curves to the more general case described by Pontryagin and colleagues: a controllable linear system with one input and real non-positive eigenvalues, not necessarily simple ones.
The current paper is structured as follows. In Section 2, a new property of the linear time-optimal control problem is derived. In Section 3, the author compares the classic solution of the time-optimal control problem of a double integrator to the alternative suggested by the new property. Section 4 provides a detailed discussion of the obtained results.

2. Formulation of the Problem and Solution

Let us consider the following linear time-optimal control problem of order $n$, $n \ge 2$. The system is described by the equations:

$$\dot{x}_i = \sum_{j=1}^{n-1} a_{ij} x_j + b_i u, \quad i = 1, 2, \dots, n-1, \qquad \dot{x}_n = \sum_{j=1}^{n-1} a_{nj} x_j + \lambda_n x_n + b_n u. \tag{1}$$

Let us suppose it is controllable and possesses real non-positive eigenvalues. It should be mentioned that every normal system with real eigenvalues can be transformed into this form.
The initial state at the moment $t_0 = 0$ of the system (1) is

$$x_0 = (x_{10}\ \dots\ x_{(n-1)0}\ x_{n0})^T \tag{2}$$

and the target state at the moment $t_f$ is the origin of the system's state-space, where $t_f$ is unspecified:

$$x(t_f) = x_f = (\underbrace{0\ 0\ \dots\ 0}_{n})^T. \tag{3}$$
The admissible control $u(t)$ is a piecewise continuous function that takes its values in the range

$$-u_0 \le u(t) \le u_0, \quad u_0 = \mathrm{const} > 0, \tag{4}$$

which is continuous on the boundaries of the set of allowed values (4), and with regard to the points of discontinuity $\tau$ we have

$$u(\tau) = u(\tau + 0). \tag{5}$$
The problem is to find an admissible control $u(x)$ which transfers the system (1) from its initial state (2) to the final state (3) in minimum time, i.e., minimizing the performance index

$$J = t_f \to \min. \tag{6}$$

Let us refer to this problem as "Problem P$(n)$".
The form of the equations of the system (1) allows the introduction of the linear sub-system of order $(n-1)$

$$\dot{x}_{n-1} = A_{n-1} x_{n-1} + B_{n-1} u, \quad y_{n-1} = C_{n-1} x_{n-1} \tag{7}$$

with the state-space vector

$$x_{n-1} = (x_1\ \dots\ x_{n-1})^T \tag{8}$$

and scalar output $y_{n-1}$, where the matrices $A_{n-1}$, $B_{n-1}$, $C_{n-1}$ are, respectively,

$$A_{n-1} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1,n-1} \\ a_{21} & a_{22} & \cdots & a_{2,n-1} \\ \vdots & \vdots & & \vdots \\ a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n-1} \end{pmatrix}, \quad B_{n-1} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-1} \end{pmatrix}, \quad C_{n-1} = (a_{n1}\ a_{n2}\ \dots\ a_{n,n-1}). \tag{9}$$

Thus, the system (1) can be represented by means of (7) in the following form, which is also depicted in Figure 1:

$$\dot{x}_{n-1} = A_{n-1} x_{n-1} + B_{n-1} u, \quad y_{n-1} = C_{n-1} x_{n-1}, \quad \dot{x}_n = \lambda_n x_n + y_{n-1} + b_n u. \tag{10}$$
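To make the block partition in (9) and (10) concrete, the following Python sketch (NumPy assumed; the helper name `partition_system` is illustrative, not from the paper) splits a system of the form (1) into the sub-system matrices and the last-state parameters:

```python
import numpy as np

def partition_system(A, b):
    """Split an n-th order system of form (1) into the (n-1)-order
    sub-system (7) and the last-state equation, as in (9) and (10).
    Assumes the first n-1 rows of A do not depend on x_n."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).ravel()
    A_sub = A[:-1, :-1]      # A_{n-1}
    B_sub = b[:-1]           # B_{n-1}
    C_sub = A[-1, :-1]       # C_{n-1}: couplings a_{n1}, ..., a_{n,n-1}
    lam_n = A[-1, -1]        # lambda_n
    b_n = b[-1]
    return A_sub, B_sub, C_sub, lam_n, b_n

# Double integrator in the coordinates used in the example of Section 3:
A = [[0.0, 0.0],
     [1.0, 0.0]]
b = [1.0, 0.0]
A1, B1, C1, lam2, b2 = partition_system(A, b)
print(A1.tolist(), B1.tolist(), C1.tolist(), lam2, b2)
# [[0.0]] [1.0] [1.0] 0.0 0.0
```

For the double integrator this reproduces the scalar sub-system data that appear later in (46).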
With regard to the sub-system (7), its initial state may be represented by $x_{(n-1)0}$ (11), and the relationship between the initial states of the system and the sub-system may be described as (12):

$$x_{(n-1)0} = (x_{10}\ \dots\ x_{(n-1)0})^T, \tag{11}$$

$$x_0 = \begin{pmatrix} x_{(n-1)0} \\ x_{n0} \end{pmatrix}. \tag{12}$$
Let us formulate the following linear time-optimal control problem of order $(n-1)$, which we shall call "Problem P$(n-1)$". The system is defined by Equation (7). The initial state of the system (7) at the moment $t_0 = 0$ is (11), and the target state at the moment $t_{(n-1)f}$, which one should bear in mind is not initially specified, is the origin of the $(n-1)$-dimensional state-space of the system (7):

$$x_{n-1}(t_{(n-1)f}) = x_{(n-1)f} = (\underbrace{0\ \dots\ 0}_{n-1})^T. \tag{13}$$

The admissible control $u(t)$ is a piecewise continuous function that takes its values in the range (4), is continuous on the boundaries of the set of allowed values (4), and satisfies (5) at the points of discontinuity $\tau$. Problem P$(n-1)$ consists of synthesizing an admissible control $u(x_{n-1})$ which, on the one hand, transfers the system (7) from its initial state (11) to the final state (13) and, on the other hand, minimizes the performance index

$$J_{n-1} = t_{(n-1)f} \to \min. \tag{14}$$
Let us assume we have found the solution of Problem P$(n-1)$ and denote by $t_{(n-1)f}^{o}$ the optimal time defined as the minimum of (14),

$$t_{(n-1)f}^{o} = \min(J_{n-1}), \tag{15}$$

by $u_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, the optimal control, and by $x_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, the optimal trajectory in the $(n-1)$-dimensional state-space of the system (7), which is described by

$$x_{n-1}^{o}(t) = e^{A_{n-1} t} x_{(n-1)0} + \int_0^t e^{A_{n-1} \tau} B_{n-1} u_{n-1}^{o}(t-\tau)\, d\tau \quad \text{for } t \in [0, t_{(n-1)f}^{o}], \tag{16}$$

$$x_{n-1}^{o}(t_{(n-1)f}^{o}) = (\underbrace{0\ \dots\ 0}_{n-1})^T. \tag{17}$$
Let us denote by $y_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, the scalar output of the system (7) along the optimal vector-function $x_{n-1}^{o}(t)$. Then

$$y_{n-1}^{o}(t) = C_{n-1} x_{n-1}^{o}(t) \quad \text{for } t \in [0, t_{(n-1)f}^{o}], \tag{18}$$

$$y_{n-1}^{o}(t_{(n-1)f}^{o}) = C_{n-1} x_{n-1}^{o}(t_{(n-1)f}^{o}) = 0. \tag{19}$$
Let us define $x_{n0}^1$ (21) as an initial value of the $n$-th coordinate of the state vector $x$ of the system (1) or (10), and consider the trajectory $x^1(t)$ in the $n$-dimensional state-space of Problem P$(n)$ with initial state at the point $x_0^1$ with coordinates (20) and (21), under the optimal control $u_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, of Problem P$(n-1)$:

$$x_0^1 = \begin{pmatrix} x_{(n-1)0} \\ x_{n0}^1 \end{pmatrix}, \tag{20}$$

$$x_{n0}^1 = -\frac{\displaystyle\int_0^{t_{(n-1)f}^{o}} e^{\lambda_n (t_{(n-1)f}^{o} - \tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau}{e^{\lambda_n t_{(n-1)f}^{o}}}. \tag{21}$$
Given the structure of the system (1), the vector-function $x^1(t)$ has the representation (22). According to (16), the first $(n-1)$ components of the vector-function in (22) coincide with the optimal vector-function $x_{n-1}^{o}(t)$ of Problem P$(n-1)$. Regarding the last, $n$-th component of $x^1(t)$ in (22), the function $y_{n-1}(\tau)$ is the scalar output of the system (7), which in this case results from the optimal vector-function $x_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$. Then, in consonance with (18), $y_{n-1}(\tau)$ equals $y_{n-1}^{o}(\tau)$. Thus, we obtain (23) for $x^1(t)$ (22).

$$x^1(t) = \begin{pmatrix} e^{A_{n-1} t} x_{(n-1)0} + \int_0^t e^{A_{n-1} \tau} B_{n-1} u_{n-1}^{o}(t-\tau)\, d\tau \\ e^{\lambda_n t} x_{n0}^1 + \int_0^t e^{\lambda_n (t-\tau)} \left( y_{n-1}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix} \quad \text{for } t \in [0, t_{(n-1)f}^{o}]. \tag{22}$$

$$x^1(t) = \begin{pmatrix} x_{n-1}^{o}(t) \\ e^{\lambda_n t} x_{n0}^1 + \int_0^t e^{\lambda_n (t-\tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix} \quad \text{for } t \in [0, t_{(n-1)f}^{o}]. \tag{23}$$

With regard to $x^1(t)$ (23) at the moment $t = t_{(n-1)f}^{o}$ we obtain

$$x^1(t_{(n-1)f}^{o}) = \begin{pmatrix} x_{n-1}^{o}(t_{(n-1)f}^{o}) \\ e^{\lambda_n t_{(n-1)f}^{o}} x_{n0}^1 + \int_0^{t_{(n-1)f}^{o}} e^{\lambda_n (t_{(n-1)f}^{o} - \tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix}. \tag{24}$$

Then, substituting (17) for $x_{n-1}^{o}(t_{(n-1)f}^{o})$ and (21) for $x_{n0}^1$, the following result is achieved:

$$x^1(t_{(n-1)f}^{o}) = \begin{pmatrix} (\underbrace{0\ \dots\ 0}_{n-1})^T \\ e^{\lambda_n t_{(n-1)f}^{o}} \left( -\dfrac{\int_0^{t_{(n-1)f}^{o}} e^{\lambda_n (t_{(n-1)f}^{o} - \tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau}{e^{\lambda_n t_{(n-1)f}^{o}}} \right) + \int_0^{t_{(n-1)f}^{o}} e^{\lambda_n (t_{(n-1)f}^{o} - \tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix}. \tag{25}$$

$$x^1(t_{(n-1)f}^{o}) = \begin{pmatrix} (\underbrace{0\ \dots\ 0}_{n-1})^T \\ -\int_0^{t_{(n-1)f}^{o}} e^{\lambda_n (t_{(n-1)f}^{o} - \tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau + \int_0^{t_{(n-1)f}^{o}} e^{\lambda_n (t_{(n-1)f}^{o} - \tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix}. \tag{26}$$

$$x^1(t_{(n-1)f}^{o}) = \begin{pmatrix} (\underbrace{0\ \dots\ 0}_{n-1})^T \\ 0 \end{pmatrix} = (\underbrace{0\ 0\ \dots\ 0}_{n})^T. \tag{27}$$
Thus, we obtain that for Problem P$(n)$ the trajectory $x^1(t)$ in the $n$-dimensional state-space of the system (1) or (10), with initial point $x_0^1$ (20), (21), under the optimal control $u_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, of Problem P$(n-1)$, ends at the moment $t = t_{(n-1)f}^{o}$ at the origin of the $n$-dimensional state-space of Problem P$(n)$. Taking into account that the function $u_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, is the optimal control of Problem P$(n-1)$ and is thereby a piecewise constant function of amplitude $u_0$ with at most $(n-2)$ switchings, i.e., at most $(n-1)$ intervals of constancy [7] (Chapter 2, §6, Theorem 2.11, p. 116), one comes to the conclusion that the trajectory $x^1(t)$ lies wholly on the switching hyper-surface of Problem P$(n)$.
Let us now consider the trajectory $x(t)$ (28) in the $n$-dimensional state-space of Problem P$(n)$ whose initial point is the initial state $x_0$ (2) or (12) of Problem P$(n)$, under the optimal control $u_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, of Problem P$(n-1)$. According to (16), the first $(n-1)$ components of the vector-function in (28) coincide with the optimal vector-function $x_{n-1}^{o}(t)$ of Problem P$(n-1)$. With regard to the last component of $x(t)$ in (28), the function $y_{n-1}(\tau)$ is the scalar output of the system (7), which again results from the optimal vector-function $x_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$. In consonance with (18), the function $y_{n-1}(\tau)$ therefore equals $y_{n-1}^{o}(\tau)$ in this case. Thus, we obtain (29) for $x(t)$ (28).

$$x(t) = \begin{pmatrix} e^{A_{n-1} t} x_{(n-1)0} + \int_0^t e^{A_{n-1} \tau} B_{n-1} u_{n-1}^{o}(t-\tau)\, d\tau \\ e^{\lambda_n t} x_{n0} + \int_0^t e^{\lambda_n (t-\tau)} \left( y_{n-1}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix} \quad \text{for } t \in [0, t_{(n-1)f}^{o}]. \tag{28}$$

$$x(t) = \begin{pmatrix} x_{n-1}^{o}(t) \\ e^{\lambda_n t} x_{n0} + \int_0^t e^{\lambda_n (t-\tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix} \quad \text{for } t \in [0, t_{(n-1)f}^{o}]. \tag{29}$$

Let us now consider the difference between the two vector-functions $x(t)$ (29) and $x^1(t)$ (23). We obtain consecutively

$$x(t) - x^1(t) = \begin{pmatrix} x_{n-1}^{o}(t) \\ e^{\lambda_n t} x_{n0} + \int_0^t e^{\lambda_n (t-\tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix} - \begin{pmatrix} x_{n-1}^{o}(t) \\ e^{\lambda_n t} x_{n0}^1 + \int_0^t e^{\lambda_n (t-\tau)} \left( y_{n-1}^{o}(\tau) + b_n u_{n-1}^{o}(\tau) \right) d\tau \end{pmatrix} \quad \text{for } t \in [0, t_{(n-1)f}^{o}], \tag{30}$$

$$x(t) - x^1(t) = \begin{pmatrix} (\underbrace{0\ \dots\ 0}_{n-1})^T \\ e^{\lambda_n t} (x_{n0} - x_{n0}^1) \end{pmatrix} \quad \text{for } t \in [0, t_{(n-1)f}^{o}]. \tag{31}$$
As for the last, $n$-th coordinate $e^{\lambda_n t}(x_{n0} - x_{n0}^1)$, $t \in [0, t_{(n-1)f}^{o}]$, of (31), we can state that:
  • If $x_{n0} = x_{n0}^1$, then the initial state $x_0$ (2) or (12) of Problem P$(n)$ coincides with the point $x_0^1$ with coordinates (20)–(21). As already illustrated, $x_0^1$ is a point of the switching hyper-surface of Problem P$(n)$, and the trajectory with initial point $x_0^1$ under the optimal control $u_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, of Problem P$(n-1)$ lies wholly on the switching hyper-surface of Problem P$(n)$ and ends at the moment $t = t_{(n-1)f}^{o}$ at the origin of the $n$-dimensional state-space of the system (1) or (10) of Problem P$(n)$;
  • If $x_{n0} \ne x_{n0}^1$, then the initial state $x_0$ (2) or (12) of Problem P$(n)$ does not coincide with the point $x_0^1$ with coordinates (20)–(21). The expression $e^{\lambda_n t}(x_{n0} - x_{n0}^1)$, $t \in [0, t_{(n-1)f}^{o}]$, does not change its sign and does not vanish, because $t_{(n-1)f}^{o}$ is a finite time. Thus, the trajectory with initial state $x_0$ (2) or (12) of Problem P$(n)$ under the optimal control $u_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, of Problem P$(n-1)$ lies entirely above or below the switching hyper-surface of Problem P$(n)$, nowhere intersecting it, and ends at the moment $t = t_{(n-1)f}^{o}$ at a point of the coordinate axis $x_n$ different from zero.
Thus, the following theorem has been proven.
Theorem 1.
The trajectory of the system (1) or (10) with initial point $x_0$ (2) under the optimal control $u_{n-1}^{o}(t)$, $t \in [0, t_{(n-1)f}^{o}]$, of Problem P$(n-1)$ either lies wholly on the switching hyper-surface of Problem P$(n)$ and ends at the moment $t = t_{(n-1)f}^{o}$ at the origin of the $n$-dimensional state-space of the system (1) or (10) of Problem P$(n)$, or lies entirely above or below the switching hyper-surface of Problem P$(n)$, nowhere intersecting it, and ends at the moment $t = t_{(n-1)f}^{o}$ at a point of the coordinate axis $x_n$ different from zero.
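For the double integrator treated in Section 3 ($n = 2$, $\lambda_2 = 0$, $b_2 = 0$, $y_1 = x_1$), Theorem 1 can be checked numerically: starting from the point $x_0^1$ built via (21) and applying the optimal control of Problem P$(1)$, the trajectory should end at the origin. A minimal sketch (Euler integration; the step size and tolerances are illustrative choices, not from the paper):

```python
import math

u0 = 1.0
x10 = 3.0                          # first coordinate of the initial state

# Problem P(1) for x1' = u: a single interval of constancy
s = -math.copysign(u0, x10)        # u_1^o(t) = -sign(x10) * u0
T = abs(x10) / u0                  # optimal time t_{1f}^o

# (21) with lambda_2 = 0, b_2 = 0, y_1 = x_1:
# x_{20}^1 = -integral_0^T x_1^o(tau) dtau, where x_1^o(tau) = x10 + s*tau
x20_1 = -(x10 * T + s * T * T / 2.0)

# Integrate x1' = u, x2' = x1 from (x10, x20_1) under u = s
dt = 1e-4
x1, x2 = x10, x20_1
for _ in range(int(round(T / dt))):
    x2 += x1 * dt
    x1 += s * dt
print(abs(x1) < 1e-2, abs(x2) < 1e-2)   # both coordinates end near the origin
```

The small residual is the Euler discretization error; the exact trajectory reaches the origin precisely at $t = t_{1f}^{o}$.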

3. Example

Let us consider the following example of synthesizing the time-optimal control of a double integrator (§ 3, Example: The problem of synthesis, p. 38) [7]; (Chapter 7, Problem 7.1, p. 150) [11,12]. It is worth mentioning that this problem of synthesis, being an established example, has found a place in online optimal control courses on video platforms [19,20,21,22], although such online resources are often volatile and become unavailable after some time. First, the classical synthesis will be presented, and thereafter the synthesis as an expansion and update of the method [14] by the new property.
The system is described by the variables $y$ (position) and $v$ (velocity):

$$\frac{dy}{dt} = v, \quad \frac{dv}{dt} = u. \tag{32}$$

Let the constraints (4), (5) on the admissible control $u$ be

$$-u_0 \le u(t) \le u_0, \quad u_0 = 1. \tag{33}$$

3.1. Classical Synthesis

The switching curve $S_2$ in the phase plane $y$–$v$ is described by

$$S_2 = \gamma^+ \cup \gamma^- \cup \{(0,0)\}, \quad \gamma^+ = \left\{ (y,v): y = \frac{v^2}{2 u_0},\ v < 0 \right\}, \quad \gamma^- = \left\{ (y,v): y = -\frac{v^2}{2 u_0},\ v > 0 \right\}. \tag{34}$$

The two pieces $\gamma^+$ and $\gamma^-$ of the switching curve $S_2$ are the parts of the parabolas representing the phase trajectories going through the origin of the phase plane under the constant control $u = u_0$ or $u = -u_0$, respectively.
The two areas $R^+$ and $R^-$ in the phase plane,

$$R^+ = \left\{ (y,v): y + \operatorname{sign}(v) \frac{v^2}{2 u_0} < 0 \right\}, \quad R^- = \left\{ (y,v): y + \operatorname{sign}(v) \frac{v^2}{2 u_0} > 0 \right\}, \tag{35}$$

below and above the switching curve $S_2$ (34), respectively, are the areas where the optimal control takes the value $u_0$ at the points of $R^+$ and $(-u_0)$ at the points of $R^-$. The areas $R^+$ and $R^-$ as well as the parts $\gamma^+$ and $\gamma^-$ of $S_2$ are shown in Figure 2.
The time-optimal control is synthesized in the form

$$u(y,v) = \begin{cases} 0 & \text{when } (y,v) \equiv (0,0), \\ +u_0 & \text{when } (y,v) \in R^+ \cup \gamma^+, \\ -u_0 & \text{when } (y,v) \in R^- \cup \gamma^-. \end{cases} \tag{36}$$

After substituting (35) for $R^+$ and $R^-$ and (34) for $\gamma^+$ and $\gamma^-$ in (36), the synthesized optimal control appears as

$$u(y,v) = \begin{cases} 0 & \text{when } (y,v) \equiv (0,0), \\ +u_0 & \text{when } \left( y + \operatorname{sign}(v)\dfrac{v^2}{2u_0} < 0 \right) \text{ or } \left( y - \dfrac{v^2}{2u_0} = 0,\ v < 0 \right), \\ -u_0 & \text{when } \left( y + \operatorname{sign}(v)\dfrac{v^2}{2u_0} > 0 \right) \text{ or } \left( y + \dfrac{v^2}{2u_0} = 0,\ v > 0 \right). \end{cases} \tag{37}$$
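The law (37) is straightforward to implement. The sketch below (function name is illustrative) encodes it and drives the state $(y, v) = (10, 0)$ to a neighborhood of the origin by Euler integration, for comparison with the theoretical minimum time $2\sqrt{10} \approx 6.32$:

```python
import math

def u_classic(y, v, u0=1.0):
    """Classical time-optimal feedback (37) for the double integrator."""
    if y == 0.0 and v == 0.0:
        return 0.0
    g = y + (math.copysign(v * v, v) / (2.0 * u0) if v != 0.0 else 0.0)
    if g < 0.0:
        return u0        # region R+ or branch gamma+
    if g > 0.0:
        return -u0       # region R- or branch gamma-
    return u0 if v < 0.0 else -u0   # exactly on the switching curve

y, v, dt, t = 10.0, 0.0, 1e-4, 0.0
while y * y + v * v > 1e-4 and t < 20.0:
    u = u_classic(y, v)
    y, v, t = y + v * dt, v + u * dt, t + dt
print(round(t, 2))   # close to the minimum time 2*sqrt(10) ~ 6.32
```

The bang-bang trajectory switches once on the curve $\gamma^+$ and then slides along it to the origin, so the simulated arrival time is close to the theoretical optimum up to the discretization and stopping tolerance.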

3.2. Synthesis Based on the New Property and the Method [14]

Let us now consider the synthesis in terms of the method developed in [14] and expanded by the new property. One of the founding properties of the method is that the trajectory in the state-space of a time-optimal control problem of higher order is defined by the solution of the problem of lower order, bearing in mind that all the time-optimal control problems of descending order are generated by the problem of the utmost order and form a class of problems. Thus, the method allows a synthesis without the description of the switching hyper-surfaces. As shown here, the new property expands the method to the general case of controllable linear systems with one input and real non-positive eigenvalues; the initial restriction in [14] to simple non-positive eigenvalues is now omitted. The example here considers a system of order two with a double zero eigenvalue, so the synthesis is directly based on the solution of the problem of order one, which also allows the solution of the initial problem to be expressed analytically.
Step 1. First, we make a suitable change of variables via (38) and obtain a representation in $(x_1, x_2)$, which can also be performed by the matrix $T$ (39), (40) via (41).

$$y = x_2, \quad v = x_1. \tag{38}$$

$$T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \tag{39}$$

$$T^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad T^{-1} T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = E. \tag{40}$$

$$\begin{pmatrix} y \\ v \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}. \tag{41}$$

Thus, we obtain (43) and (44) from the initial system (32) through its matrix representation (42).

$$\begin{pmatrix} \dot{y} \\ \dot{v} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} y \\ v \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u. \tag{42}$$

$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = T^{-1} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} T \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + T^{-1} \begin{pmatrix} 0 \\ 1 \end{pmatrix} u. \tag{43}$$

$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} u. \tag{44}$$
The system (44) is now in the form (1). Then, (44) in the form (10) is represented by (45) and (46). The sub-system of (45), (46) is (47), or (48) in scalar form.

$$\dot{x}_1 = A_1 x_1 + B_1 u, \quad y_1 = C_1 x_1, \quad \dot{x}_2 = \lambda_2 x_2 + y_1 + b_2 u, \tag{45}$$

$$x_1 = (x_1), \quad A_1 = (0), \quad B_1 = (b_1) = (1), \quad C_1 = (1), \quad \lambda_2 = 0, \quad b_2 = 0. \tag{46}$$

$$\dot{x}_1 = A_1 x_1 + B_1 u, \quad y_1 = C_1 x_1. \tag{47}$$

$$\dot{x}_1 = 0 \cdot x_1 + 1 \cdot u, \quad y_1 = 1 \cdot x_1. \tag{48}$$
Step 2. Solving Problem P$(1)$. The eigenvalue of $A_1$ is $0$. The optimal control of Problem P$(1)$, $u_1^o(t)$ for $t \in [0, t_{1f}^o]$, is given by (49) and (50) [14] (pp. 50–52), and the optimal time by (51).

$$u_1^o(t) = \begin{cases} 0 & \text{when } x_{10} = 0, \\ s_{11}^o u_0 \ \text{ for } t \in [0, t_{11}^o] & \text{when } x_{10} \ne 0. \end{cases} \tag{49}$$

$$s_{11}^o = -\operatorname{sign}(b_1 x_{10}), \quad t_{11}^o = \frac{|x_{10}|}{|b_1| u_0}. \tag{50}$$

$$\min J_1 = t_{1f}^o = \begin{cases} 0 & \text{when } x_{10} = 0, \\ t_{11}^o & \text{when } x_{10} \ne 0. \end{cases} \tag{51}$$
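The solution (49)–(51) is easy to express in code. A small sketch (the helper name `solve_P1` is illustrative):

```python
import math

def solve_P1(x10, b1=1.0, u0=1.0):
    """Solution (49)-(51) of Problem P(1) for x1' = b1*u.
    Returns the constant control value and the optimal time."""
    if x10 == 0.0:
        return 0.0, 0.0                      # u_1^o = 0, t_1f^o = 0
    s11 = -math.copysign(1.0, b1 * x10)      # s_11^o = -sign(b1 * x10)
    t11 = abs(x10) / (abs(b1) * u0)          # t_11^o
    return s11 * u0, t11

print(solve_P1(10.0))           # (-1.0, 10.0): drive x1 = 10 to 0 with u = -1
print(solve_P1(-4.0, b1=2.0))   # (1.0, 2.0)
```

The control value is constant over the single interval of constancy, as required for a first-order problem.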
Step 3. Calculating the value of the variable $x_{2w}$. The variable $x_{kw}$ is defined in [14] (pp. 39–40), [15] (p. 320), [16] (p. 41); in the expanded class of time-optimal control problems considered here, at $k = n$ it represents the $n$-th coordinate of the vector $x(t)$ (29) at the moment $t = t_{(n-1)f}^o$. In the case $n = 2$, the variable $x_{2w}$ is (52). With regard to the system (47) or (48) of Problem P$(1)$, the variable $x_{2w}$ (52) becomes (53), and after simplifying, (54) and (55).

$$x_{2w} = e^{\lambda_2 t_{1f}^o} x_{20} + \int_0^{t_{1f}^o} e^{\lambda_2 (t_{1f}^o - \tau)} \left( y_1^o(\tau) + b_2 u_1^o(\tau) \right) d\tau. \tag{52}$$

$$x_{2w} = x_{20} + x_{10} t_{11}^o + \frac{b_1 s_{11}^o u_0}{2} (t_{11}^o)^2. \tag{53}$$

$$x_{2w} = x_{20} + x_{10} \frac{|x_{10}|}{|b_1| u_0} - \operatorname{sign}(b_1 x_{10}) \frac{x_{10}^2}{2 b_1 u_0}. \tag{54}$$

$$x_{2w} = x_{20} + \operatorname{sign}(x_{10}) \frac{x_{10}^2}{2 |b_1| u_0}. \tag{55}$$
Step 4. Applying the theorem for synthesizing the optimal function in the initial state [14] (Theorem 3.2, pp. 40–43), [15] (Theorem 3, p. 320), [16] (Theorem 3, p. 41). According to this theorem and its corollaries, the time-optimal control in the initial state of Problem P$(2)$ is (56).

$$u^o(0) = u^o(x_{10}, x_{20}) = \begin{cases} -u_0 & \text{when } x_2^+ x_{2w} > 0, \\ u_1^o(0) & \text{when } x_2^+ x_{2w} = 0, \\ +u_0 & \text{when } x_2^+ x_{2w} < 0. \end{cases} \tag{56}$$
The variable $x_k^+$ (respectively, $x_2^+$ in (56)) is a term introduced in [14] (p. 38) and [15] (pp. 319–320); it defines the relationship between the points on the axis $x_k$ of the state-space of the system of Problem P$(k)$ from the considered class of problems and the switching hyper-surface of the same Problem P$(k)$. The value of the variable $x_k^+$ is determined by a procedure called "axes initialization" [14] (Chapter 3, Section 3.3, pp. 60–88), [16] (pp. 41–45).
With regard to the example,

$$x_2^+ = 1. \tag{57}$$

Hence, all the points of the negative semi-axis $Ox_2$ are above the switching curve of Problem P$(2)$ and the optimal control value for them is $+u_0$, while all the points of the positive semi-axis $Ox_2$ are below the switching curve of Problem P$(2)$ and the optimal control value for them is $-u_0$.
Thus, after substituting (55) for $x_{2w}$ and (57) for $x_2^+$, taking into consideration the initial state $(x_{10}, x_{20})$, we obtain from (56)

$$u^o(0) = u^o(x_{10}, x_{20}) = \begin{cases} -u_0 & \text{when } \left( x_{20} + \operatorname{sign}(x_{10}) \dfrac{x_{10}^2}{2 |b_1| u_0} \right) > 0, \\ u_1^o(0) & \text{when } \left( x_{20} + \operatorname{sign}(x_{10}) \dfrac{x_{10}^2}{2 |b_1| u_0} \right) = 0, \\ +u_0 & \text{when } \left( x_{20} + \operatorname{sign}(x_{10}) \dfrac{x_{10}^2}{2 |b_1| u_0} \right) < 0. \end{cases} \tag{58}$$
So, the synthesized optimal function with regard to a state $(x_1, x_2)$ is

$$u^o(x_1, x_2) = \begin{cases} -u_0 & \text{when } \left( x_2 + \operatorname{sign}(x_1) \dfrac{x_1^2}{2 |b_1| u_0} \right) > 0, \\ -\operatorname{sign}(b_1 x_1) u_0 & \text{when } \left( x_2 + \operatorname{sign}(x_1) \dfrac{x_1^2}{2 |b_1| u_0} \right) = 0, \\ +u_0 & \text{when } \left( x_2 + \operatorname{sign}(x_1) \dfrac{x_1^2}{2 |b_1| u_0} \right) < 0. \end{cases} \tag{59}$$
Taking into account $b_1 = 1$ according to (46), (59) becomes

$$u^o(x_1, x_2) = \begin{cases} +u_0 & \text{when } \left( x_2 + \operatorname{sign}(x_1) \dfrac{x_1^2}{2 u_0} \right) < 0, \\ -\operatorname{sign}(x_1) u_0 & \text{when } \left( x_2 + \operatorname{sign}(x_1) \dfrac{x_1^2}{2 u_0} \right) = 0, \\ -u_0 & \text{when } \left( x_2 + \operatorname{sign}(x_1) \dfrac{x_1^2}{2 u_0} \right) > 0. \end{cases} \tag{60}$$
Bearing in mind the relation (38) or (41) between $(y, v)$ and $(x_1, x_2)$, one can easily verify that the analytical expression of the optimal control (60) synthesized here is identical to the expression (37) obtained by the classical synthesis.
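This identity can also be spot-checked numerically. The sketch below implements both laws (function names are illustrative) and compares them over a grid of states using the relation (38), $y = x_2$, $v = x_1$:

```python
import math

def u_new(x1, x2, u0=1.0):
    """Synthesized control (60) in the (x1, x2) coordinates."""
    g = x2 + (math.copysign(x1 * x1, x1) / (2.0 * u0) if x1 != 0.0 else 0.0)
    if g < 0.0:
        return u0
    if g > 0.0:
        return -u0
    return -math.copysign(u0, x1) if x1 != 0.0 else 0.0

def u_classic(y, v, u0=1.0):
    """Classical control (37) in the (y, v) phase plane."""
    if y == 0.0 and v == 0.0:
        return 0.0
    g = y + (math.copysign(v * v, v) / (2.0 * u0) if v != 0.0 else 0.0)
    if g < 0.0:
        return u0
    if g > 0.0:
        return -u0
    return u0 if v < 0.0 else -u0

# Compare the two laws on a grid, including points on the switching curve
pts = [(i / 4.0, j / 4.0) for i in range(-8, 9) for j in range(-8, 9)]
print(all(u_new(x1, x2) == u_classic(x2, x1) for x1, x2 in pts))   # True
```

The grid includes on-curve points such as $(x_1, x_2) = (1, -0.5)$, where both laws return $-u_0$.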

3.3. Simulation Results

For instance, let us consider the following two initial states:

$$(y_0, v_0) = (10, 0), \tag{61}$$

$$(y_0, v_0) = (-10, 0). \tag{62}$$

The corresponding initial states in the state-space $(x_1, x_2)$ of the system (44) are, respectively,

$$(x_{10}, x_{20}) = (0, 10), \tag{63}$$

$$(x_{10}, x_{20}) = (0, -10). \tag{64}$$
In Step 2, according to (49) and (50) and with regard to (63), we obtain

$$s_{11}^o = 0, \quad t_{11}^o = 0, \quad t_{1f}^o = 0, \quad u_1^o(t) = 0. \tag{65}$$

In Step 3, with regard to $x_{2w}$ according to (53), we obtain

$$x_{2w} = x_{20} = 10. \tag{66}$$

Thus, in Step 4, in reference to (56) and (57), the result for the time-optimal control in the initial state (63) is

$$u^o(0) = u^o(0, 10) = -u_0 = -1. \tag{67}$$

Analogously, for the initial state (64), Step 2 again yields (65), but in terms of $x_{2w}$ the result is

$$x_{2w} = x_{20} = -10, \tag{68}$$

which leads to

$$u^o(0) = u^o(0, -10) = u_0 = 1. \tag{69}$$
Figure 3 shows the near time-optimal processes with an accuracy of $\varepsilon_r = 0.001$ for the considered initial states, while the trajectories in the phase plane $y$–$v$ of the system (32) are shown in Figure 4a. The blue and red phase trajectories correspond to the initial states $(y_0, v_0) = (10, 0)$ and $(y_0, v_0) = (-10, 0)$, respectively. The near time-optimal trajectories from the corresponding initial states $(x_{10}, x_{20}) = (0, 10)$ and $(x_{10}, x_{20}) = (0, -10)$ in the state-space $(x_1, x_2)$ of (44) are represented in the phase plane $x_1$–$x_2$ in Figure 4b; the blue trajectory concerns the state $(x_{10}, x_{20}) = (0, 10)$ and the red one $(x_{10}, x_{20}) = (0, -10)$. Converting the trajectories shown in Figure 4b by the relation (41) returns the identical result shown in Figure 4a.

4. Discussion

If the assumption of real non-positive eigenvalues of the system, in particular the constraints on the eigenvalues of the sub-system (7), is omitted, then, following the same idea and derivation technique, it turns out that the trajectory of the system with the initial state $x_0$ (2), obtained under the optimal control of Problem P$(n-1)$, either coincides with the trajectory with the initial point $x_0^1$ with coordinates (20) and (21), or lies completely below or above the latter in the vertical direction determined by the axis $x_n$ and ends at a point on the axis $x_n$ different from the coordinate origin.
How this result can serve the idea of synthesis is a matter of separate research. Without considering the spectral structure of the system matrix, the number of switchings or intervals of constancy, although finite, is not limited by the order of the system. At first glance, it might be appropriate to look at a certain area around the coordinate origin, if we do not deviate from the idea of the approach.
However, taking into account that the eigenvalues of the system are real non-positive, then on the basis of the theorem for the number of intervals of constancy [7] (Chapter 2, §6, Theorem 2.11, p. 116), we obtain that the initial point $x_0$ of Problem P$(n)$ and the end point of the obtained trajectory located on the axis $x_n$ have the same relationship to the switching surface.
In [14,15,16], a novel property of the state-space of the system has been defined, in particular that the positive and negative parts of the coordinate axes lie outside the switching hyper-surface on opposite sides of it, and the optimal control for the points of these axes has exactly $n$ intervals of constancy.
Then, finding the optimal control at the initial point $x_0$ of Problem P$(n)$ becomes a significantly easier task, because it only requires solving the easier Problem P$(n-1)$, which significantly reduces the computational load, together with knowledge of the relation of the positive or negative part of the axis $x_n$ to the switching hyper-surface. The latter can again be obtained by solving the not so difficult problem of lower order, but under specific initial conditions [14,15,16]. These data can be retrieved in advance, and the process is called "axes initialization". Relying on a straightforward geometrical concept, this advantage of the approach is of significant benefit when solving high-order problems by immersing the initial problem in a class of problems P$(n)$, P$(n-1)$, …, P$(1)$ and returning in reverse order to the initial Problem P$(n)$.
Besides the property proven here, other properties of the problem in its expanded setting still need to be rigorously proven. In [18], the author presents several interesting results of numerical experiments for near time-optimal control of a scanning lidar system based on the described method. Furthermore, the numerical aspects of the currently developed technique imply a close connection with linear programming.

Funding

This study is supported by the European Regional Development Fund within the OP “Science and Education for Smart Growth 2014–2020”, Project Competence Centre “Smart Mechatronic, Eco-And Energy Saving Systems And Technologies”, № BG05M2OP001-1.002-0023.

Data Availability Statement

All relevant data are within the manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Feldbaum, A.A. The simplest relay automatic control systems. Autom. Remote Control 1949, X, 5. (In Russian)
  2. Feldbaum, A.A. Optimal processes in automatic control systems. Autom. Remote Control 1953, XIV, 6. (In Russian)
  3. Pontryagin, L.S.; Boltyanskii, V.G.; Gamkrelidze, R.V.; Mischenko, E.F. The Mathematical Theory of Optimal Processes; Pergamon Press: Oxford, UK, 1964.
  4. Athans, M.; Falb, P.L. Optimal Control: An Introduction to the Theory and Its Applications; McGraw-Hill: New York, NY, USA, 1966.
  5. Lee, E.B.; Markus, L. Foundations of Optimal Control Theory; Wiley & Sons: Hoboken, NJ, USA, 1967.
  6. Bryson, A.E.; Ho, Y.C. Applied Optimal Control; Blaisdell Publishing Company: Waltham, MA, USA, 1969.
  7. Boltyanskii, V.G. Mathematical Methods of Optimal Control; Nauka: Moscow, Russia, 1969. (In Russian)
  8. Leitmann, G. The Calculus of Variations and Optimal Control; Plenum Press: New York, NY, USA, 1981.
  9. Pinch, E.R. Optimal Control and the Calculus of Variations; Oxford University Press: Oxford, UK, 1993.
  10. Schättler, H.; Ledzewicz, U. Geometric Optimal Control; Springer: New York, NY, USA, 2012; ISBN 978-1-4614-3834-2.
  11. Romano, M.; Curti, F. Time-optimal control of linear time invariant systems between two arbitrary states. Automatica 2020, 120, 109151.
  12. Locatelli, A. Optimal Control of a Double Integrator; Studies in Systems, Decision and Control, Volume 68; Springer: Cham, Switzerland, 2017; ISBN 978-3-319-42126-1.
  13. He, S.; Hu, C.; Zhu, Y.; Tomizuka, M. Time optimal control of triple integrator with input saturation and full state constraints. Automatica 2020, 122, 109240.
  14. Penev, B.G. A Method for Synthesis of Time-Optimal Control of Any Order for a Class of Linear Problems for Time-Optimal Control. Ph.D. Dissertation, Technical University of Sofia, Sofia, Bulgaria, 1999. (In Bulgarian). Available online: https://www.researchgate.net/publication/337290119_A_Method_for_Synthesis_of_Time-Optimal_Control_of_Any_Order_for_a_Class_of_Linear_Problems_for_Time-Optimal_Control_in_Bulgarian_Metod_za_sintez_na_optimalno_po_brzodejstvie_upravlenie_ot_proizvolen_r (accessed on 27 March 2023).
  15. Penev, B.G.; Christov, N.D. On the Synthesis of Time Optimal Control for a Class of Linear Systems. In Proceedings of the 2002 American Control Conference (IEEE Cat. No. CH37301), Anchorage, AK, USA, 8–10 May 2002; Volume 1, pp. 316–321.
  16. Penev, B.G.; Christov, N.D. On the State-Space Analysis in the Synthesis of Time-Optimal Control for a Class of Linear Systems. In Proceedings of the 2004 American Control Conference, Boston, MA, USA, 30 June–2 July 2004; Volume 1, pp. 40–45.
  17. Penev, B.G.; Christov, N.D. A fast time-optimal control synthesis algorithm for a class of linear systems. In Proceedings of the 2005 American Control Conference, Portland, OR, USA, 8–10 June 2005; Volume 2, pp. 883–888.
  18. Penev, B.G. Analysis of the possibility for time-optimal control of the scanning system of the GREEN-WAKE’s project lidar. arXiv 2012, arXiv:1807.08300.
  19. L7.3 Time-Optimal Control for Linear Systems Using Pontryagin’s Principle of Maximum. Graduate course “Optimal and Robust Control” (B3M35ORR, BE3M35ORR, BEM35ORC), Faculty of Electrical Engineering, Czech Technical University in Prague. Available online: https://www.youtube.com/watch?v=YiIksQcg8EU (accessed on 27 March 2023).
  20. Solution of Minimum-Time Control Problem with an Example. Available online: https://www.youtube.com/watch?v=Oi90M3cS8wg (accessed on 27 March 2023).
  21. Girard, A. Optimal Control: Example of the Minimum Time for a Mass with a Force Limit. From the course “Modélisation, Analyse et Commande des Robots”: example of computing an optimal control law. (In French). Available online: https://www.youtube.com/watch?v=wKjEAXFvXlQ (accessed on 27 March 2023).
  22. Ryutin, K.S. Calculus of Variations and Optimal Control—12. The Time-Optimal Control Problem. (In Russian). Available online: https://www.youtube.com/watch?v=u7FtLP5BWeg (accessed on 10 June 2023).
Figure 1. Schematic representation of the initial system (1) in form (10).
Figure 2. Representation of the areas R+ and R− as well as the two parts γ+ and γ− of the switching curve S2 in the phase plane y–v.
Figure 3. Near time-optimal process with an accuracy of εr = 0.001 referring to the initial state: (a) (y0, v0) = (10, 0) with corresponding (x10, x20) = (0, 10); (b) (y0, v0) = (−10, 0) with corresponding (x10, x20) = (0, −10).
Figure 4. Phase trajectories of the near time-optimal processes with an accuracy of εr = 0.001 in the phase plane: (a) y–v of the system (32); (b) x1–x2 of the system (44).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
