Article

Adaptive Synthesized Control for Solving the Optimal Control Problem

by Askhat Diveev and Elizaveta Shmalko *,†
Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 44/2, Vavilova Str., Moscow 119333, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2023, 11(19), 4035; https://doi.org/10.3390/math11194035
Submission received: 2 September 2023 / Revised: 19 September 2023 / Accepted: 20 September 2023 / Published: 22 September 2023

Abstract: The development of artificial intelligence systems assumes that a machine can independently generate an algorithm of actions or a control system to solve a given task. To do this, the machine must have a formal description of the problem and possess computational methods for solving it. This article deals with the problem of optimal control, which is the central task in the development of control systems, insofar as every system being developed should be optimal with respect to a certain criterion. However, there are certain difficulties in implementing the resulting optimal control modes in practice. This paper considers an extended formulation of the optimal control problem, which implies the creation of systems that have the properties necessary for practical implementation. To solve it, an adaptive synthesized optimal control approach based on numerical machine learning methods is proposed. Such control moves the control object by optimally changing the position of a stable equilibrium point in the presence of some uncertainty in the initial position. As a result, from all possible synthesized controls, the one that is least sensitive to changes in the initial state is chosen. As an example, the optimal control problem of a quadcopter with complex phase constraints is considered. To solve this problem according to the proposed approach, the control synthesis problem is first solved to obtain a stable equilibrium point in the state space using a machine learning method of symbolic regression. After that, optimal positions of the stable equilibrium point are sought with a particle swarm optimization algorithm using the functional from the initial optimal control problem statement. It is shown that such an approach allows the control system to be generated automatically by a computer based on the formal statement of the problem and then implemented directly onboard, since the stabilization system has already been introduced.

1. Introduction

Long ago, Leonhard Euler spoke about the optimal arrangement of everything in the world: “For since the fabric of the universe is most perfect and the work of a most wise Creator, nothing at all takes place in the universe in which some rule of maximum or minimum does not appear”. Striving for optimality is natural in every sphere.
To move an autonomous robot optimally to a certain target position, engineers currently, as a standard practice, first solve the optimal control problem to obtain the optimal trajectory and then solve the additional problem of moving the robot along the obtained optimal trajectory. In most cases, the following approach is used to move the robot along a path. Initially, the object is made stable relative to a certain point in the state space. Then, the stability points are placed along the desired path and the object is moved along the trajectory by following these points from one point to another [1,2,3,4,5,6,7]. The difference between the existing methods lies in how the control synthesis problem is solved to ensure stability relative to some equilibrium point in the state space and in how these stability points are located.
Often, to ensure stability, the model of the control object is linearized in the vicinity of a certain point in the state space. Then, for the linear model of the object, a linear feedback control is found that places the eigenvalues of the closed-loop control system matrix in the left half of the complex plane. Sometimes, to improve the quality of stabilization, control channels or components of the control vector are identified that affect the movement of the object along a specific coordinate axis of the state space. Controllers, as a rule PI controllers, are then inserted into these channels, with coefficients adjusted according to the specified control quality criterion [3,4]. In some cases, analytical or semi-analytical methods are used to solve the control synthesis problem and build nonlinear stable control systems [5,7]. However, the stability property obtained from the linearization of the nonlinear model of the control object is generally preserved only in the vicinity of the stable equilibrium point.
The main drawback of the approach in which the control object is moved along stable points on the trajectory is that, even if this trajectory is obtained as a solution of the optimal control problem [8], the resulting movement will never be optimal. To ensure optimality, it is necessary to move along the trajectory at a certain speed, but when approaching a stable equilibrium point, the speed of the control object tends to zero.
The optimal control problem generally does not require ensuring the stability of the control object. The construction of a stabilization system that provides the stability of the object relative to some equilibrium point in the state space is carried out by the researcher to achieve predictable behavior of the control object in the vicinity of a given trajectory.
The optimal control problem in the classical formulation is solved for a control object without any stabilization system; therefore, the resulting optimal control and the optimal trajectory will not be optimal for the same object once a stabilization system is introduced. It follows that the classical formulation of the optimal control problem [9] is missing something, as its solution cannot be directly implemented in a real object, since it leads to an open-loop control. An open-loop control system is very sensitive to small disturbances, which are always possible in real conditions, since no model describes the control object exactly. In order to achieve optimal control in a real object, it is necessary to build a feedback control system, which should provide some additional properties, for example, stability relative to the trajectory or to points on this trajectory. The authors of [10] proposed an extended formulation of the optimal control problem with additional requirements imposed on the optimal trajectory. The optimal trajectory must have a non-empty neighborhood with the property of attraction. Meeting these requirements allows the solution of the optimal control problem to be implemented directly in the real control object.
In [11,12], an approach to solving the extended optimal control problem on the basis of synthesized control is presented. This approach ensures that the solution of the optimal control problem is obtained in the class of practically implementable control functions. According to this approach, the control synthesis problem is solved first, so that the control object becomes stable relative to some equilibrium point in the state space. In the second stage, the optimal control problem is solved by determining the optimal positions of the stable equilibrium point. Switching the stable points after a constant time interval moves the control object from the initial state to the terminal one optimally according to the given quality criterion. The optimal positions of the stable equilibrium points can lie far from the optimal trajectory in the state space; therefore, the control object does not slow down its motion. Studies of synthesized control in various optimal control problems have shown that such control is not sensitive to perturbations and can be implemented directly in a real object [13,14].
In synthesized control, the optimal control problem is solved for a control object that already has a stabilization system. Another advantage of synthesized control is that the position of the stable point does not change during each time interval; that is, the optimal control function is sought in the class of piecewise constant functions, which simplifies the search for the optimal solution.
It is possible that piecewise constant control in the synthesized approach yields several optimal solutions with practically the same value of the quality criterion. This circumstance prompted the idea of finding, among all almost-optimal solutions, the one that is least sensitive to perturbations. This approach is called adaptive synthesized control.
In this work, the principle of adaptive synthesized control is proposed in Section 2, methods for solving it are discussed in Section 3, and in Section 4 a computational experiment on solving the optimal control problem for the spatial motion of a quadcopter by adaptive synthesized control is considered.

2. Adaptive Synthesized Control

Consider the principle of adaptive synthesized control for solving the optimal control problem in its extended formulation [10].
Initially, the control synthesis problem is solved to provide stability of the control object relative to some point in the state space. In this problem, the mathematical model of the control object is given in the form of an ordinary differential equation system:
$$\dot{x} = f(x, u), \quad (1)$$
where $x$ is a state vector, $x \in \mathbb{R}^n$, $u$ is a control vector, $u \in U \subseteq \mathbb{R}^m$, and $U$ is a compact set that determines restrictions on the control vector.
The domain of admissible initial states is given:
$$X_0 \subseteq \mathbb{R}^n. \quad (2)$$
To solve the problem numerically, the initial domain (2) is taken in the form of a finite number of points in the state space:
$$\tilde{X}_0 = \{x^{0,1}, \ldots, x^{0,K}\}. \quad (3)$$
Sometimes, it is convenient to set one initial state and deviations from it:
$$x^{0,j} = x^0 - \Delta^0 + 2\,(j)_2 \odot \Delta^0, \quad j = 1, \ldots, 2^n - 1, \quad (4)$$
where $x^0$ is a given initial state, $\Delta^0$ is a deviations vector, $\Delta^0 = [\Delta_1 \ \cdots \ \Delta_n]^T$, $\odot$ is the Hadamard product of vectors, and $(j)_2$ is the binary code of the number $j$. In this case, $K = 2^n - 1$.
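As an illustration, a minimal NumPy sketch of how the set (4) could be generated is given below; the helper name initial_state_grid and the use of the Section 4 values for $x^0$ and $\Delta^0$ are only assumptions made for the example, not code from the paper.

```python
import numpy as np

def initial_state_grid(x0, delta0):
    """Set of perturbed initial states (4): x^{0,j} = x^0 - Delta^0 + 2*(j)_2 (Hadamard) Delta^0,
    j = 1, ..., 2^n - 1, where (j)_2 is the binary code of j."""
    n = len(x0)
    points = []
    for j in range(1, 2 ** n):
        bits = np.array([(j >> k) & 1 for k in range(n)], dtype=float)  # binary code of j
        points.append(x0 - delta0 + 2.0 * bits * delta0)                # component-wise product
    return np.array(points)

# Illustration with the deviation vector used later in Section 4 (zero deviations produce
# coinciding corners, which can be dropped with np.unique):
x0 = np.array([0.0, 5.0, 0.0, 0.0, 0.0, 0.0])
delta0 = np.array([2.0, 2.0, 2.0, 0.0, 0.0, 0.0])
X0 = np.unique(initial_state_grid(x0, delta0), axis=0)
```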
The stabilization point as a terminal state is given by
$$x^{f_1} \in \mathbb{R}^n. \quad (5)$$
It is necessary to find a control function in the form
$$u = h(x^{f_1} - x) \in U, \quad (6)$$
where $h(x): \mathbb{R}^n \to \mathbb{R}^m$, such that it minimizes the quality criterion
$$J_0 = \sum_{i=1}^{K} \left( t_{f_1,i} + p\, \| x^{f_1} - x(t_{f_1,i}, x^{0,i}) \| \right) \to \min, \quad (7)$$
where $t_{f_1,i}$ is the time of reaching the terminal state (5) from the initial state $x^{0,i}$, determined by the equation
$$t_{f_1,i} = \begin{cases} t, & \text{if } t < t^+ \text{ and } \| x^{f_1} - x(t, x^{0,i}) \| \le \varepsilon_0, \\ t^+, & \text{otherwise}, \end{cases} \quad (8)$$
$x(t, x^{0,i})$ is the particular solution from the initial state $x^{0,i}$, $i = 1, \ldots, K$, of the differential Equation (1) with the inserted control function (6),
$$\dot{x} = f(x, h(x^{f_1} - x)), \quad (9)$$
$\varepsilon_0$ is a given accuracy of hitting the terminal state (5), $t^+$ is a given maximal time of the control process, and $p$ is a weight coefficient.
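A possible way to evaluate the stopping rule (8) and the criterion (7) numerically is sketched below; the functions rhs and h, the explicit Euler integrator and its step size are assumptions made only for illustration, not the integration scheme used in the paper.

```python
import numpy as np

def simulate_to_point(rhs, h, x0, x_f1, eps0, t_plus, dt=1e-3):
    """Integrate x' = f(x, h(x_f1 - x)) from x0 until the terminal point is reached within
    eps0 or the time limit t_plus expires, as in (8); returns (t_f, final state)."""
    x, t = np.array(x0, dtype=float), 0.0
    while t < t_plus:
        if np.linalg.norm(x_f1 - x) <= eps0:
            return t, x                           # terminal state reached: t_f = t
        x = x + dt * rhs(x, h(x_f1 - x))          # explicit Euler step (placeholder integrator)
        t += dt
    return t_plus, x                              # time limit hit: t_f = t^+

def synthesis_criterion(rhs, h, X0, x_f1, eps0, t_plus, p):
    """Criterion (7): sum over all initial states of the hitting time plus the weighted miss."""
    total = 0.0
    for x0 in X0:
        t_f, x_end = simulate_to_point(rhs, h, x0, x_f1, eps0, t_plus)
        total += t_f + p * np.linalg.norm(x_f1 - x_end)
    return total
```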
Further, using the principles of synthesized optimal control, the following optimal control problem is considered. The closed-loop model (9) of the control object is used with the terminal state replaced:
$$\dot{x} = f(x, h(x^{*} - x)), \quad (10)$$
where the terminal state vector (5) is changed into the new unknown vector $x^{*}$, which serves as the control vector in the considered optimal control problem.
In accordance with the classical formulation of the optimal control problem, the initial state of the object is given:
$$x^0 \in \mathbb{R}^n. \quad (11)$$
In engineering practice, there can be some deviations in the initial position; therefore, in adaptive synthesized control, instead of one initial state (11), the set of initial states defined by Equation (4) is used. The vector of initial deviations $\Delta^0$ is set according to the level of disturbances.
The goal of control is defined by reaching the terminal state
$$x^f \in \mathbb{R}^n. \quad (12)$$
The quality criterion is given:
$$J_1 = \int_0^{t_f} f_0(x, h(x^{*} - x))\, dt \to \min, \quad (13)$$
where $t_f$ is the terminal time; $t_f$ is not given but is limited, $t_f \le t^+$, where $t^+$ is a given limit time of the control process.
According to the principle of synthesized control, it is necessary to choose a time interval $\Delta t$ and to search for optimal constant values of the control vector $x^{*,i}$ for each interval:
$$x^{*} = x^{*,i}, \ \text{if } (i-1)\Delta t \le t < i\Delta t, \quad i = 1, \ldots, M, \quad (14)$$
where $M$ is the number of intervals,
$$M = \frac{t^+}{\Delta t}. \quad (15)$$
So, the system (10), with the found optimal constant values of the control vector (14) in the right-hand side of the differential equations, has a particular solution that reaches the terminal state (12) from the given initial state (11) with an optimal value of the quality criterion (13).
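The switching rule (14) can be sketched as follows; reference_point and closed_loop_rhs are hypothetical helper names, and rhs and h stand for the object model (1) and the synthesized stabilizing controller (6).

```python
import numpy as np

def reference_point(t, x_points, dt_switch):
    """x*(t) according to (14): x* = x^{*,i} for (i-1)*dt_switch <= t < i*dt_switch."""
    i = min(int(t // dt_switch), len(x_points) - 1)   # clamp to the last interval
    return x_points[i]

def closed_loop_rhs(t, x, rhs, h, x_points, dt_switch):
    """Right-hand side of (10) with the switched equilibrium point."""
    x_star = reference_point(t, x_points, dt_switch)
    return rhs(x, h(x_star - x))
```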
Algorithmically, in the second stage of the adaptive synthesized control approach, the optimal values of the vector $x^{*}$ are found as a result of an optimization task with the following quality criterion, which takes into account the given grid of initial conditions:
$$J_2 = \sum_{i=1}^{K} \left( \int_0^{t_{f,i}} f_0(x, h(x^{*} - x))\, dt + p\,\| x^f - x(t_{f,i}, x^{0,i}) \| \right) \to \min_{x^{*}}, \quad (16)$$
where $K$ is the number of initial states, $x^{*}$ is defined by (14), and $t_{f,i}$ is determined by Equation (8).

3. Methods of Solving

As described in the previous section, the approach based on the principle of adaptive synthesized optimal control consists of two stages.
To implement the first stage of the approach under consideration, i.e., to solve the control synthesis problem (1)–(9), any known method can be used. For linear systems, for example, methods of modal control [15] can be applied, as well as analytical methods such as backstepping [16,17] or synthesis based on the application of a Lyapunov function [18]. In practice, stability is often ensured through linearization of the model (1) at the terminal state and setting PI or PID controllers in the control channels [19,20]. All known analytical and technical methods have their limitations, which mostly depend on the type of model used to describe the control object. The mathematical formulation of the stabilization problem as a control synthesis problem is needed in order to apply numerical methods and obtain a feedback control function automatically. Today, modern numerical machine learning methods can be applied to solve the synthesis problem for nonlinear dynamic objects of varying complexity [21]. Among different machine learning techniques, only symbolic regression allows searching both for the structure of the needed mathematical function and for its parameters. In our case, the needed function is a control function. So, in the present paper, machine learning by symbolic regression [22,23] is used.
Methods of symbolic regression search for the mathematical expression of the desired function in the encoded form. These methods differ in the form of this code. The search for solutions is performed in the space of codes by a special genetic algorithm.
Let us demonstrate the main features of symbolic regression using the example of the network operator method (NOP), which was used in the computational experiment in this work. To encode a mathematical expression, NOP uses an alphabet of elementary functions:
Functions without arguments, i.e., the variables and parameters of the mathematical expression,
$$F_0 = \{f_{0,1} = x_1, \ldots, f_{0,n} = x_n, f_{0,n+1} = q_1, \ldots, f_{0,n+r} = q_r\}; \quad (17)$$
Functions with one argument,
$$F_1 = \{f_{1,1}(z) = z, f_{1,2}(z), \ldots, f_{1,W}(z)\}; \quad (18)$$
Functions with two arguments,
$$F_2 = \{f_{2,1}(z_1, z_2), \ldots, f_{2,V}(z_1, z_2)\}. \quad (19)$$
Any elementary function is coded by two digits: the first is the number of arguments, and the second is the function number in the corresponding set. These digits are written as indexes of the elements in the introduced sets of the alphabet (17)–(19). The set of functions with one argument must include the identity function $f_{1,1}(z) = z$. Functions with two arguments should be commutative and associative and have a unit element.
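In code, such an alphabet could be stored, for example, as follows; the concrete unary and binary functions anticipate the example alphabet (20) introduced below, and this is only a sketch, not the authors' implementation.

```python
import math

# Unary functions F1; the first one must be the identity f_{1,1}(z) = z.
F1 = [lambda z: z, lambda z: -z, math.cos, math.sin]

# Binary functions F2 must be commutative and associative; UNIT holds their unit elements.
F2 = [lambda a, b: a + b, lambda a, b: a * b]
UNIT = [0.0, 1.0]   # e_1 = 0 for addition, e_2 = 1 for multiplication

# Functions without arguments (F0) are simply the current values of the variables x_i and
# parameters q_i; they are supplied at evaluation time.
```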
NOP encodes a mathematical expression in the form of an oriented graph. Source-nodes of the NOP-graph are connected with functions without arguments, while other nodes are connected with functions with two arguments. Arcs of the NOP-graph are connected with functions with one argument. If on the NOP-graph some node has one input arc, then the second argument is a unit element for the function with two arguments connected with this node.
Let us define the following alphabet of elementary functions:
$$F_0 = \{f_{0,1} = x_1,\ f_{0,2} = x_2,\ f_{0,3} = q_1,\ f_{0,4} = q_2\};$$
$$F_1 = \{f_{1,1}(z) = z,\ f_{1,2}(z) = -z,\ f_{1,3}(z) = \cos(z),\ f_{1,4}(z) = \sin(z)\};$$
$$F_2 = \{f_{2,1}(z_1, z_2) = z_1 + z_2,\ f_{2,2}(z_1, z_2) = z_1 z_2\}. \quad (20)$$
With this alphabet, the following mathematical expressions can be encoded in the form of NOP:
$$y_1 = \cos(x_1)\sin(x_2) - \sin(x_1)\cos(x_2);$$
$$y_2 = \cos(q_1 x_1 + q_2 \sin(x_2));$$
$$y_3 = q_1 \sin(q_2 x_2) + q_2 \cos(q_1 x_1);$$
$$y_4 = q_1 \sin(q_2 \cos(x_1)) + q_2 \cos(q_1 \sin(x_2)). \quad (21)$$
The NOP-graphs of these mathematical expressions are presented in Figure 1. The nodes of the graph are numbered. Inside each node there is either the number of a binary operation or an element of the set of variables and parameters F 0 , and the arcs of the graph indicate the numbers of unary operations.
In computer memory, the NOP-graphs are presented in the form of integer matrices
$$\Psi = [\psi_{i,j}], \quad i, j = 1, \ldots, L. \quad (22)$$
As the NOP nodes are enumerated in such a way that the number of the node from which an arc comes out is less than the number of the node which the arc enters, the NOP-matrix has an upper triangular form. Every row of the matrix corresponds to some node of the graph. Rows with zeros on the main diagonal correspond to source nodes of the graph. The other elements on the main diagonal are the numbers of functions with two arguments. Non-zero elements above the main diagonal are the numbers of functions with one argument.
NOP-matrices for the mathematical expressions (21) have the following forms:
$$\Psi_1 = \begin{bmatrix} 0 & 0 & 3 & 4 & 0 \\ 0 & 0 & 4 & 3 & 0 \\ 0 & 0 & 2 & 0 & 2 \\ 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad
\Psi_2 = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix},$$
$$\Psi_3 = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad
\Psi_4 = \begin{bmatrix} 0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$
To calculate a mathematical expression from its NOP-matrix, the vector of nodes is first determined. The number of components of the vector of nodes equals the number of nodes in the graph. The initial vector of nodes contains the variables and parameters in the positions that correspond to source nodes, while the other components are equal to the unit elements of the corresponding functions with two arguments. Then, every row of the matrix is checked; if an element of the matrix is non-zero, the corresponding element of the vector of nodes is changed. To calculate a mathematical expression from the NOP-matrix, the following equation is used:
$$z_j^{(i)} \leftarrow \begin{cases} f_{2,\psi_{j,j}}\!\left(z_j^{(i-1)}, f_{1,\psi_{i,j}}\!\left(z_i^{(i-1)}\right)\right), & \text{if } \psi_{i,j} \ne 0, \\ z_j^{(i-1)}, & \text{otherwise}, \end{cases} \quad i = 1, \ldots, L-1, \ j = i+1, \ldots, L,$$
where
$$z_i^{(0)} = \begin{cases} f_{0,i}, & \text{if } \psi_{i,i} = 0, \\ e_{\psi_{i,i}}, & \text{otherwise}, \end{cases}$$
and $e_j$ is the unit element of the function with two arguments $f_{2,j}(z_1, z_2)$,
$$f_{2,j}(e_j, z) = f_{2,j}(z, e_j) = z.$$
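A sketch of this decoding procedure is given below; nop_evaluate is a hypothetical helper, the alphabet corresponds to the example (20), and the matrix $\Psi_2$ from above is used as a self-check.

```python
import math
import numpy as np

# Example alphabet (20): unary functions, binary functions and their unit elements.
F1 = [lambda z: z, lambda z: -z, math.cos, math.sin]
F2 = [lambda a, b: a + b, lambda a, b: a * b]
UNIT = [0.0, 1.0]

def nop_evaluate(psi, source_values):
    """Decode a NOP-matrix: build the initial node vector, then sweep the upper triangle,
    updating node j through the unary function on arc (i, j) and the binary function of node j."""
    L = psi.shape[0]
    z = [source_values[i] if psi[i, i] == 0 else UNIT[psi[i, i] - 1] for i in range(L)]
    for i in range(L - 1):
        for j in range(i + 1, L):
            if psi[i, j] != 0:
                z[j] = F2[psi[j, j] - 1](z[j], F1[psi[i, j] - 1](z[i]))
    return z[-1]

# Self-check on the matrix Psi_2 above: it must reproduce y_2 = cos(q1*x1 + q2*sin(x2)).
Psi2 = np.array([
    [0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 4, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 2, 0, 1, 0],
    [0, 0, 0, 0, 0, 2, 1, 0],
    [0, 0, 0, 0, 0, 0, 1, 3],
    [0, 0, 0, 0, 0, 0, 0, 1],
])
x1, x2, q1, q2 = 0.3, -0.7, 1.5, 2.0
assert abs(nop_evaluate(Psi2, [x1, x2, q1, q2, 0, 0, 0, 0])
           - math.cos(q1 * x1 + q2 * math.sin(x2))) < 1e-12
```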
Consider an example of calculating the second mathematical expression in (21) on its NOP-matrix Ψ 2 .
The initial vector of nodes is
$$z^{(0)} = [x_1 \ x_2 \ q_1 \ q_2 \ 1 \ 1 \ 0 \ 0]^T.$$
Further, all rows of the matrix $\Psi_2$ are checked and the non-zero elements are processed:
$$\psi_{1,5} = 1: \quad z_5^{(1)} = f_{2,2}(z_5^{(0)}, f_{1,1}(z_1^{(0)})) = 1 \cdot x_1 = x_1,$$
$$\psi_{2,6} = 4: \quad z_6^{(2)} = f_{2,2}(z_6^{(1)}, f_{1,4}(z_2^{(1)})) = 1 \cdot \sin(x_2) = \sin(x_2),$$
$$\psi_{3,5} = 1: \quad z_5^{(3)} = f_{2,2}(z_5^{(2)}, f_{1,1}(z_3^{(2)})) = x_1 \cdot q_1 = q_1 x_1,$$
$$\psi_{4,6} = 1: \quad z_6^{(4)} = f_{2,2}(z_6^{(3)}, f_{1,1}(z_4^{(3)})) = \sin(x_2) \cdot q_2 = q_2 \sin(x_2),$$
$$\psi_{5,7} = 1: \quad z_7^{(5)} = f_{2,1}(z_7^{(4)}, f_{1,1}(z_5^{(4)})) = 0 + q_1 x_1 = q_1 x_1,$$
$$\psi_{6,7} = 1: \quad z_7^{(6)} = f_{2,1}(z_7^{(5)}, f_{1,1}(z_6^{(5)})) = q_1 x_1 + q_2 \sin(x_2),$$
$$\psi_{7,8} = 3: \quad z_8^{(7)} = f_{2,1}(z_8^{(6)}, f_{1,3}(z_7^{(6)})) = 0 + \cos(q_1 x_1 + q_2 \sin(x_2)) = \cos(q_1 x_1 + q_2 \sin(x_2)).$$
The last mathematical expression coincides with the needed mathematical expression for y 2 (21).
We have thus described the coding used in the NOP method. To search for an optimal mathematical expression in some task, the NOP method applies the principle of small variations of a basic solution. According to this principle, one possible solution is encoded in the form of the NOP-matrix $\Psi_0$. This solution is the basic solution, and it is set by the researcher as a known good solution. Other possible solutions are presented in the form of sets of small-variation vectors. A small variation vector consists of four integer numbers,
$$w = [w_1 \ w_2 \ w_3 \ w_4]^T,$$
where $w_1$ is the type of small variation, $w_2$ is a row number of the NOP-matrix, $w_3$ is a column number of the NOP-matrix, and $w_4$ is a new value of an NOP-matrix element. There are four types of small variations: $w_1 = 0$ is an exchange of a function with one argument: if $\psi_{w_2,w_3} \ne 0$, then $\psi_{w_2,w_3} \leftarrow w_4$; $w_1 = 1$ is an exchange of a function with two arguments: if $\psi_{w_2,w_2} \ne 0$, then $\psi_{w_2,w_2} \leftarrow w_4$; $w_1 = 2$ is an insertion of an additional function with one argument: if $\psi_{w_2,w_3} = 0$, then $\psi_{w_2,w_3} \leftarrow w_4$; $w_1 = 3$ is an elimination of a function with one argument: if $\psi_{w_2,w_3} \ne 0$, and $\psi_{w_2,j} \ne 0$ for some $j > w_2$, $j \ne w_3$, and $\psi_{i,w_3} \ne 0$ for some $i \ne w_2$, then $\psi_{w_2,w_3} \leftarrow 0$.
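The four variation types could be applied to a NOP-matrix as sketched below; apply_variation and decode are hypothetical helper names, indices are 0-based unlike the 1-based indices in the text, and the guards follow the description above.

```python
import numpy as np

def apply_variation(psi, w):
    """Apply one small variation w = (type, row, col, value) to a copy of the NOP-matrix."""
    psi = psi.copy()
    t, i, j, v = w
    if t == 0 and i != j and psi[i, j] != 0:          # exchange a one-argument function
        psi[i, j] = v
    elif t == 1 and psi[i, i] != 0:                   # exchange a two-argument function
        psi[i, i] = v
    elif t == 2 and i != j and psi[i, j] == 0:        # insert an additional one-argument function
        psi[i, j] = v
    elif t == 3 and i != j and psi[i, j] != 0:        # eliminate a one-argument function
        other_out = any(psi[i, k] != 0 for k in range(i + 1, psi.shape[1]) if k != j)
        other_in = any(psi[k, j] != 0 for k in range(j) if k != i)
        if other_out and other_in:                    # node i keeps an output, node j keeps an input
            psi[i, j] = 0
    return psi

def decode(variations, psi0):
    """Obtain the NOP-matrix of a possible solution by applying its variation vectors in order."""
    psi = psi0
    for w in variations:
        psi = apply_variation(psi, w)
    return psi
```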
The initial population includes H possible solutions. Each possible solution i { 1 , , H } except the basic solution is encoded in the form of the set of small variation vectors
$$W_i = (w^{i,1}, \ldots, w^{i,d}), \quad i \in \{1, \ldots, H\},$$
where d is a depth of variations, which is set as a parameter of the algorithm.
The NOP-matrix of a possible solution is determined after application of all small variations to the basic solution
$$\Psi_i = w^{i,d} \circ \cdots \circ w^{i,1} \circ \Psi_0, \quad i \in \{1, \ldots, H\}.$$
Here, the small variation vector is written as a mathematical operator changing matrix Ψ 0 .
During the search process sometimes the basic solution is replaced by the current best possible solution. This process is called a change of an epoch.
Consider an example of applying small variations to the NOP-matrix Ψ 3 . Let d = 3 and there are three following small variation vectors:
$$w^1 = [0 \ 3 \ 5 \ 2]^T, \quad w^2 = [2 \ 5 \ 6 \ 3]^T, \quad w^3 = [2 \ 8 \ 9 \ 4]^T.$$
After application of these small variation vectors to the NOP-matrix Ψ 3 , the following NOP-matrix is obtained:
$$\Psi_5 = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 3 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 4 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$
This NOP-matrix corresponds to the following mathematical expression:
$$y_5 = \sin(q_1 \sin(q_2 x_2 \cos(q_1 x_1))) + q_2 \cos(q_1 x_1).$$
The search engine is a genetic algorithm. To perform the main genetic operation of crossover, two possible solutions are selected randomly,
$$W_\alpha = (w^{\alpha,1}, \ldots, w^{\alpha,d}), \quad W_\beta = (w^{\beta,1}, \ldots, w^{\beta,d}).$$
A crossover point $c \in \{1, \ldots, d\}$ is selected randomly. Two new possible solutions are obtained as the result of exchanging the elements of the selected possible solutions after the crossover point:
$$W_{H+1} = (w^{\alpha,1}, \ldots, w^{\alpha,c}, w^{\beta,c+1}, \ldots, w^{\beta,d}), \quad W_{H+2} = (w^{\beta,1}, \ldots, w^{\beta,c}, w^{\alpha,c+1}, \ldots, w^{\alpha,d}).$$
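A sketch of this crossover on two sets of variation vectors; drawing the crossover point so that the exchange is non-trivial is an assumption of the sketch.

```python
import random

def crossover(W_alpha, W_beta):
    """Exchange the tails of two variation-vector sets after a random crossover point."""
    d = len(W_alpha)
    c = random.randint(1, d - 1)          # crossover point chosen so the exchange is non-trivial
    return W_alpha[:c] + W_beta[c:], W_beta[:c] + W_alpha[c:]
```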
The second stage of the synthesized principle under consideration is to solve the optimal control problem via determination of the optimal positions of the equilibrium points. Studies have shown that evolutionary algorithms cope well with such complex optimal control problems with phase constraints. Good results were demonstrated [24] by algorithms such as the genetic algorithm (GA) [25], the particle swarm optimization (PSO) algorithm [26,27,28], the grey wolf optimizer (GWO) algorithm [29], and a hybrid algorithm [24] involving one population of possible solutions and all three evolutionary transformations of GA, PSO and GWO selected randomly.
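Any of the listed population algorithms can play the role of the optimizer of the equilibrium-point coordinates in the second stage. Below is a minimal particle swarm optimization sketch over a box-bounded search space; the inertia and acceleration coefficients are common default values and are assumptions of the sketch, not the settings used in the paper.

```python
import numpy as np

def pso_minimize(f, lower, upper, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: returns the best found vector and its criterion value."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([f(xi) for xi in x])   # personal bests
    g_best, g_val = p_best[np.argmin(p_val)].copy(), p_val.min()  # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lower, upper)
        vals = np.array([f(xi) for xi in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        if vals.min() < g_val:
            g_val = vals.min()
            g_best = x[vals.argmin()].copy()
    return g_best, g_val
```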

4. Computational Experiment

Consider the optimal control problem for the spatial motion of a quadcopter. In this problem, the quadcopter should move in minimum time along a closed route from the given initial state back to the same state as the terminal one, avoiding collisions with obstacles and passing through given areas.

4.1. Mathematical Model of Spatial Movement of Quadcopter

In the general case, the mathematical model of a quadcopter as a rigid body has the following form:
$$\ddot{x} = F(\cos(\gamma)\sin(\theta)\cos(\psi) + \sin(\gamma)\sin(\psi))/m;$$
$$\ddot{y} = F\cos(\gamma)\cos(\theta)/m - g;$$
$$\ddot{z} = F(\cos(\gamma)\sin(\theta)\sin(\psi) + \sin(\gamma)\cos(\psi))/m;$$
$$\ddot{\gamma} = ((I_{yy} - I_{zz})\dot{\theta}\dot{\psi} + M_x)/I_{xx};$$
$$\ddot{\psi} = ((I_{zz} - I_{xx})\dot{\gamma}\dot{\theta} + M_y)/I_{yy};$$
$$\ddot{\theta} = ((I_{xx} - I_{yy})\dot{\gamma}\dot{\psi} + M_z)/I_{zz}, \quad (32)$$
where $F$ is the summary thrust force of all drone rotors, $m$ is the mass of the drone, $g$ is the acceleration of gravity, $g = 9.80665\ \text{m/s}^2$, and $M_x$, $M_y$, $M_z$ are the control moments around the respective axes.
Figure 2 shows how the rotation angles of the quadcopter are related to its body axes.
To transform the model (32) to vector form, the following designations are introduced: $x = x_1$, $y = x_2$, $z = x_3$, $\dot{x}_1 = x_4$, $\dot{x}_2 = x_5$, $\dot{x}_3 = x_6$, $\gamma = x_7$, $\psi = x_8$, $\theta = x_9$, $\dot{\gamma} = x_{10}$, $\dot{\psi} = x_{11}$, $\dot{\theta} = x_{12}$, $M_1 = M_x$, $M_2 = M_y$, $M_3 = M_z$.
As a result, the following mathematical model is obtained:
$$\dot{x}_1 = x_4; \quad \dot{x}_2 = x_5; \quad \dot{x}_3 = x_6;$$
$$\dot{x}_4 = F(\cos(x_7)\sin(x_9)\cos(x_8) + \sin(x_7)\sin(x_8))/m;$$
$$\dot{x}_5 = F\cos(x_7)\cos(x_9)/m - g;$$
$$\dot{x}_6 = F(\cos(x_7)\sin(x_9)\sin(x_8) + \sin(x_7)\cos(x_8))/m;$$
$$\dot{x}_7 = x_{10}; \quad \dot{x}_8 = x_{11}; \quad \dot{x}_9 = x_{12};$$
$$\dot{x}_{10} = ((I_{yy} - I_{zz})x_{11}x_{12} + M_x)/I_{xx};$$
$$\dot{x}_{11} = ((I_{zz} - I_{xx})x_{10}x_{12} + M_y)/I_{yy};$$
$$\dot{x}_{12} = ((I_{xx} - I_{yy})x_{10}x_{11} + M_z)/I_{zz}, \quad (33)$$
where $x$ is the state space vector, $x = [x_1 \ \cdots \ x_{12}]^T$, and $M$ is the vector of control moments, $M = [M_1 \ M_2 \ M_3]^T$.
As a rule, quadcopters are manufactured with an angle stabilization system. This means that the drone can be stabilized at any given angles for some time interval. The angle stabilization system provides a stable orientation of the drone relative to given angles by means of the control moments:
$$M_i = w_i(x_7^{*} - x_7,\ x_8^{*} - x_8,\ x_9^{*} - x_9,\ x_{10},\ x_{11},\ x_{12}), \quad i = 1, 2, 3. \quad (34)$$
Assume that the angular stabilization system tracks the given angles of the quadcopter quickly enough, at least in comparison with the spatial movement. In this case, we can assume that the control of the spatial movement of the quadcopter is carried out using the angular position of the drone and the thrust force. Let us define the components of the spatial control vector: $x_7 = u_1$, $x_8 = u_2$, $x_9 = u_3$, $F/m = u_4$.
As a result, we obtain the following model of the spatial quadcopter movement:
$$\dot{x}_1 = x_4; \quad \dot{x}_2 = x_5; \quad \dot{x}_3 = x_6;$$
$$\dot{x}_4 = u_4(\cos(u_1)\sin(u_3)\cos(u_2) + \sin(u_1)\sin(u_2));$$
$$\dot{x}_5 = u_4\cos(u_1)\cos(u_3) - g;$$
$$\dot{x}_6 = u_4(\cos(u_1)\sin(u_3)\sin(u_2) + \sin(u_1)\cos(u_2)). \quad (35)$$
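For illustration, the reduced model (35) can be written as a right-hand-side function; quad_rhs is a hypothetical name, and g is taken from (32).

```python
import numpy as np

G = 9.80665  # acceleration of gravity, m/s^2

def quad_rhs(x, u):
    """Right-hand side of the reduced spatial model (35):
    x = (x1..x6) are positions and velocities, u = (u1..u4) are roll, yaw, pitch and thrust/mass."""
    u1, u2, u3, u4 = u
    return np.array([
        x[3],
        x[4],
        x[5],
        u4 * (np.cos(u1) * np.sin(u3) * np.cos(u2) + np.sin(u1) * np.sin(u2)),
        u4 * np.cos(u1) * np.cos(u3) - G,
        u4 * (np.cos(u1) * np.sin(u3) * np.sin(u2) + np.sin(u1) * np.cos(u2)),
    ])
```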
In this work, this model is used to obtain optimal control for the spatial motion of the quadcopter.

4.2. The Optimal Control Problem for Spatial Motion of Quadcopter

The model (35) of the control object is given. Here, $x$ is a state space vector, $x \in \mathbb{R}^6$, and $u$ is a control vector, $u \in U \subseteq \mathbb{R}^4$, where $U$ is a compact set that defines restrictions on the values of the control vector components:
$$u_1^- = -\pi/12 \le u_1 \le \pi/12 = u_1^+, \quad u_2^- = -\pi \le u_2 \le \pi = u_2^+,$$
$$u_3^- = -\pi/12 \le u_3 \le \pi/12 = u_3^+, \quad u_4^- = 0 \le u_4 \le 12 = u_4^+. \quad (36)$$
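Enforcing the box constraints (36) on a raw control vector, as is done for the synthesized controller in (39) below, can be sketched as follows; the helper name is an assumption.

```python
import numpy as np

U_LOW = np.array([-np.pi / 12, -np.pi, -np.pi / 12, 0.0])   # lower bounds u_i^-
U_HIGH = np.array([np.pi / 12, np.pi, np.pi / 12, 12.0])    # upper bounds u_i^+

def saturate(u_hat):
    """Clip each control component to [u_i^-, u_i^+]."""
    return np.clip(u_hat, U_LOW, U_HIGH)
```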
According to the principle of synthesized control, initially the control synthesis problem (1)–(9) is solved. The model (35) is used as the model of the control object. To construct the set of initial states (4), the following vector of deviations is used:
$$\Delta^0 = [2 \ 2 \ 2 \ 0 \ 0 \ 0]^T. \quad (37)$$
In this problem, the initial state and the terminal state are equal:
$$x^0 = x^f = [0 \ 5 \ 0 \ 0 \ 0 \ 0]^T. \quad (38)$$
For calculation of the quality criterion (7), the following parameters are used: $t^+ = 2$, $\varepsilon_0 = 0.1$, $p = 2$.
To solve the control synthesis problem, the network operator method [23] was used. NOP found the following solution:
$$u_i = \begin{cases} u_i^+, & \text{if } \hat{u}_i > u_i^+, \\ u_i^-, & \text{if } \hat{u}_i < u_i^-, \\ \hat{u}_i, & \text{otherwise}, \end{cases} \quad i = 1, \ldots, m = 4, \quad (39)$$
where
$$\hat{u}_1 = \mu(C),$$
$$\hat{u}_2 = \hat{u}_1 - \hat{u}_1^3,$$
$$\hat{u}_3 = \hat{u}_2 + \rho_{19}(W + \mu(C)) + \rho_{17}(A),$$
$$\hat{u}_4 = \hat{u}_3 + \ln(|\hat{u}_2|) + \operatorname{sgn}(W + \mu(C))\sqrt{|W + \mu(C)|} + \rho_{19}(W) + \arctan(H) + \operatorname{sgn}(F) + \arctan(E) + \exp(q_2(x_2^f - x_2)) + q_1,$$
$$C = q_6(x_6^f - x_6) + q_3(x_3^f - x_3), \quad W = V + \tanh(G) + \exp(D),$$
$$A = q_1(x_1^f - x_1) + q_4(x_4^f - x_4), \quad H = G + \tanh(F) + \rho_{18}(B),$$
$$F = E + C + \arctan(D) - B, \quad E = D + \operatorname{sgn}(x_5^f - x_5) + (x_2^f - x_2)^3,$$
$$V = \exp(H) + \cos(q_6(x_6^f - x_6)) + \operatorname{sgn}(D)\sqrt{|D|}, \quad G = F + E^3 + \sin(A),$$
$$B = \sin(q_6(x_6^f - x_6)) + q_5(x_5^f - x_5) + q_2(x_2^f - x_2) + \cos(q_1) + \vartheta(x_2^f - x_2),$$
$$D = \rho_{17}(C) + B^3 + A + \vartheta(q_5(x_5^f - x_5)) + (x_5^f - x_5)^2,$$
$$\mu(z) = \begin{cases} z, & \text{if } |z| < 1, \\ \operatorname{sgn}(z), & \text{otherwise}, \end{cases} \quad \rho_{17}(z) = \operatorname{sgn}(z)\ln(|z| + 1),$$
$$\rho_{18}(z) = \operatorname{sgn}(z)(\exp(|z|) - 1), \quad \rho_{19}(z) = \operatorname{sgn}(z)\exp(|z|),$$
$$q_1 = 7.26709, \ q_2 = 11.46143, \ q_3 = 12.77026, \ q_4 = 3.20630, \ q_5 = 8.38501, \ q_6 = 5.56250.$$
In the second stage, the optimal control problem is considered. In the problem, the mathematical model (35) is given. The initial state coincides with the terminal state (38).
It is necessary to find the control in the form of points in the state space (14). For synthesized control, the following quality criterion is to be minimized:
$$J_3 = t_f + p_1\|x^f - x(t_f)\| + p_2\sum_{i=1}^{N}\int_0^{t_f}\vartheta(\varphi_i(x))\,dt + p_3\sum_{j=1}^{S}\vartheta\!\left(\min_t |\delta_j(x)| - \varepsilon\right) \to \min_{x^{*}}, \quad (44)$$
where $p_1 = 2$, $p_2 = 3$, $p_3 = 3$, $\vartheta(\cdot)$ is the Heaviside step function,
$$\varphi_i(x) = r_i - \sqrt{(x_{i,1} - x_1)^2 + (x_{i,3} - x_3)^2}, \quad (45)$$
$i = 1, \ldots, N = 4$, $r_1 = r_2 = r_3 = r_4 = 2$, $x_{1,1} = 5$, $x_{1,3} = 0$, $x_{2,1} = 10$, $x_{2,3} = 5$, $x_{3,1} = 5$, $x_{3,3} = 10$, $x_{4,1} = 0$, $x_{4,3} = 5$,
$$\delta_j(x) = \sqrt{(y_{j,1} - x_1)^2 + (y_{j,3} - x_3)^2}, \quad (46)$$
$j = 1, \ldots, S = 7$, $y_{1,1} = 5$, $y_{1,3} = 2$, $y_{2,1} = 10$, $y_{2,3} = 0$, $y_{3,1} = 12$, $y_{3,3} = 5$, $y_{4,1} = 10$, $y_{4,3} = 10$, $y_{5,1} = 5$, $y_{5,3} = 12$, $y_{6,1} = 0$, $y_{6,3} = 10$, $y_{7,1} = 2$, $y_{7,3} = 5$, $\varepsilon = 0.6$.
In the optimal control problem, the terminal time $t_f$ is determined by Equation (8) with $t^+ = 14.4$ and $\varepsilon_0 = 0.1$. It is necessary to find the coordinates of the control points for each time interval of length $\Delta t = 0.8$. The desired vector includes $3M$ parameters, where
$$M = \frac{t^+}{\Delta t} = \frac{14.4}{0.8} = 18, \quad (47)$$
that is, $q^{*} = [q_1 \ \cdots \ q_{54}]^T$. The hybrid evolutionary algorithm found the following optimal solution:
$$x^{*,1} = [4.83910 \ 1.14025 \ 5.22899]^T, \quad x^{*,2} = [11.07056 \ 6.79389 \ 2.48647]^T,$$
$$x^{*,3} = [9.19808 \ 1.54674 \ 15.87195]^T, \quad x^{*,4} = [0.12204 \ 0.12276 \ 1.82381]^T,$$
$$x^{*,5} = [4.08347 \ 2.93658 \ 5.89553]^T, \quad x^{*,6} = [16.72896 \ 2.18022 \ 2.27907]^T,$$
$$x^{*,7} = [1.18106 \ 2.56582 \ 14.41088]^T, \quad x^{*,8} = [8.67198 \ 5.78737 \ 2.90409]^T,$$
$$x^{*,9} = [8.59478 \ 2.73948 \ 11.33252]^T, \quad x^{*,10} = [1.25924 \ 1.97448 \ 1.42747]^T,$$
$$x^{*,11} = [2.45445 \ 7.42257 \ 0.38164]^T, \quad x^{*,12} = [8.68306 \ 0.78496 \ 15.41667]^T,$$
$$x^{*,13} = [0.60972 \ 7.02724 \ 7.66403]^T, \quad x^{*,14} = [0.59975 \ 0.39324 \ 1.31307]^T,$$
$$x^{*,15} = [2.39004 \ 7.95279 \ 3.02003]^T, \quad x^{*,16} = [2.52642 \ 6.69332 \ 9.17356]^T,$$
$$x^{*,17} = [0.95896 \ 4.42529 \ 0.36318]^T, \quad x^{*,18} = [0.01193 \ 5.02821 \ 15.40007]^T. \quad (48)$$
For the found solution (48), the value of the quality criterion is J 3 = 14.7010 .
In Figure 3, projections of the optimal trajectory on the horizontal plane { x 1 ; x 3 } are presented. Here, red circles are phase constraints described by (45), small black circles are passing areas described by (46) and small black boxes are control points (48).
For the new adaptive synthesized control proposed in this paper, the set of initial states is determined by Equation (3) with the deviation vector
$$\Delta^0 = [0.2 \ 0.2 \ 0.2 \ 0 \ 0 \ 0]^T. \quad (49)$$
It is necessary to find the same number of control points according to the following quality criterion:
$$J_4 = \sum_{k=1}^{K}\left( t_{f,k} + p_1\|x^f - x(t_{f,k}, x^{0,k})\| + p_2\sum_{i=1}^{N}\int_0^{t_{f,k}}\vartheta(\varphi_i(x))\,dt + p_3\sum_{j=1}^{S}\vartheta\!\left(\min_t |\delta_j(x)| - \varepsilon\right)\right) \to \min_{x^{*}}, \quad (50)$$
where $K = 7$ and $t_{f,k}$ is defined by Equation (8). The other parameters of the criterion are the same as for the criterion (44).
Again, the hybrid algorithm was applied and the following optimal solution has been found:
$$x^{*,1} = [17.46361 \ 1.14030 \ 8.00000]^T, \quad x^{*,2} = [11.07060 \ 6.79390 \ 2.48650]^T,$$
$$x^{*,3} = [9.19810 \ 2.05890 \ 15.87207]^T, \quad x^{*,4} = [0.27800 \ 2.51633 \ 1.99493]^T,$$
$$x^{*,5} = [3.98430 \ 2.27048 \ 13.40976]^T, \quad x^{*,6} = [17.18235 \ 0.26253 \ 2.19246]^T,$$
$$x^{*,7} = [3.56784 \ 3.44842 \ 14.94369]^T, \quad x^{*,8} = [4.53881 \ 2.20612 \ 2.99328]^T,$$
$$x^{*,9} = [9.06419 \ 2.49928 \ 11.30274]^T, \quad x^{*,10} = [0.16333 \ 1.88939 \ 0.75766]^T,$$
$$x^{*,11} = [2.17956 \ 6.92983 \ 1.06412]^T, \quad x^{*,12} = [10.24873 \ 0.51465 \ 5.82840]^T,$$
$$x^{*,13} = [1.12164 \ 2.84506 \ 7.93804]^T, \quad x^{*,14} = [0.10678 \ 3.23489 \ 1.55778]^T,$$
$$x^{*,15} = [2.54374 \ 0.99732 \ 2.82005]^T, \quad x^{*,16} = [7.41006 \ 6.49634 \ 12.02799]^T,$$
$$x^{*,17} = [0.67510 \ 4.03845 \ 0.28527]^T, \quad x^{*,18} = [0.18037 \ 4.62980 \ 6.89661]^T. \quad (51)$$
The value of the quality criterion (50) for the single initial state $x(0) = [0 \ 5 \ 0 \ 0 \ 0 \ 0]^T$ is $J_4 = 15.6090$. In Figure 4, the projection onto the horizontal plane $\{x_1; x_3\}$ of the optimal trajectory found by the adaptive synthesized control (51) is presented.
Since the initial state in the problem coincides with the terminal state, mandatory conditions for passing through certain areas were added to the quality criterion in order to force the control object to move along a closed path. For trajectories that satisfy the conditions of passing through the specified areas, the value of the quality criterion does not change when $p_3 = 0$. This can be seen in Figure 3 and Figure 4, as both trajectories pass through all the specified areas.
Let us check the sensitivity of the obtained solutions to random perturbations of the initial state:
$$x_i(0) = x_i^0 + \beta_0(2\xi(t) - 1), \quad i = 1, \ldots, n = 6, \quad (52)$$
where $\xi(t)$ is a random noise generator that returns a random number from the interval $(0; 1)$ at every call, and $\beta_0$ is a constant noise level.
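The perturbation rule (52) can be sketched as follows; beta0 is the noise level, and the use of the NumPy random generator is an assumption of the sketch.

```python
import numpy as np

def perturb_initial_state(x0, beta0, rng=None):
    """x_i(0) = x_i^0 + beta0 * (2*xi - 1), xi ~ U(0, 1), applied component-wise as in (52)."""
    rng = rng if rng is not None else np.random.default_rng()
    xi = rng.random(len(x0))
    return np.asarray(x0, dtype=float) + beta0 * (2.0 * xi - 1.0)
```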
In Figure 5 and Figure 6, the optimal (in blue) and eight perturbed trajectories (in black) for β 0 = 0.1 for the solutions obtained by synthesized (Figure 5) and adaptive synthesized (Figure 6) control are presented.
For comparison, the optimal control problem was also solved directly for the model (35) without the stabilization system (39); the control was sought in the form of a piecewise linear function, taking into account the restrictions (36):
$$u_i = \begin{cases} u_i^+, & \text{if } \tilde{u}_i > u_i^+, \\ u_i^-, & \text{if } \tilde{u}_i < u_i^-, \\ \tilde{u}_i, & \text{otherwise}, \end{cases} \quad i = 1, \ldots, m = 4, \quad (53)$$
where
$$\tilde{u}_i = (q_{i+jm} - q_{i+(j-1)m})\,\frac{t - (j-1)\Delta t}{\Delta t} + q_{i+(j-1)m}, \quad i = 1, 2, 3, 4, \quad (54)$$
$(j-1)\Delta t \le t < j\Delta t$, $j \in \{1, \ldots, M\}$, and $q_i$ is a component of the vector of desired parameters, $i = 1, \ldots, m(M+1)$,
$$q = [q_1 \ \cdots \ q_{m(M+1)}]^T. \quad (55)$$
In this work, we set the same time interval $\Delta t = 0.8$; therefore, from (15), $M = 18$, and it is necessary to find $m(M+1) = 4 \cdot 19 = 76$ parameters, $q = [q_1 \ \cdots \ q_{76}]^T$.
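A sketch of evaluating the piecewise linear control (54) from a flat parameter vector q of length m(M + 1); the helper name and the clamping of the interval index are assumptions made for illustration.

```python
import numpy as np

def piecewise_linear_control(t, q, m=4, dt_switch=0.8, M=18):
    """Evaluate the piecewise linear control (54) from the flat parameter vector q."""
    j = min(int(t // dt_switch) + 1, M)              # interval index j in {1, ..., M}
    tau = (t - (j - 1) * dt_switch) / dt_switch      # local time inside the interval, in [0, 1)
    u = np.empty(m)
    for i in range(1, m + 1):                        # 1-based channel index as in the text
        q_prev = q[i + (j - 1) * m - 1]              # q_{i+(j-1)m}
        q_next = q[i + j * m - 1]                    # q_{i+jm}
        u[i - 1] = (q_next - q_prev) * tau + q_prev
    return u
```

The result can then be clipped to the bounds (36) in the same way as in (53).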
To solve the optimal control problem, the same hybrid algorithm was used. As a result, the following solution was obtained:
q = [ 12.57045 4.58471 2.74617 2.90422 9.26325 0.10990 2.63222 18.36841 0.00816 17.35177 0.00165 4.28718 11.81492 1.88489 8.01206 15.87943 10.84894 0.06505 9.65475 19.51903 2.79860 4.06408 0.88992 10.50507 19.19030 17.90240 12.52431 19.00010 4.76513 11.97648 0.00010 8.85464 2.92334 0.14238 8.60919 7.83194 5.74904 8.35383 3.42757 12.87671 18.58717 15.43057 9.06137 12.55621 1.54628 1.47314 2.40706 8.67602 0.00091 11.91236 19.94063 17.08304 19.92640 1.33145 7.77258 15.54094 19.93278 17.37121 9.31290 5.03257 0.90297 5.22021 0.62653 4.21368 2.04314 0.53192 0.09353 14.25213 0.11587 9.05588 0.03270 11.23667 0.03826 16.78047 0.18220 19.81652 ] T .
In Figure 7, the projection of the optimal trajectory obtained by the direct method is presented.
Table 1 presents the values of the quality criterion (44) in ten experiments for perturbed solutions obtained by the synthesized control (column Synthesized), the adaptive synthesized control (column Adaptive) and the direct solution (column Direct). The last two rows of the table give the average values of the functionals and the standard deviations over all experiments.
As can be seen from Figure 5 and Figure 6 and Table 1, the solutions obtained by adaptive synthesized control are less sensitive to perturbations of initial states than the solutions obtained by simple synthesized control or, especially, by the direct approach.

5. Conclusions

A new method for solving the optimal control problem in the class of practically implementable functions, the adaptive synthesized control principle, is presented. Unlike synthesized control, the new method takes into account perturbations of the initial state when solving the optimal control problem. Therefore, the value of the quality criterion is calculated as the sum of the quality criterion values for different initial states. As a result of this approach, a solution is chosen such that, for the original initial state, it may not give the best value of the quality criterion, but in the case of disturbances of the initial state, the value of the quality criterion changes only slightly.

6. Discussion

The solution obtained by replacing the strictly optimal solution is less optimal but also less sensitive to disturbances. At first glance, this seems obvious and applicable to any method of solving the optimal control problem. However, this is not the case. A direct solution of the optimal control problem results in a control in the form of a time function and an open-loop control system. Perturbation of the initial conditions for such a system gives large variations in the quality criterion values, which cannot be reliably estimated by the average value due to the large variance.
The synthesized control method first makes the control object stable relative to some equilibrium point in the state space. This means that the perturbed and unperturbed trajectories at each point in time move towards a stable equilibrium point. The adaptive synthesized control method sets the positions of the equilibrium points so that all disturbed trajectories are located in some tube that, whenever possible, does not violate the phase constraints.
In the future, when using the adaptive synthesized control method, it will be necessary to assess the required size of the initial state region and to reduce the number of initial state points, since a large number of points significantly increases the time needed to find the optimal solution.

Author Contributions

Conceptualization, E.S.; methodology, A.D. and E.S.; software, A.D. and E.S.; validation, E.S.; formal analysis, A.D.; investigation, E.S.; writing—original draft preparation, A.D. and E.S.; writing—review and editing, E.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Science and Higher Education of the Russian Federation, project No. 075-15-2020-799.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Egerstedt, M. Motion Planning and Control of Mobile Robots. Ph.D. Thesis, Royal Institute of Technology, Stockholm, Sweden, 2000.
  2. Walsh, G.; Tilbury, D.; Sastry, S.; Murray, R.; Laumond, J.P. Stabilization of trajectories for systems with nonholonomic constraints. IEEE Trans. Autom. Control 1994, 39, 216–222.
  3. Samir, A.; Hammad, A.; Hafez, A.; Mansour, H. Quadcopter Trajectory Tracking Control using State-Feedback Control with Integral Action. Int. J. Comput. Appl. 2017, 168, 1–7.
  4. Allagui, N.Y.; Abid, D.B.; Derbel, N. Autonomous navigation of mobile robot with combined fractional order PI and fuzzy logic controllers. In Proceedings of the 2019 16th International Multi-Conference on Systems, Signals & Devices (SSD), Istanbul, Turkey, 21–24 March 2019; pp. 78–83.
  5. Chen, B.; Cao, Y.; Feng, Y. Research on Trajectory Tracking Control of Non-holonomic Wheeled Robot Using Backstepping Adaptive PI Controller. In Proceedings of the 2022 7th Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), Tianjin, China, 1–3 July 2022; pp. 7–12.
  6. Karnani, C.; Raza, S.; Asif, A.; Ilyas, M. Adaptive Control Algorithm for Trajectory Tracking of Underactuated Unmanned Surface Vehicle (UUSV). J. Robot. 2023, 2023, 4820479.
  7. Nguyen, A.T.; Nguyen, X.-M.; Hong, S.-K. Quadcopter Adaptive Trajectory Tracking Control: A New Approach via Backstepping Technique. Appl. Sci. 2019, 9, 3873.
  8. Lee, E.B.; Marcus, L. Foundations of Optimal Control Theory; Robert E. Krieger Publishing Company: Malabar, FL, USA, 1967; 576p.
  9. Pontryagin, L.S.; Boltyanskii, V.G.; Gamkrelidze, R.V.; Mishchenko, E.F. The Mathematical Theory of Optimal Processes. L. S. Pontryagin, Selected Works; Gordon and Breach Science Publishers: New York, NY, USA; London, UK; Paris, France; Montreux, Switzerland; Tokyo, Japan, 1985; Volume 4, 360p.
  10. Shmalko, E.; Diveev, A. Extended Statement of the Optimal Control Problem and Machine Learning Approach to Its Solution. Math. Probl. Eng. 2022, 2022, 1932520.
  11. Diveev, A.; Shmalko, E.Y.; Serebrenny, V.V.; Zentay, P. Fundamentals of synthesized optimal control. Mathematics 2020, 9, 21.
  12. Shmalko, E.Y. Feasibility of Synthesized Optimal Control Approach on Model of Robotic System with Uncertainties. In Electromechanics and Robotics; Smart Innovation, Systems and Technologies; Ronzhin, A., Shishlakov, V., Eds.; Springer: Singapore, 2022; Volume 232.
  13. Shmalko, E. Computational Approach to Optimal Control in Applied Robotics. In Frontiers in Robotics and Electromechanics; Smart Innovation, Systems and Technologies; Ronzhin, A., Pshikhopov, V., Eds.; Springer: Singapore, 2023; Volume 329.
  14. Diveev, A.; Shmalko, E.Y. Stability of the Optimal Control Problem Solution. In Proceedings of the 8th International Conference on Control, Decision and Information Technologies (CoDIT), Istanbul, Turkey, 17–20 May 2022; pp. 33–38.
  15. Simon, J.D.; Mitter, S.K. A theory of modal control. Inf. Control 1968, 13, 316–353.
  16. Jouffroy, J.; Lottin, J. Integrator backstepping using contraction theory: A brief methodological note. In Proceedings of the 15th IFAC World Congress, Barcelona, Spain, 21–26 July 2002.
  17. Zhou, H.; Liu, Z. Vehicle Yaw Stability-Control System Design Based on Sliding Mode and Backstepping Control Approach. IEEE Trans. Veh. Technol. 2010, 59, 3674–3678.
  18. Febsya, M.R.; Ardhi, R.; Widyotriatmo, A.; Nazaruddin, Y.Y. Design Control of Forward Motion of an Autonomous Truck-Trailer using Lyapunov Stability Approach. In Proceedings of the 2019 6th International Conference on Instrumentation, Control, and Automation (ICA), Bandung, Indonesia, 31 July–2 August 2019; pp. 65–70.
  19. Zihao, S.; Bin, W.; Ting, Z. Trajectory Tracking Control of a Spherical Robot Based on Adaptive PID Algorithm. In Proceedings of the 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 5171–5175.
  20. Liu, J.; Song, X.; Gao, S.; Chen, C.; Liu, K.; Li, T. Research on Horizontal Path Tracking Control of a Biomimetic Robotic Fish. In Proceedings of the 2022 International Conference on Mechanical and Electronics Engineering (ICMEE), Xi’an, China, 21–23 October 2022; pp. 100–105.
  21. Duriez, T.; Brunton, S.L.; Noack, B.R. Machine Learning Control–Taming Nonlinear Dynamics and Turbulence; Springer International Publishing: Cham, Switzerland, 2017.
  22. Awange, J.L.; Paláncz, B.; Lewis, R.H.; Völgyesi, L. Symbolic Regression. In Mathematical Geosciences; Springer: Cham, Switzerland, 2018.
  23. Diveev, A.I.; Shmalko, E.Y. Machine Learning Control by Symbolic Regression; Springer: Cham, Switzerland, 2021; 155p.
  24. Diveev, A.; Shmalko, E. Machine Learning Feedback Control Approach Based on Symbolic Regression for Robotic Systems. Mathematics 2022, 10, 4100.
  25. Davis, L. Handbook of Genetic Algorithms; Van Nostrand Reinhold: New York, NY, USA, 1991.
  26. Eberhardt, R.C.; Kennedy, J.A. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 27 November–1 December 1995; pp. 1942–1948.
  27. Eltamaly, A. A Novel Strategy for Optimal PSO Control Parameters Determination for PV Energy Systems. Sustainability 2021, 13, 1008.
  28. Salehpour, E.; Vahidi, J.; Hosseinzadeh, H. Solving optimal control problems by PSO-SVM. Comput. Methods Differ. Equ. 2018, 6, 312–325.
  29. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
Figure 1. NOP-graphs for the mathematical expressions (21): (a) y1, (b) y2, (c) y3, (d) y4.
Figure 2. Inertial coordinate system for the quadcopter.
Figure 3. Optimal trajectory for synthesized control.
Figure 4. Optimal trajectory for adaptive synthesized control.
Figure 5. Optimal and eight disturbed trajectories of synthesized control.
Figure 6. Optimal and eight disturbed trajectories of adaptive synthesized control.
Figure 7. Optimal trajectory of direct control.
Table 1. Sensitivity of the solutions to perturbations of the initial states.

No | Synthesized | Adaptive | Direct
1 | 14.7651 | 15.4892 | 19.2082
2 | 20.7377 | 15.4829 | 19.8854
3 | 15.2888 | 15.6947 | 16.7706
4 | 16.9743 | 15.4935 | 16.2334
5 | 18.6159 | 16.0397 | 19.2815
6 | 19.5227 | 15.7950 | 19.3866
7 | 20.0937 | 15.4178 | 16.8263
8 | 17.5416 | 16.1424 | 23.3437
9 | 20.1225 | 17.0695 | 19.6251
10 | 19.9257 | 15.3893 | 20.8163
Av | 18.3588 | 15.8014 | 19.1377
SD | 2.1234 | 0.5167 | 2.1285