**1. Introduction**

Object control, in the classical mathematical sense, means qualitatively changing the right-hand sides of the differential equations that describe the mathematical model of the control object through the control vector included in them. Thus, the problem of optimal control [1] consists in finding a control function, as a function of time, that makes the required changes in the right-hand sides of the model of the control object so that, for given initial conditions, the partial solution of the system of differential equations achieves the control goal with the optimal value of the quality criterion.

**Citation:** Diveev, A.; Shmalko, E.; Serebrenny, V.; Zentay, P. Fundamentals of Synthesized Optimal Control. *Mathematics* **2021**, *9*, 21. https://dx.doi.org/10.3390/math9010021

Received: 3 November 2020. Accepted: 17 December 2020. Published: 23 December 2020.

**Copyright:** © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

There are two main directions for solving the optimal control problem: direct and indirect approaches. The indirect approach, based on Pontryagin's maximum principle [2–4], solves the optimal control problem by formulating it as a boundary-value problem, in which it is necessary to find the initial conditions of a system of differential equations for the conjugate variables. Its optimal solution is highly accurate but very sensitive to the formulation of the additional conditions that the control must satisfy, along with ensuring the maximum of the Hamiltonian; such conditions are generally very difficult to set in practice for problems with complex phase constraints. The direct approach reduces the optimal control problem to a nonlinear programming problem [5–7], which provides the transition from an optimization problem in an infinite-dimensional space to an optimization problem in a finite-dimensional space, so it is more convenient and can be readily solved within a wider convergence region.

However, these works generally focus on nominal trajectory performance without considering possible uncertainties. In practice, the right-hand sides of the models objectively contain uncertainties of various kinds. As a rule, they are not taken into account, but their presence can lead to the loss of optimality of the obtained control.

There are also approaches in which the impact of uncertainties is taken into account beforehand, during the reference trajectory design [8,9]. For example, desensitized optimal control [10] modifies the nominal optimal trajectory so that it is less sensitive to uncertain parameters. This involves constructing an appropriate sensitivity cost which, when penalized, yields solutions that are relatively insensitive to parametric uncertainties.

In practice, however, such solutions do not guarantee stability and still require the construction of a feedback stabilization control system to eliminate errors [8].

In control theory, there is the field of robust control [11–14], which provides a certain stability margin for the control system. Robust control methods generally move the eigenvalues of the linearized system as far as possible to the left of the imaginary axis of the complex plane, so that uncertainties and perturbations do not make the system unstable. These methods, however, are not aimed at solving the optimal control problem.
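This eigenvalue-shifting idea can be sketched on a toy example (the plant, gains, and pole locations below are illustrative assumptions, not any specific robust-control algorithm): state feedback moves the closed-loop eigenvalues of a linearized system into the left half-plane.

```python
import numpy as np

# Linearized model x' = A x + B u (a double integrator) with
# state feedback u = -K x; hypothetical gains chosen to place both
# closed-loop poles at s = -5, well left of the imaginary axis.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[25.0, 10.0]])  # char. polynomial s^2 + 10 s + 25 = (s + 5)^2

eig = np.linalg.eigvals(A - B @ K)
print(eig)  # both eigenvalues near -5: a comfortable stability margin
```

The farther left the poles are placed, the larger the perturbation in the model coefficients the closed loop can absorb without losing stability.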

In practical control system design, the existing uncertainties of the mathematical model of the object, which subsequently cause a discrepancy between the real trajectory of the object and the obtained optimal one, are compensated by the synthesis of a feedback system stabilizing the motion relative to the optimal trajectory [8,15–17]. However, the construction of the stabilization system changes the mathematical model of the object, and the obtained control may no longer be optimal for the new model.

In this paper, uncertainties are included in the problem statement as an additive bounded function, and the optimal control problem is solved after first ensuring the stability of the plant in the state space. We call this approach the method of synthesized optimal control. A control function is found such that the system of differential equations always has a stable equilibrium point in the state space. The control system contains parameters that affect the position of this equilibrium point; consequently, the object is controlled by changing the position of the equilibrium point. It is shown that such control can provide the required value of the quality criterion, while the mathematical model of the control object turns out to be insensitive to the existing uncertainties and external disturbances. The approach of synthesized optimal control is new, but we have already obtained good experimental results [18,19] confirming the effectiveness of such control. In this paper, we provide mathematical formulations of the approach and give a theoretical substantiation of the efficiency of synthesized optimal control. A comparative numerical example is given of solving the problem of optimal control of two robots under phase constraints by the indirect method of synthesized optimal control and by the direct method based on piecewise linear approximation.
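The idea of steering an object by moving a stabilized equilibrium point can be sketched on a hypothetical double integrator (the gains, set-point schedule, and disturbance below are illustrative assumptions, not the paper's method):

```python
import numpy as np

def simulate(x0, q_star, t_final=10.0, dt=1e-3, disturbance=None):
    """Euler simulation of q'' = u + y(t) under the stabilizing feedback
    u = -k1 (q - q*) - k2 q'.  The set point q* fixes the equilibrium,
    so the object is steered by moving q*, not by shaping u directly."""
    k1, k2 = 4.0, 4.0              # critically damped: closed-loop poles at s = -2
    x = np.array(x0, dtype=float)  # state (q, q')
    for i in range(int(t_final / dt)):
        t = i * dt
        y = disturbance(t) if disturbance else 0.0
        u = -k1 * (x[0] - q_star(t)) - k2 * x[1]
        x = x + dt * np.array([x[1], u + y])
    return x

move = lambda t: 0.0 if t < 5.0 else 1.0  # shift the equilibrium at t = 5
x_nom = simulate([0.0, 0.0], move)
x_pert = simulate([0.0, 0.0], move,
                  disturbance=lambda t: 0.2 * np.sin(3.0 * t))
# both runs settle near the moved equilibrium q = 1 despite the disturbance
```

The point of the sketch is that the bounded additive disturbance changes the trajectory only slightly, because every trajectory is attracted to the same (moving) equilibrium.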

#### **2. Problem Statement**

The mathematical model of the control object with uncertainty is given as

$$
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u}) + \mathbf{y}(t), \tag{1}
$$

where $\mathbf{x} \in \mathbb{R}^n$, $\mathbf{u} \in \mathrm{U} \subseteq \mathbb{R}^m$, $\mathrm{U}$ is a compact set, $m \le n$, $\mathbf{y}$ is an uncertainty function, $\mathbf{y}(t) \in \mathbb{R}^n$,

$$\mathbf{y}^- \le \mathbf{y}(t) \le \mathbf{y}^+ \tag{2}$$

$\mathbf{y}^-$, $\mathbf{y}^+$ are given constant vectors.

The initial conditions are set:

$$\mathbf{x}(0) = \mathbf{x}^0. \tag{3}$$

The terminal condition is set:

$$\mathbf{x}(t_f) = \mathbf{x}^f, \tag{4}$$

where the time $t_f$ of reaching the terminal condition (4) is not given but is bounded:

$$t_f \le t^+, \tag{5}$$

$t^+$ is a given positive value.

The quality functional is given:

$$J = \int_{0}^{t_f} f_0(\mathbf{x}(t), \mathbf{u}(t))dt + p_1 \|\mathbf{x}^f - \mathbf{x}(t_f)\| \to \min_{\mathbf{u}(\cdot) \in \mathcal{U}}, \tag{6}$$

where $p_1$ is a given positive value.
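As a minimal numerical sketch, functional (6) can be approximated on a sampled trajectory by quadrature plus the terminal penalty. The trajectory, $p_1$, and the integrand below are illustrative assumptions; for brevity, $f_0$ here depends only on the state.

```python
import numpy as np

def criterion_J(ts, traj, x_f, p1, f0):
    """Trapezoidal approximation of functional (6):
    J = integral of f0 along the trajectory + p1 * ||x^f - x(t_f)||."""
    vals = np.array([f0(x) for x in traj])
    integral = float(np.sum((vals[1:] + vals[:-1]) * np.diff(ts)) / 2.0)
    penalty = p1 * float(np.linalg.norm(np.asarray(x_f) - traj[-1]))
    return integral + penalty

ts = np.linspace(0.0, 2.0, 201)
traj = [np.array([t, 0.0]) for t in ts]  # toy trajectory x(t) = (t, 0)
J = criterion_J(ts, traj, x_f=[2.0, 0.0], p1=10.0, f0=lambda x: 1.0)
# with f0 = 1 the integral is just t_f = 2 and the terminal miss is zero
```

With $f_0 \equiv 1$ the functional reduces to the time-optimal criterion $J = t_f$, so the terminal penalty is what enforces hitting $\mathbf{x}^f$.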

It is necessary to find a control function

$$\mathbf{u} = \mathbf{h}(\mathbf{x}, t) \tag{7}$$

such that for any partial solution

$$\mathbf{x}(t, \mathbf{x}^0) \tag{8}$$

of the system

$$
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{h}(\mathbf{x}, t)) + \mathbf{y}(t) \tag{9}
$$

from the initial conditions (3) and for any uncertainty function satisfying (2), the value of the functional (6) satisfies the inequality

$$J(\mathbf{x}(t, \mathbf{x}^0), \mathbf{y}(t)) \le J(\mathbf{x}(t, \mathbf{x}^0), 0) + \Delta_y, \tag{10}$$

where $J(\mathbf{x}(t, \mathbf{x}^0), \mathbf{y}(t))$ is the value of functional (6) for the solution (8) with a perturbation satisfying (2), $J(\mathbf{x}(t, \mathbf{x}^0), 0)$ is the value of functional (6) for the same solution (8) without perturbations, $\mathbf{y}(\cdot) \equiv 0$, and $\Delta_y$ is a given positive value.
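Inequality (10) can be checked numerically on a hypothetical stabilized plant (the system, feedback gains, cost $f_0 = u^2$, and perturbation below are all illustrative assumptions): the perturbed value of the functional should stay within a small $\Delta_y$ of the unperturbed one.

```python
import numpy as np

def run(y_fun, t_f=6.0, dt=1e-3, p1=10.0):
    """Closed-loop double integrator q'' = u + y(t) with the stabilizing
    feedback u = -4 (q - 1) - 4 q'; returns functional (6) with f0 = u^2
    and terminal state x^f = (1, 0), via Euler integration."""
    x = np.array([0.0, 0.0])
    J = 0.0
    for i in range(int(t_f / dt)):
        u = -4.0 * (x[0] - 1.0) - 4.0 * x[1]
        J += u * u * dt  # integral term of (6)
        x = x + dt * np.array([x[1], u + y_fun(i * dt)])
    return J + p1 * float(np.linalg.norm(np.array([1.0, 0.0]) - x))

J_0 = run(lambda t: 0.0)                    # unperturbed run, y ≡ 0
J_y = run(lambda t: 0.1 * np.sin(5.0 * t))  # bounded perturbation |y| <= 0.1
# for this stable closed loop, J_y stays within a small Delta_y of J_0
```

Because the closed loop is stable, the bounded perturbation produces only a bounded trajectory error, which is what keeps the gap between the two functional values small.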

Among possible solutions of the form (7), we consider only those that possess the following property. Let $\mathbf{x}(t, \mathbf{x}^0)$ be some partial solution of the system (9) with $\mathbf{y}(t) \equiv 0$ and $J(0)$ be the value of criterion (6) for it. Let us denote

$$\tilde{\mathbf{x}} = \mathbf{x}(t, \mathbf{x}^0) + \tilde{\mathbf{z}}(t), \tag{11}$$

$$\tilde{\tilde{\mathbf{x}}} = \mathbf{x}(t, \mathbf{x}^0) + \tilde{\tilde{\mathbf{z}}}(t), \tag{12}$$

and

$$\tilde{\delta} = \max_{t \in [0; t_f]} \|\mathbf{x}(t, \mathbf{x}^0) - \tilde{\mathbf{x}}(t)\|, \tag{13}$$

$$\tilde{\tilde{\delta}} = \max_{t \in [0; t_f]} \|\mathbf{x}(t, \mathbf{x}^0) - \tilde{\tilde{\mathbf{x}}}(t)\|. \tag{14}$$

Then there exists $\tilde{\delta} > 0$ such that for all $\tilde{\tilde{\delta}} \le \tilde{\delta}$ the following condition is met:

$$\tilde{\tilde{\Delta}} \le \tilde{\Delta}, \tag{15}$$

where

$$\tilde{\Delta} = |J(\mathbf{x}(t, \mathbf{x}^0), 0) - J(\tilde{\mathbf{x}}(t), 0)|, \tag{16}$$

$$\tilde{\tilde{\Delta}} = |J(\mathbf{x}(t, \mathbf{x}^0), 0) - J(\tilde{\tilde{\mathbf{x}}}(t), 0)|. \tag{17}$$

Condition (15) is called the continuous dependence of the functional on perturbations. The goal is to look for solutions of the form (7) that satisfy condition (15).
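The continuous dependence (15) can be illustrated numerically on a toy scalar example (the reference solution, perturbation shape, and functional below are illustrative assumptions): shrinking the trajectory deviation $\delta$ shrinks the functional deviation $\Delta$.

```python
import numpy as np

ts = np.linspace(0.0, 2.0, 401)
x_ref = ts  # scalar reference solution x(t) = t

def J_of(traj):
    """Toy functional J = integral of x(t)^2 dt (trapezoidal rule)."""
    v = traj ** 2
    return float(np.sum((v[1:] + v[:-1]) * np.diff(ts)) / 2.0)

def deviation(eps):
    """Perturb the solution as in (11)-(12) and return (delta, Delta)
    as in (13)-(14) and (16)-(17)."""
    x_pert = x_ref + eps * np.sin(np.pi * ts)
    delta = float(np.max(np.abs(x_ref - x_pert)))
    Delta = abs(J_of(x_ref) - J_of(x_pert))
    return delta, Delta

d1, D1 = deviation(0.10)  # the "tilde" perturbation
d2, D2 = deviation(0.05)  # the "double tilde" one, with delta2 <= delta1
# condition (15): the smaller delta yields the smaller Delta
```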

