*Proceeding Paper* **Feedback Linearization Control of Nonlinear System †**

**Ivan Sergeevich Trenev \*,‡ and Daniil Dmitrievich Devyatkin ‡**

V.A. Trapeznikov Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya Street, 117997 Moscow, Russia


**Abstract:** In this work, a neural network controller is developed for a wide class of nonlinear systems, including dynamic systems in the Brunovsky canonical form and systems with skew-symmetry properties and bounded nonlinearities. As an example, the controller is applied to controlling the position of a magnet suspended above an electromagnet. The simulation was carried out in the Simulink environment.

**Keywords:** neural network; control; Matlab; Simulink

#### **1. Introduction**

Since feedback linearization relies on exact knowledge of the nonlinearities of the system, its applicability to the control of real systems is severely limited. To guarantee the existence of a closed-loop solution, the nonlinearities must satisfy certain conditions, and controllability of the system is an important prerequisite for feedback linearization methods [1]. In what follows, several adaptive schemes that allow for linear parametric uncertainties are presented, which relax the exact model-matching constraints. Because neural networks are good approximators of nonlinear functions, no linearity conditions need to be imposed on the system under consideration [2].

The feedback linearization algorithm is rooted in geometric methods. However, since it relies on exact knowledge of the system nonlinearities, its applicability to real systems is limited. To relax the exact model-matching restrictions, several adaptive schemes that allow for linear parametric uncertainties have been introduced [3]. Owing to their universal approximation property, neural networks (NNs) can be used to estimate the nonlinearities, so explicit system parameters are not required [4].

For the sake of simplicity, all components of the state vector are assumed to be measurable. If only some components are measurable, which corresponds to output feedback control, an additional dynamic neural network is required to estimate the unmeasured states.

#### **2. Brunovsky Canonical Form**

To begin, some definitions needed for the subsequent reasoning are given. Let *f*, *g* : R*<sup>n</sup>* → R be unknown smooth functions, *x* = (*x*<sub>1</sub>, *x*<sub>2</sub>, ..., *x<sub>n</sub>*)*<sup>T</sup>* ∈ R*<sup>n</sup>* be the state vector, *u* be the control, and *y* be the output. The Brunovsky canonical form defines a special class of continuous-time nonlinear dynamics:

**Citation:** Trenev, I.S.; Devyatkin, D.D. Feedback Linearization Control of Nonlinear System. *Eng. Proc.* **2023**, *33*, 36. https://doi.org/10.3390/ engproc2023033036

Academic Editors: Askhat Diveev, Ivan Zelinka, Arutun Avetisyan and Alexander Ilin

Published: 20 June 2023


$$\begin{cases} \dot{x}_1 = x_2, \\ \dot{x}_2 = x_3, \\ \vdots \\ \dot{x}_n = f(\mathbf{x}) + g(\mathbf{x})u, \\ y = h(\mathbf{x}), \end{cases} \tag{1}$$

As shown in Figure 1, each integrator stores information. Each of them requires an initial condition [5].

**Figure 1.** Continuous-time SISO (one input, one output) Brunovsky form.

Consider a SISO state-feedback linearizable system with an unknown disturbance *d*. This system has a state-space representation in the Brunovsky canonical form (1). The following assumption is then made [6].
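As a concrete sketch, the chain-of-integrators structure in (1) can be written as a state-derivative function. Python is used here rather than the paper's Matlab/Simulink, and the nonlinearities `f` and `g` below are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

def brunovsky_rhs(x, u, f, g):
    """Right-hand side of the Brunovsky canonical form (1):
    x_dot[i] = x[i+1] for i < n-1, and x_dot[n-1] = f(x) + g(x)*u."""
    x = np.asarray(x, dtype=float)
    x_dot = np.empty_like(x)
    x_dot[:-1] = x[1:]           # chain of integrators
    x_dot[-1] = f(x) + g(x) * u  # nonlinearity enters only the last state
    return x_dot

# Hypothetical nonlinearities, chosen only to make the sketch runnable
f = lambda x: -np.sin(x[0])
g = lambda x: 1.0 + 0.1 * x[0] ** 2

x_dot = brunovsky_rhs([0.0, 1.0, 2.0], u=0.5, f=f, g=g)  # → [1.0, 2.0, 0.5]
```

Each integrator in Figure 1 corresponds to one component of `x_dot` being integrated, with its own initial condition.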

**Proposition 1.** *(Bounds on disturbance and function g*(*x*)*)*


#### **3. Tracking Controller and Error Dynamics**

To begin, let us formulate the tracking goal: for a given desired output *yd*(*t*), find a control *u*(*t*) such that the output *y*(*t*) = *x*1(*t*) (i.e., *h*(*x*) = *x*1) follows the desired trajectory with acceptable accuracy, subject to restrictions on the state and control vector components. Feedback linearization is used here to track the output. Let us introduce some assumptions to design the tracking controller. The desired trajectory vector is [7]:

$$\mathbf{x}_d(t) = \begin{pmatrix} y_d & \dot{y}_d & \dots & y_d^{(n-1)} \end{pmatrix}^T. \tag{2}$$

For the state tracking error *e* = *x* − *xd*, define the filtered tracking error and its derivative as follows:

$$
r = \Lambda^T e, \quad \dot{r} = f(\mathbf{x}) + g(\mathbf{x})u + d + Y_d, \tag{3}
$$

where $Y_d = -x_d^{(n)} + \sum_{i=1}^{n-1} \lambda_i e_{i+1} = -x_d^{(n)} + \bar{\Lambda}^T e$ is a known signal, $\Lambda = (\lambda_1, \lambda_2, \dots, \lambda_{n-1}, 1)^T$ is a suitably chosen vector of coefficients, and $\bar{\Lambda} = (\lambda_1, \lambda_2, \dots, \lambda_{n-1})^T$.
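The filtered tracking error $r = \Lambda^T e$ reduces the vector tracking problem to a scalar one. A minimal numerical sketch (Python; the coefficient values are hypothetical, chosen only for illustration):

```python
import numpy as np

def filtered_error(x, x_d, lam):
    """Filtered tracking error r = Lambda^T e, as in (3), with
    Lambda = (lam_1, ..., lam_{n-1}, 1)^T and e = x - x_d."""
    e = np.asarray(x, dtype=float) - np.asarray(x_d, dtype=float)
    Lambda = np.append(lam, 1.0)  # last coefficient is fixed to 1
    return Lambda @ e

# n = 3 example: lam chosen so that s^2 + lam_2*s + lam_1 is Hurwitz
r = filtered_error(x=[1.2, 0.9, 0.05], x_d=[1.0, 1.0, 0.0], lam=[4.0, 4.0])
# r = 4*0.2 + 4*(-0.1) + 1*0.05 = 0.45
```

With a Hurwitz choice of coefficients, driving *r* to zero drives the whole error vector *e* to zero.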

For both cases, when *g*(*x*) is known or unknown, define the control action in the following form:

$$\begin{aligned} u_{\text{exact}} &= \frac{1}{g(\mathbf{x})}(-f(\mathbf{x}) - K_v r - Y_d), \\ u_c &= \frac{1}{\hat{g}(\hat{\Theta}_g, \mathbf{x})}\Big(-\hat{f}(\hat{\Theta}_f, \mathbf{x}) + v\Big), \quad v = -K_v r - Y_d, \end{aligned} \tag{4}$$

where the estimates $\hat{g}(\hat{\Theta}_g, \mathbf{x})$ and $\hat{f}(\hat{\Theta}_f, \mathbf{x})$ are to be designed using two neural networks with weight matrices $\hat{\Theta}_g$ and $\hat{\Theta}_f$.
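The control law (4) can be sketched as follows (Python; the clipping of the estimate $\hat{g}$ away from zero is an added safeguard assumed here to keep the control bounded, not a detail taken from the paper):

```python
import numpy as np

def control(f_hat, g_hat, r, Y_d, K_v, g_min=1e-3):
    """Approximate feedback-linearizing control u_c from (4):
    u_c = (1/g_hat) * (-f_hat + v),  with v = -K_v*r - Y_d.
    g_hat is kept away from zero to avoid division blow-up
    (an assumed safeguard, not part of the paper's derivation)."""
    v = -K_v * r - Y_d
    g_safe = np.sign(g_hat) * max(abs(g_hat), g_min)
    return (-f_hat + v) / g_safe

# Hypothetical numbers for illustration only
u = control(f_hat=0.3, g_hat=1.5, r=0.45, Y_d=-0.2, K_v=10.0)
# v = -10*0.45 + 0.2 = -4.3;  u = (-0.3 - 4.3)/1.5
```

If $\hat{f}$ and $\hat{g}$ were exact, this would cancel the nonlinearities and leave the stable filtered-error dynamics $\dot{r} = -K_v r + d$.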

#### **4. Neural Networks for Approximating Functions**

Let neural networks be used to approximate the unknown continuous functions. Then, on a compact set of R*<sup>n</sup>*, there exist ideal target weights *Wi* and *Vi* (*i* = *f* for the case of a known *g*(*x*), and *i* ∈ {*f*, *g*} for the case of an unknown *g*(*x*)):

$$i(\mathbf{x}) = W_i^T \sigma(V_i^T \mathbf{x}) + \varepsilon_i,$$

where the approximation errors are bounded, $|\varepsilon_i| < \varepsilon_{iN}$. Let the ideal weights be unknown and possibly non-unique, but satisfying the following assumption [8].

**Proposition 2.** *(Restriction on the neural network target weights) The ideal neural network weights for the functions f*(*x*) *and g*(*x*) *are bounded on any compact subset of* R*<sup>n</sup>:*

$$\left\|\Theta_i\right\| \leqslant \Theta_{im}, \text{ where } \Theta_i = \begin{pmatrix} V_i & 0 \\ 0 & W_i \end{pmatrix} \text{ are the weight matrices for } i, \ i \in \{f, g\}.$$

Then, using two neural networks, estimates of the nonlinear functions *f*(*x*) and *g*(*x*) are obtained in the following form:

$$
\hat{i}(\mathbf{x}) = \hat{W}_i^T \sigma(\hat{V}_i^T \mathbf{x}), \text{ where } \hat{\Theta}_i = \begin{pmatrix} \hat{V}_i & 0 \\ 0 & \hat{W}_i \end{pmatrix} \text{ are the current values of the weight matrices.}
$$
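The two-layer estimate $\hat{i}(\mathbf{x}) = \hat{W}_i^T \sigma(\hat{V}_i^T \mathbf{x})$ amounts to a single forward pass. A Python sketch with a sigmoid activation and randomly initialized weights (the layer sizes are arbitrary; in the actual controller the weights are updated by adaptive tuning laws, which are omitted here):

```python
import numpy as np

def sigma(z):
    """Sigmoid activation, applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))

def nn_estimate(x, V_hat, W_hat):
    """Two-layer estimate i_hat(x) = W_hat^T sigma(V_hat^T x)."""
    return W_hat.T @ sigma(V_hat.T @ np.asarray(x, dtype=float))

rng = np.random.default_rng(0)
V_hat = rng.standard_normal((3, 8))  # input-to-hidden weights (n=3, 8 neurons)
W_hat = rng.standard_normal((8, 1))  # hidden-to-output weights
f_hat = nn_estimate([0.1, -0.2, 0.3], V_hat, W_hat)  # scalar estimate of f(x)
```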

Using a Taylor series expansion, the functional estimation errors are as follows:

$$\tilde{i}(\mathbf{x}) = i(\mathbf{x}) - \hat{i}(\mathbf{x}) = \tilde{W}_i^T\big(\hat{\sigma}_i - \hat{\sigma}'_i \hat{V}_i^T \mathbf{x}\big) + \hat{W}_i^T \hat{\sigma}'_i \tilde{V}_i^T \mathbf{x} + w_i,$$

where, for some constants $C_j$, it holds that $|w_i(t)| \leqslant C_0 + C_1\|\tilde{\Theta}_i\|_F + C_2\|r\|\cdot\|\tilde{\Theta}_i\|_F$. Estimates based on the boundedness of the standard activation functions and their derivatives take the following form [7].

### **Lemma 1.** *(Bounds on state vector and approximated function)*


#### **5. Controller Structure**

Here, a tracking controller is designed for both cases: when *g*(*x*) is known and when it is unknown. The main goal of this section is to construct a neural network that approximates the unknown function *f*(*x*) (and, in the second case, the function *g*(*x*)) [9].

*5.1. System with a Known Function g*(*x*)

Figure 2 shows the resulting controller, described in Table 1.


**Table 1.** The resulting controller with known function *g*(*x*).

**Figure 2.** Neural network controller with known function *g*(*x*).

*5.2. System with an Unknown Function g*(*x*)

Figure 3 shows the resulting controller, described in Table 2.

**Table 2.** The resulting controller with unknown function *g*(*x*).


**Figure 3.** Neural network controller with unknown function *g*(*x*).

#### **6. Modelling**

Consider the system shown in Figure 4: a magnet suspended above an electromagnet. The magnet can move only in the vertical direction. The control objective is to regulate the position of the magnet. The equation of motion for the described system is

$$\frac{d^2y(t)}{dt^2} = -g + \frac{\alpha}{M}\frac{i^2(t)}{y(t)} - \frac{\beta}{M}\frac{dy(t)}{dt}, \tag{5}$$

where *g* is the standard gravitational acceleration, *i*(*t*) is the current flowing in the electromagnet, *y*(*t*) is the distance above the electromagnet, and *M* is the mass of the magnet. The parameter *β* is the coefficient of viscous friction, and *α* is the field strength constant.
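Equation (5) can be simulated directly with an off-the-shelf ODE solver. A Python/SciPy sketch (the parameter values and the equilibrium current below are hypothetical choices made only to have a runnable example; they are not the values used in the paper's Simulink model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters: mass, field constant, friction, gravity
M, alpha, beta, g0 = 0.1, 1e-3, 0.01, 9.81

def maglev_rhs(t, s, i_func):
    """Equation of motion (5): y'' = -g + (alpha/M)*i^2/y - (beta/M)*y'."""
    y, y_dot = s
    i = i_func(t)
    y_ddot = -g0 + (alpha / M) * i**2 / y - (beta / M) * y_dot
    return [y_dot, y_ddot]

# Constant current balancing gravity at y0 = 0.01 m: (alpha/M)*i^2/y0 = g
y0 = 0.01
i_eq = np.sqrt(M * g0 * y0 / alpha)
sol = solve_ivp(maglev_rhs, (0.0, 1.0), [y0, 0.0], args=(lambda t: i_eq,))
# Started at the equilibrium, the position stays near y0
```

A closed-loop simulation would replace the constant `i_eq` with the output of the neural network controller of Section 5.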

Figure 5 shows the results of simulating this system with the two neural networks in place (red: desired trajectory; blue: trajectory obtained with the neural network approximation). The simulation was carried out in the Matlab/Simulink environment using additional neural network toolboxes [10].

**Figure 4.** Electromagnet system.

**Figure 5.** Simulation result.

#### **7. Conclusions**

In this paper, we showed how to design neural network controllers for systems in the Brunovsky form, both for the case of a known input influence function and for the case of an unknown one. In the latter case, additional measures were required to keep the control bounded. Based on the simulation results presented in the last section, we can conclude that neural networks are universal approximators and can be used to model the behavior of a real object.

**Author Contributions:** These authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data sharing not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
