*Article* **Control of String Vibrations by Displacement of One End with the Other End Fixed, Given the Deflection Form at an Intermediate Moment of Time**

**Vanya Barseghyan <sup>1</sup> and Svetlana Solodusha <sup>2,\*</sup>**


**Abstract:** We consider a boundary control problem for the string vibration equation with given initial and final conditions and a given deflection form at an intermediate moment of time. The control is carried out by displacement of one end with the other end fixed. The problem is reduced to a control problem for a distributed action with zero boundary conditions. We propose a constructive approach to building the boundary control action using separation of variables and methods of the theory of control of finite-dimensional systems. The approach is illustrated with specific given functions. A computational experiment was carried out, with construction of the corresponding graphs and their comparative analysis, which confirm the obtained results.

**Keywords:** vibration control; boundary control; intermediate state control; separation of variables

#### **1. Introduction**

Mathematical modeling of various controlled physical and engineering processes associated with vibration systems leads to wave equations. Controlled vibration systems are widespread in various theoretical and applied fields of science. In practice, control problems often arise for both distributed and lumped systems, in particular, when forming a given (desired) form of motion that satisfies multipoint intermediate conditions. Multipoint boundary value problems of control and optimal control of dynamical systems, given both the classical boundary (initial and final) conditions and multipoint intermediate conditions, are of both applied and theoretical importance and therefore merit investigation. In the scientific literature, multipoint boundary value control problems are considered for systems described both by ordinary differential equations and by partial differential equations. Unlike control problems for systems described by ordinary differential equations, control problems for systems described by partial differential equations with multipoint intermediate conditions remain little studied.

Many researchers study problems of (optimal) control of vibrational processes. As a rule, both distributed actions and actions concentrated at the boundary are considered [1–19]. Modeling and control of dynamic systems is currently an active research area. Mathematical models of dynamic systems use both ordinary differential equations and partial differential equations with intermediate conditions. Studies of the above problems are the subject of research contributions such as [4–9,20,21], among others.

In production processes associated with the longitudinal movement of materials (for example, a paper web), undesirable transverse perturbations arise; for a vertical section, these are described by the wave equation of a longitudinally moving string [22]. As a result, problems of generating a desired oscillation arise, i.e., problems of oscillation control over a finite time. One possible approach to preventing the appearance of unwanted disturbances is the control of oscillations with given multipoint intermediate conditions. These conditions can be interpreted as a driving force.

**Citation:** Barseghyan, V.; Solodusha, S. Control of String Vibrations by Displacement of One End with the Other End Fixed, Given the Deflection Form at an Intermediate Moment of Time. *Axioms* **2022**, *11*, 157. https://doi.org/10.3390/axioms11040157

Academic Editor: Hans J. Haubold

Received: 21 January 2022; Accepted: 24 March 2022; Published: 28 March 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Control and optimal control problems for the string oscillation equation with given initial and final conditions and undivided values of string point velocities at intermediate times are considered in [5,6]. The present work is close to these articles.

This study solves the problem of boundary control of vibrations of a homogeneous string with given initial and final conditions, with a given form of deflection at an intermediate moment of time. Control is implemented by displacing the left end with the right end fixed. The problem is reduced to a distributed action control problem with zero boundary conditions. Using the method of separation of variables and methods of the theory of control of finite-dimensional systems for the first *n* harmonics of vibrations, we construct the required boundary control, under the action of which the deflection function of the string takes a given (or close to a given) value at an intermediate moment of time. In the paper, we formulate the corresponding statement and theorem for the first *n* harmonics. The results obtained for the first *n* harmonics are illustrated for *n* = 1 and *n* = 2. The presented study is located at the intersection of several scientific fields. We use terminology and approaches from the fields of control of systems with distributed parameters and control of finite-dimensional dynamic systems.

This paper is organized as follows. Section 2 contains the formulas necessary for the analytical construction of the solution. Further, in Section 3, using the method of separation of variables and methods of the theory of control of finite-dimensional systems, for the first *n* harmonics of vibrations, we construct the required boundary control and the corresponding string deflection function. The constructed analytical solution of the formulated problem is compactly presented in Sections 2 and 3, with the corresponding formulations of the obtained general results in the form of a statement and a theorem. Section 4 presents formulas for fixed *n* = 1 and *n* = 2, which are also used in Section 5. In Section 5, we carry out a computational experiment, build the corresponding graphs and perform a comparative analysis, which confirm the results of the study. The conclusion summarizes the main results.

#### **2. Problem Statement and Its Reduction to a Problem with Zero Boundary Conditions**

Consider the small transverse vibrations of a taut homogeneous string described by the function *Q*(*x*, *t*), 0 ≤ *x* ≤ *l*, 0 ≤ *t* ≤ *T*, which obeys the wave equation

$$\frac{\partial^2 Q}{\partial t^2} = a^2 \frac{\partial^2 Q}{\partial x^2}, \quad 0 < x < l, \quad t > 0,\tag{1}$$

subject to boundary conditions

$$Q(0, t) = u(t), \quad Q(l, t) = 0, \ 0 \le t \le T. \tag{2}$$

In Equation (1), $a^2 = \frac{T_0}{\rho}$, where $T_0$ is the string tension and $\rho$ is the density of the homogeneous string; the function $u(t)$ is an unknown boundary control.

Let the initial and final conditions be given as follows:

$$Q(\mathbf{x},0) = \varphi\_0(\mathbf{x}), \quad \left. \frac{\partial Q}{\partial t} \right|\_{t=0} = \psi\_0(\mathbf{x}), \quad 0 \le \mathbf{x} \le l,\tag{3}$$

$$Q(\mathbf{x}, T) = \varphi\_T(\mathbf{x}) = \varphi\_2(\mathbf{x}), \quad \left. \frac{\partial Q}{\partial t} \right|\_{t=T} = \psi\_T(\mathbf{x}) = \psi\_2(\mathbf{x}), \quad 0 \le \mathbf{x} \le l,\tag{4}$$

where *T* is some given moment of time. It is assumed that the function $Q(x,t) \in C^2(\Omega_T)$, where $\Omega_T = \{(x,t) : x \in [0,l],\ t \in [0,T]\}$.

Let at some moment of time *t*<sup>1</sup> (0 < *t*<sup>1</sup> < *T*) an intermediate state of points (deflection) of the string be given:

$$Q(x, t_1) = \varphi_1(x), \quad 0 \le x \le l. \tag{5}$$

Let us state the following problem of boundary control of string vibrations.

Among the boundary controls $u(t)$, $0 \le t \le T$, in (2), it is required to find one under which the vibrating motion of the system (1) passes from the given initial state (3) to the final state (4), taking the given form of deflection (5) at the intermediate moment of time.

Let us assume that the functions $\varphi_i(x)$, $i = \overline{0,2}$, belong to the space $C^2[0,l]$ and the functions $\psi_0(x)$ and $\psi_T(x)$ belong to the space $C^1[0,l]$. A function $u(t) \in C^2[0,T]$ is called an admissible control. It is also assumed that all functions are such that the consistency conditions below are satisfied.

Since the boundary conditions (2) are not homogeneous, we reduce the solution to the problem stated to a control problem with zero boundary conditions. Hence, following [23], we find the solution to the Equation (1) in the form of the sum

$$Q(\mathbf{x},t) = V(\mathbf{x},t) + W(\mathbf{x},t),\tag{6}$$

where *V*(*x*, *t*) is an unknown function to be determined, with homogeneous boundary conditions

$$V(0,t) = V(l,t) = 0,\tag{7}$$

and the function *W*(*x*, *t*) is the solution to the Equation (1) with non-homogeneous boundary conditions

$$W(0, t) = u(t), \quad W(l, t) = 0.$$

The function *W*(*x*, *t*) has the form

$$W(x,t) = \left(1 - \frac{x}{l}\right)u(t). \tag{8}$$

Substituting (6) into (1) and considering (8), we obtain the following equation for the determination of the function *V*(*x*, *t*):

$$\frac{\partial^2 V}{\partial t^2} = a^2 \frac{\partial^2 V}{\partial x^2} + F(\mathbf{x}, t), \tag{9}$$

where

$$F(x, t) = \left(\frac{x}{l} - 1\right)u''(t). \tag{10}$$

The function *V*(*x*, *t*) by virtue of conditions (2)–(5) must satisfy the initial conditions

$$V(\mathbf{x},0) = \varphi\_0(\mathbf{x}) + \left(\frac{\mathbf{x}}{l} - 1\right)u(0), \quad \frac{\partial V}{\partial t}\bigg|\_{t=0} = \psi\_0(\mathbf{x}) + \left(\frac{\mathbf{x}}{l} - 1\right)u'(0),\tag{11}$$

the intermediate condition

$$V(x, t_1) = \varphi_1(x) + \left(\frac{x}{l} - 1\right)u(t_1) \tag{12}$$

and final conditions

$$V(x,T) = \varphi_T(x) + \left(\frac{x}{l} - 1\right)u(T), \quad \left.\frac{\partial V}{\partial t}\right|_{t=T} = \psi_T(x) + \left(\frac{x}{l} - 1\right)u'(T). \tag{13}$$

It follows from the condition (7) that

$$V(0, t_i) = V(l, t_i) = 0, \quad \left.\frac{\partial V(0,t)}{\partial t}\right|_{t=t_i} = \left.\frac{\partial V(l,t)}{\partial t}\right|_{t=t_i} = 0, \quad i = \overline{0,2}. \tag{14}$$

From the conditions (11), (12) and (13), given (14), we obtain the following consistency conditions:

$$
u(0) = \varphi_0(0), \ u'(0) = \psi_0(0), \ \varphi_0(l) = \psi_0(l) = 0,\tag{15}
$$

$$
u(t_1) = \varphi_1(0), \ \varphi_1(l) = 0,\tag{16}
$$

$$
u(T) = \varphi_T(0), \ u'(T) = \psi_T(0), \ \varphi_T(l) = \psi_T(l) = 0. \tag{17}
$$

Thus, taking into account the conditions (15)–(17), the conditions (11)–(13) are written as follows:

$$V(x,0) = \varphi_0(x) + \left(\frac{x}{l} - 1\right)\varphi_0(0), \quad \left.\frac{\partial V}{\partial t}\right|_{t=0} = \psi_0(x) + \left(\frac{x}{l} - 1\right)\psi_0(0),\tag{18}$$

$$V(x, t_1) = \varphi_1(x) + \left(\frac{x}{l} - 1\right)\varphi_1(0),\tag{19}$$

$$V(x, T) = \varphi_T(x) + \left(\frac{x}{l} - 1\right)\varphi_T(0), \quad \left.\frac{\partial V}{\partial t}\right|_{t=T} = \psi_T(x) + \left(\frac{x}{l} - 1\right)\psi_T(0).\tag{20}$$

Thus, the solution to the stated problem of boundary control of vibrations of a string with a given form of deflection at an intermediate moment of time is reduced to the problem of control of (9) with boundary conditions (7) and is stated as follows: to find such a control *u*(*t*), 0 ≤ *t* ≤ *T*, under which the vibratory motion (9) with boundary conditions (7) from the given initial state (18) through the intermediate state (19) passes to the final state (20).

#### **3. Problem Solution**

Given that the boundary conditions (7) are homogeneous and consistency conditions are satisfied, according to the Fourier series theory, we find the solution to the Equation (9) in the form

$$V(\mathbf{x}, t) = \sum\_{k=1}^{\infty} V\_k(t) \sin \frac{\pi k}{l} \mathbf{x}. \tag{21}$$

Let us represent the functions $F(x,t)$, $\varphi_i(x)$ ($i = \overline{0,2}$), $\psi_0(x)$ and $\psi_T(x)$ as Fourier series; substituting them together with $V(x,t)$ into Equations (9) and (10) and into the conditions (18)–(20), we obtain

$$
\ddot{V}\_k(t) + \lambda\_k^2 V\_k(t) = F\_k(t), \ \lambda\_k^2 = \left(\frac{a\pi k}{l}\right)^2, \ F\_k(t) = -\frac{2a}{\lambda\_k l} u''(t), \tag{22}
$$

$$V\_k(0) = \varphi\_k^{(0)} - \frac{2a}{\lambda\_k l} \varphi\_0(0), \ \dot{V}\_k(0) = \psi\_k^{(0)} - \frac{2a}{\lambda\_k l} \psi\_0(0), \tag{23}$$

$$
V_k(t_1) = \varphi_k^{(1)} - \frac{2a}{\lambda_k l}\,\varphi_1(0),
\tag{24}
$$

$$
V_k(T) = \varphi_k^{(T)} - \frac{2a}{\lambda_k l}\,\varphi_T(0), \ \dot{V}_k(T) = \psi_k^{(T)} - \frac{2a}{\lambda_k l}\,\psi_T(0), \tag{25}
$$

where $F_k(t)$, $\varphi_k^{(i)}$ ($i = \overline{0,2}$), $\psi_k^{(0)}$ and $\psi_k^{(T)}$ denote the Fourier coefficients of the functions $F(x,t)$, $\varphi_i(x)$ ($i = \overline{0,2}$), $\psi_0(x)$ and $\psi_T(x)$, respectively.
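Since $\lambda_k = a\pi k/l$, the factor $2a/(\lambda_k l)$ in (22) is simply $2/(\pi k)$, the magnitude of the $k$-th sine-series coefficient of $(x/l - 1)$. A small NumPy sketch (the value of $l$ is illustrative) confirms this:

```python
import numpy as np

# Composite trapezoidal rule (kept local to avoid version-specific integrators).
def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# The k-th sine-series coefficient of (x/l - 1) on [0, l]:
# (2/l) * integral_0^l (x/l - 1) sin(pi k x / l) dx = -2/(pi k) = -2a/(lambda_k l).
l = 2.0                                   # illustrative length
x = np.linspace(0.0, l, 200001)
for k in (1, 2, 3):
    coeff = (2.0 / l) * trap((x / l - 1.0) * np.sin(np.pi * k * x / l), x)
    assert abs(coeff + 2.0 / (np.pi * k)) < 1e-8
print("sine-series coefficient of (x/l - 1) confirmed for k = 1, 2, 3")
```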

The general solution to the Equation (22) with the initial conditions (23) is of the form

$$V\_k(t) = V\_k(0)\cos\lambda\_k t + \frac{1}{\lambda\_k}\dot{V}\_k(0)\sin\lambda\_k t + \frac{1}{\lambda\_k}\int\_0^t F\_k(\tau)\sin\lambda\_k(t-\tau)d\tau. \tag{26}$$
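The Duhamel term in (26) can be sanity-checked against a case with a known closed-form response. The sketch below (test frequencies are illustrative, not from the paper) compares the integral with the classical particular solution for a sinusoidal forcing with zero initial data:

```python
import numpy as np

def trap(y, x):                            # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# For the test forcing F(t) = sin(w t), w != lam, the Duhamel term of (26)
# must equal the classical zero-initial-data response:
#   (sin(w t) - (w/lam) sin(lam t)) / (lam^2 - w^2).
lam, w = 2.0, 1.3                          # illustrative frequencies
for t in (0.5, 1.0, 2.0, 3.0):
    tau = np.linspace(0.0, t, 20001)
    duhamel = trap(np.sin(w * tau) * np.sin(lam * (t - tau)), tau) / lam
    exact = (np.sin(w * t) - (w / lam) * np.sin(lam * t)) / (lam**2 - w**2)
    assert abs(duhamel - exact) < 1e-6
print("Duhamel term in (26) matches the closed-form response")
```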

Now, given the intermediate (24) and final (25) conditions and the consistency conditions (15)–(17), using the approaches given in [8,9], we obtain from (26) that the function *u*(*τ*) for each *k* must satisfy the following integral relation:

$$\int_{0}^{T} \overline{H}_{k}(\tau)u(\tau)\,d\tau = C_{k}(t_{1}, T), \quad k = 1, 2, \ldots,\tag{27}$$

$$\overline{H}_{k}(\tau) = \begin{pmatrix} \sin \lambda_{k}(T-\tau) \\ \cos \lambda_{k}(T-\tau) \\ h_{k}^{(1)}(\tau) \end{pmatrix}, \quad h_{k}^{(1)}(\tau) = \begin{cases} \sin \lambda_{k}(t_{1}-\tau), & 0 \le \tau \le t_{1}, \\ 0, & t_{1} < \tau \le T, \end{cases}$$

$$C_{k}(t_{1}, T) = \begin{pmatrix} C_{1k}(T) \\ C_{2k}(T) \\ C_{1k}(t_{1}) \end{pmatrix}, \tag{28}$$

$$C_{1k}(T) = \frac{1}{\lambda_k^2}\left[\frac{\lambda_k l}{2a}\widetilde{C}_{1k}(T) + X_{1k}\right], \quad \widetilde{C}_{1k}(T) = \lambda_k V_k(T) - \lambda_k V_k(0)\cos\lambda_k T - \dot{V}_k(0)\sin\lambda_k T,$$

$$C_{2k}(T) = \frac{1}{\lambda_k^2}\left[\frac{\lambda_k l}{2a}\widetilde{C}_{2k}(T) + X_{2k}\right], \quad \widetilde{C}_{2k}(T) = \dot{V}_k(T) + \lambda_k V_k(0)\sin\lambda_k T - \dot{V}_k(0)\cos\lambda_k T,\tag{29}$$

$$C_{1k}(t_1) = \frac{1}{\lambda_k^2}\left[\frac{\lambda_k l}{2a}\widetilde{C}_{1k}(t_1) + X_{1k}^{(1)}\right], \quad \widetilde{C}_{1k}(t_1) = \lambda_k V_k(t_1) - \lambda_k V_k(0)\cos\lambda_k t_1 - \dot{V}_k(0)\sin\lambda_k t_1,$$

$$X_{1k} = \lambda_k \varphi_T(0) - \psi_0(0)\sin\lambda_k T - \lambda_k \varphi_0(0)\cos\lambda_k T,$$

$$X_{2k} = \psi_T(0) - \psi_0(0)\cos\lambda_k T + \lambda_k \varphi_0(0)\sin\lambda_k T,\tag{30}$$

$$X_{1k}^{(1)} = \lambda_k \varphi_1(0) - \psi_0(0)\sin\lambda_k t_1 - \lambda_k \varphi_0(0)\cos\lambda_k t_1.$$

Thus, to find the function $u(\tau)$, $\tau \in [0, T]$, we obtain the infinite system of integral relations (27). In practice, the first *n* harmonics of vibrations are selected, and the problem of control synthesis is solved using methods of the theory of control of finite-dimensional systems [8,9,24].

For the first *n* harmonics, let us introduce the following block vector notations:

$$H\_n(\tau) = \begin{pmatrix} \overline{H}\_1(\tau) \\ \overline{H}\_2(\tau) \\ \vdots \\ \overline{H}\_n(\tau) \end{pmatrix}, \quad \eta\_n = \begin{pmatrix} \mathcal{C}\_1(t\_1, T) \\ \mathcal{C}\_2(t\_1, T) \\ \vdots \\ \mathcal{C}\_n(t\_1, T) \end{pmatrix}. \tag{31}$$

with the dimensions of $H_n(\tau)$ and $\eta_n$ both equal to $(3n \times 1)$. Consequently, for the first *n* harmonics, taking into account (31), from (27) we have

$$\int_{0}^{T} H_{n}(\tau)u_{n}(\tau)\,d\tau = \eta_{n} \tag{32}$$

(here and elsewhere, the subscript "*n*" means "for the first *n* harmonics").

The obtained relation (32) implies the validity of the following statement.

**Statement.** *For each n, the process described by equation (22) with conditions (23)–(25) is completely controllable if and only if, for any given vector η<sup>n</sup> (31), the control un*(*t*)*, t* ∈ [0, *T*]*, can be found, satisfying condition (32).*

For an arbitrary number of first harmonics, the boundary control action $u_n(t)$ satisfying the integral relation (32) has the form [8,9,24]:

$$
u_n(t) = H_n^T(t) S_n^{-1} \eta_n + f_n(t), \tag{33}
$$

where $H_n^T(t)$ is the transposed matrix and $f_n(t)$ is some vector function such that

$$\int\_{0}^{T} H\_{n}(t) f\_{n}(t) dt = 0, \ \mathcal{S}\_{n} = \int\_{0}^{T} H\_{n}(t) H\_{n}^{T}(t) dt. \tag{34}$$

Here, $H_n(t)H_n^T(t)$ is the outer product, $S_n$ is a known matrix of dimension $(3n \times 3n)$, and it is assumed that $\det S_n \neq 0$.

Thus, the following theorem is true.

**Theorem 1.** *When the initial data of the problem specified in* Section 2 *are matched and the complete controllability condition is fulfilled, problem (1)–(5) has a solution determined for each harmonic of motion by the formula (33).*

Substituting (33) into (22) and the expression obtained for *Fk*(*t*) into (26), we obtain the function *Vk*(*t*), *t* ∈ [0, *T*]. Then, from (21), we have

$$V\_n(\mathbf{x}, t) = \sum\_{k=1}^n V\_k(t) \sin \frac{\pi k}{l} \mathbf{x} \tag{35}$$

and from (6) for the first *n* harmonics, the string deflection function *Q<sup>n</sup>* (*x*, *t*) is written as

$$Q\_{\mathfrak{n}}(\mathbf{x},t) = V\_{\mathfrak{n}}(\mathbf{x},t) + W\_{\mathfrak{n}}(\mathbf{x},t),\tag{36}$$

where

$$\mathcal{W}\_{\mathfrak{n}}(\mathfrak{x},t) = \left(1 - \frac{\mathfrak{x}}{l}\right)\mathfrak{u}\_{\mathfrak{n}}(t). \tag{37}$$
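The whole construction (31)–(34) can be illustrated numerically. The sketch below takes $n = 1$, $f_1 \equiv 0$ and illustrative values of $a$, $l$, $t_1$, $T$ and $\eta_1$ (not the paper's data), and checks that the control (33) satisfies the moment conditions (32):

```python
import numpy as np

def trap(y, x):                            # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Sketch of (31)-(34) for n = 1 with f_1 = 0 and illustrative parameters:
# u(tau) = H_1(tau)^T S_1^{-1} eta_1 satisfies the moment conditions (32).
a, l, t1, T = 1.0, 1.0, 1.0, 2.5
lam = a * np.pi / l                        # lambda_1
tau = np.linspace(0.0, T, 40001)
H = np.vstack([np.sin(lam * (T - tau)),
               np.cos(lam * (T - tau)),
               np.where(tau <= t1, np.sin(lam * (t1 - tau)), 0.0)])

S = np.array([[trap(H[i] * H[j], tau) for j in range(3)] for i in range(3)])
eta = np.array([0.2, -0.1, 0.05])          # illustrative right-hand side
u = H.T @ np.linalg.solve(S, eta)          # control (33) with f_1 = 0

residual = np.array([trap(H[i] * u, tau) for i in range(3)]) - eta
assert np.max(np.abs(residual)) < 1e-10
print("moment conditions (32) satisfied, residual:", residual)
```

Because the residual integrals use the same quadrature as the Gram matrix $S$, the moment conditions hold here to machine precision; with exact integration the identity $\int_0^T H H^T\,d\tau\, S^{-1}\eta = \eta$ holds by construction.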

#### **4. Solution Construction in the Cases When** *n* = 1 **and** *n* = 2

Applying the above approach, we construct the boundary control given *n* = 1 and given *n* = 2 and the string deflection function, respectively.

*4.1. Case n* = 1

Given *n* = 1 (therefore, *k* = 1), according to (31), we have $H_1(\tau) = \overline{H}_1(\tau)$ and $\eta_1 = C_1(t_1, T)$, and from (34) we obtain

$$S\_1 = \int\_0^T H\_1(\tau) H\_1^T(\tau) d\tau = \begin{pmatrix} s\_{11}^{(1)} & s\_{12}^{(1)} & s\_{13}^{(1)} \\ s\_{21}^{(1)} & s\_{22}^{(1)} & s\_{23}^{(1)} \\ s\_{31}^{(1)} & s\_{32}^{(1)} & s\_{33}^{(1)} \end{pmatrix}.$$

Elements of the matrix *S*1, according to the notation (28), have the following form:

$$s\_{11}^{(1)} = \frac{T}{2} - \frac{1}{4\lambda\_1} \sin 2\lambda\_1 T,\ s\_{12}^{(1)} = s\_{21}^{(1)} = \frac{1}{2\lambda\_1} \sin^2 \lambda\_1 T,\ s\_{22}^{(1)} = \frac{T}{2} + \frac{1}{4\lambda\_1} \sin 2\lambda\_1 T,$$

$$s\_{33}^{(1)} = \frac{t\_1}{2} - \frac{1}{4\lambda\_1} \sin 2\lambda\_1 t\_1,\ s\_{13}^{(1)} = s\_{31}^{(1)} = \frac{t\_1}{2} \cos \lambda\_1 (T - t\_1) - \frac{1}{2\lambda\_1} \sin \lambda\_1 t\_1 \cos \lambda\_1 T,$$

$$s\_{23}^{(1)} = s\_{32}^{(1)} = \frac{1}{2\lambda\_1} \sin \lambda\_1 t\_1 \sin \lambda\_1 T - \frac{t\_1}{2} \sin \lambda\_1 (T - t\_1),$$

and $\Delta = \det S_1 \neq 0$. Denote by $S_1^{-1}$ the symmetric matrix of dimension $(3 \times 3)$ inverse to the matrix $S_1$.
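The closed-form entries of $S_1$ listed above can be checked against direct quadrature; in the sketch below the values of $a$, $l$, $t_1$ and $T$ are illustrative:

```python
import numpy as np

def trap(y, x):                            # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Check the closed-form entries of S_1 against direct quadrature
# (a, l, t1, T are illustrative; lam is lambda_1).
a, l, t1, T = 1.0, 1.0, 1.2, 3.0
lam = a * np.pi / l
tau = np.linspace(0.0, T, 200001)
s = np.sin(lam * (T - tau))                # first component of H_1
c = np.cos(lam * (T - tau))                # second component
h = np.where(tau <= t1, np.sin(lam * (t1 - tau)), 0.0)   # third component

assert abs(trap(s * s, tau) - (T / 2 - np.sin(2 * lam * T) / (4 * lam))) < 1e-6
assert abs(trap(s * c, tau) - np.sin(lam * T) ** 2 / (2 * lam)) < 1e-6
assert abs(trap(c * c, tau) - (T / 2 + np.sin(2 * lam * T) / (4 * lam))) < 1e-6
assert abs(trap(h * h, tau) - (t1 / 2 - np.sin(2 * lam * t1) / (4 * lam))) < 1e-6
assert abs(trap(s * h, tau) - (t1 / 2 * np.cos(lam * (T - t1))
                               - np.sin(lam * t1) * np.cos(lam * T) / (2 * lam))) < 1e-6
assert abs(trap(c * h, tau) - (np.sin(lam * t1) * np.sin(lam * T) / (2 * lam)
                               - t1 / 2 * np.sin(lam * (T - t1)))) < 1e-6
print("all six closed-form entries of S_1 confirmed")
```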

From (33), it follows that $u_1(\tau) = H_1^T(\tau)S_1^{-1}\eta_1 + f_1(\tau)$. Assuming that $f_1(\tau) = 0$ and denoting by $s_{ij}$ the elements of the matrix $S_1^{-1}$, we obtain, given $\tau \in [0, t_1]$,

$$\begin{array}{l} u_{1}(\tau) = \sin \lambda_{1}(T-\tau)\left[s_{11}C_{11}(T) + s_{12}C_{21}(T) + s_{13}C_{11}(t_{1})\right] \\ \quad + \cos \lambda_{1}(T-\tau)\left[s_{21}C_{11}(T) + s_{22}C_{21}(T) + s_{23}C_{11}(t_{1})\right] \\ \quad + \sin \lambda_{1}(t_{1}-\tau)\left[s_{31}C_{11}(T) + s_{32}C_{21}(T) + s_{33}C_{11}(t_{1})\right] \end{array} \tag{38}$$

and given *τ* ∈ (*t*1, *T*],

$$\begin{array}{l} u_{1}(\tau) = \sin \lambda_{1}(T-\tau)\left[s_{11}C_{11}(T) + s_{12}C_{21}(T) + s_{13}C_{11}(t_{1})\right] \\ \quad + \cos \lambda_{1}(T-\tau)\left[s_{21}C_{11}(T) + s_{22}C_{21}(T) + s_{23}C_{11}(t_{1})\right]. \end{array} \tag{39}$$

Note that according to (36), we can write the expression for the function $Q_1(x,t)$. Assume that $t_1 = \frac{l}{a}$, $T = \frac{2l}{a}$. Then, given $\lambda_1 = \frac{a\pi}{l}$, we obtain $t_1\lambda_1 = \pi$, $T\lambda_1 = 2\pi$ and $\lambda_1(T - t_1) = \pi$. For the matrices $S_1$ and $S_1^{-1}$, we have:

$$\mathbf{S}\_1 = \begin{pmatrix} \frac{l}{a} & \mathbf{0} & -\frac{l}{2a} \\ \mathbf{0} & \frac{l}{a} & \mathbf{0} \\ -\frac{l}{2a} & \mathbf{0} & \frac{l}{2a} \end{pmatrix}, \quad \mathbf{S}\_1^{-1} = \begin{pmatrix} \frac{2a}{l} & \mathbf{0} & \frac{2a}{l} \\ \mathbf{0} & \frac{a}{l} & \mathbf{0} \\ \frac{2a}{l} & \mathbf{0} & \frac{4a}{l} \end{pmatrix}.$$

and for the control from (38) and (39), we obtain

$$u_1(\tau) = \begin{cases} \frac{a}{l}\cos\lambda_1\tau\, C_{21}(T) + \frac{2a}{l}\sin\lambda_1\tau\, C_{11}(t_1), & \tau \in [0, t_1],\\ \frac{a}{l}\cos\lambda_1\tau\, C_{21}(T) - \frac{2a}{l}\sin\lambda_1\tau\,\left(C_{11}(T) + C_{11}(t_1)\right), & \tau \in (t_1, T]. \end{cases} \tag{40}$$
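As a further sketch (with illustrative $a$ and $l$), the special-case matrices $S_1$ and $S_1^{-1}$ used to obtain (40) can be reproduced numerically:

```python
import numpy as np

def trap(y, x):                            # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Reproduce the special-case matrices S_1 and S_1^{-1} for t1 = l/a, T = 2 l/a
# (a and l are illustrative).
a, l = 1.0, 3.0
t1, T = l / a, 2 * l / a
lam = a * np.pi / l
tau = np.linspace(0.0, T, 200001)
H = np.vstack([np.sin(lam * (T - tau)),
               np.cos(lam * (T - tau)),
               np.where(tau <= t1, np.sin(lam * (t1 - tau)), 0.0)])
S1 = np.array([[trap(H[i] * H[j], tau) for j in range(3)] for i in range(3)])

S1_expected = (l / a) * np.array([[1.0, 0.0, -0.5],
                                  [0.0, 1.0, 0.0],
                                  [-0.5, 0.0, 0.5]])
S1_inv_expected = (a / l) * np.array([[2.0, 0.0, 2.0],
                                      [0.0, 1.0, 0.0],
                                      [2.0, 0.0, 4.0]])
assert np.allclose(S1, S1_expected, atol=1e-6)
assert np.allclose(S1_expected @ S1_inv_expected, np.eye(3))
print("special-case S_1 and S_1^{-1} confirmed")
```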

For the function $V_1(t)$, from (26), given (22), so that $F_1(t) = -\frac{2a}{\lambda_1 l}u_1''(t)$, we obtain, given $t \in [0, t_1]$,

$$V_1(t) = \left(V_1(0) - \frac{2a\lambda_1 t\, C_{11}(t_1)}{\pi l}\right)\cos\lambda_1 t + \left(\frac{\dot{V}_1(0)}{\lambda_1} + \frac{a\left(2C_{11}(t_1) + \lambda_1 t\, C_{21}(T)\right)}{\pi l}\right)\sin\lambda_1 t$$

and given *t* ∈ (*t*1, *T*],

$$\begin{aligned} V_1(t) &= \left[V_1(0) + \frac{2a(t - t_1)\lambda_1}{\pi l}C_{11}(T) + \frac{2a(t - 2t_1)\lambda_1}{\pi l}C_{11}(t_1)\right]\cos\lambda_1 t \\ &+ \left[\frac{\dot{V}_1(0)}{\lambda_1} + \frac{at\lambda_1}{\pi l}C_{21}(T) - \frac{2a\left(C_{11}(T) + C_{11}(t_1)\right)}{\pi l}\right]\sin\lambda_1 t. \end{aligned}$$

From (36), given (35) and (37), we have

$$Q\_1(\mathbf{x}, t) = V\_1(t) \sin \frac{\pi}{l} \mathbf{x} + \left(1 - \frac{\mathbf{x}}{l}\right) u\_1(t). \tag{41}$$

*4.2. Case n* = 2

Given *n* = 2 (i.e., *k* = 1, 2) from (31), according to (28)–(30), we have

$$H_{2}(\tau) = \begin{pmatrix} \overline{H}_{1}(\tau) \\ \overline{H}_{2}(\tau) \end{pmatrix} = \begin{pmatrix} \sin\lambda_{1}(T-\tau) \\ \cos\lambda_{1}(T-\tau) \\ h_{1}^{(1)}(\tau) \\ \sin\lambda_{2}(T-\tau) \\ \cos\lambda_{2}(T-\tau) \\ h_{2}^{(1)}(\tau) \end{pmatrix}, \quad \eta_{2} = \begin{pmatrix} C_{1}(t_1, T) \\ C_{2}(t_1, T) \end{pmatrix} = \begin{pmatrix} C_{11}(T) \\ C_{21}(T) \\ C_{11}(t_{1}) \\ C_{12}(T) \\ C_{22}(T) \\ C_{12}(t_{1}) \end{pmatrix},$$

where

$$h_1^{(1)}(\tau) = \begin{cases} \sin \lambda_1 (t_1 - \tau), & 0 \le \tau \le t_{1}, \\ 0, & t_1 < \tau \le T, \end{cases} \quad h_2^{(1)}(\tau) = \begin{cases} \sin \lambda_2 (t_1 - \tau), & 0 \le \tau \le t_{1}, \\ 0, & t_1 < \tau \le T. \end{cases}$$

The values *C*11(*T*), *C*21(*T*), *C*11(*t*1), *C*12(*T*), *C*22(*T*) and *C*12(*t*1) can be easily calculated using formulas (29) and (30). Their explicit form is omitted for brevity.

From (34), we obtain

$$S\_2 = \int\_0^T H\_2(\tau) H\_2^T(\tau) d\tau$$

where $S_2$ is a symmetric matrix of dimension $(6 \times 6)$ whose upper-left block coincides with $S_1$, i.e.,

$$s_{ij}^{(2)} = s_{ij}^{(1)} \text{ for } i, j = \overline{1,3}; \quad s_{ij}^{(2)} = s_{ji}^{(2)} \text{ for } i, j = \overline{1,6},\ i \neq j.$$

Given $\lambda_2 = \frac{2a\pi}{l}$ and using the assumptions made in Section 4.1, for the matrix $S_2$, we obtain:

$$S\_2 = \begin{pmatrix} \frac{l}{a} & 0 & -\frac{l}{2a} & 0 & 0 & 0\\ 0 & \frac{l}{a} & 0 & 0 & 0 & -\frac{4l}{3a\pi} \\ -\frac{l}{2a} & 0 & \frac{l}{2a} & 0 & -\frac{2l}{3a\pi} & 0 \\ 0 & 0 & 0 & \frac{l}{a} & 0 & \frac{l}{2a} \\ 0 & 0 & -\frac{2l}{3a\pi} & 0 & \frac{l}{a} & 0 \\ 0 & -\frac{4l}{3a\pi} & 0 & \frac{l}{2a} & 0 & \frac{l}{2a} \end{pmatrix} \prime$$

where the calculation takes into account the following relations: $t_1\lambda_2 = 2\pi$, $T\lambda_2 = 4\pi$, $\lambda_2(T - t_1) = 2\pi$, $(\lambda_1 + \lambda_2)T = 6\pi$, $\lambda_1 + \lambda_2 = \frac{3a\pi}{l}$, $\lambda_1 - \lambda_2 = -\frac{a\pi}{l}$, $\lambda_1^2 - \lambda_2^2 = -3\left(\frac{a\pi}{l}\right)^2$, $\lambda_1 T + \lambda_2 t_1 = 4\pi$, $\lambda_1 T - \lambda_2 t_1 = 0$, $\lambda_2 T + \lambda_1 t_1 = 5\pi$ and $\lambda_2 T - \lambda_1 t_1 = 3\pi$. Let us note that $\det S_2 = \frac{l^6}{6^4 a^6 \pi^4\, p\, q}$, where $p = \left(9\pi^2 - 64\right)^{-1}$ and $q = \left(9\pi^2 - 16\right)^{-1}$. Having the matrix $S_2$, it is not difficult to calculate the inverse matrix $S_2^{-1}$.

From (33), it follows that $u_2(\tau) = H_2^T(\tau)S_2^{-1}\eta_2 + f_2(\tau)$. For simplicity, assuming that $f_2(\tau) = 0$, we obtain, given $\tau \in [0, t_1]$,

$$\begin{split} u\_{2}(\tau) &= \frac{2a}{l} q \big( 9\pi^{2} \mathbf{C}\_{11}(t\_{1}) + 8\mathbf{C}\_{11}(T) + 6\pi \mathbf{C}\_{22}(T) \big) \sin\lambda\_{1}\tau \\ &+ \frac{3a\pi}{l} p (16\mathbf{C}\_{12}(t\_{1}) - 8\mathbf{C}\_{12}(T) + 3\pi \mathbf{C}\_{21}(T)) \cos\lambda\_{1}\tau \\ &+ \frac{2a}{l} p (32\mathbf{C}\_{12}(T) - 9\pi^{2} \mathbf{C}\_{12}(t\_{1}) - 12\pi \mathbf{C}\_{21}(T)) \sin\lambda\_{2}\tau \\ &+ \frac{3a\pi}{l} q (8\mathbf{C}\_{11}(t\_{1}) + 4\mathbf{C}\_{11}(T) + 3\pi \mathbf{C}\_{22}(T)) \cos\lambda\_{2}\tau, \end{split} \tag{42}$$

and given *τ* ∈ (*t*1, *T*],

$$\begin{split} u\_{2}(\tau) &= \frac{2a}{l} q \left[ 8\mathbf{C}\_{11}(T) - 9\pi^{2} (\mathbf{C}\_{11}(t\_{1}) + \mathbf{C}\_{11}(T)) - 6\pi \mathbf{C}\_{22}(T) \right] \sin \lambda\_{1} \tau \\ &+ \frac{3a\pi}{l} p (16\mathbf{C}\_{12}(t\_{1}) - 8\mathbf{C}\_{12}(T) + 3\pi \mathbf{C}\_{21}(T)) \cos \lambda\_{1} \tau \\ &+ \frac{2a}{l} p \left[ 9\pi^{2} (\mathbf{C}\_{12}(t\_{1}) - \mathbf{C}\_{12}(T)) + 32\mathbf{C}\_{12}(T) + 12\pi \mathbf{C}\_{21}(T) \right] \sin \lambda\_{2} \tau \\ &+ \frac{3a\pi}{l} q (8\mathbf{C}\_{11}(t\_{1}) + 4\mathbf{C}\_{11}(T) + 3\pi \mathbf{C}\_{22}(T)) \cos \lambda\_{2} \tau, \end{split} \tag{43}$$

where

$$\begin{split} C_{11}(T) &= \frac{l}{2a}(V_1(T) - V_1(0)) + \frac{\varphi_T(0) - \varphi_0(0)}{\lambda_1}, \\ C_{21}(T) &= \frac{l}{2a\lambda_1}\left(\dot{V}_1(T) - \dot{V}_1(0)\right) + \frac{\psi_T(0) - \psi_0(0)}{\lambda_1^2}, \\ C_{11}(t_1) &= \frac{l}{2a}(V_1(t_1) + V_1(0)) + \frac{\varphi_1(0) + \varphi_0(0)}{\lambda_1}, \\ C_{12}(T) &= \frac{l}{2a}(V_2(T) - V_2(0)) + \frac{\varphi_T(0) - \varphi_0(0)}{\lambda_2}, \\ C_{22}(T) &= \frac{l}{2a\lambda_2}\left(\dot{V}_2(T) - \dot{V}_2(0)\right) + \frac{\psi_T(0) - \psi_0(0)}{\lambda_2^2}, \\ C_{12}(t_1) &= \frac{l}{2a}(V_2(t_1) - V_2(0)) + \frac{\varphi_1(0) - \varphi_0(0)}{\lambda_2}. \end{split} \tag{44}$$

From (26), for $V_2(t)$, given (22), so that $F_2(t) = -\frac{2a}{\lambda_2 l}u_2''(t)$, we obtain, given $t \in [0, t_1]$,

$$\begin{split} V_{2}(t) &= -\frac{\alpha_{1}}{\lambda_{1}^{2} - \lambda_{2}^{2}} \cos \lambda_{1} t - \frac{\beta_{1}}{\lambda_{1}^{2} - \lambda_{2}^{2}} \sin \lambda_{1} t + \left( V_{2}(0) + \frac{\alpha_{1}}{\lambda_{1}^{2} - \lambda_{2}^{2}} - \frac{\beta_{2} t}{2\lambda_{2}} \right) \cos \lambda_{2} t \\ &+ \left( \frac{\dot{V}_{2}(0)}{\lambda_{2}} + \frac{\alpha_{2} t}{2\lambda_{2}} + \frac{\beta_{1} \lambda_{1}}{\lambda_{2} \left( \lambda_{1}^{2} - \lambda_{2}^{2} \right)} + \frac{\beta_{2}}{2\lambda_{2}^{2}} \right) \sin \lambda_{2} t \end{split}$$

and given *t* ∈ (*t*1, *T*],

$$\begin{aligned} V\_2(t) &= \frac{\gamma\_1}{\lambda\_1^2 - \lambda\_2^2} \sin \lambda\_1 t - \frac{\alpha\_1}{\lambda\_1^2 - \lambda\_2^2} \cos \lambda\_1 t \\ &+ \left( V\_2(0) + \frac{\alpha\_1}{\lambda\_1^2 - \lambda\_2^2} - \frac{\gamma\_2 t}{2\lambda\_2} + \frac{\pi (\lambda\_1 - 2\lambda\_2)(\gamma\_2 - \beta\_2)}{2\lambda\_2 \left(\lambda\_1^2 - \lambda\_2^2\right)} \right) \cos \lambda\_2 t \\ &+ \left( \frac{\dot{V}\_2(0)}{\lambda\_2} + \frac{\alpha\_2 t}{2\lambda\_2} + \frac{\gamma\_2}{2\lambda\_2^2} + \frac{\lambda\_1 (\gamma\_1 + 2\beta\_1)}{\lambda\_2 \left(\lambda\_1^2 - \lambda\_2^2\right)} \right) \sin \lambda\_2 t \end{aligned}$$

where

$$\alpha_{1} = \frac{3a\lambda_{1}^{2}p}{l}\left[8(2C_{12}(t_{1}) - C_{12}(T)) + 3\pi C_{21}(T)\right],$$

$$\alpha_{2} = \frac{3a\lambda_{2}^{2}q}{l}\left[4(2C_{11}(t_{1}) + C_{11}(T)) + 3\pi C_{22}(T)\right],$$

$$\beta\_{1} = \frac{2a\lambda\_{1}^{2}q}{\pi l}\Big(9\,\pi^{2}\mathcal{C}\_{11}(t\_{1}) + 6\pi\mathcal{C}\_{22}(T) + 8\mathcal{C}\_{11}(T)\Big),$$

$$\beta\_{2} = \frac{2a\lambda\_{2}^{2}p}{\pi l}\Big(32\mathcal{C}\_{12}(T) - 12\pi\mathcal{C}\_{21}(T) - 9\,\pi^{2}\mathcal{C}\_{12}(t\_{1})\Big),$$

$$\gamma\_{1} = \frac{2a\lambda\_{1}^{2}q}{\pi l}\Big[9\,\pi^{2}(\mathcal{C}\_{11}(t\_{1}) + \mathcal{C}\_{11}(T)) + 6\pi\mathcal{C}\_{22}(T) - 8\mathcal{C}\_{11}(T)\Big],$$

$$\gamma\_{2} = \frac{2a\lambda\_{2}^{2}p}{\pi l}\Big[32\mathcal{C}\_{12}(T) + 12\pi\mathcal{C}\_{21}(T) - 9\,\pi^{2}(\mathcal{C}\_{12}(T) - \mathcal{C}\_{12}(t\_{1}))\Big].$$

From (36), given (35) and (37), we have

$$Q_2(x, t) = V_2(x, t) + W_2(x, t) = V_1(t)\sin\frac{\pi}{l}x + V_2(t)\sin\frac{2\pi}{l}x + \left(1 - \frac{x}{l}\right)u_2(t). \tag{45}$$
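As a numerical sanity check on the formulas of this subsection, the determinant of $S_2$ can be compared with the stated closed form $\det S_2 = \frac{l^6}{6^4 a^6 \pi^4 p q}$ (taking the illustrative values $a = l = 1$):

```python
import numpy as np

# Check det S_2 = l^6 / (6^4 a^6 pi^4 p q),
# with p = (9 pi^2 - 64)^(-1) and q = (9 pi^2 - 16)^(-1).
a, l = 1.0, 1.0
c = l / a                                  # every entry of S_2 scales as l/a
pi = np.pi
S2 = np.array([
    [c,      0.0,     -c / 2,             0.0,  0.0,               0.0],
    [0.0,    c,        0.0,               0.0,  0.0, -4 * c / (3 * pi)],
    [-c / 2, 0.0,      c / 2,             0.0, -2 * c / (3 * pi),  0.0],
    [0.0,    0.0,      0.0,               c,    0.0,               c / 2],
    [0.0,    0.0,     -2 * c / (3 * pi),  0.0,  c,                 0.0],
    [0.0, -4 * c / (3 * pi), 0.0,         c / 2, 0.0,              c / 2]])
p = 1.0 / (9 * pi**2 - 64)
q = 1.0 / (9 * pi**2 - 16)
det_formula = l**6 / (6**4 * a**6 * pi**4 * p * q)
assert abs(np.linalg.det(S2) - det_formula) < 1e-12
print("det S_2 =", np.linalg.det(S2))
```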

#### **5. Computational Experiment**

We now apply the above approach with concrete data to construct the boundary control given *n* = 1 and *n* = 2 and the corresponding string deflection function. This section includes the initial data, the results obtained and a discussion of the methodology's effectiveness.

#### *5.1. Initial Data*

Let us present the results of a computational experiment for given initial, intermediate and final states of the string for *n* = 1 and *n* = 2, assuming that *a* = 1/3 and *l* = 1, and compare the behavior of the string deflection function with the given initial functions. Given the chosen values of *a* and *l*, we have

$$t\_1 = \frac{l}{a} = 3, \quad T = 2\frac{l}{a} = 6, \quad \lambda\_1 = \frac{\pi}{3}, \quad \lambda\_2 = \frac{2\pi}{3}.$$
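These values can be reproduced programmatically; the sketch below assumes the standard eigenfrequencies λ*n* = *anπ*/*l* of a string with fixed ends, which is consistent with the values quoted above.

```python
import math

# Parameters of the computational experiment (Section 5.1).
a, l = 1 / 3, 1.0   # wave speed parameter and string length

t1 = l / a          # intermediate moment of time
T = 2 * l / a       # final moment of time

# Eigenfrequencies lambda_n = a*n*pi/l (an assumption consistent with
# lambda_1 = pi/3 and lambda_2 = 2*pi/3 in the text).
lam1, lam2 = (a * n * math.pi / l for n in (1, 2))
```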

The choice of the intermediate value *t*1 = *T*/2 follows practical recommendations [4]. We choose the specific initial functions from the class of functions in the problem statement (Section 2) that satisfy the consistency conditions (15)–(17).

Let the following initial state be specified given *t* = 0:

$$\varphi\_0(x) = \frac{1}{2}x^2 - \frac{2x}{5} - \frac{1}{10}, \qquad \psi\_0(x) = -\frac{x^2}{3} + \frac{x}{3}.$$

Given *t*1 = 3, an intermediate state is specified as follows:

$$\varphi\_1(x) = \frac{x^3}{3} - x^2 + \frac{2x}{3}.$$

Moreover, given *T* = 6, the following final state is specified:

$$\varphi\_T(x) = 0, \qquad \psi\_T(x) = 0.$$

The proposed approach is applicable to any initial functions that meet the requirements given in Section 2; the selected functions are merely one such choice.

Note that the choice of zero final values does not reflect a limitation of the technique; it is made to simplify the final formulas. In addition, the problem of stabilizing string vibrations is relevant for damping transverse vibrations of a longitudinally moving string (for example, a paper web) in production [22].

The coefficients of the Fourier series for the functions *ϕ*0(*x*), *ψ*0(*x*), *ϕ*1(*x*), *ϕT*(*x*) and *ψT*(*x*) are equal, respectively, to:

$$\varphi\_1^{(0)} = -\frac{4}{\pi^3} - \frac{1}{5\pi}, \quad \varphi\_2^{(0)} = -\frac{1}{10\pi}, \quad \psi\_1^{(0)} = \frac{8}{3\pi^3}, \quad \varphi\_1^{(1)} = \frac{4}{\pi^3}, \quad \varphi\_2^{(1)} = \frac{1}{2\pi^3},$$

$$\psi\_2^{(0)} = \varphi\_1^{(T)} = \varphi\_2^{(T)} = \psi\_1^{(T)} = \psi\_2^{(T)} = 0.$$
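These are the standard Fourier sine coefficients *φk* = 2∫₀¹ *φ*(*x*) sin(*kπx*) *dx* for *l* = 1. As a sanity check, the following sketch recomputes them by composite Simpson quadrature (pure Python, no external dependencies assumed):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with an even number n of panels."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def sine_coeff(f, k):
    # k-th Fourier sine coefficient of f on [0, 1] (string length l = 1).
    return 2 * simpson(lambda x: f(x) * math.sin(k * math.pi * x), 0.0, 1.0)

phi0 = lambda x: x**2 / 2 - 2 * x / 5 - 1 / 10   # initial deflection
psi0 = lambda x: -x**2 / 3 + x / 3               # initial velocity
phi1 = lambda x: x**3 / 3 - x**2 + 2 * x / 3     # intermediate deflection

pi = math.pi
pairs = [
    (sine_coeff(phi0, 1), -4 / pi**3 - 1 / (5 * pi)),
    (sine_coeff(phi0, 2), -1 / (10 * pi)),
    (sine_coeff(psi0, 1), 8 / (3 * pi**3)),
    (sine_coeff(psi0, 2), 0.0),
    (sine_coeff(phi1, 1), 4 / pi**3),
    (sine_coeff(phi1, 2), 1 / (2 * pi**3)),
]
for computed, exact in pairs:
    assert abs(computed - exact) < 1e-9
```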

The values of these functions at the ends of the string are as follows:

$$\begin{split} \varphi\_0(0) = -\frac{1}{10}, \; \varphi\_1(0) = \varphi\_T(0) = \psi\_T(0) = \psi\_0(0) = \varphi\_0(1) = \varphi\_1(1) = \varphi\_T(1) \\ \quad = \psi\_T(1) = \psi\_0(1) = 0. \end{split}$$

From (23)–(25), we have

$$\begin{aligned} V\_1(0) &= -\frac{4}{\pi^3}, \dot{V}\_1(0) = \frac{8}{3\pi^3}, \; V\_1(3) = \frac{4}{\pi^3}, \; V\_1(6) = 0, \; \dot{V}\_1(6) = 0, \\\\ V\_2(0) &= 0, \; \dot{V}\_2(0) = 0, \; V\_2(3) = \frac{1}{2\pi^3}, \; V\_2(6) = 0, \; \dot{V}\_2(6) = 0. \end{aligned}$$

From (44), we have

$$\mathcal{C}\_{11}(6) = \frac{6}{\pi^3} + \frac{3}{10\pi}, \quad \mathcal{C}\_{21}(6) = -\frac{12}{\pi^4}, \quad \mathcal{C}\_{11}(3) = -\frac{3}{10\pi},$$

$$\mathcal{C}\_{12}(6) = \frac{3}{20\pi}, \quad \mathcal{C}\_{22}(6) = 0, \quad \mathcal{C}\_{12}(3) = \frac{3}{4\pi^3} + \frac{3}{20\pi}.$$

#### *5.2. Results*

In this section, we present the calculation formulas obtained for the functions *u*1, *u*2, *V*1, *V*2, *Q*<sup>1</sup> and *Q*2. From (40), (42) and (43), we have

$$u\_1(t) = \begin{cases} -\frac{4}{\pi^4} \cos\frac{\pi}{3}t - \frac{1}{5\pi} \sin\frac{\pi}{3}t, & t \in [0, 3], \\ -\frac{4}{\pi^4} \cos\frac{\pi}{3}t - \frac{4}{\pi^3} \sin\frac{\pi}{3}t, & t \in (3, 6], \end{cases} \tag{46}$$

$$\begin{aligned} u\_2(t) = {} & \frac{6}{5\pi^2}\left(\pi^2 - 20\right) p \cos\frac{\pi}{3}t - \frac{6}{5\pi^2}\left(\pi^2 - 20\right) q \cos\frac{2\pi}{3}t \\ & - \frac{1}{5\pi^3}\left(9\pi^4 - 8\pi^2 - 160\right) q \sin\frac{\pi}{3}t - \frac{1}{10\pi^3}\left(9\pi^4 + 13\pi^2 - 960\right) p \sin\frac{2\pi}{3}t, \quad t \in [0, 3], \end{aligned} \tag{47}$$

$$\begin{aligned} u\_2(t) = {} & \frac{6}{5\pi^2}\left(\pi^2 - 20\right) p \cos\frac{\pi}{3}t - \frac{6}{5\pi^2}\left(\pi^2 - 20\right) q \cos\frac{2\pi}{3}t \\ & - \frac{4}{5\pi^3}\left(43\pi^2 - 40\right) q \sin\frac{\pi}{3}t + \frac{1}{10\pi^3}\left(77\pi^2 - 960\right) p \sin\frac{2\pi}{3}t, \quad t \in (3, 6]. \end{aligned} \tag{48}$$

Note that the following estimates are obtained for the functions *u*1(*t*) and *u*2(*t*):

$$\max\_{0 \le t \le 6} |u\_1(t)| \approx 0.1354,\\ \max\_{0 \le t \le 6} |u\_2(t)| \approx 0.1193.$$
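These maxima are easy to confirm numerically. The sketch below evaluates *u*1(*t*) from (46) on a fine grid; *u*2(*t*) is omitted because (47) and (48) involve the parameters *p* and *q*, which are defined earlier in the paper, outside this excerpt.

```python
import math

def u1(t):
    # Boundary control (46), piecewise on [0, 3] and (3, 6].
    if t <= 3:
        return -4 / math.pi**4 * math.cos(math.pi * t / 3) - 1 / (5 * math.pi) * math.sin(math.pi * t / 3)
    return -4 / math.pi**4 * math.cos(math.pi * t / 3) - 4 / math.pi**3 * math.sin(math.pi * t / 3)

# The two branches agree at the switching moment t = 3 (both reduce to 4/pi^4),
# so the control is continuous.
left = -4 / math.pi**4 * math.cos(math.pi) - 1 / (5 * math.pi) * math.sin(math.pi)
right = -4 / math.pi**4 * math.cos(math.pi) - 4 / math.pi**3 * math.sin(math.pi)
assert abs(left - right) < 1e-15

max_u1 = max(abs(u1(6 * i / 100000)) for i in range(100001))
# agrees with the estimate 0.1354 quoted above
```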

We obtain the following explicit expressions for the functions *V*1(*t*) and *V*2(*t*):

$$V\_1(t) = \begin{cases} \frac{t\pi^2 - 120}{30\pi^3} \cos\frac{\pi}{3}t - \frac{20t + 3\pi^2 - 240}{30\pi^4} \sin\frac{\pi}{3}t, & t \in [0, 3], \\ \frac{3\pi^2 + 20t - 180}{30\pi^3} \cos\frac{\pi}{3}t - \frac{2(t - 9)}{3\pi^4} \sin\frac{\pi}{3}t, & t \in (3, 6], \end{cases} \tag{49}$$

$$\begin{aligned} V\_2(t) = {} & \frac{2}{5\pi^3}\left(\pi^2 - 20\right) p \cos\frac{\pi}{3}t - \frac{1}{15\pi^4}\left(9\pi^4 - 8\pi^2 - 160\right) q \sin\frac{\pi}{3}t \\ & + \frac{1}{30\pi^3}\left(\left(9\pi^4 + 13\pi^2 - 960\right)t - 12\pi^2 + 240\right) p \cos\frac{2\pi}{3}t \\ & - \left(\frac{2}{5\pi^2}\left(\pi^2 - 20\right)tq + \frac{1}{60\pi^4}\left(81\pi^6 + 1215\pi^4 - 24{,}688\pi^2 + 25{,}600\right)pq\right) \sin\frac{2\pi}{3}t, \quad t \in [0, 3], \end{aligned} \tag{50}$$

$$\begin{aligned} V\_2(t) = {} & \frac{2}{5\pi^3}\left(\pi^2 - 20\right) p \cos\frac{\pi}{3}t - \frac{4}{15\pi^4}\left(43\pi^2 - 40\right) q \sin\frac{\pi}{3}t \\ & + \frac{1}{30\pi^3}\left(\left(960 - 77\pi^2\right)t + 27\pi^4 + 258\pi^2 - 5520\right) p \cos\frac{2\pi}{3}t \\ & - \left(\frac{2}{5\pi^2}\left(\pi^2 - 20\right)tq + \frac{1}{60\pi^4}\left(-324\pi^6 + 3609\pi^4 + 8432\pi^2 - 66{,}560\right)pq\right) \sin\frac{2\pi}{3}t, \quad t \in (3, 6]. \end{aligned} \tag{51}$$

Note that the following estimates take place for the functions *V*1(*t*) and *V*2(*t*):

$$\max\_{0 \le t \le 6} |V\_1(t)| \approx 0.1165, \max\_{0 \le t \le 6} |V\_2(t)| \approx 0.0321.$$

This confirms that the absolute value of each subsequent summand of series (21) decreases.

From (41) and (45), given (46)–(51), we obtain the following explicit expressions for the functions *Q*1(*x*, *t*) and *Q*2(*x*, *t*):

given *t* ∈ [0, 3],

$$Q\_1(x, t) = \left(\frac{t\pi^2 - 120}{30\pi^3} \cos\frac{\pi}{3}t - \frac{20t + 3\pi^2 - 240}{30\pi^4} \sin\frac{\pi}{3}t\right) \sin\pi x$$

$$-\left(\frac{4}{\pi^4} \cos\frac{\pi}{3}t \; +\frac{1}{5\pi} \sin\frac{\pi}{3}t\right)(1 - \mathbf{x}),$$

given *t* ∈ (3, 6],

$$Q\_1(x, t) = \left(\frac{3\pi^2 + 20t - 180}{30\pi^3} \cos\frac{\pi}{3}t - \frac{2(t-9)}{3\pi^4} \sin\frac{\pi}{3}t\right) \sin\pi x$$

$$- \left(\frac{4}{\pi^4} \cos\frac{\pi}{3}t + \frac{4}{\pi^3} \sin\frac{\pi}{3}t\right)(1-x),$$
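The two branches of *Q*1(*x*, *t*) can be checked for consistency directly. The sketch below verifies that they coincide at the switching moment *t* = 3 and that the first branch reproduces the initial shape at *t* = 0:

```python
import math

PI = math.pi

def Q1_first(x, t):   # branch of Q1 for t in [0, 3]
    v1 = ((t * PI**2 - 120) / (30 * PI**3) * math.cos(PI * t / 3)
          - (20 * t + 3 * PI**2 - 240) / (30 * PI**4) * math.sin(PI * t / 3))
    u = -4 / PI**4 * math.cos(PI * t / 3) - 1 / (5 * PI) * math.sin(PI * t / 3)
    return v1 * math.sin(PI * x) + (1 - x) * u

def Q1_second(x, t):  # branch of Q1 for t in (3, 6]
    v1 = ((3 * PI**2 + 20 * t - 180) / (30 * PI**3) * math.cos(PI * t / 3)
          - 2 * (t - 9) / (3 * PI**4) * math.sin(PI * t / 3))
    u = -4 / PI**4 * math.cos(PI * t / 3) - 4 / PI**3 * math.sin(PI * t / 3)
    return v1 * math.sin(PI * x) + (1 - x) * u

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Continuity of the deflection at t = 3.
    assert abs(Q1_first(x, 3) - Q1_second(x, 3)) < 1e-12
    # Initial shape Q1(x, 0) = -(4/pi^3) sin(pi x) - (4/pi^4)(1 - x).
    assert abs(Q1_first(x, 0) - (-4 / PI**3 * math.sin(PI * x) - 4 / PI**4 * (1 - x))) < 1e-12
```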

given *t* ∈ [0, 3],

$$Q\_2(\mathbf{x}, t) = \left(\frac{t\pi^2 - 120}{30\pi^3} \cos\frac{\pi}{3}t - \frac{20t + 3\pi^2 - 240}{30\pi^4} \sin\frac{\pi}{3}t\right) \sin\pi x$$

$$+ \left(\frac{2}{5\pi^3} \left(\pi^2 - 20\right) p \cos\frac{\pi}{3}t - \frac{1}{15\pi^4} \left(9\pi^4 - 8\pi^2 - 160\right) q \sin\frac{\pi}{3}t\right)$$

$$+ \frac{1}{30\pi^3} \left(\left(9\pi^4 + 13\pi^2 - 960\right) t - 12\pi^2 + 240\right) p \cos\frac{2\pi}{3}t$$

$$- \left(\frac{2}{5\pi^2} \left(\pi^2 - 20\right) tq + \frac{1}{60\pi^4} \left(81\pi^6 + 1215\pi^4 - 24{,}688\pi^2 + 25{,}600\right) pq\right)$$

$$\left. \sin\frac{2\pi}{3}t \right) \sin 2\pi x + \left(\frac{6}{5\pi^2}\left(\pi^2 - 20\right)p \cos\frac{\pi}{3}t - \frac{6}{5\pi^2}\left(\pi^2 - 20\right)q \cos\frac{2\pi}{3}t \right.$$

$$\left. - \frac{1}{5\pi^3}\left(9\pi^4 - 8\pi^2 - 160\right)q \sin\frac{\pi}{3}t - \frac{1}{10\pi^3}\left(9\pi^4 + 13\pi^2 - 960\right)p \sin\frac{2\pi}{3}t\right)(1 - x),$$

and given *t* ∈ (3, 6],

$$Q\_2(\mathbf{x}, t) = \left(\frac{3\pi^2 + 20t - 180}{30\pi^3} \cos\frac{\pi}{3}t - \frac{2(t - 9)}{3\pi^4} \sin\frac{\pi}{3}t\right) \sin\pi x$$

$$+ \left(\frac{2}{5\pi^3} \left(\pi^2 - 20\right)p\cos\frac{\pi}{3}t - \frac{4}{15\pi^4} \left(43\pi^2 - 40\right)q\sin\frac{\pi}{3}t\right)$$

$$+ \frac{1}{30\pi^3} \left(\left(960 - 77\pi^2\right)t + 27\pi^4 + 258\pi^2 - 5520\right)p\cos\frac{2\pi}{3}t$$

$$- \left(\frac{2}{5\pi^2} \left(\pi^2 - 20\right)tq + \frac{1}{60\pi^4} \left(-324\pi^6 + 3609\pi^4 + 8432\pi^2 - 66,560\right)pq\right)$$

$$\sin\frac{2\pi}{3}t\right)\sin 2\pi x + \left(\frac{6}{5\pi^2} \left(\pi^2 - 20\right)p\cos\frac{\pi}{3}t - \frac{6}{5\pi^2} \left(\pi^2 - 20\right)q\cos\frac{2\pi}{3}t\right)$$

$$\left. - \frac{4}{5\pi^3} \left(43\pi^2 - 40\right)q\sin\frac{\pi}{3}t + \frac{1}{10\pi^3} \left(77\pi^2 - 960\right)p\sin\frac{2\pi}{3}t\right)(1 - x).$$

At the moment of time *t* = 0, the functions *Q*1(*x*, 0) and *Q*2(*x*, 0) are equal to:

$$Q\_1(\mathbf{x},0) = -\frac{4}{\pi^3} \sin \pi \mathbf{x} - \frac{4}{\pi^4} (1-\mathbf{x}),$$

$$Q\_2(\mathbf{x},0) = -\frac{4}{\pi^3} \sin \pi \mathbf{x} + \frac{288}{5\pi^2} \left(\pi^2 - 20\right) pq(1-\mathbf{x}).$$

We calculate

$$
\left.\frac{\partial Q\_1(\mathbf{x},t)}{\partial t}\right|\_{t=0} = \dot{Q}\_1(\mathbf{x},0) = \frac{8}{3\pi^3} \sin \pi \mathbf{x} - \frac{1}{15}(1-\mathbf{x}),
$$

$$
\left.\frac{\partial Q\_2(\mathbf{x},t)}{\partial t}\right|\_{t=0} = \dot{Q}\_2(\mathbf{x},0) = \frac{8}{3\pi^3} \sin \pi \mathbf{x} - \frac{1}{15\pi^2}\left(\left(9\pi^4 - 8\pi^2 - 160\right)q + \left(9\pi^4 + 13\pi^2 - 960\right)p\right)(1 - \mathbf{x}).
$$

We can check that the expression of the deflection functions *Q*1(*x*, 3) and *Q*2(*x*, 3) at the final moment of the segment [0, 3] coincides with the corresponding expression at the beginning of the next time interval, and the functions have the form:

$$Q\_1(\mathbf{x}, \mathbf{3}) = \frac{40 - \pi^2}{10\pi^3} \sin \pi \mathbf{x} + \frac{4}{\pi^4} (1 - \mathbf{x}),$$

$$Q\_2(\mathbf{x}, \mathbf{3}) = \frac{40 - \pi^2}{10\pi^3} \sin \pi \mathbf{x} + \frac{1}{10\pi^3} \left(5\pi^2 + 9\pi^4 - 800\right) p \sin 2\pi \mathbf{x},$$

$$-\frac{12}{5\pi^2} \left(\pi^2 - 20\right) \left(9\pi^2 - 40\right) pq(1 - \mathbf{x}).$$

The deflection function and its derivative at the moment of time *t* = 6 are equal, respectively, to:

$$Q\_1(\mathbf{x}, \mathbf{6}) = \frac{\pi^2 - 20}{10\pi^3} \sin \pi \mathbf{x} - \frac{4}{\pi^4} (1 - \mathbf{x}),$$

$$\left. \frac{\partial Q\_1(\mathbf{x}, t)}{\partial t} \right|\_{t=\mathbf{6}} = \dot{Q}\_1(\mathbf{x}, \mathbf{6}) = \frac{4}{3\pi^3} \sin \pi \mathbf{x} - \frac{4}{3\pi^2} (1 - \mathbf{x}).$$

$$Q\_2(\mathbf{x}, \mathbf{6}) = \frac{\pi^2 - 20}{10\pi^3} \sin \pi \mathbf{x} + \frac{1}{10\pi} \sin 2\pi \mathbf{x} + \frac{288}{5\pi^2} \left(\pi^2 - 20\right) pq(1 - \mathbf{x}),$$

$$\left.\frac{\partial Q\_2(\mathbf{x}, t)}{\partial t}\right|\_{t=6} = \dot{Q}\_2(\mathbf{x}, \mathbf{6}) = \frac{4}{3\pi^3} \sin \pi \mathbf{x} - \frac{6}{5\pi} \left(\pi^2 - 20\right) q \sin 2\pi \mathbf{x}$$

$$-\frac{1}{15\pi^2} \left(855\pi^4 - 2576\pi^2 - 5120\right) pq(1 - \mathbf{x}).$$


#### *5.3. Illustrative Material*

Let us illustrate the obtained formulas on the graphs. The graphs of the functions *u*1(*t*) and *u*2(*t*) are given in Figure 1.

**Figure 1.** Graphs of *u*1(*t*) (solid line) and *u*2(*t*) (dashed line).

*Axioms* **2022**, *11*, x FOR PEER REVIEW 13 of 17

The graphs of the functions *V*1(*t*) and *V*2(*t*) are shown in Figure 2.

**Figure 2.** Graphs of the functions *V*1(*t*) (solid line) and *V*2(*t*) (dashed line).

The graphical representation of the functions *Q*1(*x*, 0), *Q*2(*x*, 0) and *ϕ*0(*x*) is illustrated in Figure 3.

**Figure 3.** Graphs of *Q*1(*x*, 0) (solid line), *Q*2(*x*, 0) (dashed line) and *ϕ*0(*x*) (dash-dotted line).

The graphs of the functions *Q*1(*x*, 3), *Q*2(*x*, 3) and *ϕ*1(*x*) are shown in Figure 4, which illustrates the small discrepancies between these functions.

**Figure 4.** Graphs of *Q*1(*x*, 3) (solid line), *Q*2(*x*, 3) (dashed line) and *ϕ*1(*x*) (dash-dotted line).

Graphical representations of the functions *Q*1(*x*, 6) and *Q*2(*x*, 6) and $\dot{Q}\_1(x, 6)$ and $\dot{Q}\_2(x, 6)$ are shown in Figures 5 and 6, respectively.

**Figure 5.** Graphs of *Q*1(*x*, 6) (solid line) and *Q*2(*x*, 6) (dashed line).

**Figure 6.** Graphs of $\dot{Q}\_1(x, 6)$ (solid line) and $\dot{Q}\_2(x, 6)$ (dashed line).

Figure 7 provides graphical illustrations of the dynamics of the behavior of the functions *Q*1(*x*, *t*) and *Q*2(*x*, *t*) given *t* = 0, 1, 2, 3, 4, 5, 6.



**Figure 7.** Graphs of the functions *Q*1(*x*, *t*) (solid line) and *Q*2(*x*, *t*) (dashed line) at fixed points in time *t*: (**a**) *t* = 0; (**b**) *t* = 1; (**c**) *t* = 2; (**d**) *t* = 3; (**e**) *t* = 4; (**f**) *t* = 5; (**g**) *t* = 6.

#### *5.4. Discussion of Results*

For a comparative analysis of the results obtained, we denote by $\varepsilon\_n(x, t\_j) = \left|Q\_n(x, t\_j) - \varphi\_j(x)\right|$ and $\tilde{\varepsilon}\_n(x, t\_m) = \left|\dot{Q}\_n(x, t\_m) - \psi\_m(x)\right|$, $n = 1, 2$, $m = 0, 2$, $j = \overline{0, 2}$ (here, $m = j = 2$ corresponds to the moment of time $t\_2 = T$), which illustrate the discrepancy between these functions.

The maximum values of the residuals $\varepsilon\_n(x, t\_j)$, $\tilde{\varepsilon}\_n(x, t\_m)$, $E\_n(t\_j) = \int\_0^1 \varepsilon\_n(x, t\_j)\,dx$ and $\tilde{E}\_n(t\_m) = \int\_0^1 \tilde{\varepsilon}\_n(x, t\_m)\,dx$ are given in Tables 1 and 2.

Tables 1 and 2 show that, under the constructed control, the behavior of the string deflection functions is quite close to that of the given initial ones. An illustration of the residuals at the initial and intermediate time points is shown in the following figures. The graphical representation of the functions *εn*(*x*, 0), *n* = 1, 2, is shown in Figure 8.
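The tabulated values themselves are not reproduced in this excerpt. As a sketch of how such residuals can be computed, the snippet below evaluates the pointwise residual $\varepsilon\_1(x, T) = |Q\_1(x, 6) - \varphi\_T(x)|$ and its integral $E\_1(T)$ for the first harmonic, using $\varphi\_T = 0$ and the explicit expression for *Q*1(*x*, 6) given in Section 5.2:

```python
import math

PI = math.pi

def eps1(x):
    # Pointwise residual eps_1(x, T) = |Q1(x, 6) - phi_T(x)| with phi_T = 0.
    return abs((PI**2 - 20) / (10 * PI**3) * math.sin(PI * x) - 4 / PI**4 * (1 - x))

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

max_eps1 = max(eps1(i / 10000) for i in range(10001))  # maximum residual at t = T
E1 = simpson(eps1, 0.0, 1.0)                           # integral residual E_1(T)
```

Both quantities come out small, consistent with the closeness of the realized and prescribed final states discussed above.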

**Table 1.** Comparison of residuals for *ϕj*.


**Table 2.** Comparison of residuals for *ψm*.


**Figure 8.** Graphs of the functions *ε*1(*x*, 0) (solid line) and *ε*2(*x*, 0) (dashed line).

The graphical representation of the functions *εn*(*x*, 3), *n* = 1, 2, is shown in Figure 9.


**Figure 9.** Graphs of the functions *ε*1(*x*, 3) (solid line) and *ε*2(*x*, 3) (dashed line).

The proposed analytical constructions are valid for any first *n* harmonics of string vibrations. Numerical calculations, illustrations of the results and their analysis were carried out with the help of the developed general approach for *n* = 1, 2. The series (21) is uniformly convergent for functions from the above classes, as the behavior of the functions *V*1(*t*) and *V*2(*t*) shows (see Figure 2).


Thus, given *n* = 1 and *n* = 2, we constructed explicit expressions for the boundary controls *u*1(*t*) and *u*2(*t*) and for the string deflection functions *Q*1(*x*, *t*) and *Q*2(*x*, *t*).

#### **6. Conclusions**


We proposed a constructive method for constructing the control of homogeneous string vibrations with a given deflection shape at an intermediate moment of time. The control was carried out by shifting one end with the other end fixed. The construction scheme was as follows: we reduced the original problem to the control problem of distributed influences with zero boundary conditions. Further, we used the method of separation of variables and methods of control theory for finite-dimensional systems with multipoint intermediate conditions.

We formulated the corresponding statement and theorem for the first *n* harmonics. A specific example illustrated the obtained results. We realized a computational experiment, constructed the corresponding graphs and made a comparative analysis. They confirm the results of the study. The proposed method can be extended to other non-one-dimensional vibrational systems. The results presented in the paper can be used in the design of boundary control of vibration processes in physical and technological systems.

**Author Contributions:** Conceptualization, V.B. and S.S.; methodology, V.B.; software, S.S.; validation, V.B. and S.S.; formal analysis, V.B. and S.S.; investigation, V.B. and S.S.; resources, V.B. and S.S.; data curation, S.S.; writing—original draft preparation, V.B. and S.S.; writing—review and editing, V.B. and S.S.; visualization, S.S.; supervision, V.B. and S.S.; project administration, V.B. and S.S.; funding acquisition, V.B. and S.S. All authors have read and agreed to the published version of the manuscript.


**Funding:** Part of the research of S.S. was carried out under State Assignment Project no. FWEU-2021-0006, reg. number AAAA-A21-121012090034-3, of the Fundamental Research Program of Russian Federation 2021–2030 using the resources of the High-Temperature Circuit Multi-Access Research Center (Ministry of Science and Higher Education of the Russian Federation, project no. 13.CKP.21.0038).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Waves in a Hyperbolic Predator–Prey System**

**Andrey Morgulis 1,2**


**Abstract:** We address a hyperbolic predator–prey model, which we formulate with the use of the Cattaneo model for chemosensitive movement. We put a special focus on the case when the Cattaneo equation for the flux of species takes the form of conservation law—that is, we assume a special relation between the diffusivity and sensitivity coefficients. Regarding this relation, there are pieces arguing for its relevance to some real-life populations, e.g., the copepods (*Harpacticoida*), in the biological literature (see the reference list). Thanks to the mentioned conservatism, we get exact solutions describing the travelling shock waves in some limited cases. Next, we employ the numerical analysis for continuing these waves to a wider parametric domain. As a result, we discover smooth solitary waves, which turn out to be quite sustainable with small and moderate initial perturbations. Nevertheless, the perturbations cause shedding of the predators from the main core of the wave, which can be treated as a settling mechanism. Besides, the localized perturbations make waves, colliding with the main core and demonstrating peculiar quasi-soliton phenomena sometimes resembling the leapfrog playing. An interesting side result is the onset of the migration waves due to the explosion of overpopulated cores.

**Keywords:** Patlak–Keller–Segel systems; the Cattaneo model of chemosensitive movement; hyperbolic models; shock waves; conservation laws

**MSC:** 92C17; 35L40; 92E20

#### **1. Introduction**

The Patlak–Keller–Segel (PKS) law provides a simple macroscopic model for the perceptual motion (taxis) of an ensemble of particles in response to the spatial gradient of a stimulus (or signal) field. The commonly recognized formulation of the PKS flux reads as *χp*∇*s*, where the notations *p*, *s*, and *χ* stand for the density of the medium that moves in response to the signal, the intensity of the signal itself, and the sensitivity coefficient, correspondingly. A multitude of parabolic PKS systems, resulting from the summation of the diffusive and PKS fluxes, have stood as the subject of intensive research for the last five decades. A considerable number of reviews expose the advances in this area, e.g., [1–3].

However, usage of the parabolic framework is not the only way of treating the PKS law. For example, Dolak and Hillen [4] proposed a different formulation known as the Cattaneo model for chemosensitive movement. In contrast with the parabolic models, this one takes into account the inertia of the response and becomes hyperbolic. At that, the flux has a contribution from the local time derivative of itself in addition to those mentioned above. From the articles [5,6], it follows that the common parabolic model, the Cattaneo model and several other hyperbolic models represent approximations of the kinetic equations under different hypotheses. There is a concise review by R. Eftimie [7] of this subject that covers the derivation of the models and the issues of exact solutions, stability and bifurcations, etc. It makes clear that the hyperbolic models are no less natural than their parabolic counterparts, e.g., while modelling the aggregation processes in active media. Nevertheless, the former receive much less attention than the latter. The mentioned review by R. Eftimie reports the deficiency of results for understanding the generic patterns emerging from the dynamics of the local hyperbolic models even in the case of one spatial dimension (see Table 9.1, Chapter 9), despite a considerable body of work exposed therein.

**Citation:** Morgulis, A. Waves in a Hyperbolic Predator–Prey System. *Axioms* **2022**, *11*, 187. https://doi.org/10.3390/axioms11050187

Academic Editor: Hans J. Haubold

Received: 21 March 2022 Accepted: 14 April 2022 Published: 20 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Anyway, the hyperbolic models are the subject of continuing research. Among the recent results, those published in [8,9] have much to do with the present study. These articles address the propagation of travelling shock waves and the gradual formation of shocks from initially smooth solutions over an infinite time extent. The analysis covers the rigorous proof of both features for a non-local hyperbolic model of a medium, the density of which governs its velocity via the action of a given first-order pseudo-differential operator. Interestingly, the travelling shock waves coexist with the smooth ones. The latter are widely studied features in the parabolic case; see, e.g., [10–12] and the references therein. The studies of their hyperbolic counterparts likely trace back to K. Hadeler [13–16].

In the present paper, we address similar issues but in a different context. We consider the Cattaneo model for a predator–prey community with the Lotka–Volterra kinetics. Regarding the predator flux, we assume that the diffusivity coefficient, *µ* = *µ*(*p*,*s*), and the sensitivity coefficient, *χ* = *χ*(*p*,*s*), are given functions of the species densities, *p* and *s*. In addition, we assume that the 1-form −*µ*(*p*,*s*)*dp* + *pχ*(*p*,*s*)*ds* is exact. Despite being restrictive, this assumption relies on certain biological grounds [17,18]. Formally, it entails the conservatism of the flux equation. Thanks to this circumstance, we take advantage of considering the travelling shock waves and arrive at very simple exact solutions for the limit case of sedentary prey and a highly-inertial predator. In the aforementioned articles, K. Hadeler addressed the travelling waves in a semi-linear version of the Cattaneo model. However, the model dealt with here is not semi- but quasi-linear. Besides, the conservatism allows for the elimination of the flux from the governing equations using the ansatz by K. Hadeler; see the references above again. This reduction is helpful for calculating the numerical solution. The numerical continuation of the travelling shock waves to a wider parametric area discovered their smooth counterparts. These are soliton-like waves. The study of their perturbations and collisions has revealed a rather peculiar interplay that resembles the quasi-soliton interactions reported in [19–21]. An interesting side result is the explosive migration waves emergent from the overpopulated cores.

Thus, there are several key findings in the present article, namely: (i) identifying a class of Cattaneo models for chemosensitive movement that, on the one hand, allows the formulation of the governing equations as conservation laws and, on the other hand, includes a biologically justified model; (ii) discovering the exact solutions describing the travelling shock waves in the limit of sedentary prey and highly-inertial predators; (iii) discovering the smooth counterparts of the shocks in the general case and their interactions, such as the leapfrog playing; (iv) observing the migration waves due to the explosion of overpopulated cores; (v) observing the formation of layers of high concentrations of the species near the fronts of waves emerging from the collisions of the solitary waves or upon the massive escape from the overpopulated areas.

The paper is organized as follows. In Section 2, we formulate the model and put it into the dimensionless form. In Section 3, we consider the case of the sedentary prey, in which the system becomes purely hyperbolic. In Section 4, we discuss the general issues regarding the shock waves, particularly the travelling ones. In Section 5, we distinguish a special case that allows a very simple exact solution. In Section 6, we discuss the general setting of the numerical experiments, and present their results. In Section 7, we discuss the results of our study and their applications. In the Appendix A, we have gathered the precise formulations of the data used for performing the numerical experiments.

#### **2. The Governing Equations and Scaling**

The governing equations read as:

$$p_t + q_x = F(p,s), \tag{1}$$

$$\tau q_t + \nu q = \chi(p,s)\, p\, s_x - \mu(p,s)\, p_x, \tag{2}$$

$$s_t = G(p,s) + D_s s_{xx}. \tag{3}$$

In this system, the independent variables, *x* and *t*, stand for the spatial and temporal coordinates, correspondingly, *x* ∈ R, *t* > 0. The dependent variables are *p* = *p*(*x*, *t*), *q* = *q*(*x*, *t*) and *s* = *s*(*x*, *t*). The first and the last play the parts of the densities of the species, say, the predators and the prey, correspondingly. The first two equations constitute the Cattaneo model for the prey-sensitive movement of the predators, so that the remaining dependent variable, *q*, stands for the predators' flux. The prey spreads itself purely by diffusion, and the notation *D<sub>s</sub>* stands for its diffusivity. In what follows, *D<sub>s</sub>* ≡ const by assumption. We also assume that the predators' diffusivity, *µ*(*p*,*s*), and the sensitivity, *χ*(*p*,*s*), are specified, and

$$p\chi(p,s)\to 0,\ p\to +0,\ \quad \forall s\ge 0.\tag{4}$$

We assume that the reaction terms, *G* and *F*, are prescribed, but postpone further detailing of them to Section 5. The mechanical analogy suggests treating the second term on the left-hand side of the second equation as a contribution of the resistance of the environment to the predators' motion. So, we will call the corresponding coefficient, *ν*, the resistivity.

Let the notations *T*, *X*, *P*, *S*, *Q*, *D<sub>p</sub>*, *C*, *J<sub>p</sub>*, *J<sub>s</sub>* stand for the characteristic scales of the time, length, predator's density, prey's density, predators' flux, diffusivity, sensitivity, and the predator's and prey's source densities, correspondingly. The resistivity coefficient, *ν*, is naturally dimensionless. Since the values of *D<sub>p</sub>*, *D<sub>s</sub>* and *C* depend on the concrete species, it is natural to consider them as given. In contrast, the values of *X*, *T*, *S*, *P*, *J<sub>p</sub>* and *J<sub>s</sub>* are free to choose. Given this, let us set

$$T = \tau, \quad X = \sqrt{\tau D_p}, \quad Q = P\sqrt{\frac{D_p}{\tau}}, \quad J_p = J_s = \tau^{-1}, \tag{5}$$

and postpone defining the values of *P* and *S*. We also set

$$\mu(p,s) = D_p^{-1}\, \mu(Pp, Ss), \quad \chi(p,s) = C^{-1}\, \chi(Pp, Ss). \tag{6}$$

In what follows, every variable employed is dimensionless by default. Upon the above scaling, the dimensionless form of the governing equations reads:

$$p_t + q_x = F(p,s), \tag{7}$$

$$q_t + \nu q = \kappa \chi(p,s)\, p\, s_x - \mu(p,s)\, p_x, \tag{8}$$

$$s_t = G(p,s) + \delta s_{xx}, \tag{9}$$

$$\kappa = CS/D_p, \qquad \delta = D_s/D_p. \tag{10}$$

For the forthcoming analysis, it is important to distinguish the case when the right-hand side in the flux equation is integrable in the sense that:

$$\kappa \chi(p,s)\, p\, s_x - \mu(p,s)\, p_x = -(\varphi(p,s))_x. \tag{11}$$

For such an integrability, it is necessary and sufficient to link the diffusivity to the sensitivity as follows:

$$\mu_s(p,s) = -\kappa (p\chi)_p. \tag{12}$$
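The exactness condition (12) can be checked symbolically. The concrete pair $\mu = (1+\kappa s)^{-1}$, $\chi = (1+\kappa s)^{-2}$ below anticipates the diffusivity chosen later in Section 6 and is offered only as an illustration; the potential $\varphi$ with $d\varphi = \mu\, dp - \kappa p \chi\, ds$ (cf. (17) below) is constructed for this pair as well.

```python
import sympy as sp

p, s, kappa = sp.symbols('p s kappa', positive=True)

# Example pair satisfying the integrability condition (12) (illustrative choice)
mu = 1 / (1 + kappa * s)         # predators' diffusivity mu(s)
chi = 1 / (1 + kappa * s)**2     # sensitivity chi(s)

# Condition (12): mu_s = -kappa * d(p*chi)/dp
lhs = sp.diff(mu, s)
rhs = -kappa * sp.diff(p * chi, p)
assert sp.simplify(lhs - rhs) == 0

# For this pair the potential is phi = p*mu(s); check d(phi) = mu dp - kappa p chi ds
phi = p * mu
assert sp.simplify(sp.diff(phi, p) - mu) == 0
assert sp.simplify(sp.diff(phi, s) + kappa * p * chi) == 0
print("integrability condition (12) and potential verified")
```

Any other pair linked by (12) passes the same check; the choice above merely keeps the algebra short.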

#### **3. Sedentary Prey and Hyperbolicity**

Throughout this section, we consider the system (7)–(9) with *δ* = 0. Then it becomes a first-order quasi-linear PDE system, which we put into the form:

$$z\_t + A(z)z\_x = b(z), \quad \text{where} \tag{13}$$

$$z = \begin{pmatrix} p \\ q \\ s \end{pmatrix}, \quad b: \begin{pmatrix} p \\ q \\ s \end{pmatrix} \mapsto \begin{pmatrix} F(p,s) \\ -\nu q \\ G(p,s) \end{pmatrix}, \quad A: \begin{pmatrix} p \\ q \\ s \end{pmatrix} \mapsto \begin{pmatrix} 0 & 1 & 0 \\ \mu(p,s) & 0 & -\kappa p \chi(p,s) \\ 0 & 0 & 0 \end{pmatrix}. \tag{14}$$

The eigenvalues of matrix *A* are $\pm\sqrt{\mu(p,s)}$ and $0$. They are real and pairwise distinct as long as $\mu(p,s) > 0$, which is always true by assumption. Hence the system of Equations (13) and (14) is strictly hyperbolic. The above triple of eigenvalues determines the triple of *characteristic speeds*; that is, every characteristic of system (13) allows a parametrization by a mapping $t \mapsto (t, X(t))$ that satisfies the equation $\dot X(t) = \lambda(p(X(t),t), s(X(t),t))$, where $\lambda \in \{0, \pm\sqrt{\mu(p,s)}\}$.
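The eigenvalue computation can be reproduced symbolically. In the sketch below the symbols `m` and `k` stand in for the positive quantities $\mu(p,s)$ and $\kappa p \chi(p,s)$ entering matrix (14):

```python
import sympy as sp

# Stand-ins for the entries of matrix A in (14):
m = sp.symbols('m', positive=True)   # m = mu(p, s) > 0
k = sp.symbols('k', positive=True)   # k = kappa * p * chi(p, s)

A = sp.Matrix([[0, 1, 0],
               [m, 0, -k],
               [0, 0, 0]])

eigs = A.eigenvals()                 # dict {eigenvalue: multiplicity}
# The spectrum is {0, +sqrt(m), -sqrt(m)}: real and distinct, so the system
# (13)-(14) is strictly hyperbolic whenever mu > 0.
assert set(eigs.keys()) == {0, sp.sqrt(m), -sp.sqrt(m)}
```

The characteristic polynomial is $-\lambda(\lambda^2 - \mu)$, so the third row of zeros contributes the stationary characteristic $\lambda = 0$.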

A question to ask about a first-order hyperbolic system is whether or not it allows diagonalizing by a pointwise transform $\rho = R(z)$. Such an ansatz generally does not exist for a system that includes more than two equations. The diagonalizing is feasible provided that the system matrix, $A = A(z)$, satisfies an integrability criterion, which allows a straight algebraic formulation. Another formulation requires satisfying the Frobenius condition $d\omega^i \wedge \omega^i = 0$ for every 1-form $\omega^i = \ell^i \cdot dz$ generated by the vectors of the dual eigenbasis $\{\ell^1, \ell^2, \dots\}$ of the matrix $A$ (that is, the eigenbasis of the transposed matrix). Then for every $i$ there exists a factor $\alpha_i = \alpha_i(z)$ such that $\alpha_i \omega^i = d\rho^i$, and $\rho^i = R^i(z)$. The last criterion is handy to use as long as there is a handy dual eigenbasis, as in the case of matrix (14), for example. Then a routine calculation reduces the diagonalization criterion to the following condition:

$$\mu_s(p,s) = \kappa p \chi(p,s) \left( \ln \frac{\mu(p,s)}{p^2 \chi^2(p,s)} \right)_p. \tag{15}$$

Under condition (12) the obtained criterion simplifies to:

$$\mu(p,s) = p\, c(s)\, \chi(p,s), \tag{16}$$

where *c* stands for an arbitrary function. Thus, assuming the diagonalization entails restrictions that are too artificial, even in the simplified form. That is why we will not consider this option any further in this study.

Let the condition (12) hold throughout all subsequent considerations. Then there exists a single-valued function *ϕ* = *ϕ*(*p*,*s*) such that:

$$d\varphi = \mu(p, s)dp - \kappa p \chi(p, s)ds.\tag{17}$$

Hence, the system (13) and (14) consists of conservation laws, namely:

$$p_t + q_x = F(p,s), \quad q_t + \varphi_x = -\nu q, \quad s_t = G(p,s). \tag{18}$$

This feature makes feasible the generalized solutions, e.g., the shock waves.

Consider now a shock wave with a discontinuity at some curve *x* = *X*(*t*). Then the velocity of the shock, $\dot X$, has to satisfy the Rankine–Hugoniot conditions entailed by the conservation laws (18). They read as:

$$\dot X [p] = [q], \quad \dot X [q] = [\varphi], \quad \dot X [s] = 0. \tag{19}$$

Here the notation of a dependent variable put in the square brackets stands for the jump of this variable across the discontinuity—that is, the difference between the limit values evaluated for (*x*, *t*) → (*X*(*t*) + 0, *t*) and (*x*, *t*) → (*X*(*t*) − 0, *t*). At that,

$$[\varphi] = \int_{(p^-,\,s^-)}^{(p^+,\,s^+)} \mu(p,s)\, dp - \kappa p \chi(p,s)\, ds, \tag{20}$$

where the superscripts + or − stand for the unilateral limit values at the left and right shores of the discontinuity (when the observer looks forward alongside the time axis). There are two possibilities. The first is the standing wave—that is,

$$\dot X = 0, \quad [q] = [p] = 0, \tag{21}$$

where the value of [*s*] remains undetermined. The second is the travelling wave—that is,

$$\dot X \neq 0, \quad [s] = 0, \quad [q]^2 = [p][\varphi] = [p] \int_{p^-}^{p^+} \mu(p, s_*)\, dp = [p]^2 \mu(p_*, s_*), \tag{22}$$

where $s_* = s(X(t), t)$ is the value of the prey density directly on the shock (note that the function $s$ remains continuous across the shock), and $p_* \in (p^-, p^+)$ is an unknown quantity that for each $t$ satisfies the equation:

$$\int_{p^-}^{p^+} \mu(p, s_*)\, dp = [p]\, \mu(p_*, s_*). \tag{23}$$

The speed at which such a wave propagates can take two opposite values, namely:

$$\dot X = [q]/[p] = [\varphi]/[q] = \pm\sqrt{\mu(p_*, s_*)}. \tag{24}$$

Generically, the shock wave speeds determined in (24) differ from the characteristic ones, which are discontinuous across the shocks. Whether or not a given shock propagates depends on the inequalities between its velocity and the limit values of the characteristic velocities. These inequalities are known as the Lax conditions. Checking them yields the conclusion that shock waves can propagate provided that:

$$\left(\mu(p^+,\mathbf{s}\_\*) - \mu(p\_{\*\prime},\mathbf{s}\_\*)\right)\left(\mu(p^-,\mathbf{s}\_\*) - \mu(p\_{\*\prime},\mathbf{s}\_\*)\right) < 0. \tag{25}$$

Moreover, exactly one of the two possible shock waves can propagate, and the propagation speed is equal to $\sqrt{\mu(p_*, s_*)}$ provided that $\mu(p^-, s_*) > \mu(p_*, s_*)$, or $-\sqrt{\mu(p_*, s_*)}$ otherwise.
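Conditions (22)–(25) are easy to exercise numerically. The sketch below uses a hypothetical predator-dependent diffusivity $\mu(p, s_*) = 1 + p$ at a fixed shock trace $s_*$ (an illustration only, not the paper's choice); for this linear $\mu$, the mean-value equation (23) gives $p_*$ at the midpoint of $(p^-, p^+)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical diffusivity at the fixed prey trace s_* (illustrative assumption)
def mu(p):
    return 1.0 + p

p_minus, p_plus = 0.2, 1.0        # unilateral limits of p at the shock
jump_p = p_plus - p_minus

# Equation (23): the mean-value condition defining p_* in (p^-, p^+)
integral, _ = quad(mu, p_minus, p_plus)
p_star = brentq(lambda p: jump_p * mu(p) - integral, p_minus, p_plus)

# Shock speed (24) and the Rankine-Hugoniot relation from (22)
c = np.sqrt(mu(p_star))
jump_q = c * jump_p
assert np.isclose(jump_q**2, jump_p**2 * mu(p_star))

# Lax condition (25): mu(p_*) must separate the unilateral values of mu
assert (mu(p_plus) - mu(p_star)) * (mu(p_minus) - mu(p_star)) < 0
print(f"p_* = {p_star:.3f}, shock speed = +/-{c:.3f}")
```

Here `p_star` comes out as `0.6`, the midpoint, as the linear diffusivity predicts; with an increasing `mu` the Lax inequality (25) holds automatically for any `p_star` strictly between the unilateral limits.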

A peculiar degeneration occurs when a predator's diffusivity is independent of its density—that is, *µ* = *µ*(*s*). Given this, the expressions for the shock wave speeds read as either:

$$
\dot{X} = \pm \sqrt{\mu(s\_\*)} \tag{26}
$$

or $\dot X = 0$. It is worth recalling that $s_*$ stands for the trace of the prey's density, $s$, right on the shock. It is well defined due to the continuity of the function $s$ across the shock. Hence, every possible wave speed coincides with a characteristic one; that is, *the shocks spread along the characteristics.*

Matching the predator-independent diffusivity to the integrability condition (12), together with assumption (4), makes the sensitivity, $\chi$, predator-independent too. Given this, the integrability condition simplifies as follows:

$$\mu = \mu(s), \quad \mu_s(s) = -\kappa \chi(s). \tag{27}$$

Modulo the scaling, relation (27) is equivalent to that deduced by Tyutyunov et al. from biological rationales in the aforementioned articles. Also, Tyutyunov et al. addressed some issues of stability and pattern formation for the corresponding PKS systems in the parabolic form [22]. Throughout the rest of the present article, we will be studying Cattaneo's systems that arise from relation (27).

#### **4. The Travelling Shock Waves for the Predator-Independent Diffusivity**

Let *d*/*dt* stand for the total derivative along a characteristic. From system (18) it follows that:

$$\frac{dq}{dt} + \lambda \frac{dp}{dt} = \kappa \chi(s)\, p\, s_x - \nu q + \lambda F(p,s), \tag{28}$$

where the notation $\lambda = \pm\sqrt{\mu(s)}$ stands for the corresponding characteristic speed. If this characteristic supports a shock, then the same equations hold on the shores of the shock with the limit values of every quantity involved. Subtracting them and eliminating the variable $q$ with the use of the Rankine–Hugoniot conditions (19) leads to the following equation on the shock

$$\frac{d(\lambda_* [p])}{dt} + \lambda_* \frac{d[p]}{dt} = \kappa \chi_* [p s_x] - \nu \lambda_* [p] + \lambda_* [F], \quad [q] = \lambda_* [p], \tag{29}$$

where subscript <sup>∗</sup> indicates the quantities, which depend on the values of *s*<sup>∗</sup> only. Furthermore, from the continuity of the prey's density, *s*, it follows that the gap of ∇*s* across the shock is normal to it everywhere—that is,

$$[s_t] + \lambda_* [s_x] = 0. \tag{30}$$

So, we are discussing the travelling shock waves. By definition, such a wave propagates at a constant speed *c*, and the corresponding solution depends on only one variable,

$$\xi = t - x/c. \tag{31}$$

Then the characteristics at which the shocks occur are the parallel lines determined by equations *ξ* = *ξ*<sup>∗</sup> for the values of *ξ*<sup>∗</sup> varying over some set, Σ, which we assume to be finite. This set gathers all the discontinuities of the solution in variable *ξ*. On the complement of the singular set, the system (18) reduces to an ODE system. Namely,

$$(c^2 - \mu(s))\, p_\xi = c^2\left( F(p,s) - \nu(p-r) \right) - \kappa \chi(s)\, p\, G(p,s), \quad r_\xi = F(p,s), \quad s_\xi = G(p,s), \tag{32}$$

where $r = p - q/c$, $\xi \notin \Sigma$. The conditions for matching the solutions defined on the adjacent intervals separated by a discontinuity follow from Equations (29) and (30). The latter holds trivially for every function in the only variable $\xi$, while the former turns into a system of functional equations as follows:

$$c^2([F] - \nu[p]) - \kappa \chi_* [pG] = 0, \quad [s] = 0, \quad [r] = 0. \tag{33}$$

The last equation in this set is equivalent to $[q] = c[p]$. The wave speed, $c$, has to obey the following constraint:

$$\exists\, c: \ \forall\, \xi_* \in \Sigma \quad \lambda_* = \pm\sqrt{\mu(s_*)} = c, \quad s_* = s(\xi_*). \tag{34}$$

Alluding to the values of the unknown, *s*, taken on Σ is correct by the continuity that is consistent with the matching condition (33). The same is true regarding another unknown, *r*.

The first equation in system (32) degenerates when the independent variable, *ξ*, is approaching a discontinuity. Let us restrict ourselves within the solutions obeying the following condition:

$$p_\xi^\pm = o\left( (\xi - \xi_*)^{-1} \right), \quad \xi \to \xi_*, \tag{35}$$

where the superscripts, <sup>+</sup> and <sup>−</sup>, are to distinguish the solutions settled on the right and left intervals adjacent to the discontinuity point, *ξ*∗. This estimate entails one more condition for matching the solutions at the discontinuity. Namely, the following equations are to obey:

$$c^2\left( F(p_*^\pm, s_*) - \nu(p_*^\pm - r_*) \right) - \kappa \chi_*\, p_*^\pm\, G(p_*^\pm, s_*) = 0, \tag{36}$$

where the notations $p_*^\pm$, $s_*$ and $r_*$ stand for the unilateral limits of the variable $p$ and the bilateral limits of the variables $s$ and $r$, correspondingly. It is worth noting that subtracting the equalities (36) gives the first of the matching conditions (33).

First, let us address the waves which allow a single shock only. Then there must be only one discontinuity on the $\xi$-axis, so let us place the origin there and put $\xi_* = 0$. Consider the equation:

$$\mu(z)\left( F(x,z) - \nu(x-y) \right) - \kappa \chi(z)\, x\, G(x,z) = 0. \tag{37}$$

By conditions (36), a nonzero jump across the singularity in the variable $p$ due to a travelling wave presumes that Equation (37) has several distinct solutions $(x_i, y, z)$ with the same $y$, $z$; e.g., $z = s_*$, $y = r_*$, $x_i = p_*^+$ and $x_j = p_*^-$ for some $i \neq j$.

Assume there exists a nonempty set filled with solutions $(x, y, z)$ to Equation (37) such that $x \geq 0$, $z \geq 0$, and let the projection of this set onto the $yz$-plane along the $x$-axis cover some domain with multiplicity $N \geq 2$. For every point $(z, y)$ in this domain, let $P_i = P_i(y, z)$, $i = 1, 2, \dots, N$ be the coordinates of the projection of its pre-image onto the $x$-axis along the $yz$-plane. Then the construction of the travelling shock waves can go the following way: set

$$s_* = z, \quad r_* = y, \quad p_*^+ = P_i(y,z), \quad p_*^- = P_j(y,z), \quad i \neq j, \tag{38}$$

and then try to find the solutions $p^\pm = p^\pm(\xi)$, $r^\pm = r^\pm(\xi)$, $s^\pm = s^\pm(\xi)$ to Equation (32) that are defined and bounded for $\pm\xi > 0$ and match the data listed above when $\xi = 0$. Note that the values of $y$, $z$ are free to manipulate. One can try to obtain some pairs of conjugated waves by transposing $i$ and $j$. Since we have been assuming the projection to be an $N$-leaf covering mapping, there are $N(N-1)/2$ conjugated pairs of the datasets. In the next section, we will consider a case of a very simple implementation of the approach outlined above.

#### **5. The Inertial Limit for the Sedentary Prey**

At this point, we need more details regarding the reaction terms, *F* = *F*(*p*,*s*) and *G* = *G*(*p*,*s*). Henceforth, we will be assuming the following:

$$F = pf(p, s), \ f|\_{s=0} < 0, \qquad G = sg(p, s), \ g(0, 0) > 0, \ g\_p|\_{p=0} = -1, \ g\_s|\_{s=0} = -1. \tag{39}$$

The last two assumptions are equivalent to the inequalities $g_p|_{p=0} < 0$, $g_s|_{s=0} < 0$ modulo the scaling of the variables $p$, $s$, since the characteristic scales of these variables, $P$ and $S$, are still indefinite. For example, the Lotka–Volterra kinetics, normalized in accordance with (39), reads as

$$F = p(\gamma s - \beta), \quad G = s(\alpha - s - p), \qquad \alpha, \beta, \gamma = \text{const} > 0. \tag{40}$$

Thus, parameter *α* plays the part of the carrying capacity given the scaling adopted here.

Consider the Cauchy problem

$$S\_{\tau} = G(0, S), \quad S|\_{\tau=0} = a. \tag{41}$$

Assume there exists a number *a*<sup>0</sup> > 0 such that for every *a* ∈ [0, *a*0] problem (41) has a solution *S* = *S*(*τ*, *a*) defined on R, and such that the mapping *a* → *S*(*τ*, *a*) sends the segment [0, *a*0] to itself for every *τ* ∈ R. Consider also the following system of functional equations:

$$g(p,s) = 0, \quad f(p,s) = 0,\tag{42}$$

and assume there exists a positive solution $s = s_e > 0$, $p = p_e > 0$. The last assumption entails the existence of a strictly positive equilibrium for the general system (7)–(9). By an equilibrium, we mean a particular solution specified as follows: $q = 0$, $p = \text{const}$, $s = \text{const}$, so that both species distribute themselves homogeneously. A nonnegative equilibrium that is not strictly positive can make sense too. For example, the Lotka–Volterra kinetics (40) allows an equilibrium with $p = 0$, $s = \alpha$ for every parameter setting, and there exists the strictly positive equilibrium $s_e = \beta/\gamma$, $p_e = \alpha - s_e$ provided that $\beta < \alpha\gamma$.
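For the Lotka–Volterra kinetics (40), both the normalization (39) and the strictly positive equilibrium can be verified symbolically; a minimal sympy check:

```python
import sympy as sp

p, s = sp.symbols('p s', nonnegative=True)
alpha, beta, gamma = sp.symbols('alpha beta gamma', positive=True)

# Lotka-Volterra kinetics (40), already normalized per (39): F = p*f, G = s*g
f = gamma * s - beta
g = alpha - s - p

# Normalization checks from (39)
assert sp.diff(g, p).subs(p, 0) == -1       # g_p|_{p=0} = -1
assert sp.diff(g, s).subs(s, 0) == -1       # g_s|_{s=0} = -1
assert f.subs(s, 0) == -beta                # f|_{s=0} = -beta < 0

# The strictly positive equilibrium solves f = g = 0 (requires beta < alpha*gamma)
sol = sp.solve([f, g], [p, s], dict=True)[0]
assert sol[s] == beta / gamma                              # s_e = beta/gamma
assert sp.simplify(sol[p] - (alpha - beta / gamma)) == 0   # p_e = alpha - s_e
```

The positivity of $p_e = \alpha - \beta/\gamma$ is exactly the condition $\beta < \alpha\gamma$ stated in the text.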

Throughout the rest of this section, we will be considering the inertial limit and sedentary prey. So, we put *ν* = *δ* = 0. We also assume all the listed assertions on the kinetics to be true. The Lotka–Volterra kinetics defined in (40) meet such an assumption for *β* < *αγ*; at that, the ODE involved in the Cauchy problem (41) reads as *S<sup>τ</sup>* = *S*(*α* − *S*).

Given the assumptions made, there are at least two solutions, $x = P_i(y,z)$, $i = 1, 2$, to Equation (37) defined in some vicinity of every point $(y, s_e)$. At that, $P_1 \equiv 0$, while the function $P_2$ (in fact, a function of the single variable $z$) is defined implicitly by the equation $\mu(z) f(x,z) - \kappa z \chi(z) g(x,z) = 0$ with $P_2(s_e) = p_e$. However, we do not need much manipulation of the values $y$, $z$ here, since the choice is evident; namely, $x_1 = 0$, $x_2 = p_e$, $z = s_e$, and the values of $y$ are arbitrary. Then there are two pairs of travelling shock waves: one pair propagates at the speed $c = \sqrt{\mu(s_e)}$, and the other one propagates at the opposite speed. The formulae for both are the same, and they read as:

$$p = 0, \ r = y, \ s = S(\xi, s_e), \ \xi \in (-\infty, 0); \qquad p = p_e, \ r = y, \ s = s_e, \ \xi \in (0, \infty); \tag{43}$$

$$p = 0, \ r = y, \ s = S(\xi, s_e), \ \xi \in (0, \infty); \qquad p = p_e, \ r = y, \ s = s_e, \ \xi \in (-\infty, 0), \tag{44}$$

where *y* is an arbitrary constant.
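The profiles (43) involve only the prey relaxation $S(\cdot, s_e)$ from the Cauchy problem (41); for the Lotka–Volterra kinetics the ODE is logistic, $S_\tau = S(\alpha - S)$, and admits the classical closed form, so the exact wave can be evaluated directly. The parameter values below follow Figure 1; the function name `wave43` is of course ours, not the paper's.

```python
import numpy as np

# Parameters from Figure 1: alpha=1, beta=0.2, gamma=0.8, mu(s) = 1/(1+s), kappa=1
alpha, beta, gamma = 1.0, 0.2, 0.8
s_e = beta / gamma                 # strictly positive equilibrium: s_e = 0.25
p_e = alpha - s_e                  # p_e = 0.75
c = np.sqrt(1.0 / (1.0 + s_e))     # wave speed sqrt(mu(s_e)), approx 0.9

def S(tau, a):
    """Closed-form solution of the logistic problem (41): S_tau = S(alpha - S), S(0) = a."""
    return alpha * a / (a + (alpha - a) * np.exp(-alpha * tau))

def wave43(xi):
    """Profile (p, s) of wave (43): prey relaxes for xi < 0, predators fill xi > 0."""
    return (0.0, S(xi, s_e)) if xi < 0 else (p_e, s_e)

# Continuity of s across the shock at xi = 0 (condition [s] = 0 in (22))
assert np.isclose(S(0.0, s_e), s_e)
# Behind the front the prey density decays, so the wave moves towards smaller prey
assert S(-20.0, s_e) < 1e-6
```

The computed speed, about $0.894$, matches the value $c \approx 0.9$ quoted in the caption of Figure 1.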

Waves (43) and (44) are conjugated in the sense explained in the previous section, but they are not mirror images of one another. Although both species in both waves take the same equilibrium values within the areas settled by predators, outside them the prey spreads itself differently. Indeed, the solutions to the Cauchy problem (41) generically behave differently for $\pm\tau > 0$ (except for the equilibria).

In Figure 1, the left and middle frames show the profiles of waves (43) and (44), correspondingly. For both waves, the front of the predators' invasion moves towards the smaller concentration of prey. This feature is not as paradoxical as it may seem, given the locality of the predator–prey interactions.

An overlay of waves (44) and (43) gives examples of a finite predators' mass localized within a patch that moves uniformly. Such a wave propagates with two shocks at the speed $c = \sqrt{\mu(s_e)}$ or at the opposite speed. The formulae read as:

$$\begin{aligned} &p = 0, \ r = y, \ s = S(\xi - \xi_0, s_e), \quad \xi < \xi_0, \\ &p = p_e, \ r = y, \ s = s_e, \quad \xi \in (\xi_0, \xi_1), \quad \xi_1 - \xi_0 = M/p_e, \\ &p = 0, \ r = y, \ s = S(\xi - \xi_1, s_e), \quad \xi > \xi_1, \end{aligned} \tag{45}$$

where the values of $y$ and $M > 0$ are arbitrary constants. At that, the value of $M$ stands for the mass of the patch. It is worth noting that the shapes of the patchy waves differ depending on the sign of the wave speed, $c$. The prey is dying out to the left (right) of the predators' patch for negative (positive) $c$, so that the patch moves towards the smaller concentration of prey (see the right panel in Figure 1). In Section 7, we return to these waves to discuss their relevance to the population dynamics.

**Figure 1.** The figure shows the propagation of waves (43)–(45) calculated for the Lotka–Volterra kinetics (40) (from the left to the right). Each frame shows three instantaneous profiles for both the predator and the prey densities. In the left and middle frames, the blue (green) coloured lines are for the former (latter), while the solid, dashed and dotted lines picture the profiles taken at *t* = 0, *t* = 3/2 and *t* = 3. The right frame addresses the wave that transports a patch filled with the unit mass of predators. At that, the dashed (solid) lines mark out the densities of the predators (prey). The colours green, red and blue are for the shots taken at *t* = 0, *t* = 3/2 and *t* = 3 (so that the wave speed is negative). All three panels correspond to the Lotka–Volterra kinetics (40), where $\alpha = 1$, $\beta = 0.2$, $\gamma = 0.8$. At that, the diffusivity function reads as $\mu = (1+s)^{-1}$, so that $\kappa = 1$, and the equality (27) determines the sensitivity, $\chi$. These definitions give the wave speed $c = \sqrt{\mu(s_e)} \approx 0.9$. Finally, recall that the figure regards the diffusionless and inertial limit; hence $\delta = \nu = 0$.

#### **6. Numerical Experiments**

In this section, we present the numerical solutions to system (7)–(9) that we formulate with the use of the Lotka–Volterra kinetics (40) and the predators' diffusivity $\mu(s) = (1 + \kappa s)^{-1}$. The diffusivity determines the sensitivity, $\chi(s)$, by the simplified integrability condition (27). For the numerical implementation, we eliminate the flux, $q$, from this system with the use of the ansatz by K. Hadeler (see the references provided above). As a result, we arrive at the following equations:

$$p_{tt} + \nu(p_t - F(p,s)) = (\mu(s)p)_{xx} + (F(p,s))_t, \tag{46}$$

$$s_t = G(p,s) + \delta s_{xx}. \tag{47}$$

Substituting this second-order system for the original one (which is of first order) requires ad hoc initial conditions. These are:

$$p|_{t=0} = p_0, \quad s|_{t=0} = s_0, \quad p_t|_{t=0} = F(p_0, s_0) - q_{0x}, \tag{48}$$

where the notations of *p*0, *q*0,*s*<sup>0</sup> stand for the functions which determine the initial values of the dependent variables of the original system (7)–(9). The numerical implementation also requires restricting the solution within a finite spatial domain and formulating suitable boundary conditions. So, we set

$$p_x|_{x = \pm L} = 0, \quad s_x|_{x = \pm L} = 0. \tag{49}$$

It turns out that the numerical solving of the initial-boundary problems (46)–(49) is quite feasible with the *Maple* built-in PDE solver.
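For readers without access to Maple, the reduced problem (46)–(49) also yields to a straightforward method-of-lines discretization. The Python sketch below is illustrative only: the values of $\nu$, $\delta$, the grid, and the smoothed initial data are our assumptions, not the author's exact setup (for that, see Appendix A).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (kinetics values follow Figure 1; nu, delta assumed small)
alpha, beta, gamma = 1.0, 0.2, 0.8
kappa, nu, delta = 1.0, 0.05, 0.05
L, N = 10.0, 201
x = np.linspace(-L, L, N)
h = x[1] - x[0]

mu = lambda s: 1.0 / (1.0 + kappa * s)       # predators' diffusivity
F = lambda p, s: p * (gamma * s - beta)      # Lotka-Volterra kinetics (40)
G = lambda p, s: s * (alpha - s - p)

def lap(v):
    """Second difference with the homogeneous Neumann conditions (49)."""
    w = np.empty_like(v)
    w[1:-1] = v[2:] - 2.0 * v[1:-1] + v[:-2]
    w[0] = 2.0 * (v[1] - v[0])               # mirror ghost node at x = -L
    w[-1] = 2.0 * (v[-2] - v[-1])            # mirror ghost node at x = +L
    return w / h**2

def rhs(t, y):
    p, u, s = y[:N], y[N:2*N], y[2*N:]       # u = p_t
    s_t = G(p, s) + delta * lap(s)           # Equation (47)
    F_t = (gamma * s - beta) * u + gamma * p * s_t   # chain rule for (F(p,s))_t
    u_t = lap(mu(s) * p) + F_t - nu * (u - F(p, s))  # Equation (46)
    return np.concatenate([u, u_t, s_t])

# Smoothed-step data mimicking wave (43); q0 = c*p0, so p_t(0) = F - q0_x per (48)
s_e, p_e = beta / gamma, alpha - beta / gamma
c = np.sqrt(mu(s_e))
p0 = p_e / (1.0 + np.exp(-4.0 * x))
s0 = np.where(x > 0.0, s_e, s_e * np.exp(x))  # crude prey tail for x < 0
u0 = F(p0, s0) - np.gradient(c * p0, x)

sol = solve_ivp(rhs, (0.0, 0.5), np.concatenate([p0, u0, s0]), method='RK45')
assert sol.success and np.isfinite(sol.y).all()
```

The mirrored ghost nodes in `lap` implement the no-flux conditions (49); the choice $q_0 = c\, p_0$ reproduces the shock-wave-consistent initialization described in the next paragraph.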

The numerical solution should cover a spatial interval wide enough to put the artificial boundaries far outside the domain in which the phenomena of interest occur. We checked all the results below by widening the spatial area and found that they reproduce themselves consistently for values of *L* greater than 8. In particular, *L* = 10 for all the figures presented in this section. In a similar way, we checked the influence of refining the mesh and of varying the level of smoothing used for preparing the initial data, and found the results of this inspection quite satisfactory.
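For readers without *Maple*, the elimination scheme can be mimicked by a short method-of-lines sketch. The kinetics below are a hypothetical Lotka–Volterra form chosen only for illustration (consistent with the equilibrium *s<sup>e</sup>* = *β*/*γ*, *p<sup>e</sup>* = *α* − *s<sup>e</sup>* quoted later, but not taken from Equation (40) itself), and the discretization is ours, not the author's:

```python
import math

# Method-of-lines sketch for the system (46)-(47) with hypothetical kinetics
# F(p,s) = p*(gamma*s - beta), G(p,s) = s*(alpha - s - p); these forms are
# ASSUMED for illustration only (they reproduce s_e = beta/gamma,
# p_e = alpha - s_e) and this is NOT the Maple code used by the author.
alpha, beta, gamma = 1.0, 0.2, 0.8
kappa, nu, delta = 1.0, 0.1, 0.01

def F(p, s):
    return p * (gamma * s - beta)

def G(p, s):
    return s * (alpha - s - p)

def mu(s):                    # predators' diffusivity, Eq. (A1)
    return 1.0 / (1.0 + kappa * s)

L, N = 10.0, 200
h = 2.0 * L / N
xs = [-L + i * h for i in range(N + 1)]

# a smooth predator bump amid prey at carrying capacity, zero initial p_t
p = [0.5 * math.exp(-x * x) for x in xs]
s = [alpha] * (N + 1)
w = [0.0] * (N + 1)           # w = p_t

def lap(u, i):                # second difference with reflecting (Neumann) ends
    im = i - 1 if i > 0 else 1
    ip = i + 1 if i < N else N - 1
    return (u[im] - 2.0 * u[i] + u[ip]) / (h * h)

dt = 0.2 * h                  # CFL-like step: the wave speed is at most 1 here
for _ in range(200):
    mp = [mu(s[i]) * p[i] for i in range(N + 1)]
    ds = [G(p[i], s[i]) + delta * lap(s, i) for i in range(N + 1)]
    # chain rule: (F(p,s))_t = F_p * p_t + F_s * s_t
    dF = [(gamma * s[i] - beta) * w[i] + gamma * p[i] * ds[i]
          for i in range(N + 1)]
    for i in range(N + 1):    # update the velocity w first (stable ordering)
        w[i] += dt * (lap(mp, i) + dF[i] - nu * (w[i] - F(p[i], s[i])))
    for i in range(N + 1):
        p[i] += dt * w[i]
        s[i] += dt * ds[i]

assert all(math.isfinite(v) for v in p + s + w)
```

Updating *w* = *p<sup>t</sup>* before *p* is the usual trick that keeps the explicit treatment of the wave part stable under the CFL-like step restriction.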

The first set of numerical experiments answers the question of whether the shock waves persist for positive values of the resistivity, *ν*, and the prey diffusivity, *δ*. To this end, we took the profiles of the species' densities, *s* and *p*, from the waves (43)–(45) and used them as the initial profiles, *s*0 and *p*0, which enter the initial conditions (48). In doing so, we smoothed the shocks slightly. Further, we put *q*0 = *c*∗ *p*0, where *c*∗ = √*µ*(*s<sup>e</sup>*) is the shock wave speed. This choice is consistent with the definition of the shock waves. Appendix A provides the concrete details of setting the initial conditions, the values of the control parameters, etc. The answer to the question formulated above is fully affirmative. The shocks become a bit smoother but keep propagating at an almost constant speed that is nearly equal to the value of *c*∗. Figure 2 shows the typical behavior of the slightly smoothed counterpart of the patchy shock wave. This figure tells us that the smoothed patch spreads as a kind of soliton, which is shaped rather sharply for small resistivity and prey diffusivity. An increase in the resistivity produces scattering of the predators behind the rear front of the wave.

The next piece of computing addresses the interactions of the observed solitary waves with some perturbations applied initially. These are:


In cases (a) and (b), the smallness means that the magnitudes of the mutual displacements (deformations) are approximately ten percent of the magnitude of the main patch. The smallness of the droplet means that its mass is ten times smaller than the mass of the main patch, while both are localized in intervals of nearly equal length. Appendix A gives the exact formulation of the initial conditions and the concrete values of the control parameters.

In cases (a) and (b), the effects of the initial perturbations manifest themselves mostly through the scattering of predators, which goes almost the same way as shown in the bottom row of frames in Figure 2, without any qualitative distinctions. So, we do not illustrate these cases. Case (c) is similar, but the shedding of predators is rather intensive. We illustrate this case in the top row of frames in Figure 3. Case (d) demonstrates a very peculiar interplay of solitary waves. Namely, the main patch and the droplet attract each other until they clash. Then they play leapfrog: the droplet climbs over the patch and rolls down its other side. The patch drops some mass due to the scattering but keeps moving at almost the same speed. The droplet keeps moving too, but in the opposite direction, while getting bigger and sharper. We illustrate this case in the bottom row of frames in Figure 3.

**Figure 2.** The figure illustrates the propagation of the solitary waves that are the slightly smoothed counterparts of the patchy shock wave (45) when the resistivity, *ν*, and prey diffusivity, *δ*, take small values. Appendix A gives an accurate exposition of all the parameter values and initial data used for producing the pictures displayed herein. The upper row of frames shows three distributions of the predators over the (*x*, *t*)-plane which arise from propagating the shock and smoothed patchy waves. The left upper frame displays the former, while the central and right frames display the latter for two different values of the resistivity. Both values are small, but the one corresponding to the central frame is substantially smaller. The saturation of blue indicates the predators' density. The central and lower rows animate the propagation of the smoothed patchy waves shown in the middle and right frames of the upper row. For comparison, the patchy shock wave (shown in the left upper frame) is animated in the same frames synchronously. The solid (dotted) lines capture the former (latter), and their coloring distinguishes the species. Namely, for both waves, the profile of the predators' (prey) density is blue (green) colored.

The results presented above confirm the stability of the travelling patchy waves. Figure 4 demonstrates the results of a more extreme crash test. Appendix A provides a detailed description of the initial states and other settings used for this piece of computing. It is worth recalling that every travelling patchy wave has a counterpart that propagates at the opposite speed. Gluing such a pair provides the initial state for computing the next five patterns pictured in Figure 4. Overall, they illustrate the collision of two patchy waves. The remarkable feature is that the collision gives rise to an expansion wave, at the fronts of which peaks arise and sharpen in the course of its propagation. The mirror symmetry of the patterns is due to the mirror symmetry of the problem and the initial data.

**Figure 3.** The figure illustrates the evolution of the initial perturbations described in clauses (c) and (d) on page 10. In particular, the bottom row demonstrates the leapfrog play. The detailed description of the initial states and all other settings used for computing this figure is in Appendix A.

**Figure 4.** The figure illustrates a collision of two patchy waves which propagate at opposite speeds. The blue (green) lines are for the instant profiles of the predators' (prey) density. The detailed description of the initial states and other settings used for computing this figure is in Appendix A.

Further, we proceed with an asymmetrical initial configuration that arises from perturbing the travelling patchy wave in a way similar to that used in case (d) above. However, the perturbation is not small this time, as its mass is equal to the patch mass. Appendix A provides a detailed description of the initial states and other settings used for this piece of computing. In Figure 5, the two rows of images display two ways of evolution of this configuration for two different values of the resistivity, *ν*. The top row regards the smaller resistivity. The initial stage of evolution is qualitatively like that reported for case (d) above. New features arise after the leapfrog play. Among them, the most notable is the spike that springs up suddenly at the foremost bound of the main patch. The other core becomes sharper too. The bottom row shows the changes due to increasing the resistivity to a considerable extent. It is easy to see a powerful smoothing, which emerges from forcing out the waves by the equilibrium state. The rightmost frame indeed allows us to see that the values of the densities are close to the equilibrium ones near the origin. Here it is worth recalling that, for the kinetics (40), the equilibrium densities are *p<sup>e</sup>* = *α* − *s<sup>e</sup>*, *s<sup>e</sup>* = *β*/*γ*. So, *p<sup>e</sup>* ≈ 0.43, *s<sup>e</sup>* ≈ 0.57 for the parameter values adopted for computing the patterns presented herein.
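The quoted numbers follow directly from the equilibrium relations; the ratio *β*/*γ* below is a hypothetical value that reproduces them (the actual parameters are listed in Table A1):

```python
# Equilibrium densities for the kinetics (40): s_e = beta/gamma, p_e = alpha - s_e.
alpha = 1.0
beta_over_gamma = 0.57        # hypothetical ratio reproducing the quoted values
s_e = beta_over_gamma
p_e = alpha - s_e
assert abs(p_e - 0.43) < 1e-12 and abs(s_e - 0.57) < 1e-12
```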

The numerical experiments illustrated above are about the travelling shock waves presented in Section 5. The system we deal with, however, hides a multitude of interesting features, one of which is the formation of peaks after a collision of the travelling patchy wave with another pattern, which is not necessarily a wave of the same type. In Figure 6, we illustrate the occurrence of a similar feature irrespective of the patchy waves. Namely, we consider an explosion wave due to a unit mass of the predators smoothly localized in a compact area at the initial time moment with zero initial flux, *q*0. The initial density of prey is everywhere equal to the carrying capacity, which, effectively, takes the value of the parameter *α* given the scaling applied here. At the same time, it is the value of the carrying capacity that corresponds to the prey density at the equilibrium state with no predators. Then the initial core of the predators stands as a perturbation, which is not small though localized. The bottom (top) row of frames corresponds to parameter values such that the equilibrium state with a positive predators' density is possible (impossible). The mirror symmetry of the patterns is due to the mirror symmetry of the problem and initial data, the detailed description of which is in Appendix A. Both rows demonstrate an explosion of the initial core accompanied by spreading of the predators across a widening areal extent with peaks at the bounds. Outside these boundary layers, both densities tend to the equilibrium values. If the positive equilibrium density of the predators is not feasible, then the boundary peaks smooth out and decay by degrees. Otherwise, these peaks become sharper and higher.

**Figure 5.** The figure illustrates two ways of evolution of a heavily perturbed patchy wave. The perturbation is smoothly localized ahead of the patch and set moving instantly towards the patch. The meaning of the colors and line styles is the same as for Figure 3. The detailed description of the initial states and other settings used for computing this figure is in Appendix A.

**Figure 6.** The figure illustrates the propagation of two waves due to the explosion of a smoothly localized unit mass of predators amid the uniform distribution of prey, the density of which is equal to the carrying capacity. The blue lines are for the instantaneous graphs of the predators' density. The green lines are for the deviation of the prey density from the carrying capacity. The detailed description of the initial states and other settings used for computing this figure is in Appendix A.

#### **7. Discussion**

We have addressed a Cattaneo-type dynamics of the predator–prey system with the Lotka–Volterra kinetics term in one spatial dimension. By assumption, only the predators are capable of perceptual motion, and their flux generally reads as *pχ*(*p*,*s*)*s<sup>x</sup>* − *µ*(*p*,*s*)*px*, where *µ*(*p*,*s*) and *χ*(*p*,*s*) are prescribed functions. To start with, we considered sedentary prey. In this approximation, the governing equations form a strictly hyperbolic system, for which we have formulated the criterion for reduction to Riemann's invariants explicitly in terms of the sensitivity, *χ*(*p*,*s*), and the diffusivity, *µ*(*p*,*s*). It has turned out, however, that the class of systems obeying this criterion looks rather artificial. At the same time, reducing the governing equations to conservation laws happened to be less restrictive. In particular, such a reduction is possible provided that *µ* = *µ*(*s*), *κχ*(*s*) = −*dµ*/*ds*. Since there are biological rationales for this structural relation (see the reference above), we have accepted it.

For systems of conservation laws, shock waves are natural, and we have derived a system of conditions on the shocks. In the inertial limit (that is, for *ν* = 0), we have discovered very simple exact solutions that describe travelling shock waves. The wave pattern represents a semi-infinite or even finite patch of predators that propagates at a constant speed. There are no predators outside the patch, while both species coexist in the equilibrium state inside; see Figure 1. The wave speed is equal to ±√*µ*(*s<sup>e</sup>*), where *s<sup>e</sup>* is the prey density at the equilibrium state.
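As a quick sanity check, the speed formula can be evaluated for the parameter values quoted in the caption of Figure 1 (a sketch; the variable names are ours):

```python
import math

# Wave speed c = sqrt(mu(s_e)) with mu(s) = 1/(1 + kappa*s), s_e = beta/gamma,
# for the Figure 1 parameters beta = 0.2, gamma = 0.8, kappa = 1.
kappa, beta, gamma = 1.0, 0.2, 0.8
s_e = beta / gamma                     # prey density at the equilibrium state
c = math.sqrt(1.0 / (1.0 + kappa * s_e))
assert abs(c - 0.8944) < 1e-3          # matches the quoted c ≈ 0.9
```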

The waves carrying semi-infinite patches describe transitions between two equilibrium states. The community goes either from the extinction of both species to coexistence at the equilibrium, or from coexistence at the equilibrium to the extinction of the predators and the restoration of the prey up to the carrying capacity. In this sense, they resemble the KPP–Fisher waves. Here, however, the extinction of predators means that the wave either has not yet brought them to the areal extent under consideration or has already moved them out. The interesting feature is that the front of the predators' invasion always moves towards the smaller concentration of prey. This is not as paradoxical as it can seem, given the locality of the predator–prey interactions. Thus, two patterns of behavior emerge from propagating these waves. The first is the restoration of the resource up to the carrying capacity after the migrating predators' withdrawal. The second is the transition from the prey dying out to the mutual equilibrium with the spreading predators. The travelling shock waves carrying a finite mass of the predators combine both features. Indeed, the profile of such a wave involves the transitions from mutual extinction via coexistence to the extinction of predators. Besides, the patch of predators again moves towards the smaller concentration of prey. So, the corresponding pattern of behavior looks like preventing the prey from dying out and even restoring the prey population due to the migration of a compact core of predators.

We have extended the travelling shock waves to positive values of the prey diffusivity, *δ*, and the resistivity, *ν*, numerically. It turns out that they withstand such an extension, at least while the values of *δ* and *ν* remain small enough. The shocks smooth themselves out, but the speed at which they propagate is nearly equal to the value of *c* = ±√*µ*(*s<sup>e</sup>*).

The shock waves carrying a finite mass of predators transform themselves into smooth soliton-like waves, to which we have paid particular attention; see Figure 2. An interesting feature is the shedding of predators behind the rear front of the wave as the resistivity increases. As a result, the core of migration leaves behind itself a populated areal extent, which remains settled even when the migration core moves far ahead. Thus, propagating the patchy wave can play a part in settling the predators due to migration.

A relatively small predators' droplet that occurs instantly ahead of or behind the main core of the soliton-like wave enhances the above scattering but does not cause any other noticeable changes (see Figure 3). The main core never absorbs the droplet in the case of a collision, but both keep moving after a peculiar interaction resembling leapfrog play. Quite a different pattern of behavior arises from colliding cores which have gathered equal masses of the predators (see Figures 4–6). We have examined the collisions for several pairs of cores belonging to the following classes: (i) two identical smoothed travelling patchy waves which propagate at opposite speeds; (ii) a compact group of predators that occurs suddenly ahead of the predators' patch of a travelling wave and gets moving towards it; (iii) a finite mass of predators that has suddenly landed on a compact part of a greater areal extent where the resource density has attained the value of the carrying capacity. Merging the cores immediately leads to local overpopulation and a lack of prey, which, in turn, gives rise to an explosive migration. The explosion wave spreads the predators uniformly across a widening area, while boundary layers of high density occur near the wave fronts. Deep in this area, the species coexist at equilibrium if feasible; otherwise, the predators become extinct, and the prey density goes back to the carrying capacity. Various numerical experiments have reproduced this pattern of behavior consistently, though not literally, of course. Thus, we conclude that the explosive waves that we have been discussing deliver a mechanism for overcoming local overpopulation and lack of resources.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The author acknowledges the support from the Southern Federal University.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **Appendix A. Initial Data**

To implement the numerical experiments, the outcomes of which we have discussed above, we numerically solved several initial-boundary value problems, each of which comprises Equations (46) and (47) with the Lotka–Volterra kinetics (40), boundary conditions (49), and initial conditions (48). These problems depend on the control parameters *α*, *β*, *γ*, *δ*, *κ*, *ν*, *L*. We have also used the parameter *ε* for smoothing some initial conditions. Table A1 displays the values of these parameters, and the subsections below explain the setting of the initial conditions for the concrete computations. We also recall that, throughout all the computations, the predators' diffusivity reads as:

$$
\mu(s) = \frac{1}{1 + \kappa s}.\tag{A1}
$$

The relation (27) then determines the predators' sensitivity, *χ*.

**Table A1.** The correspondence between the Figures presented above and the values of the parameters that enter Equations (46) and (47) and initial conditions (48).


*Appendix A.1. Data for Figure 2*

The initial conditions are:

$$2p_0 = p_e\,\mathrm{erf}\left(\frac{x - x_0}{2\varepsilon c_*}\right) - p_e\,\mathrm{erf}\left(\frac{x - x_1}{2\varepsilon c_*}\right), \quad p_e = \alpha - \frac{\beta}{\gamma}, \quad q_0 = c_* p_0, \tag{A2}$$

and *s*0 is given by equality (45), where *ξ* = *x*/*c*∗, *ξ*0 = *x*0/*c*∗, *x*0 = −1/*p<sup>e</sup>*, *x*1 = 0, *ξ*1 = *x*1/*c*∗, *c*∗ = √*µ*(*s<sup>e</sup>*) and *s<sup>e</sup>* = *β*/*γ*.
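The initial data (A2) are straightforward to reproduce; a minimal sketch (the helper names are ours, and *ε* = 0.05 is an illustrative smoothing width, since the actual value is listed in Table A1):

```python
import math

# Initial patch profile (A2): a plateau of height p_e between the smoothed
# shocks at x0 and x1, and the flux q0 consistent with the travelling wave.
alpha, beta, gamma, kappa = 1.0, 0.2, 0.8, 1.0
eps = 0.05                             # illustrative smoothing width
s_e = beta / gamma
p_e = alpha - beta / gamma             # plateau density inside the patch
c_star = math.sqrt(1.0 / (1.0 + kappa * s_e))
x0, x1 = -1.0 / p_e, 0.0               # rear and forward shock positions

def p0(x):
    return 0.5 * p_e * (math.erf((x - x0) / (2 * eps * c_star))
                        - math.erf((x - x1) / (2 * eps * c_star)))

def q0(x):
    return c_star * p0(x)

# midway inside the patch the smoothed profile sits on the plateau p_e
assert abs(p0(0.5 * (x0 + x1)) - p_e) < 1e-6
# far outside the patch there are no predators
assert abs(p0(5.0)) < 1e-9
```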

*Appendix A.2. Data for Figure 3*

As we have been saying on page 10, this Figure corresponds to the initial data that read as:

$$p_0 = p_w + P p_d, \quad q_0 = c_* p_w + Q p_d, \quad s_0 = s_w, \tag{A3}$$

where the notations of *p<sup>w</sup>* and *s<sup>w</sup>* stand for the same functions as those defined in Appendix A.1.

$$\begin{array}{rcl} P = 0.1,\ Q = 0,\ p_d &=& \exp(-3(x + 3)^2) \text{ for the top row,} \\ P = 0.1,\ Q = -0.1,\ p_d &=& \exp(-3(x - 3)^2) \text{ for the bottom row.} \end{array}$$

*Appendix A.3. Data for Figure 4*

In this Figure, all of the images correspond to the initial data that read as:

$$2p_0(x) := \begin{cases} p_w(x), & x > 0, \\ p_w(-x), & x < 0, \end{cases} \qquad p_w = p_e\,\mathrm{erf}\left(\frac{x - x_0}{2\varepsilon c_*}\right) - p_e\,\mathrm{erf}\left(\frac{x - x_1}{2\varepsilon c_*}\right), \tag{A4}$$

$$q_0 = -c_*\,\mathrm{erf}(3x)\, p_0(x), \tag{A5}$$

$$s_0 = \begin{cases} s_w(x), & x > 0, \\ s_w(-x), & x < 0, \end{cases} \tag{A6}$$

where the notation *s<sup>w</sup>* stands for the same function as that defined in Appendix A.1 with *ξ* = *x*/*c*∗, *ξ*0 = *x*0/*c*∗, *ξ*1 = *x*1/*c*∗, *c*∗ = √*µ*(*s<sup>e</sup>*) and *s<sup>e</sup>* = *β*/*γ*. Here *x*0 = −5 − 1/*p<sup>e</sup>*, *x*1 = −5.

*Appendix A.4. Data for Figure 5*

In this Figure, both rows of images correspond to the initial data that read as:

$$p_0 = p_w + p_d, \quad q_0 = c_*(p_w - p_d), \quad s_0 = s_w + s_d, \tag{A7}$$

where *p<sup>w</sup>* and *s<sup>w</sup>* are the same as those defined in Appendix A.1, and

$$p_d = s_d = \frac{\sqrt{3}\,\exp\left(-\frac{3(x - 3)^2}{2}\right)}{\sqrt{2\pi}}, \quad c_* = \sqrt{\mu(s_e)}, \quad s_e = \frac{\beta}{\gamma}.$$

*Appendix A.5. Data for Figure 6*

In this Figure, both rows of images correspond to the initial data that read as:

$$p_0 = \frac{5\,e^{-\frac{25x^2}{4}}}{2\sqrt{\pi}}, \quad q_0 = 0, \quad s_0 = \alpha. \tag{A8}$$
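The profile (A8) indeed carries a unit mass of predators, which is easy to verify by quadrature (a sketch, not code from the paper):

```python
import math

def p0(x):   # Eq. (A8): the smoothly localized initial core of predators
    return 5.0 * math.exp(-25.0 * x * x / 4.0) / (2.0 * math.sqrt(math.pi))

# trapezoidal rule over [-10, 10]; the Gaussian tails are negligible there
N, L = 4000, 10.0
h = 2.0 * L / N
mass = h * (sum(p0(-L + i * h) for i in range(1, N)) + 0.5 * (p0(-L) + p0(L)))
assert abs(mass - 1.0) < 1e-6          # the patch carries a unit mass
```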

#### **References**


**Jingli Fu <sup>1</sup> , Lijun Zhang 2,3,\* , Shan Cao <sup>4</sup> , Chun Xiang <sup>5</sup> and Weijia Zao <sup>6</sup>**


**Abstract:** In this paper, a symplectic algorithm is utilized to investigate constrained Hamiltonian systems. However, the symplectic method cannot be applied directly to constrained Hamiltonian equations due to their non-canonicity. We first discuss the canonicalization method for constrained Hamiltonian systems. The symplectic method is then applied to constrained Hamiltonian systems on the basis of the canonicalization, and the numerical simulation of the system is carried out. An example is presented to illustrate the application of the results. By using the symplectic method for constrained Hamiltonian systems, one can solve the singular dynamic problems of nonconservative constrained mechanical systems, nonholonomic constrained mechanical systems, and physical problems in quantum dynamics; the method is also applicable to many electromechanically coupled systems.

**Keywords:** constrained Hamiltonian system; canonicalization; symplectic method; numerical simulation

**MSC:** 37J60; 37J10; 37K05; 37K50

#### **1. Introduction**

Symplectic algorithms for constrained Hamiltonian systems were proposed in 1993 [1]. We know that the displacements *q* and momenta *p* of an object moving freely are given by a Hamilton canonical equation of the form [2]

$$
\dot{q} = \nabla_p H(p, q), \quad \dot{p} = -\nabla_q H(p, q), \tag{1}
$$

where *p*, *q* ∈ *R<sup>n</sup>*, and *H* : *R<sup>n</sup>* × *R<sup>n</sup>* → *R* is called the Hamiltonian function. A natural question is what happens when (1) is constrained by algebraic equations on *q* and/or *p*. That is, there are Hamiltonian constraints of the form *g*(*q*) = 0, which leads to the constrained Hamiltonian equations [3,4]

$$\dot{q} = \nabla_p H(p, q), \quad \dot{p} = -\nabla_q H(p, q) - \lambda G(q)^{T}, \tag{2}$$

where *g* : *R<sup>n</sup>* → *R<sup>m</sup>*, *G*(*q*) = *g<sup>q</sup>*(*q*) ∈ *R<sup>n×m</sup>* and *λ* ∈ *R<sup>m</sup>*. Equation (2) is called a constrained Hamiltonian system; the notion is relatively loose and covers general constrained mechanical systems. The flow of a Hamiltonian system like (1) possesses an important symplectic geometric structure. It has been observed in numerical experiments that symplectic methods with a fixed step size possess better long-term stability properties. Leimkuhler and Skeel [5] investigated symplectic numerical integrators of constrained Hamiltonian systems in molecular dynamics. By composition methods, Reich [6] studied

**Citation:** Fu, J.; Zhang, L.; Cao, S.; Xiang, C.; Zao, W. A Symplectic Algorithm for Constrained Hamiltonian Systems. *Axioms* **2022**, *11*, 217. https://doi.org/10.3390/ axioms11050217

Academic Editor: Hans J. Haubold

Received: 18 April 2022 Accepted: 30 April 2022 Published: 7 May 2022



symplectic integration of the constrained Hamiltonian systems. The method they proposed can reduce Hamiltonian differential-algebraic equations to ordinary differential equations in Euclidean space.

When studying symmetry properties of classical and quantum constrained systems, Li [7,8] found that, via the Legendre transformation, a singular Lagrangian system can be transformed into the phase space determined by generalized momenta and generalized coordinates. Since there are inherent constraints between the generalized momenta and generalized coordinates, it is named a constrained Hamiltonian system. A lot of important physical systems belong to this class, such as quantum electrodynamics, quantum flavor dynamics, and so on. Even many electromechanically coupled systems belong to constrained Hamiltonian systems. For a Lagrangian system, if the determinant det(∂<sup>2</sup>*L*/∂*q̇<sub>s</sub>*∂*q̇<sub>k</sub>*) vanishes, then it is named a singular Lagrangian system. The Lagrangian functions of supersymmetry, supergravity, and string theory are all singular. Therefore, the fundamental theory of constrained Hamiltonian systems plays an important role in modern quantum field theory [9].

In the late 1980s, Feng et al. established the so-called symplectic algorithms to study equations in Hamiltonian form and showed, by combining theoretical analysis and computer experimentation, that these methods are superior over long times [10,11]. The symplectic method has been widely recognized as a suitable numerical integrator with global conservation properties for canonical Hamiltonian systems. It has been well applied in test particle simulation and some physical experiments in plasma physics, and a series of results has thus been derived, for instance, a variational multi-symplectic particle-in-cell algorithm for the Vlasov–Maxwell system [12], practical symplectic partitioned Runge–Kutta and Runge–Kutta–Nyström methods [13], symplectic integrations of Hamiltonian systems [14], symplectic integrators of the Ablowitz–Ladik discrete nonlinear Schrödinger equation [15], etc. The standard symplectic scheme normally works for a canonical structure of the dynamical system. However, symplectic simulation of constrained Hamiltonian systems is beset with difficulties since constrained Hamiltonian systems are usually non-canonical.

In this paper, we will present a general procedure for constructing the canonical coordinates of constrained Hamiltonian systems. By defining a variable transformation and carrying out the corresponding calculations, the canonical variables for constrained Hamiltonian systems can be derived, and thus the constrained Hamiltonian systems are canonicalized. Once the canonical coordinates of constrained Hamiltonian systems are derived, one can employ the standard canonical symplectic methods to study them. The method we propose is of importance in the study of constrained Hamiltonian systems. We believe that the symplectic method for constrained Hamiltonian systems given in this paper can be used in the study of quantum dynamics, electromechanically coupled systems, and singular constrained dynamics as well.

To verify the effect of the canonicalization and illustrate the advantage of the canonical symplectic simulation, a numerical example of the constrained Hamiltonian system is presented. Clearly, the numerical results derived by the canonical symplectic method are more accurate in the long-term simulation since they can maintain conservation properties.

#### **2. Canonicalization of Constrained Hamiltonian Systems**

Assume that a mechanical system is determined by the generalized coordinates *q<sub>i</sub>* (*i* = 1, 2, . . . , *n*), and the Lagrangian function *L* = *L*(*t*, *q<sub>i</sub>*, *q̇<sub>i</sub>*) satisfies det(∂<sup>2</sup>*L*/∂*q̇<sub>s</sub>*∂*q̇<sub>k</sub>*) = 0. When the generalized momenta and the Hamiltonian of the system are constructed, there are inherent constraints between the canonical variables in the phase space

$$\phi_j(t, q_i, p_i) = 0 \quad (j = 1, 2, \dots, n - r;\ i = 1, 2, \dots, n). \tag{3}$$

This is the constraint equation between the generalized coordinates and the generalized momenta of the constrained Hamiltonian system.

Then the motion equations of a singular system can be written as [11]

$$\dot{q}\_{i} = \frac{\partial H\_{c}}{\partial p\_{i}} + \lambda\_{j} \frac{\partial \varphi\_{j}}{\partial p\_{i}}, \qquad \dot{p}\_{i} = -\frac{\partial H\_{c}}{\partial q\_{i}} - \lambda\_{j} \frac{\partial \varphi\_{j}}{\partial q\_{i}} \qquad (i = 1, 2, \dots, n) \tag{4}$$

where *H<sup>c</sup>* is the Hamiltonian of the system and *λ<sup>j</sup>* are the Lagrange multipliers. The multipliers in Formula (4) can be determined from Equations (3) and (4).

The motion Equation (4) of the constrained Hamiltonian system can be rewritten as

$$
\begin{pmatrix} \dot{p}_1 \\ \vdots \\ \dot{p}_n \\ \dot{q}_1 \\ \vdots \\ \dot{q}_n \end{pmatrix}
= \begin{pmatrix} \mathbf{0}_n & S_n \\ T_n & \mathbf{0}_n \end{pmatrix}
\begin{pmatrix} \frac{\partial H_c}{\partial p_1} \\ \vdots \\ \frac{\partial H_c}{\partial p_n} \\ \frac{\partial H_c}{\partial q_1} \\ \vdots \\ \frac{\partial H_c}{\partial q_n} \end{pmatrix}
= M_{2n \times 2n}
\begin{pmatrix} \frac{\partial H_c}{\partial p_1} \\ \vdots \\ \frac{\partial H_c}{\partial p_n} \\ \frac{\partial H_c}{\partial q_1} \\ \vdots \\ \frac{\partial H_c}{\partial q_n} \end{pmatrix},
\tag{5}
$$

where

$$S_n = \begin{pmatrix} -1 - \lambda_j \frac{\partial \varphi_j}{\partial H_c} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & -1 - \lambda_j \frac{\partial \varphi_j}{\partial H_c} \end{pmatrix}_{n \times n}, \quad T_n = \begin{pmatrix} 1 + \lambda_j \frac{\partial \varphi_j}{\partial H_c} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 + \lambda_j \frac{\partial \varphi_j}{\partial H_c} \end{pmatrix}_{n \times n}, \tag{6}$$

and *M*<sub>2*n*×2*n*</sub> is an anti-symmetric matrix.

Let *v* = (*p*, *q*)<sup>*T*</sup>, where *p* = (*p*<sub>1</sub>, *p*<sub>2</sub>, . . . , *p<sub>n</sub>*), *q* = (*q*<sub>1</sub>, *q*<sub>2</sub>, . . . , *q<sub>n</sub>*); then Equation (5) can be rewritten as

$$\dot{v} = K(v)^{-1} \nabla H\_{\mathfrak{c}}(v),\tag{7}$$

where

$$K(v) = \begin{pmatrix} \mathbf{0}\_n & T\_n^{-1} \\ S\_n^{-1} & \mathbf{0}\_n \end{pmatrix}. \tag{8}$$

It is easy to see that Equations (5) and (7) are non-canonical Hamiltonian systems.

To rewrite the non-canonical Hamiltonian system in canonical form, we let *Z* = Ψ(*v*) be the corresponding canonical variables, where Ψ is a transformation from *R*<sup>2*n*</sup> to *R*<sup>2*n*</sup>. *Z* = (*p̃*, *q̃*)<sup>*T*</sup> are the new variables after canonicalization. By the chain rule, the canonicalization of Equation (7) can be written as [11]

$$\dot{Z} = \left(\frac{\partial \Psi}{\partial v}\right) K(v)^{-1} \left(\frac{\partial \Psi}{\partial v}\right)^{T} \nabla \tilde{H}(Z),\tag{9}$$

where *H̃*(*Z*) = *H<sup>c</sup>*(*v*). If we let (∂Ψ/∂*v*) *K*(*v*)<sup>−1</sup> (∂Ψ/∂*v*)<sup>*T*</sup> = *J*<sup>−1</sup>, i.e.,

$$K(v) = \left(\frac{\partial \Psi}{\partial v}\right)^T \mathbf{J}\left(\frac{\partial \Psi}{\partial v}\right) \tag{10}$$

Note that *K*(*v*) is a given matrix and *v* = (*p*, *q*)<sup>*T*</sup> is the original variable, so we can get Ψ(*v*) through this transformation, which yields a set of canonical new generalized momenta *p̃* = (*p̃*<sub>1</sub>, *p̃*<sub>2</sub>, . . . , *p̃<sub>n</sub>*) and generalized coordinates *q̃* = (*q̃*<sub>1</sub>, *q̃*<sub>2</sub>, . . . , *q̃<sub>n</sub>*). Now, we have transformed the non-canonical Hamiltonian system into a canonical Hamiltonian system.

By substituting the new variables into the original Hamiltonian of the constrained system, it becomes canonical. Based on the canonical Hamiltonian equations, one can examine their properties and hence some useful algorithms can be applied to examine the numerical solutions and numerical simulation of the constrained Hamilton systems. The results of the original system can be obtained by replacing the new variables with the old ones.

#### **3. Symplectic Method for Constrained Hamiltonian Systems**

The constrained Hamiltonian system, transformed into the canonical form (9), is

$$\frac{dZ}{dt} = \mathbf{J}^{-1}\nabla \tilde{H}(Z),\tag{11}$$

that is, the canonical Hamiltonian system is

$$\frac{dZ}{dt} = \mathbf{J}^{-1}\nabla \tilde{H}(Z), \quad \mathbf{J} = \begin{pmatrix} 0 & I\_n \\ -I\_n & 0 \end{pmatrix}, \quad Z \in \mathbb{R}^{2n}. \tag{12}$$

We now show that the properties, conclusions, and calculation methods of canonical Hamiltonian systems can be extended to constrained Hamiltonian systems. We give the symplectic method for constrained Hamiltonian systems as follows.

A transformation of the constrained Hamiltonian system

$$\Psi: \mathbb{R}^{2n} \to \mathbb{R}^{2n}, \quad v = \left(\begin{array}{c} p \\ q \end{array}\right) \quad \to \tilde{Z} = \left(\begin{array}{c} \tilde{p} \\ \tilde{q} \end{array}\right) \tag{13}$$

is called a symplectic transformation for the system if its Jacobian is a symplectic matrix:

$$\left(\frac{\partial \tilde{Z}}{\partial Z}\right)^{\mathrm{T}}\mathbf{J}\left(\frac{\partial \tilde{Z}}{\partial Z}\right) = \mathbf{J} \Leftrightarrow \sum\_{k=1}^{n} d\tilde{p}\_{k} \wedge d\tilde{q}\_{k} = \sum\_{k=1}^{n} dp\_{k} \wedge dq\_{k}.\tag{14}$$
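The criterion (14) is easy to test numerically for a concrete one-step map. As an assumed test case (not an example from this paper), the sketch below applies the symplectic Euler map to the harmonic oscillator *H*(*p*, *q*) = (*p*² + *q*²)/2 with *n* = 1 and checks that its Jacobian satisfies (14).

```python
import numpy as np

# Check the symplecticity criterion (14) for a one-step map.
# Assumed test case: symplectic Euler on H(p, q) = (p^2 + q^2)/2.

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
tau = 0.1

def step(z):
    """Symplectic Euler step; explicit here since dH/dq = q, dH/dp = p."""
    p, q = z
    p_new = p - tau * q        # ptilde = p - tau * dH/dq
    q_new = q + tau * p_new    # qtilde = q + tau * dH/dp(ptilde)
    return np.array([p_new, q_new])

def num_jacobian(f, z, h=1e-6):
    """Central-difference Jacobian dZtilde/dZ."""
    cols = []
    for k in range(len(z)):
        e = np.zeros(len(z))
        e[k] = h
        cols.append((f(z + e) - f(z - e)) / (2.0 * h))
    return np.column_stack(cols)

M = num_jacobian(step, np.array([0.4, 0.7]))
print(np.allclose(M.T @ J @ M, J))   # prints True
```

For this linear map the Jacobian is constant, so the check succeeds at every point; for nonlinear maps, (14) must hold at each point of phase space.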

For the canonical Hamiltonian system (9), if

$$
\widetilde{p} = p - \tau \frac{\partial H}{\partial q}(\widetilde{p}, q), \quad \widetilde{q} = q + \tau \frac{\partial H}{\partial p}(\widetilde{p}, q), \tag{15}
$$

then it is a first-order symplectic scheme. When *H*(*p*, *q*) = *U*(*p*) + *V*(*q*), Equation (15) becomes

$$
\widetilde{p} = p - \tau \frac{\partial V}{\partial q}(q), \quad \widetilde{q} = q + \tau \frac{\partial U}{\partial p}(\widetilde{p}), \tag{16}
$$

which is an explicit symplectic scheme. For the canonical Hamiltonian system (9), the implicit midpoint rule is

$$
\widetilde{Z} = Z + \tau \mathbf{J}^{-1} \nabla H\left(\frac{\widetilde{Z} + Z}{2}\right), \tag{17}
$$

which is a second-order symplectic scheme. A Runge-Kutta method

$$\widetilde{Z} = Z + \tau \sum\_{i=1}^{m} b\_i \mathbf{J}^{-1} \nabla H(\mathcal{K}\_i), \quad \mathcal{K}\_i = Z + \tau \sum\_{j=1}^{m} a\_{ij} \mathbf{J}^{-1} \nabla H(\mathcal{K}\_j), \ i = 1, \dots, m,\tag{18}$$

is symplectic if and only if *b<sub>i</sub>b<sub>j</sub>* − *b<sub>i</sub>a<sub>ij</sub>* − *b<sub>j</sub>a<sub>ji</sub>* = 0 for all *i*, *j* = 1, . . . , *m*. In Equations (15)–(18), τ represents the time step size.
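The algebraic condition above can be checked mechanically for any Butcher tableau (*A*, *b*). The tableaux used below are standard examples, not taken from this paper: the one-stage Gauss method (i.e., the midpoint rule (17)) satisfies the condition, while the trapezoidal rule does not.

```python
# Check the symplecticity condition b_i*b_j - b_i*a_ij - b_j*a_ji = 0
# for a Runge-Kutta tableau (A, b).
def is_symplectic_rk(A, b, tol=1e-12):
    m = len(b)
    return all(abs(b[i] * b[j] - b[i] * A[i][j] - b[j] * A[j][i]) < tol
               for i in range(m) for j in range(m))

# One-stage Gauss method (midpoint rule): a11 = 1/2, b1 = 1.
print(is_symplectic_rk([[0.5]], [1.0]))                        # prints True
# Trapezoidal rule (2-stage Lobatto IIIA): fails the condition.
print(is_symplectic_rk([[0.0, 0.0], [0.5, 0.5]], [0.5, 0.5]))  # prints False
```

Note that a method may be accurate and stable and still fail this test; the condition captures exact preservation of the symplectic two-form, not order of accuracy.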

#### **4. Example**

The Lotka-Volterra model can be expressed as a non-canonical Hamiltonian system with *n* = 1

$$\begin{pmatrix}\dot{p}\\\dot{q}\end{pmatrix} = \begin{pmatrix}0 & -pq\\pq & 0\end{pmatrix} \nabla H(p,q) \,,\tag{19}$$

where *H*(*p*, *q*) = *p* − 2 log *p* + *q* − log *q*.

The Hamiltonian *H* can be rewritten as *H* = *H*<sub>1</sub> + *H*<sub>2</sub> with *H*<sub>1</sub> = *p* − 2 log *p* and *H*<sub>2</sub> = *q* − log *q*. According to the canonicalization method shown in Section 2, we have

$$\mathbf{K} = \begin{pmatrix} 0 & \frac{1}{pq} \\ -\frac{1}{pq} & 0 \end{pmatrix}. \tag{20}$$

*Axioms* **2022**, *11*, x FOR PEER REVIEW 5 of 7

According to Equation (10), we get

$$
\frac{\partial \widetilde{p}}{\partial p} \frac{\partial \widetilde{q}}{\partial q} - \frac{\partial \widetilde{p}}{\partial q} \frac{\partial \widetilde{q}}{\partial p} = \frac{1}{pq}, \tag{21}
$$

and a particular solution is

$$
\widetilde{p} = \log p, \quad \widetilde{q} = \log q. \tag{22}
$$

Hence, we have


$$p = \exp(\tilde{p}), \quad q = \exp(\tilde{q}) \tag{23}$$

and thus

$$\tilde{H}(\tilde{p}, \tilde{q}) = \exp(\tilde{p}) - 2\tilde{p} + \exp(\tilde{q}) - \tilde{q} \tag{24}$$

which is a canonical Hamiltonian system. Using the second-order explicit symplectic scheme on the basis of the canonicalization, we get the trajectory of the canonical variables *p̃*, *q̃*, where *p̃*(0) = ln 2, *q̃*(0) = ln 3, and time step size *τ* = 0.1 (see Figure 1).

**Figure 1.** Trajectory of the canonical variables *p̃*, *q̃*.

Using Equation (23) we can obtain *p*, *q*, with *p*(0) = 2, *q*(0) = 3 and time step size *τ* = 0.1; then, using the second-order explicit symplectic scheme on the basis of *p*, *q*, we get the trajectory of the non-canonical variables *p*, *q* (see Figure 2).
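The computation just described can be sketched in a few lines. This is a minimal illustrative version: it uses the first-order explicit scheme (16) with a smaller step *τ* = 0.05 rather than the second-order scheme used for the figures, with the same initial data *p̃*(0) = ln 2, *q̃*(0) = ln 3.

```python
import numpy as np

# Integrate the canonicalized Lotka-Volterra system (24) with the explicit
# symplectic scheme (16), H~ = U(ptilde) + V(qtilde), then map back via (23).
# Minimal sketch: first-order scheme, tau = 0.05 (the figures use a
# second-order scheme with tau = 0.1).
tau = 0.05
pt, qt = np.log(2.0), np.log(3.0)          # ptilde(0) = ln 2, qtilde(0) = ln 3

def H_tilde(pt, qt):                       # Equation (24)
    return np.exp(pt) - 2.0 * pt + np.exp(qt) - qt

H0 = H_tilde(pt, qt)
traj = []
for _ in range(2000):
    pt = pt - tau * (np.exp(qt) - 1.0)     # dV/dqtilde = exp(qtilde) - 1
    qt = qt + tau * (np.exp(pt) - 2.0)     # dU/dptilde = exp(ptilde) - 2
    traj.append((np.exp(pt), np.exp(qt)))  # back to (p, q) by Equation (23)

drift = abs(H_tilde(pt, qt) - H0)
print(drift < 0.5)   # energy error stays bounded for the symplectic scheme
```

Plotting `traj` reproduces the closed orbit of the non-canonical variables *p*, *q*; the bounded energy drift is the structure-preservation property discussed below.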




**Figure 2.** Trajectory of the non-canonical variables *p*, *q*.



In addition, the implicit Runge-Kutta method of order 3 is applied directly to the non-canonical Hamiltonian system, and then we get the trajectory of the original variables *p*, *q*, with *p*(0) = 2, *q*(0) = 3 and time step size *τ* = 0.1 (see Figure 3).


**Figure 3.** Trajectory of the non-canonical variables *p*, *q* obtained by the implicit Runge-Kutta method of order 3.

As can be seen from Figures 1 and 2, the trajectory diagrams of the regularized variables and the initial variables are kept unchanged by the symplectic algorithm. After 1,000,000 steps the graphs remain essentially unchanged, which indicates that the symplectic algorithm for constrained Hamiltonian systems preserves structure; that is, the physical properties of constrained Hamiltonian systems can be maintained by a symplectic method. One can see from Figure 3 that the trajectory produced by the third-order Runge-Kutta method (a general-purpose numerical method) is very unstable. This method does not preserve the structure, and therefore cannot maintain the physical properties of constrained Hamiltonian systems. The three figures show clearly that the symplectic algorithm has better structure-preserving properties, so it is of great significance to study constrained Hamiltonian systems using symplectic algorithms.
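The qualitative contrast described here can be reproduced in miniature. As an assumed stand-in (not an example from this paper), the sketch below compares the explicit Euler method, representing a generic non-symplectic integrator, with the symplectic scheme (15) on the harmonic oscillator *H*(*p*, *q*) = (*p*² + *q*²)/2.

```python
# Energy drift: explicit Euler (non-symplectic) vs. symplectic Euler (15),
# on the assumed test problem H(p, q) = (p^2 + q^2)/2. The non-symplectic
# method drifts without bound; the symplectic one stays bounded.
tau, n_steps = 0.1, 1000

def energy(p, q):
    return 0.5 * (p * p + q * q)

p, q = 0.0, 1.0
for _ in range(n_steps):                   # explicit Euler: energy grows
    p, q = p - tau * q, q + tau * p
drift_euler = abs(energy(p, q) - 0.5)

p, q = 0.0, 1.0
for _ in range(n_steps):                   # symplectic Euler: energy bounded
    p = p - tau * q
    q = q + tau * p
drift_symp = abs(energy(p, q) - 0.5)

print(drift_symp < 0.1 < drift_euler)      # prints True
```

For explicit Euler the energy is multiplied by (1 + *τ*²) at every step and grows without bound, while the symplectic scheme exactly conserves a nearby modified Hamiltonian, which keeps the energy error bounded for all time.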

#### **5. Conclusions**

In this paper, we discuss the canonicalization method for constrained Hamiltonian systems; the symplectic method is then applied to constrained Hamiltonian systems on the basis of the canonicalization. Compared with the traditional Runge-Kutta method, symplectic methods have better structure-preservation properties. Consequently, symplectic methods can be applied to more non-canonical Hamiltonian systems, which will be further investigated in our next work.

**Author Contributions:** Conceptualization, methodology, validation, and supervision, J.F. and L.Z.; formal analysis and investigation, C.X.; writing—original draft preparation, S.C.; writing—review and editing, L.Z.; W.Z. symplectic algorithm numerical simulation. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China, grants 11872335, 12172199, and 11672270.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

