**1. Introduction**

Hopfield, Cohen–Grossberg and similar neural networks have been actively studied recently due to their applications in physics and engineering [1–4]. Hopfield neural networks (HNNs) have found many applications in associative memory, repetitive learning, classification of patterns, optimization problems and many others.

Today, two basic mathematical models are employed for neural network research: either local field neural network models or static neural models. The basic model of local field neural network is described as

$$\frac{dx\_i(t)}{dt} = -x\_i(t) + \sum\_{j=1}^n w\_{ij} g\_j(x\_j(t)) + I\_i, \quad i = 1, 2, \dots, n,\tag{1}$$

where *gi* is a function of the *i*th neuron activation, *xi* is the state of the *i*th neuron, *Ii* is the external input imposed on the *i*th neuron, *wij* stands for the synaptic connectivity value between the *j*th neuron and the *i*th neuron, and *n* is the number of neurons in the network.

**Citation:** Boykov, I.; Roudnev, V.; Boykova, A. Stability of Solutions to Systems of Nonlinear Differential Equations with Discontinuous Right-Hand Sides: Applications to Hopfield Artificial Neural Networks. *Mathematics* **2022**, *10*, 1524. https://doi.org/10.3390/math10091524

Academic Editor: Maria C. Mariani

Received: 31 March 2022 Accepted: 29 April 2022 Published: 2 May 2022


A static neural network is defined by the system of equations

$$\frac{dx\_i(t)}{dt} = -x\_i(t) + g\_i\left(\sum\_{j=1}^n w\_{ij} x\_j(t) + I\_i\right), \quad i = 1, 2, \dots, n,\tag{2}$$

where we used the same notation as above.

The local field neural network (1) was introduced by Hopfield, and it is this model that is usually referred to in the literature as the Hopfield neural network. The neural network (1) models bidirectional associative memory networks [5] and cellular neural networks [6].
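For intuition, the dynamics of model (1) can be simulated directly. The following Python sketch performs a forward-Euler integration of (1) for a two-neuron network; the weights, inputs, activation function and step size are hypothetical illustration values, not taken from any of the cited works:

```python
import math


def hopfield_step(x, W, I, dt, g=math.tanh):
    """One forward-Euler step of dx_i/dt = -x_i + sum_j w_ij g(x_j) + I_i."""
    n = len(x)
    return [
        x[i] + dt * (-x[i] + sum(W[i][j] * g(x[j]) for j in range(n)) + I[i])
        for i in range(n)
    ]


def simulate(x0, W, I, dt=0.01, steps=5000):
    """Integrate model (1) from the initial state x0."""
    x = list(x0)
    for _ in range(steps):
        x = hopfield_step(x, W, I, dt)
    return x


# Hypothetical example: a symmetric 2-neuron network relaxing to an equilibrium.
W = [[0.0, 0.5], [0.5, 0.0]]
I = [0.1, -0.1]
x_final = simulate([1.0, -1.0], W, I)
```

At an equilibrium, the right-hand side of (1) vanishes, so the final state should satisfy x_i = Σ_j w_ij tanh(x_j) + I_i up to integration error.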

Static neural networks (2) are often referred to as Cohen–Grossberg networks. They are widely used in optimization problems and in modeling brain processes, so-called brain-state-in-a-box neural networks [7]. The research on stability for models such as (2) was opened with the classical work [8].

The stability results for the basic model (2) as well as the results for a more general model,

$$\frac{dx\_i(t)}{dt} = a\_i(x\_i)\left[b\_i(x\_i) - \sum\_{j=1}^n c\_{ij} \rho\_j(x\_j)\right], \quad i = 1, 2, \dots, n,$$

were obtained in [8,9].

Below, we present a brief review of the papers devoted to the stability of solutions for systems of ordinary differential equations with discontinuous right-hand sides. Here, we examine the stability of dynamic neural networks and obtain sufficient conditions for their absolute and local asymptotic stability. The fixed points of a neural network are associated with local minima of the network energy function. Interest in finding local minima stems from the study of the memory problem for neural networks: the more local minima a neural network has, the greater its potential memory.

When solving computational mathematics problems on neural networks, asymptotically stable networks are generally preferable.

The derivation of sufficient conditions for stable neural networks in general is a rather complicated problem. Its solution is known only in a few special cases.

In [10], sufficient conditions for the stability of neural networks (1) have been obtained based on Gershgorin circles. In [11], sufficient conditions for the global stability of solutions of systems of Equation (1) have been obtained by a mapping method. In [12], the results of works [10,11] have been generalized and new Lyapunov functions have been constructed.
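The Gershgorin-circle approach can be illustrated with a simple sufficient test: if the linearization A = −I + W·diag(g′) of (1) at an equilibrium has all of its Gershgorin discs strictly inside the left half-plane, every eigenvalue of A has negative real part and the equilibrium is locally asymptotically stable. The checker below is an illustrative sketch of this idea, not the specific conditions derived in [10]:

```python
def gershgorin_stable(W, gprime):
    """Sufficient (not necessary) stability test for the linearization
    A = -I + W * diag(g') of model (1): every Gershgorin disc of A must
    lie strictly in the left half-plane, i.e.
    a_ii + sum_{j != i} |a_ij| < 0 for every row i."""
    n = len(W)
    A = [[W[i][j] * gprime[j] - (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return all(
        A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i) < 0
        for i in range(n)
    )


# With small synaptic weights the condition holds; with large ones it fails
# (the test is then inconclusive, since the condition is only sufficient).
small_weights_ok = gershgorin_stable([[0.0, 0.4], [0.4, 0.0]], [1.0, 1.0])
```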

In [13], it was proven that the diagonal stability of the interconnection matrix implies the existence and uniqueness of an equilibrium and global stability of the equilibrium.

In [14], it was shown that the negative semidefiniteness of certain matrices ensures the stability of Hopfield neural networks described by Equation (1). In [15], a number of sufficient conditions for the local exponential stability of HNNs were presented. In [16], an algorithm based on matrix norms was applied to the study of nonlinear dynamical systems.

The stability of recurrent systems that model identification problems was investigated in [17].

Extensive literature is devoted to researching the stability of neural networks with various time delays [18–22].

When constructing a mathematical model, we have to abstract away from many phenomena, for example, from uncertainty. In [21], the stability of fuzzy cellular neural networks, based on the union of cellular neural networks and fuzzy logic methods, has been studied. The cellular neural networks are modeled by systems of differential equations with discontinuous right-hand sides.

Along with continuous activation functions, there are a great number of applications that are modeled by neural networks with discontinuous activation functions. Similar models have been studied in [23].

The theory of differential equations with discontinuous right-hand sides is given in [24].

Recall, following [24], the definitions of solutions for differential equations with discontinuous right-hand sides and their stability.

Consider an equation or a system of equations in vector form

$$\frac{d\mathbf{x}}{dt} = f(t, \mathbf{x}),\tag{3}$$

where *f*(*t*, *x*) is a piecewise continuous function or vector function in the domain Ω = {*x* ∈ *R<sup>n</sup>*, *t* ∈ [0, ∞)}, and *M* is a set of measure zero consisting of the discontinuity points of *f*(*t*, *x*).

With each point (*t*, *x*), we associate a set *F*(*t*, *x*) in *n*-dimensional space, constructed as follows. If the function *f*(*t*, *x*) is continuous at the point (*t*, *x*), the set *F*(*t*, *x*) consists of the single point *f*(*t*, *x*). If (*t*, *x*) is a discontinuity point of *f*, the set *F*(*t*, *x*) is defined according to the related physical problem. One such method is described in Section 1.5 of [25].
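For a scalar discontinuity of the first kind, one common choice (Filippov's convexification) takes F(x) to be the closed interval spanned by the one-sided limits of f. A minimal illustration for an autonomous scalar right-hand side; the helper names are our own:

```python
def filippov_set(f, x, eps=1e-8):
    """Filippov set F(x) for a scalar piecewise-continuous f:
    the closed interval spanned by the one-sided limits f(x-), f(x+),
    approximated here by evaluating f just to either side of x.
    At a continuity point this degenerates to a single value."""
    lo, hi = sorted((f(x - eps), f(x + eps)))
    return (lo, hi)


# The sign function has a discontinuity of the first kind at x = 0,
# where the Filippov set is the whole segment [-1, 1].
sgn = lambda x: (x > 0) - (x < 0)
```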

**Definition 1** ([24])**.** *By a solution of Equation (3), we mean a solution of the differential inclusion*

$$\frac{d\mathbf{x}}{dt} \in F(t, \mathbf{x}),\tag{4}$$

*i.e., an absolutely continuous vector function x*(*t*)*, defined on an interval or segment I, for which the inclusion dx*/*dt* ∈ *F*(*t*, *x*) *is satisfied almost everywhere in I.*

**Definition 2** ([24])**.** *The solution x* = *ϕ*(*t*), *t*<sub>0</sub> ≤ *t* < ∞, *of the differential inclusion (4) is called stable if, for any ε* > 0*, there exists δ* > 0 *such that, for every x̃*<sub>0</sub> *with* |*x̃*<sub>0</sub> − *ϕ*(*t*<sub>0</sub>)| < *δ, each solution x̃*(*t*) *with the initial condition x̃*(*t*<sub>0</sub>) = *x̃*<sub>0</sub> *exists for t*<sub>0</sub> ≤ *t* < ∞ *and satisfies the inequality*

$$|\tilde{x}(t) - \varphi(t)| < \epsilon \quad \text{for} \quad t\_0 \le t < \infty.$$

**Definition 3** ([24])**.** *The solution x* = *ϕ*(*t*), *t*<sub>0</sub> ≤ *t* < ∞, *of the differential inclusion (4) is called asymptotically stable if it is stable and, in addition,* lim<sub>*t*→∞</sub> |*x̃*(*t*) − *ϕ*(*t*)| = 0.

**Definition 4** ([24])**.** *The solution x* = *ϕ*(*t*), *t*<sub>0</sub> ≤ *t* < ∞, *of the differential inclusion (4) is called stable in general if it is asymptotically stable for any initial x*<sub>0</sub> ∈ *R<sup>n</sup>, where R<sup>n</sup> is the n-dimensional vector space.*

Intense research on the stability of systems of ordinary differential equations with discontinuous right-hand sides began in the middle of the last century in connection with increasing interest in automatic control problems. In addition to the issues of automatic control, automatic regulation [26] and the theory of relay systems, differential equations with discontinuous right-hand sides are widely used to model various problems in physics and engineering—in particular, the classical problem of dry friction [27]. Differential equations for automatic control with variable structures and discontinuous right-hand sides are obtained from differential equations with continuous right-hand sides when passing to the limit along a parameter [24].

Today, the stability of solutions of systems of ordinary differential equations with discontinuous right-hand sides is an active and growing field.

This is because there are numerous applications of systems of differential equations with discontinuous right-hand sides (Filippov systems) to various problems in physics, engineering, biology and medicine. A detailed bibliography is given in [28].

Recently, the stability theory of equations with discontinuous coefficients has been extended to numerical mathematics, where various methods are widely used to find solutions of systems of linear and nonlinear algebraic equations.

In [29], the authors have developed a continuous method for solving nonlinear operator equations. Each nonlinear operator equation is associated with a Cauchy problem. The convergence of the method is based on Lyapunov stability theory.

Collocation methods for solving initial and boundary value problems for differential equations, together with the theory of B-, D-, G- and P-stability of their solutions, have been developed and presented in [30–32]. The latter also contains an extensive bibliography.

In [33], the second Lyapunov method was used to investigate the semistability and finite-time stability of differential inclusions for systems of differential equations with discontinuities of the first kind on various manifolds. Semistability has a wider range of applications than the stability condition.

Research has proceeded in several directions. Systems of differential equations with one [34] and two [35] relays have been studied.

The stability of solutions of differential equations with one relay,

$$\frac{dx\_i(t)}{dt} = p\_i \,\text{sgn}\, x\_i + \sum\_{j=1}^n c\_{ij} x\_j, \quad i = 1, 2, \dots, n,\tag{5}$$

has been studied in [34].

In [35], the author investigates the stability of solutions of systems of differential equations

$$\begin{cases} \frac{dx\_1(t)}{dt} = a\_1 \,\text{sgn}\, x\_1(t) + b\_1 \,\text{sgn}\, x\_2(t) + \sum\_{j=2}^n c\_{1,j} x\_j(t),\\ \frac{dx\_2(t)}{dt} = a\_2 \,\text{sgn}\, x\_1(t) + b\_2 \,\text{sgn}\, x\_2(t) + \sum\_{j=2}^n c\_{2,j} x\_j(t),\\ \frac{dx\_i(t)}{dt} = \sum\_{j=1}^n c\_{i,j} x\_j(t), \quad i = 3, \dots, n, \end{cases} \tag{6}$$

with constant coefficients.

The stability analysis of solutions of systems of differential equations with relays [34,35] is based on the study of transfer functions. Necessary and sufficient conditions for the stability of solutions of systems (5) and (6), expressed in terms of the coefficients of the equations, have been obtained.

Numerous works have been devoted to the stability of systems of nonlinear switching differential equations. For a bibliography, see [36].

Another class of problems is related to the study of sliding modes in automatic regulation and control systems [37]. It is interesting to note that sliding modes are present in ecology models [28].

When studying the stability of systems of nonlinear differential equations with discontinuous right-hand sides, the Lyapunov function method [38–40] has been used.

Stability of neural networks described by the equations

$$\frac{dx\_i}{dt} = -c\_i x\_i + \sum\_{j=1}^n a\_{ij} \varphi\_j(x\_j),\tag{7}$$

*i* = 1, 2, . . . , *n* and more general equations

$$\frac{dx\_i}{dt} = -c\_i x\_i - \sum\_{j=1}^n a\_{ij} \sum\_{k=1}^n a\_{jk} g\_k(x\_k),\tag{8}$$

*i* = 1, 2, . . . , *n*, by the second Lyapunov method has been investigated in [41], assuming that the functions *gk*(*x*), *k* = 1, 2, . . . , *n*, are continuous. The following conditions are imposed on the functions *ϕj*(*xj*):

A1. Each function *ϕj*(*xj*) is defined for all −∞ < *xj* < ∞, continuous and single-valued;

A2. Each function *ϕj*(*xj*) lies in the first and third quadrants: *ϕj*(0) = 0, and the inequalities *xjϕj*(*xj*) > 0 hold for *xj* ≠ 0, *j* = 1, 2, . . . , *n*;

A3. $\lim\_{|x\_j| \to \infty} \int\_0^{x\_j} \varphi\_j(\rho)\,d\rho = +\infty$, *j* = 1, 2, . . . , *n*.
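Conditions A1–A3 are satisfied by the usual sigmoid activations. For instance, ϕ(x) = tanh x is defined everywhere, continuous and single-valued (A1), satisfies xϕ(x) > 0 for x ≠ 0 (A2), and its antiderivative ln cosh x tends to +∞ (A3). A small numerical illustration; the checker and sample points are our own:

```python
import math


def check_A2_A3(phi, Phi, xs):
    """Spot-check conditions A2 and A3 for an activation phi with
    antiderivative Phi at the sample points xs (increasing |x|).
    A2: x * phi(x) > 0 for x != 0.
    A3: Phi grows without bound; here we only verify monotone growth
    along the increasing sample points, a necessary consequence."""
    a2 = all(x * phi(x) > 0 for x in xs if x != 0)
    a3 = all(Phi(xs[k]) < Phi(xs[k + 1]) for k in range(len(xs) - 1))
    return a2 and a3


# phi = tanh, with antiderivative Phi(x) = ln cosh x -> +infinity.
ok = check_A2_A3(math.tanh, lambda x: math.log(math.cosh(x)),
                 [0.5, 1.0, 5.0, 20.0])
```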

Stability of neural networks with discontinuous coefficients *gi*(*x*), *i* = 1, 2, ... , *n* has been studied in [42,43].

It was assumed in [42] that

$$g\_i(x) = g(x) = \begin{cases} 1, & x > 0, \\ 0, & x < 0, \end{cases}$$

*i* = 1, 2, . . . , *n*. To ensure the stability of a neural network, a method based on majorizing the nonlinear part of Equation (7) by a constant and then solving the resulting differential equation has been proposed.

Stability of the solutions of Equation (7) with discontinuous nonlinear functions was investigated with the second Lyapunov method in [43]. The study of sliding mode stability is also reported in [43].

A detailed study of neural networks, including Hopfield networks, is given in [44]. The stability of neural networks with various activation functions in general has been studied, as well as the stability at isolated stationary points. The basic technique for studying neural network stability in [44] is the use of Lyapunov and energy functions. In [44], one can also find an extensive bibliography on the stability of neural networks described by differential and difference equations.

The exponential stability of a Hopfield neural network on time scales has been investigated in [45]. The stability of the neural networks described by the differential equations

$$\frac{dx\_i(t)}{dt} = -e\_i x\_i + \sum\_{j=1}^n g\_{ij}(x\_j), \quad i = 1, 2, \dots, n,\tag{9}$$

with functions *gij*(*xj*), *i*, *j* = 1, 2, ... , *n* having discontinuities of the first kind at separate points has been studied in [25,46].

In this paper, we investigate the stability of solutions of systems of linear and nonlinear differential equations with discontinuous right-hand sides. We obtain sufficient conditions for the asymptotic stability of systems of differential equations that arise in studying the stability of HNNs with discontinuous synapses and activation functions.

We study the stability of solutions for systems of differential equations regardless of how the inclusion equation is defined. With this approach, it is essential to use the first Lyapunov method.

Note that, in applying the second Lyapunov method, one would have to construct a separate Lyapunov–Krasovskii functional for each region where the right-hand side of the differential equation system is continuous.

The paper is divided into the Introduction, three sections and the Conclusions. Section 2 introduces the definitions and notation used throughout the paper. Section 3 examines the stability of solutions of differential equations with discontinuous right-hand sides. In Section 4, we analyze the stability of Hopfield neural networks. The results obtained are summarized in the final section.

#### **2. Definitions and Notations**

We now introduce a few definitions used in this paper.

Here, *Dkg*(*t*, *u*1, ... , *un*) stands for a partial derivative *Dkg*(*t*, *u*1,..., *un*) = *∂g*(*t*, *u*1,..., *un*)/*∂uk*, *k* = 1, 2, . . . , *n*.

Moreover, we employ the following notation: *B*(*a*, *r*) = {*z* ∈ *B* : ‖*z* − *a*‖ ≤ *r*}, *S*(*a*, *r*) = {*z* ∈ *B* : ‖*z* − *a*‖ = *r*}, Re(*K*) = (*K* + *K*∗)/2, Λ(*K*) = lim<sub>*h*↓0</sub>(‖*I* + *hK*‖ − 1)*h*<sup>−1</sup>. Here, *B* is a Banach space, *a* ∈ *B*, *K* is a linear bounded operator on *B*, Λ(*K*) is the logarithmic norm [47] of the operator *K*, *K*∗ is the conjugate operator to *K*, and *I* stands for the identity operator.

The main properties of the logarithmic norm are given in [47].

If *A* is an *n* × *n* matrix, then Λ(*A*) can readily be computed for the corresponding norms of linear vector spaces.

The logarithmic norm is known for operators in the most frequently used spaces.

Let *A* = {*aij*}, *i*, *j* = 1, 2, . . . , *n* be a real matrix.

In the *n*-dimensional space *Rn* of vectors *x* = (*x*1, ... , *xn*), the following norms are often used:

$$\|x\|\_1 = \sum\_{i=1}^n |x\_i|; \qquad \|x\|\_2 = \max\_{1 \le i \le n} |x\_i|; \qquad \|x\|\_3 = \left(\sum\_{i=1}^n |x\_i|^2\right)^{1/2}.$$

Below are the expressions of the logarithmic norm of a matrix *A* = (*aij*) corresponding to the vector norms given above:

$$\begin{aligned} \Lambda\_1(A) &= \max\_{1 \le j \le n} \left( a\_{jj} + \sum\_{i \ne j} |a\_{ij}| \right); \\ \Lambda\_2(A) &= \max\_{1 \le i \le n} \left( a\_{ii} + \sum\_{j \ne i} |a\_{ij}| \right); \\ \Lambda\_3(A) &= \lambda\_{\max}\left( \frac{A + A^\*}{2} \right), \end{aligned}$$

where *A*∗ is the conjugate matrix of *A*.
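These formulas can be evaluated directly. The sketch below computes Λ₁, Λ₂ and, for real matrices, Λ₃; to stay self-contained it uses the closed-form eigenvalue of the symmetric part in the 2 × 2 case only (an illustrative implementation, with our own function names):

```python
import math


def log_norm_1(A):
    """Lambda_1: max over columns j of a_jj + sum_{i != j} |a_ij|."""
    n = len(A)
    return max(A[j][j] + sum(abs(A[i][j]) for i in range(n) if i != j)
               for j in range(n))


def log_norm_2(A):
    """Lambda_2: max over rows i of a_ii + sum_{j != i} |a_ij|."""
    n = len(A)
    return max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))


def log_norm_3(A):
    """Lambda_3: largest eigenvalue of (A + A^T)/2, closed form for 2x2."""
    S = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]
    tr = S[0][0] + S[1][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return tr / 2 + math.sqrt(tr * tr / 4 - det)


# A sample matrix whose logarithmic norms are all negative, which
# guarantees asymptotic stability of dx/dt = A x.
A = [[-2.0, 1.0], [0.0, -3.0]]
```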

## **3. Stability of Solutions to Equations Systems with Discontinuous Right-Hand Sides**

*3.1. Stability of Solutions to Linear Equations Systems with Discontinuous Coefficients*

Consider the Cauchy problem

$$\frac{dx\_i(t)}{dt} = \sum\_{j=1}^n a\_{ij}(t) x\_j(t), \quad t \ge 0,\tag{10}$$

$$x\_i(0) = x\_{i0}, \quad i = 1, 2, \dots, n,\tag{11}$$

with discontinuous coefficients *aij*(*t*), *i*, *j* = 1, 2, . . . , *n*.

We assume that the functions *aij*(*t*) are continuous everywhere except a countable set of points *ζ*1, *ζ*2, . . . , where the functions have discontinuities.
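As a concrete scalar instance of (10) and (11), consider a coefficient a(t) with a single jump at t = 1: the solution decays exponentially on each interval of continuity, and its exact terminal value on [0, 2] is exp(−1)·exp(−3) = exp(−4). The coefficient values and step size below are hypothetical illustration choices:

```python
def a(t):
    """Piecewise-continuous coefficient with a discontinuity of the
    first kind at t = 1; it stays negative on both sides of the jump."""
    return -1.0 if t < 1.0 else -3.0


def solve(x0, dt=1e-4, T=2.0):
    """Forward-Euler integration of the scalar equation dx/dt = a(t) x
    on [0, T] with x(0) = x0."""
    x, t = x0, 0.0
    while t < T:
        x += dt * a(t) * x
        t += dt
    return x


# Exact solution at T = 2: exp(-1) * exp(-3) = exp(-4).
x_end = solve(1.0)
```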

The following statement is valid.
