Article

Local Lagrange Exponential Stability Analysis of Quaternion-Valued Neural Networks with Time Delays

1 College of Zhijiang, Zhejiang University of Technology, Shaoxing 312030, China
2 College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310058, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2157; https://doi.org/10.3390/math10132157
Submission received: 17 May 2022 / Revised: 9 June 2022 / Accepted: 17 June 2022 / Published: 21 June 2022
(This article belongs to the Special Issue Mathematic Control and Artificial Intelligence)

Abstract: The study of the local stability of quaternion-valued neural networks is of great significance to applications in associative memory and pattern recognition. In this paper, we study the local Lagrange exponential stability of quaternion-valued neural networks with time delays. By separating the quaternion-valued neural networks into a real part and three imaginary parts, dividing the quaternion field into $3^{4n}$ subregions, and using the intermediate value theorem, sufficient conditions are proposed to ensure that quaternion-valued neural networks have $3^{4n}$ equilibrium points. Based on the Halanay inequality, conditions for the existence of $2^{4n}$ locally Lagrange exponentially stable equilibria of quaternion-valued neural networks are established. The obtained stability results improve and extend the existing ones. Under the same conditions, quaternion-valued neural networks have more stable equilibrium points than complex-valued and real-valued neural networks. The validity of the theoretical results is verified by an example.

1. Introduction

Because of their wide application prospects in pattern recognition, associative memory, optimization, and other fields, the dynamic characteristics of global stability and local stability have been deeply studied for several kinds of recurrent neural networks [1,2,3,4,5,6]. Global stability can be applied to optimization problems [7,8,9,10,11,12], and local stability can be applied to pattern recognition [13,14,15,16,17]. The topological structure of the activation functions has an important influence on the dynamic characteristics of recurrent neural networks. The local stability of recurrent neural networks with monotonically increasing activation functions was studied in References [18,19], with nonmonotonic activation functions in Reference [20], and with discontinuous activation functions in Reference [21]. When recurrent neural networks are applied to pattern recognition, the stable equilibrium points of the network are mapped to the ideal memory patterns. For example, in [22], the authors designed a recurrent neural network with 25 neurons to store three patterns. The more stable equilibrium points there are, the larger the memory capacity of the recurrent neural network is. When the initial state is located in the domain of attraction of the equilibrium point corresponding to the pattern to be retrieved, the state converges to that stable equilibrium point; that is, the desired output pattern is obtained. Each stable equilibrium point has a certain domain of attraction, and the larger this domain of attraction is, the stronger the fault tolerance of the memory pattern is. Therefore, it is very meaningful to study how to increase the number of stable equilibrium points and enlarge their domains of attraction.
By studying the multiple stability of complex-valued recurrent neural networks, the number of stable equilibrium points was increased [23]. By studying the multistability of recurrent neural networks with multi-order activation functions, the number of stable equilibrium points was increased [24].
In the practical application of recurrent neural networks, time delay is inevitable. Because time delay is often the origin of system instability and oscillation, a large number of researchers have discussed neural networks with time delay [25,26,27]. The research on the multiple stability of recurrent neural networks with time delay is very significant.
In recent decades, due to developments in algebra, the quaternion has gradually become a research hotspot and has been successfully applied to signal processing. As a special generalization of complex-valued neural networks, quaternion-valued neural networks have quaternion states, quaternion connection matrices, and quaternion activation functions, which are based on the non-commutativity of the quaternion product. Since complex-valued recurrent neural networks can already increase the number of stable equilibrium points, quaternion-valued recurrent neural networks have even more stable equilibrium points. Because of the quaternion state, a one-dimensional quaternion-valued neural network can be separated into four dynamic equations. Therefore, under the same activation function, a one-dimensional quaternion-valued neural network, a two-dimensional complex-valued neural network, and a four-dimensional real-valued neural network have the same number of equilibrium points. That is to say, for the same dimension, a quaternion-valued neural network has more stable equilibrium points than a complex-valued or a real-valued neural network. However, the non-commutativity makes research on quaternion-valued neural networks more difficult. Recently, the dynamic characteristics of quaternion-valued neural networks have been deeply studied for their applications in information processing, optimization, and automatic control [28,29,30,31].
This paper makes two contributions. (1) By separating the quaternion-valued neural network into a real part and three imaginary parts, we obtain sufficient conditions for the existence of $3^{4n}$ equilibrium points. (2) Some criteria are established to ensure a larger storage capacity than the existing multistability results.
In this paper, we study local Lagrange exponential stability of quaternion-valued neural networks with time delays. In Section 2, we describe the neural network model, the definitions, and the lemma used in this paper. Multistability results of the quaternion-valued neural networks are described in Section 3. An example is given to verify the effectiveness of the results in Section 4. Furthermore, the conclusion is presented in Section 5.

2. Problem Formulation and Some Preliminaries

Consider the quaternion-valued neural networks with delays described by
$$\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau)) + I_i, \tag{1}$$
for $i = 1, 2, \ldots, n$, $t \ge 0$. System (1) is equivalent to the following vector form:
$$\dot{x}(t) = -Cx(t) + Af(x(t)) + Bf(x(t-\tau)) + I, \tag{2}$$
where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in \mathbb{Q}^n$ is the state vector, $C = \mathrm{diag}(c_1, c_2, \ldots, c_n) \in \mathbb{R}^{n \times n}$ represents the self-feedback connection weight matrix, $A = (a_{ij})_{n \times n} \in \mathbb{Q}^{n \times n}$ and $B = (b_{ij})_{n \times n} \in \mathbb{Q}^{n \times n}$ represent the connection weights, $f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t)))^T \in \mathbb{Q}^n$ represents the activation function, $\tau > 0$ is the time delay, $I = (I_1, I_2, \ldots, I_n)^T \in \mathbb{Q}^n$ is the external input vector, $\mathbb{R}$ denotes the set of real numbers, $\mathbb{Q}$ denotes the set of quaternions, and, for $K = \{0, 1, 2, 3\}$ and $i, j = 1, 2, \ldots, n$,
$$x_i(t) = \sum_{k \in K} x_i^{(k)}(t) q_k, \quad a_{ij} = \sum_{k \in K} a_{ij}^{(k)} q_k, \quad b_{ij} = \sum_{k \in K} b_{ij}^{(k)} q_k, \quad I_i = \sum_{k \in K} I_i^{(k)} q_k, \quad f_j(\xi) = \sum_{k \in K} f_j^{(k)}(\xi^{(k)}) q_k,$$
$$f_j^{(k)}(\xi^{(k)}) = \begin{cases} m_j^{(k)}, & \xi^{(k)} \in (-\infty, p_j^{(k)}), \\[2pt] m_j^{(k)} + \dfrac{M_j^{(k)} - m_j^{(k)}}{P_j^{(k)} - p_j^{(k)}}\left(\xi^{(k)} - p_j^{(k)}\right), & \xi^{(k)} \in [p_j^{(k)}, P_j^{(k)}], \\[2pt] M_j^{(k)}, & \xi^{(k)} \in (P_j^{(k)}, +\infty), \end{cases}$$
in which $q_0 = 1$ and the imaginary units $q_1, q_2, q_3$ obey the following rules:
$$q_k^2 = -1, \; k = 1, 2, 3, \quad q_1 q_2 = -q_2 q_1 = q_3, \quad q_2 q_3 = -q_3 q_2 = q_1, \quad q_3 q_1 = -q_1 q_3 = q_2,$$
and $-\infty < m_j^{(k)} < M_j^{(k)} < +\infty$, $-\infty < p_j^{(k)} < P_j^{(k)} < +\infty$, $j = 1, 2, \ldots, n$, $k = 0, 1, 2, 3$. Denote
$$K_i^{(k)} = \frac{M_i^{(k)} - m_i^{(k)}}{P_i^{(k)} - p_i^{(k)}}, \quad k = 0, 1, 2, 3, \; i = 1, 2, \ldots, n.$$
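The piecewise-linear activation $f_j^{(k)}$ defined above is easy to realize in code; a minimal sketch (the function and parameter names are ours, not the paper's):

```python
# Sketch of the piecewise-linear activation from the formulation:
# saturation levels m < M, corner points p < P, slope K = (M - m) / (P - p).
def activation(xi, m, M, p, P):
    """Return m on (-inf, p), M on (P, +inf), linear in between."""
    if xi < p:
        return m
    if xi > P:
        return M
    return m + (M - m) / (P - p) * (xi - p)
```

On the middle interval the function has constant slope $K$, which is exactly the Lipschitz constant $K_i^{(k)}$ used later in the stability conditions.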
Let
$$(-\infty, p_i^{(k)}) = (-\infty, p_i^{(k)})^1 \times [p_i^{(k)}, P_i^{(k)}]^0 \times (P_i^{(k)}, +\infty)^0,$$
$$[p_i^{(k)}, P_i^{(k)}] = (-\infty, p_i^{(k)})^0 \times [p_i^{(k)}, P_i^{(k)}]^1 \times (P_i^{(k)}, +\infty)^0,$$
$$(P_i^{(k)}, +\infty) = (-\infty, p_i^{(k)})^0 \times [p_i^{(k)}, P_i^{(k)}]^0 \times (P_i^{(k)}, +\infty)^1.$$
Denote
$$\Phi = \left\{ \prod_{i=1}^{n} \prod_{k=0}^{3} \left[ (-\infty, p_i^{(k)})^{\delta^{ik,1}} \times [p_i^{(k)}, P_i^{(k)}]^{\delta^{ik,2}} \times (P_i^{(k)}, +\infty)^{\delta^{ik,3}} \right] : (\delta^{ik,1}, \delta^{ik,2}, \delta^{ik,3}) = (1,0,0) \text{ or } (0,1,0) \text{ or } (0,0,1) \right\}.$$
Then $\mathbb{Q}^n$ can be divided into the $3^{4n}$ subsets of $\Phi$. Denote
$$\Phi_1 = \left\{ \prod_{i=1}^{n} \prod_{k=0}^{3} \left[ (-\infty, p_i^{(k)})^{\delta^{ik,1}} \times [p_i^{(k)}, P_i^{(k)}]^0 \times (P_i^{(k)}, +\infty)^{\delta^{ik,2}} \right] : (\delta^{ik,1}, \delta^{ik,2}) = (1,0) \text{ or } (0,1) \right\},$$
$$\Phi_2 = \Phi \setminus \Phi_1.$$
It is easy to see that $\Phi_1$ and $\Phi_2$ have $2^{4n}$ and $3^{4n} - 2^{4n}$ subsets, respectively. We divide $\mathbb{Q}^n$ into two parts: $\Phi_1$ and $\Phi_2$. In this paper, we mainly discuss the stability of the equilibrium points in $\Phi_1$. In the following, one lemma and one definition used in this paper are given.
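The counts $3^{4n}$ and $2^{4n}$ can be confirmed by brute-force enumeration; a quick sketch (our helper, assuming the three-interval split of each of the $4n$ real components):

```python
from itertools import product

# Each of the 4n real components lies in one of the three intervals
# (-inf, p), [p, P], (P, +inf), giving 3^(4n) subregions (Phi).
# Phi_1 keeps only the two unbounded intervals, giving 2^(4n) subregions.
def count_subregions(n):
    intervals = ("left", "middle", "right")
    phi = list(product(intervals, repeat=4 * n))      # all of Phi
    phi1 = [s for s in phi if "middle" not in s]      # Phi_1
    return len(phi), len(phi1)
```

For $n = 1$ this gives $3^4 = 81$ subregions in $\Phi$ and $2^4 = 16$ in $\Phi_1$.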
Lemma 1
(Halanay Inequality [32,33]). Assume the constants $k_1, k_2$ satisfy $k_1 > k_2 > 0$, $V(t)$ is a nonnegative continuous function on $[t_0 - \tau, t_0]$, and for $t \ge t_0$ the following inequality is satisfied:
$$D^+ V(t) \le -k_1 V(t) + k_2 \bar{V}(t),$$
where $D^+$ is the Dini derivative, $\bar{V}(t) = \sup_{t - \tau \le s \le t} \{V(s)\}$, and $\tau \ge 0$ is constant. Then for $t \ge t_0$ we have
$$V(t) \le \bar{V}(t_0) e^{-\lambda (t - t_0)},$$
in which $\lambda$ is the unique positive solution of the equation $\lambda = k_1 - k_2 e^{\lambda \tau}$.
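The decay rate $\lambda$ in Lemma 1 is defined only implicitly, but it is easy to approximate numerically; a bisection sketch (the function name is ours, not from the paper):

```python
import math

# Solve lambda = k1 - k2 * exp(lambda * tau) for the unique positive root,
# which exists when k1 > k2 > 0 because g below is strictly decreasing.
def halanay_rate(k1, k2, tau, tol=1e-12):
    g = lambda lam: k1 - k2 * math.exp(lam * tau) - lam  # g(0) = k1 - k2 > 0
    lo, hi = 0.0, k1                                     # g(k1) < 0 since exp >= 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $\tau = 0$ the root reduces to $\lambda = k_1 - k_2$, a convenient sanity check.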
Suppose the continuous initial condition associated with system (1) or (2) is $x(s) = \vartheta(s)$ for any $s \in [-\tau, 0]$. Based on the definition of global exponential stability in the Lagrange sense for recurrent neural networks in [34,35], we give the definition of local exponential stability in the Lagrange sense of an equilibrium point for system (1) or system (2).
Definition 1
([34]). The equilibrium point $x^*$ of system (1) or system (2) is said to be locally exponentially stable in the Lagrange sense in region $D$ if there exist positive constants $\beta$ and $\varepsilon$ such that, for any given positive number $H$, there exists $M = M(H) > 0$ such that $\|x(t, \vartheta(s))\| < \beta + M e^{-\varepsilon t}$ for any $t \ge 0$ and $\vartheta(s) \in Q_H = \{\vartheta(s) \in C([-\tau, 0], D) : \|\vartheta(s)\| \le H\}$, where $C([-\tau, 0], D)$ denotes the set of continuous mappings from $[-\tau, 0]$ into $D$.
As a special case of asymptotic stability, exponential stability has a faster convergence rate. This will be conducive to efficient pattern recognition applications.

3. Existence and Stability of Multiple Equilibrium Points

In this section, we will study the existence and stability of multiple equilibrium points for system (1). We can separate system (1) into a real part and three imaginary parts. System (1) can be rewritten as follows for $i = 1, 2, \ldots, n$:
$$\begin{aligned} \dot{x}_i^{(0)}(t) ={}& -c_i x_i^{(0)}(t) + \sum_{j=1}^{n} a_{ij}^{(0)} f_j^{(0)}(x_j^{(0)}(t)) - \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(1)}(x_j^{(1)}(t)) - \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(2)}(x_j^{(2)}(t)) - \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(3)}(x_j^{(3)}(t)) \\ & + \sum_{j=1}^{n} b_{ij}^{(0)} f_j^{(0)}(x_j^{(0)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(1)}(x_j^{(1)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(2)}(x_j^{(2)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(3)}(x_j^{(3)}(t-\tau)) + I_i^{(0)}, \end{aligned}$$
$$\begin{aligned} \dot{x}_i^{(1)}(t) ={}& -c_i x_i^{(1)}(t) + \sum_{j=1}^{n} a_{ij}^{(0)} f_j^{(1)}(x_j^{(1)}(t)) + \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(0)}(x_j^{(0)}(t)) + \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(3)}(x_j^{(3)}(t)) - \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(2)}(x_j^{(2)}(t)) \\ & + \sum_{j=1}^{n} b_{ij}^{(0)} f_j^{(1)}(x_j^{(1)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(0)}(x_j^{(0)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(3)}(x_j^{(3)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(2)}(x_j^{(2)}(t-\tau)) + I_i^{(1)}, \end{aligned}$$
$$\begin{aligned} \dot{x}_i^{(2)}(t) ={}& -c_i x_i^{(2)}(t) + \sum_{j=1}^{n} a_{ij}^{(0)} f_j^{(2)}(x_j^{(2)}(t)) - \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(3)}(x_j^{(3)}(t)) + \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(0)}(x_j^{(0)}(t)) + \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(1)}(x_j^{(1)}(t)) \\ & + \sum_{j=1}^{n} b_{ij}^{(0)} f_j^{(2)}(x_j^{(2)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(3)}(x_j^{(3)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(0)}(x_j^{(0)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(1)}(x_j^{(1)}(t-\tau)) + I_i^{(2)}, \end{aligned}$$
$$\begin{aligned} \dot{x}_i^{(3)}(t) ={}& -c_i x_i^{(3)}(t) + \sum_{j=1}^{n} a_{ij}^{(0)} f_j^{(3)}(x_j^{(3)}(t)) + \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(2)}(x_j^{(2)}(t)) - \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(1)}(x_j^{(1)}(t)) + \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(0)}(x_j^{(0)}(t)) \\ & + \sum_{j=1}^{n} b_{ij}^{(0)} f_j^{(3)}(x_j^{(3)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(2)}(x_j^{(2)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(1)}(x_j^{(1)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(0)}(x_j^{(0)}(t-\tau)) + I_i^{(3)}. \end{aligned}$$
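The sign pattern in the four separated equations is exactly that of the Hamilton product of two quaternions; a minimal check (our helper, not from the paper):

```python
# Hamilton product (a0 + a1 q1 + a2 q2 + a3 q3)(f0 + f1 q1 + f2 q2 + f3 q3),
# expanded with q_k^2 = -1, q1 q2 = q3, q2 q3 = q1, q3 q1 = q2.
def hamilton(a, f):
    a0, a1, a2, a3 = a
    f0, f1, f2, f3 = f
    return (a0*f0 - a1*f1 - a2*f2 - a3*f3,   # real part, cf. the x^(0) equation
            a0*f1 + a1*f0 + a2*f3 - a3*f2,   # q1 part, cf. the x^(1) equation
            a0*f2 - a1*f3 + a2*f0 + a3*f1,   # q2 part, cf. the x^(2) equation
            a0*f3 + a1*f2 - a2*f1 + a3*f0)   # q3 part, cf. the x^(3) equation
```

In particular `hamilton(q1, q2)` and `hamilton(q2, q1)` differ in sign, which is the non-commutativity that makes the quaternion-valued analysis harder than the complex-valued one.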
Denote the following functions for $i = 1, 2, \ldots, n$:
$$\begin{aligned} F_i^{(0)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(0)}(\xi) + \sum_{j=1, j\ne i}^{n} a_{ij}^{(0)} f_j^{(0)}(x_j^{(0)}) + \sum_{j=1, j\ne i}^{n} b_{ij}^{(0)} f_j^{(0)}(x_j^{(0)}) \\ & - \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(1)}(x_j^{(1)}) - \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(2)}(x_j^{(2)}) - \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(3)}(x_j^{(3)}) \\ & - \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(1)}(x_j^{(1)}) - \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(2)}(x_j^{(2)}) - \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(3)}(x_j^{(3)}) + I_i^{(0)}, \end{aligned}$$
$$\begin{aligned} F_i^{(1)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(1)}(\xi) + \sum_{j=1, j\ne i}^{n} a_{ij}^{(0)} f_j^{(1)}(x_j^{(1)}) + \sum_{j=1, j\ne i}^{n} b_{ij}^{(0)} f_j^{(1)}(x_j^{(1)}) \\ & + \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(0)}(x_j^{(0)}) + \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(3)}(x_j^{(3)}) - \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(2)}(x_j^{(2)}) \\ & + \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(0)}(x_j^{(0)}) + \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(3)}(x_j^{(3)}) - \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(2)}(x_j^{(2)}) + I_i^{(1)}, \end{aligned}$$
$$\begin{aligned} F_i^{(2)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(2)}(\xi) + \sum_{j=1, j\ne i}^{n} a_{ij}^{(0)} f_j^{(2)}(x_j^{(2)}) + \sum_{j=1, j\ne i}^{n} b_{ij}^{(0)} f_j^{(2)}(x_j^{(2)}) \\ & - \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(3)}(x_j^{(3)}) + \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(0)}(x_j^{(0)}) + \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(1)}(x_j^{(1)}) \\ & - \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(3)}(x_j^{(3)}) + \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(0)}(x_j^{(0)}) + \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(1)}(x_j^{(1)}) + I_i^{(2)}, \end{aligned}$$
$$\begin{aligned} F_i^{(3)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(3)}(\xi) + \sum_{j=1, j\ne i}^{n} a_{ij}^{(0)} f_j^{(3)}(x_j^{(3)}) + \sum_{j=1, j\ne i}^{n} b_{ij}^{(0)} f_j^{(3)}(x_j^{(3)}) \\ & + \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(2)}(x_j^{(2)}) - \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(1)}(x_j^{(1)}) + \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(0)}(x_j^{(0)}) \\ & + \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(2)}(x_j^{(2)}) - \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(1)}(x_j^{(1)}) + \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(0)}(x_j^{(0)}) + I_i^{(3)}. \end{aligned}$$
Define the following continuous functions for $i = 1, 2, \ldots, n$:
$$\begin{aligned} \overline{F}_i^{(0)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(0)}(\xi) + \sum_{j=1, j\ne i}^{n} \max\{a_{ij}^{(0)} m_j^{(0)}, a_{ij}^{(0)} M_j^{(0)}\} + \sum_{j=1, j\ne i}^{n} \max\{b_{ij}^{(0)} m_j^{(0)}, b_{ij}^{(0)} M_j^{(0)}\} \\ & - \sum_{j=1}^{n} \min\{a_{ij}^{(1)} m_j^{(1)}, a_{ij}^{(1)} M_j^{(1)}\} - \sum_{j=1}^{n} \min\{a_{ij}^{(2)} m_j^{(2)}, a_{ij}^{(2)} M_j^{(2)}\} - \sum_{j=1}^{n} \min\{a_{ij}^{(3)} m_j^{(3)}, a_{ij}^{(3)} M_j^{(3)}\} \\ & - \sum_{j=1}^{n} \min\{b_{ij}^{(1)} m_j^{(1)}, b_{ij}^{(1)} M_j^{(1)}\} - \sum_{j=1}^{n} \min\{b_{ij}^{(2)} m_j^{(2)}, b_{ij}^{(2)} M_j^{(2)}\} - \sum_{j=1}^{n} \min\{b_{ij}^{(3)} m_j^{(3)}, b_{ij}^{(3)} M_j^{(3)}\} + I_i^{(0)}, \end{aligned}$$
$$\begin{aligned} \underline{F}_i^{(0)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(0)}(\xi) + \sum_{j=1, j\ne i}^{n} \min\{a_{ij}^{(0)} m_j^{(0)}, a_{ij}^{(0)} M_j^{(0)}\} + \sum_{j=1, j\ne i}^{n} \min\{b_{ij}^{(0)} m_j^{(0)}, b_{ij}^{(0)} M_j^{(0)}\} \\ & - \sum_{j=1}^{n} \max\{a_{ij}^{(1)} m_j^{(1)}, a_{ij}^{(1)} M_j^{(1)}\} - \sum_{j=1}^{n} \max\{a_{ij}^{(2)} m_j^{(2)}, a_{ij}^{(2)} M_j^{(2)}\} - \sum_{j=1}^{n} \max\{a_{ij}^{(3)} m_j^{(3)}, a_{ij}^{(3)} M_j^{(3)}\} \\ & - \sum_{j=1}^{n} \max\{b_{ij}^{(1)} m_j^{(1)}, b_{ij}^{(1)} M_j^{(1)}\} - \sum_{j=1}^{n} \max\{b_{ij}^{(2)} m_j^{(2)}, b_{ij}^{(2)} M_j^{(2)}\} - \sum_{j=1}^{n} \max\{b_{ij}^{(3)} m_j^{(3)}, b_{ij}^{(3)} M_j^{(3)}\} + I_i^{(0)}, \end{aligned}$$
$$\begin{aligned} \overline{F}_i^{(1)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(1)}(\xi) + \sum_{j=1, j\ne i}^{n} \max\{a_{ij}^{(0)} m_j^{(1)}, a_{ij}^{(0)} M_j^{(1)}\} + \sum_{j=1, j\ne i}^{n} \max\{b_{ij}^{(0)} m_j^{(1)}, b_{ij}^{(0)} M_j^{(1)}\} \\ & + \sum_{j=1}^{n} \max\{a_{ij}^{(1)} m_j^{(0)}, a_{ij}^{(1)} M_j^{(0)}\} + \sum_{j=1}^{n} \max\{a_{ij}^{(2)} m_j^{(3)}, a_{ij}^{(2)} M_j^{(3)}\} - \sum_{j=1}^{n} \min\{a_{ij}^{(3)} m_j^{(2)}, a_{ij}^{(3)} M_j^{(2)}\} \\ & + \sum_{j=1}^{n} \max\{b_{ij}^{(1)} m_j^{(0)}, b_{ij}^{(1)} M_j^{(0)}\} + \sum_{j=1}^{n} \max\{b_{ij}^{(2)} m_j^{(3)}, b_{ij}^{(2)} M_j^{(3)}\} - \sum_{j=1}^{n} \min\{b_{ij}^{(3)} m_j^{(2)}, b_{ij}^{(3)} M_j^{(2)}\} + I_i^{(1)}, \end{aligned}$$
$$\begin{aligned} \underline{F}_i^{(1)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(1)}(\xi) + \sum_{j=1, j\ne i}^{n} \min\{a_{ij}^{(0)} m_j^{(1)}, a_{ij}^{(0)} M_j^{(1)}\} + \sum_{j=1, j\ne i}^{n} \min\{b_{ij}^{(0)} m_j^{(1)}, b_{ij}^{(0)} M_j^{(1)}\} \\ & + \sum_{j=1}^{n} \min\{a_{ij}^{(1)} m_j^{(0)}, a_{ij}^{(1)} M_j^{(0)}\} + \sum_{j=1}^{n} \min\{a_{ij}^{(2)} m_j^{(3)}, a_{ij}^{(2)} M_j^{(3)}\} - \sum_{j=1}^{n} \max\{a_{ij}^{(3)} m_j^{(2)}, a_{ij}^{(3)} M_j^{(2)}\} \\ & + \sum_{j=1}^{n} \min\{b_{ij}^{(1)} m_j^{(0)}, b_{ij}^{(1)} M_j^{(0)}\} + \sum_{j=1}^{n} \min\{b_{ij}^{(2)} m_j^{(3)}, b_{ij}^{(2)} M_j^{(3)}\} - \sum_{j=1}^{n} \max\{b_{ij}^{(3)} m_j^{(2)}, b_{ij}^{(3)} M_j^{(2)}\} + I_i^{(1)}, \end{aligned}$$
$$\begin{aligned} \overline{F}_i^{(2)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(2)}(\xi) + \sum_{j=1, j\ne i}^{n} \max\{a_{ij}^{(0)} m_j^{(2)}, a_{ij}^{(0)} M_j^{(2)}\} + \sum_{j=1, j\ne i}^{n} \max\{b_{ij}^{(0)} m_j^{(2)}, b_{ij}^{(0)} M_j^{(2)}\} \\ & - \sum_{j=1}^{n} \min\{a_{ij}^{(1)} m_j^{(3)}, a_{ij}^{(1)} M_j^{(3)}\} + \sum_{j=1}^{n} \max\{a_{ij}^{(2)} m_j^{(0)}, a_{ij}^{(2)} M_j^{(0)}\} + \sum_{j=1}^{n} \max\{a_{ij}^{(3)} m_j^{(1)}, a_{ij}^{(3)} M_j^{(1)}\} \\ & - \sum_{j=1}^{n} \min\{b_{ij}^{(1)} m_j^{(3)}, b_{ij}^{(1)} M_j^{(3)}\} + \sum_{j=1}^{n} \max\{b_{ij}^{(2)} m_j^{(0)}, b_{ij}^{(2)} M_j^{(0)}\} + \sum_{j=1}^{n} \max\{b_{ij}^{(3)} m_j^{(1)}, b_{ij}^{(3)} M_j^{(1)}\} + I_i^{(2)}, \end{aligned}$$
$$\begin{aligned} \underline{F}_i^{(2)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(2)}(\xi) + \sum_{j=1, j\ne i}^{n} \min\{a_{ij}^{(0)} m_j^{(2)}, a_{ij}^{(0)} M_j^{(2)}\} + \sum_{j=1, j\ne i}^{n} \min\{b_{ij}^{(0)} m_j^{(2)}, b_{ij}^{(0)} M_j^{(2)}\} \\ & - \sum_{j=1}^{n} \max\{a_{ij}^{(1)} m_j^{(3)}, a_{ij}^{(1)} M_j^{(3)}\} + \sum_{j=1}^{n} \min\{a_{ij}^{(2)} m_j^{(0)}, a_{ij}^{(2)} M_j^{(0)}\} + \sum_{j=1}^{n} \min\{a_{ij}^{(3)} m_j^{(1)}, a_{ij}^{(3)} M_j^{(1)}\} \\ & - \sum_{j=1}^{n} \max\{b_{ij}^{(1)} m_j^{(3)}, b_{ij}^{(1)} M_j^{(3)}\} + \sum_{j=1}^{n} \min\{b_{ij}^{(2)} m_j^{(0)}, b_{ij}^{(2)} M_j^{(0)}\} + \sum_{j=1}^{n} \min\{b_{ij}^{(3)} m_j^{(1)}, b_{ij}^{(3)} M_j^{(1)}\} + I_i^{(2)}, \end{aligned}$$
$$\begin{aligned} \overline{F}_i^{(3)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(3)}(\xi) + \sum_{j=1, j\ne i}^{n} \max\{a_{ij}^{(0)} m_j^{(3)}, a_{ij}^{(0)} M_j^{(3)}\} + \sum_{j=1, j\ne i}^{n} \max\{b_{ij}^{(0)} m_j^{(3)}, b_{ij}^{(0)} M_j^{(3)}\} \\ & + \sum_{j=1}^{n} \max\{a_{ij}^{(1)} m_j^{(2)}, a_{ij}^{(1)} M_j^{(2)}\} - \sum_{j=1}^{n} \min\{a_{ij}^{(2)} m_j^{(1)}, a_{ij}^{(2)} M_j^{(1)}\} + \sum_{j=1}^{n} \max\{a_{ij}^{(3)} m_j^{(0)}, a_{ij}^{(3)} M_j^{(0)}\} \\ & + \sum_{j=1}^{n} \max\{b_{ij}^{(1)} m_j^{(2)}, b_{ij}^{(1)} M_j^{(2)}\} - \sum_{j=1}^{n} \min\{b_{ij}^{(2)} m_j^{(1)}, b_{ij}^{(2)} M_j^{(1)}\} + \sum_{j=1}^{n} \max\{b_{ij}^{(3)} m_j^{(0)}, b_{ij}^{(3)} M_j^{(0)}\} + I_i^{(3)}, \end{aligned}$$
$$\begin{aligned} \underline{F}_i^{(3)}(\xi) ={}& -c_i \xi + (a_{ii}^{(0)} + b_{ii}^{(0)}) f_i^{(3)}(\xi) + \sum_{j=1, j\ne i}^{n} \min\{a_{ij}^{(0)} m_j^{(3)}, a_{ij}^{(0)} M_j^{(3)}\} + \sum_{j=1, j\ne i}^{n} \min\{b_{ij}^{(0)} m_j^{(3)}, b_{ij}^{(0)} M_j^{(3)}\} \\ & + \sum_{j=1}^{n} \min\{a_{ij}^{(1)} m_j^{(2)}, a_{ij}^{(1)} M_j^{(2)}\} - \sum_{j=1}^{n} \max\{a_{ij}^{(2)} m_j^{(1)}, a_{ij}^{(2)} M_j^{(1)}\} + \sum_{j=1}^{n} \min\{a_{ij}^{(3)} m_j^{(0)}, a_{ij}^{(3)} M_j^{(0)}\} \\ & + \sum_{j=1}^{n} \min\{b_{ij}^{(1)} m_j^{(2)}, b_{ij}^{(1)} M_j^{(2)}\} - \sum_{j=1}^{n} \max\{b_{ij}^{(2)} m_j^{(1)}, b_{ij}^{(2)} M_j^{(1)}\} + \sum_{j=1}^{n} \min\{b_{ij}^{(3)} m_j^{(0)}, b_{ij}^{(3)} M_j^{(0)}\} + I_i^{(3)}. \end{aligned}$$
We can derive the following results from the previous definitions for i = 1, 2, …, n,
$$\underline{F}_i^{(k)}(\xi) \le F_i^{(k)}(\xi) \le \overline{F}_i^{(k)}(\xi), \quad k = 0, 1, 2, 3. \tag{10}$$
Based on the above inequality, the following results can be obtained.
Theorem 1.
Suppose
$$\overline{F}_i^{(k)}(p_i^{(k)}) < 0, \quad \underline{F}_i^{(k)}(P_i^{(k)}) > 0, \tag{11}$$
for $k = 0, 1, 2, 3$, $i = 1, 2, \ldots, n$; then there exist $3^{4n}$ equilibrium points for system (1).
Proof.
According to the definition of $F_i^{(k)}(\xi)$, we can obtain the fact that
$$\lim_{\xi \to -\infty} F_i^{(k)}(\xi) = +\infty, \quad \lim_{\xi \to +\infty} F_i^{(k)}(\xi) = -\infty. \tag{12}$$
From condition (11), we have
$$F_i^{(k)}(p_i^{(k)}) \le \overline{F}_i^{(k)}(p_i^{(k)}) < 0, \quad F_i^{(k)}(P_i^{(k)}) \ge \underline{F}_i^{(k)}(P_i^{(k)}) > 0. \tag{13}$$
Let $\Phi_\xi$ be an arbitrary subregion located in $\Phi$, and denote by $\overline{\Phi}_\xi$ the closure of $\Phi_\xi$. Without loss of generality, assume
$$\overline{\Phi}_\xi = \prod_{i=1}^{n} \prod_{k=0}^{3} [\alpha_i^{(k)}, \beta_i^{(k)}],$$
where $[\alpha_i^{(k)}, \beta_i^{(k)}]$ is $[-1/\varepsilon, p_i^{(k)}]$ or $[p_i^{(k)}, P_i^{(k)}]$ or $[P_i^{(k)}, 1/\varepsilon]$, and $\varepsilon > 0$ is a sufficiently small constant. By the intermediate value theorem together with (12) and (13), we can derive that
$$F_i^{(k)}(\alpha_i^{(k)}) \cdot F_i^{(k)}(\beta_i^{(k)}) < 0.$$
Therefore, there exists $\bar{x}_i^{(k)} \in [\alpha_i^{(k)}, \beta_i^{(k)}]$ such that $F_i^{(k)}(\bar{x}_i^{(k)}) = 0$, $k = 0, 1, 2, 3$, $i = 1, 2, \ldots, n$. Denote a mapping $G: \overline{\Phi}_\xi \to \overline{\Phi}_\xi$ with
$$G(x_1^{(0)}, \ldots, x_n^{(0)}, x_1^{(1)}, \ldots, x_n^{(1)}, x_1^{(2)}, \ldots, x_n^{(2)}, x_1^{(3)}, \ldots, x_n^{(3)}) = (\bar{x}_1^{(0)}, \ldots, \bar{x}_n^{(0)}, \bar{x}_1^{(1)}, \ldots, \bar{x}_n^{(1)}, \bar{x}_1^{(2)}, \ldots, \bar{x}_n^{(2)}, \bar{x}_1^{(3)}, \ldots, \bar{x}_n^{(3)}).$$
$G$ is continuous. By Brouwer's fixed point theorem, there exists a fixed point $(\hat{x}_1^{(0)}, \ldots, \hat{x}_n^{(0)}, \hat{x}_1^{(1)}, \ldots, \hat{x}_n^{(1)}, \hat{x}_1^{(2)}, \ldots, \hat{x}_n^{(2)}, \hat{x}_1^{(3)}, \ldots, \hat{x}_n^{(3)})$ in $\overline{\Phi}_\xi$. So $\hat{x} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n)^T$ with $\hat{x}_i = \hat{x}_i^{(0)} + \hat{x}_i^{(1)} q_1 + \hat{x}_i^{(2)} q_2 + \hat{x}_i^{(3)} q_3$ is an equilibrium point of system (1). $\Phi$ has $3^{4n}$ subregions, and $\Phi_\xi$ is an arbitrary subregion in $\Phi$. Therefore, there exist $3^{4n}$ equilibrium points for system (1). □
Remark 1.
By dividing the quaternion field into multiple local regions and applying the intermediate value theorem and Brouwer's fixed point theorem, sufficient conditions for the existence of multiple local equilibrium points of system (1) are established. Ref. [15] established sufficient conditions for the existence of multiple equilibrium points of fractional-order competitive neural networks by using the concave-convex characteristics of the activation functions. Compared with Ref. [15], the boundary-value approach in this paper is simpler and easier to understand.
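For intuition, the sign-change argument behind Theorem 1 can be illustrated on a hypothetical scalar instance (n = 1, a single component, all cross-coupling terms zero; the parameter values below are ours, not from the paper), where $F(\xi) = -c\,\xi + (a + b) f(\xi) + I$ has one zero in each of the three intervals:

```python
# Piecewise-linear activation with m = -1, M = 1, p = -0.5, P = 0.5 (slope K = 2).
def f(xi, m=-1.0, M=1.0, p=-0.5, P=0.5):
    if xi < p: return m
    if xi > P: return M
    return m + (M - m) / (P - p) * (xi - p)

# Scalar F with c = 1, a + b = 3, I = 0; condition (11) holds:
# F(p) = -2.5 < 0 and F(P) = 2.5 > 0.
def F(xi, c=1.0, ab=3.0, I=0.0):
    return -c * xi + ab * f(xi) + I

def bisect_root(lo, hi, tol=1e-10):
    """Locate a zero of F in [lo, hi], assuming a sign change on the interval."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# One root per interval: (-inf, p), [p, P], (P, +inf).
roots = [bisect_root(-10.0, -0.5), bisect_root(-0.5, 0.5), bisect_root(0.5, 10.0)]
```

With these values the three equilibria sit at roughly $-3$, $0$, and $3$, one in each interval, matching the $3^{4n}$ count for $4n = 1$ component.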
Theorem 2.
Assume condition (11) holds; then, Φ 1 is positively invariant.
Proof.
By contradiction, suppose $\Phi_1$ is not positively invariant. Then there exist $i_0 \in \{1, 2, \ldots, n\}$, $k_0 \in \{0, 1, 2, 3\}$, and $t_0 > 0$ such that $x_{i_0}^{(k_0)}(t_0)$ escapes from $\Phi_1$. Therefore, there are two cases as follows.
Case A.
$$x_{i_0}^{(k_0)}(t_0) = p_{i_0}^{(k_0)}, \quad x_{i_0}^{(k_0)}(t) < p_{i_0}^{(k_0)}, \; t \in [-\tau, t_0).$$
Case B.
$$x_{i_0}^{(k_0)}(t_0) = P_{i_0}^{(k_0)}, \quad x_{i_0}^{(k_0)}(t) > P_{i_0}^{(k_0)}, \; t \in [-\tau, t_0).$$
From Case A we can derive $\dot{x}_{i_0}^{(k_0)}(t_0) \ge 0$, and from Case B we can derive $\dot{x}_{i_0}^{(k_0)}(t_0) \le 0$. However, for Case A,
$$\begin{aligned} \dot{x}_{i_0}^{(k_0)}(t_0) ={}& -c_{i_0} x_{i_0}^{(k_0)}(t_0) + (a_{i_0 i_0}^{(0)} + b_{i_0 i_0}^{(0)}) f_{i_0}^{(k_0)}(x_{i_0}^{(k_0)}(t_0)) + M_{i_0}^{(k_0)}(t_0) \\ ={}& -c_{i_0} p_{i_0}^{(k_0)} + (a_{i_0 i_0}^{(0)} + b_{i_0 i_0}^{(0)}) f_{i_0}^{(k_0)}(p_{i_0}^{(k_0)}) + M_{i_0}^{(k_0)}(t_0) \\ \le{}& \overline{F}_{i_0}^{(k_0)}(p_{i_0}^{(k_0)}) < 0, \end{aligned} \tag{14}$$
where the functions $M_i^{(k)}(t)$, $k = 0, 1, 2, 3$, are defined as follows:
$$\begin{aligned} M_i^{(0)}(t) ={}& \sum_{j=1, j\ne i}^{n} a_{ij}^{(0)} f_j^{(0)}(x_j^{(0)}(t)) - \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(1)}(x_j^{(1)}(t)) - \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(2)}(x_j^{(2)}(t)) - \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(3)}(x_j^{(3)}(t)) \\ & + \sum_{j=1, j\ne i}^{n} b_{ij}^{(0)} f_j^{(0)}(x_j^{(0)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(1)}(x_j^{(1)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(2)}(x_j^{(2)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(3)}(x_j^{(3)}(t-\tau)) + I_i^{(0)}, \end{aligned}$$
$$\begin{aligned} M_i^{(1)}(t) ={}& \sum_{j=1, j\ne i}^{n} a_{ij}^{(0)} f_j^{(1)}(x_j^{(1)}(t)) + \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(0)}(x_j^{(0)}(t)) + \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(3)}(x_j^{(3)}(t)) - \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(2)}(x_j^{(2)}(t)) \\ & + \sum_{j=1, j\ne i}^{n} b_{ij}^{(0)} f_j^{(1)}(x_j^{(1)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(0)}(x_j^{(0)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(3)}(x_j^{(3)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(2)}(x_j^{(2)}(t-\tau)) + I_i^{(1)}, \end{aligned}$$
$$\begin{aligned} M_i^{(2)}(t) ={}& \sum_{j=1, j\ne i}^{n} a_{ij}^{(0)} f_j^{(2)}(x_j^{(2)}(t)) - \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(3)}(x_j^{(3)}(t)) + \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(0)}(x_j^{(0)}(t)) + \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(1)}(x_j^{(1)}(t)) \\ & + \sum_{j=1, j\ne i}^{n} b_{ij}^{(0)} f_j^{(2)}(x_j^{(2)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(3)}(x_j^{(3)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(0)}(x_j^{(0)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(1)}(x_j^{(1)}(t-\tau)) + I_i^{(2)}, \end{aligned}$$
$$\begin{aligned} M_i^{(3)}(t) ={}& \sum_{j=1, j\ne i}^{n} a_{ij}^{(0)} f_j^{(3)}(x_j^{(3)}(t)) + \sum_{j=1}^{n} a_{ij}^{(1)} f_j^{(2)}(x_j^{(2)}(t)) - \sum_{j=1}^{n} a_{ij}^{(2)} f_j^{(1)}(x_j^{(1)}(t)) + \sum_{j=1}^{n} a_{ij}^{(3)} f_j^{(0)}(x_j^{(0)}(t)) \\ & + \sum_{j=1, j\ne i}^{n} b_{ij}^{(0)} f_j^{(3)}(x_j^{(3)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(1)} f_j^{(2)}(x_j^{(2)}(t-\tau)) - \sum_{j=1}^{n} b_{ij}^{(2)} f_j^{(1)}(x_j^{(1)}(t-\tau)) + \sum_{j=1}^{n} b_{ij}^{(3)} f_j^{(0)}(x_j^{(0)}(t-\tau)) + I_i^{(3)}. \end{aligned}$$
It can be seen that inequality (14) contradicts Case A. Similarly, for Case B,
$$\begin{aligned} \dot{x}_{i_0}^{(k_0)}(t_0) ={}& -c_{i_0} x_{i_0}^{(k_0)}(t_0) + (a_{i_0 i_0}^{(0)} + b_{i_0 i_0}^{(0)}) f_{i_0}^{(k_0)}(x_{i_0}^{(k_0)}(t_0)) + M_{i_0}^{(k_0)}(t_0) \\ ={}& -c_{i_0} P_{i_0}^{(k_0)} + (a_{i_0 i_0}^{(0)} + b_{i_0 i_0}^{(0)}) f_{i_0}^{(k_0)}(P_{i_0}^{(k_0)}) + M_{i_0}^{(k_0)}(t_0) \\ \ge{}& \underline{F}_{i_0}^{(k_0)}(P_{i_0}^{(k_0)}) > 0, \end{aligned} \tag{15}$$
where the functions $M_i^{(k)}(t)$, $k = 0, 1, 2, 3$, are the same as in Case A. Inequality (15) contradicts Case B. Therefore, $\Phi_1$ is positively invariant. □
Remark 2.
In Theorem 2, we have analyzed the invariance of $\Phi_1$. This property means that trajectories of system (1) starting in $\Phi_1$ will not escape from $\Phi_1$, which facilitates the discussion of the stability of the equilibrium points later.
Now, we will investigate the stability of multiple equilibrium points for system (1).
Theorem 3.
Assume condition (11) holds. There exist at least $2^{4n}$ locally exponentially stable equilibrium points in the Lagrange sense for system (1) if there exist $\mu_i^{(k)} > 0$, $k = 0, 1, 2, 3$, $i = 1, 2, \ldots, n$, such that
$$\sigma_{\min} > \omega_{\max} > 0, \tag{16}$$
where
$$\sigma_{\min} = \min_{1 \le i \le n} \{\sigma_i^{(k)}, k = 0, 1, 2, 3\}, \quad \omega_{\max} = \max_{1 \le i \le n} \{\omega_i^{(k)}, k = 0, 1, 2, 3\},$$
$$\sigma_i^{(0)} = c_i - \sum_{j=1}^{n} \frac{\mu_j^{(0)}}{\mu_i^{(0)}} |a_{ji}^{(0)}| K_i^{(0)} - \sum_{j=1}^{n} \frac{\mu_j^{(1)}}{\mu_i^{(0)}} |a_{ji}^{(1)}| K_i^{(0)} - \sum_{j=1}^{n} \frac{\mu_j^{(2)}}{\mu_i^{(0)}} |a_{ji}^{(2)}| K_i^{(0)} - \sum_{j=1}^{n} \frac{\mu_j^{(3)}}{\mu_i^{(0)}} |a_{ji}^{(3)}| K_i^{(0)},$$
$$\sigma_i^{(1)} = c_i - \sum_{j=1}^{n} \frac{\mu_j^{(0)}}{\mu_i^{(1)}} |a_{ji}^{(1)}| K_i^{(1)} - \sum_{j=1}^{n} \frac{\mu_j^{(1)}}{\mu_i^{(1)}} |a_{ji}^{(0)}| K_i^{(1)} - \sum_{j=1}^{n} \frac{\mu_j^{(2)}}{\mu_i^{(1)}} |a_{ji}^{(3)}| K_i^{(1)} - \sum_{j=1}^{n} \frac{\mu_j^{(3)}}{\mu_i^{(1)}} |a_{ji}^{(2)}| K_i^{(1)},$$
$$\sigma_i^{(2)} = c_i - \sum_{j=1}^{n} \frac{\mu_j^{(0)}}{\mu_i^{(2)}} |a_{ji}^{(2)}| K_i^{(2)} - \sum_{j=1}^{n} \frac{\mu_j^{(1)}}{\mu_i^{(2)}} |a_{ji}^{(3)}| K_i^{(2)} - \sum_{j=1}^{n} \frac{\mu_j^{(2)}}{\mu_i^{(2)}} |a_{ji}^{(0)}| K_i^{(2)} - \sum_{j=1}^{n} \frac{\mu_j^{(3)}}{\mu_i^{(2)}} |a_{ji}^{(1)}| K_i^{(2)},$$
$$\sigma_i^{(3)} = c_i - \sum_{j=1}^{n} \frac{\mu_j^{(0)}}{\mu_i^{(3)}} |a_{ji}^{(3)}| K_i^{(3)} - \sum_{j=1}^{n} \frac{\mu_j^{(1)}}{\mu_i^{(3)}} |a_{ji}^{(2)}| K_i^{(3)} - \sum_{j=1}^{n} \frac{\mu_j^{(2)}}{\mu_i^{(3)}} |a_{ji}^{(1)}| K_i^{(3)} - \sum_{j=1}^{n} \frac{\mu_j^{(3)}}{\mu_i^{(3)}} |a_{ji}^{(0)}| K_i^{(3)},$$
$$\omega_i^{(0)} = \sum_{j=1}^{n} \frac{\mu_j^{(0)}}{\mu_i^{(0)}} |b_{ji}^{(0)}| K_i^{(0)} + \sum_{j=1}^{n} \frac{\mu_j^{(1)}}{\mu_i^{(0)}} |b_{ji}^{(1)}| K_i^{(0)} + \sum_{j=1}^{n} \frac{\mu_j^{(2)}}{\mu_i^{(0)}} |b_{ji}^{(2)}| K_i^{(0)} + \sum_{j=1}^{n} \frac{\mu_j^{(3)}}{\mu_i^{(0)}} |b_{ji}^{(3)}| K_i^{(0)},$$
$$\omega_i^{(1)} = \sum_{j=1}^{n} \frac{\mu_j^{(0)}}{\mu_i^{(1)}} |b_{ji}^{(1)}| K_i^{(1)} + \sum_{j=1}^{n} \frac{\mu_j^{(1)}}{\mu_i^{(1)}} |b_{ji}^{(0)}| K_i^{(1)} + \sum_{j=1}^{n} \frac{\mu_j^{(2)}}{\mu_i^{(1)}} |b_{ji}^{(3)}| K_i^{(1)} + \sum_{j=1}^{n} \frac{\mu_j^{(3)}}{\mu_i^{(1)}} |b_{ji}^{(2)}| K_i^{(1)},$$
$$\omega_i^{(2)} = \sum_{j=1}^{n} \frac{\mu_j^{(0)}}{\mu_i^{(2)}} |b_{ji}^{(2)}| K_i^{(2)} + \sum_{j=1}^{n} \frac{\mu_j^{(1)}}{\mu_i^{(2)}} |b_{ji}^{(3)}| K_i^{(2)} + \sum_{j=1}^{n} \frac{\mu_j^{(2)}}{\mu_i^{(2)}} |b_{ji}^{(0)}| K_i^{(2)} + \sum_{j=1}^{n} \frac{\mu_j^{(3)}}{\mu_i^{(2)}} |b_{ji}^{(1)}| K_i^{(2)},$$
$$\omega_i^{(3)} = \sum_{j=1}^{n} \frac{\mu_j^{(0)}}{\mu_i^{(3)}} |b_{ji}^{(3)}| K_i^{(3)} + \sum_{j=1}^{n} \frac{\mu_j^{(1)}}{\mu_i^{(3)}} |b_{ji}^{(2)}| K_i^{(3)} + \sum_{j=1}^{n} \frac{\mu_j^{(2)}}{\mu_i^{(3)}} |b_{ji}^{(1)}| K_i^{(3)} + \sum_{j=1}^{n} \frac{\mu_j^{(3)}}{\mu_i^{(3)}} |b_{ji}^{(0)}| K_i^{(3)}.$$
Proof.
Let $\phi$ be an arbitrary subregion located in $\Phi_1$. From Theorems 1 and 2, there exists an equilibrium point in $\phi$. Let $\bar{x} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n)^T \in \mathbb{Q}^n$ be the equilibrium point in $\phi$, with $\bar{x}_i = \bar{x}_i^{(0)} q_0 + \bar{x}_i^{(1)} q_1 + \bar{x}_i^{(2)} q_2 + \bar{x}_i^{(3)} q_3$. Suppose $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in \mathbb{Q}^n$, $t > 0$, is an arbitrary neuron state in $\phi$. Denote $e_i^{(k)}(t) = x_i^{(k)}(t) - \bar{x}_i^{(k)}$, $k = 0, 1, 2, 3$. Consider the following function:
$$V(t) = V^{(0)}(t) + V^{(1)}(t) + V^{(2)}(t) + V^{(3)}(t),$$
where
$$V^{(k)}(t) = \sum_{i=1}^{n} \mu_i^{(k)} |e_i^{(k)}(t)|, \quad k = 0, 1, 2, 3.$$
Then
$$\begin{aligned} \dot{V}^{(0)}(t) ={}& \sum_{i=1}^{n} \mu_i^{(0)} \operatorname{sgn}(e_i^{(0)}(t))\, \dot{e}_i^{(0)}(t) \\ ={}& \sum_{i=1}^{n} \mu_i^{(0)} \operatorname{sgn}(e_i^{(0)}(t)) \Big\{ -c_i [x_i^{(0)}(t) - \bar{x}_i^{(0)}] + \sum_{j=1}^{n} a_{ij}^{(0)} [f_j^{(0)}(x_j^{(0)}(t)) - f_j^{(0)}(\bar{x}_j^{(0)})] \\ & - \sum_{j=1}^{n} a_{ij}^{(1)} [f_j^{(1)}(x_j^{(1)}(t)) - f_j^{(1)}(\bar{x}_j^{(1)})] - \sum_{j=1}^{n} a_{ij}^{(2)} [f_j^{(2)}(x_j^{(2)}(t)) - f_j^{(2)}(\bar{x}_j^{(2)})] \\ & - \sum_{j=1}^{n} a_{ij}^{(3)} [f_j^{(3)}(x_j^{(3)}(t)) - f_j^{(3)}(\bar{x}_j^{(3)})] + \sum_{j=1}^{n} b_{ij}^{(0)} [f_j^{(0)}(x_j^{(0)}(t-\tau)) - f_j^{(0)}(\bar{x}_j^{(0)})] \\ & - \sum_{j=1}^{n} b_{ij}^{(1)} [f_j^{(1)}(x_j^{(1)}(t-\tau)) - f_j^{(1)}(\bar{x}_j^{(1)})] - \sum_{j=1}^{n} b_{ij}^{(2)} [f_j^{(2)}(x_j^{(2)}(t-\tau)) - f_j^{(2)}(\bar{x}_j^{(2)})] \\ & - \sum_{j=1}^{n} b_{ij}^{(3)} [f_j^{(3)}(x_j^{(3)}(t-\tau)) - f_j^{(3)}(\bar{x}_j^{(3)})] \Big\} \\ \le{}& -\sum_{i=1}^{n} \mu_i^{(0)} c_i |e_i^{(0)}(t)| + \sum_{i=1}^{n}\sum_{j=1}^{n} \mu_i^{(0)} |a_{ij}^{(0)}| K_j^{(0)} |e_j^{(0)}(t)| + \sum_{i=1}^{n}\sum_{j=1}^{n} \mu_i^{(0)} |a_{ij}^{(1)}| K_j^{(1)} |e_j^{(1)}(t)| \\ & + \sum_{i=1}^{n}\sum_{j=1}^{n} \mu_i^{(0)} |a_{ij}^{(2)}| K_j^{(2)} |e_j^{(2)}(t)| + \sum_{i=1}^{n}\sum_{j=1}^{n} \mu_i^{(0)} |a_{ij}^{(3)}| K_j^{(3)} |e_j^{(3)}(t)| \\ & + \sum_{i=1}^{n}\sum_{j=1}^{n} \mu_i^{(0)} |b_{ij}^{(0)}| K_j^{(0)} |e_j^{(0)}(t-\tau)| + \sum_{i=1}^{n}\sum_{j=1}^{n} \mu_i^{(0)} |b_{ij}^{(1)}| K_j^{(1)} |e_j^{(1)}(t-\tau)| \\ & + \sum_{i=1}^{n}\sum_{j=1}^{n} \mu_i^{(0)} |b_{ij}^{(2)}| K_j^{(2)} |e_j^{(2)}(t-\tau)| + \sum_{i=1}^{n}\sum_{j=1}^{n} \mu_i^{(0)} |b_{ij}^{(3)}| K_j^{(3)} |e_j^{(3)}(t-\tau)| \\ ={}& -\sum_{i=1}^{n} \mu_i^{(0)} c_i |e_i^{(0)}(t)| + \sum_{j=1}^{n}\sum_{i=1}^{n} \mu_j^{(0)} |a_{ji}^{(0)}| K_i^{(0)} |e_i^{(0)}(t)| + \sum_{j=1}^{n}\sum_{i=1}^{n} \mu_j^{(0)} |a_{ji}^{(1)}| K_i^{(1)} |e_i^{(1)}(t)| \\ & + \sum_{j=1}^{n}\sum_{i=1}^{n} \mu_j^{(0)} |a_{ji}^{(2)}| K_i^{(2)} |e_i^{(2)}(t)| + \sum_{j=1}^{n}\sum_{i=1}^{n} \mu_j^{(0)} |a_{ji}^{(3)}| K_i^{(3)} |e_i^{(3)}(t)| \\ & + \sum_{j=1}^{n}\sum_{i=1}^{n} \mu_j^{(0)} |b_{ji}^{(0)}| K_i^{(0)} |e_i^{(0)}(t-\tau)| + \sum_{j=1}^{n}\sum_{i=1}^{n} \mu_j^{(0)} |b_{ji}^{(1)}| K_i^{(1)} |e_i^{(1)}(t-\tau)| \\ & + \sum_{j=1}^{n}\sum_{i=1}^{n} \mu_j^{(0)} |b_{ji}^{(2)}| K_i^{(2)} |e_i^{(2)}(t-\tau)| + \sum_{j=1}^{n}\sum_{i=1}^{n} \mu_j^{(0)} |b_{ji}^{(3)}| K_i^{(3)} |e_i^{(3)}(t-\tau)|. \end{aligned}$$
Similarly,
$$
\begin{aligned}
\dot V^{(1)}(t) \le{}& -\sum_{i=1}^{n}\mu_i^{(1)}c_i\bigl|e_i^{(1)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(1)}\bigl|a_{ji}^{(0)}\bigr|K_i^{(1)}\bigl|e_i^{(1)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(1)}\bigl|a_{ji}^{(1)}\bigr|K_i^{(0)}\bigl|e_i^{(0)}(t)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(1)}\bigl|a_{ji}^{(2)}\bigr|K_i^{(3)}\bigl|e_i^{(3)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(1)}\bigl|a_{ji}^{(3)}\bigr|K_i^{(2)}\bigl|e_i^{(2)}(t)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(1)}\bigl|b_{ji}^{(0)}\bigr|K_i^{(1)}\bigl|e_i^{(1)}(t-\tau)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(1)}\bigl|b_{ji}^{(1)}\bigr|K_i^{(0)}\bigl|e_i^{(0)}(t-\tau)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(1)}\bigl|b_{ji}^{(2)}\bigr|K_i^{(3)}\bigl|e_i^{(3)}(t-\tau)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(1)}\bigl|b_{ji}^{(3)}\bigr|K_i^{(2)}\bigl|e_i^{(2)}(t-\tau)\bigr|,
\end{aligned}
$$
$$
\begin{aligned}
\dot V^{(2)}(t) \le{}& -\sum_{i=1}^{n}\mu_i^{(2)}c_i\bigl|e_i^{(2)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(2)}\bigl|a_{ji}^{(0)}\bigr|K_i^{(2)}\bigl|e_i^{(2)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(2)}\bigl|a_{ji}^{(1)}\bigr|K_i^{(3)}\bigl|e_i^{(3)}(t)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(2)}\bigl|a_{ji}^{(2)}\bigr|K_i^{(0)}\bigl|e_i^{(0)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(2)}\bigl|a_{ji}^{(3)}\bigr|K_i^{(1)}\bigl|e_i^{(1)}(t)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(2)}\bigl|b_{ji}^{(0)}\bigr|K_i^{(2)}\bigl|e_i^{(2)}(t-\tau)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(2)}\bigl|b_{ji}^{(1)}\bigr|K_i^{(3)}\bigl|e_i^{(3)}(t-\tau)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(2)}\bigl|b_{ji}^{(2)}\bigr|K_i^{(0)}\bigl|e_i^{(0)}(t-\tau)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(2)}\bigl|b_{ji}^{(3)}\bigr|K_i^{(1)}\bigl|e_i^{(1)}(t-\tau)\bigr|,
\end{aligned}
$$
$$
\begin{aligned}
\dot V^{(3)}(t) \le{}& -\sum_{i=1}^{n}\mu_i^{(3)}c_i\bigl|e_i^{(3)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(3)}\bigl|a_{ji}^{(0)}\bigr|K_i^{(3)}\bigl|e_i^{(3)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(3)}\bigl|a_{ji}^{(1)}\bigr|K_i^{(2)}\bigl|e_i^{(2)}(t)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(3)}\bigl|a_{ji}^{(2)}\bigr|K_i^{(1)}\bigl|e_i^{(1)}(t)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(3)}\bigl|a_{ji}^{(3)}\bigr|K_i^{(0)}\bigl|e_i^{(0)}(t)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(3)}\bigl|b_{ji}^{(0)}\bigr|K_i^{(3)}\bigl|e_i^{(3)}(t-\tau)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(3)}\bigl|b_{ji}^{(1)}\bigr|K_i^{(2)}\bigl|e_i^{(2)}(t-\tau)\bigr|\\
&+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(3)}\bigl|b_{ji}^{(2)}\bigr|K_i^{(1)}\bigl|e_i^{(1)}(t-\tau)\bigr|+\sum_{j=1}^{n}\sum_{i=1}^{n}\mu_j^{(3)}\bigl|b_{ji}^{(3)}\bigr|K_i^{(0)}\bigl|e_i^{(0)}(t-\tau)\bigr|.
\end{aligned}
$$
It follows from (17)–(20) that
$$
\begin{aligned}
\dot V(t) \le{}& -\sum_{i=1}^{n}\mu_i^{(0)}\Bigl[c_i-\sum_{j=1}^{n}\tfrac{\mu_j^{(0)}}{\mu_i^{(0)}}\bigl|a_{ji}^{(0)}\bigr|K_i^{(0)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(1)}}{\mu_i^{(0)}}\bigl|a_{ji}^{(1)}\bigr|K_i^{(0)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(2)}}{\mu_i^{(0)}}\bigl|a_{ji}^{(2)}\bigr|K_i^{(0)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(3)}}{\mu_i^{(0)}}\bigl|a_{ji}^{(3)}\bigr|K_i^{(0)}\Bigr]\bigl|e_i^{(0)}(t)\bigr|\\
&-\sum_{i=1}^{n}\mu_i^{(1)}\Bigl[c_i-\sum_{j=1}^{n}\tfrac{\mu_j^{(0)}}{\mu_i^{(1)}}\bigl|a_{ji}^{(1)}\bigr|K_i^{(1)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(1)}}{\mu_i^{(1)}}\bigl|a_{ji}^{(0)}\bigr|K_i^{(1)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(2)}}{\mu_i^{(1)}}\bigl|a_{ji}^{(3)}\bigr|K_i^{(1)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(3)}}{\mu_i^{(1)}}\bigl|a_{ji}^{(2)}\bigr|K_i^{(1)}\Bigr]\bigl|e_i^{(1)}(t)\bigr|\\
&-\sum_{i=1}^{n}\mu_i^{(2)}\Bigl[c_i-\sum_{j=1}^{n}\tfrac{\mu_j^{(0)}}{\mu_i^{(2)}}\bigl|a_{ji}^{(2)}\bigr|K_i^{(2)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(1)}}{\mu_i^{(2)}}\bigl|a_{ji}^{(3)}\bigr|K_i^{(2)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(2)}}{\mu_i^{(2)}}\bigl|a_{ji}^{(0)}\bigr|K_i^{(2)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(3)}}{\mu_i^{(2)}}\bigl|a_{ji}^{(1)}\bigr|K_i^{(2)}\Bigr]\bigl|e_i^{(2)}(t)\bigr|\\
&-\sum_{i=1}^{n}\mu_i^{(3)}\Bigl[c_i-\sum_{j=1}^{n}\tfrac{\mu_j^{(0)}}{\mu_i^{(3)}}\bigl|a_{ji}^{(3)}\bigr|K_i^{(3)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(1)}}{\mu_i^{(3)}}\bigl|a_{ji}^{(2)}\bigr|K_i^{(3)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(2)}}{\mu_i^{(3)}}\bigl|a_{ji}^{(1)}\bigr|K_i^{(3)}-\sum_{j=1}^{n}\tfrac{\mu_j^{(3)}}{\mu_i^{(3)}}\bigl|a_{ji}^{(0)}\bigr|K_i^{(3)}\Bigr]\bigl|e_i^{(3)}(t)\bigr|\\
&+\sum_{i=1}^{n}\mu_i^{(0)}\Bigl[\sum_{j=1}^{n}\tfrac{\mu_j^{(0)}}{\mu_i^{(0)}}\bigl|b_{ji}^{(0)}\bigr|K_i^{(0)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(1)}}{\mu_i^{(0)}}\bigl|b_{ji}^{(1)}\bigr|K_i^{(0)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(2)}}{\mu_i^{(0)}}\bigl|b_{ji}^{(2)}\bigr|K_i^{(0)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(3)}}{\mu_i^{(0)}}\bigl|b_{ji}^{(3)}\bigr|K_i^{(0)}\Bigr]\bigl|e_i^{(0)}(t-\tau)\bigr|\\
&+\sum_{i=1}^{n}\mu_i^{(1)}\Bigl[\sum_{j=1}^{n}\tfrac{\mu_j^{(0)}}{\mu_i^{(1)}}\bigl|b_{ji}^{(1)}\bigr|K_i^{(1)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(1)}}{\mu_i^{(1)}}\bigl|b_{ji}^{(0)}\bigr|K_i^{(1)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(2)}}{\mu_i^{(1)}}\bigl|b_{ji}^{(3)}\bigr|K_i^{(1)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(3)}}{\mu_i^{(1)}}\bigl|b_{ji}^{(2)}\bigr|K_i^{(1)}\Bigr]\bigl|e_i^{(1)}(t-\tau)\bigr|\\
&+\sum_{i=1}^{n}\mu_i^{(2)}\Bigl[\sum_{j=1}^{n}\tfrac{\mu_j^{(0)}}{\mu_i^{(2)}}\bigl|b_{ji}^{(2)}\bigr|K_i^{(2)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(1)}}{\mu_i^{(2)}}\bigl|b_{ji}^{(3)}\bigr|K_i^{(2)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(2)}}{\mu_i^{(2)}}\bigl|b_{ji}^{(0)}\bigr|K_i^{(2)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(3)}}{\mu_i^{(2)}}\bigl|b_{ji}^{(1)}\bigr|K_i^{(2)}\Bigr]\bigl|e_i^{(2)}(t-\tau)\bigr|\\
&+\sum_{i=1}^{n}\mu_i^{(3)}\Bigl[\sum_{j=1}^{n}\tfrac{\mu_j^{(0)}}{\mu_i^{(3)}}\bigl|b_{ji}^{(3)}\bigr|K_i^{(3)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(1)}}{\mu_i^{(3)}}\bigl|b_{ji}^{(2)}\bigr|K_i^{(3)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(2)}}{\mu_i^{(3)}}\bigl|b_{ji}^{(1)}\bigr|K_i^{(3)}+\sum_{j=1}^{n}\tfrac{\mu_j^{(3)}}{\mu_i^{(3)}}\bigl|b_{ji}^{(0)}\bigr|K_i^{(3)}\Bigr]\bigl|e_i^{(3)}(t-\tau)\bigr|.
\end{aligned}
$$
Therefore,
$$
\dot V(t) \le -\sigma_{\min}V(t)+\omega_{\max}V(t-\tau) \le -\sigma_{\min}V(t)+\omega_{\max}\bar V(t-\tau),
$$
where $\bar V(t) = \sup_{s\in[t-\tau,\,t]}V(s)$. According to Lemma 1, we can derive that
$$
V(t) \le \bar V(0)e^{-\lambda t},
$$
where $\lambda$ is the unique positive solution of the equation $\lambda = \sigma_{\min}-\omega_{\max}e^{\lambda\tau}$. Denote $U = \min_{i\in\{1,2,\ldots,n\}}\{\mu_i^{(0)},\mu_i^{(1)},\mu_i^{(2)},\mu_i^{(3)}\}$. From (21), we have
$$
\sum_{i=1}^{n}\Bigl[\bigl|e_i^{(0)}(t)\bigr|+\bigl|e_i^{(1)}(t)\bigr|+\bigl|e_i^{(2)}(t)\bigr|+\bigl|e_i^{(3)}(t)\bigr|\Bigr] \le \frac{\bar V(0)}{U}e^{-\lambda t}.
$$
So,
$$
\sum_{i=1}^{n}\Bigl[\bigl|x_i^{(0)}(t)-\bar x_i^{(0)}\bigr|+\bigl|x_i^{(1)}(t)-\bar x_i^{(1)}\bigr|+\bigl|x_i^{(2)}(t)-\bar x_i^{(2)}\bigr|+\bigl|x_i^{(3)}(t)-\bar x_i^{(3)}\bigr|\Bigr] \le \frac{\bar V(0)}{U}e^{-\lambda t}.
$$
It can be seen from Definition 1 that $\bar x$ is a locally exponentially stable equilibrium point in the Lagrange sense. Therefore, system (1) has at least $2^{4n}$ locally exponentially stable equilibrium points in the Lagrange sense. □
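The decay rate $\lambda$ in the bound above can be computed numerically. The sketch below (ours, not part of the paper) finds the unique positive root of $\lambda = \sigma_{\min}-\omega_{\max}e^{\lambda\tau}$ by bisection; the sample values $\sigma_{\min}=1.0$, $\omega_{\max}=0.4$, $\tau=0.3$ are hypothetical and chosen only to satisfy $\sigma_{\min}>\omega_{\max}>0$, the hypothesis of the Halanay inequality.

```python
import math

def halanay_rate(sigma, omega, tau, tol=1e-12):
    """Unique positive root of lambda = sigma - omega*exp(lambda*tau).

    The root exists and is unique when sigma > omega > 0, since
    g(lam) = sigma - omega*exp(lam*tau) - lam is strictly decreasing
    with g(0) = sigma - omega > 0 and g(sigma) < 0.
    """
    def g(lam):
        return sigma - omega * math.exp(lam * tau) - lam

    lo, hi = 0.0, sigma          # g(lo) > 0, g(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = halanay_rate(1.0, 0.4, 0.3)   # hypothetical sigma_min, omega_max, tau
```

The larger the gap $\sigma_{\min}-\omega_{\max}$ and the shorter the delay $\tau$, the larger the resulting convergence rate $\lambda$.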
Remark 3.
According to Lemma 1 and Definition 1, Theorem 3 is derived. Inequality (21) guarantees local exponential stability of the equilibrium points in the Lagrange sense; to obtain (21), it is necessary to assume that (16) holds. By constructing the functional $V(t)$ and carrying out the above analysis, it is concluded that system (1) has at least $2^{4n}$ locally exponentially stable equilibrium points in the Lagrange sense.
Remark 4.
In this paper, we discuss the stability of equilibrium points in $\Phi_1$, which has $2^{4n}$ subsets. Under conditions (11) and (16), it can be derived from Theorem 3 that the equilibrium points in these subsets are locally exponentially stable in the Lagrange sense. Moreover, there are $3^{4n}-2^{4n}$ equilibrium points in $\Phi_2$ ($\Phi_2 = \Phi\setminus\Phi_1$). Because of the high dimension and the time delay of system (1), it is difficult to discuss the stability of the equilibrium points in $\Phi_2$; we will address this in follow-up research.
Remark 5.
Because a one-dimensional quaternion-valued neural network can be separated into four real-valued dynamic equations, under the same activation function a one-dimensional quaternion-valued neural network, a two-dimensional complex-valued neural network, and a four-dimensional real-valued neural network have the same number of equilibrium points. That is to say, for the same dimension, a quaternion-valued neural network has more stable equilibrium points than a complex-valued neural network [23] or a real-valued neural network [19]. Since the stable equilibrium points of a neural network are designed as the ideal patterns for pattern recognition, a quaternion-valued neural network has larger storage capacity than the existing complex-valued and real-valued neural networks under the same conditions.
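The counting in this remark can be made concrete. Assuming two stable states per real component under this activation (as in the counts quoted above), an $n$-neuron network has $2^{n}$, $2^{2n}$, or $2^{4n}$ stable equilibria depending on whether its states are real, complex, or quaternion valued; the helper below merely illustrates that arithmetic.

```python
def stable_equilibria(n, parts):
    """Number of locally stable equilibria for an n-neuron network whose
    state has `parts` real components per neuron (1: real-valued,
    2: complex-valued, 4: quaternion-valued), assuming two stable
    states per real component."""
    return 2 ** (parts * n)

# For n = 1 the quaternion-valued network already stores 16 patterns,
# matching the 16 stable equilibria of the example in Section 4.
counts = {parts: stable_equilibria(1, parts) for parts in (1, 2, 4)}
```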
Remark 6.
Unlike pattern-recognition applications based on deep-learning algorithms, the quaternion-valued neural network determines its weights according to the sufficient conditions in the theorem. When the initial state of the network lies in an invariant domain, the state converges to the equilibrium point corresponding to that domain; this equilibrium point is designed as an ideal pattern in pattern-recognition applications.

4. Simulation Results

In this section, we will give an example to verify the validity of the theoretical results. Consider the following one-dimensional quaternion-valued neural network,
$$
\dot x_1(t) = -2x_1(t) + (1.15+0.3q_1+0.2q_2-0.2q_3)f_1(x_1(t)) + (2.6+0.1q_1-0.3q_2+0.1q_3)f_1(x_1(t-0.3)) + 0.5+0.5q_1+0.5q_2+0.5q_3,
$$
where the activation function is defined as follows, f 1 ( ξ )   =   f 1 ( 0 ) ( ξ ( 0 ) )   +   f 1 ( 1 ) ( ξ ( 1 ) ) q 1   +   f 1 ( 2 ) ( ξ ( 2 ) ) q 2   +   f 1 ( 3 ) ( ξ ( 3 ) ) q 3 , and
$$
f_1^{(k)}(\xi^{(k)}) = \begin{cases} -1, & \xi^{(k)}\in(-\infty,-1),\\ \xi^{(k)}, & \xi^{(k)}\in[-1,1],\\ 1, & \xi^{(k)}\in(1,+\infty). \end{cases}
$$
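Componentwise, this activation is simply a saturating clamp. A minimal sketch (the function names are ours, with a quaternion represented as a 4-tuple of its real components):

```python
def f(xi):
    """Piecewise-linear activation: -1 below -1, identity on [-1, 1], 1 above 1."""
    return max(-1.0, min(1.0, xi))

def f_quat(x):
    """Apply f separately to the real part and the three imaginary parts."""
    return tuple(f(c) for c in x)
```

The three constant/linear pieces give each real component two saturated stable states, which is the mechanism behind the $3^{4n}$ equilibria and $2^{4n}$ stable equilibria counted in the theorems.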
According to the correspondence of real parts and imaginary parts, system (22) can be transformed into four formulas,
$$
\begin{aligned}
\dot x_1^{(0)}(t) ={}& -2x_1^{(0)}(t) + 1.15f_1^{(0)}(x_1^{(0)}(t)) - 0.3f_1^{(1)}(x_1^{(1)}(t)) - 0.2f_1^{(2)}(x_1^{(2)}(t)) + 0.2f_1^{(3)}(x_1^{(3)}(t))\\
&+ 2.6f_1^{(0)}(x_1^{(0)}(t-0.3)) - 0.1f_1^{(1)}(x_1^{(1)}(t-0.3)) + 0.3f_1^{(2)}(x_1^{(2)}(t-0.3)) - 0.1f_1^{(3)}(x_1^{(3)}(t-0.3)) + 0.5,
\end{aligned}
$$
$$
\begin{aligned}
\dot x_1^{(1)}(t) ={}& -2x_1^{(1)}(t) + 1.15f_1^{(1)}(x_1^{(1)}(t)) + 0.3f_1^{(0)}(x_1^{(0)}(t)) + 0.2f_1^{(3)}(x_1^{(3)}(t)) + 0.2f_1^{(2)}(x_1^{(2)}(t))\\
&+ 2.6f_1^{(1)}(x_1^{(1)}(t-0.3)) + 0.1f_1^{(0)}(x_1^{(0)}(t-0.3)) - 0.3f_1^{(3)}(x_1^{(3)}(t-0.3)) - 0.1f_1^{(2)}(x_1^{(2)}(t-0.3)) + 0.5,
\end{aligned}
$$
$$
\begin{aligned}
\dot x_1^{(2)}(t) ={}& -2x_1^{(2)}(t) + 1.15f_1^{(2)}(x_1^{(2)}(t)) - 0.3f_1^{(3)}(x_1^{(3)}(t)) + 0.2f_1^{(0)}(x_1^{(0)}(t)) - 0.2f_1^{(1)}(x_1^{(1)}(t))\\
&+ 2.6f_1^{(2)}(x_1^{(2)}(t-0.3)) - 0.1f_1^{(3)}(x_1^{(3)}(t-0.3)) - 0.3f_1^{(0)}(x_1^{(0)}(t-0.3)) + 0.1f_1^{(1)}(x_1^{(1)}(t-0.3)) + 0.5,
\end{aligned}
$$
$$
\begin{aligned}
\dot x_1^{(3)}(t) ={}& -2x_1^{(3)}(t) + 1.15f_1^{(3)}(x_1^{(3)}(t)) + 0.3f_1^{(2)}(x_1^{(2)}(t)) - 0.2f_1^{(1)}(x_1^{(1)}(t)) - 0.2f_1^{(0)}(x_1^{(0)}(t))\\
&+ 2.6f_1^{(3)}(x_1^{(3)}(t-0.3)) + 0.1f_1^{(2)}(x_1^{(2)}(t-0.3)) + 0.3f_1^{(1)}(x_1^{(1)}(t-0.3)) + 0.1f_1^{(0)}(x_1^{(0)}(t-0.3)) + 0.5.
\end{aligned}
$$
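Assuming $\sigma_1^{(k)}$ has the form $c_1-\sum_m |a_1^{(m)}|K$ (our reading of definitions (3)–(10), which lie outside this excerpt), the value 0.15 quoted below can be reproduced directly from the coefficients of system (22):

```python
# coefficients of system (22)
c = 2.0
a = [1.15, 0.3, 0.2, -0.2]   # a^(0), a^(1), a^(2), a^(3)
K = 1.0                       # Lipschitz constant of the saturating activation

# assumed form of sigma: self-decay rate minus the total instantaneous gain
sigma = c - sum(abs(ak) * K for ak in a)
```

With $\mu_1^{(0)}=\mu_1^{(1)}=\mu_1^{(2)}=\mu_1^{(3)}=1$, the four indices $k=0,\dots,3$ give the same value by symmetry of the absolute coefficient sums.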
According to the definitions in (3)–(10), it can be concluded that $\bar F_1^{(0)}(1) = \bar F_1^{(1)}(1) = \bar F_1^{(2)}(1) = \bar F_1^{(3)}(1) = -2.55 < 0$ and $\underline F_1^{(0)}(1) = \underline F_1^{(1)}(1) = \underline F_1^{(2)}(1) = \underline F_1^{(3)}(1) = 3.55 > 0$. Let $\mu_1^{(0)} = \mu_1^{(1)} = \mu_1^{(2)} = \mu_1^{(3)} = 1$; then $\sigma_1^{(0)} = \sigma_1^{(1)} = \sigma_1^{(2)} = \sigma_1^{(3)} = 0.15 > 0$. Meanwhile, conditions (11) and (16) hold, so all conditions of Theorem 3 are satisfied. According to Theorem 3, system (22) has 16 locally exponentially stable equilibrium points in the Lagrange sense. We verified this conclusion through simulation experiments: the trajectories of the neuron states of system (22) were computed with the ode45 solver in MATLAB. The simulation results with 64 initial states show that system (22) has 16 locally exponentially stable equilibrium points, which are as follows,
$$
\begin{gathered}
2.025 + 2.325q_1 + 1.825q_2 + 2.325q_3,\\
1.925 + 2.225q_1 + 1.925q_2 - 1.425q_3,\\
1.925 + 2.425q_1 - 1.925q_2 + 2.425q_3,\\
1.825 + 2.325q_1 - 1.825q_2 - 1.325q_3,\\
2.425 - 1.425q_1 + 1.925q_2 + 2.225q_3,\\
2.325 - 1.525q_1 + 2.025q_2 - 1.525q_3,\\
2.325 - 1.325q_1 - 1.825q_2 + 2.325q_3,\\
2.225 - 1.425q_1 - 1.725q_2 - 1.425q_3,\\
2.425 - 1.425q_1 + 1.925q_2 + 2.225q_3,\\
2.325 - 1.525q_1 + 2.025q_2 - 1.525q_3,\\
2.325 - 1.325q_1 - 1.825q_2 + 2.325q_3,\\
2.225 - 1.425q_1 - 1.725q_2 - 1.425q_3,\\
-1.325 - 1.825q_1 + 2.325q_2 + 1.825q_3,\\
-1.425 - 1.925q_1 + 2.425q_2 - 1.925q_3,\\
-1.425 - 1.725q_1 - 1.425q_2 + 1.925q_3,\\
-1.525 - 1.825q_1 - 1.325q_2 - 1.825q_3.
\end{gathered}
$$
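The first equilibrium in this list can be cross-checked with a few lines of code. The sketch below integrates the four real equations of system (22) by forward Euler with a delay buffer (the paper uses MATLAB's ode45; the scheme, step size, horizon, and initial history here are our own choices). Starting from the constant history $2+2q_1+2q_2+2q_3$, the state settles near $2.025+2.325q_1+1.825q_2+2.325q_3$, the first point above.

```python
def f(xi):
    # saturating activation from the example
    return max(-1.0, min(1.0, xi))

# rows k = 0..3: dx_k/dt = -2*x_k + sum_m A[k][m]*f(x_m(t))
#                          + sum_m B[k][m]*f(x_m(t - 0.3)) + 0.5
A = [[ 1.15, -0.3,  -0.2,   0.2 ],
     [ 0.3,   1.15,  0.2,   0.2 ],
     [ 0.2,  -0.2,   1.15, -0.3 ],
     [-0.2,  -0.2,   0.3,   1.15]]
B = [[ 2.6,  -0.1,   0.3,  -0.1 ],
     [ 0.1,   2.6,  -0.1,  -0.3 ],
     [-0.3,   0.1,   2.6,  -0.1 ],
     [ 0.1,   0.3,   0.1,   2.6 ]]

def simulate(x0, T=15.0, h=0.01, tau=0.3):
    """Forward-Euler integration; `hist` holds one delay interval of states,
    with hist[0] the state at t - tau and hist[-1] the current state."""
    d = int(round(tau / h))
    hist = [list(x0) for _ in range(d + 1)]   # constant initial history
    for _ in range(int(round(T / h))):
        x, xd = hist[-1], hist[0]
        fx = [f(v) for v in x]
        fxd = [f(v) for v in xd]
        new = [x[k] + h * (-2.0 * x[k]
                           + sum(A[k][m] * fx[m] for m in range(4))
                           + sum(B[k][m] * fxd[m] for m in range(4))
                           + 0.5)
               for k in range(4)]
        hist.pop(0)
        hist.append(new)
    return hist[-1]

x_final = simulate([2.0, 2.0, 2.0, 2.0])
```

Because all four components stay above 1 along this trajectory, the activation remains saturated at 1 and the dynamics are linear, so the Euler iteration converges to the exact equilibrium of that region.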
The theoretical results are thus verified. Because the four-dimensional state cannot be displayed in a single figure, the transient behaviors of $x_1^{(0)}$, $x_1^{(1)}$, $x_1^{(2)}$, and $x_1^{(3)}$ with 64 initial states are depicted in four separate figures; please see Figure 1, Figure 2, Figure 3 and Figure 4. Because the stable states of $x_1^{(0)}$, $x_1^{(1)}$, $x_1^{(2)}$, and $x_1^{(3)}$ all lie in [1.825, 2.425] or [−1.925, −1.325], the four figures look very similar; nevertheless, the 16 stable equilibrium points are distinct. It can be seen from Figure 1, Figure 2, Figure 3 and Figure 4 that each initial state converges exponentially to a stable equilibrium point. As a special case of asymptotic stability, exponential stability provides a faster convergence rate.

5. Conclusions

This paper has studied the local Lagrange exponential stability of quaternion-valued neural networks with delays. By separating the quaternion-valued neural network into a real part and three imaginary parts, sufficient conditions have been established to ensure that $3^{4n}$ equilibrium points exist. By using the Halanay inequality, conditions have been established under which $2^{4n}$ equilibria are locally exponentially stable in the Lagrange sense. An example has been given to verify the validity of the theoretical results. The analysis shows that, under the same conditions, quaternion-valued neural networks have more stable equilibrium points than complex-valued and real-valued neural networks; that is, quaternion-valued neural networks have greater storage capacity in pattern-recognition applications.

Author Contributions

Conceptualization, W.D. and Y.H.; software, W.D. and T.C.; validation, W.D. and T.C.; project management, W.D. and H.L.; preparation of the original draft, W.D., Y.H. and T.C.; visualization, W.D. and T.C.; methodology, Y.H. and H.L.; investigation, Y.H. and H.L.; revision, X.F. and H.L.; supervision, X.F. and H.L.; funding acquisition, X.F. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62106225) and the Natural Science Foundation of Zhejiang Province (Grant No. LY20F020024).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Sincere thanks to everyone who suggested revisions and improved this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zheng, Y.; Sheng, W.; Sun, X.; Chen, S. Airline passenger profiling based on fuzzy deep machine learning. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2911–2923. [Google Scholar] [CrossRef] [PubMed]
  2. Zheng, Y.; Chen, S.; Xue, Y.; Xue, J. A Pythagorean-Type Fuzzy Deep Denoising Autoencoder for Industrial Accident Early Warning. IEEE Trans. Fuzzy Syst. 2017, 25, 1561–1575. [Google Scholar] [CrossRef]
  3. Zheng, Y.; Zhou, X.; Sheng, W.; Xue, Y.; Chen, S. Generative adversarial network based telecom fraud detection at the receiving bank. Neural Netw. 2018, 102, 78–86. [Google Scholar] [CrossRef] [PubMed]
  4. Cao, J.; Wang, J. Global asymptotic stability of a general class of recurrent neural networks with time-varying delays. IEEE Trans. Circuits Syst. I 2003, 50, 34–44. [Google Scholar]
  5. Cao, J.; Wang, J. Global asymptotic stability of recurrent neural networks with multiple time-varying delays. IEEE Trans. Neural Netw. 2008, 19, 855–873. [Google Scholar]
  6. Zhang, H.; Liu, Z.; Huang, G.; Liu, Z.; Wang, Z. Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay. IEEE Trans. Neural Netw. 2010, 21, 91–106. [Google Scholar] [CrossRef]
  7. Wu, A.; Zeng, Z. Exponential stabilization of memristive neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. 2012, 12, 1919–1929. [Google Scholar]
  8. Wu, A.; Zeng, Z. Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 2012, 36, 1–10. [Google Scholar] [CrossRef]
  9. Wu, A.; Zeng, Z. Lagrange stability of memristive neural networks with discrete and distributed delays. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 690–703. [Google Scholar] [CrossRef]
  10. Wen, S.; Huang, T.; Yu, X.; Chen, M.; Zeng, Z. Aperiodic sampled-data sliding-mode control of fuzzy systems with communication delays via the event-triggered method. IEEE Trans. Fuzzy Syst. 2016, 24, 1048–1057. [Google Scholar] [CrossRef]
  11. Lakshmanan, S.; Prakash, M.; Lim, C.P.; Rakkiyappan, R.; Balasubramaniam, P.; Nahavandi, S. Synchronization of an inertial neural network with time-varying delays and its application to secure communication. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 195–207. [Google Scholar] [CrossRef] [PubMed]
  12. Zhang, Z.; Guo, R.; Liu, X.; Lin, C. Lagrange Exponential Stability of Complex-Valued BAM Neural Networks with Time-Varying Delays. IEEE Trans. Syst. 2020, 50, 3072–3085. [Google Scholar] [CrossRef]
  13. Liu, P.; Zeng, Z.; Wang, J. Multistability of recurrent neural networks with nonmonotonic activation functions and mixed time delays. IEEE Trans. Syst. 2016, 46, 512–523. [Google Scholar] [CrossRef]
  14. Song, Q.; Chen, X. Multistability Analysis of Quaternion-Valued Neural Networks With Time Delays. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5430–5440. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, F.; Zeng, Z. Multistability and Stabilization of Fractional-Order Competitive Neural Networks With Unbounded Time-Varying Delays. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–12. [Google Scholar] [CrossRef] [PubMed]
  16. Zhang, F.; Huang, T.; Feng, D.; Zeng, Z. Multistability and robustness of complex-valued neural networks with delays and input perturbation. Neurocomputing 2021, 447, 319–328. [Google Scholar] [CrossRef]
  17. Zhang, F.; Huang, T.; Feng, D.; Zeng, Z. Multistability of delayed fractional-order competitive neural networks. Neural Netw. 2021, 140, 325–335. [Google Scholar] [CrossRef]
  18. Cheng, C.Y.; Lin, K.H.; Shih, C.W. Multistability and convergence in delayed neural networks. Phys. D Nonlin. Phenom. 2007, 225, 61–74. [Google Scholar] [CrossRef]
  19. Zeng, Z.; Huang, T.; Zheng, W. Multistability of recurrent neural networks with time-varying delays and the piecewise linear activation function. IEEE Trans. Neural Netw. 2010, 21, 1371–1377. [Google Scholar] [CrossRef]
  20. Wang, L.L.; Chen, T.P. Multistability of neural networks with Mexican-hat-type activation functions. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1816–1826. [Google Scholar] [CrossRef]
  21. Huang, Y.; Zhang, H.; Wang, Z. Multistability and multiperiodicity of delayed bidirectional associative memory neural networks with discontinuous activation functions. Appl. Math. Comput. 2012, 219, 899–910. [Google Scholar] [CrossRef]
  22. Huang, Y.; Zhang, H.; Wang, Z. Dynamical stability analysis of multiple equilibrium points in time-varying delayed recurrent neural networks with discontinuous activation functions. Neurocomputing 2012, 91, 21–28. [Google Scholar] [CrossRef]
  23. Huang, Y.; Zhang, H.; Wang, Z. Multistability of complex-valued recurrent neural networks with real-imaginary-type activation functions. Appl. Math. Comput. 2014, 229, 187–200. [Google Scholar] [CrossRef]
  24. Zhang, F.; Zeng, Z. Multiple ψ-Type Stability of Cohen-Grossberg Neural Networks With Both Time-Varying Discrete Delays and Distributed Delays. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 566–579. [Google Scholar] [CrossRef]
  25. Cai, Z.; Huang, L.; Zhang, L. Finite-time synchronization of master-slave neural networks with time-delays and discontinuous activations. Appl. Math. Model. 2017, 47, 208–226. [Google Scholar] [CrossRef]
  26. Wang, S.; Zhang, Z.; Lin, C.; Chen, J. Fixed-time synchronization for complex-valued BAM neural networks with time-varying delays via pinning control and adaptive pinning control. Chaos Solitons Fractals 2021, 153, 111583. [Google Scholar] [CrossRef]
  27. Wei, X.; Zhang, Z.; Lin, C.; Chen, J. Synchronization and anti-synchronization for complex-valued inertial neural networks with time-varying delays. Appl. Math. Comput. 2021, 403, 126194. [Google Scholar] [CrossRef]
  28. Wei, R.; Cao, J.; Huang, C. Lagrange exponential stability of quaternion-valued memristive neural networks with time delays. Math. Methods Appl. Sci. 2020, 43, 7269–7291. [Google Scholar] [CrossRef]
  29. Xiao, J.; Cao, J.; Zhong, S.; Wen, S. Novel methods to finite-time Mittag-Leffler synchronization problem of fractional-order quaternion-valued neural networks. Inf. Sci. 2020, 526, 221–244. [Google Scholar] [CrossRef]
  30. Wu, Z. Multiple asymptotic stability of fractional-order quaternion-valued neural networks with time-varying delays. Neurocomputing 2021, 448, 301–312. [Google Scholar] [CrossRef]
  31. Udhayakumar, K.; Rakkiyappan, R.; Liu, X.; Cao, J. Multiple ψ-type stability of fractional-order quaternion-valued neural networks. Appl. Math. Comput. 2021, 401, 126092. [Google Scholar]
  32. Gopalsamy, K. Stability and Oscillations in Delay Differential Equations of Population Dynamics; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1992. [Google Scholar]
  33. Cao, J.; Wang, J. Absolute exponential stability of recurrent neural networks with Lipschitz-continuous activation functions and time delays. Neural Netw. 2004, 17, 379–390. [Google Scholar] [CrossRef]
  34. Liao, X.; Luo, Q.; Zeng, Z.; Guo, Y. Global exponential stability in Lagrange sense for recurrent neural networks with time delays. Nonlinear Anal. Real World Appl. 2008, 9, 1535–1557. [Google Scholar] [CrossRef]
  35. Aladdin, A.M.; Rashid, T.A. A New Lagrangian Problem Crossover: A Systematic Review and Meta-Analysis of Crossover Standards. arXiv 2022, arXiv:2204.10890. [Google Scholar]
Figure 1. Transient states of x 1 ( 0 ) for system (22).
Figure 2. Transient states of x 1 ( 1 ) for system (22).
Figure 3. Transient states of x 1 ( 2 ) for system (22).
Figure 4. Transient states of x 1 ( 3 ) for system (22).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Dong, W.; Huang, Y.; Chen, T.; Fan, X.; Long, H. Local Lagrange Exponential Stability Analysis of Quaternion-Valued Neural Networks with Time Delays. Mathematics 2022, 10, 2157. https://doi.org/10.3390/math10132157

