Article

Robust Stability of Complex-Valued Stochastic Neural Networks with Time-Varying Delays and Parameter Uncertainties

by
Pharunyou Chanthorn
1,
Grienggrai Rajchakit
2,*,
Jenjira Thipcha
2,
Chanikan Emharuethai
2,
Ramalingam Sriraman
3,
Chee Peng Lim
4 and
Raja Ramachandran
5
1
Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2
Department of Mathematics, Faculty of Science, Maejo University, Chiang Mai 52290, Thailand
3
Department of Science and Humanities, Vel Tech High Tech Dr.Rangarajan Dr.Sakunthala Engineering College, Chennai 600062, India
4
Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, VIC 3216, Australia
5
Ramanujan Centre for Higher Mathematics, Alagappa University, Karaikudi 630004, India
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(5), 742; https://doi.org/10.3390/math8050742
Submission received: 8 April 2020 / Revised: 28 April 2020 / Accepted: 2 May 2020 / Published: 8 May 2020
(This article belongs to the Special Issue Neural Networks and Learning Systems)

Abstract: In practical applications, stochastic effects are normally viewed as the major sources of a system's undesirable behaviours when modelling real neural systems. As such, research on network models with stochastic effects is significant. In view of this, in this paper, we analyse the issue of robust stability for a class of uncertain complex-valued stochastic neural networks (UCVSNNs) with time-varying delays. Based on the real-imaginary separate-type activation function, the original UCVSNN model is analysed using an equivalent representation consisting of two real-valued neural networks. By constructing a proper Lyapunov–Krasovskii functional and applying Jensen's inequality, a number of sufficient conditions are derived by utilizing Itô's formula, the homeomorphism principle, the linear matrix inequality, and other analytic techniques. As a result, new sufficient conditions ensuring robust global asymptotic stability in the mean square for the considered UCVSNN models are obtained. Numerical simulations are presented to illustrate the merit of the obtained results.

1. Introduction

1.1. Background and Motivation

The dynamical analysis of a variety of neural networks (NNs) has recently attracted increasing attention from researchers. Results on NNs have been used extensively in different domains, including signal processing, pattern recognition, optimal control, and other science and engineering fields [1,2,3,4,5,6,7,8,9,10]. On the other hand, stability is a key requirement for a system to function properly and safely. As such, it is important to carry out the stability analysis of NNs, and this topic has received considerable attention [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]. Generally, NNs can be characterized by their structural and functional features, and include real-valued neural networks (RVNNs) and complex-valued neural networks (CVNNs). Recently, CVNNs have been used to model many practical systems efficiently. A CVNN consists of complex-valued states, inputs, connection weights, and activation functions. As a result, the investigation of CVNNs has increased [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25].
Recently, several methods have been presented for analysing the various dynamical behaviours of CVNNs [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. They can be grouped into two categories. In the first category, the real and imaginary parts of the activation function are not divided [10,11,12,18,24]. As an example, the study in [11] used the Lyapunov stability concept and a new complex-valued inequality to analyse the global asymptotic stability of CVNNs that contain leakage and interval time delays. In [12], the problem of CVNNs with additive time-varying delays was discussed, and the corresponding sufficient conditions ensuring global asymptotic stability of the models were derived. In the second category, the real and imaginary parts of the activation function are divided [13,19,22,23,25,31]. Based on this method, the study in [19] employed a real-imaginary separate-type activation function to perform robust state estimation of CVNNs with time delays. The problem of global robust synchronization of fractional-order CVNNs was studied in [20]. Many similar outcomes can be found in [14,15,16,17,21].
It is well known that time delay phenomena are commonly encountered in real systems. In this respect, time delays can cause chaos, divergence, poor performance, and instability in various systems. As a result, it is necessary and important to conduct the stability analysis of NNs with time delays [18,19,20,21,22,23,24,25,26,27,28]. Recently, several research studies have extensively analysed the dynamics of NNs with time delays [29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48]. On the other hand, uncertainties associated with the system parameters are often included in system modelling problems. Indeed, many practical systems are susceptible to uncertainties in real environments. From the application point of view, it is therefore important to investigate NNs with uncertain parameters [26,27,28]. Recently, several related results have been published [29,30,32,41,48].
In addition, the stability of NNs can be affected by certain stochastic disturbances [30,32,33,34,35,36]. Therefore, we have to consider the existence of noise, since it is inherent in the modelling of nonlinear dynamic systems. In fact, when a system is influenced by external disturbances, stochastic NN models become very useful for describing a real system, as compared with deterministic NN models. Therefore, many studies have been conducted to ensure the stability of stochastic NNs and to derive various stability conditions [30,32,37,38,39,40,41,42,43,44,45]. As an example, in prior studies [30,32], the problem of delay-dependent stability for a class of uncertain stochastic NNs with time-varying delays was studied without considering the diffusion coefficient vector. In [41], the authors derived several conditions to confirm the robust stability of uncertain stochastic CVNNs with additive time delays, as well as the diffusion coefficient vector; moreover, the considered diffusion coefficient vector is assumed to satisfy both the Lipschitz condition and the linear growth condition. Later, the problem of mean squared exponential stability for some classes of stochastic CVNNs with time delays was studied in [42,43]. Recently, in [48], by employing the real-imaginary separation method, the problem of robust state estimation for stochastic CVNNs with sampled-data control was studied. Similarly, a number of stability conditions for stochastic NNs can be found in [44,45]. On the other hand, there are only a few works concerned with the dynamical analysis of stochastic uncertain CVNNs. According to our survey, the study of mean squared robust asymptotic stability for the proposed UCVSNN models with time-varying delays is new in the literature, and our paper contributes toward this research area.

1.2. Contributions

Motivated by the above discussions, in this paper, we aim to deal with the robust stability problem for uncertain complex-valued stochastic neural networks (UCVSNNs) with time-varying delays. Firstly, a more general form of the NN model is considered, which includes the effects of parameter uncertainties and stochastic disturbances. In the NN model, the uncertain parameters are considered to be of the norm-bounded type, and the stochastic disturbances are assumed to be Brownian motions. Secondly, based on the real-imaginary separate-type activation function, the original UCVSNN model is separated into an equivalent representation consisting of two real-valued NNs. On the basis of the homeomorphism principle, Itô's formula, the Lyapunov–Krasovskii functional (LKF), and the linear matrix inequality (LMI), sufficient conditions are derived in terms of simplified LMIs, whose feasible solutions can be verified by MATLAB. Numerical simulations are presented to ascertain the merits of the presented results.

1.3. Organization

This paper contains five sections. In Section 2, the problem definition is formally presented. In Section 3, the main results of this paper are presented. In Section 4, the usefulness of the results is illustrated by numerical examples. Concluding remarks are given in Section 5.

1.4. List of Symbols

Throughout this paper, $\mathbb{R}^n$ and $\mathbb{C}^n$ denote the $n$-dimensional Euclidean space and unitary space, while the sets of $n \times n$ real and complex matrices are denoted by $\mathbb{R}^{n\times n}$ and $\mathbb{C}^{n\times n}$, respectively; $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^n$. A symmetric positive-definite matrix is denoted by $P > 0$; the transpose and complex conjugate transpose are denoted by the superscripts $T$ and $*$, respectively. Besides that, $J_n$ denotes the $n$-dimensional identity matrix; $\diamond$ denotes a matrix entry that can be inferred by symmetry. The space of continuous functions $\phi$ mapping $[-d,0]$ into $\mathbb{C}^n$ is denoted by $\mathcal{C}([-d,0];\mathbb{C}^n)$. $(\Omega, \mathcal{F}, \mathcal{P})$ represents a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$, and the family of all $\mathcal{F}_0$-measurable $\mathcal{C}([-d,0];\mathbb{C}^n)$-valued random variables is denoted by $L^2_{\mathcal{F}_0}([-d,0];\mathbb{C}^n)$, while $\mathrm{diag}\{\cdot\}$ stands for a block-diagonal matrix. The notation $z^T(\cdot)P$ abbreviates the quadratic form $z^T(t)Pz(t)$, and $\mathbb{E}\{\cdot\}$ indicates the mathematical expectation.

2. Problem Definition and Fundamentals

2.1. Problem Definition

A CVSNN model with parameter uncertainties is expressed as:
$$ dz(t) = \big[ -(D + \Delta D(t))z(t) + (A + \Delta A(t))g(z(t-d(t))) + J \big]\,dt + \big[ Bz(t) + Cz(t-d(t)) \big]\,d\omega(t), \qquad (1) $$
where $z(t) = [z_1(t), \ldots, z_n(t)]^T \in \mathbb{C}^n$ is the state vector, while $A = (a_{kj})_{n\times n} \in \mathbb{C}^{n\times n}$ and $D = \mathrm{diag}\{d_1, \ldots, d_n\} \in \mathbb{R}^{n\times n}$ with $d_k > 0$ $(k = 1, \ldots, n)$ are the delayed connection weight matrix and the self-feedback connection weight matrix, respectively. In addition, the vector-valued activation function is denoted by $g(z(t)) = [g_1(z_1(t)), \ldots, g_n(z_n(t))]^T : \mathbb{C}^n \to \mathbb{C}^n$; $J \in \mathbb{C}^n$ is the external input vector; $d(t)$ is the time-varying delay satisfying $0 \le d(t) \le d$ and $0 \le \dot d(t) \le \mu$, in which $d$ and $\mu$ are known real constants; and $z(t) = \phi(t)$, $t \in [-d, 0]$, is the initial condition, with $\phi \in \mathcal{C}([-d,0], \mathbb{C}^n)$. Besides that, $\omega(t)$ is the $n$-dimensional Brownian motion defined on $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, \mathcal{P})$, while $B, C \in \mathbb{R}^{n\times n}$ are known constant matrices.
Assumption 1.
The activation function $g_j(\cdot)$, $j = 1, \ldots, n$, satisfies the following Lipschitz condition for all $z_1, z_2 \in \mathbb{C}$:
$$ |g_j(z_1) - g_j(z_2)| \le l_j |z_1 - z_2|, \quad j = 1, \ldots, n, \qquad (2) $$
where l j is a constant.
Based on (2), the following inequality holds for any positive-definite matrix $W$:
$$ (g(z_1) - g(z_2))^* W (g(z_1) - g(z_2)) \le (z_1 - z_2)^* L^T W L (z_1 - z_2), \qquad (3) $$
where $L = \mathrm{diag}\{l_1, \ldots, l_n\}$.
The parameter uncertainties $\Delta D(t)$ and $\Delta A(t) = \Delta A^R(t) + i\,\Delta A^I(t)$ in (1) are assumed to satisfy:
$$ [\Delta D(t), \ \Delta A^R(t), \ \Delta A^I(t)] = G F(t) [H_1, \ H_2, \ H_3], \qquad (4) $$
where $G$ and $H_1, H_2, H_3$ are known matrices, while $F(t)$ is a time-varying uncertain matrix that satisfies:
$$ F^T(t) F(t) \le I. \qquad (5) $$
For further analysis, let $z(t) = x(t) + iy(t)$, $A = A^R + iA^I$, $g(z(t-d(t))) = g^R(x(t-d(t)), y(t-d(t))) + i\, g^I(x(t-d(t)), y(t-d(t)))$, and $J = J^R + iJ^I$, where $i$ denotes the imaginary unit. Then, the model in (1) can be separated into its real and imaginary parts:
$$ \begin{aligned} dx(t) = \big[ & -(D + \Delta D(t))x(t) + (A^R + \Delta A^R(t))\, g^R(x(t-d(t)), y(t-d(t))) \\ & - (A^I + \Delta A^I(t))\, g^I(x(t-d(t)), y(t-d(t))) + J^R \big]\,dt + \big[ Bx(t) + Cx(t-d(t)) \big]\,d\omega(t), \end{aligned} \qquad (6) $$
$$ \begin{aligned} dy(t) = \big[ & -(D + \Delta D(t))y(t) + (A^I + \Delta A^I(t))\, g^R(x(t-d(t)), y(t-d(t))) \\ & + (A^R + \Delta A^R(t))\, g^I(x(t-d(t)), y(t-d(t))) + J^I \big]\,dt + \big[ By(t) + Cy(t-d(t)) \big]\,d\omega(t). \end{aligned} \qquad (7) $$
Let $\tilde\upsilon(t) = [x^T(t), y^T(t)]^T$, $\tilde\upsilon(t-d(t)) = [x^T(t-d(t)), y^T(t-d(t))]^T$, $\tilde g(\tilde\upsilon(t-d(t))) = [g^R(x(t-d(t)), y(t-d(t)))^T, g^I(x(t-d(t)), y(t-d(t)))^T]^T$, $\tilde J = [J^{R\,T}, J^{I\,T}]^T$, and
$$ \breve D + \Delta\breve D = \mathrm{diag}\{D + \Delta D(t), \ D + \Delta D(t)\}, \quad \breve B = \mathrm{diag}\{B, B\}, \quad \breve C = \mathrm{diag}\{C, C\}, $$
$$ \breve A + \Delta\breve A = \begin{bmatrix} A^R + \Delta A^R(t) & -(A^I + \Delta A^I(t)) \\ A^I + \Delta A^I(t) & A^R + \Delta A^R(t) \end{bmatrix}. $$
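The block matrices just defined encode complex multiplication with real arithmetic: applying $[\,A^R, -A^I;\ A^I, A^R\,]$ to the stacked vector $[g^R;\, g^I]$ reproduces the real and imaginary parts of the complex product $Ag$. A minimal NumPy sketch of this equivalence (the matrix and vector below are arbitrary illustrative data, not the paper's):

```python
import numpy as np

def realify(A):
    """Real block representation of a complex matrix: [[A^R, -A^I], [A^I, A^R]]."""
    AR, AI = A.real, A.imag
    return np.block([[AR, -AI], [AI, AR]])

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
g = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Complex product A g versus its stacked real/imaginary counterpart
lhs = A @ g
rhs = realify(A) @ np.concatenate([g.real, g.imag])
assert np.allclose(rhs[:2], lhs.real) and np.allclose(rhs[2:], lhs.imag)
```

This is exactly why the $2n$-dimensional real model below is equivalent to the original complex one.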
The NN models in (6) and (7) can then be equivalently rewritten as:
$$ d\tilde\upsilon(t) = \big[ -(\breve D + \Delta\breve D)\tilde\upsilon(t) + (\breve A + \Delta\breve A)\tilde g(\tilde\upsilon(t-d(t))) + \tilde J \big]\,dt + \big[ \breve B\tilde\upsilon(t) + \breve C\tilde\upsilon(t-d(t)) \big]\,d\omega(t). \qquad (8) $$
From (4) and (5), it is easy to see that the parameter uncertainties D ˘ , A ˘ satisfy:
$$ [\Delta\breve D, \ \Delta\breve A] = \breve G \breve F(t) [\breve H_1, \ \breve H_2], \qquad (9) $$
where $\breve G = \mathrm{diag}\{G, G\}$, $\breve F(t) = \mathrm{diag}\{F(t), F(t)\}$, $\breve H_1 = \mathrm{diag}\{H_1, H_1\}$, and $\breve H_2 = \begin{bmatrix} H_2 & -H_3 \\ H_3 & H_2 \end{bmatrix}$.
From (3), one can obtain:
$$ (\tilde g(\tilde\upsilon_1) - \tilde g(\tilde\upsilon_2))^T \breve W (\tilde g(\tilde\upsilon_1) - \tilde g(\tilde\upsilon_2)) \le (\tilde\upsilon_1 - \tilde\upsilon_2)^T \breve L^T \breve W \breve L (\tilde\upsilon_1 - \tilde\upsilon_2), \qquad (10) $$
where $\breve L = \mathrm{diag}\{L, L\}$ and $\breve W = \mathrm{diag}\{W, W\}$.
Let the initial condition of the NN model (8) be:
$$ \tilde\upsilon(t) = \tilde\phi(t), \quad t \in [-d, 0], \qquad (11) $$
where $\tilde\phi(t) = [\phi^R(t)^T, \phi^I(t)^T]^T$.
Remark 1.
It should be noted that, if we let $\Delta\breve D = \Delta\breve A = \breve B = \breve C \equiv 0$, the NN model in (8) reduces to the following NN model:
$$ d\tilde\upsilon(t) = \big[ -\breve D\tilde\upsilon(t) + \breve A\tilde g(\tilde\upsilon(t-d(t))) + \tilde J \big]\,dt. \qquad (12) $$

2.2. Fundamentals

The main results can be derived by using the following lemmas.
Definition 1.
For the NN model (1) and every $\phi \in L^2_{\mathcal{F}_0}([-d,0], \mathbb{C}^n)$, the trivial solution is globally robustly asymptotically stable in the mean square if, for all admissible uncertainties,
$$ \lim_{t\to\infty} \mathbb{E}\{ |z(t;\phi)|^2 \} = 0. $$
Definition 2.
[45] Suppose that $\Omega$ is an open set of $\mathbb{R}^n$ and $H : \Omega \to \mathbb{R}^n$ is an operator. With the Euclidean norm $\|\cdot\|_2$, the nonlinear measure of $H$ on $\Omega$ is defined as:
$$ m_\Omega(H) \triangleq \sup_{x,y\in\Omega,\, x\ne y} \frac{\langle H(x) - H(y), \ x - y \rangle}{\|x - y\|_2^2} = \sup_{x,y\in\Omega,\, x\ne y} \frac{(x - y)^T (H(x) - H(y))}{\|x - y\|_2^2}. $$
Lemma 1.
[45] $H$ is an injective mapping on $\Omega$ if $m_\Omega(H) < 0$. Moreover, $H$ is a homeomorphism of $\mathbb{R}^n$ if, in addition, $\Omega = \mathbb{R}^n$.
Lemma 2.
[31] The following inequality holds for any vectors $M, N \in \mathbb{R}^n$, a scalar $\epsilon > 0$, and a positive-definite matrix $P \in \mathbb{R}^{n\times n}$:
$$ M^T P N + N^T P M \le \epsilon^{-1} M^T P M + \epsilon N^T P N. $$
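This completion-of-squares bound (it follows from $(\epsilon^{-1/2}M - \epsilon^{1/2}N)^T P (\epsilon^{-1/2}M - \epsilon^{1/2}N) \ge 0$) is easy to verify numerically; the data below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Bm = rng.standard_normal((n, n))
P = Bm @ Bm.T + n * np.eye(n)        # a positive-definite P
M = rng.standard_normal(n)
N = rng.standard_normal(n)

for eps in (0.1, 1.0, 10.0):
    lhs = M @ P @ N + N @ P @ M
    rhs = (1 / eps) * (M @ P @ M) + eps * (N @ P @ N)
    assert lhs <= rhs + 1e-12        # Lemma 2 holds for every eps > 0
```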
Lemma 3.
[46] If $O \in \mathbb{R}^{n\times n}$, $O = O^T > 0$, the scalar function $d := d(t) > 0$, and the vector-valued function $\dot z : [-d, 0] \to \mathbb{R}^n$ are such that the following integrations are well defined, then:
$$ -d \int_{t-d}^{t} \dot z^T(s) O \dot z(s)\, ds \le \zeta_1^T(t) \begin{bmatrix} -O & O \\ \diamond & -O \end{bmatrix} \zeta_1(t), $$
$$ -\frac{d^2}{2} \int_{-d}^{0}\!\!\int_{t+u}^{t} \dot z^T(s) O \dot z(s)\, ds\, du \le \zeta_2^T(t) \begin{bmatrix} -O & O \\ \diamond & -O \end{bmatrix} \zeta_2(t), $$
where $\zeta_1(t) = [z^T(t) \ \ z^T(t-d)]^T$ and $\zeta_2(t) = [d\, z^T(t) \ \ \int_{t-d}^{t} z^T(s)\, ds]^T$.
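The single-integral bound is Jensen's inequality: $d\int_{t-d}^{t}\dot z^T O\dot z\,ds \ge \big(\int\dot z\,ds\big)^T O \big(\int\dot z\,ds\big)$, which also holds exactly for Riemann sums (Cauchy–Schwarz on sums). A quick discrete check with arbitrary sample data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, d = 3, 400, 2.0
h = d / N                                   # step of the Riemann sum
Bm = rng.standard_normal((n, n))
O = Bm @ Bm.T + np.eye(n)                   # O = O^T > 0
zdot = rng.standard_normal((N, n))          # samples of ż on [t-d, t]

# d * ∫ żᵀ O ż ds  versus  (∫ ż ds)ᵀ O (∫ ż ds)
lhs = d * h * np.einsum('ti,ij,tj->', zdot, O, zdot)
v = h * zdot.sum(axis=0)
rhs = v @ O @ v
assert lhs >= rhs - 1e-9
```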
Lemma 4.
[47] For any vector $\Pi \in \mathbb{R}^m$, two matrices $Z_1, Z_2 \in \mathbb{R}^{n\times m}$, an $n\times n$ matrix $O > 0$, positive integers $m$ and $n$, and a scalar $\alpha$ in the interval $(0,1)$, define the function:
$$ \Theta(\alpha, O) = \frac{1}{\alpha}\, \Pi^T Z_1^T O Z_1 \Pi + \frac{1}{1-\alpha}\, \Pi^T Z_2^T O Z_2 \Pi. $$
If there exists a matrix $X \in \mathbb{R}^{n\times n}$ satisfying $\begin{bmatrix} O & X \\ \diamond & O \end{bmatrix} \ge 0$, then:
$$ \min_{\alpha\in(0,1)} \Theta(\alpha, O) \ge \begin{bmatrix} Z_1\Pi \\ Z_2\Pi \end{bmatrix}^T \begin{bmatrix} O & X \\ \diamond & O \end{bmatrix} \begin{bmatrix} Z_1\Pi \\ Z_2\Pi \end{bmatrix}. $$
Lemma 5.
[48] Let $\Pi = \Pi^T$, let $K_1$ and $K_2$ be real matrices, and let $M(t)$ satisfy $M^T(t)M(t) \le I$. Then, $\Pi + (K_1 M(t) K_2) + (K_1 M(t) K_2)^T < 0$ iff there exists a scalar $\epsilon > 0$ such that $\Pi + \epsilon^{-1} K_1 K_1^T + \epsilon K_2^T K_2 < 0$ or, equivalently:
$$ \begin{bmatrix} \Pi & K_1 & \epsilon K_2^T \\ \diamond & -\epsilon I & 0 \\ \diamond & \diamond & -\epsilon I \end{bmatrix} < 0. $$
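The equivalence between the condensed condition and the block LMI is a Schur-complement identity, so for any fixed data and any $\epsilon > 0$ the two definiteness tests must agree. A numeric check with arbitrary illustrative matrices:

```python
import numpy as np

def is_neg_def(S):
    return np.linalg.eigvalsh(S).max() < 0

rng = np.random.default_rng(3)
n, m = 3, 2
Pi = -5.0 * np.eye(n)                      # a symmetric Π
K1 = 0.5 * rng.standard_normal((n, m))
K2 = 0.5 * rng.standard_normal((m, n))
eps = 1.0

# Condensed form: Π + ε⁻¹ K₁K₁ᵀ + ε K₂ᵀK₂ < 0
cond = Pi + (1 / eps) * K1 @ K1.T + eps * K2.T @ K2

# Equivalent block LMI via the Schur complement
big = np.block([
    [Pi,       K1,               eps * K2.T      ],
    [K1.T,     -eps * np.eye(m), np.zeros((m, m))],
    [eps * K2, np.zeros((m, m)), -eps * np.eye(m)],
])
assert is_neg_def(cond) == is_neg_def(big)
```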
Let $\tilde\upsilon(t; \tilde\phi)$ denote the solution from the initial data $\tilde\upsilon(t) = \tilde\phi(t)$ on $-d \le t \le 0$ in $L^2_{\mathcal{F}_0}([-d,0], \mathbb{R}^{2n})$. Obviously, according to [33,34], the NN model (8) admits a trivial solution $\tilde\upsilon(t; 0) \equiv 0$.

3. Main Results

This section establishes delay-dependent robust stability criteria for the considered CVNN models based on the Lyapunov functional and LMI methods.
Theorem 1.
Under Assumption 1, if there exist a matrix $P > 0$ and a diagonal matrix $\breve W > 0$ such that the following LMI holds:
$$ \Omega = \begin{bmatrix} -P\breve D - \breve D^T P + \breve L^T \breve W \breve L & P\breve A \\ \diamond & -\breve W \end{bmatrix} < 0, \qquad (13) $$
then a unique equilibrium point exists for the CVNN model in (12).
Proof. 
Define the following operator:
$$ H(\tilde\upsilon) = -P\breve D\,\tilde\upsilon + P\breve A\,\tilde g(\tilde\upsilon) + P\tilde J, \qquad (14) $$
where $P > 0$. As such, one can infer that the equilibrium points of the NN model in (12) are the same as the zero points of $H(\tilde\upsilon)$. Subsequently, we prove that $m_{\mathbb{R}^{2n}}(H) < 0$.
By Definition 2, we have:
$$ m_{\mathbb{R}^{2n}}(H) = \sup_{\tilde\upsilon_1,\tilde\upsilon_2\in\mathbb{R}^{2n},\ \tilde\upsilon_1\ne\tilde\upsilon_2} \frac{(\tilde\upsilon_1 - \tilde\upsilon_2)^T (H(\tilde\upsilon_1) - H(\tilde\upsilon_2))}{\|\tilde\upsilon_1 - \tilde\upsilon_2\|_2^2}. \qquad (15) $$
By (10), (14), and Lemma 2, for diagonal matrix W ˘ > 0 , we have:
$$ \begin{aligned} 2(\tilde\upsilon_1 - \tilde\upsilon_2)^T (H(\tilde\upsilon_1) - H(\tilde\upsilon_2)) &= 2(\tilde\upsilon_1 - \tilde\upsilon_2)^T \big( -P\breve D(\tilde\upsilon_1 - \tilde\upsilon_2) + P\breve A(\tilde g(\tilde\upsilon_1) - \tilde g(\tilde\upsilon_2)) \big) \\ &= (\tilde\upsilon_1 - \tilde\upsilon_2)^T (-P\breve D - \breve D^T P)(\tilde\upsilon_1 - \tilde\upsilon_2) + 2(\tilde\upsilon_1 - \tilde\upsilon_2)^T P\breve A\, (\tilde g(\tilde\upsilon_1) - \tilde g(\tilde\upsilon_2)) \\ &\le (\tilde\upsilon_1 - \tilde\upsilon_2)^T (-P\breve D - \breve D^T P)(\tilde\upsilon_1 - \tilde\upsilon_2) + (\tilde\upsilon_1 - \tilde\upsilon_2)^T P\breve A \breve W^{-1} \breve A^T P (\tilde\upsilon_1 - \tilde\upsilon_2) \\ &\qquad + (\tilde g(\tilde\upsilon_1) - \tilde g(\tilde\upsilon_2))^T \breve W (\tilde g(\tilde\upsilon_1) - \tilde g(\tilde\upsilon_2)) \\ &\le (\tilde\upsilon_1 - \tilde\upsilon_2)^T \big( -P\breve D - \breve D^T P + P\breve A \breve W^{-1} \breve A^T P + \breve L^T \breve W \breve L \big)(\tilde\upsilon_1 - \tilde\upsilon_2) \\ &= (\tilde\upsilon_1 - \tilde\upsilon_2)^T\, \bar\Omega\, (\tilde\upsilon_1 - \tilde\upsilon_2), \end{aligned} \qquad (16) $$
where $\bar\Omega = -P\breve D - \breve D^T P + P\breve A \breve W^{-1} \breve A^T P + \breve L^T \breve W \breve L$.
Using Lemma 5 (the Schur complement), it follows from (13) that $\bar\Omega < 0$. Thus, for $\tilde\upsilon_1 \ne \tilde\upsilon_2$, $(\tilde\upsilon_1 - \tilde\upsilon_2)^T (H(\tilde\upsilon_1) - H(\tilde\upsilon_2)) < 0$. By (15), one can get that $m_{\mathbb{R}^{2n}}(H) < 0$. Next, using Lemma 1, $H(\tilde\upsilon)$ is a homeomorphism of $\mathbb{R}^{2n}$. As a result, $H(\tilde\upsilon)$ has a unique zero point. This indicates that the NN model in (12) has a unique equilibrium point. ☐
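As a numeric sanity check of Theorem 1's LMI, one can test a candidate pair $(P, \breve W)$ by eigenvalue computation. The sketch below uses the two-neuron data of Example 2 later in the paper ($\breve D = 3I$, $\breve L = 0.5I$, $\breve A$ built from $A = [\,1+i, 2+i;\ 1-i, 1\,]$) and the hand-picked guesses $P = I$, $\breve W = 6I$, which are assumptions for illustration rather than solver output:

```python
import numpy as np

# Real block representation of A = [[1+i, 2+i], [1-i, 1]]
A = np.array([[1 + 1j, 2 + 1j], [1 - 1j, 1 + 0j]])
Ab = np.block([[A.real, -A.imag], [A.imag, A.real]])   # \breve{A}

Db = 3.0 * np.eye(4)      # \breve{D}
Lb = 0.5 * np.eye(4)      # \breve{L}
P = np.eye(4)             # candidate P (a guess, not from an SDP solver)
Wb = 6.0 * np.eye(4)      # candidate diagonal \breve{W}

Omega = np.block([
    [-P @ Db - Db.T @ P + Lb.T @ Wb @ Lb, P @ Ab],
    [Ab.T @ P,                            -Wb   ],
])
Omega = 0.5 * (Omega + Omega.T)             # symmetrize against round-off
assert np.linalg.eigvalsh(Omega).max() < 0  # Theorem 1's LMI holds for this guess
```

Feasibility of this single guess already certifies a unique equilibrium for the deterministic model.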
Let $\upsilon^*$ be the unique equilibrium point of the CVNN model in (12). Via the change of variable $\breve\upsilon = \tilde\upsilon - \upsilon^*$, the equilibrium point is shifted to the origin, which yields the following NN model:
$$ \dot{\breve\upsilon}(t) = -\breve D\breve\upsilon(t) + \breve A\breve g(\breve\upsilon(t-d(t))), \qquad (17) $$
with the initial condition $\breve\upsilon(t) = \breve\phi(t) = \tilde\phi(t) - \upsilon^*$, $t \in [-d, 0]$, where $\breve g(\breve\upsilon(t-d(t))) = \tilde g(\breve\upsilon(t-d(t)) + \upsilon^*) - \tilde g(\upsilon^*)$.
The following Theorem 2 establishes the robust global asymptotic stability criterion for the considered NN model (1), or equivalently, the NN model (8).
Theorem 2.
Suppose that Assumption 1 holds and the activation function is divided into its real and imaginary parts. The model in (8) is robustly globally asymptotically stable in the mean square for any scalars $d > 0$ and $\mu > 0$ if there exist matrices $P > 0$, $Q > 0$, $R > 0$, $S > 0$, $U > 0$, a matrix $X$, a diagonal matrix $\breve W > 0$, and a scalar $\gamma > 0$ such that the following LMIs hold:
$$ \hat\Pi = \begin{bmatrix} \hat\Pi_{1,1} & \hat\Pi_{1,2} & \hat\Pi_{1,3} & \hat\Pi_{1,4} & \hat\Pi_{1,5} & \hat\Pi_{1,6} & \hat\Pi_{1,7} & \hat\Pi_{1,8} & \hat\Pi_{1,9} & \hat\Pi_{1,10} \\ \diamond & \hat\Pi_{2,2} & \hat\Pi_{2,3} & 0 & 0 & 0 & 0 & 0 & 0 & \hat\Pi_{2,10} \\ \diamond & \diamond & \hat\Pi_{3,3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \hat\Pi_{4,4} & 0 & 0 & \hat\Pi_{4,7} & \hat\Pi_{4,8} & \hat\Pi_{4,9} & 0 \\ \diamond & \diamond & \diamond & \diamond & \hat\Pi_{5,5} & 0 & 0 & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \hat\Pi_{6,6} & 0 & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \hat\Pi_{7,7} & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \hat\Pi_{8,8} & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \hat\Pi_{9,9} & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \hat\Pi_{10,10} \end{bmatrix} < 0, \qquad (18) $$
$$ \begin{bmatrix} S & X \\ \diamond & S \end{bmatrix} \ge 0, \qquad (19) $$
where:
$$ \begin{aligned} &\hat\Pi_{1,1} = -P\breve D - \breve D^T P + Q + R - S - d^2 U - d^2 U + \gamma \breve H_1^T \breve H_1, \quad \hat\Pi_{1,2} = S - X, \quad \hat\Pi_{1,3} = X, \quad \hat\Pi_{1,4} = P\breve A, \\ &\hat\Pi_{1,5} = dU, \quad \hat\Pi_{1,6} = dU, \quad \hat\Pi_{1,7} = -d\,\breve D^T S, \quad \hat\Pi_{1,8} = -\tfrac{d^2}{2}\breve D^T U, \quad \hat\Pi_{1,9} = P\breve G, \quad \hat\Pi_{1,10} = \breve B^T P, \\ &\hat\Pi_{2,2} = -(1-\mu)Q - S - S + X + X^T + \breve L^T \breve W \breve L, \quad \hat\Pi_{2,3} = S - X, \quad \hat\Pi_{2,10} = \breve C^T P, \quad \hat\Pi_{3,3} = -R - S, \\ &\hat\Pi_{4,4} = -\breve W + \gamma \breve H_2^T \breve H_2, \quad \hat\Pi_{4,7} = d\,\breve A^T S, \quad \hat\Pi_{4,8} = \tfrac{d^2}{2}\breve A^T U, \quad \hat\Pi_{4,9} = P\breve G, \quad \hat\Pi_{5,5} = -U, \quad \hat\Pi_{6,6} = -U, \\ &\hat\Pi_{7,7} = -S, \quad \hat\Pi_{8,8} = -U, \quad \hat\Pi_{9,9} = -\gamma I, \quad \hat\Pi_{10,10} = -P. \end{aligned} $$
Proof. 
Given the NN model in (8), the Lyapunov function candidate can be expressed as:
$$ V(t, \breve\upsilon_t) = \breve\upsilon^T(t) P \breve\upsilon(t) + \int_{t-d(t)}^{t} \breve\upsilon^T(s) Q \breve\upsilon(s)\, ds + \int_{t-d}^{t} \breve\upsilon^T(s) R \breve\upsilon(s)\, ds + d\int_{-d}^{0}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds\, du + \frac{d^2}{2}\int_{-d}^{0}\!\!\int_{u}^{0}\!\!\int_{t+v}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, dv\, du. \qquad (20) $$
By Itô's differential rule, taking the stochastic derivative of $V(t, \breve\upsilon_t)$ along the trajectories of the NN model in (8), we have:
$$ \begin{aligned} dV(t, \breve\upsilon_t) = \Big\{ & 2\breve\upsilon^T(t) P \big[ -(\breve D + \Delta\breve D)\breve\upsilon(t) + (\breve A + \Delta\breve A)\breve g(\breve\upsilon(t-d(t))) \big] + \breve\upsilon^T(t) Q \breve\upsilon(t) - (1 - \dot d(t))\,\breve\upsilon^T(t-d(t)) Q \breve\upsilon(t-d(t)) \\ & + \breve\upsilon^T(t) R \breve\upsilon(t) - \breve\upsilon^T(t-d) R \breve\upsilon(t-d) + d^2 \dot{\breve\upsilon}^T(t) S \dot{\breve\upsilon}(t) - d\int_{t-d}^{t} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds + \Big(\frac{d^2}{2}\Big)^2 \dot{\breve\upsilon}^T(t) U \dot{\breve\upsilon}(t) \\ & - \frac{d^2}{2}\int_{-d}^{0}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, du + \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big]^T P \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big] \Big\}\, dt \\ & + 2\breve\upsilon^T(t) P \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big]\, d\omega(t) \end{aligned} \qquad (21) $$
$$ \begin{aligned} dV(t, \breve\upsilon_t) \le \Big\{ & -2\breve\upsilon^T(t) P\breve D \breve\upsilon(t) - 2\breve\upsilon^T(t) P\breve G\breve F(t)\breve H_1 \breve\upsilon(t) + 2\breve\upsilon^T(t) P\breve A\, \breve g(\breve\upsilon(t-d(t))) + 2\breve\upsilon^T(t) P\breve G\breve F(t)\breve H_2\, \breve g(\breve\upsilon(t-d(t))) \\ & + \breve\upsilon^T(t) Q \breve\upsilon(t) - (1-\mu)\,\breve\upsilon^T(t-d(t)) Q \breve\upsilon(t-d(t)) + \breve\upsilon^T(t) R \breve\upsilon(t) - \breve\upsilon^T(t-d) R \breve\upsilon(t-d) + d^2 \dot{\breve\upsilon}^T(t) S \dot{\breve\upsilon}(t) \\ & - d\int_{t-d}^{t} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds + \Big(\frac{d^2}{2}\Big)^2 \dot{\breve\upsilon}^T(t) U \dot{\breve\upsilon}(t) - \frac{d^2}{2}\int_{-d}^{0}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, du \\ & + \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big]^T P \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big] \Big\}\, dt + 2\breve\upsilon^T(t) P \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big]\, d\omega(t). \end{aligned} $$
By using Lemma 2, we obtain:
$$ \begin{aligned} dV(t, \breve\upsilon_t) \le \Big\{ & -2\breve\upsilon^T(t) P\breve D \breve\upsilon(t) + \frac{1}{\gamma}\,\breve\upsilon^T(t) P\breve G\breve G^T P^T \breve\upsilon(t) + \gamma\,\breve\upsilon^T(t) \breve H_1^T \breve H_1 \breve\upsilon(t) + 2\breve\upsilon^T(t) P\breve A\, \breve g(\breve\upsilon(t-d(t))) \\ & + \frac{1}{\gamma}\,\breve\upsilon^T(t) P\breve G\breve G^T P^T \breve\upsilon(t) + \gamma\, \breve g^T(\breve\upsilon(t-d(t))) \breve H_2^T \breve H_2\, \breve g(\breve\upsilon(t-d(t))) + \breve\upsilon^T(t) Q \breve\upsilon(t) - (1-\mu)\,\breve\upsilon^T(t-d(t)) Q \breve\upsilon(t-d(t)) \\ & + \breve\upsilon^T(t) R \breve\upsilon(t) - \breve\upsilon^T(t-d) R \breve\upsilon(t-d) + d^2 \dot{\breve\upsilon}^T(t) S \dot{\breve\upsilon}(t) - d\int_{t-d}^{t} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds + \Big(\frac{d^2}{2}\Big)^2 \dot{\breve\upsilon}^T(t) U \dot{\breve\upsilon}(t) \\ & - \frac{d^2}{2}\int_{-d}^{0}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, du + \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big]^T P \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big] \Big\}\, dt \\ & + 2\breve\upsilon^T(t) P \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big]\, d\omega(t). \end{aligned} \qquad (22) $$
The single integral term in (21) can be estimated; i.e.,
$$ -d\int_{t-d}^{t} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds = -d\int_{t-d}^{t-d(t)} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds - d\int_{t-d(t)}^{t} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds. $$
Using Lemmas 3 and 4, we obtain:
$$ \begin{aligned} & -d\int_{t-d}^{t-d(t)} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds - d\int_{t-d(t)}^{t} \dot{\breve\upsilon}^T(s) S \dot{\breve\upsilon}(s)\, ds \\ &\quad \le -\frac{d}{d-d(t)} \Big( \int_{t-d}^{t-d(t)} \dot{\breve\upsilon}(s)\, ds \Big)^T S \Big( \int_{t-d}^{t-d(t)} \dot{\breve\upsilon}(s)\, ds \Big) - \frac{d}{d(t)} \Big( \int_{t-d(t)}^{t} \dot{\breve\upsilon}(s)\, ds \Big)^T S \Big( \int_{t-d(t)}^{t} \dot{\breve\upsilon}(s)\, ds \Big) \\ &\quad \le \begin{bmatrix} \breve\upsilon(t) \\ \breve\upsilon(t-d(t)) \\ \breve\upsilon(t-d) \end{bmatrix}^T \begin{bmatrix} -S & S - X & X \\ \diamond & -S - S + X + X^T & S - X \\ \diamond & \diamond & -S \end{bmatrix} \begin{bmatrix} \breve\upsilon(t) \\ \breve\upsilon(t-d(t)) \\ \breve\upsilon(t-d) \end{bmatrix}. \end{aligned} \qquad (23) $$
The double integral term in (21) can likewise be estimated; i.e.,
$$ -\frac{d^2}{2}\int_{-d}^{0}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, du = -\frac{d^2}{2}\int_{-d}^{-d(t)}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, du - \frac{d^2}{2}\int_{-d(t)}^{0}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, du. $$
Using Lemma 3, we obtain:
$$ \begin{aligned} & -\frac{d^2}{2}\int_{-d}^{-d(t)}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, du - \frac{d^2}{2}\int_{-d(t)}^{0}\!\!\int_{t+u}^{t} \dot{\breve\upsilon}^T(s) U \dot{\breve\upsilon}(s)\, ds\, du \\ &\quad \le \begin{bmatrix} d\,\breve\upsilon(t) \\ \int_{t-d}^{t-d(t)} \breve\upsilon(s)\, ds \\ \int_{t-d(t)}^{t} \breve\upsilon(s)\, ds \end{bmatrix}^T \begin{bmatrix} -U - U & U & U \\ \diamond & -U & 0 \\ \diamond & \diamond & -U \end{bmatrix} \begin{bmatrix} d\,\breve\upsilon(t) \\ \int_{t-d}^{t-d(t)} \breve\upsilon(s)\, ds \\ \int_{t-d(t)}^{t} \breve\upsilon(s)\, ds \end{bmatrix}. \end{aligned} \qquad (24) $$
Moreover, from (10), it follows that:
$$ 0 \le \breve\upsilon^T(t-d(t))\, \breve L^T \breve W \breve L\, \breve\upsilon(t-d(t)) - \breve g^T(\breve\upsilon(t-d(t)))\, \breve W\, \breve g(\breve\upsilon(t-d(t))). $$
Then, combining (21)–(24), we have:
$$ dV(t, \breve\upsilon_t) \le \zeta^T(t)\, \Pi\, \zeta(t)\, dt + 2\breve\upsilon^T(t) P \big[ \breve B\breve\upsilon(t) + \breve C\breve\upsilon(t-d(t)) \big]\, d\omega(t), \qquad (25) $$
where:
$$ \zeta(t) = \Big[ \breve\upsilon^T(t) \ \ \breve\upsilon^T(t-d(t)) \ \ \breve\upsilon^T(t-d) \ \ \breve g^T(\breve\upsilon(t-d(t))) \ \ \int_{t-d}^{t-d(t)} \breve\upsilon^T(s)\, ds \ \ \int_{t-d(t)}^{t} \breve\upsilon^T(s)\, ds \Big]^T, $$
$$ \Pi = \begin{bmatrix} \Pi_{1,1} & \Pi_{1,2} & \Pi_{1,3} & \Pi_{1,4} & \Pi_{1,5} & \Pi_{1,6} \\ \diamond & \Pi_{2,2} & \Pi_{2,3} & 0 & 0 & 0 \\ \diamond & \diamond & \Pi_{3,3} & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \Pi_{4,4} & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \Pi_{5,5} & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \Pi_{6,6} \end{bmatrix} + d^2\, \Theta^T S \Theta + \Big(\frac{d^2}{2}\Big)^2 \Theta^T U \Theta + \frac{1}{\gamma}\, \Phi^T \Phi + \Upsilon^T P \Upsilon, $$
with $\Theta = [-\breve D \ \ 0 \ \ 0 \ \ \breve A \ \ 0 \ \ 0]$, $\Phi = [\breve G^T P \ \ 0 \ \ 0 \ \ \breve G^T P \ \ 0 \ \ 0]$, $\Upsilon = [\breve B \ \ \breve C \ \ 0 \ \ 0 \ \ 0 \ \ 0]$, and
$$ \begin{aligned} &\Pi_{1,1} = -P\breve D - \breve D^T P + Q + R - S - d^2 U - d^2 U + \gamma \breve H_1^T \breve H_1, \quad \Pi_{1,2} = S - X, \quad \Pi_{1,3} = X, \quad \Pi_{1,4} = P\breve A, \\ &\Pi_{1,5} = dU, \quad \Pi_{1,6} = dU, \quad \Pi_{2,2} = -(1-\mu)Q - S - S + X + X^T + \breve L^T \breve W \breve L, \quad \Pi_{2,3} = S - X, \\ &\Pi_{3,3} = -R - S, \quad \Pi_{4,4} = -\breve W + \gamma \breve H_2^T \breve H_2, \quad \Pi_{5,5} = -U, \quad \Pi_{6,6} = -U. \end{aligned} $$
By applying Lemma 5, the condition $\Pi < 0$ in (25) is equivalent to the form (18). Therefore, for $\hat\Pi < 0$, a scalar $\beta > 0$ exists such that:
$$ \hat\Pi + \beta\, \mathrm{diag}\{ I, 0, 0, 0, 0, 0, 0, 0, 0, 0 \} < 0. \qquad (26) $$
Taking the mathematical expectation on both sides of (25), we have:
$$ \frac{d\, \mathbb{E}\{ V(t, \breve\upsilon_t) \}}{dt} \le \mathbb{E}\{ \zeta^T(t)\, \hat\Pi\, \zeta(t) \} \le -\beta\, \mathbb{E}\{ \|\breve\upsilon(t)\|^2 \}. \qquad (27) $$
As a result, the model in (8) is robustly globally asymptotically stable in the mean squared sense. This completes the proof.
Remark 2.
In the situation where there are no parameter uncertainties, (1) becomes:
$$ dz(t) = \big[ -Dz(t) + Ag(z(t-d(t))) + J \big]\,dt + \big[ Bz(t) + Cz(t-d(t)) \big]\,d\omega(t). \qquad (28) $$
Simultaneously, System (8) then turns into:
$$ d\tilde\upsilon(t) = \big[ -\breve D\tilde\upsilon(t) + \breve A\tilde g(\tilde\upsilon(t-d(t))) + \tilde J \big]\,dt + \big[ \breve B\tilde\upsilon(t) + \breve C\tilde\upsilon(t-d(t)) \big]\,d\omega(t). \qquad (29) $$
Corollary 1 is obtained when we set $\Delta\breve D = \Delta\breve A = 0$ in the proof of Theorem 2.
Corollary 1.
Suppose that Assumption 1 holds and the activation function is divided into its real and imaginary parts. The NN model in (29) is globally asymptotically stable in the mean square for any scalars $d > 0$ and $\mu > 0$ if there exist matrices $P > 0$, $Q > 0$, $R > 0$, $S > 0$, $U > 0$, a diagonal matrix $\breve W > 0$, and a matrix $X$ such that the following LMIs hold:
$$ \breve\Pi = \begin{bmatrix} \breve\Pi_{1,1} & \breve\Pi_{1,2} & \breve\Pi_{1,3} & \breve\Pi_{1,4} & \breve\Pi_{1,5} & \breve\Pi_{1,6} & \breve\Pi_{1,7} & \breve\Pi_{1,8} & \breve\Pi_{1,9} \\ \diamond & \breve\Pi_{2,2} & \breve\Pi_{2,3} & 0 & 0 & 0 & 0 & 0 & \breve\Pi_{2,9} \\ \diamond & \diamond & \breve\Pi_{3,3} & 0 & 0 & 0 & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \breve\Pi_{4,4} & 0 & 0 & \breve\Pi_{4,7} & \breve\Pi_{4,8} & 0 \\ \diamond & \diamond & \diamond & \diamond & \breve\Pi_{5,5} & 0 & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \breve\Pi_{6,6} & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \breve\Pi_{7,7} & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \breve\Pi_{8,8} & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \breve\Pi_{9,9} \end{bmatrix} < 0, \qquad (30) $$
$$ \begin{bmatrix} S & X \\ \diamond & S \end{bmatrix} \ge 0, \qquad (31) $$
where:
$$ \begin{aligned} &\breve\Pi_{1,1} = -P\breve D - \breve D^T P + Q + R - S - d^2 U - d^2 U, \quad \breve\Pi_{1,2} = S - X, \quad \breve\Pi_{1,3} = X, \quad \breve\Pi_{1,4} = P\breve A, \quad \breve\Pi_{1,5} = dU, \\ &\breve\Pi_{1,6} = dU, \quad \breve\Pi_{1,7} = -d\,\breve D^T S, \quad \breve\Pi_{1,8} = -\tfrac{d^2}{2}\breve D^T U, \quad \breve\Pi_{1,9} = \breve B^T P, \quad \breve\Pi_{2,2} = -(1-\mu)Q - S - S + X + X^T + \breve L^T \breve W \breve L, \\ &\breve\Pi_{2,3} = S - X, \quad \breve\Pi_{2,9} = \breve C^T P, \quad \breve\Pi_{3,3} = -R - S, \quad \breve\Pi_{4,4} = -\breve W, \quad \breve\Pi_{4,7} = d\,\breve A^T S, \quad \breve\Pi_{4,8} = \tfrac{d^2}{2}\breve A^T U, \\ &\breve\Pi_{5,5} = -U, \quad \breve\Pi_{6,6} = -U, \quad \breve\Pi_{7,7} = -S, \quad \breve\Pi_{8,8} = -U, \quad \breve\Pi_{9,9} = -P. \end{aligned} $$
Remark 3.
In the situation where there are no parameter uncertainties and stochastic disturbances, (1) becomes:
$$ dz(t) = \big[ -Dz(t) + Ag(z(t-d(t))) + J \big]\,dt. \qquad (32) $$
Simultaneously, the model in (8) becomes:
$$ d\tilde\upsilon(t) = \big[ -\breve D\tilde\upsilon(t) + \breve A\tilde g(\tilde\upsilon(t-d(t))) + \tilde J \big]\,dt. \qquad (33) $$
Corollary 2 is obtained when we set $\Delta\breve D = \Delta\breve A = \breve B = \breve C = 0$ in the proof of Theorem 2.
Corollary 2.
Suppose that Assumption 1 holds and the activation function is divided into its real and imaginary parts. The model in (33) is globally asymptotically stable for any scalars $d > 0$ and $\mu > 0$ if there exist matrices $P > 0$, $Q > 0$, $R > 0$, $S > 0$, $U > 0$, a diagonal matrix $\breve W > 0$, and a matrix $X$ such that the following LMIs hold:
$$ \tilde\Pi = \begin{bmatrix} \tilde\Pi_{1,1} & \tilde\Pi_{1,2} & \tilde\Pi_{1,3} & \tilde\Pi_{1,4} & \tilde\Pi_{1,5} & \tilde\Pi_{1,6} & \tilde\Pi_{1,7} & \tilde\Pi_{1,8} \\ \diamond & \tilde\Pi_{2,2} & \tilde\Pi_{2,3} & 0 & 0 & 0 & 0 & 0 \\ \diamond & \diamond & \tilde\Pi_{3,3} & 0 & 0 & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \tilde\Pi_{4,4} & 0 & 0 & \tilde\Pi_{4,7} & \tilde\Pi_{4,8} \\ \diamond & \diamond & \diamond & \diamond & \tilde\Pi_{5,5} & 0 & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \tilde\Pi_{6,6} & 0 & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \tilde\Pi_{7,7} & 0 \\ \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \diamond & \tilde\Pi_{8,8} \end{bmatrix} < 0, \qquad (34) $$
$$ \begin{bmatrix} S & X \\ \diamond & S \end{bmatrix} \ge 0, \qquad (35) $$
where:
$$ \begin{aligned} &\tilde\Pi_{1,1} = -P\breve D - \breve D^T P + Q + R - S - d^2 U - d^2 U, \quad \tilde\Pi_{1,2} = S - X, \quad \tilde\Pi_{1,3} = X, \quad \tilde\Pi_{1,4} = P\breve A, \quad \tilde\Pi_{1,5} = dU, \\ &\tilde\Pi_{1,6} = dU, \quad \tilde\Pi_{1,7} = -d\,\breve D^T S, \quad \tilde\Pi_{1,8} = -\tfrac{d^2}{2}\breve D^T U, \quad \tilde\Pi_{2,2} = -(1-\mu)Q - S - S + X + X^T + \breve L^T \breve W \breve L, \\ &\tilde\Pi_{2,3} = S - X, \quad \tilde\Pi_{3,3} = -R - S, \quad \tilde\Pi_{4,4} = -\breve W, \quad \tilde\Pi_{4,7} = d\,\breve A^T S, \quad \tilde\Pi_{4,8} = \tfrac{d^2}{2}\breve A^T U, \\ &\tilde\Pi_{5,5} = -U, \quad \tilde\Pi_{6,6} = -U, \quad \tilde\Pi_{7,7} = -S, \quad \tilde\Pi_{8,8} = -U. \end{aligned} $$
Remark 4.
In this study, we designed a general system model, namely UCVSNNs, in continuous time. Thus, the NN model discussed in [9] is a special case of the NN model proposed in this paper. This means that, when there are no parameter uncertainties, stochastic disturbances, or time-varying delays in (1), it reduces to the following system model proposed in [9]:
$$ \dot z(t) = -Dz(t) + Ag(z(t)) + J. $$
Remark 5.
In this paper, we adopted the approach proposed in the prior study [30], but extended the results to the complex domain. That is, in [30], the authors studied robust stability criteria for a class of uncertain stochastic NNs, while, in our paper, a new class of uncertain stochastic CVNNs was developed by introducing complex algebra into RVNNs, in order to generalize RVNN models with complex-valued state vectors, input vectors, and neuron activation functions. Similar robust stability criteria were derived for the considered UCVSNN models in Theorem 2. The approach used in this paper is more concise and powerful.
Remark 6.
It should be noted that network models are susceptible to stochastic disturbances. In this respect, numerous researchers have investigated stability issues with stochastic inputs for different types of NN models, including passivity, robust stability, mean squared exponential stability, global exponential stability, and robust state estimation [41,42,43,44,48]. In our study, in Theorem 2, we obtained sufficient conditions for robust global asymptotic stability in the mean square in terms of the LMI approach. The conditions are more concise than those obtained in [41] and much easier to check.
Remark 7.
If the complex-valued activation function $g_j(\cdot)$, $j = 1, 2, \ldots, n$, cannot be divided into its real and imaginary parts, the main results of this paper are not applicable.

4. Illustrative Examples

This section demonstrates the merits of the present results via two numerical examples.
Example 1.
Consider a two-neuron UCVSNN model in (1):
$$ dz(t) = \big[ -(D + \Delta D(t))z(t) + (A + \Delta A(t))g(z(t-d(t))) + J \big]\,dt + \big[ Bz(t) + Cz(t-d(t)) \big]\,d\omega(t) $$
with:
$$ \begin{aligned} &D = \begin{bmatrix} 8 & 0 \\ 0 & 8 \end{bmatrix}, \quad A = \begin{bmatrix} 2+i & 2+i \\ 1-i & 1+i \end{bmatrix}, \quad J = \begin{bmatrix} 1-i \\ 1+i \end{bmatrix}, \quad G = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix}, \quad H_1 = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.3 \end{bmatrix}, \\ &H_2 = \begin{bmatrix} 0.3 & 0.2 \\ 0.2 & 0.3 \end{bmatrix}, \quad H_3 = \begin{bmatrix} 0.3 & 0.2 \\ 0.2 & 0.3 \end{bmatrix}, \quad B = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.5 \end{bmatrix}, \quad C = \begin{bmatrix} 0.4 & 0 \\ 0 & 0.2 \end{bmatrix}, \quad L = \begin{bmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{2} \end{bmatrix}, \\ &g_j(z_j) = \frac{1 - e^{2x_j - y_j}}{1 + e^{2x_j - y_j}} + i\,\frac{1}{1 + e^{x_j + 2y_j}}, \quad j = 1, 2, \qquad F(t) = 0.2\sin(t). \end{aligned} $$
For this NN model, the activation function has the form $g_j(z) = g_j^R(x, y) + i\, g_j^I(x, y)$, $j = 1, 2$. After some simple calculations, we obtain the following results:
$$ \breve D = \begin{bmatrix} 8&0&0&0 \\ 0&8&0&0 \\ 0&0&8&0 \\ 0&0&0&8 \end{bmatrix}, \quad \breve A = \begin{bmatrix} 2&2&-1&-1 \\ 1&1&1&-1 \\ 1&1&2&2 \\ -1&1&1&1 \end{bmatrix}, \quad \breve G = \begin{bmatrix} 0.2&0&0&0 \\ 0&0.2&0&0 \\ 0&0&0.2&0 \\ 0&0&0&0.2 \end{bmatrix}, \quad \breve H_1 = \begin{bmatrix} 0.3&0&0&0 \\ 0&0.3&0&0 \\ 0&0&0.3&0 \\ 0&0&0&0.3 \end{bmatrix}, $$
$$ \breve H_2 = \begin{bmatrix} 0.3&0.2&-0.3&-0.2 \\ 0.2&0.3&-0.2&-0.3 \\ 0.3&0.2&0.3&0.2 \\ 0.2&0.3&0.2&0.3 \end{bmatrix}, \quad \breve B = \begin{bmatrix} 0.3&0&0&0 \\ 0&0.5&0&0 \\ 0&0&0.3&0 \\ 0&0&0&0.5 \end{bmatrix}, \quad \breve C = \begin{bmatrix} 0.4&0&0&0 \\ 0&0.2&0&0 \\ 0&0&0.4&0 \\ 0&0&0&0.2 \end{bmatrix}, \quad \breve L = \begin{bmatrix} \tfrac{1}{2}&0&0&0 \\ 0&\tfrac{1}{2}&0&0 \\ 0&0&\tfrac{1}{2}&0 \\ 0&0&0&\tfrac{1}{2} \end{bmatrix}. $$
By taking $d(t) = 0.3\sin t + 0.4$, which satisfies $d = 0.7$ and $\mu = 0.5$, and by applying Theorem 2, it is straightforward to verify that the CVNN model in (1), or the equivalent CVNN model in (8), is robustly globally asymptotically stable in the mean squared sense. We can obtain the following feasible solutions by solving the LMIs in (18) and (19):
$$ P = \begin{bmatrix} 3.2038 & 0.2915 & 0.0022 & 0.2016 \\ 0.2915 & 2.9447 & 0.2007 & 0.0020 \\ 0.0022 & 0.2007 & 3.2014 & 0.3001 \\ 0.2016 & 0.0020 & 0.3001 & 2.9521 \end{bmatrix}, \quad Q = \begin{bmatrix} 7.7544 & 0.1515 & 0.0059 & 0.3352 \\ 0.1515 & 7.4147 & 0.3412 & 0.0037 \\ 0.0059 & 0.3412 & 7.7064 & 0.1483 \\ 0.3352 & 0.0037 & 0.1483 & 7.5078 \end{bmatrix}, $$
$$ R = \begin{bmatrix} 7.0412 & 0.2272 & 0.0010 & 0.5225 \\ 0.2272 & 6.9813 & 0.5160 & 0.0010 \\ 0.0010 & 0.5160 & 7.0425 & 0.2512 \\ 0.5225 & 0.0010 & 0.2512 & 6.9762 \end{bmatrix}, \quad S = \begin{bmatrix} 0.3356 & 0.0576 & 0.0006 & 0.1206 \\ 0.0576 & 0.2882 & 0.1206 & 0.0004 \\ 0.0006 & 0.1206 & 0.3360 & 0.0594 \\ 0.1206 & 0.0004 & 0.0594 & 0.2883 \end{bmatrix}, $$
$$ U = \begin{bmatrix} 2.9414 & 0.2842 & 0.0010 & 0.5300 \\ 0.2842 & 2.7066 & 0.5299 & 0.0001 \\ 0.0010 & 0.5299 & 2.9374 & 0.2893 \\ 0.5300 & 0.0001 & 0.2893 & 2.7136 \end{bmatrix}, \quad X = \begin{bmatrix} 0.1531 & 0.2104 & 0.0024 & 0.3494 \\ 0.2104 & 0.0474 & 0.3489 & 0.0030 \\ 0.0024 & 0.3489 & 0.1642 & 0.1848 \\ 0.3494 & 0.0030 & 0.1848 & 0.1118 \end{bmatrix}, $$
$$ W = \mathrm{diag}\{12.2867, \ 11.7452, \ 12.3003, \ 11.7144\}, \qquad \gamma = 182.5122. $$
Under the initial conditions $z_1(t) = x_1(t) + iy_1(t) = 0.9 - 0.3i$ and $z_2(t) = x_2(t) + iy_2(t) = 0.7 + 0.4i$, Figure 1 and Figure 2 depict the state trajectories pertaining to the real and imaginary parts of the NN model in (1), respectively. From Figure 1 and Figure 2, we can see that the state trajectories of the NN model (1) converge to the equilibrium point.
Example 2.
Let the parameters of a CVNN model defined in (32) be:
$$\begin{bmatrix} \dot{z}_1(t) \\ \dot{z}_2(t) \end{bmatrix} = -\begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} z_1(t) \\ z_2(t) \end{bmatrix} + \begin{bmatrix} 1+i & 2+i \\ 1-i & 1 \end{bmatrix} \begin{bmatrix} g_1(z_1(t - d(t))) \\ g_2(z_2(t - d(t))) \end{bmatrix} + \begin{bmatrix} 1+i \\ 1-i \end{bmatrix},$$
with $g_j(z_j) = \tanh(z_j)$, $j = 1, 2$, and $L = \operatorname{diag}\{0.5, 0.5\}$.
For this NN model, the activation function takes the form $g_j(z) = g_j^R(x, y) + i\, g_j^I(x, y)$, $j = 1, 2$. A simple calculation gives:
$$\breve{D} = \operatorname{diag}\{3, 3, 3, 3\}, \qquad \breve{A} = \begin{bmatrix} 1 & 2 & -1 & -1 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 2 \\ -1 & 0 & 1 & 1 \end{bmatrix}, \qquad \breve{L} = \operatorname{diag}\{0.5, 0.5, 0.5, 0.5\}.$$
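Since $g_j(z) = \tanh(z)$, the real and imaginary parts of the activation admit the closed form $\tanh(x + iy) = (\sinh 2x + i \sin 2y) / (\cosh 2x + \cos 2y)$, which is what makes the real-imaginary separation explicit. A brief Python check of this standard identity (our illustration; the paper's computations use MATLAB):

```python
import numpy as np

# Real-imaginary separation of g(z) = tanh(z) for z = x + iy:
# tanh(x + iy) = (sinh 2x + i sin 2y) / (cosh 2x + cos 2y)
def g_real(x, y):
    return np.sinh(2 * x) / (np.cosh(2 * x) + np.cos(2 * y))

def g_imag(x, y):
    return np.sin(2 * y) / (np.cosh(2 * x) + np.cos(2 * y))

x, y = 0.7, -0.4  # sample point away from the poles of tanh
w = np.tanh(x + 1j * y)
print(abs(w.real - g_real(x, y)) < 1e-12,
      abs(w.imag - g_imag(x, y)) < 1e-12)  # True True
```

The denominator $\cosh 2x + \cos 2y$ is positive away from the poles of $\tanh$ at $z = i(\pi/2 + k\pi)$, so $g^R$ and $g^I$ are smooth on the region of interest.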
By taking $d(t) = 0.5\sin t + 0.8$, which satisfies $\bar{d} = 1.3$ and $\mu = 0.5$, and applying the MATLAB LMI toolbox, we obtain the following feasible solutions for the LMIs (34) and (35):
$$P = \begin{bmatrix} 69.7790 & 3.4810 & 0.0000 & 3.0269 \\ 3.4810 & 65.3996 & 3.0269 & 0.0000 \\ 0.0000 & 3.0269 & 69.7790 & 3.4810 \\ 3.0269 & 0.0000 & 3.4810 & 65.3996 \end{bmatrix}, \qquad Q = \begin{bmatrix} 211.4852 & 0.9692 & 0.0000 & 3.8134 \\ 0.9692 & 204.4609 & 3.8134 & 0.0000 \\ 0.0000 & 3.8134 & 211.4852 & 0.9692 \\ 3.8134 & 0.0000 & 0.9692 & 204.4609 \end{bmatrix},$$
$$R = \begin{bmatrix} 199.0265 & 3.1874 & 0.0000 & 6.4928 \\ 3.1874 & 199.2336 & 6.4928 & 0.0000 \\ 0.0000 & 6.4928 & 199.0265 & 3.1874 \\ 6.4928 & 0.0000 & 3.1874 & 199.2336 \end{bmatrix}, \qquad S = \begin{bmatrix} 5.3369 & 0.5498 & 0.0000 & 0.9844 \\ 0.5498 & 5.2876 & 0.9844 & 0.0000 \\ 0.0000 & 0.9844 & 5.3369 & 0.5498 \\ 0.9844 & 0.0000 & 0.5498 & 5.2876 \end{bmatrix},$$
$$U = \begin{bmatrix} 63.3608 & 4.3097 & 0.0000 & 6.8579 \\ 4.3097 & 62.3130 & 6.8579 & 0.0000 \\ 0.0000 & 6.8579 & 63.3608 & 4.3097 \\ 6.8579 & 0.0000 & 4.3097 & 62.3130 \end{bmatrix}, \qquad X = \begin{bmatrix} 0.7744 & 2.1732 & 0.0000 & 4.2102 \\ 2.1732 & 2.2919 & 4.2102 & 0.0000 \\ 0.0000 & 4.2102 & 0.7744 & 2.1732 \\ 4.2102 & 0.0000 & 2.1732 & 2.2919 \end{bmatrix},$$
$$W = \operatorname{diag}\{294.4714, 251.8096, 294.4714, 251.8096\}.$$
Under various $\mu$ settings, i.e., $\mu = 0.1, 0.3, 0.5, 0.7$, the corresponding maximum allowable delay bounds $\bar{d} = 1.4505, 1.4053, 1.3000, 1.2011$ can be obtained by solving the LMIs (34)–(35) in Corollary 2.
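The bounds quoted for the chosen delay are easy to confirm analytically: $d(t) = 0.5\sin t + 0.8$ attains its maximum $1.3$ at $t = \pi/2$, and $\dot{d}(t) = 0.5\cos t \le 0.5$. A small numerical confirmation in Python (ours, for illustration):

```python
import numpy as np

# d(t) = 0.5 sin t + 0.8 sampled over one period; its bounds give d_bar and mu
t = np.linspace(0.0, 2.0 * np.pi, 100001)
d = 0.5 * np.sin(t) + 0.8
d_dot = 0.5 * np.cos(t)  # derivative of d(t)

print(round(d.max(), 4), round(d_dot.max(), 4))  # 1.3 0.5
```

Any $\mu \ge 0.5$ is therefore admissible for this delay, consistent with the table of maximum allowable delay bounds above.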
Figure 3 depicts the state trajectories with respect to the real and imaginary parts of the CVNN model in (32), subject to the initial conditions z 1 ( t ) = x 1 ( t ) + i y 1 ( t ) = 0.4 + 0.3 i , z 2 ( t ) = x 2 ( t ) + i y 2 ( t ) = 0.2 0.5 i . The phase trajectories with respect to the real and imaginary parts of the CVNN model in (32) are given in Figure 4. By Corollary 2, it is straightforward to confirm that the equilibrium point of the CVNN model in (32) is globally asymptotically stable.
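The qualitative behaviour shown in Figures 3 and 4 can be reproduced with a simple fixed-step Euler scheme for the delayed model (32). The Python sketch below is our illustrative stand-in for the authors' MATLAB simulations; the step size, horizon, and the complex system matrices as read from (32) are our assumptions. If the equilibrium is globally asymptotically stable, the residual of $-Dz + A\, g(z) + J$ should vanish along the trajectory:

```python
import numpy as np

# System data as read from the CVNN model (32) -- our transcription
D = 3.0                                   # D = diag{3, 3}
A = np.array([[1 + 1j, 2 + 1j],
              [1 - 1j, 1 + 0j]])          # complex connection weights
J = np.array([1 + 1j, 1 - 1j])            # constant external input
g = np.tanh                               # g_j(z) = tanh(z), entrywise

h, T = 0.01, 100.0                        # Euler step and horizon (our choices)
n = int(T / h)
z = np.empty((n + 1, 2), dtype=complex)
z[0] = [0.4 + 0.3j, 0.2 - 0.5j]           # initial condition from Example 2

for k in range(n):
    t = k * h
    d = 0.5 * np.sin(t) + 0.8             # time-varying delay, d(t) <= 1.3
    j = max(0, round((t - d) / h))        # constant initial history for t < d(t)
    z[k + 1] = z[k] + h * (-D * z[k] + A @ g(z[j]) + J)

# At the equilibrium the right-hand side of (32) vanishes
residual = np.linalg.norm(-D * z[-1] + A @ g(z[-1]) + J)
print(residual < 1e-6)  # True: the trajectory has settled at the equilibrium
```

Plotting the real and imaginary parts of `z` over time, or `z[:, 0].real` against `z[:, 1].imag`, reproduces state and phase portraits of the kind shown in Figures 3 and 4.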

5. Conclusions

The problem of the robust stability of UCVSNNs with time-varying delays was investigated in this paper. To capture more realistic behaviours, a general form of the NN model including the effects of parameter uncertainties and stochastic disturbances was considered. Based on the real-imaginary separate-type activation function, the original UCVSNN model was separated into an equivalent representation consisting of two real-valued NNs. By applying the homeomorphism principle, Itô's formula, the LKF method, and LMI techniques, sufficient conditions were derived which guarantee that the model equilibrium is unique and globally asymptotically stable; the feasibility of these conditions can be easily verified with MATLAB. Two illustrative examples were presented to demonstrate the usefulness of the obtained results. For further research, we will extend the proposed approach to other relevant types of stochastic quaternion-valued neural network models, in particular Cohen–Grossberg and bi-directional associative memory quaternion-valued neural network models, for which new methodologies will be developed and evaluated comprehensively.

Author Contributions

Funding acquisition, P.C.; conceptualization, G.R.; software, P.C., J.T., C.E., R.R. and R.S.; formal analysis, G.R.; methodology, G.R.; supervision, C.P.L.; writing—original draft, G.R.; validation, G.R.; writing—review and editing, G.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research is made possible through financial support from Chiang Mai University.

Acknowledgments

The authors are grateful to Chiang Mai University for supporting this research.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. An illustration of the state trajectories of the real part pertaining to the model (1) in Example 1.
Figure 2. An illustration of the state trajectories of the imaginary part pertaining to the model (1) in Example 1.
Figure 3. An illustration of the state trajectories pertaining to the real part of the model (32) in Example 2.
Figure 4. An illustration of the phase trajectories in the $[\mathrm{Re}(z_1), \mathrm{Im}(z_2)]$ subspace pertaining to the model (32) in Example 2.
