Article

Event-Based Impulsive Control for Heterogeneous Neural Networks with Communication Delays

1
Industrial Training Center, Shenzhen Polytechnic, Shenzhen 518055, China
2
College of Mathematics and Statistics, Shenzhen University, Shenzhen 518060, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(24), 4836; https://doi.org/10.3390/math10244836
Submission received: 11 November 2022 / Revised: 10 December 2022 / Accepted: 11 December 2022 / Published: 19 December 2022
(This article belongs to the Special Issue Chaotic Systems and Their Applications)

Abstract:
The quasi-synchronization of a class of general heterogeneous neural networks is explored via an event-based impulsive control strategy. In contrast to the traditional average impulsive interval (AII) method, an event-triggered mechanism (ETM) is employed to determine the impulsive instants, in which case the subjectivity of selecting the controlling sequence is eliminated. In addition, considering that a communication delay between the allocation and execution of instructions is inevitable in practice, we further propose an ETM based on communication delays and aperiodic sampling, which is more accessible and affordable, yet can straightforwardly avoid Zeno behavior. On the basis of this novel event-triggered impulsive control strategy, the quasi-synchronization of the heterogeneous neural network model is investigated and some general conditions are derived. Finally, two numerical simulations are provided to validate the efficacy of the theoretical results.

1. Introduction

The human brain is composed of complicated neuron networks, and it is through the interactions among these neurons that human beings can precisely understand the information transmitted by the sensory organs. Neural networks, more accurately artificial neural networks, are highly simplified approximations of biological neural networks based on the physiological behavior of the human brain. Note that the majority of previous research generally assumed that the matrices used to describe the coupling structure of neural networks are identical [1,2]; such models are principally idealized and cannot accurately reflect the actual situation of the neuronal networks in the brain. Many real-world systems in nature and human civilization, it turns out, do not adhere to this assumption. In the power grid considered in [3], for example, the model does not obey signal integrator dynamics. Localized instability that emerges in such a power network might result in cascade failures that eventually cause widespread blackouts. The detection and rejection of such instabilities has been one of the primary issues confronting the "smart power grid". By constructing a heterogeneous neural network model in which each neuron reflects a local power grid with its own dynamic characteristics, the results we present prove that the power grid can realize cooperative electricity generation across multiple regions under communication delays. Hence, investigating heterogeneous neural networks with distinct coupling matrices for different nodes has become a worthwhile endeavor.
As a critical component of neural network behavior, synchronization has naturally received extensive exploration in pattern recognition [4], fixed-point computing [2], signal optimization [5], image processing [6], and other fields of science and engineering [7]. Take pattern recognition as an example: traditional weave pattern identification methods rely heavily on manual analysis, which is time-consuming and expensive. By analogizing machines with different patterns as neurons, the automatic method we propose achieves high accuracy and generalization when faced with various kinds of fabrics. The neuroscience literature [7,8] has even demonstrated that many specific brain operations or important physiological states, such as epilepsy and Parkinson's disease, are frequently linked to the presence of synchronization in the brain. As a consequence, research on the synchronization behavior of neural networks has become a fruitful area with great potential, and its associated issues have yielded rich achievements. Various notable methods have been proposed in the literature to handle quasi-synchronization issues [9,10,11,12]. For example, Tang et al. [13] studied the exponential synchronization problem for a class of coupled heterogeneous neural networks with hybrid time-varying delays; the relevant delay conditions were derived using the distributed pinning strategy and the average impulsive interval method to ensure exponential quasi-synchronization of coupled neural networks. Wu et al. analyzed the exponential synchronization of stochastic neural networks using a unique periodically intermittent discrete observation controller [14]. In [15], Sun et al. discussed the quasi-synchronization of heterogeneous dynamical networks via event-triggered impulsive control, and two kinds of triggered mechanisms were proposed.
Although heterogeneous neural networks provide unprecedented possibilities, the communication delay during signal transmission brings additional hurdles to applications. Indeed, with the dramatic improvements of network science, there is a pressing need to build a more realistic model that accurately portrays the operation of the control mechanism. The source of the communication delay can be traced back to the operation of the event-triggered mechanism (ETM). Specifically, the ETM adopts the zero-order holder (ZOH) principle, which predetermines an event-triggered condition. When the condition is met, a new excitation moment is formed and the information is sent to the controller via the ZOH. The critical point is that, in actuality, there will be a time delay between the generation of information at the excitation moment and its transmission to the controller; this is the communication delay to be addressed in this study. There is no dispute that, no matter how fast a system reacts, communication delay in signal transmission between distinct components cannot be eliminated, which is why accounting for it is so necessary. One coin has two sides: while considering the communication delay makes the conclusions more feasible, it also complicates the theoretical analysis. First, how is the influence of communication delay reflected in the controller? Second, in the intervals where the event-triggered condition is not checked, how can one verify that the Lyapunov function does not rise above the restraining function?
Hitherto, various control approaches have been established, such as adaptive feedback control [16], sliding control [17,18], impulsive control [19,20], and so on. Among them, impulsive control, as a typical discontinuous control approach, has garnered considerable attention since it distinguishes itself by reducing the amount of information delivered, therefore saving expenses. The importance of identifying the impulsive sequence when applying impulsive control cannot be overemphasized; the average impulsive interval (AII) method is used for this purpose in the majority of previous publications [21,22]. Although AII is undeniably effective, the condition $\frac{T-t}{T_a} - N_0 \le N_\zeta(T,t) \le \frac{T-t}{T_a} + N_0$ only prevents impulses from occurring too frequently or too rarely [23], where $\zeta = \{t_1, t_2, \ldots\}$ is the impulsive sequence and $N_\zeta(T,t)$ denotes the number of impulsive instants of the sequence $\zeta$ on the interval $(t, T)$. However, since this condition does not restrict the specific impulsive moments, and since distinct individuals have different selection methodologies, subjectivity results when picking impulsive sequences based on AII. Therefore, in this study the event-triggered sequence $\{s_k\}_{k\in\mathbb{N}}$ is first determined using the event-triggered condition and becomes the foundation for obtaining the impulsive sequence $\{t_k\}_{k\in\mathbb{N}}$. Simultaneously, since in actuality signal transmission normally takes a certain amount of time, we suppose that the controller is activated $\eta_k$ seconds after the $k$th event occurs, which leads to $t_k = s_k + \eta_k$. Here, $\eta_k$ is the exact formulation of the above-mentioned communication delay. In neural networks, communication delay that substantially damages the control performance is frequently unavoidable. Nevertheless, there are only a few existing results on event-based impulsive control that take communication delay into account.
As a result, an innovative event-based impulsive control strategy incorporating communication delay is considered in this article to enrich the relevant research.
Motivated by the preceding analysis, the purpose of this paper is to handle the event-based impulsive control issue for heterogeneous neural networks with time-varying delay and external disturbances. Moreover, a novel event-based impulsive control protocol is proposed, which is updated only upon violation of the event-triggering condition. In the meantime, the control law is designed using the information of the event-triggering instants. The main novelties of this article are as follows:
(1)
Considering the influence of time-varying delay and external interference on neural networks, a type of heterogeneous neural network is conceived. Compared with the homogeneous neural network frameworks in [24,25], which presume identical dynamics for distinct neural networks, the heterogeneous neural network model considered here is more pragmatic;
(2)
An ineluctable issue in reality is that the sampled information transmitted from the generator to the actuator is usually delayed. In other words, the transmission of the sampled information needs a reaction time and thus induces a communication delay, under whose effect it is difficult to achieve satisfactory performance. Accordingly, the study of event-triggered impulsive control with communication delay is more meaningful than the results in [26,27];
(3)
In the existing works [21,22], the impulsive sequence was developed based on AII. However, this article proposes a novel event-based impulsive control scheme that incorporates communication delay. The main novelty is that the representation of the communication delay is specified in order to obtain the impulsive sequence, so that the effect of the event-triggered condition can be fully utilized in the proposed scheme.
The remainder of the paper is organized as follows. Section 2 presents the heterogeneous coupled neural network model, together with the necessary definitions and lemmas. Section 3 gives the main results, including quasi-synchronization conditions for heterogeneous neural networks with time-varying delay, which are then verified by numerical examples in Section 4. Finally, in Section 5, we restate the paper's context and draw the conclusions.
Notations: $\mathbb{R}$ represents the set of real numbers. $\|\cdot\|$ denotes the Euclidean norm. $\otimes$ indicates the Kronecker product in the usual sense. For a symmetric matrix $P$, $P > 0$ ($P < 0$) means that $P$ is positive definite (negative definite). The notations $P^T$ and $P^{-1}$ represent the transpose and the inverse of $P$, respectively. The Dini derivative of $\phi(t)$ is defined as $D^+\phi(t) = \limsup_{s\to 0^+}\frac{\phi(t+s)-\phi(t)}{s}$.

2. Model Description and Preliminaries

Consider heterogeneous coupled neural networks with time-varying delays
$$\dot{x}_i(t) = -C_i x_i(t) + A_i f(x_i(t)) + B_i g(x_i(t-\tau(t))) + c\sum_{j=1}^{N} l_{ij}\Gamma x_j(t) + K_i + u_i(t), \quad i = 1, \ldots, N, \tag{1}$$
where $x_i(t) = (x_{i1}(t), x_{i2}(t), \ldots, x_{in}(t))^T \in \mathbb{R}^n$ represents the state vector of the $i$th neural network; $u_i(t)$ represents the control input; $f(x_i(t)) = (f_1(x_i(t)), \ldots, f_n(x_i(t)))^T$ and $g(x_i(t)) = (g_1(x_i(t)), \ldots, g_n(x_i(t)))^T$ are continuously differentiable activation functions; $C_i = \mathrm{diag}(c_{i1}, c_{i2}, \ldots, c_{in}) > 0$, with $c_{ij}$ denoting the decay rate of the $j$th neuron without any stimulation; $A_i \in \mathbb{R}^{n\times n}$ and $B_i \in \mathbb{R}^{n\times n}$ are the connection weight matrix and the delayed connection matrix, respectively; $K_i = (k_{i1}, \ldots, k_{in})^T \in \mathbb{R}^n$ is an external input vector; $\tau(t)$ is the time-varying transmission delay satisfying $0 < \tau(t) < \tau$; $L = (l_{ij})_{N\times N}$ is the Laplacian matrix, whose elements are defined as follows: if there is a connection between nodes $i$ and $j$ ($i \ne j$), then $l_{ij} = l_{ji} > 0$; otherwise $l_{ij} = l_{ji} = 0$ ($i \ne j$), and the diffusive coupling condition $l_{ii} = -\sum_{j=1, j\ne i}^{N} l_{ij}$ holds for the diagonal elements $l_{ii}$ ($i = 1, 2, \ldots, N$); $\Gamma = \mathrm{diag}\{\gamma_1, \gamma_2, \ldots, \gamma_n\} > 0$ is the inner coupling matrix between two connected nodes; $c > 0$ is the coupling strength of the dynamic network.
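To make the model concrete, the right-hand side of (1) can be sketched numerically. The following is a minimal illustration, not the paper's simulation: all dimensions and matrices are hypothetical, the control input $u_i$ is omitted, and the choice $f = g = \tanh$ is only an example of a Lipschitz activation.

```python
import numpy as np

def drift(x, x_delay, C, A, B, K, L, Gamma, c, f, g):
    """Right-hand side of model (1) for all N nodes (control input omitted).

    x, x_delay : (N, n) arrays of the current and delayed states x_i(t), x_i(t - tau(t)).
    C, A, B    : lists of (n, n) per-node matrices C_i, A_i, B_i.
    K          : (N, n) external inputs K_i.
    L          : (N, N) Laplacian (positive off-diagonals, row sums zero).
    Gamma      : (n, n) inner coupling matrix; c : coupling strength.
    """
    N = x.shape[0]
    dx = np.empty_like(x)
    for i in range(N):
        # diffusive coupling c * sum_j l_ij * Gamma * x_j(t)
        coupling = c * sum(L[i, j] * Gamma @ x[j] for j in range(N))
        dx[i] = -C[i] @ x[i] + A[i] @ f(x[i]) + B[i] @ g(x_delay[i]) + coupling + K[i]
    return dx

# Example: a single node (N = n = 1) with identity weights; hypothetical data.
ex = drift(np.array([[1.0]]), np.array([[0.5]]),
           [np.eye(1)], [np.eye(1)], [np.eye(1)],
           np.array([[0.0]]), np.array([[0.0]]), np.eye(1),
           1.0, np.tanh, np.tanh)
```

In an actual simulation, this drift would be integrated with a delay-aware scheme (storing the state history to evaluate $x_i(t-\tau(t))$).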
In this paper, the following neural network with time-varying delay is taken as the objective trajectory:
$$\dot{s}(t) = -C_0 s(t) + A_0 f(s(t)) + B_0 g(s(t-\tau(t))) + K_0, \tag{2}$$
where $s(t) = (s_1(t), s_2(t), \ldots, s_n(t))^T \in \mathbb{R}^n$ is the state vector associated with the neurons; $C_0 = \mathrm{diag}(c_{01}, c_{02}, \ldots, c_{0n}) > 0$, and $A_0 = (a_{ij}^0)_{n\times n}$ and $B_0 = (b_{ij}^0)_{n\times n}$ are the connection weight matrix and the delayed connection matrix, respectively. $K_0 = (k_{01}, \ldots, k_{0n})^T \in \mathbb{R}^n$ is also an external input vector. The definitions of $f(\cdot)$ and $g(\cdot)$ are as in model (1).
Let $e_i(t) = x_i(t) - s(t)$ be the error of node $i$ between the current state $x_i(t)$ and the objective state $s(t)$. In light of the fact that $c\sum_{j=1}^{N} l_{ij}\Gamma s(t) = 0$, the following error system can be formed:
$$\begin{aligned}\dot{e}_i(t) ={}& -C_i e_i(t) + A_i\tilde{f}(e_i(t)) + B_i\tilde{g}(e_i(t-\tau(t))) + \tilde{K}_i + u_i(t) + c\sum_{j=1}^{N} l_{ij}\Gamma e_j(t) \\ &- (C_i - C_0)s(t) + (A_i - A_0)f(s(t)) + (B_i - B_0)g(s(t-\tau(t))), \quad i = 1, 2, \ldots, N,\end{aligned}\tag{3}$$
where $\tilde{f}(e_i(t)) \triangleq f(x_i(t)) - f(s(t))$, $\tilde{g}(e_i(t-\tau(t))) \triangleq g(x_i(t-\tau(t))) - g(s(t-\tau(t)))$, and $\tilde{K}_i = K_i - K_0$.
The initial conditions of heterogeneous neural network (3) are given by
$$e_i(t) = \varphi_i(t), \quad -\tau \le t \le 0, \quad i = 1, 2, \ldots, N, \tag{4}$$
where $\varphi_i(t) \in C([-\tau, 0], \mathbb{R}^n)$, the set of continuous functions from $[-\tau, 0]$ to $\mathbb{R}^n$.
Consequently, the compact form of (3) can be expressed as:
$$\begin{aligned}\dot{e}(t) ={}& -Ce(t) + A\tilde{F}(e(t)) + B\tilde{G}(e(t-\tau(t))) + c(L\otimes\Gamma)e(t) + \tilde{K} + U(t) \\ &- (C - I_N\otimes C_0)\tilde{s}(t) + (A - I_N\otimes A_0)\tilde{F}(s(t)) + (B - I_N\otimes B_0)\tilde{G}(s(t-\tau(t))),\end{aligned}\tag{5}$$
where $C = \mathrm{diag}(C_1, C_2, \ldots, C_N)$; $A = \mathrm{diag}(A_1, A_2, \ldots, A_N)$; $B = \mathrm{diag}(B_1, B_2, \ldots, B_N)$; $\tilde{F}(e(t)) = (\tilde{f}^T(e_1(t)), \tilde{f}^T(e_2(t)), \ldots, \tilde{f}^T(e_N(t)))^T$; $\tilde{G}(e(t-\tau(t))) = (\tilde{g}^T(e_1(t-\tau(t))), \ldots, \tilde{g}^T(e_N(t-\tau(t))))^T$; $\tilde{K} = (\tilde{K}_1^T, \tilde{K}_2^T, \ldots, \tilde{K}_N^T)^T$; $U(t) = (u_1^T(t), u_2^T(t), \ldots, u_N^T(t))^T$; $\tilde{s}(t) = (s^T(t), s^T(t), \ldots, s^T(t))^T$; $\tilde{F}(s(t)) = (f^T(s(t)), \ldots, f^T(s(t)))^T$; $\tilde{G}(s(t-\tau(t)))$ is defined analogously.
Remark 1.
Note that the matrices $C, A, B$ are all block diagonal, and the matrix $C$ is special because it is also symmetric. As a result, in the proof of Theorem 1, this characteristic can be directly applied to deal with the matrix $C$, while the Cauchy inequality method should be used for the matrices $A, B$.
The following preliminaries are needed for the proof of the quasi-synchronization of heterogeneous coupled neural network (1) and the objective trajectory (2).
Assumption 1.
Let $f(\cdot), g(\cdot): \mathbb{R}^n \to \mathbb{R}^n$ be continuous vector-valued functions satisfying $f(0) = 0$ and $g(0) = 0$. In addition, $f(\cdot)$ and $g(\cdot)$ also satisfy the following conditions for $\forall x, y \in \mathbb{R}^n$:
$$\|f(x) - f(y)\| \le \mu_1\|x - y\|, \qquad \|g(x) - g(y)\| \le \mu_2\|x - y\|,$$
where $\mu_1, \mu_2 > 0$ are constants, also referred to as Lipschitz constants.
Definition 1
([28]). Let there exist a compact set $\mathcal{O} = \{e(t) \in \mathbb{R}^{nN} : \|e(t)\| \le \epsilon_0\}$ such that, for any initial condition $\varphi(u)$, the error system $e(t)$ converges to $\mathcal{O}$ as $t \to \infty$. Then, the heterogeneous neural network (1) and the objective trajectory $s(t)$ are said to achieve quasi-synchronization with an error level $\epsilon_0 > 0$.
Lemma 1.
Let $\mathcal{PC} = \{\phi \mid \phi: [-\tau, \infty) \to \mathbb{R}\}$. Suppose that $m_1, m_2 \in \mathcal{PC}$ satisfy
$$D^+ m_1(t) \le -p\,m_1(t) + q\,m_1(t-\tau(t)),\ t \ne t_k; \qquad m_1(t_k^+) \le \varsigma\,m_1(t_k),\ k = 1, 2, \ldots$$
and
$$D^+ m_2(t) > -p\,m_2(t) + q\,m_2(t-\tau(t)),\ t \ne t_k; \qquad m_2(t_k^+) = \varsigma\,m_2(t_k),\ k = 1, 2, \ldots,$$
where $p$, $q$ and $\varsigma$ are constants. If $m_1(t) \le m_2(t)$ holds for $-\tau \le t \le 0$, then $m_1(t) \le m_2(t)$ also holds for $t > 0$.

3. Main Results

In this section, the quasi-synchronization problem for the heterogeneous neural network (1) and the target state (2) under an event-based impulsive control strategy is discussed, and sufficient criteria are established. First and foremost, the restraining function $h(t)$ is defined as follows:
$$h(t) = 2h_0 + 2V(0)e^{-\rho t}, \tag{6}$$
where $V(t)$ is an appropriate Lyapunov function of the error $e(t)$, and $h_0 > 0$ and $\rho > 0$ are constants.
Remark 2.
The purpose of designing the restraining function is to operate as a “ceiling”, which can never be touched by the bounded Lyapunov function. Strictly speaking, the quasi-synchronization is obtained through the use of h ( t ) . Additionally, the synchronization error is directly related to the restraining function h ( t ) .
Then, in order to prove that the Lyapunov function does not exceed $h(t)$, we need to analyze its properties on certain intervals. Naturally, the next goal is to establish the rule that divides $[t_0, t)$ into those intervals. Therefore, based on the above restraining function, the event-triggered mechanism (ETM) is designed as follows:
$$s_{k+1} = \inf\{t \mid t > t_k,\ V(t) \ge \phi_k(t)\}, \tag{7}$$
where the trigger functions are $\phi_k(t) = h_k e^{-\tilde{\rho}(t - t_k)}$ with $h_k = \frac{1}{2}h(t_k)$, $k \in \mathbb{N}$; here $\rho$ and $\tilde{\rho} > \rho$ are the two constants connecting the restraining function and the trigger functions.
Remark 3.
The trigger functions $\phi_k(t)$, $k \in \mathbb{N}$, are monotonically decreasing exponential functions, and $\tilde{\rho} > \rho$ indicates that each $\phi_k(t)$ decreases faster than $h(t)$. When $t = t_k$, $\phi_k(t_k) = h_k = \frac{1}{2}h(t_k)$, i.e., the trigger function starts from half the value of $h(t)$ at each impulsive instant.
More vividly, the trigger functions act like sentries, triggering an alarm (which means the ETM is satisfied and creates a new trigger instant) when an enemy poses a threat (which means the Lyapunov function touches trigger functions).
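Since the restraining and trigger functions are simple exponentials, the ETM check reduces to a one-line comparison. The sketch below uses hypothetical constants (the paper only requires $h_0 > 0$ and $\tilde{\rho} > \rho > 0$); `V_t` stands for the current value of the Lyapunov function.

```python
import numpy as np

# Illustrative constants (hypothetical values, not from the paper).
h0, V0, rho, rho_tilde = 0.1, 1.0, 0.5, 0.8

def h(t):
    """Restraining function (6): h(t) = 2*h0 + 2*V(0)*exp(-rho*t)."""
    return 2 * h0 + 2 * V0 * np.exp(-rho * t)

def phi(t, t_k):
    """Trigger function phi_k(t) = h_k * exp(-rho_tilde*(t - t_k)), h_k = h(t_k)/2."""
    return 0.5 * h(t_k) * np.exp(-rho_tilde * (t - t_k))

def triggered(V_t, t, t_k):
    """ETM (7): an event fires at the first t > t_k with V(t) >= phi_k(t)."""
    return V_t >= phi(t, t_k)
```

Because $\tilde{\rho} > \rho$, each `phi` curve decays faster than `h`, which is exactly what forces the "sentry" to fire eventually if the Lyapunov function does not decay fast enough on its own.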
Without loss of generality, $e_i(t)$ is always assumed to be right-continuous at $t = t_k$, i.e., $e_i(t_k) = e_i(t_k^+)$. Therefore, the solutions of (3) are piecewise right-continuous functions with discontinuities at $t = t_k$ for $k \in \mathbb{N}$.

3.1. Switching ETM Depending on Communication Delays

When the control input is transferred to the ZOH over the communication network, communication delays will apparently occur. As information is not always delivered in a timely manner, the controller cannot be updated in a timely manner either, and this phenomenon can degrade the system's control performance. As a result, specialized strategies should be introduced to address the impact of communication delays. We have denoted $s_k$ ($k \in \mathbb{N}$) above; concretely, $s_k$ is the time instant when the $k$th control task is released for execution, that is, the moment the ETM is triggered and the state information is sampled. To reveal the effect of communication delays, we suppose that $\eta_k$ seconds pass after the $k$th event-triggered condition has been satisfied. In other words, $\eta_k \triangleq t_k - s_k \ge 0$ is the $k$th communication delay between the sensor and the actuator. Apparently, $t_k$ is the instant at which the $k$th task completes execution, as well as the instant at which the actuator value is updated. To extend the sampling interval as much as feasible, a switching ETM based on communication delays is employed.
According to the consideration of communication delays, impulsive sequence and event-triggered sequence can be distinct. Finally, combining these two sequences, the impulsive controllers can be determined as follows:
$$u_i(t) = \sum_{k=1}^{\infty}\big(\beta_k e_i(s_k) - e_i(t_k)\big)\,\delta(t - t_k), \tag{8}$$
where $\delta(\cdot)$ denotes the Dirac function and $\beta_k \in \mathbb{R}$ represents the impulsive intensity; each impulse thus resets the error to $e_i(t_k^+) = \beta_k e_i(s_k)$, the state sampled at the trigger instant $s_k$. The sequence of sampling instants, as generated by (7), is $\{s_0, s_1, \ldots\}$. Meanwhile, $\{t_0, t_1, \ldots\}$ represents the time series at which the impulses are activated and the control inputs are updated.
Remark 4.
Notably, when $s_0 = t_0 = 0$, the impulsive intensity $\beta_0 = 1$ is assumed; because $V(0) < \phi_0(0)$, there is no event trigger at this instant.
Remark 5.
In ETM (7), it is possible to split the time interval $[s_k, s_{k+1})$ into two segments: (1) communication delay over $[s_k, s_k+\eta_k)$ and (2) continuous event-triggering over $[s_k+\eta_k, s_{k+1})$ (continuous sampling and continuous checking of (7)). The sensor ceases functioning for at least $\eta_k$ seconds after the sample has been transmitted at $s_k$; therefore, (7) need not be evaluated there. The sensor begins to continuously check (7) at $s_k+\eta_k$ and transmits the sample when (7) is violated.
Figure 1 presents a visual depiction of the operating principle of the ETM. Due to the occurrence of communication delays, $V(t) < \theta h(t)$ holds rather than $V(t) < \frac{1}{2}h(t)$ for $t \in [s_k, t_k)$. As for $t \in [t_k, s_{k+1})$, $V(t) < \phi_k(t) \le \frac{1}{2}h(t)$ holds based on ETM (7). Therefore, it suffices to prove $V(t) < \theta h(t)$ for $t \in [s_k, s_{k+1})$.
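The timeline just described (trigger at $s_k$, impulse $\eta_k$ seconds later at $t_k$, checking suspended in between) can be sketched with a toy scalar error flow. Everything below is hypothetical: the decay dynamics, all constants, and the zero transmission delay $\tau(t)=0$ are chosen only to make the event sequence visible, and are not the paper's example.

```python
import numpy as np

# Toy scalar error e' = -a*e between impulses (hypothetical stand-in for (3)).
# At the trigger instant s_k the ETM fires; the impulse acts eta seconds later
# at t_k = s_k + eta and resets e(t_k^+) = beta * e(s_k) -- the state sampled
# at s_k, not at t_k.
a, beta, eta, dt, T = 0.01, 0.3, 0.05, 0.001, 5.0
h0, V0, rho, rho_t = 0.05, 0.5, 0.1, 0.15     # V0 matches V(0) = 0.5*e(0)^2

e, t, t_k, pending = 1.0, 0.0, 0.0, None      # pending = (impulse time, e(s_k))
impulses = []
while t < T:
    V = 0.5 * e * e
    phi_k = 0.5 * (2*h0 + 2*V0*np.exp(-rho*t_k)) * np.exp(-rho_t*(t - t_k))
    if pending is None and V >= phi_k:        # ETM fires: s_k = t
        pending = (t + eta, e)                # schedule the delayed impulse
    if pending is not None and t >= pending[0]:
        e, t_k, pending = beta * pending[1], t, None   # impulse at t_k
        impulses.append(t)
    e += dt * (-a * e)                        # flow between impulses
    t += dt
```

Note that the trigger check is skipped while an impulse is pending, mirroring Remark 5's observation that the sensor need not evaluate (7) during the communication delay.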
Now, we could acquire the following theorem.
Theorem 1.
Suppose that Assumption 1 holds, and let $h_0 > 0$, $0 < \rho < \tilde{\rho} < 2\rho$, and $p > q > 0$. Then, the error system (3) under (8) and (7) achieves quasi-synchronization in the following sense:
$$\|e(t)\| < 2\sqrt{\theta h_0} + 2\sqrt{\theta V(0)}\,e^{-\frac{\rho}{2}t}, \quad t \ge 0, \tag{9}$$
where $\theta = 1 + \frac{\sigma}{h_0}$, $\sigma = \frac{1}{2}\bar{\xi}\sup_{-\tau\le s\le 0}\sum_{i=1}^{N}\|\varphi_i(s)\|^2$, $p = 2\lambda_{min}(C) - \epsilon_1\mu_1^2\lambda_{max}(A^TA) - \epsilon$, $q = \epsilon_2\mu_2^2\lambda_{max}(B^TB)$, and $\epsilon \triangleq \frac{1}{\epsilon_1} + \frac{1}{\epsilon_2} + \frac{1}{\epsilon_3} + \frac{1}{\epsilon_4} + \frac{1}{\epsilon_5} + \frac{1}{\epsilon_6}$.
Proof. 
Construct the Lyapunov function $V(t) = \frac{1}{2}e^T(t)e(t)$. Then, for $t \in [t_k, t_{k+1})$, we obtain
$$\begin{aligned} D^+V(t) ={}& -e^T(t)Ce(t) + e^T(t)A\tilde{F}(e(t)) + e^T(t)B\tilde{G}(e(t-\tau(t))) + ce^T(t)(L\otimes\Gamma)e(t) + e^T(t)\tilde{K} \\ &- e^T(t)(C - I_N\otimes C_0)\tilde{s}(t) + e^T(t)(A - I_N\otimes A_0)\tilde{F}(s(t)) + e^T(t)(B - I_N\otimes B_0)\tilde{G}(s(t-\tau(t))). \end{aligned}\tag{10}$$
Based on Assumption 1 and Lemma 1, we can estimate (10) item by item.
$$-e^T(t)Ce(t) \le -2\lambda_{min}(C)V(t),$$
$$e^T(t)A\tilde{F}(e(t)) \le \frac{1}{2\epsilon_1}e^T(t)e(t) + \frac{\epsilon_1}{2}\tilde{F}^T(e(t))A^TA\tilde{F}(e(t)) \le \frac{1}{\epsilon_1}V(t) + \epsilon_1\mu_1^2\lambda_{max}(A^TA)V(t),$$
$$ce^T(t)(L\otimes\Gamma)e(t) = c\sum_{i=1}^{N}\sum_{j=1}^{N}e_i^T(t)l_{ij}\Gamma e_j(t) = c\sum_{i=1}^{N}\sum_{j=1}^{N}l_{ij}\sum_{\iota=1}^{n}e_{i\iota}(t)\gamma_\iota e_{j\iota}(t) = c\sum_{\iota=1}^{n}\gamma_\iota (e^\iota(t))^T L e^\iota(t) = -c\sum_{\iota=1}^{n}\gamma_\iota\sum_{i=1}^{N}\sum_{j>i}l_{ij}\big(e_{i\iota}(t)-e_{j\iota}(t)\big)^2 \le 0,$$
where $e^\iota(t) = (e_{1\iota}(t), e_{2\iota}(t), \ldots, e_{N\iota}(t))^T$;
$$e^T(t)B\tilde{G}(e(t-\tau(t))) \le \frac{1}{2\epsilon_2}e^T(t)e(t) + \frac{\epsilon_2}{2}\tilde{G}^T(e(t-\tau(t)))B^TB\tilde{G}(e(t-\tau(t))) \le \frac{1}{\epsilon_2}V(t) + \epsilon_2\mu_2^2\lambda_{max}(B^TB)V(t-\tau(t)),$$
and
$$e^T(t)\tilde{K} \le \frac{1}{2\epsilon_3}e^T(t)e(t) + \frac{\epsilon_3}{2}\tilde{K}^T\tilde{K} \le \frac{1}{\epsilon_3}V(t) + \frac{\epsilon_3}{2}\tilde{K}^T\tilde{K},$$
$$-e^T(t)(C-I_N\otimes C_0)\tilde{s}(t) \le \frac{1}{2\epsilon_4}e^T(t)e(t) + \frac{\epsilon_4}{2}\tilde{s}^T(t)(C-I_N\otimes C_0)^2\tilde{s}(t) \le \frac{1}{\epsilon_4}V(t) + \frac{\epsilon_4 Nm^2}{2}\lambda_{max}[(C-I_N\otimes C_0)^2],$$
$$e^T(t)(A-I_N\otimes A_0)\tilde{F}(s(t)) \le \frac{1}{2\epsilon_5}e^T(t)e(t) + \frac{\epsilon_5}{2}\tilde{F}^T(s(t))(A-I_N\otimes A_0)^T(A-I_N\otimes A_0)\tilde{F}(s(t)) \le \frac{1}{\epsilon_5}V(t) + \frac{\epsilon_5 N\mu_1^2 m^2}{2}\lambda_{max}[(A-I_N\otimes A_0)^T(A-I_N\otimes A_0)],$$
$$e^T(t)(B-I_N\otimes B_0)\tilde{G}(s(t-\tau(t))) \le \frac{1}{2\epsilon_6}e^T(t)e(t) + \frac{\epsilon_6}{2}\tilde{G}^T(s(t-\tau(t)))(B-I_N\otimes B_0)^T(B-I_N\otimes B_0)\tilde{G}(s(t-\tau(t))) \le \frac{1}{\epsilon_6}V(t) + \frac{\epsilon_6 N\mu_2^2 m^2}{2}\lambda_{max}[(B-I_N\otimes B_0)^T(B-I_N\otimes B_0)],$$
where $m > 0$ denotes a bound on the target trajectory, i.e., $\|s(t)\| \le m$ (the chaotic objective trajectory (2) is bounded).
Then, we will obtain
$$\begin{aligned} D^+V(t) \le{}& -\Big[2\lambda_{min}(C) - \frac{1}{\epsilon_1} - \epsilon_1\mu_1^2\lambda_{max}(A^TA) - \frac{1}{\epsilon_2} - \frac{1}{\epsilon_3} - \frac{1}{\epsilon_4} - \frac{1}{\epsilon_5} - \frac{1}{\epsilon_6}\Big]V(t) + \epsilon_2\mu_2^2\lambda_{max}(B^TB)V(t-\tau(t)) \\ &+ \frac{\epsilon_3}{2}\tilde{K}^T\tilde{K} + \frac{\epsilon_4 Nm^2}{2}\lambda_{max}[(C-I_N\otimes C_0)^2] + \frac{\epsilon_5 Nm^2\mu_1^2}{2}\lambda_{max}[(A-I_N\otimes A_0)^T(A-I_N\otimes A_0)] \\ &+ \frac{\epsilon_6 Nm^2\mu_2^2}{2}\lambda_{max}[(B-I_N\otimes B_0)^T(B-I_N\otimes B_0)] \\ \triangleq{}& -pV(t) + qV(t-\tau(t)) + r, \end{aligned}\tag{12}$$
in which $p = 2\lambda_{min}(C) - \epsilon_1\mu_1^2\lambda_{max}(A^TA) - \epsilon$, $q = \epsilon_2\mu_2^2\lambda_{max}(B^TB)$, and
$$r = \frac{\epsilon_3}{2}\tilde{K}^T\tilde{K} + \frac{\epsilon_4 Nm^2}{2}\lambda_{max}[(C-I_N\otimes C_0)^2] + \frac{\epsilon_5 Nm^2\mu_1^2}{2}\lambda_{max}[(A-I_N\otimes A_0)^T(A-I_N\otimes A_0)] + \frac{\epsilon_6 Nm^2\mu_2^2}{2}\lambda_{max}[(B-I_N\otimes B_0)^T(B-I_N\otimes B_0)].$$
Suppose $v(t)$ is the unique solution of the time-delay comparison system
$$\begin{cases} \dfrac{dv(t)}{dt} = -pv(t) + qv(t-\tau(t)) + r, & t \ne t_k, \\ v(t_k^+) = \beta_k^2\,v(s_k), \\ v(t) = \frac{1}{2}\bar{\xi}\sum_{i=1}^{N}\|\varphi_i(t)\|^2, & -\tau \le t \le 0, \end{cases}\tag{13}$$
where $\bar{\xi} \ge 1$ is a constant. Since $V(t) \le \frac{1}{2}\bar{\xi}\sum_{i=1}^{N}\|\varphi_i(t)\|^2 = v(t)$ for $-\tau \le t \le 0$, by Lemma 1 one obtains
$$0 \le V(t) \le v(t), \quad t \ge 0. \tag{14}$$
Then, we will prove that v ( t ) < θ h ( t ) holds for t [ t k , t k + 1 ) .
Since the characteristic equation of (13) can be written as $\varrho(\lambda) = \lambda + p - qe^{-\lambda\tau}$, and $\varrho(-\infty) < 0$, $\varrho(0) = p - q > 0$, $\varrho(\lambda)$ has a negative root, denoted by $-\rho$. Thus, for $\forall t \in (-\tau, 0)$, we have $v(t) = \frac{1}{2}\bar{\xi}\sum_{i=1}^{N}\|\varphi_i(t)\|^2 \le \frac{1}{2}\bar{\xi}\sup_{-\tau\le s\le 0}\sum_{i=1}^{N}\|\varphi_i(s)\|^2 = \sigma < \sigma e^{-\rho(t-t_0)} + \frac{r}{p-q}$. We define $h_0 \triangleq \frac{r}{p-q}$; then $v(t) \le \frac{1}{2}h(t) < \theta h(t)$ for $t \in (-\tau, 0)$.
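In practice, the decay rate defined implicitly by the characteristic function must be found numerically. A minimal bisection sketch, under hypothetical parameter values satisfying $p > q > 0$ (these numbers are not from the paper's examples):

```python
import math

# Hypothetical parameters with p > q > 0.
p, q, tau, r = 2.0, 0.5, 0.2, 0.1

def varrho(lam):
    """Characteristic function of (13): varrho(lambda) = lambda + p - q*exp(-lambda*tau)."""
    return lam + p - q * math.exp(-lam * tau)

# varrho(0) = p - q > 0 and varrho(lambda) -> -inf as lambda -> -inf, and
# varrho is strictly increasing, so a unique negative root -rho exists.
lo, hi = -50.0, 0.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if varrho(mid) < 0:
        lo = mid
    else:
        hi = mid
rho = -0.5 * (lo + hi)   # the proof's rho > 0
h0 = r / (p - q)         # h0 as defined in the proof
```

The identity $p - \rho = qe^{\rho\tau}$, used later to simplify the integral bound, holds automatically at the computed root.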
We divide the time interval [ t k , t k + 1 ) into two parts, the first part is [ t k , s k + 1 ) , the second part is [ s k + 1 , t k + 1 ) .
(a) When t [ t k , s k + 1 ) :
In this part, according to ETM (7), we have $v(t) \le \phi_k(t) = (h_0 + \sigma e^{-\rho t_k})e^{-\tilde{\rho}(t-t_k)} \le h_0 + \sigma e^{-\rho t} = \frac{1}{2}h(t) < \theta h(t)$.
(b) When t [ s k + 1 , t k + 1 ) :
We claim that $v(t) < \frac{1}{2}h(t-s_{k+1}) = h_0 + \sigma e^{-\rho(t-s_{k+1})}$; otherwise, there exists $t^* \in (s_{k+1}, t_{k+1})$ satisfying $v(t^*) \ge h_0 + \sigma e^{-\rho(t^*-s_{k+1})}$ and $v(t) \le h_0 + \sigma e^{-\rho(t-s_{k+1})}$ for $\forall t \in (s_{k+1}, t^*)$.
According to (13), we obtain
$$v(t) = e^{-\int_{s_{k+1}}^{t}p\,du}\Big[v(s_{k+1}) + \int_{s_{k+1}}^{t}e^{\int_{s_{k+1}}^{s}p\,du}\big(qv(s-\tau(s)) + r\big)ds\Big] = e^{-p(t-s_{k+1})}v(s_{k+1}) + \int_{s_{k+1}}^{t}e^{-p(t-s)}\big(qv(s-\tau(s)) + r\big)ds. \tag{15}$$
Because $v(s_{k+1}) = \phi_k(s_{k+1}) < \phi_0(0) = h_0 + \sigma$, (15) can be further bounded as
$$v(t) < (h_0 + \sigma)e^{-p(t-s_{k+1})} + \int_{s_{k+1}}^{t}e^{-p(t-s)}\big(qv(s-\tau(s)) + r\big)ds. \tag{16}$$
then, we will obtain
$$v(t^*) < (h_0 + \sigma)e^{-p(t^*-s_{k+1})} + \int_{s_{k+1}}^{t^*}e^{-p(t^*-s)}\big(qv(s-\tau(s)) + r\big)ds = e^{-p(t^*-s_{k+1})}\Big[\sigma + \frac{r}{p-q} + \int_{s_{k+1}}^{t^*}e^{p(s-s_{k+1})}\big(qv(s-\tau(s)) + r\big)ds\Big]. \tag{17}$$
Since $0 < \rho < \tilde{\rho} < 2\rho$, we have $e^{-\tilde{\rho}t}\,e^{(\tilde{\rho}-\rho)t_k} < e^{-\rho t}\,e^{\rho s_{k+1}}$, which indicates $(h_0 + \sigma e^{-\rho t_k})e^{-\tilde{\rho}(t-t_k)} < h_0 + \sigma e^{-\rho(t-s_{k+1})}$ for $t \in (t_k, s_{k+1})$; hence $\phi_k(t) < \frac{1}{2}h(t-s_{k+1})$ for $t \in (t_k, s_{k+1})$. Combined with the fact that $t^*$ is the first time at which $v(t) \le \frac{1}{2}h(t-s_{k+1})$ fails, we then have $v(t) \le \frac{1}{2}h(t-s_{k+1})$ for $t \in (t_k, t^*)$.
Now, because $0 < \tau(s) < \tau$, we obtain $t_k < s_{k+1} - \tau < s - \tau(s) < t^*$ (the $s$ here refers to the integration variable in (17)); therefore $v(s-\tau(s)) \le \frac{1}{2}h(s-\tau(s)-s_{k+1})$, which is the $v(s-\tau(s))$ appearing in (17).
Now, we can obtain
$$\begin{aligned} v(t^*) <{}& e^{-p(t^*-s_{k+1})}\Big\{\sigma + \frac{r}{p-q} + \int_{s_{k+1}}^{t^*}e^{p(s-s_{k+1})}\Big[q\Big(\frac{r}{p-q} + \sigma e^{-\rho(s-\tau(s)-s_{k+1})}\Big) + r\Big]ds\Big\} \\ \le{}& e^{-p(t^*-s_{k+1})}\Big[\sigma + \frac{r}{p-q} + \frac{qr}{p(p-q)}\big(e^{p(t^*-s_{k+1})}-1\big) + \frac{q\sigma e^{\rho\tau}}{p-\rho}\big(e^{(p-\rho)(t^*-s_{k+1})}-1\big) + \frac{r}{p}\big(e^{p(t^*-s_{k+1})}-1\big)\Big] \\ ={}& e^{-p(t^*-s_{k+1})}\Big[\sigma + \frac{r}{p-q} + \frac{r}{p-q}\big(e^{p(t^*-s_{k+1})}-1\big) + \frac{q\sigma e^{\rho\tau}}{p-\rho}\big(e^{(p-\rho)(t^*-s_{k+1})}-1\big)\Big]. \end{aligned}\tag{18}$$
Since $-\rho$ is the root of $\lambda + p - qe^{-\lambda\tau} = 0$, we have $p - \rho = qe^{\rho\tau}$. Therefore, the last inequality can be written as follows:
$$v(t^*) < e^{-p(t^*-s_{k+1})}\Big[\sigma + \frac{r}{p-q} + \frac{r}{p-q}\big(e^{p(t^*-s_{k+1})}-1\big) + \sigma\big(e^{(p-\rho)(t^*-s_{k+1})}-1\big)\Big] = \frac{r}{p-q} + \sigma e^{-\rho(t^*-s_{k+1})} = h_0 + \sigma e^{-\rho(t^*-s_{k+1})}, \tag{19}$$
which contradicts the definition of $t^*$. Therefore, it can be seen from the above proof process that $v(t) < \frac{1}{2}h(t-s_{k+1}) < \theta h(t)$ for $t \in [s_{k+1}, t_{k+1})$.
Moreover, we use mathematical induction to prove that $v(t) < \theta h(t)$ holds for $\forall t \in [t_k, t_{k+1})$, $\forall k \in \mathbb{N}$. Firstly, $v(t) < \theta h(t)$ needs to be proven over $[0, t_1)$. By the definition of $h(t)$ in (6), $v(0) < h(0)$ holds. As a consequence, $v(t) < \phi_0(t)$ holds for $t \in [0, s_1)$, which indicates that $v(t) < h(t)$ for every $t \in [0, s_1)$. Additionally, by the argument in part (b), $v(t) < \theta h(t)$ holds exactly for each $t \in [0, t_1)$.
Then, we assume that $v(t) < \theta h(t)$ holds for each $t \in [t_{k-1}, t_k)$; we need to show that it is also true for $t \in [t_k, t_{k+1})$. Since $v(t_k^+) < \phi_k(t_k) < \theta h(t_k)$, it can be verified that $v(t) < \phi_k(t) < \theta h(t)$ over $t \in [t_k, s_{k+1})$. Again by the argument in part (b), $v(t) < \theta h(t)$ holds for each $t \in [t_k, t_{k+1})$, $k \in \mathbb{N}$.
Therefore, we conclude that
$$V(t) \le v(t) < \theta h(t), \quad t \ge 0. \tag{20}$$
By simple transformation, we can obtain
$$\|e(t)\| < 2\sqrt{\theta h_0} + 2\sqrt{\theta V(0)}\,e^{-\frac{\rho}{2}t}. \tag{21}$$
The proof is completed. □
Remark 6.
The value of θ in Theorem 1 is determined from the following derivation. We assume that there exists a constant θ satisfying $h(t-s_{k+1}) < \theta h(t)$ for $t \in [s_{k+1}, t_{k+1})$ and $k \in \mathbb{N}$. Then, we need $\max h(t-s_{k+1}) < \theta \min h(t)$, which indicates $h(0) < \theta h(t_{k+1})$, so we obtain $\theta > 1 + \frac{\sigma - \sigma e^{-\rho t_{k+1}}}{h_0 + \sigma e^{-\rho t_{k+1}}}$. We choose $1 + \frac{\sigma}{h_0} > 1 + \frac{\sigma - \sigma e^{-\rho t_{k+1}}}{h_0 + \sigma e^{-\rho t_{k+1}}}$ as a suitable value; readers can also choose other values.
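Remark 6's choice can also be sanity-checked numerically: the required ratio is at most $(h_0+\sigma)/(h_0+\sigma e^{-\rho t})$, which approaches $1 + \sigma/h_0$ from below as $t$ grows but never attains it. A small check with hypothetical positive constants (not from the paper's examples):

```python
import math

# Hypothetical constants: h0, sigma, rho > 0.
h0, sigma, rho = 0.5, 1.0, 0.5
theta = 1 + sigma / h0   # the choice of theta in Theorem 1

# theta must dominate (h0 + sigma) / (h0 + sigma*exp(-rho*t)) for all t >= 0;
# sample the ratio on a grid and take the worst case.
worst = max(
    (h0 + sigma) / (h0 + sigma * math.exp(-rho * (0.05 * i)))
    for i in range(1200)
)
```

Here `worst` creeps toward `theta` as the grid extends, illustrating why any strictly larger constant would also work, as the remark notes.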

3.2. Switching ETM Depending on Communication Delays and Aperiodic Sampling

From Theorem 1, we discover that communication delays have a significant impact on the switching ETM. A new aperiodic sampling time $\xi_k$ is introduced to further reduce the time spent on continuous sampling and continuous checking whilst avoiding Zeno behavior. In other words, in addition to the communication delay $\eta_k$, the switching ETM also relies on the aperiodic sampling time $\xi_k$; for simplicity, the sampling-off interval is taken as $[s_k, s_k+\eta_k+\xi_k)$. The next intention is to use a switch to toggle between aperiodic sampling and continuous event-triggering. As a result, $\xi_k$ has a beneficial influence on extending the sampling-off interval. Figure 2 is a timing diagram illustrating the distinction between these two switching ETMs. The following condition is the ETM incorporating $\xi_k$ to determine the event-triggered instant $s_{k+1}$:
$$s_{k+1} = \inf\{t \mid t > t_k + \xi_k,\ V(t) \ge \phi_k(t)\}; \tag{22}$$
the definition of ϕ k ( · ) , k N has been given in (7). It is worth noting that, at t k , the kth task has been completed and the value of ξ k can be determined.
The rationale for applying aperiodic sampling is that the value of the Lyapunov function after each impulse changes owing to the impulsive intensity and to $V(s_k)$ in the impulsive controller. Aperiodic sampling, as opposed to periodic sampling, can achieve the goal of designing intermittent intervals based on the different values of the Lyapunov function. We conclude the range of each aperiodic sampling time $\xi_k$ through the following theorem, based on the estimate of the Lyapunov function in Theorem 1.
Theorem 2.
Consider error system (5) under (7) and (8). Suppose that Assumption 1 holds. Given constants $h_0 > 0$ and $0 < \rho < \tilde{\rho} < 2\rho$, the aperiodic sampling time $\xi_k$ can be estimated as follows:
$$\xi_k = \begin{cases} \dfrac{1}{\tilde{\rho}} \ln \dfrac{h_0 + \sigma e^{-\rho t_k}}{V(t_k^+)}, & V(t_k^+) > \zeta, \\[2mm] \dfrac{1}{\tilde{\rho}} \ln \dfrac{h_0 + \sigma e^{-\rho t_k}}{\frac{q}{2p}(h_0 + \sigma) + \frac{r}{p}}, & V(t_k^+) \le \zeta, \end{cases} \tag{23}$$
where $\zeta = \frac{q h(0)}{2p} + \frac{r}{p}$.
Proof: From (12), we have
$$V(t) < e^{-p(t-t_k)}\left[ V(t_k^+) + \int_{t_k}^{t} e^{p(s-t_k)} \big( q V(s-\tau(s)) + r \big)\, ds \right]. \tag{24}$$
Now we want to find its maximum. Assume that the lower bound of $\eta_k$ is $\tau$. Then the range of $s - \tau(s)$ in (12) is $(s_k, t_k + \xi_k)$. We split this interval into two parts, namely $(s_k, t_k)$ and $(t_k, t_k + \xi_k)$. In the former interval, the switching ETM guarantees $V(s-\tau(s)) < \frac{1}{2} h(0)$. Similarly, in the latter interval, from ETM (7) we also have $V(s-\tau(s)) < \gamma_k(s-\tau(s)) < \frac{1}{2} h(0)$.
Substituting $V(s-\tau(s)) \le \frac{1}{2} h(0)$, we have
$$\begin{aligned} V(t) &< e^{-p(t-t_k)}\left[ V(t_k^+) + \int_{t_k}^{t} e^{p(s-t_k)} \Big( \frac{q h(0)}{2} + r \Big)\, ds \right] \\ &= \Big( V(t_k^+) - \frac{q h(0)}{2p} - \frac{r}{p} \Big) e^{-p(t-t_k)} + \frac{q h(0)}{2p} + \frac{r}{p} \\ &= \big( V(t_k^+) - \zeta \big) e^{-p(t-t_k)} + \zeta. \end{aligned} \tag{25}$$
Case 1: $V(t_k^+) > \zeta$.
In this case, the maximum of (25) over $(t_k, t_k + \xi_k)$ is $V(t_k^+)$. Requiring $V(t_k^+) < \gamma_k(t_k + \xi_k) = (h_0 + \sigma e^{-\rho t_k}) e^{-\tilde{\rho} \xi_k}$ yields $\xi_k < \frac{1}{\tilde{\rho}} \ln \frac{h_0 + \sigma e^{-\rho t_k}}{V(t_k^+)}$.
Case 2: $V(t_k^+) \le \zeta$.
In this case, the maximum of (25) is $(V(t_k^+) - \zeta) e^{-p \xi_k} + \zeta \le \zeta$. Requiring $\zeta < \gamma_k(t_k + \xi_k) = (h_0 + \sigma e^{-\rho t_k}) e^{-\tilde{\rho} \xi_k}$ yields $\xi_k < \frac{1}{\tilde{\rho}} \ln \frac{h_0 + \sigma e^{-\rho t_k}}{\frac{q}{2p}(h_0 + \sigma) + \frac{r}{p}}$. □
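The comparison bound (25) can be checked numerically: integrating the comparison dynamics $\dot{V} = -pV + \frac{q h(0)}{2} + r$ by forward Euler reproduces the closed form $(V(t_k^+) - \zeta) e^{-p(t-t_k)} + \zeta$ up to discretization error. The constants below are illustrative stand-ins, not the paper's example values:

```python
import math

# Illustrative constants (assumed; not the paper's example values)
p, q, r, h0_val = 2.0, 0.5, 0.1, 1.1    # h0_val plays the role of h(0)
zeta = q * h0_val / (2 * p) + r / p      # zeta = q*h(0)/(2p) + r/p

V0, dt, T = 1.0, 1e-4, 2.0
V, t = V0, 0.0
while t < T:
    # comparison dynamics with V(s - tau(s)) replaced by its bound h(0)/2
    V += dt * (-p * V + q * h0_val / 2 + r)
    t += dt

closed_form = (V0 - zeta) * math.exp(-p * T) + zeta
assert abs(V - closed_form) < 1e-3       # Euler matches the closed form
```

Note that $\zeta$ is exactly the equilibrium of the comparison dynamics, which is why the bound decays monotonically toward $\zeta$ in Case 1 and stays below $\zeta$ in Case 2.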
Remark 7.
Theorem 2 indicates that the value of the aperiodic sampling time $\xi_k$ depends on the relationship between $V(t_k^+)$ and the restraining function $h(t)$, as well as on $p$, $q$, $r$. Obviously, once the heterogeneous neural network is determined, $p$, $q$, $r$ can be calculated, which means the value of $\zeta$ can be obtained. Furthermore, the trend of $V(t_k^+)$ is decreasing, as shown in Figure 1. This means the condition $V(t_k^+) \le \zeta$ becomes increasingly easy to satisfy, so as time grows the value of $\xi_k$ equals $\frac{1}{\tilde{\rho}} \ln \frac{h_0 + \sigma e^{-\rho t_k}}{\frac{q}{2p}(h_0 + \sigma) + \frac{r}{p}}$.
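As a sketch, the two branches of Theorem 2 can be evaluated directly. All parameter values below are illustrative assumptions (not taken from the examples), chosen so that $\zeta < h_0$ and hence both branches return a positive $\xi_k$:

```python
import math

# Illustrative parameters (assumptions) with rho < rho_tilde and zeta < h0,
# so that both branches of Theorem 2 return a positive xi_k
h0, sigma, rho, rho_t = 0.1, 1.0, 1.0, 1.5
p, q, r = 2.0, 0.1, 0.05
zeta = q * (h0 + sigma) / (2 * p) + r / p   # zeta = q*h(0)/(2p) + r/p, h(0) = h0 + sigma

def xi(t_k, V_plus):
    """Aperiodic sampling time of Theorem 2 (our transcription)."""
    num = h0 + sigma * math.exp(-rho * t_k)
    if V_plus > zeta:
        return math.log(num / V_plus) / rho_t      # first branch
    return math.log(num / zeta) / rho_t            # second branch

xi_early = xi(t_k=0.1, V_plus=0.8)    # V(t_k^+) above zeta
xi_late = xi(t_k=5.0, V_plus=0.05)    # V(t_k^+) below zeta: the limiting branch
```

Once $V(t_k^+) \le \zeta$, the returned $\xi_k$ no longer depends on $V(t_k^+)$ at all, which is exactly the limiting behavior described in Remark 7.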

4. Numerical Simulations

Two illustrative examples will be presented in this section to show the validity of the derived results.
Example 1.
Consider the chaotic neural networks with time-varying delays, of a form similar to that in [29]:
$$\dot{x}_i(t) = -C_i x_i(t) + A_i f\big(x_i(t)\big) + B_i g\big(x_i(t-\tau(t))\big) + c \sum_{j=1}^{N} l_{ij} \Gamma x_j(t) + K_i, \quad i = 1, \ldots, 6, \tag{26}$$
where $x_i(t) = (x_{i1}(t), x_{i2}(t))^{T}$, $f(x_i(t)) = 0.1(\tanh x_{i1}(t), \tanh x_{i2}(t))^{T}$, $g(x_i(t-\tau(t))) = 0.1(\tanh x_{i1}(t-\tau(t)), \tanh x_{i2}(t-\tau(t)))^{T}$, and
$$C_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0.5 & 0.1 \\ 0.3 & 0.15 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 0.7 & 0.1 \\ 0.2 & 0.5 \end{pmatrix},$$
$$C_2 = \begin{pmatrix} 1.2 & 0 \\ 0 & 1 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0.5 & 0.11 \\ 0.3 & 0.2 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 0.75 & 0.1 \\ 0.16 & 0.45 \end{pmatrix},$$
$$C_3 = \begin{pmatrix} 1.05 & 0 \\ 0 & 1 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 0.5 & 0.11 \\ 0.3 & 0.1 \end{pmatrix}, \quad B_3 = \begin{pmatrix} 0.78 & 0.1 \\ 0.17 & 0.46 \end{pmatrix},$$
$$C_4 = \begin{pmatrix} 1.1 & 0 \\ 0 & 1 \end{pmatrix}, \quad A_4 = \begin{pmatrix} 0.5 & 0.09 \\ 0.3 & 0.1 \end{pmatrix}, \quad B_4 = \begin{pmatrix} 0.75 & 0.1 \\ 0.19 & 0.5 \end{pmatrix},$$
$$C_5 = \begin{pmatrix} 1.15 & 0 \\ 0 & 1 \end{pmatrix}, \quad A_5 = \begin{pmatrix} 0.5 & 0.11 \\ 0.3 & 0.15 \end{pmatrix}, \quad B_5 = \begin{pmatrix} 0.7 & 0.1 \\ 0.18 & 0.47 \end{pmatrix},$$
$$C_6 = \begin{pmatrix} 1.09 & 0 \\ 0 & 1 \end{pmatrix}, \quad A_6 = \begin{pmatrix} 0.5 & 0.13 \\ 0.3 & 0.1 \end{pmatrix}, \quad B_6 = \begin{pmatrix} 0.78 & 0.1 \\ 0.15 & 0.5 \end{pmatrix},$$
where $K_i = 0$ $(i = 1, \ldots, 6)$ and the time-varying delay is $\tau(t) = 0.7 + 0.3 \sin(2t) \le 1 = \tau$.
Due to the dissimilarity of $C_i$, $A_i$ and $B_i$ $(i = 1, \ldots, 6)$, the chaotic neural networks (26) are heterogeneous. Suppose $\mathcal{G}$ is connected and the corresponding Laplacian matrix $L$ is
$$L = \begin{pmatrix} -1 & 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 1 \\ 1 & 0 & 0 & 0 & 0 & -1 \end{pmatrix}.$$
Since $f(\cdot) = g(\cdot) = 0.1 \tanh(\cdot) \in (-0.1, 0.1)$ and $\tanh'(\cdot) = 1 - \tanh^2(\cdot) \in (0, 1]$, it is easy to obtain $\mu_1 = \mu_2 = 0.1$ in Assumption 1. We take $h_0 = 0.1$, $\rho = 1$, $\tilde{\rho} = 1.5$, and simulate with a fixed MATLAB time step of 0.001 for $t \in [-1, 3]$.
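A minimal forward-Euler sketch of the open-loop dynamics (26) with the stated step 0.001 is given below (in Python rather than MATLAB). The matrices are transcribed as printed above; any minus signs lost in the journal's typesetting are not restored, and the coupling choices $c = 1$, $\Gamma = I_2$, plus the random constant initial history, are our assumptions:

```python
import numpy as np

# Node matrices transcribed as printed in Example 1 (minus signs, if any
# were lost in typesetting, are not restored); K_i = 0 as stated.
C = [np.diag(d) for d in ([1.0, 1], [1.2, 1], [1.05, 1], [1.1, 1], [1.15, 1], [1.09, 1])]
A = [np.array(a) for a in (
    [[0.5, 0.1], [0.3, 0.15]], [[0.5, 0.11], [0.3, 0.2]],
    [[0.5, 0.11], [0.3, 0.1]], [[0.5, 0.09], [0.3, 0.1]],
    [[0.5, 0.11], [0.3, 0.15]], [[0.5, 0.13], [0.3, 0.1]])]
B = [np.array(b) for b in (
    [[0.7, 0.1], [0.2, 0.5]], [[0.75, 0.1], [0.16, 0.45]],
    [[0.78, 0.1], [0.17, 0.46]], [[0.75, 0.1], [0.19, 0.5]],
    [[0.7, 0.1], [0.18, 0.47]], [[0.78, 0.1], [0.15, 0.5]])]

# Directed-ring Laplacian matching the matrix L above (rows sum to zero)
L = -np.eye(6) + np.roll(np.eye(6), 1, axis=1)
c, Gamma = 1.0, np.eye(2)          # assumed coupling strength and inner matrix

f = lambda x: 0.1 * np.tanh(x)     # f = g = 0.1 * tanh componentwise
dt = 1e-3                          # fixed time step 0.001, t in [-1, 3]
hist, steps = 1000, 3000           # [-1, 0] history, [0, 3] simulation

X = np.zeros((hist + steps + 1, 6, 2))
X[:hist + 1] = 0.1 * np.random.default_rng(0).standard_normal((6, 2))

for n in range(hist, hist + steps):
    t = (n - hist) * dt
    tau = 0.7 + 0.3 * np.sin(2 * t)
    Xd = X[n - int(round(tau / dt))]       # delayed state x(t - tau(t))
    coup = c * (L @ (X[n] @ Gamma.T))      # c * sum_j l_ij * Gamma x_j(t)
    for i in range(6):
        dx = -C[i] @ X[n, i] + A[i] @ f(X[n, i]) + B[i] @ f(Xd[i]) + coup[i]
        X[n + 1, i] = X[n, i] + dt * dx
```

This only integrates the continuous dynamics; the event-triggered impulses of ETM (7) or (22) would be applied on top of it at the triggered instants.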
Figure 3 depicts $x_{ij}(t)$, $i = 1, 2, \ldots, 6$, $j = 1, 2$, from which one can see that the heterogeneous neural network approaches the associated target as time evolves.
In addition to the changing trend of $x_i(t)$, $i = 1, 2, \ldots, 6$, the state curves of the error system can also be observed in Figure 4.
Finally, the event-triggered impulsive series are presented in Figure 5.
Example 2.
Consider the same chaotic neural networks as in (26). In order to isolate the influence of aperiodic sampling, the parameters are assumed to be the same as those given in Example 1. As shown in Figure 6, under ETM (22), which introduces the aperiodic sampling parameter $\xi_k$, the chaotic neural networks (26) can also achieve synchronization. Since $\xi_k$ acts only to delay each sampling instant, the changing tendency of $x_{ij}(t)$, $i = 1, 2, \ldots, 6$, $j = 1, 2$ is similar to that in Example 1. Figure 7 depicts the norm of $e_i(t)$, $i = 1, 2, \ldots, 6$, using the previously provided data.
The event-triggered impulsive series are given in Figure 8.
Comparing the results of Examples 1 and 2, especially the event-triggered impulsive series in Figure 5 and Figure 8, we observe that considering aperiodic sampling can effectively reduce the number of impulses for the same system. Although ETM (22) is not checked during $[s_k, s_k + \eta_k + \xi_k)$, $V(t)$ is kept below $h(t)$, so the synchronization performance is effectively maintained.

5. Conclusions

The quasi-synchronization of heterogeneous neural networks was investigated in this paper using event-based impulsive control. Considering the communication delay in signal transmission, we designed a switching ETM to determine the impulsive instants. Building on this ETM, we then designed a more economical ETM by introducing a new aperiodic sampling time. Sufficient criteria were given to guarantee the quasi-synchronization of a class of heterogeneous neural networks. Lastly, two simulations were conducted to confirm the validity of our results.

Author Contributions

Y.L. performed the data analyses and wrote the manuscript; C.Y. helped perform the analysis with constructive discussions; J.F. contributed to the conception of the study; J.W. performed the experiment. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the associate editor and the anonymous reviewers for their insightful suggestions. This work was supported by the Scientific Research Launch project of Shenzhen Polytechnic (6022312040K).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Miyajima, H.; Shigei, N.; Yatsuki, S. Shift-invariant associative memory based on homogeneous neural networks. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2005, E88-A, 2600–2606.
2. Christou, V.; Tsipouras, M.G.; Giannakeas, N.; Tzallas, A.T. Hybrid extreme learning machine approach for homogeneous neural networks. Neurocomputing 2018, 311, 397–412.
3. Dorfler, F.; Bullo, F. Synchronization and transient stability in power networks and non-uniform Kuramoto oscillators. SIAM J. Control Optim. 2012, 50, 1616–1642.
4. Meng, S.; Pan, R.; Gao, W.; Zhou, J.; Wang, J.; He, W. A multi-task and multi-scale convolutional neural network for automatic recognition of woven fabric pattern. J. Intell. Manuf. 2021, 32, 1147–1161.
5. Rao, R.; Zhong, S.; Pu, Z. Fixed point and p-stability of T-S fuzzy impulsive reaction-diffusion dynamic neural networks with distributed delay via Laplacian semigroup. Neurocomputing 2019, 28, 170–184.
6. Zhao, Y.; He, X.; Huang, T.; Huang, J.; Li, P. A smoothing neural network for minimization l1-lp in sparse signal reconstruction with measurement noises. Neural Netw. 2020, 122, 40–53.
7. Wu, X.; Mao, B.; Wu, X.; Lv, J. Dynamic event-triggered leader-follower consensus control for multiagent systems. SIAM J. Control Optim. 2022, 60, 189–209.
8. Grasa, J.; Calvo, B. Simulating extraocular muscle dynamics. A comparison between dynamic implicit and explicit finite element methods. Mathematics 2021, 9, 1024.
9. Shanmugasundaram, S.; Udhayakumar, K.; Gunasekaran, D.; Rakkiyappan, R. Event-triggered impulsive control design for synchronization of inertial neural networks with time delays. Neurocomputing 2022, 483, 322–332.
10. Bao, Y.; Zhang, Y. Fixed-time dual-channel event-triggered secure quasi-synchronization of coupled memristive neural networks. J. Frankl. Inst. 2021, 358, 10052–10078.
11. Chen, J.; Chen, B.; Zeng, Z. Exponential quasi-synchronization of coupled delayed memristive neural networks via intermittent event-triggered control. Neural Netw. 2021, 141, 98–106.
12. Yao, W.; Wang, C.; Sun, Y.; Zhou, C.; Lin, H. Synchronization of inertial memristive neural networks with time-varying delays via static or dynamic event-triggered control. Neurocomputing 2020, 404, 367–380.
13. Tang, Z.; Xuan, D.; Park, J.H.; Zhou, C.; Wang, Y.; Feng, J. Impulsive effects based distributed synchronization of heterogeneous coupled neural networks. IEEE Trans. Netw. Sci. Eng. 2021, 8, 498–510.
14. Wu, Y.; Zhu, J.; Li, W. Intermittent discrete observation control for synchronization of stochastic neural networks. IEEE Trans. Cybern. 2020, 50, 2414–2424.
15. Sun, W.; Zheng, H.; Guo, W.; Xu, Y.; Cao, J.; Abdel-Aty, M.; Chen, S. Quasi-synchronization of heterogeneous dynamical networks via event-triggered impulsive controls. IEEE Trans. Cybern. 2022, 52, 228–239.
16. Ding, S.; Wang, Z.; Zhang, H. Quasi-synchronization of delayed memristive neural networks via region-partitioning-dependent intermittent control. IEEE Trans. Cybern. 2019, 49, 4066–4077.
17. Zhou, Y.; Zeng, Z. Event-triggered impulsive control on quasi-synchronization of memristive neural networks with time-varying delays. Neural Netw. 2019, 110, 55–65.
18. Zhang, J.; Meng, W.; Yin, Y.; Li, Z.; Ma, L.; Liang, W. High-order sliding mode control for three-joint rigid manipulators based on an improved particle swarm optimization neural network. Mathematics 2022, 10, 3418.
19. Chen, W.; Deng, X.; Zheng, W. Sliding mode control for linear uncertain systems with impulse effects via switching gains. IEEE Trans. Autom. Control 2021, 67, 2044–2051.
20. Li, X.; Rao, R.; Zhong, S.; Yang, X.; Li, H.; Zhang, Y. Impulsive control and synchronization for fractional-order hyper-chaotic financial system. Mathematics 2022, 10, 2737.
21. Lu, J.; Wang, Z.; Cao, J.; Ho, D.W.C.; Kurths, J. Pinning impulsive stabilization of nonlinear dynamical networks with time-varying delay. Int. J. Bifurcat. Chaos 2012, 22, 11830–11862.
22. Xie, X.; Liu, X.; Xu, H. Synchronization of delayed coupled switched neural networks: Mode-dependent average impulsive interval. Neurocomputing 2019, 365, 261–272.
23. Lu, J.; Ding, C.; Lou, J.; Cao, J. Outer synchronization of partially coupled dynamical networks via pinning impulsive controllers. J. Frankl. Inst. 2015, 11, 5024–5041.
24. Wen, W.; Du, Y.; Zhong, S.; Xu, J.; Zhou, N. Global asymptotic stability of piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time-varying delays. Adv. Differ. Equ. 2016, 60.
25. Shi, T.; Xu, Q.; Zou, Z.; Shi, Z. Automatic raft labeling for remote sensing images via dual-scale homogeneous convolutional neural network. Remote Sens. 2018, 10, 1130.
26. Liu, M.; Li, Z.; Jiang, H.; Hu, C.; Yu, Z. Exponential synchronization of complex-valued neural networks via average impulsive interval strategy. Neural Process. Lett. 2020, 52, 1377–1394.
27. Yang, X.; Lu, J. Finite-time synchronization of coupled networks with Markovian topology and impulsive effects. IEEE Trans. Autom. Control 2016, 61, 2256–2261.
28. Liu, X.; Xi, H. Quasi-synchronization of Markovian jump complex heterogeneous networks with partly unknown transition rates. Int. J. Control Autom. 2014, 12, 1336–1344.
29. Wang, S.; Hong, L.; Jiang, J. An image encryption scheme using a chaotic neural network and a network with multistable hyperchaos. Optik 2022, 268, 169758.
Figure 1. Black, blue, red, and green lines are described as $\theta h(t)$, $\frac{1}{2} h(t)$, $\phi_k(t)$, and $V(t)$, respectively.
Figure 2. Timing diagram for ETM. (a) Switching ETM depending on communication delays. (b) Switching ETM depending on communication delays and aperiodic sampling.
Figure 3. $x_{ij}(t)$, $i = 1, 2, \ldots, 6$, $j = 1, 2$ under ETM (7).
Figure 4. $e_i(t)$, $i = 1, 2, \ldots, 6$ under ETM (7).
Figure 5. Event-triggered impulsive series under ETM (7).
Figure 6. $x_{ij}(t)$, $i = 1, 2, \ldots, 6$, $j = 1, 2$ under ETM (22).
Figure 7. $e_i(t)$, $i = 1, 2, \ldots, 6$ under ETM (22).
Figure 8. Event-triggered impulsive series under ETM (22).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Li, Y.; Yi, C.; Feng, J.; Wang, J. Event-Based Impulsive Control for Heterogeneous Neural Networks with Communication Delays. Mathematics 2022, 10, 4836. https://doi.org/10.3390/math10244836


