
Alpha Unpredictable Cohen–Grossberg Neural Networks with Poisson Stable Piecewise Constant Arguments

1 Department of Mathematics, Middle East Technical University, Ankara 06800, Turkey
2 Department of Mathematics, K. Zhubanov Aktobe Regional University, Aktobe 030000, Kazakhstan
3 Institute of Information and Computational Technologies, Almaty 050010, Kazakhstan
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(7), 1068; https://doi.org/10.3390/math13071068
Submission received: 17 February 2025 / Revised: 13 March 2025 / Accepted: 24 March 2025 / Published: 25 March 2025
(This article belongs to the Special Issue Artificial Intelligence Applications in Complex Networks)

Abstract
There are three principal novelties in the present investigation. It is the first time that Cohen–Grossberg-type neural networks are considered with the most general delayed and advanced piecewise constant arguments. The model is alpha unpredictable in the sense of electrical inputs and is studied under the conditions of alpha unpredictable and Poisson stable outputs. Thus, the phenomenon of ultra Poincaré chaos, which can be indicated through the analysis of a single motion, is now confirmed for a highly sophisticated neural network. Finally, the approach of pseudo-quasilinear reduction, in its most effective form, is expanded to strong nonlinearities with time switching. The complexity of the discussed model makes it universal and useful for various specific cases. Appropriate examples with simulations that support the theoretical results are provided.

1. Introduction

In this paper, we examine a specific class of recurrent neural networks known as Cohen–Grossberg types, which were introduced by M. Cohen and S. Grossberg in 1983 [1]. These networks have become a foundational model widely applied in fields such as signal processing and optimization. The Cohen–Grossberg model is particularly notable for its stability properties, which arise from the characteristics of the neuron-like components it represents. Significant advancements in this model include modifications to accommodate time delays, impulses, and nonlinear activation functions, thereby greatly enhancing its applicability to complex scenarios. This study first explores Cohen–Grossberg-type neural networks incorporating the most general delay and an advanced piecewise constant argument. The extension further broadens the model’s applicability, allowing for a more comprehensive analysis of complex dynamical behaviors in neural networks.
Neural networks with piecewise constant arguments have become an important tool for modeling systems that exhibit both continuous and discontinuous dynamics. These models provide a simplified yet effective way to represent complex systems, reducing computational effort while maintaining practical relevance to real-world applications such as control systems, biological processes, and signal processing. Differential equations are essential tools for understanding how systems change over time, and equations with piecewise constant arguments have become an important research focus in recent years. Over the years, researchers have developed new approaches to handle systems that combine continuous and discrete behaviors, which has led to the study of differential equations with piecewise constant arguments.
The fundamental ideas behind differential equations with generalized piecewise constant arguments were developed in [2,3]. They were first introduced at the Conference on Differential and Difference Equations at the Florida Institute of Technology, 1–5 August 2005, Melbourne, Florida [4]. These equations have attracted significant attention from multiple disciplines because of their theoretical importance and broad applications in mathematics, biology, engineering, electronics, control theory, and neural networks. They are particularly remarkable as they combine elements of both continuous and discrete dynamical systems [5,6,7,8,9].
A significant aspect of neural network research is the analysis of recurrent oscillations, which include periodic and almost periodic motions. Poisson stable motions are the most complex type of oscillations, and there are relatively few results in this area. In this study, we propose examining a subclass of Poisson stable dynamics that are alpha unpredictable [10,11]. This novel approach allows us to reduce the analysis of chaotic systems, so that traditional methods for verifying the existence and stability of recurrent motions, such as periodic and almost periodic motions, can be applied to complex dynamics. The principles based on alpha unpredictability make the concept convenient for the research of both recurrent motions of models, such as the line of periodicity on which almost periodicity is continued, and chaos [10] (previously known as unpredictability [12]). Alpha unpredictability is strongly associated with ultra Poincaré chaos [11] (formerly Poincaré chaos [12]). Ultra Poincaré chaos is an alternative to all the types, such as Li–Yorke and Devaney chaos, that are based on the Lorenz sensitivity.
Ultra Poincaré chaos [11] is based on the concept of alpha unpredictability, which focuses solely on a single trajectory, while the well-known Lorenz chaos relies on the collective behavior of orbits, including sensitivity, dense cycles, transitivity, proximality, and frequency separation [13,14]. An unpredictable trajectory is naturally positively Poisson stable, and its key feature is the emergence of chaos within the corresponding quasi-minimal set. This concept offers a new perspective in chaos theory while also reconnecting with the core principles of classical dynamics and modern theories of differential and difference equations, emphasizing motion and stability. This novel understanding of motion advances chaos research both theoretically and in practical applications across various scientific and industrial fields. Moreover, it is particularly valuable in neuroscience, where complex dynamics present significant challenges.
It is well established that chaos plays a vital role in neural network applications. In [15,16], the similarity between naturally occurring chaos observed experimentally in the brain’s neural systems and the chaotic behavior of artificial neural networks was examined, emphasizing asymmetry and nonlinearity in a dynamic model. The phenomenon of neural network chaos is closely associated with an improved comprehension of signal and pattern recognition in noisy environments and the processing and transmission of information. The quantity of information transmitted in chaotic dynamic systems was calculated in [17,18]. It was demonstrated that a chaotic neural network possesses the capability for the efficient transmission of any externally received information.
Chaos plays a significant role in cognitive functions related to memory processes in the human brain. Various models of self-organizing hypothetical computers have been proposed, including the synergetic computer [19], resonance neurocomputers [20], the holonic computer [21], and chaotic computing models [22], as well as chaotic information processing and chaotic neural networks [23]. These models attempt to incorporate dynamical system theory to varying degrees, particularly in nonlinear and far-from-equilibrium states of physical systems, as a possible mechanism for explaining how higher animals process information in the brain.
In [24,25], the adaptive synchronization problem of chaotic Cohen–Grossberg neural networks with mixed time delays and discrete delays was extensively studied. These studies focused on developing synchronization schemes that ensure the stability of chaotic neural networks under various delay conditions, providing theoretical proofs and numerical simulations to support their findings. The problem of adaptive synchronization analysis for a class of Cohen–Grossberg neural networks with mixed time delays has been rigorously analyzed using a Lyapunov–Krasovskii functional approach and the invariant principle of functional differential equations. As described in [24], these methods ensure robustness against system uncertainties and external perturbations, making them highly effective for practical applications in neural network synchronization.
Furthermore, when considering chaos synchronization in the case of alpha unpredictable Cohen–Grossberg neural networks, the synchronization mechanism may be based on a specific approach known as delta synchronization of Poincaré chaos [26]. This method provides a framework for analyzing and controlling chaotic dynamics in unpredictable systems by leveraging Poincaré map techniques and delta synchronization principles. Authors have discussed the effectiveness of delta synchronization in scenarios where other synchronization methods may not be effective. They noted that generalized synchronization is a well-known and effective tool; however, a study demonstrated that delta synchronization can be achieved even when generalized synchronization is not present. This finding suggests that delta synchronization is valuable in cases where other conservative synchronization methods fail.
Our studies provide a mathematical justification for chaos and expand the understanding of this phenomenon. In particular, ultra Poincaré chaos is an alternative to Lorenz chaos and demonstrates alpha unpredictability as a characteristic of isolated motion [11]. The practical application of these concepts is especially relevant in neural networks, where mathematical chaos reflects the complexity of the world and has the potential to enhance the study of intellectual activity in the brain. Modern research integrates the theoretical foundations of classical dynamic systems and differential equations with the study of chaos in neural models [10,11,27].
One of the most common types of differential equations with a piecewise constant argument has the following structure [2]:
\[
x'(t) = f\big(t, x(t), x(\gamma(t))\big),
\tag{1}
\]
where γ(t) = ζ_k if t ∈ [θ_k, θ_{k+1}), k ∈ ℤ, t ∈ ℝ, is a piecewise constant function, and ζ_k and θ_k, k ∈ ℤ, are strictly increasing sequences of real numbers, unbounded on the left and on the right, such that θ_k ≤ ζ_k ≤ θ_{k+1} for all k ∈ ℤ. A sketch of the graph of the function is shown in Figure 1.
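For concreteness, an argument function of this form can be evaluated directly from its defining sequences. The sketch below is a minimal illustration, assuming the uniform grid θ_k = k/5 with midpoints ζ_k = (2k + 1)/10 (the sequences used later in the example section); any strictly increasing sequences with θ_k ≤ ζ_k ≤ θ_{k+1} would serve equally well.

```python
import math

# A minimal sketch of the piecewise constant argument gamma(t) = zeta_k for
# t in [theta_k, theta_{k+1}).  The uniform grid theta_k = k/5 with midpoints
# zeta_k = (2k + 1)/10 is an assumption borrowed from the example section;
# any strictly increasing sequences with theta_k <= zeta_k <= theta_{k+1} work.

def gamma(t: float) -> float:
    k = math.floor(5.0 * t)        # index k with theta_k <= t < theta_{k+1}
    return (2 * k + 1) / 10.0      # zeta_k, the midpoint of the interval
```

Note that with the midpoint choice the argument is of mixed type: in the first half of each interval, γ(t) lies ahead of t (advanced), and in the second half it lags behind (delayed).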
The development of differential equations with piecewise constant arguments [4], along with subsequent insights from [2,3], has been an important step in understanding models with deviated arguments. Two significant advancements have emerged from this research. First, reducing these systems to equivalent integral equations has proven effective, enabling the application of well-established tools from differential equations. Second, the time intervals on which the argument stays constant can be freely chosen. This flexibility allows for better adaptation to specific needs, such as optimization, accuracy, or modeling real-world scenarios in physics and biology, and it lets us study models with natural nonlinearity, which traditional methods for hybrid systems often cannot handle. As a result, this approach enhances our understanding of real-world processes and expands the analytical scope, significantly strengthening methodological capabilities. It is now recognized that these models belong to the broader class of functional differential equations, occupying an intermediate position between ordinary and functional differential equations. This makes them particularly important for the mathematical modeling of phenomena in neuroscience.
To enhance applicability, a new class of compartmental functions was recently introduced [10]. They are formed by combining Poisson stable or alpha unpredictable functions with periodic or quasi-periodic functions and are called compartmental Poisson stable or compartmental alpha unpredictable functions. They combine theoretical features of regular and irregular functions, which strongly supports their appearance in real-world problems.
Given the growing interest in neural networks with piecewise constant arguments, this study aimed to develop the theoretical framework further and explore new dynamical properties in these models. By integrating concepts such as Poisson stability and alpha unpredictability, we extend previous research [28] and provide a more comprehensive understanding of the interplay between continuous and discrete dynamics in neural networks. Our findings contribute to both these models’ theoretical advancement and practical application, offering new perspectives on complex dynamical behaviors in neuroscience, control systems, and beyond.

Description of the Model

We investigate the following Cohen–Grossberg neural network:
\[
x_i'(t) = a_i(x_i(t))\Big[-b_i(x_i(t)) - c_i\big(x_i(\gamma(t))\big) + \sum_{j=1}^{p} d_{ij}(t) f_j\big(x_j(t)\big) + \sum_{j=1}^{p} e_{ij}(t) g_j\big(x_j(\gamma(t))\big) + w_i(t)\Big],
\tag{2}
\]
where t, x_i ∈ ℝ; x_i(t), i = 1, …, p, corresponds to the state of the ith neuron at time moment t; γ(t) is a piecewise constant argument function; p is the number of neurons in the network; a_i(x_i(t)), i = 1, …, p, is an amplification function; b_i(x_i(t)) and c_i(x_i(γ(t))), i = 1, …, p, are components of the rates with which the ith neuron self-adjusts or resets its potential when isolated from other neurons and inputs; d_{ij}(t) and e_{ij}(t), i, j = 1, 2, …, p, are the strengths of connectivity between cells j and i at moment t; f_j and g_j, j = 1, 2, …, p, are activation functions; and w_i(t), i = 1, 2, …, p, is an external input introduced to cell i from outside the network. The functions b_i(s) and c_i(s), i = 1, 2, …, p, are continuous, and the strengths e_{ij}(t) and d_{ij}(t), as well as the inputs w_i(t), i, j = 1, 2, …, p, are alpha unpredictable functions.
A key feature of this model is the set of coefficients that define the rates and the activation functions, which we consider as sums of terms with continuous time and with piecewise constant arguments. Specifically, we add the components c_i(x_i(γ(t))) and g_j(x_j(γ(t))) to the coefficients b_i(x_i(t)) and f_j(x_j(t)). We believe that this approach will allow for a more effective exploitation of models that describe complex biological processes, such as ecological interactions or population dynamics. By incorporating aspects like population growth, reproduction delays, metamorphosis, or other stages of an organism's life cycle, we aim to better capture the complexities of these processes [29,30,31]. With our suggestions, we significantly increase the application capabilities of the model, and in this paper, we apply it to the appearance of chaos.
Moreover, the Cohen–Grossberg neural network model we are examining possesses a universal nature, as it can encompass a variety of specific cases by choosing its parameters. For instance, by setting certain parameters such as rates and the activation functions, c i ( x i ( γ ( t ) ) ) = 0 and g j ( x j ( γ ( t ) ) ) = 0 , the model simplifies to a standard framework [1,28]. Additionally, if we set c i ( x i ( γ ( t ) ) ) = 0 and in g j ( x j ( γ ( t ) ) ) replace the piecewise constant argument with time delays, the model represents another special case of a generalized argument [32]. In this case, the system captures the dynamics governed by time deviations rather than the original argument structure. For such configurations, the existence and behavior of periodic and almost periodic solutions have been thoroughly investigated in prior research [33,34,35,36,37]. These studies provide a foundation for understanding the diverse solution properties within the broader framework of the Cohen–Grossberg neural network model.

2. Preliminaries

In this section, the basic definitions are provided, a description of the discontinuous deviated argument is given, and the reduction to the pseudo-quasilinear model is performed.
First of all, let us fix the classical notations for the natural, integer, and real numbers: ℕ, ℤ, and ℝ, respectively. Throughout this paper, we utilize the norm ‖v‖ = max_{1≤i≤p} |v_i|, where |·| denotes the absolute value, v = (v_1, …, v_p), and v_i ∈ ℝ.
Definition 1 
([38]). A sequence κ_i, i ∈ ℤ, κ_i ∈ ℝ, is called Poisson stable provided that it is bounded and there exists a sequence l_n, n ∈ ℕ, of positive integers such that κ_{i+l_n} → κ_i as n → ∞ on bounded intervals of integers.
Definition 2 
([38]). A uniformly continuous and bounded function u: ℝ → ℝ^p is Poisson stable if there exists a sequence t_n that diverges to infinity such that u(t + t_n) → u(t) as n → ∞ uniformly on compact subsets of ℝ.
Definition 3 
([10]). A uniformly continuous and bounded function v: ℝ → ℝ^p is alpha unpredictable if there exist positive numbers ϵ_0, σ and sequences t_n and s_n, both of which diverge to infinity, such that the following hold:
  • v(t + t_n) → v(t) as n → ∞ uniformly on compact subsets of ℝ;
  • ‖v(t + t_n) − v(t)‖ ≥ ϵ_0 for each t ∈ [s_n − σ, s_n + σ] and n ∈ ℕ.
Fix sequences of real numbers t_n, θ_k, and ξ_k, n ∈ ℕ, k ∈ ℤ, which are strictly increasing with respect to their indices. The sequences θ_k and ξ_k, k ∈ ℤ, are unbounded in both directions. Moreover, they satisfy θ̲ ≤ θ_{k+1} − θ_k ≤ θ̄ with positive numbers θ̲ and θ̄.
Definitions of the Poisson couple and triple are presented below.
Definition 4 
([12]). A couple (t_n, θ_k) of sequences t_n and θ_k, n ∈ ℕ, k ∈ ℤ, is called a Poisson couple if there exists a sequence l_n, n ∈ ℕ, that diverges to infinity such that
\[
\theta_{k+l_n} - t_n - \theta_k \to 0 \quad \text{as} \quad n \to \infty,
\tag{3}
\]
uniformly on each bounded interval of integers k.
Definition 5 
([12]). A triple (t_n, θ_k, ξ_k) of the sequences t_n, θ_k, and ξ_k, n ∈ ℕ, k ∈ ℤ, is called a Poisson triple if there exists a sequence l_n, n ∈ ℕ, of integers that diverges to infinity such that condition (3) is satisfied and
\[
\xi_{k+l_n} - t_n - \xi_k \to 0 \quad \text{as} \quad n \to \infty,
\tag{4}
\]
uniformly on each bounded interval of integers k.

2.1. The Delay-Advanced Piecewise Constant Argument

We aim to determine the argument function in (2) by considering its general characteristics, as outlined in [2]. Specifically, we assume that γ(t) = ξ_k if θ_k ≤ t < θ_{k+1}, k ∈ ℤ. The function is defined over the entire real line and is associated with two Poisson sequences θ_k and ξ_k, k ∈ ℤ, such that θ̲ ≤ θ_{k+1} − θ_k ≤ θ̄ for some positive numbers θ̲ and θ̄, applicable to all integer values k. A prototype of the function is presented by the graph in Figure 1.
Alongside θ_k and ξ_k, we fix a sequence t_n, n ∈ ℕ, forming a Poisson triple (t_n, θ_k, ξ_k), as per Definition 5. By analyzing γ(t + t_n) for a fixed n, one can show that, for sufficiently large n, the function takes the form γ(t + t_n) = ξ_{k+l_n} if θ'_k ≤ t < θ'_{k+1}, where θ'_k = θ_{k+l_n} − t_n, k ∈ ℤ.
Our goal is now to show that this discontinuous argument function possesses a property known as discontinuous Poisson stability.
Consider a bounded interval [a, b] with b > a, and choose a positive number ϵ such that 2ϵ < θ̲. Without loss of generality, we assume θ_k ≤ θ'_k, where θ'_k = θ_{k+l_n} − t_n, and analyze the discontinuity points θ_k, k = l+1, l+2, …, l+m−1, within [a, b], satisfying
\[
\theta_l \le a < \theta_{l+1} < \theta_{l+2} < \dots < \theta_{l+m-1} < b \le \theta_{l+m}.
\]
It follows that for sufficiently large values of n, the condition
\[
|\theta'_k - \theta_k| < \epsilon
\tag{5}
\]
holds for all k = l, l+1, …, l+m, and that
\[
|\gamma(t + t_n) - \gamma(t)| < \epsilon
\tag{6}
\]
for any t ∈ [a, b], except in cases where t lies between θ_k and θ'_k for a given k. Now, fixing k in l, l+1, …, l+m, we observe that γ(t) = ξ_k for t ∈ [θ_k, θ_{k+1}) and γ(t + t_n) = ξ_{k+l_n} for t ∈ [θ'_k, θ'_{k+1}). For sufficiently large n, the interval (θ'_k, θ_{k+1}) is non-empty, and condition (5) holds as a result of (3). Furthermore, from condition (4), it follows that for sufficiently large n,
\[
|\gamma(t + t_n) - \gamma(t)| = |\xi_{k+l_n} - \xi_k| < \epsilon
\]
for all t ∈ [θ'_k, θ_{k+1}). Thus, inequalities (5) and (6) are established.
Since conditions (5) and (6) are valid for arbitrary ϵ , they are sufficient to establish that the piecewise constant function γ ( t + t n ) converges to γ ( t ) on the bounded interval in the B-topology [39]. In other words, γ ( t ) is a discontinuous Poisson stable function [12].

2.2. Reduction to the Pseudo-Quasilinear System

We will consider a solution x(t) of model (2), which is determined on the real axis, and assume that it is bounded such that sup_{t∈ℝ} ‖x(t)‖ < H_0, where H_0 is a fixed positive number.
To construct an environment for the research, we will use the following notation:
\[
m_{ij}^d = \sup_{t \in \mathbb{R}} |d_{ij}(t)|, \quad m_{ij}^e = \sup_{t \in \mathbb{R}} |e_{ij}(t)|, \quad m_j^f = \sup_{|s| < H_0} |f_j(s)|, \quad m_j^g = \sup_{|s| < H_0} |g_j(s)|, \quad m_i^w = \sup_{t \in \mathbb{R}} |w_i(t)|
\]
for each i, j = 1, 2, …, p.
The following conditions are required:
(C1)
Each function a_i(s), i = 1, 2, …, p, is continuous, and there exist constants a̲_i > 0 and ā_i > 0 such that a̲_i ≤ a_i(s) ≤ ā_i for all i = 1, 2, …, p, and |s| < H_0.
(C2)
Components b_i(s) and c_i(s), i = 1, 2, …, p, are Lipschitzian: that is, |b_i(s_1) − b_i(s_2)| ≤ L_i^b |s_1 − s_2| and |c_i(s_1) − c_i(s_2)| ≤ L_i^c |s_1 − s_2| with positive constants L_i^b and L_i^c for all i = 1, 2, …, p, |s_1| < H_0, and |s_2| < H_0.
(C3)
Components e i j ( t ) , d i j ( t ) , and w i ( t ) , i , j = 1 , 2 , , p , are alpha unpredictable functions with the common sequences of convergence, t n , and separation, s n , n N .
(C4)
Functions f_i(s) and g_i(s), i = 1, 2, …, p, are Lipschitzian: that is, |f_i(s_1) − f_i(s_2)| ≤ L_i^f |s_1 − s_2| and |g_i(s_1) − g_i(s_2)| ≤ L_i^g |s_1 − s_2| if |s_1| < H_0 and |s_2| < H_0, with positive constants L_i^f and L_i^g.
(C5)
\[
\bar{\theta}\Big[\bar{a}_i\Big(L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f\Big)\Big(1 + \bar{a}_i L_i^c \bar{\theta} + \bar{a}_i \sum_{j=1}^{p} m_{ij}^e L_i^g \bar{\theta}\Big) e^{\bar{a}_i (L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f)\bar{\theta}} + \bar{a}_i\Big(L_i^c + \sum_{j=1}^{p} m_{ij}^e L_i^g\Big)\Big] < 1 \quad \text{for all } i = 1, 2, \dots, p.
\]
According to (C1), for every i = 1, 2, …, p, it is possible to define the function h_i(s) = ∫_0^s dτ/a_i(τ), h_i(0) = 0. Obviously, h_i'(s) = 1/a_i(s). Because of the condition a_i(s) > 0, one can see that the function h_i(s) is increasing, and the inverse function h_i^{−1}(s), h_i^{−1}(0) = 0, exists and is continuous and differentiable. Moreover, (h_i^{−1})'(s) = a_i(h_i^{−1}(s)), where (h_i^{−1})' is the derivative of the function h_i^{−1}(s).
By the mean value theorem, one can find that
\[
|h_i^{-1}(s_1) - h_i^{-1}(s_2)| = |(h_i^{-1})'(\zeta)|\,|s_1 - s_2| = |a_i(h_i^{-1}(\zeta))|\,|s_1 - s_2|, \quad i = 1, 2, \dots, p,
\]
for |s_1| < H_0 and |s_2| < H_0 and some ζ between s_1 and s_2. For this reason, the function h_i^{−1}(s) fulfills the Lipschitz condition
\[
\underline{a}_i |s_1 - s_2| \le |h_i^{-1}(s_1) - h_i^{-1}(s_2)| \le \bar{a}_i |s_1 - s_2|, \quad i = 1, 2, \dots, p,
\]
if |s_1| < H_0 and |s_2| < H_0.
Now, the new variables
\[
y_i(t) = h_i(x_i(t)), \quad i = 1, 2, \dots, p,
\tag{9}
\]
are proposed, which reduce the highly nonlinear model to a pseudo-quasilinear shape.
One can obtain that y_i'(t) = h_i'(x_i(t)) x_i'(t) = x_i'(t)/a_i(x_i(t)) and x_i(t) = h_i^{−1}(y_i(t)). To satisfy the inequality ‖x(t)‖ < H_0, the motion in the new variables must obey the relation ‖y(t)‖ < H, where H = H_0 / max_i ā_i and t ∈ ℝ.
Consequently, for the piecewise constant argument function x_i(γ(t)), it is clear that
\[
y_i(\gamma(t)) = h_i(x_i(\gamma(t)))
\tag{10}
\]
and x_i(γ(t)) = h_i^{−1}(y_i(γ(t))), i = 1, 2, …, p.
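The change of variables can be checked numerically. The sketch below is an illustration under the assumption a_i(s) = cos(0.2 s) (the first amplification function from the example section); it computes h_i by quadrature and inverts it by bisection, confirming that h_i is increasing and that x_i = h_i^{−1}(y_i) recovers the original state.

```python
import math

# Sketch of the reduction variables y = h(x), with h(s) = int_0^s d(tau)/a(tau).
# a(s) = cos(0.2 s) is an assumption for illustration (positive on |s| < H0),
# so h is strictly increasing and invertible there.

def h(s: float, a=lambda u: math.cos(0.2 * u), n: int = 2000) -> float:
    # composite midpoint rule for int_0^s du / a(u)
    step = s / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * step
        total += step / a(u)
    return total

def h_inv(y: float, lo: float = -7.0, hi: float = 7.0, tol: float = 1e-10) -> float:
    # bisection on the increasing function h
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since 1/a(u) ≥ 1 on the bracket, h stretches the state: h(1.0) exceeds 1, and the bisection recovers the original argument to the prescribed tolerance.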
Using (9) and (10) in system (2), we obtain
\[
y_i'(t) = -\big[u_i(y_i(t)) + v_i(y_i(\gamma(t)), y_i(t))\big]\,y_i(t) + \sum_{j=1}^{p} d_{ij}(t) f_j\big(h_j^{-1}(y_j(t))\big) + \sum_{j=1}^{p} e_{ij}(t) g_j\big(h_j^{-1}(y_j(\gamma(t)))\big) + w_i(t),
\tag{11}
\]
where u_i(y_i(t)) = b_i(h_i^{−1}(y_i(t)))/y_i(t) and v_i(y_i(γ(t)), y_i(t)) = c_i(h_i^{−1}(y_i(γ(t))))/y_i(t), i = 1, 2, …, p. Because of the structure of its right-hand side, the last model can be called a pseudo-quasilinear system of differential equations.
A function y(t) = (y_1(t), …, y_p(t)) is considered a bounded solution of Equation (11) if and only if it satisfies the corresponding integral equation
\[
y_i(t) = \int_{-\infty}^{t} e^{-\int_s^t [u_i(y_i(\varsigma)) + v_i(y_i(\gamma(\varsigma)), y_i(\varsigma))] \, d\varsigma} \Big[\sum_{j=1}^{p} d_{ij}(s) f_j\big(h_j^{-1}(y_j(s))\big) + \sum_{j=1}^{p} e_{ij}(s) g_j\big(h_j^{-1}(y_j(\gamma(s)))\big) + w_i(s)\Big] ds,
\]
with i = 1, …, p [40].
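To see how the integral representation produces a bounded solution, consider the scalar constant-coefficient toy case u_i ≡ u, v_i ≡ v with a bounded input w(t); the fixed point is then given directly by the convolution integral. The sketch below is an illustration under these simplifying assumptions, truncating the improper integral at t − T.

```python
import math

# Toy scalar version of the integral equation (constant u, v > 0 assumed):
#   y(t) = int_{-inf}^{t} e^{-(u+v)(t-s)} w(s) ds,
# approximated by truncating the integral to [t - T, t] (the tail decays
# like e^{-(u+v)T}) and applying the composite midpoint rule.

def bounded_solution(w, u: float = 1.0, v: float = 0.5,
                     t: float = 5.0, T: float = 20.0, n: int = 20000) -> float:
    step = T / n
    total = 0.0
    for i in range(n):
        s = t - T + (i + 0.5) * step
        total += math.exp(-(u + v) * (t - s)) * w(s) * step
    return total
```

For a constant input w ≡ c, the integral evaluates to c/(u+v) up to the truncation error, which gives a quick sanity check on the quadrature.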

3. Main Result

Introduce the set P of p-dimensional functions φ: ℝ → ℝ^p, φ = (φ_1, φ_2, …, φ_p), with the norm ‖φ‖_1 = sup_{t∈ℝ} ‖φ(t)‖, such that the following hold:
(P1)
There exists a positive number H such that ‖φ‖_1 < H for all φ(t) ∈ P;
(P2)
The functions of P are Poisson stable with the common convergence sequence t_n, n = 1, 2, ….
Suppose that the conditions below hold for every i = 1 , 2 , , p :
(C6)
There exist constants m_i > 0, n_i > 0, M_i > 0, and N_i > 0 such that 0 < m_i ≤ u_i(s) ≤ M_i and 0 < n_i ≤ v_i(s) ≤ N_i;
(C7)
There exist positive numbers L_i^u and L_i^v such that |u_i(s_1) − u_i(s_2)| ≤ L_i^u |s_1 − s_2| and |v_i(s_1) − v_i(s_2)| ≤ L_i^v |s_1 − s_2| for all |s_1| < H and |s_2| < H;
(C8)
\[
\frac{1}{m_i + n_i}\Big(\sum_{j=1}^{p} m_{ij}^d m_j^f + \sum_{j=1}^{p} m_{ij}^e m_j^g + m_i^w\Big) < H;
\]
(C9)
\[
\frac{L_i^u + L_i^v}{(m_i + n_i)^2}\Big(\sum_{j=1}^{p} m_{ij}^d m_j^f + \sum_{j=1}^{p} m_{ij}^e m_j^g\Big) + \frac{1}{m_i + n_i}\Big(\sum_{j=1}^{p} m_{ij}^d L_j^f \bar{a}_j + \sum_{j=1}^{p} m_{ij}^e L_j^g \bar{a}_j\Big) < 1.
\]
The following notations and lemma are required to prove the main results of this paper.
Denote \(\widetilde{K}_i = K_i \bar{a}_i / \underline{a}_i\), where
\[
K_i = \Big(1 - \bar{\theta}\Big[\bar{a}_i\Big(L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f\Big)\Big(1 + \bar{a}_i L_i^c \bar{\theta} + \bar{a}_i \sum_{j=1}^{p} m_{ij}^e L_i^g \bar{\theta}\Big) e^{\bar{a}_i (L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f)\bar{\theta}} + \bar{a}_i\Big(L_i^c + \sum_{j=1}^{p} m_{ij}^e L_i^g\Big)\Big]\Big)^{-1}.
\]
The proofs for all statements discussed in this work are provided in the Appendix A.
Lemma 1. 
Assume that conditions (C1)–(C7) hold true. If y_i(t), i = 1, 2, …, p, is a motion of model (11), then the inequality
\[
|y_i(\gamma(t))| \le K_i\,|y_i(t)|, \quad t \in \mathbb{R},
\]
is valid.
Define on P the operator Π such that Πφ(t) = (Π_1φ_1(t), Π_2φ_2(t), …, Π_pφ_p(t)), where
\[
\Pi_i \varphi_i(t) = \int_{-\infty}^{t} e^{-\int_s^t [u_i(\varphi_i(\varsigma)) + v_i(\varphi_i(\gamma(\varsigma)), \varphi_i(\varsigma))] \, d\varsigma} \Big[\sum_{j=1}^{p} d_{ij}(s) f_j\big(h_j^{-1}(\varphi_j(s))\big) + \sum_{j=1}^{p} e_{ij}(s) g_j\big(h_j^{-1}(\varphi_j(\gamma(s)))\big) + w_i(s)\Big] ds.
\]
Lemma 2. 
ΠP ⊆ P.
Lemma 3. 
The operator Π is a contraction on P .
Theorem 1. 
If conditions (C1)–(C9) are fulfilled, then there exists a unique Poisson stable motion of network (2).
The following theorem establishes the existence of a unique alpha unpredictable solution for neural networks (2).
Theorem 2. 
Under the assumption that conditions (C1)–(C9) hold, the Cohen–Grossberg neural network (2) has a unique alpha unpredictable solution.
Denote L^u = max_i L_i^u, L^v = max_i L_i^v, m = min_i m_i, n = min_i n_i, m^d = max_i Σ_{j=1}^{p} m_{ij}^d, m^e = max_i Σ_{j=1}^{p} m_{ij}^e, m^f = max_j m_j^f, m^g = max_j m_j^g, L^f = max_i L_i^f, L^g = max_i L_i^g, m^w = max_i m_i^w, ā = max_i ā_i, and K = max_i K_i ā_i/a̲_i, and let δ be a positive number such that δ < m + n.
The following condition is essential for proving exponential stability.
(C10)
\[
\frac{H(L^u + K L^v)}{\delta} + \frac{(L^u + K L^v)\big(m^d m^f + m^e m^g + m^w\big)}{\delta^2 (m + n - \delta)} + \frac{1}{m + n} + \frac{\big(m^d L^f + K m^e L^g\big)\bar{a}}{m + n - \delta} < 1.
\]
Theorem 3. 
Assume that conditions (C1)–(C10) are satisfied. Then, the Poisson stable and alpha unpredictable solutions of the Cohen–Grossberg neural network (2) approved in Theorems 1 and 2, respectively, are exponentially stable.

4. An Example

Let us start by constructing alpha unpredictable functions. In [11], it was proved that the logistic equation
\[
\pi_{k+1} = \eta\,\pi_k (1 - \pi_k)
\]
admits an alpha unpredictable solution τ_k, k ∈ ℤ, if η ∈ [3 + (2/3)^{1/2}, 4]. That is, there exist sequences l_n → ∞ and m_n → ∞ as n → ∞ and a positive number ϵ_0 such that |τ_{k+l_n} − τ_k| → 0 as n → ∞ for each k in bounded intervals of integers, and |τ_{l_n+m_n} − τ_{m_n}| ≥ ϵ_0 for each n ∈ ℕ.
Using the function Ω(t) = τ_k for t ∈ (k, k+1], we construct the integral function Θ(t) = ∫_{−∞}^{t} e^{−3(t−s)} Ω(s) ds. The function Θ(t) is bounded on ℝ. The Poisson stability of the function is proved using the method of included intervals, and the alpha unpredictability is also established as in [10].
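The construction of Θ(t) can be reproduced numerically. The following sketch is an illustration, assuming η = 3.91 and the initial value τ_0 = 0.4 (both hypothetical choices consistent with the stated range) and truncating the improper integral to the last N unit intervals; since Ω is constant on each (k, k+1], the exponential factor integrates in closed form there.

```python
import math

# Sketch: iterate the logistic map pi_{k+1} = eta*pi_k*(1 - pi_k) and build
# Theta(t) = int_{-inf}^{t} e^{-3(t-s)} Omega(s) ds with Omega(t) = tau_k on (k, k+1].
# eta = 3.91 and tau0 = 0.4 are illustrative assumptions; the orbit index is
# shifted to start at k = 0, so this sketch only handles t >= 0.

def theta(t: float, eta: float = 3.91, tau0: float = 0.4, N: int = 30) -> float:
    K = math.floor(t)
    tau, orbit = tau0, []
    for _ in range(K + 1):                  # tau_k for k = 0, ..., K
        orbit.append(tau)
        tau = eta * tau * (1.0 - tau)
    total = 0.0
    for k in range(max(0, K - N), K):       # full intervals (k, k+1]
        # Omega = tau_k is constant there, so the factor integrates exactly:
        total += orbit[k] * (math.exp(-3.0 * (t - k - 1)) - math.exp(-3.0 * (t - k))) / 3.0
    total += orbit[K] * (1.0 - math.exp(-3.0 * (t - K))) / 3.0   # partial piece (K, t]
    return total
```

Since 0 < Ω(t) < 1, the construction keeps Θ bounded between 0 and 1/3, in line with the boundedness claimed above.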
While working on new types of recurrence, we were surprised to discover that, despite extensive literature, there is a lack of numerical examples and simulations for both the functions and solutions of differential equations. Meanwhile, the demands of industries, particularly in fields such as neuroscience, artificial intelligence, and other modern domains, require numerical representations of motions that are already well supported by theoretical frameworks. Our research addresses these challenges comprehensively, as we are the first to construct samples of Poisson stable and alpha unpredictable functions using solutions to the logistic equation.
The following analysis of a neural network confirms the results and, at the same time, illustrates the theorems of this paper.
Consider the following Cohen–Grossberg neural network with a piecewise constant argument:
\[
x_i'(t) = a_i(x_i(t))\Big[-b_i(x_i(t)) - c_i\big(x_i(\gamma(t))\big) + \sum_{j=1}^{3} d_{ij}(t) f_j\big(x_j(t)\big) + \sum_{j=1}^{3} e_{ij}(t) g_j\big(x_j(\gamma(t))\big) + w_i(t)\Big],
\tag{15}
\]
where i = 1, 2, 3. The following functions are used as activations: f(x_i(t)) = 0.2 tanh(x_i(t)/6) and g(x_i(γ(t))) = 0.5 tanh(x_i(γ(t))/4). The amplification functions are a_1(x_1(t)) = cos(0.2 x_1(t)), a_2(x_2(t)) = cos(0.25 x_2(t)), and a_3(x_3(t)) = cos(0.1 x_3(t)). The rates are given by b_1(x_1(t)) = 3 sin(0.2 x_1(t)), b_2(x_2(t)) = 2 sin(0.5 x_2(t)), b_3(x_3(t)) = 4 sin(0.4 x_3(t)), c_1(x_1(γ(t))) = sin(0.4 x_1(γ(t))), c_2(x_2(γ(t))) = 2 sin(0.1 x_2(γ(t))), and c_3(x_3(γ(t))) = sin(0.2 x_3(γ(t))). Neurons are connected through the following measures of connectivity strength: d_11(t) = 0.2 Θ(t), d_12(t) = 0.5 Θ(t), d_13(t) = 0.1 Θ(t), d_21(t) = 0.1 Θ(t), d_22(t) = 0.2 Θ(t), d_23(t) = 0.1 Θ(t), d_31(t) = 0.4 Θ(t), d_32(t) = 0.1 Θ(t), d_33(t) = 0.2 Θ(t), e_11(t) = 0.1 Θ(t), e_12(t) = 0.2 Θ(t), e_13(t) = 0.1 Θ(t), e_21(t) = 0.3 Θ(t), e_22(t) = 0.2 Θ(t), e_23(t) = 0.5 Θ(t), e_31(t) = 0.6 Θ(t), e_32(t) = 0.1 Θ(t), and e_33(t) = 0.2 Θ(t). The external inputs are w_1(t) = 1.5 Θ(t), w_2(t) = 2.5 Θ(t), and w_3(t) = 2 Θ(t).
The piecewise constant argument function $\gamma(t)$ is defined by the sequences $\theta_k = k/5$ and $\xi_k = (2k+1)/10$, $k \in \mathbb{Z}$, and $t_n = n\omega$, $n \in \mathbb{N}$, $\omega \in \mathbb{R}$, which constitute the Poisson triple [10,12].
Calculate $H_0 = 2$, $\bar a_1 = 0.921$, $\bar a_2 = 0.878$, $\bar a_3 = 0.980$, $m_1^f = m_2^f = m_3^f = 0.2$, $m_1^g = m_2^g = m_3^g = 0.5$, $L_1^f = L_2^f = L_3^f = 0.05$, $L_1^g = L_2^g = L_3^g = 0.06$, $\sum_{j=1}^{3} m_{1j}^d = \sum_{j=1}^{3} m_{2j}^d = \sum_{j=1}^{3} m_{3j}^d = \sum_{j=1}^{3} m_{1j}^e = \sum_{j=1}^{3} m_{2j}^e = \sum_{j=1}^{3} m_{3j}^e = 3/20$, $m_1^w = 0.1$, $m_2^w = 0.013$, and $m_3^w = 0.05$.
Taking into account that $h_i(s) = \int_0^s \frac{1}{a_i(\varsigma)}\,d\varsigma$, $i = 1, 2, 3$, one can find $h_1^{-1}(s) = \int_0^s \cos(0.2\varsigma)\,d\varsigma = 5\sin(0.2 s)$, $h_2^{-1}(s) = \int_0^s \cos(0.25\varsigma)\,d\varsigma = 4\sin(0.25 s)$, and $h_3^{-1}(s) = \int_0^s \cos(0.1\varsigma)\,d\varsigma = 10\sin(0.1 s)$. We have that $u_i(y_i(t)) = b_i(h_i^{-1}(y_i(t)))/y_i(t)$ and $v_i(y_i(\gamma(t))) = c_i(h_i^{-1}(y_i(\gamma(t))))/y_i(t)$, $i = 1, 2, 3$, such that $u_1(y_1(t)) = 3\sin(\sin(0.2 y_1(t)))/y_1(t)$, $u_2(y_2(t)) = 2\sin(2\sin(0.25 y_2(t)))/y_2(t)$, $u_3(y_3(t)) = 4\sin(4\sin(0.1 y_3(t)))/y_3(t)$, $v_1(y_1(\gamma(t))) = \sin(2\sin(0.2 y_1(\gamma(t))))/y_1(t)$, $v_2(y_2(\gamma(t))) = 2\sin(0.4\sin(0.25 y_2(\gamma(t))))/y_2(t)$, and $v_3(y_3(\gamma(t))) = \sin(2\sin(0.1 y_3(\gamma(t))))/y_3(t)$. Thus, we obtain that $0.57 \le m_1 \le 1.5$, $0.82 \le m_2 \le 1$, $1.43 \le m_3 \le 2$, $0.43 \le n_1 \le 0.5$, $0.82 \le n_2 \le 1$, and $0.39 \le n_3 \le 0.5$. The functions $u_i(s)$ and $v_i(s)$, $i = 1, 2, 3$, satisfy the Lipschitz condition with $L_1^u = 0.03$, $L_2^u = 0.02$, and $L_3^u = 0.04$ and $L_1^v = 0.01$, $L_2^v = 0.02$, and $L_3^v = 0.01$. Conditions (C1)–(C10) hold for the functions and constants described above. Assumption (C11) is valid given that $H = 2.04$, $L^u = 0.02$, $\delta = 0.5$, $m = 0.57$, $n = 0.539$, $m^d = m^e = 3/20$, $m^f = 0.2$, $m^g = 0.5$, $L^f = 0.05$, $L^g = 0.06$, and $\bar a = 0.98$. According to Theorem 3, there exists a unique alpha unpredictable solution, $x(t) = (x_1(t), x_2(t), x_3(t))$, of neural network (15). In Figure 2, the coordinates (a) and trajectory (b) of a solution $\psi(t) = (\psi_1(t), \psi_2(t), \psi_3(t))$, which exponentially converges to the alpha unpredictable solution $x(t)$, are shown.
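Although the paper's own simulations are not reproduced here, the example network can be integrated with a simple scheme. The sketch below is a hedged illustration, not the authors' code: the Poisson stable input $\Theta(t)$ is replaced by a bounded quasi-periodic stand-in (an assumption), the equations follow the sign convention of the system displayed above, and a forward-Euler step is used, with the advanced part of the deviated argument $\gamma(t)$ approximated by the most recent computed state.

```python
import numpy as np

def Theta(t):
    # Hypothetical stand-in for the Poisson stable input Theta(t); in the
    # paper, Theta is generated from a solution of the logistic equation.
    return 0.5 * (np.sin(t) + np.sin(np.sqrt(2.0) * t))

# Coefficients of the example network; a_2 uses cos(0.25 x), consistent
# with h_2^{-1}(s) = 4 sin(0.25 s) in the text.
amp = [lambda x: np.cos(0.2 * x), lambda x: np.cos(0.25 * x), lambda x: np.cos(0.1 * x)]
b = [lambda x: 3.0 * np.sin(0.2 * x), lambda x: 2.0 * np.sin(0.5 * x), lambda x: 4.0 * np.sin(0.4 * x)]
c = [lambda x: np.sin(0.4 * x), lambda x: 2.0 * np.sin(0.1 * x), lambda x: np.sin(0.2 * x)]
D = np.array([[0.2, 0.5, 0.1], [0.1, 0.2, 0.1], [0.4, 0.1, 0.2]])
E = np.array([[0.1, 0.2, 0.1], [0.3, 0.2, 0.5], [0.6, 0.1, 0.2]])
w = np.array([1.5, 2.5, 2.0])

def f(x):
    return 0.2 * np.tanh(x / 6.0)

def g(x):
    return 0.5 * np.tanh(x / 4.0)

def gamma(t):
    # theta_k = k/5, xi_k = (2k+1)/10: gamma(t) is frozen at the midpoint
    # xi_k on each interval [k/5, (k+1)/5), so the deviation alternates
    # between advanced (t < xi_k) and delayed (t > xi_k).
    k = np.floor(5.0 * t)
    return (2.0 * k + 1.0) / 10.0

def simulate(x0, T=5.0, dt=1e-3):
    """Forward-Euler sketch of the example system. Where gamma(t) runs
    ahead of t, the not-yet-computed value is approximated by the most
    recent available state -- a crude but common discretization."""
    n = int(round(T / dt))
    xs = np.empty((n + 1, 3))
    xs[0] = x0
    for step in range(n):
        t = step * dt
        kg = min(int(float(gamma(t)) / dt + 0.5), step)  # history index for x(gamma(t))
        x, xg, th = xs[step], xs[kg], Theta(t)
        for i in range(3):
            rhs = (-b[i](x[i]) - c[i](xg[i])
                   + sum(D[i, j] * th * f(x[j]) for j in range(3))
                   + sum(E[i, j] * th * g(xg[j]) for j in range(3))
                   + w[i] * th)
            xs[step + 1, i] = x[i] + dt * amp[i](x[i]) * rhs
    return xs
```

With the true unpredictable input $\Theta(t)$, trajectories started from different initial values converge exponentially to the same alpha unpredictable motion, which is the behavior shown in Figure 2.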

5. Conclusions

Chaos is a crucial aspect of neural dynamics, influencing cognitive functions such as memory formation, learning, and decision making. In neural networks, chaotic behavior allows for enhanced adaptability, improved pattern recognition, and efficient information processing, closely resembling the dynamics observed in biological brains. The ability to harness chaos in artificial neural networks can lead to the development of more efficient machine learning models.
This study advances the theory and applications of Cohen–Grossberg-type neural networks by introducing a model with a most general piecewise constant argument of delayed and advanced type. Alpha unpredictability and conditions for Poisson stability are analyzed, confirming the phenomenon of ultra Poincaré chaos in complex neural networks. The model and its recurrence properties pose new challenges for the study of neural networks, particularly in proving Poisson stability. Additionally, the system's complexity is heightened by strong nonlinearities and time-dependent switching, necessitating an extension of the pseudo-quasilinear reduction method.
The theoretical results, supported by numerical simulations, provide valuable insights for practical applications in signal processing, pattern recognition, and synchronization in deterministic and stochastic processes. These findings contribute to the broader understanding of complex neural network behavior and their potential real-world implementations.

Author Contributions

Conceptualization, M.A.; methodology, M.A.; investigation, M.A., Z.N., and R.S.; writing—original draft preparation, R.S.; writing—review and editing, Z.N.; supervision, R.S.; software, Z.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP23487275).

Data Availability Statement

No datasets were generated or analyzed during the current study.

Acknowledgments

The authors wish to express their sincere gratitude to the referees for helpful criticism and valuable suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proofs of the Results

Proof of Lemma 1. 
We begin with the consideration of a bounded motion $x(t) = (x_1(t), \ldots, x_p(t))$ of system (2). It satisfies the following integral equations [40]:
$$x_i(t) = -\int_{-\infty}^{t} a_i(x_i(s))\Big[b_i(x_i(s)) + c_i(x_i(\gamma(s))) - \sum_{j=1}^{p} d_{ij}(s) f_j(x_j(s)) - \sum_{j=1}^{p} e_{ij}(s) g_j(x_j(\gamma(s))) - w_i(s)\Big]\,ds, \quad i = 1, \ldots, p.$$
Let us prove that
$$|x_i(\gamma(t))| \le K_i\,|x_i(t)|, \quad t \in \mathbb{R},$$
for all i = 1 , 2 , , p .
First, fix an integer $k$ such that $t \in [\theta_k, \theta_{k+1})$, and then consider two alternative cases: (a) $\theta_k \le \xi_k \le t < \theta_{k+1}$ and (b) $\theta_k \le t < \xi_k < \theta_{k+1}$.
(a) For $t \ge \xi_k$, we obtain that
$$|x_i(t)| \le |x_i(\xi_k)| + \int_{\xi_k}^{t} \bar a_i\Big[L_i^b |x_i(s)| + L_i^c |x_i(\xi_k)| + \sum_{j=1}^{p} m_{ij}^d L_i^f |x_j(s)| + \sum_{j=1}^{p} m_{ij}^e L_i^g |x_j(\xi_k)|\Big]\,ds \le |x_i(\xi_k)|\Big(1 + \bar a_i L_i^c \bar\theta + \bar a_i \sum_{j=1}^{p} m_{ij}^e L_i^g \bar\theta\Big) + \bar a_i \int_{\xi_k}^{t}\Big(L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f\Big)|x_i(s)|\,ds.$$
The Gronwall–Bellman Lemma yields that
$$|x_i(t)| \le |x_i(\xi_k)|\Big(1 + \bar a_i L_i^c \bar\theta + \bar a_i \sum_{j=1}^{p} m_{ij}^e L_i^g \bar\theta\Big)\, e^{\bar a_i \big(L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f\big)\bar\theta}.$$
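For reference, the version of the Gronwall–Bellman lemma invoked here, with the constant and kernel read off from the preceding estimate, can be written out as follows (a standard statement, added for the reader's convenience):

```latex
% Gronwall–Bellman: if u is continuous and nonnegative on [\xi_k, t] and
%   u(t) \le C + \int_{\xi_k}^{t} \kappa\, u(s)\, ds,  C \ge 0,  \kappa \ge 0,
% then u(t) \le C e^{\kappa (t - \xi_k)}.  Here,
u(t) = |x_i(t)|, \qquad
C = |x_i(\xi_k)|\Big(1 + \bar{a}_i L_i^c \bar{\theta}
      + \bar{a}_i \textstyle\sum_{j=1}^{p} m_{ij}^e L_i^g \bar{\theta}\Big), \qquad
\kappa = \bar{a}_i\Big(L_i^b + \textstyle\sum_{j=1}^{p} m_{ij}^d L_i^f\Big),
```

and the exponent in the displayed bound follows since $t - \xi_k \le \bar\theta$.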
Moreover,
$$|x_i(\xi_k)| \le |x_i(t)| + \int_{\xi_k}^{t} \bar a_i\Big[L_i^b |x_i(s)| + L_i^c |x_i(\xi_k)| + \sum_{j=1}^{p} m_{ij}^d L_i^f |x_j(s)| + \sum_{j=1}^{p} m_{ij}^e L_i^g |x_j(\xi_k)|\Big]\,ds \le |x_i(t)| + \int_{\xi_k}^{t}\Big[\bar a_i\Big(L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f\Big)|x_i(s)| + \bar a_i\Big(L_i^c + \sum_{j=1}^{p} m_{ij}^e L_i^g\Big)|x_i(\xi_k)|\Big]\,ds \le |x_i(t)| + \int_{\xi_k}^{t}\Big[\bar a_i\Big(L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f\Big)\Big(1 + \bar a_i L_i^c \bar\theta + \bar a_i \sum_{j=1}^{p} m_{ij}^e L_i^g \bar\theta\Big) e^{\bar a_i (L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f)\bar\theta}\,|x_i(\xi_k)| + \bar a_i\Big(L_i^c + \sum_{j=1}^{p} m_{ij}^e L_i^g\Big)|x_i(\xi_k)|\Big]\,ds \le |x_i(t)| + \bar\theta\Big[\bar a_i\Big(L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f\Big)\Big(1 + \bar a_i L_i^c \bar\theta + \bar a_i \sum_{j=1}^{p} m_{ij}^e L_i^g \bar\theta\Big) e^{\bar a_i (L_i^b + \sum_{j=1}^{p} m_{ij}^d L_i^f)\bar\theta} + \bar a_i\Big(L_i^c + \sum_{j=1}^{p} m_{ij}^e L_i^g\Big)\Big]|x_i(\xi_k)|.$$
The last inequality implies relation (A2).
The assertion for case (b), $\theta_k \le t < \xi_k < \theta_{k+1}$, $k \in \mathbb{Z}$, can be proved in the same way. Consequently, it follows that $|x_i(\xi_k)| \le K_i |x_i(t)|$ for $t \in [\theta_k, \theta_{k+1})$, $k \in \mathbb{Z}$. Therefore, (A2) holds for all $t \in [\theta_k, \theta_{k+1})$, $k \in \mathbb{Z}$, and one can conclude that (A2) holds for all $t \in \mathbb{R}$.
Now, consider the function $h_i(t)$, which satisfies Lipschitz condition (8). This condition yields $|y_i(t)| \le \frac{1}{\underline a_i}|x_i(t)|$ and $|x_i(t)| \le \bar a_i\,|y_i(t)|$. Subsequently, applying inequality (A2) along with transformation (9), one can obtain the following inequalities:
$$|y_i(\gamma(t))| \le \frac{1}{\underline a_i}\,|x_i(\gamma(t))| \le \frac{K_i}{\underline a_i}\,|x_i(t)| \le \frac{K_i \bar a_i}{\underline a_i}\,|y_i(t)|.$$
The proof of the lemma is complete. □
Proof of Lemma 2. 
For a fixed $\varphi(t) \in \mathcal{P}$ and $i = 1, \ldots, p$, we have the following:
$$|\Pi_i\varphi(t)| \le \int_{-\infty}^{t} e^{-\int_s^t [u_i(\varphi_i(\varsigma)) + v_i(\varphi_i(\gamma(\varsigma)))]\,d\varsigma}\Big[\sum_{j=1}^{p}|d_{ij}(s)||f_j(h_j^{-1}(\varphi_j(s)))| + \sum_{j=1}^{p}|e_{ij}(s)||g_j(h_j^{-1}(\varphi_j(\gamma(s))))| + |w_i(s)|\Big]ds \le \int_{-\infty}^{t} e^{-(m_i+n_i)(t-s)}\Big[\sum_{j=1}^{p}|d_{ij}(s)||f_j(h_j^{-1}(\varphi_j(s)))| + \sum_{j=1}^{p}|e_{ij}(s)||g_j(h_j^{-1}(\varphi_j(\gamma(s))))| + |w_i(s)|\Big]ds \le \frac{1}{m_i+n_i}\Big[\sum_{j=1}^{p} m_{ij}^d m_j^f + \sum_{j=1}^{p} m_{ij}^e m_j^g + m_i^w\Big].$$
From the last inequality and condition (C8), we obtain $\|\Pi\varphi\|_1 < H$. So, property (P1) is valid for $\Pi\varphi$.
Next, let us prove property (P2) for $\Pi\varphi(t)$. This requires showing, for a sequence $t_n \to \infty$, that for each $\Pi\varphi(t) \in \mathcal{P}$, $\Pi\varphi(t + t_n) \to \Pi\varphi(t)$ uniformly on every closed and bounded interval of the real axis. To achieve this, we employ the method of included intervals considered in [10]. Fix an interval $[a, b]$, where $a, b \in \mathbb{R}$ with $a < b$, and a positive real number $\varepsilon$. To prove the claim, it is sufficient to show that $\|\Pi\varphi(t+t_n) - \Pi\varphi(t)\| < \varepsilon$ for $t \in [a, b]$ and large $n$. Choose numbers $c < a$ and $\zeta > 0$ such that
$$\frac{1}{m_i+n_i}\Big[\frac{1}{4}\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i H + \sum_{j=1}^{p} m_{ij}^d m_i^f + \frac{1}{4}\sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i H + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big]\, e^{-(m_i+n_i)(a-c)} < \varepsilon/8,$$
$$\frac{L_i^u + K_i L_i^v}{(m_i+n_i)^2}\Big[\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big]\,\zeta < \varepsilon/8,$$
$$\Big[\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i + p\,m_i^f + p\,m_i^g + 1\Big]\frac{\zeta}{m_i+n_i} < \varepsilon/8.$$
Take $n$ large enough that $|\theta_{k+l_n} - t_n - \theta_k| < \zeta$ and $|\xi_{k+l_n} - t_n - \xi_k| < \zeta$ whenever $\theta_k \in [c, b]$, $k \in \mathbb{Z}$, and such that $|d_{ij}(t+t_n) - d_{ij}(t)| < \zeta$, $|e_{ij}(t+t_n) - e_{ij}(t)| < \zeta$, $|w_i(t+t_n) - w_i(t)| < \zeta$, and $|\varphi_i(t+t_n) - \varphi_i(t)| < \zeta$ for all $t \in [c, b]$, $i = 1, 2, \ldots, p$. Then, for $\varphi(t) \in \mathcal{P}$, we obtain that
$$|\Pi_i\varphi_i(t+t_n) - \Pi_i\varphi_i(t)| \le \int_{-\infty}^{t}\Big|e^{-\int_s^t [u_i(\varphi_i(\varsigma+t_n)) + v_i(\varphi_i(\gamma(\varsigma+t_n)))]\,d\varsigma} - e^{-\int_s^t [u_i(\varphi_i(\varsigma)) + v_i(\varphi_i(\gamma(\varsigma)))]\,d\varsigma}\Big| \times \Big|\sum_{j=1}^{p} d_{ij}(s+t_n) f_j(h_j^{-1}(\varphi_j(s+t_n))) + \sum_{j=1}^{p} e_{ij}(s+t_n) g_j(h_j^{-1}(\varphi_j(\gamma(s+t_n)))) + w_i(s+t_n)\Big|\,ds + \int_{-\infty}^{t} e^{-\int_s^t [u_i(\varphi_i(\varsigma)) + v_i(\varphi_i(\gamma(\varsigma)))]\,d\varsigma}\Big|\sum_{j=1}^{p} d_{ij}(s+t_n)\big[f_j(h_j^{-1}(\varphi_j(s+t_n))) - f_j(h_j^{-1}(\varphi_j(s)))\big] + \sum_{j=1}^{p}\big[d_{ij}(s+t_n) - d_{ij}(s)\big]f_j(h_j^{-1}(\varphi_j(s))) + \sum_{j=1}^{p} e_{ij}(s+t_n)\big[g_j(h_j^{-1}(\varphi_j(\gamma(s+t_n)))) - g_j(h_j^{-1}(\varphi_j(\gamma(s))))\big] + \sum_{j=1}^{p}\big[e_{ij}(s+t_n) - e_{ij}(s)\big]g_j(h_j^{-1}(\varphi_j(\gamma(s)))) + w_i(s+t_n) - w_i(s)\Big|\,ds.$$
Examine the integral as the sum of its components over the intervals $(-\infty, c]$ and $(c, t]$. Using inequalities (A3)–(A5), we obtain that the following estimates are correct for each $i = 1, 2, \ldots, p$:
$$I_1 = \int_{-\infty}^{c}\Big|e^{-\int_s^t [u_i(\varphi_i(\varsigma+t_n)) + v_i(\varphi_i(\gamma(\varsigma+t_n)))]\,d\varsigma} - e^{-\int_s^t [u_i(\varphi_i(\varsigma)) + v_i(\varphi_i(\gamma(\varsigma)))]\,d\varsigma}\Big| \times \Big|\sum_{j=1}^{p} d_{ij}(s+t_n) f_j(h_j^{-1}(\varphi_j(s+t_n))) + \sum_{j=1}^{p} e_{ij}(s+t_n) g_j(h_j^{-1}(\varphi_j(\gamma(s+t_n)))) + w_i(s+t_n)\Big|\,ds + \int_{-\infty}^{c} e^{-\int_s^t [u_i(\varphi_i(\varsigma)) + v_i(\varphi_i(\gamma(\varsigma)))]\,d\varsigma}\Big|\sum_{j=1}^{p} d_{ij}(s+t_n)\big[f_j(h_j^{-1}(\varphi_j(s+t_n))) - f_j(h_j^{-1}(\varphi_j(s)))\big] + \sum_{j=1}^{p}\big[d_{ij}(s+t_n) - d_{ij}(s)\big]f_j(h_j^{-1}(\varphi_j(s))) + \sum_{j=1}^{p} e_{ij}(s+t_n)\big[g_j(h_j^{-1}(\varphi_j(\gamma(s+t_n)))) - g_j(h_j^{-1}(\varphi_j(\gamma(s))))\big] + \sum_{j=1}^{p}\big[e_{ij}(s+t_n) - e_{ij}(s)\big]g_j(h_j^{-1}(\varphi_j(\gamma(s)))) + w_i(s+t_n) - w_i(s)\Big|\,ds$$
$$\le \int_{-\infty}^{c} 2 e^{-(m_i+n_i)(t-s)}\Big[\sum_{j=1}^{p} m_{ij}^d m_j^f + \sum_{j=1}^{p} m_{ij}^e m_j^g + m_i^w\Big]ds + \int_{-\infty}^{c} e^{-(m_i+n_i)(t-s)}\Big[\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i H + 2\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i H + 2\sum_{j=1}^{p} m_{ij}^e m_i^g + 2m_i^w\Big]ds$$
$$\le \frac{2}{m_i+n_i}\, e^{-(m_i+n_i)(a-c)}\Big[\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big] + \frac{1}{m_i+n_i}\, e^{-(m_i+n_i)(a-c)}\Big[\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i H + 2\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i H + 2\sum_{j=1}^{p} m_{ij}^e m_i^g + 2m_i^w\Big]$$
$$\le \frac{4}{m_i+n_i}\, e^{-(m_i+n_i)(a-c)}\Big[\frac{1}{4}\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i H + \sum_{j=1}^{p} m_{ij}^d m_i^f + \frac{1}{4}\sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i H + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big] < \frac{\epsilon}{2}.$$
Let us calculate the integral over the interval ( c , t ] :
$$I_2 = \int_{c}^{t}\Big|e^{-\int_s^t [u_i(\varphi_i(\varsigma+t_n)) + v_i(\varphi_i(\gamma(\varsigma+t_n)))]\,d\varsigma} - e^{-\int_s^t [u_i(\varphi_i(\varsigma)) + v_i(\varphi_i(\gamma(\varsigma)))]\,d\varsigma}\Big| \times \Big|\sum_{j=1}^{p} d_{ij}(s+t_n) f_j(h_j^{-1}(\varphi_j(s+t_n))) + \sum_{j=1}^{p} e_{ij}(s+t_n) g_j(h_j^{-1}(\varphi_j(\gamma(s+t_n)))) + w_i(s+t_n)\Big|\,ds + \int_{c}^{t} e^{-\int_s^t [u_i(\varphi_i(\varsigma)) + v_i(\varphi_i(\gamma(\varsigma)))]\,d\varsigma}\Big|\sum_{j=1}^{p} d_{ij}(s+t_n)\big[f_j(h_j^{-1}(\varphi_j(s+t_n))) - f_j(h_j^{-1}(\varphi_j(s)))\big] + \sum_{j=1}^{p}\big[d_{ij}(s+t_n) - d_{ij}(s)\big]f_j(h_j^{-1}(\varphi_j(s))) + \sum_{j=1}^{p} e_{ij}(s+t_n)\big[g_j(h_j^{-1}(\varphi_j(\gamma(s+t_n)))) - g_j(h_j^{-1}(\varphi_j(\gamma(s))))\big] + \sum_{j=1}^{p}\big[e_{ij}(s+t_n) - e_{ij}(s)\big]g_j(h_j^{-1}(\varphi_j(\gamma(s)))) + w_i(s+t_n) - w_i(s)\Big|\,ds.$$
In the following, the inequality
$$\Big|e^{-\int_{t_0}^t [u_i(y_i(\varsigma)) + v_i(y_i(\gamma(\varsigma)))]\,d\varsigma} - e^{-\int_{t_0}^t [u_i(z_i(\varsigma)) + v_i(z_i(\gamma(\varsigma)))]\,d\varsigma}\Big| \le e^{-(m_i+n_i)(t-t_0)} \int_{t_0}^{t}\Big[L_i^u\,|y_i(\varsigma) - z_i(\varsigma)| + L_i^v\,|y_i(\gamma(\varsigma)) - z_i(\gamma(\varsigma))|\Big]\,d\varsigma,$$
where $t \ge t_0$, $i = 1, 2, \ldots, p$, will be intensively utilized. To establish it, we use the mean value theorem. According to this theorem, for an exponential function, we have $e^{x_1} - e^{x_2} = e^{x^*}(x_1 - x_2)$, with $x_1 = -\int_{t_0}^t [u_i(y_i(\varsigma)) + v_i(y_i(\gamma(\varsigma)))]\,d\varsigma$, $x_2 = -\int_{t_0}^t [u_i(z_i(\varsigma)) + v_i(z_i(\gamma(\varsigma)))]\,d\varsigma$, and a number $x^*$ between $x_1$ and $x_2$. Additionally, it can easily be shown that the relation $x^* \le -(m_i+n_i)(t-t_0)$ holds.
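Spelled out, these two observations combine into inequality (A6): since $u_i + v_i \ge m_i + n_i$, both exponents satisfy $x_1, x_2 \le -(m_i+n_i)(t-t_0)$, and hence

```latex
\big|e^{x_1} - e^{x_2}\big| = e^{x^{*}}\,|x_1 - x_2|
  \le e^{-(m_i+n_i)(t-t_0)} \int_{t_0}^{t}
      \big| [u_i(y_i(\varsigma)) + v_i(y_i(\gamma(\varsigma)))]
          - [u_i(z_i(\varsigma)) + v_i(z_i(\gamma(\varsigma)))] \big|\, d\varsigma
  \le e^{-(m_i+n_i)(t-t_0)} \int_{t_0}^{t}
      \big[ L_i^{u}\, |y_i(\varsigma) - z_i(\varsigma)|
          + L_i^{v}\, |y_i(\gamma(\varsigma)) - z_i(\gamma(\varsigma))| \big]\, d\varsigma,
```

where the last step uses the Lipschitz constants $L_i^u$ and $L_i^v$ of $u_i$ and $v_i$.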
Now, let us evaluate the integral I 2 using inequality (A6).
$$I_2 \le \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\int_s^t\big[L_i^u|\varphi_i(\varsigma+t_n) - \varphi_i(\varsigma)| + L_i^v|\varphi_i(\gamma(\varsigma+t_n)) - \varphi_i(\gamma(\varsigma))|\big]d\varsigma \times \Big[\sum_{j=1}^{p}|d_{ij}(s+t_n)||f_j(h_j^{-1}(\varphi_j(s+t_n)))| + \sum_{j=1}^{p}|e_{ij}(s+t_n)||g_j(h_j^{-1}(\varphi_j(\gamma(s+t_n))))| + |w_i(s+t_n)|\Big]ds + \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\Big(\sum_{j=1}^{p}|d_{ij}(s+t_n)||f_j(h_j^{-1}(\varphi_j(s+t_n))) - f_j(h_j^{-1}(\varphi_j(s)))| + \sum_{j=1}^{p}|d_{ij}(s+t_n) - d_{ij}(s)||f_j(h_j^{-1}(\varphi_j(s)))| + \sum_{j=1}^{p}|e_{ij}(s+t_n)||g_j(h_j^{-1}(\varphi_j(\gamma(s+t_n)))) - g_j(h_j^{-1}(\varphi_j(\gamma(s))))| + \sum_{j=1}^{p}|e_{ij}(s+t_n) - e_{ij}(s)||g_j(h_j^{-1}(\varphi_j(\gamma(s))))| + |w_i(s+t_n) - w_i(s)|\Big)ds$$
$$\le \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\int_s^t L_i^u|\varphi_i(\varsigma+t_n) - \varphi_i(\varsigma)|\Big[\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big]d\varsigma\,ds + \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\int_s^t L_i^v|\varphi_i(\gamma(\varsigma+t_n)) - \varphi_i(\gamma(\varsigma))|\Big[\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big]d\varsigma\,ds + \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\Big[\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i\,\zeta + p\zeta\,m_i^f + \sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i\,|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))| + p\zeta\,m_i^g + \zeta\Big]ds$$
$$\le \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\big(L_i^u + K_i L_i^v\big)\zeta\,(t-s)\Big[\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big]ds + \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\Big[\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i\,\zeta + p\zeta\,m_i^f + p\zeta\,m_i^g + \zeta\Big]ds + \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i\,|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))|\,ds$$
$$\le \frac{L_i^u + K_i L_i^v}{(m_i+n_i)^2}\,\zeta\Big[\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big] + \frac{\zeta}{m_i+n_i}\Big[\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i + p\,m_i^f + p\,m_i^g + 1\Big] + \sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i \int_{c}^{t} e^{-(m_i+n_i)(t-s)}|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))|\,ds.$$
Let us denote
$$I_\gamma = \int_{c}^{t} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds, \quad i = 1, 2, \ldots, p.$$
Moreover, since the last integral contains a piecewise constant argument, to evaluate it on the interval $(c, t]$, we divide this interval into small parts as follows. For a fixed $t \in [a, b]$, we assume, without loss of generality, that $\theta_i \le \theta_{i+l_n} - t_n$ and that $\theta_k \le \theta_{k+l_n} - t_n = c < \theta_{k+1} < \theta_{k+2} < \cdots < \theta_{k+r} \le \theta_{k+r+l_n} - t_n \le t < \theta_{k+r+1}$. That is, there are exactly $r$ breakpoints in $[c, t]$. Let the following inequalities
$$2\sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i\,(r+1)\,\zeta\,\frac{1 - e^{-(m_i+n_i)\bar\theta}}{m_i+n_i} < \varepsilon/8,$$
and
$$2\sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i\, r H\,\frac{e^{(m_i+n_i)\zeta} - 1}{m_i+n_i} < \varepsilon/8$$
be satisfied for the given ε > 0 .
Consider the last integral as follows:
$$I_\gamma = \int_{c}^{\theta_{k+1}} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds + \int_{\theta_{k+1}}^{\theta_{k+1+l_n}-t_n} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds + \int_{\theta_{k+1+l_n}-t_n}^{\theta_{k+2}} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds + \int_{\theta_{k+2}}^{\theta_{k+2+l_n}-t_n} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds + \int_{\theta_{k+2+l_n}-t_n}^{\theta_{k+3}} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds + \cdots + \int_{\theta_{k+r+l_n}-t_n}^{t} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds$$
$$= \sum_{i=k}^{k+r-1}\int_{\theta_{i+l_n}-t_n}^{\theta_{i+1}} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds + \sum_{i=k}^{k+r-1}\int_{\theta_{i+1}}^{\theta_{i+1+l_n}-t_n} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds + \int_{\theta_{k+r+l_n}-t_n}^{t} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds.$$
Denote
$$A_i = \int_{\theta_{i+l_n}-t_n}^{\theta_{i+1}} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds$$
and
$$B_i = \int_{\theta_{i+1}}^{\theta_{i+1+l_n}-t_n} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds,$$
where i = k , k + 1 , , k + r 1 , and
$$I_\gamma = \sum_{i=k}^{k+r-1} A_i + \sum_{i=k}^{k+r-1} B_i + \int_{\theta_{k+r+l_n}-t_n}^{t} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds.$$
By condition (6), for $t \in [\theta_{i+l_n} - t_n, \theta_{i+1})$, $\gamma(t) = \xi_i$, we have that $\gamma(t+t_n) = \xi_{i+l_n}$, $i = k, k+1, \ldots, k+r-1$. Thus, we obtain that
$$A_i = \int_{\theta_{i+l_n}-t_n}^{\theta_{i+1}} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\xi_i + t_n + o(1)) - \varphi_j(\xi_i)\big|\,ds = \int_{\theta_{i+l_n}-t_n}^{\theta_{i+1}} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\xi_i+t_n) - \varphi_j(\xi_i) + \varphi_j(\xi_i+t_n+o(1)) - \varphi_j(\xi_i+t_n)\big|\,ds \le \int_{\theta_{i+l_n}-t_n}^{\theta_{i+1}} e^{-(m_i+n_i)(t-s)}\big[|\varphi_j(\xi_i+t_n) - \varphi_j(\xi_i)| + |\varphi_j(\xi_i+t_n+o(1)) - \varphi_j(\xi_i+t_n)|\big]\,ds \le \int_{\theta_{i+l_n}-t_n}^{\theta_{i+1}} e^{-(m_i+n_i)(t-s)}\big[\zeta + |\varphi_j(\xi_i+t_n+o(1)) - \varphi_j(\xi_i+t_n)|\big]\,ds.$$
Due to the uniform continuity of $\varphi$, for sufficiently large $n$ and given $\zeta > 0$, there exists a corresponding $\eta > 0$ such that $|\varphi(\xi_i + t_n + o(1)) - \varphi(\xi_i + t_n)| < \zeta$ if $|\xi_{i+l_n} - \xi_i - t_n| < \eta$. From this, one can conclude that
$$A_i \le 2\zeta \int_{\theta_{i+l_n}-t_n}^{\theta_{i+1}} e^{-(m_i+n_i)(t-s)}\,ds \le 2\zeta\,\frac{1 - e^{-(m_i+n_i)\bar\theta}}{m_i+n_i}.$$
Moreover, we obtain that
$$B_i \le 2H \int_{\theta_{i+1}}^{\theta_{i+1+l_n}-t_n} e^{-(m_i+n_i)(t-s)}\,ds \le 2H\,\frac{e^{(m_i+n_i)\zeta} - 1}{m_i+n_i}$$
as a consequence of condition (5). Likewise, as for $A_i$, the following integral can be estimated:
$$\int_{\theta_{k+r+l_n}-t_n}^{t} e^{-(m_i+n_i)(t-s)}\big|\varphi_j(\gamma(s+t_n)) - \varphi_j(\gamma(s))\big|\,ds \le 2\zeta\,\frac{1 - e^{-(m_i+n_i)\bar\theta}}{m_i+n_i}.$$
In this way,
$$I_\gamma \le 2(r+1)\,\zeta\,\frac{1 - e^{-(m_i+n_i)\bar\theta}}{m_i+n_i} + 2rH\,\frac{e^{(m_i+n_i)\zeta} - 1}{m_i+n_i}$$
can be obtained.
So,
$$I_2 \le \frac{L_i^u + K_i L_i^v}{(m_i+n_i)^2}\,\zeta\Big[\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e m_i^g + m_i^w\Big] + \frac{\zeta}{m_i+n_i}\Big[\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i + p\,m_i^f + p\,m_i^g + 1\Big] + \sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i\Big[2(r+1)\,\zeta\,\frac{1 - e^{-(m_i+n_i)\bar\theta}}{m_i+n_i} + 2rH\,\frac{e^{(m_i+n_i)\zeta} - 1}{m_i+n_i}\Big] < \frac{\epsilon}{2}.$$
As a result, the inequality $|\Pi_i\varphi_i(t+t_n) - \Pi_i\varphi_i(t)| < \epsilon$ holds for $t \in [a, b]$, in agreement with (A3)–(A8). This confirms that (P2) is valid for $\Pi\varphi$, proving that the operator $\Pi$ is invariant in $\mathcal{P}$. □
Proof of Lemma 3. 
Let functions φ and ψ belong to the space P . It is true for all t R that
| Π i φ ( t ) Π i ψ ( t ) | | t e s t [ u i ( φ i ( s ) ) + v i ( φ i ( γ ( s ) ) ) ] d s [ j = 1 p d i j ( s ) f j ( h j 1 ( φ j ( s ) ) ) + + j = 1 p e i j ( s ) g j ( h j 1 ( φ j ( γ ( s ) ) ) ) + w i ( s ) ] d s t e s t [ u i ( ψ i ( s ) ) + v i ( ψ i ( γ ( s ) ) ) ] d s [ j = 1 p d i j ( s ) f j ( h j 1 ( ψ j ( t ) ) ) + j = 1 p e i j ( s ) g j ( h j 1 ( ψ j ( γ ( s ) ) ) ) w i ( s ) ] d s |
| t e s t [ u i ( φ i ( s ) ) + v i ( φ i ( γ ( s ) ) ) ] d s e s t [ u i ( ψ i ( s ) ) + v i ( ψ i ( γ ( s ) ) ) ] d s [ j = 1 p d i j ( s ) | f j ( h j 1 ( φ j ( s ) ) ) + + j = 1 p e i j ( s ) g j ( h j 1 ( φ j ( γ ( s ) ) ) ) + w i ( s ) ] d s | + | t e s t [ u i ( ψ i ( s ) ) + v i ( ψ i ( γ ( s ) ) ) ] d s [ j = 1 p d i j ( s ) f j ( h j 1 ( ψ j ( t ) ) ) f j ( h j 1 ( φ j ( s ) ) ) + j = 1 p e i j ( s ) g j ( h j 1 ( ψ j ( γ ( s ) ) ) ) g j ( h j 1 ( φ j ( γ ( s ) ) ) ) ] d s | t e ( m i + n i ) ( t s ) ( L i u + L i v ) sup t R | φ i ( t ) ψ i ( t ) | ( t s ) j = 1 p m i j d m i f + j = 1 p m i j e m i g d s + t e ( m i + n i ) ( t s ) j = 1 p m i j d L i f a ¯ i sup t R | φ i ( t ) ψ i ( t ) | + j = 1 p m i j e L i g a ¯ i sup t R | φ i ( t ) ψ i ( t ) | d s
L i u + L i v ( m i + n i ) 2 j = 1 p m i j d m i f + j = 1 p m i j e m i g + 1 m i + n i j = 1 p m i j d L i f a ¯ i + j = 1 p m i j e L i g a ¯ i × sup t R | φ i ( t ) ψ i ( t ) | .
So, it is true for all $t \in \mathbb{R}$ that $\|\Pi\varphi - \Pi\psi\|_1 \le \Big[\frac{L_i^u + L_i^v}{(m_i+n_i)^2}\Big(\sum_{j=1}^{p} m_{ij}^d m_i^f + \sum_{j=1}^{p} m_{ij}^e m_i^g\Big) + \frac{1}{m_i+n_i}\Big(\sum_{j=1}^{p} m_{ij}^d L_i^f \bar a_i + \sum_{j=1}^{p} m_{ij}^e L_i^g \bar a_i\Big)\Big]\|\varphi - \psi\|_1$.
Consequently, conditions (C6) and (C9) imply that operator Π : P P is contractive. The lemma is proved. □
Proof of Theorem 1. 
Let us first prove that the space $\mathcal{P}$ is complete. Consider a Cauchy sequence $\phi^k(t)$ in $\mathcal{P}$ that converges to a limit function $\phi(t)$ on $\mathbb{R}$. Let us start with the second condition (P2), because property (P1) can be easily checked. Fix a closed and bounded interval $I \subset \mathbb{R}$. Thus, one can write
$$\|\phi(t+t_n) - \phi(t)\| \le \|\phi(t+t_n) - \phi^k(t+t_n)\| + \|\phi^k(t+t_n) - \phi^k(t)\| + \|\phi^k(t) - \phi(t)\| < \epsilon.$$
It is possible to choose sufficiently large values of $k$ and $n$ so that each term on the right-hand side of inequality (A9) remains smaller than $\epsilon/3$ for any given $\epsilon > 0$ and for all $t \in I$. Hence, inequality (A9) guarantees that $\phi(t+t_n) \to \phi(t)$ uniformly on $I$. This establishes the completeness of $\mathcal{P}$.
By applying Lemmas 2 and 3, the invariance and contractivity of the operator $\Pi$ in $\mathcal{P}$ ensure the existence of a unique fixed point $y \in \mathcal{P}$ of $\Pi$. This point corresponds to a solution of system (11) and satisfies the convergence property. Consequently, the function $y(t) = (y_1(t), y_2(t), \ldots, y_p(t))$ serves as the unique Poisson stable solution of system (11).
Now, consider a function $\varphi(t) = (\varphi_1(t), \varphi_2(t), \ldots, \varphi_p(t))$ such that $\varphi_i(t) = h_i^{-1}(y_i(t))$, $i = 1, 2, \ldots, p$. According to substitution (9), the function $\varphi(t)$ is a unique solution of system (2). Let us show that $\varphi(t)$ is Poisson stable. Using inequality (8), on a fixed bounded interval $I \subset \mathbb{R}$, we obtain that
$$|\varphi_i(t+t_n) - \varphi_i(t)| = |h_i^{-1}(y_i(t+t_n)) - h_i^{-1}(y_i(t))| \le \bar a_i\,|y_i(t+t_n) - y_i(t)|,$$
for all $i = 1, 2, \ldots, p$. Therefore, each sequence $\varphi_i(t+t_n)$, $i = 1, 2, \ldots, p$, uniformly converges to $\varphi_i(t)$, $t \in I$, as $n \to \infty$. This leads to the conclusion that the function $\varphi(t) = (\varphi_1(t), \varphi_2(t), \ldots, \varphi_p(t))$ represents the unique Poisson stable solution of neural network (2). The theorem is proved. □
Proof of Theorem 2. 
According to the previous theorem, it follows that neural network (2) possesses a unique Poisson stable solution, given by φ ( t ) = h 1 ( y ( t ) ) . Now, we aim to demonstrate the alpha unpredictability of φ ( t ) . The first step is to verify that the Poisson stable solution y ( t ) of system (11) fulfills the separation property.
Applying relations
$$y_i(t) = y_i(s_n) - \int_{s_n}^{t} u_i(y_i(s))\,y_i(s)\,ds - \int_{s_n}^{t} v_i(y_i(\gamma(s)))\,y_i(s)\,ds + \int_{s_n}^{t}\sum_{j=1}^{p} d_{ij}(s)\,f_j(h_j^{-1}(y_j(s)))\,ds + \int_{s_n}^{t}\sum_{j=1}^{p} e_{ij}(s)\,g_j(h_j^{-1}(y_j(\gamma(s))))\,ds + \int_{s_n}^{t} w_i(s)\,ds$$
and
$$y_i(t+t_n) = y_i(s_n+t_n) - \int_{s_n}^{t} u_i(y_i(s+t_n))\,y_i(s+t_n)\,ds - \int_{s_n}^{t} v_i(y_i(\gamma(s+t_n)))\,y_i(s+t_n)\,ds + \int_{s_n}^{t}\sum_{j=1}^{p} d_{ij}(s+t_n)\,f_j(h_j^{-1}(y_j(s+t_n)))\,ds + \int_{s_n}^{t}\sum_{j=1}^{p} e_{ij}(s+t_n)\,g_j(h_j^{-1}(y_j(\gamma(s+t_n))))\,ds + \int_{s_n}^{t} w_i(s+t_n)\,ds,$$
one can obtain that
$$y_i(t+t_n) - y_i(t) = y_i(s_n+t_n) - y_i(s_n) - \int_{s_n}^{t} u_i(y_i(s+t_n))\,y_i(s+t_n)\,ds + \int_{s_n}^{t} u_i(y_i(s))\,y_i(s)\,ds - \int_{s_n}^{t} v_i(y_i(\gamma(s+t_n)))\,y_i(s+t_n)\,ds + \int_{s_n}^{t} v_i(y_i(\gamma(s)))\,y_i(s)\,ds + \int_{s_n}^{t}\sum_{j=1}^{p} d_{ij}(s+t_n)\,f_j(h_j^{-1}(y_j(s+t_n)))\,ds - \int_{s_n}^{t}\sum_{j=1}^{p} d_{ij}(s)\,f_j(h_j^{-1}(y_j(s)))\,ds + \int_{s_n}^{t}\sum_{j=1}^{p} e_{ij}(s+t_n)\,g_j(h_j^{-1}(y_j(\gamma(s+t_n))))\,ds - \int_{s_n}^{t}\sum_{j=1}^{p} e_{ij}(s)\,g_j(h_j^{-1}(y_j(\gamma(s))))\,ds + \int_{s_n}^{t} w_i(s+t_n)\,ds - \int_{s_n}^{t} w_i(s)\,ds,$$
for each i = 1 , 2 , , p .
It is possible to find a positive number $\sigma_1$ and integer values $l$ and $k$ such that the subsequent inequalities hold for all $i, j = 1, 2, \ldots, p$:
σ 1 < σ ;
$$|d_{ij}(t+s) - d_{ij}(t)| < \epsilon_0\Big(\frac{1}{l} + \frac{2}{k}\Big), \quad t \in \mathbb{R},\ |s| < \sigma_1;$$
$$|e_{ij}(t+s) - e_{ij}(t)| < \epsilon_0\Big(\frac{1}{l} + \frac{2}{k}\Big), \quad t \in \mathbb{R},\ |s| < \sigma_1;$$
$$|w_i(t+s) - w_i(t)| < \epsilon_0\Big(\frac{1}{l} + \frac{2}{k}\Big), \quad t \in \mathbb{R},\ |s| < \sigma_1;$$
$$\sigma_1\Big[1 - \Big(\frac{1}{l} + \frac{2}{k}\Big)\Big((L_i^u + L_i^v)H + M_i + N_i + (m_i^f + m_i^g)p + \Big(\sum_{j=1}^{p} m_{ij}^d L_i^f + \sum_{j=1}^{p} m_{ij}^e L_i^g\Big)\bar a_i\Big)\Big] > \frac{3}{2l};$$
$$|y_i(t+s) - y_i(t)| < \epsilon_0 \min\Big(\frac{1}{k}, \frac{1}{4l}\Big), \quad t \in \mathbb{R},\ |s| < \sigma_1.$$
Let the numbers $\sigma_1$, $l$, and $k$, as well as the numbers $p \in \mathbb{N}$ and $i = 1, \ldots, p$, be fixed. Consider the following two alternatives: (i) $|y_i(t_n+s_n) - y_i(s_n)| < \epsilon_0/l$ and (ii) $|y_i(t_n+s_n) - y_i(s_n)| \ge \epsilon_0/l$:
(i) Using (A15), one can show that
$$|y_i(t+t_n) - y_i(t)| \le |y_i(t+t_n) - y_i(t_n+s_n)| + |y_i(t_n+s_n) - y_i(s_n)| + |y_i(s_n) - y_i(t)| < \frac{\epsilon_0}{l} + \frac{\epsilon_0}{k} + \frac{\epsilon_0}{k} = \epsilon_0\Big(\frac{1}{l} + \frac{2}{k}\Big), \quad i = 1, 2, \ldots, p,$$
if t [ s n , s n + σ 1 ] . Inequalities (A11)–(A16) imply that
$$|y_i(t+t_n) - y_i(t)| \ge \int_{s_n}^{t}|w_i(s+t_n) - w_i(s)|\,ds - |y_i(s_n+t_n) - y_i(s_n)| - \int_{s_n}^{t}\big[|u_i(y_i(s+t_n)) - u_i(y_i(s))||y_i(s+t_n)| + |u_i(y_i(s))||y_i(s+t_n) - y_i(s)|\big]ds - \int_{s_n}^{t}\big[|v_i(y_i(\gamma(s+t_n))) - v_i(y_i(\gamma(s)))||y_i(s+t_n)| + |v_i(y_i(\gamma(s)))||y_i(s+t_n) - y_i(s)|\big]ds - \int_{s_n}^{t}\sum_{j=1}^{p}\big[|d_{ij}(s+t_n) - d_{ij}(s)||f_j(h_j^{-1}(y_j(s+t_n)))| + |d_{ij}(s)||f_j(h_j^{-1}(y_j(s+t_n))) - f_j(h_j^{-1}(y_j(s)))|\big]ds - \int_{s_n}^{t}\sum_{j=1}^{p}\big[|e_{ij}(s+t_n) - e_{ij}(s)||g_j(h_j^{-1}(y_j(\gamma(s+t_n))))| + |e_{ij}(s)||g_j(h_j^{-1}(y_j(\gamma(s+t_n)))) - g_j(h_j^{-1}(y_j(\gamma(s))))|\big]ds$$
$$> \sigma_1\epsilon_0 - \frac{\epsilon_0}{l} - \sigma_1(L_i^u H + M_i)\,\epsilon_0\Big(\frac{1}{l} + \frac{2}{k}\Big) - \sigma_1(L_i^v H + N_i)\,\epsilon_0\Big(\frac{1}{l} + \frac{2}{k}\Big) - \sigma_1 p\,\epsilon_0\Big(\frac{1}{l} + \frac{2}{k}\Big)(m_i^f + m_i^g) - \sigma_1\Big(\sum_{j=1}^{p} m_{ij}^d L_i^f + \sum_{j=1}^{p} m_{ij}^e L_i^g\Big)\bar a_i\,\epsilon_0\Big(\frac{1}{l} + \frac{2}{k}\Big) > \frac{\epsilon_0}{2l}$$
for t [ s n , s n + σ 1 ] .
(ii) If $|y_i(t_n+s_n) - y_i(s_n)| \ge \epsilon_0/l$, it is not difficult to see that (A15) implies the following:
$$|y_i(t+t_n) - y_i(t)| \ge |y_i(t_n+s_n) - y_i(s_n)| - |y_i(s_n) - y_i(t)| - |y_i(t+t_n) - y_i(t_n+s_n)| > \frac{\epsilon_0}{l} - \frac{\epsilon_0}{4l} - \frac{\epsilon_0}{4l} = \frac{\epsilon_0}{2l}, \quad i = 1, 2, \ldots, p,$$
if $t \in [s_n - \sigma_1, s_n + \sigma_1]$ and $n \in \mathbb{N}$. Thus, both cases (i) and (ii) indicate that the solution $y(t)$ satisfies the separation property. It follows that $y(t)$ qualifies as an alpha unpredictable solution of system (11), as defined in Definition 3, where the sequences $t_n$ and $s_n$, along with the positive values $\frac{\sigma_1}{2}$ and $\frac{\epsilon_0}{2l}$, are considered.
Next, we demonstrate that the function φ ( t ) = h 1 ( y ( t ) ) , which represents a solution of neural network (2), also possesses alpha unpredictability. Utilizing condition (8), we obtain that
$$|\varphi_i(t+t_n) - \varphi_i(t)| = |h_i^{-1}(y_i(t+t_n)) - h_i^{-1}(y_i(t))| \ge \underline a_i\,|y_i(t+t_n) - y_i(t)| > \underline a_i\,\frac{\epsilon_0}{2l},$$
for all $i = 1, 2, \ldots, p$ and $t \in [s_n - \frac{\sigma_1}{2}, s_n + \frac{\sigma_1}{2}]$. This confirms that neural network (2) admits a unique alpha unpredictable solution, as described in Definition 3. The theorem is proved. □
Proof of Theorem 3. 
Consider the solution $y(t) = (y_1(t), y_2(t), \ldots, y_p(t))$, which is proved to be Poisson stable by Theorem 1 and alpha unpredictable by Theorem 2. Since this solution is bounded, it is sufficient to verify that it is exponentially stable.
The solution satisfies the following integral equation:
$$y_i(t) = y_i(t_0)\,e^{-\int_{t_0}^{t}[u_i(y_i(s)) + v_i(y_i(\gamma(s)))]\,ds} + \int_{t_0}^{t} e^{-\int_{s}^{t}[u_i(y_i(\varsigma)) + v_i(y_i(\gamma(\varsigma)))]\,d\varsigma}\Big[\sum_{j=1}^{p} d_{ij}(s)\,f_j(h_j^{-1}(y_j(s))) + \sum_{j=1}^{p} e_{ij}(s)\,g_j(h_j^{-1}(y_j(\gamma(s)))) + w_i(s)\Big]ds,$$
with i = 1 , , p .
Let z ( t ) = ( z 1 ( t ) , z 2 ( t ) , , z p ( t ) ) be another solution of system (11). Then, for fixed i = 1 , , p , one can find that
$$z_i(t) = z_i(t_0)\,e^{-\int_{t_0}^{t}[u_i(z_i(s)) + v_i(z_i(\gamma(s)))]\,ds} + \int_{t_0}^{t} e^{-\int_{s}^{t}[u_i(z_i(\varsigma)) + v_i(z_i(\gamma(\varsigma)))]\,d\varsigma}\Big[\sum_{j=1}^{p} d_{ij}(s)\,f_j(h_j^{-1}(z_j(s))) + \sum_{j=1}^{p} e_{ij}(s)\,g_j(h_j^{-1}(z_j(\gamma(s)))) + w_i(s)\Big]ds.$$
Denote υ i ( t ) = y i ( t ) z i ( t ) and υ i ( t 0 ) = y i ( t 0 ) z i ( t 0 ) for all i = 1 , 2 , , p . Then, it is true that
υ i ( t ) = υ i ( t 0 ) e t 0 t [ u i ( z i ( ς ) ) + v i ( z i ( γ ( ς ) ) ) ] d ς + ( υ i ( t 0 ) + z i ( t 0 ) ) e t 0 t [ u i ( υ i ( ς ) + z i ( ς ) ) + v i ( υ i ( γ ( ς ) ) + z i ( γ ( ς ) ) ) ] d ς e t 0 t [ u i ( z i ( ς ) ) + v i ( z i ( γ ( ς ) ) ) ] d ς + + t 0 t e t 0 t [ u i ( υ i ( ς ) + z i ( ς ) ) + v i ( υ i ( γ ( ς ) ) + z i ( γ ( ς ) ) ) ] d ς e t 0 t [ u i ( z i ( s ) ) + v i ( z i ( γ ( s ) ) ) ] d s × j = 1 p d i j ( s ) f j ( h j 1 ( υ i ( s ) + z i ( s ) ) ) + j = 1 p e i j ( s ) g j ( h j 1 ( υ i ( γ ( s ) ) + z i ( γ ( s ) ) ) ) ) + w i ( s ) d s
+ t 0 t e t 0 t [ u i ( z i ( ς ) ) + v i ( z i ( γ ( ς ) ) ) ] d ς [ j = 1 p d i j ( s ) f j ( h j 1 ( υ i ( s ) + z i ( s ) ) ) f j ( h j 1 ( z j ( s ) ) ) + j = 1 p e i j ( s ) g j ( h j 1 ( υ i ( γ ( s ) ) + z i ( γ ( s ) ) ) ) ) g j ( h j 1 ( z j ( γ ( s ) ) ) ) ] d s .
Now, let us construct the sequence of successive approximations $\upsilon^k(t)$, $k \ge 0$, for the last system, taking
$$\upsilon^0(t) = \Big(\upsilon_1(t_0)\,e^{-\int_{t_0}^{t}[u_1(z_1(\varsigma)) + v_1(z_1(\gamma(\varsigma)))]\,d\varsigma},\ \upsilon_2(t_0)\,e^{-\int_{t_0}^{t}[u_2(z_2(\varsigma)) + v_2(z_2(\gamma(\varsigma)))]\,d\varsigma},\ \ldots,\ \upsilon_p(t_0)\,e^{-\int_{t_0}^{t}[u_p(z_p(\varsigma)) + v_p(z_p(\gamma(\varsigma)))]\,d\varsigma}\Big).$$
Using (A17), we obtain that for each $i = 1, 2, \ldots, p$, the following inequalities are correct:
| υ i k + 1 ( t ) | | υ i ( t 0 ) | e ( m i + n i ) ( t t 0 ) + H e ( m i + n i ) ( t t 0 ) t 0 t [ L i u | υ i k ( ς ) | + L i v | υ i k ( υ i ( γ ( ς ) ) | ] d ς + t 0 t e ( m i + n i ) ( t t 0 ) t 0 s [ L i u | υ i k ( ς ) | + L i v | υ i k ( υ i ( γ ( ς ) ) | ] d ς j = 1 n m i j d m i f + j = 1 n m i j e m i g + m i w d s + t 0 t e ( m i + n i ) ( t t 0 ) j = 1 n m i j d L i f a i ¯ | υ i k ( s ) | + j = 1 n m i j e L i g a i ¯ | υ i k ( γ ( s ) ) | d s | υ i ( t 0 ) | e ( m i + n i ) ( t t 0 ) + H e ( m i + n i ) ( t t 0 ) t 0 t [ L i u + K L i v ] | υ i k ( ς ) | d ς + t 0 t e ( m i + n i ) ( t t 0 ) t 0 s [ L i u + K L i v ] | υ i k ( ς ) | d ς j = 1 n m i j d m i f + j = 1 n m i j e m i g + m i w d s + t 0 t e ( m i + n i ) ( t t 0 ) a i ¯ j = 1 n m i j d L i f + K j = 1 n m i j e L i g | υ i k ( s ) | d s .
From (A18), it follows that
$$\|\upsilon^0(t)\| \le \big(\|\upsilon(t_0)\| + \epsilon\big)\,e^{-\delta(t-t_0)},$$
where ϵ is a positive number such that
$$\epsilon > \|\upsilon(t_0)\|\cdot\frac{\dfrac{H(L^u + K L^v)}{\delta} + \dfrac{(L^u + K L^v)(m^d m^f + m^e m^g + m^w)}{\delta}\Big(\dfrac{2}{m+n-\delta} + \dfrac{1}{m+n}\Big) + \dfrac{(m^d L^f + K m^e L^g)\,\bar a}{m+n-\delta}}{1 - \dfrac{H(L^u + K L^v)}{\delta} - \dfrac{(L^u + K L^v)(m^d m^f + m^e m^g + m^w)}{\delta}\Big(\dfrac{2}{m+n-\delta} + \dfrac{1}{m+n}\Big) - \dfrac{(m^d L^f + K m^e L^g)\,\bar a}{m+n-\delta}}.$$
Assume that for a fixed $k \in \mathbb{N}$, the following inequality is valid:
$$\|\upsilon^k(t)\| \le \big(\|\upsilon(t_0)\| + \epsilon\big)\,e^{-\delta(t-t_0)}.$$
Applying inequality (A19), we obtain that
\[
\begin{aligned}
\|\upsilon^{k+1}(t)\| \le{}& \|\upsilon(t_0)\|\, e^{-(m+n)(t-t_0)}
+ H e^{-(m+n)(t-t_0)} \int_{t_0}^{t} \big[ L^u + K L^v \big] \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(s-t_0)}\, ds \\
&+ \int_{t_0}^{t} e^{-(m+n)(t-s)} \big[ L^u + K L^v \big] \big( m^d m^f + m^e m^g + m^w \big) \int_{t_0}^{s} \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(\varsigma-t_0)}\, d\varsigma\, ds \\
&+ \int_{t_0}^{t} e^{-(m+n)(t-s)} \big( m^d L^f + K m^e L^g \big)\, \bar{a}\, \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(s-t_0)}\, ds \\
\le{}& \|\upsilon(t_0)\|\, e^{-(m+n)(t-t_0)}
+ H e^{-(m+n)(t-t_0)} \big[ L^u + K L^v \big] \big( \|\upsilon(t_0)\| + \epsilon \big)\, \frac{1}{\delta}\, \big| e^{-\delta(t-t_0)} - 1 \big| \\
&+ \big[ L^u + K L^v \big] \big( m^d m^f + m^e m^g + m^w \big) \big( \|\upsilon(t_0)\| + \epsilon \big)\, \frac{1}{\delta}
\Big( \frac{1}{m+n-\delta} \big[ e^{-\delta(t-t_0)} + e^{-(m+n)(t-t_0)} \big] + \frac{1}{m+n}\, e^{-(m+n)(t-t_0)} \Big) \\
&+ \big( m^d L^f + K m^e L^g \big)\, \bar{a}\, \big( \|\upsilon(t_0)\| + \epsilon \big)\, \frac{1}{m+n-\delta} \big[ e^{-\delta(t-t_0)} - e^{-(m+n)(t-t_0)} \big] \\
\le{}& \|\upsilon(t_0)\|\, e^{-\delta(t-t_0)}
+ H e^{-\delta(t-t_0)} \big[ L^u + K L^v \big] \big( \|\upsilon(t_0)\| + \epsilon \big)\, \frac{1}{\delta} \\
&+ \big[ L^u + K L^v \big] \big( m^d m^f + m^e m^g + m^w \big) \big( \|\upsilon(t_0)\| + \epsilon \big)\, \frac{1}{\delta}
\Big( \frac{2}{m+n-\delta} + \frac{1}{m+n} \Big)\, e^{-\delta(t-t_0)} \\
&+ \big( m^d L^f + K m^e L^g \big)\, \bar{a}\, \big( \|\upsilon(t_0)\| + \epsilon \big)\, \frac{1}{m+n-\delta}\, e^{-\delta(t-t_0)} \\
\le{}& \Bigg( \|\upsilon(t_0)\| + \Big( \frac{H [L^u + K L^v]}{\delta}
+ \frac{[L^u + K L^v]\,(m^d m^f + m^e m^g + m^w)}{\delta} \Big( \frac{2}{m+n-\delta} + \frac{1}{m+n} \Big)
+ \frac{(m^d L^f + K m^e L^g)\,\bar{a}}{m+n-\delta} \Big) \big( \|\upsilon(t_0)\| + \epsilon \big) \Bigg)\, e^{-\delta(t-t_0)}.
\end{aligned}
\]
Let us verify that the sequence $\upsilon^k(t)$ is uniformly convergent. From inequality (A19), we conclude that
\[
\begin{aligned}
\|\upsilon^1(t) - \upsilon^0(t)\| \le{}& H e^{-(m+n)(t-t_0)} \int_{t_0}^{t} \big[ L^u + K L^v \big]\, \|\upsilon^0(\varsigma)\|\, d\varsigma \\
&+ \int_{t_0}^{t} e^{-(m+n)(t-s)} \int_{t_0}^{s} \big[ L^u + K L^v \big]\, \|\upsilon^0(\varsigma)\|\, d\varsigma\, \big( m^d m^f + m^e m^g + m^w \big)\, ds \\
&+ \int_{t_0}^{t} e^{-(m+n)(t-s)}\, \bar{a}\, \big( m^d L^f + K m^e L^g \big)\, \|\upsilon^0(s)\|\, ds \\
\le{}& H \big[ L^u + K L^v \big]\, e^{-(m+n)(t-t_0)} \int_{t_0}^{t} \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(s-t_0)}\, ds \\
&+ \int_{t_0}^{t} e^{-(m+n)(t-s)} \big[ L^u + K L^v \big] \big( m^d m^f + m^e m^g + m^w \big) \int_{t_0}^{s} \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(\varsigma-t_0)}\, d\varsigma\, ds \\
&+ \int_{t_0}^{t} e^{-(m+n)(t-s)}\, \bar{a}\, \big( m^d L^f + K m^e L^g \big) \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(s-t_0)}\, ds \\
\le{}& \Big( \frac{H [L^u + K L^v]}{\delta}
+ \frac{[L^u + K L^v]\,(m^d m^f + m^e m^g + m^w)}{\delta} \Big( \frac{2}{m+n-\delta} + \frac{1}{m+n} \Big)
+ \frac{\bar{a}\,(m^d L^f + K m^e L^g)}{m+n-\delta} \Big) \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(t-t_0)},
\end{aligned}
\]
and
\[
\begin{aligned}
\|\upsilon^2(t) - \upsilon^1(t)\| \le{}& H e^{-(m+n)(t-t_0)} \int_{t_0}^{t} \big[ L^u + K L^v \big]\, \|\upsilon^1(\varsigma) - \upsilon^0(\varsigma)\|\, d\varsigma \\
&+ \int_{t_0}^{t} e^{-(m+n)(t-s)} \int_{t_0}^{s} \big[ L^u + K L^v \big]\, \|\upsilon^1(\varsigma) - \upsilon^0(\varsigma)\|\, d\varsigma\, \big( m^d m^f + m^e m^g + m^w \big)\, ds \\
&+ \int_{t_0}^{t} e^{-(m+n)(t-s)}\, \bar{a}\, \big( m^d L^f + K m^e L^g \big)\, \|\upsilon^1(s) - \upsilon^0(s)\|\, ds \\
\le{}& \Big( \frac{H [L^u + K L^v]}{\delta}
+ \frac{[L^u + K L^v]\,(m^d m^f + m^e m^g + m^w)}{\delta} \Big( \frac{2}{m+n-\delta} + \frac{1}{m+n} \Big)
+ \frac{\bar{a}\,(m^d L^f + K m^e L^g)}{m+n-\delta} \Big)^{2} \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(t-t_0)}.
\end{aligned}
\]
By applying the principle of mathematical induction, we can establish that
\[
\|\upsilon^{k+1}(t) - \upsilon^k(t)\| \le
\Big( \frac{H [L^u + K L^v]}{\delta}
+ \frac{[L^u + K L^v]\,(m^d m^f + m^e m^g + m^w)}{\delta} \Big( \frac{2}{m+n-\delta} + \frac{1}{m+n} \Big)
+ \frac{\bar{a}\,(m^d L^f + K m^e L^g)}{m+n-\delta} \Big)^{k+1} \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(t-t_0)},
\]
for every k ≥ 0. From condition (C10), it follows that sup_{t ∈ [t_0, ∞)} ‖υ^{k+1}(t) − υ^k(t)‖ → 0 as k → ∞. This result confirms that the sequence υ^k(t) converges uniformly to the unique solution, υ(t) = y(t) − z(t), of integral equation (A17), satisfying the inequality
\[
\|y(t) - z(t)\| \le \big( \|y(t_0) - z(t_0)\| + \epsilon \big)\, e^{-\delta(t-t_0)}.
\]
Consequently, the solution y ( t ) of system (11) is exponentially stable.
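The convergence step can be made fully explicit. Writing κ (our shorthand, not notation from the text) for the bracketed coefficient in the last displayed estimate, condition (C10) amounts to the requirement κ < 1, and the Weierstrass test then controls the Cauchy differences of the sequence:
\[
\|\upsilon^{k+p}(t) - \upsilon^{k}(t)\|
\le \sum_{j=k}^{k+p-1} \|\upsilon^{j+1}(t) - \upsilon^{j}(t)\|
\le \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(t-t_0)} \sum_{j=k}^{\infty} \kappa^{j+1}
= \frac{\kappa^{k+1}}{1-\kappa}\, \big( \|\upsilon(t_0)\| + \epsilon \big)\, e^{-\delta(t-t_0)},
\]
which tends to zero as k → ∞ uniformly in t ≥ t_0, so the sequence υ^k(t) is uniformly Cauchy and its limit inherits the exponential bound.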
Next, we demonstrate that the solution φ ( t ) = h 1 ( y ( t ) ) of neural network (2) also exhibits exponential stability. If ψ ( t ) = h 1 ( z ( t ) ) represents another solution of system (2), then we obtain
\[
\|\psi(t) - \varphi(t)\| = \|h^{-1}(z(t)) - h^{-1}(y(t))\| \le \bar{a}\, \|y(t) - z(t)\| \le \bar{a}\, \big( \|y(t_0) - z(t_0)\| + \epsilon \big)\, e^{-\delta(t-t_0)}.
\]
Therefore, the Poisson stable solution φ ( t ) of neural network (2) is confirmed to be exponentially stable. □
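The qualitative conclusion can be illustrated numerically. The sketch below is not the networks (2) or (11) of the paper: the scalar equation, all coefficient choices, and the name `simulate` are ours, chosen only so that the decay rate dominates the coupling through the piecewise constant argument (the role played by condition (C10) above). It integrates a Cohen–Grossberg-type scalar equation with argument γ(t) = θ⌊t/θ⌋ by the Euler scheme and compares two solutions started from different initial data.

```python
import math

def simulate(x0, T=20.0, dt=1e-3, theta=1.0):
    """Euler scheme for the illustrative scalar equation
       x'(t) = -a(x(t)) * ( b(x(t)) - c * f(x(gamma(t))) ),
    with piecewise constant argument gamma(t) = theta * floor(t / theta).
    All coefficients are hypothetical choices, not taken from the paper."""
    a = lambda x: 1.0 + 0.2 * math.tanh(x)   # amplification, bounded below by 0.8 > 0
    b = lambda x: 2.0 * x                    # rate function, slope 2
    f = math.tanh                            # activation, Lipschitz constant 1
    c = 0.5                                  # coupling weight: |c| * Lip(f) = 0.5 < 2
    steps = int(T / dt)
    steps_per_theta = int(theta / dt)
    x = x_gamma = x0
    for k in range(steps):
        if k % steps_per_theta == 0:
            x_gamma = x                      # freeze the argument on [k*theta, (k+1)*theta)
        x += dt * (-a(x) * (b(x) - c * f(x_gamma)))
    return x

# Two solutions from well-separated initial data approach each other.
gap0 = abs(1.5 - (-1.0))
gapT = abs(simulate(1.5) - simulate(-1.0))
```

Because the linear decay rate (2) strictly dominates the coupling through the frozen argument (0.5), the gap between the two trajectories contracts by several orders of magnitude over [0, 20], mirroring the exponential stability established in the theorem.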

Figure 1. The graph of the piecewise constant argument function γ(t).
Figure 2. The coordinates (a) and trajectory (b) of the solution ψ(t) with initial data ψ1(0) = 0.5, ψ2(0) = 0.4, and ψ3(0) = 0.2.
Akhmet, M.; Nugayeva, Z.; Seilova, R. Alpha Unpredictable Cohen–Grossberg Neural Networks with Poisson Stable Piecewise Constant Arguments. Mathematics 2025, 13, 1068. https://doi.org/10.3390/math13071068