Article

State Estimation of Memristor Neural Networks with Model Uncertainties

Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Machines 2022, 10(12), 1228; https://doi.org/10.3390/machines10121228
Submission received: 24 October 2022 / Revised: 7 December 2022 / Accepted: 12 December 2022 / Published: 15 December 2022
(This article belongs to the Section Automation and Control Systems)

Abstract:
This paper is concerned with the problem of state estimation of memristor neural networks with model uncertainties. Considering that the model uncertainties are composed of time-varying delays, floating parameters and unknown functions, an improved method based on long short-term memory neural networks (LSTMs) is used to deal with the model uncertainties. It is proved that the improved LSTMs can approximate any nonlinear model with arbitrarily small error. On this basis, adaptive updating laws for the weights of the improved LSTMs are proposed by using the Lyapunov method. Furthermore, for the problem of state estimation of memristor neural networks, a new full-order state observer is proposed to reconstruct the states from the measurement output of the system. The state estimation error is proved to be asymptotically stable by using the Lyapunov method and linear matrix inequalities. Finally, two numerical examples are given, and simulation results demonstrate the effectiveness of the scheme, especially when the memristor neural networks contain model uncertainties.

1. Introduction

Since the memristor was first proposed by Chua in 1971 [1], memristors have found use in a wide range of fields. Vector–matrix multiplication can be realized by the crossbar array structure of memristors, and a neural network can be realized on top of it by a corresponding coding scheme, so various neural networks based on memristor hardware have developed rapidly. Because memristor neural networks have the incomparable advantage of reflecting memorized information, they are particularly suitable for self-adaptability, nonlinear systems, self-learning, and associative storage, and are thus widely used in brain simulation, pattern recognition, neuromorphic computation, knowledge acquisition, and various hardware applications involving neural networks [2,3,4,5,6,7,8,9,10,11,12,13,14,15]. To list a few, the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, was shown in [16] for a simple network: a single-layer perceptron (an algorithm for linear classification). In [17], a structure suppressing the overshoot current was investigated to approach the conditions required of an ideal synapse in a neuromorphic system. In [18], fully memristive artificial neural networks were built by using diffusive memristors based on silver nanoparticles in a dielectric film. The electrical properties and conduction mechanism of a fabricated IGZO-based memristor device in a 10 × 10 crossbar array were analyzed in [19]. Operation of a one-hidden-layer perceptron classifier entirely in mixed-signal integrated hardware was demonstrated in [20]. Therefore, research on memristor neural networks is necessary and meaningful. Although many papers have extended memristor neural networks and solved some problems, open problems remain.
Therefore, memristor neural networks, including their various deformations, have broad market prospects. In particular, research on memristor neural networks with model uncertainties has become a hot topic.
In recent decades, scholars have carried out a great amount of research and analysis on memristor neural networks. The results can be broadly divided into four categories: (1) stability analysis of memristor neural networks [21,22,23,24,25]; (2) state estimation of memristor neural networks [26,27,28,29]; (3) synchronization of memristor neural networks [30,31,32]; (4) control of memristor neural networks [33,34,35]. In practice, time-varying delays inevitably exist in the hardware implementation of memristor neural networks. Due to these delays, the future states of the system are affected by the previous states, which leads to instability and poor control performance. Consequently, state estimation of memristor neural networks is of great research value, and a large part of the research has focused on it. Note that the above results are generally based on known structures and parameters of memristor neural networks without model uncertainties. In practice, the hardware implementation of memristor neural networks usually fails to attain the ideal design values, and design deviations exist; in particular, model uncertainties often arise in the hardware implementation. Therefore, model uncertainties and model errors are common in hardware memristor neural networks, and, affected by them, state estimation of memristor neural networks is likewise a challenging problem. Considering the above analysis, it is necessary to study state estimation of memristor neural networks with model uncertainties.
A great amount of valuable research on state estimation of memristor neural networks with model uncertainties can be found in [26,27,28,29,36,37,38]. In [26], passivity theory was used to deal with the state estimation problem of memristor-based recurrent neural networks with time-varying delays. By using a Lyapunov–Krasovskii functional (LKF), a convex combination technique and a reciprocal convexity technique, a delay-dependent state estimation matrix was established, and the expected estimator gain matrix was obtained by solving linear matrix inequalities (LMIs). Unfortunately, the model of the system must be determined and the functions in the system must be known. In [27], for memristor neural networks with randomness, the random system was transformed into an interval parameter system by the Filippov approach, and an H∞ state observer was designed on this basis. One limitation of that paper is that the system is affected by random interference, which is regular and limited, rather than by model uncertainty. In [28], for memristor-based bidirectional associative memory neural networks with additive time-varying delays, a state estimation matrix was constructed by selecting an appropriate LKF and using a Cauchy–Schwarz-based summation inequality, and the gain matrix was obtained from the LMIs; that paper shares the problems mentioned above. In [29], for a class of memristor neural networks with different types of inductance functions and uncertain time-varying delays, a state estimation matrix was constructed by selecting a suitable LKF, and the gain matrix was solved by using the LMIs and a Wirtinger-type inequality. Model uncertainty is involved there, but only uncertainty in the time-varying delays. In [36], an extended dissipative state observer was proposed by using nonsmooth analysis and a new LKF.
In [37], based on the basic properties of quaternion values, a state observer was designed for quaternion-valued memristor neural networks, and algebraic conditions were given to ensure global dissipativity. The methods proposed in [36,37] are not suitable for memristor neural networks with model uncertainties. In [38], for memristor neural networks with random sampling, the randomness was represented by two different sampling periods satisfying a Bernoulli distribution. The random sampling system was transformed into a system with random parameters by using an input delay method; on this basis, a state observer was designed based on the LMIs and an LKF. From the above discussion, it is not difficult to see that a similar method is used to estimate the states of memristor neural networks: by selecting an appropriate LKF, a state observation matrix is constructed based on the structure of the system, and the gain matrix is solved by utilizing the LMIs. It can also be seen that most studies on state estimation of memristor neural networks share the same restriction: the system cannot contain model uncertainties. Some studies include model uncertainties, but only in the time-varying delays; others include model uncertainties, but only in the fluctuation of parameters. There are few studies on the state estimation of memristor neural networks whose model uncertainties include time-varying delays, floating parameters and unknown functions. This leaves considerable research potential to tap.
When memristor neural networks are designed and translated into hardware by the designer, the model uncertainties of the system include only the time-varying delays and floating parameters. In practice, the situation is not unique: sometimes it is necessary to analyze memristor neural networks designed by other designers. In that case, the model uncertainties of memristor neural networks include time-varying delays, floating parameters and unknown functions. The model of the memristor neural networks can be designed as in Figure 1 [28]. Motivated by the above discussion, the main concern of this paper is to design a state observer for memristor neural networks with model uncertainties, which include time-varying delays, floating parameters and unknown functions. The model uncertainties are composed of current states, past states and unknown functions. In order to approximate the model uncertainties, which contain memory information, improved long short-term memory neural networks (LSTMs) are proposed. It is theoretically proved that the improved LSTMs can approximate the model uncertainties with arbitrary accuracy. Memristor neural networks with model uncertainties can thus be transformed into a new system with improved LSTMs. On this basis, a full-order state observer is designed according to the output of the system. An error matrix of the states is constructed by a designed LKF, and the gain matrix is solved by the LMIs. In order to make the new system more accurate, a new error matrix of the states is constructed by using Young's inequality based on an LKF. On this basis, adaptive updating laws for the weights of the improved LSTMs are designed to reduce the state errors. The main contributions of this paper are as follows.
  • Improved LSTMs are proposed for memristor neural networks with model uncertainties. It is proved that the improved LSTMs can closely approximate the model uncertainties in memristor neural networks, which include time-varying delays, floating parameters and unknown functions. This has not been addressed in other studies.
  • By utilizing the LMIs and a LKF, a full-order observer based on the output of the system is presented to obtain state information and solve the problem of state estimation.
  • By using Young’s inequality and a designed LKF, adaptive updating laws of the weights of improved LSTMs are given to obtain the new system with improved LSTMs precisely.
This paper is organized as follows. In Section 2, the problem is formulated, and several essential assumptions and lemmas are listed. Section 3 presents the primary theorems, including improved LSTMs, observer design for memristor neural networks with model uncertainties, and adaptive updating laws of the weights of improved LSTMs. In Section 4, the effectiveness of the proposed scheme is demonstrated through numerical examples. Finally, the conclusions are drawn in Section 5.
Notation: $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. For a given matrix $A$ or vector $B$, $A^T$ and $B^T$ denote their transposes, and $\operatorname{tr}\{A\}$ denotes the trace of $A$. $A < 0$ indicates a negative definite matrix.

2. Preliminaries

Consider the following memristor neural networks; the same model can be found in [26,27,28,36,37]:
$$\begin{aligned}
\dot{x}_1(t) &= -a_1 x_1(t) + \sum_{j=1}^{n} b_{1j}(x_1(t))\,f_j(x_j(t)) + \sum_{j=1}^{n} c_{1j}(x_1(t))\,g_j\big(x_j(t-\tau_j(t))\big) + U_1,\\
\dot{x}_2(t) &= -a_2 x_2(t) + \sum_{j=1}^{n} b_{2j}(x_2(t))\,f_j(x_j(t)) + \sum_{j=1}^{n} c_{2j}(x_2(t))\,g_j\big(x_j(t-\tau_j(t))\big) + U_2,\\
&\;\;\vdots\\
\dot{x}_n(t) &= -a_n x_n(t) + \sum_{j=1}^{n} b_{nj}(x_n(t))\,f_j(x_j(t)) + \sum_{j=1}^{n} c_{nj}(x_n(t))\,g_j\big(x_j(t-\tau_j(t))\big) + U_n,\\
y_1(t) &= \sum_{j=1}^{n} h_{1j}\,x_j(t),\qquad
y_2(t) = \sum_{j=1}^{n} h_{2j}\,x_j(t),\qquad \ldots,\qquad
y_m(t) = \sum_{j=1}^{n} h_{mj}\,x_j(t),
\end{aligned}$$
where $x_i(t)$ $(i = 1, \ldots, n)$ represents the state variable of the memristor neural networks, and $n$ is the system dimension; $a_i$ $(i = 1, \ldots, n)$ is the self-feedback coefficient, which satisfies $a_i > 0$; $f_j(x_j(t))$ and $g_j(x_j(t-\tau_j(t)))$ $(j = 1, \ldots, n)$ represent the activation functions of the states $x_j(t)$ and $x_j(t-\tau_j(t))$, respectively; $b_{ij}(x_i(t))$ represents the memristive synaptic connection weight between states $x_i(t)$ and $x_j(t)$, and $c_{ij}(x_i(t))$ represents the memristive synaptic connection weight between states $x_i(t)$ and $x_j(t-\tau_j(t))$; $\tau_j(t)$ $(j = 1, \ldots, n)$ denotes the time-varying delay, which satisfies $0 \le \tau_j(t) \le \tau_{max}$, where $\tau_{max}$ is an upper-bound constant; $U_i$ $(i = 1, \ldots, n)$ denotes the input of the system, and $y_i(t)$ $(i = 1, \ldots, m)$ represents the measurement output of the system; $h_{ij}$ $(i = 1, \ldots, m;\ j = 1, \ldots, n)$ is the measurement constant from state $x_j(t)$ to output $y_i(t)$, and $m$ is the output dimension.
The system (1) can be represented in vector form,
$$\dot{x}(t) = A x(t) + B f(x(t)) + C g\big(x(t-\tau(t))\big) + U,\qquad y(t) = H x(t),$$
where
$$A = \operatorname{diag}(-a_1, -a_2, \ldots, -a_n),\quad
B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n}\\ b_{21} & b_{22} & \cdots & b_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ b_{n1} & b_{n2} & \cdots & b_{nn} \end{bmatrix},\quad
C = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n}\\ c_{21} & c_{22} & \cdots & c_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{bmatrix},\quad
H = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1n}\\ h_{21} & h_{22} & \cdots & h_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ h_{m1} & h_{m2} & \cdots & h_{mn} \end{bmatrix},$$
$$x(t) = [x_1(t), \ldots, x_n(t)]^T,\quad U = [U_1, \ldots, U_n]^T,\quad y(t) = [y_1(t), \ldots, y_m(t)]^T,$$
$$f(x(t)) = [f_1(x_1(t)), \ldots, f_n(x_n(t))]^T,\quad g\big(x(t-\tau(t))\big) = \big[g_1(x_1(t-\tau_1(t))), \ldots, g_n(x_n(t-\tau_n(t)))\big]^T.$$
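As an illustration, the vector-form dynamics above can be integrated with a simple forward-Euler scheme. All parameters below (dimensions, matrices, activations, initial state) are hypothetical stand-ins, not values from this paper:

```python
import numpy as np

# Hypothetical 2-D instance of system (2); all values are illustrative.
A = np.diag([-2.0, -2.5])                 # diag(-a_1, -a_2), a_i > 0
B = np.array([[0.3, 0.2], [0.1, 0.4]])
C = np.array([[0.2, 0.1], [0.3, 0.2]])
U = np.array([0.2, 0.3])
f = np.tanh                               # activation of current states
g = np.tanh                               # activation of delayed states
tau = lambda t: 0.05 * t / (1.0 + t)      # bounded time-varying delay

dt, T_end = 0.001, 5.0
steps = int(T_end / dt)
x_hist = np.zeros((steps + 1, 2))
x_hist[0] = [1.0, -1.0]

for k in range(steps):
    t = k * dt
    k_del = max(0, k - int(tau(t) / dt))  # index of the delayed state x(t - tau(t))
    x_hist[k + 1] = x_hist[k] + dt * (
        A @ x_hist[k] + B @ f(x_hist[k]) + C @ g(x_hist[k_del]) + U
    )

print(x_hist[-1])
```

The delayed term is handled by indexing into the stored trajectory, which is the usual discrete-time treatment of a bounded time-varying delay.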
As mentioned in the introduction, most studies involve model uncertainties that only include floating parameters. In the hardware implementation of memristor neural networks, the memristive synaptic connection weights $b_{ij}$ and $c_{ij}$ will deviate from their design values [28]. The fluctuation of the parameters $b_{ij}$ and $c_{ij}$ is regarded as model uncertainty; this is the starting point of much research on state estimation of memristor neural networks, such as [26,27,28,36,37,38]. Some studies regard the time-varying delay $\tau_j(t)$ as model uncertainty and study state estimation on that basis, for example [29]. It should be noted that the model uncertainties in all the above studies do not include $f_i(x_i(t))$ and $g_i(x_i(t-\tau_i))$: both must be known, and $b_{ij}$ and $c_{ij}$ float within the ideal range. If $f_i(x_i(t))$ and $g_i(x_i(t-\tau_i))$ are unknown, and the ideal values of $b_{ij}$ and $c_{ij}$ are unknown, then the model uncertainties include the floating parameters $b_{ij}$ and $c_{ij}$, the time-varying delay $\tau_j(t)$ and the unknown functions $f_i(x_i(t))$ and $g_i(x_i(t-\tau_i))$, and none of the above studies are applicable. State estimation of memristor neural networks with model uncertainties including floating parameters, time-varying delays and unknown functions is the main concern of this paper.
Remark 1.
In other studies, model uncertainties only include floating parameters b i j and c i j or time-varying delay τ j ( t ) . Functions f i ( x i ( t ) ) and g i ( x i ( t τ i ) ) must be known. In this paper, model uncertainties include floating parameters b i j and c i j , time-varying delay τ j ( t ) and unknown functions f i ( x i ( t ) ) and g i ( x i ( t τ i ) ) .
As shown in system (2), the model uncertainties contain the memory information $x_i(t-\tau_i)$. LSTMs are the most suitable tool to deal with such model uncertainties. LSTMs are networks of basic LSTM cells, and the architecture of a conventional LSTMs cell is illustrated in Figure 2. A memory cell, an input gate, an output gate and a forgetting gate make up an LSTMs cell. The forgetting gate, input gate, and output gate respectively determine whether historical information, input information, and output information are retained [39]. The specific computation is shown in Equation (3).
$$\begin{aligned}
f_t &= \sigma\big(W_f\,[x_t, h_{t-1}] + b_f\big),\\
i_t &= \sigma\big(W_i\,[x_t, h_{t-1}] + b_i\big),\\
o_t &= \sigma\big(W_o\,[x_t, h_{t-1}] + b_o\big),\\
g_t &= \tanh\big(W_g\,[x_t, h_{t-1}] + b_g\big),\\
c_t &= f_t \otimes c_{t-1} \oplus i_t \otimes g_t,\\
h_t &= o_t \otimes \tanh(c_t),
\end{aligned}$$
where $f_t$ denotes the forgetting gate; $i_t$ and $o_t$ represent the input gate and output gate, respectively; $g_t$ is the updating vector of the LSTM cell; $h_t$ is the hidden state vector and $h_{t-1}$ is the hidden state vector at step $t-1$; $x_t$ is the input vector of the LSTM cell; $c_t$ is the state vector of the cell and $c_{t-1}$ is the state vector of the cell at step $t-1$; $W$ is the weight matrix and $b$ refers to the bias vector; $\sigma(\cdot)$ and $\tanh(\cdot)$ are the sigmoid and tanh activation functions, respectively; $\otimes$ and $\oplus$ represent elementwise multiplication and addition, respectively.
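For concreteness, the gate equations above can be sketched in NumPy; the dimensions, random weights and zero biases are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One step of the conventional LSTM cell in Equation (3).
    Each W[k] maps the concatenated [x_t, h_prev]; f/i/o/g are the four gates."""
    z = np.concatenate([x_t, h_prev])
    f_t = sigmoid(W["f"] @ z + b["f"])    # forgetting gate
    i_t = sigmoid(W["i"] @ z + b["i"])    # input gate
    o_t = sigmoid(W["o"] @ z + b["o"])    # output gate
    g_t = np.tanh(W["g"] @ z + b["g"])    # candidate updating vector
    c_t = f_t * c_prev + i_t * g_t        # f ⊗ c_{t-1} ⊕ i ⊗ g (elementwise)
    h_t = o_t * np.tanh(c_t)              # o ⊗ tanh(c_t)
    return h_t, c_t

# Toy sizes: input dimension 2, hidden dimension 3.
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((3, 5)) for k in "fiog"}
b = {k: np.zeros(3) for k in "fiog"}
h_t, c_t = lstm_cell(rng.standard_normal(2), np.zeros(3), np.zeros(3), W, b)
print(h_t.shape, c_t.shape)
```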
Remark 2.
The LSTMs cell is not completely suitable for estimating the states of memristor neural networks with model uncertainties. It needs to be improved to save computation and to become more suitable for state estimation.
Moreover, in order to improve the LSTMs, design the state observer of memristor neural networks with model uncertainties, and derive the updating laws of the weights of the improved LSTMs, some assumptions and lemmas need to be introduced for the following proof.
Assumption 1.
The functions $f_j(\cdot)$ and $g_j(\cdot)$ satisfy local Lipschitz conditions: for all $k, p \in \mathbb{R}$, $\|f_j(k) - f_j(p)\| \le K_f \|k - p\|$ and $\|g_j(k) - g_j(p)\| \le K_g \|k - p\|$, where $K_f$ and $K_g$ are Lipschitz constants, and $f_j(0) = g_j(0) = 0$.
f j ( · ) and g j ( · ) are the activation functions of memristor neural networks, so Assumption 1 is generally tenable.
Lemma 1
([40]). $k(\cdot)$ is a continuous function defined on a set $\Omega$. Multilayer neural networks can be defined as
$$\bar{k} = W^T S(VI),$$
where $W$ and $V$ are the second-layer weight matrix and the first-layer weight vector of the multilayer neural networks, respectively; $I$ is the input vector of the multilayer neural networks, and $S(\cdot)$ is their activation function.
Then, for a given desired level of accuracy $\varepsilon > 0$, there exist ideal weights $\bar{W}$ and $\bar{V}$ satisfying the following inequality,
$$\sup_{I \in \Omega}\,\big\|k(\cdot) - \bar{k}\big\| \le \varepsilon.$$
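A minimal numerical illustration of Lemma 1's approximator $\bar{k} = W^T S(VI)$: the target function, network size, and the choice to fix the first-layer weights at random and fit only the output weights by least squares are all assumptions made for this sketch:

```python
import numpy as np

# Two-layer approximator k_bar = W^T S(V I): fixed random first layer V,
# output weights W fitted by least squares. Target function is illustrative.
rng = np.random.default_rng(3)
I = np.linspace(-2.0, 2.0, 200).reshape(1, -1)   # inputs, shape (1, N)
k = np.sin(I).ravel()                            # target function k(.)

V = rng.standard_normal((50, 1))                 # first-layer weights (fixed)
S = np.tanh(V @ I)                               # hidden activations S(V I), (50, N)
W, *_ = np.linalg.lstsq(S.T, k, rcond=None)      # solve for output weights

k_bar = W @ S                                    # approximation over the grid
err = np.max(np.abs(k_bar - k))
print(err)
```

Increasing the hidden width shrinks the achievable error, which is the content of the lemma.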
Lemma 2.
(Young’s inequality) For all $x, y \in \mathbb{R}$, the following inequality holds,
$$x y \le \frac{\varepsilon^p}{p}\,|x|^p + \frac{1}{q\,\varepsilon^q}\,|y|^q,$$
where $\varepsilon > 0$, $p > 1$, $q > 1$, and $(p-1)(q-1) = 1$.
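Young's inequality is easy to spot-check numerically; the sketch below samples random $x$, $y$, $\varepsilon$ and $p$, with $q = p/(p-1)$ following from $(p-1)(q-1)=1$:

```python
import numpy as np

def young_bound(x, y, eps, p):
    """Right-hand side of Young's inequality; q = p/(p-1) from (p-1)(q-1) = 1."""
    q = p / (p - 1.0)
    return (eps**p / p) * abs(x)**p + abs(y)**q / (q * eps**q)

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.uniform(-5.0, 5.0, size=2)
    eps = rng.uniform(0.1, 3.0)
    p = rng.uniform(1.1, 4.0)
    assert x * y <= young_bound(x, y, eps, p) + 1e-9
print("verified on 1000 random samples")
```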

3. Main Result

In this part, improved LSTMs, state observer design for memristor neural networks with model uncertainties, and adaptive updating laws of the weights of improved LSTMs will be discussed.
To begin with, the system (2) can be redefined as follows,
$$\dot{x}(t) = A x(t) + K\big(x(t), x(t-\tau(t))\big) + U,\qquad y(t) = H x(t),$$
where $K(\cdot)$ is a vector of functions, defined as $\big[K_1(x(t), x(t-\tau(t))),\ K_2(x(t), x(t-\tau(t))),\ \ldots,\ K_n(x(t), x(t-\tau(t)))\big]^T$.
As mentioned in Remark 1, $K(\cdot)$ is the function vector of model uncertainties formed by the floating parameters $b_{ij}$ and $c_{ij}$, the time-varying delay $\tau_j(t)$ and the unknown functions $f_i(x_i(t))$ and $g_i(x_i(t-\tau_i))$. In order to approximate the unknown function vector $K(x(t), x(t-\tau(t)))$, improved LSTMs are proposed, and an improved LSTMs cell is shown in Figure 3.
Comparing Figure 2 and Figure 3, it can be seen that the input gate $i_t$ and the hidden state vector $h_{t-1}$ at step $t-1$ have been removed. Since $x(t)$ is part of $K(\cdot)$ in the form of a function vector, the input gate can be removed; $x(t)$ enters the LSTMs cell through the tanh function. The reason why $h_{t-1}$ is removed is that $K(\cdot)$ contains $x(t-\tau(t))$, so the function of $h_{t-1}$ can be merged into $c_{t-1}$ to save computation. The output gate $o_t$ is also removed, and $h_t$ is used as the output of the LSTMs cell to simplify its structure. Therefore, the improved LSTMs cell is made up of the following parts: (1) the state vector $x(t)$ of the system at time $t$ and the weighted state vector $c_{t-1}$ of the system at time $t-1$ constitute the input of the improved LSTMs cell; (2) $c_t$ is the vector that holds the state of the improved LSTMs cell at time $t$; (3) $h_t$ is the output vector of the improved LSTMs cell at time $t$; (4) $\sigma(x(t))$ is the forgetting function at time $t$, which controls whether the memory information stored by the improved LSTMs cell at time $t-1$ is added to the computation at time $t$. The specific computation of the simplified and improved LSTMs cell can be expressed as follows,
$$c_t = W_{t,i}\,x_t \oplus c_{t-1} \otimes \sigma(x_t),\qquad
h_t = \tanh\big(W_{t,i}\,x_t \oplus c_{t-1} \otimes \sigma(x_t) \oplus b_{t,i}\big),$$
where $W_{t,i} = [W_{t,i,1},\ W_{t,i,2},\ \ldots,\ W_{t,i,n}]$ denotes the weight vector of the $i$th cell and $b_{t,i}$ is a bias constant of the $i$th cell; $x_t = [x_1(t),\ x_2(t),\ \ldots,\ x_n(t)]^T$ represents the state vector at time $t$.
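The two-line cell computation above can be sketched as follows for a single cell; the concrete form of the forgetting function $\sigma(\cdot)$ acting on the state vector (a sigmoid of the summed state) is an assumption of this sketch, as are all numeric values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def improved_cell(x_t, c_prev, w, b):
    """One step of the simplified cell: c_t = W_{t,i} x_t (+) c_{t-1} (x) sigma(x_t),
    h_t = tanh(c_t (+) b_{t,i}). The sigmoid-of-sum form of the forgetting
    function is an assumed concrete choice."""
    forget = sigmoid(np.sum(x_t))      # scalar forgetting function sigma(x_t)
    c_t = w @ x_t + c_prev * forget    # weighted input plus gated memory
    h_t = np.tanh(c_t + b)             # cell output
    return h_t, c_t

rng = np.random.default_rng(2)
w = 0.1 * rng.standard_normal(3)       # W_{t,i} for one cell, n = 3
h_t, c_t = improved_cell(rng.standard_normal(3), 0.0, w, 0.05)
print(h_t, c_t)
```

Compared with the conventional cell, there is no input gate, no output gate, and no separate hidden state, matching the simplifications described above.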
Based on the simplified and improved LSTMs cell, the improved LSTMs are illustrated in Figure 4. In Figure 4, each column represents a neural network composed of $p$ improved LSTMs cells at time $j$. The outputs of the $p$ improved LSTMs cells pass through the weight matrix $V_j$ to obtain the output vector of the neural network at time $j$, which is used to approximate $K(\cdot)$. The networks at consecutive times are connected through $c_j$ and $c_{j-1}$ to form neural networks over all times. $x_j$ represents the state vector at time $j$, with $j \in [1, t]$. $c_j^i$ denotes the output of the hidden states of the $i$th ($i \in [1, p]$) LSTMs cell at time $j$, and $p$ is the number of LSTMs cells. $W_{j,i}$ represents the weight vector of the $i$th LSTMs cell at time $j$, and $b_{j,i}$ is its bias. $h_j^i$ denotes the output of the states of the $i$th LSTMs cell at time $j$. $V_{j,i,l}$ represents the weight coefficient from the output of the $i$th LSTMs cell to the $l$th system output at time $j$, with $l \in [1, m]$. $y_{j,l}$ denotes the $l$th system output at time $j$. The improved LSTMs can approximate any nonlinear function by the following theorem.
Theorem 1.
$k(\cdot)$ is a continuous nonlinear function defined on a set $\Omega$. The improved LSTMs are shown in Figure 3. $\bar{k}$ is an approximation of $k(\cdot)$ based on the improved LSTMs. Then, for a given desired level of accuracy $\varepsilon > 0$, there exist ideal weights $\bar{W}_{j,i}$ ($j \in [1,t]$, $i \in [1,p]$) and $\bar{V}_{j,i,l}$ ($j \in [1,t]$, $i \in [1,p]$, $l \in [1,m]$) satisfying the following inequality,
$$\sup_{x_j \in \Omega}\,\big\|k(\cdot) - \bar{k}\big\| \le \varepsilon.$$
The proof of Theorem 1 can be found in Appendix A.
Based on Theorem 1, the estimation system for the system (4) can be defined as the following formula,
$$\dot{\hat{x}}(t) = A \hat{x}(t) + \bar{K} + L\big(y(t) - H\hat{x}(t)\big) + U,\qquad \hat{y}(t) = H \hat{x}(t),$$
where $L \in \mathbb{R}^{n \times m}$ denotes the observer gain matrix; $\bar{K} \in \mathbb{R}^n$ is an estimated function vector of $K(x(t), x(t-\tau(t)))$ based on the improved LSTMs, which satisfies Theorem 1. $\bar{K}$ is given in Equation (8),
$$\bar{K} = \bar{V}_t^T \tanh\Big(\bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \bar{b}_t\Big),$$
where $\bar{V}_t \in \mathbb{R}^{p \times n}$ and $\bar{W}_i \in \mathbb{R}^{p \times n}$ denote the ideal weight matrices, and $\bar{b}_t \in \mathbb{R}^p$ is the ideal bias vector.
The function $\sigma(x_j)$ is determined by the time-varying delay $\tau_j(t)$, which satisfies $0 \le \tau_j(t) \le \tau_{max}$. $\sigma(x_j)$ is 1 in the range $[t - \tau_{max},\ t]$ and 0 elsewhere. This ensures that all the data in the interval from $t - \tau_{max}$ to $t$ are included in the calculation. Considering the system (4) and the estimation system (7), the error system can be obtained as follows,
$$e(t) = \hat{x}(t) - x(t),$$
$$\begin{aligned}
\dot{e}(t) &= \dot{\hat{x}}(t) - \dot{x}(t)\\
&= \Big[A\hat{x}(t) + \bar{K} + L\big(y(t) - H\hat{x}(t)\big) + U\Big] - \Big[A x(t) + K\big(x(t), x(t-\tau(t))\big) + U\Big]\\
&= (A - LH)\,e(t) + \bar{K} - K\big(x(t), x(t-\tau(t))\big).
\end{aligned}$$
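The forgetting function $\sigma(x_j)$ of the preceding paragraph acts as a window indicator over the last $\tau_{max}$ of data; a discrete-time sketch (the step counts below are illustrative, corresponding to $\tau_{max} = 0.05$ s at a 0.001 s step):

```python
def sigma_window(j, t_idx, win):
    """Forgetting function as described in the text: 1 for sample indices j
    inside the last `win` steps ending at t_idx (the tau_max window), else 0."""
    return 1.0 if t_idx - win <= j <= t_idx else 0.0

# tau_max = 0.05 s at a 0.001 s step corresponds to a 50-step window.
win, t_idx = 50, 1000
active = [j for j in range(t_idx + 1) if sigma_window(j, t_idx, win) == 1.0]
print(active[0], active[-1], len(active))
```

Only the samples inside this window contribute to the delayed-memory sum in the expression for $\bar{K}$.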
Assumption 2.
For the unknown function $K_i(x(t), x(t-\tau(t)))$ and the estimated function $\bar{K}_i$ ($i = 1, 2, \ldots, n$), there exist Lipschitz constant vectors $K_{L1}$ and $K_{L2}$ which satisfy the following inequality,
$$\big|K_i(x(t), x(t-\tau(t))) - \bar{K}_i\big| \le K_{L1}^T\,\big|x(t) - \hat{x}(t)\big| + K_{L2}^T\,\big|x(t-\tau(t)) - \hat{x}(t-\tau(t))\big|.$$
Considering Theorem 1, $\bar{K}_i$ is an estimate of $K_i(x(t), x(t-\tau(t)))$ with finite error. Likewise, $\bar{K}_i$ is a function of $\hat{x}(t)$ and $\hat{x}(t-\tau(t))$. On this basis, considering Assumption 1, Assumption 2 is tenable.
Theorem 2.
Suppose that Assumption 2 holds for the system (4) and the estimation system (7). If there exist symmetric positive definite matrices $P$, $Q$, $M$, a diagonal matrix $F$, a matrix $G \in \mathbb{R}^{n \times p}$ and a real constant $\delta > 0$ such that inequality (11) holds,
$$\begin{bmatrix}
\Omega_1 & P & M & M\\
P & -F & 0 & 0\\
M & 0 & \Omega_2 & 2M\\
M & 0 & 2M & -2M
\end{bmatrix} < 0,$$
where $\Omega_1 = A^T P - H^T G^T + P A - G H + Q + 2 K_{L1}\operatorname{tr}\{F\} K_{L1}^T$ and $\Omega_2 = 2 K_{L2}\operatorname{tr}\{F\} K_{L2}^T - (1-\delta) Q - 2M$.
Then, the error system (9) is asymptotically stable with the observer gain matrix calculated by $L = P^{-1} G$. The proof of Theorem 2 can be found in Appendix B.
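Once the LMI is solved for $P$ and $G$, recovering the gain $L = P^{-1}G$ is a single linear solve; the matrices below are illustrative stand-ins, not the LMI solution of this paper:

```python
import numpy as np

# Recovering the observer gain from Theorem 2: L = P^{-1} G.
# P and G here are illustrative stand-ins for an LMI solver's output.
P = np.array([[0.036, 0.003],
              [0.003, 0.036]])             # symmetric positive definite
G = np.array([[0.032],
              [0.027]])                    # LMI decision matrix (n x m)

L = np.linalg.solve(P, G)                  # solve P L = G rather than inverting P
print(L.ravel())
```

Solving the linear system is numerically preferable to forming $P^{-1}$ explicitly, especially when $P$ is ill-conditioned.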
Based on Theorem 2, the observer gain matrix $L$ can be obtained. Considering the function vector $\bar{K}$ in system (7), the weight matrices $\bar{W}_i$ and $\bar{V}_i$ are ideal. In fact, the ideal weights are hard to select, and the estimated weights need to be adjusted by adaptive laws to approach the ideal weights. With reference to the system (7), the estimated system can be redefined as follows,
$$\dot{\hat{x}}(t) = A \hat{x}(t) + \hat{K} + L\big(y(t) - H\hat{x}(t)\big) + U,\qquad \hat{y}(t) = H \hat{x}(t),$$
where $\hat{K}$ is an estimated function vector of $\bar{K}$. $\hat{K}$ is given in Equation (13),
$$\hat{K} = \hat{V}_t^T \tanh\Big(\hat{W}_t x_t + \sum_{i=1}^{t-1} \hat{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \hat{b}_t\Big),$$
where $\hat{V}_t$ and $\hat{W}_i$ are estimated weight matrices, and $\hat{b}_t$ is an estimated bias vector.
With reference to the error system (9), the error system can be obtained as follows by using Equation (6),
$$\dot{e}(t) = (A - LH)\,e(t) + \hat{K} - \bar{K} - \epsilon_1,$$
where $\epsilon_1$ is an error vector.
For the error weight matrices $\tilde{V}_t$ and $\tilde{W}_i$ and an error weight vector $\tilde{b}_t$, we have
$$\tilde{V}_t = \hat{V}_t - \bar{V}_t,\qquad \tilde{W}_i = \hat{W}_i - \bar{W}_i,\qquad \tilde{b}_t = \hat{b}_t - \bar{b}_t.$$
Theorem 3.
For the error system (14), the design parameters $N_{w\_i} \in \mathbb{R}^p$, $N_{v\_t} \in \mathbb{R}^p$ and $N_{b\_t} \in \mathbb{R}^{p \times n}$ satisfy the following inequality,
$$\begin{bmatrix}
\Omega_3 & P & \frac{1}{2}\Omega_{wvb}^T\\
P & 0 & 0\\
\frac{1}{2}\Omega_{wvb} & 0 & 0
\end{bmatrix} \le 0,$$
where $\Omega_3 = (A - LH)^T P + P (A - LH)$,
$$\Omega_{wvb} = \Big[\Omega_{w\_t},\ \Omega_{w\_t-1}\,\sigma(x_t),\ \ldots,\ \Omega_{w\_1}\prod_{j=2}^{t}\sigma(x_j),\ \Omega_{v\_t},\ N_{b\_t}\Big],\qquad
\Omega_{w\_i} = \operatorname{diag}\big(N_{w\_i},\ N_{w\_i},\ \ldots,\ N_{w\_i}\big),$$
$$\Omega_{v\_t} = \operatorname{diag}\big(N_{v\_t},\ N_{v\_t},\ \ldots,\ N_{v\_t}\big) - 2\big[\epsilon_2 P_1,\ \epsilon_2 P_2,\ \ldots,\ \epsilon_2 P_n\big].$$
The adaptive updating laws of the weights can be given as follows
$$\begin{aligned}
\dot{\hat{W}}_i &= N_{w\_i}\, e^T(t)\, 4\hat{V}_t P e(t)\, x_i^T \quad (i = 1, 2, \ldots, t),\\
\dot{\hat{V}}_t &= N_{v\_t}\, e^T(t)\, 2\hat{S}_t\, e^T(t) P,\\
\dot{\hat{b}}_t &= N_{b\_t}\, e(t)\, 4\hat{V}_t P e(t),
\end{aligned}$$
then the error system (14) is asymptotically stable. The proof of Theorem 3 can be found in Appendix C.
Considering (16), the adaptive updating laws of the weights are determined by $e(t)$. Hence, it is required that $e(t)$ be an $n$-dimensional vector. According to (12), we have
$$\hat{y}(t) - y(t) = H\big(\hat{x}(t) - x(t)\big) = H\,e(t).$$
If $H^{-1}$ exists, $e(t)$ can be obtained as follows by using (17),
$$e(t) = H^{-1}\big(\hat{y}(t) - y(t)\big).$$
In general, $m$ is not equal to $n$, and $H^{-1}$ does not exist; hence, (18) does not hold. To solve this problem, the following assumption is given.
Assumption 3.
$\hat{y}(t)$ and $y(t)$ are continuously differentiable functions, and the first derivatives of $\hat{y}(t)$ and $y(t)$ are bounded and measurable.
Theorem 4.
Based on Assumption 3, if the given matrix $G$ is left invertible, $e(t)$ can be obtained as follows,
$$e(t) = G^{-1}\, Y,$$
where $G = \begin{bmatrix} H \\ H(A - LH) \end{bmatrix}$, $Y = \begin{bmatrix} \hat{y}(t) - y(t) \\ \dot{\hat{y}}(t) - \dot{y}(t) \end{bmatrix}$, and $G^{-1}$ denotes a left inverse of $G$. The proof of Theorem 4 can be found in Appendix D.
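Theorem 4's reconstruction can be sketched with a left inverse computed by least squares; $H$, the closed-loop matrix standing in for $A - LH$, and the true error vector below are illustrative stand-ins:

```python
import numpy as np

# Theorem 4 sketch: recover e(t) from stacked output errors Y = G e(t),
# with G = [H; H(A - LH)] left invertible. All matrices are stand-ins.
H = np.array([[1.0, 0.5]])                   # 1 x 2 output map (m < n)
A_cl = np.array([[-2.0, 0.3],
                 [0.1, -1.5]])               # stands for A - LH
G = np.vstack([H, H @ A_cl])                 # stacked output and output-derivative map

e_true = np.array([0.4, -0.2])
Y = G @ e_true                               # [y_hat - y; d/dt(y_hat - y)]

e_rec, *_ = np.linalg.lstsq(G, Y, rcond=None)  # left-inverse solution (G^T G)^{-1} G^T Y
print(e_rec)
```

When $G$ has full column rank, the least-squares solution coincides with the unique left-inverse reconstruction of $e(t)$.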
Remark 3.
Based on Theorem 1, the estimated system (12) can be given. By using Theorem 2, the observer gain matrix L can be obtained. By using Theorems 3 and 4, the adaptive updating laws of the weights can be obtained.

4. Simulation Analysis

In this section, two numerical cases are presented to verify the rationality of the above results.

4.1. Examples

Example 1. 2-dimensional memristor neural networks are considered, and the parameters of the system (2) are given as follows,
$$A = \begin{bmatrix} -2.3 & 0\\ 0 & -2 \end{bmatrix},\quad
B = \begin{bmatrix} 0.31 & 0.38\\ 0.49 & 0.32 \end{bmatrix},\quad
C = \begin{bmatrix} 0.32 & 0.19\\ 0.39 & 0.25 \end{bmatrix},\quad
U = \begin{bmatrix} 0.2\\ 0.3 \end{bmatrix},\quad
H = \begin{bmatrix} 1 & 0.5 \end{bmatrix},$$
$$f(x(t)) = \begin{bmatrix} \dfrac{|x_1+1| - |x_1-1|}{2}\\[4pt] \dfrac{|x_2+1| - |x_2-1|}{2} \end{bmatrix},\qquad
g\big(x(t-\tau(t))\big) = \begin{bmatrix} x_1(t-\tau_1(t))\\ x_2(t-\tau_2(t)) \end{bmatrix},$$
$$\tau_1(t) = \tau_2(t) = \frac{0.05\,t}{1+t},\qquad x(0) = \begin{bmatrix} 1\\ 1 \end{bmatrix}.$$
Based on the system (2), the estimated system (17) can be designed as follows,
$$\hat{x}(0) = \begin{bmatrix} 0.2\\ 0.3 \end{bmatrix},\qquad K_{L1} = K_{L2} = \begin{bmatrix} 1\\ 1 \end{bmatrix},\qquad \delta = 0.1.$$
By using Theorem 2 and LMIs tools, the parameters of the estimated system (17) can be obtained,
$$P = \begin{bmatrix} 0.03553 & 0.00345\\ 0.00345 & 0.03629 \end{bmatrix},\quad
Q = \begin{bmatrix} 0.03545 & 0.01504\\ 0.01504 & 0.04070 \end{bmatrix},\quad
F = \begin{bmatrix} 0.00363 & 0\\ 0 & 0.00379 \end{bmatrix},$$
$$M = \begin{bmatrix} 0.00494 & 68.44222\\ 68.44222 & 0.00505 \end{bmatrix},\qquad
L = \begin{bmatrix} 0.88667\\ 0.75623 \end{bmatrix}.$$
Set the total simulation time to 30 s and the sampling period to $T = 0.001$ s. Considering (18), set $x_i = \hat{x}_{i \cdot T}$ ($i = 1, 2, \ldots, t/T$) and $\sigma(x_i) = 0$ ($i < t/T - 30$). Based on Theorems 3 and 4, set $N_{w\_i}$ and $N_{v\_t}$ to negative unit vectors and $N_{b\_t}$ to a negative unit matrix.
The state trajectories of the state x ( t ) and the state observer x ^ ( t ) are drawn in Figure 5. Figure 6 is drawn for the estimated error between the state x ( t ) and the state observer x ^ ( t ) . In Figure 7, the trajectories of the derivative of the state x ˙ ( t ) and the derivative of the state observer x ^ ˙ ( t ) are depicted. The trajectories of the error between x ˙ ( t ) and x ^ ˙ ( t ) are given in Figure 8. In Figure 9, the output curve y ( t ) and the estimated output curve y ^ ( t ) are given. Figure 10 shows the estimated error curve between y ( t ) and y ^ ( t ) .
In order to verify the accuracy of the estimated structure, a test system is designed based on the gain observation matrix L . Under the same simulation conditions as above, the effects of the adjusted weights and the random weights on the system are compared. In Figure 11, the state trajectories of the real system and the estimated system with the adjusted weights and the system with the random weights are given. Figure 12 shows the real output curve and the estimated output with the adjusted weights and the output curve with the random weights.
Example 2. 3-dimensional memristor neural networks are considered, and the parameters of the system (2) are given as follows,
$$A = \begin{bmatrix} -2.5 & 0 & 0\\ 0 & -2.5 & 0\\ 0 & 0 & -3.3 \end{bmatrix},\quad
B = \begin{bmatrix} 0.2 & 0.15 & 0.35\\ 0.15 & 0.35 & 0.08\\ 0.1 & 0.15 & 0.3 \end{bmatrix},\quad
C = \begin{bmatrix} 0.125 & 0.15 & 0.136\\ 0.35 & 0.18 & 0.3\\ 0.2 & 0.04 & 0.05 \end{bmatrix},\quad
H = \begin{bmatrix} 1 & 1 & 1\\ 1 & 0 & 1 \end{bmatrix},$$
$$f(x(t)) = \begin{bmatrix} \dfrac{|x_1+1| - |x_1-1|}{2}\\[4pt] \dfrac{|x_2+1| - |x_2-1|}{2}\\[4pt] \dfrac{|x_3+1| - |x_3-1|}{2} \end{bmatrix},\qquad
g\big(x(t-\tau(t))\big) = \begin{bmatrix} x_1(t-\tau_1(t))\\ x_2(t-\tau_2(t))\\ x_3(t-\tau_3(t)) \end{bmatrix},\qquad
U = \begin{bmatrix} 0.2\\ 0.3\\ 0.1 \end{bmatrix},$$
$$x(0) = \begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix},\qquad \tau_1(t) = \tau_2(t) = \tau_3(t) = \frac{0.05\,t}{1+t}.$$
Based on the system (2), the estimated system (17) can be designed as follows,
$$\hat{x}(0) = \begin{bmatrix} 0.4\\ 0.5\\ 0.6 \end{bmatrix},\qquad K_{L1} = K_{L2} = \begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix},\qquad \delta = 0.1.$$
By using Theorem 2 and LMIs tools, the parameters of the estimated system (17) can be obtained,
$$P = \begin{bmatrix} 0.51349 & 0.03364 & 0.10199\\ 0.03364 & 0.71344 & 0.02472\\ 0.10199 & 0.02472 & 0.39999 \end{bmatrix},\quad
Q = \begin{bmatrix} 0.30425 & 0.32097 & 0.34837\\ 0.32097 & 0.51236 & 0.19432\\ 0.34837 & 0.19432 & 0.00547 \end{bmatrix},$$
$$F = \begin{bmatrix} 0.10879 & 0 & 0\\ 0 & 0.17792 & 0\\ 0 & 0 & 0.20825 \end{bmatrix},\quad
M = \begin{bmatrix} 0.05344 & 47.67758 & 16.83918\\ 47.67685 & 0.04188 & 39.81878\\ 16.83445 & 39.83217 & 0.07044 \end{bmatrix},\quad
L = \begin{bmatrix} 0.29468 & 0.06180\\ 0.99352 & 0.98883\\ 0.32407 & 0.98883 \end{bmatrix}.$$
Set the total simulation time $T = 30\,\mathrm{s}$ and the sampling period $T_s = 0.001\,\mathrm{s}$. Considering (18), set $x_i = \hat{x}(i \cdot T_s)$ $(i = 1, 2, \ldots, t/T_s)$ and $\sigma(x_i) = 0$ for $i < t/T_s - 30$, so that only the most recent samples enter the recursion. Based on Theorems 3 and 4, the design vectors $N_{w\_i}$ and $N_{v\_t}$ are set to negative unit vectors, and the design matrix $N_{b\_t}$ is set to a negative unit matrix.
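Setting $\sigma(x_i) = 0$ for old samples truncates the recursion to a sliding window: every product $\prod_{j=i+1}^{t} \sigma(x_j)$ vanishes once it crosses a zeroed sample, so the cell state depends only on the most recent data. A small numeric check of this windowing effect (array sizes here are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
t_steps, p, n, win = 100, 4, 3, 30
Ws = rng.normal(size=(t_steps, p, n))     # per-step weight matrices W_i
xs = rng.normal(size=(t_steps, n))        # sample sequence x_i
sigma = np.ones(t_steps)
sigma[: t_steps - win] = 0.0              # sigma(x_i) = 0 for samples older than the window

# full cell state: c_t = W_t x_t + sum_{i<t} W_i x_i * prod_{j>i} sigma(x_j)
c_full = Ws[-1] @ xs[-1]
for i in range(t_steps - 1):
    c_full = c_full + Ws[i] @ xs[i] * np.prod(sigma[i + 1:])

# windowed cell state: only the most recent samples can contribute
c_win = Ws[-1] @ xs[-1]
for i in range(t_steps - win - 1, t_steps - 1):
    c_win = c_win + Ws[i] @ xs[i] * np.prod(sigma[i + 1:])

assert np.allclose(c_full, c_win)
```

The two sums agree exactly because every term older than the window is multiplied by a product containing a zero.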
The trajectories of the state $x(t)$ and its estimate $\hat{x}(t)$ are drawn in Figure 13, and the estimation error between them is drawn in Figure 14. In Figure 15, the trajectories of the derivatives $\dot{x}(t)$ and $\dot{\hat{x}}(t)$ are depicted, and the error between them is given in Figure 16. In Figure 17, the output curve $y(t)$ and the estimated output curve $\hat{y}(t)$ are given, and Figure 18 shows the estimation error between $y(t)$ and $\hat{y}(t)$.
In order to verify the accuracy of the estimation structure, a test system is designed based on the observer gain matrix $L$. Under the same simulation conditions as above, the effects of the adaptively adjusted weights and of randomly chosen weights are compared. In Figure 19, the state trajectories of the real system, of the estimated system with the adjusted weights, and of the estimated system with the random weights are given. Figure 20 shows the corresponding real output curve, the estimated output curve with the adjusted weights, and the output curve with the random weights.

4.2. Description of Simulation Results

Figures 5 and 13 show that the estimated states closely track the real states, and Figures 6 and 14 confirm that the state estimation errors converge to zero. Likewise, Figures 7 and 15 show that the derivatives of the estimated states track the derivatives of the real states, and the corresponding errors in Figures 8 and 16 also converge to zero. Figures 9 and 17 show that the estimated outputs closely track the real outputs, and Figures 10 and 18 confirm that the output estimation errors vanish. Finally, Figures 11 and 19 and Figures 12 and 20 show that the states and outputs estimated with adaptively updated weights are markedly more accurate than those obtained with random weights. The simulation results indicate that the state observer proposed in this paper has strong adaptability and produces accurate estimates for memristor neural networks with model uncertainties.

5. Conclusions

The state estimation of memristor neural networks with model uncertainties has been discussed in this paper, where the model uncertainties comprise time-varying delays, floating parameters, and unknown functions. An improved approach based on LSTMs is proposed to deal with these uncertainties, and it is proved that the improved neural networks can approximate any nonlinear function to any prescribed accuracy. On this basis, a full-order state observer is proposed to reconstruct the states from the measured output of the system, and adaptive updating laws for the weights of the improved networks are derived from a Lyapunov-Krasovskii functional (LKF). By using the LKF and LMI tools, asymptotic stability conditions for the error systems are obtained. The simulation results show that the full-order state observer, combined with the adaptive weight-updating laws, yields accurate state estimates, and the test results show that the model uncertainties are approximated accurately. As mentioned in the introduction, the improved LSTMs designed in this paper can also be realized by a memristor crossbar array, which will be addressed in our future work.

Author Contributions

Conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing—original draft preparation, writing—review and editing, L.M.; visualization, supervision, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no potential conflict of interest with respect to the research, authorship, and/or publication of this article.

Appendix A. The Proof of Theorem 1

Proof. 
Considering Equation (5), the improved neural networks are derived by the following recursive procedure.
Taking $j = 1$, we have
$$c_1^1 = \bar{W}_{1,1} x_1, \quad h_1^1 = \tanh\left(\bar{W}_{1,1} x_1 + \bar{b}_{1,1}\right), \quad c_1^2 = \bar{W}_{1,2} x_1, \quad h_1^2 = \tanh\left(\bar{W}_{1,2} x_1 + \bar{b}_{1,2}\right), \quad \ldots, \quad c_1^p = \bar{W}_{1,p} x_1, \quad h_1^p = \tanh\left(\bar{W}_{1,p} x_1 + \bar{b}_{1,p}\right).$$
Then,
$$c_1 = \bar{W}_1 x_1, \quad h_1 = \tanh\left(\bar{W}_1 x_1 + \bar{b}_1\right),$$
where $c_1 = [c_1^1, c_1^2, \ldots, c_1^p]^T$, $\bar{W}_1 = \begin{bmatrix} \bar{W}_{1,1} \\ \bar{W}_{1,2} \\ \vdots \\ \bar{W}_{1,p} \end{bmatrix} = \begin{bmatrix} \bar{W}_{1,1,1} & \bar{W}_{1,1,2} & \cdots & \bar{W}_{1,1,n} \\ \bar{W}_{1,2,1} & \bar{W}_{1,2,2} & \cdots & \bar{W}_{1,2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \bar{W}_{1,p,1} & \bar{W}_{1,p,2} & \cdots & \bar{W}_{1,p,n} \end{bmatrix}$, $h_1 = [h_1^1, h_1^2, \ldots, h_1^p]^T$, $\bar{b}_1 = [\bar{b}_{1,1}, \bar{b}_{1,2}, \ldots, \bar{b}_{1,p}]^T$.
Furthermore, for $k = 1, 2, \ldots, m$,
$$y_{1,k} = \sum_{i=1}^{p} \bar{V}_{1,i,k} \tanh\left(\bar{W}_{1,i} x_1 + \bar{b}_{1,i}\right).$$
Then,
$$y_1 = \bar{V}_1^T \tanh\left(\bar{W}_1 x_1 + \bar{b}_1\right),$$
where $y_1 = [y_{1,1}, y_{1,2}, \ldots, y_{1,m}]^T$ and $\bar{V}_1 = \begin{bmatrix} \bar{V}_{1,1,1} & \bar{V}_{1,1,2} & \cdots & \bar{V}_{1,1,m} \\ \bar{V}_{1,2,1} & \bar{V}_{1,2,2} & \cdots & \bar{V}_{1,2,m} \\ \vdots & \vdots & \ddots & \vdots \\ \bar{V}_{1,p,1} & \bar{V}_{1,p,2} & \cdots & \bar{V}_{1,p,m} \end{bmatrix}$.
Taking $j = 2$, we have, for $i = 1, 2, \ldots, p$,
$$c_2^i = \bar{W}_{2,i} x_2 + c_1^i \sigma(x_2) = \bar{W}_{2,i} x_2 + \bar{W}_{1,i} x_1 \sigma(x_2), \quad h_2^i = \tanh\left(\bar{W}_{2,i} x_2 + \bar{W}_{1,i} x_1 \sigma(x_2) + \bar{b}_{2,i}\right).$$
Then,
$$c_2 = \bar{W}_2 x_2 + \bar{W}_1 x_1 \sigma(x_2), \quad h_2 = \tanh\left(\bar{W}_2 x_2 + \bar{W}_1 x_1 \sigma(x_2) + \bar{b}_2\right),$$
where $c_2 = [c_2^1, c_2^2, \ldots, c_2^p]^T$, $\bar{W}_2 = \begin{bmatrix} \bar{W}_{2,1} \\ \vdots \\ \bar{W}_{2,p} \end{bmatrix} = (\bar{W}_{2,i,j}) \in \mathbb{R}^{p \times n}$, $h_2 = [h_2^1, \ldots, h_2^p]^T$, $\bar{b}_2 = [\bar{b}_{2,1}, \ldots, \bar{b}_{2,p}]^T$.
Furthermore, for $k = 1, 2, \ldots, m$,
$$y_{2,k} = \sum_{i=1}^{p} \bar{V}_{2,i,k} \tanh\left(\bar{W}_{2,i} x_2 + \bar{W}_{1,i} x_1 \sigma(x_2) + \bar{b}_{2,i}\right).$$
Then,
$$y_2 = \bar{V}_2^T \tanh\left(\bar{W}_2 x_2 + \bar{W}_1 x_1 \sigma(x_2) + \bar{b}_2\right),$$
where $y_2 = [y_{2,1}, \ldots, y_{2,m}]^T$ and $\bar{V}_2 = (\bar{V}_{2,i,k}) \in \mathbb{R}^{p \times m}$.
Taking $j = t$, we have, for $l = 1, 2, \ldots, p$,
$$c_t^l = \bar{W}_{t,l} x_t + c_{t-1}^l \sigma(x_t) = \bar{W}_{t,l} x_t + \left[\bar{W}_{t-1,l} x_{t-1} + c_{t-2}^l \sigma(x_{t-1})\right] \sigma(x_t) = \bar{W}_{t,l} x_t + \sum_{i=1}^{t-1} \bar{W}_{i,l} x_i \prod_{j=i+1}^{t} \sigma(x_j),$$
$$h_t^l = \tanh\left(\bar{W}_{t,l} x_t + \sum_{i=1}^{t-1} \bar{W}_{i,l} x_i \prod_{j=i+1}^{t} \sigma(x_j) + \bar{b}_{t,l}\right).$$
Then,
$$c_t = \bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j), \quad h_t = \tanh\left(\bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \bar{b}_t\right),$$
where $c_t = [c_t^1, \ldots, c_t^p]^T$, $\bar{W}_t = \begin{bmatrix} \bar{W}_{t,1} \\ \vdots \\ \bar{W}_{t,p} \end{bmatrix} = (\bar{W}_{t,l,j}) \in \mathbb{R}^{p \times n}$, $h_t = [h_t^1, \ldots, h_t^p]^T$, $\bar{b}_t = [\bar{b}_{t,1}, \ldots, \bar{b}_{t,p}]^T$.
Furthermore, for $k = 1, 2, \ldots, m$,
$$y_{t,k} = \sum_{l=1}^{p} \bar{V}_{t,l,k} \tanh\left(\bar{W}_{t,l} x_t + \sum_{i=1}^{t-1} \bar{W}_{i,l} x_i \prod_{j=i+1}^{t} \sigma(x_j) + \bar{b}_{t,l}\right).$$
Then,
$$y_t = \bar{V}_t^T \tanh\left(\bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \bar{b}_t\right),$$
where $y_t = [y_{t,1}, \ldots, y_{t,m}]^T$ and $\bar{V}_t = (\bar{V}_{t,l,k}) \in \mathbb{R}^{p \times m}$.
Taking $\sigma(x_j) = 1$ $(j = 1, 2, \ldots, t)$, $y_t$ can be rewritten as
$$y_t = \bar{V}_t^T \tanh\left(\bar{W} x + \bar{b}_t\right),$$
where $\bar{W} = [\bar{W}_t, \bar{W}_{t-1}, \ldots, \bar{W}_1]$ and $x = [x_t^T, x_{t-1}^T, \ldots, x_1^T]^T$.
This is exactly the basic form of a multilayer feedforward neural network, so the improved LSTMs can be converted into such a network. By Lemma 1, multilayer feedforward networks can approximate any nonlinear function to any prescribed accuracy; since the improved LSTMs can be converted into this form, they inherit this approximation property. Theorem 1 then follows in the form of Lemma 1. □
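The collapse used in the last step of the proof can be checked numerically: with $\sigma(x_j) = 1$ for all $j$, evaluating the cell recursion step by step and evaluating the flattened single-hidden-layer form give identical outputs. The sizes below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
t_steps, n, p, m = 4, 3, 5, 2
Ws = [rng.normal(size=(p, n)) for _ in range(t_steps)]   # W_1, ..., W_t
xs = [rng.normal(size=(n,)) for _ in range(t_steps)]     # x_1, ..., x_t
V = rng.normal(size=(p, m))
b = rng.normal(size=(p,))

# recursive cell state with sigma(x_j) = 1: c_j = W_j x_j + c_{j-1}
c = np.zeros(p)
for W, x in zip(Ws, xs):
    c = W @ x + c * 1.0
y_rec = V.T @ np.tanh(c + b)

# flattened single-layer form: W_bar = [W_t, ..., W_1], x_bar = [x_t; ...; x_1]
W_bar = np.hstack(Ws[::-1])
x_bar = np.concatenate(xs[::-1])
y_flat = V.T @ np.tanh(W_bar @ x_bar + b)

assert np.allclose(y_rec, y_flat)
```

Both evaluations compute $\bar{V}_t^T \tanh(\sum_j \bar{W}_j x_j + \bar{b}_t)$, which is the multilayer feedforward form invoked by Lemma 1.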

Appendix B. The Proof of Theorem 2

Proof. 
Take a Lyapunov-Krasovskii functional,
$$V(t, e(t)) = e^T(t) P e(t) + \int_{t-\tau(t)}^{t} e^T(s) Q e(s)\, ds,$$
where $P$ and $Q$ are positive definite matrices.
Based on the error system (9), the derivative of $V(t, e(t))$ can be obtained as follows,
$$\begin{aligned} \dot{V}(t, e(t)) &= \dot{e}^T(t) P e(t) + e^T(t) P \dot{e}(t) + e^T(t) Q e(t) - e^T(t-\tau(t)) Q e(t-\tau(t)) \\ &= \left[-(A+LH) e(t) + \bar{K} - K(x(t), x(t-\tau(t)))\right]^T P e(t) + e^T(t) P \left[-(A+LH) e(t) + \bar{K} - K(x(t), x(t-\tau(t)))\right] \\ &\quad + e^T(t) Q e(t) - e^T(t-\tau(t)) Q e(t-\tau(t)) \\ &= -e^T(t) (A+LH)^T P e(t) - e^T(t) P (A+LH) e(t) + e^T(t) Q e(t) + e^T(t) P \left[\bar{K} - K(x(t), x(t-\tau(t)))\right] \\ &\quad + \left[\bar{K} - K(x(t), x(t-\tau(t)))\right]^T P e(t) - e^T(t-\tau(t)) Q e(t-\tau(t)). \end{aligned}$$
By using Assumption 2 and Lemma 2, for a positive definite diagonal matrix F , we have
$$\begin{aligned} \left[\bar{K} - K(x(t), x(t-\tau(t)))\right]^T F \left[\bar{K} - K(x(t), x(t-\tau(t)))\right] &= \sum_{i=1}^{n} F_i \left[\bar{K}_i - K_i(x(t), x(t-\tau(t)))\right]^2 \\ &\le \sum_{i=1}^{n} F_i \left[K_{L1}^T |x(t) - \hat{x}(t)| + K_{L2}^T |x(t-\tau(t)) - \hat{x}(t-\tau(t))|\right]^2 \\ &\le \sum_{i=1}^{n} F_i \left[e^T(t) K_{L1} K_{L1}^T e(t) + e^T(t-\tau(t)) K_{L2} K_{L2}^T e(t-\tau(t)) + 2 |e^T(t)| K_{L1} K_{L2}^T |e(t-\tau(t))|\right] \\ &\le 2 e^T(t) K_{L1} \operatorname{tr}\{F\} K_{L1}^T e(t) + 2 e^T(t-\tau(t)) K_{L2} \operatorname{tr}\{F\} K_{L2}^T e(t-\tau(t)). \end{aligned}$$
Moreover, since $\int_{t-\tau(t)}^{t} \dot{e}(s)\, ds = e(t) - e(t-\tau(t))$, there exists a positive definite matrix $M$ such that the following null term can be added,
$$2 \left[e(t) - e(t-\tau(t)) - \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds\right]^T M \left[e(t-\tau(t)) + \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds\right] = 0.$$
For a real constant $\delta > 0$ bounding the delay rate ($\dot{\tau}(t) \le \delta$), combining the above formulas, we have
$$\begin{aligned} \dot{V}(t, e(t)) &\le -e^T(t) (A+LH)^T P e(t) - e^T(t) P (A+LH) e(t) + e^T(t) Q e(t) + e^T(t) P \left[\bar{K} - K(x(t), x(t-\tau(t)))\right] \\ &\quad + \left[\bar{K} - K(x(t), x(t-\tau(t)))\right]^T P e(t) - (1-\delta) e^T(t-\tau(t)) Q e(t-\tau(t)) \\ &\quad + 2 e^T(t) K_{L1} \operatorname{tr}\{F\} K_{L1}^T e(t) + 2 e^T(t-\tau(t)) K_{L2} \operatorname{tr}\{F\} K_{L2}^T e(t-\tau(t)) \\ &\quad - \left[\bar{K} - K(x(t), x(t-\tau(t)))\right]^T F \left[\bar{K} - K(x(t), x(t-\tau(t)))\right] \\ &\quad + 2 \left[e(t) - e(t-\tau(t)) - \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds\right]^T M \left[e(t-\tau(t)) + \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds\right] \\ &\le \jmath^T \begin{bmatrix} \Omega_1 & P & M & M \\ P & -F & 0 & 0 \\ M & 0 & \Omega_2 & -2M \\ M & 0 & -2M & -2M \end{bmatrix} \jmath, \end{aligned}$$
where $\jmath = \left[e^T(t), \left[\bar{K} - K(x(t), x(t-\tau(t)))\right]^T, e^T(t-\tau(t)), \left(\int_{t-\tau(t)}^{t} \dot{e}(s)\, ds\right)^T\right]^T$, $\Omega_1 = -A^T P - H^T G^T - P A - G H + Q + 2 K_{L1} \operatorname{tr}\{F\} K_{L1}^T$, $\Omega_2 = 2 K_{L2} \operatorname{tr}\{F\} K_{L2}^T - (1-\delta) Q - 2M$, and $G = P L$.
Considering the above inequality, take
$$\begin{bmatrix} \Omega_1 & P & M & M \\ P & -F & 0 & 0 \\ M & 0 & \Omega_2 & -2M \\ M & 0 & -2M & -2M \end{bmatrix} < 0;$$
then the error system (9) is asymptotically stable, and the proof of Theorem 2 is completed. □
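As a numerical sketch, the block matrix of Theorem 2 can be assembled and its largest eigenvalue inspected to check the condition for a candidate solution. The signs of the blocks are as reconstructed here (minus signs are not recoverable from the typeset version), and all numeric values below are toy placeholders, not the paper's data.

```python
import numpy as np

def theorem2_block(A, H, P, Q, F, M, L, KL1, KL2, delta):
    """Assemble the 4x4 block matrix of Theorem 2 (reconstructed signs)."""
    n = A.shape[0]
    O1 = (-(A + L @ H).T @ P - P @ (A + L @ H) + Q
          + 2.0 * np.trace(F) * (KL1 @ KL1.T))
    O2 = 2.0 * np.trace(F) * (KL2 @ KL2.T) - (1.0 - delta) * Q - 2.0 * M
    Z = np.zeros((n, n))
    return np.block([[O1, P,  M,       M],
                     [P,  -F, Z,       Z],
                     [M,  Z,  O2, -2 * M],
                     [M,  Z,  -2 * M, -2 * M]])

# toy data (hypothetical): a stable 2-dimensional example
A = np.diag([2.0, 3.0])
H = np.eye(2)
L = np.zeros((2, 2))
P, Q, F, M = np.eye(2), 0.1 * np.eye(2), np.eye(2), 0.001 * np.eye(2)
KL1 = KL2 = 0.1 * np.ones((2, 1))
Omega = theorem2_block(A, H, P, Q, F, M, L, KL1, KL2, delta=0.1)
lam_max = np.linalg.eigvalsh(Omega).max()   # condition holds when lam_max < 0
```

In practice the matrices $P$, $Q$, $F$, $M$, $G = PL$ are decision variables of an LMI solver; this sketch only verifies a given candidate.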

Appendix C. The Proof of Theorem 3

Proof. 
Based on Equation (15), take a Lyapunov-Krasovskii functional (LKF),
$$V(t, e(t)) = e^T(t) P e(t) + \operatorname{tr}\{\tilde{V}_t^T \tilde{V}_t\} + \operatorname{tr}\{\tilde{W}_t^T \tilde{W}_t\} + \tilde{b}_t^T \tilde{b}_t + \sum_{i=1}^{t-1} \operatorname{tr}\{\tilde{W}_i^T \tilde{W}_i\} \prod_{j=i+1}^{t} \sigma(x_j).$$
By using the error system (14) and Equation (15), the derivative of $V(t, e(t))$ can be obtained as follows,
$$\begin{aligned} \dot{V}(t, e(t)) &= \dot{e}^T(t) P e(t) + e^T(t) P \dot{e}(t) + \operatorname{tr}\{\tilde{V}_t^T \dot{\hat{V}}_t\} + \operatorname{tr}\{\tilde{W}_t^T \dot{\hat{W}}_t\} + \tilde{b}_t^T \dot{\hat{b}}_t + \sum_{i=1}^{t-1} \operatorname{tr}\{\tilde{W}_i^T \dot{\hat{W}}_i\} \prod_{j=i+1}^{t} \sigma(x_j) \\ &= e^T(t) \left[-(A+LH)^T P - P (A+LH)\right] e(t) + \left[\hat{K} - \bar{K}\right]^T P e(t) - \epsilon_1^T P e(t) + e^T(t) P \left[\hat{K} - \bar{K}\right] - e^T(t) P \epsilon_1 \\ &\quad + \operatorname{tr}\{\tilde{V}_t^T \dot{\hat{V}}_t\} + \operatorname{tr}\{\tilde{W}_t^T \dot{\hat{W}}_t\} + \tilde{b}_t^T \dot{\hat{b}}_t + \sum_{i=1}^{t-1} \operatorname{tr}\{\tilde{W}_i^T \dot{\hat{W}}_i\} \prod_{j=i+1}^{t} \sigma(x_j). \end{aligned}$$
Combining (8) and (13), it can be obtained that
$$\begin{aligned} \hat{K} - \bar{K} &= \hat{V}_t^T \tanh\left(\hat{W}_t x_t + \sum_{i=1}^{t-1} \hat{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \hat{b}_t\right) - \bar{V}_t^T \tanh\left(\bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \bar{b}_t\right) \\ &= \hat{V}_t^T \left[\tanh\left(\hat{W}_t x_t + \sum_{i=1}^{t-1} \hat{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \hat{b}_t\right) - \tanh\left(\bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \hat{b}_t\right)\right] \\ &\quad + \hat{V}_t^T \left[\tanh\left(\bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \hat{b}_t\right) - \tanh\left(\bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \bar{b}_t\right)\right] \\ &\quad + \left(\hat{V}_t^T - \bar{V}_t^T\right) \tanh\left(\bar{W}_t x_t + \sum_{i=1}^{t-1} \bar{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \bar{b}_t\right) \\ &\le 2 \hat{V}_t^T \left[\tilde{W}_t x_t + \sum_{i=1}^{t-1} \tilde{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j)\right] + 2 \hat{V}_t^T \tilde{b}_t + \tilde{V}_t^T \hat{S}_t - \tilde{V}_t^T \epsilon_2, \end{aligned}$$
where $\hat{S}_t = \tanh\left(\hat{W}_t x_t + \sum_{i=1}^{t-1} \hat{W}_i x_i \prod_{j=i+1}^{t} \sigma(x_j) + \hat{b}_t\right)$ and $\epsilon_2 = \hat{S}_t - \bar{S}_t$.
Considering the above equations, we have
$$\begin{aligned} \dot{V}(t, e(t)) &\le e^T(t) \left[-(A+LH)^T P - P (A+LH)\right] e(t) - e^T(t) P \epsilon_1 - \epsilon_1^T P e(t) \\ &\quad + \operatorname{tr}\left\{\tilde{W}_t^T \left[4 \hat{V}_t P e(t) x_t^T + \dot{\hat{W}}_t\right]\right\} + \operatorname{tr}\left\{\tilde{V}_t^T \left[2 \hat{S}_t e^T(t) P + \dot{\hat{V}}_t\right]\right\} \\ &\quad + \sum_{i=1}^{t-1} \operatorname{tr}\left\{\tilde{W}_i^T \left[4 \hat{V}_t P e(t) x_i^T + \dot{\hat{W}}_i\right]\right\} \prod_{j=i+1}^{t} \sigma(x_j) - 2 \epsilon_2^T \tilde{V}_t P e(t) + \tilde{b}_t^T \left[4 \hat{V}_t P e(t) + \dot{\hat{b}}_t\right]. \end{aligned}$$
Then, adaptive updating laws of the weights are given as follows,
$$\dot{\hat{W}}_i = N_{w\_i}\, e^T(t) - 4 \hat{V}_t P e(t)\, x_i^T \quad (i = 1, 2, \ldots, t), \qquad \dot{\hat{V}}_t = N_{v\_t}\, e^T(t) - 2 \hat{S}_t e^T(t) P, \qquad \dot{\hat{b}}_t = N_{b\_t}\, e(t) - 4 \hat{V}_t P e(t),$$
where $N_{w\_i} = [N_{w\_i,1}, N_{w\_i,2}, \ldots, N_{w\_i,p}]^T$ and $N_{v\_t} = [N_{v\_t,1}, N_{v\_t,2}, \ldots, N_{v\_t,p}]^T$ denote the design vectors, and $N_{b\_t} = \begin{bmatrix} N_{b\_t,1,1} & N_{b\_t,1,2} & \cdots & N_{b\_t,1,n} \\ N_{b\_t,2,1} & N_{b\_t,2,2} & \cdots & N_{b\_t,2,n} \\ \vdots & \vdots & \ddots & \vdots \\ N_{b\_t,p,1} & N_{b\_t,p,2} & \cdots & N_{b\_t,p,n} \end{bmatrix}$ denotes the design matrix.
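A discrete-time sketch of one Euler step of adaptive weight-updating laws of this type is given below. It assumes the laws take the form $\dot{\hat{W}}_i = N_{w\_i} e^T - 4\hat{V}_t P e\, x_i^T$, $\dot{\hat{V}}_t = N_{v\_t} e^T - 2\hat{S}_t e^T P$, $\dot{\hat{b}}_t = N_{b\_t} e - 4\hat{V}_t P e$ (the minus signs are reconstructed, not recoverable from the typeset formulas), and $m = n$ so that all products are conformable; all numeric values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = p = 3
dt = 1e-3
P = np.eye(n)
W_hat = rng.normal(size=(p, n))
V_hat = rng.normal(size=(p, n))
b_hat = rng.normal(size=(p,))
N_w = -np.ones((p, 1))   # design vector, chosen negative as in the examples
N_v = -np.ones((p, 1))
N_b = -np.eye(p)         # design matrix (p x n, with p = n here)

e = rng.normal(size=(n,))   # current state-estimation error (placeholder)
x = rng.normal(size=(n,))   # current regressor sample (placeholder)

S_hat = np.tanh(W_hat @ x + b_hat)
Pe = P @ e
dW = N_w @ e[None, :] - 4.0 * (V_hat @ Pe)[:, None] @ x[None, :]
dV = N_v @ e[None, :] - 2.0 * S_hat[:, None] @ Pe[None, :]   # e^T P = (P e)^T for symmetric P
db = N_b @ e - 4.0 * V_hat @ Pe

# one forward-Euler integration step of the continuous-time laws
W_hat += dt * dW
V_hat += dt * dV
b_hat += dt * db
```

In a full observer implementation this step would run once per sampling period, with $e$ replaced by the measured output-injection error.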
Substituting the updating laws, we have
$$\operatorname{tr}\left\{\tilde{W}_t^T \left[4 \hat{V}_t P e(t) x_t^T + \dot{\hat{W}}_t\right]\right\} = \operatorname{tr}\left\{\tilde{W}_t^T N_{w\_t} e^T(t)\right\} = \sum_{i=1}^{n} \tilde{W}_{t,\cdot,i}^T N_{w\_t} e_i(t) = \jmath_{w\_t}^T \Omega_{w\_t} e(t),$$
where $\jmath_{w\_t} = [\tilde{W}_{t,\cdot,1}^T, \tilde{W}_{t,\cdot,2}^T, \ldots, \tilde{W}_{t,\cdot,n}^T]^T$ and $\Omega_{w\_t} = \operatorname{diag}(N_{w\_t}, N_{w\_t}, \ldots, N_{w\_t})$. Similarly,
$$\sum_{i=1}^{t-1} \operatorname{tr}\left\{\tilde{W}_i^T \left[4 \hat{V}_t P e(t) x_i^T + \dot{\hat{W}}_i\right]\right\} \prod_{j=i+1}^{t} \sigma(x_j) = \sum_{i=1}^{t-1} \jmath_{w\_i}^T \Omega_{w\_i} e(t) \prod_{j=i+1}^{t} \sigma(x_j),$$
$$\operatorname{tr}\left\{\tilde{V}_t^T \left[2 \hat{S}_t e^T(t) P + \dot{\hat{V}}_t\right]\right\} - 2 \epsilon_2^T \tilde{V}_t P e(t) = \sum_{i=1}^{n} \tilde{V}_{t,\cdot,i}^T N_{v\_t} e_i(t) - 2 \sum_{i=1}^{n} \tilde{V}_{t,\cdot,i}^T \epsilon_2 P_i e(t) = \jmath_{v\_t}^T \Omega_{v\_t} e(t),$$
where $\jmath_{v\_t} = [\tilde{V}_{t,\cdot,1}^T, \ldots, \tilde{V}_{t,\cdot,n}^T]^T$ and $\Omega_{v\_t} = \operatorname{diag}(N_{v\_t}, \ldots, N_{v\_t}) - 2 \begin{bmatrix} \epsilon_2 P_1 \\ \epsilon_2 P_2 \\ \vdots \\ \epsilon_2 P_n \end{bmatrix}$ with $P_i$ the $i$th row of $P$, and
$$\tilde{b}_t^T \left[4 \hat{V}_t P e(t) + \dot{\hat{b}}_t\right] = \tilde{b}_t^T N_{b\_t} e(t).$$
The derivative of $V(t, e(t))$ can then be bounded as follows,
$$\begin{aligned} \dot{V}(t, e(t)) &\le e^T(t) \left[-(A+LH)^T P - P(A+LH)\right] e(t) - e^T(t) P \epsilon_1 - \epsilon_1^T P e(t) \\ &\quad + \jmath_{w\_t}^T \Omega_{w\_t} e(t) + \sum_{i=1}^{t-1} \jmath_{w\_i}^T \Omega_{w\_i} e(t) \prod_{j=i+1}^{t} \sigma(x_j) + \jmath_{v\_t}^T \Omega_{v\_t} e(t) + \tilde{b}_t^T N_{b\_t} e(t) \\ &= e^T(t) \left[-(A+LH)^T P - P(A+LH)\right] e(t) - e^T(t) P \epsilon_1 - \epsilon_1^T P e(t) + \jmath_{wvb}^T \Omega_{wvb} e(t) \\ &\le \jmath_1^T \begin{bmatrix} \Omega_3 & -P & \frac{1}{2} \Omega_{wvb}^T \\ -P & 0 & 0 \\ \frac{1}{2} \Omega_{wvb} & 0 & 0 \end{bmatrix} \jmath_1, \end{aligned}$$
where $\jmath_{wvb} = [\jmath_{w\_t}^T, \jmath_{w\_{t-1}}^T, \ldots, \jmath_{w\_1}^T, \jmath_{v\_t}^T, \tilde{b}_t^T]^T$, $\Omega_{wvb} = \left[\Omega_{w\_t}^T, \left(\Omega_{w\_{t-1}} \sigma(x_t)\right)^T, \ldots, \left(\Omega_{w\_1} \prod_{j=2}^{t} \sigma(x_j)\right)^T, \Omega_{v\_t}^T, N_{b\_t}^T\right]^T$, $\Omega_3 = -(A+LH)^T P - P(A+LH)$, and $\jmath_1 = [e^T(t), \epsilon_1^T, \jmath_{wvb}^T]^T$.
Select appropriate $N_{w\_i}$, $N_{v\_t}$ and $N_{b\_t}$ to satisfy
$$\begin{bmatrix} \Omega_3 & -P & \frac{1}{2} \Omega_{wvb}^T \\ -P & 0 & 0 \\ \frac{1}{2} \Omega_{wvb} & 0 & 0 \end{bmatrix} \le 0;$$
then the error system (14) is asymptotically stable, and the proof of Theorem 3 is completed. □

Appendix D. The Proof of Theorem 4

Proof. 
By using Assumption 3 and (14), we have
$$\dot{\hat{y}}(t) - \dot{y}(t) = H \left[\dot{\hat{x}}(t) - \dot{x}(t)\right] = H \left[-(A+LH) e(t) + \hat{K} - \bar{K} - \epsilon_1\right].$$
By using (17) and Theorems 2 and 3, the term $\hat{K} - \bar{K} - \epsilon_1$ vanishes asymptotically, so we have
$$\begin{bmatrix} \hat{y}(t) - y(t) \\ \dot{\hat{y}}(t) - \dot{y}(t) \end{bmatrix} = \begin{bmatrix} H \\ -H(A+LH) \end{bmatrix} e(t).$$
That is,
$$Y = G\, e(t),$$
where $G = \begin{bmatrix} H \\ -H(A+LH) \end{bmatrix}$ and $Y = \begin{bmatrix} \hat{y}(t) - y(t) \\ \dot{\hat{y}}(t) - \dot{y}(t) \end{bmatrix}$.
If $G$ has full column rank, it admits a left inverse $G^{-1}$ such that $e(t) = G^{-1} Y$, and the proof of Theorem 4 is completed. □
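The reconstruction $e(t) = G^{-1} Y$ can be realized with the Moore-Penrose pseudoinverse, which is a left inverse whenever $G$ has full column rank. A small sketch using the $H$ and $A$ of Example 2 together with a stand-in observer gain $L$ (the stacked map with a minus sign follows the reconstruction of the error dynamics given here):

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 3, 2
A = np.diag([2.5, 2.5, 3.3])
H = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])      # output matrix of Example 2
L = rng.normal(size=(n, q))          # stand-in observer gain (hypothetical)

G = np.vstack([H, -H @ (A + L @ H)]) # stacked map: Y = G e(t), shape (2q, n)
e_true = np.array([0.3, -0.2, 0.5])  # hypothetical estimation error
Y = G @ e_true

# left inverse via the pseudoinverse; exact recovery when rank(G) == n
e_rec = np.linalg.pinv(G) @ Y
```

Here $G$ has rank 3 because the rows of $HA$ break the pattern shared by the rows of $H$ (equal first and third components), so the error vector is recovered exactly.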

References

  1. Chua, L. Memristor-the missing circuit element. IEEE Trans. Circuit Theory 1971, 18, 507–519.
  2. Wu, A.; Wen, S.; Zeng, Z. Synchronization control of a class of memristor-based recurrent neural networks. Inf. Sci. 2012, 183, 106–116.
  3. Wen, S.; Zeng, Z.; Huang, T. Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays. Neurocomputing 2012, 97, 233–240.
  4. Chen, J.; Zeng, Z.; Jiang, P. Global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks. Neural Netw. 2014, 51, 1–8.
  5. Wang, F.; Na, H.; Wu, S.; Yang, X.; Guo, Y.; Lim, G.; Rashid, M.M. Delayed switching applied to memristor neural networks. J. Appl. Phys. 2012, 111, 507–511.
  6. Jiang, M.; Mei, J.; Hu, J. New results on exponential synchronization of memristor-based chaotic neural networks. Neurocomputing 2015, 156, 60–67.
  7. Wu, H.; Li, R.; Ding, S.; Zhang, X.; Yao, R. Complete periodic adaptive antisynchronization of memristor-based neural networks with mixed time-varying delays. Can. J. Phys. 2014, 92, 1337–1349.
  8. Hu, M.; Chen, Y.; Yang, J.; Wang, Y.; Li, H.H. A compact memristor-based dynamic synapse for spiking neural networks. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2016, 36, 1353–1366.
  9. Zheng, M.; Li, L.; Xiao, J.; Yang, Y.; Zhao, H. Finite-time projective synchronization of memristor-based delay fractional-order neural networks. Nonlinear Dyn. 2017, 89, 2641–2655.
  10. Negrov, D.; Karandashev, I.; Shakirov, V.; Matveyev, Y.A. A plausible memristor implementation of deep learning neural networks. Neurocomputing 2015, 237, 193–199.
  11. Wang, X.; Ju, H.; Yang, H.; Zhong, S. A new settling-time estimation protocol to finite-time synchronization of impulsive memristor-based neural networks. IEEE Trans. Cybern. 2020, 52, 4312–4322.
  12. Chen, B.; Yang, H.; Zhuge, F.; Li, Y.; Chang, T.-C.; He, Y.-H.; Yang, W.; Xu, N.; Miao, X.-S. Optimal tuning of memristor conductance variation in spiking neural networks for online unsupervised learning. IEEE Trans. Electron Devices 2019, 66, 2844–2849.
  13. Liu, X.Y.; Zeng, Z.G.; Wunsch, D.C. Memristor-based LSTM network with in situ training and its applications. Neural Netw. 2020, 131, 300–311.
  14. Ning, L.; Wei, X. Bipartite synchronization for inertia memristor-based neural networks on coopetition networks. Neural Netw. 2020, 124, 39–49.
  15. Xu, C.; Wang, C.; Sun, Y.; Hong, Q.; Deng, Q.; Chen, H. Memristor-based neural network circuit with weighted sum simultaneous perturbation training and its applications. Neurocomputing 2021, 462, 581–590.
  16. Prezioso, M.; Bayat, F.M.; Hoskins, B.D.; Likharev, K.K.; Strukov, D.B. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 2015, 521, 61–64.
  17. Kim, S.; Park, J.; Kim, T.H.; Hong, K.; Hwang, Y.; Park, B.G.; Kim, H. 4-bit multilevel operation in overshoot suppressed Al2O3/TiOx resistive random-access memory crossbar array. Adv. Intell. Syst. 2022, 4, 2100273.
  18. Wang, Z.R.; Joshi, S.; Savel'ev, S.; Song, W.; Midya, R.; Li, Y.; Rao, M.; Yan, P.; Asapu, S.; Zhuo, Y.; et al. Fully memristive neural networks for pattern classification with unsupervised learning. Nat. Electron. 2018, 1, 137–145.
  19. Choi, W.S.; Jang, J.T.; Kim, D.; Yang, T.J.; Kim, C.; Kim, H.; Kim, D.H. Influence of Al2O3 layer on InGaZnO memristor crossbar array for neuromorphic applications. Chaos Solitons Fractals 2022, 156, 111813.
  20. Bayat, F.M.; Prezioso, M.; Chakrabarti, B.; Nili, H.; Kataeva, I.; Strukov, D.B. Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits. Nat. Commun. 2018, 9, 2331.
  21. Chen, Y.; Chen, G. Stability analysis of delayed neural networks based on a relaxed delay-product-type Lyapunov functional. Neurocomputing 2021, 439, 340–347.
  22. Wang, L.; Song, Q.; Liu, Y.; Zhao, Z.; Alsaadi, F.E. Finite-time stability analysis of fractional-order complex-valued memristor-based neural networks with both leakage and time-varying delays. Neurocomputing 2017, 245, 86–101.
  23. Meng, Z.; Xiang, Z. Stability analysis of stochastic memristor-based recurrent neural networks with mixed time-varying delays. Neural Comput. Appl. 2017, 28, 1787–1799.
  24. Chen, C.; Zhu, S.; Wei, Y. Finite-time stability of delayed memristor-based fractional-order neural networks. IEEE Trans. Cybern. 2020, 50, 1607–1616.
  25. Du, F.; Lu, J. New criteria for finite-time stability of fractional order memristor-based neural networks with time delays. Neurocomputing 2021, 421, 349–359.
  26. Rakkiyappan, R.; Chandrasekar, A.; Laksmanan, S.; Park, J.H. State estimation of memristor-based recurrent neural networks with time-varying delays based on passivity theory. Complexity 2014, 19, 32–43.
  27. Bao, H.; Cao, J.; Kurths, J.; Alsaedi, A.; Ahmad, B. H∞ state estimation of stochastic memristor-based neural networks with time-varying delays. Neural Netw. 2018, 99, 79–91.
  28. Nagamani, G.; Rajan, G.S.; Zhu, Q. Exponential state estimation for memristor-based discrete-time BAM neural networks with additive delay components. IEEE Trans. Cybern. 2020, 5, 4281–4292.
  29. Sakthivel, R.; Anbuvithya, R.; Mathiyalagan, K.; Prakash, P. Combined H∞ and passivity state estimation of memristive neural networks with random gain fluctuations. Neurocomputing 2015, 168, 1111–1120.
  30. Wei, F.; Chen, G.; Wang, W. Finite-time synchronization of memristor neural networks via interval matrix method. Neural Netw. 2020, 127, 7–18.
  31. Wang, J.; Wang, Z.; Chen, X.; Qiu, J. Synchronization criteria of delayed inertial neural networks with generally Markovian jumping. Neural Netw. 2021, 139, 64–76.
  32. Li, L.; Xu, R.; Gan, Q.; Lin, J. Synchronization of neural networks with memristor-resistor bridge synapses and Lévy noise. Neurocomputing 2021, 432, 262–274.
  33. Ren, H.; Peng, Z.; Gu, Y. Fixed-time synchronization of stochastic memristor-based neural networks with adaptive control. Neural Netw. 2020, 130, 165–175.
  34. Zheng, C.D.; Zhang, L. On synchronization of competitive memristor-based neural networks by nonlinear control. Neurocomputing 2020, 410, 151–160.
  35. Pan, C.; Bao, H. Exponential synchronization of complex-valued memristor-based delayed neural networks via quantized intermittent control. Neurocomputing 2020, 404, 317–328.
  36. Xiao, J.; Li, Y.; Zhong, S.; Xu, F. Extended dissipative state estimation for memristive neural networks with time-varying delay. ISA Trans. 2016, 64, 113–128.
  37. Li, R.; Gao, X.; Cao, J.; Zhang, K. Dissipativity and exponential state estimation for quaternion-valued memristive neural networks. Neurocomputing 2019, 363, 236–245.
  38. Li, H. Sampled-data state estimation for complex dynamical networks with time-varying delay and stochastic sampling. Neurocomputing 2014, 138, 78–85.
  39. Dai, X.; Yin, H.; Jha, N. Grow and prune compact, fast, and accurate LSTMs. IEEE Trans. Comput. 2020, 69, 441–452.
  40. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366.
Figure 1. Circuit of memristor neural networks [28].
Figure 2. Schematic diagram of a basic LSTMs cell.
Figure 3. Schematic diagram of an improved LSTMs cell.
Figure 4. Schematic diagram of the improved LSTMs.
Figure 5. The state and estimated state curves of the 2-dimensional memristor neural networks.
Figure 6. The estimated error curves of the states.
Figure 7. The derivative curves of the states and estimated states.
Figure 8. The error curves of the derivative of the states.
Figure 9. The output and estimated output curves.
Figure 10. The error curves of the output.
Figure 11. The state and estimated state with the adjusted weights and state with the random weights curves.
Figure 12. The output and estimated output with the adjusted weights and output with the random weights curves.
Figure 13. The state and estimated state curves of the 3-dimensional memristor neural networks.
Figure 14. The estimated error curves of the states.
Figure 15. The derivative curves of the states and estimated states.
Figure 16. The error curves of the derivative of the states.
Figure 17. The output and estimated output curves.
Figure 18. The error curves of the output.
Figure 19. The state and estimated state with the adjusted weights and state with the random weights curves.
Figure 20. The output and estimated output with the adjusted weights and output with the random weights curves.
Ma, Libin, and Mao Wang. 2022. "State Estimation of Memristor Neural Networks with Model Uncertainties." Machines 10, no. 12: 1228. https://doi.org/10.3390/machines10121228