Article

Zeroing Neural Networks Combined with Gradient for Solving Time-Varying Linear Matrix Equations in Finite Time with Noise Resistance

School of Cyber Security, Guangdong Polytechnic Normal University, Guangzhou 510635, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(24), 4828; https://doi.org/10.3390/math10244828
Submission received: 10 November 2022 / Revised: 9 December 2022 / Accepted: 15 December 2022 / Published: 19 December 2022
(This article belongs to the Special Issue AI Algorithm Design and Application)

Abstract
Due to time delays and unavoidable noise factors, obtaining real-time solutions of dynamic time-varying linear matrix equation (LME) problems is of great importance in the scientific and engineering fields. In this paper, based on the philosophy of zeroing neural networks (ZNN), we propose an integration-enhanced combined accelerating zeroing neural network (IEAZNN) model to solve LME problems accurately and efficiently. Different from most existing ZNN research, the IEAZNN model combines two error functions: the first is the gradient of an energy function, designed to drive the norm-based error to zero, and the second adds an integral term to resist additive noise. On the strength of this novel combination of two error functions, the IEAZNN model is capable of converging in finite time and resisting noise at the same time. Moreover, theoretical proofs and numerical verification results show that the IEAZNN model achieves higher accuracy and faster convergence than the conventional ZNN (CZNN) and integration-enhanced ZNN (IEZNN) models in solving time-varying LME problems, even in various kinds of noise environments.

1. Introduction

The advances in computer science have extended its use in various fields of application, such as robotics [1,2,3], adaptive neural tracking control [4] and image processing [5]. Among these applications, the linear matrix equation (LME) is one of the most commonly encountered problems; therefore, how to solve the LME is a crucial challenge, and numerous efforts have been made to solve LME problems more effectively. A common way of dealing with LMEs is to use Newton's iteration method [6,7,8]. However, there may be problems related to running time and algorithmic complexity [9] in the case of high-dimensional or large-scale matrix operations, and the complexity of numerical iterative algorithms is proportional to the cube of the matrix dimension [10]. In general, such numerical methods may not be able to meet the requirements of real-time applications, due to their inherent iterative nature, when used to solve time-varying LME problems.
Recently, due to the intensive study of neural networks, many dynamic simulation solvers based on recurrent neural networks (RNN) have been studied and developed to process LMEs in parallel [11,12,13,14]. Different from numerical algorithms, the RNN is famous for its parallel computing advantages, which helps greatly to promote computational efficiency. As a result, neural dynamic methods are now considered a powerful alternative to online computation of matrix problems due to their parallel distribution properties and the convenience of hardware implementation [15,16]. As a typical RNN, the gradient-based neural network (GNN) is designed to solve the minimization problems [8,17,18,19]. However, when coping with time-dependent cases, GNN produces relatively large lag errors [20]. As a result, a new kind of continuous-time RNN called zeroing neural network (ZNN) has emerged for solving various time-dependent problems, such as matrix inversion [21], the Lyapunov equation [15,22] and the Sylvester equation [23]. From the perspective of solution results, the state solution of ZNN can start from any initial state and converge globally to the theoretical solution with exponential speed. In order to accelerate the convergence speed, the finite-time zeroing neural network model (FTZNN) [24] was developed to solve the time-varying matrix inversion problem, in which the upper bound of finite convergence time was calculated theoretically.
Although the ZNN can achieve finite-time convergence, its denoising ability is limited, and it cannot solve accurately in the case of noisy systems [25]. Since external noise is unavoidable and some noise will inevitably appear in the hardware implementation of a neural network model [26], some anti-noise ZNN models have been developed [22,25,26,27]. The paper [26] presents a discrete-time zeroing neural algorithm for solving linear equations using control techniques, focusing on solving discrete time-varying linear equations with various noises. In the literature [22], a noise-tolerant zeroing neural network (NTZNN) model is proposed to solve the nonstationary Lyapunov equation under various types of noise interference. An integration-enhanced noise-tolerant zeroing neural network (IENTZNN) model is defined and considered to calculate the outer inverse of a time-varying matrix in the presence of various noises [27]. By using a novel activation function (NAF), the authors proposed a new interference-tolerant fast convergence zeroing neural network (ITFCZNN) for solving dynamic matrix inversion (DMI) [25]. Additionally, the most classic anti-noise model is the IEZNN [28], which can be used to solve the time-varying LME in various noisy environments. Compared with the conventional ZNN, the main improvement of the IEZNN model lies in introducing an integral term of the error on the right side of the ZNN to resist additional injected noise. However, due to its intrinsic linearity, the IEZNN model can only achieve exponential convergence in theory [28]. In general, the CZNN with a nonlinear activation function can achieve finite-time convergence but cannot resist noise, while improved noise-resistant neural network models such as the IEZNN cannot achieve finite-time convergence. As a result, more and more research is focusing on developing models that can effectively resist noise while solving the problem in a finite amount of time.
Different from previous research methods, in this paper we use the gradient of the energy function as the first error function to obtain superior convergence, and then construct a new second error function for noise resistance. To overcome the merely exponential convergence speed, the novel integration-enhanced combined accelerating zeroing neural network (IEAZNN) model is developed to solve the time-varying LME problem. The comparison and experimental results show that the proposed IEAZNN model can exactly and efficiently solve the time-varying LME without lagging error in finite time, even in noisy environments.
The rest of the article is organized as follows. Section 2 lists the time-varying LME to be solved, and presents the traditional CZNN model and noise-resistant IEZNN model with an integral term to solve the equation for comparison. In Section 3, the improved ZNN model with accelerated solution is formed by a new error definition based on the energy function gradient of time-varying LME, and a new IEAZNN model with both noise resistance and finite-time convergence is obtained. Then, the convergence, finite time and stability of the IEAZNN model are proved. The MATLAB simulation results are presented in Section 4, in which the CZNN model, IEZNN model and the proposed IEAZNN model are digitally verified with no noise, constant noise and dynamic linear noise, respectively. The conclusions are summarized in the final section. Before ending this section, the main contributions of this article are listed as follows:
(1)
Specifically, in the design of the IEAZNN model, the gradient of the energy function is used as the first error function for superior convergence. To resist noise influences, a new second error function is designed based on the IEZNN model. The IEAZNN is then developed for anti-noise and finite-time convergence.
(2)
Different from other ZNN models, which are either for achieving anti-noise or finite time convergence, the presented IEAZNN model is noise-tolerant and has the advantage of finite-time convergence at the same time.
(3)
The theoretical analyses and experimental comparisons show that the proposed IEAZNN model in this paper performs well in both convergence speed and solution accuracy under various noise disturbances.

2. Problem Formulation and Related Models

Consider the same time-varying LME in [29]:
A ( t ) X ( t ) B ( t ) = C ( t )
where A(t) ∈ R^{m×m}, B(t) ∈ R^{n×n} and C(t) ∈ R^{m×n} are smooth and differentiable time-varying coefficient matrices, and X(t) ∈ R^{m×n} is the time-varying unknown matrix that we need to solve in real time to satisfy the time-varying problem (1). Solving problem (1) is widely applied to many practical problems in control theory and has been mentioned in many papers [5,29,30].
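As a concrete sanity check (not part of the paper's method), the LME (1) at a frozen time instant can be solved directly via the vectorization identity vec(AXB) = (Bᵀ ⊗ A)vec(X). A minimal Python sketch follows; the coefficient matrices are hypothetical illustrations, not the paper's example:

```python
import numpy as np

def solve_lme(A, B, C):
    """Solve A X B = C for X at one frozen time instant via
    the identity vec(A X B) = kron(B^T, A) vec(X) (column-major vec)."""
    m, n = C.shape
    K = np.kron(B.T, A)                           # (mn) x (mn) coefficient matrix
    x = np.linalg.solve(K, C.flatten(order="F"))  # vec(C), column-major
    return x.reshape((m, n), order="F")

# hypothetical smooth coefficients evaluated at t = 0.7 (illustrative only)
t = 0.7
A = np.array([[np.sin(t), -np.cos(t)], [np.cos(t), np.sin(t)]])
B = np.eye(2) + 0.1 * np.sin(t) * np.ones((2, 2))
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
C = A @ X_true @ B                                # build a consistent right-hand side
X = solve_lme(A, B, C)
assert np.allclose(X, X_true)
```

A ZNN solver performs this kind of instantaneous solve continuously and implicitly, without forming or inverting the (mn) × (mn) Kronecker matrix at every step.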

2.1. Conventional ZNN Model

The conventional ZNN (CZNN) [2,11,21,22,28,29,31] was originally designed to systematically solve time-varying problems such as matrix inversion in real time while eliminating lagging errors. For solving the above time-varying LME (1), the CZNN design formula can be written as
Ṡ(t) = −γ₁S(t)
where γ₁ ∈ R, γ₁ > 0, is used to scale the convergence rate of the solution. In order to obtain the time-varying unknown matrix X(t) of the time-varying LME problem (1), the error function S(t) is defined as S(t) = A(t)X(t)B(t) − C(t). Then, taking the derivative Ṡ(t) of S(t), substituting it into the above CZNN design formula (2) and expanding, the following CZNN solver is obtained for the time-varying LME problem (1):
A(t)Ẋ(t)B(t) = −γ₁(A(t)X(t)B(t) − C(t)) − Ȧ(t)X(t)B(t) − A(t)X(t)Ḃ(t) + Ċ(t)
However, the shortcoming of the above CZNN is that when noise exists, the solution of the model converges slowly to the theoretical one or fails to converge at all. Intuitively speaking, after adding noise to the right side of the design formula (2), one has Ṡ(t) = −γ₁S(t) + noise. Evidently, when the CZNN model (3) is exploited to solve (1), the error S(t) = A(t)X(t)B(t) − C(t) does not truly decrease to 0, so X(t) will not converge to the theoretical solution X*(t).
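For intuition, a scalar analogue of the CZNN solver (3) can be integrated with forward Euler. The coefficients below, a(t) = 2 + sin t, b(t) = 1 and c(t) = a(t)·cos t (so that x*(t) = cos t), are hand-picked illustrations, not the paper's example:

```python
import numpy as np

# Scalar analogue of the CZNN solver (3): a(t) x(t) b(t) = c(t) with b(t) = 1.
g1 = 10.0
a  = lambda t: 2.0 + np.sin(t)
da = lambda t: np.cos(t)
c  = lambda t: (2.0 + np.sin(t)) * np.cos(t)
dc = lambda t: np.cos(t) ** 2 - (2.0 + np.sin(t)) * np.sin(t)

dt, T = 1e-3, 5.0
x = 0.0                                  # arbitrary initial state
for k in range(int(T / dt)):
    t = k * dt
    s = a(t) * x - c(t)                  # error S(t) = A X B - C
    xdot = (-g1 * s - da(t) * x + dc(t)) / a(t)   # CZNN dynamics solved for x'
    x += dt * xdot

assert abs(x - np.cos(T)) < 1e-2         # tracks the time-varying theoretical solution
```

In the noise-free case the error obeys ṡ = −γ₁s exactly, so the residual decays exponentially at rate γ₁.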

2.2. Integration-Enhanced ZNN Model

In order to solve the above accuracy problem during the solving procedure in the presence of noise, an integration-enhanced ZNN (IEZNN) model was proposed [28]. The main improvement of IEZNN compared with CZNN is to add an integral term to the right side of (2). The IEZNN design formula can be summarized as follows:
Ṡ(t) = −γ₁S(t) − ω∫₀ᵗ S(σ)dσ
where γ₁ is the same as defined in the CZNN (2), and ω > 0 is a design parameter that scales the convergence of the IEZNN. Following the design formula of the IEZNN (4) and considering S(t) = A(t)X(t)B(t) − C(t), an IEZNN model can be obtained for solving the time-varying LME (1) as follows:
A(t)Ẋ(t)B(t) = −γ₁(A(t)X(t)B(t) − C(t)) − ω∫₀ᵗ (A(σ)X(σ)B(σ) − C(σ))dσ − Ȧ(t)X(t)B(t) − A(t)X(t)Ḃ(t) + Ċ(t)
According to the investigations in [28], (5) is the complete form of the noise-resistant IEZNN model, which has high noise tolerance and is able to converge to the theoretical solution of the time-varying LME problem. However, the IEZNN model cannot achieve finite-time convergence because no nonlinear activation function [15] is used.
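The effect of the integral term can be seen in the same scalar setting: injecting a constant noise ν into the design formula (4), the accumulated integral settles at ν/ω and cancels the noise. A minimal sketch with illustrative coefficients (not from the paper):

```python
import numpy as np

# Scalar analogue of the IEZNN solver (5) under constant noise nu.
g1, w, nu = 10.0, 10.0, 2.0
a  = lambda t: 2.0 + np.sin(t)
da = lambda t: np.cos(t)
c  = lambda t: (2.0 + np.sin(t)) * np.cos(t)     # so x*(t) = cos(t), b(t) = 1
dc = lambda t: np.cos(t) ** 2 - (2.0 + np.sin(t)) * np.sin(t)

dt, T = 1e-3, 10.0
x, I = 0.0, 0.0                           # I accumulates the integral of S
for k in range(int(T / dt)):
    t = k * dt
    s = a(t) * x - c(t)
    # design formula (4) with noise: S' = -g1*S - w*I + nu, solved for x'
    xdot = (-g1 * s - w * I + nu - da(t) * x + dc(t)) / a(t)
    x += dt * xdot
    I += dt * s

residual = abs(a(T) * x - c(T))
assert residual < 1e-2                    # converges despite the constant noise
assert abs(I - nu / w) < 1e-2             # the integral term absorbs the noise
```

At equilibrium ṡ = 0 and s = 0 force ωI = ν, which is exactly how the integral term neutralizes a constant disturbance.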

3. Noise-Enduring IEAZNN Model

When external noise interference exists, the aforementioned CZNN model (3) cannot drive the error of the time-varying LME (1) to zero [22], while the IEZNN model (5) can resist noise interference by means of an added integral term but cannot achieve finite-time convergence. Since noise interference is unavoidable in both nature and practice, in this section we design and implement an anti-noise model that converges in a finite amount of time, shown in Equation (9).

3.1. Model Design

In order to solve problem (1), as for the CZNN (2) and IEZNN (4), the design formula is constructed based on an error function S(t) ∈ R^{m×n}. Different from the design of the CZNN and IEZNN, the error function S(t) will be replaced by the gradient of an energy function, constructed in the following steps:
(1)
By the classical gradient-based algorithm [12,15,18,29] and the general construction process [15], first, a non-negative scalar-valued norm-based energy function ε(X) can be defined as ε(X) = ‖AXB − C‖_F² / 2 = trace((AXB − C)^T (AXB − C)) / 2, with t omitted for convenience.
(2)
By the properties of the trace, the derivative of ε(X) with respect to X is ∂ε(X)/∂X = A^T A X B B^T − A^T C B^T.
(3)
Define G(t) := ∂ε(X)/∂X to replace the error function S(t) in (2).
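The closed-form gradient in step (2) can be checked numerically against a central-difference approximation of ε(X); a short sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((2, 3))
X = rng.standard_normal((2, 3))

def energy(X):
    """Norm-based energy function eps(X) = ||A X B - C||_F^2 / 2."""
    R = A @ X @ B - C
    return 0.5 * np.sum(R * R)

# closed-form gradient from step (2): A^T A X B B^T - A^T C B^T
G = A.T @ (A @ X @ B - C) @ B.T

# central-difference approximation of the gradient, entry by entry
h = 1e-6
G_num = np.zeros_like(X)
for i in range(2):
    for j in range(3):
        D = np.zeros_like(X)
        D[i, j] = h
        G_num[i, j] = (energy(X + D) - energy(X - D)) / (2 * h)

assert np.allclose(G, G_num, atol=1e-5)
```

Because ε is quadratic in X, the central difference matches the analytic gradient up to rounding error.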
In the absence of noise interference, when the error function S ( t ) converges to 0, it means that the solution X ( t ) of the time-varying LME problem (1) can converge to the theoretical solution X * ( t ) by the following ZNN model:
Ṡ(t) = −γ₁S(t) − γ₂S^{q/p}(t)
However, X(t) is no longer the exact solution of the original LME problem (1) under the interference of various external noises. In order to overcome this problem, in this section, a new error function E(t) ∈ R^{m×n}, completely different from the error S(t) = A(t)X(t)B(t) − C(t), is designed as follows:
E(t) = G(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ
where p and q denote the positive odd integers satisfying p > q . The integral term is composed of two parts, which can speed up the convergence of the model against noise to a certain extent, and is equivalent to the role of the activation function [32]. Similarly, in order to solve the time-varying problem (1) accurately and effectively, based on the new error function E ( t ) , an improved design idea is introduced as follows:
Ė(t) = −γ₁E(t) − γ₂E^{q/p}(t)
where the design parameter γ₂ > 0 plays the same role as γ₁. As for Equation (8), the first term (−γ₁E(t)) mainly affects the convergence speed, dominating the convergence rate of the neural network when the residual error is relatively large at the beginning of the solution. However, when E(t) gradually approaches zero, the convergence speed of the neural network is mainly decided by the second term (−γ₂E^{q/p}(t)) of (8). Therefore, the ZNN model based on (8) can have superior convergence properties. By the new error function (7) combined with (8), the integration-enhanced combined accelerating zeroing neural network (IEAZNN) model can be expressed as follows:
Ġ(t) + ω(G(t) + G^{q/p}(t)) = −γ₁(G(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ) − γ₂(G(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ)^{q/p}
For model (9), according to G(t) = A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t), we can obtain the following IEAZNN model for solving the time-varying problem (1):
A^T(t)A(t)Ẋ(t)B(t)B^T(t) = −γ₁(A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ) − γ₂(A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ)^{q/p} − Ȧ^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)Ȧ(t)X(t)B(t)B^T(t) − A^T(t)A(t)X(t)Ḃ(t)B^T(t) − A^T(t)A(t)X(t)B(t)Ḃ^T(t) + Ȧ^T(t)C(t)B^T(t) + A^T(t)Ċ(t)B^T(t) + A^T(t)C(t)Ḃ^T(t) − ω(A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t) + (A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t))^{q/p})
As discussed near Equation (7), the IEAZNN model (10) is a noise-resistant model derived from the improved design idea (8). Considering that noise unavoidably exists in practical engineering applications, the IEAZNN (10) can be further written in the following form associated with noise:
A^T(t)A(t)Ẋ(t)B(t)B^T(t) = −γ₁(A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ) − γ₂(A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ)^{q/p} − Ȧ^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)Ȧ(t)X(t)B(t)B^T(t) − A^T(t)A(t)X(t)Ḃ(t)B^T(t) − A^T(t)A(t)X(t)B(t)Ḃ^T(t) + Ȧ^T(t)C(t)B^T(t) + A^T(t)Ċ(t)B^T(t) + A^T(t)C(t)Ḃ^T(t) − ω(A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t) + (A^T(t)A(t)X(t)B(t)B^T(t) − A^T(t)C(t)B^T(t))^{q/p}) + noise
In this paper, we carry out the theoretical analysis and experiment simulation in three cases (i.e., noise-free, constant noise and dynamic linear noise) for the noised model (11).
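A scalar analogue of the noise-perturbed IEAZNN (11) illustrates the combined mechanism: E(t) settles where the noise is balanced, Ė(t) decays to zero, and that forces G(t), and hence the residual, to zero. All concrete choices below (a(t), c(t), parameter values, forward-Euler integration, finite-difference time derivatives) are illustrative assumptions, with the noise injected directly into the design formula (8):

```python
import numpy as np

p, q, g1, g2, w, nu = 5, 3, 1.0, 1.0, 10.0, 2.0
frac = lambda v: np.sign(v) * np.abs(v) ** (q / p)   # element-wise odd fractional power

a  = lambda t: 2.0 + np.sin(t)                       # hypothetical a(t), b(t) = 1
c  = lambda t: (2.0 + np.sin(t)) * np.cos(t)         # chosen so that x*(t) = cos(t)
m  = lambda t: a(t) ** 2                             # scalar version of A^T A . B B^T
n_ = lambda t: a(t) * c(t)                           # scalar version of A^T C B^T

dt, T = 1e-4, 8.0
x, I = 0.3, 0.0                                      # I = int_0^t (G + G^(q/p)) ds
for k in range(int(T / dt)):
    t = k * dt
    G = m(t) * x - n_(t)                             # gradient-based error G(t)
    E = G + w * I                                    # combined error E(t), Eq. (7)
    Edot = -g1 * E - g2 * frac(E) + nu               # perturbed design formula, constant noise
    Gdot = Edot - w * (G + frac(G))
    h = 1e-6                                         # finite-difference time derivatives
    mdot = (m(t + h) - m(t - h)) / (2 * h)
    ndot = (n_(t + h) - n_(t - h)) / (2 * h)
    x += dt * (Gdot - mdot * x + ndot) / m(t)        # G = m x - n  =>  x' from G'
    I += dt * (G + frac(G))

assert abs(a(T) * x - c(T)) < 1e-2                   # residual vanishes despite the noise
```

Note that E(t) itself converges to a nonzero constant under constant noise; it is Ė(t) → 0 that drives G(t), and therefore the LME residual, to zero.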

3.2. Theoretical Analyses

This section theoretically analyzes the convergence, stability and finite-time performance of the noise-resistant model (10), together with a detailed proof that the noise-perturbed model (11) converges to the theoretical solution even in a noisy environment.
Theorem 1. 
The IEAZNN model (10) can globally converge to the theoretical time-varying solution X * ( t ) beginning from any initial state X 0 = X ( 0 ) R m × n by taking into account the coefficient matrices A ( t ) , B ( t ) and C ( t ) in the time-varying LME problem (1) under the assumption that there is a single unique solution.
Proof. 
According to (7), we can define a candidate Lyapunov function [15,33] as follows:
V_E(t) = (1/2)E²(t),
and the time derivative of (12) with respect to time t can be written as
V̇_E(t) = E(t)Ė(t) = −γ₁E²(t) − γ₂E^{(p+q)/p}(t)
Evidently, from (12) and (13), V_E(t) ≥ 0 and V̇_E(t) ≤ 0. According to the Lyapunov stability principle [15,16], E(t) = G(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ will eventually converge to 0. At that time, G(t) = −ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ, and Ġ(t) = −ω(G(t) + G^{q/p}(t)) can be further obtained. Similar to (12), we proceed to define a new candidate Lyapunov function as follows:
V_G(t) = (1/2)G²(t)
Obviously, V_G(t) ≥ 0, and the derivative of V_G(t) is formed as follows:
V̇_G(t) = G(t)Ġ(t) = −ω(G²(t) + G^{(p+q)/p}(t))
From the above analysis, we can obtain V̇_G(t) ≤ 0. According to the Lyapunov stability principle [15], G(t) will also eventually converge to 0, which implies that the solution X(t) converges to the theoretical solution X*(t). This proof is thus completed.    □
Theorem 2. 
Starting from any initial value, the IEAZNN model (10) can globally converge to the theoretical solution X*(t) in finite time; the upper bound of the convergence time is T ≤ p/(γ₁(p−q)) · ln((γ₁E^{(p−q)/p}(0) + γ₂)/γ₂) + p/(ω(p−q)) · ln(G^{(p−q)/p}(0) + 1), where E(0) and G(0) are the initial error values.
Proof. 
During the convergence period, the total finite convergence time T can be divided into two parts, namely, the time T E when E ( t ) decreases to 0 in (7) and the time T G when G ( t ) decreases to 0 in (7)—that is, the total convergence time T = T E + T G . To compute T E , combine (7) and (8); then, we have
Ė(t) = dE(t)/dt = −γ₁E(t) − γ₂E^{q/p}(t)
⇒ dt = −dE(t)/(γ₁E(t) + γ₂E^{q/p}(t))
⇒ ∫₀^{T_E} dt = −∫_{E(0)}^{0} dE(t)/(γ₁E(t) + γ₂E^{q/p}(t))
⇒ T_E = (p/(p−q)) ∫₀^{E^{(p−q)/p}(0)} dE^{(p−q)/p}(t)/(γ₁E^{(p−q)/p}(t) + γ₂) = (p/(γ₁(p−q))) ln((γ₁E^{(p−q)/p}(0) + γ₂)/γ₂)
Thus, E(t) = G(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ converges to 0 within T_E = (p/(γ₁(p−q))) ln((γ₁E^{(p−q)/p}(0) + γ₂)/γ₂) seconds. When E(t) has converged to 0, G(t) = −ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ; thus, Ġ(t) can be written as follows:
Ġ(t) = −ω(G(t) + G^{q/p}(t))
Our ultimate goal is to obtain the X ( t ) to be solved by reducing G ( t ) to 0. Therefore, we need to prove the convergence time T G from (17) at the time when E ( t ) is reduced to 0. Thus, we have
Ġ(t) = dG(t)/dt = −ω(G(t) + G^{q/p}(t))
⇒ dt = −dG(t)/(ω(G(t) + G^{q/p}(t)))
⇒ ∫₀^{T_G} dt = −∫_{G(0)}^{0} dG(t)/(ω(G(t) + G^{q/p}(t)))
⇒ T_G = (p/(ω(p−q))) ∫₀^{G^{(p−q)/p}(0)} dG^{(p−q)/p}(t)/(G^{(p−q)/p}(t) + 1) = (p/(ω(p−q))) ln(G^{(p−q)/p}(0) + 1)
From the above derivations, we obtain the time T_G for the error G(t) to converge to 0. As a whole, the error E(t) is reduced to 0 within T_E = (p/(γ₁(p−q))) ln((γ₁E^{(p−q)/p}(0) + γ₂)/γ₂) seconds, and, starting from the time when E(t) = 0, G(t) is reduced to 0 within T_G = (p/(ω(p−q))) ln(G^{(p−q)/p}(0) + 1) seconds. Therefore, the upper bound of the total convergence time of (10) is
T = T_E + T_G = (p/(γ₁(p−q))) ln((γ₁E^{(p−q)/p}(0) + γ₂)/γ₂) + (p/(ω(p−q))) ln(G^{(p−q)/p}(0) + 1).
   □
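The first term of the bound in Theorem 2 can be checked numerically on a single scalar error element e(t) obeying ė = −γ₁e − γ₂e^{q/p}: the simulated time to (numerically) reach zero should not exceed the bound. A quick sketch with an illustrative initial value:

```python
import numpy as np

p, q, g1, g2 = 5, 3, 1.0, 1.0
frac = lambda v: np.sign(v) * np.abs(v) ** (q / p)

e0 = 2.0                                  # illustrative initial error element
# first term of the Theorem 2 bound: time for E(t) to reach zero
T_bound = p / (g1 * (p - q)) * np.log((g1 * e0 ** ((p - q) / p) + g2) / g2)

# forward-Euler simulation of  e' = -g1 e - g2 e^(q/p)
dt, t, e = 1e-5, 0.0, e0
while abs(e) > 1e-6:                      # "reached zero" up to numerical tolerance
    e += dt * (-g1 * e - g2 * frac(e))
    t += dt

assert t < T_bound                        # hits (numerical) zero before the bound
```

The fractional power term is what makes the hitting time finite; with γ₂ = 0 the decay would be exponential and the error would never exactly reach zero.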
Theorem 3. 
Assuming that the time-varying LME problem (1) has an optimal solution, the noise-perturbed IEAZNN model (11) can stably converge to the theoretical solution from any initial value under the presence of different noise disturbances.
Proof. 
According to Equations (8) and (11), we can obtain the simple expression of the perturbed model
Ė(t) = −γ₁E(t) − γ₂E^{q/p}(t) + noise
with E(t) = G(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ. Similarly, we define the candidate Lyapunov function for (20) as follows:
L(t) = (−γ₁E(t) − γ₂E^{q/p}(t) + noise)² / 2
Then, the derivative L̇(t) of L(t) can be written as follows:
L̇(t) = (−γ₁E(t) − γ₂E^{q/p}(t) + noise)² · (−γ₁ − γ₂(q/p)E^{(q−p)/p}(t))
By observing Equation (22), there is no question that (−γ₁E(t) − γ₂E^{q/p}(t) + noise)² ≥ 0. Moreover, γ₁, γ₂, q and p are greater than 0, and p > q are both odd, so the exponent (q − p)/p has an even numerator and E^{(q−p)/p}(t) ≥ 0, which promises that −γ₁ − γ₂(q/p)E^{(q−p)/p}(t) is negative; that is, L̇(t) ≤ 0. Since L(t) is defined by Equation (21), L(t) ≥ 0. According to the Lyapunov stability principle [15], L(t) will eventually converge to 0. At that time, −γ₁E(t) − γ₂E^{q/p}(t) + noise = Ė(t) = 0.
As E(t) = G(t) + ω∫₀ᵗ (G(σ) + G^{q/p}(σ))dσ, expressed in Equation (7), when Ė(t) equals 0, we further obtain Ġ(t) = −ω(G(t) + G^{q/p}(t)) as written in Equation (18), whose convergence has been proven. That is to say, we have successfully proved the stability and convergence of the perturbed IEAZNN model (11), which can still accurately solve the time-varying LME (1) in a noisy environment.    □

4. Comparative Verifications

This section gives two illustrative examples to show the convergence performance of the presented neural models. In the first example, we compare the convergence of the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) under no noise, constant noise and dynamic linear noise, and subsequently analyze the performance of the IEAZNN under random noise. Additionally, in the noise-free setting, the ZNN model (6) is applied to solve an image-deblurring problem.
Example 1. 
In the previous sections, we presented the CZNN model (3), the IEAZNN (10) and the perturbed IEAZNN (11), together with theoretical analyses of convergence, finite-time performance and robustness. In order to further verify the superiority of the proposed IEAZNN model (10) over the existing CZNN model (3) and IEZNN model (5), an illustrative example is given in this section with the following coefficient matrices in the time-varying LME problem (1), the same as in [29]:
A ( t ) = sin t cos t cos t sin t , B ( t ) = 2 sin t + cos t sin t cos t 2 cos t sin t 2 2 sin t 2 sin t ,
C ( t ) = 2 + sin t cos t 2 cos t 3 cos t + sin t sin t cos t sin t + 2 cos t sin t cos t cos t + sin 2 t 2 sin t 2 + 2 sin t sin 2 t cos t + 2 sin t sin 2 t .
The following theoretical solution X * can be obtained using the MATLAB tool [34]:
X * ( t ) = sin t cos t 0 cos t sin t 1 ,
which is used to verify the accuracy of the abovementioned three solvers under the following different noise cases. Meanwhile, according to the above theoretical proof and model characteristics, p and q are set to 5 and 3, respectively, in the experiments. To outline the MATLAB implementation, the core ideas of the program are summarized in Algorithm 1:
Algorithm 1: Core ideas of the MATLAB program.
γ₁ ← 1, γ₂ ← 1, ω ← 10
integrate the IEAZNN model (10) until t = 10
for i ← 1 to 6
    do figure(1); plot(t, X_i)
error ← A(t)X(t)B(t) − C(t), theo ← A⁻¹(t)C(t)B⁻¹(t)
for j ← 1 to 6
    do figure(1); plot(t, theo_j)
figure(2); plot(t, norm(error))
Case 1. 
No Noise: First, the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) start from the same random initial value under otherwise identical conditions. Figure 1 shows the convergence of the three models with design parameters γ₁ = γ₂ = 1 and γ₁ = γ₂ = 10 in the absence of noise. Figure 1a depicts the fit between the X(t) solved by the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) and the theoretical solution X* (red line) when γ₁ = γ₂ = 1 and ω = 10. It can be seen that all three models eventually converge to the theoretical solution when there is no noise interference. Figure 1b shows the convergence of the residual error ‖S(t)‖_F = ‖A(t)X(t)B(t) − C(t)‖_F corresponding to the X(t) shown in Figure 1a; all three models achieve good convergence accuracy. Compared with Figure 1a, the design parameters γ₁ and γ₂ are increased to 10 in Figure 1c. Obviously, the X(t) computed by the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) still converges to the theoretical solution, and the convergence time is shorter than in Figure 1b with γ₁ = γ₂ = 1. Similarly, Figure 1d shows the residual error corresponding to Figure 1c. Although the CZNN model (3) converges faster, the IEAZNN model (10) has higher convergence accuracy, reaching the 10⁻⁶ level, better than the CZNN model (3) and the IEZNN model (5), both of which only reach the 10⁻⁴ level. It is worth mentioning that, when the design parameters increase, the IEAZNN model (10) converges faster than the IEZNN model (5) under zero noise. The code for this figure can be found in Appendix A.
Case 2. 
Constant Noise: Figure 2 shows the convergence performance of the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) under constant noise c = 2, the same as in [32]. Figure 2a depicts the fit between the X(t) solved by the three models and the theoretical solution X* (red line) with γ₁ = γ₂ = 1 and ω = 10. It can be seen that under constant noise c = 2, only the IEZNN model (5) and the IEAZNN model (10), which involve an integral term, converge to the theoretical solution, while the CZNN model (3) without the integral term cannot resist noise. Figure 2b shows the residual error corresponding to the X(t) in Figure 2a. Both the IEZNN and IEAZNN converge well when γ₁ = γ₂ = 1, and the convergence accuracy of the IEZNN model (5) reaches the 10⁻⁴ level, better than the IEAZNN's 10⁻³ level. Notably, when the design parameters are small, such as γ₁ = γ₂ = 1, the IEAZNN model (10), which handles noise by adding the integral term to the error and whose design idea (8) has two terms controlled by the design parameters, has no advantage over the IEZNN model when solving LME (1) dynamically. The situation changes as the design parameters γ₁ = γ₂ increase to 10: the IEZNN model (5) and the IEAZNN model (10) still converge to the theoretical solution, and the convergence time is shorter than in the case γ₁ = γ₂ = 1. Even with γ₁ increased to 10, the CZNN model (3) remains unable to resist noise, although its error level is lower than in the case γ₁ = 1. Additionally, as shown in Figure 2d, compared with the IEZNN model (5), the IEAZNN model (10) not only greatly shortens the convergence time but also achieves better convergence accuracy, at the 10⁻⁵ level versus the 10⁻³ level of the IEZNN model (5).
Case 3. 
Dynamic Linear Noise: Figure 3 shows the convergence performance of the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) under dynamic linear noise = 0.6t [32]. Likewise, the comparison experiments start from the same random initial value, with all other conditions the same. As shown in Figure 3a, when γ₁ = γ₂ = 1 and ω = 10, the X(t) of the CZNN model (3) cannot converge to the theoretical solution X*; only the X(t) of the IEZNN model (5) and the IEAZNN model (10) can. Figure 3b shows that the residual error of the traditional CZNN model (3) decreases at the beginning but starts to increase after several seconds with a divergent trend. On the other hand, the residual errors of the IEZNN model (5) and the IEAZNN model (10) both converge, but the convergence speed and accuracy of the IEZNN model (5) are not as good as those of the IEAZNN model (10). For example, when γ₁ = γ₂ = 1, the residual error of the IEAZNN model (10) reaches the 10⁻² level, better than the 10⁻¹ level of the IEZNN model (5). As shown in Figure 3c,d, although the residual error of the traditional CZNN model (3) still cannot be reduced to 0, its growth is smaller than in the case γ₁ = 1. In addition, the convergence speed and accuracy of the IEAZNN model (10) residual are further improved: the residual converges to zero within 1 s and the accuracy reaches the order of 10⁻³. In contrast, under dynamic linear noise, neither the convergence accuracy nor the convergence time of the IEZNN model (5) residual improves noticeably as γ₁ increases, and its accuracy only reaches the 10⁻¹ level.
In order to simulate the irregular noise in nature as much as possible, the IEAZNN model (10) is used to solve the LME (1) dynamically under the interference of random noise. From Figure 4, it can be clearly seen that the IEAZNN model (10) performs very well under the interference of random noise, not only with high convergence accuracy but also with short convergence time. It can be seen from Figure 4b that when γ 1 = γ 2 = 1 , the residual error convergence accuracy of the IEAZNN model (10) can reach the 10 3 level.
It can be seen from the above comparative experiments that when γ 1 = γ 2 = 1 , the convergence time of the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) are basically the same. However, when γ 1 and γ 2 increase to 10, the IEAZNN model (10) can converge in about 1 s, and its comprehensive convergence effect is the best. For comparing the experiment results more intuitively, the convergence precision of the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) are summarized in Table 1 under different noise and design parameters. From Table 1, we have the following facts:
(1) Under zero noise, the CZNN model (3) converges stably with high accuracy for both γ₁ = 1 and γ₁ = 10; once noise is present, however, it can no longer solve the time-varying LME (1) accurately. In contrast, the IEZNN model (5) and the IEAZNN model (10) solve the LME (1) accurately with or without noise.
(2) Owing to its different design formula, the proposed IEAZNN model (10) achieves higher convergence accuracy and better convergence performance than the other two ZNN models under dynamic linear noise, for both γ₁ = γ₂ = 1 and γ₁ = γ₂ = 10.
(3) As γ₁ and γ₂ increase, the IEAZNN model (10) attains the highest convergence accuracy of the three ZNN models, with or without noise, showing its advantage in solving time-varying LME problems.
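Facts (1)–(3) can be made plausible with a short final-value argument on scalar stand-ins for the matrix error dynamics (a sketch of our own, not a restatement of models (3), (5) and (10)). With γ for the design parameter, λ for the integral gain, and N(p) the Laplace transform of the noise n(t), the two designs transform to:

```latex
% Scalar stand-ins for the error dynamics (gamma, lambda > 0):
%   CZNN-style:  \dot{s}(t) = -\gamma s(t) + n(t)
%   IE-style:    \dot{s}(t) = -\gamma s(t) - \lambda \int_0^t s(\tau)\,d\tau + n(t)
S_{\mathrm{CZNN}}(p) = \frac{s(0) + N(p)}{p + \gamma}, \qquad
S_{\mathrm{IE}}(p) = \frac{p\,\bigl[\,s(0) + N(p)\,\bigr]}{p^{2} + \gamma p + \lambda}.
```

By the final-value theorem, constant noise n(t) = c (N(p) = c/p) leaves a residual c/γ in the first design but 0 in the second, which matches fact (1); linear noise n(t) = ct (N(p) = c/p²) makes the first diverge while the second settles at c/λ, a constant offset that shrinks as the integral gain grows, consistent with facts (2) and (3).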
Example 2. 
In the application of image deblurring, we consider the time-invariant special case of LME (1). A and B are 100 × 100 Toeplitz matrices [5] constructed by formula (23), with A = B; X represents the original image, and C represents the blurred image. We set r = 8. The ZNN model (6) is used for image deblurring, and the results are shown in Figure 5.
a_{ij} = 1/(2r − 1) if |i − j| ≤ r, and a_{ij} = 0 otherwise.  (23)
For this experiment, γ₁ = γ₂ = 1 and the other conditions are the same as in Example 1. Although the deblurred image in Figure 5c has some minor defects compared with the original image in Figure 5a, it restores the blurred image in Figure 5b to a visible state that is almost the same as Figure 5a.
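Formula (23) is straightforward to reproduce. The sketch below is our own Python illustration (the paper runs the ZNN model (6) instead): it builds the 100 × 100 banded Toeplitz blur matrix with r = 8 and forms the blurred image C = AXB with A = B, using random data as a stand-in for the image in Figure 5a.

```python
import numpy as np

def band_toeplitz(n: int = 100, r: int = 8) -> np.ndarray:
    """Toeplitz blur matrix of formula (23): a_ij = 1/(2r-1) if |i-j| <= r, else 0."""
    i, j = np.indices((n, n))
    return np.where(np.abs(i - j) <= r, 1.0 / (2 * r - 1), 0.0)

A = band_toeplitz()
B = A                            # Example 2 sets A = B
X = np.random.rand(100, 100)     # stand-in for the original image (Figure 5a)
C = A @ X @ B                    # blurred image: the C of the time-invariant LME (1)
```

Deblurring then amounts to recovering X from AXB = C, which any LME solver, including the ZNN model used in the paper, can attempt; a banded averaging matrix of this kind can be ill-conditioned, which would explain the minor defects in the restored Figure 5c.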

5. Conclusions

To achieve finite-time convergence, high convergence accuracy and anti-noise robustness when solving LMEs in the presence of noise, an integration-enhanced combined accelerating zeroing neural network (IEAZNN) is designed, verified and applied in this paper. Unlike most existing ZNN models, the IEAZNN first takes the gradient of the energy function as the initial error and then reconstructs this error following the design idea of the IEZNN. Proofs of global stability and finite convergence time are provided for the IEAZNN model under zero noise and various noise conditions. Finally, numerical simulations verify the convergence characteristics of the IEAZNN model, and comparisons with the CZNN and IEZNN models illustrate the superior performance of the proposed method. Future studies may concentrate on applying the IEAZNN model to practical engineering problems.

Author Contributions

Conceptualization, C.Y. and W.D.; methodology, C.Y. and W.D.; software, W.D.; validation, J.C. (Jun Cai), W.D., J.C. (Jingjing Chen) and C.Y.; formal analysis, W.D.; investigation, J.C. (Jun Cai) and C.Y.; resources, J.C. (Jingjing Chen); data curation, W.D.; writing—original draft preparation, W.D.; writing—review and editing, C.Y.; visualization, J.C. (Jingjing Chen); supervision, J.C. (Jun Cai) and C.Y.; project administration, J.C. (Jun Cai) and C.Y.; funding acquisition, J.C. (Jun Cai). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Special Projects in National Key Research and Development Program of China (2018YFB1802200, 2019YFB1804403); GPNU Foundation (2022SDKYA029); Key Areas of Guangdong Province (2019B010118001); National Natural Science Foundation of China (61972104, 61902080, 62002072, 61702120); Science and Technology Project in Guangzhou (201803010081); Foshan Science and Technology Innovation Project, China (2018IT100283); Guangzhou Key Laboratory (202102100006) and Science and Technology Program of Guangzhou, China (202002020035); and Industry-University-Research Innovation Fund for Chinese Universities (2021FNA04010).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
LME: linear matrix equation
RNN: recurrent neural network
GNN: gradient-based neural network
ZNN: zeroing neural network
CZNN: conventional zeroing neural network
FTZNN: finite-time zeroing neural network
NTZNN: noise-tolerant zeroing neural network
IEZNN: integration-enhanced zeroing neural network
NAF: novel activation function
DMI: dynamic matrix inversion
IEAZNN: integration-enhanced combined accelerating zeroing neural network

Appendix A


References

1. Zhang, Z.; Zheng, L.; Yu, J.; Li, Y.; Yu, Z. Three recurrent neural networks and three numerical methods for solving a repetitive motion planning scheme of redundant robot manipulators. IEEE ASME Trans. Mechatron. 2017, 22, 1423–1434.
2. Zhang, Z.; Zheng, L.; Chen, Z.; Kong, L.; Karimi, H.R. Mutual-collision-avoidance scheme synthesized by neural networks for dual redundant robot manipulators executing cooperative tasks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 1052–1066.
3. Jin, L.; Li, S.; Xiao, L.; Lu, R.; Liao, B. Cooperative motion generation in a distributed network of redundant robot manipulators with noises. IEEE Trans. Syst. Man Cybern. Syst. 2017, 48, 1715–1724.
4. Wang, H.; Liu, X.; Liu, K. Robust adaptive neural tracking control for a class of stochastic nonlinear interconnected systems. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 510–523.
5. Safarzadeh, M.; Sadeghi Goughery, H.; Salemi, A. Global-DGMRES method for matrix equation AXB = C. Int. J. Comput. Math. 2022, 99, 1005–1021.
6. Zhang, Y.; Ma, W.; Yi, C. The link between Newton iteration for matrix inversion and Zhang neural network (ZNN). In Proceedings of the 2008 IEEE International Conference on Industrial Technology, Chengdu, China, 21–24 April 2008; pp. 1–6.
7. Saad, Y.; van der Vorst, H.A. Iterative solution of linear systems in the 20th century. J. Comput. Appl. Math. 2000, 123, 1–33.
8. Zhou, J.; Wei, W.; Zhang, R.; Zheng, Z. Damped Newton stochastic gradient descent method for neural networks training. Mathematics 2021, 9, 1533.
9. Concas, A.; Reichel, L.; Rodriguez, G.; Zhang, Y. Iterative methods for the computation of the Perron vector of adjacency matrices. Mathematics 2021, 9, 1522.
10. Lv, X.; Xiao, L.; Tan, Z. Improved Zhang neural network with finite-time convergence for time-varying linear system of equations solving. Inf. Process. Lett. 2019, 147, 88–93.
11. Gerontitis, D.; Moysis, L.; Stanimirović, P.; Katsikis, V.N.; Volos, C. Varying-parameter finite-time zeroing neural network for solving linear algebraic systems. Electron. Lett. 2020, 56, 810–813.
12. Chen, K. Recurrent implicit dynamics for online matrix inversion. Appl. Math. Comput. 2013, 219, 10218–10224.
13. Li, X.; Yu, J.; Li, S.; Ni, L. A nonlinear and noise-tolerant ZNN model solving for time-varying linear matrix equation. Neurocomputing 2018, 317, 70–78.
14. Liao, B.; Zhang, Y.; Jin, L. Taylor O(h³) discretization of ZNN models for dynamic equality-constrained quadratic programming with application to manipulators. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 225–237.
15. Yi, C.; Chen, Y.; Lu, Z. Improved gradient-based neural networks for online solution of Lyapunov matrix equation. Inf. Process. Lett. 2011, 111, 780–786.
16. Chen, Y.; Yi, C.; Qiao, D. Improved neural solution for the Lyapunov matrix equation based on gradient search. Inf. Process. Lett. 2013, 113, 876–881.
17. Xiao, L.; Lu, R. A fully complex-valued gradient neural network for rapidly computing complex-valued linear matrix equations. Chin. J. Electron. 2017, 26, 1194–1197.
18. Zhang, Y.; Zhang, J.; Weng, J. Dynamic Moore–Penrose inversion with unknown derivatives: Gradient neural network approach. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–11.
19. Tan, Z. Fixed-time convergent gradient neural network for solving online Sylvester equation. Mathematics 2022, 10, 3090.
20. Zhang, Z.; Fu, T.; Yan, Z.; Jin, L.; Xiao, L.; Sun, Y.; Yu, Z.; Li, Y. A varying-parameter convergent-differential neural network for solving joint-angular-drift problems of redundant robot manipulators. IEEE ASME Trans. Mechatron. 2018, 23, 679–689.
21. Zhang, Y.; Ge, S.S. Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Trans. Neural Netw. 2005, 16, 1477–1490.
22. Yan, J.; Xiao, X.; Li, H.; Zhang, J.; Yan, J.; Liu, M. Noise-tolerant zeroing neural network for solving non-stationary Lyapunov equation. IEEE Access 2019, 7, 41517–41524.
23. Li, K.; Jiang, C.; Xiao, X.; Huang, H.; Li, Y.; Yan, J. Residual error feedback zeroing neural network for solving time-varying Sylvester equation. IEEE Access 2022, 10, 2860–2868.
24. Xiao, L. A new design formula exploited for accelerating Zhang neural network and its application to time-varying matrix inversion. Theor. Comput. Sci. 2016, 647, 50–58.
25. Jin, J.; Gong, J. An interference-tolerant fast convergence zeroing neural network for dynamic matrix inversion and its application to mobile manipulator path tracking. Alex. Eng. J. 2021, 60, 659–669.
26. Jin, L.; Li, S.; Hu, B.; Liu, M.; Yu, J. A noise-suppressing neural algorithm for solving the time-varying system of linear equations: A control-based approach. IEEE Trans. Ind. Inform. 2018, 15, 236–246.
27. Stanimirović, P.S.; Katsikis, V.N.; Li, S. Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 2019, 329, 129–143.
28. Jin, L.; Zhang, Y.; Li, S. Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 2615–2627.
29. Zhang, Y.; Chen, K. Comparison on Zhang neural network and gradient neural network for time-varying linear matrix equation AXB = C solving. In Proceedings of the 2008 IEEE International Conference on Industrial Technology, Chengdu, China, 21–24 April 2008; pp. 1–6.
30. Dai, J.; Li, Y.; Xiao, L.; Jia, L. Zeroing neural network for time-varying linear equations with application to dynamic positioning. IEEE Trans. Ind. Inform. 2022, 18, 1552–1561.
31. Liao, S.; Liu, J.; Qi, Y.; Huang, H.; Zheng, R.; Xiao, X. An adaptive gradient neural network to solve dynamic linear matrix equations. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 5913–5924.
32. Xiao, L.; Dai, J.; Jin, L.; Li, W.; Li, S.; Hou, J. A noise-enduring and finite-time zeroing neural network for equality-constrained time-varying nonlinear optimization. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 4729–4740.
33. Xu, F.; Li, Z.; Nie, Z.; Shao, H.; Guo, D. Zeroing neural network for solving time-varying linear equation and inequality systems. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2346–2357.
34. Zhang, Y.; Yue, S.; Chen, K.; Yi, C. MATLAB simulation and comparison of Zhang neural network and gradient neural network for time-varying Lyapunov equation solving. In Proceedings of the International Symposium on Neural Networks, Beijing, China, 24–28 September 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 117–127.
Figure 1. Comparisons among the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) for solving the time-varying LME (1) with γ₁ = γ₂ = 1 and γ₁ = γ₂ = 10 in the absence of noise. (a) Neural states X(t) with γ₁ = γ₂ = 1. (b) Residual errors ‖S(t)‖F with γ₁ = γ₂ = 1. (c) Neural states X(t) with γ₁ = γ₂ = 10. (d) Residual errors ‖S(t)‖F with γ₁ = γ₂ = 10.
Figure 2. Comparisons among the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) for solving the time-varying LME (1) with γ₁ = γ₂ = 1 and γ₁ = γ₂ = 10 under constant noise. (a) Neural states X(t) with γ₁ = γ₂ = 1. (b) Residual errors ‖S(t)‖F with γ₁ = γ₂ = 1. (c) Neural states X(t) with γ₁ = γ₂ = 10. (d) Residual errors ‖S(t)‖F with γ₁ = γ₂ = 10.
Figure 3. Comparisons among the CZNN model (3), the IEZNN model (5) and the IEAZNN model (10) for solving the time-varying LME (1) with γ₁ = γ₂ = 1 and γ₁ = γ₂ = 10 under dynamic linear noise. (a) Neural states X(t) with γ₁ = γ₂ = 1. (b) Residual errors ‖S(t)‖F with γ₁ = γ₂ = 1. (c) Neural states X(t) with γ₁ = γ₂ = 10. (d) Residual errors ‖S(t)‖F with γ₁ = γ₂ = 10.
Figure 4. Results of solving LME (1) with the IEAZNN model (10) at γ₁ = γ₂ = 1, ω = 10 under random-noise interference. (a) Neural states X(t). (b) Residual errors ‖S(t)‖F.
Figure 5. The original, blurred and deblurred images. (a) The original image. (b) The blurred image. (c) The deblurred image.
Table 1. Comparison of solving accuracies of the CZNN model (3), IEZNN model (5) and IEAZNN model (10) under different noise and design parameters.
|               | Model  | No Noise | Constant Noise           | Dynamic Bounded Noise    |
|---------------|--------|----------|--------------------------|--------------------------|
| γ₁ = γ₂ = 1   | CZNN   | 10⁻⁴     | Cannot reach convergence | Cannot reach convergence |
|               | IEZNN  | 10⁻⁵     | 10⁻⁴                     | 10⁻¹                     |
|               | IEAZNN | 10⁻³     | 10⁻³                     | 10⁻²                     |
| γ₁ = γ₂ = 10  | CZNN   | 10⁻⁴     | Cannot reach convergence | Cannot reach convergence |
|               | IEZNN  | 10⁻⁴     | 10⁻³                     | 10⁻¹                     |
|               | IEAZNN | 10⁻⁶     | 10⁻⁵                     | 10⁻³                     |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Cai, J.; Dai, W.; Chen, J.; Yi, C. Zeroing Neural Networks Combined with Gradient for Solving Time-Varying Linear Matrix Equations in Finite Time with Noise Resistance. Mathematics 2022, 10, 4828. https://doi.org/10.3390/math10244828


