Article

Recurrent Neural Network Models Based on Optimization Methods

by Predrag S. Stanimirović 1,2,*, Spyridon D. Mourtas 2,3, Vasilios N. Katsikis 3, Lev A. Kazakovtsev 2 and Vladimir N. Krutikov 4
1 Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia
2 Laboratory “Hybrid Methods of Modelling and Optimization in Complex Systems”, Siberian Federal University, Krasnoyarsk 660041, Russia
3 Department of Economics, Division of Mathematics and Informatics, National and Kapodistrian University of Athens, Sofokleous 1 Street, 10559 Athens, Greece
4 Department of Applied Mathematics, Kemerovo State University, Krasnaya Street 6, Kemerovo 650043, Russia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4292; https://doi.org/10.3390/math10224292
Submission received: 13 October 2022 / Revised: 10 November 2022 / Accepted: 14 November 2022 / Published: 16 November 2022

Abstract:
Many researchers have addressed problems involving time-varying (TV) general linear matrix equations (GLMEs) because of their importance in science and engineering. This research addresses the problem of solving TV GLMEs using the zeroing neural network (ZNN) design. Five new ZNN models based on novel error functions arising from gradient-descent and Newton optimization methods are presented and compared to each other and to the standard ZNN design. Pseudoinversion is involved in four of the proposed ZNN models, while three of them are related to Newton's optimization method. Heterogeneous numerical examples show that all models successfully solve TV GLMEs, although their effectiveness varies and depends on the input matrices.

1. Introduction and Preliminaries

Novel zeroing neural network (ZNN) dynamical systems are presented using various error functions that arise from gradient-descent and Newton optimization methods. The introduced models are theoretically investigated and compared to each other and to the standard ZNN design. The proposed ZNN models are applied to solving time-varying (TV) general linear matrix equations (GLMEs) with arbitrary TV real input matrices. The TV GLME problem is expressed as the matrix equation
$$A(t)Y(t)C(t)=B(t),\quad A(t)\in\mathbb{R}^{m\times n},\ Y(t)\in\mathbb{R}^{n\times k},\ C(t)\in\mathbb{R}^{k\times h},\ B(t)\in\mathbb{R}^{m\times h},\ m\geq n,\ k\leq h,\tag{1}$$
in which $Y(t)$ is the unknown matrix.
The solutions $Y(t)$ to Equation (1) are extremely useful in science as well as in a wide range of engineering applications, including optimizing the manipulability of a robotic arm's joint angles [1], making highly accurate predictions for missing quality-of-service data [2], tracking robotic motion and mobile objects [3], and obtaining the weights of feed-forward neural networks [4]. This paper presents and contrasts six ZNN models based on two new error functions for solving TV GLMEs that involve arbitrary matrices $A(t)$, $B(t)$, and $C(t)$. It is worth mentioning that matrix pseudoinversion is involved in four of the proposed ZNN models, while three are related to Newton's optimization method.
The ZNN concept was coined by Zhang et al. in [5] for determining online solutions to TV matrix inversion problems. Most ZNN-based dynamical systems belong to classes of recurrent neural networks (RNNs) designed to approximate zeros of matrix, vector, or scalar equations. Besides ZNN, there is another type of RNN known as the gradient neural network (GNN). As an outcome, numerous significant research results have appeared in the scientific literature. ZNN and GNN dynamics for solving linear matrix equations of the form (1) were described and compared in [6]. Two Zhang functions and the ZNN dynamics they initiate for solving the TV GLME $A(t)Y(t)=B(t)$ were presented and investigated theoretically and numerically in [7]. A gradient-based neural dynamics for solving the matrix equation $AXB=D$ was investigated in [8]. Gradient and zeroing dynamical systems aimed at solving systems of linear equations in both the TV and the constant environment are investigated in [9,10,11]. GNN neural-dynamic schemes are substantially designed for solving equations with time-invariant coefficient matrices [11]. A global overview of zeroing dynamical systems (ZND), including ZND aimed at solving linear matrix equations, was presented in [12]. Nonlinear zeroing neural dynamical systems with finite-time convergence and noise tolerance for solving linear matrix equations in time-varying scenarios were proposed and investigated in [13,14]. A ZNN design whose dynamics are based on a varying gain parameter and which is suitable for solving TV GLMEs was proposed in [15]. A very popular class of ZNN dynamical systems is represented by ZNN models with a finite or pre-defined convergence time. A finite-time convergent ZNN dynamics with a robust activation was presented in [14] and applied in some engineering fields. Xiao in [16] discovered and investigated finite-time dynamical systems for solving the time-dependent complex matrix equation $AX=B$. A desirable feature of ZNN dynamics is its ability to neutralize different types of noise that appear during the evolution of dynamic systems. A noise-tolerant ZNN design with nonlinear activation that involves an appropriate integral term was proposed and investigated in [17]. A finite-time convergent zeroing neural design in the complex domain, based on the Tikhonov regularization of the matrix $A$ and aimed at solving time-dependent matrix equations $AX=D$, was considered in [18]. Numerous areas of numerical linear algebra are studied by applying ZNN dynamics in the time-varying case. Solving tensor and matrix inversion on TV arrays [19], generalized eigenvalue problems [20], generalized inversion [21], matrix equations and matrix decompositions [3], quadratic optimization problems [22], and approximations of various matrix functions [23,24] are predominant utilizations of ZNNs. ZNN models based on two fuzzy-adaptive activations for solving specific TV linear matrix equations (LMEs) were introduced in [25,26]. The hybrid models introduced in [27] are defined using suitable combinations of GNN and ZNN neural dynamics for solving matrix equations $BX=D$ or $CX=D$. Apart from different types of activation in dynamic systems, numerous studies have been devoted to modifying the dynamic evolution of ZNN. High-order error function designs and high-order ZNNs, which are designed using powers of standard Zhang functions and adapted for solving LMEs, were proposed in [28].
Determining an error function (or Zhang function) $E(t)$, based on a suitable expression that reflects the problem being solved, is the first step in developing ZNN dynamics [29]. The second stage employs the dynamical evolution
$$\dot E(t)=\frac{\mathrm{d}E(t)}{\mathrm{d}t}=-\lambda\mathcal{F}\big(E(t)\big),\quad \lambda>0,\tag{2}$$
in which $\dot E(t)\in\mathbb{R}^{m\times n}$ denotes the time derivative of $E(t)\in\mathbb{R}^{m\times n}$, the real number $\lambda>0$ represents the scaling parameter required for accelerating the convergence speed, and $\mathcal{F}(\cdot):\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}$ denotes the elementwise application of an appropriately defined odd, increasing activation function to $E(t)$. The target of our research is the linear ZNN design
$$\dot E(t)=-\lambda E(t),\quad \lambda>0.\tag{3}$$
The system of first-order differential Equations (3), which defines the ZNN evolution law, has the analytical solution $E(t)=E(0)e^{-\lambda t}$ [12]. Thus, $E(t)$ in the standard ZNN converges exponentially to the zero matrix with the exponential convergence rate $\lambda$ as $t\to+\infty$.
The ZNN model for solving the GLME (1) is defined using the matrix-valued Zhangian
$$E(t)=A(t)Y(t)C(t)-B(t),\quad A(t)\in\mathbb{R}^{m\times n},\ Y(t)\in\mathbb{R}^{n\times k},\ C(t)\in\mathbb{R}^{k\times h},\ B(t)\in\mathbb{R}^{m\times h}.\tag{4}$$
The main drawback of the basic ZNN design (3) and (4), based on the standard error function $E(t)$, is its limited application to cases where the input matrices meet certain requirements. Motivated by that shortcoming, in this research we propose two new error functions derived from optimization methods. The error function $E_G(t)$ is defined by zeroing the gradient of the Frobenius matrix norm of $E(t)$, while the error function $E_N(t)$ is defined by the Newton step corresponding to the Frobenius matrix norm of $E(t)$. That is why we use the terms gradient zeroing neural network (GZNN) and Newton zeroing neural network (NZNN) for the dynamic systems defined on the basis of the new error functions $E_G$ and $E_N$, respectively.
Moreover, we noticed that the analysis of the complete set of solutions to Equation (1) has not been investigated so far. In previous research, the solution based on invertible matrices $A$ and $C$ has been analyzed, which corresponds to the solution $Y(t)=A^{-1}(t)B(t)C^{-1}(t)$ [6,26]. Our idea is to look for least-squares solutions under weaker assumptions that allow the coefficient matrices to be singular.
The following highlights are key results of this work:
  • A new and unique ZNN development, based on two novel error functions $E_G(t)$ and $E_N(t)$, is used to address the problem of solving TV GLMEs (1) with arbitrary TV real input matrices.
  • Six ZNN models based on different error functions are presented and compared, five of which are new. Pseudoinversion is involved in four of the ZNN models, while three are related to Newton's optimization method.
  • A precise convergence analysis of the proposed ZNNs is presented and the set of solutions is defined using known results from linear algebra.
  • Six numerical examples confirm that all considered models can solve TV GLMEs, although their effectiveness varies depending on input matrices.
The following is the structure of the presentation. Section 2 presents the motivation and describes the main results. Section 3 defines six ZNN dynamical systems and analyses their solvability, general solution, sets of all solutions, and convergence properties. Section 4 introduces and analyses simulation results obtained through six numerical examples that involve input matrices of different types. Finally, in Section 5, there are closing remarks.
The paper follows classical notation: $I_n\in\mathbb{R}^{n\times n}$ signifies the identity matrix of order $n$; $I_{m,n}\in\mathbb{R}^{m\times n}$ signifies the $m\times n$ matrix with ones on the main diagonal and zeros elsewhere; $\mathbf{1}_n$ and $\mathbf{1}_{m,n}$, respectively, signify the vector in $\mathbb{R}^n$ and the matrix in $\mathbb{R}^{m\times n}$ consisting of ones; $O_{m,n}\in\mathbb{R}^{m\times n}$ signifies the zero $m\times n$ matrix; $\otimes$ designates the Kronecker product; $\operatorname{vec}(\cdot)$ stands for vectorization; $\odot$ denotes the Hadamard product; $\|\cdot\|_F$ signifies the Frobenius norm; the operators $(\cdot)^T$, $(\cdot)^{-1}$ and $(\cdot)^\dagger$, respectively, signify matrix transposition, inversion and pseudoinversion.

2. Motivation and Description of Main Results

The traditional ZNN dynamics for solving (1) is based on the error function (Zhangian) $E(t)$ defined in (4), and follows the dynamic flow
$$\dot E(t)=-\lambda E(t)=-\lambda\big(A(t)Y(t)C(t)-B(t)\big).\tag{5}$$
The standard ZNN design (5) developed to solve the GLME (1) requires invertible matrices $A$ and $C$ to force $E(t)$ to zero, which yields the unique solution $Y(t)=A^{-1}(t)B(t)C^{-1}(t)$ [26,30]. Our goal is to remove this restriction and propose dynamical evolutions based on error functions that tend to zero in more general cases, including singular coefficient matrices $A(t)$ and $C(t)$. This research aims to find new neurodynamical models based on different error functions. Our motivation in defining new Zhang functions has its origin in gradient-descent methods for solving the unconstrained minimization problem
$$\min f(x),\quad x\in\mathbb{R}^n,\tag{6}$$
where $f:\mathbb{R}^n\to\mathbb{R}$ is a continuously differentiable function bounded from below. The gradient-descent (GD) iterative scheme for solving (6) is defined by
$$x_{k+1}=x_k-\alpha_k g_k,\tag{7}$$
whereby $g_k=\nabla f(x_k)$ is the gradient vector and $\alpha_k$ is an appropriately defined step size. The convergence result
$$\lim_{k\to\infty}g_k=0$$
for the iterations (7) is widely known in the literature [31].
The Newton method with line search for solving (6) is defined by
$$x_{k+1}=x_k-\alpha_k G_k^{-1}g_k,$$
where $G_k^{-1}$ denotes the inverse of the Hessian matrix $G_k=\nabla^2 f(x_k)$.
New error functions $E_G(t)$ and $E_N(t)$ are defined using analogies with the gradient-descent and Newton methods for nonlinear optimization. The residual matrix $E(t)=A(t)Y(t)C(t)-B(t)$ is the goal function which is forced to the zero matrix. The gradient of the Frobenius norm
$$\varepsilon_Y=\frac{\|E(t)\|_F^2}{2}=\frac{\|A(t)Y(t)C(t)-B(t)\|_F^2}{2}$$
is equal to [6]
$$\frac{\partial\varepsilon_Y}{\partial Y}=\nabla\varepsilon_Y=A^T(AYC-B)C^T=E_G(t).$$
Following the iterations defined in [32], the gradient-descent iterations for minimizing the goal function $\varepsilon_Y$ are defined by
$$Y_{k+1}=Y_k-\alpha_k A^T(AY_kC-B)C^T,\quad\alpha_k>0,$$
taking an arbitrary initial matrix $Y_0\in\mathbb{R}^{n\times k}$. The gradient neural network (GNN) dynamic evolution minimizes $\|AY(t)C-B\|_F^2$ and is based on the direct proportionality between the time derivative $\dot Y(t)$ and the negative gradient of the goal function $\varepsilon_Y(t)$ [33,34]:
$$\dot Y(t)=-\lambda\frac{\partial\varepsilon_Y(t)}{\partial Y},\quad Y(0)=Y_0.$$
Our goal is to define the ZNN design for solving the TV GLME (1) based on the error function
$$E_G(t):=\nabla\varepsilon_Y(t)=A^T\big(AYC-B\big)C^T=A^TE(t)C^T,$$
which initiates the following dynamics, termed GZNN dynamics:
$$\dot E_G(t)=-\lambda E_G(t)=-\lambda A^T\big(AYC-B\big)C^T.\tag{12}$$
Further, the Hessian of $\varepsilon_Y$ in the case $C:=I$ is equal to $\nabla^2\varepsilon_Y=\frac{\partial^2\varepsilon_Y}{\partial Y^2}=A^TA$. In the case $A:=I$, the Hessian of $\varepsilon_Y$ is equal to $\nabla^2\varepsilon_Y=\frac{\partial^2\varepsilon_Y}{\partial Y^2}=CC^T$. Following the results from [32], the Newton iterations with line search in the time-invariant case can be defined by
$$Y_{k+1}=Y_k-\alpha_k\big(A^TA\big)^{-1}A^T(AY_kC-B)C^T\big(CC^T\big)^{-1},\quad\alpha_k>0.$$
Consider the real numbers $\delta_{A^TA}$ and $\delta_{CC^T}$ defined by
$$\delta_{A^TA}=\begin{cases}\varepsilon, & \operatorname{rank}(A^TA)<n,\\ 0, & \operatorname{rank}(A^TA)=n,\end{cases}\qquad \delta_{CC^T}=\begin{cases}\varepsilon, & \operatorname{rank}(CC^T)<k,\\ 0, & \operatorname{rank}(CC^T)=k,\end{cases}$$
where $\varepsilon>0$ is a small real regularization parameter. Thus, in the case $\operatorname{rank}(A)\leq n\leq m$, $\operatorname{rank}(C)\leq k\leq h$, we define the Zhangian
$$E_N(t):=\big(A^T(t)A(t)+\delta_{A^TA}I\big)^{-1}A^T\big(A(t)Y(t)C(t)-B(t)\big)C^T(t)\big(C(t)C^T(t)+\delta_{CC^T}I\big)^{-1}.\tag{13}$$
Given the well-known limit representation of the Moore–Penrose inverse, restated in [35], it can be obtained that
$$\lim_{\varepsilon\to 0}\big(A^TA+\delta_{A^TA}I\big)^{-1}A^T=A^\dagger,\qquad \lim_{\varepsilon\to 0}C^T\big(CC^T+\delta_{CC^T}I\big)^{-1}=C^\dagger,$$
which implies
$$E_N(t)\to A^\dagger(t)\big(A(t)Y(t)C(t)-B(t)\big)C^\dagger(t)=A^\dagger(t)E(t)C^\dagger(t)$$
as ε 0 . The ZNN design for solving E N ( t ) = 0 based on the error function E N ( t ) , and termed NZNN, is defined by
$$\dot E_N(t)=-\lambda E_N(t).\tag{14}$$
Dynamics (14) forces E N ( t ) to zero, which (in view of (13)) coincides with the zero of E G ( t ) .
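The following minimal Matlab sketch (our illustration, not part of the original derivation) numerically checks the limit representation above on a hypothetical rank-deficient matrix:

    % Numerical check: (A'*A + e*I)\A' approaches pinv(A) as e -> 0,
    % even when A is rank deficient.
    A = [1 2; 2 4; 3 6];                        % 3x2 matrix with rank(A) = 1
    for e = [1e-2 1e-4 1e-6 1e-8]
        Areg = (A'*A + e*eye(2)) \ A';          % regularized left expression
        fprintf('eps = %.0e: deviation = %.3e\n', e, norm(Areg - pinv(A), 'fro'));
    end

The printed deviation shrinks with $\varepsilon$, illustrating why the regularized Zhangian $E_N(t)$ approaches $A^\dagger(t)E(t)C^\dagger(t)$.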

Solvability and Solutions of Proposed Dynamical Systems

In this subsection, we investigate conditions for the solvability of the matrix equations $E(t)=0$, $E_G(t)=0$ and $E_N(t)=0$, and their sets of solutions.
Lemma 1 defines conditions for the solvability and the solutions of the linear matrix equation $AYC=B$. The result is obtained as a consequence of [35] (p. 52, Theorem 1) in the time-varying case.
Lemma 1
([35] p. 52, Theorem 1). For arbitrary $A(t)\in\mathbb{C}^{m\times n}$, $C(t)\in\mathbb{C}^{p\times q}$, $B(t)\in\mathbb{C}^{m\times q}$, the general linear matrix equation
$$A(t)Y(t)C(t)=B(t)\tag{15}$$
is solvable if and only if
$$A(t)A^\dagger(t)B(t)C^\dagger(t)C(t)=B(t),\tag{16}$$
in which case its general solution is given by
$$A^\dagger(t)B(t)C^\dagger(t)+Q(t)-A^\dagger(t)A(t)Q(t)C(t)C^\dagger(t),\tag{17}$$
for arbitrary $Q(t)\in\mathbb{C}^{n\times p}$.
Corollary 1.
If (16) is satisfied, the set of all solutions to (15) is defined by
$$\Theta_Q=\Big\{A^\dagger(t)B(t)C^\dagger(t)+Q(t)-A^\dagger(t)A(t)Q(t)C(t)C^\dagger(t)\ \Big|\ Q(t)\in\mathbb{C}^{n\times p}\Big\}.\tag{18}$$
The set $\Theta_Q$ is termed the solution set in the rest of the paper.
Lemma 2 restates results from [8] (Theorem 2.2). These results are used in Remark 1 to define the solution which will be used in the numerical comparison of the defined models.
Lemma 2
([8] Theorem 2.2). Assume that the constant matrices $A\in\mathbb{R}^{m\times n}$, $C\in\mathbb{R}^{k\times h}$ and $B\in\mathbb{R}^{m\times h}$ satisfy
$$AA^\dagger BC^\dagger C=B.\tag{19}$$
Then the activation state variables matrix $Y(t)$ of the dynamical system
$$\dot Y(t)=-\gamma A^T\big(AY(t)C-B\big)C^T\tag{20}$$
is convergent as $t\to+\infty$ and has the equilibrium state
$$Y(t)\to\widetilde Y=A^\dagger BC^\dagger+Y(0)-A^\dagger AY(0)CC^\dagger\tag{21}$$
for every initial state matrix $Y(0)\in\mathbb{R}^{n\times k}$.
Remark 1.
Theorem 2.2 in [8] claims that the activation state variables matrix $Y(t)$ of the GNN dynamics (20) for solving (15) in the time-invariant case converges as $t\to+\infty$ to the equilibrium state defined in (21).
Accordingly, we will use the equilibrium matrix
$$\widetilde Y_{Y(0)}(t)=A^\dagger(t)B(t)C^\dagger(t)+Y(0)-A^\dagger(t)A(t)Y(0)C(t)C^\dagger(t)\in\Theta_Q,\tag{22}$$
for the comparison of the proposed models in the performed numerical experiments.
Theorem 1 investigates conditions for the solvability of the equations $E(t)=0$, $E_G(t)=0$, $E_N(t)=0$, and the solution sets of these equations.
Theorem 1.
Consider arbitrary smooth TV matrices $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$. The following statements are valid.
(a)
The equation $E(t)=0$ is solvable if and only if (16) holds, and the solution set of $E(t)=0$ is $\Theta_Q$ defined in (18).
(b)
If $\operatorname{rank}(A)\leq n\leq m$, the equation $E_G(t)=0$ is always solvable and the solution set of $E_G(t)=0$ is $\Theta_Q$ defined by (18).
(c)
If $\operatorname{rank}(A)\leq n\leq m$, the equation $E_N(t)=0$ is always solvable and the solution set of $E_N(t)=0$ is $\Theta_Q$ defined by (18).
Proof. 
(a)
The error function $E(t)$ in the traditional ZNN (TZNN) model is defined by (4), and the model solves the equation $E(t)=0$. This part of the proof follows from known results on the solvability and the general solution of the matrix Equation (15), given in Lemma 1, and their application to the matrix equation $E(t)=0\iff A(t)Y(t)C(t)=B(t)$.
(b)
The dynamics (12) forces $E_G(t)=\nabla\varepsilon_Y(t)=A^T(AYC-B)C^T$ to zero. On the other hand, according to Lemma 1, the error function $E_G(t)$ vanishes in the case
$$E_G(t)=0\iff A^TAYCC^T=A^TBC^T.$$
As a consequence, the matrix equation $E_G(t)=0$ is consistent, its general solution is given by (17), and its solution set is defined in (18). The consistency is confirmed by the identity
$$A^TA\big(A^TA\big)^\dagger A^TBC^T\big(CC^T\big)^\dagger CC^T=A^TBC^T.$$
In addition, the general solution to $E_G(t)=0$ is given by (17). Indeed, using Lemma 1, the general solution to $E_G(t)=0$ is equal to
$$Y=\big(A^TA\big)^\dagger A^TBC^T\big(CC^T\big)^\dagger+Q-\big(A^TA\big)^\dagger A^TAQCC^T\big(CC^T\big)^\dagger=A^\dagger BC^\dagger+Q-A^\dagger AQCC^\dagger.\tag{23}$$
(c)
In the case $\operatorname{rank}(A)\leq n\leq m$, $\operatorname{rank}(C)\leq k\leq h$, the Zhangian $E_N(t)$ becomes the zero matrix in the case
$$E_N(t)=0\iff \big(A^TA+\delta_{A^TA}I\big)^{-1}A^T\big(AYC-B\big)C^T\big(CC^T+\delta_{CC^T}I\big)^{-1}=0.$$
The GLME $E_N(t)=0$ is always solvable and its solution set is equal to (18). This statement follows from
$$E_N(t)=0\iff A^T\big(AYC-B\big)C^T=0\iff E_G(t)=0.$$
Following general results from [35] (p. 52, Theorem 1), it is straightforward to verify that (18) is the general set of solutions to $E_N(t)=0$.
   □
Remark 2.
The general conclusion is that the neural dynamics (12) forces the residual matrix $E_G(t)=\nabla\varepsilon_Y(t)=A^T(AYC-B)C^T$ to zero. According to (23), the ZNN dynamics (12) reaches a solution $Y(t)$ of the form (23), which coincides with the solution (18) to $E(t)=A(t)Y(t)C(t)-B(t)\to 0$. Finally, $E_N(t)\to 0$ is equivalent to $E_G(t)\to 0$.
In this way, we explain one of the main advantages of the proposed models: the matrix equation $E(t)=0$ is solvable under the condition (16), while the equations $E_G(t)=0$ and $E_N(t)=0$ are always consistent. In view of (18) and (23), the general solutions to all three equations $E(t)=0$, $E_G(t)=0$ and $E_N(t)=0$ are identical.
Corollary 2.
Let us consider the solutions set
$$\Lambda=\Big\{\widetilde Y_{Y(0)}(t)\ \Big|\ Y(0)\in\mathbb{R}^{n\times k}\Big\},$$
generated by all possible initial states Y ( 0 ) of the form (22). The following statements are valid.
(a)
$A\Lambda C=\{B\}$.
(b)
An arbitrary solution $\widetilde Y_{Y(0)}(t)\in\Lambda$, defined in (22), is a least-squares solution to (1).
(c)
The unique solution $\widetilde Y_0(t)\in\Lambda$ produces the best-approximate (i.e., the minimal-norm least-squares) solution $A^\dagger BC^\dagger$ to (1).
Proof. 
Part (a) follows from Lemma 1.
(b) Following the results from [36,37],
$$\|AYC-B\|_2\geq\|AA^\dagger BC^\dagger C-B\|_2,$$
for an arbitrary solution $Y$ to (1), where the equality is valid if $Y=\widetilde Y_{Y(0)}(t)$.
(c) Follows from the known result originating in [36,37]:
$$\|A^\dagger BC^\dagger\|_2\leq\|A^\dagger BC^\dagger+Y-A^\dagger AYCC^\dagger\|_2.$$
The proof is complete.    □

3. Various ZNN Models for Solving TV GLMEs

The current section defines and analyses six ZNN expansions for calculating online solutions of TV GLMEs, including TZNN, which is based on the traditional ZNN approach; GZNN, based on the gradient ZNN approach; NZNNV1, NZNNV2 and NZNNV3, which are based on Newton's optimization method; and DZNN, based on the ZNN approach for finding the direct solution produced with the use of the pseudoinverse. The TZNN and GZNN models provided here cover all potential scenarios of arbitrary TV real input matrices $A(t)$, $B(t)$, $C(t)$ for solving the TV GLME (1).

3.1. Traditional ZNN Model

Considering smooth arbitrary TV matrices $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$, the error function in the traditional ZNN (TZNN) model is established as
$$E(t)=A(t)Y(t)C(t)-B(t),\tag{25}$$
under the conditions
$$\{A,B,C\}\in\mathcal{D}=\Big\{A\in\mathbb{R}^{m\times n},\ C\in\mathbb{R}^{k\times h},\ B\in\mathbb{R}^{m\times h}\ \Big|\ m=n=\operatorname{rank}(A),\ k=h=\operatorname{rank}(C)\Big\}.\tag{26}$$
It is important to note that the desired solution is $Y(t)\in\mathbb{R}^{n\times k}$. The time derivative of $E(t)$ defined in (25) is equal to
$$\dot E(t)=\dot A(t)Y(t)C(t)+A(t)\dot Y(t)C(t)+A(t)Y(t)\dot C(t)-\dot B(t).\tag{27}$$
Therefore, the following implicit dynamics can be obtained considering the linear ZNN flow (2):
$$A(t)\dot Y(t)C(t)=-\lambda\big(A(t)Y(t)C(t)-B(t)\big)-\dot A(t)Y(t)C(t)-A(t)Y(t)\dot C(t)+\dot B(t),\quad \{A,B,C\}\in\mathcal{D},\tag{28}$$
which is just the linear ZNN model (8) in [6]. Using the Kronecker product in conjunction with vectorization, the dynamics (28) is modified and, as a result, we obtain the following ZNN model in vector form:
$$\big(C^T(t)\otimes A(t)\big)\operatorname{vec}(\dot Y(t))=\operatorname{vec}\Big(-\lambda\big(A(t)Y(t)C(t)-B(t)\big)-\dot A(t)Y(t)C(t)-A(t)Y(t)\dot C(t)+\dot B(t)\Big),\quad \{A,B,C\}\in\mathcal{D}.\tag{29}$$
The ZNN model (29) will be termed the TZNN model. The solution to the dynamical system (29) can be approximated using a standard Matlab ode solver. If $C^T(t)\otimes A(t)$ is a nonsingular mass matrix, the exponential convergence of the TZNN design (29) to the exact TV solution of the TV GLME (1) is proved in [6] (Theorem 1).
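For illustration, a minimal Matlab sketch of integrating the vectorized TZNN flow (29) is given below. The function handles A, B, C, dA, dB, dC (the coefficient matrices and their derivatives as functions of t), the dimensions n, k, and the initial matrix Y0 are our assumptions, supplied by the user rather than defined in the paper:

    % Integrate the TZNN flow (29) on [0, 10] with mass matrix C(t)' (x) A(t).
    lambda = 10;
    opts = odeset('Mass', @(t) kron(C(t)', A(t)), 'MStateDependence', 'none');
    rhs  = @(t, y) tznn_rhs(t, y, A, B, C, dA, dB, dC, lambda, n, k);
    [tt, yy] = ode15s(rhs, [0 10], Y0(:), opts);   % rows of yy hold vec(Y(t))

    function r = tznn_rhs(t, y, A, B, C, dA, dB, dC, lambda, n, k)
        % Right-hand side of (29): the vectorized linear ZNN evolution.
        Y = reshape(y, n, k);
        R = -lambda*(A(t)*Y*C(t) - B(t)) - dA(t)*Y*C(t) - A(t)*Y*dC(t) + dB(t);
        r = R(:);
    end

A stiff solver such as ode15s is a natural choice here because the mass matrix can become badly conditioned when $A(t)$ or $C(t)$ approaches singularity.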
In [6], the authors vectorized the TV GLME $A(t)Y(t)C(t)=B(t)$ into its equivalent form $(C^T(t)\otimes A(t))\operatorname{vec}(Y(t))=\operatorname{vec}(B(t))$. We restate the main result from [6] (Theorem 1) in a form adapted to (1) to improve the readability of the paper. This result defines the unique-solution condition that requires invertibility of the mass matrix $C^T(t)\otimes A(t)$.
Lemma 3
([6] Theorem 1). Assume that the time-varying coefficient matrices in (1) are smooth. If the following unique-solution condition
$$\big(C^T(t)\otimes A(t)\big)^T\big(C^T(t)\otimes A(t)\big)\succeq\delta I,\quad\forall t\geq 0,\tag{30}$$
is satisfied for a real number $\delta>0$, then the matrix $Y(t)$ of the Zhang neural network (28), starting from an arbitrary initial state $Y(0)$, converges exponentially to the theoretical solution of (1) with the exponential convergence rate $\lambda$.
The vectorized ZNN design (29) exploits the same mass matrix $C^T(t)\otimes A(t)$ and requires its invertibility. Using $\big(C^T(t)\otimes A(t)\big)^{-1}=\big(C^T(t)\big)^{-1}\otimes A(t)^{-1}$, we conclude that the unique-solution condition requires invertibility of $A(t)$ and $C(t)$. Therefore, if $C^T(t)\otimes A(t)$ is nonsingular, then (16) is satisfied.
Corollary 3.
Let $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ be differentiable. If the condition (26) is satisfied, then, starting from an arbitrary initial value $Y(0)$, the TZNN model (29) converges exponentially and globally to $A^{-1}(t)B(t)C^{-1}(t)$.

3.2. Gradient ZNN (GZNN) Model

The gradient-based ZNN (GZNN) approach considers arbitrary smooth TV matrices $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ for solving
$$A^T(t)A(t)Y(t)C(t)C^T(t)=A^T(t)B(t)C^T(t),\tag{31}$$
where $Y(t)\in\mathbb{R}^{n\times k}$ is the desired solution. According to the general requirements of the ZNN design (2) [6,30], we consider the constraints
$$\{A,B,C\}\in\mathcal{D}_G=\Big\{A\in\mathbb{R}^{m\times n},\ C\in\mathbb{R}^{k\times h},\ B\in\mathbb{R}^{m\times h}\ \Big|\ m\geq n\geq\operatorname{rank}(A(t)),\ h\geq k\geq\operatorname{rank}(C(t))\Big\}.\tag{32}$$
The error function in the GZNN design is defined by
$$E_G(t)=A^T(t)\big(A(t)Y(t)C(t)-B(t)\big)C^T(t)=A^T(t)E(t)C^T(t),\quad \{A,B,C\}\in\mathcal{D}_G.\tag{33}$$
The time derivative of (33) is
$$\dot E_G(t)=\dot A^T(AYC-B)C^T+A^T\big(\dot AYC+A\dot YC+AY\dot C-\dot B\big)C^T+A^T(AYC-B)\dot C^T.\tag{34}$$
Then, considering (2), it can be obtained that
$$A^TA\dot YCC^T=-\lambda A^T(AYC-B)C^T-\dot A^T(AYC-B)C^T-A^T\big(\dot AYC+AY\dot C-\dot B\big)C^T-A^T(AYC-B)\dot C^T,\quad \{A,B,C\}\in\mathcal{D}_G.\tag{35}$$
Using vectorization in conjunction with the Kronecker product, the dynamics (35) is modified as
$$\big(CC^T\otimes A^TA\big)\operatorname{vec}(\dot Y)=\operatorname{vec}\Big(-\lambda A^T(AYC-B)C^T-\dot A^T(AYC-B)C^T-A^T\big(\dot AYC+AY\dot C-\dot B\big)C^T-A^T(AYC-B)\dot C^T\Big),\quad \{A,B,C\}\in\mathcal{D}_G.\tag{36}$$
The replacements
$$\begin{aligned}W(t)&=CC^T\otimes A^TA,\\ r(t)&=\operatorname{vec}\Big(-\lambda A^T(AYC-B)C^T-\dot A^T(AYC-B)C^T-A^T\big(\dot AYC+AY\dot C-\dot B\big)C^T-A^T(AYC-B)\dot C^T\Big),\\ \dot y(t)&=\operatorname{vec}(\dot Y(t)),\quad y(t)=\operatorname{vec}(Y(t))\end{aligned}\tag{37}$$
lead to the dynamical evolution
$$W(t)\dot y(t)=r(t).\tag{38}$$
The mass matrix $W(t)$ in (38) is singular under the rank conditions $\operatorname{rank}(A)<\min\{m,n\}$ or $\operatorname{rank}(C)<\min\{k,h\}$. Tikhonov regularization is a principle widely exploited to solve such singularity problems. As a result, $W(t)$ is regularized by the matrix
$$M(t)=\begin{cases}W(t), & \operatorname{rank}(A)=\min\{m,n\}\ \&\ \operatorname{rank}(C)=\min\{k,h\},\\ W(t)+\beta I_{kn}, & \operatorname{rank}(A)<\min\{m,n\}\ \text{or}\ \operatorname{rank}(C)<\min\{k,h\},\end{cases}\tag{39}$$
where $\beta>0$ is a small regularization quantity. Then the following ZNN model is used instead of (38):
$$M(t)\dot y(t)=r(t),\tag{40}$$
where $M(t)$ is a nonsingular mass matrix. The ZNN (40) is named GZNN, and it is solvable using a proper ode Matlab solver. The global convergence with exponential speed of the GZNN design (40) to the exact TV solution of the TV GLME (31) is certified in Theorem 2.
Theorem 2.
Let $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ be differentiable. If the condition (32) is satisfied, then, starting from an arbitrary initial value $y(0)$, the vectorized GZNN model (40) converges exponentially and globally to one element of the set $\Theta_Q$ defined in (18), which solves the TV GLME (31).
Proof. 
To obtain the solution $y(t)$ to the TV GLME (31), the Zhangian matrix is defined by (33), in line with its time derivative (34). The ZNN expansion based on (33) leads to the model (35). Furthermore, from the derivation procedure, (40) is an equivalent form of (35), which represents the standard GZNN model. Since the regularization parameter $\beta$ is involved in the definition of the mass matrix $M(t)$ in (40), $M(t)$ is invertible. From [6] (Theorem 1), restated in Lemma 3, the unknown matrix $Y(t)$ in (35) converges to the exact solution as $t\to\infty$. According to Theorem 1(b), the matrix equation $E_G(t)=0$ is always solvable and its solutions are included in $\Theta_Q$.    □
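As an illustration of the regularization step (39), the following Matlab sketch evaluates the GZNN mass matrix at a given time t; the handles A, C and the dimensions m, n, k, h are assumed to be supplied by the user:

    % Regularized GZNN mass matrix (39); beta keeps M(t) invertible whenever
    % A(t) or C(t) loses full rank.
    beta = 1e-8;
    W = kron(C(t)*C(t)', A(t)'*A(t));           % W(t) = CC^T (x) A^T A
    if rank(A(t)) < min(m, n) || rank(C(t)) < min(k, h)
        M = W + beta*eye(k*n);                  % Tikhonov-regularized mass matrix
    else
        M = W;
    end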

3.3. Newton ZNN Model (Version 1)

Applying the limit representation of the pseudoinverse [35]
$$\lim_{\varepsilon\to 0}\big(A^TA+\delta_{A^TA}I\big)^{-1}A^T=A^\dagger,\qquad \lim_{\varepsilon\to 0}C^T\big(CC^T+\delta_{CC^T}I\big)^{-1}=C^\dagger,$$
the error function (13) of the Newton ZNN version 1 (NZNNV1) model is defined by
$$E_N(t)=A^\dagger(t)\big(A(t)Y(t)C(t)-B(t)\big)C^\dagger(t),\quad \{A,B,C\}\in\mathcal{D}_G.\tag{41}$$
So, the first variant of the Newton ZNN approach considers smooth arbitrary TV matrices $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ for solving the following GLME:
$$A^\dagger(t)A(t)Y(t)C(t)C^\dagger(t)=A^\dagger(t)B(t)C^\dagger(t),\quad \{A,B,C\}\in\mathcal{D}_G,\tag{42}$$
where the desired solution is $Y(t)\in\mathbb{R}^{n\times k}$.
The time derivative of $E_N(t)$ is given as
$$\dot E_N(t)=\dot A^\dagger(AYC-B)C^\dagger+A^\dagger\big(\dot AYC+A\dot YC+AY\dot C-\dot B\big)C^\dagger+A^\dagger(AYC-B)\dot C^\dagger.\tag{43}$$
The time derivative of the pseudoinverse of $P(t)\in\mathbb{R}^{m\times n}$ is equal to
$$\dot P^\dagger=-P^\dagger\dot PP^\dagger+P^\dagger(P^\dagger)^T\dot P^T\big(I-PP^\dagger\big)+\big(I-P^\dagger P\big)\dot P^T(P^\dagger)^TP^\dagger.\tag{44}$$
Then, considering the linear ZNN design (2), it is obtained that
$$\dot A^\dagger(AYC-B)C^\dagger+A^\dagger\big(\dot AYC+A\dot YC+AY\dot C-\dot B\big)C^\dagger+A^\dagger(AYC-B)\dot C^\dagger=-\lambda A^\dagger(AYC-B)C^\dagger,\quad \{A,B,C\}\in\mathcal{D}_G,\tag{45}$$
or equivalently
$$A^\dagger A\dot YCC^\dagger=-\lambda A^\dagger(AYC-B)C^\dagger-\dot A^\dagger(AYC-B)C^\dagger-A^\dagger\big(\dot AYC+AY\dot C-\dot B\big)C^\dagger-A^\dagger(AYC-B)\dot C^\dagger,\quad \{A,B,C\}\in\mathcal{D}_G.\tag{46}$$
Applying vectorization and the Kronecker product, the dynamics (46) is transformed into
$$\big(CC^\dagger\otimes A^\dagger A\big)\operatorname{vec}(\dot Y)=\operatorname{vec}\Big(-\lambda A^\dagger(AYC-B)C^\dagger-\dot A^\dagger(AYC-B)C^\dagger-A^\dagger\big(\dot AYC+AY\dot C-\dot B\big)C^\dagger-A^\dagger(AYC-B)\dot C^\dagger\Big),\quad \{A,B,C\}\in\mathcal{D}_G,\tag{47}$$
and setting
$$\begin{aligned}W(t)&=CC^\dagger\otimes A^\dagger A,\\ r(t)&=\operatorname{vec}\Big(-\lambda A^\dagger(AYC-B)C^\dagger-\dot A^\dagger(AYC-B)C^\dagger-A^\dagger\big(\dot AYC+AY\dot C-\dot B\big)C^\dagger-A^\dagger(AYC-B)\dot C^\dagger\Big),\\ \dot y(t)&=\operatorname{vec}(\dot Y(t)),\quad y(t)=\operatorname{vec}(Y(t)),\end{aligned}\tag{48}$$
the subsequent ZNN design is generated:
$$W(t)\dot y(t)=r(t).\tag{49}$$
The mass matrix $W(t)$ in (49) is singular under the rank conditions $\operatorname{rank}(A)<\min\{m,n\}$ or $\operatorname{rank}(C)<\min\{k,h\}$. The regularization principle leads to the regular mass matrix
$$M(t)=\begin{cases}W(t), & \operatorname{rank}(A)=\min\{m,n\}\ \&\ \operatorname{rank}(C)=\min\{k,h\},\\ W(t)+\beta I_{kn}, & \operatorname{rank}(A)<\min\{m,n\}\ \text{or}\ \operatorname{rank}(C)<\min\{k,h\},\end{cases}\tag{50}$$
where $\beta>0$ defines the ridge regression parameter. Accordingly, the ZNN dynamical system
$$M(t)\dot y(t)=r(t)\tag{51}$$
is used instead of (49), where $M(t)$ is the nonsingular mass matrix. The differential system (51) is termed NZNNV1, and it is solvable using ode Matlab solvers. Theorem 3 proves the exponential convergence of the NZNNV1 evolution (51) to the accurate TV solution of the TV GLME (42).
Theorem 3.
Let $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ be differentiable. If the condition (16) is satisfied, the NZNNV1 model (51), starting from any initial value $y(0)$, converges exponentially to one element of the set $\Theta_Q$ defined in (18), which solves the TV GLME (42).
Proof. 
To acquire the solution $y(t)$ to (42), the Zhangian matrix is defined as in (41), in conjunction with the initiated ZNN dynamics. The model (45) is obtained by adopting the linear design for zeroing (41), which produces the NZNN design. From [6] (Theorem 1), restated in Lemma 3, each error matrix equation in the group (45) converges to the accurate solution as $t\to\infty$. According to Theorem 1(c), the matrix equation $E_N(t)=0$ is always solvable and its solutions are included in $\Theta_Q$. Consequently, the solution of (45) converges to the solution of the TV GLME (42) as $t\to\infty$. The derivation procedure of (51) confirms that it is an equivalent form of (45).    □
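The pseudoinverse derivative formula (44) can be checked numerically. The following sketch (our illustration, on a hypothetical full-column-rank $P(t)$) compares the formula against a central finite difference:

    % Finite-difference verification of the pseudoinverse derivative (44).
    P  = @(t) [2+sin(t), 1; 1, 3+cos(t); cos(t), 2];   % 3x2, full column rank
    dP = @(t) [cos(t), 0; 0, -sin(t); -sin(t), 0];     % elementwise derivative
    t = 1;  h = 1e-6;
    Pd = pinv(P(t));
    analytic = -Pd*dP(t)*Pd + Pd*Pd'*dP(t)'*(eye(3) - P(t)*Pd) ...
               + (eye(2) - Pd*P(t))*dP(t)'*Pd'*Pd;
    numeric  = (pinv(P(t+h)) - pinv(P(t-h))) / (2*h);
    disp(norm(analytic - numeric, 'fro'))              % should be near zero

Note that (44) presumes the rank of $P(t)$ is locally constant; at rank changes the pseudoinverse is not differentiable.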

3.4. Newton ZNN Model (Version 2)

This subsection presents the second version of the Newton ZNN model for solving the TV GLME (42). The Newton ZNN version 2 (NZNNV2) model includes three error functions. More precisely, the error function (41) is converted as follows:
$$E_1(t)=X(t)\big(A(t)Y(t)C(t)-B(t)\big)Z(t),\quad \{A,B,C\}\in\mathcal{D}_G.\tag{52}$$
The matrix $X(t)\in\mathbb{R}^{n\times m}$ in (52) is defined as the zero of the error function for finding the pseudoinverse of an arbitrary TV matrix $A(t)$ (see [38])
$$E_2(t)=A^T(t)\big(I_m-A(t)X(t)\big),\quad m\geq n\geq\operatorname{rank}(A(t)),\tag{53}$$
while $Z(t)\in\mathbb{R}^{h\times k}$ is the zero of the error function for finding the pseudoinverse of an arbitrary TV matrix $C(t)$
$$E_3(t)=\big(I_h-Z(t)C(t)\big)C^T(t),\quad h\geq k\geq\operatorname{rank}(C(t)).\tag{54}$$
The derivative of (52) is
$$\dot E_1(t)=\dot X(AYC-B)Z+X\big(\dot AYC+A\dot YC+AY\dot C-\dot B\big)Z+X(AYC-B)\dot Z,\quad \{A,B,C\}\in\mathcal{D}_G,\tag{55}$$
while the derivative of (53) is
$$\dot E_2(t)=\dot A^T(I_m-AX)-A^T\dot AX-A^TA\dot X,\quad m\geq n\geq\operatorname{rank}(A),\tag{56}$$
and the derivative of (54) is
$$\dot E_3(t)=\big(I_h-ZC\big)\dot C^T-\dot ZCC^T-Z\dot CC^T,\quad h\geq k\geq\operatorname{rank}(C).\tag{57}$$
Then, considering the linear ZNN design (2), it can be obtained that
$$\begin{aligned}&\dot X(AYC-B)Z+X\big(\dot AYC+A\dot YC+AY\dot C-\dot B\big)Z+X(AYC-B)\dot Z=-\lambda X(AYC-B)Z,\\ &\dot A^T(I_m-AX)-A^T\dot AX-A^TA\dot X=-\lambda A^T(I_m-AX),\\ &\big(I_h-ZC\big)\dot C^T-\dot ZCC^T-Z\dot CC^T=-\lambda\big(I_h-ZC\big)C^T,\end{aligned}\tag{58}$$
or in the equivalent form
$$\begin{aligned}&XA\dot YCZ+\dot X(AYC-B)Z+X(AYC-B)\dot Z=-\lambda X(AYC-B)Z-X\big(\dot AYC+AY\dot C-\dot B\big)Z,\\ &A^TA\dot X=\lambda A^T(I_m-AX)+\dot A^T(I_m-AX)-A^T\dot AX,\\ &\dot ZCC^T=\lambda\big(I_h-ZC\big)C^T+\big(I_h-ZC\big)\dot C^T-Z\dot CC^T.\end{aligned}\tag{59}$$
The dynamics (59) are modified using vectorization and the Kronecker product as follows:
$$\begin{aligned}&\big((CZ)^T\otimes(XA)\big)\operatorname{vec}(\dot Y)+\big(((AYC-B)Z)^T\otimes I_n\big)\operatorname{vec}(\dot X)+\big(I_k\otimes(X(AYC-B))\big)\operatorname{vec}(\dot Z)=\operatorname{vec}\Big(-\lambda X(AYC-B)Z-X\big(\dot AYC+AY\dot C-\dot B\big)Z\Big),\\ &\big(I_m\otimes A^TA\big)\operatorname{vec}(\dot X)=\operatorname{vec}\Big(\lambda A^T(I_m-AX)+\dot A^T(I_m-AX)-A^T\dot AX\Big),\\ &\big(CC^T\otimes I_h\big)\operatorname{vec}(\dot Z)=\operatorname{vec}\Big(\lambda\big(I_h-ZC\big)C^T+\big(I_h-ZC\big)\dot C^T-Z\dot CC^T\Big).\end{aligned}\tag{60}$$
Setting
$$\begin{aligned}&w_1(t)=(CZ)^T\otimes(XA),\quad w_2(t)=\big((AYC-B)Z\big)^T\otimes I_n,\quad w_3(t)=I_k\otimes\big(X(AYC-B)\big),\\ &w_4(t)=I_m\otimes A^TA,\quad w_5(t)=CC^T\otimes I_h,\\ &r_1(t)=\operatorname{vec}\Big(-\lambda X(AYC-B)Z-X\big(\dot AYC+AY\dot C-\dot B\big)Z\Big),\\ &r_2(t)=\operatorname{vec}\Big(\lambda A^T(I_m-AX)+\dot A^T(I_m-AX)-A^T\dot AX\Big),\\ &r_3(t)=\operatorname{vec}\Big(\lambda\big(I_h-ZC\big)C^T+\big(I_h-ZC\big)\dot C^T-Z\dot CC^T\Big),\\ &W(t)=\begin{bmatrix}w_1 & w_2 & w_3\\ O_{mn,kn} & w_4 & O_{mn,kh}\\ O_{kh,kn} & O_{kh,mn} & w_5\end{bmatrix},\quad r(t)=\begin{bmatrix}r_1\\ r_2\\ r_3\end{bmatrix},\quad \dot y(t)=\begin{bmatrix}\operatorname{vec}(\dot Y)\\ \operatorname{vec}(\dot X)\\ \operatorname{vec}(\dot Z)\end{bmatrix},\quad y=\begin{bmatrix}\operatorname{vec}(Y)\\ \operatorname{vec}(X)\\ \operatorname{vec}(Z)\end{bmatrix},\end{aligned}\tag{61}$$
the following dynamical system is generated:
$$W(t)\dot y(t)=r(t),\tag{62}$$
with the mass matrix $W(t)$. To overcome the singularity of $W(t)$ in the case $\operatorname{rank}(A)<\min\{m,n\}$ or $\operatorname{rank}(C(t))<\min\{k,h\}$, the following nonsingular mass matrix is used:
$$M(t)=\begin{cases}W(t), & \operatorname{rank}(A)=\min\{m,n\}\ \&\ \operatorname{rank}(C)=\min\{k,h\},\\ W(t)+\beta I_{nk+nm+hk}, & \operatorname{rank}(A)<\min\{m,n\}\ \text{or}\ \operatorname{rank}(C)<\min\{k,h\},\end{cases}\tag{63}$$
wherein $\beta>0$. Then the following ZNN model is used instead of (62):
$$M(t)\dot y(t)=r(t),\tag{64}$$
where $M(t)$ is a nonsingular mass matrix. The NZNNV2 design (64) is solvable with an ode solver available in Matlab. The global and exponential convergence of the NZNNV2 flow (64) to the exact TV solution of (42) is verified in Theorem 4.
Theorem 4.
Let $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ be differentiable. If the condition (16) is satisfied, the NZNNV2 model (64), initialized by an arbitrary initial value $y(0)$, converges exponentially to the exact solution of the TV GLME (42) in the form (18).
Proof. 
Similar to the proof of Theorem 3.    □
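Inside an ode right-hand side, the stacked state $y$ of (61) can be unpacked as in the following sketch (dimensions n, k, m, h as assumed in the text):

    % Unpack the stacked NZNNV2 state y = [vec(Y); vec(X); vec(Z)] from (61).
    Y = reshape(y(1 : n*k), n, k);                 % desired solution
    X = reshape(y(n*k+1 : n*k+n*m), n, m);         % approximation of pinv(A(t))
    Z = reshape(y(n*k+n*m+1 : end), h, k);         % approximation of pinv(C(t))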

3.5. Newton ZNN Model (Version 3)

This subsection presents the third version of the Newton ZNN model for solving the TV GLME (65). The Newton approach considers smooth arbitrary TV matrices $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ for solving the following GLME:
$$\big(A^TA+\gamma I_n\big)^{-1}A^TAYCC^T\big(CC^T+\gamma I_k\big)^{-1}=\big(A^TA+\gamma I_n\big)^{-1}A^TBC^T\big(CC^T+\gamma I_k\big)^{-1},\quad \{A,B,C\}\in\mathcal{D}_G,\tag{65}$$
where $Y\in\mathbb{R}^{n\times k}$ is the desired solution. Note that $\gamma\geq 0$ denotes the regularization parameter.
Based on the aforementioned, the error function in the Newton ZNN version 3 (NZNNV3) model is defined by
$$E_N(t)=\big(A^TA+\gamma I_n\big)^{-1}A^T(AYC-B)C^T\big(CC^T+\gamma I_k\big)^{-1},\quad \{A,B,C\}\in\mathcal{D}_G.\tag{66}$$
Given that the time derivative of the inverse of $P(t)$ is
$$\dot P^{-1}(t)=-P^{-1}\dot PP^{-1},\tag{67}$$
and setting
$$S_1(t)=\big(A^TA+\gamma I_n\big)^{-1},\quad m\geq n\geq\operatorname{rank}(A),\tag{68}$$
$$S_2(t)=\big(CC^T+\gamma I_k\big)^{-1},\quad h\geq k\geq\operatorname{rank}(C),\tag{69}$$
with
$$\dot S_1(t)=-\big(A^TA+\gamma I_n\big)^{-1}\big(\dot A^TA+A^T\dot A\big)\big(A^TA+\gamma I_n\big)^{-1},\quad m\geq n\geq\operatorname{rank}(A),\tag{70}$$
and
$$\dot S_2(t)=-\big(CC^T+\gamma I_k\big)^{-1}\big(\dot CC^T+C\dot C^T\big)\big(CC^T+\gamma I_k\big)^{-1},\quad h\geq k\geq\operatorname{rank}(C),\tag{71}$$
the time derivative of (66) is given by
$$\dot E_N(t)=\dot S_1A^T(AYC-B)C^TS_2+S_1\dot A^T(AYC-B)C^TS_2+S_1A^T\big(\dot AYC+A\dot YC+AY\dot C-\dot B\big)C^TS_2+S_1A^T(AYC-B)\dot C^TS_2+S_1A^T(AYC-B)C^T\dot S_2,\quad \{A,B,C\}\in\mathcal{D}_G.\tag{72}$$
Then, considering the linear ZNN dynamics (2), it can be obtained that
$$\dot S_1A^T(AYC-B)C^TS_2+S_1\dot A^T(AYC-B)C^TS_2+S_1A^T\big(\dot AYC+A\dot YC+AY\dot C-\dot B\big)C^TS_2+S_1A^T(AYC-B)\dot C^TS_2+S_1A^T(AYC-B)C^T\dot S_2=-\lambda S_1A^T(AYC-B)C^TS_2,\tag{73}$$
or equivalently
$$S_1A^TA\dot YCC^TS_2=-\lambda S_1A^T(AYC-B)C^TS_2-\dot S_1A^T(AYC-B)C^TS_2-S_1\dot A^T(AYC-B)C^TS_2-S_1A^T\big(\dot AYC+AY\dot C-\dot B\big)C^TS_2-S_1A^T(AYC-B)\dot C^TS_2-S_1A^T(AYC-B)C^T\dot S_2.\tag{74}$$
Vectorization and the Kronecker product transform (74) into the equivalent form
$$\big((CC^TS_2)^T\otimes(S_1A^TA)\big)\operatorname{vec}(\dot Y)=\operatorname{vec}\Big(-\lambda S_1A^T(AYC-B)C^TS_2-\dot S_1A^T(AYC-B)C^TS_2-S_1\dot A^T(AYC-B)C^TS_2-S_1A^T\big(\dot AYC+AY\dot C-\dot B\big)C^TS_2-S_1A^T(AYC-B)\dot C^TS_2-S_1A^T(AYC-B)C^T\dot S_2\Big).\tag{75}$$
Then, setting
$$\begin{aligned}W(t)&=(CC^TS_2)^T\otimes(S_1A^TA),\\ r(t)&=\operatorname{vec}\Big(-\lambda S_1A^T(AYC-B)C^TS_2-\dot S_1A^T(AYC-B)C^TS_2-S_1\dot A^T(AYC-B)C^TS_2-S_1A^T\big(\dot AYC+AY\dot C-\dot B\big)C^TS_2-S_1A^T(AYC-B)\dot C^TS_2-S_1A^T(AYC-B)C^T\dot S_2\Big),\\ \dot y(t)&=\operatorname{vec}(\dot Y(t)),\quad y(t)=\operatorname{vec}(Y(t)),\end{aligned}\tag{76}$$
the following system of differential equations is obtained:
$$W(t)\dot y(t)=r(t).\tag{77}$$
Since $W(t)$ is singular in the case $\operatorname{rank}(A)<\min\{m,n\}$ or $\operatorname{rank}(C)<\min\{k,h\}$, the utilization of the Tikhonov principle leads to the invertible mass matrix
$$M(t)=\begin{cases}W(t), & \operatorname{rank}(A)=\min\{m,n\}\ \&\ \operatorname{rank}(C)=\min\{k,h\},\\ W(t)+\beta I_{kn}, & \operatorname{rank}(A)<\min\{m,n\}\ \text{or}\ \operatorname{rank}(C)<\min\{k,h\},\end{cases}\tag{78}$$
in which $\beta>0$. As a result, the following dynamical evolution is used instead of (77):
$$M(t)\dot y(t)=r(t),\tag{79}$$
where $M(t)$ is a nonsingular mass matrix. The ZNN flow (79) will be denoted as NZNNV3, and it is solvable using one of the ode Matlab solvers.
Theorem 5.
Let $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ be differentiable. If the condition (16) is satisfied, the NZNNV3 model (79), starting from an arbitrary initial state $y(0)$, converges exponentially to the theoretical TV solution of the TV GLME (65) in the form (18).
Proof. 
Similar to the proof of Theorem 3.    □
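A sketch of how the auxiliary matrices (68)-(71) can be evaluated at a given time t, with user-supplied handles A, C, dA, dC, a parameter gamma, and dimensions n, k, is:

    % Regularized inverses (68)-(69) and their time derivatives (70)-(71),
    % using d/dt P^{-1} = -P^{-1} * (dP/dt) * P^{-1} from (67).
    S1  = inv(A(t)'*A(t) + gamma*eye(n));
    S2  = inv(C(t)*C(t)' + gamma*eye(k));
    dS1 = -S1 * (dA(t)'*A(t) + A(t)'*dA(t)) * S1;
    dS2 = -S2 * (dC(t)*C(t)' + C(t)*dC(t)') * S2;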

3.6. Direct ZNN Model

The direct approach considers smooth arbitrary TV matrices $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ for solving the GLME
$$Y(t)=A^\dagger(t)B(t)C^\dagger(t),\quad \{A,B,C\}\in\mathcal{D}_G,\tag{80}$$
where the desired solution is $Y(t)\in\mathbb{R}^{n\times k}$. The direct approach always calculates the solution produced directly by the pseudoinverses of $A(t)$ and $C(t)$. As a result, the direct ZNN (DZNN) model includes three error functions. The first error function in the DZNN model is defined by
$$E_1(t)=Y(t)-X(t)B(t)Z(t),\quad \{A,B,C\}\in\mathcal{D}_G,\tag{81}$$
wherein $X(t)\in\mathbb{R}^{n\times m}$ is the desired zero of the second error function for calculating $A^\dagger(t)$ (see [38]):
$$E_2(t)=A^T(t)\big(I_m-A(t)X(t)\big),\quad m\geq n\geq\operatorname{rank}(A(t)),\tag{82}$$
and $Z(t)\in\mathbb{R}^{h\times k}$ is the zero of the third error function for finding $C^\dagger(t)$:
$$E_3(t)=\big(I_h-Z(t)C(t)\big)C^T(t),\quad h\geq k\geq\operatorname{rank}(C(t)).\tag{83}$$
The time derivative of (81) is defined by
$$\dot E_1(t)=\dot Y-\dot XBZ-X\dot BZ-XB\dot Z,\quad \{A,B,C\}\in\mathcal{D}_G,\tag{84}$$
the time derivative of (82) is
$$\dot E_2(t)=\dot A^T(I_m-AX)-A^T\dot AX-A^TA\dot X,\quad m\geq n\geq\operatorname{rank}(A),\tag{85}$$
and the time derivative of (83) is given by
$$\dot E_3(t)=\big(I_h-ZC\big)\dot C^T-\dot ZCC^T-Z\dot CC^T,\quad h\geq k\geq\operatorname{rank}(C).\tag{86}$$
Then, considering the ZNN model (2), it can be obtained that
$$\begin{aligned}&\dot Y-\dot XBZ-X\dot BZ-XB\dot Z=-\lambda(Y-XBZ),\\ &\dot A^T(I_m-AX)-A^T\dot AX-A^TA\dot X=-\lambda A^T(I_m-AX),\\ &\big(I_h-ZC\big)\dot C^T-\dot ZCC^T-Z\dot CC^T=-\lambda\big(I_h-ZC\big)C^T,\end{aligned}\tag{87}$$
or in the equivalent form
$$\begin{aligned}&\dot Y-\dot XBZ-XB\dot Z=-\lambda(Y-XBZ)+X\dot BZ,\\ &A^TA\dot X=\lambda A^T(I_m-AX)+\dot A^T(I_m-AX)-A^T\dot AX,\\ &\dot ZCC^T=\lambda\big(I_h-ZC\big)C^T+\big(I_h-ZC\big)\dot C^T-Z\dot CC^T.\end{aligned}\tag{88}$$
Using vectorization and the Kronecker product, (88) is transformed into
$$\begin{aligned}&I_{nk}\operatorname{vec}(\dot Y)-\big((BZ)^T\otimes I_n\big)\operatorname{vec}(\dot X)-\big(I_k\otimes(XB)\big)\operatorname{vec}(\dot Z)=\operatorname{vec}\Big(-\lambda(Y-XBZ)+X\dot BZ\Big),\\ &\big(I_m\otimes A^TA\big)\operatorname{vec}(\dot X)=\operatorname{vec}\Big(\lambda A^T(I_m-AX)+\dot A^T(I_m-AX)-A^T\dot AX\Big),\\ &\big(CC^T\otimes I_h\big)\operatorname{vec}(\dot Z)=\operatorname{vec}\Big(\lambda\big(I_h-ZC\big)C^T+\big(I_h-ZC\big)\dot C^T-Z\dot CC^T\Big).\end{aligned}\tag{89}$$
Setting
$$\begin{aligned}&w_1(t)=I_{nk},\quad w_2(t)=(BZ)^T\otimes I_n,\quad w_3(t)=I_k\otimes(XB),\quad w_4(t)=I_m\otimes A^TA,\quad w_5(t)=CC^T\otimes I_h,\\ &r_1(t)=\operatorname{vec}\Big(-\lambda(Y-XBZ)+X\dot BZ\Big),\\ &r_2(t)=\operatorname{vec}\Big(\lambda A^T(I_m-AX)+\dot A^T(I_m-AX)-A^T\dot AX\Big),\\ &r_3(t)=\operatorname{vec}\Big(\lambda\big(I_h-ZC\big)C^T+\big(I_h-ZC\big)\dot C^T-Z\dot CC^T\Big),\\ &W(t)=\begin{bmatrix}w_1 & -w_2 & -w_3\\ O_{mn,kn} & w_4 & O_{mn,kh}\\ O_{kh,kn} & O_{kh,mn} & w_5\end{bmatrix},\quad r(t)=\begin{bmatrix}r_1\\ r_2\\ r_3\end{bmatrix},\quad \dot y=\begin{bmatrix}\operatorname{vec}(\dot Y)\\ \operatorname{vec}(\dot X)\\ \operatorname{vec}(\dot Z)\end{bmatrix},\quad y=\begin{bmatrix}\operatorname{vec}(Y)\\ \operatorname{vec}(X)\\ \operatorname{vec}(Z)\end{bmatrix},\end{aligned}\tag{90}$$
the following differential system with the mass matrix $W(t)$ is obtained:
$$W(t)\dot y(t)=r(t).\tag{91}$$
Since $W(t)$ is singular when $\operatorname{rank}(A(t))<\min\{m,n\}$ or $\operatorname{rank}(C(t))<\min\{k,h\}$, the following regular mass matrix is used:
$$M(t)=\begin{cases}W(t), & \operatorname{rank}(A(t))=\min\{m,n\}\ \&\ \operatorname{rank}(C(t))=\min\{k,h\},\\ W(t)+\beta I_{nk+nm+hk}, & \operatorname{rank}(A(t))<\min\{m,n\}\ \text{or}\ \operatorname{rank}(C(t))<\min\{k,h\},\end{cases}\tag{92}$$
such that $\beta>0$. As a result, the following ZNN model is used instead of (91):
$$M(t)\dot y(t)=r(t),\tag{93}$$
where $M(t)$ is a nonsingular mass matrix. The system (93) is termed the DZNN model, and it is solvable with an ode Matlab solver.
Theorem 6.
Let $A(t)\in\mathbb{R}^{m\times n}$, $C(t)\in\mathbb{R}^{k\times h}$ and $B(t)\in\mathbb{R}^{m\times h}$ be differentiable. If the condition (16) is satisfied, the DZNN model (93), starting from any initial value $y(0)$, converges exponentially to the theoretical TV solution of the TV GLME (80) in the form (18).
Proof. 
Analogous to the proof of Theorem 3. □
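The block mass matrix $W(t)$ of (90) can be assembled as in this sketch, with w1, ..., w5 computed beforehand from user-supplied data as in (90):

    % Assemble the DZNN mass matrix W(t) from (90); the minus signs come from
    % moving the vec(Xdot) and vec(Zdot) terms of (89) to the left-hand side.
    W = [w1, -w2, -w3;
         zeros(m*n, k*n), w4, zeros(m*n, k*h);
         zeros(k*h, k*n), zeros(k*h, m*n), w5];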

4. Simulation Examples

This section compares the performance of the TZNN model (29), the GZNN design (40), the NZNNV1 model (51), the NZNNV2 model (64), the NZNNV3 model (79), and the DZNN model (93) on six numerical examples (NEs) involving square or rectangular, as well as singular or nonsingular, input TV matrices. The diagram in Figure 1 is presented to better understand the solutions produced by these six models.
This diagram shows how the GZNN, NZNNV1, NZNNV2, and NZNNV3 models generate various solutions for different initial conditions. In contrast, the TZNN and DZNN models generate just the pseudoinverse solution of the TV GLME (1) for any initial conditions.
As preliminaries, a few parameters and symbols must be defined, as well as some additional information for the subsequent NEs. The considered time interval is $[0,10]$ in all NEs, the ZNN scaling parameter is set to $\lambda=10$, and the regularization parameter is $\beta=10^{-8}$. The initial condition (IC1) in all NEs is
$$Y(0)=A^T(0)I_{m,h}C^T(0)\tag{94}$$
for all models, with $X(0)=A^T(0)$ and $Z(0)=C^T(0)$ for the NZNNV2 model (64), the NZNNV3 model (79), and the DZNN model (93). The second initial condition (IC2) for the last three NEs has been set as
$$Y(0)=(1{:}k)\odot\mathbf{1}_{n\times k},\tag{95}$$
with $X(0)=A^T(0)$ and $Z(0)=C^T(0)$ for the NZNNV2 model (64), the NZNNV3 model (79) and the DZNN model (93), and $\gamma=10^{-1}$ for the NZNNV3 model (79). In the figure legends, TZNN, GZNN, NZNNV1, NZNNV2, NZNNV3 and DZNN denote the solutions or errors produced by the corresponding models, while the notations $Y_1^*(t)$, $Y_2^*(t)$, $Y^*(t)$ refer to $Y_1^*(t)=A^\dagger BC^\dagger$, $Y_2^*(t)=\widetilde Y_{Y(0)}(t)$, and $Y^*(t)=Y_1^*=Y_2^*$.
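In Matlab notation, the initial conditions (94) and (95) can be formed as in the following sketch, where A0 = A(0), C0 = C(0) and the dimensions m, n, k, h are assumed to be available:

    % IC1 from (94): Y(0) = A(0)' * I_{m,h} * C(0)', with the rectangular
    % "identity" I_{m,h}; IC2 from (95) fills column j of Y(0) with the value j.
    Imh    = eye(m, h);
    Y0_IC1 = A0' * Imh * C0';
    Y0_IC2 = (1:k) .* ones(n, k);    % implicit expansion of the row vector (1:k)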

4.1. Example 1

This NE is about a TV GLME that considers the following input matrices, where $A(t)$ is square of dimensions $4\times4$ (i.e., $m=n=4$):
$$A(t)=\begin{bmatrix}3+\cos(t) & 1+\tfrac12\sin(t) & \cdots & 1+\tfrac1n\sin(t)\\ 1+\tfrac12\sin(t) & 3+\cos(t) & \cdots & 1+\tfrac{1}{n-1}\sin(t)\\ \vdots & \vdots & \ddots & \vdots\\ 1+\tfrac1m\sin(t) & 1+\tfrac{1}{m-1}\sin(t) & \cdots & 3+\cos(t)\end{bmatrix},\quad C(t)=\begin{bmatrix}5+\cos(t) & 3+\sin(t)\\ 3+\cos(t) & 2+\sin(t)\end{bmatrix},$$
$$B(t)=\begin{bmatrix}7+\cos(t) & 6+\sin(2t)\\ 5+\sin(t) & 4-\sin(t)\\ 1+\cos(t) & 6+\cos(t)\\ 3-\cos(t) & 2+\cos(t)\end{bmatrix},$$
such that $\operatorname{rank}(A(t))=4$. The input matrices $A(t)$ and $C(t)$ are nonsingular.

4.2. Example 2

This NE is about a TV GLME that considers the rectangular matrices
$$A(t)=\begin{bmatrix}6-\cos(\pi t) & 3-\cos(\pi t) & 1-\cos(\pi t)\\ 5-\cos(\pi t) & 8-\cos(\pi t) & 2-\cos(\pi t)\end{bmatrix}^T,\quad C(t)=\begin{bmatrix}5+\cos(t) & 3+\sin(t) & 2+\sin(t) & 1+\sin(t)\\ 3+\cos(t) & 2+\sin(t) & 3+\sin(t) & 4+\sin(t)\\ 5+\cos(t) & 7+\sin(t) & 3+\sin(t) & 2+\sin(t)\end{bmatrix},$$
$$B(t)=\begin{bmatrix}2A(:,1) & 3A(:,2) & 3A(:,2) & 2A(:,1)\end{bmatrix},$$
such that $\operatorname{rank}(A(t))=2$. That is, the input $A(t)$ is a full-column rank $3\times2$ matrix, and the input $C(t)$ is a full-row rank $3\times4$ matrix.

4.3. Example 3

This NE is about a TV GLME that takes into account the rectangular matrices below:
$$A(t)=\begin{bmatrix}3+\cos(t) & 1+\tfrac12\sin(t) & \cdots & 1+\tfrac1n\sin(t)\\ 1+\tfrac12\sin(t) & 3+\cos(t) & \cdots & 1+\tfrac{1}{n-1}\sin(t)\\ \vdots & \vdots & \ddots & \vdots\\ 1+\tfrac1m\sin(t) & 1+\tfrac{1}{m-1}\sin(t) & \cdots & 3+\cos(t)\end{bmatrix},\quad C(t)=\begin{bmatrix}5+\cos(t) & 3+\sin(t) & 2+\sin(t) & 1+\sin(t)\\ 3+\cos(t) & 2+\sin(t) & 3+\sin(t) & 4+\sin(t)\end{bmatrix},$$
B ( t ) = diag ( 1 : k ) 1 k 2 × 1 2 , k + sin ( t ) ,
where $\operatorname{rank}(A(t))=\min\{m,n\}$ and $m=10$, $n=6$ and $k=4$. That is, the input matrix $A(t)$ is a full-column rank $10\times6$ matrix, and the input $C(t)$ is a full-row rank $2\times4$ matrix.

4.4. Example 4

This example concerns a TV GLME that takes into account the input matrices
$$A(t)=\begin{bmatrix}1+\sin(t) & 1+\tfrac12\sin(t) & 1+\tfrac13\sin(t) & 1+\tfrac14\sin(t)\end{bmatrix}\odot\mathbf{1}_4,\quad C(t)=\begin{bmatrix}5+\cos(t) & 3+\sin(t)\\ 5+\cos(t) & 3+\sin(t)\end{bmatrix},\quad B(t)=\begin{bmatrix}3A(:,1) & 2A(:,4)\end{bmatrix},$$
such that $A(t)$ and $C(t)$ are singular and of dimensions $4\times4$ and $2\times2$, respectively, with $\operatorname{rank}(A(t))=\operatorname{rank}(C(t))=1$.

4.5. Example 5

This NE is about a TV GLME that takes the rectangular matrices below into account:
$$A(t)=\begin{bmatrix}6-\cos(\pi t) & 5-\cos(\pi t) & 1+\sin(\pi t) & 2+\sin(t) & 2+\sin(t)\end{bmatrix}^T\begin{bmatrix}1 & 2\end{bmatrix}\odot\mathbf{1}_{5,2},$$
$$C(t)=\begin{bmatrix}5 & 3 & 2 & 1 & 2 & 3\\ 8 & 4 & 2 & 7 & 2 & 3\\ 5 & 7 & 3 & 2 & 3 & 2\end{bmatrix}+\sin(t),\quad B(t)=\begin{bmatrix}5A(:,1) & 2A(:,2) & \mathbf{1}_{5\times4}\end{bmatrix},$$
where $\operatorname{rank}(A(t))=1$. The input matrix $A(t)$ is a $5\times2$ rank-deficient matrix, and $C(t)$ is a full-row rank $3\times6$ matrix.

4.6. Example 6

This NE is about a TV GLME that takes the rectangular matrices below into consideration:
$$A(t)=\begin{bmatrix}1+\sin(t)\\ 1+\tfrac12\sin(t)\\ \vdots\\ 1+\tfrac1m\sin(t)\end{bmatrix}\odot\mathbf{1}_{m\times n},\quad C(t)=\begin{bmatrix}5+\cos(t) & 3+\sin(t) & 2+\sin(t) & 1+\sin(t)\end{bmatrix}\odot\mathbf{1}_{k\times h},\quad B(t)=\begin{bmatrix}3A(:,1) & 2A(:,1) & 5A(:,1) & \mathbf{1}_m\end{bmatrix},$$
where $\operatorname{rank}(A(t))=1$ with $m=8$ and $n=5$, and $\operatorname{rank}(C(t))=1$ with $k=3$ and $h=4$. The input matrix $A(t)$ is rank-deficient and of size $8\times5$, and $C(t)$ is a rank-deficient $3\times4$ matrix.

4.7. General Discussion

Since $A(t)$ and $C(t)$ in Example 1 are nonsingular, the solvability condition (16) is satisfied. As a consequence, the TV GLME (1) is solvable with the unique solution $Y_1^*(t)=A^\dagger BC^\dagger$. On the other hand, the solvability condition (16) for the matrix equation $E(t)=0$ is not satisfied in Examples 2–6, so the equation $E(t)=0$ is not solvable in those examples. Nevertheless, according to Theorem 1, the matrix equations $E_G(t)=0$ and $E_N(t)=0$ are solvable in all examples.
In this subsection, the performance of the six proposed ZNN models for solving the TV GLME (1) is investigated through six NEs. All models generate the solution $Y_1^*(t)$, i.e., the pseudoinverse solution of the TV GLME (1), under IC1 in all NEs, while the GZNN, NZNNV1, NZNNV2, and NZNNV3 models generate the solution $Y_2^*(t)$ under IC2 in the NEs of Section 4.4, Section 4.5 and Section 4.6. More precisely, the NEs of Section 4.1, Section 4.2 and Section 4.3, respectively, deal with square nonsingular $A(t)$ and $C(t)$, a rectangular full-column rank $A(t)$, and a rectangular full-row rank $C(t)$. As a result, all models generate the pseudoinverse solution of the TV GLME (1), i.e., $Y^*(t)=Y_1^*(t)=Y_2^*(t)$, for any initial condition. It is worth noting that the TZNN is only applicable in the NE of Section 4.1, which has square nonsingular input matrices $A(t)$ and $C(t)$. To substantiate this claim, the TZNN has been used in the NE of Section 4.2, where Figure 2b demonstrates that the TZNN error matrix does not converge to zero. Furthermore, the NE of Section 4.4 deals with square singular $A(t)$ and $C(t)$, and the NEs of Section 4.5 and Section 4.6 deal with rectangular rank-deficient $A(t)$ and $C(t)$. As a result, the GZNN, NZNNV1, NZNNV2, and NZNNV3 models generate the solution $Y_1^*(t)$ under IC1 and the solution $Y_2^*(t)$ under IC2. It is worth noting that $Y_1^*(t)\neq Y_2^*(t)$ in the NEs of Section 4.4, Section 4.5 and Section 4.6 under IC2.
Figure 2, Figure 3 and Figure 4 follow the same layout: the first rows display the convergence of the Frobenius norms of the error functions involved in the tested models, i.e., (25) of TZNN, (33) of GZNN, (41) of NZNNV1, (52) and (53) of NZNNV2 (NZNNV2-E1 and NZNNV2-E2), (66) of NZNNV3, and (81) and (82) of DZNN (DZNN-E1 and DZNN-E2); the second rows exhibit the convergence behavior of the models on Error 1: $\|Y(t)-Y_2^*(t)\|_F$, where $Y(t)$ refers to the solutions generated by the considered models; the third rows display trajectories of the generated solutions $Y(t)$; the fourth rows show the convergence of the models on Error 2: $\|A(t)Y(t)C(t)-B(t)\|_F$ in the NE of Section 4.1, and Error 2: $\|A^T(t)(A(t)Y(t)C(t)-B(t))C^T(t)\|_F$ in the NEs of Section 4.2, Section 4.3, Section 4.4, Section 4.5 and Section 4.6. Note also that Figure 2 contains results for the NEs of Section 4.1, Section 4.2 and Section 4.3, Figure 3 contains results for the NEs of Section 4.4, Section 4.5 and Section 4.6 under IC1, and Figure 4 contains results for the NEs of Section 4.4, Section 4.5 and Section 4.6 under IC2.
The following can be deduced from the NEs. The exponential convergence of the models proven in Theorems 2–6 can be observed in the first-row figures. Particularly, Figure 2a–c present the exponential convergence of the models in the NEs of Section 4.1, Section 4.2 and Section 4.3, respectively, whereas Figure 3a–c and Figure 4a–c present the exponential convergence of the models in the NEs of Section 4.4, Section 4.5 and Section 4.6 under IC1 and IC2, respectively. More particularly, while all models begin with an initial value other than the optimal one, all NEs have received differentiable $A(t)$, $B(t)$, and $C(t)$ matrices that meet the condition (16). In Figure 2a–c, Figure 3a–c and Figure 4a–c, the convergence of the error functions begins at $t=0$ in the range $[10^{-1},10^{6}]$ and ends before $t=2$ with lowest values in the range $[10^{-10},10^{-1}]$. It is also crucial to note that, when the value of the design parameter $\lambda$ is higher, the models converge faster and the overall error is lowered even further. Furthermore, all other figures behave in the same way due to the convergence tendency of the error functions. In other words, the graphs associated with the models in the remaining figures begin at $t=0$ at a value very different from the objective and reach it before $t=2$. As a result, it is evident that Theorems 2–6 are proven true.
In general, according to the first-row figures, all the ZNN models converge to zero, where the NZNNV1 and NZNNV3 models have the fastest convergence rate, and the TZNN and DZNN have the second and third fastest, respectively. In contrast, the GZNN and NZNNV2 models have the slowest and almost similar convergence rates. According to the second-row figures under IC1 and the fourth-row figures, the NZNNV1 and NZNNV3 models have the fastest convergence rate, the GZNN has the second fastest, and the NZNNV2 model has the slowest. The NZNNV2 and DZNN models have the lowest overall errors when $A(t)$ and $C(t)$ are rectangular, specifically in the NEs of Section 4.2 and Section 4.5. The second-row figures under IC2 show that the NZNNV1, NZNNV2, NZNNV3, and GZNN models perform similarly to the second-row figures under IC1. However, the DZNN model does not converge to zero, as expected. Finally, the third-row figures demonstrate that, under IC1, all models' solutions match $Y^*(t)$; however, under IC2, the DZNN model's solutions match $Y_1^*(t)$, while the NZNNV1, NZNNV2, NZNNV3, and GZNN models' solutions match $Y_2^*(t)$.
The graphs included in Figure 3a–c and Figure 4a–c, on the other hand, show that IC1 and IC2 have no effect on the models' convergence, confirming the claim of Theorems 2–6 that the models' performance is unaffected by the initial condition.
To summarize, the TZNN is only applicable in the case where $m=n=\operatorname{rank}(A(t))$ and $k=h=\operatorname{rank}(C(t))$. It is important to mention that $A(t)$ becomes $C(t)$ and $C(t)$ becomes $A(t)$ when $n<m$ and $k<h$. As a result, the cases where $n<m$ and $k<h$ are disregarded in the NEs. The GZNN has a lower overall error than NZNNV1 and NZNNV3, although they have the same convergence speed. Even though the NZNNV2 has the slowest convergence speed, its overall error is between those of NZNNV1 and GZNN. Compared to GZNN, NZNNV1, and NZNNV3, the DZNN model's convergence speed is slightly slower, but its overall error is the lowest. Remember that the DZNN model's primary distinction from all other models is that, regardless of the initial conditions, it generates only a single solution based on pseudoinversion. That is, all the models work excellently in solving the TV GLME (1).

5. Conclusions

The problem of solving TV GLMEs with arbitrary TV real input matrices is resolved in this paper by applying the ZNN design. Six ZNN models for calculating the online solution to general TV GLMEs are defined, analyzed, and compared: TZNN, which is based on the traditional ZNN approach; GZNN, which is based on the gradient ZNN approach; NZNNV1, NZNNV2, and NZNNV3, which are based on Newton's optimization method; and DZNN, which is based on the ZNN approach for finding the direct solution produced with the use of the pseudoinverse. Six numerical examples involving square or rectangular, and singular or nonsingular, matrices show that all models successfully solve TV GLMEs. However, their effectiveness varies and depends on the input matrices, while the NZNNV1 and NZNNV3 models converge to exact solutions faster than the other models.
There are a few prospective study areas that can be explored.
  • The streams of NZNNV1, NZNNV2, NZNNV3, and DZNN models accelerated by appropriate nonlinear activations, as well as nonlinear NZNNV1, NZNNV2, NZNNV3, and DZNN model flows with finite-time convergence, can all be investigated.
  • Another research stream is to use carefully selected parameters defined in fuzzy environments. Such research will be a continuation of research presented in [25,26,38,39].
  • The presented TZNN, GZNN, NZNNV1, NZNNV2, NZNNV3, and DZNN models are not noise-tolerant, so all noise types have a significant influence on the accuracy of the suggested ZNNs. Consequently, related future investigation can be oriented toward adjusting the proposed dynamical systems to appropriate, more general, integration-enhanced and noise-tolerant ZNN classes.
  • Numerous results concerning the solution of important equations of a special type have appeared in recent years. For example, there are a number of recent papers concerning the Sylvester equation [40,41] or related to the Lyapunov equation [42,43,44]. Studying such equations can be one of the goals of our research.

Author Contributions

P.S.S.: conceptualization, methodology, validation, formal analysis, investigation, writing—original draft, writing-review & editing. S.D.M.: data curation, validation, investigation, formal analysis, writing-review & editing. V.N.K. (Vasilios N. Katsikis): conceptualization, methodology, validation, formal analysis, investigation, writing–original draft. L.A.K.: methodology, validation, investigation. V.N.K. (Vladimir N. Krutikov): methodology, formal analysis, investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 075-15-2022-1121).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Predrag Stanimirović acknowledges the support of the Ministry of Education, Science and Technological Development, Republic of Serbia, Grant No. 451-03-68/2022-14/200124, and the support of the Science Fund of the Republic of Serbia (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications—QUAM).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jin, L.; Li, S.; La, H.M.; Luo, X. Manipulability optimization of redundant manipulators using dynamic neural networks. IEEE Trans. Ind. Electron. 2017, 64, 4710–4720.
  2. Luo, X.; Zhou, M.; Xia, Y.; Zhu, Q.; Ammari, A.C.; Alabdulwahab, A. Generating highly accurate predictions for missing QoS data via aggregating nonnegative latent factor models. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 524–537.
  3. Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S.; Zhang, Y. Solving complex-valued time-varying linear matrix equations via QR decomposition with applications to robotic motion tracking and on angle-of-arrival localization. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 3415–3424.
  4. Simos, T.E.; Katsikis, V.N.; Mourtas, S.D. Multi-input bio-inspired weights and structure determination neuronet with applications in European Central Bank publications. Math. Comput. Simul. 2022, 193, 451–465.
  5. Zhang, Y.; Ge, S.S. Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Trans. Neural Netw. 2005, 16, 1477–1490.
  6. Zhang, Y.; Chen, K. Comparison on Zhang neural network and gradient neural network for time-varying linear matrix equation AXB=C solving. In Proceedings of the 2008 IEEE International Conference on Industrial Technology, Chengdu, China, 21–24 April 2008.
  7. Xiao, L.; Zhang, Y. From different Zhang functions to various ZNN models accelerated to finite-time convergence for time-varying linear matrix equation. Neural Process. Lett. 2014, 39, 309–326.
  8. Stanimirović, P.; Petković, M. Gradient neural dynamics for solving matrix equations and their applications. Neurocomputing 2018, 306, 200–212.
  9. Stanimirović, P.S.; Katsikis, V.N.; Jin, L.; Mosić, D. Properties and computation of continuous-time solutions to linear systems. Appl. Math. Comput. 2021, 405, 16.
  10. Li, J.; Shi, Y.; Sun, Y.; Fan, J.; Mirza, A. Noise-tolerant zeroing neural dynamics for solving hybrid multilayered time-varying linear equation system. Secur. Commun. Netw. 2022, 2022, 6040463.
  11. Guo, D.; Yi, C.; Zhang, Y. Zhang neural network versus gradient-based neural network for time-varying linear matrix equation solving. Neurocomputing 2011, 74, 3708–3712.
  12. Jin, L.; Li, S.; Liao, B.; Zhang, Z. Zeroing neural networks: A survey. Neurocomputing 2017, 267, 597–604.
  13. Lin, J.; Chen, W.; Zhao, L.; Chen, L.; Tang, Z. A nonlinear zeroing neural network and its applications on time-varying linear matrix equations solving, electronic circuit currents computing and robotic manipulator trajectory tracking. Comput. Appl. Math. 2022, 41, 319.
  14. Xiao, L.; Jia, L.; Zhang, Y.; Hu, Z.; Dai, J. Finite-time convergence and robustness analysis of two nonlinear activated ZNN models for time-varying linear matrix equations. IEEE Access 2019, 7, 135133–135144.
  15. Zhang, Z.; Deng, X.; Qu, X.; Liao, B.; Kong, L.-D.; Li, L. A varying-gain recurrent neural network and its application to solving online time-varying matrix equation. Comput. Appl. Math. 2016, 4, 77940–77952.
  16. Xiao, L. A finite-time convergent neural dynamics for online solution of time-varying linear complex matrix equation. Neurocomputing 2015, 167, 254–259.
  17. Li, X.; Yu, J.; Li, S.; Ni, L. A nonlinear and noise-tolerant ZNN model solving for time-varying linear matrix equation. Neurocomputing 2018, 317, 70–78.
  18. Wang, X.; Liang, L.; Che, M. Finite-time convergent complex-valued neural networks for the time-varying complex linear matrix equations. Eng. Lett. 2018, 26, 432–440.
  19. Mo, C.; Gerontitis, D.; Stanimirović, P. Solving the time-varying tensor square root equation by varying-parameters finite-time Zhang neural network. Neurocomputing 2021, 445, 309–325.
  20. Wang, X.; Che, M.; Wei, Y. Recurrent neural network for computation of generalized eigenvalue problem with real diagonalizable matrix pair and its applications. Neurocomputing 2016, 216, 230–241.
  21. Li, X.; Li, S.; Xu, Z.; Zhou, X. A vary-parameter convergence-accelerated recurrent neural network for online solving dynamic matrix pseudoinverse and its robot application. Neural Process. Lett. 2021, 53, 1287–1304.
  22. Zhang, Z.; Yang, S.; Zheng, L. A penalty strategy combined varying-parameter recurrent neural network for solving time-varying multi-type constrained quadratic programming problems. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2993–3004.
  23. Katsikis, V.N.; Stanimirović, P.S.; Mourtas, S.D.; Li, S.; Cao, X. Towards higher order dynamical systems. In Generalized Inverses: Algorithms and Applications; Kyrchei, I., Ed.; Mathematics Research Developments; Nova Science Publishers, Inc.: New York, NY, USA, 2021; pp. 207–239.
  24. Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S.; Zhang, Y. Continuous-time varying complex QR decomposition via zeroing neural dynamics. Neural Process. Lett. 2021, 53, 3573–3590.
  25. Dai, J.; Yang, X.; Xiao, L.; Jia, L.; Li, Y. ZNN with fuzzy adaptive activation functions and its application to time-varying linear matrix equation. IEEE Trans. Ind. Inform. 2022, 18, 2560–2570.
  26. Dai, J.; Yang, X.; Xiao, L.; Jia, L.; Li, Y. A fuzzy adaptive zeroing neural network with superior finite-time convergence for solving time-variant linear matrix equations. Knowl.-Based Syst. 2022, 242, 108405.
  27. Stanimirović, P.S.; Katsikis, V.N.; Li, S. Hybrid GNN-ZNN models for solving linear matrix equations. Neurocomputing 2018, 316, 124–134.
  28. Xiao, L.; Tan, H.; Dai, J.; Jia, L.; Tang, W. High-order error function designs to compute time-varying linear matrix equations. Inf. Sci. 2021, 576, 173–186.
  29. Zhang, Y.; Guo, D. Zhang Functions and Various Models; Springer: Berlin/Heidelberg, Germany, 2015.
  30. Zhang, Y.; Yi, C. Zhang Neural Networks and Neural-Dynamic Method; Nova Science Publishers, Inc.: New York, NY, USA, 2011.
  31. Nocedal, J.; Wright, S. Numerical Optimization; Springer: New York, NY, USA, 1999.
  32. Djordjević, D.S.; Stanimirović, P.S. Iterative methods for computing generalized inverses related with optimization methods. J. Aust. Math. Soc. 2005, 78, 257–272.
  33. Wang, J. A recurrent neural network for real-time matrix inversion. Appl. Math. Comput. 1993, 55, 89–100.
  34. Wang, J. Recurrent neural networks for computing pseudoinverse of rank-deficient matrices. SIAM J. Sci. Comput. 1997, 18, 1479–1493.
  35. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; CMS Books in Mathematics; Springer: New York, NY, USA, 2003.
  36. Maher, P.J. Some operator inequalities concerning generalized inverses. Ill. J. Math. 1990, 34, 503–514.
  37. Penrose, R. A generalized inverse for matrices. Proc. Camb. Phil. Soc. 1955, 51, 406–413.
  38. Katsikis, V.N.; Stanimirović, P.S.; Mourtas, S.D.; Xiao, L.; Karabasević, D.; Stanujkić, D. Zeroing neural network with fuzzy parameter for computing pseudoinverse of arbitrary matrix. IEEE Trans. Fuzzy Syst. 2022, 30, 3426–3435.
  39. Dai, J.; Luo, L.; Xiao, L.; Jia, L.; Li, X. An intelligent fuzzy robustness ZNN model with fixed-time convergence for time-variant Stein matrix equation. Int. J. Intell. Syst. 2022.
  40. Xiao, L.; Zhang, Y.; Dai, J.; Li, J.; Li, W. New noise-tolerant ZNN models with predefined-time convergence for time-variant Sylvester equation solving. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3629–3640.
  41. Xiao, L.; He, Y. A noise-suppression ZNN model with new variable parameter for dynamic Sylvester equation. IEEE Trans. Ind. Inform. 2021, 17, 7513–7522.
  42. He, Y.; Xiao, L.; Sun, F.; Wang, Y. A variable-parameter ZNN with predefined-time convergence for dynamic complex-valued Lyapunov equation and its application to AOA positioning. Appl. Soft Comput. 2022, 130, 109703.
  43. Liao, B.; Hua, C.; Cao, X.; Katsikis, V.N.; Li, S. Complex noise-resistant zeroing neural network for computing complex time-dependent Lyapunov equation. Mathematics 2022, 10, 2817.
  44. Sun, M.; Liu, J. A novel noise-tolerant Zhang neural network for time-varying Lyapunov equation. Adv. Contin. Discret. Model. 2020, 2020, 116.
Figure 1. Solutions produced by proposed ZNN models.
Figure 2. Errors and trajectories in NE Section 4.1, Section 4.2 and Section 4.3.
Figure 3. Errors and trajectories in NE Section 4.4, Section 4.5 and Section 4.6 under IC1.
Figure 4. Errors and trajectories in NE Section 4.4, Section 4.5 and Section 4.6 under IC2.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
