Article

Simulations and Bisimulations between Weighted Finite Automata Based on Time-Varying Models over Real Numbers

by Predrag S. Stanimirović 1,2, Miroslav Ćirić 1, Spyridon D. Mourtas 2,3, Pavle Brzaković 4 and Darjan Karabašević 4,5,*

1 Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia
2 Laboratory “Hybrid Methods of Modelling and Optimization in Complex Systems”, Siberian Federal University, Prosp. Svobodny 79, Krasnoyarsk 660041, Russia
3 Department of Economics, Division of Mathematics-Informatics and Statistics-Econometrics, National and Kapodistrian University of Athens, Sofokleous 1 Street, 10559 Athens, Greece
4 Faculty of Applied Management, Economics and Finance, University Business Academy in Novi Sad, Jevrejska 24, 11000 Belgrade, Serbia
5 College of Global Business, Korea University, Sejong 30019, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(13), 2110; https://doi.org/10.3390/math12132110
Submission received: 5 June 2024 / Revised: 27 June 2024 / Accepted: 3 July 2024 / Published: 5 July 2024
(This article belongs to the Special Issue Advances in Fuzzy Logic and Artificial Neural Networks)

Abstract:
The zeroing neural network (ZNN) is an important kind of continuous-time recurrent neural network (RNN). Meanwhile, the existence of forward and backward simulations and bisimulations for weighted finite automata (WFA) over the field of real numbers has been widely investigated. Two types of quantitative simulations and two types of bisimulations between WFA are determined as solutions to particular systems of matrix and vector inequations over the field of real numbers R . The approach used in this research is unique and based on the application of a ZNN dynamical evolution in solving underlying matrix and vector inequations. This research is aimed at the development and analysis of four novel ZNN dynamical systems for addressing the systems of matrix and/or vector inequalities involved in simulations and bisimulations between WFA. The problem considered in this paper requires solving a system of two vector inequations and a couple of matrix inequations. Using positive slack matrices, required matrix and vector inequations are transformed into corresponding equations and then the derived system of matrix and vector equations is transformed into a system of linear equations utilizing vectorization and the Kronecker product. The solution to the ZNN dynamics is defined using the pseudoinverse solution of the generated linear system. A detailed convergence analysis of the proposed ZNN dynamics is presented. Numerical examples are performed under different initial state matrices. A comparison between the ZNN and linear programming (LP) approach is presented.

1. Preliminaries on Weighted Finite Automata and Zeroing Neural Networks

Simulations between WFA witness the containment of one automaton's behavior in the other's, while bisimulations witness the equivalence of WFA. As a result of the transition from various Boolean to quantitative systems, both simulations and bisimulations become quantitative. The corresponding models are based on matrices whose entries supply a quantitative measurement of the relationship between the states of the underlying systems.
Hereafter, ℝ denotes the field of real numbers, ℕ denotes the set of natural numbers without zero, and the set of all positive real numbers is denoted by ℝ₊. Additionally, X = {x_1, …, x_r} is a non-empty finite set with r elements, where r ∈ ℕ, called an alphabet, while X⁺ = {x_1 x_2 ⋯ x_s ∣ s ∈ ℕ, x_1, x_2, …, x_s ∈ X} is the set of all finite sequences of elements of X, which are called words over the alphabet X, and X* = X⁺ ∪ {ε}, where ε ∉ X⁺ is a symbol that denotes the empty word of length 0. With respect to the conventional concatenation operation on words (sequences), X⁺ forms a semigroup, while X* is a monoid with the identity element ε.
A weighted finite automaton over the field of real numbers ℝ and the alphabet X is defined as a quadruple A = (m, σ^A, {M_x^A}_{x∈X}, τ^A), where m ∈ ℕ denotes the dimension of A; σ^A ∈ ℝ^{1×m} and τ^A ∈ ℝ^{m×1} are the initial vector and terminal vector, respectively; and {M_x^A}_{x∈X} ⊆ ℝ^{m×m} is a collection of transition matrices. The initial vector σ^A is treated as a row vector, while the terminal vector τ^A is treated as a column vector. The behavior of a weighted finite automaton is expressed as a product in which the row vector σ^A represents the initial weights, the matrices {M_x^A}_{x∈X} represent the weights of the transitions induced by input letters, and the column vector τ^A represents the terminal weights.
The collection { M x A } x X is extended up to a collection { M u A } u X * R m × m of compound transition matrices expressed as
\[
M_u^A = \begin{cases} I_m, & u = \varepsilon, \\ M_{x_1}^A M_{x_2}^A \cdots M_{x_s}^A, & u = x_1 x_2 \cdots x_s \in X^+, \end{cases} \tag{1}
\]
where I_m denotes the m × m identity matrix. The matrices M_u^A, u ∈ X*, defined in (1), are known as the compound transition matrices of A. Products of transition matrices carry numerical values over ℝ, known as weights. A function f : X* → ℝ is called a word function. In particular, each weighted finite automaton A = (m, σ^A, {M_x^A}_{x∈X}, τ^A) gives rise to a word function A : X* → ℝ defined as follows:
\[
A(u) = \sigma^A M_u^A \tau^A = \begin{cases} \sigma^A M_{x_1}^A M_{x_2}^A \cdots M_{x_s}^A \tau^A, & u = x_1 x_2 \cdots x_s \in X^+, \\ \sigma^A M_{\varepsilon}^A \tau^A = \sigma^A \tau^A, & u = \varepsilon. \end{cases} \tag{2}
\]
The word function A defined in (2) is called the behavior of A, or the word function computed by A. The behavior of an automaton is thus a mapping that assigns a weight to each word over the alphabet.
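To make (2) concrete, the following minimal sketch evaluates the behavior of a made-up two-state WFA over a one-letter alphabet; the automaton and its weights are our own illustration, not an example from the paper.

```python
import numpy as np

# Toy two-state WFA A = (2, sigma, {M_x}, tau) over R with alphabet X = {x};
# all numbers here are illustrative placeholders.
sigma = np.array([[1.0, 0.0]])          # initial (row) vector, 1 x m
M_x   = np.array([[0.5, 0.5],
                  [0.0, 1.0]])          # transition matrix for letter x
tau   = np.array([[0.0], [1.0]])        # terminal (column) vector, m x 1

def behavior(word):
    """A(u) = sigma * M_{x1} * ... * M_{xs} * tau; sigma * tau for u = eps."""
    M = np.eye(2)                        # compound transition matrix M_u
    for letter in word:
        M = M @ {'x': M_x}[letter]
    return float(sigma @ M @ tau)

print(behavior(''))    # empty word: sigma @ tau
print(behavior('xx'))  # word u = xx
```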
Consider the weighted finite automata (WFA) A = m , σ A , { M x A } x X , τ A and B = n , σ B , { M x B } x X , τ B over the field of real numbers R and X. The following notations are used:
A = B ⟺ A(u) = B(u), for every u ∈ X*;
A ≤ B ⟺ A(u) ≤ B(u), for every u ∈ X*.
WFA A and B over ℝ and the alphabet X are said to be equivalent if A = B. On the other hand, if A ≤ B, then A is said to be contained in B. The problem of determining whether two WFA are equivalent is called the equivalence problem, and the problem of determining whether one of two WFA is contained in the other is called the containment problem. A solution to the equivalence problem decides whether two WFA compute the same word function, while a solution to the containment problem decides whether the word function computed by one WFA is less than or equal to the word function computed by the other.
A matrix (resp. vector) is said to be a positive matrix (resp. positive vector) if all its entries are positive real numbers, and a weighted finite automaton A is said to be a positive automaton if its initial and terminal vectors, as well as all its transition matrices, are positive.
Weighted automata have been applied to describe quantitative properties of various systems, as well as to represent probabilistic models, image compression, speech recognition, and finite representations of formal languages. Context-free grammars are used in the development of programming languages as well as in artificial intelligence.
The theoretical foundations of current investigations involve two types of simulations and two types of bisimulations defined in [1], in the general context of WFA over a semiring. The approach we use consists of defining quantitative simulations and bisimulations as matrices that are solutions to certain systems of matrix inequations. Such an approach was introduced in [2], where quantitative simulations and bisimulations between fuzzy finite automata were introduced and their basic properties were examined. Algorithms for testing their existence were developed in [3]. The same algorithms compute the greatest simulations and bisimulations in cases when they exist. Then, the same approach was applied to the study of bisimulations and simulations for non-deterministic automata [4], WFA over an additively idempotent semiring [5], and max-plus automata [6], as well as for WFA over an arbitrary semiring [1,7], which encompass all the previous ones. It turns out that an almost identical methodology can also be applied to social networks [8]. In [9], it was proven that two probabilistic finite automata are equivalent if and only if there is a bisimulation between them, where the bisimulation is defined as a classical binary relation between the vector spaces corresponding to those automata.
In the present paper, we investigate forward and backward simulations and bisimulations for WFA over the field of real numbers. It is worth noting that there are some very important specifics in this case. For most WFA types, the problem of equivalence (determining whether two automata compute the same word function) and the minimization problem (determining an automaton with the minimal number of states equivalent to a given automaton) are computationally hard. In these cases, bisimulations have two very important roles. The first role is to provide an efficient procedure for witnessing the existence of the equivalence of two automata, and the second one is to provide an efficient way to construct an automaton equivalent to a given one, with a not necessarily minimal but reasonably smaller number of states. However, it is not the case with WFA over the field of real numbers, for which there are efficient algorithms for testing the equivalence and performing minimization. Despite this observation, the importance of bisimulations for these automata is not diminished. Bisimulations are still needed as a means of determining the measure of similarity between the states of different automata, which algorithms for testing the equivalence are unable to do. In the context of weighted automata over the field of real numbers, such measures have already been studied in [10] by means of bisimulation seminorms and pseudometrics, and in [11] by means of linear bisimulations; in our upcoming research, we will deal with the relationships between bisimulation seminorms, linear bisimulations, and our concepts of bisimulations.
Following the definitions of simulations and bisimulations over various algebraic structures, an analogous approach has been used in defining simulations and bisimulations for WFA over the field of real numbers. The problem of simulations and bisimulations for WFA over the field of real numbers reduces to a system of two vector inequations and a number of matrix inequations. There is a notable lack of numerical methods for solving simulation and bisimulation problems. Urabe and Hasuo proposed the idea of reducing the problem of testing the existence of simulations to the problem of linear programming (LP) and implemented it in [7] (Section 5). Seen more generally, the research described in this paper shows that the ZNN design is usable in solving systems of matrix and vector inequations in linear algebra. Our goal is to show that the zeroing neural network (ZNN) dynamics are an effective tool to decide on the containment or equivalence between WFA. A comparison between the ZNN and LP approach is presented.
On the other hand, the application of dynamical systems is a robust tool for solving various matrix algebra problems, primarily owing to the global exponential convergence, parallel distributed essence, convenience of hardware implementation, suitability for online computations involving time-varying (TV) objects, and possibility of providing convergence in a finite time frame [12,13]. ZNN models were first used to solve the TV matrix inversion problem [14]. Standard and finite-time convergent ZNN dynamical systems aimed at solving TV linear matrix equations have been widely investigated [12,15,16,17,18]. The applications of ZNN design, mainly focusing on robot manipulator path tracking, motion planning, and chaotic systems, were surveyed in [19]. ZNN dynamical systems for solving TV linear matrix–vector inequalities (TVLMVI) and TV linear matrix inequalities (TVLMI) have been broadly investigated [12,15,20,21,22,23,24,25,26,27]. Moreover, various ZNN models for solving TVLMI have been applied, mainly in obstacle avoidance for redundant robots and robot manipulator control [12,28,29]. Typically, TVLMVI and TVLMI of type “≤” are solved by utilizing an additional matrix or vector of appropriate dimensions with non-negative entries. A TV matrix inequality of the Stein form A(t)X(t)B(t) + X(t) ≤ C(t) was considered in [21]. A TVLMVI problem of the general form A(t)x(t) ≤ b(t) was considered in [24,26,27]. Two ZNN models for solving systems of two TVLMVI were developed in [15]. In [22], the authors proposed ZNNs for solving TV nonlinear inequalities. Finite-time dynamics for solving general TVLMVI A(t)X(t)B(t) ≤ C(t) were proposed in [25]. A comparison between ZNN and gradient-based networks for solving A(t)x(t) ≤ b(t) was investigated in [23]. The computational time for solving TV equations increases due to the large number of calculations of TV requirements [30].
The problem under consideration is more complex because it requires us to solve systems of linear matrix and vector inequations. The structure of the ZNN models developed in the current research is based on composite models with a prescribed number of error functions in matrix form and two in vector form. The ZNN dynamics aim to force the convergence of the involved error functions to zero over the considered time interval [13]. However, the ZNN model in this paper aims to solve several matrix–vector equations that are, in the general case, inconsistent. Our strategy is to utilize ZNN neurodynamics to generate simulations between two WFA with weights over the real numbers. In this way, our objective falls within the scope of numerical linear algebra.
This research is aimed at the development and analysis of four novel ZNN models for addressing the systems of matrix and vector inequalities involved in simulations between WFA. The problem considered in this paper is specific and complex, and  it requires solving a system of two vector inequations and a couple of matrix inequations. Using positive slack matrices, matrix and vector inequalities are transformed into corresponding equalities. In this case, it is useful to utilize the development of ZNN dynamics based on several inequalities and Zhang error functions. ZNN algorithms established upon a few error functions have been investigated in several studies, such as [31,32,33,34]. Our motivation for the application of ZNN arises from a verified fact that it is a powerful tool for solving various matrix algebra models, possessing global exponential convergence and a parallel distributed structure [12,13]. Therefore, it is interesting to construct the ZNN evolution for such a problem and study its behavior. A detailed convergence analysis is considered. Numerical examples are performed with different initial state matrices.
The main results are emphasized as follows.
(1) Two types of quantitative simulations and two types of bisimulations between WFA are determined as solutions to particular systems of several matrix and two vector inequations over ℝ.
(2) The approach used to solve the problem of simulations and bisimulations in this research is unique and based on the application of the ZNN dynamical evolution in solving the underlying matrix and vector inequations.
(3) A detailed convergence analysis of the proposed ZNN dynamics is presented.
(4) Numerical examples are performed under different initial state matrices, and a comparison between the ZNN and LP approach is presented.
The overall organization of the sections is as follows. Preliminaries on WFA and ZNN are presented in Section 1. Global results are highlighted in the same section. Two types of simulations and four types of bisimulations proposed in [1] in the general context of WFA over a semiring are generalized in the context of WFA over the field of real numbers in Section 2. ZNN designs for simulations and bisimulations of WFA over real numbers are presented in Section 3. Section 4 is aimed at testing the developed ZNN dynamical systems and making comparisons with the LP solver. Concluding remarks are given in Section 5.

2. Simulations and Bisimulations of WFA over Real Numbers

As a continuation of the research presented in [1], here we correspondingly introduce definitions of two types of simulations and two types of bisimulations in the context of WFA over ℝ. For this purpose, consider two WFA A = (m, σ^A, {M_x^A}_{x∈X}, τ^A) and B = (n, σ^B, {M_x^B}_{x∈X}, τ^B) over the field of real numbers ℝ and the alphabet X. A matrix U ∈ ℝ^{m×n} is called a forward simulation between A and B if it satisfies the following conditions:
\[
(\text{fs-1})\ \ \sigma^A \le \sigma^B U^{\top} \qquad (\text{fs-2})\ \ U^{\top} M_x^A \le M_x^B U^{\top}\ (x \in X) \qquad (\text{fs-3})\ \ U^{\top} \tau^A \le \tau^B, \tag{3}
\]
and it is termed a backward simulation between A and B if it fulfills
\[
(\text{bs-1})\ \ \tau^A \le U \tau^B \qquad (\text{bs-2})\ \ M_x^A U \le U M_x^B\ (x \in X) \qquad (\text{bs-3})\ \ \sigma^A U \le \sigma^B. \tag{4}
\]
Our intention is to apply the notion of transposed automaton from [35] to reverse the transitions' flow direction. If U is a forward simulation between A and B and U^⊤ is a forward simulation between B and A, i.e., if they fulfil
\[
(\text{fb-1})\ \ \sigma^A \le \sigma^B U^{\top},\ \ \sigma^B \le \sigma^A U \qquad (\text{fb-2})\ \ U^{\top} M_x^A \le M_x^B U^{\top},\ \ U M_x^B \le M_x^A U\ (x \in X) \qquad (\text{fb-3})\ \ U^{\top} \tau^A \le \tau^B,\ \ U \tau^B \le \tau^A, \tag{5}
\]
then U is termed a forward bisimulation between A and B, and if U is a backward simulation between A and B and U^⊤ is a backward simulation between B and A, i.e., if they satisfy
\[
(\text{bb-1})\ \ \tau^A \le U \tau^B,\ \ \tau^B \le U^{\top} \tau^A \qquad (\text{bb-2})\ \ M_x^A U \le U M_x^B,\ \ M_x^B U^{\top} \le U^{\top} M_x^A\ (x \in X) \qquad (\text{bb-3})\ \ \sigma^A U \le \sigma^B,\ \ \sigma^B U^{\top} \le \sigma^A, \tag{6}
\]
then U is known as a backward bisimulation between A and B.
It is important to note that, for any ω ∈ {fs, bs, fb, bb}, the conditions (ω-1), (ω-2), and (ω-3) can be treated as a system of matrix inequations with the unknown matrix U, and simulations or bisimulations of type ω are precisely the solutions to this system. This is extremely important because simulations between weighted automata over the field of real numbers are searched for by solving the corresponding systems of matrix inequations.
Another important note is that the main role of simulations is to witness containment between automata A and B, while the main role of bisimulations is to witness equivalence between A and B. However, forward and backward simulations and bisimulations are defined by matrix inequations. On that note, in order to prove that simulations achieve containment and bisimulations achieve equivalence, we need the inequations to be preserved by multiplying, on either side, by the transition matrices, as well as by the initial and terminal vectors. Multiplication by matrices and vectors containing negative entries can violate inequalities, and, therefore, in order for simulations and bisimulations defined by systems of inequations to make full sense, we consider these types of bisimulations and simulations only between positive automata.
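For illustration, the conditions (fs-1)–(fs-3) can be checked entrywise for a candidate matrix U; the positive automaton below is a placeholder of our own, and the check with B = A and U = I is the trivial feasible case.

```python
import numpy as np

# Entrywise check of the forward-simulation system (fs-1)-(fs-3) for a
# candidate U; the positive WFA used below is an illustrative placeholder.
def is_forward_simulation(U, sigmaA, Mxs_A, tauA, sigmaB, Mxs_B, tauB, tol=1e-12):
    ok = np.all(sigmaA <= sigmaB @ U.T + tol)                    # (fs-1)
    ok &= all(np.all(U.T @ Ma <= Mb @ U.T + tol)
              for Ma, Mb in zip(Mxs_A, Mxs_B))                   # (fs-2)
    ok &= np.all(U.T @ tauA <= tauB + tol)                       # (fs-3)
    return bool(ok)

# A trivial but valid instance: B is a copy of A and U is the identity.
sigmaA = np.array([[0.2, 0.8]]); tauA = np.array([[1.0], [0.5]])
MA = [np.array([[0.1, 0.4], [0.3, 0.2]])]
U = np.eye(2)
print(is_forward_simulation(U, sigmaA, MA, tauA, sigmaA, MA, tauA))  # -> True
```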
Theorem 1 is a modified version of [1] (Theorem 1).
Theorem 1.
The following statements are valid for positive WFA A and B over R :
(a) For ω ∈ {fs, bs}, if there is a simulation of type ω between A and B, then A ≤ B.
(b) For ω ∈ {fb, bb}, if there is a bisimulation of type ω between A and B, then A = B.
The modification is reflected in the following. A slightly different version of Theorem 1 was proved in [1] (Theorem 1) for WFA over a positive semiring. Theorem 1 could also be formulated for A and B as WFA over the positive semiring ℝ₊ of nonnegative real numbers, but such a formulation would mean that the simulations and bisimulations between A and B should also be over the semiring ℝ₊, that is, they should be positive matrices, which is not necessary. Namely, for positive WFA over an arbitrary ordered semiring (not necessarily positive), the proof of [1] (Theorem 1) also holds for simulations and bisimulations that contain negative entries, and Theorem 1 is formulated to allow for such simulations and bisimulations as well.
As this article is primarily concerned with solving systems of matrix inequations, nothing important will change if we consider the more general case and allow the transition matrices, as well as the initial and terminal vectors, to have negative entries, which is performed below. On the other hand, in some applications of simulations and bisimulations, for example in the dimensionality reduction for WFA, there is a need to find positive solutions of the considered systems of matrix inequations. For this reason, we consider systems with an additional condition requiring the positivity of the solution. It should be noted that the proposed procedures for solving the systems remain valid even in the case when this condition is omitted, and in the same way, in that case we obtain solutions that do not have to be positive.

3. ZNN Designs for Simulations and Bisimulations of WFA over Real Numbers

This section defines and analyzes four novel ZNN models for addressing the systems of inequations (3)–(6). For the remainder of this section, let A = (m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A) and B = (n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B) be two WFA over ℝ, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r.
Also, it is crucial to mention that the process of building a ZNN model usually involves two primary steps. The error matrix equation’s (EME) function, E ( t ) , must be initially declared. Secondly, the dynamic system represented by the continuous differential equation of the general form
\[
\dot{E}(t) = -\lambda E(t), \tag{7}
\]
needs to be employed. The dynamical evolution (7) relates the time derivative Ė(t) to E(t) in proportion to the positive real coefficient λ. The convergence rate of the dynamical system (7) is altered by manipulating the parameter λ ∈ ℝ₊. More precisely, with increasing values of λ, any ZNN model converges faster [13,36,37]. The primary goal of the dynamics (7) is to force E(t) to approach 0 as t → ∞. The continuous learning principle that emerges from the EME's construction in Equation (7) is used to manage this goal. The EME is, therefore, considered as a tracking indicator in the context of the ZNN model's learning.
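A scalar instance makes the role of λ in (7) visible: the solution of Ė(t) = −λE(t) is E(t) = E(0)e^{−λt}, so a larger λ yields faster decay of the error. A minimal numerical sketch (our own toy, with hypothetical values of λ):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar instance of the ZNN design (7): E'(t) = -lambda * E(t),
# whose exact solution is E(t) = E(0) * exp(-lambda * t).
def decay(lam, t_end=1.0, e0=1.0):
    sol = solve_ivp(lambda t, e: -lam * e, (0.0, t_end), [e0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

for lam in (1.0, 5.0, 20.0):
    print(lam, decay(lam), np.exp(-lam))  # numerical vs analytic value
```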
Special attention should be paid to a few notations that are used in the remainder of this work. The p × 1 vectors with all ones and all zeros as entries are indicated by 1_p and 0_p, whereas the p × r matrices with all ones and all zeros as entries are indicated by 1_{p,r} and 0_{p,r}. Furthermore, the p × p identity matrix is indicated by I_p, whereas vec(·), ⊗, ⊙, (·)², (·)^†, and ‖·‖_F stand for the vectorization process, the Kronecker product, the Hadamard (or elementwise) product, the Hadamard (elementwise) exponential, pseudoinversion, and the matrix Frobenius norm, respectively. Finally, rand(m, n) denotes an m × n matrix whose entries are random numbers.

3.1. The ZNN-fs Model

In line with (3), the following group of inequations must be satisfied:
\[
\begin{aligned}
& U^{\top}(t)\tau^A - \tau^B \le 0_n, \\
& \sigma^A - \sigma^B U^{\top}(t) \le 0_m^{\top}, \\
& U^{\top}(t) M_{x_i}^A - M_{x_i}^B U^{\top}(t) \le 0_{n,m}, \quad i = 1, \ldots, r, \\
& -U(t) \le 0_{m,n},
\end{aligned} \tag{8}
\]
with respect to an unknown matrix U(t) ∈ ℝ^{m×n}. Utilizing the vectorization in conjunction with the Kronecker product, the system (8) is reformulated into the vector inequation form
\[
\begin{aligned}
& \big((\tau^A)^{\top} \otimes I_n\big)\,\mathrm{vec}(U^{\top}(t)) - \tau^B \le 0_n, \\
& -\big(I_m \otimes \sigma^B\big)\,\mathrm{vec}(U^{\top}(t)) + (\sigma^A)^{\top} \le 0_m, \\
& \big((M_{x_i}^A)^{\top} \otimes I_n - I_m \otimes M_{x_i}^B\big)\,\mathrm{vec}(U^{\top}(t)) \le 0_{mn}, \quad i = 1, \ldots, r, \\
& -\mathrm{vec}(U(t)) \le 0_{mn}.
\end{aligned} \tag{9}
\]
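The passage from (8) to (9) rests on the standard identity vec(AXB) = (B^⊤ ⊗ A)vec(X) for column-major vectorization, which can be confirmed numerically on random matrices:

```python
import numpy as np

# vec(A X B) = (B^T kron A) vec(X), with column-major vectorization,
# is the identity used to pass from the matrix system (8) to (9).
rng = np.random.default_rng(0)
A, X, B = rng.random((2, 3)), rng.random((3, 4)), rng.random((4, 5))
vec = lambda M: M.reshape(-1, order="F")   # column-major vec, as in MATLAB
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```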
To calculate U ( t ) more efficiently, (9) must be simplified. Thus, the vectorization-related Lemma 1 derived from [38] is given.
Lemma 1.
The vectorization vec(W^⊤) ∈ ℝ^{mn} of the transpose W^⊤ of W ∈ ℝ^{m×n} satisfies
\[
\mathrm{vec}(W^{\top}) = P\,\mathrm{vec}(W), \tag{10}
\]
where P ∈ ℝ^{mn×mn} is a constant permutation matrix that depends only on the number of columns n and the number of rows m of W.
The algorithmic procedure for generating the permutation matrix P in (10) is presented in the following Algorithm 1.
Algorithm 1 The permutation matrix P formation.
Input: The number of rows m and columns n of a matrix W ∈ ℝ^{m×n}.
1: procedure Perm_Mat(m, n)
2:   Put g = eye(mn) and W = reshape(1:mn, n, m)
3:   return P = g(:, reshape(W^⊤, 1, mn))
4: end procedure
Output: P
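A Python transcription of Algorithm 1 (an assumption on our part, mirroring the MATLAB-style pseudocode with column-major reshapes) can be used to verify the defining property (10):

```python
import numpy as np

def perm_mat(m, n):
    """Permutation matrix P with vec(W.T) = P @ vec(W), vec column-major."""
    g = np.eye(m * n)
    # reshape(1:mn, n, m) in MATLAB is column-major; mirror it here:
    w = np.arange(m * n).reshape((n, m), order="F")
    idx = w.T.reshape(m * n, order="F")     # reshape(W', 1, mn), 0-based
    return g[:, idx]

# sanity check of (10): vec(W.T) == P vec(W)
vec = lambda M: M.reshape(-1, order="F")
W = np.arange(12, dtype=float).reshape(3, 4)
P = perm_mat(3, 4)
assert np.allclose(P @ vec(W), vec(W.T))
```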
Using the permutation matrix P for generating vec ( U T ( t ) ) , inequations (9) can be rewritten in the form
\[
\begin{aligned}
& \big((\tau^A)^{\top} \otimes I_n\big) P\,\mathrm{vec}(U(t)) - \tau^B \le 0_n, \\
& -\big(I_m \otimes \sigma^B\big) P\,\mathrm{vec}(U(t)) + (\sigma^A)^{\top} \le 0_m, \\
& \big((M_{x_i}^A)^{\top} \otimes I_n - I_m \otimes M_{x_i}^B\big) P\,\mathrm{vec}(U(t)) \le 0_{mn}, \quad i = 1, \ldots, r, \\
& -\mathrm{vec}(U(t)) \le 0_{mn},
\end{aligned} \tag{11}
\]
wherein the last constraint imposes non-negativity on the solution. The corresponding block matrix form of (11) is given by
\[
L_{fs}\,\mathrm{vec}(U(t)) - b_{fs} \le 0_z, \tag{12}
\]
such that z = ( r + 1 ) m n + m + n and
\[
L_{fs} = \begin{bmatrix} ((\tau^A)^{\top} \otimes I_n) P \\ -(I_m \otimes \sigma^B) P \\ W_{fs} \\ -I_{mn} \end{bmatrix} \in \mathbb{R}^{z \times mn}, \quad
b_{fs} = \begin{bmatrix} \tau^B \\ -(\sigma^A)^{\top} \\ 0_{(r+1)mn} \end{bmatrix} \in \mathbb{R}^{z}, \quad
W_{fs} = \begin{bmatrix} ((M_{x_1}^A)^{\top} \otimes I_n - I_m \otimes M_{x_1}^B) P \\ ((M_{x_2}^A)^{\top} \otimes I_n - I_m \otimes M_{x_2}^B) P \\ \vdots \\ ((M_{x_r}^A)^{\top} \otimes I_n - I_m \otimes M_{x_r}^B) P \end{bmatrix} \in \mathbb{R}^{rmn \times mn}. \tag{13}
\]
Then, considering the vector of slack variables K(t) = [k_1(t), …, k_z(t)]^⊤ ∈ ℝ^z, the inequation (12) can be converted into the corresponding equation
\[
L_{fs}\,\mathrm{vec}(U(t)) - b_{fs} + K^2(t) = 0_z, \tag{14}
\]
in which K^2(t) = [k_1^2(t), …, k_z^2(t)]^⊤ is a time-varying term with secured non-negative entries.
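The role of the squared slack vector can be illustrated on a toy system (random data of our own, not the paper's): any pair (x, K) satisfying the equality form automatically satisfies the original inequality, since K²(t) is entrywise non-negative.

```python
import numpy as np

# Toy illustration: any solution of L x - b + K**2 = 0 automatically
# satisfies L x - b <= 0, since the Hadamard square K**2 is non-negative.
rng = np.random.default_rng(0)
L = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
b = L @ x + rng.random(4)          # choose b so that L x - b <= 0 holds
K = np.sqrt(b - L @ x)             # slack vector making it an equality
residual = L @ x - b + K**2
print(np.max(np.abs(residual)))    # equality form holds up to roundoff
```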
Thereafter, the ZNN approach considers the following EME, which is based on (12), to simultaneously satisfy all the inequations in (8):
\[
E_{fs}(t) = L_{fs}\,\mathrm{vec}(U(t)) - b_{fs} + K^2(t), \tag{15}
\]
where U ( t ) and K ( t ) are the unknown matrices that need to be found. The ZNN design (7) exploits the first time derivative of (15)
\[
\dot{E}_{fs}(t) = L_{fs}\,\mathrm{vec}(\dot{U}(t)) + 2\big(I_z \odot K(t)\big)\dot{K}(t). \tag{16}
\]
Combining Equations (15) and (16) with the generic ZNN design (7), we obtain
\[
L_{fs}\,\mathrm{vec}(\dot{U}(t)) + 2\big(I_z \odot K(t)\big)\dot{K}(t) = -\lambda E_{fs}(t). \tag{17}
\]
As a result, setting
\[
H_{fs} = \begin{bmatrix} L_{fs} & 2(I_z \odot K(t)) \end{bmatrix} \in \mathbb{R}^{z \times (mn+z)}, \quad
\dot{x}(t) = \begin{bmatrix} \mathrm{vec}(\dot{U}(t)) \\ \dot{K}(t) \end{bmatrix} \in \mathbb{R}^{mn+z}, \quad
x(t) = \begin{bmatrix} \mathrm{vec}(U(t)) \\ K(t) \end{bmatrix} \in \mathbb{R}^{mn+z},
\]
the next system of linear equations with respect to x ˙ is obtained:
\[
H_{fs}\,\dot{x} = -\lambda E_{fs}(t). \tag{18}
\]
The ZNN dynamics would be directly applicable in solving (18) only if the mass matrix H_fs were square and invertible. To avoid this restriction, it is appropriate to use the pseudoinverse (best approximate) solution
\[
\dot{x} = H_{fs}^{\dagger}\big(-\lambda E_{fs}(t)\big). \tag{19}
\]
An appropriate MATLAB R2022a ode solver can be used to handle the ZNN dynamics (19), additionally referred to as the ZNN-fs model. The ZNN-fs model's convergence and stability investigation is shown in Theorem 2.
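As a hedged sketch (not the authors' MATLAB implementation), the ZNN-fs evolution (19) can be integrated with a standard ODE solver; the two tiny positive WFA below are randomly generated placeholders, and convergence of the residual to zero is only guaranteed when the system (12) is feasible.

```python
import numpy as np
from scipy.integrate import solve_ivp

def vec(M):                                   # column-major vectorization
    return M.reshape(-1, order="F")

def perm_mat(m, n):                           # P with vec(W.T) = P vec(W)
    w = np.arange(m * n).reshape((n, m), order="F")
    return np.eye(m * n)[:, w.T.reshape(m * n, order="F")]

rng = np.random.default_rng(1)
m, n, lam = 2, 2, 10.0                        # illustrative sizes and gain
sA, tA, MAs = rng.random((1, m)), rng.random((m, 1)), [rng.random((m, m))]
sB, tB, MBs = rng.random((1, n)), rng.random((n, 1)), [rng.random((n, n))]

P = perm_mat(m, n)
W = np.vstack([(np.kron(Ma.T, np.eye(n)) - np.kron(np.eye(m), Mb)) @ P
               for Ma, Mb in zip(MAs, MBs)])
L = np.vstack([np.kron(tA.T, np.eye(n)) @ P,  # block matrix L_fs of (13)
               -np.kron(np.eye(m), sB) @ P,
               W,
               -np.eye(m * n)])
b = np.concatenate([tB.ravel(), -sA.ravel(), np.zeros(W.shape[0] + m * n)])
z = L.shape[0]

def znn_fs(t, x):
    u, K = x[:m * n], x[m * n:]
    E = L @ u - b + K**2                      # EME (15)
    H = np.hstack([L, 2 * np.eye(z) * K])     # mass matrix of (18)
    return np.linalg.pinv(H) @ (-lam * E)     # pseudoinverse dynamics (19)

x0 = np.concatenate([rng.random(m * n), np.ones(z)])
sol = solve_ivp(znn_fs, (0.0, 2.0), x0, rtol=1e-8, atol=1e-8)
u_end, K_end = sol.y[:m * n, -1], sol.y[m * n:, -1]
U = u_end.reshape((m, n), order="F")          # candidate simulation matrix
print(np.linalg.norm(L @ u_end - b + K_end**2))  # residual of (14)
```

The residual norm is non-increasing along the flow, since d‖E‖²/dt = −2λ E^⊤ H H^† E ≤ 0; a residual near zero indicates that U(t) approximates a forward simulation.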
Theorem 2.
Let A = (m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A) and B = (n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B) be the WFA over ℝ and the alphabet X = {x_1, …, x_r}, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r. The dynamics (17), in line with the ZNN method (7), lead to the theoretical solution (TSOL), determined by x_S(t) = [vec(U_S(t))^⊤, K_S^⊤(t)]^⊤, which is stable in the sense of Lyapunov.
Proof. 
Let
\[
\begin{aligned}
& U_S^{\top}(t)\tau^A - \tau^B \le 0_n, \\
& \sigma^A - \sigma^B U_S^{\top}(t) \le 0_m^{\top}, \\
& U_S^{\top}(t) M_{x_i}^A - M_{x_i}^B U_S^{\top}(t) \le 0_{n,m}, \quad i = 1, \ldots, r, \\
& -U_S(t) \le 0_{m,n}.
\end{aligned} \tag{20}
\]
Using vectorization, Kronecker product, and the permutation matrix P for constructing vec ( U T ( t ) ) , defined by Algorithm 1, the system (20) is reformulated as
\[
\begin{aligned}
& \big((\tau^A)^{\top} \otimes I_n\big) P\,\mathrm{vec}(U_S(t)) - \tau^B \le 0_n, \\
& -\big(I_m \otimes \sigma^B\big) P\,\mathrm{vec}(U_S(t)) + (\sigma^A)^{\top} \le 0_m, \\
& \big((M_{x_i}^A)^{\top} \otimes I_n - I_m \otimes M_{x_i}^B\big) P\,\mathrm{vec}(U_S(t)) \le 0_{mn}, \quad i = 1, \ldots, r, \\
& -\mathrm{vec}(U_S(t)) \le 0_{mn}.
\end{aligned} \tag{21}
\]
The equivalent form of (21) is
\[
L_{fs}\,\mathrm{vec}(U_S(t)) - b_{fs} \le 0_z, \tag{22}
\]
where L_fs and b_fs are declared in Equation (13). Then, considering the slack variable K_S(t) ∈ ℝ^z, the inequation (22) can be converted into the equation
\[
L_{fs}\,\mathrm{vec}(U_S(t)) - b_{fs} + K_S^2(t) = 0_z, \tag{23}
\]
in which K_S^2(t) is always a non-negative time-varying term.
The substitution
\[
x_O(t) := -x(t) + x_S(t) = \begin{bmatrix} -\mathrm{vec}(U(t)) + \mathrm{vec}(U_S(t)) \\ -K(t) + K_S(t) \end{bmatrix} =: \begin{bmatrix} \mathrm{vec}(U_O(t)) \\ K_O(t) \end{bmatrix}
\]
gives
\[
x(t) = x_S(t) - x_O(t) = \begin{bmatrix} \mathrm{vec}(U_S(t)) - \mathrm{vec}(U_O(t)) \\ K_S(t) - K_O(t) \end{bmatrix}.
\]
The first derivative of x(t) is equal to
\[
\dot{x}(t) = \dot{x}_S(t) - \dot{x}_O(t) = \begin{bmatrix} \mathrm{vec}(\dot{U}_S(t)) - \mathrm{vec}(\dot{U}_O(t)) \\ \dot{K}_S(t) - \dot{K}_O(t) \end{bmatrix}.
\]
As a result, after substituting x(t) = x_S(t) − x_O(t) into (14), the following holds:
\[
E_S(t) = L_{fs}\big(\mathrm{vec}(U_S(t)) - \mathrm{vec}(U_O(t))\big) - b_{fs} + \big(K_S(t) - K_O(t)\big)^2,
\]
or
\[
E_S(t) = \begin{bmatrix} L_{fs} & I_z \odot (K_S(t) - K_O(t)) \end{bmatrix}\big(x_S(t) - x_O(t)\big) - b_{fs}, \tag{24}
\]
where L f s and b f s are declared in (13). Then, the following results follow from (7):
\[
\dot{E}_S(t) = L_{fs}\big(\mathrm{vec}(\dot{U}_S(t)) - \mathrm{vec}(\dot{U}_O(t))\big) + 2\big(I_z \odot (K_S(t) - K_O(t))\big)\big(\dot{K}_S(t) - \dot{K}_O(t)\big) = -\lambda E_S(t),
\]
or equivalently
\[
\dot{E}_S(t) = \begin{bmatrix} L_{fs} & 2\big(I_z \odot (K_S(t) - K_O(t))\big) \end{bmatrix}\big(\dot{x}_S(t) - \dot{x}_O(t)\big) = -\lambda E_S(t).
\]
Next, for confirming the convergence, we choose the plausible Lyapunov function
\[
Z(t) = \tfrac{1}{2}\,\|E_S(t)\|_F^2 = \tfrac{1}{2}\,\mathrm{tr}\big(E_S(t)E_S(t)^{\top}\big).
\]
The following is confirmed for Z ( t ) :
\[
\dot{Z}(t) = \frac{2\,\mathrm{tr}\big(E_S(t)^{\top}\dot{E}_S(t)\big)}{2} = \mathrm{tr}\big(E_S(t)^{\top}\dot{E}_S(t)\big) = -\lambda\,\mathrm{tr}\big(E_S(t)^{\top}E_S(t)\big).
\]
Because of (24), the following is valid:
\[
\dot{Z}(t)\ \begin{cases} < 0, & E_S(t) \neq 0, \\ = 0, & E_S(t) = 0, \end{cases}
\;\Leftrightarrow\;
\dot{Z}(t)\ \begin{cases} < 0, & \begin{bmatrix} L_{fs} & I_z \odot (K_S(t) - K_O(t)) \end{bmatrix}(x_S(t) - x_O(t)) - b_{fs} \neq 0, \\ = 0, & \begin{bmatrix} L_{fs} & I_z \odot (K_S(t) - K_O(t)) \end{bmatrix}(x_S(t) - x_O(t)) - b_{fs} = 0, \end{cases}
\]
\[
\;\Leftrightarrow\;
\dot{Z}(t)\ \begin{cases} < 0, & \begin{bmatrix} \mathrm{vec}(U_O(t)) \\ K_O(t) \end{bmatrix} \neq 0, \\[4pt] = 0, & \begin{bmatrix} \mathrm{vec}(U_O(t)) \\ K_O(t) \end{bmatrix} = 0, \end{cases}
\;\Leftrightarrow\;
\dot{Z}(t)\ \begin{cases} < 0, & x_O(t) \neq 0, \\ = 0, & x_O(t) = 0. \end{cases}
\]
With x_O(t) being the equilibrium point of the system (23), we have
\[
\dot{Z}(t) < 0 \quad \text{for any} \quad x_O(t) \neq 0.
\]
It appears that the equilibrium state
\[
x_O(t) = -x(t) + x_S(t) = \begin{bmatrix} -\mathrm{vec}(U(t)) + \mathrm{vec}(U_S(t)) \\ -K(t) + K_S(t) \end{bmatrix} = 0
\]
is stable in accordance with Lyapunov theory. Afterwards, when t → ∞, the following holds:
\[
x(t) = \begin{bmatrix} \mathrm{vec}(U(t)) \\ K(t) \end{bmatrix} \to x_S(t) = \begin{bmatrix} \mathrm{vec}(U_S(t)) \\ K_S(t) \end{bmatrix},
\]
which finalizes the proof.    □
Theorem 3.
Let A = (m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A) and B = (n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B) be the WFA over ℝ and X = {x_1, …, x_r}, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r. Beginning from any initial point x(0), the ZNN-fs model (19) converges exponentially to x*(t), which refers to the TSOL of (3).
Proof. 
Firstly, the system (8) is considered to find the solution x(t) = [vec(U(t))^⊤, K^⊤(t)]^⊤ that is affiliated with the time-varying forward simulation between A and B of (3). Secondly, the system (8) is reformulated into the system (9) utilizing vectorization and the Kronecker product and, then, into the system (12) utilizing the operational permutation matrix P for vec(U^⊤(t)). Thirdly, considering the slack variable K(t), the inequality constraint of the system (12) is converted into an equality constraint in the system (14). Fourthly, the EME (15) is constructed, in keeping with the ZNN technique and the system (14), to generate the solution x(t) that is affiliated with the system (3). Fifthly, the model (17) is yielded in accordance with the ZNN technique (7) for zeroing (15). According to Theorem 2, the EME (15) converges to zero as t → ∞. Consequently, the solution of (19) converges to x*(t) = [vec(U*(t))^⊤, (K*(t))^⊤]^⊤ as t → ∞. Furthermore, it is obvious that (19) is (17) in a different form because of the derivation process. This accomplishes the proof.    □

3.2. The ZNN-bs Model

In line with (4), the following group of inequations must be satisfied:
τ^A − U(t)τ^B ≤ 0_m,  σ^A U(t) − σ^B ≤ 0_n^T,  M_{x_i}^A U(t) − U(t)M_{x_i}^B ≤ 0_{m,n}, i = 1, …, r,  −U(t) ≤ 0_{m,n},
where U(t) ∈ ℝ^{m×n} denotes the unknown matrix to be found. Utilizing vectorization and the Kronecker product, the system of inequations (25) is rewritten in the equivalent form
−((τ^B)^T ⊗ I_m) vec(U(t)) + τ^A ≤ 0_m,  (I_n ⊗ σ^A) vec(U(t)) − (σ^B)^T ≤ 0_n,  (I_n ⊗ M_{x_i}^A − (M_{x_i}^B)^T ⊗ I_m) vec(U(t)) ≤ 0_{mn}, i = 1, …, r,  −vec(U(t)) ≤ 0_{mn},
and its corresponding matrix form is
L_bs vec(U(t)) − b_bs ≤ 0_z,
where
L_bs = [ −(τ^B)^T ⊗ I_m ;  I_n ⊗ σ^A ;  W_bs ;  −I_{mn} ] ∈ ℝ^{z×mn},  b_bs = [ −τ^A ;  (σ^B)^T ;  0_{(r+1)mn} ] ∈ ℝ^z,  W_bs = [ I_n ⊗ M_{x_1}^A − (M_{x_1}^B)^T ⊗ I_m ;  I_n ⊗ M_{x_2}^A − (M_{x_2}^B)^T ⊗ I_m ;  ⋯ ;  I_n ⊗ M_{x_r}^A − (M_{x_r}^B)^T ⊗ I_m ] ∈ ℝ^{rmn×mn},
where the semicolons denote row-wise (vertical) stacking of the blocks.
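To make the assembly of (26) concrete, the following NumPy sketch (our own illustrative helper; the paper itself works in MATLAB) builds L_bs, b_bs, and W_bs from the automata data, using the column-major vec(·) for which vec(AXB) = (B^T ⊗ A)vec(X):

```python
import numpy as np

def build_bs_system(Ms_A, Ms_B, sigma_A, sigma_B, tau_A, tau_B):
    """Stack the bs-constraints L_bs vec(U) - b_bs <= 0 from WFA data."""
    m, n, r = tau_A.shape[0], tau_B.shape[0], len(Ms_A)
    Im, In = np.eye(m), np.eye(n)
    # W_bs stacks I_n (x) M_xi^A - (M_xi^B)^T (x) I_m for each letter x_i
    W_bs = np.vstack([np.kron(In, MA) - np.kron(MB.T, Im)
                      for MA, MB in zip(Ms_A, Ms_B)])
    L_bs = np.vstack([
        -np.kron(tau_B.T, Im),   # row block of tau^A - U(t) tau^B <= 0_m
        np.kron(In, sigma_A),    # row block of sigma^A U(t) - sigma^B <= 0
        W_bs,                    # row blocks of M^A U - U M^B <= 0_{m,n}
        -np.eye(m * n),          # row block of -vec(U) <= 0_{mn}
    ])
    b_bs = np.concatenate([-tau_A.ravel(), sigma_B.ravel(),
                           np.zeros((r + 1) * m * n)])
    return L_bs, b_bs
```

Then `L_bs @ U.flatten('F') - b_bs` stacks exactly the residuals of (25); all its entries are non-positive precisely when U(t) satisfies (25).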
Then, considering the slack variable K(t) ∈ ℝ^z, the inequation (26) can be converted into the equation
L_bs vec(U(t)) − b_bs + K^2(t) = 0_z,
where K^2(t) = K(t) ⊙ K(t) denotes the elementwise square of K(t), which is always a non-negative time-varying term.
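The squared-slack conversion can be checked numerically; in this toy sketch (ours, not from the paper) a feasible point of L x − b ≤ 0 yields a slack vector K with L x − b + K^2 = 0, where K^2 is the elementwise square:

```python
import numpy as np

# Toy inequality system L x - b <= 0 and its squared-slack equality form.
L = np.array([[1.0, 2.0], [-1.0, 0.5]])
b = np.array([3.0, 1.0])
x = np.array([0.5, 0.5])      # a point satisfying L x - b <= 0
residual = L @ x - b          # each entry is <= 0 at a feasible x
K = np.sqrt(-residual)        # slack vector absorbing the non-positive gap
assert np.all(residual <= 0)
assert np.allclose(residual + K**2, 0.0)   # L x - b + K^2 = 0, as in (27)
```

Conversely, for any real K the equality forces L x − b = −K^2 ≤ 0, since K^2 ≥ 0; this is exactly why zeroing the EME below enforces all inequations of (25) at once.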
Thereafter, the ZNN approach considers the following EME, which is based on (27), for  simultaneously satisfying all the inequations in (25):
E_bs(t) = L_bs vec(U(t)) − b_bs + K^2(t),
where U ( t ) and K ( t ) are the unknown matrices to be found. The first time derivative of (28) is
Ė_bs(t) = L_bs vec(U̇(t)) + 2(I_z ⊙ K(t)) K̇(t),
where I_z ⊙ K(t) denotes the diagonal matrix diag(K(t)) carrying the entries of K(t) on its main diagonal.
Then, combining Equations (28) and (29) with the ZNN design (7), we obtain
L_bs vec(U̇(t)) + 2(I_z ⊙ K(t)) K̇(t) = −λ E_bs(t).
As a result, setting
H_bs = [ L_bs,  2(I_z ⊙ K(t)) ] ∈ ℝ^{z×(mn+z)},  ẋ(t) = [ vec(U̇(t)) ; K̇(t) ] ∈ ℝ^{mn+z},  x(t) = [ vec(U(t)) ; K(t) ] ∈ ℝ^{mn+z},
the next model is obtained:
H_bs ẋ = −λ E_bs(t).
Since the ZNN dynamics for solving (31) would require the invertibility of the mass matrix H_bs, which is rectangular, it is practical to use the best approximate solution to (31), which leads to
ẋ = H_bs^† (−λ E_bs(t)).
An appropriate MATLAB ODE solver can be used to handle the ZNN model of (32), additionally referred to as the ZNN-bs flow. The ZNN-bs model's convergence and stability analysis is presented in Theorem 4.
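As a hedged illustration of how (32) can be integrated outside MATLAB, the following Python sketch (function names and the toy data are ours) advances x = [vec(U); K] by ẋ = H_bs^†(−λE_bs) using SciPy; the same driver serves the later flows with (L, b) swapped:

```python
import numpy as np
from scipy.integrate import solve_ivp

def znn_bs_flow(L_bs, b_bs, x0, lam=10.0, t_end=10.0):
    """Integrate xdot = pinv(H_bs) (-lam * E_bs) with x = [vec(U); K]."""
    z, mn = L_bs.shape

    def rhs(t, x):
        vecU, K = x[:mn], x[mn:]
        E = L_bs @ vecU - b_bs + K**2              # the EME of (28)
        H = np.hstack([L_bs, 2.0 * np.diag(K)])    # H_bs = [L_bs, 2 diag(K)]
        return np.linalg.pinv(H) @ (-lam * E)

    sol = solve_ivp(rhs, (0.0, t_end), x0, rtol=1e-10, atol=1e-10)
    return sol.y[:, -1]

# Toy instance: u - 2 <= 0 and -u - 2 <= 0, started from a nonzero point.
L = np.array([[1.0], [-1.0]])
b = np.array([2.0, 2.0])
x_end = znn_bs_flow(L, b, np.ones(3), lam=10.0, t_end=2.0)
E_end = L @ x_end[:1] - b + x_end[1:]**2
```

After t = 2 with λ = 10 the EME has decayed by roughly e^{−20}, so the inequality L vec(U) − b ≤ 0 holds up to integration tolerance. A nonzero start is used here because, in this toy case, H_bs is rank-deficient at the zero point.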
Theorem 4.
Let A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ be WFA over ℝ, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r. The dynamics (30), in line with the ZNN method of (7), lead to the TSOL, denoted by x_S(t) = [vec(U_S(t))^T, K_S^T(t)]^T, which is stable in the sense of Lyapunov.
Proof. 
The proof is omitted since it is similar to the proof of Theorem 2.    □
Theorem 5.
Let A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ be WFA over ℝ, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r. Beginning from any initial point x(0), the ZNN-bs design (32) converges exponentially to x*(t), which refers to the TSOL of (4).
Proof. 
The proof is omitted since it is similar to the proof of Theorem 3.    □

3.3. The ZNN-fb Model

In line with (5), the following group of inequations must be satisfied:
U^T(t)τ^A − τ^B ≤ 0_n,  U(t)τ^B − τ^A ≤ 0_m,  σ^A − σ^B U^T(t) ≤ 0_m^T,  σ^B − σ^A U(t) ≤ 0_n^T,  U^T(t)M_{x_i}^A − M_{x_i}^B U^T(t) ≤ 0_{n,m}, i = 1, …, r,  U(t)M_{x_i}^B − M_{x_i}^A U(t) ≤ 0_{m,n}, i = 1, …, r,  −U(t) ≤ 0_{m,n},
where U(t) ∈ ℝ^{m×n} denotes the unknown matrix to be found. Utilizing vectorization in combination with the Kronecker product, the system of (33) is reformulated as
((τ^A)^T ⊗ I_n) vec(U^T(t)) − τ^B ≤ 0_n,  ((τ^B)^T ⊗ I_m) vec(U(t)) − τ^A ≤ 0_m,  −(I_m ⊗ σ^B) vec(U^T(t)) + (σ^A)^T ≤ 0_m,  −(I_n ⊗ σ^A) vec(U(t)) + (σ^B)^T ≤ 0_n,  ((M_{x_i}^A)^T ⊗ I_n − I_m ⊗ M_{x_i}^B) vec(U^T(t)) ≤ 0_{mn}, i = 1, …, r,  ((M_{x_i}^B)^T ⊗ I_m − I_n ⊗ M_{x_i}^A) vec(U(t)) ≤ 0_{mn}, i = 1, …, r,  −vec(U(t)) ≤ 0_{mn}.
Using the permutation matrix P for vec ( U T ( t ) ) , (34) is rewritten as
((τ^A)^T ⊗ I_n) P vec(U(t)) − τ^B ≤ 0_n,  ((τ^B)^T ⊗ I_m) vec(U(t)) − τ^A ≤ 0_m,  −(I_m ⊗ σ^B) P vec(U(t)) + (σ^A)^T ≤ 0_m,  −(I_n ⊗ σ^A) vec(U(t)) + (σ^B)^T ≤ 0_n,  ((M_{x_i}^A)^T ⊗ I_n − I_m ⊗ M_{x_i}^B) P vec(U(t)) ≤ 0_{mn}, i = 1, …, r,  ((M_{x_i}^B)^T ⊗ I_m − I_n ⊗ M_{x_i}^A) vec(U(t)) ≤ 0_{mn}, i = 1, …, r,  −vec(U(t)) ≤ 0_{mn},
and its corresponding matrix form is
L_fb vec(U(t)) − b_fb ≤ 0_y,
where y = ( 2 r + 1 ) m n + 2 m + 2 n and
L_fb = [ ((τ^A)^T ⊗ I_n)P ;  (τ^B)^T ⊗ I_m ;  −(I_m ⊗ σ^B)P ;  −(I_n ⊗ σ^A) ;  W_fb ;  −I_{mn} ] ∈ ℝ^{y×mn},  b_fb = [ τ^B ;  τ^A ;  −(σ^A)^T ;  −(σ^B)^T ;  0_{(2r+1)mn} ] ∈ ℝ^y,  W_fb = [ ((M_{x_1}^A)^T ⊗ I_n − I_m ⊗ M_{x_1}^B)P ;  (M_{x_1}^B)^T ⊗ I_m − I_n ⊗ M_{x_1}^A ;  ((M_{x_2}^A)^T ⊗ I_n − I_m ⊗ M_{x_2}^B)P ;  (M_{x_2}^B)^T ⊗ I_m − I_n ⊗ M_{x_2}^A ;  ⋯ ;  ((M_{x_r}^A)^T ⊗ I_n − I_m ⊗ M_{x_r}^B)P ;  (M_{x_r}^B)^T ⊗ I_m − I_n ⊗ M_{x_r}^A ] ∈ ℝ^{2rmn×mn},
where the semicolons denote row-wise (vertical) stacking of the blocks.
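The operational permutation matrix P used above is the standard commutation matrix; a minimal NumPy construction of it (our helper, with column-major vec) and its defining property P vec(U) = vec(U^T):

```python
import numpy as np

def commutation_matrix(m, n):
    """P of size (mn x mn) with P @ vec(U) = vec(U.T) for U in R^{m x n},
    where vec stacks columns (Fortran order)."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # U[i, j] sits at position j*m + i in vec(U)
            # and at position i*n + j in vec(U.T)
            P[i * n + j, j * m + i] = 1.0
    return P

U = np.arange(12.0).reshape(3, 4)
P = commutation_matrix(3, 4)
assert np.allclose(P @ U.flatten("F"), U.T.flatten("F"))
```

Since P is a permutation matrix, P^{-1} = P^T, so constraints written on vec(U^T) become constraints on vec(U) simply by right-multiplying the coefficient blocks by P, exactly as in the blocks of L_fb.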
Then, considering the slack variable vector K(t) ∈ ℝ^y, the inequation (35) is converted into the equation
L_fb vec(U(t)) − b_fb + K^2(t) = 0_y.
Thereafter, the ZNN approach considers the following EME, which is based on (35), for simultaneously satisfying all the inequations in (33):
E_fb(t) = L_fb vec(U(t)) − b_fb + K^2(t),
where U ( t ) and K ( t ) are the unknown matrices to be found. The first time derivative of (36) is equal to
Ė_fb(t) = L_fb vec(U̇(t)) + 2(I_y ⊙ K(t)) K̇(t).
Then, combining Equations (36) and (37) with the ZNN design (7), we obtain the following:
L_fb vec(U̇(t)) + 2(I_y ⊙ K(t)) K̇(t) = −λ E_fb(t).
As a result, setting
H_fb = [ L_fb,  2(I_y ⊙ K(t)) ] ∈ ℝ^{y×(mn+y)},  ẋ(t) = [ vec(U̇(t)) ; K̇(t) ] ∈ ℝ^{mn+y},  x(t) = [ vec(U(t)) ; K(t) ] ∈ ℝ^{mn+y},
(38) is transformed into the model
H_fb ẋ = −λ E_fb(t)
whose pseudoinverse solution is equal to
ẋ = H_fb^† (−λ E_fb(t)).
An appropriate MATLAB ODE solver can be used to handle the ZNN model (39), additionally referred to as the ZNN-fb model. The ZNN-fb model's convergence and stability analysis is presented in the next theorem.
Theorem 6.
Let A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ be WFA over ℝ, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r. The dynamics of (38), in line with the ZNN method of (7), lead to the TSOL, denoted by x_S(t) = [vec(U_S(t))^T, K_S^T(t)]^T, which is stable in the sense of Lyapunov.
Proof. 
The proof is omitted since it is similar to the proof of Theorem 2.    □
Theorem 7.
Let A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ be WFA over ℝ, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r. Beginning from any initial point x(0), the ZNN-fb model of (39) converges exponentially to x*(t), which refers to the TSOL of (5).
Proof. 
The proof is similar to the proof of Theorem 3.    □

3.4. The ZNN-bb Model

In line with (6), the following group of inequations must be satisfied:
τ^A − U(t)τ^B ≤ 0_m,  τ^B − U^T(t)τ^A ≤ 0_n,  σ^A U(t) − σ^B ≤ 0_n^T,  σ^B U^T(t) − σ^A ≤ 0_m^T,  M_{x_i}^A U(t) − U(t)M_{x_i}^B ≤ 0_{m,n}, i = 1, …, r,  M_{x_i}^B U^T(t) − U^T(t)M_{x_i}^A ≤ 0_{n,m}, i = 1, …, r,  −U(t) ≤ 0_{m,n},
where U(t) ∈ ℝ^{m×n} stands for the unknown matrix. The system of (40) is reformulated as follows:
−((τ^B)^T ⊗ I_m) vec(U(t)) + τ^A ≤ 0_m,  −((τ^A)^T ⊗ I_n) vec(U^T(t)) + τ^B ≤ 0_n,  (I_n ⊗ σ^A) vec(U(t)) − (σ^B)^T ≤ 0_n,  (I_m ⊗ σ^B) vec(U^T(t)) − (σ^A)^T ≤ 0_m,  (I_n ⊗ M_{x_i}^A − (M_{x_i}^B)^T ⊗ I_m) vec(U(t)) ≤ 0_{mn}, i = 1, …, r,  (I_m ⊗ M_{x_i}^B − (M_{x_i}^A)^T ⊗ I_n) vec(U^T(t)) ≤ 0_{mn}, i = 1, …, r,  −vec(U(t)) ≤ 0_{mn}.
Using the permutation matrix P for generating vec ( U T ( t ) ) , (41) is rewritten as
−((τ^B)^T ⊗ I_m) vec(U(t)) + τ^A ≤ 0_m,  −((τ^A)^T ⊗ I_n) P vec(U(t)) + τ^B ≤ 0_n,  (I_n ⊗ σ^A) vec(U(t)) − (σ^B)^T ≤ 0_n,  (I_m ⊗ σ^B) P vec(U(t)) − (σ^A)^T ≤ 0_m,  (I_n ⊗ M_{x_i}^A − (M_{x_i}^B)^T ⊗ I_m) vec(U(t)) ≤ 0_{mn}, i = 1, …, r,  (I_m ⊗ M_{x_i}^B − (M_{x_i}^A)^T ⊗ I_n) P vec(U(t)) ≤ 0_{mn}, i = 1, …, r,  −vec(U(t)) ≤ 0_{mn},
and its corresponding matrix form is the following:
L_bb vec(U(t)) − b_bb ≤ 0_y,
where
L_bb = [ −(τ^B)^T ⊗ I_m ;  −((τ^A)^T ⊗ I_n)P ;  I_n ⊗ σ^A ;  (I_m ⊗ σ^B)P ;  W_bb ;  −I_{mn} ] ∈ ℝ^{y×mn},  b_bb = [ −τ^A ;  −τ^B ;  (σ^B)^T ;  (σ^A)^T ;  0_{(2r+1)mn} ] ∈ ℝ^y,  W_bb = [ I_n ⊗ M_{x_1}^A − (M_{x_1}^B)^T ⊗ I_m ;  (I_m ⊗ M_{x_1}^B − (M_{x_1}^A)^T ⊗ I_n)P ;  I_n ⊗ M_{x_2}^A − (M_{x_2}^B)^T ⊗ I_m ;  (I_m ⊗ M_{x_2}^B − (M_{x_2}^A)^T ⊗ I_n)P ;  ⋯ ;  I_n ⊗ M_{x_r}^A − (M_{x_r}^B)^T ⊗ I_m ;  (I_m ⊗ M_{x_r}^B − (M_{x_r}^A)^T ⊗ I_n)P ] ∈ ℝ^{2rmn×mn},
where the semicolons denote row-wise (vertical) stacking of the blocks.
Then, considering the slack variable K(t) ∈ ℝ^y, the inequation (42) can be converted into the equation
L_bb vec(U(t)) − b_bb + K^2(t) = 0_y,
in which K^2(t) = K(t) ⊙ K(t) is the elementwise square of K(t) and thus always a non-negative time-varying term.
Thereafter, the ZNN approach considers the following EME, which is based on (42), for simultaneously satisfying all the inequations in (40):
E_bb(t) = L_bb vec(U(t)) − b_bb + K^2(t),
where U ( t ) and K ( t ) are the unknown matrices to be found. The first time derivative of (43) is given as
Ė_bb(t) = L_bb vec(U̇(t)) + 2(I_y ⊙ K(t)) K̇(t).
Then, combining Equations (43) and (44) with the ZNN design of (7), we can obtain
L_bb vec(U̇(t)) + 2(I_y ⊙ K(t)) K̇(t) = −λ E_bb(t).
As a result, setting
H_bb = [ L_bb,  2(I_y ⊙ K(t)) ] ∈ ℝ^{y×(mn+y)},  ẋ(t) = [ vec(U̇(t)) ; K̇(t) ] ∈ ℝ^{mn+y},  x(t) = [ vec(U(t)) ; K(t) ] ∈ ℝ^{mn+y},
the next model is obtained:
H_bb ẋ = −λ E_bb(t),
whose pseudoinverse solution is
ẋ = H_bb^† (−λ E_bb(t)).
An appropriate MATLAB ODE solver can be used to handle the ZNN model of (46), additionally referred to as the ZNN-bb model. The ZNN-bb model's convergence and stability analysis is presented in the next theorem.
Theorem 8.
Let A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ be WFA over ℝ, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r. The dynamics of (45), in line with the ZNN method of (7), lead to the TSOL, denoted by x_S(t) = [vec(U_S(t))^T, K_S^T(t)]^T, which is stable in the sense of Lyapunov.
Proof. 
The proof is omitted since it is similar to the proof of Theorem 2.    □
Theorem 9.
Let A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ be WFA over ℝ, where M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1} with i = 1, …, r. Beginning from any initial point x(0), the ZNN-bb model of (46) converges exponentially to x*(t), which refers to the TSOL of (6).
Proof. 
The proof is similar to the proof of Theorem 3.    □

4. ZNN Experiments

The performances of the ZNN-fs model of (19), the ZNN-bs model of (32), the ZNN-fb model of (39), and the ZNN-bb model of (46) are examined in the six numerical examples presented in this section. Keep in mind that, during the computation in all experiments, the MATLAB ode45 solver was applied with a time span of [0, 10] under a relative tolerance of 10^{−12} and an absolute tolerance of 10^{−8}. Additionally, we contrast the output of the ZNN models with the results of the MATLAB function linprog (with the default settings). Following the approach proposed in [7], the zero initial point is used.
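As a hedged sketch of that comparison (the data below are a toy stand-in, not the paper's systems), the MATLAB linprog baseline can be mimicked with SciPy's linprog as a pure feasibility LP, minimizing the zero objective subject to the stacked inequations:

```python
import numpy as np
from scipy.optimize import linprog

# Feasibility LP: min 0 subject to L x <= b, with x free in sign.
L = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
res = linprog(c=np.zeros(2), A_ub=L, b_ub=b, bounds=[(None, None)] * 2)
assert res.success
# Count constraints left unsatisfied, the diagnostic plotted in the figures.
unsatisfied = int(np.sum(L @ res.x - b > 1e-6))
assert unsatisfied == 0
```

linprog returns a single static feasible point (if one exists), whereas the ZNN flows track a time-varying solution; the unsatisfied-constraint count is the same diagnostic reported for both methods in the figures of this section.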
Example 1.
Let us choose m = 2, n = 3, r = 2, and X = {x_1, x_2}, and consider WFA over ℝ defined by A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩. Clearly, M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1}. Consider A = ⟨m, σ^A, {M_{x_i}^A, i = 1, 2}, τ^A⟩ defined by
σ^A = [7  8], τ^A = [13  13]^T, M_{x_1}^A = [6  9; 13  14], M_{x_2}^A = [13  1; 15  9]
and B = ⟨n, σ^B, {M_{x_i}^B, i = 1, 2}, τ^B⟩ defined by
σ^B = [5  14  9], τ^B = [3  1  1]^T, M_{x_1}^B = [4  2  2; 13  17  9; 15  14  17], M_{x_2}^B = [2  12  7; 1  16  3; 2  5  7].
Furthermore, the design parameter of ZNN is set to λ = 10 , and the following initial conditions (ICs) are used:
  • IC1: x(0) = 1_{23},
  • IC2: x(0) = −1_{23},
  • IC3: x(0) = rand(23, 1).
The results of the ZNN-fs model are presented in Figure 1.
Example 2.
Let m = 4, n = 2, r = 2, and X = {x_1, x_2}, and consider WFA A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩. Clearly, M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1}. Consider A = ⟨m, σ^A, {M_{x_i}^A, i = 1, 2}, τ^A⟩ defined by
σ^A = [1  1  2  1], τ^A = [1  1  1  1]^T, M_{x_1}^A = [2  1  3  1; 1  2  1  2; 3  1  2  1; 3  1  2  1], M_{x_2}^A = [1  4  2  4; 2  1  2  1; 2  4  1  4; 2  4  1  4]
and B = ⟨n, σ^B, {M_{x_i}^B, i = 1, 2}, τ^B⟩ defined by
σ^B = [1  2], τ^B = [1  1]^T, M_{x_1}^B = [6  4; 4  4], M_{x_2}^B = [4  6; 6  4].
Also, the design parameters of ZNN are λ = 10, λ = 100, and λ = 1000, whereas the IC is set to x(0) = 1_{30}. The results of the ZNN-bs model are presented in Figure 1.
Example 3.
Let m = n = k = 10, r = 2, and X = {x_1, x_2}, and consider WFA A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ over ℝ. Clearly, M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1}. Consider A = ⟨m, σ^A, {M_{x_i}^A, i = 1, 2}, τ^A⟩ defined by
σ^A = 1_k^T, τ^A = 1_k, M_{x_1}^A = I_k, M_{x_2}^A = I_k
and B = ⟨n, σ^B, {M_{x_i}^B, i = 1, 2}, τ^B⟩ defined by
σ^B = 5·1_k^T, τ^B = 5·1_k, M_{x_1}^B = 5·I_k, M_{x_2}^B = 5·I_k.
Furthermore, the design parameter of ZNN is set to λ = 10 , 100 , 1000 , and the following ICs are used:
  • IC1: x(0) = 1_{420},
  • IC2: x(0) = −1_{420},
  • IC3: x(0) = rand(420, 1).
The results of the ZNN-fs model are presented in Figure 2.
Example 4.
Let m = n = k = 10, r = 2, and X = {x_1, x_2}, and consider WFA A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ over ℝ. Clearly, M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1} and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1}. Consider A = ⟨m, σ^A, {M_{x_i}^A, i = 1, 2}, τ^A⟩ defined by
σ^A = 1_k^T, τ^A = 1_k, M_{x_1}^A = I_k, M_{x_2}^A = I_k
and B = ⟨n, σ^B, {M_{x_i}^B, i = 1, 2}, τ^B⟩ defined by
σ^B = 2·1_k^T, τ^B = 2·1_k, M_{x_1}^B = 2·I_k, M_{x_2}^B = 2·I_k.
Furthermore, the design parameter of ZNN is set to λ = 10 , 100 , 1000 , and the following ICs are used:
  • IC1: x(0) = 1_{420},
  • IC2: x(0) = −1_{420},
  • IC3: x(0) = rand(420, 1).
The results of the ZNN-bs model are presented in Figure 3.
Example 5.
Let m = k + 1, n = k with k = 10, r = 2, and X = {x_1, x_2}, and consider WFA A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ over ℝ. Clearly, M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1}, and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1}. Consider A = ⟨m, σ^A, {M_{x_i}^A, i = 1, 2}, τ^A⟩ defined by
σ^A = 1_{k+1}^T, τ^A = [1_k^T  1]^T, M_{x_1}^A = [1_{k,k+1}; 1_{k+1}^T], M_{x_2}^A = [2·1_{k,k+1}; 1_{k+1}^T]
and B = ⟨n, σ^B, {M_{x_i}^B, i = 1, 2}, τ^B⟩ defined by
σ^B = 1_k^T, τ^B = 1_k, M_{x_1}^B = 1_{k,k}, M_{x_2}^B = 2·1_{k,k},
where the semicolon denotes vertical stacking and 1_{p,q} is the p×q all-ones matrix.
Furthermore, the design parameter of ZNN is set to λ = 10 , 100 , 1000 , and the following ICs are used:
  • IC1: x(0) = 1_{702},
  • IC2: x(0) = −1_{702},
  • IC3: x(0) = rand(702, 1).
The results of the ZNN-fb model are presented in Figure 4.
Example 6.
Let m = k + 1, n = k with k = 10, r = 2, and X = {x_1, x_2}, and consider WFA A = ⟨m, σ^A, {M_{x_i}^A}_{x_i∈X}, τ^A⟩ and B = ⟨n, σ^B, {M_{x_i}^B}_{x_i∈X}, τ^B⟩ over ℝ. Clearly, M_{x_i}^A ∈ ℝ^{m×m}, σ^A ∈ ℝ^{1×m}, τ^A ∈ ℝ^{m×1}, and M_{x_i}^B ∈ ℝ^{n×n}, σ^B ∈ ℝ^{1×n}, τ^B ∈ ℝ^{n×1}. Consider A = ⟨m, σ^A, {M_{x_i}^A, i = 1, 2}, τ^A⟩ defined by
σ^A = [2·1_k^T  1], τ^A = [2·1_k^T  1]^T, M_{x_1}^A = [1_{k+1,k}  1_{k+1}], M_{x_2}^A = [2·1_{k+1,k}  1_{k+1}]
and B = ⟨n, σ^B, {M_{x_i}^B, i = 1, 2}, τ^B⟩ defined by
σ^B = 2·1_k^T, τ^B = 2·1_k, M_{x_1}^B = 1_{k,k}, M_{x_2}^B = 2·1_{k,k},
where square brackets denote horizontal concatenation of the blocks and 1_{p,q} is the p×q all-ones matrix.
Furthermore, the design parameter of ZNN is set to λ = 10 , 100 , 1000 , and the following ICs are used:
  • IC1: x(0) = 1_{702},
  • IC2: x(0) = −1_{702},
  • IC3: x(0) = rand(702, 1).
The results of the ZNN-bb model are presented in Figure 5.

Results Discussion

This part discusses the findings of the six numerical examples, which examine how effectively the ZNN models perform.
More precisely, in Example 1, we obtain the following outcomes for the ZNN-fs model under IC1, IC2, and IC3 for λ = 10. Figure 1e shows the ZNN-fs model's EMEs. All instances start from a large error value at t = 0, and all EMEs settle in the interval [10^{−8}, 10^{−7}] with a negligible error value at t = 2. Put another way, the ZNN-fs model validates Theorem 3 by converging to a value close to zero for three distinct ICs. The trajectories of U(t) and K(t), i.e., the model's solutions, are shown in Figure 1f,g, respectively. These results indicate that U(t) and K(t) do not have similar trajectories under IC1, IC2, and IC3, but their convergence speeds are similar. Therefore, the ZNN-fs model appears to give different solutions for a range of ICs, and the convergence pattern of its solutions matches the convergence pattern of the linked EMEs. Moreover, given that the ZNN-fs model must satisfy z = 23 inequality constraints, Figure 1h illustrates the number of inequality constraints that remain unsatisfied during the ZNN learning process. This number equals 0 when all of the inequality constraints are satisfied. In this example, this number becomes 0 at t = 0.5 for IC1, at t = 0.8 for IC2, and at t = 1.3 for IC3. Therefore, for a variety of ICs, the ZNN-fs model seems to have varying convergence speeds when it comes to satisfying the inequality constraints. Comparing the ZNN-fs model to linprog, we see in Figure 1f that linprog yields different U(t) trajectories than ZNN. Furthermore, we see in Figure 1h that 2 of the 23 inequality constraints are not satisfied by the linprog solution. As a result, the ZNN-fs model outperforms linprog in this particular example.
In Example 2, under λ = 10, 100, 1000, the following outcomes for the ZNN-bs model are obtained. Figure 1a shows the ZNN-bs model's EMEs. All instances in this figure start with a large error value at t = 0 and settle in [10^{−10}, 10^{−7}] with a negligible error value at t = 0.02 for λ = 1000, at t = 0.2 for λ = 100, and at t = 2 for λ = 10. Put another way, the ZNN approach's convergence features are confirmed by the ZNN-bs model's EME, which is dependent on λ, and the ZNN-bs model validates Theorem 5 by converging to a value close to zero. The trajectories of U(t) and K(t), i.e., the model's solutions, are shown in Figure 1b,c, respectively. These results indicate that the convergence of the trajectories of U(t) and K(t) is much faster under λ = 1000 than under λ = 100, and much faster under λ = 100 than under λ = 10. Also, it is observable that U(t) and K(t) have similar trajectories across λ = 10, 100, 1000, respectively. So, the ZNN-bs model appears to give the same U(t) and K(t) solutions for a range of λ values, and the convergence pattern of its solutions matches the convergence pattern of the linked EMEs. Moreover, given that the ZNN-bs model must satisfy z = 30 inequality constraints, Figure 1d illustrates the number of inequality constraints that remain unsatisfied during the ZNN learning process. This number becomes 0 at t = 0.3 for λ = 10, at t = 0.05 for λ = 100, and at t = 0.005 for λ = 1000. Therefore, for higher values of λ, the ZNN-bs model seems to have faster convergence when it comes to satisfying the inequality constraints. Comparing the ZNN-bs model to linprog, we see in Figure 1b that linprog yields different U(t) trajectories than ZNN. Furthermore, we see in Figure 1d that 3 of the 30 inequality constraints are not satisfied by the linprog solution.
As a result, the ZNN-bs model outperforms linprog in this example.
In Examples 3–6, we obtain the following outcomes for the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models under IC1, IC2, and IC3 for λ = 10. Figure 2a, Figure 3a, Figure 4a and Figure 5a show the ZNN models' EMEs. All instances start from a large error value at t = 0, and all EMEs settle in the interval [10^{−15}, 10^{−13}] with a negligible error value at t = 3.6. Put another way, the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models validate Theorems 3, 5, 7 and 9, respectively, by converging to a value close to zero for three distinct ICs. The trajectories of U(t), generated by the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models, are shown in Figure 2b, Figure 3b, Figure 4b and Figure 5b, and the trajectories of K(t) are shown in Figure 2c, Figure 3c, Figure 4c and Figure 5c, respectively. For each case, these results indicate that U(t) and K(t) do not have similar trajectories under IC1, IC2, and IC3, but their convergence speeds are similar. Therefore, all ZNN models appear to give different solutions for a range of ICs, and the convergence pattern of their solutions matches the convergence pattern of the linked EMEs. Moreover, given that the ZNN-fs and ZNN-bs models must satisfy z = 320 inequality constraints and the ZNN-fb and ZNN-bb models must satisfy y = 592 inequality constraints, Figure 2d, Figure 3d, Figure 4d and Figure 5d illustrate the number of inequality constraints that remain unsatisfied during the ZNN learning process. This number becomes 0 at around t = 2 for all ICs. Therefore, for a variety of ICs, the ZNN models seem to have varying convergence rates when it comes to satisfying the inequality constraints.
Additionally, the following outcomes for the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models are obtained in Examples 3–6 under λ = 10, 100, 1000. Figure 2e, Figure 3e, Figure 4e and Figure 5e show the ZNN models' EMEs. All instances in these figures start with a large error value at t = 0 and settle in [10^{−16}, 10^{−14}] with a negligible error value at t = 0.04 for λ = 1000, at t = 0.4 for λ = 100, and at t = 3.8 for λ = 10. Put another way, the ZNN approach's convergence features are confirmed by the ZNN models' EMEs, which are dependent on λ, and the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models validate Theorems 3, 5, 7 and 9, respectively, by converging to a value close to zero. The trajectories of U(t) generated by the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models are shown in Figure 2f, Figure 3f, Figure 4f and Figure 5f, and the trajectories of K(t) are shown in Figure 2g, Figure 3g, Figure 4g and Figure 5g, respectively. These results indicate that the convergence of the trajectories of U(t) and K(t) is much faster under λ = 1000 than under λ = 100, and much faster under λ = 100 than under λ = 10. Also, it is observable that U(t) and K(t) have similar trajectories across λ = 10, 100, and 1000, respectively. So, for each case, the ZNN model appears to give the same U(t) and K(t) solutions for a range of λ values, and the convergence pattern of its solutions matches the convergence pattern of the linked EMEs. Moreover, given that the ZNN-fs and ZNN-bs models must satisfy z = 320 inequality constraints and the ZNN-fb and ZNN-bb models must satisfy y = 592 inequality constraints, Figure 2h, Figure 3h, Figure 4h and Figure 5h illustrate the number of inequality constraints that remain unsatisfied during the ZNN learning process.
This number becomes 0 at t = 1.9 for λ = 10, at t = 0.3 for λ = 100, and at t = 0.03 for λ = 1000. Therefore, for higher values of λ, the ZNN models seem to have faster convergence when it comes to satisfying the inequality constraints.
Comparing the ZNN models in Examples 3–6 to linprog, we see in Figure 2b,f, Figure 3b,f, Figure 4b,f and Figure 5b,f that linprog yields different U(t) trajectories than ZNN. It is important to note that the zero solution is produced by linprog in Example 6 and that 10 of the 320 inequality constraints are not satisfied by the linprog solution in Example 3. Thus, the ZNN models perform similarly to linprog in Examples 4–6, while the ZNN model outperforms linprog in Example 3. Furthermore, Figure 2i, Figure 3i, Figure 4i and Figure 5i show the time consumption of the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models in Examples 3–6, respectively, using the MATLAB R2022a environment on an Intel® Core™ i5-6600K CPU at 3.50 GHz with 16 GB RAM, running the Windows 10 64-bit operating system. In these figures, as the dimensions of the matrices (i.e., the value of k) rise, we find that the ZNN models' time consumption increases considerably more under λ = 1000 than under λ = 100, and considerably more under λ = 100 than under λ = 10. Therefore, for higher values of λ, the ZNN models seem to have a higher time consumption.
When everything is considered, the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models perform admirably in finding the solutions of the systems (3)–(6), respectively. Upon comparing the ZNN models to linprog, it is discovered that each ZNN model exhibits comparable or superior performance. Additionally, all ZNN models' performances are affected by the value of λ, and their solutions are affected by the choice of the ICs. Keep in mind that the values of λ and the ICs in the experiments of this section were chosen at random. As a corollary, the approximation to the TSOL x*(t) in the ZNN-fs, ZNN-bs, ZNN-fb, and ZNN-bb models is achieved faster via λ = 1000 than via λ = 100 and λ = 10, while the time consumption is higher via λ = 1000 than via λ = 100 and λ = 10.

5. Concluding Remarks

Practically, this research is focused on solving the equivalence problem (determining whether two automata define the same word function) and the containment problem (determining whether the word function of one WFA is bounded from above by the word function of another). Our intention was to unify two important topics: the zeroing neural network (ZNN) and the existence of forward and backward simulations and bisimulations for weighted finite automata (WFA) over the field of real numbers ℝ. Two types of quantitative simulations and two types of bisimulations were defined as solutions to particular systems of matrix and vector inequations over ℝ. This research was aimed at the development and analysis of two novel ZNN models, termed ZNN-fs and ZNN-bs, for addressing the systems of matrix and vector inequations involved in simulations between WFA, and two novel ZNN models, termed ZNN-fb and ZNN-bb, for addressing the systems of matrix and vector inequations involved in bisimulations between WFA. The problem considered in this paper requires solving a system of two vector inequations and a couple of matrix inequations. Using positive slack matrices, the required matrix and vector inequations were transformed into corresponding equations, which are solvable by the proposed ZNN dynamical systems. A detailed convergence analysis was presented. Numerical examples were performed with different initial state matrices. A comparison with the known LP approach proposed in [7] was presented, and the better performance of the ZNN design was confirmed. The models solved in the current research utilized the development of ZNN dynamics based on several inequations and Zhang error functions. The derived models can be viewed as extensions, from equations to inequations, of ZNN algorithms established upon a few error functions. Such models have been investigated in several papers, such as [31,32,33,34].
Seen more generally, the research described in this paper shows that the ZNN design is usable in solving systems of matrix and vector inequations in linear algebra. Further research can be aimed at solving the minimization problems (determining an automaton with the minimal number of states equivalent to a given automaton).
Simulations and bisimulations have already been studied through solving systems of matrix inequations in the context of fuzzy finite automata [2,3], nondeterministic automata [4], WFA over an additively idempotent semiring [5], and max-plus automata [6]. The methodology used there, based on the concept of residuation, is fundamentally different from the methodology applied in this article to WFA over the field of real numbers. Perhaps some general ideas of this article could be applied to solving systems of matrix inequalities in the context of fuzzy finite automata, for example, the use of neuro-fuzzy systems (fuzzy neural networks), which could be the topic of our future research. On the other hand, the proposed methodology could be more directly applied to some special WFA over a field of real numbers, such as WFA over a semiring of nonnegative real numbers and probabilistic automata. This will also be one of the topics of our future research.

Author Contributions

Conceptualization, M.Ć., P.S.S. and S.D.M.; methodology, P.S.S. and S.D.M.; software, S.D.M.; validation, M.Ć., P.S.S. and D.K.; formal analysis, P.S.S., M.Ć., P.B. and D.K.; investigation, P.S.S., S.D.M. and D.K.; resources, S.D.M.; data curation, S.D.M. and D.K.; writing—original draft preparation, M.Ć. and P.S.S.; writing—review and editing, P.S.S., S.D.M. and M.Ć.; visualization, S.D.M.; supervision, M.Ć. and P.S.S.; project administration, P.B., P.S.S. and D.K.; funding acquisition, P.B. and D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 075-15-2022-1121).

Data Availability Statement

The data that support the findings of this study are available on request to the authors.

Acknowledgments

Predrag Stanimirović and Miroslav Ćirić acknowledge the support by the Science Fund of the Republic of Serbia, Grant No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications—QUAM. Predrag Stanimirović and Miroslav Ćirić are also supported by the Ministry of Science, Technological Development and Innovation, Republic of Serbia, Contract No. 451-03-65/2024-03/200124.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ćirić, M.; Ignjatović, J.; Stanimirović, P.S. Bisimulations for weighted finite automata over semirings. Res. Sq. 2022. [Google Scholar] [CrossRef]
  2. Ćirić, M.; Ignjatović, J.; Damljanović, N.; Bašić, M. Bisimulations for fuzzy automata. Fuzzy Sets Syst. 2012, 186, 100–139. [Google Scholar] [CrossRef]
  3. Ćirić, M.; Ignjatović, J.; Jančić, I.; Damljanović, N. Computation of the greatest simulations and bisimulations between fuzzy automata. Fuzzy Sets Syst. 2012, 208, 22–42. [Google Scholar] [CrossRef]
  4. Ćirić, M.; Ignjatović, J.; Bašić, M.; Jančić, I. Nondeterministic automata: Equivalence, bisimulations, and uniform relations. Inf. Sci. 2014, 261, 185–218. [Google Scholar] [CrossRef]
  5. Damljanović, N.; Ćirić, M.; Ignjatović, J. Bisimulations for weighted automata over an additively idempotent semiring. Theor. Comput. Sci. 2014, 534, 86–100. [Google Scholar] [CrossRef]
  6. Ćirić, M.; Micić, I.; Matejić, J.; Stamenković, A. Simulations and bisimulations for max-plus automata. Discret. Event Dyn. Syst. 2024, 34, 269–295. [Google Scholar] [CrossRef]
  7. Urabe, N.; Hasuo, I. Quantitative simulations by matrices. Inf. Comput. 2017, 252, 110–137. [Google Scholar] [CrossRef]
  8. Stanković, I.; Ćirić, M.; Ignjatović, J. Bisimulations for weighted networks with weights in a quantale. Filomat 2023, 37, 3335–3355. [Google Scholar]
  9. Doyen, L.; Henzinger, T.A.; Raskin, J.F. Equivalence of labeled Markov chains. Int. J. Found. Comput. Sci. 2008, 19, 549–563. [Google Scholar] [CrossRef]
  10. Balle, B.; Gourdeau, P.; Panangaden, P. Bisimulation metrics and norms for real weighted automata. Inf. Comput. 2022, 282, 104649. [Google Scholar] [CrossRef]
  11. Boreale, M. Weighted bisimulation in linear algebraic form. In CONCUR 2009—Concurrency Theory, 20th International Conference, Bologna, Italy, 1–4 September 2009; Bravetti, M., Zavattaro, G., Eds.; LNCS; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5710, pp. 163–177. [Google Scholar]
  12. Xiao, L.; Jia, L. Zeroing Neural Networks: Finite-time Convergence Design, Analysis and Applications; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2022. [Google Scholar]
  13. Zhang, Y.; Yi, C. Zhang Neural Networks and Neural-Dynamic Method; Nova Science Publishers, Inc.: New York, NY, USA, 2011. [Google Scholar]
  14. Zhang, Y.; Ge, S.S. Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Trans. Neural Netw. 2005, 16, 1477–1490. [Google Scholar] [CrossRef] [PubMed]
  15. Wu, W.; Zheng, B. Two new Zhang neural networks for solving time-varying linear equations and inequalities systems. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 4957–4965. [Google Scholar] [CrossRef] [PubMed]
  16. Xiao, L.; Tan, H.; Dai, J.; Jia, L.; Tang, W. High-order error function designs to compute time-varying linear matrix equations. Inf. Sci. 2021, 576, 173–186. [Google Scholar] [CrossRef]
  17. Li, X.; Yu, J.; Li, S.; Ni, L. A nonlinear and noise-tolerant ZNN model solving for time-varying linear matrix equation. Neurocomputing 2018, 317, 70–78. [Google Scholar] [CrossRef]
  18. Dai, J.; Li, Y.; Xiao, L.; Jia, L. Zeroing neural network for time-varying linear equations with application to dynamic positioning. IEEE Trans. Ind. Inform. 2022, 18, 1552–1561. [Google Scholar] [CrossRef]
  19. Wang, T.; Zhang, Z.; Huang, Y.; Liao, B.; Li, S. Applications of Zeroing Neural Networks: A Survey. IEEE Access 2024, 12, 51346–51363. [Google Scholar] [CrossRef]
  20. Guo, D.; Yan, L.; Zhang, Y. Zhang Neural Networks for online solution of time-varying linear inequalities. In Artificial Neural Networks; Rosa, J.L.G., Ed.; IntechOpen: Rijeka, Croatia, 2016. [Google Scholar] [CrossRef]
  21. Sun, J.; Wang, S.; Wang, K. Zhang neural networks for a set of linear matrix inequalities with time-varying coefficient matrix. Inf. Process. Lett. 2016, 116, 603–610. [Google Scholar] [CrossRef]
  22. Xiao, L.; Zhang, Y. Two new types of Zhang Neural Networks solving systems of time-varying nonlinear inequalities. IEEE Trans. Circuits Syst. I Regul. Pap. 2012, 59, 2363–2373. [Google Scholar] [CrossRef]
  23. Xiao, L.; Zhang, Y. Zhang Neural Network versus Gradient Neural Network for solving time-varying linear inequalities. IEEE Trans. Neural Netw. 2011, 22, 1676–1684. [Google Scholar] [CrossRef]
  24. Xiao, L.; Zhang, Y. Different Zhang functions resulting in different ZNN models demonstrated via time-varying linear matrix–vector inequalities solving. Neurocomputing 2013, 121, 140–149. [Google Scholar] [CrossRef]
  25. Zeng, Y.; Xiao, L.; Li, K.; Li, J.; Li, K.; Jian, Z. Design and analysis of three nonlinearly activated ZNN models for solving time-varying linear matrix inequalities in finite time. Neurocomputing 2020, 390, 78–87. [Google Scholar] [CrossRef]
  26. Guo, D.; Zhang, Y. A new variant of the Zhang neural network for solving online time-varying linear inequalities. Proc. R. Soc. A 2012, 468, 2255–2271. [Google Scholar] [CrossRef]
  27. Zheng, B.; Yue, C.; Wang, Q.; Li, C.; Zhang, Z.; Yu, J.; Liu, P. A new super-predefined-time convergence and noise-tolerant RNN for solving time-variant linear matrix–vector inequality in noisy environment and its application to robot arm. Neural Comput. Appl. 2024, 36, 4811–4827. [Google Scholar] [CrossRef]
  28. Zhang, Y.; Yang, M.; Huang, H.; Xiao, M.; Hu, H. New discrete solution model for solving future different level linear inequality and equality with robot manipulator control. IEEE Trans. Ind. Inform. 2019, 15, 1975–1984. [Google Scholar] [CrossRef]
  29. Guo, D.; Zhang, Y. A new inequality-based obstacle-avoidance MVN scheme and its application to redundant robot manipulators. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2012, 42, 1326–1340. [Google Scholar] [CrossRef]
  30. Kong, Y.; Jiang, Y.; Lou, J. Terminal computing for Sylvester equations solving with application to intelligent control of redundant manipulators. Neurocomputing 2019, 335, 119–130. [Google Scholar] [CrossRef]
  31. Katsikis, V.; Mourtas, S.; Stanimirović, P.; Zhang, Y. Solving complex-valued time-varying linear matrix equations via QR decomposition with applications to robotic motion tracking and on angle-of-arrival localization. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 3415–3424. [Google Scholar] [CrossRef] [PubMed]
  32. Li, X.; Lin, C.L.; Simos, T.; Mourtas, S.; Katsikis, V. Computation of time-varying 2,3- and 2,4-inverses through zeroing neural networks. Mathematics 2022, 10, 4759. [Google Scholar] [CrossRef]
  33. Simos, T.; Katsikis, V.; Mourtas, S.; Stanimirović, P. Unique non-negative definite solution of the time-varying algebraic Riccati equations with applications to stabilization of LTV system. Math. Comput. Simul. 2022, 202, 164–180. [Google Scholar] [CrossRef]
  34. Stanimirović, P.; Mourtas, S.; Mosić, D.; Katsikis, V.; Cao, X.; Li, S. Zeroing Neural Network approaches for computing time-varying minimal rank outer inverse. Appl. Math. Comput. 2024, 465. [Google Scholar] [CrossRef]
  35. Buchholz, P. Bisimulation relations for weighted automata. Theor. Comput. Sci. 2008, 393, 109–123. [Google Scholar] [CrossRef]
  36. Hua, C.; Cao, X.; Xu, Q.; Liao, B.; Li, S. Dynamic neural network models for time-varying problem solving: A survey on model structures. IEEE Access 2023, 11, 65991–66008. [Google Scholar] [CrossRef]
  37. Jin, L.; Li, S.; Liao, B.; Zhang, Z. Zeroing neural networks: A survey. Neurocomputing 2017, 267, 597–604. [Google Scholar] [CrossRef]
  38. Graham, A. Kronecker Products and Matrix Calculus with Applications; Courier Dover Publications: Mineola, NY, USA, 2018. [Google Scholar]
Figure 1. Errors and trajectories in Examples 1 and 2. (a) Example 1: EME errors. (b) Example 1: Trajectories of U ( t ) . (c) Example 1: Trajectories of K ( t ) . (d) Example 1: Number of unsatisfied constraints. (e) Example 2: EME errors. (f) Example 2: Trajectories of U ( t ) . (g) Example 2: Trajectories of K ( t ) . (h) Example 2: Number of unsatisfied constraints.
Figure 2. Errors and trajectories in Example 3.
Figure 3. Errors and trajectories in Example 4.
Figure 4. Errors and trajectories in Example 5.
Figure 5. Errors and trajectories in Example 6.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Stanimirović, P.S.; Ćirić, M.; Mourtas, S.D.; Brzaković, P.; Karabašević, D. Simulations and Bisimulations between Weighted Finite Automata Based on Time-Varying Models over Real Numbers. Mathematics 2024, 12, 2110. https://doi.org/10.3390/math12132110
