**A Novel Recurrent Neural Network-Based Ultra-Fast, Robust, and Scalable Solver for Inverting a "Time-Varying Matrix"**

#### **Vahid Tavakkoli \*, Jean Chamberlain Chedjou and Kyandoghere Kyamakya**

Institute for Smart Systems Technologies, University Klagenfurt, A9020 Klagenfurt, Austria; jean.chedjou@aau.at (J.C.C.); kyandoghere.kyamakya@aau.at (K.K.) **\*** Correspondence: vtavakko@edu.aau.at; Tel.: +43-463-2700-3540

Received: 16 August 2019; Accepted: 11 September 2019; Published: 16 September 2019

**Abstract:** The concept presented in this paper builds on previous dynamical methods for time-varying matrix inversion. It is essentially a set of coupled ordinary differential equations (ODEs) that constitutes a recurrent neural network (RNN) model. These coupled ODEs form a universal modeling framework for matrix inversion: the proposed model converges to the exact inverse if the matrix is invertible, and to an approximate inverse otherwise. Although various methods for matrix inversion exist across science and engineering, most of them either assume that the time-varying matrix is free of noise or require a denoising module before the inversion computation starts. In practice, however, the presence of noise is a serious problem; moreover, denoising is computationally expensive and can violate the real-time requirements of the system. Hence, a new matrix-inversion method that inherently integrates noise cancellation is highly desirable. In this paper, such a combined/extended method for time-varying matrix inversion is proposed and investigated. The proposed method extends both the gradient neural network (GNN) and the Zhang neural network (ZNN) concepts. The new model is proven to be exponentially stable in the sense of Lyapunov. Furthermore, compared to related previous methods (namely GNN, ZNN, the Chen neural network, and the integration-enhanced Zhang neural network, IEZNN), it has a much better theoretical convergence speed. Finally, all named models (the new one versus the previous ones) are compared through practical examples, and their respective convergence and error rates are measured. It is shown that the proposed method achieves a better practical convergence rate than the other models, and that it yields a very good approximation of the matrix inverse even in the presence of noise.

**Keywords:** matrix inversion; time-varying matrix; noise problem in time-varying matrix inversion; recurrent neural network (RNN); RNN-based solver; real-time fast computing
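To illustrate the ZNN design principle that this paper extends, the following is a minimal numerical sketch (not the paper's proposed model): the error $E(t) = A(t)X(t) - I$ is forced to decay as $\dot{E} = -\gamma E$, which yields the implicit ODE $A\dot{X} = -(\dot{A}X + \gamma(AX - I))$, here integrated with forward Euler. The $2\times2$ test matrix, the gain `gamma`, and the step size are illustrative choices; a hardware RNN realization would also avoid the explicit linear solve used below.

```python
import numpy as np

def znn_inverse_step(A, dA, X, gamma, dt):
    """One forward-Euler step of a ZNN-style ODE for time-varying inversion.

    Imposing dE/dt = -gamma * E on the error E = A X - I gives
        A dX/dt = -(dA X + gamma * (A X - I)),
    solved here for dX/dt with an explicit linear solve (for illustration only).
    """
    I = np.eye(A.shape[0])
    rhs = -(dA @ X + gamma * (A @ X - I))
    dX = np.linalg.solve(A, rhs)
    return X + dt * dX

# Track the inverse of A(t) = [[3 + sin t, 1], [1, 3 + cos t]] (always invertible).
A_of = lambda t: np.array([[3 + np.sin(t), 1.0], [1.0, 3 + np.cos(t)]])
dA_of = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

gamma, dt, T = 50.0, 1e-3, 2.0
X = np.eye(2)  # deliberately wrong initial guess; the ODE drives X toward A(t)^{-1}
t = 0.0
while t < T:
    X = znn_inverse_step(A_of(t), dA_of(t), X, gamma, dt)
    t += dt

err = np.linalg.norm(A_of(t) @ X - np.eye(2))
print(f"residual ||A X - I|| after {T:.0f} s: {err:.2e}")
```

With the exponential error dynamics, the residual shrinks rapidly despite the moving target; a larger `gamma` speeds convergence but requires a proportionally smaller `dt` for the explicit Euler scheme to remain stable.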
