
On the Calculation of the Moore–Penrose and Drazin Inverses: Application to Fractional Calculus

by Khosro Sayevand 1, Ahmad Pourdarvish 2, José A. Tenreiro Machado 3,* and Raziye Erfanifar 1

1 Faculty of Mathematical Sciences, Malayer University, Malayer P.O. Box 16846-13114, Iran
2 Department of Statistics, Faculty of Mathematical Sciences, Mazandaran University, Mazandaran P.O. Box 47416-135534, Iran
3 Department of Electrical Engineering, Institute of Engineering, Polytechnic of Porto, 4249-015 Porto, Portugal
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(19), 2501; https://doi.org/10.3390/math9192501
Submission received: 12 August 2021 / Revised: 29 September 2021 / Accepted: 30 September 2021 / Published: 6 October 2021
(This article belongs to the Section Dynamical Systems)

Abstract: This paper presents a third order iterative method for obtaining the Moore–Penrose and Drazin inverses with a computational cost of $O(n^3)$, where $n \in \mathbb{N}$. The performance of the new approach is compared with other methods discussed in the literature. The results show that the algorithm is remarkably efficient and accurate. Furthermore, sufficient criteria in the fractional sense are presented, both for smooth and non-smooth solutions. The fractional elliptic Poisson and fractional sub-diffusion equations in the Caputo sense are considered as prototype examples. The results can be extended to other scientific areas involving numerical linear algebra.

1. Introduction

Computing the inverse of a matrix, particularly of high dimension, is a time-consuming task. Therefore, numerical methods are important for the calculation of the inverse of a matrix, and numerical iterative algorithms have a special role among the available techniques.
The Moore–Penrose inverse of a matrix $A \in \mathbb{C}^{m\times n}$, denoted by $A^\dagger \in \mathbb{C}^{n\times m}$, is the unique matrix $X$ that obeys the four conditions [1]
$$AXA = A, \quad XAX = X, \quad (AX)^* = AX, \quad (XA)^* = XA,$$
where $A^*$ is the conjugate transpose of $A$. If $\mathrm{rank}(A) = \min\{m, n\}$, then
$$A^\dagger = \begin{cases} (A^*A)^{-1}A^*, & m > n, \\ A^{-1}, & m = n, \\ A^*(AA^*)^{-1}, & m < n. \end{cases}$$
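As a quick numerical sanity check, the closed-form expression above can be compared against a library pseudoinverse. The sketch below (Python with NumPy; variable names are our own) does this for a random full-column-rank tall matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))  # m > n, full column rank (almost surely)

# Closed form for the full-rank tall case: A† = (A*A)^{-1} A*
A_pinv_formula = np.linalg.inv(A.conj().T @ A) @ A.conj().T

# Library reference
A_pinv_lib = np.linalg.pinv(A)

print(np.allclose(A_pinv_formula, A_pinv_lib))  # True
```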
We find in the published literature a number of different iterative methods for computing the Moore–Penrose inverse. The most common approach for the approximate inverse, $A^{-1}$, is Newton's iterative method ($NM$):
$$V_{r+1} = V_r(2I - AV_r), \quad r = 0, 1, 2, \ldots,$$
where $I$ is the identity matrix. For more details, interested readers can see [2,3].
Li et al. [4] investigated the following third-order method, known as Chebyshev's iterative method:
$$W_r = AV_r, \quad V_{r+1} = V_r\left[3I - W_r(3I - W_r)\right].$$
In addition, Toutounian and Soleymani [5] proposed another iterative method, of fourth order, to find $A^{-1}$, given by
$$W_r = AV_r, \quad V_{r+1} = \frac{1}{2}V_r\left(9I - W_r\left(16I - W_r\left(14I - W_r(6I - W_r)\right)\right)\right).$$
Pan et al. [6] investigated the following eighteenth-order scheme:
$$W_r = AV_r, \quad Z_r = I - W_r, \quad P_r = Z_r^2, \quad U_r = P_r^2, \quad M_r = (I + c_1P_r + U_r)(I + c_2P_r + U_r),$$
$$T_r = M_r + c_3P_r, \quad S_r = M_r + d_1P_r + d_2U_r, \quad V_{r+1} = V_r(I + Z_r)(T_rS_r + \mu P_r + \psi U_r),$$
where
$$c_1 = \frac{1}{4}\left(\sqrt{27 - 2\sqrt{93}} - 1\right), \quad c_2 = -\frac{1}{2}\left(1 + \sqrt{27 - 2\sqrt{93}}\right), \quad c_3 = \frac{1}{496}\left(5\sqrt{93} - 93\right),$$
$$d_1 = \frac{1}{496}\left(93 - 5\sqrt{93}\right), \quad d_2 = \frac{\sqrt{93}}{4}, \quad \mu = \frac{3}{8}, \quad \psi = \frac{321}{1984}.$$
Esmaeili et al. [7] proposed the second-order method
$$W_r = AV_r, \quad V_{r+1} = V_r\left(5.5I - W_r(8I - 3.5W_r)\right),$$
which is superior in terms of computational efficiency.
To initialize these algorithms, an initial matrix $V_0$ was introduced by Pan et al. [8]:
$$V_0 = \alpha A^*, \quad \text{where} \quad \alpha = \frac{1}{\|A\|_1\,\|A\|_\infty}.$$
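To make the iteration concrete, here is a minimal sketch (Python with NumPy; the function name and driver loop are our own) of Newton's method $NM$ with the initialization above. The same loop works for the higher-order schemes by swapping the update line:

```python
import numpy as np

def newton_inverse(A, tol=1e-10, max_iter=200):
    """Newton-Schulz iteration V <- V(2I - AV) for A^{-1} (square, nonsingular A)."""
    n = A.shape[0]
    I = np.eye(n)
    # Initialization of Pan et al.: V0 = A* / (||A||_1 ||A||_inf)
    V = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        V_new = V @ (2 * I - A @ V)
        if np.linalg.norm(V_new - V) / (1 + np.linalg.norm(V)) < tol:
            return V_new
        V = V_new
    return V

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.allclose(newton_inverse(A) @ A, np.eye(2)))  # True
```

The stopping test anticipates the mixed error measure (77) used in the numerical experiments below.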
In 1958, a different kind of generalized inverse was introduced by Drazin [9]. Unlike the Moore–Penrose inverse, it is defined in the setting of associative rings and semigroups and commutes with the element it inverts. The importance of this type of inverse and its calculation was later discussed by Wilkinson [10], and several researchers proposed direct or iterative methods for calculating the solution of this problem [11,12,13,14]. In this paper, a characterization of the Drazin inverse in the scope of fractional calculus is investigated.
The paper is organized as follows. Section 2 introduces the essential concepts, fundamental definitions, and properties of fractional calculus. Section 3 and Section 4 analyse the performance of a novel iterative method for obtaining the Moore–Penrose and Drazin inverses. Section 5 introduces the error measurement. Section 6 compares the numerical results of the proposed approach with other available schemes, especially for high dimensional values. Section 7 highlights several applications of the new method and provides a numerical assessment of their effectiveness in a fractional sense. Finally, Section 8 presents the main conclusions.
Table 1 lists the abbreviations and acronyms used in the follow-up.

2. Fractional Calculus

In recent decades, fractional calculus and fractional differential equations (FDE) have had a significant impact in science, with particular emphasis on the modelling of dynamical systems [15,16,17,18,19,20,21,22,23,24]. This mathematical tool generalises the standard calculus, and several definitions of fractional derivatives and integrals have been proposed.
Let $\xi(x)$ be defined as a function on the interval $[a, b]$. The Riemann–Liouville integral is defined by
$$I^\gamma \xi(x) = \frac{1}{\Gamma(\gamma)}\int_a^x \xi(t)\,(x - t)^{\gamma - 1}\,dt,$$
where $\Gamma$ is the gamma function, and $a$ is an arbitrary but fixed base point. The $\gamma$th order ($m - 1 < \gamma < m$) left and right sided Riemann–Liouville fractional derivatives of $\xi(x)$ are defined as
$${}_a^{R}D_x^\gamma\,\xi(x) = \frac{1}{\Gamma(m - \gamma)}\frac{d^m}{dx^m}\int_a^x \frac{\xi(\tau)}{(x - \tau)^{\gamma - m + 1}}\,d\tau,$$
$${}_x^{R}D_b^\gamma\,\xi(x) = \frac{(-1)^m}{\Gamma(m - \gamma)}\frac{d^m}{dx^m}\int_x^b \frac{\xi(\tau)}{(\tau - x)^{\gamma - m + 1}}\,d\tau.$$
The Riemann–Liouville fractional derivative and integral played an important role in the development of theoretical problems of fractional calculus. However, since the solution of FDE requires initial conditions with fractional derivatives, they pose difficulties in their application. In 1967, the Caputo fractional derivative was formulated [17]:
$$\frac{\partial^\gamma \xi(x, t)}{\partial t^\gamma} = \frac{1}{\Gamma(m - \gamma)}\int_0^t \frac{1}{(t - \tau)^{\gamma - m + 1}}\,\frac{\partial^m \xi(x, \tau)}{\partial \tau^m}\,d\tau, \quad \gamma \in (m - 1, m), \quad m \in \mathbb{N}.$$
This definition simplifies the initial condition problem. The relationship between the Riemann–Liouville and Caputo fractional derivatives is as follows:
$${}_a^{RL}D_x^\gamma\,\xi(x) = {}_a^{C}D_x^\gamma\,\xi(x) + \sum_{k=0}^{m-1}\frac{\xi^{(k)}(a)\,(x - a)^{k - \gamma}}{\Gamma(k + 1 - \gamma)}, \quad m - 1 \leq \gamma < m.$$
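As a short worked example (ours, not from the paper), the Caputo derivative of a power function follows directly from the definition: for $\xi(t) = t^2$ and $0 < \gamma < 1$ (so $m = 1$),
$$\frac{d^\gamma}{dt^\gamma}\,t^2 = \frac{1}{\Gamma(1-\gamma)}\int_0^t \frac{2\tau}{(t-\tau)^\gamma}\,d\tau = \frac{2}{\Gamma(3-\gamma)}\,t^{2-\gamma},$$
which is exactly the identity used later to build the source term in Example 7(a).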
Nonetheless, the discretization of an FDE yields a matrix equation, and, therefore, the solution of an FDE is closely tied to a problem of numerical linear algebra.

3. New Iterative Method

Iterative methods for solving nonlinear equations are pervasive in applied mathematics, and many researchers have studied a variety of algorithms [25,26,27,28], keeping in mind that the efficiency of the method is of key importance. The existence of a function derivative in the algorithm often poses constraints and increases the computational cost. Therefore, the use of function high derivatives is usually avoided. We suppose that the function F ( χ ) has a simple root at a and that χ 0 is an initial guess sufficiently close to a. To solve the equation F ( χ ) = 0 , we consider the iterative algorithm
$$\chi_{r+1} = \chi_r - \frac{F(\chi_r)}{F'(\chi_r)}\left[1 + \frac{F(\chi_r)F''(\chi_r)}{2F'(\chi_r)^2}\right] - \frac{F(\chi_r)^3}{4F'(\chi_r)^4}\left(\frac{F''(\chi_r)^2}{2F'(\chi_r)} + \frac{F(\chi_r)F''''(\chi_r)}{6F'(\chi_r)}\right), \quad r = 0, 1, \ldots$$
In the following, the convergence analysis of this method is investigated.
Theorem 1.
Suppose that $F: D \subseteq \mathbb{R} \to \mathbb{R}$ is sufficiently differentiable in a neighbourhood of $a \in D$ and that $a$ is a simple zero of $F(\chi) = 0$. The iterative method (15) converges to $a$ with convergence order three. The error equation is given by
$$\epsilon_{r+1} = \frac{3}{4}\left(2J_2^2 - J_3\right)\epsilon_r^3 + O(\epsilon_r^4),$$
where $J_i = \frac{1}{i!}\,\frac{F^{(i)}(a)}{F'(a)}$ for $i \geq 2$.
Proof. 
Based on the Taylor expansion of $F$ about $a$, we can write
$$F(\chi_r) = F'(a)\left[\epsilon_r + J_2\epsilon_r^2 + J_3\epsilon_r^3 + J_4\epsilon_r^4 + J_5\epsilon_r^5 + J_6\epsilon_r^6 + O(\epsilon_r^7)\right],$$
$$F'(\chi_r) = F'(a)\left[1 + 2J_2\epsilon_r + 3J_3\epsilon_r^2 + 4J_4\epsilon_r^3 + 5J_5\epsilon_r^4 + 6J_6\epsilon_r^5 + O(\epsilon_r^6)\right],$$
$$F''''(\chi_r) = F'(a)\left[24J_4 + 120J_5\epsilon_r + 360J_6\epsilon_r^2 + O(\epsilon_r^3)\right].$$
From the above relations, we have
$$\frac{F(\chi_r)}{F'(\chi_r)} = \epsilon_r - J_2\epsilon_r^2 + 2(J_2^2 - J_3)\epsilon_r^3 + O(\epsilon_r^4), \qquad \frac{F(\chi_r)F''(\chi_r)}{2F'(\chi_r)^2} = J_2\epsilon_r + (3J_3 - 3J_2^2)\epsilon_r^2 + O(\epsilon_r^3),$$
$$\frac{F(\chi_r)^3}{4F'(\chi_r)^4} = \frac{1}{4F'(a)}\left[\epsilon_r^3 - 5J_2\epsilon_r^4 + O(\epsilon_r^5)\right], \qquad \frac{F''(\chi_r)^2}{2F'(\chi_r)} = F'(a)\left[2J_2^2 - 4(J_2^3 - 3J_2J_3)\epsilon_r + O(\epsilon_r^2)\right],$$
$$\frac{F(\chi_r)F''''(\chi_r)}{F'(\chi_r)} = 24F'(a)\left[J_4\epsilon_r - (J_2J_4 - 5J_5)\epsilon_r^2 + O(\epsilon_r^3)\right].$$
Then, according to (15), we can write
$$\epsilon_{r+1} = \frac{3}{4}\left(2J_2^2 - J_3\right)\epsilon_r^3 + O(\epsilon_r^4).$$
Let $F(V) = V^{-1} - A$. Then, the following iterative method is obtained from (15):
$$V_{r+1} = \frac{1}{4}V_r\left(37I - 111AV_r + 151(AV_r)^2 - 97(AV_r)^3 + 24(AV_r)^4\right),$$
or
$$\vartheta_r = AV_r, \quad \xi_r = \vartheta_r^2, \quad V_{r+1} = \frac{1}{4}V_r\left(37I - 111\vartheta_r + \xi_r\left(151I - 97\vartheta_r + 24\xi_r\right)\right).$$
Note: It is pointed out that in the body of this paper, methods (6), (8) and (22) are represented by the acronyms $E_1$, $E_2$, and $E_3$, respectively. □
Theorem 2.
Suppose that $A \in \mathbb{C}^{n\times n}$ is a nonsingular matrix and that the initial approximation $V_0$ satisfies
$$\|I - AV_0\| < 1.$$
Then, the iterative method (22) converges to $A^{-1}$ with third order.
Proof. 
The proof is similar to that of Theorem 2.1 in [29].   □
Now, consider $t_r = \|A\epsilon_r\|$ and $s_r = \|E_r\|$, where $\epsilon_r = A^{-1} - V_r$ is the error matrix and $E_r = I - AV_r$ is the residual matrix. In the following, we show the convergence properties of the iterative method, namely the behaviour of the sequences $t_r$ and $s_r$.
Corollary 1.
Assume that the conditions of (9) hold. If $\lim_{r\to\infty} t_r = 0$ and $\lim_{r\to\infty} s_r = 0$, then, for the iterative method (22), it yields
$$\lim_{r\to\infty}\frac{t_{r+1}}{t_r^3} = \lim_{r\to\infty}\frac{s_{r+1}}{s_r^3} = \frac{3}{4}.$$
Proof. 
From Theorem 2, we have
$$A\epsilon_{r+1} = \frac{3}{4}(A\epsilon_r)^3 - \frac{23}{4}(A\epsilon_r)^4 + 6(A\epsilon_r)^5.$$
Consequently,
$$t_{r+1} = \|A\epsilon_{r+1}\| \geq \frac{3}{4}\|A\epsilon_r\|^3 - \frac{23}{4}\|A\epsilon_r\|^4 + 6\|A\epsilon_r\|^5 = t_r^3\left(\frac{3}{4} - \frac{23}{4}t_r + 6t_r^2\right),$$
and
$$t_{r+1} = \|A\epsilon_{r+1}\| \leq \frac{3}{4}\|A\epsilon_r\|^3 + \frac{23}{4}\|A\epsilon_r\|^4 + 6\|A\epsilon_r\|^5 = t_r^3\left(\frac{3}{4} + \frac{23}{4}t_r + 6t_r^2\right),$$
which implies that
$$\frac{3}{4} - \frac{23}{4}t_r + 6t_r^2 \leq \frac{t_{r+1}}{t_r^3} \leq \frac{3}{4} + \frac{23}{4}t_r + 6t_r^2, \quad \text{so} \quad \lim_{r\to\infty}\frac{t_{r+1}}{t_r^3} = \frac{3}{4}.$$
Again, by an argument similar to Theorem 2, we have
$$\frac{3}{4} - \frac{23}{4}s_r + 6s_r^2 \leq \frac{s_{r+1}}{s_r^3} \leq \frac{3}{4} + \frac{23}{4}s_r + 6s_r^2, \quad \text{so} \quad \lim_{r\to\infty}\frac{s_{r+1}}{s_r^3} = \frac{3}{4}.$$
   □
Theorem 3.
Suppose that $A$ is a nonsingular matrix. If $AV_0 = V_0A$, then for the sequence (22), we have
$$AV_i = V_iA, \quad i = 1, 2, \ldots$$
Proof. 
The proof is similar to the proofs presented in [30].    □
Lemma 1.
For the sequence $\{V_k\}_{k=0}^{\infty}$ generated by the iterative method (22), it holds that
$$(V_kA)^* = V_kA, \quad (AV_k)^* = AV_k, \quad V_kAA^\dagger = V_k, \quad A^\dagger AV_k = V_k.$$
Proof. 
The proof is similar to Lemma 2.1 in [31].    □
Theorem 4.
According to the same assumptions as in Theorem 2, the iterative method (22) is asymptotically stable.
Proof. 
This theorem is similar to those adopted for a general family of methods in [32]. Thus, the proof is omitted.    □
Lemma 2
([33]). For $M \in \mathbb{C}^{n\times n}$ and any given $\xi > 0$, there is at least one matrix norm $\|\cdot\|$ such that
$$\rho(M) \leq \|M\| \leq \rho(M) + \xi,$$
where $\rho(M) = \max_i|\lambda_i|$ and $\lambda_i$ are the eigenvalues of the matrix $M$.
Lemma 3
([34]). For $P, S \in \mathbb{C}^{n\times n}$ such that $P = P^2$ and $PS = SP$, it holds that
$$\rho(PS) \leq \rho(S).$$
Theorem 5.
Let $A \in \mathbb{C}_r^{m\times n}$ and let us consider that $\sigma_1 > \sigma_2 > \cdots > \sigma_r > 0$ are the singular values of $A$. Then, (22) converges to the Moore–Penrose inverse $A^\dagger$ in the third order, provided that $V_0 = \frac{A^*}{C}$, where $C > \sigma_1^2$ is a constant.
Proof. 
According to Lemma 1, we have
$$\|V_{r+1} - A^\dagger\| = \|V_{r+1}AA^\dagger - A^\dagger AA^\dagger\| \leq \|V_{r+1}A - A^\dagger A\|\,\|A^\dagger\|,$$
and if $E_r^A = V_rA - A^\dagger A$, then $A^\dagger A\,E_r^A = E_r^A$. From the conditions of the Moore–Penrose inverse and from (22), we have
$$(I - A^\dagger A)^t = I - A^\dagger A, \quad t = 2, 3, \ldots, \qquad (I - A^\dagger A)E_r^A = 0, \qquad E_r^A(I - A^\dagger A) = 0,$$
and
$$\begin{aligned}
E_{r+1}^A &= \frac{1}{4}V_r\left(37I - 111AV_r + 151(AV_r)^2 - 97(AV_r)^3 + 24(AV_r)^4\right)A - A^\dagger A\\
&= -(I - V_rA)^3\left[\frac{3}{4}I - \frac{23}{4}(I - V_rA) + 6(I - V_rA)^2\right] + I - A^\dagger A\\
&= -(I - A^\dagger A - E_r^A)^3\left[\frac{3}{4}I - \frac{23}{4}(I - A^\dagger A - E_r^A) + 6(I - A^\dagger A - E_r^A)^2\right] + I - A^\dagger A\\
&= -\left[(I - A^\dagger A) - 3(I - A^\dagger A)E_r^A + 3(I - A^\dagger A)(E_r^A)^2 - (E_r^A)^3\right]\\
&\quad\times\left(\frac{3}{4}I - \frac{23}{4}(I - A^\dagger A) + \frac{23}{4}E_r^A + 6(I - A^\dagger A) - 12(I - A^\dagger A)E_r^A + 6(E_r^A)^2\right) + I - A^\dagger A.
\end{aligned}$$
So, it is proved that
$$E_{r+1}^A = \left[\frac{3}{4}I + \frac{23}{4}E_r^A + 6(E_r^A)^2\right](E_r^A)^3.$$
Now, consider $P = A^\dagger A$ and $S = V_0A - I$, so that $P^2 = P$ and
$$PS = A^\dagger A(V_0A - I) = A^\dagger AV_0A - A^\dagger A = (A^\dagger A)^*V_0A - A^\dagger A = V_0A - A^\dagger A = V_0AA^\dagger A - A^\dagger A = (V_0A - I)A^\dagger A = SP.$$
Therefore, according to Lemma 3,
$$\rho(V_0A - A^\dagger A) = \rho\left(\frac{A^*A}{C} - A^\dagger A\right) \leq \max_{1\leq i\leq r}\left|1 - \frac{\lambda_i(A^*A)}{C}\right|.$$
Since $C > \sigma_1^2$ and $\lambda_i(A^*A) = \sigma_i^2$, we have
$$\max_{1\leq i\leq r}\left|1 - \frac{\sigma_i^2}{C}\right| < 1,$$
and from Lemma 2,
$$\|V_0A - A^\dagger A\| \leq \rho(V_0A - A^\dagger A) + \xi < 1.$$
Consequently, according to (34) and (37), we obtain $\lim_{k\to\infty}\|V_kA - A^\dagger A\| = 0$ with the third order. □
Theorem 6.
The sequence $V_r$ produced by (22) and with (9) satisfies
$$R(V_r) = R(A^*), \quad N(V_r) = N(A^*),$$
for $r \geq 0$, where $R(\cdot)$ and $N(\cdot)$ denote the range and the null space of the matrix, respectively.
Proof. 
Since $V_0 = \alpha A^*$, the theorem obviously holds for $r = 0$. Suppose that $y \in N(V_r)$ is an arbitrary vector. According to the method (22), we have
$$V_{r+1}y = \frac{1}{4}\left[37V_ry - 111V_rAV_ry + 151V_r(AV_r)^2y - 97V_r(AV_r)^3y + 24V_r(AV_r)^4y\right] = 0.$$
Hence $y \in N(V_{r+1})$, and we can conclude that $N(V_r) \subseteq N(V_{r+1})$. Similarly, we have $R(V_{r+1}) \subseteq R(V_r)$. Therefore, by mathematical induction, we can write
$$N(V_r) \supseteq N(V_0) = N(A^*), \quad R(V_r) \subseteq R(V_0) = R(A^*).$$
To prove the equality, let
$$N = \bigcup_{r\in\mathbb{N}_0} N(V_r).$$
Suppose that $y \in N$. Then, $y \in N(V_{r_0})$ for some $r_0 \in \mathbb{N}_0$. Since $y \in N(V_r)$ for every $r \geq r_0$, then $V_ry = 0$, and according to Theorem 2,
$$A^\dagger y = \lim_{r\to+\infty}V_ry = 0.$$
Finally, $y \in N(A^\dagger) = N(A^*)$ and $N \subseteq N(A^*)$. On the other hand, it comes to be that
$$N(A^*) \subseteq N(V_r) \subseteq N \subseteq N(A^*),$$
and so $N(V_r) = N(A^*)$.
Now, according to the relation
$$\dim R(V_r) = m - \dim N(V_r) = m - \dim N(A^*) = \dim R(A^*),$$
and $R(V_r) \subseteq R(A^*)$, we conclude that $R(V_r) = R(A^*)$. □
Theorem 7.
Let $\{V_k\}_{k=0}^{\infty}$ be generated by method (22). For all $\hat{V}_k$ such that
$$\hat{V}_k = V_k + \Delta_k,$$
where $\Delta_k$ is a numerical perturbation of the $k$-th exact iterate $V_k$ with a sufficiently small norm (so that quadratic and higher order terms in $O(\|\Delta_k\|^2)$ can be ignored), one has
$$\|\Delta_{k+1}\| < \|\Delta_k\|\,\|A\|\,\|V_k\| + O(\|\Delta_k\|).$$
Proof. 
Let $\hat{E}_k = I - A\hat{V}_k$. Then,
$$\|\hat{E}_k^j\| = \|(E_k - A\Delta_k)^j\| \leq \|E_k - A\Delta_k\|^j \leq \left(\|E_k\| + \|A\Delta_k\|\right)^j = C_0^j, \quad j = 1, 2, 3, \ldots,$$
where $C_0 = \|E_k\| + \|A\Delta_k\| = \|E_k\| + O(\|\Delta_k\|)$. Furthermore,
$$\|\hat{E}_k^j - E_k^j\| = \|(E_k - A\Delta_k)^j - E_k^j\| \leq \|A\Delta_k\|\sum_{i=0}^{j-1}\binom{j}{j-1-i}\|A\Delta_k\|^i\,\|E_k\|^{j-1-i},$$
meaning
$$\|\hat{E}_k^j - E_k^j\| \leq T_j\,\|A\Delta_k\|,$$
where
$$T_j = \sum_{i=0}^{j-1}\binom{j}{j-1-i}\|A\Delta_k\|^i\,\|E_k\|^{j-1-i} = j\|E_k\|^{j-1} + O(\|\Delta_k\|),$$
and we have
$$\begin{aligned}
\Delta_{k+1} &= \hat{V}_{k+1} - V_{k+1}\\
&= \frac{1}{4}\hat{V}_k\left(37I - 111A\hat{V}_k + 151(A\hat{V}_k)^2 - 97(A\hat{V}_k)^3 + 24(A\hat{V}_k)^4\right)\\
&\quad - \frac{1}{4}V_k\left(37I - 111AV_k + 151(AV_k)^2 - 97(AV_k)^3 + 24(AV_k)^4\right)\\
&= \frac{1}{4}\hat{V}_k\left(4I + 4\hat{E}_k + 4\hat{E}_k^2 + \hat{E}_k^3 + 24\hat{E}_k^4\right) - \frac{1}{4}V_k\left(4I + 4E_k + 4E_k^2 + E_k^3 + 24E_k^4\right)\\
&= \Delta_k\left(I + \hat{E}_k + \hat{E}_k^2 + \frac{1}{4}\hat{E}_k^3 + 6\hat{E}_k^4\right) + V_k\left[(\hat{E}_k - E_k) + (\hat{E}_k^2 - E_k^2) + \frac{1}{4}(\hat{E}_k^3 - E_k^3) + 6(\hat{E}_k^4 - E_k^4)\right].
\end{aligned}$$
Therefore,
$$\begin{aligned}
\|\Delta_{k+1}\| &\leq \|\Delta_k\|\left(1 + \|\hat{E}_k\| + \|\hat{E}_k^2\| + \frac{1}{4}\|\hat{E}_k^3\| + 6\|\hat{E}_k^4\|\right) + \|V_k\|\left(\|\hat{E}_k - E_k\| + \|\hat{E}_k^2 - E_k^2\| + \frac{1}{4}\|\hat{E}_k^3 - E_k^3\| + 6\|\hat{E}_k^4 - E_k^4\|\right)\\
&\leq \|\Delta_k\|\left(1 + C_0 + C_0^2 + \frac{1}{4}C_0^3 + 6C_0^4\right) + \|A\Delta_k\|\,\|V_k\|\left(T_1 + T_2 + \frac{1}{4}T_3 + 6T_4\right)\\
&< \|\Delta_k\|\,\|A\|\,\|V_k\| + O(\|\Delta_k\|).
\end{aligned}$$
This expression yields the claimed estimates (50) for the numerical perturbation at iteration loop k + 1 .    □
Corollary 2.
The computational cost of the iterative method (22) is $O(n^3)$.
Proof. 
To calculate the computational cost of the suggested method, the following facts hold. Suppose that $Q_{n\times n}$ and $R_{n\times n}$ are given matrices. Then, $n^3$ operations are needed to compute the product $Q_{n\times n}R_{n\times n}$, $n^2$ operations are needed for $Q_{n\times n} + R_{n\times n}$ and for $\beta Q_{n\times n}$, and $n$ operations for $\gamma I_{n\times n} + Q_{n\times n}$. Consequently, the sum of all required operations in (22) is $4n^3 + 6n^2 + 2n$, that is, $O(n^3)$. □
Table 2 lists the convergence order (CO), the number of operations for $Q_{n\times n}R_{n\times n}$ (MM), $Q_{n\times n} + R_{n\times n}$ (PM), $\beta Q_{n\times n}$ (SM), and $\gamma I_{n\times n} + Q_{n\times n}$ (IM), as well as the computational cost (CC), in every iteration of the methods under comparison.

4. Application in Finding the Drazin Inverse

Drazin inverses were first introduced by Drazin in the study of abstract ring theory and finite dimensional algebras. Later, the definition of Drazin inverses was generalized to bounded linear operators in Banach spaces and was used to study linear abstract differential equations in Banach spaces [9,35]. Some of the most important applications of the Drazin inverse are Markov chains, control theory, singular differential and difference equations, and iterative methods in numerical linear algebra [36,37,38].
Definition 1.
The smallest non-negative integer $k = \mathrm{ind}(A)$ such that
$$\mathrm{rank}(A^{k+1}) = \mathrm{rank}(A^k)$$
is called the index of the matrix $A$.
Definition 2.
Suppose that $A \in \mathbb{C}^{n\times n}$. Then, the Drazin inverse of $A$, denoted by $A^D$, is the matrix $V$ which satisfies the following equations:
$$A^kVA = A^k, \quad VAV = V, \quad AV = VA,$$
where $k = \mathrm{ind}(A)$.
Li and Wei [39] proved that $NM$ can be used to find the Drazin inverse of square matrices, and they proposed the initial matrix
$$V_0 = W_0 = \beta A^l, \quad l \geq \mathrm{ind}(A) = k,$$
where $\beta$ must satisfy the condition $\|I - AV_0\| < 1$.
We now consider the iterative method (22) for finding the Drazin inverse, with the initial matrix
$$V_0 = W_0 = \frac{2}{\mathrm{tr}(A^{k+1})}A^k,$$
where $\mathrm{tr}(\cdot)$ stands for the trace of the matrix.
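A minimal sketch of this construction in Python/NumPy (our own illustration; the helper names `ind` and `drazin` are hypothetical) computes the index, builds $W_0$ as above, and then applies the update (22):

```python
import numpy as np

def ind(A, tol=1e-12):
    """Index of A: smallest k >= 0 with rank(A^(k+1)) == rank(A^k)."""
    k, Ak = 0, np.eye(A.shape[0])
    while np.linalg.matrix_rank(Ak @ A, tol=tol) != np.linalg.matrix_rank(Ak, tol=tol):
        Ak = Ak @ A
        k += 1
    return k

def drazin(A, tol=1e-10, max_iter=100):
    k = ind(A)
    Ak = np.linalg.matrix_power(A, k)
    V = 2.0 / np.trace(Ak @ A) * Ak       # W0 = 2 A^k / tr(A^(k+1))
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        T = A @ V
        V_new = 0.25 * V @ (37*I - 111*T + (T @ T) @ (151*I - 97*T + 24*(T @ T)))
        if np.linalg.norm(V_new - V) / (1 + np.linalg.norm(V)) < tol:
            break
        V = V_new
    return V_new
```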
Proposition 1
([40]). Let $P_{L,M}$ be the projector on a space $L$ along a space $M$. Then,
(i) $P_{L,M}Q = Q \Longleftrightarrow R(Q) \subseteq L$;
(ii) $QP_{L,M} = Q \Longleftrightarrow N(Q) \supseteq M$.
Theorem 8.
Suppose that $A$ is a singular square matrix. Additionally, let the initial matrix be chosen by (60). Then, for $W_r$ generated by the iterative method (22), the following asymptotic error estimate holds for finding the Drazin inverse:
$$\|A^D - W_r\| \leq O\left(\|A^D\|\,\|F_0\|^{3^r}\right),$$
where $F_0 = I - AW_0$.
Proof. 
Let $F_0 = I - AW_0$ and, in general, $F_r = I - AW_r$. Thus, we have
$$F_{r+1} = I - AW_{r+1} = (I - AW_r)^3\left[\frac{3}{4}I - \frac{23}{4}(I - AW_r) + 6(I - AW_r)^2\right] = \frac{3}{4}F_r^3 - \frac{23}{4}F_r^4 + 6F_r^5.$$
Using an arbitrary matrix norm in (62) results in
$$\|F_{r+1}\| \leq \frac{3}{4}\|F_r\|^3 + \frac{23}{4}\|F_r\|^4 + 6\|F_r\|^5.$$
Here, since $\|F_0\| < 1$, from relation (63), we have
$$\|F_1\| \leq \frac{3}{4}\|F_0\|^3 + \frac{23}{4}\|F_0\|^4 + 6\|F_0\|^5 \leq O(\|F_0\|^3).$$
By continuing this process, we arrive at
$$\|F_{r+1}\| \leq \frac{3}{4}\|F_r\|^3 + \frac{23}{4}\|F_r\|^4 + 6\|F_r\|^5 \leq O(\|F_r\|^3).$$
Thus, $\|F_{r+1}\| \leq O(\|F_r\|^3)$ for every $r \geq 0$. Therefore, we obtain
$$\|F_r\| \leq O\left(\|F_0\|^{3^r}\right), \quad r \geq 0.$$
According to relation (59), we have $R(W_0) \subseteq R(A^k)$. In addition, the use of this result together with (21) implies that $R(W_r) \subseteq R(W_{r-1})$, and we can write
$$R(W_r) \subseteq R(A^k), \quad r \geq 0.$$
On the other hand, we have
$$W_{r+1} = \frac{1}{4}W_r\left(37I - 111AW_r + 151(AW_r)^2 - 97(AW_r)^3 + 24(AW_r)^4\right).$$
It is straightforward to verify that
$$N(W_r) \supseteq N(A^k), \quad r \geq 0.$$
According to Ben-Israel et al. [41], one can readily show that
$$AA^D = A^DA = P_{R(A^k),\,N(A^k)},$$
and from Proposition 1 and expressions (67) and (69), we have
$$W_rAA^D = W_r = A^DAW_r, \quad r \geq 0.$$
Therefore, if the error matrix is $\delta_r = A^D - W_r$, then it follows that
$$\delta_r = A^D - W_r = A^D - A^DAW_r = A^D(I - AW_r) = A^DF_r,$$
and from (72) and (66), we have
$$\|\delta_r\| = \|A^DF_r\| \leq O\left(\|A^D\|\,\|F_0\|^{3^r}\right),$$
which completes the proof.    □
Corollary 3.
Assume that the conditions of Theorem 8 and the following stabilization condition
$$\|F_0\| = \|I - AW_0\| < 1$$
are satisfied. Then, expression (22) converges to $A^D$.
Theorem 9.
(Stability) Suppose the same assumptions as in Theorem 8 hold. Then, the iterative method (22) has asymptotic stability for finding the Drazin inverse.
Proof. 
The proof of asymptotic stability of the iterative method (22) is similar to that in [32]. Thus, the proof is omitted.    □

5. Error Measurement

If the quantity $\bar{V}$ is viewed as an approximation to $V$, then the absolute ($e_A$) and relative ($e_R$) errors in the approximation are defined as
$$e_A = |V - \bar{V}|,$$
and
$$e_R = \frac{|V - \bar{V}|}{|V|},$$
respectively. However, the absolute error is not useful for large sets, and the relative error can sometimes be misleading when $|V|$ is small. To avoid the need to choose between the absolute and relative errors, the following mixed error measure is often used in practice:
$$e = \frac{|V - \bar{V}|}{1 + |V|}.$$
The value of $e$ in (77) is similar to the absolute error $e_A$ when $|V| \ll 1$ and to the relative error $e_R$ when $|V| \gg 1$ [42].
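For matrices, the same measure is applied with a norm in place of the absolute value. A one-line sketch (ours) of the resulting stopping test used in the next section:

```python
import numpy as np

def mixed_error(V, V_bar):
    """Mixed absolute/relative error e = ||V - V_bar|| / (1 + ||V||)."""
    return np.linalg.norm(V - V_bar) / (1 + np.linalg.norm(V))
```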

6. Numerical Results

In this section, we compare the results of the proposed approach with other schemes available in the literature. Since the comparison reported in [7] shows that the method $E_2$ has better performance than other quadratic schemes, we compare our proposed method $E_3$ with $E_2$, $NM$, $CH$, $TS$, and $E_1$. According to (77), the stop criterion is
$$\frac{\|V_{r+1} - V_r\|}{1 + \|V_r\|} < 10^{-10}.$$
We denote by CPU the required calculation time using Mathematica and by MM the number of matrix–matrix products. Furthermore, for computing the inverse of a matrix based on Method (22), the procedure is presented in Algorithm 1.
Algorithm 1 Method (22) for computing the inverse of a matrix
Step 1: Input matrix $A \in \mathbb{C}^{n\times n}$.
Step 2: Take the initial matrix $V_0 = \frac{1}{\|A\|_1\|A\|_\infty}A^*$ and the tolerance $\varepsilon > 0$. Set $r := 0$.
Step 3: Let
$$\vartheta_r = AV_r, \quad \xi_r = \vartheta_r^2, \quad V_{r+1} = \frac{1}{4}V_r\left(37I - 111\vartheta_r + \xi_r(151I - 97\vartheta_r + 24\xi_r)\right).$$
Step 4: Stop if $\frac{\|V_{r+1} - V_r\|}{1 + \|V_r\|} \leq \varepsilon$. Otherwise, set $r := r + 1$ and go to Step 3.
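A direct transcription of Algorithm 1 into Python/NumPy (a sketch under the paper's notation; the function name is ours) reads:

```python
import numpy as np

def e3_inverse(A, eps=1e-10, max_iter=100):
    """Method (22): third-order iteration for the approximate inverse of A."""
    I = np.eye(A.shape[0])
    V = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # Step 2
    for r in range(max_iter):
        th = A @ V                       # theta_r = A V_r
        xi = th @ th                     # xi_r = theta_r^2
        V_new = 0.25 * V @ (37*I - 111*th + xi @ (151*I - 97*th + 24*xi))  # Step 3
        if np.linalg.norm(V_new - V) / (1 + np.linalg.norm(V)) <= eps:     # Step 4
            return V_new
        V = V_new
    return V

A = np.array([[2.35, 1.00, 0.00],
              [1.85, 2.35, 1.00],
              [0.00, 1.85, 2.35]])       # small tri-diagonal test matrix (ours)
V = e3_inverse(A)
print(np.linalg.norm(A @ V - np.eye(3)))  # residual near machine precision
```

Note the four matrix–matrix products per iteration, matching the MM count for $E_3$ in Table 2.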
Example 1.
Consider a real-valued tri-diagonal matrix with dimension 1000 × 1000 , where the diagonals are as follows:
( 1 , 360 ) = 2.35 , ( 1 , 1 ) = 2.35 , ( 700 , 1 ) = 1.85 .
Example 2.
Consider the complex-valued tri-diagonal matrix with dimension 1000 × 1000 , where the diagonals are as follows:
( 1 , 280 ) = 0.9 0.45 i , ( 1 , 1 ) = 1.25 + 0.14 i , ( 850 , 1 ) = 2.25 + 0.6 i .
Example 3.
Consider the complex-valued tri-diagonal matrix with dimension 2000 × 2000 , where the diagonals are as follows:
( 1 , 420 ) = 6.5 + 0.25 i , ( 1 , 1 ) = 1.5 + 2.25 i , ( 1650 , 1 ) = 2.5 2 i .
The results of Examples 1–3 are presented in Table 3, Table 4 and Table 5 and Figure 1, Figure 2 and Figure 3.
Example 4.
To evaluate the efficiency of the proposed method, we consider several real and complex random matrices with different dimensions. For each of sizes n × n and n × ( n + 20 ) , n = 200 , 400 , 600 , 800 , 1000 , we perform 5 random tests and compare average values of matrix multiplications. The results are presented in Figure 4 and Figure 5.
Example 5
([43]). Consider tri-diagonal matrices, where the diagonals are as follows:
( 1 , 2 ) = 1 , ( 1 , 1 ) = 0 , ( 2 , 1 ) = 1 .
The dimension of the matrices is an odd number, and the matrices are singular with i n d ( A ) = 1 . The results for computing the Drazin inverse matrices for n = 109 , 299 , 499 are presented in Table 6 and Figure 6.

7. Application

The proposed method can be used to compute the approximate inverse (i.e., the iterative algorithm (22)) when dealing with large sparse matrices arising from the discretization of linear partial differential equations (PDE) or FDE. Therefore, we consider the following PDE and FDE discussed previously in [44,45], using the iterative method (22). The computational performance of the suggested iterative method confirms the applicability and validity of the proposed strategy.
Example 6
([46]). Consider the fractional elliptic Poisson equation
$$\frac{\partial^\gamma \xi(x,y)}{\partial x^\gamma} + \frac{\partial^\gamma \xi(x,y)}{\partial y^\gamma} = g(x,y),$$
$$\xi(0,y) = \phi_1(y), \quad \xi(1,y) = \phi_2(y), \quad \xi(x,0) = \psi_1(x), \quad \xi(x,1) = \psi_2(x),$$
where $0 \leq x, y \leq 1$, and two cases:
(a) $g(x,y) = \Gamma(\gamma+1)\left(x^\gamma + y^\gamma\right)$, for $0 < \gamma \leq 2$ and
$$\phi_1(y) = \psi_1(x) = 0, \quad \phi_2(y) = y^\gamma, \quad \psi_2(x) = x^\gamma,$$
(b) $g(x,y) = \sin(\pi x)\cos(\pi y)$, for $\gamma = 2$, and
$$\phi_1(y) = \phi_2(y) = \psi_1(x) = \psi_2(x) = 0,$$
where the fractional derivatives $\frac{\partial^\gamma \xi(x,y)}{\partial x^\gamma}$ and $\frac{\partial^\gamma \xi(x,y)}{\partial y^\gamma}$ of order $\gamma$ are formulated in the Caputo sense. For solving Equation (84), we use the centred finite difference for both terms. Therefore, for case (a) we have
$$\frac{\partial^\gamma \xi(x,y)}{\partial x^\gamma}\bigg|_{(x_i,y_j)} \approx \frac{\gamma!\,(\xi_{i+1,j} - \xi_{i-1,j})}{2h^\gamma}, \qquad \frac{\partial^\gamma \xi(x,y)}{\partial y^\gamma}\bigg|_{(x_i,y_j)} \approx \frac{\gamma!\,(\xi_{i,j+1} - \xi_{i,j-1})}{2k^\gamma}.$$
For case (b), we have
$$\frac{\partial^2 \xi(x,y)}{\partial x^2}\bigg|_{(x_i,y_j)} \approx \frac{\xi_{i-1,j} - 2\xi_{i,j} + \xi_{i+1,j}}{h^2}, \qquad \frac{\partial^2 \xi(x,y)}{\partial y^2}\bigg|_{(x_i,y_j)} \approx \frac{\xi_{i,j-1} - 2\xi_{i,j} + \xi_{i,j+1}}{k^2}.$$
Furthermore, the values $h = \frac{1}{p}$ and $k = \frac{1}{q}$ are adopted for the step size along the space $x$ and $y$ coordinates, respectively.
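For case (b) the scheme reduces to the standard five-point Laplacian, so the linear system is easy to assemble. The sketch below (ours, reusing the hypothetical `e3_inverse` from the Algorithm 1 sketch) builds the discretization matrix via Kronecker sums and solves the system with the computed approximate inverse:

```python
import numpy as np

p = q = 20
h = 1.0 / p
N = p - 1                        # interior points per direction

# 1D second-difference matrix; 2D five-point Laplacian via Kronecker sums
T = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
L = np.kron(np.eye(N), T) + np.kron(T, np.eye(N))

x = y = np.linspace(h, 1 - h, N)
X, Y = np.meshgrid(x, y, indexing="ij")
g = (np.sin(np.pi * X) * np.cos(np.pi * Y)).ravel()   # right-hand side, zero BCs

xi = e3_inverse(L, max_iter=500) @ g   # solve L xi = g via the approximate inverse
```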
The results of Example 6 are presented in Table 7 and Table 8 and Figure 7 and Figure 8.
Example 7
([47]). Consider the fractional sub-diffusion equation
$$\frac{\partial^\gamma \xi(x,t)}{\partial t^\gamma} - \frac{\partial^2 \xi(x,t)}{\partial x^2} = f(x,t),$$
$$\xi(0,t) = \xi(1,t) = 0, \quad 0 < t \leq 1, \quad 0 < \gamma \leq 1, \qquad \xi(x,0) = 0, \quad 0 < x < 1,$$
where the fractional derivative $\frac{\partial^\gamma \xi(x,t)}{\partial t^\gamma}$ of order $\gamma$ is formulated in the Caputo sense. According to [47], we use the finite difference for approximating the derivatives $\frac{\partial^\gamma \xi(x,t)}{\partial t^\gamma}$ and $\frac{\partial^2 \xi(x,t)}{\partial x^2}$, so that
$$\frac{\partial^\gamma \xi(x,t)}{\partial t^\gamma}\bigg|_{(x_i,t_j)} \approx \frac{\gamma!\,(\xi_i^{j+1} - \xi_i^j)}{k^\gamma},$$
and
$$\frac{\gamma!\,(\xi_i^{j+1} - \xi_i^j)}{k^\gamma} = \theta\,\frac{\xi_{i+1}^{j+1} - 2\xi_i^{j+1} + \xi_{i-1}^{j+1}}{h^2} + (1-\theta)\,\frac{\xi_{i+1}^j - 2\xi_i^j + \xi_{i-1}^j}{h^2} + \theta f_i^{j+1} + (1-\theta)f_i^j,$$
where $\xi_i^j = \xi(x_i, t_j)$ and $f_i^j = f(x_i, t_j)$, and where $k = \frac{1}{p}$ and $h = \frac{1}{q}$ are the step sizes along time $t$ and space $x$, respectively. In this example, we examine two cases:
(a) $f(x,t) = \frac{2}{\Gamma(3-\gamma)}t^{2-\gamma}\sin(2\pi x) + 4\pi^2t^2\sin(2\pi x)$,
(b) $f(x,t) = \frac{3}{4}\Gamma\left(\frac{1}{2}\right)t\,x^4(x-1) - 4x^2(5x-3)t^{\frac{3}{2}}$, with non-smooth solution at $t = 0$ for $\gamma = 0.5$.
The results of Example 7 are presented in Table 9 and Table 10 and Figure 9 and Figure 10.
As we know, fractional-order derivatives in partial differential equations are non-local. This means that the matrix obtained by discretizing the spatial fractional-order derivatives is generally dense and often Toeplitz-like [48,49,50,51]. In Examples 6 and 7, we adopted a very simple numerical approximation to the fractional operators, and the discretized matrices are sparse. In the follow-up, we give an example using a more elaborate approximation that leads to a dense matrix. The obtained results show a clear superiority of our proposed iterative scheme.
Example 8.
Consider the Riesz fractional diffusion equation [50]
$$\frac{\partial \xi(x,t)}{\partial t} = \kappa_\gamma\frac{\partial^\gamma \xi(x,t)}{\partial |x|^\gamma} + f(x,t), \quad (x,t) \in (0,1)\times(0,1], \quad \kappa_\gamma > 0,$$
$$\xi(x,0) = 15\left(1 + \frac{\gamma}{4}\right)x^3(1-x)^3, \quad x\in[0,1], \qquad \xi(0,t) = \xi(1,t) = 0, \quad t\in[0,1].$$
The Riesz fractional derivative $\frac{\partial^\gamma \xi(x,t)}{\partial |x|^\gamma}$ is defined by [52]
$$\frac{\partial^\gamma \xi(x,t)}{\partial |x|^\gamma} = -\frac{1}{2\cos\left(\frac{\pi\gamma}{2}\right)}\cdot\frac{1}{\Gamma(2-\gamma)}\cdot\frac{\partial^2}{\partial x^2}\int_a^b\frac{\xi(\zeta,t)}{|x-\zeta|^{\gamma-1}}\,d\zeta = -\frac{1}{2\cos\left(\frac{\pi\gamma}{2}\right)}\left({}_aD_x^\gamma\,\xi(x,t) + {}_xD_b^\gamma\,\xi(x,t)\right), \quad \gamma\in(1,2),$$
in which ${}_aD_x^\gamma$ and ${}_xD_b^\gamma$ are (11) and (12) for $m = 2$.
According to [50], the first-order time derivative at the point $t = t_j$ is approximated by the second-order backward difference formula:
$$\frac{\partial \xi(x,t)}{\partial t}\bigg|_{(x_i,t_j)} \approx \begin{cases}\dfrac{\xi_i^{j+1} - \xi_i^j}{k}, & j = 1,\\[2mm] \dfrac{3\xi_i^j - 4\xi_i^{j-1} + \xi_i^{j-2}}{2k}, & j \geq 2,\end{cases}$$
and also, for any function $\xi(x) \in L_1(\mathbb{R})$, we have
$$\Delta_h^\gamma \xi(x) = \frac{1}{h^\gamma}\sum_{l=-\left[\frac{b-x}{h}\right]}^{\left[\frac{x-a}{h}\right]}\omega_l^{(\gamma)}\,\xi(x - lh), \quad x \in \mathbb{R},$$
where the $\gamma$-dependent weight coefficient is defined as
$$\omega_l^{(\gamma)} = \frac{(-1)^l\,\Gamma(1+\gamma)}{\Gamma\left(\frac{\gamma}{2} - l + 1\right)\Gamma\left(\frac{\gamma}{2} + l + 1\right)}, \quad l \in \mathbb{Z}.$$
Then, for a fixed $h$, the fractional centred difference operator in (95) satisfies
$$\frac{\partial^\gamma \xi(x)}{\partial |x|^\gamma} = -\Delta_h^\gamma \xi(x) + O(h^2),$$
so that $-\kappa_\gamma\Delta_h^\gamma\xi_i^j$ can be written into the matrix–vector product form $A\xi^j$, with
$$A = -\kappa_\gamma T_x = -\frac{\kappa_\gamma}{h^\gamma}\begin{pmatrix}\omega_0^{(\gamma)} & \omega_{-1}^{(\gamma)} & \omega_{-2}^{(\gamma)} & \cdots & \omega_{3-N}^{(\gamma)} & \omega_{2-N}^{(\gamma)}\\ \omega_1^{(\gamma)} & \omega_0^{(\gamma)} & \omega_{-1}^{(\gamma)} & \cdots & \omega_{4-N}^{(\gamma)} & \omega_{3-N}^{(\gamma)}\\ \omega_2^{(\gamma)} & \omega_1^{(\gamma)} & \omega_0^{(\gamma)} & \cdots & \omega_{5-N}^{(\gamma)} & \omega_{4-N}^{(\gamma)}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ \omega_{N-3}^{(\gamma)} & \omega_{N-4}^{(\gamma)} & \omega_{N-5}^{(\gamma)} & \cdots & \omega_0^{(\gamma)} & \omega_{-1}^{(\gamma)}\\ \omega_{N-2}^{(\gamma)} & \omega_{N-3}^{(\gamma)} & \omega_{N-4}^{(\gamma)} & \cdots & \omega_1^{(\gamma)} & \omega_0^{(\gamma)}\end{pmatrix}.$$
In [53], it was proven that $T_x$ is a symmetric positive definite Toeplitz matrix. The matrix–vector formulation for solving the model problem (92) can be written as follows:
$$\frac{\xi^1 - \xi^0}{k} - A\xi^1 = f^1, \qquad \frac{3\xi^j - 4\xi^{j-1} + \xi^{j-2}}{2k} - A\xi^j = f^j, \quad 2 \leq j \leq N_t.$$
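The weights and the Toeplitz matrix $T_x$ are straightforward to generate. A short sketch (ours) that avoids gamma functions of negative arguments by using the recurrence $\omega_{l+1}^{(\gamma)} = \omega_l^{(\gamma)}\,(l - \gamma/2)/(l + \gamma/2 + 1)$, which follows from the closed form above:

```python
import numpy as np
from scipy.special import gamma as Gamma

def riesz_weights(g, N):
    """Fractional centred-difference weights w_0, ..., w_{N-1} (w_{-l} = w_l)."""
    w = np.empty(N)
    w[0] = Gamma(1 + g) / Gamma(g / 2 + 1) ** 2
    for l in range(N - 1):
        # recurrence: w_{l+1} = w_l (l - g/2) / (l + g/2 + 1)
        w[l + 1] = w[l] * (l - g / 2) / (l + g / 2 + 1)
    return w

def toeplitz_Tx(g, N, h):
    w = riesz_weights(g, N)
    i = np.arange(N)
    return w[np.abs(i[:, None] - i[None, :])] / h ** g  # T_x[i, j] = w_{|i-j|} / h^g
```

With $T_x$ dense, each step of the implicit scheme requires a dense solve, which is exactly where the lower matrix–matrix product count of $E_3$ pays off.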
The results of Example 8 are presented in Table 11 and Figure 11.
Finally, as seen from Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11 and Figure 4 and Figure 5, the proposed method is faster than the other numerical methods in terms of the number of matrix–matrix products and the elapsed CPU time.

8. Concluding Remarks

The calculation of the matrix inverse poses challenges in the solution of many problems due to its high computational cost. Therefore, avoiding a direct calculation and using efficient iterative methods are key aspects in mathematical modelling. In this paper, we presented a novel iterative method for solving nonlinear equations. The algorithm has good performance in terms of computational efficiency in the calculation of the Moore–Penrose and Drazin inverses. The key performance aspects of the method can be outlined as:
  • Exhibits adequate results for specific real and complex matrices;
  • Provides optimal results for real and complex square and rectangular random matrices of different dimensions;
  • Shows a good feasibility for different dimensions when computing the Drazin inverse;
  • The solution of the fractional elliptic Poisson equation shows superior results to other schemes;
  • Yields good results for the solution of the fractional sub-diffusion equation for smooth and non-smooth solutions.
In synthesis, the results show that the theoretical findings are in accordance with numerical experiments, and we verified that the proposed algorithm is superior to others available in the literature. Finally, we point out that this new strategy has its own limitations and should be generalized and verified for more complicated linear and nonlinear problems. In other words, the present paper is only an introduction to the topic, and there remains much work to do.

Author Contributions

Conceptualization, K.S.; Formal analysis, K.S., A.P. and R.E.; Investigation, A.P., J.A.T.M. and R.E.; Methodology, K.S., A.P. and R.E.; Validation, J.A.T.M. and R.E.; Visualization, K.S.; Writing—original draft, K.S., A.P. and R.E.; Writing—review and editing, J.A.T.M. All authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Ph.D. thesis dissertations are public documents, but at the moment are not internet-accessible. If one is interested in revising the investigation, information can be requested from the authors.

Acknowledgments

The authors are thankful to the respected reviewers for their valuable comments and constructive suggestions towards the improvement of the original paper.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this article.

References

  1. Penrose, R. A generalized inverse for matrices. Proc. Camb. Philos. Soc. 1955, 51, 406–413.
  2. Kelley, C.T. Solving Nonlinear Equations with Newton's Method; SIAM: Philadelphia, PA, USA, 2003.
  3. Pan, V.Y. Newton's Iteration for Matrix Inversion, Advances and Extensions, Matrix Methods: Theory Algorithms and Applications; World Scientific: Singapore, 2010.
  4. Li, H.B.; Huang, T.Z.; Zhang, Y.; Liu, X.P.; Gu, T.X. Chebyshev-type methods and preconditioning techniques. Appl. Math. Comput. 2011, 218, 260–270.
  5. Toutounian, F.; Soleymani, F. An iterative method for computing the approximate inverse of a square matrix and the Moore–Penrose inverse of a non-square matrix. Appl. Math. Comput. 2013, 224, 671–680.
  6. Pan, V.Y.; Soleymani, F.; Zhao, L. Highly efficient computation of generalized inverse of a matrix. Appl. Math. Comput. 2018, 316, 89–101.
  7. Esmaeili, H.; Pirnia, A. An efficient quadratically convergent iterative method to find the Moore–Penrose inverse. Int. J. Comput. Math. 2017, 94, 1079–1088.
  8. Pan, V.Y.; Schreiber, R. An improved Newton iteration for the generalized inverse of a matrix with applications. SIAM J. Sci. Stat. Comput. 1991, 12, 1109–1131.
  9. Drazin, M.P. Pseudoinverses in associative rings and semigroups. Am. Math. Mon. 1958, 65, 506–514.
  10. Wilkinson, J.H. Note on the practical significance of the Drazin inverse. In Recent Applications of Generalized Inverses; Campbell, S.L., Ed.; Research Notes in Mathematics; Pitman Advanced Publishing Program: Boston, MA, USA, 1982; pp. 82–99.
  11. Liu, X.; Cai, N. High-order iterative methods for the DMP inverse. J. Math. 2018, 2018, 8175935.
  12. Mosic, D.; Djordjevic, D.S. Block representations of the generalized Drazin inverse. Appl. Math. Comput. 2018, 331, 200–209.
  13. Qiao, S.; Wei, Y. Acute perturbation of Drazin inverse and oblique projectors. Front. Math. China 2018, 13, 1427–1445.
  14. Wang, X.Z.; Ma, H.; Stanimirovic, P.S. Recurrent neural network for computing the W-weighted Drazin inverse. Appl. Math. Comput. 2017, 300, 1–20.
  15. Duarte, F.B.M.; Machado, J.T. Chaotic phenomena and fractional-order dynamics in the trajectory control of redundant manipulators. Nonlinear Dyn. 2002, 29, 315–342.
  16. Ferreira, N.M.F.; Duarte, F.B.M.; Lima, M.F.M.; Marcos, M.G.; Machado, J.T. Application of fractional calculus in the dynamical analysis and control of mechanical manipulators. Fract. Calc. Appl. Anal. 2008, 11, 91–113.
  17. Caputo, M. Linear models of dissipation whose Q is almost frequency independent—II. Geophys. J. R. Astron. Soc. 1967, 13, 529–539.
  18. Kiryakova, V. Generalized Fractional Calculus and Applications; John Wiley and Sons, Inc.: New York, NY, USA, 1993.
  19. Machado, J.T.; Kiryakova, V. The chronicles of fractional calculus. Fract. Calc. Appl. Anal. 2017, 20, 307–336.
  20. Machado, J.T.; Kiryakova, V.; Mainardi, F. Recent history of fractional calculus. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 1140–1153.
  21. Machado, J.T.; Mainardi, F.; Kiryakova, V. Fractional calculus: Quo vadimus? (where are we going?). Fract. Calc. Appl. Anal. 2015, 18, 495–526.
  22. Machado, J.T.; Mainardi, F.; Kiryakova, V.; Atanacković, T. Fractional calculus: D'oú venons-nous? Que sommes-nous? Oú allons-nous? (Contributions to Round Table Discussion held at ICFDA 2016). Fract. Calc. Appl. Anal. 2016, 19, 1074–1104.
  23. Xu, B.; Chen, D.; Zhang, H.; Zhou, R. Dynamic analysis and modeling of a novel fractional-order hydro-turbine-generator unit. Nonlinear Dyn. 2015, 81, 1263–1274.
  24. Xu, B.; Chen, D.; Zhang, H.; Wang, F. The modeling of the fractional-order shafting system for a water jet mixed-flow pump during the startup process. Commun. Nonlinear Sci. Numer. Simul. 2015, 29, 12–24.
  25. Dehghan, M.; Hajarian, M. Some derivative free quadratic and cubic convergence iterative formulas for solving nonlinear equations. Comput. Appl. Math. 2010, 29, 19–30.
  26. Dehghan, M.; Hajarian, M. New iterative method for solving nonlinear equations with fourth-order convergence. Int. J. Comput. Math. 2010, 87, 834–839.
  27. Erfanifar, R.; Sayevand, K.; Esmaeili, H. On modified two-step iterative method in the fractional sense: Some applications in real world phenomena. Int. J. Comput. Math. 2020, 97, 2109–2141.
  28. Sayevand, K.; Erfanifar, R.; Esmaeili, H. On computational efficiency and dynamical analysis for a class of novel multi-step iterative schemes. Int. J. Appl. Comput. Math. 2020, 6, 1–23.
  29. Li, W.; Li, Z. A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix. Appl. Math. Comput. 2010, 215, 3433–3442.
  30. Wu, X. A note on computational algorithm for the inverse of a square matrix. Appl. Math. Comput. 2007, 187, 962–964.
  31. Chen, H.; Wang, Y. A family of higher-order convergent iterative methods for computing the Moore–Penrose inverse. Appl. Math. Comput. 2011, 218, 4012–4016.
  32. Soleymani, F.; Stanimirovic, P.S. A note on the stability of a p-th order iteration for finding generalized inverses. Appl. Math. Lett. 2014, 28, 77–81.
  33. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK; New York, NY, USA; New Rochelle, NY, USA; Melbourne, Australia; Sydney, Australia, 1986.
  34. Stanimirovic, P.S.; Cvetkovic-Ilic, D.S. Successive matrix squaring algorithm for computing outer inverses. Appl. Math. Comput. 2008, 203, 19–29.
  35. King, C. A note on Drazin inverses. Pac. J. Math. 1977, 70, 383–390.
  36. Campbell, S.L. Singular Systems of Differential Equations; Research Notes in Mathematics; Pitman Advanced Publishing Program: Boston, MA, USA, 1980.
  37. Ren, D.G. Analysis and Design of Descriptor Linear Systems; Springer: New York, NY, USA, 2010.
  38. Kaczorek, T.; Borawski, K. Descriptor Systems of Integer and Fractional Orders; Studies in Systems, Decision and Control; Springer: Cham, Switzerland, 2021.
  39. Li, X.; Wei, Y. Iterative methods for the Drazin inverse of a matrix with a complex spectrum. Appl. Math. Comput. 2004, 147, 855–862.
  40. Wang, G.; Wei, Y.; Qiao, S. Generalized Inverses: Theory and Computations; Science Press: New York, NY, USA, 2004.
  41. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; Springer: New York, NY, USA, 2003.
  42. Gill, P.R.; Murray, W.; Wright, M.H. Numerical Linear Algebra and Optimization—Volume 1; Addison-Wesley: Redwood City, CA, USA, 1991.
  43. Toutounian, F.; Buzhabadi, R. New methods for computing the Drazin-inverse solution of singular linear systems. Appl. Math. Comput. 2017, 294, 343–352.
  44. Samko, S.G.; Kilbas, A.; Marichev, O. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach: Switzerland, 1993.
  45. Sayevand, K.; Machado, J.T.; Moradi, V. A new non-standard finite difference method for analyzing the fractional Navier–Stokes equations. Comput. Math. Appl. 2019, 78, 1681–1694.
  46. Youssef, I.K.; Dewaik, M.H.E. Solving Poisson's equations with fractional order using Haar wavelet. Appl. Math. Nonlinear Sci. 2017, 2, 271–284.
  47. Erfanifar, R.; Sayevand, K.; Ghanbari, N.; Esmaeili, H. A modified Chebyshev ϑ-weighted Crank–Nicolson method for analyzing fractional sub-diffusion equations. Numer. Methods Partial Differ. Equ. 2020, 13, 1–13.
  48. Mockary, S.; Babolian, E.; Vahidi, A.R. A fast numerical method for fractional partial differential equations. Adv. Differ. Equ. 2019.
  49. Gu, X.M.; Huang, T.Z.; Ji, C.C.; Carpentieri, B.; Alikhanov, A.A. Fast iterative method with a second-order implicit difference scheme for time-space fractional convection–diffusion equation. J. Sci. Comput. 2017, 72, 957–985.
  50. Gu, X.M.; Zhao, Y.L.; Zhao, X.L.; Carpentieri, B.; Huang, Y.Y. A note on parallel preconditioning for the all-at-once solution of Riesz fractional diffusion equations. Numer. Math. Theory Meth. Appl. 2021, 14, 893–919.
  51. Gu, X.M.; Wu, S.L. A parallel-in-time iterative algorithm for Volterra partial integro-differential problems with weakly singular kernel. J. Comput. Phys. 2020, 417, 109576.
  52. Yang, Q.; Liu, F.; Turner, I. Numerical methods for fractional partial differential equations with Riesz space fractional derivatives. Appl. Math. Model. 2010, 34, 200–218.
  53. Çelik, C.; Duman, M. Crank–Nicolson method for the fractional diffusion equation with the Riesz fractional derivative. J. Comput. Phys. 2012, 231, 1743–1750.
Figure 1. Representation of the (a) matrix and (b) inverse matrix for Example (1).
Figure 2. Representation of the (a) matrix and (b) inverse matrix for Example (2).
Figure 3. Representation of the (a) matrix and (b) inverse matrix for Example (3).
Figure 4. The average MM for computing the Moore–Penrose inverse of real matrices (a) (n × n) and (b) (n × (n + 20)) by the methods {TS, NM, CH, E1, E2, E3}.
Figure 5. The average MM for computing the Moore–Penrose inverse of complex matrices (a) (n × n) and (b) (n × (n + 20)) by the methods {TS, NM, CH, E1, E2, E3}.
Figure 6. Representation of the (a) matrix and (b) Drazin inverse matrix for Example (5) with n = 299.
Figure 7. Representation of the (a) matrix and (b) inverse matrix for Example (6) case (a) when p = q = 17 and γ = 1.8.
Figure 8. Representation of the (a) matrix and (b) inverse matrix for Example (6) case (b) when p = q = 20.
Figure 9. Representation of the (a) matrix and (b) inverse matrix for Example (7) case (a) when p = q = 20.
Figure 10. Representation of the (a) matrix and (b) inverse matrix for Example (7) case (b) when p = q = 15.
Figure 11. Representation of the (a) matrix and (b) inverse matrix for Example (8) when p = q = 40.
Table 1. List of abbreviations and acronyms used in the paper.

Abbreviation   Description
NM             Newton's method (3)
CH             Chebyshev method (4)
TS             Method (5)
E1             Method (6)
E2             Method (8)
E3             Method (22)
CO             Convergence order of method
MM             Number of operations for Q(n×n) · R(n×n)
PM             Number of operations for Q(n×n) + R(n×n)
SM             Number of operations for β Q(n×n)
IM             Number of operations for γ I(n×n) + Q(n×n)
CPU            CPU time spent
PDE            Partial differential equation
FDE            Fractional differential equation
Table 2. Computational cost for every iteration of the methods.

Method   CO   MM   PM   SM   IM   CC
NM        2    2    0    0    1   2n³ + 2n
CH        3    3    0    0    2   3n³ + 2n
TS        4    5    0    1    4   5n³ + n² + 4n
E1       18    7    7    7    4   7n³ + 14n² + 4n
E2        2    3    0    1    2   3n³ + n² + 2n
E3        3    4    2    4    2   4n³ + 6n² + 2n
Table 3. Results of Example (1).

Method   NM      CH      TS      E1      E2      E3
MM       32      33      40      35      33      28
CPU      0.1089  0.1102  0.1590  0.1423  0.1124  0.0936
Table 4. Results of Example (2).

Method   NM      CH      TS      E1      E2      E3
MM       24      24      30      28      27      20
CPU      0.6092  0.6105  0.6701  0.6421  0.6122  0.5436
Table 5. Results of Example (3).

Method   NM      CH      TS      E1      E2      E3
MM       36      33      40      35      30      28
CPU      0.9082  1.1301  1.1625  1.1421  1.0122  0.8336
Table 6. Results of Example (5).

n     Method   NM      CH      TS      E1      E2      E3
109   MM       42      39      50      42      39      32
      CPU      1.2149  1.8952  1.3025  1.2235  1.1988  1.0102
299   MM       50      48      60      49      42      40
      CPU      3.1258  3.1181  4.1020  3.1021  2.7022  2.5789
499   MM       54      54      65      56      42      40
      CPU      6.9082  7.0120  8.1541  7.1421  5.8122  5.3552
Table 7. Results of Example (6) case (a).

γ, p      Method   NM       CH       TS       E1       E2       E3
1.2, 12   MM       30       33       40       35       30       28
          CPU      9.0082   10.1201  11.2511  10.1421  9.9122   7.1336
1.8, 17   MM       34       36       45       35       36       28
          CPU      11.0082  13.2589  19.9812  12.1421  11.0422  9.8336
Table 8. Results of Example (6) case (b).

Method   NM       CH       TS       E1       E2       E3
MM       42       42       55       42       39       32
CPU      19.9082  20.1589  31.2589  22.1421  16.0122  14.8336
Table 9. Results of Example (7) case (a) for γ = θ = 0.2.

Method   NM       CH       TS       E1       E2       E3
MM       50       51       65       49       45       40
CPU      16.9082  17.1562  28.1256  16.1421  14.2122  13.0336
Table 10. Results of Example (7) case (b) for γ = 0.5 and θ = 0.3.

Method   NM       CH       TS       E1       E2       E3
MM       100      75       95       91       75       68
CPU      21.9182  17.1158  20.1589  19.1821  17.2122  16.0036
Table 11. Results of Example (8).

γ, p, q       Method   NM      CH      TS      E1      E2      E3
1.3, 10, 10   MM       30      30      40      35      30      28
              CPU      0.0030  0.0031  0.0043  0.0037  0.0032  0.0027
1.5, 20, 20   MM       34      33      45      35      30      28
              CPU      0.0047  0.0045  0.0063  0.0049  0.0042  0.0036
1.8, 40, 40   MM       38      39      40      42      33      32
              CPU      0.0214  0.0221  0.0235  0.0251  0.0201  0.0185
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
