Newton’s Iteration Method for Solving the Nonlinear Matrix Equation $X+\sum_{i=1}^{m}A_i^{*}X^{-1}A_i=Q$

1 School of Mathematics, Jilin University, Changchun 130012, China
2 School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
3 School of Mathematics and Statistics, Yulin University, Yulin 719000, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(7), 1578; https://doi.org/10.3390/math11071578
Submission received: 6 February 2023 / Revised: 14 March 2023 / Accepted: 23 March 2023 / Published: 24 March 2023
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

In this paper, we study the nonlinear matrix equation (NME) $X+\sum_{i=1}^{m}A_i^{*}X^{-1}A_i=Q$. We transform this equation into an equivalent zero-point equation and then use Newton’s iteration method to solve the equivalent equation. Under some mild conditions, we obtain the domain of approximate solutions and prove that the sequence of approximate solutions generated by Newton’s iteration method converges to the unique solution of this equation. In addition, an error estimate for the approximate solution is given. Finally, a comparison of two well-known approaches with Newton’s iteration method on some numerical examples demonstrates the superior convergence speed of Newton’s iteration method.

1. Introduction

In this paper, we study the properties of solutions and solvability conditions of the nonlinear matrix equation
$$X + \sum_{i=1}^{m} A_i^{*} X^{-1} A_i = Q, \qquad (1)$$
where $A_i \in \mathbb{C}^{n\times n}$ $(i=1,2,\ldots,m)$ are given matrices, $A^{*}$ denotes the conjugate transpose of the matrix $A$, $Q$ is a Hermitian positive definite (HPD) matrix, and $X$ is an unknown $n\times n$ matrix to be determined.
In recent years, Equation (1) has attracted extensive attention from researchers. For m = 1, Equation (1) occurs in control theory, dynamic programming, stochastic filtering, and ladder networks; see [1,2,3,4,5] for more details. The case m = 1 has been well studied in [6,7,8,9]. These results mainly concern the following aspects: sufficient conditions for the existence of HPD solutions, perturbation analysis, condition numbers, and iterative methods for solving this type of equation. Iterative methods are a powerful tool for solving various problems, such as matrix equations [10], integral equations [11], variational inequalities [12], and image segmentation [13]. The main iterative approaches are fixed-point iteration methods [14] and inversion-free iteration methods [6,15,16,17].
Long et al. gave some sufficient conditions for the existence of an HPD solution of Equation (1) for m = 2 and proved the convergence of the fixed-point iteration method and inversion-free iteration method used to compute the HPD solution in [18]. Vaezzadeh et al. proposed an iterative method for computing HPD solutions of the matrix equation and gave convergence results for the basic fixed-point iteration for the equations in [19]. Sayevand et al. presented a new inversion-free iteration method to compute a maximal HPD solution for the equation and derived the existence conditions of the nonlinear matrix equation in [20].
Newton’s iteration methods play an important role in solving nonlinear systems and optimization problems. As is well known, Newton’s iteration method achieves a quadratic convergence rate under suitable conditions. Hasanov and Hakkaev investigated iterative methods for computing the HPD solution of Equation (1) and provided convergence rates of several fixed-point iterations in [21]. Weng applied Newton’s iteration method to Equation (1), obtained generalized Stein equations, and then adapted the generalized Smith method to find the HPD solution of Equation (1) in [22]. Huang and Ma proposed some inversion-free iteration methods for Equation (1); furthermore, they established Newton’s iteration method to compute the HPD solution and proved its quadratic convergence [23].
Inspired by these works [21,22,23], we further study Equation (1). Although we also use Newton’s iteration method, our approach is substantially different from [22,23]. It is as follows: when the matrix Q in Equation (1) satisfies certain conditions, we prove that the sequence generated by Newton’s iteration method is contained in an open ball and converges to the unique zero of the associated map in that ball, i.e., the solution of Equation (1).
The remainder of the paper is organized as follows. In Section 2, we fix notation and review some lemmas that are crucial for establishing our results. In Section 3, we derive the local convergence theorem and prove that the sequence generated by Newton’s iteration method converges to an HPD solution of Equation (1). In Section 4, we compare Newton’s iteration method with two well-known existing methods, inversion-free iteration and basic fixed-point iteration, on two numerical examples. We conclude in Section 5.

2. Preliminaries

This section explains several notations used throughout this paper and lists some definitions and lemmas. We use $\mathbb{C}^{n\times n}$ and $\mathbb{H}^{n\times n}$ to denote the set of all $n\times n$ complex matrices and all $n\times n$ Hermitian matrices, respectively. For $A \in \mathbb{C}^{n\times n}$, we use $\|A\|$ to denote the spectral norm of $A$, i.e., $\|A\| = \sqrt{\lambda_{\max}(AA^{*})}$. For $P, Q \in \mathbb{H}^{n\times n}$, $P \succeq Q$ means that $P - Q$ is positive semidefinite. We use $A \otimes B$ to denote the Kronecker product of the matrices $A$ and $B$.
The following lemma in [24] is essential for us to establish the convergence theory of Newton’s iteration method for solving Equation (1).
Lemma 1. 
Let $A, B \in \mathbb{C}^{n\times n}$ and assume that $A$ is invertible with $\|A^{-1}\| \le \alpha$. If $\|A - B\| \le \beta$ and $\alpha\beta < 1$, then $B$ is also invertible, and
$$\|B^{-1}\| \le \frac{\alpha}{1-\alpha\beta}.$$
Proof. 
See [24] (p. 45). □
Lemma 2. 
Let $A \in \mathbb{C}^{n\times n}$ satisfy $\|A\| < 1$. Then $I - A$ is nonsingular, and
$$\|(I-A)^{-1}\| \le \frac{1}{1-\|A\|}.$$
Proof. 
See [25] (p. 351). □

3. Newton’s Iteration Method and Its Convergence Analysis for Solving (1)

In this section, we use Newton’s iteration method to solve Equation (1). We prove that the matrix sequence $\{X_k\}_{k\ge 0}$ generated by Newton’s iteration method converges to the unique solution of Equation (1) when the initial guess matrix $X_0 = Q$ satisfies the following condition:
$$\delta \|Q^{-1}\| < \frac{2}{5},$$
where $\delta := \frac{2\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|}{1-\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|^{2}}$. In addition, an error estimate for the approximate solution is given.
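This condition can be checked numerically before the iteration is run. The following Python/NumPy sketch evaluates $\delta$ and the condition; the matrices $A_1$, $A_2$ and $Q = I$ are small illustrative choices, not the paper’s test data:

```python
import numpy as np

# Hypothetical small coefficient matrices (illustrative only).
A1 = 0.1 * np.eye(3)
A2 = np.diag([0.2, 0.1, 0.05])
Q = np.eye(3)

spec = lambda M: np.linalg.norm(M, 2)       # spectral norm ||.||
s = spec(A1) ** 2 + spec(A2) ** 2           # sum_i ||A_i||^2
q = spec(np.linalg.inv(Q))                  # ||Q^{-1}||
delta = 2 * s * q / (1 - s * q ** 2)        # delta as defined above
print(delta * q < 2 / 5)                    # prints True for this data
```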

3.1. Newton’s Iteration Method

Let
$$F(X) = X + \sum_{i=1}^{m} A_i^{*} X^{-1} A_i - Q. \qquad (2)$$
We know that $F(X)$ is Fréchet differentiable at any nonsingular matrix $X$, and the Fréchet derivative of $F$ at $X$, denoted by $F'_{X}$, is given by
$$F'_{X}(E) = E - \sum_{i=1}^{m} A_i^{*} X^{-1} E X^{-1} A_i. \qquad (3)$$
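The derivative formula above can be sanity-checked with a difference quotient. The sketch below (Python/NumPy, real matrices; the data, the helper name `G`, and the step size `t` are illustrative assumptions) compares the formula with a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
As = [0.1 * rng.standard_normal((3, 3)) for _ in range(2)]  # hypothetical A_i
X = np.eye(3) + np.diag([0.1, 0.2, 0.3])                    # nonsingular point
E = rng.standard_normal((3, 3))                             # direction matrix

def G(X):
    # G(X) = X + sum_i A_i^T X^{-1} A_i (real case; the constant Q drops
    # out of difference quotients, so it is omitted here).
    Xinv = np.linalg.inv(X)
    return X + sum(A.T @ Xinv @ A for A in As)

Xinv = np.linalg.inv(X)
deriv = E - sum(A.T @ Xinv @ E @ Xinv @ A for A in As)  # derivative formula
t = 1e-6
fd = (G(X + t * E) - G(X - t * E)) / (2 * t)            # central difference
err = np.linalg.norm(fd - deriv)
print(err)                                              # tiny (near 1e-9)
```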
Applying Newton’s iteration method to Equation (1), we obtain
$$X_{k+1} = X_k - (F'_{X_k})^{-1}(F(X_k)), \quad k = 0, 1, 2, \ldots, \qquad (4)$$
where $F'_{X_k}$ denotes the Fréchet derivative of $F$ at $X_k$. The iteration (4) is equivalent to
$$E_k - \sum_{i=1}^{m} A_i^{*} X_k^{-1} E_k X_k^{-1} A_i = -F(X_k), \quad X_{k+1} = X_k + E_k, \quad k = 0, 1, 2, \ldots, \qquad (5)$$
or
$$X_k - \sum_{i=1}^{m} L_{i,k}^{*} X_k L_{i,k} = Q - 2\sum_{i=1}^{m} A_i^{*} L_{i,k}, \quad k = 1, 2, \ldots, \qquad (6)$$
where $L_{i,k} = X_{k-1}^{-1} A_i$, $i = 1, 2, \ldots, m$.
Remark 1. 
Iteration (4) is more suitable for the convergence analysis in Section 3, whereas iteration (6) is more convenient for numerical computation. In order to solve Equation (1), we need to solve the subproblem (6) of Newton’s iteration method; this linear matrix equation can be solved by the Stein-equation-based iteration method of [21].
Notice that (4) involves the inverse operator $(F'_{X_k})^{-1}$, so its properties are crucial for the convergence analysis of iteration (4).
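For small $n$, the linear subproblem in the Newton step can also be solved directly by vectorization: as used in the proof of Lemma 3 below, $F'_{X}$ corresponds to the $n^2 \times n^2$ matrix $I - \sum_i (X^{-1}A_i)^{T} \otimes (A_i^{*}X^{-1})$. The following Python/NumPy sketch (an illustrative $O(n^6)$ approach, practical only for small $n$, with hypothetical test data) implements one full Newton step this way:

```python
import numpy as np

def newton_step(X, As, Q):
    """One Newton step for F(X) = X + sum_i A_i^* X^{-1} A_i - Q.
    The linear subproblem is solved by Kronecker vectorization, using
    vec(A E B) = (B^T kron A) vec(E) with column-stacking vec."""
    n = X.shape[0]
    Xinv = np.linalg.inv(X)
    F = X + sum(A.conj().T @ Xinv @ A for A in As) - Q
    M = np.eye(n * n)
    for A in As:
        L = Xinv @ A                          # L_i = X^{-1} A_i
        M -= np.kron(L.T, A.conj().T @ Xinv)  # matrix of F'_X
    vecE = np.linalg.solve(M, -F.flatten(order='F'))
    return X + vecE.reshape(n, n, order='F')

# Hypothetical small example with X_0 = Q = I.
As = [np.array([[0.1, 0.05], [0.0, 0.1]]), 0.1 * np.eye(2)]
Q = np.eye(2)
X = Q.copy()
for _ in range(8):
    X = newton_step(X, As, Q)
res = np.linalg.norm(X + sum(A.T @ np.linalg.inv(X) @ A for A in As) - Q)
print(res)   # essentially zero after a few quadratically convergent steps
```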
Lemma 3. 
Let $X \in \mathbb{C}^{n\times n}$ be nonsingular and satisfy the inequality
$$\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|X^{-1}\|^{2} < 1; \qquad (7)$$
then the operator $F'_{X}$ is nonsingular, and
$$\|(F'_{X})^{-1}\| \le \frac{1}{1-\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|X^{-1}\|^{2}}. \qquad (8)$$
Proof. 
$$\|F'_{X}(E)\| = \left\|E - \sum_{i=1}^{m} A_i^{*} X^{-1} E X^{-1} A_i\right\| \ge \|E\| - \left\|\sum_{i=1}^{m} A_i^{*} X^{-1} E X^{-1} A_i\right\| \ge \|E\| - \left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|X^{-1}\|^{2}\|E\| = \|E\|\left(1-\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|X^{-1}\|^{2}\right).$$
According to condition (7), $F'_{X}(E) = 0$ if and only if $E = 0$. Because $F'_{X}$ is an operator on the finite-dimensional vector space $\mathbb{C}^{n\times n}$, $F'_{X}$ is a nonsingular operator. Meanwhile,
$$\|(F'_{X})^{-1}\| = \left\|\left(I - \sum_{i=1}^{m}(X^{-1}A_i)^{T} \otimes (A_i^{*}X^{-1})\right)^{-1}\right\| \le \frac{1}{1-\left\|\sum_{i=1}^{m}(X^{-1}A_i)^{T} \otimes (A_i^{*}X^{-1})\right\|} \le \frac{1}{1-\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|X^{-1}\|^{2}}.$$
The first inequality holds by Lemma 2, and the second by $\|B \otimes C\| = \|B\|\,\|C\|$. □
The next lemma shows that function F satisfies the local Lipschitz conditions.
Lemma 4 
([23]). For any nonsingular matrices $X, Y$ we have
$$\|F'_{X} - F'_{Y}\| \le \left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\left(\|X^{-1}\| + \|Y^{-1}\|\right)\|X^{-1}\|\,\|Y^{-1}\|\,\|X - Y\|.$$
Next, we define a matrix function based on (2) as follows:
$$H(X) := (F'_{Q})^{-1}(F(X)).$$
We notice that Newton’s iteration for $H(X)$ coincides with Newton’s iteration for $F(X)$, i.e.,
$$X_{k+1} - X_k = -(H'_{X_k})^{-1}(H(X_k)) = -(F'_{X_k})^{-1}(F(X_k)).$$
Theorem 1. 
Suppose $\delta$ and the matrix $Q$ satisfy the following inequality:
$$\delta \|Q^{-1}\| < \frac{2}{5}, \qquad (13)$$
where $\delta := \frac{2\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|}{1-\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|^{2}}$. Furthermore, let $B(Q,\delta) = \{X : \|X - Q\| < \delta\}$. Then, we have the following properties:
(i) $\|H(Q)\| \le \frac{\delta}{2}$;
(ii) $\|H'_{X} - H'_{Y}\| \le \frac{1}{\delta}\|X - Y\|$ for all $X, Y \in B(Q,\delta)$;
(iii) $\|(H'_{X})^{-1}\| \le \frac{1}{1-\|X-Q\|/\delta}$ for all $X \in B(Q,\delta)$;
(iv) $\|H(X) - H(Y) - H'_{Y}(X-Y)\| \le \frac{1}{2\delta}\|X-Y\|^{2}$ for all $X, Y \in B(Q,\delta)$.
Proof. 
(i) According to the definition of $\delta$ and (8), we obtain
$$\|H(Q)\| = \|(F'_{Q})^{-1}(F(Q))\| \le \|(F'_{Q})^{-1}\|\,\|F(Q)\| \le \frac{\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|}{1-\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|^{2}} = \frac{\delta}{2}.$$
(ii) For $X \in B(Q,\delta)$, Lemma 1 gives $\|X^{-1}\| \le \frac{\|Q^{-1}\|}{1-\delta\|Q^{-1}\|}$, and likewise for $Y$. Using Lemmas 4 and 1, we have
$$\|H'_{X} - H'_{Y}\| = \|(F'_{Q})^{-1}(F'_{X} - F'_{Y})\| \le \frac{\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\left(\|X^{-1}\|+\|Y^{-1}\|\right)\|X^{-1}\|\,\|Y^{-1}\|}{1-\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|^{2}}\,\|X-Y\| \le \frac{2\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|}{1-\left(\sum_{i=1}^{m}\|A_i\|^{2}\right)\|Q^{-1}\|^{2}} \cdot \frac{\|Q^{-1}\|^{2}}{(1-\delta\|Q^{-1}\|)^{3}}\,\|X-Y\| = \frac{1}{\delta}\cdot\frac{\delta^{2}\|Q^{-1}\|^{2}}{(1-\delta\|Q^{-1}\|)^{3}}\,\|X-Y\| \le \frac{1}{\delta}\|X-Y\|. \qquad (15)$$
The first and second inequalities in (15) are derived from Lemma 4 and Lemma 1, respectively; the last inequality holds because condition (13) implies $\frac{\delta^{2}\|Q^{-1}\|^{2}}{(1-\delta\|Q^{-1}\|)^{3}} \le \frac{(2/5)^{2}}{(3/5)^{3}} < 1$.
(iii) By the definition of $H(X)$, we have $H'_{Q} = I$ and hence $\|(H'_{Q})^{-1}\| = 1$. For all $X \in B(Q,\delta)$, using (ii) we have
$$\|H'_{X} - H'_{Q}\| \le \frac{1}{\delta}\|X - Q\| < 1,$$
and, according to Lemma 1, we have
$$\|(H'_{X})^{-1}\| \le \frac{1}{1-\|X-Q\|/\delta}.$$
(iv) According to the Newton–Leibniz formula and (ii), we obtain
$$\|H(X) - H(Y) - H'_{Y}(X-Y)\| = \left\|\int_{0}^{1}\left(H'_{(1-t)Y+tX} - H'_{Y}\right)(X-Y)\,dt\right\| \le \|X-Y\|\int_{0}^{1}\left\|H'_{(1-t)Y+tX} - H'_{Y}\right\|dt \le \frac{1}{2\delta}\|X-Y\|^{2}. \qquad \square$$
Theorem 2. 
If inequality (13) is satisfied and we take the initial guess $X_0 = Q$, then the sequence $\{X_k\}_{k\ge 0}$ generated by Newton’s iteration method for $H(X)$ belongs to the ball $B(X_0,\delta)$. In addition, the following inequalities hold for all $k \ge 1$:
$$\|X_k - X_{k-1}\| \le \frac{\delta}{2^{k}}, \qquad \|X_k - X_0\| \le \delta\left(1-\frac{1}{2^{k}}\right), \qquad (16)$$
$$\|(H'_{X_k})^{-1}\| \le 2^{k}, \qquad \|H(X_k)\| \le \frac{\delta}{2^{2k+1}}. \qquad (17)$$
Proof. 
We first check (16) and (17) for $k = 1$. Since $H'_{X_0} = H'_{Q} = I$, we have $X_1 = X_0 - (H'_{X_0})^{-1}(H(X_0)) = X_0 - H(X_0)$, so that
$$\|X_1 - X_0\| \le \|H(X_0)\| \le \frac{\delta}{2},$$
and
$$\|(H'_{X_1})^{-1}\| \le \frac{1}{1-\|X_1-X_0\|/\delta} \le 2.$$
According to (iv) in Theorem 1 and $H(X_0) + H'_{X_0}(X_1 - X_0) = 0$, we obtain
$$\|H(X_1)\| = \|H(X_1) - H(X_0) - H'_{X_0}(X_1 - X_0)\| \le \frac{1}{2\delta}\|X_1 - X_0\|^{2} \le \frac{\delta}{2^{3}}.$$
Suppose inequalities (16) and (17) hold for $k \le s$; we will prove that they hold for $k = s+1$. Because $H'_{X_s}$ is nonsingular, we have $X_{s+1} = X_s - (H'_{X_s})^{-1}(H(X_s))$. By the induction hypothesis and (iii), (iv) in Theorem 1, we have
$$\|X_{s+1} - X_s\| \le \|(H'_{X_s})^{-1}\|\,\|H(X_s)\| \le \frac{\delta}{2^{s+1}},$$
$$\|X_{s+1} - X_0\| \le \|X_s - X_0\| + \|X_{s+1} - X_s\| \le \delta\left(1-\frac{1}{2^{s}}\right) + \frac{\delta}{2^{s+1}} = \delta\left(1-\frac{1}{2^{s+1}}\right),$$
$$\|(H'_{X_{s+1}})^{-1}\| \le \frac{1}{1-\|X_{s+1}-X_0\|/\delta} \le 2^{s+1},$$
$$\|H(X_{s+1})\| = \|H(X_{s+1}) - H(X_s) - H'_{X_s}(X_{s+1} - X_s)\| \le \frac{1}{2\delta}\|X_{s+1} - X_s\|^{2} \le \frac{\delta}{2^{2s+3}}.$$
Thus, inequalities (16) and (17) hold for $k = s+1$. In addition, according to the definition of $B(Q,\delta)$ and inequalities (16), the sequence $\{X_k\}_{k\ge 0}$ belongs to the open ball $B(Q,\delta)$. □

3.2. Convergence Analysis

Theorem 3. 
If inequality (13) is satisfied and we take the initial matrix $X_0 = Q$, then the sequence $\{X_k\}_{k\ge 0}$ generated by Newton’s iteration method for $H(X)$, which lies in the closed ball $\overline{B(X_0,\delta)}$, converges to a zero $X^{*}$ of $F(X)$. In addition, the error estimate
$$\|X_k - X^{*}\| \le \frac{\delta}{2^{k}}$$
holds for all $k \ge 0$.
Proof. 
According to (16) in Theorem 2, $\|X_k - X_{k-1}\| \le \frac{\delta}{2^{k}}$ for $k \ge 1$; we first prove that the sequence $\{X_k\}_{k\ge 0}$ is a Cauchy sequence.
Given any integers $k \ge 0$ and $l \ge 1$, we have
$$\|X_k - X_{k+l}\| \le \sum_{i=k}^{k+l-1}\|X_{i+1} - X_i\| \le \sum_{i=k}^{\infty}\frac{\delta}{2^{i+1}} = \frac{\delta}{2^{k}}; \qquad (19)$$
thus, the sequence $\{X_k\}$ is a Cauchy sequence.
According to Theorem 2, $X_k \in B(X_0,\delta) \subset \overline{B(X_0,\delta)}$; since $\overline{B(X_0,\delta)}$ is a closed subset of a finite-dimensional space, it is complete, and therefore there exists $X^{*} \in \overline{B(X_0,\delta)}$ such that
$$X^{*} = \lim_{k\to\infty} X_k.$$
Because the function $H(X)$ is continuous and $\|H(X_k)\| \le \frac{\delta}{2^{2k+1}}$ for $k \ge 1$, we have
$$H(X^{*}) = \lim_{k\to\infty} H(X_k) = 0.$$
Thus, by the definition of $H(X)$, we know that $X^{*}$ is a zero of $F(X)$. Letting $l \to \infty$ in (19) yields
$$\|X_k - X^{*}\| \le \frac{\delta}{2^{k}}. \qquad \square$$
Theorem 4. 
If inequality (13) is satisfied and we take the initial matrix $X_0 = Q$, then $F(X)$ has a unique zero in the closed ball $\overline{B(X_0,\delta)}$.
Proof. 
Suppose there exists $\tilde{Y} \in \overline{B(X_0,\delta)}$ such that $F(\tilde{Y}) = 0$. We will prove that
$$\|X_k - \tilde{Y}\| \le \frac{\delta}{2^{k}} \qquad (20)$$
holds for all $k \ge 0$, where the $X_k$ are generated by Newton’s iteration for $H(X)$.
Inequality (20) is true for $k = 0$. Assume that (20) is true for $k \le s$; next, we prove that it is true for $k = s+1$. First, using $H(\tilde{Y}) = 0$, we have
$$X_{s+1} - \tilde{Y} = X_s - (H'_{X_s})^{-1}(H(X_s)) - \tilde{Y} = (H'_{X_s})^{-1}\left[H(\tilde{Y}) - H(X_s) - H'_{X_s}(\tilde{Y} - X_s)\right].$$
Using (iv) in Theorem 1, Theorem 2, and the induction hypothesis, we have
$$\|X_{s+1} - \tilde{Y}\| \le \|(H'_{X_s})^{-1}\|\cdot\frac{1}{2\delta}\|\tilde{Y} - X_s\|^{2} \le 2^{s}\cdot\frac{1}{2\delta}\cdot\frac{\delta^{2}}{2^{2s}} = \frac{\delta}{2^{s+1}}.$$
Therefore, inequality (20) is true for all $k \ge 0$. So
$$\lim_{s\to\infty}\|X_s - \tilde{Y}\| = \|X^{*} - \tilde{Y}\| = 0,$$
hence $\tilde{Y} = X^{*}$. □

4. Numerical Experiments

In this section, in order to illustrate the convergence behaviour of Newton’s iteration method for solving Equation (1), we report some numerical results obtained by Newton’s iteration method (NIM), inversion-free iteration (IFI), and basic fixed-point iteration (BFPI) from [26]. All numerical examples are implemented on a Windows 10 computer with an Intel i5 (1.6 GHz) processor and 8 GB of RAM; the programming language is MATLAB R2018a. The Frobenius norm of the residual is
$$Res(X_k) = \left\|X_k + \sum_{i=1}^{m} A_i^{*} X_k^{-1} A_i - Q\right\|_{F},$$
where $\|A\|_{F} = \sqrt{\sum_{i,j=1}^{n} |a_{ij}|^{2}}$ for a complex $n\times n$ matrix $A$. We stop the iterative process when the Frobenius norm of the residual is less than $1\times 10^{-8}$.
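As a concrete illustration of this stopping rule, the sketch below implements the basic fixed-point map $X_{k+1} = Q - \sum_{i} A_i^{*} X_k^{-1} A_i$ with the Frobenius-residual test (in Python/NumPy rather than the MATLAB used in the paper; the test matrices are small hypothetical choices, not those of Example 1 or 2):

```python
import numpy as np

def bfpi(As, Q, tol=1e-8, maxit=500):
    """Basic fixed-point iteration X_{k+1} = Q - sum_i A_i^* X_k^{-1} A_i
    starting from X_0 = Q, stopped on the Frobenius residual Res(X_k)."""
    X = Q.copy()
    for k in range(maxit):
        Xinv = np.linalg.inv(X)
        S = sum(A.conj().T @ Xinv @ A for A in As)
        res = np.linalg.norm(X + S - Q, 'fro')   # Res(X_k)
        if res < tol:
            return X, k, res
        X = Q - S
    return X, maxit, res

# Hypothetical small data with Q = I.
As = [np.array([[0.1, 0.05], [0.0, 0.1]]), 0.1 * np.eye(2)]
X, k, res = bfpi(As, np.eye(2))
print(k, res)   # converges in a handful of iterations
```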
We explain some notations used in this section:
  • IT is the number of iterations;
  • CPU is the running time of the iterations in seconds;
  • In [26], the authors solve Equation (1) with Q = I by different methods:
    – IFI: inversion-free iteration;
    – BFPI: basic fixed-point iteration.
In order to use Newton’s iteration method to solve Equation (1), we need to solve subproblem (6) of Newton’s iteration method. The Stein iteration [21] (Algorithm 3.2) is an effective method for solving subproblem (6). The Stein iteration is as follows: let $X_0 = Q$; for $k = 1, 2, \ldots$, take $L_{i,k} = X_{k-1}^{-1} A_i$, $i = 1, 2, \ldots, m$, and solve
$$X_k - L_{j,k}^{*} X_k L_{j,k} = Q - \sum_{i=1}^{m} A_i^{*} L_{i,k} - A_j^{*} L_{j,k},$$
where $j$ is a fixed index in $\{1, 2, \ldots, m\}$, until $\|X_k - X_{k-1}\| \le tol$. Then $X^{*} \approx X_k$, where $X^{*}$ is an HPD solution of Equation (1).
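The scheme above can be sketched in code as follows (Python/NumPy; the inner Stein equation $X - L^{*} X L = C$ is solved here by a plain Smith-type fixed point rather than the exact algorithm of [21], and the test matrices are hypothetical, so this is an illustrative sketch rather than the authors’ implementation):

```python
import numpy as np

def solve_stein(L, C, tol=1e-13, maxit=2000):
    """Solve X - L^* X L = C via the fixed point X <- L^* X L + C,
    which converges when the spectral radius of L is below 1."""
    X = C.copy()
    for _ in range(maxit):
        Xn = L.conj().T @ X @ L + C
        if np.linalg.norm(Xn - X, 'fro') < tol:
            return Xn
        X = Xn
    return X

def newton_stein(As, Q, j=0, tol=1e-10, maxit=100):
    """Outer iteration: form L_i = X_{k-1}^{-1} A_i, then solve the
    Stein-type subproblem for the fixed index j."""
    X = Q.copy()
    for _ in range(maxit):
        Xinv = np.linalg.inv(X)
        Ls = [Xinv @ A for A in As]
        C = (Q - sum(A.conj().T @ L for A, L in zip(As, Ls))
               - As[j].conj().T @ Ls[j])
        Xn = solve_stein(Ls[j], C)
        if np.linalg.norm(Xn - X, 'fro') < tol:
            return Xn
        X = Xn
    return X

# Hypothetical small data with Q = I.
As = [np.array([[0.1, 0.05], [0.0, 0.1]]), 0.1 * np.eye(2)]
Q = np.eye(2)
X = newton_stein(As, Q)
res = np.linalg.norm(X + sum(A.T @ np.linalg.inv(X) @ A for A in As) - Q, 'fro')
print(res)   # small residual of Equation (1)
```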
Example 1. 
Consider Equation (1) with
$$A_1 = \frac{1}{1200}\begin{pmatrix} 100 & 150 & 259 \\ 15 & 212 & 64 \\ 25 & 69 & 138 \end{pmatrix}, \qquad A_2 = \frac{1}{1200}\begin{pmatrix} 160 & 25 & 20 \\ 25 & 288 & 60 \\ 4 & 16 & 120 \end{pmatrix}.$$
Numerical results for Example 1 are reported in Table 1 and Figure 1, which show that the BFPI costs the least computing time, but Newton’s iteration method obtains the highest accuracy and the fastest convergence speed among the three methods.
Example 2. 
Consider Equation (1) with
$$A_1 = \frac{1}{550}\begin{pmatrix} 30 & 22 & 23 & 35 & 40 & 52 \\ 22 & 17 & 19 & 66 & 30 & 10 \\ 23 & 19 & 11 & 13 & 25 & 21 \\ 35 & 66 & 13 & 19 & 17 & 6 \\ 40 & 30 & 25 & 7 & 20 & 15 \\ 52 & 10 & 21 & 6 & 15 & 9 \end{pmatrix}, \qquad A_2 = \frac{1}{550}\begin{pmatrix} 11 & 12 & 15 & 17 & 20 & 45 \\ 12 & 7 & 19 & 21 & 51 & 13 \\ 15 & 19 & 65 & 44 & 23 & 18 \\ 17 & 21 & 44 & 31 & 32 & 33 \\ 20 & 51 & 23 & 32 & 13 & 41 \\ 45 & 19 & 18 & 33 & 41 & 2 \end{pmatrix}.$$
Table 2 reports the numerical results of Example 2. Newton’s iteration costs almost the same computing time as BFPI and achieves comparable accuracy; moreover, Figure 2 shows that Newton’s iteration method converges the fastest among the three methods.
In addition, we specify the computational complexity of (6) in terms of matrix–matrix products and compare it with existing methods in the literature. See Table 3.

5. Conclusions

In this paper, based on Newton’s iteration method and the Fréchet derivative technique, we give a sufficient condition for the existence of the HPD solution of Equation (1). Moreover, we prove that the matrix sequence generated by Newton’s iteration method converges to the unique HPD solution of this equation under some mild conditions, and we obtain error estimates for the approximate HPD solutions. Numerical results show that BFPI costs the least computing time and that Newton’s iteration method has the highest computational complexity, but Newton’s iteration method achieves the highest accuracy and the fastest convergence speed among the three methods.

Author Contributions

Methodology, C.-Z.L.; Software, C.Y.; Funding acquisition, A.-G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by grants from the Science Foundation of the Education Department of Shaanxi Province (21JK1008); Doctoral Research Project of Yulin University (21GK04).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Editor and the anonymous reviewers for their valuable comments and suggestions, which are helpful for improving the quality and readability of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Green, W.L.; Kamen, E.W. Stabilizability of linear systems over a commutative normed algebra with applications to spatially-distributed and parameter-dependent systems. SIAM J. Control Optim. 1985, 23, 1–18.
  2. Anderson, W.N.; Morley, T.D.; Trapp, G.E. Ladder networks, fixpoints, and the geometric mean. Circuits Syst. Signal Process. 1983, 2, 259–268.
  3. Pusz, W.; Woronowicz, S.L. Functional calculus for sesquilinear forms and the purification map. Rep. Math. Phys. 1975, 8, 159–170.
  4. Anderson, W.N.; Kleindorfer, G.B.; Kleindorfer, P.R.; Woodroofe, M.B. Consistent estimates of the parameters of a linear system. Ann. Math. Stat. 1969, 40, 2064–2075.
  5. Ouellette, D.V. Schur complements and statistics. Linear Alg. Appl. 1981, 36, 187–295.
  6. Erfanifar, R.; Sayevand, K.; Esmaeili, H. A novel iterative method for the solution of a nonlinear matrix equation. Appl. Numer. Math. 2020, 153, 503–518.
  7. Meini, B. Efficient computation of the extreme solutions of $X + A^{*}X^{-1}A = Q$ and $X - A^{*}X^{-1}A = Q$. Math. Comput. 2001, 71, 1189–1204.
  8. Engwerda, J.C.; Ran, A.C.M.; Rijkeboer, A.L. Necessary and sufficient conditions for the existence of a positive definite solution of the matrix equation $X + A^{*}X^{-1}A = Q$. Linear Alg. Appl. 1993, 186, 255–275.
  9. Guo, C.H.; Lancaster, P. Iterative solution of two matrix equations. Math. Comput. 1999, 68, 1589–1603.
  10. Ding, F.; Chen, T. On iterative solutions of general coupled matrix equations. SIAM J. Control Optim. 2006, 44, 2269–2284.
  11. Caliò, F.; Garralda-Guillem, A.I.; Marchetti, E.; Galán, M.R. Numerical approaches for systems of Volterra–Fredholm integral equations. Appl. Math. Comput. 2013, 225, 811–821.
  12. Marino, G.; Xu, H.K. A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318, 43–52.
  13. Li, C.H.; Tam, P.K.S. An iterative algorithm for minimum cross entropy thresholding. Pattern Recognit. Lett. 1998, 19, 771–776.
  14. Fital, S.; Guo, C.H. A note on the fixed-point iteration for the matrix equations $X \pm A^{*}X^{-1}A = I$. Linear Alg. Appl. 2008, 429, 2098–2112.
  15. Ullah, M.Z. A new inversion-free iterative scheme to compute maximal and minimal solutions of a nonlinear matrix equation. Mathematics 2021, 9, 2994.
  16. Zhang, H. Quasi gradient-based inversion-free iterative algorithm for solving a class of the nonlinear matrix equations. Linear Alg. Appl. 2019, 77, 1233–1244.
  17. Zhang, H.M.; Ding, F. Iterative algorithms for $X + A^{T}X^{-1}A = I$ by using the hierarchical identification principle. J. Frankl. Inst.-Eng. Appl. Math. 2016, 353, 1132–1146.
  18. Long, J.H.; Hu, X.Y.; Zhang, L. On the Hermitian positive definite solution of the nonlinear matrix equation $X + A^{*}X^{-1}A + B^{*}X^{-1}B = I$. Bull. Braz. Math. Soc. 2008, 39, 371–386.
  19. Vaezzadeh, S.; Vaezpour, S.M.; Saadati, R.; Park, C. The iterative methods for solving nonlinear matrix equation $X + A^{*}X^{-1}A + B^{*}X^{-1}B = Q$. Adv. Differ. Equ. 2013, 2013, 229.
  20. Sayevand, K.; Erfanifar, R.; Esmaeili, H. The maximal positive definite solution of the nonlinear matrix equation $X + A^{*}X^{-1}A + B^{*}X^{-1}B = I$. Math. Sci. 2022.
  21. Hasanov, V.I.; Hakkaev, S.A. Convergence analysis of some iterative methods for a nonlinear matrix equation. Comput. Math. Appl. 2016, 72, 1164–1176.
  22. Weng, P.C.Y. Solving two generalized nonlinear matrix equations. J. Appl. Math. Comput. 2021, 66, 543–559.
  23. Huang, B.; Ma, C. Some iterative methods for the largest positive definite solution to a class of nonlinear matrix equation. Numer. Algorithms 2018, 79, 153–178.
  24. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Philadelphia, PA, USA, 2000.
  25. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: New York, NY, USA, 2013.
  26. He, Y.M.; Long, J.H. On the Hermitian positive definite solution of the nonlinear matrix equation $X + \sum_{i=1}^{m} A_i^{*}X^{-1}A_i = I$. Appl. Math. Comput. 2010, 216, 3480–3485.
Figure 1. The convergence process of the three iterations for computing Equation (1) for Example 1.
Figure 2. The convergence process of the three iterations for computing Equation (1) for Example 2.
Table 1. The numerical results of Example 1.

Method   IT   CPU (s)    Res(X_k)
BFPI     10   0.003025   6.2142 × 10^{-9}
IFI      15   0.004163   4.8205 × 10^{-9}
NIM       7   0.018660   2.6335 × 10^{-9}
Table 2. The numerical results of Example 2.

Method   IT   CPU (s)    Res(X_k)
BFPI     16   0.003165   4.1922 × 10^{-9}
IFI      23   0.005886   6.9625 × 10^{-9}
NIM       9   0.004179   4.3841 × 10^{-9}
Table 3. The computational complexity of BFPI, IFI, and NIM.

Method   Computational Complexity
BFPI     $\frac{13mn^{3}}{3}$
IFI      $(4m+4)n^{3}$
NIM      $\left(2m+\frac{29}{3}\right)n^{3}$

Share and Cite

Li, C.-Z.; Yuan, C.; Cui, A.-G. Newton’s Iteration Method for Solving the Nonlinear Matrix Equation $X+\sum_{i=1}^{m}A_i^{*}X^{-1}A_i=Q$. Mathematics 2023, 11, 1578. https://doi.org/10.3390/math11071578
