Article

An Accelerated Sixth-Order Procedure to Determine the Matrix Sign Function Computationally

1 Foundation Department, Changchun Guanghua University, Changchun 130033, China
2 Sydney Smart Technology College, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
3 Eighth Geological Brigade of Hebei Bureau of Geology and Mineral Resources Exploration (Hebei Center of Marine Geological Resources Survey), Qinhuangdao 066000, China
4 School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(7), 1080; https://doi.org/10.3390/math13071080
Submission received: 18 February 2025 / Revised: 15 March 2025 / Accepted: 24 March 2025 / Published: 26 March 2025
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis: 2nd Edition)

Abstract: The matrix sign function plays a key role in several applications in numerical linear algebra. This paper presents a novel iterative approach with a sixth order of convergence to compute this function efficiently. The scheme is constructed via the employment of a nonlinear equation solver for simple roots. Then, the convergence of the extended matrix procedure is investigated to establish the sixth-order rate of convergence. Basins of attraction for the proposed solver are also given to show its global convergence behavior. Finally, numerical experiments demonstrate the effectiveness of our approach compared to classical methods.

1. Introductory Notes

The matrix sign function (MSF) is a fundamental concept in numerical linear algebra [1], with widespread applications in solving Lyapunov equations, algebraic Riccati equations, and computing spectral decompositions [2]. Given a square matrix $A \in \mathbb{C}^{n \times n}$ without purely imaginary eigenvalues, the MSF, denoted as $\operatorname{sign}(A)$, is formally defined via its Jordan form representation [3]:
\operatorname{sign}(A) = Z \begin{pmatrix} -I_{\gamma} & 0 \\ 0 & I_{n-\gamma} \end{pmatrix} Z^{-1}, \qquad (1)
where $A = Z J_A Z^{-1}$ is the Jordan decomposition of $A$, $I_{\gamma}$ is the identity matrix of dimension $\gamma \times \gamma$, $I_{n-\gamma}$ is the identity matrix of dimension $(n-\gamma) \times (n-\gamma)$, and the block diagonal matrix $J_A$ contains the eigenvalues partitioned so that the first $\gamma$ of them have negative real parts and the remaining $n-\gamma$ have positive real parts. In addition, the MSF satisfies the nonlinear matrix equation [4]
X^2 = I, \qquad (2)
which provides a fundamental framework for designing iterative methods; here $I$ is the identity matrix of appropriate dimension and $X = \operatorname{sign}(A)$.
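For a diagonalizable matrix, definition (1) can be realized directly from an eigendecomposition and property (2) checked numerically. A minimal NumPy sketch (the helper name and test matrix are illustrative, not from the paper):

```python
import numpy as np

def sign_via_eig(A):
    """sign(A) from an eigendecomposition A = Z J Z^{-1} (diagonalizable
    case): each eigenvalue is replaced by the sign of its real part,
    mirroring definition (1)."""
    vals, Z = np.linalg.eig(A)
    s = np.sign(vals.real)                  # +1 or -1 per eigenvalue
    return (Z * s) @ np.linalg.inv(Z)       # Z @ diag(s) @ Z^{-1}

# Illustrative test matrix with eigenvalues 4, -3, 2.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, -3.0, 2.0],
              [0.0, 0.0, 2.0]])
S = sign_via_eig(A).real
print(np.allclose(S @ S, np.eye(3)))        # property (2): X^2 = I
```

This route is only a sanity check; the iterative schemes below avoid computing the eigendecomposition explicitly.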
Since its appearance, the MSF has been employed across various domains of scientific computing [5], graph theory, control theory [6], and linear algebra algorithms [7]. In the realm of differential equations, particularly those modeling physical phenomena, the MSF assists in decomposing matrices. This decomposition simplifies the solution process, making it easier to analyze and solve complex systems of differential equations.
The numerical computation of sign ( A ) has been extensively studied, leading to various iterative techniques, including Newton’s scheme, the Newton–Schultz local solver, Halley’s scheme, and the Padé family of iterations [8]. While classical schemes such as Newton’s iteration
X_{k+1} = \frac{1}{2}\left(X_k + X_k^{-1}\right), \qquad X_0 = A, \qquad (3)
exhibit quadratic convergence, recent research has focused on developing higher-order methods to achieve superior efficiency. This paper proposes a novel iterative approach based on high-order rational approximations, improving the Padé iterations [9].
The iterative computation of the MSF has undergone significant advancements, particularly with the development of rational approximations and multipoint iteration strategies. The foundational Newton iteration provides a straightforward approach with global convergence for matrices without imaginary-axis eigenvalues [4]. However, its quadratic convergence rate limits its efficiency for large-scale applications. The related numerical approaches are discussed in [10,11].
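Newton's iteration (3) is short enough to state as a sketch (illustrative Python/NumPy; the tolerance and test matrix are not from the paper):

```python
import numpy as np

def newton_sign(A, tol=1e-10, maxit=100):
    """Newton's iteration (3): X_{k+1} = (X_k + X_k^{-1}) / 2 with X_0 = A;
    quadratically convergent when A has no purely imaginary eigenvalues."""
    X = A.copy()
    for k in range(1, maxit + 1):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, 2) < tol:
            return X_new, k
        X = X_new
    return X, maxit

# Illustrative test matrix (eigenvalues 2 and -5).
A = np.array([[2.0, 1.0],
              [0.0, -5.0]])
S, iters = newton_sign(A)
print(np.allclose(S @ S, np.eye(2)))   # converged to an involution
```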
To accelerate convergence, Padé approximations have been incorporated into iterative schemes. Kenney and Laub [12] introduced Padé iterations, which utilize rational approximations of the function $h(z) = (1 - z^2)^{-1/2}$ to construct the following recurrence relation:
X_{k+1} = X_k P_{lm}\left(I - X_k^2\right) \left[Q_{lm}\left(I - X_k^2\right)\right]^{-1}, \qquad (4)
where $P_{lm}$ and $Q_{lm}$ denote the $(l, m)$-Padé approximants. The choice of parameters influences convergence behavior, with the cases $l = m - 1$ and $l = m$ exhibiting global convergence.
Beyond the Padé iterations, high-order iterative methods have been explored to further enhance computational efficiency. Methods derived from root-finding schemes in scalar nonlinear equations, such as Halley’s method,
X_{k+1} = X_k \left(3I + X_k^2\right) \left(I + 3X_k^2\right)^{-1}, \qquad (5)
demonstrate cubic convergence.
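The Halley-type iteration (5) can be sketched the same way (illustrative Python; the factor ordering follows the cubically convergent scalar map $x(3 + x^2)/(1 + 3x^2)$ for $f(x) = x^2 - 1$):

```python
import numpy as np

def halley_sign(A, tol=1e-10, maxit=100):
    """Halley-type iteration (5):
    X_{k+1} = X_k (3I + X_k^2)(I + 3X_k^2)^{-1}, the matrix form of the
    cubically convergent scalar map x(3 + x^2)/(1 + 3x^2)."""
    I = np.eye(A.shape[0])
    X = A.copy()
    for k in range(1, maxit + 1):
        X2 = X @ X
        X_new = X @ (3.0 * I + X2) @ np.linalg.inv(I + 3.0 * X2)
        if np.linalg.norm(X_new - X, 2) < tol:
            return X_new, k
        X = X_new
    return X, maxit

# Illustrative test matrix (eigenvalues 2 and -5).
A = np.array([[2.0, 1.0],
              [0.0, -5.0]])
S_h, it_h = halley_sign(A)
print(np.allclose(S_h @ S_h, np.eye(2)))
```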
A key advancement in this direction involves designing high-order iterations using conformal mappings and stability-enhancing transformations. For instance, the recent work of Jung and Chun [13] proposes a three-point iterative framework where weight functions are tuned to optimize the stability region and convergence order. Their approach constructs an iteration scheme conformally equivalent to an eighth-order polynomial, effectively balancing convergence speed and numerical stability.
Despite these advancements, further improvements are needed to construct iterative methods with both high-order convergence and computational robustness [14]. Motivated by these challenges [15], this paper introduces a sixth-order iterative method based on refined rational approximations and weight functions. Unlike classical methods such as Newton’s iteration, which exhibits quadratic convergence, the proposed iterative scheme achieves a higher convergence rate of order six, leading to improved computational efficiency. The methodology is constructed by extending a high-order nonlinear solver for simple roots to the matrix setting, providing a systematic framework for deriving efficient iterative schemes for computing the MSF. The global convergence behavior of the scheme is further supported by an analysis of attraction basins, offering insights into its applicability across a broad spectrum of matrices. These contributions collectively distinguish the approach from existing methods and establish its effectiveness in computing the MSF.
The structure of the remainder of this study is organized as follows. Section 2 explores the benefits of a high-order scheme and introduces a novel approach for solving scalar nonlinear equations, which is subsequently generalized to the matrix setting. An analytical investigation establishes the sixth-order convergence of the proposed solver. Next, in Section 3, we investigate the global convergence behavior of the proposed method using regions of attraction basins, alongside an assessment of its stability properties. Section 4 presents numerical experiments that substantiate the theoretical findings and further illustrate the effectiveness of the proposed methodology. Finally, Section 5 offers a comprehensive conclusion, summarizing the key contributions of this work.

2. An Accelerated Sixth-Order Procedure

We now turn our attention to the scalar analogue of Equation (2), which is given by
f(x) = x^2 - 1 = 0.
In this context, f ( x ) corresponds to the scalar version of (2), whose solutions are x = ± 1 .
To enhance most classical approaches, such as (3)–(5) (for more details, see [16]), we propose a refined multi-step iteration scheme based on rational approximations and weight functions, formulated as follows:
\begin{aligned}
d_k &= x_k - \frac{f(x_k)}{f'(x_k)}, \qquad k \ge 0,\\
z_k &= x_k - \frac{835 f(x_k) - 836 f(d_k)}{835 f(x_k) - 1671 f(d_k)} \cdot \frac{f(x_k)}{f'(x_k)},\\
p_k &= z_k - f(z_k) \left[ \frac{f(z_k) - f(x_k)}{z_k - x_k} \right]^{-1},\\
x_{k+1} &= p_k - f(p_k) \left[ \frac{f(p_k) - f(d_k)}{p_k - d_k} \right]^{-1}.
\end{aligned} \qquad (6)
The structure in (6) has been designed intentionally for two reasons. The first is to obtain a new method that does not belong to the Padé family of methods. The second is to compute the MSF efficiently with global convergence behavior, as will be discussed later in this work.
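A scalar sketch of scheme (6), applied to $f(x) = x^2 - 1$, illustrates the rapid convergence (illustrative Python; the starting value and iteration count are not from the paper):

```python
def step(x, f, fp):
    """One pass of the four-substep scheme (6) for a scalar equation
    f(x) = 0; fp is the derivative f'."""
    fx = f(x)
    d = x - fx / fp(x)                                   # Newton substep
    fd = f(d)
    w = (835.0 * fx - 836.0 * fd) / (835.0 * fx - 1671.0 * fd)
    z = x - w * fx / fp(x)                               # weighted substep
    fz = f(z)
    p = z - fz * (z - x) / (fz - fx)                     # secant with x_k
    fpv = f(p)
    return p - fpv * (p - d) / (fpv - fd)                # secant with d_k

f = lambda x: x * x - 1.0
df = lambda x: 2.0 * x

x = 1.7                      # illustrative starting value
for _ in range(2):
    x = step(x, f, df)
print(abs(x - 1.0) < 1e-10)  # two sweeps already reach the root +1
```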
Theorem 1. 
Let $\theta \in D$ be a simple root of the sufficiently smooth function $f : D \subseteq \mathbb{C} \to \mathbb{C}$. If the starting approximation $x_0$ is chosen sufficiently close to θ, then the procedure (6) converges to θ with sixth-order accuracy.
Proof. 
Let θ be a simple zero of the function f. Given that f possesses sufficient smoothness, we perform Taylor series expansions of $f(x_k)$ and its derivative $f'(x_k)$ around θ, yielding the following expressions:
f(x_k) = f'(\theta) \left[ e_k + c_2 e_k^2 + c_3 e_k^3 + c_4 e_k^4 + c_5 e_k^5 + c_6 e_k^6 + O(e_k^7) \right], \qquad (7)
as well as
f'(x_k) = f'(\theta) \left[ 1 + 2c_2 e_k + 3c_3 e_k^2 + 4c_4 e_k^3 + 5c_5 e_k^4 + 6c_6 e_k^5 + O(e_k^6) \right]. \qquad (8)
Here, the notation $e_k = x_k - \theta$ denotes the error term, and the coefficients $c_k$ are given by
c_k = \frac{f^{(k)}(\theta)}{k! \, f'(\theta)}, \qquad k \ge 2.
By utilizing Equations (7) and (8), we derive the following expression:
\frac{f(x_k)}{f'(x_k)} = e_k - c_2 e_k^2 + 2\left(c_2^2 - c_3\right)e_k^3 + \left(-4c_2^3 + 7c_2 c_3 - 3c_4\right)e_k^4 + O(e_k^5). \qquad (9)
Substituting Equation (9) into $d_k$ as defined in (6), we obtain
d_k = \theta + c_2 e_k^2 + \left(2c_3 - 2c_2^2\right)e_k^3 + \left(4c_2^3 - 7c_2 c_3 + 3c_4\right)e_k^4 + O(e_k^5). \qquad (10)
Utilizing Equations (7)–(10), we obtain
\frac{835 f(x_k) - 836 f(d_k)}{835 f(x_k) - 1671 f(d_k)} = 1 + c_2 e_k + \left(2c_3 - \frac{834}{835}c_2^2\right)e_k^2 + \left(-\frac{1669}{697225}c_2^3 - \frac{1666}{835}c_2 c_3 + 3c_4\right)e_k^3 + O(e_k^4). \qquad (11)
By substituting (11) into Equation (6) and performing Taylor expansion with some simplifications, we deduce that
z_k = \theta - \frac{1}{835}c_2^2 e_k^3 + \left(\frac{699729}{697225}c_2^3 - \frac{839}{835}c_2 c_3\right)e_k^4 + O(e_k^5). \qquad (12)
Now, by performing a Taylor expansion of f ( z k ) about θ and incorporating Equation (6), we arrive at
p_k = \theta - \frac{1}{835}c_2^3 e_k^4 + \left(\frac{700564}{697225}c_2^4 - \frac{168}{167}c_2^2 c_3\right)e_k^5 + O(e_k^6). \qquad (13)
Finally, using Equations (7) and (13), we obtain the refined expression
\frac{f(p_k) - f(d_k)}{p_k - d_k} = f'(\theta) + f'(\theta)c_2^2 e_k^2 + 2c_2\left(c_3 - c_2^2\right)f'(\theta)e_k^3 + \left(\frac{3339}{835}c_2^4 - 6c_2^2 c_3 + 3c_2 c_4\right)f'(\theta)e_k^4 + O(e_k^5). \qquad (14)
Using (14), we can determine that (6) yields the following error equation:
e_{k+1} = -\frac{c_2^5}{835} e_k^6 + O(e_k^7). \qquad (15)
This completes the proof by showing the sixth order of convergence for the error equation.    □
The iterative scheme outlined in (6) can now be applied to solve (2). Proceeding with this approach leads to
X_{k+1} = X_k \left(2925 I + 14615 X_k^2 + 8763 X_k^4 + 417 X_k^6\right) \left(418 I + 8772 X_k^2 + 14610 X_k^4 + 2920 X_k^6\right)^{-1}, \qquad (16)
having the starting matrix
X_0 = A. \qquad (17)
In a similar manner, the reciprocal formulation corresponding to Equation (16) is obtained by employing the following systematic approach:
X_{k+1} = \left(418 I + 8772 X_k^2 + 14610 X_k^4 + 2920 X_k^6\right) \left[X_k\left(2925 I + 14615 X_k^2 + 8763 X_k^4 + 417 X_k^6\right)\right]^{-1}. \qquad (18)
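A direct NumPy transcription of iteration (16), here called pm61 for convenience (an illustrative sketch, not the authors' Mathematica implementation; the test matrix is also illustrative):

```python
import numpy as np

def pm61(A, tol=1e-10, maxit=50):
    """Proposed sixth-order iteration (16):
    X_{k+1} = X_k (2925I + 14615X^2 + 8763X^4 + 417X^6)
                  (418I + 8772X^2 + 14610X^4 + 2920X^6)^{-1},  X_0 = A."""
    I = np.eye(A.shape[0])
    X = A.copy()
    for k in range(1, maxit + 1):
        X2 = X @ X
        X4 = X2 @ X2
        X6 = X4 @ X2
        N = X @ (2925.0 * I + 14615.0 * X2 + 8763.0 * X4 + 417.0 * X6)
        D = 418.0 * I + 8772.0 * X2 + 14610.0 * X4 + 2920.0 * X6
        X_new = N @ np.linalg.inv(D)
        if np.linalg.norm(X_new - X, 2) < tol:
            return X_new, k
        X = X_new
    return X, maxit

# Illustrative test matrix with eigenvalues 6, -4, 3.
A = np.array([[6.0, 2.0, 0.0],
              [0.0, -4.0, 1.0],
              [0.0, 0.0, 3.0]])
S, iters = pm61(A)
print(np.allclose(S @ S, np.eye(3)))
```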
The computational efficiency of iterative methods for determining the MSF depends largely on the order of convergence. Classical methods such as Newton’s iteration (3) exhibit quadratic convergence, which, while sufficient for moderate-sized problems, becomes computationally expensive for large-scale complex matrices. High-order methods are designed to achieve faster convergence rates, reducing the number of iterations needed to achieve a given accuracy. This is especially useful when dealing with complex matrices, where each iteration involves matrix–matrix operations such as multiplications and inversions. By employing high-order iterations, the total computational cost can be reduced, making the method more practical for applications in numerical linear algebra.
Another crucial advantage of high-order methods is their enhanced numerical stability. Lower-order methods, such as Newton’s iteration, often require stabilization techniques such as scaling and squaring to maintain numerical accuracy, especially for matrices with eigenvalues close to the imaginary axis. High-order methods naturally mitigate these issues by achieving rapid convergence within fewer iterations, thereby reducing error accumulation and improving robustness. Moreover, the flexibility of high-order schemes allows for the incorporation of adaptive techniques, such as dynamically selecting iteration parameters based on the spectral properties of the matrix. This adaptability makes high-order methods particularly attractive for solving ill-conditioned problems and ensures reliable performance across a broad spectrum of computational tasks.
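As one concrete example of such an adaptive technique, determinantal scaling (a standard stabilization of Newton's sign iteration, see Higham [4]) rescales $X_k$ by $\mu_k = |\det X_k|^{-1/n}$ before each step. A hedged NumPy sketch (the test matrix is illustrative):

```python
import numpy as np

def newton_sign_scaled(A, tol=1e-10, maxit=100):
    """Newton's iteration with determinantal scaling,
        X <- ((mu X) + (mu X)^{-1}) / 2,   mu = |det X|^{-1/n},
    which equalizes the eigenvalue magnitudes and speeds up the initial
    phase when the spectrum is widely spread."""
    n = A.shape[0]
    X = A.astype(float).copy()
    for k in range(1, maxit + 1):
        mu = abs(np.linalg.det(X)) ** (-1.0 / n)
        Xs = mu * X
        X_new = 0.5 * (Xs + np.linalg.inv(Xs))
        if np.linalg.norm(X_new - X, 2) <= tol * np.linalg.norm(X_new, 2):
            return X_new, k
        X = X_new
    return X, maxit

A = np.diag([1000.0, -0.001, 5.0])       # widely spread eigenvalues
S, iters = newton_sign_scaled(A)
print(np.allclose(S, np.diag([1.0, -1.0, 1.0])))
```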
Theorem 2. 
Assume that $X_0$ serves as a suitable initial guess and that A is an invertible matrix with no purely imaginary eigenvalues. Under these assumptions, the iterative scheme described by (18) (or, equivalently, (16)) converges to $M = \operatorname{sign}(A)$ with a sixth-order rate of convergence.
Proof. 
It is recalled that the decomposition of the matrix A is carried out using an invertible matrix Z of identical dimensions, in conjunction with the Jordan block matrix J. This leads to the following factorization:
A = Z J Z^{-1}. \qquad (19)
By leveraging this decomposition and conducting a meticulous structural analysis of the solver, an iterative scheme is formulated for calculating the eigenvalues ( μ ). This iterative process transitions from iteration k to the subsequent iteration k + 1 as follows:
\mu_{k+1}^{i} = \left(418 + 8772\left(\mu_k^{i}\right)^2 + 14610\left(\mu_k^{i}\right)^4 + 2920\left(\mu_k^{i}\right)^6\right) \left[\mu_k^{i}\left(2925 + 14615\left(\mu_k^{i}\right)^2 + 8763\left(\mu_k^{i}\right)^4 + 417\left(\mu_k^{i}\right)^6\right)\right]^{-1}, \qquad 1 \le i \le n, \qquad (20)
where
s_i = \operatorname{sign}\left(\mu_k^{i}\right) = \pm 1. \qquad (21)
From a theoretical standpoint, and upon performing appropriate simplifications, the iterative scheme delineated in (20) reveals that the eigenvalues asymptotically converge toward the limiting values s i = ± 1 . More precisely, this convergence behavior is mathematically characterized by the following expression:  
\lim_{k \to \infty} \left| \frac{\mu_{k+1}^{i} - s_i}{\mu_{k+1}^{i} + s_i} \right| = 0. \qquad (22)
Equation (22) encapsulates the asymptotic tendency of the eigenvalues to cluster around ± 1 as the iterative process advances. With each successive iteration, the eigenvalues exhibit an increasingly tight convergence toward these limiting values. Having established the theoretical foundation for the method’s convergence, we now shift our focus toward analyzing its rate of convergence. To facilitate this investigation, we proceed as follows:
B_k = X_k \left(2925 I + 14615 X_k^2 + 8763 X_k^4 + 417 X_k^6\right). \qquad (23)
Utilizing Equation (23), and recognizing that $X_k$ is a rational function of A and hence commutes with M just as A does, the following expression can be formulated:
\begin{aligned}
X_{k+1} - M &= \left(418 I + 8772 X_k^2 + 14610 X_k^4 + 2920 X_k^6\right) B_k^{-1} - M\\
&= \left[418 I + 8772 X_k^2 + 14610 X_k^4 + 2920 X_k^6 - M B_k\right] B_k^{-1}\\
&= \left[418 I + 8772 X_k^2 + 14610 X_k^4 + 2920 X_k^6 - X_k\left(2925 M + 14615 X_k^2 M + 8763 X_k^4 M + 417 X_k^6 M\right)\right] B_k^{-1}\\
&= \left[418\left(X_k - M\right)^6 - 417 X_k M \left(X_k - M\right)^6\right] B_k^{-1}\\
&= \left(X_k - M\right)^6 \left[418 I - 417 X_k M\right] B_k^{-1}. 
\end{aligned} \qquad (24)
Using (24) and the 2-norm, it is possible to determine that
\|X_{k+1} - M\| \le \left\|B_k^{-1}\right\| \, \left\|418 I - 417 X_k M\right\| \, \left\|X_k - M\right\|^6. \qquad (25)
This finding underscores that the iterative scheme attains a convergence rate of order six, contingent upon the selection of a suitably chosen initial matrix, such as the one specified in (17), as the starting point.    □
Compared to existing rational approximation-based methods, our approach integrates a high-order nonlinear solver into the iterative framework, enhancing both stability and convergence properties. Moreover, while lower-order methods often require stabilization techniques such as scaling and squaring to handle matrices with eigenvalues near the imaginary axis, the proposed method naturally mitigates such issues due to its rapid convergence and inherent numerical robustness.
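The factorization underlying (24) can be verified at the scalar level: with $p$ and $q$ the numerator and denominator polynomials of (16), one checks that $x\,p(x) - q(x) = (x - 1)^6 (417x - 418)$, the source of the sixth-power error term. A NumPy check of this identity (the signs are my reconstruction; the code only verifies the polynomial arithmetic):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Numerator and denominator polynomials of (16), ascending coefficients:
# p(x) = 2925 + 14615 x^2 + 8763 x^4 + 417 x^6, similarly q(x).
p = [2925.0, 0.0, 14615.0, 0.0, 8763.0, 0.0, 417.0]
q = [418.0, 0.0, 8772.0, 0.0, 14610.0, 0.0, 2920.0]

# Scalar residual x*p(x) - q(x) of the iteration map x -> x p(x) / q(x).
residual = P.polysub(P.polymulx(p), q)

# Factorized form (x - 1)^6 (417 x - 418) behind the sixth-order bound.
factored = P.polymul(P.polypow([-1.0, 1.0], 6), [-418.0, 417.0])

print(np.allclose(residual, factored))   # sixth-order contact at x = 1
```

By the odd symmetry of the map, the mirrored identity $x\,p(x) + q(x) = (x+1)^6(417x + 418)$ gives the same contact order at $x = -1$.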

3. Attraction Basins and Stability

Understanding the convergence behavior of iterative methods is crucial when computing the MSF. One effective way to visualize and analyze this behavior is through attraction basins, which depict the regions in the complex plane where different initial guesses lead to convergence to particular solutions. Attraction basins provide insights into the stability, efficiency, and robustness of numerical methods [17].
Iterative methods for computing the MSF rely on successive approximations, where the choice of the starting value X 0 significantly affects the convergence trajectory. By plotting attraction basins, we can carry out the following steps:
  • Identify regions where the method converges rapidly.
  • Detect unstable zones where divergence or slow convergence occurs.
  • Compare different iterative schemes in terms of their efficiency.
Numerical stability is a crucial factor when selecting an iterative method. As analyzed in [18], attraction basin plots provide valuable insights into the robustness of a method against perturbations in the initial guess, the presence of fractal-like structures that indicate chaotic behavior in iterative dynamics, and the impact of scaling techniques on improving convergence regions.
A critical advantage of drawing attraction basins is their role in benchmarking different iterative methods. Methods with larger, well-connected basins typically offer superior convergence properties, while those with fragmented or irregular basins may suffer from instability.
Drawing attraction basins is a powerful tool for analyzing iterative methods that can be used to compute the MSF. These visualizations help assess convergence behavior, stability, and efficiency, guiding the selection and refinement of high-order iterative schemes. As we develop new iterative methods, attraction basin analysis remains essential in ensuring their practical effectiveness.
In this work, the techniques presented in (16) and (18) have been introduced with the specific aim of enhancing the attraction regions associated with such schemes in the context of solving the equation f ( x ) = x 2 1 = 0 . To provide a more thorough understanding, we proceed by investigating how the presented solvers show global convergence properties and demonstrate improved radii of convergence. This is achieved by illustrating their corresponding attraction regions within the area:
[-4, 4] \times [-4, 4],
while resolving f ( x ) = 0 .
For this purpose, the complex plane is discretized into a grid of nodes, and the behavior of each point is evaluated by using it as an initial value. This allows us to determine whether the iteration converges or diverges. In cases of convergence, the points are shaded according to the number of iterations they undergo, with the convergence condition being satisfied when $|f(x_k)| \le 10^{-2}$.  Figure 1 depicts the basins of attraction for (16) and (18). The results reveal that, for the methods (16) and (18), the convergence radii are large and global.
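The procedure just described can be sketched as follows (illustrative Python; the grid resolution and iteration cap are not from the paper, and the grid is chosen so that no sample lies exactly on the imaginary axis, where the sign function is undefined):

```python
import numpy as np

def basin_iterations(re, im, tol=1e-2, maxit=60):
    """Iterate the scalar form of (16) from x0 = re + i*im and return the
    number of steps until |x^2 - 1| <= tol (maxit if it never converges)."""
    x = complex(re, im)
    for k in range(1, maxit + 1):
        num = x * (2925 + 14615 * x**2 + 8763 * x**4 + 417 * x**6)
        den = 418 + 8772 * x**2 + 14610 * x**4 + 2920 * x**6
        if den == 0:
            return maxit
        x = num / den
        if abs(x * x - 1.0) <= tol:
            return k
    return maxit

# Coarse sample of [-4, 4] x [-4, 4]; 80 points per axis avoids Re(x0) = 0.
# A fine grid colored by the returned count reproduces pictures like Figure 1.
grid = np.linspace(-4.0, 4.0, 80)
counts = [[basin_iterations(a, b) for a in grid] for b in grid]
print(max(max(row) for row in counts) < 60)   # every sampled point converged
```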
Reasoning in a spirit similar to that already discussed in [19] establishes the stability of the scheme.
Theorem 3. 
Let A be an invertible matrix. Then the sequence $\{X_k\}_{k=0}^{\infty}$ generated by (18), initialized with $X_0 = A$, exhibits stability in the asymptotic sense.
Proof. 
To initiate the analysis, let $W_k$ denote the perturbation of $X_k$ at the k-th iteration of the numerical solver. Further elaboration on this perturbation approach can be found in [20]. To systematically examine the effect of perturbations, the following relation is considered at each computational step:
\tilde{X}_k = X_k + W_k.
At this stage, we take into account the assumption that $(W_k)^i \approx 0$ for all $i \ge 2$, which holds in the framework of a first-order error analysis provided that $W_k$ remains sufficiently small. Based on this premise, and after performing a series of algebraic simplifications, the following inequality is derived:
\|W_{k+1}\| \le \frac{1}{2}\left\|W_0 - M W_0 M\right\|.
Thus, the sequence $\{X_k\}_{k=0}^{\infty}$ generated by the method in (16) is stable.    □

4. Numerical Examples

Here, we perform an assessment of the performance of the proposed iterative solvers by evaluating their effectiveness across a diverse range of problem types. The entire implementation process has been carried out using Wolfram Mathematica (see [21,22]). A systematic approach has been adopted to address several computational aspects, including the precise detection of convergence. For the sake of clarity and coherence, the testing procedure is categorized into two distinct groups, following a methodology similar to that employed in [23]: the first category comprises tests involving real matrices, while the second focuses on complex matrices.
The iterative methods considered in the comparative analysis include the method given in Equation (3), referred to as NM2; the approach defined by (5), labeled HM3; the procedure outlined in (16), denoted PM61; the iteration specified in (18), designated PM62; and the fourth-order method of Zaka Ullah et al. [14], denoted ZUM4:
X_{k+1} = \left(5 I + 42 X_k^2 + 17 X_k^4\right) \left[X_k\left(23 I + 38 X_k^2 + 3 X_k^4\right)\right]^{-1}.
For all Newton-type iterative schemes considered in this comparison, the initial matrix X 0 is chosen in accordance with the specification given in (17). The computational error at each iteration is evaluated using the following formulation:
E_{k+1} = \left\|X_{k+1}^2 - I\right\|_2 \le \kappa,
where κ represents the predefined convergence threshold, serving as the stopping criterion for the iterative process.
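In code, this stopping rule reads (illustrative Python sketch):

```python
import numpy as np

def converged(X, kappa=1e-4):
    """Stopping test E_{k+1} = ||X_{k+1}^2 - I||_2 <= kappa."""
    return np.linalg.norm(X @ X - np.eye(X.shape[0]), 2) <= kappa

S = np.diag([1.0, -1.0, 1.0])   # an exact sign matrix: residual is zero
print(converged(S))
```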
Example 1. 
A set of twelve randomly generated real matrices is obtained by employing the random seed command SeedRandom[789]. Following their generation, the corresponding MSFs are computed and examined to facilitate a comparative analysis. These matrices are constructed with entries in the range [-100, 100] and encompass dimensions varying from 100 × 100 up to 1200 × 1200. All computations are carried out under $\kappa = 10^{-4}$.
Table 1 and Table 2 showcase the numerical results corresponding to Example 1, providing substantial evidence of the efficacy of the methods introduced in this study. Of particular note, the method PM61 enhances computational efficiency by reducing the total number of iterates needed to calculate the MSF. This improvement is reflected in a marked decrease in the average CPU time, measured in seconds, across 12 randomly produced matrices of different sizes, and it captures the overall computational cost of the proposed scheme relative to the existing ones.
Example 2. 
Within this numerical investigation, the MSF is computed for a set of eight randomly generated complex matrices. The evaluation is conducted while adhering to $\kappa = 10^{-4}$. The implementation of these random matrices is illustrated in the Mathematica 13.3 code snippet presented below:
SeedRandom[789];
nu = 8;
Table[A[n1] = RandomComplex[{-100 - 100 I,
   100 + 100 I}, {150 n1, 150 n1}];, {n1, nu}];
Table 3 and Table 4 furnish computational comparisons for Example 2, reinforcing the efficacy of the proposed solver in determining the MSF for eight randomly produced complex matrices. Consistent numerical experiments conducted across a diverse set of related test cases further corroborate these findings. Among the evaluated methods, the PM61 algorithm exhibits superior efficiency and robustness, outperforming its counterparts in terms of computational accuracy and convergence behavior.
The numerical experiments conducted here provide substantial evidence in support of the superior performance of the proposed iterative methods, particularly PM61, in computing the MSF. The comparative assessment across multiple test cases demonstrates that PM61 exhibits a reduction in iteration count, as observed in Table 1 and Table 3. For instance, in Example 1, the average number of iterations needed for PM61 to achieve convergence is 8.08, which is markedly lower than NM2 (21.25), HM3 (13.66), and ZUM4 (9.75). This efficiency advantage remains consistent in Example 2, where PM61 requires an average of 8.50 iterations compared to NM2 (22.62) and HM3 (14.37). Such a reduction in iteration count is crucial when dealing with large-scale matrices, as it directly translates to fewer matrix–matrix operations, thereby minimizing computational complexity and memory usage.
Beyond the iteration count, another critical metric in evaluating iterative solvers is their CPU execution time, as shown in Table 2 and Table 4. The PM61 method consistently outperforms NM2, HM3, and ZUM4 in terms of computational speed. In Example 1, PM61 achieves an average execution time of 1.329 s, which is 29.7% faster than NM2 (1.889 s) and 11.2% faster than ZUM4 (1.481 s). Similar trends are observed in Example 2, where PM61 exhibits an average runtime of 4.456 s, making it significantly more efficient than NM2 (6.026 s) and HM3 (5.393 s). The improved efficiency of PM61 is attributed to its higher-order convergence rate, which reduces the number of matrix computations required per iteration. Furthermore, the method demonstrates superior scalability, handling larger matrices (e.g., 1200 × 1200 ) with competitive execution times compared to alternative schemes.
A key advantage of PM61 lies in its numerical stability and robustness, particularly when applied to real and complex matrices of varying dimensions. Unlike lower-order methods such as NM2, which may require additional stabilization techniques (e.g., scaling and squaring), PM61 converges reliably across different problem instances without exhibiting erratic behavior. This robustness is evident in the consistent performance metrics recorded in both Example 1 and Example 2. Moreover, the method maintains a balanced trade-off between iteration count and per-iteration computational cost, ensuring that the efficiency gains are realized across a broad spectrum of matrix sizes.
In summary, the numerical analysis underscores the effectiveness of PM61 in computing the MSF. The method excels in minimizing the number of iterations, reducing computational time, and enhancing numerical stability, making it a highly favorable choice for large-scale applications.

5. Conclusions

The MSF remains a vital tool in numerical linear algebra, with applications spanning control theory, eigenvalue computations, and differential equations. While classical iterative methods such as Newton’s iteration and Padé approximations provide efficient solutions, the quest for high-order convergence has driven the development of advanced iterative schemes. This paper contributes to this growing body of research by proposing a novel sixth-order method designed to improve computational efficiency.
In fact, in this paper, we have furnished a novel procedure with a sixth-rate convergence for computing the MSF efficiently. By employing a nonlinear equations solver (6) designed for simple roots, we have formulated an extended matrix procedure and established its convergence properties. Theoretical analysis has confirmed the sixth-order convergence of our method, while numerical experiments have demonstrated its superior performance compared to classical techniques. Additionally, basins of attraction have been provided to showcase the global convergence features of PM61 (or PM62). For future research, one possible direction is to extend the proposed iterative method to calculate matrix functions beyond the sign function, such as the matrix square root or matrix sector function.

Author Contributions

Conceptualization, S.W. and T.L.; Methodology, T.L.; Software, S.W., Z.W. and T.L.; Validation, Z.W. and T.L.; Formal analysis, W.X. and T.L.; Investigation, W.X. and T.L.; Resources, Y.Q. and T.L.; Data curation, Y.Q. and T.L.; Writing—original draft, S.W., Z.W., W.X. and Y.Q.; Writing—review & editing, S.W., Z.W., W.X. and Y.Q.; Visualization, T.L.; Supervision, T.L.; Project administration, T.L.; Funding acquisition, T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Project of Jilin Provincial Department of Education (JJKH20251638KJ), the Open Fund Project of Marine Ecological Restoration and Smart Ocean Engineering Research Center of Hebei Province (HBMESO2321), the Technical Service Project of Eighth Geological Brigade of Hebei Bureau of Geology and Mineral Resources Exploration (KJ2022-021), the Technical Service Project of Hebei Baodi Construction Engineering Co., Ltd. (KJ2024-012), the Natural Science Foundation of Hebei Province of China (A2020501007), and the Fundamental Research Funds for the Central Universities (N2123015).

Data Availability Statement

Data sharing is not applicable to this study, as no new datasets were created or analyzed in the preparation of this paper.

Acknowledgments

We are grateful to the anonymous referees for their careful reading of the initial version of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hogben, L. Handbook of Linear Algebra; Chapman and Hall/CRC: Boca Raton, FL, USA, 2007. [Google Scholar]
  2. Denman, E.D.; Beavers, A.N. The matrix sign function and computations in systems. Appl. Math. Comput. 1976, 2, 63–94. [Google Scholar] [CrossRef]
  3. Roberts, J.D. Linear model reduction and solution of the algebraic Riccati equation by use of the sign function. Int. J. Cont. 1980, 32, 677–687. [Google Scholar] [CrossRef]
  4. Higham, N.J. Functions of Matrices: Theory and Computation; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2008. [Google Scholar]
Figure 1. Basins of attraction for (16) (left) and (18) (right).
Table 1. For Example 1, a comparative performance assessment based on the number of iterations required for convergence.
n × n         NM2    HM3    ZUM4   PM61   PM62
100 × 100      19     12      9      7      7
200 × 200      17     11      8      7      7
300 × 300      18     12     10      9      7
400 × 400      20     13      9      7      7
500 × 500      23     15     10      8      8
600 × 600      20     13      9      8      8
700 × 700      22     14     10      9      7
800 × 800      23     14     10      8      8
900 × 900      24     15     11      9      9
1000 × 1000    22     14     10      8      8
1100 × 1100    23     15     10      8      8
1200 × 1200    24     16     11      9      9
Average       21.25  13.66   9.75   8.08   7.75
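The NM2 baseline in the table is the classical Newton iteration for the matrix sign function, X_{k+1} = (X_k + X_k^{-1})/2 started from X_0 = A. A minimal sketch of that baseline is given below for orientation; the test matrix, tolerance, and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def newton_sign(A, tol=1e-10, max_iter=100):
    """Classical Newton iteration for sign(A):
    X_{k+1} = (X_k + X_k^{-1}) / 2, started from X_0 = A.
    Returns the approximate sign and the number of iterations used."""
    X = A.astype(float)
    for k in range(max_iter):
        X_next = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_next - X, ord='fro') < tol:
            return X_next, k + 1
        X = X_next
    return X, max_iter

# Upper-triangular test matrix with eigenvalues 2 and -3,
# so sign(A) has eigenvalues +1 and -1.
A = np.array([[2.0, 1.0], [0.0, -3.0]])
S, iters = newton_sign(A)
# sign(A) is involutory: S @ S should be the identity.
print(np.allclose(S @ S, np.eye(2)))  # True
```

Counting `iters` for each method over growing matrix sizes is what produces iteration tables of the kind shown above.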
Table 2. For Example 1, comparisons in terms of CPU execution time (in seconds).
n × n         NM2    HM3    ZUM4   PM61   PM62
100 × 100    0.018  0.013  0.014  0.013  0.016
200 × 200    0.058  0.054  0.049  0.069  0.046
300 × 300    0.174  0.156  0.201  0.161  0.146
400 × 400    0.354  0.381  0.310  0.265  0.262
500 × 500    0.699  0.623  0.542  0.476  0.467
600 × 600    0.996  0.828  0.738  0.715  0.722
700 × 700    1.427  1.256  1.126  1.178  0.913
800 × 800    2.002  1.751  1.570  1.383  1.420
900 × 900    2.808  2.471  2.256  2.049  2.122
1000 × 1000  3.406  3.068  2.708  2.356  2.446
1100 × 1100  4.532  4.250  3.509  3.015  3.020
1200 × 1200  6.194  5.701  4.750  4.272  4.518
Average      1.889  1.713  1.481  1.329  1.342
Table 3. For Example 2, a comparative performance assessment based on the number of iterations required for convergence.
n × n         NM2    HM3    ZUM4   PM61   PM62
150 × 150      21     13      9      8      8
300 × 300      22     14     10      8      8
450 × 450      22     14     10      8      8
600 × 600      23     15     10      9      9
750 × 750      22     14     10      9      9
900 × 900      22     14     10      8      8
1050 × 1050    26     16     11      9      9
1200 × 1200    23     15     10      9      9
Average       22.62  14.37  10.00   8.50   8.50
Table 4. For Example 2, comparisons are made in terms of CPU execution time (in seconds).
n × n          NM2     HM3     ZUM4    PM61    PM62
150 × 150     0.076   0.069   0.081   0.077   0.096
300 × 300     0.411   0.415   0.400   0.370   0.446
450 × 450     1.081   1.055   1.000   0.938   0.990
600 × 600     2.252   2.302   2.014   2.115   2.138
750 × 750     3.998   3.831   3.576   3.601   3.862
900 × 900     6.810   6.224   6.020   5.119   5.291
1050 × 1050  13.744  11.751  10.203   9.230   9.243
1200 × 1200  19.832  17.498  13.737  14.201  14.130
Average       6.026   5.393   4.629   4.456   4.525
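CPU-time comparisons of this kind can be reproduced in spirit with a small timing harness that wraps the iteration for growing matrix sizes. The sketch below uses the classical Newton iteration as a stand-in for the compared methods; the SPD test matrices (whose sign is the identity), the seed, and the sizes are assumptions for illustration, not the paper's experimental setup:

```python
import time
import numpy as np

def newton_sign(A, tol=1e-8, max_iter=100):
    # Baseline Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2.
    X = A.copy()
    for _ in range(max_iter):
        X_next = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_next - X, ord='fro') < tol:
            break
        X = X_next
    return X_next

rng = np.random.default_rng(0)
for n in (100, 200):
    # SPD test matrix: all eigenvalues >= 1, so sign(A) = I exactly.
    M = rng.standard_normal((n, n))
    A = M @ M.T + np.eye(n)
    t0 = time.perf_counter()
    S = newton_sign(A)
    elapsed = time.perf_counter() - t0
    print(f"{n} x {n}: {elapsed:.3f} s")
```

Wall-clock timings from `time.perf_counter` fluctuate with hardware and load, which is why the tables average over a range of sizes rather than reporting a single run.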
Wang, S.; Wang, Z.; Xie, W.; Qi, Y.; Liu, T. An Accelerated Sixth-Order Procedure to Determine the Matrix Sign Function Computationally. Mathematics 2025, 13, 1080. https://doi.org/10.3390/math13071080
