Article

A Data-Driven Parameter Prediction Method for HSS-Type Methods

1 Hunan Key Laboratory for Computation and Simulation in Science and Engineering, Xiangtan University, Xiangtan 411105, China
2 School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
3 Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, Xiangtan University, Xiangtan 411105, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(20), 3789; https://doi.org/10.3390/math10203789
Submission received: 2 September 2022 / Revised: 5 October 2022 / Accepted: 10 October 2022 / Published: 14 October 2022
(This article belongs to the Special Issue Matrix Equations and Their Algorithms Analysis)

Abstract: Some matrix-splitting iterative methods for solving systems of linear equations contain parameters that must be specified in advance, and the choice of these parameters directly affects the efficiency of the corresponding iterative methods. This paper uses a Bayesian inference-based Gaussian process regression (GPR) method to predict the relatively optimal parameters of some HSS-type iteration methods and provides extensive numerical experiments that compare the prediction performance of the GPR method with that of existing methods. Numerical results show that, compared with the currently available methods for choosing the parameters of HSS-type iteration methods, the GPR method requires less computational effort, predicts parameters closer to the optimal ones, and is more universal.

1. Introduction

Solving linear equations is one of the most fundamental topics in matrix computation, and with the development of science and technology, many important problems in the natural sciences and engineering can often be reduced to the following linear equation:
$$Ax = b, \qquad (1)$$
where $x, b \in \mathbb{C}^n$ and $A \in \mathbb{C}^{n \times n}$ is a large sparse non-Hermitian and positive definite matrix.
There are many powerful matrix-splitting iterative methods for solving systems of linear equations, such as the successive over-relaxation (SOR) method [1], the symmetric SOR (SSOR) method [2], the accelerated over-relaxation (AOR) method [3] and the symmetric AOR (SAOR) method [4]. Many researchers have applied them to different problems and made improvements [5,6,7,8]. To account for the structure of specific problems and to solve Equation (1) more efficiently, many new matrix-splitting iterative methods have been proposed. Bai et al. offered the Hermitian and skew-Hermitian splitting (HSS) method and the inexact HSS method [9]. To improve the efficiency of the HSS method, Bai et al. proposed the preconditioned HSS (PHSS) method [10]. Due to the promising performance of the HSS method, further HSS-type iteration methods were presented. These methods can be divided into the following two classes. The first consists of accelerated HSS-type methods, such as the generalized HSS method [11], the lopsided HSS (LHSS) method [12], the generalized PHSS method [13], the asymmetric HSS method [14] and the new HSS (NHSS) method [15]. In addition, Yang et al. offered the minimum residual HSS method [16] by applying the minimum residual technique to the HSS method, and Li et al. [17] proposed the single-step HSS (SHSS) method. Based on the shift-splitting method and the SHSS method, Li et al. established the SHSS-SS method [18].
Apart from the accelerated HSS-type methods, the second class consists of HSS-type methods focused on applications to different kinds of problems, such as saddle-point problems [19,20,21,22,23,24], matrix equations [25,26,27,28,29,30,31,32], complex symmetric linear systems [33,34,35] and nonlinear systems [36,37].
These iteration methods contain splitting parameters that need to be specified in advance. At present, there are three main approaches to selecting the splitting parameters. The first is obtaining relatively optimal parameters by traversing or experimenting within some interval [26,38,39]. The advantage of this traversal method is that it can obtain relatively accurate optimal parameters, but it requires a large amount of calculation and consumes a lot of extra time, especially when the dimension of the coefficient matrix is large. The second is estimating optimal parameters through theoretical analysis [40,41]. Some researchers find optimal parameters by minimizing the spectral radius of the iteration matrix. However, solving this optimization problem is very difficult in both theoretical analysis and practical computation. Bai et al. [42] proposed an accurate formula for computing the optimal parameters of the HSS method by directly minimizing the spectral radius of the iteration matrix, but only for coefficient matrices that are two-by-two matrices or two-by-two block matrices with specific forms. Some researchers find quasi-optimal parameters by minimizing an upper bound on the spectral radius of the iteration matrix of some iteration methods [9,12,17,18]. By a reasonable and simple optimization principle, Chen [43] proposed an accurate estimate of the optimal parameter of the HSS iteration method. Huang [44] and Yang [45] estimated the optimal parameters of the HSS method by solving a cubic polynomial equation and a quartic polynomial equation, respectively. Huang [46] proposed variable-parameter HSS methods, in which the parameter is updated at each step of the iteration. The above theoretical methods have the following limitations. First, each method is only available case by case, which makes it less universal. Second, such a method needs to compute the maximum or minimum eigenvalues of the matrix, which is time-consuming. Jiang et al. [47] proposed the third estimation approach, the Bayesian inference-based Gaussian process regression (GPR) method, to predict the optimal parameters in some alternating-direction iterative methods. This method uses a training set to learn a mapping between the dimension of the linear systems and the relatively optimal splitting parameters.
The choice of splitting parameters can greatly affect the efficiency of HSS-type iteration methods [47,48], which makes parameter selection of great importance. To overcome the limitations of the traversal method and the theoretical methods when computing the splitting parameters of HSS-type iteration methods, we use the GPR method to predict the splitting parameters of some HSS-type iteration methods. The main contributions of this work are:
  • We apply the GPR method to the prediction of the optimal splitting parameters of some HSS-type methods, which is a new application.
  • We provide extensive numerical experiments to compare the prediction performance of the GPR method with the traversal method and the theoretical methods.
The results of the numerical experiments show the following. Compared with the traversal method, the GPR method can predict almost the same parameters but with less computational effort. Compared with the theoretical methods, the GPR method can predict better parameters and is more universal: unlike the theoretical methods, which are available only case by case, the GPR method is suitable for all the HSS-type iteration methods tested. Moreover, the theoretical methods need to compute the maximum or minimum eigenvalues (or singular values) of the matrix, which is time-consuming when the dimension of the matrix is large; the GPR method overcomes this limitation.
The rest of the paper is organized as follows. In Section 2, we present the Gaussian process regression method based on Bayesian inference. In Section 3, we present the iteration schemes of some HSS-type iteration methods, the corresponding convergence conditions and the theoretical methods for estimating the relatively optimal splitting parameters. In Section 4, we illustrate the efficiency of the GPR method by numerical experiments. Finally, in Section 5, we give some concluding remarks and prospects.
Throughout the paper, the sets of $n \times n$ complex and real matrices are denoted by $\mathbb{C}^{n \times n}$ and $\mathbb{R}^{n \times n}$, respectively. If $X \in \mathbb{C}^{n \times n}$, let $X^T$, $X^{-1}$, $X^*$, $\|X\|_2$, $\|X\|_F$ denote the transpose, inverse, conjugate transpose, Euclidean norm and Frobenius norm of X, respectively. The notations $\lambda(X) = (\lambda_1(X), \lambda_2(X), \ldots, \lambda_n(X))$, $\sigma(X) = (\sigma_1(X), \sigma_2(X), \ldots, \sigma_n(X))$ and $\rho(X)$ denote the eigenvalue set, singular value set and spectral radius of X, respectively. The symbol $\otimes$ denotes the Kronecker product. I represents the identity matrix.

2. Gaussian Process Regression Method

In this section, we present a Gaussian process regression method based on Bayesian inference. Gaussian process regression is an application of non-parametric Bayesian estimation to regression problems and has a wide range of applications in the field of machine learning.

2.1. Bayesian Inference

Bayesian inference is a method that infers the population distribution or characteristic numbers of the population from the sample information and the prior information. Prior information is information about a statistical problem that is available before sampling. Taking inference of the distribution of an unknown quantity θ as an example, Bayesian inference regards θ as a random variable before the sample information is obtained, so it can be described by a probability distribution, called the prior distribution. After obtaining the samples, the population distribution, the samples and the prior distribution are combined by the Bayes formula to obtain a new distribution of the unknown quantity θ, called the posterior distribution. We can see that the process of Bayesian inference is essentially the process of updating the prior information through the sample information.
The Bayesian inference-based Gaussian process regression method is Bayesian inference with a Gaussian process as the prior distribution. The definition of a Gaussian process is given below.
Definition 1.
A Gaussian process (GP) is a collection of random variables $\{X_t, t \in T\}$ such that, for any finite subset $\{t_1, t_2, \ldots, t_k\}$ of T, $(X_{t_1}, X_{t_2}, \ldots, X_{t_k})$ follows a joint Gaussian distribution.
In our problem, we expect to obtain a mapping $f(\cdot)$ from the dimension of a linear system to the relatively optimal splitting parameter, so that for each given dimension $n^*$, the mapping outputs the corresponding relatively optimal splitting parameter $\alpha^* = f(n^*)$. To this end, this paper uses GPR to fit the mapping $f(n)$.

2.2. Model Building

Following the steps of Bayesian inference, we first give the prior information. For each given $n \in \mathbb{N}^+$, the corresponding optimal splitting parameter α is a random variable, and we denote α by $f(n)$ to reflect the correspondence between α and n. Considering that, in general, the observed α is polluted by additive noise, the observed α may not be exactly equal to $f(n)$. That is,
$$\alpha = f(n) + \eta, \qquad (2)$$
where η is the additive noise, and we assume that η follows a Gaussian distribution with zero mean and variance $\sigma^2$, i.e., $\eta \sim N(0, \sigma^2)$. The desirable range of σ is $10^{-6} \le \sigma \le 10^{-2}$; in this work, we take $\sigma = 10^{-4}$. We also assume that the noise terms are independent of each other. Obviously, our task is to obtain $f(\cdot)$.
Considering the random process $\{f(n), n \in \mathbb{N}^+\}$ and assuming this process is a GP, we have
$$f(n) \sim GP(\mu(n), k(n, n)),$$
where $\mu(\cdot)$ and $k(\cdot, \cdot)$ denote the mean function and covariance function of the GP $f(n)$, respectively. Once $\mu(\cdot)$ and $k(\cdot, \cdot)$ have been determined, the GP is also determined. The selection of $\mu(\cdot)$ and $k(\cdot, \cdot)$ is shown in Section 2.3.
Next, we obtain the sample information. Assume that we have a training set $D = \{(n_i, \alpha_i) \mid i = 1, 2, \ldots, d\} := \{\mathbf{n}, \boldsymbol{\alpha}\}$, where $(n_i, \alpha_i)$ is an input–output pair: $n_i$ is the dimension of the coefficient matrix and $\alpha_i$ is the optimal splitting parameter in a matrix-splitting iterative method. Obviously, the training set is the sample information. According to the prior information, $\{f(n_i), i = 1, 2, \ldots, d\}$ follows a joint Gaussian distribution, i.e.,
$$f(\mathbf{n}) := \begin{pmatrix} f(n_1) \\ \vdots \\ f(n_d) \end{pmatrix} \sim N\left( \begin{pmatrix} \mu(n_1) \\ \vdots \\ \mu(n_d) \end{pmatrix}, \begin{pmatrix} k(n_1, n_1) & \cdots & k(n_1, n_d) \\ \vdots & \ddots & \vdots \\ k(n_d, n_1) & \cdots & k(n_d, n_d) \end{pmatrix} \right).$$
Or, equivalently,
$$f(\mathbf{n}) \sim N(\mu(\mathbf{n}), k(\mathbf{n}, \mathbf{n})),$$
where $\mathbf{n} = (n_1, n_2, \ldots, n_d)^T$. From Equation (2), the distribution of the corresponding observed $\boldsymbol{\alpha}$ is
$$\boldsymbol{\alpha} \sim N(\mu(\mathbf{n}), k(\mathbf{n}, \mathbf{n}) + \sigma^2 I_d), \qquad (3)$$
where $I_d$ is the d-order identity matrix.
Finally, we can update the prior information by using the sample information. That is to say, for a new dimensional vector $\mathbf{n}^* = (n_1^*, n_2^*, \ldots, n_m^*)^T$, we can obtain the distribution of the optimal splitting parameters corresponding to $\mathbf{n}^*$, i.e., the conditional distribution $f^* \mid \mathbf{n}, \boldsymbol{\alpha}, \mathbf{n}^*$, where $f^* = (f(n_1^*), f(n_2^*), \ldots, f(n_m^*))^T$. According to the prior information, the sample $\boldsymbol{\alpha}$ and the predicted values at $\mathbf{n}^*$ follow a joint Gaussian distribution. From Equation (3), we have
$$\begin{pmatrix} \boldsymbol{\alpha} \\ f^* \end{pmatrix} \sim N\left( \begin{pmatrix} \mu(\mathbf{n}) \\ \mu(\mathbf{n}^*) \end{pmatrix}, \begin{pmatrix} k(\mathbf{n}, \mathbf{n}) + \sigma^2 I_d & k(\mathbf{n}, \mathbf{n}^*) \\ k(\mathbf{n}^*, \mathbf{n}) & k(\mathbf{n}^*, \mathbf{n}^*) \end{pmatrix} \right). \qquad (4)$$
To obtain the conditional distribution $f^* \mid \mathbf{n}, \boldsymbol{\alpha}, \mathbf{n}^*$ from Equation (4), we have the following theorem [49].
Theorem 1.
Let x and y be jointly Gaussian random vectors, i.e.,
$$\begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} \mu_x \\ \mu_y \end{pmatrix}, \begin{pmatrix} A & C \\ C^T & B \end{pmatrix} \right);$$
then the marginal distribution of x and the conditional distribution of x given y are
$$x \sim N(\mu_x, A), \qquad x \mid y \sim N\left( \mu_x + C B^{-1} (y - \mu_y),\ A - C B^{-1} C^T \right).$$
From Theorem 1, we have
$$f^* \mid \mathbf{n}, \boldsymbol{\alpha}, \mathbf{n}^* \sim N(\mu^*, \sigma^{*2}),$$
where
$$\mu^* = k(\mathbf{n}^*, \mathbf{n}) \left[ k(\mathbf{n}, \mathbf{n}) + \sigma^2 I_d \right]^{-1} (\boldsymbol{\alpha} - \mu(\mathbf{n})) + \mu(\mathbf{n}^*), \qquad \sigma^{*2} = k(\mathbf{n}^*, \mathbf{n}^*) - k(\mathbf{n}^*, \mathbf{n}) \left[ k(\mathbf{n}, \mathbf{n}) + \sigma^2 I_d \right]^{-1} k(\mathbf{n}, \mathbf{n}^*).$$
For the prediction points $\mathbf{n}^*$, one can use the mean of the above Gaussian distribution as the estimated value, i.e., $f^* = \mu^*$. We have now obtained the function $f(\cdot)$: denoting the independent variable of $f(\cdot)$ by $\mathbf{n}^*$, we set $f(\mathbf{n}^*) := \mu^*(\mathbf{n}^*)$.
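To make the prediction step concrete, here is a minimal MATLAB sketch of the posterior mean and variance formulas above (our own illustration, not the authors' released code); the function name gpr_predict and the argument layout are assumptions, the mean function is taken as zero as in Section 2.3, and the kernel is the one defined there.

```matlab
% Minimal GPR prediction sketch: implements mu* and sigma*^2 above.
% n, alpha: column vectors of training dimensions and optimal parameters;
% n_star: column vector of new dimensions; sigma: noise level (e.g., 1e-4).
function [mu_star, var_star] = gpr_predict(n, alpha, n_star, sigma_f, iota, sigma)
    k = @(x, y) sigma_f^2 * exp(-(x(:) - y(:)').^2 / iota^2);  % kernel of Section 2.3
    Kdd = k(n, n) + sigma^2 * eye(numel(n));   % k(n,n) + sigma^2 * I_d
    Ksd = k(n_star, n);                        % k(n*, n)
    mu_star  = Ksd * (Kdd \ alpha(:));         % posterior mean (mu(.) = 0)
    var_star = diag(k(n_star, n_star) - Ksd * (Kdd \ Ksd'));  % posterior variances
end
```

The predicted relatively optimal parameter for each new dimension is then read off as $f(n^*) = \mu^*(n^*)$.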

2.3. Model Selection

In this section, we determine $\mu(\cdot)$ and $k(\cdot, \cdot)$. In this work, we let $\mu(\cdot) = 0$; other choices can be found in [49]. For the covariance function, we use the exponential kernel function
$$k(x, y) = \sigma_f^2 \exp\left( -\frac{\|x - y\|^2}{\iota^2} \right),$$
where $\theta = \{\iota, \sigma_f\}$ is the hyperparameter. In this work, maximum likelihood estimation is used to select the value of the hyperparameter θ. Specifically, given a training set $D = \{(n_i, \alpha_i) \mid i = 1, 2, \ldots, d\} := \{\mathbf{n}, \boldsymbol{\alpha}\}$, the log-likelihood function L of θ can be derived as
$$L := \log p(\boldsymbol{\alpha} \mid \mathbf{n}, \theta) = -\frac{1}{2} \boldsymbol{\alpha}^T \left[ k_\theta(\mathbf{n}, \mathbf{n}) + \sigma^2 I_d \right]^{-1} \boldsymbol{\alpha} - \frac{1}{2} \log \det \left[ k_\theta(\mathbf{n}, \mathbf{n}) + \sigma^2 I_d \right].$$
The optimal hyperparameter is $\theta^* = \arg\max_\theta L$.
In practice, to avoid a large amount of calculation, we generally produce the training set from a set of small-scale systems.
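A sketch of this hyperparameter selection in MATLAB is given below (our own illustration); the log-parameterization keeping ι and σ_f positive and the use of fminsearch are assumptions, not choices prescribed by the paper.

```matlab
% Select theta = {iota, sigma_f} by maximizing L (we minimize -L).
% n, alpha: column vectors of the training set; sigma: noise level.
negL = @(t) 0.5 * alpha' * ((exp(2*t(2)) * exp(-(n - n').^2 / exp(2*t(1))) ...
              + sigma^2 * eye(numel(n))) \ alpha) ...
          + 0.5 * log(det(exp(2*t(2)) * exp(-(n - n').^2 / exp(2*t(1))) ...
              + sigma^2 * eye(numel(n))));
t = fminsearch(negL, log([1; 1]));       % initial guess [iota; sigma_f] = [1; 1]
iota = exp(t(1));  sigma_f = exp(t(2));  % theta* = argmax_theta L
```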

3. Matrix-Splitting Iterative Methods

In this section, we recall some matrix-splitting iteration methods, including the HSS method, the NHSS method, the LHSS method, the SHSS method, the SHSS-SS method, the MHSS method and the MSNS method. We mainly focus on their iteration schemes, their convergence properties and the theoretical methods for estimating the optimal splitting parameters.

3.1. Matrix-Splitting Methods for Non-Hermitian Positive Definite Linear Systems

Consider the linear Equation (1),
$$Ax = b, \qquad (5)$$
where $x, b \in \mathbb{C}^n$ and $A \in \mathbb{C}^{n \times n}$ is non-singular. Let $M, N \in \mathbb{C}^{n \times n}$ be splitting matrices such that
$$A = M + N.$$
The HSS method, the NHSS method, the LHSS method, the SHSS method and the SHSS-SS method all split A into its Hermitian and skew-Hermitian parts, i.e.,
$$M = \frac{A + A^*}{2}, \qquad N = \frac{A - A^*}{2}. \qquad (6)$$

3.1.1. HSS Iteration Method

The scheme of HSS iteration [9] is as follows.
Definition 2.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges, compute
$$(\alpha I + M) x^{(k+\frac{1}{2})} = (\alpha I - N) x^{(k)} + b, \qquad (\alpha I + N) x^{(k+1)} = (\alpha I - M) x^{(k+\frac{1}{2})} + b,$$
where α is a given positive constant.
For the convergence property of the HSS iteration, we have the following theorem [9]:
Theorem 2.
Let A in Equation (5) be a positive definite matrix, let the matrices M, N be defined as in Equation (6) and let α be a positive constant. Then, the iteration matrix $M(\alpha)$ of the HSS iteration is given by
$$M(\alpha) = (\alpha I + N)^{-1} (\alpha I - M) (\alpha I + M)^{-1} (\alpha I - N),$$
and its spectral radius $\rho(M(\alpha))$ is bounded by
$$\sigma(\alpha) \equiv \max_{\lambda_i \in \lambda(M)} \left| \frac{\alpha - \lambda_i}{\alpha + \lambda_i} \right|, \qquad (7)$$
where $\lambda(M)$ is the spectral set of the matrix M. Therefore, it holds that
$$\rho(M(\alpha)) \le \sigma(\alpha) < 1,$$
i.e., the HSS iteration converges to the unique solution of Equation (5).
Equation (7) provides a theoretical method to estimate the optimal splitting parameter: the quasi-optimal splitting parameter $\alpha^*$ is obtained by minimizing the upper bound $\sigma(\alpha)$ of the spectral radius of the iteration matrix $M(\alpha)$ of the HSS iteration, as shown by the following theorem [9].
Theorem 3.
The conditions are the same as Theorem 2: let $\lambda_{\min}$ and $\lambda_{\max}$ be the minimum and the maximum eigenvalues of the matrix M, respectively. Then
$$\alpha^* \equiv \arg\min_\alpha \max_{\lambda_{\min} \le \lambda \le \lambda_{\max}} \left| \frac{\alpha - \lambda}{\alpha + \lambda} \right| = \sqrt{\lambda_{\min} \lambda_{\max}}.$$
The HSS iteration needs to solve two linear systems with coefficient matrices $\alpha I + M$ and $\alpha I + N$, which is costly and can be impractical. An approach to overcome this disadvantage is to solve the two subproblems iteratively; the result is the inexact HSS (IHSS) iteration method.
Definition 3.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges:
1. Approximately solve $(\alpha I + M) \bar{z}^{(k)} = \bar{r}^{(k)}$ ($\bar{r}^{(k)} = b - A \bar{x}^{(k)}$) by employing an inner iteration (e.g., the CG method), such that the residual $\bar{p}^{(k)} = \bar{r}^{(k)} - (\alpha I + M) \bar{z}^{(k)}$ satisfies
$$\|\bar{p}^{(k)}\| \le \varepsilon_k \|\bar{r}^{(k)}\|,$$
and then compute $\bar{x}^{(k+\frac{1}{2})} = \bar{x}^{(k)} + \bar{z}^{(k)}$;
2. Approximately solve $(\alpha I + N) \bar{z}^{(k+\frac{1}{2})} = \bar{r}^{(k+\frac{1}{2})}$ ($\bar{r}^{(k+\frac{1}{2})} = b - A \bar{x}^{(k+\frac{1}{2})}$) by employing an inner iteration (e.g., some Krylov subspace method), such that the residual $\bar{q}^{(k+\frac{1}{2})} = \bar{r}^{(k+\frac{1}{2})} - (\alpha I + N) \bar{z}^{(k+\frac{1}{2})}$ satisfies
$$\|\bar{q}^{(k+\frac{1}{2})}\| \le \eta_k \|\bar{r}^{(k+\frac{1}{2})}\|,$$
and then compute $\bar{x}^{(k+1)} = \bar{x}^{(k+\frac{1}{2})} + \bar{z}^{(k+\frac{1}{2})}$. Here, α is a given positive constant.
The convergence properties and the choice of the tolerances $\varepsilon_k$ and $\eta_k$ can be found in [9].
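For reference, a minimal MATLAB sketch of the (exact) HSS iteration of Definition 2 is given below; it is our own illustration, with tol and maxit as illustrative choices, and alpha supplied by the traversal, theoretical or GPR method.

```matlab
% HSS iteration (Definition 2) for a non-Hermitian positive definite A.
function [x, k] = hss(A, b, alpha, tol, maxit)
    n = size(A, 1);  I = speye(n);
    M = (A + A') / 2;  N = (A - A') / 2;   % splitting (6)
    x = zeros(n, 1);                       % zero initial guess
    for k = 1:maxit
        xh = (alpha*I + M) \ ((alpha*I - N) * x  + b);  % first half-step
        x  = (alpha*I + N) \ ((alpha*I - M) * xh + b);  % second half-step
        if norm(b - A*x) <= tol * norm(b), return; end  % relative residual
    end
end
```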

3.1.2. NHSS Iteration Method

The scheme of NHSS [15] iteration is as follows.
Definition 4.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges, compute
$$M x^{(k+\frac{1}{2})} = -N x^{(k)} + b, \qquad (\alpha I + M) x^{(k+1)} = (\alpha I - N) x^{(k+\frac{1}{2})} + b,$$
where α is a given positive constant.
For the convergence property of the NHSS iteration, we have the following theorem [15]:
Theorem 4.
Let A in Equation (5) be a positive definite and normal matrix, let the matrices M, N be defined as in Equation (6) and let α be a positive constant. Then, the iteration matrix $M(\alpha)$ of the NHSS iteration is
$$M(\alpha) = (\alpha I + M)^{-1} (\alpha I - N) M^{-1} (-N).$$
The spectral radius $\rho(M(\alpha))$ is bounded by
$$\sigma(\alpha) \equiv \frac{\sigma_{\max} \sqrt{\alpha^2 + \sigma_{\max}^2}}{\lambda_{\min} (\alpha + \lambda_{\min})}, \qquad (8)$$
where $\sigma_{\max}$ is the maximum singular value of the matrix N and $\lambda_{\min}$ is the minimum eigenvalue of the matrix M. Moreover, if $\sigma_{\max} \le \lambda_{\min}$, then
$$\|M(\alpha)\| \le \sigma(\alpha) < 1,$$
i.e., the NHSS iteration converges to the unique solution of Equation (5).
Equation (8) provides a theoretical method to estimate the optimal splitting parameter: the quasi-optimal splitting parameter $\alpha^*$ is obtained by minimizing the upper bound $\sigma(\alpha)$ of the spectral radius of the iteration matrix $M(\alpha)$ of the NHSS iteration, and it is given by Theorem 5 [15].
Theorem 5.
The conditions are the same as Theorem 4; then, the quasi-optimal splitting parameter of the NHSS iteration is
$$\alpha^* = \frac{\sigma_{\max}^2}{\lambda_{\min}}.$$

3.1.3. LHSS Iteration Method

The scheme of LHSS iteration [12] is as follows.
Definition 5.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges, compute
$$M x^{(k+\frac{1}{2})} = -N x^{(k)} + b, \qquad (\alpha I + N) x^{(k+1)} = (\alpha I - M) x^{(k+\frac{1}{2})} + b,$$
where α is a given positive constant.
For the convergence property of the LHSS iteration, we have the following theorem [12]:
Theorem 6.
Let A in Equation (5) be non-singular and α be a positive constant. Then, the LHSS iteration converges to the unique solution of Equation (5). Moreover, the spectral radius of the iteration matrix of the LHSS iteration satisfies
$$\rho(M(\alpha)) < \frac{\sigma_{\max}}{\sqrt{\alpha^2 + \sigma_{\max}^2}} \max_{\lambda_i \in \lambda(M)} \frac{|\alpha - \lambda_i|}{\lambda_i}, \qquad (9)$$
where
$$M(\alpha) = (\alpha I + N)^{-1} (\alpha I - M) M^{-1} (-N).$$
By minimizing the upper bound in Equation (9) of the spectral radius of the iteration matrix of the LHSS iteration, we obtain the quasi-optimal splitting parameter $\alpha^*$ given by Theorem 7 [12].
Theorem 7.
The conditions are the same as Theorem 6: let $\lambda_{\min}$ and $\lambda_{\max}$ be the minimum and the maximum eigenvalues of the matrix M, respectively, and let $\sigma_{\max}$ be the maximum singular value of the matrix N. Then
$$\alpha^* = \frac{2 \lambda_{\max} \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}.$$
To improve the efficiency of the LHSS iteration method, we have the following ILHSS iteration method.
Definition 6.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges:
1. Approximately solve $M \bar{z}^{(k)} = \bar{r}^{(k)}$ ($\bar{r}^{(k)} = b - A \bar{x}^{(k)}$) by employing an inner iteration (e.g., the CG method), such that the residual $\bar{p}^{(k)} = \bar{r}^{(k)} - M \bar{z}^{(k)}$ satisfies
$$\|\bar{p}^{(k)}\| \le \varepsilon_k \|\bar{r}^{(k)}\|,$$
and then compute $\bar{x}^{(k+\frac{1}{2})} = \bar{x}^{(k)} + \bar{z}^{(k)}$;
2. Approximately solve $(\alpha I + N) \bar{z}^{(k+\frac{1}{2})} = \bar{r}^{(k+\frac{1}{2})}$ ($\bar{r}^{(k+\frac{1}{2})} = b - A \bar{x}^{(k+\frac{1}{2})}$) by employing an inner iteration (e.g., a Krylov subspace method), such that the residual $\bar{q}^{(k+\frac{1}{2})} = \bar{r}^{(k+\frac{1}{2})} - (\alpha I + N) \bar{z}^{(k+\frac{1}{2})}$ satisfies
$$\|\bar{q}^{(k+\frac{1}{2})}\| \le \eta_k \|\bar{r}^{(k+\frac{1}{2})}\|,$$
and then compute $\bar{x}^{(k+1)} = \bar{x}^{(k+\frac{1}{2})} + \bar{z}^{(k+\frac{1}{2})}$, where α is a given positive constant.
The convergence properties and the choice of the tolerances $\varepsilon_k$ and $\eta_k$ can be found in [12].

3.1.4. SHSS Iteration Method

The scheme of SHSS iteration [17] is as follows.
Definition 7.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges, compute
$$(\alpha I + M) x^{(k+1)} = (\alpha I - N) x^{(k)} + b,$$
where α is a given positive constant.
For the convergence property of the SHSS iteration, we have the following theorem [17]:
Theorem 8.
Let A in Equation (5) be positive definite, let $\lambda_{\min}$ and $\lambda_{\max}$ be the minimum and the maximum eigenvalues of the matrix M, respectively, and let $\sigma_{\max}$ be the maximum singular value of the matrix N. The spectral radius of the iteration matrix of the SHSS iteration method is bounded by
$$\sigma(\alpha) = \frac{\sqrt{\alpha^2 + \sigma_{\max}^2}}{\alpha + \lambda_{\min}}.$$
Moreover:
(i) If $\lambda_{\min} \ge \sigma_{\max}$, then $\sigma(\alpha) < 1$ for any $\alpha > 0$, which means that the SHSS iteration method is unconditionally convergent;
(ii) If $\lambda_{\min} < \sigma_{\max}$, then $\sigma(\alpha) < 1$ (which means the SHSS iteration method is convergent) if and only if
$$\alpha > \max\left\{ 0, \frac{\sigma_{\max}^2 - \lambda_{\min}^2}{2 \lambda_{\min}} \right\}.$$
The quasi-optimal splitting parameter $\alpha^*$ of the SHSS iteration method is given by Theorem 9 [17].
Theorem 9.
The conditions are the same as Theorem 8; then
$$\alpha^* = \frac{\sigma_{\max}^2}{\lambda_{\min}}.$$

3.1.5. SHSS-SS Iteration Method

The scheme of SHSS-SS iteration [18] is as follows.
Definition 8.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges, compute
$$(\alpha I + M) x^{(k+\frac{1}{2})} = (\alpha I - N) x^{(k)} + b, \qquad (\alpha I + A) x^{(k+1)} = (\alpha I - A) x^{(k+\frac{1}{2})} + 2b,$$
where α is a given positive constant.
For the convergence property of the SHSS-SS iteration, we have the following theorem [18]:
Theorem 10.
Let A in Equation (5) be non-Hermitian and positive definite, let $\lambda_{\min}$ and $\lambda_{\max}$ be the minimum and the maximum eigenvalues of the matrix M, respectively, and let $\sigma_{\max}$ be the maximum singular value of the matrix N. If α satisfies
$$\alpha > \max\left\{ 0, \frac{\sigma_{\max}^2 - \lambda_{\min}^2}{2 \lambda_{\min}} \right\},$$
then the SHSS-SS iteration converges to the unique solution of Equation (5).
The quasi-optimal splitting parameter $\alpha^*$ of the SHSS-SS iteration method is as follows [18].
Theorem 11.
The conditions are the same as Theorem 10; then
$$\alpha^* = \frac{\sigma_{\max}^2}{\lambda_{\min}}.$$

3.2. Matrix-Splitting Methods for Complex Symmetric Linear Systems

Consider the linear equation of the form
$$Ax \equiv (W + iT)x = b, \qquad (10)$$
where $W, T \in \mathbb{R}^{n \times n}$ are a symmetric positive definite matrix and a symmetric positive semi-definite matrix, respectively, $b \in \mathbb{R}^n$ and $i = \sqrt{-1}$. Here, we let $T \neq 0$, so A in Equation (10) is non-Hermitian.

3.2.1. HSS Iteration Method and MHSS Iteration Method

Since the matrix W is positive definite, the matrix A in Equation (10) is non-Hermitian positive definite, and we can straightforwardly use the HSS method to solve Equation (10). The scheme of the HSS iteration is as follows.
Definition 9.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges, compute
$$(\alpha I + W) x^{(k+\frac{1}{2})} = (\alpha I - iT) x^{(k)} + b, \qquad (\alpha I + iT) x^{(k+1)} = (\alpha I - W) x^{(k+\frac{1}{2})} + b,$$
where α is a given positive constant.
However, solving the linear sub-system whose coefficient matrix is the shifted skew-Hermitian matrix $\alpha I + iT$ is very difficult in some cases. To avoid this, Bai et al. [33] proposed the modified HSS (MHSS) method. The scheme of the MHSS iteration is as follows.
Definition 10.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges, compute
$$(\alpha I + W) x^{(k+\frac{1}{2})} = (\alpha I - iT) x^{(k)} + b, \qquad (\alpha I + T) x^{(k+1)} = (\alpha I + iW) x^{(k+\frac{1}{2})} - ib,$$
where α is a given positive constant.
For the convergence property of the MHSS iteration, we have the following theorem [33]:
Theorem 12.
The conditions are the same as for Equation (10), and let α be a positive constant. Then, the iteration matrix $M(\alpha)$ of the MHSS iteration is
$$M(\alpha) = (\alpha I + T)^{-1} (\alpha I + iW) (\alpha I + W)^{-1} (\alpha I - iT),$$
and its spectral radius $\rho(M(\alpha))$ is bounded by
$$\sigma(\alpha) \equiv \max_{\lambda_j \in \lambda(W)} \frac{\sqrt{\alpha^2 + \lambda_j^2}}{\alpha + \lambda_j},$$
where $\lambda(W)$ is the spectral set of the matrix W. Therefore, we have
$$\rho(M(\alpha)) \le \sigma(\alpha) < 1,$$
i.e., the MHSS iteration converges to the unique solution of Equation (10).
The quasi-optimal splitting parameter $\alpha^*$ of the MHSS method is as follows [33].
Theorem 13.
The conditions are the same as Theorem 12: let $\lambda_{\min}$ and $\lambda_{\max}$ be the minimum and the maximum eigenvalues of the matrix W, respectively. Then
$$\alpha^* = \sqrt{\lambda_{\min} \lambda_{\max}}.$$
Similar to the HSS iteration method and the LHSS iteration method, the MHSS iteration method has an inexact version as well [33].
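As an illustration of how such theoretical parameters are evaluated in practice, the following MATLAB sketch (our own) approximates $\alpha^* = \sqrt{\lambda_{\min} \lambda_{\max}}$ of Theorem 13 with eigs when the extreme eigenvalues of W have no closed form; the option values mirror those quoted in Section 4.

```matlab
% Theoretical MHSS parameter for a given symmetric positive definite W.
opts = {'MaxIterations', 500, 'Tolerance', 1e-5};
lam_max = eigs(W, 1, 'largestreal',  opts{:});
lam_min = eigs(W, 1, 'smallestreal', opts{:});
alpha_theo = sqrt(lam_min * lam_max);   % Theorem 13
```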

3.2.2. MSNS Iteration Method

The scheme of MSNS iteration [50] is as follows.
Definition 11.
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges, compute
$$(\alpha I + T) x^{(k+\frac{1}{2})} = (\alpha I + iW) x^{(k)} - ib, \qquad (\alpha I - iW) x^{(k+1)} = (\alpha I - T) x^{(k+\frac{1}{2})} - ib,$$
where α is a given positive constant.
For the convergence property of the MSNS iteration, we have the following theorem [50]:
Theorem 14.
Let W be a real symmetric indefinite matrix, let T be a real symmetric positive definite matrix and let α be a positive constant. Then, the spectral radius $\rho(M(\alpha))$ of the iteration matrix $M(\alpha)$ of the MSNS iteration is bounded by
$$\sigma(\alpha) \equiv \max_{\mu_j \in \mu(T)} \left| \frac{\alpha - \mu_j}{\alpha + \mu_j} \right|,$$
where $\mu(T)$ is the spectral set of the matrix T. Moreover, it holds that
$$\rho(M(\alpha)) \le \sigma(\alpha) < 1,$$
i.e., the MSNS iteration converges to the unique solution of Equation (10).
The quasi-optimal splitting parameter $\alpha^*$ of the MSNS method is as follows [50].
Theorem 15.
The conditions are the same as Theorem 14: let $\mu_{\min}$ and $\mu_{\max}$ be the minimum and the maximum eigenvalues of the matrix T, respectively. Then
$$\alpha^* = \sqrt{\mu_{\min} \mu_{\max}}.$$

4. Numerical Results

In this section, we present extensive numerical examples to show the power of the GPR method compared with the traversal method and the theoretical methods. We take a three-dimensional convection-diffusion equation and the Padé approximation in the time integration of a parabolic partial differential equation as examples.
In the following numerical experiments, all tests start from the zero vector. All iterative methods are terminated once the relative residual error satisfies $\|r^{(k)}\|_2 / \|r^{(0)}\|_2 \le 10^{-6}$, where $r^{(k)} = b - A x^{(k)}$ is the residual at step k. “IT” and “CPU” denote the required iterations and the CPU time (in seconds), respectively. “Traversal time” denotes the CPU time (in seconds) required to obtain the optimal splitting parameters by the traversal method. “Training time” denotes the CPU time (in seconds) required to produce the training set and train the GPR model. We use
$$\frac{|\text{Traversal time} - \text{Training time}|}{\text{Traversal time}}$$
to compare the amount of calculation of the traversal method and the GPR method. Obviously, the larger this quantity is, the longer the traversal time of the traversal method is compared with the training time of the GPR method.
For the traversal method, the optimal splitting parameter is the one that minimizes the number of iterations of the corresponding iteration method when solving the linear systems; it is obtained by traversing the interval $(0, 3]$ with a step size of 0.01.
For the GPR method, the training set is produced by applying the traversal method to a set of small-scale systems, whose dimensions are shown later.
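A sketch of this traversal search in MATLAB (our own illustration, reusing the hss sketch from Section 3.1.1; the tolerance and iteration cap are illustrative):

```matlab
% Traverse (0, 3] with step 0.01 and keep the alpha with the fewest iterations.
best_it = Inf;
for a = 0.01:0.01:3
    [~, it] = hss(A, b, a, 1e-6, 500);
    if it < best_it, best_it = it; alpha_trav = a; end
end
```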
For the IHSS method and the ILHSS method, we use the CG method to solve the linear systems with coefficient matrix $\alpha I + M$ and the GMRES method to solve the linear systems with coefficient matrix $\alpha I + N$. The inner CG and GMRES iterations are terminated once the current residuals of the inner iterations satisfy
$$\frac{\|p^{(j)}\|_2}{\|r^{(k)}\|_2} \le \max\{0.1 \times 0.8^k, 1 \times 10^{-7}\} \quad \text{and} \quad \frac{\|q^{(j)}\|_2}{\|r^{(k)}\|_2} \le \max\{0.1 \times 0.8^k, 1 \times 10^{-6}\},$$
where $p^{(j)}$ and $q^{(j)}$ are, respectively, the residuals of the jth inner CG and GMRES iterations, and $r^{(k)}$ is the residual of the kth outer iteration.
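A compact MATLAB sketch of one such inexact sweep is given below (our own illustration; pcg and gmres are the MATLAB built-in CG and GMRES solvers, and the GMRES restart value 10 is an illustrative choice):

```matlab
% IHSS iteration with the inner stopping rules above (outer index k from 0).
function x = ihss(A, b, alpha, tol, maxit)
    n = size(A, 1);  I = speye(n);
    M = (A + A') / 2;  N = (A - A') / 2;
    x = zeros(n, 1);
    for k = 0:maxit-1
        rk = b - A*x;
        if norm(rk) <= tol * norm(b), return; end
        x = x + pcg(alpha*I + M, rk, max(0.1*0.8^k, 1e-7), 100);      % inner CG
        rh = b - A*x;
        x = x + gmres(alpha*I + N, rh, 10, max(0.1*0.8^k, 1e-6), 100); % inner GMRES
    end
end
```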
All computations are carried out using MATLAB 2018b on a personal computer with a 1.8 GHz Intel Core i5 CPU and 8 GB of memory.
Example 1.
Consider the following three-dimensional convection-diffusion equation
$$-(u_{xx} + u_{yy} + u_{zz}) + q(u_x + u_y + u_z) = f(x, y, z) \qquad (12)$$
on the unit cube $\Omega := [0,1] \times [0,1] \times [0,1]$, with constant coefficient q and subject to Dirichlet-type boundary conditions. When the seven-point finite difference discretization with equidistant step size $h = \frac{1}{n+1}$ (n is the number of degrees of freedom along each dimension) is applied in all three directions to Equation (12), we obtain the linear system with the coefficient matrix
$$A = T_x \otimes I \otimes I + I \otimes T_y \otimes I + I \otimes I \otimes T_z, \qquad (13)$$
where $T_x$, $T_y$ and $T_z$ are tridiagonal matrices. If the first-order derivatives are approximated by the centered difference scheme, we have
$$T_x = \mathrm{tridiag}(t_2, t_1, t_3), \qquad T_y = T_z = \mathrm{tridiag}(t_2, 0, t_3),$$
with $t_1 = 6$, $t_2 = -1 - r$, $t_3 = -1 + r$ and $r = \frac{qh}{2}$ (the mesh Reynolds number).
According to [9,51,52], for the centered difference scheme, the extreme eigenvalues and singular values of the matrices M and N in Equation (6) are
$$\min_{1 \le i \le n^3} \lambda_i(M) = 6(1 - \cos \pi h), \qquad \max_{1 \le i \le n^3} \lambda_i(M) = 6(1 + \cos \pi h), \qquad \max_{1 \le i \le n^3} \sigma_i(N) = 6 r \cos \pi h.$$
Therefore, the optimal splitting parameters given by the theoretical methods for the HSS method, the LHSS method, the NHSS method, the SHSS method and the SHSS-SS method can be easily calculated.
Let $q = 1$ in Equation (12). The discretization of Equation (12) leads to a system of linear equations $Ax = b$, where $A \in \mathbb{R}^{n^3 \times n^3}$ is defined by Equation (13); we set the exact solution $x_e = (1, 1, \ldots, 1)^T \in \mathbb{R}^{n^3}$ and then $b = A x_e$.
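A MATLAB sketch assembling this test problem and evaluating the theoretical HSS parameter of Theorem 3 from the closed-form extreme eigenvalues above (our own illustration; n = 16 is an arbitrary example size):

```matlab
n = 16;  q = 1;  h = 1/(n + 1);  r = q*h/2;     % mesh Reynolds number
e = ones(n, 1);  I = speye(n);
Tx = spdiags([(-1 - r)*e, 6*e,         (-1 + r)*e], -1:1, n, n);
Ty = spdiags([(-1 - r)*e, zeros(n, 1), (-1 + r)*e], -1:1, n, n);  % Tz = Ty
A  = kron(kron(Tx, I), I) + kron(kron(I, Ty), I) + kron(kron(I, I), Ty);
b  = A * ones(n^3, 1);                           % exact solution x_e = (1,...,1)^T
lam_min = 6*(1 - cos(pi*h));  lam_max = 6*(1 + cos(pi*h));
alpha_theo = sqrt(lam_min * lam_max);            % Theorem 3
```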
In this experiment, we compare the GPR method with the traversal method and the theoretical method, respectively. Concretely, we first use the HSS method, the IHSS method, the LHSS method and the ILHSS method to solve the 3D convection-diffusion equation, with the splitting parameters selected by the traversal method and the GPR method, respectively. The numerical results show that the GPR method can predict almost the same parameters as the traversal method does, but with less calculation. We then use the HSS method, the NHSS method, the LHSS method, the SHSS method and the SHSS-SS method to solve the 3D convection-diffusion equation, with the splitting parameters selected by the theoretical method and the GPR method, respectively. The numerical results show that the GPR method can compute better parameters than the theoretical method and can be applied to a wide range of matrix-splitting iterative methods, i.e., it is highly universal.
The GPR method vs. the traversal method. We first use the HSS method, the IHSS method, the LHSS method and the ILHSS method to solve the 3D convection-diffusion equation and the splitting parameters are selected using the traversal method and the GPR method, respectively. Table 1, Table 2, Table 3 and Table 4 and Figure 1 show the numerical results.
From Table 1, Table 2, Table 3 and Table 4 and Figure 1, we know that the GPR method can predict almost the same parameters as the traversal method does. It uses a training set from a set of small-scale systems and its training time is much less than the traversal time of the traversal method. Thus, the GPR method requires less calculation than the traversal method.
Figure 2 gives a visual representation of what the mapping $f(n)$ we want to fit looks like.
The GPR method vs. the theoretical method. We use the HSS method, the NHSS method, the LHSS method, the SHSS method and the SHSS-SS method to solve the 3D convection-diffusion equation, and the splitting parameters are selected using the theoretical methods (given in Theorems 3, 5, 7, 9 and 11) and the GPR method, respectively. Table 5, Table 6, Table 7, Table 8 and Table 9 and Figure 3 and Figure 4 show the numerical results.
From Table 5, Table 6, Table 7, Table 8 and Table 9 and Figure 3 and Figure 4, we know that the GPR method can predict better parameters than the theoretical methods. Unlike the theoretical methods, which are available only case by case, the GPR method is suitable for all five iterative methods, which means that the GPR method is highly universal.
Example 2.
Consider the following complex symmetric linear system:
$$\left[ \left( K + \frac{3 - \sqrt{3}}{\tau} I \right) + i \left( K + \frac{3 + \sqrt{3}}{\tau} I \right) \right] x = b, \qquad (14)$$
where K is the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions on a uniform mesh in the unit square $[0,1] \times [0,1]$ with mesh size $h = \frac{1}{m+1}$; $K \in \mathbb{R}^{m^2 \times m^2}$ and $K = I \otimes V_m + V_m \otimes I$, with $V_m = h^{-2}\, \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m \times m}$. In addition, the jth entry $b_j$ of the right-hand side vector b is given by
$$b_j = \frac{(1 - i) j}{\tau (j + 1)^2}, \qquad j = 1, 2, \ldots, n.$$
We let $\tau = h$ and normalize the coefficient matrix and right-hand side by multiplying both by $h^2$; refer to [53] for more details.
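A MATLAB sketch assembling Equation (14) with this normalization (our own illustration; m = 32 is an arbitrary example size):

```matlab
m = 32;  h = 1/(m + 1);  tau = h;  n = m^2;
Im = speye(m);  e = ones(m, 1);
Vm = spdiags([-e, 2*e, -e], -1:1, m, m) / h^2;
K  = kron(Im, Vm) + kron(Vm, Im);                % negative Laplacian
W  = K + (3 - sqrt(3))/tau * speye(n);           % real part
T  = K + (3 + sqrt(3))/tau * speye(n);           % imaginary part
A  = h^2 * (W + 1i*T);                           % normalized coefficient matrix
j  = (1:n).';
b  = h^2 * (1 - 1i) * j ./ (tau * (j + 1).^2);   % normalized right-hand side
```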
In this experiment, we compare the GPR method with the traversal method and the theoretical method, respectively. Concretely, we first use the HSS method and the MHSS method to solve Equation (14), with the splitting parameters selected by the traversal method and the GPR method, respectively. Then, we use the HSS method, the MHSS method and the MSNS method to solve Equation (14), with the splitting parameters selected by the theoretical method and the GPR method, respectively. Since the extreme eigenvalues of the matrix M and the extreme singular value of the matrix N cannot be obtained explicitly, we use the MATLAB built-in functions “eigs” and “svds” (with the options 'MaxIterations', 500 and 'Tolerance', 1e-5) to calculate them.
The GPR method vs. the traversal method. We first use the HSS method and the MHSS method to solve Equation (14) and the splitting parameters are selected using the traversal method and the GPR method, respectively. Table 10 and Table 11 and Figure 5 show the numerical results.
From Table 10 and Table 11 and Figure 5, we know that the GPR method can predict almost the same parameters as the traversal method does. It uses a training set obtained by using the traversal method for small-scale systems and its training time is much less than the traversal time of the traversal method. Thus, the GPR method requires less calculation than the traversal method.
The GPR method vs. the theoretical method. We use the HSS method, the MHSS method and the MSNS method to solve Equation (14), and the splitting parameters are selected using the theoretical methods (given in Theorems 3, 13 and 15) and the GPR method, respectively. Table 12, Table 13 and Table 14 and Figure 6 show the numerical results.
From Table 12, Table 13 and Table 14 and Figure 6, we know that the GPR method can predict better parameters than the theoretical methods. Unlike the theoretical methods, which are available only case by case, the GPR method is suitable for all three iterative methods, which means that the GPR method is highly universal.

5. Conclusions

In this paper, we use the Bayesian inference-based Gaussian process regression (GPR) method to predict the relatively optimal parameters of some matrix-splitting iteration methods and provide extensive numerical experiments to compare the prediction performance of the GPR method with that of other methods. The GPR method learns a mapping between the dimension of linear systems and the relatively optimal splitting parameters using a small training data set. Numerical results show that the GPR method requires less calculation than the traversal method, and that it predicts better parameters and is more universal than the theoretical methods.
There is still much work to be performed on the proposed method. For example, the first direction is to apply the GPR method to iteration methods with multiple parameters or to non-HSS-type iteration methods. The second is to measure the predictive performance of the GPR method when the true optimal parameters are unknown. The third is to choose other mean functions and covariance functions to improve the predictive performance of the GPR method.

Author Contributions

K.J.—methodology, review and editing; J.S.—software, visualization, data curation; J.Z.—methodology, review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported in part by the National Natural Science Foundation of China (12171412, 11771370), Natural Science Foundation for Distinguished Young Scholars of Hunan Province (2021JJ10037), Hunan Youth Science and Technology Innovation Talents Project (2021RC3110), and the Key Project of Education Department of Hunan Province (19A500, 21A0116).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Young, D. Iterative methods for solving partial difference equations of elliptic type. Trans. Am. Math. Soc. 1954, 76, 92–111. [Google Scholar] [CrossRef]
  2. Young, D.M. Convergence properties of the symmetric and unsymmetric successive overrelaxation methods and related methods. Math. Comput. 1970, 24, 793–807. [Google Scholar] [CrossRef]
  3. Hadjidimos, A. Accelerated overrelaxation method. Math. Comput. 1978, 32, 149–157. [Google Scholar] [CrossRef]
  4. Hadjidimos, A.; Yeyios, A. Symmetric accelerated overrelaxation (SAOR) method. Math. Comput. Simul. 1982, 24, 72–76. [Google Scholar] [CrossRef]
  5. Darvishi, M.T.; Hessari, P. Symmetric SOR method for augmented systems. Appl. Math. Comput. 2006, 183, 409–415. [Google Scholar] [CrossRef]
  6. Darvishi, M.T.; Hessari, P. A modified symmetric successive overrelaxation method for augmented systems. Comput. Math. Appl. 2011, 61, 3128–3135. [Google Scholar] [CrossRef] [Green Version]
  7. Allahviranloo, T. Successive over relaxation iterative method for fuzzy system of linear equations. Appl. Math. Comput. 2005, 162, 189–196. [Google Scholar] [CrossRef]
  8. Darvishi, M.T.; Hessari, P. On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices. Appl. Math. Comput. 2006, 176, 128–133. [Google Scholar] [CrossRef]
  9. Bai, Z.Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626. [Google Scholar] [CrossRef] [Green Version]
  10. Bai, Z.Z.; Golub, G.H.; Pan, J.Y. Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer. Math. 2004, 98, 1–32. [Google Scholar] [CrossRef]
  11. Benzi, M. A generalization of the Hermitian and skew-Hermitian splitting iteration. SIAM J. Matrix Anal. Appl. 2009, 31, 360–374. [Google Scholar] [CrossRef]
  12. Li, L.; Huang, T.Z.; Liu, X.P. Modified Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems. Numer. Linear Algebra Appl. 2007, 14, 217–235. [Google Scholar] [CrossRef]
  13. Yang, A.L.; An, J.; Wu, Y.J. A generalized preconditioned HSS method for non-Hermitian positive definite linear systems. Appl. Math. Comput. 2010, 216, 1715–1722. [Google Scholar] [CrossRef]
  14. Li, L.; Huang, T.Z.; Liu, X.P. Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems. Comput. Math. Appl. 2007, 54, 147–159. [Google Scholar] [CrossRef] [Green Version]
  15. Noormohammadi Pour, H.; Sadeghi Goughery, H. New Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems. Numer. Algorithms 2015, 69, 207–225. [Google Scholar] [CrossRef]
  16. Yang, A.L.; Cao, Y.; Wu, Y.J. Minimum residual Hermitian and skew-Hermitian splitting iteration method for non-Hermitian positive definite linear systems. BIT Numer. Math. 2019, 59, 299–319. [Google Scholar] [CrossRef]
  17. Li, C.X.; Wu, S.L. A single-step HSS method for non-Hermitian positive definite linear systems. Appl. Math. Lett. 2015, 44, 26–29. [Google Scholar] [CrossRef]
  18. Li, C.X.; Wu, S.L. A SHSS–SS iteration method for non-Hermitian positive definite linear systems. Results Appl. Math. 2022, 13, 100225. [Google Scholar] [CrossRef]
  19. Jiang, M.Q.; Cao, Y. On local Hermitian and skew-Hermitian splitting iteration methods for generalized saddle point problems. J. Comput. Appl. Math. 2009, 231, 973–982. [Google Scholar] [CrossRef] [Green Version]
  20. Benzi, M.; Gander, M.J.; Golub, G.H. Optimization of the Hermitian and skew-Hermitian splitting iteration for saddle-point problems. BIT Numer. Math. 2003, 43, 881–900. [Google Scholar] [CrossRef]
  21. Krukier, L.A.; Krukier, B.L.; Ren, Z.R. Generalized skew-Hermitian triangular splitting iteration methods for saddle-point linear systems. Numer. Linear Algebra Appl. 2014, 21, 152–170. [Google Scholar] [CrossRef]
  22. Bai, Z.Z.; Benzi, M. Regularized HSS iteration methods for saddle-point linear systems. BIT Numer. Math. 2017, 57, 287–311. [Google Scholar] [CrossRef]
  23. Yang, A.L.; Wu, Y.J. The Uzawa–HSS method for saddle-point problems. Appl. Math. Lett. 2014, 38, 38–42. [Google Scholar] [CrossRef]
  24. Li, X.; Yang, A.L.; Wu, Y.J. Parameterized preconditioned Hermitian and skew-Hermitian splitting iteration method for saddle-point problems. Int. J. Comput. Math. 2014, 91, 1224–1238. [Google Scholar] [CrossRef]
  25. Wang, X.; Li, Y.; Dai, L. On Hermitian and skew-Hermitian splitting iteration methods for the linear matrix equation AXB = C. Comput. Math. Appl. 2013, 65, 657–664. [Google Scholar] [CrossRef]
  26. Bai, Z.Z. On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations. J. Comput. Math. 2011, 29, 185–198. [Google Scholar] [CrossRef] [Green Version]
  27. Wang, X.; Li, W.W.; Mao, L.Z. On positive-definite and skew-Hermitian splitting iteration methods for continuous Sylvester equation AX + XB = C. Comput. Math. Appl. 2013, 66, 2352–2361. [Google Scholar] [CrossRef]
  28. Zhou, R.; Wang, X.; Tang, X.B. A generalization of the Hermitian and skew-Hermitian splitting iteration method for solving Sylvester equations. Appl. Math. Comput. 2015, 271, 609–617. [Google Scholar] [CrossRef]
  29. Zhou, R.; Wang, X.; Tang, X.B. Preconditioned positive-definite and skew-Hermitian splitting iteration methods for continuous Sylvester equations AX + XB = C. East Asian J. Appl. Math. 2017, 7, 55–69. [Google Scholar] [CrossRef]
  30. Dehghan, M.; Shirilord, A. A generalized modified Hermitian and skew-Hermitian splitting (GMHSS) method for solving complex Sylvester matrix equation. Appl. Math. Comput. 2019, 348, 632–651. [Google Scholar] [CrossRef]
  31. Bahramizadeh, Z.; Nazari, M.; Zak, M.K.; Yarahmadi, Z. Minimal residual Hermitian and skew-Hermitian splitting iteration method for the continuous Sylvester equation. arXiv 2020, arXiv:2012.00310. [Google Scholar]
  32. Xu, L.; Mingxiang, L. Generalized positive-definite and skew-Hermitian splitting iteration method and its SOR acceleration for continuous Sylvester equations. Math. Numer. Sin. 2021, 43, 354. [Google Scholar]
  33. Bai, Z.Z.; Benzi, M.; Chen, F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010, 87, 93–111. [Google Scholar] [CrossRef]
  34. Li, X.; Yang, A.L.; Wu, Y.J. Lopsided PMHSS iteration method for a class of complex symmetric linear systems. Numer. Algorithms 2014, 66, 555–568. [Google Scholar] [CrossRef]
  35. Wu, S.L. Several variants of the Hermitian and skew-Hermitian splitting method for a class of complex symmetric linear systems. Numer. Linear Algebra Appl. 2015, 22, 338–356. [Google Scholar] [CrossRef]
  36. Bai, Z.Z.; Guo, X.P. On Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices. J. Comput. Math. 2010, 28, 235–260. [Google Scholar]
  37. Zhu, M.Z. Modified iteration methods based on the Asymmetric HSS for weakly nonlinear systems. J. Comput. Anal. Appl. 2013, 15, 185–195. [Google Scholar]
  38. Ke, Y.F.; Ma, C.F. A preconditioned nested splitting conjugate gradient iterative method for the large sparse generalized Sylvester equation. Comput. Math. Appl. 2014, 68, 1409–1420. [Google Scholar] [CrossRef]
  39. Zheng, Q.Q.; Ma, C.F. On normal and skew-Hermitian splitting iteration methods for large sparse continuous Sylvester equations. J. Comput. Appl. Math. 2014, 268, 145–154. [Google Scholar] [CrossRef]
  40. Carre, B. The determination of the optimum accelerating factor for successive over-relaxation. Comput. J. 1961, 4, 73–78. [Google Scholar] [CrossRef]
  41. Kulsrud, H.E. A practical technique for the determination of the optimum relaxation factor of the successive over-relaxation method. Commun. ACM 1961, 4, 184–187. [Google Scholar] [CrossRef]
  42. Bai, Z.Z.; Golub, G.H.; Li, C.K. Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices. SIAM J. Sci. Comput. 2006, 28, 583–603. [Google Scholar] [CrossRef] [Green Version]
  43. Chen, F. On choices of iteration parameter in HSS method. Appl. Math. Comput. 2015, 271, 832–837. [Google Scholar] [CrossRef]
  44. Huang, Y.M. A practical formula for computing optimal parameters in the HSS iteration methods. J. Comput. Appl. Math. 2014, 255, 142–149. [Google Scholar] [CrossRef]
  45. Yang, A.L. Scaled norm minimization method for computing the parameters of the HSS and the two-parameter HSS preconditioners. Numer. Linear Algebra Appl. 2018, 25, e2169. [Google Scholar] [CrossRef]
  46. Huang, N. Variable-parameter HSS methods for non-Hermitian positive definite linear systems. Linear Multilinear Algebra 2021, 1–18. [Google Scholar] [CrossRef]
  47. Jiang, K.; Su, X.; Zhang, J. A general alternating-direction implicit framework with Gaussian process regression parameter prediction for large sparse linear systems. SIAM J. Sci. Comput. 2022, 44, A1960–A1988. [Google Scholar] [CrossRef]
  48. Axelsson, O.; Bai, Z.Z.; Qiu, S.X. A class of nested iteration schemes for linear systems with a coefficient matrix with a dominant positive definite symmetric part. Numer. Algorithms 2004, 35, 351–372. [Google Scholar] [CrossRef]
  49. Von Mises, R. Mathematical Theory of Probability and Statistics; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  50. Pourbagher, M.; Salkuyeh, D.K. On the solution of a class of complex symmetric linear systems. Appl. Math. Lett. 2018, 76, 14–20. [Google Scholar] [CrossRef]
  51. Greif, C.; Varah, J. Iterative solution of cyclically reduced systems arising from discretization of the three-dimensional convection-diffusion equation. SIAM J. Sci. Comput. 1998, 19, 1918–1940. [Google Scholar] [CrossRef] [Green Version]
  52. Greif, C.; Varah, J. Block stationary methods for nonsymmetric cyclically reduced systems arising from three-dimensional elliptic equations. SIAM J. Matrix Anal. Appl. 1999, 20, 1038–1059. [Google Scholar] [CrossRef]
  53. Axelsson, O.; Kucherov, A. Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 2000, 7, 197–218. [Google Scholar] [CrossRef]
Figure 1. The IT and CPU of the HSS method and the LHSS method to solve the 3D convection-diffusion equation with splitting parameters selected by the traversal method and the GPR method, respectively.
Figure 2. The splitting parameters of the HSS method, the LHSS method, the IHSS method and the ILHSS method to solve the 3D convection-diffusion equation with splitting parameters selected by the traversal method and the GPR method, respectively.
Figure 3. The IT and CPU of the HSS method and the LHSS method to solve the 3D convection-diffusion equation with splitting parameters selected by the theoretical method and the GPR method, respectively.
Figure 4. The IT and CPU of the NHSS method, the SHSS method and the SHSS-SS method to solve the 3D convection-diffusion equation with splitting parameters selected by the theoretical method and the GPR method, respectively.
Figure 5. The IT and CPU of the HSS method and the MHSS method to solve Equation (14) with splitting parameters selected by the traversal method and the GPR method, respectively.
Figure 6. The IT and CPU of the HSS method, the MHSS method and the MSNS method to solve Equation (14) with splitting parameters selected by the theoretical method and the GPR method, respectively.
Table 1. Results of the HSS method for solving the 3D convection-diffusion equation with the traversal method and the GPR method.

| n³ | α_trav | IT | CPU | α_gpr | IT | CPU | Training Set | (Traversal − Training)/Traversal | Traversal Time |
|---|---|---|---|---|---|---|---|---|---|
| 8³ (512) | 1.7500 | 32 | 0.2500 | 1.7533 | 32 | 0.2813 | [2³, 4³] | 93.73% | 0.12 h |
| 12³ (1728) | 1.2600 | 46 | 0.6719 | 1.2920 | 46 | 0.6706 | [1³, 6³] | 97.02% | 0.59 h |
| 16³ (4096) | 1.0000 | 59 | 2.5625 | 1.0673 | 60 | 3.3125 | [1³, 4³] | 99.45% | 1.53 h |
| 20³ (8000) | 0.8400 | 72 | 7.7969 | 0.8424 | 72 | 6.0152 | [2³, 7³, 10³] | 88.53% | 2.42 h |
| 24³ (13,824) | 0.7200 | 85 | 25.2031 | 0.7090 | 86 | 25.6250 | [1³, 6³, 10³] | 91.83% | 5.69 h |
| 28³ (21,952) | 0.6200 | 99 | 54.4063 | 0.6527 | 100 | 55.1875 | [1³, 7³, 10³] | 93.43% | >6 h |
| 32³ (32,768) | 0.5600 | 111 | 115.7344 | 0.5457 | 112 | 117.2031 | [1³, 7³, 10³] | 95.99% | >6 h |
| 36³ (46,656) | 0.5000 | 124 | 193.6406 | 0.5007 | 124 | 193.3594 | [1³, 5³, 7³, 13³] | 97.79% | >6 h |
Table 2. Results of the IHSS method for solving the 3D convection-diffusion equation with the traversal method and the GPR method.

| n³ | α_trav | IT | CPU | α_gpr | IT | CPU | Training Set | (Traversal − Training)/Traversal | Traversal Time |
|---|---|---|---|---|---|---|---|---|---|
| 32³ (32,768) | 0.6200 | 125 | 47.7969 | 0.6389 | 125 | 47.7656 | [2³, 6³, 10³] | 98.03% | 3.25 h |
| 48³ (110,592) | 0.4500 | 187 | 257.7188 | 0.4647 | 187 | 256.1250 | [2³, 7³, 10³] | 99.37% | >6 h |
| 64³ (262,144) | 0.3600 | 246 | 680.7031 | 0.3502 | 249 | 693.2031 | [1³, 4³, 5³, 10³, 12³] | 99.89% | >6 h |
Table 3. Results of the LHSS method for solving the 3D convection-diffusion equation with the traversal method and the GPR method.

| n³ | α_trav | IT | CPU | α_gpr | IT | CPU | Training Set | (Traversal − Training)/Traversal | Traversal Time |
|---|---|---|---|---|---|---|---|---|---|
| 8³ (512) | 1.5300 | 5 | 0.0938 | 1.7323 | 5 | 0.0469 | [2³, 3³, 4³] | 92.09% | 0.07 h |
| 12³ (1728) | 1.1200 | 5 | 0.1781 | 1.3161 | 5 | 0.1250 | [2³, 3³, 4³] | 98.90% | 0.47 h |
| 16³ (4096) | 0.8700 | 5 | 0.3406 | 0.9999 | 5 | 0.3750 | [2³, 3³, 4³] | 99.50% | 1.04 h |
| 20³ (8000) | 0.7100 | 5 | 0.8125 | 0.7597 | 5 | 0.8106 | [2³, 3³, 4³] | 99.65% | 1.46 h |
| 24³ (13,824) | 0.6000 | 5 | 1.2969 | 0.6497 | 5 | 1.6719 | [1³, 6³, 10³] | 91.44% | 2.50 h |
| 28³ (21,952) | 0.5100 | 5 | 3.9531 | 0.5340 | 5 | 2.9219 | [1³, 6³, 10³] | 92.20% | 2.74 h |
| 32³ (32,768) | 0.4500 | 5 | 10.6094 | 0.4949 | 5 | 5.2813 | [1³, 7³, 10³] | 93.27% | 3.20 h |
| 36³ (46,656) | 0.4000 | 5 | 10.2188 | 0.4158 | 5 | 8.3281 | [1³, 7³, 10³] | 94.82% | 4.15 h |
Table 4. Results of the ILHSS method for solving the 3D convection-diffusion equation with the traversal method and the GPR method.

| n³ | α_trav | IT | CPU | α_gpr | IT | CPU | Training Set | (Traversal − Training)/Traversal | Traversal Time |
|---|---|---|---|---|---|---|---|---|---|
| 32³ (32,768) | 1.4100 | 7 | 2.3281 | 1.4789 | 7 | 2.3031 | [4³, 8³] | 98.02% | 0.37 h |
| 48³ (110,592) | 1.4300 | 7 | 10.2969 | 1.4394 | 7 | 7.4219 | [4³, 8³] | 99.35% | 1.11 h |
| 64³ (262,144) | 1.4100 | 7 | 37.7031 | 1.4155 | 7 | 36.8281 | [4³, 7³, 12³] | 99.49% | 3.04 h |
Table 5. Results of the HSS method for solving the 3D convection-diffusion equation with the theoretical method and the GPR method.

| n³ | α_theo | IT | CPU | α_gpr | IT | CPU | Training Set |
|---|---|---|---|---|---|---|---|
| 8³ (512) | 2.0521 | 35 | 0.6406 | 1.7533 | 32 | 0.2813 | [2³, 4³] |
| 12³ (1728) | 1.4359 | 49 | 1.3594 | 1.2920 | 46 | 0.6706 | [1³, 6³] |
| 16³ (4096) | 1.1025 | 62 | 4.1406 | 1.0673 | 60 | 3.3125 | [1³, 4³] |
| 20³ (8000) | 0.8943 | 75 | 8.2031 | 0.8424 | 72 | 6.0152 | [2³, 7³, 10³] |
| 24³ (13,824) | 0.7520 | 87 | 26.4844 | 0.7090 | 86 | 25.6250 | [1³, 6³, 10³] |
| 28³ (21,952) | 0.6487 | 100 | 55.8438 | 0.6312 | 99 | 54.1875 | [3³, 4³, 16³] |
| 32³ (32,768) | 0.5703 | 112 | 118.2188 | 0.5457 | 112 | 117.2031 | [1³, 7³, 10³] |
| 36³ (46,656) | 0.5088 | 124 | 205.2500 | 0.5007 | 124 | 193.3594 | [1³, 5³, 7³, 13³] |
Table 6. Results of the LHSS method for solving the 3D convection-diffusion equation with the theoretical method and the GPR method.

| n³ | α_theo | IT | CPU | α_gpr | IT | CPU | Training Set |
|---|---|---|---|---|---|---|---|
| 8³ (512) | 0.7091 | 9 | 0.0469 | 1.7323 | 5 | 0.0469 | [2³, 3³, 4³] |
| 12³ (1728) | 0.3436 | 12 | 0.4688 | 1.3161 | 5 | 0.1250 | [2³, 3³, 4³] |
| 16³ (4096) | 0.2026 | 16 | 1.8750 | 0.9999 | 5 | 0.3750 | [2³, 3³, 4³] |
| 20³ (8000) | 0.1333 | 20 | 4.5625 | 0.7597 | 5 | 0.8106 | [2³, 3³, 4³] |
| 24³ (13,824) | 0.0943 | 25 | 19.3438 | 0.6497 | 5 | 1.6719 | [1³, 6³, 10³] |
| 28³ (21,952) | 0.0701 | 31 | 41.5938 | 0.5340 | 5 | 2.9219 | [1³, 6³, 10³] |
| 32³ (32,768) | 0.0542 | 37 | 81.1719 | 0.4949 | 5 | 5.2813 | [1³, 7³, 10³] |
| 36³ (46,656) | 0.0432 | 44 | 150.7188 | 0.4158 | 5 | 8.3281 | [1³, 7³, 10³] |
Table 7. Results of the NHSS method for solving 3D convection-diffusion equation with theoretical method and GPR method.
Table 7. Results of the NHSS method for solving 3D convection-diffusion equation with theoretical method and GPR method.
| n³ (size) | α_theo | IT | CPU | α_gpr | IT | CPU | Training Set |
|---|---|---|---|---|---|---|---|
| 8³ (512) | 0.2711 | 4 | 0.0469 | 0.0100 | 4 | 0.0494 | [2³, 3³, 4³] |
| 12³ (1728) | 0.2880 | 5 | 0.1869 | 0.0100 | 3 | 0.1406 | [2³, 3³, 4³] |
| 16³ (4096) | 0.2945 | 5 | 0.4063 | 0.0100 | 3 | 0.1563 | [2³, 3³, 4³] |
| 20³ (8000) | 0.2978 | 5 | 0.8750 | 0.0100 | 3 | 0.6281 | [2³, 3³, 4³] |
| 24³ (13,824) | 0.2996 | 5 | 2.0156 | 0.0100 | 3 | 1.4063 | [1³, 6³, 10³] |
| 28³ (21,952) | 0.3007 | 5 | 5.2656 | 0.0100 | 3 | 4.1156 | [1³, 6³, 10³] |
| 32³ (32,768) | 0.3014 | 5 | 7.6250 | 0.0100 | 3 | 5.4375 | [1³, 7³, 10³] |
| 36³ (46,656) | 0.3020 | 5 | 14.1406 | 0.0100 | 3 | 9.6406 | [1³, 7³, 10³] |
Table 8. Results of the SHSS method for solving the 3D convection-diffusion equation with the theoretical method and the GPR method.
| n³ (size) | α_theo | IT | CPU | α_gpr | IT | CPU | Training Set |
|---|---|---|---|---|---|---|---|
| 8³ (512) | 0.2711 | 15 | 0.0781 | 0.0100 | 7 | 0.0281 | [2³, 3³, 4³] |
| 12³ (1728) | 0.2880 | 25 | 0.3438 | 0.0100 | 6 | 0.0781 | [2³, 3³, 4³] |
| 16³ (4096) | 0.2945 | 39 | 1.4063 | 0.0100 | 6 | 0.1563 | [2³, 3³, 4³] |
| 20³ (8000) | 0.2978 | 55 | 4.7500 | 0.0100 | 6 | 0.5625 | [2³, 3³, 4³] |
| 24³ (13,824) | 0.2996 | 75 | 13.6250 | 0.0100 | 7 | 3.4219 | [1³, 6³, 10³] |
| 28³ (21,952) | 0.3007 | 97 | 38.9375 | 0.0100 | 7 | 11.7813 | [1³, 6³, 10³] |
| 32³ (32,768) | 0.3014 | 122 | 100.5469 | 0.0100 | 8 | 15.1094 | [1³, 7³, 10³] |
| 36³ (46,656) | 0.3020 | 150 | 220.4844 | 0.0100 | 9 | 17.1563 | [1³, 7³, 10³] |
Table 9. Results of the SHSS-SS method for solving the 3D convection-diffusion equation with the theoretical method and the GPR method.
| n³ (size) | α_theo | IT | CPU | α_gpr | IT | CPU | Training Set |
|---|---|---|---|---|---|---|---|
| 8³ (512) | 0.2711 | 7 | 0.0469 | 0.0100 | 7 | 0.0438 | [2³, 3³, 4³] |
| 12³ (1728) | 0.2880 | 7 | 0.4063 | 0.0100 | 6 | 0.3281 | [2³, 3³, 4³] |
| 16³ (4096) | 0.2945 | 12 | 1.5625 | 0.0100 | 6 | 0.6563 | [2³, 3³, 4³] |
| 20³ (8000) | 0.2978 | 17 | 6.0625 | 0.0100 | 6 | 3.3906 | [2³, 3³, 4³] |
| 24³ (13,824) | 0.2996 | 24 | 16.7969 | 0.0100 | 6 | 9.2656 | [1³, 6³, 10³] |
| 28³ (21,952) | 0.3007 | 31 | 52.7188 | 0.0100 | 6 | 25.9063 | [1³, 6³, 10³] |
| 32³ (32,768) | 0.3014 | 40 | 139.7031 | 0.0100 | 6 | 46.2344 | [1³, 7³, 10³] |
| 36³ (46,656) | 0.3020 | 49 | 309.8125 | 0.0100 | 6 | 60.8906 | [1³, 7³, 10³] |
Table 10. Results of the HSS method for solving Equation (14) with the traversal method and the GPR method.
| m² (size) | α_trav | IT | CPU | α_gpr | IT | CPU | Training Set | (t_trav − t_train)/t_trav | Traversal Time |
|---|---|---|---|---|---|---|---|---|---|
| 32² (1024) | 0.5800 | 63 | 0.6563 | 0.5935 | 64 | 0.6250 | [2², 4², 10²] | 96.53% | 0.07 h |
| 64² (4096) | 0.3900 | 94 | 4.9844 | 0.3831 | 95 | 4.8594 | [2², 4², 6², 10²] | 99.36% | 0.46 h |
| 96² (9216) | 0.3100 | 117 | 18.0156 | 0.3093 | 118 | 19.0469 | [5², 7², 12²] | 99.76% | 1.67 h |
| 128² (16,384) | 0.2700 | 136 | 40.7969 | 0.2783 | 135 | 35.6094 | [5², 8², 12²] | 99.90% | 3.78 h |
| 160² (25,600) | 0.2500 | 151 | 71.4688 | 0.2514 | 152 | 72.1094 | [6², 10², 14²] | 99.91% | >6 h |
| 192² (36,864) | 0.2200 | 166 | 121.9531 | 0.2226 | 165 | 114.6875 | [6², 8², 14²] | 99.95% | >6 h |
| 224² (50,176) | 0.2100 | 178 | 181.4063 | 0.2089 | 178 | 181.0406 | [6², 10², 11²] | 99.98% | >6 h |
| 256² (65,536) | 0.2000 | 191 | 271.4375 | 0.1912 | 191 | 271.2344 | [8², 11², 13²] | 99.99% | >6 h |
Table 11. Results of the MHSS method for solving Equation (14) with the traversal method and the GPR method.
| m² (size) | α_trav | IT | CPU | α_gpr | IT | CPU | Training Set | (t_trav − t_train)/t_trav | Traversal Time |
|---|---|---|---|---|---|---|---|---|---|
| 32² (1024) | 0.7800 | 53 | 0.1094 | 0.7979 | 53 | 0.1006 | [3², 6², 14²] | 83.20% | 0.01 h |
| 64² (4096) | 0.5500 | 72 | 0.7344 | 0.5440 | 73 | 0.7813 | [3², 10², 14²] | 96.64% | 0.08 h |
| 96² (9216) | 0.4600 | 86 | 6.4063 | 0.4825 | 87 | 6.7813 | [3², 9², 12², 14²] | 99.45% | 0.63 h |
| 128² (16,384) | 0.4000 | 98 | 11.7500 | 0.4068 | 98 | 11.6719 | [4², 6², 10², 13²] | 99.71% | 1.14 h |
| 160² (25,600) | 0.3600 | 108 | 20.7500 | 0.3593 | 108 | 20.6063 | [4², 7², 10², 13²] | 99.81% | 1.91 h |
| 192² (36,864) | 0.3300 | 117 | 35.6094 | 0.3227 | 118 | 36.4844 | [10², 20², 40²] | 99.39% | 3.46 h |
| 224² (50,176) | 0.3100 | 125 | 57.0938 | 0.3114 | 125 | 57.0781 | [10², 20², 30², 60²] | 98.88% | 5.25 h |
| 256² (65,536) | 0.2900 | 133 | 81.6563 | 0.2990 | 133 | 81.2500 | [10², 30², 40², 60²] | 99.57% | >6 h |
Table 12. Results of the HSS method for solving Equation (14) with the theoretical method and the GPR method.
| m² (size) | α_theo | IT | CPU | α_gpr | IT | CPU | Training Set |
|---|---|---|---|---|---|---|---|
| 32² (1024) | 0.6734 | 71 | 0.7031 | 0.5935 | 64 | 0.6250 | [2², 4², 10²] |
| 64² (4096) | 0.4402 | 102 | 5.2180 | 0.3831 | 95 | 4.8594 | [2², 4², 6², 10²] |
| 96² (9216) | 0.3486 | 124 | 30.0469 | 0.3093 | 118 | 19.0469 | [5², 7², 12²] |
| 128² (16,384) | 0.2970 | 141 | 40.9531 | 0.2783 | 135 | 35.6094 | [5², 8², 12²] |
| 160² (25,600) | 0.2630 | 156 | 74.3750 | 0.2514 | 152 | 72.1094 | [6², 10², 14²] |
| 192² (36,864) | 0.2384 | 170 | 122.8906 | 0.2226 | 165 | 114.6875 | [6², 8², 14²] |
| 224² (50,176) | 0.2196 | 182 | 190.9844 | 0.2089 | 178 | 181.6406 | [6², 10², 11²] |
| 256² (65,536) | 0.2047 | 193 | 273.9531 | 0.1912 | 191 | 271.2344 | [8², 11², 13²] |
Table 13. Results of the MHSS method for solving Equation (14) with the theoretical method and the GPR method.
| m² (size) | α_theo | IT | CPU | α_gpr | IT | CPU | Training Set |
|---|---|---|---|---|---|---|---|
| 32² (1024) | 0.6734 | 59 | 0.1719 | 0.7979 | 53 | 0.1006 | [3², 6², 14²] |
| 64² (4096) | 0.4402 | 88 | 0.9219 | 0.5440 | 73 | 0.7813 | [3², 10², 14²] |
| 96² (9216) | 0.3486 | 109 | 8.0469 | 0.4825 | 87 | 6.7813 | [3², 9², 12², 14²] |
| 128² (16,384) | 0.2970 | 127 | 15.4844 | 0.4068 | 98 | 11.6719 | [4², 6², 10², 13²] |
| 160² (25,600) | 0.2630 | 142 | 26.5938 | 0.3593 | 108 | 20.6063 | [4², 7², 10², 13²] |
| 192² (36,864) | 0.2384 | 156 | 50.4531 | 0.3227 | 118 | 36.4844 | [10², 20², 40²] |
| 224² (50,176) | 0.2196 | 169 | 75.5000 | 0.3114 | 125 | 57.0781 | [10², 20², 30², 60²] |
| 256² (65,536) | 0.2047 | 181 | 109.7188 | 0.2990 | 133 | 81.2500 | [10², 30², 40², 60²] |
Table 14. Results of the MSNS method for solving Equation (14) with the theoretical method and the GPR method.
| m² (size) | α_theo | IT | CPU | α_gpr | IT | CPU | Training Set |
|---|---|---|---|---|---|---|---|
| 32² (1024) | 1.1456 | 42 | 1.2344 | 1.0317 | 38 | 0.7969 | [4², 11²] |
| 64² (4096) | 0.7906 | 57 | 7.4375 | 0.7376 | 54 | 6.6563 | [3², 4², 9²] |
| 96² (9216) | 0.6399 | 68 | 26.9531 | 0.5745 | 64 | 25.4531 | [6², 13², 20²] |
| 128² (16,384) | 0.5516 | 77 | 53.6250 | 0.4934 | 74 | 50.1094 | [7², 10², 24²] |
| 160² (25,600) | 0.4920 | 85 | 100.4688 | 0.4365 | 83 | 85.4219 | [6², 10², 15², 20²] |
| 192² (36,864) | 0.4483 | 92 | 189.4063 | 0.4217 | 89 | 182.1719 | [7², 10², 15², 27²] |
| 224² (50,176) | 0.4145 | 99 | 254.5938 | 0.3855 | 96 | 238.6875 | [7², 10², 15², 24²] |
| 256² (65,536) | 0.3873 | 104 | 335.6563 | 0.3628 | 102 | 328.0313 | [7², 10², 15², 21²] |
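Across all of the tables above, the GPR-predicted parameter α_gpr tracks the traversal optimum closely and, with few exceptions, outperforms the theoretically derived parameter, at a small fraction of the search cost. As a minimal sketch of the prediction step only (not the authors' pipeline: their training sets, features, and kernel are specified earlier in the paper; scikit-learn, the log-m input, and the RBF kernel below are our assumptions for illustration), one can fit a Gaussian process to the traversal optima of Table 10 and extrapolate to the next mesh size:

```python
# A minimal sketch of GPR-based parameter prediction, illustrating the idea
# behind the tables above. Assumptions: scikit-learn's GaussianProcessRegressor
# with an RBF kernel stands in for the paper's GPR model, and the traversal
# optima of Table 10 serve as training data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Mesh sizes m and traversal-optimal parameters alpha_trav from Table 10.
m = np.array([32, 64, 96, 128, 160, 192, 224], dtype=float)
alpha = np.array([0.58, 0.39, 0.31, 0.27, 0.25, 0.22, 0.21])

# Train on log(m): the optima decay roughly like a power of m, which the
# stationary RBF kernel models more comfortably on a log scale.
X = np.log(m).reshape(-1, 1)
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                               n_restarts_optimizer=10, random_state=0)
gpr.fit(X, alpha)

# Predict the parameter for the unseen mesh size m = 256, with the GPR's own
# uncertainty; Table 10 lists 0.2000 (traversal) for this size as a reference.
mean, std = gpr.predict(np.log([[256.0]]), return_std=True)
print(f"predicted alpha at m=256: {mean[0]:.4f} +/- {std[0]:.4f}")
```

Fitting against log(m) is a design choice rather than part of the paper's method; the point of the sketch is simply that a handful of cheap small-mesh optima suffice to place the parameter for a large mesh near the traversal optimum, which is what the percentage columns of Tables 3, 4, 10 and 11 quantify.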