Article

Dynamical Optimal Values of Parameters in the SSOR, AOR, and SAOR Testing Using Poisson Linear Equations

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mathematics, College of Sciences and Humanities in Al-Kharj, Prince Sattam bin Abdulaziz University, Alkharj 11942, Saudi Arabia
3 Department of Basic Engineering Science, Faculty of Engineering, Menofia University, Shebin El-Kom 32511, Egypt
4 Department of Mechanical Engineering, National United University, Miaoli 36063, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(18), 3828; https://doi.org/10.3390/math11183828
Submission received: 31 July 2023 / Revised: 3 September 2023 / Accepted: 5 September 2023 / Published: 6 September 2023

Abstract: This paper proposes a dynamical approach to determine the optimal values of the parameters used in each iteration of the symmetric successive over-relaxation (SSOR), accelerated over-relaxation (AOR), and symmetric accelerated over-relaxation (SAOR) methods for solving linear equation systems. When the optimal values of the parameters in the SSOR, AOR, and SAOR are hard to determine as fixed values, they are obtained by minimizing merit functions based on the maximal projection technique between the left- and right-hand-side vectors, which involve the input vector, the previous step values of the variables, and the parameters. The novelty is the new concept of dynamical optimal values of the parameters, instead of fixed values, together with the maximal projection technique. In a preferred range, the optimal values of the parameters can be quickly determined by using the golden section search algorithm with a loose convergence criterion. Since the theoretical optimal values are in general unknown, the new methods provide an alternative and proper choice of the values of the parameters for accelerating the convergence speed. Numerical tests of the linear Poisson equation, discretized to a matrix–vector form and to a Lyapunov equation form, were used to assess the performance of the dynamical optimal methods DOSSOR, DOAOR, and DOSAOR.

1. Introduction

Better dynamical realizations of the symmetric successive over-relaxation (SSOR), accelerated over-relaxation (AOR), and symmetric accelerated over-relaxation (SAOR) methods [1,2,3,4] are offered in this paper to solve the following:
$\mathbf{A}\mathbf{x} = \mathbf{b}$, (1)
where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is a given non-singular coefficient matrix whose diagonal elements are non-zero, $\mathbf{b} \in \mathbb{R}^{n}$ is a given input vector, and $\mathbf{x} \in \mathbb{R}^{n}$ is the vector of unknowns.
For a numerical solution $\mathbf{x}$ of Equation (1), $\|\mathbf{b} - \mathbf{A}\mathbf{x}\|$ is the residual norm. Many numerical methods minimize
$\min_{\mathbf{x}} \|\mathbf{b} - \mathbf{A}\mathbf{x}\|^{2}$ (2)
to develop numerical techniques in the Krylov subspace; for example, the GMRES was developed in [5,6,7,8].
Let
$\mathbf{y} := \mathbf{A}\mathbf{x}$. (3)
Equation (1) can be viewed as a balance of the unknown vector $\mathbf{y}$ with the input vector $\mathbf{b}$:
$\mathbf{y} = \mathbf{b}$. (4)
In order to solve Equation (1), we maximize
$\max_{\mathbf{x}} \dfrac{(\mathbf{b}\cdot\mathbf{y})^{2}}{\|\mathbf{b}\|^{2}\,\|\mathbf{y}\|^{2}} \le 1$, (5)
to render $\mathbf{y}$ close to $\mathbf{b}$. Equation (5) signifies the maximal projection of the direction of $\mathbf{y}$ onto the direction of $\mathbf{b}$; indeed, the maximum value of one is attained when $\mathbf{y} = \mathbf{b}$. Equation (5) is equivalent to
$\min_{\mathbf{x}} f = \dfrac{\|\mathbf{b}\|^{2}\,\|\mathbf{y}\|^{2}}{(\mathbf{b}\cdot\mathbf{y})^{2}} \ge 1$, (6)
from which some methods [9,10] were developed to solve Equation (1) in an affine Krylov subspace.
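To make the projection idea concrete, the following minimal Python sketch (an illustration added here, not part of the original derivation) evaluates the merit function of Equation (6) for a trial solution; $f$ equals one exactly when $\mathbf{A}\mathbf{x}$ is parallel to $\mathbf{b}$, and exceeds one otherwise:

```python
import numpy as np

def merit(A, b, x):
    """Merit function f = ||b||^2 ||y||^2 / (b . y)^2 with y = A x (Equation (6)).

    f >= 1 always; f = 1 exactly when y is parallel to b, i.e., when the
    projection of y onto the direction of b is maximal."""
    y = A @ x
    return (b @ b) * (y @ y) / (b @ y) ** 2

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(merit(A, b, np.linalg.solve(A, b)))  # 1.0 (up to rounding): y = b
print(merit(A, b, np.ones(2)))             # about 1.213 > 1: not a solution
```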
The theoretical value of the optimal relaxation parameter in the method of successive over-relaxation (SOR) is known only in some limited cases. When $\mathbf{A}$ is a positive definite matrix, the following $w$ is the optimal value for the SOR [1]:
$w_{\mathrm{opt}} = \dfrac{2}{1 + \sqrt{1 - \rho^{2}(\mathbf{I}_{n} - \mathbf{D}^{-1}\mathbf{A})}}$, (7)
where $\mathbf{D}$ consists of the diagonal elements of $\mathbf{A}$, and $\rho$ is the spectral radius of $\mathbf{I}_{n} - \mathbf{D}^{-1}\mathbf{A}$. An adaptive method based on the Wolfe conditions to search for the optimal relaxation parameter in the SOR was proposed in [11].
For a system (1) having a non-square coefficient matrix of dimension $m \times n$, Darvishi and Khosro-Aghdam [12] decomposed $\mathbf{A}$ to obtain
$\mathbf{A} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}$, (8)
where the rank of $\mathbf{A}_{11}$ is $r$, and $\mathbf{A}_{11} \in \mathbb{R}^{r \times r}$, $\mathbf{A}_{12} \in \mathbb{R}^{r \times (n-r)}$, $\mathbf{A}_{21} \in \mathbb{R}^{(m-r) \times r}$, and $\mathbf{A}_{22} \in \mathbb{R}^{(m-r) \times (n-r)}$. Suppose that $\mathbf{B} = \mathbf{A}_{21}\mathbf{A}_{11}^{-1}$. It is remarkable that Darvishi and Khosro-Aghdam [12] derived the optimal value of the relaxation parameter for the symmetric SOR as an explicit closed-form function of $\|\mathbf{B}\|$ alone,
$w_{\mathrm{opt}} = w_{\mathrm{opt}}(\|\mathbf{B}\|)$, (9)
whose full expression is given in [12].
Tian [13] developed the AOR method for non-square linear systems. Many works analyzed the convergence behavior of the SSOR, AOR, and SAOR methods [2,3,14]. Liu [15] applied the minimization technique in Equation (6) to search for the optimal value of the relaxation parameter used in the SOR at each iteration step. For the different extensions of the SOR and AOR to different kinds of linear systems, there are many papers, such as the symmetric SOR method for augmented systems [16,17,18,19,20], the generalized AOR method for the linear complementarity problem [21], for generalized least-squares problems [22], and for a class of generalized saddle point problems [23], as well as the symmetric successive over-relaxation method for rank-deficient linear systems [24] and the symmetric block SOR methods for rank-deficient least-squares problems [25].
In general, the aforementioned methods involve one, two, or more parameters. By changing the values of the parameters in these iterative methods, the speed of their convergence changes. Hence, one of the important tasks in the area of such iterative methods is finding the optimal values of their parameters, which is not easy. Theoretical optimal values such as those in Equations (7) and (9) are hard to achieve for the SSOR, AOR, and SAOR methods. In the present paper, we extend the idea of [15] to search for the optimal values of the parameters used in the SSOR, AOR, and SAOR. Whether using Equation (7) for the SOR or Equation (9) for the SSOR of rectangular systems, $w_{\mathrm{opt}}$ is a fixed value once the coefficient matrix $\mathbf{A}$ is given. Rather than fixed values, we seek the optimal values at each iteration step as dynamical values. The novelty of the present paper lies in the new dynamical optimal values of the parameters, instead of fixed values, and in the simple maximal projection technique derived to determine them.

2. Symmetric Successive Over-Relaxation (SSOR) Method

It is known that $\mathbf{A}$ can be uniquely decomposed into
$\mathbf{A} = \mathbf{D} - \mathbf{U} - \mathbf{L}$, (10)
where $\mathbf{D}$ is the diagonal matrix of $\mathbf{A}$, having non-zero elements, while $\mathbf{U}$ and $\mathbf{L}$ are, respectively, strictly upper- and strictly lower-triangular matrices obtained from $\mathbf{A}$.
It follows from Equations (1) and (10) that an equivalent linear system is formed:
$\mathbf{D}\mathbf{x} - \mathbf{U}\mathbf{x} - \mathbf{L}\mathbf{x} = \mathbf{b}$. (11)
Multiplying Equation (11) by $w$ and adding the term $\mathbf{D}\mathbf{x}$ to both sides generates
$\mathbf{D}\mathbf{x} + w\mathbf{D}\mathbf{x} - w\mathbf{U}\mathbf{x} - w\mathbf{L}\mathbf{x} = w\mathbf{b} + \mathbf{D}\mathbf{x}$, (12)
whose iterative form is known as the successive over-relaxation (SOR) method [4]:
$(\mathbf{D} - w\mathbf{L})\mathbf{x}^{(k+1)} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}^{(k)} + w\mathbf{U}\mathbf{x}^{(k)}$, (13)
where $0 < w < 2$ is a relaxation parameter. Since $\mathbf{D} - w\mathbf{L}$ is lower triangular, Equation (13) is solved by forward substitution, and the solution of Equation (1) is obtained once $\mathbf{x}^{(k)}$ satisfies the prescribed convergence criterion.
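As an illustration (a minimal sketch, not the authors' code), one SOR update of Equation (13) can be written componentwise, which makes the forward substitution explicit:

```python
import numpy as np

def sor_step(A, b, x, w):
    """One SOR iteration of Equation (13), written componentwise.

    Entries x_new[:i] are already updated when row i is processed,
    which is exactly the forward substitution on (D - wL)."""
    x_new = x.copy()
    for i in range(len(b)):
        s_lower = A[i, :i] @ x_new[:i]      # strict lower part of A, new values
        s_upper = A[i, i + 1:] @ x[i + 1:]  # strict upper part of A, old values
        x_new[i] = (1 - w) * x[i] + w * (b[i] - s_lower - s_upper) / A[i, i]
    return x_new
```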
Instead of the SOR, the symmetric successive over-relaxation (SSOR) method reads as follows [26]:
$(\mathbf{D} - w\mathbf{L})\mathbf{x}^{(k+1/2)} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}^{(k)} + w\mathbf{U}\mathbf{x}^{(k)}$, (14)
$(\mathbf{D} - w\mathbf{U})\mathbf{x}^{(k+1)} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}^{(k+1/2)} + w\mathbf{L}\mathbf{x}^{(k+1/2)}$, (15)
where $\mathbf{x}^{(k+1/2)}$ is a tentative vector between $\mathbf{x}^{(k)}$ and $\mathbf{x}^{(k+1)}$. It follows from Equations (14) and (15) that
$\mathbf{x}^{(k+1)} = (\mathbf{D} - w\mathbf{U})^{-1}[(1-w)\mathbf{D} + w\mathbf{L}](\mathbf{D} - w\mathbf{L})^{-1}[(1-w)\mathbf{D} + w\mathbf{U}]\,\mathbf{x}^{(k)} + w(\mathbf{D} - w\mathbf{U})^{-1}\{\mathbf{b} + [(1-w)\mathbf{D} + w\mathbf{L}](\mathbf{D} - w\mathbf{L})^{-1}\mathbf{b}\}$. (16)
Determining the optimal value of $w$ in Equation (16) is a great challenge.
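For reference, a dense-matrix sketch of one SSOR iteration, Equations (14) and (15), using triangular solves is given below (illustrative only; scipy is assumed):

```python
import numpy as np
from scipy.linalg import solve_triangular

def ssor_step(A, b, x, w):
    """One SSOR iteration: forward half-sweep (14), then backward half-sweep (15)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)   # splitting A = D - U - L of Equation (10)
    U = -np.triu(A, 1)
    # (D - wL) x_half = w b + (1 - w) D x + w U x
    x_half = solve_triangular(D - w * L,
                              w * b + (1 - w) * (D @ x) + w * (U @ x),
                              lower=True)
    # (D - wU) x_new = w b + (1 - w) D x_half + w L x_half
    return solve_triangular(D - w * U,
                            w * b + (1 - w) * (D @ x_half) + w * (L @ x_half),
                            lower=False)
```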

3. A New Merit Function

We temporarily take $\mathbf{x}^{(k+1)}$ on the left-hand side of Equation (15) to be $\mathbf{x}^{(k+1/2)}$; then, at each iterative step, we have a vector equation with $w_k$ to be determined:
$(\mathbf{D} - w_k\mathbf{U})\mathbf{x}^{(k+1/2)} = w_k\mathbf{b} + (1-w_k)\mathbf{D}\mathbf{x}^{(k+1/2)} + w_k\mathbf{L}\mathbf{x}^{(k+1/2)}$, (17)
where $\mathbf{x}^{(k+1/2)}$ is computed from Equation (14).
The function $f$ in Equation (6) consists of the two vectors $\mathbf{y}$ and $\mathbf{b}$ of Equation (4). Similarly, Equation (17) is a balance equation of two vectors,
$\mathbf{F}_1 = \mathbf{F}_2$, (18)
where
$\mathbf{F}_1 := (\mathbf{D} - w_k\mathbf{U})\mathbf{x}^{(k+1/2)}, \quad \mathbf{F}_2 := w_k\mathbf{b} + (1-w_k)\mathbf{D}\mathbf{x}^{(k+1/2)} + w_k\mathbf{L}\mathbf{x}^{(k+1/2)}$. (19)
By inserting these, like $\mathbf{y}$ and $\mathbf{b}$ of Equation (4), into Equation (6), we can derive the merit function
$f = \dfrac{f_1(w_k)\,f_2(w_k)}{f_3^{2}(w_k)}$, (20)
$f_1(w_k) := \|\mathbf{F}_1\|^{2}, \quad f_2(w_k) := \|\mathbf{F}_2\|^{2}, \quad f_3(w_k) := \mathbf{F}_1\cdot\mathbf{F}_2$. (21)
We can seek the optimal value of $w_k$ to minimize $f$ at each iteration step:
$\min_{w_k \in (a,b)} f = \dfrac{f_1(w_k)\,f_2(w_k)}{f_3^{2}(w_k)}$. (22)
In a given interval $(a,b) \subset (0,2)$, the minimization in Equation (22) can be performed by the one-dimensional golden section search algorithm shown in Appendix A.
In accordance with Equation (22), the optimal value is available at each step, and the resulting iterative algorithm is named the dynamical optimal SSOR (DOSSOR) method, which provides the dynamical optimal values of $w_k$ in the iterations. Because we are not interested in a very precise value of $w_k$ at each iteration step, and to speed up the computation in Equation (22), the convergence criterion used in the golden section search algorithm is quite loose, say $10^{-1}$ or $10^{-2}$, so that only a few steps are spent seeking $w_k$. A sketch of one DOSSOR step is given below.
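The following sketch assembles one DOSSOR step under the above definitions (our illustration, not the authors' code; scipy's bounded scalar minimizer stands in for the golden section search of Appendix A, with a deliberately loose tolerance):

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.optimize import minimize_scalar

def dossor_step(A, b, x, wa, wb, tol=1e-1):
    """One DOSSOR step: choose w_k in (wa, wb) by minimizing the merit
    function (20)-(21) built from F1 and F2 of Equation (19), then apply
    the SSOR update (14)-(15) with that w_k."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)

    def half_sweep(w):
        # Equation (14): (D - wL) x_half = w b + (1 - w) D x + w U x
        rhs = w * b + (1 - w) * (D @ x) + w * (U @ x)
        return solve_triangular(D - w * L, rhs, lower=True)

    def merit(w):
        xh = half_sweep(w)
        F1 = (D - w * U) @ xh
        F2 = w * b + (1 - w) * (D @ xh) + w * (L @ xh)
        return (F1 @ F1) * (F2 @ F2) / (F1 @ F2) ** 2

    w_k = minimize_scalar(merit, bounds=(wa, wb), method='bounded',
                          options={'xatol': tol}).x
    xh = half_sweep(w_k)
    rhs = w_k * b + (1 - w_k) * (D @ xh) + w_k * (L @ xh)
    return solve_triangular(D - w_k * U, rhs, lower=False), w_k
```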

4. Accelerated Over-Relaxation (AOR) Method and Its Symmetrization

Hadjidimos [2] derived the accelerated over-relaxation (AOR) method to solve Equation (1), which can be realized from Equation (12) by adding another parameter $\sigma$ that precedes $\mathbf{L}\mathbf{x}$:
$\mathbf{D}\mathbf{x} + w\mathbf{D}\mathbf{x} - w\mathbf{U}\mathbf{x} - (w - \sigma + \sigma)\mathbf{L}\mathbf{x} = w\mathbf{b} + \mathbf{D}\mathbf{x}$. (23)
Then, we move $w\mathbf{D}\mathbf{x}$, $w\mathbf{U}\mathbf{x}$, and $(w-\sigma)\mathbf{L}\mathbf{x}$ to the right-hand side, take the value of $\mathbf{x}$ on the right-hand side to be $\mathbf{x}^{(k)}$, and take the value of $\mathbf{x}$ on the left-hand side to be $\mathbf{x}^{(k+1)}$:
$(\mathbf{D} - \sigma\mathbf{L})\mathbf{x}^{(k+1)} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}^{(k)} + w\mathbf{U}\mathbf{x}^{(k)} + (w-\sigma)\mathbf{L}\mathbf{x}^{(k)}$. (24)
We temporarily set $\mathbf{x}^{(k+1)}$ equal to $\mathbf{x}^{(k)}$ on the left-hand side. By inserting
$\mathbf{G}_1 = (\mathbf{D} - \sigma\mathbf{L})\mathbf{x}^{(k)}, \quad \mathbf{G}_2 = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}^{(k)} + w\mathbf{U}\mathbf{x}^{(k)} + (w-\sigma)\mathbf{L}\mathbf{x}^{(k)}$ (25)
into Equation (6), we can derive
$\min_{(w_k,\sigma_k) \in (a,b) \times (c,d)} f = \dfrac{\|\mathbf{G}_1\|^{2}\,\|\mathbf{G}_2\|^{2}}{(\mathbf{G}_1\cdot\mathbf{G}_2)^{2}}$. (26)
At each iterative step, we can search for the optimal values of $(w_k, \sigma_k)$ to minimize $f$, which can be performed by the two-dimensional golden section search algorithm shown in Appendix B. By using the minimization procedure in Equation (26), we obtain the dynamical optimal AOR (DOAOR); a sketch of the parameter selection is given below.
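As a sketch of the parameter selection only (our illustration; a coarse grid search replaces the two-dimensional golden section search of Appendix B), the DOAOR merit function of Equation (26), with $\mathbf{G}_1$ and $\mathbf{G}_2$ of Equation (25), can be minimized as follows:

```python
import numpy as np

def doaor_parameters(A, b, x, w_range, s_range, m=21):
    """Pick (w_k, sigma_k) minimizing f = |G1|^2 |G2|^2 / (G1 . G2)^2,
    with G1 and G2 of Equation (25) evaluated at the current iterate x.
    A coarse m-by-m grid stands in for the 2D golden section search."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    best_f, best_w, best_s = np.inf, None, None
    for w in np.linspace(w_range[0], w_range[1], m):
        for s in np.linspace(s_range[0], s_range[1], m):
            G1 = (D - s * L) @ x
            G2 = w * b + (1 - w) * (D @ x) + w * (U @ x) + (w - s) * (L @ x)
            f = (G1 @ G1) * (G2 @ G2) / (G1 @ G2) ** 2
            if f < best_f:
                best_f, best_w, best_s = f, w, s
    return best_w, best_s
```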
As with the modification from the SOR to the SSOR, we can modify Equation (24) to become the symmetric AOR (SAOR) [3]:
$(\mathbf{D} - \sigma\mathbf{L})\mathbf{x}^{(k+1/2)} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}^{(k)} + w\mathbf{U}\mathbf{x}^{(k)} + (w-\sigma)\mathbf{L}\mathbf{x}^{(k)}$,
$(\mathbf{D} - \sigma\mathbf{U})\mathbf{x}^{(k+1)} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}^{(k+1/2)} + w\mathbf{L}\mathbf{x}^{(k+1/2)} + (w-\sigma)\mathbf{U}\mathbf{x}^{(k+1/2)}$. (27)
In the minimization of Equation (26), we utilize
$\mathbf{G}_1 = (\mathbf{D} - \sigma\mathbf{U})\mathbf{x}^{(k+1/2)}, \quad \mathbf{G}_2 = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}^{(k+1/2)} + w\mathbf{L}\mathbf{x}^{(k+1/2)} + (w-\sigma)\mathbf{U}\mathbf{x}^{(k+1/2)}$. (28)
Therefore, the optimal values of $(w_k, \sigma_k)$ can be found in the rectangle $(a,b) \times (c,d)$ for the faster convergence of the so-called dynamical optimal SAOR (DOSAOR).

5. Numerical Verifications

In order to assess the performance of the DOSSOR, DOAOR, and DOSAOR, we test the following 2D boundary value problem of the Poisson equation:
$\Delta u(x,y) = p(x,y), \quad 0 < x < 1,\ 0 < y < 1$,
$u(0,y) = g_1(y), \quad u(x,0) = g_2(x), \quad u(1,y) = g_3(y), \quad u(x,1) = g_4(x)$, (29)
where $p(x,y)$, $g_1(y)$, $g_2(x)$, $g_3(y)$, and $g_4(x)$ are given functions.

5.1. Matrix–Vector Form of Equations

The process of constructing the matrix–vector form of Equation (1) from Equation (29) using finite differences was described by Liu [27]. By applying a standard five-point finite difference method to Equation (29), one can obtain a system of matrix–vector-type linear equations:
$F_{i,j} = \dfrac{1}{(\Delta x)^2}[u_{i+1,j} - 2u_{i,j} + u_{i-1,j}] + \dfrac{1}{(\Delta y)^2}[u_{i,j+1} - 2u_{i,j} + u_{i,j-1}] - p(x_i, y_j) = 0, \quad 1 \le i \le n_1,\ 1 \le j \le n_2$, (30)
where $\Delta x = 1/(n_1+1)$, $\Delta y = 1/(n_2+1)$, and $u_{i,j} := u(x_i, y_j)$ are the numerical values of $u(x,y)$ at the grid points $(x_i, y_j) = (i\Delta x, j\Delta y)$, in which $u_{0,j}$, $u_{n_1+1,j}$, $u_{i,0}$, and $u_{i,n_2+1}$ are determined by the given boundary values.
By letting $K = n_2(i-1) + j$ and running $i$ from 1 to $n_1$ and $j$ from 1 to $n_2$, we can set up $x_K$ in Equation (1) and the algebraic equations $F_K$ as follows:

Do i = 1, n1; Do j = 1, n2:
    K = n2 (i − 1) + j
    x_K = u_{i,j}
    F_K = F_{i,j}                                   (31)

At the same time, the components of the input vector $\mathbf{b}$ and the coefficient matrix $\mathbf{A}$ are constructed as follows:

Do i = 1, n1; Do j = 1, n2:
    K = n2 (i − 1) + j
    b_K = p(x_i, y_j)
    If i = 1 and j = 1, then b_K = p(x_i, y_j) − g1(y_j)/(Δx)² − g2(x_i)/(Δy)²
    If i = 1 and j = n2, then b_K = p(x_i, y_j) − g1(y_j)/(Δx)² − g4(x_i)/(Δy)²
    If i = n1 and j = 1, then b_K = p(x_i, y_j) − g3(y_j)/(Δx)² − g2(x_i)/(Δy)²
    If i = n1 and j = n2, then b_K = p(x_i, y_j) − g3(y_j)/(Δx)² − g4(x_i)/(Δy)²
    If i = 1 and 1 < j < n2, then b_K = p(x_i, y_j) − g1(y_j)/(Δx)²
    If j = n2 and 1 < i < n1, then b_K = p(x_i, y_j) − g4(x_i)/(Δy)²
    If i = n1 and 1 < j < n2, then b_K = p(x_i, y_j) − g3(y_j)/(Δx)²
    If j = 1 and 1 < i < n1, then b_K = p(x_i, y_j) − g2(x_i)/(Δy)²
    L1 = n2 (i − 2) + j;  L2 = n2 (i − 1) + j − 1;  L3 = n2 (i − 1) + j;  L4 = n2 (i − 1) + j + 1;  L5 = n2 i + j
    A_{K,L1} = A_{K,L5} = 1/(Δx)²;  A_{K,L2} = A_{K,L4} = 1/(Δy)²;  A_{K,L3} = −2/(Δx)² − 2/(Δy)²   (32)
Here, we have $n = n_1 \times n_2$ linear equations.
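The construction of Equations (31) and (32) translates directly into the following assembly sketch (our illustration; a dense matrix is used for clarity, whereas a sparse format would be preferred in practice):

```python
import numpy as np

def assemble_poisson(n1, n2, p, g1, g2, g3, g4):
    """Assemble A and b of Equation (1) from the five-point stencil (30),
    with row K = n2*(i-1) + j for the grid point (i, j); the 0-based index
    K - 1 is used internally. g1..g4 are the boundary data of (29)."""
    dx, dy = 1.0 / (n1 + 1), 1.0 / (n2 + 1)
    n = n1 * n2
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            K = n2 * (i - 1) + j - 1            # 0-based row index
            xi, yj = i * dx, j * dy
            A[K, K] = -2.0 / dx**2 - 2.0 / dy**2
            b[K] = p(xi, yj)
            # neighbours in x: matrix entry, or boundary term moved to b
            if i > 1:  A[K, K - n2] = 1.0 / dx**2
            else:      b[K] -= g1(yj) / dx**2
            if i < n1: A[K, K + n2] = 1.0 / dx**2
            else:      b[K] -= g3(yj) / dx**2
            # neighbours in y
            if j > 1:  A[K, K - 1] = 1.0 / dy**2
            else:      b[K] -= g2(xi) / dy**2
            if j < n2: A[K, K + 1] = 1.0 / dy**2
            else:      b[K] -= g4(xi) / dy**2
    return A, b
```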
First, we consider
$u(x,y) = (x^2 - x)(y^2 - y)$, (33)
which renders $p(x,y) = 2(x^2 - x) + 2(y^2 - y)$ and $g_1(y) = g_2(x) = g_3(y) = g_4(x) = 0$.
In the numerical tests, we took $\varepsilon = 10^{-5}$ as the convergence criterion, with $n_1 = n_2 = 29$ and $h := \Delta x = \Delta y = 1/30$. The usual SSOR with $w = 1.5$ converged in 221 steps; the accuracy was measured by the maximum error (ME) $= 3.31\times10^{-8}$ and the root mean square error (RMSE) $= 1.71\times10^{-8}$. The SOR with $w = 1.5$ converged in 414 steps, with ME $= 3.16\times10^{-8}$ and RMSE $= 1.52\times10^{-8}$. Obviously, the SSOR converged faster than the SOR by about a factor of two.
By using the DOSSOR with $a = 1.85$ and $b = 1.9$, convergence was achieved in 105 steps, as shown in Figure 1a, satisfying the convergence criterion $\varepsilon = 10^{-5}$, where the residual is defined as $\|\mathbf{F}\|$. The convergence speed of the DOSSOR was thus improved by about a factor of two compared to the SSOR. An ME $= 1.69\times10^{-8}$ and an RMSE $= 7.88\times10^{-9}$ were obtained by the DOSSOR, which are more accurate than the values obtained by the SSOR. The accuracy and convergence speed of the presented DOSSOR were enhanced owing to the use of the merit function (22). As shown in Figure 1b, $f$ quickly tended to 1, and the optimal values of $w$ varied between 1.8527 and 1.8836, as shown in Figure 1c. Remarkably, when $\varepsilon = 10^{-13}$ was taken, we could obtain an ME $= 4.86\times10^{-17}$ and an RMSE $= 1.49\times10^{-17}$, although the DOSSOR did not converge within 300 steps; the numerical solution obtained was almost equal to the exact one.
Table 1 lists the number of iterations (NI), the ME, and the RMSE for different $\varepsilon$ using the DOSSOR with a fixed $h = 1/30$.
By using the DOAOR with $(a,b) = (1.85, 1.9)$ and $(c,d) = (1.8, 1.9)$, convergence was achieved in 119 steps, as shown in Figure 2a, satisfying the convergence criterion $\varepsilon = 10^{-5}$. The convergence speed of the DOAOR was about twice that of the ad hoc AOR with $w = 1.9$ and $\sigma = 1.6$, which spent 257 steps. An ME $= 1.07\times10^{-9}$ and an RMSE $= 1.13\times10^{-10}$ obtained by the DOAOR were more accurate than those obtained by the AOR, with ME $= 3.03\times10^{-8}$ and RMSE $= 1.54\times10^{-8}$. The optimal values of $w$ and $\sigma$ are shown in Figure 2b,c.
Table 2 lists the NI, ME, and RMSE for different $\varepsilon$ using the DOAOR with a fixed $h = 1/30$. Upon comparing the values to those in Table 1 with the same $\varepsilon = 10^{-8}, 10^{-9}, 10^{-10}, 10^{-11}$, the DOAOR converged slightly more slowly than the DOSSOR, but the accuracy was raised by one or two orders.
In Table 3, we list the NI, ME, and RMSE for different mesh sizes $h$ using the DOAOR with a fixed $\varepsilon = 10^{-13}$. It is interesting that the accuracy was very good, even for a larger mesh size. When $h < 1/15$, the accuracy decreased, and the iteration did not converge within 300 steps under the stringent convergence criterion $\varepsilon = 10^{-13}$.
Using the DOSAOR, we took $n_1 = n_2 = 15$, $(a,b) = (1.55, 1.65)$, and $(c,d) = (1.55, 1.65)$; through 125 steps, it converged, as shown in Figure 3a, satisfying the convergence criterion $\varepsilon = 10^{-13}$. An ME $= 4.03\times10^{-16}$ and an RMSE $= 2.15\times10^{-16}$ were more accurate than the values obtained by the AOR with ad hoc values $w = 1.9$ and $\sigma = 1.6$, whose ME $= 3.03\times10^{-8}$ and RMSE $= 1.44\times10^{-8}$. The optimal values of $w$ and $\sigma$ are shown in Figure 3b,c. As seen by comparing Table 4 to Table 3, the DOSAOR converged faster than the DOAOR, with a competitive accuracy.

5.2. Lyapunov Equation

In addition, we can derive a Lyapunov equation for Equation (29):
$\mathbf{P}\mathbf{U} + \mathbf{U}\mathbf{P}^{\mathrm{T}} = \mathbf{Q}$, (34)
where $\mathbf{U} = [u_{ij}]$ denotes the numerical solution of $u(x,y)$ at the grid points $(x_i, y_j)$,
$\mathbf{P} = \begin{bmatrix} -2 & 1 & & 0 \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ 0 & & 1 & -2 \end{bmatrix}$ (35)
is an $n_1 \times n_1$-dimensional tridiagonal matrix, and
$\tilde{\mathbf{Q}} = h^2 \begin{bmatrix} p(x_1,y_1) & p(x_1,y_2) & \cdots & p(x_1,y_{n_1}) \\ \vdots & \vdots & & \vdots \\ p(x_{n_1},y_1) & p(x_{n_1},y_2) & \cdots & p(x_{n_1},y_{n_1}) \end{bmatrix}$. (36)
Here, $(x_i, y_j) = (ih, jh)$, $i, j = 1, \ldots, n_1$, and $h = 1/(n_1+1)$; hence, we have $n = n_1^2$.
At $(x_i, y_j)$, $i, j = 2, \ldots, n_1 - 1$, we take $Q_{ij} = \tilde{Q}_{ij}$. However, for the other nodal points, the $Q_{ij}$ are constructed according to the following:

If i = 1 and j = 1, then Q_{1,1} = Q̃_{1,1} − g1(y_1) − g2(x_1)
If i = 1 and j = n1, then Q_{1,n1} = Q̃_{1,n1} − g1(y_{n1}) − g4(x_1)
If i = n1 and j = 1, then Q_{n1,1} = Q̃_{n1,1} − g3(y_1) − g2(x_{n1})
If i = n1 and j = n1, then Q_{n1,n1} = Q̃_{n1,n1} − g3(y_{n1}) − g4(x_{n1})
If i = 1 and 1 < j < n1, then Q_{1,j} = Q̃_{1,j} − g1(y_j)
If j = n1 and 1 < i < n1, then Q_{i,n1} = Q̃_{i,n1} − g4(x_i)
If i = n1 and 1 < j < n1, then Q_{n1,j} = Q̃_{n1,j} − g3(y_j)
If j = 1 and 1 < i < n1, then Q_{i,1} = Q̃_{i,1} − g2(x_i)   (37)
We can transform Equation (34) into a vectorized linear system of dimension $n$:
$(\mathbf{I}_{n_1} \otimes \mathbf{P} + \mathbf{P} \otimes \mathbf{I}_{n_1})\,\mathrm{vec}(\mathbf{U}) = \mathrm{vec}(\mathbf{Q})$, (38)
where $\otimes$ is the Kronecker product, and $\mathrm{vec}(\mathbf{U})$ is an $n$-dimensional vector consisting of all the rows of the matrix $\mathbf{U}$.
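A short sketch of the vectorization (38) follows (our illustration; since $\mathbf{P}$ is symmetric, stacking $\mathbf{U}$ by rows with numpy's default row-major reshape is consistent with Equation (38)):

```python
import numpy as np

n1 = 29
h = 1.0 / (n1 + 1)

# P of Equation (35): n1 x n1 tridiagonal matrix with -2 on the diagonal
P = (np.diag(-2.0 * np.ones(n1))
     + np.diag(np.ones(n1 - 1), 1)
     + np.diag(np.ones(n1 - 1), -1))

# Q holds h^2 p(x_i, y_j) plus the boundary corrections of Equation (37);
# it is left as zeros here purely as a placeholder.
Q = np.zeros((n1, n1))

I = np.eye(n1)
A_big = np.kron(I, P) + np.kron(P, I)   # Equation (38)
b_big = Q.reshape(-1)                   # vec(Q), stacking the rows
# A_big @ vec(U) = b_big is now an n1^2-dimensional system of the form (1).
```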
Using the DOSSOR with $a = 1.8$ and $b = 1.85$, convergence was achieved in 204 steps, as shown in Figure 4a, satisfying $\varepsilon = 10^{-14}$, where the residual is defined by $\|\mathbf{A}\mathbf{x}^{(k)} - \mathbf{b}\|$. An ME $= 2.22\times10^{-14}$ and an RMSE $= 1.09\times10^{-14}$ were obtained, which are more accurate, and were obtained faster, than the values shown in Table 1. As seen in Figure 4b, $f$ quickly tended to 1, and the optimal values of $w$ varied between 1.821 and 1.845, as shown in Figure 4c.
Using the DOAOR with $(a,b) = (1.8, 1.9)$ and $(c,d) = (1.8, 1.9)$, convergence was achieved in 184 steps, as shown in Figure 5a, satisfying the convergence criterion $\varepsilon = 10^{-14}$. An ME $= 2.98\times10^{-15}$ and an RMSE $= 1.09\times10^{-15}$ were more accurate, and obtained faster, than the values obtained by the DOSSOR. The optimal values of $w$ and $\sigma$ are shown in Figure 5b,c.
In Table 5, we list the NI, ME, and RMSE for different mesh sizes $h$ using the DOAOR to solve the Lyapunov equation with a fixed $\varepsilon = 10^{-13}$. It is interesting that the accuracy was very good, even for a larger mesh size. By comparing these values to Table 3 with the same $h = 1/10, 1/15$, the convergence of the DOAOR for the Lyapunov equation is seen to be faster than that for the matrix–vector equation in Section 5.1; however, the accuracy was worse by about two and three orders, respectively.
Using the DOSAOR, we took $(a,b) = (1.65, 1.75)$ and $(c,d) = (1.65, 1.75)$. In Table 6, we list the NI, ME, and RMSE for different mesh sizes $h$ using the DOSAOR to solve the Lyapunov equation with a fixed $\varepsilon = 10^{-13}$. Upon comparing those values to Table 4, it can be seen that the convergence of the DOSAOR for the Lyapunov equation was faster, but the accuracy decreased by about two orders.
To improve the accuracy shown in Table 7, the convergence criterion for the DOSAOR with the Lyapunov equation was tightened to $\varepsilon = 10^{-15}$. It is interesting that the accuracy was very good, even for a larger mesh size. By comparing those values to Table 4, the convergence of the DOSAOR for the Lyapunov equation is seen to be faster than that for the matrix–vector equation in Section 5.1, where the case $h = 1/20$ did not converge within 300 iterations even under the larger $\varepsilon = 10^{-13}$.
Next, we consider a non-homogeneous boundary value problem of the Poisson equation with
$u(x,y) = x e^{xy}$, (39)
which renders $p(x,y) = (x^3 + xy^2 + 2y)e^{xy}$, $g_1(y) = 0$, $g_2(x) = x$, $g_3(y) = e^{y}$, and $g_4(x) = x e^{x}$.
Using the DOSSOR of Section 5.1, we took $h = 1/20$ and $(a,b) = (1.7, 1.9)$; through 82 steps, it converged, satisfying the convergence criterion $\varepsilon = 10^{-5}$. An ME $= 2.72\times10^{-5}$ and an RMSE $= 1.4\times10^{-5}$ were obtained.
Using the DOAOR, we took $h = 1/20$, $(a,b) = (1.6, 1.9)$, and $(c,d) = (1.7, 1.9)$; through 64 steps, it converged, as shown in Figure 6a, satisfying the convergence criterion $\varepsilon = 10^{-5}$. An ME $= 2.5\times10^{-5}$ and an RMSE $= 1.13\times10^{-5}$ were obtained. The optimal values of $w$ and $\sigma$ are shown in Figure 6b,c.
Using the DOSAOR, we took $h = 1/20$, $(a,b) = (1.5, 1.6)$, and $(c,d) = (1.7, 1.9)$; through 56 steps, it converged, as shown in Figure 7a, satisfying the convergence criterion $\varepsilon = 10^{-5}$. An ME $= 1.91\times10^{-5}$ and an RMSE $= 6.85\times10^{-6}$ were obtained. The optimal values of $w$ and $\sigma$ are shown in Figure 7b,c.

5.3. Other Linear Systems

The proposed iterative methods can be used to solve general linear systems. We cannot exhaust all linear systems; however, we consider the Hilbert problem:
$\sum_{j=1}^{n} \dfrac{x_j}{i+j-1} = b_i, \quad i = 1, \ldots, n$. (40)
We suppose that $x_j = 1$, $j = 1, \ldots, n$, is the exact solution, so that $b_i$ can be computed exactly. The Hilbert problem is a highly ill-conditioned system of linear equations. We fix $n = 20$. Using the DOSSOR, we took $a = 0.1$ and $b = 0.5$; through NI $= 281$ steps, it converged, as shown in Figure 8a, satisfying the convergence criterion $\varepsilon = 10^{-5}$. An ME $= 9.63\times10^{-3}$ was obtained by the DOSSOR. As shown in Figure 8b, $w$ quickly tended to 0.1032522475, which is the optimal value of $w$ for the Hilbert problem with $n = 20$. By adopting the ad hoc value $w = 0.7$ in the SSOR, the NI was raised to 463 under the same convergence criterion $\varepsilon = 10^{-5}$; worse still, the ME increased to $1.53\times10^{-1}$. Therefore, the presented DOSSOR outperformed the SSOR with an ad hoc value of the parameter $w$.
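Setting up this test case is straightforward (a sketch; scipy's hilbert helper is used for the coefficient matrix):

```python
import numpy as np
from scipy.linalg import hilbert

n = 20
A = hilbert(n)            # A[i, j] = 1 / (i + j + 1) with 0-based indices
x_exact = np.ones(n)
b = A @ x_exact           # right-hand side computed from the exact solution
print(np.linalg.cond(A))  # enormous condition number: severely ill-conditioned
```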
We consider a least-squares linear system [12]:
$\begin{bmatrix} 2 & 3 & 5 \\ 4 & 5 & -3 \\ 7 & 6 & 9 \\ 6 & 8 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 12 \\ 4 \\ 12 \end{bmatrix}$. (41)
The exact solution is $x_1 = x_2 = 1$ and $x_3 = -1$. We transform it to a normal system:
$\begin{bmatrix} 105 & 116 & 73 \\ 116 & 134 & 70 \\ 73 & 70 & 119 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 148 \\ 180 \\ 24 \end{bmatrix}$. (42)
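The normal system (42) can be checked directly (a quick numpy verification of the data above; note the $-3$ entry in Equation (41)):

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [4.0, 5.0, -3.0],
              [7.0, 6.0, 9.0],
              [6.0, 8.0, 2.0]])
b = np.array([0.0, 12.0, 4.0, 12.0])

print(A.T @ A)  # [[105 116  73] [116 134  70] [ 73  70 119]]
print(A.T @ b)  # [148 180  24]
print(np.linalg.lstsq(A, b, rcond=None)[0])  # [ 1.  1. -1.]
```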
Using the DOSSOR, we took $a = 0.45$ and $b = 1$; through NI $= 12$ steps, it converged, as shown in Figure 9a, satisfying the convergence criterion $\varepsilon = 10^{-2}$. An ME $= 1.2\times10^{-3}$ was obtained by the DOSSOR. As shown in Figure 9b, $w$ tended to 0.4527637493, which is close to the optimal value $w = 0.46898994354$ obtained for the original least-squares problem in [12].

6. Conclusions

The accuracy and the convergence speed are the two main factors used to assess the performance of an iterative scheme. In this paper, we performed dynamical realizations in the DOSSOR, DOAOR, and DOSAOR to seek proper parametric values that improve the convergence speed and accuracy with respect to the original SSOR, AOR, and SAOR methods in the solution of linear systems. Instead of fixed values of the parameters, the concept of dynamical optimal values of the parameters is novel, and using the maximal projection technique to determine the optimal values of the parameters is highly original. A model problem of the linear Poisson equation in a unit square was formulated with a matrix–vector form and a Lyapunov form of the finite difference equations. In summary, the key outcomes are as follows:
  • According to the maximal projection technique between two vectors, which is equivalent to the minimization in Equation (6), different merit functions were derived for the DOSSOR, DOAOR, and DOSAOR.
  • The minimization was easily performed in a preferred range by the golden section search algorithms with loose convergence criteria.
  • In the matrix–vector form equations, the accuracy and convergence speed of the DOAOR were better than those of the DOSSOR; however, the two-parameter DOAOR required more time to seek the optimal values of the parameters than the single-parameter DOSSOR.
  • In the matrix–vector form equations, the convergence speed of the DOSAOR was better than that of the DOAOR, with a competitive accuracy.
  • In the matrix–vector form equations, the accuracy obtained by the DOSSOR, DOAOR, and DOSAOR with acceptable iteration numbers was very good, up to the orders of $10^{-15}$ to $10^{-17}$.
  • When the DOSSOR, DOAOR, and DOSAOR were applied to the Lyapunov form of the finite difference equations, the convergence speeds under the same convergence criterion $10^{-13}$ increased significantly; however, about one or two orders of accuracy were lost.
  • In the Lyapunov form equations, the accuracy of the DOSAOR could be enhanced by using a stringent convergence criterion of $10^{-15}$, which yielded a highly accurate solution on the order of $10^{-16}$ with a smaller number of iterations.
  • Instead of the theoretical fixed optimal values of the parameters, which are in general not available, the new methods provide an alternative and feasible choice of the optimal values of the parameters at each iteration.
  • No matter which form of the linear equations was used, the proposed DOSSOR, DOAOR, and DOSAOR could provide an almost exact solution of the Poisson equation within a reasonable number of iterations, smaller than 300 for 900 discretized linear equations. Even for larger mesh sizes, the accuracy was also very good.
  • The presented methodologies of the DOSSOR, DOAOR, and DOSAOR are quite simple and may, in the near future, be extended to obtain the optimal values in other versions of the SSOR/SAOR methods, such as generalized or modified ones, and for other types of linear systems.

Author Contributions

Conceptualization, E.R.E.-Z. and C.-W.C.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L. and C.-W.C.; Resources, C.-W.C.; Data curation, C.-S.L.; Writing—original draft, C.-S.L.; Writing—review & editing, C.-W.C.; Visualization, E.R.E.-Z. and C.-W.C.; Supervision, C.-W.C.; Project administration, C.-W.C.; Funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National United University [grant number 111I1206-8] and the National Science and Technology Council [grant number NSTC 112-2221-E-239-022].

Acknowledgments

C.-S.L. and E.R.E.-Z. extend their appreciation to the Deputyship for Research and Innovation and the Ministry of Education in Saudi Arabia, where this study was supported via funding from the Prince Sattam bin Abdulaziz University project number (PSAU/2023/R/1444).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this appendix, we give the one-dimensional golden section search algorithm (GSSA) for finding the minimum of a given function $f(x)$, $x \in [A, B]$, under a convergence criterion $\varepsilon_1$. Starting from the initializations below, the steps from the convergence test onward are repeated:

R = (√5 − 1)/2
C = A + (1 − R)(B − A);  D = A + R(B − A)
FC = f(C);  FD = f(D)
(test) If |B − A| < ε₁, then
    If FC < FD, then x_min = C, f_min = FC; else x_min = D, f_min = FD
    Stop
Endif
If FC < FD, then
    B = D;  D = C;  FD = FC
    C = A + (1 − R)(B − A);  FC = f(C)
Elseif FC > FD, then
    A = C;  C = D;  FC = FD
    D = A + R(B − A);  FD = f(D)
Else
    A = C;  B = D
    C = A + (1 − R)(B − A);  FC = f(C)
    D = A + R(B − A);  FD = f(D)
Endif
Go to (test)
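For convenience, an equivalent Python rendering of the above algorithm is sketched here (the loose default tolerance mirrors the usage in Section 3; the FC ≥ FD branches are merged, which is an equally valid update):

```python
import math

def golden_section(f, a, b, eps=1e-1):
    """Golden section search for a minimizer of f on [a, b] (Appendix A)."""
    r = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = a + (1 - r) * (b - a), a + r * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) >= eps:
        if fc < fd:           # keep [a, d]
            b, d, fd = d, c, fc
            c = a + (1 - r) * (b - a)
            fc = f(c)
        else:                 # keep [c, b]
            a, c, fc = c, d, fd
            d = a + r * (b - a)
            fd = f(d)
    return c if fc < fd else d
```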

Appendix B

In this appendix, we give the two-dimensional golden section search algorithm (GSSA) for finding the minimum of a given function $f(x,y)$, $(x,y) \in [A,B] \times [C,D]$, with a given stopping criterion $\varepsilon_2$. Starting from the initializations below, the steps from the convergence test onward are repeated:

R = (√5 − 1)/2
X1 = A + (1 − R)(B − A);  X2 = A + R(B − A)
Y1 = C + (1 − R)(D − C);  Y2 = C + R(D − C)
F11 = f(X1, Y1);  F12 = f(X1, Y2);  F21 = f(X2, Y1);  F22 = f(X2, Y2)
FMIN = min(F11, F12, F21, F22)
(test) If √((B − A)² + (D − C)²) < ε₂, then
    If FMIN = F11, then f_min = F11, x_min = X1, y_min = Y1
    If FMIN = F12, then f_min = F12, x_min = X1, y_min = Y2
    If FMIN = F21, then f_min = F21, x_min = X2, y_min = Y1
    If FMIN = F22, then f_min = F22, x_min = X2, y_min = Y2
    Stop
Endif
If FMIN = F11, then B = X2, D = Y2
If FMIN = F12, then B = X2, C = Y1
If FMIN = F22, then A = X1, C = Y1
If FMIN = F21, then A = X1, D = Y2
X1 = A + (1 − R)(B − A);  X2 = A + R(B − A)
Y1 = C + (1 − R)(D − C);  Y2 = C + R(D − C)
F11 = f(X1, Y1);  F12 = f(X1, Y2);  F21 = f(X2, Y1);  F22 = f(X2, Y2)
FMIN = min(F11, F12, F21, F22)
Go to (test)

References

  1. Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer: New York, NY, USA, 2000.
  2. Hadjidimos, A. Accelerated overrelaxation method. Math. Comput. 1978, 32, 149–157.
  3. Hadjidimos, A.; Yeyios, A. Symmetric accelerated overrelaxation (SAOR) method. Math. Comput. Simul. 1982, 24, 72–76.
  4. Hadjidimos, A. Successive overrelaxation (SOR) and related methods. J. Comput. Appl. Math. 2000, 123, 177–199.
  5. Saad, Y. Krylov subspace methods for solving large unsymmetric linear systems. Math. Comput. 1981, 37, 105–126.
  6. Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869.
  7. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003.
  8. Jbilou, K.; Messaoudi, A.; Sadok, H. Global FOM and GMRES algorithms for matrix equations. Appl. Numer. Math. 1999, 31, 49–63.
  9. Liu, C.-S. A maximal projection solution of ill-posed linear system in a column subspace, better than the least squares solution. Comput. Math. Appl. 2014, 67, 1998–2014.
  10. Liu, C.-S. A doubly optimized solution of linear equations system expressed in an affine Krylov subspace. J. Comput. Appl. Math. 2014, 260, 375–394.
  11. Miyatake, Y.; Sogabe, T.; Zhang, S.L. Adaptive SOR methods based on the Wolfe conditions. Numer. Algorithms 2020, 84, 117–132.
  12. Darvishi, M.T.; Khosro-Aghdam, R. Determination of the optimal value of relaxation parameter in symmetric SOR method for rectangular coefficient matrices. Appl. Math. Comput. 2006, 181, 1018–1025.
  13. Tian, H. Accelerated overrelaxation methods for rank deficient linear systems. Appl. Math. Comput. 2003, 140, 485–499.
  14. Yeyios, A. A necessary condition for the convergence of the accelerated overrelaxation (AOR) method. J. Comput. Appl. Math. 1989, 26, 371–373.
  15. Liu, C.-S. A feasible approach to determine the optimal relaxation parameters in each iteration for the SOR method. J. Math. Res. 2021, 13, 1–9.
  16. Darvishi, M.T.; Hessari, P. Symmetric SOR method for augmented systems. Appl. Math. Comput. 2006, 183, 409–415.
  17. Zhang, G.F.; Lu, Q.H. On generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2008, 219, 51–58.
  18. Darvishi, M.T.; Hessari, P. A modified symmetric successive overrelaxation method for augmented systems. Comput. Math. Appl. 2011, 61, 3128–3135.
  19. Chao, Z.; Zhang, N.; Lu, Y. Optimal parameters of the generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2014, 266, 52–60.
  20. Li, C.L.; Ma, C.F. An accelerated symmetric SOR-like method for augmented systems. Appl. Math. Comput. 2019, 341, 408–417.
  21. Li, Y.; Dai, P. Generalized AOR methods for linear complementarity problem. Appl. Math. Comput. 2007, 188, 7–18.
  22. Huang, Z.G.; Xu, Z.; Lu, Q.; Cui, J.J. Some new preconditioned generalized AOR methods for generalized least-squares problems. Appl. Math. Comput. 2015, 269, 87–104.
  23. Zhang, C.H.; Wang, X.; Tang, X.B. Generalized AOR method for solving a class of generalized saddle point problems. J. Comput. Appl. Math. 2019, 350, 69–79.
  24. Darvishi, M.T.; Khosro-Aghdam, R. Symmetric successive overrelaxation methods for rank deficient linear systems. Appl. Math. Comput. 2006, 173, 404–420.
  25. Darvishi, M.T.; Khani, F.; Hamedi-Nezhad, S.; Zheng, B. Symmetric block-SOR methods for rank-deficient least squares problems. J. Comput. Appl. Math. 2008, 215, 14–27.
  26. Golub, G.H.; Van Loan, C.F. Matrix Computations; The Johns Hopkins University Press: Baltimore, MD, USA, 1996.
  27. Liu, C.-S. A manifold-based exponentially convergent algorithm for solving non-linear partial differential equations. J. Mar. Sci. Technol. 2012, 20, 441–449.
Figure 1. For a testing example, (a) compares the convergence residuals obtained by the SSOR with a certain parameter and the DOSSOR, (b) shows the minimal values of merit function, and (c) shows the optimal values of relaxation parameter.
Figure 2. For a testing example, (a) compares the convergence residuals obtained by the AOR with certain parameters and the DOAOR; (b,c) show the optimal values of relaxation parameters.
Figure 3. For a testing example, (a) compares the convergence residuals obtained by the DOSAOR; (b,c) show the optimal values of relaxation parameters.
Figure 4. For the Lyapunov equation form of the testing example, (a) shows the convergence residuals obtained by the DOSSOR, (b) shows the minimal values of merit function, and (c) shows the optimal values of relaxation parameter.
Figure 5. For the Lyapunov equation form of the testing example, (a) shows the convergence residuals obtained by the DOAOR, and (b,c) show the optimal values of relaxation parameters.
Figure 6. For the Lyapunov equation form of a nonhomogeneous boundary value problem of the Poisson equation, (a) shows the convergence residuals obtained by the DOAOR, and (b,c) show the optimal values of relaxation parameters.
Figure 7. For the Lyapunov equation form of the testing example, (a) shows the convergence residuals obtained by the DOSAOR, and (b,c) show the optimal values of relaxation parameters.
Figure 8. For a Hilbert problem with n = 20, (a) shows the convergence residuals obtained by the DOSSOR, and (b) shows the optimal values of relaxation parameter.
Figure 9. For a least-square problem, (a) shows the convergence residuals obtained by the DOSSOR, and (b) shows the optimal values of relaxation parameter.
Table 1. Comparing NI, ME, and RMSE for different ε using the DOSSOR.

ε      10^-6        10^-7        10^-8        10^-9        10^-10       10^-11
NI     122          141          159          177          195          213
ME     1.79×10^-9   1.51×10^-10  1.51×10^-11  1.54×10^-12  1.55×10^-13  1.54×10^-14
RMSE   8.19×10^-10  6.78×10^-11  6.64×10^-12  6.68×10^-13  6.65×10^-14  6.55×10^-15
Table 2. Comparing NI, ME, and RMSE for different ε using the DOAOR.

ε      10^-8        10^-9        10^-10       10^-11       10^-12
NI     176          185          188          214          245
ME     4.89×10^-13  6.64×10^-13  8.84×10^-14  2.43×10^-16  1.48×10^-16
RMSE   6.67×10^-14  4.63×10^-14  9.46×10^-15  6.21×10^-17  5.62×10^-17
Table 3. Comparing NI, ME, and RMSE for different mesh sizes h using the DOAOR.

h      1/5          1/10         1/15         1/20
NI     202          224          235          >300
ME     5.46×10^-15  2.59×10^-16  4.16×10^-17  1.67×10^-16
RMSE   1.74×10^-15  4.19×10^-17  1.27×10^-17  7.45×10^-17
Table 4. Comparing NI, ME, and RMSE for different mesh sizes h using the DOSAOR.

h      1/5          1/10         1/15         1/20
NI     53           74           125          >300
ME     4.65×10^-16  5.62×10^-16  4.03×10^-16  1.87×10^-16
RMSE   2.72×10^-16  2.72×10^-16  2.15×10^-16  8.87×10^-17
Table 5. Comparing NI, ME, and RMSE for different mesh sizes h using the DOAOR.

h      1/10         1/15         1/20         1/25         1/30
NI     160          161          162          164          179
ME     1.94×10^-14  4.07×10^-14  2.42×10^-14  4.5×10^-14   1.8×10^-14
RMSE   9.06×10^-15  1.55×10^-14  1.12×10^-14  1.03×10^-14  1.66×10^-15
Table 6. Comparing NI, ME, and RMSE for different mesh sizes h using the DOSAOR with ε = 10^-13.

h      1/5          1/10         1/15         1/20
NI     64           78           104          148
ME     1.54×10^-13  4.96×10^-14  1.27×10^-13  2.16×10^-13
RMSE   7.4×10^-14   2.03×10^-14  6.27×10^-14  1.11×10^-13
Table 7. Comparing NI, ME, and RMSE for different mesh sizes h using the DOSAOR with ε = 10^-15.

h      1/5          1/10         1/15         1/20
NI     83           91           120          172
ME     1.18×10^-16  4.3×10^-16   1.35×10^-15  1.83×10^-15
RMSE   5.53×10^-17  1.76×10^-16  6.68×10^-16  9.38×10^-16