Article

Enhance Stability of Successive Over-Relaxation Method and Orthogonalized Symmetry Successive Over-Relaxation in a Larger Range of Relaxation Parameter

Chein-Shan Liu 1 and Chih-Wen Chang 2,*
1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mechanical Engineering, National United University, Miaoli 360302, Taiwan
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(7), 907; https://doi.org/10.3390/sym16070907
Submission received: 8 June 2024 / Revised: 6 July 2024 / Accepted: 11 July 2024 / Published: 16 July 2024
(This article belongs to the Special Issue Symmetry: Feature Papers 2024)

Abstract

The successive over-relaxation (SOR) method and its symmetric extension, the symmetric successive over-relaxation (SSOR) method, inherit the advantages of both direct and iterative methods; they are simple iterative algorithms for solving linear systems. We derive equivalent forms of the SOR and SSOR methods in terms of the residual vector and the descent vector. A new orthogonalization technique is then developed to stabilize the SOR and SSOR methods. For the orthogonalized SOR method, the range of the relaxation parameter can be extended, even to negative values. Based on a maximal projection technique, a sub-optimal value of the relaxation parameter for the orthogonalized SOR method is derived to enhance its accuracy; the golden section search algorithm is used to find the minimal point of a derived merit function. The orthogonalized SOR and orthogonalized SSOR methods are absolutely convergent. According to the new form of the SOR method, a new approach to the accelerated over-relaxation method is obtained by multiplying the descent vector of the SOR method by a stabilization factor. Numerical examples confirm that the orthogonalized SOR and orthogonalized SSOR methods outperform the SOR and SSOR methods.

1. Introduction

In this paper, we improve the successive over-relaxation (SOR) method for solving a system of linear equations:
$$\mathbf{A}\mathbf{x} = \mathbf{b}, \quad \mathbf{b}, \mathbf{x} \in \mathbb{R}^{n}, \quad \mathbf{A} \in \mathbb{R}^{n \times n}.$$
The successive over-relaxation (SOR) method was developed in 1950 by Young and Frankel; it was modified from the Gauss–Seidel method and inherited the advantages of the Gaussian elimination method and the iterative method. Let $\mathbf{A} = [a_{ij}]$ and $a_{ii} \neq 0$, $i = 1, \ldots, n$. The SOR method can be written as [1,2]
$$x_i^{(k+1)} = \left[(1-w)\,x_i^{(k)} - \frac{w}{a_{ii}}\sum_{j=i+1}^{n} a_{ij}\,x_j^{(k)}\right] + \frac{w}{a_{ii}}\left[b_i - \sum_{j=1}^{i-1} a_{ij}\,x_j^{(k+1)}\right],$$
where $0 < w < 2$ is a relaxation parameter. The term in the first bracket is the iteration part, while the term in the second bracket is the elimination part.
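To make the componentwise update in Equation (2) concrete, the following NumPy sketch performs plain SOR sweeps; the function name sor_componentwise, the test system, and the stopping tolerance are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def sor_componentwise(A, b, w=1.0, x0=None, tol=1e-10, max_iter=1000):
    """Plain SOR sweeps as in Equation (2): each x_i is updated in place,
    using the already-updated entries x_j (j < i) and the old entries x_j (j > i)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        for i in range(n):
            s_new = A[i, :i] @ x[:i]      # elimination part: newest values, j < i
            s_old = A[i, i+1:] @ x[i+1:]  # iteration part: previous values, j > i
            x[i] = (1.0 - w) * x[i] + w * (b[i] - s_new - s_old) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            return x, k + 1
    return x, max_iter

# Illustrative test: a small symmetric positive definite system.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([3.0, 2.0, 3.0])
x, steps = sor_componentwise(A, b, w=1.1)
print(x, steps)
```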
The progress of SOR up to 2003 can be seen in [3]. Owing to its efficiency and simplicity in numerical implementation, as shown in Equation (2), the SOR method is an important solver for Equation (1). Among many methods which used the same philosophy to introduce more parameters in the iteration formula, the accelerated over-relaxation (AOR) method is a two-parameter generalization of SOR [4]. Albrecht and Klein [5] considered an extrapolated iterative technique by extending SOR to an extrapolated SOR (ESOR) method.
It is known that the matrix A can be uniquely decomposed into
$$\mathbf{A} = \mathbf{D} - \mathbf{U} - \mathbf{L},$$
where $\mathbf{D}$ is a nonsingular diagonal matrix, $\mathbf{U}$ is a strictly upper triangular matrix, and $\mathbf{L}$ is a strictly lower triangular matrix.
The SOR iterative method in matrix–vector form is expressed as follows [6]:
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{x}_{k+1} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}_k + w\mathbf{U}\mathbf{x}_k.$$
In SOR, the value of the relaxation parameter plays a key role in controlling the stability and convergence. The optimal relaxation parameter has been chosen by different methods [7,8,9,10]. When $\mathbf{A}$ is a symmetric positive definite matrix,
$$w_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \rho^2(\mathbf{I}_n - \mathbf{D}^{-1}\mathbf{A})}}$$
is the optimal value for the SOR method [11], where $\rho$ denotes the spectral radius of $\mathbf{I}_n - \mathbf{D}^{-1}\mathbf{A}$.
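For a quick check of Equation (5), the helper below computes $w_{\mathrm{opt}}$ from the spectral radius of the Jacobi iteration matrix $\mathbf{I}_n - \mathbf{D}^{-1}\mathbf{A}$; it is a sketch under the stated assumption that this spectral radius is below one, and the function name is ours, not the paper's.

```python
import numpy as np

def sor_optimal_w(A):
    """Equation (5): w_opt = 2 / (1 + sqrt(1 - rho^2)), where rho is the spectral
    radius of the Jacobi iteration matrix I - D^{-1} A (assumed to satisfy rho < 1)."""
    J = np.eye(A.shape[0]) - np.diag(1.0 / np.diag(A)) @ A
    rho = max(abs(np.linalg.eigvals(J)))
    return 2.0 / (1.0 + np.sqrt(1.0 - rho**2))

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
print(sor_optimal_w(A))   # approximately 1.033 for this small test matrix
```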
The analysis of the convergence behavior of SOR, SSOR, AOR, and SAOR methods can be seen in [4,12,13,14]. Testing on the Poisson linear equations, Liu et al. [15] proposed the dynamical optimal values of parameters used in the methods of SSOR, AOR, and SAOR. The different extensions of SOR and AOR for different kinds of linear systems were reported in [16,17,18,19,20,21,22,23,24,25,26].
The technique of refinement was considered in [27,28], where a composite refinement approach was introduced in which two different iterative techniques are applied consecutively. Basically, the refinement is a two-step iterative algorithm; for example, RJSOR uses the Jacobi method in the first step and the SOR method in the second step, while RSORJ uses the SOR method in the first step and the Jacobi method in the second step. The gain in convergence speed of the refinement method outweighs the increase in computational cost incurred in the first step [28].

2. New Forms of SOR and AOR; Sub-Optimal Values of w

The accelerated over-relaxation (AOR) method was proposed by Hadjidimos [4]:
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{x}_{k+1} = \sigma\mathbf{b} + (1-\sigma)\mathbf{D}\mathbf{x}_k + \sigma\mathbf{U}\mathbf{x}_k + (\sigma - w)\mathbf{L}\mathbf{x}_k,$$
which can be viewed as an extrapolation of SOR with an extrapolation parameter $\eta = \sigma/w$ [5,29,30].

2.1. A New Form of SOR

In terms of an initial guess $\mathbf{x}_0$ and the initial residual $\mathbf{r}_0 = \mathbf{b} - \mathbf{A}\mathbf{x}_0$, Equation (1) is equivalent to
$$\mathbf{A}\mathbf{u} = \mathbf{r}_0,$$
where
$$\mathbf{u} = \mathbf{x} - \mathbf{x}_0$$
is the descent vector.
To minimize the following merit function:
$$\min_{\mathbf{u}} f_0 = \frac{\|\mathbf{r}_0\|^2\,\|\mathbf{A}\mathbf{u}\|^2}{[\mathbf{r}_0\cdot(\mathbf{A}\mathbf{u})]^2},$$
we can find the best descent vector $\mathbf{u}$ by
$$\mathbf{u} = \mathbf{A}^{-1}\mathbf{r}_0,$$
which leads to $f_0 = 1$; at this moment, $\mathbf{u}$ exactly satisfies Equation (7), and meanwhile $\mathbf{x} = \mathbf{u} + \mathbf{x}_0$ exactly satisfies Equation (1) as the exact solution. Unfortunately, $\mathbf{A}^{-1}$ in Equation (10) is hard to compute. If $\mathbf{A}^{-1}$ were available, we would already have the exact solution of Equation (1), given by $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$.
Rather than using Equation (10), the SOR method provides a simple forward substitution to obtain the descent vector $\mathbf{u}$ at each iteration step, as given below. We begin with Equation (4) and derive the following result.
Theorem 1.
A new iterative form of the SOR method is
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{u}_k,$$
where $\mathbf{u}_k$ is the $k$th step descent vector, which is obtained from a forward solution of
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = w\,\mathbf{r}_k,$$
in which
$$\mathbf{r}_k = \mathbf{b} - \mathbf{A}\mathbf{x}_k$$
is the $k$th step residual vector.
Proof. 
We rewrite Equation (4) as
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + w\mathbf{b} + [(1-w)\mathbf{D} + w\mathbf{U} - (\mathbf{D} - w\mathbf{L})]\mathbf{x}_k,$$
which can be rearranged to
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + w\mathbf{b} + w(\mathbf{U} + \mathbf{L} - \mathbf{D})\mathbf{x}_k.$$
In view of Equations (3) and (13), we have
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + w\mathbf{b} - w\mathbf{A}\mathbf{x}_k = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + w\mathbf{r}_k.$$
Moving $(\mathbf{D} - w\mathbf{L})\mathbf{x}_k$ to the left-hand side yields
$$(\mathbf{D} - w\mathbf{L})(\mathbf{x}_{k+1} - \mathbf{x}_k) = w\,\mathbf{r}_k.$$
By Equation (11), we can derive Equation (12).    □
Corollary 1.
The SOR method in Equations (11)–(13) diverges if
$$\eta_k < \frac{1}{2},$$
where
$$\eta_k = \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2}.$$
Proof. 
Multiplying Equation (11) by $\mathbf{A}$ and using Equation (13) yields
$$\mathbf{r}_{k+1} = \mathbf{r}_k - \mathbf{A}\mathbf{u}_k;$$
the squared norm is
$$\|\mathbf{r}_{k+1}\|^2 = \|\mathbf{r}_k\|^2 - 2\,\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k + \|\mathbf{A}\mathbf{u}_k\|^2.$$
In terms of $\eta_k$ in Equation (19), it can be written as
$$\|\mathbf{r}_{k+1}\|^2 = \|\mathbf{r}_k\|^2 + (1 - 2\eta_k)\,\|\mathbf{A}\mathbf{u}_k\|^2.$$
If $\eta_k < 1/2$,
$$(1 - 2\eta_k)\,\|\mathbf{A}\mathbf{u}_k\|^2 > 0.$$
Hence, $\|\mathbf{r}_{k+1}\|^2 > \|\mathbf{r}_k\|^2$; as a consequence, SOR is divergent.    □

2.2. New Approaches of AOR and ESOR

Motivated by the new form of SOR in Equations (11) and (12), we can prove the following result.
Theorem 2
(AOR). If a stabilization factor $\eta$ is inserted before $\mathbf{u}_k$ in Equation (11) of the new form of SOR, then the following iterative method:
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \eta\,\mathbf{u}_k,$$
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = w\,\mathbf{r}_k,$$
is equivalent to the AOR method in Equation (6), where $\sigma = w\eta$.
Proof. 
Applying $\mathbf{D} - w\mathbf{L}$ to Equation (23) yields
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + \eta(\mathbf{D} - w\mathbf{L})\mathbf{u}_k;$$
in view of Equation (24), it becomes
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + \sigma\,\mathbf{r}_k,$$
where $\sigma = w\eta$.
Inserting $\mathbf{r}_k = \mathbf{b} - \mathbf{A}\mathbf{x}_k$ into Equation (26), we have
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + \sigma(\mathbf{b} - \mathbf{A}\mathbf{x}_k),$$
which, by using Equation (3), is changed to
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + \sigma[\mathbf{b} - (\mathbf{D} - \mathbf{U} - \mathbf{L})\mathbf{x}_k] = \sigma\mathbf{b} + (1-\sigma)\mathbf{D}\mathbf{x}_k + \sigma\mathbf{U}\mathbf{x}_k + (\sigma - w)\mathbf{L}\mathbf{x}_k.$$
This is just the AOR method in Equation (6).    □
Notice that, in Theorem 2, if we take $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{u}_k$ rather than Equation (23), then from Equation (26) we can derive
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = \sigma\,\mathbf{r}_k.$$
Therefore, the AOR method can also be presented as
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{u}_k,$$
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = \sigma\,\mathbf{r}_k.$$
Basically, the AOR method does not modify the orientation of $\mathbf{u}_k$ obtained from SOR; it only modifies the length of the descent vector from $\mathbf{u}_k$ to $\eta\,\mathbf{u}_k$.
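Seen this way, one AOR step is simply one SOR descent step with the descent vector rescaled by a fixed factor. The following small sketch of Equations (23) and (24) is illustrative; the helper name aor_step is ours, not the authors'.

```python
import numpy as np

def aor_step(A, b, x, w, eta):
    """One AOR step in the form of Theorem 2: solve (D - w L) u = w r, then x <- x + eta * u.
    With eta = 1 this reduces to one SOR step in the form of Theorem 1."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)            # A = D - U - L
    r = b - A @ x
    u = np.linalg.solve(D - w * L, w * r)
    return x + eta * u
```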
The extrapolated SOR (ESOR) method was explored in [5]; replacing $\mathbf{x}_{k+1}$ in Equation (4) by $\beta\mathbf{x}_{k+1} + (1-\beta)\mathbf{x}_k$ leads to
$$(\mathbf{D} - w\mathbf{L})[\beta\mathbf{x}_{k+1} + (1-\beta)\mathbf{x}_k] = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}_k + w\mathbf{U}\mathbf{x}_k,$$
where $\beta$ is an extrapolation parameter.
Theorem 3
(ESOR). The ESOR method in Equation (31) can be written as
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \frac{1}{\beta}\,\mathbf{u}_k,$$
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = w\,\mathbf{r}_k,$$
which is equivalent to AOR in Theorem 2 by setting $\beta = 1/\eta = w/\sigma$.
Proof. 
It follows from Equation (31) that
$$\beta(\mathbf{D} - w\mathbf{L})(\mathbf{x}_{k+1} - \mathbf{x}_k) = w\mathbf{b} - w\mathbf{D}\mathbf{x}_k + w\mathbf{U}\mathbf{x}_k + w\mathbf{L}\mathbf{x}_k.$$
Using Equations (3) and (32), we have
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = w(\mathbf{b} - \mathbf{A}\mathbf{x}_k),$$
which yields Equation (33) after inserting $\mathbf{b} - \mathbf{A}\mathbf{x}_k = \mathbf{r}_k$.    □

2.3. A Sub-Optimal Value of w

One needs to compute the spectral radius of $\mathbf{I}_n - \mathbf{D}^{-1}\mathbf{A}$, as shown in Equation (5), for a better convergence of the SOR method [31,32]. In this section, we propose a simple method to choose a proper value of $w$ by a minimization technique.
In Equation (21), we take $k = 0$ to obtain
$$\|\mathbf{r}\|^2 = \|\mathbf{r}_0\|^2 - \left[2\,\mathbf{r}_0^{\mathrm{T}}\mathbf{A}\mathbf{u} - \|\mathbf{A}\mathbf{u}\|^2\right],$$
where $\mathbf{u}$ is solved from Equation (12) with $\mathbf{r}_0$ replacing $\mathbf{r}_k$. Given $\mathbf{r}_0$, we search for $w$ such that the length of the residual vector is decreased maximally at the next step:
$$\max_{w\in(0,2)} \left[2\,\mathbf{r}_0^{\mathrm{T}}\mathbf{A}\mathbf{u} - \|\mathbf{A}\mathbf{u}\|^2\right].$$
Therefore, we propose a new minimization technique
$$\min_{w\in(0,2)} \left\{\|\mathbf{A}\mathbf{u}\|^2 - 2\,\mathbf{r}_0^{\mathrm{T}}\mathbf{A}\mathbf{u}\right\}$$
to obtain a sub-optimal value of $w$. We can apply the 1D golden section search algorithm (GSSA) on the range $w \in (0, 2)$ to determine the minimal point of Equation (37). If no theoretical optimal value of $w$ is available for the linear system at hand, GSSA with a loose convergence criterion, say 0.1, is a rather simple way to obtain a suitable value of $w$; it is executed once before the SOR iteration process.
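A minimal sketch of the golden section search applied to the merit function of Equation (37) is given below; the function names gss and merit_sor, the test system, and the loose tolerance are illustrative assumptions, not the authors' code.

```python
import numpy as np

def merit_sor(w, A, r0):
    """Equation (37): ||A u||^2 - 2 r0^T A u, with u solved from (D - w L) u = w r0."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)               # A = D - U - L, so the strict lower part of A is -L
    u = np.linalg.solve(D - w * L, w * r0)
    Au = A @ u
    return Au @ Au - 2.0 * (r0 @ Au)

def gss(f, a, b, tol=0.1):
    """1D golden section search for the minimizer of f on [a, b]."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    while abs(b - a) > tol:
        c = b - g * (b - a)
        d = a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# Usage sketch: a sub-optimal w for a small test system, computed once before iterating.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([3.0, 2.0, 3.0])
r0 = b - A @ np.zeros(3)
print(gss(lambda w: merit_sor(w, A, r0), 0.01, 1.99))
```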

3. An Orthogonalized SOR

The factor $\eta_k$ in Equation (19) is very important; it controls the convergence property of the SOR iterative algorithm. For the convergence of SOR, we must guarantee that $\eta_k > 1/2$. We will give an example to show that $\eta_k$ becomes negative for SOR if $w$ is not properly chosen. In this situation, the residual grows fast, as shown in Equation (22), and meanwhile, SOR will blow up.
It is interesting that when $\eta_k = 1$, we have
$$(\mathbf{r}_k - \mathbf{A}\mathbf{u}_k)\cdot(\mathbf{A}\mathbf{u}_k) = 0;$$
hence, by Equation (20),
$$\mathbf{r}_{k+1} \perp \mathbf{A}\mathbf{u}_k.$$
This property is well known in the literature as the orthogonality of the residual vector for an iterative algorithm. The orthogonality guarantees that the residual length decreases step by step. Next, we will improve the SOR method such that the new SOR automatically satisfies the orthogonality property in Equation (39); the newly resulting SOR method is thus named an orthogonalized SOR (OSOR) method.

3.1. Absolute Convergence of OSOR

Instead of a constant value of η in Equation (23) for the AOR method, in this section we propose a new methodology by adaptively adjusting the value of η at each iteration step; it guarantees that the newly developed iterative algorithm based on the SOR method is absolutely convergent.
Lemma 1.
The iterative scheme in Equation (11) for the SOR method can be improved by
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2}\,\mathbf{u}_k.$$
Proof. 
Let
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \eta\,\mathbf{u}_k.$$
Then, it follows that
$$\mathbf{r}_{k+1} = \mathbf{r}_k - \eta\,\mathbf{A}\mathbf{u}_k.$$
Taking the squared norms of both sides yields
$$\|\mathbf{r}_{k+1}\|^2 = \|\mathbf{r}_k\|^2 - \left(2\eta\,\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k - \eta^2\|\mathbf{A}\mathbf{u}_k\|^2\right).$$
We will maximally reduce the residual length by considering a maximization problem, given by
$$\max_{\eta}\; h = 2\eta\,\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k - \eta^2\|\mathbf{A}\mathbf{u}_k\|^2,$$
which, using the maximality condition, leads to Equation (19). Inserting this $\eta_k$ into Equation (41), we prove Equation (40).    □
Lemma 2.
In Equation (40) the following orthogonal property holds:
$$\mathbf{r}_{k+1}\cdot(\mathbf{A}\mathbf{u}_k) = 0.$$
Proof. 
Applying $\mathbf{A}$ to Equation (40) generates
$$\mathbf{A}\mathbf{x}_{k+1} = \mathbf{A}\mathbf{x}_k + \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2}\,\mathbf{A}\mathbf{u}_k.$$
Using $\mathbf{A}\mathbf{x}_{k+1} = \mathbf{b} - \mathbf{r}_{k+1}$ and $\mathbf{A}\mathbf{x}_k = \mathbf{b} - \mathbf{r}_k$, it changes to
$$\mathbf{r}_{k+1} = \mathbf{r}_k - \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2}\,\mathbf{A}\mathbf{u}_k.$$
Taking the inner product with $\mathbf{A}\mathbf{u}_k$ yields
$$\mathbf{r}_{k+1}\cdot(\mathbf{A}\mathbf{u}_k) = \mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k - \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2}\,\|\mathbf{A}\mathbf{u}_k\|^2 = 0.$$
Equation (45) is proven.    □
With the aid of Theorem 1 and Lemma 1, we develop a new modification of the SOR method, namely the orthogonalized SOR (OSOR) method, to improve the properties of SOR, which is given below.
Theorem 4.
The orthogonalized successive over-relaxation (OSOR) method is
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = w\,\mathbf{r}_k,$$
$$\eta_k = \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2},$$
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \eta_k\,\mathbf{u}_k.$$
If $\mathbf{u}_k$ is bounded, i.e., $\|\mathbf{u}_k\| < \infty$, the OSOR method is absolutely convergent:
$$\|\mathbf{r}_{k+1}\| < \|\mathbf{r}_k\|.$$
Proof. 
If $\|\mathbf{u}_k\| < \infty$, $\eta_k$ in Equation (50) is well defined. Left-multiplying Equation (51) by $\mathbf{A}$ yields
$$\mathbf{r}_{k+1} = \mathbf{r}_k - \eta_k\,\mathbf{A}\mathbf{u}_k.$$
Taking the squared norm of Equation (53) and inserting $\eta_k$ from Equation (50) generates
$$\|\mathbf{r}_{k+1}\|^2 = \|\mathbf{r}_k\|^2 - 2\eta_k\,\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k + \eta_k^2\|\mathbf{A}\mathbf{u}_k\|^2 = \|\mathbf{r}_k\|^2 - \frac{(\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k)^2}{\|\mathbf{A}\mathbf{u}_k\|^2} = \|\mathbf{r}_k\|^2 - \eta_k^2\|\mathbf{A}\mathbf{u}_k\|^2.$$
Since $(\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k)^2/\|\mathbf{A}\mathbf{u}_k\|^2 > 0$, Equation (52) is proven.   □
Theorem 5.
In the OSOR method, let
$$\mathbf{w}_k = \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2}\,\mathbf{u}_k$$
be the new descent vector in Equation (51). The following identity holds:
$$\eta_k^{w} = \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{w}_k}{\|\mathbf{A}\mathbf{w}_k\|^2} = 1,$$
such that the OSOR method automatically maintains the orthogonality of the residual vector and achieves the maximal decrease of the residual's length.
Proof. 
Let
$$\gamma = \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2};$$
Equation (55) is written as
$$\mathbf{w}_k = \gamma\,\mathbf{u}_k.$$
Inserting it into Equation (56) produces
$$\eta_k^{w} = \frac{1}{\gamma}\,\frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2};$$
hence, by means of Equation (57), it follows that
$$\eta_k^{w} = \frac{\|\mathbf{A}\mathbf{u}_k\|^2}{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}\,\frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2} = 1.$$
Thus, Equation (56) is proven. The proofs of the orthogonality condition of the residual vector and the maximality of $h$ are the same as those given in Lemmas 1 and 2.   □
Absolute convergence is a crucial property of the OSOR method; in the theory of SOR, absolute convergence is not guaranteed if the value of the relaxation parameter is not properly chosen. Correspondingly, Algorithm 1, based on Theorem 4, is given as follows.
Algorithm 1 OSOR
1: Give $\mathbf{x}_0$, $w$, and $\varepsilon$
2: Do $k = 0, 1, \ldots$
3: $\mathbf{r}_k = \mathbf{b} - \mathbf{A}\mathbf{x}_k$
4: Solve $(\mathbf{D} - w\mathbf{L})\mathbf{u}_k = w\mathbf{r}_k$
5: $\eta_k = \mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k / \|\mathbf{A}\mathbf{u}_k\|^2$
6: $\mathbf{x}_{k+1} = \mathbf{x}_k + \eta_k\mathbf{u}_k$
7: If $\|\mathbf{r}_{k+1}\| < \varepsilon$, stop
8: Otherwise, go to 2
Algorithm 1 is slightly more complex than Algorithm 2, given as follows.
Algorithm 2 SOR
1: Give $\mathbf{x}_0$, $w$, and $\varepsilon$
2: Do $k = 0, 1, \ldots$
3: $\mathbf{r}_k = \mathbf{b} - \mathbf{A}\mathbf{x}_k$
4: Solve $(\mathbf{D} - w\mathbf{L})\mathbf{u}_k = w\mathbf{r}_k$
5: $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{u}_k$
6: If $\|\mathbf{r}_{k+1}\| < \varepsilon$, stop
7: Otherwise, go to 2
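Algorithms 1 and 2 differ only in whether the descent vector is rescaled by $\eta_k$ before the update. The following NumPy sketch implements both in the residual/descent form; the function name, interface, and dense triangular solve are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def sor_like(A, b, w, x0=None, eps=1e-10, max_iter=500, orthogonalized=False):
    """Residual/descent form of Algorithm 2 (SOR) and Algorithm 1 (OSOR):
    solve (D - w L) u_k = w r_k, then x_{k+1} = x_k + u_k, or x_{k+1} = x_k + eta_k u_k
    with eta_k from Equation (50) when orthogonalized=True."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                  # A = D - U - L
    M = D - w * L
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < eps:
            return x, k
        u = np.linalg.solve(M, w * r)    # forward solution of the lower-triangular system
        if orthogonalized:
            Au = A @ u
            x = x + (r @ Au) / (Au @ Au) * u
        else:
            x = x + u
    return x, max_iter

# Usage sketch: compare SOR and OSOR on the same system (A, b assumed defined).
# x_sor, n_sor = sor_like(A, b, w=1.1)
# x_osor, n_osor = sor_like(A, b, w=1.1, orthogonalized=True)
```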

3.2. A Sub-Optimal Value of Relaxation Parameter for OSOR

Applying $\mathbf{D} - w\mathbf{L}$ to Equation (51) yields
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + \eta_k(\mathbf{D} - w\mathbf{L})\mathbf{u}_k,$$
which, in view of Equations (13) and (49), becomes
$$(\mathbf{D} - w\mathbf{L})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{L})\mathbf{x}_k + \eta_k w(\mathbf{b} - \mathbf{A}\mathbf{x}_k).$$
The OSOR method is basically a nonlinear iterative algorithm, since $\eta_k$ is a nonlinear function of $\mathbf{x}_k$, as shown by Equation (19).
From Equation (62), the step-by-step iteration matrix of the OSOR method can be derived as follows:
$$\mathbf{G}_k = \mathbf{I}_n - \eta_k w(\mathbf{D} - w\mathbf{L})^{-1}\mathbf{A}.$$
Later, the spectral radii $\rho_k$ of $\mathbf{G}_k$ are plotted for some examples. The value of $\rho_k$ is not necessarily smaller than one when $w < 0$. Under such a value of $w < 0$, OSOR is still stable, but SOR is divergent. A total contraction rate (TCR) can be evaluated by
$$\mathrm{TCR} = \prod_{k=1}^{\mathrm{NS}} \rho_k,$$
where NS is the number of steps for convergence. A sufficient condition for the convergence of OSOR is $\mathrm{TCR} < 1$.
As with Equation (37) for SOR, we develop the GSSA method for OSOR. In Equation (54), we take $k = 0$ to obtain
$$\|\mathbf{r}\|^2 = \|\mathbf{r}_0\|^2 - \frac{(\mathbf{r}_0^{\mathrm{T}}\mathbf{A}\mathbf{u})^2}{\|\mathbf{A}\mathbf{u}\|^2},$$
where $\mathbf{u}$ is solved from Equation (49) with $\mathbf{r}_0$ replacing $\mathbf{r}_k$. Given $\mathbf{r}_0$, we search for $w$ such that the length of the residual vector is decreased maximally at the next step, as shown by Equation (65):
$$\max_{w\in(0,2)} \frac{(\mathbf{r}_0^{\mathrm{T}}\mathbf{A}\mathbf{u})^2}{\|\mathbf{A}\mathbf{u}\|^2}.$$
The merit function is the orthogonal projection of $\mathbf{r}_0$ onto the direction of $\mathbf{A}\mathbf{u}$. We can minimize
$$\min_{w\in(0,2)} f = \frac{\|\mathbf{A}\mathbf{u}\|^2}{(\mathbf{r}_0^{\mathrm{T}}\mathbf{A}\mathbf{u})^2}$$
to obtain a sub-optimal value of $w$. GSSA can then be applied to Equation (67) to search for this value of $w$, which is the minimal point of $f$.
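The same golden section search sketched in Section 2.3 can be reused here; only the merit function changes to that of Equation (67), as in the illustrative helper below (the name merit_osor is ours).

```python
import numpy as np

def merit_osor(w, A, r0):
    """Equation (67): ||A u||^2 / (r0^T A u)^2, with u solved from (D - w L) u = w r0."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)               # A = D - U - L
    u = np.linalg.solve(D - w * L, w * r0)
    Au = A @ u
    return (Au @ Au) / (r0 @ Au) ** 2

# Usage with the gss routine sketched earlier:
# w_sub = gss(lambda w: merit_osor(w, A, r0), 0.01, 1.99)
```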

4. Orthogonalized SSOR

The SSOR method is given in [33]:
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{x}_{k+1/2} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}_k + w\mathbf{U}\mathbf{x}_k,$$
$$(\mathbf{D} - w\mathbf{U})\,\mathbf{x}_{k+1} = w\mathbf{b} + (1-w)\mathbf{D}\mathbf{x}_{k+1/2} + w\mathbf{L}\mathbf{x}_{k+1/2},$$
which is a two-step iterative algorithm.
With the aid of Theorem 1, we can derive a new form of the SSOR method as follows.
Theorem 6.
A new iterative form of the SSOR method is
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = w\,\mathbf{r}_k,$$
$$\mathbf{x}_{k+1/2} = \mathbf{x}_k + \mathbf{u}_k,$$
$$(\mathbf{D} - w\mathbf{U})\,\mathbf{u}_{k+1/2} = w\,\mathbf{r}_{k+1/2},$$
$$\mathbf{x}_{k+1} = \mathbf{x}_{k+1/2} + \mathbf{u}_{k+1/2}.$$
Proof. 
Equations (70) and (71) were proven in Theorem 1. We rewrite Equation (69) as
$$(\mathbf{D} - w\mathbf{U})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{U})\mathbf{x}_{k+1/2} + w\mathbf{b} + [(1-w)\mathbf{D} + w\mathbf{L} - (\mathbf{D} - w\mathbf{U})]\mathbf{x}_{k+1/2},$$
which, in view of Equations (3) and (13), can be rearranged to
$$(\mathbf{D} - w\mathbf{U})\mathbf{x}_{k+1} = (\mathbf{D} - w\mathbf{U})\mathbf{x}_{k+1/2} + w\,\mathbf{r}_{k+1/2}.$$
Moving $(\mathbf{D} - w\mathbf{U})\mathbf{x}_{k+1/2}$ to the left-hand side yields
$$(\mathbf{D} - w\mathbf{U})(\mathbf{x}_{k+1} - \mathbf{x}_{k+1/2}) = w\,\mathbf{r}_{k+1/2}.$$
In terms of $\mathbf{u}_{k+1/2}$ in Equation (73), we can derive Equation (72).    □
Theorem 7.
An orthogonalized symmetric successive over-relaxation (OSSOR) method is
$$(\mathbf{D} - w\mathbf{L})\,\mathbf{u}_k = w\,\mathbf{r}_k,$$
$$\eta_k = \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k}{\|\mathbf{A}\mathbf{u}_k\|^2},$$
$$\mathbf{x}_{k+1/2} = \mathbf{x}_k + \eta_k\,\mathbf{u}_k,$$
$$(\mathbf{D} - w\mathbf{U})\,\mathbf{u}_{k+1/2} = w\,\mathbf{r}_{k+1/2},$$
$$\eta_{k+1/2} = \frac{\mathbf{r}_{k+1/2}^{\mathrm{T}}\mathbf{A}\mathbf{u}_{k+1/2}}{\|\mathbf{A}\mathbf{u}_{k+1/2}\|^2},$$
$$\mathbf{x}_{k+1} = \mathbf{x}_{k+1/2} + \eta_{k+1/2}\,\mathbf{u}_{k+1/2}.$$
Proof. 
We omit the proof, which is an extension of Theorem 1 to Equations (70)–(73).    □
Theorem 8.
If $\mathbf{u}_k$ and $\mathbf{u}_{k+1/2}$ are bounded, the OSSOR method is absolutely convergent:
$$\|\mathbf{r}_{k+1}\| < \|\mathbf{r}_{k+1/2}\| < \|\mathbf{r}_k\|.$$
Proof. 
We omit the proof, which is an extension of Theorem 4 to Equations (70)–(73).   □
We have the following Algorithm 3 for OSSOR.
Algorithm 3 OSSOR
1: Give $\mathbf{x}_0$, $w$, and $\varepsilon$
2: Do $k = 0, 1, \ldots$
3: $\mathbf{r}_k = \mathbf{b} - \mathbf{A}\mathbf{x}_k$
4: Solve $(\mathbf{D} - w\mathbf{L})\mathbf{u}_k = w\mathbf{r}_k$
5: $\eta_k = \mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k / \|\mathbf{A}\mathbf{u}_k\|^2$
6: $\mathbf{x}_{k+1/2} = \mathbf{x}_k + \eta_k\mathbf{u}_k$
7: $\mathbf{r}_{k+1/2} = \mathbf{b} - \mathbf{A}\mathbf{x}_{k+1/2}$
8: Solve $(\mathbf{D} - w\mathbf{U})\mathbf{u}_{k+1/2} = w\mathbf{r}_{k+1/2}$
9: $\eta_{k+1/2} = \mathbf{r}_{k+1/2}^{\mathrm{T}}\mathbf{A}\mathbf{u}_{k+1/2} / \|\mathbf{A}\mathbf{u}_{k+1/2}\|^2$
10: $\mathbf{x}_{k+1} = \mathbf{x}_{k+1/2} + \eta_{k+1/2}\mathbf{u}_{k+1/2}$
11: If $\|\mathbf{r}_{k+1}\| < \varepsilon$, stop
12: Otherwise, go to 2
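A compact sketch of Algorithm 3 in the same style follows; the helper name ossor and the dense solves are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ossor(A, b, w, x0=None, eps=1e-10, max_iter=500):
    """Sketch of Algorithm 3 (OSSOR): a forward (D - w L) half-step followed by a
    backward (D - w U) half-step, each rescaled by its own orthogonalizing factor."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                  # A = D - U - L
    U = -np.triu(A, 1)
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < eps:
            return x, k
        u = np.linalg.solve(D - w * L, w * r)            # forward half-step, Equation (77)
        Au = A @ u
        x = x + (r @ Au) / (Au @ Au) * u                 # Equations (78) and (79)
        r_half = b - A @ x
        u_half = np.linalg.solve(D - w * U, w * r_half)  # backward half-step, Equation (80)
        Au_half = A @ u_half
        x = x + (r_half @ Au_half) / (Au_half @ Au_half) * u_half  # Equations (81) and (82)
    return x, max_iter
```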

5. Numerical Tests

In order to compare the performance of the SOR, AOR, and SSOR methods with that of the newly developed OSOR and OSSOR methods, some linear problems were tested.

5.1. Example 1

In Equation (1), we take
$$\mathbf{A} = \begin{bmatrix} 4 & -1 & 0 & 0 & 0 & 0 \\ 2 & 2 & 1.5 & 0 & 0 & 0 \\ 0 & 1 & 3 & -1 & 0 & 0 \\ 0 & 0 & 1.5 & 2 & 2 & 0 \\ 0 & 0 & 0 & 1 & 4 & -1 \\ 0 & 0 & 0 & 0 & 2 & 2 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 3 \\ 5.5 \\ 3 \\ 5.5 \\ 4 \\ 4 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}.$$
For this problem, $w_{\mathrm{opt}}$ is found to be
$$w_{\mathrm{opt}} = 1.016288735.$$
With the initial guess $x_i^0 = 0$, $i = 1, \ldots, 6$, and under $\varepsilon = 10^{-10}$, SOR converges in 27 iterations, as shown in Figure 1a. The maximum error (ME) is $1.38 \times 10^{-11}$.
By using OSOR with the optimal value of $w$ in Equation (85), it converges within 26 iterations, as shown in Figure 1a; ME = $2.12 \times 10^{-11}$ is obtained. In Figure 1a, the residuals obtained by SOR and OSOR with the optimal value of $w$ in Equation (85) are compared.
It is well known that, for SOR, the relaxation parameter must satisfy $0 < w < 2$ for the stability of the iterative algorithm. When we take $w = 1.5$, SOR is unstable, as shown in Table 1; however, as shown in Figure 1b, the spectral radius of OSOR is smaller than one at each step. Table 1 reveals that OSOR converges within 35 steps.
To further test the stability of SOR and OSOR, we take $w = -0.01$. In Figure 2a, we can see that SOR blows up very rapidly. Its values of $\eta_k$, as shown in Figure 2b, also tend to large negative values. In contrast, the OSOR method, as a stabilization of SOR, can obtain an accurate solution with ME = $3.36 \times 10^{-11}$ within 46 steps. The value $\eta_k = 1$ is kept constant, as seen in Figure 2b.
As shown in Figure 1b, the step-by-step spectral radii oscillate around one when $w = -0.01$, which is negative. The total contraction rate defined in Equation (64) is 0.6528, which is smaller than one; hence, the OSOR iteration is convergent.
Table 1 compares the ME and number of steps (NS) obtained by SOR and OSOR for different values of $w$. It can be seen that SOR is sensitive to the value of $w$; when $w \le 0.3$, it requires more than 100 steps for convergence, and when $w \ge 1.5$, it blows up. When $w = 0.1$, we find that many $\eta_k$ in SOR are negative, which causes the slow convergence of SOR with 367 steps. In contrast to SOR, OSOR is very stable and not sensitive to the value of $w$. The OSOR method can not only stabilize SOR but also enhance the accuracy. For OSOR, $w$ can range over a large interval $w \in (-2, 2.5]$.
For SSOR, an optimal value like that in Equation (85) is not available. It is interesting that, when we use the sub-optimal value $w = 0.90169944$ in SSOR obtained from Equation (37), we can achieve a highly accurate solution within 19 steps, with ME = $1.50 \times 10^{-11}$.
Table 2 compares the ME and NS obtained by SSOR and OSSOR for different values of $w$. The improvement in NS made by OSSOR is obvious. For SSOR, the range is $w \in (0, 2)$. For OSSOR, $w$ can range over a large interval $w \in [-2.2, 2.6]$.
We take $\varepsilon = 10^{-15}$. Table 3 compares the NS and ME obtained by using the optimal value $w_{\mathrm{opt}} = 1.016288735$ for SOR, the sub-optimal value of $w$ obtained from Equation (37) for SOR, and that from Equation (67) for OSOR: $w = 0.90169944$ is obtained from Equation (37), and $w = 1.00251249$ from Equation (67).
Table 4 extends the results in Table 3 to those for OSSOR.
As an application of Theorem 2, we can apply AOR to stabilize SOR, when SOR is unstable with w = 1.5 , as shown in Table 1.
From Equation (27), the iteration matrix for AOR is given by
$$\mathbf{G} = \mathbf{I}_n - \sigma(\mathbf{D} - w\mathbf{L})^{-1}\mathbf{A}.$$
Unlike G k for the OSOR method, the above G is a constant matrix.
For the stability of AOR, the spectral radius $\rho(\mathbf{G})$ must be smaller than one. We plot the stability region in Figure 3. When SOR, corresponding to $\eta = 1$, is unstable with $w > 1.488$, AOR can be stabilized by taking a suitable value of $\eta = \sigma/w$ such that $(w, \eta)$ lies inside the stable region.
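To visualize a stable region such as that of Figure 3, one can scan the spectral radius of the AOR iteration matrix in Equation (86) over a grid of $(w, \eta)$; the helper name and grid bounds below are illustrative assumptions.

```python
import numpy as np

def aor_spectral_radius(A, w, eta):
    """Spectral radius of the AOR iteration matrix of Equation (86):
    G = I - sigma (D - w L)^{-1} A with sigma = w * eta."""
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)               # A = D - U - L
    G = np.eye(n) - (w * eta) * np.linalg.solve(D - w * L, A)
    return max(abs(np.linalg.eigvals(G)))

# Sketch of a stability map over a (w, eta) grid; entries equal to 1 mark the stable region.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
ws = np.linspace(0.1, 1.9, 19)
etas = np.linspace(0.1, 1.5, 15)
stable = np.array([[aor_spectral_radius(A, w, e) < 1.0 for w in ws] for e in etas])
print(stable.astype(int))
```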
In Table 5, we fix ε = 10 10 and w = 1.5 , but with different values of η = σ / w in AOR to compute the solution.

5.2. Example 2

We apply SOR, OSOR, and OSSOR to solve the following boundary value problem:
$$u''(x) = f(x), \quad u(0) = 0, \quad u(1) = 0.$$
The exact solution is supposed to be
$$u(x) = \sin \pi x.$$
The finite difference discretization of Equation (87) is
$$\frac{1}{(\Delta x)^2}\left(u_{i+1} - 2u_i + u_{i-1}\right) - f(x_i) = 0, \quad i = 1, \ldots, n, \quad u_0 = 0, \quad u_{n+1} = 0,$$
where $\Delta x = 1/(n+1)$ and $x_i = i\,\Delta x = i/(n+1)$, $i = 1, \ldots, n$.
We take $n = 99$ and $\varepsilon = 10^{-5}$. On the other hand, we have [32,34,35]
$$w_{\mathrm{opt}} = \frac{2}{1 + \sin(\pi\,\Delta x)} = 1.939091659.$$
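As an illustration of this setup, the following sketch assembles the n = 99 finite-difference system (written in the sign convention with a positive diagonal) and evaluates Equation (90); the resulting A and b can then be fed to the SOR/OSOR/OSSOR sketches given earlier. The assembly details are ours, not the authors' code.

```python
import numpy as np

# Example 2 (illustrative assembly): -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0,
# so the exact solution is u(x) = sin(pi x), i.e. f(x) = -pi^2 sin(pi x) in Equation (87).
n = 99
dx = 1.0 / (n + 1)
xi = dx * np.arange(1, n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
b = np.pi**2 * np.sin(np.pi * xi)          # zero boundary values contribute nothing to b
w_opt = 2.0 / (1.0 + np.sin(np.pi * dx))   # Equation (90)
print(w_opt)                                # 1.939091659...
```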
Table 6 compares the ME and NS obtained by SOR and OSOR for different values of $w$. It is apparent that OSOR converges faster than SOR when $w$ is not the optimal value. Because the coefficient matrix is symmetric, SSOR and OSSOR do not converge faster than SOR and OSOR.
Table 7 compares the NS and ME obtained by using the optimal value $w_{\mathrm{opt}} = 1.939091659$ for SOR, the sub-optimal value $w = 1.9381966$ obtained from Equation (37) for SOR, and $w = 1.9381966$ from Equation (67) for OSOR.

5.3. Example 3

Consider an example of Equation (1) with $\mathbf{A} = [a_{ij}]$, $a_{ij} = 2i + 3j$, $i, j = 1, \ldots, n$. Suppose that $b_i = 1$, $i = 1, \ldots, n$; the exact solution $x_i$ is unknown. The initial values are $x_i^0 = 0$, $i = 1, \ldots, n$.
With $n = 15$ and under $\varepsilon = 10^{-10}$, OSOR and OSSOR using GSSA at each step converge within seven and six steps, respectively, as shown in Figure 4. However, neither SOR nor SSOR converges.
For the stability of AOR, the spectral radius $\rho(\mathbf{G})$ must be smaller than one. We plot the stability region in Figure 5. Although we take $w = 0.02$ and $\eta = 1$ in the stable region of Figure 5, SOR is divergent under $\varepsilon = 2.8 \times 10^{-3}$. Therefore, we take the convergence criterion for the relative residual to be $\|\mathbf{x}_{k+1} - \mathbf{x}_k\| < 2.8 \times 10^{-3}$. As shown in Figure 6, SOR converges in six steps, and AOR with $w = 0.02$ and $\eta = 0.8$ converges in four steps. However, the residual errors 1.287 and 1.685 indicate that the solutions obtained by SOR and AOR are incorrect.
If $10^{-3}$ is taken as the convergence criterion for the relative residuals obtained by SOR and AOR, they are divergent again, as shown in Figure 6. In contrast, as shown in Table 8, OSOR and OSSOR can compute accurate solutions very quickly.

5.4. Example 4

We consider a linear system in Equation (1) with [28]
$$\mathbf{A} = \begin{bmatrix} 4.2 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 4.2 & 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 4.2 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 4.2 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 4.2 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 & 1 & 4.2 & 0 & 1 \\ 1 & 1 & 0 & 0 & 1 & 1 & 4.2 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 & 4.2 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 6.1 \\ 5.4 \\ 9.2 \\ 0 \\ 6.2 \\ 1.2 \\ 13.4 \\ 4.2 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} 1 \\ 2 \\ 1 \\ 0 \\ 1 \\ 1 \\ 2 \\ 1 \end{bmatrix}.$$
By using GSSA, the sub-optimal value w = 0.9975 for OSOR is found.
The initial values are $x_i^0 = 0$, $i = 1, \ldots, 8$; under $\varepsilon = 10^{-7}$ for the relative residual, OSOR converges within 40 steps. In Table 9, the ME and NS are compared with those obtained in [28] by using the RSORJ and RJSOR methods; they used two-step techniques with the optimal value $w = 1.85$.
In Table 10, the NSs obtained by OSOR and OSSOR are compared with those obtained in [36] by using the Gauss–Seidel (GS), refinement of Gauss–Seidel (RGS), second refinement of Gauss–Seidel (SRGS), and third refinement of Gauss–Seidel (TRGS) algorithms.

6. Conclusions

In this paper, we have re-examined the SOR method from the aspects of preserving the orthogonality of consecutive residual vectors and maximizing the decrease in the length of the residual vector. These two ideas significantly accelerate the convergence speed through the OSOR method. If $\eta_k = \mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{u}_k/\|\mathbf{A}\mathbf{u}_k\|^2$ in SOR falls below $1/2$ during the iterations, SOR is unstable. We improved the SOR method such that the orthogonality is preserved automatically and, simultaneously, the length of the residual vector is reduced maximally. This brings the SOR method to an orthogonalized successive over-relaxation (OSOR) method. Meanwhile, the orthogonalized symmetric successive over-relaxation (OSSOR) method was developed. The OSOR and OSSOR methods preserve the orthogonality automatically and have the property of absolute convergence.
The main features of the present paper are pointed out as follows:
(a)
A new form of SOR was derived in terms of descent vector and residual vector.
(b)
We proved that the AOR method can be generated from the new form of SOR by inserting $\eta$ before the descent vector of the SOR method.
(c)
The OSOR method was developed by using the extrapolation parameter $\eta$ varying step by step, which controls the stability of the OSOR algorithm.
(d)
The convergence behavior of OSOR is superior to that of SOR; for OSOR, even a negative value of $w$ is permitted. Hence, the stability region of OSOR is enlarged significantly.
(e)
For non-symmetric linear systems, the orthogonalized SSOR (OSSOR) method outperforms the SSOR method.
(f)
We proposed minimization techniques to choose the sub-optimal values of the relaxation parameter, which can improve the accuracy of the OSOR and OSSOR methods.
In the near future, we may examine other splitting iteration methods involving two or more parameters, as well as the general $(\mathbf{M}, \mathbf{N})$ splitting methods for two-block linear systems, from the two aspects of preserving the orthogonality and maximally reducing the length of the residual vector by using the stabilizing factor $\eta_k$ in the iterative formulas.

Author Contributions

Conceptualization, C.-S.L. and C.-W.C.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L. and C.-W.C.; Resources, C.-S.L. and C.-W.C.; Data curation, C.-S.L. and C.-W.C.; Writing—original draft, C.-S.L. and C.-W.C.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L. and C.-W.C.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C.; Funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Acknowledgments

The financial support provided by the National Science and Technology Council under the Grant NSTC 113-2221-E-019-043-MY3 and NSTC 112-2221-E-239-022 are gratefully acknowledged.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Young, D.M. Iterative methods for solving partial difference equations of elliptic type. Trans. Am. Math. Soc. 1954, 76, 92–111. [Google Scholar] [CrossRef]
  2. Young, D.M. Iterative Solution of Large Linear Systems; Academic Press: New York, NY, USA, 1971. [Google Scholar]
  3. Kincaid, D.R. Celebrating fifty years of David M. Young’s successive overrelaxation method. In Numerical Mathematics and Advanced Applications: Proceedings of ENUMATH 2003 the 5th European Conference on Numerical Mathematics and Advanced Applications, Prague, August 2003; Springer: Berlin/Heidelberg, Germany, 2004; pp. 549–558. [Google Scholar]
  4. Hadjidimos, A. Accelerated overrelaxation method. Math. Comput. 1978, 32, 149–157. [Google Scholar] [CrossRef]
  5. Albrecht, P.; Klein, M.P. Extrapolated iterative methods for linear systems. SIAM J. Numer. Anal. 1984, 21, 192–201. [Google Scholar] [CrossRef]
  6. Hadjidimos, A. Successive overrelaxation (SOR) and related methods. J. Comput. Appl. Math. 2000, 123, 177–199. [Google Scholar] [CrossRef]
  7. Bai, Z.Z.; Chi, X.B. Asymptotically optimal successive overrelaxation method for systems of linear equations. J. Comput. Math. 2003, 21, 603–612. [Google Scholar]
  8. Wen, R.P.; Meng, G.Y.; Wang, C.L. Quasi-Chebyshev accelerated iteration methods based on optimization for linear systems. Comput. Math. Appl. 2013, 66, 934–942. [Google Scholar] [CrossRef]
  9. Meng, G.Y. A practical asymptotical optimal SOR method. Appl. Math. Comput. 2014, 242, 707–715. [Google Scholar] [CrossRef]
  10. Miyatake, Y.; Sogabe, T.; Zhang, S.L. Adaptive SOR methods based on the Wolfe conditions. Numer. Algo. 2020, 84, 117–132. [Google Scholar] [CrossRef]
  11. Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer Science: New York, NY, USA, 2000. [Google Scholar]
  12. Hadjidimos, A.; Yeyios, A. Symmetric accelerated overrelaxation (SAOR) method. Math. Comput. Simul. 1982, XXIV, 72–76. [Google Scholar] [CrossRef]
  13. Yeyios, A. A necessary condition for the convergence of the accelerated overrelaxation (AOR) method. J. Comput. Appl. Math. 1989, 26, 371–373. [Google Scholar] [CrossRef]
  14. Liu, C.S. A feasible approach to determine the optimal relaxation parameters in each iteration for the SOR method. J. Math. Res. 2021, 13, 1–9. [Google Scholar] [CrossRef]
  15. Liu, C.S.; El-Zahar, E.R.; Chang, C.W. Dynamical optimal values of parameters in the SSOR, AOR and SAOR testing using the Poisson linear equations. Mathematics 2023, 11, 3828. [Google Scholar] [CrossRef]
  16. Darvishi, M.T.; Hessari, P. Symmetric SOR method for augmented systems. Appl. Math. Comput. 2006, 183, 409–415. [Google Scholar] [CrossRef]
  17. Zhang, G.F.; Lu, Q.H. On generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2008, 219, 51–58. [Google Scholar] [CrossRef]
  18. Darvishi, M.T.; Hessari, P. A modified symmetric successive overrelaxation method for augmented systems. Comput. Math. Appl. 2011, 61, 3128–3135. [Google Scholar] [CrossRef]
  19. Chao, Z.; Zhang, N.; Lu, Y. Optimal parameters of the generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2014, 266, 52–60. [Google Scholar] [CrossRef]
  20. Li, C.L.; Ma, C.F. An accelerated symmetric SOR-like method for augmented systems. Appl. Math. Comput. 2019, 341, 408–417. [Google Scholar] [CrossRef]
  21. Li, Y.; Dai, P. Generalized AOR methods for linear complementarity problem. Appl. Math. Comput. 2007, 188, 7–18. [Google Scholar] [CrossRef]
  22. Huang, Z.G.; Xu, Z.; Lu, Q.; Cui, J.J. Some new preconditioned generalized AOR methods for generalized least-squares problems. Appl. Math. Comput. 2015, 269, 87–104. [Google Scholar] [CrossRef]
  23. Zhang, C.H.; Wang, X.; Tang, X.B. Generalized AOR method for solving a class of generalized saddle point problems. J. Comput. Appl. Math. 2019, 350, 69–79. [Google Scholar] [CrossRef]
  24. Darvishi, M.T.; Khosro-Aghdam, R. Symmetric successive overrelaxation methods for rank deficient linear systems. Appl. Math. Comput. 2006, 173, 404–420. [Google Scholar] [CrossRef]
  25. Darvishi, M.T.; Khani, F.; Hamedi-Nezhad, S.; Zheng, B. Symmetric block-SOR methods for rank-deficient least squares problems. J. Comput. Appl. Math. 2008, 215, 14–27. [Google Scholar] [CrossRef]
  26. Bai, Z.Z.; Parlett, B.N.; Wang, Z.Q. On generalized successive overrelaxation methods for augmented linear systems. Numer. Math. 2005, 102, 1–38. [Google Scholar] [CrossRef]
  27. Laskar, A.H.; Behera, S. Refinement of iterative methods for the solution of system of linear equations Ax=b. IOSR J. Math. 2014, 10, 70–73. [Google Scholar] [CrossRef]
  28. Meligy, S.A.; Youssef, I.K. Relaxation parameters and composite refinement techniques. Results Appl. Math. 2022, 15, 100282. [Google Scholar] [CrossRef]
  29. Hadjidimos, A.; Yeyios, A. The principle of extrapolation in connection with the accelerated overrelaxation method. Linear Alg. Appl. 1980, 30, 115–128. [Google Scholar] [CrossRef]
  30. Missirlis, N.M.; Evans, D.J. The extrapolated successive overrelaxation (ESOR) method for consistently ordered matrices. Int. J. Math. Math. Sci. 1984, 7, 361–370. [Google Scholar] [CrossRef]
  31. Milewski, S.; Orkisz, J. In search of optimal acceleration approach to iterative solution methods of simultaneous algebraic equations. Comput. Math. Appl. 2014, 68, 101–117. [Google Scholar] [CrossRef]
  32. Yang, S.; Gobbert, M.K. The optimal relaxation parameter for the SOR method applied to the Poisson equation in any space dimensions. Appl. Math. Lett. 2009, 22, 325–331. [Google Scholar] [CrossRef]
  33. Golub, G.H.; Van Loan, C.F. Matrix Computations; The Johns Hopkins University Press: Baltimore, MD, USA, 1996. [Google Scholar]
  34. Watkins, D.S. Fundamentals of Matrix Computations, 2nd ed.; Wiley: New York, NY, USA, 2002. [Google Scholar]
  35. Demmel, J.W. Applied Numerical Linear Algebra; SIAM: Philadelphia, PA, USA, 1997. [Google Scholar]
  36. Audu, K.J.; Essien, J.N. An accelerated iterative technique: Third refinement of Gauss-Seidel algorithm for linear systems. Comput. Sci. Math. Forum 2023, 7, 7. [Google Scholar] [CrossRef]
Figure 1. For Example 1, (a) comparing the convergence residuals obtained by SOR and OSOR, and (b) the step-by-step spectral radius of OSOR with w = 1.5 and w = −0.01.
Figure 2. For Example 1, comparing (a) residuals and (b) η obtained by SOR and OSOR with w = −0.01.
Figure 3. For Example 1, the stable region of AOR in the plane (w, η). When (w, η) locates inside the blank part, AOR is unstable.
Figure 4. For Example 3, comparing the convergence residuals obtained by SOR, OSOR, SSOR, and OSSOR. SOR and SSOR do not converge.
Figure 5. For Example 3, the stable region of AOR in the plane (w, η). When (w, η) locates inside the blank part, AOR is unstable.
Figure 6. For Example 3, comparing the convergence relative residuals obtained by SOR and AOR. If the residual is taken as the criterion, both SOR and AOR are not convergent.
Table 1. For Example 1, comparing ME and number of steps (NS) obtained by SOR and OSOR for different values of w.

w | 0.1 | 0.3 | 0.8 | 1.3 | 1.5 | 1.9
NS (SOR) | 367 | 111 | 30 | 199 | Divergent | Divergent
NS (OSOR) | 43 | 39 | 30 | 30 | 35 | 47
ME (SOR) | 3.91 × 10^−11 | 3.85 × 10^−11 | 2.22 × 10^−11 | 2.13 × 10^−11 | × | ×
ME (OSOR) | 2.73 × 10^−11 | 2.42 × 10^−11 | 2.49 × 10^−11 | 1.62 × 10^−11 | 1.36 × 10^−11 | 1.75 × 10^−11
Table 2. For Example 1, comparing ME and NS obtained by SSOR and OSSOR for different values of w.

w | 0.1 | 0.3 | 0.8 | 1.3 | 1.5 | 1.9
NS (SSOR) | 183 | 55 | 15 | 26 | 40 | 238
NS (OSSOR) | 22 | 20 | 16 | 16 | 20 | 24
ME (SSOR) | 3.59 × 10^−11 | 3.03 × 10^−11 | 1.67 × 10^−11 | 8.29 × 10^−12 | 8.59 × 10^−12 | 8.59 × 10^−12
ME (OSSOR) | 2.74 × 10^−11 | 1.15 × 10^−11 | 2.57 × 10^−11 | 1.41 × 10^−11 | 1.02 × 10^−11 | 1.70 × 10^−11
Table 3. For Example 1, comparing ME and NS obtained by OSOR for optimal and sub-optimal values of w.

w | 1.01628874 | Equation (37) | Equation (67)
NS | 39 | 35 | 37
ME | 2.22 × 10^−16 | 2.22 × 10^−16 | 1.11 × 10^−16
Table 4. For Example 1, comparing ME and NS obtained by OSSOR for optimal and sub-optimal values of w.

w | 1.01628874 | Equation (37) | Equation (67)
NS | 31 | 27 | 23
ME | 2.22 × 10^−16 | 2.22 × 10^−16 | 2.22 × 10^−16
Table 5. For Example 1, comparing ME and NS obtained by AOR with a fixed value w = 1.5 but different values of η = σ/w.

η | 0.3 | 0.4 | 0.6 | 0.7
NS | 66 | 45 | 43 | 76
ME | 1.73 × 10^−11 | 1.63 × 10^−11 | 2.46 × 10^−11 | 2.03 × 10^−11
Table 6. For Example 2, comparing ME and NS obtained by SOR and OSOR for different values of w.

w | 1.5 | 1.6 | 1.7 | 1.8 | 1.9
NS (SOR) | >2000 | 1748 | 1292 | 855 | 411
NS (OSOR) | 1363 | 1154 | 943 | 671 | 378
ME (SOR) | 8.80 × 10^−4 | 2.76 × 10^−4 | 1.69 × 10^−4 | 7.66 × 10^−5 | 2.34 × 10^−5
ME (OSOR) | 5.29 × 10^−4 | 3.44 × 10^−4 | 2.31 × 10^−4 | 1.57 × 10^−4 | 2.54 × 10^−5
Table 7. For Example 2, comparing ME and NS obtained by SOR and OSOR for optimal and sub-optimal values of w.

Method | SOR | SOR | OSOR | OSOR
w | Equation (90) | GSSA | Equation (90) | GSSA
NS | 202 | 201 | 229 | 229
ME | 9.27 × 10^−5 | 9.25 × 10^−5 | 9.14 × 10^−5 | 9.04 × 10^−5
Table 8. For Example 3, comparing NS obtained by OSOR and OSSOR for different values of n.

n | 10 | 30 | 50 | 100
OSOR | 6 | 6 | 6 | 10
OSSOR | 6 | 6 | 6 | 7
Table 9. For Example 4, comparing ME and NS obtained by OSOR, OSSOR, RSORJ, and RJSOR.

Method | OSOR | OSSOR | RSORJ | RJSOR
ME | 3.06 × 10^−7 | 4.48 × 10^−7 | 5.20 × 10^−7 | 2.42 × 10^−7
NS | 40 | 22 | 38 | 38
Table 10. For Example 4, comparing NS obtained by OSOR, OSSOR, GS, RGS, SRGS, and TRGS.

Method | OSOR | OSSOR | GS | RGS | SRGS | TRGS
NS | 40 | 22 | 88 | 44 | 30 | 22
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
