Proceeding Paper

An Accelerated Iterative Technique: Third Refinement of Gauss–Seidel Algorithm for Linear Systems †

by
Khadeejah James Audu
* and
James Nkereuwem Essien
Department of Mathematics, Federal University of Technology, P.M.B. 65, Minna 920101, Nigeria
*
Author to whom correspondence should be addressed.
Presented at the 1st International Online Conference on Mathematics and Applications, 1–15 May 2023; Available online: https://iocma2023.sciforum.net/.
Comput. Sci. Math. Forum 2023, 7(1), 7; https://doi.org/10.3390/IOCMA2023-14415
Published: 28 April 2023

Abstract
Obtaining an approximation for the majority of sparse linear systems found in engineering and applied sciences requires efficient iteration approaches. Solving such linear systems with iterative techniques is possible, but the number of iterations can be high. To acquire approximate solutions with rapid convergence, the need arises to redesign or modify the current approaches. In this study, a modified approach, termed the “third refinement” of the Gauss–Seidel algorithm, for solving linear systems is proposed. The primary objective of this research is to optimize convergence speed by reducing the number of iterations and the spectral radius. Decomposing the coefficient matrix using a standard splitting strategy and performing an interpolation operation on the resulting simpler matrices led to the development of the proposed method. We investigated and established the convergence of the proposed accelerated technique for some classes of matrices. The efficiency of the proposed technique was examined numerically, and the findings revealed a substantial enhancement over its previous refinements.

1. Introduction

Solving a large linear system is one of the challenges of most modeling problems today. A linear system can be expressed in the form
A t = b  (1)
where A ∈ R^{n×n} is the coefficient matrix, b ∈ R^n is a column vector of constants and t is the unknown vector. The relation t = A^{−1} b denotes the exact solution of problem (1) when the coefficient matrix is nonsingular. It is common knowledge that direct methods for solving such systems require around n^3/3 operations, which makes them unsuitable for large sparse systems. Iterative approaches appear to be the best option, especially when convergence to the requisite precision is attained within n steps [1,2,3,4]. The splitting of A gives
A = D − G − H  (2)
in which D is the diagonal component of A, while H and G are the strictly upper and strictly lower triangular parts of A, respectively, taken with negative sign. A matrix reformulation of conventional stationary iterative methods [5] is employed. Three of these standard iterative approaches are of particular relevance to our current endeavor:
Gauss–Seidel Technique [6,7,8]
t^{(n+1)} = (D − G)^{−1} H t^{(n)} + (D − G)^{−1} b  (3)
Gauss–Seidel Refinement Technique
t^{(n+1)} = [(D − G)^{−1} H]^2 t^{(n)} + [I + (D − G)^{−1} H](D − G)^{−1} b  (4)
Gauss–Seidel Second Refinement Technique
t^{(n+1)} = [(D − G)^{−1} H]^3 t^{(n)} + [I + (D − G)^{−1} H + ((D − G)^{−1} H)^2](D − G)^{−1} b  (5)
The iteration matrix of a stationary iterative method remains the same throughout the iteration. From the second step onwards, the computational cost per iteration is at most n^2 operations, because the iteration matrix is calculated only once and then reused (much less for sparse matrices). The convergence speed of any iterative method can be increased by employing the idea of refinement of an iterative process [9,10,11].
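For concreteness, the stationary Gauss–Seidel iteration (3) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the function name, default tolerance and stopping rule are our own choices:

```python
import numpy as np

def gauss_seidel_matrix_form(A, b, t0, tol=1e-8, max_iter=500):
    """Classical Gauss-Seidel in stationary matrix form:
    t^(n+1) = (D - G)^(-1) H t^(n) + (D - G)^(-1) b,
    where A = D - G - H (D diagonal, G strictly lower and H strictly
    upper triangular parts of A, taken with negative sign)."""
    D = np.diag(np.diag(A))
    G = -np.tril(A, -1)                 # so that A = D - G - H
    H = -np.triu(A, 1)
    Minv = np.linalg.inv(D - G)
    J = Minv @ H                        # iteration matrix: built once, reused
    c = Minv @ b
    t = np.asarray(t0, dtype=float)
    for n in range(1, max_iter + 1):
        t_next = J @ t + c
        if np.linalg.norm(t_next - t, np.inf) < tol:
            return t_next, n
        t = t_next
    return t, max_iter

# demo on a small strictly diagonally dominant system
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
t, its = gauss_seidel_matrix_form(A, b, np.zeros(2))
```

Building the iteration matrix J once and reusing it every step is exactly the cost saving described above.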
Iterative approaches are unquestionably the most effective choice for solving huge sparse linear systems. However, such an approach may require many iterations to converge, which strains computer storage and degrades computing performance [12]. In such cases, it is necessary to modify or accelerate existing methods in order to achieve approximate solutions with rapid convergence. This motivated the current study to offer an improved technique capable of providing better approximate solutions quickly. A new Gauss–Seidel refinement method is presented in this paper. The pace of convergence and the influence of the proposed refinement technique on certain classes of matrices are investigated. As will be seen, the spectral radius of the iteration matrix decreases while the convergence rate increases.

2. Methodology

Considering the large linear system (1), combining (1) with the splitting (2) gives the classical first-degree Gauss–Seidel iteration method
t^{(n+1)} = (D − G)^{−1} H t^{(n)} + (D − G)^{−1} b  (6)
The general Refinement approach is expressed as
t^{(n+1)} = t̃^{(n+1)} + (D − G)^{−1} (b − A t̃^{(n+1)})  (7)
Substituting (6) for t̃^{(n+1)} in (7) gives the Refinement of Gauss–Seidel (RGS) [4] as
t^{(n+1)} = [(D − G)^{−1} H]^2 t^{(n)} + [I + (D − G)^{−1} H](D − G)^{−1} b  (8)
Modification of (8) results in (9), the Second Refinement of Gauss–Seidel (SRGS) [5]:
t^{(n+1)} = [(D − G)^{−1} H]^3 t^{(n)} + [I + (D − G)^{−1} H + ((D − G)^{−1} H)^2](D − G)^{−1} b  (9)
We remodel (9), applying the refinement step (7) once more, to obtain (10)
t^{(n+1)} = [(D − G)^{−1} H]^3 t^{(n)} + [I + (D − G)^{−1} H + ((D − G)^{−1} H)^2](D − G)^{−1} b + (D − G)^{−1} (b − A{[(D − G)^{−1} H]^3 t^{(n)} + [I + (D − G)^{−1} H + ((D − G)^{−1} H)^2](D − G)^{−1} b})  (10)
Next, (10) is simplified to obtain
t^{(n+1)} = [(D − G)^{−1} H]^4 t^{(n)} + [I + (D − G)^{−1} H + ((D − G)^{−1} H)^2 + ((D − G)^{−1} H)^3](D − G)^{−1} b  (11)
Equation (11) is called the Third Refinement of Gauss–Seidel (TRGS) technique. The iteration matrix of TRGS is [(D − G)^{−1} H]^4. The method converges if its spectral radius is less than one, that is, ρ([(D − G)^{−1} H]^4) < 1. Moreover, the closer the spectral radius is to zero, the faster the convergence.
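The refinement step (7) that generates this hierarchy of methods is a residual correction with the Gauss–Seidel preconditioner (D − G), which is simply the lower triangle of A. A minimal sketch (the helper name `refine` is our own, not from the paper):

```python
import numpy as np

def refine(A, b, t_approx):
    """One refinement sweep: t <- t + (D - G)^(-1) (b - A t),
    where (D - G) = np.tril(A) (diagonal plus strictly lower part)."""
    L = np.tril(A)                      # D - G under the splitting A = D - G - H
    return t_approx + np.linalg.solve(L, b - A @ t_approx)

# repeated sweeps converge for a strictly diagonally dominant A
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
t = np.zeros(2)
for _ in range(50):
    t = refine(A, b, t)
```

Each sweep is algebraically one Gauss–Seidel step, which is why applying it to (9) yields the fourth power of the iteration matrix in (11).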

2.1. Convergence of Third-Refinement of Gauss–Seidel (TRGS)

Theorem 1.
If A is a strictly diagonally dominant (SDD) matrix, then the third refinement of Gauss–Seidel (TRGS) method converges for any choice of the initial approximation t^{(0)}.
Proof. 
Applying the idea of [4,13], let T be the exact solution of the linear system of the form (1). Since A is an SDD matrix, the Gauss–Seidel iterates satisfy t̃^{(n+1)} → T. The TRGS method can be written as
t^{(n+1)} = t̃^{(n+1)} + (D − G)^{−1} (b − A t̃^{(n+1)})
t^{(n+1)} − T = (t̃^{(n+1)} − T) + (D − G)^{−1} (b − A t̃^{(n+1)})
Hence, taking the norms of both sides results into
‖t^{(n+1)} − T‖ ≤ ‖t̃^{(n+1)} − T‖ + ‖(D − G)^{−1} (b − A t̃^{(n+1)})‖ → ‖T − T‖ + ‖(D − G)^{−1} (b − A T)‖ = 0 + ‖(D − G)^{−1} (b − b)‖ = 0
as n → ∞, since A T = b.
Therefore, t^{(n+1)} → T, implying that the TRGS method is convergent.  □
Theorem 2.
If A is an M-matrix, then the third refinement of Gauss–Seidel (TRGS) technique converges for any preliminary guess t^{(0)}.
Proof. 
We employ a procedure similar to that in the works of [5,13], and show that TRGS converges by examining the spectral radius of its iteration matrix. If A is an M-matrix, then the spectral radius of the Gauss–Seidel iteration matrix is less than 1. Thus, ρ((D − G)^{−1} H) < 1 implies ρ([(D − G)^{−1} H]^4) = [ρ((D − G)^{−1} H)]^4 < 1. Since the spectral radius of TRGS is less than 1, TRGS is convergent. □

2.2. Algorithm for Third Refinement of Gauss–Seidel (TRGS) Technique

(i) Input the coefficient matrix A = (a_{ij}); specify a preliminary estimate t^{(0)}, the maximum number of iterations and the tolerance ε.
(ii) Obtain the splitting matrices D, G and H from A = (a_{ij}).
(iii) Form the inverse of (D − G) and obtain J = (D − G)^{−1} H.
(iv) Form K = J^2, M = J^3 and N = J^4.
(v) Establish Z = [I + J + K + M](D − G)^{−1} b.
(vi) Iterate t^{(n+1)} = N t^{(n)} + Z and stop once ‖t^{(n+1)} − t^{(n)}‖ < ε.
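Steps (i)–(vi) can be transcribed almost directly into NumPy. The following is a minimal sketch, not the authors' implementation; the function name and default parameters are our own, while the variable names K, M, N and Z follow the algorithm:

```python
import numpy as np

def trgs(A, b, t0, tol=1e-8, max_iter=200):
    """Third Refinement of Gauss-Seidel (TRGS), steps (i)-(vi):
    t^(n+1) = N t^(n) + Z, with J = (D - G)^(-1) H, N = J^4 and
    Z = (I + J + J^2 + J^3)(D - G)^(-1) b, under A = D - G - H."""
    # (ii) split A into D, G (strictly lower) and H (strictly upper)
    D = np.diag(np.diag(A))
    G = -np.tril(A, -1)
    H = -np.triu(A, 1)
    # (iii) invert (D - G) and form the Gauss-Seidel iteration matrix J
    Minv = np.linalg.inv(D - G)
    J = Minv @ H
    # (iv) powers of the iteration matrix
    K = J @ J
    M = np.linalg.matrix_power(J, 3)
    N = np.linalg.matrix_power(J, 4)
    # (v) the constant vector Z
    Z = (np.eye(A.shape[0]) + J + K + M) @ (Minv @ b)
    # (vi) iterate until the update falls below the tolerance
    t = np.asarray(t0, dtype=float)
    for n in range(1, max_iter + 1):
        t_next = N @ t + Z
        if np.linalg.norm(t_next - t, np.inf) < tol:
            return t_next, n
        t = t_next
    return t, max_iter

# demo: a small strictly diagonally dominant system
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
t, steps = trgs(A, b, np.zeros(2))
```

One TRGS step performs the work of four Gauss–Seidel steps, so the iteration count is expected to drop to roughly a quarter, consistent with Table 1.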

3. Results and Discussion

In this section, a numerical experiment is presented to test the performance of the proposed technique against its earlier refinements.
Applied problem [14]: Consider the linear system of equations:
( 4.2   0   -1   -1    0    0   -1   -1 ) ( t1 )   (  6.2 )
( -1   4.2   0   -1   -1    0    0   -1 ) ( t2 )   (  5.4 )
( -1   -1   4.2   0   -1   -1    0    0 ) ( t3 )   ( -9.2 )
(  0   -1   -1   4.2   0   -1   -1    0 ) ( t4 ) = (  0.0 )
(  0    0   -1   -1   4.2   0   -1   -1 ) ( t5 )   (  6.2 )
( -1    0    0   -1   -1   4.2   0   -1 ) ( t6 )   (  1.2 )
( -1   -1    0    0   -1   -1   4.2   0 ) ( t7 )   (-13.4 )
(  0   -1   -1    0    0   -1   -1   4.2 ) ( t8 )   (  4.2 )
The true solution of the applied problem is t = (1.0, 2.0, -1.0, 0.0, 1.0, 1.0, -2.0, 1.0)^T.
From Table 1, it can be clearly observed that the proposed method greatly reduces the spectral radius of the iteration matrix and correspondingly attains a much higher convergence rate. Table 1 also shows that TRGS reduces the number of iterations to one-quarter of those required by GS, half of those of RGS, and several fewer than SRGS. Based on how near their spectral radii are to zero, it is inferred that TRGS converges faster than the compared techniques (ρ(TRGS) < ρ(SRGS) < ρ(RGS) < ρ(GS) < 1).
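The ordering of the spectral radii in Table 1 follows from the identity ρ(TRGS) = ρ(GS)^4. It can be checked numerically for the applied problem; the sketch below assumes NumPy, and the circulant construction of A (each row is the previous one shifted by one place) is our own shorthand for the matrix above:

```python
import numpy as np

# Coefficient matrix of the applied problem: 4.2 on the diagonal,
# off-diagonal pattern shifted one place per row (circulant).
row = np.array([4.2, 0.0, -1.0, -1.0, 0.0, 0.0, -1.0, -1.0])
A = np.array([np.roll(row, i) for i in range(8)])
b = np.array([6.2, 5.4, -9.2, 0.0, 6.2, 1.2, -13.4, 4.2])
t_true = np.linalg.solve(A, b)        # recovers (1, 2, -1, 0, 1, 1, -2, 1)

# Gauss-Seidel iteration matrix J = (D - G)^(-1) H
D = np.diag(np.diag(A))
G = -np.tril(A, -1)
H = -np.triu(A, 1)
J = np.linalg.inv(D - G) @ H

spectral_radius = lambda B: max(abs(np.linalg.eigvals(B)))
rho_gs = spectral_radius(J)                               # ~0.8953 per Table 1
rho_trgs = spectral_radius(np.linalg.matrix_power(J, 4))  # equals rho_gs**4
print(rho_gs, rho_trgs)
```

Raising the iteration matrix to the fourth power raises its spectral radius to the fourth power, which is the whole source of the acceleration reported in Table 1.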

4. Conclusions

In this study, an accelerated iterative technique named the “Third Refinement of Gauss–Seidel” (TRGS) technique is proposed. The TRGS algorithm is well suited to solving large systems of linear equations, as it shows a significant reduction in iteration steps and an increase in convergence rate. The analysis in Theorems 1 and 2 verifies that the proposed technique is convergent, and the efficiency of TRGS is illustrated through the applied problem, as shown in Table 2. It can be deduced from our analysis that the proposed technique achieves a qualitative and quantitative improvement in solving linear systems of equations and is more efficient than the existing refinements of the Gauss–Seidel technique.

Author Contributions

Conceptualization, methodology, software, validation and formal analysis, K.J.A.; investigation, data curation, writing—original draft preparation, writing—review and editing, J.N.E.; visualization, supervision and project administration, K.J.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are calculated and generated from the proposed method.

Acknowledgments

The authors would like to thank Abdgafar Tunde Tiamiyu, Department of Mathematics, The Chinese University of Hong Kong (CUHK), Hong Kong, for his academic support and valuable advice.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Audu, K.J. Extended accelerated overrelaxation iteration techniques in solving heat distribution problems. Songklanakarin J. Sci. Technol. 2022, 44, 1232–1237. [Google Scholar]
  2. Audu, K.J.; Yahaya, Y.A.; Adeboye, K.R.; Abubakar, U.Y. Convergence of Triple Accelerated Over-Relaxation (TAOR) Method for M-Matrix Linear Systems. Iran. J. Optim. 2021, 13, 93–101. [Google Scholar]
  3. Vatti, V.B.K. Numerical Analysis Iterative Methods; I.K International Publishing House: New Delhi, India, 2016. [Google Scholar]
  4. Vatti, V.B.K.; Tesfaye, E.K. A refinement of Gauss-Seidel method for solving of linear system of equations. Int. J. Contemp. Math. Sci. 2011, 63, 117–127. [Google Scholar]
  5. Tesfaye, E.K.; Awgichew, G.; Haile, E.; Gashaye, D.A. Second refinement of Gauss-Seidel iteration method for solving linear system of equations. Ethiop. J. Sci. Technol. 2020, 13, 1–15. [Google Scholar]
  6. Tesfaye, E.K.; Awgichew, G.; Haile, E.; Gashaye, D.A. Second refinement of Jacobi iterative method for solving linear system of equations. Int. J. Comput. Sci. Appl. Math. 2019, 5, 41–47. [Google Scholar]
  7. Vatti, V.B.K.; Shouri, D. Parametric preconditioned Gauss-Seidel iterative method. Int. J. Curr. Res. 2016, 8, 37905–37910. [Google Scholar]
  8. Saha, M.; Chakrabarty, J. Convergence of Jacobi, Gauss-Seidel and SOR methods for linear systems. Int. J. Appl. Contemp. Math. Sci. 2020, 6, 77. [Google Scholar] [CrossRef]
  9. Genanew, G.G. Refined iterative method for solving system of linear equations. Am. J. Comput. Appl. Math. 2016, 6, 144–147. [Google Scholar]
  10. Quateroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer: New York, NY, USA, 2000. [Google Scholar]
  11. Constantinescu, R.; Poenaru, R.C.; Popescu, P.G. A new version of KSOR method with lower number of iterations and lower spectral radius. Soft Comput. 2019, 23, 11729–11736. [Google Scholar] [CrossRef]
  12. Muhammad, S.R.B.; Zubair, A.K.; Mir Sarfraz, K.; Adul, W.S. A new improved Classical Iterative Algorithm for Solving System of Linear Equations. Proc. Pak. Acad. Sci. 2021, 58, 69–81. [Google Scholar]
  13. Lasker, A.H.; Behera, S. Refinement of iterative methods for the solution of system of linear equations Ax=b. IOSR J. Math. 2014, 10, 70–73. [Google Scholar] [CrossRef]
  14. Meligy, S.A.; Youssef, I.K. Relaxation parameters and composite refinement techniques. Results Appl. Math. J. 2022, 15, 100282. [Google Scholar] [CrossRef]
Table 1. Comparison of Spectral radius and Convergence rate for the Applied Problem.
Technique   Iteration Steps   Spectral Radius   Execution Time (s)   Convergence Rate
GS          88                0.89530           6.70                 0.04803
RGS         44                0.80157           5.53                 0.09606
SRGS        30                0.71765           5.00                 0.14408
TRGS        22                0.64251           4.10                 0.19212
Table 2. The Iterate Solution of Applied Problem.
Technique   n    t1^(n)     t2^(n)     t3^(n)      t4^(n)      t5^(n)     t6^(n)     t7^(n)      t8^(n)
GS          0    0          0          0           0           0          0          0           0
            1    1.47620    1.63720    -1.44920    0.04476     1.14180    0.91970    -1.95840    0.79746
            2    0.86540    1.96410    -1.02590    -0.02391    0.94982    0.90209    -2.07580    0.94392
            87   0.99999    2.00000    -1.00000    0.00000     1.00000    1.00000    -2.00000    1.00000
            88   1.00000    2.00000    -1.00000    0.00000     1.00000    1.00000    -2.00000    1.00000
RGS         0    0          0          0           0           0          0          0           0
            1    0.86540    1.96410    -1.02590    -0.02391    0.94982    0.90209    -2.07580    0.94392
            2    0.94909    1.94710    -1.05170    -0.04878    0.95370    0.95407    -2.04670    0.95306
            43   0.99999    2.00000    -1.00000    0.00000     0.99999    0.99999    -2.00000    0.99999
            44   1.00000    2.00000    -1.00000    0.00000     1.00000    1.00000    -2.00000    1.00000
SRGS        0    0          0          0           0           0          0          0           0
            1    0.95672    1.95870    -1.05540    -0.06439    0.94007    0.94674    -2.04710    0.95308
            2    0.95952    1.96010    -1.03950    -0.03950    0.96145    0.96206    -2.03740    0.96315
            29   0.99999    2.00000    -1.00000    0.00000     1.00000    1.00000    -2.00000    1.00000
            30   1.00000    2.00000    -1.00000    0.00000     1.00000    1.00000    -2.00000    1.00000
TRGS        0    0          0          0           0           0          0          0           0
            1    0.94909    1.94710    -1.05170    -0.04878    0.95370    0.95407    -2.04670    0.95306
            2    0.96741    1.96790    -1.03170    -0.03170    0.96917    0.96959    -2.03000    0.97042
            21   0.99999    2.00000    -1.00000    0.00000     0.99999    0.99999    -2.00000    0.99999
            22   1.00000    2.00000    -1.00000    0.00000     1.00000    1.00000    -2.00000    1.00000
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
