f: source term of the Poisson equation
function ps_fst(nx,ny,dx,dy,f)
    data = Array{Complex{Float64}}(undef,nx-1,ny-1)
    data1 = Array{Complex{Float64}}(undef,nx-1,ny-1)

    # copy interior values of the source term
    for i = 1:nx-1
        for j = 1:ny-1
            data[i,j] = f[i+1,j+1]
        end
    end

    # forward fast sine transform of the source term
    e = FFTW.r2r(data,FFTW.RODFT00)

    # divide by the eigenvalues of the discrete Laplacian
    for i = 1:nx-1
        for j = 1:ny-1
            alpha = (2.0/(dx*dx))*(cos(pi*i/nx) - 1.0) +
                    (2.0/(dy*dy))*(cos(pi*j/ny) - 1.0)
            data1[i,j] = e[i,j]/alpha
        end
    end

    # inverse transform (RODFT00 is its own inverse up to a factor of 2nx*2ny)
    u = FFTW.r2r(data1,FFTW.RODFT00)/((2*nx)*(2*ny))

    return u
end
```
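The solver above can be exercised with a manufactured solution. The sketch below restates `ps_fst` so the snippet runs on its own; the grid size, the domain $[0,1]^2$, and the test field $u = \sin(\pi x)\sin(\pi y)$ are our assumptions for illustration, and FFTW.jl must be installed:

```julia
using FFTW

# Restating ps_fst from the listing above so this snippet is self-contained.
function ps_fst(nx,ny,dx,dy,f)
    data = Array{Complex{Float64}}(undef,nx-1,ny-1)
    data1 = Array{Complex{Float64}}(undef,nx-1,ny-1)
    for i = 1:nx-1, j = 1:ny-1
        data[i,j] = f[i+1,j+1]
    end
    e = FFTW.r2r(data,FFTW.RODFT00)
    for i = 1:nx-1, j = 1:ny-1
        alpha = (2.0/(dx*dx))*(cos(pi*i/nx) - 1.0) +
                (2.0/(dy*dy))*(cos(pi*j/ny) - 1.0)
        data1[i,j] = e[i,j]/alpha
    end
    return FFTW.r2r(data1,FFTW.RODFT00)/((2*nx)*(2*ny))
end

# Manufactured solution compatible with homogeneous Dirichlet BCs:
# u = sin(pi*x)sin(pi*y)  =>  f = laplacian(u) = -2*pi^2*sin(pi*x)sin(pi*y)
nx, ny = 64, 64
dx, dy = 1.0/nx, 1.0/ny
x = [i*dx for i in 0:nx]
y = [j*dy for j in 0:ny]
ue = [sin(pi*xi)*sin(pi*yj) for xi in x, yj in y]
f  = -2.0*pi^2 .* ue

un  = real.(ps_fst(nx, ny, dx, dy, f))      # interior solution, (nx-1)x(ny-1)
err = maximum(abs.(un .- ue[2:nx, 2:ny]))   # max-norm discretization error
```

Because the discrete sine transform diagonalizes the five-point Laplacian exactly, the measured error is purely the $O(\Delta x^2)$ discretization error of the stencil.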
**Figure 19.** Comparison of the exact solution and the numerical solution computed using the fast sine transform method for the Poisson equation with Dirichlet boundary conditions.

#### *6.2. Iterative Solver*

Iterative methods use successive approximations to obtain increasingly accurate solutions to a linear system at every iteration step. These methods start with an initial guess and generate a sequence of approximations in which the *k*-th approximation is derived from the previous ones. There are two main types of iterative methods: stationary methods, which are easy to understand and simple to implement but not very effective, and non-stationary methods, which are based on the idea of sequences of orthogonal vectors. We recommend the text by Barrett et al. [40], which provides a good discussion of the implementation of iterative methods for solving linear systems of equations.

We show only the Dirichlet boundary condition implementation for iterative methods. We stop the iterations when the $L_2$ norm of the residual for Equation (83) falls below $1 \times 10^{-10}$. We test the performance of all iterative methods using the method of manufactured solutions. The exact field *u* and the corresponding source term *f* used for the method of manufactured solutions for iterative methods are given below [12]

$$u(x,y) = (x^2 - 1)(y^2 - 1),\tag{97}$$

$$f(x,y) = -2(2 - x^2 - y^2). \tag{98}$$
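As a sanity check (ours, not from the paper), Equation (98) is indeed the Laplacian of Equation (97): $\nabla^2 u = 2(y^2-1) + 2(x^2-1) = -2(2-x^2-y^2)$. This can be confirmed numerically; since *u* is quadratic in each coordinate, central second differences reproduce the Laplacian to roundoff. The sample point below is arbitrary:

```julia
# Check that f = laplacian(u) for the manufactured solution (97)-(98), using
# central second differences, which are exact for quadratics in each variable.
u(x, y) = (x^2 - 1.0)*(y^2 - 1.0)
f(x, y) = -2.0*(2.0 - x^2 - y^2)

h = 1.0e-3
x0, y0 = 0.3, -0.7                       # arbitrary sample point (assumption)
lap = (u(x0+h,y0) - 2.0*u(x0,y0) + u(x0-h,y0))/h^2 +
      (u(x0,y0+h) - 2.0*u(x0,y0) + u(x0,y0-h))/h^2
err = abs(lap - f(x0, y0))               # limited only by roundoff
```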

#### 6.2.1. Stationary Methods: Gauss-Seidel

The iterative methods work by splitting the matrix *A* into

$$A = M - P.\tag{99}$$

Equation (84) becomes

$$Mu = Pu + b.\tag{100}$$

The solution at the (*k* + 1)-th iteration is given by

$$Mu^{(k+1)} = Pu^{(k)} + b.\tag{101}$$

If we take the difference between the above two equations, we get the evolution of the error as $\epsilon^{(k+1)} = M^{-1}P\,\epsilon^{(k)}$, where $\epsilon^{(k)} = u - u^{(k)}$ is the error at the *k*-th iteration. For the solution to converge to the exact solution and the error to go to zero, the largest eigenvalue (in magnitude) of the iteration matrix $M^{-1}P$ must be less than one. The approximate number of iterations required for the error to fall below some specified tolerance *δ* is given by

$$Q = \frac{\ln(\delta)}{\ln(\lambda_1)},\tag{102}$$

where $Q$ is the number of iterations and $\lambda_1$ is the maximum eigenvalue of the iteration matrix $M^{-1}P$. If the convergence tolerance is specified as $1 \times 10^{-5}$, then the number of iterations for convergence will be $Q = 1146$ and $Q = 23{,}020$ for $\lambda_1 = 0.99$ and $\lambda_1 = 0.9995$, respectively. Therefore, the maximum eigenvalue of the iteration matrix should be as small as possible for fast convergence. When we implement iterative methods, we do not form the matrices *M* and *P*; Equation (101) is just the matrix representation of the iterative method. In matrix terms, the Gauss-Seidel method can be expressed as
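The iteration counts quoted above follow directly from Equation (102); a short check (our sketch):

```julia
# Iteration-count estimate Q = ln(delta)/ln(lambda1) from Equation (102).
iterations(delta, lambda1) = round(Int, log(delta)/log(lambda1))

Q1 = iterations(1.0e-5, 0.99)     # 1146 iterations
Q2 = iterations(1.0e-5, 0.9995)   # 23020 iterations
```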

$$M = D - L,\tag{103}$$

$$u^{(k+1)} = (D - L)^{-1}\left(U u^{(k)} + b\right),\tag{104}$$

where *D*, *L* and *U* represent the diagonal, the strictly lower-triangular, and the strictly upper-triangular parts of *A*, respectively. We do not explicitly construct the matrices *D*, *L*, and *U*; instead, we use the update formula based on data vectors (i.e., we obtain a vector once a matrix operates on a vector).
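As a toy illustration of the splitting (our example; the $3 \times 3$ matrix and right-hand side are arbitrary), the Gauss-Seidel iteration of Equations (101), (103), and (104) can be carried out with explicit $M$ and $P$ on a small system, which also lets us inspect the spectral radius of $M^{-1}P$. This is purely for illustration; as noted above, the matrices are never formed in the actual solver:

```julia
using LinearAlgebra

# Gauss-Seidel as the splitting A = M - P with M = D - L and P = U.
A = [ 4.0 -1.0  0.0;
     -1.0  4.0 -1.0;
      0.0 -1.0  4.0]
b = [1.0, 2.0, 3.0]

M = tril(A)              # D - L: diagonal plus strictly lower part of A
P = M - A                # U: strictly upper-triangular part in the A = D-L-U convention
lam1 = maximum(abs.(eigvals(M \ P)))   # spectral radius of the iteration matrix

u = zeros(3)
for k in 1:200           # iterate u <- M^{-1}(P*u + b), Equation (101)
    u .= M \ (P*u .+ b)
end
```

For this diagonally dominant system the spectral radius is well below one, so the iterates converge rapidly to the solution of $Au = b$.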

The update formula for Gauss-Seidel iterations, if we sweep from left to right and from bottom to top, can be written as

$$u_{i,j}^{(k+1)} = u_{i,j}^{(k)} - \left(\frac{2}{\Delta x^2} + \frac{2}{\Delta y^2}\right)^{-1} r_{i,j},\tag{105}$$

where

$$r\_{i,j} = f\_{i,j}^{(k)} - \frac{u\_{i+1,j}^{(k)} - 2u\_{i,j}^{(k)} + u\_{i-1,j}^{(k+1)}}{\Delta x^2} - \frac{u\_{i,j+1}^{(k)} - 2u\_{i,j}^{(k)} + u\_{i,j-1}^{(k+1)}}{\Delta y^2}. \tag{106}$$

The implementation of the Gauss-Seidel method for the Poisson equation in Julia is given in Listing 21. The pseudo-algorithm is given in Algorithm 3, where we define an operator $\mathcal{A}$ such that

$$\mathcal{A}u\_{i,j} = \frac{u\_{i+1,j} - 2u\_{i,j} + u\_{i-1,j}}{\Delta x^2} + \frac{u\_{i,j+1} - 2u\_{i,j} + u\_{i,j-1}}{\Delta y^2},\tag{107}$$

which gives the same result as the matrix-vector multiplication $Au$ at the $(i,j)$-th grid point. Notice that the matrices *M* and *P* are not formed; the solution is computed using Equation (105) at every point on the grid. Since we traverse from bottom to top and left to right, the updated values to the left ($u_{i-1,j}$) and bottom ($u_{i,j-1}$) are used when updating the solution at the $(i,j)$-th grid point. This accelerates convergence compared to the Jacobi method, which uses values from the previous iteration step for all neighboring points.

Figure 20 shows the contour plots of the exact and numerical solutions for the Gauss-Seidel solver. As seen in Figure 24, the residual falls below $10^{-10}$ only after more than 400,000 iterations for the Gauss-Seidel method. The maximum eigenvalue of the iteration matrix is large for the Gauss-Seidel method and, hence, the rate of convergence is slow. The rate of convergence is higher in the beginning, while high-wavenumber errors are still present. Once the high-wavenumber errors are resolved, the rate of convergence for low-wavenumber errors drops. There are ways to accelerate convergence, such as successive overrelaxation methods [41]. The residual is computed from Equation (83) as the difference between the source term and the left-hand side computed with the central-difference scheme. We would like to point out that the residual is different from the discretization error: the discretization error is the difference between the numerical and exact solutions.

**Algorithm 3** Gauss-Seidel iterative method

1: Given source term $b$
2: $u = u^{(0)}$ (initialize the solution)
3: $d = -\left(\frac{2}{\Delta x^2} + \frac{2}{\Delta y^2}\right)$
4: **while** tolerance not met **do**
5: &emsp;**for** $i = 1$ to $N_x$ **do**
6: &emsp;&emsp;**for** $j = 1$ to $N_y$ **do**
7: &emsp;&emsp;&emsp;$r_{i,j} = b_{i,j} - \mathcal{A}u_{i,j}$ (compute the residual)
8: &emsp;&emsp;&emsp;$u_{i,j} = u_{i,j} + r_{i,j}/d$ (update the solution)
9: &emsp;&emsp;**end for**
10: &emsp;**end for**
11: &emsp;check convergence criteria, continue if necessary
12: **end while**
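Algorithm 3 can be sketched compactly as a Julia function; this is a minimal version of the sweep in Equations (105) and (106), separate from Listing 21, and the function name, keyword defaults, and in-place interface are our choices:

```julia
# Minimal Gauss-Seidel sweep for the Poisson equation (Algorithm 3).
# u holds boundary values on its edges; only interior nodes are updated.
function gauss_seidel!(u, f, dx, dy; tol=1.0e-10, maxiter=1_000_000)
    nx, ny = size(u, 1) - 1, size(u, 2) - 1
    d = -(2.0/dx^2 + 2.0/dy^2)          # step 3 of Algorithm 3
    for iter in 1:maxiter
        resmax = 0.0
        for i in 2:nx, j in 2:ny        # sweep left-to-right, bottom-to-top
            r = f[i,j] - (u[i+1,j] - 2.0*u[i,j] + u[i-1,j])/dx^2 -
                         (u[i,j+1] - 2.0*u[i,j] + u[i,j-1])/dy^2
            u[i,j] = u[i,j] + r/d       # update, Equation (105)
            resmax = max(resmax, abs(r))
        end
        resmax < tol && break           # convergence check
    end
    return u
end
```

With the manufactured solution of Equations (97) and (98) on $[-1,1]^2$ and homogeneous Dirichlet boundaries, the converged field matches the exact polynomial to roundoff, since the five-point stencil differentiates this polynomial exactly.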

**Listing 21.** Implementation of the Gauss-Seidel iterative method for the Poisson equation in Julia.

```