Article

Numerical Solution of the Poisson Equation Using Finite Difference Matrix Operators

by
Mohammad Asif Zaman
Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
Electronics 2022, 11(15), 2365; https://doi.org/10.3390/electronics11152365
Submission received: 21 June 2022 / Revised: 24 July 2022 / Accepted: 25 July 2022 / Published: 28 July 2022

Abstract

The Poisson equation frequently emerges in many fields of science and engineering. As exact solutions are rarely possible, numerical approaches are of great interest. Despite this, a succinct discussion of a systematic approach to constructing a flexible and general numerical Poisson solver can be difficult to find. In this introductory paper, a comprehensive discussion is presented on how to build a finite difference matrix solver that can solve the Poisson equation for arbitrary geometry and boundary conditions. The boundary conditions are implemented in a systematic way that enables easy modification of the solver for different problems. An image-based geometry-definition approach is also discussed. Python code of the numerical recipe is made publicly available. Numerical examples are presented that show how to set up the solver for different problems.

1. Introduction

The Poisson equation is an elliptic partial differential equation (PDE) and is ubiquitous in many areas of physics and engineering. Two of its common uses are modeling the electrostatic potential and the gravitational potential. Some forms of the Poisson equation also appear in fluid flow [1] and heat transfer problems [2,3]. Often, analytic solutions of such problems are not possible for cases of practical interest due to complex geometries. Numerically solving the Poisson equation is then the only way to study these systems. As such, numerical solution methods for the Poisson equation are of significant interest [2,4,5].
There are several methods for solving the Poisson equation numerically [6]. The Finite-Difference Method (FDM) is one of the simplest and most popular approaches [7,8,9,10]. This method involves replacing the continuous derivative operators with approximate, discrete finite-difference operators [11] that take the form of matrices. The method is intuitive, and the solution process involves well-established linear algebraic methods. It suffers from some drawbacks, such as difficulty in accurately representing complex geometries and boundaries. Methods such as the Finite Element Method (FEM) are better at handling arbitrary geometries. However, due to the ease of implementation and the intuitive nature of FDM, it remains a very popular approach for solving PDEs.
There are several references that discuss solving the Poisson equation using FDM [12]. While these papers cover many details, they usually do not present a systematic approach of constructing the numerical system (i.e., system matrix formation) for arbitrary geometries and arbitrary boundary conditions. Thus, for every variation of the Poisson equation, the linear algebraic system has to be reconstructed from scratch. The goal of this paper is to introduce a systematic approach for solving the Poisson equation for any arbitrary geometry and boundary conditions with minimal modification of the numerical solver/computer code. This would allow one to study a variety of configurations of the physical system without having to reformulate the numerical recipe.
Setting up an FDM solver has some basic steps. The first step is discretization of the solution space. A series of grid points in space where the equation will be solved is defined. At each of these points, the Poisson equation is written in discrete form using difference equations. In the conventional approach, the boundary conditions are incorporated at this step by modifying the difference equations at boundary points. This step depends on the geometry and the type of the boundary conditions. After this adjustment, we are left with a system of linear equations from which a full rank matrix equation can be formed. This matrix is referred to as the system matrix. The system is then solved. Note that when the geometry or the boundary conditions are altered, the system matrix changes and thus has to be reconstructed. Most references do not provide the reader with a systematic approach to do this. In many cases, one is required to study different geometrical variations of the same problem. Thus, one is often left with the cumbersome task of modifying their numerical solver for each configuration.
In this paper, the system matrix is defined as the combination of two matrices. One matrix is independent of the problem geometry and thus remains unchanged for all cases. This is referred to as the derivative operator matrix. For the Poisson equation, this is the second-derivative operator. The other matrix is defined to be dependent on the geometry and the boundary conditions. This is referred to as the boundary operator matrix. Modifying the boundary operator matrix turns out to be relatively simple and intuitive. It is shown that the system matrix can be constructed by compiling rows from these matrices appropriately. Note that when the geometry and/or the boundary conditions change, only the boundary operator matrix and the system matrix construction procedure changes. Thus, different geometry variations (and/or boundary conditions) could be investigated with minor adjustments of the same numerical code. This is the key advantage of the proposed method.
In this paper, a complete numerical recipe of how to implement a flexible Poisson equation solver is presented in detail. In addition, a Python implementation of the algorithm is discussed and is made available to the readers through a GitHub repository [13]. Some example numerical results are also presented that showcase the capabilities of the solver.
The rest of the paper is organized as follows: Section 2 defines the Poisson equation and the types of boundary conditions typically encountered, Section 3 discusses the finite difference formulation for one-dimensional (1D) and two-dimensional (2D) cases; the computer implementation of the algorithm using Python is discussed in Section 4; Section 5 presents numerical results for a few example problems; and concluding remarks are made in Section 6.

2. The Poisson Equation and Boundary Conditions

The Poisson equation consists of three second-order spatial derivatives. The general form of the equation in Cartesian coordinates is:
$$ \Delta u(x, y, z) = f(x, y, z), \quad \text{on } \Omega, \tag{1} $$
where
$$ \Delta = \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \tag{2} $$
is the Laplacian operator in Cartesian coordinates, x, y, z are the independent Cartesian spatial coordinates, and Ω is the solution domain (i.e., the set of x, y, z values for which the equation is to be solved). The dependent variable, u, is the function that is to be solved, and f is known as the source function, which is usually given. In general, it is a function of the space coordinates. In the special case of f = 0 , the equation reduces to the Laplace equation.
Like any differential equation, boundary conditions must be defined before a solution can be attempted. A complete definition of the problem includes Equation (1) with a defined source function and boundary conditions enforced on u or its first derivatives (i.e., ∂u/∂x, ∂u/∂y, and ∂u/∂z). Boundary conditions may also be enforced on both u and its derivatives simultaneously. The boundary of the solution domain Ω is denoted as ∂Ω. The boundary can be subdivided into M arbitrary segments or regions, ∂Ω_1, ∂Ω_2, …, ∂Ω_M, each having its own boundary conditions. Usually, four common types of boundary conditions are encountered:
  • Dirichlet Boundary Conditions, having the form: $u|_{\partial\Omega_i} = u_i$
  • Neumann Boundary Conditions, having the form: $\left.\frac{\partial u}{\partial \eta}\right|_{\partial\Omega_i} = g_i$
  • Mixed Boundary Conditions, having the form: $u|_{\partial\Omega_i} = u_i$, $\left.\frac{\partial u}{\partial \eta}\right|_{\partial\Omega_j} = g_j$
  • Robin Boundary Conditions, having the form: $\left.\left(A u + B \frac{\partial u}{\partial \eta}\right)\right|_{\partial\Omega_i} = g_i$
Here η ∈ {x, y, z} represents one of the Cartesian spatial variables, i, j ∈ {1, 2, …, M}, A and B are constants, and u_i, g_j are given quantities. Note that for Neumann boundary conditions, since the constraint is applied on the derivative, u can only be determined up to a constant. Hence, an additional constraint is required to obtain a unique solution. This constraint can have the following form [14,15]:
$$ \int_{\Omega} u \, dx = 0. \tag{3} $$
The boundary regions can also be specified inside the simulation domain itself and are not necessarily defined on the outer perimeter (i.e., some ∂Ω_i among ∂Ω_1, ∂Ω_2, …, ∂Ω_M may not reside on the outer edges of Ω). This is more common for Dirichlet boundaries [11]. The solution scheme presented here makes no distinction between the inner and outer boundary regions. The method works for any arbitrary ∂Ω_i for all i.

3. Finite Difference Discretization and Matrix Operators

All the quantities in Equation (1) are continuous variables. When solving numerically, the equation is solved only at discrete points. The first step is to convert the variables and the equation from continuous domain to discrete domain. The 1D case is considered first for simplicity. Higher-dimensional cases can be built from the one-dimensional case.

3.1. The Poisson Equation in One Dimension

Considering only one space variable, x, Equation (1) can be written as:
$$ \frac{d^2 u(x)}{dx^2} = f(x), \quad x \in \Omega, \tag{4} $$
where Ω is the solution domain defining the possible x values. A set of N discrete values x_1, x_2, …, x_N of x is selected where the equation is to be solved. Then, the domain is Ω = [x_1, x_N]. The boundary of the domain is assumed to be the outer edges: ∂Ω = {x_1, x_N}. Some texts use the notation x_0 and x_{N+1} to indicate the boundary points and exclude them from the definition of the solution domain. However, in this paper, the edge points of the solution domain are taken as boundary points unless otherwise stated.
The domain points are referred to as x_i, i = 1, 2, …, N. For simplicity, it is assumed that these points are separated by a constant spacing Δx (i.e., x_{i+1} − x_i = Δx for all i). The corresponding values of f and u are denoted f_i and u_i, respectively (i.e., f(x_i) = f_i, u(x_i) = u_i). It is convenient to express these sets of discrete points as vectors x = [x_1 x_2 … x_N]^T, f = [f_1 f_2 … f_N]^T, and u = [u_1 u_2 … u_N]^T. Note that transpose notation is used to write the column vectors in a compact manner. With vector notation, the differential equation can be imagined as a linear algebra problem where the vector u is to be solved for a given f. The next step is to convert the second-derivative operator into a matrix operator so that a complete linear algebraic system can be formed.
To convert a continuous derivative operator to a discrete matrix operator, the starting point is the finite difference formulas for calculating the derivatives. The vectors u' = [u'_1 u'_2 … u'_N]^T and u'' = [u''_1 u''_2 … u''_N]^T are defined to denote the first and second derivatives, respectively, corresponding to the points x (i.e., u'_i = u'(x_i) and u''_i = u''(x_i)). Using the second-order center-difference formulas, the first and second derivatives can be expressed as [16]:
$$ \left.\frac{du}{dx}\right|_{x = x_i} \equiv u'_i = \frac{-u_{i-1} + u_{i+1}}{2\Delta x}, \tag{5} $$
$$ \left.\frac{d^2 u}{dx^2}\right|_{x = x_i} \equiv u''_i = \frac{u_{i-1} - 2u_i + u_{i+1}}{(\Delta x)^2}. \tag{6} $$
Note that there are other, higher-order finite difference formulas for the above derivatives. The widely used second-order center-difference formulas are selected for the current analysis. Now, from Equations (5) and (6), focusing only on three consecutive quantities u_{i-1}, u_i, u_{i+1}, the following matrix equations can be written:
$$ u'_i = \frac{1}{2\Delta x}\begin{bmatrix} -1 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} u_{i-1} \\ u_i \\ u_{i+1} \end{bmatrix}, \tag{7} $$
$$ u''_i = \frac{1}{(\Delta x)^2}\begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \cdot \begin{bmatrix} u_{i-1} \\ u_i \\ u_{i+1} \end{bmatrix}. \tag{8} $$
Equations (7) and (8) work fine for i = 2 to N − 1. However, for the boundary points i = 1 and i = N, the subscripts overflow the bounds, requiring values of u_0 and u_{N+1}, respectively. Thus, the equations are not valid for those two cases (note that the outer boundary points are x_1 and x_N). We resort to using forward (for i = 1) and backward (for i = N) finite difference formulas for these two cases [16]:
$$ u'_1 = \frac{-3u_1 + 4u_2 - u_3}{2\Delta x} = \frac{1}{2\Delta x}\begin{bmatrix} -3 & 4 & -1 \end{bmatrix} \cdot \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}^T, \tag{9} $$
$$ u''_1 = \frac{2u_1 - 5u_2 + 4u_3 - u_4}{(\Delta x)^2} = \frac{1}{(\Delta x)^2}\begin{bmatrix} 2 & -5 & 4 & -1 \end{bmatrix} \cdot \begin{bmatrix} u_1 & u_2 & u_3 & u_4 \end{bmatrix}^T, \tag{10} $$
$$ u'_N = \frac{u_{N-2} - 4u_{N-1} + 3u_N}{2\Delta x} = \frac{1}{2\Delta x}\begin{bmatrix} 1 & -4 & 3 \end{bmatrix} \cdot \begin{bmatrix} u_{N-2} & u_{N-1} & u_N \end{bmatrix}^T, \tag{11} $$
$$ u''_N = \frac{-u_{N-3} + 4u_{N-2} - 5u_{N-1} + 2u_N}{(\Delta x)^2} = \frac{1}{(\Delta x)^2}\begin{bmatrix} -1 & 4 & -5 & 2 \end{bmatrix} \cdot \begin{bmatrix} u_{N-3} & u_{N-2} & u_{N-1} & u_N \end{bmatrix}^T. \tag{12} $$
Now, using Equations (7) to (12), the first and second derivatives at all points in the solution space can be calculated. These equations for all i values can be compiled to form the following matrix equations:
$$ \mathbf{u}' = \begin{bmatrix} u'_1 \\ u'_2 \\ u'_3 \\ \vdots \\ u'_{N-1} \\ u'_N \end{bmatrix} = \underbrace{\frac{1}{2\Delta x}\begin{bmatrix} -3 & 4 & -1 & 0 & \cdots & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & \cdots & 0 & 0 & 0 \\ 0 & -1 & 0 & 1 & \cdots & 0 & 0 & 0 \\ \vdots & & & \ddots & \ddots & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & -1 & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 1 & -4 & 3 \end{bmatrix}}_{\mathbf{D}_x} \cdot \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ \vdots \\ u_{N-1} \\ u_N \end{bmatrix}, \tag{13} $$
$$ \mathbf{u}'' = \begin{bmatrix} u''_1 \\ u''_2 \\ u''_3 \\ \vdots \\ u''_{N-1} \\ u''_N \end{bmatrix} = \underbrace{\frac{1}{(\Delta x)^2}\begin{bmatrix} 2 & -5 & 4 & -1 & \cdots & 0 & 0 & 0 \\ 1 & -2 & 1 & 0 & \cdots & 0 & 0 & 0 \\ 0 & 1 & -2 & 1 & \cdots & 0 & 0 & 0 \\ \vdots & & & \ddots & \ddots & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & \cdots & -1 & 4 & -5 & 2 \end{bmatrix}}_{\mathbf{D}_{2x}} \cdot \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ \vdots \\ u_{N-1} \\ u_N \end{bmatrix}. \tag{14} $$
It should be noted that for nonuniform grids, the term Δ x would no longer be constant. As a result, in the place of the common Δ x factor in the matrices, every matrix element would have a denominator corresponding to its own Δ x value. Every other step discussed in this manuscript would remain unchanged. With the help of these matrices, the derivative operations can be expressed in linear algebra form as:
$$ \mathbf{u}' = \mathbf{D}_x \, \mathbf{u}, \tag{15} $$
$$ \mathbf{u}'' = \mathbf{D}_{2x} \, \mathbf{u}. \tag{16} $$
Here D_x and D_2x are N × N matrices, as shown in Equations (13) and (14). These are referred to as matrix operators for the first and second derivatives, respectively, as they operate on a vector to calculate its derivatives. Equation (4) can be converted to a discrete linear algebraic system using these matrix operators:
$$ \mathbf{D}_{2x} \, \mathbf{u} = \mathbf{f}, \quad \text{on } \Omega. \tag{17} $$
Now the boundary conditions must be implemented in such a way that they can be easily integrated with Equation (17).
Let us consider a general implementation of boundary conditions in matrix form:
$$ \mathbf{B} \, \mathbf{u} = \mathbf{g}, \quad \text{on } \partial\Omega. \tag{18} $$
Here B is referred to as the boundary operator matrix, and g is the known/given boundary function vector. For a given problem, the boundary operator matrix is determined by the type of the boundary conditions. For Dirichlet boundary conditions, the boundary function values are directly assigned to the boundary points. In such a case, B would be an identity matrix, and g would list the values of u at the boundary. For Neumann boundary conditions, the value of the first derivative is assigned at the boundary points. For this case, B would be the first-derivative matrix operator D_x, and g would list the values of the first derivative of u at the boundary. Thus,
$$ \mathbf{B} = \begin{cases} \mathbf{I}_N, & \text{for Dirichlet boundary conditions} \\ \mathbf{D}_x, & \text{for Neumann boundary conditions} \end{cases} \tag{19} $$
Here I_N is the N × N identity matrix, and D_x is expressed in Equation (13). Note that although Equation (19) relates the quantities in such a way that B operates on the entire u, the equation is valid only at the boundary ∂Ω. In fact, the elements of g at non-boundary points are unknown. In other words, the locations of the boundaries are not defined explicitly within B. However, to understand which entries of B come from I_N and which entries come from D_x, information about the geometry of the system is required. In the following paragraphs, we will discuss how to compile the linear algebraic system so that only the relevant rows of B are used. This is where the remaining geometry information will be encoded. Before that, it is important to discuss how a multi-boundary system would be treated. As previously mentioned, ∂Ω can be subdivided into smaller regions ∂Ω_1, ∂Ω_2, …, ∂Ω_M, each with its own boundary conditions. Each ∂Ω_i corresponds to a set of specific rows in the boundary operator matrix. For such cases, g would be defined appropriately in a piecewise manner. For mixed boundary conditions, the rows of B would be a compilation of rows taken from either I_N or D_x, depending on which boundary regime the corresponding equation falls under. It should also be noted that for Neumann boundary conditions, an additional constraint such as Equation (3) would be needed to obtain a unique solution.
After defining the matrix forms for the PDE and the boundary conditions, the process of integrating those into a single linear algebraic equation can be started. It is possible to simply solve Equation (17) under the constraint defined in Equation (19). However, an alternative and somewhat more elegant approach would be to form a single system matrix that incorporates both the PDE and the boundary conditions in a compact form. By observing the full form of the matrix and vectors in Equation (17), it can be noted that each row of the matrix defines an equation containing a few elements of u . Since some of the elements of u are on the boundary and follow Equation (19), replacing the corresponding rows of D 2 x with those of B can lead to the formation of a system matrix, L . This can be mathematically described as:
$$ \mathbf{L} = \begin{cases} \mathbf{B}, & \text{on } \partial\Omega \\ \mathbf{D}_{2x}, & \text{otherwise}. \end{cases} \tag{20} $$
Of course, similar changes to the right-hand vector would need to be made as well. The system right-hand vector, b , can be defined as:
$$ \mathbf{b} = \begin{cases} \mathbf{g}, & \text{on } \partial\Omega \\ \mathbf{f}, & \text{otherwise}. \end{cases} \tag{21} $$
According to this definition, if the k-th element of u, u_k, corresponds to a Dirichlet boundary point, then the k-th row of L would be the k-th row of I_N. Further, the k-th element of b would be the k-th element of g. All rows corresponding to non-boundary points will be identical to the rows of D_2x. Now, the system equation can finally be written as:
$$ \mathbf{L} \, \mathbf{u} = \mathbf{b}. \tag{22} $$
Dimension-wise, L is an N × N matrix, and b is an N × 1 vector. This is a fully defined linear system from which u can be easily solved. Note that the derivative operator, the boundary conditions, and the problem geometry are all encoded within L and b. Let us consider the 1D case with x_1 and x_N being the boundary points. For Dirichlet boundary conditions, this corresponds to [u_1 u_N]^T = [g_1 g_N]^T, where g_1 and g_N are given. Using Equations (20) and (21), the matrix system is constructed and solved. Note that g_i for i ∉ {1, N} are not defined and are not needed to form b.
One notable feature of the proposed system matrix construction procedure is that part of it is problem independent. Note that only the B matrix and the g vector depend on the boundary condition types at different boundaries. The D_2x matrix is the same for all problems. The construction of L involves swapping out certain rows of D_2x with rows of B. This part depends on the problem geometry (as the points on ∂Ω need to be identified). Thus, modifying the solver for different geometries and boundary conditions involves selecting the appropriate form of B and modifying only the row-swapping step that builds L (the same operation must also be done for the right-hand vector). This is a major benefit over conventional procedures, for which the system matrix is constructed from scratch for every geometry/boundary condition.
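As a concrete illustration, the short sketch below (an assumed example written for this discussion, not code taken from the repository [13]) applies the row-swapping procedure of Equations (20)–(22) to a small 1D Dirichlet problem. The exact solution is quadratic, so the second-order scheme reproduces it to machine precision.
import numpy as np
# Minimal 1D illustration: solve u'' = 2 on [0, 1] with u(0) = u(1) = 0.
# The exact solution is u = x(x - 1).
N = 11
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
# Second-derivative operator D_2x of Equation (14)
D2x = np.zeros((N, N))
for i in range(1, N - 1):
    D2x[i, i-1:i+2] = np.array([1.0, -2.0, 1.0]) / dx**2
D2x[0, :4]   = np.array([2.0, -5.0, 4.0, -1.0]) / dx**2    # one-sided row, Equation (10)
D2x[-1, -4:] = np.array([-1.0, 4.0, -5.0, 2.0]) / dx**2    # one-sided row, Equation (12)
f = 2.0 * np.ones(N)          # source vector
g = np.zeros(N)               # boundary value vector (only the boundary entries are used)
# Assemble L and b per Equations (20)-(21): swap boundary rows with identity rows
L = D2x.copy()
b = f.copy()
for k in (0, N - 1):
    L[k, :] = 0.0
    L[k, k] = 1.0             # row of the identity matrix (Dirichlet boundary)
    b[k] = g[k]               # boundary value
u = np.linalg.solve(L, b)
print(np.max(np.abs(u - x * (x - 1.0))))    # ~1e-15, i.e., exact to machine precision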
It is obvious that the size of the matrix operators, being N × N, can become very large for moderately large values of N. As will be discussed later, the sizes of the matrices are even larger for higher-dimensional cases. Working with such large matrices in full form can be computationally demanding. Fortunately, these matrices are sparse in nature (i.e., the number of non-zero elements is small compared to the full size of the matrix), as can be seen from the forms of D_x and D_2x. As such, when solving numerically, they are always defined as sparse matrices, which significantly reduces memory requirements and speeds up the operations. Almost all programming languages commonly used for numerical analysis have sparse-matrix libraries that can be used for this (e.g., Python has the scipy.sparse package).
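For example, the 1D operators of Equations (13) and (14) can be assembled directly in sparse form. The snippet below is a sketch of one possible construction using scipy.sparse.diags; the grid size and spacing are arbitrary placeholder values.
import numpy as np
import scipy.sparse as sp
N, dx = 1000, 0.01
# Interior (tridiagonal) parts of D_x and D_2x; LIL format allows easy row edits
Dx  = sp.diags([-1/(2*dx), 1/(2*dx)], offsets=[-1, 1], shape=(N, N), format='lil')
D2x = sp.diags([1/dx**2, -2/dx**2, 1/dx**2], offsets=[-1, 0, 1], shape=(N, N), format='lil')
# Replace the first and last rows with the one-sided formulas of Equations (9)-(12)
Dx[0, :3]    = np.array([-3.0, 4.0, -1.0]) / (2*dx)
Dx[-1, -3:]  = np.array([1.0, -4.0, 3.0]) / (2*dx)
D2x[0, :4]   = np.array([2.0, -5.0, 4.0, -1.0]) / dx**2
D2x[-1, -4:] = np.array([-1.0, 4.0, -5.0, 2.0]) / dx**2
Dx, D2x = Dx.tocsr(), D2x.tocsr()          # CSR format for fast products and solves
print(D2x.nnz, 'non-zero elements out of', N*N)   # roughly 3N instead of N^2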

3.2. The Poisson Equation in Two Dimensions

The 2D Poisson equation in a Cartesian x y plane is given by:
$$ \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x, y), \quad x, y \in \Omega. \tag{23} $$
Here the solution u = u ( x , y ) spans a 2D space. The matrix analysis for the 1D Poisson equation discussed in Section 3.1 can be extended for this 2D case. The matrix operators developed for the 1D case can be used as the building blocks. The key idea is to convert the 2D problem into an equivalent 1D problem. This is done by mapping the 2D solution grid into a 1D vector, solving the 1D vector, and then remapping it to the original 2D space. The main tasks are developing the mapping algorithm, figuring out how the boundary conditions are mapped from 2D to 1D space, and assembling the corresponding matrix operators.
The 2D solution space in the xy-Cartesian plane is defined as Ω = [x_1, x_{N_x}] × [y_1, y_{N_y}]. This denotes a continuous rectangular region with x values spanning from x_1 to x_{N_x} and y values spanning from y_1 to y_{N_y}. Note that the analysis would hold for any shape of solution space. In discrete space, a grid of equally spaced N_y × N_x points (y_k, x_j) is considered, where j = 1, 2, …, N_x and k = 1, 2, …, N_y. The x and y spacings are denoted Δx and Δy, respectively. These points can be imagined as elements of a matrix with N_y rows and N_x columns, as shown in Figure 1. Any point can be addressed by the matrix coordinate (k, j), where k refers to the row number and j refers to the column number. The solution u at point (y_k, x_j) is denoted as u(k, j) = u_kj. The order of the indices j and k is selected in this way to be consistent with the syntax of common numerical coding languages (e.g., Python and MATLAB).
Now, u_kj, being 2D, is not an ideal input for a matrix operator. The elements of the 2D grid are rearranged by stacking the columns on top of each other to form one large N_xN_y × 1 vector, referred to as u_1D. This is shown in Figure 1 [17]. Each element of this u_1D corresponds to a specific grid point (y_k, x_j) and the corresponding solution u_kj. The elements of u_1D can be addressed by a single index i = 1, 2, …, N_xN_y. Note that there are many other valid approaches to mapping a 2D matrix into a 1D vector. This discussion is limited to the column-stacking method, which is relatively simple and intuitive. For this mapping, the 2D and 1D indices are related as:
$$ i = N_y (j - 1) + k. \tag{24} $$
The opposite transformation is:
$$ k = \big((i - 1) \bmod N_y\big) + 1, \tag{25} $$
$$ j = \frac{i - k}{N_y} + 1. \tag{26} $$
Here mod refers to the modulo operation. Using these relationships, it is possible to transfer back and forth between the 2D representation and the equivalent 1D representation. This process is referred to as mapping. The mapping process is shown in Figure 1.
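The mapping of Equations (24)–(26) is straightforward to implement. The helper functions below are a small illustration written for this discussion; the function names are hypothetical and not taken from the repository code.
# Hypothetical helper functions implementing the 1-based mapping of Equations (24)-(26)
def to_1d(k, j, Ny):
    """Map 2D grid indices (k, j), both starting at 1, to the 1D index i."""
    return Ny * (j - 1) + k
def to_2d(i, Ny):
    """Inverse mapping: recover (k, j) from the 1D index i."""
    k = (i - 1) % Ny + 1
    j = (i - k) // Ny + 1
    return k, j
# Round-trip check on a small 3 x 4 grid (Ny = 3, Nx = 4)
Ny, Nx = 3, 4
assert all(to_2d(to_1d(k, j, Ny), Ny) == (k, j)
           for j in range(1, Nx + 1) for k in range(1, Ny + 1))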
After obtaining u_1D from the mapping process, the corresponding derivative matrix operators need to be formulated. The matrix operators for the first and second partial derivatives with respect to x are expressed as D_x^2D and D_2x^2D, respectively. Similar quantities with respect to y are expressed as D_y^2D and D_2y^2D, respectively. The superscripts denote that these are 2D matrix operators. For the 1D case discussed in Section 3.1, the consecutive elements of the vector u corresponded to consecutive points along a 1D spatial axis. For the 2D case, however, the relation between the elements of u_1D and the corresponding spatial points is more complex. Every block of N_y elements in u_1D (shown in different colors in Figure 1) represents a column in the 2D representation. These elements have consecutive y values for a given x value. Within each block, matrix operators of a form similar to Equations (13) and (14) would work for the y derivative. As one moves from one of these blocks to the next (moving from one color to another in Figure 1), the y values restart from the beginning and the x values increment by one step. The x derivative operator must therefore operate on this rearranged space and accurately pick out the correct elements for a differentiation operation.
First, the y derivatives are considered, as they are simpler to understand than the x derivatives for this mapping process. The y partial derivatives in vector form are denoted u_y,1D and u_2y,1D. The 2D matrix operator for the y partial derivative must be such that it sifts out each block of N_y elements and multiplies it by the 1D derivative matrix operator. As it goes through each of the columns, the results should be stacked in a column to maintain the same spatial mapping as u_1D. This can be done by forming a block diagonal matrix whose diagonal blocks are copies of the 1D D_y matrix operator [8,17] (D_y has the same form as D_x from Equation (13), with Δx replaced by Δy). The process is shown graphically in Figure 2. Each rectangular block in D_y^2D is an N_y × N_y matrix. The zero blocks are zero matrices of the same size. Each block operates on an N_y × 1 sized block of u_1D (represented as different-colored columns and labeled Col_1, Col_2, …, Col_Nx). This is shown in the figure by connecting a D_y^2D matrix block with the corresponding u_1D column block by curved lines. For a given block-row (the rows drawn in the figure with black lines) of the matrix, all elements of u_1D besides a specific block of N_y consecutive elements are ignored (i.e., multiplied by zeros). Thus, the derivative operation is performed for each of these column blocks as one moves from one block-row of the matrix to another. This matrix multiplication produces a column vector representing the y derivatives mapped in the same format as u_1D. The second-derivative matrix, D_2y^2D, can be constructed similarly by replacing D_y with D_2y in each block.
Let us now consider the matrix operators for the x partial derivatives, D_x^2D and D_2x^2D. When multiplied with u_1D, these matrices produce the partial derivatives in vector form, denoted u_x,1D and u_2x,1D, respectively. First, the locations of the elements representing consecutive x values are identified in the 1D-mapped vector. The x values increase by one step when moving from left to right along a given row (moving by one column) in the 2D representation (Figure 1). In the 1D map, movement of one space along a row translates into movement by N_y elements in the corresponding 1D-mapped vector. A block matrix construction is again used to form D_x^2D from D_x. This time, however, each row of D_x must operate on elements of u_1D that are N_y spaces apart. This can be done by adding N_y zeros after each element in a row of D_x and then copying and arranging the rows appropriately [17]. The required configuration is shown in Figure 3. The elements of D_x are repeated along the diagonal of each block in D_x^2D. Thus, each block is a diagonal matrix. Each block sifts out certain elements of u_1D, as shown by the curved lines in the figure. Careful observation shows that these elements are indeed the ones required to calculate the derivatives at that point. A similar block matrix can be constructed for D_2x^2D by replacing the D_x elements with D_2x elements.
Now that the matrix operators have been defined, their construction using simple mathematical operations is discussed. A powerful tool in constructing block matrices is the Kronecker product [4,18]. The Kronecker product between two matrices A (size p × q ) and B (size m × n ) is defined as:
$$ \mathbf{A} \otimes \mathbf{B} = \begin{bmatrix} A_{11}\mathbf{B} & A_{12}\mathbf{B} & \cdots & A_{1q}\mathbf{B} \\ A_{21}\mathbf{B} & A_{22}\mathbf{B} & \cdots & A_{2q}\mathbf{B} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1}\mathbf{B} & A_{p2}\mathbf{B} & \cdots & A_{pq}\mathbf{B} \end{bmatrix}, \tag{27} $$
where A_ij represents the (i, j) element of matrix A, and ⊗ represents the Kronecker product operation. Note that each term in Equation (27) is a matrix itself. The A ⊗ B block matrix is formed by tiling copies of B, each multiplied by an element of the A matrix. It is not difficult to see that D_y^2D can be constructed from the Kronecker product of D_y and an identity matrix of size N_x × N_x (denoted as I_Nx):
$$ \mathbf{D}_y^{2D} = \mathbf{I}_{N_x} \otimes \mathbf{D}_y. \tag{28} $$
Careful observation of Figure 3 and Equation (27) suggests that D_x^2D can also be computed using the Kronecker product of D_x and an identity matrix of size N_y × N_y (denoted I_Ny):
$$ \mathbf{D}_x^{2D} = \mathbf{D}_x \otimes \mathbf{I}_{N_y}. \tag{29} $$
The second-derivative matrices are calculated identically:
$$ \mathbf{D}_{2y}^{2D} = \mathbf{I}_{N_x} \otimes \mathbf{D}_{2y}, \tag{30} $$
$$ \mathbf{D}_{2x}^{2D} = \mathbf{D}_{2x} \otimes \mathbf{I}_{N_y}. \tag{31} $$
The overall sizes of these matrices are N_xN_y × N_xN_y. This can be very large even for moderate values of N_x and N_y. However, it is noted that the sparsity of the matrices from the 1D case is maintained in the 2D case as well. Even more so than with the 1D case, any numerical implementation of the 2D Poisson equation should employ sparse-matrix algorithms.
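As a sketch of how Equations (28)–(31) translate into code, the snippet below assembles the 2D second-derivative operators with sparse Kronecker products. For brevity, only the interior (tridiagonal) part of the 1D operators is built here; the one-sided boundary rows of Equations (9)–(12) would be set as in the earlier 1D sketch. The grid sizes and spacings are placeholder values.
import scipy.sparse as sp
Nx, Ny, dx, dy = 30, 20, 0.1, 0.1
D2x = sp.diags([1/dx**2, -2/dx**2, 1/dx**2], offsets=[-1, 0, 1], shape=(Nx, Nx), format='csr')
D2y = sp.diags([1/dy**2, -2/dy**2, 1/dy**2], offsets=[-1, 0, 1], shape=(Ny, Ny), format='csr')
Ix = sp.identity(Nx, format='csr')
Iy = sp.identity(Ny, format='csr')
D2x_2d = sp.kron(D2x, Iy, format='csr')   # Equation (31)
D2y_2d = sp.kron(Ix, D2y, format='csr')   # Equation (30)
Lap_2d = D2x_2d + D2y_2d                  # discrete 2D Laplacian, size NxNy x NxNy
print(Lap_2d.shape)                       # (600, 600) for this placeholder grid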
The step after constructing the differentiation operators is the formulation of the boundary operator matrix. The 2D boundary operator matrix, B^2D, is defined in the same way as the one discussed in the previous section for the one-dimensional case. Equation (19) is slightly modified to accommodate the larger two-dimensional matrices:
$$ \mathbf{B}^{2D} = \begin{cases} \mathbf{I}_{N_x N_y}, & \text{for Dirichlet boundary conditions} \\ \mathbf{D}_x^{2D} \text{ or } \mathbf{D}_y^{2D}, & \text{for Neumann boundary conditions} \end{cases} \tag{32} $$
where I_NxNy is an identity matrix of size N_xN_y × N_xN_y. As there are two independent variables, there are two possible Neumann boundary conditions that can be applied. Which derivative to use for the Neumann boundary is determined by the problem statement. Having defined the boundary operator, construction of the system matrix operator is straightforward:
$$ \mathbf{L} = \begin{cases} \mathbf{B}^{2D}, & \text{on } \partial\Omega \\ \mathbf{D}_{2x}^{2D} + \mathbf{D}_{2y}^{2D}, & \text{otherwise}. \end{cases} \tag{33} $$
The right-hand vector also follows a similar definition as the 1D case:
$$ \mathbf{b} = \begin{cases} \mathbf{g}_{1D}, & \text{on } \partial\Omega \\ \mathbf{f}_{1D}, & \text{otherwise}, \end{cases} \tag{34} $$
where f_1D and g_1D are 1D-mapped versions of the corresponding two-dimensional quantities, f(y_k, x_j) and g(y_k, x_j), respectively. Obviously, the algorithm used for the 1D mapping here should be identical to the one used for calculating u_1D. Finally, the system equation is constructed as:
$$ \mathbf{L} \, \mathbf{u}_{1D} = \mathbf{b}. \tag{35} $$
Just like the one-dimensional case, this is a simple linear algebraic equation that can be solved easily. It should be noted that the solution is in a 1D-mapped vector form. Thus, an additional step of converting it back to the 2D form is necessary. This can be done using Equations (25) and (26).
It should be noted that only the B^2D matrix, the g_1D vector, and the construction of L and b are geometry and boundary condition dependent. The rest of the matrices and vectors are problem independent. Thus, configuring the solver for different geometries and boundary conditions is straightforward.
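The assembly and solution steps can be summarized in a short routine. The sketch below is an illustration of the procedure in Equations (32)–(35) rather than the repository's exact code; Lap_2d is the discrete Laplacian from the previous sketch, Dx_2d is the 2D x-derivative operator of Equation (29) built the same way, and dirichlet_idx and neumann_x_idx are hypothetical arrays of 1D-mapped boundary indices. The routine is written for clarity, not efficiency.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
def assemble_and_solve(Lap_2d, Dx_2d, f_1d, g_1d, dirichlet_idx, neumann_x_idx, Nx, Ny):
    L = Lap_2d.tolil()                    # LIL format allows row replacement
    b = f_1d.copy()
    for i in dirichlet_idx:               # Dirichlet rows: identity row, Equation (32)
        row = np.zeros(L.shape[1]); row[i] = 1.0
        L[i, :] = row
        b[i] = g_1d[i]
    for i in neumann_x_idx:                # Neumann rows: x-derivative operator row
        L[i, :] = Dx_2d[i, :].toarray().ravel()
        b[i] = g_1d[i]
    u_1d = spsolve(L.tocsr(), b)           # solve L u_1D = b, Equation (35)
    return u_1d.reshape((Ny, Nx), order='F')   # remap to the 2D grid (column stacking)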

3.3. Extending to Three Dimensions

In the previous subsection, it was discussed how the two-dimensional system can be constructed using the one-dimensional operators as building blocks. A similar approach can be used to construct a three-dimensional system from the two-dimensional matrix operators. The process will involve Kronecker products of the identity matrices and the two-dimensional matrix operators. As this is intended to be an introductory paper, the details of the three-dimensional case are omitted for the sake of brevity.

4. Python Implementation

Python code for solving the Poisson equation in two dimensions has been developed. It is publicly available on GitHub [13]. The code uses the previously discussed approach of constructing the matrix operators. The code is general and can solve any system when the boundary conditions are appropriately defined. In addition to the basic numerical recipe, a few additional considerations were needed for computational efficiency, the most important being the use of sparse matrices. The sparse matrix sub-package of SciPy (Scientific Python) [19] is used for constructing the matrix operators. In this way, only the non-zero elements of the matrices are indexed and stored. This makes it possible to store large sparse matrices without wasting computer memory on storing the zeros. Further, mathematical operations can be performed more efficiently. For this work, the sparse linear algebra solver of SciPy, spsolve, was used.
In the following paragraphs, how the Python code should be used for specific problems is discussed. The focus is mainly on how geometric features and boundary conditions are defined inside the code. A few key features of the code are also highlighted.
The code requires the user to define the source function and the geometry of the problem. From those, the code automatically creates the boundary operator matrices, the system matrix, and the right-hand vectors. How the geometry is defined inside the code is discussed first. Two arrays that denote the x and y independent variables are declared. The range and the resolution of the variables are also defined here. An example code snippet is given below:
import numpy as np
# Define independent variables.
Nx = 300                    # No. of grid points along x direction
Ny = 200                    # No. of grid points along y direction
x = np.linspace(-6,6,Nx)     # x variable in 1D
y = np.linspace(-3,3,Ny)     # y variable in 1D
The size of the arrays determines the size of the matrix operators. Note that Δ x and Δ y are implicitly defined here. The 1D arrays are converted into 2D meshgrid data format, which is then unraveled into an equivalent 1D structure for ease of indexing later. The code for these operations is:
X,Y = np.meshgrid(x,y)       # 2D meshgrid
# 1D indexing
Xu = X.ravel()               # Unravel 2D meshgrid to 1D array
Yu = Y.ravel()
After the space variables, the source function needs to be defined. For a source term having a closed-form expression, a simple for loop can be used to do this operation, as shown below:
# Source term for the diode example
NxNy = Nx*Ny                # total number of grid points
f = np.zeros(NxNy)
for m in range(NxNy):
    if np.abs(Xu[m]) < 3.8 and np.abs(Yu[m]) < 1:
        f[m] = -np.tanh(Xu[m])/np.cosh(Xu[m])
This test source function is used in the semiconductor p–n junction example problem discussed later in Section 5.2. In this example, the source function is set equal to an analytical function in specific regions of space and set to zero elsewhere. Instead of an analytical function, tabulated values can also be used by utilizing an interpolation function. Using such piecewise definitions, source functions for most practical problems can be implemented.
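As a sketch of the interpolation-based approach (this is not part of the repository code, and the tabulated arrays below are placeholders), a SciPy regular-grid interpolator can be evaluated at the unraveled grid points Xu and Yu defined earlier:
from scipy.interpolate import RegularGridInterpolator
import numpy as np
x_tab = np.linspace(-6, 6, 25)              # tabulated sample points along x
y_tab = np.linspace(-3, 3, 13)              # tabulated sample points along y
f_tab = np.zeros((25, 13))                  # placeholder table of source values
interp = RegularGridInterpolator((x_tab, y_tab), f_tab)
f = interp(np.column_stack((Xu, Yu)))       # source vector on the solution grid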
The next step is to define outer boundary regions. The definition consists of the boundary values at the four outer walls (for 2D problems) as well as the types of boundary conditions. An example code snippet is given below:
# Dirichlet/Neumann boundary conditions at outer walls
uL, uR, uT, uB = 0, 0, 0, 0
ub_o = [uL, uR, uT, uB]
# Type of outer boundary conditions. 0 = Dirichlet,
# 1 = Neumann x derivative, 2 = Neumann y derivative
B_type_o = [1,1,2,2]
# Order: element 1 = left, element 2 = right, element 3 = top,
# element 4 = bottom
Now we discuss the non-trivial part of the geometry: the internal regions where boundary conditions are enforced. For now, the discussion is limited to rectangular regions. Two Python lists containing the low and high limits of the x and y values are used to define these inner boundary regions. Then, the boundary type and boundary values are set as previously seen. The following code snippet defines two rectangular regions, each with Dirichlet boundary conditions (one with a value of -2 and one with 2). This code snippet is from the semiconductor p–n junction diode example discussed later in Section 5.2.
# lower and upper limits of x defining the inner boundary region
# Format:[ [xlow1,xhigh1], [xlow2,xhigh2],.... ]
xb_i = [[-4,-3.8],[3.8,4]]
yb_i = [[-1,1],[-1,1]]
# Type of inner boundary conditions. 0 = Dirichlet,
# 1 = Neumann x derivative, 2 = Neumann y derivative
B_type_i = [0,0]
ub_i = [-2,2]            # boundary values at inner region
The points that are within the rectangular region defined by the limits of x and y are automatically calculated by the code. Using this method, any number of arbitrary inner boundary regions can be defined as long as they are rectangular in shape.
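A possible implementation of this point selection (a sketch, not necessarily the repository's exact code) uses boolean masks over the unraveled coordinate arrays Xu and Yu together with the region lists xb_i and yb_i defined above:
inner_boundary_indices = []
for (xlow, xhigh), (ylow, yhigh) in zip(xb_i, yb_i):
    mask = (Xu >= xlow) & (Xu <= xhigh) & (Yu >= ylow) & (Yu <= yhigh)
    inner_boundary_indices.append(np.where(mask)[0])    # 1D indices of this region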
For non-rectangular geometric features, the regions can be difficult to define algebraically by code. To tackle this, an image-based geometry-definition feature is implemented. The idea is that the geometric features can be extracted from a bitmap image file by code. Different colors in an image represent different regions (some of these will be boundary regions). A user can draw a bitmap image using any graphical editor. The graphic editor GraphicsGale [20], which is freeware, is recommended. Each pixel of the image is treated as a grid point of the solution domain. With this approach, a user can define any arbitrary geometry by drawing it in an image editor and then importing it into the solver. The code snippet of the image import feature is given below:
# Read image file
from PIL import Image
im = Image.open(im_file_name)               # im_file_name: path to the geometry image
boundary_region_color_threshold = 10        # color value that identifies a boundary
strct_map = np.flipud(np.array(im))         # flipud is done to match orientation
strct_map_u = strct_map.ravel()             # unravel the 2D map to a 1D array
regions = np.unique(strct_map_u)            # find distinct color regions
n_regions = np.size(regions)                # number of distinct color regions
b_regions_indices_u = []
o_regions_indices_u = []
for m in range(n_regions):
    ind_u_temp = np.squeeze(np.where(strct_map_u == regions[m]))
    if regions[m] <= boundary_region_color_threshold:   # boundary region
        b_regions_indices_u.append(ind_u_temp)
    else:
        o_regions_indices_u.append(ind_u_temp)
The code reads an image file and identifies regions with distinct colors. An arbitrary color threshold value is defined. Any color value less than the threshold is considered a boundary region. The other colors are treated as normal regions where the solution is to be calculated. The indices of all the boundary (and non-boundary) regions are stored in a list. These indices correspond to the x and y variable indices.
Section 5.3 discusses an example problem using the image import feature. More details about the code can be found within the comments of the code file on the GitHub repository [13].

5. Numerical Results

Using the developed code, a few simple example problems are solved. The examples are limited to electrostatics. The electrostatic potential distribution in a two-dimensional space is determined by the Poisson equation in the following form:
$$ \nabla^2 u(x, y) = f(x, y) = -\frac{\rho}{\epsilon}, \quad \text{on } \Omega. \tag{36} $$
Here ρ is the charge density, ϵ is the permittivity, and u represents the electric potential or voltage. The electric field, E , is given by the negative gradient of u:
$$ \mathbf{E} = -\nabla u(x, y). \tag{37} $$
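Once u has been solved on the 2D grid, the field can be recovered numerically. The snippet below is an illustrative post-processing step (u2d denotes the solution reshaped onto the 2D grid of shape Ny × Nx, an assumed variable name; x and y are the coordinate arrays):
import numpy as np
dudy, dudx = np.gradient(u2d, y, x)     # derivatives along the row (y) and column (x) axes
Ex, Ey = -dudx, -dudy                   # E = -grad(u), Equation (37)
E_mag = np.sqrt(Ex**2 + Ey**2)          # field magnitude, used for the color maps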
Many interesting electrostatics problems involve solving for the electric potential and electric field distribution using the Poisson equation. A few example cases are discussed here. As the focus is on the numerical approach rather than the physical system, simple source functions and boundary conditions are used that are not necessarily practical. Units of the variables are also ignored.
It should be noted that all three examples are solved using the same numerical solver with very little modification of the Python code. Only the definition of the geometry has to be redone. The rest of the code works without any further modifications. This is the distinct advantage of the proposed approach.

5.1. Example 1: Parallel Plate Capacitor

A parallel plate capacitor consists of two parallel conducting plates separated by an insulating region. When voltage is applied between the plates, the capacitor stores energy in the gap between the plates in the form of electric fields. Usually, the charge density is taken to be zero inside a parallel plate capacitor (i.e., ρ = 0 ), which implies f ( x , y ) = 0 . Thus, with a zero source term, the equation reduces to a Laplace equation:
$$ \nabla^2 u(x, y) = 0, \quad \text{on } \Omega. \tag{38} $$
A parallel plate capacitor bounded within a larger simulation space with Neumann boundary conditions is simulated. This example is selected to show the versatility of the numerical solver in implementing both inner and outer boundary conditions. The solution domain is selected to be a rectangle bounded by -6 ≤ x ≤ 6, -3 ≤ y ≤ 3 (i.e., Ω ≡ [-6, 6] × [-3, 3]). The outer edge boundaries are defined as:
$$ \partial\Omega_1 \equiv \{(x, y) \,|\, x = -6\} \quad (\text{left outer edge}), \tag{39} $$
$$ \partial\Omega_2 \equiv \{(x, y) \,|\, x = 6\} \quad (\text{right outer edge}), \tag{40} $$
$$ \partial\Omega_3 \equiv \{(x, y) \,|\, y = -3\} \quad (\text{bottom outer edge}), \tag{41} $$
$$ \partial\Omega_4 \equiv \{(x, y) \,|\, y = 3\} \quad (\text{top outer edge}). \tag{42} $$
The boundary conditions at the outer edges are taken to be:
$$ \frac{\partial u}{\partial x} = 0, \quad \text{at } \partial\Omega_1 \text{ and } \partial\Omega_2, \tag{43} $$
$$ \frac{\partial u}{\partial y} = 0, \quad \text{at } \partial\Omega_3 \text{ and } \partial\Omega_4. \tag{44} $$
These are Neumann boundaries that force the electric field to be zero at the outer edges. Again, these boundaries are not usually important, as only the solution within the region bounded by the parallel plates is of interest. These extra boundary conditions are added to create a more complex demonstration problem.
Next, two inner boundary regions are defined by two rectangles representing the parallel plates:
$$ \partial\Omega_5 \equiv \{(x, y) \,|\, -2 \le x \le 2 \ \text{and} \ 0.5 \le y \le 0.6\} \quad (\text{top plate}), \tag{45} $$
$$ \partial\Omega_6 \equiv \{(x, y) \,|\, -2 \le x \le 2 \ \text{and} \ -0.6 \le y \le -0.5\} \quad (\text{bottom plate}). \tag{46} $$
Dirichlet boundary conditions are set at these boundaries:
$$ u = 2, \quad \text{at } \partial\Omega_5, \tag{47} $$
$$ u = -2, \quad \text{at } \partial\Omega_6. \tag{48} $$
These values represent the voltage applied to these plates. The solution domain with all the boundary regions is shown in Figure 4.
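In terms of the input conventions of Section 4, this capacitor geometry could be described as follows (an illustrative sketch; the values follow Equations (43)–(48) and the variable names follow the earlier snippets):
ub_o = [0, 0, 0, 0]                   # zero Neumann values on the four outer edges
B_type_o = [1, 1, 2, 2]               # x derivative on left/right, y derivative on top/bottom
xb_i = [[-2, 2], [-2, 2]]             # x extents of the top and bottom plates
yb_i = [[0.5, 0.6], [-0.6, -0.5]]     # y extents of the top and bottom plates
B_type_i = [0, 0]                     # both plates are Dirichlet boundaries
ub_i = [2, -2]                        # u = +2 on the top plate, u = -2 on the bottom plate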
Now that the problem, the solution domain, and the boundary conditions are defined, the process of constructing the boundary operator matrix, the system matrix, and the right-hand vector can be started. The boundary operator follows Equation (19):
$$ \mathbf{B} = \begin{cases} \mathbf{D}_x^{2D}, & \text{at } \partial\Omega_1, \partial\Omega_2 \\ \mathbf{D}_y^{2D}, & \text{at } \partial\Omega_3, \partial\Omega_4 \\ \mathbf{I}_{N_x N_y}, & \text{at } \partial\Omega_5, \partial\Omega_6 \end{cases} \tag{49} $$
The system matrix has the same form as defined in Equation (35). The right hand vector follows:
$$ \mathbf{b} = \begin{cases} 2, & \text{on } \partial\Omega_5 \\ -2, & \text{on } \partial\Omega_6 \\ 0, & \text{otherwise}. \end{cases} \tag{50} $$
With all the mathematical quantities defined, the linear system of equations is solved using a sparse-matrix solver. After mapping the solution back into 2D space, u(x, y) and -∇u are plotted in Figure 5. From the u(x, y) contour plot, it can be seen that the top and bottom electrodes have values of +2 and -2, respectively, and the potential gets distributed to other points in space. The -∇u distribution shows how the electric field lines point from the positive voltage side to the negative voltage side. The lines inside the capacitor are oriented vertically, as one would expect from an ideal parallel plate capacitor. It is also noted that the fringe fields at the edge of the plates are curved. The field magnitude is large at the edges due to the sharp potential transition. The electric field lines outside the capacitor region are mostly determined by the Neumann boundary conditions enforced at the outer edges of the solution domain. As previously stated, the solution at these locations is not usually relevant when studying a parallel plate capacitor unless one wishes to study the fringe fields. It is possible to limit the solution domain to the inside of the capacitor only, which would require setting only outer-edge boundaries. The slightly more complicated boundaries were enforced for demonstration purposes. The proposed scheme can model physical systems that require boundary conditions to be set both at the outer edges as well as at inner regions.

5.2. Example 2: Semiconductor p–n Junction

For the second example, a highly simplified semiconductor p–n junction diode structure is considered. The diode consists of two different types of semiconductors that are sandwiched together. Voltage is applied at the two ends of the semiconductors. Such a device can act as a rectifier (allowing electric current to flow in only one direction). More details about device operation are avoided as they are not in the scope of this text. The potential profile along the junction of the device follows the Poisson equation as expressed in Equation (36). The charge profile along the junction can be modeled in various ways. A common model of charge density is [21]:
$$ \rho = 2\rho_0 \operatorname{sech}\!\left(\frac{x}{a}\right)\tanh\!\left(\frac{x}{a}\right). \tag{51} $$
Here ρ_0 and a are parameters related to the geometrical and material properties of the device. It is assumed that the two semiconductor materials are placed along the x axis to form a junction along the line x = 0. To be consistent with this form of the charge density profile, the source function, f(x, y), is defined as:
$$ f(x, y) = \begin{cases} -\operatorname{sech}(x)\tanh(x), & \text{for } |x| < 3.8, \ |y| < 1 \\ 0, & \text{otherwise}. \end{cases} \tag{52} $$
Here, convenient parameter values that produce a simple expression are used. Note that for this demonstration problem, only the functional form of the charge profile is maintained; for simplicity, realistic values of the functions are not used. In addition to the source function, two Dirichlet boundary regions, ∂Ω_5 and ∂Ω_6, are defined. These represent metal contacts on either side of the semiconductors where voltage can be applied:
$$ \partial\Omega_5 \equiv \{(x, y) \,|\, 3.8 \le x \le 4 \ \text{and} \ -1 \le y \le 1\} \quad (\text{right contact}), \tag{53} $$
$$ \partial\Omega_6 \equiv \{(x, y) \,|\, -4 \le x \le -3.8 \ \text{and} \ -1 \le y \le 1\} \quad (\text{left contact}). \tag{54} $$
The four outer boundary edges (i.e., ∂Ω_1 to ∂Ω_4) have the same definitions as in the previous example (Equations (39)–(42)). The simulation domain and the source function are shown in Figure 6. As with the previous example, the region outside the device (where f(x, y) = 0) is not meaningful. It is simulated only as a test for the numerical solver.
The system matrix and the right-hand vector can be constructed using the same process as discussed in Section 5.1. The same outer boundary values are used, and voltages of u = -2 and u = 2 are applied to the left and right contact regions, respectively. Please note that the voltages applied to the metal contacts affect the charge density profile (and hence the source function f(x, y)). It is assumed that the f(x, y) function already contains the charge profile that would exist when the aforementioned voltage is applied. More complex modeling of the charge distribution would be needed to study the p–n junction under a variety of voltage conditions. The results for our simplified case are shown in Figure 7. It can be noted that the electric field (i.e., -∇u) is maximum near the junction along x = 0, which is to be expected for this configuration [21].

5.3. Example 3: Complex Arbitrary Geometries with Image Import

There are cases where the geometry of a problem can be more complex than that of the two examples discussed above. For those cases, it can be difficult to define the geometry through text data or code. For example, any region with a non-rectangular shape can be cumbersome to define by code alone. For such cases, the image import feature discussed in Section 4 is used. This feature is highlighted in this example. First, an arbitrary geometry is drawn on a 300 pixel × 200 pixel grid using the free image editor GraphicsGale [20]. Note that this can be done using any image editor. Each separate region of the geometry is drawn using a separate color. For this example, the geometry shown in Figure 8a is selected. The image is imported into the solver using the developed code, and the boundary regions are highlighted in Figure 8b. In the following, all colors are referred to with respect to Figure 8b. The orange and blue pixels represent zero Neumann boundary conditions (x-derivative and y-derivative Neumann boundary conditions, respectively) at the outer walls. The yellow rectangular regions are a Dirichlet boundary of value 2. The green circular regions are a Dirichlet boundary of value -2, and the red circular region is a Dirichlet boundary of value 1. Note that some colored regions of Figure 8a are not reflected in Figure 8b because they are not defined as boundary regions. Those regions are considered simple solution regions instead. Note that there can be more than one of these regions (as in this example). Having multiple such regions can help define complex source functions that have different values in different regions. For simplicity, a source function that is zero everywhere is assumed.
The Poisson equation is solved for the aforementioned geometry, and the results are shown in Figure 9. Like the previous examples, these results are interpreted in context of electric potential and electric field. Both the potential and the electric field have the expected distribution.

6. Conclusions

A method for solving the Poisson equation in 1D and 2D using a finite difference approach is presented. The main focus was on the systematic construction of the solver so that it can be applied to a variety of problems involving the Poisson equation. First, the matrix operators for the derivative operation and the matrix operators for boundary condition enforcement were constructed. The two matrices were combined to form a system matrix operator. A similar operation was performed for the right-hand side vector. Thus, a systematic approach of converting the Poisson equation into a linear algebraic/matrix equation was developed. The boundary conditions and geometry information were coded into a matrix separately from the derivative operator, making the approach easy to adjust for different geometries and boundary conditions. Three example problems were solved using the developed approach. The key benefit of the proposed method is that very few modifications of the numerical solver are needed to handle three problems that are significantly different from each other. The procedure for setting up the solver for these problems is discussed thoroughly. Further, the Python code for the algorithm is available to the readers through a GitHub repository. It should be noted that the proposed method has some of the drawbacks inherent to all finite difference solvers, for example, meshing constraints when handling complex geometries and boundaries.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are openly available in the GitHub repository at https://github.com/zaman13/Poisson-solver-2D, Accessed date: 1 July 2022.

Acknowledgments

The author thanks Paul C. Hansen.

Conflicts of Interest

The author declares no conflict of interest.

List of Symbols

$\nabla^2$ : Laplacian operator in Cartesian coordinates
$u$ : Dependent variable of the Poisson equation (to be solved)
$f$ : Source function of the Poisson equation
$g$ : Boundary function of the Poisson equation
$\Omega$ : The solution domain
$\partial\Omega$ : Boundary of the solution domain
$\mathbf{u}$ : Vectorized form of $u$ on the solution grid
$\mathbf{f}$ : Vectorized form of $f$ on the solution grid
$\mathbf{g}$ : Vectorized form of $g$ on the solution boundary
$\mathbf{b}$ : The right-hand vector
$\mathbf{u}_{1D}$ : 1D-mapped vectorized form of $u$ for the 2D problem
$\mathbf{f}_{1D}$ : 1D-mapped vectorized form of $f$ for the 2D problem
$\mathbf{g}_{1D}$ : 1D-mapped vectorized form of $g$ for the 2D problem
$\mathbf{D}_x$ : First-derivative matrix operator with respect to $x$ in 1D
$\mathbf{D}_y$ : First-derivative matrix operator with respect to $y$ in 1D
$\mathbf{D}_{2x}$ : Second-derivative matrix operator with respect to $x$ in 1D
$\mathbf{D}_{2y}$ : Second-derivative matrix operator with respect to $y$ in 1D
$\mathbf{D}_x^{2D}$ : First-derivative matrix operator with respect to $x$ in 2D
$\mathbf{D}_y^{2D}$ : First-derivative matrix operator with respect to $y$ in 2D
$\mathbf{D}_{2x}^{2D}$ : Second-derivative matrix operator with respect to $x$ in 2D
$\mathbf{D}_{2y}^{2D}$ : Second-derivative matrix operator with respect to $y$ in 2D
$\mathbf{B}$ : Boundary operator matrix
$\mathbf{I}_N$ : Identity matrix of dimension $N \times N$
$\mathbf{L}$ : System matrix
$\otimes$ : Kronecker product
$\mathbf{E}$ : The electric field
$\rho$ : Charge density
$\epsilon$ : Permittivity of the medium

References

  1. Mori, Y.; Takabatake, K.; Tsugeno, Y.; Sakai, M. On artificial density treatment for the pressure Poisson equation in the DEM-CFD simulations. Powder Technol. 2020, 372, 48–58. [Google Scholar] [CrossRef]
  2. Radhakrishnan, A.; Xu, M.; Shahane, S.; Vanka, S.P. A Non-Nested Multilevel Method for Meshless Solution of the Poisson Equation in Heat Transfer and Fluid Flow. arXiv 2021, arXiv:2104.13758. [Google Scholar]
  3. Baranyi, L. Computation of unsteady momentum and heat transfer from a fixed circular cylinder in laminar flow. J. Comput. Appl. Mech. 2003, 4, 13–25. [Google Scholar]
  4. Brewer, J. Kronecker products and matrix calculus in system theory. IEEE Trans. Circuits Syst. 1978, 25, 772–781. [Google Scholar] [CrossRef]
  5. Schroeder, J.; Muller, R. IGFET Analysis through numerical solution of Poisson’s equation. IEEE Trans. Electron Devices 1968, 15, 954–961. [Google Scholar] [CrossRef]
  6. Matsuura, T.; Saitoh, S.; Trong, D. Numerical solutions of the poisson equation. Appl. Anal. 2004, 83, 1037–1051. [Google Scholar] [CrossRef]
  7. Causon, D.; Mingham, C. Introductory Finite Difference Methods for PDEs; Bookboon: London, UK, 2010. [Google Scholar]
  8. LeVeque, R.J. Finite Difference Methods for Ordinary and Partial Differential Equations; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2007. [Google Scholar] [CrossRef]
  9. Press, W.H.; Flannery, B.P. Numerical Recipes: The Art Of Scientific Computing, 3rd ed.; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  10. Burden, A.M.; Burden, R.L.; Faires, J.D. Numerical Analysis, 10th ed.; Cengage: Boston, MA, USA, 2016. [Google Scholar] [CrossRef]
  11. Nagel, J.R. Numerical Solutions to Poisson Equations Using the Finite-Difference Method [Education Column]. IEEE Antennas Propag. Mag. 2014, 56, 209–224. [Google Scholar] [CrossRef]
  12. Abdallah, S. Numerical solutions for the pressure Poisson equation with Neumann boundary conditions using a non-staggered grid, I. J. Comput. Phys. 1987, 70, 182–192. [Google Scholar] [CrossRef]
  13. Zaman, M.A. Poisson-solver-2D, 2022. Available online: https://github.com/zaman13/Poisson-solver-2D (accessed on 1 July 2022).
  14. Sapa, L.; Bożek, B.; Danielewski, M. Difference Methods To One And Multidimensional Interdiffusion Models With Vegard Rule. Math. Model. Anal. 2019, 24, 276–296. [Google Scholar] [CrossRef]
  15. Sapa, L.; Bożek, B.; Tkacz–Śmiech, K.; Zajusz, M.; Danielewski, M. Interdiffusion in many dimensions: Mathematical models, numerical simulations and experiment. Math. Mech. Solids 2020, 25, 2178–2198. [Google Scholar] [CrossRef]
  16. Dunn, S.M.; Constantinides, A.; Moghe, P.V. Finite Difference Methods, Interpolation and Integration. In Numerical Methods in Biomedical Engineering; Elsevier: Amsterdam, The Netherlands, 2006; pp. 163–208. [Google Scholar] [CrossRef]
  17. Basilisk Software. Available online: http://basilisk.fr/sandbox/easystab/diffmat_2D.m (accessed on 12 April 2022).
  18. Regalia, P.A.; Mitra, S.K. Kronecker Products, Unitary Matrices and Signal Processing Applications. SIAM Rev. 1989, 31, 586–613. [Google Scholar] [CrossRef]
  19. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
  20. GraphicsGale Software. Version 2.08.20. 2020. Available online: https://graphicsgale.com/us/ (accessed on 12 April 2022).
  21. Hayt, W. Engineering Electromagnetics; McGraw-Hill: New York, NY, USA, 2012. [Google Scholar]
Figure 1. Process of mapping a 2D grid to a 1D vector and vice-versa.
Figure 2. Matrix first-derivative operator (with respect to y) for the two-dimensional case. D_y has the same form as D_x, which is defined in Equation (13).
Figure 3. Matrix first-derivative operator (with respect to x) for the two-dimensional case. D_x(j, k) refers to the element in row j and column k of the D_x matrix defined in Equation (13).
Figure 4. Solution domain for the two-dimensional parallel plate capacitor problem. The boundaries defined by the red points are Neumann boundaries. The gray regions indicate the parallel plates, where Dirichlet boundary conditions are set. The light green regions have no boundary conditions enforced on them.
Figure 5. Solution of the Poisson equation for a parallel plate capacitor: (a) potential distribution, u, and (b) electric field distribution. The colors in (b) represent the magnitude of the electric field, |∇u|, and the arrow lines are electric field lines showing the direction of -∇u.
Figure 6. (a) Solution domain for the two-dimensional p–n junction problem. The red points are Neumann boundaries. The gray regions indicate the inner Dirichlet boundaries. (b) The source function f(x, y).
Figure 7. Solution of the Poisson equation for a simplified p–n junction: (a) potential distribution, u, and (b) electric field distribution. The colors in (b) represent the magnitude of the electric field, |∇u|, and the arrow lines are electric field lines showing the direction of -∇u.
Figure 8. (a) Solution domain for the two-dimensional arbitrary problem with external-image-defined geometry. (b) The boundary regions identified by the code. Different colors represent different boundary conditions to be enforced.
Figure 9. Solution of the Poisson equation for an arbitrary image-defined geometry: (a) potential distribution, u, and (b) electric field distribution. The colors in (b) represent the magnitude of the electric field, |∇u|, and the arrow lines are electric field lines showing the direction of -∇u.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
