Article

Finite Element Method for Solving the Screened Poisson Equation with a Delta Function

Liang Tang and Yuhao Tang
1 Yazhou Bay Innovation Institute, Hainan Tropical Ocean University, Sanya 572022, China
2 Department of Mathematics, Imperial College London, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(8), 1360; https://doi.org/10.3390/math13081360
Submission received: 23 February 2025 / Revised: 8 April 2025 / Accepted: 18 April 2025 / Published: 21 April 2025
(This article belongs to the Special Issue Advances in Partial Differential Equations: Methods and Applications)

Abstract

This paper presents a Finite Element Method (FEM) framework for solving the screened Poisson equation with a Dirac delta function as the forcing term. The singularity introduced by the delta function poses challenges for standard numerical methods, particularly in higher dimensions. To address this, we employ integrated Legendre basis functions, which yield sparse and structured system matrices characterized by a Banded-Block-Banded-Arrowhead ($B^3$-Arrowhead) form. In one dimension, the resulting linear system can be solved directly. In two and three dimensions, the equation can be efficiently solved using a generalized Alternating Direction Implicit (ADI) method combined with reverse Cholesky factorization. Numerical results in 1D, 2D, and 3D confirm that the method accurately captures the localized impulse response and reproduces the expected Green’s function behavior. The proposed approach offers a robust and scalable solution framework for partial differential equations with singular source terms and has potential applications in physics, engineering, and computational science.
MSC:
35J05; 65N30; 65F50

1. Introduction

The screened Poisson equation is a classical partial differential equation that arises in a wide range of scientific and engineering applications, including electrostatics, quantum mechanics, image processing, and signal analysis. In this study, we focus on solving the screened Poisson equation over a rectangular domain:
$$-\Delta u(x,y) + \omega^{2} u(x,y) = f(x,y) \qquad \text{for } a \le x \le b,\ c \le y \le d, \tag{1}$$
where $\Delta u := u_{xx} + u_{yy}$ is the Laplacian and $\omega$ is a fixed parameter controlling the decay rate of the solution. A particular challenge arises when the forcing term $f(x,y)$ is modeled by a Dirac delta function $\delta(x,y)$, representing a localized impulse.
While Fortunato and Townsend [1] proposed a fast spectral solver using the Alternating Direction Implicit (ADI) method with quasi-optimal complexity for continuous functions on the right-hand side, and Knook and Olver [2] extended these ideas to discontinuous functions, the treatment of delta-function forcing remains nontrivial in practical computations.
In this paper, we aim to solve the screened Poisson equation in 1D, 2D, and 3D with a delta function (a generalized function) as the forcing term, which plays an important role in mathematical analysis, physics, signal processing, engineering, and image processing. While standard finite element methods may face difficulties in resolving the localized singularity introduced by the delta function, this issue can be mitigated by carefully choosing specialized basis functions. In this work, we adopt the integrated Legendre functions originally introduced by Babuška and Suri [3] within the FEM framework. These functions exhibit highly sparse matrix structures when applied to the variational form of elliptic PDEs. Specifically, we exploit the resulting Banded-Block-Banded-Arrowhead ($B^3$-Arrowhead) structure to develop efficient discretizations for the screened Poisson equation in one, two, and three spatial dimensions.
Our approach combines theoretical developments with practical numerical techniques, including generalized and nested ADI [2,4], to maintain sparsity and computational efficiency. We conclude with visual demonstrations of the solution in 1D, 2D, and 3D, highlighting the structure of the Green’s function as a response to delta excitation.
The development of sparse FEM has an interesting history. The origins can be traced back to the research of Szabó and Babuška, with further elaborations found in [5,6]. Babuška and Suri [3] and Beuchler and Schöberl [7] expanded these methods to two dimensions, creating p-FEM for quadrilaterals and simplices, respectively. Additional contributions in this area include works by [5,8,9,10,11,12,13,14,15,16,17].
The main contributions of this paper are summarized as follows:
  • We propose a finite element method using integrated Legendre basis functions to solve the screened Poisson equation with Dirac delta forcing, preserving sparsity and capturing singular behavior.
  • We develop a structured discretization approach that leads to Banded-Block-Banded Arrowhead matrices, and apply it in 1D, 2D, and 3D domains.
  • We employ and adapt the ADI method to efficiently solve the resulting systems.
  • We present detailed numerical results that confirm the expected Green’s function behavior and validate the accuracy of our method for singular sources.
To the best of our knowledge, this is the first comprehensive application of this combined approach to delta-function forcing across multiple dimensions.
The remainder of this paper is organized as follows: Section 2 introduces the definition and properties of the delta function. Section 3 discusses integrated Legendre functions and their role in constructing sparse discretizations. Section 4 presents FEM formulations and numerical solutions in 1D, 2D, and 3D. Section 5 analyzes the computational cost. Finally, Section 6 concludes this paper with discussions and potential directions for future work.

2. Delta Function

Firstly, we need to know the definition and properties of the delta function in order to obtain the information used in the discretizations of the screened Poisson equation.
Figure 1 gives an intuitive visual representation [18]. Next, we present the rigorous definitions and properties [18].

2.1. Definitions

The Dirac delta function $\delta(x)$ is, loosely speaking, zero when $x$ is not equal to zero and infinite when $x$ is equal to zero:
$$\delta(x) = \begin{cases} \infty, & x = 0,\\ 0, & x \neq 0,\end{cases}$$
and it satisfies
$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1.$$

2.2. As a Measure

Formally, the Lebesgue integral provides the essential analytical tool. The integral, in the sense of Lebesgue, taken with respect to the measure $\delta$, satisfies
$$\int_{-\infty}^{\infty} f(x)\,\delta(dx) = f(0)$$
for all continuous compactly supported functions $f$. However, we often use the notation
$$\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0).$$
This notation is a useful but informal convention and does not represent a standard integral in the Riemann or Lebesgue sense.
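The sifting identity above is exactly what the discretizations in Section 4 rely on: integrating a continuous function against $\delta$ simply evaluates it at the source point. As a quick sanity check (our own illustration, not part of the paper's method), one can approximate $\delta$ by a narrow normalized Gaussian $\delta_\epsilon$ and verify the identity numerically; the test function f below is arbitrary.

```python
import numpy as np

def delta_eps(x, eps=1e-3):
    """Normalized Gaussian approximation of the Dirac delta function."""
    return np.exp(-0.5 * (x / eps) ** 2) / (eps * np.sqrt(2.0 * np.pi))

def f(x):
    return np.cos(3.0 * x) + x ** 2

x = np.linspace(-1.0, 1.0, 200_001)   # fine grid resolving the narrow spike
dx = x[1] - x[0]
approx = np.sum(f(x) * delta_eps(x)) * dx
print(approx, f(0.0))                 # approx -> f(0) = 1 as eps -> 0
```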

2.3. Generalizations

In $n$-dimensional Euclidean space $\mathbb{R}^n$, the delta function can be characterized as a measure that satisfies
$$\int_{\mathbb{R}^n} f(\mathbf{x})\,\delta(d\mathbf{x}) = f(\mathbf{0})$$
for any continuous function $f$ with compact support. Viewed as a measure, the $n$-dimensional delta function is defined as the product of one-dimensional delta functions applied independently to each coordinate. Hence, given $\mathbf{x} = (x_1, x_2, \ldots, x_n)$, it can be formally expressed as
$$\delta(\mathbf{x}) = \delta(x_1)\,\delta(x_2)\cdots\delta(x_n).$$

3. Integrated Legendre Functions

We review the integrated Legendre functions from [2,3] and explore their role in developing discretizations of differential operators with a unique sparsity structure known as Banded-Block-Banded-Arrowhead ($B^3$-Arrowhead) matrices.

3.1. A Set of Basis Functions Defined over an Interval

Writing the weighted ultraspherical polynomials as
$$W_k(x) := \frac{(1-x^{2})\,C_k^{(3/2)}(x)}{(k+1)(k+2)},$$
where the ultraspherical polynomials $C_k^{(t)}$ are orthogonal with respect to the weight $(1-x^{2})^{t-1/2}$ on $[-1,1]$ for $t > -1/2$, $t \neq 0$, and are normalized so that
$$C_k^{(t)}(x) = \frac{2^{k}\,(t)_k}{k!}\,x^{k} + O(x^{k-1}),$$
where $(t)_k = t(t+1)\cdots(t+k-1)$ is the Pochhammer symbol [2,19].
We also have
$$W_k'(x) = -P_{k+1}(x)$$
for the Legendre polynomials $P_k(x) \equiv C_k^{(1/2)}(x)$ [1,2]. In other words, these functions are (up to a constant factor) integrals of Legendre polynomials; essentially, they are the integrated Legendre functions. They coincide with the basis described by Schwab [5] and used by Fortunato and Townsend [1].
A useful way to demonstrate this relationship is by employing quasimatrices. These can be viewed as matrices that are continuous along the first dimension, or alternatively, as a row vector with its columns representing functions. Therefore, we have the following:
$$W(x) = \big[\,W_0(x) \,\big|\, W_1(x) \,\big|\, W_2(x) \,\big|\, \cdots\,\big], \qquad
P(x) = \big[\,P_0(x) \,\big|\, P_1(x) \,\big|\, P_2(x) \,\big|\, \cdots\,\big], \qquad
\frac{d}{dx}W = P\,\underbrace{\begin{bmatrix} 0 & 0 & \cdots\\ -1 & 0 & \\ & -1 & \ddots\\ & & \ddots \end{bmatrix}}_{D_W},$$
where $W(x)$ and $P(x)$ are $1\times\infty$ row vectors of functions; equivalently, $W$ and $P$ can be viewed as $[-1,1]\times\infty$ quasimatrices.
For a single element in one dimension, this basis can serve as both the test and trial basis in the weak formulation of a differential equation. We denote the $L^2(-1,1)$ inner product by $\langle\cdot,\cdot\rangle$. Consequently, the Gram (mass) matrix associated with the Legendre polynomials is
$$\langle P, P\rangle := \begin{bmatrix} \langle P_0,P_0\rangle & \langle P_0,P_1\rangle & \cdots\\ \langle P_1,P_0\rangle & \langle P_1,P_1\rangle & \\ \vdots & & \ddots \end{bmatrix} = \underbrace{\begin{bmatrix} 2 & & & \\ & 2/3 & & \\ & & 2/5 & \\ & & & \ddots \end{bmatrix}}_{M_P},$$
while the discretization of the weak 1D Laplacian is diagonal [2]:
$$\Delta_W := \langle W', W'\rangle = \langle P D_W, P D_W\rangle = D_W^{\top} M_P D_W = \begin{bmatrix} 2/3 & & & \\ & 2/5 & & \\ & & 2/7 & \\ & & & \ddots\end{bmatrix}.$$
Equivalently, we can write
$$\langle W_k', W_j'\rangle = \langle P_{k+1}, P_{j+1}\rangle = \frac{2}{2k+3}\,\delta_{kj}.$$
From [1,2], the mass matrix can be derived by utilizing the lowering relationship
$$\underbrace{\big[\,W_0 \,\big|\, W_1 \,\big|\, W_2 \,\big|\, \cdots\,\big]}_{W} = \underbrace{\big[\,P_0 \,\big|\, P_1 \,\big|\, P_2 \,\big|\, \cdots\,\big]}_{P}\,\underbrace{\begin{bmatrix} 1/3 & & & \\ 0 & 1/5 & & \\ -1/3 & 0 & 1/7 & \\ & -1/5 & 0 & 1/9\\ & & \ddots & \ddots & \ddots\end{bmatrix}}_{L_W}.$$
At this stage, we concentrate on the sparsity structure. Following this, the mass matrix is a truncation of an infinite pentadiagonal matrix [2]:
$$M_W := \langle W, W\rangle = L_W^{\top} M_P L_W = \begin{bmatrix} \times & 0 & \times & & & \\ 0 & \times & 0 & \times & & \\ \times & 0 & \times & 0 & \times & \\ & \times & 0 & \times & 0 & \ddots\\ & & \times & 0 & \times & \ddots\\ & & & \ddots & \ddots & \ddots\end{bmatrix}.$$
The entries are straightforward explicit rational expressions. Although the latter method is somewhat less efficient, we will adopt it for clarity in explanation.

3.2. Multiple Intervals

By partitioning the interval $[a,b]$ into $n$ subintervals, where $a = x_0 < x_1 < \cdots < x_n = b$, we can use the affine maps
$$a_i(x) = \frac{2x - x_{i-1} - x_i}{x_i - x_{i-1}},$$
which send $[x_{i-1}, x_i]$ to $[-1,1]$, to construct mapped bubble functions
$$W_k^{i,x}(x) := \begin{cases} W_k(a_i(x)), & x \in [x_{i-1}, x_i],\\ 0, & \text{otherwise},\end{cases}$$
on every subinterval [2]. In addition, we can use the standard piecewise linear hat basis:
$$h_0^{x}(x) := \begin{cases} \dfrac{x_1 - x}{x_1 - x_0}, & x \in [x_0, x_1],\\ 0, & \text{otherwise},\end{cases} \qquad
h_n^{x}(x) := \begin{cases} \dfrac{x - x_{n-1}}{x_n - x_{n-1}}, & x \in [x_{n-1}, x_n],\\ 0, & \text{otherwise},\end{cases}$$
$$h_i^{x}(x) := \begin{cases} \dfrac{x - x_{i-1}}{x_i - x_{i-1}}, & x \in [x_{i-1}, x_i],\\[4pt] \dfrac{x_{i+1} - x}{x_{i+1} - x_i}, & x \in [x_i, x_{i+1}],\\ 0, & \text{otherwise},\end{cases} \qquad \text{for } i = 1, \ldots, n-1.$$
Then, we assemble a block quasimatrix by aggregating the hat and bubble functions that share the same degree:
$$C^{x} := \big[\,\underbrace{h_0^{x}, \ldots, h_n^{x}}_{H^{x}} \,\big|\, \underbrace{W_0^{1,x}, \ldots, W_0^{n,x}}_{W_0^{x}} \,\big|\, \underbrace{W_1^{1,x}, \ldots, W_1^{n,x}}_{W_1^{x}} \,\big|\, \cdots\,\big].$$
We also give the block quasimatrix for the piecewise Legendre basis:
$$P^{x} := \big[\,\underbrace{P_0^{1,x}, \ldots, P_0^{n,x}}_{P_0^{x}} \,\big|\, \underbrace{P_1^{1,x}, \ldots, P_1^{n,x}}_{P_1^{x}} \,\big|\, \cdots\,\big],
\qquad \text{where} \qquad
P_k^{j,x}(x) := \begin{cases} P_k(a_j(x)), & x \in [x_{j-1}, x_j],\\ 0, & \text{otherwise}.\end{cases}$$
In the following, we often omit x for convenience.

3.2.1. Mass Matrix

When confined to each panel, our basis corresponds to a transformed version of the single-panel basis described earlier. Consequently, we can re-express $C^{x}$ in terms of $P^{x}$ [2]. It is important to observe that the corresponding mass matrix is diagonal, and we write it as
$$\langle P, P\rangle = \underbrace{\begin{bmatrix} M_{11} & & & \\ & M_{22} & & \\ & & M_{33} & \\ & & & \ddots\end{bmatrix}}_{M_{P^{x}}},$$
where $\langle\cdot,\cdot\rangle$ now denotes the $L^{2}(a,b)$ inner product. Because the piecewise Legendre polynomials on different elements are entirely decoupled, this matrix can be considered as a direct sum:
$$M_{P^{x}} = \frac{x_1 - x_0}{2}\,M_P \oplus \cdots \oplus \frac{x_n - x_{n-1}}{2}\,M_P.$$
Then, by using it, we can derive the connection matrix [2]:
$$C = P\,\underbrace{\begin{bmatrix} R_{00} & R_{01} & & \\ R_{10} & 0 & R_{12} & \\ & R_{21} & 0 & R_{23}\\ & & \ddots & \ddots & \ddots\end{bmatrix}}_{R_{C^{x}}},$$
where the blocks are as follows:
$$R_{00} = \begin{bmatrix} 1/2 & 1/2 & & \\ & 1/2 & 1/2 & \\ & & \ddots & \ddots\end{bmatrix} \in \mathbb{R}^{n\times(n+1)}, \qquad
R_{10} = \begin{bmatrix} -1/2 & 1/2 & & \\ & -1/2 & 1/2 & \\ & & \ddots & \ddots\end{bmatrix} \in \mathbb{R}^{n\times(n+1)},$$
$$R_{j-1,\,j} = \frac{1}{1+2j}\,I_n, \qquad R_{j+1,\,j} = -\frac{1}{1+2j}\,I_n, \qquad j > 0.$$
Therefore, we obtain the mass matrix
$$M_{C^{x}} := \langle C, C\rangle = R_{C}^{\top}\,M_{P}\,R_{C},$$
whose entries can be described using straightforward rational expressions derived from their components. The structure is significant in this context: every block is banded, and any block not located in the first (hat) row or column is diagonal [2].

3.2.2. Weak Laplacian

Differentiation is represented as
$$\frac{d}{dx}C = P\,\underbrace{\begin{bmatrix} D_{00} & & \\ & D_{11} & \\ & & \ddots\end{bmatrix}}_{D^{x}},$$
where
$$D_{00} = \begin{bmatrix} -1/\delta_1 & 1/\delta_1 & & & \\ & -1/\delta_2 & 1/\delta_2 & & \\ & & \ddots & \ddots & \\ & & & -1/\delta_n & 1/\delta_n\end{bmatrix} \in \mathbb{R}^{n\times(n+1)}, \qquad
D_{kk} = -\begin{bmatrix} 2/\delta_1 & & & \\ & 2/\delta_2 & & \\ & & \ddots & \\ & & & 2/\delta_n\end{bmatrix} \in \mathbb{R}^{n\times n},\ k > 0,$$
with $\delta_i = x_i - x_{i-1}$ for $i \in \{1{:}n\}$.
Therefore, we can determine that the weak Laplacian is also block diagonal:
$$\Delta_{C^{x}} := \langle C', C'\rangle = (D^{x})^{\top}\,M_{P}\,D^{x} = \begin{bmatrix} \times & & \\ & \times & \\ & & \ddots\end{bmatrix}.$$
This kind of structure is significant: it is a block diagonal matrix and, apart from the first block, which is banded, the blocks are diagonal.

3.3. Homogeneous Dirichlet Boundary

In order to enforce homogeneous Dirichlet boundary conditions, we need to remove the basis functions that are non-zero at the boundaries, namely the first and last hat functions. As a result, we use the following:
$$Q^{x} := C^{x}\begin{bmatrix} \mathbf{0}^{\top} & \\ I_{n-1} & \\ \mathbf{0}^{\top} & \\ & I & \\ & & \ddots\end{bmatrix} = \big[\,\underbrace{h_1, \ldots, h_{n-1}}_{\tilde{H}} \,\big|\, \underbrace{W_0^{1}, \ldots, W_0^{n}}_{W_0} \,\big|\, \underbrace{W_1^{1}, \ldots, W_1^{n}}_{W_1} \,\big|\, \cdots\,\big].$$
The discretized operators are similar to the previous ones; for example,
$$M_{Q^{x}} := \langle Q, Q\rangle = P_D^{\top}\,M_{C^{x}}\,P_D,$$
where $P_D$ denotes the boundary-removal (selection) matrix above, and the result retains the same arrowhead sparsity pattern.
As in the previous case, each block is banded, with all blocks outside the first row and column being diagonal. The main difference from the earlier C x case is in the bandwidths of some of the blocks.
As we saw, mass matrices and weak Laplacians exhibit a unique sparsity pattern [2]:
Definition 1.
A Banded-Block-Banded-Arrowhead matrix $A \in \mathbb{R}^{(c+td)\times(c+td)}$, defined by block-bandwidths $(m, n)$ and sub-block-bandwidths $(\epsilon + \eta, \epsilon + \eta)$, has the following characteristics:
(1) It is a matrix with a block-banded structure, where the block-bandwidths are $(m, n)$.
(2) The top-left block $A_0 \in \mathbb{R}^{c\times c}$ follows a banded pattern with bandwidths $(\epsilon + \eta, \epsilon + \eta)$.
(3) The blocks in the first row, denoted $B_k \in \mathbb{R}^{c\times d}$, have bandwidths $(\epsilon, \eta)$.
(4) The blocks in the first column, denoted $C_k \in \mathbb{R}^{d\times c}$, have bandwidths $(\eta, \epsilon)$.
(5) All other blocks, denoted $D_{k,j} \in \mathbb{R}^{d\times d}$, are diagonal.
For example, for the mass matrices of $C$ and $Q$, the block-bandwidths are $(2,2)$, and the sub-block-bandwidths are $(1,1)$.
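To make the definition concrete, the sketch below (our own illustration under assumed block sizes, not the paper's assembly routine) builds a matrix with this sparsity pattern from a banded corner block, banded first-row and first-column blocks, and diagonal interior blocks, and reports its number of non-zeros, which grows only linearly with the matrix dimension.

```python
import numpy as np
import scipy.sparse as sp

def b3_arrowhead(n_hat, p, block_bw=2, sub_bw=1, seed=1):
    """Assemble a random matrix with the B^3-Arrowhead sparsity pattern:
    a banded corner block, banded blocks in the first block row/column,
    and diagonal blocks elsewhere (zero outside the block bandwidth)."""
    rng = np.random.default_rng(seed)

    def banded(m, n, bw):
        dense = rng.standard_normal((m, n))
        i, j = np.indices((m, n))
        return sp.csr_matrix(np.where(np.abs(i - j) <= bw, dense, 0.0))

    d = n_hat - 1                                  # size of the interior blocks
    blocks = [[None] * (p + 1) for _ in range(p + 1)]
    blocks[0][0] = banded(n_hat, n_hat, sub_bw)    # A0, the banded corner block
    for k in range(1, p + 1):
        blocks[0][k] = banded(n_hat, d, sub_bw)    # B_k: first block row
        blocks[k][0] = banded(d, n_hat, sub_bw)    # C_k: first block column
        for j in range(1, p + 1):
            if abs(k - j) <= block_bw:             # D_{k,j}: diagonal blocks
                blocks[k][j] = sp.diags(rng.standard_normal(d))
    return sp.bmat(blocks, format="csr")

A = b3_arrowhead(n_hat=12, p=6)
print(A.shape, A.nnz)   # the number of non-zeros grows linearly with the size
```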

4. Discretizations of the Screened Poisson Equation

To derive the FEM discretization of Equation (1), we first reformulate it in its variational form. Consider the domain $\Omega = [a,b]\times[c,d]$ and define $H^{s}(\Omega) := W^{s,2}(\Omega)$, where $W^{s,q}(\Omega)$ denotes the standard Sobolev spaces for $s > 0$ and $q \ge 1$. The Lebesgue spaces are denoted by $L^{q}(\Omega)$ for $q \ge 1$, and $H_0^{1}(\Omega) := \{v \in H^{1}(\Omega) : v|_{\partial\Omega} = 0\}$, where $v \mapsto v|_{\partial\Omega} : H^{1}(\Omega) \to H^{1/2}(\partial\Omega)$ is the usual trace operator. The dual space of $H_0^{1}(\Omega)$ is $H^{-1}(\Omega) = (H_0^{1}(\Omega))^{*}$. For a Banach space $X$ and a Hilbert space $H$, $\langle\cdot,\cdot\rangle_{X^{*},X}$ denotes the duality pairing between $X$ and $X^{*}$, while $\langle\cdot,\cdot\rangle_{H}$ refers to the inner product in $H$. For $f \in H^{-1}(\Omega)$, the screened Poisson equation can be written in its weak (variational) formulation: find $u \in H_0^{1}(\Omega)$ such that
$$\langle \nabla v, \nabla u\rangle + \omega^{2}\langle v, u\rangle = \langle v, f\rangle_{H_0^{1}(\Omega),\,H^{-1}(\Omega)} \qquad \forall\, v \in H_0^{1}(\Omega),$$
where $\nabla u = (u_x, u_y)$ denotes the gradient. To construct a discretization using the finite element method, subspaces of $H^{1}$ are chosen as the trial space (for the discretization of $u$) and the test space (for the discretization of $v$), referred to as the trial and test bases [2].

4.1. Screened Poisson in 1D

When it comes to problems with zero Dirichlet boundary conditions, we utilize the basis Q up to degree i for both the test and trial spaces:
$$Q_{0:i} := Q\,\underbrace{\begin{bmatrix} I_{n-1} & & & \\ & I_n & & \\ & & \ddots & \\ & & & I_n\\ \mathbf{0} & \cdots & \cdots & \mathbf{0}\\ \vdots & & & \vdots\end{bmatrix}}_{I_{0:i}\,\in\,\mathbb{R}^{\infty\times D}} = \big[\,\tilde{H} \,\big|\, W_0 \,\big|\, \cdots \,\big|\, W_{i-2}\,\big],$$
where $D = (i+1)n - 1$ is the overall number of degrees of freedom. Then, to approximate our solution, we seek $u_i \in \mathbb{R}^{D}$ such that $u(x) \approx u_i(x) := Q_{0:i}(x)\,u_i$. Similarly, we write our test functions as $v_i = Q_{0:i}\,v_i$. The discretization of our weak form is as follows:
$$\langle v_i', u_i'\rangle + \omega^{2}\langle v_i, u_i\rangle
= v_i^{\top}\big\langle(Q_{0:i})', (Q_{0:i})'\big\rangle\,u_i + \omega^{2}\,v_i^{\top}\big\langle Q_{0:i}, Q_{0:i}\big\rangle\,u_i
= v_i^{\top}\Big(\underbrace{I_{0:i}^{\top}\Delta_Q I_{0:i}}_{\Delta_{Q,i}} + \omega^{2}\,\underbrace{I_{0:i}^{\top} M_Q I_{0:i}}_{M_{Q,i}}\Big)u_i.$$
Next, taking the delta function $f = \delta(x)$ as the forcing term, the right-hand side simplifies:
$$\langle v_i, f\rangle = v_i^{\top}\langle Q_{0:i}, \delta\rangle
= v_i^{\top}\begin{bmatrix}\langle\tilde{H}, \delta\rangle\\ \langle W_0, \delta\rangle\\ \langle W_1, \delta\rangle\\ \vdots\\ \langle W_{i-2}, \delta\rangle\end{bmatrix}
= v_i^{\top}\begin{bmatrix}\tilde{H}(0)^{\top}\\ W_0(0)^{\top}\\ W_1(0)^{\top}\\ \vdots\\ W_{i-2}(0)^{\top}\end{bmatrix}.$$
For the last equality, we use the sifting property of the delta function. Then, imposing this equation for every $v_i \in \mathbb{R}^{D}$ results in a $D\times D$ linear system:
$$\big(\Delta_{Q,i} + \omega^{2} M_{Q,i}\big)\,u_i = \begin{bmatrix}\tilde{H}(0)^{\top}\\ W_0(0)^{\top}\\ W_1(0)^{\top}\\ \vdots\\ W_{i-2}(0)^{\top}\end{bmatrix},$$
where the right-hand side always has many zeros (i.e., it is sparse), since only basis functions that are non-zero at the source point contribute.
Figure 2 shows the numerical solution of the 1D Poisson equation with a Dirac delta function source located at the origin. The result demonstrates the expected piecewise linear profile, with a maximum at $x = 0$ and zero values at the boundaries. This matches the analytical form of the Green’s function in one dimension, given by $u(x) = \tfrac{1}{2}(1-|x|)$. The agreement validates the ability of our method to capture the singular structure induced by the delta source.
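A minimal numerical illustration of this behavior, assuming only the piecewise-linear (hat) part of the basis and $\omega = 0$, is sketched below; the load vector is obtained purely from the sifting property, and the computed solution reproduces $u(x) = \tfrac{1}{2}(1-|x|)$ to machine precision because the Green's function itself is piecewise linear. The paper's full method additionally includes the bubble functions, which this simplified sketch omits.

```python
import numpy as np

# Piecewise-linear FEM for  -u'' + omega^2 u = delta(x)  on [-1, 1], u(+-1) = 0.
# Only the lowest-order (hat-function) part of the hierarchical basis is used,
# but it already shows how the delta forcing enters the load vector.
omega = 0.0
n_el = 64                                # even, so that x = 0 is a grid node
x = np.linspace(-1.0, 1.0, n_el + 1)
h = np.diff(x)

ndof = n_el + 1
K = np.zeros((ndof, ndof))               # stiffness matrix (weak Laplacian)
M = np.zeros((ndof, ndof))               # mass matrix
for e in range(n_el):
    idx = (e, e + 1)
    Ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h[e]
    Me = h[e] / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    for a in range(2):
        for b in range(2):
            K[idx[a], idx[b]] += Ke[a, b]
            M[idx[a], idx[b]] += Me[a, b]

# Sifting property: <h_i, delta> = h_i(0).  With a node at x = 0 the load
# vector is just the unit vector at that node.
f = np.zeros(ndof)
f[n_el // 2] = 1.0

# Homogeneous Dirichlet conditions: drop the two boundary hat functions.
A_sys = K + omega**2 * M
u = np.zeros(ndof)
u[1:-1] = np.linalg.solve(A_sys[1:-1, 1:-1], f[1:-1])

# For omega = 0 the exact Green's function is u(x) = (1 - |x|)/2.
print(np.max(np.abs(u - 0.5 * (1.0 - np.abs(x)))))   # ~ machine precision
```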

4.2. Screened Poisson in 2D

In 2D, we focus on partitions $x_1 < \cdots < x_m$ and $y_1 < \cdots < y_n$, truncating at degrees $i$ and $j$, respectively. The basis can then be expressed using tensor products. We obtain
$$u(x,y) \approx u_{ij}(x,y) := Q_{0:i}^{x}(x)\,U_{ij}\,Q_{0:j}^{y}(y)^{\top},$$
where the unknown coefficients are collected in $U_{ij} \in \mathbb{R}^{M\times N}$. In addition, we can represent $f(x,y)$ as $\delta(x,y)$.
Then, we consider the test function $v_{ij}(x,y) = Q_{0:i}^{x}(x)\,V_{ij}\,Q_{0:j}^{y}(y)^{\top}$. By inserting $u$ and $v$ into the weak form of the screened Poisson equation, the left-hand side of our weak formulation becomes as follows:
$$\begin{aligned}
\langle \nabla v, \nabla u\rangle + \omega^{2}\langle v, u\rangle
&= \langle \partial_x v_{ij}, \partial_x u_{ij}\rangle + \langle \partial_y v_{ij}, \partial_y u_{ij}\rangle + \omega^{2}\langle v_{ij}, u_{ij}\rangle\\
&= \operatorname{vec}(V_{ij})^{\top}\operatorname{vec}\!\Big(\Delta_{Q,i}^{x}\,U_{ij}\,M_{Q,j}^{y} + M_{Q,i}^{x}\,U_{ij}\,\Delta_{Q,j}^{y} + \omega^{2}\,M_{Q,i}^{x}\,U_{ij}\,M_{Q,j}^{y}\Big),
\end{aligned}$$
where the second equality follows by inserting the tensor-product expansions of $u_{ij}$ and $v_{ij}$ and using the one-dimensional matrices $\Delta_{Q,i}^{x} = \langle (Q_{0:i}^{x})', (Q_{0:i}^{x})'\rangle$ and $M_{Q,i}^{x} = \langle Q_{0:i}^{x}, Q_{0:i}^{x}\rangle$ (and analogously in $y$).
Then, we use the delta function $f(x,y) = \delta(x,y) = \delta(x)\,\delta(y)$, so, by the sifting property in each variable, the right-hand side becomes
$$\langle v, f\rangle = v_{ij}(0,0) = \operatorname{vec}(V_{ij})^{\top}\operatorname{vec}\!\Big(Q_{0:i}^{x}(0)^{\top}\,Q_{0:j}^{y}(0)\Big), \qquad
Q_{0:i}^{x}(0)^{\top} = \begin{bmatrix}\tilde{H}(0)^{\top}\\ W_0(0)^{\top}\\ W_1(0)^{\top}\\ \vdots\\ W_{i-2}(0)^{\top}\end{bmatrix},$$
so that the load matrix is the rank-one outer product of the one-dimensional evaluation vectors at the source point, with entries of the form $\tilde{H}_k(0)\,\tilde{H}_l(0)$, $\tilde{H}_k(0)\,W_{0,l}(0)$, and so on.
Therefore, we know
$$\Delta_{Q,i}^{x}\,U_{ij}\,M_{Q,j}^{y} + M_{Q,i}^{x}\,U_{ij}\,\Delta_{Q,j}^{y} + \omega^{2}\,M_{Q,i}^{x}\,U_{ij}\,M_{Q,j}^{y} = Q_{0:i}^{x}(0)^{\top}\,Q_{0:j}^{y}(0).$$
Also, we can rearrange it into a Sylvester-type equation:
$$\Big(\Delta_{Q,i}^{x} + \tfrac{\omega^{2}}{2} M_{Q,i}^{x}\Big)\,U_{ij}\,M_{Q,j}^{y} + M_{Q,i}^{x}\,U_{ij}\,\Big(\Delta_{Q,j}^{y} + \tfrac{\omega^{2}}{2} M_{Q,j}^{y}\Big) = Q_{0:i}^{x}(0)^{\top}\,Q_{0:j}^{y}(0).$$
The right-hand side of this equation likewise has a sparse (indeed rank-one) structure. Solving this equation efficiently is therefore desirable, despite its structural complexity. For a generalized Sylvester equation,
$$A\,U\,B - C\,U\,D = E,$$
the Alternating Direction Implicit (ADI) method [1,2], combined with reverse Cholesky factorization (a decomposition of a matrix into the form $A = L L^{\top}$, with $L$ lower triangular), is an effective way to compute $U$. The outline of the algorithm is given in Algorithm 1, and a simplified numerical sketch follows it. The shift parameters $p_i$ and $q_i$ used in Algorithm 1 are computed using an explicit analytic formula derived from the theory of elliptic functions and conformal mappings. Specifically, the formulas for $p_i$ and $q_i$ are explicitly given in ([1], Equation (2.4)), based on the Jacobi elliptic function, the complete elliptic integral, and a Möbius transformation that maps reference intervals to the spectral ranges of the system matrices. This choice guarantees near-optimal convergence of the ADI iterations. The convergence rate satisfies the exponential decay bound, as described in ([1], Equation (2.5)).
Algorithm 1 Generalized ADI Solver
  • Input: Symmetric matrices $A, C \in \mathbb{R}^{m\times m}$ and $B, D \in \mathbb{R}^{n\times n}$, a matrix $E \in \mathbb{R}^{m\times n}$, and an error tolerance $\epsilon > 0$.
  • Output: A matrix $U \in \mathbb{R}^{m\times n}$ such that $A U B - C U D \approx E$.
  • Preprocessing Phase:
      1: Compute the generalized eigenvalue intervals $\sigma(A, C)$ and $\sigma(B, D)$ via banded symmetric generalized eigensolvers, so that $\sigma(\tilde{A}) \subseteq [a_1, a_2]$ and $\sigma(\tilde{B}) \subseteq [b_1, b_2]$, where $a_1, b_1$ are the minimal and $a_2, b_2$ the maximal eigenvalues.
      2: Determine the number of ADI iterations:
         $$K = \left\lceil \frac{\log(16\,\eta)\,\log(4/\epsilon)}{\pi^{2}} \right\rceil, \qquad \eta = \frac{|b_1 - a_1|\,|b_2 - a_2|}{|b_1 - a_2|\,|b_2 - a_1|}.$$
      3: Calculate the ADI shift parameters $\{p_i\}_{i=1}^{K}$ and $\{q_i\}_{i=1}^{K}$, ensuring $p_i > 0$ and $q_i < 0$.
      4: Precompute reverse Cholesky factorizations of $A - q_i C$ and $D - p_i B$ for each $i$.
  • Iterative Solution Phase:
      1: Initialize $W_0 = 0$.
      2: for $i = 1$ to $K$ do
      3:     Compute the intermediate iterates:
             $$W_{i-\frac{1}{2}} = \big((A - p_i C)\,W_{i-1} - E\big)\,(D - p_i B)^{-1}, \qquad W_i = (A - q_i C)^{-1}\big(W_{i-\frac{1}{2}}\,(D - q_i B) + E\big).$$
      4: end for
      5: Output the final result: $U = W_K\,C^{-1}$.
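To illustrate the structure of the iteration, the sketch below (our own illustration, not the paper's implementation) runs a plain ADI sweep on a standard Sylvester equation $\tilde{A}X - X\tilde{B} = F$ with well-separated positive and negative spectra; this is the form obtained from $AUB - CUD = E$ after multiplying by $C^{-1}$ on the left and $B^{-1}$ on the right. For brevity we use simple geometric shifts instead of the optimal elliptic-function shifts of ([1], Equation (2.4)), and dense solves instead of banded reverse Cholesky factorizations, so the sketch converges more slowly than the method above, but it follows the same alternating shifted-solve pattern.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m = 60

# Test problem  A~ X - X B~ = F  with sigma(A~) > 0 and sigma(B~) < 0, mimicking
# the spectral separation that arises for the screened Poisson system.
Q1, _ = np.linalg.qr(rng.standard_normal((m, m)))
Q2, _ = np.linalg.qr(rng.standard_normal((m, m)))
lam = np.geomspace(1.0, 200.0, m)
A_t = Q1 @ np.diag(lam) @ Q1.T            # SPD, spectrum in [1, 200]
B_t = -(Q2 @ np.diag(lam) @ Q2.T)         # negative definite, spectrum in [-200, -1]
F = rng.standard_normal((m, m))

# ADI sweep: shifts p_j sit near sigma(A~), shifts q_j near sigma(B~).
# NOTE: simple geometric shifts are used here; the paper prescribes the optimal
# elliptic-function shifts of [1, Eq. (2.4)] instead.
K = 25
shifts_p = np.geomspace(1.0, 200.0, K)
shifts_q = -shifts_p
I = np.eye(m)
X = np.zeros((m, m))
for p, q in zip(shifts_p, shifts_q):
    rhs = (A_t - p * I) @ X - F
    X = np.linalg.solve((B_t - p * I).T, rhs.T).T              # right shifted solve
    X = np.linalg.solve(A_t - q * I, X @ (B_t - q * I) + F)    # left shifted solve

X_ref = solve_sylvester(A_t, -B_t, F)     # reference: solves A~ X + X (-B~) = F
print(np.linalg.norm(X - X_ref) / np.linalg.norm(X_ref))       # small relative error
```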
By applying this method to our problem, we obtain the coefficient matrix U i j , from which the desired solution u ( x , y ) can be reconstructed. We then present the numerical solution to the Poisson equation with a delta function as the source term. Figure 3 shows a 2D contour plot of the numerical solution. The concentric level lines reflect the radial symmetry of the Green’s function. Figure 4 provides a 3D surface view of the same solution, highlighting the sharp peak at the origin and its smooth radial decay, consistent with the analytical form of the Green’s function, which serves as the impulse response to the Poisson operator.

4.3. Screened Poisson in 3D

Consider Poisson’s equation on the cube with homogeneous Dirichlet conditions:
$$-(u_{xx} + u_{yy} + u_{zz}) = f, \qquad (x,y,z) \in [-1,1]^{3}.$$
We can discretize the above equation as follows:
$$(D_{xx} + D_{yy} + D_{zz})\,\operatorname{vec}(X) = \operatorname{vec}(F), \tag{3}$$
where $X, F \in \mathbb{C}^{n\times n\times n}$, $D_{xx} = A\otimes A\otimes I$, $D_{yy} = A\otimes I\otimes A$, and $D_{zz} = I\otimes A\otimes A$ [1]. Here, $A = D^{-1}M$ is a pentadiagonal matrix, $I$ is the $n\times n$ identity matrix, `$\otimes$' denotes the Kronecker product, and $\operatorname{vec}(\cdot)$ is the vectorization operator.
We aim to solve Equation (3) by extending ADI without explicitly forming a Kronecker product matrix. However, it is not straightforward to generalize ADI to handle more than two terms simultaneously [1,20]. Therefore, we utilize the nested ADI method, as outlined in [4]. This approach entails combining the first two terms and executing the ADI-like iteration as follows:
$$(D_{zz} - p_{i,1} I)\,\operatorname{vec}(X_{i+1/2}) = \operatorname{vec}(F) - \big((D_{xx} + D_{yy}) - p_{i,1} I\big)\operatorname{vec}(X_i), \tag{4}$$
$$\big((D_{xx} + D_{yy}) - q_{i,1} I\big)\operatorname{vec}(X_{i+1}) = \operatorname{vec}(F) - (D_{zz} - q_{i,1} I)\,\operatorname{vec}(X_{i+1/2}) \tag{5}$$
for a reasonable selection of $p_{i,1}$ and $q_{i,1}$ [1]. Because $D_{xx}$, $D_{yy}$, and $D_{zz}$ are Kronecker products, it can be shown that the eigenvalue bounds of $D_{xx}$, $D_{yy}$, and $D_{zz}$ are the same [12,20].
In order to solve Equation (5), a nested ADI iteration can be applied to the matrices $D_{xx} - \tfrac{q_{i,1}}{2} I$ and $D_{yy} - \tfrac{q_{i,1}}{2} I$ [1]:
$$F_i - \Big(D_{yy} - \tfrac{q_{i,1}}{2} I - p_{j,2} I\Big)\operatorname{vec}(Y_j) = \Big(D_{xx} - \tfrac{q_{i,1}}{2} I - p_{j,2} I\Big)\operatorname{vec}(Y_{j+1/2}), \tag{6}$$
$$F_i - \Big(D_{xx} - \tfrac{q_{i,1}}{2} I - q_{j,2} I\Big)\operatorname{vec}(Y_{j+1/2}) = \Big(D_{yy} - \tfrac{q_{i,1}}{2} I - q_{j,2} I\Big)\operatorname{vec}(Y_{j+1}). \tag{7}$$
When it converges, we can see that the solution to Equation (5) is given by $X_{i+1} := Y_{j+1}$. With the optimal selection of $p_{j,2}$ and $q_{j,2}$, we anticipate that Equations (6) and (7) will converge within $O(\log n)$ iterations.
In the final step, we need to solve the three linear systems (4), (6), and (7), each of which involves a shifted Kronecker system. These Kronecker systems are degenerate in one dimension because of the identity matrix. Consequently, we can separate (4), (6), and (7) and solve $n$ independent equations individually [1]. For instance, if we aim to solve (4) for $X_{i+1/2}$, we just need to handle the following:
$$F_i(:,:,t) = A\,X_{i+\frac{1}{2}}(:,:,t)\,A - p_{i,1}\,X_{i+\frac{1}{2}}(:,:,t), \qquad 1 \le t \le n, \tag{8}$$
where $X(:,:,t)$ signifies the $t$-th slice of $X$ along the third dimension and $F_i = \operatorname{vec}(F) - \big((D_{xx} + D_{yy}) - p_{i,1} I\big)\operatorname{vec}(X_i)$, reshaped as an $n\times n\times n$ tensor. For solving Equation (8), an additional nested ADI iteration can be executed. Next, we can rewrite (8) as follows:
$$-A^{-1}F_i(:,:,t) = p_{i,1}\,A^{-1}X_{i+\frac{1}{2}}(:,:,t) - X_{i+\frac{1}{2}}(:,:,t)\,A.$$
The iteration for each value of $t$ is given as follows:
$$A^{-1}F_i(:,:,t) - \big(p_{i,1}A^{-1} - p_{l,3} I\big)Z_l = Z_{l+\frac{1}{2}}\big(A - p_{l,3} I\big),$$
$$A\,F_i(:,:,t) - A\,Z_{l+\frac{1}{2}}\big(A - p_{l,3} I\big) = \big(p_{i,1} I - q_{l,3} A\big)Z_{l+1}.$$
Once the iteration converges, the solution to (4) for each $t$ is obtained as $X_{i+1/2}(:,:,t) := Z_{l+1}$. It is important to note that (7) has been multiplied by $A$ to allow for faster solving of (6) and (7).
In this paper, we again take the delta function $f(x,y,z) = \delta(x,y,z)$ as the forcing term. From the previous discussion, the corresponding right-hand side $F$ is a sparse tensor obtained by evaluating the basis functions at the source point (the sifting property).
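As a structural cross-check, the small sketch below assembles the three Kronecker terms explicitly and solves the resulting system with a sparse direct solver; this brute-force route is only feasible for small $n$, but it is useful for validating a nested-ADI implementation. The 1D factor T is a standard second-difference stand-in (not the paper's pentadiagonal $A = D^{-1}M$), the screening term $\omega^{2}$ is included, and the delta forcing appears as a single scaled entry of F.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Brute-force reference for a Kronecker-structured 3D system like Equation (3).
# T is a finite-difference stand-in for the 1D operator; the point here is the
# tensor-product assembly and the sparse delta right-hand side.
n = 21                                     # interior grid points per direction (odd)
h = 2.0 / (n + 1)
omega = 1.0
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2   # ~ -d^2/dx^2
I = sp.identity(n)

L3 = (sp.kron(sp.kron(T, I), I) +          # x-direction term
      sp.kron(sp.kron(I, T), I) +          # y-direction term
      sp.kron(sp.kron(I, I), T) +          # z-direction term
      omega**2 * sp.identity(n**3)).tocsc()

F = np.zeros((n, n, n))
F[n // 2, n // 2, n // 2] = 1.0 / h**3     # discrete delta at the origin
X = spsolve(L3, F.ravel()).reshape(n, n, n)
print(X[n // 2, n // 2, n // 2])           # peak of the impulse response
```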
Figure 5 illustrates the numerical solution to the Poisson equation with a delta function source in three dimensions. To visualize the behavior of the solution, we plot cross-sectional slices of the scalar field on the coordinate planes x y , y z , and x z . The result shows a bright central peak at the origin, consistent with the singular nature of the Dirac delta source, and smooth radial decay outward. The star-like shape arises from the intersection of three orthogonal heatmaps, each corresponding to a fixed coordinate plane. It is important to note, however, that while this method is recognized for its optimal complexity in spectral methods for solving Equation (2), it is not considered a practical algorithm. The reason is that the inner iterations of the Alternating Direction Implicit (ADI) method must be computed with machine precision to ensure convergence of the outer iterations [1].

5. Computational Cost Analysis

The computational cost of the proposed method is primarily driven by two factors: the sparse system matrix construction and the iterative ADI solution procedure. Below, we discuss the computational complexity of each step.

5.1. Sparse Matrix Construction

The system matrix in our method is sparse, resulting from the use of Legendre polynomials as basis functions in the FEM framework. These polynomials are orthogonal, leading to a system matrix with a Banded-Block-Banded-Arrowhead ($B^3$-Arrowhead) sparsity pattern. The sparsity structure significantly reduces the computational cost, as only a limited number of non-zero entries are present in the matrix, particularly in higher dimensions.
In the 1D case, the system matrix consists of the mass matrix and the weak Laplacian matrix. The mass matrix arises from inner products of the basis functions, while the weak Laplacian matrix corresponds to inner products of their derivatives. The construction of the mass and weak Laplacian matrices involves matrix multiplication, which requires $O(N)$ operations, where $N$ is the number of degrees of freedom in one direction. When extending to 2D and 3D, the complexity increases to $O(N^2)$ and $O(N^3)$, respectively, because of the increased number of degrees of freedom. Despite this growth, the banded-block-banded structure of the matrices ensures that they remain sparse, reducing both memory usage and computational time.

5.2. ADI Iterative Algorithm

The solution of the system is performed using the Alternating Direction Implicit (ADI) algorithm, which has been shown to achieve near-optimal computational complexity.
For the preprocessing phase, which includes constructing and factorizing matrices, the computational cost is O ( N 2 + K N ) , where K is the number of iterations. The O ( N 2 ) term comes from the matrix construction (especially for 2D), while the O ( K N ) term accounts for the cost of computing reverse Cholesky factorization. When considering the iterative solution phase, the computational cost for obtaining a solution is O ( K N 2 ) in 2D and O ( K N 3 ) in 3D, as each ADI step involves performing multiplication and inversion. The number of iterations K remains small due to the choice of shift parameters, keeping the method efficient in practice.

5.3. Scalability and Parallelization

The method is highly parallelizable due to the fact that the ADI iterations are independent in each direction. This parallelism enables the method to scale efficiently for large grids, with each iteration being handled by multiple processors or computational nodes in parallel. Thus, the overall performance can be further improved for large-scale 2D and 3D problems, making the method well-suited for high-performance computing environments.

6. Conclusions

This paper presents a numerical framework for solving the screened Poisson equation with Dirac delta forcing using sparse finite element methods. By adopting integrated Legendre functions as basis functions, we derive structured variational formulations that lead to sparse system matrices, specifically exhibiting a Banded-Block-Banded-Arrowhead ($B^3$-Arrowhead) structure.
We develop and analyze discretizations that effectively resolve the singular behavior induced by the delta function. In higher dimensions, we employ the Alternating Direction Implicit (ADI) method with reverse Cholesky factorization to preserve sparsity and ensure computational efficiency.
Numerical results demonstrate that the solution exhibits a peak at the origin and decays symmetrically outward, consistent with the characteristics of Green’s functions in classical potential theory. These results further indicate that combining sparse FEM structures with integrated Legendre functions yields a robust and stable framework for solving PDEs with singular forcing—an area not systematically addressed in previous work.
The proposed approach offers a general and effective tool for handling impulsive source terms in elliptic PDEs, with promising applications in physics, engineering, and computational mathematics.
Future work may explore extensions to more complex domains, such as circular or spherical geometries, as well as alternative forcing terms beyond the delta function. Additionally, extending the framework to accommodate mixed boundary conditions (e.g., combinations of Dirichlet and Neumann conditions) would enhance its applicability in practical scenarios and would introduce new challenges in preserving the sparse structure and solver efficiency.

Author Contributions

Conceptualization, L.T. and Y.T.; methodology, L.T. and Y.T.; software, Y.T.; validation, L.T. and Y.T.; formal analysis, L.T.; investigation, Y.T.; resources, L.T.; data curation, Y.T.; writing—original draft preparation, L.T. and Y.T.; writing—review and editing, L.T. and Y.T.; visualization, Y.T.; supervision, L.T.; project administration, L.T.; funding acquisition, L.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (Grant No. 42061064) and Science and Disruptive Technology Program, AIRCAS (No. E3Z211010F).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FEM: Finite Element Method
ADI: Alternating Direction Implicit

References

  1. Fortunato, D.; Townsend, A. Fast Poisson solvers for spectral methods. IMA J. Numer. Anal. 2020, 40, 1994–2018. [Google Scholar] [CrossRef]
  2. Knook, K.; Olver, S.; Papadopoulos, I. Quasi-optimal complexity hp-FEM for Poisson on a rectangle. arXiv 2024, arXiv:2402.11299. [Google Scholar]
  3. Babuška, I.; Craig, A.; Mandel, J.; Pitkäranta, J. Efficient preconditioning for the p-version finite element method in two dimensions. SIAM J. Numer. Anal. 1991, 28, 624–661. [Google Scholar] [CrossRef]
  4. Wachspress, E.L. Three-variable alternating-direction-implicit iteration. Comput. Math. Appl. 1994, 27, 1–7. [Google Scholar] [CrossRef]
  5. Schwab, C. p-and hp-Finite Element Methods: Theory and Applications in Solid and Fluid Mechanics; Clarendon Press: Oxford, UK, 1998. [Google Scholar]
  6. Szabó, B.; Babuška, I. Introduction to Finite Element Analysis: Formulation, Verification and Validation; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  7. Beuchler, S.; Schoeberl, J. New shape functions for triangular p-FEM using integrated Jacobi polynomials. Numer. Math. 2006, 103, 339–366. [Google Scholar] [CrossRef]
  8. Babuška, I.; Suri, M. The p and h-p versions of the finite element method, basic principles and properties. SIAM Rev. 1994, 36, 578–632. [Google Scholar] [CrossRef]
  9. Beuchler, S.; Pechstein, C.; Wachsmuth, D. Boundary concentrated finite elements for optimal boundary control problems of elliptic PDEs. Comput. Optim. Appl. 2012, 51, 883–908. [Google Scholar] [CrossRef]
  10. Beuchler, S.; Pillwein, V. Sparse shape functions for tetrahedral p-FEM using integrated Jacobi polynomials. Computing 2007, 80, 345–375. [Google Scholar] [CrossRef]
  11. Dubiner, M. Spectral methods on triangles and other domains. J. Sci. Comput. 1991, 6, 345–390. [Google Scholar] [CrossRef]
  12. Houston, P.; Schwab, C.; Süli, E. Discontinuous hp-finite element methods for advection-diffusion-reaction problems. SIAM J. Numer. Anal. 2002, 39, 2133–2163. [Google Scholar] [CrossRef]
  13. Jia, L.; Li, H.; Zhang, Z. Sparse spectral-Galerkin method on an arbitrary tetrahedron using generalized Koornwinder polynomials. J. Sci. Comput. 2022, 91, 22. [Google Scholar] [CrossRef]
  14. Karniadakis, G.; Sherwin, S.J. Spectral/hp Element Methods for Computational Fluid Dynamics; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  15. Pavarino, L.F. Additive Schwarz methods for the p-version finite element method. Numer. Math. 1993, 66, 493–515. [Google Scholar] [CrossRef]
  16. Snowball, B.; Olver, S. Sparse spectral and p-finite element methods for partial differential equations on disk slices and trapeziums. Stud. Appl. Math. 2020, 145, 3–35. [Google Scholar] [CrossRef]
  17. Averbuch, A.; Israeli, M.; Vozovoi, L. A fast Poisson solver of arbitrary order accuracy in rectangular regions. SIAM J. Sci. Comput. 1998, 19, 933–952. [Google Scholar] [CrossRef]
  18. Wikipedia contributors. Dirac delta function. 2024. Available online: https://en.wikipedia.org/wiki/Dirac_delta_function (accessed on 10 January 2025).
  19. Olver, S.; Slevinsky, R.M.; Townsend, A. Fast algorithms using orthogonal polynomials. Acta Numer. 2020, 29, 573–699. [Google Scholar] [CrossRef]
  20. Olver, S.; Townsend, A. A fast and well-conditioned spectral method. SIAM Rev. 2013, 55, 462–489. [Google Scholar] [CrossRef]
Figure 1. Visual image of Dirac delta function.
Figure 2. Numerical solution to the 1D Poisson equation with a delta function source, $-\Delta u(x) = \delta(x)$, subject to homogeneous Dirichlet boundary conditions on $[-1,1]$. The solution exhibits a sharp peak at $x = 0$ and linear decay to the boundaries, consistent with the analytical Green’s function.
Figure 3. 2D contour plot of the solution to $-\Delta u(x,y) = \delta(x,y)$. The solution exhibits radial symmetry and a sharp peak at the origin, corresponding to the Dirac delta source.
Figure 4. 3D surface plot of the solution to $-\Delta u(x,y) = \delta(x,y)$. The plot shows the peaked behavior around the origin and symmetric decay outward, consistent with the structure of the Green’s function.
Figure 5. Cross-sectional heatmap views of the numerical solution to the 3D Poisson equation with a Dirac delta source, $-\Delta u(x,y,z) = \delta(x,y,z)$. The solution exhibits radial symmetry and a sharp peak at the origin, decaying smoothly in all directions. The figure shows slices of the solution on the $xy$-, $xz$-, and $yz$-planes.