Article

Fourier–Gegenbauer Integral Galerkin Method for Solving the Advection–Diffusion Equation with Periodic Boundary Conditions

by Kareem T. Elgindy 1,2

1 Department of Mathematics and Sciences, College of Humanities and Sciences, Ajman University, Ajman P.O. Box 346, United Arab Emirates
2 Nonlinear Dynamics Research Center (NDRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
Computation 2025, 13(9), 219; https://doi.org/10.3390/computation13090219
Submission received: 18 August 2025 / Revised: 3 September 2025 / Accepted: 4 September 2025 / Published: 9 September 2025
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)

Abstract

This study presents the Fourier–Gegenbauer integral Galerkin (FGIG) method, a new numerical framework for solving the one-dimensional advection–diffusion (AD) equation with spatially symmetric periodic boundary conditions. The FGIG method combines Fourier series for spatial periodicity with Gegenbauer polynomials for temporal integration within a Galerkin framework, yielding highly accurate numerical and semi-analytical solutions with exponential convergence and reduced computational cost compared to traditional methods. Unlike traditional approaches, this method eliminates the need for time-stepping procedures by reformulating the problem as a system of integral equations, reducing error accumulation over long-time simulations and improving computational efficiency. Key contributions include exponential convergence rates for smooth solutions, robustness under oscillatory conditions, and an inherently parallelizable structure, enabling scalable computation for large-scale problems. Additionally, the method introduces a barycentric formulation of the shifted Gegenbauer–Gauss (SGG) quadrature to ensure high accuracy and stability for relatively low Péclet numbers. This formulation simplifies the evaluation of integrals, making the method faster and more reliable for diverse problems. Numerical experiments validate the method’s superior performance over traditional techniques, such as finite difference, finite element, and spline-based methods, achieving near-machine precision with significantly fewer mesh points. These results demonstrate its potential for extension to higher-dimensional problems and diverse applications in computational mathematics and engineering. The method’s fusion of spectral precision and integral reformulation marks a significant advance in numerical PDE solvers, offering a scalable, high-fidelity alternative to conventional time-stepping techniques.

1. Introduction

The AD equation is a fundamental PDE that governs the transport of a scalar quantity within a fluid medium. This scalar quantity may represent various physical properties, such as heat, pollutants, or chemical concentrations. The AD equation encapsulates the combined influence of advection, the transport driven by fluid flow, and diffusion, which arises from random molecular motion. This equation finds wide-ranging applications in fields such as fluid mechanics, environmental science, biophysics, and materials science. Accurate solutions to the AD equation are indispensable for critical applications, such as environmental monitoring and control, engineering design, climate modeling, and medical applications. Consequently, the AD equation is a crucial tool for understanding and predicting the behaviors of various natural and engineered systems.
Over the past decade, numerous numerical methods have been developed, each addressing specific stability, accuracy, or computational constraints. Appadu et al. [1] analyzed upwind and non-standard finite difference schemes for AD reaction problems, noting their dissipative properties. Khalsaraei and Jahandizi [2] evaluated and improved positivity in finite difference methods. Al-khafaji and Al-Zubaidi [3] developed a 1D model for instantaneous spills in river systems by numerically solving the advection–dispersion equation along with the shallow water equations using finite difference methods. Hwang and Son [4] utilized a second-order, well-balanced, positivity-preserving central-upwind scheme based on the finite volume method to create an efficient numerical scheme that accurately captures contact discontinuities in scalar transport within a shallow water flow environment. Solis and Gonzalez [5] presented non-standard finite difference schemes for modeling infection dynamics. Sun and Zhang [6] proposed high-order compact finite difference schemes, but these require dense grids, increasing computational costs, especially for diffusion-dominated regimes. Jena and Senapati [7] combined a quartic-order cubic B-spline with Crank–Nicolson and FEM, achieving high accuracy but requiring fine meshes. Cerfontaine et al. [8] applied FEM to borehole heat exchangers, relying on explicit boundary enforcement, which limits scalability for periodic problems. Wang and Yuan [9] introduced nonlinear correction techniques, ensuring discrete extremum principles. Wang and Yuan [10] presented a nonlinear correction technique for finite element methods applied to advection–diffusion problems. Sejekan and Ollivier-Gooch [11] enhanced diffusive flux accuracy. Chernyshenko et al. [12] combined the finite volume method and FEM for fractured media. Kramarenko et al. [13] developed a nonlinear correction for subsurface flows. Mei et al. 
[14] proposed a unified finite volume PINN (UFV-PINN) approach to improve the accuracy and efficiency of solving heterogeneous PDEs using PINNs. UFV-PINN combines sub-domain decomposition, finite volume discretization, and conventional numerical solvers within the PINN framework. Cardone et al. [15] integrated exponential fitting with IMEX time integration for AD problems, offering high accuracy but struggling with oscillatory solutions. Bokanowski and Simarmata [16] developed high-order, unconditionally stable semi-Lagrangian schemes with discontinuous Galerkin elements, effective for first- and second-order PDEs but reliant on iterative time stepping. Bakhtiari et al. [17] introduced a parallelizable semi-Lagrangian scheme, yet it lacks the integral equation framework of FGIG. Despite advancements, significant challenges remain: iterative time stepping in finite difference [6], FEM [8], and semi-Lagrangian methods [16] accumulates temporal errors, particularly in diffusion-dominated regimes; high accuracy often demands dense meshes [7], escalating computational costs; stability issues persist at low Péclet numbers or with oscillatory solutions, as noted by Benedetto et al. [18] for FEM and O’Sullivan [19] for Runge–Kutta methods; few methods, except for spectral integral approaches [20], leverage integral equation formulations for periodic boundary conditions, limiting accuracy and stability; sequential computations in traditional methods [21] hinder scalability, unlike parallelizable PS methods [22]. The FGIG method addresses these by combining Fourier series and Gegenbauer polynomials in an integral equation framework, eliminating time stepping, ensuring stability at low Péclet numbers, and enabling parallel computation.
This study introduces the FGIG method for solving the 1D AD equation with periodic boundary conditions, ideal for diffusion-dominated cases due to its stability and minimal temporal error accumulation (see Section 6 and Section 8). The FGIG method integrates Fourier series for spatial periodicity and Gegenbauer polynomials for temporal integration, achieving high accuracy, efficiency, and stability. Unlike traditional Galerkin methods, which yield ODEs requiring time integration, FGIG uses integral equations solved directly without initial or boundary conditions. Key features include: (i) no time stepping, reducing temporal errors (Section 8); (ii) direct solution of integral equations, lowering computational cost (Section 5 and Section 8); and (iii) a barycentric SGG quadrature for stable, efficient integral evaluation.
This work contributes: (i) a novel FGIG method, combining Fourier series and Gegenbauer polynomials for the 1D AD equation; (ii) a semi-analytical solution for high accuracy; (iii) exponential convergence for smooth solutions, outperforming polynomial-based methods; (iv) stability at low Péclet numbers (Gegenbauer parameter range $[0, 0.5]$) under oscillatory conditions; (v) a parallelizable formulation for efficient large-scale computation; (vi) elimination of temporal errors via integral equations; (vii) improved accuracy with a barycentric SGG quadrature; (viii) implicit incorporation of periodic boundary conditions via a Fourier basis; and (ix) semi-analytical solutions for result verification. This is the first method integrating Fourier series and Gegenbauer polynomials into a Galerkin framework for the 1D AD equation, distinct from prior uses of Gegenbauer polynomials [19,23,24]. This synthesis not only fills a methodological gap in the literature but also establishes a new paradigm for solving periodic PDEs with exponential accuracy.
The remainder of this paper is structured as follows. Section 3 introduces the problem under study. Section 4 presents the FGIG method, describing its formulation and implementation. In Section 5, we analyze the computational cost and time complexity of the FGIG method. Section 6 presents a rigorous analysis of the error and stability characteristics of the method. Section 7 derives a semi-analytical solution within the FGIG framework, offering a highly accurate alternative for specific cases. Section 8 provides computational results that validate the accuracy and efficiency of the FGIG method. Section 9 and Section 10 conclude the paper with a summary of the key findings, a brief discussion of the limitations of the FGIG method, and potential future research directions. Mathematical proofs are presented in Appendix A. Section 2 introduces the sets of symbols and notations used throughout the paper to represent complex mathematical formulae and expressions concisely. These notations generally follow the writing convention established in [25,26].

2. Symbols and Notations

This section summarizes the symbols and notations used throughout the paper. Unless otherwise specified, all the functions are assumed to be sufficiently smooth, and all the domains are subsets of the real line or the complex plane.
  • Logical Operators and Quantifiers.
$\forall$: for all; $\forall_a$: for any; $\forall_{aa}$: for almost all; $\forall_e$: for each; $\forall_s$: for some; $\forall_\ell$: for a relatively large
  • Complex Number Operations.
$\Re(\cdot)$: real part; $\Im(\cdot)$: imaginary part
  • Sets and Number Systems.
$\mathbb{C}$: complex-valued functions; $\mathbb{F}$: real-valued functions; $\mathbb{R}$: real numbers; $\mathbb{R}_0^+$: nonnegative reals; $\mathbb{Z}$: integers; $\mathbb{Z}^+$: positive integers; $\mathbb{Z}_0^+$: nonnegative integers; $\mathbb{Z}_o^+$: positive odd integers; $\mathbb{Z}_e^+$: positive even integers; $\mathbb{Z}_{0,e}^+$: nonnegative even integers; $J_n = \{0, 1, \ldots, n-1\}$, $J_n^+ = J_n \cup \{n\}$, $\mathbb{N}_n = \{1, \ldots, n\}$; $S_n^T = \{t_{n,j} = Tj/n\}_{j \in J_n}$; $K_n = \{-n/2, \ldots, n/2\}$, $K_n^- = K_n \setminus \{n/2\}$, $K_{n,0} = K_n \setminus \{0\}$; $G_n^\lambda$: Gegenbauer–Gauss node set $\{z_{n,0:n}^\lambda\}$, $\lambda > -1/2$; $\hat{G}_{L,n}^\lambda = L\,(z_{n,0:n}^\lambda + 1)/2$, $\hat{G}_{L,n}^{\lambda,+} = \hat{G}_{L,n}^\lambda \cup \{L\}$; $\Omega_{a,b} = [a, b]$, $\mathring{\Omega}$: interior of $\Omega$; $\Omega_T = [0, T]$, $\Omega_{L \times T} = \Omega_L \times \Omega_T$; $\Omega_\beta = [-\beta, \beta]$, $C_{T,\beta} = \{x + iy : x \in \Omega_T,\ y \in \Omega_\beta\}$
  • Special Constants.
$u_R$: unit round-off (typically $\approx 10^{-16}$ in double precision)
  • Functions and Operators.
$\delta_{n,m}$: Kronecker delta; $\Gamma$: Gamma function; $g^*$: complex conjugate of $g$; $g_n = g(t_n)$, $g_{N,n} = g_N(t_n)$, $u_{l,j} = u(x_l, t_j)$; $\boldsymbol{u}_{0:m-1,0:n-1}$: array of the values $u(x_l, t_j)$; $D^\alpha u$: multi-index partial derivative
  • List and Sequence Notation.
$i:j:k$ (or $i(j)k$): list of numbers from $i$ to $k$ with step $j$; $i:k$: shorthand for $i:1:k$; $\{y_i\}_{i=1:n}$ or $y_{1:n}$: list $y_1, \ldots, y_n$; $\{y_{1:n}\}$: set of elements $y_1, \ldots, y_n$
  • Integral Notation.
$I_b^{(t)} h = \int_0^b h(t)\,dt$; $I_{a,b}^{(t)} h = \int_a^b h(t)\,dt$; $I_t^{(t)} h = \int_0^t h(\cdot)\,d(\cdot)$; $I_b^{(t)} h[u(t)] = \int_0^b h(u(t))\,dt$; $I_{\Omega_{a,b}}^{(x)} h = \int_a^b h(x)\,dx$
  • Inner Products.
$(f, g)_w = I_{a,b}^{(x)}(f g^* w)$; $(f, g)_{n,w}$: discrete approximation of $(f,g)_w$ using $n$ points; $(f, g)$: inner product with weight $w(x) = 1$
  • Function Spaces and Norms.
$\mathbb{T}_T$: $T$-periodic functions; $C^k(\Omega)$: functions with continuous derivatives up to order $k$ on $\Omega$, $k \in \mathbb{Z}_0^+$; $L^p(\Omega) = \{u : \Omega \to \mathbb{R} \mid \|u\|_{L^p} = (\int_\Omega |u(x)|^p\,dx)^{1/p} < \infty\}$, $1 \le p < \infty$; $L^p(\Omega_{L \times T}) = \{u : \Omega_{L \times T} \to \mathbb{R} \mid \|u\|_{L^p} = (\int_0^L \int_0^T |u(x,t)|^p\,dt\,dx)^{1/p} < \infty\}$, $1 \le p < \infty$; $L^1_{\mathrm{loc}}(\Omega_T) = \{u : \Omega_T \to \mathbb{R} \mid u \in L^1(K)$ for every compact $K \subset \mathring{\Omega}_T\}$; $H^s(\Omega_T) = \{u \in L^1_{\mathrm{loc}}(\Omega_T) \mid D^\alpha u \in L^2(\Omega_T),\ |\alpha| \le s\}$, $s \in \mathbb{Z}_0^+$; $H_T^s$: $H^s$ with periodic boundary conditions; $\mathcal{D}(\Omega_T) = \{\phi \in C^\infty(\Omega_T) \mid \operatorname{supp}(\phi) \subset \mathring{\Omega}_T$ is compact$\}$; $BV(\Omega_T) = \{u \in L^1(\Omega_T) \mid \|u\|_{BV} := \sup\{\int_{\Omega_T} u(x)\,\phi'(x)\,dx : \phi \in \mathcal{D}(\Omega_T),\ \|\phi\|_{L^\infty} \le 1\} < \infty\}$; $H^s(\Omega)$: Sobolev space; $A_{T,\beta}$: space of functions analytic in $C_{T,\beta}$; $L^\infty(\Omega)$: essentially bounded measurable functions; $\|\cdot\| = \|\cdot\|_{L^2(\Omega_T)}$; $\|\cdot\|_1$: $l^1$-norm; $\|\cdot\|_2$: matrix 2-norm
  • Vector Notation.
$\boldsymbol{t}_N = [t_{N,0}, \ldots, t_{N,N-1}]^\top$; $\boldsymbol{g}_{0:N-1} = [g_0, \ldots, g_{N-1}]^\top$; $\Sigma\,\boldsymbol{g}_{0:N-1}$: sum of the entries of $\boldsymbol{g}_{0:N-1}$; $h(\boldsymbol{y}) = [h(y_1), \ldots, h(y_n)]^\top$; $\boldsymbol{h}(y)$ or $h_{1:m}[y] = [h_1(y), \ldots, h_m(y)]^\top$; $h(y(\boldsymbol{t}_N)) := [h(y(t_0)), h(y(t_1)), \ldots, h(y(t_{N-1}))]^\top$
  • Matrix Notation.
$\mathbf{O}_n$: zero matrix; $\mathbf{I}_n$: identity matrix; $\mathbf{C}_{n,m}$: matrix of size $n \times m$; $\mathbf{1}_n := [1, 1, \ldots, 1]^\top \in \mathbb{R}^n$; $\mathbf{0}_n := [0, 0, \ldots, 0]^\top \in \mathbb{R}^n$; $\kappa(\mathbf{A})$: condition number of $\mathbf{A}$; $\otimes$: Kronecker product; $\odot$: Hadamard product; $[\,\cdot\,;\,\cdot\,]$: vertical matrix concatenation; $\operatorname{resh}_{m,n} \mathbf{A}$: reshape matrix $\mathbf{A}$ into size $m \times n$ by column-wise ordering; $\operatorname{resh}_n \mathbf{A}$: reshape matrix $\mathbf{A}$ into square size $n \times n$ by column-wise ordering

3. Problem Statement

In this study, we focus on the following 1D AD equation with periodic boundary conditions:
$u_t + \mu u_x = \nu u_{xx}, \quad (x, t) \in \Omega_{L \times T}, \quad \forall_s\, L, T \in \mathbb{R}^+,$
with the given initial condition
$u(x, 0) = u_0(x), \quad x \in \Omega_L,$
and the periodic Dirichlet boundary conditions
$u(x + L, t) = u(x, t), \quad t \in \Omega_T, \quad x \in \mathbb{R}_0^+.$
The nonnegative constants μ and ν denote the advection velocity and diffusion coefficient, respectively. Equations (1)–(3) describe the initial boundary value problem of the AD equation in strong form, which we refer to as Problem S. The solution to the problem, u, represents the concentration or distribution of a scalar quantity (like heat, pollutants, or particles) in a medium over space and time, taking into account both advection and diffusion processes. Problem S serves as a valuable benchmark problem for testing the accuracy, stability, and efficiency of numerical methods. It also acts as a prototype for developing more sophisticated numerical techniques that can be applied to more complex problems in higher dimensions and with more realistic boundary conditions.
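As a quick illustration of Problem S (not taken from the paper; all parameter values below are made up), a single Fourier mode $u_0(x) = \sin(2\pi x/L)$ evolves under (1)–(3) into the damped, translated wave $u(x,t) = e^{-\nu\omega^2 t}\sin(\omega(x - \mu t))$ with $\omega = 2\pi/L$. The sketch below checks this closed form against exact mode-wise propagation in Fourier space, where each mode $k$ is multiplied by $e^{-(i\mu\omega_k + \nu\omega_k^2)t}$:

```python
import numpy as np

# Hypothetical parameters (not from the paper).
L, T, mu, nu = 2.0, 0.5, 1.0, 0.1
N = 32
x = L * np.arange(N) / N
u0 = np.sin(2 * np.pi * x / L)

# Exact mode-wise propagation: mode k decays/translates as
# exp(-(i*mu*w_k + nu*w_k**2) * t), with w_k = 2*pi*k/L.
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi          # angular wavenumbers w_k
u_hat = np.fft.fft(u0) * np.exp(-(1j * mu * k + nu * k**2) * T)
u_spec = np.fft.ifft(u_hat).real

# Closed-form solution for the single-mode initial condition.
w = 2 * np.pi / L
u_exact = np.exp(-nu * w**2 * T) * np.sin(w * (x - mu * T))

print(np.max(np.abs(u_spec - u_exact)))             # near machine precision
```

This also previews why Fourier bases are the natural spatial discretization here: advection and diffusion act diagonally on the modes.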

4. The FGIG Method

4.1. Weak Formulation

The classical solution of Problem S must be at least twice differentiable in space. To allow a larger class of solutions, we rewrite the time-integrated formulation of the PDE in a weak form. To this end, let us partially integrate both sides of the PDE with respect to time on $\Omega_t$ and impose the initial condition (2) to obtain the time-integrated form of the PDE as follows:
$u + I_t^{(t)}(\mu u_x - \nu u_{xx}) = u_0.$
Multiplying both sides of the equation by an arbitrary function $\varphi \in \mathbb{T}_L \cap H^1(\Omega_L)$ and integrating over $\Omega_L$ yields
$\left(u + I_t^{(t)}(\mu u_x - \nu u_{xx}), \varphi\right) = (u_0, \varphi),$
which represents the weak time-integrated form of the PDE. Integration by parts gives
$\left(u + \mu I_t^{(t)} u_x, \varphi\right) + \nu \left(I_t^{(t)} u_x, \varphi_x\right) = (u_0, \varphi) + \nu \left[\varphi^*\, I_t^{(t)} u_x\right]_0^L.$
The last term vanishes due to the spatial periodicity of $u$ and $\varphi$; therefore,
$\left(u + \mu I_t^{(t)} u_x, \varphi\right) + \nu \left(I_t^{(t)} u_x, \varphi_x\right) = (u_0, \varphi).$

4.2. Integral System Formulation

Now, let $u_x = \psi$ for some $\psi \in L^2(\Omega_{L \times T})$, write $u(0, t) = u(L, t) = g(t)$ $\forall t \in \Omega_T$, and impose the left boundary condition to obtain
$u = I_x^{(x)} \psi + g.$
Substituting Equation (8) into Equation (7) yields
$\left(\left(I_x^{(x)} + \mu I_t^{(t)}\right)\psi, \varphi\right) + \nu \left(I_t^{(t)} \psi, \varphi_x\right) = (u_0 - g, \varphi).$
We refer to this new form as the spatial-derivative-substituted weak time-integrated form of the PDE and call its solution, $\psi$, the solution's spatial derivative (SSD) of Problem S. Notice how the spatial-derivative-substituted weak time-integrated form of the PDE requires only that the weak solution for Problem S be square integrable, as shown by Theorem A1, which is much less restrictive than requiring the twice differentiability of u.

4.3. Fourier Expansion

The spatial periodicity of $u$ allows us to approximate the time-dependent offset solution, $I_x^{(x)} \psi$, by a truncated Fourier basis expansion. In particular, let us consider the $N/2$-degree, $L$-periodic, truncated Fourier series with time-dependent coefficients, $I_x^{(x)}\,{}_N\psi$, written in the following modal form:
$I_x^{(x)}\,{}_N\psi(x, t) = \sum_{k \in K_N} \tilde{\psi}_k(t)\, e^{i \omega_k x}, \quad \forall_s\, N \in \mathbb{Z}_e^+,$
where $\omega_k = 2\pi k / L$.
Notice that there are $N + 1$ unknown coefficient functions for us to determine. Fortunately, since
$I_0^{(x)} \psi = 0 = \Sigma\, \tilde{\boldsymbol{\psi}}_{-N/2:N/2}[t],$
by definition, we actually need to find $N$ unknown coefficients, as the last coefficient is automatically computed via Equation (11). In this work, we seek all the coefficients with nonzero indices first, and then easily calculate $\tilde{\psi}_0$ using the following formula:
$\tilde{\psi}_0(t) = -\sum_{k \in K_{N,0}} \tilde{\psi}_k(t).$
To determine the unknown coefficients, we require that the approximation $I_N \psi$ satisfies the spatial-derivative-substituted weak time-integrated form of the PDE (9) for $\varphi = e^{i \omega_n x}$ $\forall_e\, n \in K_{N,0}$. That is, the spectral approximate SSD is the one that satisfies
$\left(\left(I_x^{(x)} + \mu I_t^{(t)}\right)\psi, e^{i \omega_n x}\right) + \nu \left(I_t^{(t)} \psi, \partial_x e^{i \omega_n x}\right) = \left(u_0 - g, e^{i \omega_n x}\right),$
$\forall n \in K_{N,0}$. This can be reduced further to the following form:
$\left(\left(I_x^{(x)} + (\mu - i \omega_n \nu)\, I_t^{(t)}\right)\psi, e^{i \omega_n x}\right) = \left(u_0, e^{i \omega_n x}\right), \quad n \in K_{N,0},$
by realizing that $\left(g, e^{i \omega_n x}\right) = 0$ $\forall n \in K_{N,0}$. Using the Fourier quadrature rule based on the equally spaced node set $S_N^L = \{x_{N,0:N-1}\}$:
$Q_F(f) = \frac{L}{N} \sum_{j=0}^{N-1} f_j, \quad f \in \mathbb{T}_L,$
we can define the following discrete inner product:
$(u, v)_N = \frac{L}{N}\, \boldsymbol{u}_{0:N-1}^\top \boldsymbol{v}_{0:N-1}^*, \quad \forall_a\, u, v \in \mathbb{C}.$
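The quadrature (15) is the composite trapezoidal rule on a periodic grid, which is exact for trigonometric polynomials of degree below $N$. A quick numerical check (the test functions and values of $L$, $N$ are illustrative, not from the paper):

```python
import numpy as np

L, N = 3.0, 8
x = L * np.arange(N) / N                 # equispaced node set S_N^L

# f is a trigonometric polynomial of degree 2 < N, so Q_F(f) is exact:
# f(x) = sin^2(2*pi*x/L) = (1 - cos(4*pi*x/L)) / 2, whose integral over [0, L] is L/2.
f = np.sin(2 * np.pi * x / L) ** 2
Q_F = (L / N) * f.sum()
print(Q_F, L / 2)                        # both equal 1.5

# Discrete inner product (u, v)_N = (L/N) * sum_j u_j * conj(v_j);
# np.vdot conjugates its FIRST argument, hence the argument order below.
u = np.exp(1j * 2 * np.pi * x / L)
v = np.exp(1j * 2 * np.pi * x / L)
print((L / N) * np.vdot(v, u))           # approximately L, matching (e^{iwx}, e^{iwx}) = L
```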
Consider, now, the N / 2 -degree, L-periodic Fourier interpolant ( I N u 0 ) that matches u 0 at the set of nodes ( S N L ) so that
I N u 0 ( x ) = k K N u ˜ 0 , k c k e i ω k x ,
where
$c_k = \begin{cases} 1, & k \in K_N \setminus \{\pm N/2\}, \\ 2, & k = \pm N/2, \end{cases}$
and $\tilde{u}_{0,k}$ is the DFT interpolation coefficient given by
$\tilde{u}_{0,k} = \frac{1}{L} \left(u_0, e^{i \omega_k x}\right)_N = \frac{1}{N} \sum_{j \in J_N} u_{0,j}\, e^{-i \hat{\omega}_{k,N}\, j}, \quad k \in K_N,$
with $\tilde{u}_{0,-N/2} = \tilde{u}_{0,N/2}$ and $\hat{\omega}_{k,N} = 2\pi k / N$, cf. [25]. Since $u_0$ is real and periodic, the DFT coefficients satisfy the additional conjugate symmetry condition $\tilde{u}_{0,-n} = \tilde{u}_{0,n}^*$ $\forall n \in K_N$. This implies that for real-valued, periodic signals, only half of the DFT coefficients need to be computed, as the other half can be obtained through conjugation.
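This conjugate symmetry is the property that real-to-complex FFT routines exploit. A minimal check with an arbitrary real sample vector (the data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
u0 = rng.standard_normal(16)             # real samples of a periodic signal

c = np.fft.fft(u0) / u0.size             # DFT interpolation coefficients

# Conjugate symmetry: the coefficient at index -k equals the conjugate of the one at k.
for k in range(1, u0.size // 2):
    assert np.allclose(c[-k], np.conj(c[k]))
print("conjugate symmetry holds")
```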

4.4. Discrete System

Substituting (10) and (17) into (14) yields the following linear system of integral equations:
$\sum_{k \in K_{N,0}} \left(\tilde{\psi}_k(t) + \omega_k (\nu \omega_n + \mu i)\, I_t^{(t)} \tilde{\psi}_k\right) \left(e^{i \omega_k x}, e^{i \omega_n x}\right) = \sum_{k \in K_{N,0}} \frac{\tilde{u}_{0,k}}{c_k} \left(e^{i \omega_k x}, e^{i \omega_n x}\right), \quad n \in K_{N,0}.$
The orthogonality property of complex exponentials,
$\left(e^{i \omega_k x}, e^{i \omega_n x}\right) = L\, \delta_{k,n}, \quad \forall \{k, n\} \subset K_N,$
allows us to reduce System (19) to the following linear system of integral equations:
$A_n(t)\, \tilde{\psi}_n(t) = \gamma_n, \quad n \in K_{N,0},$
where
$A_n(t) = 1 + \alpha_n I_t^{(t)}, \quad \alpha_n = \omega_n (\nu \omega_n + \mu i), \quad \gamma_n = \frac{\tilde{u}_{0,n}}{c_n}, \quad n \in K_{N,0}.$
Before we continue our prescription of the proposed method, it is interesting to note here that unlike the standard Galerkin method, which yields a system of ordinary differential equations requiring initial conditions for time integration, the present approach results in the system of integral equations (21), which can be solved without imposing any initial or boundary conditions (see Section 8 for numerical validation). The approximate solution, ${}_N u$, can be directly computed at any point in space and time by adding the boundary function $g$ to the time-dependent offset solution through Equation (8), after reconstructing the latter using Equations (10) and (12).
To slightly improve the accuracy of the system model (21), we can increase the degree of the Fourier interpolant of $u_0$, $I_N u_0$, since $u_0$ is already given: the first $N$ DFT interpolation coefficients of two Fourier interpolations of degrees $n/2$ and $m/2$ of the same function are generally not the same when $n$ and $m$ are distinct, $\forall_a\, n, m \in \mathbb{Z}_e^+ : N \le \min\{n, m\}$. This is because the coefficients in DFT interpolation depend on the number of sample points (or the degree of the interpolation) and how they are distributed, which changes with $n$ and $m$. Generally, as the degree of interpolation increases, the approximation captures more details of the function, so the discrete coefficients of a higher-degree interpolation are often more accurate in representing the true Fourier coefficients of the function, especially for smooth functions. With this observation, we can accurately re-represent $u_0$ by an $N_0/2$-degree, $L$-periodic Fourier interpolant, $I_{N_0} u_0$, that matches $u_0$ at the set of nodes $S_{N_0}^L$, $\forall_s\, N_0 \in \mathbb{Z}_e^+ : N_0 > N$. Using [25] (Equation (4.2)), we can write
$I_{N_0} u_0(x) = {\sum_{k \in K_{N_0}}}' \hat{u}_{0,k}\, e^{i \omega_k x},$
where the primed sigma denotes a summation in which the last term is omitted, and u ^ 0 , k is the DFT interpolation coefficient given by
$\hat{u}_{0,k} = \frac{1}{L} \left(u_0, e^{i \omega_k x}\right)_{N_0} = \frac{1}{N_0} \sum_{j \in J_{N_0}} u_{0,j}\, e^{-i \hat{\omega}_{k,N_0}\, j}, \quad k \in K_{N_0}.$
By following the same procedure prescribed earlier in this section, we can easily show that the linear system of integral Equation (21) can be replaced with the more accurate model:
$A_n(t)\, \tilde{\psi}_n(t) = \hat{u}_{0,n}, \quad n \in K_{N,0}.$

4.5. SG Quadrature

Now, we shift gears and turn our attention to how we can effectively solve system (25) numerically. Notice, first, that while complex exponentials are excellent basis functions for representing periodic functions, they are often less suitable for nonperiodic functions. On the other hand, orthogonal polynomials are specifically designed to approximate nonperiodic functions over finite intervals, making them a more natural choice for nonperiodic integral equations. Since the coefficient vector function $\tilde{\boldsymbol{\psi}}_{-N/2:N/2}[t]$ is generally nonperiodic, we solve system (25) using a variant of the SG integral PS method of Elgindy [27] and Elgindy [25]. The latter methods use the barycentric SGG quadratures, constructed using the stable barycentric representation of shifted Lagrangian interpolating polynomials and the explicit barycentric weights for the SGG points, well known for their stability, efficiency, and superior accuracy. These barycentric SGG quadratures allow us to construct the SGIMs (also known as the SG operational matrices of integration) that can effectively evaluate the sought definite integral approximations using matrix–vector multiplications, often leading to well-conditioned systems of algebraic equations. In the present work, we generate the necessary barycentric SGIMs using [28] (Equation (4.38)), which directly defines the required barycentric SGIMs in terms of the barycentric Gegenbauer integration matrices. This modification to the methods of Elgindy [25,27] avoids the need to shift the quadrature nodes, weights, and Lagrangian polynomials from their original time domain ($\Omega_{-1,1}$) to the shifted time domain ($\Omega_T$), and directly constructs the desired SGIM by premultiplying the usual Gegenbauer integration matrix by a scalar multiple, unless the approximate solution values are required at non-collocation time nodes. That is, if we denote the first-order barycentric SGIM by $\mathbf{Q}_T$, then [28] (Equation (4.38)) immediately tells us that
$\mathbf{Q}_T = \frac{T}{2}\, \mathbf{Q},$
where Q is the first-order barycentric Gegenbauer integration matrix based on the stable barycentric representation of Lagrangian interpolating polynomials and the explicit barycentric weights for the Gegenbauer–Gauss points.
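The scaling in (26) simply reflects the affine change of variables $t = T(z+1)/2$. The sketch below builds a first-order integration matrix on Gauss–Legendre nodes (the $\lambda = 1/2$ Gegenbauer case) and verifies the identity numerically; the monomial-fit construction of the Lagrange basis is an illustrative shortcut, not the paper's stable barycentric formulation:

```python
import numpy as np

def integration_matrix(nodes, a):
    """Q[j, l] = integral from a to nodes[j] of the l-th Lagrange basis polynomial
    built on `nodes` (via monomial fitting; illustrative, not barycentric)."""
    M = len(nodes) - 1
    Q = np.empty((M + 1, M + 1))
    for l in range(M + 1):
        e = np.zeros(M + 1)
        e[l] = 1.0
        P = np.polyint(np.polyfit(nodes, e, M))   # antiderivative of ell_l
        Q[:, l] = np.polyval(P, nodes) - np.polyval(P, a)
    return Q

M, T = 8, 3.0                                     # illustrative sizes
z, _ = np.polynomial.legendre.leggauss(M + 1)     # Gauss-Legendre nodes on [-1, 1]
t = T * (z + 1) / 2                               # shifted nodes on [0, T]

Q = integration_matrix(z, -1.0)                   # matrix on the original domain
QT = integration_matrix(t, 0.0)                   # matrix on the shifted domain

print(np.max(np.abs(QT - (T / 2) * Q)))           # close to zero, up to rounding
```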
To collocate system (25) in the SG physical space, we first write the system at the SG set of mesh points $\hat{G}_{T,M}^\lambda = \{\hat{t}_{M,0:M}^\lambda\}$, $\forall_s\, M \in \mathbb{Z}^+$:
$\tilde{\psi}_{n,j} + \alpha_n I_{\hat{t}_{M,j}^\lambda}^{(t)} \tilde{\psi}_n = \hat{u}_{0,n}, \quad j \in J_M^+,\ n \in K_{N,0}.$
Next, we discretize system (27) at the SG mesh set $\hat{G}_{T,M}^\lambda$ with the aid of Equation (26) to obtain the following $N \times (M+1)$ linear systems of algebraic equations:
$\tilde{\psi}_{n,j} + \alpha_n\, \mathbf{Q}_{T,j}\, \tilde{\boldsymbol{\psi}}_{n,0:M} = \hat{u}_{0,n}, \quad j \in J_M^+,\ n \in K_{N,0},$
where $\mathbf{Q}_{T,j}$ denotes the $j$th row of $\mathbf{Q}_T$,
which can be written in the following matrix form:
$\mathbf{A}_T^{(n,M)}\, \tilde{\boldsymbol{\psi}}_{n,0:M} = \hat{u}_{0,n}\, \mathbf{1}_{M+1}, \quad n \in K_{N,0},$
where
$\mathbf{A}_T^{(n,M)} = \mathbf{I}_{M+1} + \alpha_n\, \mathbf{Q}_T,$
and
$\tilde{\boldsymbol{\psi}}_{n,0:M} = \left[\tilde{\psi}_n\left(\hat{t}_{M,0}^\lambda\right), \tilde{\psi}_n\left(\hat{t}_{M,1}^\lambda\right), \ldots, \tilde{\psi}_n\left(\hat{t}_{M,M}^\lambda\right)\right]^\top, \quad n \in K_{N,0}.$
Solving system (29) provides $N$ Fourier coefficients at the time collocation points. The values of the $(N+1)$st coefficient at the same nodes can be recovered via Equation (12), as explained earlier. The solution, $u$, can, therefore, be readily computed at the collocation points using Equations (8) and (10):
$u_{N,M}\left(x_{N,j}, \hat{t}_{M,l}^\lambda\right) = I_{x_{N,j}}^{(x)}\, {}_N\psi\left(x, \hat{t}_{M,l}^\lambda\right) + g\left(\hat{t}_{M,l}^\lambda\right),$
$\forall \left(x_{N,j}, \hat{t}_{M,l}^\lambda\right) \in S_N^L \times \hat{G}_{T,M}^\lambda$. For faster computation of Formula (32), instead of running explicit For loops, we can quickly synthesize ${}_{N,M}u$ at the collocation points using the following useful formula, written in matrix form:
$\operatorname{resh}_{M+1,N}\left({}_{N,M}\boldsymbol{u}_{0:N-1,0:M}\right) = \operatorname{resh}_{M+1,N+1}\left(\tilde{\boldsymbol{\psi}}_{-N/2:N/2}\left[\boldsymbol{t}_M^\lambda\right]\right) e^{i\, \boldsymbol{\omega}_{-N/2:N/2}\, \boldsymbol{x}_N^\top} + \boldsymbol{g}_{0:M}\, \mathbf{1}_N^\top,$
where $\boldsymbol{g}_{0:M} = g\left(\boldsymbol{t}_M^\lambda\right)$. Moreover, to determine the solution, $u$, at any non-collocation temporal points in $\Omega_{L \times T}$, we can approximate the coefficients' values using the inverse SGG transform, written in the following Lagrangian form:
$\tilde{\psi}_n(t) \approx \tilde{\boldsymbol{\psi}}_{n,0:M}^\top\, \mathcal{L}_{0:M}^{(\lambda)}[t], \quad \forall t \notin \hat{G}_{T,M}^\lambda,$
where $\mathcal{L}_{0:M}^{(\lambda)}[t]$ is the shifted Lagrangian interpolating polynomial vector function in barycentric form, whose element functions are defined by
$\mathcal{L}_l^{(\lambda)}(t) = \dfrac{\xi_l^{(\lambda)}}{t - \hat{t}_{M,l}^\lambda} \Bigg/ \sum_{j \in J_M^+} \dfrac{\xi_j^{(\lambda)}}{t - \hat{t}_{M,j}^\lambda}, \quad l \in J_M^+,$
and the barycentric weights, $\xi_{0:M}^{(\lambda)}$, associated with the SGG points can be expressed explicitly in terms of the corresponding Christoffel numbers, $\varpi_{M,0:M}^{(\lambda)}$, in the following algebraic form:
$\xi_l^{(\lambda)} = 2\,(-1)^l\, 4^\lambda\, T^{-2(1+\lambda)} \sqrt{\left(T - \hat{t}_{M,l}^\lambda\right) \hat{t}_{M,l}^\lambda\, \varpi_{M,l}^{(\lambda)}}, \quad l \in J_M^+,$
or in the following more numerically stable trigonometric form:
$\xi_l^{(\lambda)} = (-1)^l \left(\frac{2}{T}\right)^\lambda \sin\!\left(\cos^{-1}\!\left(\frac{2\, \hat{t}_{M,l}^\lambda}{T} - 1\right)\right) \sqrt{\varpi_{M,l}^{(\lambda)}}, \quad l \in J_M^+,$
cf. [25,27].
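Each system (29) is a collocated Volterra integral equation $\tilde{\psi}_n(t) + \alpha_n \int_0^t \tilde{\psi}_n\,ds = \hat{u}_{0,n}$, whose exact solution is $\hat{u}_{0,n}\, e^{-\alpha_n t}$. The sketch below collocates one such equation with an integration matrix built on Gauss–Legendre nodes (an illustrative stand-in for the barycentric SGG construction; all parameter values are made up) and recovers this exponential to high accuracy:

```python
import numpy as np

def integration_matrix(z):
    """B[j, l] = integral from -1 to z[j] of the l-th Lagrange basis polynomial on z.
    (Monomial-fit construction for brevity; the paper uses a stable barycentric form.)"""
    M = len(z) - 1
    B = np.empty((M + 1, M + 1))
    for l in range(M + 1):
        e = np.zeros(M + 1)
        e[l] = 1.0
        P = np.polyint(np.polyfit(z, e, M))
        B[:, l] = np.polyval(P, z) - np.polyval(P, -1.0)
    return B

# Illustrative mode data: omega_n = 2*pi*n/L, alpha_n = omega_n*(nu*omega_n + mu*i).
L_dom, T, mu, nu, n = 2 * np.pi, 1.0, 0.5, 0.1, 1
w = 2 * np.pi * n / L_dom
alpha = w * (nu * w + mu * 1j)
u0_hat = 0.5 - 0.25j                            # hypothetical DFT coefficient of u0

z, _ = np.polynomial.legendre.leggauss(12)      # Gauss-Legendre = Gegenbauer, lambda = 1/2
t = T * (z + 1) / 2                             # shifted time nodes on (0, T)
QT = (T / 2) * integration_matrix(z)            # first-order SGIM-style matrix, cf. (26)

# Collocated system (I + alpha * Q_T) psi = u0_hat * 1, cf. (29)-(30).
psi = np.linalg.solve(np.eye(len(t)) + alpha * QT, u0_hat * np.ones(len(t)))

# Compare with the exact solution u0_hat * exp(-alpha * t).
print(np.max(np.abs(psi - u0_hat * np.exp(-alpha * t))))
```

Note that no initial condition is imposed anywhere: the value $\tilde{\psi}_n(0) = \hat{u}_{0,n}$ emerges from the integral formulation itself.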
It is noteworthy to mention that one key advantage of the proposed FGIG method lies in its ability to not only compute the solution within the desired domain but also accurately determine its spatial derivative as a byproduct of Equation (10):
$u_x(x, t) \approx {}_N\psi(x, t) = i \sum_{k \in K_N} \omega_k\, \tilde{\psi}_k(t)\, e^{i \omega_k x}, \quad (x, t) \in \Omega_{L \times T},$
which offers comprehensive information about the solution's characteristics. We can swiftly compute ${}_N u_x$ at the collocation points using the following efficient formula, written in matrix form:
$\operatorname{resh}_{M+1,N}\left({}_{N,M}\boldsymbol{u}_{x,0:N-1,0:M}\right) = i\, \operatorname{resh}_{M+1,N+1}\left(\tilde{\boldsymbol{\psi}}_{-N/2:N/2}\left[\boldsymbol{t}_M^\lambda\right]\right) \left(\left(\boldsymbol{\omega}_{-N/2:N/2}\, \mathbf{1}_N^\top\right) \odot e^{i\, \boldsymbol{\omega}_{-N/2:N/2}\, \boldsymbol{x}_N^\top}\right),$
where ${}_{N,M}u_{x,j,l} = {}_{N,M}u_x\left(x_{N,j}, \hat{t}_{M,l}^\lambda\right)$ $\forall \left(x_{N,j}, \hat{t}_{M,l}^\lambda\right) \in S_N^L \times \hat{G}_{T,M}^\lambda$. The FGIG method's implementation is outlined in the flowchart in Figure 1. For detailed algorithmic steps, refer to Appendix B.
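The derivative recovery in (38) is standard Fourier differentiation: each coefficient is multiplied by $i\omega_k$. A minimal FFT-based check (the test function is an illustrative choice, not from the paper):

```python
import numpy as np

L, N = 2 * np.pi, 32
x = L * np.arange(N) / N
u = np.exp(np.sin(x))                       # smooth L-periodic test function

# Multiply each Fourier coefficient by i*w_k, w_k = 2*pi*k/L, then transform back.
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
ux = np.fft.ifft(1j * k * np.fft.fft(u)).real

# Exact derivative: (exp(sin x))' = cos(x) * exp(sin x).
print(np.max(np.abs(ux - np.cos(x) * u)))   # spectrally small
```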
Another interesting feature of the proposed method appears in the process of recovering $u$ from $I_x^{(x)} \psi$, as described by Equation (8), which is entirely free from truncation errors, thus maintaining the full precision attained in the calculation of $I_x^{(x)}\,{}_N\psi$.
A third key feature of the current method lies in its ability to bypass traditional time-stepping procedures. Instead, it directly solves the linear systems (29) globally, using a single, coarse SGG time grid. This is particularly beneficial in diffusion-dominated scenarios, where accurate temporal resolution with exponential convergence can be achieved, even with very coarse grids. By circumventing time stepping, the method minimizes temporal error accumulation, a common pitfall in such methods, and ensures a more faithful representation of the diffusion process.

5. Computational Cost and Time Complexity

In Section 4, we demonstrated that only $N$ Fourier coefficients of the series (10) need to be determined, as the remaining coefficient can be computed directly using Equation (11). Remarkably, the following theorem shows that only the first $N/2$ coefficients need to be determined, since the remaining coefficients can be efficiently computed using a conjugate symmetry condition.
Theorem 1. 
The time-dependent Fourier coefficients, $\tilde{\psi}_{-N/2:N/2}$, satisfy the conjugate symmetry condition
$\tilde{\psi}_{-n} = \tilde{\psi}_n^*, \quad \forall n \in K_{N,0}.$
Proof. 
Let $k = -n$ $\forall_e\, n \in K_{N,0}$, and notice first that
$\alpha_{-n} = \omega_{-n}\left(\nu \omega_{-n} + \mu i\right) = -\omega_n\left(-\nu \omega_n + \mu i\right) = \omega_n\left(\nu \omega_n - \mu i\right) = \alpha_n^*.$
Taking the conjugate of the $n$th system (29), while realizing the conjugate symmetries of $\hat{u}_{0,n}$ and $\alpha_n$, yields
$\left(\mathbf{A}_T^{(n,M)}\, \tilde{\boldsymbol{\psi}}_{n,0:M}\right)^* = \hat{u}_{0,n}^*\, \mathbf{1}_{M+1} \;\Longrightarrow\; \left(\mathbf{I}_{M+1} + \alpha_n^*\, \mathbf{Q}_T\right) \tilde{\boldsymbol{\psi}}_{n,0:M}^* = \hat{u}_{0,-n}\, \mathbf{1}_{M+1} \;\Longrightarrow\; \left(\mathbf{I}_{M+1} + \alpha_{-n}\, \mathbf{Q}_T\right) \tilde{\boldsymbol{\psi}}_{n,0:M}^* = \hat{u}_{0,-n}\, \mathbf{1}_{M+1},$
since $\mathbf{I}_{M+1}$ and $\mathbf{Q}_T$ are real matrices. This matches the system for $k = -n$, from which the proof is established.    □
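The proof mechanism only uses the realness of the system matrix, so it can be demonstrated numerically with any real stand-in for $\mathbf{Q}_T$ (the matrix and parameters below are illustrative):

```python
import numpy as np

M = 9
Q = np.tril(np.ones((M + 1, M + 1))) / (M + 1)   # any REAL matrix works; this one mimics cumulative integration
I = np.eye(M + 1)

mu, nu, w = 1.0, 0.3, 2 * np.pi                  # illustrative parameters
alpha = w * (nu * w + mu * 1j)                   # alpha_n; note alpha_{-n} = conj(alpha_n)
gamma = 0.7 + 0.2j                               # hypothetical u0 coefficient; gamma_{-n} = conj(gamma_n)

psi_pos = np.linalg.solve(I + alpha * Q, gamma * np.ones(M + 1))                    # system for n
psi_neg = np.linalg.solve(I + np.conj(alpha) * Q, np.conj(gamma) * np.ones(M + 1))  # system for -n

# Conjugate symmetry (40): psi_{-n} = conj(psi_n), because I and Q are real.
print(np.max(np.abs(psi_neg - np.conj(psi_pos))))
```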
The aftermath of Theorem 1 is at least twofold: (i) it shows that we only need to solve the following $N/2$ systems of linear equations:
$\mathbf{A}_T^{(n,M)}\, \tilde{\boldsymbol{\psi}}_{n,0:M} = \hat{u}_{0,n}\, \mathbf{1}_{M+1}, \quad n \in \mathbb{N}_{N/2},$
for the first $N/2$ Fourier coefficients, instead of solving the $N$ systems (29); the last $N/2$ coefficients are computed efficiently using Equation (40); and (ii) it provides a more efficient alternative for computing $\tilde{\psi}_0$ using the following formula:
$\tilde{\psi}_0(t) = -2 \sum_{k \in \mathbb{N}_{N/2}} \Re\left(\tilde{\psi}_k(t)\right),$
which requires adding the real parts of only half the series terms of Equation (12).
Since the first-order barycentric Gegenbauer integration matrix, $\mathbf{Q}$, is typically a dense, non-symmetric matrix, solving each system in (43) using highly optimized direct algorithms, such as LU decomposition, has a computational complexity of approximately $O(M^3)$ $\forall_\ell\, M$. Consequently, the total computational cost of solving systems (43) is $O(N M^3)$ $\forall_\ell\, M, N$. The computation of $u_{N,M}$ via (33) requires $O(M N^2)$ operations for the matrix multiplication plus $O(M N)$ scalar multiplications required by the Kronecker product. Thus, the total cost, including the cost of adding the resulting matrices, is $O(M N^2)$ $\forall_\ell\, M, N$. This analysis shows that the overall cost of solving the linear systems and recovering the collocated approximate solution, $u_{N,M}$, is
$O(N M^3)\ \ \forall_\ell\, M/N, \qquad O(M N^2)\ \ \forall_\ell\, N/M.$
If the collocated spatial derivative approximation, ${}_{N,M}u_x$, is also required, one may estimate the additional cost as follows: First, observe that computing the Kronecker product in Equation (39) has a cost of $O(N^2)$. Similarly, the Hadamard product also incurs a cost of $O(N^2)$. The subsequent matrix multiplication between the resulting matrix and the reshaped matrix costs $O(M N^2)$. Finally, scaling each entry of the resulting matrix by $i$ incurs a cost of $O(N M)$, which is negligible compared to the matrix multiplication cost. Therefore, the total additional cost is $O(M N^2)$, which matches the computational cost order of recovering $u_{N,M}$.
Parallel computing can significantly reduce the overall wall-clock time required to solve the linear systems. Since the N / 2 systems are independent of each other, they can be distributed across multiple cores or processors, allowing simultaneous computation. In particular, assuming that there are P available processing cores, and the N / 2 systems are distributed evenly across these cores, the total computational time using parallel computing, T parallel , is approximately:
T parallel = T serial min { P , N / 2 } + T overhead ,
where T serial = O ( N M 3 ) is the serial time to solve the N / 2 systems, and T overhead is the additional time incurred by the parallel computing process itself from tasks such as communication, synchronization, and workload distribution. These are generally negligible if the workload per system, O ( M 3 ) , is sufficiently large compared to the overhead. If the overheads are minimal, the computational time is reduced significantly to approximately O ( N M 3 / min { P , N / 2 } ) by Equation (45). It is important to understand here that parallel computing is not useful for small datasets, as the associated overhead can outweigh the potential gains from parallelization. In such cases, a standard sequential “for” loop might be more efficient. Note also that the constant nature of the matrix Q allows us to precompute and store it prior to executing the code for solving these systems, reducing the runtime overhead. Moreover, the numerical quadrature induced by Q (or  Q T ) through matrix–vector multiplication converges exponentially, enabling nearly exact approximations with relatively few quadrature nodes [28]. Consequently, systems (43) can be solved efficiently and with high accuracy using relatively low M values, significantly improving the computational efficiency without compromising precision.
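The distribution of the independent systems across workers can be sketched as follows. This is a minimal illustration only: Q below is a random stand-in for the precomputed integration matrix Q T , and the α n values and right-hand sides are placeholders; what matters is that the N / 2 systems share no data and can be dispatched concurrently:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Random stand-in for the (M+1)x(M+1) integration matrix Q_T, plus
# placeholder alpha_n values and right-hand sides.
rng = np.random.default_rng(0)
M, N = 12, 8
Q = 0.1 * rng.standard_normal((M + 1, M + 1))
alphas = [1.0 + 0.5j * n for n in range(1, N // 2 + 1)]
rhs = [rng.standard_normal(M + 1) for _ in alphas]

def solve_system(args):
    alpha_n, b = args
    # Collocation matrix of the form I + alpha_n * Q_T, solved directly.
    return np.linalg.solve(np.eye(M + 1) + alpha_n * Q, b)

# The systems are independent, so they can run on separate workers.
with ThreadPoolExecutor() as pool:
    coeffs = list(pool.map(solve_system, zip(alphas, rhs)))

# Each solve satisfies its own system to machine precision.
for (a, b), psi in zip(zip(alphas, rhs), coeffs):
    assert np.allclose((np.eye(M + 1) + a * Q) @ psi, b)
```

In a production setting, process-based workers would be preferred for CPU-bound solves; a thread pool suffices here because LAPACK releases the interpreter lock during the factorization.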
Remark 1.
The FGIG method trades the sequential cost of time stepping for the cost of solving a global system. A key advantage of this reformulation is that it leads to well-conditioned linear systems. This good conditioning is a direct result of using a spectral integral formulation instead of a differential one, and employing stable barycentric quadrature rules. Consequently, for the low/moderate values of M and N required for spectral accuracy in our problems, the systems can be solved directly and efficiently, without the need for specialized iterative techniques. The computational complexity is O ( N M 3 ) for the direct solution of all the systems, which is highly feasible for the scale of problems considered and is offset by the method’s exponential convergence and lack of temporal error accumulation. Therefore, while specialized iterative techniques, such as preconditioned Krylov solvers, are a powerful tool for the ill-conditioned systems common in numerical differentiation, they are not necessary here. The excellent conditioning is a core strength of our integral-based approach.

6. Error and Stability Analyses

Let E N = u u N = I x ( x ) ψ I x ( x ) N ψ N Z e + denote the truncation error of the truncated Fourier series expansion (10). The squared L 2 -norm of the error is defined as follows:
| | E N | | 2 = I T ( t ) I L ( x ) | E N | 2 = L I T ( t ) | k | > N / 2 | ψ ˜ k ( t ) | 2 ,
cf. [26] (Proof of Theorem 4.2). Thus, the error depends on the sum of the squared magnitudes of the omitted Fourier coefficients. For analytic solutions, Fourier coefficients decay exponentially with | k | , as shown by the following theorem:
Theorem 2 
([26] Theorems 4.1 and 4.2). Let t Ω T , and suppose that I x ( x ) ψ A L , β s β > 0 is approximated by the N / 2 -degree, L-sp, truncated Fourier series (10). Then,
| ψ ˜ k | = O e ω k β , as k ,
| | I x ( x ) ψ I x ( x ) N ψ | | = O e ω N β / 2 , as N .
Moreover, if I x ( x ) ψ is β-analytic then
| | I x ( x ) ψ I x ( x ) N ψ | | = 0 , N Z e + .
The rapid decay of Fourier coefficients for sufficiently smooth functions is a hallmark of diffusion-dominated problems, where the solution tends to smooth out over time. The FGIG method effectively exploits this property by employing a Fourier basis, enabling an accurate representation of the solution with relatively few terms. This contributes to the method’s efficiency and accuracy in diffusion-dominated regimes.
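The exponential decay predicted by Theorem 2 is easy to observe numerically. The sketch below uses a hypothetical analytic, 2-periodic function (not one from the paper) and shows its FFT coefficient magnitudes collapsing by several orders of magnitude within a handful of modes:

```python
import numpy as np

# Hypothetical analytic, 2-periodic test function.
N = 64
x = np.linspace(0.0, 2.0, N, endpoint=False)
f = 1.0 / (2.0 + np.cos(np.pi * x))

# For analytic periodic functions, |c_k| falls off geometrically in k.
c = np.abs(np.fft.fft(f)) / N
assert c[16] < 1e-6 * c[0]
```

For this function the coefficients decay like ( 2 − √3 ) k , so roughly a dozen modes already capture the profile to high accuracy, mirroring the efficiency of the Fourier basis in diffusion-dominated regimes.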
For nonsmooth solutions, the coefficients decay at a polynomial rate depending on the degree of smoothness, as shown by the following theorem:
Theorem 3 
([25] Theorems A.1 and A.2). Let t Ω T , and suppose that I x ( x ) ψ H L s s s Z 0 + is approximated by the N / 2 -degree, L-sp truncated Fourier series (10). Then,
| ψ ˜ k | = O k s 1 , as k ,
| | I x ( x ) ψ I x ( x ) N ψ | | = O N s 1 / 2 , as N .
Discontinuities or shock features in u affect the coefficients and result in the well-known Gibbs phenomenon, where the truncated Fourier series near a discontinuity exhibits an overshoot or undershoot that does not diminish as the number of terms in the sum increases.
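The contrast with the analytic case can also be checked numerically. For a discontinuous profile such as a square wave (a hypothetical example), the odd-mode coefficient magnitudes decay only like 1 / k , in rough agreement with the O ( k − s − 1 ) rate of Theorem 3 for minimal regularity:

```python
import numpy as np

# Square wave over one period: +1 on the first half, -1 on the second.
N = 1024
x = np.linspace(0.0, 2.0, N, endpoint=False)
sq = np.where(x < 1.0, 1.0, -1.0)
c = np.abs(np.fft.fft(sq)) / N

# Odd-mode magnitudes behave like 2/(pi*k): merely algebraic decay, so
# high frequencies keep contributing and Gibbs oscillations persist.
k = np.array([1, 3, 5, 51])
ratio = c[k] * k          # approximately constant (about 2/pi)
assert ratio.max() / ratio.min() < 1.1
```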
A crucial step in the implementation of the FGIG method lies in the solution of the linear system (43). To analyze the sources of error in solving the given linear system, we must consider errors arising from numerical stability, discretization, and condition numbers, as well as how these depend on N , M , μ , and ν . The right-hand side of the linear system represents the nth Fourier interpolation coefficient of u 0 , which reflects how u 0 is approximated by a finite number of frequencies (N). The convergence rates of the interpolation error associated with Equation (23) are analyzed in the following two corollaries, specifically focusing on their dependence on the smoothness of the underlying function space:
Corollary 1 
([26] Corollary 4.1). Suppose that u 0 A L , β s β > 0 is approximated by the N 0 / 2 -degree, L-periodic Fourier interpolant (23), for any N 0 Z e + , then
| | u 0 I N 0 u 0 | | = O e ω N 0 β / 2 , as N 0 .
Moreover, if u 0 is β-analytic then
| | u 0 I N 0 u 0 | | = 0 , N 0 Z e + .
Corollary 2 
([25] Corollary A.1). Suppose that u 0 H L s s s Z + is approximated by the N 0 / 2 -degree, L-periodic Fourier interpolant (23), for any N 0 Z e + , then
| | u 0 I N 0 u 0 | | = O N 0 s 1 / 2 , as N 0 .
Now, we turn our attention to the left-hand side of the linear system (43) whose coefficient matrix mainly consists of the SGIM that is scaled by the factor α n . The error of the SGG quadrature induced by the SGIM, Q T , for sufficiently smooth functions, can be described in closed form by the following theorem:
Theorem 4 
([28] Theorem 4.1). Let f C M + 1 ( Ω T ) be interpolated by the SG polynomials at the SGG nodes, t ^ M , 0 : M ( λ ) G ^ T , M λ . Then, there exists a matrix, Q T = ( q l , j T ) , 0 l , j M , and some numbers, ξ l = ξ t ^ M , l ( λ ) Ω T l J M + , satisfying
I t ^ M , l ( λ ) ( t ) f = Q l T f 0 : M + E T , M ( λ ) t ^ M , l ( λ ) , ξ l ,
where f 0 : M = f t M λ ,
E T , M ( λ ) t ^ M , l ( λ ) , ξ l = f ( M + 1 ) ( ξ l ) ( M + 1 ) ! K T , M + 1 ( λ ) I t ^ M , l ( λ ) ( t ) G T , M + 1 ( λ ) ,
K T , j ( λ ) = 2 2 j 1 T j Γ ( 2 λ + 1 ) Γ ( j + λ ) Γ ( λ + 1 ) Γ ( j + 2 λ ) j Z 0 + ,
is the leading coefficient of the jth-degree SG polynomial, G T , j ( λ ) ( t ) .
The following theorem shows that the SG quadrature formula converges exponentially fast for sufficiently smooth functions. Its proof can be immediately derived from [25] (Theorem A.5) in the absence of domain partitioning.
Theorem 5 
([25] Theorem A.5). Let | | f ( M + 1 ) | | L ( Ω T ) = A R 0 + , where the constant A is independent of M. Suppose also that the assumptions of Theorem 4 hold true. Then there exist some constants, D λ > 0 , B 1 λ = A D λ and B 2 λ > 1 , which depend on λ but are independent of M such that the SG quadrature truncation error, E T , M λ t ^ M , l ( λ ) , ξ l , is bounded by
| | E M λ t ^ M , l ( λ ) , ξ l | | L ( Ω T ) = B 1 λ 2 2 M 1 e M M λ M 3 2 T M + 1 t ^ M , l ( λ ) × 1 , M 0 λ 0 , Γ M 2 + 1 Γ λ + 1 2 π Γ M 2 + λ + 1 , M Z o + 1 2 < λ < 0 , 2 Γ M + 3 2 Γ λ + 1 2 π M + 1 M + 2 λ + 1 Γ M + 1 2 + λ , M Z 0 , e + 1 2 < λ < 0 , B 2 λ M + 1 λ , M 1 2 < λ < 0 ,
l J M + .
Besides the exponential convergence of the SG quadrature, the upper bounds on the rounding errors incurred in computing the elements of qth-order barycentric Gegenbauer integration matrices of size ( n + 1 ) × ( m + 1 ) —used in constructing the Gegenbauer quadrature rules—are approximately O ( m u R ) , according to standard analysis, where m, n, and q are positive integers. Formula (26) shows immediately that the rounding errors in Q T are of O ( T M u R / 2 ) = O ( M u R ) , which is the same as the rounding errors in Q up to a constant factor of T / 2 . The scaling by T / 2 does not change the asymptotic order of the rounding error, though it could affect the constant factors involved. The scalar multiplication by T / 2 does not affect the condition number either because it scales both the norm of Q and the norm of Q 1 by a factor and its reciprocal as follows:
k ( Q T ) = T 2 | | Q | | 2 · 2 T | | Q 1 | | 2 = k ( Q ) .
Thus, the condition numbers of Q and Q T are identical.
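This scale invariance of the condition number is straightforward to confirm numerically; the matrix below is a random stand-in for Q (the actual integration matrix is not needed for the identity to hold):

```python
import numpy as np

# Random stand-in for Q; T is the interval length as in Q_T = (T/2) Q.
rng = np.random.default_rng(1)
Q = rng.standard_normal((20, 20))
T = 0.2

# k(c*Q) = k(Q) for any nonzero scalar c, since ||c*Q|| * ||(c*Q)^{-1}||
# = |c| * ||Q|| * |c|^{-1} * ||Q^{-1}||.
assert np.isclose(np.linalg.cond(T / 2.0 * Q), np.linalg.cond(Q))
```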
Now, let us analyze the error due to rounding in the collocation matrix ( A ( n , M ) T ) in (43). Assume that Q T is perturbed by Δ Q T due to rounding errors. Furthermore, we assume that α n also has rounding errors, which we will denote as Δ α n : | | Δ α n | | = O ( u R ) . Then, the rounding errors in A ( n , M ) T can be estimated by Δ α n Q T + α n Δ Q T + Δ α n Δ Q T :
I M + 1 + ( α n + Δ α n ) Q T + Δ Q T = I M + 1 + α n Q T + Δ α n Q T + α n Δ Q T + Δ α n Δ Q T .
These error terms contribute asymptotic rounding errors in each matrix element of orders O ( u R ) , O ( M α n u R ) , and O ( M u R 2 ) , respectively. We can, thus, express the total asymptotic error in each matrix element as O max u R , M α n u R , M u R 2 . Consequently, the asymptotic error in each element of the product ( A ( n , M ) T ψ ˜ n , 0 : M ) of the linear system (43) can be directly estimated by
O max u R , M α n u R , M u R 2 | | ψ ˜ n , 0 : M | | 1 , n N N / 2 .
Notice how the size of Fourier coefficients is critical in determining the error. The rate at which these coefficients decay as | n | increases is influenced by the smoothness of the solution function (u), as shown earlier in this section. In particular, for smooth functions, as is often the case with a relatively low Péclet number ( P e = μ L / ν ), Fourier coefficients decay exponentially as | n | increases, and their l 1 -norm will be relatively low, leading to a lower overall error, even as M or N increases. For nonsmooth solutions, however, especially those with discontinuities or sharp gradients, which often occur at relatively high P e values, Fourier coefficients decay much more slowly. This slower decay means that the Fourier coefficients for higher frequencies remain relatively high, and their contribution to the l 1 -norm will be high. As a result, the error in the product can increase significantly. This shows that nonsmoothness often leads to slower convergence and higher numerical errors, as expected.
The overall error in the solution will depend not only on the error in the matrix–vector product, A ( n , M ) T ψ ˜ n , 0 : M , or the relatively low errors in the discrete Fourier coefficients of u 0 , but also on the conditioning of the collocation matrix ( A ( n , M ) T ), which is given by
k A ( n , M ) T = | | I M + 1 + α n Q T | | | | I M + 1 + α n Q T 1 | | .
Therefore, the behavior of the condition number depends on the relative size of α n and the spectrum of α n Q T . If ƛ 0 : M are the eigenvalues of Q T then the eigenvalues of A ( n , M ) T , ƛ ^ 0 : M , are shifted scaled versions of ƛ 0 : M by the scaling factor α n : ƛ ^ 0 : M = 1 + α n ƛ 0 : M . To partially analyze the effect of this shift-scaling operation on the conditioning of the collocation matrix, let ƛ min and ƛ ^ min denote the lowest eigenvalues of Q T and A ( n , M ) T , respectively. Also, let σ max , σ min , σ ^ max , and σ ^ min denote the extreme singular values of Q T and A ( n , M ) T in respective order. The first rows of Figure 2, Figure 3 and Figure 4 display their distributions together with the distributions of ƛ 0 : M and ƛ ^ 0 : M for increasing values of M and λ . Observe how ƛ 0 : M cluster gradually around 0 as the size of the matrix increases for all the values of  λ . Moreover, increasing λ values while holding M fixed tends to gradually spread out the eigenvalues in the complex plane away from the origin. This shows that ƛ min 0 as λ 0.5 . While a near-zero eigenvalue does not directly determine the exact value of the lowest singular value, it strongly confirms the existence of a relatively low σ min , leading to a high condition number and potential numerical instability. In fact, Theorem A3 proves the existence of at least one near-zero singular value of Q , not necessarily the lowest singular value, if λ 0.5 . Figure 5 confirms this fact, where we can clearly see the rapid decay of the lowest singular values of Q as λ 0.5 . This analysis shows that k ( Q ) = k Q T , as λ 0.5 , with a higher growth rate as M increases. The first row in Figure 6 further supports this observation, showing a significant shift in the order of magnitude of k Q T for all M values as λ gradually approaches 0.5 . The figure also reveals that the curve of k Q T initially exhibits a near-L-shaped pattern for low M values. 
It rapidly increases as λ decreases below 0 but grows slowly as λ increases beyond 1. This latter growth rate increases gradually for higher M values, and the curve of k Q T gradually transitions to a U-shaped pattern with a base in the range 0 λ 1 . Strikingly, the poor conditioning for λ 0.5 can be largely restored with a proper scaling of Q T followed by a shift by the identity matrix. This operation effectively shifts ƛ min from near 0 to near 1, for relatively low scaling factors. In particular, assuming a relatively low α n , which occurs when μ , ν , and n are low, the scaled eigenvalue ( α n ƛ j ) remains relatively low j J M + , especially for high M values, forcing the eigenvalues of A ( n , M ) T to cluster around 1. This can significantly decrease the ratio σ ^ max / σ ^ min . Furthermore, if α n is sufficiently low then A ( n , M ) T I M + 1 . Hence, k A ( n , M ) T 1 , and the matrix is nearly perfectly well-conditioned. For the sake of illustration, consider the dataset { L = 2 , T = 0.2 , μ = 0 , ν = 1 , N = 4 } . At the fundamental frequency, we can readily verify that ƛ ^ 0 : M = 1 M + 1 + π 2 ƛ 0 : M 1 for sufficiently high M values, since ƛ 0 : M cluster around the origin, as evident in Figure 3 and Figure 4. Notice, however, that this result remains valid for low M values only if | λ | remains relatively low. The second rows in Figure 2, Figure 3 and Figure 4 display the distributions of ƛ ^ 0 : M , where we observe their clustering around unity for increasing M values. The second row in Figure 6 displays the remarkable decay of the condition number for all the λ and M values, dropping by four orders of magnitude in the range 40 M 80 . Notice, here, that the conditioning relatively deteriorates as λ continues to grow beyond nearly 1.5, and the degeneration becomes relatively clear for high M values.
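The restorative effect of the shift-and-scale operation can be illustrated with a simple stand-in matrix. Below, Q is a scaled Hilbert matrix — a hypothetical surrogate, chosen only because it is notoriously ill-conditioned while having a small norm, so its eigenvalues cluster near 0 — and I + α Q is nearly perfectly conditioned for a low α :

```python
import numpy as np

# Hilbert matrix: small norm yet notoriously ill-conditioned; scaled down
# so that the stand-in Q has eigenvalues clustered near 0.
M = 30
i = np.arange(M)
Q = 1e-6 / (i[:, None] + i[None, :] + 1.0)

alpha = 0.5 + 0.1j                      # a hypothetical low alpha_n
A = np.eye(M) + alpha * Q               # shift-and-scale: I + alpha*Q

assert np.linalg.cond(Q) > 1e10         # Q alone is badly conditioned
assert np.linalg.cond(A) < 1.001        # I + alpha*Q is almost perfect
```

Since ‖ α Q ‖ is tiny here, every eigenvalue of A sits next to 1, which is precisely the clustering mechanism described above.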
The above analysis assumes a low α n value for all the cases. When α n becomes relatively high, k A ( n , M ) T will depend on the interplay among the parameters n , μ , and ν . Assuming μ and ν are held fixed, the highest condition number, in this case, occurs at the highest value of α n , which occurs at n = N / 2 ; see Theorem A4. At this stage, increasing either μ or ν makes the eigenvalues more spread out in the complex plane, which will gradually increase k A ( n , M ) T up to a terminal value beyond which it ceases to increase, assuming N , λ < . This is because A ( n , M ) T α n Q T for large α n ; thus, k A ( n , M ) T k Q T . This demonstrates that for highly oscillatory solutions, where numerous Fourier terms are necessary to accurately represent the high-frequency oscillations, the condition number of the resulting linear system will be, at worst, comparable to that of the integration matrix itself. Overall, for higher n values, the effect of both μ and ν becomes more pronounced, and the balance between these two constants is crucial in the sense that reducing their values can improve the conditioning of the linear system. Figure 7, Figure 8, Figure 9 and Figure 10 demonstrate the effect of μ and ν on the conditioning as their values grow large.
The above analysis emphasizes that the proper choice of the Gegenbauer parameter ( λ ) can significantly influence the stability and conditioning of the SGIM, which is crucial for the stability and convergence of spectral integration methods employing Gegenbauer polynomials in general. Besides our new findings here on the strong relationship between λ and the conditioning of the matrix, it is important to mention also that the barycentric representation of the SGIM employed in this work generally improves the stability of Gegenbauer quadratures, since it mitigates numerical instabilities that can arise from the direct evaluation of Gegenbauer polynomials, especially for high degrees or specific λ values. Since the stability characteristics of the SGIM are directly inherited from the Gegenbauer integration matrix, a suitable rule of thumb that can be drawn from this study and verified by Figure 6 is to select λ within the interval ( 1 / 2 + ε , 2 ] for a relatively low p = max { N , μ , ν } , where ε is a relatively low and positive parameter. For large p, Figure 7, Figure 8, Figure 9 and Figure 10 suggest shrinking this recommended interval to [ 0 , 0.5 ] to maintain a relatively low condition number.
Besides the conditioning matter, we draw the attention of the reader to the important fact that the Gegenbauer weight function associated with the Gegenbauer integration matrix diminishes rapidly near the boundaries ( x = ± 1 ) for increasing λ > 2 , which forces the Gegenbauer quadrature to depend more heavily on the behavior of the integrand near the central part of the interval, increasing sensitivity to errors and forcing the quadrature to become more extrapolatory. Moreover, as λ increases, the Gegenbauer quadrature nodes cluster more toward the center of the interval. This means that the quadrature rule relies more on extrapolation rather than interpolation, making it more sensitive to perturbations in the function values and amplifying numerical errors. On the other hand, as λ 1 / 2 , Gegenbauer polynomials grow rapidly in magnitude, leading to poor numerical stability. A low buffer parameter ( ε ) is often added to avoid singularities and maintain numerical accuracy. For sufficiently smooth functions and high spectral expansion terms, the truncated expansion in the shifted Chebyshev quadrature, associated with λ = 0 , is optimal for the L -norm approximations of definite integrals, while the shifted Legendre quadrature, associated with λ = 0.5 , is optimal for the L 2 -norm approximations.
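The near-exactness of the λ = 0.5 (shifted Legendre) member of the family for smooth integrands is easy to check with NumPy's Gauss–Legendre nodes and weights. The sketch below uses the standard rule on [ − 1 , 1 ] , not the shifted SGG rule of the paper:

```python
import numpy as np

# 12-node Gauss--Legendre rule on [-1, 1] (the lambda = 0.5 case of the
# Gegenbauer family).
nodes, weights = np.polynomial.legendre.leggauss(12)

# Integrate the smooth function e^x; the rule is exact for polynomials up
# to degree 23, so the error is at roundoff level.
approx = np.sum(weights * np.exp(nodes))
exact = np.e - 1.0 / np.e
assert abs(approx - exact) < 1e-14
```

This is the exponential quadrature convergence (Theorem 5) that allows the FGIG method to use relatively low M values without compromising precision.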
Remark 2
(Practical Selections of λ and M). The analysis in this section reveals that the Gegenbauer parameter (λ) significantly impacts both accuracy and stability. For general applications with relatively low p = max { N , μ , ν } , λ should be chosen within ( 1 / 2 + ε , 2 ] , where ε > 0 is a low parameter. For large p, this interval should be reduced to [ 0 , 0.5 ] to maintain low condition numbers and ensure numerical stability. Specifically, for L -norm approximations, λ = 0 (shifted Chebyshev quadrature) is optimal, and for L 2 -norm approximations, λ = 0.5 (shifted Legendre quadrature) is optimal. These choices provide the best balance between numerical stability and approximation accuracy for smooth solutions. On the other hand, the parameter M governs the precision of the SGG quadrature employed to discretize the integral system. Its value directly influences the accuracy of the collocation matrices and, consequently, that of the resulting linear system. For smooth, non-oscillatory functions, relatively low values, such as M [ 12 , 16 ] , are typically sufficient to achieve near-machine-precision accuracy. In contrast, higher M values are required when dealing with highly oscillatory or nonsmooth functions, where additional quadrature nodes are necessary to adequately resolve oscillations or capture irregular features. In practice, it is advisable to begin with modest M values (e.g., 12–16) and increase them only as dictated by the oscillatory nature or regularity of the underlying solution. For simulations extending over long time intervals ( T 1 ), the temporal dynamics may also demand higher resolution. A practical heuristic is to start with a coarse grid (e.g., M = 10 ) and incrementally increase M until the solution profile or the discrete norm error (65) stabilizes within the desired tolerance.
Remark 3
(Spectral Properties and Preconditioning). The linear systems arising from the FGIG method (Equation (43)) are characterized by coefficient matrices of the form I + α n Q T . For integral equations, it is known that the eigenvalues of the underlying operators (and their discrete approximations) can exhibit clustering behavior. The results in [29] concerning the clustering of eigenvalues for matrices from integral equations provide a theoretical foundation for the observed excellent conditioning of our systems, particularly for lower values of | α n | (i.e., lower frequencies n). This spectral clustering near 1 is precisely the property that makes preconditioned Krylov solvers, as discussed in [29], highly effective for such problems. While the superb conditioning of our systems renders the use of such iterative solvers unnecessary for the problems considered in this work—allowing for efficient direct solution—the findings in [29] become highly relevant for potential extensions of the FGIG method to higher-dimensional or more complex problems, where the resulting linear systems might be larger and less amenable to direct factorization. This connection to the literature on integral equations and iterative solvers opens promising directions for future research, especially for problems in higher-dimensional spatial domains, where iterative Krylov methods are essential [30]. The well-established theory on superlinear convergence for such methods under eigenvalue clustering [31] provides a strong foundation for these future developments.

7. The SA-FGIG Method

We show in this section how to compute the time-dependent Fourier coefficients analytically and use them to synthesize the solution (u). To this end, notice that Equation (25) is a Volterra integral equation of the first kind for ψ ˜ n , for each n. To obtain its solutions in closed form, we differentiate both of its sides with respect to t to get
d ψ ˜ n ( t ) d t + α n ψ ˜ n ( t ) = 0 , n N N / 2 .
This is a first-order linear ODE for ( ψ ˜ n ) with the initial condition of ψ ˜ n ( 0 ) = u ^ 0 , n n N N / 2 , which can be derived easily from Equation (25). Its solution is therefore given by
ψ ˜ n ( t ) = u ^ 0 , n e α n t , n N N / 2 .
Notice, here, that ( α n ) = ν ω n 2 controls the super-exponential decay of ψ ˜ n due to the quadratic dependence on n in the exponent for ν > 0 . Consequently, for ν > 0 , higher-frequency modes, corresponding to high | n | values, decay much faster. This is characteristic of diffusion, which dissipates high-frequency components rapidly, leading to the smoothing of the solution (u). On the other hand, the oscillatory behavior of ψ ˜ n arises from two sources: (i) the discrete Fourier coefficient ( u ^ 0 , n ), which is generally complex and introduces an initial phase that defines the starting point of the oscillation for each mode, and (ii)  ( α n ) = μ ω n , which introduces a time-dependent phase shift, e i μ ω n t , that causes the oscillations to evolve over time. The frequency of these oscillations increases linearly with | n | . This reflects advection, which causes phase shifts in the modes without affecting their amplitude. The combined effects of ν and μ , encapsulated by the parameter α n , cause the modes to experience amplitudinal decay that is dominated by ν and a phase shift that is determined by  μ .
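The amplitude/phase split described above can be checked directly from the closed form ψ ˜ n ( t ) = u ^ 0 , n e − α n t . In the sketch below (assuming ω n = 2 π n / L and a hypothetical initial coefficient), the modulus of each mode decays like e − ν ω n 2 t regardless of μ , while advection contributes only a phase rotation:

```python
import numpy as np

# Assumed mode frequency w_n = 2*pi*n/L; u0_hat is a hypothetical
# initial Fourier coefficient.
L, mu, nu, t = 2.0, 0.3, 1.0, 0.1
u0_hat = 1.0 + 0.5j

for n in (1, 2, 4):
    w = 2.0 * np.pi * n / L
    alpha = nu * w**2 + 1j * mu * w      # Re: diffusion, Im: advection
    psi = u0_hat * np.exp(-alpha * t)    # psi_n(t) = u0_hat * e^{-alpha_n t}
    # Advection only rotates the phase; the modulus is set by nu alone,
    # and higher |n| modes decay super-exponentially faster.
    assert np.isclose(abs(psi), abs(u0_hat) * np.exp(-nu * w**2 * t))
```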
Using Equation (44), the conjugate symmetry condition (40), and the closed-form expression of Fourier coefficients (61), we can now write the Fourier series of the SA solution, u s a , as follows:
u s a N ( x , t ) = 2 k N N / 2 ψ ˜ k ( t ) e i ω k x 2 k N N / 2 ψ ˜ k ( t ) + g ( t ) = 2 k N N / 2 u ^ 0 , k e α k t + i ω k x 2 k N N / 2 u ^ 0 , k e α k t + g ( t ) .
The SA-SSD, u x s a N , is therefore given by
u x s a N ( x , t ) = 2 k N N / 2 ω k u ^ 0 , k e α k t + i ω k x .

8. Computational Results

This section presents a series of numerical experiments to validate the accuracy and efficiency of the proposed FGIG and SA-FGIG methods. We consider three benchmark problems with known analytical solutions, allowing for a direct comparison between the numerical and exact solutions. Each method’s performance is evaluated using error norms and computational time. We measure the error in the approximate solution of Problem S at the augmented collocation points set S N L × G ^ T , M λ , + using the absolute error function given by
E u ( x , t ) = | u ( x , t ) u ( x , t ) N | ,
and the discrete norm at t = T given by
DNE u = | | u ( x , T ) u ( x , T ) N | | N = L N j J N u ( x N , j , T ) u N ( x N , j , T ) 2 ,
where DNE u denotes the discrete norm error in u. The superscript “sa” is added to the above two error notations when the solution is approximated by the SA solution obtained through (62).
Test Problem 1. Consider Problem S with μ = 0 , ν = 1 , and L = 2 , along with initial and boundary functions u 0 ( x ) = sin ( π x ) and g ( t ) = 0 . The exact solution of this problem is u ( x , t ) = e π 2 t sin ( π x ) [7]. Figure 11 shows the surfaces of the exact solution and its approximations obtained by the FGIG and the SA-FGIG methods, as well as their corresponding errors, for some parameter values. Both methods achieve near-machine precision in resolving the solution and the SSD on a relatively coarse mesh grid of size 22 × 13 . Figure 12 shows the surfaces of the logarithms of the discrete error norms for both methods at the terminal time, t = 0.2 , for certain ranges of the parameters N , M , and  λ . When N is fixed, the linear decay of the surface for increasing M values indicates the exponential convergence of the FGIM. The nearly flat surface along N while holding M fixed in the plot associated with the FGIM occurs because the error is dominated by the discretization error of system (27), controlled by the size of the temporal linear system. To observe the theoretical exponential decay with N, it is necessary to increase the time mesh point grid so that the temporal discretization error becomes negligible compared to the spatial error. This interplay between N and M reflects the dependence of the total error on both spatial and temporal resolutions. The SA-FGIM method, on the other hand, exhibits much lower logarithmic error norms, in the range from 16.1 to 16.8 , indicating a relatively flat surface. This is expected because the exact solution is β -analytic in x L < 2 , cf. Theorem A2. Consequently, Fourier truncation and interpolation errors vanish N Z e + by Theorem 2 and Corollary 1. Furthermore, the numerical errors in double-precision arithmetic quickly plateau at about the O ( 10 16 ) level and exhibit random fluctuations. 
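The semi-analytical construction can be reproduced in a few lines for Test Problem 1: evolving each FFT mode of u 0 by e − α n t (here α n = ν ω n 2 since μ = 0 , assuming ω n = 2 π n / L ) recovers the exact solution to near machine precision, in line with the reported SA accuracy. A minimal sketch, not the paper's implementation:

```python
import numpy as np

# Test Problem 1 setup: mu = 0, nu = 1, L = 2, u0(x) = sin(pi*x).
L, nu, t, N = 2.0, 1.0, 0.2, 32
x = np.linspace(0.0, L, N, endpoint=False)
u0 = np.sin(np.pi * x)

# Evolve each mode: u_hat_n(t) = u_hat_n(0) * exp(-nu * w_n^2 * t).
w = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
u = np.fft.ifft(np.fft.fft(u0) * np.exp(-nu * w**2 * t)).real

# Compare with the exact solution u = exp(-pi^2 t) sin(pi x).
exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
assert np.max(np.abs(u - exact)) < 1e-14
```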
Figure 13 shows the times (in seconds) required to run the FGIM and the SA-FGIM methods for certain ranges of the parameters N and M. The surfaces associated with the FGIM have relatively much lower time values compared to those of their counterparts associated with the SA-FGIM. In all the runs, the FGIM finished in under 0.01 s. In contrast, the SA-FGIM method exhibited substantially longer execution times, peaking at approximately 0.17 s when M = N = 100 . At this point, the FGIM demonstrated a remarkable speed advantage, operating roughly 21 times faster than the SA-FGIM. This indicates that the FGIM is much more efficient in terms of the execution time. As the number of cores in the computing device increases, the time gap between both methods is expected to widen significantly, especially when processing larger datasets. Table 1 shows a comprehensive comparison with the methods of Jena and Gebremedhin [32], Jena and Senapati [7], and Demir and Bildik [33]; the latter work uses a collocation method with cubic B-spline finite elements. Compared with all three methods, the FGIM achieves significantly lower errors, even when employing relatively coarse mesh grids. For instance, the method in [7] necessitates a mesh grid comprising 3,210,321 points to achieve an L -error on the order of 10 12 . In contrast, the FGIM attains the same level of accuracy with as few as 36 points and achieves near-full precision using just 44 mesh points—a reduction in the mesh grid size by approximately 99.999% in either case. The SA-FGIM outperforms all the methods in terms of accuracy, scoring full machine precision using just five truncated Fourier series terms. Figure 14 displays the values of the collocation matrix condition number for certain ranges of the parameters N and M. Observe how k A ( n , M ) T remains relatively close to 1 at n = 1 and slightly improves for increasing matrix size, as discussed earlier in Section 6.
Figure 15 illustrates further the surfaces of the exact and approximate solutions, their spatial derivatives, and the associated errors over the extended time interval ( [ 0 , 10 ] ). The results demonstrate that both the FGIM and SA-FGIM maintain high accuracy over long-time simulations, with errors remaining near machine precision. This confirms the methods’ ability to effectively reduce error accumulation, as the exponential convergence observed earlier at t = 0.2 persists using a finer temporal resolution.
Test Problem 2. Consider Problem S with μ = 0 , ν = 1 / π 2 , and L = 2 , along with initial and boundary functions u 0 ( x ) = sin ( π x ) and g ( t ) = 0 . The exact solution to this problem is u ( x , t ) = e t sin ( π x ) [34]. Figure 16 shows the surfaces of the exact and approximate solutions and their errors obtained by the FGIG and SA-FGIG methods for some parameter values. Table 2 shows a comparison among the proposed methods and the Crank–Nicolson and compact boundary value method of Sun and Zhang [6] as well as the method of Mohebbi and Dehghan [34]. The former methods combine fourth-order boundary value methods for time discretization with a fourth-order compact difference scheme for spatial discretization. The latter method utilizes a compact finite difference approximation of fourth-order accuracy to discretize spatial derivatives, and the resulting system of ordinary differential equations is then solved using the cubic C 1 -spline collocation method. Both FGIM and SA-FGIM outperform these methods in terms of accuracy and computational cost, as shown in the table. In particular, the SA-FGIM exhibits the lowest L -error using five terms of Fourier truncated series. The FGIM method demonstrates a significant improvement in accuracy as the number of temporal mesh points increases. Notably, adding only three temporal mesh points to the FGIM configuration leads to a remarkable decrease in error by approximately five orders of magnitude. Figure 17 further illustrates the surfaces of the exact and approximate solutions, their spatial derivatives, and associated errors over the extended time interval ( [ 0 , 10 ] ).
Test Problem 3. Consider Problem S with μ = 0.01 , ν = 0.1 , and L = 2 with initial and boundary functions u 0 ( x ) = sin ( 2 π x / L ) and g ( t ) = e π 2 ν t sin ( 2 π μ t / L ) . The exact solution to this problem is u ( x , t ) = e π 2 ν t sin [ π ( μ t 2 x / L ) ] . Figure 18 shows the surfaces of the exact and approximate solutions and their errors obtained by the FGIG and SA-FGIG methods for some parameter values. Both methods resolve the solution surface well within an error of O 10 3 using as few as sixteen Fourier series terms and five time-mesh points. Figure 19 further illustrates the absolute error profiles and the discrete norm level at the extended terminal time ( t = 10 ). The results demonstrate that both the FGIG and SA-FGIG methods maintain accuracy over long-term simulations, with errors remaining within O ( 10 3 ) . This confirms the robustness of the methods in handling advection–diffusion problems with periodic boundary conditions, even over extended time intervals. The ability to achieve such accuracy with relatively coarse grids highlights the efficiency and computational advantages of the proposed methods. The discrete norm level further validates the stability and precision of the solutions and reinforces the suitability of the FGIG framework for long-time simulations.
For convenience, Table 3 compares the computational times and errors for all the test problems, demonstrating the exceptional performance of both the FGIG and SA-FGIG methods.
It is worth noting that although the FGIG method demonstrates exceptional performance in diffusion-dominated problems, it often performs poorly in advection-dominated regimes, which are characterized by steep gradients and sharp features. Figure 20 shows the absolute error profiles and the discrete norm level for the last test example at the terminal time (t = 0.1) with parameters L = 2, μ = 50, and ν = 0.1. This configuration gives a Péclet number of Pe = μL/ν = 1000, representing a strongly advection-dominated regime. The results demonstrate that the numerical scheme becomes unstable in this case, leading to poor approximations.

9. Discussion and Conclusions

In this work, we introduced the FGIG method, a novel approach for solving the 1D AD equation with periodic boundary conditions. The method integrates Fourier series and SG polynomials within a Galerkin framework. In particular, it effectively exploits the spatial symmetry inherent in periodic boundary conditions through Fourier series, while its SG-polynomial-based temporal integration captures the asymmetric dynamics of AD processes. This dual approach makes it particularly suited for problems where symmetry and asymmetry coexist, such as in diffusion-dominated transport phenomena with periodic spatial structures. Unlike traditional numerical methods, the FGIG method eliminates the need for iterative time stepping by solving integral equations directly, significantly reducing computational overhead and temporal errors, and capturing the smooth and spreading behavior characteristic of diffusion-dominated regimes with excellent accuracy. The numerical experiments in Section 8 confirmed exponential convergence for smooth solutions, achieving near-machine precision (errors as low as 10^{−16}) with coarse mesh grids (e.g., 4 × 11 points for test problem 1) and demonstrating stability under oscillatory conditions and for a certain range of Gegenbauer parameters. Specifically, for test problem 1, the FGIG method achieved an L∞-error of 7.21645 × 10^{−16} with only 44 mesh points, a 99.999% reduction in grid size compared with the 3,210,321 points required by Jena and Senapati [7] for a similar error of 10^{−12}. The SA-FGIG method performed even better, reaching full machine precision (5.5511 × 10^{−17}) with just five Fourier terms. For test problem 2, the FGIG method reduced the error by five orders of magnitude (from 7.0965 × 10^{−11} to 8.3267 × 10^{−16}) by adding just three temporal mesh points, surpassing the accuracy of the methods of Sun and Zhang [6] and Mohebbi and Dehghan [34].
The FGIG method's inherent parallelizability enables efficient computation across multi-core architectures, with execution times under 0.01 s, compared with 0.17 s for the SA-FGIG method, a 21-fold speed advantage that makes it suitable for large-scale simulations. Furthermore, the barycentric formulation of the SGG quadrature ensures high precision in integral evaluations, a feature not commonly available in other numerical methods. This novel framework also facilitates the derivation of semi-analytical solutions, providing a benchmark for verifying numerical results. The FGIG method provides a strong foundation for improving numerical solutions of PDEs in computational science and engineering. Its scientific merit lies in its innovative integration of Fourier and Gegenbauer bases, rigorous error and stability analysis, and demonstrated superiority over established methods. By eliminating time stepping and enabling semi-analytical verification, the FGIG framework sets a new benchmark for high-accuracy solvers in periodic domains.

10. Limitations of the FGIG Method and Future Work

The FGIG method has demonstrated exceptional performance in solving diffusion-dominated problems, where solutions exhibit smooth and spreading behavior (see Section 8). While its application to advection-dominated problems, characterized by steep gradients and sharp features, may present challenges, these are not inherent limitations of the FGIG framework but, rather, areas where further refinement can improve its versatility. In particular, the time evolution of the Fourier coefficients can become complex and difficult to predict in this case, especially for high Péclet numbers, because the advection term can introduce significant phase shifts and amplitude variations into the coefficients. Nevertheless, the method retains its stability and accuracy for moderately advection-dominated regimes, especially when the Gegenbauer parameter is carefully tuned within the recommended range.
Several strategies can further improve performance in strongly advection-dominated scenarios. These include adaptive mesh refinement, which concentrates computational resources in regions exhibiting large gradients, thereby improving the method's ability to resolve sharp features and accurately simulate advection-dominated phenomena. Additionally, hybrid spectral filtering techniques or localized basis enrichment may help mitigate the Gibbs phenomenon and improve resolution near discontinuities. Exploring hybrid approaches that combine the FGIG method with machine-learning techniques offers another promising direction; such methods could use data-driven insights to improve computational efficiency and accuracy for complex real-world problems.
Another promising direction is the extension of the FGIG framework to higher-dimensional problems and coupled systems, thereby broadening its range of applications. The integral Galerkin formulation of the FGIG method, which combines Fourier series to represent spatial periodicity with Gegenbauer polynomials for temporal integration, provides a solid foundation for such extensions to multidimensional and nonlinear problems. In two- and three-dimensional settings, tensor-product spectral bases could be employed, while nonlinear effects—such as those arising in the viscous Burgers’ equation—may be addressed through suitable quadrature rules or iterative solvers within the integral formulation. An important advantage of this approach is the elimination of time-stepping procedures, which reduces error accumulation in long-time simulations.
Insights from related works, such as the stability and convergence analysis of fully discrete Fourier collocation spectral methods for the 3D viscous Burgers’ equation [35] and the long-time stability analysis of high-order multistep schemes for the 2D incompressible Navier–Stokes equations [36], could inform the adaptation of the FGIG method. In particular, energy estimates for stability and error analysis in Sobolev norms developed in these studies might be extended to the integral equation framework, potentially enabling rigorous proofs of spectral spatial accuracy and high-order temporal accuracy for nonlinear extensions.
Overall, the FGIG method offers a flexible and extensible foundation, with its current limitations serving as a springboard for future innovations in spectral integration and high-accuracy PDE solvers.

Funding

The APC was funded by Ajman University, United Arab Emirates.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares that there are no conflicts of interest.

Abbreviations

AD: Advection–Diffusion
FEM: Finite Element Method
PS: Pseudospectral
SA: Semi-Analytic
SG: Shifted Gegenbauer
SGIM: Shifted-Gegenbauer Integration Matrix
SSD: Solution Spatial Derivative
DFT: Discrete Fourier Transform
FGIG: Fourier–Gegenbauer Integral Galerkin
SA-FGIG: Semi-Analytical FGIG
SGG: Shifted Gegenbauer–Gauss
sp: Spatially Periodic

Appendix A. Mathematical Proofs

The following theorem establishes the L² nature of I_x^{(x)} ψ + g, provided that ψ ∈ L².
Theorem A1.
Let the SSD of Problem S be ψ ∈ L²(Ω_L × Ω_T). Then, u = I_x^{(x)} ψ + g ∈ L²(Ω_L × Ω_T) ∀ {L, T} ⊂ R⁺.
Proof. 
First, notice that
I_L^{(x)} I_T^{(t)} |u|² = I_L^{(x)} I_T^{(t)} [(I_x^{(x)} ψ)² + 2 (I_x^{(x)} ψ) g + g²].
Applying the Cauchy–Schwarz inequality to the first integrand term gives
|I_x^{(x)} ψ|² ≤ x I_x^{(x)} |ψ|² ≤ L I_x^{(x)} |ψ|² < ∞, ∀ L ∈ R⁺.
Using Fubini's theorem to swap the order of integration gives
I_L^{(x)} I_T^{(t)} |I_x^{(x)} ψ|² ≤ I_L^{(x)} I_T^{(t)} x I_x^{(x)} |ψ|² = I_T^{(t)} I_L^{(ξ)} I_{ξ,L}^{(x)} x |ψ|² = I_T^{(t)} I_L^{(ξ)} ((L² − ξ²)/2) |ψ|² < ∞,
since ψ ∈ L²(Ω_L × Ω_T) by assumption. The inequality
I_L^{(x)} I_T^{(t)} [2 (I_x^{(x)} ψ) g + g²] < ∞
follows readily from Ineq. (A2) and the boundedness of g on Ω_T. This completes the proof of the theorem.    □
The following theorem proves the β -analyticity of the exact solution of test problem 1.
Theorem A2.
The function u(x,t) = e^{−π²t} sin(πx) is β-analytic in x, ∀ (x,t) ∈ Ω_L × Ω_T, provided that L < 2.
Proof. 
First, notice that u is analytic on C_L for all t ∈ Ω_T because the sine function is entire. Observe also that
u(x + iy) = e^{−π²t} sin(π(x + iy)) = e^{−π²t} [sin(πx) cosh(πy) + i cos(πx) sinh(πy)],
so
|u(x + iy)| ≤ e^{−π²t} cosh(πy),
using the identity cosh²(πy) − sinh²(πy) = 1. On C_{L,β}, the supremum occurs at y = ±β, so
‖u‖_{A_{L,β}} = e^{−π²t} cosh(πβ).
Consequently,
lim_{β→∞} ‖u‖_{A_{L,β}} e^{−ωβ} = e^{−π²t} lim_{β→∞} cosh(πβ) e^{−2πβ/L} = e^{−π²t} lim_{β→∞} (1/2) e^{πβ} e^{−2πβ/L} = 0, ∀ L < 2.
This completes the proof of the theorem.    □
The following theorem highlights the existence of relatively low singular values of the Gegenbauer integration matrix Q as λ → −0.5.
Theorem A3.
There exists at least one near-zero singular value of Q, not necessarily the lowest singular value, as λ → −0.5.
Proof. 
Since Q has at least one eigenvalue ƛ_Q ≈ 0 as λ → −0.5, there exists a nonzero vector v such that Q v ≈ 0. Therefore,
Qᵀ Q v = Qᵀ (Q v) ≈ Qᵀ 0 = 0.
This implies that v is approximately an eigenvector of Qᵀ Q, with an eigenvalue ƛ_{QᵀQ} ≈ 0 and a corresponding singular value σ given by
σ = √(ƛ_{QᵀQ}) ≈ 0,
from which the proof is accomplished.    □
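The mechanism of Theorem A3 is easy to reproduce numerically. The sketch below uses a synthetic diagonalizable matrix with one prescribed near-zero eigenvalue as a stand-in for Q (an assumption; it is not the paper's integration matrix):

```python
import numpy as np

# Illustration of Theorem A3: if Q has a near-zero eigenvalue, then Q^T Q maps
# some unit vector almost to zero, so Q has a near-zero singular value.
# Q is synthetic: diagonalizable with eigenvalues {1e-12, 0.5, 1.0, 2.0}.
rng = np.random.default_rng(2)
V = rng.standard_normal((4, 4))
Q = V @ np.diag([1e-12, 0.5, 1.0, 2.0]) @ np.linalg.inv(V)

sigma_min = np.linalg.svd(Q, compute_uv=False).min()
print(sigma_min)  # near zero, even though the other eigenvalues are O(1)
```

This matches the theorem's statement: the near-zero singular value appears regardless of how well separated the remaining eigenvalues are.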
The following theorem identifies the frequency at which k(A_{(n,M)}^T) peaks among all the frequencies, assuming that Q_T is diagonalizable, which is often the case, as observed in extensive numerical experiments.
Theorem A4.
Assume that Q_T is a diagonalizable matrix. If μ and ν do not vanish simultaneously, then
max_{n ∈ N_{N/2}} k(A_{(n,M)}^T) = k(A_{(N/2,M)}^T).
Proof. 
Since Q_T is a diagonalizable matrix, we can express it as
Q_T = V Λ V^{−1},
where V is a matrix of eigenvectors of Q_T, and Λ is a diagonal matrix containing its eigenvalues. Substituting (A12) into the collocation matrix gives
A_{(n,M)}^T = I_{M+1} + α_n V Λ V^{−1} = V (I_{M+1} + α_n Λ) V^{−1}.
The eigenvalues of A_{(n,M)}^T are given by the diagonal elements of I_{M+1} + α_n Λ, which are 1 + α_n ƛ_{0:M}. Therefore,
k(A_{(n,M)}^T) ≤ k(V) · |1 + α_n ƛ_max| / |1 + α_n ƛ_min|.
As |α_n| increases, the eigenvalues of A_{(n,M)}^T become more spread out. This increased spread generally leads to a higher ratio between |1 + α_n ƛ_max| and |1 + α_n ƛ_min| and, consequently, a higher condition number. The proof is accomplished by realizing that max_{n ∈ N_{N/2}} |α_n| = |α_{N/2}|.    □
Theorem A4 and the analysis conducted in Section 6 further reveal that k(A_{(n,M)}^T) grows with both N and λ.
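The eigenvalue-spread mechanism in the proof of Theorem A4 can be illustrated numerically. The matrix Q below is a synthetic diagonalizable stand-in with prescribed positive eigenvalues (an assumption), so the eigenvalues of I + αQ are 1 + αƛ_i and their spread grows with α:

```python
import numpy as np

# Illustration of Theorem A4's mechanism: the eigenvalues of A = I + alpha*Q
# are 1 + alpha*lambda_i, so their spread (and hence the condition number,
# up to cond(V)) grows with alpha. Q is synthetic, with eigenvalues
# {0.1, 0.5, 1.0} and a random eigenvector matrix V.
rng = np.random.default_rng(1)
lam = np.array([0.1, 0.5, 1.0])
V = rng.standard_normal((3, 3))
Q = V @ np.diag(lam) @ np.linalg.inv(V)

def eig_spread(alpha):
    ev = np.linalg.eigvals(np.eye(3) + alpha * Q)
    return np.abs(ev).max() / np.abs(ev).min()

print(eig_spread(1.0), eig_spread(10.0), eig_spread(100.0))  # increasing
```

Since |α_n| is largest at the Nyquist mode n = N/2, the worst-conditioned system in Stage 3 of Algorithm A1 is the one for that mode, exactly as the theorem states.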

Appendix B. Algorithmic Implementation of the FGIG Method

This appendix provides a detailed description of the algorithmic implementation of the FGIG method for solving the one-dimensional AD equation with periodic boundary conditions. The method proceeds in several key stages: (1) problem reformulation into an integral system, (2) spatial discretization via Fourier expansion, (3) temporal discretization using SG polynomials and quadrature, (4) solution of the resulting linear systems, and (5) recovery of the final solution. The complete procedure is summarized in Algorithm A1.
Algorithm A1 The FGIG method
Input:  L , T , μ , ν , u 0 ( x ) , g ( t ) , N , M , N 0 , λ
Output: Approximate solution u N , M ( x , t ) on the spatiotemporal grid
1: Stage 1: Precomputation
2: Set the spatial frequency vector: ω_k = 2πk/L for k = −N/2 : N/2.
3: Generate the SGG nodes {t̂_{M,0:M}^λ} and barycentric weights {ξ_{0:M}^{(λ)}} on Ω_T.
4: Load or precompute the first-order SGIM Q_T using Equation (26).
5: Stage 2: Spatial Discretization and Fourier Expansion
6: Sample u_0 at N_0 equidistant points: u_{0,j} = u_0(jL/N_0), j = 0 : N_0 − 1.
7: Compute the DFT coefficients û_{0,k} for k ∈ K_{N_0} via Equation (24).
8: Stage 3: Construct and Solve the Linear Systems
9: for n = 1 to N/2 do ▹ Solve for the first N/2 modes
10:     α_n ← ω_n(νω_n + μi)
11:     Assemble the collocation matrix: A_{(n,M)}^T = I_{M+1} + α_n Q_T
12:     Solve the linear system: A_{(n,M)}^T ψ̃_{n,0:M} = û_{0,n} 1_{M+1}
13: end for
14: Stage 4: Coefficient Recovery via Symmetry
15: for n = 1 to N/2 do
16:     ψ̃_{−n,0:M} ← ψ̃_{n,0:M}* ▹ Conjugate symmetry
17: end for
18: ψ̃_{0,0:M} ← 2 Σ_{n=1}^{N/2} ℜ(ψ̃_{n,0:M}) ▹ Compute the zeroth mode
19: Stage 5: Solution Reconstruction
20: Construct the spatial grid: x_j = jL/N, j = 0 : N − 1
21: Ψ ← [ψ̃_{−N/2,0:M}; ψ̃_{−N/2+1,0:M}; …; ψ̃_{N/2,0:M}] ▹ (N + 1) × (M + 1) matrix
22: E ← exp(i [x_0, …, x_{N−1}]ᵀ [ω_{−N/2}, …, ω_{N/2}]) ▹ N × (N + 1) matrix
23: U_offset ← ℜ(E Ψ) ▹ Offset solution on the grid
24: G ← [g(t̂_{M,0}^λ), …, g(t̂_{M,M}^λ)]
25: U_{N,M} ← U_offset + 1_N G ▹ Final solution on the grid
26: return U_{N,M}

Appendix B.1. Algorithm Stages in Detail

Stage 1: Precomputation.
The algorithm begins by defining the fundamental spatial frequencies (ω_k). The temporal discretization is set up by generating the set of M + 1 SGG nodes, {t̂_{M,0:M}^λ}, and their corresponding barycentric weights, {ξ_{0:M}^{(λ)}}, on the interval Ω_T for a given parameter λ > −1/2. The first-order SGIM, Q_T, is precomputed. This matrix allows for the approximation of integrals from 0 to any SGG node via a matrix–vector product and is constructed by scaling the standard Gegenbauer integration matrix Q on Ω_{−1,1} by T/2, as shown in Equation (26).
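For readers implementing Stage 1, the SGG nodes and weights can be generated from a standard Gauss–Gegenbauer rule and shifted to [0, T]. The sketch below relies on scipy.special.roots_gegenbauer, which uses the weight (1 − x²)^{λ−1/2} on [−1, 1]; the function name sgg_rule and the sanity check are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.special import roots_gegenbauer

def sgg_rule(M, lam, T):
    """Shifted Gegenbauer-Gauss rule on [0, T] (a sketch of Stage 1).

    roots_gegenbauer returns the Gauss-Gegenbauer nodes/weights on [-1, 1]
    for the weight (1 - x^2)**(lam - 1/2), valid for lam > -1/2; the affine
    map x -> T*(x + 1)/2 shifts the rule to [0, T].
    """
    x, w = roots_gegenbauer(M + 1, lam)
    return T * (x + 1) / 2, (T / 2) * w

# For lam = 1/2 the weight is identically 1, so the shifted rule integrates
# plain polynomials exactly up to degree 2M + 1, e.g. int_0^T t dt = T^2/2:
t, w = sgg_rule(4, 0.5, 0.2)
print(abs(np.sum(w * t) - 0.2**2 / 2))  # essentially zero
```

For other admissible λ the same map applies; only the weight function against which the rule is exact changes.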
Stage 2: Spatial Discretization and Fourier Expansion.
The initial condition u_0(x) is sampled at N_0 equidistant points in the spatial domain Ω_L. The DFT is applied to these samples to compute the Fourier coefficients û_{0,k} of the high-resolution (N_0-term) interpolant of u_0, as specified by Equation (24). Using a higher resolution (N_0 > N) for this step often improves the accuracy of the right-hand sides of the subsequent linear systems.
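Stage 2 amounts to a single FFT call. In the sketch below, the 1/N_0 normalization is an assumption chosen to match the interpolatory convention u(x) ≈ Σ_k û_k e^{iω_k x}; the initial condition of test problem 2 is used so the result is easy to check:

```python
import numpy as np

# A minimal sketch of Stage 2: sample the initial condition and take its DFT.
L, N0 = 2.0, 24
xj = np.arange(N0) * L / N0        # equidistant samples on [0, L)
u0 = np.sin(np.pi * xj)            # initial condition of test problem 2
u_hat = np.fft.fft(u0) / N0        # DFT coefficients u_hat_{0,k}

# sin(pi*x) occupies only the modes k = +/-1: u_hat[1] = -i/2, u_hat[-1] = i/2
```

With numpy's FFT ordering, index k and index −k (i.e., N_0 − k) hold the positive- and negative-frequency coefficients, respectively.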
Stage 3: Construct and Solve the Linear Systems.
The core of the algorithm involves solving the decoupled system of integral equations given by Equation (25). For each positive Fourier mode n = 1 : N/2, the complex parameter α_n = ω_n(νω_n + μi) is computed. The collocation matrix A_{(n,M)}^T = I_{M+1} + α_n Q_T is then assembled. This matrix is dense and non-symmetric. The linear system (29) is solved for the vector ψ̃_{n,0:M}, which contains the values of the nth Fourier coefficient of the SSD at all the SGG time nodes. These N/2 systems are independent and can be solved in parallel to reduce wall-clock time.
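A single-mode version of Stage 3 can be sketched as follows. Since the paper's precomputed SGIM Q_T is not reproduced here, a generic polynomial integration matrix on Chebyshev nodes stands in for it (an assumption for illustration only); the collocation system ψ(t_i) + α_n ∫_0^{t_i} ψ = û_{0,n} is equivalent to the mode ODE ψ′ = −α_n ψ with ψ(0) = û_{0,n}, which provides an exact reference:

```python
import numpy as np

def integration_matrix(s):
    # Row i maps nodal values f(s_0..s_M) to the integral of the degree-M
    # interpolant from -1 to s_i (monomial basis; fine for moderate M).
    k = np.arange(len(s))
    V = s[:, None] ** k                                      # Vandermonde
    P = (s[:, None] ** (k + 1) - (-1.0) ** (k + 1)) / (k + 1)
    return P @ np.linalg.inv(V)

M, T = 12, 0.2
s = -np.cos(np.pi * np.arange(M + 1) / M)    # Chebyshev stand-ins for SGG nodes
t = T * (s + 1) / 2                          # shifted time nodes on [0, T]
Q_T = (T / 2) * integration_matrix(s)        # integrates from 0 to t_i

# Mode n = 1 of test problem 1 (mu = 0, nu = 1, L = 2, so omega_1 = pi):
mu, nu, omega_n, u_hat_0n = 0.0, 1.0, np.pi, -0.5j
alpha_n = omega_n * (nu * omega_n + 1j * mu)

A = np.eye(M + 1) + alpha_n * Q_T
psi = np.linalg.solve(A, u_hat_0n * np.ones(M + 1))

# The integral equation is equivalent to psi' = -alpha_n*psi, psi(0) = u_hat:
err = np.max(np.abs(psi - u_hat_0n * np.exp(-alpha_n * t)))
print(err)  # spectrally small
```

The near-machine-precision agreement for this smooth mode mirrors the exponential temporal convergence reported in Section 8.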
Stage 4: Coefficient Recovery via Symmetry.
Thanks to the conjugate symmetry of the Fourier coefficients of real-valued solutions (Theorem 1), the coefficients for the negative modes (n = −N/2 : −1) are obtained by conjugating the coefficients of the corresponding positive modes. The zeroth-mode coefficient is computed efficiently from the real parts of the first N/2 coefficients via Equation (44), ensuring that the integral condition (11) is satisfied.
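The symmetry exploited in Stage 4 is the familiar conjugate symmetry of the DFT of real-valued data, easily checked numerically:

```python
import numpy as np

# For any real-valued sample vector, the DFT obeys u_hat[-n] = conj(u_hat[n]),
# so only the non-negative modes need to be stored or solved for.
rng = np.random.default_rng(0)
u = rng.standard_normal(8)
u_hat = np.fft.fft(u)
print(np.allclose(u_hat[-3], np.conj(u_hat[3])))  # True
```

This halves both storage and the number of linear solves in Stage 3, since only the positive modes are computed directly.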
Stage 5: Solution Reconstruction.
The approximate solution u_{N,M}(x,t) is reconstructed on the computational grid. The spatial grid is defined by x_j = jL/N for j = 0 : N − 1. The Fourier coefficients are arranged into a matrix Ψ. The solution is then synthesized by evaluating the Fourier series (10) via the matrix multiplication U_offset = ℜ(E Ψ), where the matrix E contains the complex exponentials. Finally, the boundary function g(t), evaluated at the SGG nodes, is added to complete the solution according to Equation (8).
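Stage 5 is a single matrix product. The toy sketch below reconstructs sin(πx) from its two nonzero Fourier modes; the coefficient matrix Ψ is filled by hand rather than taken from a solve (an assumption for illustration):

```python
import numpy as np

# Evaluate the truncated Fourier series on the spatial grid as one matrix
# product, U = Re(E @ Psi), with E[j, k] = exp(i * omega_k * x_j).
L, N, M = 2.0, 4, 3
x = np.arange(N) * L / N
k = np.arange(-N // 2, N // 2 + 1)
omega = 2 * np.pi * k / L
E = np.exp(1j * np.outer(x, omega))          # N x (N + 1)

Psi = np.zeros((N + 1, M + 1), dtype=complex)
# sin(pi*x) lives in modes k = +/-1 with coefficients -i/2 and +i/2,
# held constant across the M + 1 time nodes for this toy example:
Psi[k == 1, :] = -0.5j
Psi[k == -1, :] = 0.5j

U = (E @ Psi).real                           # columns = solution snapshots
print(U[:, 0])                               # matches sin(pi * x) on the grid
```

In the full method, adding the row vector of boundary values g at the SGG nodes to every spatial row completes the reconstruction.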

Appendix B.2. Computational Complexity and Implementation Notes

The computational cost of Algorithm A1 is dominated by two steps:
(i) Solving the Linear Systems: solving each of the N/2 systems of size M + 1 with a direct solver typically costs O(M³), so the total cost of this stage is O(NM³).
(ii) Solution Reconstruction: the matrix multiplication required to evaluate the Fourier series on the grid costs O(N²M).
The overall complexity is therefore O ( max ( N M 3 , N 2 M ) ) . For typical applications where high spectral accuracy allows for low N and M values, this cost is manageable. The independent nature of the linear systems in Stage 3 makes the algorithm highly amenable to parallelization, potentially reducing the effective cost of that stage to O ( M 3 ) .
The algorithm is implemented efficiently in MATLAB R2023b. Key implementation details include the use of vectorization for operations like the DFT and the solution synthesis, and the precomputation of constant matrices, such as Q_T and E, to avoid redundant calculations. The barycentric Lagrangian interpolation formula (35) is used for stable evaluation of the solution at times not coinciding with the SGG nodes.
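For completeness, a generic barycentric Lagrange evaluator of the kind referred to by formula (35) can be sketched as follows; the weights below are the generic barycentric weights for arbitrary nodes, not the paper's closed-form SGG weights (an assumption):

```python
import numpy as np

def bary_eval(t_nodes, f_nodes, t):
    """Evaluate the polynomial interpolant of (t_nodes, f_nodes) at t using
    the (second) barycentric formula, which is numerically stable."""
    w = 1.0 / np.array([np.prod(t_nodes[j] - np.delete(t_nodes, j))
                        for j in range(len(t_nodes))])
    d = t - t_nodes
    if np.any(d == 0):                      # t coincides with a node
        return f_nodes[np.argmin(np.abs(d))]
    c = w / d
    return np.sum(c * f_nodes) / np.sum(c)

# Interpolate e^t from 5 nodes on [0, 0.2] and evaluate off the grid:
t_nodes = np.linspace(0.0, 0.2, 5)
f = np.exp(t_nodes)
print(abs(bary_eval(t_nodes, f, 0.13) - np.exp(0.13)))  # small
```

In the FGIG setting, the nodes would be the SGG time nodes and the weights the precomputed ξ_{0:M}^{(λ)}, which avoids the O(M²) weight computation above.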
This algorithm provides a complete, efficient, and reproducible recipe for implementing the FGIG method, enabling the high-accuracy solution of periodic advection–diffusion problems without traditional time stepping.

References

  1. Appadu, A.R.; Lubuma, J.M.; Mphephu, N. Computational study of three numerical methods for some linear and nonlinear advection-diffusion-reaction problems. Prog. Comput. Fluid Dyn. Int. J. 2017, 17, 114–129. [Google Scholar] [CrossRef]
  2. Khalsaraei, M.M.; Jahandizi, R.S. Efficient explicit nonstandard finite difference scheme with positivity-preserving property. Gazi Univ. J. Sci. 2017, 30, 259–268. [Google Scholar]
  3. Al-khafaji, F.; Al-Zubaidi, H.A. Numerical modeling of instantaneous spills in one-dimensional river systems. Nat. Environ. Pollut. Technol. 2024, 23, 2157–2166. [Google Scholar] [CrossRef]
  4. Hwang, S.; Son, S. An efficient HLL-based scheme for capturing contact-discontinuity in scalar transport by shallow water flow. Commun. Nonlinear Sci. Numer. Simul. 2023, 127, 107531. [Google Scholar] [CrossRef]
  5. Solis, F.J.; Gonzalez, L.M. A numerical approach for a model of the precancer lesions caused by the human papillomavirus. J. Differ. Equ. Appl. 2017, 23, 1093–1104. [Google Scholar] [CrossRef]
  6. Sun, H.; Zhang, J. A high-order compact boundary value method for solving one-dimensional heat equations. Numer. Methods Partial. Differ. Equ. Int. J. 2003, 19, 846–857. [Google Scholar] [CrossRef]
  7. Jena, S.R.; Senapati, A. One-dimensional heat and advection-diffusion equation based on improvised cubic B-spline collocation, finite element method and Crank-Nicolson technique. Int. Commun. Heat Mass Transf. 2023, 147, 106958. [Google Scholar] [CrossRef]
  8. Cerfontaine, B.; Radioti, G.; Collin, F.; Charlier, R. Formulation of a 1D finite element of heat exchanger for accurate modelling of the grouting behaviour: Application to cyclic thermal loading. Renew. Energy 2016, 96, 65–79. [Google Scholar] [CrossRef]
  9. Wang, S.; Yuan, G. Discrete strong extremum principles for finite element solutions of diffusion problems with nonlinear corrections. Appl. Numer. Math. 2022, 174, 1–16. [Google Scholar] [CrossRef]
  10. Wang, S.; Yuan, G. Discrete strong extremum principles for finite element solutions of advection-diffusion problems with nonlinear corrections. Int. J. Numer. Methods Fluids 2024, 96, 1990–2005. [Google Scholar] [CrossRef]
  11. Sejekan, C.B.; Ollivier-Gooch, C.F. Improving finite-volume diffusive fluxes through better reconstruction. Comput. Fluids 2016, 139, 216–232. [Google Scholar] [CrossRef]
  12. Chernyshenko, A.; Olshahskii, M.; Vassilevski, Y. A hybrid finite volume—finite element method for modeling flows in fractured media. In Finite Volumes for Complex Applications VIII-Hyperbolic, Elliptic and Parabolic Problems: FVCA 8, Lille, France, June 2017 8; Springer: Berlin/Heidelberg, Germany, 2017; pp. 527–535. [Google Scholar]
  13. Kramarenko, V.; Nikitin, K.; Vassilevski, Y. A nonlinear correction FV scheme for near-well regions. In Finite Volumes for Complex Applications VIII-Hyperbolic, Elliptic and Parabolic Problems: FVCA 8, Lille, France, June 2017 8; Springer: Berlin/Heidelberg, Germany, 2017; pp. 507–516. [Google Scholar]
  14. Mei, D.; Zhou, K.; Liu, C.-H. Unified finite-volume physics informed neural networks to solve the heterogeneous partial differential equations. Knowl.-Based Syst. 2024, 295, 111831. [Google Scholar] [CrossRef]
  15. Cardone, A.; D’Ambrosio, R.; Paternoster, B. Exponentially fitted IMEX methods for advection–diffusion problems. J. Comput. Appl. Math. 2017, 316, 100–108. [Google Scholar] [CrossRef]
  16. Bokanowski, O.; Simarmata, G. Semi-Lagrangian discontinuous Galerkin schemes for some first-and second-order partial differential equations. ESAIM Math. Model. Numer. Anal. 2016, 50, 1699–1730. [Google Scholar] [CrossRef]
  17. Bakhtiari, A.; Malhotra, D.; Raoofy, A.; Mehl, M.; Bungartz, H.-J.; Biros, G. A parallel arbitrary-order accurate amr algorithm for the scalar advection-diffusion equation. In Proceedings of the SC’16: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Salt Lake City, UT, USA, 13–18 November 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 514–525. [Google Scholar]
  18. Benedetto, M.F.; Berrone, S.; Borio, A.; Pieraccini, S.; Scialo, S. Order preserving SUPG stabilization for the virtual element formulation of advection–diffusion problems. Comput. Methods Appl. Mech. Eng. 2016, 311, 18–40. [Google Scholar] [CrossRef]
  19. O’Sullivan, S. Runge–Kutta–Gegenbauer explicit methods for advection-diffusion problems. J. Comput. Phys. 2019, 388, 209–223. [Google Scholar] [CrossRef]
  20. Greengard, L. Spectral integration and two-point boundary value problems. SIAM J. Numer. Anal. 1991, 28, 1071–1080. [Google Scholar] [CrossRef]
  21. Bondarenco, M.; Gamazo, P.; Ezzatti, P. A comparison of various schemes for solving the transport equation in many-core platforms. J. Supercomput. 2017, 73, 469–481. [Google Scholar] [CrossRef]
  22. Fornberg, B. A Practical Guide to Pseudospectral Methods; Cambridge University Press: Cambridge, UK, 1996; Volume 1. [Google Scholar]
  23. Kumar, S.; Pandey, P.; Das, S. Gegenbauer wavelet operational matrix method for solving variable-order non-linear reaction–diffusion and Galilei invariant advection–diffusion equations. Comput. Appl. Math. 2019, 38, 162. [Google Scholar] [CrossRef]
  24. Nagy, A.; Issa, K. An accurate numerical technique for solving fractional advection–diffusion equation with generalized Caputo derivative. Z. Angew. Math. Phys. 2024, 75, 164. [Google Scholar] [CrossRef]
  25. Elgindy, K.T. New optimal periodic control policy for the optimal periodic performance of a chemostat using a Fourier–Gegenbauer-based predictor-corrector method. J. Process Control 2023, 127, 102995. [Google Scholar] [CrossRef]
  26. Elgindy, K.T. Numerical solution of nonlinear periodic optimal control problems using a Fourier integral pseudospectral method. J. Process Control 2024, 144, 103326. [Google Scholar] [CrossRef]
  27. Elgindy, K.T. Optimal control of a parabolic distributed parameter system using a fully exponentially convergent barycentric shifted Gegenbauer integral pseudospectral method. J. Ind. Manag. Optim. 2018, 14, 473. [Google Scholar] [CrossRef]
  28. Elgindy, K.T. High-order numerical solution of second-order one-dimensional hyperbolic telegraph equation using a shifted Gegenbauer pseudospectral method. Numer. Methods Partial. Differ. Equ. 2016, 32, 307–349. [Google Scholar] [CrossRef]
  29. Al-Fhaid, A.S.; Serra-Capizzano, S.; Sesana, D.; Ullah, M.Z. Singular-value (and eigenvalue) distribution and Krylov preconditioning of sequences of sampling matrices approximating integral operators. Numer. Linear Algebra Appl. 2014, 21, 722–743. [Google Scholar] [CrossRef]
  30. Axelsson, O.; Karátson, J.; Magoulès, F. Robust superlinear Krylov convergence for complex noncoercive compact-equivalent operator preconditioners. SIAM J. Numer. Anal. 2023, 61, 1057–1079. [Google Scholar] [CrossRef]
  31. Axelsson, O.; Lindskog, G. On the rate of convergence of the preconditioned conjugate gradient method. Numer. Math. 1986, 48, 499–523. [Google Scholar] [CrossRef]
  32. Jena, S.R.; Gebremedhin, G.S. Computational technique for heat and advection–diffusion equations. Soft Comput. 2021, 25, 11139–11150. [Google Scholar] [CrossRef]
  33. Demir, D.D.; Bildik, N. The numerical solution of heat problem using cubic B-splines. Appl. Math. 2012, 2, 131–135. [Google Scholar]
  34. Mohebbi, A.; Dehghan, M. High-order compact solution of the one-dimensional heat and advection–diffusion equations. Appl. Math. Model. 2010, 34, 3071–3084. [Google Scholar] [CrossRef]
  35. Gottlieb, S.; Wang, C. Stability and convergence analysis of fully discrete Fourier collocation spectral method for 3-D viscous Burgers’ equation. J. Sci. Comput. 2012, 53, 102–128. [Google Scholar] [CrossRef]
  36. Cheng, K.; Wang, C. Long time stability of high order multistep numerical schemes for two-dimensional incompressible Navier–Stokes equations. SIAM J. Numer. Anal. 2016, 54, 3123–3144. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the FGIG Algorithm.
Figure 2. The distribution of the eigenvalues (blue circles) and the extreme singular values (red x marks) of Q_T (first row) and A_{(1,M)}^T (second row) for M = 4 and λ = −0.4, −0.2, 0:0.5:2. All the plots were generated using L = 2, T = 0.2, μ = 0, ν = 1, and N = 4.
Figure 3. The distribution of the eigenvalues (blue circles) and the extreme singular values (red x marks) of Q_T (first row) and A_{(1,M)}^T (second row) for M = 40 and λ = −0.4, −0.2, 0:0.5:2. All the plots were generated using L = 2, T = 0.2, μ = 0, ν = 1, and N = 4.
Figure 4. The distribution of the eigenvalues (blue circles) and the extreme singular values (red x marks) of Q_T (first row) and A_{(1,M)}^T (second row) for M = 80 and λ = −0.4, −0.2, 0:0.5:2. All the plots were generated using L = 2, T = 0.2, μ = 0, ν = 1, and N = 4.
Figure 5. The values of the lowest singular values of Q for λ = −0.49, −0.499, and −0.4999 and M = 4, 40, and 80.
Figure 6. The values of k(Q_T) (first row) and k(A_{(1,M)}^T) (second row) for M = 4, 40, and 80 and λ = −0.4, −0.2, 0:0.5:2. All the plots were generated using L = 2, T = 0.2, μ = 0, ν = 1, and N = 4.
Figure 7. The values of k(A_{(N/2,M)}^T) for M = 4, 40, and 80 and λ = −0.4, −0.2, 0:0.5:2. All the plots were generated using L = 2, T = 0.2, μ = 1, ν = 1, and N = 50.
Figure 8. The values of k(A_{(N/2,M)}^T) for M = 4, 40, and 80 and λ = −0.4, −0.2, 0:0.5:2. All the plots were generated using L = 2, T = 0.2, μ = 100, ν = 1, and N = 50.
Figure 9. The values of k(A_{(N/2,M)}^T) for M = 4, 40, and 80 and λ = −0.4, −0.2, 0:0.5:2. All the plots were generated using L = 2, T = 0.2, μ = 1, ν = 50, and N = 50.
Figure 10. The values of k(A_{(N/2,M)}^T) for M = 4, 40, and 80 and λ = −0.4, −0.2, 0:0.5:2. All the plots were generated using L = 2, T = 0.2, μ = 100, ν = 100, and N = 50.
Figure 11. (a–h) illustrate the surfaces of the exact and approximate solutions, their spatial derivatives and approximations, and their associated errors for test problem 1 using M = 12, N = 22, N_0 = 24, and λ = −0.4. (i,j) depict the absolute error profiles and the discrete norm level at the terminal time, t = 0.2.
Figure 12. The surfaces of the logarithms of the discrete error norms for the FGIG and SA-FGIG methods at the terminal time, t = 0.2. The computations were performed for N = 4:2:22, N_0 = N + 2, M = 4:12, and λ = −0.4.
Figure 13. The surfaces depict the median execution times (in seconds) of the FGIG and SA-FGIG methods, averaged over multiple runs, for λ = −0.4, N = 4:2:52 and M = 4:52 (upper row), and N = 70:4:102 and M = 70:5:100 (lower row). N_0 = N + 2 in all the cases. Gaussian filtering was applied to smooth the surfaces. The upper left surface was computed without parallel computing, while the lower left utilizes parallel processing.
Figure 14. The condition number curve (left) and its logarithmic surface (right) of the collocation matrix of system (43), associated with the FGIM at the fundamental and Nyquist frequencies. The plots were generated using the parameters λ = 0.4 and N = M = 10:10:100; N₀ = N + 2 in all cases.
Figure 15. Panels (a–h) illustrate the surfaces of the exact and approximate solutions, their spatial derivatives and approximations, and their associated errors for test problem 1 using M = 60, N = 22, N₀ = 24, and λ = 0.4. Panels (i,j) depict the absolute error profiles and the discrete norm level at the terminal time, t = 10.
Figure 16. Panels (a–h) illustrate the surfaces of the exact and approximate solutions, their spatial derivatives and approximations, and their associated errors for test problem 2 using M = 12, N = 22, N₀ = 24, and λ = 0.4. Panels (i,j) depict the absolute error profiles and the discrete norm level at the terminal time, t = 1.
Figure 17. Panels (a–h) illustrate the surfaces of the exact and approximate solutions, their spatial derivatives and approximations, and their associated errors for test problem 2 using M = N = 22, N₀ = 24, and λ = 0.4. Panels (i,j) depict the absolute error profiles and the discrete norm level at the terminal time, t = 10.
Figure 18. Panels (a–h) illustrate the surfaces of the exact and approximate solutions, their spatial derivatives and approximations, and their associated errors for test problem 3 using M = 4, N = 16, N₀ = 18, and λ = 0.4. Panels (i,j) depict the absolute error profiles and the discrete norm level at the terminal time, t = 0.1.
Figure 19. Panels (a,b) depict the absolute error profiles and the discrete norm level at the extended terminal time (t = 10) for test problem 3 using M = 6, N = 16, N₀ = 18, and λ = 0.4.
Figure 20. Absolute error profiles and discrete norm level at the terminal time (t = 0.1) for a high Péclet number case (Pe = 1000). Parameters: N = 50, M = 50, N₀ = 52, L = 2, T = 0.1, μ = 50, ν = 0.1, and λ = 0.5.
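As a quick consistency check on the parameters listed for the high Péclet case, the stated values reproduce the reported Péclet number. The sketch below assumes the standard advection–diffusion definition Pe = μL/ν (advection speed times domain length divided by diffusivity), which is an assumption here rather than a definition quoted from the text:

```python
# Consistency check for the high-Peclet case, assuming Pe = mu * L / nu
# (advection speed x spatial interval length / diffusion coefficient).
mu = 50.0   # advection speed
L = 2.0     # spatial interval length
nu = 0.1    # diffusion coefficient

Pe = mu * L / nu
print(Pe)  # -> 1000.0, matching the Pe = 1000 stated in the caption
```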
Table 1. Comparison of test problem 1 at t = 0.1 and ν = 1. The errors are rounded to five significant digits.

| Method | Parameters | L∞ Error |
|---|---|---|
| [33] | Δt = 0.00001, h = 0.00625 | 2.3781 × 10⁻⁵ |
| [32] | Δt = 0.00001, h = 0.00625 | 3.1585 × 10⁻¹⁰ |
| [7] | Δt = 0.00001, h = 0.00625 | 8.8459 × 10⁻¹² |
| FGIM | N = 4, N₀ = 6, M = 8, λ = 0.4 | 1.6445 × 10⁻¹² |
| FGIM | N = 4, N₀ = 6, M = 10, λ = 0.4 | 7.21645 × 10⁻¹⁶ |
| SA-FGIM | N = 4, N₀ = 6 | 5.5511 × 10⁻¹⁷ |
Table 2. Comparison of test problem 2 at t = 1 and ν = 1/π². The errors are rounded to five significant digits.

| Method | Parameters | L∞ Error |
|---|---|---|
| Crank–Nicolson [6] | h = k = 0.00625 | 1.1 × 10⁻⁵ |
| Compact Boundary Value Method [6] | h = k = 0.00625 | 2.3 × 10⁻¹⁰ |
| [34] | h = k = 0.00625 | 2.3436 × 10⁻¹⁰ |
| FGIM | N = 4, N₀ = 6, M = 7, λ = 0.4 | 7.0965 × 10⁻¹¹ |
| FGIM | N = 4, N₀ = 6, M = 10, λ = 0.4 | 8.3267 × 10⁻¹⁶ |
| SA-FGIM | N = 4, N₀ = 6 | 5.5511 × 10⁻¹⁷ |
Table 3. Comparison of computational times and errors for all the test problems.

| Test Problem | Method | Parameters | Grid Size | Time (s) | DNE |
|---|---|---|---|---|---|
| 1 | [33] | Δt = 10⁻⁵, h = 0.00625 | 3,210,321 | – | 2.38 × 10⁻⁵ |
| 1 | [32] | Δt = 10⁻⁵, h = 0.00625 | – | – | 3.16 × 10⁻¹⁰ |
| 1 | [7] | Δt = 10⁻⁵, h = 0.00625 | 3,210,321 | – | 8.85 × 10⁻¹² |
| 1 | FGIM | N = 4, M = 8, λ = 0.4 | 36 | 7.51 × 10⁻⁵ | 1.64 × 10⁻¹² |
| 1 | FGIM | N = 4, M = 10, λ = 0.4 | 44 | 7.56 × 10⁻⁵ | 7.22 × 10⁻¹⁶ |
| 1 | SA-FGIM | N = 4 | 4 | 2.95 × 10⁻⁴ | 5.55 × 10⁻¹⁷ |
| 2 | Crank–Nicolson [6] | h = k = 0.00625 | 51,681 | – | 1.10 × 10⁻⁵ |
| 2 | Compact BVM [6] | h = k = 0.00625 | 51,681 | – | 2.30 × 10⁻¹⁰ |
| 2 | [34] | h = k = 0.00625 | 51,681 | – | 2.34 × 10⁻¹⁰ |
| 2 | FGIM | N = 4, M = 7, λ = 0.4 | 32 | 7.29 × 10⁻⁵ | 7.10 × 10⁻¹¹ |
| 2 | FGIM | N = 4, M = 10, λ = 0.4 | 44 | 8.05 × 10⁻⁵ | 8.33 × 10⁻¹⁶ |
| 2 | SA-FGIM | N = 4 | 4 | 2.96 × 10⁻⁴ | 5.55 × 10⁻¹⁷ |
| 3 | FGIM | N = 16, M = 4, λ = 0.4 | 80 | 2.30 × 10⁻⁴ | 2.31 × 10⁻³ |
| 3 | SA-FGIM | N = 16 | 16 | 1.45 × 10⁻³ | 1.87 × 10⁻³ |
Notes: Grid Size = Spatial Points × Temporal Points; DNE = Discrete Norm Error at the terminal time. Times are medians of multiple runs, measured in seconds. Dashes indicate data not available in the cited references or not verified due to lack of access to the publication. The final times for test problems 1–3 are assumed to be 0.1, 1, and 0.1, respectively, while the spatial interval lengths are assumed to be 2.
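The grid sizes quoted for the reference methods above follow from the stated discretization parameters via Grid Size = Spatial Points × Temporal Points. A minimal sketch, assuming uniform grids with length/h + 1 spatial nodes and T/Δt + 1 temporal nodes (endpoints included):

```python
def grid_size(length, h, T, dt):
    """Total nodes on a uniform space-time grid:
    (length/h + 1) spatial points times (T/dt + 1) temporal points."""
    nx = round(length / h) + 1  # spatial points, endpoints included
    nt = round(T / dt) + 1      # temporal points, endpoints included
    return nx * nt

# Test problem 1 references: interval length 2, h = 0.00625, T = 0.1, dt = 1e-5
print(grid_size(2, 0.00625, 0.1, 1e-5))   # -> 3210321 (321 x 10001)
# Test problem 2 references: interval length 2, h = k = 0.00625, T = 1
print(grid_size(2, 0.00625, 1, 0.00625))  # -> 51681 (321 x 161)
```

Both values match the 3,210,321 and 51,681 entries in Table 3, which supports the endpoint-inclusive node-count convention assumed here.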

