1. Introduction
The advection–diffusion (AD) equation is a fundamental PDE that governs the transport of a scalar quantity within a fluid medium. This scalar quantity may represent various physical properties, such as heat, pollutants, or chemical concentrations. The AD equation encapsulates the combined influence of advection, the transport driven by fluid flow, and diffusion, which arises from random molecular motion. It finds wide-ranging applications in fields such as fluid mechanics, environmental science, biophysics, and materials science. Accurate solutions to the AD equation are indispensable for critical applications, such as environmental monitoring and control, engineering design, climate modeling, and medicine. Consequently, the AD equation is a crucial tool for understanding and predicting the behavior of various natural and engineered systems.
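For concreteness, the 1D AD equation for a scalar field u(x, t) may be written in the canonical form below; the symbols a and ν are generic placeholders here, as the paper's own notation is fixed in Section 3:

∂u/∂t + a ∂u/∂x = ν ∂²u/∂x²,

where a denotes the advection velocity and ν > 0 the diffusion coefficient. The dimensionless Péclet number, Pe = aL/ν for a characteristic length L, quantifies the relative strength of advection and diffusion: low Pe marks diffusion-dominated problems, whereas high Pe marks advection-dominated regimes.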
Over the past decade, numerous numerical methods have been developed, each addressing specific stability, accuracy, or computational constraints. Appadu et al. [1] analyzed upwind and non-standard finite difference schemes for AD reaction problems, noting their dissipative properties. Khalsaraei and Jahandizi [2] evaluated and improved positivity in finite difference methods. Al-khafaji and Al-Zubaidi [3] developed a 1D model for instantaneous spills in river systems by numerically solving the advection–dispersion equation together with the shallow water equations using finite difference methods. Hwang and Son [4] utilized a second-order, well-balanced, positivity-preserving central-upwind scheme based on the finite volume method to create an efficient numerical scheme that accurately captures contact discontinuities in scalar transport within a shallow water flow environment. Solis and Gonzalez [5] presented non-standard finite difference schemes for modeling infection dynamics. Sun and Zhang [6] proposed high-order compact finite difference schemes, but these require dense grids, increasing computational costs, especially in diffusion-dominated regimes. Jena and Senapati [7] combined a quartic-order cubic B-spline with Crank–Nicolson and FEM, achieving high accuracy but requiring fine meshes. Cerfontaine et al. [8] applied FEM to borehole heat exchangers, relying on explicit boundary enforcement, which limits scalability for periodic problems. Wang and Yuan [9] introduced nonlinear correction techniques ensuring discrete extremum principles, and presented a nonlinear correction technique for finite element methods applied to advection–diffusion problems [10]. Sejekan and Ollivier-Gooch [11] enhanced diffusive flux accuracy. Chernyshenko et al. [12] combined the finite volume method and FEM for fractured media. Kramarenko et al. [13] developed a nonlinear correction for subsurface flows. Mei et al. [14] proposed a unified finite volume PINN (UFV-PINN) approach to improve the accuracy and efficiency of solving heterogeneous PDEs using PINNs; UFV-PINN combines sub-domain decomposition, finite volume discretization, and conventional numerical solvers within the PINN framework. Cardone et al. [15] integrated exponential fitting with IMEX time integration for AD problems, offering high accuracy but struggling with oscillatory solutions. Bokanowski and Simarmata [16] developed high-order, unconditionally stable semi-Lagrangian schemes with discontinuous Galerkin elements, effective for first- and second-order PDEs but reliant on iterative time stepping. Bakhtiari et al. [17] introduced a parallelizable semi-Lagrangian scheme, yet it lacks the integral equation framework of the FGIG method proposed herein. Despite these advancements, significant challenges remain: iterative time stepping in finite difference [6], FEM [8], and semi-Lagrangian methods [16] accumulates temporal errors, particularly in diffusion-dominated regimes; high accuracy often demands dense meshes [7], escalating computational costs; stability issues persist at low Péclet numbers or with oscillatory solutions, as noted by Benedetto et al. [18] for FEM and O'Sullivan [19] for Runge–Kutta methods; few methods, except for spectral integral approaches [20], leverage integral equation formulations for periodic boundary conditions, limiting accuracy and stability; and sequential computations in traditional methods [21] hinder scalability, unlike parallelizable PS methods [22]. The FGIG method addresses these challenges by combining Fourier series and Gegenbauer polynomials in an integral equation framework, eliminating time stepping, ensuring stability at low Péclet numbers, and enabling parallel computation.
This study introduces the FGIG method for solving the 1D AD equation with periodic boundary conditions, which is ideal for diffusion-dominated cases due to its stability and minimal temporal error accumulation (see Section 6 and Section 8). The FGIG method integrates Fourier series for spatial periodicity and Gegenbauer polynomials for temporal integration, achieving high accuracy, efficiency, and stability. Unlike traditional Galerkin methods, which yield ODEs requiring time integration, the FGIG method produces integral equations that are solved directly, without separately enforcing initial or boundary conditions. Key features include: (i) no time stepping, reducing temporal errors (Section 8); (ii) direct solution of integral equations, lowering computational cost (Section 5 and Section 8); and (iii) a barycentric SGG quadrature for stable, efficient integral evaluation.
This work contributes: (i) a novel FGIG method, combining Fourier series and Gegenbauer polynomials for the 1D AD equation; (ii) a semi-analytical solution for high accuracy; (iii) exponential convergence for smooth solutions, outperforming polynomial-based methods; (iv) stability at low Péclet numbers, within a suitable Gegenbauer parameter range, under oscillatory conditions; (v) a parallelizable formulation for efficient large-scale computation; (vi) elimination of temporal errors via integral equations; (vii) improved accuracy with a barycentric SGG quadrature; (viii) implicit incorporation of periodic boundary conditions via a Fourier basis; and (ix) semi-analytical solutions for result verification. This is the first method integrating Fourier series and Gegenbauer polynomials into a Galerkin framework for the 1D AD equation, distinct from prior uses of Gegenbauer polynomials [19,23,24]. This synthesis not only fills a methodological gap in the literature but also establishes a new paradigm for solving periodic PDEs with exponential accuracy.
The remainder of this paper is structured as follows. Section 3 introduces the problem under study. Section 4 presents the FGIG method, describing its formulation and implementation. In Section 5, we analyze the computational cost and time complexity of the FGIG method. Section 6 presents a rigorous analysis of the error and stability characteristics of the method. Section 7 derives a semi-analytical solution within the FGIG framework, offering a highly accurate alternative for specific cases. Section 8 provides computational results that validate the accuracy and efficiency of the FGIG method. Section 9 and Section 10 conclude the paper with a summary of the key findings, a brief discussion of the limitations of the FGIG method, and potential future research directions. Mathematical proofs are presented in Appendix A. Section 2 introduces the sets of symbols and notations used throughout the paper to represent complex mathematical formulae and expressions concisely; these notations generally follow the writing conventions established in [25,26].
2. Symbols and Notations
This section summarizes the symbols and notations used throughout the paper. Unless otherwise specified, all the functions are assumed to be sufficiently smooth, and all the domains are subsets of the real line or the complex plane.
5. Computational Cost and Time Complexity
In Section 4, we demonstrated that only N Fourier coefficients of the series (10) need to be determined, as the remaining coefficient can be computed directly using Equation (11). Remarkably, the following theorem demonstrates that only about the first half of these coefficients need to be determined, since the remaining coefficients can be efficiently computed using a conjugate symmetry condition.
Theorem 1. The time-dependent Fourier coefficients satisfy a conjugate symmetry condition: each coefficient is the complex conjugate of the coefficient with mirrored index.

Proof. Take the complex conjugate of the nth system (29). Since the coefficient matrices involved are real, conjugation acts only on the Fourier data, whose entries obey the same conjugate symmetry; the conjugated system therefore coincides with the system governing the coefficient of mirrored index, from which the proof is established. □
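The symmetry asserted by Theorem 1 is the familiar conjugate symmetry of the discrete Fourier coefficients of real-valued data, and it is easy to verify numerically. The following minimal sketch uses NumPy's FFT conventions, which need not coincide with the indexing of the series (10):

```python
import numpy as np

# Numerical check of the conjugate symmetry underlying Theorem 1 (illustrative
# only; numpy.fft index conventions, not necessarily those of Equation (10)).
N = 16
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.exp(np.sin(x))                 # a smooth, real, periodic sample
c = np.fft.fft(u) / N                 # discrete Fourier coefficients

# For real data, the coefficient of mirrored index is the complex conjugate:
for n in range(1, N // 2):
    assert np.isclose(c[N - n], np.conj(c[n]))
print("conjugate symmetry verified to machine precision")
```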
Theorem 1 has at least two important consequences: (i) we only need to solve the reduced collection of linear systems (43) for the first half of the Fourier coefficients, instead of solving all N systems (29), since the remaining coefficients can be computed efficiently using Equation (40); and (ii) it further provides a more efficient alternative for computing the approximate solution through a formula that requires adding the real parts of only half the series terms of Equation (12).
Since the first-order barycentric Gegenbauer integration matrix is typically dense and non-symmetric, solving each system in (43) using highly optimized direct algorithms, such as LU decomposition, has a computational complexity of approximately O(M³). Consequently, the total computational cost of solving the systems (43) is O(NM³) to leading order. The computation of the collocated solution values via (33) requires a further dense matrix multiplication plus the scalar multiplications required by the Kronecker product; the total cost of this recovery step, including the cost of adding the resulting matrices, is of lower order than the solves for moderate N. This analysis shows that the overall cost of solving the linear systems and recovering the collocated approximate solution is approximately O(NM³).
If the collocated spatial derivative approximation is also required, the additional cost can be estimated as follows. First, observe that computing the Kronecker product in Equation (39) incurs a cost proportional to the number of entries of the resulting matrix, and the Hadamard product incurs a cost of the same order. The subsequent matrix multiplication between the resulting matrix and the reshaped matrix dominates these element-wise operations. Finally, scaling each entry of the resulting matrix by i is negligible compared to the matrix multiplication cost. Therefore, the total additional cost matches the computational cost order of recovering the collocated approximate solution.
Parallel computing can significantly reduce the overall wall-clock time required to solve the linear systems. Since the systems are independent of each other, they can be distributed across multiple cores or processors, allowing simultaneous computation. In particular, assuming that there are P available processing cores and that the systems are distributed evenly across these cores, the total computational time using parallel computing, T_P, is approximately

T_P ≈ T_s/P + T_o,    (45)

where T_s is the serial time needed to solve all the systems, and T_o is the additional time incurred by the parallel computing process itself from tasks such as communication, synchronization, and workload distribution. These overheads are generally negligible if the workload per system is sufficiently large. If the overheads are minimal, the computational time is reduced significantly to approximately T_s/P by Equation (45). It is important to understand here that parallel computing is not useful for small datasets, as the associated overhead can outweigh the potential gains from parallelization; in such cases, a standard sequential "for" loop may be more efficient. Note also that the constant nature of the integration matrix allows us to precompute and store it prior to executing the code for solving these systems, reducing the runtime overhead. Moreover, the numerical quadrature induced by the integration matrix through matrix–vector multiplication converges exponentially, enabling nearly exact approximations with relatively few quadrature nodes [28]. Consequently, the systems (44) can be solved efficiently and with high accuracy using relatively low M values, significantly improving the computational efficiency without compromising precision.
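Because the reduced systems are mutually independent, their distribution over cores is embarrassingly parallel. The sketch below illustrates the idea with Python's standard library; the matrices and right-hand sides are random stand-ins for the actual FGIG collocation systems:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def solve_one(system):
    A, b = system
    return np.linalg.solve(A, b)         # dense LU solve, O(M^3) per system

if __name__ == "__main__":
    M, half_N = 32, 32
    rng = np.random.default_rng(0)
    # Random stand-ins for the independent reduced systems (43):
    systems = [(np.eye(M) + 0.01 * rng.standard_normal((M, M)),
                rng.standard_normal(M)) for _ in range(half_N)]
    with ProcessPoolExecutor() as pool:  # spreads the systems across P cores
        coeffs = list(pool.map(solve_one, systems))
    print(len(coeffs), "independent systems solved in parallel")
```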
Remark 1. The FGIG method trades the sequential cost of time stepping for the cost of solving a global system. A key advantage of this reformulation is that it leads to well-conditioned linear systems. This good conditioning is a direct result of using a spectral integral formulation instead of a differential one and employing stable barycentric quadrature rules. Consequently, for the low to moderate values of M and N required for spectral accuracy in our problems, the systems can be solved directly and efficiently, without the need for specialized iterative techniques. The computational complexity of the direct solution of all the systems is approximately O(NM³), which is highly feasible for the scale of problems considered and is offset by the method's exponential convergence and lack of temporal error accumulation. Therefore, while specialized iterative techniques, such as preconditioned Krylov solvers, are a powerful tool for the ill-conditioned systems common in numerical differentiation, they are not necessary here. The excellent conditioning is a core strength of our integral-based approach.
6. Error and Stability Analyses
Let the truncation error of the truncated Fourier series expansion (10) be defined as usual. By Parseval's identity, the squared L²-norm of this error equals, up to a constant factor, the sum of the squared magnitudes of the omitted Fourier coefficients; cf. [26] (Proof of Theorem 4.2). Thus, the error depends on the decay of the omitted Fourier coefficients. For analytic solutions, the Fourier coefficients decay exponentially with increasing wavenumber, as shown by the following theorem:

Theorem 2 ([26], Theorems 4.1 and 4.2). Suppose that a sufficiently smooth, L-periodic function is approximated by its N-degree truncated Fourier series (10). Then the truncation error decays at a rate governed by the smoothness of the function; moreover, if the function is β-analytic, then the truncation error decays exponentially fast in N.

The rapid decay of Fourier coefficients for sufficiently smooth functions is a hallmark of diffusion-dominated problems, where the solution tends to smooth out over time. The FGIG method effectively exploits this property by employing a Fourier basis, enabling an accurate representation of the solution with relatively few terms. This contributes to the method's efficiency and accuracy in diffusion-dominated regimes.
For nonsmooth solutions, the coefficients decay at a polynomial rate depending on the degree of smoothness, as shown by the following theorem:

Theorem 3 ([25], Theorems A.1 and A.2). Suppose that an L-periodic function of finite smoothness is approximated by its N-degree truncated Fourier series (10). Then the truncation error decays only at an algebraic rate determined by the number of continuous derivatives of the function.

Discontinuities or shock features in u affect the coefficients and result in the well-known Gibbs phenomenon, where the truncated Fourier series near a discontinuity exhibits an overshoot or undershoot that does not diminish as the number of terms in the sum increases.
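The two decay regimes described by Theorems 2 and 3 are easy to observe numerically. The following sketch compares the Fourier coefficient magnitudes of an analytic periodic function with those of a square wave, whose jump discontinuities force the slow O(1/n) decay behind the Gibbs phenomenon:

```python
import numpy as np

# Fourier coefficient decay: exponential for an analytic periodic function
# versus O(1/n) for a discontinuous one (the Gibbs regime of Theorem 3).
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
samples = {
    "analytic (exp(cos x))": np.exp(np.cos(x)),
    "discontinuous (square wave)": np.sign(np.sin(x)),
}
for name, f in samples.items():
    c = np.abs(np.fft.fft(f))[: N // 2] / N
    print(name, " ".join(f"|c_{n}|={c[n]:.1e}" for n in (1, 5, 17, 63)))
# The analytic coefficients plunge below machine epsilon by n ~ 20, while the
# square-wave coefficients shrink only like 1/n along the odd modes.
```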
A crucial step in the implementation of the FGIG method lies in the solution of the linear system (43). To analyze the sources of error in solving this linear system, we must consider errors arising from numerical stability, discretization, and condition numbers, as well as how these depend on the parameters N, M, and λ. The right-hand side of the linear system comprises the nth Fourier interpolation coefficient of the data, which reflects how the data are approximated by a finite number of frequencies (N). The convergence rates of the interpolation error associated with Equation (23) are analyzed in the following two corollaries, specifically focusing on their dependence on the smoothness of the underlying function space:

Corollary 1 ([26], Corollary 4.1). Suppose that a sufficiently smooth function is approximated by the N-degree, L-periodic Fourier interpolant (23). Then the interpolation error decays at a rate governed by the smoothness of the function; moreover, if the function is β-analytic, then the interpolation error decays exponentially fast in N.

Corollary 2 ([25], Corollary A.1). Suppose that a function of finite smoothness is approximated by the N-degree, L-periodic Fourier interpolant (23). Then the interpolation error decays only at an algebraic rate determined by the degree of smoothness.

Now, we turn our attention to the left-hand side of the linear system (43), whose coefficient matrix mainly consists of the SGIM scaled by a constant factor. For sufficiently smooth functions, the error of the SGG quadrature induced by the SGIM can be described in closed form by the following theorem:

Theorem 4 ([28], Theorem 4.1). Let a sufficiently smooth function be interpolated by the SG polynomials at the SGG nodes. Then the induced quadrature error admits a closed-form representation in terms of a derivative of the integrand evaluated at intermediate points and the leading coefficient of the jth-degree SG polynomial.

The following theorem shows that the SG quadrature formula converges exponentially fast for sufficiently smooth functions. Its proof can be immediately derived from [25] (Theorem A.5) in the absence of domain partitioning.
Theorem 5 ([25], Theorem A.5). Suppose that the derivatives of the integrand grow at most geometrically, with a growth constant A independent of M, and that the assumptions of Theorem 4 hold true. Then there exist constants, depending on λ but independent of M, such that the SG quadrature truncation error is bounded by an exponentially decaying function of M.
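The exponential convergence asserted by Theorem 5 is easy to reproduce for the Legendre member of the Gegenbauer family, for which standard Gauss nodes and weights are readily available; the SGG rule of this paper generalizes this behavior to other admissible λ values:

```python
import numpy as np

# Exponential convergence of Gauss-type quadrature for a smooth integrand,
# illustrated with Gauss-Legendre nodes (a special case of the Gegenbauer
# family; the paper's SGG rule extends this to general lambda).
f = lambda t: np.exp(t) * np.cos(4.0 * t)
exact = (np.exp(1.0) * (np.cos(4.0) + 4.0 * np.sin(4.0))
         - np.exp(-1.0) * (np.cos(-4.0) + 4.0 * np.sin(-4.0))) / 17.0

for M in (4, 8, 12, 16, 20):
    t, w = np.polynomial.legendre.leggauss(M)
    err = abs(np.dot(w, f(t)) - exact)
    print(f"M = {M:2d}   quadrature error = {err:.2e}")
# The error decays roughly geometrically in M, reaching machine precision
# with ~16 nodes, which is why low M values suffice in the FGIG method.
```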
Besides the exponential convergence of the SG quadrature, the upper bounds on the rounding errors incurred in computing the elements of the qth-order barycentric Gegenbauer integration matrices—used in constructing the Gegenbauer quadrature rules—grow only modestly with the matrix dimensions, according to standard rounding-error analysis. Formula (26) shows immediately that the rounding errors in the SGIM are of the same asymptotic order as the rounding errors in the underlying Gegenbauer integration matrix, up to a constant factor. The scaling of the SGIM by a constant factor does not change the asymptotic order of the rounding error, though it could affect the constant factors involved. Nor does scalar multiplication affect the condition number, because it scales the norm of the matrix and the norm of its inverse by a factor and its reciprocal, respectively:

κ(cA) = ‖cA‖ ‖(cA)⁻¹‖ = |c| ‖A‖ · |c|⁻¹ ‖A⁻¹‖ = κ(A),  c ≠ 0.

Thus, the condition numbers of the SGIM and its scaled counterpart are identical.
Now, let us analyze the error due to rounding in the collocation matrix in (43). Assume that the scaled SGIM is perturbed by a small rounding-error matrix, and that the computed Fourier data also carry rounding errors. A first-order perturbation analysis then estimates the rounding error in the collocation matrix by the sum of these two contributions together with their higher-order product. These error terms contribute asymptotic rounding errors in each matrix element whose orders are set by the unit roundoff scaled by the corresponding matrix and data magnitudes, respectively. We can thus express the total asymptotic error in each matrix element as the larger of these contributions. Consequently, the asymptotic error in each element of the matrix–vector product appearing in the linear system (43) can be directly estimated by combining this elementwise bound with the size of the Fourier coefficients.
Notice how the size of the Fourier coefficients is critical in determining the error. The rate at which these coefficients decay as the wavenumber increases is influenced by the smoothness of the solution function u, as shown earlier in this section. In particular, for smooth functions, as is often the case at relatively low Péclet numbers, the Fourier coefficients decay exponentially with increasing wavenumber, and their norm remains relatively small, leading to a lower overall error, even as M or N increases. For nonsmooth solutions, however, especially those with discontinuities or sharp gradients, which often occur at relatively high Péclet numbers, the Fourier coefficients decay much more slowly. This slower decay means that the coefficients of the higher frequencies remain relatively large, and their contribution to the norm is significant. As a result, the error in the product can increase significantly. This shows that nonsmoothness often leads to slower convergence and higher numerical errors, as expected.
The overall error in the solution will depend not only on the error in the matrix–vector product, or on the relatively small errors in the discrete Fourier coefficients of the data, but also on the conditioning of the collocation matrix. The behavior of the condition number depends on the relative size of the scaling factor and the spectrum of the SGIM. If we list the eigenvalues of the SGIM, then the eigenvalues of the collocation matrix are shifted, scaled versions of them, with the shift induced by the identity and the scaling determined by the physical parameters and the mode number n. To partially analyze the effect of this shift-scaling operation on the conditioning of the collocation matrix, consider the lowest eigenvalues of the SGIM and of the collocation matrix, together with the extreme singular values of both matrices. The first rows of Figure 2, Figure 3 and Figure 4 display their distributions for increasing values of M and λ. Observe how the eigenvalues of the SGIM cluster gradually around 0 as the size of the matrix increases for all the values of λ. Moreover, increasing λ while holding M fixed tends to gradually spread the eigenvalues in the complex plane away from the origin. This shows that the smallest eigenvalue of the SGIM tends to zero as the matrix size grows. While a near-zero eigenvalue does not directly determine the exact value of the lowest singular value, it strongly indicates a relatively small lowest singular value, leading to a high condition number and potential numerical instability. In fact, Theorem A3 proves the existence of at least one near-zero singular value of the SGIM—not necessarily the lowest singular value—under the conditions stated there.
Figure 5 confirms this fact, where we can clearly see the rapid decay of the lowest singular values of the SGIM as its size grows. This analysis shows that the condition number of the SGIM grows as λ increases, with a higher growth rate as M increases. The first row in Figure 6 further supports this observation, showing a significant shift in the order of magnitude of the condition number for all M values as λ gradually approaches the upper end of the tested range. The figure also reveals that the condition number curve initially exhibits a near-L-shaped pattern for low M values: it rapidly increases as λ decreases below 0 but grows slowly as λ increases beyond 1. This latter growth rate increases gradually for higher M values, and the curve gradually transitions to a U-shaped pattern with a base lying roughly between these two values. It is striking to observe that the poor conditioning of the SGIM can be largely restored with a proper scaling followed by a shift by the identity matrix: this operation effectively moves the relevant eigenvalues from near 0 to near 1 for relatively small scaling factors. In particular, for a relatively small scaling factor, which occurs when the physical parameters and the mode number n are small, the scaled eigenvalues remain small in magnitude, especially for high M values, forcing the eigenvalues of the collocation matrix to cluster around 1. This can significantly decrease the ratio of the extreme singular values. Furthermore, if the scaling factor is very small, the collocation matrix is a near-identity matrix; hence, its condition number is close to 1, and the matrix is nearly perfectly well conditioned. For the sake of illustration, consider the datasets displayed in Figure 3 and Figure 4: at the fundamental frequency, we can readily verify that the condition number stays near unity for sufficiently high M values, since the eigenvalues of the SGIM cluster around the origin. Notice, however, that this result remains valid for low M values only if the scaling factor remains relatively small. The second rows in Figure 2, Figure 3 and Figure 4 show the distributions of the eigenvalues of the collocation matrices, where we observe their clustering around unity for increasing M values. The second row in Figure 6 displays the striking decay of the condition number for all the λ and M values, dropping by four orders of magnitude over the displayed range. Notice here that the conditioning deteriorates somewhat as λ continues to grow beyond roughly 1.5, and the degradation becomes relatively clear for high M values.
The above analysis assumes a small scaling factor for all the cases. When the scaling factor becomes relatively large, the condition number will depend on the interplay among the problem parameters. Holding the physical parameters fixed, the highest condition number occurs at the largest scaling magnitude, which occurs at the highest frequency; see Theorem A4. At this stage, increasing either physical parameter makes the eigenvalues more spread out in the complex plane, which gradually increases the condition number until a terminal value where it ceases to grow. This is because, in this limit, the collocation matrix is dominated by the scaled integration matrix; thus, its condition number approaches that of the integration matrix itself. This demonstrates that, for highly oscillatory solutions, where numerous Fourier terms are necessary to accurately represent the high-frequency oscillations, the condition number of the resulting linear system will be, at worst, comparable to that of the integration matrix. Overall, for higher n values, the effect of the advection and diffusion parameters becomes more pronounced, and the balance between these two constants is crucial in the sense that reducing their values can improve the conditioning of the linear system. Figure 7, Figure 8, Figure 9 and Figure 10 demonstrate their effect on the conditioning as their values grow large.
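The qualitative behavior described above can be reproduced with a toy stand-in for the integration matrix. In the sketch below, B is a simple rectangle-rule integration matrix (an assumption for illustration; the paper's SGIM is a dense barycentric Gegenbauer matrix), and the collocation matrix is modeled by the shifted-scaled form I − cB:

```python
import numpy as np

# Toy conditioning experiment: the eigenvalues of B cluster near 0 (they all
# equal the step h), so for small |c| the matrix I - c*B is a near-identity
# matrix with condition number ~ 1; large |c| degrades the conditioning.
M = 40
h = 1.0 / M
B = h * np.tril(np.ones((M, M)))      # rectangle-rule integration matrix
for c in (0.1, 1.0, 10.0, 100.0):
    A = np.eye(M) - c * B
    print(f"c = {c:6.1f}   cond(A) = {np.linalg.cond(A):.3e}")
```

For c = 0.1 the condition number is essentially 1, mirroring the clustering around unity seen in the second rows of Figure 2, Figure 3 and Figure 4, while larger c values spread the spectrum and inflate the condition number, as in Figure 7, Figure 8, Figure 9 and Figure 10.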
The above analysis emphasizes that the proper choice of the Gegenbauer parameter (λ) can significantly influence the stability and conditioning of the SGIM, which is crucial for the stability and convergence of spectral integration methods employing Gegenbauer polynomials in general. Besides our new findings here on the strong relationship between λ and the conditioning of the matrix, it is important to mention that the barycentric representation of the SGIM employed in this work generally improves the stability of Gegenbauer quadratures, since it mitigates numerical instabilities that can arise from the direct evaluation of Gegenbauer polynomials, especially for high degrees or extreme λ values. Since the stability characteristics of the SGIM are directly inherited from the Gegenbauer integration matrix, a suitable rule of thumb that can be drawn from this study, and verified by Figure 6, is to select λ within a modest interval bounded away from the lower admissible limit by a relatively small, positive parameter, provided the scaling factor is relatively small; Figure 7, Figure 8, Figure 9 and Figure 10 suggest shrinking this recommended interval when the scaling factor is larger, to maintain a relatively low condition number.
Besides the conditioning matter, we draw the attention of the reader to the important fact that the Gegenbauer weight function associated with the Gegenbauer integration matrix diminishes rapidly near the boundaries of the interval as λ increases, which forces the Gegenbauer quadrature to depend more heavily on the behavior of the integrand near the central part of the interval, increasing sensitivity to errors and rendering the quadrature more extrapolatory. Moreover, as λ increases, the Gegenbauer quadrature nodes cluster more toward the center of the interval. This means that the quadrature rule relies more on extrapolation rather than interpolation, making it more sensitive to perturbations in the function values and amplifying numerical errors. On the other hand, as λ approaches its lower admissible limit, the Gegenbauer polynomials grow rapidly in magnitude, leading to poor numerical stability; a small buffer parameter is often added to keep λ away from this singular endpoint and maintain numerical accuracy. For sufficiently smooth functions and high spectral expansion terms, the truncated expansion in the shifted Chebyshev quadrature is optimal for uniform-norm approximations of definite integrals, while the shifted Legendre quadrature is optimal for least-squares-norm approximations.
Remark 2 (Practical Selections of λ and M). The analysis in this section reveals that the Gegenbauer parameter (λ) significantly impacts both accuracy and stability. For general applications with a relatively small scaling factor, λ should be chosen within a modest interval bounded away from the lower admissible limit by a small, positive parameter; for larger scaling factors, this interval should be reduced to maintain low condition numbers and ensure numerical stability. Specifically, the shifted Chebyshev quadrature is optimal for uniform-norm approximations, and the shifted Legendre quadrature is optimal for least-squares-norm approximations. These choices provide the best balance between numerical stability and approximation accuracy for smooth solutions. On the other hand, the parameter M governs the precision of the SGG quadrature employed to discretize the integral system. Its value directly influences the accuracy of the collocation matrices and, consequently, that of the resulting linear system. For smooth, non-oscillatory functions, relatively low values are typically sufficient to achieve near-machine-precision accuracy. In contrast, higher M values are required when dealing with highly oscillatory or nonsmooth functions, where additional quadrature nodes are necessary to adequately resolve oscillations or capture irregular features. In practice, it is advisable to begin with modest M values (e.g., 12–16) and increase them only as dictated by the oscillatory nature or regularity of the underlying solution. For simulations extending over long time intervals, the temporal dynamics may also demand higher resolution. A practical heuristic is to start with a coarse grid and incrementally increase M until the solution profile or the discrete norm error (65) stabilizes within the desired tolerance; a sketch of this refinement loop is given below.
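The refinement heuristic of Remark 2 can be sketched as follows; solve_fgig is a hypothetical driver, assumed to return the FGIG solution sampled on a fixed reference grid for given (N, M):

```python
import numpy as np

def refine_M(solve_fgig, N, M0=4, M_max=64, step=4, tol=1e-12):
    """Increase M until successive solutions agree to within tol (Remark 2)."""
    u_prev = solve_fgig(N, M0)
    for M in range(M0 + step, M_max + 1, step):
        u = solve_fgig(N, M)
        if np.max(np.abs(u - u_prev)) < tol:   # discrete-norm stabilization
            return M, u
        u_prev = u
    return M_max, u_prev                       # tolerance not met; return best
```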
Remark 3 (Spectral Properties and Preconditioning). The linear systems arising from the FGIG method (Equation (43)) are characterized by coefficient matrices of the shifted, scaled integral form analyzed above. For integral equations, it is known that the eigenvalues of the underlying operators (and their discrete approximations) can exhibit clustering behavior. The results in [29] concerning the clustering of eigenvalues for matrices arising from integral equations provide a theoretical foundation for the observed excellent conditioning of our systems, particularly for the lower frequencies n. This spectral clustering near 1 is precisely the property that makes preconditioned Krylov solvers, as discussed in [29], highly effective for such problems. While the superb conditioning of our systems renders the use of such iterative solvers unnecessary for the problems considered in this work—allowing for efficient direct solution—the findings in [29] become highly relevant for potential extensions of the FGIG method to higher-dimensional or more complex problems, where the resulting linear systems might be larger and less amenable to direct factorization. This connection to the literature on integral equations and iterative solvers opens promising directions for future research, especially for problems in higher-dimensional spatial domains, where iterative Krylov methods are essential [30]. The well-established theory on superlinear convergence for such methods under eigenvalue clustering [31] provides a strong foundation for these future developments.

8. Computational Results
This section presents a series of numerical experiments to validate the accuracy and efficiency of the proposed FGIG and SA-FGIG methods. We consider three benchmark problems with known analytical solutions, allowing for a direct comparison between the numerical and exact solutions. Each method's performance is evaluated using error norms and computational time. We measure the error in the approximate solution of Problem S at the augmented collocation points set using the pointwise absolute error function and the discrete norm error (65) in u evaluated at the terminal time. The superscript "sa" is added to these two error notations when the solution is approximated by the SA solution obtained through (62).
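For reference, the two error measures can be computed as follows; this is a sketch under the assumption that the discrete norm (65) is a maximum norm over the augmented collocation grid (the paper's exact definition governs), with U_exact and U_approx holding the exact and approximate solution values on that grid:

```python
import numpy as np

def abs_error(U_exact, U_approx):
    """Pointwise absolute error on the augmented collocation grid."""
    return np.abs(U_exact - U_approx)

def discrete_norm_error(U_exact, U_approx):
    """Discrete norm error at the terminal time (assumed max-norm here)."""
    return np.max(abs_error(U_exact, U_approx))
```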
Test Problem 1. Consider Problem S with the parameter values and the initial and boundary functions specified in [7], for which the exact solution is known in closed form. Figure 11 shows the surfaces of the exact solution and its approximations obtained by the FGIG and SA-FGIG methods, as well as their corresponding errors, for some parameter values. Both methods achieve near-machine precision in resolving the solution and the SSD on a relatively coarse mesh grid.
Figure 12 shows the surfaces of the logarithms of the discrete error norms for both methods at the terminal time for certain ranges of the parameters N and M. When N is fixed, the linear decay of the surface for increasing M values indicates the exponential convergence of the FGIG method. The nearly flat surface along N while holding M fixed in the plot associated with the FGIG method occurs because the error is dominated by the discretization error of system (27), controlled by the size of the temporal linear system. To observe the theoretical exponential decay with N, it is necessary to refine the time mesh so that the temporal discretization error becomes negligible compared to the spatial error. This interplay between N and M reflects the dependence of the total error on both spatial and temporal resolutions. The SA-FGIG method, on the other hand, exhibits much lower logarithmic error norms over a narrow range, indicating a relatively flat surface. This is expected because the exact solution is analytic in space, cf. Theorem A2. Consequently, the Fourier truncation and interpolation errors vanish exponentially fast by Theorem 2 and Corollary 1. Furthermore, the numerical errors in double-precision arithmetic quickly plateau at about the machine-epsilon level and exhibit random fluctuations.
Figure 13 shows the times (in seconds) required to run the FGIG and SA-FGIG methods for certain ranges of the parameters N and M. The surfaces associated with the FGIG method have much lower time values than those associated with the SA-FGIG method. In all the runs, the FGIG method finished in under 0.01 s. In contrast, the SA-FGIG method exhibited substantially longer execution times, peaking at approximately 0.17 s at the largest tested parameter values. At this point, the FGIG method demonstrated a remarkable speed advantage, operating roughly 21 times faster than the SA-FGIG method. This indicates that the FGIG method is much more efficient in terms of execution time. As the number of cores in the computing device increases, the time gap between the two methods is expected to widen significantly, especially when processing larger datasets.
Table 1 shows a comprehensive comparison with the methods of Jena and Gebremedhin [32], Jena and Senapati [7], and Demir and Bildik [33]; the latter work uses a collocation method with cubic B-spline finite elements. Compared with all three methods, the FGIG method achieves significantly lower errors, even when employing relatively coarse mesh grids. For instance, the method in [7] requires a mesh grid comprising 3,210,321 points to achieve its reported error level. In contrast, the FGIG method attains the same level of accuracy with as few as 36 points and achieves near-full precision using just 44 mesh points—a reduction in mesh grid size of approximately 99.999% in either case. The SA-FGIG method outperforms all the other methods in terms of accuracy, scoring full machine precision using just five truncated Fourier series terms.
Figure 14 displays the values of the collocation matrix condition number for certain ranges of the parameters N and M. Observe how the condition number remains relatively close to 1 and improves slightly with increasing matrix size, as discussed earlier in Section 6.
Figure 15 further illustrates the surfaces of the exact and approximate solutions, their spatial derivatives, and the associated errors over an extended time interval. The results demonstrate that both the FGIG and SA-FGIG methods maintain high accuracy over long-time simulations, with errors remaining near machine precision. This confirms the methods' ability to effectively suppress error accumulation, as the exponential convergence observed earlier at the shorter terminal time persists under a finer temporal resolution.
Test Problem 2. Consider Problem S with the parameter values and the initial and boundary functions specified in [34], for which the exact solution is known in closed form.
Figure 16 shows the surfaces of the exact and approximate solutions and their errors obtained by the FGIG and SA-FGIG methods for some parameter values. Table 2 shows a comparison among the proposed methods, the Crank–Nicolson and compact boundary value methods of Sun and Zhang [6], and the method of Mohebbi and Dehghan [34]. The former methods combine fourth-order boundary value methods for time discretization with a fourth-order compact difference scheme for spatial discretization. The latter method utilizes a compact finite difference approximation of fourth-order accuracy to discretize the spatial derivatives, and the resulting system of ordinary differential equations is then solved using a cubic spline collocation method. Both the FGIG and SA-FGIG methods outperform these methods in terms of accuracy and computational cost, as shown in the table. In particular, the SA-FGIG method exhibits the lowest error using five terms of the truncated Fourier series. The FGIG method demonstrates a significant improvement in accuracy as the number of temporal mesh points increases. Notably, adding only three temporal mesh points to the FGIG configuration leads to a remarkable decrease in error by approximately five orders of magnitude.
Figure 17 further illustrates the surfaces of the exact and approximate solutions, their spatial derivatives, and the associated errors over an extended time interval.
Test Problem 3. Consider Problem S with a further set of parameter values and initial and boundary functions for which the exact solution is available in closed form.
Figure 18 shows the surfaces of the exact and approximate solutions and their errors obtained by the FGIG and SA-FGIG methods for some parameter values. Both methods resolve the solution surface well, within a small error tolerance, using as few as sixteen Fourier series terms and five time mesh points.
Figure 19 further illustrates the absolute error profiles and the discrete norm level at an extended terminal time. The results demonstrate that both the FGIG and SA-FGIG methods maintain accuracy over long-term simulations, with errors remaining tightly bounded. This confirms the robustness of the methods in handling advection–diffusion problems with periodic boundary conditions, even over extended time intervals. The ability to achieve such accuracy with relatively coarse grids highlights the efficiency and computational advantages of the proposed methods. The discrete norm level further validates the stability and precision of the solutions and reinforces the suitability of the FGIG framework for long-time simulations.
For convenience, we present a comparison of computational times and errors for all the test problems in Table 3, which demonstrates the exceptional performance of both the FGIG and SA-FGIG methods across all the test problems.
Finally, although the FGIG method demonstrates exceptional performance in diffusion-dominated problems, it often performs poorly when applied to advection-dominated regimes, which are characterized by steep gradients and sharp features.
Figure 20 shows the absolute error profiles and the discrete norm level for the last test example at the terminal time, with the parameters chosen to yield a very large Péclet number, representing a strongly advection-dominated regime. The results demonstrate that the numerical scheme becomes unstable in this setting, leading to poor approximation estimates.