Efficient Solution of Fokker–Planck Equations in Two Dimensions

Donald Michael McFarland, Fei Ye, Chao Zong, Rui Zhu, Tao Han, Hangyu Fu, Lawrence A. Bergman and Huancai Lu

1 Sound and Vibration Laboratory, College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014, China
2 Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo 315201, China
3 Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 491; https://doi.org/10.3390/math13030491
Submission received: 9 December 2024 / Revised: 10 January 2025 / Accepted: 21 January 2025 / Published: 31 January 2025

Abstract

Finite element analysis (FEA) of the Fokker–Planck equation governing the nonstationary joint probability density function of the responses of a dynamical system produces a large set of ordinary differential equations, and computations become impractical for systems with as few as four states. Nonetheless, FEA remains of interest for small systems—for example, for the generation of baseline performance data and reference solutions for the evaluation of machine learning-based methods. We examine the effectiveness of two techniques which, while they are well established, have not to our knowledge been applied to this problem previously: reduction of the equations onto a smaller basis comprising selected eigenvectors of one of the coefficient matrices, and splitting of the other coefficient matrix. The reduction was only moderately effective, requiring a much larger basis than was expected and producing solutions with clear artifacts. Operator splitting, however, performed very well. While the methods can be combined, our results indicate that splitting alone is an effective and generally preferable approach.

1. Introduction

The Fokker–Planck (FP) equation, also known as the forward Kolmogorov equation, is a deterministic partial differential equation (PDE) whose solution gives the joint probability density function (pdf) of the responses (states) of a dynamical system subjected to random excitation. While analytical solutions are available for linear systems of any dimension [1,2], nonlinear dynamical systems naturally give rise to more challenging FP equations, for which analysis may be limited to the stationary pdf [3] when it can be carried out at all. When the FP equation is discretized, for example, by using finite elements in a system's phase space, the resulting matrix equation grows much faster than the dimension of the original dynamical system, and computations quickly become intractable. In the following, we try to address this problem for small (but nontrivial) systems.
The first attempts at the application of finite element analysis (FEA) in numerical treatment of the FP equation date back several decades. Bergman and Heinrich [4] first proposed Petrov–Galerkin FEA to solve the backward Kolmogorov equation (the formal adjoint of the FP equation), and solved the first passage problem for a second-order linear system driven by additive Gaussian white noise. Langley [5] used a finite element formulation to study the stationary random vibration of 2-D nonlinear systems. Spencer and Bergman [6] used a Bubnov–Galerkin FEA method to solve the FP equations of linear, Duffing, and Van der Pol oscillators, and studied the corresponding first passage problems. El-Gebeily and Shabaik [7] reported a straightforward analysis of the transient problem. Masud and Bergman [8] presented a multi-scale FEA formulation, applied it to calculate the solution of the FP equation, and examined its efficacy through selected applications. Floris [9] used spline interpolation in FEA of the FP equation, Kumar et al. [10] considered systems subjected to colored non-Gaussian noise, and Král and Náprstek [11] adopted simplex elements to treat higher-dimensional problems. Further algorithmic refinements and additional applications may be found in Refs. [12,13,14,15,16,17].
All of these works, however, suffer from the same fundamental limitation. When the FE method is used to solve the FP equation, the number of equations to be solved increases exponentially with the dimension of the state space, and the resulting computational burden quickly becomes prohibitive. In this paper, we consider two approaches to overcoming this difficulty—namely, reducing the dimension of the FE equation by projecting it onto a smaller basis, eliminating one coefficient matrix in the process; and splitting the remaining matrix to enable efficient time marching of the nonstationary solution. Numerous examples of related work on deterministic problems may be found in the literature, for example, in computational fluid dynamics [18,19,20,21] and in nearfield acoustic holography [22]. The novelty of this work lies in the application of these techniques to the FP equation, and in the results obtained for specific choices of basis and splitting scheme.
Dimension reduction has been gradually applied in the solution of FP equations. Lötstedt and Ferm [23] obtained results for a problem involving multiple chemical species by deriving PDEs to approximate the first moments for some of the species and coupling these to the FP equations for the species not so reduced. Leonenko and Phillips [24] used spectral discretization and a high-order reduced-basis approximation in an application to polymer dynamics, and Er [25] proposed a subspace method to solve for the response probability density functions of large-scale nonlinear stochastic dynamical systems. Chen and Lin [26] and Chen and Rui [27] obtained the generalized probability density evolution equation by using decoupled physical equations. Zhu [28] studied the stationary probability density function of a multi-degree-of-freedom nonlinear system subjected to external, independent Poisson white noise, reducing the high-dimensional generalized FP equation to a low-dimensional equation, and Liang et al. [29] improved on this approach. In contrast to most of these studies, the reduction method used herein does not depend on the physics of the problem considered.
Independent of the dimension of the problem, operator splitting provides a numerical method to compute the solution to matrix differential equations that can be more efficient than straightforward integration using the original coefficient matrices. Operator splitting methods have been known for more than a century, starting with the scheme introduced by Lie in the mid-1870s [30]. Alternating direction implicit (ADI) methods, introduced in the 1950s by Peaceman, Rachford, and Douglas, are surveyed by Usadi and Dawson [31]. In the mid-1970s, close relationships between the augmented Lagrangian methods of Hestenes [32] and Powell [33,34] and ADI methods were identified, which inspired the alternating direction method of multipliers (ADMM). Based on a symmetrization principle, Strang [35] introduced a second-order variant of the Lie scheme, which was motivated by a need for the accurate solution of hyperbolic problems. This second-order scheme is unconditionally stable, which makes it popular in computing solutions to PDEs. Speth et al. [36] further developed balanced splitting and rebalanced splitting methods; compared with earlier splitting methods, these have higher accuracy and can correctly capture steady-state behavior. Goldstein and Osher [37] discussed an algorithm that is equivalent to the ADMM, and this method has been used in many image-processing applications.
Operator splitting has attracted attention for its computational efficiency in the solution of FP equations. Zorzano et al. [38] combined an ADI scheme with the finite difference method to solve FP problems in particle accelerators. For studying the influence of noise in the particle dynamics in a storage ring, a method based on operator splitting was developed by Mais and Zorzano [39], which can also be used in stochastic beam-dynamics problems in accelerators. Zorzano et al. [40] provided a robust finite difference scheme with operator splitting (ADI) for the FP equation in two spatial variables and time.
Because of the rapid growth of the FE equation with the increasing dimension of the underlying dynamical system, it now seems unlikely that this approach to the solution of the FP equation will be extended beyond the three or four states that have represented a practical limit for several years. However, systems of this size do have practical applications, such as design optimization and nonlinear filtering [41], and so there is some value in pursuing their efficient solution. In addition, we are motivated in part by the goal of providing baseline timing and solution results for the FEA of such problems as the field shifts increasingly toward the routine use of alternative formulations, such as physics-informed neural networks (PINNs) [42,43,44], which are very promising for problems of high dimension but whose numerical properties have not yet been fully tested.
Our interest is thus pragmatic. The finite element analysis of the FP equation is largely a mature subject, with some gaps where known numerical techniques may be applied. We try to provide practical guidance for the use of two of these, model reduction and operator splitting. While we focus on 2-D problems herein, our approach is directly extensible to larger systems, which have structurally similar coefficient matrices; however, both programming complexity and storage requirements grow very quickly, and the efficiency gains we achieve are not enough to offset this. Rather than trying to compete with methods such as the PINN solvers mentioned above (a rough computation based on estimating the numbers of floating-point operations needed suggests that they overtake FEA by the time a system has grown to four states), we seek to improve the efficiency of the solution of a class of problems that may serve as well-understood test cases. For the same reason, we believe these results will be of value to those studying FEA or other techniques [45,46,47] with an eye primarily to performance and algorithmic properties other than speed.
In the following sections, the FP equation is used to formulate a PDE for the joint pdf of the displacement and velocity of a single-degree-of-freedom (SDOF) oscillator, which is then discretized in the phase plane. The resulting large matrix ordinary differential equation has constant coefficient matrices, and the properties of their eigenvectors for use in a reduced basis are elucidated. One of the coefficient matrices may be usefully split into the sum of two matrices with better stability characteristics, which makes feasible the use of a state transition matrix to advance the solution by a fixed step size. The performance of these techniques is demonstrated through numerical examples where the nonstationary random responses of linear and nonlinear SDOF oscillators are computed. Recommendations based on the findings from these applications are summarized in the results and conclusion.

2. Problem Formulation

Consider an SDOF system (an oscillator) governed by the stochastic differential equations
$$\dot{X}_1 = g_1(\mathbf{X}),$$
$$\dot{X}_2 = g_2(\mathbf{X}) + w(t),$$
where the state vector is $\mathbf{X} = [\,X \ \ \dot{X}\,]^T = [\,X_1 \ \ X_2\,]^T$ and the overdot denotes differentiation with respect to time. The excitation $w(t)$ is Gaussian white noise, which is fully defined by its first two moments,
$$E[w(t)] = 0 \quad \text{and} \quad E[w(t)\,w(t + t')] = 2D\,\delta(t'),$$
where δ ( · ) is the Dirac delta function and D / π is the magnitude of the constant two-sided spectral density function of the excitation. Equation (1) describes a Markov process, and its behavior is completely determined by the transition probability density function p ( x , t | x 0 ) , which satisfies the (deterministic) Fokker–Planck equation
$$\frac{\partial p}{\partial t} = D\,\frac{\partial^2 p}{\partial x_2^2} - \frac{\partial}{\partial x_1}\big[g_1(\mathbf{x})\,p\big] - \frac{\partial}{\partial x_2}\big[g_2(\mathbf{x})\,p\big]$$
and the initial condition $\lim_{t \to 0} p(\mathbf{x}, t \mid \mathbf{x}_0) = \delta(\mathbf{x} - \mathbf{x}_0)$. Homogeneous boundary conditions are imposed, with $p \to 0$ as $|x_1|, |x_2| \to \infty$. The joint probability density function of the response is related to the transition probability density by
$$f(\mathbf{x}, t) = \int p(\mathbf{x}, t \mid \mathbf{x}_0)\, f(\mathbf{x}_0)\, d\mathbf{x}_0,$$
where for convenience the initial condition may be taken in the form of a binormal distribution. Substituting this function into the FP equation and integrating term-wise gives
$$\frac{\partial f}{\partial t} = D\,\frac{\partial^2 f}{\partial x_2^2} - \frac{\partial}{\partial x_1}(g_1 f) - \frac{\partial}{\partial x_2}(g_2 f).$$
The desired joint probability density function f ( x , t ) satisfies this PDE subject to the initial conditions on f and the boundary conditions (which are the same as those on p).
A Galerkin finite element method is used where within a single element with domain Ω e , the probability density function is interpolated according to
$$f(\mathbf{x}, t) = \sum_{r=1}^{n} N_r(\mathbf{x})\, f_r^e(t),$$
where the $N_r(\mathbf{x})$, $r = 1, 2, \ldots, n$, are bilinear trial functions of class $C^0$, $n$ is the number of nodes in a single element, and the $f_r^e$ are the nodal values of the density. Substituting Equation (6) into the weak form of Equation (5) leads to
$$\sum_{s=1}^{n} m_{rs}\,\dot{f}_s^e + \sum_{s=1}^{n} k_{rs}\, f_s^e = 0, \qquad r = 1, \ldots, n,$$
where the element matrices are given by
$$m_{rs} = \int_{\Omega_e} N_r\, N_s \; dx_1\, dx_2$$
and
$$k_{rs} = \int_{\Omega_e} \left[\, D\,\frac{\partial N_r}{\partial x_2}\,\frac{\partial N_s}{\partial x_2} - N_s \left( g_1\,\frac{\partial N_r}{\partial x_1} + g_2\,\frac{\partial N_r}{\partial x_2} \right) \right] dx_1\, dx_2 .$$
Assembling the global coefficient matrices by standard procedures produces
$$\mathbf{M}\,\dot{\mathbf{f}}(t) + \mathbf{K}\,\mathbf{f}(t) = \mathbf{0},$$
where f ( t ) is the global vector of nodal values of the joint probability density function and M and K are constant, square matrices.
We note some properties of the coefficient matrices that are relevant to the discussion to follow. Both M and K are real and sparse. The matrix M is symmetric and positive definite, while K is nonsymmetric and has 0 as an eigenvalue with multiplicity one; the corresponding eigenvector, scaled so that
$$\int_{\Omega} f \, d\Omega = 1,$$
is the solution to
$$\mathbf{K}\,\mathbf{f} = \mathbf{0},$$
and gives the stationary joint pdf of the displacement and velocity responses. (Such normalization is not needed when the transient solution is computed starting from initial conditions given in the form of a pdf.) The matrix M is independent of the oscillator model and the excitation intensity, while K depends on both.
Experience has shown that for typical linear, Duffing, and Van der Pol oscillators, an effective finite element discretization can be achieved with about 100 elements in each dimension of a uniform rectangular mesh covering $\Omega$ [6]. This leads to a state vector $\mathbf{f}$ of dimension $101^2 = 10{,}201$. After imposition of homogeneous essential boundary conditions at the nodes on the boundary $\partial\Omega$, this is reduced to 9801.
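To make the discretization concrete, the following sketch (not the authors' code) assembles sparse global $\mathbf{M}$ and $\mathbf{K}$ matrices for a uniform mesh of bilinear quadrilateral elements using 2 × 2 Gauss quadrature, following the element integrals above. The function names, the use of SciPy sparse COO assembly, and the example parameter values are illustrative assumptions, and the imposition of the homogeneous boundary conditions is omitted.

```python
import numpy as np
from scipy import sparse

def element_matrices(x0, y0, hx, hy, g1, g2, D):
    """4x4 element matrices m and k for a bilinear quad with lower-left corner (x0, y0)."""
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)        # 2-point Gauss abscissae (weights = 1)
    xi_r  = np.array([-1.0,  1.0, 1.0, -1.0])        # reference coordinates of the 4 nodes
    eta_r = np.array([-1.0, -1.0, 1.0,  1.0])
    m = np.zeros((4, 4)); k = np.zeros((4, 4))
    detJ = 0.25 * hx * hy                            # constant Jacobian determinant
    for xi in gp:
        for eta in gp:
            N     = 0.25 * (1.0 + xi * xi_r) * (1.0 + eta * eta_r)
            dNdx1 = 0.25 * xi_r  * (1.0 + eta * eta_r) * (2.0 / hx)
            dNdx2 = 0.25 * eta_r * (1.0 + xi  * xi_r ) * (2.0 / hy)
            x1 = x0 + 0.5 * hx * (1.0 + xi)          # physical coordinates of the Gauss point
            x2 = y0 + 0.5 * hy * (1.0 + eta)
            m += np.outer(N, N) * detJ
            conv = g1(x1, x2) * dNdx1 + g2(x1, x2) * dNdx2
            k += (D * np.outer(dNdx2, dNdx2) - np.outer(conv, N)) * detJ
    return m, k

def assemble(nx, ny, xlim, ylim, g1, g2, D):
    """Assemble sparse global M and K on a uniform nx-by-ny element mesh."""
    hx = (xlim[1] - xlim[0]) / nx
    hy = (ylim[1] - ylim[0]) / ny
    nnode = (nx + 1) * (ny + 1)
    node = lambda i, j: j * (nx + 1) + i             # global node numbering, x1 index fastest
    rows, cols, mvals, kvals = [], [], [], []
    for j in range(ny):
        for i in range(nx):
            conn = [node(i, j), node(i + 1, j), node(i + 1, j + 1), node(i, j + 1)]
            me, ke = element_matrices(xlim[0] + i * hx, ylim[0] + j * hy, hx, hy, g1, g2, D)
            for a in range(4):
                for b in range(4):
                    rows.append(conn[a]); cols.append(conn[b])
                    mvals.append(me[a, b]); kvals.append(ke[a, b])
    M = sparse.coo_matrix((mvals, (rows, cols)), shape=(nnode, nnode)).tocsr()
    K = sparse.coo_matrix((kvals, (rows, cols)), shape=(nnode, nnode)).tocsr()
    return M, K

# Example: coefficient matrices for the linear oscillator of Section 5.1
M, K = assemble(100, 100, (-10.0, 10.0), (-10.0, 10.0),
                g1=lambda x1, x2: x2,
                g2=lambda x1, x2: -0.1 * x2 - x1,    # -2*zeta*omega0*x2 - omega0**2*x1
                D=0.1)
```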

3. Model Reduction

Given the matrices M and K , we can pose three eigenproblems whose solutions may be useful in reducing the dimension of Equation (9) and thus lowering the computational cost of its solution. In this section, we review some familiar results from linear algebra and establish a framework for the systematic selection of the eigenvectors to be included in a reduced basis [48]. We also discuss an ad hoc technique for improving satisfaction of the boundary conditions in the reduced problem.

3.1. Eigenproblems in the Coefficient Matrices

The simplest eigenvalue problem we will consider is
$$\mathbf{M}\,\boldsymbol{\phi} = \lambda\,\boldsymbol{\phi}.$$
The $n \times n$ matrix $\mathbf{M}$ is real, symmetric, and positive definite, with eigenvalues $\lambda_i$, $i = 1, \ldots, n$, where $n$ is on the order of $10^4$ in the problems considered herein. Consequently, the eigenvectors $\boldsymbol{\phi}_i$ are orthogonal, both simply and with respect to $\mathbf{M}$, and can be normalized so that
$$\boldsymbol{\phi}_i^T \mathbf{M}\, \boldsymbol{\phi}_j = \delta_{ij}.$$
If we define a transformation matrix using a subset of the eigenvectors,
$$\mathbf{S} = \big[\, \boldsymbol{\phi}_i \,\big],$$
and let
$$\mathbf{f} = \mathbf{S}\,\boldsymbol{\eta},$$
we obtain
$$\dot{\boldsymbol{\eta}} + \hat{\mathbf{K}}\,\boldsymbol{\eta} = \mathbf{0},$$
where
$$\hat{\mathbf{K}} = \mathbf{S}^T \mathbf{K}\, \mathbf{S}.$$
If we have used $m < n$ eigenvectors in defining the transformation, the matrix $\mathbf{S}$ is not square, but $n \times m$; as a result, $\boldsymbol{\eta}$ is of length $m$, and $\hat{\mathbf{K}}$ is $m \times m$. The dimension of the problem has been reduced from $n$ to $m$.
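A compact sketch of this reduction follows, assuming the global matrices are available and m basis vectors are retained. The helper name is illustrative, a dense eigensolution is used for simplicity, and the choice here of keeping the eigenvectors with the largest eigenvalues of M is only one option (Section 3.2 discusses a better-informed selection).

```python
import numpy as np
from scipy.linalg import eigh

def reduce_on_M_eigenvectors(M, K, m):
    """Project M f' + K f = 0 onto m selected eigenvectors of M."""
    Md = M.toarray() if hasattr(M, "toarray") else np.asarray(M)
    lam, Phi = eigh(Md)                      # ascending eigenvalues, orthonormal columns
    idx = np.argsort(lam)[::-1][:m]          # e.g., keep the m largest eigenvalues
    S = Phi[:, idx] / np.sqrt(lam[idx])      # rescale each column so that S.T @ M @ S = I
    K_hat = S.T @ (K @ S)                    # reduced coefficient matrix K_hat = S^T K S
    return S, K_hat

# The reduced unknowns satisfy eta' + K_hat eta = 0, with eta(0) = S.T @ (M @ f0)
# and the nodal values recovered as f(t) = S @ eta(t).
```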
The situation is somewhat different if we instead choose to solve
$$\mathbf{K}\,\boldsymbol{\phi} = \lambda\,\boldsymbol{\phi}.$$
Because $\mathbf{K}$ is real but not symmetric, its eigenvalues may not all be real. We do again expect an eigenvalue of 0 and its (real) eigenvector giving the stationary pdf. (Computationally, it is usually necessary to accept the eigenvalue that is smallest in modulus as "zero", but we have encountered no cases in which this was problematic.) The other eigenvectors may be complex, so we need to use the Hermitian (conjugate) transpose; otherwise, the foregoing calculations are essentially unchanged. In general, the eigenvectors of $\mathbf{K}$ are not orthogonal. Because neither of the transformed coefficient matrices $\hat{\mathbf{M}}$ and $\hat{\mathbf{K}}$ can be expected to be diagonal, and both will likely be complex, it is not clear that this offers any advantage over the original differential equation with real coefficient matrices. By defining the transformation of coordinates using a matrix $\mathbf{S}$ containing a subset of the eigenvectors of $\mathbf{K}$, it is again possible to obtain a reduced problem. However, the real-valued reduced problem described above, constructed using eigenvectors of $\mathbf{M}$, seems preferable.
The solution of the generalized eigenproblem
$$\mathbf{K}\,\boldsymbol{\phi} = \lambda\,\mathbf{M}\,\boldsymbol{\phi}$$
will produce n eigenvalues and eigenvectors, generally complex-valued. We once again expect an eigenvalue of zero, with its eigenvector giving the stationary solution. The eigenvectors have properties similar to those of K above, and are not orthogonal with respect to either matrix. The reduced coefficient matrices, while they may be structurally simpler than M and K , are again both complex.
Subsets of vectors from any of the three problems will work in practice. Because neither of the transformed coefficient matrices is diagonal, and because many standard differential equation solvers require that the resulting complex-valued problem be recast as a real-valued problem of double the dimension, there is little attraction to using eigenvectors of $\mathbf{K}$. The same observations can be made about the use of vectors from the generalized eigenproblem, although there it might be hoped that these drawbacks would be offset by the greater effectiveness of a smaller basis. However, experience has shown that neither set of complex eigenvectors offers any significant advantage over the real eigenvectors obtained from the regular eigenproblem in $\mathbf{M}$, particularly regarding the size of the basis required to achieve a given accuracy. Furthermore, the use of eigenvectors of $\mathbf{M}$ allows the diagonalization of one coefficient matrix, and, with proper normalization, $\hat{\mathbf{M}} = \mathbf{I}$. Therefore, we choose to draw our reduced basis from the eigenvectors of the matrix $\mathbf{M}$.

3.2. Selection of Basis Vectors

The questions immediately arise of how to choose m and which m eigenvectors are to be included in S . Some guidance may be had if a vector u representing a meaningful pdf is available—for example, results from a past calculation. The projections of this vector onto the eigenvectors are given by
$$\beta_i = \frac{\boldsymbol{\phi}_i^T \mathbf{u}}{\boldsymbol{\phi}_i^T \boldsymbol{\phi}_i}, \qquad i = 1, \ldots, n,$$
when ϕ i is real, and by a similar expression involving the conjugate transpose when the eigenvector is complex. These coefficients can be sorted by absolute value and examined for abrupt changes indicating groups of eigenvectors of similar importance in the new basis. If there are no obvious groupings of the β i , the results will at least show which ϕ i should be chosen first. Naturally, the value of this ranking depends on the accuracy of u in representing the solution over time. It may be desirable to construct a composite, artificial “solution” from snapshots taken from a previously computed time series, if one is available. This technique is demonstrated in the examples of a later section.
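A sketch of this ranking, assuming the real eigenvectors are stored column-wise in an array Phi and u is the representative vector (both names are illustrative):

```python
import numpy as np

def rank_basis_vectors(Phi, u):
    """Return eigenvector indices sorted by |beta_i|, largest first, and the sorted coefficients."""
    beta = (Phi.T @ u) / np.einsum('ij,ij->j', Phi, Phi)   # beta_i = phi_i^T u / (phi_i^T phi_i)
    order = np.argsort(np.abs(beta))[::-1]
    return order, beta[order]

# The first m entries of `order` identify the eigenvectors to place in S.
```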

3.3. Limitations of Solution on Reduced Basis

When computing a nonstationary solution of the FP equation on a reduced basis, it is frequently observed that while the overall character of the pdf obtained at any time step is correct, there are small errors in both the peak values and the tails of the distribution. The latter often manifest as large values near the boundaries, and in combination with the homogeneous boundary conditions lead to large gradients in the pdf and so encourage diffusion of probability mass over the boundaries. This leads to errors that grow over time and ultimately to an underestimated, if not qualitatively wrong, stationary solution. Enlarging the computational domain, refining the mesh, or both have little effect on this phenomenon; a nearly complete basis is often needed if the solution is to satisfy the boundary conditions over time. This significantly limits the dimension reduction that can be employed.
This deterioration of the pdf can be ameliorated by introducing into the original PDE artificial diffusive terms that oppose this loss over the boundaries. Specifically, we consider a modified equation in the form
$$\frac{\partial f}{\partial t} = D\,\frac{\partial^2 f}{\partial x_2^2} + \alpha_1\,\frac{\partial^2 f}{\partial x_1^2} + \alpha_2\,\frac{\partial^2 f}{\partial x_2^2} - \frac{\partial}{\partial x_1}\big[g_1 f\big] - \frac{\partial}{\partial x_2}\big[g_2 f\big].$$
For simplicity, the coefficients $\alpha_1$ and $\alpha_2$ are taken to be nonzero only in rectangular regions of widths $b_1$ and $b_2$ near $\partial\Omega$, starting at zero on the inner edges of these strips and growing quadratically to values of $d_1$ and $d_2$ on the boundaries. These terms are easily discretized in the finite element formulation of the problem. The effects of the parameters controlling the subdomains ("boundary layers") where these artificial terms exist and the strength of the resulting inward diffusion are examined in the examples to follow.
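One possible realization of this quadratic boundary-layer profile, assuming a symmetric interval [-L, L] in the relevant direction; the function and argument names are illustrative. The values returned would be evaluated at the quadrature points and added to the diffusive contribution of the element k matrix.

```python
import numpy as np

def alpha_profile(x, L, b, d):
    """Artificial diffusion coefficient: zero for |x| <= L - b, rising quadratically to d at |x| = L."""
    s = (np.abs(x) - (L - b)) / b          # 0 at the inner edge of the strip, 1 on the boundary
    return d * np.clip(s, 0.0, 1.0) ** 2
```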

4. Operator Splitting

If the convection and diffusion terms in the original equation are separated, perhaps in the expectation that their physics may develop on different time scales, the terms in the element k matrix can be grouped and that matrix can be divided into
$$\mathbf{k}_1 = \int_{\Omega_e} D\, \frac{\partial \mathbf{N}}{\partial x_2}\, \frac{\partial \mathbf{N}^T}{\partial x_2} \; dx_1\, dx_2$$
and
$$\mathbf{k}_2 = -\int_{\Omega_e} \left( g_1\,\frac{\partial \mathbf{N}}{\partial x_1} + g_2\,\frac{\partial \mathbf{N}}{\partial x_2} \right) \mathbf{N}^T \; dx_1\, dx_2 .$$
Obviously, this splitting satisfies $\mathbf{k} = \mathbf{k}_1 + \mathbf{k}_2$. Then, global matrices $\mathbf{K}_1$ and $\mathbf{K}_2$ can be assembled as the previous $\mathbf{K}$ matrix was and transformed to obtain $\hat{\mathbf{K}}_1 = \mathbf{S}^T \mathbf{K}_1 \mathbf{S}$ and $\hat{\mathbf{K}}_2 = \mathbf{S}^T \mathbf{K}_2 \mathbf{S}$.
For the system of ordinary differential equations of the form $\dot{\boldsymbol{\eta}} + \hat{\mathbf{K}}\,\boldsymbol{\eta} = \mathbf{0}$ obtained using $\mathbf{S} = [\,\boldsymbol{\phi}_1 \ \cdots \ \boldsymbol{\phi}_n\,]$ (so that $\hat{\mathbf{M}} = \mathbf{I}$), we begin with the simple first-order splitting method. Writing this equation as
$$\dot{\boldsymbol{\eta}} + \big(\hat{\mathbf{K}}_1 + \hat{\mathbf{K}}_2\big)\,\boldsymbol{\eta} = \mathbf{0},$$
we separate it into two parts. Over a time step $\Delta t$, we solve the two equations
$$\dot{\mathbf{q}} + \hat{\mathbf{K}}_1\,\mathbf{q} = \mathbf{0}, \qquad \dot{\mathbf{p}} + \hat{\mathbf{K}}_2\,\mathbf{p} = \mathbf{0}$$
separately, using the solution of the first as the initial conditions for the second, and then combine the two separate solutions to form a solution to the original Equation (23) [35]. Using the state transition matrices associated with Equation (24), we can advance the complete solution from time step n to step n + 1 by computing
$$\mathbf{q}_{n+1} = e^{-\hat{\mathbf{K}}_1 \Delta t}\, \mathbf{p}_n, \qquad \mathbf{p}_{n+1} = e^{-\hat{\mathbf{K}}_2 \Delta t}\, \mathbf{q}_{n+1}.$$
Combining these equations for a fixed time step Δ t , the result
$$\boldsymbol{\eta}_{n+1} = e^{-\hat{\mathbf{K}}_2 \Delta t}\, e^{-\hat{\mathbf{K}}_1 \Delta t}\, \boldsymbol{\eta}_n$$
is obtained, which may be used to efficiently advance the solution over any number of steps by simply multiplying the previous value by a constant matrix. By comparison to the Taylor series for the exact solution of a canonical equation, it can be shown that this splitting method is first-order accurate ( O ( Δ t ) ) even when the matrices K 1 and K 2 do not commute under multiplication [49].
This scheme is both simple and stable once the matrix exponentials are computed and multiplied to form the single state transition matrix needed to advance the solution over a fixed step. It is of course possible to choose a time step so large that the exponentiation (carried out in floating-point arithmetic) fails, but our experience has been that using time steps small enough to resolve the dynamics of interest in a system usually avoids any such difficulties.
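A sketch of this time march, assuming the split (and possibly reduced) matrices are available as dense arrays; scipy.linalg.expm is used for the matrix exponentials, and the helper name is illustrative.

```python
import numpy as np
from scipy.linalg import expm

def march_lie(K1_hat, K2_hat, eta0, dt, nsteps):
    """First-order (Lie) splitting: advance eta' + (K1_hat + K2_hat) eta = 0 by nsteps of size dt."""
    Phi_dt = expm(-K2_hat * dt) @ expm(-K1_hat * dt)   # constant state transition matrix, cf. Equation (26)
    history = np.empty((nsteps + 1, len(eta0)))
    history[0] = eta0
    for n in range(nsteps):
        history[n + 1] = Phi_dt @ history[n]           # one matrix-vector product per step
    return history
```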
Symmetric splitting, as introduced by Strang [35], can achieve second-order accuracy. Here, K is split into
$$\mathbf{K} = \tfrac{1}{2}\mathbf{K}_1 + \mathbf{K}_2 + \tfrac{1}{2}\mathbf{K}_1 .$$
Examining the Taylor series shows that
$$e^{\frac{1}{2}\mathbf{K}_1 \Delta t}\, e^{\mathbf{K}_2 \Delta t}\, e^{\frac{1}{2}\mathbf{K}_1 \Delta t} - e^{(\mathbf{K}_1 + \mathbf{K}_2)\Delta t} = \mathbf{C}\,\Delta t^3 + O(\Delta t^4),$$
where
$$\mathbf{C} = \frac{1}{24}\Big( \big[[\mathbf{K}_1, \mathbf{K}_2], \mathbf{K}_1\big] + 2\,\big[[\mathbf{K}_1, \mathbf{K}_2], \mathbf{K}_2\big] \Big)$$
and where $[\mathbf{A}, \mathbf{B}]$ denotes the matrix commutator $\mathbf{A}\mathbf{B} - \mathbf{B}\mathbf{A}$. The total number of time steps is $T/\Delta t$, so the global error is $O(\Delta t^2)$.
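The second-order variant changes only the propagator; a sketch under the same assumptions as the first-order code above:

```python
from scipy.linalg import expm

def strang_propagator(K1_hat, K2_hat, dt):
    """Strang (symmetric) splitting propagator for eta' + (K1_hat + K2_hat) eta = 0."""
    half = expm(-0.5 * K1_hat * dt)
    return half @ expm(-K2_hat * dt) @ half
```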

5. Numerical Examples

The example problems of this section are taken from Spencer and Bergman [6] and comprise two SDOF oscillators, one linear and one nonlinear (a Duffing oscillator with negative linear stiffness but positive cubic stiffness). The notation adopted here is typical of that used to describe vibrating mechanical systems, but these equations of motion have analogs in many areas of science and engineering. Initial conditions are specified as a binormal distribution. For each example, we give results from reduced problems, from operator splitting, and from these techniques used in combination. The solution of the full FE problem by an adaptive Runge–Kutta (RK) integrator is used as a reference except in those cases where an exact solution is available. (Specifically, we used the function solve_ivp from the SciPy Python library (v. 1.8.0), which implements the RK5(4) algorithm of Dormand and Prince [50].) This routine was also used to integrate those reduced problems for which operator splitting was not employed. Time-dependent results are usually reported here in terms of the nondimensional time τ = ω 0 t / 2 π (i.e., in units of the (linearized) natural period of the system being considered).
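For reference, such a full-order integration can be set up as in the following sketch. The treatment of M by a reusable sparse LU factorization and the helper name are illustrative choices made here, not a description of the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse.linalg import factorized

def integrate_full(M, K, f0, t_eval):
    """Reference solution of M f' + K f = 0 with an adaptive Runge-Kutta (RK5(4)) method."""
    solve_M = factorized(M.tocsc())                       # reusable sparse LU factorization of M
    rhs = lambda t, f: -solve_M(K @ f)                    # f' = -M^{-1} K f
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), f0, t_eval=t_eval, method="RK45")
    return sol.y                                          # columns are nodal pdf values at t_eval
```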
Basis vectors were selected from the eigenvectors of M according to the magnitudes of the projections onto them of a composite vector u, constructed by superposing twenty snapshots of a full RK solution, equally spaced in time from the relevant initial conditions to apparent stationarity. This allows the estimation of the minimum basis size required for the solution of the nonstationary problem, but it is obviously not a technique that will be generally available. Because the required bases were found to be quite large, half or more of the original problem dimension in the Duffing example, we have not investigated alternative means of selecting the best basis vectors; however, we note that the eigenvectors selected on the basis of the projection coefficients obtained using u and the eigenvectors corresponding to the largest eigenvalues of M were generally strongly correlated. This suggests that, in the typical case where a composite solution vector is not available, including in the basis those eigenvectors with the largest eigenvalues will generally give the best results. This choice, and the required size of the basis, will be influenced by the degree to which the nonstationary solution is localized, and thus by the initial conditions. The initial mean values used in our examples are rather large, which makes for interesting but challenging examples.
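A sketch of the construction of the composite vector, assuming F holds nodal pdf snapshots column-wise at the output times of a previously computed full solution (the names are illustrative, and any normalization of u is immaterial to the ranking of the projection coefficients):

```python
import numpy as np

def composite_vector(F, nsnap=20):
    """Superpose nsnap equally spaced columns of F (nodal pdf snapshots over time)."""
    cols = np.linspace(0, F.shape[1] - 1, nsnap).round().astype(int)
    return F[:, cols].sum(axis=1)
```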
We summarize here the steps followed in each example when either Runge–Kutta integration or first-order operator splitting was used. The steps for second-order splitting are very similar.
  • Form the finite element global coefficient matrices M and K . If operator splitting will be used, form K 1 and K 2 in lieu of K . If basis reduction will be used, optionally include in K or K 1 additional terms representing artificial diffusion.
  • Solve the eigenproblem M ϕ = λ ϕ and normalize the resulting vectors with respect to M . (Hence, M ^ = I .)
  • Select the eigenvectors to be used to transform the problem (which may be all of them) and form S .
  • Compute K ^ or K ^ 1 and K ^ 2 , as appropriate.
  • Starting from the given initial conditions, march the solution in time (typically to stationarity); a condensed driver sketch illustrating these steps follows this list.
    (a)
    If using a Runge–Kutta integrator, solve $\dot{\boldsymbol{\eta}} = -\hat{\mathbf{K}}\,\boldsymbol{\eta}$ directly. If the output is in the form of a sparse structure, evaluate the solution at the desired times (e.g., on uniform steps).
    (b)
    If using operator splitting, choose a time step Δ t , evaluate the necessary matrix exponentials, and form the corresponding state transition matrix (cf. Equation (26)). Advance the solution by forming a sequence of matrix–vector products.
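The condensed driver below applies these steps to the linear oscillator of Section 5.1, reusing the hypothetical helpers sketched earlier (assemble, reduce_on_M_eigenvectors, march_lie). Boundary condition handling and the split-matrix assembly are omitted for brevity; this is an illustration of the workflow, not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1. Global matrices (the unsplit K is shown; K1 and K2 would be assembled the same way).
M, K = assemble(100, 100, (-10.0, 10.0), (-10.0, 10.0),
                g1=lambda x1, x2: x2,
                g2=lambda x1, x2: -0.1 * x2 - x1,   # 2*zeta*omega0 = 0.1, omega0**2 = 1
                D=0.1)

# 2.-4. Eigenvectors of M, basis selection, reduced coefficient matrix.
S, K_hat = reduce_on_M_eigenvectors(M, K, m=2000)

# 5. Binormal initial condition at the mesh nodes, projected onto the reduced basis.
x = np.linspace(-10.0, 10.0, 101)
X1, X2 = np.meshgrid(x, x)
s2 = 1.0 / 9.0
f0 = np.exp(-((X1 - 5.0) ** 2 + (X2 - 5.0) ** 2) / (2 * s2)).ravel() / (2 * np.pi * s2)
eta0 = S.T @ (M @ f0)

# 5(a). Runge-Kutta integration of the reduced system; 5(b) would call march_lie instead.
sol = solve_ivp(lambda t, e: -(K_hat @ e), (0.0, 4 * 2 * np.pi), eta0, method="RK45")
f_final = S @ sol.y[:, -1]          # nodal values of the joint pdf at tau = 4
```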
Computations were performed on a typical x86 desktop computer using programs written in Python and taking advantage of its compiled numerical libraries, with the exception of the eigensolutions, which were found using Matlab (v. R2021b) running on the same hardware. The reported run times (normalized by the time required for the computation of the RK reference solution) include any transformations associated with problem reduction and recovery of the full solution vector.

5.1. Linear Oscillator

The FP equation for a linear system has an analytical solution [1,2], and it is frequently used as a benchmark problem for testing numerical methods [4,6,8]. We consider the well-known SDOF linear oscillator (LO)
$$g_1(x_1, x_2) = x_2,$$
$$g_2(x_1, x_2) = -2\zeta\omega_0 x_2 - \omega_0^2 x_1,$$
with the parameters $\zeta = 0.05$ and $\omega_0 = 1$, the excitation $D = 0.1$, and the initial conditions $(\mu_{10}, \mu_{20}) = (5, 5)$ and $(\sigma_{10}^2, \sigma_{20}^2) = (1/9, 1/9)$. The domain for computation is square, $-10 \le x_1, x_2 \le 10$. The FE mesh is uniform, with an element size of 0.2 in both the $x_1$ and $x_2$ directions.
We begin with the FEA represented by Equations (8) and (9), applied to the state Equation (30). Using a composite solution vector u composed of snapshots taken from a reference Runge–Kutta solution of the full FE equations, the coefficients resulting from the projection of this vector onto all the eigenvectors of M are plotted in Figure 1, where they have been sorted from large to small. The norm of the error resulting when u is approximated with an increasing number of eigenvectors follows a similar trend.
When artificial diffusive terms were used in the solution of this problem on reduced bases, parameters were chosen after a limited amount of experimentation. In the x 1 direction, the parameters b 1 = 2.4 and d 1 = 1.6 were selected, such that the added terms are nonzero only for | x 1 | > 7.6 and rise to a maximum value of 1.6 on the boundary of Ω . Likewise, in the x 2 direction, with b 2 = 2.5 and d 2 = 1.3 , the boundary terms enter where | x 2 | > 7.5 , with coefficients that increase to 1.3 on Ω . While an effort was made to minimize the effect of these terms on the original PDE (beyond stabilizing clearly erroneous solutions), the results obtained were not particularly sensitive to the values chosen.
Figure 2 compares solutions computed using a reduced basis of 2000 eigenvectors, with and without artificial diffusive boundary terms, to the exact solution. The basis vectors were selected according to the projection coefficient magnitudes in Figure 1. Figure 2a shows the evolution with time of the nonstationary pdf at the origin of the phase plane. Both reduced solutions exhibit initial large errors because of the distortion of the initial conditions caused by the transformation onto the smaller basis. Both then track the exact solution until approximately τ = 6 , when the reduced solution computed without added boundary terms begins to visibly exceed the reference curve. This trend continues, and would eventually result in divergence rather than a stationary solution. In contrast, the numerical solution computed on the same basis but with diffusive terms introduced near the boundary continues to agree with the exact analytical result, exceeding it only slightly as τ approaches 20. A cross section of the stationary pdf along the displacement axis of the phase plane is plotted in Figure 2b, where it may be seen that the artificial diffusive boundary terms both improve the accuracy of the peak value and reduce spurious oscillations in the tails of the distribution. However, both reduced solutions do oscillate rather than going smoothly to zero in the tails, and consequently, they take on physically meaningless negative values at some points.
Figure 3 and Figure 4 reveal the effect of basis size, depicting the same responses discussed above, this time computed using a basis of 1000, 1500, or 2000 eigenvectors. All of the reduced problems were solved using artificial diffusive terms to improve the solutions. Collectively, these figures suggest that the use of fewer than 2000 basis vectors will result in spurious, sometimes negative numerical solutions, if not outright divergence. This is an important and somewhat surprising result: despite the simplicity and smoothness of the solution, the minimum useful basis is approximately 20% of the size of the original problem.
We now return to consideration of the full-dimensional problem and examine the use of operator splitting in its solution, without reduction. When the results of operator splitting are compared to the exact solution, no significant differences are found except that the splitting solution very slightly underestimates the peak value of the stationary pdf. The nonstationary and stationary numerical solutions are practically indistinguishable from the exact ones, and so are not shown here.
In combination with operator splitting, larger and smaller reduced bases, of sizes 1000 and 3000, were used without adding boundary terms. Trends and errors similar to those seen in the earlier reduced formulations for this system were found. We conclude that the use of a basis consisting of 2000 eigenvectors of the M matrix along with splitting the K matrix according to convective and diffusive terms produces reasonably accurate results for this problem.
Table 1 shows the run times of the different calculation methods applied to this example. Compared with Runge–Kutta integration, operator splitting and solving the problem using a state transition matrix can obviously accelerate the process. The more basis vectors used, naturally, the longer any solver will take. Solving the equation with artificial boundary terms is more time-consuming, but because of the higher accuracy, the run time cost is generally worthwhile. In many cases, the use of only operator splitting, but not reduction, may result in acceptable run times, obviating any need for artificial diffusive terms.

5.2. Duffing Oscillator

The governing equation of the nonlinear Duffing oscillator is, in state form,
$$g_1(x_1, x_2) = x_2,$$
$$g_2(x_1, x_2) = -2\zeta\omega_0 x_2 - \omega_0^2 x_1\big(\gamma + \epsilon x_1^2\big),$$
with $\zeta = 0.2$, $\omega_0 = 1$, $\gamma = -1$, $\epsilon = 0.1$, and $D = 0.4$. (Like the previous example, this system is taken from [6].) These parameters are those of a system with negative linear stiffness but positive cubic stiffness, resulting in a bimodal stationary pdf that is doubly symmetric about the origin of the phase plane and approximately Gaussian in $x_2$. The nonlinearity thus controls the stationary response despite the small numerical value of $\epsilon$. The initial conditions were taken to be $(\mu_{10}, \mu_{20}) = (0, 10)$ and $(\sigma_{10}^2, \sigma_{20}^2) = (1/2, 1/2)$, and the FE domain was $-15 \le x_1 \le 15$, $-20 \le x_2 \le 20$, with 100 elements used in each dimension.
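As a quick consistency check (not taken from the original paper): with $\gamma = -1$ and $\epsilon = 0.1$, the potential $\omega_0^2(\gamma x_1^2/2 + \epsilon x_1^4/4)$ has minima at $x_1 = \pm\sqrt{-\gamma/\epsilon}$, and the stationary solution of Caughey [3] places the two peaks of the bimodal pdf there, in agreement with the peak location near $(3.3, 0)$ reported below for this mesh.

```python
import numpy as np

gamma, epsilon = -1.0, 0.1
x1_peak = np.sqrt(-gamma / epsilon)   # ~3.16; the nearest node of the 0.3-wide x1 mesh is 3.3
print(x1_peak)
```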
Guided by the projection coefficients of a composite vector u and some experimentation, we determined that somewhat more than 4000 basis vectors were needed to capture the dynamics of this problem in reduced form. The diffusive boundary layer coefficients b 1 = 8 and d 1 = 4.6 and b 2 = 13 and d 2 = 0.1 were then chosen for the x 1 and x 2 directions, respectively.
The discretized equation for the joint pdf of the Duffing oscillator was reduced onto a basis of 4500 eigenvectors, with and without introducing artificial boundary terms in the PDE. The nonstationary response in Figure 5a is shown for the point ( 3.3 , 0 ) , which is the approximate location in the phase plane of one of the two peaks of the bimodal stationary distribution. Both the full and reduced problems were integrated here by a standard Runge–Kutta routine. When artificial diffusive boundary terms are included in the formulation, the solution of the resulting reduced problem approximates well the solution of the full problem, after a transient error attributed to the inability of the reduced basis to represent the given initial conditions accurately. Without added boundary terms, however, the solution does not approach the correct stationary value, and begins to decrease from it as probability mass is lost over the edges of the FE domain. Figure 5b, a section of the stationary pdf along the x 1 axis, shows spurious oscillations in the tails of the pdf. (The exact stationary solution shown there is from Caughey [3].) Both reduced problems underestimate the pdf in the valley between the peaks of the stationary distribution. While the inclusion of diffusive boundary terms reduces some of the extraneous ripples, it must be said that these results are not as clean as those obtained for the linear oscillator, despite the use of a much larger reduced basis (here, almost 46% of the size of the full problem).
When the results of first- and second-order splitting are compared to those of a Runge–Kutta solution of the unsplit problem, second-order splitting is found to be more accurate, but both solutions from splitting are found to agree quite well with the RK solution during the entire response, including at stationarity.
Responses found using a combination of basis reduction and operator splitting, with and without artificial diffusive boundary terms, followed the trends observed for the linear oscillator. Even when the basis was enlarged to 5000 eigenvectors, the solutions found without boundary terms deviated significantly from the reference RK integration of the unreduced system. With boundary terms, good results were obtained with the previous basis size of 4500, albeit with the same sorts of errors observed when these equations were earlier solved without splitting.
The times required to solve this example as described above are shown in Table 2. Because of the large basis required, the savings compared to the full-dimensional problem are not as dramatic as for the linear oscillator, but they are still significant. Operator splitting and advancing the solution using a state transition matrix, however, yields great improvement in computational efficiency compared to straightforward time integration using a Runge–Kutta algorithm. Further savings can be achieved by combining dimension reduction with operator splitting, but in view of the errors that result and the practical requirement for adding artificial diffusive terms to the PDE, this approach is relatively unattractive in this example.

6. Results

We have investigated two techniques to improve the speed of solution of Fokker–Planck equations as discretized using finite element analysis in the phase plane. In the first of these, reduced bases were constructed from selected eigenvectors of the coefficient matrices M and K appearing in the resulting first-order differential equation. Basis vectors obtained by solving a regular eigenproblem in M were attractive because they were real-valued, could be used to diagonalize M , and were independent of the parameters of the underlying dynamical system and the intensity of the excitation. The size of the basis required in order for the reduced problem to capture well the dynamics of the full problem was typically one-fifth to one-half that of the original system. Considering that the number of elements of the reduced coefficient matrices goes roughly as the square of this ratio, this represents an attractive savings in storage and computation; however, it does not always enable the drastic improvement in computational efficiency sometimes associated with a reduced-order model. In addition, even when a large basis was used, the reduced problem was found to be susceptible to spurious oscillations in the tails of the computed pdf, and to loss of probability mass over the edges of the computational domain. These phenomena were effectively addressed by the introduction into the original PDE of artificial diffusive terms near the boundaries of the domain. While greater gains were achieved in the linear example, in the more representative, nonlinear Duffing oscillator example, the typical reduction in run time, compared to integration by a conventional adaptive Runge–Kutta algorithm, was approximately 80%. This is certainly significant; however, we must reiterate that the reduced bases were selected with the benefit of the composite solution vector u , and that even a modest reduction in the size of the basis used was found to require the introduction of artificial diffusive terms to stabilize the solution near the boundaries of the computational domain. In view of these limitations, we cannot enthusiastically recommend the use of this reduction.
The second numerical method applied herein to the same discretized PDE was operator splitting, where the singular coefficient matrix K was written as the sum of two nonsingular matrices. This allowed the rapid computation of the joint response pdf with a uniform time step using the constant state transition matrix given by the product of the exponentials of the two component matrices. Both first- and second-order-accurate splitting schemes were considered. It was observed that, although the cost of second-order methods was not much greater, first-order splitting was usually sufficiently accurate while being simpler to program, and that separating the FE coefficient matrix K according to the physical source of the terms in the original PDE (i.e., convection or diffusion) was effective in the linear and Duffing oscillator examples studied. The resultant saving in computational effort was comparable to that obtained with basis reduction for the linear oscillator example, a little more than 80%, but savings of over 98% were observed when operator splitting was used to compute the joint pdf of a Duffing oscillator.
When these methods were combined, by using operator splitting to solve equations reduced on a basis of selected eigenvectors, the advantages of both techniques were retained. Run times for the solution phase of nonstationary problems were commonly 1% to 10% of those required for Runge–Kutta integration. This improvement was achieved with little cost in accuracy, although the caveats stated above regarding basis reduction apply here as well.

7. Conclusions

The class of dynamical systems for which the solution of the FP equation by FEA is tractable is small, effectively limited to those with only a few states. While the methods reported here will not, alone, allow the treatment of new, larger systems, they do represent a significant advance in the speed of the computation of the nonstationary pdfs of realistic, complex small systems. It is expected that these results will be useful to anyone needing to accelerate such calculations in a particular application or to test an alternative numerical approach.

Author Contributions

Conceptualization, L.A.B.; methodology, D.M.M.; software, C.Z., R.Z., T.H. and H.F.; validation, F.Y., C.Z., R.Z., T.H., and H.F.; formal analysis, D.M.M. and L.A.B.; data curation, F.Y. and T.H.; writing—original draft preparation, T.H. and H.F.; writing—review and editing, D.M.M. and H.L.; visualization, T.H. and H.F.; supervision, D.M.M. and H.L.; project administration, H.L.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China through Grant No. 51975525 and Grant No. 52005443; the Ministry of Science and Technology of China through Grant No. 2017YFC0306202; and the Ningbo Institute of Digital Twin through Grant No. S203.01.22.001 and Grant No. S203.01.22.002.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

The following notation is used in this manuscript:
Variables and functions
b, d          parameters of artificial diffusive terms
D             intensity of random excitation
E             expectation operator
f, f          joint probability density function
g             function defining system dynamics
I             identity matrix
K, M          global coefficient matrices
k, m          element coefficient matrices
k1, k2        split element matrices
K1, K2        split global matrices
N, N          finite element shape functions
p             transition probability density function
S             transformation matrix
t             time
u             composite solution vector
w             Gaussian white noise input
X, X          state variables (random)
x, x          state variables (deterministic)
α1, α2        profiles of artificial diffusive terms
β             projection coefficient
Δt            time step
δ             Dirac delta function
ζ             damping ratio
η, η          transformed coordinates
λ             eigenvalue
τ             normalized time
ϕ             eigenvector
Ω             computational domain
∂Ω            boundary of computational domain
ω0            linearized natural frequency
Subscripts
0             initial value
e             finite element-level quantity
r, s          matrix row and column indices
Diacritical mark
( · )^        dimension-reduced quantity

References

  1. Wang, M.C.; Uhlenbeck, G.E. On the Theory of the Brownian Motion II. Rev. Mod. Phys. 1945, 17, 323–342. [Google Scholar] [CrossRef]
  2. Lin, Y.K. Probabilistic Theory of Structural Dynamics; McGraw-Hill: New York, NY, USA, 1967. [Google Scholar]
  3. Caughey, T.K. Nonlinear Theory of Random Vibrations. In Advances in Applied Mechanics; Yih, C.S., Ed.; Elsevier: Amsterdam, The Netherlands, 1971; Volume 11, pp. 209–253. [Google Scholar]
  4. Bergman, L.A.; Heinrich, J.C. On the Moments of Time to First Passage of the Linear Oscillator. Earthq. Eng. Struct. Dyn. 1981, 9, 197–204. [Google Scholar] [CrossRef]
  5. Langley, R.S. A Finite Element Method for the Statistics of Non-Linear Random Vibration. J. Sound Vib. 1985, 101, 41–54. [Google Scholar] [CrossRef]
  6. Spencer, B.F., Jr.; Bergman, L.A. On the Numerical Solution of the Fokker-Planck Equation for Nonlinear Stochastic Systems. Nonlin. Dyn. 1993, 4, 357–372. [Google Scholar] [CrossRef]
  7. El-Gebeily, M.A.; Shabaik, H.E.E. Approximate Solution of the Fokker-Planck-Kolmogorov Equation by Finite Elements. Commun. Numer. Meth. Eng. 1994, 10, 763–771. [Google Scholar] [CrossRef]
  8. Masud, A.; Bergman, L.A. Application of Multi-Scale Finite Element Methods to the Solution of the Fokker-Planck Equation. Comput. Meth. Appl. Mech. Eng. 2005, 194, 1513–1526. [Google Scholar] [CrossRef]
  9. Floris, C. Numeric Solution of the Fokker-Planck-Kolmogorov Equation. Engineering 2013, 5, 975–988. [Google Scholar] [CrossRef]
  10. Kumar, P.; Narayanan, S.; Gupta, S. Finite Element Solution of Fokker-Planck Equation of Nonlinear Oscillators Subjected to Colored Non-Gaussian Noise. Probab. Eng. Mech. 2014, 38, 143–155. [Google Scholar] [CrossRef]
  11. Král, R.; Náprstek, J. Theoretical Background and Implementation of the Finite Element Method for Multi-Dimensional Fokker-Planck Equation Analysis. Adv. Eng. Softw. 2017, 113, 54–75. [Google Scholar] [CrossRef]
  12. Banks, H.T.; Tran, H.T.; Woodward, D.E. Estimation of Variable Coefficients in the Fokker-Planck Equations using Moving Node Finite Elements. SIAM J. Numer. Anal. 1993, 30, 1574–1602. [Google Scholar] [CrossRef]
  13. Franca, L.P.; Hauke, G.; Masud, A. Revisiting Stabilized Finite Element Methods for the Advective-Diffusive Equation. Comput. Meth. Appl. Mech. Eng. 2006, 195, 1560–1572. [Google Scholar] [CrossRef]
  14. Galán, R.F.; Ermentrout, G.B.; Urban, N.N. Stochastic Dynamics of Uncoupled Neural Oscillators: Fokker-Planck Studies with the Finite Element Method. Phys. Rev. E 2006, 76, 056110. [Google Scholar] [CrossRef] [PubMed]
  15. Kumar, M.; Chakravorty, S.; Singla, P.; Junkins, J.L. The Partition of Unity Finite Element Approach with hp-Refinement for the Stationary Fokker-Planck Equation. J. Sound Vib. 2009, 327, 144–162. [Google Scholar] [CrossRef]
  16. Sepehrian, B.; Radpoor, M.K. Numerical Solution of Non-Linear Fokker-Planck Equation using Finite Differences Method and the Cubic Spline Functions. Appl. Math. Comput. 2015, 262, 187–190. [Google Scholar] [CrossRef]
  17. Brunken, J.; Smetana, K. Stable and Efficient Petrov-Galerkin Methods for a Kinetic Fokker-Planck Equation. SIAM J. Numer. Anal. 2022, 60, 157–179. [Google Scholar] [CrossRef]
  18. Mahajan, A.J.; Dowell, E.H.; Bliss, D.B. Eigenvalue Calculation Procedure for an Euler-Navier-Stokes Solver with Application to Flows Over Airfoils. J. Comput. Phys. 1991, 97, 398–413. [Google Scholar] [CrossRef]
  19. Mahajan, A.J.; Dowell, E.H.; Bliss, D.B. Role of Artificial Viscosity in Euler and Navier-Stokes Solvers. AIAA J. 1991, 29, 555–559. [Google Scholar] [CrossRef]
  20. Dowell, E.H.; Hall, K.C.; Romanowski, M.C. Eigenmode Analysis in Unsteady Aerodynamics: Reduced Order Models. Appl. Mech. Rev. 1997, 50, 371–385. [Google Scholar] [CrossRef]
  21. Hall, K.; Thomas, J.; Dowell, E. Reduced-Order Modelling of Unsteady Small-Disturbance Flows using a Frequency-Domain Proper Orthogonal Decomposition Technique. In Proceedings of the 37th Aerospace Sciences Meeting and Exhibit, Reno, NV, USA, 11–14 January 1999; pp. 1–11. [Google Scholar]
  22. Ungnad, S.; Sachau, D. Interior near-field acoustic holography based on finite elements. J. Acoust. Soc. Am. 2019, 146, 1758–1768. [Google Scholar] [CrossRef]
  23. Lötstedt, P.; Ferm, L. Dimensional Reduction of the Fokker-Planck Equation for Stochastic Chemical Reactions. Multiscale Model. Sim. 2006, 5, 593–614. [Google Scholar] [CrossRef]
  24. Leonenko, G.M.; Phillips, T.N. On the Solution of the Fokker-Planck Equation using a High-Order Reduced Basis Approximation. Comput. Meth. Appl. Mech. Eng. 2009, 199, 158–168. [Google Scholar] [CrossRef]
  25. Er, G.K. Methodology for the Solutions of Some Reduced Fokker-Planck Equations in High Dimensions. Ann. Phys. 2011, 523, 247–258. [Google Scholar] [CrossRef]
  26. Chen, J.; Lin, P. Dimension-Reduction of FPK Equation via Equivalent Drift Coefficient. Theor. Appl. Mech. Lett. 2014, 4, 013002. [Google Scholar] [CrossRef]
  27. Chen, J.; Rui, Z. Dimension-Reduced FPK Equation for Additive White-Noise Excited Nonlinear Structures. Probab. Eng. Mech. 2018, 53, 1–13. [Google Scholar] [CrossRef]
  28. Zhu, H.T. Probabilistic Solution of Some Multi-Degree-of-Freedom Nonlinear Systems Under External Independent Poisson White Noises. J. Acoust. Soc. Am. 2012, 131, 4550–4557. [Google Scholar] [CrossRef]
  29. Liang, C.; Zhang, J.; Lian, J.; Liu, F.; Li, X. Probabilistic Analysis for the Response of Nonlinear Base Isolation System Under the Ground Excitation Induced by High Dam Flood Discharge. Earthq. Eng. Eng. Vib. 2017, 16, 841–857. [Google Scholar] [CrossRef]
  30. Chorin, A.J.; Hughes, T.J.R.; McCracken, M.F.; Marsden, J.E. Product Formulas and Numerical Algorithms. Commun. Pure Appl. Math. 1978, 31, 205–256. [Google Scholar] [CrossRef]
  31. Usadi, A.; Dawson, C. 50 Years of ADI Methods: Celebrating the Contributions of Jim Douglas, Don Peaceman, and Henry Rachford. SIAM News 2006, 39, 3. [Google Scholar]
  32. Hestenes, M.R. Multiplier and Gradient Methods. J. Optimiz. Theory Appl. 1969, 4, 303–320. [Google Scholar] [CrossRef]
  33. Powell, M.J.D. A Method for Nonlinear Constraints in Minimization Problems. In Optimization; Fletcher, R., Ed.; Academic Press: New York, NY, USA, 1969; pp. 283–298. [Google Scholar]
  34. Powell, M.J.D. On Search Directions for Minimization Algorithms. Math. Program. 1973, 4, 193–201. [Google Scholar] [CrossRef]
  35. Strang, G. On the Construction and Comparison of Difference Schemes. SIAM J. Numer. Anal. 1968, 5, 506–517. [Google Scholar] [CrossRef]
  36. Speth, R.L.; Green, W.H.; MacNamara, S.; Strang, G. Balanced Splitting and Rebalanced Splitting. SIAM J. Numer. Anal. 2013, 51, 3084–3105. [Google Scholar] [CrossRef]
  37. Goldstein, T.; Osher, S. The Split Bregman Method for L1-Regularized Problems. SIAM J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  38. Zorzano, M.P.; Mais, H.; Vazquez, L. Numerical Solution for Fokker-Planck Equations in Accelerators. Physica D 1998, 113, 379–381. [Google Scholar] [CrossRef]
  39. Mais, H.; Zorzano, M.P. Stochastic Dynamics and Fokker-Planck Equation in Accelerator Physics. Nuov. Cim. A 1999, 112, 467–474. [Google Scholar] [CrossRef]
  40. Zorzano, M.P.; Mais, H.; Vazquez, L. Numerical Solution of Two Dimensional Fokker-Planck Equations. Appl. Math. Comput. 1999, 98, 109–117. [Google Scholar] [CrossRef]
  41. Duffy, M.; Chung, S.J.; Bergman, L.A. A general Bayesian Nonlinear Estimation Method using Resampled Smooth Particle Hydrodynamics Solutions of the Underlying Fokker-Planck Equation. Int. J. Non-Linear Mech. 2022, 146, 104134. [Google Scholar] [CrossRef]
  42. Sirignano, J.; Spiliopoulos, K. DGM: A deep learning algorithm for solving partial differential equations. J. Comput. Phys. 2018, 375, 1339–1364. [Google Scholar] [CrossRef]
  43. Al-Aradi, A.; Correia, A.; Jardim, G.; Naiff, D.D.F.; Saporito, Y. Extensions of the deep Galerkin method. Appl. Math. Comput. 2022, 430, 127287. [Google Scholar] [CrossRef]
  44. Sirignano, J.; MacArt, J.; Spiliopoulos, K. PDE-constrained models with neural network terms: Optimization and global convergence. J. Comput. Phys. 2023, 481, 112016. [Google Scholar] [CrossRef]
  45. Kumar, M.; Pandit, S. An efficient algorithm based on Haar wavelets for numerical simulation of Fokker-Planck equations with constants and variable coefficients. Int. J. Numer. Meth. Heat Fluid Flow 2015, 25, 41–56. [Google Scholar] [CrossRef]
  46. Ankur; Jiwari, R. New multiple analytic solitonary solutions and simulation of (2+1)-dimensional generalized Benjamin-Bona-Mahony-Burgers model. Nonlin. Dyn. 2023, 111, 13297–13325. [Google Scholar] [CrossRef]
  47. Ankur; Jiwari, R. A new error estimates of finite element method for (2+1)-dimensional nonlinear advection-diffusion model. Appl. Numer. Math. 2024, 198, 22–42. [Google Scholar] [CrossRef]
  48. Strang, G. Linear Algebra and Its Applications, 4th ed.; Brooks: Belmont, CA, USA, 2006. [Google Scholar]
  49. Glowinski, R.; Osher, S.J.; Yin, W. (Eds.) Splitting Methods in Communication, Imaging, Science, and Engineering; Springer: Cham, Switzerland, 2016. [Google Scholar]
  50. Dormand, J.R.; Prince, P.J. A family of embedded Runge-Kutta formulae. J. Comput. Appl. Math. 1980, 6, 19–26. [Google Scholar] [CrossRef]
Figure 1. Coefficients resulting from the projection of the composite solution vector u onto the eigenvectors of M for the linear oscillator.
Figure 2. Comparison between solutions computed with and without added boundary terms when the LO problem is reduced onto a basis of 2000 eigenvectors: (a) time history of the nonstationary pdf at the origin, $(x_1, x_2) = (0, 0)$; (b) cross section of the pdf along $x_2 = 0$ at stationarity ($\tau = 20$).
Figure 3. Effect of basis size on LO solutions. All computations used diffusive boundary terms. (a) Nonstationary pdf at origin of phase plane. (b) Cross section of stationary pdf along displacement axis.
Figure 4. Solutions for the LO computed using (a) an inadequate basis of size 1500 and (b) an improved basis of size 2000. The computations used added boundary terms. Results of Runge–Kutta integration of the full problem are shown for reference.
Figure 5. Comparison between solutions computed with and without artificial boundary terms when the Duffing oscillator problem is reduced onto a basis of 4500 eigenvectors: (a) time history of the nonstationary pdf at the location of a stationary peak, $(x_1, x_2) = (3.3, 0)$; (b) cross section of the pdf along $x_2 = 0$ at stationarity ($\tau = 35$).
Table 1. Integration times for various methods of solution for the linear oscillator ($\tau$ = 0:0.01:4). The size of the basis used in each method is shown in parentheses. ABT denotes the use of artificial boundary terms. Run times are normalized with respect to the Runge–Kutta solution of the full FE problem.

Method                                    Time, %
RK (9801)                                 100.00
RK (3000)                                  15.90
RK (2000)                                  11.98
RK (3000) and ABT                          17.17
RK (2000) and ABT                          12.70
1st-order splitting (9801)                 17.36
1st-order splitting (3000)                 10.07
1st-order splitting (2000)                  9.69
1st-order splitting (3000) and ABT         10.16
1st-order splitting (2000) and ABT          9.81
Table 2. Integration times for various methods of solution for the Duffing oscillator ($\tau$ = 0:0.01:4). The size of the basis used in each method is shown in parentheses. ABT denotes the use of artificial boundary terms. Run times are normalized with respect to the Runge–Kutta solution of the full FE problem.

Method                                    Time, %
RK (9801)                                 100.00
RK (4500)                                  19.84
RK (4500) and ABT                          19.76
1st-order splitting (9801)                  1.57
2nd-order splitting (9801)                  1.61
1st-order splitting (4500)                  1.00
1st-order splitting (4500) and ABT          1.03
