Article

Numerical Generation of Trajectories Statistically Consistent with Stochastic Differential Equations

by
Mykhaylo Evstigneev
Department of Physics and Physical Oceanography, Memorial University of Newfoundland, St. John’s, NL A1B 3X7, Canada
Entropy 2025, 27(7), 729; https://doi.org/10.3390/e27070729
Submission received: 20 May 2025 / Revised: 24 June 2025 / Accepted: 4 July 2025 / Published: 6 July 2025
(This article belongs to the Section Statistical Physics)

Abstract

A weak second-order numerical method for generating trajectories based on stochastic differential equations (SDE) is developed. The proposed approach bypasses direct noise realization by updating the system’s state using independent Gaussian random variables so as to reproduce the first three cumulants of the state variable at each time step to the second order in the time-step size. The update rule for the state variable is derived based on the system’s Fokker–Planck equation in an arbitrary number of dimensions. The high accuracy of the method as compared to the standard Milstein algorithm is demonstrated on the example of Büttiker’s ratchet. While the method is second-order accurate in the time step, it can be extended to systematically generate higher-order terms of the stochastic Taylor expansion approximating the solution of the SDE.

1. Introduction

Stochastic differential equations (SDEs) are an essential tool for research and modelling across a wide range of disciplines, from physics to biology, climatology, and finance [1,2,3]. The standard schemes to solve a SDE are based on the stochastic Taylor expansion [4,5,6,7,8,9] illustrated here on a one-dimensional Langevin equation (LE):
$$\dot z(t) = h(z(t)) + g(z(t))\,\xi(t), \tag{1}$$
in which h ( z ) and g ( z ) are known functions of the state variable z, the overdot indicates the time derivative, and the noise ξ ( t ) is a random function of time. To derive the update rule for the state variable from the current time t to the next time t + Δ t , an integration of Equation (1) is first performed:
$$z(t') = z(t) + \int_t^{t'} ds\,\bigl[h(z(s)) + g(z(s))\,\xi(s)\bigr], \tag{2}$$
over an interval $t < t' \le t + \Delta t$ not exceeding the time step $\Delta t$ intended to be used in the numerical solution of (1). Within the integral (2), the functions $h(z(s))$ and $g(z(s))$ are Taylor-expanded around the initial point $z(t)$ to a desired order, and the original Equation (2) is then back-substituted into this expansion. Finally, the state variable $z(s)$ in the integral is replaced with its initial value $z(t)$, and the upper integration limit is set to $t' = t + \Delta t$. As a result, one obtains the next-step value $z' = z(t+\Delta t)$ in terms of the initial value $z = z(t)$. In the lowest order, one recovers the Milstein algorithm [10], a standard method of solving SDEs, of which various extensions and refinements exist [8,11,12,13,14,15].
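To make the lowest-order truncation concrete, the one-dimensional Milstein update for the Stratonovich Equation (1) can be sketched in Python. This is an illustrative sketch, not code from the paper; the function name and interface are our own, and the $\tfrac12 g g'\,\Delta t\,\Gamma^2$ correction is the standard Stratonovich form of the Milstein term.

```python
import math
import random

def milstein_step(x, h, g, dg, dt, rng=random):
    """One Milstein update for dx = h(x) dt + g(x) dW (Stratonovich).

    h, g, dg are callables: drift, noise amplitude, and its x-derivative.
    The Gaussian increment is dW = sqrt(dt) * Gamma with Gamma ~ N(0, 1).
    """
    gamma = rng.gauss(0.0, 1.0)
    return (x + h(x) * dt
            + g(x) * math.sqrt(dt) * gamma
            + 0.5 * g(x) * dg(x) * dt * gamma * gamma)
```

For additive noise ($g = \mathrm{const}$, $g' = 0$) the correction term vanishes and the step reduces to the Euler–Maruyama update.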
The so-obtained numerical scheme is a strong one, i.e., it reproduces the stochastic trajectory for each specific noise function $\xi(t)$. For most practical purposes, however, the explicit stochastic trajectory $z(t)$ corresponding to a particular noise realization $\xi(t)$ is unnecessary. Instead, one typically aims to generate multiple trajectories that are statistically consistent with the SDE, allowing for the computation of meaningful quantities, such as average values and correlation functions of various types. This statistical approach, also known as weak integration of the SDE (1), is just as valuable as the strong one.
Building upon this perspective, alternative methods were introduced [16,17], in which the system's state at the next time step $t + \Delta t$ is treated as a random variable sampled from the conditional probability density $P(z', t+\Delta t\,|\,z, t)$ of the state $z'$ at time $t + \Delta t$ given the initial state z at time t. Due to the smallness of $\Delta t$, efficient approximations for the function $P(z', t+\Delta t\,|\,z, t)$ can be developed.
As a matter of fact, the full conditional probability density is not necessary for the generation of stochastic trajectories; knowing only its first few moments suffices to develop an effective numerical procedure for trajectory generation [18]. This principle was applied in a recent study [19], where the Langevin Equation (1) driven by Gaussian white noise ξ ( t ) was analyzed for a one-dimensional system.
The present work generalizes the weak second-order scheme from [19] to the white noise-driven LE in arbitrary dimensionality N. The significance of the method developed here lies in its capacity to systematically generate the terms in the weakly convergent stochastic Taylor expansion. In this respect, it plays a role analogous to the stochastic Taylor expansion formulated by Kloeden and Platen for strongly convergent schemes [4]. Notably, the Milstein algorithm emerges as the first-order truncation of either expansion in Δ t : the strong Taylor expansion of Kloeden–Platen or the weak one presented in this work. The formulation developed here can serve as a benchmark framework for assessing the accuracy of integration schemes not explicitly derived from stochastic Taylor expansions, such as midpoint-type algorithms, leapfrog methods, predictor–corrector schemes, or stochastic Runge–Kutta methods.
This paper is structured as follows. In the next section, the main result of this work is formulated: a recipe to generate the system's state $z'_n$ at the time $t + \Delta t$ given the state $z_n$ at the earlier simulation time t for a multidimensional version of the LE (1) with Gaussian white noises. The subsequent section focuses on the derivation of this update rule. An illustrative example, the Büttiker ratchet driven by a spatially periodic temperature profile, is then provided and analyzed using the method developed in this paper and with the help of the classical Milstein scheme. Finally, a possible systematic improvement of the method is briefly discussed and a few concluding remarks are made.

2. Numerical Generation of Stochastic Trajectories

Consider a system whose state, represented by a collection of variables $z = (z_1, z_2, \ldots, z_N)$, evolves in time according to the N-dimensional LE:
$$\dot z_i = h_i(z) + g_{ij}(z)\,\xi_j(t), \qquad \langle\xi_i(t)\rangle = 0, \qquad \langle\xi_i(t)\,\xi_j(t')\rangle = \delta_{ij}\,\delta(t - t'), \tag{3}$$
in which the functions $h_i(z)$ and $g_{ij}(z)$ are real-valued and at least three times differentiable, the matrix $g_{ij}(z)$ is non-degenerate for all values of the state variable z, the independent noises $\xi_i(t)$ are unbiased, Gaussian, and white, the noise term $g_{ij}(z)\,\xi_j(t)$ is interpreted in the Stratonovich sense, and summation over repeated indices is implied.
Suppose that the system was in a state $z = z(t)$ at time t and entered a state
$$z' = z(t+\Delta t) = z + \Delta z \tag{4}$$
at the next time $t + \Delta t$. The central result of this paper is the propagation rule $z \to z'$:
$$z'_n = z_n + a_n + b_{nm}\,\Gamma_m + c_{nml}\,(\Gamma_m\Gamma_l - \delta_{ml}), \tag{5}$$
where $\Gamma_n$ are independent Gaussian random variables drawn from the probability density
$$W(\Gamma_1, \Gamma_2, \ldots) = \prod_i \frac{e^{-\Gamma_i^2/2}}{\sqrt{2\pi}}, \tag{6}$$
and the coefficients are given by
$$\begin{aligned}
a_n &= \Delta t\,H_n + \frac{\Delta t^2}{2}\bigl(G_{ij}H_{n,i,j} + H_i H_{n,i}\bigr) + O(\Delta t^3),\\
b_{nm} &= g_{nm}\sqrt{\Delta t} + \frac{\Delta t^{3/2}}{2}\bigl[H_{n,i}\,g_{im} + G_{ij}\,g_{nm,i,j} + H_i\,g_{nm,i} + \sigma_{n[ij]}\sigma_{k[ij]}\,(g^{-1})_{mk}\bigr] + O(\Delta t^{5/2}),\\
c_{nml} &= \Delta t\,\sigma_{nml} + O(\Delta t^2). \tag{7}
\end{aligned}$$
For notational brevity, the argument z is suppressed on the right-hand sides, where $g_{ij} \equiv g_{ij}(z)$ and
$$G_{ij} = \tfrac12\,g_{ik}g_{jk}, \qquad H_i = h_i + \sigma_{ijj}, \qquad \sigma_{ijk} = \tfrac12\,g_{ij,l}\,g_{lk} \tag{8}$$
are likewise functions of the initial state z. Partial differentiation is represented by a comma, e.g., $\partial A/\partial z_i \equiv A_{,i}$; square brackets around subscripts indicate antisymmetrization, whereas round brackets signify symmetrization. Namely, for any object $A_{i_1 \ldots i_k}$ that depends on k indices $i_1, \ldots, i_k$, its symmetrized and antisymmetrized versions are
$$A_{(i_1 \ldots i_k)} = \frac{1}{k!}\sum_P A_{P(i_1 \ldots i_k)}, \qquad A_{[i_1 \ldots i_k]} = \frac{1}{k!}\sum_P (-1)^P A_{P(i_1 \ldots i_k)}, \tag{9}$$
where summation is performed over all permutations P of the subscripts $i_1, \ldots, i_k$, and $(-1)^P$ equals $+1$ or $-1$ for even and odd permutations, respectively. We furthermore adopt the convention that the dummy indices inside the brackets are not symmetrized over; hence, the placement of the brackets is important. For example,
$$A_{(ijkk)} \equiv A_{(ij)kk} = \tfrac12\,(A_{ijkk} + A_{jikk}) \tag{10}$$
is, in general, not equal to
$$A_{(ijk)k} = \tfrac16\,(A_{ijkk} + A_{jikk} + A_{ikjk} + A_{jkik} + A_{kijk} + A_{kjik}), \tag{11}$$
unless $A_{ijkl}$ is symmetric in the first three subscripts.
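To make the bracket convention concrete, here is a small Python sketch (our illustration, not part of the paper) that symmetrizes or antisymmetrizes an object, given as a function of its integer indices, over chosen index positions only, leaving all other positions (e.g., dummy indices) untouched:

```python
import math
from itertools import permutations

def perm_sign(p):
    """Sign (-1)^P of a permutation p of 0..k-1, via its inversion count."""
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def brackets(A, slots, anti=False):
    """Return a function implementing A_(...) (round brackets) or
    A_[...] (square brackets) over the index positions listed in `slots`."""
    def wrapped(*idx):
        total = 0.0
        for p in permutations(range(len(slots))):
            jdx = list(idx)
            orig = [idx[s] for s in slots]
            for a, s in enumerate(slots):
                jdx[s] = orig[p[a]]          # permute only the chosen slots
            sign = perm_sign(p) if anti else 1
            total += sign * A(*jdx)
        return total / math.factorial(len(slots))
    return wrapped

# Example: A_(ij)kk = (A_ijkk + A_jikk)/2 for a generic 4-index object
A = lambda i, j, k, l: i * 1000 + j * 100 + k * 10 + l
A_sym01 = brackets(A, [0, 1])                # symmetrize over the first two slots
```

Antisymmetrizing an object that is symmetric in the chosen slots gives zero, which is the fact used repeatedly in the derivation below.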

3. Derivation of the Propagation Rule

The idea of the derivation is to choose the coefficients $a_n$, $b_{nm}$, and $c_{nml}$ in Equation (5) so as to correctly reproduce the first three moments $\mu_n = \overline{\Delta z_n}$, $\mu_{nm} = \overline{\Delta z_n \Delta z_m}$, and $\mu_{nml} = \overline{\Delta z_n \Delta z_m \Delta z_l}$ of the displacements $\Delta z_n = z'_n - z_n$. The moments can be obtained from the characteristic function
$$C(\zeta) = \overline{e^{\zeta_s \Delta z_s}} = e^{-\zeta_s z_s}\,\overline{e^{\zeta_s z'_s}} \tag{12}$$
by differentiation, e.g., $\mu_n = \partial C/\partial\zeta_n|_{\zeta=0}$, etc. To ensure the existence of $C(\zeta)$, the parameters $\zeta_n$ are taken purely imaginary.
Alternatively, we may focus on the first three cumulants, defined as the derivatives of the characteristic function’s natural logarithm [2],
$$\kappa_{n_1 \ldots n_k} = \left.\frac{\partial^k \ln C(\zeta)}{\partial\zeta_{n_1} \cdots \partial\zeta_{n_k}}\right|_{\zeta=0}. \tag{13}$$
The first three cumulants happen to coincide with the respective central moments and are expressed in terms of the unknown coefficients in Equation (5) as
$$\begin{aligned}
\kappa_n &= \overline{\Delta z_n} = a_n,\\
\kappa_{nm} &= \overline{(\Delta z_n - \overline{\Delta z_n})(\Delta z_m - \overline{\Delta z_m})} = b_{ni}b_{mi} + 2\,c_{n(ij)}c_{m(ij)},\\
\kappa_{nml} &= \overline{(\Delta z_n - \overline{\Delta z_n})(\Delta z_m - \overline{\Delta z_m})(\Delta z_l - \overline{\Delta z_l})} = 6\,b_{(ni}b_{mj}c_{l(ij))} + 8\,c_{n(ij)}c_{m(jk)}c_{l(ik)}, \tag{14}
\end{aligned}$$
where the round brackets indicate symmetrization with respect to the free indices, but not the dummy ones. The derivation details of Equation (14) are provided in Appendix A. The extra symmetrization brackets are placed around the indices i and j in the first term of the last expression to emphasize that only the symmetric part of $c_{lij}$ contributes to the third cumulant $\kappa_{nml}$.
To obtain the cumulants (13) based on the initial state variable z, we resort to the Fokker–Planck equation (FPE) for the transition probability density $P(z,t|z_0,t_0)$ from the state $z_0$ at time $t_0$ to the state z at time t [2]:
$$\dot P(z,t|z_0,t_0) = \hat L(z)\,P(z,t|z_0,t_0) = \left[\tfrac12\,g_{ij}g_{kj}\,P_{,k} - \bigl(h_i - \tfrac12\,g_{ij}g_{kj,k}\bigr)P\right]_{,i} \tag{15}$$
with the initial condition
$$P(z,t_0|z_0,t_0) = \delta(z - z_0). \tag{16}$$
The formal solution of the FPE reads
$$P(z,t|z_0,t_0) = e^{\hat L(z)(t-t_0)}\,\delta(z - z_0). \tag{17}$$
Replacing the earlier time $t_0$ with t and the later time t with $t + \Delta t$, we obtain the expectation value of an arbitrary state function $f(z')$ at $t + \Delta t$, given that at time t the system was in the state z:
$$\overline{f(z')} = \int dz'\,f(z')\,e^{\hat L(z')\Delta t}\,\delta(z' - z) = \int dz'\,\delta(z' - z)\,e^{\hat L^\dagger(z')\Delta t} f(z') = e^{\hat L^\dagger(z)\Delta t} f(z). \tag{18}$$
Here, the adjoint Fokker–Planck operator is defined by
$$\hat L^\dagger f = G_{ij}\,f_{,i,j} + H_i\,f_{,i}, \tag{19}$$
and the functions $H_i$ and $G_{ij}$ are introduced in Equation (8).
With $f(z') = e^{\zeta_s z'_s}$, we obtain the characteristic function (12):
$$C(\zeta) = e^{-\zeta_s z_s}\,e^{\hat L^\dagger \Delta t}\,e^{\zeta_s z_s} = 1 + e^{-\zeta_s z_s}\sum_{n=1}^\infty \frac{\Delta t^n}{n!}\,(\hat L^\dagger)^n\,e^{\zeta_s z_s}. \tag{20}$$
Its natural logarithm is written with the help of the Taylor series
$$\ln C(\zeta) = \ln(1+S) = S - \frac{S^2}{2} + \frac{S^3}{3} - \cdots, \qquad S = e^{-\zeta_s z_s}\sum_{n=1}^\infty \frac{\Delta t^n}{n!}\,(\hat L^\dagger)^n\,e^{\zeta_s z_s}. \tag{21}$$
Specifically, to the second order in $\Delta t$,
$$\begin{aligned}
\ln C(\zeta) &= \Delta t\,e^{-\zeta_s z_s}\hat L^\dagger e^{\zeta_s z_s} + \frac{\Delta t^2}{2}\left[e^{-\zeta_s z_s}(\hat L^\dagger)^2 e^{\zeta_s z_s} - e^{-2\zeta_s z_s}\bigl(\hat L^\dagger e^{\zeta_s z_s}\bigr)^2\right] + O(\Delta t^3)\\
&= \Delta t\,(G_{ij}\zeta_i\zeta_j + H_i\zeta_i) + \frac{\Delta t^2}{2}\left[\hat L^\dagger(G_{ij}\zeta_i\zeta_j + H_i\zeta_i) + 2\,G_{kl}\,(G_{ij,k}\zeta_i\zeta_j + H_{i,k}\zeta_i)\,\zeta_l\right] + O(\Delta t^3). \tag{22}
\end{aligned}$$
Here, the terms that multiply $\Delta t$ and $\Delta t^2$ were grouped together, and then the identity
$$\hat L^\dagger e^{\zeta_s z_s} = (G_{ij}\zeta_i\zeta_j + H_i\zeta_i)\,e^{\zeta_s z_s} \tag{23}$$
was used together with the "product rule" valid for arbitrary state functions $\phi(z)$ and $\chi(z)$:
$$\hat L^\dagger(\phi\chi) = \phi\,\hat L^\dagger\chi + \chi\,\hat L^\dagger\phi + 2\,G_{ij}\,\phi_{,i}\,\chi_{,j}. \tag{24}$$
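The product rule (24) is easy to verify numerically in one dimension, where the adjoint operator reduces to $\hat L^\dagger f = G f'' + H f'$. The sketch below (our illustration; the polynomial test functions and the sample $G$, $H$ are arbitrary choices) checks the identity at a point:

```python
def check_product_rule(x):
    """Return |L(phi*chi) - [phi*L(chi) + chi*L(phi) + 2*G*phi'*chi']| at x,
    for the 1D operator L f = G f'' + H f' with sample G, H, phi, chi."""
    G = lambda x: 1.0 + x * x          # sample diffusion coefficient
    H = lambda x: 0.5 * x              # sample drift coefficient
    # phi = x^2, chi = x^3 and their exact derivatives
    phi, dphi, d2phi = x * x, 2 * x, 2.0
    chi, dchi, d2chi = x ** 3, 3 * x * x, 6 * x
    # L applied to phi, to chi, and to the product phi*chi = x^5
    Lphi = G(x) * d2phi + H(x) * dphi
    Lchi = G(x) * d2chi + H(x) * dchi
    Lprod = G(x) * 20 * x ** 3 + H(x) * 5 * x ** 4
    rhs = phi * Lchi + chi * Lphi + 2 * G(x) * dphi * dchi
    return abs(Lprod - rhs)
```

The residual vanishes identically for any choice of $G$ and $H$, which is the content of Equation (24).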
Differentiating Equation (22), we obtain the first three cumulants:
$$\begin{aligned}
\kappa_n &= \Delta t\,H_n + \frac{\Delta t^2}{2}\,\hat L^\dagger H_n + O(\Delta t^3),\\
\kappa_{nm} &= 2\,G_{nm}\,\Delta t + \bigl[2\,G_{i(n}H_{m),i} + \hat L^\dagger G_{nm}\bigr]\Delta t^2,\\
\kappa_{nml} &= 6\,\Delta t^2\,G_{i(n}G_{ml),i} + O(\Delta t^3). \tag{25}
\end{aligned}$$
The first line immediately gives an expression for the coefficient $a_n$ from Equation (5) according to the first Equation (14). We now need to solve the remaining two Equations in (14) with the cumulants $\kappa_{nm}$ and $\kappa_{nml}$ from Equation (25) to find the coefficients $b_{nm}$ and $c_{nml}$. We look for them in the form of expansions in the time step:
$$b_{nm} = \alpha_{nm}\sqrt{\Delta t} + \beta_{nm}\,\Delta t^{3/2} + O(\Delta t^{5/2}), \qquad c_{nml} = \Delta t\,\sigma_{nml} + O(\Delta t^2). \tag{26}$$
Substitution of these expansions into Equation (14) gives the second and third cumulants in terms of the yet-unknown coefficients $\alpha_{nm}$, $\beta_{nm}$, and $\sigma_{nml}$:
$$\begin{aligned}
\kappa_{nm} &= \alpha_{ni}\alpha_{mi}\,\Delta t + 2\,\Delta t^2\bigl(\alpha_{(ni}\beta_{mi)} + \sigma_{n(ij)}\sigma_{m(ij)}\bigr) + O(\Delta t^3),\\
\kappa_{nml} &= 6\,\Delta t^2\,\alpha_{(ni}\alpha_{mj}\sigma_{l(ij))} + O(\Delta t^3). \tag{27}
\end{aligned}$$
Comparing the term proportional to $\Delta t$ in $\kappa_{nm}$ with the second Equation (25), we find that $\alpha_{ni}\alpha_{mi} = 2G_{nm} = g_{ni}g_{mi}$. Since this equality must hold for an arbitrary matrix function $g_{nm}(z)$, we can identify
$$\alpha_{nm} = g_{nm}. \tag{28}$$
Next, we proceed to the calculation of $\sigma_{nml}$ based on the third cumulant in Equation (25), which is rewritten as
$$\kappa_{nml} = 3\,\Delta t^2\,g_{(nj}g_{mk}g_{lk,i}g_{ij)} + O(\Delta t^3). \tag{29}$$
Here, we used the fact that $G_{in} = G_{ni} = \tfrac12\,g_{nj}g_{ij}$ and $G_{ml,i} = \tfrac12\,(g_{mk,i}g_{lk} + g_{mk}g_{lk,i}) = g_{(mk}g_{lk),i}$.
Due to the symmetrization, we can interchange the indices n and m in Equation (29). Further, we interchange the dummy indices j and k and write
$$\kappa_{nml} = 3\,\Delta t^2\,g_{(nj}g_{mk}g_{lj,i}g_{ik)} + O(\Delta t^3). \tag{30}$$
On the other hand, the second Equation (27) and Equation (28) give
$$\kappa_{nml} = 6\,\Delta t^2\,g_{(nj}g_{mk}\sigma_{l(jk))} + O(\Delta t^3). \tag{31}$$
A comparison of Equation (31) with the arithmetic average of Equations (29) and (30) allows one to identify
$$\sigma_{l(jk)} = \tfrac12\,g_{l(j,i}g_{ik)}, \qquad \sigma_{ijk} = \tfrac12\,g_{ij,l}\,g_{lk}, \tag{32}$$
as stated in the third line of Equation (7) and in Equation (8).
We substitute the expressions for $\alpha_{nm}$ and $\sigma_{nml}$ into the first Equation (27) and compare the terms that multiply $\Delta t^2$ with the respective terms in the second Equation (25). Based on the product rule (24), we can write
$$\hat L^\dagger G_{nm} = \tfrac12\,\hat L^\dagger(g_{ni}g_{mi}) = g_{(ni}\hat L^\dagger g_{mi)} + \tfrac12\,g_{kj}g_{lj}\,g_{ni,k}\,g_{mi,l} = g_{(ni}\hat L^\dagger g_{mi)} + 2\,\sigma_{nij}\sigma_{mij}, \tag{33}$$
where the previously obtained expression for $\sigma_{ijk}$ is used in the last equality. The first term in the brackets of the second Equation (25) is $G_{i(n}H_{m),i} = \tfrac12\,g_{(ni}g_{ji}H_{m,j)}$. Hence,
$$g_{(ni}\beta_{mi)} = \tfrac12\bigl[g_{(ni}g_{ji}H_{m,j)} + g_{(ni}\hat L^\dagger g_{mi)}\bigr] + \sigma_{nij}\sigma_{mij} - \sigma_{n(ij)}\sigma_{m(ij)}. \tag{34}$$
To deal with the difference of the $\sigma$'s in the last two terms of Equation (34), we express these coefficients as a sum of symmetric and antisymmetric parts, $\sigma_{nij} = \sigma_{n(ij)} + \sigma_{n[ij]}$, and note that the contraction of a symmetric part with an antisymmetric one vanishes, $\sigma_{n(ij)}\sigma_{m[ij]} = 0$. Then, this difference is just
$$\sigma_{nij}\sigma_{mij} - \sigma_{n(ij)}\sigma_{m(ij)} = \sigma_{n[ij]}\sigma_{m[ij]} = \delta_{nk}\,\sigma_{k[jl]}\sigma_{m[jl]} = g_{(ni}\,(g^{-1})_{ik}\,\sigma_{k[jl]}\sigma_{m[jl])}. \tag{35}$$
The symmetrization brackets are placed around the subscripts in the last step to emphasize that the expression obtained is symmetric with respect to the free indices n and m, as is obvious from the right-hand side of Equation (35). Substitution of this result into Equation (34) finally gives
$$\beta_{mi} = \tfrac12\bigl[H_{m,j}\,g_{ji} + \hat L^\dagger g_{mi} + \sigma_{m[jl]}\sigma_{k[jl]}\,(g^{-1})_{ik}\bigr], \tag{36}$$
thereby completing the derivation of the coefficients (7) in the propagation rule (5).
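As a sketch of how the propagation rule (5) might be applied in code once the coefficients (7) have been evaluated at the current state, one time step can be written in plain Python (the function name and the nested-list interface are our own; the coefficient arrays a, b, c are assumed precomputed):

```python
import random

def propagate(z, a, b, c, rng=random):
    """One step of rule (5): z'_n = z_n + a_n + b_nm Gamma_m
    + c_nml (Gamma_m Gamma_l - delta_ml).

    z: current state (length N); a: length-N list; b: N x N nested list;
    c: N x N x N nested list, all evaluated at the current state z.
    """
    N = len(z)
    gam = [rng.gauss(0.0, 1.0) for _ in range(N)]   # one set of N Gaussians
    out = []
    for n in range(N):
        v = z[n] + a[n]
        for m in range(N):
            v += b[n][m] * gam[m]
            for l in range(N):
                v += c[n][m][l] * (gam[m] * gam[l] - (1.0 if m == l else 0.0))
        out.append(v)
    return out
```

Note that, in line with the discussion above, a single set of N independent Gaussian variables per step suffices.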

4. Case Study: Transport Induced by Periodic Spatial Modulation of Temperature

The ratchet effect refers to transport in a noisy system whose parameters are periodically modulated around their average values in such a way that transport is impossible in the absence of this modulation [20]. Usually, the parameter modulation occurs in time [20]; however, the above definition may equally be applied to situations in which the modulation happens in space, as demonstrated by Büttiker [21] in the earliest example of a ratchet effect induced by spatial modulation of temperature.
To evaluate the improvements introduced by the present scheme over first-order methods, we compare it with the Milstein algorithm, which arises as the leading-order truncation of our formulation. In particular, we wish to examine the role of the higher-order contributions, specifically the terms of order $\Delta t^{3/2}$ and $\Delta t^2$, that are absent in the Milstein method. As a testing ground, we choose the Büttiker ratchet system, which permits comparison not only of the static properties, via the equilibrium probability distribution, but also of the dynamical features, captured by the mean particle velocity.
Let us consider Büttiker's ratchet model [21] of an overdamped Brownian particle in a periodic potential $U(x) = U(x+a)$ with period a in a non-uniform temperature field $T(x) = T(x+a)$ with the same spatial periodicity as the potential. The LE reads
$$\gamma\,\dot x = -\frac{dU}{dx} + \sqrt{2\gamma T(x)}\,\xi(t), \tag{37}$$
where γ is the damping coefficient and $T(x)$ is the position-dependent temperature. For definiteness, we assume the potential and the temperature to be given by
$$U(x) = \frac{U_0}{2}\cos\frac{2\pi x}{a}, \qquad T(x) = T_0 + \Delta T\,\sin\frac{2\pi x}{a}, \tag{38}$$
where $U_0$ is the potential corrugation depth, $T_0$ is the average temperature, and $\Delta T$ is the temperature modulation amplitude. A simple way of thinking about this model is to consider a particle in a gravity field moving over periodic terrain; see the inset in Figure 1. When light is incident on this landscape at an angle, it induces non-uniform heating, with the illuminated regions becoming hotter than the shaded areas.
In the absence of spatial temperature variations, the model does not exhibit net motion. Likewise, periodic temperature variations alone, without a corresponding periodic potential, do not induce a net drift. However, when both the temperature and potential vary periodically in space and are phase-shifted relative to each other, the probabilities of a particle transitioning from one potential well to an adjacent one—either to the left or right—are generally unequal. Specifically, the probability of transition over the “hotter” side of the potential well is greater than that over the “colder” side. This asymmetry leads to a net transport of the particle. It has recently been suggested [22] that this effect can drive a semiconductor thermoelectric generator.
An analytical expression for the drift velocity can be developed following the treatment of Ref. [21]. Namely, the FPE for the probability density $P(x,t)$ to find the particle near the position x at time t,
$$\dot P(x,t) = \frac1\gamma\,\frac{\partial}{\partial x}\left[T(x)\,\frac{\partial P}{\partial x} + \left(\frac{dU}{dx} + \frac12\,\frac{dT}{dx}\right)P\right], \tag{39}$$
has the form of a continuity equation, $\dot P = -\partial J/\partial x$, where $J(x,t)$ is the probability current. We look for the stationary solution of the FPE (39) that respects the periodic boundary conditions and is normalized to 1 within one period:
$$P_{st}(x+a) = P_{st}(x), \qquad \int_0^a dx\,P_{st}(x) = 1. \tag{40}$$
The probability current is constant and equals
$$J_{st} = -\frac1\gamma\left[T(x)\,\frac{dP_{st}}{dx} + \left(\frac{dU}{dx} + \frac12\,\frac{dT}{dx}\right)P_{st}\right]. \tag{41}$$
By solving Equation (41) with the periodicity conditions (40), we first express $P_{st}(x)$ in terms of the probability current as
$$P_{st}(x) = \frac{\gamma J_{st}}{\sqrt{T(x)}}\,\frac{1}{1 - e^{\psi(a)}}\int_x^{x+a}\frac{dy}{\sqrt{T(y)}}\,e^{\psi(y)-\psi(x)}, \qquad \psi(x) = \int_0^x \frac{dy}{T(y)}\,\frac{dU(y)}{dy}. \tag{42}$$
Imposing the normalization condition (40) and noting that the probability current is related to the drift velocity of the particle by $J_{st} = v_{dr}/a$, we obtain the drift velocity as [21]
$$v_{dr} = \frac{a\,\bigl(1 - e^{\psi(a)}\bigr)}{\gamma \displaystyle\int_0^a \frac{dx}{\sqrt{T(x)}} \int_x^{x+a} \frac{dy}{\sqrt{T(y)}}\,e^{\psi(y)-\psi(x)}}. \tag{43}$$
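For reference, Equation (43) can be evaluated by straightforward numerical quadrature for the specific $U(x)$ and $T(x)$ of Equation (38). The Python sketch below is our own illustration (midpoint rule; the function name, grid size, and parameter defaults are our choices), not the code used for the figures:

```python
import math

def drift_velocity(U0=1.0, a=1.0, gamma=1.0, T0=1.0, dT=0.5, n=400):
    """Evaluate the drift velocity, Eq. (43), by midpoint quadrature for
    U(x) = (U0/2) cos(2 pi x/a) and T(x) = T0 + dT sin(2 pi x/a)."""
    T = lambda x: T0 + dT * math.sin(2 * math.pi * x / a)
    dU = lambda x: -(U0 * math.pi / a) * math.sin(2 * math.pi * x / a)
    dx = a / n
    # psi(x) = int_0^x (dU/dy)/T(y) dy, tabulated on a grid over two periods
    xs = [i * dx for i in range(2 * n + 1)]
    psi = [0.0]
    for i in range(1, len(xs)):
        xm = 0.5 * (xs[i - 1] + xs[i])
        psi.append(psi[-1] + dU(xm) / T(xm) * dx)
    # inner integral: int_x^{x+a} dy exp(psi(y)-psi(x)) / sqrt(T(y))
    def inner(i):
        s = 0.0
        for j in range(i, i + n):
            ym = 0.5 * (xs[j] + xs[j + 1])
            pm = 0.5 * (psi[j] + psi[j + 1])
            s += math.exp(pm - psi[i]) / math.sqrt(T(ym)) * dx
        return s
    denom = sum(inner(i) / math.sqrt(T(xs[i])) * dx for i in range(n))
    return a * (1.0 - math.exp(psi[n])) / (gamma * denom)
```

In the unmodulated case $\Delta T = 0$ one has $\psi(a) = 0$, and the formula correctly yields zero drift.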
In the numerical simulations, the parameters a, $U_0$, and γ are set to 1, thereby fixing the units of length, energy, and time. The drift velocity (43) vs. the ratio of the temperature modulation amplitude $\Delta T$ to the average temperature $T_0$ is shown in Figure 1 for $T_0 = 20, 5, 2, 1, 0.5$, and 0.2 (from top to bottom). As might be expected, the drift velocity increases with the temperature modulation amplitude, as well as with the average temperature $T_0$ at a fixed ratio $\Delta T/T_0$. Somewhat less obvious is the fact that the $v_{dr}$ vs. $\Delta T/T_0$ curve becomes less sensitive to the average temperature $T_0$ as the latter grows. Indeed, the curves obtained for $T_0 = 5$ and $T_0 = 20$ differ very little, and a further increase in $T_0$ above 20 does not change them noticeably.
The stochastic trajectories of the Brownian particle (37) were simulated according to the algorithm from Section 2, which, in the one-dimensional case, simplifies to
$$\begin{aligned}
x(t+\Delta t) &= x(t) + a + b\,\Gamma + c\,(\Gamma^2 - 1),\\
a &= \Delta t\,H + \frac{\Delta t^2}{2}\left(G\,\frac{d^2 H}{dx^2} + H\,\frac{dH}{dx}\right), \quad
b = g\sqrt{\Delta t} + \frac{\Delta t^{3/2}}{2}\left(g\,\frac{dH}{dx} + G\,\frac{d^2 g}{dx^2} + H\,\frac{dg}{dx}\right), \quad
c = \frac{\Delta t}{2}\,\frac{dG}{dx},\\
g(x) &= \sqrt{\frac{2T(x)}{\gamma}}, \qquad G(x) = \frac12\,g^2(x), \qquad H(x) = -\frac1\gamma\,\frac{dU}{dx} + \frac12\,\frac{dG}{dx}. \tag{44}
\end{aligned}$$
If only the first-order term is kept in the expression for a and only the term of order $\sqrt{\Delta t}$ is kept in the expression for b, the scheme (44) becomes identical to the standard Milstein method [10].
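For concreteness, a single update of scheme (44) for the potential and temperature profile (38) can be sketched in Python with all derivatives evaluated analytically. The function name and parameter defaults below are our own choices, not taken from the paper:

```python
import math
import random

def step_order2(x, dt, U0=1.0, a=1.0, gamma=1.0, T0=1.0, dTamp=0.5, rng=random):
    """One update of the weak second-order scheme (44) for the Büttiker
    ratchet with U(x) = (U0/2)cos(2 pi x/a), T(x) = T0 + dTamp sin(2 pi x/a)."""
    w = 2.0 * math.pi / a
    s, co = math.sin(w * x), math.cos(w * x)
    # T(x), U(x) and their derivatives at the current point
    T, dT, d2T, d3T = (T0 + dTamp * s, dTamp * w * co,
                       -dTamp * w * w * s, -dTamp * w ** 3 * co)
    dU, d2U, d3U = (-(U0 / 2) * w * s, -(U0 / 2) * w * w * co,
                    (U0 / 2) * w ** 3 * s)
    g = math.sqrt(2.0 * T / gamma)                 # noise amplitude g(x)
    dg = dT / (gamma * g)                          # g' = T'/(gamma g)
    d2g = d2T / (gamma * g) - dT * dT / (gamma ** 2 * g ** 3)
    G = T / gamma                                  # G = g^2/2
    H = (-dU + 0.5 * dT) / gamma                   # H = h + (1/2) dG/dx
    dH = (-d2U + 0.5 * d2T) / gamma
    d2H = (-d3U + 0.5 * d3T) / gamma
    A = dt * H + 0.5 * dt * dt * (G * d2H + H * dH)
    B = g * math.sqrt(dt) + 0.5 * dt ** 1.5 * (g * dH + G * d2g + H * dg)
    C = 0.5 * dt * dT / gamma                      # c = (dt/2) dG/dx
    gam = rng.gauss(0.0, 1.0)
    return x + A + B * gam + C * (gam * gam - 1.0)
```

Iterating this step over $t_{max}/\Delta t$ updates and accumulating $x$ produces the trajectories used to estimate the drift velocity; for $U_0 = 0$ and $\Delta T = 0$ the step reduces to free diffusion, $x' = x + \sqrt{2T_0\Delta t/\gamma}\,\Gamma$.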
The simulations were performed according to the algorithm (44) and following the Milstein scheme at several average temperatures, $T_0 = 0.2$, 1, and 5. For all values of $T_0$, the temperature modulation amplitude was kept at $\Delta T = T_0/2$. To determine the drift velocity $v_{simul} = x(t_{max})/t_{max}$ from the simulations, the particle trajectory was generated over a long time $t_{max} = 10^5$ with the initial condition $x(0) = 0$. The statistical uncertainty of the simulation results was below 1% in all cases.
Shown in Figure 2 is the relative deviation of the average velocity $v_{simul}$ from the exact value (43), $(v_{simul}/v_{dr} - 1)\times 100\%$, for both simulation algorithms at different time-step values $\Delta t$. It is seen that, at the low average temperature $T_0 = 0.2$, both schemes exhibit about the same accuracy, even though the Milstein scheme overestimates the drift velocity, while the algorithm (44) underestimates it by a slightly smaller amount; for example, at $\Delta t = 0.01$, the error of the Milstein method is close to 5%, while the scheme (44) has an error of about 3%.
The discrepancy between the two methods becomes more evident at higher temperatures. At $T_0 = 1$, the scheme (44) achieves a 1% accuracy at $\Delta t = 5\times 10^{-3}$, whereas the Milstein approach requires $\Delta t = 5\times 10^{-4}$. Likewise, at $T_0 = 5$, the scheme (44) achieves this accuracy at $\Delta t = 10^{-3}$, whereas the Milstein procedure requires a time step ten times as small.
The similarity in the mean velocity calculation by the two methods at low temperature, Figure 2a, does not necessarily imply that they are equally accurate in this regime. Indeed, in the deterministic limit $T_0 \to 0$, our scheme (44) reduces to the second-order Taylor expansion $x(t+\Delta t) = x(t) + \dot x(t)\,\Delta t + \ddot x(t)\,\Delta t^2/2$ with velocity $\dot x = -\gamma^{-1}\,dU/dx$ and acceleration $\ddot x = -\gamma^{-1}\dot x\,d^2U/dx^2$, whereas the Milstein scheme only contains the velocity term. Nevertheless, although the Milstein scheme is only first-order accurate, it still correctly yields zero drift velocity at zero temperature $T_0$. One can expect that at low but finite temperatures, even a rudimentary method can capture the near-zero drift velocity with seemingly good accuracy. To properly differentiate the performance of integration schemes in this regime, one needs to examine a quantity more sensitive than $v_{dr}$.
Such a quantity may be the equilibrium probability distribution $P_{st}(x)$ itself, given by Equation (42). It is shown in Figure 3 at (a) $T_0 = 0.2$ and (b) $T_0 = 5$, where the exact distribution (42) is compared with the one found from the simulations based on the present method (5)–(7) and on the Milstein algorithm. It is seen that the Milstein integration method yields a quantitatively inaccurate steady-state probability distribution $P_{st}(x)$ at both temperatures, even though its estimate of the drift velocity is close to the correct value. At the same time, the numerical results obtained with the present method (5)–(7) are in excellent agreement with the theoretical curve, which highlights the importance of the higher-order terms in the stochastic Taylor expansion (5).

5. Concluding Remarks

A numerical method is worked out for generating stochastic trajectories that preserve the cumulants of the state variable up to the second order in the time step. The derivation presented here applies to white noise-driven systems of arbitrary dimensionality N. The accuracy of this approach in computing observables is comparable to that of the second-order stochastic Taylor expansion-based methods [4,5]. The advantage of the present approach is its reduced computational complexity: it requires only a single set of N independent Gaussian random variables. In contrast, the stochastic Taylor expansion-based methods require on the order of $N^4$ random numbers with specific correlations among them (see, e.g., Equations (10)–(15) of Ref. [8]), whose generation is a non-trivial task, especially at large N.
In terms of the idea used in the derivation, the closest multidimensional algorithm published in the literature is by Cao and Pope (abbreviated as CP; see Section 2.4 of [18]), as both methods provide a second-order weak integration scheme for the Langevin Equation (3), and both are based on matching the average properties of the updated system's state using the associated Fokker–Planck Equation (15). While the conceptual foundation is similar, there are several important distinctions between the present algorithm and the CP method. First, the CP algorithm is a midpoint scheme: it requires one to evaluate the state variable at time $t_n + \Delta t/2$ before computing the full time-step update $z(t_n) \to z(t_{n+1})$. In contrast, the numerical scheme (5)–(7) performs this time-step propagation directly. Second, the CP scheme uses three N-dimensional sets of uncorrelated Gaussian random variables; the present method requires only one such set. Finally, the CP scheme is formulated for the special case in which the noise-coupling matrix $g_{ij}(z)$ reduces to a scalar function common to all components $z_n$ of the state vector z, whereas the scheme developed here allows for a fully general, position-dependent noise matrix.
To improve the speed of the scheme (5), one may be tempted to replace the Gaussian random numbers $\Gamma_i$ with a different type of random numbers that can be generated more quickly [8,23]. However, there is strong evidence [24] that using non-Gaussian random variables can worsen the accuracy of the method. Indeed, the properties of the Gaussian numbers were used explicitly in deriving the cumulant expressions (14); replacing $\Gamma_i$ with non-Gaussian random variables may require a major modification of the derivation of the propagation rule (5), and thus of this rule itself.
If the terms of order higher than $\Delta t$ are neglected in the scheme (5)–(7), it reduces to the general multidimensional form of the Milstein method [10]. For this reason, the update rule (5)–(7) may be regarded as a second-order explicit weak Milstein method. It is logical to focus future research on exploring higher-order corrections to the scheme (5)–(7). In doing so, adding terms of higher order in $\Delta t$ to the coefficients $a_n$, $b_{nm}$, and $c_{nml}$ may not necessarily lead to better accuracy of the algorithm. The reason is that increasing the order of the algorithm brings in the higher-order cumulants, as can be shown based on Equations (20) and (21). In particular, the leading term in the fourth cumulant is of the order of $\Delta t^3$; hence, if one wishes to extend the scheme (5)–(7) to the order $\Delta t^3$, one needs to impose the extra condition that the fourth cumulant $\kappa_{nmlk}$ is correctly reproduced by the updated state variable $z'$. This, in turn, implies that an extra term $d_{nmlk}\,\Gamma_m\Gamma_l\Gamma_k$ should be added to the stochastic Taylor expansion (5), with unknown parameters $d_{nmlk}$. Thus, going to higher order in $\Delta t$ results in higher computational complexity, but may potentially be beneficial for the accuracy of the method.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

I am grateful to ACEnet for providing computational resources.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Properties of Gaussian Random Numbers

To derive Equation (14), Equation (5) was used together with the average value of a product of the Gaussian random variables (6):
$$\overline{\Gamma_{i_1}\cdots\Gamma_{i_{2k+1}}} = 0, \qquad \overline{\Gamma_{i_1}\cdots\Gamma_{i_{2k}}} = \frac{(2k)!}{2^k\,k!}\,\delta_{(i_1 i_2}\cdots\delta_{i_{2k-1} i_{2k})}, \tag{A1}$$
see, e.g., Section 2.3.3 of Ref. [2]. The last expression represents a sum over all distinct pairings of the indices into Kronecker deltas; for example, $\overline{\Gamma_i\Gamma_j\Gamma_k\Gamma_l} = \delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{kj}$.
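The pairing formula (A1) is easy to implement recursively: pair the first index with each remaining index carrying the same value and recurse on the rest. The helper below is our own illustration of this rule:

```python
def gauss_moment(idx):
    """Mean of Gamma_{i1}...Gamma_{im} for independent standard Gaussians:
    sum over all pairings of products of Kronecker deltas, Eq. (A1).
    `idx` is a tuple of integer index values."""
    if len(idx) % 2:
        return 0                          # odd moments vanish
    if not idx:
        return 1
    first, rest = idx[0], idx[1:]
    total = 0
    for j in range(len(rest)):            # pair the first index with each other one
        if rest[j] == first:              # the Kronecker delta contributes 1
            total += gauss_moment(rest[:j] + rest[j + 1:])
    return total
```

For instance, `gauss_moment((1, 1, 1, 1))` reproduces $\overline{\Gamma^4} = 3$, and the all-equal $2k$-index case reproduces the prefactor $(2k)!/(2^k k!)$ of Eq. (A1).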
More generally, the mean value of the product
$$\overline{\Gamma_{i_1}\cdots\Gamma_{i_{2k}}\,(\Gamma_{j_1}\Gamma_{j_2} - \delta_{j_1 j_2})\cdots(\Gamma_{j_{2n-1}}\Gamma_{j_{2n}} - \delta_{j_{2n-1} j_{2n}})} \tag{A2}$$
is a sum of different products of Kronecker deltas over all combinations of pairs of indices from $i_1, \ldots, i_{2k}, j_1, \ldots, j_{2n}$, excluding the products that contain the deltas found in the brackets, i.e., $\delta_{j_1 j_2}, \ldots, \delta_{j_{2n-1} j_{2n}}$. This can be proven by induction. Indeed, the statement is obviously true for $n = 1$ and arbitrary k, as
$$\overline{\Gamma_{i_1}\cdots\Gamma_{i_{2k}}\,(\Gamma_{j_1}\Gamma_{j_2} - \delta_{j_1 j_2})} = \frac{(2(k+1))!}{2^{k+1}\,(k+1)!}\,\delta_{(i_1 i_2}\cdots\delta_{i_{2k-1} i_{2k}}\delta_{j_1 j_2)} - \frac{(2k)!}{2^k\,k!}\,\delta_{(i_1 i_2}\cdots\delta_{i_{2k-1} i_{2k})}\,\delta_{j_1 j_2}. \tag{A3}$$
Assuming that it holds for some n and arbitrary k, we write the expectation value for n + 1 as a difference:
$$\overline{\Gamma_{i_1}\cdots\Gamma_{i_{2k}}\,\Gamma_{j_{2n+1}}\Gamma_{j_{2n+2}}\,(\Gamma_{j_1}\Gamma_{j_2} - \delta_{j_1 j_2})\cdots(\Gamma_{j_{2n-1}}\Gamma_{j_{2n}} - \delta_{j_{2n-1} j_{2n}})} - \delta_{j_{2n+1} j_{2n+2}}\,\overline{\Gamma_{i_1}\cdots\Gamma_{i_{2k}}\,(\Gamma_{j_1}\Gamma_{j_2} - \delta_{j_1 j_2})\cdots(\Gamma_{j_{2n-1}}\Gamma_{j_{2n}} - \delta_{j_{2n-1} j_{2n}})}. \tag{A4}$$
By the inductive assumption, the first average represents the sum of products of the deltas $\delta_{i_1 i_2}, \delta_{i_1 i_3}, \ldots, \delta_{j_{2n+1} j_{2n+2}}$, from which those containing $\delta_{j_1 j_2}, \ldots, \delta_{j_{2n-1} j_{2n}}$ are excluded. The second term represents the subset of the products of deltas found in the first average that contain $\delta_{j_{2n+1} j_{2n+2}}$. The difference between the two averages is, therefore, the sum of products of all deltas, from which those containing $\delta_{j_1 j_2}, \ldots, \delta_{j_{2n+1} j_{2n+2}}$ are excluded.
With these properties, we immediately obtain the first cumulant in Equation (14). The second cumulant is
$$\begin{aligned}
\kappa_{nm} &= b_{ni}b_{mj}\,\overline{\Gamma_i\Gamma_j} + c_{nij}c_{mkl}\,\overline{(\Gamma_i\Gamma_j - \delta_{ij})(\Gamma_k\Gamma_l - \delta_{kl})}\\
&= b_{ni}b_{mj}\,\delta_{ij} + c_{nij}c_{mkl}\,(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) = b_{ni}b_{mi} + c_{nij}\,(c_{mij} + c_{mji}) = b_{ni}b_{mi} + 2\,c_{nij}c_{m(ij)}. \tag{A5}
\end{aligned}$$
In the last expression, $c_{nij} = c_{n(ij)} + c_{n[ij]}$ can be replaced with just $c_{n(ij)}$, because the contraction of the antisymmetric part with the symmetric one vanishes, $c_{n[ij]}c_{m(ij)} = 0$. The third cumulant in Equation (14) is derived in a similar manner using the identities
$$\begin{aligned}
\overline{\Gamma_i\Gamma_j\,(\Gamma_k\Gamma_l - \delta_{kl})} &= \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk},\\
\overline{(\Gamma_i\Gamma_j - \delta_{ij})(\Gamma_k\Gamma_l - \delta_{kl})(\Gamma_m\Gamma_n - \delta_{mn})} &= \delta_{ik}\delta_{lm}\delta_{nj} + \delta_{ik}\delta_{ln}\delta_{mj} + \delta_{il}\delta_{km}\delta_{nj} + \delta_{il}\delta_{kn}\delta_{mj}\\
&\quad + \delta_{im}\delta_{jk}\delta_{ln} + \delta_{im}\delta_{jl}\delta_{kn} + \delta_{in}\delta_{jk}\delta_{lm} + \delta_{in}\delta_{jl}\delta_{km}. \tag{A6}
\end{aligned}$$

References

  1. Coffey, W.T.; Kalmykov, Y.P. Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry, and Electrical Engineering; World Scientific: Singapore, 2012. [Google Scholar] [CrossRef]
  2. Risken, H. Fokker-Planck Equation; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar] [CrossRef]
  3. Øksendal, B. Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar] [CrossRef]
  4. Kloeden, P.; Platen, E. Numerical Solution of Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 1995. [Google Scholar] [CrossRef]
  5. Milstein, G. Numerical Integration of Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 1995. [Google Scholar] [CrossRef]
  6. Platen, E. An introduction to numerical methods for stochastic differential equations. Acta Numer. 1999, 8, 197–246. [Google Scholar] [CrossRef]
  7. Greiner, A.; Strittmatter, W.; Honerkamp, J. Numerical integration of stochastic differential equations. J. Stat. Phys. 1988, 51, 95–108. [Google Scholar] [CrossRef]
  8. Qiang, J.; Habib, S. Second-order stochastic leapfrog algorithm for multiplicative noise Brownian motion. Phys. Rev. E 2000, 62, 7430–7437. [Google Scholar] [CrossRef] [PubMed]
  9. Duan, W.L.; Fang, H.; Zeng, C. Second-order algorithm for simulating stochastic differential equations with white noises. Phys. A 2019, 525, 491–497. [Google Scholar] [CrossRef]
  10. Mil’shtejn, G. Approximate integration of stochastic differential equations. Theory Probab. Its Appl. 1975, 19, 557–562. [Google Scholar] [CrossRef]
  11. Sotiropoulosa, V.; Kaznessis, Y.N. An adaptive time step scheme for a system of stochastic differential equations with multiple multiplicative noise: Chemical Langevin equation, a proof of concept. J. Chem. Phys. 2008, 128, 014103. [Google Scholar] [CrossRef] [PubMed]
  12. Yin, Z.; Gan, S. An improved Milstein method for stiff stochastic differential equations. Adv. Differ. Equ. 2015, 369. [Google Scholar] [CrossRef]
  13. Jentzen, A.; Röckner, M. A Milstein scheme for SPDEs. Found. Comput. Math. 2015, 15, 313–362. [Google Scholar] [CrossRef]
  14. Tripura, T.; Hazra, B.; Chakraborty, S. Novel Girsanov correction based Milstein schemes for analysis of nonlinear multi-dimensional stochastic dynamical systems. Appl. Math. Model. 2023, 122, 350–372. [Google Scholar] [CrossRef]
  15. Wu, X.; Yan, Y. Milstein scheme for a stochastic semilinear subdiffusion equation driven by fractionally integrated multiplicative noise. Fractal Fract. 2025, 9, 314. [Google Scholar] [CrossRef]
  16. Pechenik, L.; Levine, H. Interfacial velocity corrections due to multiplicative noise. Phys. Rev. E 1999, 59, 3893–3900. [Google Scholar] [CrossRef]
  17. Dornic, I.; Chaté, H.; Muñoz, M.A. Integration of Langevin equations with multiplicative noise and the viability of field theories for absorbing phase transitions. Phys. Rev. Lett. 2005, 94, 100601. [Google Scholar] [CrossRef] [PubMed]
  18. Cao, R.; Pope, S.B. Numerical integration of stochastic differential equations: Weak second-order mid-point scheme for application in the composition PDF method. J. Comput. Phys. 2003, 185, 194–212. [Google Scholar] [CrossRef]
  19. Evstigneev, M.; Kacmazer, D. Fast and accurate numerical integration of the Langevin equation with multiplicative Gaussian white noise. Entropy 2025, 26, 879. [Google Scholar] [CrossRef] [PubMed]
  20. Reimann, P. Brownian motors: Noisy transport far from equilibrium. Phys. Rep. 2002, 361, 57–265. [Google Scholar] [CrossRef]
  21. Büttiker, M. Transport as a consequence of state-dependent diffusion. Z. Phys. B Condens. Matter 1987, 68, 161–167. [Google Scholar] [CrossRef]
  22. Kompatscher, A.; Kemerink, M. On the concept of an effective temperature Seebeck ratchet. Appl. Phys. Lett. 2021, 119, 023303. [Google Scholar] [CrossRef]
  23. Dünweg, B.; Paul, W. Brownian dynamics simulations without Gaussian random numbers. Int. J. Mod. Phys. C 1991, 2, 817–827. [Google Scholar] [CrossRef]
  24. Grønbech-Jensen, N. On the application of non-Gaussian noise in stochastic Langevin simulations. J. Stat. Phys. 2023, 190, 96. [Google Scholar] [CrossRef]
Figure 1. Drift velocity of an overdamped Brownian particle in Büttiker’s ratchet model (37), (38) as a function of the temperature modulation amplitude $\Delta T$ normalized to the average temperature $T_0$. The model parameters $U_0$, $a$, and $\gamma$ are set to 1. The six curves represent different values of $T_0 = 20$, 5, 2, 1, 0.5, and 0.2 (from top to bottom). The inset illustrates the physical interpretation of the system, where a periodic potential and spatial temperature variations induce directed transport.
Figure 2. Relative error, $\frac{v_{\mathrm{simul}} - v_{\mathrm{dr}}}{v_{\mathrm{dr}}} \times 100\%$, of the drift velocity obtained from the numerical simulation of Equation (37) using the Milstein method (red open circles) and the method of the present work (black filled circles) for different values of the simulation time step $\Delta t$.
Figure 3. Particle steady-state periodic probability distribution in the Büttiker ratchet model at $\Delta T = T_0/2$ for (a) $T_0 = 0.2$ and integration time step $\Delta t = 0.01$ and (b) $T_0 = 5$ and $\Delta t = 0.001$. Solid line: exact curve (44); green symbols: simulation results obtained with the present method; magenta symbols: simulation results using the Milstein scheme.