Article

Brown and Levy Steady-State Motions

School of Chemistry, Tel Aviv University, Tel Aviv 6997801, Israel
Entropy 2025, 27(6), 643; https://doi.org/10.3390/e27060643
Submission received: 18 May 2025 / Revised: 9 June 2025 / Accepted: 12 June 2025 / Published: 16 June 2025
(This article belongs to the Collection Advances in Applied Statistical Mechanics)

Abstract

This paper introduces and explores a novel class of Brown and Levy steady-state motions. These motions generalize, respectively, the Ornstein-Uhlenbeck process (OUP) and the Levy-driven OUP. Like the OUP and the Levy-driven OUP: the motions are Markov; their dynamics are Langevin; and their steady-state distributions are, respectively, Gauss and Levy. Like the Levy-driven OUP: the motions can display the Noah effect (heavy-tailed amplitudal fluctuations); and their memory structure is tunable. And, like Gaussian-stationary processes: the motions can display the Joseph effect (long-ranged temporal dependencies); and their correlation structure is tunable. The motions have two parameters: a critical exponent, which determines the Noah effect and the memory structure; and a clock function, which determines the Joseph effect and the correlation structure. The novel class is a compelling stochastic model due to the following combination of facts: on the one hand, the motions are tractable and amenable to analysis and use; on the other hand, the model is versatile, and the motions display a host of both regular and anomalous features.
PACS:
02.50.-r (probability theory, stochastic processes, and statistics); 05.40.-a (fluctuation phenomena, random processes, noise, and Brownian motion)

1. Introduction

The paradigmatic model—in science and engineering at large—for random motions that are in statistical steady-state is the Ornstein-Uhlenbeck process (OUP) [1,2,3,4]. The interest in the OUP and its applications is broad [5,6,7,8,9,10,11,12].
Almost a century after its inception, OUP research is active and ongoing, e.g.: time averages [13]; phase descriptions [14]; ergodicity breaking [15]; survival analysis [16]; first-passage area [17]; optical tweezers [18]; large deviations [19]; high-dimensionality [20]; active particles [21,22,23,24,25]; stochastic resetting [26,27,28]; and parameter estimation [29,30]. Examples of most recent OUP research include: memory effects [31]; wind profiles [32]; dynamical multimodality [33]; identification [34]; and rare events [35].
The OUP is the only random motion that is [36]: Gaussian [37,38,39] and Markov [40,41,42] and stationary [43,44,45]. In turn, the OUP exhibits the following amplitudal and temporal behaviors. Amplitude-wise: the OUP steady-state distribution is Gauss (Normal), and hence the OUP fluctuations are ‘mild’ [46]. Time-wise: the OUP auto-correlation is exponential, and hence the OUP dependencies are short ranged. Also, the OUP dynamics are governed by the Langevin stochastic differential equation [47,48,49].
The underpinning source of randomness that ‘drives’ the OUP is Brownian motion [50]. Shifting from Brownian motion to Levy motion yields the Levy-driven OUP [51,52,53,54,55,56,57,58,59,60,61,62]. As in the case of the OUP, also in the case of the Levy-driven OUP: interest and applications are broad, and the research is active and ongoing [63,64,65,66,67,68,69,70,71,72,73,74,75,76].
The shift from the OUP to the Levy-driven OUP maintains the Markov and stationary properties, and lets go of the Gaussian property. In turn, the steady-state distribution changes from Gauss (Normal) to Levy, and hence the fluctuations of the Levy-driven OUP are ‘wild’ [46]. Thus, amplitude-wise: the Levy steady-state distribution is ‘heavy tailed’ [77,78], and hence the Levy-driven OUP displays the ‘Noah effect’ [79]. Also, the Levy-driven OUP maintains the Langevin dynamics and the exponential correlation structure.
Orthogonal to the Levy approach is the Gauss approach. This approach maintains the Gaussian and stationary properties, and lets go of the Markov property. Taking on the Gauss approach, the specific exponential auto-correlation (which characterizes the OUP) is replaced by a general auto-correlation. In turn, the decay of the auto-correlation can be slow – in which case the dependencies of the resulting Gaussian-stationary motion are long ranged [80,81,82]. Thus, time-wise: the Gauss approach is capable of yielding motions that display the ‘Joseph effect’ [79].
So, on the one hand, the Levy approach affects the amplitudal behavior. And, on the other hand, the Gauss approach affects the temporal behavior. Consequently, one would expect that the former approach does not affect motion-memory, and that the latter approach does. Most recently, it was shown that—rather surprisingly and counter-intuitively—the situation is actually the other way round [76]: the Levy approach affects motion-memory, whereas the Gauss approach does not.
This paper presents a third approach for ‘tinkering’ with the OUP: a ‘golden-path’ approach that interlaces the best of the Levy and Gauss approaches. Rather than letting go of either the Gaussian property or the Markov property, the novel approach relaxes the stationary property. Specifically, stationarity is relaxed to the principal feature of the OUP: the statistical steady-state behavior. As shall be shown, the novel approach yields a versatile class of steady-state motions (SSMs) with the following features.
The SSMs can jointly display the Noah and Joseph effects.
The SSMs’ correlation and memory structures are tunable.
The SSMs are Markov and their dynamics are Langevin.
On the one hand, the SSMs offer a wealth of statistical behaviors which neither the Levy approach nor the Gauss approach can offer on its own. On the other hand, the SSMs are as tractable as the OUP. The novel class of SSMs is investigated here in detail. To that end, this paper shall make use of recently established results regarding Ornstein-Uhlenbeck memory [76], and regarding power Levy motion [83,84,85]. The paper is organized as follows.
Section 2 reviews the ‘pillars’ of the SSMs: Brownian motion, Levy motion, and a spatio-temporal transformation of random motions. Considering the inputs of the spatio-temporal transformation to be Brownian motion and Levy motion: Section 3 addresses the outputs’ increments; and Section 4 addresses the outputs’ correlations. Then, the paper carries on as follows: Section 5 introduces and explores the SSMs; Section 6 further investigates these motions; and Section 7 explores the motions’ Langevin dynamics. Last, Section 8 concludes with an overview. Readers looking for an ‘executive summary’ can go directly to Section 8.

2. Preliminaries

This section reviews three pillars on which the SSMs ‘stand’: the symmetric Levy-stable distribution (Section 2.1); the symmetric Levy-stable process (Section 2.2); and a spatio-temporal transformation of random motions (Section 2.3). As shall be described below, the symmetric Levy-stable (SLS) process is the ‘mother process’ of Brownian motion (BM) and of Levy motion (LM). The acronyms SLS, BM, and LM will be frequently used in this section and throughout the manuscript.

2.1. SLS Distribution

The SLS statistical distribution [86,87,88] emerges universally via the central limit theorem and its generalizations [89]. The SLS distribution has three parameters: m, a real number which is the distribution’s median; s, a positive number which is the distribution’s scale; and an exponent λ that takes values in the range 0 < λ ≤ 2, and which is the distribution’s critical parameter.
The Fourier characterization of the SLS distribution is as follows. A real-valued random variable R is SLS when:
$$\mathbf{E}\left[\exp\left(i\theta\,\frac{R-m}{s}\right)\right]=\exp\left(-\frac{1}{\lambda}\,|\theta|^{\lambda}\right),\tag{1}$$
where −∞ < θ < ∞ is the underlying Fourier variable. The Fourier transform of Equation (1) highlights the ‘center role’ of the median m, and the ‘scaling role’ of the scale s. Also, the Fourier transform of Equation (1) implies that the SLS random variable R is symmetric about its median.
When the SLS exponent is two, λ = 2, then the SLS distribution is Gauss (Normal) with: mean m and variance s². When the SLS exponent is smaller than two, 0 < λ < 2, then the SLS distribution is ‘heavy-tailed’ [77,78]. Namely, the deviations of the SLS random variable R from its center—the median m—are governed by the following power-law ‘tail asymptotics’:

$$\Pr\left(\left|R-m\right|>x\right)\;\propto\;\frac{s^{\lambda}}{x^{\lambda}}\qquad(x\to\infty).\tag{2}$$
The tail asymptotics of Equation (2) are a quantitative manifestation of the Noah effect [79].
The only case in which the SLS distribution has a well-defined and finite variance is the Gauss case. In general, the terms s^λ and s—the variance and the standard deviation (SD) in the Gauss case—can be addressed, respectively, as an ‘extended variance’ and an ‘extended SD’. The extended variance s^λ is the numerator of the right-hand side of Equation (2). Up to multiplicative coefficients, the extended SD s is a quantitative measure of the inherent randomness of the SLS distribution [46,76].
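The SLS law is easy to sample numerically. Below is a minimal sketch (Python with NumPy; the language, the sampler, and all parameter values are illustrative choices, not from the paper) that draws symmetric λ-stable variates via the Chambers-Mallows-Stuck method, under the standard scale convention exp(−|θ|^λ) rather than the convention of Equation (1), and checks the Gauss case and the power-law tail of Equation (2):

```python
import numpy as np

rng = np.random.default_rng(1)

def sls_sample(lam, size):
    # Chambers-Mallows-Stuck sampler for symmetric lambda-stable variates;
    # standard convention: characteristic function exp(-|theta|**lam).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    if lam == 1.0:
        return np.tan(u)  # the Cauchy case
    return (np.sin(lam * u) / np.cos(u) ** (1 / lam)
            * (np.cos((1 - lam) * u) / w) ** ((1 - lam) / lam))

# Gauss case (lam = 2): reduces to a normal law (variance 2 in this convention)
x_gauss = sls_sample(2.0, 200_000)

# Heavy-tailed case (lam = 1.5): Pr(|R| > x) decays like x**(-lam), so
# doubling the threshold should shrink the tail by roughly 2**1.5 (about 2.8)
x_levy = sls_sample(1.5, 200_000)
tail_ratio = np.mean(np.abs(x_levy) > 10) / np.mean(np.abs(x_levy) > 20)
print(round(x_gauss.var(), 2), round(tail_ratio, 2))
```

Doubling the threshold shrinks the empirical tail by roughly 2^λ, in line with the tail asymptotics of Equation (2).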

2.2. SLS Process

Consider a real-valued random motion with positions L(τ) (τ ≥ 0). The motion’s increment over the time interval (τ₁, τ₂] (where τ₁ < τ₂) is the displacement L(τ₂) − L(τ₁).
The motion is a Levy process [90,91,92] when it initiates from the origin (L(0) = 0, where the equality holds with probability one), and when its increments are stationary and independent. Specifically: increments that correspond to time intervals with the same temporal length (τ₂ − τ₁) have the same statistical distribution; and increments that correspond to non-overlapping time intervals are independent random variables.
The motion is an SLS process when it is a Levy process, and when the statistical distribution of the position L(τ) is SLS with: median 0 and extended variance τ. When the SLS exponent is two, λ = 2, then the SLS process is Brownian motion (BM) [50]. And when the SLS exponent is smaller than two, 0 < λ < 2, then the SLS process is Levy motion (LM). LM-based models attracted major interest in science and engineering [93,94,95,96,97,98,99,100,101,102,103,104], and are continuing to do so [105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121].
Due to the properties of the SLS distribution, the BM and LM increments have markedly different statistics. On the one hand, the variances of the BM increments are finite, and the distribution tails of these increments are ‘light’. On the other hand, the variances of the LM increments are either not defined or infinite, and the distribution tails of these increments are ‘heavy’. BM and LM also have markedly different trajectories: a continuous curve in the BM case; and a trajectory that evolves via jumps—and only via jumps—in the LM case.
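A small simulation illustrates both the scale of the SLS position and the pure-jump character of LM. The sketch below takes λ = 1 (the Cauchy case, chosen only because NumPy ships a Cauchy sampler); under the convention above, the position L(τ) is then SLS with median 0 and scale τ, so the median of |L(1)| equals 1. Step and sample counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Levy motion with lam = 1 (the Cauchy process): increments over time
# steps of length dt are independent Cauchy variates with scale dt
m, n, dt = 20_000, 100, 0.01          # m paths, n steps, total time 1
increments = rng.standard_cauchy((m, n)) * dt
L1 = increments.sum(axis=1)           # positions L(1): Cauchy with scale 1

# position statistics: the median of |L(1)| equals the scale (here 1)
med = np.median(np.abs(L1))

# pure-jump character: some single step dwarfs the typical step size
biggest_step = np.abs(increments).max()
print(round(med, 3), biggest_step > 100 * dt)
```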
According to the Levy-Ito decomposition theorem [90,91,92], a general Levy process is the sum of three independent parts. One part is deterministic: a linear temporal function, which manifests the process’ drift. The two other parts are stochastic: a continuous part which is BM; and a pure-jump part. The pure-jump part is LM if and only if it is a selfsimilar and symmetric process.

2.3. Spatio-Temporal Transformation

As in Section 2.2, consider a real-valued random motion with positions L(τ) (τ ≥ 0). Further considering the random motion to be an ‘input’ process, transform it to an ‘output’ process by scaling both space and time. Specifically, the output is a real-valued random motion with positions

$$X(t)=A(t)\,L\big(C(t)\big)\tag{3}$$

(t ≥ t₀). The spatio-temporal transformation of Equation (3) uses two functions: an ‘amplitude’ A(t) that manifests the spatial scaling; and a ‘clock’ C(t) that manifests the temporal scaling. The amplitude is positive, and the clock is monotone increasing from C(t₀) ≥ 0 to lim_{t→∞} C(t) = ∞.
Specific choices of the input, as well as of the amplitude and clock, yield specific outputs. With regard to BM and LM inputs, examples of specific outputs include: scaled BM [122,123,124,125,126,127,128,129,130,131,132,133], and scaled LM; power BM [134,135,136,137], and power LM [83,84,85]; the OUP [136], and the Levy-driven OUP [84]. The Ornstein-Uhlenbeck examples are a special case of the Lamperti transformation [138]: a general mapping of selfsimilar processes to stationary processes. A most useful stochastic tool [139,140,141,142,143,144,145,146], the Lamperti transformation itself is a special case of the spatio-temporal transformation of Equation (3).
Henceforth the input is set to be the SLS process. In turn, the output’s positions are SLS. Specifically, the position X(t) is SLS with: median 0 and extended variance

$$V(t)=A(t)^{\lambda}\,C(t).\tag{4}$$
So, in the transition from the input’s positions to the output’s positions: the positions’ distributions remain SLS; their medians remain zero; and their extended variances change from τ to that of Equation (4).
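A quick numerical sanity check of Equation (4) with a BM input (λ = 2): the sampled variance of X(t) should match A(t)^λ C(t). The amplitude and clock below are hypothetical choices, used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

lam = 2.0                            # BM input
A = lambda t: np.exp(-t)             # hypothetical positive amplitude
C = lambda t: np.exp(3.0 * t)        # hypothetical increasing clock

# output position X(t) = A(t) * L(C(t)); for BM, L(tau) ~ Normal(0, tau)
t, m = 1.3, 100_000
X_t = A(t) * rng.normal(0.0, np.sqrt(C(t)), m)

V_t = A(t) ** lam * C(t)             # Equation (4): extended variance
print(round(X_t.var(), 2), round(V_t, 2))   # the two should agree
```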

3. Output Increments

This section addresses the statistics of the output’s increments. Specifically, the output’s increment over the time interval (t₁, t₂] (where t₀ < t₁ < t₂) is the displacement X(t₂) − X(t₁).
The input is set to be the SLS process. In turn, the properties of the SLS process, and the structure of the spatio-temporal transformation of Equation (3), imply that the statistical distributions of the output’s increments are SLS [83]. Moreover, the conditional statistical distributions of the output’s increments—given their initial positions—are also SLS [84]. The medians and the extended variances of these unconditional and conditional SLS distributions are detailed in Table 1.
As evident from Table 1, the increments’ unconditional and conditional SLS distributions coincide if and only if the amplitude A(t) is a flat function. When the amplitude A(t) is a monotone function, there are marked differences between the increments’ SLS distributions. Indeed, shifting from the unconditional SLS distribution to the conditional SLS distribution: the median changes from zero to non-zero (whenever X(t₁) ≠ 0); and the extended variance decreases.
Specifically, when the amplitude A(t) is a monotone function, the extended-variance reduction (from the unconditional SLS distribution to the conditional SLS distribution) is V(t₁)|A(t₂)/A(t₁) − 1|^λ. In turn, measured relative to the extended variance of the conditional SLS distribution, the extended-variance reduction is

$$Q(t_1;t_2)=\frac{V(t_1)\left|\dfrac{A(t_2)}{A(t_1)}-1\right|^{\lambda}}{V(t_2)\left[1-\dfrac{C(t_1)}{C(t_2)}\right]}.\tag{5}$$
The increments’ SLS distributions give rise to three informative ratios [76] which will be addressed below: a signal-to-noise ratio (Section 3.1); a noise-to-noise ratio (Section 3.2); and a variance-to-variance ratio (Section 3.3). These ratios quantify the output’s memory, and they all involve the variance-reduction quantity of Equation (5). The section ends with a short conclusion (Section 3.4).

3.1. Signal-to-Noise Ratio

Consider the conditional statistical distribution of the increment X(t₂) − X(t₁), given the information X(t₁). The bottom row of Table 1, together with the definition of the SLS distribution, yields the stochastic representation

$$\Big[X(t_2)-X(t_1)\Big]\,\Big|\,X(t_1)\;\overset{\mathrm{Law}}{=}\;\underbrace{\left[\frac{A(t_2)}{A(t_1)}-1\right]}_{\text{Signal coefficient}}\cdot\underbrace{X(t_1)}_{\text{Signal}}\;+\;\underbrace{\left[V(t_2)\left(1-\frac{C(t_1)}{C(t_2)}\right)\right]^{1/\lambda}}_{\text{Noise coefficient}}\cdot\underbrace{R}_{\text{Noise}}.\tag{6}$$

Namely, in Equation (6): the equality is in law; R is a ‘standardized’ SLS random variable (i.e., with median m = 0 and with scale s = 1); and R is independent of the information X(t₁).
The stochastic representation of Equation (6) implies that the increment X(t₂) − X(t₁) is the sum of two parts—one deterministic and one stochastic. The deterministic part is the product of: (i) the information X(t₁)—which assumes the role of a known ‘signal’; and (ii) a signal coefficient. The stochastic part is the product of: (i) the SLS random variable R—which assumes the role of an unknown ‘noise’; and (ii) a noise coefficient.
Combined together, the signal coefficient and the noise coefficient yield a signal-to-noise ratio (SNR), SNR(t₁;t₂). Specifically: the SNR’s numerator is the absolute value of the signal coefficient of Equation (6); and the SNR’s denominator is the noise coefficient of Equation (6). It follows from Equation (6) that

$$SNR(t_1;t_2)=\left[\frac{Q(t_1;t_2)}{V(t_1)}\right]^{1/\lambda},\tag{7}$$
where Q(t₁;t₂) is the variance-reduction quantity of Equation (5).

3.2. Noise-to-Noise Ratio

Consider the unconditional statistical distribution of the increment X(t₂) − X(t₁). The middle row of Table 1, together with the definition of the SLS distribution, yields the stochastic representation

$$X(t_2)-X(t_1)\;\overset{\mathrm{Law}}{=}\;\underbrace{\left[V(t_1)\left|\frac{A(t_2)}{A(t_1)}-1\right|^{\lambda}+V(t_2)\left(1-\frac{C(t_1)}{C(t_2)}\right)\right]^{1/\lambda}}_{\text{Noise coefficient}}\cdot\underbrace{R}_{\text{Noise}}.\tag{8}$$

Namely, in Equation (8): the equality is in law; and R is a ‘standardized’ SLS random variable (i.e., with median m = 0 and with scale s = 1).
As described in Section 3.1, the stochastic representation of Equation (6) is the sum of a deterministic part and a stochastic part. In contrast, the stochastic representation of Equation (8) comprises only a stochastic part. Analogously to the stochastic part of Equation (6), the stochastic part of Equation (8) is the product of: (i) the SLS random variable R—which assumes the role of an unknown ‘noise’; and (ii) a noise coefficient.
As noted in the opening of this section (provided that the amplitude A(t) is not a flat function): the shift from the unconditional distribution to the conditional distribution results in a decrease of the extended variance. Indeed, once the information X(t₁) is given, the increment’s distribution becomes less noisy. The noise reduction is quantified by a noise-to-noise ratio (NNR), NNR(t₁;t₂).
Specifically, the NNR’s numerator is the noise coefficient of Equation (6)—the scale of the increment’s conditional SLS distribution. And, the NNR’s denominator is the noise coefficient of Equation (8)—the scale of the increment’s unconditional SLS distribution. It follows from Equations (6) and (8) that

$$NNR(t_1;t_2)=\left[\frac{1}{1+Q(t_1;t_2)}\right]^{1/\lambda},\tag{9}$$
where Q(t₁;t₂) is the variance-reduction quantity of Equation (5).

3.3. Variance-to-Variance Ratio

Consider the conditional distribution of the increment X(t₂) − X(t₁), given the information X(t₁). As described in Section 3.1, this conditional distribution admits the stochastic representation of Equation (6). Denote by R_con the right-hand side of Equation (6). The random variable R_con is SLS with: median m_con, which is the deterministic part (i.e., the ‘signal’ part) of the right-hand side of Equation (6); and scale s_con, which is the noise coefficient of the right-hand side of Equation (6).
Consider the unconditional distribution of the increment X(t₂) − X(t₁). As described in Section 3.2, this unconditional distribution admits the stochastic representation of Equation (8). Denote by R_unc the right-hand side of Equation (8). The random variable R_unc is SLS with: median m_unc = 0; and scale s_unc, which is the noise coefficient of the right-hand side of Equation (8).
Now, consider the deviations of the random variables R_con and R_unc from their medians. The ‘tail asymptotics’ of these deviations—in the LM-input case—are described by Equation (2): for the random variable R_con with the median m_con and the scale s_con; and for the random variable R_unc with the median m_unc and the scale s_unc. Equation (2) implies that the ratio of the tail asymptotics is

$$\frac{\Pr\left(\left|R_{con}-m_{con}\right|>x\right)}{\Pr\left(\left|R_{unc}-m_{unc}\right|>x\right)}\;\underset{x\to\infty}{\longrightarrow}\;\frac{s_{con}^{\lambda}}{s_{unc}^{\lambda}}=\left[\frac{s_{con}}{s_{unc}}\right]^{\lambda}.\tag{10}$$
The left-hand side of Equation (10) is a tail-to-tail ratio (TTR). The TTR’s numerator is the probability tail of the increment’s conditional SLS distribution. The TTR’s denominator is the probability tail of the increment’s unconditional SLS distribution.
The middle part of Equation (10) is a ‘variance-to-variance’ ratio (VVR), VVR(t₁;t₂). The VVR’s numerator s_con^λ is the extended variance of the increment’s conditional SLS distribution. The VVR’s denominator s_unc^λ is the extended variance of the increment’s unconditional SLS distribution.
The right-hand side of Equation (10) is the λth power of the ratio s_con/s_unc—which is the NNR (described in Section 3.2). So, Equations (9) and (10) imply that

$$VVR(t_1;t_2)=\frac{1}{1+Q(t_1;t_2)},\tag{11}$$
where Q(t₁;t₂) is the variance-reduction quantity of Equation (5).
Like the SNR and the NNR, the VVR is defined for the entire range of the SLS exponent (0 < λ ≤ 2). For the LM-input case (0 < λ < 2), this subsection presented a ‘tail perspective’ of the VVR and of the NNR—doing so via, respectively, the middle part and the right-hand side of Equation (10).

3.4. Conclusion

Based on the unconditional and conditional statistical distributions specified in Table 1, this section addressed three informative ratios.
The signal-to-noise ratio SNR(t₁;t₂) of Equation (7).
The noise-to-noise ratio NNR(t₁;t₂) of Equation (9).
The variance-to-variance ratio VVR(t₁;t₂) of Equation (11).
The ratios quantify the output’s memory. Indeed, the ratios quantify how the knowledge of the ‘present position’ X(t₁) affects the statistical distribution of the ‘future increment’ X(t₂) − X(t₁) (where t₀ < t₁ < t₂). It follows from the aforementioned ratio formulae that the ratios are coupled by the relation

$$VVR(t_1;t_2)=\left[NNR(t_1;t_2)\right]^{\lambda}=\frac{1}{1+V(t_1)\cdot\left[SNR(t_1;t_2)\right]^{\lambda}}.\tag{12}$$
Namely, the VVR and the NNR are coupled by a power-law relation. And, the coupling of the VVR/NNR and the SNR involves the extended variance of the output’s position at the time point t₁.
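The coupling of Equation (12) follows algebraically from Equations (5), (7), (9) and (11), and can be confirmed numerically. A minimal sketch with an arbitrary (hypothetical) exponent, amplitude, and clock:

```python
import numpy as np

# hypothetical parameters: positive amplitude, increasing clock, 0 < lam <= 2
lam = 1.5
A = lambda t: (1.0 + t) ** -1.0
C = lambda t: (1.0 + t) ** 2.0
V = lambda t: A(t) ** lam * C(t)             # Equation (4)

t1, t2 = 1.0, 3.0
# Equation (5): relative extended-variance reduction
Q = V(t1) * abs(A(t2) / A(t1) - 1) ** lam / (V(t2) * (1 - C(t1) / C(t2)))

snr = (Q / V(t1)) ** (1 / lam)               # Equation (7)
nnr = (1 / (1 + Q)) ** (1 / lam)             # Equation (9)
vvr = 1 / (1 + Q)                            # Equation (11)

# Equation (12): VVR = NNR**lam = 1 / (1 + V(t1) * SNR**lam)
print(np.isclose(vvr, nnr ** lam), np.isclose(vvr, 1 / (1 + V(t1) * snr ** lam)))
```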

4. Correlations

As noted in Section 2.2, the SLS process comprises two motions—Brown ( λ = 2 ) and Levy ( 0 < λ < 2 )—which have markedly different behaviors. On the one hand, BM is a continuous process whose positions’ variances are finite. On the other hand, LM is a pure-jump process whose positions’ variances are either not defined or infinite.
The input is set to be the SLS process. In turn, the input behaviors are induced to the output. Consequently, in the BM-input case: the output has a well-defined covariance function—from which the output’s correlations follow. Antithetically, in the LM-input case: the output does not have a well-defined covariance function, and hence its correlations are also not well-defined (in the common statistical sense).
Being a pure-jump process, LM has an underlying jump structure. The jump structure is specified by the Levy-Ito decomposition theorem [90,91,92], and it is Poissonian [54,147]. The jump structure gives rise to a Poissonian-correlations method [148,149] that is applicable in the context of Levy-driven motions at large [150,151,152,153]. In particular—in the LM-input case–the Poissonian-correlations method is applicable to the output, and it unveils the output’s intrinsic correlation structure [85].
This section is organized as follows. Firstly, the BM-input case is addressed (Section 4.1). Thereafter, with regard to the LM-input case: the underlying jump structure is described (Section 4.2); and the Poissonian correlations are presented (Section 4.3). The section ends with a short conclusion (Section 4.4).

4.1. BM Input

Consider the SLS input to be BM (λ = 2). In this case the input’s covariance function is Cov[L(τ₁), L(τ₂)] = τ₁ (where τ₁ ≤ τ₂) [50]. In turn, Equation (3) and the fact that the clock C(t) is monotone increasing imply that the output’s covariance function is Cov[X(t₁), X(t₂)] = A(t₁)A(t₂)C(t₁) (where t₁ ≤ t₂). With this covariance function at hand, the output’s correlations are deduced.
Denote by F_BM(t₁,t₂) the correlation of the output’s positions X(t₁) and X(t₂) (where t₁ ≤ t₂). Then, it follows straightforwardly from the output’s covariance function that

$$F_{BM}(t_1,t_2)=\sqrt{\frac{C(t_1)}{C(t_2)}}.\tag{13}$$
Section 3 addressed the conditional statistical distribution of the increment X(t₂) − X(t₁) (where t₀ < t₁ < t₂), given the information X(t₁). The corresponding correlation is that of the output’s position X(t₁) and the output’s increment X(t₂) − X(t₁). Denoting this correlation G_BM(t₁,t₂), a calculation that uses the output’s covariance function implies that [85]:

$$G_{BM}(t_1,t_2)=\frac{\dfrac{A(t_2)}{A(t_1)}-1}{\sqrt{1-2\,\dfrac{A(t_2)}{A(t_1)}+\dfrac{V(t_2)}{V(t_1)}}}.\tag{14}$$
Evidently, there is a marked difference between the two correlations. On the one hand, the correlation F_BM(t₁,t₂) is determined by the clock C(t) alone, and is not affected by the amplitude A(t). On the other hand, the correlation G_BM(t₁,t₂) is determined by both the amplitude A(t) and the clock C(t), and the amplitude’s effect on this correlation is dramatic: it determines whether G_BM(t₁,t₂) is positive, negative, or zero.
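Equations (13) and (14) can be checked against a direct simulation, since the joint BM positions can be sampled from independent Gaussian increments. The amplitude/clock pair below is a hypothetical choice:

```python
import numpy as np

rng = np.random.default_rng(3)

A = lambda t: 1.0 / (1.0 + t)        # hypothetical decreasing amplitude
C = lambda t: (1.0 + t) ** 3         # hypothetical increasing clock
V = lambda t: A(t) ** 2 * C(t)       # extended variance, lam = 2

t1, t2, m = 1.0, 2.0, 400_000
# BM at internal times C(t1) < C(t2): joint sampling via increments
L1 = rng.normal(0.0, np.sqrt(C(t1)), m)
L2 = L1 + rng.normal(0.0, np.sqrt(C(t2) - C(t1)), m)
X1, X2 = A(t1) * L1, A(t2) * L2

F_emp = np.corrcoef(X1, X2)[0, 1]
F_thy = np.sqrt(C(t1) / C(t2))                         # Equation (13)
r = A(t2) / A(t1)
G_emp = np.corrcoef(X1, X2 - X1)[0, 1]
G_thy = (r - 1) / np.sqrt(1 - 2 * r + V(t2) / V(t1))   # Equation (14)
print(round(F_emp - F_thy, 3), round(G_emp - G_thy, 3))
```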

4.2. LM Input

Consider the SLS input to be LM (0 < λ < 2). As noted in Section 2.2, LM is a pure-jump process. The jump structure of LM is described by the Levy-Ito decomposition theorem [90,91,92]. Specifically, the LM jumps form a Poisson point process [54,147]. In particular, the LM jumps whose sizes are greater than the positive level l form a ‘standard’ Poisson process with rate c_λ/l^λ (where c_λ is a constant that depends on the SLS exponent λ) [85].
The LM position L(τ) is the sum of all the LM jumps that occurred up to the time point τ. Following the scaling of space and time—according to the spatio-temporal transformation of Equation (3)—the input’s jump structure is induced to the output. Specifically, the output’s position X(t) is as follows: it is the sum of all the LM jumps that occurred up to the time point C(t), and the sizes of these jumps are multiplied by A(t).
Set the focus on the summands comprising X(t) whose sizes are greater than the positive level l, and denote their number by N_l(t). Namely, the number N_l(t) counts the LM jumps that: (i) occurred up to the time point C(t); and (ii) are greater than l/A(t). In turn, the aforementioned Poisson-process fact [85] implies that the number N_l(t) is a Poisson random variable whose mean and variance are

$$\mathbf{E}\left[N_l(t)\right]=\mathbf{Var}\left[N_l(t)\right]=\frac{c_{\lambda}}{l^{\lambda}}\cdot V(t),\tag{15}$$
where V(t) is the extended variance of Equation (4).
The right-hand side of Equation (15) is the product of two terms: (i) the level-dependent term c_λ/l^λ, which is the aforementioned Poisson-process rate; and (ii) the time-dependent term V(t), which is the extended variance of the output’s positions. So, Equation (15) gives a genuine variance meaning to the extended variance of Equation (4). [Indeed, prior to Equation (15), V(t) was merely an extension of the variance—rather than a genuine variance.]
Now, consider two time points t₁ < t₂, as well as a jump that is counted by the number N_l(t₁). Exploiting the Poissonian statistics of the LM input yields the following conclusion: the probability that the jump—which was counted by the number N_l(t₁)—will also be counted by the number N_l(t₂) is [85]:

$$P(t_1;t_2)=\min\left\{1,\left[\frac{A(t_2)}{A(t_1)}\right]^{\lambda}\right\}.\tag{16}$$
The quantity of Equation (16) manifests an intrinsic ‘survival probability’ of the output.
The parameter l of the number N_l(t) is, in effect, a resolution level. Note that while the mean and the variance of the number N_l(t) depend on the resolution level, the survival probability of Equation (16) does not. So, the survival probability is ‘level free’.
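The survival probability of Equation (16) reflects the Pareto form of the over-threshold jump sizes: since jumps above a level u arrive at rate c_λ/u^λ, a jump known to exceed u exceeds x > u with probability (u/x)^λ. A simulation sketch (all numeric values below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

lam = 1.2
A1, A2 = 1.0, 0.6        # decreasing amplitude: A(t2) < A(t1)
l = 2.0                  # resolution level

# jump sizes counted by N_l(t1) exceed u = l / A(t1); above that threshold
# they are Pareto(lam)-distributed: Pr(S > x | S > u) = (u/x)**lam
u = l / A1
m = 500_000
S = (rng.pareto(lam, m) + 1.0) * u   # Pareto-I samples with support [u, inf)

# such a jump is also counted by N_l(t2) iff S > l / A(t2)
surv_emp = np.mean(S > l / A2)
surv_thy = min(1.0, (A2 / A1) ** lam)    # Equation (16)
print(round(surv_emp, 3), round(surv_thy, 3))
```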

4.3. Poissonian Correlations

Section 4.1 addressed—in the BM-input case (λ = 2)—the following correlations: the correlation F_BM(t₁,t₂) of the output’s positions X(t₁) and X(t₂); and the correlation G_BM(t₁,t₂) of the output’s position X(t₁) and the output’s increment X(t₂) − X(t₁). Shifting from a BM input to an LM input has the following effect: the variances of the output’s positions shift from finite to either not defined or infinite. In turn, the correlations F_BM(t₁,t₂) and G_BM(t₁,t₂) cannot be carried over to the LM-input case (0 < λ < 2).
As described in Section 4.2, the LM input has an underlying Poissonian jump structure. This structure is induced to the output, and it gives rise to the numbers N_l(t) (t ≥ t₀)—which are Poisson random variables (with means and variances that are specified in Equation (15)). With regard to these numbers, analogues of the correlations F_BM(t₁,t₂) and G_BM(t₁,t₂) are indeed well-defined, and shall now be presented.
Denote by F_LM(t₁,t₂) the correlation of the numbers N_l(t₁) and N_l(t₂) (where t₁ ≤ t₂). It was established in [85] that if the amplitude A(t) is monotone decreasing then

$$F_{LM}(t_1;t_2)=\frac{C(t_1)}{C(t_2)}\sqrt{\frac{V(t_2)}{V(t_1)}}.\tag{17}$$
Denote by G_LM(t₁,t₂) the correlation of the number N_l(t₁) and of the numbers’ increment N_l(t₂) − N_l(t₁) (where t₁ < t₂). It was further established in [85] that if the amplitude A(t) is monotone decreasing then

$$G_{LM}(t_1;t_2)=\frac{\dfrac{C(t_1)}{C(t_2)}\,\dfrac{V(t_2)}{V(t_1)}-1}{\sqrt{1-2\,\dfrac{C(t_1)}{C(t_2)}\,\dfrac{V(t_2)}{V(t_1)}+\dfrac{V(t_2)}{V(t_1)}}}.\tag{18}$$
At the close of Section 4.2 it was noted that the survival probability of Equation (16) is ‘level free’. The Poissonian correlations F_LM(t₁;t₂) and G_LM(t₁;t₂) are also ‘level free’: they do not depend on the resolution level l—the parameter l of the number N_l(t).
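Equations (17) and (18) can be confirmed by simulating the counts themselves: N_l(t₁) is Poisson, each of its jumps survives to t₂ independently with the probability of Equation (16) (binomial thinning), and the jumps arriving in (t₁, t₂] form an independent Poisson count. In the sketch below, c_over_l stands for the combined constant c_λ/l^λ, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

lam = 1.5
A = lambda t: t ** -0.5              # hypothetical decreasing amplitude
C = lambda t: t                      # hypothetical increasing clock
V = lambda t: A(t) ** lam * C(t)
t1, t2 = 1.0, 4.0
c_over_l = 3.0                       # stands for c_lam / l**lam

m = 400_000
N1 = rng.poisson(c_over_l * V(t1), m)
p = (A(t2) / A(t1)) ** lam                           # Equation (16)
survivors = rng.binomial(N1, p)                      # jumps counted at both times
fresh = rng.poisson(c_over_l * V(t2) - c_over_l * V(t1) * p, m)
N2 = survivors + fresh

F_emp = np.corrcoef(N1, N2)[0, 1]
F_thy = (C(t1) / C(t2)) * np.sqrt(V(t2) / V(t1))     # Equation (17)
G_emp = np.corrcoef(N1, N2 - N1)[0, 1]
w = (C(t1) / C(t2)) * (V(t2) / V(t1))
G_thy = (w - 1) / np.sqrt(1 - 2 * w + V(t2) / V(t1)) # Equation (18)
print(round(F_emp - F_thy, 3), round(G_emp - G_thy, 3))
```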

4.4. Conclusions

This section addressed the output’s correlations. As elucidated above, there are marked differences between the two motions that comprise the SLS input: BM ( λ = 2 ) and LM ( 0 < λ < 2 ).
When the input is BM, the variances of its positions are finite—and hence so are the variances of the output’s positions. In turn, the following correlations of the output were derived.
The positions’ correlation F_BM(t₁,t₂) of Equation (13), and the position-increment correlation G_BM(t₁,t₂) of Equation (14).
When the input is LM, the variances of its positions are either not defined or infinite—and hence so are the variances of the output’s positions. In turn, the correlations of the BM-input case cannot be carried over to the LM-input case.
The LM has an underlying jump structure—which is induced to the output in the LM-input case. This jump structure gives rise to jump counts: numbers that count the output’s underlying jumps. In turn, the following jump-structure quantities were derived.
The survival probability P(t₁;t₂) of Equation (16).
The numbers’ correlation F_LM(t₁,t₂) of Equation (17), and the number-increment correlation G_LM(t₁,t₂) of Equation (18).
The survival probability P(t₁;t₂) has no parallel in the BM-input case. In the transition from the BM-input case to the LM-input case: the numbers-based correlations F_LM(t₁,t₂) and G_LM(t₁,t₂) are the ‘Levy counterparts’ of the positions-based correlations F_BM(t₁,t₂) and G_BM(t₁,t₂).

5. Steady-State Motions

With the general results of Section 3 and Section 4 at hand, the stage is now set to introduce and analyze this paper’s main object: the steady-state outputs of the spatio-temporal transformation of Equation (3). These outputs are henceforth termed steady-state motions (SSMs).
This section begins with the characterization of the SSMs (Section 5.1). Then, a quantitative analysis of the SSMs is presented (Section 5.2). The section concludes with two special cases of SSMs (Section 5.3): Ornstein-Uhlenbeck processes (OUPs); and extensions of power motions.

5.1. Steady State

Consider a real-valued random process whose timeline is the real line (−∞ < t < ∞). The process is stationary [43,44,45] when it is invariant with respect to temporal translations: t ↦ t + s, where s is a real shift parameter. In turn, when the process is stationary it is in steady state, i.e., the positions of the process—at all time points—are governed by a common statistical distribution. [Steady state is often described by the condition ∂p(t,x)/∂t = 0, where p(t,x) is the probability density function of the process’s position at the time point t. Informally, p(t,x) is the probability that at the time point t the value of the process is x.]
The input is set to be the SLS process. In turn, as described in Section 2.3, the statistical distribution of the output’s position X_t is SLS with: median zero; and scale V_t^(1/λ) (where V_t is the extended variance of Equation (4)). Thus, the output is in steady state if and only if the extended variance is a flat function: V_t = v, where v is a positive ‘variance parameter’.
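The steady-state condition can be made concrete with a short numerical sketch, assuming the BM-input case (λ = 2) and an illustrative power clock (neither drawn from the paper): coupling the amplitude to the clock as A_t = (v/C_t)^(1/2) makes the sampled positions X_t = A_t·B(C_t) share the same variance v at every time point.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma, v = 1.5, 1.0                      # illustrative clock exponent and variance parameter
C = lambda t: t ** gamma                 # illustrative power clock (positive, increasing for t >= 1)
A = lambda t: (v / C(t)) ** 0.5          # amplitude coupled to the clock (BM case, lambda = 2)

# BM input: L_tau = B_tau, so B(C_t) is Normal(0, C_t) and X_t = A_t * B(C_t).
n = 200_000
variances = []
for t in (1.0, 2.0, 5.0, 10.0):
    X_t = A(t) * rng.normal(0.0, np.sqrt(C(t)), size=n)
    variances.append(X_t.var())

print(variances)                         # all approximately v = 1: the output is in steady state
```

The same coupling with exponent 1/λ, in place of 1/2, yields the steady state for a general SLS input.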
As noted at the opening of this section, the steady-state outputs are termed steady-state motions (SSMs). The following four statements (in which t_1, t_2 > t_0) are equivalent characterizations of SSMs.
  • Extended-variance characterization: V(t_1) = V(t_2).
  • Amplitude-clock characterization: [A(t_2)/A(t_1)]^λ = C(t_1)/C(t_2).
  • Ratios characterization: V V R t 1 ; t 2 = N N R t 1 ; t 2 λ = 1 / [ 1 + v S N R t 1 ; t 2 λ ] .
  • Levy characterization: P(t_1; t_2) = F_LM(t_1, t_2) = C(t_1)/C(t_2).
The equivalence of the characterizations follows straightforwardly from Equations (4), (12), (16) and (17). Characterizations #1 to #3 apply to the entire range of the SLS exponent (0 < λ ≤ 2). The Levy characterization applies only to the LM-input case (0 < λ < 2).
The following conclusions hold with regard to SSMs.
  • As the clock C t is monotone increasing: the amplitude A t is monotone decreasing.
  • The SNR and NNR/VVR are inversely related: the larger the SNR—the smaller the NNR/VVR; the smaller the SNR—the larger the NNR/VVR.
  • In the BM-input case, the correlations of Section 4.1 are coupled by the relation G_BM(t_1; t_2) = −√{[1 − F_BM(t_1, t_2)]/2}.
  • In the LM-input case, the correlations of Section 4.3 are coupled by the relation G_LM(t_1; t_2) = −√{[1 − F_LM(t_1, t_2)]/2}.
Conclusion #1 follows straightforwardly from Equation (4). Conclusion #2 follows straightforwardly from the ratios characterization. The derivations of conclusions #3 and #4 are detailed in the Appendix A.
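The coupling of conclusions #3 and #4 rests on the Appendix A computation: for a pair of random variables with equal variances and correlation c, the correlation of Z_1 with the increment Z_2 − Z_1 equals −√[(1 − c)/2]. A Monte Carlo sketch (illustrative correlation and sample size, Gaussian pair) can spot-check this value.

```python
import numpy as np

rng = np.random.default_rng(2)

c, n = 0.6, 400_000                        # illustrative correlation and sample size
z1 = rng.normal(size=n)
z2 = c * z1 + np.sqrt(1.0 - c * c) * rng.normal(size=n)   # corr(z1, z2) = c

g_hat = np.corrcoef(z1, z2 - z1)[0, 1]     # empirical corr(Z1, Z2 - Z1)
g_exact = -np.sqrt((1.0 - c) / 2.0)        # exact value for the Gaussian pair

print(g_hat, g_exact)                      # both approximately -0.447
```

The empirical and exact values agree to Monte Carlo accuracy; note that the coupling value is negative whenever c < 1.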

5.2. Quantitative Analysis

General outputs of the spatio-temporal transformation of Equation (3) were analyzed via eight informative quantities. These quantities were presented in Section 3 and Section 4, and are the following.
With regard to the BM-input case: (1) the positions’ correlation F B M t 1 , t 2 of Equation (13); and (2) the position-increment correlation G B M t 1 ; t 2 of Equation (14). With regard to the LM-input case: (3) the survival probability P t 1 ; t 2 of Equation (16); (4) the numbers’ correlation F L M t 1 ; t 2 of Equation (17); and (5) the number-increment correlation G L M t 1 ; t 2 of Equation (18). With regard to both the BM-input and the LM-input cases, and based on the ‘mother quantity’ Q t 1 , t 2 of Equation (5): (6) the signal-to-noise ratio S N R t 1 ; t 2 of Equation (7); (7) the noise-to-noise ratio N N R t 1 ; t 2 of Equation (9); and (8) the variance-to-variance ratio V V R t 1 ; t 2 of Equation (11).
With no loss of generality, consider the positions of the SSMs to be standardized, i.e., with median zero (m = 0) and scale one (s = 1). So, the amplitude is A_t = C_t^(−1/λ), and hence the SSMs have two parameters: the exponent λ of the SLS input, which is a one-dimensional parameter; and the clock C_t of the spatio-temporal transformation, which is an infinite-dimensional parameter.
Substituting the amplitude A_t = C_t^(−1/λ) into the aforementioned formulae yields the following conclusion: all eight quantities depend on the clock C_t, and they do so via the clock ratio r = C(t_1)/C(t_2). As the clock C_t is positive and monotone increasing over the temporal range t > t_0, note that: when t_0 < t_1 < t_2, the clock ratio takes values in the unit-interval range 0 < r < 1.
Quantities #1 to #5 depend on the clock C_t alone (they do not depend on the SLS exponent λ). So, these quantities admit the functional form ϕ(r). Each of these functions ϕ(r) is monotone increasing. For each of these quantities, Table 2 specifies the function ϕ(r) and its limit values (as the clock ratio r approaches the endpoints of its unit-interval range). Graphs of the functions ϕ(r) are depicted in Figure 1.
Quantities #6 to #8, as well as their mother quantity, depend on both the clock C_t and the SLS exponent λ. So, these quantities admit the functional form ϕ_λ(r) (in which the SLS exponent λ assumes the role of the form’s parameter). Depending on the value of the parameter λ, the functions ϕ_λ(r) display three different shapes: monotone increasing; monotone decreasing; and flat. For each of these quantities, Table 3 specifies the function ϕ_λ(r), its shape, and its limit values (as the clock ratio r approaches the endpoints of its unit-interval range). Graphs of the functions ϕ_λ(r) of quantities #6 and #8 are depicted, respectively, in Figure 2 and Figure 3.

5.3. Special Cases

As described in Section 5.2, the clock ratio r = C t 1 / C t 2 assumes a key role in the context of quantifying various statistics of the SSMs. Two special cases of the clock ratio shall now be highlighted. These special cases correspond, respectively, to two special SSMs.
Exponential clock and OUPs. The clock ratio C t 1 / C t 2 is a function of the temporal difference t 2 t 1 alone—rather than a function of the time points t 1 and t 2 —if and only if the clock is an exponential function: C t = exp γ t , where γ is a positive exponent (as the clock is monotone increasing). In turn, the exponential clock produces the OUPs: the ‘regular’ OUP in the BM-input case; and the Levy-driven OUP in the LM-input case.
Power clock and power motions. The clock ratio C t 1 / C t 2 is a function of the temporal ratio t 2 / t 1 alone—rather than a function of the time points t 1 and t 2 —if and only if the clock is a power function: C t = t γ , where γ is a positive exponent (as the clock is monotone increasing). In turn, the power clock produces extensions of power motions: an extension of power BM [134,135,136,137] in the BM-input case; and an extension of power LM [83,84,85] in the LM-input case. In both these extensions the underlying Hurst exponent H (of power BM and of power LM) is extended from the positive values ( H > 0 ) to the zero value ( H = 0 ).
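The two special clocks can be told apart by a quick numerical check (a sketch with illustrative exponents): the exponential clock’s ratio is unchanged when both time points are shifted by the same amount, while the power clock’s ratio is unchanged when both time points are multiplied by the same factor.

```python
import math

gamma = 0.7                                   # illustrative positive exponent

# Exponential clock: the ratio depends on the lag t2 - t1 alone.
C_exp = lambda t: math.exp(gamma * t)
r_shift_a = C_exp(1.0) / C_exp(1.0 + 2.0)     # lag 2, starting at t1 = 1
r_shift_b = C_exp(5.0) / C_exp(5.0 + 2.0)     # lag 2, starting at t1 = 5

# Power clock: the ratio depends on the temporal ratio t2 / t1 alone.
C_pow = lambda t: t ** gamma
r_scale_a = C_pow(1.0) / C_pow(3.0)           # ratio 3, starting at t1 = 1
r_scale_b = C_pow(10.0) / C_pow(30.0)         # ratio 3, starting at t1 = 10

print(r_shift_a, r_shift_b)                   # equal: exp(-gamma * lag)
print(r_scale_a, r_scale_b)                   # equal: (t1 / t2)**gamma
```

These invariances are exactly the defining properties of, respectively, the OUP clock and the power-motion clock.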

6. Steady-State Insights

Section 5 introduced and analyzed the SSMs. This section carries on with further insights regarding the SSMs: the prediction of their increments (Section 6.1); their asymptotic behaviors (Section 6.2); and their ranges of dependence (Section 6.3).

6.1. Prediction

Denote by Δ = t 2 t 1 the temporal lag between the time points t 1 and t 2 (where t 0 < t 1 < t 2 ). Keeping the time point t 1 fixed, note that (as the clock C ( t ) is monotone increasing): the larger the lag Δ —the smaller the clock-ratio r = C t 1 / C t 1 + Δ . Thus, the different shapes of the functions ϕ r and ϕ λ r —which are specified in Table 2 and Table 3 above—imply the following behaviors of the corresponding quantities.
Increasing shape: the larger the lag Δ —the smaller the quantity.
Decreasing shape: the larger the lag Δ —the larger the quantity.
Flat shape: the quantity is invariant with respect to the lag Δ .
For quantities #1 to #5, intuition suggests that the shape is increasing. Indeed, as the lag Δ grows larger, one would expect that the correlations grow smaller, and that the survival probability grows smaller. For these quantities intuition is correct.
Quantities #6 to #8 quantify the SSMs’ memory: how the knowledge of the ‘present position’ X(t_1) affects the prediction of the ‘future increment’ X(t_2) − X(t_1). As with weather forecasts and with stock-market forecasts, intuition suggests that the larger the lag Δ, the more ‘noisy’ the prediction. So, as the lag Δ grows larger: the SNR should grow smaller; and the NNR and the VVR should grow larger. Memory-wise, intuition turns out to be tricky, as shall now be elucidated.
When the SLS exponent is in the range 1 < λ 2 then intuition is correct. However, when the SLS exponent is in the range 0 < λ 1 then intuition is wrong. Indeed, when the SLS exponent is in the range 0 < λ < 1 then the following counter-intuitive behavior is displayed: the larger the lag Δ —the better the prediction. And when the SLS exponent is one, λ = 1 , then the following odd behavior is displayed: the prediction is invariant with respect to the lag Δ .
The ‘prediction invariance’ holds only when λ = 1 , and this particular value of the SLS exponent marks a phase transition between the counter-intuitive prediction behavior ( 0 < λ < 1 ) and the intuitive prediction behavior ( 1 < λ 2 ). The SLS exponent λ = 1 also characterizes the case in which the SLS input is the Cauchy process [90,91,92].
And there is more to the counter-intuitive and odd behaviors of the prediction. Equation (2) implies that the smaller the SLS exponent λ, the ‘heavier’ the tails of the SLS distribution, and hence the ‘wilder’ the distribution’s fluctuations. Consequently, intuition suggests that the smaller the SLS exponent λ, the more ‘noisy’ the prediction. Yet again, intuition is wrong.
Keeping the SLS exponent fixed (at any value in the range 0 < λ ≤ 2), the shapes of quantities #6 to #8, as functions ϕ_λ(r) of the clock ratio r, are specified in Table 3 above. Keeping the clock ratio r fixed (at any value in the range 0 < r < 1), the shapes of quantities #6 to #8, now as functions ϕ_λ(r) of the SLS exponent λ, are as follows (where ϕ_0(r) = lim_{λ→0} ϕ_λ(r)).
SNR: monotone decreasing from ϕ_0(r) = ∞ to ϕ_2(r) = (1 − r)/(1 + r).
NNR: monotone increasing from ϕ_0(r) = 0 to ϕ_2(r) = (1 + r)/2.
VVR: monotone increasing from ϕ_0(r) = 0 to ϕ_2(r) = (1 + r)/2.
These monotone shapes (with respect to the SLS exponent λ ) imply the following counter-intuitive behavior: the smaller the SLS exponent λ —the better the prediction. Namely, increasing the fluctuations of the SLS distribution (by decreasing the SLS exponent λ ) results in a surprising outcome: the prediction becomes less noisy—rather than more noisy, as intuition suggests. The monotone shapes of the SNR and of the VVR are depicted in Figure 4.

6.2. Asymptotic Behaviors

As in Section 6.1, denote by Δ = t 2 t 1 the temporal lag between the time points t 1 and t 2 (where t 0 < t 1 < t 2 ). Three asymptotic behaviors of the clock-ratio r = C t 1 / C t 1 + Δ shall now be addressed.
The first asymptotic behavior is with regard to the short-lag limit Δ 0 . In this case the time point t 1 is kept fixed, and the resulting clock-ratio limit is one: lim Δ 0 r = 1 .
The second asymptotic behavior is with regard to the long-lag limit Δ . In this case the time point t 1 is also kept fixed, and the resulting clock-ratio limit is zero: lim Δ r = 0 (as the clock C ( t ) is monotone increasing to infinity).
The third asymptotic behavior is with regard to the large-time limit t_1 → ∞. In this case the temporal lag Δ is kept fixed, and the resulting clock-ratio limit is: either zero, lim_{t_1→∞} r = 0 (as in the long-lag limit); or one, lim_{t_1→∞} r = 1 (as in the short-lag limit); or intermediate, i.e., larger than zero and smaller than one. The intermediate scenario is described as follows.
In the intermediate scenario the clock-ratio limit admits a universal exponential form: lim_{t_1→∞} r = exp(−γΔ), where γ is a positive exponent. This universal clock-ratio limit is the clock ratio of the exponential clock C_t = exp(γt). As noted in Section 5.3, the exponential clock produces the OUPs: the ‘regular’ OUP in the BM-input case; and the Levy-driven OUP in the LM-input case.
Introduce the function C̃(u) = C(ln u), where u ≥ exp(t_0). For the third asymptotic behavior, the clock-ratio limit is as follows [89]: zero, lim_{t_1→∞} r = 0, if and only if C̃(u) is rapidly varying; one, lim_{t_1→∞} r = 1, if and only if C̃(u) is slowly varying; and intermediate, lim_{t_1→∞} r = exp(−γΔ), if and only if C̃(u) is regularly varying, in which case γ is a positive ‘regular-variation exponent’.
So, the three asymptotic behaviors can be summarized as follows.
The long-lag limit (Δ → ∞), as well as the rapidly-varying scenario of the large-time limit (t_1 → ∞), yield the zero clock-ratio limit: r → 0.
The short-lag limit (Δ → 0), as well as the slowly-varying scenario of the large-time limit (t_1 → ∞), yield the unit clock-ratio limit: r → 1.
The regularly-varying scenario of the large-time limit (t_1 → ∞) yields the OUP clock ratio: r → exp(−γΔ), where γ is a positive exponent.
Per each of the eight quantities, Table 2 and Table 3 specify the limit values that correspond to the clock-ratio limits r 0 and r 1 . Table 4 presents examples of clocks C t that, in the large-time limit ( t 1 ), produce: the rapidly-varying scenario ( r 0 ); and the slowly-varying scenario ( r 1 ).
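The three large-time scenarios can be illustrated numerically, assuming three illustrative clocks (power, exponential, super-exponential; the choices and the exponent are mine, not the paper’s). Evaluating the clocks in log-space avoids overflow for the super-exponential case.

```python
import math

gamma, Delta = 0.8, 1.0                                 # illustrative exponent and fixed lag

# Work with log C(t) so the super-exponential clock does not overflow.
log_clocks = {
    "power (slowly varying)":              lambda t: gamma * math.log(t),
    "exponential (regularly varying)":     lambda t: gamma * t,
    "super-exponential (rapidly varying)": lambda t: t ** 2,
}

limits = {}
for name, logC in log_clocks.items():
    t1 = 1_000.0                                        # a large time point
    limits[name] = math.exp(logC(t1) - logC(t1 + Delta))   # clock ratio r at large t1

print(limits)   # power: near 1; exponential: exp(-gamma * Delta); super-exponential: near 0
```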

6.3. Range of Dependence

Consider a stationary process whose positions have finite variances. With no loss of generality, the process can be considered to be ‘standardized’, i.e.: its positions have zero means and unit variances. In turn, the process’ ‘second order statistics’ are coded by the process’ auto-correlation function ρ Δ ( Δ 0 ). Namely, ρ t 2 t 1 is the correlation of the process’ positions at the time points t 1 and t 2 (where t 1 t 2 ).
The process’ ‘range-of-dependence’ is determined by the integrability of its auto-correlation function at infinity [80,81,82]. Specifically, when the auto-correlation function is integrable at infinity then the process is said to be short-range dependent (SRD). And when the auto-correlation function is not integrable at infinity then the process is said to be long-range dependent (LRD). The integrability at infinity can be formulated as follows: fix the time point t 1 , and check the integrability of ρ t 2 t 1 —as a function of the temporal variable t 2 —in the limit t 2 .
Now, switch from a general stationary process to the SSMs of Section 5. As described in Section 5.1 with regard to the BM-input case, the switch induces the following replacement: the stationary correlation ρ t 2 t 1 is replaced by the steady-state correlation C t 1 / C t 2 . In turn, the notions of SRD and LRD follow naturally to the SSMs (generated by BM input): fix the time point t 1 , and check the integrability of C t 1 / C t 2 —as a function of the temporal variable t 2 —in the limit t 2 .
In the LM-input case the variances of the output’s positions are either not defined or infinite. Thus, the notions of SRD and LRD cannot be carried on ‘as is’ from the BM-input case to the LM-input case. Nonetheless, alternative notions of SRD and LRD can be devised for the LM-input case. These alternative notions are based on the survival probability of the SSMs (generated by LM input): P t 1 ; t 2 = C t 1 / C t 2 .
The survival probability P t 1 ; t 2 —as a function of the variable t 2 —is a tail distribution function over the range t 2 t 1 . Namely, for a fixed time point t 1 (where t 1 > t 0 ): P t 1 ; t 2 is monotone decreasing from the value lim t 2 t 1 P t 1 ; t 2 = 1 to the value lim t 2 P t 1 ; t 2 = 0 . In turn, the mean of the corresponding statistical distribution is: finite when the tail distribution function is integrable; and infinite when the tail distribution function is not integrable.
As the survival probability is the clock ratio, the integrability of the tail distribution function P t 1 ; t 2 is equivalent to: the integrability of the function 1 / C t 2 over the range t 2 t 1 (where t 1 > t 0 ). Thus, the notions of SRD and LRD—for SSMs that are generated by LM input—are determined by the integrability of the reciprocal of the clock function at infinity.
If 1 / C t is integrable at infinity ( t ) then the SSM is SRD.
If 1 / C t is not integrable at infinity ( t ) then the SSM is LRD.
Table 4 presents examples of clocks C t that—according to the integrability criteria for the LM-input case—produce SSMs that are either SRD or LRD.
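The integrability criterion can be probed with a crude numerical tail integral of 1/C_t. The sketch below (the helper `tail_integral` and the power-clock examples are illustrative assumptions) shows the partial integrals still growing for a power clock with exponent below one (LRD), and settling for an exponent above one (SRD).

```python
def tail_integral(recip_C, T, t_start=1.0, step=0.01):
    """Left-Riemann integral of recip_C from t_start to T on a geometric grid."""
    total, t = 0.0, t_start
    while t < T:
        dt = step * t
        total += recip_C(t) * dt
        t += dt
    return total

# Power clock C(t) = t**gamma, so 1/C(t) = t**(-gamma).
partials = {}
for gamma in (0.5, 2.0):
    partials[gamma] = [tail_integral(lambda t: t ** (-gamma), T)
                       for T in (1e2, 1e4, 1e6)]

print(partials)   # gamma = 0.5: still growing (LRD); gamma = 2.0: converged (SRD)
```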

7. Steady-State Dynamics

Section 5 and Section 6 investigated various statistics of the SSMs. This section investigates the dynamics of the SSMs (Section 7.1), draws insights from the dynamics (Section 7.2 and Section 7.3), and then demonstrates the insights via examples (Section 7.4).

7.1. Langevin Dynamics

Due to its independent-increments property, the SLS process is Markov [40,41,42]. Due to its structure, the spatio-temporal transformation of Equation (3) maps Markov inputs to Markov outputs. So, as the input is set to be the SLS process—the output is Markov.
The spatio-temporal transformation of Equation (3) maps the input’s positions to the output’s positions. The corresponding map of the input’s velocities to the output’s velocities is [84]:
$$\dot{X}_t = \frac{\dot{A}_t}{A_t}\,X_t + \left[A_t^{\lambda}\,\dot{C}_t\right]^{1/\lambda}\dot{L}_t\,.$$
The input’s velocity process is Gaussian ‘white noise’ in the BM-input case ( λ = 2 ), and is Levy ‘white noise’ in the LM-input case ( 0 < λ < 2 ). [When the output’s range ( t ≥ t_0 ) stretches beyond the input’s range ( τ ≥ 0 ), the white noise should be stretched accordingly.]
Equation (19) manifests Langevin dynamics that are ‘driven’ by the input process. More specifically, in the BM-input case Equation (19) is a ‘regular’ Langevin stochastic differential equation (SDE) [47,48,49]. And in the LM-input case Equation (19) is a ‘Levy-driven’ Langevin SDE—an object that attracted major interest [154,155,156,157,158,159,160,161,162,163,164,165], and that continues to do so [166,167,168,169,170,171,172,173,174].
The term A ˙ t / A t appearing on the right-hand side of Equation (19) is the damping coefficient of the Langevin dynamics. In the BM-input case ( λ = 2 ) the term A t λ C ˙ t appearing on the right-hand side of Equation (19) is the diffusion coefficient of the Langevin dynamics. In general—and in analogy with the extended variance of Equation (4)—the term A t λ C ˙ t can be addressed as an ‘extended diffusion coefficient’.
Consider the output to be in steady state, and, as in Section 5.2, set A_t = C_t^(−1/λ). In turn: the damping coefficient is −(1/λ)·Ċ_t/C_t; and the extended diffusion coefficient is Ċ_t/C_t. So, the Langevin dynamics of the SSMs are governed by the SLS exponent λ, and by the following infinite-dimensional parameter: the clock’s logarithmic derivative Ċ_t/C_t.
As the clock C_t is positive and monotone increasing (over the temporal range t > t_0), the damping coefficient is negative: −(1/λ)·Ċ_t/C_t < 0. In turn, the Langevin dynamics of the SSMs are ‘pushing’ towards the value zero, which is the median of the SSMs’ positions. So, the Langevin dynamics of the SSMs are ‘center reverting’ [175,176,177].
The ratio of the damping coefficient −(1/λ)·Ċ_t/C_t to the extended diffusion coefficient Ċ_t/C_t is −1/λ. The damping-diffusion ratio −1/λ is the negative of the Hurst exponent 1/λ of the SLS input [83], and it almost characterizes the SSMs. Indeed, a calculation shows that the damping-diffusion ratio −1/λ implies that A_t^(−λ) = C_t + k, where k is an integration constant. In turn, when the integration constant is zero (k = 0), the amplitude-clock relation A_t = C_t^(−1/λ) is attained.
The damping coefficient −(1/λ)·Ċ_t/C_t and the extended diffusion coefficient Ċ_t/C_t are both constant, rather than time dependent, if and only if the clock is an exponential function: C_t = exp(γt), where γ is a positive exponent (as the clock is monotone increasing). As noted in Section 5.3, the exponential clock produces the OUPs: the ‘regular’ OUP in the BM-input case; and the Levy-driven OUP in the LM-input case.
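For the exponential clock, the steady-state Langevin dynamics reduce to a constant-coefficient OUP, which is easy to simulate. The Euler-Maruyama sketch below (illustrative parameters; BM input, λ = 2) starts an ensemble in the steady state and checks that the unit variance is preserved by the dynamics dX = −(γ/λ)X dt + γ^(1/λ) dB.

```python
import numpy as np

rng = np.random.default_rng(1)

lam, gamma = 2.0, 1.0               # BM input (lambda = 2); illustrative clock exponent
theta = gamma / lam                 # magnitude of the damping coefficient (1/lambda) * gamma
sigma = gamma ** (1.0 / lam)        # noise amplitude gamma**(1/lambda)

dt, steps, n_paths = 0.01, 400, 50_000
X = rng.normal(0.0, 1.0, size=n_paths)       # start in the steady state (unit variance)

# Euler-Maruyama integration of the center-reverting Langevin dynamics.
for _ in range(steps):
    X += -theta * X * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print(X.var())    # stays approximately 1: the steady state is preserved
```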

7.2. Logarithmic-Derivative Perspective

As noted in Section 5.2 with regard to the SSMs: the transformation’s (infinite-dimensional) parameter is the clock C t . As described in Section 7.1 with regard to the Langevin dynamics of the SSMs: the dynamics’ (infinite-dimensional) parameter is the clock’s logarithmic derivative: H t = C ˙ t / C t (the reason for using the letter H to denote the logarithmic derivative shall become clear in Section 7.3 below).
With no loss of generality, the clock’s initial value C t 0 can be assumed to be positive, and the clock’s logarithmic derivative can be assumed to be integrable at the temporal origin t 0 . [Indeed, if either of these assumptions does not hold then simply set a new temporal origin that is larger than t 0 .] Consequently, in terms of the logarithmic derivative H t the clock admits the formulation
$$C_t = C_{t_0}\exp\!\left(\int_{t_0}^{t} H_u\,du\right).$$
Now, assume that the logarithmic derivative has a limit value H_∞ = lim_{t→∞} H_t. As the clock C_t is monotone increasing, its logarithmic derivative H_t is positive, and hence: the limit value H_∞ is either zero, or positive, or infinite.
Equation (20) implies that the clock ratio admits the formulation C(t_1)/C(t_2) = exp[−∫_{t_1}^{t_2} H_u du]. With this formulation at hand, an asymptotic calculation implies that: the large-time limit t_1 → ∞ of the clock ratio r = C(t_1)/C(t_1 + Δ) (where the temporal lag Δ = t_2 − t_1 is kept fixed) is lim_{t_1→∞} r = exp(−H_∞·Δ). In turn, the following characterization of the three scenarios of the large-time limit, which were stated in Section 6.2, is attained.
The slowly-varying scenario holds if and only if H_∞ = 0.
The regularly-varying scenario holds if and only if 0 < H_∞ < ∞.
The rapidly-varying scenario holds if and only if H_∞ = ∞.
Introduce the function H^1_t = t·H_t, and assume that it has a limit value H^1_∞ = lim_{t→∞} H^1_t. As with the limit value H_∞, the limit value H^1_∞ is: either zero, or positive, or infinite. The limit value H^1_∞ turns out to determine the integrability of the clock’s reciprocal 1/C_t at infinity [46]: if H^1_∞ < 1 then 1/C_t is not integrable; and if H^1_∞ > 1 then 1/C_t is integrable. In turn, the integrability criteria of Section 6.3 yield the following implications for SSMs that are generated by LM input.
If H^1_∞ < 1 then the SSM is LRD.
If H^1_∞ > 1 then the SSM is SRD.
Evidently, if H_∞ is either positive or infinite then H^1_∞ = ∞. Thus, the following pair of conclusions holds for SSMs that are generated by LM input. LRD conclusion: the SSM can be LRD only in the slowly-varying scenario. SRD conclusion: in the regularly-varying and the rapidly-varying scenarios the SSM is always SRD.

7.3. Hazard-Rate Perspective

It follows from Equation (20) that C t 0 / C t is a tail distribution function over the temporal range t > t 0 . Namely, there is a random variable T that takes values in the temporal range t > t 0 , and whose statistical distribution is governed by the tail distribution function
$$\Pr\left(T > t\right) = \exp\!\left(-\int_{t_0}^{t} H_u\,du\right).$$
Equation (21) implies that the function H t ( t > t 0 ) is the hazard rate of the random variable T [178,179,180]. Namely, H t is the likelihood that the random variable T be realized right after the time point t—given the information that T was not realized up to the time point t.
So, in terms of the random variable T the clock’s logarithmic derivative H t admits a probabilistic hazard-rate meaning (and hence the letter H was used to denote the clock’s logarithmic derivative). Also, in terms of the random variable T the clock-ratio admits the following conditional-probability meaning:
$$\frac{C_{t_1}}{C_{t_2}} = \Pr\left(T > t_2 \mid T > t_1\right).$$
Namely, the clock ratio is the probability that T is larger than t 2 , given the information that T is larger than t 1 (where t 0 < t 1 < t 2 ).
Moreover, the integrability of the clock’s reciprocal 1 / C t at infinity is equivalent to the convergence of the mean E [ T + ] , where T + = max { 0 , T } is the positive part of the random variable T. In turn, the following holds for SSMs that are generated by LM input: the SSM is SRD if and only if E [ T + ] < ; and the SSM is LRD if and only if E [ T + ] = .

7.4. Examples

The logarithmic-derivative perspective and the hazard-rate perspective that were presented in Section 7.2 and Section 7.3 shall now be demonstrated via three examples: Pareto [181]; Weibull [182]; and Gumbel [183]. These examples correspond, respectively, to the following clock examples of Table 4: #1, #3, and #5. As in Table 4, in all three examples γ is a positive exponent.
Pareto distributions [184,185,186] and the Weibull distribution [187,188,189] are widely applied in science and engineering. The Weibull distribution and the Gumbel distributions are (two of the three) universal extreme-value distributions [190,191,192].
Pareto example. In this example t_0 is a positive number, and the tail distribution function is Pareto “type I”: Pr(T > t) = (t_0/t)^γ. In turn, the hazard rate is the harmonic function H_t = γ/t, and hence: H_∞ = 0 and H^1_∞ = γ. In this example the conditional probability Pr(T > t_2 | T > t_1) is also a Pareto “type I” tail distribution function.
Weibull example. In this example t_0 = 0, and the tail distribution function is Weibull: Pr(T > t) = exp(−t^γ). In turn, the hazard rate is the power function H_t = γ·t^(γ−1), and hence: H_∞ = 0 when γ < 1; H_∞ = 1 when γ = 1; H_∞ = ∞ when γ > 1; and H^1_∞ = ∞. The hazard limits H_∞ = 0 and H_∞ = ∞ characterize, respectively, the sub-exponential and the super-exponential ‘relaxation regimes’ of the Weibull distribution [193,194,195].
Gumbel example. In this example t_0 = −∞, and the tail distribution function is Gumbel: Pr(T > t) = exp[−γ·exp(t)]. In turn, the hazard rate is the exponential function H_t = γ·exp(t), and hence: H_∞ = ∞ and H^1_∞ = ∞. In this example the conditional probability Pr(T > t_2 | T > t_1) is the Gompertz tail distribution function [196,197,198].
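The three hazard rates can be recovered numerically. The sketch below (illustrative exponents, not from the paper) evaluates each hazard rate at a moderately large time: the Pareto product t·H_t stays pinned at its exponent, the Weibull hazard (with exponent below one) vanishes while t·H_t diverges, and the Gumbel hazard blows up; a finite-difference check also confirms that the hazard rate is the negative logarithmic derivative of the tail.

```python
import math

g = 1.5                                          # illustrative positive exponent

hazard = {
    "Pareto":  lambda t: g / t,                  # H_t = gamma / t
    "Weibull": lambda t: 0.5 * t ** (0.5 - 1.0), # H_t = gamma * t**(gamma - 1), gamma = 0.5
    "Gumbel":  lambda t: g * math.exp(t),        # H_t = gamma * exp(t)
}

t = 50.0
report = {name: (H(t), t * H(t)) for name, H in hazard.items()}
print(report)

# Cross-check: the hazard rate is -d/dt log Pr(T > t); Pareto tail (t0/t)**g with t0 = 1.
S = lambda u: u ** (-g)
h = 1e-6
numeric_hazard = (math.log(S(t)) - math.log(S(t + h))) / h
print(numeric_hazard, hazard["Pareto"](t))       # both approximately g / t
```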

8. Overview

This paper introduced and explored a novel and versatile class of real-valued random motions that are in statistical steady state. This section offers readers an ‘executive-summary’ overview of the paper.
The steady-state class is devised via a spatio-temporal transformation of a foundational real-valued random motion: the symmetric Levy-stable (SLS) process. The SLS process, its spatio-temporal transformation, and the resulting steady-state motions (SSMs) are as follows.
The SLS process is parameterized by an exponent λ that takes values in the range 0 < λ ≤ 2, and it comprises two random motions: Brownian motion (BM), which is characterized by the SLS exponent value λ = 2; and Levy motion (LM), which is characterized by the SLS exponent range 0 < λ < 2.
BM and LM are principal stochastic models in science and engineering, and they display diametric behaviors. On the one hand BM is a continuous process, the variances of its positions are finite, and the probability tails of its positions are ‘light’. On the other hand LM is a pure-jump process, the variances of its positions are either not defined or infinite, and the probability tails of its positions are ‘heavy’. Thus, the fluctuations of BM are ‘mild’, and the fluctuations of LM are ‘wild’.
The spatio-temporal transformation is a mapping of real-valued random motions. The transformation acts by scaling, in both space and time, the positions of its input, thus generating the positions of its output. Here the input is the SLS process, and its positions are L_τ ( τ ≥ 0 ). In turn, the output’s positions are
$$X_t = C_t^{-1/\lambda}\,L_{C_t}$$
( t ≥ t_0 ), where C_t is the transformation’s clock.
The clock C_t is positive over the temporal range t > t_0, and it is monotone increasing to infinity: lim_{t→∞} C_t = ∞. The term C_t^(−1/λ) appearing on the right-hand side of Equation (23) is the transformation’s amplitude. The amplitude C_t^(−1/λ) determines the transformation’s spatial scaling, and the clock C_t determines the transformation’s temporal scaling.
The amplitude and the clock are coupled in a particular way: to produce SSMs. Namely, outputs whose positions—at all time points—share the same statistical distribution. And indeed, the properties of the SLS process imply that the output’s positions X t are all equal, in law, to the random variable L 1 (the input’s position at the time point τ = 1 ).
Equation (23) maps the positions of the SLS to the positions of the SSMs. The mapping of the corresponding velocities is given by the following Langevin stochastic differential equation:
$$\dot{X}_t = -\frac{1}{\lambda}\,H_t\,X_t + H_t^{1/\lambda}\,\dot{L}_t\,,$$
where H t = C ˙ t / C t is the clock’s logarithmic derivative. The stochastic dynamics that are manifested by the Langevin Equation (24) are center reverting, i.e.: the dynamics are ‘pushing’ towards the origin 0 of the spatial axis. When the input is LM then the Langevin Equation (24) is ‘Levy-driven’.
With no loss of generality, the initial time point t 0 (of the temporal axis t t 0 ) can be set so that the clock’s initial value is one: C t 0 = 1 . Doing so, the clock’s reciprocal 1 / C t and the clock’s logarithmic derivative H t turn out to have probabilistic manifestations. These manifestations are in terms of a general random variable T that takes values over the temporal range t > t 0 .
Indeed, the clock’s reciprocal and the clock’s logarithmic derivative are, respectively, the random-variable’s tail distribution function and hazard rate:
$$\Pr\left(T > t\right) = \frac{1}{C_t} = \exp\!\left(-\int_{t_0}^{t} H_u\,du\right).$$
Equation (25) couples together—vividly and transparently—the random variable T, the function C t , and the function H t .
So, the SSMs have two underlying parameters: the input’s and the transformation’s. The input’s parameter is the SLS exponent λ , it is one-dimensional, it governs the statistics of the SSMs’ positions, and hence it governs the SSMs’ fluctuations. The transformation’s parameter is infinite-dimensional, it governs the SSMs’ temporal structure, and hence it governs the SSMs’ inter-dependencies. The transformation’s parameter admits the following equivalent representations.
A scaling representation via the function C t of Equation (23).
A Langevin representation via the function H t of Equation (24).
A probabilistic representation via the random variable T of Equation (25).
The SSMs were analyzed, comprehensively, via: eight different quantities; three different asymptotics; and a pair of integrability criteria. The eight quantities are specified in Table 2 and Table 3 above, and they comprise two sets. A set of five quantities that depend on the transformation’s parameter alone, and that quantify the SSMs’ inter-dependencies. And a set of three quantities that depend on both parameters, and that quantify the SSMs’ memory.
Focusing on temporal lags, the three asymptotics address (per each of the eight quantities): a short-lag limit; a long-lag limit; and a large-time limit. The integrability criteria depend on the transformation’s parameter alone, and they determine when the inter-dependencies of the SSMs are either short-ranged or long-ranged. The analysis provides a detailed ‘statistical picture’ of the SSMs.
Special cases of SSMs are Ornstein-Uhlenbeck processes (OUPs). Namely: the ‘regular’ OUP when the input is BM; and the ‘Levy-driven’ OUP when the input is LM. The OUPs are characterized by the following equivalent statements (in statements #3 to #5 γ is a positive Ornstein-Uhlenbeck parameter).
  • The SSM is a stationary process.
  • The Langevin Equation (24) is time homogeneous.
  • The clock is an exponential function, C t = exp ( γ t ) .
  • The hazard rate is a flat function, H t = γ .
  • The random variable T is exponentially distributed with mean E [ T ] = 1 / γ .
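Statements #3 to #5 can be tied together in a few lines (a sketch, assuming an illustrative γ and the normalization t_0 = 0, C(t_0) = 1): with the exponential clock, the tail 1/C_t is exponential, the hazard rate is flat at γ, and the tail-integral formula recovers E[T] = 1/γ.

```python
import math

gamma, t0 = 2.0, 0.0
S = lambda t: math.exp(-gamma * (t - t0))    # tail: Pr(T > t) = 1 / C(t), with C(t0) = 1

# Flat hazard rate: H_t = -d/dt log S(t) = gamma at every t.
h = 1e-6
hazard_at = lambda t: (math.log(S(t)) - math.log(S(t + h))) / h

# Mean via the tail-integral formula: E[T] = t0 + integral of S over (t0, infinity).
dt, total, t = 1e-4, 0.0, t0
while S(t) > 1e-12:
    total += S(t) * dt
    t += dt
mean_T = t0 + total

print(hazard_at(1.0), hazard_at(5.0), mean_T)   # gamma, gamma, approximately 1/gamma
```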
The regular OUP is the only random motion that is Gaussian and Markov and stationary. In the transition from the regular OUP to the Levy-driven OUP the Gaussian property is discarded, and the Markov and stationary properties are maintained. In the transition from the regular OUP to Gaussian-stationary processes the Markov property is discarded, and the Gaussian and stationary properties are maintained.
In the transition from the regular OUP to the SSMs the Markov property is maintained, the stationary property is relaxed, and the Gaussian property: is maintained in the BM-input case; and is discarded in the LM-input case. A comparison of the different stochastic models, and of their key features, is presented in Table 5.
The bottom line: this paper established a versatile stochastic model for real-valued random motions. The model has a one-dimensional parameter and an infinite-dimensional parameter, and it facilitates the combination of the following ‘regular’ and ‘anomalous’ features.
Regular-wise: the motions are in steady state, are Markov, and their dynamics are Langevin.
Anomalous-wise, and tuned by the one-dimensional parameter: the motions can display wild fluctuations—a.k.a. ‘Noah effect’—and they have an adjustable memory structure.
Anomalous-wise, and tuned by the infinite-dimensional parameter: the motions can display long-ranged temporal dependencies—a.k.a. ‘Joseph effect’—and they have an adjustable correlation structure.
As noted in the introduction and throughout the manuscript, Levy-driven stochastic models in general, and the Levy-driven OUP and Levy-driven Langevin dynamics in particular, have attracted and continue to attract major interest in science and engineering. With regard to the LM input, this paper established a Levy-driven stochastic model, with Levy-driven Langevin dynamics, and with the above features (see also the right column of Table 5).
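As a hedged illustration of such Levy-driven Langevin dynamics (an Euler sketch with assumed parameters α = 1.5 and γ = 1, not the paper’s construction), the following Python snippet drives linear Langevin relaxation with symmetric α-stable increments generated by the Chambers-Mallows-Stuck method; the heavy-tailed excursions of the resulting path exhibit the Noah effect:

```python
import numpy as np

def stable_rvs(alpha, size, rng):
    """Standard symmetric alpha-stable variates (0 < alpha < 2, beta = 0)
    via the Chambers-Mallows-Stuck method."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))

def simulate_levy_oup(alpha, gamma, dt, n_steps, seed=0):
    """Euler sketch of a Levy-driven OUP: linear Langevin relaxation
    forced by symmetric alpha-stable increments of scale dt**(1/alpha)."""
    rng = np.random.default_rng(seed)
    dl = dt ** (1.0 / alpha) * stable_rvs(alpha, n_steps, rng)
    x = np.empty(n_steps + 1)
    x[0] = 0.0
    for i in range(n_steps):
        x[i + 1] = x[i] - gamma * x[i] * dt + dl[i]
    return x

path = simulate_levy_oup(alpha=1.5, gamma=1.0, dt=0.01, n_steps=100_000)
```

The steady-state law here is symmetric α-stable: its heavy tails produce occasional huge jumps (the Noah effect), so the sample variance of such a path does not converge.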
Several potential directions for future research on the novel steady-state stochastic model are the following. For theoreticians: the study of the model’s first-passage times [199,200,201]; and the study of the model ‘under restart’ [202,203,204]. For statisticians: estimation of the model’s underlying parameters. And for experimentalists and practitioners: real-world applications of the model. Evidently, real-world applications and parameter estimation are intertwined. Future research in these intertwined directions can begin with special cases of the steady-state model, e.g., the ones presented in Table 4.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Consider a pair of real-valued random variables, $Z_1$ and $Z_2$, that satisfy the following conditions: identical variances, $\mathrm{Var}[Z_1] = \mathrm{Var}[Z_2] = v$, where $v > 0$; and correlation $c$, where $-1 \le c < 1$. Then, the correlation of the random variable $Z_1$ and the random-variables difference $Z_2 - Z_1$ is:
$$
\frac{\operatorname{Cov}(Z_1, Z_2 - Z_1)}{\sqrt{\operatorname{Var}[Z_1]\,\operatorname{Var}[Z_2 - Z_1]}}
= \frac{\operatorname{Cov}(Z_1, Z_2) - \operatorname{Var}[Z_1]}{\sqrt{\operatorname{Var}[Z_1]\left(\operatorname{Var}[Z_1] + \operatorname{Var}[Z_2] - 2\operatorname{Cov}(Z_1, Z_2)\right)}}
= \frac{cv - v}{\sqrt{v(2v - 2cv)}}
= \frac{c - 1}{\sqrt{2(1 - c)}}
= -\sqrt{\frac{1 - c}{2}}. \tag{A1}
$$
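Equation (A1) is distribution-free: it uses only the variances and the covariance of the pair. A quick numerical sanity check, with illustrative values c = 0.3 and v = 2, and with Gaussian pairs chosen purely for sampling convenience:

```python
import numpy as np

# Verify Corr(Z1, Z2 - Z1) = -sqrt((1 - c)/2) for an equal-variance,
# correlation-c pair; Equation (A1) holds for any such distribution.
rng = np.random.default_rng(1)
c, v = 0.3, 2.0
cov = v * np.array([[1.0, c], [c, 1.0]])
z1, z2 = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T
empirical = np.corrcoef(z1, z2 - z1)[0, 1]
theoretical = -np.sqrt((1.0 - c) / 2.0)
```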
Consider the SLS input to be BM ($\lambda = 2$), and set $Z_1 = X_{t_1}$ and $Z_2 = X_{t_2}$ (where $t_0 < t_1 < t_2$). When the output is in steady state, the above conditions are met, with $c = F_{BM}(t_1, t_2)$. Hence Equation (A1) implies that
$$
G_{BM}(t_1, t_2) = -\sqrt{\frac{1 - F_{BM}(t_1, t_2)}{2}}. \tag{A2}
$$
Consider the SLS input to be LM ($0 < \lambda < 2$), and set $Z_1 = N^{l}_{t_1}$ and $Z_2 = N^{l}_{t_2}$ (where $t_0 < t_1 < t_2$). When the output is in steady state, the above conditions are met, with $c = F_{LM}(t_1, t_2)$. Hence Equation (A1) implies that
$$
G_{LM}(t_1, t_2) = -\sqrt{\frac{1 - F_{LM}(t_1, t_2)}{2}}. \tag{A3}
$$

References

  1. Uhlenbeck, G.E.; Ornstein, L.S. On the theory of the Brownian motion. Phys. Rev. 1930, 36, 823. [Google Scholar] [CrossRef]
  2. Caceres, M.O.; Budini, A.A. The generalized Ornstein-Uhlenbeck process. J. Phys. A Math. Gen. 1997, 30, 8427. [Google Scholar] [CrossRef]
  3. Bezuglyy, V.; Mehlig, B.; Wilkinson, M.; Nakamura, K.; Arvedson, E. Generalized Ornstein-Uhlenbeck processes. J. Math. Phys. 2006, 47, 073301. [Google Scholar] [CrossRef]
  4. Maller, R.A.; Muller, G.; Szimayer, A. Ornstein-Uhlenbeck processes and extensions. In Handbook of Financial Time Series; Springer: Berlin/Heidelberg, Germany, 2009; pp. 421–437. [Google Scholar]
  5. Debbasch, F.; Mallick, K.; Rivet, J. Relativistic Ornstein-Uhlenbeck process. J. Stat. Phys. 1997, 88, 945–966. [Google Scholar] [CrossRef]
  6. Graversen, S.; Peskir, G. Maximal inequalities for the Ornstein-Uhlenbeck process. Proc. Am. Math. Soc. 2000, 128, 3035–3041. [Google Scholar] [CrossRef]
  7. Aalen, O.O.; Gjessing, H.K. Survival models based on the Ornstein-Uhlenbeck process. Lifetime Data Anal. 2004, 10, 407–423. [Google Scholar] [CrossRef] [PubMed]
  8. Larralde, H. A first passage time distribution for a discrete version of the Ornstein–Uhlenbeck process. J. Phys. A Math. Gen. 2004, 37, 3759. [Google Scholar] [CrossRef]
  9. Eliazar, I.; Klafter, J. Markov-breaking and the emergence of long memory in Ornstein–Uhlenbeck systems. J. Phys. A Math. Theor. 2008, 41, 122001. [Google Scholar] [CrossRef]
  10. Eliazar, I.; Klafter, J. From Ornstein-Uhlenbeck dynamics to long-memory processes and fractional Brownian motion. Phys. Rev. E 2009, 79, 021115. [Google Scholar] [CrossRef]
  11. Wilkinson, M.; Pumir, A. Spherical Ornstein-Uhlenbeck Processes. J. Stat. Phys. 2011, 145, 113–142. [Google Scholar] [CrossRef]
  12. Gajda, J.; Wylomańska, A. Time-changed Ornstein-Uhlenbeck process. J. Phys. A Math. Theoretical 2015, 48, 135004. [Google Scholar] [CrossRef]
  13. Cherstvy, A.G.; Thapa, S.; Mardoukhi, Y.; Chechkin, A.V.; Metzler, R. Time averages and their statistical variation for the Ornstein-Uhlenbeck process: Role of initial particle distributions and relaxation to stationarity. Phys. Rev. E 2018, 98, 022134. [Google Scholar] [CrossRef]
  14. Thomas, P.J.; Lindner, B. Phase descriptions of a multidimensional Ornstein-Uhlenbeck process. Phys. Rev. E 2019, 99, 062221. [Google Scholar] [CrossRef]
  15. Mardoukhi, Y.; Chechkin, A.V.; Metzler, R. Spurious ergodicity breaking in normal and fractional Ornstein–Uhlenbeck process. New J. Phys. 2020, 22, 073012. [Google Scholar] [CrossRef]
  16. Giorgini, L.T.; Moon, W.; Wettlaufer, J.S. Analytical Survival Analysis of the Ornstein–Uhlenbeck Process. J. Stat. Phys. 2020, 181, 2404–2414. [Google Scholar] [CrossRef]
  17. Kearney, M.J.; Martin, R.J. Statistics of the first passage area functional for an Ornstein–Uhlenbeck process. J. Phys. A Math. Theor. 2021, 54, 055002. [Google Scholar] [CrossRef]
  18. Goerlich, R.; Li, M.; Albert, S.; Manfredi, G.; Hervieux, P.-A.; Genet, C. Noise and ergodic properties of Brownian motion in an optical tweezer: Looking at regime crossovers in an Ornstein-Uhlenbeck process. Phys. Rev. E 2021, 103, 032132. [Google Scholar] [CrossRef] [PubMed]
  19. Smith, N.R. Anomalous scaling and first-order dynamical phase transition in large deviations of the Ornstein-Uhlenbeck process. Phys. Rev. E 2022, 105, 014120. [Google Scholar] [CrossRef]
  20. Kersting, H.; Orvieto, A.; Proske, F.; Lucchi, A. Mean first exit times of Ornstein–Uhlenbeck processes in high-dimensional spaces. J. Phys. A Math. Theor. 2023, 56, 215003. [Google Scholar] [CrossRef]
  21. Bonilla, L.L. Active Ornstein-Uhlenbeck particles. Phys. Rev. E 2019, 100, 022601. [Google Scholar] [CrossRef]
  22. Sevilla, F.J.; Rodríguez, R.F.; Gomez-Solano, J.R. Generalized Ornstein-Uhlenbeck model for active motion. Phys. Rev. E 2019, 100, 032123. [Google Scholar] [CrossRef] [PubMed]
  23. Martin, D.; O’Byrne, J.; Cates, M.E.; Fodor, É.; Nardini, C.; Tailleur, J.; van Wijland, F. Statistical mechanics of active Ornstein-Uhlenbeck particles. Phys. Rev. E 2021, 103, 032607. [Google Scholar] [CrossRef] [PubMed]
  24. Nguyen, G.H.P.; Wittmann, R.; Löwen, H. Active Ornstein–Uhlenbeck model for self-propelled particles with inertia. J. Phys.: Condens. Matter 2021, 34, 035101. [Google Scholar]
  25. Dabelow, L.; Eichhorn, R. Irreversibility in Active Matter: General Framework for Active Ornstein-Uhlenbeck Particles. Front. Phys. 2021, 8, 516. [Google Scholar] [CrossRef]
  26. Trajanovski, P.; Jolakoski, P.; Zelenkovski, K.; Iomin, A.; Kocarev, L.; Sandev, T. Ornstein-Uhlenbeck process and generalizations: Particle dynamics under comb constraints and stochastic resetting. Phys. Rev. E 2023, 107, 054129. [Google Scholar] [CrossRef]
  27. Trajanovski, P.; Jolakoski, P.; Kocarev, L.; Sandev, T. Ornstein–Uhlenbeck Process on Three-Dimensional Comb under Stochastic Resetting. Mathematics 2023, 11, 3576. [Google Scholar] [CrossRef]
  28. Dubey, A.; Pal, A. First-passage functionals for Ornstein Uhlenbeck process with stochastic resetting. arXiv 2023, arXiv:2304.05226. [Google Scholar] [CrossRef]
  29. Strey, H.H. Estimation of parameters from time traces originating from an Ornstein-Uhlenbeck process. Phys. Rev. E 2019, 100, 062142. [Google Scholar] [CrossRef]
  30. Janczura, J.; Magdziarz, M.; Metzler, R. Parameter estimation of the fractional Ornstein–Uhlenbeck process based on quadratic variation. Chaos Interdiscip. J. Nonlinear Sci. 2023, 33, 103125. [Google Scholar]
  31. Trajanovski, P.; Jolakoski, P.; Kocarev, L.; Metzler, R.; Sandev, T. Generalised Ornstein-Uhlenbeck process: Memory effects and resetting. J. Phys. A Math. Theor. 2025, 58, 045001. [Google Scholar] [CrossRef]
  32. Pham, K.; Nguyen, A.T.; Nguyen, L.N.; Thi, T.V.D. Application of the Ornstein–Uhlenbeck Process to Generate Stochastic Vertical Wind Profiles. J. Aircr. 2025, 1–5. [Google Scholar] [CrossRef]
  33. Mandrysz, M.; Dybiec, B. Dynamical Multimodality in Systems Driven by Ornstein–Uhlenbeck Noise. Entropy 2025, 27, 263. [Google Scholar] [CrossRef]
  34. Xu, J.; Lu, Q.; Bar-Shalom, Y. Bayesian optimization for robust Identification of Ornstein-Uhlenbeck model. arXiv 2025, arXiv:2503.06381. [Google Scholar]
  35. Bassanoni, A.; Vezzani, A.; Barkai, E.; Burioni, R. Rare events and single big jump effects in Ornstein-Uhlenbeck processes. arXiv 2025, arXiv:2501.07704. [Google Scholar] [CrossRef]
  36. Doob, J.L. The Brownian Movement and Stochastic Equations. Ann. Math. 1942, 43, 351–369. [Google Scholar] [CrossRef]
  37. MacKay, D.J.C. Introduction to Gaussian processes. Nato Asi Ser. Comput. Syst. Sci. 1998, 168, 133–166. [Google Scholar]
  38. Ibragimov, I.; Rozanov, Y. Gaussian Random Processes; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  39. Lifshits, M. Lectures on Gaussian Processes; Springer: Berlin, Germany, 2012. [Google Scholar]
  40. Gillespie, D.T. Markov Processes: An Introduction for Physical Scientists; Elsevier: Amsterdam, The Netherlands, 1991. [Google Scholar]
  41. Liggett, T.M. Continuous Time Markov Processes: An Introduction; American Mathematical Soc.: Providence, RI, USA, 2010; Volume 113. [Google Scholar]
  42. Dynkin, E.B. Theory of Markov Processes; Dover: New York, NY, USA, 2012. [Google Scholar]
  43. Lindgren, G. Stationary Stochastic Processes: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  44. Lindgren, G.; Rootzen, H.; Sandsten, M. Stationary Stochastic Processes for Scientists and Engineers; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  45. Hida, T. Stationary Stochastic Processes; (MN-8); Princeton University Press: Princeton, NJ, USA, 2015; Volume 8. [Google Scholar]
  46. Eliazar, I. Five degrees of randomness. Phys. A Stat. Mech. Its Appl. 2021, 568, 125662. [Google Scholar] [CrossRef]
  47. Langevin, P. Sur la theorie du mouvement Brownien. Compt. Rendus 1908, 146, 530–533. [Google Scholar]
  48. Coffey, W.; Kalmykov, Y.P. The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering; World Scientific: Singapore, 2012. [Google Scholar]
  49. Pavliotis, G.A. Stochastic Processes and Applications: Diffusion Processes, the Fokker-Planck and Langevin Equations; Springer: Berlin/Heidelberg, Germany, 2014; Volume 60. [Google Scholar]
  50. Borodin, A.N.; Salminen, P. Handbook of Brownian Motion: Facts and Formulae; Birkhauser: Basel, Switzerland, 2015. [Google Scholar]
  51. Garbaczewski, P.; Olkiewicz, R. Ornstein-Uhlenbeck-Cauchy process. J. Math. Phys. 2000, 41, 6843–6860. [Google Scholar] [CrossRef]
  52. Eliazar, I.; Klafter, J. A growth-collapse model: Levy inflow, geometric crashes, and generalized Ornstein-Uhlenbeck dynamics. Phys. A Stat. Mech. Its Appl. 2004, 334, 1–21. [Google Scholar] [CrossRef]
  53. Jongbloed, G.; Van Der Meulen, F.; Van Der Vaart, A. Nonparametric inference for Lévy-driven Ornstein-Uhlenbeck processes. Bernoulli 2005, 11, 759–791. [Google Scholar] [CrossRef]
  54. Eliazar, I.; Klafter, J. Levy, Ornstein-Uhlenbeck, and subordination: Spectral vs. jump description. J. Stat. Phys. 2005, 119, 165–196. [Google Scholar] [CrossRef]
  55. Eliazar, I.; Klafter, J. Stochastic Ornstein-Uhlenbeck Capacitors. J. Stat. Phys. 2005, 118, 177–198. [Google Scholar] [CrossRef]
  56. Brockwell, P.J.; Davis, R.A.; Yang, Y. Estimation for non-negative Levy-driven Ornstein-Uhlenbeck processes. J. Appl. Probab. 2007, 44, 977–989. [Google Scholar] [CrossRef]
  57. Magdziarz, M. Short and long memory fractional Ornstein-Uhlenbeck alpha-stable processes. Stoch. Model. 2007, 23, 451–473. [Google Scholar] [CrossRef]
  58. Magdziarz, M. Fractional Ornstein-Uhlenbeck processes. Joseph effect in models with infinite variance. Phys. A Stat. Mech. Its Appl. 2008, 387, 123–133. [Google Scholar] [CrossRef]
  59. Brockwell, P.J.; Lindner, A. Ornstein-Uhlenbeck related models driven by Levy processes. Stat. Stoch. Differ. Equ. 2012, 124, 383–427. [Google Scholar]
  60. Toenjes, R.; Sokolov, I.M.; Postnikov, E.B. Nonspectral Relaxation in One Dimensional Ornstein-Uhlenbeck Processes. Phys. Rev. Lett. 2013, 110, 150602. [Google Scholar] [CrossRef]
  61. Riedle, M. Ornstein-Uhlenbeck Processes Driven by Cylindrical Lévy Processes. Potential Anal. 2015, 42, 809–838. [Google Scholar] [CrossRef]
  62. Thiel, F.; Sokolov, I.M.; Postnikov, E.B. Nonspectral modes and how to find them in the Ornstein-Uhlenbeck process with white μ-stable noise. Phys. Rev. E 2016, 93, 052104. [Google Scholar] [CrossRef]
  63. Wolpert, R.L.; Taqqu, M.S. Fractional Ornstein–Uhlenbeck Lévy processes and the Telecom process: Upstairs and downstairs. Signal Process. 2005, 85, 1523–1545. [Google Scholar] [CrossRef]
  64. Onalan, O. Financial modelling with Ornstein-Uhlenbeck processes driven by Levy process. In Proceedings of the World Congress on Engineering, London, UK, 1–3 July 2009; Volume 2, pp. 1–3. [Google Scholar]
  65. Onalan, O. Fractional Ornstein-Uhlenbeck processes driven by stable Levy motion in finance. Int. Res. J. Financ. Econ. 2010, 42, 129–139. [Google Scholar]
  66. Shu, Y.; Feng, Q.; Kao, E.P.; Liu, H. Lévy-driven non-Gaussian Ornstein–Uhlenbeck processes for degradation-based reliability analysis. IIE Trans. 2016, 48, 993–1003. [Google Scholar] [CrossRef]
  67. Chevallier, J.; Goutte, S. Estimation of Levy-driven Ornstein-Uhlenbeck processes: Application to modeling of CO2 and fuel-switching. Ann. Oper. Res. 2017, 255, 169–197. [Google Scholar] [CrossRef]
  68. Endres, S.; Stubinger, J. Optimal trading strategies for Levy-driven Ornstein–Uhlenbeck processes. Appl. Econ. 2019, 51, 3153–3169. [Google Scholar] [CrossRef]
  69. Kabanov, Y.; Pergamenshchikov, S. Ruin probabilities for a Lévy-driven generalised Ornstein–Uhlenbeck process. Financ. Stoch. 2019, 24, 39–69. [Google Scholar] [CrossRef]
  70. Mba, J.C.; Mwambi, S.M.; Pindza, E. A Monte Carlo Approach to Bitcoin Price Prediction with Fractional Ornstein–Uhlenbeck Lévy Process. Forecasting 2022, 4, 409–419. [Google Scholar] [CrossRef]
  71. Mariani, M.C.; Asante, P.K.; Kubin, W.; Tweneboah, O.K. Data Analysis Using a Coupled System of Ornstein–Uhlenbeck Equations Driven by Lévy Processes. Axioms 2022, 11, 160. [Google Scholar] [CrossRef]
  72. Barrera, G.; Högele, M.A.; Pardo, J.C. Cutoff Thermalization for Ornstein–Uhlenbeck Systems with Small Lévy Noise in the Wasserstein Distance. J. Stat. Phys. 2021, 184, 27. [Google Scholar] [CrossRef]
  73. Zhang, X.; Shu, H.; Yi, H. Parameter Estimation for Ornstein–Uhlenbeck Driven by Ornstein–Uhlenbeck Processes with Small Levy Noises. J. Theor. Probab. 2023, 36, 78–98. [Google Scholar] [CrossRef]
  74. Dexheimer, N.; Strauch, C. On Lasso and Slope drift estimators for Lévy-driven Ornstein–Uhlenbeck processes. Bernoulli 2024, 30, 88–116. [Google Scholar] [CrossRef]
  75. Wang, C.; Shu, H.; Shi, Y.; Zhang, X. Parameter estimation for partially observed stochastic processes driven by Ornstein–Uhlenbeck processes with small Lévy noises. Int. J. Syst. Sci. 2025, 1–9. [Google Scholar] [CrossRef]
  76. Eliazar, I. Levy Noise Affects Ornstein–Uhlenbeck Memory. Entropy 2025, 27, 157. [Google Scholar] [CrossRef] [PubMed]
  77. Adler, R.; Feldman, R.; Taqqu, M. A Practical Guide to Heavy Tails: Statistical Techniques and Applications; Springer: New York, NY, USA, 1998. [Google Scholar]
  78. Nair, J.; Wierman, A.; Zwart, B. The Fundamentals of Heavy-Tails: Properties, Emergence, and Identification; Cambridge University Press: Cambridge, UK, 2022. [Google Scholar]
  79. Mandelbrot, B.B.; Wallis, J.R. Noah, Joseph, and operational hydrology. Water Resour. Res. 1968, 4, 909–918. [Google Scholar] [CrossRef]
  80. Cox, D.R. Long-Range Dependence: A Review. In Statistics: An Appraisal; Iowa State University Press: Ames, IA, USA, 1984. [Google Scholar]
  81. Doukhan, P.; Oppenheim, G.; Taqqu, M. (Eds.) Theory and Applications of Long-Range Dependence; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  82. Rangarajan, G.; Ding, M. (Eds.) Processes with Long-Range Correlations: Theory and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  83. Eliazar, I. Power Levy motion. I. Diffusion. Chaos Interdiscip. J. Nonlinear Sci. 2025, 35, 033157. [Google Scholar] [CrossRef] [PubMed]
  84. Eliazar, I. Power Levy motion. II. Evolution. Chaos Interdiscip. J. Nonlinear Sci. 2025, 35, 033158. [Google Scholar] [CrossRef]
  85. Eliazar, I. Power Levy motion: Correlations and Relaxation. Phys. A Stat. Mech. Its Appl. 2025, in press. [Google Scholar]
  86. Zolotarev, V.M. One-Dimensional Stable Distributions; American Mathematical Soc.: Providence, RI, USA, 1986; Volume 65. [Google Scholar]
  87. Borak, S.; Hardle, W.; Weron, R. Stable Distributions; Humboldt-Universitat zu Berlin: Berlin, Germany, 2005. [Google Scholar]
  88. Nolan, J.P. Univariate Stable Distributions; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  89. Bingham, N.H.; Goldie, C.M.; Teugels, J.L. Regular Variation; Cambridge University Press: Cambridge, UK, 1987. [Google Scholar]
  90. Bertoin, J. Levy Processes; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  91. Sato, K.-I. Levy Processes and Infinitely Divisible Distributions; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar]
  92. Barndorff-Nielsen, O.E.; Mikosch, T.; Resnick, S.I. (Eds.) Levy Processes: Theory and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  93. Shlesinger, M.F.; Klafter, J. Levy walks versus Levy flights. In On Growth and Form: Fractal and Non-Fractal Patterns in Physics; Springer: Dordrecht, The Netherlands, 1986; pp. 279–283. [Google Scholar]
  94. Shlesinger, M.F.; Klafter, J.; West, B.J. Levy walks with applications to turbulence and chaos. Phys. A Stat. Mech. Its Appl. 1986, 140, 212–218. [Google Scholar] [CrossRef]
  95. Allegrini, P.; Grigolini, P.; West, B.J. Dynamical approach to Lévy processes. Phys. Rev. E 1996, 54, 4760. [Google Scholar] [CrossRef]
  96. Shlesinger, M.F.; West, B.J.; Klafter, J. Lévy dynamics of enhanced diffusion: Application to turbulence. Phys. Rev. Lett. 1987, 58, 1100. [Google Scholar] [CrossRef]
  97. Uchaikin, V.V. Self-similar anomalous diffusion and Levy-stable laws. Physics-Uspekhi 2003, 46, 821. [Google Scholar] [CrossRef]
  98. Chechkin, A.V.; Gonchar, V.Y.; Klafter, J.; Metzler, R. Fundamentals of Levy flight processes. In Fractals, Diffusion, and Relaxation in Disordered Complex Systems: Advances in Chemical Physics, Part B; John Wiley & Sons: Hoboken, NJ, USA, 2006; pp. 439–496. [Google Scholar]
  99. Metzler, R.; Chechkin, A.V.; Gonchar, V.Y.; Klafter, J. Some fundamental aspects of Lévy flights. Chaos Solitons Fractals 2007, 34, 129–142. [Google Scholar] [CrossRef]
  100. Chechkin, A.V.; Metzler, R.; Klafter, J.; Gonchar, V.Y. Introduction to the theory of Levy flights. In Anomalous Transport: Foundations and Applications; Wiley-VCH: Weinheim, Germany, 2008; pp. 129–162. [Google Scholar]
  101. Dubkov, A.A.; Spagnolo, B.; Uchaikin, V.V. Levy flight superdiffusion: An introduction. Int. J. Bifurc. Chaos 2008, 18, 2649–2672. [Google Scholar] [CrossRef]
  102. Schinckus, C. How physicists made stable Levy processes physically plausible. Braz. J. Phys. 2013, 43, 281–293. [Google Scholar] [CrossRef]
  103. Zaburdaev, V.; Denisov, S.; Klafter, J. Levy walks. Rev. Mod. Phys. 2015, 87, 483–530. [Google Scholar] [CrossRef]
  104. Reynolds, A.M. Current status and future directions of Lévy walk research. Biol. Open 2018, 7, bio030106. [Google Scholar] [CrossRef]
  105. Abe, M.S. Functional advantages of Lévy walks emerging near a critical point. Proc. Natl. Acad. Sci. USA 2020, 117, 24336–24344. [Google Scholar] [CrossRef] [PubMed]
  106. Garg, K.; Kello, C.T. Efficient Lévy walks in virtual human foraging. Sci. Rep. 2021, 11, 5242. [Google Scholar] [CrossRef]
  107. Mukherjee, S.; Singh, R.K.; James, M.; Ray, S.S. Anomalous Diffusion and Lévy Walks Distinguish Active from Inertial Turbulence. Phys. Rev. Lett. 2021, 127, 118001. [Google Scholar] [CrossRef]
  108. Gunji, Y.-P.; Kawai, T.; Murakami, H.; Tomaru, T.; Minoura, M.; Shinohara, S. Lévy Walk in Swarm Models Based on Bayesian and Inverse Bayesian Inference. Comput. Struct. Biotechnol. J. 2021, 19, 247–260. [Google Scholar] [CrossRef]
  109. Park, S.; Thapa, S.; Kim, Y.-J.; A Lomholt, M.; Jeon, J.-H. Bayesian inference of Lévy walks via hidden Markov models. J. Phys. A Math. Theor. 2021, 54, 484001. [Google Scholar] [CrossRef]
  110. Romero-Ruiz, A.; Rivero, M.J.; Milne, A.; Morgan, S.; Filho, P.M.; Pulley, S.; Segura, C.; Harris, P.; Lee, M.R.; Coleman, K.; et al. Grazing livestock move by Lévy walks: Implications for soil health and environment. J. Environ. Manag. 2023, 345, 118835. [Google Scholar] [CrossRef]
  111. Sakiyama, T.; Okawara, M. A short memory can induce an optimal Levy walk. In World Conference on Information Systems and Technologies; Springer Nature: Cham, Switzerland, 2023; pp. 421–428. [Google Scholar]
  112. Levernier, N.; Textor, J.; Benichou, O.; Voituriez, R. Inverse square Levy walks are not optimal search strategies for d≥2. Phys. Rev. Lett. 2020, 124, 080601. [Google Scholar] [CrossRef]
  113. Guinard, B.; Korman, A. Intermittent inverse-square Lévy walks are optimal for finding targets of all sizes. Sci. Adv. 2021, 7, eabe8211. [Google Scholar] [CrossRef]
  114. Clementi, A.; d’Amore, F.; Giakkoupis, G.; Natale, E. Search via parallel Levy walks on Z2. In Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing, Virtual, 26–30 July 2021; pp. 81–91. [Google Scholar]
  115. Padash, A.; Sandev, T.; Kantz, H.; Metzler, R.; Chechkin, A.V. Asymmetric Lévy Flights Are More Efficient in Random Search. Fractal Fract. 2022, 6, 260. [Google Scholar] [CrossRef]
  116. Majumdar, S.N.; Mounaix, P.; Sabhapandit, S.; Schehr, G. Record statistics for random walks and Lévy flights with resetting. J. Phys. A Math. Theor. 2021, 55, 034002. [Google Scholar] [CrossRef]
  117. Zbik, B.; Dybiec, B. Levy flights and Levy walks under stochastic resetting. Phys. Rev. E 2024, 109, 044147. [Google Scholar] [CrossRef] [PubMed]
  118. Radice, M.; Cristadoro, G. Optimizing leapover lengths of Lévy flights with resetting. Phys. Rev. E 2024, 110, L022103. [Google Scholar] [CrossRef]
  119. Xu, P.; Zhou, T.; Metzler, R.; Deng, W. Lévy walk dynamics in an external harmonic potential. Phys. Rev. E 2020, 101, 062127. [Google Scholar] [CrossRef]
  120. Aghion, E.; Meyer, P.G.; Adlakha, V.; Kantz, H.; Bassler, K.E. Moses, Noah and Joseph effects in Levy walks. New J. Phys. 2021, 23, 023002. [Google Scholar] [CrossRef]
  121. Cleland, J.D.; Williams, M.A.K. Analytical Investigations into Anomalous Diffusion Driven by Stress Redistribution Events: Consequences of Lévy Flights. Mathematics 2022, 10, 3235. [Google Scholar] [CrossRef]
  122. Lim, S.C.; Muniandy, S.V. Self-similar Gaussian processes for modeling anomalous diffusion. Phys. Rev. E 2002, 66, 021114. [Google Scholar] [CrossRef]
  123. Jeon, J.-H.; Chechkin, A.V.; Metzler, R. Scaled Brownian motion: A paradoxical process with a time dependent diffusivity for the description of anomalous diffusion. Phys. Chem. Chem. Phys. 2014, 16, 15811–15817. [Google Scholar]
  124. Thiel, F.; Sokolov, I.M. Scaled Brownian motion as a mean-field model for continuous-time random walks. Phys. Rev. E 2014, 89, 012115. [Google Scholar] [CrossRef] [PubMed]
  125. Safdari, H.; Chechkin, A.V.; Jafari, G.R.; Metzler, R. Aging scaled Brownian motion. Phys. Rev. E 2015, 91, 042107. [Google Scholar] [CrossRef]
  126. Safdari, H.; Cherstvy, A.G.; Chechkin, A.V.; Thiel, F.; Sokolov, I.M.; Metzler, R. Quantifying the non-ergodicity of scaled Brownian motion. J. Phys. A: Math. Theor. 2015, 48, 375002. [Google Scholar] [CrossRef]
  127. Bodrova, A.S.; Chechkin, A.V.; Cherstvy, A.G.; Metzler, R. Ultraslow scaled Brownian motion. New J. Phys. 2015, 17, 063038. [Google Scholar] [CrossRef]
  128. Bodrova, A.S.; Chechkin, A.V.; Cherstvy, A.G.; Safdari, H.; Sokolov, I.M.; Metzler, R. Underdamped scaled Brownian motion: (non-)existence of the overdamped limit in anomalous diffusion. Sci. Rep. 2016, 6, 30520. [Google Scholar] [CrossRef]
  129. Safdari, H.; Cherstvy, A.G.; Chechkin, A.V.; Bodrova, A.; Metzler, R. Aging underdamped scaled Brownian motion: Ensemble-and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation. Phys. Rev. E 2017, 95, 012120. [Google Scholar] [CrossRef]
  130. Bodrova, A.S.; Chechkin, A.V.; Sokolov, I.M. Scaled Brownian motion with renewal resetting. Phys. Rev. E 2019, 100, 012120. [Google Scholar] [CrossRef]
  131. Dos Santos, M.A.F.; Junior, L.M. Random diffusivity models for scaled Brownian motion. Chaos Solitons Fractals 2021, 144, 110634. [Google Scholar] [CrossRef]
  132. Dos Santos, M.A.F.; Menon, L., Jr.; Cius, D. Superstatistical approach of the anomalous exponent for scaled Brownian motion. Chaos Solitons Fractals 2022, 164, 112740. [Google Scholar] [CrossRef]
  133. Wang, W.; Metzler, R.; Cherstvy, A.G. Anomalous diffusion, aging, and nonergodicity of scaled Brownian motion with fractional Gaussian noise: Overview of related experimental observations and models. Phys. Chem. Chem. Phys. 2022, 24, 18482–18504. [Google Scholar] [CrossRef] [PubMed]
  134. Bauer, B.; Gerhold, S. Self-similar Gaussian Markov processes. arXiv 2020, arXiv:2008.03052. [Google Scholar]
  135. Eliazar, I. Power Brownian motion. J. Phys. A Math. Theor. 2024, 57, 03LT01. [Google Scholar] [CrossRef]
  136. Eliazar, I. Power Brownian Motion: An Ornstein–Uhlenbeck lookout. J. Phys. A Math. Theor. 2024, 58, 015001. [Google Scholar] [CrossRef]
  137. Eliazar, I. Taylor’s Law from Gaussian diffusions. J. Phys. A Math. Theor. 2025, 58, 015004. [Google Scholar] [CrossRef]
  138. Lamperti, J. Semi-stable stochastic processes. Trans. Am. Math. Soc. 1962, 104, 62–78. [Google Scholar] [CrossRef]
  139. Burnecki, K.; Maejima, M.; Weron, A. The Lamperti transformation for self-similar processes: Dedicated to the memory of Stamatis Cambanis. Yokohama Math. J. 1997, 44, 25–42. [Google Scholar]
  140. Flandrin, P.; Borgnat, P.; Amblard, P.-O. From stationarity to self-similarity, and back: Variations on the Lamperti transformation. In Processes with Long-Range Correlations; Springer: Berlin/Heidelberg, Germany, 2003; pp. 88–117. [Google Scholar]
  141. Magdziarz, M.; Zorawik, T. Lamperti transformation-cure for ergodicity breaking. Commun. Nonlinear Science Numer. Simul. 2019, 71, 202–211. [Google Scholar] [CrossRef]
  142. Magdziarz, M. Lamperti transformation of scaled Brownian motion and related Langevin equations. Commun. Nonlinear Sci. Numer. Simul. 2020, 83, 105077. [Google Scholar] [CrossRef]
  143. Bianchi, S.; Angelini, D.; Pianese, A.; Frezza, M. Rough volatility via the Lamperti transform. Commun. Nonlinear Sci. Numer. Simul. 2023, 127, 107582. [Google Scholar] [CrossRef]
  144. Molchan, G. The persistence exponents of Gaussian random fields connected by the Lamperti transform. J. Stat. Phys. 2022, 186, 21. [Google Scholar] [CrossRef]
  145. Kyprianou, A.E.; Pardo, J.C. Stable Levy Processes via Lamperti-Type Representations; Cambridge University Press: Cambridge, UK, 2022. [Google Scholar]
  146. Eliazar, I. Selfsimilar stochastic differential equations. Europhys. Lett. 2022, 136, 40002. [Google Scholar] [CrossRef]
  147. Eliazar, I. Power Laws: A Statistical Trek; Springer Nature: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  148. Eliazar, I.; Klafter, J. Correlation cascades of Levy-driven random processes. Phys. A Stat. Mech. Its Appl. 2007, 376, 1–26. [Google Scholar] [CrossRef]
  149. Eliazar, I.; Klafter, J. Fractal Levy correlation cascades. J. Phys. A Math. Theor. 2007, 40, F307. [Google Scholar] [CrossRef]
  150. Magdziarz, M. Correlation cascades, ergodic properties and long memory of infinitely divisible processes. Stoch. Process. Their Appl. 2009, 119, 3416–3434. [Google Scholar] [CrossRef]
  151. Weron, A.; Magdziarz, M. Generalization of the Khinchin theorem to Levy flights. Phys. Rev. Lett. 2010, 105, 260603. [Google Scholar] [CrossRef]
  152. Magdziarz, M.; Weron, A. Ergodic properties of anomalous diffusion processes. Ann. Phys. 2011, 326, 2431–2443. [Google Scholar] [CrossRef]
  153. Magdziarz, M.; Weron, A. Anomalous diffusion: Testing ergodicity breaking in experimental data. Phys. Rev. E 2011, 84, 051138. [Google Scholar] [CrossRef]
  154. Fogedby, H.C. Langevin equations for continuous time Levy flights. Phys. Rev. E 1994, 50, 1657–1660. [Google Scholar] [CrossRef] [PubMed]
  155. Jespersen, S.; Metzler, R.; Fogedby, H.C. Lévy flights in external force fields: Langevin and fractional Fokker-Planck equations and their solutions. Phys. Rev. E 1999, 59, 2736. [Google Scholar] [CrossRef]
  156. Chechkin, A.; Gonchar, V.; Klafter, J.; Metzler, R.; Tanatarov, L. Stationary states of non-linear oscillators driven by Lévy noise. Chem. Phys. 2002, 284, 233–251. [Google Scholar] [CrossRef]
  157. Brockmann, D.; Sokolov, I. Lévy flights in external force fields: From models to equations. Chem. Phys. 2002, 284, 409–421. [Google Scholar] [CrossRef]
  158. Eliazar, I.; Klafter, J. Lévy-Driven Langevin Systems: Targeted Stochasticity. J. Stat. Phys. 2003, 111, 739–768. [Google Scholar] [CrossRef]
  159. Chechkin, A.V.; Gonchar, V.Y.; Klafter, J.; Metzler, R.; Tanatarov, L.V. Levy flights in a steep potential well. J. Stat. Phys. 2004, 115, 1505–1535. [Google Scholar] [CrossRef]
  160. Dybiec, B.; Gudowska-Nowak, E.; Sokolov, I.M. Stationary states in Langevin dynamics under asymmetric Levy noises. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2007, 76, 041122. [Google Scholar] [CrossRef] [PubMed]
  161. Dybiec, B.; Sokolov, I.M.; Chechkin, A.V. Stationary states in single-well potentials under symmetric Lévy noises. J. Stat. Mech. Theory Exp. 2010, 2010, P07008. [Google Scholar] [CrossRef]
  162. Eliazar, I.I.; Shlesinger, M.F. Langevin unification of fractional motions. J. Phys. A Math. Theor. 2012, 45, 162002. [Google Scholar] [CrossRef]
  163. Magdziarz, M.; Szczotka, W.; Żebrowski, P. Langevin Picture of Lévy Walks and Their Extensions. J. Stat. Phys. 2012, 147, 74–96. [Google Scholar] [CrossRef]
  164. Sandev, T.; Metzler, R.; Tomovski, Ž. Velocity and displacement correlation functions for fractional generalized Langevin equations. Fract. Calc. Appl. Anal. 2012, 15, 426–450. [Google Scholar] [CrossRef]
  165. Liemert, A.; Sandev, T.; Kantz, H. Generalized Langevin equation with tempered memory kernel. Phys. A Stat. Mech. Its Appl. 2017, 466, 356–369. [Google Scholar] [CrossRef]
  166. Chen, Y.; Wang, X.; Deng, W. Langevin dynamics for a Lévy walk with memory. Phys. Rev. E 2019, 99, 012135. [Google Scholar] [CrossRef] [PubMed]
  167. Wang, X.; Chen, Y.; Deng, W. Levy-walk-like Langevin dynamics. New J. Phys. 2019, 21, 013024. [Google Scholar] [CrossRef]
  168. Barrera, G.; Högele, M.A.; Pardo, J.C. The Cutoff Phenomenon in Wasserstein Distance for Nonlinear Stable Langevin Systems with Small Lévy Noise. J. Dyn. Differ. Equ. 2022, 36, 251–278.
  169. Liu, Y.; Wang, J.; Zhang, M.-G. Exponential Contractivity and Propagation of Chaos for Langevin Dynamics of McKean-Vlasov Type with Lévy Noises. Potential Anal. 2024, 62, 27–60.
  170. Bao, J.; Fang, R.; Wang, J. Exponential ergodicity of Lévy driven Langevin dynamics with singular potentials. Stoch. Process. Their Appl. 2024, 172, 104341.
  171. Chen, Y.; Deng, W. Lévy-walk-like Langevin dynamics affected by a time-dependent force. Phys. Rev. E 2021, 103, 012136.
  172. Chen, Y.; Wang, X.; Ge, M. Lévy-walk-like Langevin dynamics with random parameters. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 013109.
  173. Oechsler, D. Lévy Langevin Monte Carlo. Stat. Comput. 2024, 34, 37.
  174. Padash, A.; Capala, K.P.; Kantz, H.; Dybiec, B.; Shokri, B.; Metzler, R.; Chechkin, A. First-passage statistics for Lévy flights with a drift. J. Phys. A Math. Theor. 2025, 58, 185002.
  175. Daniel, K. The power and size of mean reversion tests. J. Empir. Finance 2001, 8, 493–535.
  176. Eliazar, I. The misconception of mean-reversion. J. Phys. A Math. Theor. 2012, 45, 332001.
  177. Allen, E. Environmental variability and mean-reverting processes. Discrete Contin. Dyn. Syst. Ser. B 2016, 21, 2073–2089.
  178. Barlow, R.E.; Proschan, F. Mathematical Theory of Reliability; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1996.
  179. Finkelstein, M. Failure Rate Modelling for Reliability and Risk; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008.
  180. Dhillon, B.S. Engineering Systems Reliability, Safety, and Maintenance: An Integrated Approach; CRC Press: Boca Raton, FL, USA, 2017.
  181. Pareto, V. Manual of Political Economy; reprint ed.; Oxford University Press: Oxford, UK, 2014; originally published by Droz, Geneva, in 1896.
  182. Weibull, W. A statistical theory of strength of materials. Proc. Roy. Swed. Inst. Eng. Res. 1939, 151, 5–45.
  183. Gumbel, E.J. Statistics of Extremes; Columbia University Press: New York, NY, USA, 1958.
  184. Newman, M.E.J. Power laws, Pareto distributions and Zipf’s law. Contemp. Phys. 2005, 46, 323–351.
  185. Clauset, A.; Shalizi, C.R.; Newman, M.E.J. Power-law distributions in empirical data. SIAM Rev. 2009, 51, 661–703.
  186. Arnold, B.C. Pareto Distributions; Routledge: London, UK, 2020.
  187. Murthy, D.N.P.; Xie, M.; Jiang, R. Weibull Models; John Wiley & Sons: Hoboken, NJ, USA, 2004; Volume 505.
  188. Rinne, H. The Weibull Distribution: A Handbook; CRC Press: Boca Raton, FL, USA, 2008.
  189. McCool, J.I. Using the Weibull Distribution: Reliability, Modeling, and Inference; John Wiley & Sons: Hoboken, NJ, USA, 2012; Volume 950.
  190. Kotz, S.; Nadarajah, S. Extreme Value Distributions: Theory and Applications; World Scientific: Singapore, 2000.
  191. Reiss, R.-D.; Thomas, M. Statistical Analysis of Extreme Values: With Applications to Insurance, Finance, Hydrology and Other Fields; Birkhäuser: Basel, Switzerland, 2007.
  192. Hansen, A. The Three Extreme Value Distributions: An Introductory Review. Front. Phys. 2020, 8, 604053.
  193. Phillips, J.C. Stretched exponential relaxation in molecular and electronic glasses. Rep. Prog. Phys. 1996, 59, 1133.
  194. Kalmykov, Y.P.; Coffey, W.T.; Rice, S.A. (Eds.) Fractals, Diffusion, and Relaxation in Disordered Complex Systems; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  195. Bouchaud, J.-P. Anomalous relaxation in complex systems: From stretched to compressed exponentials. In Anomalous Transport: Foundations and Applications; Wiley-VCH: Weinheim, Germany, 2008; pp. 327–345.
  196. Gompertz, B. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. In a letter to Francis Baily, Esq. FRS &c. Philos. Trans. R. Soc. Lond. 1825, 115, 513–583.
  197. Winsor, C.P. The Gompertz curve as a growth curve. Proc. Natl. Acad. Sci. USA 1932, 18, 1–8.
  198. Pollard, J.H.; Valkovics, E.J. The Gompertz distribution and its applications. Genus 1992, 48, 15–28.
  199. Redner, S. A Guide to First-Passage Processes; Cambridge University Press: Cambridge, UK, 2001.
  200. Metzler, R.; Oshanin, G.; Redner, S. (Eds.) First-Passage Phenomena and Their Applications; World Scientific: Singapore, 2014.
  201. Grebenkov, D.S.; Holcman, D.; Metzler, R. Preface: New trends in first-passage methods and applications in the life sciences and engineering. J. Phys. A Math. Theor. 2020, 53, 190301.
  202. Evans, M.R.; Majumdar, S.N.; Schehr, G. Stochastic resetting and applications. J. Phys. A Math. Theor. 2020, 53, 193001.
  203. Gupta, S.; Jayannavar, A.M. Stochastic Resetting: A (Very) Brief Review. Front. Phys. 2022, 10, 789097.
  204. Reuveni, S.; Kundu, A. Preface: Stochastic resetting—Theory and applications. J. Phys. A Math. Theor. 2024, 57, 060301.
Figure 1. The functions $\phi(r)$ specified in rows #1 to #5 of Table 2. Left panel: red curve—row #1; blue curve—rows #3 and #4. Right panel: green curve—row #2; purple curve—row #5.
Figure 2. The function $\phi_\lambda(r)$ specified in row #6 of Table 3. Left panel: red curve $\lambda$ = 0.2; blue curve $\lambda$ = 0.4; green curve $\lambda$ = 0.6; purple curve $\lambda$ = 0.8. Right panel: red curve $\lambda$ = 1.2; blue curve $\lambda$ = 1.4; green curve $\lambda$ = 1.6; purple curve $\lambda$ = 1.8.
Figure 3. The function $\phi_\lambda(r)$ specified in row #8 of Table 3. Left panel: red curve $\lambda$ = 0.2; blue curve $\lambda$ = 0.4; green curve $\lambda$ = 0.6; purple curve $\lambda$ = 0.8. Right panel: red curve $\lambda$ = 1.2; blue curve $\lambda$ = 1.4; green curve $\lambda$ = 1.6; purple curve $\lambda$ = 1.8.
Figure 4. The functions $\phi_\lambda(r)$ specified in row #6 (left panel) and in row #8 (right panel) of Table 3, depicted with respect to their parameter $\lambda$. In both panels: red curve r = 0.2; blue curve r = 0.4; green curve r = 0.6; purple curve r = 0.8.
Table 1. SLS distributions of the output’s increments $X_{t_2} - X_{t_1}$ (where $t_0 < t_1 < t_2$). The table’s rows correspond to: the unconditional distribution—in which case no information is provided; and the conditional distribution—in which case the information $X_{t_1}$ is provided. The table’s columns correspond to: the median of the SLS distribution; and the extended variance of the SLS distribution—which is the $\lambda$th power of the distribution’s scale.

Unconditional distribution: median $0$; extended variance $V_{t_1} \left| A_{t_2}/A_{t_1} - 1 \right|^{\lambda} + V_{t_2} \left( 1 - C_{t_1}/C_{t_2} \right)$.
Conditional distribution: median $\left( A_{t_2}/A_{t_1} - 1 \right) X_{t_1}$; extended variance $V_{t_2} \left( 1 - C_{t_1}/C_{t_2} \right)$.
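The two extended-variance entries of Table 1 differ only by the first (amplitudal) term, which conditioning on $X_{t_1}$ removes. A minimal numerical sketch, assuming the entries read $V_{t_1}|A_{t_2}/A_{t_1}-1|^{\lambda} + V_{t_2}(1 - C_{t_1}/C_{t_2})$ (unconditional) and $V_{t_2}(1 - C_{t_1}/C_{t_2})$ (conditional); the function names and all numerical values below are illustrative placeholders, not the paper's definitions:

```python
def increment_extended_variance(V1, V2, A1, A2, C1, C2, lam):
    """Unconditional extended variance of the increment X_{t2} - X_{t1} (one reading of Table 1)."""
    return V1 * abs(A2 / A1 - 1.0) ** lam + V2 * (1.0 - C1 / C2)

def conditional_extended_variance(V2, C1, C2):
    """Extended variance of the increment, conditional on the information X_{t1}."""
    return V2 * (1.0 - C1 / C2)

# Illustrative placeholder values (not from the paper):
V1, V2, A1, A2, C1, C2, lam = 1.0, 2.0, 1.0, 1.5, 1.0, 4.0, 1.5

uncond = increment_extended_variance(V1, V2, A1, A2, C1, C2, lam)
cond = conditional_extended_variance(V2, C1, C2)

# Conditioning removes the nonnegative amplitudal term, so uncond >= cond always.
assert uncond >= cond
```

With these placeholder values the conditional extended variance is $V_{t_2}(1 - C_{t_1}/C_{t_2}) = 2 \cdot 0.75 = 1.5$, and the unconditional value exceeds it by the amplitudal term.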
Table 2. Quantities #1 to #5—of the SSMs—as functions $\phi(r)$ of the clock-ratio $r = C_{t_1}/C_{t_2}$ (where $t_0 < t_1 < t_2$). Per each quantity, the table specifies: the function $\phi(r)$ and its limit values, $\phi_0 = \lim_{r \to 0} \phi(r)$ and $\phi_1 = \lim_{r \to 1} \phi(r)$.

(1) $FBM_{t_1,t_2}$: $\phi(r) = \sqrt{r}$; $\phi_0 = 0$; $\phi_1 = 1$.
(2) $GBM_{t_1,t_2}$: $\phi(r) = (1 - \sqrt{r})/2$; $\phi_0 = 1/2$; $\phi_1 = 0$.
(3) $P_{t_1,t_2}$: $\phi(r) = r$; $\phi_0 = 0$; $\phi_1 = 1$.
(4) $FLM_{t_1,t_2}$: $\phi(r) = r$; $\phi_0 = 0$; $\phi_1 = 1$.
(5) $GLM_{t_1,t_2}$: $\phi(r) = (1 - r)/2$; $\phi_0 = 1/2$; $\phi_1 = 0$.
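The five functions and their limit values of Table 2 can be checked numerically. A minimal sketch, assuming one possible reading of the table's formulas (in particular, that rows #1 and #2 involve $\sqrt{r}$ while rows #4 and #5 involve $r$, consistent with Figure 1 depicting rows #1, #2 and #5 as distinct curves); the dictionary keys are hypothetical labels:

```python
import math

# One possible reading (an assumption) of the five clock-ratio functions of Table 2.
phi = {
    "row1_FBM": lambda r: math.sqrt(r),
    "row2_GBM": lambda r: (1.0 - math.sqrt(r)) / 2.0,
    "row3_P":   lambda r: r,
    "row4_FLM": lambda r: r,
    "row5_GLM": lambda r: (1.0 - r) / 2.0,
}

# Each row prints its tabulated limit values phi_0 = phi(0) and phi_1 = phi(1).
for name, f in phi.items():
    print(name, f(0.0), f(1.0))
```

Rows #1, #3 and #4 run from 0 to 1, whereas rows #2 and #5 run from 1/2 to 0, matching the table's limit columns.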
Table 3. The mother quantity $Q_{t_1;t_2}$ and quantities #6 to #8—of the SSMs—as functions $\phi_\lambda(r)$ of the clock-ratio $r = C_{t_1}/C_{t_2}$ (where $t_0 < t_1 < t_2$). Per each quantity, the table specifies: the function $\phi_\lambda(r)$ and its shape—which is determined by the value of the SLS exponent $\lambda$. The notation $a \nearrow b$ is shorthand for: monotone increasing from the level $a = \lim_{r \to 0} \phi_\lambda(r)$ to the level $b = \lim_{r \to 1} \phi_\lambda(r)$. The notation $b \searrow a$ is shorthand for: monotone decreasing from the level $b = \lim_{r \to 0} \phi_\lambda(r)$ to the level $a = \lim_{r \to 1} \phi_\lambda(r)$.

$Q_{t_1;t_2}$: $\phi_\lambda(r) = (1 - r^{1/\lambda})^{\lambda} / (1 - r)$; $\lambda < 1$: $1 \nearrow \infty$; $\lambda = 1$: $\phi_1(r) = 1$; $\lambda > 1$: $1 \searrow 0$.
(6) $SNR_{t_1;t_2}$: $\phi_\lambda(r) = (1 - r^{1/\lambda}) / (1 - r)^{1/\lambda}$; $\lambda < 1$: $1 \nearrow \infty$; $\lambda = 1$: $\phi_1(r) = 1$; $\lambda > 1$: $1 \searrow 0$.
(7) $NNR_{t_1;t_2}$: $\phi_\lambda(r) = \left[ (1 - r) / \left( (1 - r) + (1 - r^{1/\lambda})^{\lambda} \right) \right]^{1/\lambda}$; $\lambda < 1$: $(1/2)^{1/\lambda} \searrow 0$; $\lambda = 1$: $\phi_1(r) = 1/2$; $\lambda > 1$: $(1/2)^{1/\lambda} \nearrow 1$.
(8) $VVR_{t_1;t_2}$: $\phi_\lambda(r) = (1 - r) / \left( (1 - r) + (1 - r^{1/\lambda})^{\lambda} \right)$; $\lambda < 1$: $1/2 \searrow 0$; $\lambda = 1$: $\phi_1(r) = 1/2$; $\lambda > 1$: $1/2 \nearrow 1$.
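The shape claims of Table 3 can be verified numerically. A minimal sketch, assuming one reading of the table's entries: $Q$ is the mother quantity, with $SNR = Q^{1/\lambda}$, $VVR = 1/(1+Q)$, and $NNR = VVR^{1/\lambda}$ (these relations are inferred here, not stated explicitly in this excerpt):

```python
def Q(r, lam):
    """Assumed mother quantity: (1 - r^(1/lam))^lam / (1 - r), for 0 < r < 1."""
    return (1.0 - r ** (1.0 / lam)) ** lam / (1.0 - r)

def SNR(r, lam):
    return Q(r, lam) ** (1.0 / lam)

def VVR(r, lam):
    return 1.0 / (1.0 + Q(r, lam))

def NNR(r, lam):
    return VVR(r, lam) ** (1.0 / lam)

eps = 1e-9
# lambda = 1: Q and SNR are identically 1; NNR and VVR are identically 1/2.
assert abs(Q(0.3, 1.0) - 1.0) < 1e-9 and abs(VVR(0.7, 1.0) - 0.5) < 1e-9
# lambda < 1: VVR decreases from 1/2 (as r -> 0) to 0 (as r -> 1).
assert abs(VVR(eps, 0.5) - 0.5) < 1e-3 and VVR(1.0 - eps, 0.5) < 1e-3
# lambda > 1: VVR increases from 1/2 (as r -> 0) to 1 (as r -> 1).
assert abs(VVR(eps, 2.0) - 0.5) < 1e-3 and abs(VVR(1.0 - eps, 2.0) - 1.0) < 1e-3
```

Under this reading the four rows are mutually consistent: the monotonicity of quantities #6 to #8 follows from that of the mother quantity $Q$, and the limit levels agree with the table's shape columns.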
Table 4. Examples of clocks $C_t$ ($t \geq t_0$). In all examples $\gamma$ is a positive exponent. With regard to the large-time limit ($t_1 \to \infty$) of Section 6.2, and per each clock: the ‘$r \to 0$’ column and the ‘$r \to 1$’ column specify, respectively, if and when the rapidly-varying scenario and the slowly-varying scenario are attained. With regard to the integrability criteria of Section 6.3 (for the LM-input case), and per each clock: the SRD column and the LRD column specify, respectively, if and when the SSM is either SRD or LRD.

(1) $C_t = t^{\gamma}$; origin $t_0 = 0$; $r \to 0$: no; $r \to 1$: yes; SRD: $\gamma > 1$; LRD: $\gamma \leq 1$.
(2) $C_t = (\ln t)^{\gamma}$; origin $t_0 = 1$; $r \to 0$: no; $r \to 1$: yes; SRD: $\gamma > 1$; LRD: $\gamma \leq 1$.
(3) $C_t = \exp(t^{\gamma})$; origin $t_0 = 0$; $r \to 0$: $\gamma > 1$; $r \to 1$: $\gamma < 1$; SRD: yes; LRD: no.
(4) $C_t = \exp\left( (\ln t)^{\gamma} \right)$; origin $t_0 = 1$; $r \to 0$: $\gamma > 1$; $r \to 1$: $\gamma \leq 1$; SRD: $\gamma > 1$; LRD: $\gamma \leq 1$.
(5) $C_t = \exp\left( \gamma \exp(t) \right)$; origin $t_0 = -\infty$; $r \to 0$: yes; $r \to 1$: no; SRD: yes; LRD: no.
Table 5. Comparison of stochastic models. The table’s columns correspond to the following models: ‘regular’ OUP; ‘Levy-driven’ OUP (LOUP); Gaussian-stationary processes (GS); Brown SSM (BSSM, which is generated by BM input); and Levy SSM (LSSM, which is generated by LM input). Per each model, the table’s rows address the following questions. (1) What are the model’s parameters? (2) Is the model Gaussian? (3) Is the model Markov (with Langevin dynamics)? (4) Is the model stationary? (5) Can the model display the Noah effect? (6) Can the model display the Joseph effect? (7) Does the model have an adjustable correlation structure? (8) Does the model have an adjustable memory structure? Regarding the OUPs: $\gamma$ is a positive OUP parameter. Regarding the Gaussian-stationary processes (see [76] for the results per this model): $\rho_\Delta$ ($\Delta \geq 0$) is the auto-correlation function.

Model: OUP | LOUP | GS | BSSM | LSSM
(1) Parameters: $\gamma$ | $\lambda, \gamma$ | $\rho_\Delta$ | $T$ | $\lambda, T$
(2) Gaussian: yes | no | yes | yes | no
(3) Markov: yes | yes | no | yes | yes
(4) Stationary: yes | yes | yes | semi | semi
(5) Noah: no | yes | no | no | yes
(6) Joseph: no | no | yes | yes | yes
(7) Correlation: no | no | yes | yes | yes
(8) Memory: no | yes | no | no | yes