Article

Elephant Random Walk with a Random Step Size and Gradually Increasing Memory and Delays

Rafik Aguech
Department of Statistics and Operations Research, King Saud University, Riyadh 11451, Saudi Arabia
Axioms 2024, 13(9), 629; https://doi.org/10.3390/axioms13090629
Submission received: 9 May 2024 / Revised: 21 August 2024 / Accepted: 11 September 2024 / Published: 14 September 2024

Abstract

The elephant random walk (ERW) model was introduced twenty years ago to study memory effects in a one-dimensional discrete-time random walk that keeps a complete memory of its past through a parameter $p$ between zero and one. Several variations of the ERW model have been introduced in recent years. In this work, we investigate the asymptotic normality of the ERW model with a random step size, a gradually increasing memory, and delays. In particular, we extend some recent results on this subject.

1. Introduction and Main Results

In 2004, Schütz and Trimper [1] introduced the now famous elephant random walk (ERW) in order to examine memory effects in a non-Markovian random walk. Over the years, the study of the ERW model has attracted considerable attention in the probability and statistics communities. In particular, Dedecker et al. [2] considered a nice variation of the ERW model, allowing the elephant to have a random step size each time it moves to the right or to the left. Recently, Aguech [3] investigated the asymptotic normality of this new model when the memory of the elephant is allowed to increase gradually, in the sense of the model introduced by Gut and Stadtmüller [4]. In this paper, our main contribution is the investigation of the validity of the central limit theorem for the elephant random walk with random step sizes, gradually increasing memory, and delays. Our work can be seen as an extension of some results established in [2,3,4,5]. First, we recall the definition of the one-dimensional ERW model introduced by Schütz and Trimper [1]. At time zero, the position $S_0$ of the elephant is zero. At time $n = 1$, the elephant moves to the right with probability $s$ and to the left with probability $1-s$, where $s \in [0,1]$ is fixed. So, the position of the elephant at time $n = 1$ is given by $S_1 = X_1$, where $X_1$ has a Rademacher $\mathcal{R}(s)$ distribution. Now, for any $n \geq 1$, we choose uniformly at random an integer $k$ among the previous times $1, \dots, n$, and we define
$$X_{n+1} = \begin{cases} +X_{k} & \text{with probability } p, \\ -X_{k} & \text{with probability } 1-p, \end{cases}$$
where the parameter $p \in [0,1]$ is the memory of the ERW. Then, the position of the ERW is given by
$$S_{n+1} = S_n + X_{n+1}.$$
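To make the dynamics concrete, here is a minimal simulation sketch of this classic ERW; the function name and its arguments are illustrative choices and not notation taken from the paper.
```python
# A minimal simulation sketch of the classic Schütz-Trimper ERW described above.
import numpy as np

def simulate_classic_erw(n_steps, p, s, rng=None):
    """Return the trajectory S_0, S_1, ..., S_{n_steps} of the classic ERW."""
    rng = np.random.default_rng() if rng is None else rng
    steps = np.empty(n_steps, dtype=int)
    steps[0] = 1 if rng.random() < s else -1          # first step: Rademacher(s)
    for n in range(1, n_steps):
        k = rng.integers(0, n)                        # uniform choice among times 1..n
        # repeat the remembered step with probability p, reverse it otherwise
        steps[n] = steps[k] if rng.random() < p else -steps[k]
    return np.concatenate(([0], np.cumsum(steps)))

# Example: one trajectory in the diffusive regime p < 3/4.
trajectory = simulate_classic_erw(10_000, p=0.6, s=0.5)
print(trajectory[-1] / np.sqrt(10_000))               # of order 1 when p < 3/4
```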
Recently, Gut and Stadtmüller [4] considered the case of a variable memory length in the ERW model. This means that, at each time $n \geq 1$, the random integer is no longer uniformly chosen from the previous times $1, 2, \dots, n$, but rather from $1, 2, \dots, m_n$, where $(m_n)_{n\geq 1}$ is a nondecreasing sequence growing to infinity satisfying $m_n \leq n$ and $n^{-1} m_n \to 0$ as $n$ goes to infinity. Bercu [6] introduced the model with delays, but using all the previous steps $1, 2, \dots, n$. One year later, Aguech and El Machkouri [7] considered the general case of a nondecreasing memory satisfying $m_n \leq n$ and $n^{-1} m_n \to \theta$ as $n$ goes to infinity, where $\theta \in [0,1]$ is fixed, proving the following result:
Theorem 1
(Aguech and El Machkouri [7]). Let $(m_n)_{n\geq 1}$ be a nondecreasing sequence of positive integers growing to infinity such that $n^{-1} m_n \xrightarrow[n\to+\infty]{} \theta$ for some $\theta \in [0,1]$, and denote $\tau = \theta + (1-\theta)(2p-1)$.
(1)
If $0 < p < 3/4$, then
$$\frac{\sqrt{m_n}\,S_n}{n} \xrightarrow[n\to+\infty]{\mathcal{L}} \mathcal{N}\!\left(0,\ \frac{\tau^2}{3-4p} + \theta(1-\theta)\right).$$
(2)
If $p = 3/4$, then
$$\frac{\sqrt{m_n}\,S_n}{n\sqrt{\log(m_n)}} \xrightarrow[n\to+\infty]{\mathcal{L}} \mathcal{N}\!\left(0,\ \frac{(1+\theta)^2}{4}\right).$$
(3)
If $3/4 < p < 1$, then
$$\frac{m_n^{2(1-p)}\,S_n}{n} \xrightarrow[n\to+\infty]{L^4,\ a.s.} \tau L,$$
where $L$ is a non-Gaussian random variable (see Theorem 3.7 in [6]). In addition, if $m_n^{4p-3}\,\big|n^{-1}m_n - \theta\big| \xrightarrow[n\to+\infty]{} 0$, then
$$\sqrt{m_n^{4p-3}}\left(\frac{m_n^{2(1-p)}\,S_n}{n} - \tau L\right) \xrightarrow[n\to+\infty]{\mathcal{L}} \mathcal{N}\!\left(0,\ \frac{\tau^2}{4p-3} + \theta(1-\theta)\right).$$
In this work, we extend the result established in Theorem 1 by allowing the elephant to have a random step size, and also by allowing the elephant to stop, which means that it can sometimes stay at its current position.
Compared with earlier studies, the primary contribution of this work is the following: in contrast to [7], we allow the elephant to pause and to take steps of random size. Additionally, in contrast to [2] and [8], the memory of the elephant is gradually increasing, so that it only remembers the steps up to time $m_n$, while it can still take random step sizes and stop.
The ERW with random step sizes was introduced by Fan and Shao [9]. In what follows, we investigate an extension of the model introduced in [9]. More precisely, let $\theta$ be a fixed constant in $[0,1]$ and let $(m_n)_{n\geq 1}$ be a nondecreasing sequence of positive integers growing to infinity and satisfying $m_n \leq n$ and $\lim_{n\to+\infty} m_n/n = \theta$. Consider also a sequence $(Z_k)_{k\geq 1}$ of positive i.i.d. random variables with finite mean $\nu = \mathbb{E}(Z_1)$ and variance $\mathrm{Var}(Z_1) = \sigma^2 \geq 0$. An ERW with random step sizes may be described as follows: at time $n = 1$, the elephant moves to $Z_1$ with probability $s \in [0,1]$ and to $-Z_1$ with probability $1-s$. So, the position $S_1$ of the elephant at time $n = 1$ is given by $S_1 = X_1 Z_1$, where
$$X_1 = \begin{cases} 1 & \text{with probability } s, \\ -1 & \text{with probability } 1-s. \end{cases}$$
For any integer $n \geq 1$, we also define
$$X_{n+1} = \begin{cases} X_{L_n} & \text{with probability } p, \\ -X_{L_n} & \text{with probability } q, \\ 0 & \text{with probability } r, \end{cases}$$
where $(p,q,r) \in\, ]0,1[^3$ are fixed parameters satisfying $p+q+r = 1$ and $L_n$ is a random variable uniformly distributed on the set $\{1, 2, \dots, m_n\}$. From now on, we assume that $(Z_n)_{n\geq 1}$ and $(X_n)_{n\geq 1}$ are independent, and we define the position $S_n$ of the elephant at time $n \geq 1$ and the sum $D_n$ of the steps $X_k$ at time $n$ by
$$S_n = \sum_{k=1}^n X_k Z_k, \qquad D_n = \sum_{k=1}^n X_k.$$
In the sequel, we write $m$ and $L$ for $m_n$ and $L_n$, respectively, throughout the paper. Additionally, we assume, without loss of generality, that $\nu = 1$.
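As an illustration, the following sketch simulates the model just defined. The concrete choices of the memory sequence ($m_n = \lceil \theta n \rceil$) and of the step-size law (exponential with mean $1$) are assumptions made only for this example, since any admissible $m_n$ and any positive i.i.d. $Z_k$ with mean $1$ fit the framework above.
```python
# A minimal simulation sketch of the ERW with random step sizes, stops, and a
# gradually increasing memory.  The memory sequence m_n = ceil(theta * n) and
# the exponential step sizes are illustrative assumptions, not choices from the paper.
import numpy as np

def simulate_erw_random_steps(n_steps, p, q, r, s, theta, rng=None):
    assert abs(p + q + r - 1.0) < 1e-12
    rng = np.random.default_rng() if rng is None else rng
    X = np.empty(n_steps, dtype=int)
    Z = rng.exponential(scale=1.0, size=n_steps)      # positive step sizes, mean nu = 1
    X[0] = 1 if rng.random() < s else -1              # X_1 is Rademacher(s)
    for n in range(1, n_steps):
        m_n = max(1, int(np.ceil(theta * n)))         # memory available at time n
        L = rng.integers(0, m_n)                      # uniform on {1, ..., m_n}
        u = rng.random()
        if u < p:
            X[n] = X[L]                               # repeat the remembered step
        elif u < p + q:
            X[n] = -X[L]                              # reverse it
        else:
            X[n] = 0                                  # stop
    S = np.cumsum(X * Z)                              # position S_n
    D = np.cumsum(X)                                  # D_n = X_1 + ... + X_n
    return S, D, X, Z

S, D, X, Z = simulate_erw_random_steps(20_000, p=0.5, q=0.3, r=0.2, s=0.5, theta=0.7)
print(S[-1], D[-1], (X ** 2).sum())                   # position, D_n, and Sigma_n at time n
```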
Moreover, we introduce the $\sigma$-algebra $\mathcal{F}_n = \sigma(X_1, X_2, \dots, X_n)$ and the notations
$$\Sigma_n = \sum_{k=1}^n X_k^2 \qquad \text{and} \qquad r_k = \mathbb{P}\big(X_k = 0 \mid \mathcal{F}_m\big) \quad \text{for } m+1 \leq k \leq n.$$
The expression of $r_k$ is given in the following lemma:
Lemma 1.
For all $k \in \{m+1, \dots, n\}$, given $\mathcal{F}_m$, the probability that the elephant does not move is given by
$$r_k = r\,\frac{\Sigma_m}{m} + 1 - \frac{\Sigma_m}{m} = 1 - (1-r)\,\frac{\Sigma_m}{m}.$$
Proof. 
Conditionally on $\mathcal{F}_m$, the event $\{X_k = 0\}$ occurs either because the elephant picks a past step that is not equal to zero but then decides not to move, or because it picks a past step equal to zero. Hence,
$$r_k = r\,\frac{\Sigma_m}{m} + 1 - \frac{\Sigma_m}{m} = 1 - (1-r)\,\frac{\Sigma_m}{m}.$$
The term $r\,\frac{\Sigma_m}{m}$ is exactly the probability of choosing a nonzero step among the times $1$ to $m$ and deciding not to move, and the term $1 - \frac{\Sigma_m}{m}$ is the probability of choosing a step among the times $1$ to $m$ equal to zero. □
The following result is a key lemma in obtaining our main results:
Lemma 2.
For all $k = m+1, \dots, n$, conditioned on $\mathcal{F}_m$, the distribution of $X_k$ is given by
$$\mathbb{P}\big(X_k = +1 \mid \mathcal{F}_m\big) = \frac{1-r_n}{2} + \frac{(p-q)D_m}{2m}, \qquad \mathbb{P}\big(X_k = -1 \mid \mathcal{F}_m\big) = \frac{1-r_n}{2} - \frac{(p-q)D_m}{2m}, \qquad \mathbb{P}\big(X_k = 0 \mid \mathcal{F}_m\big) = r_n = 1 - (1-r)\,\frac{\Sigma_m}{m}.$$
Recall that $D_m = \sum_{l=1}^m X_l$.
Proof. 
Let $m+1 \leq k \leq n$. Then, for $L$ uniformly distributed on $\{1, \dots, m\}$,
$$\mathbb{E}\big[X_k \mid \mathcal{F}_m\big] = p\,\mathbb{E}\big[X_L \mid \mathcal{F}_m\big] - q\,\mathbb{E}\big[X_L \mid \mathcal{F}_m\big] = (p-q)\,\mathbb{E}\big[X_L \mid \mathcal{F}_m\big] = (p-q)\,\mathbb{E}\Big[\sum_{\ell=1}^m X_\ell\,\mathbb{1}_{\{L=\ell\}} \,\Big|\, \mathcal{F}_m\Big] = (p-q)\,\frac{\mathbb{E}\big[D_m \mid \mathcal{F}_m\big]}{m} = \frac{(p-q)}{m}\,D_m.$$
In order to complete the proof, it suffices to note that $\mathbb{P}(X_k = 1 \mid \mathcal{F}_m) - \mathbb{P}(X_k = -1 \mid \mathcal{F}_m) = \mathbb{E}[X_k \mid \mathcal{F}_m]$, to use Lemma 1, and to note that, for $k = m+1, \dots, n$,
$$\mathbb{P}\big(X_k = 1 \mid \mathcal{F}_m\big) + \mathbb{P}\big(X_k = -1 \mid \mathcal{F}_m\big) + \mathbb{P}\big(X_k = 0 \mid \mathcal{F}_m\big) = 1. \qquad \square$$
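The computation above is easy to check numerically. The following sketch builds a history $\mathcal{F}_m$ (here with a full-memory prefix, one convenient choice for the example), draws many copies of a step $X_k$ with $k > m$ according to the $(p,q,r)$ rule, and compares the empirical frequencies with the formulas of Lemmas 1 and 2; all parameter values are arbitrary.
```python
# A minimal numerical check of Lemmas 1 and 2.
import numpy as np

p, q, r, s, m = 0.5, 0.3, 0.2, 0.5, 2_000
rng = np.random.default_rng(0)

# Build a history X_1, ..., X_m (full-memory prefix, just to obtain some F_m).
X = np.empty(m, dtype=int)
X[0] = 1 if rng.random() < s else -1
for n in range(1, m):
    L = rng.integers(0, n)
    u = rng.random()
    X[n] = X[L] if u < p else (-X[L] if u < p + q else 0)

D_m, Sigma_m = X.sum(), (X ** 2).sum()

# Conditional law of X_k given F_m predicted by Lemmas 1 and 2.
r_n = 1 - (1 - r) * Sigma_m / m
p_plus = (1 - r_n) / 2 + (p - q) * D_m / (2 * m)
p_minus = (1 - r_n) / 2 - (p - q) * D_m / (2 * m)

# Monte Carlo estimate of the same law: pick L uniformly in {1, ..., m}, apply (p, q, r).
n_draws = 200_000
L = rng.integers(0, m, size=n_draws)
u = rng.random(n_draws)
X_k = np.where(u < p, X[L], np.where(u < p + q, -X[L], 0))
print("P(X_k=+1):", (X_k == 1).mean(), "vs", p_plus)
print("P(X_k=-1):", (X_k == -1).mean(), "vs", p_minus)
print("P(X_k= 0):", (X_k == 0).mean(), "vs", r_n)
```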

2. Asymptotics When the Elephant Has Full Memory

In this section, we suppose that $m_n = n$, which means that the elephant remembers all of its past steps. The following result gives the almost sure asymptotic behavior of $(\Sigma_n)_{n\geq 1}$ as $n$ goes to infinity.
Lemma 3
(Lemma 2.1, [8]). For any $p$, $q$, and $r$ in $[0,1]$, we have
$$\frac{\Sigma_n}{n^{1-r}} \xrightarrow[n\to+\infty]{a.s.} \Sigma,$$
where $\Sigma$ has a Mittag-Leffler distribution with parameter $1-r$.
The main result of this section is the following theorem:
Theorem 2.
Let $\Sigma$ be a Mittag-Leffler random variable with parameter $1-r$. We assume that $Z_1$ has mean $1$ and finite variance $\sigma^2$. Consider the notation $p_r := \frac{p}{1-r}$ and, for $p_r < 3/4$,
$$\sigma_r^2 := \frac{1-r}{3(1-r) - 4p}.$$
  • If $0 < p_r < 3/4$ (diffusive regime), then
    $$\frac{S_n}{\sqrt{n^{1-r}}} \xrightarrow[n\to+\infty]{\mathcal{L}} \sqrt{\Sigma}\;\mathcal{N}\big(0,\ \sigma_r^2 + \sigma^2\big),$$
    where the random variables $\Sigma$ and $\mathcal{N}(0, \sigma_r^2 + \sigma^2)$ are independent.
  • If $p_r = 3/4$ (critical regime), then
    $$\frac{S_n}{\sqrt{n^{1-r}\ln n}} \xrightarrow[n\to+\infty]{\mathcal{L}} \sqrt{(1-r)\,\Sigma}\;\mathcal{N}(0,1),$$
    where the random variables $\Sigma$ and $\mathcal{N}(0,1)$ are independent.
  • If $p_r > 3/4$ (superdiffusive regime), then
    $$\frac{S_n}{n^{2p+r-1}} \xrightarrow[n\to+\infty]{\mathbb{P}} M,$$
    where $M$ is a non-Gaussian and non-degenerate random variable.
Proof. 
Assume that $0 < p_r < 3/4$. Following [2], we have $S_n = T_n + H_n$, where
$$T_n = \sum_{k=1}^n X_k \qquad \text{and} \qquad H_n = \sum_{k=1}^n X_k (Z_k - 1).$$
Let $t$ and $n$ be a fixed real number and a fixed positive integer, respectively, and let
$$\varphi_n(t) = \mathbb{E}\left[\exp\left(\frac{i t\,S_n}{\sqrt{n^{1-r}}}\right)\right].$$
Using the decomposition of $S_n$, the characteristic function $\varphi_n(t)$ can be written as
$$\varphi_n(t) = \mathbb{E}\left[\exp\left(\frac{i t\,(T_n + H_n)}{\sqrt{n^{1-r}}}\right)\right] = \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}}}\right)\exp\left(\frac{i t\,H_n}{\sqrt{n^{1-r}}}\right)\right].$$
To study the asymptotic distribution of the normalized walk, we proceed as follows:
  • In the first step, in order to separate $H_n$ from $T_n$ in the last equation, we condition with respect to $\mathcal{F}_n$;
  • In the second step, we use the fact that $\mathbb{1}_{\{X_k \neq 0\}}\,X_k^2 = \mathbb{1}_{\{X_k \neq 0\}}$;
  • In the last step, we observe that, conditionally on $\mathcal{F}_n$, the random variable $X_k (Z_k - 1)$ is centered with variance equal to $\sigma^2\,\mathbb{1}_{\{X_k \neq 0\}}$.
In conclusion, denoting $\tilde{Z}_k = Z_k - 1$ and letting $\psi$ be the characteristic function of $\tilde{Z}_1$ (which is centered), for large $n$ we have
$$\begin{aligned}
\varphi_n(t) &= \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}}}\right)\,\mathbb{E}\left[\exp\left(\frac{i t\,H_n}{\sqrt{n^{1-r}}}\right)\Big|\,\mathcal{F}_n\right]\right]
= \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}}}\right)\prod_{k=1}^n \mathbb{E}\left[\exp\left(\frac{i t\,X_k (Z_k-1)}{\sqrt{n^{1-r}}}\right)\Big|\,\mathcal{F}_n\right]\right]\\
&= \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}}}\right)\prod_{k=1}^n \psi\!\left(\frac{t\,X_k}{\sqrt{n^{1-r}}}\right)\right]
= \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}}}\right)\prod_{k=1}^n \left(1 - \mathbb{1}_{\{X_k\neq 0\}}\frac{t^2\sigma^2}{2\,n^{1-r}} + o\!\left(\frac{1}{n^{1-r}}\right)\right)\right]\\
&\approx \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}}}\right)\left(1 - \frac{t^2\sigma^2}{2\,n^{1-r}}\right)^{\Sigma_n}\right]
\approx \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}}}\right)\exp\left(-\frac{\Sigma_n}{n^{1-r}}\,\frac{t^2\sigma^2}{2}\right)\right].
\end{aligned}$$
But, by Lemma 3, $\Sigma_n/n^{1-r}$ converges almost surely to $\Sigma$ and, by (Theorem 3.3, [8]), $T_n/\sqrt{n^{1-r}}$ converges in distribution to a suitable (mixed) normal distribution.
Finally, we conclude the proof using Slutsky’s theorem.
Now, we assume that $p_r = 3/4$, the critical case. The behavior is very close to that of the critical case for the classic elephant random walk model [6]. In order to study the asymptotic distribution of the walk $S_n$, we employ the characteristic function $\phi_n(t)$ defined, for all $t \in \mathbb{R}$ and for all positive integers $n$, by
$$\phi_n(t) = \mathbb{E}\left[\exp\left(\frac{i t\,S_n}{\sqrt{n^{1-r}\ln n}}\right)\right] = \mathbb{E}\left[\exp\left(\frac{i t\,(T_n + H_n)}{\sqrt{n^{1-r}\ln n}}\right)\right] = \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}\ln n}}\right)\exp\left(\frac{i t\,H_n}{\sqrt{n^{1-r}\ln n}}\right)\right].$$
Using the same arguments as in the previous case, given $\mathcal{F}_n$ and for large $n$, we can write
$$\begin{aligned}
\phi_n(t) &= \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}\ln n}}\right)\,\mathbb{E}\left[\exp\left(\frac{i t\,H_n}{\sqrt{n^{1-r}\ln n}}\right)\Big|\,\mathcal{F}_n\right]\right]
= \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}\ln n}}\right)\prod_{k=1}^n \mathbb{E}\left[\exp\left(\frac{i t\,X_k(Z_k-1)}{\sqrt{n^{1-r}\ln n}}\right)\Big|\,\mathcal{F}_n\right]\right]\\
&\approx \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}\ln n}}\right)\prod_{k=1}^n \mathbb{E}\left[1 + \mathbb{1}_{\{X_k\neq 0\}}\left(\frac{i t\,X_k(Z_k-1)}{\sqrt{n^{1-r}\ln n}} - \frac{t^2 (Z_k-1)^2}{2\,n^{1-r}\ln n}\right)\Big|\,\mathcal{F}_n\right]\right]\\
&= \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}\ln n}}\right)\prod_{k=1}^n \left(1 - \mathbb{1}_{\{X_k\neq 0\}}\frac{t^2\sigma^2}{2\,n^{1-r}\ln n}\right)\right]
= \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}\ln n}}\right)\left(1 - \frac{t^2\sigma^2}{2\,n^{1-r}\ln n}\right)^{\Sigma_n}\right]\\
&\approx \mathbb{E}\left[\exp\left(\frac{i t\,T_n}{\sqrt{n^{1-r}\ln n}}\right)\exp\left(-\frac{t^2\sigma^2}{2}\,\frac{\Sigma_n}{n^{1-r}}\,\frac{1}{\ln n}\right)\right].
\end{aligned}$$
Again, we conclude the proof using (Theorem 3.6, [8]) and Slutsky’s theorem.
For the case where $p_r > 3/4$, we have
$$\frac{S_n}{n^{2p+r-1}} = \frac{T_n}{n^{2p+r-1}} + \frac{H_n}{n^{2p+r-1}}.$$
By (Theorem 4, [3]), (Theorem 3.7, [8]), and (Theorem 2, [7]), we have
$$\frac{T_n}{n^{2p+r-1}} \xrightarrow[n\to+\infty]{a.s.} M,$$
where M is a non-Gaussian and non-degenerate random variable.
On the other hand, for all $\varepsilon > 0$, we have
$$\mathbb{P}\left(\left|\frac{H_n}{n^{2p+r-1}}\right| \geq \varepsilon\right) \leq \frac{\sigma^2\,\mathbb{E}[\Sigma_n]}{\varepsilon^2\,n^{4p+2r-2}} \approx \frac{\sigma^2\,n^{1-r}\,\mathbb{E}[\Sigma]}{\varepsilon^2\,n^{4p+2r-2}} = \frac{\sigma^2\,\mathbb{E}[\Sigma]}{\varepsilon^2\,n^{4p+3r-3}}.$$
Since $p_r > 3/4$, we have $4p + 3r - 3 > 0$, and since $\mathbb{E}[\Sigma]$ is finite, we deduce that
$$\frac{H_n}{n^{2p+r-1}} \xrightarrow[n\to+\infty]{\mathbb{P}} 0,$$
which completes the proof. □
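A rough Monte Carlo illustration of the diffusive regime of Theorem 2 is sketched below. Since the limit is the mixture $\sqrt{\Sigma}\,\mathcal{N}(0,\sigma_r^2+\sigma^2)$ with independent factors, the variance of $S_n/\sqrt{n^{1-r}}$ should be close to $\mathbb{E}[\Sigma_n/n^{1-r}]\,(\sigma_r^2+\sigma^2)$ whenever the corresponding second moments converge; both sides are estimated from the same simulations, and the exponential step sizes (so that $\sigma^2 = 1$) as well as the sample sizes are arbitrary choices for the example.
```python
# A rough Monte Carlo check of the second moment in the diffusive regime of
# Theorem 2 (full memory m_n = n, p_r = p / (1 - r) < 3/4).
import numpy as np

def run_full_memory_walk(n, p, q, r, s, rng):
    X = np.empty(n, dtype=int)
    X[0] = 1 if rng.random() < s else -1
    for k in range(1, n):
        L = rng.integers(0, k)
        u = rng.random()
        X[k] = X[L] if u < p else (-X[L] if u < p + q else 0)
    Z = rng.exponential(1.0, size=n)                 # mean 1, variance sigma^2 = 1
    return (X * Z).sum(), (X ** 2).sum()             # S_n and Sigma_n

p, q, r, s = 0.5, 0.3, 0.2, 0.5                      # here p_r = 0.625 < 3/4
n, n_walks, sigma2 = 2_000, 1_000, 1.0
rng = np.random.default_rng(1)

samples = np.array([run_full_memory_walk(n, p, q, r, s, rng) for _ in range(n_walks)])
S_n, Sigma_n = samples[:, 0], samples[:, 1]

sigma_r2 = (1 - r) / (3 * (1 - r) - 4 * p)
lhs = np.var(S_n) / n ** (1 - r)
rhs = np.mean(Sigma_n / n ** (1 - r)) * (sigma_r2 + sigma2)
print(f"Var(S_n)/n^(1-r) = {lhs:.3f}  vs  E[Sigma]*(sigma_r^2 + sigma^2) = {rhs:.3f}")
```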

3. Asymptotics When the Elephant Has Increasing Memory

In this section, we assume that the elephant has a gradually increasing memory.
Theorem 3.
Let $\theta \in [0,1]$ be such that $m/n \to \theta$ as $n$ goes to infinity, and let $\Sigma$ be a Mittag-Leffler random variable with parameter $1-r$. Consider the notation $p_r := \frac{p}{1-r}$.
  • If $0 < p_r < 3/4$ (diffusive regime), then
    $$\frac{m_n}{n}\,\frac{S_n}{\sqrt{\Sigma_m}} \xrightarrow[n\to+\infty]{\mathcal{L}} \mathcal{N}\!\left(0,\ \frac{(1-r)\big(\theta + (1-\theta)(p-q)\big)^2}{(1-r) - 2(p-q)} + (1-r)\,\theta(1-\theta) + \sigma^2\,\theta^{\,r+1}\right).$$
  • If $p_r = 3/4$ (critical regime), then
    $$\frac{m_n}{n}\,\frac{S_n}{\sqrt{\Sigma_m \ln \Sigma_m}} \xrightarrow[n\to+\infty]{\mathcal{L}} \mathcal{N}\!\left(0,\ \Big(\theta + \frac{(1-r)(1-\theta)}{2}\Big)^{2}\right).$$
  • If $p_r > 3/4$ (superdiffusive regime), then
    $$\frac{S_n}{n}\,\frac{1}{m^{2(p-1)+r}} \xrightarrow[n\to+\infty]{L^2} \big(\theta + (1-\theta)(2p+r-1)\big)\,M,$$
    where $M$ is a non-Gaussian and non-degenerate random variable, and the asymptotic distribution of the fluctuations around this limit is given by
    $$\frac{m^{2p+r-1}}{\sqrt{\Sigma_m}}\left(\frac{S_n}{n}\,\frac{1}{m^{2(p-1)+r}} - M\left(\frac{m}{n} + \Big(1-\frac{m}{n}\Big)(2p+r-1)\right)\right) \xrightarrow[n\to+\infty]{\mathcal{L}} \mathcal{N}\!\left(0,\ \lambda^2 + \theta^{\,r+1}\sigma^2\right),$$
    where
    $$\lambda^2 = \frac{\big(\theta + (2p+r-1)(1-\theta)\big)^2}{4\big(p_r - \frac{3}{4}\big)} + (1-r)\,\theta(1-\theta).$$
Proof. 
Assume that $p_r < 3/4$ and denote $\varphi_n(t) = \mathbb{E}\big[e^{\,i t\, m S_n/(n\sqrt{\Sigma_m})}\big]$ for any $t \in \mathbb{R}$. Since $Z_k - 1$ is centered at zero, we have
$$\begin{aligned}
\varphi_n(t) &= \mathbb{E}\left[\mathbb{E}\left[\exp\left(\frac{i t\,m\,S_n}{n\sqrt{\Sigma_m}}\right)\Big|\,\mathcal{F}_n\right]\right]
= \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m}}\right)\,\mathbb{E}\left[\exp\left(\frac{i t\,m\,H_n}{n\sqrt{\Sigma_m}}\right)\Big|\,\mathcal{F}_n\right]\right]\\
&= \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m}}\right)\prod_{k=1}^n \mathbb{E}\left[\exp\left(\frac{i t\,m\,X_k(Z_k-1)}{n\sqrt{\Sigma_m}}\right)\Big|\,\mathcal{F}_n\right]\right]\\
&\approx \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m}}\right)\prod_{k=1}^n \mathbb{E}\left[1 + \frac{i t\,m\,X_k(Z_k-1)}{n\sqrt{\Sigma_m}} - \frac{t^2 m^2\,X_k^2 (Z_k-1)^2}{2\,n^2\,\Sigma_m}\,\Big|\,\mathcal{F}_n\right]\right]\\
&= \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m}}\right)\prod_{k=1}^n \left(1 - \mathbb{1}_{\{X_k\neq 0\}}\frac{t^2\sigma^2 m^2}{2\,n^2\,\Sigma_m}\right)\right]
= \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m}}\right)\left(1 - \frac{t^2\sigma^2 m^2}{2\,n^2\,\Sigma_m}\right)^{\Sigma_n}\right]\\
&\approx \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m}}\right)\exp\left(-\frac{t^2\sigma^2}{2}\,\frac{m^2}{n^2}\,\frac{\Sigma_n}{\Sigma_m}\right)\right]
\approx \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m}}\right)\exp\left(-\frac{t^2\sigma^2}{2}\,\theta^{\,r+1}\right)\right].
\end{aligned}$$
The last approximation is due to the fact that
$$\frac{\Sigma_n}{\Sigma_m} \xrightarrow[n\to+\infty]{a.s.} \theta^{\,r-1}.$$
On the other hand, by (Theorem 2.1, [10]), we know that
$$\frac{m}{n}\,\frac{T_n}{\sqrt{\Sigma_m}} \xrightarrow[n\to+\infty]{\mathcal{L}} \mathcal{N}\!\left(0,\ \frac{(1-r)\big(\theta + (1-\theta)(p-q)\big)^2}{(1-r) - 2(p-q)} + (1-r)\,\theta(1-\theta)\right).$$
This concludes the proof in the case $p_r < 3/4$.
Assume that $p_r = 3/4$. Using similar arguments as in the previous case, we have
$$\begin{aligned}
\phi_n(t) &:= \mathbb{E}\left[\exp\left(\frac{i t\,m\,S_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\right]
= \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\exp\left(\frac{i t\,m\,H_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\right]\\
&= \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\prod_{k=1}^n \mathbb{E}\left[\exp\left(\frac{i t\,m\,X_k(Z_k-1)}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\Big|\,\mathcal{F}_n\right]\right]\\
&\approx \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\prod_{k=1}^n \mathbb{E}\left[1 + \frac{i t\,m\,X_k(Z_k-1)}{n\sqrt{\Sigma_m \ln \Sigma_m}} - \frac{t^2 m^2\,X_k^2(Z_k-1)^2}{2\,n^2\,\Sigma_m \ln \Sigma_m}\,\Big|\,\mathcal{F}_n\right]\right]\\
&= \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\prod_{k=1}^n \left(1 - \mathbb{1}_{\{X_k\neq 0\}}\frac{t^2 m^2\,\sigma^2}{2\,n^2\,\Sigma_m \ln \Sigma_m}\right)\right]
= \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\left(1 - \frac{t^2 m^2\,\sigma^2}{2\,n^2\,\Sigma_m \ln \Sigma_m}\right)^{\Sigma_n}\right]\\
&\approx \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\exp\left(-\frac{t^2 m^2\,\sigma^2}{2\,n^2}\,\frac{\Sigma_n}{\Sigma_m \ln \Sigma_m}\right)\right]
\approx \mathbb{E}\left[\exp\left(\frac{i t\,m\,T_n}{n\sqrt{\Sigma_m \ln \Sigma_m}}\right)\right].
\end{aligned}$$
By (Theorem 2.1, [10]), we know that
$$\frac{m}{n}\,\frac{T_n}{\sqrt{\Sigma_m \ln \Sigma_m}} \xrightarrow[n\to+\infty]{\mathcal{L}} \mathcal{N}\!\left(0,\ \Big(\theta + \frac{(1-r)(1-\theta)}{2}\Big)^{2}\right),$$
and we obtain the desired result.
Now, assume $p_r > 3/4$. By (Theorem 2.1, (iii), [10]), we have
$$\frac{T_n}{n}\,\frac{1}{m^{2(p-1)+r}} \xrightarrow[n\to+\infty]{L^2} \big(\theta + (1-\theta)(2p+r-1)\big)\,M.$$
On the other hand, since $Z_1 - 1$ is centered at zero, we obtain
$$\mathbb{E}\left[\left(\frac{H_n}{n\,m^{2(p-1)+r}}\right)^{2}\right] = \frac{\sigma^2\,\mathbb{E}[\Sigma_n]}{n^2\,m^{4(p-1)+2r}} \approx \frac{\sigma^2\,n^{1-r}\,\mathbb{E}[\Sigma]}{n^2\,m^{4(p-1)+2r}} \xrightarrow[n\to+\infty]{} 0.$$
Consequently,
$$\frac{H_n}{n\,m^{2(p-1)+r}} \xrightarrow[n\to+\infty]{L^2} 0.$$
As before, this is sufficient in order to get the desired result. □
The following result is given in (Theorem 5.3, [10]) but we provide a new proof.
Theorem 4.
We have
$$\lim_{n\to\infty} \frac{\Sigma_n}{n^{1-r}} = \left(\frac{1-r}{\theta^{\,r}} + r\,\theta^{1-r}\right)\Sigma,$$
where $\Sigma$ is the Mittag-Leffler random variable with parameter $1-r$ given in [8].
Proof. 
Keeping in mind the notation $\Sigma_n = \sum_{k=1}^n X_k^2$ for any $n \geq 1$, we have
$$\frac{\Sigma_n}{n^{1-r}} = \frac{\Sigma_m}{n^{1-r}} + \frac{1}{n^{1-r}}\sum_{k=m+1}^n X_k^2 = \left(\frac{m}{n}\right)^{1-r}\frac{\Sigma_m}{m^{1-r}} + \left(\frac{m}{n}\right)^{1-r}\frac{1}{m^{1-r}}\sum_{k=m+1}^n X_k^2,$$
and from (Lemma 2.1, [8]), we know that $\lim_{n\to\infty} \Sigma_m/m^{1-r} = \Sigma$, where $\Sigma$ has a Mittag-Leffler distribution with parameter $1-r$. So, we deduce
$$\lim_{n\to\infty} \left(\frac{m}{n}\right)^{1-r}\frac{\Sigma_m}{m^{1-r}} = \theta^{1-r}\,\Sigma.$$
On the other hand, using the strong law of large numbers, for sufficiently large $n$, we have
$$\left(\frac{m}{n}\right)^{1-r}\frac{1}{m^{1-r}}\sum_{k=m+1}^n X_k^2 = \left(\frac{m}{n}\right)^{1-r}\frac{n-m}{m^{1-r}}\left(\frac{1}{n-m}\sum_{k=m+1}^n X_k^2\right) \approx \theta^{1-r}\,\frac{n-m}{m^{1-r}}\,(p+q)\,\frac{\Sigma_m}{m} = \theta^{1-r}\left(\frac{n}{m}-1\right)(p+q)\,\frac{\Sigma_m}{m^{1-r}} \approx \theta^{-r}\,(1-\theta)\,(p+q)\,\frac{\Sigma_m}{m^{1-r}}.$$
Finally, since $p+q = 1-r$ and $\theta^{1-r} + (1-\theta)(1-r)\,\theta^{-r} = \frac{1-r}{\theta^{\,r}} + r\,\theta^{1-r}$, we deduce that
$$\lim_{n\to\infty} \frac{\Sigma_n}{n^{1-r}} = \left(\frac{1-r}{\theta^{\,r}} + r\,\theta^{1-r}\right)\Sigma. \qquad \square$$
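A minimal numerical check of Theorem 4 is sketched below. It assumes the triangular-array reading used in the proofs above (the first $m = \lceil \theta n \rceil$ steps evolve as a full-memory ERW with stops, and each later step refers to a time chosen uniformly in $\{1,\dots,m\}$), and compares, for each simulated walk, the ratio $(\Sigma_n/n^{1-r})/(\Sigma_m/m^{1-r})$ with the constant $\frac{1-r}{\theta^r} + r\,\theta^{1-r}$ appearing in the statement; all parameter values are arbitrary.
```python
# A minimal numerical check of Theorem 4.  By Lemma 1, given F_m each of the
# n - m later steps is nonzero with probability (1 - r) * Sigma_m / m, so the
# number of nonzero later steps is Binomial(n - m, (1 - r) * Sigma_m / m).
import numpy as np

def sigma_ratio(n, theta, p, q, r, s, rng):
    m = max(1, int(np.ceil(theta * n)))
    X = np.empty(m, dtype=int)
    X[0] = 1 if rng.random() < s else -1
    for k in range(1, m):                              # full-memory steps 2, ..., m
        L = rng.integers(0, k)
        u = rng.random()
        X[k] = X[L] if u < p else (-X[L] if u < p + q else 0)
    Sigma_m = (X ** 2).sum()
    Sigma_n = Sigma_m + rng.binomial(n - m, (1 - r) * Sigma_m / m)
    return (Sigma_n / n ** (1 - r)) / (Sigma_m / m ** (1 - r))

p, q, r, s, theta, n = 0.5, 0.3, 0.2, 0.5, 0.5, 20_000
rng = np.random.default_rng(2)
ratios = [sigma_ratio(n, theta, p, q, r, s, rng) for _ in range(200)]
print("empirical mean ratio :", np.mean(ratios))
print("theoretical constant :", (1 - r) / theta ** r + r * theta ** (1 - r))
```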
Remark 1.
Note that Theorem 3 generalizes many results already published in the literature. Indeed,
  • for $\theta = 1$ and $\sigma = 0$, it contains the results obtained in [8],
  • for $\theta = \sigma = 0$, it contains the results obtained in (Theorem 4.1, [4]),
  • for $r = \sigma = 0$ and $\theta = 1$, we recover the results already obtained in (Theorems 3.3, 3.6, and 3.7, [6]),
  • for $r = 0$ and $\theta = 1$, we obtain the results of (Theorem 1-iii and Theorem 2, [2]),
  • for $r = \sigma = 0$ and $\theta \in (0,1)$, it contains the results obtained in (Theorem 2, [7]),
  • it coincides with (Theorems 2.1–2.3, [3]) when $r = 0$.

4. Conclusions

In this work, we established new results on the asymptotic normality of a variation of the elephant random walk (ERW) introduced in [4] in 2022. The ERW model we are interested in is the so-called elephant random walk with gradually increasing memory, for which a random step size is allowed. Our main results (Theorems 3 and 4) contain previous results established in [2,3,4,6,7,8]. In a future work, it would be interesting to investigate the validity of the law of the iterated logarithm for this ERW model, and also to provide a method for the estimation of the parameters $p$, $q$, and $r$. Another very interesting and more natural variation of the model would be to consider that the elephant remembers only its steps from time $n-m$ to time $n-1$ instead of the steps $1$ to $m$.

Funding

This research was funded by King Saud University, grant number RSPD2024R987.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The author thanks the anonymous referees for many critical and helpful comments and suggestions, which helped improve the presentation of the paper. The author would like to extend his sincere appreciation to the Deanship of Scientific Research at King Saud University, Researchers Supporting Program number (RSPD2024R987).

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Schütz, G.M.; Trimper, S. Elephants can always remember: Exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 2004, 70, 045101.
  2. Dedecker, J.; Fan, X.; Hu, H.; Merlevède, F. Rates of convergence in the central limit theorem for the elephant random walk with random step sizes. J. Stat. Phys. 2023, 190, 154.
  3. Aguech, R. On the central limit theorem for the elephant random walk with gradually increasing memory and random step size. AIMS Math. 2024, 9, 17784–17794.
  4. Gut, A.; Stadtmüller, U. The elephant random walk with gradually increasing memory. Stat. Probab. Lett. 2022, 189, 109598.
  5. Gut, A.; Stadtmüller, U. Variations of the elephant random walk. J. Appl. Probab. 2021, 58, 805–829.
  6. Bercu, B. A martingale approach for the elephant random walk. J. Phys. A Math. Theor. 2018, 51, 015201.
  7. Aguech, R.; El Machkouri, M. Gaussian fluctuations of the elephant random walk with gradually increasing memory. J. Phys. A Math. Theor. 2024, 57, 065203.
  8. Bercu, B. On the elephant random walk with stops playing hide and seek with the Mittag-Leffler distribution. J. Stat. Phys. 2022, 189, 12.
  9. Fan, X.; Shao, Q.M. Cramér's moderate deviations for martingales with applications. In Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques; Institut Henri Poincaré: Paris, France, 2023.
  10. Roy, R.; Takei, M.; Tanemura, H. The elephant random walk in the triangular array setting. arXiv 2024, arXiv:2403.02881. Available online: https://arxiv.org/pdf/2403.02881.pdf (accessed on 5 March 2024).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
