**Appendix A**

In this Appendix, we sketch the calculation of the moments of the process in the limit of long times. All increments Δ*x* of the process are independent, so we first focus on the number of jumps alone. We calculate the probability *P*<sub>*n*</sub>(*t*), *n* ≥ 0, that exactly *n* jumps occur up to time *t*. The term *P*<sub>0</sub>(*t*) can be obtained directly from the definition, as the probability of no jumps up to time *t* is

$$P_0(t) = \Psi(t) \quad\Rightarrow\quad \tilde{P}_0(s) = \tilde{\Psi}(s),\tag{A1}$$

where Ψ(·) is the sojourn probability of the waiting-time distribution. For *n* ≥ 1 the process is described by the number of series of waiting times *k*, the waiting time Δ*t*<sup>*i*</sup> of each series, and the number of repetitions *ν*<sub>*i*</sub> of the waiting time in each series.

Of course, each series contains at least one repetition, so the number of series *k* cannot exceed the number of jumps *n*. In particular, the cumulative number of jumps in all *k* series must equal the total number of jumps: *ν*<sub>1</sub> + *ν*<sub>2</sub> + ··· + *ν*<sub>*k*</sub> = *n*. All considered jumps have to happen before the observation time *t*, so the total elapsed time has to be smaller than *t*: *ν*<sub>1</sub>Δ*t*<sup>1</sup> + *ν*<sub>2</sub>Δ*t*<sup>2</sup> + ··· + *ν*<sub>*k*</sub>Δ*t*<sup>*k*</sup> ≤ *t*. Let us call the period between the last jump and *t* *δt* = *t* − ∑<sub>*i*</sub> *ν*<sub>*i*</sub>Δ*t*<sup>*i*</sup> > 0. For simplicity of notation, let us define Ω(*ν*) = ∑<sub>*n*=*ν*+1</sub><sup>∞</sup> *ω*(*n*).
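This series structure can be simulated directly. The following sketch is illustrative only and not part of the derivation: it assumes an exponential waiting-time density with mean 1 and a zeta-distributed repetition number with *ρ* = 4.5 (both arbitrary choices), and checks the estimated probability of no jump up to *t* against the sojourn probability from (A1), here Ψ(*t*) = e<sup>−*t*</sup>.

```python
import numpy as np

# Illustrative Monte Carlo of the series structure (assumed parameters:
# exponential waiting times with mean 1, zeta repetitions with rho = 4.5).
rng = np.random.default_rng(0)

def count_jumps(t, rho=4.5):
    """Count jumps up to time t: each series draws one waiting time dt
    and repeats it nu times; every completed repetition is one jump."""
    elapsed, jumps = 0.0, 0
    while True:
        dt = rng.exponential(1.0)   # waiting time of the current series
        nu = rng.zipf(rho)          # number of repetitions in this series
        for _ in range(nu):
            elapsed += dt
            if elapsed > t:         # current repetition not finished by t
                return jumps
            jumps += 1

# P_0(t) should equal the sojourn probability Psi(t) = exp(-t) here.
trials = 20000
p0_est = np.mean([count_jumps(1.0) == 0 for _ in range(trials)])
print(p0_est)  # close to exp(-1) ~ 0.368
```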

Looking at the process at time *t*, we can be in one of two situations. There have been *k* series of WTs so far: either the (*k* + 1)-th series has just begun, or we are still in the *k*-th series. Therefore, the soft propagator *P*<sub>*n*</sub>(*t*) for *n* ≥ 1 can be written as a sum of two parts. In the first case, the probability is a product of:

	- (a) probabilities *ψ*(Δ*t*<sup>*i*</sup>) that the waiting time in the *i*-th series was Δ*t*<sup>*i*</sup>, for each series *i* = 1, . . . , *k*;
	- (b) probabilities *ω*(*ν*<sub>*i*</sub>) that the repetition number in the *i*-th series was *ν*<sub>*i*</sub>, for each series *i* = 1, . . . , *k*;
	- (c) the probability Ψ(*δt*) that the new WT Δ*t*<sup>*k*+1</sup> is larger than *δt*, because there was no jump in this period;
	- (d) the probability Ω(0) = 1 that the new repetition number *ν*<sub>*k*+1</sub> is at least one; by definition, this probability equals one.

To calculate the total probability, we must aggregate the probabilities of all individual events, that is, consider all possible sets of variables *k*, *ν*<sub>*i*</sub>, Δ*t*<sup>*i*</sup>. To do this, we sum over all possible *k* and *ν*<sub>*i*</sub> and integrate over all possible Δ*t*<sup>*i*</sup>. In the second case, when we are still in the *k*-th series, the probability is a product of:

	- (a) probabilities *ψ*(Δ*t*<sup>*i*</sup>) that the waiting time in the *i*-th series was Δ*t*<sup>*i*</sup>, for each series *i* = 1, . . . , *k*;
	- (b) probabilities *ω*(*ν*<sub>*i*</sub>) that the repetition number in the *i*-th series was *ν*<sub>*i*</sub>, for each series *i* = 1, . . . , *k* − 1;
	- (c) the probability Ω(*ν*<sub>*k*</sub>) that there will be at least one more repetition of the *k*-th WT, i.e., that the total number of repetitions in this series is larger than the *ν*<sub>*k*</sub> observed so far.

As before, to calculate the total probability we aggregate over all individual events. In this case, there is an additional constraint on Δ*t*<sup>*k*</sup>: it has to be larger than *δt*, since otherwise another jump would occur before time *t*.

Combining the above requirements, we can write the formula for the soft propagator:

$$\begin{split} P_n(t) &= \sum_{k=1}^{n} \sum_{\substack{\nu_1,\ldots,\nu_k \\ \nu_1+\ldots+\nu_k=n}} \;\int\limits_{0<\delta t} \psi(\Delta t^1)\cdots\psi(\Delta t^k)\,\Psi(\delta t)\,\omega(\nu_1)\cdots\omega(\nu_k)\,d\Delta t^1\cdots d\Delta t^k \\ &\quad+ \sum_{k=1}^{n} \sum_{\substack{\nu_1,\ldots,\nu_k \\ \nu_1+\ldots+\nu_k=n}} \;\int\limits_{0<\delta t<\Delta t^k} \psi(\Delta t^1)\cdots\psi(\Delta t^k)\,\omega(\nu_1)\cdots\omega(\nu_{k-1})\,\Omega(\nu_k)\,d\Delta t^1\cdots d\Delta t^k. \end{split}\tag{A2}$$
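A degenerate case gives a quick consistency check of this counting. If *ω*(*ν*) = *δ*<sub>*ν*,1</sub>, every waiting time is used exactly once and the process reduces to an ordinary renewal process; for an exponential *ψ* the number of jumps in [0, *t*] is then Poisson with mean *t*. A minimal numerical sketch under these assumptions (exponential *ψ* with mean 1 is a choice made here, not taken from the text):

```python
import numpy as np

# Degenerate check: omega(nu) = delta_{nu,1} turns the process into a plain
# renewal process; with Exp(1) waiting times the jump count is Poisson(t).
rng = np.random.default_rng(1)

def renewal_count(t):
    """Number of Exp(1) renewals completed in [0, t]."""
    elapsed, jumps = 0.0, 0
    while True:
        elapsed += rng.exponential(1.0)
        if elapsed > t:
            return jumps
        jumps += 1

t, trials = 5.0, 20000
counts = np.array([renewal_count(t) for _ in range(trials)])
mean_n, var_n = counts.mean(), counts.var()
print(mean_n, var_n)  # both close to t = 5, as for a Poisson count
```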

Next, we calculate the Laplace transform (*t* → *s*) and Z transform (*n* → *z*) to obtain

$$\tilde{P}(z;s) = \tilde{P}_0(s) + \sum_{n=1}^{\infty} z^{-n}\,\tilde{P}_n(s) = \frac{1}{s}\,\frac{1 + \tilde{F}(z;s) - z\left(\tilde{F}(z;s) + \tilde{f}(z;s)\right)}{1 - \tilde{f}(z;s)},\tag{A3}$$

where *f*˜(*z*;*s*) = ∑<sub>*ν*=1</sub><sup>∞</sup> *z*<sup>−*ν*</sup>*ψ*˜(*sν*)*ω*(*ν*) and, analogously, *F*˜(*z*;*s*) = ∑<sub>*ν*=1</sub><sup>∞</sup> *z*<sup>−*ν*</sup>*ψ*˜(*sν*)Ω(*ν*); the argument *sν* appears because the same waiting time Δ*t* is repeated *ν* times within a series. Notice that the full soft propagator, with jumps included, can be easily expressed as the Z transform of *P*˜<sub>*n*</sub> evaluated at the point *z* = *h*˜(*k*)<sup>−1</sup>:

$$\begin{split} \tilde{P}(k;s) &= \sum_{n=0}^{\infty} \tilde{P}_n(s)\,\tilde{h}^{n}(k) = \left.\tilde{P}(z;s)\right|_{z=\tilde{h}(k)^{-1}} \\ &= \frac{1}{s}\,\frac{1 + \tilde{F}(\tilde{h}(k)^{-1};s) - \tilde{h}(k)^{-1}\big(\tilde{F}(\tilde{h}(k)^{-1};s) + \tilde{f}(\tilde{h}(k)^{-1};s)\big)}{1 - \tilde{f}(\tilde{h}(k)^{-1};s)}. \end{split}\tag{A4}$$

The first two moments of the process can be calculated as derivatives of the propagator at the point *k* = 0:

$$\begin{split} \tilde{m}_1(s) &= -i\left.\frac{\partial \tilde{P}(k;s)}{\partial k}\right|_{k=0} = \frac{\mu_1}{s}\,\frac{J_0 + j_0}{1 - j_0}, \\ \tilde{m}_2(s) &= -\left.\frac{\partial^2 \tilde{P}(k;s)}{\partial k^2}\right|_{k=0} \\ &= \frac{2\mu_1^2}{s}\,\frac{j_1\,(J_0 + j_0) + (1 - j_0)\,(J_1 + j_1 - J_0 - j_0)}{(1 - j_0)^2} \\ &\quad+ \frac{\mu_2}{s}\,\frac{J_0 + j_0}{1 - j_0}, \end{split}\tag{A5}$$

where we introduced

$$j_n = j(n;s) = \sum_{\nu=1}^{\infty} \nu^{n}\,\tilde{\psi}(s\nu)\,\omega(\nu), \qquad J_n = J(n;s) = \sum_{\nu=1}^{\infty} \nu^{n}\,\tilde{\psi}(s\nu)\,\Omega(\nu).\tag{A6}$$

Next, we focus on the specific power-law memory. We set the distribution of the number of repeats to be the zeta distribution with parameter *ρ*: *ω*(*ν*) = *ν*<sup>−*ρ*</sup>/*ζ*(*ρ*), *ρ* > 2. The parameter *ρ* has to be larger than two because the distribution of the number of repeats must have a finite mean in order not to break ergodicity. Furthermore, we expand the moments into series assuming very small *s* (i.e., long times). To do that, we need expansions of *j*(*n*;*s*) and *J*(*n*;*s*) for *n* ∈ {0, 1}, with *n* < *ρ* − 1. One can express *j*(*n*;*s*) as the power-law sum

$$j(n;s) = \frac{1}{\zeta(\rho)}\sum_{\nu=1}^{\infty}\tilde{\psi}(s\nu)\,\nu^{-(\rho-n)} = \frac{s^{\rho-n-1}}{\zeta(\rho)}\underbrace{\sum_{\nu=1}^{\infty}\tilde{\psi}(s\nu)\,(s\nu)^{-(\rho-n)}\,s}_{I}\,.\tag{A7}$$

The behaviour of the sum *I* can be estimated by the integrals

$$\int_{2s}^{\infty}\tilde{\psi}(x)\,x^{-(\rho-n)}\,dx < I < \int_{s}^{\infty}\tilde{\psi}(x)\,x^{-(\rho-n)}\,dx.\tag{A8}$$

Therefore, we can expand the sum *I* into a series and finally obtain

$$j(n;s) = C_n\,s^{\rho-n-1} + C_n^0 + C_n^1 s + C_n^2 s^2 + C_n^3 s^3 + \dotsb\,.\tag{A9}$$

One can calculate

$$C_n^0 = j(n;0) = \frac{1}{\zeta(\rho)}\sum_{\nu=1}^{\infty} \nu^{-(\rho-n)} = \frac{\zeta(\rho-n)}{\zeta(\rho)} \ge 1.\tag{A10}$$
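This identity is easy to verify numerically. The sketch below is illustrative only, with *ρ* = 4 chosen arbitrarily; it compares the truncated sum in (A10) with the closed form *ζ*(*ρ* − *n*)/*ζ*(*ρ*).

```python
from scipy.special import zeta

# Check C_n^0 = zeta(rho - n)/zeta(rho) by direct truncated summation
# (rho = 4 and the cutoff N are arbitrary choices for this sketch).
rho, N = 4.0, 200000
nu = range(1, N + 1)
c0_direct = sum(v ** -rho for v in nu) / zeta(rho)          # n = 0
c1_direct = sum(v ** -(rho - 1.0) for v in nu) / zeta(rho)  # n = 1
c1_closed = zeta(rho - 1) / zeta(rho)
print(c0_direct, c1_direct, c1_closed)
```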

Moreover, we can notice that

$$\frac{C_1^0}{C_0^1} = -\frac{1}{\langle \Delta t\rangle}.\tag{A11}$$
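This relation can be checked numerically as well. The sketch below assumes an exponential waiting-time density with ⟨Δ*t*⟩ = 1, so that *ψ*˜(*s*) = 1/(1 + *s*), and *ρ* = 3.5 (both illustrative choices); it estimates *C*<sub>0</sub><sup>1</sup> as a finite difference of *j*(0;*s*) at *s* = 0.

```python
import numpy as np
from scipy.special import zeta

# Check C_1^0 / C_0^1 = -1/<dt> for exponential waiting times (<dt> = 1,
# psi~(s) = 1/(1 + s)) and zeta repetitions with rho = 3.5 (assumptions).
rho = 3.5
nu = np.arange(1, 10 ** 6, dtype=float)
w = nu ** -rho / zeta(rho)               # omega(nu)

def j0(s):
    """j(0; s) = sum_nu psi~(s nu) omega(nu)."""
    return np.sum(w / (1.0 + s * nu))

h = 1e-4
c01 = (j0(h) - j0(0.0)) / h              # finite-difference C_0^1
c10 = zeta(rho - 1) / zeta(rho)          # C_1^0 from (A10)
ratio = c10 / c01
print(ratio)  # close to -1/<dt> = -1
```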

Similarly, we approximate

$$J(n;s) = D_n\,s^{\rho-n-2} + D_n^0 + D_n^1 s + D_n^2 s^2 + D_n^3 s^3 + \dotsb\,.\tag{A12}$$

The constant terms are

$$D_0^0 = \frac{\zeta(\rho-1)}{\zeta(\rho)} - 1 = C_1^0 - C_0^0, \qquad D_1^0 = \frac{\zeta(\rho-2) - \zeta(\rho-1)}{2\,\zeta(\rho)} = \frac{C_2^0 - C_1^0}{2}.\tag{A13}$$
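Both constants can be cross-checked by evaluating (A6) at *s* = 0, where *J*(*n*;0) = ∑<sub>*ν*</sub> *ν*<sup>*n*</sup>Ω(*ν*). A numerical sketch with the arbitrary choice *ρ* = 4:

```python
import numpy as np
from scipy.special import zeta

# Check D_0^0 = sum_nu Omega(nu) and D_1^0 = sum_nu nu Omega(nu) for the
# zeta distribution (rho = 4 and the cutoff N are illustrative choices).
rho, N = 4.0, 100000
nu = np.arange(1, N + 1, dtype=float)
w = nu ** -rho / zeta(rho)            # omega(nu)
Omega = w[::-1].cumsum()[::-1] - w    # Omega(nu) = sum_{m > nu} omega(m)
d00 = Omega.sum()
d10 = (nu * Omega).sum()
d00_closed = zeta(rho - 1) / zeta(rho) - 1
d10_closed = (zeta(rho - 2) - zeta(rho - 1)) / (2 * zeta(rho))
print(d00, d00_closed, d10, d10_closed)
```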

This gives the form of the first moment

$$\tilde{m}_1(s) \approx \frac{\mu_1}{s}\left(C_1^0 + D_0\,s^{\rho-2}\right)\frac{\frac{C_0}{C_0^1}\,s^{\rho-2} - 1}{s\,C_0^1} = -\frac{\mu_1}{s^2}\,\frac{C_1^0}{C_0^1} - \frac{\mu_1}{s^{4-\rho}}\,\frac{D_0 - \frac{C_0 C_1^0}{C_0^1}}{C_0^1},\tag{A14}$$

where we kept only the terms growing with time (*s*<sup>−*α*</sup> with *α* > 1): the analytic one and the dominant power-law one. Switching to time variables, we obtain:

$$m_1(t) = \mathcal{L}^{-1}[\tilde{m}_1(s)] \approx \frac{\mu_1}{\langle\Delta t\rangle}\,t - \mu_1\,\frac{D_0 + \frac{C_0}{\langle\Delta t\rangle}}{C_0^1\,\Gamma(4-\rho)}\,t^{3-\rho}.\tag{A15}$$
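The leading term can also be compared with a direct simulation of the process. The sketch below is illustrative only: it assumes exponential waiting times with ⟨Δ*t*⟩ = 1, zeta repetitions with *ρ* = 4.5, and unit jumps (*μ*<sub>1</sub> = 1), so the mean number of jumps should grow like *μ*<sub>1</sub>*t*/⟨Δ*t*⟩ = *t*.

```python
import numpy as np

# Simulation check of the leading term of (A15): m_1(t) ~ mu_1 t / <dt>.
# Assumed parameters: Exp(1) waiting times, zeta repetitions (rho = 4.5),
# unit jumps, so the mean jump rate should approach 1.
rng = np.random.default_rng(3)

def count_jumps(t, rho=4.5):
    elapsed, jumps = 0.0, 0
    while True:
        dt = rng.exponential(1.0)   # waiting time of the current series
        nu = rng.zipf(rho)          # repetitions of this waiting time
        for _ in range(nu):
            elapsed += dt
            if elapsed > t:
                return jumps
            jumps += 1

t, trials = 200.0, 500
rate = np.mean([count_jumps(t) for _ in range(trials)]) / t
print(rate)  # close to mu_1/<dt> = 1
```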

The second moment can be expressed as

$$\begin{split} \tilde{m}_2(s) &\approx \frac{2\mu_1^2}{\langle\Delta t\rangle^2}\,s^{-3} - \mu_1^2\,\frac{4C_0^2 + 3C_0^1\langle\Delta t\rangle + 2C_1^1\langle\Delta t\rangle + 2D_0^1\langle\Delta t\rangle + C_2^0\langle\Delta t\rangle^2}{2\,C_0^1\langle\Delta t\rangle^2}\,s^{-2} \\ &\quad- \mu_1^2\,\frac{D_0 + C_1 - D_1 + 2\frac{C_0}{\langle\Delta t\rangle}}{C_0^1\langle\Delta t\rangle}\,s^{\rho-5} + \frac{\mu_2}{\langle\Delta t\rangle}\,s^{-2}. \end{split}\tag{A16}$$

This gives us the variance in the time domain presented in the main text.
