**2. Motivation**

Models with interdependent waiting times are used to describe electron transfer [63], the firing of a single neuron [64], interhuman communication [65], and the modeling of earthquakes [66–69]. An excellent example of a process with correlated inter-event times that we will describe in this manuscript is tick-by-tick transaction price data from the stock market [70]. These data are very convenient to use, as they are of high quality and easily accessible in large amounts.

Firstly, let us recall two basic stylized facts observed in the majority of stock markets [71]:

- the autocorrelation of price changes is negative for the shortest lags and decays rapidly;
- the autocorrelation of the absolute values of price changes is positive and decays slowly, approximately as a power law.

The latter is considered to be reminiscent of the volatility clustering phenomenon.

Of course, these are not the only or the most significant stylized facts, but these two do not directly depend on the log-return distribution. The list should also contain the broad distribution of log-returns [72]; multi-fractality [73,74]; universal scaling of the distribution of times between large jumps [75,76]; and the slow, power-law decay of the correlation between these times. We will discuss the latter further in this manuscript. Usually, the CTRW models used to describe high-frequency stock market data treat waiting times Δ*tn* as inter-transaction times and process increments Δ*xn* as logarithmic returns between consecutive transactions. Taking into account the so-called bid-ask bounce phenomenon allows CTRW processes to reproduce the first stylized fact of short-term negative autocorrelation [58,77,78]. In models of this type, waiting times Δ*tn* are i.i.d. variables, and only the dependence between Δ*xn* and Δ*xn*−1 is considered. Unfortunately, models restricted to this type of dependence turned out to be unable to describe the time ACF of absolute values of price changes [60]. Technically, it is possible to obtain a CTRW model reproducing both stylized facts, but it requires a power-law waiting-time distribution *ψ*(Δ*t*). However, this solution is not satisfying, as we can obtain the waiting-time distribution directly from the empirical data of inter-transaction times, and it turns out to be far from a power law [58]. These results suggest that the source of the second stylized fact lies not in the distributions of increments *h*(Δ*x*) and waiting times *ψ*(Δ*t*), but in the dependence between consecutive Δ*x* and Δ*t* values.

Let us start with an empirical analysis of the step ACF of the series Δ*tn* and |Δ*xn*|. We observe approximately power-law memories in waiting times and absolute values of price changes; see Figure 2a. For a lag (in the number of steps) ≤ 3, the autocorrelation of |Δ*xn*| is higher than the autocorrelation of Δ*tn*, but for a lag > 3 the opposite holds. This result suggests that in the limit of long times, the dependence between waiting times may be more important than the dependence between price changes. To verify this hypothesis, we perform a shuffling test. We compare the time ACF of the absolute values of price changes for four samples of time series. The first one is the original time series of tick-by-tick transaction data. The second time series keeps the price changes Δ*xn* in the original order but shuffles the order of waiting times Δ*tn*. This way, we obtain a time series that keeps all dependencies between price changes Δ*xn* but has no dependencies between waiting times Δ*tn*. In the third time series, we keep the original waiting times Δ*tn* but shuffle the price changes Δ*xn*. In the last, fourth time series, both Δ*tn* and Δ*xn* are shuffled. Let us emphasize that all four time series have the same, unchanged distributions *ψ*(Δ*t*) and *h*(Δ*x*). The results are shown in Figure 2b. As expected, we observe the slow, almost power-law decay of the time ACF for the first, empirical time series. Surprisingly, removing dependencies between waiting times does not change the time ACF in the limit of *t* → 0 but significantly increases its slope of decay in the long term. On the other hand, removing dependencies between price changes decreases the time ACF, dividing it by an almost constant factor, but does not change the slope of the decay. The removal of all dependencies still leads to a positive time ACF, resulting from the non-exponential empirical distribution of waiting times.
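The logic of the shuffling test can be sketched in a few lines. Since the tick-by-tick data themselves are not reproduced here, the sketch below uses a synthetic series with repeated waiting times as an illustrative stand-in, and, for brevity, the step ACF of waiting times rather than the time ACF of |Δ*x*|; the function name `step_acf` and all parameter values are our assumptions, not the paper's code.

```python
import numpy as np

def step_acf(series, max_lag):
    """Normalized step autocorrelation function of a 1-D series."""
    s = np.asarray(series, dtype=float) - np.mean(series)
    var = np.mean(s * s)
    return np.array([np.mean(s[: len(s) - k] * s[k:]) / var if k else 1.0
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)

# Synthetic stand-in for inter-transaction times: each independent draw is
# repeated a random (zeta-distributed) number of times, inducing memory.
reps = rng.zipf(2.5, size=20_000)                       # numbers of repetitions
dt = np.repeat(rng.exponential(1.0, size=reps.size), reps)

dt_shuffled = rng.permutation(dt)                        # destroys the memory only

corr = step_acf(dt, 5)
corr_shuffled = step_acf(dt_shuffled, 5)
# Memory survives in the original series but not in the shuffled one.
print(corr[1] > 0.1, abs(corr_shuffled[1]) < 0.05)
```

Shuffling leaves the one-point distribution *ψ*(Δ*t*) untouched, which is exactly why the test isolates the role of the dependencies.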

**Figure 2.** Figures 2 and 3 were prepared using transaction data for KGHM (one of the most liquid Polish stocks) from the period of January 2013 to July 2017. Both figures are on a log-log scale. (**a**) The plot of the normalized empirical step ACF of Δ*t* and |Δ*x*|. Both functions decay like a power law. For lag = 1, the autocorrelation of |Δ*x*| is higher. However, it decays faster, and for long times the memory in waiting times is stronger. (**b**) The plot of the normalized time ACF of |Δ*x*| for four time series. The presented lines are for empirical data (thick black); empirical price changes and intra-daily shuffled waiting times (dotted red); intra-daily shuffled price changes and empirical waiting times (dash-dotted green); and intra-daily independently shuffled price changes and waiting times (thin blue). Keeping only the empirical dependencies of waiting times reproduces an ACF that decays with almost the same slope as the empirical one.

The empirical observations presented above convinced us that it is necessary to consider long-range dependencies between waiting times within CTRW to reproduce the slowly decaying ACF of price changes' absolute values observed in the financial data.

Please note that in Figure 2, we analyzed the step ACF for lags up to 100 and the time ACF for times up to 1000 s. The procedure used to estimate the time ACF was presented in [58] and is a modification of the classical slotting technique introduced in [79]. Such limits were chosen due to the length of trading sessions (around 8 h or 1000 trades). Unfortunately, these limits are not long enough to detect power-law dependencies. The only way to increase these limits is to join all sessions into one sequence. In this procedure, we merge the end of one session with the beginning of the following one (omitting overnight price changes). These two periods of the session differ, as we observe intraday activity patterns in financial data [80]. The session begins with short inter-transaction times and a high standard deviation of price changes. Usually, up to the middle of the session, average inter-trade times increase and the standard deviation of price changes decreases. The situation reverses again close to the end of the session. This phenomenon is called the *lunch effect* [81]. We use the canonical method to remove intraday non-stationarity: we divide each waiting time by the corresponding average waiting time, which depends on the time elapsed since the beginning of the session, computed for each day of the week separately [82,83]. The comparison of the step ACFs of waiting times for non-stationarized and stationarized data is presented in Figure 3a. As a result of this procedure, we obtain power-law decay over four orders of magnitude of lag. In Figure 3b, we present the time ACF of the absolute values of price changes for stationarized data, which also exhibits power-law decay over four orders of magnitude of time lag. It is now reasonable to ask what the relationship is between the decay exponents of these autocorrelations. Fortunately, the model studied in this paper gives a strict answer to this question.
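The canonical stationarization step can be sketched as follows. This is a minimal version, assuming time-of-day bins only (the text additionally conditions on the day of the week); the function name `stationarize`, the bin width, and the synthetic U-shaped intraday pattern are all our illustrative assumptions.

```python
import numpy as np

def stationarize(waiting_times, seconds_since_open, bin_width=600.0):
    """Divide each waiting time by the mean waiting time of its intraday
    bin, removing the deterministic 'lunch effect' seasonality."""
    wt = np.asarray(waiting_times, dtype=float)
    bins = (np.asarray(seconds_since_open) // bin_width).astype(int)
    bin_means = np.array([wt[bins == b].mean() for b in range(bins.max() + 1)])
    return wt / bin_means[bins]

rng = np.random.default_rng(1)
session = 8 * 3600.0                               # an 8 h trading session
t_open = rng.uniform(0.0, session, size=5_000)     # times since session open
# Hypothetical intraday pattern: long waits mid-session, short at open/close.
pattern = 1.0 + np.sin(np.pi * t_open / session)
wt = pattern * rng.exponential(1.0, size=t_open.size)

wt_stat = stationarize(wt, t_open)
# By construction, each intraday bin now has unit mean waiting time.
print(round(float(wt_stat.mean()), 6))
```

Dividing by the bin mean rescales every part of the session to the same average activity, so only the stochastic dependencies between consecutive waiting times remain.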

**Figure 3.** All intraday data (waiting times and corresponding price changes) are joined into one data set. (**a**) The plot shows the normalized step ACF of Δ*t* for the non-stationary and stationarized cases. The stationarizing procedure is described in the main text. (**b**) The plot of the normalized time ACF of |Δ*x*| with stationarized waiting times. Both stationarized autocorrelations decay like a power law with a similar slope.

#### **3. Process of Waiting Times**

Let us now focus on the sequence of inter-transaction times Δ*t*1, Δ*t*2, ... , Δ*tn*, .... We are looking for a point process to describe this series that will be suitable for use in CTRW. For this reason, we need analytically solvable models. Moreover, we would like to use the empirical distribution of inter-event times *ψ*(Δ*t*) and observe the power-law step ACF, as shown in Figure 3a. Even these two simple conditions exclude ACD models and Hawkes processes from our considerations. We are not interested in ACD models, as the power-law ACF can be obtained only within their fractional extension. In the Hawkes process, which is defined solely by its memory kernel, both the waiting-time distribution and the autocorrelation depend on this kernel [15,84] and therefore cannot be set independently. Thus, it would be difficult (if possible at all) to reproduce both the empirical WTD and the ACF at the same time. This feature of the Hawkes process hampers its use in the description of empirical data.

As the solution to our search, we propose a simple point process in which waiting times Δ*t* are repeated. In a very general sense, our proposition can be interpreted as a discretized version of CTRW, adapted to the role of the point process. Let us briefly describe this analogy. Within the canonical CTRW, values of the process are represented by a spatial variable, and time is continuous. The spatial variable remains constant for a given period of continuous waiting time. Now, we define the point process by the series of waiting times. Here, the number of repetitions *νi* of the same value of waiting time is the analog of the waiting time in the canonical CTRW. An exemplary realization of such an adapted process of waiting times is shown in Figure 4.

We require the waiting times Δ*tn* (values of the process in the discrete subordinated time *n*) to come from the distribution *ψ*(Δ*t*) (Δ*t* > 0), with a finite mean ⟨Δ*t*⟩. We define *νi* as the number of repetitions of the same waiting time (drawn independently for each series of repetitions). Let *νi* be i.i.d. random variables with the distribution *ω*(*ν*). In general, it can be any distribution, but to recreate the power-law step ACF of waiting times, we will focus on a fat-tailed distribution with a finite first moment ⟨*ν*⟩. In particular, we use the zeta distribution with parameter *ρ*

$$
\omega\_{\rho}(k) = k^{-\rho} / \zeta(\rho); \quad \zeta(\rho) = \sum\_{i=1}^{\infty} i^{-\rho}, \; \rho > 1,\tag{1}
$$

where *ζ*(*ρ*) is Riemann's zeta function. Its expected value equals ⟨*ω*⟩ = *ζ*(*ρ* − 1)/*ζ*(*ρ*) for *ρ* > 2, and its variance is finite for *ρ* > 3. The cumulative distribution function is given by *H*<sub>*k*,*ρ*</sub>/*ζ*(*ρ*), where *H*<sub>*k*,*ρ*</sub> = ∑<sub>*i*=1</sub><sup>*k*</sup> *i*<sup>−*ρ*</sup> is the generalized harmonic number. Let us introduce Ω(*k*) = ∑<sub>*i*=*k*</sub><sup>∞</sup> *ω*(*i*) as the sojourn probability. We have Ω(*k*) = 1 − *H*<sub>*k*−1,*ρ*</sub>/*ζ*(*ρ*) for the zeta distribution.
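Generating this process is straightforward; a sketch under illustrative assumptions (exponential *ψ*, truncated sums for *ζ*, and hypothetical function names) is given below. NumPy's `zipf` sampler draws exactly from the zeta distribution of Equation (1).

```python
import numpy as np

def zeta_fn(rho, terms=100_000):
    """Truncated Riemann zeta sum; adequate for rho comfortably above 1."""
    i = np.arange(1, terms + 1, dtype=float)
    return float(np.sum(i ** -rho))

def repeated_waiting_times(n, rho, draw_dt, rng):
    """Generate n waiting times: each independent draw from psi is
    repeated nu times, with nu ~ zeta distribution of parameter rho."""
    out = []
    while len(out) < n:
        nu = rng.zipf(rho)                 # number of repetitions, Eq. (1)
        out.extend([draw_dt(rng)] * nu)
    return np.array(out[:n])

rho = 3.0
rng = np.random.default_rng(2)
dt = repeated_waiting_times(50_000, rho, lambda r: r.exponential(1.0), rng)

# Mean run length should match <nu> = zeta(rho - 1)/zeta(rho).
runs = np.diff(np.flatnonzero(np.r_[True, np.diff(dt) != 0, True]))
mean_nu_expected = zeta_fn(rho - 1) / zeta_fn(rho)
print(round(mean_nu_expected, 3))
```

With a continuous *ψ*, two distinct draws coincide with probability zero, so the empirical run lengths directly estimate the repetition numbers *νi*.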

**Figure 4.** An example realization of the process of waiting times, the values of which correspond to the waiting times Δ*tn* of the point process used in the primary CTRW process. Process values Δ*t*1, Δ*t*2, ... , Δ*tn* come from the independent draws Δ*t*′1, Δ*t*′2, ... , Δ*t*′*k* repeated *ν*1, *ν*2, ... , *νk* times, respectively. The numbers of repetitions *νi* are drawn from the distribution *ω*(*ν*). In the example above: *ν*1 = 1, *ν*2 = 3, *ν*3 = 2, . . . and Δ*t*1 = Δ*t*′1, Δ*t*2 = Δ*t*3 = Δ*t*4 = Δ*t*′2, Δ*t*5 = Δ*t*6 = Δ*t*′3, . . ..

We define a soft propagator of the process of waiting times, P(Δ*t*; *n* | Δ*t*0, 0), which is the conditional probability density that the waiting time, initially (at *n* = 0) equal to Δ*t*0, equals Δ*t* after *n* steps. The soft propagator can be expressed as

$$\mathcal{P}(\Delta t; n | \Delta t\_0, 0) = \delta(\Delta t - \Delta t\_0) \Omega^{\text{first}}(n) + [1 - \Omega^{\text{first}}(n)] \psi(\Delta t),\tag{2}$$

where Ω<sup>first</sup>(*n*) is the sojourn probability obtained from *ω*<sup>first</sup>(*n*), which is the stationarized distribution of the number of repetitions of the first waiting time:

$$\begin{split} \omega^{\text{first}}(n) &= \frac{\sum\_{n'=1}^{\infty} \omega(n+n')}{\sum\_{n''=0}^{\infty} \sum\_{n'=1}^{\infty} \omega(n''+n')} = \frac{\sum\_{n'=1}^{\infty} \omega(n+n')}{\sum\_{n=1}^{\infty} n\omega(n)} = \frac{\sum\_{n'=n+1}^{\infty} \omega(n')}{\langle \omega \rangle}, \\ \Omega^{\text{first}}(n) &= \frac{\sum\_{i=n}^{\infty} \sum\_{n'=i+1}^{\infty} \omega(n')}{\langle \omega \rangle} = \frac{\sum\_{i=1}^{\infty} i\omega(i+n)}{\langle \omega \rangle} = \frac{\langle \omega \rangle - n\Omega(n+1) - \sum\_{i=1}^{n} i\omega(i)}{\langle \omega \rangle}. \end{split} \tag{3}$$

The first term on the right-hand side of Equation (2) is the probability that the process value stays constant (equal to Δ*t*0) after *n* steps. The second term indicates that, with probability 1 − Ω<sup>first</sup>(*n*), the process value jumps, and the new value is completely independent, drawn from the distribution *ψ*(Δ*t*).

Restricting ourselves to *ω*(*n*) in the form of the zeta distribution, we can obtain

$$
\Omega^{\text{first}}(n) = 1 - \frac{n}{\langle \omega \rangle} + \frac{nH\_{n,\rho}}{\zeta(\rho - 1)} - \frac{H\_{n,\rho - 1}}{\zeta(\rho - 1)},\tag{4}
$$

and hence the propagator given by Equation (2). The step autocovariance of waiting times Δ*tn* can be expressed as

$$\text{cov}(n) = \left< \Delta t\_i \Delta t\_{i+n} \right> - \left< \Delta t\_i \right> \left< \Delta t\_{i+n} \right> = \left< \Delta t\_i \Delta t\_{i+n} \right> - \left< \Delta t \right>^2,\tag{5}$$

where ⟨·⟩ denotes the average. Note that Δ*ti*+*n* = Δ*ti* with probability *p* = Ω<sup>first</sup>(*n*). With probability 1 − *p*, Δ*ti*+*n* is independent of Δ*ti*. This leads to

$$\text{cov}(n) = p\left<\Delta t^2\right> + \left(1 - p\right)\left<\Delta t\right>^2 - \left<\Delta t\right>^2 = \sigma\_{\Delta t}^2 p = \sigma\_{\Delta t}^2 \Omega^{\text{first}}(n). \tag{6}$$
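The chain of Equations (3), (4), and (6) can be cross-checked numerically: corr(*n*) = Ω<sup>first</sup>(*n*), and the closed form (4) must agree with the defining sum in (3). A minimal check, with the infinite sums truncated at `N` terms and the value of `RHO` chosen arbitrarily for illustration:

```python
import numpy as np

RHO = 3.5
N = 200_000                                   # truncation of the infinite sums

k = np.arange(1, N + 1, dtype=float)
omega = k ** -RHO / np.sum(k ** -RHO)         # zeta distribution, Eq. (1)
mean_omega = float(np.sum(k * omega))         # <omega> = zeta(rho-1)/zeta(rho)
zeta_rho_m1 = float(np.sum(k ** -(RHO - 1)))  # zeta(rho - 1), truncated

def H(n, s):
    """Generalized harmonic number H_{n,s}."""
    return float(np.sum(np.arange(1, n + 1, dtype=float) ** -s))

def Omega_first_direct(n):
    """Middle form of Eq. (3): sum_{i>=1} i * omega(i + n) / <omega>."""
    i = np.arange(1, N - n + 1, dtype=float)
    return float(np.sum(i * omega[n:])) / mean_omega

def Omega_first_closed(n):
    """Closed form of Eq. (4) for the zeta distribution."""
    return 1 - n / mean_omega + (n * H(n, RHO) - H(n, RHO - 1)) / zeta_rho_m1

for n in (1, 5, 20):
    print(n, round(Omega_first_direct(n), 8), round(Omega_first_closed(n), 8))
```

Both routes give identical values (up to floating-point error), confirming the algebra behind Equation (4).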

We are interested in the asymptotic form of the autocorrelation for *n* ≫ 1. We can use the following approximation (Theorem 12.21 from [85]):

$$
\zeta(\rho) - H\_{n,\rho} \approx \frac{n^{1-\rho}}{\rho - 1}.\tag{7}
$$

Finally, we obtain the normalized step ACF

$$\text{corr}(n) = \frac{\text{cov}(n)}{\text{cov}(0)} \approx \frac{n^{-(\rho - 2)}}{\zeta(\rho - 1)(\rho - 2)(\rho - 1)}.\tag{8}$$

The step ACF of waiting times decays like a power law, and the decay exponent is *ρ* − 2. It is worth emphasizing that even considering only *ρ* > 2, required for a finite average number of repetitions, we can obtain any positive value of the decay exponent.
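The asymptotic result (8) can be checked against the exact Ω<sup>first</sup>(*n*). To avoid catastrophic cancellation in Equation (4) at large *n*, the sketch below rewrites it with tail sums, *ζ*(*s*) − *H*<sub>*n*,*s*</sub> = ∑<sub>*j*>*n*</sub> *j*<sup>−*s*</sup>; the truncation level `N` and the value of `RHO` are illustrative assumptions.

```python
import numpy as np

RHO = 3.4                                    # decay exponent rho - 2 = 1.4
N = 2_000_000                                # truncation of the tail sums

j = np.arange(1, N + 1, dtype=float)
pow_rho = j ** -RHO
pow_rho_m1 = j ** -(RHO - 1)
zeta_rho_m1 = pow_rho_m1.sum()               # zeta(rho - 1), truncated

def corr_exact(n):
    """corr(n) = Omega^first(n), Eq. (4) rewritten with the tail sums
    zeta(s) - H_{n,s} = sum_{j>n} j^{-s} to avoid cancellation."""
    return (pow_rho_m1[n:].sum() - n * pow_rho[n:].sum()) / zeta_rho_m1

def corr_asym(n):
    """Right-hand side of Eq. (8)."""
    return n ** -(RHO - 2) / (zeta_rho_m1 * (RHO - 2) * (RHO - 1))

# The ratio of exact to asymptotic ACF approaches 1 as n grows.
for n in (50, 200, 800):
    print(n, round(corr_exact(n) / corr_asym(n), 3))
```

The ratio is already close to 1 at moderate lags, so the power-law form (8) is an accurate description well before the strict *n* → ∞ limit.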

#### **4. The Primary Process**

Now we are ready to define the primary CTRW process with repeating waiting times. This process is characterized by two key properties:

- the waiting times Δ*t*1, Δ*t*2, ... , Δ*tn* follow the process of repeated waiting times defined in Section 3;
- the process increments Δ*x*1, Δ*x*2, ... , Δ*xn* are i.i.d. random variables drawn from the distribution *h*(Δ*x*), independent of the waiting times.

Note that we do not assume any dependence within the series of consecutive changes of the process value Δ*x*1, Δ*x*2, ... , Δ*xn*. We do not make any further assumptions about the shape of the distribution *h*(Δ*x*). The memory in this process is present only in the sequence of waiting times.
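A realization of the primary process can be sketched as follows. The exponential *ψ* and Gaussian *h* are illustrative choices only (the model fixes neither), and the function names are ours.

```python
import numpy as np

def simulate_primary_ctrw(n_events, rho, rng):
    """Primary CTRW: waiting times from the repeated-waiting-time process
    of Section 3, combined with i.i.d. jumps."""
    dt = []
    while len(dt) < n_events:
        nu = rng.zipf(rho)                       # repetitions, Eq. (1)
        dt.extend([rng.exponential(1.0)] * nu)   # one draw from psi, repeated
    dt = np.asarray(dt[:n_events])
    dx = rng.normal(0.0, 1.0, size=n_events)     # i.i.d. jumps from h
    return np.cumsum(dt), np.cumsum(dx)          # event times, process values

def value_at(t_events, x_values, t):
    """Process value at continuous time t (piecewise constant between events)."""
    idx = np.searchsorted(t_events, t, side="right") - 1
    return 0.0 if idx < 0 else float(x_values[idx])

rng = np.random.default_rng(3)
t_ev, x_ev = simulate_primary_ctrw(10_000, 3.0, rng)
print(t_ev.size, bool(np.all(np.diff(t_ev) > 0)))
```

Sampling `value_at` on a regular time grid is what turns this event-based construction into the fixed-sampling series whose ACF is analyzed below.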

Let us start the analysis of the properties of this process with the following observation. As the changes Δ*xn* are independent, the changes above any given threshold occur independently. Knowing the result in (8), we can calculate the autocorrelation of the series of inter-occurrence times between changes above or below any threshold. The details of the derivation are presented in Appendix B. It turns out that we again obtain power-law decay with the exponent −(*ρ* − 2), the same as in (8).

Moreover, we managed to obtain the soft propagator of the primary CTRW process and the characteristics derived from it. The details of the calculations can be found in Appendix A. Here we present selected results, namely, the first two moments and the time autocorrelation of changes, in the limit of long times (*t* → ∞). We consider analytical terms (*t*, *t*<sup>2</sup>, *t*<sup>3</sup>, . . .) and the most significant power-law term when *ρ* is non-integer.

Using results from Appendix A, the first moment of the process for *t* → ∞ can be approximated as

$$m\_1(t) = \mathcal{L}^{-1} \left[ -i \frac{\partial \tilde{P}(k; s)}{\partial k} \Big|\_{k=0} \right](t) \approx \frac{\mu\_1}{\langle \Delta t \rangle} t + \mu\_1 \frac{\alpha \{ \psi \}}{\Gamma(4-\rho)} t^{3-\rho}, \quad \rho \in (2; 4), \tag{9}$$

where L<sup>−1</sup>[·](*t*) is the inverse Laplace transform, *P̃*(*k*; *s*) is the propagator of the process in the Fourier–Laplace domain, Γ(·) is Euler's gamma function, and *α*{*ψ*} is a complex functional of *ψ*, which has to be calculated separately for each *ψ*. The leading term is the typical linear behavior, but we also observe an additional power-law term. The second moment can be written in the form

$$\begin{split} m\_2(t) &= \mathcal{L}^{-1} \left[ -\frac{\partial^2 \tilde{P}(k;s)}{\partial k^2} \Big|\_{k=0} \right](t) \\ &\approx \mu\_1^2 \left( \frac{t}{\langle \Delta t \rangle} \right)^2 + \sigma\_x^2 \frac{t}{\langle \Delta t \rangle} + \mu\_1^2 \beta \{\psi \} \frac{t}{\langle \Delta t \rangle} + \mu\_1^2 \frac{\gamma \{\psi \}}{\Gamma(5-\rho)} t^{4-\rho}, \quad \rho \in (2;5), \end{split} \tag{10}$$

where *β*{*ψ*}, *γ*{*ψ*} are complex functionals of *ψ*, which have to be calculated separately for each *ψ*. From the first two moments of the process, we calculate the process variance (still considering only analytical and the most important power-law term)

$$
\sigma^2(t) = m\_2(t) - m\_1^2(t) \approx \left(\sigma\_x^2 + \mu\_1^2 \beta \{\psi\}\right) \frac{t}{\langle \Delta t \rangle} + \mu\_1^2 \frac{\gamma \{\psi\}}{\Gamma(5-\rho)} t^{4-\rho}, \quad \rho \in (2;5). \tag{11}
$$

It is worth mentioning that, for the variance, the power-law term from the second moment is more important than the power-law term from the first moment. We observe normal diffusion for *ρ* > 3. However, there is superdiffusion in the case of *ρ* ∈ (2; 3). We obtain ballistic diffusion in the limit *ρ* → 2.

Having the first two moments, one can calculate the velocity ACF, which, for a stationary process, is equivalent to the normalized ACF of changes at fixed sampling:

$$\mathbb{C}(t) = \frac{1}{2} \frac{\partial^2 m\_2(t)}{\partial t^2} - \left(\frac{\partial m\_1(t)}{\partial t}\right)^2 \Rightarrow \mathbb{C}(t) \approx \mu\_1^2 \frac{1}{\Gamma(3-\rho)} \kappa \{\psi\} t^{2-\rho},\tag{12}$$

where *κ*{*ψ*} = *γ*{*ψ*}/2 − 2*α*{*ψ*}/⟨Δ*t*⟩, for *ρ* ∈ (2; 4). In the limit of *t* → ∞ and *μ*1 ≠ 0, we observe a power-law decay of the ACF with the exponent *ρ* − 2. In the case of *μ*1 = 0, it can be proven that this exponent is *ρ* − 1, so the decay is faster (A5).

It is crucial to emphasize that in Equations (9)–(12) for *ρ* exceeding the mentioned range, there is still a power-law term with the same dependence on *μ*1 and the same time exponent. However, the dependence of the amplitude on *ρ* takes a different, more complex form.

#### **5. Empirical Results**

We use the constructed process to investigate the role of correlated inter-trade times in the volatility clustering effect. We consider this process as a toy model, describing high-frequency financial data. The value of the process represents the logarithm of the stock price. We can treat transactions as events that change the price. Therefore, the inter-transaction times correspond to waiting times in our model. The jumps represent the difference in the logarithmic prices of consecutive transactions, which are logarithmic returns [52].

The CTRW formalism allows us to obtain the autocorrelation of price returns. Moreover, the same formalism can be used to obtain the nonlinear ACF of absolute increments. This can be achieved by using different jump distributions *h*(Δ*x*). To model the process of price changes in time, we should use a symmetric distribution *h*(Δ*x*), as the empirical distribution of returns is symmetric. As a result, we obtain a vanishing mean *μ*1 = 0 and a quickly decaying ACF of returns. To derive the nonlinear ACF of absolute returns, we define a new CTRW process, and by calculating its linear ACF, we obtain the nonlinear ACF of price increments. Following [60], if as *h*(Δ*x*) we use only the positive half of the previous distribution multiplied by 2, we deal with the case of non-zero drift and obtain an artificial, monotonically increasing process. As now *μ*1 ≠ 0, we obtain the slow power-law decay of the autocorrelation of absolute returns, as in the empirical results presented as a solid black line in Figure 2b.

Since we assumed only one type of memory in our model, introduced by the distribution *ω*(*ν*), we cannot expect the model to reproduce the exact values of the empirical nonlinear ACF of absolute returns. The model, however, should be able to reproduce its slope (as in Figure 2b, in which the green dash-dotted line reproduces the slope of the solid black line). The theoretical slope is obtained analytically and is equal to 2 − *ρ*. It is worth emphasizing that the slope does not depend on the distribution of price changes *h*(Δ*x*) or waiting times *ψ*(Δ*t*) and is fully determined by the single parameter *ρ*, characterizing the distribution *ω*(*ν*). This fact significantly simplifies the comparison with the empirical data, as we are required to estimate only one parameter, *ρ*. On the other hand, the assumption of repeated waiting times is a technical method of introducing memory; we cannot expect to observe such a phenomenon in the empirical time series. The parameter *ρ* is a measure of the memory present in the sequence of consecutive waiting times. Therefore, we estimate this parameter using the slope of the step ACF of waiting times, which equals 2 − *ρ* in the model. It is a surprising and potentially essential fact that the exponent of decay of the nonlinear time ACF is the same as that of the step ACF of waiting times. This result motivates us to compare these two values for empirical financial data. Of course, in the empirical data we also observe a long-term positive step ACF of |Δ*x*|, which was not included in our model. Therefore, we can expect the slope of the time ACF of |Δ*x*| to be slightly higher than the slope of the step ACF of Δ*t*.
Since long-term nonlinear autocorrelation is usually interpreted as a reminiscence of the volatility clustering phenomenon, it is interesting to check what part of the observed volatility clustering effect can be explained by the memory between inter-trade times alone. We present the results for the five most traded stocks from the Warsaw Stock Exchange in Table 1 (ordered by the number of transactions), all with an average inter-trade time no greater than 30 s.

**Table 1.** Table with fitted slopes of the empirical stationarized step ACF of waiting times and the time ACF of price changes' absolute values for the five most liquid stocks from the WSE. The time ACF slopes are close to the corresponding step ACF slopes. The analysis was performed on the tick-by-tick market data from the public domain database [70]. The data covers the period from 3 January 2013 to 14 July 2017. For instance, the data set for KGHM contains 3,096,625 transactions.


We see that our model can estimate the slope of the time ACF with an accuracy of around 10%. Moreover, our model successfully reproduces the power-law decay of the autocorrelation of inter-occurrence times between changes below or above any given threshold, reported in [75,76]. Please note that the decay exponent predicted by our model, −(*ρ* − 2), with the empirical values presented in Table 1, is close to 0.31, as reported in [76].

### **6. Conclusions**

We introduced a new continuous-time random walk (CTRW) model with long-term memory within the sequence of waiting times. We used a simple model of repeating waiting times instead of commonly used point processes such as the ACD model and the Hawkes process. Despite its simplicity, our model of repeating waiting times has several valuable properties: it is stationary, it can be treated analytically, and the distribution of waiting times and the memory in their series can be set independently.

As we observe many phenomena with dependencies between waiting times, possible applications of this family of CTRW models go beyond the exemplary application presented here.

However, in this manuscript, we applied the proposed model to describe high-frequency financial time series. We asked ourselves which commonly known properties of financial time series can be reproduced by the long-term memory introduced in our model solely by means of the repeating waiting times. We have to emphasize that some of these properties, known as stylized facts, depend on the waiting-time distribution *ψ*(Δ*t*) and the price-change distribution *h*(Δ*x*). As we are not trying to study the general ability of the continuous-time random walk to describe high-frequency financial time series, we have not studied the broad distribution of log-returns [72], multi-fractality [73,74], or the universal scaling of the distribution of times between large jumps [75,76]. We have analyzed the decay of the nonlinear time autocorrelation function of log-returns and the decay of the step autocorrelation function of times between large jumps. Although we considered only memory in the sequence of waiting times, we managed to show that long-term dependencies in waiting times are crucial in explaining the volatility clustering effect and result in the power-law decay of both measures mentioned above.

Our results indicate that the dependence between consecutive price changes is not the primary carrier of long-range memory in the volatility clustering phenomenon. To verify these results, we conducted another simulation. We prepared autocorrelated series of waiting times using the Fourier filtering method (described, for example, in [86]). As in our model, the slopes of the step ACF of waiting times and the time ACF of absolute returns were the same. This verification confirms our conclusion and indicates that it is general, independent of the origin of the autocorrelation between inter-trade intervals.

**Author Contributions:** Data curation, J.K.; formal analysis, J.K. and T.G.; investigation, J.K.; methodology, T.G.; software, J.K.; supervision, T.G.; validation, T.G.; visualization, J.K.; writing—original draft, J.K. and T.G.; writing—review and editing, J.K. and T.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No new data were created or analyzed in this study. Data sharing is not applicable to this article.

**Acknowledgments:** We want to thank Ryszard Kutner and Tomasz Raducha for their helpful remarks and comments on the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.
