Abstract
We study the projection of an element of fractional Gaussian noise onto its neighbouring elements. We prove some analytic results for the coefficients of this projection. In particular, we obtain recurrence relations for them. We also make several conjectures concerning the behaviour of these coefficients, provide numerical evidence supporting these conjectures, and study them theoretically in particular cases. As an auxiliary result of independent interest, we investigate the covariance function of fractional Gaussian noise and prove that for H > 1/2 it is completely monotone and, in particular, monotone, convex, and log-convex, along with further useful properties.
Keywords:
fractional Brownian motion; fractional Gaussian noise; coefficients of projection; conjecture; covariance matrix; autocovariance function; completely monotonic function

MSC:
60G22; 60G15; 60E99
1. Introduction
This paper is about some (conjectured) properties of the projection of an element of fractional Gaussian noise onto the neighbouring elements. Unfortunately, not all our conjectures are amenable to analytical proofs, although numerical experiments confirm their validity. This is indeed rather strange, as the properties of fractional Brownian motion and its increments have been thoroughly studied, attracting a lot of research effort and resulting in countless papers and several books, e.g., [1,2,3,4]. These books are mostly devoted to the stochastic analysis of fractional processes, the properties of their trajectories, distributional properties of certain functionals of the paths, and related issues. Such interest and the large number of theoretical studies of fractional Gaussian noises are due to the wide range of applications of these processes and to their characteristic combination of properties: the existence of memory together with self-similarity and stationarity. In particular, fractional Gaussian noises appear in the investigation of anomalous diffusion and of solutions of fractional diffusion equations, including numerical schemes [5,6,7], the information capacity of a non-linear neuron model [8], statistical inference [9,10], entropy calculation [11,12], the extraction of quantitative information from recurrence plots [13], and many other areas. There is, however, an area where much less is known: problems relating to the covariance matrix of fractional Brownian motion and fractional Gaussian noise in high dimensions, and to its determinant. Computational features of the covariance matrices are widely used for simulations and in various applications; see, for example, [14,15,16,17]. The problem considered in the present paper arose in the following way: In [18], the authors construct a discrete process that converges weakly to a fractional Brownian motion (fBm) with Hurst parameter H.
The construction of this process is based on the Cholesky decomposition of the covariance matrix of the fractional Gaussian noise (fGn). Several interesting properties of this decomposition are proved in [18], such as the positivity of all elements of the corresponding triangular matrix and the monotonicity along its main diagonal. Numerical examples also suggest the conjecture that one has monotonicity along all diagonals of this matrix. However, an analytic proof of this fact remains an open problem. Studying this problem, the authors of [18] establish a connection between the predictor’s coefficients—that is, the coefficients of the projection of any value of a stationary Gaussian process onto finitely many subsequent elements—and the Cholesky decomposition of the covariance matrix of the process. It turns out that the positivity of the coefficients of the predictor implies the monotonicity along the diagonals of the triangular matrix of the Cholesky decomposition of fGn, which is sufficient for the monotonicity along the columns of the triangular matrix in the Cholesky decomposition of fBm itself; this property, in turn, ensures the convergence of a wide class of discrete-time schemes to a fractional Brownian motion. We will see in Section 2.1 below that the coefficients of the predictor can be found as the solution to a system of linear equations whose coefficient matrix coincides with the covariance matrix of fGn. This enables us to reduce the monotonicity problem for the Cholesky decomposition to proving the positivity of the solution to a linear system of equations. However, as we show in Section 2, even for small matrices, an analytic proof of the positivity of all coefficients is a non-trivial problem. For the moment, we have only a partial solution. Therefore, we formulate the following conjecture:
Conjecture 1.
If H ∈ (1/2, 1), then the coefficients of the projection of any element of fractional Gaussian noise onto any finite number of its subsequent elements are strictly positive.
We shall discuss this conjecture in Section 2 in more detail. Due to stationarity, it is sufficient to establish Conjecture 1 for the projection of onto , i.e., for the conditional expectation
where denotes fBm and . Having computational evidence but lacking an analytical proof of Conjecture 1, we provide in this paper a wide range of associated properties of the coefficients, some with an analytic proof, and some obtained using various computational tools. It is, in particular, interesting to study the asymptotic behaviour of the coefficients as H → 1. This question is delicate since fractional Brownian motion with H = 1 is degenerate, i.e., , where ∼, and denotes the standard normal distribution. Consequently, ∼ for all , and
for any convex combination, . This shows that in the case H = 1, the values of the coefficients are indeterminate, and therefore they cannot determine the asymptotic behaviour of the prelimit coefficients as H → 1. It would be very “elegant” if all coefficients tended to ; however, in reality their asymptotic behaviour is different; see Section 2.3. Another interesting question concerns the relations between the coefficients. It is natural to assume that they decrease as k increases, but the situation here is also more involved, depending essentially on the value of H. In Section 2.4, we prove some recurrence relations between the coefficients. These relations lead to a computational algorithm which is more efficient than solving the system of equations described in Section 2.1. Finally, it turns out that the positivity of the first coefficient can be proven analytically for all values of n; this result is established in Section 2.5.
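The connection described above between predictor coefficients and the Cholesky factor is easy to probe numerically. The following sketch is ours, not from [18]; it assumes the standard autocovariance of fGn (written out in Section 2), builds the covariance matrix for a moderate size, and checks the positivity of all entries of the triangular factor together with the conjectured monotonicity along its diagonals:

```python
import numpy as np

# Numerical probe of the Cholesky-decomposition properties discussed above.
# The fGn autocovariance formula below is the standard one; the diagonal
# monotonicity check mirrors the conjecture of [18] and is not a proof.
H, n = 0.7, 12
k = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
C = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
L = np.linalg.cholesky(C)  # lower-triangular factor, C = L @ L.T

# positivity of all entries of the triangular factor (proved in [18]):
assert np.all(L[np.tril_indices(n)] > 0)
# monotonicity along every diagonal (conjectured from numerical examples):
for d in range(n):
    diag = np.diagonal(L, -d)
    assert np.all(np.diff(diag) >= 0) or np.all(np.diff(diag) <= 0)
```

For this moderate size the checks pass, in line with the numerical evidence reported in [18].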
We close the paper with a few numerical examples supporting our theoretical results and conjectures. In particular, we compute the coefficients for all and for various values of H, and discuss their behaviour. Additionally, we compare different calculation methods for the coefficients in terms of computing time, and we demonstrate the advantage of the approach via recurrence formulae in most cases.
The paper is organized as follows: Section 2 contains almost all properties of the predictor’s coefficients that can be established analytically; it introduces the system of linear equations for these coefficients and some properties of the coefficients of this system. We consider in detail two particular cases: n = 2 and n = 3. In these cases, we prove the positivity of all coefficients, establish some relations between them, and study the asymptotic behaviour as H → 1. We also obtain recurrence relations for the coefficients, and prove that the first coefficient is positive for all values of n. Section 3 contains some numerical illustrations of the properties and conjectures from Section 1 and Section 2. In Section 3.3, we briefly discuss some observations concerning the case H < 1/2.
2. Analytical Properties of the Coefficients
Let be a fractional Brownian motion (fBm) with Hurst index , that is, a centered Gaussian process with covariance function of the form
We use
for the nth increment of fBm. It is well known that the process has stationary increments, which implies that is a stationary Gaussian sequence (known as fractional Gaussian noise—fGn for short). It follows from (1) that its autocovariance function is given by
Obviously, .
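Since the displayed formula was not preserved in this copy, the sketch below uses the standard expression ρ(k) = ½(|k+1|^{2H} − 2|k|^{2H} + |k−1|^{2H}) for unit-step fGn (stated here as an assumption) and numerically checks positivity, monotone decrease, and convexity for H > 1/2, together with the well-known asymptotics ρ(n) ∼ H(2H−1)n^{2H−2}:

```python
import numpy as np

def fgn_autocovariance(k, H):
    """Standard autocovariance of fractional Gaussian noise with unit step:
    rho(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

H = 0.7
rho = fgn_autocovariance(np.arange(21), H)
assert rho[0] == 1.0                # rho(0) = Var of one increment = 1
assert np.all(rho[1:] > 0)          # positivity for H > 1/2
assert np.all(np.diff(rho) < 0)     # monotone decrease
assert np.all(np.diff(rho, 2) > 0)  # convexity
# long-range dependence: rho(n) ~ H(2H-1) n^{2H-2} for large n
n = 200.0
assert abs(fgn_autocovariance(n, H) / (H * (2 * H - 1) * n ** (2 * H - 2)) - 1) < 1e-3
```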
Now, let us consider the projection of onto , i.e., the conditional expectation . Since the joint distribution of is centered and Gaussian, we obtain the following relation from the theorem on normal correlation (see, for example, Theorem 3.1 in [19]):
where . Our Conjecture 1 means that all the coefficients for , , are strictly positive. (We have formulated it in a more general form, i.e., for any element , because, by stationarity, the projection for any j has the same distribution as .)
Let us consider two approaches to the calculation of the coefficients . The first method is straightforward: it involves solving a system of linear equations. The second is based on recurrence relations for the .
2.1. System of Linear Equations for Coefficients
Multiplying both sides of (3) by , and taking expectations yields
Due to stationarity,
This leads to the following system of linear equations for the coefficients , :
We can solve this using Cramer’s rule,
where
and is the matrix A with its kth column vector replaced by :
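Since the explicit determinant formulas were lost in this copy, here is a small numerical sketch of the method of this subsection: it assembles the Toeplitz covariance matrix of fGn and solves the system directly (via `np.linalg.solve` rather than Cramer's rule, which is impractical for large n). The orientation of the right-hand side, so that the last entry of the solution weights the element nearest to the predicted one, is our labelling assumption:

```python
import numpy as np

def projection_coefficients(n, H):
    """Coefficients of the L^2-projection of X_{n+1} onto X_1, ..., X_n for
    fGn, obtained by solving the linear system whose matrix is the Toeplitz
    covariance matrix (rho(|i-j|)) and whose right-hand side is rho(n+1-j)."""
    def rho(k):
        k = np.abs(np.asarray(k, dtype=float))
        return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    idx = np.arange(1, n + 1)
    A = rho(idx[:, None] - idx[None, :])  # covariance matrix of fGn
    b = rho(n + 1 - idx)                  # E[X_{n+1} X_j] = rho(n+1-j)
    return np.linalg.solve(A, b)

a = projection_coefficients(10, 0.7)
assert np.all(a > 0)             # numerical support for Conjecture 1
assert np.argmax(a) == a.size - 1  # the nearest element gets the largest weight
```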
Remark 1.
It is known that the finite-dimensional distributions of have a nonsingular covariance matrix; in particular, for any , the values are linearly independent; see Theorem 1.1 in [20] and its proof. Obviously, a similar statement holds for fractional Gaussian noise, since the vector is a nonsingular linear transform of . In other words, ; moreover, if a.s., then for all k.
2.2. Relations between the Values
In order to establish analytic properties of the coefficients , we need several auxiliary results on the properties of the sequence . We start with a useful relation between , and .
Lemma 1.
The following equality holds:
Proof.
Using the self-similarity of fBm and the stationarity of its increments, we obtain
Remark 2.
The inequality was proved in [18] (p. 28) by analytic methods. In this paper, we improve this result in two directions: we obtain an explicit expression for and we prove the sharper bound ; see Lemma 3 below.
Many important properties of the covariance function of fractional Gaussian noise (such as monotonicity, convexity and log-convexity) follow from the more general property of complete monotonicity, which is stated in the next lemma. To formulate it, let us introduce the function
Lemma 2.
- 1.
- The function is convex if H > 1/2 and concave if H < 1/2.
- 2.
- If H > 1/2, then the function ρ is completely monotone (CM) on , that is, and
- 3.
- If H < 1/2, then the function is completely monotone on .
Proof.
1. Using the elementary relation , it is not hard to see that
Since is convex, and since the convex functions form a convex cone which is closed under pointwise convergence, the double integral appearing in the representation of is again convex. Thus, is convex or concave according to or , respectively.
2. Let and . Then, Formula (9) remains valid if we replace with . But is CM, and so is an integral mixture of CM functions. Since the CM functions form a convex cone which is closed under pointwise convergence, cf. Corollary 1.6 in [21], we see that is CM on .
3. The above argument holds true in the case ; the only difference is that in this case, the factor is negative. □
Remark 3.
1. Since is a CM function on , it admits the representation , for some positive measure μ on and ; see, for example, Theorem 1.4 in [21]. Taking into account that , it is not hard to see that , i.e.,
2. The function ρ can be represented in the form , where we write for the step-1 difference operator, and . Then the second statement of Lemma 2 follows from the more general result: if f is CM on , then is CM. Indeed, since CM is a closed convex cone, it is enough to verify the claim for the “basic” CM function , where is a parameter. Now we have
and this is clearly a completely monotone function.
3. The argument which we used in the proof of Lemma 2 proves a bit more: for , the function is even a Stieltjes function, i.e., a double Laplace transform. To see this, we note that the kernel is a Stieltjes function. Further details on Stieltjes functions can be found in [21].
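Lemma 2 can be probed numerically: for a completely monotone function, the forward differences of every order m alternate in sign, (−1)^m Δ^m r ≥ 0. The sketch below checks this on a grid for the continuous extension of the fGn autocovariance (the standard formula, restricted to x ≥ 1); it is a numerical probe, not a proof:

```python
import numpy as np

H = 0.8  # an arbitrary value in (1/2, 1), the range covered by Statement 2

def r(x):
    """Continuous extension of the fGn autocovariance for x >= 1."""
    return 0.5 * ((x + 1) ** (2 * H) - 2 * x ** (2 * H) + (x - 1) ** (2 * H))

# Complete monotonicity forces alternating signs of finite differences
# of every order; we check orders 1 through 5 (with a tiny tolerance
# for floating-point rounding).
x = np.linspace(1.0, 10.0, 200)
vals = r(x)
for m in range(1, 6):
    vals = np.diff(vals)
    assert np.all((-1) ** m * vals > -1e-12), f"sign pattern fails at order {m}"
```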
As for the following properties, fractional Brownian motion with Hurst index H = 1 is degenerate, i.e., , where ∼; consequently, all and the next set of inequalities become equalities. Therefore, we consider only H < 1.
Corollary 1.
Let H ∈ (1/2, 1). The sequence has the following properties:
- 1.
- Monotonicity and positivity: for any
- 2.
- Convexity: for any
- 3.
- Log-convexity: for any
Proof.
By Lemma 2, the function is convex on and completely monotone on ; by continuity, we can include the endpoints of each interval.
We begin with the observation that a completely monotone function is automatically log-convex. We show this for using the representation (10): for any ,
Thus, the Cauchy–Schwarz inequality yields
which guarantees that is convex.
Therefore all properties claimed in the statement hold for , convexity even for , and we only have to deal with the case .
Monotonicity for : We have to show . This follows by direct verification since by (2),
(recall that ).
The previous lemma implies that . The following result gives a sharper bound.
Lemma 3.
If , then
Proof.
Applying (7), we may write
because of Statement 2 of Corollary 1. □
2.3. Particular Cases
We will now consider in detail two particular cases: n = 2 and n = 3. In these cases, we prove the positivity of all coefficients , establish some relations between them, and study the asymptotic behavior as H → 1. In the case n = 2, everything is established analytically, while in the case n = 3, the sign of the second coefficient and the relation between the second and the third coefficients, and , are verified numerically.
2.3.1. Case n = 2
Proposition 1.
For any ,
Proof.
Recall that, by Corollary 1 (Statement 1), . Hence, the first inequality is equivalent to
which is true due to Corollary 1.
To prove the second inequality , we need to show that , which was established in Corollary 1. □
Remark 4.
It is worth pointing out that the positivity (and positive definiteness) of the coefficient matrix together with the positivity of the right-hand side of the system does not imply the positivity of the solution. Indeed, consider the following system with the same coefficients as in (17), but another positive right-hand side, say :
The solution has the form
If, for example, , then and . For the system (17), this condition is written as , contradicting Corollary 1.
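For illustration, the two-coefficient case admits a compact closed form via Cramer's rule. The formulas below are our reconstruction from the standard 2×2 system (the paper's own display was not preserved in this copy); the positivity of the two numerators is exactly the content of Corollary 1: monotonicity of ρ gives the ordering of the coefficients, and log-convexity (r2 > r1²) gives the positivity of the smaller one:

```python
import numpy as np

def rho(k, H):
    """Standard fGn autocovariance."""
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + abs(k - 1) ** (2 * H))

def n2_coefficients(H):
    """Cramer's rule for the 2x2 system [[1, r1], [r1, 1]] a = (r1, r2),
    where a[0] weights the neighbour closest to the predicted element
    (our labelling assumption)."""
    r1, r2 = rho(1, H), rho(2, H)
    det = 1.0 - r1 ** 2              # positive, since |r1| < 1
    a1 = r1 * (1.0 - r2) / det       # numerator > 0 since 0 < r2 < 1
    a2 = (r2 - r1 ** 2) / det        # numerator > 0 by log-convexity
    return a1, a2

for H in np.linspace(0.55, 0.95, 9):
    a1, a2 = n2_coefficients(H)
    # a1 - a2 = (r1 - r2)(1 + r1)/det > 0 by monotonicity of rho:
    assert a1 > a2 > 0
```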
Proposition 2.
Proof.
If we take the limit in the relations
we obtain
Therefore,
By l’Hôpital’s rule,
Similarly,
Figure 1 shows the dependence of the coefficients and on H. It illustrates the theoretical results stated in Propositions 1 and 2, in particular, the positivity and monotonicity of the coefficients, and their convergence to the theoretical limit values as H → 1.
Figure 1.
Case n = 2: and as functions of H.
2.3.2. Case n = 3
For n = 3, the system (4) has the following form
Therefore,
Proposition 3.
For any ,
Proof.
The positivity of the denominator follows from the representation
together with Corollary 1. Therefore, it suffices to prove the claimed relations for the numerators of , , and .
1. Let us prove that . The difference between the numerators of and is equal to
since and by Statements 1 and 2 of Corollary 1.
Figure 2 confirms the above proposition. We see that is the largest coefficient. However, only for ; for larger H, the order changes.
Figure 2.
Case n = 3: , , and as functions of H.
Remark 5.
We now examine numerically the relation between and and the sign of . One may represent the numerator of as follows:
Thus we need to establish that
We establish this fact numerically, since we could not find an analytical proof. Figure 3 shows the plot of the left-hand side of (24), which confirms the positivity of .
Figure 3.
The left-hand side of (24).
The left- and the right-hand sides of this inequality are the values at the points and , respectively, of the following function:
The graph of the surface is shown in Figure 4. It was natural to assume that the function decreases in x for any H, being bigger at than at . However, the function is not monotone for all H. Figure 5 contains two-dimensional plots of for four different values of H: , , and . We observe that for each value of H; however, the function changes its behavior from increasing to decreasing.
Figure 4.
The function as a surface.
Figure 5.
The function as a function of x for various H.
Remark 6.
The unexpected behavior of (first increasing, then decreasing) is a consequence of the non-standard term . For , this function decreases in x for any . Indeed, for , it has the form
Write . It is sufficient to prove that the function
increases in y for any . However, for ,
and
where
(here is the Pochhammer symbol). The monotonicity of for can be proved by differentiation. Then
and hence, the partial derivative equals
By rearranging the double sum in the numerator, we obtain the expression
which is clearly positive. Thus for any , is increasing as a function of .
Let us try to establish a bit more. We can represent in the following form:
where the coefficients can be found successively from the following equations:
Let us find the first few coefficients: ,
It is easy to see that , , and are positive for . We believe that for all k. However, the proof of this fact remains an open problem.
Proposition 4.
Remark 7.
Obviously, the sum of the limits of the coefficients is 1, as expected.
Proof.
Remark 8.
For n = 4, we present only graphical results; see Figure 6. The situation here is more complicated than in the case n = 3. The first coefficient is still the largest; however, the order of the other three coefficients changes several times depending on H. In particular, for H close to 1/2, these coefficients are decreasing, but for H close to 1, they are increasing.
Figure 6.
Case n = 4: , , , and depending on H.
2.4. Recurrence Relations for the Coefficients
In general, there are several ways to obtain (4). For example, we can consider the coefficients as a result of minimizing the value of the quadratic form
Evidently, differentiation leads again to the system (4). We can also look for the coefficients with the help of the inverse matrix , where A is from (5). However, calculating the entries of the inverse matrix is as difficult as calculating the determinants. It is possible to avoid determinants by using the properties of fGn. More precisely, we propose a recurrence method that calculates the coefficients successively, starting with .
Proposition 5.
The following relations hold true:
Proof.
In order to prove (26) and (27), we use the theorem on normal correlation as well as the independence of and any of . We get
where , , are some constants. Now we take the conditional expectation on both sides of (28) to obtain
Comparing this equality with (3), and taking into account that the increments , are linearly independent, we conclude that
Now we insert this equality into (28) and see
After multiplying both sides of the last equality by and taking expectations, we arrive at
It follows from the stationarity of the increments that the indices and 1 in the last equality play symmetric roles, i.e., they are equivalent to
From this, we conclude that
Thus, the relation (26) is proved.
Using again the symmetry of the stationary increments, it is not hard to see that
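The recurrence relations (26) and (27) themselves were not preserved in this copy, so as a hedged stand-in we sketch the classical Levinson–Durbin recursion, which carries out the same programme for any stationary sequence: each row of predictor coefficients is obtained from the previous row in O(n) operations instead of solving a fresh linear system. The paper's own relations may differ in form, but the output must coincide with the solution of the system (4):

```python
import numpy as np

def fgn_rho(k, H):
    """Standard fGn autocovariance."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def predictor_rows(N, H):
    """All rows of predictor coefficients via the Levinson-Durbin recursion.

    Row n holds (phi_{n,1}, ..., phi_{n,n}) with phi_{n,k} weighting the
    element at lag k from the predicted one."""
    r = fgn_rho(np.arange(N + 1), H)
    phi = np.array([r[1]])       # n = 1: single coefficient rho(1)
    v = 1.0 - r[1] ** 2          # one-step prediction error variance
    rows = [phi.copy()]
    for n in range(1, N):
        k = (r[n + 1] - phi @ r[n:0:-1]) / v   # reflection coefficient
        phi = np.concatenate([phi - k * phi[::-1], [k]])
        v *= 1.0 - k * k
        rows.append(phi.copy())
    return rows

rows = predictor_rows(8, 0.75)
# Cross-check the last row against the direct solution of the linear system:
idx = np.arange(1, 9)
A = fgn_rho(idx[:, None] - idx[None, :], 0.75)
direct = np.linalg.solve(A, fgn_rho(idx, 0.75))
assert np.allclose(rows[-1], direct)
assert all(np.all(row > 0) for row in rows)  # Conjecture 1, numerically
```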
2.5. Positivity of
We conjecture that all coefficients , , are positive. However, we can prove analytically only the positivity of the leading coefficient, .
Proposition 6.
For all , .
3. Numerical Results
3.1. Properties of Coefficients: Positivity and (Non)Monotonicity
In this section, we compute the coefficients numerically for various values of H. In Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, the results for , , , , , and are listed for .
Table 1.
Coefficients for .
Table 2.
Coefficients for .
Table 3.
Coefficients for .
Table 4.
Coefficients for .
Table 5.
Coefficients for .
Table 6.
Coefficients for .
Observe the following:
- 1.
- All coefficients are positive.
- 2.
- The first coefficient in each row is the largest, i.e., for any . Moreover, it is often substantially larger than any other coefficient in the row.
- 3.
- The conjecture concerning the monotonicity of coefficients (decrease along each row) does not hold in general. If we take sufficiently large values of H, for example , we see that the coefficient is always less than . Moreover, the last coefficient is bigger than for sufficiently large H.
- 4.
- The monotonicity along each column holds, i.e., for fixed k. Figure 7 and Figure 8 illustrate the dependence of on n for and for various H.
Figure 7. Case : as functions of n.
Figure 8. Case : as functions of n.
- 5.
- The limiting distribution of the coefficients as is not uniform.
- 6.
- The second of these relations suggests the following inductive approach: knowing that the coefficients decrease in k for a fixed n and that the last coefficient is positive, one could try to prove by induction that they decrease in k for . Unfortunately, if we take as the base of the induction, we see that such a relation holds only if , and indeed, as we know from Proposition 3. However, the relation between and is less stable and depends on H; see Figure 2. Therefore, we cannot state that .
3.2. Comparison of the Methods: Computation Time
Let us compare the two methods in terms of computation time. The first method (solving the system of equations) was implemented using the R function solve(). We considered two problems:
- 1.
- For fixed n, compute the coefficients , , i.e., compute the nth row of the matrix.
- 2.
- Compute the whole triangular array . This requires solving systems of equations.
The second method (recurrence relations) always gives us the whole array of coefficients, which can be considered an advantage.
Let us mention that both methods give exactly the same values of the coefficients.
We also compared the time needed for the computation by each method on an Intel Core i3-8145U processor. The results are shown in Table 7. Observe that the recurrence method is always faster, especially for large n, if we need to compute the whole matrix: it takes less than 2 s for , while solving all the systems of equations takes more than 21 min. Moreover, for large n, the recurrence method is even faster than the calculation of a single row of the matrix, which requires solving only one system of equations.
Table 7.
Computation time.
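As an illustration of why the recursion wins, the sketch below reproduces the comparison in Python: numpy's solve stands in for R's solve(), and a Levinson–Durbin-type recursion stands in for the paper's recurrence relations. Solving one system per row costs O(n³) each, while the recursion builds the whole triangular array in O(N²) total; the measured times depend on the hardware, so no timing is asserted, only the agreement of the two outputs:

```python
import time
import numpy as np

def rho(k, H):
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def rows_by_solve(N, H):
    """Method 1: one linear system per row."""
    out = []
    for n in range(1, N + 1):
        idx = np.arange(1, n + 1)
        A = rho(idx[:, None] - idx[None, :], H)
        out.append(np.linalg.solve(A, rho(idx, H)))
    return out

def rows_by_recursion(N, H):
    """Method 2: Levinson-Durbin-type recursion, one pass for all rows."""
    r = rho(np.arange(N + 1), H)
    phi, v, out = np.array([r[1]]), 1.0 - r[1] ** 2, []
    out.append(phi.copy())
    for n in range(1, N):
        k = (r[n + 1] - phi @ r[n:0:-1]) / v
        phi = np.concatenate([phi - k * phi[::-1], [k]])
        v *= 1.0 - k * k
        out.append(phi.copy())
    return out

N, H = 200, 0.7
t0 = time.perf_counter()
direct = rows_by_solve(N, H)
t1 = time.perf_counter()
recur = rows_by_recursion(N, H)
t2 = time.perf_counter()
# Both methods produce the same coefficients (up to rounding):
assert all(np.allclose(d, r_) for d, r_ in zip(direct, recur))
print(f"solve: {t1 - t0:.3f}s  recursion: {t2 - t1:.3f}s")
```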
3.3. Remarks on the Case H < 1/2
In this paper, we mainly focus on the case H > 1/2 (the case of long-range dependence). In this section, we give some brief comments on the other case, H < 1/2.
- 1.
- Using the complete monotonicity of (see Lemma 2), we can show that in the case H < 1/2, the inequalities for from Corollary 1, Properties 1 and 2, remain valid with opposite signs (“<” instead of “>”). In other words, the sequence is negative, increasing, and concave. However, it remains log-convex, i.e., Property 3 of Corollary 1 holds for all .
- 2.
- The behavior of the coefficients for is shown in Figure 9, Figure 10 and Figure 11. We see that for , the coefficients are negative and increasing with respect to H. Moreover, for all we also observe monotonicity with respect to k, i.e., (unlike the case H > 1/2).
Figure 9. Case : and as functions of H.
Figure 10. Case : , , and as functions of H.
Figure 11. Case : , , , and depending on H.
- 3.
- Let . In this case, , where is a sequence of independent and identically distributed random variables. So and, in general, . Consider the equality . Then . Therefore, the system of linear equations has the form , and we obtain . Finding and from the first and last equations, and then calculating successively, we obtain the following solution:
Author Contributions
Investigation, Y.M., K.R. and R.L.S.; writing—original draft preparation, Y.M., K.R. and R.L.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Japan Science and Technology Agency, project CREST JPMJCR2115 and project STORM 274410 and supported through the joint Polish–German NCN–DFG “Beethoven 3” grant NCN 2018/31/G/ST1/02252 and DFG SCHI 419/11-1.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Biagini, F.; Hu, Y.; Øksendal, B.; Zhang, T. Stochastic Calculus for Fractional Brownian Motion and Applications; Probability and its Applications (New York); Springer: London, UK, 2008. [Google Scholar]
- Mishura, Y.S. Stochastic Calculus for Fractional Brownian Motion and Related Processes; Lecture Notes in Mathematics; Springer: Berlin, Germany, 2008; Volume 1929. [Google Scholar]
- Nourdin, I. Selected Aspects of Fractional Brownian Motion; Bocconi & Springer Series; Springer: Milan, Italy; Bocconi University Press: Milan, Italy, 2012; Volume 4. [Google Scholar]
- Samorodnitsky, G. Long range dependence. Found. Trends Stoch. Syst. 2006, 1, 163–257. [Google Scholar] [CrossRef]
- Wang, W.; Cherstvy, A.G.; Liu, X.; Metzler, R. Anomalous diffusion and nonergodicity for heterogeneous diffusion processes with fractional Gaussian noise. Phys. Rev. E 2020, 102, 012146. [Google Scholar] [CrossRef] [PubMed]
- Nie, D.; Deng, W. A unified convergence analysis for the fractional diffusion equation driven by fractional Gaussian noise with Hurst index H∈(0,1). SIAM J. Numer. Anal. 2022, 60, 1548–1573. [Google Scholar] [CrossRef]
- Nie, D.; Sun, J.; Deng, W. Strong convergence order for the scheme of fractional diffusion equation driven by fractional Gaussian noise. SIAM J. Numer. Anal. 2022, 60, 1879–1904. [Google Scholar] [CrossRef]
- Gao, F.Y.; Kang, Y.M.; Chen, X.; Chen, G. Fractional Gaussian noise-enhanced information capacity of a nonlinear neuron model with binary signal input. Phys. Rev. E 2018, 97, 052142. [Google Scholar] [CrossRef] [PubMed]
- Brouste, A.; Fukasawa, M. Local asymptotic normality property for fractional Gaussian noise under high-frequency observations. Ann. Statist. 2018, 46, 2045–2061. [Google Scholar] [CrossRef]
- Sørbye, S.H.; Rue, H.V. Fractional Gaussian noise: Prior specification and model comparison. Environmetrics 2018, 29, e2457. [Google Scholar] [CrossRef]
- Stratonovich, R.L. Theory of Information and Its Value; Springer: Cham, Switzerland, 2020; pp. xxii+419. [Google Scholar]
- Dávalos, A.; Jabloun, M.; Ravier, P.; Buttelli, O. On the statistical properties of multiscale permutation entropy: Characterization of the estimator’s variance. Entropy 2019, 21, 450. [Google Scholar] [CrossRef] [PubMed]
- Ramdani, S.; Bouchara, F.; Lesne, A. Probabilistic analysis of recurrence plots generated by fractional Gaussian noise. Chaos 2018, 28, 085721. [Google Scholar] [CrossRef] [PubMed]
- Dieker, A.B.; Mandjes, M. On spectral simulation of fractional Brownian motion. Probab. Engrg. Inform. Sci. 2003, 17, 417–434. [Google Scholar] [CrossRef]
- Gupta, A.; Joshi, S. Some studies on the structure of covariance matrix of discrete-time fBm. IEEE Trans. Signal Process. 2008, 56, 4635–4650. [Google Scholar] [CrossRef]
- Kijima, M.; Tam, C.M. Fractional Brownian motions in financial models and their Monte Carlo simulation. Theory Appl. Monte Carlo Simulations 2013, 53–85. [Google Scholar] [CrossRef]
- Montillet, J.P.; Yu, K. Covariance matrix analysis for higher order fractional Brownian motion time series. In Proceedings of the 2015 IEEE 28th Canadian Conference on Electrical and Computer Engineering (CCECE), Halifax, NS, Canada, 3–6 May 2015; pp. 1420–1424. [Google Scholar]
- Mishura, Y.; Ralchenko, K.; Shklyar, S. General conditions of weak convergence of discrete-time multiplicative scheme to asset price with memory. Risks 2020, 8, 11. [Google Scholar] [CrossRef]
- Mishura, Y.; Shevchenko, G. Theory and Statistical Applications of Stochastic Processes; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar]
- Banna, O.; Mishura, Y.; Ralchenko, K.; Shklyar, S. Fractional Brownian Motion: Approximations and Projections; ISTE Ltd. & Wiley: London, UK, 2019. [Google Scholar]
- Schilling, R.L.; Song, R.; Vondraček, Z. Bernstein Functions, 2nd ed.; De Gruyter Studies in Mathematics; Theory and applications; Walter de Gruyter & Co.: Berlin, Germany, 2012; Volume 37. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).