*Article* **Fourth Cumulant Bound of Multivariate Normal Approximation on General Functionals of Gaussian Fields**

**Yoon-Tae Kim and Hyun-Suk Park \***

Division of Data Science and Data Science Convergence Research Center, College of Information Science, Hallym University, Chuncheon 200-702, Korea; ytkim@hallym.ac.kr

**\*** Correspondence: hspark@hallym.ac.kr; Tel.: +82-33-248-2036

**Abstract:** We develop a technique for obtaining the fourth moment bound on the normal approximation of *F*, where *F* is an R*d*-valued random vector whose components are functionals of Gaussian fields. This study transcends the case of vectors of multiple stochastic integrals, which has been the subject of research so far. We perform this task by investigating the relationship between the expectations of the two operators Γ and Γ∗. Here, the operator Γ was introduced in Noreddine and Nourdin (2011) [*On the Gaussian approximation of vector-valued multiple integrals*, J. Multivar. Anal.], and Γ∗ is a multi-dimensional version of the operator used in Kim and Park (2018) [*An Edgeworth expansion for functionals of Gaussian fields and its applications*, Stoch. Proc. Appl.]. In the specific case where *F* is a vector of multiple stochastic integrals, the conditions required in the general case are naturally satisfied, and our method yields a better estimate than that obtained by previous methods. In the case of *d* = 1, the method developed here shows that, even for general functionals of Gaussian fields, the *fourth moment theorem* holds without the conditions needed in the multi-dimensional case.

**Keywords:** Malliavin calculus; fourth moment theorem; multiple stochastic integrals; multivariate normal approximation; Gaussian fields

**MSC:** 60H07; 60F25

#### **1. Introduction**

For a given real separable Hilbert space H, we write *X* = {*X*(*h*), *h* ∈ H} to indicate an isonormal Gaussian process defined on a probability space (Ω, F, P). Let {*Fn*, *n* ≥ 1} be a sequence of functionals of Gaussian fields associated with *X*. The authors in [1] discovered a central limit theorem (CLT), known as the *fourth moment theorem*, for sequences of random variables belonging to a fixed Wiener chaos.

**Theorem 1.** [Fourth moment theorem] *Let* {*Fn*, *n* ≥ 1} *be a sequence of random variables belonging to the qth* (*q* ≥ 2) *Wiener chaos with* E[*F*<sup>2</sup>*n*] = 1 *for all n* ≥ 1*. Then, Fn converges in distribution to Z if and only if* E[*F*<sup>4</sup>*n*] → 3*, where Z is a standard normal random variable.*
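The convergence in Theorem 1 can be illustrated numerically. The following sketch (our own illustration, not part of the paper) simulates a toy second-chaos sequence for which both moments in the theorem are known in closed form:

```python
import numpy as np

# F_n = (2n)^(-1/2) * sum_{i=1}^n (xi_i^2 - 1) lies in the second Wiener
# chaos, E[F_n^2] = 1, and a direct computation gives E[F_n^4] = 3 + 12/n,
# so the fourth moment converges to 3 and F_n converges to N(0, 1).
def second_chaos(n, n_samples, rng):
    xi = rng.standard_normal((n_samples, n))
    return (xi ** 2 - 1).sum(axis=1) / np.sqrt(2 * n)

rng = np.random.default_rng(0)
fourth = {n: (second_chaos(n, 200_000, rng) ** 4).mean() for n in (1, 10, 100)}
```

With the seed above, `fourth[n]` tracks 3 + 12/*n* up to Monte Carlo error, decreasing toward 3 as *n* grows.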

Such a result provides a remarkable simplification of the method of moments or cumulants. In [2], the *fourth moment theorem* is expressed in terms of the Malliavin derivative. However, the results given in [1,2] do not provide any estimates, whereas the authors in [3] find an upper bound for various distances by combining Malliavin calculus (see, e.g., [4–6]) and Stein's method for normal approximation (see, e.g., [7–9]). Moreover, the authors in [10,11] obtain optimal Berry–Esseen bounds as a further refinement of the main results proven in [3] (see, e.g., [12] for a short survey).

**Citation:** Kim, Y.-T.; Park, H.-S. Fourth Cumulant Bound of Multivariate Normal Approximation on General Functionals of Gaussian Fields. *Mathematics* **2022**, *10*, 1352. https://doi.org/10.3390/ math10081352

Academic Editors: Alexandru Agapie, Denis Enachescu, Vlad Stefan Barbu and Bogdan Iftimie

Received: 25 March 2022 Accepted: 15 April 2022 Published: 18 April 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The key step in the proof of the *fourth moment theorem* is to show the following inequality:

$$\text{Var}(\langle DF, -DL^{-1}F \rangle\_{\mathfrak{H}}) \le c\big(\mathbb{E}[F^4] - 3(\mathbb{E}[F^2])^2\big),\tag{1}$$

where *DF* is the Malliavin derivative of *F* and *L*−<sup>1</sup> is the *pseudo-inverse of the Ornstein–Uhlenbeck generator* (see Section 2). In the particular case where *F* = *Iq*(*f*), *f* ∈ H<sup>⊙*q*</sup>, with E[*F*<sup>2</sup>] = 1, the bound in (1) gives

$$d\_{Kol}(F, Z) \le \sqrt{\text{Var}(\langle DF, -DL^{-1}F \rangle\_{\mathfrak{H}})} \le \sqrt{\frac{q - 1}{3q}} \sqrt{\mathbb{E}[F^4] - 3},\tag{2}$$

where *dKol* stands for the *Kolmogorov distance*.

Other research along this line can be found in [13] for multiple Wigner integrals in a fixed order of free Wigner chaos, and in [14–16] for multi-dimensional vectors of multiple stochastic integrals such that each component belongs to a fixed order of Wiener chaos. In particular, new techniques for the proof of the *fourth moment theorem* can be found in [17–19]. In [19], the authors prove this theorem by using the asymptotic independence between blocks of multiple stochastic integrals. At this point, it is important to mention that all of these approaches deal only with random variables in a fixed chaos, and thus do not cover random variables that do not belong to a fixed chaos. For this reason, we are interested in conditions under which the property (2) holds for general random variables that are not in a fixed Wiener chaos.

In this paper, we will develop a method for finding a bound on the multivariate normal approximation of a random vector *F* for which the *fourth moment theorem* holds even when *F* is a *d*-dimensional random vector whose components are general functionals of Gaussian fields. By applying this method to a random vector whose components belong to some Wiener chaos, we derive the *fourth moment theorem* with a sharper upper bound than the previous one given in Theorem 4.3 of [19].

Differently from the *fourth moment theorem* for functionals of Gaussian fields studied so far, the findings of our research represent a further extension and refinement of the *fourth moment theorem*, in the sense that (i) they do not require the components of the involved random vector to belong to some Wiener chaos, and (ii) the constant factor multiplying the fourth cumulant may be significantly improved. The main aim of this paper is to discover under what conditions the fourth moment bound holds for vector-valued general functionals of Gaussian fields, each component of which need not belong to some Wiener chaos. In the case of vector-valued multiple integrals, the conditions of the *fourth moment theorem* are quite naturally satisfied.

On the other hand, in the case of *d* = 1, the application of the method developed here shows that, even in the case of general functionals of Gaussian fields, the *fourth moment theorem* holds without any of the conditions needed for the case *d* ≥ 2. The only necessary condition is that the fourth cumulant is non-zero. The result in the one-dimensional case differs from the result obtained by simply substituting *d* = 1 into the multi-dimensional case. For these reasons, we will see how the random vector case can be reformulated in the one-dimensional case.

Our paper is organized in the following way. Section 2 contains some basic notions of Malliavin calculus. Section 3 is devoted to developing a method for obtaining the fourth moment bound for an R*d*-valued random vector whose components are functionals of Gaussian fields. In Section 4, we prove the *fourth moment theorem* by applying the new method developed in Section 3 to vector-valued multiple stochastic integrals. In Section 5, we describe how the random vector case can be reformulated in the one-dimensional case.

#### **2. Preliminaries**

In this section, we describe some basic facts on Malliavin calculus for Gaussian processes. For a more detailed explanation of this subject, see [4,5]. Fix a real separable Hilbert space H with inner product denoted by ⟨·, ·⟩H. Let *B* = {*B*(*h*), *h* ∈ H} be an isonormal Gaussian process, that is, a centered Gaussian family of random variables such that E[*B*(*h*)*B*(*g*)] = ⟨*h*, *g*⟩H. If *Hq* is the *q*th Hermite polynomial, then the closed linear subspace of *L*<sup>2</sup>(Ω), denoted by H*q*, generated by {*Hq*(*B*(*h*)) : *h* ∈ H, ‖*h*‖H = 1} is called the *q*th *Wiener chaos* of *B*.

We define a linear isometric mapping *Iq* : H<sup>⊙*q*</sup> → H*q* by *Iq*(*h*<sup>⊗*q*</sup>) = *q*!*Hq*(*B*(*h*)), where H<sup>⊙*q*</sup> is the symmetric *q*th tensor product. It is well known that any square integrable random variable *F* ∈ *L*<sup>2</sup>(Ω, G, P), where G denotes the *σ*-field generated by *B*, admits a series expansion of multiple stochastic integrals:

$$F = \sum\_{q=0}^{\infty} I\_q(f\_q),$$

where the series converges in *L*<sup>2</sup>(Ω), and the kernels *fq* ∈ H<sup>⊙*q*</sup>, *q* ≥ 0, are uniquely determined with *f*0 = E[*F*].

Let {*ei*, *i* = 1, 2, ...} be a complete orthonormal system of the Hilbert space H. For *f* ∈ H<sup>⊙*p*</sup> and *g* ∈ H<sup>⊙*q*</sup>, the *contraction f* ⊗*r* *g* of *f* and *g*, *r* ∈ {0, 1, ... , *p* ∧ *q*}, is the element of H<sup>⊗(*p*+*q*−2*r*)</sup> defined by

$$f \otimes\_r g \quad = \sum\_{i\_1, \dots, i\_r = 1}^{\infty} \langle f, e\_{i\_1} \otimes \dots \otimes e\_{i\_r} \rangle\_{\mathfrak{H}^{\otimes r}} \otimes \langle g, e\_{i\_1} \otimes \dots \otimes e\_{i\_r} \rangle\_{\mathfrak{H}^{\otimes r}}.\tag{3}$$
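In the finite-dimensional case the contraction (3) is just an index pairing of arrays. The following sketch (our own illustration, under the assumption H = R<sup>3</sup>) realizes *f* ⊗*r* *g* with numpy:

```python
import numpy as np

# Finite-dimensional sketch of the contraction (3): take H = R^3, so that
# an element of H^{(x)p} is a p-way array.  For symmetric f and g the
# choice of the r contracted slots is immaterial; here we pair the last r
# slots of f with the first r slots of g, i.e. np.tensordot(f, g, axes=r).
def contraction(f, g, r):
    return np.tensordot(f, g, axes=r)

a = np.arange(9.0).reshape(3, 3)
f = a + a.T                # symmetric element of H^{(x)2}
g = np.eye(3)              # another symmetric element of H^{(x)2}
c0 = contraction(f, g, 0)  # element of H^{(x)4} (tensor product)
c1 = contraction(f, g, 1)  # element of H^{(x)2}
c2 = contraction(f, g, 2)  # scalar: the full inner product <f, g>
```

Contracting all slots recovers the inner product in H<sup>⊗2</sup>, matching the r = p ∧ q case of (3).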

The product formula for the multiple stochastic integrals is given below.

**Proposition 1.** *If f* ∈ H<sup>⊙*p*</sup> *and g* ∈ H<sup>⊙*q*</sup>*, then*

$$I\_p(f)I\_q(\mathbf{g}) = \sum\_{r=0}^{p \wedge q} r! \binom{p}{r} \binom{q}{r} I\_{p+q-2r}(f \otimes\_r \mathbf{g}).\tag{4}$$
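Specializing (4) to kernels of the form *f* = *h*<sup>⊗*p*</sup>, *g* = *h*<sup>⊗*q*</sup> with ‖*h*‖H = 1 reduces it to the classical product formula for probabilists' Hermite polynomials, which can be checked exactly with numpy's Hermite-e arithmetic (a check of the classical identity, our own illustration):

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import comb, factorial

# With f = h^{(x)p}, g = h^{(x)q} and ||h|| = 1, the contraction f (x)_r g
# equals h^{(x)(p+q-2r)}, and (4) becomes the Hermite identity
#   He_p(x) He_q(x) = sum_r r! C(p, r) C(q, r) He_{p+q-2r}(x).
def product_formula_sides(p, q):
    ep = np.zeros(p + 1); ep[p] = 1.0   # coefficients of He_p
    eq = np.zeros(q + 1); eq[q] = 1.0   # coefficients of He_q
    lhs = He.hermemul(ep, eq)           # He_p * He_q in the He basis
    rhs = np.zeros(p + q + 1)
    for r in range(min(p, q) + 1):
        rhs[p + q - 2 * r] += factorial(r) * comb(p, r) * comb(q, r)
    return lhs, rhs
```

For example, He<sub>1</sub>² = He<sub>2</sub> + He<sub>0</sub>, i.e., *x*² = (*x*² − 1) + 1, is the p = q = 1 case.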

We denote by S the class of smooth and cylindrical random variables *F* of the form

$$F = f(B(\varphi\_1), \dots, B(\varphi\_n)), \ n \ge 1,\tag{5}$$

where *f* ∈ C<sup>∞</sup><i>b</i>(R<sup>*n*</sup>) and *ϕi* ∈ H, *i* = 1, ··· , *n*. For these random variables, the *Malliavin derivative* of *F* with respect to *B* is the element of *L*<sup>2</sup>(Ω, H) defined as

$$DF = \sum\_{i=1}^{n} \frac{\partial f}{\partial x\_i} (B(\varphi\_1), \ldots, B(\varphi\_n))\, \varphi\_i.\tag{6}$$

Let D<sup>*q*,*p*</sup> be the closure of the class S of smooth random variables with respect to the norm

$$\|F\|\_{q,p}^p = \mathbb{E}[|F|^p] + \sum\_{k=1}^q \mathbb{E}[\|D^k F\|\_{\mathfrak{H}^{\otimes k}}^p].$$

Let *δ* be the adjoint of the Malliavin derivative *D*. The domain of *δ*, denoted by Dom(*δ*), is composed of those elements *<sup>u</sup>* <sup>∈</sup> <sup>L</sup>2(Ω; <sup>H</sup>) such that there exists a constant *<sup>C</sup>* satisfying

$$|\mathbb{E}[\langle DF, u\rangle\_{\mathfrak{H}}]| \le C(\mathbb{E}[|F|^2])^{1/2} \text{ for all } F \in \mathbb{D}^{1,2}.$$

If *<sup>u</sup>* <sup>∈</sup> Dom(*δ*), then *<sup>δ</sup>*(*u*) is an element of <sup>L</sup>2(Ω) defined as the following duality formula, called an integration by parts,

$$\mathbb{E}[F\delta(u)] = \mathbb{E}[\langle DF, u \rangle\_{\mathfrak{H}}] \text{ for all } F \in \mathbb{D}^{1,2}.$$

Recall that any square integrable random variable *F* can be expanded as *F* = E[*F*] + ∑<sup>∞</sup>*q*=1 *Jq*(*F*), where *Jq*, *q* = 0, 1, 2, ..., is the projection of *F* onto H*q*. We say that *F* belongs to *Dom*(*L*) if ∑<sup>∞</sup>*q*=1 *q*<sup>2</sup>E[*Jq*(*F*)<sup>2</sup>] < ∞. For such a random variable *F*, we define the operator *L* = −∑<sup>∞</sup>*q*=0 *q Jq*, which coincides with the *infinitesimal generator of the Ornstein–Uhlenbeck semigroup*. Then, *F* ∈ *Dom*(*L*) if and only if *F* ∈ D<sup>1,2</sup> and *DF* ∈ *Dom*(*δ*), and, in this case, *δDF* = −*LF*. We also define the operator *L*<sup>−1</sup>, called the *pseudo-inverse* of *L*, as *L*<sup>−1</sup>*F* = −∑<sup>∞</sup>*q*=1 (1/*q*) *Jq*(*F*). Then, *L*<sup>−1</sup> is an operator with values in D<sup>2,2</sup>, and *LL*<sup>−1</sup>*F* = *F* − E[*F*] for all *F* ∈ *L*<sup>2</sup>(Ω).
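On the level of chaos projections, *L* and *L*<sup>−1</sup> are coefficient-wise multipliers, which makes the identity *LL*<sup>−1</sup>*F* = *F* − E[*F*] mechanical. A minimal sketch (our own illustration, using scalar stand-ins for the projections *Jq F*):

```python
# Sketch of L and L^{-1} acting through the chaos decomposition: a random
# variable is represented by the list [J_0 F, J_1 F, J_2 F, ...] (scalar
# stand-ins for the projections).  L multiplies the q-th entry by -q, and
# the pseudo-inverse L^{-1} kills the constant and multiplies by -1/q.
def apply_L(chaos):
    return [-q * c for q, c in enumerate(chaos)]

def apply_L_inv(chaos):
    return [0.0] + [-c / q for q, c in enumerate(chaos) if q >= 1]

F = [2.0, 1.5, -0.5, 3.0]         # J_0 F = E[F] = 2.0
LLinvF = apply_L(apply_L_inv(F))  # = F with the mean removed
```

The composition removes the constant projection and leaves every other projection untouched, which is exactly *F* − E[*F*].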

#### **3. Main Results**

In this section, we will find a sufficient condition on the fourth moment bound for a vector-valued random variable whose components are functionals of Gaussian fields. It is important to note that these functionals of Gaussian fields do not necessarily belong to some Wiener chaos. The next lemma will play a fundamental role in this paper.

**Lemma 1.** *Suppose that F* ∈ D<sup>1,2</sup> *and G* ∈ *L*<sup>2</sup>(Ω)*. Then, we have that L*<sup>−1</sup>*G* ∈ D<sup>2,2</sup> *and*

$$\mathbb{E}[FG] = \mathbb{E}[F]\mathbb{E}[G] + \mathbb{E}[\langle -DL^{-1}G, DF \rangle\_{\mathcal{H}}].$$

A multi-index is a vector of non-negative integers of the form *α* = (*α*1, ... , *αd*). Then, we write

$$|\alpha| = \sum\_{j=1}^{d} \alpha\_{j}, \quad \partial\_{j} = \frac{\partial}{\partial x\_{j}}, \quad \partial^{\alpha} = \partial\_{1}^{\alpha\_{1}} \cdots \partial\_{d}^{\alpha\_{d}}, \quad x^{\alpha} = \prod\_{i=1}^{d} x\_{i}^{\alpha\_{i}},$$

where *x* = (*x*1,..., *xd*). By convention, we set 0<sup>0</sup> = 1.

For the rest of this section, we fix a random vector *F* = (*F*1,..., *Fd*), *d* ≥ 2.

**Definition 1.** *Assume that* E[|*F*<sup>*α*</sup>|] < ∞ *for some α* ∈ N<sup>*d*</sup> \ {0}*. The joint cumulant of order* |*α*| *of F is defined by*

$$\kappa\_{\alpha}(F) = (-i)^{|\alpha|}\, \partial^{\alpha}\Big|\_{t=0} \log \phi\_F(t),$$

*where* $\phi\_F(t) = \mathbb{E}\big[e^{i\langle t, F\rangle\_{\mathbb{R}^d}}\big]$*, t* ∈ R<sup>*d*</sup>*, is the characteristic function of F.*

Suppose that *Fi* ∈ D<sup>1,2</sup> for each *i* = 1, ... , *d*. Let *l*1, *l*2, ... be a sequence taking values in {*e*1, ... , *ed*}, where *ei* is the multi-index of length *d* with 1 in the *i*th coordinate and 0 elsewhere, i.e.,

$$e\_i = (0, \dots, 0, 1, 0, \dots, 0).$$

If $l\_1 = e\_i$, then $\Gamma^\*\_{l\_1}(F) = F\_i$. Suppose that $\Gamma^\*\_{l\_1,\ldots,l\_k}(F)$ is a well-defined random variable in *L*<sup>2</sup>(Ω). We define

$$\Gamma\_{l\_1,\ldots,l\_{k+1}}^\*(F) = \langle -DL^{-1}F^{l\_{k+1}}, D\Gamma\_{l\_1,\ldots,l\_k}^\*(F) \rangle\_{\mathfrak{H}},$$

where $F^{e\_j} = F\_j$.

For the multivariate Gamma operator $\Gamma\_{l\_1,\ldots,l\_k}(F)$, see Definition 4.2 in [14]. For simplicity, we will frequently write $\Gamma^\*\_{i\_1,\ldots,i\_k}(F)$ and $\Gamma\_{i\_1,\ldots,i\_k}(F)$ instead of $\Gamma^\*\_{e\_{i\_1},\ldots,e\_{i\_k}}(F)$ and $\Gamma\_{e\_{i\_1},\ldots,e\_{i\_k}}(F)$, respectively.

Using the Gamma operators Γ*l*1,...,*lk* of *F*, we can state a formula for the cumulants of any random vector *F* (see, e.g., [14,20]).

**Lemma 2** (Noreddine and Nourdin)**.** *Let α* = (*α*1, ... , *αd*) ∈ N<sup>*d*</sup> \ {0} *be a d-dimensional multi-index with the unique decomposition* {*l*1, ... , *l*|*α*|}*. If Fi* ∈ D<sup>|*α*|,2<sup>|*α*|</sup></sup> *for* 1 ≤ *i* ≤ *d, then*

$$\kappa\_{\alpha}(F) = \sum\_{\sigma} \mathbb{E}\left[\Gamma\_{l\_1, l\_{\sigma(2)}, \ldots, l\_{\sigma(|\alpha|)}}(F)\right],\tag{7}$$

*where the sum* ∑*<sup>σ</sup> is taken over all permutations σ of the set* {2, 3, . . . , |*α*|}*.*

**Remark 1.** *Obviously, the above lemma can be expressed in the one-dimensional case as follows: Let m* <sup>≥</sup> <sup>1</sup> *be an integer, and suppose that F* <sup>∈</sup> <sup>D</sup>*m*,2*<sup>m</sup> . Then*

$$\kappa\_{m+1}(F) = m! \mathbb{E}[\Gamma\_m(F)].\tag{8}$$

**Remark 2.** *Successive applications of Lemma 1 yield that*

$$\begin{aligned} \mathbb{E}[\Gamma\_{i,i,j,j}(F)] &= \frac{1}{2}\mathbb{E}[\langle DF\_{j}^{2}, -DL^{-1}\Gamma\_{i,i}(F)\rangle\_{\mathfrak{H}}] \\ &= \frac{1}{2}\Big\{ \mathbb{E}[F\_{j}^{2}\Gamma\_{i,i}(F)] - \mathbb{E}[F\_{j}^{2}]\mathbb{E}[\Gamma\_{i,i}(F)] \Big\} \\ &= \frac{1}{2}\Big\{ \mathbb{E}[F\_{i}^{2}F\_{j}^{2}] - 2\mathbb{E}[F\_{i}F\_{j}\Gamma\_{i,j}(F)] - \mathbb{E}[F\_{i}^{2}]\mathbb{E}[F\_{j}^{2}] \Big\} \\ &= \frac{1}{2}\Big\{ \mathbb{E}[F\_{i}^{2}F\_{j}^{2}] - 2(\mathbb{E}[F\_{i}F\_{j}])^{2} - \mathbb{E}[F\_{i}^{2}]\mathbb{E}[F\_{j}^{2}] \Big\} \\ &\quad - \Big( \mathbb{E}[\Gamma\_{i,j,i,j}(F)] + \mathbb{E}[\Gamma\_{i,j,j,i}(F)] \Big). \end{aligned} \tag{9}$$

*Equation (9) gives that*

$$\begin{aligned} &\mathbb{E}[\Gamma\_{i,i,j,j}(F)] + \mathbb{E}[\Gamma\_{i,j,i,j}(F)] + \mathbb{E}[\Gamma\_{i,j,j,i}(F)] \\ &= \frac{1}{2}\left\{ \mathbb{E}[F\_i^2 F\_j^2] - 2(\mathbb{E}[F\_i F\_j])^2 - \mathbb{E}[F\_i^2]\mathbb{E}[F\_j^2] \right\}. \end{aligned} \tag{10}$$
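The bracket on the right-hand side of (10) is exactly the joint fourth cumulant E[*Fi*²*Fj*²] − 2(E[*FiFj*])² − E[*Fi*²]E[*Fj*²], which vanishes when the pair is jointly Gaussian, since Isserlis' theorem gives E[*Zi*²*Zj*²] = *σii σjj* + 2*σij*². A quick Monte Carlo check of this vanishing (our own illustration, with an arbitrary 2 × 2 covariance):

```python
import numpy as np

# For a centered Gaussian pair, the joint fourth cumulant
#   E[Z_i^2 Z_j^2] - 2(E[Z_i Z_j])^2 - E[Z_i^2] E[Z_j^2]
# is zero, since Isserlis' theorem gives
#   E[Z_i^2 Z_j^2] = s_ii s_jj + 2 s_ij^2.
rng = np.random.default_rng(7)
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
Z = rng.multivariate_normal([0.0, 0.0], Sigma, size=500_000)
kappa = ((Z[:, 0] ** 2 * Z[:, 1] ** 2).mean()
         - 2 * (Z[:, 0] * Z[:, 1]).mean() ** 2
         - (Z[:, 0] ** 2).mean() * (Z[:, 1] ** 2).mean())
# kappa is ~0 up to Monte Carlo error
```

This is the mechanism behind the bounds below: the distance to the Gaussian law is controlled by how far this cumulant is from zero.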

For the forthcoming theorem, first we define a set:

$$\begin{split} \mathfrak{C}\_{(d)}(F) = \bigg\{ c \in \mathbb{R} :\ &\sum\_{i,j=1}^{d} \sum\_{l\_1+l\_2+l\_3=e\_i+2e\_j} \mathbb{E}[\Gamma\_{i,l\_1,l\_2,l\_3}^\*(F)] \\ &\ge c \sum\_{i,j=1}^{d} \sum\_{l\_1+l\_2+l\_3=e\_i+2e\_j} \mathbb{E}[\Gamma\_{i,l\_1,l\_2,l\_3}(F)] \bigg\}. \end{split}$$

**Theorem 2.** *Let F* = (*F*1, ... , *Fd*)*, d* ≥ 2*, with Fi* ∈ D<sup>3,2<sup>3</sup></sup> *and* E[*Fi*] = 0 *for i* = 1, ... , *d, and let Z be a centered normal random vector with covariance matrix* Σ = (*σij*)1≤*i*,*j*≤*d, where σij* = E[*FiFj*]*. Suppose that, for some constant c* ∈ R *and all* 1 ≤ *i*, *j* ≤ *d,*

$$\begin{array}{ll} (\alpha) & \mathbb{E}[\Gamma\_{i,i}^{\*}(F)\Gamma\_{j,j}^{\*}(F)] \geq \mathbb{E}[\Gamma\_{i,i}^{\*}(F)]\mathbb{E}[\Gamma\_{j,j}^{\*}(F)],\\ (\beta) & \mathbb{E}[\Gamma\_{i,j}^{\*}(F)\Gamma\_{j,i}^{\*}(F)] \geq (\mathbb{E}[\Gamma\_{i,j}^{\*}(F)])^{2},\\ (\gamma) & c \in \mathfrak{C}\_{(d)}(F). \end{array}$$

*Assume that* <sup>Σ</sup> *is invertible. We have that, for any Lipschitz function h* : <sup>R</sup>*<sup>d</sup>* <sup>→</sup> <sup>R</sup>*,*

$$\begin{aligned} &|\mathbb{E}[h(F)] - \mathbb{E}[h(Z)]| \\ \leq\ & \sqrt{d}\,\|\Sigma\|\_{op}^{1/2}\|\Sigma^{-1}\|\_{op}\|h\|\_{Lip} \sqrt{\left(\frac{2-c}{2}\right) \sum\_{i,j=1}^{d} \kappa\_{e\_i e\_j e\_i e\_j}(F)} \end{aligned} \tag{11}$$

*or, as another expression,*

$$\begin{aligned} &|\mathbb{E}[h(F)] - \mathbb{E}[h(Z)]| \\ \leq\ & \sqrt{d}\,\|\Sigma\|\_{op}^{1/2}\|\Sigma^{-1}\|\_{op}\|h\|\_{Lip} \sqrt{\left(\frac{2-c}{2}\right)\left(\mathbb{E}[\|F\|\_{\mathbb{R}^d}^4] - \mathbb{E}[\|Z\|\_{\mathbb{R}^d}^4]\right)}, \end{aligned} \tag{12}$$

*where* ‖·‖*op and* ‖·‖<sub>R<sup>*d*</sup></sub> *denote the operator norm of a matrix and the Euclidean norm in* R<sup>*d*</sup>*, respectively, and*

$$||h||\_{Lip} = \sup\_{\mathbf{x}, \mathbf{y} \in \mathbb{R}^d} \frac{|h(\mathbf{x}) - h(\mathbf{y})|}{||\mathbf{x} - \mathbf{y}||\_{\mathbb{R}^d}}.$$

**Proof.** Recall that, for a Lipschitz function *<sup>h</sup>* : <sup>R</sup>*<sup>d</sup>* <sup>→</sup> <sup>R</sup>, Theorem 6.1.1 in [4] shows that

$$\begin{aligned} &|\mathbb{E}[h(F)] - \mathbb{E}[h(Z)]| \\ \leq\ & \sqrt{d}\,\|\Sigma\|\_{op}^{1/2}\|\Sigma^{-1}\|\_{op}\|h\|\_{Lip}\sqrt{\sum\_{i,j=1}^{d} \mathbb{E}\left[\left(\sigma\_{ij} - \Gamma\_{i,j}(F)\right)^{2}\right]}. \end{aligned} \tag{13}$$

Since Γ∗ *<sup>i</sup>*,*<sup>j</sup>* = Γ*j*,*<sup>i</sup>* for 1 ≤ *i*, *j* ≤ *d*, the right-hand side of (13) can be expressed as

$$\begin{aligned} &|\mathbb{E}[h(F)] - \mathbb{E}[h(Z)]| \\ \leq\ & \sqrt{d}\,\|\Sigma\|\_{op}^{1/2}\|\Sigma^{-1}\|\_{op}\|h\|\_{Lip}\sqrt{\sum\_{i,j=1}^d \mathbb{E}\left[\left(\sigma\_{ij} - \Gamma\_{i,j}^\*(F)\right)^2\right]}. \end{aligned}$$

By the definition of the operator Γ∗, we have that, for 1 ≤ *i*, *j* ≤ *d*,

$$\begin{aligned} \mathbb{E}[\Gamma\_{i,j}^{\*}(F)^{2}] &= \mathbb{E}[\Gamma\_{i,j}^{\*}(F)\langle -DL^{-1}F\_{j}, DF\_{i}\rangle\_{\mathfrak{H}}] \\ &= \mathbb{E}[\langle -DL^{-1}F\_{j}, D(F\_{i}\Gamma\_{i,j}^{\*}(F))\rangle\_{\mathfrak{H}}] - \mathbb{E}[F\_{i}\langle -DL^{-1}F\_{j}, D\Gamma\_{i,j}^{\*}(F)\rangle\_{\mathfrak{H}}] \\ &= \mathbb{E}[F\_{i}F\_{j}\Gamma\_{i,j}^{\*}(F)] - \mathbb{E}[\Gamma\_{i,j,j,i}^{\*}(F)]. \end{aligned} \tag{14}$$

For *a* + *b* + *c* = 1, we write, using Lemma 1 and the definition of Γ∗, the first term in (14) as follows:

$$\begin{aligned} \mathbb{E}[F\_{i}F\_{j}\Gamma\_{i,j}^{\*}(F)] &= a\mathbb{E}[F\_{i}F\_{j}\langle -DL^{-1}F\_{j}, DF\_{i}\rangle\_{\mathfrak{H}}] \\ &\quad + b\mathbb{E}[\langle -DL^{-1}F\_{i}, D(F\_{j}\Gamma\_{i,j}^{\*}(F))\rangle\_{\mathfrak{H}}] \\ &\quad + c\mathbb{E}[\langle -DL^{-1}F\_{j}, D(F\_{i}\Gamma\_{i,j}^{\*}(F))\rangle\_{\mathfrak{H}}] \\ &:= A\_{1} + A\_{2} + A\_{3}. \end{aligned}$$

It is obvious that

$$\begin{aligned} A\_1 &= a\mathbb{E}[\langle -DL^{-1}F\_{j}, D(F\_{i}^{2}F\_{j})\rangle\_{\mathfrak{H}}] - a\mathbb{E}[F\_{i}\langle -DL^{-1}F\_{j}, D(F\_{i}F\_{j})\rangle\_{\mathfrak{H}}] \\ &= a\mathbb{E}[F\_{i}^2 F\_{j}^2] - a\mathbb{E}[F\_{i}^2 \Gamma\_{j,j}^\*(F)] - a\mathbb{E}[F\_{i}F\_{j} \Gamma\_{i,j}^\*(F)] \\ &= a\mathbb{E}[F\_{i}^2 F\_{j}^2] - a\mathbb{E}[\Gamma\_{i,i}^\*(F)\Gamma\_{j,j}^\*(F)] - a\mathbb{E}[\Gamma\_{j,j,i,i}^\*(F)] - A\_1. \end{aligned} \tag{15}$$

The above Equation (15) gives

$$A\_1 = \frac{a}{2} \left\{ \mathbb{E} [F\_i^2 F\_j^2] - \mathbb{E} [\Gamma\_{i,i}^\*(F) \Gamma\_{j,j}^\*(F)] - \mathbb{E} [\Gamma\_{j,j,i,i}^\*(F)] \right\}. \tag{16}$$

Also using Lemma 1 and the definition of Γ∗, the terms *A*<sup>2</sup> and *A*<sup>3</sup> can be expressed as

$$A\_2 = b\mathbb{E}[\Gamma^\*\_{i,j,i,j}(F)] + b\mathbb{E}[\Gamma^\*\_{i,j}(F)\Gamma^\*\_{j,i}(F)],\tag{17}$$

$$A\_3 = c\mathbb{E}[\Gamma\_{i,j,j,i}^\*(F)] + c\mathbb{E}[\Gamma\_{i,j}^\*(F)^2]. \tag{18}$$

Combining (16)–(18), we obtain, together with (14), that

$$\begin{aligned} \mathbb{E}[\Gamma\_{i,j}^{\*}(F)^{2}] &= \frac{a}{2(1-c)} \left\{ \mathbb{E}[F\_{i}^{2}F\_{j}^{2}] - \mathbb{E}[\Gamma\_{i,i}^{\*}(F)\Gamma\_{j,j}^{\*}(F)] - \mathbb{E}[\Gamma\_{j,j,i,i}^{\*}(F)] \right\} \\ &\quad + \frac{b}{1-c} \left\{ \mathbb{E}[\Gamma\_{i,j,i,j}^{\*}(F)] + \mathbb{E}[\Gamma\_{i,j}^{\*}(F)\Gamma\_{j,i}^{\*}(F)] \right\} \\ &\quad + \frac{c-1}{1-c} \mathbb{E}[\Gamma\_{i,j,j,i}^{\*}(F)]. \end{aligned} \tag{19}$$

Now, we choose *a*, *b*, and *c* such that *a* + *b* + *c* = 1 and

$$-\frac{a}{2(1-c)} = \frac{b}{1-c} = \frac{c-1}{1-c}.$$

Obviously, we may take *a* = 1, *b* = −1/2, and *c* = 1/2. With this choice, the assumptions (*α*) and (*β*) yield that (19) can be bounded as

$$\begin{aligned} \mathbb{E}[\Gamma\_{i,j}^{\*}(F)^{2}] &\leq \mathbb{E}[F\_{i}^{2}F\_{j}^{2}] - \mathbb{E}[\Gamma\_{j,j,i,i}^{\*}(F)] - \mathbb{E}[\Gamma\_{i,j,i,j}^{\*}(F)] - \mathbb{E}[\Gamma\_{i,j,j,i}^{\*}(F)] \\ &\quad - \mathbb{E}[\Gamma\_{i,i}^{\*}(F)]\mathbb{E}[\Gamma\_{j,j}^{\*}(F)] - (\mathbb{E}[\Gamma\_{i,j}^{\*}(F)])^{2}. \end{aligned} \tag{20}$$

Therefore, Inequality (20) and the assumption (*γ*), namely *c* ∈ C(*d*)(*F*), prove that

$$\begin{split} \sum\_{i,j=1}^{d} \mathbb{E}[(\sigma\_{ij} - \Gamma\_{i,j}^{\*}(F))^2] &\leq \sum\_{i,j=1}^{d} \bigg\{ \mathbb{E}[F\_i^2 F\_j^2] - \sum\_{l\_1+l\_2+l\_3=e\_i+2e\_j} \mathbb{E}[\Gamma\_{i,l\_1,l\_2,l\_3}^{\*}(F)] \\ &\qquad - 2(\mathbb{E}[F\_i F\_j])^2 - \mathbb{E}[F\_i^2]\mathbb{E}[F\_j^2] \bigg\} \\ &\leq \sum\_{i,j=1}^{d} \bigg\{ \mathbb{E}[F\_i^2 F\_j^2] - c \sum\_{l\_1+l\_2+l\_3=e\_i+2e\_j} \mathbb{E}[\Gamma\_{i,l\_1,l\_2,l\_3}(F)] \\ &\qquad - 2(\mathbb{E}[F\_i F\_j])^2 - \mathbb{E}[F\_i^2]\mathbb{E}[F\_j^2] \bigg\}. \end{split} \tag{21}$$

Applying (10) in Remark 2 (or Lemma 2) to the right-hand side of (21), we have, together with the assumptions (*α*) and (*β*), that

$$\begin{split} \sum\_{i,j=1}^{d} \mathbb{E}[(\sigma\_{ij} - \Gamma\_{i,j}^{\*}(F))^2] &\leq \sum\_{i,j=1}^{d} \left\{ \mathbb{E}[F\_i^2 F\_j^2] - \frac{c}{2}\mathbb{E}[F\_i^2 F\_j^2] + (c-2)(\mathbb{E}[F\_i F\_j])^2 + \frac{c-2}{2}\mathbb{E}[F\_i^2]\mathbb{E}[F\_j^2] \right\} \\ &= \left(\frac{2-c}{2}\right) \sum\_{i,j=1}^{d} \left( \mathbb{E}[F\_i^2 F\_j^2] - 2(\mathbb{E}[F\_i F\_j])^2 - \mathbb{E}[F\_i^2]\mathbb{E}[F\_j^2] \right) \\ &= \left(\frac{2-c}{2}\right) \sum\_{i,j=1}^{d} \kappa\_{e\_i e\_j e\_i e\_j}(F). \end{split} \tag{22}$$

Inequality (22) proves the desired conclusion (11). Since $\mathbb{E}[Z\_i^2 Z\_j^2] = 2(\mathbb{E}[Z\_iZ\_j])^2 + \mathbb{E}[Z\_i^2]\mathbb{E}[Z\_j^2]$, the identity $\mathbb{E}[\|Z\|\_{\mathbb{R}^d}^4] = \sum\_{i,j=1}^{d}(2\sigma\_{ij}^2 + \sigma\_{ii}\sigma\_{jj})$ holds, which gives the alternative expression (12). Hence, the proof of this theorem is completed.
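The Gaussian moment identity used in this last step can itself be checked numerically; the following Monte Carlo sketch (our own illustration, with an arbitrary 3 × 3 covariance) compares the two sides:

```python
import numpy as np

# Check of E||Z||^4 = sum_{i,j} (2 s_ij^2 + s_ii s_jj) for Z ~ N(0, Sigma),
# the identity that turns the cumulant bound (11) into the moment bound (12).
rng = np.random.default_rng(11)
Sigma = np.array([[1.0, 0.3, -0.2],
                  [0.3, 2.0, 0.5],
                  [-0.2, 0.5, 1.5]])
Z = rng.multivariate_normal(np.zeros(3), Sigma, size=400_000)
mc = (((Z ** 2).sum(axis=1)) ** 2).mean()  # Monte Carlo estimate of E||Z||^4
exact = (2 * Sigma ** 2 + np.outer(np.diag(Sigma), np.diag(Sigma))).sum()
```

For this covariance, `exact` equals 36.27, and the Monte Carlo estimate agrees up to sampling error.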

**Remark 3.** *Our techniques do not require the components of a random vector F* = (*F*1, ... , *Fd*) *to belong to a fixed Wiener chaos. Since the assumptions* (*α*)*,* (*β*)*, and* (*γ*) *are satisfied in the case of a random vector whose entries are elements of some Wiener chaos, our result is an extension of Theorem 4.3 in [19]. This fact makes it possible to estimate how restrictive the assumptions given in Theorem 2 are in practice. In addition, for this random vector, the constant of the estimate in Theorem 4.3 in [19] corresponds to c* = 0 *in (12).*

#### **4. Vector-Valued Multiple Stochastic Integrals**

In this section, we consider a special case of the previous result such that *F* is a vector-valued multiple stochastic integral. First, for an explicit expression of Γ∗, we introduce the combinatorial constants

$$\beta^\*\_{q\_{i\_1},\ldots, q\_{i\_a}}(r\_1,\ldots,r\_a)$$

recursively defined by the relation

$$
\beta^\*\_{q\_{i\_1}, q\_{i\_2}}(r\_2) = q\_{i\_2}(r\_2 - 1)! \binom{q\_{i\_1} - 1}{r\_2 - 1} \binom{q\_{i\_2} - 1}{r\_2 - 1}
$$

and for any *a* ≥ 3,

$$\begin{aligned} &\beta^\*\_{q\_{i\_1},\ldots, q\_{i\_a}}(r\_2,\ldots, r\_a) \\ &= \beta^\*\_{q\_{i\_1},\ldots, q\_{i\_{a-1}}}(r\_2,\ldots, r\_{a-1})\,(q\_{i\_1}+\cdots+q\_{i\_{a-1}}-2r\_2-\cdots-2r\_{a-1})\,(r\_a-1)! \\ &\quad \times \binom{q\_{i\_1}+\cdots+q\_{i\_{a-1}}-2r\_2-\cdots-2r\_{a-1}-1}{r\_a-1}\binom{q\_{i\_a}-1}{r\_a-1}. \end{aligned}$$

For an explicit expression of Γ, we use the notations

$$
\beta\_{q\_{i\_1}, q\_{i\_2}}(r\_2) = q\_{i\_2}(r\_2 - 1)! \binom{q\_{i\_1} - 1}{r\_2 - 1} \binom{q\_{i\_2} - 1}{r\_2 - 1}
$$

and

$$\begin{aligned} &\beta\_{q\_{i\_1},\ldots,q\_{i\_a}}(r\_2,\ldots,r\_a) \\ &= \beta\_{q\_{i\_1},\ldots,q\_{i\_{a-1}}}(r\_2,\ldots,r\_{a-1})\,q\_{i\_a}(r\_a-1)! \\ &\quad \times \binom{q\_{i\_1}+\cdots+q\_{i\_{a-1}}-2r\_2-\cdots-2r\_{a-1}-1}{r\_a-1}\binom{q\_{i\_a}-1}{r\_a-1} \text{ for } a \ge 3. \end{aligned}$$
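The two recursions can be transcribed directly into code; the sketch below (our own naming, with tuples for (*q*<sub>*i*1</sub>, ..., *q*<sub>*ia*</sub>) and (*r*2, ..., *ra*)) computes both families of constants:

```python
from math import comb, factorial

# Direct transcription of the recursive combinatorial constants: beta and
# beta* share the same two-index base case; in the recursion, beta
# multiplies by q_{i_a}, while beta* multiplies instead by
# q_{i_1}+...+q_{i_{a-1}} - 2r_2 - ... - 2r_{a-1}.
def beta(qs, rs, starred=False):
    if len(qs) == 2:
        (q1, q2), r2 = qs, rs[0]
        return (q2 * factorial(r2 - 1)
                * comb(q1 - 1, r2 - 1) * comb(q2 - 1, r2 - 1))
    head = beta(qs[:-1], rs[:-1], starred)
    m = sum(qs[:-1]) - 2 * sum(rs[:-1])  # q_{i_1}+...+q_{i_{a-1}} - 2(r_2+...+r_{a-1})
    qa, ra = qs[-1], rs[-1]
    return (head * (m if starred else qa) * factorial(ra - 1)
            * comb(m - 1, ra - 1) * comb(qa - 1, ra - 1))
```

For instance, the base case gives β<sub>2,2</sub>(1) = 2 and β<sub>3,3</sub>(2) = 12.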

**Theorem 3.** *Fix d* ≥ 2*. Let qi* ≥ 2*, i* = 1, ... , *d, be positive integers, and let F be a random vector*

$$F = (F\_1, \ldots, F\_d) = (I\_{q\_1}(f\_{q\_1}), \ldots, I\_{q\_d}(f\_{q\_d})),$$

*where fqi* <sup>∈</sup> <sup>H</sup>*qi for <sup>i</sup>* <sup>=</sup> 1, ... , *d. Let <sup>Z</sup> be a centered multivariate normal random variable with the covariance* <sup>Σ</sup> = (*σij*)1≤*i*,*j*≤*d, where <sup>σ</sup>ij* <sup>=</sup> <sup>E</sup>[*FiFj*]*. For any Lipschitz function <sup>h</sup>* : <sup>R</sup>*<sup>d</sup>* <sup>→</sup> <sup>R</sup>*, it holds that*

$$\begin{split} &|\mathbb{E}[h(F)] - \mathbb{E}[h(Z)]| \\ \leq\ & \sqrt{\frac{2-c}{2}} \sqrt{d}\,\|\Sigma\|\_{op}^{1/2}\|\Sigma^{-1}\|\_{op}\|h\|\_{Lip} \sqrt{\sum\_{i,j=1}^{d} \kappa\_{e\_i e\_j e\_i e\_j}(F)} \end{split} \tag{23}$$

*or*

$$\begin{split} &|\mathbb{E}[h(F)] - \mathbb{E}[h(Z)]| \\ \leq\ & \sqrt{\frac{2-c}{2}} \sqrt{d}\,\|\Sigma\|\_{op}^{1/2}\|\Sigma^{-1}\|\_{op}\|h\|\_{Lip} \sqrt{\mathbb{E}[\|F\|\_{\mathbb{R}^d}^4] - \mathbb{E}[\|Z\|\_{\mathbb{R}^d}^4]}, \end{split} \tag{24}$$

*where the constant c is given by*

$$c = \frac{1}{\max\_{1 \le i \le d} q\_i}.$$

*Moreover, if q*1 = ··· = *qd* = *q, then c can be taken as*

$$
c = \frac{2}{q}.\tag{25}
$$

**Proof.** It is sufficient to prove that *F* satisfies the assumptions (*α*), (*β*), and (*γ*) in Theorem 2. For the condition (*α*): By the definition of Γ∗, we have that

$$\begin{aligned} \Gamma\_{ii}^\*(F)\Gamma\_{jj}^\*(F) &= q\_i q\_j \sum\_{r\_1=1}^{q\_i} \sum\_{r\_2=1}^{q\_j} (r\_1 - 1)!(r\_2 - 1)! \binom{q\_i - 1}{r\_1 - 1}^2 \binom{q\_j - 1}{r\_2 - 1}^2 \\ &\quad \times I\_{2q\_i - 2r\_1}(f\_{q\_i} \widetilde{\otimes}\_{r\_1} f\_{q\_i})\, I\_{2q\_j - 2r\_2}(f\_{q\_j} \widetilde{\otimes}\_{r\_2} f\_{q\_j}), \end{aligned}$$

which yields

$$\begin{split} \mathbb{E}[\Gamma\_{ii}^{\*}(F)\Gamma\_{jj}^{\*}(F)] &= q\_{i}q\_{j} \sum\_{r=1}^{q\_{i}} (r-1)!(q\_{j}-q\_{i}+r-1)! \binom{q\_{i}-1}{r-1}^{2} \binom{q\_{j}-1}{q\_{j}-q\_{i}+r-1}^{2} \\ &\quad \times (2q\_{i}-2r)!\, \langle f\_{q\_{i}}\widetilde{\otimes}\_{r}f\_{q\_{i}}, f\_{q\_{j}}\widetilde{\otimes}\_{q\_{j}-q\_{i}+r}f\_{q\_{j}} \rangle\_{\mathfrak{H}^{\otimes (2q\_{i}-2r)}} \\ &= q\_{i}!q\_{j}!\, (f\_{q\_{i}}\otimes\_{q\_{i}}f\_{q\_{i}})(f\_{q\_{j}}\otimes\_{q\_{j}}f\_{q\_{j}}) \\ &\quad + q\_{i}q\_{j} \sum\_{r=1}^{q\_{i}-1} (r-1)!(q\_{j}-q\_{i}+r-1)! \binom{q\_{i}-1}{r-1}^{2} \binom{q\_{j}-1}{q\_{j}-q\_{i}+r-1}^{2} \\ &\quad \times (2q\_{i}-2r)!\, \langle f\_{q\_{i}}\widetilde{\otimes}\_{r}f\_{q\_{i}}, f\_{q\_{j}}\widetilde{\otimes}\_{q\_{j}-q\_{i}+r}f\_{q\_{j}} \rangle\_{\mathfrak{H}^{\otimes (2q\_{i}-2r)}}, \end{split} \tag{26}$$

where we assume, without loss of generality, that *qi* ≤ *qj*.

On the other hand,

$$\mathbb{E}[\Gamma\_{ii}^\*(F)]\mathbb{E}[\Gamma\_{jj}^\*(F)] = q\_i!(f\_{q\_i}\otimes\_{q\_i}f\_{q\_i}) \times q\_j!(f\_{q\_j}\otimes\_{q\_j}f\_{q\_j}).\tag{27}$$

Denote by ℓ(**a**) the length of a vector **a**. To prove (*α*), it suffices to show that, for every 1 ≤ *i*, *j* ≤ *d*, the inner products appearing in (26) satisfy

$$\langle f\_{q\_i} \widetilde{\otimes}\_r f\_{q\_i}, f\_{q\_j} \widetilde{\otimes}\_{q\_j - q\_i + r} f\_{q\_j} \rangle\_{\mathfrak{H}^{\otimes (2q\_i - 2r)}} \ge 0.$$

For this, it is sufficient, from the symmetry of *fqi* , *i* = 1, ... , *d*, and symmetrization of contractions, to show that, for every 1 ≤ *i*, *j* ≤ *d*,

$$\begin{split} & \int\_{Z^{q\_i+q\_j}} f\_{q\_i}(\mathbf{u}\_1, \mathbf{w}) f\_{q\_i}(\mathbf{u}\_2, \mathbf{w}) f\_{q\_j}(\mathbf{u}\_1, \mathbf{v}) \\ & \quad \times f\_{q\_j}(\mathbf{u}\_2, \mathbf{v})\, \mu^{\otimes(q\_i+q\_j)}(d\mathbf{u}\_1, d\mathbf{u}\_2, d\mathbf{v}, d\mathbf{w}) \ge 0, \end{split} \tag{28}$$

where $\ell(\mathbf{w}) = r$ and $\ell(\mathbf{u}\_1) + \ell(\mathbf{u}\_2) = 2q\_i - 2r$. Since $\ell(\mathbf{u}\_1) = \ell(\mathbf{u}\_2) = q\_i - r$, the integral in (28) can be expressed as

$$\begin{split} & \int\_{Z^{q\_j-q\_i+2r}} (f\_{q\_i} \otimes\_{\ell(\mathbf{u}\_1)} f\_{q\_j})(\mathbf{w}, \mathbf{v})\, (f\_{q\_i} \otimes\_{\ell(\mathbf{u}\_2)} f\_{q\_j})(\mathbf{w}, \mathbf{v})\, \mu^{\otimes(q\_j-q\_i+2r)}(d\mathbf{w}, d\mathbf{v}) \\ &= \int\_{Z^{q\_j-q\_i+2r}} (f\_{q\_i} \otimes\_{q\_i-r} f\_{q\_j})^2(\mathbf{w}, \mathbf{v})\, \mu^{\otimes(q\_j-q\_i+2r)}(d\mathbf{w}, d\mathbf{v}) \ge 0. \end{split} \tag{29}$$

Using (26) and (27) together with (29) yields that, for 1 ≤ *i*, *j* ≤ *d*,

$$\mathbb{E}[\Gamma\_{ii}^\*(F)\Gamma\_{jj}^\*(F)] \ge \mathbb{E}[\Gamma\_{ii}^\*(F)]\mathbb{E}[\Gamma\_{jj}^\*(F)].$$
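In a finite-dimensional toy model, the positivity behind this step can be checked directly. The sketch below is an illustration only, not part of the proof: it takes $\mathfrak{H} = \mathbb{R}^n$ with $q\_i = q\_j = 2$, so the kernels are symmetric $n \times n$ matrices and the order-one contraction is the matrix product, and it verifies the finite-dimensional analogue of the passage from (28) to (29).

```python
import numpy as np

# Toy model: H = R^n, q_i = q_j = 2, so the kernels f, g are symmetric
# n x n matrices and the order-1 contraction f (x)_1 f is the matrix
# product f @ f.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); f = (A + A.T) / 2
B = rng.standard_normal((n, n)); g = (B + B.T) / 2

# <f (x)_1 f, g (x)_1 g> rewritten as ||f (x)_1 g||^2, mirroring (28)-(29)
lhs = np.sum((f @ f) * (g @ g))
rhs = np.linalg.norm(f @ g, 'fro') ** 2
assert np.isclose(lhs, rhs) and lhs >= 0
```

In this toy case, the inequality $\mathbb{E}[\Gamma^{\*}\_{ii}(F)\Gamma^{\*}\_{jj}(F)] \ge \mathbb{E}[\Gamma^{\*}\_{ii}(F)]\,\mathbb{E}[\Gamma^{\*}\_{jj}(F)]$ reduces to exactly this positivity of the cross inner products.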

For the condition (*β*): Obviously,

$$\begin{split} & \Gamma\_{ij}^{\*}(F)\Gamma\_{ji}^{\*}(F) \\ &= q\_i q\_j \sum\_{r\_1=1}^{q\_i \wedge q\_j} \sum\_{r\_2=1}^{q\_i \wedge q\_j} (r\_1-1)!(r\_2-1)! \binom{q\_i-1}{r\_1-1}\binom{q\_j-1}{r\_1-1}\binom{q\_i-1}{r\_2-1}\binom{q\_j-1}{r\_2-1} \\ & \quad \times I\_{q\_i+q\_j-2r\_1}(f\_{q\_i} \tilde{\otimes}\_{r\_1} f\_{q\_j})\, I\_{q\_i+q\_j-2r\_2}(f\_{q\_i} \tilde{\otimes}\_{r\_2} f\_{q\_j}). \end{split} \tag{30}$$

The expectation of (30) gives

$$\begin{split} & \mathbb{E}[\Gamma\_{ij}^{\*}(F)\Gamma\_{ji}^{\*}(F)] \\ &= q\_i q\_j \sum\_{r=1}^{q\_i \wedge q\_j} [(r-1)!]^2 \binom{q\_i-1}{r-1}^2 \binom{q\_j-1}{r-1}^2 \\ & \quad \times (q\_i+q\_j-2r)!\, \|f\_{q\_i} \tilde{\otimes}\_{r} f\_{q\_j}\|^2\_{\mathfrak{H}^{\otimes(q\_i+q\_j-2r)}}. \end{split} \tag{31}$$

For *qi* < *qj*, the expectation (31) can be written as

$$\begin{split} \mathbb{E}[\Gamma\_{ij}^{\*}(F)\Gamma\_{ji}^{\*}(F)] &= q\_i q\_j [(q\_i-1)!]^2 \binom{q\_j-1}{q\_i-1}^2 (q\_j-q\_i)!\, \|f\_{q\_i} \tilde{\otimes}\_{q\_i} f\_{q\_j}\|^2\_{\mathfrak{H}^{\otimes(q\_j-q\_i)}} \\ &\quad + q\_i q\_j \sum\_{r=1}^{q\_i-1} [(r-1)!]^2 \binom{q\_i-1}{r-1}^2 \binom{q\_j-1}{r-1}^2 \\ &\quad \times (q\_i+q\_j-2r)!\, \|f\_{q\_i} \tilde{\otimes}\_{r} f\_{q\_j}\|^2\_{\mathfrak{H}^{\otimes(q\_i+q\_j-2r)}}. \end{split} \tag{32}$$

Since E[Γ∗ *ij*(*F*)] = 0 for *qi* < *qj*, we deduce, from (32), that

$$\mathbb{E}[\Gamma^\*\_{ij}(F)\Gamma^\*\_{ji}(F)] \ge \left(\mathbb{E}[\Gamma^\*\_{ij}(F)]\right)^2 \text{ for } q\_i < q\_j.$$

On the other hand, if *qi* = *qj*, then

$$\begin{split} \mathbb{E}[\Gamma\_{ij}^{\*}(F)\Gamma\_{ji}^{\*}(F)] &= (q\_i!)^2 \langle f\_{q\_i}, f\_{q\_j}\rangle^2\_{\mathfrak{H}^{\otimes q\_i}} \\ &\quad + q\_i q\_j \sum\_{r=1}^{q\_i-1} [(r-1)!]^2 \binom{q\_i-1}{r-1}^2 \binom{q\_j-1}{r-1}^2 \\ &\quad \times (2q\_i-2r)!\, \|f\_{q\_i} \tilde{\otimes}\_{r} f\_{q\_j}\|^2\_{\mathfrak{H}^{\otimes(2q\_i-2r)}} \\ &\ge (\mathbb{E}[\Gamma\_{ij}^{\*}(F)])^2. \end{split}$$

For the condition (*γ*): First, write

$$\sum\_{l\_1+l\_2+l\_3=e\_i+2e\_j} \mathbb{E}[\Gamma^{\*}\_{i,l\_1,l\_2,l\_3}(F)] = \mathbb{E}[\Gamma^{\*}\_{i,i,j,j}(F)] + \mathbb{E}[\Gamma^{\*}\_{i,j,i,j}(F)] + \mathbb{E}[\Gamma^{\*}\_{i,j,j,i}(F)]. \tag{34}$$

Next, we compute the three expectations in (34). By the definition of the operator Γ∗, we obtain

$$\begin{split} & \Gamma^{\*}\_{i\_1,i\_2,i\_3,i\_4}(F) \\ &= \sum\_{r\_2=1}^{q\_{i\_1} \wedge q\_{i\_2}}\ \sum\_{r\_3=1}^{(q\_{i\_1}+q\_{i\_2}-2r\_2) \wedge q\_{i\_3}}\ \sum\_{r\_4=1}^{(q\_{i\_1}+q\_{i\_2}+q\_{i\_3}-2r\_2-2r\_3) \wedge q\_{i\_4}} \\ & \quad \times \beta^{\*}\_{q\_{i\_1},\ldots,q\_{i\_4}}(r\_2,r\_3,r\_4)\, \mathbf{1}\_{\{2r\_2 < q\_{i\_1}+q\_{i\_2}\}} \mathbf{1}\_{\{2r\_2+2r\_3 < q\_{i\_1}+q\_{i\_2}+q\_{i\_3}\}} \\ & \quad \times I\_{q\_{i\_1}+\cdots+q\_{i\_4}-2r\_2-2r\_3-2r\_4}(((f\_{q\_{i\_1}} \tilde{\otimes}\_{r\_2} f\_{q\_{i\_2}}) \tilde{\otimes}\_{r\_3} f\_{q\_{i\_3}}) \tilde{\otimes}\_{r\_4} f\_{q\_{i\_4}}), \end{split} \tag{35}$$

and

$$\begin{split} & \Gamma\_{i\_1,i\_2,i\_3,i\_4}(F) \\ &= \sum\_{r\_2=1}^{q\_{i\_1} \wedge q\_{i\_2}}\ \sum\_{r\_3=1}^{(q\_{i\_1}+q\_{i\_2}-2r\_2) \wedge q\_{i\_3}}\ \sum\_{r\_4=1}^{(q\_{i\_1}+q\_{i\_2}+q\_{i\_3}-2r\_2-2r\_3) \wedge q\_{i\_4}} \\ & \quad \times \beta\_{q\_{i\_1},\ldots,q\_{i\_4}}(r\_2,r\_3,r\_4)\, \mathbf{1}\_{\{2r\_2 < q\_{i\_1}+q\_{i\_2}\}} \mathbf{1}\_{\{2r\_2+2r\_3 < q\_{i\_1}+q\_{i\_2}+q\_{i\_3}\}} \\ & \quad \times I\_{q\_{i\_1}+\cdots+q\_{i\_4}-2r\_2-2r\_3-2r\_4}(((f\_{q\_{i\_1}} \tilde{\otimes}\_{r\_2} f\_{q\_{i\_2}}) \tilde{\otimes}\_{r\_3} f\_{q\_{i\_3}}) \tilde{\otimes}\_{r\_4} f\_{q\_{i\_4}}). \end{split} \tag{36}$$

When $q\_{i\_1}+\cdots+q\_{i\_4} = 2r\_2+2r\_3+2r\_4$ and $r\_4 \le (q\_{i\_1}+q\_{i\_2}+q\_{i\_3}-2r\_2-2r\_3) \wedge q\_{i\_4}$, we have $2r\_4 = (q\_{i\_1}+q\_{i\_2}+q\_{i\_3}-2r\_2-2r\_3) + q\_{i\_4}$, which forces both bounds on $r\_4$ to be attained; hence, $r\_4 = q\_{i\_4}$. Taking the expectation of (35) and (36) yields that

$$\begin{split} \mathbb{E}[\Gamma^{\*}\_{i\_1,i\_2,i\_3,i\_4}(F)] &= \sum\_{r\_2=1}^{q\_{i\_1} \wedge q\_{i\_2}}\ \sum\_{r\_3=1}^{(q\_{i\_1}+q\_{i\_2}-2r\_2) \wedge q\_{i\_3}} \beta^{\*}\_{q\_{i\_1},\ldots,q\_{i\_4}}(r\_2,r\_3,q\_{i\_4}) \\ &\quad \times J\_1(i\_1,\ldots,i\_4;r\_2,r\_3)\, \mathbf{1}\_{\{2r\_2 < q\_{i\_1}+q\_{i\_2}\}} \mathbf{1}\_{\{2r\_2+2r\_3 = q\_{i\_1}+q\_{i\_2}+q\_{i\_3}-q\_{i\_4}\}}, \end{split} \tag{37}$$

and

$$\begin{split} \mathbb{E}[\Gamma\_{i\_1,i\_2,i\_3,i\_4}(F)] &= \sum\_{r\_2=1}^{q\_{i\_1} \wedge q\_{i\_2}}\ \sum\_{r\_3=1}^{(q\_{i\_1}+q\_{i\_2}-2r\_2) \wedge q\_{i\_3}} \beta\_{q\_{i\_1},\ldots,q\_{i\_4}}(r\_2,r\_3,q\_{i\_4}) \\ &\quad \times J\_1(i\_1,\ldots,i\_4;r\_2,r\_3)\, \mathbf{1}\_{\{2r\_2 < q\_{i\_1}+q\_{i\_2}\}} \mathbf{1}\_{\{2r\_2+2r\_3 = q\_{i\_1}+q\_{i\_2}+q\_{i\_3}-q\_{i\_4}\}}, \end{split} \tag{38}$$

where

$$J\_1(i\_1,\ldots,i\_4;r\_2,r\_3) = \langle (f\_{q\_{i\_1}} \tilde{\otimes}\_{r\_2} f\_{q\_{i\_2}}) \tilde{\otimes}\_{r\_3} f\_{q\_{i\_3}},\, f\_{q\_{i\_4}} \rangle\_{\mathfrak{H}^{\otimes q\_{i\_4}}}.$$

Using the definition of coefficients *β*∗ and *β*, we compute

$$\begin{split} & \beta^{\*}\_{q\_{i\_1},\ldots,q\_{i\_4}}(r\_2,r\_3,q\_{i\_4}) - \mathfrak{e}\,\beta\_{q\_{i\_1},\ldots,q\_{i\_4}}(r\_2,r\_3,q\_{i\_4}) \\ &= (q\_{i\_4})! \left\{ \beta^{\*}\_{q\_{i\_1},q\_{i\_2},q\_{i\_3}}(r\_2,r\_3) - \mathfrak{e}\,\beta\_{q\_{i\_1},q\_{i\_2},q\_{i\_3}}(r\_2,r\_3) \right\} \\ &= (q\_{i\_1}+q\_{i\_2}-2r\_2-\mathfrak{e}\,q\_{i\_3})\, J\_2(i\_1,\ldots,i\_4;r\_2,r\_3), \end{split} \tag{39}$$

where

$$\begin{aligned} J\_2(i\_1,\ldots,i\_4;r\_2,r\_3) &= (q\_{i\_4})!\, \beta\_{q\_{i\_1},q\_{i\_2}}(r\_2)\,(r\_3-1)! \\ &\quad \times \binom{q\_{i\_1}+q\_{i\_2}-2r\_2-1}{r\_3-1} \binom{q\_{i\_3}-1}{r\_3-1}. \end{aligned}$$

If $(i\_1,\ldots,i\_4) = (i,i,j,j)$, $(i,j,i,j)$, or $(i,j,j,i)$, then we have, from an estimate similar to that for (29), that, for $1 \le r\_2 \le q\_{i\_1} \wedge q\_{i\_2}$ and $1 \le r\_3 \le (q\_{i\_1}+q\_{i\_2}-2r\_2) \wedge q\_{i\_3}$,

$$J\_1(i\_1, \ldots, i\_4; r\_2, r\_3) \ge 0.$$

Indeed, for (*i*1,..., *i*4)=(*i*, *i*, *j*, *j*), it is sufficient to show that

$$\begin{split} & \int\_{Z^{q\_i+q\_j}} f\_{q\_i}(\mathbf{u}\_1, \mathbf{v}\_1, \mathbf{w}) f\_{q\_i}(\mathbf{u}\_2, \mathbf{v}\_2, \mathbf{w}) f\_{q\_j}(\mathbf{u}\_1, \mathbf{u}\_2, \mathbf{v}\_3) \\ & \qquad \times f\_{q\_j}(\mathbf{v}\_1, \mathbf{v}\_2, \mathbf{v}\_3)\, \mu^{\otimes(q\_i+q\_j)}(d\mathbf{u}\_1, d\mathbf{u}\_2, d\mathbf{w}, d\mathbf{v}\_1, d\mathbf{v}\_2, d\mathbf{v}\_3) \\ &= \int\_{Z^{q\_i+q\_j-2\ell(\mathbf{u}\_1)}} (f\_{q\_i} \otimes\_{\ell(\mathbf{u}\_1)} f\_{q\_j})(\mathbf{v}\_1, \mathbf{u}\_2, \mathbf{w}, \mathbf{v}\_3) \\ & \qquad \times (f\_{q\_i} \otimes\_{\ell(\mathbf{v}\_2)} f\_{q\_j})(\mathbf{v}\_1, \mathbf{u}\_2, \mathbf{w}, \mathbf{v}\_3)\, \mu^{\otimes(q\_i+q\_j-2\ell(\mathbf{u}\_1))}(d\mathbf{v}\_1, d\mathbf{u}\_2, d\mathbf{w}, d\mathbf{v}\_3) \ge 0, \end{split} \tag{40}$$

where $\ell(\mathbf{u}\_1) = \ell(\mathbf{v}\_2)$. Similarly, we can show that, for $(i\_1,\ldots,i\_4) = (i,j,i,j)$ or $(i,j,j,i)$,

$$J\_1(i\_1,\ldots,i\_4;r\_2,r\_3) \ge 0.$$

These facts lead us to $\mathbb{E}[\Gamma\_{i\_1,i\_2,i\_3,i\_4}(F)] \ge 0$ and $\mathbb{E}[\Gamma^{\*}\_{i\_1,i\_2,i\_3,i\_4}(F)] \ge 0$ for $(i\_1,\ldots,i\_4) = (i,i,j,j)$, $(i,j,i,j)$, or $(i,j,j,i)$, which implies that $\mathcal{E}^{(d)}(F) \neq \emptyset$. Now, we find a constant $\mathfrak{e} > 0$ such that $\mathfrak{e} \in \mathcal{E}^{(d)}(F)$. Let us set $J(\cdots) = J\_1(\cdots) \times J\_2(\cdots)$. From (37) and (38), we have, together with (39), that

$$\begin{split} & \sum\_{i,j=1}^{d} \left\{ \sum\_{l\_1+l\_2+l\_3=e\_i+2e\_j} \left( \mathbb{E}[\Gamma^{\*}\_{i,l\_1,l\_2,l\_3}(F)] - \mathfrak{e}\,\mathbb{E}[\Gamma\_{i,l\_1,l\_2,l\_3}(F)] \right) \right\} \\ &= \sum\_{i,j=1}^{d} \left\{ \mathbb{E}[\Gamma^{\*}\_{i,i,j,j}(F)] - \mathfrak{e}\,\mathbb{E}[\Gamma\_{i,i,j,j}(F)] + \mathbb{E}[\Gamma^{\*}\_{i,j,i,j}(F)] \right. \\ & \qquad \left. -\, \mathfrak{e}\,\mathbb{E}[\Gamma\_{i,j,i,j}(F)] + \mathbb{E}[\Gamma^{\*}\_{i,j,j,i}(F)] - \mathfrak{e}\,\mathbb{E}[\Gamma\_{i,j,j,i}(F)] \right\} \\ &= V\_{1,d} + V\_{2,d} + V\_{3,d}\,, \end{split} \tag{41}$$

where

$$\begin{split} V\_{1,d} &= \sum\_{i,j=1}^{d} \sum\_{r\_2=1}^{q\_i} (2q\_i-2r\_2-\mathfrak{e}q\_j)\, J(i,i,j,j;r\_2,r\_3) \\ &\quad \times \mathbf{1}\_{\{r\_2 < q\_i\}} \mathbf{1}\_{\{r\_2+r\_3=q\_i\}}, \\ V\_{2,d} &= \sum\_{i,j=1}^{d} \sum\_{r\_2=1}^{q\_i \wedge q\_j} (q\_i+q\_j-2r\_2-\mathfrak{e}q\_j)\, J(i,j,i,j;r\_2,r\_3) \\ &\quad \times \mathbf{1}\_{\{2r\_2 < q\_i+q\_j\}} \mathbf{1}\_{\{r\_2+r\_3=q\_i\}}, \end{split}$$

and

$$\begin{aligned} V\_{3,d} &= \sum\_{i,j=1}^d \sum\_{r\_2=1}^{q\_i \wedge q\_j} (q\_i + q\_j - 2r\_2 - \mathfrak{e}q\_j) J(i, j, j, i; r\_2, r\_3) \\ &\times \mathbf{1}\_{\{2r\_2 < q\_i + q\_j\}} \mathbf{1}\_{\{r\_2 + r\_3 = q\_j\}}. \end{aligned}$$

For every *i*, *j* ∈ {1, . . . , *d*} and *r*<sup>2</sup> ∈ {1, . . . , *qi* − 1}, we have

$$(2q\_i - 2r\_2 - \mathfrak{e}q\_j) \ge (2 - \mathfrak{e} \max\_{1 \le i \le d} q\_i).$$

This leads us to

$$V\_{1,d} \geq \left(2 - \mathfrak{e} \max\_{1 \leq i \leq d} q\_i\right) \tilde{V}\_{1,d}, \tag{42}$$

where

$$\tilde{V}\_{1,d} = \sum\_{i,j=1}^d \sum\_{r\_2=1}^{q\_i} J(i,i,j,j;r\_2,r\_3)\, \mathbf{1}\_{\{r\_2 < q\_i\}} \mathbf{1}\_{\{r\_2+r\_3=q\_i\}}.$$
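The elementary coefficient bound behind (42) can be sanity-checked over a grid. The sketch below is an illustration only; the chaos orders $q\_i$ and the value of $\mathfrak{e}$ are arbitrary sample choices, not quantities fixed by the proof.

```python
# Check 2*q_i - 2*r_2 - e*q_j >= 2 - e*max(q) for 1 <= r_2 <= q_i - 1,
# the termwise bound used to pass from V_{1,d} to (42).
qs = [2, 3, 5]          # sample chaos orders (hypothetical)
qmax = max(qs)
e = 1.0 / qmax          # any e in [0, 1/max q], as in the proof
ok = all(
    2 * qi - 2 * r2 - e * qj >= 2 - e * qmax
    for qi in qs for qj in qs for r2 in range(1, qi)
)
assert ok
```

The bound holds because $2q\_i - 2r\_2 \ge 2$ whenever $r\_2 \le q\_i - 1$, while $\mathfrak{e}q\_j \le \mathfrak{e}\max\_{1\le i\le d} q\_i$.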

For the second sum *V*2,*<sup>d</sup>* in (41), we change the range of *r*<sup>2</sup> from the inequality 2*r*<sup>2</sup> < *qi* + *qj* to

$$r\_2 \le \frac{q\_i+q\_j}{2} - \alpha\_{i,j} \ \text{ for } \alpha\_{i,j} \in (0,1],$$

where [(*qi* + *qj*)/2] − *αi*,*<sup>j</sup>* is a positive integer. For fixed *i*, *j* ∈ {1, . . . , *d*},

$$\begin{aligned} & \left( q\_i + q\_j - 2r\_2 - \mathfrak{e} q\_j \right) \\ & \ge \left( q\_i + q\_j - 2 \left[ \left( \frac{q\_i + q\_j}{2} - \alpha\_{i,j} \right) \wedge q\_i \right] - \mathfrak{e} q\_j \right). \end{aligned} \tag{43}$$

If *qi* = *qj* for 1 ≤ *i*, *j* ≤ *d*, then, from (43), we have

$$\begin{array}{rcl} (q\_i + q\_j - 2r\_2 - \mathfrak{e}q\_j) & \geq & (2q\_i - 2(q\_i - 1) - \mathfrak{e}q\_i) \\ & \geq & (2 - \mathfrak{e} \max\_{1 \leq i \leq d} q\_i) \end{array} \tag{44}$$

for every *i*, *j* ∈ {1, ... , *d*} and *r*<sup>2</sup> ∈ {1, ... , *qi* − 1}. For *qj* − *qi* ≥ 2, we deduce, from (43), for fixed *i*, *j* ∈ {1, . . . , *d*}, that

$$\begin{array}{rcl}(q\_i + q\_j - 2r\_2 - \mathfrak{e}q\_j) & \geq & (q\_i + q\_j - 2q\_i - \mathfrak{e}q\_j) \\ & \geq & (2 - \mathfrak{e} \max\_{1 \leq i \leq d} q\_i). \end{array} \tag{45}$$

For *qj* = *qi* + 1 and 0 < *αi*,*<sup>j</sup>* ≤ 0.5, the Inequality (43) yields

$$\begin{array}{rcl}(q\_i + q\_j - 2r\_2 - \mathfrak{e}q\_j) & \geq & (2q\_i + 1 - 2q\_i - \mathfrak{e}q\_j) \\ & \geq & (1 - \mathfrak{e} \max\_{1 \leq i \leq d} q\_i). \end{array} \tag{46}$$

On the other hand, if *qj* = *qi* + 1 and 0.5 < *αi*,*<sup>j</sup>* ≤ 1, then we obtain, from (43), that

$$\begin{aligned} \left(q\_i + q\_j - 2r\_2 - \mathfrak{e}q\_j\right) &\geq \left[2q\_i + 1 - 2\left(q\_i + \frac{1}{2} - \alpha\_{i,j}\right) - \mathfrak{e}q\_j\right] \\ &= \left(2\alpha\_{i,j} - \mathfrak{e}q\_j\right) \\ &\geq (1 - \mathfrak{e}\max\_{1 \leq i \leq d} q\_i). \end{aligned} \tag{47}$$

Combining the above results (44)–(47), we obtain

$$V\_{2,d} \ge \left(1 - \mathfrak{e} \max\_{1 \le i \le d} q\_i\right) \tilde{V}\_{2,d}, \tag{48}$$

where

$$\tilde{V}\_{2,d} = \sum\_{i,j=1}^d \sum\_{r\_2=1}^{q\_i \wedge q\_j} J(i,j,i,j;r\_2,r\_3)\, \mathbf{1}\_{\{2r\_2 < q\_i+q\_j\}} \mathbf{1}\_{\{r\_2+r\_3=q\_i\}}.$$

Similarly,

$$V\_{3,d} \ge \left(1 - \mathfrak{e} \max\_{1 \le i \le d} q\_i\right) \tilde{V}\_{3,d}, \tag{49}$$

where

$$\tilde{V}\_{3,d} = \sum\_{i,j=1}^d \sum\_{r\_2=1}^{q\_i \wedge q\_j} J(i,j,j,i;r\_2,r\_3)\, \mathbf{1}\_{\{2r\_2 < q\_i+q\_j\}} \mathbf{1}\_{\{r\_2+r\_3=q\_j\}}.$$

The Inequalities (42), (48), and (49) yield

$$\begin{aligned} & \sum\_{i,j=1}^d \left\{ \sum\_{l\_1+l\_2+l\_3=e\_i+2e\_j} \left( \mathbb{E}[\Gamma\_{i,l\_1,l\_2,l\_3}^{\*}(F)] - \mathfrak{e}\,\mathbb{E}[\Gamma\_{i,l\_1,l\_2,l\_3}(F)] \right) \right\} \\ &\geq (1 - \mathfrak{e}\max\_{1\leq i\leq d} q\_i)(\tilde{V}\_{1,d} + \tilde{V}\_{2,d} + \tilde{V}\_{3,d}) \\ &\geq 0 \ \text{ for } \mathfrak{e} \in \left[0, \frac{1}{\max\_{1\leq i\leq d} q\_i}\right], \end{aligned}$$

so that the condition (*γ*) is satisfied. Hence, applying Theorem 2 gives the desired conclusion. If $q\_1 = \cdots = q\_d = q$, the estimate in (42) yields the constant $\mathfrak{e}$ given in (25).

**Remark 4.** *1. Theorem 3 proves that the three assumptions in Theorem 2 are satisfied under the same conditions as in Theorem 4.3 of [19]. To achieve this, we just need to explicitly compute the expected values of Gamma operators and compare them.*

*2. The estimate in Theorem 4.3 of [19] corresponds to the estimate (24) with* e = 0*. Hence, our approach improves the constants appearing in the previous estimate given in [19]. If q*<sub>1</sub> = ··· = *q<sub>d</sub>* = 1*, then* e = 2*, which implies that F has the same distribution as Z.*

#### **5. Results in Dimension One (***d* **= 1)**

In this section, we specialize the results given in the previous Sections 3 and 4 to the one-dimensional case. We begin with the one-dimensional version of the Gamma operators Γ and Γ<sup>∗</sup> (for these operators, see [21,22]). We set $\Gamma\_1(F) = F$ and $\Gamma^{\*}\_1(F) = F$. If *F* is a well-defined element in $L^2(\Omega)$, we set $\Gamma\_{k+1}(F) = \langle DF, -DL^{-1}\Gamma\_k(F)\rangle\_{\mathfrak{H}}$ and $\Gamma^{\*}\_{k+1}(F) = \langle -DL^{-1}F, D\Gamma^{\*}\_{k}(F)\rangle\_{\mathfrak{H}}$ for $k = 1, 2, \ldots$.
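The recursion can be made concrete in the simplest nontrivial example. Take $F = I\_2(h \otimes h) = X(h)^2 - 1$ with $\|h\|\_{\mathfrak{H}} = 1$; then $DF = 2X(h)h$ and $-DL^{-1}F = X(h)h$, so $\Gamma\_2(F) = 2X(h)^2$. The sketch below is a worked special case for illustration, not a general implementation of the operators: it checks $\mathbb{E}[\Gamma\_2(F)] = \mathbb{E}[F^2]$ and evaluates $\mathrm{Var}(\Gamma\_2(F))$ exactly.

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)   # N(0,1) density of X(h)

# F = I_2(h (x) h) = x^2 - 1; Gamma_2(F) = <-DL^{-1}F, DF> = 2*x^2
F = x**2 - 1
gamma2 = 2 * x**2

EF2 = sp.integrate(F**2 * phi, (x, -sp.oo, sp.oo))       # E[F^2] = 2
mean = sp.integrate(gamma2 * phi, (x, -sp.oo, sp.oo))    # E[Gamma_2(F)] = 2
var = sp.integrate(gamma2**2 * phi, (x, -sp.oo, sp.oo)) - mean**2
assert (EF2, mean, var) == (2, 2, 8)
```

Here $\mathrm{Var}(\Gamma\_2(F)) = 8$, which matches the identity $\mathrm{Var}(\Gamma\_2(F)) = 2\mathbb{E}[\Gamma\_4(F)] - \mathbb{E}[\Gamma^{\*}\_4(F)]$ derived in the proof below when $\mathbb{E}[\Gamma\_4(F)] = \mathbb{E}[\Gamma^{\*}\_4(F)] = 8$.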

**Theorem 4.** *If d* = 1*, the conditions* (*α*)*,* (*β*)*, and* (*γ*) *are satisfied under the assumption* E[Γ4(*F*)] ≠ 0*.*

**Proof.** The conditions (*α*) and (*β*) obviously hold. Indeed, the Cauchy–Schwarz inequality proves that

$$\mathbb{E}[\Gamma\_2^\*(F)^2] \ge (\mathbb{E}[\Gamma\_2^\*(F)])^2,$$

where $\Gamma^{\*}\_2(F) = \Gamma\_2(F) = \langle -DL^{-1}F, DF\rangle\_{\mathfrak{H}}$. A repeated application of Lemma 1 proves that

$$\begin{aligned} \mathbb{E}[\Gamma\_2(F)^2] &= \mathbb{E}[F^2 \Gamma\_2(F)] - \mathbb{E}[\Gamma\_4^\*(F)] \\ &= 2\mathbb{E}[\Gamma\_4(F)] + (\mathbb{E}[F^2])^2 - \mathbb{E}[\Gamma\_4^\*(F)]. \end{aligned}$$

This shows that $\mathrm{Var}(\Gamma\_2(F)) = 2\mathbb{E}[\Gamma\_4(F)] - \mathbb{E}[\Gamma^{\*}\_4(F)]$. Let $\varphi(x) = \mathbb{E}[\Gamma\_4(F)]x - \mathbb{E}[\Gamma^{\*}\_4(F)]$. Then, $\varphi(2) \ge 0$. Since $\mathbb{E}[\Gamma\_4(F)] \neq 0$, there exists a constant $\mathfrak{e} \in \mathbb{R}$ such that $\varphi(\mathfrak{e}) \le 0$. This implies that the condition (*γ*) is satisfied.

**Remark 5.** *If* E[*F*] = 0*, it follows from (8) that*

$$\mathbb{E}[\Gamma\_4(F)] = \frac{1}{6} \left( \mathbb{E}[F^4] - 3 (\mathbb{E}[F^2])^2 \right). \tag{50}$$

*Studies so far have shown that Inequality (1) holds true only when F belongs to a fixed Wiener chaos. However, the technique developed here can be applied to prove that the fourth moment theorem (1) holds even if F is not an element of a fixed Wiener chaos. The proof of Theorem 4 yields, together with (50), that*

$$\text{Var}(\Gamma\_2(F)) \le \frac{2-\mathfrak{e}}{6} \left( \mathbb{E}[F^4] - 3 (\mathbb{E}[F^2])^2 \right),\tag{51}$$

*where the constant* e *satisfies φ*(e) ≤ 0*. Note that the constant given in (12) is three times that in (51).*
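As a concrete check of the right-hand side of (50), one can evaluate it exactly for the second-chaos example $F = X^2 - 1$ with $X \sim N(0,1)$. The sketch below is a special case used for illustration only.

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)   # standard normal density
F = x**2 - 1                                    # a second-chaos element

m2 = sp.integrate(F**2 * phi, (x, -sp.oo, sp.oo))   # E[F^2] = 2
m4 = sp.integrate(F**4 * phi, (x, -sp.oo, sp.oo))   # E[F^4] = 60
gamma4 = sp.Rational(1, 6) * (m4 - 3 * m2**2)       # RHS of (50)
assert gamma4 == 8
```

Since $\mathbb{E}[F^4] - 3(\mathbb{E}[F^2])^2 = 60 - 12 = 48 > 0$, the assumption of Theorem 4 is satisfied for this *F*.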

**Proposition 2.** *Let φ be the linear function in the proof of Theorem 4. Let F* = *I<sub>q</sub>*(*f*) *with* $f \in \mathfrak{H}^{\odot q}$ (*q* ≥ 2*). Then, there exists a constant* e ∈ [2/*q*, 2) *such that φ*(e) ≤ 0*, and* (−∞, 2/*q*] ⊆ $\mathcal{E}^{(1)}(F)$*.*

**Proof.** A direct computation yields that

$$\begin{split} \mathbb{E}[\Gamma\_{4}^{\*}(F)] &= q! \sum\_{r=1}^{q-1} \beta\_{q,q}^{\*}(r)(2q-2r)(q-r-1)! \binom{2q-2r-1}{q-r-1} \\ &\quad \times \binom{q-1}{q-r-1} \|f \tilde{\otimes}\_{r} f\|\_{\mathfrak{H}^{\otimes(2q-2r)}}^{2} > 0. \end{split} \tag{52}$$

On the other hand, Theorem 5.1 in [22] shows that

$$\begin{aligned} \mathbb{E}[\Gamma\_4(F)] &= q! \sum\_{r=1}^{q-1} \beta\_{q,q}(r)\, q\, (q-r-1)! \binom{2q-2r-1}{q-r-1} \\ &\quad \times \binom{q-1}{q-r-1} \|f \tilde{\otimes}\_{r} f\|\_{\mathfrak{H}^{\otimes(2q-2r)}}^2 > 0. \end{aligned} \tag{53}$$

Combining (52) and (53) (or $V\_{1,d}$ for $d = 1$ in (41) in the proof of Theorem 3) together with $\beta^{\*}\_{q,q} = \beta\_{q,q}$, we obtain that

$$\begin{split} -\phi(\mathfrak{e}) &= \mathbb{E}[\Gamma\_{4}^{\*}(F)] - \mathfrak{e}\,\mathbb{E}[\Gamma\_{4}(F)] \\ &= q! \sum\_{r=1}^{q-1} \beta\_{q,q}(r)(2q-2r-\mathfrak{e}q)(q-r-1)! \binom{2q-2r-1}{q-r-1} \\ &\quad \times \binom{q-1}{q-r-1} \|f \tilde{\otimes}\_{r} f\|^2\_{\mathfrak{H}^{\otimes(2q-2r)}} \\ &\ge (2-\mathfrak{e}q)\, q! \sum\_{r=1}^{q-1} \beta\_{q,q}(r)(q-r-1)! \binom{2q-2r-1}{q-r-1} \\ &\quad \times \binom{q-1}{q-r-1} \|f \tilde{\otimes}\_{r} f\|^2\_{\mathfrak{H}^{\otimes(2q-2r)}}. \end{split} \tag{54}$$

This Inequality (54) shows that *φ*(2/*q*) ≤ 0. Since E[Γ<sup>∗</sup> <sup>4</sup> (*F*)] > 0 and E[Γ4(*F*)] > 0, it may be possible for e to belong to [2/*q*, 2).

**Remark 6.** *Substituting* 2/*q for* e *in (51), we can derive the* fourth moment theorem *in (2). By using the new method developed in this paper, we show that the constant term given in (51) is less than or equal to the one in (2). This means that*

$$\frac{2-\mathfrak{e}}{6} \le \frac{q-1}{3}.\tag{55}$$

Let us take an example that satisfies (55).

**Example 1.** *We consider the case of <sup>q</sup>* <sup>=</sup> <sup>3</sup>*. Let <sup>F</sup>* <sup>=</sup> *<sup>I</sup>*3(*h*⊗3) *with <sup>h</sup>* <sup>∈</sup> <sup>H</sup>*. A similar computation as for (54) proves that*

$$\begin{split} & \quad \mathbb{E}[\Gamma\_{4}^{\*}(F)] - \mathfrak{e}\,\mathbb{E}[\Gamma\_{4}(F)] \\ &= 3! \times 3 \sum\_{r=1}^{2} (r-1)! \binom{2}{r-1}^{2} (6-2r-3\mathfrak{e})(3-r-1)! \\ & \quad \times \binom{6-2r-1}{3-r-1} \binom{2}{3-r-1} \|h^{\otimes 3} \tilde{\otimes}\_{r} h^{\otimes 3}\|\_{\mathfrak{H}^{\otimes(6-2r)}}^{2} \\ &= (3! \times 18)(4-3\mathfrak{e}) \|h^{\otimes 3} \tilde{\otimes}\_{1} h^{\otimes 3}\|\_{\mathfrak{H}^{\otimes 4}}^{2} \\ & \quad + (3! \times 12)(2-3\mathfrak{e}) \|h^{\otimes 3} \tilde{\otimes}\_{2} h^{\otimes 3}\|\_{\mathfrak{H}^{\otimes 2}}^{2} \\ &= 72\left(8-\frac{15}{2}\mathfrak{e}\right) \|h\|\_{\mathfrak{H}}^{12}. \end{split} \tag{56}$$

*From (56), it follows that* (−∞, 16/15] = $\mathcal{E}^{(1)}(F)$ *and*

$$\mathfrak{e} = \frac{\mathbb{E}[\Gamma\_4^\*(F)]}{\mathbb{E}[\Gamma\_4(F)]} = 16/15.$$

*As a consequence of (51), the upper bound is given by*

$$\operatorname{Var}(\Gamma\_2(F)) \le \frac{7}{45} \left( \mathbb{E}[F^4] - 3(\mathbb{E}[F^2])^2 \right). \tag{57}$$

*On the other hand, the estimate (2) (q* = 3*) gives*

$$\operatorname{Var}(\Gamma\_2(F)) \le \frac{30}{45} \left( \mathbb{E}[F^4] - 3(\mathbb{E}[F^2])^2 \right). \tag{58}$$

*The constant in (57) is clearly smaller than that in (58).*
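The arithmetic collapse in (56) and the threshold value e = 16/15 can be verified symbolically. The sketch below is a check of the displayed computation only, with $\|h\|\_{\mathfrak{H}} = 1$ so that the common norm factor drops out.

```python
import sympy as sp

e = sp.symbols('e')
# the two contraction terms of (56), with ||h|| = 1
expr = sp.factorial(3) * 18 * (4 - 3 * e) + sp.factorial(3) * 12 * (2 - 3 * e)
collapsed = 72 * (8 - sp.Rational(15, 2) * e)
assert sp.expand(expr - collapsed) == 0
# the zero of E[Gamma_4*(F)] - e*E[Gamma_4(F)] is e = 16/15
assert sp.solve(expr, e) == [sp.Rational(16, 15)]
```

Substituting e = 16/15 into (51) then gives the constant (2 − 16/15)/6 = 7/45 appearing in (57).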

#### **6. Conclusions and Future Works**

This paper develops a method to obtain the fourth moment bound on the normal approximation of *F*, where *F* is a *d*-dimensional random vector whose components are general functionals of Gaussian fields. In order to prove the *fourth moment theorem*, all we need to do is show that the conditions (*α*), (*β*), and (*γ*) in Theorem 2 are satisfied. A significant feature of our work is that these conditions are naturally satisfied in the specific case where *F* is a random vector of vector-valued multiple integrals. In addition, our technique yields a much better estimate than the conventional method. Compared with the studies in the literature [3,14–16,19,20], our study is not only an extension of these works, but it also naturally recovers their results.

As future research directions, we will apply the approach to the *fourth moment theorem* developed here to more general processes, including Markov diffusion processes and Poisson processes. We expect this approach to unify the *fourth moment theorem* across many such processes.

**Author Contributions:** Conceptualization, Y.-T.K. and H.-S.P.; methodology, Y.-T.K.; writing and original draft preparation, Y.-T.K. and H.-S.P.; co-review and validation, H.-S.P.; writing—editing and funding acquisition. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by Hallym University Research Fund 2021 (HRF-202112-005).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We are very grateful to the anonymous Referees for their suggestions and valuable advice.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

