**1. Introduction**

The Laplace integrals find applications in numerous problems of mathematics and applied science, and the literature on these integrals is abundant. For example, let us mention the applications in statistical physics, see e.g., [1] or Lecture 5 in [2], in pattern analysis [3], in large deviation theory [4–6], where the method is sometimes referred to as the Laplace–Varadhan method, in the analysis of Weibullian chaos [7], in asymptotic methods for large excursion probabilities [8], in the asymptotic analysis of stochastic processes [9], and in the calculation of tunneling effects in quantum mechanics and quantum field theory, see [10,11]. It can be used to essentially simplify Maslov's derivation of the Gibbs, Bose–Einstein and Pareto distributions [12]. An infinite-dimensional version and a non-commutative version of the Laplace approximation were developed recently in [13,14], respectively.

The majority of research on this topic is devoted to asymptotic expansions, or even, following Varadhan's general approach to large deviations, just to logarithmic asymptotics, see also [15]. In the present paper, following the recent trend of searching for the best constants in the error terms of central-limit-type results, see [16] and references therein, we are interested in exact estimates for the main error term of the Laplace approximation. This approach to Laplace integrals was initiated by the author in the book [9] (Appendix B), where the stress was on integrals with complex phase. Here we aim at making these asymptotics more precise for real phase, including the most general case where both the exponent and the pre-exponential term in the integral depend on the parameter (which is crucial for the applications to the conditional law of large numbers that we have in mind here), and stressing two new applications: to sums instead of integrals (Laplace–Varadhan asymptotics), and to the conditional law of large numbers (LLN) and central limit theorems (CLT) of large deviations.

The content of the paper is as follows. In Section 2 we obtain the estimates for the error term in the Laplace approximation with the minimum of the phase in the interior of the domain of integration, improving slightly on the estimates from [9], and in Section 3 we derive the resulting LLN and CLT. In Sections 4 and 5 the same program is carried out for the case of phase minima occurring on the border of the domain. In Section 6 we derive the analogous results for sums, rather than integrals. In Section 7 we show how our results can be applied to the conditional LLN and CLT of large deviations.

#### **2. Phase Minimum Inside the Domain of Integration**

Here we present the estimates of the remainder in the asymptotic formula for the Laplace integrals with the critical point of the phase lying in the interior of the domain of integration, adapting and streamlining the arguments of [9].

Consider the integral

$$I(N) = \int\_{\Omega} f(\mathbf{x}, N) \exp\{-NS(\mathbf{x}, N)\} \, d\mathbf{x}, \quad N \ge N\_0 > 0,\tag{1}$$

where Ω is an open bounded subset of the Euclidean space **R**<sup>*d*</sup>, equipped with the Euclidean norm |·| and Euclidean volume |Ω|, and the amplitude *f* and the phase *S* are continuous real functions of *x* ∈ Ω, *N* ≥ *N*<sub>0</sub>.

**Remark 1.** *The assumption that* Ω *is bounded is not essential, but simplifies explicit estimates for the error terms. One should think of* Ω *as a bounded subset of the full domain of integration containing all minimum points of S*(·, *N*)*. If f is integrable outside* Ω*, the integral of f*(*x*, *N*) exp{−*NS*(*x*, *N*)} *over* **R**<sup>*d*</sup> \ Ω *will be exponentially small compared with Equation* (1)*.*

Recall that the *k*th order derivative

$$\phi^{(k)}(x) = \frac{\partial^k \phi}{\partial x^k}$$

of a real function *φ* on **R***<sup>d</sup>* can be viewed as the multi-linear map

$$\phi^{(k)}(\mathbf{x})[v] = \frac{\partial^k \phi}{\partial \mathbf{x}^k}(\mathbf{x})[v] = \sum\_{i\_1,\dots,i\_k=1}^d \frac{\partial^k \phi(\mathbf{x})}{\partial \mathbf{x}\_{i\_1}\cdots\partial \mathbf{x}\_{i\_k}} v\_{i\_1}\cdots v\_{i\_k}, \quad v \in \mathbb{R}^d.$$

The second derivative will be written as usual in the matrix form

$$
\phi''(\mathbf{x})[v] = \left(\frac{\partial^2 \phi}{\partial \mathbf{x}^2}(\mathbf{x}) v, v\right).
$$

We shall denote by ‖*φ*<sup>(*k*)</sup>(*x*)‖ the corresponding norm, defined as the lowest constant for which the estimate

$$|\phi^{(k)}(\mathbf{x})[v]| \le \|\phi^{(k)}(\mathbf{x})\|\, |v|^k$$

holds for all *v*.
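For *k* = 2 this norm is the largest absolute eigenvalue of the Hessian matrix. A quick numerical sketch (with an arbitrary illustrative matrix, not taken from the paper) compares a grid-based supremum over unit vectors with the exact eigenvalue:

```python
import math

# For k = 2 the norm ||phi''(x)|| is the largest absolute eigenvalue of
# the Hessian A, i.e., sup over unit vectors v of |(A v, v)|.
# Illustrative symmetric matrix (not from the paper):
A = [[2.0, 1.0],
     [1.0, 3.0]]

def quad_form(v):
    # (A v, v) for v in R^2
    return sum(A[i][j] * v[i] * v[j] for i in range(2) for j in range(2))

# Brute-force supremum over a fine grid of unit vectors v = (cos t, sin t).
norm_est = max(
    abs(quad_form((math.cos(2 * math.pi * k / 100000),
                   math.sin(2 * math.pi * k / 100000))))
    for k in range(100000)
)

# Exact value for a symmetric 2x2 matrix: the larger eigenvalue.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # (5 + sqrt(5))/2

print(norm_est, lam_max)   # both ≈ 3.618
```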

**Remark 2.** *This is the standard way to define norms of multi-linear mappings, see e.g., [17]. However, as all norms on finite-dimensional spaces are equivalent, the choice of a norm is not very essential here.*

Let us now make the following assumptions on the functions *f* and *S*: (C1) *f*(*x*, *N*) is a Lipschitz continuous function of *x* with

$$f\_0 = \sup\_{\mathbf{x} \in \Omega, N > N\_0} |f(\mathbf{x}, N)| < \infty, \quad f\_1 = \sup\_{\mathbf{x} \neq y, N > N\_0} \frac{|f(\mathbf{x}, N) - f(y, N)|}{|\mathbf{x} - y|} < \infty;$$

(C2) *<sup>S</sup>*(*<sup>x</sup>*, *N*) is a thrice continuously differentiable function in *x* such that

$$S\_3 = \sup\_{\mathbf{x} \in \Omega, N \ge \mathbf{N}\_0} \left\| \frac{\partial^3 S(\mathbf{x}, \mathbf{N})}{\partial \mathbf{x}^3} \right\| < \infty;$$

and

$$\Lambda\_m|\xi|^2 \leq \left(\frac{\partial^2 S}{\partial \mathbf{x}^2}(\mathbf{x}, N)\xi, \xi\right) \leq \Lambda\_M |\xi|^2$$

for all *x* ∈ Ω, *N* ≥ *N*<sub>0</sub>, *ξ* ∈ **R**<sup>*d*</sup>, with positive constants Λ*<sub>m</sub>*, Λ*<sub>M</sub>*; the latter condition can be concisely written as

$$
\Lambda\_m \le \frac{\partial^2 S}{\partial \mathbf{x}^2}(\mathbf{x}, N) \le \Lambda\_M,
$$

where the usual ordering on symmetric matrices is used;

(C3) For any *N* ≥ *N*<sub>0</sub> there exists a unique point *x*(*N*) of global minimum of *S*(·, *N*) in Ω, and the ball

$$\mathcal{U}(N) = \{ \mathbf{x} : |\mathbf{x} - \mathbf{x}(N)| < N^{-1/3} \}\tag{2}$$

is contained in Ω. Let us denote by *D*<sub>*N*</sub> the matrix of the second derivatives of *S* at *x*(*N*), that is

$$D\_N = \frac{\partial^2 S}{\partial x^2} (x(N), N). \tag{3}$$

Notice that from convexity of *S* in Ω and Assumption (C3) it follows that

$$S\_{\min}(N) = \inf \{ S(\mathbf{x}, N) : \mathbf{x} \in \Omega \setminus \mathcal{U}(N) \} = \min \{ S(\mathbf{x}, N) : \mathbf{x} \in \partial \mathcal{U}(N) \}. \tag{4}$$

Our approach to the study of the Laplace integral *I*(*N*) is based on its decomposition

$$I(N) = I'(N) + I''(N),$$

with

$$I'(N) = \int\_{\mathcal{U}(N)} f(\mathbf{x}, N) \exp\{-NS(\mathbf{x}, N)\} \, d\mathbf{x}, \quad I''(N) = \int\_{\Omega \setminus \mathcal{U}(N)} f(\mathbf{x}, N) \exp\{-NS(\mathbf{x}, N)\} \, d\mathbf{x}. \tag{5}$$

**Remark 3.** *In the proof below one can use U*(*N*) = {*x* : |*x* − *x*(*N*)| < *N*<sup>−κ</sup>} *instead of Equation* (2) *with any* 1/3 ≤ κ < 1/2*, the lower bound on* κ *coming from the estimate of I*<sub>1</sub> *below, and the upper bound from the estimate of I*<sub>3</sub> *below.*

**Proposition 1.** *Under Assumptions (C1)–(C3),*

$$I(N) = \exp\{-NS(\mathbf{x}(N), N)\} \left(\frac{2\pi}{N}\right)^{d/2} \left[\frac{f(\mathbf{x}(N), N)}{\sqrt{\det D\_N}} + \frac{\omega(N)}{\sqrt{N}}\right] + \omega^{\mathrm{exp}}(N),\tag{6}$$

*where ω*(*N*) *is a bounded function depending on* Λ*<sub>m</sub>*, *f*<sub>0</sub>, *f*<sub>1</sub>, *S*<sub>3</sub>, *d, and ω*<sup>exp</sup>(*N*) *is exponentially small compared to the main term. Explicitly,*

$$|\omega(N)| \le d\Lambda\_m^{-(1+d)/2} \left[ f\_1 + \frac{d+1}{6\Lambda\_m} f\_0 S\_3 e^{S\_3/6} \right] \tag{7}$$

$$|\omega^{\exp}(N)| \le f\_0 \exp\{-NS(\mathbf{x}(N), N)\} \exp\{-\Lambda\_m N^{1/3}/2\}$$

$$\times \left[|\Omega| + \frac{(2\pi)^{d/2} N^{-d/3}}{\Lambda\_m N^{1/3}} \left(\frac{1}{\Gamma(d/2)} + \frac{2^{d/2}}{2\Lambda\_m N^{1/3}}\right)\right].\tag{8}$$

**Proof.** From the Taylor formula for functions on **R**

$$\mathbf{g}(t) = \mathbf{g}(0) + \mathbf{g}'(0)t + \int\_0^t (t - s)\mathbf{g}''(s)ds$$

it follows that

$$S(\mathbf{x}, N) - S(\mathbf{x}(N), N) = S(\mathbf{x}(N) + t(\mathbf{x} - \mathbf{x}(N)), N)\Big|\_{t=0}^{t=1}$$

$$= \int\_0^1 (1 - \tau) \left( \frac{\partial^2 S}{\partial \mathbf{x}^2}(\mathbf{x}(N) + \tau(\mathbf{x} - \mathbf{x}(N)), N)(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N) \right) d\tau. \tag{9}$$
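The one-dimensional Taylor formula with integral remainder used here is easy to sanity-check numerically; the following sketch (not part of the paper) verifies it for g(t) = cos t with a composite Simpson rule:

```python
import math

def simpson(h, a, b, n=2000):
    # Composite Simpson rule for the integral of h over [a, b]; n must be even.
    w = (b - a) / n
    total = h(a) + h(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * h(a + i * w)
    return total * w / 3

g = math.cos                       # g(t) = cos t
dg = lambda t: -math.sin(t)        # g'(t)
d2g = lambda t: -math.cos(t)       # g''(t)

t = 1.0
# g(t) = g(0) + g'(0) t + \int_0^t (t - s) g''(s) ds
taylor = g(0) + dg(0) * t + simpson(lambda s: (t - s) * d2g(s), 0.0, t)
print(taylor, g(t))   # both ≈ 0.54030
```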

Consequently, for *x* ∈ *∂U*(*N*) we have by Assumption (C2) that

$$S(\mathbf{x}, N) - S(\mathbf{x}(N), N) \ge \frac{1}{2} \Lambda\_m |\mathbf{x} - \mathbf{x}(N)|^2 = \frac{1}{2} \Lambda\_m N^{-2/3}. \tag{10}$$

It follows then from Equation (4) that

$$S\_{\min}(N) = \inf \{ S(\mathbf{x}, N) : \mathbf{x} \in \Omega \setminus \mathcal{U}(N) \} \ge S(\mathbf{x}(N), N) + \frac{1}{2} \Lambda\_m N^{-2/3},\tag{11}$$

so that

$$|I''(N)| \le \exp\{-NS\_{\min}(N)\} \int\_{\Omega} |f(\mathbf{x}, N)| \, d\mathbf{x} \le f\_0 |\Omega| \exp\{-\Lambda\_m N^{1/3}/2\} \exp\{-NS(\mathbf{x}(N), N)\}. \tag{12}$$

To go further we shall need the Taylor expansion of *S* up to the third order. Namely, from Equation (9) we deduce the expansion

$$S(\mathbf{x}, N) - S(\mathbf{x}(N), N) = \frac{1}{2} (D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N)) + \sigma(\mathbf{x}, N), \tag{13}$$

where, due to the equation $\int\_0^1 (1 - \tau)\tau \, d\tau = 1/6$,

$$|\sigma(\mathbf{x}, N)| \le \frac{1}{6} S\_3 |\mathbf{x} - \mathbf{x}(N)|^3. \tag{14}$$

Turning to *I*(*N*) we further decompose it into the four integrals

$$I'(N) = \exp\{-NS(\mathbf{x}(N), N)\} \left(I\_1(N) + I\_2(N) - I\_3(N) + I\_4(N)\right) \tag{15}$$

with

$$I\_1(N) = \int\_{\mathcal{U}(N)} f(\mathbf{x}, N) \exp\{-\frac{N}{2}(D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} (e^{-N\sigma(\mathbf{x}, N)} - 1)\, d\mathbf{x},$$

$$I\_2(N) = \int\_{\mathcal{U}(N)} (f(\mathbf{x}, N) - f(\mathbf{x}(N), N)) \exp\{-\frac{N}{2}(D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} \, d\mathbf{x},$$

$$I\_3(N) = f(\mathbf{x}(N), N) \int\_{\mathbf{R}^d \backslash \mathcal{U}(N)} \exp\{-\frac{N}{2}(D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} \, d\mathbf{x},$$

$$I\_4(N) = f(\mathbf{x}(N), N) \int\_{\mathbf{R}^d} \exp\{-\frac{N}{2}(D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} \, d\mathbf{x}.$$

It follows from Equation (14) that, for *x* ∈ *U*(*N*), *N*|*σ*(*x*, *N*)| ≤ *S*<sub>3</sub>/6. Using Equation (14) again and the trivial estimate |*e*<sup>*t*</sup> − 1| ≤ |*t*|*e*<sup>|*t*|</sup>, we conclude that, for *x* ∈ *U*(*N*),

$$|e^{-N\sigma(\mathbf{x},N)} - 1| \le \frac{1}{6}e^{S\_3/6}NS\_3|\mathbf{x} - \mathbf{x}(N)|^3. \tag{16}$$

Consequently,

$$|I\_1(N)| \le \frac{1}{6} e^{S\_3/6} S\_3 f\_0 N \int\_{\mathbf{R}^d} |y|^3 \exp\{-N\Lambda\_m |y|^2/2\} \, dy.$$

From the standard integral

$$\int\_{\mathbf{R}^d} |y|^k \exp\{-a|y|^2\} \, dy = \pi^{d/2} a^{-(k+d)/2} \frac{\Gamma((k+d)/2)}{\Gamma(d/2)},\tag{17}$$

we deduce that

$$|I\_1(N)| \le \frac{1}{6} \pi^{d/2} \frac{\Gamma((3+d)/2)}{\Gamma(d/2)} 2^{(3+d)/2} \Lambda\_m^{-(3+d)/2} f\_0 S\_3 e^{S\_3/6} N^{-(d+1)/2}.\tag{18}$$
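The standard integral of Equation (17) can itself be verified numerically; the sketch below (not from the paper) checks the case d = 1, k = 3, a = 2 against quadrature:

```python
import math

def simpson(h, a, b, n=4000):
    # Composite Simpson rule for the integral of h over [a, b]; n must be even.
    w = (b - a) / n
    total = h(a) + h(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * h(a + i * w)
    return total * w / 3

def moment_exact(k, a, d=1):
    # Right-hand side of Equation (17).
    return (math.pi ** (d / 2) * a ** (-(k + d) / 2)
            * math.gamma((k + d) / 2) / math.gamma(d / 2))

a, k = 2.0, 3
# Left-hand side for d = 1, truncated to [-10, 10] (the tails are negligible).
num = simpson(lambda y: abs(y) ** k * math.exp(-a * y * y), -10.0, 10.0)
print(num, moment_exact(k, a))   # both ≈ 0.25
```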

Next,

$$|I\_2(N)| \le f\_1 \int\_{\mathbf{R}^d} |y| \exp\{-N\Lambda\_m |y|^2/2\} \, dy,$$

or, using Equation (17) with *k* = 1,

$$|I\_2(N)| \le \pi^{d/2} \frac{\Gamma((1+d)/2)}{\Gamma(d/2)} 2^{(1+d)/2} \Lambda\_{\mathfrak{m}}^{-(1+d)/2} f\_1 N^{-(d+1)/2}.\tag{19}$$

Next,

$$|I\_3(N)| \le f\_0 \int\_{\{y:|y| \ge N^{-1/3}\}} \exp\{-N\Lambda\_m |y|^2/2\} dy$$

$$= f\_0 \exp\{-\Lambda\_m N^{1/3}/2\} \int\_{N^{-1/3}}^{\infty} \exp\{-N\Lambda\_m (r^2 - N^{-2/3})/2\} |S^{d-1}| r^{d-1} dr$$

where

$$|\mathcal{S}^{d-1}| = 2\frac{\pi^{d/2}}{\Gamma(d/2)}$$

is the area of the unit sphere in **R**<sup>*d*</sup>. Changing the variable *r* to *z* so that

$$z = N\Lambda\_m(r^2 - N^{-2/3})/2 \Longleftrightarrow r^2 = N^{-2/3} \left( 1 + \frac{2z}{\Lambda\_m N^{1/3}} \right),$$

and thus *dz* = *N*Λ*m<sup>r</sup> dr*, the last integral rewrites as

$$\frac{f\_0}{\Lambda\_m} N^{-(d+1)/3} \exp\{-\Lambda\_m N^{1/3}/2\} \int\_0^\infty e^{-z} \left(1 + \frac{2z}{N^{1/3}\Lambda\_m}\right)^{(d-2)/2} |S^{d-1}| \, dz \, $$

so that, using the inequality (1 + *ω*)<sup>*n*</sup> ≤ 2<sup>*n*</sup>(1 + *ω*<sup>*n*</sup>),

$$|I\_3(N)| \leq \frac{f\_0}{\Lambda\_m} N^{-(d+1)/3} \exp\{-\Lambda\_m N^{1/3}/2\} \frac{\pi^{d/2}}{\Gamma(d/2)} 2^{d/2} \left[1 + \left(\frac{2}{\Lambda\_m N^{1/3}}\right)^{(d-2)/2} \Gamma\left(\frac{d}{2}\right)\right]. \tag{20}$$

**Remark 4.** *For d* = 1 *we get simply*

$$|I\_3(N)| \le \frac{2f\_0}{\Lambda\_m} N^{-(d+1)/3} \exp\{-\Lambda\_m N^{1/3}/2\},$$

*and for d* = 2 *the same with* 2*π instead of* 2*.*

Finally, *I*<sub>4</sub> is calculated explicitly, giving the main term of the asymptotics:

$$I\_4(N) = f(\mathbf{x}(N), N) \left(\frac{2\pi}{N}\right)^{d/2} (\det D\_N)^{-1/2}.$$

Summarizing the estimates for all the integrals involved and performing elementary simplifications, in particular using Γ((*d* + 1)/2) < *d*Γ(*d*/2)/√2 and Γ(1 + *α*) = *α*Γ(*α*), yields the estimate of Equation (7).
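As a concrete numerical illustration of Proposition 1 (a sketch with an arbitrary test phase and amplitude, not taken from the paper), take d = 1, S(x) = x²/2 + x⁴ and f(x) = 1 + x² on Ω = (−1, 1), so that x(N) = 0 and D_N = 1, and compare I(N) with the leading term of Equation (6):

```python
import math

def simpson(h, a, b, n=20000):
    # Composite Simpson rule for the integral of h over [a, b]; n must be even.
    w = (b - a) / n
    total = h(a) + h(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * h(a + i * w)
    return total * w / 3

S = lambda x: x * x / 2 + x ** 4    # test phase: minimum at x(N) = 0, D_N = 1
f = lambda x: 1 + x * x             # test amplitude: f(0) = 1

def laplace_error(N):
    # Relative error of the leading term of Equation (6) for this example.
    exact = simpson(lambda x: f(x) * math.exp(-N * S(x)), -1.0, 1.0)
    leading = math.sqrt(2 * math.pi / N)  # exp{-N S(0)} (2 pi/N)^{1/2} f(0)/sqrt(D_N)
    return abs(exact / leading - 1)

print(laplace_error(50), laplace_error(200))
```

Since both S and f are even here, the error decays like 1/N, the situation of Proposition 2; for a generic amplitude one only gets the 1/√N rate of Proposition 1.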

**Proposition 2.** *Under (C1)–(C3) assume additionally that S is four times differentiable and f has a Lipschitz continuous first derivative with respect to x, with*

$$S\_4 = \sup\_{\mathbf{x} \in \Omega, N \ge N\_0} \| \frac{\partial^4 S(\mathbf{x}, N)}{\partial \mathbf{x}^4} \| < \infty, \quad f\_2 = \sup\_{\mathbf{x} \neq y, N \ge N\_0} \frac{|\frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}, N) - \frac{\partial f}{\partial \mathbf{x}}(y, N)|}{|\mathbf{x} - y|} < \infty.$$

*Then*

$$I(N) = \exp\{-NS(\mathbf{x}(N), N)\} \left(\frac{2\pi}{N}\right)^{d/2} \left[\frac{f(\mathbf{x}(N), N)}{\sqrt{\det D\_N}} + \frac{\omega(N)}{N}\right] + \omega^{\exp}(N),\tag{21}$$

*where the exponentially small term ω*<sup>exp</sup> *has exactly the same estimate as in the previous Proposition, and ω*(*N*) *is a bounded function depending on* Λ*<sub>m</sub>*, *f*<sub>0</sub>, *f*<sub>1</sub>, *f*<sub>2</sub>, *S*<sub>3</sub>, *S*<sub>4</sub>, *d. Explicitly,*

$$|\omega(N)| \le \max(1, \Lambda\_m^{-3-d/2})[f\_0 S\_3^2 d^3 e^{S\_3/6} + f\_0 S\_4 d^2 + f\_2 d + f\_2 S\_3 d^3 + f\_1 S\_3 d^2].\tag{22}$$

**Remark 5.** *The key difference in the error term here is the denominator N instead of* √*N in Equation* (6)*.*

**Proof.** We again decompose *I*(*N*) into the sum *I*(*N*) = *I*′(*N*) + *I*″(*N*) with *I*′(*N*), *I*″(*N*) given by Equation (5), and estimate *I*″(*N*) by Equation (12). The estimation of *I*′(*N*) needs a more careful analysis using further terms of the Taylor expansions of *S* and *f*. Namely, we decompose it first as

$$I'(N) = \exp\{-NS(x(N), N)\} (I\_1(N) + I\_2(N))\tag{23}$$

with

$$I\_1(N) = \int\_{\mathcal{U}(N)} f(\mathbf{x}, N) \exp\{-\frac{N}{2} (D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} \left[e^{-N\sigma(\mathbf{x}, N)} - 1 + N\sigma(\mathbf{x}, N)\right] d\mathbf{x},$$

$$I\_2(N) = \int\_{\mathcal{U}(N)} f(\mathbf{x}, N) \exp\{-\frac{N}{2} (D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} \left[1 - N\sigma(\mathbf{x}, N)\right] d\mathbf{x}.$$

From Equation (14) we get

$$|e^{-N\sigma(\mathbf{x},N)} - 1 + N\sigma(\mathbf{x}, N)| \le \frac{1}{2} (N|\sigma(\mathbf{x}, N)|)^2 e^{N|\sigma(\mathbf{x}, N)|} \le \frac{1}{2} N^2 (S\_3/6)^2 |\mathbf{x} - \mathbf{x}(N)|^6 e^{S\_3/6}.$$

Consequently,

$$|I\_1(N)| \le \frac{f\_0 S\_3^2}{72} e^{S\_3/6} N^2 \int\_{\mathbf{R}^d} |y|^6 \exp\{-N\Lambda\_m |y|^2/2\} \, dy.$$

From Equation (17) with *k* = 6 we deduce that

$$|I\_1(N)| \le \frac{f\_0 S\_3^2}{72} e^{S\_3/6} \pi^{d/2} \frac{\Gamma((6+d)/2)}{\Gamma(d/2)} \left(\frac{2}{\Lambda\_m}\right)^{(6+d)/2} N^{-(d+2)/2}$$

$$= \frac{f\_0 S\_3^2}{72} e^{S\_3/6} (2\pi)^{d/2} \frac{d(d+2)(d+4)}{\Lambda\_m^{3+d/2}} N^{-(d+2)/2}.\tag{24}$$

To evaluate *<sup>I</sup>*2(*N*) we use the Taylor expansion of *S* to the fourth order yielding

$$\sigma(\mathbf{x}, N) = \frac{1}{6} \frac{\partial^3 S}{\partial \mathbf{x}^3} (\mathbf{x}(N), N) [\mathbf{x} - \mathbf{x}(N)] + \tilde{\sigma}(\mathbf{x}, N)$$

with

$$|\tilde{\sigma}(\mathbf{x}, N)| \le \frac{1}{24} S\_4 |\mathbf{x} - \mathbf{x}(N)|^4.$$

Consequently, *<sup>I</sup>*2(*N*) can be represented as *<sup>I</sup>*2(*N*) = *J*1(*N*) + *J*2(*N*) with

$$J\_1(N) = -N \int\_{\mathcal{U}(N)} f(\mathbf{x}, N) \exp\{-\frac{N}{2} (D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} \tilde{\sigma}(\mathbf{x}, N) \, d\mathbf{x},$$
 

$$J\_2(N) = \int\_{\mathcal{U}(N)} f(\mathbf{x}, N) \exp\{-\frac{N}{2} (D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} \left[1 - \frac{N}{6} \frac{\partial^3 S}{\partial \mathbf{x}^3}(\mathbf{x}(N), N)[\mathbf{x} - \mathbf{x}(N)]\right] d\mathbf{x}.$$

Using the estimate for *σ̃* we obtain

$$|J\_1(N)| \le \frac{1}{24} N f\_0 S\_4 \int\_{\mathbf{R}^d} \exp\{-N\Lambda\_m |y|^2/2\} |y|^4 \, dy$$

$$= \frac{1}{24} f\_0 S\_4 \pi^{d/2} \frac{\Gamma((4+d)/2)}{\Gamma(d/2)} \left(\frac{2}{\Lambda\_m}\right)^{(4+d)/2} N^{-(d+2)/2}$$

$$= \frac{1}{24} f\_0 S\_4 (2\pi)^{d/2} \frac{d}{2} (\frac{d}{2}+1) \frac{4}{\Lambda\_m^{(4+d)/2}} N^{-(d+2)/2}.$$

To evaluate *J*2 we expand *f* in Taylor series writing

$$f(\mathbf{x}, N) = f(\mathbf{x}(N), N) + \left(\frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(N), N), \mathbf{x} - \mathbf{x}(N)\right)$$

$$+ [f(\mathbf{x}, N) - f(\mathbf{x}(N), N) - \left(\frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(N), N), \mathbf{x} - \mathbf{x}(N)\right)].$$

Substituting this into *J*<sub>2</sub> and using the fact that the integral of an odd function over a ball centered at the origin vanishes, we get

$$J\_2(N) = J\_{21}(N) + J\_{22}(N) + J\_{23}(N)$$

with

$$J\_{21}(N) = \int\_{\mathcal{U}(N)} \left[ f(\mathbf{x}, N) - f(\mathbf{x}(N), N) - \left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(N), N), \mathbf{x} - \mathbf{x}(N) \right) \right]$$

$$\times \exp\{-\frac{N}{2} (D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N))\} \left[ 1 - \frac{N}{6} \frac{\partial^3 S}{\partial \mathbf{x}^3}(\mathbf{x}(N), N)[\mathbf{x} - \mathbf{x}(N)] \right] d\mathbf{x},$$

$$J\_{22}(N) = -\int\_{\mathcal{U}(N)} \frac{N}{6} \left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(N), N), \mathbf{x} - \mathbf{x}(N) \right) \frac{\partial^3 S}{\partial \mathbf{x}^3}(\mathbf{x}(N), N)[\mathbf{x} - \mathbf{x}(N)]$$

$$\times \exp\{-\frac{N}{2} (D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N)) \} \, d\mathbf{x},$$

$$J\_{23}(N) = \int\_{\mathcal{U}(N)} f(\mathbf{x}(N), N) \exp\{-\frac{N}{2} (D\_N(\mathbf{x} - \mathbf{x}(N)), \mathbf{x} - \mathbf{x}(N)) \} \, d\mathbf{x}.$$

The first two integrals are estimated as above, that is

$$|J\_{21}(N)| \leq \int\_{\mathbf{R}^d} \frac{1}{2} f\_2 |y|^2 \left(1 + \frac{NS\_3}{6} |y|^3\right) \exp\{-N\Lambda\_m |y|^2/2\} \, dy$$

$$= \frac{1}{2} f\_2 \pi^{d/2} \left[\frac{d}{2} \left(\frac{2}{\Lambda\_m}\right)^{(2+d)/2} N^{-(d+2)/2} + \frac{S\_3}{6} \left(\frac{2}{\Lambda\_m}\right)^{(5+d)/2} \frac{\Gamma((5+d)/2)}{\Gamma(d/2)} N^{-(d+3)/2}\right]$$


and

$$|J\_{22}(N)| \leq \int\_{\mathbf{R}^d} \frac{1}{6} N f\_1 S\_3 |y|^4 \exp\{-N\Lambda\_m |y|^2/2\} \, dy$$

$$=\frac{1}{6}f\_1\mathbb{S}\_3\pi^{d/2}\left(\frac{2}{\Lambda\_\mathrm{m}}\right)^{(4+d)/2}N^{-(d+2)/2}\frac{\Gamma((4+d)/2)}{\Gamma(d/2)}.$$

Finally, *J*<sub>23</sub>(*N*) is estimated as in Proposition 1, by representing it as the difference between the integral over the whole space **R**<sup>*d*</sup> and the integral over **R**<sup>*d*</sup> \ *U*(*N*), the first term yielding the main term of the asymptotics and the second one being exponentially small. The exponentially small terms are exactly the same as in the previous Proposition. Summarizing the estimates obtained and slightly simplifying yields Equation (22).

#### **3. LLN and CLT for Internal Minima of the Phase**

**Theorem 1.** *Let* Ω *be a bounded open subset of* **R**<sup>*d*</sup> *and let f*(*x*, *N*)*, S*(*x*, *N*) *be continuous functions on* Ω × [*N*<sub>0</sub>, ∞) *satisfying the conditions of Proposition 1. Assume that f*(*x*, *N*) *is strictly positive and that the sequence of global minima x*(*N*) *converges, as N* → ∞*, to a point x*<sub>0</sub> *belonging to the interior of* Ω*.*

*Let ξ<sub>N</sub> denote an* Ω*-valued random variable having density φ<sub>N</sub>*(*x*) *proportional to f*(*x*, *N*) exp{−*NS*(*x*, *N*)}*, that is,*

$$\phi\_N(\mathbf{x}) = f(\mathbf{x}, N) \exp\{-NS(\mathbf{x}, N)\} \left(\int\_{\Omega} f(\mathbf{x}, N) \exp\{-NS(\mathbf{x}, N)\} \,d\mathbf{x}\right)^{-1}$$

*(i) Then ξN weakly converge to x*0*. More explicitly, for a smooth g, one has*

$$|\mathbb{E}g(\xi\_N) - g(\mathbf{x}\_0)| \le \left(\frac{c\_1}{\sqrt{N}} + |\mathbf{x}(N) - \mathbf{x}\_0|\right) \|g\|\_{C^1(\Omega)}\tag{25}$$

*with a constant c*<sub>1</sub> *depending on f*<sub>0</sub>, Λ*<sub>m</sub>*, *S*<sub>3</sub>, *d, and f<sub>m</sub>* = min<sub>*x*∈Ω</sub> *f*(*x*)*, which can be explicitly derived from Equations* (7) *and* (8)*.*

*(ii) If additionally S satisfies the conditions of Proposition 2, then*

$$|\mathbb{E}g(\xi\_N) - g(\mathbf{x}\_0)| \le \frac{c\_2}{N} \|g\|\_{C^2(\Omega)} + |\mathbf{x}(N) - \mathbf{x}\_0|\, \|g\|\_{C^1(\Omega)}, \tag{26}$$

*with a constant c*2 *depending on f*0, *f*1, Λ*<sup>m</sup>*, *S*3, *S*4, *d and fm.*

**Proof.** From Propositions 1 and 2 we conclude that

$$|\mathbb{E}\mathbf{g}(\xi\_N) - \mathbf{g}(\mathbf{x}(N))| \le \frac{c\_1}{\sqrt{N}} \|\mathbf{g}\|\_{\mathbb{C}^1(\Omega)}\tag{27}$$

and

$$\left| \mathbb{E} \mathbf{g}(\xi\_N) - \mathbf{g}(\mathbf{x}(N)) \right| \le \frac{c\_2}{N} \| \mathbf{g} \|\_{\mathbb{C}^2(\Omega)}\tag{28}$$

in cases (i) and (ii) respectively. The estimates of Equations (25) and (26) are then obtained from the triangle inequality.

Next we are interested in the convergence of the normalized fluctuations of *ξ<sub>N</sub>* around *x*<sub>0</sub>, namely, of the random variables

$$
\eta\_N = \sqrt{N}(\xi\_N - \mathbf{x}\_0). \tag{29}
$$

To simplify the formulas below we shall assume that *f*(*x*, *N*) = 1, but everything remains valid for a general *f* satisfying the assumptions above.

To analyze the fluctuations, we use their moment generating functions

$$M\_{\rm N}(p) = \mathbb{E}\exp\{(p, \eta\_{\rm N})\} = \frac{\int\_{\Omega} \exp\{-NS(\mathbf{x}, \mathbf{N}) + \sqrt{N}(p, \mathbf{x} - \mathbf{x}\_{0})\} \,d\mathbf{x}}{\int\_{\Omega} \exp\{-NS(\mathbf{x}, \mathbf{N})\} \,d\mathbf{x}} \tag{30}$$

for *p* ∈ **R***d*.

The numerator in Equation (30) can be written in the form of Equation (1) as

$$I(p) = \int\_{\Omega} \exp\{-N\left(S(\mathbf{x}, N) - \frac{1}{\sqrt{N}}(p, \mathbf{x} - \mathbf{x}\_0)\right)\} \, d\mathbf{x} = \int\_{\Omega} \exp\{-NS^\*(\mathbf{x}, N)\} \, d\mathbf{x}$$

where the new phase is

$$S^\*(\mathbf{x}, N) = S(\mathbf{x}, N) - \frac{1}{\sqrt{N}}(p, \mathbf{x} - \mathbf{x}\_0).$$

To shorten the notation, we shall denote by primes the derivatives of *S* or *S*<sup>∗</sup> with respect to the variable *x*. *S*<sup>∗</sup> is convex, as *S* is, and has the same derivatives of order 2 and higher as *S*. To apply the Laplace method we need to find its point of global minimum, which coincides with its (unique) critical point; we denote it by *x*<sup>∗</sup> = *x*<sup>∗</sup>(*p*, *N*), and it solves the equation

$$(S^\*)'(\mathbf{x}^\*, N) = 0 \Longleftrightarrow S'(\mathbf{x}^\*, N) = p/\sqrt{N}.\tag{31}$$

As a preliminary step to proving our CLT, let us perform some elementary analysis of this equation, proving its well-posedness and finding its dependence on *N* in the first approximation. We shall need the following elementary result.

**Lemma 1.** *Let S*(*x*) *be a smooth convex function on* **R**<sup>*d*</sup> *such that S*″(*x*) ≥ Λ*<sub>m</sub> everywhere and S*′(*x*<sub>0</sub>) = 0*. Then for any K the mapping z* ↦ *S*′(*x*<sub>0</sub> + *z*) *is a diffeomorphism of the ball B<sub>K</sub>* = {*z* : |*z*| ≤ *K*} *onto its image, and this image contains the ball B*<sub>*K*Λ<sub>*m*</sub></sub>*.*

**Proof.** Injectivity is straightforward from convexity. Let us prove the last statement, that is, that for any *y* ∈ *B*<sub>*K*Λ<sub>*m*</sub></sub> there exists *z* ∈ *B<sub>K</sub>* such that *S*′(*x*<sub>0</sub> + *z*) = *y*. For any *α* > 0, this claim is equivalent to the existence of a fixed point of the mapping

$$\Phi(z) = z - \alpha(S'(\mathbf{x}\_0 + z) - y) = z - \alpha \int\_0^1 S''(\mathbf{x}\_0 + sz)z \, ds + \alpha y$$

in *B<sub>K</sub>*. By Brouwer's fixed point theorem, to show the existence of a fixed point it is sufficient to show that Φ maps *B<sub>K</sub>* to itself, that is, |Φ(*z*)| ≤ *K* whenever |*z*| ≤ *K*. Let

$$\Lambda\_M = \sup\_{z \in B\_K} \| S''(\mathbf{x}\_0 + z) \|$$

and take *α* = 1/Λ*<sub>M</sub>*. Then the symmetric matrix *B* = 1 − *α* $\int\_0^1 S''(\mathbf{x}\_0 + sz) \, ds$ satisfies 0 ≤ *B* ≤ 1 − Λ*<sub>m</sub>*/Λ*<sub>M</sub>* for all *z* ∈ *B<sub>K</sub>*. Hence, if *z* ∈ *B<sub>K</sub>* we have

$$\|\Phi(z)\| \le (1 - \frac{\Lambda\_m}{\Lambda\_M})K + \frac{||y||}{\Lambda\_M}.$$

Hence the inequality |Φ(*z*)| ≤ *K* is fulfilled whenever |*y*| ≤ Λ*<sub>m</sub>K*, as was claimed.

Thus the image of the set *U*(*N*) under *S*′(·, *N*) contains the ball of radius Λ*<sub>m</sub>N*<sup>−1/3</sup>, so that for every *y* with |*y*| ≤ Λ*<sub>m</sub>N*<sup>−1/3</sup> there exists a unique *x* ∈ *U*(*N*) such that *S*′(*x*, *N*) = *y*.

On the other hand, for any *K* we can take *N*1 = max(*<sup>N</sup>*0,(*K*/Λ*m*)<sup>6</sup>), which is such that

$$\frac{|p|}{\sqrt{N}} < \Lambda\_m N^{-1/3}$$

for all *N* > *N*1 and |*p*| ≤ *K*. Consequently, by Lemma 1, for such *p* and *N*, there exists a unique solution *x*<sup>∗</sup> = *<sup>x</sup>*<sup>∗</sup>(*p*, *N*) of Equation (31) in Ω, and *x*<sup>∗</sup> ∈ *<sup>U</sup>*(*N*), i.e.,

$$|x^\*-x(N)| \le N^{-1/3}.\tag{32}$$

Next, expanding *S*′(*x*, *N*) in the Taylor series around *x*(*N*) (where *S*′(*x*(*N*), *N*) = 0), we find from Equation (31) that

$$|S^{\prime\prime}(\mathbf{x}(N),N)(\mathbf{x}^\*-\mathbf{x}(N))-\frac{p}{\sqrt{N}}| \le \mathbf{S}\_3|\mathbf{x}^\*-\mathbf{x}(N)|^2,\tag{33}$$

and thus

$$|D\_N(\mathbf{x}^\* - \mathbf{x}(N)) - \frac{p}{\sqrt{N}}| \le \mathbb{S}\_3 N^{-2/3} \tag{34}$$

(recall that we denote *D<sub>N</sub>* = *S*″(*x*(*N*), *N*)).

This allows us to improve the preliminary estimate of Equation (32) and to obtain

$$|\mathbf{x}^\* - \mathbf{x}(N)| \le \|D\_N^{-1}\| \left( \frac{|p|}{\sqrt{N}} + \frac{S\_3}{N^{2/3}} \right) \le \frac{|p| + S\_3}{\Lambda\_m \sqrt{N}}.\tag{35}$$

Hence from Equation (33) we get

$$|D\_N(\mathbf{x}^\* - \mathbf{x}(N)) - \frac{p}{\sqrt{N}}| \le \frac{S\_3(|p| + S\_3)^2}{\Lambda\_m^2 N}. \tag{36}$$

Finally we conclude that

$$\mathbf{x}^\*(p, N) = \mathbf{x}(N) + \frac{1}{\sqrt{N}} D\_N^{-1} p + \frac{\epsilon}{N} \tag{37}$$

with

$$|\epsilon| \le \frac{S\_3(|p| + S\_3)^2}{\Lambda\_m^3}. \tag{38}$$
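The expansion of Equation (37) can be probed numerically in a simple one-dimensional example (a sketch with a toy phase, not from the paper): for S(x) = x²/2 + x³/10 one has x(N) = 0 and D_N = 1, and N|x*(p, N) − p/√N| should stay bounded:

```python
import math

dS = lambda x: x + 0.3 * x * x   # S'(x) for the toy phase S(x) = x^2/2 + x^3/10

def solve_tilted(p, N, lo=-0.5, hi=0.5):
    # Bisection for Equation (31): S'(x*) = p / sqrt(N); S' is increasing here.
    target = p / math.sqrt(N)
    for _ in range(200):
        mid = (lo + hi) / 2
        if dS(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = 1.0
# N |x*(p, N) - D_N^{-1} p / sqrt(N)| should stay bounded, cf. Equation (37):
errs = [abs(solve_tilted(p, N) - p / math.sqrt(N)) * N for N in (100, 400, 1600)]
print(errs)   # ≈ 0.28-0.30 for p = 1
```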

We can now prove a convergence result that can be called the *CLT for Laplace integrals*.

**Theorem 2.** *Under the assumptions of Theorem 1 (i), assume additionally that x*(*N*) *converges to x*<sub>0</sub> *quickly enough, that is,*

$$|\mathbf{x}(N) - \mathbf{x}\_0| \le cN^{-\delta - 1/2} \tag{39}$$

*with positive constants c*, *δ. Then the fluctuations ηN* = √*N*(*ξN* − *<sup>x</sup>*0) *converge weakly to a centered Gaussian random variable with the moment generating function*

$$M(p) = \exp\{\frac{1}{2}(p\_\prime D\_N^{-1}p)\}.\tag{40}$$

**Proof.** We show that the moment generating functions of the fluctuations *η<sub>N</sub>* given by Equation (30) converge, as *N* → ∞, to the function *M*(*p*), the convergence being uniform on bounded subsets of *p*. By the well known characterization of weak convergence, this will imply the weak convergence of the fluctuations *η<sub>N</sub>*.

Applying Proposition 1 to the numerator and denominator of the r.h.s. of Equation (30) we get, for *N* > *N*0,

$$M\_{N}(p) = \frac{\exp\{-NS^\*(\mathbf{x}^\*(p,N),N)\}}{\exp\{-NS(\mathbf{x}(N),N)\}} \frac{\sqrt{\det D\_N}}{\sqrt{\det S''(\mathbf{x}^\*(p,N),N)}} \left(1 + \frac{\omega(N,p)}{\sqrt{N}}\right),\tag{41}$$

where *ω* is a bounded function, with a bound, depending on *S*3, Λ*<sup>m</sup>*, *p*, *d*, that can be found explicitly from Equation (7).

We have

$$S(\mathbf{x}^\*(p, N), N) = S\left(\mathbf{x}(N) + \frac{1}{\sqrt{N}} D\_N^{-1} p + \frac{\epsilon}{N}, N\right)$$

$$= S(\mathbf{x}(N), N) + \frac{1}{2} \left( D\_N \left(\frac{1}{\sqrt{N}} D\_N^{-1} p + \frac{\epsilon}{N}\right), \frac{1}{\sqrt{N}} D\_N^{-1} p + \frac{\epsilon}{N} \right) + \phi N^{-3/2}$$

with

$$|\phi| \le S\_3 \left| D\_N^{-1} p + \frac{\epsilon}{\sqrt{N}} \right|^3,$$

and consequently

$$S(\mathbf{x}^\*(p,N),N) = S(\mathbf{x}(N),N) + \frac{1}{2N}(p,D\_N^{-1}p) + \frac{1}{N^{3/2}}(p,\epsilon) + \frac{1}{2N^2}(D\_N\epsilon,\epsilon) + \phi N^{-3/2}.$$

Therefore,

$$S^\*(\mathbf{x}^\*(p,N),N) = S(\mathbf{x}^\*(p,N),N) - \frac{1}{\sqrt{N}} \left(p, \mathbf{x}(N) + \frac{1}{\sqrt{N}} D\_N^{-1} p + \frac{\epsilon}{N} - \mathbf{x}\_0\right)$$

$$= S(\mathbf{x}(N),N) - \frac{1}{2N} (p, D\_N^{-1} p) + \frac{1}{2N^2} (D\_N \epsilon, \epsilon) + \frac{\phi}{N^{3/2}} - \frac{1}{\sqrt{N}} (p, \mathbf{x}(N) - \mathbf{x}\_0).$$

Using Equation (39) we conclude that

$$\left| N[S(\mathbf{x}(N), N) - S^\*(\mathbf{x}^\*(p, N), N)] - \frac{1}{2} (p, D\_N^{-1} p) \right| \le c \left( N^{-1/2} + N^{-\delta} \right),$$

where the constant *c* depends on *p*, *S*3, Λ*<sup>m</sup>*, Λ*M*, *d*.

Next, from Equation (35) we get

$$\left\| S''(\mathbf{x}(N), N) - S''(\mathbf{x}^\*(p, N), N) \right\| \le S\_3 \frac{|p| + S\_3}{\Lambda\_m \sqrt{N}},$$

so that

$$\left|\frac{\sqrt{\det D\_N}}{\sqrt{\det S''(\mathbf{x}^\*(p,N),N)}} - 1\right| \le \frac{c}{\sqrt{N}}\tag{42}$$

with another constant *c* depending on *p*, *S*3, Λ*<sup>m</sup>*, Λ*M*, *d*. Consequently, we deduce from Equation (41) that

$$M\_N(p) = \exp\{\frac{1}{2}(p, D\_N^{-1}p) + \frac{c(N, p)}{\sqrt{N}}\} \left(1 + \frac{\omega(N, p)}{\sqrt{N}}\right) \tag{43}$$

with some functions *c*, *ω*, which are bounded on bounded subsets of *p*, implying the required convergence of the functions *MN*(*p*).
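In the simplest one-dimensional setting the convergence of the moment generating functions can be checked directly (a numerical sketch, not from the paper): with S(x, N) = x²/2 on Ω = (−1, 1) we have x(N) = x₀ = 0 and D_N = 1, so M_N(p) should be close to exp{p²/2} up to exponentially small corrections:

```python
import math

def simpson(h, a, b, n=20000):
    # Composite Simpson rule for the integral of h over [a, b]; n must be even.
    w = (b - a) / n
    total = h(a) + h(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * h(a + i * w)
    return total * w / 3

def mgf(p, N):
    # Equation (30) with S(x, N) = x^2/2, x_0 = 0, Omega = (-1, 1).
    num = simpson(lambda x: math.exp(-N * x * x / 2 + math.sqrt(N) * p * x), -1.0, 1.0)
    den = simpson(lambda x: math.exp(-N * x * x / 2), -1.0, 1.0)
    return num / den

p = 1.0
limit = math.exp(p * p / 2)   # Equation (40) with D_N = 1
print(mgf(p, 100), limit)     # both ≈ 1.6487
```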

#### **4. Phase Minimum on the Border of the Domain of Integration**

Here we present the estimates of the remainder in the asymptotic formula for the Laplace integrals with the critical point of the phase lying on the boundary of the domain of integration.

Let us start with a simple one-dimensional result, which is a version of the well known Watson lemma. The proof can be carried out as above, by decomposing the domain of integration [0, *a*] into the two intervals [0, *N*<sup>−1/2</sup>] and [*N*<sup>−1/2</sup>, *a*]. We omit the details of the proof.

**Lemma 2.** *Let <sup>S</sup>*(*<sup>x</sup>*, *N*) *and f*(*<sup>x</sup>*, *N*) *be two continuous functions on the domain* {*x* ∈ [0, *a*], *N* ≥ 1} *with a* > 0*. Let f be continuously differentiable and S be twice continuously differentiable with respect to x, with the uniform bounds*

$$|S''(\mathbf{x}, N)| \le s\_2, \quad |f(\mathbf{x}, N)| \le f\_0, \quad |f'(\mathbf{x}, N)| \le f\_1,$$

*and the lower bound*

$$S'(\mathbf{x}, N) \ge s\_1$$

*with some strictly positive constants s*<sub>1</sub>, *s*<sub>2</sub>, *f*<sub>0</sub>, *f*<sub>1</sub>*, where primes denote derivatives with respect to x. Then, for the Laplace integral*

$$I(N) = \int\_0^a \exp\{-NS(x, N)\} f(x, N) \, dx,$$

*we have the asymptotic expression*

$$I(N) = \frac{\exp\{-NS(0, N)\}}{NS'(0, N)} \left( f(0, N) + \frac{\omega(N)}{N} \right) + \omega^{\exp}(N), \tag{44}$$

*where*

$$|\omega(N)| \le \frac{f\_1}{S'(0, N)} + \frac{f\_0}{(S'(0, N))^2} s\_2 e^{s\_2/2},$$

$$|\omega^{\exp}(N)| \le 2af\_0 \exp\{-NS(0, N)\} \exp\{-s\_1 \sqrt{N}\}.$$

**Remark 6.** *One can obtain a similar result by decomposing* [0, *a*] = [0, *N*<sup>−*γ*</sup>] ∪ [*N*<sup>−*γ*</sup>, *a*] *for any γ* ∈ [1/2, 1)*, in which case the exponentially small term gets the estimate*

$$|\omega^{\exp}(N)| \le 2af\_0 \exp\{-NS(0, N)\} \exp\{-s\_1 N^{1-\gamma}\}.$$

*This also shows that Lemma 2 remains essentially valid for small a of order a* = *N*<sup>−*γ*</sup>*, γ* < 1*, which is used in the proof of the next result.*
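The leading term of Lemma 2 is easy to probe numerically. The following minimal sketch uses the hypothetical choices S(x) = x + x²/2 and f(x) = 1/(1 + x) (so that S(0) = 0, S′(0) = 1, f(0) = 1) and compares a brute-force evaluation of the Laplace integral with the main term exp{−NS(0)}f(0)/(NS′(0)) of Equation (44); the relative error shrinks like the ω(N)/N correction.

```python
import math

def laplace_integral(S, f, a, N, steps=200_000):
    """Midpoint-rule evaluation of I(N) = ∫_0^a exp{-N S(x)} f(x) dx."""
    h = a / steps
    return sum(math.exp(-N * S((i + 0.5) * h)) * f((i + 0.5) * h)
               for i in range(steps)) * h

S = lambda x: x + 0.5 * x * x    # hypothetical phase: S(0) = 0, S'(0) = 1
f = lambda x: 1.0 / (1.0 + x)    # hypothetical amplitude: f(0) = 1

for N in (10, 100, 1000):
    numeric = laplace_integral(S, f, 1.0, N)
    leading = math.exp(-N * S(0.0)) * f(0.0) / (N * 1.0)  # S'(0, N) = 1
    print(N, numeric, leading, abs(numeric / leading - 1.0))
```

The printed relative error decreases roughly like 1/*N*, consistent with the error term of Lemma 2.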

Let us turn to the general case. Namely, assume Ω is a bounded open set in **R**<sup>*d*+1</sup>. The coordinates in **R**<sup>*d*+1</sup> will be denoted (*x*, *y*) with *x* ∈ **R**, *y* ∈ **R**<sup>*d*</sup>. Let

$$\Omega\_+ = \{(x, y) \in \Omega : x \ge \psi(y)\},\tag{45}$$

with some smooth function *ψ*. It will be convenient to introduce the sections of Ω as the sets

$$
\Omega(\mathbf{x}) = \{ y : (\mathbf{x}, y) \in \Omega \}.
$$

We are interested in the asymptotics of the Laplace integral

$$I(N) = \int\_{\Omega\_{+}} f(\mathbf{x}, \mathbf{y}, N) \exp\{-NS(\mathbf{x}, \mathbf{y}, N)\} \, d\mathbf{x} dy, \quad N > N\_{0} \tag{46}$$

with continuous functions *f* and *S*, referred to as the amplitude and the phase, respectively.

Let us first discuss the case of Ω+ with a plane boundary, that is, with *ψ*(*y*) = 0, or equivalently with

$$
\Omega\_+ = \{(x, y) \in \Omega : x \ge 0\}.\tag{47}
$$

We shall assume the following:

(C1') *f*(*x*, *y*, *N*) is a continuously differentiable function on Ω+ (up to the boundary) with

$$f\_0 = \sup\_{(x,y)\in\Omega\_+, N\ge N\_0} |f(x,y,N)| < \infty,\quad f\_1 = \sup\_{(x,y)\in\Omega\_+, N\ge N\_0} \left( |\frac{\partial f}{\partial x}| + |\frac{\partial f}{\partial y}| \right) < \infty;$$

(C2') *S*(*x*, *y*, *N*) is a thrice continuously differentiable function of *x* and *y* such that

$$\frac{\partial^2 S}{\partial y^2}(\mathbf{x}, \mathbf{y}, \mathbf{N}) \ge \Lambda\_m$$

(where ≥ is the usual order on symmetric matrices) and

$$\frac{\partial S}{\partial x}(x, y, N) \ge g\_m$$

with positive constants Λ<sub>*m*</sub>, *g*<sub>*m*</sub>, and

$$S\_2 = \sup\_{(x,y)\in\Omega\_+, N\geq N\_0} \max\left( \left\|\frac{\partial^2 S}{\partial x^2}\right\|, \left\|\frac{\partial^2 S}{\partial x \partial y}\right\|, \left\|\frac{\partial^2 S}{\partial y^2}\right\|\right) < \infty,$$

$$S\_3 = \sup\_{(x,y)\in\Omega\_+, N\geq N\_0} \max\left( \left\|\frac{\partial^3 S}{\partial x^3}\right\|, \left\|\frac{\partial^3 S}{\partial x^2 \partial y}\right\|, \left\|\frac{\partial^3 S}{\partial x \partial y^2}\right\|, \left\|\frac{\partial^3 S}{\partial y^3}\right\|\right) < \infty.$$

**Remark 7.** *As was noted above, the norms of higher derivatives in the estimates that we are using are their norms as multi-linear operators. For instance, the norm of ∂*<sup>2</sup>*S*(*x*, *y*, *N*)/*∂x∂y is the minimum of the constants α such that*

$$\left| \sum\_{j=1}^d \frac{\partial^2 S(x, y, N)}{\partial x \partial y\_j}\, x\, y\_j \right| \le \alpha |x| |y|.$$

(C3') For any *N* > *N*<sub>0</sub>, there exists a unique point of global minimum of *S* in Ω+; this point lies on the boundary {*x* = 0}, i.e., it has coordinates (0, *y*(*N*)) with some *y*(*N*) ∈ **R**<sup>*d*</sup>, and the box

$$\mathcal{U}(N) = \{(x, y) : x \in [0, N^{-2/3}], \ |y - y(N)| \le N^{-1/3} \}\tag{48}$$

is contained in Ω+. We shall also use the sections

$$\mathcal{U}(x, N) = \{ y : (x, y) \in \mathcal{U}(N) \}.$$

Let us denote by *D*<sub>*N*</sub> the matrix of the second derivatives of *S* as a function of *y* at (0, *y*(*N*), *N*), and by *g*<sub>*N*</sub> the derivative of *S* with respect to *x* at (0, *y*(*N*), *N*), that is

$$D\_N = \frac{\partial^2 S}{\partial y^2}(0, y(N), N), \quad g\_N = \frac{\partial S}{\partial x}(0, y(N), N). \tag{49}$$

Our approach is to decompose the integral *I*(*N*) into the sum of two integrals

$$I(N) = I'(N) + I''(N),$$

over the sets {*x* ≤ *N*<sup>−2/3</sup>} and {*x* > *N*<sup>−2/3</sup>}, to represent the first integral as a double integral, so that

$$I''(N) = \int\_{\Omega \cap \{(x, y) : x > N^{-2/3}\}} f(x, y, N) \exp\{-NS(x, y, N)\} \, dx\, dy,\tag{50}$$

$$I'(N) = \int\_0^{N^{-2/3}} I(x, N)\, dx, \quad I(x, N) = \int\_{\Omega(x)} f(x, y, N) \exp\{-NS(x, y, N)\} \, dy,\tag{51}$$

and to use Proposition 1 for the estimation of *I*(*x*, *N*), *x* ∈ [0, *N*<sup>−2/3</sup>], and finally Lemma 2 to estimate *I*′(*N*).

**Theorem 3.** *Under the assumptions (C1')–(C3'), the formula*

$$I(N) = \exp\{-NS(0, y(N), N)\} \left(\frac{2\pi}{N}\right)^{d/2} \frac{1}{N} \left[\frac{f(0, y(N))}{g\_N \sqrt{\det D\_N}} + \frac{\omega(N)}{\sqrt{N}}\right] + \omega^{\exp}(N) \tag{52}$$

*holds for* Ω+ *from Equation* (47) *and N* > *N*<sub>1</sub> = max(*N*<sub>0</sub>, (2*S*<sub>2</sub>/Λ<sub>*m*</sub>)<sup>3</sup>)*, where ω*<sup>exp</sup>(*N*) *is an exponentially small term and*

$$|\omega(N)| \le \frac{1}{g\_m \Lambda\_m^{d/2}} \left[ f\_1 \left( 1 + \frac{d}{\Lambda\_m} \right) + f\_0 d \max(S\_3, S\_2) e^{\max(S\_3, S\_2)} \left( 1 + \frac{1}{\Lambda\_m^2} + \frac{1}{g\_m^2} \right) \right]. \tag{53}$$
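To make the main term of Equation (52) concrete, here is a minimal numerical sketch for *d* = 1 with the hypothetical phase S(x, y) = x + y²/2 + xy² and amplitude f ≡ 1 on the rectangle [0, 1] × [−2, 2]: the minimum sits at the boundary point (0, 0), with g_N = 1 and det D_N = 1, so the main term is (2π/N)^{1/2}/N.

```python
import numpy as np

def I_numeric(N, a=1.0, b=2.0, nx=2000, ny=2000):
    """Midpoint-rule evaluation of the double Laplace integral over [0,a]×[-b,b]."""
    x = (np.arange(nx) + 0.5) * (a / nx)
    y = (np.arange(ny) + 0.5) * (2 * b / ny) - b
    X, Y = np.meshgrid(x, y, indexing="ij")
    S = X + 0.5 * Y**2 + X * Y**2          # hypothetical phase, minimum at (0, 0)
    return np.exp(-N * S).sum() * (a / nx) * (2 * b / ny)

def I_leading(N, d=1, g=1.0, detD=1.0, f0=1.0):
    # main term of Equation (52): exp{-N S(0,y(N))} (2π/N)^{d/2} (1/N) f/(g √det D_N)
    return (2 * np.pi / N) ** (d / 2) / N * f0 / (g * np.sqrt(detD))

for N in (10, 40, 160):
    num, lead = I_numeric(N), I_leading(N)
    print(N, num, lead, abs(num / lead - 1.0))
```

The relative discrepancy decays roughly like 1/*N*, in line with the ω(*N*)/√*N* correction inside the bracket of Equation (52).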

**Proof.** The integral *I*′′(*N*) from Equation (50) clearly yields an exponentially small contribution, similar to the corresponding integral in the proof of Proposition 1, so we omit the details here.

To calculate *I*(*x*, *N*) we have to know the critical points of the phase *S*(*x*, *y*, *N*) as a function of *y*, that is, the solutions *y*<sup>∗</sup>(*x*, *N*) of the equation

$$\frac{\partial S}{\partial y}(x, y^\*(x, N), N) = 0. \tag{54}$$

As *S* is convex in *y*, the solution is unique, if it exists. Proceeding as in Lemma 1, that is, searching for a fixed point of the mapping

$$z \mapsto z - \frac{\partial S}{\partial y}(x, y(N) + z, N),$$

we find that there exists a unique solution *y*<sup>∗</sup>(*<sup>x</sup>*, *N*) of Equation (54) whenever

$$S\_2 < \Lambda\_m N^{1/3} \iff N > N\_1 \tag{55}$$

such that

$$|y^\*(x, N) - y(N)| \le N^{-1/3}.\tag{56}$$

Next, using the Taylor expansion of *∂S*/*∂y* around the point (0, *y*(*N*), *N*) we get that

$$0 = \frac{\partial S}{\partial y}(x, y^\*(x, N), N) = \frac{\partial^2 S}{\partial y \partial x}(0, y(N), N)\, x + \frac{\partial^2 S}{\partial y^2}(0, y(N), N)(y^\*(x, N) - y(N)) + \phi(x, y, N) \tag{57}$$

with

$$|\phi(x, y, N)| \le 2S\_3(|x|^2 + |y^\*(x, N) - y(N)|^2) \le 4S\_3 N^{-2/3}.$$

This implies

$$y^\*(x, N) - y(N) = -D\_N^{-1} \left( \frac{\partial^2 S}{\partial y \partial x} (0, y(N), N)\, x + \phi(x, y, N) \right),$$

so that

$$|y^\*(x, N) - y(N)| \le \frac{S\_2 + 4S\_3}{\Lambda\_m} N^{-2/3},\tag{58}$$

which is an essential improvement compared with the initial estimate of Equation (56). It ensures that the distance from *y*<sup>∗</sup>(*x*, *N*) to the boundary of *U*(*x*, *N*) is of order *N*<sup>−1/3</sup>, so that Proposition 1 can in fact be applied to the integral *I*(*x*, *N*), leading to

$$I(x, N) = \omega^{\exp}(x, y, N) + \exp\{-NS(x, y^\*(x, N), N)\} \left(\frac{2\pi}{N}\right)^{d/2} \left[\frac{f(x, y^\*(x, N))}{\left(\det\frac{\partial^2 S}{\partial y^2}(x, y^\*(x, N), N)\right)^{1/2}} + \frac{\omega(x, y, N)}{\sqrt{N}}\right],\tag{59}$$

where *ω*<sup>exp</sup> is exponentially small compared to the main term and

$$|\omega(x, y, N)| \le d\Lambda\_m^{-(1+d)/2} \left[ f\_1 + \frac{d+1}{6\Lambda\_m} f\_0 S\_3 e^{S\_3/6} \right].$$

In order to apply Lemma 2 we need to get lower and upper bounds for the quantities

$$\frac{\partial}{\partial x} S(x, y^\*(x, N), N) \quad\text{and}\quad \left| \frac{\partial}{\partial x} \left( \det \frac{\partial^2 S}{\partial y^2} (x, y^\*(x, N), N) \right)^{-1/2} \right|,$$

respectively.

We have

$$\frac{\partial}{\partial x} S(x, y^\*(x, N), N) = \frac{\partial S}{\partial x}(x, y^\*(x, N), N) + \frac{\partial S}{\partial y}(x, y^\*(x, N), N) \frac{\partial y^\*}{\partial x}(x, N).$$

But the second term vanishes due to Equation (54). Hence

$$\frac{\partial}{\partial x} S(x, y^\*(x, N), N) = \frac{\partial S}{\partial x} (x, y^\*(x, N), N) \ge g\_m.$$

Next, differentiating Equation (54) with respect to *x* we obtain

$$\frac{\partial y^\*}{\partial x}(x, N) = -\left[\frac{\partial^2 S}{\partial y^2}(x, y^\*(x, N), N)\right]^{-1} \frac{\partial^2 S}{\partial x \partial y}(x, y^\*(x, N), N),$$

implying the estimate

$$\left\|\frac{\partial y^\*}{\partial x}(x, N)\right\| \leq \frac{S\_2}{\Lambda\_m}.\tag{60}$$


Consequently, using the formula for the differentiation of the determinant of invertible symmetric matrices,

$$(\det A)' = \det A \, \operatorname{tr} (A'A^{-1}),$$

we can estimate

$$\left| \frac{\partial}{\partial x} \left( \det \frac{\partial^2 S}{\partial y^2} (x, y^\*(x, N), N) \right)^{-1/2} \right| \le \frac{dS\_3}{2 \Lambda\_m^2} \left( \det \frac{\partial^2 S}{\partial y^2} (x, y^\*(x, N), N) \right)^{-1/2}.$$

Hence Lemma 2 can be applied to the calculation of *I*′(*N*) given by Equations (51) and (59), yielding Equation (52).
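The determinant-derivative formula used in the last step is easy to verify numerically. The sketch below compares a central finite difference of det *A*(*t*) with det *A*(*t*) tr(*A*′(*t*)*A*(*t*)<sup>−1</sup>) for an arbitrary (hypothetical) smooth symmetric matrix family *A*(*t*).

```python
import numpy as np

def A(t):
    # hypothetical smooth symmetric family, invertible near t = 0.3
    return np.array([[2.0 + t, 0.5 * t],
                     [0.5 * t, 3.0 + t * t]])

def A_prime(t):
    # entrywise derivative of A(t)
    return np.array([[1.0, 0.5],
                     [0.5, 2.0 * t]])

t, h = 0.3, 1e-6
fd = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2 * h)
formula = np.linalg.det(A(t)) * np.trace(A_prime(t) @ np.linalg.inv(A(t)))
print(fd, formula)   # the two values agree up to the finite-difference error
```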

**Remark 8.** *Arguing as in Proposition 2, one can improve the estimate of the remainder term in Equation* (52) *to be of order N*<sup>−1</sup>*, by assuming more regularity of S and f.*

The general case of Equation (45) can be reduced directly to the case of Ω+ from Equation (47). In fact, changing coordinates (*x*, *y*) to (*z*, *y*) with *z* = *x* − *ψ*(*y*), the set Ω+ turns into $\tilde\Omega\_+ = \{(z, y) : z \ge 0\}$. Making this change of the variable of integration in *I*(*N*) yields

$$I(N) = \int\_{\tilde\Omega\_+} \tilde{f}(z, y, N) \exp\{-N\tilde{S}(z, y, N)\} \, dz\, dy, \quad N > N\_0,$$

with $\tilde S(z, y, N) = S(z + \psi(y), y, N)$ and $\tilde f(z, y, N) = f(z + \psi(y), y, N)$. Assuming that these functions satisfy the conditions of Theorem 3, we obtain

$$I(N) = \exp\{-NS(\psi(y(N)), y(N), N)\} \left(\frac{2\pi}{N}\right)^{d/2} \frac{1}{N} \left[\frac{f(\psi(y(N)), y(N))}{\tilde{g}\_N\sqrt{\det \tilde{D}\_N}} + \frac{\tilde{\omega}(N)}{\sqrt{N}}\right] + \tilde{\omega}^{\exp}(N), \tag{61}$$

where

$$\tilde{D}\_N = \frac{\partial^2 \tilde{S}}{\partial y^2} (0, y(N), N) = \left( \frac{\partial^2 S}{\partial y^2} + \frac{\partial S}{\partial x} \frac{\partial^2 \psi}{\partial y^2} + \frac{\partial^2 S}{\partial x^2} \frac{\partial \psi}{\partial y} \otimes \frac{\partial \psi}{\partial y} + \frac{\partial^2 S}{\partial x \partial y} \otimes \frac{\partial \psi}{\partial y} + \frac{\partial \psi}{\partial y} \otimes \frac{\partial^2 S}{\partial x \partial y} \right) (\psi(y(N)), y(N), N)$$

and with similar changes in the constants appearing in *ω̃*(*N*) and *ω̃*<sup>exp</sup>(*N*).

#### **5. LLN and CLT for Minima on the Boundary**

The results on weak convergence of random variables with exponential densities given above for the case of the phase having its minimum in the interior of the domain can now be recast for the case of the phase having its minimum on the boundary of the domain of integration. The following statements are proved by literally the same argument as Theorems 1 and 2. We omit the details.

**Theorem 4.** *Let* Ω *be a bounded open set in* **R**<sup>*d*+1</sup> *with coordinates* (*x*, *y*)*, x* ∈ **R**, *y* ∈ **R**<sup>*d*</sup>*, and let*

$$
\Omega\_+ = \{(x, y) \in \Omega : x \ge 0\}.
$$

*Let the functions f*(*x*, *y*, *N*)*, S*(*x*, *y*, *N*) *be continuous functions on* Ω+ × [1, ∞) *satisfying conditions (C1')–(C3') from Theorem 3. Assume moreover that f is bounded below by a positive constant and that the sequence of global minima* (0, *y*(*N*)) *converges, as N* → ∞*, to a point* (0, *y*<sub>0</sub>) *belonging to the interior of* Ω*.*

*Let* (*ξ*<sup>*x*</sup><sub>*N*</sub>, *ξ*<sup>*y*</sup><sub>*N*</sub>) *denote an* Ω+*-valued random variable having density φ*<sub>*N*</sub>(*x*, *y*) *proportional to f*(*x*, *y*, *N*) exp{−*NS*(*x*, *y*, *N*)}*, that is*

$$\phi\_N(x, y) = f(x, y, N) \exp\{-NS(x, y, N)\} \left(\int\_{\Omega\_+} f(x, y, N) \exp\{-NS(x, y, N)\} \,dx\,dy\right)^{-1}.$$

*Then* (*ξ*<sup>*x*</sup><sub>*N*</sub>, *ξ*<sup>*y*</sup><sub>*N*</sub>) *converges weakly to the constant* (0, *y*<sub>0</sub>)*. More explicitly, for a smooth g, one has*

$$|\mathbb{E}g(\xi\_N^x, \xi\_N^y) - g(0, y\_0)| \le \left(\frac{c}{\sqrt{N}} + |y(N) - y\_0|\right) \|g\|\_{C^1(\Omega)}\tag{62}$$

*with a constant c depending only on S (actually on the bounds for the derivatives of S up to the third order).*

**Theorem 5.** *Under the assumptions of Theorem 4 assume additionally that*

$$|y(N) - y\_0| \le cN^{-\delta - 1/2}.\tag{63}$$

*Then the fluctuations* (*η*<sup>*x*</sup><sub>*N*</sub>, *η*<sup>*y*</sup><sub>*N*</sub>) = (*Nξ*<sup>*x*</sup><sub>*N*</sub>, √*N*(*ξ*<sup>*y*</sup><sub>*N*</sub> − *y*<sub>0</sub>)) *converge weakly to a* (*d* + 1)*-dimensional random vector whose last d coordinates form a centered Gaussian random vector with the moment generating function*

$$M(p) = \exp\{\frac{1}{2}(p, D\_N^{-1} p)\},\tag{64}$$

*and whose first coordinate is independent of them and is a g*<sub>0</sub>*-exponential random variable. The rates of convergence with all explicit constants are obtained directly from Theorem 3.*
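The exponential limit for the boundary coordinate can already be seen in a one-dimensional sketch: for a density proportional to exp{−*NS*(*x*)} on [0, *a*] with *S*′(0) = *g* > 0, the tail probabilities of *Nξ*<sub>*N*</sub> approach those of a *g*-exponential law. The choice S(x) = x + x²/2 (so g = 1) below is hypothetical.

```python
import math

def tail_prob(N, s, a=1.0, steps=200_000):
    """P(N ξ_N > s) for the density proportional to exp{-N S(x)} on [0, a]."""
    h = a / steps
    num = den = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        w = math.exp(-N * (x + 0.5 * x * x)) * h   # hypothetical S(x) = x + x²/2
        den += w
        if N * x > s:
            num += w
    return num / den

for s in (0.5, 1.0, 2.0):
    print(s, tail_prob(1000, s), math.exp(-s))   # tails approach e^{-g s} with g = 1
```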

#### **6. Laplace Sums with Error Estimates**

It is more or less straightforward to modify the above results to the case of sums rather than integrals. Namely, instead of the integral *I*(*N*) from Equation (1), let us consider the sum

$$\Sigma(N) = \frac{1}{N^d} \sum\_{k=(k\_1,\cdots,k\_d):\mathbf{x}\_k=k/N\in\Omega} f(\mathbf{x}\_k) \exp\{-NS(\mathbf{x}\_k, N)\}, \quad N > 1,\tag{65}$$

where Ω is an open polyhedron of the Euclidean space **R**<sup>*d*</sup> with Euclidean volume |Ω|, and the amplitude *f* and the phase *S* are continuous real functions.

**Theorem 6.** *Under the assumptions of Proposition 1,*

$$\Sigma(N) = \exp\{-NS(\mathbf{x}(N), N)\} \left(\frac{2\pi}{N}\right)^{d/2} \left[\frac{f(\mathbf{x}(N), N)}{\sqrt{\det D\_N}} + \frac{\tilde{\omega}(N)}{\sqrt{N}}\right] + \omega^{\exp}(N),\tag{66}$$

*where*

$$|\tilde{\omega}(N)| \le |\omega(N)| + (f\_0 + f\_1)\, C(\Lambda\_m, \Lambda\_M, S\_3),$$

*and where ω*(*N*) *and ω*<sup>exp</sup>(*N*) *are the same as in Proposition 1 and C*(Λ<sub>*m*</sub>, Λ<sub>*M*</sub>, *S*<sub>3</sub>) *is yet another constant depending on* Λ<sub>*m*</sub>, Λ<sub>*M*</sub>, *S*<sub>3</sub>*.*

**Proof.** We use the well-known (and easy to prove) fact (a simplified version of the Euler–Maclaurin formula) that

$$\left| \int\_{\Omega} g(x)\, dx - \frac{1}{N^d} \sum\_{k=(k\_1,\dots,k\_d):\,x\_k=k/N\in\Omega} g(x\_k) \right| \leq \frac{1}{N} \int\_{\Omega} |g'(x)| \, dx.\tag{67}$$

Consequently,

$$|\Sigma(N) - I(N)| \le \frac{1}{N} \int\_\Omega |f'(x)| \exp\{-NS(x, N)\} \, dx + \int\_\Omega f(x) |S'(x, N)| \exp\{-NS(x, N)\} \, dx, \tag{68}$$

where *I*(*N*) is from Equation (1). The first integral on the r.h.s. of Equation (68) is clearly of order 1/*N* compared with the main term of *I*(*N*) given in Proposition 1. The pre-exponential term in the second integral vanishes at the critical point *x*(*N*) of *S*(*x*, *N*). Hence the required estimate for the second integral is obtained directly from Proposition 1.

Now all LLN and CLT results obtained above for continuous distributions can be reformulated and proved straightforwardly for the case of discrete random variables taking values in the lattice {*x*<sub>*k*</sub> = *k*/*N* ∈ Ω} with probabilities proportional to *f*(*x*<sub>*k*</sub>) exp{−*NS*(*x*<sub>*k*</sub>, *N*)}.
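A quick numerical sanity check of Theorem 6 in dimension *d* = 1, with the hypothetical interior-minimum phase S(x) = (x − 1/2)² and f ≡ 1 on Ω = (0, 1): here x(N) = 1/2, S(x(N), N) = 0 and det D_N = 2, so the main term of Equation (66) is (2π/N)^{1/2}/√2.

```python
import math

def sigma(N):
    """The sum Σ(N) = N^{-d} Σ_k f(x_k) exp{-N S(x_k)} with d = 1, x_k = k/N."""
    return sum(math.exp(-N * (k / N - 0.5) ** 2) for k in range(1, N)) / N

for N in (50, 200, 800):
    main = math.sqrt(2 * math.pi / N) / math.sqrt(2.0)   # (2π/N)^{1/2}/√(det D_N)
    print(N, sigma(N), main, abs(sigma(N) / main - 1.0))
```

For this smooth symmetric example the discrete sum agrees with the Gaussian main term extremely well, illustrating that the correction of Theorem 6 is an upper bound rather than the typical error.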

#### **7. Application to LLN and CLT of Large Deviations**

Conditional LLN (conditioned on the sums of the corresponding random variables staying in a certain prescribed domain, usually some linear subspace or a convex set) are well developed in probability (see e.g., [2,18] for two different contexts). The results above can be used to supply exact estimates for the error terms in these approximations. To illustrate this in the most transparent way, let us start with the classical multidimensional local theorem of large deviations as given in [4] (which extends earlier results of [6]). Namely, let *ξ*, *ξ*<sub>1</sub>, *ξ*<sub>2</sub>, ··· be a sequence of independent identically distributed **R**<sup>*k*</sup>-valued random vectors. Assume that the set *O* of vectors *λ* ∈ **R**<sup>*k*</sup> for which the moment generating function *v*(*λ*) = **E** *e*<sup>(*λ*,*ξ*)</sup> is well defined has a nonempty interior *O*<sup>0</sup>. It is well known (and easy to see) that the functions *v* and ln *v* are convex and that the set *O*<sup>0</sup> and its closure *Ō*<sup>0</sup> = *Ō* are convex. The function *ψ*(*α*) = inf<sub>*λ*</sub>[ln *v*(*λ*) − (*α*, *λ*)] is called the entropy, and it is concave. Moreover, the infimum in its definition is attained, so that there exists *λ*(*α*) ∈ *O* such that

$$\psi(\alpha) = \inf\_{\lambda} [\ln v(\lambda) - (\alpha, \lambda)] = \ln v(\lambda(\alpha)) - (\alpha, \lambda(\alpha)),$$

and the function *λ*(*α*) is a diffeomorphism of *O*<sup>0</sup> onto some open domain Ω in **R**<sup>*k*</sup>. Assume that the random variable *ξ* has a bounded probability density *p*(*x*), and define the family of distributions *P*<sub>*α*</sub> with the densities

$$
\pi\_\alpha(x) = \exp\{ (\lambda(\alpha), x - \alpha) - \psi(\alpha) \} p(x) .
$$

Let *p*<sub>*n*</sub>(*x*) be the density of the averaged sum *S*<sub>*n*</sub>/*n* = (*ξ*<sub>1</sub> + ··· + *ξ*<sub>*n*</sub>)/*n*.

Theorem 1 of [4] states (though we formulate it equivalently in terms of the density of *S*<sub>*n*</sub>/*n* rather than of *S*<sub>*n*</sub>, as is done in [4]) that if Φ is any compact subset of Ω, then

$$p\_n(\alpha) = \frac{n^{k/2} e^{n\psi(\alpha)}}{(2\pi)^{k/2} (\det M(\alpha))^{1/2}} \left( 1 + \sum\_{j=1}^s c\_j(\alpha) n^{-j} + O(n^{-s}) \right),\tag{69}$$

where *s* is arbitrary, the estimate is uniform for *α* ∈ Φ, *M*(*α*) is the matrix of the second moments of the distribution *P*<sub>*α*</sub>, and the coefficients *c*<sub>*j*</sub>(*α*) depend only on the first 2*j* + 2 moments of *P*<sub>*α*</sub> and are uniformly bounded in Φ.

The densities of Equation (69) are exactly of the type dealt with in our Theorems 1, 2, and 4, and Equation (5). Thus, these theorems apply directly to finding the rates of convergence in the LLN and CLT for sums of independent variables conditioned on *S*<sub>*n*</sub>/*n* staying in some convex bounded set with smooth boundary or in a linear subspace. These conditional versions of the LLN may be applied even if **E***ξ* is not defined, so that the usual LLN does not hold.
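As a concrete illustration of the entropy, the sketch below computes ψ(α) = inf<sub>λ</sub>[ln v(λ) − αλ] for a standard exponential variable ξ, where v(λ) = 1/(1 − λ) for λ < 1. In this case the infimum is attained at λ(α) = 1 − 1/α, giving the closed form ψ(α) = ln α − α + 1, which the brute-force minimisation reproduces.

```python
import math

def psi_numeric(alpha, grid=200_000):
    """Brute-force minimisation of ln v(λ) − αλ over λ ∈ (−50, 1) for ξ ~ Exp(1)."""
    best = float("inf")
    for i in range(grid):
        lam = -50.0 + 51.0 * i / grid        # stays strictly below λ = 1
        if lam < 1.0:
            best = min(best, -math.log(1.0 - lam) - alpha * lam)
    return best

for alpha in (0.5, 1.0, 2.0):
    closed = math.log(alpha) - alpha + 1.0   # ψ(α) = ln α − α + 1
    print(alpha, psi_numeric(alpha), closed)
```

Note that ψ(1) = 0 and ψ′′(α) = −1/α² < 0, matching the stated concavity of the entropy.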

When the random variable *ξ* takes values in a lattice, the version with sums, that is, Theorem 6, should be applied to get the rates of convergence in the corresponding laws of large numbers.

**Funding:** FRC CSC RAS; supported by the RFFI grant 18-07-01405.

**Conflicts of Interest:** The author declares no conflicts of interest.
