**1. Introduction**

It is well known that the leading asymptotics for large *t* of *ζ*(*s*) can be expressed in terms of a transcendental sum,

$$\zeta(s) \sim \sum\_{m=1}^{[t]} \frac{1}{m^s}, \qquad s = \sigma + it, \quad 0 < \sigma < 1, \quad t \to \infty,\tag{1}$$

where throughout this paper [A] denotes the integer part of the positive number A. Lindelöf's hypothesis, one of the most important open problems in the history of mathematics, states that for *σ* = 1/2, this sum is of order *O*(*t<sup>ε</sup>*) for any *ε* > 0.
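For readers who wish to experiment, the sum on the rhs of (1) is straightforward to evaluate numerically. The following Python sketch (our own illustration, not part of the analysis of this paper) computes the partial sum on the critical line and checks it against the trivial triangle-inequality bound 2√*t*; Lindelöf's hypothesis is precisely the assertion that the true size is far smaller, *O*(*t*<sup>*ε*</sup>).

```python
import math

def zeta_partial_sum(t, sigma=0.5):
    """The sum on the rhs of (1): sum_{m=1}^{[t]} m^{-(sigma + i t)}."""
    s = complex(sigma, t)
    return sum(m ** (-s) for m in range(1, int(t) + 1))

# Trivial bound: |sum| <= sum m^{-1/2} < 2*sqrt(t); the observed values
# are dramatically smaller, consistent with the Lindelof hypothesis.
for t in (100.0, 1000.0, 5000.0):
    assert abs(zeta_partial_sum(t)) < 2.0 * math.sqrt(t)
```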

The sum on the rhs of (1) is a particular case of an exponential sum. Pioneering results for the estimation of such sums were obtained in 1916 using methods developed by Weyl [1], and Hardy and Littlewood [2], when it was shown that *ζ*(1/2 + *it*) = *O*(*t*<sup>1/6+*ε*</sup>). In the last 100 years some slight progress has been made using the ingenious techniques of Vinogradov [3]. Currently, the best result is due to Bourgain [4], who has been able to reduce the exponent to 13/84 ≈ 0.155.

It is interesting that, in contrast to the usual situation in asymptotics where higher order terms in an asymptotic expansion are more complicated, the higher order terms of the asymptotic expansion of *ζ*(*s*) can be computed *explicitly*. Siegel, in his classical paper [5], presented the asymptotic expansion of *ζ*(*s*) to *all* orders in the important case *x* = *y* = √(*t*/2*π*). In [6], analogous results are presented for *any x* and *y*, valid to *all* orders. A similar result for the Hurwitz zeta function is presented in [7]. Some of the results of [6] are used in [8], and the latter results are useful for the estimates presented in this paper.

A major obstacle in trying to prove Lindelöf's hypothesis via the estimation of relevant exponential sums is that in estimates one "loses" something (the more powerful the technique, the less the loss). Here we follow the new formalism for analysing the large *t*-asymptotics of the Riemann zeta function, introduced in [9]. For the sake of clarity of presentation we restrict our attention to the case *σ* = 1/2.

We start with the following integral equation derived in Equation (1.6) of [9]:

$$\begin{split} & \frac{t}{\pi} \oint\_{-t^{\delta\_1 - 1}}^{1 + t^{\delta\_4 - 1}} \Re \left\{ \frac{\Gamma(it - i\tau t)}{\Gamma(1/2 + it)} \Gamma(1/2 + i\tau t) \right\} \left| \zeta(1/2 + i\tau t) \right|^2 \mathrm{d}\tau + \mathcal{G}(1/2, t) \\ & + O\left( e^{-\pi t^{\delta\_{14}}} \right) = 0, \qquad t \to \infty, \\ & \delta\_1 > 0, \; \delta\_4 > 0, \; \delta\_{14} = \min(\delta\_1, \delta\_4), \end{split} \tag{2}$$

where Γ(*z*) denotes the usual gamma function, the principal value integral is defined with respect to *τ* = 1, and G(1/2, *t*) satisfies the estimate

$$\mathcal{G}(1/2, t) = \ln t + O(1), \qquad t \to \infty. \tag{3}$$

In [9] the computation of the large *t* asymptotics of (2) is obtained by first splitting the interval [−*t*<sup>*δ*1−1</sup>, 1 + *t*<sup>*δ*4−1</sup>] into the following four subintervals:

$$\begin{aligned} L\_1 &= [-t^{\delta\_1 - 1}, t^{-1}], \; L\_2 = [t^{-1}, t^{\delta\_2 - 1}], \\ L\_3 &= [t^{\delta\_2 - 1}, 1 - t^{\delta\_3 - 1}], \; L\_4 = [1 - t^{\delta\_3 - 1}, 1 + t^{\delta\_4 - 1}], \end{aligned} \tag{4}$$

with *δ*<sub>*j*</sub> ∈ (0, 1), *j* = 1, 2, 3, 4. We employ the same splitting in (2) and hence the asymptotic evaluation of (2) reduces to the analysis of the four integrals,

$$I\_j(t) = \frac{t}{\pi} \oint\_{L\_j} \Re \left\{ \frac{\Gamma(it - i\tau t)}{\Gamma(1/2 + it)} \Gamma(1/2 + i\tau t) \right\} |\zeta(1/2 + i\tau t)|^2 \,\mathrm{d}\tau, \quad j = 1, \dots, 4, \; t > 0,\tag{5}$$

where *I*1, *I*2, *I*3, *I*4 also depend on *δ*1, *δ*2, (*δ*2, *δ*3), (*δ*3, *δ*4), respectively; *L*1, *L*2, *L*3, *L*4 are defined in (4); and the principal value integral is needed only for *I*4.
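The splitting (4) is pure book-keeping and can be checked mechanically. The following sketch (an illustration with arbitrarily chosen values of *t* and *δ*<sub>*j*</sub>, not values singled out by the paper) verifies that consecutive subintervals share endpoints and that their union is [−*t*<sup>*δ*1−1</sup>, 1 + *t*<sup>*δ*4−1</sup>].

```python
def split_interval(t, d1, d2, d3, d4):
    """The four subintervals L1, ..., L4 of (4) for given t and delta_j in (0, 1)."""
    e = lambda d: t ** (d - 1.0)
    L1 = (-e(d1), 1.0 / t)
    L2 = (1.0 / t, e(d2))
    L3 = (e(d2), 1.0 - e(d3))
    L4 = (1.0 - e(d3), 1.0 + e(d4))
    return L1, L2, L3, L4

t = 1.0e6
L1, L2, L3, L4 = split_interval(t, 0.2, 0.3, 0.3, 0.2)
# consecutive subintervals share endpoints ...
assert L1[1] == L2[0] and L2[1] == L3[0] and L3[1] == L4[0]
# ... and together they cover [-t^{d1-1}, 1 + t^{d4-1}]
assert L1[0] == -t ** (0.2 - 1.0) and L4[1] == 1.0 + t ** (0.2 - 1.0)
```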

#### *Organisation of the Paper*

In Section 2 we derive a linear integral equation for |*ζ*(*s*)|<sup>2</sup>. This equation is given by (23), where *S*<sub>4</sub><sup>*P*</sup> and *S*<sub>4</sub><sup>*SD*</sup> are defined by (18) and (19), respectively.

In Section 3 we present the methodology for deriving the main result of this paper, namely the linear Volterra-type integral equation for |*ζ*(*s*)|<sup>2</sup> given by Equation (8) below. In this connection, we first estimate the double sum *S*<sub>4</sub><sup>*P*</sup> appearing in the linear integral Equation (23):

$$\Re\left\{S\_4^P(t,\delta\_3)\right\} = O\left(t^{\frac{\delta\_3}{2}}\ln t\right), \qquad t \to \infty. \tag{6}$$

Then, we present heuristic arguments regarding the estimation of *S*<sub>4</sub><sup>*SD*</sup>, which suggest that

$$S\_4^{SD} = O\left(t^{\frac{1}{3} - \frac{\delta\_3}{2}} (\ln t)^2\right) + O\left(t^{\frac{\delta\_3}{2}} \ln t\right), \qquad t \to \infty. \tag{7}$$

Replacing *S*<sub>4</sub><sup>*P*</sup> and *S*<sub>4</sub><sup>*SD*</sup> in (23) by (6) and (7), for 0 < *δ*2 < 1/2 and *δ*3 = 1/3, we find the main result of the paper:

$$\left| \zeta \left( \frac{1}{2} + it \right) \right|^2 = \frac{1}{\pi} \int\_{t^{\delta\_2}}^{t - t^{1/3}} \mathcal{K}(\rho, t) \left| \zeta \left( \frac{1}{2} + i\rho \right) \right|^2 \mathrm{d}\rho + O\left( t^{\frac{1}{6}} (\ln t)^2 \right), \qquad t \to \infty, \tag{8}$$

where the kernel *K*(*ρ*, *t*) is given by

$$\mathcal{K}(\rho, t) = \Re \left\{ \frac{\Gamma(it - i\rho)}{\Gamma(1/2 + it)} \Gamma(1/2 + i\rho) \right\}. \tag{9}$$
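Since the kernel (9) involves the Gamma function at complex arguments, any numerical exploration of *K*(*ρ*, *t*) needs a complex Γ; the Python standard library does not provide one, so the sketch below implements the standard Lanczos approximation (with the widely tabulated *g* = 7, *n* = 9 coefficients) and uses it to evaluate the kernel away from the pole at *ρ* = *t*. This is purely our illustration and plays no role in the estimates of the paper; it is suitable for moderate *t* only, since for large *t* the Gamma factors underflow and a log-Gamma formulation would be needed.

```python
import cmath, math

# Standard Lanczos coefficients (g = 7, n = 9), as tabulated in the literature.
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma(z) for complex z via the Lanczos approximation."""
    z = complex(z)
    if z.real < 0.5:  # reflection formula Gamma(z)Gamma(1-z) = pi/sin(pi z)
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1.0 - z))
    z -= 1.0
    x = _LANCZOS[0] + sum(c / (z + k) for k, c in enumerate(_LANCZOS[1:], 1))
    w = z + 7.5
    return math.sqrt(2.0 * math.pi) * w ** (z + 0.5) * cmath.exp(-w) * x

def kernel(rho, t):
    """The kernel K(rho, t) of (9), for rho != t and moderate t."""
    return (cgamma(1j * (t - rho)) * cgamma(0.5 + 1j * rho)
            / cgamma(0.5 + 1j * t)).real

assert abs(cgamma(5.0) - 24.0) < 1e-8             # Gamma(5) = 4! = 24
assert abs(cgamma(0.5) - math.sqrt(math.pi)) < 1e-8
assert math.isfinite(kernel(20.0, 30.0))
```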

For the rigorous derivation of (7) we make crucial use of some of the results of [10]. For completeness of presentation, the relevant results of [10] are reviewed in Section 3.

In Section 4 we derive (7). This derivation is based on the following: first, a lemma for partial summation in two dimensions, which is crucial for the analysis of some parts of the sum *S*<sub>4</sub><sup>*SD*</sup>; second, the asymptotic estimates of the function *E*<sub>4</sub><sup>*SD*</sup>(*t*, *δ*) appearing in the definition of *S*<sub>4</sub><sup>*SD*</sup>, which are given in [11]; third, the splitting of *S*<sub>4</sub><sup>*SD*</sup> into three cases involving certain sums denoted by *S*<sup>(*i*)</sup>, *S*<sup>(*ii*)</sup>, *S*<sup>(*iii*)</sup>. Relatively straightforward estimates yield that both *S*<sup>(*ii*)</sup> and *S*<sup>(*iii*)</sup> are *O*(*t*<sup>*δ*/2</sup> ln *t*), *t* → ∞.

The estimation of *S*(*i*) is quite complicated; details are given in Section 4.3.

Section 5 summarizes the basic results in this paper and discusses future directions.

#### **2. Derivation of a Linear Integral Equation for** |*ζ*(*s*)|<sup>2</sup>

The main result of this section is the linear integral Equation (23), which is obtained from (2) by computing the contribution of the integrals {*I*<sub>*j*</sub>}<sub>1</sub><sup>4</sup>. In this direction, we first recall the estimates for *I*1 and *I*2, and then we introduce a methodology that computes explicitly the leading asymptotic behaviour of *I*4. In addition, this methodology avoids the need to compute the asymptotics of *I*3.

#### *2.1. The Contribution of I*1 *and I*2

Using Lemma 4.1 of [9], it can be shown that for *δ*1 sufficiently small, *I*1 satisfies the estimate

$$I\_1(t, \delta\_1) = O\left(t^{-1/2 + \frac{4}{3}\delta\_1}\right), \qquad t \to \infty. \tag{10}$$

Furthermore, by employing the classical estimates of Atkinson, and following the steps of Lemma 4.2 of [9], it can be shown that *I*2 satisfies the estimate

$$I\_2(t, \delta\_2) = O\left(t^{-\frac{1}{2} + \delta\_2} \ln t\right), \quad 0 < \delta\_2 < 1, \qquad t \to \infty. \tag{11}$$

Thus, for sufficiently small *δ*1 and *δ*2 Equations (10) and (11) yield

$$I\_1 = o(1) \quad \text{and} \quad I\_2 = o(1), \qquad t \to \infty. \tag{12}$$

#### *2.2. The Contribution of the Leading Order Term of I*4

Let *Ĩ*4 denote the contribution of the leading order term of *I*4. This term is defined by replacing *ζ*(*s*) with the leading term of its large *t*-asymptotics in (5) for *j* = 4. Using the change of variables *τ* = 1 − *x*/*t*, *Ĩ*4 becomes

$$\tilde{I}\_4(t, \delta\_3, \delta\_4) = \frac{1}{\pi} \oint\_{-t^{\delta\_4}}^{t^{\delta\_3}} \Re \left\{ \Gamma(ix) \frac{\Gamma(1/2 + it - ix)}{\Gamma(1/2 + it)} \right\} \left| \tilde{\zeta}(1/2 + it - ix) \right|^2 dx,\tag{13}$$

where the principal value integral is with respect to *x* = 0, and

$$\left|\tilde{\zeta}(1/2+it-ix)\right|^2 = \sum\_{m\_1=1}^{[t]} \sum\_{m\_2=1}^{[t]} \frac{1}{m\_1^s m\_2^{\bar{s}}} \left(\frac{m\_1}{m\_2}\right)^{ix}, \qquad s = \frac{1}{2} + it. \tag{14}$$

Thus, we obtain the following expression for the leading behaviour of *I*4:

$$\tilde{I}\_4(t, \delta\_3, \delta\_4) = \Re \left\{ \sum\_{m\_1=1}^{[t]} \sum\_{m\_2=1}^{[t]} \frac{1}{m\_1^s} \frac{1}{m\_2^{\bar{s}}} J\_4\left(t, \delta\_3, \delta\_4, \frac{m\_1}{m\_2}\right) \right\}, \qquad s = \frac{1}{2} + it,\tag{15}$$

where *J*4 is defined by

$$J\_4\left(t, \delta\_3, \delta\_4, \frac{m\_1}{m\_2}\right) = \frac{1}{\pi} \oint\_{-t^{\delta\_4}}^{t^{\delta\_3}} \Gamma(ix) \frac{\Gamma(1/2 + it - ix)}{\Gamma(1/2 + it)} \left(\frac{m\_1}{m\_2}\right)^{ix} dx,$$

$$t > 0, \ 0 < \delta\_3 < 1, \ 0 < \delta\_4 < 1, \ m\_j = 1, 2, \dots, [t],\tag{16}$$

with the principal value integral defined with respect to *x* = 0.

Theorem 6.1 of [9] gives the estimate

$$\begin{split} \tilde{I}\_4(t, \delta\_3, \delta\_4) &= \left\{ -\sum\_{m\_1=1}^{[t]} \sum\_{m\_2=1}^{[t]} \frac{1}{m\_1^s m\_2^{\bar{s}}} + 2\Re\left\{ S\_4^P \right\} + \Re\left\{ S\_4^{SD} \right\} \right\} \left[ 1 + O(t^{2\delta\_{34}-1}) \right], \quad t \to \infty, \\ s &= \frac{1}{2} + it, \ 0 < \delta\_3 < \frac{1}{2}, \ 0 < \delta\_4 < \frac{1}{2}, \ \delta\_{34} = \max\{\delta\_3, \delta\_4\}, \end{split} \tag{17}$$

where *S*<sub>4</sub><sup>*P*</sup> and *S*<sub>4</sub><sup>*SD*</sup> are defined as follows:

$$S\_4^P(t, \delta\_3) = \sum\_{m\_1, m\_2 \in M\_4} \frac{1}{m\_1^s m\_2^{\bar{s}}} e^{-i\frac{m\_2}{m\_1}t} \tag{18}$$

with

$$M\_4 := M\_4(\delta\_3, t) = \left\{ m\_j = 1, \dots, [t], \ j = 1, 2, \ \frac{m\_1}{m\_2} \in (t^{1-\delta\_3}, t) \right\},$$

and

$$S\_4^{SD}(t, \delta\_3) = \sum\_{m\_1=1}^{[t]} \sum\_{m\_2=1}^{[t]} \frac{1}{m\_1^s m\_2^{\bar{s}}} E\_4^{SD}(t, \delta\_3);\tag{19}$$

*E*<sub>4</sub><sup>*SD*</sup> satisfies the asymptotic estimate

$$E\_4^{SD} \sim -\sqrt{\frac{2}{\pi}} e^{\frac{i\pi}{4}} t^{-\frac{\delta\_3}{2}} e^{-it^{\delta\_3}} t^{i(\delta\_3 - 1)t^{\delta\_3}} \frac{1}{\ln\left(\frac{m\_2}{m\_1} t^{1 - \delta\_3}\right)} \left(\frac{m\_1}{m\_2}\right)^{it^{\delta\_3}}, \qquad t \to \infty,\tag{20}$$

when (*m*2/*m*1)*t*<sup>1−*δ*3</sup> ≠ 1.

**Remark 1.** *According to the analysis of [9], the derivation of* (17) *involves the computation of the contribution of an integral along the so-called Hankel contour. The function S*<sub>4</sub><sup>*P*</sup> *is related to the contribution of the pole w*<sub>*P*</sub> = −*i*(*m*2/*m*1)*t*<sup>1−*δ*3</sup>*, and S*<sub>4</sub><sup>*SD*</sup> *is related to the contribution of the Hankel contour after deforming it so that it passes through the point of steepest descent w*<sub>*SD*</sub> = −*i. Hence, we call S*<sub>4</sub><sup>*P*</sup> *and S*<sub>4</sub><sup>*SD*</sup> *the Pole and Steepest Descent contributions, respectively.*
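For small *t*, the pole sum (18) can be evaluated by brute force directly from its definition, which makes the membership condition *m*1/*m*2 ∈ (*t*<sup>1−*δ*3</sup>, *t*) concrete. The sketch below is illustrative only (the analysis of this paper of course never requires such an evaluation); the chosen *t* = 100 is arbitrary.

```python
import cmath

def S4P(t, delta3):
    """Brute-force evaluation of the pole sum (18) over the set M_4."""
    s = complex(0.5, t)
    lo = t ** (1.0 - delta3)
    total = 0.0 + 0.0j
    for m1 in range(1, int(t) + 1):
        for m2 in range(1, int(t) + 1):
            if lo < m1 / m2 < t:  # the membership condition defining M_4
                total += (m1 ** (-s) * m2 ** (-s.conjugate())
                          * cmath.exp(-1j * (m2 / m1) * t))
    return total

# For t = 100, delta3 = 1/3 only pairs with m1/m2 > 100^{2/3} ~ 21.5 contribute,
# so m2 <= 4; the sum is small but nonzero.
val = S4P(100.0, 1.0 / 3.0)
assert 0.0 < abs(val) < 100.0
```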

#### *2.3. The Contribution of I*3

Let *I*3 be defined by (5), with *j* = 3. By making the change of variables *ρ* = *tτ*, we obtain

$$I\_3(t, \delta\_2, \delta\_3) = \frac{1}{\pi} \int\_{t^{\delta\_2}}^{t - t^{\delta\_3}} \Re \left\{ \frac{\Gamma(it - i\rho)}{\Gamma(1/2 + it)} \Gamma(1/2 + i\rho) \right\} \left| \zeta(1/2 + i\rho) \right|^2 \,\mathrm{d}\rho. \tag{21}$$

#### *2.4. A Volterra-Type Integral Equation*

It is shown in Appendix A that the first term of the rhs of (17) is the leading asymptotic term of |*ζ*|<sup>2</sup>. Hence, (17) becomes

$$\tilde{I}\_{4}(t,\delta\_{3},\delta\_{4})\sim -|\zeta(s)|^{2} + 2\Re\left\{S\_{4}^{P}\right\} + \Re\left\{S\_{4}^{SD}\right\},\ s = \frac{1}{2} + it, \qquad t \to \infty. \tag{22}$$

By replacing in (2) *I*1 and *I*2 by (12), *I*3 by (21), and *I*4 by (22), we obtain the following Volterra-type integral equation:

$$\begin{split} |\zeta(1/2+it)|^2 &= \frac{1}{\pi} \int\_{t^{\delta\_2}}^{t-t^{\delta\_3}} \Re \left\{ \frac{\Gamma(it-i\rho)}{\Gamma(1/2+it)} \Gamma(1/2+i\rho) \right\} |\zeta(1/2+i\rho)|^2 \,\mathrm{d}\rho \\ &+ 2\Re \left\{ S\_4^P \right\} + \Re \left\{ S\_4^{SD} \right\} + \ln t + O(1), \qquad t \to \infty, \end{split} \tag{23}$$

where *S*<sub>4</sub><sup>*P*</sup> and *S*<sub>4</sub><sup>*SD*</sup> are given in (18) and (19), respectively.

#### **3. The Methodology for Deriving the Integral Equation** (8)

In this section we derive Equation (6), and we also provide heuristic arguments supporting the validity of Equation (7). Employing the estimates (6) and (7), evaluated at *δ*3 = 1/3, in Equation (23) yields (8).

#### *3.1. An Estimate for S*<sub>4</sub><sup>*P*</sup>

In order to estimate the sum *S*<sub>4</sub><sup>*P*</sup>, we use (1.30) of [9], namely (see Appendix B)

$$S\_3(t, \delta\_3) = S\_4^P(t, \delta\_3) \left[ 1 + O\left( t^{2\delta\_3 - 1} \right) \right], \qquad t \to \infty,\tag{24}$$

with

$$S\_3(t, \delta\_3) = \sum\_{(m\_1, m\_2) \in M\_3} \frac{1}{m\_2^s (m\_1 + m\_2)^s}, \qquad s = \frac{1}{2} + it,\tag{25}$$

and *M*3 is defined by

$$M\_3 := M\_3(\delta\_3, t) = \left\{ m\_j = 1, \ldots, [t], \ j = 1, 2, \ \frac{m\_2}{m\_1} < \frac{1}{t^{1 - \delta\_3} - 1} \right\}. \tag{26}$$

Using results of [6], it is shown in Theorem 5.1 of [8] that

$$S\_3(t, \delta\_3) = O\left(t^{\frac{\delta\_3}{2}} \ln t\right), \qquad t \to \infty. \tag{27}$$

Thus, (6) follows.

#### *3.2. An Estimate for S*<sub>4</sub><sup>*SD*</sup>

The definition of *S*<sub>4</sub><sup>*SD*</sup>, given in (19), implies

$$S\_4^{SD} = O\left(\frac{1}{t^{\frac{\delta\_3}{2}}} \left| \sum\_{m\_1=1}^{\lfloor t \rfloor} \sum\_{m\_2=1}^{\lfloor t \rfloor} \frac{1}{m\_1^{\frac{1}{2} + i(t - t^{\delta\_3})} m\_2^{\frac{1}{2} - i(t - t^{\delta\_3})}} \frac{1}{\ln\left(\frac{m\_2}{m\_1} t^{1 - \delta\_3}\right)}\right|\right). \tag{28}$$

In order to estimate the sum *S*<sub>4</sub><sup>*SD*</sup>, we employ the classical techniques of [10,12–14]. These techniques can be used for the estimation of sums of the form

$$\sum\_{1 \le m\_1, m\_2 \le [t]} \frac{1}{m\_1^{s\_1}} \frac{1}{m\_2^{s\_2}}.$$

In this connection we recall the following well known result (see, for example, Theorem 5.12 of [13]):

$$\sum\_{n=1}^{\lfloor t \rfloor} \frac{1}{n^{\frac{1}{2} + it}} = O\left(t^{\frac{1}{6}} \ln t\right), \qquad t \to \infty. \tag{29}$$

The above result can be further improved, see, for example, Theorem 5.18 of [13]:

$$\left| \sum\_{n=1}^{\lfloor t \rfloor} \frac{1}{n^{\frac{1}{2} + it}} \right|^2 = O\left( t^{\frac{1}{3}} \right), \qquad t \to \infty. \tag{30}$$
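The estimates (29) and (30) are classical, and easy to probe numerically: the ratio |∑ *n*<sup>−1/2−*it*</sup>|<sup>2</sup>/*t*<sup>1/3</sup> stays modest for moderate *t*. In the sketch below the cutoff 100 is an arbitrary safety margin for the sampled values of *t*, not a proven constant.

```python
def dirichlet_sum(t):
    """The single sum of (29)-(30): sum_{n=1}^{[t]} n^{-1/2 - i t}."""
    return sum(n ** complex(-0.5, -t) for n in range(1, int(t) + 1))

# (30) asserts |sum|^2 = O(t^{1/3}); check the ratio at a few sample points.
for t in (500.0, 2000.0, 8000.0):
    ratio = abs(dirichlet_sum(t)) ** 2 / t ** (1.0 / 3.0)
    assert ratio < 100.0
```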

Using similar arguments, it is straightforward to show that

$$\left| \sum\_{n=1}^{\lfloor t \rfloor} \frac{1}{n^{\frac{1}{2} + i(t - t^{\delta\_3})}} \right|^2 = O\left( t^{\frac{1}{3}} \right), \qquad t \to \infty. \tag{31}$$

It turns out that the techniques of [10] can be directly applied to estimating sums involving the lhs of (31). In this way it can be shown that

$$\sum\_{m\_1=1}^{\lfloor t \rfloor} \sum\_{m\_2=1}^{\lfloor t \rfloor} \frac{1}{m\_1^{\frac{1}{2} + i(t - t^{\delta\_3})} m\_2^{\frac{1}{2} - i(t - t^{\delta\_3})}} = O\left(t^{\frac{1}{3}} (\ln t)^2\right), \qquad t \to \infty.$$

The sum on the rhs of (28), in comparison to the above sum, contains the extra factor 1/ln((*m*2/*m*1)*t*<sup>1−*δ*3</sup>). Fortunately, this factor satisfies the properties needed for the partial summation procedure. Indeed, a slight modification of the partial summation technique used in [10] suggests the estimate (7).

We note that the second term of the rhs of (7) is identical with the estimate (6). This is due to the fact that the estimation of *S*<sub>4</sub><sup>*SD*</sup> involves splitting the relevant set of summation into three parts, and in one of these parts the summand has the form of the summand of (18).

#### *3.3. Review of Techniques for Estimating Euler-Zagier Double Sums*

We summarise some of the techniques used in [10] that will be needed in the estimation of the sum *S*<sub>4</sub><sup>*SD*</sup>. In what follows we use the terminology of [10].

In [10] estimates of the Euler-Zagier double sums are obtained by employing techniques from [12,14]. Indeed, letting *sj* = *σj* + *itj*, with 0 ≤ *σj* < 1, *j* = 1, 2 and |*tj*| ∼ *cjt*, for some positive constants *cj*, estimates as *t* → ∞ are derived for sums of the form

$$\sum\_{1 \le m < n} \frac{1}{m^{s\_1}} \frac{1}{n^{s\_2}}.$$

A special case of Theorem 1.1 in [10] yields

$$\sum\_{1 \le m < n \le t} \frac{1}{m^{\frac{1}{2} + it}} \frac{1}{n^{\frac{1}{2} + it}} = O\left(t^{\frac{1}{3}} (\ln t)^2\right).$$

The above result, as well as the estimates of Theorem 1.1 therein, provide a 'sharp' generalisation for double sums of the classical result for the single sum of Theorem 5.12 in [13]. In this sense, the results of [10] improve significantly the analogous results of [15].

Here we are interested only in the part of the analysis of [10] concerning sums of the form

$$S\_1 = \sum\_{1 \le m < n < t} \frac{1}{m^{s\_1}} \frac{1}{n^{s\_2}},$$

and in particular in the case *σ*1 = *σ*2 = 1/2. The above sum is estimated by splitting it into two classes of sums:

$$\mathcal{S}\_1 = \sum\_{j=1}^{\left[ \frac{\ln t}{\ln 2} \right]} \left[ T \left( 2^{-j} t \right) + \mathcal{U} \left( 2^{-j} t \right) \right],$$

where

$$T(M) = \sum\_{M < m < n \le 2M} \frac{1}{m^{s\_1}} \frac{1}{n^{s\_2}} \qquad \text{and} \qquad \mathcal{U}(M) = \sum\_{1 \le m \le M} \frac{1}{m^{s\_1}} \sum\_{M < n \le 2M} \frac{1}{n^{s\_2}}.$$

The estimation of the sum *U*(*M*) is straightforward, since it can be reduced to the estimation of single sums; these are estimated by employing Theorem 5.12 of [13], namely

$$\sum\_{1 \le m \le M} \frac{1}{m^{1/2 + it}} = O\left(t^{\frac{1}{6}} \ln t\right) \qquad \text{and} \qquad \sum\_{M < n \le 2M} \frac{1}{n^{1/2 + it}} = O\left(t^{\frac{1}{6}}\right).$$

Thus,

$$\sum\_{j=1}^{\left[\frac{\ln t}{\ln 2}\right]} \mathcal{U}\left(2^{-j}t\right) = O\left(t^{\frac{1}{3}}(\ln t)^2\right), \qquad t \to \infty.$$
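The dyadic decomposition of 𝒮1 into the blocks *T*(*M*) and 𝒰(*M*) is an exact identity, not merely an asymptotic device: each *n* in a block (*M*, 2*M*] is paired with the *m* ≤ *M* through 𝒰 and with *M* < *m* < *n* through *T*. For *t* a power of 2 the blocks tile {2, …, *t*}, which can be verified directly (a sketch with arbitrarily chosen small parameters *t* = 64, *t*1 = 10, *t*2 = 7):

```python
t = 64                    # a power of 2, so the dyadic blocks tile {2, ..., t}
s1 = complex(0.5, 10.0)   # arbitrary illustrative exponents s_j = 1/2 + i t_j
s2 = complex(0.5, 7.0)
term = lambda m, n: m ** (-s1) * n ** (-s2)

def T(M):  # coupled block: M < m < n <= 2M
    return sum(term(m, n) for n in range(M + 1, 2 * M + 1) for m in range(M + 1, n))

def U(M):  # decoupled block: 1 <= m <= M, M < n <= 2M
    return sum(term(m, n) for m in range(1, M + 1) for n in range(M + 1, 2 * M + 1))

J = t.bit_length() - 1    # [ln t / ln 2] = 6
dyadic = sum(T(t >> j) + U(t >> j) for j in range(1, J + 1))
direct = sum(term(m, n) for n in range(2, t + 1) for m in range(1, n))
assert abs(dyadic - direct) < 1e-9   # the decomposition is exact
```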

The estimation of the sum *T*(*M*) is more elaborate and is based on Lemmas 3.1–3.5, therein. Lemma 3.1 appears in [14], Lemmas 3.2–3.4 appear in [12], and Lemma 3.5 is a variation of the classical and widely used partial summation technique (see for example [13,14]).

Since the latter Lemma plays an important role in our analysis we find it helpful to restate it:

**Lemma 1** (Lemma 3.5 in [10])**.** *Let M and N be positive integers such that M* < *N, f*(*x*, *y*) *be a C*<sup>2</sup>*-function on* [*M*, *N*] × [*M*, *N*]*, g*(*m*, *n*) *be an arithmetical function on the same domain, and*

$$G(x,y) = \sum\_{x < m \le n \le y} g(m,n).$$

*Suppose that*

$$|G(x, y)| \le G, \quad |f\_x(x, y)| \le \kappa\_1, \quad |f\_y(x, y)| \le \kappa\_2, \quad |f\_{xy}(x, y)| \le \kappa\_3,$$

*for some positive constants G*, *κ*1, *κ*2, *κ*3*, and for any M* ≤ *x*, *y* ≤ *N. Then, we have*


$$\left| \sum\_{M < m \le n \le N} f(m, n) g(m, n) \right| \le G \left[ f(M, N) + (\kappa\_1 + \kappa\_2)(N - M) + \kappa\_3(N - M)^2 \right].\tag{32}$$
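The bound (32) is easy to sanity-check numerically for concrete data. Below we take *f*(*x*, *y*) = 1/(*xy*), for which the derivative bounds *κ*<sub>*j*</sub> on [*M*, *N*]<sup>2</sup> are available in closed form, together with the oscillating arithmetical function *g*(*m*, *n*) = (−1)<sup>*m*+*n*</sup>; this is a toy example of our own, not one arising in the paper.

```python
M, N = 2, 12

def g(m, n):
    return (-1) ** (m + n)

def f(x, y):
    return 1.0 / (x * y)

def G_of(x, y):
    """G(x, y) = sum over x < m <= n <= y of g(m, n)."""
    return sum(g(m, n) for n in range(x + 1, y + 1) for m in range(x + 1, n + 1))

Gbound = max(abs(G_of(x, y)) for x in range(M, N + 1) for y in range(M, N + 1))
k1 = k2 = 1.0 / M ** 3  # |f_x| = 1/(x^2 y) <= 1/M^3 on [M, N]^2; likewise |f_y|
k3 = 1.0 / M ** 4       # |f_xy| = 1/(x^2 y^2) <= 1/M^4
lhs = abs(sum(f(m, n) * g(m, n)
              for n in range(M + 1, N + 1) for m in range(M + 1, n + 1)))
rhs = Gbound * (f(M, N) + (k1 + k2) * (N - M) + k3 * (N - M) ** 2)
assert lhs <= rhs  # the inequality (32), with ample room to spare here
```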

In order to estimate the sum *T*(*M*), the set of summation is divided into three regions, corresponding to the following three cases: (a) *M* ≤ *t*<sup>1/3</sup>; (b) *t*<sup>1/3</sup> < *M* < *t*<sup>2/3</sup>; (c) *M* ≥ *t*<sup>2/3</sup>.
For case (a) it is sufficient to observe that *T*(*M*) = *O*(*M*), which yields

$$\sum\_{j=\left[\frac{2}{3}\frac{\ln t}{\ln 2}\right]}^{\left[\frac{\ln t}{\ln 2}\right]} T\left(2^{-j}t\right) = O\left(t^{\frac{1}{3}}\ln t\right), \qquad t \to \infty.$$

For case (c), Lemma 3.4 is used to treat the oscillatory part of the sum, i.e., it is shown that the exponential sum ∑ 1/(*m*<sup>*it*<sub>1</sub></sup>*n*<sup>*it*<sub>2</sub></sup>) over *M* < *m* < *n* ≤ 2*M* is *O*(*t* ln *t*). Then, applying partial summation using Lemma 3.5, which shows that *T*(*M*) = *O*((*t* ln *t*)/*M*), it follows that

$$\sum\_{j=1}^{\left[\frac{1}{3}\frac{\ln t}{\ln 2}\right]} T\left(2^{-j}t\right) = O\left(t^{\frac{1}{3}}(\ln t)^2\right), \qquad t \to \infty.$$

Case (b) is conceptually the same as case (c) but involves more technicalities: Lemmas 3.1–3.3 are used to treat the oscillatory part of the sum, i.e., to show that the exponential sum ∑ 1/(*m*<sup>*it*<sub>1</sub></sup>*n*<sup>*it*<sub>2</sub></sup>) over *M* < *m* < *n* ≤ 2*M* is *O*(*Mt*<sup>1/3</sup>(ln *t*)<sup>1/2</sup>). Then, applying partial summation by using Lemma 3.5 in order to obtain that *T*(*M*) = *O*(*t*<sup>1/3</sup>(ln *t*)<sup>1/2</sup>), it follows that

$$\sum\_{j=\left[\frac{1}{3}\frac{\ln t}{\ln 2}\right]}^{\left[\frac{2}{3}\frac{\ln t}{\ln 2}\right]} T\left(2^{-j}t\right) = O\left(t^{\frac{1}{3}}(\ln t)^{\frac{3}{2}}\right), \qquad t \to \infty.$$

Summarising the above results it follows that

$$S\_1 = \sum\_{1 \le m < n < t} \frac{1}{m^{s\_1}} \frac{1}{n^{s\_2}} = O\left(t^{\frac{1}{3}} (\ln t)^2\right), \qquad t \to \infty. \tag{33}$$

**Remark 2.** *From the above analysis it follows that it is much more complicated to estimate the sums of the form T*(*M*) *in comparison to those of the form U*(*M*)*; the latter correspond to sets of summation which can be decoupled, whereas the sets of summation corresponding to the former cannot be decoupled. The latter observation necessitates the use of Lemma 3.5 in [10], which is related to the partial summation technique. The sets of summation in our work are more complicated, requiring more general forms of that Lemma. In this connection, in Section 4.1 we state a general form of Abel's summation formula for double sums; its proof is presented in [11].*

#### **4. Derivation of the Estimate** (7)

In what follows we let *δ*3 = *δ*, and throughout the rest of the paper *s* = 1/2 + *it*. In order to derive (7), we split the sum *S*<sub>4</sub><sup>*SD*</sup> into three different sums, *S*<sup>(*i*)</sup>, *S*<sup>(*ii*)</sup>, *S*<sup>(*iii*)</sup>, in accordance with the analysis of Section 4.2 below. We analyse these three sums in Section 4.3.


#### *4.1. A Lemma for Partial Summation in Two Dimensions*

Lemma 3.5 of [10] is a two-dimensional form of the so-called Abel summation formula; see Appendix C. The difficulty appearing in the proof of Lemma 3.5 of [10] is due to the fact that the set of summation is given by an expression which does not allow the double sum to be decoupled into two single sums. In the separable case, the simple form of the Abel summation formula for double sums is given in Lemma A1, and it is straightforward to derive it by applying (A1) twice. However, for our analysis we need to generalise Lemma 3.5 of [10]. This generalisation is given by Lemma 2 below, whose proof is given in [11]. It is this form of the Abel summation formula for double sums that is needed for the analysis of the sums (3b) and (4b) appearing in the sum *S*<sub>2</sub><sup>(*i*)</sup>, which is analysed in Section 4.3.

**Lemma 2.** *Let θ*(·) *be a linear function and φ*(·) *be its inverse; particular such functions are θ*(*x*) = *t*<sup>*δ*−1</sup>*x and φ*(*x*) = *t*<sup>1−*δ*</sup>*x. Let M* < *N be positive integers, f*(*x*, *y*) *be a C*<sup>2</sup>*-function on* [*θ*(*M*), *θ*(*N*)] × [*M*, *N*]*, g*(*m*, *n*) *be an arithmetical function on the same domain, and*

$$G(x,y) = \sum\_{x < \phi(m) < n \le y} g(m,n).$$

*Suppose that*

$$|G(x, y)| \le G, \quad |f\_x(x, y)| \le \kappa\_1, \quad |f\_y(x, y)| \le \kappa\_2, \quad |f\_{xy}(x, y)| \le \kappa\_3,$$

*for some positive constants G*, *κ*1, *κ*2, *κ*3*, and for any* (*<sup>x</sup>*, *y*) ∈ [*θ*(*M*), *θ*(*N*)] × [*<sup>M</sup>*, *<sup>N</sup>*]*. Then, we have*

$$\begin{aligned} & \left| \sum\_{M < \phi(m) < n \le N} f(m, n) g(m, n) \right| \\ \leq & G \left[ f \left( \theta(M), N \right) + \kappa\_1 \left( \theta(N) - \theta(M) \right) + \kappa\_2 (N - M) + \kappa\_3 \left( \theta(N) - \theta(M) \right) (N - M) \right]. \tag{34} \end{aligned}$$

**Remark 3.** *The above formulation is adapted to the subregion (4b) of the splitting presented in Section 4.3, but the admissible choice of the function θ (respectively φ) is wider than the particular forms given in Lemma 2. The result and the proof are the same if we substitute the summation over x* < *φ*(*m*) < *n* ≤ *y with the summation over x* < *n* < *φ*(*m*) ≤ *y; thus this result can be adapted to the subregion (3b).*

#### *4.2. The Different Forms of E*<sub>4</sub><sup>*SD*</sup>

Equation (19) with *δ*3 = *δ* becomes

$$S\_4^{SD}(t,\delta) = \sum\_{m\_1=1}^{[t]} \sum\_{m\_2=1}^{[t]} \frac{1}{m\_1^s m\_2^{\bar{s}}} E\_4^{SD}(t,\delta). \tag{35}$$

Let *α* = (*m*2/*m*1)*t*<sup>1−*δ*</sup>. It is shown in [11] that the term *E*<sub>4</sub><sup>*SD*</sup> is given by the expressions below.

(i) |*α* − 1| > *c* > 0, with the constant *c* independent of *t*:

$$E\_4^{SD} \sim -\sqrt{\frac{2}{\pi}} e^{\frac{i\pi}{4}} e^{-it^\delta} \frac{1}{t^{\frac{\delta}{2}}} \frac{1}{\ln \alpha} \alpha^{it^\delta}, \quad t \to \infty. \tag{36}$$

The condition |*α* − 1| > *c* > 0 yields that 1/ln *α* = *O*(1/*c*) is bounded.

(ii) 1 ≫ |*α* − 1| ≥ Γ*t*<sup>−*δ*/2</sup>, for some constant Γ > 0 independent of *t*:

$$E\_4^{SD} \sim -\sqrt{\frac{2}{\pi}} e^{\frac{i\pi}{4}} e^{-it^\delta} \frac{1}{t^{\frac{\delta}{2}}} \frac{1}{\ln \alpha} \alpha^{it^\delta}, \quad t \to \infty. \tag{37}$$

The condition 1 ≫ |*α* − 1| ≥ Γ*t*<sup>−*δ*/2</sup> yields that 1/ln *α* = *O*(1)*t*<sup>*δ*/2</sup>, thus the term *t*<sup>−*δ*/2</sup>/ln *α* is bounded. Furthermore, this condition restricts the set of summation to a sufficiently small set, so that we will use a different technique to estimate the relevant sum compared to the case (i).

(iii) *α* = 1 + *γt*<sup>−Δ</sup>, Δ ≥ *δ*/2, *γ* ∈ ℝ, for any constant *γ* independent of *t*:

The leading contribution is equal to the pole contribution multiplied by some constant *c* depending only on *γ*, with |*c*(*γ*)| < 2 and *c*(0<sup>±</sup>) = ±1.

If Δ = *δ*/2, then, using the analysis in [11], we obtain

$$E\_4^{SD} \sim c(\gamma)e^{-i\frac{m\_2}{m\_1}t}e^{-i\frac{\gamma^2}{2}} + \frac{1}{\sqrt{2\pi}}e^{\frac{i\pi}{4}}e^{-it^\delta}\frac{1}{t^{\frac{\delta}{2}}}, \quad t \to \infty. \tag{38}$$

If Δ > *δ*/2, then, similarly to the above derivation and using the Plemelj formulae, we obtain

$$E\_4^{SD} \sim \text{sign}(\gamma) \, e^{-i\frac{m\_2}{m\_1}t} e^{-i\frac{\gamma^2}{2}t^{\delta-2\Delta}} \left(1 + O\left(t^{\frac{\delta}{2}-\Delta}\right)\right), \quad t \to \infty. \tag{39}$$

The sets of summation corresponding to cases (ii) and (iii) are bounded by the two red lines in Figure 1.

**Figure 1.** The subregions of the set of summation.

**Remark 4.** *In Equation* (39) *one observes that for* Δ > *δ*/2 *the dominant contribution of E*<sub>4</sub><sup>*SD*</sup> *is given by "plus" or "minus" half of the pole contribution (depending on the sign of γ), where the pole contribution is given in* (18)*. Noting that γ* < 0 ⇔ (*m*1, *m*2) ∈ *M*4*, with M*4 *defined in* (18)*, one observes that the dominant contribution of the expression* 2*S*<sub>4</sub><sup>*P*</sup> + *S*<sub>4</sub><sup>*SD*</sup> *appearing in* (17) *is equal to S*<sub>4</sub><sup>*P*</sup> *for all γ* ∈ ℝ *and* Δ > *δ*/2*.*

*The analysis of the case* Δ = *δ*/2 *is included in (ii). Equation* (38) *elucidates the mechanism responsible for switching the contribution of E*<sub>4</sub><sup>*SD*</sup> *from the form* (37) *to the form* (39)*.*

#### *4.3. The Estimation of the Three Parts of S*<sub>4</sub><sup>*SD*</sup>

In what follows we estimate the three sums corresponding to the above three forms of *E*<sub>4</sub><sup>*SD*</sup>.

#### 4.3.1. The Estimate of Case (iii)

Recalling Remark 4, we treat the sum associated with the case (iii) similarly to the derivation of (6), but for a smaller set of summation; hence, it yields the estimate *O*(*t*<sup>*δ*/2</sup> ln *t*).

#### 4.3.2. The Estimate of Case (ii)

We treat the sum associated with the case (ii) similarly to Lemma 5.1 in [8], but for a smaller set of summation. Hence, it also yields the estimate *O*(*t*<sup>*δ*/2</sup> ln *t*).

It is sufficient to estimate the following sum

$$S^{(ii)} = \frac{1}{t^{\frac{\delta}{2}}} \sum\_{m\_2=1}^{[t^{\delta}]} \sum\_{m\_1=c\_1 m\_2 t^{1-\delta}}^{c\_2 m\_2 t^{1-\delta}} \frac{1}{m\_1^{s+it^{\delta}}} \frac{1}{m\_2^{\bar{s}-it^{\delta}}} \frac{1}{\ln\left(\frac{m\_2}{m\_1} t^{1-\delta}\right)},\tag{40}$$

where *c*1 and *c*2 are two positive constants with *c*2 ≤ 2*c*1.

**Remark 5.** *The constraint c*2 ≤ 2*c*1 *is satisfied by taking a sufficiently small positive constant c in case (i). Indeed, if c* < 1/3*, then the condition* |*α* − 1| < 1/3 *yields*

$$
\frac{3}{4}m\_2t^{1-\delta} < m\_1 < \frac{3}{2}m\_2t^{1-\delta}.
$$

*Thus, the condition* 1 ≫ |*α* − 1| ≥ Γ*t*<sup>−*δ*/2</sup> *gives rise to two sums of the form* (40) *with* 3/4 < *c*1 < *c*2 < 3/2*, thus c*2 ≤ 2*c*1*. In particular, we obtain* 3/4 < *c*1 < *c*2 < 1 *for the first sum and* 1 < *c*1 < *c*2 < 3/2 *for the second sum.*
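The elementary equivalence underlying Remark 5 — that |*α* − 1| < 1/3, with *α* = (*m*2/*m*1)*t*<sup>1−*δ*</sup>, is the same as (3/4)*m*2*t*<sup>1−*δ*</sup> < *m*1 < (3/2)*m*2*t*<sup>1−*δ*</sup> — can be verified exhaustively for small parameters (the values of *t* and *δ* below are arbitrary illustrative choices):

```python
t, delta = 1000.0, 0.4
C = t ** (1.0 - delta)
for m2 in range(1, 10):
    for m1 in range(1, int(t) + 1):
        alpha = (m2 / m1) * C
        in_alpha_band = abs(alpha - 1.0) < 1.0 / 3.0
        in_m1_band = 0.75 * m2 * C < m1 < 1.5 * m2 * C
        assert in_alpha_band == in_m1_band  # the two conditions coincide
```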

Recalling that *t*<sup>−*δ*/2</sup>/ln((*m*2/*m*1)*t*<sup>1−*δ*</sup>) = *O*(1), we will first estimate the sum

$$S\_A = \sum\_{m\_2=1}^{[t^\delta]} \sum\_{m\_1=c\_1 m\_2 t^{1-\delta}}^{c\_2 m\_2 t^{1-\delta}} \frac{1}{m\_1^s} \frac{1}{m\_2^{\bar{s}}},$$

where *c*1 and *c*2 are two positive constants with *c*2 ≤ 2*c*1. Then, by partial summation, we will estimate the sum *S*<sup>(*ii*)</sup>.

Observing that *m*2 takes relatively "small" values in the set of summation of *SA*, we use the following inequality without losing crucial information:

$$|S\_A| < \sum\_{m\_2=1}^{[t^\delta]} \frac{1}{m\_2^{1/2}} \left| \sum\_{m\_1=c\_1 m\_2 t^{1-\delta}}^{c\_2 m\_2 t^{1-\delta}} \frac{1}{m\_1^s} \right|.$$

Then, we estimate the *m*1-sum using Theorem 5.9 of [13], namely

$$\sum\_{a < n \le b} e^{2\pi i f(n)} = O\left( h(b-a)\lambda\_2^{\frac{1}{2}} + \lambda\_2^{-\frac{1}{2}} \right), \qquad 0 < \lambda\_2 \le f''(x) \le h\lambda\_2 \ \text{on} \ [a, b],$$

applied here with *f*(*x*) = −(*t*/2*π*) ln *x*.

Following the partial summation technique appearing in the proof of Theorem 5.12 of [13], and using the fact that *a* > *c*1*m*2*t*<sup>1−*δ*</sup>, we obtain

$$\sum\_{m\_1=c\_1m\_2t^{1-\delta}}^{c\_2m\_2t^{1-\delta}} \frac{1}{m\_1^{s}} = O\left(t^{\frac{1}{2}}t^{-\frac{1}{2}(1-\delta)}m\_2^{-\frac{1}{2}}\right), \qquad t \to \infty.$$

Thus,

$$S\_A = \sum\_{m\_2=1}^{[t^\delta]} \frac{1}{m\_2} O\left(t^{\frac{\delta}{2}}\right) = O\left(t^{\frac{\delta}{2}} \ln t\right), \qquad t \to \infty. \tag{41}$$

Using the estimate (41), the monotonicity properties of the term 1/ln((*m*2/*m*1)*t*<sup>1−*δ*</sup>) appearing in (40), and the fact that *t*<sup>−*δ*/2</sup>/ln((*m*2/*m*1)*t*<sup>1−*δ*</sup>) = *O*(1), the partial summation technique, as described in [14] and in Appendix B of [8], yields

$$S^{(ii)} = O\left(t^{\frac{\delta}{2}} \ln t\right), \qquad t \to \infty.$$

**Remark 6.** *The above approaches do not fully exploit the smallness of the set of summation, thus we expect that the above estimates can be sharpened (we recall that the sets of summation corresponding to cases (ii) and (iii) are bounded by the two red lines in Figure 1). In order to exploit the smallness of the set of summation one could follow the techniques presented in [8], which make use of the results of [6]. However, the estimates provided here for S*<sup>(*ii*)</sup> *and S*<sup>(*iii*)</sup> *are sufficient for the purpose of this paper, since they are the same as (and not weaker than)* (6)*.*

#### 4.3.3. The Estimate of Case (i)

In order to estimate this sum we will use techniques similar to the ones used in [10] for the estimation of the double zeta function, but with two main differences: first, we will split the set of summation into more regions, and second, for some of these regions we will use Lemma 2, which is needed for the partial summation of double sums and is a more general form of Lemma 3.5 in [10].

The term involved in the partial summation is now of the form

$$f(m\_1, m\_2) = \frac{1}{m\_1^{1/2}} \frac{1}{m\_2^{1/2}} \frac{1}{\ln\left(\frac{m\_2}{m\_1} t^{1-\delta}\right)},$$

instead of the term

$$
\tilde{f}(m\_1, m\_2) = \frac{1}{m\_1^{\sigma}} \frac{1}{m\_2^{\sigma}}, \qquad 0 < \sigma < 1,
$$

appearing in [10]. However, *f* shares the properties of *f̃* needed for the application of the partial summation technique, provided that the quantity (*m*2/*m*1)*t*<sup>1−*δ*</sup> is not arbitrarily close to 1; this is ensured by the condition |*α* − 1| > *c* > 0, with the constant *c* independent of *t*. Furthermore, the bounds {*κ*<sub>*j*</sub>}<sub>1</sub><sup>3</sup> remain the same as in [10], with the exception of the occasional appearance of a logarithmic term due to the factor 1/ln((*m*2/*m*1)*t*<sup>1−*δ*</sup>). However, this term does not affect the relevant estimates; in fact, it is slightly helpful, since now {*κ*<sub>*j*</sub>}<sub>1</sub><sup>3</sup> are divided by ln *t*.

 The term involving the exponential sum now has the form

$$g(m\_1, m\_2) = \frac{1}{m\_1^{i(t-t^\delta)}} \frac{1}{m\_2^{-i(t-t^\delta)}},$$

instead of the corresponding term of [10]

$$\tilde{g}(m\_1, m\_2) = \frac{1}{m\_1^{it\_1}} \frac{1}{m\_2^{-it\_2}}, \qquad t\_1 \asymp t\_2.$$

**Remark 7.** *The notation $t_1 \asymp t_2$ in [10] means that $t_1 = O(t_2)$ and $t_2 = O(t_1)$. This is compatible with the selection of our $t_1$ and $t_2$. Furthermore, the fact that $t - t^\delta \sim t$ implies that all relevant estimates are the same. It should be noted that the condition $|t_1 + t_2| \gg 1$ in [10] is imposed because the double Riemann zeta function considered in [10] gives rise to sums which for $t_1 = -t_2$ are not defined. In our work we deal only with sums where the set of summation is $[1, t] \times [1, t]$. In analogy, the single Riemann zeta function $\zeta(s)$ and the relevant single sum are not defined at $s = 1$; however, the sum $\sum_{m=1}^{t} \frac{1}{m}$ can be estimated to be $O(\ln t)$.*

*In summary, the analysis in [10] can be applied to the sums appearing in our work.*
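The final claim of Remark 7, that the harmonic sum is $O(\ln t)$, is easy to confirm numerically; the following is a minimal sketch (the helper name and sample values are ours, not from the paper), using the standard integral-comparison bounds $\ln(t+1) \le \sum_{m \le t} 1/m \le 1 + \ln t$:

```python
import math

def harmonic(t):
    # H_t = sum_{m=1}^{t} 1/m, the single sum appearing in Remark 7
    return sum(1.0 / m for m in range(1, t + 1))

# Integral comparison gives ln(t+1) <= H_t <= 1 + ln t, i.e. H_t = O(ln t).
for t in (10, 1000, 100000):
    H = harmonic(t)
    assert math.log(t + 1) <= H <= 1 + math.log(t)
```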

The Estimate of $S_1^{(i)}$

First, we treat the part of the sum where $m_2 > m_1$: in this case, it is sufficient to estimate the sum

$$S\_1^{(i)} = \frac{1}{t^{\frac{\delta}{2}}} \sum\_{m\_2=1}^t \sum\_{m\_1=1}^{m\_2} \frac{1}{m\_1^{s+it^\delta}} \frac{1}{m\_2^{\bar{s}-it^\delta}} \frac{1}{\ln\left(\frac{m\_2}{m\_1} t^{1-\delta}\right)}.\tag{42}$$

First observe that since $t > m_2 > m_1 \geq 1$, we obtain that $t^{1-\delta} < \frac{m_2}{m_1} t^{1-\delta} < t^{2-\delta}$, thus the quantity $\frac{1}{\ln\left(\frac{m_2}{m_1} t^{1-\delta}\right)}$ is bounded both from above and below by $\frac{1}{\ln t}$ multiplied by some positive constant that depends only on $\delta$. For our purpose it is sufficient to work with $0 < \delta < 1/2$, thus we obtain that $\frac{1}{2} \frac{1}{\ln t} < \frac{1}{\ln\left(\frac{m_2}{m_1} t^{1-\delta}\right)} < \frac{2}{\ln t}$.
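This two-sided bound can be sanity-checked numerically; a minimal sketch (the helper name and the sample values of $t$, $\delta$, $m_1$, $m_2$ are ours, chosen for illustration):

```python
import math

def log_factor(m1, m2, t, delta):
    # 1 / ln((m2/m1) * t^(1-delta)), the weight appearing in S_1^(i)
    return 1.0 / math.log((m2 / m1) * t ** (1 - delta))

# For 1 <= m1 < m2 <= t and 0 < delta < 1/2 the factor should lie
# strictly between 1/(2 ln t) and 2/ln t.
t, delta = 10**6, 0.3
for m1, m2 in [(2, 3), (17, 4000), (999, 10**5), (2, t)]:
    w = log_factor(m1, m2, t, delta)
    assert 1 / (2 * math.log(t)) < w < 2 / math.log(t)
```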

The sum $S_1^{(i)}$ is estimated through the analysis provided in [10] with $m_1 = m$, $m_2 = n$. Indeed, we follow the methodology presented in Section 3.3 above by splitting the set of summation in subsets corresponding to the forms *U*(*M*) and *T*(*M*). For the former case we follow step-by-step the analysis of [10]. Then, we incorporate the contribution of the term $\frac{1}{\ln\left(\frac{n}{m} t^{1-\delta}\right)}$ through the analysis used for the sum $S^{(ii)}$ above. This involves the use of partial summation as described in [14] and the Appendix B of [8]. For the latter case we use $f(m, n) = \frac{1}{m^{1/2}} \frac{1}{n^{1/2}} \frac{1}{\ln\left(\frac{n}{m} t^{1-\delta}\right)}$ and apply the analysis appearing in [10] and described in Section 3.3 above, with the only difference occurring in the application of Lemma 3.5 of [10], where now the bounds will be multiplied by the term $\frac{1}{\ln t}$. Thus, we obtain the estimate

$$\sum\_{m\_2=1}^t \sum\_{m\_1=1}^{m\_2} \frac{1}{m\_1^{s + it^\delta}} \frac{1}{m\_2^{\overline{s} - it^\delta}} \frac{1}{\ln\left(\frac{m\_2}{m\_1} t^{1 - \delta}\right)} = O\left(t^{\frac{1}{3}} \ln t\right), \qquad t \to \infty,\tag{43}$$

which yields

$$S\_1^{(i)} = O\left(t^{\frac{1}{3} - \frac{\delta}{2}} \ln t\right), \qquad t \to \infty. \tag{44}$$

Furthermore, the part of the sum where *m*2 = *m*1 becomes the following single sum

$$O\left(\frac{1}{t^{\frac{\delta}{2}}}\right) \sum\_{m=1}^{t} \frac{1}{m} \frac{1}{\ln t^{1-\delta}} = O\left(t^{-\frac{\delta}{2}}\right).$$

The Estimate of $S_2^{(i)}$

Next, we will treat the sum in the domain $m_2 < m_1$; this sum presents more difficulties. We first have to split this domain into several subdomains. In each of these subdomains we use the techniques of [10]. Furthermore, in some cases the partial summation requires more general forms of the lemma involving the partial summation in double sums; for this reason we employ Lemma 2.

Our splitting is motivated by the following observation in the analogue approach of [10]: if the double sum can be decoupled, namely if the domain of summation (in two dimensions) is a rectangle, then estimating this double sum can be reduced to estimating two single sums; this occurs for sums of the form *U*(*M*) appearing in [10] (see Section 3.3 above). If the double sum cannot be decoupled, namely if the domain of summation (in two dimensions) is bounded by at least one curve which depends on both the horizontal and the vertical coordinates, then a more sophisticated approach is required, both for the treatment of the double exponential sum and the partial summation technique; this occurs for sums of the form *T*(*M*) appearing in [10] (see Section 3.3 above).
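The decoupling observation can be illustrated directly: over a rectangle, a double exponential sum factors exactly into a product of two single sums. The following is an illustrative sketch (the ranges and the value of $t$ are arbitrary sample choices, not from the paper):

```python
import cmath

def single_sum(lo, hi, t, sign):
    # sum_{m=lo}^{hi} m^{sign * i t} = sum_{m} e^{sign * i t ln m}
    return sum(cmath.exp(sign * 1j * t * cmath.log(m)) for m in range(lo, hi + 1))

def double_sum_rect(M, N, t):
    # the double sum of m^{-it} n^{it} over the rectangle [M0,M1] x [N0,N1]
    return sum(
        cmath.exp(-1j * t * cmath.log(m)) * cmath.exp(1j * t * cmath.log(n))
        for m in range(M[0], M[1] + 1)
        for n in range(N[0], N[1] + 1)
    )

t = 1000.0
M, N = (10, 40), (50, 90)
direct = double_sum_rect(M, N, t)
# Over a rectangle the variables separate, so the double sum equals
# the product of the two single sums (decoupling).
factored = single_sum(*M, t, -1) * single_sum(*N, t, +1)
assert abs(direct - factored) < 1e-6
```

When the domain is bounded by a curve involving both coordinates (the $T(M)$ case), no such factorisation exists, which is why a genuinely two-dimensional treatment is needed there.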

Let us use the notation $m_1 = n$, $m_2 = m$. Furthermore, let us denote by $D_r$ the remaining set of summation, i.e., for $(n, m) \in [1, t] \times [1, t]$, let $(n, m) \in D_r$ iff

$$\begin{aligned} m &< n \quad \text{and} \quad &n < mt^{1-\delta}(1-c), \\\\ m &< n \quad \text{and} \quad &mt^{1-\delta}(1+c) < n, \end{aligned} \tag{45}$$

for some sufficiently small constant $c > 0$ (independent of $t$); these restrictions are induced by the condition $|\alpha - 1| > c > 0$, with $\alpha = \frac{m}{n} t^{1-\delta}$.

In $D_r$ there are two types of regions that correspond to sums of the form *T*(*M*) in [10]. The first type is bounded by the line $n = m$ and the second type is bounded by the lines $n = (1 \pm c)\, m t^{1-\delta}$, for some sufficiently small $c > 0$. For both cases the treatment of the exponential sum follows the arguments presented in [10] (which first appeared in [12,14]). Concerning the partial summation, Lemma 3.5 in [10] is sufficient for the treatment of the first case; however, Lemma 2 is required for the second case.

Thus, in order to estimate the sum

$$S\_2^{(i)} = \frac{1}{t^{\frac{\delta}{2}}} \sum\_{(n,m)\in D\_r} \frac{1}{n^{s+it^\delta}} \frac{1}{m^{s-it^\delta}} \frac{1}{\ln\left(\frac{m}{n}t^{1-\delta}\right)},\tag{46}$$

we split *Dr* into four different regions, where, in addition to conditions (45), the following conditions hold:

	- (1a) $m < M$ and $M < n < 2M$.
	- (1b) $M < m < n < 2M$.
	- (2a) $t^\delta < m < M$ and $M < n < 2M$.
	- (2b) $M < m < n < 2M$.
	- (3a) $M < mt^{1-\delta} < 2M$ and $t^{1-\delta} < n < M$.
	- (3b) $M < n < mt^{1-\delta} < 2M$.
	- (4a) $M < mt^{1-\delta} < 2M$ and $2M < n < t$.
	- (4b) $M < mt^{1-\delta} < n < 2M$.

The first subregion of each of the above regions, namely (1a), (2a), (3a) and (4a), are of rectangular shape, see Figure 1. The corresponding sums are treated similarly to the *U*(*M*) sums in [10]. It is straightforward to modify the relevant techniques therein according to the discussion of the case $S_1^{(i)}$ and obtain the essential bound of the rhs of (44). In fact, observing that in these regions $\frac{1}{\ln\left(\frac{m}{n} t^{1-\delta}\right)} = O(1)$, one obtains the estimate $O\left(t^{\frac{1}{3} - \frac{\delta}{2}} (\ln t)^2\right)$, $t \to \infty$.

The subregions (1b) and (2b) are of triangular shape, see Figure 1, thus the corresponding sums are treated similarly to the *T*(*M*) sums in [10]. The sums in these regions are treated in [10] via Lemma 3.5. It is straightforward to modify this approach accordingly and obtain the same bound as the rhs of (44).

The subregions (3b) and (4b) are also of triangular shape, see Figure 1. In order to analyse these sums we have to modify the approach of estimating the sums *T*(*M*) in [10]. It is straightforward to modify the analysis of the oscillatory part of the sum, namely the part which uses the Lemmas 3.1–3.3 therein. For the analogue of the partial summation we need to use Lemma 2 instead of Lemma 3.5 in [10]. Then, we obtain the essential bound of the rhs of (44). In fact, observing that in these regions $\frac{1}{\ln\left(\frac{m}{n} t^{1-\delta}\right)} = O(1)$, one obtains the estimate $O\left(t^{\frac{1}{3} - \frac{\delta}{2}} (\ln t)^2\right)$, $t \to \infty$.

#### *4.4. An Alternative Way to Estimate $S_2^{(i)}$*

It is possible to estimate $S_2^{(i)}$ using a different and less technical approach. Let us use the notation $D_2 = \left\{(n, m) \in [1, t] \times [1, t],\ m < n\right\}$. Then, we rewrite

$$S\_2^{(i)} = \frac{1}{t^{\frac{\delta}{2}}} \sum\_{(n,m)\in D\_r} \frac{1}{n^{s+it^\delta}} \frac{1}{m^{\bar{s}-it^\delta}} \frac{1}{\ln\left(\frac{m}{n}t^{1-\delta}\right)},\tag{47}$$

as

$$S\_2^{(i)} = \frac{1}{t^{\frac{\delta}{2}}} \sum\_{(n,m)\in D\_2} \frac{1}{n^{s+it^\delta}} \frac{1}{m^{\bar{s}-it^\delta}} F(n,m) - \frac{1}{t^{\frac{\delta}{2}}} \sum\_{(n,m)\in D\_2 \setminus D\_r} \frac{1}{n^{s+it^\delta}} \frac{1}{m^{\bar{s}-it^\delta}} H(n,m),\tag{48}$$

where the functions *F* and *H* are *C*<sup>2</sup> and are defined as follows

$$F(x,y) = \begin{cases} \frac{1}{\ln\left(\frac{y}{x}t^{1-\delta}\right)}, & (x,y) \in D\_r,\\ H(x,y), & (x,y) \in D\_2 \setminus D\_r, \end{cases} \tag{49}$$

with $D_r$ defined by the conditions (45). Furthermore, the function $P(x, y) : D_2 \to \mathbb{R}$, which is defined by $P(x, y) := \frac{F(x, y)}{x^{1/2} y^{1/2}}$, belongs to $C^2$ and has the following properties:

$$\begin{aligned} P(x, y) &= O\left(\frac{1}{x^{1/2}y^{1/2}}\right), & P\_x(x, y) &= O\left(\frac{1}{x^{3/2}y^{1/2}}\right), \\ P\_y(x, y) &= O\left(\frac{1}{x^{1/2}y^{3/2}}\right), & P\_{xy}(x, y) &= O\left(\frac{1}{x^{3/2}y^{3/2}}\right). \end{aligned} \tag{50}$$

From (49), the set where we have to ensure that $P(x, y) \in C^2(D_2)$ is given by the constraint $\frac{y}{x} t^{1-\delta} = 1 \pm c$, for some sufficiently small positive constant $c$. Hence, it is sufficient to determine the function $H(x, y) = d\left(\frac{y}{x} t^{1-\delta}\right)$ with the following six properties:

$$d(1 \pm c) = \frac{1}{\ln(1 \pm c)}, \qquad d'(1 \pm c) = -\frac{1}{\left[\ln(1 \pm c)\right]^2 (1 \pm c)},$$

$$d''(1 \pm c) = \frac{2 + \ln(1 \pm c)}{\left[\ln(1 \pm c)\right]^3 (1 \pm c)^2},\tag{51}$$

for some fixed and sufficiently small *c* > 0.
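For orientation, these six values are precisely the values of $d$, $d'$, $d''$ of the function $r \mapsto 1/\ln r$ evaluated at $r = 1 \pm c$ (note the sign of the first derivative); indeed,

$$\frac{d}{dr}\left(\frac{1}{\ln r}\right) = -\frac{1}{r(\ln r)^2}, \qquad \frac{d^2}{dr^2}\left(\frac{1}{\ln r}\right) = \frac{2 + \ln r}{r^2(\ln r)^3},$$

so matching them at $r = 1 \pm c$ makes $H$ glue $C^2$-smoothly to $\frac{1}{\ln\left(\frac{y}{x} t^{1-\delta}\right)}$ across the boundary of $D_r$.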

Furthermore, the conditions (50) are satisfied if the functions $d(r)$, $d'(r)$, $d''(r)$ are bounded in the interval $r \in (1 - c, 1 + c)$.

Thus, it is sufficient for *d*(*r*) to be a fifth order polynomial which satisfies the conditions (51).
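One concrete way to see this is to construct the quintic explicitly by two-point Hermite interpolation. The following is a numerical sketch (the helper names and the sample value $c = 0.1$ are ours, chosen for illustration): the six conditions give a $6 \times 6$ linear system for the six coefficients.

```python
import math

def d(r):   return 1.0 / math.log(r)
def dp(r):  return -1.0 / (r * math.log(r) ** 2)                      # d'(r)
def dpp(r): return (2.0 + math.log(r)) / (r**2 * math.log(r) ** 3)    # d''(r)

def hermite_quintic(c):
    """Coefficients a0..a5 of the degree-5 polynomial matching
    d, d', d'' at r = 1 - c and r = 1 + c (six conditions, six unknowns)."""
    rows, rhs = [], []
    for r in (1.0 - c, 1.0 + c):
        rows.append([r**k for k in range(6)]);                       rhs.append(d(r))
        rows.append([k * r**(k - 1) for k in range(6)]);             rhs.append(dp(r))
        rows.append([k * (k - 1) * r**(k - 2) for k in range(6)]);   rhs.append(dpp(r))
    # solve the 6x6 system by Gaussian elimination with partial pivoting
    n = 6
    M = [row + [b] for row, b in zip(rows, rhs)]
    for i in range(n):
        p = max(range(i, n), key=lambda j: abs(M[j][i]))
        M[i], M[p] = M[p], M[i]
        for j in range(i + 1, n):
            f = M[j][i] / M[i][i]
            M[j] = [a - f * b for a, b in zip(M[j], M[i])]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (M[i][n] - sum(M[i][k] * coef[k] for k in range(i + 1, n))) / M[i][i]
    return coef

c = 0.1
coef = hermite_quintic(c)
poly = lambda r: sum(a * r**k for k, a in enumerate(coef))
# the quintic reproduces the boundary values of 1/ln r at r = 1 +- c
for r in (1.0 - c, 1.0 + c):
    assert abs(poly(r) - d(r)) < 1e-5
```

Being a polynomial, $d$ and its first two derivatives are automatically bounded on $(1 - c, 1 + c)$, as required for (50).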

Now, the sum $S_2^{(i)}$ has the appropriate form, so it can be analysed through the following two arguments:


The resulting estimate remains invariant.

## **5. Conclusions**

The main result of this paper is the derivation of the Volterra-type linear integral Equation (8). In order to derive this equation starting from (1.6) of [9] it is necessary to:


The derivation of (i) is based on replacing, in the definition of $I_4$, the term $|\zeta(s)|^2$ by its leading asymptotics. The proof that the error term is indeed small is presented in [11].

The derivation of (ii) is given in Section 3.

The derivation of (iii) is given in Section 4 under the assumption that the function $E_4^{SD}$ appearing in $S_4^{SD}$ is given by Equations (36)–(38); the latter proof is given in [11].

The importance of the derivation of (8) is a consequence of the following considerations: taking into account that the variable $\rho$ appearing in the Γ functions in the integral of (8) satisfies $\rho \geq t^{\delta_2}$ and $t - \rho \geq t^{1/3}$, it follows that these Γ functions can be simplified as $t \to \infty$. Indeed, Equations (4.4), (5.7) and (5.8) of [9] yield

$$\begin{aligned} \frac{\Gamma(it - i\rho)}{\Gamma(1/2 + it)} \Gamma(1/2 + i\rho) &= \sqrt{\frac{2\pi}{t}} e^{-\frac{i\pi}{4}} \frac{1}{\left(1 - \frac{\rho}{t}\right)^{1/2}} e^{it\left[\left(1 - \frac{\rho}{t}\right)\ln\left(1 - \frac{\rho}{t}\right) + \frac{\rho}{t}\ln\left(\frac{\rho}{t}\right)\right]}\\ & \qquad \times \left[1 + O(t^{-\delta\_{23}})\right], \quad t \to \infty, \end{aligned}$$

with $\delta_{23} = \min\{\delta_2, \delta_3\}$.

Hence, for the specific choice of $\delta_2 = \delta_3 = \frac{1}{3}$, replacing in Equation (8) the combination of the Gamma functions by the rhs of the above equation, we find

$$\begin{split} \left| \zeta \left( \frac{1}{2} + it \right) \right|^2 &= \sqrt{\frac{2}{\pi}} \int\_{t^{1/3}}^{t - t^{1/3}} \mathfrak{R} \left\{ \frac{e^{-\frac{i\pi}{4}}}{\left(t - \rho\right)^{1/2}} e^{it \left[ \left(1 - \frac{\rho}{t}\right) \ln\left(1 - \frac{\rho}{t}\right) + \frac{\rho}{t} \ln\left(\frac{\rho}{t}\right) \right]} \right\} \left| \zeta \left(\frac{1}{2} + i\rho\right) \right|^2 d\rho \\ &\qquad \times \left[ 1 + O\left(t^{-1/3}\right) \right] \quad + O\left(t^{\frac{1}{6}} (\ln t)^2\right), \qquad t \to \infty. \end{split} \tag{52}$$

It is straightforward to show that the ansatz $|\zeta(1/2 + it)|^2 = O\left(t^{1/6}(\ln t)^2\right)$ provides a solution of (52). The rigorous proof that the above ansatz provides the unique solution of the linear Volterra integral equation will be presented in [11]. This estimate implies that $\zeta(1/2 + it) = O\left(t^{1/12} \ln t\right)$, which is a dramatic improvement of the current best estimate of the large $t$ behaviour of $\zeta(1/2 + it)$.

**Author Contributions:** Both authors K.K. and A.S.F. were involved in conceptualization, methodology, formal analysis, investigation, writing and reviewing of the present work.

**Funding:** This research was funded by EPSRC, grant number 79707.

**Acknowledgments:** Both authors are supported by the EPSRC, UK. This is part of a large program of study initiated by one of the authors (ASF); in this effort Jonatan Lenells is an indispensable collaborator.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Asymptotics of** $|\zeta(s)|^2$

Equation (1.3) of [6] for $s = \frac{1}{2} + it$ and $\eta = 2\pi t$ yields

$$\zeta(s) = \sum\_{n=1}^{\lfloor t \rfloor} \frac{1}{n^s} + O\left(t^{-\frac{1}{2}}\right).$$

Multiplying the above equation by its complex conjugate and using the classical estimate from Theorem 5.12 of [13], which states that

$$
\zeta(s) = O\left(t^{\frac{1}{6}} \ln t\right),
$$

we obtain

$$|\zeta(s)|^2 = \sum\_{m\_1=1}^{\lfloor t \rfloor} \sum\_{m\_2=1}^{\lfloor t \rfloor} \frac{1}{m\_1^s m\_2^{\bar{s}}} + O\left(t^{-\frac{1}{3}} \ln t\right).$$
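In more detail, writing $E(s) = O(t^{-1/2})$ for the error in the finite Dirichlet sum above, and using that the sum itself is $O(t^{1/6} \ln t)$, the cross terms give

$$\left|\sum\_{n=1}^{\lfloor t \rfloor} \frac{1}{n^s} + E\right|^2 = \left|\sum\_{n=1}^{\lfloor t \rfloor} \frac{1}{n^s}\right|^2 + 2\,\mathfrak{R}\left\{\bar{E}\sum\_{n=1}^{\lfloor t \rfloor} \frac{1}{n^s}\right\} + |E|^2 = \sum\_{m\_1=1}^{\lfloor t \rfloor} \sum\_{m\_2=1}^{\lfloor t \rfloor} \frac{1}{m\_1^s m\_2^{\bar{s}}} + O\left(t^{-\frac{1}{2}+\frac{1}{6}} \ln t\right) + O\left(t^{-1}\right),$$

and $-\frac{1}{2} + \frac{1}{6} = -\frac{1}{3}$, which is the error term stated above.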

#### **Appendix B. Derivation of** (24)

Using the constraint $\frac{m_2}{m_1} < t^{\delta_3 - 1} \ll 1$, we rewrite $S_3$ as follows:

$$\begin{split} S\_3 &= \sum\_{(m\_1, m\_2) \in M\_3} \frac{1}{m\_2^s (m\_1 + m\_2)^s} = \sum\_{(m\_1, m\_2) \in M\_3} \frac{1}{m\_2^s m\_1^s \left(1 + \frac{m\_2}{m\_1}\right)^s} \\ &= \sum\_{(m\_1, m\_2) \in M\_3} \frac{1}{m\_2^s m\_1^s} \frac{1}{\left(1 + \frac{m\_2}{m\_1}\right)^{1/2}} e^{-it \ln\left(1 + \frac{m\_2}{m\_1}\right)} \\ &= \sum\_{(m\_1, m\_2) \in M\_3} \frac{1}{m\_2^s m\_1^s} \left(1 + O\left(t^{\delta\_3 - 1}\right)\right) e^{-it \left[\frac{m\_2}{m\_1} + O\left(t^{2\delta\_3 - 2}\right)\right]} \\ &= \sum\_{(m\_1, m\_2) \in M\_3} \frac{1}{m\_2^s m\_1^s} e^{-it \frac{m\_2}{m\_1}} \left(1 + O\left(t^{2\delta\_3 - 1}\right)\right) = S\_4^P \left[1 + O\left(t^{2\delta\_3 - 1}\right)\right], \quad t \to \infty. \end{split}$$

#### **Appendix C. Abel's Summation**

The so-called Abel's summation formula for a single sum is given as follows: let $(a_n)_{n=0}^{\infty}$ be a sequence of real or complex numbers. Define the partial sum function

$$A(y) = \sum\_{0 \le n \le y} a\_n, \quad \text{for any real number } y.$$

Fix a real number *x*, and let *ρ* be a continuously differentiable function on [0, *x*]. Then,

$$\sum\_{0 \le n \le x} a\_n \rho(n) = A(x) \rho(x) - \int\_0^x A(u) \rho'(u) \, du. \tag{A1}$$
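Formula (A1) is easy to verify numerically: since $A(u)$ is piecewise constant on $[n, n+1)$, the integral can be evaluated exactly. A minimal sketch (the helper names and the sample choices of $a_n$, $\rho$, $x$ are ours):

```python
import math

def abel_check(a, rho, x):
    """Verify (A1) for integer x: sum of a_n * rho(n) over 0 <= n <= x
    versus A(x) rho(x) - integral_0^x A(u) rho'(u) du."""
    A = lambda y: sum(a(n) for n in range(0, math.floor(y) + 1))
    lhs = sum(a(n) * rho(n) for n in range(0, x + 1))
    # A(u) = A(n) on [n, n+1), so the integral splits into exact pieces:
    # integral_n^{n+1} A(u) rho'(u) du = A(n) * (rho(n+1) - rho(n))
    integral = sum(A(n) * (rho(n + 1) - rho(n)) for n in range(0, x))
    rhs = A(x) * rho(x) - integral
    return lhs, rhs

lhs, rhs = abel_check(a=lambda n: (-1) ** n,
                      rho=lambda u: 1.0 / (u + 1.0),
                      x=10)
assert abs(lhs - rhs) < 1e-12
```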

The simple form of Abel's summation formula for double sums is given in Lemma A1 below; it is straightforward to derive it by applying (A1) twice.

**Lemma A1.** *Let $A$, $B$, $C$, $D$ be positive integers such that $A < B$, $C < D$, let $f(x, y)$ be a $C^2$-function on $[A, B] \times [C, D]$, let $g(m, n)$ be an arithmetical function on the same domain, and*

$$G(x,y) = \sum\_{m=A}^{x} \sum\_{n=C}^{y} g(m,n).$$

*Suppose that*

$$|G(x, y)| \le G, \quad |f\_x(x, y)| \le \kappa\_1, \quad |f\_y(x, y)| \le \kappa\_2, \quad |f\_{xy}(x, y)| \le \kappa\_3,$$

*for some positive constants $G$, $\kappa_1$, $\kappa_2$, $\kappa_3$, and for any $(x, y) \in [A, B] \times [C, D]$. Then, we have*

$$\begin{split} \left| \sum\_{m=A}^{B} \sum\_{n=C}^{D} f(m,n) g(m,n) \right| \\ &\leq G \left[ f(B,D) + \kappa\_1 (B-A) + \kappa\_2 (D-C) + \kappa\_3 (B-A)(D-C) \right]. \end{split} \tag{A2}$$
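The bound (A2) can be sanity-checked on concrete data; a minimal sketch (the domain, the choices $f(x,y) = 1/(xy)$ and $g(m,n) = (-1)^{m+n}$, and the helper names are ours, chosen so that the derivative bounds are explicit):

```python
def lemma_a1_check():
    # Illustrative data on [A,B] x [C,D] = [1,5] x [1,5]:
    #   f(x, y) = 1/(x y)   (C^2, with explicit sup-bounds on derivatives),
    #   g(m, n) = (-1)^(m+n) (an oscillating arithmetical function).
    A, B, C, D = 1, 5, 1, 5
    f = lambda x, y: 1.0 / (x * y)
    g = lambda m, n: (-1) ** (m + n)
    G_fun = lambda x, y: sum(g(m, n) for m in range(A, x + 1)
                                     for n in range(C, y + 1))

    # sup bounds over the domain:
    G = max(abs(G_fun(x, y)) for x in range(A, B + 1) for y in range(C, D + 1))
    k1 = 1.0 / (A**2 * C)      # sup |f_x| = sup 1/(x^2 y)
    k2 = 1.0 / (A * C**2)      # sup |f_y| = sup 1/(x y^2)
    k3 = 1.0 / (A**2 * C**2)   # sup |f_xy| = sup 1/(x^2 y^2)

    lhs = abs(sum(f(m, n) * g(m, n) for m in range(A, B + 1)
                                    for n in range(C, D + 1)))
    rhs = G * (f(B, D) + k1 * (B - A) + k2 * (D - C) + k3 * (B - A) * (D - C))
    return lhs, rhs

lhs, rhs = lemma_a1_check()
assert lhs <= rhs
```

Note how the cancellation in $g$ keeps $G$ small, so the bound (A2) is far stronger than the trivial estimate obtained by taking absolute values termwise.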
