
DOI 10.1007/s00209-013-1216-z

Mathematische Zeitschrift

On the mod-Gaussian convergence of a sum over primes

Martin Wahl

Received: 31 July 2012 / Accepted: 17 July 2013 / Published online: 22 September 2013 © Springer-Verlag Berlin Heidelberg 2013

Abstract We prove mod-Gaussian convergence for a Dirichlet polynomial which approximates Im log ζ(1/2 + it). This Dirichlet polynomial is sufficiently long to deduce Selberg's central limit theorem with an explicit error term. Moreover, assuming the Riemann hypothesis, we apply the theory of the Riemann zeta-function to extend this mod-Gaussian convergence to the complex plane. From this we obtain that Im log ζ(1/2 + it) satisfies a large deviation principle on the critical line. Results about the moments of the Riemann zeta-function follow.

Keywords Distribution of primes · Mod-Gaussian convergence · Riemann zeta-function · Selberg's central limit theorem · Large deviations

Mathematics Subject Classification (2000) 11N05 · 11M06 · 60F05 · 60F10

1 Introduction

In this paper we study the distribution of values taken by log ζ(1/2 + it). A breakthrough was achieved by Selberg, who showed that as t varies in [T, 2T], the distribution of (Re log ζ(1/2 + it), Im log ζ(1/2 + it)) is approximately Gaussian, with independent components each having expectation 0 and variance (log log T)/2. More precisely, he proved a central limit theorem which, by the Lévy continuity theorem, is equivalent to the statement that
$$\frac{1}{T}\int_T^{2T}\exp\left(iu\,\frac{\operatorname{Re}\log\zeta(1/2+it)}{\sqrt{(\log\log T)/2}}+iv\,\frac{\operatorname{Im}\log\zeta(1/2+it)}{\sqrt{(\log\log T)/2}}\right)dt\to e^{-u^2/2-v^2/2}, \tag{1.1}$$
as T → ∞, for all real numbers u and v. For the case of Im log ζ(1/2 + it) see [17,18], and also the work of Ghosh [7]. The general case is investigated for instance in the book of Joyner [11].

M. Wahl (✉) Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland. e-mail: mwahl1983@yahoo.de

Some of Selberg's more recent results, for example about the rate of convergence, can be found in [19] and the thesis of Tsang [21]. Initially, Selberg obtained the asymptotics of the joint moments which lead to (1.1) by the method of moments. A more effective approach, applied in our analysis too, is treated in the work of Bombieri and Hejhal [2]. A central limit theorem for the sum over primes $(1/\sqrt{(\log\log x)/2})\sum_{p\le x}p^{-1/2-iU_T}$, $U_T$ being random variables uniformly distributed on $[T,2T]$ and $\log x=\log T/(\log\log T)^{1/4}$, follows from the mean value theorem of Montgomery and Vaughan and the method of moments. To complete the proof (see [2, Lemma 3 and Corollary]), they showed that the $L^1$-norm of $\log\zeta(1/2+iU_T)-\sum_{p\le x}p^{-1/2-iU_T}$ is sufficiently small.
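(The following numerical sketch is not part of the original paper; it only illustrates the approximate Gaussianity of the normalized prime sum discussed above. The values of $T$, $x$ and the sample size are arbitrary assumptions made for the demonstration.)

```python
import numpy as np
from sympy import primerange

# Illustrative parameters (assumptions, not from the paper).
T = 1e7
x = 200.0                                   # length of the Dirichlet polynomial
primes = np.array(list(primerange(2, x + 1)), dtype=float)

rng = np.random.default_rng(0)
t = rng.uniform(T, 2 * T, size=20000)       # t sampled uniformly on [T, 2T]

# S(t) = sum_{p <= x} sin(t log p) / sqrt(p), then normalize to roughly unit variance
S = (np.sin(np.outer(t, np.log(primes))) / np.sqrt(primes)).sum(axis=1)
sigma2 = 0.5 * np.sum(1.0 / primes)          # approximately (log log x)/2
Z = S / np.sqrt(sigma2)

# Compare low moments with those of a standard Gaussian (0, 1, 0, 3).
print([round(float(np.mean(Z ** k)), 2) for k in range(1, 5)])
```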

The convergence in (1.1) is also a consequence of a conjecture on the behaviour of the moments of the Riemann zeta-function on the critical line (see, e.g., the work of Keating and Snaith [12] and the references therein). It asserts that
$$e^{\frac{(z_1^2+z_2^2)\log\log T}{4}}\,\frac{1}{T}\int_T^{2T}e^{iz_1\operatorname{Re}\log\zeta(1/2+it)+iz_2\operatorname{Im}\log\zeta(1/2+it)}\,dt\to\Phi_g(z_1,z_2)\,\Phi_a(z_1,z_2)\quad\text{as }T\to\infty \tag{1.2}$$
locally uniformly for $z_1,z_2\in\mathbb{C}$ with $\operatorname{Re}(iz_1)>-1$, for analytic functions $\Phi_g,\Phi_a$ (see [13, Conjecture 9] and also [9, Conjecture 1]). This type of convergence was introduced in [10], where it is called mod-Gaussian convergence.

A precise form of the function $\Phi_g$ was conjectured by Keating and Snaith and is based on calculations in the theory of random matrices (see [12], [13, formula (18)]). The arithmetic factor $\Phi_a$ can be explained, e.g., by computing the characteristic function of $\sum_{n\le x}\Lambda(n)/(n^{1/2+iU_T}\log n)$ (see [9, Theorem 2], where $x$ has to be $O((\log T)^{2-\varepsilon})$) or of the corresponding stochastic model (replace $p^{iU_T}$, $p\in\mathbb{P}$, by an independent sequence of random variables uniformly distributed on the unit circle, see [13, Example 4]).

In this paper we further investigate the distribution of the sum over primes $\sum_{p\le x}p^{-1/2-it}$ as $t$ varies in $[T,2T]$, and its consequences for the distribution of values of the Riemann zeta-function on the critical line. Here, we will restrict ourselves to the case of Im log ζ(1/2 + it); note that some of the arguments cannot be applied to the case of Re log ζ(1/2 + it). Our first aim is to establish mod-Gaussian convergence if x fulfills certain conditions. Precisely, in Sect. 4 we prove the following:

Theorem 1 Let $x=e^{\log T/N}$ and $N$ such that $x\to\infty$ and $N/\log\log T\to\infty$ as $T\to\infty$. Then
$$e^{u^2(\log\log x+\gamma)/4}\,\frac{1}{T}\int_T^{2T}e^{iu\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}}\,dt\to\Phi(u)\quad\text{as }T\to\infty \tag{1.3}$$
locally uniformly for $u\in\mathbb{R}$. Here, $\gamma$ denotes Euler's constant and $\Phi$ is the analytic function given by
$$\Phi(u)=\prod_{p\in\mathbb{P}}\left(1-\frac{1}{p}\right)^{-u^2/4}J_0\!\left(\frac{u}{\sqrt{p}}\right), \tag{1.4}$$
where $J_0$ denotes the zeroth Bessel function (see, e.g., Sect. 3).
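(As an illustration, not from the paper: the limit function $\Phi$ in (1.4) and the renormalization in (1.3) can be explored numerically. The cutoffs `pmax` and `x` below are arbitrary assumptions, and `scipy.special.j0` plays the role of $J_0$.)

```python
import numpy as np
from scipy.special import j0
from sympy import primerange

def phi(u, pmax=10**5):
    """Truncated version of the limit function Phi(u) from (1.4)."""
    p = np.array(list(primerange(2, pmax)), dtype=float)
    return np.prod((1.0 - 1.0 / p) ** (-u**2 / 4) * j0(u / np.sqrt(p)))

# Mod-Gaussian renormalization of the stochastic model at a finite cutoff x:
# exp(u^2 (log log x + gamma)/4) * prod_{p <= x} J0(u / sqrt(p)) should roughly
# agree with Phi(u) once x is large (cf. Proposition 2 and Mertens' formula).
x, u = 10**4, 1.5
p = np.array(list(primerange(2, x)), dtype=float)
lhs = np.exp(u**2 * (np.log(np.log(x)) + np.euler_gamma) / 4) * np.prod(j0(u / np.sqrt(p)))
print(lhs, phi(u))
```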

One interesting point of the result seems to be the size of x. It can be chosen large enough to obtain Selberg’s central limit theorem with Selberg’s explicit error term (see [19, Theorem 2] and “Appendix 1”). Moreover, we obtain the following improvement of (1.1):


Corollary 1 Assume RH. For T sufficiently large, we have
$$\frac{1}{T}\int_T^{2T}e^{iv\,\frac{\operatorname{Im}\log\zeta(1/2+it)}{\sqrt{(\log\log T)/2}}}\,dt=e^{-v^2/2}+v^2\,O\!\left(\frac{\log\log\log T}{\log\log T}\right)+O(1/\log T)$$
uniformly for $|v|\le\sqrt{\log\log T}/\log\log\log T$.

In Sect. 5 we deal with the question of whether the convergence in Theorem 1 can be extended to the complex plane. Assuming the Riemann hypothesis, we prove such a result for a weighted sum over primes.

Theorem 2 Assume RH. Let $x=e^{\log T/N}$ and $N$ such that $x\to\infty$ and $N/\log\log T\to\infty$ as $T\to\infty$. Furthermore, let $f$ be the function $f(u)=(\pi u/2)\cot(\pi u/2)$ and let $\gamma_f=-0.1080\ldots$ be the constant defined by $\prod_{p\le x}\big(1-f^2(\log p/\log x)/p\big)=(e^{-\gamma_f}/\log x)(1+o(1))$. Then
$$e^{z^2(\log\log x+\gamma_f)/4}\,\frac{1}{T}\int_T^{2T}e^{iz\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}\,f\left(\frac{\log p}{\log x}\right)}\,dt\to\Phi(z)\quad\text{as }T\to\infty$$
locally uniformly for $z\in\mathbb{C}$, where $\Phi$ is given by (1.4).

More general sums are possible as well (see [8, Lemma 1] and [2, Lemma 1]). For the evaluation of $\gamma_f$ see [8, proof of Lemma 6].

The crucial step from Theorem 1 to Theorem 2 is an estimate of the exponential moments of the above sum. For this purpose let $x\le T^2$ and $h\in\mathbb{R}$. Assuming the Riemann hypothesis, we then show that there exist constants $C$, $C'$, and $C''$ such that
$$\frac{1}{T}\int_T^{2T}e^{h\sum_{n\le x}\frac{\Lambda(n)}{\log n}\frac{\sin(t\log n)}{\sqrt{n}}\,f\left(\frac{\log n}{\log x}\right)}\,dt\le C\,e^{C'|h|\frac{\log T}{\log x}+C''h^2\log\log T}.$$
Note that this inequality, which is almost a subgaussian bound, is valid beyond the range covered by Theorems 1 and 2.

We turn to the applications of Theorem 2. As described above, Theorem 1 can be used to obtain results in connection with the central limit theorem. In addition, Theorem 2 yields large deviations results. Applying the Gärtner–Ellis theorem and Theorem 2, one obtains a large deviation principle (see [4, Chapter 1.2] or "Appendix 3" for the definition of the large deviation principle), from which we will deduce the following two corollaries.

Corollary 2 Assume RH. Let $U_T$ be random variables uniformly distributed on $[T,2T]$. Then the family $\big(1/((\log\log T)/2)\big)\operatorname{Im}\log\zeta(1/2+iU_T)$ satisfies the large deviation principle with the speed $1/((\log\log T)/2)$ and the rate function $I(h)=h^2/2$. For instance,
$$\frac{1}{(\log\log T)/2}\log\left(\frac{1}{T}\,\lambda\big(\{t\in[T,2T]:\operatorname{Im}\log\zeta(1/2+it)\ge h\,(\log\log T)/2\}\big)\right)\to-h^2/2\quad\text{as }T\to\infty \tag{1.5}$$
for every $h\ge 0$, where $\lambda$ denotes the Lebesgue measure.


Corollary 3 Assume RH. Let $h\in\mathbb{R}$. Then
$$\frac{1}{(\log\log T)/2}\log\left(\frac{1}{T}\int_T^{2T}e^{h\operatorname{Im}\log\zeta(1/2+it)}\,dt\right)\to h^2/2\quad\text{as }T\to\infty.$$

Related papers which also discuss large deviations results are the work of Radziwiłł [15], who extended the range of Selberg's central limit theorem for Re log ζ(1/2 + it), and the work of Soundararajan [20], who proved large deviation bounds for Re log ζ(1/2 + it). In fact, Soundararajan [20, Corollary A] completed the proof of Corollary 3 in the case of Re log ζ(1/2 + it) by proving the upper bound. The result can be stated as follows: for all $\varepsilon>0$ and all $h>0$ we have
$$(\log T)^{h^2-\varepsilon}\ll_{h,\varepsilon}\frac{1}{T}\int_T^{2T}|\zeta(1/2+it)|^{2h}\,dt\ll_{h,\varepsilon}(\log T)^{h^2+\varepsilon}.$$
Note that the proof of the upper bound also applies to the case of Im log ζ(1/2 + it) and that we apply a slightly weaker upper bound in the proofs of Theorem 2 and Corollary 3.

Notation For $y\ge 2$ and a function $g:[0,1]\to[0,1]$, we define
$$\Sigma_{g,y}(t)=\sum_{p\le y}\frac{1}{p^{1/2+it}}\,g\!\left(\frac{\log p}{\log y}\right),\qquad \tilde{\Sigma}_{g,y}(t)=\sum_{n\le y}\frac{\Lambda(n)}{\log n}\,\frac{1}{n^{1/2+it}}\,g\!\left(\frac{\log n}{\log y}\right),$$
$$r_{g,y}(t)=\log\zeta(1/2+it)-\Sigma_{g,y}(t),\quad\text{and}\quad \tilde{r}_{g,y}(t)=\log\zeta(1/2+it)-\tilde{\Sigma}_{g,y}(t).$$

2 Moments of a sum over primes

Section 2 is devoted to some standard mean value calculations. In doing so, we will apply the following generalization of the mean value theorem of Montgomery and Vaughan contained in [16, Theorem 1.4.3] (see also [21, Lemma 3.1]). Let $a_1,\ldots,a_M$ and $b_1,\ldots,b_M$ be complex numbers, $M\ge 2$, and let $T>0$. Then
$$\frac{1}{T}\int_T^{2T}\Big(\sum_{m\le M}a_m m^{-it}\Big)\,\overline{\Big(\sum_{m\le M}b_m m^{-it}\Big)}\,dt=\sum_{m\le M}a_m\overline{b_m}+\theta\,\frac{2D}{T}\sqrt{\sum_{m\le M}m|a_m|^2}\,\sqrt{\sum_{m\le M}m|b_m|^2}, \tag{2.1}$$
where $\theta$ depends on the various parameters but satisfies $|\theta|\le 1$ and $D$ is the universal constant in [16, Theorem 1.4.3].
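(A hedged numerical illustration, not part of the paper: for $t$ uniform on $[T,2T]$ with $T$ large, the mean square of a short Dirichlet polynomial is dominated by the diagonal term $\sum_m|a_m|^2$, in line with (2.1). The parameters below are arbitrary.)

```python
import numpy as np

# Monte Carlo illustration of the diagonal behaviour in (2.1): for t uniform
# on [T, 2T] with T large, (1/T) int_T^{2T} |sum a_m m^{-it}|^2 dt is close to
# sum |a_m|^2.  Parameters below are arbitrary choices for the demo.
rng = np.random.default_rng(1)
M, T, n_samples = 30, 1e8, 100000
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)
m = np.arange(1, M + 1)

t = rng.uniform(T, 2 * T, n_samples)
poly = np.exp(-1j * np.outer(t, np.log(m))) @ a   # values of the Dirichlet polynomial

print(np.mean(np.abs(poly) ** 2), np.sum(np.abs(a) ** 2))   # should nearly agree
```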

Proposition 1 Let $x\ge 2$ and $T>0$ be real numbers, let $k$ be a nonnegative integer, and let $p_1,\ldots,p_n$ be the prime numbers not exceeding $x$. Then
$$\frac{1}{T}\int_T^{2T}\Big(\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}\Big)^{2k}dt=\frac{1}{2^{2k}}\binom{2k}{k}\sum_{\lambda_1+\cdots+\lambda_n=k}\left(\frac{k!}{\lambda_1!\cdots\lambda_n!}\right)^2 p_1^{-\lambda_1}\cdots p_n^{-\lambda_n}+\theta\,\frac{2D}{T}\,n^{2k}(2k)! \tag{2.2}$$
and $\big|(1/T)\int_T^{2T}\big(\sum_{p\le x}\sin(t\log p)/\sqrt{p}\big)^{2k+1}dt\big|\le(2D/T)\,n^{2k+1}(2k+1)!$, with $|\theta|\le 1$ and $D$ the constant in (2.1). Furthermore, the main term in (2.2) is bounded by $\big((2k)!/(2^{2k}k!)\big)\big(\sum_{p\le x}1/p\big)^{k}$.

Proof From $\sin(t\log p)=(p^{it}-p^{-it})/(2i)$, we obtain
$$\frac{1}{T}\int_T^{2T}\Big(\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}\Big)^{k}dt=\frac{1}{(2i)^{k}}\sum_{j=0}^{k}\binom{k}{j}\frac{(-1)^{j}}{T}\int_T^{2T}\Big(\sum_{p\le x}\frac{1}{p^{1/2+it}}\Big)^{j}\,\overline{\Big(\sum_{p\le x}\frac{1}{p^{1/2+it}}\Big)}^{\,k-j}dt. \tag{2.3}$$
For $j=1,\ldots,k$ the multinomial theorem yields
$$\Big(\sum_{p\le x}\frac{1}{p^{1/2+it}}\Big)^{j}=\sum_{\lambda_1+\cdots+\lambda_n=j}\frac{j!}{\lambda_1!\cdots\lambda_n!}\,\big(p_1^{-\lambda_1}\cdots p_n^{-\lambda_n}\big)^{1/2+it}. \tag{2.4}$$
If we plug (2.4) into (2.3) with $k$ replaced by $2k$, we obtain from (2.1) that
$$\frac{1}{T}\int_T^{2T}\Big(\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}\Big)^{2k}dt=\frac{1}{2^{2k}}\binom{2k}{k}\sum_{\lambda_1+\cdots+\lambda_n=k}\left(\frac{k!}{\lambda_1!\cdots\lambda_n!}\right)^2 p_1^{-\lambda_1}\cdots p_n^{-\lambda_n}+\theta\,\frac{2D}{2^{2k}T}\sum_{j=0}^{2k}\binom{2k}{j}\sqrt{\sum_{\lambda_1+\cdots+\lambda_n=j}\left(\frac{j!}{\lambda_1!\cdots\lambda_n!}\right)^2}\,\sqrt{\sum_{\lambda_1+\cdots+\lambda_n=2k-j}\left(\frac{(2k-j)!}{\lambda_1!\cdots\lambda_n!}\right)^2} \tag{2.5}$$
with $|\theta|\le 1$. Applying $j!/(\lambda_1!\cdots\lambda_n!)\le j!$, $j=0,\ldots,2k$, we bound the absolute value of the remainder by
$$\frac{2D}{2^{2k}T}\sum_{j=0}^{2k}\binom{2k}{j}\sqrt{n^{j}j!}\,\sqrt{n^{2k-j}(2k-j)!}\le\frac{2D}{T}\,n^{2k}(2k)!. \tag{2.6}$$
The main term in (2.2) can be bounded similarly. As in (2.5) and (2.6), we also bound the $(2k+1)$th moment; note that there is no main term in this case. This completes the proof.

We want to compare these mean value estimates with the expectations of some random variables. To this end, let $X_1,X_2,\ldots$ be an i.i.d. sequence of random variables uniformly distributed on the unit circle and let $p_1,\ldots,p_n$ be the primes not exceeding $x$. Then
$$\mathbb{E}\Big[\Big(\sum_{i=1}^{n}\frac{\operatorname{Im}X_i}{\sqrt{p_i}}\Big)^{2k}\Big]=\frac{1}{2^{2k}}\binom{2k}{k}\sum_{\lambda_1+\cdots+\lambda_n=k}\left(\frac{k!}{\lambda_1!\cdots\lambda_n!}\right)^2 p_1^{-\lambda_1}\cdots p_n^{-\lambda_n} \tag{2.7}$$
and $\mathbb{E}\big[\big(\sum_{i=1}^{n}\operatorname{Im}X_i/\sqrt{p_i}\big)^{2k+1}\big]=0$. To prove this, we replace $\sin(t\log p_i)$ by $\operatorname{Im}X_i$ and integration by expectation in (2.3) and (2.4), and then apply the formula $\mathbb{E}\big[X_1^{\lambda_1}\cdots X_n^{\lambda_n}X_1^{-\mu_1}\cdots X_n^{-\mu_n}\big]$, which equals $1$ if $\lambda_i=\mu_i$ for all $i$ and $0$ otherwise.
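(A small consistency check of the stochastic model, added for illustration only: using $\mathbb{E}[(\operatorname{Im}X)^2]=1/2$ and $\mathbb{E}[(\operatorname{Im}X)^4]=3/8$ together with independence, the second and fourth moments of $\sum_j\operatorname{Im}X_j/\sqrt{p_j}$ can be computed exactly and compared with a Monte Carlo simulation; the prime cutoff and sample size are arbitrary choices.)

```python
import numpy as np
from sympy import primerange

# Monte Carlo check (not from the paper) of the stochastic model behind (2.7).
rng = np.random.default_rng(2)
p = np.array(list(primerange(2, 100)), dtype=float)
a = 1.0 / np.sqrt(p)

theta = rng.uniform(0.0, 2 * np.pi, size=(200000, len(p)))
S = (np.sin(theta) * a).sum(axis=1)          # sum_j Im X_j / sqrt(p_j)

m2_exact = 0.5 * np.sum(a**2)
m4_exact = (3 / 8) * np.sum(a**4) + (3 / 4) * (np.sum(a**2) ** 2 - np.sum(a**4))
print(np.mean(S**2), m2_exact)               # second moments should agree
print(np.mean(S**4), m4_exact)               # fourth moments should agree
```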


3 Bessel functions

The Bessel functions appear in the Fourier expansion of the function $e^{iz\sin\theta}$:
$$e^{iz\sin\theta}=\sum_{k=-\infty}^{\infty}J_k(z)\,e^{ik\theta}. \tag{3.1}$$
Explicitly, the $k$th Bessel function $J_k(z)$ is given by
$$J_k(z)=\sum_{n=0}^{\infty}\frac{(-1)^n(z/2)^{k+2n}}{n!\,(k+n)!} \tag{3.2}$$
for $k\ge 0$, and by the relation $J_k(z)=(-1)^kJ_{-k}(z)$ for $k<0$ (for these and more facts about Bessel functions see, e.g., the book of Andrews, Askey and Roy [1]). This section is devoted to the following mod-Gaussian convergence result (compare to [10, Proposition 4.1]).

Proposition 2 Let $X_1,X_2,\ldots$ be an i.i.d. sequence of random variables uniformly distributed on the unit circle and let $p_1,p_2,\ldots$ be the increasing sequence of all primes. Then
$$e^{z^2(\log\log x+\gamma)/4}\,\mathbb{E}\Big[e^{iz\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}}\Big]\to\Phi(z)\quad\text{as }x\to\infty \tag{3.3}$$
locally uniformly for $z\in\mathbb{C}$. Here, $\gamma$ denotes Euler's constant, $\pi(x)$ denotes the number of primes not exceeding $x$, and $\Phi(z)$ is given by (1.4).
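(Before turning to the proof, here is a quick numerical check, not from the paper, of the identity $\mathbb{E}[e^{iz\operatorname{Im}X_1}]=J_0(z)$ used in (3.4) below; `scipy` routines are used for the quadrature and for $J_0$.)

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Numerical check of E[exp(i z Im X)] = J0(z) for X uniform on the unit circle,
# i.e. (1/2pi) * int_0^{2pi} exp(i z sin(theta)) d(theta) = J0(z).
def char_fn(z):
    re, _ = quad(lambda th: np.cos(z * np.sin(th)), 0.0, 2 * np.pi)
    im, _ = quad(lambda th: np.sin(z * np.sin(th)), 0.0, 2 * np.pi)
    return (re + 1j * im) / (2 * np.pi)

for z in (0.5, 1.0, 2.3):
    print(char_fn(z), j0(z))   # imaginary part vanishes; real part matches J0(z)
```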

Proof By (3.1), we have
$$\mathbb{E}\big[e^{iz\operatorname{Im}X_1}\big]=\frac{1}{2\pi}\int_0^{2\pi}e^{iz\sin\theta}\,d\theta=J_0(z). \tag{3.4}$$
Applying the independence of the $X_j$'s, (3.4), and finally Mertens' formula $\prod_{p\le x}(1-1/p)=(e^{-\gamma}/\log x)(1+o(1))$, we obtain that the left hand side of (3.3) is equal to
$$e^{z^2(\log\log x+\gamma)/4}\prod_{p\le x}J_0\!\left(\frac{z}{\sqrt{p}}\right)=(1+o(1))^{z^2/4}\prod_{p\le x}\left(1-\frac{1}{p}\right)^{-z^2/4}J_0\!\left(\frac{z}{\sqrt{p}}\right).$$
It remains to show that the above product converges to $\Phi(z)$, locally uniformly for $z\in\mathbb{C}$. This follows from the fact that the product $\Phi(z)$ is normally convergent (see [6, Chapter IV.1, especially Remark IV.1.7]). This completes the proof.

Consider the random variables $\operatorname{Im}\Sigma_{1,x}(-U_T)$, $U_T$ being random variables uniformly distributed on $[T,2T]$. Note that we computed their moments in Proposition 1. As mentioned in the Introduction, one can use the method of moments to deduce that, as $x\to\infty$ with $x=T^{o(1)}$, $(1/\sqrt{(\log\log x)/2})\operatorname{Im}\Sigma_{1,x}(-U_T)$ converges in distribution to a Gaussian random variable with expectation 0 and variance 1 (see [2, Proof of Theorem B]). We will generalize this result by considering the cumulants of $\operatorname{Im}\Sigma_{1,x}(-U_T)$.

If $Y$ is a real random variable such that $\mathbb{E}[e^{zY}]$ exists and is finite for all $z\in\mathbb{C}$, then $\mathbb{E}[e^{zY}]$ is an analytic function and there exists a neighbourhood of 0 where $\log\mathbb{E}[e^{zY}]=\sum_{m=1}^{\infty}\kappa_m(Y)z^m/m!$. The coefficients $\kappa_m(Y)$, $m\ge 1$, are called the cumulants of $Y$. Thus, we obtain:


Corollary 4 Let $x=e^{\log T/N}$ and $N$ such that $x\to\infty$ and $N\to\infty$ as $T\to\infty$, and let $U_T$ be random variables uniformly distributed on $[T,2T]$. Then, as $T\to\infty$,
$$\kappa_2\big(\operatorname{Im}\Sigma_{1,x}(-U_T)\big)-(\log\log x+\gamma)/2\to c_2\qquad\text{and}\qquad\kappa_m\big(\operatorname{Im}\Sigma_{1,x}(-U_T)\big)\to c_m\ \ \text{for }m\neq 2,$$
where the $c_m$'s are defined by the series expansion $\log\Phi(-iz)=\sum_{m=1}^{\infty}c_m z^m/m!$ for $z$ in a neighbourhood of 0.

Proof By the construction of $\Phi$, there exists a real number $0<r\le 1$ such that for $|z|\le r$,
$$\log\Phi(z)=\sum_{p\in\mathbb{P}}\Big(-\frac{z^2}{4}\log\Big(1-\frac{1}{p}\Big)+\log J_0\Big(\frac{z}{\sqrt{p}}\Big)\Big).$$
Hence, by Mertens' formula,
$$-\frac{z^2}{4}(\log\log x+\gamma)+\sum_{p\le x}\log J_0\Big(\frac{-iz}{\sqrt{p}}\Big)\to\log\Phi(-iz)\quad\text{as }x\to\infty \tag{3.5}$$
uniformly for $|z|\le r$, $z\in\mathbb{C}$. The uniform convergence implies (see [6, Theorem III.1.3]) that the $m$th derivative of the left hand side of (3.5) evaluated at 0 converges to $c_m$. Hence, under the assumptions of Proposition 2, the cumulants of $\sum_{j=1}^{\pi(x)}\operatorname{Im}X_j/\sqrt{p_j}$ satisfy the convergence described in Corollary 4, since $\mathbb{E}\big[\exp\big(z\sum_{j=1}^{\pi(x)}\operatorname{Im}X_j/\sqrt{p_j}\big)\big]=\prod_{p\le x}J_0(-iz/\sqrt{p})$.

It remains to show that for $m\ge 1$
$$\kappa_m\big(\operatorname{Im}\Sigma_{1,x}(-U_T)\big)-\kappa_m\Big(\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\Big)\to 0\quad\text{as }T\to\infty. \tag{3.6}$$
To prove this, we use the fact that the cumulants can be expressed in terms of the moments, namely $\kappa_m(Y)=\sum a_{\lambda_1,\ldots,\lambda_m}\mathbb{E}[Y^1]^{\lambda_1}\cdots\mathbb{E}[Y^m]^{\lambda_m}$, where the sum is over all nonnegative integers such that $1\lambda_1+2\lambda_2+\cdots+m\lambda_m=m$, the $a_{\lambda_1,\ldots,\lambda_m}$ are integers, and $Y$ is a random variable as above. If we plug Proposition 1 and (2.7) into this formula, (3.6) follows from multiplying out, since for $k\le m$ and $x\ge 3$ the main terms in (2.2) are $O\big((\sum_{p\le x}1/p)^m\big)=O((\log\log x)^m)$ (see [3, (5) of chapter 7]), while for $k\le m$ the remainders in (2.2) are $O(T^{(m/N)-1})$, which is $O(T^{-a})$ for some $0<a<1$ if $T$ is sufficiently large.
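(The proof above uses that each cumulant is a fixed polynomial in the moments. As a hedged illustration of this moment–cumulant relation, in a simpler form than the formula quoted in the proof, the following sketch uses the standard recursion $\kappa_m=\mathbb{E}[Y^m]-\sum_{j=1}^{m-1}\binom{m-1}{j-1}\kappa_j\,\mathbb{E}[Y^{m-j}]$; the Gaussian example at the end is only a sanity check.)

```python
import math

def cumulants_from_moments(moments):
    """Convert raw moments E[Y], E[Y^2], ... into cumulants k_1, k_2, ...

    Uses the standard recursion
        k_m = E[Y^m] - sum_{j=1}^{m-1} C(m-1, j-1) * k_j * E[Y^{m-j}],
    which expresses each cumulant as a polynomial in the moments.
    """
    kappa = []
    for m in range(1, len(moments) + 1):
        value = moments[m - 1]
        for j in range(1, m):
            value -= math.comb(m - 1, j - 1) * kappa[j - 1] * moments[m - j - 1]
        kappa.append(value)
    return kappa

# Example: a centred Gaussian with variance s2 has moments 0, s2, 0, 3*s2^2, ...
# and all cumulants beyond the second should vanish.
s2 = 0.7
moments = [0.0, s2, 0.0, 3 * s2**2, 0.0, 15 * s2**3]
print(cumulants_from_moments(moments))   # approximately [0, 0.7, 0, 0, 0, 0]
```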

4 Mod-convergence of a sum over primes

By means of Proposition 1 and (2.7), we can apply the method of moments for fixed $x$ and obtain the following convergence:
$$\frac{1}{T}\int_T^{2T}e^{iu\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}}\,dt\to\prod_{p\le x}J_0\!\left(\frac{u}{\sqrt{p}}\right)\quad\text{as }T\to\infty. \tag{4.1}$$
Another proof of (4.1) is contained in [14, Theorem 5.1]. The techniques used therein can be applied to get Theorem 1 and Theorem 2 for the choice $x=(\log T)^{2-\varepsilon}$, $\varepsilon>0$ arbitrary.

The improvement of Theorem 1 follows from Proposition 3 combined with Proposition 2.

Proposition 3 Let $c>1$ be a constant. Define $x=e^{\log T/N}$ with $N=(ce^{c^2/4})\log\log T$, where $c>1$ is allowed to depend on $T$ but such that $x\to\infty$ as $T\to\infty$. For $T\ge 3$ sufficiently large such that $x\ge 2$ and $N\ge 1$, we have
$$\frac{1}{T}\int_T^{2T}e^{iu\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}}\,dt=\prod_{p\le x}J_0\!\left(\frac{u}{\sqrt{p}}\right)+O\big((1/c)^{N-1}+(2c^2/\log x)^{N}\big) \tag{4.2}$$
uniformly for $|u|\le c$, $u\in\mathbb{R}$.


Proof of Theorem 1 We apply Proposition 3 with $x$, $N$ as in Theorem 1 and $c$ an arbitrary constant with $c>1$. Since $c\to\infty$ in that case, the remainder in (4.2) is $o(\exp(-c^2(\log\log T)/4))$. If we multiply both sides of (4.2) by $\exp(u^2(\log\log x+\gamma)/4)$ and then apply Proposition 2, we obtain (1.3) uniformly for $|u|\le c$. Since $c>1$ is arbitrary, this completes the proof.

Proof of Proposition 3 Let $N'=N$. From the Taylor expansion $e^{iu}=\sum_{k\le 2N'-1}(iu)^k/k!+\theta\,u^{2N'}/(2N')!$, $u\in\mathbb{R}$, with $|\theta|\le 1$, we obtain
$$\frac{1}{T}\int_T^{2T}e^{iu\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}}\,dt=\sum_{k\le 2N'-1}\frac{(iu)^k}{k!}\,\frac{1}{T}\int_T^{2T}\Big(\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}\Big)^{k}dt+\theta\,\frac{u^{2N'}}{(2N')!}\,\frac{1}{T}\int_T^{2T}\Big(\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}\Big)^{2N'}dt \tag{4.3}$$
with $|\theta|\le 1$. By Proposition 1, the remainder is
$$O\left(\frac{c^{2N'}}{N'!\,2^{2N'}}\Big(\sum_{p\le x}\frac{1}{p}\Big)^{N'}+\frac{(c^2\pi(x))^{N'}}{T}\right).$$
Using the bound $N'!\ge(N'/e)^{N'}$, elementary results in the theory of primes, namely the formulas $\sum_{p\le x}1/p=\log\log x+c_1+O(1/\log x)$ and $\pi(x)\le 2x/\log x$, and finally $N'=N$, this is
$$O\left(\Big(\frac{ec^2\log\log T}{4N}\Big)^{N}+\frac{(c^2\pi(x))^{N}}{T}\right)=O\left(\Big(\frac{1}{c}\Big)^{N-1}+\Big(\frac{2c^2}{\log x}\Big)^{N}\right).$$
Now, let $X_1,X_2,\ldots$ be an i.i.d. sequence of random variables uniformly distributed on the unit circle. By Proposition 1 and (2.7), the moments in (4.3) are equal to those of the stochastic model plus a remainder which is bounded by $(2D/T)(\pi(x))^{k}k!$. The resulting remainders in (4.3), $k\le 2N'-1$, add up to $O\big((c^2\pi(x))^{N}/T\big)=O\big((2c^2/\log x)^{N}\big)$. Hence, (4.3) is equal to
$$\sum_{k\le 2N'-1}\frac{(iu)^k}{k!}\,\mathbb{E}\Big[\Big(\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\Big)^{k}\Big]+O\big((1/c)^{N-1}+(2c^2/\log x)^{N}\big).$$
Applying the above Taylor expansion again, we obtain
$$\prod_{p\le x}J_0\!\left(\frac{u}{\sqrt{p}}\right)=\mathbb{E}\Big[e^{iu\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}}\Big]=\sum_{k\le 2N'-1}\frac{(iu)^k}{k!}\,\mathbb{E}\Big[\Big(\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\Big)^{k}\Big]+\theta\,\frac{u^{2N'}}{(2N')!}\,\mathbb{E}\Big[\Big(\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\Big)^{2N'}\Big]$$
with $|\theta|\le 1$. The remainder already appeared in (4.3) and is $O((1/c)^{N-1})$. This completes the proof.


5 Mod-convergence in the complex plane

Section 5 is devoted to the proof of Theorem 2. Here, we will apply an explicit formula obtained by Goldston [8, Lemma 1] assuming RH. For $4\le x\le t^2$ and $t\neq\gamma$, we have
$$\operatorname{Im}\log\zeta(1/2+it)=-\sum_{n\le x}\frac{\Lambda(n)}{\log n}\,\frac{\sin(t\log n)}{\sqrt{n}}\,f\!\left(\frac{\log n}{\log x}\right)+\sum_{\gamma}\sin((t-\gamma)\log x)\int_0^{\infty}\frac{u}{u^2+((t-\gamma)\log x)^2}\,\frac{du}{\sinh u}+O\!\left(\frac{1}{t(\log x)^2}\right), \tag{5.1}$$
where $f(u)=(\pi u/2)\cot(\pi u/2)$ and the sum runs over the ordinates $\gamma$ of the nontrivial zeros of $\zeta$. We will also apply the following estimate obtained by Soundararajan assuming RH. For every $h\in\mathbb{R}$ there exist constants $C,C'>0$ such that
$$\frac{1}{T}\int_T^{2T}e^{h\operatorname{Im}\log\zeta(1/2+it)}\,dt\le C\,e^{C'h^2\log\log T}. \tag{5.2}$$
Soundararajan [20] proved (5.2) for $\operatorname{Re}\log\zeta(1/2+it)$. However, by using [17, Theorem 1] instead of [20, Proposition], his arguments apply to $\operatorname{Im}\log\zeta(1/2+it)$, too. We prove (compare to [2, Lemma 3 and Corollary]):

Proposition 4 Assume RH. Let $4\le x\le T^2$ and $f(u)=(\pi u/2)\cot(\pi u/2)$. For every $h\in\mathbb{R}$ there exist constants $C$, $C'$, and $C''$ such that
$$\frac{1}{T}\int_T^{2T}e^{h\sum_{n\le x}\frac{\Lambda(n)}{\log n}\frac{\sin(t\log n)}{\sqrt{n}}\,f\left(\frac{\log n}{\log x}\right)}\,dt\le C\,e^{C'|h|\frac{\log T}{\log x}+C''h^2\log\log T}.$$

Proof of Theorem 2 The proof mainly differs from the proofs of Theorem 1 and Proposition 3 in its estimation of the remainder term. Nevertheless, we will repeat the main steps. We assume that $|z|\le c$, where $c>1$ is an arbitrary constant and $z\in\mathbb{C}$, say $z=u-ih$ with $u,h\in\mathbb{R}$. Let $N'=N/2$. From the Taylor expansion $e^{iz}=e^{h+iu}=\sum_{k\le 2N'-1}(iz)^k/k!+\theta\,e^{h}|z|^{2N'}/(2N')!$ with $|\theta|\le 1$, we obtain
$$\frac{1}{T}\int_T^{2T}e^{iz\operatorname{Im}\Sigma_{f,x}(-t)}\,dt=\sum_{k\le 2N'-1}\frac{(iz)^k}{k!}\,\frac{1}{T}\int_T^{2T}\big(\operatorname{Im}\Sigma_{f,x}(-t)\big)^{k}\,dt+\theta\,\frac{c^{2N'}}{(2N')!}\,\frac{1}{T}\int_T^{2T}e^{h\operatorname{Im}\Sigma_{f,x}(-t)}\big(\operatorname{Im}\Sigma_{f,x}(-t)\big)^{2N'}\,dt \tag{5.3}$$
with $|\theta|\le 1$. To continue as in the proof of Proposition 3, we use that under the assumptions of Proposition 1 we have
$$\frac{1}{T}\int_T^{2T}\big(\operatorname{Im}\Sigma_{f,x}(-t)\big)^{k}\,dt=\mathbb{E}\Big[\Big(\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\,f\!\left(\frac{\log p_j}{\log x}\right)\Big)^{k}\Big]+\theta\,\frac{2D}{T}\,(\pi(x))^{k}\,k! \tag{5.4}$$


with $|\theta|\le 1$. Moreover, if we replace $k$ by $2k$, the main term in (5.4) is bounded by $\big((2k)!/(2^{2k}k!)\big)\big(\sum_{p\le x}1/p\big)^{k}$. These estimates follow as in the proof of Proposition 1 and (2.7). Applying the Cauchy–Schwarz inequality, the absolute value of the remainder in (5.3) can be bounded by
$$\frac{c^{2N'}}{(2N')!}\sqrt{\frac{1}{T}\int_T^{2T}\big(\operatorname{Im}\Sigma_{f,x}(-t)\big)^{4N'}\,dt}\,\sqrt{\frac{1}{T}\int_T^{2T}e^{2h\operatorname{Im}\Sigma_{f,x}(-t)}\,dt}.$$
For $x\ge 2$, we have $|\operatorname{Im}\tilde{\Sigma}_{f,x}(-t)-\operatorname{Im}\Sigma_{f,x}(-t)|\le(\log\log x)/2+O(1)$, say $\le C'''N'$ for $T$ sufficiently large. Hence, by (5.4) and Proposition 4, the above is
$$O\left(\frac{c^{2N'}}{(2N')!}\left(\sqrt{\frac{(4N')!\,\big(\sum_{p\le x}1/p\big)^{2N'}}{2^{4N'}(2N')!}}+\sqrt{\frac{(4N')!\,(\pi(x))^{4N'}}{T}}\right)e^{4C'''cN'+4C''c^2N'}\right)$$
for $T$ sufficiently large. Applying $(4N')!/(2^{4N'}(2N')!)\le(2N')!$, $(2N')!\ge(2N'/e)^{2N'}$, $\pi(x)\le 2x/\log x$, and $N'=N/2$, there exists a constant $c'>0$ (depending on $c$, $C''$, and $C'''$) such that this is
$$O\left(\left(\frac{c'\sum_{p\le x}1/p}{N}\right)^{N}+\left(\frac{c'}{\log x}\right)^{N/2}\right).$$
Since $N/\sum_{p\le x}1/p$ and $\log x$ tend to infinity, this is $o(\exp(-c^2(\log\log x)/4))$. Hence, applying (5.4) to the other terms, (5.3) is equal to
$$\sum_{k\le 2N'-1}\frac{(iz)^k}{k!}\,\mathbb{E}\Big[\Big(\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\,f\!\left(\frac{\log p_j}{\log x}\right)\Big)^{k}\Big]+o\big(e^{-c^2(\log\log T)/4}\big)$$
uniformly for $|z|\le c$. If we replace $\sin(t\log p_j)$ by $\operatorname{Im}X_j$ and integration by expectation in (5.3), we can bound the resulting remainder as above; in doing so, we apply (5.5) instead of Proposition 4. The result is that
$$\frac{1}{T}\int_T^{2T}e^{iz\operatorname{Im}\Sigma_{f,x}(-t)}\,dt=\mathbb{E}\Big[e^{iz\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\,f\left(\frac{\log p_j}{\log x}\right)}\Big]+o\big(e^{-c^2(\log\log T)/4}\big)$$
uniformly for $|z|\le c$. If we multiply both sides by $\exp(z^2(\log\log x+\gamma_f)/4)$ and then apply the formula
$$e^{z^2(\log\log x+\gamma_f)/4}\,\mathbb{E}\Big[e^{iz\sum_{j=1}^{\pi(x)}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\,f\left(\frac{\log p_j}{\log x}\right)}\Big]\to\Phi(z)\quad\text{as }x\to\infty \tag{5.5}$$
locally uniformly for $z\in\mathbb{C}$, the statement of Theorem 2 follows by the same argument as in the proof of Theorem 1. The convergence (5.5) follows as in the proof of Proposition 2 by using the additional fact that
$$\prod_{p\le x}\left(1-\frac{1}{p}\,f^2\!\left(\frac{\log p}{\log x}\right)\right)^{-z^2/4}J_0\!\left(\frac{z}{\sqrt{p}}\,f\!\left(\frac{\log p}{\log x}\right)\right)\to\Phi(z)\quad\text{as }x\to\infty$$
locally uniformly for $z\in\mathbb{C}$. We conclude with a brief argument why this holds. Split the product into $p\le y$ and $y<p\le x$. The product over $y<p\le x$ converges locally uniformly to 1 if $y\to\infty$, while one can show that the product over $p\le y$, say $y=\log x$, converges locally uniformly to $\Phi(z)$, by using, e.g., $f^2(\log p/\log x)-1=O((\log\log x)/\log x)$ if $p\le\log x$. This completes the proof.

Proof of Proposition 4 From (5.1), (5.2), and the Cauchy–Schwarz inequality, we obtain
$$\frac{1}{T}\int_T^{2T}e^{h\sum_{n\le x}\frac{\Lambda(n)}{\log n}\frac{\sin(t\log n)}{\sqrt{n}}\,f\left(\frac{\log n}{\log x}\right)}\,dt\le C\,e^{2C'h^2\log\log T}\sqrt{\frac{1}{T}\int_T^{2T}e^{2h\sum_{\gamma}\sin((t-\gamma)\log x)\int_0^{\infty}\frac{u}{u^2+((t-\gamma)\log x)^2}\frac{du}{\sinh u}}\,dt},$$
where $C$ is a constant. The absolute value of the sum over zeros is bounded by a constant times
$$\sum_{|(t-\gamma)\log x|\le 1}1+\sum_{|(t-\gamma)\log x|>1}\frac{1}{((t-\gamma)\log x)^2}, \tag{5.6}$$
and therefore it suffices to deal with the exponential moments of (5.6) with $h\ge 0$. Using the Cauchy–Schwarz inequality again, we obtain
$$\frac{1}{T}\int_T^{2T}e^{h\sum_{|(t-\gamma)\log x|\le 1}1+h\sum_{|(t-\gamma)\log x|>1}\frac{1}{((t-\gamma)\log x)^2}}\,dt\le\sqrt{\frac{1}{T}\int_T^{2T}e^{2h\sum_{|(t-\gamma)\log x|\le 1}1}\,dt}\,\sqrt{\frac{1}{T}\int_T^{2T}e^{2h\sum_{|(t-\gamma)\log x|>1}\frac{1}{((t-\gamma)\log x)^2}}\,dt}.$$
We start with the first term, using the following fact on the number of zeros (see [3, (1) of Ch. 15]):
$$N(t)=\frac{t}{2\pi}\log\frac{t}{2\pi}-\frac{t}{2\pi}+\frac{7}{8}+S(t)+O(1/t), \tag{5.7}$$
where $t\neq\gamma$ and $S(t)=(1/\pi)\operatorname{Im}\log\zeta(1/2+it)$. We compute (note that $h\ge 0$)
$$\frac{1}{T}\int_T^{2T}e^{h\sum_{|(t-\gamma)\log x|\le 1}1}\,dt=\frac{1}{T}\int_T^{2T}e^{h\big(N(t+\frac{1}{\log x})-N(t-\frac{1}{\log x})\big)}\,dt\le\frac{1}{T}\int_T^{2T}e^{C\frac{h\log t}{\log x}+h\big(S(t+\frac{1}{\log x})-S(t-\frac{1}{\log x})\big)}\,dt$$


$$\le e^{C\frac{h\log T}{\log x}}\sqrt{\frac{1}{T}\int_T^{2T}e^{2hS(t+\frac{1}{\log x})}\,dt}\,\sqrt{\frac{1}{T}\int_T^{2T}e^{-2hS(t-\frac{1}{\log x})}\,dt}=O\!\left(e^{C\frac{h\log T}{\log x}+4C'(h/\pi)^2\log\log T}\right). \tag{5.8}$$
In the last step we used (5.2). Next, we divide the sum over $|(t-\gamma)\log x|>1$ into $|t-\gamma|\ge T$, $1<|t-\gamma|<T$, and $1/\log x<|t-\gamma|\le 1$. For $t\in[T,2T]$, we have
$$\sum_{|t-\gamma|\ge T}\frac{1}{((t-\gamma)\log x)^2}=O\left(\sum_{\gamma}\frac{1}{\gamma^2(\log x)^2}\right)=O\left(\frac{1}{(\log x)^2}\right).$$
The last step results from [3, (4) of Ch. 12]. For the second sum we use the fact that $N(t+1)-N(t)=O(1+\log^{+}|t|)$ (see [3, (2) of Ch. 15]). For $t\in[T,2T]$, we obtain
$$\sum_{1<|t-\gamma|<T}\frac{1}{((t-\gamma)\log x)^2}\le\sum_{k=1}^{T-1}\frac{N(t+k+1)-N(t+k)}{k^2(\log x)^2}+\sum_{k=1}^{T-1}\frac{N(t-k)-N(t-k-1)}{k^2(\log x)^2}=O\left(\sum_{k=1}^{T-1}\frac{\log T}{k^2(\log x)^2}\right)=O\left(\frac{\log T}{(\log x)^2}\right).$$
Next, we consider the sum over $1/\log x<\gamma-t\le 1$. We have
$$\sum_{1/\log x<\gamma-t\le 1}\frac{1}{((t-\gamma)\log x)^2}\le\sum_{j=1}^{M}\frac{N\big(t+\frac{k_j}{\log x}\big)-N\big(t+\frac{k_{j-1}}{\log x}\big)}{k_{j-1}^2}, \tag{5.9}$$
where $1=k_0<k_1<\cdots<k_M$ with $k_{M-1}<\log x\le k_M$. By (5.7), this is bounded by (recall $t\in[T,2T]$)
$$\sum_{j=1}^{M}\left(\frac{C(k_j-k_{j-1})\log T}{(\log x)\,k_{j-1}^2}+\frac{S\big(t+\frac{k_j}{\log x}\big)-S\big(t+\frac{k_{j-1}}{\log x}\big)}{k_{j-1}^2}\right).$$
We choose $k_j=2^{j/2}$ and bound the left hand side of (5.9) by
$$\sqrt{2}\,C\,\frac{\log T}{\log x}+\sum_{j=1}^{M}\frac{S\big(t+\frac{2^{j/2}}{\log x}\big)-S\big(t+\frac{2^{(j-1)/2}}{\log x}\big)}{2^{j-1}}.$$
It follows that
$$\frac{1}{T}\int_T^{2T}e^{h\sum_{1/\log x<\gamma-t\le 1}\frac{1}{((t-\gamma)\log x)^2}}\,dt\le e^{2Ch\frac{\log T}{\log x}}\,\frac{1}{T}\int_T^{2T}e^{h\sum_{j=1}^{M}\frac{1}{2^{j-1}}\big(S(t+\frac{2^{j/2}}{\log x})-S(t+\frac{2^{(j-1)/2}}{\log x})\big)}\,dt.$$


Using $\mathbb{E}\big[e^{h\sum_{j=1}^{M}X_j/2^{j}}\big]\le\prod_{j=1}^{M}\big(\mathbb{E}[e^{hX_j}]\big)^{1/2^{j}}$, which follows from repeated application of the Cauchy–Schwarz inequality, this is
$$\le e^{2Ch\frac{\log T}{\log x}}\prod_{j=1}^{M}\left(\frac{1}{T}\int_T^{2T}e^{2h\big(S(t+\frac{2^{j/2}}{\log x})-S(t+\frac{2^{(j-1)/2}}{\log x})\big)}\,dt\right)^{1/2^{j}}.$$
Applying again the Cauchy–Schwarz inequality and then (5.2) [as in (5.8)], this is
$$O\!\left(e^{2Ch\frac{\log T}{\log x}}\,e^{16C'(h/\pi)^2\log\log T}\right).$$
The same bound is true for the sum over $1/\log x<t-\gamma\le 1$. The claim now follows from putting together all these estimates.

6 Proof of Corollary 1

Let $T$, $c$, $x$, and $N$ be as in Proposition 3, with $T\ge 3$ sufficiently large such that $x\ge 2$ and $N\ge 2$. Assume further that $c>4$ is a constant such that the bound $(\log\log T)^{1/2}(c/4)^{-N/2}=O(1/\log T)$ holds, and that $T$ is so big that the bound $(\log T)(2c^2/\log x)^{N/2}=O(1/\log T)$ holds, too. Then we show that
$$\frac{1}{T}\int_T^{2T}e^{iu\operatorname{Im}\log\zeta(1/2+it)}\,dt=\prod_{p\le x}J_0\!\left(\frac{u}{\sqrt{p}}\right)-\sum_{p\le x}\sum_{k\ge 3\ \mathrm{odd}}\frac{u}{k\sqrt{p^{k}}}\,J_k\!\left(\frac{u}{\sqrt{p}}\right)\prod_{\substack{q\le x\\ q\neq p}}J_0\!\left(\frac{u}{\sqrt{q}}\right)+u^2O(\log\log\log T)+O(1/\log T) \tag{6.1}$$

uniformly for $|u|\le c$, $u\in\mathbb{R}$. One can deduce Corollary 1 from (6.1) as follows. Replace $u$ by $v/\sqrt{(\log\log T)/2}$ with $|v|\le\sqrt{\log\log T}/\log\log\log T$ and let $T$ be sufficiently large. Then, by (3.2) and the formulas $\sum_{p\le x}1/p=\log\log x+c_1+O(1/\log x)$ and $\log\log x/\log\log T=1+O(\log\log\log T/\log\log T)$, the first term on the right hand side of (6.1) is equal to
$$\exp\left(\sum_{p\le x}\log J_0\!\left(\frac{v}{\sqrt{p\,(\log\log T)/2}}\right)\right)=e^{-v^2/2}\left(1+v^2\,O\!\left(\frac{\log\log\log T}{\log\log T}\right)\right)$$
and, by using $|J_k(u)|\le(|u|/2)^k/k!$ and $|J_0(u)|\le 1$, $u\in\mathbb{R}$, the second term is $v^4O(1/(\log\log T)^2)$, which is smaller than $v^2O(\log\log\log T/\log\log T)$.

Hence, it remains to prove (6.1). From $\operatorname{Im}\log\zeta(1/2+it)=\operatorname{Im}\Sigma_{1,x}(t)+\operatorname{Im}r_{1,x}(t)$ and Taylor's theorem, we obtain
$$\frac{1}{T}\int_T^{2T}e^{iu\operatorname{Im}\log\zeta(1/2+it)}\,dt=\frac{1}{T}\int_T^{2T}e^{iu\operatorname{Im}\Sigma_{1,x}(t)}\,dt+iu\,\frac{1}{T}\int_T^{2T}\operatorname{Im}r_{1,x}(t)\,e^{iu\operatorname{Im}\Sigma_{1,x}(t)}\,dt+\theta\,\frac{u^2}{2}\,\frac{1}{T}\int_T^{2T}\big(\operatorname{Im}r_{1,x}(t)\big)^2\,dt$$


with $|\theta|\le 1$. By Proposition 3 and the above assumptions, the first term is equal to $\prod_{p\le x}J_0(u/\sqrt{p})+O(1/\log T)$ and, by [21, Corollary of Theorem 5.1], the third term is $u^2O(\log\log\log T)$. It remains to consider the second term. We start by showing that
$$\frac{1}{T}\int_T^{2T}\operatorname{Im}\log\zeta(1/2+it)\,e^{iu\operatorname{Im}\Sigma_{1,x}(t)}\,dt=\sum_{p\le x}\sum_{k\ge 1\ \mathrm{odd}}\frac{i}{k\sqrt{p^{k}}}\,J_k\!\left(\frac{u}{\sqrt{p}}\right)\prod_{\substack{q\le x\\ q\neq p}}J_0\!\left(\frac{u}{\sqrt{q}}\right)+O(1/\log T) \tag{6.2}$$
uniformly for $|u|\le c$. Let $N'=N/2$. From the Taylor expansion $e^{iu}=\sum_{k\le 2N'-1}(iu)^k/k!+\theta u^{2N'}/(2N')!$, $u\in\mathbb{R}$, with $|\theta|\le 1$, we obtain that the left hand side of (6.2) is equal to
$$\sum_{k\le 2N'-1}\frac{(iu)^k}{k!}\,\frac{1}{T}\int_T^{2T}\operatorname{Im}\log\zeta(1/2+it)\big(\operatorname{Im}\Sigma_{1,x}(t)\big)^{k}\,dt+\theta\,\frac{c^{2N'}}{(2N')!}\,\frac{1}{T}\int_T^{2T}\big|\operatorname{Im}\log\zeta(1/2+it)\big|\,\big(\operatorname{Im}\Sigma_{1,x}(t)\big)^{2N'}\,dt \tag{6.3}$$
with $|\theta|\le 1$. Applying the Cauchy–Schwarz inequality, the estimates in the proof of Proposition 3, and $(1/T)\int_T^{2T}(\operatorname{Im}\log\zeta(1/2+it))^2\,dt=(\log\log T)/2+O(1)$ (see [18, Theorem 3]), the remainder is $O\big((\log\log T)^{1/2}((c/4)^{-N/2}+(2c^2/\log x)^{N/2})\big)=O(1/\log T)$. The remaining moments can be computed by using the following lemma, which is a modification of [17, Lemma 5] and [8, equation (6.3)] and serves as a substitute for the mean value theorem of Montgomery and Vaughan in Sect. 2.

Lemma 1 Assume RH. Let $k,h\le T$ be two positive integers with $(k,h)=1$. Then
$$\int_T^{2T}\log\zeta(1/2+it)\left(\frac{k}{h}\right)^{it}dt=\begin{cases}T\,\dfrac{\Lambda(k)}{\sqrt{k}\log k}+O(\sqrt{kh}\,\log T), & h=1,\\[1ex] O(\sqrt{kh}\,\log T), & h\neq 1,\end{cases}$$
$$\int_T^{2T}\operatorname{Im}\log\zeta(1/2+it)\left(\frac{k}{h}\right)^{it}dt=\begin{cases}-\,\dfrac{iT\Lambda(k)}{2\sqrt{k}\log k}+O(\sqrt{kh}\,\log T), & h=1,\\[1ex] \dfrac{iT\Lambda(h)}{2\sqrt{h}\log h}+O(\sqrt{kh}\,\log T), & k=1,\\[1ex] O(\sqrt{kh}\,\log T), & h,k\neq 1.\end{cases} \tag{6.4}$$

Denote by $p_1,p_2,\ldots,p_n$ the prime numbers not exceeding $x$. Let $X_1,X_2,\ldots$ be an i.i.d. sequence of random variables uniformly distributed on the unit circle. Furthermore, let $k,h\le T$ be positive integers such that $k/h=p_1^{-k_1}\cdots p_n^{-k_n}$ for integers $k_1,\ldots,k_n$. Then Lemma 1 implies that


$$\frac{1}{T}\int_T^{2T}\operatorname{Im}\log\zeta(1/2+it)\,\big(p_1^{-k_1}\cdots p_n^{-k_n}\big)^{it}\,dt=\mathbb{E}\Big[-\sum_{j=1}^{n}\operatorname{Im}\log\big(1-X_j/\sqrt{p_j}\big)\,X_1^{k_1}\cdots X_n^{k_n}\Big]+O\Big(\frac{1}{T}\sqrt{p_1^{|k_1|}\cdots p_n^{|k_n|}}\,\log T\Big). \tag{6.5}$$
Expanding $(\operatorname{Im}\Sigma_{1,x}(t))^{k}$ as in (2.3) and (2.4), we deduce from (6.5) that
$$\frac{1}{T}\int_T^{2T}\operatorname{Im}\log\zeta(1/2+it)\big(\operatorname{Im}\Sigma_{1,x}(t)\big)^{k}\,dt=\mathbb{E}\Big[\Big(-\sum_{j=1}^{n}\operatorname{Im}\log(1-X_j/\sqrt{p_j})\Big)\Big(\sum_{j=1}^{n}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\Big)^{k}\Big]+O\left(\frac{\log T}{2^{k}T}\sum_{l=0}^{k}\binom{k}{l}\sum_{\lambda_1+\cdots+\lambda_n=l}\frac{l!}{\lambda_1!\cdots\lambda_n!}\sum_{\lambda_1+\cdots+\lambda_n=k-l}\frac{(k-l)!}{\lambda_1!\cdots\lambda_n!}\right).$$
The remainder is $O((\log T)n^{k}/T)$, and the resulting remainders in (6.3), $k\le 2N'-1$, add up to $O\big((\log T)(2c/\log x)^{N}\big)=O(1/\log T)$. Hence, (6.3) is equal to
$$\sum_{k\le 2N'-1}\frac{(iu)^k}{k!}\,\mathbb{E}\Big[\Big(-\sum_{j=1}^{n}\operatorname{Im}\log(1-X_j/\sqrt{p_j})\Big)\Big(\sum_{j=1}^{n}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\Big)^{k}\Big]+O(1/\log T)$$
$$=\mathbb{E}\Big[\Big(-\sum_{j=1}^{n}\operatorname{Im}\log(1-X_j/\sqrt{p_j})\Big)\,e^{iu\sum_{j=1}^{n}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}}\Big]+\theta\,\frac{c^{2N'}}{(2N')!}\,\mathbb{E}\Big[\Big|\sum_{j=1}^{n}\operatorname{Im}\log(1-X_j/\sqrt{p_j})\Big|\,\Big(\sum_{j=1}^{n}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\Big)^{2N'}\Big]+O(1/\log T). \tag{6.6}$$
The last equality follows from applying Taylor's theorem as in (6.3). If we treat the first remainder in the last row as the corresponding one in (6.3), using $\mathbb{E}\big[\big(\sum_{j=1}^{\pi(x)}\operatorname{Im}\log(1-X_j/\sqrt{p_j})\big)^2\big]=(\log\log x)/2+O(1)$ this time, one can show that it is also $O(1/\log T)$. By plugging in (3.1) and expanding the logarithm, we obtain that (6.6) is equal to
$$\sum_{p\le x}\sum_{k\ge 1\ \mathrm{odd}}\frac{i}{k\sqrt{p^{k}}}\,J_k\!\left(\frac{u}{\sqrt{p}}\right)\prod_{\substack{q\le x\\ q\neq p}}J_0\!\left(\frac{u}{\sqrt{q}}\right)+O(1/\log T),$$
which completes the proof of (6.2). The last step in the proof of (6.1) is to show that
$$\frac{1}{T}\int_T^{2T}\operatorname{Im}\Sigma_{1,x}(t)\,e^{iu\operatorname{Im}\Sigma_{1,x}(t)}\,dt=\mathbb{E}\Big[\Big(\sum_{j=1}^{n}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}\Big)e^{iu\sum_{j=1}^{n}\frac{\operatorname{Im}X_j}{\sqrt{p_j}}}\Big]+O(1/\log T)=\sum_{p\le x}\frac{i}{\sqrt{p}}\,J_1\!\left(\frac{u}{\sqrt{p}}\right)\prod_{\substack{q\le x\\ q\neq p}}J_0\!\left(\frac{u}{\sqrt{q}}\right)+O(1/\log T)$$
uniformly for $|u|\le c$. The first equality follows as above or as in the proof of Proposition 1, the second equality again by plugging in (3.1). This completes the proof.

7 Proof of Corollaries 2 and 3

Proof of Corollary 2 Let $x\ge 2$ be as in Theorem 2 with the additional property that $N/\log\log T=O(\log\log T)$. By Theorem 2 and the fact that $\log\log T/\log\log x\to 1$ in this case, we obtain for each $h\in\mathbb{R}$
$$\frac{1}{(\log\log T)/2}\log\left(\frac{1}{T}\int_T^{2T}e^{h\operatorname{Im}\Sigma_{f,x}(t)}\,dt\right)\to h^2/2\quad\text{as }T\to\infty. \tag{7.1}$$
Using Theorem 3, we obtain that the family $\big(1/((\log\log T)/2)\big)\operatorname{Im}\Sigma_{f,x}(U_T)$ satisfies the large deviation principle with the speed $1/((\log\log T)/2)$ and the rate function $I(h)=h^2/2$.

Next, consider $\operatorname{Im}r_{f,x}(U_T)$. We will show that there exists a constant $C>0$ (the constant in (7.4)) such that for each $\delta>0$
$$\frac{1}{T}\,\lambda\big(\{t\in[T,2T]:|\operatorname{Im}r_{f,x}(t)|\ge C\delta\log\log T\}\big)\le e^{-(1-o(1))(\delta\log\log T)\log(\delta\log\log T)}. \tag{7.2}$$
We postpone the proof of (7.2) to the end of this section. From (7.2) we deduce that for each $\delta>0$
$$\frac{1}{(\log\log T)/2}\log\Big(\frac{1}{T}\,\lambda\big(\{t\in[T,2T]:|\operatorname{Im}r_{f,x}(t)|\ge\delta\log\log T\}\big)\Big)\le-2(\delta/C)(1-o(1))\big(\log\log\log T+\log(\delta/C)\big).$$
As $T\to\infty$, the right hand side goes to $-\infty$. Hence, by Definition 1, the families $\big(1/((\log\log T)/2)\big)\operatorname{Im}\log\zeta(1/2+iU_T)$ and $\big(1/((\log\log T)/2)\big)\operatorname{Im}\Sigma_{f,x}(U_T)$ are exponentially equivalent. To obtain the statement of the theorem, we finally apply [4, Theorem 4.2.13], which states that if two families of random variables are exponentially equivalent, and one of them satisfies the large deviation principle with a good rate function $I$, then the same large deviation principle holds for the other family.

It remains to show (7.2). Therefore, let $V=\delta\log\log T$ and decompose
$$\operatorname{Im}r_{f,x}=\operatorname{Im}\tilde{r}_{g,T^{1/V}}+\big(\operatorname{Im}\tilde{\Sigma}_{g,T^{1/V}}-\operatorname{Im}\Sigma_{g,T^{1/V}}\big)+\big(\operatorname{Im}\Sigma_{g,T^{1/V}}-\operatorname{Im}\Sigma_{g,x}\big)+\operatorname{Im}\Sigma_{g-f,x}.$$
If $|\operatorname{Im}r_{f,x}(t)|\ge CV$, there exists a summand on the right hand side whose absolute value is greater than or equal to $CV/4$. Applying the union bound, we obtain
$$\frac{1}{T}\lambda\big(\{t\in[T,2T]:|\operatorname{Im}r_{f,x}(t)|\ge CV\}\big)\le\frac{1}{T}\lambda\big(\{t\in[T,2T]:|\operatorname{Im}\tilde{r}_{g,T^{1/V}}(t)|\ge CV/4\}\big)+\frac{1}{T}\lambda\big(\{t\in[T,2T]:|\operatorname{Im}\tilde{\Sigma}_{g,T^{1/V}}(t)-\operatorname{Im}\Sigma_{g,T^{1/V}}(t)|\ge CV/4\}\big)+\frac{1}{T}\lambda\big(\{t\in[T,2T]:|\operatorname{Im}\Sigma_{g,T^{1/V}}(t)-\operatorname{Im}\Sigma_{g,x}(t)|\ge CV/4\}\big)+\frac{1}{T}\lambda\big(\{t\in[T,2T]:|\operatorname{Im}\Sigma_{g-f,x}(t)|\ge CV/4\}\big). \tag{7.3}$$
If we choose Selberg's function $g(u)=e^{-2u}\min(1,2(1-u))$, we can apply [17, Theorem 1], which says that, assuming RH, there exist constants $C,C'>0$ such that for $2\le y\le t^2$ and $t\ge 2$,
$$|\operatorname{Im}\tilde{r}_{g,y}(t)|\le\left|\frac{C'}{\log y}\sum_{n\le y}\frac{\Lambda(n)}{n^{1/2+it}}\,g\!\left(\frac{\log n}{\log y}\right)\right|+\frac{C}{16}\,\frac{\log t}{\log y}. \tag{7.4}$$
If we choose $y=T^{1/V}$ and $t\in[T,2T]$, $T\ge 2$, we have $(C/16)(\log t/\log y)\le CV/8$. For $T\ge 2$ sufficiently large such that $2\le T^{1/V}\le T^2$, we obtain
$$\frac{1}{T}\lambda\big(\{t\in[T,2T]:|\operatorname{Im}\tilde{r}_{g,T^{1/V}}(t)|\ge CV/4\}\big)\le\frac{1}{T}\lambda\Big(\Big\{t\in[T,2T]:\Big|\frac{C'}{\log T^{1/V}}\sum_{n\le T^{1/V}}\frac{\Lambda(n)}{n^{1/2+it}}\,g\!\left(\frac{\log n}{\log T^{1/V}}\right)\Big|\ge CV/8\Big\}\Big).$$
Now, we can apply Markov's inequality and (9.1) to bound the last term by
$$\left(\frac{8C'}{CV}\right)^{2V}3^{2V}\big(2(AV)^{V}+O(1)^{V}\big)=e^{-(1-o(1))V\log V}.$$
Similarly, by using the other bounds in "Appendix 2", we can bound the three other terms in (7.3) by $\exp(-(1-o(1))V\log V)$. Hence, (7.2) follows. This completes the proof.

Proof of Corollary 3 The asserted formula is exactly the content of Varadhan's integral lemma (see Theorem 4). The assumptions of that theorem are satisfied by Corollary 2 and Eq. (5.2).

Acknowledgments I would like to sincerely thank Prof. Ashkan Nikeghbali and Prof. Emmanuel Kowalski for their support and guidance during the preparation of this paper. Thanks also to the referee for comments leading to improvements in the presentation of the manuscript.

8 Appendix 1: Selberg’s result

In this appendix we briefly discuss Selberg's result about the rate of convergence in the central limit theorem for Im log ζ(1/2 + it) (see [19, Theorem 2] and [21, Theorem 6.2]). From Theorem 1 we deduce:

Lemma 2 Let $x=e^{\log T/N}$ and $N$ such that $x\to\infty$ and $N/\log\log T\to\infty$ as $T\to\infty$. Suppose further that $N/\log\log T=O(\log\log T)$. Then
$$\sup_{a<b}\left|\frac{1}{T}\,\lambda\Big(\Big\{t\in[T,2T]:\frac{1}{\sqrt{(\log\log x+\gamma)/2}}\sum_{p\le x}\frac{\sin(t\log p)}{\sqrt{p}}\in[a,b]\Big\}\Big)-\int_a^b\frac{e^{-t^2/2}\,dt}{\sqrt{2\pi}}\right|=O\left(\frac{1}{\sqrt{\log\log x}}\right). \tag{8.1}$$


Proof We denote by $\Phi_n(u)$ the left hand side of (1.3). Using [5, XVI.3, formula 3.13], we can bound the left hand side of (8.1) by
$$\frac{2}{\pi}\int_{-c\sqrt{\log\log x}}^{c\sqrt{\log\log x}}e^{-u^2/2}\,\left|\frac{\Phi_n\big(u/\sqrt{(\log\log x+\gamma)/2}\big)-1}{u}\right|\,du+O\left(\frac{1}{c\sqrt{\log\log x}}\right). \tag{8.2}$$
An inspection of the proof of Proposition 2 combined with (4.2) shows that $\Phi_n(u)=\Phi(u)(1+O(1/\log x))+O(1/\log T)$ for $|u|\le c$. If we choose $c>0$ such that $\Phi(u)$ has no zeros for $|u|\le c$, we obtain $\Phi_n(u)=\Phi(u)(1+O(1/\log x))$, $|u|\le c$. On the other hand, we have $\Phi\big(u/\sqrt{(\log\log x+\gamma)/2}\big)=1+O(u^2/\log\log x)$ for $|u|\le c\sqrt{\log\log x}$. Plugging in these estimates gives that (8.2) is $O(1/\sqrt{\log\log x})$. From $N/\log\log T=O(\log\log T)$ we conclude that $\log\log T/\log\log x\to 1$, and this completes the proof.

This lemma combined with the bound (see [21, Lemma 6.2])
$$\frac{1}{T}\,\lambda\big(\{t\in[T,2T]:|r_{1,x}(t)|\ge c\sqrt{\log\log\log T}\}\big)=O\big(1/\sqrt{\log\log T}\big),$$
where $c>0$ is a constant, yields Selberg's result
$$\sup_{a<b}\left|\frac{1}{T}\,\lambda\Big(\Big\{t\in[T,2T]:\frac{\operatorname{Im}\log\zeta(1/2+it)}{\sqrt{(\log\log T)/2}}\in[a,b]\Big\}\Big)-\int_a^b\frac{e^{-t^2/2}\,dt}{\sqrt{2\pi}}\right|=O\left(\sqrt{\frac{\log\log\log T}{\log\log T}}\right).$$

9 Appendix 2: Mean value estimates

For completeness we present some standard mean value estimates which we applied in the proof of Corollary 2 (see [17, Lemma 3] and [20, Lemma 3]). For this purpose let $x$ and $y$ be positive real numbers, let $a_p$ and $b_p$ be complex numbers with $|a_p|\le 1$ and $|b_p|\le\log p/\log x$, and let $k$ be a nonnegative integer. By repeating the arguments in the proof of Proposition 1, we obtain
$$\frac{1}{T}\int_T^{2T}\Big|\sum_{p\le x}\frac{a_p}{p^{1+2it}}\Big|^{2k}dt\le k!\Big(\sum_{p\le x}\frac{1}{p^2}\Big)^{k}+2D\,k!\,(\pi(x))^{k}/T,$$
$$\frac{1}{T}\int_T^{2T}\Big|\sum_{y<p\le x}\frac{a_p}{p^{1/2+it}}\Big|^{2k}dt\le k!\Big(\sum_{y<p\le x}\frac{1}{p}\Big)^{k}+2D\,k!\,(\pi(x)-\pi(y))^{k}/T,$$
$$\frac{1}{T}\int_T^{2T}\Big|\sum_{p\le x}\frac{b_p}{p^{1/2+it}}\Big|^{2k}dt\le k!\,\frac{1}{(\log x)^{k}}\Big(\sum_{p\le x}\frac{\log p}{p}\Big)^{k}+2D\,k!\,(\pi(x))^{k}/T.$$
If $x\le T^{1/k}$, the first and the third term are bounded by $(Ak)^{k}$ and the second by $\big(k(\log\log x-\log\log y+A)\big)^{k}$, for some constant $A>0$.


For example, we obtain for a function $|g(u)|\le 1$
$$\frac{1}{T}\int_T^{2T}\Big|\frac{1}{\log T^{1/V}}\sum_{n\le T^{1/V}}\frac{\Lambda(n)}{n^{1/2+it}}\,g\!\left(\frac{\log n}{\log T^{1/V}}\right)\Big|^{2V}dt=\frac{1}{T}\int_T^{2T}\Big|\sum_{p\le T^{1/V}}\frac{b_p}{p^{1/2+it}}+\sum_{p^2\le T^{1/V}}\frac{a_p}{p^{1+2it}}+O(1)\Big|^{2V}dt\le 3^{2V}\big((AV)^{V}+(AV)^{V}+O(1)^{V}\big). \tag{9.1}$$

10 Appendix 3: Large deviation theory

In this appendix we give the definition of the large deviation principle and state two important results which we used in the proofs of Corollaries 2 and 3 (see [4]).

A function $I:\mathbb{R}\to[0,\infty]$ is called a rate function (resp. good rate function) if, for all $\alpha\in[0,\infty)$, the sets $\{x:I(x)\le\alpha\}$ are closed (resp. compact). A family $\{Z_\epsilon\}$ of real-valued random variables satisfies the large deviation principle with the speed $\epsilon$ and the rate function $I$ if

(a) for any closed set $F\subseteq\mathbb{R}$, $\limsup_{\epsilon\to 0}\ \epsilon\log\mathbb{P}(Z_\epsilon\in F)\le-\inf_{x\in F}I(x)$;

(b) for any open set $G\subseteq\mathbb{R}$, $\liminf_{\epsilon\to 0}\ \epsilon\log\mathbb{P}(Z_\epsilon\in G)\ge-\inf_{x\in G}I(x)$.

Theorem 3 (Gärtner–Ellis, see Theorem 2.3.6 or 4.5.20 in [4]) Suppose that for each $\lambda\in\mathbb{R}$
$$\Lambda(\lambda):=\lim_{\epsilon\to 0}\epsilon\log\mathbb{E}\big[e^{\lambda Z_\epsilon/\epsilon}\big]$$
exists and that $\Lambda$ is differentiable. Then the family $\{Z_\epsilon\}$ satisfies the large deviation principle with the good rate function $I(x)=\sup_{\lambda\in\mathbb{R}}(\lambda x-\Lambda(\lambda))$.
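(For orientation, added here as a worked example: in the situation of Corollary 2 one has $\Lambda(\lambda)=\lambda^2/2$ by (7.1), and the Legendre transform in Theorem 3 then reproduces the rate function $I(h)=h^2/2$:
$$I(x)=\sup_{\lambda\in\mathbb{R}}\Big(\lambda x-\frac{\lambda^2}{2}\Big)=\Big[\lambda x-\frac{\lambda^2}{2}\Big]_{\lambda=x}=\frac{x^2}{2},$$
since $\frac{d}{d\lambda}\big(\lambda x-\lambda^2/2\big)=x-\lambda$ vanishes exactly at $\lambda=x$.)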

Theorem 4 (Varadhan, see Theorem 4.3.1 in [4]) Suppose that $\{Z_\epsilon\}$ satisfies the large deviation principle with a good rate function $I$ and let $h\in\mathbb{R}$. Assume further that for some $\gamma>1$
$$\limsup_{\epsilon\to 0}\ \epsilon\log\mathbb{E}\big[e^{\gamma hZ_\epsilon/\epsilon}\big]<\infty. \tag{10.1}$$
Then
$$\lim_{\epsilon\to 0}\epsilon\log\mathbb{E}\big[e^{hZ_\epsilon/\epsilon}\big]=\sup_{x\in\mathbb{R}}\big(xh-I(x)\big).$$

Definition 1 (see Definition 4.2.10 in [4]) Let $\{Z_\epsilon\}$ and $\{\tilde{Z}_\epsilon\}$ be two families of real-valued random variables, defined on the same probability space. Then $\{Z_\epsilon\}$ and $\{\tilde{Z}_\epsilon\}$ are called exponentially equivalent if for each $\delta>0$, $\limsup_{\epsilon\to 0}\ \epsilon\log\mathbb{P}\big(|Z_\epsilon-\tilde{Z}_\epsilon|>\delta\big)=-\infty$.


References

1. Andrews, G.E., Askey, R., Roy, R.: Special Functions, Encyclopedia of Mathematics and Its Applications 71. Cambridge University Press, Cambridge (1999)

2. Bombieri, E., Hejhal, D.A.: On the distribution of zeros of linear combinations of Euler products. Duke Math. J. 80, 821–862 (1995)

3. Davenport, H.: Multiplicative Number Theory, 3rd edn. Springer, New York (2000)

4. Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications, corrected reprint of the 2nd edn. Springer, Berlin (2010)

5. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2, 2nd edn. Wiley, New York (1971)

6. Freitag, E., Busam, R.: Complex Analysis. Springer, Berlin (2005)

7. Ghosh, A.: On the Riemann zeta-function-mean value theorems and the distribution of|S(T )|. J. Number Theory 17, 93–102 (1983)

8. Goldston, D.A.: On the function S(T ) in the theory of the Riemann zeta-function. J. Number Theory 27, 149–177 (1987)

9. Gonek, S.M., Hughes, C.P., Keating, J.P.: A hybrid Euler-Hadamard product for the Riemann zeta function. Duke Math. J. 136, 507–549 (2007)

10. Jacod, J., Kowalski, E., Nikeghbali, A.: Mod-Gaussian convergence: new limit theorems in probability and number theory. Forum Math. 23, 835–873 (2011)

11. Joyner, D.: Distribution Theorems of L-Functions. Longman Scientific/Wiley, Harlow/New York (1986)

12. Keating, J.P., Snaith, N.C.: Random matrix theory and ζ(1/2 + it). Commun. Math. Phys. 214, 57–89 (2000)

13. Kowalski, E., Nikeghbali, A.: Mod-Gaussian convergence and the value distribution of ζ(1/2 + it) and related quantities. J. London Math. Soc. (to appear). Preprint available at http://arxiv.org/abs/0912.3237

14. Laurinˇcikas, A.: Limit Theorems for the Riemann Zeta-Function. Kluwer, Dordrecht (1996)

15. Radziwiłł, M.: Large deviations in Selberg's central limit theorem (preprint). http://arxiv.org/abs/1108.5092

16. Ramachandra, K.: On the Mean-Value and Omega-Theorems for the Riemann Zeta-Function. Tata Institute of Fundamental Research Lectures on Mathematics and Physics, 85. Published for the Tata Institute of Fundamental Research, Bombay; Springer (1995)

17. Selberg, A.: On the remainder in the formula for N(T ), the number of zeros of ζ(s) in the strip 0 < t < T . Avh. Norske Vid. Akad. Oslo I 1944 (1), 27 (1944)

18. Selberg, A.: Contributions to the theory of the Riemann zeta-function. Arch. Math. Naturvid. 48, 89–155 (1946)

19. Selberg, A.: Old and new conjectures and results about a class of Dirichlet series. In: Proceedings of the Amalfi Conference on Analytic Number Theory (Maiori 1989), pp. 367–385. University of Salerno, Salerno (1992)

20. Soundararajan, K.: Moments of the Riemann zeta function. Ann. Math. 170, 981–993 (2009)

21. Tsang, K.M.: The Distribution of the values of the Riemann Zeta-Function. Ph.D. thesis, Princeton University (1984)
