CLT for Crossings of random trigonometric Polynomials

HAL Id: hal-00747030
https://hal.archives-ouvertes.fr/hal-00747030v2
Submitted on 29 May 2013

CLT for Crossings of random trigonometric Polynomials

Jean-Marc Azaïs, José R. Leon

To cite this version:

Jean-Marc Azaïs, José R. León. CLT for Crossings of random trigonometric Polynomials. Electronic Journal of Probability, Institute of Mathematical Statistics (IMS), 2013, vol. 18 (paper no. 68). DOI: 10.1214/EJP.v18-2403. hal-00747030v2

CLT for Crossings of random trigonometric Polynomials.

Jean-Marc Azaïs

José R. León

March 7, 2013

Abstract

We establish a central limit theorem for the number of roots of the equation X_N(t) = u when X_N(t) is a Gaussian trigonometric polynomial of degree N. The case u = 0 was studied by Granville and Wigman. We show that for some sizes of the considered interval, the asymptotic behavior differs depending on whether u vanishes or not. Our main tools are: a) a chaining argument with the stationary Gaussian process with covariance sin(t)/t; b) the use of the Wiener chaos decomposition, which explains some singularities that appear in the limit when u ≠ 0.

AMS Subject Classification: 60G15.

Keywords: Crossings of random trigonometric polynomials; Rice formula;

Chaos expansion.

1 Introduction

Let us consider the random trigonometric polynomial:

X

N

(t) = 1

√ N

N

X

n=1

(a

n

sin nt + b

n

cos nt), (1) where the coefficients a

n

and b

n

are independent standard Gaussian random variables and N is some integer.

The number of zeros of such a process on the interval [0, 2π) has been studied in the paper by Granville and Wigman [5], where a central limit theorem, as N → +∞, is proved for the first time using the method of Malevich [8].

Université de Toulouse, IMT, ESP, F-31062 Toulouse Cedex 9, France. Email: jean-marc.azais@math.univ-toulouse.fr

Escuela de Matemática. Facultad de Ciencias. Universidad Central de Venezuela.

A.P. 47197, Los Chaguaramos, Caracas 1041-A, Venezuela. Email: jose.leon@ciens.ucv.ve


The aim of this paper is twofold: firstly we extend their result to the number of crossings of every level and secondly we propose a simpler proof.

The key point consists in proving that, after a convenient scaling, the process X_N(t) converges in a certain sense to the stationary process X(t) with covariance r(t) = sin(t)/t. The central limit theorem for the crossings of the process X_N(t) is then a consequence of the central limit theorem for the crossings in large time of X(t).

The above idea is outlined in Granville and Wigman [5], but the authors could not implement this procedure. Let us quote their words: "While computing the asymptotic of the variance of the crossings of the process X_N(t), we determined that the covariance function r_{X_N} of X_N has a scaling limit r(t), which proved useful for the purpose of computing the asymptotics. Rather than scaling r_{X_N}, one might consider scaling X_N. We realize, that the above should mean, that the distribution of the zeros of X_N is intimately related to the distribution of the number of the zeros on (roughly) [0, N] of a certain Gaussian stationary process X(t), defined on the real line R, with covariance function r. ... Unfortunately, this approach seems to be difficult to make rigorous, due to the different scales of the processes involved."

Our method can roughly be described as follows. First, in Section 3, we define the two processes X_N (or rather its normalization Y_N, see its definition in the next section) and X on the same probability space. This allows us to compute the covariance between these two processes. Afterwards we get a representation of the crossings of both processes in the Wiener chaos. These representations, together with the Mehler formula for non-linear functions of four-dimensional Gaussian vectors, permit us to compute the L² distance between the crossings of Y_N and the crossings of X. The central limit theorem for the crossings of X can be obtained easily by a modification of the method of m-dependent approximation, developed first by Malevich [8] and Berman [3] and improved by Cuzick [4]; the hypotheses in this last work are more in accord with ours. Finally the closeness in L² (in quadratic mean) of the two numbers of crossings, those of X(t) and those of the m-dependent approximation, gives us the central limit theorem for the crossings of X_N.

The organization of the paper is the following: in Section 2 we present

basic calculations; Section 3 is devoted to the presentation of the Wiener

chaos decomposition and to the study of the variance. Section 4 states the

central limit theorem. Additional proofs are given in Sections 5 and 6. A

table of notation is given in Section 7.


2 Basic results and notation

r_{X_N}(τ) will denote the covariance of the process X_N(t), given by

\[
r_{X_N}(\tau) := \mathbb{E}\big[X_N(0) X_N(\tau)\big] = \frac{1}{N} \sum_{n=1}^{N} \cos n\tau
= \frac{1}{N}\, \frac{\cos\big(\tfrac{(N+1)\tau}{2}\big)\, \sin\big(\tfrac{N\tau}{2}\big)}{\sin\tfrac{\tau}{2}}. \tag{2}
\]

We define the process Y_N(t) = X_N(t/N), with covariance r_{Y_N}(τ) = r_{X_N}(τ/N).

We have

\[
r_{Y_N}'(\tau) = \frac{2N \sin\big(\tfrac{\tau}{2N}\big) \cos\big(\tfrac{(2N+1)\tau}{2N}\big) - \sin\tau}{4N^{2} \sin^{2}\big(\tfrac{\tau}{2N}\big)}, \tag{3}
\]

\[
r_{Y_N}''(\tau) = \frac{1}{N^{2}}\, r_{X_N}''\big(\tfrac{\tau}{N}\big) \tag{4}
\]

\[
= \frac{\cos\big(\tfrac{\tau}{2N}\big)\cos\big(\tfrac{(2N+1)\tau}{2N}\big) - (2N+1)\sin\big(\tfrac{\tau}{2N}\big)\sin\big(\tfrac{(2N+1)\tau}{2N}\big) - \cos\tau}{4N^{2}\sin^{2}\big(\tfrac{\tau}{2N}\big)}
- \frac{\Big(2N\sin\big(\tfrac{\tau}{2N}\big)\cos\big(\tfrac{(2N+1)\tau}{2N}\big) - \sin\tau\Big)\cos\big(\tfrac{\tau}{2N}\big)}{4N^{3}\sin^{3}\big(\tfrac{\tau}{2N}\big)}. \tag{5}
\]
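As a quick numerical sanity check of the reconstructed formulas (2), (3) and (5) — a sketch of ours, not part of the paper — one can compare the closed forms with the defining sum and with finite differences:

```python
import numpy as np

def r_YN(tau, N):
    # r_{Y_N}(tau) = (1/N) * sum_{n=1}^N cos(n*tau/N), the defining sum
    n = np.arange(1, N + 1)
    return np.mean(np.cos(n * tau / N))

def r_YN_closed(tau, N):
    # Closed form (2), evaluated at tau/N
    return np.cos((N + 1) * tau / (2 * N)) * np.sin(tau / 2) / (N * np.sin(tau / (2 * N)))

def r_YN_prime(tau, N):
    # Formula (3)
    s = np.sin(tau / (2 * N))
    return (2 * N * s * np.cos((2 * N + 1) * tau / (2 * N)) - np.sin(tau)) / (4 * N**2 * s**2)

def r_YN_second(tau, N):
    # Formula (5)
    s, c = np.sin(tau / (2 * N)), np.cos(tau / (2 * N))
    A = np.cos((2 * N + 1) * tau / (2 * N))
    B = np.sin((2 * N + 1) * tau / (2 * N))
    t1 = (c * A - (2 * N + 1) * s * B - np.cos(tau)) / (4 * N**2 * s**2)
    t2 = (2 * N * s * A - np.sin(tau)) * c / (4 * N**3 * s**3)
    return t1 - t2

N, tau, h = 50, 2.0, 1e-4
assert abs(r_YN(tau, N) - r_YN_closed(tau, N)) < 1e-12
assert abs(r_YN_prime(tau, N) - (r_YN(tau + h, N) - r_YN(tau - h, N)) / (2 * h)) < 1e-6
assert abs(r_YN_second(tau, N)
           - (r_YN(tau + h, N) - 2 * r_YN(tau, N) + r_YN(tau - h, N)) / h**2) < 1e-4
# Scaling limit (6): r_{Y_N}(tau) approaches sin(tau)/tau for large N
assert abs(r_YN(tau, 5000) - np.sin(tau) / tau) < 1e-3
```

The function names and the choice of N, τ and h are ours, for illustration only.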

The convergence of Riemann sums to the integral implies simply that

\[
r_{Y_N}(\tau) \to r(\tau) := \sin(\tau)/\tau, \tag{6}
\]
\[
r_{Y_N}'(\tau) \to r'(\tau) = \cos(\tau)/\tau - \sin(\tau)/\tau^{2}, \tag{7}
\]
\[
r_{Y_N}''(\tau) = \frac{1}{N^{2}}\, r_{X_N}''\big(\tfrac{\tau}{N}\big) \to r''(\tau) = -\frac{\sin(\tau)}{\tau} - \frac{2\cos(\tau)}{\tau^{2}} + \frac{2\sin(\tau)}{\tau^{3}}, \tag{8}
\]

and these convergences are uniform on every compact interval that does not contain zero. We will also need the following upper bounds, which are easy to obtain. When τ ∈ [0, Nπ]:

\[
|r_{Y_N}(\tau)| \le \frac{\pi}{\tau}; \qquad |r_{Y_N}'(\tau)| \le \frac{\pi}{2\tau} + \frac{\pi^{2}}{2\tau^{2}}; \qquad |r_{Y_N}''(\tau)| \le (\mathrm{const})\,\big(\tau^{-1} + \tau^{-2} + \tau^{-3}\big). \tag{9}
\]

We now compute the ingredients of the Rice formula [2]:

\[
\mathbb{E}\, X_N^{2}(t) = 1, \qquad \mathbb{E}\,\big(X_N'(t)\big)^{2} = \frac{1}{N} \sum_{n=1}^{N} n^{2} = \frac{(N+1)(2N+1)}{6}.
\]


Denoting by N^{X_N}_{[0,2π)}(u) the number of crossings of the level u by X_N on the interval [0, 2π), the Rice formula gives

\[
\mathbb{E}\big[ N^{X_N}_{[0,2\pi)}(u) \big]
= 2\pi \cdot \sqrt{\mathbb{E}\,(X_N'(t))^{2}}\; \sqrt{2/\pi}\; \frac{e^{-u^{2}/2}}{\sqrt{2\pi}}
= \frac{2}{\sqrt{3}} \sqrt{\frac{(N+1)(2N+1)}{2}}\; e^{-u^{2}/2}.
\]

Hence

\[
\lim_{N\to\infty} \frac{\mathbb{E}\big[ N^{X_N}_{[0,2\pi)}(u) \big]}{N} = \frac{2}{\sqrt{3}}\, e^{-u^{2}/2}.
\]
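The expectation above is easy to probe by simulation. The following Monte Carlo sketch (ours, with illustrative parameters) counts sign changes of X_N − u on a fine grid and compares the empirical mean with the Rice value:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_crossings(N, u, trials=800, grid=6000):
    # Empirical mean number of u-crossings of X_N on [0, 2*pi),
    # counted as sign changes of X_N - u on a fine grid.
    t = np.linspace(0.0, 2 * np.pi, grid, endpoint=False)
    n = np.arange(1, N + 1)
    S, C = np.sin(np.outer(t, n)), np.cos(np.outer(t, n))
    total = 0
    for _ in range(trials):
        a, b = rng.standard_normal(N), rng.standard_normal(N)
        x = (S @ a + C @ b) / np.sqrt(N)
        total += np.count_nonzero(np.diff(np.sign(x - u)) != 0)
    return total / trials

N, u = 40, 1.0
rice = 2 / np.sqrt(3) * np.sqrt((N + 1) * (2 * N + 1) / 2) * np.exp(-u**2 / 2)
emp = mean_crossings(N, u)
assert abs(emp - rice) / rice < 0.07
```

The grid size, number of trials and seed are arbitrary choices of ours; grid-based counting can in principle miss pairs of very close crossings, but the effect is negligible at this resolution.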

When not specified, all the limits are taken when N → ∞.

3 Spectral representation and Wiener Chaos

The main goal of this section is to build both processes X(t) and Y_N(t) on the same probability space. This chaining argument is one of our main tools. It makes it possible to show that the two processes are close in L² distance and, by consequence, the same result holds true for the crossings of both processes.

We have

\[
X(t) = \int_{0}^{1} \cos(t\lambda)\, dB_{1}(\lambda) + \int_{0}^{1} \sin(t\lambda)\, dB_{2}(\lambda), \tag{10}
\]

where B₁ and B₂ are two independent Brownian motions. Using the same Brownian motions we can write

\[
Y_N(t) = \int_{0}^{1} \sum_{n=1}^{N} \cos\big(\tfrac{nt}{N}\big)\, 1\!\!1_{[\frac{n-1}{N},\frac{n}{N})}(\lambda)\, dB_{1}(\lambda)
+ \int_{0}^{1} \sum_{n=1}^{N} \sin\big(\tfrac{nt}{N}\big)\, 1\!\!1_{[\frac{n-1}{N},\frac{n}{N})}(\lambda)\, dB_{2}(\lambda).
\]

It is easy to check, using the isometry property of stochastic integrals, that Y_N(t) has the desired covariance.

By defining the functions

\[
\gamma^{1}_{N}(t,\lambda) = \sum_{n=1}^{N} \cos\big(\tfrac{nt}{N}\big)\, 1\!\!1_{[\frac{n-1}{N},\frac{n}{N})}(\lambda)
\quad\text{and}\quad
\gamma^{2}_{N}(t,\lambda) = \sum_{n=1}^{N} \sin\big(\tfrac{nt}{N}\big)\, 1\!\!1_{[\frac{n-1}{N},\frac{n}{N})}(\lambda),
\]

we can write

\[
Y_N(t) = \int_{0}^{1} \gamma^{1}_{N}(t,\lambda)\, dB_{1}(\lambda) + \int_{0}^{1} \gamma^{2}_{N}(t,\lambda)\, dB_{2}(\lambda). \tag{11}
\]

In the sequel we are going to express the representations (10) and (11) in an isonormal-process framework. Let H² be the Hilbert space defined as

\[
\mathcal{H}^{2} = \Big\{ h = (h_{1}, h_{2}) : \int_{\mathbb{R}} h_{1}^{2}(\lambda)\, d\lambda + \int_{\mathbb{R}} h_{2}^{2}(\lambda)\, d\lambda < \infty \Big\},
\]


with scalar product

\[
\langle h, g \rangle = \int_{\mathbb{R}} h_{1}(\lambda) g_{1}(\lambda)\, d\lambda + \int_{\mathbb{R}} h_{2}(\lambda) g_{2}(\lambda)\, d\lambda.
\]

The transformation

\[
h \mapsto W(h) := \int_{\mathbb{R}} h_{1}(\lambda)\, dB_{1}(\lambda) + \int_{\mathbb{R}} h_{2}(\lambda)\, dB_{2}(\lambda)
\]

defines an isometry between H² and a Gaussian subspace of L²(Ω, A, P), where A is the σ-field generated by B₁(λ) and B₂(λ). Thus (W(h))_{h∈H²} is the isonormal process associated with H². By using the representations (10) and (11), we readily get

\[
X(t) = W\big( 1\!\!1_{[0,1]}(\cdot)\,(\cos t\,\cdot,\ \sin t\,\cdot) \big), \qquad
Y_N(t) = W\big( 1\!\!1_{[0,1]}(\cdot)\,(\gamma^{1}_{N}(t,\cdot),\ \gamma^{2}_{N}(t,\cdot)) \big),
\]
\[
\tilde X'(t) := \frac{X'(t)}{\sqrt{1/3}} = W\Big( \frac{1\!\!1_{[0,1]}(\lambda)\,\lambda}{\sqrt{1/3}}\,(-\sin t\lambda,\ \cos t\lambda) \Big), \qquad
\tilde Y_N'(t) := \frac{Y_N'(t)}{\sqrt{-r_{Y_N}''(0)}} = W\Big( \frac{1\!\!1_{[0,1]}(\lambda)}{\sqrt{-r_{Y_N}''(0)}}\,\big(\partial_t\gamma^{1}_{N}(t,\lambda),\ \partial_t\gamma^{2}_{N}(t,\lambda)\big) \Big).
\]

We are now in a position to introduce the Wiener chaos, which is our second main tool. For a general reference on this topic see [9]. Let H_k be the Hermite polynomial of degree k, defined by

\[
H_{k}(x) = (-1)^{k}\, e^{x^{2}/2}\, \frac{d^{k}}{dx^{k}}\big( e^{-x^{2}/2} \big).
\]

It is normalized so that, for Y a standard Gaussian random variable, we have E(H_k(Y) H_m(Y)) = δ_{k,m} k!. Consider {e_i}_{i∈ℕ} an orthonormal basis of H². Let Λ be the set of sequences a = (a₁, a₂, …), a_i ∈ ℕ, such that all terms except a finite number vanish. For a ∈ Λ we set a! = ∏_i a_i! and |a| = ∑_i a_i. For any multi-index a ∈ Λ we define

\[
\Phi_{a} = \frac{1}{\sqrt{a!}} \prod_{i=1}^{\infty} H_{a_{i}}\big( W(e_{i}) \big).
\]
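The normalization E(H_k(Y)H_m(Y)) = δ_{k,m} k! can be checked exactly with Gauss–Hermite quadrature for the probabilists' weight — a sketch of ours using NumPy's hermite_e module (this quadrature is exact for polynomial integrands):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

x, w = He.hermegauss(20)   # nodes/weights for weight exp(-x^2/2); exact up to degree 39

def hermite_inner(k, m):
    # E[He_k(Y) He_m(Y)] for Y standard Gaussian, via quadrature
    ck = np.zeros(k + 1); ck[k] = 1.0
    cm = np.zeros(m + 1); cm[m] = 1.0
    return float(np.dot(w, He.hermeval(x, ck) * He.hermeval(x, cm))) / math.sqrt(2 * math.pi)

for k in range(6):
    for m in range(6):
        target = math.factorial(k) if k == m else 0.0
        assert abs(hermite_inner(k, m) - target) < 1e-9
```

Here `hermite_inner` is our illustrative helper; the division by √(2π) turns the weighted integral into an expectation.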

For each n ≥ 1, we will denote by H_n the closed subspace of L²(Ω, A, P) spanned by the random variables {Φ_a, a ∈ Λ, |a| = n}. The space H_n is the nth Wiener chaos associated with B₁(λ) and B₂(λ). If H₀ denotes the space of constants, we have the orthogonal decomposition

\[
L^{2}(\Omega, \mathcal{A}, P) = \bigoplus_{n=0}^{\infty} \mathcal{H}_{n}.
\]


For any Hermite polynomial H_q, with h = (h₁, h₂), it holds that

\[
H_{q}(W(h)) = I_{q}(h) :=
\int_{0}^{+\infty}\!\!\!\cdots\int_{0}^{+\infty} h_{1}(\lambda_{1}) \cdots h_{1}(\lambda_{q})\, dB_{1}(\lambda_{1}) \cdots dB_{1}(\lambda_{q})
+ \int_{0}^{+\infty}\!\!\!\cdots\int_{0}^{+\infty} h_{2}(\lambda_{1}) \cdots h_{2}(\lambda_{q})\, dB_{2}(\lambda_{1}) \cdots dB_{2}(\lambda_{q}).
\]

For instance, as Y_N(t) = W( 1I_{[0,1]}(·)(γ¹_N(t,·), γ²_N(t,·))), we obtain

\[
H_{2}(Y_{N}(t)) = \int_{0}^{1}\!\!\int_{0}^{1} \gamma^{1}_{N}(t,\lambda_{1})\, \gamma^{1}_{N}(t,\lambda_{2})\, dB_{1}(\lambda_{1}) dB_{1}(\lambda_{2})
+ \int_{0}^{1}\!\!\int_{0}^{1} \gamma^{2}_{N}(t,\lambda_{1})\, \gamma^{2}_{N}(t,\lambda_{2})\, dB_{2}(\lambda_{1}) dB_{2}(\lambda_{2}).
\]

We now write the Wiener chaos expansion for the number of crossings. As the absolute value function belongs to L²(ℝ, ϕ(x)dx), where ϕ is the standard Gaussian density, we have

\[
|x| = \sum_{k=0}^{\infty} a_{2k}\, H_{2k}(x), \qquad \text{with} \qquad a_{2k} = \frac{2\,(-1)^{k+1}}{\sqrt{2\pi}\; 2^{k}\, k!\,(2k-1)}.
\]
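The coefficients a_{2k} can be cross-checked numerically via a_{2k} = E[|Z| H_{2k}(Z)]/(2k)! for Z standard Gaussian (our sketch; the integration grid is illustrative):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def a2k_formula(k):
    # a_{2k} = 2 (-1)^(k+1) / ( sqrt(2*pi) * 2^k * k! * (2k-1) )
    return 2 * (-1)**(k + 1) / (math.sqrt(2 * math.pi) * 2**k * math.factorial(k) * (2 * k - 1))

def a2k_numeric(k, m=20001, L=12.0):
    # a_{2k} = E[|Z| He_{2k}(Z)] / (2k)!, by dense trapezoidal integration;
    # m is odd so that the kink of |x| at 0 falls on a grid node
    x = np.linspace(-L, L, m)
    c = np.zeros(2 * k + 1); c[2 * k] = 1.0
    y = np.abs(x) * He.hermeval(x, c) * np.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
    h = x[1] - x[0]
    integral = h * (np.sum(y) - 0.5 * (y[0] + y[-1]))
    return integral / math.factorial(2 * k)

for k in range(5):
    assert abs(a2k_formula(k) - a2k_numeric(k)) < 1e-4
```

For k = 0 the formula reduces to a₀ = √(2/π) = E|Z|, as it should.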

It is shorter to study first X_N(t) on [0, π] (resp. Y_N(t) and X(t) on [0, Nπ]); the generalization to [0, 2π] (resp. to [0, 2Nπ]) will be done in Section 4.

The result of Kratz & León [6], or Th. 10.10 in [2], implies

\[
\frac{1}{\sqrt{N\pi}} \Big( N^{X}_{[0,\pi N]}(u) - \mathbb{E}\, N^{X}_{[0,\pi N]}(u) \Big)
= \sqrt{1/3}\;\varphi(u) \sum_{q=1}^{\infty} \sum_{k=0}^{[q/2]} \frac{H_{q-2k}(u)}{(q-2k)!}\, a_{2k}\,
\frac{1}{\sqrt{N\pi}} \int_{0}^{\pi N} H_{q-2k}(X(s))\, H_{2k}(\tilde X'(s))\, ds, \tag{12}
\]

where [x] is the integer part. We introduce the notation

\[
f_{q}(u, x_{1}, x_{2}) = \varphi(u) \sum_{k=0}^{[q/2]} \frac{H_{q-2k}(u)}{(q-2k)!}\, a_{2k}\, H_{q-2k}(x_{1})\, H_{2k}(x_{2}). \tag{13}
\]

For each s, the random variable

\[
f_{q}(u, X(s), \tilde X'(s)) = \varphi(u) \sum_{k=0}^{[q/2]} \frac{H_{q-2k}(u)}{(q-2k)!}\, a_{2k}\;
H_{q-2k}\Big( W\big( 1\!\!1_{[0,1]}(\cdot)\,(\cos s\,\cdot,\ \sin s\,\cdot) \big) \Big)\,
H_{2k}\Big( W\Big( \tfrac{1\!\!1_{[0,1]}(\lambda)\,\lambda}{\sqrt{1/3}}\,(-\sin s\lambda,\ \cos s\lambda) \Big) \Big)
\]


belongs to the q-th chaos, as a consequence of linearity and the property of multiplication of two functionals belonging to different chaoses, cf. [9] Proposition 1.1.3. Furthermore, also by linearity, the same is true for

\[
I_{q}([0, t]) = \frac{\sqrt{1/3}}{\sqrt{t}} \int_{0}^{t} f_{q}(u, X(s), \tilde X'(s))\, ds. \tag{14}
\]

So that

\[
\frac{1}{\sqrt{N\pi}} \Big( N^{X}_{[0,\pi N]}(u) - \mathbb{E}\, N^{X}_{[0,\pi N]}(u) \Big) = \sum_{q=1}^{\infty} I_{q}([0, \pi N])
\]

gives the decomposition in the Wiener chaos. The same type of expansion is also true for N^{Y_N}_{[0,\pi N]}(u):

\[
\frac{1}{\sqrt{N\pi}} \Big( N^{Y_N}_{[0,\pi N]}(u) - \mathbb{E}\, N^{Y_N}_{[0,\pi N]}(u) \Big) = \sum_{q=1}^{\infty} I_{q,N}([0, \pi N]), \tag{15}
\]

where

\[
I_{q,N}([0, \pi N]) = \frac{\sqrt{-r_{Y_N}''(0)}}{\sqrt{\pi N}} \int_{0}^{\pi N} f_{q}(u, Y_{N}(s), \tilde Y_{N}'(s))\, ds. \tag{16}
\]

Our first goal is to compute the limit variance of (15). Our main tool will be the Arcones inequality. We define the norm

\[
\| f_{q} \|^{2} := \mathbb{E}\, f_{q}^{2}(u, Z_{1}, Z_{2}),
\]

where (Z₁, Z₂) is a bidimensional standard Gaussian vector. We have

\[
\| f_{q} \|^{2} = \varphi^{2}(u) \sum_{k=0}^{[q/2]} \frac{H_{q-2k}^{2}(u)}{(q-2k)!}\, a_{2k}^{2}\,(2k)!
\le (\mathrm{const}) \sum_{k=0}^{[q/2]} a_{2k}^{2}\,(2k)! \le (\mathrm{const}),
\]

where (const) is some constant that does not depend on q. Now we must introduce the Arcones coefficient of dependence [1]:

\[
\psi_{N}(\tau) = \sup\left\{ \big| r_{Y_N}(\tau) \big|,\;
\frac{\big| r_{Y_N}'(\tau) \big|}{\sqrt{-r_{Y_N}''(0)}},\;
\frac{\big| r_{Y_N}''(\tau) \big|}{\big| r_{Y_N}''(0) \big|} \right\}.
\]

The Arcones inequality says that if ψ_N(s' − s) < 1, it holds that

\[
\Big| \mathbb{E}\big[ f_{q}(u, Y_{N}(s), \tilde Y_{N}'(s))\, f_{q}(u, Y_{N}(s'), \tilde Y_{N}'(s')) \big] \Big|
\le \psi_{N}^{q}(s' - s)\, \| f_{q} \|^{2}.
\]

We will also use the following lemma, the proof of which is given in Section 5.


Lemma 1
For every a > 0, there exists a constant K_a such that

\[
\sup_{N} \mathrm{Var}\big( N^{Y_N}_{[0,a]}(u) \big) \le K_{a} < \infty. \tag{17}
\]

Choose some ρ < 1. Using the inequalities (9), we can choose a large enough such that for τ > a we have ψ_N(τ) ≤ K/τ ≤ ρ.

Then we partition [0, Nπ] into L = [Nπ/a] intervals J₁, …, J_L of length larger than a, and we set for short N_ℓ = N^{Y_N}_{J_ℓ}(u).

We have

\[
\mathrm{Var}\big( N^{Y_N}_{[0,N\pi]}(u) \big) = \mathrm{Var}(N_{1} + \cdots + N_{L})
= \sum_{\ell,\ell' : |\ell-\ell'| \le 1} \mathrm{Cov}(N_{\ell}, N_{\ell'})
+ \sum_{\ell,\ell' : |\ell-\ell'| > 1} \mathrm{Cov}(N_{\ell}, N_{\ell'}).
\]

The first sum is easily shown to be O(N ) by applying Lemma 1 and the Cauchy-Schwarz inequality.

Let us look at a term of the second sum. Using the expansion (15) we set

\[
\frac{N_{\ell} - \mathbb{E}(N_{\ell})}{\sqrt{\pi N}} = \sum_{q=1}^{\infty} I_{q,N}(J_{\ell}),
\qquad \text{where} \quad
I_{q,N}(J_{\ell}) = \frac{\sqrt{-r_{Y_N}''(0)}}{\sqrt{\pi N}} \int_{J_{\ell}} f_{q}(u, Y_{N}(s), \tilde Y_{N}'(s))\, ds.
\]

Let us consider the terms corresponding to q > 1. The Arcones inequality implies that

(s))ds. Let us consider the terms corresponding to q > 1. The Arcones inequality implies that

Cov (Iq,N

(J

`

), I

q,N

(J

`0

))

Z

J`×J`0

1 N π (−r

00Y

N

(0))(K/τ )

q

Cdsdt

≤ (const)

Z

J`×J`0

ρ

q−2

τ

−2

dsdt, (18) where τ = s − t. Summing over all pairs on intervals and over q ≥ 2 it is easy to check that this sum is bounded.

It remains to study the case q = 1. Since H₁(x) = x,

\[
I_{1,N}(J_{\ell}) = (N\pi)^{-1/2}\, \sqrt{-r_{Y_N}''(0)}\; a_{0}\, u\, \varphi(u) \int_{J_{\ell}} Y_{N}(s)\, ds.
\]

So that

\[
\Big| \sum_{\ell,\ell' : |\ell-\ell'| > 1} \mathrm{Cov}\big( I_{1,N}(J_{\ell}), I_{1,N}(J_{\ell'}) \big) \Big|
\le (\mathrm{const})\, \frac{1}{N} \Big| \int_{0}^{\pi N}\!\!\int_{0}^{\pi N} r_{Y_N}(s - s')\, ds\, ds' \Big|,
\]


which is bounded because of the following result:

\[
\frac{1}{N} \int_{0}^{\pi N}\!\!\int_{0}^{\pi N} r_{Y_N}(s - s')\, ds\, ds'
= \frac{2}{N} \int_{0}^{\pi N} (\pi N - \tau)\, r_{Y_N}(\tau)\, d\tau
= 2 \sum_{n=1}^{N} \int_{0}^{\pi N} \Big( \pi - \frac{\tau}{N} \Big) \frac{1}{N} \cos\Big( n \frac{\tau}{N} \Big)\, d\tau
\]
\[
= 2 \sum_{n=1}^{N} \frac{1 - \cos n\pi}{n^{2}}
= 4 \sum_{j \ge 0,\, 2j+1 \le N} \frac{1}{(2j+1)^{2}}
\to 4 \sum_{j=0}^{\infty} \frac{1}{(2j+1)^{2}} = 4\, \frac{\pi^{2}}{8} = \frac{\pi^{2}}{2}. \tag{19}
\]
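Equation (19) is easy to verify numerically — an illustrative sketch of ours, with arbitrary quadrature parameters:

```python
import numpy as np

def double_integral(N, m=200000):
    # (2/N) * int_0^{pi N} (pi N - tau) r_{Y_N}(tau) dtau, by trapezoidal quadrature
    tau = np.linspace(0.0, np.pi * N, m)
    n = np.arange(1, N + 1)
    r = np.cos(np.outer(tau, n) / N).mean(axis=1)   # r_{Y_N}(tau)
    f = (np.pi * N - tau) * r
    h = tau[1] - tau[0]
    return (2.0 / N) * h * (np.sum(f) - 0.5 * (f[0] + f[-1]))

def series(N):
    # 2 * sum_{n=1}^N (1 - cos(n pi)) / n^2
    n = np.arange(1, N + 1)
    return float(np.sum(2 * (1 - np.cos(n * np.pi)) / n**2))

N = 30
assert abs(double_integral(N) - series(N)) < 1e-3   # the exact identity in (19)
assert abs(series(5000) - np.pi**2 / 2) < 1e-3      # the limit pi^2 / 2
```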

Define

\[
\sigma_{q}^{2} := \lim_{N\to\infty} \mathrm{Var}\big( I_{q}([0, \pi N]) \big) < \infty.
\]

Proposition 2
For q > 1 we have Var(I_{q,N}([0, πN])) → σ_q² as N → +∞. For q = 1,

\[
\mathrm{Var}\big( I_{1,N}([0, \pi N]) \big) \to \frac{1}{3}\, u^{2} \varphi^{2}(u).
\]

In the case u ≠ 0 this limit is different from

\[
\lim_{N\to\infty} \mathrm{Var}\big( I_{1}([0, \pi N]) \big) = \frac{2}{3}\, u^{2} \varphi^{2}(u).
\]

Remark 1

This different behavior, depending on the chaos considered, is made explicit thanks to the Wiener chaos decomposition.

Proof:
Firstly we consider the case q ≥ 2:

\[
\mathbb{E}\, I_{q,N}^{2}([0, N\pi]) = -r_{Y_N}''(0)\,\varphi^{2}(u)
\sum_{k_{1}=0}^{[q/2]} \sum_{k_{2}=0}^{[q/2]}
\frac{H_{q-2k_{1}}(u)}{(q-2k_{1})!}\, a_{2k_{1}}\,
\frac{H_{q-2k_{2}}(u)}{(q-2k_{2})!}\, a_{2k_{2}}\;
\frac{1}{N\pi} \int_{0}^{N\pi}\!\!\int_{0}^{N\pi}
\mathbb{E}\Big[ H_{q-2k_{1}}(Y_{N}(s))\, H_{2k_{1}}(\tilde Y_{N}'(s))\, H_{q-2k_{2}}(Y_{N}(s'))\, H_{2k_{2}}(\tilde Y_{N}'(s')) \Big]\, ds'\, ds
\]
\[
= -r_{Y_N}''(0)\,\varphi^{2}(u)
\sum_{k_{1}=0}^{[q/2]} \sum_{k_{2}=0}^{[q/2]}
\frac{H_{q-2k_{1}}(u)}{(q-2k_{1})!}\, a_{2k_{1}}\,
\frac{H_{q-2k_{2}}(u)}{(q-2k_{2})!}\, a_{2k_{2}}\;
2 \int_{0}^{\pi N} \Big( 1 - \frac{s}{N\pi} \Big)\,
\mathbb{E}\Big[ H_{q-2k_{1}}(Y_{N}(0))\, H_{2k_{1}}(\tilde Y_{N}'(0))\, H_{q-2k_{2}}(Y_{N}(s))\, H_{2k_{2}}(\tilde Y_{N}'(s)) \Big]\, ds.
\]


We now use the generalized Mehler formula (Lemma 10.7, page 270 of [2]).

Lemma 3
Let (X₁, X₂, X₃, X₄) be a centered Gaussian vector with variance matrix

\[
\Sigma = \begin{pmatrix}
1 & 0 & \rho_{13} & \rho_{14} \\
0 & 1 & \rho_{23} & \rho_{24} \\
\rho_{13} & \rho_{23} & 1 & 0 \\
\rho_{14} & \rho_{24} & 0 & 1
\end{pmatrix}.
\]

Then, if r₁ + r₂ = r₃ + r₄,

\[
\mathbb{E}\big[ H_{r_{1}}(X_{1})\, H_{r_{2}}(X_{2})\, H_{r_{3}}(X_{3})\, H_{r_{4}}(X_{4}) \big]
= \sum_{(d_{1},d_{2},d_{3},d_{4}) \in J} \frac{r_{1}!\, r_{2}!\, r_{3}!\, r_{4}!}{d_{1}!\, d_{2}!\, d_{3}!\, d_{4}!}\;
\rho_{13}^{d_{1}}\, \rho_{14}^{d_{2}}\, \rho_{23}^{d_{3}}\, \rho_{24}^{d_{4}}, \tag{20}
\]

where J is the set of d_i's satisfying: d_i ≥ 0; d₁ + d₂ = r₁; d₃ + d₄ = r₂; d₁ + d₃ = r₃; d₂ + d₄ = r₄. If r₁ + r₂ ≠ r₃ + r₄, the expectation is equal to zero.
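Lemma 3 can be verified on a concrete case by exact tensor Gauss–Hermite quadrature (a sketch of ours; the degrees and correlations below are arbitrary admissible values, and the quadrature is exact because the integrand is a low-degree polynomial):

```python
import itertools, math
import numpy as np
from numpy.polynomial import hermite_e as He

r = (2, 1, 2, 1)
rho13, rho14, rho23, rho24 = 0.3, 0.2, 0.1, 0.25
Sigma = np.array([[1.0, 0.0, rho13, rho14],
                  [0.0, 1.0, rho23, rho24],
                  [rho13, rho23, 1.0, 0.0],
                  [rho14, rho24, 0.0, 1.0]])
Lchol = np.linalg.cholesky(Sigma)

def herm(k, x):
    c = np.zeros(k + 1); c[k] = 1.0
    return He.hermeval(x, c)

# LHS: E[prod He_{r_i}(X_i)] via 4-dim Gauss-Hermite_e quadrature on X = L z
x, w = He.hermegauss(8)
lhs = 0.0
for idx in itertools.product(range(8), repeat=4):
    z = np.array([x[i] for i in idx])
    X = Lchol @ z
    weight = math.prod(w[i] for i in idx) / (2 * math.pi)**2
    lhs += weight * math.prod(herm(r[i], X[i]) for i in range(4))

# RHS: the combinatorial sum (20) of Lemma 3
rhs = 0.0
for d1 in range(r[0] + 1):
    d2 = r[0] - d1
    for d3 in range(r[1] + 1):
        d4 = r[1] - d3
        if d1 + d3 == r[2] and d2 + d4 == r[3]:
            coeff = (math.factorial(r[0]) * math.factorial(r[1])
                     * math.factorial(r[2]) * math.factorial(r[3])
                     / (math.factorial(d1) * math.factorial(d2)
                        * math.factorial(d3) * math.factorial(d4)))
            rhs += coeff * rho13**d1 * rho14**d2 * rho23**d3 * rho24**d4

assert abs(lhs - rhs) < 1e-10
```

For these values only two index vectors satisfy the constraints, (2,0,0,1) and (1,1,1,0), and both sides equal 2ρ₁₃²ρ₂₄ + 4ρ₁₃ρ₁₄ρ₂₃ = 0.069.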

Using this lemma, there exist a finite set J_q of admissible indices d = (d₁, d₂, d₃, d₄) with d₁ + d₂ + d₃ + d₄ = q, and constants C_{q,k₁,k₂,d}, such that

\[
\mathbb{E}\big[ H_{q-2k_{1}}(Y_{N}(0))\, H_{2k_{1}}(\tilde Y_{N}'(0))\, H_{q-2k_{2}}(Y_{N}(\tau))\, H_{2k_{2}}(\tilde Y_{N}'(\tau)) \big]
= \sum_{d \in J_{q}} C_{q,k_{1},k_{2},d}\;
|r_{Y_N}(\tau)|^{d_{1}}\,
\Big| \frac{r_{Y_N}'(\tau)}{\sqrt{-r_{Y_N}''(0)}} \Big|^{d_{2}+d_{3}}\,
\Big| \frac{r_{Y_N}''(\tau)}{r_{Y_N}''(0)} \Big|^{d_{4}}
=: \tilde G_{q,k_{1},k_{2},N}(\tau). \tag{21}
\]

This clearly proves that

\[
\mathbb{E}\big[ H_{q-2k_{1}}(Y_{N}(0))\, H_{2k_{1}}(\tilde Y_{N}'(0))\, H_{q-2k_{2}}(Y_{N}(\tau))\, H_{2k_{2}}(\tilde Y_{N}'(\tau)) \big]
\to
\mathbb{E}\big[ H_{q-2k_{1}}(X(0))\, H_{2k_{1}}(\tilde X'(0))\, H_{q-2k_{2}}(X(\tau))\, H_{2k_{2}}(\tilde X'(\tau)) \big],
\]

and Formula (18) gives a domination proving the convergence of the integral and the fact that σ_q² is finite.

Let us look at the case q = 1:

\[
\mathbb{E}\, I_{1,N}^{2}([0, N\pi]) = -r_{Y_N}''(0)\,\varphi^{2}(u)\,(u a_{0})^{2}\,
\frac{1}{N\pi} \int_{0}^{N\pi}\!\!\int_{0}^{N\pi} \mathbb{E}\big( Y_{N}(s)\, Y_{N}(s') \big)\, ds'\, ds
\to \frac{1}{3}\,\varphi^{2}(u)\, \frac{2u^{2}}{\pi}\cdot\frac{\pi}{2} = \frac{1}{3}\, u^{2} \varphi^{2}(u), \tag{22}
\]

using (19).


On the other hand we have

\[
\mathbb{E}\, I_{1}^{2}([0, N\pi]) = \frac{1}{3}\,\varphi^{2}(u)\,(u a_{0})^{2}\,
\frac{1}{N\pi} \int_{0}^{N\pi}\!\!\int_{0}^{N\pi} \frac{\sin(s - s')}{s - s'}\, ds'\, ds
= \frac{1}{3}\,\varphi^{2}(u)\,(u a_{0})^{2}\, \frac{2}{\pi} \int_{0}^{N\pi} \Big( \pi - \frac{\tau}{N} \Big) \frac{\sin\tau}{\tau}\, d\tau
\to \frac{2}{3}\, u^{2} \varphi^{2}(u). \tag{23}
\]
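The limit used in (23), namely (2/π)∫₀^{Nπ}(π − τ/N)(sin τ/τ)dτ → π, can be checked numerically (our sketch, with an arbitrary quadrature grid):

```python
import numpy as np

def weighted_sinc(N, m=400000):
    # (2/pi) * int_0^{N pi} (pi - tau/N) * sin(tau)/tau dtau, trapezoidal rule
    tau = np.linspace(1e-9, N * np.pi, m)
    f = (np.pi - tau / N) * np.sin(tau) / tau
    h = tau[1] - tau[0]
    return (2 / np.pi) * h * (np.sum(f) - 0.5 * (f[0] + f[-1]))

assert abs(weighted_sinc(2000) - np.pi) < 0.01
```

Combined with a₀² = 2/π, this gives the stated limit (2/3)u²ϕ²(u).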

4 Central limit theorem with a chaining argument

In this section we first establish a central limit theorem, Theorem 4, for the crossings of the process X(t); in a second step, we show that it implies our main result, Theorem 5, a central limit theorem for the crossings of X_N(t).

The covariance r(t) of the limit process X(t) is not summable, in the sense that

\[
\int_{0}^{+\infty} |r(t)|\, dt = +\infty,
\]

but it satisfies: ∫₀^N r(t) dt converges as N → ∞, and for q > 1,

\[
\int_{0}^{+\infty} |r(t)|^{q}\, dt < +\infty.
\]

The following theorem is a direct adaptation of Theorem 1 in [7] or of Theorem 10.11 of [2]. Its proof is given in Section 6 for completeness.

Theorem 4
As t → +∞,

\[
\frac{1}{\sqrt{t}} \Big( N^{X}_{[0,t]}(u) - \mathbb{E}\big( N^{X}_{[0,t]}(u) \big) \Big)
\Rightarrow \mathcal{N}\Big( 0,\ \frac{2}{3}\, u^{2} \varphi^{2}(u) + \sum_{q=2}^{\infty} \sigma_{q}^{2}(u) \Big),
\]

where ⇒ denotes convergence in distribution.

The main idea is to use this result to extend it to the crossings of Y_N(t). Our main result is the following:

Theorem 5
As N → +∞,

1. \[
\frac{1}{\sqrt{N\pi}} \Big( N^{Y_N}_{[0,N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[0,N\pi]}(u) \big) \Big)
\Rightarrow \mathcal{N}\Big( 0,\ \frac{1}{3}\, u^{2} \varphi^{2}(u) + \sum_{q=2}^{\infty} \sigma_{q}^{2}(u) \Big),
\]

2. \[
\frac{1}{\sqrt{2N\pi}} \Big( N^{Y_N}_{[0,2N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[0,2N\pi]}(u) \big) \Big)
\Rightarrow \mathcal{N}\Big( 0,\ \frac{2}{3}\, u^{2} \varphi^{2}(u) + \sum_{q=2}^{\infty} \sigma_{q}^{2}(u) \Big).
\]

Remark 2

We point out that in the case u = 0 the two limit variances are the same, and this is the result of Granville and Wigman [5]; in the other cases this is a new result. The chaos method permits an easy interpretation of the difference between these two behaviors.

Proof:
Let us introduce the cross correlation:

\[
\rho_{N}(s, t) = \mathbb{E}\big( X(s)\, Y_{N}(t) \big)
= \sum_{n=1}^{N} \int_{\frac{n-1}{N}}^{\frac{n}{N}} \cos\Big( s\lambda - t\frac{n}{N} \Big)\, d\lambda
= \sum_{n=1}^{N} \int_{0}^{\frac{1}{N}} \cos\Big( (s-t)\frac{n}{N} - s v \Big)\, dv
= \Re\Big\{ \int_{0}^{\frac{1}{N}} e^{-isv}\, dv\; \sum_{n=1}^{N} e^{i(s-t)\frac{n}{N}} \Big\},
\]

where ℜ denotes the real part. So we can write

\[
\rho_{N}(s, t) = \frac{\sin(s/N)}{s/N}\; r_{Y_N}(t - s)
+ \frac{1 - \cos(s/N)}{s^{2}/(2N^{2})}\; \frac{s}{2N^{2}} \sum_{n=1}^{N} \sin\Big( (s-t)\frac{n}{N} \Big).
\]

The two functions sin(z)/z and (1 − cos(z))/(z²/2) are bounded, with bounded derivatives, and sin(z)/z tends to 1 as z tends to 0. We also have

\[
\Big| \frac{1}{N} \sum_{n=1}^{N} \sin\Big( (s-t)\frac{n}{N} \Big) \Big|
= \frac{1}{N}\, \Big| \frac{\sin\big( \frac{(N+1)(s-t)}{2N} \big)\, \sin\big( \frac{s-t}{2} \big)}{\sin\big( \frac{s-t}{2N} \big)} \Big|
\le (\mathrm{const})\, |s - t|^{-1},
\]

whenever |s − t| < πN.

We have already proved that r_{Y_N}(s − t) = (1/N) Σ_{n=1}^{N} cos((s − t)n/N) converges to r(s − t) uniformly on every compact set that does not contain zero. The same result is true for the first two derivatives, which converge respectively to the corresponding derivatives of r(s − t). In addition, for large values of |s − t| these functions are bounded by K|s − t|^{-1}, and for each fixed s, (s/(2N²)) Σ_{n=1}^{N} sin((s − t)n/N) → 0. Using the derivation rules, it is easy to see that this is enough to have

\[
\rho_{N}(s, t) \to r(s - t),
\]
\[
\frac{\partial \rho_{N}(s, t)}{\partial s} = \mathbb{E}\big( X'(s)\, Y_{N}(t) \big) \to r'(s - t),
\]
\[
\frac{\partial \rho_{N}(s, t)}{\partial t} = \mathbb{E}\big( X(s)\, Y_{N}'(t) \big) \to -r'(s - t),
\]
\[
\frac{\partial^{2} \rho_{N}(s, t)}{\partial s\, \partial t} = \mathbb{E}\big( X'(s)\, Y_{N}'(t) \big) \to -r''(s - t),
\]


again with the convergence uniform on every compact set that does not contain zero. In addition, these functions are bounded by (const)|s − t|^{-1}.
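The convergence ρ_N(s,t) → r(s − t) is also easy to check numerically, computing the λ-integrals in closed form (a sketch of ours; s, t and N are arbitrary illustrative values):

```python
import numpy as np

def rho_N(s, t, N):
    # rho_N(s,t) = sum_n int_{(n-1)/N}^{n/N} cos(s*lambda - t*n/N) dlambda, in closed form
    n = np.arange(1, N + 1)
    upper = np.sin(s * n / N - t * n / N)
    lower = np.sin(s * (n - 1) / N - t * n / N)
    return float(np.sum(upper - lower)) / s

s, t = 3.0, 7.0
assert abs(rho_N(s, t, 2000) - np.sin(s - t) / (s - t)) < 0.01
```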

Before beginning the proofs, we present two results that were established in Peccati & Tudor [10] (Theorem 1 and Proposition 2), which we state as a theorem for later reference.

We will denote by ζ_{q,r} a generic element of the q-th chaos depending on a parameter r that tends to infinity. For instance, in our case we will have ζ_{q,t} = I_q([0, t]) and ζ_{q,N} = I_{q,N}([0, πN]) respectively.

Theorem 6
(i) Assume that for every q₁ ≤ q₂ ≤ … ≤ q_m it holds that

\[
\lim_{t\to\infty} \mathbb{E}\,[\zeta_{q_{i},t}]^{2} = \sigma_{ii}^{2},
\qquad \text{and that for } i \ne j, \quad \lim_{t\to\infty} \mathbb{E}\,[\zeta_{q_{i},t}\, \zeta_{q_{j},t}] = 0.
\]

Then, if D_m is the diagonal matrix with entries σ_ii², Theorem 1 of [10] says that the random vector

\[
(\zeta_{q_{1},t}, \ldots, \zeta_{q_{m},t}) \Rightarrow \mathcal{N}(0, D_{m})
\]

if and only if each ζ_{q_i,t} converges in distribution towards N(0, σ_ii²) as t → ∞.

(ii) Considering now d functionals of the q-th chaos, {ζ_{q,r_l}}_{l=1}^{d}, Proposition 2 of [10] says that

\[
(\zeta_{q,r_{1}}, \zeta_{q,r_{2}}, \ldots, \zeta_{q,r_{d}}) \Rightarrow \mathcal{N}(0, C)
\]

if and only if ζ_{q,r_i} ⇒ N(0, c_ii) and E[ζ_{q,r_i} ζ_{q,r_j}] → c_ij, where c_ij is the (i, j) entry of the matrix C.

We are now ready to prove the following lemma.

Lemma 7
For q ≥ 2,

\[
\lim_{N\to\infty} \mathbb{E}\Big( I_{q,N}([0, N\pi]) - I_{q}([0, N\pi]) \Big)^{2} = 0.
\]

Proof:

\[
\mathbb{E}\Big( I_{q,N}([0, N\pi]) - I_{q}([0, N\pi]) \Big)^{2}
= \mathbb{E}\, I_{q,N}^{2}([0, N\pi]) + \mathbb{E}\, I_{q}^{2}([0, N\pi])
- 2\, \mathbb{E}\Big( I_{q,N}([0, N\pi])\, I_{q}([0, N\pi]) \Big).
\]

We have already shown that the first two terms tend to σ_q²(u). It only remains to prove that the third one also does. But, since the cross correlation ρ_N(s, t) shares all the properties of r_{Y_N}(s − t), the same proof as in Section 3 shows that the limit is again σ_q²(u).

We now finish the proof of Theorem 5.

Proof of 1.
The case of I_{1,N}([0, Nπ]) is easy to handle, since it is already a Gaussian variable and its limit variance is easy to compute using (19). By Lemma 7, for q ≥ 2, I_{q,N}([0, Nπ]) inherits the asymptotic Gaussian behavior of I_q([0, Nπ]). By using (i) of Theorem 6, this is enough to obtain the normality of the sum.

Proof of 2.
We have already proved that

\[
\chi_{N}(1) := \frac{1}{\sqrt{N\pi}} \Big( N^{Y_N}_{[0,N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[0,N\pi]}(u) \big) \Big)
\Rightarrow \mathcal{N}\Big( 0,\ \frac{1}{3}\, u^{2} \varphi^{2}(u) + \sum_{q=2}^{\infty} \sigma_{q}^{2}(u) \Big);
\]

the same result holds by stationarity for the sequence

\[
\chi_{N}(2) := \frac{1}{\sqrt{N\pi}} \Big( N^{Y_N}_{[N\pi,2N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[N\pi,2N\pi]}(u) \big) \Big),
\]

and we have

\[
\frac{1}{\sqrt{2N\pi}} \Big( N^{Y_N}_{[0,2N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[0,2N\pi]}(u) \big) \Big)
= \frac{1}{\sqrt{2}}\big( \chi_{N}(1) + \chi_{N}(2) \big).
\]

It only remains to show that the limit of the vector (χ_N(1), χ_N(2)) is jointly Gaussian and that the variance of the sum converges to the corresponding one. Defining

\[
I_{q,N}([\pi N, 2\pi N]) = \frac{\sqrt{-r_{Y_N}''(0)}}{\sqrt{\pi N}} \int_{\pi N}^{2\pi N} f_{q}(u, Y_{N}(s), \tilde Y_{N}'(s))\, ds,
\]

we can write the sum above as

\[
\frac{1}{\sqrt{2}}\big( \chi_{N}(1) + \chi_{N}(2) \big)
= \frac{1}{\sqrt{2}} \Big( \sum_{q=1}^{\infty} I_{q,N}([0, \pi N]) + \sum_{q=1}^{\infty} I_{q,N}([\pi N, 2\pi N]) \Big),
\]

and given that the limit variance is finite we have

\[
\frac{1}{\sqrt{2}}\big( \chi_{N}(1) + \chi_{N}(2) \big)
= \frac{1}{\sqrt{2}} \Big( \sum_{q=1}^{Q} I_{q,N}([0, \pi N]) + \sum_{q=1}^{Q} I_{q,N}([\pi N, 2\pi N]) \Big) + o_{P}(1),
\]

where o_P(1) denotes a term that tends to zero in probability when Q → ∞, uniformly in N. Let us first consider the term corresponding to the first chaos (q = 1). We have

\[
E := \mathbb{E}\Big( I_{1,N}([0, N\pi])\, I_{1,N}([N\pi, 2N\pi]) \Big)
= -r_{Y_N}''(0)\,\varphi^{2}(u)\,(u a_{0})^{2}\,
\frac{1}{N\pi} \int_{0}^{N\pi}\!\!\int_{N\pi}^{2N\pi} \mathbb{E}\big( Y_{N}(s)\, Y_{N}(s') \big)\, ds'\, ds
= -r_{Y_N}''(0)\,\varphi^{2}(u)\,(u a_{0})^{2}\,
\frac{1}{N\pi} \int_{0}^{N\pi}\!\!\int_{N\pi}^{2N\pi} r_{Y_N}(s' - s)\, ds'\, ds;
\]


making the change of variable s' − s = τ we get

\[
E = -r_{Y_N}''(0)\,\varphi^{2}(u)\,(u a_{0})^{2}\, \frac{1}{N\pi}
\Big( \int_{0}^{\pi N} \tau\, r_{Y_N}(\tau)\, d\tau + \int_{\pi N}^{2\pi N} (2\pi N - \tau)\, r_{Y_N}(\tau)\, d\tau \Big).
\]

Since r_{Y_N} is periodic with period 2πN:

\[
E = -r_{Y_N}''(0)\,\varphi^{2}(u)\,(u a_{0})^{2}\, \frac{1}{N\pi}
\Big( \int_{0}^{\pi N} \tau\, r_{Y_N}(\tau)\, d\tau - \int_{-\pi N}^{0} \tau\, r_{Y_N}(\tau)\, d\tau \Big)
= -r_{Y_N}''(0)\,\varphi^{2}(u)\,(u a_{0})^{2}\, \frac{2}{N\pi} \int_{0}^{\pi N} \tau\, r_{Y_N}(\tau)\, d\tau
\to \frac{1}{3}\,\varphi^{2}(u)\, u^{2},
\]

using the same computation as for getting (19).

This implies that

\[
\frac{1}{2}\, \mathbb{E}\Big( I_{1,N}([0, N\pi]) + I_{1,N}([N\pi, 2N\pi]) \Big)^{2} \to \frac{2}{3}\,\varphi^{2}(u)\, u^{2}.
\]

Since the two random variables I_{1,N}([0, Nπ]) and I_{1,N}([Nπ, 2Nπ]) are jointly Gaussian, this implies the convergence in distribution of (1/√2)(I_{1,N}([0, Nπ]) + I_{1,N}([Nπ, 2Nπ])).

Let us now consider the terms in the other chaoses (q ≥ 2):

\[
\mathbb{E}\Big( I_{q,N}([0, N\pi])\, I_{q,N}([N\pi, 2N\pi]) \Big)
= -r_{Y_N}''(0)\,\varphi^{2}(u)
\sum_{k_{1}=0}^{[q/2]} \sum_{k_{2}=0}^{[q/2]}
\frac{H_{q-2k_{1}}(u)}{(q-2k_{1})!}\, a_{2k_{1}}\,
\frac{H_{q-2k_{2}}(u)}{(q-2k_{2})!}\, a_{2k_{2}}\,
\frac{1}{\pi N} \int_{0}^{\pi N}\!\!\int_{\pi N}^{2\pi N} G_{q,k_{1},k_{2},N}(s - s')\, ds\, ds',
\]

where we have put

\[
G_{q,k_{1},k_{2},N}(s - s') = \mathbb{E}\big[ H_{q-2k_{1}}(Y_{N}(0))\, H_{2k_{1}}(\tilde Y_{N}'(0))\, H_{q-2k_{2}}(Y_{N}(s - s'))\, H_{2k_{2}}(\tilde Y_{N}'(s - s')) \big].
\]

A change of variables and Fubini's theorem give

\[
\frac{1}{\pi N} \int_{0}^{\pi N}\!\!\int_{\pi N}^{2\pi N} G_{q,k_{1},k_{2},N}(s - s')\, ds\, ds'
= \frac{1}{N\pi} \Big( \int_{0}^{\pi N} \tau\, G_{q,k_{1},k_{2},N}(\tau)\, d\tau + \int_{0}^{\pi N} \tau\, G_{q,k_{1},k_{2},N}(-\tau)\, d\tau \Big),
\]

where the last equality is a consequence of periodicity and the change of variable τ = v + 2πN in the second integral. In this form we get

\[
\Big| \frac{1}{\pi N} \int_{0}^{\pi N}\!\!\int_{\pi N}^{2\pi N} G_{q,k_{1},k_{2},N}(s - s')\, ds\, ds' \Big|
\le \frac{2}{N\pi} \int_{0}^{\pi N} \tau\, \tilde G_{q,k_{1},k_{2},N}(\tau)\, d\tau.
\]


The function G̃_{q,k₁,k₂,N}(τ) has been defined in (21), and we recall that it is even. Moreover, it is plain that over any compact interval [0, a] it holds that

\[
\lim_{N\to\infty} \frac{2}{N\pi} \int_{0}^{a} \tau\, \tilde G_{q,k_{1},k_{2},N}(\tau)\, d\tau = 0;
\]

for the integral over [a, πN] we use the bound (9) and the Arcones inequality. Thereby

\[
\lim_{N\to\infty} \Big| \frac{2}{N\pi} \int_{0}^{\pi N} \tau\, \tilde G_{q,k_{1},k_{2},N}(\tau)\, d\tau \Big| = 0.
\]

By using (ii) of Theorem 6, we get for q ≥ 2

\[
\big( I_{q,N}([0, N\pi]),\; I_{q,N}([N\pi, 2N\pi]) \big) \Rightarrow \mathcal{N}(0, \sigma_{q}^{2} I),
\]

where I is the identity matrix in ℝ². Defining

\[
I_{q,N}([0, 2N\pi]) = \frac{1}{\sqrt{2}} \big( I_{q,N}([0, N\pi]) + I_{q,N}([N\pi, 2N\pi]) \big),
\]

it holds for each q that I_{q,N}([0, 2Nπ]) ⇒ N(0, σ_q²); this asymptotic normality also holds for q = 1. The theorem now follows by applying again (i) of Theorem 6 and the expansion (12).

5 Proof of Lemma 1

It suffices to prove that N^{Y_N}_{[0,a]}(u) has a second moment which is bounded uniformly in N. Let U^{Y_N}_{[0,a]}(u) be the number of up-crossings of the level u by Y_N(t) in the interval [0, a], i.e. the number of instants t such that Y_N(t) = u, Y_N'(t) > 0. Rolle's theorem implies

\[
N^{Y_N}_{[0,a]}(u) \le 2\, U^{Y_N}_{[0,a]}(u) + 1.
\]

So it suffices to give a bound for the second moment of the number of up-crossings. Writing U for U^{Y_N}_{[0,a]}(u) for short, we have

\[
\mathbb{E}(U^{2}) = \mathbb{E}\big( U(U - 1) \big) + \mathbb{E}(U).
\]

We have already proven that the last term gives a finite contribution after normalization. For studying the first one we define the function θ_N(τ) by

\[
r_{Y_N}(\tau) = 1 + \frac{r_{Y_N}''(0)}{2}\, \tau^{2} + \theta_{N}(\tau).
\]
