HAL Id: hal-00747030
https://hal.archives-ouvertes.fr/hal-00747030v2
Submitted on 29 May 2013
CLT for Crossings of random trigonometric Polynomials
Jean-Marc Azaïs, José R. León
To cite this version:
Jean-Marc Azaïs, José R. León. CLT for Crossings of random trigonometric Polynomials. Electronic Journal of Probability, Institute of Mathematical Statistics (IMS), 2013, vol. 18 (paper no. 68), ⟨10.1214/EJP.v18-2403⟩. ⟨hal-00747030v2⟩
CLT for Crossings of random trigonometric Polynomials.
Jean-Marc Azaïs∗, José R. León†
March 7, 2013
Abstract
We establish a central limit theorem for the number of roots of the equation X_N(t) = u when X_N(t) is a Gaussian trigonometric polynomial of degree N. The case u = 0 was studied by Granville and Wigman. We show that, for some sizes of the considered interval, the asymptotic behavior differs depending on whether u vanishes or not. Our main tools are: a) a chaining argument with the stationary Gaussian process with covariance sin(t)/t; b) the use of the Wiener chaos decomposition, which explains some singularities that appear in the limit when u ≠ 0.
AMS Subject Classification: 60G15.
Keywords: Crossings of random trigonometric polynomials; Rice formula;
Chaos expansion.
1 Introduction
Let us consider the random trigonometric polynomial
$$X_N(t) = \frac{1}{\sqrt{N}} \sum_{n=1}^{N} \big( a_n \sin nt + b_n \cos nt \big), \qquad (1)$$
where the coefficients a_n and b_n are independent standard Gaussian random variables and N is some integer.
The number of zeroes of such a process on the interval [0, 2π) has been studied in the paper by Granville and Wigman [5], where a central limit theorem as N → +∞ is proved for the first time, using the method of Malevich [8].
∗Université de Toulouse, IMT, ESP, F-31062 Toulouse Cedex 9, France. Email: jean-marc.azais@math.univ-toulouse.fr
†Escuela de Matemática, Facultad de Ciencias, Universidad Central de Venezuela. A.P. 47197, Los Chaguaramos, Caracas 1041-A, Venezuela. Email: jose.leon@ciens.ucv.ve
The aim of this paper is twofold: firstly, we extend their result to the number of crossings of every level and, secondly, we propose a simpler proof. The key point consists in proving that, after a convenient scaling, the process X_N(t) converges in a certain sense to the stationary process X(t) with covariance r(t) = sin(t)/t. The central limit theorem for the crossings of the process X_N(t) is then a consequence of the central limit theorem for the crossings of X(t) in large time.
The above idea is outlined in Granville and Wigman [5], but the authors could not implement this procedure. Let us quote their words: “While computing the asymptotics of the variance of the crossings of the process X_N(t), we determined that the covariance function r_{X_N} of X_N has a scaling limit r(t), which proved useful for the purpose of computing the asymptotics. Rather than scaling r_{X_N}, one might consider scaling X_N. We realize that the above should mean that the distribution of the zeros of X_N is intimately related to the distribution of the number of the zeros on (roughly) [0, N] of a certain Gaussian stationary process X(t), defined on the real line R, with covariance function r. ... Unfortunately, this approach seems to be difficult to make rigorous, due to the different scales of the processes involved.”
Our method can roughly be described as follows. First, in Section 3, we define the two processes X_N (or rather its normalization Y_N, see its definition in the next section) and X on the same probability space. This allows us to compute the covariance between the two processes. Afterwards we obtain a representation of the crossings of both processes in the Wiener chaos. These representations, together with the Mehler formula for non-linear functions of four-dimensional Gaussian vectors, permit us to compute the L² distance between the crossings of Y_N and the crossings of X. The central limit theorem for the crossings of X can be obtained easily by a modification of the method of m-dependent approximation, developed first by Malevich [8] and Berman [3] and improved by Cuzick [4]; the hypotheses of this last work are more in accordance with ours. Finally, the closeness in L² (in quadratic mean) of the two numbers of crossings, those of X(t) and those of the m-dependent approximation, gives us the central limit theorem for the crossings of X_N.
The organization of the paper is the following: in Section 2 we present
basic calculations; Section 3 is devoted to the presentation of the Wiener
chaos decomposition and to the study of the variance. Section 4 states the
central limit theorem. Additional proofs are given in Sections 5 and 6. A
table of notation is given in Section 7.
2 Basic results and notation
We denote by r_{X_N}(τ) the covariance of the process X_N(t), given by
$$r_{X_N}(\tau) := \mathbb{E}[X_N(0)X_N(\tau)] = \frac{1}{N}\sum_{n=1}^{N}\cos n\tau = \frac{1}{N}\,\frac{\cos\!\big(\tfrac{(N+1)\tau}{2}\big)\,\sin\!\big(\tfrac{N\tau}{2}\big)}{\sin\tfrac{\tau}{2}}. \qquad (2)$$
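As a sanity check, the closed form in (2) can be compared numerically with the defining sum; a minimal sketch (helper names are ours):

```python
import numpy as np

def r_XN_sum(tau, N):
    """Covariance (2) as the defining average of cosines."""
    n = np.arange(1, N + 1)
    return np.cos(n * tau).sum() / N

def r_XN_closed(tau, N):
    """Closed form: cos((N+1)tau/2) sin(N tau/2) / (N sin(tau/2))."""
    return np.cos((N + 1) * tau / 2) * np.sin(N * tau / 2) / (N * np.sin(tau / 2))

N = 25
taus = np.linspace(0.1, 3.0, 50)
err = max(abs(r_XN_sum(t, N) - r_XN_closed(t, N)) for t in taus)
print(err)  # agreement up to rounding
```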
We define the process Y_N(t) = X_N(t/N), with covariance r_{Y_N}(τ) = r_{X_N}(τ/N). We have
$$r'_{Y_N}(\tau) = \frac{2N\sin\tfrac{\tau}{2N}\,\cos\tfrac{(2N+1)\tau}{2N} - \sin\tau}{4N^2\sin^2\tfrac{\tau}{2N}}, \qquad (3)$$
$$r''_{X_N}(\tau) = -\frac{1}{N}\sum_{n=1}^{N} n^2\cos n\tau, \qquad (4)$$
$$r''_{Y_N}(\tau) = \frac{1}{N^2}\, r''_{X_N}\!\Big(\frac{\tau}{N}\Big) = \frac{\cos\tfrac{\tau}{2N}\cos\tfrac{(2N+1)\tau}{2N} - (2N+1)\sin\tfrac{\tau}{2N}\sin\tfrac{(2N+1)\tau}{2N} - \cos\tau}{4N^2\sin^2\tfrac{\tau}{2N}} - \frac{\big(2N\sin\tfrac{\tau}{2N}\cos\tfrac{(2N+1)\tau}{2N} - \sin\tau\big)\cos\tfrac{\tau}{2N}}{4N^3\sin^3\tfrac{\tau}{2N}}. \qquad (5)$$
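Formulas (3) and (5) are easy to mistype; a small sketch (helper names ours) compares them with finite differences of r_{Y_N}:

```python
import numpy as np

def r_YN(tau, N):
    n = np.arange(1, N + 1)
    return np.cos(n * tau / N).sum() / N

def r1_closed(tau, N):
    # formula (3)
    x = tau / (2 * N)
    return (2 * N * np.sin(x) * np.cos((2 * N + 1) * x) - np.sin(tau)) / (4 * N**2 * np.sin(x) ** 2)

def r2_closed(tau, N):
    # formula (5)
    x = tau / (2 * N)
    num1 = np.cos(x) * np.cos((2 * N + 1) * x) - (2 * N + 1) * np.sin(x) * np.sin((2 * N + 1) * x) - np.cos(tau)
    num2 = (2 * N * np.sin(x) * np.cos((2 * N + 1) * x) - np.sin(tau)) * np.cos(x)
    return num1 / (4 * N**2 * np.sin(x) ** 2) - num2 / (4 * N**3 * np.sin(x) ** 3)

N, h = 7, 1e-5
errs = []
for tau in (1.0, 5.0, 11.0):
    d1 = (r_YN(tau + h, N) - r_YN(tau - h, N)) / (2 * h)       # central difference
    d2 = (r_YN(tau + h, N) - 2 * r_YN(tau, N) + r_YN(tau - h, N)) / h**2
    errs.append((abs(d1 - r1_closed(tau, N)), abs(d2 - r2_closed(tau, N))))
print(max(max(e) for e in errs))
```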
The convergence of Riemann sums to the integral simply implies that
$$r_{Y_N}(\tau) \to r(\tau) := \sin(\tau)/\tau, \qquad (6)$$
$$r'_{Y_N}(\tau) \to r'(\tau) = \cos(\tau)/\tau - \tau^{-2}\sin(\tau), \qquad (7)$$
$$r''_{Y_N}(\tau) = \frac{1}{N^2}\, r''_{X_N}\!\Big(\frac{\tau}{N}\Big) \to r''(\tau) = -\frac{\sin(\tau)}{\tau} - \frac{2\cos(\tau)}{\tau^2} + \frac{2\sin(\tau)}{\tau^3}. \qquad (8)$$
These convergences are uniform on every compact interval that does not contain zero. We will also need the following upper bounds, which are easy to obtain when τ ∈ (0, Nπ]:
$$|r_{Y_N}(\tau)| \le \frac{\pi}{\tau}; \qquad |r'_{Y_N}(\tau)| \le \frac{\pi}{2\tau} + \frac{\pi^2}{4\tau^2}; \qquad |r''_{Y_N}(\tau)| \le (\mathrm{const})\big(\tau^{-1} + \tau^{-2} + \tau^{-3}\big). \qquad (9)$$
We now compute the ingredients of the Rice formula [2]:
$$\mathbb{E}\,X_N^2(t) = 1, \qquad \text{and} \qquad \mathbb{E}(X'_N(t))^2 = \frac{1}{N}\sum_{n=1}^{N} n^2 = \frac{(N+1)(2N+1)}{6}.$$
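Returning to the scaled covariance, the limit (6) and the first bound in (9) can be spot-checked numerically; a rough sketch (helper names ours; the bound is tested away from the endpoint Nπ, where it becomes tight):

```python
import numpy as np

def r_YN(tau, N):
    n = np.arange(1, N + 1)
    return np.cos(n * tau / N).sum() / N

# convergence (6): r_YN -> sin(tau)/tau on compacts away from 0
taus = np.linspace(0.5, 20.0, 200)
N = 1000
err = max(abs(r_YN(t, N) - np.sin(t) / t) for t in taus)
print(err)  # small, O(1/N)

# bound (9): |r_YN(tau)| <= pi/tau, checked on part of (0, N*pi]
N = 100
grid = np.linspace(0.1, 0.95 * N * np.pi, 4000)
ok = all(abs(r_YN(t, N)) <= np.pi / t for t in grid)
print(ok)
```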
Denoting by $N^{X_N}_{[0,2\pi)}(u)$ the number of crossings of the level u by X_N on the interval [0, 2π), the Rice formula gives
$$\mathbb{E}\big[N^{X_N}_{[0,2\pi)}(u)\big] = 2\pi\,\sqrt{\mathbb{E}(X'_N(t))^2}\;\sqrt{2/\pi}\;\frac{e^{-u^2/2}}{\sqrt{2\pi}} = \frac{2}{\sqrt{3}}\sqrt{\frac{(N+1)(2N+1)}{2}}\; e^{-\frac{u^2}{2}}.$$
Hence
$$\lim_{N\to\infty}\frac{\mathbb{E}\big[N^{X_N}_{[0,2\pi)}(u)\big]}{N} = \frac{2}{\sqrt{3}}\, e^{-\frac{u^2}{2}}.$$
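The Rice computation can be confronted with simulation. A rough Monte Carlo sketch (helper names ours; crossings are counted as sign changes on a fine grid, which can only undercount):

```python
import numpy as np

rng = np.random.default_rng(1)

def crossings(x, u):
    """Count sign changes of x - u along a sampled path."""
    s = np.sign(x - u)
    return np.count_nonzero(s[:-1] * s[1:] < 0)

N, u, reps = 30, 1.0, 400
t = np.linspace(0, 2 * np.pi, 40 * N, endpoint=False)
n = np.arange(1, N + 1)
S, C = np.sin(np.outer(t, n)), np.cos(np.outer(t, n))
counts = []
for _ in range(reps):
    a, b = rng.standard_normal(N), rng.standard_normal(N)
    counts.append(crossings((S @ a + C @ b) / np.sqrt(N), u))
mc = np.mean(counts)
rice = 2 / np.sqrt(3) * np.sqrt((N + 1) * (2 * N + 1) / 2) * np.exp(-u**2 / 2)
print(mc, rice)  # the two values should be close
```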
When not specified, all limits are taken as N → ∞.
3 Spectral representation and Wiener Chaos
The main goal of this section is to construct both processes X(t) and Y_N(t) on the same probability space. This chaining argument is one of our main tools. It makes it possible to show that the two processes are close in L² distance, and consequently the same holds true for the crossings of both processes.
We have
$$X(t) = \int_0^1 \cos(t\lambda)\, dB_1(\lambda) + \int_0^1 \sin(t\lambda)\, dB_2(\lambda), \qquad (10)$$
where B_1 and B_2 are two independent Brownian motions. Using the same Brownian motions we can write
$$Y_N(t) = \int_0^1 \sum_{n=1}^{N} \cos\Big(\frac{nt}{N}\Big)\, 1\!\!1_{[\frac{n-1}{N},\frac{n}{N})}(\lambda)\, dB_1(\lambda) + \int_0^1 \sum_{n=1}^{N} \sin\Big(\frac{nt}{N}\Big)\, 1\!\!1_{[\frac{n-1}{N},\frac{n}{N})}(\lambda)\, dB_2(\lambda).$$
It is easy to check, using the isometry property of stochastic integrals, that Y_N(t) has the desired covariance.
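The isometry check reduces to a finite sum: each indicator 1_{[(n−1)/N, n/N)} contributes a factor 1/N to the L² inner product, so the step-function representation reproduces r_{Y_N}(s − t) exactly. A small numerical sketch (helper names ours):

```python
import numpy as np

def cov_via_steps(s, t, N):
    """E[Y_N(s)Y_N(t)] from the step-function representation: each
    indicator over [(n-1)/N, n/N) contributes 1/N to the inner product."""
    n = np.arange(1, N + 1)
    return (np.cos(n * s / N) @ np.cos(n * t / N)
            + np.sin(n * s / N) @ np.sin(n * t / N)) / N

def r_YN(tau, N):
    n = np.arange(1, N + 1)
    return np.cos(n * tau / N).sum() / N

N = 40
gap = abs(cov_via_steps(3.0, 7.5, N) - r_YN(3.0 - 7.5, N))
print(gap)  # zero up to rounding
```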
By defining the functions
$$\gamma_N^1(t,\lambda) = \sum_{n=1}^{N} \cos\Big(\frac{nt}{N}\Big)\, 1\!\!1_{[\frac{n-1}{N},\frac{n}{N})}(\lambda) \qquad \text{and} \qquad \gamma_N^2(t,\lambda) = \sum_{n=1}^{N} \sin\Big(\frac{nt}{N}\Big)\, 1\!\!1_{[\frac{n-1}{N},\frac{n}{N})}(\lambda),$$
we can write
$$Y_N(t) = \int_0^1 \gamma_N^1(t,\lambda)\, dB_1(\lambda) + \int_0^1 \gamma_N^2(t,\lambda)\, dB_2(\lambda). \qquad (11)$$
In the sequel we are going to express the representations (10) and (11) in an isonormal process framework. Let us define
$\mathcal{H}_2$, the Hilbert space
$$\Big\{ \mathbf{h} = (h_1, h_2) : \int_{\mathbb{R}} h_1^2(\lambda)\,d\lambda + \int_{\mathbb{R}} h_2^2(\lambda)\,d\lambda < \infty \Big\},$$
with scalar product
$$\langle \mathbf{h}, \mathbf{g} \rangle = \int_{\mathbb{R}} h_1(\lambda) g_1(\lambda)\,d\lambda + \int_{\mathbb{R}} h_2(\lambda) g_2(\lambda)\,d\lambda.$$
The transformation
$$\mathbf{h} \mapsto W(\mathbf{h}) := \int_{\mathbb{R}} h_1(\lambda)\, dB_1(\lambda) + \int_{\mathbb{R}} h_2(\lambda)\, dB_2(\lambda)$$
defines an isometry between $\mathcal{H}_2$ and a Gaussian subspace of L²(Ω, 𝒜, P), where 𝒜 is the σ-field generated by B_1(λ) and B_2(λ). Thus $(W(\mathbf{h}))_{\mathbf{h}\in\mathcal{H}_2}$ is the isonormal process associated with $\mathcal{H}_2$. By using the representations (10) and (11), we readily get
$$X(t) = W\big( 1\!\!1_{[0,1]}(\cdot)\,(\cos t\cdot,\ \sin t\cdot) \big), \qquad Y_N(t) = W\big( 1\!\!1_{[0,1]}(\cdot)\,(\gamma_N^1(t,\cdot),\ \gamma_N^2(t,\cdot)) \big),$$
$$\tilde{X}'(t) := \frac{X'(t)}{\sqrt{1/3}} = W\Big( \frac{1\!\!1_{[0,1]}(\cdot)}{\sqrt{1/3}}\, (-\cdot\,\sin t\cdot,\ \cdot\,\cos t\cdot) \Big),$$
$$\tilde{Y}'_N(t) := \frac{Y'_N(t)}{\sqrt{-r''_{Y_N}(0)}} = W\Big( \frac{1\!\!1_{[0,1]}(\cdot)}{\sqrt{-r''_{Y_N}(0)}}\, \big( (\gamma_N^1(t,\cdot))',\ (\gamma_N^2(t,\cdot))' \big) \Big),$$
where the prime on γ denotes differentiation with respect to t.
We are now in a position to introduce the Wiener chaos, which is our second main tool. For a general reference on this topic, see [9]. Let H_k be the Hermite polynomial of degree k, defined by
$$H_k(x) = (-1)^k e^{\frac{x^2}{2}} \frac{d^k}{dx^k}\big( e^{-\frac{x^2}{2}} \big).$$
It is normalized such that, for Y a standard Gaussian random variable, we have $\mathbb{E}(H_k(Y)H_m(Y)) = \delta_{k,m}\, k!$. Consider $\{e_i\}_{i\in\mathbb{N}}$ an orthonormal basis of $\mathcal{H}_2$. Let Λ be the set of sequences a = (a_1, a_2, ...), a_i ∈ ℕ, such that all terms except a finite number vanish. For a ∈ Λ we set $a! = \prod_{i=1}^{\infty} a_i!$ and $|a| = \sum_{i=1}^{\infty} a_i$. For any multi-index a ∈ Λ we define
$$\Phi_a = \frac{1}{\sqrt{a!}} \prod_{i=1}^{\infty} H_{a_i}(W(e_i)).$$
For each n ≥ 1, we will denote by $\mathcal{H}_n$ the closed subspace of L²(Ω, 𝒜, P) spanned by the random variables {Φ_a, a ∈ Λ, |a| = n}. The space $\mathcal{H}_n$ is the nth Wiener chaos associated with B_1(λ) and B_2(λ). If $\mathcal{H}_0$ denotes the space of constants, we have the orthogonal decomposition
$$L^2(\Omega, \mathcal{A}, P) = \bigoplus_{n=0}^{\infty} \mathcal{H}_n.$$
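The normalization $\mathbb{E}(H_k(Y)H_m(Y)) = \delta_{k,m}\, k!$ can be verified with Gauss–Hermite quadrature for the weight e^{−x²/2}; NumPy's `hermite_e` module implements exactly these (probabilists') Hermite polynomials. A short sketch:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

x, w = hermegauss(60)  # nodes/weights for the weight exp(-x^2/2)

def He(k, xs):
    c = np.zeros(k + 1)
    c[k] = 1.0
    return hermeval(xs, c)

def moment(k, m):
    """E[H_k(Y) H_m(Y)] for Y standard Gaussian."""
    return (w * He(k, x) * He(m, x)).sum() / np.sqrt(2 * np.pi)

print(moment(3, 3), math.factorial(3))  # ~6.0 vs 6
print(moment(2, 4))                     # ~0
```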
For any Hermite polynomial H_q, it holds that
$$H_q(W(\mathbf{h})) = I_q(\mathbf{h}) := \int_0^{+\infty}\!\!\!\cdots\!\int_0^{+\infty} h_1(\lambda_1)\cdots h_1(\lambda_q)\, dB_1(\lambda_1)\cdots dB_1(\lambda_q) + \int_0^{+\infty}\!\!\!\cdots\!\int_0^{+\infty} h_2(\lambda_1)\cdots h_2(\lambda_q)\, dB_2(\lambda_1)\cdots dB_2(\lambda_q),$$
with $\mathbf{h} = (h_1, h_2)$. For instance, as $Y_N(t) = W\big( 1\!\!1_{[0,1]}(\cdot)(\gamma_N^1(t,\cdot), \gamma_N^2(t,\cdot)) \big)$, we obtain
$$H_2(Y_N(t)) = \int_0^1\!\!\int_0^1 \gamma_N^1(t,\lambda_1)\,\gamma_N^1(t,\lambda_2)\, dB_1(\lambda_1)\, dB_1(\lambda_2) + \int_0^1\!\!\int_0^1 \gamma_N^2(t,\lambda_1)\,\gamma_N^2(t,\lambda_2)\, dB_2(\lambda_1)\, dB_2(\lambda_2).$$
We now write the Wiener chaos expansion for the number of crossings. As the absolute value function belongs to L²(ℝ, ϕ(x)dx), where ϕ is the standard Gaussian density, we have $|x| = \sum_{k=0}^{\infty} a_{2k} H_{2k}(x)$ with
$$a_{2k} = \frac{2\,(-1)^{k+1}}{\sqrt{2\pi}\; 2^k\, k!\, (2k-1)}.$$
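The coefficients $a_{2k} = \mathbb{E}[|Z| H_{2k}(Z)]/(2k)!$ can be checked against the closed form by direct numerical integration; a sketch (helper names ours):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

xs = np.linspace(0.0, 12.0, 200001)  # |x| is even: integrate on [0, inf) and double
h = xs[1] - xs[0]
phi = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)

def a2k_num(k):
    c = np.zeros(2 * k + 1)
    c[2 * k] = 1.0
    f = xs * hermeval(xs, c) * phi
    integral = (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * h  # trapezoid rule
    return 2 * integral / math.factorial(2 * k)

def a2k_closed(k):
    return 2 * (-1) ** (k + 1) / (np.sqrt(2 * np.pi) * 2**k * math.factorial(k) * (2 * k - 1))

errs = [abs(a2k_num(k) - a2k_closed(k)) for k in range(4)]
print(max(errs))  # tiny
```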
It is shorter to study first X_N(t) on [0, π] (resp. Y_N(t) and X(t) on [0, Nπ]); the generalization to [0, 2π] (resp. to [0, 2Nπ]) will be done in Section 4.
The result of Kratz & León [6], or Theorem 10.10 in [2], implies
$$\frac{1}{\sqrt{N\pi}}\Big( N^{X}_{[0,\pi N]}(u) - \mathbb{E}\,N^{X}_{[0,\pi N]}(u) \Big) = \sqrt{1/3}\,\varphi(u) \sum_{q=1}^{\infty} \sum_{k=0}^{[\frac{q}{2}]} \frac{H_{q-2k}(u)}{(q-2k)!}\, a_{2k}\, \frac{1}{\sqrt{N\pi}} \int_0^{\pi N} H_{q-2k}(X(s))\, H_{2k}(\tilde{X}'(s))\, ds, \qquad (12)$$
where [x] denotes the integer part. We introduce the notation
$$f_q(u, x_1, x_2) = \varphi(u) \sum_{k=0}^{[\frac{q}{2}]} \frac{H_{q-2k}(u)}{(q-2k)!}\, a_{2k}\, H_{q-2k}(x_1)\, H_{2k}(x_2). \qquad (13)$$
For each s, the random variable
2). (13) For each s the random variable
f
q(u, X(s), X ˜
0(s)) = ϕ(u)
[q2]
X
k=0
H
q−2k(u) (q − 2k)! a
2k×H
q−2k(W ( 1I
[0,1](·, ·)(cos s·, sin s·)))H
2k(W ( 1I
[0,1]p
1/3 (·, ·)(− sin s·, cos s·)))
belongs the q-th chaos as a consequence of linearity and the property of mul- tiplication of two functionals belonging to different chaos, cf. [9] Proposition 1.1.3. Furthermore also by linearity the same is true for
$$I_q[0, t] = \frac{\sqrt{1/3}}{\sqrt{t}} \int_0^t f_q(u, X(s), \tilde{X}'(s))\, ds. \qquad (14)$$
So that
$$\frac{1}{\sqrt{N\pi}}\Big( N^{X}_{[0,\pi N]}(u) - \mathbb{E}\,N^{X}_{[0,\pi N]}(u) \Big) = \sum_{q=1}^{\infty} I_q[0, \pi N]$$
gives the decomposition in the Wiener chaos. The same type of expansion is also true for $N^{Y_N}_{[0,\pi N]}(u)$:
$$\frac{1}{\sqrt{N\pi}}\Big( N^{Y_N}_{[0,\pi N]}(u) - \mathbb{E}\,N^{Y_N}_{[0,\pi N]}(u) \Big) = \sum_{q=1}^{\infty} I_{q,N}[0, \pi N], \qquad (15)$$
where
$$I_{q,N}[0, \pi N] = \frac{\sqrt{-r''_{Y_N}(0)}}{\sqrt{\pi N}} \int_0^{\pi N} f_q(u, Y_N(s), \tilde{Y}'_N(s))\, ds. \qquad (16)$$
Our first goal is to compute the limit variance of (15). Our main tool will be the Arcones inequality. We define the norm
N0(s))ds. (16) Our first goal is to compute the limit variance of (15). Our main tool will be the Arcones inequality. We define the norm
||f
q||
2:=
Ef
q2(u, Z
1, Z
2),
where (Z
1, Z
2) is a bidimensional standard Gaussian vector. We have
||f
q||
2= ϕ
2(u)
[q2]
X
k=0
H
q−2k2(u)
(q − 2k)! a
22k(2k)! ≤ (const)
[q2]
X
k=0
a
22k(2k)! ≤ (const), where (const) is some constant that does not depend on q. Now we must introduce the Arcone’s coefficient of dependence [1]
$$\psi_N(\tau) = \sup\left\{\, |r_{Y_N}(\tau)|,\ \frac{|r'_{Y_N}(\tau)|}{\sqrt{-r''_{Y_N}(0)}},\ \frac{|r''_{Y_N}(\tau)|}{|r''_{Y_N}(0)|}\, \right\}.$$
The Arcones inequality says that, if ψ_N(s' − s) < 1, it holds that
$$\Big| \mathbb{E}\big[ f_q(u, Y_N(s), \tilde{Y}'_N(s))\, f_q(u, Y_N(s'), \tilde{Y}'_N(s')) \big] \Big| \le \psi_N^q(s' - s)\, \|f_q\|^2.$$
We will also use the following lemma, whose proof is given in Section 5.

Lemma 1 For every a > 0, there exists a constant K_a such that
$$\sup_N \mathrm{Var}\, N^{Y_N}_{[0,a]}(u) \le K_a < \infty. \qquad (17)$$

Choose some ρ < 1. Using the inequalities (9), we can choose a large enough such that for τ > a we have ψ_N(τ) ≤ K/τ ≤ ρ.
Then we partition [0, Nπ] into L = [Nπ/a] intervals J_1, ..., J_L of length larger than a, and we set for short $N_\ell = N^{Y_N}_{J_\ell}(u)$.
We have
$$\mathrm{Var}\big( N^{Y_N}_{[0,N\pi]}(u) \big) = \mathrm{Var}(N_1 + \cdots + N_L) = \sum_{\ell,\ell':\, |\ell-\ell'|\le 1} \mathrm{Cov}(N_\ell, N_{\ell'}) + \sum_{\ell,\ell':\, |\ell-\ell'|>1} \mathrm{Cov}(N_\ell, N_{\ell'}).$$
The first sum is easily shown to be O(N) by applying Lemma 1 and the Cauchy–Schwarz inequality.
Let us look at a term of the second sum. Using the expansion (15) we set
$$\frac{N_\ell - \mathbb{E}(N_\ell)}{\sqrt{\pi N}} = \sum_{q=1}^{\infty} I_{q,N}(J_\ell), \qquad \text{where} \qquad I_{q,N}(J_\ell) = \frac{\sqrt{-r''_{Y_N}(0)}}{\sqrt{\pi N}} \int_{J_\ell} f_q(u, Y_N(s), \tilde{Y}'_N(s))\, ds.$$
Let us consider the terms corresponding to q > 1. The Arcones inequality implies that
$$\big| \mathrm{Cov}\big( I_{q,N}(J_\ell), I_{q,N}(J_{\ell'}) \big) \big| \le \int_{J_\ell \times J_{\ell'}} \frac{1}{N\pi}\, \big(-r''_{Y_N}(0)\big) (K/\tau)^q\, C\, ds\, dt \le (\mathrm{const}) \int_{J_\ell \times J_{\ell'}} \rho^{q-2}\, \tau^{-2}\, ds\, dt, \qquad (18)$$
where τ = s − t. Summing over all pairs of intervals and over q ≥ 2, it is easy to check that this sum is bounded.
It remains to study the case q = 1. Since H_1(x) = x,
$$I_{1,N}(J_\ell) = (N\pi)^{-1/2} \sqrt{-r''_{Y_N}(0)}\; u\, a_0\, \varphi(u) \int_{J_\ell} Y_N(s)\, ds.$$
So that
$$\sum_{\ell,\ell':\, |\ell-\ell'|>1} \mathrm{Cov}\big( I_{1,N}(J_\ell), I_{1,N}(J_{\ell'}) \big) \le (\mathrm{const})\, \frac{1}{N} \int_0^{\pi N}\!\!\int_0^{\pi N} r_{Y_N}(s - s')\, ds\, ds',$$
which is bounded because of the following result:
$$\frac{1}{N} \int_0^{\pi N}\!\!\int_0^{\pi N} r_{Y_N}(s - s')\, ds\, ds' = \frac{2}{N} \int_0^{\pi N} (\pi N - \tau)\, r_{Y_N}(\tau)\, d\tau = 2 \sum_{n=1}^{N} \int_0^{\pi N} \Big( \pi - \frac{\tau}{N} \Big)\, \frac{1}{N} \cos\Big( n\frac{\tau}{N} \Big)\, d\tau$$
$$= 2 \sum_{n=1}^{N} \frac{1 - \cos n\pi}{n^2} = 4 \sum_{j=0}^{[\frac{N-1}{2}]} \frac{1}{(2j+1)^2} \to 4 \sum_{j=0}^{\infty} \frac{1}{(2j+1)^2} = 4\, \frac{\pi^2}{8} = \frac{\pi^2}{2}. \qquad (19)$$
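The chain of identities in (19) can be verified numerically; the sketch below (our own code) compares a trapezoid approximation of the integral with the exact sum and with the limit π²/2:

```python
import numpy as np

N = 40
tau = np.linspace(0.0, np.pi * N, 200001)
h = tau[1] - tau[0]
r = np.zeros_like(tau)
for k in range(1, N + 1):        # r_{Y_N}(tau) = (1/N) sum_n cos(n tau / N)
    r += np.cos(k * tau / N)
r /= N
f = (np.pi * N - tau) * r
integral = (2 / N) * ((0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * h)  # trapezoid rule

n = np.arange(1, N + 1)
exact = 2 * ((1 - np.cos(n * np.pi)) / n**2).sum()  # = 4 * sum over odd n of 1/n^2
print(integral, exact, np.pi**2 / 2)
```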
Define
$$\sigma_q^2 := \lim_{N\to\infty} \mathrm{Var}\big( I_q([0, \pi N]) \big) < \infty.$$

Proposition 2 For q > 1 we have
$$\mathrm{Var}\big( I_{q,N}([0, \pi N]) \big) \to \sigma_q^2 \quad \text{as } N \to +\infty.$$
For q = 1,
$$\mathrm{Var}\big( I_{1,N}([0, \pi N]) \big) \to \frac{1}{3}\, u^2 \varphi^2(u).$$
In the case u ≠ 0, this limit is different from
$$\lim_{N\to\infty} \mathrm{Var}\big( I_1([0, \pi N]) \big) = \frac{2}{3}\, u^2 \varphi^2(u).$$
Remark 1 This different behavior, depending on the chaos considered, is made explicit by the Wiener chaos decomposition.
Proof: First we consider the case q ≥ 2:
$$\mathbb{E}\, I^2_{q,N}([0, N\pi]) = -r''_{Y_N}(0)\, \varphi^2(u) \sum_{k_1=0}^{[\frac{q}{2}]} \sum_{k_2=0}^{[\frac{q}{2}]} \frac{H_{q-2k_1}(u)}{(q-2k_1)!}\, a_{2k_1}\, \frac{H_{q-2k_2}(u)}{(q-2k_2)!}\, a_{2k_2}\, \frac{1}{N\pi} \int_0^{N\pi}\!\!\int_0^{N\pi} \mathbb{E}\big[ H_{q-2k_1}(Y_N(s))\, H_{2k_1}(\tilde{Y}'_N(s))\, H_{q-2k_2}(Y_N(s'))\, H_{2k_2}(\tilde{Y}'_N(s')) \big]\, ds'\, ds$$
$$= -r''_{Y_N}(0)\, \varphi^2(u) \sum_{k_1=0}^{[\frac{q}{2}]} \sum_{k_2=0}^{[\frac{q}{2}]} \frac{H_{q-2k_1}(u)}{(q-2k_1)!}\, a_{2k_1}\, \frac{H_{q-2k_2}(u)}{(q-2k_2)!}\, a_{2k_2}\; 2 \int_0^{\pi N} \Big( 1 - \frac{s}{N\pi} \Big)\, \mathbb{E}\big[ H_{q-2k_1}(Y_N(0))\, H_{2k_1}(\tilde{Y}'_N(0))\, H_{q-2k_2}(Y_N(s))\, H_{2k_2}(\tilde{Y}'_N(s)) \big]\, ds,$$
where $\tilde{Y}'_N = Y'_N / \sqrt{-r''_{Y_N}(0)}$.
We now use the generalized Mehler formula (Lemma 10.7 page 270 of [2]).
Lemma 3 Let (X_1, X_2, X_3, X_4) be a centered Gaussian vector with variance matrix
$$\Sigma = \begin{pmatrix} 1 & 0 & \rho_{13} & \rho_{14} \\ 0 & 1 & \rho_{23} & \rho_{24} \\ \rho_{13} & \rho_{23} & 1 & 0 \\ \rho_{14} & \rho_{24} & 0 & 1 \end{pmatrix}.$$
Then, if r_1 + r_2 = r_3 + r_4,
$$\mathbb{E}\big[ H_{r_1}(X_1)\, H_{r_2}(X_2)\, H_{r_3}(X_3)\, H_{r_4}(X_4) \big] = \sum_{(d_1,d_2,d_3,d_4)\in J} \frac{r_1!\, r_2!\, r_3!\, r_4!}{d_1!\, d_2!\, d_3!\, d_4!}\; \rho_{13}^{d_1}\, \rho_{14}^{d_2}\, \rho_{23}^{d_3}\, \rho_{24}^{d_4},$$
where J is the set of d_i's satisfying d_i ≥ 0 and
$$d_1 + d_2 = r_1; \quad d_3 + d_4 = r_2; \quad d_1 + d_3 = r_3; \quad d_2 + d_4 = r_4. \qquad (20)$$
If r_1 + r_2 ≠ r_3 + r_4, the expectation is equal to zero.
Using this lemma, there exist a finite set J_q and constants C_{q,k_1,k_2} such that
$$\Big| \mathbb{E}\big[ H_{q-2k_1}(Y_N(0))\, H_{2k_1}(\tilde{Y}'_N(0))\, H_{q-2k_2}(Y_N(\tau))\, H_{2k_2}(\tilde{Y}'_N(\tau)) \big] \Big| \le \sum_{(d_1,d_2,d_3,d_4)\in J_q} C_{q,k_1,k_2}\, |r_{Y_N}(\tau)|^{d_1} \left| \frac{r'_{Y_N}(\tau)}{\sqrt{-r''_{Y_N}(0)}} \right|^{d_2+d_3} \left| \frac{r''_{Y_N}(\tau)}{r''_{Y_N}(0)} \right|^{d_4} =: \tilde{G}_{q,k_1,k_2,N}(\tau), \qquad (21)$$
where the d_i satisfy d_1 + d_2 + d_3 + d_4 = q.
This clearly proves that
$$\mathbb{E}\big[ H_{q-2k_1}(Y_N(0))\, H_{2k_1}(\tilde{Y}'_N(0))\, H_{q-2k_2}(Y_N(\tau))\, H_{2k_2}(\tilde{Y}'_N(\tau)) \big] \to \mathbb{E}\big[ H_{q-2k_1}(X(0))\, H_{2k_1}(\tilde{X}'(0))\, H_{q-2k_2}(X(\tau))\, H_{2k_2}(\tilde{X}'(\tau)) \big],$$
and Formula (18) gives a domination proving the convergence of the integral and the fact that σ_q² is finite.
Let us look at the case q = 1:
$$\mathbb{E}\, I^2_{1,N}([0, N\pi]) = -r''_{Y_N}(0)\, \varphi^2(u)\, (u a_0)^2\, \frac{1}{N\pi} \int_0^{N\pi}\!\!\int_0^{N\pi} \mathbb{E}\big( Y_N(s)\, Y_N(s') \big)\, ds'\, ds \to \frac{1}{3}\, \varphi^2(u)\, \frac{2u^2}{\pi}\, \frac{\pi}{2} = \frac{1}{3}\, u^2 \varphi^2(u), \qquad (22)$$
using (19) and $a_0^2 = 2/\pi$.
On the other hand we have
$$\mathbb{E}\, I^2_1([0, N\pi]) = \frac{1}{3}\, \varphi^2(u)\, (u a_0)^2\, \frac{1}{N\pi} \int_0^{N\pi}\!\!\int_0^{N\pi} \frac{\sin(s - s')}{s - s'}\, ds'\, ds = \frac{1}{3}\, \varphi^2(u)\, \frac{4u^2}{\pi^2} \int_0^{N\pi} \Big( \pi - \frac{\tau}{N} \Big)\, \frac{\sin(\tau)}{\tau}\, d\tau \to \frac{2}{3}\, u^2 \varphi^2(u). \qquad (23)$$
4 Central limit theorem with a chaining argument
In this section we first establish a central limit theorem, Theorem 4, for the crossings of the process X(t); in a second step, we show that it implies our main result, Theorem 5, the central limit theorem for the crossings of X_N(t).
The covariance r(t) of the limit process X(t) is not summable, in the sense that
$$\int_0^{+\infty} |r(t)|\, dt = +\infty,$$
but it satisfies: $\int_0^N r(t)\, dt$ converges as N → ∞, and for q > 1,
$$\int_0^{+\infty} |r(t)|^q\, dt < +\infty.$$
The following theorem is a direct adaptation of Theorem 1 in [7] or of Theorem 10.11 in [2]. Its proof is given in Section 6 for completeness.
Theorem 4 As t → +∞,
$$\frac{1}{\sqrt{t}}\Big( N^X_{[0,t]}(u) - \mathbb{E}\big( N^X_{[0,t]}(u) \big) \Big) \Rightarrow \mathcal{N}\Big( 0,\ \frac{2}{3}\, u^2 \varphi^2(u) + \sum_{q=2}^{\infty} \sigma_q^2(u) \Big),$$
where ⇒ denotes convergence in distribution.
The main idea is to use this result and extend it to the crossings of Y_N(t).
Our main result is the following:
Theorem 5 As N → +∞,
1. $$\frac{1}{\sqrt{N\pi}}\Big( N^{Y_N}_{[0,N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[0,N\pi]}(u) \big) \Big) \Rightarrow \mathcal{N}\Big( 0,\ \frac{1}{3}\, u^2 \varphi^2(u) + \sum_{q=2}^{\infty} \sigma_q^2(u) \Big),$$
2. $$\frac{1}{\sqrt{2N\pi}}\Big( N^{Y_N}_{[0,2N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[0,2N\pi]}(u) \big) \Big) \Rightarrow \mathcal{N}\Big( 0,\ \frac{2}{3}\, u^2 \varphi^2(u) + \sum_{q=2}^{\infty} \sigma_q^2(u) \Big).$$
Remark 2 We point out that in the case u = 0 the two limit variances are the same, and this recovers the result of Granville and Wigman [5]; in the other cases this is a new result. The chaos method permits an easy interpretation of the difference between these two behaviors.
Proof:
Let us introduce the cross correlation:
$$\rho_N(s, t) = \mathbb{E}\big( X(s)\, Y_N(t) \big) = \sum_{n=1}^{N} \int_{\frac{n-1}{N}}^{\frac{n}{N}} \cos\Big( s\lambda - t\frac{n}{N} \Big)\, d\lambda = \sum_{n=1}^{N} \int_0^{\frac{1}{N}} \cos\Big( (s-t)\frac{n}{N} - sv \Big)\, dv = \Re\Big\{ \int_0^{\frac{1}{N}} e^{-isv}\, dv \sum_{n=1}^{N} e^{i(s-t)\frac{n}{N}} \Big\},$$
where ℜ denotes the real part. So we can write
$$\rho_N(s, t) = \frac{\sin(s/N)}{s/N}\, r_{Y_N}(t - s) + \frac{1 - \cos(s/N)}{s^2/(2N^2)}\; \frac{s}{2N^2} \sum_{n=1}^{N} \sin\Big( (s-t)\frac{n}{N} \Big).$$
The two functions sin(z)/z and (1 − cos(z))/(z²/2) are bounded, with bounded derivatives, and sin(z)/z tends to 1 as z tends to 0. We also have
$$\Big| \frac{1}{N} \sum_{n=1}^{N} \sin\Big( (s-t)\frac{n}{N} \Big) \Big| = \left| \frac{\sin\frac{s-t}{2}\, \sin\big( \frac{N+1}{2N}(s-t) \big)}{N \sin\frac{s-t}{2N}} \right| \le (\mathrm{const})\, |s-t|^{-1},$$
whenever |s − t| < πN.
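The convergence ρ_N(s, t) → r(s − t) can be observed directly from the defining sum, whose inner integrals are computed exactly; a sketch (helper name ours):

```python
import numpy as np

def rho_N(s, t, N):
    """E[X(s) Y_N(t)]: sum over n of the exact integral of
    cos(s*lam - t*n/N) over [(n-1)/N, n/N)."""
    n = np.arange(1, N + 1)
    hi, lo = n / N, (n - 1) / N
    return ((np.sin(s * hi - t * n / N) - np.sin(s * lo - t * n / N)) / s).sum()

s, t = 5.0, 9.0
target = np.sin(s - t) / (s - t)
gaps = [abs(rho_N(s, t, N) - target) for N in (10, 100, 1000)]
print(gaps)  # shrinking with N, roughly O(1/N)
```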
We have already proved that $r_{Y_N}(s-t) = \frac{1}{N}\sum_{n=1}^{N}\cos\big( (s-t)\frac{n}{N} \big)$ converges to r(s − t) uniformly on every compact set that does not contain zero. The same result is true for the first two derivatives, which converge respectively to the corresponding derivatives of r(s − t). In addition, for large values of |s − t| these functions are bounded by K|s − t|^{-1}, and for each fixed s,
$$\frac{s}{2N^2} \sum_{n=1}^{N} \sin\Big( (s-t)\frac{n}{N} \Big) \to 0.$$
Using the differentiation rules, it is easy to see that this is enough to have
$$\rho_N(s, t) \to r(s - t),$$
$$\frac{\partial \rho_N(s, t)}{\partial s} = \mathbb{E}\big( X'(s)\, Y_N(t) \big) \to r'(s - t),$$
$$\frac{\partial \rho_N(s, t)}{\partial t} = \mathbb{E}\big( X(s)\, Y'_N(t) \big) \to -r'(s - t),$$
$$\frac{\partial^2 \rho_N(s, t)}{\partial s\, \partial t} = \mathbb{E}\big( X'(s)\, Y'_N(t) \big) \to -r''(s - t),$$
the convergence again being uniform on every compact set that does not contain zero. In addition, these functions are bounded by (const)|s − t|^{-1}.
Before beginning the proofs, we present two results that were established in Peccati & Tudor [10] (Theorem 1 and Proposition 2), which we state as a theorem for later reference. We will denote by ζ_{q,r} a generic element of the q-th chaos depending on a parameter r that tends to infinity. For instance, in our case we will have ζ_{q,t} = I_q([0, t]) and ζ_{q,N} = I_{q,N}([0, πN]), respectively.
Theorem 6
(i) Assume that for every q_1 ≤ q_2 ≤ ... ≤ q_m it holds that
$$\lim_{t\to\infty} \mathbb{E}[\zeta_{q_i,t}]^2 = \sigma_{ii}^2, \qquad \text{and that for } i \ne j, \quad \lim_{t\to\infty} \mathbb{E}[\zeta_{q_i,t}\, \zeta_{q_j,t}] = 0.$$
Then, if D_m is the diagonal matrix with entries σ_{ii}², Theorem 1 of [10] says that the random vector
$$(\zeta_{q_1,t}, \ldots, \zeta_{q_m,t}) \Rightarrow \mathcal{N}(0, D_m)$$
if and only if each ζ_{q_i,t} converges in distribution towards 𝒩(0, σ_{ii}²) when t → ∞.
(ii) Considering now d functionals of the q-th chaos $\{\zeta^l_{q,r}\}_{l=1}^{d}$, Proposition 2 of [10] says that
$$(\zeta^1_{q,r}, \zeta^2_{q,r}, \ldots, \zeta^d_{q,r}) \Rightarrow \mathcal{N}(0, C)$$
if and only if $\zeta^i_{q,r} \Rightarrow \mathcal{N}(0, c_{ii})$ and $\mathbb{E}[\zeta^i_{q,r}\, \zeta^j_{q,r}] \to c_{ij}$ when r → ∞, where c_{ij} is the (i, j) entry of the matrix C.
We are now ready to prove the following lemma.

Lemma 7 For q ≥ 2,
$$\lim_{N\to\infty} \mathbb{E}\big[ I_{q,N}([0, N\pi]) - I_q([0, N\pi]) \big]^2 = 0.$$
Proof:
$$\mathbb{E}\big[ I_{q,N}([0, N\pi]) - I_q([0, N\pi]) \big]^2 = \mathbb{E}\, I^2_{q,N}([0, N\pi]) + \mathbb{E}\, I^2_q([0, N\pi]) - 2\, \mathbb{E}\big[ I_{q,N}([0, N\pi])\, I_q([0, N\pi]) \big].$$
We have already shown that the first two terms tend to σ_q²(u). It only remains to prove that the third one does also. But, since the cross correlation ρ_N(s, t) shares all the properties of r_{Y_N}(s − t), the same proof as in Section 3 shows that the limit is again σ_q²(u).
We now finish the proof of Theorem 5.
Proof of 1. The case of I_{1,N}([0, Nπ]) is easy to handle, since it is already a Gaussian variable and its limit variance is easy to compute using (19). By Lemma 7, for q ≥ 2, I_{q,N}([0, Nπ]) inherits the asymptotic Gaussian behavior of I_q([0, Nπ]). By using (i) of Theorem 6, this is enough to obtain the normality of the sum.
Proof of 2. We have already proved that
$$\chi_N(1) := \frac{1}{\sqrt{N\pi}}\Big( N^{Y_N}_{[0,N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[0,N\pi]}(u) \big) \Big) \Rightarrow \mathcal{N}\Big( 0,\ \frac{1}{3}\, u^2 \varphi^2(u) + \sum_{q=2}^{\infty} \sigma_q^2(u) \Big);$$
the same result holds by stationarity for the sequence
$$\chi_N(2) := \frac{1}{\sqrt{N\pi}}\Big( N^{Y_N}_{[N\pi,2N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[N\pi,2N\pi]}(u) \big) \Big),$$
and we have
$$\frac{1}{\sqrt{2N\pi}}\Big( N^{Y_N}_{[0,2N\pi]}(u) - \mathbb{E}\big( N^{Y_N}_{[0,2N\pi]}(u) \big) \Big) = \frac{1}{\sqrt{2}}\big( \chi_N(1) + \chi_N(2) \big).$$
It only remains to show that the limit of the vector (χ_N(1), χ_N(2)) is jointly Gaussian and that the variance of the sum converges to the corresponding one. Defining
$$I_{q,N}([\pi N, 2\pi N]) = \frac{\sqrt{-r''_{Y_N}(0)}}{\sqrt{\pi N}} \int_{\pi N}^{2\pi N} f_q(u, Y_N(s), \tilde{Y}'_N(s))\, ds,$$
we can write the sum above as
$$\frac{1}{\sqrt{2}}\big( \chi_N(1) + \chi_N(2) \big) = \frac{1}{\sqrt{2}} \Big( \sum_{q=1}^{\infty} I_{q,N}([0, \pi N]) + \sum_{q=1}^{\infty} I_{q,N}([\pi N, 2\pi N]) \Big),$$
and, given that the limit variance is finite, we have
$$\frac{1}{\sqrt{2}}\big( \chi_N(1) + \chi_N(2) \big) = \frac{1}{\sqrt{2}} \Big( \sum_{q=1}^{Q} I_{q,N}([0, \pi N]) + \sum_{q=1}^{Q} I_{q,N}([\pi N, 2\pi N]) \Big) + o_P(1),$$
where o_P(1) denotes a term that tends to zero in probability as Q → ∞, uniformly in N. Let us first consider the term corresponding to the first chaos (q = 1). We have
$$E := \mathbb{E}\big[ I_{1,N}([0, N\pi])\, I_{1,N}([N\pi, 2N\pi]) \big] = -r''_{Y_N}(0)\, \varphi^2(u)\, (u a_0)^2\, \frac{1}{N\pi} \int_0^{N\pi}\!\!\int_{N\pi}^{2N\pi} \mathbb{E}\big( Y_N(s)\, Y_N(s') \big)\, ds'\, ds = -r''_{Y_N}(0)\, \varphi^2(u)\, (u a_0)^2\, \frac{1}{N\pi} \int_0^{N\pi}\!\!\int_{N\pi}^{2N\pi} r_{Y_N}(s' - s)\, ds'\, ds;$$
making the change of variable s' − s = τ we get
$$E = -r''_{Y_N}(0)\, \varphi^2(u)\, (u a_0)^2\, \frac{1}{N\pi} \Big( \int_0^{\pi N} \tau\, r_{Y_N}(\tau)\, d\tau + \int_{\pi N}^{2\pi N} (2\pi N - \tau)\, r_{Y_N}(\tau)\, d\tau \Big).$$
Since r_{Y_N} is periodic with period 2πN,
$$E = -r''_{Y_N}(0)\, \varphi^2(u)\, (u a_0)^2\, \frac{1}{N\pi} \Big( \int_0^{\pi N} \tau\, r_{Y_N}(\tau)\, d\tau - \int_{-\pi N}^{0} \tau\, r_{Y_N}(\tau)\, d\tau \Big) = -r''_{Y_N}(0)\, \varphi^2(u)\, (u a_0)^2\, \frac{2}{N\pi} \int_0^{\pi N} \tau\, r_{Y_N}(\tau)\, d\tau \to \frac{1}{3}\, \varphi^2(u)\, u^2,$$
using the same computation as for (19).
This implies that
$$\frac{1}{2}\, \mathbb{E}\big[ I_{1,N}([0, N\pi]) + I_{1,N}([N\pi, 2N\pi]) \big]^2 \to \frac{2}{3}\, \varphi^2(u)\, u^2.$$
Since the two random variables I_{1,N}([0, Nπ]) and I_{1,N}([Nπ, 2Nπ]) are jointly Gaussian, this implies the convergence of $\frac{1}{\sqrt{2}}(\chi_N(1) + \chi_N(2))$ in distribution.
Let us now consider the terms in the other chaoses (q ≥ 2):
$$\mathbb{E}\big[ I_{q,N}([0, N\pi])\, I_{q,N}([N\pi, 2N\pi]) \big] = -r''_{Y_N}(0)\, \varphi^2(u) \sum_{k_1=0}^{[\frac{q}{2}]} \sum_{k_2=0}^{[\frac{q}{2}]} \frac{H_{q-2k_1}(u)}{(q-2k_1)!}\, a_{2k_1}\, \frac{H_{q-2k_2}(u)}{(q-2k_2)!}\, a_{2k_2}\, \frac{1}{\pi N} \int_0^{\pi N}\!\!\int_{\pi N}^{2\pi N} G_{q,k_1,k_2,N}(s - s')\, ds\, ds',$$
where we have put
$$G_{q,k_1,k_2,N}(s - s') = \mathbb{E}\big[ H_{q-2k_1}(Y_N(0))\, H_{2k_1}(\tilde{Y}'_N(0))\, H_{q-2k_2}(Y_N(s - s'))\, H_{2k_2}(\tilde{Y}'_N(s - s')) \big].$$
A change of variables and Fubini's theorem give
$$\frac{1}{\pi N} \int_0^{\pi N}\!\!\int_{\pi N}^{2\pi N} G_{q,k_1,k_2,N}(s - s')\, ds\, ds' = \frac{1}{N\pi} \Big( \int_0^{\pi N} \tau\, G_{q,k_1,k_2,N}(\tau)\, d\tau - \int_{\pi N}^{2\pi N} (2\pi N - \tau)\, G_{q,k_1,k_2,N}(\tau)\, d\tau \Big)$$
$$= \frac{1}{N\pi} \Big( \int_0^{\pi N} \tau\, G_{q,k_1,k_2,N}(\tau)\, d\tau + \int_0^{\pi N} \tau\, G_{q,k_1,k_2,N}(-\tau)\, d\tau \Big),$$
where this last equality is a consequence of periodicity and of the change of variable τ = v + 2πN in the second integral. In this form we get
$$\Big| \frac{1}{\pi N} \int_0^{\pi N}\!\!\int_{\pi N}^{2\pi N} G_{q,k_1,k_2,N}(s - s')\, ds\, ds' \Big| \le \frac{2}{N\pi} \int_0^{\pi N} \tau\, \tilde{G}_{q,k_1,k_2,N}(\tau)\, d\tau.$$
$\tilde{G}_{q,k_1,k_2,N}(\tau)$ has been defined in (21), and we also recall that this function is even. Moreover, it is plain that over any compact interval [0, a] it holds that
$$\lim_{N\to\infty} \frac{2}{N\pi} \int_0^a \tau\, \tilde{G}_{q,k_1,k_2,N}(\tau)\, d\tau = 0;$$
for the integral over [a, πN] we use the bound (9) and the Arcones inequality. Thereby
$$\lim_{N\to\infty} \frac{2}{N\pi} \int_0^{\pi N} \tau\, \tilde{G}_{q,k_1,k_2,N}(\tau)\, d\tau = 0.$$
By using (ii) of Theorem 6, we get for q ≥ 2
$$\big( I_{q,N}([0, N\pi]),\ I_{q,N}([N\pi, 2N\pi]) \big) \Rightarrow \mathcal{N}(0, \sigma_q^2 I),$$
where I is the identity matrix in ℝ². Defining
$$I_{q,N}([0, 2N\pi]) = \frac{1}{\sqrt{2}} \big( I_{q,N}([0, N\pi]) + I_{q,N}([N\pi, 2N\pi]) \big),$$
it holds for each q that $I_{q,N}([0, 2N\pi]) \Rightarrow \mathcal{N}(0, \sigma_q^2)$; this asymptotic normality also holds for q = 1. The theorem now follows by applying again (i) of Theorem 6 and the expansion (12).
5 Proof of Lemma 1
It suffices to prove that $N^{Y_N}_{[0,a]}(u)$ has a second moment which is bounded uniformly in N. Let $U^{Y_N}_{[0,a]}(u)$ be the number of up-crossings of the level u by Y_N(t) in the interval [0, a], i.e. the number of instants t such that Y_N(t) = u and Y'_N(t) > 0. Rolle's theorem implies
$$N^{Y_N}_{[0,a]}(u) \le 2\, U^{Y_N}_{[0,a]}(u) + 1.$$
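The inequality N ≤ 2U + 1 can be observed on simulated paths; a small sketch (grid-based counting, helper names ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def counts(x, u):
    """Total crossings and up-crossings of level u along a sampled path."""
    s = np.sign(x - u)
    idx = np.nonzero(s[:-1] * s[1:] < 0)[0]
    up = np.count_nonzero(x[idx + 1] > x[idx])  # crossings with positive slope
    return len(idx), up

N = 20
t = np.linspace(0, 2 * np.pi, 50 * N, endpoint=False)
n = np.arange(1, N + 1)
S, C = np.sin(np.outer(t, n)), np.cos(np.outer(t, n))
ok = True
for _ in range(200):
    a, b = rng.standard_normal(N), rng.standard_normal(N)
    nc, up = counts((S @ a + C @ b) / np.sqrt(N), 0.5)
    ok &= nc <= 2 * up + 1
print(ok)  # True
```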
So it suffices to give a bound for the second moment of the number of up-crossings. Writing U for $U^{Y_N}_{[0,a]}(u)$ for short, we have
$$\mathbb{E}(U^2) = \mathbb{E}\big( U(U - 1) \big) + \mathbb{E}(U).$$
We have already proven that the last term gives a finite contribution after normalization. For studying the first one, we define the function θ_N(τ) by
$$r_{Y_N}(\tau) = 1 + r''_{Y_N}$$