HAL Id: hal-01659406
https://hal.inria.fr/hal-01659406
Preprint submitted on 8 Dec 2017
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Asymptotic expansion for vector-valued sequences of random variables with focus on Wiener chaos
Ciprian Tudor, Nakahiro Yoshida
To cite this version:
Ciprian Tudor, Nakahiro Yoshida. Asymptotic expansion for vector-valued sequences of random vari-
ables with focus on Wiener chaos. 2017. ⟨hal-01659406⟩
Asymptotic expansion for vector-valued sequences of random variables with focus on Wiener chaos

Ciprian A. Tudor 1,∗   Nakahiro Yoshida 2,3,†

1 Laboratoire Paul Painlevé, Université de Lille 1, F-59655 Villeneuve d'Ascq, France. tudor@math.univ-lille1.fr
2 Graduate School of Mathematical Science, University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153, Japan
3 Japan Science and Technology Agency CREST. nakahiro@ms.utokyo.ac.jp

December 8, 2017
Abstract
We develop the asymptotic expansion theory for vector-valued sequences $(F_N)_{N\geq 1}$ of random variables in terms of the convergence of the Stein-Malliavin matrix associated to the sequence $F_N$. Our approach combines the classical Fourier approach and the recent theory on Stein's method and Malliavin calculus. We find the second-order term of the asymptotic expansion of the density of $F_N$ and we illustrate our results by several examples.
2010 AMS Classification Numbers: 62M09, 60F05, 62H12
Key Words and Phrases: Asymptotic expansion, Stein-Malliavin calculus, Quadratic variation, Fractional Brownian motion, Central limit theorem, Fourth Moment Theorem.
1 Introduction
The analysis of the convergence in distribution of sequences of random variables constitutes a fundamental direction in probability and statistics. In particular, the asymptotic expansion represents a technique that allows one to give a precise approximation of the densities of probability distributions.

∗ The author was partially supported by Project ECOS NCRS-CONICYT C15E05, MEC PAI80160046 and MATHAMSUD 16-MATH-03 SIDRE Project.
† The author was in part supported by Japan Science and Technology Agency CREST JPMJCR14D7; Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research No. 17H01702 (Scientific Research) and by a Cooperative Research Program of the Institute of Statistical Mathematics.
Our work concerns the convergence in law of random sequences to the Gaussian distribution. Although our context is more general, we will focus on examples based on elements of Wiener chaos. The characterization of the convergence in distribution of sequences of random variables belonging to a Wiener chaos of fixed order to the Gaussian law has constituted an important research direction in stochastic analysis in recent years. This research line was initiated by the seminal paper [11], where the authors proved the famous Fourth Moment Theorem. This result, which basically says that the convergence in law of a sequence of multiple stochastic integrals with unit variance to the standard normal distribution is equivalent to the convergence of the sequence of their fourth moments to the fourth moment of the Gaussian distribution, has since been extended, refined, completed and applied in various situations. The reader may consult the recent monograph [8] for the basics of this theory.
In this work, we are also concerned with some refinements of the Fourth Moment Theorem for sequences of random variables in a Wiener chaos of fixed order. More concretely, we aim to find the second-order expansion of the density. This is a natural extension of the Fourth Moment Theorem and of its ramifications. In order to explain this link, let us briefly describe the context. Let $H$ be a real and separable Hilbert space, $(W(h), h\in H)$ an isonormal Gaussian process on a probability space $(\Omega, \mathcal{F}, P)$ and let $I_q$ denote the $q$th multiple stochastic integral with respect to $W$. Let $Z\sim N(0,1)$. Denote by $d_{TV}(F,G)$ the total variation distance between the laws of the random variables $F$ and $G$ and let $D$ be the Malliavin derivative with respect to $W$. The Fourth Moment Theorem from [11] can be stated as follows.
Theorem 1 Fix $q\geq 1$. Consider a sequence $\{F_k = I_q(f_k),\ k\geq 1\}$ of random variables in the $q$-th Wiener chaos. Assume that
\[
\lim_{k\to\infty} EF_k^2 = \lim_{k\to\infty} q!\,\|f_k\|^2_{H^{\otimes q}} = 1. \qquad (1)
\]
Then, the following statements are equivalent:

1. The sequence of random variables $\{F_k = I_q(f_k),\ k\geq 1\}$ converges in distribution, as $k\to\infty$, to the standard normal law $N(0,1)$.

2. $\lim_{k\to\infty} E[F_k^4] = 3$.

3. $\|DF_k\|^2_H$ converges to $q$ in $L^2(\Omega)$ as $k\to\infty$.

4. $d_{TV}(F_k, Z) \to_{k\to\infty} 0$.
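As a concrete sanity check of the equivalence between points 1. and 2. — an illustration of ours, not part of the paper — one can compute $E[F_N^4]$ exactly for the classical second-chaos example $F_N = (2N)^{-1/2}\sum_{i=1}^N (Z_i^2-1)$ with $Z_1,\ldots,Z_N$ i.i.d. $N(0,1)$: the fourth moment equals $3 + 12/N$ and converges to $3$, the fourth moment of $N(0,1)$. A minimal Python sketch (the function name `fourth_moment` is ours) using exact rational cumulant arithmetic:

```python
from fractions import Fraction

def fourth_moment(n):
    """Exact E[F_N^4] for F_N = (2N)^{-1/2} * sum_{i=1}^N (Z_i^2 - 1),
    an element of the second Wiener chaos.  Cumulants are additive over
    independent summands and scale as kappa_k(cX) = c^k kappa_k(X);
    Z^2 - 1 has kappa_2 = 2 and kappa_4 = 48, and for a centered
    variable E[F^4] = kappa_4(F) + 3 * kappa_2(F)^2."""
    kappa2 = Fraction(2 * n, 2 * n)          # n summands, scale (2n)^{-1}
    kappa4 = Fraction(48 * n, (2 * n) ** 2)  # n summands, scale (2n)^{-2}
    return kappa4 + 3 * kappa2 ** 2          # equals 3 + 12/n
```

For instance `fourth_moment(1)` returns `15`, the exact fourth moment of $(Z^2-1)/\sqrt{2}$, and the excess $E[F_N^4]-3 = 12/N$ vanishes at the rate $N^{-1}$.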
A more recent point of interest in the analysis of the convergence to the normal distribution of sequences of multiple integrals is the analysis of the convergence of the sequence of densities. For every $N\geq 1$, denote by $p_{F_N}$ the density of the random variable $F_N$ (which exists due to a result in [14]). Since the total variation distance between $F$ and $G$ (or between the distributions of the random variables $F$ and $G$) can be written as
\[
d_{TV}(F,G) = \frac{1}{2}\int_{\mathbb{R}} |p_F(x) - p_G(x)|\, dx,
\]
we notice that point 4. in the Fourth Moment Theorem implies that
\[
\|p_{F_N}(x) - p(x)\|_{L^1(\mathbb{R})} \to_{N\to\infty} 0,
\]
where $p(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$ denotes, throughout our work, the density of the standard normal distribution. This result has been improved in the work [7], where the authors showed that if
\[
\lim_{N\to\infty} E\|DF_N\|_H^{-4-\varepsilon} < \infty \qquad (2)
\]
then the conditions 1.-4. in the Fourth Moment Theorem are equivalent to
\[
\sup_{x\in\mathbb{R}} |p_{F_N}(x) - p(x)| \leq c\,\sqrt{EF_N^4 - 3}. \qquad (3)
\]
We also refer to [3], [4] for other references related to the density convergence on Wiener chaos.
Our purpose is to find the asymptotic expansion in the Fourth Moment Theorem (and, more generally, for sequences of random variables converging to the normal law). By finding the asymptotic expansion we mean finding a sequence $(p_N)_{N\geq 1}$ of the form $p_N(x) = p(x) + r_N\varphi(x)$, with $r_N \to_{N\to\infty} 0$, which approximates the density $p_{F_N}$ in the following sense:
\[
\sup_{x\in\mathbb{R}} |x^{\alpha}|\,|p_{F_N}(x) - p_N(x)| = o(r_N) \qquad (4)
\]
for any $\alpha\in\mathbb{Z}_+$. This improves the main result in [7] and will also allow us to generalize the Edgeworth-type expansion (see [9], [1]) for sequences on Wiener chaos to a larger class of measurable functions. The main idea to get the asymptotic expansion is to analyze the asymptotic behavior of the characteristic function of $F_N$ and then to obtain the approximate density $p_N$ by Fourier inversion of the dominant part of the characteristic function.
As mentioned before, we develop our asymptotic expansion theory for a general class of random variables, which are not necessarily multiple stochastic integrals, although our main examples are related to Wiener chaos. Our main assumption is that the vector-valued random sequence $\left(F_N, \gamma_N^{-1}(\Lambda(F_N) - C)\right)$ converges to a $(d + d\times d)$-dimensional Gaussian vector, where $(\gamma_N)_{N\geq 1}$ is a deterministic sequence converging to zero as $N\to\infty$, $\Lambda(F_N)$ is the so-called Stein-Malliavin matrix of $F_N$, whose components are $\Lambda_{i,j} = \langle DF_N^{(i)}, D(-L)^{-1}F_N^{(j)}\rangle_H$ for $i,j = 1,\ldots,d$ ($L$ is the Ornstein-Uhlenbeck operator), and $C_{i,j} = \lim_{N\to\infty} EF_N^{(i)}F_N^{(j)}$. In the one-dimensional case the assumption becomes:
\[
\left(F_N,\ \gamma_N^{-1}\left(\langle DF_N, D(-L)^{-1}F_N\rangle_H - 1\right)\right) \qquad (5)
\]
converges in distribution, as $N\to\infty$, to a two-dimensional centered Gaussian vector $(Z_1, Z_2)$ with $EZ_1^2 = EZ_2^2 = 1$ and $EZ_1Z_2 = \rho\in(-1,1)$.
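In this bivariate setting, the correlation $\rho$ determines the conditional expectation of $Z_2$ given $Z_1$, a fact used later (assumption (10) and Remark 1). A one-line derivation, using only Gaussian linear regression:

```latex
E(Z_2 \mid Z_1) \;=\; \frac{E(Z_1 Z_2)}{E(Z_1^2)}\, Z_1 \;=\; \rho\, Z_1,
```

since for a centered Gaussian vector the conditional expectation of one component given another is the orthogonal (linear) projection onto its span.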
The condition (5) is related to what is usually assumed in the asymptotic expansion theory for martingales, which has been developed over the last decades and is based on the so-called Fourier approach. Let us refer, among many others, to [6], [17], [13], [15], [16] for a few important works on the martingale approach in asymptotic expansion. Recall that if $(M_N)_{N\geq 1}$ denotes a sequence of random variables converging to $Z\sim N(0,1)$ such that $M_N$ is the terminal value of a continuous martingale and if $\langle M\rangle_N$ is the martingale's bracket, then one assumes in e.g. [17], [15] that $\langle M\rangle_N \to_{N\to\infty} 1$ in probability and that the following joint convergence holds true:
\[
\left(M_N,\ \frac{\langle M\rangle_N - 1}{\gamma_N}\right) \to^{(d)}_{N\to\infty} (Z_1, Z_2) \qquad (6)
\]
where $(Z_1, Z_2)$ is a centered Gaussian vector as above. The notation "$\to^{(d)}$" stands for convergence in distribution.
In our assumption (5), the role of the martingale bracket is played by $\langle DF_N, D(-L)^{-1}F_N\rangle_H$, which becomes $\frac{1}{q}\|DF_N\|^2_H$ when $F_N$ is an element of the $q$th Wiener chaos. The choice is natural, suggested by the identity $EF_N^2 = E\frac{1}{q}\|DF_N\|^2_H$, which plays the role of the Itô isometry, and by the fact that $\frac{\|DF_N\|^2_H}{q} - 1$ converges to zero in $L^2(\Omega)$, due to the Fourth Moment Theorem.
We organized our paper as follows. In Section 2, we fix the general context of our work, the main assumptions and some notation. In Section 3, we analyze the asymptotic behavior of the characteristic function of a random sequence that satisfies our assumptions, while in Section 4 we obtain the approximate density and the asymptotic expansion via Fourier inversion. In Section 5, we illustrate our theory by several examples related to Wiener chaos.
2 Assumptions and notation

We present here the basic assumptions and the notation utilised throughout our work.
Let $H$ be a real and separable Hilbert space endowed with the inner product $\langle\cdot,\cdot\rangle_H$ and consider $(W(h), h\in H)$ an isonormal process on the probability space $(\Omega, \mathcal{F}, P)$. In the sequel, $D$, $L$ and $\delta$ represent the Malliavin derivative, the Ornstein-Uhlenbeck operator and the divergence integral with respect to $W$, respectively. See the Appendix for their definitions and basic properties.
2.1 The main assumptions
Consider a sequence of centered random variables $(F_N)_{N\geq 1}$ in $\mathbb{R}^d$ of the form $F_N = \left(F_N^{(1)},\ldots,F_N^{(d)}\right)$. Denote by $\Lambda(F_N) = (\Lambda_{i,j})_{i,j=1,\ldots,d}$ the Stein-Malliavin matrix, with components
\[
\Lambda_{i,j} = \langle DF_N^{(i)}, D(-L)^{-1}F_N^{(j)}\rangle_H \quad \text{for every } i,j = 1,\ldots,d.
\]
We consider the following assumptions:

[C1] There exists a symmetric (strictly) positive definite $d\times d$ matrix $C$ and a deterministic sequence $(\gamma_N)_{N\geq 1}$ converging to zero as $N\to\infty$ such that $\left(F_N, \gamma_N^{-1}(\Lambda(F_N) - C)\right)$ converges in distribution to a $(d + d\times d)$-dimensional Gaussian random vector $(Z_1, Z_2)$ such that $Z_1 = (Z_1^{(1)},\ldots,Z_1^{(d)})$, $Z_2 = (Z_2^{(k,l)})_{k,l=1,\ldots,d}$ with
\[
EZ_1^{(i)}Z_1^{(j)} = C_{i,j} \quad \text{for every } i,j = 1,\ldots,d.
\]

[C2] For every $i = 1,\ldots,d$, the sequence $(F_N^{(i)})_{N\geq 1}$ is bounded in $\mathbb{D}^{\ell+1,\infty} = \cap_{p>1}\mathbb{D}^{\ell+1,p}$ for some $\ell\geq d+3$.

[C3] With $\gamma_N, C$ from [C1], for every $i,j = 1,\ldots,d$ the sequence $\left(\gamma_N^{-1}\left(\Lambda_{i,j} - C_{i,j}\right)\right)_{N\in\mathbb{N}}$ is bounded in $\mathbb{D}^{\ell,\infty}$ for some $\ell\geq d+3$.
Notice that the matrix $\Lambda(F_N)$ is not necessarily symmetric (except when all the components belong to the same Wiener chaos), since in general $\Lambda_{i,j}\neq\Lambda_{j,i}$. Nevertheless, we assumed in [C1] that it converges to the symmetric matrix $C$, in the sense that $\gamma_N^{-1}(\Lambda(F_N) - C)$ converges in law to a normal random vector.

In the case of random variables in Wiener chaos, from the main result in [12] (see Theorem 4 later in the paper), we can prove that, under the convergence in law of $F_N^{(i)}$, $i = 1,\ldots,d$, to the Gaussian law, the quantities $\Lambda_{i,j} - C_{i,j}$ and $\Lambda_{j,i} - C_{i,j}$ converge to zero in $L^2(\Omega)$. Thus condition [C1] intuitively means that these two terms have the same rate of convergence to zero.
2.2 The truncation

Let us introduce the truncation sequence $(\Psi_N)_{N\geq 1}$, which will be widely used throughout our work. Its use allows us to avoid the condition (2) on the existence of negative moments for the Malliavin derivative of $F_N$.

We consider the function $\Psi_N\in\mathbb{D}^{\ell,p}$ for $p>1$ and $\ell\geq d+3$ such that
\[
\Psi_N = \psi\left(K\left(\frac{|\Lambda(F_N) - C|_{d\times d}}{\gamma_N^{\delta}}\right)^2\right) \qquad (7)
\]
with some $K$ and $0<\delta<1$, where $\psi\in C^{\infty}(\mathbb{R};[0,1])$ is such that $\psi(x) = 1$ if $|x|\leq\frac{1}{2}$ and $\psi(x) = 0$ if $|x|\geq 1$. The square in the argument of $\psi$ is taken in order to have $\Psi_N$ differentiable in the Malliavin sense.
From this definition, we clearly have:

1. $0\leq\Psi_N\leq 1$ for every $N\geq 1$.

2. $\Psi_N \to_{N\to\infty} 1$ in probability.

A crucial fact in our computations is that on the set $\{\Psi_N > 0\}$ we have
\[
|\Lambda(F_N) - C|_{d\times d} \leq \frac{1}{K}\,\gamma_N^{\delta}.
\]
This also implies (see e.g. [17]) that there exist two positive constants $c_1, c_2$ such that, for $N$ large enough,
\[
c_1 \leq \det\Lambda(F_N) \leq c_2. \qquad (8)
\]
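For concreteness, a cutoff $\psi$ with exactly the stated properties ($C^\infty$, equal to $1$ on $[-\frac12,\frac12]$, vanishing outside $(-1,1)$, with values in $[0,1]$) can be built from the classical smooth bump construction. The sketch below is our illustration (the helper names are ours), checking the defining properties pointwise:

```python
import math

def _g(t):
    # Building block: smooth on R, identically zero for t <= 0.
    return math.exp(-1.0 / t) if t > 0 else 0.0

def psi(x):
    """C-infinity cutoff: equals 1 for |x| <= 1/2, equals 0 for |x| >= 1,
    takes values in [0, 1] in between (standard bump-function recipe)."""
    a = abs(x)
    # _g(1 - a) vanishes for a >= 1; _g(a - 1/2) vanishes for a <= 1/2,
    # so the quotient is 1 on [0, 1/2] and 0 on [1, infinity).
    num = _g(1.0 - a)
    den = num + _g(a - 0.5)
    return num / den
```

The truncation $\Psi_N$ is then obtained by evaluating $\psi$ at $K\big(|\Lambda(F_N)-C|_{d\times d}/\gamma_N^{\delta}\big)^2$; the square keeps the composition Malliavin-differentiable, as noted above.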
2.3 Notation

For every $x = (x_1,\ldots,x_d)\in\mathbb{R}^d$ and $\alpha = (\alpha_1,\ldots,\alpha_d)\in\mathbb{Z}_+^d$ we use the notation
\[
x^{\alpha} = x_1^{\alpha_1}\cdots x_d^{\alpha_d} \quad \text{and} \quad \frac{\partial^{\alpha}}{\partial x^{\alpha}} = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\cdots\frac{\partial^{\alpha_d}}{\partial x_d^{\alpha_d}}.
\]
If $x,y\in\mathbb{R}^d$, we denote by $\langle x,y\rangle_d := \langle x,y\rangle$ their scalar product in $\mathbb{R}^d$. By $|x|_d := |x|$ we will denote the Euclidean norm of $x$.
The following will be fixed in the sequel:

• The sequence $(F_N)_{N\geq 1}$ defined in Section 2.1.

• The truncation sequence $(\Psi_N)_{N\geq 1}$ is as in Section 2.2.

• $o_{\lambda}(\gamma_N)$ denotes a deterministic sequence that depends on $\lambda$ such that $\frac{o_{\lambda}(\gamma_N)}{\gamma_N}\to_{N\to\infty} 0$. By $o(\gamma_N)$ we refer, as usual, to a sequence such that $\frac{o(\gamma_N)}{\gamma_N}\to_{N\to\infty} 0$.

• By $\Phi(\lambda) = e^{-\frac{\lambda^2}{2}}$ we denote the characteristic function of $Z\sim N(0,1)$ and by $p(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$ its density.

• By $c, C, C_1,\ldots$ we denote generic strictly positive constants that may vary from line to line, and by $\langle\cdot,\cdot\rangle$ we denote the Euclidean scalar product in $\mathbb{R}^d$. On the other hand, we will keep the subscript for the inner product in $H$, denoted by $\langle\cdot,\cdot\rangle_H$.

• We will use bold letters to indicate vectors in $\mathbb{R}^d$ when we need to differentiate them from their one-dimensional components.
3 Asymptotic behavior of the characteristic function

The first step in finding the asymptotic expansion for a sequence $(F_N)_{N\geq 1}$ which satisfies [C1]-[C3] is to analyze the asymptotic behavior, as $N\to\infty$, of the characteristic function of $F_N$. In order to avoid troubles related to the integrability over the whole real line, we will work under truncation, in the sense that we will always keep the multiplicative factor $\Psi_N$ given by (7).
Let $\lambda = (\lambda_1,\ldots,\lambda_d)\in\mathbb{R}^d$ and $\theta\in[0,1]$ and let us consider the truncated interpolation
\[
\varphi_{\Psi_N}(\theta,\lambda) = E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\right]. \qquad (9)
\]
Notice that $\varphi_{\Psi_N}(1,\lambda) = E(\Psi_N e^{i\langle\lambda,F_N\rangle})$ is the "truncated" characteristic function of $F_N$, while $\varphi_{\Psi_N}(0,\lambda) = (E\Psi_N)\,e^{-\frac{\lambda^T C\lambda}{2}}$ is the "truncated" characteristic function of the limit in law of $F_N$. Therefore, it is useful to analyze the behavior of the derivative of $\varphi_{\Psi_N}(\theta,\lambda)$ with respect to the variable $\theta$.
We have the following result.
Proposition 1 Assume [C1]-[C3] and suppose that
\[
E\left[Z_2^{(k,l)}\mid Z_1\right] = \sum_{a=1}^{d}\rho^{a}_{k,l}\, Z_1^{(a)} \qquad (10)
\]
for every $k,l = 1,\ldots,d$, where $(Z_1,Z_2)$ is the Gaussian random vector from [C1]. Then we have the convergence
\[
\gamma_N^{-1}\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) \to_{N\to\infty} -i\theta^2\left(\sum_{i,j,k,a=1}^{d}\lambda_j\lambda_k\lambda_i\,\rho^{a}_{j,k}\,C_{ia}\right)e^{-\frac{\lambda^T C\lambda}{2}}. \qquad (11)
\]
Proof: By differentiating (9) with respect to $\theta$ and using the formula $F_N^{(i)} = \delta D(-L)^{-1}F_N^{(i)}$ for every $i = 1,\ldots,d$, we can write
\begin{align*}
\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) &= E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\left(i\langle\lambda,F_N\rangle + \theta\lambda^T C\lambda\right)\right]\\
&= i\, E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\sum_{j=1}^{d}\lambda_j\,\delta D(-L)^{-1}F_N^{(j)}\right] + \theta\sum_{j,k=1}^{d}\lambda_j\lambda_k C_{j,k}\, E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\right]
\end{align*}
and by the duality relationship (65)
\begin{align*}
\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) &= i\sum_{j=1}^{d}\lambda_j\, E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\, i\theta\,\langle D(-L)^{-1}F_N^{(j)}, D\langle\lambda,F_N\rangle\rangle_H\right]\\
&\quad + i\sum_{j=1}^{d}\lambda_j\, E\left[e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\langle D\Psi_N, D(-L)^{-1}F_N^{(j)}\rangle_H\right] + \theta\sum_{j,k=1}^{d}\lambda_j\lambda_k C_{j,k}\, E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\right].
\end{align*}
We obtain, since $D\langle\lambda,F_N\rangle = \sum_{k=1}^{d}\lambda_k DF_N^{(k)}$,
\begin{align*}
\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) &= -\theta\sum_{j,k=1}^{d}\lambda_j\lambda_k\, E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\left(\langle DF_N^{(k)}, D(-L)^{-1}F_N^{(j)}\rangle_H - C_{j,k}\right)\right]\\
&\quad + i\sum_{j=1}^{d}\lambda_j\, E\left[e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\langle D\Psi_N, D(-L)^{-1}F_N^{(j)}\rangle_H\right]\\
&:= A_N(\theta,\lambda) + B_N(\theta,\lambda). \qquad (12)
\end{align*}
Using the definition of the truncation function $\Psi_N$, we can prove that
\[
\gamma_N^{-1}B_N(\theta,\lambda) = \gamma_N^{-1}\, i\sum_{j=1}^{d}\lambda_j\, E\left[e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\langle D\Psi_N, D(-L)^{-1}F_N^{(j)}\rangle_H\right] \to_{N\to\infty} 0. \qquad (13)
\]
Indeed, by the chain rule (66),
\[
D\Psi_N = \psi'\left(K\left(\frac{|\Lambda(F_N)-C|_{d\times d}}{\gamma_N^{\delta}}\right)^2\right) D\left(K\left(\frac{|\Lambda(F_N)-C|_{d\times d}}{\gamma_N^{\delta}}\right)^2\right)
\]
and by using [C2]-[C3] and (7), we get for every $p>1$
\[
\gamma_N^{-1}|B_N(\theta,\lambda)| \leq c\,\gamma_N^{-1}\, E\left|\langle D\Psi_N, D(-L)^{-1}F_N^{(j)}\rangle_H\right| \leq c\,\gamma_N^{-1}\, P\left(|\Lambda(F_N)-C|_{d\times d}\geq \tfrac{1}{2}\gamma_N^{\delta}\right)^{\frac{1}{2}} \leq c\,\gamma_N^{-1}\left(E\left[|\Lambda(F_N)-C|^{2p}_{d\times d}\right]\gamma_N^{-2\delta p}\right)^{\frac{1}{2}} \leq c\,\gamma_N^{p(1-\delta)-1},
\]
where we used again [C2]-[C3]. Since $\delta<1$ and $p$ is arbitrarily large, we obtain the convergence (13).
On the other hand, by assumption [C1], we have the convergence
\[
\gamma_N^{-1}A_N(\theta,\lambda) \to_{N\to\infty} -\theta\sum_{j,k=1}^{d}\lambda_j\lambda_k\, E\left[e^{i\theta\langle\lambda,Z_1\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}Z_2^{(j,k)}\right]. \qquad (14)
\]
Using our assumption (10), we can express the above limit in a more explicit way. We have
\begin{align*}
E\left[e^{i\theta\langle\lambda,Z_1\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}Z_2^{(j,k)}\right] &= E\left[e^{i\theta\langle\lambda,Z_1\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\, E\left(Z_2^{(j,k)}\mid Z_1\right)\right]\\
&= E\left[e^{i\theta\langle\lambda,Z_1\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\sum_{a=1}^{d}\rho^{a}_{j,k}Z_1^{(a)}\right] \qquad (15)
\end{align*}
and, moreover, for every $a = 1,\ldots,d$, from the differential equation verified by the characteristic function of the normal distribution,
\begin{align*}
E\left[e^{i\theta\langle\lambda,Z_1\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}Z_1^{(a)}\right] &= e^{-(1-\theta^2)\frac{\lambda^T C\lambda}{2}}\, E\left[e^{i\theta\langle\lambda,Z_1\rangle}Z_1^{(a)}\right]\\
&= e^{-(1-\theta^2)\frac{\lambda^T C\lambda}{2}}\, i\theta\left(\sum_{i=1}^{d}\lambda_i C_{ia}\right)e^{-\theta^2\frac{\lambda^T C\lambda}{2}}\\
&= i\theta\, e^{-\frac{\lambda^T C\lambda}{2}}\sum_{i=1}^{d}\lambda_i C_{ia}. \qquad (16)
\end{align*}
Thus, by replacing (15) and (16) in (14), we obtain
\[
\gamma_N^{-1}A_N(\theta,\lambda) \to_{N\to\infty} -i\theta^2\left(\sum_{i,j,k,a=1}^{d}\lambda_j\lambda_k\lambda_i\,\rho^{a}_{j,k}\,C_{ia}\right)e^{-\frac{\lambda^T C\lambda}{2}}. \qquad (17)
\]
Relation (17), together with (13) and (12), implies the conclusion.
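The only probabilistic ingredient in (16) is the elementary Gaussian identity $E[e^{iuZ}Z] = iu\,e^{-u^2/2}$ for $Z\sim N(0,1)$ (the case $d=1$, $C=1$, $u=\theta\lambda$). As an illustration of ours, not from the paper, this can be confirmed by direct numerical integration against the Gaussian density; the grid parameters below are arbitrary choices:

```python
import cmath
import math

def expect_e_iuZ_times_Z(u, lo=-12.0, hi=12.0, n=40001):
    """Trapezoidal approximation of E[exp(i*u*Z) * Z] for Z ~ N(0,1),
    i.e. the integral of x * exp(i*u*x) * p(x) over the real line."""
    h = (hi - lo) / (n - 1)
    total = 0j
    for k in range(n):
        x = lo + k * h
        w = 0.5 if k in (0, n - 1) else 1.0
        p = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
        total += w * x * cmath.exp(1j * u * x) * p
    return total * h

def closed_form(u):
    # The closed form appearing in the computation leading to (16).
    return 1j * u * math.exp(-u * u / 2.0)
```

Since the integrand is smooth and decays like $e^{-x^2/2}$, the trapezoidal rule is extremely accurate here, and the two quantities agree to many digits.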
We will also need to analyze the asymptotic behavior of the partial derivatives, with respect to the variable $\lambda$, of (9). Let us first introduce some notation borrowed from [16]. For every $u,z\in\mathbb{R}^d$, $\alpha\in\mathbb{Z}_+^d$ and for every symmetric $d\times d$ matrix $C$, we will denote by
\[
P_{\alpha}(u,z,C) = e^{-i\langle u,z\rangle_d - \frac{1}{2}u^T Cu}\,(-i)^{|\alpha|}\frac{\partial^{\alpha}}{\partial u^{\alpha}}\, e^{i\langle u,z\rangle_d + \frac{1}{2}u^T Cu} \qquad (18)
\]
where $|\alpha| := \alpha_1 + \ldots + \alpha_d$ if $\alpha = (\alpha_1,\ldots,\alpha_d)\in\mathbb{Z}_+^d$. Then clearly,
\[
(-i)^{|\alpha|}\frac{\partial^{\alpha}}{\partial u^{\alpha}}\, e^{i\langle u,z\rangle_d + \frac{1}{2}u^T Cu} = e^{i\langle u,z\rangle_d + \frac{1}{2}u^T Cu}\, P_{\alpha}(u,z,C). \qquad (19)
\]
In the next lemma we recall the explicit expression of the polynomial $P_{\alpha}$, together with some estimates on it. We refer to Lemma 1 in [16] for the proof.

Lemma 1 If $P_{\alpha}$ is given by (18), then for every $u,z\in\mathbb{R}^d$ and every $d\times d$ matrix $C$, we have
\[
P_{\alpha}(u,z,C) = \sum_{0\leq\beta\leq\alpha} C_{\alpha}^{\beta}\,(-i)^{|\beta|}\, z^{\alpha-\beta}\, S_{\beta}(u,C)
\]
with
\[
S_{\beta}(u,C) = e^{-\frac{1}{2}u^T Cu}\,\frac{\partial^{\beta}}{\partial u^{\beta}}\, e^{\frac{1}{2}u^T Cu}.
\]
Moreover, we have the estimate
\[
|S_{\beta}(u,C)| \leq \sum_{j=0}^{|\beta|} c^{|\beta|}_{j}\, |u|^{j}\, |C|^{(j+|\beta|)/2}
\]
for every $u\in\mathbb{R}^d$ and every $d\times d$ matrix $C$.
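For $d = 1$ the functions $S_\beta$ can be written explicitly by direct differentiation, e.g. $S_1(u,c) = cu$ and $S_2(u,c) = c + c^2u^2$, which gives a quick way to test one's bookkeeping. The sketch below — our illustration, not from [16] — compares a finite-difference evaluation of the defining formula with these closed forms:

```python
import math

def S_beta_numeric(beta, u, c, h=1e-4):
    """S_beta(u, c) = exp(-c*u^2/2) * d^beta/du^beta exp(c*u^2/2) for d = 1,
    with the derivative approximated by central finite differences."""
    def f(t):
        return math.exp(c * t * t / 2.0)
    if beta == 1:
        d = (f(u + h) - f(u - h)) / (2 * h)
    elif beta == 2:
        d = (f(u + h) - 2 * f(u) + f(u - h)) / (h * h)
    else:
        raise ValueError("only beta = 1, 2 implemented in this sketch")
    return math.exp(-c * u * u / 2.0) * d

# Closed forms obtained by hand: S_1(u, c) = c*u,  S_2(u, c) = c + c^2*u^2.
```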
Let us now state and prove the asymptotic behavior of the partial derivatives of the truncated interpolation $\varphi_{\Psi_N}$.

Proposition 2 Assume [C1]-[C3] and (10). Let $\varphi_{\Psi_N}$ be given by (9). Then for every $\alpha\in\mathbb{Z}_+^d$, we have
\[
\gamma_N^{-1}\frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) \to_{N\to\infty} \frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\left[-i\theta^2\left(\sum_{i,j,k,a=1}^{d}\lambda_j\lambda_k\lambda_i\,\rho^{a}_{j,k}\,C_{ia}\right)e^{-\frac{\lambda^T C\lambda}{2}}\right].
\]
Proof: If $\alpha = (\alpha_1,\ldots,\alpha_d)\in\mathbb{Z}_+^d$, then by (12) we have
\[
\frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) = \frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}A_N(\theta,\lambda) + \frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}B_N(\theta,\lambda)
\]
with $A_N, B_N$ defined in (12). Now, since by (19)
\[
\frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}} = i^{|\alpha|}\, P_{\alpha}\big(\lambda, \theta F_N, -(1-\theta^2)C\big)\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}},
\]
we can write
\begin{align*}
\frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) &= -\theta\sum_{j,k=1}^{d}\lambda_j\lambda_k\, E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\, i^{|\alpha|}P_{\alpha}\big(\lambda,\theta F_N,-(1-\theta^2)C\big)\left(\langle DF_N^{(k)}, D(-L)^{-1}F_N^{(j)}\rangle_H - C_{j,k}\right)\right]\\
&\quad + i\sum_{j=1}^{d}\lambda_j\, E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\, i^{|\alpha|}P_{\alpha}\big(\lambda,\theta F_N,-(1-\theta^2)C\big)\,\langle D\Psi_N, D(-L)^{-1}F_N^{(j)}\rangle_H\right]. \qquad (20)
\end{align*}
As in the proof of (13), the second summand above, multiplied by $\gamma_N^{-1}$, converges to zero as $N\to\infty$. Concerning the first summand in (20), multiplied by $\gamma_N^{-1}$, the hypothesis [C1] implies that it converges, as $N\to\infty$, to
\begin{align*}
-\theta\sum_{j,k=1}^{d}\lambda_j\lambda_k\, E\left[i^{|\alpha|}P_{\alpha}\big(\lambda,\theta Z_1,-(1-\theta^2)C\big)\, e^{i\theta\langle\lambda,Z_1\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}Z_2^{(j,k)}\right] &= -\theta\sum_{j,k=1}^{d}\lambda_j\lambda_k\, E\left[\frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\left(e^{i\theta\langle\lambda,Z_1\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\right)Z_2^{(j,k)}\right]\\
&= \frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\left[-i\theta^2\left(\sum_{i,j,k,a=1}^{d}\lambda_j\lambda_k\lambda_i\,\rho^{a}_{j,k}\,C_{ia}\right)e^{-\frac{\lambda^T C\lambda}{2}}\right].
\end{align*}
Then the conclusion is obtained.
Remark 1 If $d = 1$, relation (11) becomes
\[
\gamma_N^{-1}\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) \to_{N\to\infty} -i\theta^2\lambda^3\rho\, e^{-\frac{\lambda^2}{2}},
\]
where $\rho = E(Z_1Z_2)$.
The following integration by parts formula plays an important role in the sequel. Its proof is a slight adaptation of the proof of Proposition 1 in [16]; the nondegeneracy (8) is needed for it.

Lemma 2 Assume [C2]-[C3]. Consider a function $f\in C^{l}_{p}(\mathbb{R}^d)$ with $l\geq 0$ and let $G$ be a random variable in $\mathbb{D}^{l,\infty}$. Then it holds that
\[
E\left[\frac{\partial^{l}}{\partial x^{l}}f(F_N)\, G\,\Psi_N\right] = E\left[f(F_N)\, H_{l}(G\Psi_N)\right] \qquad (21)
\]
where the random variable $H_{l}(G\Psi_N)$ belongs to $L^p(\Omega)$ for every $p\geq 1$.
We will use the integration by parts formula in the above Lemma 2 to show that $\frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda)$ is dominated by an integrable function uniformly with respect to $(\theta,\lambda)\in[0,1]\times\mathbb{R}^d$.
Lemma 3 Assume [C2]-[C3]. For every $\alpha\in\mathbb{Z}_+^d$, there exists a positive constant $C_{\alpha}$ such that
\[
\left|\gamma_N^{-1}\,\partial_{\lambda}^{\alpha}\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda)\right| \leq C_{\alpha}(1+|\lambda|)^{-\ell+2} \qquad (22)
\]
for all $\lambda\in\mathbb{R}^d$, $\theta\in[0,1]$ and $N\geq 1$.

Proof: First we consider the case $\alpha = 0\in\mathbb{Z}_+^d$. Consider the function $f(x) = e^{i\theta\langle\lambda,x\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}$ for $x\in\mathbb{R}^d$, with
\[
\frac{\partial^{\alpha}}{\partial x^{\alpha}}f(x) = f(x)(i\theta\lambda)^{\alpha}.
\]
For $\theta\in(0,1]$, by using (12) and applying the integration by parts formula (21) to the function $f$ a total of $\ell$ times, we obtain
\begin{align*}
(i\theta\lambda)^{\ell}\frac{\partial}{\partial\theta}\varphi_{\Psi_N}(\theta,\lambda) &= -\theta(i\theta\lambda)^{\ell}\sum_{j,k=1}^{d}\lambda_j\lambda_k\, E\left[\Psi_N\, e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\left(\langle DF_N^{(k)}, D(-L)^{-1}F_N^{(j)}\rangle_H - C_{j,k}\right)\right]\\
&\quad + i(i\theta\lambda)^{\ell}\sum_{j=1}^{d}\lambda_j\, E\left[e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\langle D\Psi_N, D(-L)^{-1}F_N^{(j)}\rangle_H\right]\\
&= -\theta\sum_{j,k=1}^{d}\lambda_j\lambda_k\, E\left[e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\, H_{\ell}\left(\left(q^{-1}\langle DF_N, DF_N\rangle_H - 1\right)\Psi_N\right)\right]\\
&\quad + \sum_{j=1}^{d}\lambda_j\, E\left[e^{i\theta\langle\lambda,F_N\rangle - (1-\theta^2)\frac{\lambda^T C\lambda}{2}}\, H_{\ell}\left(\langle i\lambda(-q)^{-1}DF_N, D\Psi_N\rangle_H\right)\right].
\end{align*}
We use this estimate for $\theta\geq 1/2$. For $\theta\in[0,1/2]$, the inequality is trivial thanks to the exponential factor $\exp\left(-(1-\theta^2)\frac{\lambda^T C\lambda}{2}\right)$. Thus we obtain the desired estimate when $\alpha = 0$. For $\alpha\in\mathbb{Z}_+^d$, $\alpha\neq(0,\ldots,0)$, we can follow a similar argument with the estimate
\[
\sup_{\theta,\lambda}\,(1-\theta^2)(1+\lambda^2)^{m}\exp\left(-(1-\theta^2)\frac{\lambda^T C\lambda}{2}\right) < \infty
\]
for every $m\in\mathbb{Z}_+$. This gives the result.
Let us denote by $\varphi_N$ the sum of the characteristic function of the law $N(0,C)$ and of the integral over $[0,1]$, with respect to $\theta$, of the limit in (11), i.e.
\[
\varphi_N(\lambda) = e^{-\frac{\lambda^T C\lambda}{2}} - i\gamma_N\frac{1}{3}\left(\sum_{i,j,k,a=1}^{d}\lambda_j\lambda_k\lambda_i\,\rho^{a}_{j,k}\,C_{ia}\right)e^{-\frac{\lambda^T C\lambda}{2}}. \qquad (23)
\]
The next result shows that the difference between the truncated characteristic function of $F_N$ and the characteristic function of the weak limit of the sequence $F_N$ can be approximated by $\varphi_N$.
Proposition 3 Suppose that [C1]-[C3] are fulfilled for $\ell = d+3$. For every $N\geq 1$, let $\varphi_N$ be given by (23). Then
\[
\gamma_N^{-1}\int_{\mathbb{R}^d}\left|\partial_{\lambda}^{\alpha}\left(E\left[e^{i\langle\lambda,F_N\rangle}\Psi_N\right] - E[\Psi_N]\varphi_N(\lambda)\right)\right| d\lambda \to_{N\to\infty} 0
\]
for every $\alpha\in\mathbb{Z}_+^d$.

Proof: If $\alpha\in\mathbb{Z}_+^d$, using $\varphi_{\Psi_N}(1,\lambda) = E\left[e^{i\langle\lambda,F_N\rangle}\Psi_N\right]$ and $\varphi_{\Psi_N}(0,\lambda) = (E\Psi_N)\, e^{-\frac{\lambda^T C\lambda}{2}}$, we can write
\begin{align*}
&\int_{\mathbb{R}^d}\gamma_N^{-1}\left|\partial_{\lambda}^{\alpha}\left(E\left[e^{i\langle\lambda,F_N\rangle}\Psi_N\right] - E[\Psi_N]\varphi_N(\lambda)\right)\right| d\lambda\\
&= \int_{\mathbb{R}^d}\left|\partial_{\lambda}^{\alpha}\int_{0}^{1}\left(\gamma_N^{-1}\partial_{\theta}\varphi_{\Psi_N}(\theta,\lambda) - (-i)\theta^2\, E[\Psi_N]\sum_{i,j,k,a=1}^{d}\lambda_j\lambda_k\lambda_i\,\rho^{a}_{j,k}\,C_{ia}\, e^{-\frac{\lambda^T C\lambda}{2}}\right)d\theta\right| d\lambda\\
&\leq \int_{\mathbb{R}^d}\int_{0}^{1}\left|\gamma_N^{-1}\partial_{\lambda}^{\alpha}\partial_{\theta}\varphi_{\Psi_N}(\theta,\lambda) - \partial_{\lambda}^{\alpha}\left((-i)\theta^2\sum_{i,j,k,a=1}^{d}\lambda_j\lambda_k\lambda_i\,\rho^{a}_{j,k}\,C_{ia}\, e^{-\frac{\lambda^T C\lambda}{2}}\right)\right| d\theta\, d\lambda + C\|1-\Psi_N\|_2.
\end{align*}
The last quantity converges to zero as $N\to\infty$ by the choice of the truncation function, Lemma 3 and the dominated convergence theorem, since the integrand is bounded uniformly in $(\theta, N)$.
Let $\varphi_{F_N}$ be the characteristic function of the random vector $F_N$, i.e. $\varphi_{F_N}(\lambda) = Ee^{i\langle\lambda,F_N\rangle}$ for $\lambda\in\mathbb{R}^d$. If $\varphi_N$ is given by (23), we have
\begin{align*}
\varphi_{F_N}(\lambda) &= \varphi_N(\lambda)\, E\Psi_N + \left(E\left[\Psi_N\, e^{i\langle\lambda,F_N\rangle}\right] - E(\Psi_N)\varphi_N(\lambda)\right) + E\left[(1-\Psi_N)\, e^{i\langle\lambda,F_N\rangle}\right]\\
&= \varphi_N(\lambda) + \varphi_N(\lambda)(E\Psi_N - 1) + \left(E\left[\Psi_N\, e^{i\langle\lambda,F_N\rangle}\right] - E(\Psi_N)\varphi_N(\lambda)\right) + E\left[(1-\Psi_N)\, e^{i\langle\lambda,F_N\rangle}\right].
\end{align*}
Proposition 3 shows that the difference $E\left[\Psi_N\, e^{i\langle\lambda,F_N\rangle}\right] - E(\Psi_N)\varphi_N(\lambda)$ is small, while from the definition of the truncation function $\Psi_N$ we see that the term $E\left[(1-\Psi_N)\, e^{i\langle\lambda,F_N\rangle}\right] + \varphi_N(\lambda)(E\Psi_N - 1)$ is also small. Consequently, $\varphi_{F_N}(\lambda)$ can be approximated by $\varphi_N$. Actually, we have
\[
\varphi_{F_N}(\lambda) = \varphi_N(\lambda) + s_N(\lambda) \qquad (24)
\]
where $s_N(\lambda)$ is a "small term" such that $\gamma_N^{-1}s_N(\lambda)$ and its derivatives are dominated by the right-hand side of (22), and that satisfies
\[
|s_N(\lambda)| \leq C\|1-\Psi_N\|_p + o_{\lambda}(\gamma_N) \qquad (25)
\]
for every $p\geq 1$, where $o_{\lambda}(\gamma_N)$ denotes a deterministic sequence depending on $\lambda$ such that $\frac{o_{\lambda}(\gamma_N)}{\gamma_N}\to_{N\to\infty} 0$.
Therefore $\varphi_N$ is the dominant part in the asymptotic expansion of $\varphi_{F_N}$. By inverting $\varphi_N$, we get the approximate density.
4 The approximate density

In this section, our purpose is to find the first and second-order terms in the asymptotic expansion of the density of $F_N$. We will assume throughout this section that [C1]-[C3] are fulfilled for $\ell = d+3$. Let us denote by $\varphi^{\Psi}_{F_N}$ the truncated characteristic function of $F_N$, i.e.
\[
\varphi^{\Psi}_{F_N}(\lambda) = E\left[\Psi_N\, e^{i\langle\lambda,F_N\rangle}\right] = \varphi_{\Psi_N}(1,\lambda) \qquad (26)
\]
with $\varphi_{\Psi_N}(1,\lambda)$ from (9).

We define the approximate density of $F_N$ via the Fourier inversion of the dominant part of $\varphi^{\Psi}_{F_N}$ defined by (26).
Definition 1 For every $N\geq 1$, we define the approximate density $p_N$ by
\[
p_N(x) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} e^{-i\langle\lambda,x\rangle}\varphi_N(\lambda)\, d\lambda, \quad x\in\mathbb{R}^d \qquad (27)
\]
where $\varphi_N$ is given by (23).
Let us first notice that in the case $d = 1$ we can obtain an explicit expression for $p_N$. Note that for $d = 1$, assuming $C_{1,1} = 1$ and $EZ_1Z_2 = \rho$, we have from (23)
\[
\varphi_N(\lambda) = e^{-\frac{\lambda^2}{2}} - i\,\gamma_N\frac{1}{3}\lambda^3\rho\, e^{-\frac{\lambda^2}{2}}, \quad \text{for every } \lambda\in\mathbb{R}. \qquad (28)
\]
Recall that $p(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$ denotes the density of the standard normal law. By $H_k$ we denote the Hermite polynomial of degree $k\geq 0$, given by $H_0(x) = 1$ and, for $k\geq 1$,
\[
H_k(x) = (-1)^k e^{\frac{x^2}{2}}\frac{d^k}{dx^k}e^{-\frac{x^2}{2}}.
\]

Proposition 4 If $p_N$ is given by (27), then for every $x\in\mathbb{R}$,
\[
p_N(x) = p(x) + \frac{\rho}{3}\gamma_N H_3(x)p(x).
\]
Proof: This follows from (28), the identity $-i\lambda^3 = (i\lambda)^3$ and the formula
\[
\frac{1}{2\pi}\int_{\mathbb{R}} e^{-i\lambda x}(i\lambda)^k\,\Phi(\lambda)\, d\lambda = H_k(x)p(x) \qquad (29)
\]
for every integer $k\geq 0$. Recall the notation $\Phi(\lambda) := e^{-\frac{\lambda^2}{2}}$.
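The inversion formula (29) is easy to verify numerically for $k = 3$: a trapezoidal approximation of the Fourier integral reproduces $H_3(x)p(x) = (x^3-3x)p(x)$ to high accuracy. The sketch below is our illustration (function names and grid parameters are ours, not from the paper):

```python
import cmath
import math

def hermite_inversion_lhs(k, x, lo=-12.0, hi=12.0, n=40001):
    """Left-hand side of (29): (1/2pi) * integral over lam of
    exp(-i*lam*x) * (i*lam)^k * exp(-lam^2/2), via the trapezoidal rule."""
    h = (hi - lo) / (n - 1)
    total = 0j
    for j in range(n):
        lam = lo + j * h
        w = 0.5 if j in (0, n - 1) else 1.0
        total += w * cmath.exp(-1j * lam * x) * (1j * lam) ** k \
                 * math.exp(-lam * lam / 2.0)
    return total * h / (2.0 * math.pi)

def hermite_rhs(x):
    """Right-hand side of (29) for k = 3: H_3(x) p(x) with H_3(x) = x^3 - 3x."""
    p = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return (x ** 3 - 3.0 * x) * p
```

Because the integrand is analytic and Gaussian-decaying, the trapezoidal rule is essentially exact here, and the two sides agree to many digits for any real $x$.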
We prefer to work with the truncated density of $F_N$, which can be controlled more easily.

Definition 2 The local (or truncated) density of the random variable $F_N$ is defined as the inverse Fourier transform of the truncated characteristic function:
\[
p^{\Psi}_{F_N}(x) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} e^{-i\langle\lambda,x\rangle}\varphi^{\Psi}_{F_N}(\lambda)\, d\lambda \quad \text{for } x\in\mathbb{R}^d.
\]
The local density is well-defined since the truncated characteristic function $\varphi^{\Psi}_{F_N}$ is obviously integrable over $\mathbb{R}^d$.

Our purpose is to show that the local density $p^{\Psi}_{F_N}$ is well-approximated by the approximate density $p_N$ given by (27), in the sense that for every $\alpha\in\mathbb{Z}_+^d$,
\[
\sup_{x\in\mathbb{R}^d}\left|x^{\alpha}\left(p^{\Psi}_{F_N}(x) - p_N(x)\right)\right|
\]
is of order $o(\gamma_N)$. This will be shown in the next result, based on the asymptotic behavior of the characteristic function of $F_N$.
Theorem 2 For every $p\geq 1$ and for every multi-index $\alpha = (\alpha_1,\ldots,\alpha_d)\in\mathbb{Z}_+^d$,
\[
\sup_{x\in\mathbb{R}^d}\left|x^{\alpha}\left(p^{\Psi}_{F_N}(x) - p_N(x)\right)\right| \leq c\|1-\Psi_N\|_p + o(\gamma_N) = o(\gamma_N). \qquad (30)
\]
Proof: We have, by integrating by parts,
\begin{align*}
x^{\alpha}\left(p^{\Psi}_{F_N}(x) - p_N(x)\right) &= x^{\alpha}\frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} e^{-i\langle\lambda,x\rangle}\left(\varphi^{\Psi}_{F_N}(\lambda) - \varphi_N(\lambda)\right)d\lambda\\
&= \frac{1}{(2\pi)^d}(-i)^{|\alpha|}\int_{\mathbb{R}^d} e^{-i\langle\lambda,x\rangle}\frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\left(\varphi^{\Psi}_{F_N}(\lambda) - \varphi_N(\lambda)\right)d\lambda
\end{align*}
and then, by using (24) and Lemma 3, we obtain
\begin{align*}
\sup_{x\in\mathbb{R}^d}\left|x^{\alpha}\left(p^{\Psi}_{F_N}(x) - p_N(x)\right)\right| &\leq \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}\left|\frac{\partial^{\alpha}}{\partial\lambda^{\alpha}}\left(\varphi^{\Psi}_{F_N}(\lambda) - \varphi_N(\lambda)\right)\right| d\lambda\\
&\leq c\int_{\mathbb{R}^d}(1+|\lambda|)^{-\ell+2}\, d\lambda\,\left(o(\gamma_N) + c\|1-\Psi_N\|_p\right)\\
&\leq o(\gamma_N) + c\|1-\Psi_N\|_p = o(\gamma_N),
\end{align*}
where we use $\ell - 2 > d$, since $\ell\geq d+3$ in [C2]. The fact that $c\|1-\Psi_N\|_p = o(\gamma_N)$ follows from the choice of our truncation function $\Psi_N$.
Let us make some comments on the above result.

Remark 2
• Theorem 2 extends the inequality (3) when $F_N$ belongs to the Wiener chaos of order $q\geq 2$ for every $N\geq 1$. It is known that, for every $N\geq 1$, $\gamma_N^2 = \mathrm{Var}\left(\frac{1}{q}\|DF_N\|^2_H\right)$ behaves as $EF_N^4 - 3$; more precisely (see Section 5.2.2 in [8]),
\[
\gamma_N^2 \leq \frac{q-1}{3q}(EF_N^4 - 3) \leq (q-1)\gamma_N^2.
\]
It also extends some results in [3] (in particular Theorem 6.2 and Corollary 6.6).

• The appearance of the truncation function $\Psi_N$ in the right-hand side of (30) replaces the condition (2) on the existence of negative moments of the Malliavin derivative.
As a consequence of Theorem 2, we will obtain the asymptotic expansion for the expectation of measurable functionals of $F_N$.

Theorem 3 Let $p_N$ be given by (27) and fix $M\geq 0$, $\gamma>0$. Then
\[
\sup_{f\in\mathcal{E}(M,\gamma)}\left|Ef(F_N) - \int_{\mathbb{R}^d} f(x)p_N(x)\, dx\right| = o(\gamma_N) \qquad (31)
\]
where $\mathcal{E}(M,\gamma)$ is the set of measurable functions $f:\mathbb{R}^d\to\mathbb{R}$ such that $|f(x)|\leq M(1+|x|^{\gamma})$ for every $x\in\mathbb{R}^d$.

Proof: For any measurable function $f$ that satisfies the assumptions in the statement, we can write
\begin{align*}
Ef(F_N) - \int_{\mathbb{R}^d} f(x)p_N(x)\, dx &= Ef(F_N) - Ef(F_N)\Psi_N + Ef(F_N)\Psi_N - \int_{\mathbb{R}^d} f(x)p_N(x)\, dx\\
&= Ef(F_N)(1-\Psi_N) + \int_{\mathbb{R}^d} f(x)\left[p^{\Psi}_{F_N}(x) - p_N(x)\right]dx
\end{align*}
with $p^{\Psi}_{F_N}$ defined in Definition 2. Then, for every $p>1$ and for $\alpha\geq 0$ large enough (it may depend on $M,\gamma$), with $p'$ such that $\frac{1}{p}+\frac{1}{p'} = 1$, we have, by using Hölder's inequality,
\begin{align*}
\left|Ef(F_N) - \int_{\mathbb{R}^d} f(x)p_N(x)\, dx\right| &\leq \left(E|f(F_N)|^{p'}\right)^{\frac{1}{p'}}\|1-\Psi_N\|_p + \int_{\mathbb{R}^d}|f(x)|\left(1+|x|^2\right)^{-\frac{\alpha}{2}}dx \times \sup_{x\in\mathbb{R}^d}\left(1+|x|^2\right)^{\frac{\alpha}{2}}\left|p^{\Psi}_{F_N}(x) - p_N(x)\right|\\
&\leq \left(E|f(F_N)|^{p'}\right)^{\frac{1}{p'}}\|1-\Psi_N\|_p + \left(c\|1-\Psi_N\|_p + o(\gamma_N)\right)\\
&\leq C\|1-\Psi_N\|_p + o(\gamma_N) = o(\gamma_N),
\end{align*}
where we used (30). So the desired conclusion is obtained.
We insert some remarks related to the above result.

Remark 3
• Theorem 3 is related to Theorem 9.3.1 in [8] (which reproduces the main result of [9]). Notice that we do not assume the differentiability of $f$, and our result includes the uniform control in the case of the Kolmogorov distance.

• In particular, the asymptotic expansion (31) holds for any bounded function $f$ and for polynomial functions, but the class of examples is obviously larger.
In the case of sequences in a Wiener chaos of fixed order, we have the following result, which follows from the above results and from the hypercontractivity property (64).

Corollary 1 Let $(F_N)_{N\geq 1} = (I_q(f_N))_{N\geq 1}$ (with $f_N\in H^{\otimes q}$ for every $N\geq 1$) be a random sequence in the $q$th Wiener chaos ($q\geq 2$). Assume $EF_N^2 = 1$ for every $N\geq 1$ and suppose that condition [C1] holds with $\gamma_N^2 = \mathrm{Var}\left(\frac{\|DF_N\|^2_H}{q}\right)$, i.e.
\[
\left(F_N,\ \gamma_N^{-1}\left(\frac{\|DF_N\|^2_H}{q} - 1\right)\right) \to^{(d)}_{N\to\infty} (Z_1, Z_2)
\]
where $(Z_1, Z_2)$ is a centered Gaussian vector with $EZ_1^2 = EZ_2^2 = 1$ and $EZ_1Z_2 = \rho\in(0,1)$. Then we have the asymptotic expansions (30) and (31).

Proof: The result follows from Theorems 2 and 3 by noticing that (64) ensures that conditions [C2] and [C3] are satisfied.
5 Examples

In the last part of our work, we present three examples in order to illustrate our theory. The first example concerns a one-dimensional sequence of random variables in the second Wiener chaos and is inspired by [8], Chapter 9. The second example is multidimensional and involves quadratic functionals of two correlated fractional Brownian motions, while the last example involves a sequence in a finite sum of Wiener chaoses.
5.1 A one-dimensional example in a Wiener chaos of fixed order: quadratic variations of stationary Gaussian sequences

We consider a centered stationary Gaussian sequence $(X_k)_{k\in\mathbb{Z}}$ with $EX_k^2 = 1$ for every $k\in\mathbb{Z}$. We set
\[
\rho(l) := EX_0X_l \quad \text{for every } l\in\mathbb{Z},
\]
so $\rho(0) = 1$ and $|\rho(l)|\leq 1$ for every $l\in\mathbb{Z}$. Without loss of generality, and in order to fit our context, we assume that $X_k = W(h_k)$ with $\|h_k\|_H = 1$ for every $k\in\mathbb{Z}$, where $(W(h), h\in H)$ is an isonormal process as defined in Section 6. Define the quadratic variation sequence
\[
V_N = \frac{1}{\sqrt{N}}\sum_{k=0}^{N}\left(X_k^2 - 1\right), \quad N\geq 1. \qquad (32)
\]
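The variance $EV_N^2$ (denoted $v_N$ below) can be computed exactly: for a jointly standard Gaussian pair with correlation $\rho(k-l)$ one has $E[(X_k^2-1)(X_l^2-1)] = 2\rho(k-l)^2$, hence $EV_N^2 = N^{-1}\sum_{k,l=0}^{N} 2\rho(k-l)^2$, which stabilizes precisely when $\rho\in\ell^2(\mathbb{Z})$. A small Python sketch (our illustration; the covariance $\rho(l) = r^{|l|}$ is just an arbitrary square-summable example):

```python
def v_N(n, rho):
    """Exact E[V_N^2] for V_N = N^{-1/2} * sum_{k=0}^{N} (X_k^2 - 1),
    using E[(X_k^2 - 1)(X_l^2 - 1)] = 2 * rho(k - l)^2 for a standard
    Gaussian pair with correlation rho(k - l)."""
    return sum(2.0 * rho(k - l) ** 2
               for k in range(n + 1) for l in range(n + 1)) / n

# Example: geometrically decaying covariance (square-summable), for
# which v_N tends to 2 * sum_{l in Z} r^(2|l|) = 2*(1 + r^2)/(1 - r^2).
r = 0.5
rho = lambda l: r ** abs(l)
```

With $r = 0.5$ the limit is $2(1+r^2)/(1-r^2) = 10/3$, and $v_N$ approaches it at rate $N^{-1}$.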
Let $v_N := EV_N^2$. Assume $\rho\in\ell^2(\mathbb{Z})$ (the Banach space of square-summable sequences, endowed with the norm $\|\rho\|^2_{\ell^2(\mathbb{Z})} = \sum_{k\in\mathbb{Z}}\rho(k)^2$)