HAL Id: hal-01118172

https://hal.archives-ouvertes.fr/hal-01118172v2

Submitted on 4 Apr 2017

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Simulation of BSDEs with jumps by Wiener Chaos Expansion

Christel Geiss, Céline Labart

To cite this version:

Christel Geiss, Céline Labart. Simulation of BSDEs with jumps by Wiener Chaos Expansion. Stochastic Processes and their Applications, Elsevier, 2016, 126 (7), pp.2123-2162. doi:10.1016/j.spa.2016.01.006. hal-01118172v2


Simulation of BSDEs with jumps by Wiener Chaos Expansion

Christel Geiss^{a,∗}, Céline Labart^{b,∗∗}

^a Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014 University of Jyväskylä, Finland
^b LAMA - Université de Savoie, Campus Scientifique, 73376 Le Bourget du Lac, France

Abstract

We present an algorithm to solve BSDEs with jumps based on Wiener chaos expansion and Picard's iterations. This paper extends the results given in [7] to the case of BSDEs with jumps. We get a forward scheme in which the conditional expectations are easily computed thanks to chaos decomposition formulas. Concerning the error, we derive explicit bounds with respect to the truncation order of the chaos expansion, the discretization time step and the number of Monte Carlo simulations. We also present numerical experiments, which give very encouraging results in terms of speed and accuracy.

Keywords: Backward stochastic differential equations with jumps, Wiener chaos expansion, Numerical method

2000 MSC: 60H10, 60J75, 60H35, 65C05, 65G99, 60H07

1. Introduction

In this paper we are interested in the numerical approximation of the solution $(Y, Z, U)$ to backward stochastic differential equations (BSDEs in the sequel) with jumps of the following form

$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s, U_s)\,ds - \int_t^T Z_s\,dB_s - \int_{]t,T]} U_s\,d\tilde N_s, \quad 0 \le t \le T, \qquad (1)$$

where $B$ is a $1$-dimensional standard Brownian motion and $\tilde N$ is a compensated Poisson process independent of $B$, i.e. $\tilde N_t := N_t - \kappa t$, where $\{N_t\}_{t \ge 0}$ is a Poisson process with intensity $\kappa > 0$. The terminal condition $\xi$ is a real-valued $\mathcal F_T$-measurable random variable, where $\{\mathcal F_t\}_{0 \le t \le T}$ stands for the augmented natural filtration associated with $B$ and $N$. Under standard Lipschitz assumptions on the driver $f$, the existence and uniqueness of the solution have been established by Tang and Li [23], generalizing the seminal paper of Pardoux and Peng [18].

The main objective of this paper is to propose a numerical method to approximate the solution $(Y, Z, U)$ of (1). In the no-jump case, there exist several methods to simulate $(Y, Z)$. The most popular one is the method based on the dynamic programming equation, introduced by Briand, Delyon and Mémin [6]. In the Markovian case, the rate of convergence of the method has been studied by Zhang [24] and Bouchard and Touzi [4]. From a numerical point of view, the main difficulty in solving BSDEs is to compute conditional expectations. Different approaches have been proposed: Malliavin calculus [4], regression methods [10] and quantization techniques [2]. In the general case (i.e. for a terminal condition which is not necessarily Markovian), Briand and Labart [7] have proposed a forward scheme based on Wiener chaos expansion and Picard's iterations. Thanks to the chaos decomposition formulas, conditional expectations are easily computed, which leads to an efficient, fully implementable scheme. In the case of BSDEs driven by a Poisson random measure, Bouchard and Elie [3] have proposed a scheme based on the dynamic programming equation and studied the rate of convergence of the method when the terminal condition is given by $\xi = g(X_T)$, where $g$ is a Lipschitz function and $X$ is a forward process. More recently, Geiss and Steinicke [9] have extended this result to the case of a terminal condition which may be a Borel function of finitely many increments of the Lévy forward process $X$, not necessarily Lipschitz but only satisfying a fractional smoothness condition. In the case of jumps driven by a compensated Poisson process, Lejay, Mordecki and Torres [15] have developed a fully implementable scheme based on a random binomial tree, following the approach proposed by Briand, Delyon and Mémin [5].

∗ Corresponding author: christel.geiss@jyu.fi
∗∗ celine.labart@univ-savoie.fr

In this paper, we extend the algorithm based on Picard's iterations and Wiener chaos expansion introduced in [7] to the case of BSDEs with jumps. Our starting point is the use of Picard's iterations: $(Y^0, Z^0, U^0) = (0, 0, 0)$ and for $q \in \mathbb N$,

$$Y_t^{q+1} = \xi + \int_t^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds - \int_t^T Z_s^{q+1}\, dB_s - \int_{]t,T]} U_s^{q+1}\, d\tilde N_s, \quad 0 \le t \le T.$$

Writing this Picard scheme in a forward way gives

$$Y_t^{q+1} = \mathbb E\Big( \xi + \int_0^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds \,\Big|\, \mathcal F_t \Big) - \int_0^t f(s, Y_s^q, Z_s^q, U_s^q)\, ds,$$

$$Z_t^{q+1} = \mathbb E\big( D_t^{(0)} Y_t^{q+1} \,\big|\, \mathcal F_t \big) = \mathbb E\Big( D_t^{(0)} \Big( \xi + \int_0^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds \Big) \,\Big|\, \mathcal F_t \Big),$$

$$U_t^{q+1} = \mathbb E\big( D_t^{(1)} Y_t^{q+1} \,\big|\, \mathcal F_t \big) = \mathbb E\Big( D_t^{(1)} \Big( \xi + \int_0^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds \Big) \,\Big|\, \mathcal F_t \Big),$$

where $D_t^{(0)} X$ (resp. $D_t^{(1)} X$) stands for the Malliavin derivative of the random variable $X$ with respect to the Brownian motion (resp. w.r.t. the Poisson process).

In order to compute the previous conditional expectations, we use a Wiener chaos expansion of the random variable

$$F^q = \xi + \int_0^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds.$$

More precisely, we use the following orthogonal decomposition of the random variable $F^q$ (see Proposition 2.6):

$$F^q = \mathbb E[F^q] + \sum_{k=1}^\infty \sum_{l=0}^k \sum_{\mathbf k_l \in \mathbb N^l} \sum_{\mathbf j_{k-l} \in \mathbb N^{k-l}} d_{\mathbf k_l, \mathbf j_{k-l}}\, L_l^{0,\dots,0}(\tilde e[k_1, \dots, k_l])\, L_{k-l}^{1,\dots,1}(\tilde e[j_1, \dots, j_{k-l}]),$$


where $L_m^{0,\cdots,0}(g)$ (resp. $L_m^{1,\cdots,1}(g)$) denotes the iterated integral of order $m$ of $g$ w.r.t. the Brownian motion (resp. w.r.t. the compensated Poisson process), and $(\tilde e[k_1, \dots, k_m])_{k_m \in \mathbb N}$ is an orthogonal basis of $(\tilde L^2)^{\otimes m}([0,T])$, the subspace of symmetric functions from $(L^2)^{\otimes m}([0,T])$. The sequence of coefficients $\{d_{\mathbf k_l, \mathbf j_{k-l}}\}_{\mathbf k_l \in \mathbb N^l, \mathbf j_{k-l} \in \mathbb N^{k-l}}$ ensues from the Wiener chaos decomposition of $F^q$.

The key point in obtaining an implementable scheme is that we only keep a finite number of terms in this expansion: we use a finite number of chaoses and we choose a finite number of functions $\{e_1, \cdots, e_N\}$ to build $\{\tilde e[k_1, \cdots, k_m]\}_{k_m \in \{1,\cdots,N\}}$. More precisely, if we choose $e_i := \frac{1}{\sqrt h}\, \mathbf 1_{]t_{i-1}, t_i]}$, where $t_i = ih$ and $h := \frac{T}{N}$, we obtain

$$F^q \sim \mathbb E[F^q] + \sum_{k=1}^p \sum_{|n|=k} d_n^k \prod_{i=1}^N K_{n_i^B}\!\bigg( \frac{B_{t_i} - B_{t_{i-1}}}{\sqrt h} \bigg)\, C_{n_i^P}(N_{t_i} - N_{t_{i-1}}, \kappa h),$$

where $K_i$ (resp. $C_i$) denotes the Hermite (resp. Charlier) polynomial of degree $i$, $n = (n_1^B, \cdots, n_N^B, n_1^P, \cdots, n_N^P)$ is a vector of integers and $|n| = \sum_{i=1}^N (n_i^B + n_i^P)$. By using this approximation of $F^q$ we can easily compute $\mathbb E(F^q \,|\, \mathcal F_t)$, $\mathbb E(D_t^{(0)} F^q \,|\, \mathcal F_t)$ and $\mathbb E(D_t^{(1)} F^q \,|\, \mathcal F_t)$, which gives us $(Y_t^{q+1}, Z_t^{q+1}, U_t^{q+1})$. To get a fully implementable algorithm, it remains to approximate $\mathbb E(F^q)$ and the coefficients $\{d_n^k\}_{n,k}$ by Monte Carlo.
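As an illustration, the Monte Carlo estimation of the coefficients can be tested on a toy functional whose first-order coefficients are known in closed form. The sketch below is ours, not the paper's algorithm: it assumes probabilists' Hermite polynomials for $K_i$ and monic Charlier polynomials for $C_i$ (the paper's exact normalizations may differ), and projects $F = B_T + \tilde N_T$ on the first-order basis products by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

T, N, kappa = 1.0, 2, 1.0        # horizon, number of intervals, Poisson intensity
h = T / N                         # time step, matching e_i = (1/sqrt(h)) 1_{]t_{i-1}, t_i]}
M = 400_000                       # Monte Carlo sample size

# Brownian and Poisson increments on each interval
dB = rng.normal(0.0, np.sqrt(h), size=(M, N))
dN = rng.poisson(kappa * h, size=(M, N))

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n (assumed convention for K_n)."""
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    for k in range(n):
        p_prev, p = p, x * p - k * p_prev
    return p

def charlier(n, x, a):
    """Monic Charlier polynomial C_n(x; a), orthogonal w.r.t. Poisson(a)."""
    p_prev, p = np.zeros_like(x, dtype=float), np.ones_like(x, dtype=float)
    for k in range(n):
        p_prev, p = p, (x - k - a) * p - k * a * p_prev
    return p

# Toy functional F = B_T + (N_T - kappa*T); its chaos expansion only loads
# the first-order basis elements, with coefficients sqrt(h) (Brownian) and 1 (Poisson)
F = dB.sum(axis=1) + (dN.sum(axis=1) - kappa * T)

for i in range(N):
    psi_B = hermite(1, dB[:, i] / np.sqrt(h))           # Brownian basis element
    d_B = np.mean(F * psi_B) / np.mean(psi_B**2)        # MC estimate of the coefficient
    psi_P = charlier(1, dN[:, i].astype(float), kappa * h)
    d_P = np.mean(F * psi_P) / np.mean(psi_P**2)
    print(f"interval {i}: d_B ~ {d_B:.3f} (exact {np.sqrt(h):.3f}), d_P ~ {d_P:.3f} (exact 1.000)")
```

For this $F$ the estimated coefficients should reproduce $\sqrt h$ and $1$ on each interval up to Monte Carlo error, which is the same projection step the full algorithm repeats at every Picard iteration.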

When extending [7] to the jump case, one realizes that the main difficulty lies in the fact that there is no hypercontractivity property for the Poisson chaos decomposition. This property plays an important role in the proof of convergence in the Brownian case. To circumvent this problem, we exploit a recent result of Last, Penrose, Schulte and Thäle [13], which gives a formula for the expectation of products of Poisson multiple integrals, and the corresponding result for the Brownian case from Peccati and Taqqu [19]. In fact, in equation (18) of Proposition 2.9 we obtain an explicit expression for

$$\mathbb E\big( I_{n_1}(f_{n_1}) \cdots I_{n_l}(f_{n_l}) \big)$$

in terms of a combinatorial sum of tensor products of the chaos kernels $f_{n_i}$. Here $I_{n_i}(f_{n_i})$ denotes the multiple integral of order $n_i$ with respect to the process $B + \tilde N$. By this expression one gets the required estimates for the truncated chaos without the hypercontractivity property. Therefore, to prove the convergence of the method we may proceed similarly to [7] and split the error into four terms:

• the error due to Picard's iterations,
• the error due to the truncation of the chaos expansion at order $p$,
• the error due to the finite number of basis functions $\{e_1, \cdots, e_N\}$ for each chaos,
• the error due to the Monte Carlo simulations used to approximate the expectations appearing in the coefficients $\{d_n^k\}_{n,k}$.

The paper is organized as follows: Section 2 contains the notations and gives preliminary results, Section 3 describes the approximation procedure, Section 4 states the convergence results and Section 5 presents the algorithm and some numerical examples. Some technical results are proved in the appendix.


1.1. Definitions and Notations

Given a probability space $(\Omega, \mathcal F, \mathbb P)$ we consider

• $L^p(\mathcal F_T) := L^p(\Omega, \mathcal F_T, \mathbb P)$, $p \in \mathbb N^* = \mathbb N \setminus \{0\}$, the space of all $\mathcal F_T$-measurable random variables (r.v. in the following) $X : \Omega \to \mathbb R$ satisfying $\|X\|_p^p := \mathbb E(|X|^p) < \infty$.

• $S_T^p(\mathbb R)$, $p \in \mathbb N$, $p \ge 2$, the space of all càdlàg, adapted processes $\phi : \Omega \times [0,T] \to \mathbb R$ such that $\|\phi\|^p_{S^p} = \mathbb E(\sup_{t \in [0,T]} |\phi_t|^p) < \infty$.

• $H_T^p(\mathbb R)$, $p \in \mathbb N$, $p \ge 2$, the space of all predictable processes $\phi : \Omega \times [0,T] \to \mathbb R$ such that $\|\phi\|^p_{H_T^p} = \mathbb E(\int_0^T |\phi_t|^p\, dt) < \infty$.

• $L^2(0,T)$, the space of all square integrable functions on $[0,T]$.

• $C^{k,l}$, the set of continuously differentiable functions $\phi : [0,T] \times \mathbb R^3 \to \mathbb R$ with continuous derivatives w.r.t. $t$ (resp. w.r.t. $x$) up to order $k$ (resp. up to order $l$).

• $C_b^{k,l}$, the set of continuously differentiable functions $\phi : [0,T] \times \mathbb R^3 \to \mathbb R$ with continuous and uniformly bounded derivatives w.r.t. $t$ (resp. w.r.t. $x$) up to order $k$ (resp. up to order $l$); the function $\phi$ itself is also bounded.

• $\|\partial_{sp}^j f\|^2$, the sum of the squared norms of the derivatives of $f : [0,T] \times \mathbb R^3 \to \mathbb R$ w.r.t. all the space variables $x$ whose total order equals $j$: $\|\partial_{sp}^j f\|^2 := \sum_{|k|=j} \|\partial^{k_1}_{x_1} \partial^{k_2}_{x_2} \partial^{k_3}_{x_3} f\|^2$, where $|k| = k_1 + k_2 + k_3$.

• $C_p^\infty$, the set of smooth functions $f : \mathbb R^n \to \mathbb R$ ($n \ge 1$) with partial derivatives of polynomial growth.

• $\|(\cdot,\cdot,\cdot)\|^p_{L^p}$, $p \ge 1$, the norm on the space $S_T^p(\mathbb R) \times H_T^p(\mathbb R) \times H_T^p(\mathbb R)$ defined by

$$\|(Y,Z,U)\|^p_{L^p} := \mathbb E\big(\sup_{t \in [0,T]} |Y_t|^p\big) + \int_0^T \mathbb E(|Z_t|^p)\, dt + \kappa \int_0^T \mathbb E(|U_t|^p)\, dt. \qquad (2)$$

Hypothesis 1.1. We assume

• the terminal condition $\xi$ belongs to $L^2(\mathcal F_T)$;

• the generator $f \in C([0,T] \times \mathbb R^3; \mathbb R)$ is Lipschitz continuous in space, uniformly in $t$: there exists a constant $L_f$ such that

$$|f(t, y_1, z_1, u_1) - f(t, y_2, z_2, u_2)| \le L_f\, (|y_1 - y_2| + |z_1 - z_2| + |u_1 - u_2|).$$

Lemma 1.2. If Hypothesis 1.1 is satisfied and $\xi \in \mathbb D^{1,2}$ (defined below), we get from [9, Theorem 3.4] that for a.e. $t \in [0,T]$

$$Z_t = \mathbb E[D_t^{(0)} Y_t \,|\, \mathcal F_{t-}], \qquad U_t = \mathbb E[D_t^{(1)} Y_t \,|\, \mathcal F_{t-}] \quad \mathbb P\text{-a.s.}, \qquad (3)$$

where $D_t^{(0)} X$ stands for the Malliavin derivative of the random variable $X$ w.r.t. the Brownian motion, and $D_t^{(1)} X$ for the Malliavin derivative of $X$ w.r.t. the Poisson process. Here $\mathbb E[\,\cdot\,|\,\mathcal F_{t-}]$ should be understood as the predictable projection, and since the paths $s \mapsto D_t^{(i)} Y_s$ are a.s. càdlàg we define $D_t^{(i)} Y_t := \lim_{s \downarrow t} D_t^{(i)} Y_s$ if the limit exists, and zero otherwise.


2. Wiener Chaos Expansion

2.1. Notations and useful results

2.1.1. Iterated integrals

We refer to [16] and [21] for more details on this section. Let us briefly recall the Wiener chaos expansion in the case of a real-valued Brownian motion and an independent Poisson process with intensity $\kappa > 0$.

We define $G^0(t) = B_t$, $G^1(t) = N_t - \kappa t$, and denote by $L_k^{i_1,\cdots,i_k}(f)$ the iterated integral of $f$ with respect to $G^0$ and $G^1$:

$$L_k^{i_1,\cdots,i_k}(f) = \int_0^T \bigg( \int_0^{t_k} \cdots \bigg( \int_0^{t_2} f(t_1, \dots, t_k)\, dG^{i_1}(t_1) \bigg) \cdots dG^{i_{k-1}}(t_{k-1}) \bigg)\, dG^{i_k}(t_k).$$

We have the following chaotic representation property.

Proposition 2.1. ([16, Proposition 2.1]) For $k \in \mathbb N^*$ define $\mathbf i_k := (i_1, \dots, i_k) \in \{0,1\}^k$. Any $F \in L^2(\mathcal F_T)$ has a unique representation of the form

$$F = \mathbb E(F) + \sum_{k=1}^\infty \sum_{\mathbf i_k \in \{0,1\}^k} L_k^{\mathbf i_k}(f_{\mathbf i_k}), \qquad (4)$$

where $f_{\mathbf i_k} \in L^2(\Sigma_k)$ and $\Sigma_k = \{(t_1, \dots, t_k) \in [0,T]^k : 0 < t_1 < \cdots < t_k < T\}$ is the simplex of $[0,T]^k$.

Let $|\mathbf i_k| := \sum_{j=1}^k i_j$. Due to the isometry property it holds that $\|L_k^{\mathbf i_k}(f)\|_2^2 = \kappa^{|\mathbf i_k|}\, \|f\|^2_{L^2(\Sigma_k)}$, and for any $f \in L^2(\Sigma_k)$, $g \in L^2(\Sigma_m)$, $\mathbf i_k \in \{0,1\}^k$ and $\mathbf j_m \in \{0,1\}^m$ we have (see [16, Proposition 1.1])

$$\mathbb E[L_k^{\mathbf i_k}(f)\, L_m^{\mathbf j_m}(g)] = \begin{cases} \kappa^{|\mathbf i_k|} \int_{\Sigma_k} f(t_1, \cdots, t_k)\, g(t_1, \cdots, t_k)\, dt_1 \cdots dt_k & \text{if } \mathbf i_k = \mathbf j_m, \\ 0 & \text{otherwise.} \end{cases}$$

Then $\|F\|_2^2 = \mathbb E[F]^2 + \sum_{k \ge 1} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)}$. The chaos approximation of $F$ up to order $p$ is defined by

$$C_p(F) := \mathbb E(F) + \sum_{k=1}^p \sum_{\mathbf i_k} L_k^{\mathbf i_k}(f_{\mathbf i_k}) \qquad (5)$$

and $P_k(F) := \sum_{\mathbf i_k} L_k^{\mathbf i_k}(f_{\mathbf i_k})$ is the Wiener chaos of order $k$ of $F$. We have

$$\mathbb E[(P_k(F))^2] = \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)}. \qquad (6)$$
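The isometry and orthogonality relations above are easy to check by simulation for first-order iterated integrals. The following sketch is our own discretized illustration (deterministic integrands $f = \sin$, $g = \cos$, chosen arbitrarily): it verifies that $\mathbb E[L_1^{(0)}(f)\, L_1^{(1)}(g)] = 0$ (different indices) and $\mathbb E[(L_1^{(1)}(g))^2] = \kappa \int_0^T g^2\, dt$ (the factor $\kappa^{|\mathbf i_k|}$ with $|\mathbf i_k| = 1$).

```python
import numpy as np

rng = np.random.default_rng(1)
T, kappa, n, M = 1.0, 2.0, 200, 200_000
dt = T / n
t = (np.arange(n) + 0.5) * dt
f, g = np.sin(t), np.cos(t)                          # deterministic integrands

dB = rng.normal(0.0, np.sqrt(dt), (M, n))            # Brownian increments
dNc = rng.poisson(kappa * dt, (M, n)) - kappa * dt   # compensated Poisson increments

L0_f = (f * dB).sum(axis=1)     # L_1^{(0)}(f) = int_0^T f dB
L1_g = (g * dNc).sum(axis=1)    # L_1^{(1)}(g) = int_0^T g dN~

cross = np.mean(L0_f * L1_g)            # orthogonality: should vanish
second = np.mean(L1_g**2)               # isometry: should equal kappa * int g^2
target = kappa * np.sum(g**2) * dt
print(cross, second, target)
```

Both identities hold exactly in expectation for this discretization, so the printed values agree up to Monte Carlo error.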


• Let $f \in L^2(\Sigma_k)$ and $j \in \{0,1\}$. Following [16], we define the derivative of $L_k^{\mathbf i_k}(f)$ w.r.t. the Brownian motion and the Poisson process as the element of $L^2(\Omega \times [0,T])$ given by

$$D_t^{(j)} L_k^{\mathbf i_k}(f) = \sum_{l=1}^k \mathbf 1_{\{i_l = j\}}\, L_{k-1}^{i_1, \cdots, \widehat{i_l}, \cdots, i_k}\big(f(\underbrace{\cdots}_{l-1}, t, \cdots)\big), \qquad (7)$$

where $\widehat{i}$ means that the $i$-th index is omitted.

• Let $j \in \{0,1\}$. We extend the definition of $D^{(j)}$ to $\mathrm{Dom}\, D^{(j)}$, the set of all $F \in L^2(\mathcal F_T)$ satisfying (4) and

$$\sum_{k=1}^\infty \sum_{\mathbf i_k} \sum_{l=1}^k \mathbf 1_{\{i_l = j\}}\, \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)} < \infty.$$

If $F \in \mathrm{Dom}\, D^{(j)}$ then

$$\|F\|^2_{\mathrm{Dom}\, D^{(j)}} := \mathbb E|F|^2 + \kappa^j\, \mathbb E \int_0^T |D_t^{(j)} F|^2\, dt < \infty.$$

• $F$ with chaotic representation (4) belongs to $\mathrm{Dom}\, D =: \mathbb D^{1,2}$ if $F$ belongs to $\mathrm{Dom}\, D^{(0)} \cap \mathrm{Dom}\, D^{(1)}$, i.e.

$$\|F\|^2_{\mathbb D^{1,2}} := \mathbb E|F|^2 + \sum_{k=1}^\infty k \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)} < \infty.$$

More generally, we define $\mathbb D^{m,2}$ as follows:

• Let $m \ge 1$. We say that $F$ satisfying (4) belongs to $\mathbb D^{m,2}$ if

$$\|F\|^2_{\mathbb D^{m,2}} := \mathbb E|F|^2 + \sum_{l=1}^m \sum_{k=l}^\infty \frac{k!}{(k-l)!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)} < \infty.$$

We recall $\mathbb D^{\infty,2} = \cap_{m=1}^\infty \mathbb D^{m,2}$.

• We define for $l \in \mathbb N^*$ with $l \le m$ the seminorm $\|\cdot\|_{D^l}$ on $\mathbb D^{m,2}$ by

$$\|F\|^2_{D^l} := \sum_{\mathbf i_l} \kappa^{|\mathbf i_l|}\, \mathbb E\Big( \int_0^T \cdots \int_0^T \big| D^{\mathbf i_l}_{t_1, \cdots, t_l} F \big|^2\, dt_1 \cdots dt_l \Big) = \sum_{k=l}^\infty \frac{k!}{(k-l)!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)}, \qquad (8)$$

where $D^{\mathbf i_l}_{t_1, \cdots, t_l} = D^{i_1}_{t_1} \cdots D^{i_l}_{t_l}$ represents the multi-index Malliavin derivative.

Remark 2.2. With this notation we have $\|F\|^2_{\mathbb D^{m,2}} = \mathbb E|F|^2 + \sum_{l=1}^m \|F\|^2_{D^l}$.


• For $m \ge 1$ and $j \in \mathbb N^*$ we define $\mathbb D^{m,j}$ as the space of all $F \in \mathbb D^{m,2}$ such that

$$\|F\|^j_{m,j} := \sum_{1 \le l \le m} \sum_{\mathbf i_l \in \{0,1\}^l} \operatorname*{ess\,sup}_{(t_1, \cdots, t_l) \in [0,T]^l} \mathbb E\big[ |D^{\mathbf i_l}_{t_1, \cdots, t_l} F|^j \big] < \infty.$$

(Since $(\omega, t_1, ..., t_l) \mapsto (D^{\mathbf i_l}_{t_1, \cdots, t_l} F)(\omega)$ is regarded as an element of $L^2(\Omega \times [0,T]^l)$ w.r.t. the measure $\mathbb P \otimes \lambda^d$, where $\lambda^d$ denotes the Lebesgue measure on $\mathbb R^d$, we use the essential supremum w.r.t. $\lambda^d$.)

• $S^{m,j}$ denotes the space of all triples of processes $(Y, Z, U)$ belonging to $S_T^j(\mathbb R) \times H_T^j(\mathbb R) \times H_T^j(\mathbb R)$ such that

$$\|(Y,Z,U)\|^j_{m,j} := \sum_{1 \le l \le m} \sum_{\mathbf i_l} \operatorname*{ess\,sup}_{(t_1, \cdots, t_l) \in [0,T]^l} \big\|(D^{\mathbf i_l}_{t_1, \cdots, t_l} Y,\, D^{\mathbf i_l}_{t_1, \cdots, t_l} Z,\, D^{\mathbf i_l}_{t_1, \cdots, t_l} U)\big\|^j_{L^j} < \infty,$$

where $\|\cdot\|^j_{L^j}$ has been defined in (2). We denote $S^{m,\infty} := \cap_{j=1}^\infty S^{m,j}$.

Remark 2.3. If $F := g(G)$, where $g : \mathbb R \to \mathbb R$ is a $C_b^1$ function and $G \in \mathbb D^{1,2}$, we have (following [8, Proposition 5.1]) that

$$(D_t^{(0)} F,\, D_t^{(1)} F) = \big(g'(G)\, D_t^{(0)} G,\; g(G + D_t^{(1)} G) - g(G)\big).$$

Moreover, using notation (8), we get

$$\|F\|^2_{D^1} = \|D^{(0)} F\|^2_{L^2(\Omega \times [0,T])} + \kappa\, \|D^{(1)} F\|^2_{L^2(\Omega \times [0,T])} \le \|g'\|_\infty^2\, \|G\|^2_{D^1}.$$

More generally, if $g : \mathbb R \to \mathbb R$ is a $C_b^m$ function and $G \in \mathbb D^{m,2}$, we have

$$\|F\|^2_{D^m} \le C\big(m, \{\|g^{(k)}\|_\infty\}_{k \le m}, \|G\|_{\mathbb D^{m,2}}\big),$$

where $C(m, \{\|g^{(k)}\|_\infty\}_{k \le m}, \|G\|_{\mathbb D^{m,2}})$ is a constant depending on $m$, $\{\|g^{(k)}\|_\infty\}_{k \le m}$ and $\|G\|_{\mathbb D^{m,2}}$.

Lemma 2.4. Let $1 \le m \le p+1$ and $F \in \mathbb D^{m,2}$. We have

$$\mathbb E[|F - C_p(F)|^2] \le \frac{\|F\|^2_{D^m}}{(p+2-m) \cdots (p+1)}.$$

Proof. Using (6), we get

$$\mathbb E[|F - C_p(F)|^2] = \sum_{k \ge p+1} \mathbb E[P_k(F)^2] = \sum_{k \ge p+1} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)} = \sum_{k \ge p+1} \frac{k!}{(k-m)!} \frac{(k-m)!}{k!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)}$$
$$\le \frac{1}{(p+2-m) \cdots (p+1)} \sum_{k \ge p+1} \frac{k!}{(k-m)!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)} \le \frac{1}{(p+2-m) \cdots (p+1)} \sum_{k \ge m} \frac{k!}{(k-m)!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|}\, \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)},$$

and by (8) the last sum equals $\|F\|^2_{D^m}$, which proves the claim.
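The gain from smoothness in Lemma 2.4 can be checked numerically on a classical Brownian-only example: for $F = \exp(B_T - T/2)$ the $k$-th Wiener chaos has squared norm $T^k/k!$, so both sides of the bound are explicit. The following is a pure arithmetic sketch of that comparison (our choice of $T$, $p$, $m$, no simulation involved).

```python
import math

T, p, m = 1.0, 4, 3          # horizon, chaos order, smoothness order (m <= p + 1)

# For F = exp(B_T - T/2) the k-th Wiener chaos has squared norm T^k / k!,
# so the truncation error E|F - C_p(F)|^2 is the tail of the exponential series.
trunc_err = sum(T**k / math.factorial(k) for k in range(p + 1, 60))

# ||F||_{D^m}^2 = sum_{k>=m} k!/(k-m)! * T^k/k! = T^m * e^T
norm_Dm = T**m * math.exp(T)
denom = math.prod(range(p + 2 - m, p + 2))   # (p+2-m) * ... * (p+1)
bound = norm_Dm / denom

print(trunc_err, bound)
```

The printed truncation error is indeed below the bound, and increasing $m$ (up to $p+1$) shrinks the denominator-free factor exactly as the lemma predicts.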


2.1.2. Multiple integrals

In the following, $\lambda$ denotes the Lebesgue measure. Setting

$$M(ds, dx) := dG^0(s)\, d\delta_0(x) + dG^1(s)\, d\delta_1(x),$$

we get an independent random measure in the sense of Itô (see [11]). There exists a chaotic representation by multiple integrals w.r.t. this random measure $M$ which is equivalent to Proposition 2.1.

Proposition 2.5. ([11]) Any $F \in L^2(\mathcal F_T)$ can be represented as

$$F = \mathbb E[F] + \sum_{k=1}^\infty I_k(g_k), \qquad (9)$$

with $g_k \in (L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1)) := (L^2)^{\otimes k}\big([0,T] \times \{0,1\},\, \mathcal B([0,T]) \otimes 2^{\{0,1\}},\, \lambda \otimes (\delta_0 + \kappa\delta_1)\big)$.

This representation is unique if we assume that the functions $g_k(z_1, ..., z_k)$ with $z_i = (t_i, x_i) \in [0,T] \times \{0,1\}$ are symmetric.

For the definition of the multiple integrals $I_k(g_k)$ we refer to [11] or [12]. But using the fact that the representations in Propositions 2.1 and 2.5 are both unique, we conclude for symmetric $g_k$ the relation

$$I_k(g_k) = k! \sum_{\mathbf i_k} L_k^{\mathbf i_k}\big(g_k((\cdot, i_1), \cdots, (\cdot, i_k))\big), \qquad (10)$$

where $\mathbf i_k$ is defined in Proposition 2.1 and

$$g_k((t_1, i_1), \cdots, (t_k, i_k)) = \frac{1}{k!}\, f_{\mathbf i_k}(t_1, \dots, t_k) \quad \text{on } \Sigma_k,$$

with $f_{\mathbf i_k}$ from Proposition 2.1.

Moreover, for symmetric $g_k \in (L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1))$ and $f_m \in (L^2)^{\otimes m}(\lambda \otimes (\delta_0 + \kappa\delta_1))$ the relation

$$\mathbb E[I_k(g_k)\, I_m(f_m)] = \begin{cases} k!\, \langle g_k, f_k \rangle_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1))} & \text{if } k = m, \\ 0 & \text{otherwise,} \end{cases} \qquad (11)$$

holds true.

If $F \in \mathbb D^{m,2}$, we combine (9), (10) and [16, Definition 1.7] (which extends (7) to functions defined on $L^2([0,T]^k)$) to get

$$g_k((t_1, i_1), \dots, (t_k, i_k)) = \frac{1}{k!}\, \mathbb E\, D^{\mathbf i_k}_{t_1, \dots, t_k} F, \quad k \le m. \qquad (12)$$

On the other hand, this can be easily derived if one takes into account that for $F = \mathbb E[F] + \sum_{k=1}^\infty I_k(g_k)$ we have $D^{i_1}_{t_1} F = \sum_{k=1}^\infty k\, I_{k-1}(g_k((t_1, i_1), \cdot))$, and that the expectation of any multiple integral of order $k > 0$ is zero while $I_0$ is the identity map.

For the implementation of the numerical scheme we will use Hermite and Charlier polynomials. In order to do so, we provide a chaotic representation consisting only of iterated integrals of the form $L^{0,\dots,0}$ and $L^{1,\dots,1}$, for which the relations (22) and (23) below can be used.

Use $\{p_0, p_1\} = \{\mathbf 1_{\{0\}}, \frac{1}{\sqrt\kappa}\mathbf 1_{\{1\}}\}$ as orthonormal basis of $L^2(\{0,1\}, 2^{\{0,1\}}, \delta_0 + \kappa\delta_1)$ and fix an orthonormal basis $\{e_k\}_{k \in \mathbb N}$ of $L^2([0,T], \mathcal B([0,T]), \lambda)$. By setting

$$e[(k_1, i_1), \dots, (k_m, i_m)] := (e_{k_1} p_{i_1}) \otimes \dots \otimes (e_{k_m} p_{i_m}), \quad k_j \in \mathbb N,\ i_j \in \{0,1\},$$

we get an orthonormal basis of $(L^2)^{\otimes m}(\lambda \otimes (\delta_0 + \kappa\delta_1))$. The symmetrizations

$$\tilde e[(k_1, i_1), \dots, (k_m, i_m)] := \frac{1}{m!} \sum_{\pi \in S_m} e[(k_{\pi(1)}, i_{\pi(1)}), \dots, (k_{\pi(m)}, i_{\pi(m)})], \quad k_j \in \mathbb N,\ i_j \in \{0,1\}, \qquad (13)$$

form an orthogonal basis of $(\tilde L^2)^{\otimes m}(\lambda \otimes (\delta_0 + \kappa\delta_1))$, the subspace of symmetric functions from $(L^2)^{\otimes m}(\lambda \otimes (\delta_0 + \kappa\delta_1))$.

We will also use the notation

$$\tilde e[k_1, \dots, k_m] := \frac{1}{m!} \sum_{\pi \in S_m} e_{k_{\pi(1)}} \otimes \cdots \otimes e_{k_{\pi(m)}}, \quad k_j \in \mathbb N,$$

where $S_m$ stands for the set of all permutations of $\{1, ..., m\}$.

Proposition 2.6. Any $F \in L^2(\mathcal F_T)$ can be represented as

$$F = \mathbb E[F] + \sum_{k=1}^\infty \sum_{l=0}^k \sum_{\mathbf k_l \in \mathbb N^l} \sum_{\mathbf j_{k-l} \in \mathbb N^{k-l}} d_{\mathbf k_l, \mathbf j_{k-l}}\, L_l^{0,\dots,0}(\tilde e[k_1, \dots, k_l])\, L_{k-l}^{1,\dots,1}(\tilde e[j_1, \dots, j_{k-l}]),$$

where

$$d_{\mathbf k_l, \mathbf j_{k-l}} = \frac{l!\,(k-l)!\, \big\langle g_k,\, e[(k_1,0), \dots, (k_l,0)] \otimes e[(j_1,1), \dots, (j_{k-l},1)] \big\rangle_{(L^2)^{\otimes k}}}{\kappa^{\frac{k-l}{2}}\, \big\|\tilde e[(k_1,0), \dots, (k_l,0), (j_1,1), \dots, (j_{k-l},1)]\big\|^2_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1))}}.$$

Proof. According to [11, Theorem 1] a permutation of the coordinates of the kernels does not change the multiple integral, i.e. for any $\pi \in S_k$ we have

$$I_k(e[(k_1, i_1), \dots, (k_k, i_k)]) = I_k(e[(k_{\pi(1)}, i_{\pi(1)}), \dots, (k_{\pi(k)}, i_{\pi(k)})]).$$

For any $\pi$ with $(i_{\pi(1)}, \dots, i_{\pi(k)}) = (0, \dots, 0, 1, \dots, 1)$ (we assume that $(i_1, \dots, i_k)$ contains $l$ zeros) it holds by the product formula for multiple integrals (see Appendix A.5 or [14, Theorem 3.6])

$$I_k(e[(k_1, i_1), \dots, (k_k, i_k)]) = I_l(e[(k_{\pi(1)}, 0), \dots, (k_{\pi(l)}, 0)])\, I_{k-l}(e[(k_{\pi(l+1)}, 1), \dots, (k_{\pi(k)}, 1)]), \qquad (14)$$

since

$$e[(k_{\pi(1)}, i_{\pi(1)}), \dots, (k_{\pi(k)}, i_{\pi(k)})] = e[(k_{\pi(1)}, 0), \dots, (k_{\pi(l)}, 0)] \otimes e[(k_{\pi(l+1)}, 1), \dots, (k_{\pi(k)}, 1)],$$

and for the contraction-identification $\otimes_m^r$ (for the definition see (A.12)) it holds

$$e[(k_{\pi(1)}, 0), \dots, (k_{\pi(l)}, 0)] \otimes_m^r e[(k_{\pi(l+1)}, 1), \dots, (k_{\pi(k)}, 1)] = 0$$

if $r \neq 0$ or $m \neq 0$. Since

$$e[(k_{\pi(l+1)}, 1), \dots, (k_{\pi(k)}, 1)] = \frac{1}{\kappa^{\frac{k-l}{2}}}\, e[k_{\pi(l+1)}, \dots, k_{\pi(k)}],$$

we conclude from (14) and (10) that

$$I_k(e[(k_1, i_1), \dots, (k_k, i_k)]) = \frac{l!\,(k-l)!}{\kappa^{\frac{k-l}{2}}}\, L_l^{0,\dots,0}(\tilde e[k_{\pi(1)}, \dots, k_{\pi(l)}])\, L_{k-l}^{1,\dots,1}(\tilde e[k_{\pi(l+1)}, \dots, k_{\pi(k)}]). \qquad (15)$$

The symmetric functions $g_k$ from Proposition 2.5 can be written as

$$g_k = \sum_{l=0}^k \sum_{\mathbf k_l} \sum_{\mathbf j_{k-l}} \big\langle g_k,\, e[(k_1,0), ..., (k_l,0)] \otimes e[(j_1,1), ..., (j_{k-l},1)] \big\rangle_{(L^2)^{\otimes k}}\; \tilde e[(k_1,0), ..., (k_l,0), (j_1,1), ..., (j_{k-l},1)]\; c_{\mathbf k_l, \mathbf j_{k-l}}, \qquad (16)$$

where we sum over all $\mathbf k_l \in \mathbb N^l$ and $\mathbf j_{k-l} \in \mathbb N^{k-l}$, and

$$c_{\mathbf k_l, \mathbf j_{k-l}} = \big\|\tilde e[(k_1,0), ..., (k_l,0), (j_1,1), ..., (j_{k-l},1)]\big\|^{-2}_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1))}$$

denotes the normalizing factor. Abbreviating

$$d_{\mathbf k_l, \mathbf j_{k-l}} := \frac{l!\,(k-l)!}{\kappa^{\frac{k-l}{2}}}\, \big\langle g_k,\, e[(k_1,0), ..., (k_l,0)] \otimes e[(j_1,1), ..., (j_{k-l},1)] \big\rangle_{(L^2)^{\otimes k}}\; c_{\mathbf k_l, \mathbf j_{k-l}},$$

we conclude from Proposition 2.5, (15) and (16) the orthogonal decomposition

$$F = \mathbb E[F] + \sum_{k=1}^\infty \sum_{l=0}^k \sum_{\mathbf k_l} \sum_{\mathbf j_{k-l}} d_{\mathbf k_l, \mathbf j_{k-l}}\, L_l^{0,\dots,0}(\tilde e[k_1, \dots, k_l])\, L_{k-l}^{1,\dots,1}(\tilde e[j_1, \dots, j_{k-l}]). \qquad (17)$$

Lemma 2.7. Fix $N \in \mathbb N^*$ and let

$$e[(k_1, 0), ..., (k_l, 0), (j_1, 1), ..., (j_{k-l}, 1)] = \bigotimes_{i=1}^N (e_i p_0)^{\otimes n_i^B} \otimes \bigotimes_{j=1}^N (e_j p_1)^{\otimes n_j^P},$$

i.e. $n_i^B$ and $n_i^P$ ($1 \le i \le N$) denote the multiplicities of the functions $e_i p_0$ and $e_i p_1$, respectively, so that $|n^B| = |(n_1^B, ..., n_N^B)| = l$ and $|n^P| = |(n_1^P, ..., n_N^P)| = k - l$. Let $n^A! := n_1^A! \cdots n_N^A!$ for $A = B, P$ and define $n := (n^B, n^P)$, so that $|n| = |n^B| + |n^P|$. Then

$$c_{\mathbf k_l, \mathbf j_{k-l}} = \big\|\tilde e[(k_1, 0), ..., (k_l, 0), (j_1, 1), ..., (j_{k-l}, 1)]\big\|^{-2}_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1))} = \frac{|n|!}{n^B!\, n^P!}.$$
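The combinatorial value of $c_{\mathbf k_l, \mathbf j_{k-l}}$ in Lemma 2.7 can be confirmed by brute force in a finite-dimensional stand-in: replace the orthonormal factors $e_i p_0$, $e_j p_1$ by orthonormal vectors of $\mathbb R^d$, build the symmetrized tensor explicitly as a Kronecker product, and compare its squared norm with $n^B!\, n^P!/|n|!$. The helper below is our own construction, not from the paper.

```python
import itertools
import math
import numpy as np

def sym_tensor_norm_sq(factor_ids, dim):
    """Squared norm of (1/k!) * sum_pi h_pi(1) x ... x h_pi(k), where h_j is the
    orthonormal basis vector of R^dim with index factor_ids[j]."""
    basis = np.eye(dim)
    k = len(factor_ids)
    s = np.zeros(dim**k)
    for perm in itertools.permutations(factor_ids):   # all k! orderings, duplicates included
        t = basis[perm[0]]
        for idx in perm[1:]:
            t = np.kron(t, basis[idx])
        s += t
    s /= math.factorial(k)
    return float(s @ s)

# multiplicities n = (2, 1, 1): h_1 = h_2 = v_0, h_3 = v_1, h_4 = v_2
norm_sq = sym_tensor_norm_sq([0, 0, 1, 2], dim=3)
c = 1.0 / norm_sq                                    # normalizing factor of Lemma 2.7
multinomial = math.factorial(4) // (math.factorial(2) * 1 * 1)
print(c, multinomial)                                # both equal 12
```

The inverse of the symmetrization norm indeed reproduces the multinomial coefficient $|n|!/(n^B!\, n^P!)$, which is what the lemma asserts for the basis functions $\tilde e$.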


Proof. To compute $c_{\mathbf k_l, \mathbf j_{k-l}}$ notice that the functions $h_j := e_{k_j} p_{i_j}$ and $h_{j'}$ ($1 \le j, j' \le k$) are either equal or orthogonal in $L^2(\lambda \otimes (\delta_0 + \kappa\delta_1))$. Denoting

$$e[(k_1, 0), ..., (k_l, 0), (j_1, 1), ..., (j_{k-l}, 1)] = h_1 \otimes \cdots \otimes h_k$$

yields

$$\big\|\tilde e[(k_1, 0), ..., (k_l, 0), (j_1, 1), ..., (j_{k-l}, 1)]\big\|^2_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1))} = \Big\|\frac{1}{k!} \sum_{\pi \in S_k} h_{\pi(1)} \otimes \cdots \otimes h_{\pi(k)}\Big\|^2_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1))}$$
$$= \frac{n^B!\, n^P!}{k!}\, \Big\|\bigotimes_{i=1}^N (e_i p_0)^{\otimes n_i^B} \otimes \bigotimes_{j=1}^N (e_j p_1)^{\otimes n_j^P}\Big\|^2_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa\delta_1))} = \frac{n^B!\, n^P!}{k!}.$$

Since $|n| = k$, this proves the claim.

Remark 2.8. We deduce from (17) that

$$C_p(F) = \mathbb E[F] + \sum_{k=1}^p \sum_{l=0}^k \sum_{\mathbf k_l} \sum_{\mathbf j_{k-l}} d_{\mathbf k_l, \mathbf j_{k-l}}\, L_l^{0,\dots,0}(\tilde e[k_1, \dots, k_l])\, L_{k-l}^{1,\dots,1}(\tilde e[j_1, \dots, j_{k-l}]).$$

In order to compute the expectation of products of multiple integrals (see formula (18) below) we introduce some notation following [13], [22], [19] and [17].

• If $n \in \mathbb N^*$ then $[n] := \{1, \dots, n\}$.

• For $J \subseteq [n]$ we denote by $O_J^n$ the singleton containing that $x \in \{0,1\}^n$ for which $x_i = 0 \iff i \in J$ holds.

• If $n_1, \dots, n_l$ ($l \in \mathbb N^*$) are given and $n := n_1 + \cdots + n_l$, we will denote by $\Psi$ the 'natural' partition of $[n]$ given by the summands $n_i$:

$$\Psi := \{\Psi_1, \dots, \Psi_l\} := \{\{1, \dots, n_1\}, \dots, \{n_1 + \cdots + n_{l-1} + 1, \dots, n\}\}.$$

• Let $\Pi_n$ denote the set of all partitions of $[n]$ (a partition means here a set of disjoint non-empty subsets of $[n]$ such that their union is $[n]$) and $\Pi_n^*$ denote the set of all subpartitions of $[n]$ (any set of disjoint non-empty subsets of $[n]$ is a subpartition).

• Let $\Pi(n_1, \dots, n_l) \subseteq \Pi_n$ (respectively $\Pi^*(n_1, \dots, n_l) \subseteq \Pi_n^*$) denote the set of all $\sigma \in \Pi_n$ (respectively $\sigma \in \Pi_n^*$) with $|\Psi_i \cap J| \le 1$ for $1 \le i \le l$ and all $J \in \sigma$.

• Let $\Pi_{\ge 2}(n_1, \dots, n_l)$ (respectively $\Pi_{=2}(n_1, \dots, n_l)$) denote the set of all $\sigma \in \Pi(n_1, \dots, n_l)$ with $|J| \ge 2$ (respectively $|J| = 2$) for all $J \in \sigma$.


• In order to distinguish between integration w.r.t. the Brownian motion and the compensated Poisson process we consider $J_B \subseteq [n]$ ($J_B$ will stand for integration w.r.t. the Brownian motion) and introduce $\Pi_{=2,\ge2}(J_B; n_1, \dots, n_l)$ as the set of all pairs $(\tau, \sigma)$ of subpartitions from $\Pi^*(n_1, \dots, n_l)$ such that for all $J \in \tau$: $|J| = 2$ and $\bigcup_{J \in \tau} J = J_B$, as well as for all $J \in \sigma$: $|J| \ge 2$ and $\bigcup_{J \in \sigma} J = [n] \setminus J_B$.

• For $\tau \in \Pi_n^*$ let $|\tau| := \#\{J \subseteq [n] : J \in \tau\}$, i.e. the number of its blocks, and $\|\tau\| := \#\bigcup_{J \in \tau} J$.

• For $(\tau, \sigma) \in \Pi_{=2,\ge2}(J_B; n_1, \dots, n_l)$ and $f : ([0,T] \times \{0,1\})^n \to \mathbb R$ we define $f_{\tau \cup \sigma} : [0,T]^{|\tau|+|\sigma|} \to \mathbb R$ by identifying the time variables of each block of $\tau \cup \sigma$ and setting $x_i = 0$ for $i \in \bigcup_{J \in \tau} J$ and $x_i = 1$ for $i \in \bigcup_{J \in \sigma} J$. In order to make this map unique we first identify the time variables of that block of $\tau \cup \sigma$ which contains the smallest number and denote all identified variables by $t_1$. Next we choose from the remaining blocks the one containing the smallest number and use $t_2$ for all identified variables, and so on.

Example: Let $n_1 = 2$, $n_2 = 2$ and $n_3 = 3$. Then $\Psi = \{\{1,2\}, \{3,4\}, \{5,6,7\}\}$. If $J_B = \{2,4,6,7\}$ and $\tau = \{\{2,6\}, \{4,7\}\}$, $\sigma = \{\{1,3,5\}\}$, we change by $\tau \cup \sigma$ the function $f((t_1, x_1), \cdots, (t_7, x_7))$ into

$$f_{\tau \cup \sigma}(t_1, t_2, t_3) = f((t_1, 1), (t_2, 0), (t_1, 1), (t_3, 0), (t_1, 1), (t_2, 0), (t_3, 0)).$$
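The identification rule in the example above is mechanical enough to script. The small sketch below (our own helper, not from the paper) rebuilds the argument pattern of $f_{\tau\cup\sigma}$ from the blocks of $\tau$ and $\sigma$, assigning $t_1, t_2, \dots$ to blocks ordered by their smallest element.

```python
# Blocks from the example; tau blocks carry x = 0 (Brownian),
# sigma blocks carry x = 1 (compensated Poisson).
tau = [{2, 6}, {4, 7}]
sigma = [{1, 3, 5}]
n = 7

blocks = [(b, 0) for b in tau] + [(b, 1) for b in sigma]
# assign t_1, t_2, ... to blocks ordered by their smallest element
blocks.sort(key=lambda bx: min(bx[0]))

arg = {}
for time_idx, (block, x) in enumerate(blocks, start=1):
    for pos in block:
        arg[pos] = (time_idx, x)

pattern = [arg[pos] for pos in range(1, n + 1)]
print(pattern)
# -> [(1, 1), (2, 0), (1, 1), (3, 0), (1, 1), (2, 0), (3, 0)]
```

The printed pattern matches the displayed arguments of $f_{\tau\cup\sigma}(t_1, t_2, t_3)$ position by position.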

Proposition 2.9. Let $f_{n_i} \in (L^2)^{\otimes n_i}(\lambda \otimes (\delta_0 + \kappa\delta_1))$ ($n_i \in \mathbb N$ for $1 \le i \le l$) be symmetric and assume that for all $(\tau, \sigma) \in \Pi_{=2,\ge2}(J_B; n_1, \dots, n_l)$ it holds

$$\int_{[0,T]^{|\tau|+|\sigma|}} \Big( \bigotimes_{i=1}^l |f_{n_i}| \Big)_{\tau \cup \sigma} d\lambda^{|\tau|+|\sigma|} < \infty.$$

Then

$$\mathbb E\big( \Pi_{i=1}^l I_{n_i}(f_{n_i}) \big) = \sum_{J_B \subseteq [n]} \sum_{(\tau,\sigma) \in \Pi_{=2,\ge2}(J_B; n_1, \dots, n_l)} \kappa^{|\sigma|} \int_{[0,T]^{|\tau|+|\sigma|}} \Big( \bigotimes_{i=1}^l f_{n_i} \Big)_{\tau \cup \sigma} d\lambda^{|\tau|+|\sigma|}. \qquad (18)$$

Proof. Let us assume for the moment that the $f_{n_i}$ are of the form

$$f_{n_i}((t_1, x_1), \cdots, (t_{n_i}, x_{n_i})) = \Pi_{k=1}^{n_i} d_i(t_k, x_k) \qquad (19)$$

for some $d_i \in L^2(\lambda \otimes (\delta_0 + \kappa\delta_1))$.

If $J_i \subseteq [n_i]$ and $n_i^0 = \#J_i$, we let $I^B_{n_i^0}$ denote the multiple integral of order $n_i^0$ w.r.t. the Brownian motion and $I^P_{n_i^1}$ the multiple integral of order $n_i^1$ ($n_i^1 := n_i - n_i^0$) w.r.t. the compensated Poisson process. Similarly to (14) we get

$$I_{n_i}\big( d_i^{\otimes n_i}\, \mathbf 1_{O^{n_i}_{J_i}} \big) = I^B_{n_i^0}\big( [d_i(\cdot,0)]^{\otimes n_i^0} \big)\, I^P_{n_i^1}\big( [d_i(\cdot,1)]^{\otimes n_i^1} \big).$$

Consequently, since
