HAL Id: hal-01118172
https://hal.archives-ouvertes.fr/hal-01118172v2
Submitted on 4 Apr 2017
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Simulation of BSDEs with jumps by Wiener Chaos Expansion
Christel Geiss, Céline Labart
To cite this version:
Christel Geiss, Céline Labart. Simulation of BSDEs with jumps by Wiener Chaos Expansion. Stochastic Processes and their Applications, Elsevier, 2016, 126 (7), pp.2123-2162. 10.1016/j.spa.2016.01.006. hal-01118172v2
Simulation of BSDEs with jumps by Wiener Chaos Expansion
Christel Geiss^{a,∗}, Céline Labart^{b,∗∗}

^a Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014 University of Jyväskylä, Finland
^b LAMA - Université de Savoie, Campus Scientifique, 73376 Le Bourget du Lac, France
Abstract
We present an algorithm to solve BSDEs with jumps based on Wiener Chaos Expansion and Picard's iterations. This paper extends the results given in [7] to the case of BSDEs with jumps. We get a forward scheme where the conditional expectations are easily computed thanks to chaos decomposition formulas. Concerning the error, we derive explicit bounds with respect to the order of the chaos truncation, the discretization time step and the number of Monte Carlo simulations. We also present numerical experiments, which give very encouraging results in terms of speed and accuracy.
Keywords: Backward stochastic differential equations with jumps, Wiener chaos expansion, Numerical method
2000 MSC: 60H10, 60J75, 60H35, 65C05, 65G99, 60H07

1. Introduction
In this paper we are interested in the numerical approximation of solutions $(Y, Z, U)$ to backward stochastic differential equations (BSDEs in the sequel) with jumps of the following form

$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s, U_s)\, ds - \int_t^T Z_s\, dB_s - \int_{]t,T]} U_s\, d\tilde N_s, \quad 0 \le t \le T, \qquad (1)$$

where $B$ is a one-dimensional standard Brownian motion and $\tilde N$ is a compensated Poisson process independent of $B$, i.e. $\tilde N_t := N_t - \kappa t$ and $\{N_t\}_{t \ge 0}$ is a Poisson process with intensity $\kappa > 0$. The terminal condition $\xi$ is a real-valued $\mathcal F_T$-measurable random variable, where $\{\mathcal F_t\}_{0 \le t \le T}$ stands for the augmented natural filtration associated with $B$ and $N$. Under standard Lipschitz assumptions on the driver $f$, the existence and uniqueness of the solution have been established by Tang and Li [23], generalizing the seminal paper of Pardoux and Peng [18].
The main objective of this paper is to propose a numerical method to approximate the solution $(Y, Z, U)$ of (1). In the no-jump case, there exist several methods to simulate $(Y, Z)$. The most popular one is the method based on the dynamic programming equation, introduced by Briand, Delyon and Mémin [6]. In the Markovian case, the rate of convergence of the method has been studied by Zhang [24] and Bouchard and Touzi [4]. From a numerical point of view, the main difficulty in solving BSDEs is to compute conditional expectations. Different approaches have been proposed: Malliavin calculus [4], regression methods [10] and quantization techniques [2]. In the general case (i.e. for a terminal condition which is not necessarily Markovian), Briand and Labart [7] have proposed a forward scheme based on Wiener chaos expansion and Picard's iterations. Thanks to the chaos decomposition formulas, conditional expectations are easily computed, which leads to an efficient, fully implementable scheme. In the case of BSDEs driven by a Poisson random measure, Bouchard and Elie [3] have proposed a scheme based on the dynamic programming equation and studied the rate of convergence of the method when the terminal condition is given by $\xi = g(X_T)$, where $g$ is a Lipschitz function and $X$ is a forward process. More recently, Geiss and Steinicke [9] have extended this result to the case of a terminal condition which may be a Borel function of finitely many increments of the Lévy forward process $X$, which is not necessarily Lipschitz but only satisfies a fractional smoothness condition. In the case of jumps driven by a compensated Poisson process, Lejay, Mordecki and Torres [15] have developed a fully implementable scheme based on a random binomial tree, following the approach proposed by Briand, Delyon and Mémin [5].

∗ christel.geiss@jyu.fi (corresponding author)
∗∗ celine.labart@univ-savoie.fr
In this paper, we extend the algorithm based on Picard's iterations and Wiener chaos expansion introduced in [7] to the case of BSDEs with jumps. Our starting point is the use of Picard's iterations: $(Y^0, Z^0, U^0) = (0, 0, 0)$ and for $q \in \mathbb N$,

$$Y_t^{q+1} = \xi + \int_t^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds - \int_t^T Z_s^{q+1}\, dB_s - \int_{]t,T]} U_s^{q+1}\, d\tilde N_s, \quad 0 \le t \le T.$$
Writing this Picard scheme in a forward way gives

$$Y_t^{q+1} = \mathbb E\!\left(\xi + \int_0^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds \,\Big|\, \mathcal F_t\right) - \int_0^t f(s, Y_s^q, Z_s^q, U_s^q)\, ds,$$
$$Z_t^{q+1} = \mathbb E\!\left(D^{(0)}_t Y_t^{q+1} \,\Big|\, \mathcal F_{t^-}\right) = \mathbb E\!\left(D^{(0)}_t \Big(\xi + \int_0^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds\Big) \,\Big|\, \mathcal F_{t^-}\right),$$
$$U_t^{q+1} = \mathbb E\!\left(D^{(1)}_t Y_t^{q+1} \,\Big|\, \mathcal F_{t^-}\right) = \mathbb E\!\left(D^{(1)}_t \Big(\xi + \int_0^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds\Big) \,\Big|\, \mathcal F_{t^-}\right),$$

where $D^{(0)}_t X$ (resp. $D^{(1)}_t X$) stands for the Malliavin derivative of the random variable $X$ with respect to the Brownian motion (resp. w.r.t. the Poisson process).
In order to compute the previous conditional expectations, we use a Wiener chaos expansion of the random variable

$$F^q = \xi + \int_0^T f(s, Y_s^q, Z_s^q, U_s^q)\, ds.$$
More precisely, we use the following orthogonal decomposition of the random variable $F^q$ (see Proposition 2.6):

$$F^q = \mathbb E[F^q] + \sum_{k=1}^{\infty} \sum_{l=0}^{k} \sum_{\mathbf k_l \in \mathbb N^l} \sum_{\mathbf j_{k-l} \in \mathbb N^{k-l}} d_{\mathbf k_l, \mathbf j_{k-l}}\, L^l_{0,\dots,0}(\tilde e[k_1, \dots, k_l])\, L^{k-l}_{1,\dots,1}(\tilde e[j_1, \dots, j_{k-l}]),$$

where $L^m_{0,\dots,0}(g)$ (resp. $L^m_{1,\dots,1}(g)$) denotes the iterated integral of order $m$ of $g$ w.r.t. the Brownian motion (resp. w.r.t. the compensated Poisson process), and $(\tilde e[k_1, \dots, k_m])_{\mathbf k_m \in \mathbb N^m}$ is an orthogonal basis of $(\tilde L^2)^{\otimes m}([0,T])$, the subspace of symmetric functions of $(L^2)^{\otimes m}([0,T])$. The sequence of coefficients $\{d_{\mathbf k_l, \mathbf j_{k-l}}\}_{\mathbf k_l \in \mathbb N^l,\, \mathbf j_{k-l} \in \mathbb N^{k-l}}$ ensues from the Wiener chaos decomposition of $F^q$.
The point in getting an implementable scheme is that we only keep a finite number of terms in this expansion: we use a finite number of chaoses and we choose a finite number of functions $\{e_1, \dots, e_N\}$ to build $\{\tilde e[k_1, \dots, k_m]\}_{\mathbf k_m \in \{1,\dots,N\}^m}$. More precisely, if we choose $e_i := \frac{1}{\sqrt h} \mathbf 1_{]t_{i-1}, t_i]}$ where $t_i = ih$ and $h := \frac{T}{N}$, we obtain

$$F^q \sim \mathbb E[F^q] + \sum_{k=1}^{p} \sum_{|n|=k} d_n^k \prod_{i=1}^{N} K_{n_i^B}\!\left(\frac{B_{t_i} - B_{t_{i-1}}}{\sqrt h}\right) C_{n_i^P}(N_{t_i} - N_{t_{i-1}}, \kappa h),$$

where $K_i$ (resp. $C_i$) denotes the Hermite (resp. Charlier) polynomial of degree $i$, $n = (n_1^B, \dots, n_N^B, n_1^P, \dots, n_N^P)$ is a vector of integers and $|n| = \sum_{i=1}^N (n_i^B + n_i^P)$. By using this approximation of $F^q$ we can easily compute $\mathbb E(F^q | \mathcal F_t)$, $\mathbb E(D^{(0)}_t F^q | \mathcal F_{t^-})$ and $\mathbb E(D^{(1)}_t F^q | \mathcal F_{t^-})$, which gives us $(Y_t^{q+1}, Z_t^{q+1}, U_t^{q+1})$. To get a fully implementable algorithm, it remains to approximate $\mathbb E(F^q)$ and the coefficients $\{d_n^k\}_{n,k}$ by Monte Carlo simulations.
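As an illustration of how the factors in the truncated expansion above can be evaluated on a simulated path, here is a minimal Python sketch. It is not from the paper: the function and parameter names are ours, and the Charlier recurrence uses one common normalization, $a\,c_{n+1}(x;a) = (n + a - x)\,c_n(x;a) - n\,c_{n-1}(x;a)$, which may differ from the convention used later in the paper.

```python
import math

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the three-term
    recurrence He_{k+1}(x) = x*He_k(x) - k*He_{k-1}(x)."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def charlier(n, x, a):
    """Charlier polynomial c_n(x; a), here normalized so that c_0 = 1 and
    c_1(x; a) = (a - x)/a (one common convention, assumed for this sketch)."""
    if n == 0:
        return 1.0
    c_prev, c = 1.0, (a - x) / a
    for k in range(1, n):
        c_prev, c = c, ((k + a - x) * c - k * c_prev) / a
    return c

def chaos_term(nB, nP, dB, dN, h, kappa):
    """One product term of the truncated expansion for a single simulated path:
    prod_i K_{nB_i}((B_{t_i}-B_{t_{i-1}})/sqrt(h)) * C_{nP_i}(N_{t_i}-N_{t_{i-1}}, kappa*h),
    where dB and dN are the Brownian and Poisson increments over the grid."""
    term = 1.0
    for i in range(len(nB)):
        term *= hermite(nB[i], dB[i] / math.sqrt(h))
        term *= charlier(nP[i], dN[i], kappa * h)
    return term
```

Averaging such terms over independent simulated paths gives the Monte Carlo estimates of the coefficients $d_n^k$ mentioned above.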
When extending [7] to the jump case one realizes that the main difficulty lies in the fact that there is no hypercontractivity property for the Poisson chaos decomposition. This property plays an important role in the proof of convergence in the Brownian case. To circumvent this problem, we exploit a recent result of Last, Penrose, Schulte and Thäle [13], which gives a formula to compute the expectation of products of Poisson multiple integrals, and the corresponding result for the Brownian case from Peccati and Taqqu [19]. In fact, in equation (18) of Proposition 2.9 we get an explicit expression for

$$\mathbb E\big(I_{n_1}(f_{n_1}) \cdots I_{n_l}(f_{n_l})\big)$$

in terms of a combinatorial sum of tensor products of the chaos kernels $f_{n_i}$. Here $I_{n_i}(f_{n_i})$ denotes the multiple integral of order $n_i$ with respect to the process $B + \tilde N$. By this expression one gets the required estimates for the truncated chaos without the hypercontractivity property. Therefore, to prove the convergence of the method we may proceed similarly to [7], and split the error into four terms:
• the error due to Picard iterations,
• the error due to the truncation of the chaos expansion at order $p$,
• the error due to the finite number of basis functions $\{e_1, \dots, e_N\}$ for each chaos,
• the error due to the Monte Carlo simulations used to approximate the expectations appearing in the coefficients $\{d_n^k\}_{n,k}$.
The paper is organized as follows: Section 2 contains the notations and gives preliminary
results, Section 3 describes the approximation procedure, Section 4 states the convergence
results and Section 5 presents the algorithm and some numerical examples. Some technical
results are proved in the appendix.
1.1. Definitions and Notations

Given a probability space $(\Omega, \mathcal F, \mathbb P)$ we consider

• $L^p(\mathcal F_T) := L^p(\Omega, \mathcal F_T, \mathbb P)$, $p \in \mathbb N^* = \mathbb N \setminus \{0\}$, the space of all $\mathcal F_T$-measurable random variables (r.v. in the following) $X : \Omega \to \mathbb R$ satisfying $\|X\|_p^p := \mathbb E(|X|^p) < \infty$.

• $S^p_T(\mathbb R)$, $p \in \mathbb N$, $p \ge 2$, the space of all càdlàg, adapted processes $\phi : \Omega \times [0,T] \to \mathbb R$ such that $\|\phi\|^p_{S^p} = \mathbb E(\sup_{t \in [0,T]} |\phi_t|^p) < \infty$.

• $H^p_T(\mathbb R)$, $p \in \mathbb N$, $p \ge 2$, the space of all predictable processes $\phi : \Omega \times [0,T] \to \mathbb R$ such that $\|\phi\|^p_{H^p_T} = \mathbb E(\int_0^T |\phi_t|^p\, dt) < \infty$.

• $L^2(0,T)$, the space of all square integrable functions on $[0,T]$.

• $C^{k,l}$, the set of continuously differentiable functions $\phi : [0,T] \times \mathbb R^3 \to \mathbb R$ with continuous derivatives w.r.t. $t$ (resp. w.r.t. $x$) up to order $k$ (resp. up to order $l$).

• $C_b^{k,l}$, the set of continuously differentiable functions $\phi : [0,T] \times \mathbb R^3 \to \mathbb R$ with continuous and uniformly bounded derivatives w.r.t. $t$ (resp. w.r.t. $x$) up to order $k$ (resp. up to order $l$). The function $\phi$ is also bounded.

• $\|\partial^j_{sp} f\|^2_\infty$, the sum of the squared sup-norms of the derivatives of $f : [0,T] \times \mathbb R^3 \to \mathbb R$ w.r.t. all the space variables $x$ whose orders sum to $j$: $\|\partial^j_{sp} f\|^2_\infty := \sum_{|k|=j} \|\partial^{k_1}_{x_1} \partial^{k_2}_{x_2} \partial^{k_3}_{x_3} f\|^2_\infty$, where $|k| = k_1 + k_2 + k_3$.

• $C_p^\infty$, the set of smooth functions $f : \mathbb R^n \to \mathbb R$ ($n \ge 1$) with partial derivatives of polynomial growth.

• $\|(\cdot,\cdot,\cdot)\|^p_{L^p}$, $p \ge 1$, the norm on the space $S^p_T(\mathbb R) \times H^p_T(\mathbb R) \times H^p_T(\mathbb R)$ defined by

$$\|(Y, Z, U)\|^p_{L^p} := \mathbb E\Big(\sup_{t \in [0,T]} |Y_t|^p\Big) + \int_0^T \mathbb E(|Z_t|^p)\, dt + \kappa \int_0^T \mathbb E(|U_t|^p)\, dt. \qquad (2)$$

Hypothesis 1.1. We assume
• the terminal condition $\xi$ belongs to $L^2(\mathcal F_T)$;

• the generator $f \in C([0,T] \times \mathbb R^3; \mathbb R)$ is Lipschitz continuous in space, uniformly in $t$: there exists a constant $L_f$ such that

$$|f(t, y_1, z_1, u_1) - f(t, y_2, z_2, u_2)| \le L_f\big(|y_1 - y_2| + |z_1 - z_2| + |u_1 - u_2|\big).$$
Lemma 1.2. If Hypothesis 1.1 is satisfied and $\xi \in \mathbb D^{1,2}$ (defined below), we get from [9, Theorem 3.4] that for a.e. $t \in [0,T]$

$$Z_t = \mathbb E[D^{(0)}_t Y_t \,|\, \mathcal F_{t^-}], \qquad U_t = \mathbb E[D^{(1)}_t Y_t \,|\, \mathcal F_{t^-}] \quad \mathbb P\text{-a.s.}, \qquad (3)$$

where $D^{(0)}_t X$ stands for the Malliavin derivative of the random variable $X$ w.r.t. the Brownian motion, and $D^{(1)}_t X$ stands for the Malliavin derivative of $X$ w.r.t. the Poisson process. Here $\mathbb E[\cdot \,|\, \mathcal F_{t^-}]$ should be understood as the predictable projection, and since the paths $s \mapsto D^{(i)}_t Y_s$ are a.s. càdlàg we define $D^{(i)}_t Y_t := \lim_{s \downarrow t} D^{(i)}_t Y_s$ if the limit exists, and zero otherwise.
2. Wiener Chaos Expansion

2.1. Notations and useful results

2.1.1. Iterated integrals

We refer to [16] and [21] for more details on this section. Let us briefly recall the Wiener chaos expansion in the case of a real-valued Brownian motion and an independent Poisson process with intensity $\kappa > 0$.

We define $G_0(t) = B_t$, $G_1(t) = N_t - \kappa t$, and $L^k_{i_1,\dots,i_k}(f)$ the iterated integral of $f$ with respect to $G_0$ and $G_1$:

$$L^k_{i_1,\dots,i_k}(f) = \int_0^T \left( \int_0^{t_k^-} \cdots \left( \int_0^{t_2^-} f(t_1, \dots, t_k)\, dG_{i_1}(t_1) \right) \cdots\, dG_{i_{k-1}}(t_{k-1}) \right) dG_{i_k}(t_k).$$
We have the following chaotic representation property.
Proposition 2.1. ([16, Proposition 2.1]) For $k \in \mathbb N^*$ define $\mathbf i_k := (i_1, \dots, i_k) \in \{0,1\}^k$. Any $F \in L^2(\mathcal F_T)$ has a unique representation of the form

$$F = \mathbb E(F) + \sum_{k=1}^{\infty} \sum_{\mathbf i_k \in \{0,1\}^k} L^k_{\mathbf i_k}(f_{\mathbf i_k}), \qquad (4)$$

where $f_{\mathbf i_k} \in L^2(\Sigma_k)$ and $\Sigma_k = \{(t_1, \dots, t_k) \in [0,T]^k : 0 < t_1 < \cdots < t_k < T\}$ is the simplex of $[0,T]^k$.
Let $|\mathbf i_k| := \sum_{j=1}^k i_j$. Due to the isometry property it holds $\|L^k_{\mathbf i_k}(f)\|^2 = \kappa^{|\mathbf i_k|} \|f\|^2_{\Sigma_k}$, and for any $f \in L^2(\Sigma_k)$, $g \in L^2(\Sigma_m)$, $\mathbf i_k \in \{0,1\}^k$ and $\mathbf j_m \in \{0,1\}^m$ we have (see [16, Proposition 1.1])

$$\mathbb E[L^k_{\mathbf i_k}(f)\, L^m_{\mathbf j_m}(g)] = \begin{cases} \kappa^{|\mathbf i_k|} \int_{\Sigma_k} f(t_1, \dots, t_k)\, g(t_1, \dots, t_k)\, dt_1 \cdots dt_k & \text{if } \mathbf i_k = \mathbf j_m, \\ 0 & \text{otherwise.} \end{cases}$$

Then $\|F\|^2 = \mathbb E[F]^2 + \sum_{k \ge 1} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{L^2(\Sigma_k)}$. The chaos approximation of $F$ up to order $p$ is defined by

$$C_p(F) := \mathbb E(F) + \sum_{k=1}^{p} \sum_{\mathbf i_k} L^k_{\mathbf i_k}(f_{\mathbf i_k}) \qquad (5)$$

and $P_k(F) := \sum_{\mathbf i_k} L^k_{\mathbf i_k}(f_{\mathbf i_k})$ is the Wiener chaos of order $k$ of $F$. We have

$$\mathbb E[(P_k(F))^2] = \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k}. \qquad (6)$$
• Let $f \in L^2(\Sigma_k)$ and $j \in \{0,1\}$. Following [16], we define the derivative of $L^k_{\mathbf i_k}(f)$ w.r.t. the Brownian motion and the Poisson process as the element of $L^2(\Omega \times [0,T])$ given by

$$D^{(j)}_t L^k_{\mathbf i_k}(f) = \sum_{l=1}^{k} \mathbf 1_{\{i_l = j\}}\, L^{k-1}_{i_1, \dots, \widehat{i_l}, \dots, i_k}\big(f(\underbrace{\cdots}_{l-1}, t, \cdots)\big), \qquad (7)$$

where $\widehat{i}$ means that the $i$-th index is omitted.

• Let $j \in \{0,1\}$. We extend the definition of $D^{(j)}$ to

$$\mathrm{Dom}\, D^{(j)} := \Big\{ F \in L^2(\mathcal F_T) \text{ satisfying (4) and } \sum_{k=1}^{\infty} \sum_{\mathbf i_k} \sum_{l=1}^{k} \mathbf 1_{\{i_l = j\}}\, \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k} < \infty \Big\}.$$

If $F \in \mathrm{Dom}\, D^{(j)}$ then

$$\|F\|^2_{\mathrm{Dom}\, D^{(j)}} := \mathbb E|F|^2 + \kappa^j\, \mathbb E \int_0^T |D^{(j)}_t F|^2\, dt < \infty.$$
• $F$ with chaotic representation (4) belongs to $\mathrm{Dom}\, D =: \mathbb D^{1,2}$ if $F$ belongs to $\mathrm{Dom}\, D^{(0)} \cap \mathrm{Dom}\, D^{(1)}$, i.e.

$$\|F\|^2_{\mathbb D^{1,2}} := \mathbb E|F|^2 + \sum_{k=1}^{\infty} k \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k} < \infty.$$

More generally, we define $\mathbb D^{m,2}$ as follows:

• Let $m \ge 1$. We say that $F$ satisfying (4) belongs to $\mathbb D^{m,2}$ if it holds

$$\|F\|^2_{\mathbb D^{m,2}} := \mathbb E|F|^2 + \sum_{l=1}^{m} \sum_{k=l}^{\infty} \frac{k!}{(k-l)!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k} < \infty.$$

We recall $\mathbb D^{\infty,2} = \cap_{m=1}^{\infty} \mathbb D^{m,2}$.
We define for $l \in \mathbb N^*$ with $l \le m$ the seminorm $\|\cdot\|_{\mathbb D^l}$ on $\mathbb D^{m,2}$ by

$$\|F\|^2_{\mathbb D^l} := \sum_{\mathbf i_l} \kappa^{|\mathbf i_l|}\, \mathbb E\left( \int_0^T \cdots \int_0^T \big| D^{\mathbf i_l}_{t_1, \dots, t_l} F \big|^2\, dt_1 \cdots dt_l \right) = \sum_{k=l}^{\infty} \frac{k!}{(k-l)!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k}, \qquad (8)$$

where $D^{\mathbf i_l}_{t_1, \dots, t_l} = D^{i_1}_{t_1} \cdots D^{i_l}_{t_l}$ represents the multi-index Malliavin derivative.

Remark 2.2. By using this notation we have $\|F\|^2_{\mathbb D^{m,2}} = \mathbb E|F|^2 + \sum_{l=1}^{m} \|F\|^2_{\mathbb D^l}$.
• For $m \ge 1$ and $j \in \mathbb N^*$ we define $\mathbb D^{m,j}$ as the space of all $F \in \mathbb D^{m,2}$ such that

$$\|F\|^j_{m,j} := \sum_{1 \le l \le m} \sum_{\mathbf i_l \in \{0,1\}^l}\ \operatorname*{ess\,sup}_{(t_1, \dots, t_l) \in [0,T]^l} \mathbb E\big[ |D^{\mathbf i_l}_{t_1, \dots, t_l} F|^j \big] < \infty.$$

(Since $(\omega, t_1, \dots, t_l) \mapsto (D^{\mathbf i_l}_{t_1, \dots, t_l} F)(\omega)$ is regarded as an element of $L^2(\Omega \times [0,T]^l)$ w.r.t. the measure $\mathbb P \otimes \lambda_d$ ($\lambda_d$ denotes the Lebesgue measure on $\mathbb R^d$), we use the essential supremum w.r.t. $\lambda_d$.)

• $S^{m,j}$ denotes the space of all triples of processes $(Y, Z, U)$ belonging to $S^j_T(\mathbb R) \times H^j_T(\mathbb R^d) \times H^j_T(\mathbb R)$ and such that

$$\|(Y, Z, U)\|^j_{m,j} := \sum_{1 \le l \le m} \sum_{\mathbf i_l}\ \operatorname*{ess\,sup}_{(t_1, \dots, t_l) \in [0,T]^l} \big\| \big( D^{\mathbf i_l}_{t_1, \dots, t_l} Y,\ D^{\mathbf i_l}_{t_1, \dots, t_l} Z,\ D^{\mathbf i_l}_{t_1, \dots, t_l} U \big) \big\|^j_{L^j} < \infty,$$

where $\|\cdot\|^j_{L^j}$ has been defined in (2). We denote $S^{m,\infty} := \cap_{j=1}^{\infty} S^{m,j}$.
Remark 2.3. If $F := g(G)$, where $g : \mathbb R \to \mathbb R$ is a $C^1_b$ function and $G \in \mathbb D^{1,2}$, we have (following [8, Proposition 5.1]) that

$$(D^{(0)}_t F,\ D^{(1)}_t F) = \big(g'(G)\, D^{(0)}_t G,\ g(G + D^{(1)}_t G) - g(G)\big).$$

Moreover, using Notation (8), we get

$$\|F\|^2_{\mathbb D^1} = \|D^{(0)} F\|^2_{L^2(\Omega \times [0,T])} + \kappa \|D^{(1)} F\|^2_{L^2(\Omega \times [0,T])} \le \|g'\|^2_\infty\, \|G\|^2_{\mathbb D^1}.$$

More generally, if $g : \mathbb R \to \mathbb R$ is a $C^m_b$ function and $G \in \mathbb D^{m,2}$, we have

$$\|F\|^2_{\mathbb D^m} \le C(m, \{\|g^{(k)}\|_\infty\}_{k \le m}, \|G\|_{\mathbb D^{m,2}}),$$

where $C(m, \{\|g^{(k)}\|_\infty\}_{k \le m}, \|G\|_{\mathbb D^{m,2}})$ is a constant depending on $m$, $\{\|g^{(k)}\|_\infty\}_{k \le m}$ and $\|G\|_{\mathbb D^{m,2}}$.

Lemma 2.4. Let $1 \le m \le p+1$ and $F \in \mathbb D^{m,2}$. We have

$$\mathbb E[|F - C_p(F)|^2] \le \frac{\|F\|^2_{\mathbb D^m}}{(p+2-m) \cdots (p+1)}.$$

Proof. Using (6), we get

$$\mathbb E[|F - C_p(F)|^2] = \sum_{k \ge p+1} \mathbb E[P_k(F)^2] = \sum_{k \ge p+1} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k}$$
$$= \sum_{k \ge p+1} \frac{k!}{(k-m)!}\, \frac{(k-m)!}{k!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k}$$
$$\le \frac{1}{(p+2-m) \cdots (p+1)} \sum_{k \ge p+1} \frac{k!}{(k-m)!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k}$$
$$\le \frac{1}{(p+2-m) \cdots (p+1)} \sum_{k \ge m} \frac{k!}{(k-m)!} \sum_{\mathbf i_k} \kappa^{|\mathbf i_k|} \|f_{\mathbf i_k}\|^2_{\Sigma_k}.$$

By (8), the last sum equals $\|F\|^2_{\mathbb D^m}$, which gives the claim. □
2.1.2. Multiple integrals

In the following, $\lambda$ denotes the Lebesgue measure. Setting

$$M(ds, dx) := dG_0(s)\, d\delta_0(x) + dG_1(s)\, d\delta_1(x),$$

we get an independent random measure in the sense of Itô (see [11]). There exists a chaotic representation by multiple integrals w.r.t. this random measure $M$ which is equivalent to Proposition 2.1.

Proposition 2.5. ([11]) Any $F \in L^2(\mathcal F_T)$ can be represented as

$$F = \mathbb E[F] + \sum_{k=1}^{\infty} I_k(g_k), \qquad (9)$$

with $g_k \in (L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1)) := (L^2)^{\otimes k}\big([0,T] \times \{0,1\},\ \mathcal B([0,T]) \otimes 2^{\{0,1\}},\ \lambda \otimes (\delta_0 + \kappa \delta_1)\big)$. This representation is unique if we assume that the functions $g_k(z_1, \dots, z_k)$ with $z_i = (t_i, x_i) \in [0,T] \times \{0,1\}$ are symmetric.
For the definition of the multiple integrals $I_k(g_k)$ we refer to [11] or [12]. But using the fact that the representations in Propositions 2.1 and 2.5 are both unique, we conclude for symmetric $g_k$ the relation

$$I_k(g_k) = k! \sum_{\mathbf i_k} L^k_{\mathbf i_k}\big(g_k((\cdot, i_1), \dots, (\cdot, i_k))\big), \qquad (10)$$

where $\mathbf i_k$ is defined in Proposition 2.1 and

$$g_k((t_1, i_1), \dots, (t_k, i_k)) = \frac{1}{k!}\, f_{\mathbf i_k}(t_1, \dots, t_k) \quad \text{on } \Sigma_k,$$

with $f_{\mathbf i_k}$ from Proposition 2.1.

Moreover, for symmetric $g_k \in (L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1))$ and $f_m \in (L^2)^{\otimes m}(\lambda \otimes (\delta_0 + \kappa \delta_1))$ the relation

$$\mathbb E[I_k(g_k)\, I_m(f_m)] = \begin{cases} k!\, \langle g_k, f_k \rangle_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1))} & \text{if } k = m, \\ 0 & \text{otherwise,} \end{cases} \qquad (11)$$

holds true. If $F \in \mathbb D^{m,2}$, we combine (9), (10) and [16, Definition 1.7] (which extends (7) to functions defined on $L^2([0,T])^k$) to get

$$g_k((t_1, i_1), \dots, (t_k, i_k)) = \frac{1}{k!}\, \mathbb E\, D^{\mathbf i_k}_{t_1, \dots, t_k} F, \quad k \le m. \qquad (12)$$

On the other hand, this can be easily derived if one takes into account that for $F = \mathbb E[F] + \sum_{k=1}^{\infty} I_k(g_k)$ we have $D^{i_1}_{t_1} F = \sum_{k=1}^{\infty} k\, I_{k-1}\big(g_k((t_1, i_1), \cdot)\big)$, and that the expectation of any multiple integral of order $k > 0$ is zero while $I_0$ is the identity map.
For the implementation of the numerical scheme we will use Hermite and Charlier polynomials. In order to do so, we provide a chaotic representation consisting only of iterated integrals of the form $L_{0,\dots,0}$ and $L_{1,\dots,1}$, for which the relations (22) and (23) below can be used.

We use $\{p_0, p_1\} = \{\mathbf 1_{\{0\}}, \frac{1}{\sqrt\kappa} \mathbf 1_{\{1\}}\}$ as orthonormal basis of $L^2(\{0,1\}, 2^{\{0,1\}}, \delta_0 + \kappa \delta_1)$ and fix an orthonormal basis $\{e_k\}_{k \in \mathbb N}$ of $L^2([0,T], \mathcal B([0,T]), \lambda)$. By setting

$$e[(k_1, i_1), \dots, (k_m, i_m)] := (e_{k_1} \otimes p_{i_1}) \otimes \dots \otimes (e_{k_m} \otimes p_{i_m}), \quad k_j \in \mathbb N,\ i_j \in \{0,1\},$$

we get an orthonormal basis of $(L^2)^{\otimes m}(\lambda \otimes (\delta_0 + \kappa \delta_1))$. The symmetrizations

$$\tilde e[(k_1, i_1), \dots, (k_m, i_m)] := \frac{1}{m!} \sum_{\pi \in S_m} e[(k_{\pi(1)}, i_{\pi(1)}), \dots, (k_{\pi(m)}, i_{\pi(m)})], \quad k_j \in \mathbb N,\ i_j \in \{0,1\}, \qquad (13)$$

form an orthogonal basis of $(\tilde L^2)^{\otimes m}(\lambda \otimes (\delta_0 + \kappa \delta_1))$, the subspace of symmetric functions of $(L^2)^{\otimes m}(\lambda \otimes (\delta_0 + \kappa \delta_1))$.

We will also use the notation

$$\tilde e[k_1, \dots, k_m] := \frac{1}{m!} \sum_{\pi \in S_m} e_{k_{\pi(1)}} \otimes \dots \otimes e_{k_{\pi(m)}}, \quad k_j \in \mathbb N,$$

where $S_m$ stands for the set of all permutations of $\{1, \dots, m\}$.
Proposition 2.6. Any $F \in L^2(\mathcal F_T)$ can be represented as

$$F = \mathbb E[F] + \sum_{k=1}^{\infty} \sum_{l=0}^{k} \sum_{\mathbf k_l \in \mathbb N^l} \sum_{\mathbf j_{k-l} \in \mathbb N^{k-l}} d_{\mathbf k_l, \mathbf j_{k-l}}\, L^l_{0,\dots,0}(\tilde e[k_1, \dots, k_l])\, L^{k-l}_{1,\dots,1}(\tilde e[j_1, \dots, j_{k-l}]),$$

where

$$d_{\mathbf k_l, \mathbf j_{k-l}} = \frac{l!\,(k-l)!\, \big\langle g_k,\ e[(k_1,0), \dots, (k_l,0)] \otimes e[(j_1,1), \dots, (j_{k-l},1)] \big\rangle_{(L^2)^{\otimes k}}}{\kappa^{\frac{k-l}{2}}\, \big\|\tilde e[(k_1,0), \dots, (k_l,0), (j_1,1), \dots, (j_{k-l},1)]\big\|^2_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1))}}.$$
Proof. According to [11, Theorem 1] a permutation of the coordinates of the kernels does not change the multiple integral, i.e. for any $\pi \in S_k$ we have

$$I_k(\tilde e[(k_1, i_1), \dots, (k_k, i_k)]) = I_k(e[(k_{\pi(1)}, i_{\pi(1)}), \dots, (k_{\pi(k)}, i_{\pi(k)})]).$$

For any $\pi$ with $(i_{\pi(1)}, \dots, i_{\pi(k)}) = (0, \dots, 0, 1, \dots, 1)$ (we assume that $(i_1, \dots, i_k)$ contains $l$ zeros) it holds by the product formula for multiple integrals (see Appendix A.5 or [14, Theorem 3.6])

$$I_k(\tilde e[(k_1, i_1), \dots, (k_k, i_k)]) = I_l\big(e[(k_{\pi(1)}, 0), \dots, (k_{\pi(l)}, 0)]\big)\, I_{k-l}\big(e[(k_{\pi(l+1)}, 1), \dots, (k_{\pi(k)}, 1)]\big) \qquad (14)$$

since

$$e[(k_{\pi(1)}, i_{\pi(1)}), \dots, (k_{\pi(k)}, i_{\pi(k)})] = e[(k_{\pi(1)}, 0), \dots, (k_{\pi(l)}, 0)] \otimes e[(k_{\pi(l+1)}, 1), \dots, (k_{\pi(k)}, 1)],$$

and for the contraction-identification $\otimes_r^m$ (for the definition see (A.12)) it holds

$$e[(k_{\pi(1)}, 0), \dots, (k_{\pi(l)}, 0)] \otimes_r^m e[(k_{\pi(l+1)}, 1), \dots, (k_{\pi(k)}, 1)] = 0$$

if $r \ne 0$ or $m \ne 0$. Since

$$e[(k_{\pi(l+1)}, 1), \dots, (k_{\pi(k)}, 1)] = \frac{1}{\kappa^{\frac{k-l}{2}}}\, e[k_{\pi(l+1)}, \dots, k_{\pi(k)}],$$

we conclude from (14) and (10) that

$$I_k(\tilde e[(k_1, i_1), \dots, (k_k, i_k)]) = \frac{l!\,(k-l)!}{\kappa^{\frac{k-l}{2}}}\, L^l_{0,\dots,0}(\tilde e[k_{\pi(1)}, \dots, k_{\pi(l)}])\, L^{k-l}_{1,\dots,1}(\tilde e[k_{\pi(l+1)}, \dots, k_{\pi(k)}]). \qquad (15)$$

The symmetric functions $g_k$ from Proposition 2.5 can be written as

$$g_k = \sum_{l=0}^{k} \sum_{\mathbf k_l} \sum_{\mathbf j_{k-l}} \big\langle g_k, e[(k_1,0), \dots, (k_l,0)] \otimes e[(j_1,1), \dots, (j_{k-l},1)] \big\rangle_{(L^2)^{\otimes k}}\ \tilde e[(k_1,0), \dots, (k_l,0), (j_1,1), \dots, (j_{k-l},1)]\, c_{\mathbf k_l, \mathbf j_{k-l}}, \qquad (16)$$

where we sum over all $\mathbf k_l \in \mathbb N^l$ and $\mathbf j_{k-l} \in \mathbb N^{k-l}$, and

$$c_{\mathbf k_l, \mathbf j_{k-l}} = \big\|\tilde e[(k_1,0), \dots, (k_l,0), (j_1,1), \dots, (j_{k-l},1)]\big\|^{-2}_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1))}$$

denotes the normalizing factor. Abbreviating

$$d_{\mathbf k_l, \mathbf j_{k-l}} := \frac{l!\,(k-l)!}{\kappa^{\frac{k-l}{2}}}\, \big\langle g_k, e[(k_1,0), \dots, (k_l,0)] \otimes e[(j_1,1), \dots, (j_{k-l},1)] \big\rangle_{(L^2)^{\otimes k}}\, c_{\mathbf k_l, \mathbf j_{k-l}},$$

we conclude from Proposition 2.5, (15) and (16) the orthogonal decomposition

$$F = \mathbb E[F] + \sum_{k=1}^{\infty} \sum_{l=0}^{k} \sum_{\mathbf k_l} \sum_{\mathbf j_{k-l}} d_{\mathbf k_l, \mathbf j_{k-l}}\, L^l_{0,\dots,0}(\tilde e[k_1, \dots, k_l])\, L^{k-l}_{1,\dots,1}(\tilde e[j_1, \dots, j_{k-l}]). \qquad (17)$$
Lemma 2.7. Fix $N \in \mathbb N^*$ and let

$$e[(k_1,0), \dots, (k_l,0), (j_1,1), \dots, (j_{k-l},1)] = \bigotimes_{i=1}^{N} (e_i \otimes p_0)^{\otimes n_i^B} \otimes \bigotimes_{j=1}^{N} (e_j \otimes p_1)^{\otimes n_j^P},$$

i.e. $n_i^B$ and $n_i^P$ ($1 \le i \le N$) denote the multiplicities of the functions $e_i \otimes p_0$ and $e_i \otimes p_1$, respectively, so that $|n^B| = |(n_1^B, \dots, n_N^B)| = l$ and $|n^P| = |(n_1^P, \dots, n_N^P)| = k - l$. Let $n^A! := n_1^A! \cdots n_N^A!$ for $A = B, P$ and define $n := (n^B, n^P)$ so that $|n| = |n^B| + |n^P|$. Then

$$c_{\mathbf k_l, \mathbf j_{k-l}} = \big\|\tilde e[(k_1,0), \dots, (k_l,0), (j_1,1), \dots, (j_{k-l},1)]\big\|^{-2}_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1))} = \frac{|n|!}{n^B!\, n^P!}.$$
Proof. To compute $c_{\mathbf k_l, \mathbf j_{k-l}}$ notice that the functions $h_j := e_{k_j} \otimes p_{i_j}$ and $h_{j'}$ ($1 \le j, j' \le k$) are either equal or orthogonal in $L^2(\lambda \otimes (\delta_0 + \kappa \delta_1))$. Writing

$$e[(k_1,0), \dots, (k_l,0), (j_1,1), \dots, (j_{k-l},1)] = h_1 \otimes \dots \otimes h_k$$

yields

$$\big\|\tilde e[(k_1,0), \dots, (k_l,0), (j_1,1), \dots, (j_{k-l},1)]\big\|^2_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1))} = \Big\| \frac{1}{k!} \sum_{\pi \in S_k} h_{\pi(1)} \otimes \dots \otimes h_{\pi(k)} \Big\|^2_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1))}$$
$$= \frac{n^B!\, n^P!}{k!}\, \Big\| \bigotimes_{i=1}^{N} (e_i \otimes p_0)^{\otimes n_i^B} \otimes \bigotimes_{j=1}^{N} (e_j \otimes p_1)^{\otimes n_j^P} \Big\|^2_{(L^2)^{\otimes k}(\lambda \otimes (\delta_0 + \kappa \delta_1))} = \frac{n^B!\, n^P!}{k!}. \qquad \square$$
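As a sanity check, the normalizing factor $c_{\mathbf k_l, \mathbf j_{k-l}} = |n|!/(n^B!\, n^P!)$ of Lemma 2.7 is simply a multinomial coefficient over the multiplicities. A small Python sketch (our own illustration, with names of our choosing):

```python
from math import factorial

def normalizing_factor(nB, nP):
    """Multinomial coefficient |n|! / (nB! * nP!) from Lemma 2.7,
    where nA! = nA_1! * ... * nA_N! for A = B, P."""
    total = factorial(sum(nB) + sum(nP))   # |n|!
    prod = 1
    for a in list(nB) + list(nP):          # product of all coordinate factorials
        prod *= factorial(a)
    return total // prod

# Example: nB = (2, 1), nP = (1, 0) gives |n| = 4 and c = 4!/(2!*1!*1!*0!) = 12.
```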
Remark 2.8. We deduce from (17) that

$$C_p(F) = \mathbb E[F] + \sum_{k=1}^{p} \sum_{l=0}^{k} \sum_{\mathbf k_l} \sum_{\mathbf j_{k-l}} d_{\mathbf k_l, \mathbf j_{k-l}}\, L^l_{0,\dots,0}(\tilde e[k_1, \dots, k_l])\, L^{k-l}_{1,\dots,1}(\tilde e[j_1, \dots, j_{k-l}]).$$
In order to compute the expectation of products of multiple integrals (see formula (18) below) we introduce some notation following [13], [22], [19] and [17].

• If $n \in \mathbb N^*$ then $[n] := \{1, \dots, n\}$.

• For $J \subseteq [n]$ we denote by $O^n_J$ the singleton containing that $x \in \{0,1\}^n$ for which $x_i = 0 \iff i \in J$ holds.

• If $n_1, \dots, n_l$ ($l \in \mathbb N^*$) are given and $n := n_1 + \dots + n_l$, we will denote by $\Psi$ the 'natural' partition of $[n]$ given by the summands $n_i$:

$$\Psi := \{\Psi_1, \dots, \Psi_l\} := \big\{\{1, \dots, n_1\}, \dots, \{n_1 + \dots + n_{l-1} + 1, \dots, n\}\big\}.$$

• Let $\Pi_n$ denote the set of all partitions of $[n]$ (a partition means here a set of disjoint non-empty subsets of $[n]$ such that their union is $[n]$) and $\Pi^*_n$ denote the set of all subpartitions of $[n]$ (any set of disjoint non-empty subsets of $[n]$ is a subpartition).

• Let $\Pi(n_1, \dots, n_l) \subseteq \Pi_n$ (respectively $\Pi^*(n_1, \dots, n_l) \subseteq \Pi^*_n$) denote the set of all $\sigma \in \Pi_n$ (respectively $\sigma \in \Pi^*_n$) with $|\Psi_i \cap J| \le 1$ for $1 \le i \le l$ and all $J \in \sigma$.

• Let $\Pi_{\ge 2}(n_1, \dots, n_l)$ (respectively $\Pi_{=2}(n_1, \dots, n_l)$) denote the set of all $\sigma \in \Pi(n_1, \dots, n_l)$ with $|J| \ge 2$ (respectively $|J| = 2$) for all $J \in \sigma$.

• In order to distinguish between integration w.r.t. the Brownian motion and the compensated Poisson process we consider $J_B \subseteq [n]$ ($J_B$ will stand for integration w.r.t. the Brownian motion) and introduce $\Pi_{=2,\ge 2}(J_B; n_1, \dots, n_l)$ as the set of all pairs $(\tau, \sigma)$ of subpartitions from $\Pi^*(n_1, \dots, n_l)$ such that $|J| = 2$ for all $J \in \tau$ and $\bigcup_{J \in \tau} J = J_B$, as well as $|J| \ge 2$ for all $J \in \sigma$ and $\bigcup_{J \in \sigma} J = [n] \setminus J_B$.

• For $\tau \in \Pi^*_n$ let $|\tau| := \#\{J \subseteq [n] : J \in \tau\}$, i.e. the number of its blocks, and $\|\tau\| := \# \bigcup_{J \in \tau} J$.

• For $(\tau, \sigma) \in \Pi_{=2,\ge 2}(J_B; n_1, \dots, n_l)$ and $f : ([0,T] \times \{0,1\})^n \to \mathbb R$ we define $f_{\tau \cup \sigma} : [0,T]^{|\tau| + |\sigma|} \to \mathbb R$ by identifying the time variables of each block of $\tau \cup \sigma$ and setting $x_i = 0$ for $i \in \bigcup_{J \in \tau} J$ and $x_i = 1$ for $i \in \bigcup_{J \in \sigma} J$. In order to make this map unique we first identify the time variables of that block of $\tau \cup \sigma$ which contains the smallest number and denote all identified variables by $t_1$. Next we choose from the remaining blocks the one containing the smallest number and use $t_2$ for all identified variables, and so on.
Example: Let $n_1 = 2$, $n_2 = 2$ and $n_3 = 3$. Then $\Psi = \{\{1,2\}, \{3,4\}, \{5,6,7\}\}$. If $J_B = \{2,4,6,7\}$ and $\tau = \{\{2,6\}, \{4,7\}\}$, $\sigma = \{\{1,3,5\}\}$, we change by $\tau \cup \sigma$ the function $f((t_1, x_1), \dots, (t_7, x_7))$ into

$$f_{\tau \cup \sigma}(t_1, t_2, t_3) = f\big((t_1,1), (t_2,0), (t_1,1), (t_3,0), (t_1,1), (t_2,0), (t_3,0)\big).$$
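The identification rule defining $f_{\tau \cup \sigma}$ can be made concrete in code. The following Python sketch (ours; all names are hypothetical) computes, for each of the $n$ original slots, which time variable and which mark $f_{\tau \cup \sigma}$ substitutes, and reproduces the example above:

```python
def identify_variables(n, tau, sigma):
    """For a pair (tau, sigma) of subpartitions of [n], return for each
    position 1..n the pair (time index, mark) plugged into f by f_{tau ∪ sigma}:
    blocks of tau carry mark 0 (Brownian), blocks of sigma carry mark 1
    (Poisson); blocks receive time variables t_1, t_2, ... ordered by their
    smallest element."""
    blocks = sorted(tau + sigma, key=min)      # order blocks by smallest element
    time_of = {}
    for idx, block in enumerate(blocks, start=1):
        for i in block:
            time_of[i] = idx                   # all elements of a block share t_idx
    mark = {i: 0 for b in tau for i in b}      # tau-blocks: x_i = 0
    mark.update({i: 1 for b in sigma for i in b})  # sigma-blocks: x_i = 1
    return [(time_of[i], mark[i]) for i in range(1, n + 1)]

# Example above: n = 7, tau = {{2,6},{4,7}}, sigma = {{1,3,5}}
args = identify_variables(7, [{2, 6}, {4, 7}], [{1, 3, 5}])
assert args == [(1, 1), (2, 0), (1, 1), (3, 0), (1, 1), (2, 0), (3, 0)]
```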
Proposition 2.9. Let $f_{n_i} \in (L^2)^{\otimes n_i}(\lambda \otimes (\delta_0 + \kappa \delta_1))$ ($n_i \in \mathbb N$ for $1 \le i \le l$) be symmetric and assume that for all $(\tau, \sigma) \in \Pi_{=2,\ge 2}(J_B; n_1, \dots, n_l)$ it holds

$$\int_{[0,T]^{|\tau|+|\sigma|}} \Big( \bigotimes_{i=1}^{l} |f_{n_i}| \Big)_{\tau \cup \sigma} d\lambda^{|\tau|+|\sigma|} < \infty.$$

Then

$$\mathbb E\Big( \prod_{i=1}^{l} I_{n_i}(f_{n_i}) \Big) = \sum_{J_B \subseteq [n]}\ \sum_{(\tau,\sigma) \in \Pi_{=2,\ge 2}(J_B; n_1, \dots, n_l)} \kappa^{|\sigma|} \int_{[0,T]^{|\tau|+|\sigma|}} \Big( \bigotimes_{i=1}^{l} f_{n_i} \Big)_{\tau \cup \sigma} d\lambda^{|\tau|+|\sigma|}. \qquad (18)$$

Proof. Let us assume for the moment that the $f_{n_i}$ are of the form

$$f_{n_i}((t_1, x_1), \dots, (t_{n_i}, x_{n_i})) = \prod_{k=1}^{n_i} d_i(t_k, x_k) \qquad (19)$$

for some $d_i \in L^2(\lambda \otimes (\delta_0 + \kappa \delta_1))$.

If $J_i \subseteq [n_i]$ and $n_i^0 = \# J_i$, we let $I^B_{n_i^0}$ denote the multiple integral of order $n_i^0$ w.r.t. the Brownian motion and $I^P_{n_i^1}$ the multiple integral of order $n_i^1$ ($n_i^1 := n_i - n_i^0$) w.r.t. the compensated Poisson process. Similar to (14) we get

$$I_{n_i}\big(d_i^{\otimes n_i}\, \mathbf 1_{O^{n_i}_{J_i}}\big) = I^B_{n_i^0}\big([d_i(\cdot, 0)]^{\otimes n_i^0}\big)\, I^P_{n_i^1}