
HAL Id: hal-01818668
https://hal.archives-ouvertes.fr/hal-01818668v2

Preprint submitted on 14 Sep 2018

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Random walk approximation of BSDEs with Hölder continuous terminal condition

Christel Geiss, Céline Labart, Antti Luoto

To cite this version:
Christel Geiss, Céline Labart, Antti Luoto. Random walk approximation of BSDEs with Hölder continuous terminal condition. 2018. hal-01818668v2


Random walk approximation of BSDEs with Hölder continuous terminal condition

Christel Geiss¹, Céline Labart², Antti Luoto³

Abstract

In this paper we consider the random walk approximation of the solution of a Markovian BSDE whose terminal condition is a locally Hölder continuous function of the Brownian motion. We state the rate of the $L^2$-convergence of the approximated solution to the true one. The proof relies in part on growth and smoothness properties of the solution $u$ of the associated PDE. Here we improve existing results by showing some properties of the second derivative of $u$ in space.

Keywords: Backward stochastic differential equations, numerical scheme, random walk approximation, speed of convergence

MSC codes: 65C30, 60H35, 60G50, 65G99

1 Introduction

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space carrying the standard Brownian motion $B = (B_t)_{t \ge 0}$ and assume $(\mathcal{F}_t)_{t \ge 0}$ is the augmented natural filtration. We consider the following backward stochastic differential equation (BSDE for short)

$$Y_s = g(B_T) + \int_s^T f(r, B_r, Y_r, Z_r)\,dr - \int_s^T Z_r\,dB_r, \quad 0 \le s \le T, \qquad (1)$$

where $f$ is Lipschitz continuous and $g$ is a locally $\alpha$-Hölder continuous and polynomially bounded function (see (3)). In this paper we are interested in the $L^2$-convergence of the numerical approximation of (1) by using a random walk. First results dealing with the numerical approximation of BSDEs date back to the late 1990s. Bally (see [2]) was the first to consider this problem, by introducing a random discretization, namely the jump times of a Poisson process. In his PhD thesis, Chevance (see [17]) proposed the following discretization

$$y_k = E\big(y_{k+1} + h f(y_{k+1}) \,\big|\, \mathcal{F}^n_k\big), \quad k = n-1, \dots, 0, \; n \in \mathbb{N},$$

and proved the convergence of $(Y^n_t)_t := (y_{[t/h]})_t$ to $Y$. At the same time, Coquet, Mackevičius and Mémin [18] proved the convergence of $Y^n$ by using convergence of filtrations, still in the case of a generator independent from $z$.

¹ Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014 University of Jyvaskyla, Finland, christel.geiss@jyu.fi
² Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry, France, celine.labart@univ-smb.fr
³ Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014 University of Jyvaskyla, Finland, antti.k.luoto@student.jyu.fi


The general case ($f$ depends on $z$, terminal condition $\xi \in L^2$) has been studied by Briand, Delyon and Mémin (see [5]). In that paper the authors define an approximated solution $(Y^n, Z^n)$ based on a random walk and prove its weak convergence to $(Y, Z)$ using convergence of filtrations. We also refer to [27], [29], [30], [31] for other numerical methods for BSDEs which use a random walk approach. The rate of convergence of this method was left as an open problem.

Introducing, instead of a random walk, an approach based on the dynamic programming equation, Bouchard and Touzi in [8] and Zhang in [35] managed to establish a rate of convergence. However, to be fully implementable, this algorithm requires a good approximation of the associated conditional expectations. For this, various methods have been developed (see [24], [19], [15]). Forward methods have also been introduced to approximate (1): a branching diffusion method (see [26]), a multilevel Picard approximation (see [34]) and Wiener chaos expansion (see [7]). Many extensions of (1) have also been considered: high order schemes (see [11], [10]), schemes for reflected BSDEs (see [3], [14]), for fully-coupled BSDEs (see [21], [9]), for quadratic BSDEs (see [13]), for BSDEs with jumps (see [23]) and for McKean-Vlasov BSDEs (see [1], [16], [12]).

From a numerical point of view, the random walk is of course not competitive with the recent methods listed above. We emphasize that the aim of this paper is to give the convergence rate of the initial method based on the random walk, which, to the best of our knowledge, has not been done so far.

As in [5], let us introduce the following approximation of $B$, based on a random walk:

$$B^n_t = \sqrt{h} \sum_{i=1}^{[t/h]} \varepsilon_i, \quad 0 \le t \le T,$$

where $h = \tfrac{T}{n}$ ($n \in \mathbb{N}$) and $(\varepsilon_i)_{i=1,2,\dots}$ is a sequence of i.i.d. Rademacher random variables. Consider the following approximated solution $(Y^n, Z^n)$ of $(Y, Z)$:

$$Y^n_{t_k} = g(B^n_T) + h \sum_{m=k}^{n-1} f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m}) - \sqrt{h} \sum_{m=k}^{n-1} Z^n_{t_m}\,\varepsilon_{m+1}, \quad 0 \le k \le n-1. \qquad (2)$$

The main result of our paper gives the rate of convergence in $L^2$-norm of $Y^n_v - Y_v$ and $Z^n_v - Z_v$ for each $v \in [0, T)$ (see Theorem 3.1). Basically, we get that the $L^2$-norm of the error on $Y$ is of order $h^{\frac{\alpha}{4}}$ and the $L^2$-norm of the error on $Z$ is of order $h^{\frac{\alpha}{4}}/\sqrt{T - v}$. The proof of this result is based on several ingredients. In particular, we need some estimates on the bounds of the first and second derivatives of the solution of the PDE associated to the BSDE (1). We establish these bounds in the case of a forward backward SDE (FBSDE for short) whose terminal condition satisfies the Hölder continuity condition (3). This result extends Zhang [36, Theorem 3.2].
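Although no algorithm is spelled out at this point, (2) induces an explicit backward recursion on the recombining binomial tree of the random walk: comparing (2) for $k$ and $k+1$ and taking conditional expectations (as in the proof of Lemma 2.2 below) gives $Y^n_{t_k} = E_k[Y^n_{t_{k+1}}] + h\,f(t_{k+1}, B^n_{t_k}, Y^n_{t_k}, Z^n_{t_k})$ and $Z^n_{t_k} = E_k[Y^n_{t_{k+1}}\,\varepsilon_{k+1}]/\sqrt{h}$, where, since $Y^n_{t_{k+1}}$ is a function of $B^n_{t_{k+1}}$ by [5, Proposition 5.1], $E_k[\cdot]$ reduces to an average over the two children of a tree node. The following Python sketch is only an illustration of this recursion (the function names, the Picard iteration for the implicit step and the toy data are ours, not taken from the paper):

```python
import numpy as np

def solve_bsde_random_walk(g, f, T=1.0, n=100, picard_iter=3):
    """Backward recursion induced by the random walk scheme (2).

    At maturity, Y^n_{t_n} = g(B^n_T).  Going backwards,
        Z^n_{t_k} = E_k[Y^n_{t_{k+1}} eps_{k+1}] / sqrt(h),
        Y^n_{t_k} = E_k[Y^n_{t_{k+1}}] + h f(t_{k+1}, B^n_{t_k}, Y^n_{t_k}, Z^n_{t_k}),
    where the conditional expectations are averages over the two children of a
    node of the recombining binomial tree.  The implicit equation for Y^n_{t_k}
    is solved by a few Picard (fixed point) iterations.
    Returns the time-0 values (Y^n_0, Z^n_0).
    """
    h = T / n
    sqrt_h = np.sqrt(h)
    # node positions at time t_n: j up-moves out of n steps
    x = sqrt_h * (2.0 * np.arange(n + 1) - n)
    y = g(x)                                        # Y^n_{t_n} = g(B^n_T)
    for k in range(n - 1, -1, -1):
        t_next = (k + 1) * h
        x = sqrt_h * (2.0 * np.arange(k + 1) - k)   # positions at time t_k
        y_up, y_down = y[1:], y[:-1]                # values at the two children
        cond_mean = 0.5 * (y_up + y_down)           # E_k[Y^n_{t_{k+1}}]
        z = 0.5 * (y_up - y_down) / sqrt_h          # E_k[Y^n_{t_{k+1}} eps]/sqrt(h)
        y_new = cond_mean.copy()
        for _ in range(picard_iter):                # implicit step in Y
            y_new = cond_mean + h * f(t_next, x, y_new, z)
        y = y_new
    return y[0], z[0]

if __name__ == "__main__":
    # toy data (ours): a 1/2-Hoelder terminal condition and a Lipschitz driver
    g = lambda x: np.sqrt(np.abs(x))
    f = lambda t, x, y, z: -0.5 * y + np.cos(z)
    y0, z0 = solve_bsde_random_walk(g, f, T=1.0, n=200)
    print("Y_0^n ~", y0, "  Z_0^n ~", z0)
```

For a Lipschitz driver and $h$ small, a couple of Picard iterations suffice for the implicit equation in $Y^n_{t_k}$, since the map $y \mapsto E_k[Y^n_{t_{k+1}}] + h f(t_{k+1}, B^n_{t_k}, y, Z^n_{t_k})$ is a contraction with constant $h L_f$.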

The rest of the paper is organized as follows. Section 2 introduces notations, assumptions and the representations for $Z$ and $Z^n$ based on the Malliavin weights. Section 3 states the rate of convergence of the error on $Y$ and $Z$ in $L^2$-norm, which is the main result of the paper. Section 4 presents numerical simulations, and Section 5 recalls some properties of Malliavin weights and of the regularity of solutions to FBSDEs with a locally Hölder continuous terminal condition function, and states some properties of the solutions to the PDEs associated to these FBSDEs.

2 Preliminaries

This section is dedicated to notations, assumptions and the representation of $Z$ and $Z^n$ using the Malliavin weights.


Notation:

• $\mathcal{G}_k := \sigma(\varepsilon_i : 1 \le i \le k)$ and $\mathcal{G}_0 = \{\emptyset, \Omega\}$. The associated discrete-time random walk $(B^n_{t_k})_{k=0}^{n}$ is $(\mathcal{G}_k)_{k=0}^{n}$-adapted.

• $\|\cdot\|_p := \|\cdot\|_{L^p(\mathbb{P})}$ for $p \ge 1$, and for $p = 2$ simply $\|\cdot\|$.

• $C$ denotes a generic constant.

Assumption 2.1. $g$ is locally Hölder continuous with order $\alpha \in (0, 1]$ and polynomially bounded ($p_0 \ge 0$, $C_g > 0$) in the following sense:

$$\forall (x, y) \in \mathbb{R}^2, \quad |g(x) - g(y)| \le C_g\,(1 + |x|^{p_0} + |y|^{p_0})\,|x - y|^{\alpha}. \qquad (3)$$

The function $[0, T] \times \mathbb{R}^3 \ni (t, x, y, z) \mapsto f(t, x, y, z)$ satisfies

$$|f(t, x, y, z) - f(t', x', y', z')| \le L_f\big(\sqrt{|t - t'|} + |x - x'| + |y - y'| + |z - z'|\big). \qquad (4)$$

Notice that (3) implies

$$|g(x)| \le K(1 + |x|^{p_0+1}) =: \Psi(x). \qquad (5)$$

In the rest of the paper, the study of the error $(Y^n - Y, Z^n - Z)$ will either rely on (2) or on its integral version:

$$Y^n_s = g(B^n_T) + \int_{(s,T]} f(r, B^n_r, Y^n_r, Z^n_r)\,d[B^n, B^n]_r - \int_{(s,T]} Z^n_r\,dB^n_r, \quad 0 \le s \le T, \qquad (6)$$

where the backward equation (6) arises from (2) by setting $Y^n_r := Y^n_{t_m}$ and $Z^n_r := Z^n_{t_m}$ for $r \in [t_m, t_{m+1})$. For $n$ large enough, (6) has a unique solution $(Y^n, Z^n)$, and $(Y^n_{t_m}, Z^n_{t_m})_{m=0}^{n-1}$ is adapted to the filtration $(\mathcal{G}_m)_{m=0}^{n-1}$. Let us now introduce the Malliavin representations for $Z$ and $Z^n$. They are the cornerstone of our study of the error on $Z$.

2.1 Representations for $Z$ and $Z^n$

We will use the representation (see Ma and Zhang [28, Theorem 4.2])

$$Z_t = E_t\!\left( g(B_T)\,N^t_T + \int_t^T f(s, B_s, Y_s, Z_s)\,N^t_s\,ds \right), \quad 0 \le t \le T, \qquad (7)$$

where $E_t[\cdot] = E[\cdot \mid \mathcal{F}_t]$ and, for all $s \in (t, T]$, $N^t_s := \frac{B_s - B_t}{s - t}$.

Lemma 2.2. Suppose that Assumption 2.1 holds. Then the process $Z^n$ given by (6) has the representation

$$Z^n_{t_k} = E_k\!\left[ g(B^n_T)\,\frac{B^n_{t_n} - B^n_{t_k}}{t_n - t_k} \right] + E_k\!\left[ h \sum_{m=k+1}^{n-1} f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\,\frac{B^n_{t_m} - B^n_{t_k}}{t_m - t_k} \right] \qquad (8)$$

for $k = 0, 1, \dots, n-1$, where $E_k[\,\cdot\,] := E[\,\cdot \mid \mathcal{G}_k]$.


Proof. We multiply equation (2) by $\varepsilon_{k+1}$ and take the conditional expectation with respect to $\mathcal{G}_k$. Since $(Y^n_{t_k}, Z^n_{t_k})$ is $\mathcal{G}_k$-measurable, it holds for $0 \le k \le n-1$ that

$$
\begin{aligned}
E_k\big[Y^n_{t_k}\,\varepsilon_{k+1}\big]
&= E_k\big(g(B^n_T)\,\varepsilon_{k+1}\big) + h\,E_k\!\left(\sum_{m=k}^{n-1} f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\,\varepsilon_{k+1}\right) - \sqrt{h}\,E_k\!\left(\sum_{m=k}^{n-1} Z^n_{t_m}\,\varepsilon_{m+1}\,\varepsilon_{k+1}\right) \\
&= \sqrt{h}\,E_k\!\left[g(B^n_T)\,\frac{B^n_{t_n} - B^n_{t_k}}{t_n - t_k}\right] + h^{3/2}\sum_{m=k+1}^{n-1} E_k\!\left[f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\,\frac{B^n_{t_m} - B^n_{t_k}}{t_m - t_k}\right] - \sqrt{h}\,Z^n_{t_k}, \qquad (9)
\end{aligned}
$$

where the l.h.s. is equal to zero. Indeed, for $m \ge k+1$ we have

$$E_k\big(Z^n_{t_m}\,\varepsilon_{m+1}\,\varepsilon_{k+1}\big) = E_k\big(Z^n_{t_m}\,\varepsilon_{k+1}\,E_m[\varepsilon_{m+1}]\big) = 0,$$

and for $m = k$ it holds that $E_k\big(Z^n_{t_k}\,\varepsilon_{k+1}^2\big) = Z^n_{t_k}$. Moreover, the fact that $B^n_T = \sqrt{h}\sum_{m=0}^{n-1}\varepsilon_{m+1}$, where $(\varepsilon_m)_{m=1,2,\dots}$ are i.i.d., yields

$$E_k\big(g(B^n_T)\,\varepsilon_{k+1}\big) = E_k\!\left(g(B^n_T)\sum_{m=k}^{n-1}\frac{\varepsilon_{k+1}}{n-k}\right) = E_k\!\left(g(B^n_T)\sum_{m=k}^{n-1}\frac{\varepsilon_{m+1}}{n-k}\right) = \sqrt{h}\,E_k\!\left[g(B^n_T)\,\frac{B^n_{t_n} - B^n_{t_k}}{t_n - t_k}\right].$$

Similarly, for $m \ge k+1$, we get (using [5, Proposition 5.1], where it is stated that both $Y^n_{t_m}$ and $Z^n_{t_m}$ can be represented as functions of $t_m$ and $B^n_{t_m}$)

$$E_k\big[f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\,\varepsilon_{k+1}\big] = \sqrt{h}\,E_k\!\left[f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\,\frac{B^n_{t_m} - B^n_{t_k}}{t_m - t_k}\right].$$

It remains to divide (9) by $\sqrt{h}$ and rearrange.
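To make the representation (8) concrete, consider the driver-free case $f \equiv 0$ and $k = 0$: then (8) reads $Z^n_0 = E\big[g(B^n_T)\,B^n_T\big]/T$, while the scheme itself gives $Z^n_0 = E\big[g(B^n_T)\,\varepsilon_1\big]/\sqrt{h}$, and the exchangeability argument of the proof shows that the two expressions coincide. The short check below is our own illustration (the test function is arbitrary); it computes both expectations exactly over the binomial law of the walk:

```python
import numpy as np
from math import comb

def check_representation_f0(g, T=1.0, n=50):
    """Check the representation (8) when f = 0 and k = 0.

    In that case (8) reduces to  Z^n_0 = E[ g(B^n_T) * B^n_T ] / T,
    while the scheme gives        Z^n_0 = E[ g(B^n_T) * eps_1 ] / sqrt(h).
    Both expectations are computed exactly over the binomial distribution
    of the Rademacher walk, so the two numbers should coincide.
    """
    h = T / n
    sqrt_h = np.sqrt(h)

    # law of B^n_T = sqrt(h) * (2j - n), j = number of up-moves
    j = np.arange(n + 1)
    pos_T = sqrt_h * (2 * j - n)
    prob_T = np.array([comb(n, int(i)) for i in j]) / 2.0 ** n
    z_weight = np.sum(g(pos_T) * pos_T * prob_T) / T            # r.h.s. of (8)

    # E[ g(B^n_T) eps_1 ] / sqrt(h), conditioning on eps_1 = +/- 1
    jj = np.arange(n)                                           # up-moves among eps_2..eps_n
    rest = sqrt_h * (2 * jj - (n - 1))
    prob_rest = np.array([comb(n - 1, int(i)) for i in jj]) / 2.0 ** (n - 1)
    z_scheme = 0.5 * np.sum((g(sqrt_h + rest) - g(-sqrt_h + rest)) * prob_rest) / sqrt_h

    return z_weight, z_scheme

if __name__ == "__main__":
    g = lambda x: np.sqrt(np.abs(x))        # a 1/2-Hoelder terminal condition
    zw, zs = check_representation_f0(g, T=1.0, n=50)
    print("weighted expectation:", zw, "  scheme value:", zs)
```

Both printed values agree up to rounding, as predicted by Lemma 2.2.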

3 Main result

This section is devoted to the main result of the paper: the rate of the $L^2$-convergence of $(Y^n, Z^n)$ to $(Y, Z)$. The proof will rely on the fact that the random walk $B^n$ can be constructed from the Brownian motion $B$ by Skorohod embedding. Let $\tau_0 := 0$ and define

$$\tau_k := \inf\{t > \tau_{k-1} : |B_t - B_{\tau_{k-1}}| = \sqrt{h}\}, \quad k \ge 1.$$

Then $(B_{\tau_k} - B_{\tau_{k-1}})_{k \ge 1}$ is a sequence of i.i.d. random variables with $P(B_{\tau_k} - B_{\tau_{k-1}} = \pm\sqrt{h}) = \tfrac12$, which means that $\sqrt{h}\,\varepsilon_k \stackrel{d}{=} B_{\tau_k} - B_{\tau_{k-1}}$. We will use this random walk for our approximation, i.e. we will require

$$B^n_t = \sum_{k=1}^{[t/h]} (B_{\tau_k} - B_{\tau_{k-1}}), \quad 0 \le t \le T. \qquad (10)$$

Properties satisfied by $\tau_k$ and $B_{\tau_k}$ are stated in Lemma A.1. We will denote by $E_{\tau_k}$ the conditional expectation w.r.t. $\mathcal{F}_{\tau_k}$.

Theorem 3.1. Let Assumption 2.1 hold. If $B^n$ satisfies (10), then we have (for sufficiently large $n$) that

$$E|Y_v - Y^n_v|^2 \le C_0\,h^{\frac{\alpha}{2}} \quad \text{for } v \in [0, T),$$

$$E|Z_v - Z^n_v|^2 \le \frac{C_0\,h^{\frac{\alpha}{2}}}{T - t_k} + \frac{C_1\,h^{\frac{\alpha}{2}}}{(T - v)^{1-\frac{\alpha}{2}}}\,\mathbf{1}_{v \ne t_k} \quad \text{for } v \in [t_k, t_{k+1}),\ k = 0, \dots, n-1,$$

where $C_0 = C(T, p_0, L_f, C_g, C^y_{5.3}, C^z_{5.3}, K_f, c_{5.4}, \alpha)$, $C_1 = C(T, p_0, C^z_{5.3}, \alpha)$ and $K_f := \sup_{0 \le t \le T} |f(t, 0, 0, 0)|$.

Remark 3.2. Theorem 3.1 implies that $\sup_{v \in [0,T)} E|Y_v - Y^n_v|^2 \le C_0\,h^{\frac{\alpha}{2}}$ and

$$E\int_0^T |Z_v - Z^n_v|^2\,dv \le C(C_0, C_1, \beta)\,h^{\beta} \quad \text{for } \beta \in \big(0, \tfrac{\alpha}{2}\big).$$

Proof of Theorem 3.1. Let $u : [0, T) \times \mathbb{R} \to \mathbb{R}$ be the solution of the PDE associated to (1). Since by Theorem 5.4

$$Y_s = u(s, B_s), \quad Z_s = u_x(s, B_s) \quad \text{a.s.},$$

we introduce

$$F(s, x) := f(s, x, u(s, x), u_x(s, x)),$$

so that $F(s, B_s) = f(s, B_s, Y_s, Z_s)$. We first give some properties satisfied by $F$.

Lemma 3.3. If Assumption 2.1 holds, then $F$ is a Lipschitz continuous and polynomially bounded function in $x$:

$$|F(t, x_1) - F(t, x_2)| \le C(T, L_f, c^{5.4}_{2,3})\,(1 + |x_1|^{p_0+1} + |x_2|^{p_0+1})\,\frac{|x_1 - x_2|}{(T - t)^{1-\frac{\alpha}{2}}},$$

$$|F(t, x)| \le C(T, L_f, c^{5.4}_{1,2}, K_f)\,\frac{\Psi(x)}{(T - t)^{1-\frac{\alpha}{2}}},$$

where $\Psi(x)$ is given in (5).

Proof of Lemma 3.3. Thanks to the mean value theorem and Theorem 5.4-(ii-c) and (iii-b), for $x_1, x_2 \in \mathbb{R}$ there exist $\xi_1, \xi_2 \in [\min\{x_1, x_2\}, \max\{x_1, x_2\}]$ such that

$$
\begin{aligned}
|F(t, x_1) - F(t, x_2)| &= |f(t, x_1, u(t, x_1), u_x(t, x_1)) - f(t, x_2, u(t, x_2), u_x(t, x_2))| \\
&\le L_f\big(|x_1 - x_2| + |u(t, x_1) - u(t, x_2)| + |u_x(t, x_1) - u_x(t, x_2)|\big) \\
&\le L_f\!\left(1 + \frac{c^{5.4}_2\,\Psi(\xi_1)}{(T - t)^{1-\frac{\alpha}{2}}} + \frac{c^{5.4}_3\,\Psi(\xi_2)}{(T - t)^{1-\frac{\alpha}{2}}}\right)|x_1 - x_2| \\
&\le C(T, L_f, c^{5.4}_{2,3})\,(1 + |x_1|^{p_0+1} + |x_2|^{p_0+1})\,\frac{|x_1 - x_2|}{(T - t)^{1-\frac{\alpha}{2}}}.
\end{aligned}
$$

The second inequality can be shown similarly.
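Before turning to the error estimates, the Skorohod embedding (10) can be visualized by simulation: run a finely discretized Brownian path and record the successive times at which it has moved by $\sqrt{h}$ from the previous embedding point. The following sketch is ours and purely illustrative: the exit times are only approximated on the fine grid, and the embedded increments are snapped to their exact values $\pm\sqrt{h}$.

```python
import numpy as np

def embedded_random_walk(T=1.0, n=20, fine=200_000, seed=0):
    """Illustration of the Skorohod embedding (10).

    A Brownian path is simulated on a fine time grid; tau_k is taken as the
    first grid time at which |B_t - B_{tau_{k-1}}| reaches sqrt(h).  The
    embedded increments B_{tau_k} - B_{tau_{k-1}} are (up to the fine-grid
    approximation of the exit times) i.i.d. with values +/- sqrt(h).
    Returns the exit times tau_1..tau_n and the embedded walk values.
    """
    h = T / n
    sqrt_h = np.sqrt(h)
    rng = np.random.default_rng(seed)
    dt = T / fine
    steps = 4 * fine            # simulate on [0, 4T], since tau_n may exceed T
    B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(steps))))
    t_grid = dt * np.arange(steps + 1)

    taus, walk = [], [0.0]
    idx, level = 0, 0.0
    for _ in range(n):
        # first grid index from idx on where the path has moved by sqrt(h)
        exceed = np.nonzero(np.abs(B[idx:] - level) >= sqrt_h)[0]
        if exceed.size == 0:
            raise RuntimeError("increase the simulated horizon")
        idx = idx + exceed[0]
        level = level + sqrt_h * np.sign(B[idx] - level)   # snap to +/- sqrt(h)
        taus.append(t_grid[idx])
        walk.append(level)
    return np.array(taus), np.array(walk)

if __name__ == "__main__":
    taus, walk = embedded_random_walk()
    print("tau_k:", np.round(taus, 4))
    print("embedded walk (multiples of sqrt(h)):", np.round(walk, 4))
    print("tau_n for this sample (its expectation equals T = 1):", taus[-1])
```

In particular, each increment $B_{\tau_k} - B_{\tau_{k-1}}$ takes the values $\pm\sqrt{h}$ with probability $\tfrac12$, and $E[\tau_k - \tau_{k-1}] = h$, which is why $\tau_n$ concentrates around $T$.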


For the estimate of $E|Y_{t_k} - Y^n_{t_k}|^2$ we will use (1) and (2): since $Y^n_{t_k}$ is $\mathcal{F}_{\tau_k}$-measurable, we have

$$\|Y_{t_k} - Y^n_{t_k}\| \le \|E_{t_k} g(B_T) - E_{\tau_k} g(B^n_T)\| + \left\| E_{t_k}\!\int_{t_k}^T f(s, B_s, Y_s, Z_s)\,ds - h\,E_{\tau_k}\!\sum_{m=k}^{n-1} f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m}) \right\|. \qquad (11)$$

We frequently express conditional expectations with the help of an independent copy of $B$, denoted by $\tilde B$; for example, $E_t\,g(B_T) = \tilde E\,g(B_t + \tilde B_{T-t})$.

By (3) and Lemma A.1,

$$
\begin{aligned}
\|E_{t_k} g(B_T) - E_{\tau_k} g(B^n_T)\|^2
&= E\,\big|\tilde E\,g(B_{t_k} + \tilde B_{T-t_k}) - \tilde E\,g(B_{\tau_k} + \tilde B_{\tilde\tau_{n-k}})\big|^2 \\
&\le \big(E\tilde E\,(\Psi_1)^4\big)^{\frac12}\,\big(E\tilde E\,|B_{t_k} - B_{\tau_k} + \tilde B_{T-t_k} - \tilde B_{\tilde\tau_{n-k}}|^{4\alpha}\big)^{\frac12} \\
&\le C(C_g, T, p_0)\,\Big(\big(E\,|B_{t_k} - B_{\tau_k}|^{4\alpha}\big)^{\frac12} + \big(E\,|\tilde B_{T-t_k} - \tilde B_{\tilde\tau_{n-k}}|^{4\alpha}\big)^{\frac12}\Big) \le C(C_g, T, p_0)\,h^{\frac{\alpha}{2}}, \qquad (12)
\end{aligned}
$$

where $\Psi_1 := C_g\big(1 + |B_{t_k} + \tilde B_{T-t_k}|^{p_0} + |B_{\tau_k} + \tilde B_{\tilde\tau_{n-k}}|^{p_0}\big)$. To estimate the other term in (11) we consider the decomposition

$$
\begin{aligned}
&E_{t_k} f(s, B_s, Y_s, Z_s) - E_{\tau_k} f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m}) \\
&\quad= \big(E_{t_k} f(s, B_s, Y_s, Z_s) - E_{t_k} f(t_m, B_{t_m}, Y_{t_m}, Z_{t_m})\big) + \big(E_{t_k} F(t_m, B_{t_m}) - E_{\tau_k} F(t_m, B_{\tau_m})\big) \\
&\qquad+ \big(E_{\tau_k} F(t_m, B_{\tau_m}) - E_{\tau_k} F(t_m, B_{t_m})\big) + \big(E_{\tau_k} f(t_m, B_{t_m}, Y_{t_m}, Z_{t_m}) - E_{\tau_k} f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\big) \\
&\quad=: D_1(s, m) + D_2(m) + \dots + D_4(m),
\end{aligned}
$$

so that

$$\left\| E_{t_k}\!\int_{t_k}^T f(s, B_s, Y_s, Z_s)\,ds - h\,E_{\tau_k}\!\sum_{m=k}^{n-1} f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m}) \right\| \le \sum_{m=k}^{n-1}\left( \int_{t_m}^{t_{m+1}} \|D_1(s, m)\|\,ds + h \sum_{i=2}^{4} \|D_i(m)\| \right).$$

For $D_1$ we have by Theorem 5.3 that

$$\|D_1(s, m)\| \le L_f\big(\sqrt{s - t_m} + \|B_s - B_{t_m}\| + \|Y_s - Y_{t_m}\| + \|Z_s - Z_{t_m}\|\big) \le C(T, L_f, C^y_{5.3}, C^z_{5.3}, p_0)\,(T - s)^{\frac{\alpha-2}{2}}\,h^{\frac12}, \qquad (13)$$

where the last inequality follows from $\|B_s - B_{t_m}\| = \sqrt{s - t_m} \le h^{\frac12}$ for $s \in [t_m, t_{m+1}]$ and

$$
\begin{aligned}
\|Y_s - Y_{t_m}\| + \|Z_s - Z_{t_m}\|
&\le \big(E\,\Psi(B_{t_m})^2\big)^{\frac12}\left( C^y_{5.3}\Big(\int_{t_m}^s (T - r)^{\alpha-1}\,dr\Big)^{\frac12} + C^z_{5.3}\Big(\int_{t_m}^s (T - r)^{\alpha-2}\,dr\Big)^{\frac12} \right) \\
&\le C(T, C^y_{5.3}, C^z_{5.3}, p_0)\,\sqrt{s - t_m}\,\big((T - s)^{\frac{\alpha-1}{2}} + (T - s)^{\frac{\alpha-2}{2}}\big).
\end{aligned}
$$

We bound $D_2$ using Lemma 3.3 and Lemma A.1. Similarly to (12) we conclude (setting $\Psi_2 := 1 + |B_{t_k} + \tilde B_{t_{m-k}}|^{p_0+1} + |B_{\tau_k} + \tilde B_{\tilde\tau_{m-k}}|^{p_0+1}$) that

$$\|D_2(m)\| = \Big(E\,\big|E_{t_k} F(t_m, B_{t_m}) - E_{\tau_k} F(t_m, B_{\tau_m})\big|^2\Big)^{\frac12} \le C(T, L_f, c^{5.4}_{2,3})\,\big(E\tilde E\,\Psi_2^4\big)^{\frac14}\,\frac{(t_k h + t_{m-k} h)^{\frac14}}{(T - t_m)^{1-\frac{\alpha}{2}}} \le \frac{C(T, p_0, L_f, c^{5.4}_{2,3})}{(T - t_m)^{1-\frac{\alpha}{2}}}\,h^{\frac14}.$$

For $D_3$ we apply again Lemma 3.3 and Lemma A.1,

$$\|D_3(m)\| \le \|F(t_m, B_{t_m}) - F(t_m, B_{\tau_m})\| \le \frac{C(T, L_f, c^{5.4}_{2,3})}{(T - t_m)^{1-\frac{\alpha}{2}}}\,\big\|\Psi_3\,|B_{t_m} - B_{\tau_m}|\big\| \le \frac{C(T, p_0, L_f, c^{5.4}_{2,3})}{(T - t_m)^{1-\frac{\alpha}{2}}}\,h^{\frac14},$$

where $\Psi_3 := 1 + |B_{t_m}|^{p_0+1} + |B_{\tau_m}|^{p_0+1}$. For the last term $D_4$ we get

$$\|D_4(m)\| \le L_f\big(h^{\frac12} + \|B_{t_m} - B^n_{t_m}\| + \|Y_{t_m} - Y^n_{t_m}\| + \|Z_{t_m} - Z^n_{t_m}\|\big).$$

Finally, using the estimates for the terms $D_1(s, m), D_2(m), \dots, D_4(m)$ we arrive at

$$
\begin{aligned}
\|Y_{t_k} - Y^n_{t_k}\|
&\le C(C_g, T, p_0)\,h^{\frac{\alpha}{4}} + C(T, L_f, C^y_{5.3}, C^z_{5.3}, p_0)\,h^{\frac12}\!\int_{t_k}^T (T - s)^{\frac{\alpha-2}{2}}\,ds + C(T, p_0, L_f, c^{5.4}_{2,3})\,h^{\frac14}\sum_{m=k}^{n-1}\frac{h}{(T - t_m)^{1-\frac{\alpha}{2}}} \\
&\qquad+ h L_f \sum_{m=k}^{n-1}\big(\|Y_{t_m} - Y^n_{t_m}\| + \|Z_{t_m} - Z^n_{t_m}\|\big) \\
&\le C(C_g, T, p_0, L_f, c^{5.4}_{2,3}, C^y_{5.3}, C^z_{5.3})\,h^{\frac{\alpha}{4}} + h L_f \sum_{m=k}^{n-1}\big(\|Y_{t_m} - Y^n_{t_m}\| + \|Z_{t_m} - Z^n_{t_m}\|\big). \qquad (14)
\end{aligned}
$$

For $\|Z_{t_k} - Z^n_{t_k}\|$ we exploit the representations (7) and (8) and estimate

$$
\begin{aligned}
\|Z_{t_k} - Z^n_{t_k}\|
&\le \frac{1}{T - t_k}\,\big\| E_{t_k}\big[g(B_T)(B_T - B_{t_k})\big] - E_{\tau_k}\big[g(B_{\tau_n})(B_{\tau_n} - B_{\tau_k})\big]\big\| \\
&\quad+ \left\| E_{t_k}\!\left(\int_{t_{k+1}}^T f(s, B_s, Y_s, Z_s)\,\frac{B_s - B_{t_k}}{s - t_k}\,ds\right) - E_{\tau_k}\,h\!\sum_{m=k+1}^{n-1} f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\,\frac{B^n_{t_m} - B^n_{t_k}}{t_m - t_k} \right\| \\
&\quad+ \left\| E_{t_k}\!\int_{t_k}^{t_{k+1}} f(s, B_s, Y_s, Z_s)\,\frac{B_s - B_{t_k}}{s - t_k}\,ds \right\|.
\end{aligned}
$$

Then, similarly to (12), we have for the terminal condition, by Lemma A.1, that

$$
\begin{aligned}
&\big\| E_{t_k}\big[g(B_T)(B_T - B_{t_k})\big] - E_{\tau_k}\big[g(B_{\tau_n})(B_{\tau_n} - B_{\tau_k})\big]\big\| \\
&\quad= \big\| \tilde E\,\big[g(B_{t_k} + \tilde B_{T-t_k}) - g(B_{t_k})\big]\big(\tilde B_{T-t_k} - \tilde B_{\tilde\tau_{n-k}}\big) + \tilde E\,\big[g(B_{t_k} + \tilde B_{T-t_k}) - g(B_{\tau_k} + \tilde B_{\tilde\tau_{n-k}})\big]\,\tilde B_{\tilde\tau_{n-k}} \big\| \\
&\quad\le C(C_g, T, p_0)\,h^{\frac14}\,(T - t_k)^{\frac{\alpha}{2}+\frac14} + C(C_g, T, p_0)\,h^{\frac{\alpha}{4}}\,(T - t_k)^{\frac12} \le C(C_g, T, p_0)\,h^{\frac{\alpha}{4}}\,(T - t_k)^{\frac12}.
\end{aligned}
$$

Here we have used that $\tilde E\,\big[g(B_{t_k})(\tilde B_{T-t_k} - \tilde B_{\tilde\tau_{n-k}})\big] = 0$. The term $\tilde E\,\big[g(B_{t_k} + \tilde B_{T-t_k}) - g(B_{t_k})\big]\big(\tilde B_{T-t_k} - \tilde B_{\tilde\tau_{n-k}}\big)$ provides us with the factor $(T - t_k)^{\frac{\alpha}{2}}\big((T - t_k)h\big)^{\frac14}$.

For the next term of the estimate of $\|Z_{t_k} - Z^n_{t_k}\|$ we use, for $s \in [t_m, t_{m+1})$ with $m \ge k+1$, the decomposition

$$
\begin{aligned}
&E_{t_k}\!\left[f(s, B_s, Y_s, Z_s)\,\frac{B_s - B_{t_k}}{s - t_k}\right] - E_{\tau_k}\!\left[f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\,\frac{B^n_{t_m} - B^n_{t_k}}{t_m - t_k}\right] \\
&\quad= E_{t_k}\!\left[f(s, B_s, Y_s, Z_s)\,\frac{B_s - B_{t_k}}{s - t_k}\right] - E_{t_k}\!\left[f(t_m, B_{t_m}, Y_{t_m}, Z_{t_m})\,\frac{B_{t_m} - B_{t_k}}{t_m - t_k}\right] \\
&\qquad+ E_{t_k}\!\left[F(t_m, B_{t_m})\,\frac{B_{t_m} - B_{t_k}}{t_m - t_k}\right] - E_{\tau_k}\!\left[F(t_m, B_{\tau_m})\,\frac{B_{\tau_m} - B_{\tau_k}}{t_m - t_k}\right] \\
&\qquad+ E_{\tau_k}\!\left[\big(F(t_m, B_{\tau_m}) - F(t_m, B_{t_m})\big)\,\frac{B_{\tau_m} - B_{\tau_k}}{t_m - t_k}\right] \\
&\qquad+ E_{\tau_k}\!\left[\big(f(t_m, B_{t_m}, Y_{t_m}, Z_{t_m}) - f(t_{m+1}, B^n_{t_m}, Y^n_{t_m}, Z^n_{t_m})\big)\,\frac{B^n_{t_m} - B^n_{t_k}}{t_m - t_k}\right] \\
&\quad=: T_1(s, m) + T_2(m) + \dots + T_4(m).
\end{aligned}
$$

Then by the conditional Hölder inequality, by (13) and by Lemma 3.3 we have

$$
\begin{aligned}
\|T_1(s, m)\|
&\le \|D_1(s, m)\|\,\frac{\|B_s - B_{t_k}\|}{s - t_k} + \|f(t_m, B_{t_m}, Y_{t_m}, Z_{t_m})\|\,\left\|\frac{B_s - B_{t_k}}{s - t_k} - \frac{B_{t_m} - B_{t_k}}{t_m - t_k}\right\| \\
&\le C(T, L_f, C^y_{5.3}, C^z_{5.3}, p_0)\,(T - s)^{\frac{\alpha-2}{2}}\,\frac{h^{\frac12}}{\sqrt{s - t_k}} \\
&\qquad+ C(T, L_f, c^{5.4}_{1,2}, K_f)\,\frac{\big(E\,\Psi(B_{t_m})^2\big)^{\frac12}}{(T - t_m)^{1-\frac{\alpha}{2}}}\left(\frac{\|B_s - B_{t_m}\|}{s - t_k} + \|B_{t_m} - B_{t_k}\|\,\Big|\frac{1}{s - t_k} - \frac{1}{t_m - t_k}\Big|\right) \\
&\le C(T, L_f, K_f, C^y_{5.3}, C^z_{5.3}, c^{5.4}_{1,2}, p_0)\,(T - s)^{\frac{\alpha-2}{2}}\,\frac{h^{\frac14}}{(s - t_k)^{\frac34}}.
\end{aligned}
$$

Indeed,

$$\frac{\|B_s - B_{t_m}\|}{s - t_k} + \|B_{t_m} - B_{t_k}\|\,\Big|\frac{1}{s - t_k} - \frac{1}{t_m - t_k}\Big| \le \frac{\sqrt{s - t_m}}{s - t_k} + \frac{\sqrt{t_m - t_k}\,(s - t_m)}{(s - t_k)(t_m - t_k)} \le C\,\frac{h^{\frac14}}{(s - t_k)^{\frac34}},$$

where the last inequality follows from $s - t_m \le t_{m+1} - t_m = h$ and $h \le t_m - t_k \le s - t_k$.

We estimate $T_2$ with the help of Lemma 3.3 and Lemma A.1 as follows:

$$\|T_2(m)\| \le \|\widehat D_2(m)\|\,\frac{\|B_{t_m} - B_{t_k}\|}{t_m - t_k} + \|F(t_m, B_{\tau_m})\|\,\frac{\|B_{t_{m-k}} - B_{\tau_{m-k}}\|}{t_m - t_k} \le \frac{C(T, p_0, L_f, K_f, c_{5.4})}{(T - t_m)^{1-\frac{\alpha}{2}}}\,\frac{h^{\frac14}}{(t_m - t_k)^{\frac34}}.$$

Here $\widehat D_2(m) := \big(\tilde E\,\big|F(t_m, B_{t_k} + \tilde B_{t_{m-k}}) - F(t_m, B_{\tau_k} + \tilde B_{\tilde\tau_{m-k}})\big|^2\big)^{\frac12}$, which can be estimated as $D_2(m)$.

For $T_3$ the conditional Hölder inequality and Lemma A.1 yield

$$\|T_3(m)\| \le \|\widehat D_3(m)\|\,\frac{\|B_{\tau_m} - B_{\tau_k}\|}{t_m - t_k} \le \frac{C(T, p_0, L_f, c^{5.4}_{2,3})}{(T - t_m)^{1-\frac{\alpha}{2}}}\,\frac{h^{\frac14}}{(t_m - t_k)^{\frac12}},$$

where $\widehat D_3(m) := F(t_m, B_{\tau_m}) - F(t_m, B_{t_m})$ is estimated as $D_3(m)$. Finally,

$$\|T_4(m)\| \le L_f\big(h^{\frac12} + \|B_{t_m} - B^n_{t_m}\| + \|Y_{t_m} - Y^n_{t_m}\| + \|Z_{t_m} - Z^n_{t_m}\|\big)\,\frac{1}{\sqrt{t_m - t_k}}.$$

For the estimate of $E_{t_k}\!\int_{t_k}^{t_{k+1}} f(s, B_s, Y_s, Z_s)\,\frac{B_s - B_{t_k}}{s - t_k}\,ds$ one notices that, by the conditional Hölder inequality,

$$
\begin{aligned}
\left\| E_{t_k}\!\left[f(s, B_s, Y_s, Z_s)\,\frac{B_s - B_{t_k}}{s - t_k}\right]\right\|
&= \left\| E_{t_k}\!\left[\big(f(s, B_s, Y_s, Z_s) - f(s, B_{t_k}, Y_{t_k}, Z_{t_k})\big)\,\frac{B_s - B_{t_k}}{s - t_k}\right]\right\| \\
&\le \big\|f(s, B_s, Y_s, Z_s) - f(s, B_{t_k}, Y_{t_k}, Z_{t_k})\big\|\,\frac{1}{\sqrt{s - t_k}} \le C(T, L_f, C^y_{5.3}, C^z_{5.3}, p_0)\,(T - s)^{\frac{\alpha-2}{2}}\,\frac{h^{\frac12}}{\sqrt{s - t_k}},
\end{aligned}
$$

where the last inequality follows in the same way as in (13). Consequently, we have

$$
\begin{aligned}
\|Z_{t_k} - Z^n_{t_k}\|
&\le \frac{C(C_g, T, p_0)}{(T - t_k)^{\frac12}}\,h^{\frac{\alpha}{4}} + C(T, L_f, K_f, C^y_{5.3}, C^z_{5.3}, c^{5.4}_{1,2}, p_0)\,h^{\frac14}\!\int_{t_k}^T \frac{ds}{(T - s)^{1-\frac{\alpha}{2}}\,(s - t_k)^{\frac34}} \\
&\qquad+ C(T, p_0, L_f, K_f, c_{5.4})\,h^{\frac14}\,h\!\sum_{m=k+1}^{n-1}\frac{1}{(T - t_m)^{1-\frac{\alpha}{2}}\,(t_m - t_k)^{\frac34}} \\
&\qquad+ L_f\,h\!\sum_{m=k+1}^{n-1}\big(\|B_{t_m} - B^n_{t_m}\| + \|Y_{t_m} - Y^n_{t_m}\| + \|Z_{t_m} - Z^n_{t_m}\|\big)\,\frac{1}{\sqrt{t_{m-k}}}.
\end{aligned}
$$

Lemma A.2 enables us to bound the second and the third term of the r.h.s. by $C\,\frac{h^{\frac14}}{(T - t_k)^{\frac34 - \frac{\alpha}{2}}}\,B\big(\tfrac{\alpha}{2}, \tfrac14\big)$, which is bounded by $C\,\frac{h^{\frac{\alpha}{4}}}{(T - t_k)^{\frac12 - \frac{\alpha}{4}}}$. Thus we get

$$\|Z_{t_k} - Z^n_{t_k}\| \le C_0\,\frac{h^{\frac{\alpha}{4}}}{(T - t_k)^{\frac12}} + L_f\,h\!\sum_{m=k+1}^{n-1}\big(\|Y_{t_m} - Y^n_{t_m}\| + \|Z_{t_m} - Z^n_{t_m}\|\big)\,\frac{1}{\sqrt{t_{m-k}}}.$$

Then we use (14) and the above estimate to get

$$\|Y_{t_k} - Y^n_{t_k}\| + \|Z_{t_k} - Z^n_{t_k}\| \le C_0\,\frac{h^{\frac{\alpha}{4}}}{(T - t_k)^{\frac12}} + C(L_f)\,h\!\sum_{m=k+1}^{n-1}\big(\|Y_{t_m} - Y^n_{t_m}\| + \|Z_{t_m} - Z^n_{t_m}\|\big)\,\frac{1}{\sqrt{t_{m-k}}}.$$

If this inequality is iterated, one gets a form to which the Gronwall lemma applies. Indeed, setting $a_m := \|Y_{t_m} - Y^n_{t_m}\| + \|Z_{t_m} - Z^n_{t_m}\|$, one has to consider the double sum

$$\sum_{m=k+1}^{n-1}\ \sum_{l=m+1}^{n-1} a_l\,\frac{h}{\sqrt{t_{l-m}}}\,\frac{h}{\sqrt{t_{m-k}}} = h\sum_{l=k+1}^{n-1}\left(\sum_{m=k+1}^{l-1}\frac{h}{\sqrt{t_{m-k}}\,\sqrt{t_{l-m}}}\right) a_l \le C\,h\sum_{l=k+1}^{n-1} a_l,$$

since the inner sum is a Riemann-type sum for $\int_0^{t_{l-k}} \frac{dx}{\sqrt{x\,(t_{l-k} - x)}} = \pi$ and is therefore bounded by a constant independent of $h$. Consequently,

$$\|Y_{t_k} - Y^n_{t_k}\| + \|Z_{t_k} - Z^n_{t_k}\| \le C_0\,\frac{h^{\frac{\alpha}{4}}}{(T - t_k)^{\frac12}},$$

which gives the bound on the error on $Z$. Moreover, (14) yields

$$\|Y_{t_k} - Y^n_{t_k}\| \le C_0\,h^{\frac{\alpha}{4}}.$$

If $v \in [t_k, t_{k+1})$, we have by Theorem 5.3 that

$$\|Y_v - Y^n_v\| \le \|Y_v - Y_{t_k}\| + \|Y_{t_k} - Y^n_{t_k}\| \le C(C^y_{5.3}, T, p_0)\left(\int_{t_k}^v (T - r)^{\alpha-1}\,dr\right)^{\frac12} + \|Y_{t_k} - Y^n_{t_k}\|,
