
HAL Id: hal-00693191

https://hal.archives-ouvertes.fr/hal-00693191

Preprint submitted on 2 May 2012

Central Limit Theorem for the Multilevel Monte Carlo Euler Method and Applications to Asian Options

Mohamed Ben Alaya, Ahmed Kebaier

To cite this version:

Mohamed Ben Alaya, Ahmed Kebaier. Central Limit Theorem for the Multilevel Monte Carlo Euler Method and Applications to Asian Options. 2012. ⟨hal-00693191⟩


Central Limit Theorem for the Multilevel Monte Carlo Euler Method and Applications to Asian Options

Mohamed Ben Alaya¹ & Ahmed Kebaier¹

¹ LAGA, CNRS (UMR 7539), Université Paris 13, 99, av. J.B. Clément, 93430 Villetaneuse, France. mba@math.univ-paris13.fr, kebaier@math.univ-paris13.fr

May 2, 2012

Abstract

This paper focuses on studying the multilevel Monte Carlo method recently introduced by Giles [8], which is significantly more efficient than the classical Monte Carlo method. Our aim is to prove a central limit theorem of Lindeberg-Feller type for the multilevel Monte Carlo method associated with the Euler discretization scheme. To do so, we first prove a stable law convergence theorem, in the spirit of Jacod and Protter [15], for the Euler scheme error on two consecutive levels of the algorithm. This leads to an accurate description of the optimal choice of parameters and to an explicit characterization of the limiting variance in the central limit theorem for the algorithm. We then investigate the application of the multilevel Monte Carlo method to the pricing of Asian options, by discretizing the integral of the payoff process using the Riemann and trapezoidal schemes. In particular, we prove stable law convergence for the error of these second order schemes. This allows us to prove two additional central limit theorems providing the optimal choice of the parameters together with an explicit representation of the limiting variance. For this setting of second order schemes, we give new optimal parameters leading to the convergence of the central limit theorem. A complexity analysis of the multilevel Monte Carlo algorithm is also carried out.

AMS 2000 Mathematics Subject Classification. 60F05, 62F12, 65C05, 60H35.

Key Words and Phrases. Central limit theorem, Multilevel Monte Carlo methods, Euler scheme, Asian options, finance.

This research benefited from the support of the chair "Risques Financiers", Fondation du Risque.


1 Introduction

In many applications, in particular for the pricing of financial securities, we are interested in the effective computation by Monte Carlo methods of the quantity $\mathbb{E} f(X_T)$, where $X := (X_t)_{0\le t\le T}$ is a diffusion process and $f$ a given function. The Monte Carlo Euler method consists of two steps. First, approximate the diffusion process $(X_t)_{0\le t\le T}$ by the Euler scheme $(X^n_t)_{0\le t\le T}$ with time step $T/n$. Then, approximate $\mathbb{E} f(X^n_T)$ by $\frac{1}{N}\sum_{i=1}^{N} f(X^n_{T,i})$, where $(f(X^n_{T,i}))_{1\le i\le N}$ is a sample of $N$ independent copies of $f(X^n_T)$. This approximation is affected respectively by a discretization error and a statistical error,

$$\varepsilon_n := \mathbb{E}\big(f(X^n_T)-f(X_T)\big) \qquad\text{and}\qquad \frac{1}{N}\sum_{i=1}^{N} f(X^n_{T,i}) - \mathbb{E} f(X^n_T).$$

The optimal choice of the sample size $N$ in the classical Monte Carlo method mainly depends on the order of the discretization error. In the context of possibly degenerate diffusions $X$ and $C^1$ functions $f$, Kebaier [16] proves that the rate of convergence of the discretization error $\varepsilon_n$ can be $1/n^{\alpha}$ for all values of $\alpha\in[1/2,1]$. It turns out that for such order of convergence the optimal choice of $N$ is given by $N=n^{2\alpha}$. This leads to a total complexity in the Monte Carlo method of order $C_{MC}=n^{2\alpha+1}$. A further discussion of this is given in subsection 2.4 (see Duffie and Glynn [5] for related results).
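For concreteness, the following short Python sketch (ours, not taken from the paper) implements this crude Monte Carlo Euler estimator for a one-dimensional diffusion; the drift, diffusion coefficient and payoff below are illustrative placeholders, and $N=n^{2\alpha}$ is taken with $\alpha=1$.

    import numpy as np

    def euler_terminal(b, sigma, x0, T, n, rng, n_paths):
        """Simulate X_T^n for n_paths independent paths of the Euler scheme with step T/n."""
        dt = T / n
        x = np.full(n_paths, x0, dtype=float)
        for _ in range(n):
            dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
            x = x + b(x) * dt + sigma(x) * dW
        return x

    # Illustrative coefficients and payoff (not from the paper): a geometric Brownian motion
    # dX_t = 0.05 X_t dt + 0.2 X_t dW_t and the Lipschitz payoff f(x) = max(x - 1, 0).
    rng = np.random.default_rng(0)
    n = 100                      # number of time steps
    N = n**2                     # sample size N = n^(2*alpha) with alpha = 1
    x_T = euler_terminal(lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, 1.0, n, rng, N)
    payoff = np.maximum(x_T - 1.0, 0.0)
    print(payoff.mean(), payoff.std() / np.sqrt(N))   # estimator and statistical error estimate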

In order to improve the performance of this method, Kebaier [16] introduced a two-level Monte Carlo method (called the statistical Romberg method) reducing the complexity $C_{MC}$ while maintaining the convergence of the algorithm. This method uses two Euler schemes with time steps $T/n$ and $T/n^{\beta}$, $\beta\in(0,1)$, and approximates $\mathbb{E} f(X_T)$ by

$$\frac{1}{N_1}\sum_{i=1}^{N_1} f(\hat X^{n^{\beta}}_{T,i}) + \frac{1}{N_2}\sum_{i=1}^{N_2}\Big( f(X^{n}_{T,i}) - f(X^{n^{\beta}}_{T,i})\Big),$$

where $\hat X^{n^{\beta}}_T$ is a second Euler scheme with time step $T/n^{\beta}$, simulated so that the Brownian paths used for $X^{n}_T$ and $X^{n^{\beta}}_T$ are independent of the Brownian paths used to simulate $\hat X^{n^{\beta}}_T$. In order to get a rational choice of $N_1$, $N_2$ and $\beta$ versus $n$, Kebaier [16] proves a central limit theorem for this new algorithm. This theorem uses the weak convergence of the normalized error of the Euler scheme for diffusions proved by Kurtz and Protter [19] (and strengthened by Jacod and Protter [15]). It turns out that for a given discretization error $\varepsilon_n = 1/n^{\alpha}$ ($\alpha\in[1/2,1]$), the optimal choice is obtained for $\beta=1/2$, $N_1=n^{2\alpha}$ and $N_2=n^{2\alpha-(1/2)}$. With this choice, the complexity of the statistical Romberg method is of order $C_{SR}=n^{2\alpha+(1/2)}$, which is lower than the classical complexity of the Monte Carlo method.
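As a quick check of this complexity claim (the following one-line computation is ours, added for clarity, under the usual cost model where one path of the fine scheme costs of order $n$ operations and one path of the coarse scheme of order $n^{\beta}$), the above choice gives

$$C_{SR} \;\propto\; N_1\, n^{\beta} + N_2\, n \;=\; n^{2\alpha}\, n^{1/2} + n^{2\alpha-(1/2)}\, n \;=\; 2\, n^{2\alpha+(1/2)}.$$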

More recently, Giles [8] generalized the statistical Romberg method of Kebaier [16] and proposed the multilevel Monte Carlo algorithm, in an approach similar to Heinrich's multilevel method for parametric integration [12] (see also Creutzig, Dereich, Müller-Gronbach and Ritter [3], Dereich [4], Giles [7], Giles, Higham and Mao [9], Heinrich [11] and Heinrich and Sindambiwe [13] for related results). The multilevel Monte Carlo method uses information from a sequence of computations with decreasing step sizes and approximates the quantity $\mathbb{E} f(X_T)$ by

$$Q_n = \frac{1}{N_0}\sum_{k=1}^{N_0} f(X^{1}_{T,k}) + \sum_{\ell=1}^{L}\frac{1}{N_\ell}\sum_{k=1}^{N_\ell}\Big( f(X^{m^{\ell}}_{T,k}) - f(X^{m^{\ell-1}}_{T,k})\Big), \qquad m\in\mathbb{N}\setminus\{0,1\}\ \text{and}\ L=\frac{\log n}{\log m}.$$

The process $(X^{m^{\ell}}_t)_{0\le t\le T}$ denotes the Euler scheme with time step $m^{-\ell}T$, for $\ell\in\{0,\dots,L\}$. Here, it is important to point out that all these $L+1$ Monte Carlo estimators have to be based on different, independent samples. However, for fixed $k$ and $\ell$, the simulations $f(X^{m^{\ell}}_{T,k})$ and $f(X^{m^{\ell-1}}_{T,k})$ have to be based on the same Brownian path but with different time steps $m^{-\ell}T$ and $m^{-(\ell-1)}T$. Due to the above independence assumption on the paths, the variance of the multilevel estimator is given by

$$\sigma^2 := \mathrm{Var}(Q_n) = N_0^{-1}\,\mathrm{Var}\big(f(X^1_T)\big) + \sum_{\ell=1}^{L} N_\ell^{-1}\,\sigma_\ell^2,$$

where $\sigma_\ell^2 = \mathrm{Var}\big(f(X^{m^{\ell}}_T) - f(X^{m^{\ell-1}}_T)\big)$. For a Lipschitz continuous function $f$, it is easy to check, using properties of the Euler scheme, that

$$\sigma^2 \le c^2 \sum_{\ell=0}^{L} N_\ell^{-1}\, m^{-\ell}$$

for some positive constant $c^2$ (see Proposition 1 for more details). Giles [8] uses this computation in order to find the optimal choice of the multilevel Monte Carlo parameters. More precisely, to obtain a desired root mean squared error (RMSE), say of order $1/n^{\alpha}$, for his multilevel estimator, Giles [8] uses the above computation on $\sigma^2$ to minimize the total complexity of the algorithm. It turns out that the optimal choice is obtained for (see Theorem 3.1 of [8])

$$N_\ell = 2c^2 n^{2\alpha}\left(\frac{\log n}{\log m}+1\right)\frac{T}{m^{\ell}}, \qquad \text{for } \ell\in\{0,\dots,L\}\ \text{and}\ L=\frac{\log n}{\log m}. \qquad (1)$$

This optimal choice leads to a complexity for the multilevel Monte Carlo Euler method proportional to $n^{2\alpha}(\log n)^2$ (a short computation justifying this order is given after this paragraph). Interesting numerical tests comparing the three methods (crude Monte Carlo, statistical Romberg and multilevel Monte Carlo) were carried out in Korn, Korn and Kroisandt [18]. Furthermore, Giles [8] also obtains the optimal parameters for the multilevel Monte Carlo method when a second order scheme is used instead of the Euler scheme, which is, of course, of order one. Recall that a discretization scheme is said to be of second order when the quantity $\sigma_\ell^2$ introduced above is of order $m^{-2\ell}$. For example, this is the case for the Milstein scheme (see e.g. Kloeden and Platen [17] for more details on second order schemes).
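To see where this order comes from (the following computation is ours, added for clarity, under the usual convention that the cost of level $\ell$ is proportional to the number of simulated time steps $N_\ell\, m^{\ell}$), the choice (1) gives

$$\sum_{\ell=0}^{L} N_\ell\, m^{\ell} \;=\; \sum_{\ell=0}^{L} 2c^2 T\, n^{2\alpha}\Big(\frac{\log n}{\log m}+1\Big) \;=\; 2c^2 T\, n^{2\alpha}\Big(\frac{\log n}{\log m}+1\Big)^{2} \;=\; O\big(n^{2\alpha}(\log n)^{2}\big).$$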

By the same reasoning as above, to achieve a given RMSE of order $1/n^{\alpha}$ for his multilevel estimator, Giles obtains an optimal choice of the parameters given by (see Theorem 3.1 of [8])

$$N_\ell = \frac{2c^2 n^{2\alpha}\sqrt{T}}{\sqrt{m}-1}\,\sqrt{m}\left(\frac{T}{m^{\ell}}\right)^{3/2}, \qquad \text{for } \ell\in\{0,\dots,L\}\ \text{and}\ L=\frac{\log n}{\log m}. \qquad (2)$$

This choice leads to an optimal complexity for the multilevel Monte Carlo method proportional to $n^{2\alpha}$. In the present paper, we are interested in using Kebaier's approach [16] to get the optimal choice of parameters for the multilevel Monte Carlo method. More precisely, our main result is a Lindeberg-Feller central limit theorem for the multilevel Monte Carlo Euler algorithm (see Theorem 5).

In order to show this result, we first prove a stable law convergence theorem, for the Euler scheme error on two consecutive levels $m^{\ell-1}$ and $m^{\ell}$, of the type obtained in Jacod and Protter [15]. Indeed, we prove the following functional result (see Theorem 4):

$$\sqrt{\frac{m^{\ell}}{(m-1)T}}\,\big(X^{m^{\ell}}-X^{m^{\ell-1}}\big) \Rightarrow_{\mathrm{stably}} U, \qquad \text{as } \ell\to\infty,$$

where $U$ is the same limit process as in Theorem 3.2 of Jacod and Protter [15]. In fact, their result, namely

$$\sqrt{\frac{m^{\ell}}{T}}\,\big(X^{m^{\ell}}-X\big) \Rightarrow_{\mathrm{stably}} U, \qquad \text{as } \ell\to\infty,$$

is not sufficient to prove our Theorem 5, since the multilevel Monte Carlo Euler method involves the error process $X^{m^{\ell}}-X^{m^{\ell-1}}$ rather than $X^{m^{\ell}}-X$. Thanks to Theorem 5 we obtain a precise description for the choice of the parameters to run the multilevel Monte Carlo Euler method.

Afterward, by a complexity analysis we obtain the optimal choice of parameters for the multilevel Monte Carlo Euler method. It turns out that for a total error of order $1/n^{\alpha}$ the optimal parameters are given by

$$N_0 = n^{2\alpha}, \qquad N_\ell = \frac{(m-1)\,T\, n^{2\alpha}\log n}{m^{\ell}\log m}, \qquad \text{for } \ell\in\{1,\dots,L\}\ \text{and}\ L=\frac{\log n}{\log m}. \qquad (3)$$

This leads us to a complexity proportional to $n^{2\alpha}(\log n)^2$, which is the same order as obtained by Giles [8]. By comparing relations (1) and (3), we note that our optimal sequence of sample sizes $(N_\ell)_{0\le\ell\le L}$ does not depend on any given constant, since our approach is based on proving a central limit theorem and not on obtaining an upper bound for the variance of the algorithm. All these results are stated and proved in section 3.
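For concreteness, here is a small Python sketch (ours, added for illustration) that evaluates the two allocations (1) and (3) for given $n$, $m$, $T$ and $\alpha$. The variance constant $c^2$ required by (1) is an arbitrary placeholder here, which is precisely the dependence that the CLT-based choice (3) avoids.

    import math

    def giles_allocation(n, m, T, alpha, c2):
        """Sample sizes from relation (1): N_l = 2*c2*n^(2*alpha)*(log n/log m + 1)*T/m^l."""
        L = int(round(math.log(n) / math.log(m)))
        factor = 2.0 * c2 * n**(2 * alpha) * (math.log(n) / math.log(m) + 1.0) * T
        return [math.ceil(factor / m**l) for l in range(L + 1)]

    def clt_allocation(n, m, T, alpha):
        """Sample sizes from relation (3): N_0 = n^(2*alpha), N_l = (m-1)*T*n^(2*alpha)*log n/(m^l*log m)."""
        L = int(round(math.log(n) / math.log(m)))
        N = [math.ceil(n**(2 * alpha))]
        N += [math.ceil((m - 1) * T * n**(2 * alpha) * math.log(n) / (m**l * math.log(m)))
              for l in range(1, L + 1)]
        return N

    n, m, T, alpha = 2**7, 2, 1.0, 1.0
    print(giles_allocation(n, m, T, alpha, c2=1.0))  # depends on the (unknown) constant c2
    print(clt_allocation(n, m, T, alpha))            # constant-free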

In section 4, we investigate the application of this method to the pricing of Asian options. We proceed by approximating the integral in the payoff process using first the classical Riemann discretization scheme and then the trapezoidal one. It was shown in Lapeyre and Temam [21] that these discretization schemes are both of second order and that the associated weak error $\varepsilon_n$ is of order $n^{-1}$ (see section 4). At first, we prove two stable law convergence theorems, for the errors of both the Riemann and trapezoidal schemes, on two consecutive levels $m^{\ell-1}$ and $m^{\ell}$ (see Theorem 6 and Theorem 7). We obtain a rate of convergence equal to $m^{\ell}$, and the limit processes in both theorems are related to the one obtained by Kebaier [16] in the same setting. Then, we take advantage of this study to establish two new Lindeberg-Feller central limit theorems (see Theorem 8 and Theorem 9). These results provide a precise description of the choice of the parameters in the multilevel Monte Carlo method when used to price Asian options. In this context of second order schemes, the optimal sequence of sample sizes $(N_\ell)_{0\le\ell\le L}$ proposed by Giles [8] (see relation (2) with $\alpha=1$) does not satisfy the so-called Lyapunov assumption of the Lindeberg-Feller central limit theorem (see subsection 4.3). Indeed, Giles's analysis is only based on a control of the variance. However, our approach is based on proving a central limit theorem for the multilevel Monte Carlo method, and we need in addition to satisfy a Lyapunov type condition that controls a moment of order greater than 2. Finally, we provide three possible choices of $(N_\ell)_{0\le\ell\le L}$ satisfying the assumptions of the Lindeberg-Feller central limit theorem and for which the optimal complexities can be made closer to the order $C_{MMC}=n^2$ without reaching it (see subsection 4.4). Section 2 below is devoted to recalling some useful stochastic limit theorems and to introducing our notation.


2 General framework

2.1 Preliminaries

We first recall basic facts about stable convergence. In the following we adopt the notation of Jacod and Protter [15]. Let $X_n$ be a sequence of random variables with values in a Polish space $E$, all defined on the same probability space $(\Omega,\mathcal F,\mathbb P)$. Let $(\tilde\Omega,\tilde{\mathcal F},\tilde{\mathbb P})$ be an extension of $(\Omega,\mathcal F,\mathbb P)$, and let $X$ be an $E$-valued random variable on the extension. We say that $(X_n)$ converges in law to $X$ stably, and write $X_n \Rightarrow_{\mathrm{stably}} X$, if

$$\mathbb E\big(U h(X_n)\big) \to \tilde{\mathbb E}\big(U h(X)\big)$$

for all bounded continuous $h: E\to\mathbb R$ and all bounded random variables $U$ on $(\Omega,\mathcal F)$. This convergence, introduced by Rényi [23] and studied by Aldous and Eagleson [1], is obviously stronger than convergence in law, which we denote here by "$\Rightarrow$". According to section 2 of Jacod [14] and Lemma 2.1 of Jacod and Protter [15], we have the following results.

Lemma 1 Let $V_n$ and $V$ be defined on $(\Omega,\mathcal F)$ with values in another metric space $E'$. If $V_n \xrightarrow{\ \mathbb P\ } V$ and $X_n \Rightarrow_{\mathrm{stably}} X$, then $(V_n, X_n) \Rightarrow_{\mathrm{stably}} (V, X)$.

This result remains valid when $V_n = V$. Conversely, if $(V, X_n) \Rightarrow (V, X)$, we can realize this limit as $(V, X)$ with $X$ defined on an extension of $(\Omega,\mathcal F,\mathbb P)$ and $X_n \Rightarrow_{\mathrm{stably}} X$, as soon as $V$ generates the $\sigma$-field $\mathcal F$.

Note that all this applies when $X_n$, $X$ are $\mathbb R^d$-valued right-continuous processes with left-hand limits, where $E = \mathcal D([0,T],\mathbb R^d)$ is equipped with the Skorokhod topology.

Now, we recall a result on the convergence of stochastic integrals formulated from Jacod and Protter [15]. This is a simplified version, but it is sufficient for our study. Let $X^n=(X^{n,i})_{1\le i\le d}$ be a sequence of $\mathbb R^d$-valued continuous semimartingales with the decomposition

$$X^{n,i}_t = X^{n,i}_0 + A^{n,i}_t + M^{n,i}_t, \qquad 0\le t\le T,$$

where, for each $n\in\mathbb N$ and $1\le i\le d$, $A^{n,i}$ is a predictable process with finite variation, null at $0$, and $M^{n,i}$ is a martingale null at $0$.

Theorem 1 Assume that the sequence $(X^n)$ is such that

$$\langle M^{n,i}\rangle_T + \int_0^T \big|dA^{n,i}_s\big|$$

is tight. Let $H^n$ and $H$ be a sequence of adapted, right-continuous processes with left-hand limits, all defined on the same filtered probability space. If $(H^n, X^n) \Rightarrow (H, X)$, then $X$ is a semimartingale with respect to the filtration generated by the limit process $(H, X)$, and we have $(H^n, X^n, \int H^n\,dX^n) \Rightarrow (H, X, \int H\,dX)$.


2.2 The Euler scheme

Let $X := (X_t)_{0\le t\le T}$ be the process with values in $\mathbb R^d$, solution to

$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_0 = x\in\mathbb R^d, \qquad (4)$$

where $W=(W^1,\dots,W^q)$ is a $q$-dimensional Brownian motion on some given filtered probability space $\mathcal B=(\Omega,\mathcal F,(\mathcal F_t)_{t\ge 0},\mathbb P)$, and $(\mathcal F_t)_{t\ge 0}$ is the standard Brownian filtration. The functions $b:\mathbb R^d\to\mathbb R^d$ and $\sigma:\mathbb R^d\to\mathbb R^{d\times q}$ are continuously differentiable and satisfy

$$\exists\, C_T>0;\ \forall\, x,y\in\mathbb R^d,\qquad \|b(x)-b(y)\| + \|\sigma(x)-\sigma(y)\| \le C_T\,\|x-y\|.$$

We consider the Euler continuous approximation $X^n$ with step $\delta=T/n$ given by

$$dX^n_t = b(X^n_{\eta_n(t)})\,dt + \sigma(X^n_{\eta_n(t)})\,dW_t, \qquad \eta_n(t) = [t/\delta]\,\delta.$$

It is well known that the Euler scheme satisfies the following properties (see for instance Faure [6] for more details):

P1) $\forall p>1$, $\displaystyle \mathbb E\Big(\sup_{0\le t\le T}|X_t - X^n_t|^p\Big) \le \frac{K_p(T)}{n^{p/2}}$, with $K_p(T)>0$.

P2) $\forall p>1$, $\displaystyle \mathbb E\Big(\sup_{0\le t\le T}|X_t|^p\Big) + \mathbb E\Big(\sup_{0\le t\le T}|X^n_t|^p\Big) \le K_p(T)$, with $K_p(T)>0$.
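As a numerical illustration of property P1) (this sketch is ours and not part of the paper), one can couple the Euler scheme of a geometric Brownian motion with its exact solution evaluated on the same time grid and the same Brownian increments; the estimated maximal gap should shrink roughly at the rate $n^{-1/2}$ suggested by P1).

    import numpy as np

    # Monte Carlo check of property P1) for a geometric Brownian motion (illustrative choice):
    # dX_t = mu*X_t dt + sig*X_t dW_t, whose exact solution on the grid is
    # X_{t_{k+1}} = X_{t_k} * exp((mu - sig^2/2)*dt + sig*dW_k).
    mu, sig, x0, T = 0.05, 0.2, 1.0, 1.0
    rng = np.random.default_rng(1)

    def strong_error(n, n_paths=20_000):
        dt = T / n
        x_exact = np.full(n_paths, x0)
        x_euler = np.full(n_paths, x0)
        max_gap = np.zeros(n_paths)
        for _ in range(n):
            dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
            x_exact = x_exact * np.exp((mu - 0.5 * sig**2) * dt + sig * dW)
            x_euler = x_euler + mu * x_euler * dt + sig * x_euler * dW
            max_gap = np.maximum(max_gap, np.abs(x_exact - x_euler))
        return max_gap.mean()   # estimate of E[ sup over the grid of |X_t - X_t^n| ]

    for n in (10, 40, 160):
        print(n, strong_error(n))   # the gap should shrink roughly like n^{-1/2}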

2.3 Stable convergence for the Euler scheme error

Now assume that

$$\varphi(X_t) = \begin{pmatrix} b^1(X_t) & \sigma^{11}(X_t) & \dots & \sigma^{1q}(X_t)\\ b^2(X_t) & \sigma^{21}(X_t) & \dots & \sigma^{2q}(X_t)\\ \vdots & \vdots & & \vdots\\ b^d(X_t) & \sigma^{d1}(X_t) & \dots & \sigma^{dq}(X_t)\end{pmatrix} \qquad\text{and}\qquad dY_t := \begin{pmatrix} dt\\ dW^1_t\\ \vdots\\ dW^q_t\end{pmatrix};$$

then the S.D.E. (4) becomes

$$dX_t = \varphi(X_t)\,dY_t = \sum_{j=0}^{q}\varphi_j(X_t)\,dY^j_t,$$

where $\varphi_j$ is the $j$-th column of the matrix $\sigma$, for $1\le j\le q$, and $\varphi_0 = b$. The Euler continuous approximation $X^n$ with step $\delta = T/n$ is given by

$$dX^n_t = \varphi(X^n_{\eta_n(t)})\,dY_t = \sum_{j=0}^{q}\varphi_j(X^n_{\eta_n(t)})\,dY^j_t, \qquad \eta_n(t)=[t/\delta]\,\delta. \qquad (5)$$

The following result, proven by Jacod and Protter [15], is an improvement of the result given by Kurtz and Protter [19].


Theorem 2 With the above notations we have

$$\sqrt{\frac{n}{T}}\,(X^n - X) \Rightarrow_{\mathrm{stably}} U,$$

with $(U_t)_{0\le t\le T}$ the $d$-dimensional process satisfying

$$U_t = \frac{1}{\sqrt 2}\sum_{i,j=1}^{q} Z_t \int_0^t Z_s^{-1}\dot\varphi_j(X_s)\,\varphi_i(X_s)\,dB^{ij}_s, \qquad t\in[0,T], \qquad (6)$$

where $(Z_t)_{0\le t\le T}$ is the $\mathbb R^{d\times d}$-valued process solution of the linear equation

$$Z_t = I_d + \sum_{j=0}^{q}\int_0^t \dot\varphi_j(X_s)\,dY^j_s\, Z_s, \qquad t\in[0,T],$$

$\dot\varphi_j$ is the $d\times d$ matrix whose entry $(\dot\varphi_j)_{ik}$ is the partial derivative of $\varphi^i_j$ with respect to the $k$-th coordinate, and $(B^{ij})_{1\le i,j\le q}$ is a standard $q^2$-dimensional Brownian motion independent of $W$. This process is defined on an extension $(\tilde\Omega,\tilde{\mathcal F},(\tilde{\mathcal F}_t)_{t\ge 0},\tilde{\mathbb P})$ of the space $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge 0},\mathbb P)$.

2.4 Central limit theorem for Monte Carlo Euler method

In many applications (in particular for the pricing of financial securities), the effective computation of $\mathbb E f(X_T)$ is crucial. The Monte Carlo method consists of the following steps:

• Approximate the process $(X_t)_{0\le t\le T}$ by the Euler scheme $(X^n_t)_{0\le t\le T}$, with step $T/n$, which can be simulated.

• Evaluate the expectation of the approximating quantity $f(X^n_T)$ by the Monte Carlo method.

In order to evaluate $\mathbb E f(X^n_T)$ by the Monte Carlo method, $N$ independent copies $(f(X^n_{T,i}))_{1\le i\le N}$ of $f(X^n_T)$ are sampled and the expectation is approximated by the following quantity:

$$\hat f_{n,N} := \frac{1}{N}\sum_{i=1}^{N} f(X^n_{T,i}).$$

The approximation is affected by two types of errors: an analytical error given by

$$\varepsilon_n := \mathbb E f(X^n_T) - \mathbb E f(X_T),$$

and a statistical error $\hat f_{n,N} - \mathbb E f(X^n_T)$, controlled by the central limit theorem and which is of order $1/\sqrt N$. An interesting problem (studied by Duffie and Glynn [5] and Kurtz and Protter [20]) is to find $N$ as a function of $n$ so that both errors are of the same order. Talay and Tubaro [24] prove that if $f$ is sufficiently smooth, then $\varepsilon_n \sim c/n$ with $c$ a given constant. A similar result was proven by Kurtz and Protter [20] for a function $f\in C^3$. The same result was extended by Bally and Talay [2] to a measurable function $f$, but under a nondegeneracy condition of Hörmander type on the diffusion. In the context of possibly degenerate diffusions $X$ and $C^1$ functions $f$, Kebaier [16] proves that the rate of convergence of the discretization error $\varepsilon_n$ can be $1/n^{\alpha}$ for all values of $\alpha\in[1/2,1]$ (see Proposition 2.2 of [16]). The following result highlights the behavior of the global error in the classical Monte Carlo method. It can be proved in the same way as the limit theorem given in Duffie and Glynn [5].


Theorem 3 Let $f$ be an $\mathbb R^d$-valued function satisfying

$$(\mathbf{H}_f)\qquad |f(x)-f(y)| \le C\,(1+|x|^p+|y|^p)\,|x-y|, \quad\text{for some } C, p>0.$$

Assume that $\mathbb P(X_T\notin \mathcal D_f)=0$, where $\mathcal D_f := \{x\in\mathbb R^d;\ f \text{ is differentiable at } x\}$, and that for some $\alpha\in[1/2,1]$ we have

$$(\mathbf{H}_{\varepsilon_n})\qquad \lim_{n\to\infty} n^{\alpha}\varepsilon_n = C_f(T,\alpha).$$

Then

$$n^{\alpha}\Big(\frac{1}{n^{2\alpha}}\sum_{i=1}^{n^{2\alpha}} f(X^n_{T,i}) - \mathbb E f(X_T)\Big) \Rightarrow \sigma \bar G + C_f(T,\alpha),$$

with $\sigma^2 = \mathrm{Var}(f(X_T))$ and $\bar G$ a standard normal.

A functional version of this theorem, with $\alpha=1$, was proven by Kurtz and Protter [20] for a function $f$ of class $C^3$. One can interpret the theorem as follows. For a total error of order $1/n^{\alpha}$, the minimal computation effort necessary to run the Monte Carlo algorithm is obtained for $N=n^{2\alpha}$. This leads to an optimal time complexity of the algorithm given by

$$C_{MC} = C\times(n\times N) = C\times n^{2\alpha+1}, \quad\text{with } C \text{ some positive constant}.$$

3 The Multilevel Monte Carlo Euler method

It is well known that the rate of convergence in the Monte Carlo method depends on the variance of $f(X^n_T)$, where $X^n_T$ is the Euler scheme with step $T/n$. This is a crucial point in the practical implementation. A large number of variance reduction methods are used in practice. The multilevel algorithm proposes an iterative control variate variance reduction that extends the statistical Romberg method of Kebaier [16] (see also section 1 above). Its specificity is that the control variate is constructed in an iterative way by the Monte Carlo method using different time steps $m^{-\ell}T$, $\ell\in\{0,1,\dots,L\}$, with $m\in\mathbb N\setminus\{0,1\}$ and $m^L=n$. Let us be more precise: clearly,

$$\mathbb E f(X^n_T) = \mathbb E f(X^1_T) + \sum_{\ell=1}^{L}\mathbb E\Big( f(X^{m^{\ell}}_T) - f(X^{m^{\ell-1}}_T)\Big). \qquad (7)$$

The multilevel method consists in estimating each of the expectations on the right-hand side independently by the Monte Carlo method. Hence, we approximate $\mathbb E f(X^n_T)$ by

$$Q_n = \frac{1}{N_0}\sum_{k=1}^{N_0} f(X^{1}_{T,k}) + \sum_{\ell=1}^{L}\frac{1}{N_\ell}\sum_{k=1}^{N_\ell}\Big( f(X^{m^{\ell}}_{T,k}) - f(X^{m^{\ell-1}}_{T,k})\Big). \qquad (8)$$

The process $(X^{m^{\ell}}_t)_{0\le t\le T}$ denotes the Euler scheme with time step $m^{-\ell}T$ for $\ell\in\{0,\dots,L\}$, where $L=\log n/\log m$. Here, it is important to point out that all these $L+1$ Monte Carlo estimators have to be based on different, independent samples. However, for each $k$ and $\ell$ the simulations $f(X^{m^{\ell}}_{T,k})$ and $f(X^{m^{\ell-1}}_{T,k})$ come from the same Brownian path but with different time steps. The following result gives us a first description of the asymptotic behavior of the variance in the multilevel Monte Carlo Euler method.


Proposition 1 For a function $f:\mathbb R^d\to\mathbb R^d$ which is Lipschitz continuous with constant $[f]_{\mathrm{lip}}$, that is $[f]_{\mathrm{lip}} = \sup_{u\ne v}\frac{|f(u)-f(v)|}{\|u-v\|}$, we have

$$\mathrm{Var}(Q_n) = O\Big(\sum_{\ell=0}^{L} N_\ell^{-1}\, m^{-\ell}\Big). \qquad (9)$$

Proof: We have

$$\mathrm{Var}(Q_n) = N_0^{-1}\,\mathrm{Var}\big(f(X^1_T)\big) + \sum_{\ell=1}^{L} N_\ell^{-1}\,\mathrm{Var}\Big(f(X^{m^{\ell}}_T) - f(X^{m^{\ell-1}}_T)\Big)$$
$$\le N_0^{-1}\,\mathrm{Var}\big(f(X^1_T)\big) + 2\sum_{\ell=1}^{L} N_\ell^{-1}\Big(\mathrm{Var}\big(f(X^{m^{\ell}}_T)-f(X_T)\big) + \mathrm{Var}\big(f(X^{m^{\ell-1}}_T)-f(X_T)\big)\Big)$$
$$\le N_0^{-1}\,\mathrm{Var}\big(f(X^1_T)\big) + 2[f]_{\mathrm{lip}}^{2}\sum_{\ell=1}^{L} N_\ell^{-1}\,\mathbb E\Big(\sup_{0\le t\le T}\big|X^{m^{\ell}}_t-X_t\big|^2 + \sup_{0\le t\le T}\big|X^{m^{\ell-1}}_t-X_t\big|^2\Big).$$

We complete the proof by using P1) on the strong convergence of the Euler scheme.

The inequality (9) shows that the variance of $Q_n$ depends on the choice of the sample sizes $N_\ell$, $\ell\in\{0,1,\dots,L\}$. This variance can be smaller than the variance of $f(X^n_T)$, so that $Q_n$ appears as a good candidate for a variance reduction method.
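For illustration, the following Python sketch (ours, not part of the paper) implements the estimator $Q_n$ of relation (8) for a one-dimensional diffusion. The key point is the coupling described above: at each level $\ell\ge 1$, the fine Euler scheme with $m^{\ell}$ steps and the coarse one with $m^{\ell-1}$ steps are driven by the same Brownian path, the coarse increments being obtained by summing blocks of $m$ fine increments. The coefficients, payoff and (decreasing) sample sizes used at the end are arbitrary placeholders, not the optimal choices discussed in this paper.

    import numpy as np

    def coupled_level(b, sig, x0, T, m, level, n_samples, rng):
        """Samples of (X_T^{m^level}, X_T^{m^{level-1}}) driven by the same Brownian path;
        for level 0 only the coarsest Euler scheme (one step of size T) is returned."""
        n_fine = m**level
        dt_fine = T / n_fine
        x_fine = np.full(n_samples, x0, dtype=float)
        if level == 0:
            dW = rng.normal(0.0, np.sqrt(dt_fine), size=n_samples)
            return x_fine + b(x_fine) * dt_fine + sig(x_fine) * dW, None
        x_coarse = np.full(n_samples, x0, dtype=float)
        dt_coarse = m * dt_fine
        for _ in range(n_fine // m):
            dW_block = rng.normal(0.0, np.sqrt(dt_fine), size=(m, n_samples))
            for j in range(m):                          # m fine Euler steps ...
                x_fine = x_fine + b(x_fine) * dt_fine + sig(x_fine) * dW_block[j]
            dW_sum = dW_block.sum(axis=0)               # ... and one coarse step on their sum
            x_coarse = x_coarse + b(x_coarse) * dt_coarse + sig(x_coarse) * dW_sum
        return x_fine, x_coarse

    def mlmc_estimator(f, b, sig, x0, T, m, L, sample_sizes, rng):
        """Multilevel estimator Q_n of relation (8); sample_sizes = [N_0, ..., N_L]."""
        q = 0.0
        for level, n_samples in enumerate(sample_sizes):
            x_fine, x_coarse = coupled_level(b, sig, x0, T, m, level, n_samples, rng)
            if level == 0:
                q += f(x_fine).mean()
            else:
                q += (f(x_fine) - f(x_coarse)).mean()
        return q

    # Illustrative example (not from the paper): geometric Brownian motion and a call payoff.
    rng = np.random.default_rng(2)
    m, L, T = 2, 7, 1.0                                    # n = m^L = 128 fine time steps
    N = [2**14 // m**level + 1 for level in range(L + 1)]  # an ad hoc decreasing allocation
    print(mlmc_estimator(lambda x: np.maximum(x - 1.0, 0.0),
                         lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, T, m, L, N, rng))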

The main result of this section is a Lindeberg-Feller central limit theorem for the multilevel Monte Carlo Euler algorithm (see Theorem 5 below). In order to prove this result, we first need a stable law convergence theorem for the Euler scheme error. This is the aim of the following subsection.

3.1 Stable convergence

In what follows, we prove a stable law convergence theorem, for the Euler scheme error on two consecutive levels $m^{\ell-1}$ and $m^{\ell}$, of the type obtained in Jacod and Protter [15] (see Theorem 2 above). Indeed, their result, namely

$$\sqrt{\frac{m^{\ell}}{T}}\,\big(X^{m^{\ell}}-X\big) \Rightarrow_{\mathrm{stably}} U, \qquad \text{as } \ell\to\infty,$$

is not sufficient to prove our Theorem 5 below, since the multilevel Monte Carlo Euler method involves the error process $X^{m^{\ell}}-X^{m^{\ell-1}}$ rather than $X^{m^{\ell}}-X$. Note that the study of the error $X^{m^{\ell}}-X^{m^{\ell-1}}$ as $\ell\to\infty$ can be reduced to the study of the error $X^{mn}-X^{n}$ as $n\to\infty$.

Theorem 4 Under the notations of Theorem 2, we have the following result:

$$\sqrt{\frac{mn}{(m-1)T}}\,\big(X^{mn}-X^{n}\big) \Rightarrow_{\mathrm{stably}} U,$$

where $U$ is the solution to equation (6) and $m\in\mathbb N\setminus\{0,1\}$.


Proof: Consider the error process $U^{mn,n}=(U^{mn,n}_t)_{0\le t\le T}$ defined by

$$U^{mn,n}_t := X^{mn}_t - X^{n}_t, \qquad t\in[0,T].$$

Combining relation (5), for both processes $X^{mn}$ and $X^{n}$, together with a Taylor expansion, yields

$$dU^{mn,n}_t = \sum_{j=0}^{q}\dot\varphi^{n}_{t,j}\big(X^{mn}_{\eta_{mn}(t)} - X^{n}_{\eta_n(t)}\big)\,dY^j_t, \qquad\text{where}\qquad \dot\varphi^{n}_{t,j} = \int_0^1 \nabla\varphi_j\Big(X^{n}_{\eta_n(t)} + \lambda\big(X^{mn}_{\eta_{mn}(t)} - X^{n}_{\eta_n(t)}\big)\Big)\,d\lambda.$$

Therefore, the equation satisfied by $U^{mn,n}$ can be written as

$$U^{mn,n}_t = \int_0^t \sum_{j=0}^{q}\dot\varphi^{n}_{s,j}\,U^{mn,n}_s\,dY^j_s + G^{mn,n}_t, \qquad\text{with}$$

$$G^{mn,n}_t = \int_0^t \sum_{j=0}^{q}\dot\varphi^{n}_{s,j}\big(X^{n}_s - X^{n}_{\eta_n(s)}\big)\,dY^j_s - \int_0^t \sum_{j=0}^{q}\dot\varphi^{n}_{s,j}\big(X^{mn}_s - X^{mn}_{\eta_{mn}(s)}\big)\,dY^j_s.$$

In the following, let $(Z^{mn,n}_t)_{0\le t\le T}$ be the $\mathbb R^{d\times d}$-valued solution of

$$Z^{mn,n}_t = I_d + \int_0^t\Big(\sum_{j=0}^{q}\dot\varphi^{n}_{s,j}\,dY^j_s\Big) Z^{mn,n}_s.$$

Theorem 48, p. 326, in [22] ensures the existence of the process $((Z^{mn,n}_t)^{-1})_{0\le t\le T}$, solution to

$$(Z^{mn,n}_t)^{-1} = I_d + \int_0^t (Z^{mn,n}_s)^{-1}\sum_{j=1}^{q}(\dot\varphi^{n}_{s,j})^2\,ds - \int_0^t (Z^{mn,n}_s)^{-1}\sum_{j=0}^{q}\dot\varphi^{n}_{s,j}\,dY^j_s.$$

Thanks to Theorem 56, p. 33, in the same reference [22], we get

$$U^{mn,n}_t = Z^{mn,n}_t\Big\{\int_0^t (Z^{mn,n}_s)^{-1}\,dG^{mn,n}_s - \int_0^t (Z^{mn,n}_s)^{-1}\sum_{j=1}^{q}(\dot\varphi^{n}_{s,j})^2\big(X^{n}_s - X^{n}_{\eta_n(s)}\big)\,ds + \int_0^t (Z^{mn,n}_s)^{-1}\sum_{j=1}^{q}(\dot\varphi^{n}_{s,j})^2\big(X^{mn}_s - X^{mn}_{\eta_{mn}(s)}\big)\,ds\Big\}.$$

Since the increments of the Euler scheme satisfy

$$X^{n}_s - X^{n}_{\eta_n(s)} = \sum_{i=0}^{q}\bar\varphi^{n}_{s,i}\big(Y^i_s - Y^i_{\eta_n(s)}\big) \qquad\text{and}\qquad X^{mn}_s - X^{mn}_{\eta_{mn}(s)} = \sum_{i=0}^{q}\bar\varphi^{mn}_{s,i}\big(Y^i_s - Y^i_{\eta_{mn}(s)}\big),$$

with $\bar\varphi^{n}_{s,i}=\varphi_i(X^{n}_{\eta_n(s)})$ and $\bar\varphi^{mn}_{s,i}=\varphi_i(X^{mn}_{\eta_{mn}(s)})$, it is easy to check that

$$U^{mn,n}_t = \sum_{i,j=1}^{q} Z^{mn,n}_t\int_0^t H^{i,j,mn,n}_s\big(Y^i_s - Y^i_{\eta_n(s)}\big)\,dY^j_s + R^{mn,n}_{t,1} + R^{mn,n}_{t,2} - \sum_{i,j=1}^{q} Z^{mn,n}_t\int_0^t \tilde H^{i,j,mn,n}_s\big(Y^i_s - Y^i_{\eta_{mn}(s)}\big)\,dY^j_s - \tilde R^{mn,n}_{t,1} - \tilde R^{mn,n}_{t,2}, \qquad (10)$$

with

$$R^{mn,n}_{t,1} = \sum_{i=0}^{q} Z^{mn,n}_t\int_0^t K^{i,mn,n}_s\big(Y^i_s - Y^i_{\eta_n(s)}\big)\,ds, \qquad R^{mn,n}_{t,2} = \sum_{j=1}^{q} Z^{mn,n}_t\int_0^t H^{0,j,mn,n}_s\big(s-\eta_n(s)\big)\,dY^j_s,$$

and

$$\tilde R^{mn,n}_{t,1} = \sum_{i=0}^{q} Z^{mn,n}_t\int_0^t \tilde K^{i,mn,n}_s\big(Y^i_s - Y^i_{\eta_{mn}(s)}\big)\,ds, \qquad \tilde R^{mn,n}_{t,2} = \sum_{j=1}^{q} Z^{mn,n}_t\int_0^t \tilde H^{0,j,mn,n}_s\big(s-\eta_{mn}(s)\big)\,dY^j_s,$$

where, for $(i,j)\in\{0,\dots,q\}\times\{1,\dots,q\}$,

$$K^{i,mn,n}_s = (Z^{mn,n}_s)^{-1}\Big(\dot\varphi^{n}_{s,0}\,\bar\varphi^{n}_{s,i} - \sum_{j=1}^{q}(\dot\varphi^{n}_{s,j})^2\,\bar\varphi^{n}_{s,i}\Big), \qquad H^{i,j,mn,n}_s = (Z^{mn,n}_s)^{-1}\dot\varphi^{n}_{s,j}\,\bar\varphi^{n}_{s,i},$$

and

$$\tilde K^{i,mn,n}_s = (Z^{mn,n}_s)^{-1}\Big(\dot\varphi^{n}_{s,0}\,\bar\varphi^{mn}_{s,i} - \sum_{j=1}^{q}(\dot\varphi^{n}_{s,j})^2\,\bar\varphi^{mn}_{s,i}\Big), \qquad \tilde H^{i,j,mn,n}_s = (Z^{mn,n}_s)^{-1}\dot\varphi^{n}_{s,j}\,\bar\varphi^{mn}_{s,i}.$$

Now, let us introduce $Z_t = D_x X_t$, solution to

$$Z_t = I_d + \int_0^t\Big(\sum_{j=0}^{q}\dot\varphi_{s,j}\,dY^j_s\Big) Z_s, \qquad\text{with } \dot\varphi_{t,j}=\nabla\varphi_j(X_t).$$

Moreover, $((Z_t)^{-1})_{0\le t\le T}$ exists and satisfies the following explicit linear stochastic differential equation

$$(Z_t)^{-1} = I_d + \int_0^t (Z_s)^{-1}\sum_{j=1}^{q}(\dot\varphi_{s,j})^2\,ds - \int_0^t (Z_s)^{-1}\sum_{j=0}^{q}\dot\varphi_{s,j}\,dY^j_s.$$

Note that using the same techniques as in the proof of existence and uniqueness for stochastic differential equations with Lipschitz coefficients (i.e. the Gronwall inequality), we obtain that for any $p\ge 1$ and any $t\in[0,T]$, $Z^{mn,n}_t$, $Z_t$, $(Z^{mn,n}_t)^{-1}$, $(Z_t)^{-1}\in L^p$ and

$$\lim_{n\to\infty}\mathbb E\Big(\sup_{0\le t\le T}\big|Z^{mn,n}_t-Z_t\big|^p\Big) = 0, \qquad\text{and}\qquad \lim_{n\to\infty}\mathbb E\Big(\sup_{0\le t\le T}\big|(Z^{mn,n}_t)^{-1}-(Z_t)^{-1}\big|^p\Big) = 0. \qquad (11)$$


Furthermore, in relation (10), one can replace respectively $H^{i,j,mn,n}_s$ and $\tilde H^{i,j,mn,n}_s$ by their common limit

$$H^{i,j}_s = (Z_s)^{-1}\dot\varphi_{s,j}\,\bar\varphi_{s,i}, \qquad\text{with } \dot\varphi_{s,j}=\nabla\varphi_j(X_s)\ \text{and}\ \bar\varphi_{s,i}=\varphi_i(X_s).$$

So that relation (10) becomes

$$U^{mn,n}_t = \sum_{i,j=1}^{q} Z^{mn,n}_t\int_0^t H^{i,j}_s\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)\,dY^j_s + R^{mn,n}_t, \qquad (12)$$

with

$$R^{mn,n}_t = R^{mn,n}_{t,1} + R^{mn,n}_{t,2} + R^{mn,n}_{t,3} - \tilde R^{mn,n}_{t,1} - \tilde R^{mn,n}_{t,2} - \tilde R^{mn,n}_{t,3},$$

where $R^{mn,n}_{t,i}$ and $\tilde R^{mn,n}_{t,i}$, $i\in\{1,2\}$, are introduced in relation (10) and

$$R^{mn,n}_{t,3} = \sum_{i,j=1}^{q} Z^{mn,n}_t\int_0^t \big(H^{i,j,mn,n}_s-H^{i,j}_s\big)\big(Y^i_s - Y^i_{\eta_n(s)}\big)\,dY^j_s, \qquad \tilde R^{mn,n}_{t,3} = \sum_{i,j=1}^{q} Z^{mn,n}_t\int_0^t \big(\tilde H^{i,j,mn,n}_s-H^{i,j}_s\big)\big(Y^i_s - Y^i_{\eta_{mn}(s)}\big)\,dY^j_s.$$

The remainder term process $R^{mn,n}$ vanishes with rate $\sqrt n$ in probability. More precisely, we have the following convergence result.

Lemma 2 The remainder term introduced in relation (12) is such that $\sup_{0\le t\le T}\big|\sqrt n\, R^{mn,n}_t\big|$ converges to zero in probability as $n$ tends to infinity.

For the reader's convenience, the proof of this lemma is postponed to the end of the current subsection.

The task is now to study the asymptotic behavior of the main process appearing in relation (12), namely

$$\sum_{i,j=1}^{q}\sqrt n\, Z^{mn,n}_t\int_0^t H^{i,j}_s\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)\,dY^j_s.$$

In order to study this process, we introduce the martingale processes

$$M^{n,i,j}_t = \int_0^t \big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)\,dY^j_s, \qquad (i,j)\in\{1,\dots,q\}^2,$$

and we proceed to a preliminary computation of the expectation of their brackets. Let $(i,j)$ and $(i',j')\in\{1,\dots,q\}^2$; we have:

• for $j\ne j'$, the bracket $\langle M^{n,i,j}, M^{n,i',j'}\rangle = 0$;

• for $j=j'$ and $i\ne i'$, $\mathbb E\langle M^{n,i,j}, M^{n,i',j}\rangle = 0$;


• for $j=j'$ and $i=i'$, $\mathbb E\langle M^{n,i,j}\rangle_t = \int_0^t\big(\eta_{mn}(s)-\eta_n(s)\big)\,ds$, $t\in[0,T]$, and we have

$$\mathbb E\big(\langle M^{n,i,j}\rangle_t\big) = \int_0^{\eta_n(t)}\big(\eta_{mn}(s)-\eta_n(s)\big)\,ds + O\Big(\frac{1}{n^2}\Big) = \sum_{\ell=0}^{m-1}\sum_{k=0}^{[t/\delta]-1}\int_{(mk+\ell)\delta/m}^{(mk+\ell+1)\delta/m}\big(\eta_{mn}(s)-\eta_n(s)\big)\,ds + O\Big(\frac{1}{n^2}\Big)$$
$$= \sum_{\ell=0}^{m-1}\sum_{k=0}^{[t/\delta]-1}\frac{\delta^2}{m}\Big(\frac{mk+\ell}{m}-k\Big) + O\Big(\frac{1}{n^2}\Big) = \frac{(m-1)\delta^2}{2m}\,[t/\delta] + O\Big(\frac{1}{n^2}\Big) = \frac{(m-1)T}{2mn}\,t + O\Big(\frac{1}{n^2}\Big). \qquad (13)$$

Having disposed of these preliminary evaluations, we can now study the stable convergence of $\Big(\sqrt{\frac{2mn}{(m-1)T}}\,M^{n,i,j}\Big)_{1\le i,j\le q}$. By virtue of Theorem 2-1 of [14], we need to study the asymptotic behavior of both brackets $n\langle M^{n,i,j}, M^{n,i',j'}\rangle_t$ and $\sqrt n\,\langle M^{n,i,j}, Y^{j'}\rangle_t$, for all $t\in[0,T]$ and all $(i,j,i',j')\in\{1,\dots,q\}^4$. The case $j\ne j'$ is obvious, and we only proceed to prove that:

• for $j=j'$, $\sqrt n\,\langle M^{n,i,j}, Y^{j}\rangle_t \xrightarrow[n\to\infty]{\ \mathbb P\ } 0$, for all $t\in[0,T]$;

• for $j=j'$ and $i\ne i'$, $n\langle M^{n,i,j}, M^{n,i',j}\rangle_t \xrightarrow[n\to\infty]{\ \mathbb P\ } 0$, for all $t\in[0,T]$;

• for $j=j'$ and $i=i'$, $n\langle M^{n,i,j}\rangle_t \xrightarrow[n\to\infty]{\ \mathbb P\ } \frac{(m-1)T}{2m}\,t$, for all $t\in[0,T]$.

For the first point, we consider the $L^2$ convergence:

$$\mathbb E\big\langle M^{n,i,j}, Y^{j}\big\rangle_t^2 = \mathbb E\Big(\int_0^t\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)\,ds\Big)^2 = \int_0^t\int_0^t \mathbb E\Big(\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)\big(Y^i_{\eta_{mn}(u)} - Y^i_{\eta_n(u)}\big)\Big)\,ds\,du = 2\int_{0<s<u<t} g(s,u)\,ds\,du,$$

with

$$g(s,u) = \eta_{mn}(s)\wedge\eta_{mn}(u) - \eta_{mn}(s)\wedge\eta_n(u) - \eta_n(s)\wedge\eta_{mn}(u) + \eta_n(s)\wedge\eta_n(u). \qquad (14)$$

It is worth noting that

$$\eta_n(s)\le\eta_{mn}(s)\le s\le\eta_n(u)\le\eta_{mn}(u)\le u, \qquad \forall\, s\le\eta_n(u). \qquad (15)$$

Hence $g(s,u)=0$ for $s\le\eta_n(u)$, $g(s,u)=\eta_{mn}(s)-\eta_n(s)$ for $\eta_n(u)<s<u$, and

$$\mathbb E\big\langle M^{n,i,j}, Y^{j}\big\rangle_t^2 = 2\int_{0<\eta_n(u)<s<u<t}\big(\eta_{mn}(s)-\eta_n(s)\big)\,ds\,du \le \frac{2T}{n}\int_0^t\big(u-\eta_n(u)\big)\,du \le \frac{2T^2}{n^2}\,t.$$


This yields the desired result. Concerning the second point, the $L^2$ norm is given by

$$\mathbb E\big\langle M^{n,i,j}, M^{n,i',j}\big\rangle_t^2 = \mathbb E\Big(\int_0^t\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)\big(Y^{i'}_{\eta_{mn}(s)} - Y^{i'}_{\eta_n(s)}\big)\,ds\Big)^2 = \int_0^t\int_0^t\Big(\mathbb E\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)\big(Y^i_{\eta_{mn}(u)} - Y^i_{\eta_n(u)}\big)\Big)^2\,ds\,du = 2\int_{0<s<u<t} g(s,u)^2\,ds\,du,$$

with the same function $g$ given in relation (14). By the properties of $g$ developed above, we have in the same manner

$$\mathbb E\big\langle M^{n,i,j}, M^{n,i',j}\big\rangle_t^2 = 2\int_{0<\eta_n(u)<s<u<t}\big(\eta_{mn}(s)-\eta_n(s)\big)^2\,ds\,du \le \frac{2T^3}{n^3}\,t,$$

which proves our claim. For the last point, which is the essential one, taking into account the development of $\mathbb E\langle M^{n,i,j}\rangle_t$ given by relation (13), we obtain

$$\mathbb E\Big(n\langle M^{n,i,j}\rangle_t - \frac{(m-1)T}{2m}\,t\Big)^2 = n^2\,\mathbb E\langle M^{n,i,j}\rangle_t^2 - \frac{(m-1)^2T^2}{4m^2}\,t^2 + O\Big(\frac1n\Big). \qquad (16)$$

Otherwise, we have

$$\mathbb E\langle M^{n,i,j}\rangle_t^2 = \mathbb E\Big(\int_0^t\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)^2\,ds\Big)^2 = \int_0^t\int_0^t \mathbb E\Big(\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)^2\big(Y^i_{\eta_{mn}(u)} - Y^i_{\eta_n(u)}\big)^2\Big)\,ds\,du = 2\int_{0<s<u<t} h(s,u)\,ds\,du, \qquad (17)$$

with

$$h(s,u) = \mathbb E\Big(\big(Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}\big)^2\big(Y^i_{\eta_{mn}(u)} - Y^i_{\eta_n(u)}\big)^2\Big). \qquad (18)$$

On the one hand, for $s\le\eta_n(u)$, by property (15) and since the increments $Y^i_{\eta_{mn}(s)} - Y^i_{\eta_n(s)}$ and $Y^i_{\eta_{mn}(u)} - Y^i_{\eta_n(u)}$ are independent, it follows immediately that

$$h(s,u) = \big(\eta_{mn}(s)-\eta_n(s)\big)\big(\eta_{mn}(u)-\eta_n(u)\big).$$

On the other hand, in relation (18) we use the Cauchy-Schwarz inequality to get $h(s,u)=O(\frac{1}{n^2})$, and this yields

$$\int_{0<\eta_n(u)<s<u<t} h(s,u)\,ds\,du = O\Big(\frac{1}{n^3}\Big).$$

Now, noting that $(\eta_{mn}(s)-\eta_n(s))(\eta_{mn}(u)-\eta_n(u)) = O(\frac{1}{n^2})$, relation (17) becomes

$$\mathbb E\langle M^{n,i,j}\rangle_t^2 = 2\int_{0<s<u<t}\big(\eta_{mn}(s)-\eta_n(s)\big)\big(\eta_{mn}(u)-\eta_n(u)\big)\,ds\,du + O\Big(\frac{1}{n^3}\Big) = \Big(\int_0^t\big(\eta_{mn}(s)-\eta_n(s)\big)\,ds\Big)^2 + O\Big(\frac{1}{n^3}\Big).$$
