
In the document: The DART-Europe E-theses Portal (Pages 145-152)

4.2 Normal approximation


Moreover,

$$|\delta(J,N,h)| \;\le\; \sum_{i\in J}\frac{\sigma_i^2}{(N+1)!}\,\big\|f_{h,J}^{(N+2)}\big\|_{\sup}\,\mathbb{E}\big[|X_i|^{N+1}+|X_i^*|^{N+1}\big].$$

So again by induction, the error term is bounded by

$$|e(J,N,h)| \;\le\; \sum_{i\in J}\sigma_i^2\sum_{k=1}^{N}\frac{1}{k!}\,\Big|e\big(J^{(i)},\,N-k,\,f_{h,J}^{(k+1)}\big)\Big|\,\Big|\mathbb{E}\big[(X_i^*)^k-(X_i)^k\big]\Big| \;+\; \sum_{i\in J}\frac{\sigma_i^2}{(N+1)!}\,\big\|f_{h,J}^{(N+2)}\big\|_{\sup}\,\Big(\mathbb{E}\big[|X_i|^{N+1}\big]+\mathbb{E}\big[|X_i^*|^{N+1}\big]\Big).$$

□

It is apparent that (4.1) is tedious to apply in practice. To carry out the recursion for the set J, we need to know the zero order approximation of all its subsets. In addition, we have to calculate normal expectations whose variance differs from the variance in Stein's equation, so Proposition 3.3.24 does not apply to the calculation of Φ_{σ_0}(x^m f_{h,σ}(x)), which makes the computation considerably more complicated.

4.2.2 The second method

In this subsection, we propose a refined method which improves the first one presented in the previous subsection. As shown there, since the Taylor expansion is made around the point W_{J^{(i)}}, at each step we have to eliminate one variate and calculate a normal expectation with the reduced variance. This significantly increases the amount of computation, especially in the exogenous case. It is therefore natural to replace in (4.3) the term E[f_{h,J}^{(k+1)}(W_{J^{(i)}})] by some expectation of a function of W_J. This substitution introduces an additional error term, so the objective is to

1) find the relationship between E[f(W^{(i)})] and E[f(W)];

2) estimate the error of the above step.

We introduce the following notations. Let X and Y be two independent random variables and let f be an (N+1)-times differentiable function. We denote by δ(N, f, X, Y) the error of the Nth-order Taylor formula in expectation form, i.e.

$$\mathbb{E}[f(X+Y)] \;=\; \sum_{k=0}^{N}\frac{\mathbb{E}[Y^k]}{k!}\,\mathbb{E}\big[f^{(k)}(X)\big] \;+\; \delta(N,f,X,Y). \tag{4.4}$$
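Formula (4.4), together with the bound on δ given below in Corollary 4.2.3, can be illustrated numerically. The following sketch uses small ad-hoc discrete distributions (an assumption for illustration, not taken from the text) and f = sin, whose derivatives are all bounded by 1, so that all expectations can be computed exactly by enumeration.

```python
import math

# Hypothetical discrete distributions: exact expectations by enumeration.
# X and Y are independent; each list holds (value, probability) pairs.
X = [(-0.5, 0.3), (0.2, 0.4), (1.0, 0.3)]
Y = [(-0.1, 0.5), (0.3, 0.5)]

def E(dist, g):
    return sum(p * g(x) for x, p in dist)

def d_sin(k):
    # k-th derivative of sin: cycles sin, cos, -sin, -cos
    return [math.sin, math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t)][k % 4]

N = 3
# Left-hand side of (4.4): E[f(X+Y)], exact over the product distribution.
lhs = sum(px * py * math.sin(x + y) for x, px in X for y, py in Y)
# Taylor part of (4.4): sum_{k=0}^{N} E[Y^k]/k! * E[f^{(k)}(X)].
taylor = sum(E(Y, lambda y: y ** k) / math.factorial(k) * E(X, d_sin(k))
             for k in range(N + 1))
delta = lhs - taylor

# Corollary 4.2.3 (1): |delta| <= E[|Y|^{N+1}]/(N+1)! * sup|f^{(N+1)}| (= 1 here)
bound = E(Y, lambda y: abs(y) ** (N + 1)) / math.factorial(N + 1)
print(abs(delta) <= bound)
```

Since all moments are exact, the inequality of Corollary 4.2.3 holds with no Monte Carlo noise.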

Recall the Taylor expansion with integral remainder (see [60] for example):
$$f(x+y) \;=\; \sum_{k=0}^{N}\frac{y^k}{k!}\,f^{(k)}(x) \;+\; \frac{1}{N!}\int_0^1 (1-t)^N f^{(N+1)}(x+ty)\,y^{N+1}\,dt. \tag{4.5}$$
By taking the expectation of the above formula, we obtain directly (4.4) since X and Y are independent, and we get
$$\delta(N,f,X,Y) \;=\; \frac{1}{N!}\,\mathbb{E}\Big[\int_0^1 (1-t)^N f^{(N+1)}(X+tY)\,Y^{N+1}\,dt\Big].$$

We now present the key formula (4.8) of our method, which writes the expectation E[f(X)] as an expansion of E[f(X+Y)] and its derivatives, multiplied by expectation terms containing powers of Y. We call (4.8) the reversed Taylor formula in expectation form. The main feature of this formula is that it involves products of expectations of functions of the random variables X+Y and Y, which are not independent.

This property makes (4.8) very different from the standard Taylor formula, where (4.4) is obtained by taking the expectation of its pathwise form (4.5). However, we show that the remaining term of (4.8) can be deduced from the remaining terms of the standard Taylor formula.

Proposition 4.2.2 Let ε(N, f, X, Y) be the remaining term of the following expansion:
$$\mathbb{E}[f(X)] \;=\; \mathbb{E}[f(X+Y)] \;+\; \sum_{d\ge 1}(-1)^d \sum_{\substack{J=(j_1,\dots,j_d)\in(\mathbb{N}^*)^d \\ |J|\le N}} \Big(\prod_{l=1}^{d}\frac{\mathbb{E}[Y^{j_l}]}{j_l!}\Big)\,\mathbb{E}\big[f^{(|J|)}(X+Y)\big] \;+\; \varepsilon(N,f,X,Y), \tag{4.8}$$
where |J| = j_1 + ⋯ + j_d. Then
$$\varepsilon(N,f,X,Y) \;=\; \sum_{d\ge 0}(-1)^{d+1} \sum_{\substack{J=(j_l)\in(\mathbb{N}^*)^d \\ |J|\le N}} \delta\big(N-|J|,\,f^{(|J|)},\,X,\,Y\big)\,\prod_{l=1}^{d}\frac{\mathbb{E}[Y^{j_l}]}{j_l!}, \tag{4.9}$$
with the convention that for d = 0 the product equals 1 and |J| = 0.

Proof. Combining (4.4) and (4.8), we express ε(N, f, X, Y) in terms of the remainders δ. We take the (N − |J|)th-order Taylor expansion of E[f^{(|J|)}(X+Y)] to get (4.10). The second term of (4.11) is obtained by regrouping E[Y^k]/k! in (4.10) with the product term, and the sum concerning k with the other sums. Multiplying (4.11) by (−1)^d and taking the sum over d, we notice that most of the terms cancel. By convention, an empty product (the case d = 0) equals 1. With these conventions, the surviving terms give exactly (4.9). □
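As a sanity check, the reversed expansion (4.8) can be tested numerically. The following sketch (with ad-hoc discrete distributions, assuming the expansion form stated in Proposition 4.2.2) verifies that the remainder ε vanishes for a polynomial f with f^{(N+1)} = 0, here f(x) = x² with N = 2.

```python
import math
from itertools import product as iproduct

# Hypothetical example distributions: (value, probability) pairs, X and Y independent.
X = [(-1.0, 0.25), (0.5, 0.5), (2.0, 0.25)]
Y = [(0.3, 0.6), (-0.7, 0.4)]

def E(dist, g):
    return sum(p * g(x) for x, p in dist)

def EZ(g):  # exact expectation over Z = X + Y
    return sum(px * py * g(x + y) for x, px in X for y, py in Y)

# f(x) = x^2 and its derivatives; f''' = 0, so (4.8) should be exact for N = 2.
f_derivs = [lambda t: t * t, lambda t: 2 * t, lambda t: 2.0, lambda t: 0.0]

N = 2
approx = EZ(f_derivs[0])
for d in range(1, N + 1):
    for J in iproduct(range(1, N + 1), repeat=d):
        if sum(J) <= N:                       # tuples J with j_l >= 1, |J| <= N
            coeff = 1.0
            for j in J:
                coeff *= E(Y, lambda y: y ** j) / math.factorial(j)
            approx += (-1) ** d * coeff * EZ(f_derivs[sum(J)])

exact = E(X, f_derivs[0])                     # E[f(X)] = E[X^2]
print(abs(exact - approx))
```

All sums here are finite and exact, so the remainder is zero up to floating-point rounding.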

Corollary 4.2.3 With the notations of (4.4) and (4.8), if f has derivatives up to order N+1 and if f^{(N+1)} is bounded, then

1) $$|\delta(N,f,X,Y)| \;\le\; \frac{\mathbb{E}\big[|Y|^{N+1}\big]}{(N+1)!}\,\big\|f^{(N+1)}\big\|;$$

2) $$|\varepsilon(N,f,X,Y)| \;\le\; \big\|f^{(N+1)}\big\|\,\sum_{d\ge 1}\;\sum_{\substack{J=(j_l)\in(\mathbb{N}^*)^d \\ |J|=N+1}}\;\prod_{l=1}^{d}\frac{\mathbb{E}\big[|Y|^{j_l}\big]}{j_l!}.$$

Proof. 1) is obvious by definition.

2) From 1) and Proposition 4.2.2, we know that
$$|\varepsilon(N,f,X,Y)| \;\le\; \sum_{d\ge 0}\;\sum_{\substack{J=(j_l)\in(\mathbb{N}^*)^d \\ |J|\le N}} \Big|\delta\big(N-|J|,\,f^{(|J|)},\,X,\,Y\big)\Big|\;\prod_{l=1}^{d}\frac{\mathbb{E}\big[|Y|^{j_l}\big]}{j_l!}$$
$$\le\; \big\|f^{(N+1)}\big\|\,\sum_{d\ge 0}\;\sum_{\substack{J=(j_l)\in(\mathbb{N}^*)^d \\ |J|\le N}} \frac{\mathbb{E}\big[|Y|^{N-|J|+1}\big]}{(N-|J|+1)!}\;\prod_{l=1}^{d}\frac{\mathbb{E}\big[|Y|^{j_l}\big]}{j_l!},$$
which implies 2) by regrouping the product terms. □
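The bound 2) can likewise be checked numerically. The sketch below (same kind of hypothetical discrete setup as before, with f = sin so that every derivative is bounded by 1) computes the remainder ε of (4.8) exactly by enumeration and compares it with the bound of Corollary 4.2.3.

```python
import math
from itertools import product as iproduct

# Hypothetical example distributions: (value, probability) pairs, X, Y independent.
X = [(-0.4, 0.5), (0.8, 0.5)]
Y = [(0.2, 0.7), (-0.5, 0.3)]

def E(dist, g):
    return sum(p * g(x) for x, p in dist)

def EZ(g):  # exact expectation over Z = X + Y
    return sum(px * py * g(x + y) for x, px in X for y, py in Y)

def d_sin(k):
    return [math.sin, math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t)][k % 4]

N = 3
# Remainder eps of the reversed expansion (4.8) for f = sin.
approx = EZ(math.sin)
for d in range(1, N + 1):
    for J in iproduct(range(1, N + 1), repeat=d):
        if sum(J) <= N:
            coeff = 1.0
            for j in J:
                coeff *= E(Y, lambda y: y ** j) / math.factorial(j)
            approx += (-1) ** d * coeff * EZ(d_sin(sum(J)))
eps = E(X, math.sin) - approx

# Bound of Corollary 4.2.3 (2): tuples (j_1,...,j_d), j_l >= 1, with |J| = N+1.
bound = 0.0
for d in range(1, N + 2):
    for J in iproduct(range(1, N + 2), repeat=d):
        if sum(J) == N + 1:
            term = 1.0
            for j in J:
                term *= E(Y, lambda y: abs(y) ** j) / math.factorial(j)
            bound += term
print(abs(eps) <= bound)
```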

Remark 4.2.4 1. Note that δ is relatively easy to study, while ε is much more complicated. The above proposition therefore facilitates the calculation.

2. The equality (4.8) allows us to write E[f(W^{(i)})] as an expansion of functions of W. In fact, without specifying the explicit form of ε, one can always propose some expansion with a remaining term depending on N, X and Y. The form we propose here is designed to obtain the high order expansion of Theorem 4.2.5.

Before presenting the theorem, we first explain briefly how to replace the terms containing f(W^{(i)}) by terms in W. Suppose that E[f(X)] has an expansion
$$\mathbb{E}[f(X)] \;=\; \sum_{j=0}^{N}\alpha_j(f,Y)\,\mathbb{E}\big[f^{(j)}(X+Y)\big] \;+\; \varepsilon(N,f,X,Y).$$
Then, by Taylor's formula, we can expand each term to any sufficient order we need. It follows that E[f(X)] equals E[f(X+Y)] minus correction terms in the derivatives of f at X+Y. The right-hand side of this equation is thus an expansion in X+Y. The next step is to regroup all the terms E[f^{(l)}(X+Y)] for 1 ≤ l ≤ N to get the expansion.

We now present our main theorem.

Theorem 4.2.5 For any integer N ≥ 0, we can write E[h(W)] = C(N, h) + e(N, h) if all the terms in (4.12) and (4.13) are well defined (here we use the conventions introduced in the proof of Proposition 4.2.2).

Proof. We proceed by induction. The theorem holds when N = 0. Suppose that it holds up to the order N − 1; applying the induction hypothesis gives (4.14). Notice that the first term in the bracket when k = 0 equals E[f_h^{(0)}(W)]. To simplify the writing, we introduce a convention which adds the remaining summands (k ≥ 1) of the first term to the second term as the case d = 0.

Using this convention, we rewrite (4.14) by separating the case k = 0, the cases k = 1, …, N, and the remaining terms. By interchanging the summations, we obtain (4.16). We then regroup E[(X_i)^k]/k! with the product term to get (4.17). Taking the sum of (4.16) and (4.17), we recover the sum in (4.12). Finally, it suffices to notice that e(N, h) contains the terms in (4.18) together with the terms coming from the above replacements at lower orders. □

Corollary 4.2.6 The expansions of the first two orders are given by 1) and 2).

Proof. 1) is a direct result of the above theorem.

2) By (4.12), a direct computation gives the second order expansion. □

Remark 4.2.7 The first order correction given by Theorem 3.4.8 is a special case here when N = 1.

4.2.2.1 Numerical result

We apply (4.19) to the call function and present numerical results for i.i.d.

random variables X_i. Figures 4.1 and 4.2 compare the second order approximation C(2, h) to other approximations: the first order approximation Φ_{σ_W}(h) + C(1, h), and the first and second order approximations by the saddle-point method. The test is the same as that for Figures 3.10 and 3.11. We observe that C(2, h) provides better approximation quality than C(1, h), and it is of the same precision as the second order saddle-point approximation.

As in the case of the first order approximation of the indicator function, we cannot obtain a theoretical estimate of the approximation error, since the call function is only once differentiable. The explanation of the improvement should be similar to that given for the indicator function case.
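As an illustration of the test setup (under the assumption, made here for the sketch, that W_n is a unit-variance normalized sum of i.i.d. centered Bernoulli(p) variables), the following code computes the exact binomial value of E[(W_n − 1)^+] and the zero order normal approximation Φ_{σ_W}(h). The corrected approximations C(1, h) and C(2, h) of Corollary 4.2.6 are not implemented here.

```python
import math

def exact_call_expectation(n, p, k=1.0):
    # Exact E[(W_n - k)^+] where W_n = (B - n p)/sigma, B ~ Binomial(n, p).
    sigma = math.sqrt(n * p * (1 - p))
    logpmf = lambda m: (math.lgamma(n + 1) - math.lgamma(m + 1)
                        - math.lgamma(n - m + 1)
                        + m * math.log(p) + (n - m) * math.log(1 - p))
    total = 0.0
    for m in range(n + 1):
        w = (m - n * p) / sigma
        if w > k:
            total += math.exp(logpmf(m)) * (w - k)
    return total

def normal_call_expectation(k=1.0):
    # Zero order approximation: E[(Z - k)^+] for Z ~ N(0, 1),
    # i.e. phi(k) - k * (1 - Phi(k)).
    phi = math.exp(-k * k / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(k / math.sqrt(2)))
    return phi - k * (1 - Phi)

n, p = 500, 0.01
print(exact_call_expectation(n, p), normal_call_expectation())
```

The gap between the two printed values is the quantity that the corrections C(1, h) and C(2, h) are designed to close.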

Figure 4.1: Second order expansion for the call function, asymptotic case: p = 0.01 and k = 1.

[Plot: approximation of E[(W_n − 1)^+] for p = 0.01, n from 0 to 500; curves: binomial (exact), first order corrected approximation, second order approximation, first and second order saddle-point approximations (saddle1, saddle2).]
