• Aucun résultat trouvé

Analytical approximations of local-Heston volatility model and error analysis

N/A
N/A
Protected

Academic year: 2021

Partager "Analytical approximations of local-Heston volatility model and error analysis"

Copied!
34
0
0

Texte intégral

(1)

HAL Id: hal-00839650

https://hal.archives-ouvertes.fr/hal-00839650v2

Preprint submitted on 18 Mar 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci- entific research documents, whether they are pub- lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.

Analytical approximations of local-Heston volatility model and error analysis

Romain Bompis, Emmanuel Gobet

To cite this version:

Romain Bompis, Emmanuel Gobet. Analytical approximations of local-Heston volatility model and

error analysis. 2015. �hal-00839650v2�

(2)

AND ERROR ANALYSIS

R. BOMPIS AND E. GOBET

Abstract. This paper consists in providing and mathematically analyzing the expansion of an option price (with bounded Lipschitz payoff) for model combining local and stochastic volatility. The local volatility part has a general form, with appropriate growth and boundedness assumptions. For the stochastic part, we choose a square root process, which is widely used for modeling the behavior of the variance process (Heston model). We rigorously establish tight error estimates of our expansions, using Malliavin calculus, which requires a careful treatment because of the lack of weak differentiability of the model; this error analysis is interesting on its own. Moreover, in the particular case of Call-Put options, we also provide expansions of the Black-Scholes implied volatility which allows to obtain very simple and rapid formulas in comparison to the Monte Carlo approach while maintaining a very competitive accuracy.

This version: March 18, 2015.

Key words. Analytical approximations; Local and stochastic volatilities; Malliavin calculus.

AMS subject classifications. 34E10; 60Hxx

1. Introduction.

B Formulation of the problem. We aim at providing analytical approximations of

E [h(X

T

)] (1.1)

for a bounded Lipschitz function h : R 7→ R , a fixed maturity T > 0, where X is modeling the log-price of an asset, which dynamics are

dX

t

= σ(t, X

t

) p

V

t

dW

t

− 1

2 σ

2

(t, X

t

)V

t

dt, X

0

= x

0

∈ R , (1.2) dV

t

= α

t

dt + ξ

t

p

V

t

dB

t

, V

0

= v

0

> 0, (1.3)

dhW, Bi

t

= ρ

t

dt,

where (W

t

, B

t

)

0≤t≤T

is a two-dimensional correlated Brownian motion defined on a filtered prob- ability space (Ω, F, (F

t

)

0≤t≤T

, P ) with the usual assumptions on the filtration. This problem is motivated by option pricing: in this framework, asset and option prices are forward prices or equivalently, interest-rates and dividends are set to 0. In the model (1.2)-(1.3), σ(., .) is the time- dependent local volatility function, (V

t

)

t∈[0,T]

is a square root process (a.k.a. CIR process), which models the stochastic variance; the non-negative drift (α

t

)

t

, the non-negative volatility of volatility (ξ

t

)

t

and the correlation (ρ

t

)

t

are bounded measurable functions of time. Because the stochastic variance (1.3) is of the Heston form, we call this model local-Heston volatility model. Actually, our parametrization is equivalent to the usual "mean-reverting" form d ˜ V

t

= (α

t

− κ

t

V ˜

t

)dt + ξ

t

p

V ˜

t

dB

t

A preliminary version has circulated as a working paper under the title "Price expansion formulas for model combining local and stochastic volatility",http://hal.archives-ouvertes.fr/hal-00839650

Email: romain.bompis@yahoo.fr. CMAP, Ecole Polytechnique and CNRS, Route de Saclay, 91128 Palaiseau cedex, France.

Email: emmanuel.gobet@polytechnique.edu. CMAP, Ecole Polytechnique and CNRS, Route de Saclay, 91128 Palaiseau cedex, France. Corresponding author. This research is part of the Chair Financial Risks of the Risk Foundation and the FiME Laboratory.

1

(3)

(with deterministic (κ

t

)

t

) and dX

t

= σ(t, X

t

) p V ˜

t

dW

t

12

σ

2

(t, X

t

) ˜ V

t

dt: indeed by setting V

t

= V ˜

t

exp( R

t

0

κ

s

ds), observe that (X, V ) solves a system of the form (1.2)-(1.3) with straightforward modifications of σ, ξ, α. Our parametrization is more convenient for the subsequent analysis.

Models combining local and stochastic volatilities have emerged in the last decade to offer more flexibility in the skew and smile management: see for instance the well-known SABR model by Hagan etal. [HKLW02], the CEV-Heston model studied by Forde and Pogudin [FP13]. In view of the rather general form (1.2)-(1.3), the pricing (and thus the model calibration) is challenging, in particular because of the lack of closed-form formulas and because PDE or Monte-Carlo based numerical methods are too much time-consuming for real-time uses. In this work, we address this issue by deriving an expansion formula for E [h(X

T

)] within the family of correlated local-Heston model. We do not argue that this is the best model to fit market data, nevertheless this is very popular among practitioners since it has the flavor to encompass the local volatility model and the Heston one. For works related to calibration, see [EKO11]. The interest of this work is not only the expansion formula but also the rigorous treatment of error estimates, an issue which is often not handled in the literature (sometimes computations are formal). In our case of square root process (little smooth), Lipschitz payoffs and pointwise ellipticity of local volatility, we develop a strategy of proof which is interesting on its own. The reader could argue that by smoothing data, expanding the modified E [h(X

T

)] at a given order and then passing to the limit w.r.t. the smoothing parameter, we could avoid these technicalities. This may be incorrect: for instance in [FPSS04], this strategy applied to non-smooth data leads to a loss of theoretical accuracy (see the error estimates in [FPSS04, Theorem 3 and Lemma 1] in the context of call- option pricing).

Even worse, in [GP14] about Backward Stochastic Differential Equations, the authors prove that the irregularity in the coefficients may discard any possibility of expansion at a given order. With these examples in mind, we believe that proving rigorous error estimates when coefficients/functions are not smooth is a serious mathematical issue which can not be avoided and which, once done, brings confidence in the derived expansion and sheds light on the needed assumptions.

B Methodology. We follow the proxy approach initiated in [BGM09], by employing Malliavin calculus to compute expansion terms and to derive tight error estimates as a function of the most important model and payoff parameters (non-asymptotic error analysis). Inspired by [BGM09, BGM10b], we choose the following Gaussian proxy:

( dX

tP

= σ(t, x

0

) √

v

t

dW

t

12

σ

2

(t, x

0

)v

t

dt, X

0P

= x

0

, v

t

= v

0

+ R

t

0

α

s

ds. (1.4)

Such an approximation can be justified if the volatility of volatility ξ is small (thus V

t

≈ v

t

) and if one of the two following situations holds, both justifying σ(t, X

t

) ≈ σ(t, x

0

): i) the local volatility function σ(t, ·) has small variations; ii) the local part of the diffusion component is small (i.e. |σ|

small), which implies X

t

≈ x

0

. Besides we expect to have even more accurate approximations for small maturities (leading to X

t

≈ x

0

and V

t

≈ v

0

, t ∈ [0, T ]). These features are encoded in the error bounds of Theorems 2.4, 4.2 and 4.4, that read as multi-parameter error estimates. The above assumptions are rather realistic in practice, or nonetheless they define a domain of validity of our approximations.

In spite of the inspiration by [BGM10b], the mathematical analysis of error estimates must be

quite different since the model (1.2)-(1.3) is not smooth enough in the Malliavin sense (because of

the CIR process, see [AE08, De 11]). We briefly explain how we overcome this major problem, the

(4)

arguments are interesting on their own and could be transposed to other studies: we use Malliavin calculus for a suitably perturbed payoff h

δ

and a suitably perturbed random variable X

T

, this is inspired by [Fou08] but in a different context. Let us consider the Gaussian regularization h

δ

(x) :=

E

h(x + δW

T

)

for an independent Brownian motion W and a small parameter δ > 0 (fixed later);

we use the splitting-noise property h

δ

(x) = E h

h

δ/2

(x + δW

T 2

) i

. Write the decomposition E [h(X

T

)] ≈ E

h

δ

(X

T

)

≈ E

h

δ

(X

TP

) + E

h

0δ

(X

TP

)(X

T

− X

TP

) + . . .

i) Since h is Lipschitz, the first approximation is easily justified if δ is small enough and if X

T

and X

T

are close enough in L

p

-sense.

ii) The last expectation is computed (up to an error) using the techniques of [BGM09, BGM10b], and although cumbersome, the computations may be considered now as stan- dard and yield the existence of explicit coefficients (C

k

)

k

such that

E

h

0δ

(X

TP

)(X

T

− X

TP

)

= X

k

C

k

kxk

E

h

δ

(X

TP

+ x)

|

x=0

+ error

= X

k

C

k

kxk

E

h(X

TP

+ x)

|

x=0

+ other error.

iii) The tough part of the analysis is related to the global error control, which enlightens the right choice of δ and X

T

. To account for non-smooth payoffs we use an integration by parts formula, which relies on the non-degeneracy of the interpolated random variable X

λ

:=

λX

T

+(1 −λ)X

TP

(for any fixed λ ∈]0, 1[). Note that this excludes to take X

T

= X

T

that is not sufficiently Malliavin differentiable. Alternatively, we select a X

T

∈ D

which on the one hand, is close enough to X

T

in L

p

, and which, on the other hand, is such that X

λ

is non- degenerate with high probability under the sole assumption R

T

0

σ(t, x

0

)

2

v

t

dt > 0 (which we call (H

x0

) and which reads as a pointwise ellipticity assumption). The non-degeneracy of X

λ

on a subset of Ω is not sufficient for an integration-by-parts formula for E

xk

h

δ

(X

λ

)Y (Y is an arbitrary random variable): but due to the splitting-noise property, the above expectation is equal to E

h

xk

h

δ/2

(X

λ

+ δW

T 2

)Y i

, moreover X

λ

+ δW

T

2

is uniformly non-degenerate, with nice estimates provided that δ is not too small.

The precise tuning of δ and the construction of X

T

are detailed in Section 3. The final approxi- mation formula (Theorem 2.4) takes the form of an explicit Gaussian representation

E [h(X

T

)] = E h(X

TP

)

+ X

k

C

k

xkk

E

h(X

TP

+ x)

|

x=0

+ error.

where the above derivatives write as Greeks within the Gaussian model.

To summarize, the main strengths of our work are fourfold: (1) explicit expansions for bounded Lipschitz payoffs and for the implied volatility, (2) allowing time-dependent parameters and asset- volatility correlation, (3) non-asymptotic error analysis (dependence of error with respect to multi- parameters) under a local non-degeneracy condition (H

x0

), (4) rigorous error analysis taking into account that V has a non-smooth coefficient.

B Comparison with the literature. In the two last decades, numerous papers have been devoted to the analytical approximation of financial models. For an overview, see [BG12, Gul12, FGG

+

15]

among others. Here, we mention the works that are the most related to our setting, and we

emphasize the main differences compared to our works (keeping in mind the above main features).

(5)

Firstly, some authors focus on small maturity asymptotics and use geodesic tools to compute the main expansion terms: see [BBF04, Hen08, Lew07] where short maturity implied volatility approximations are obtained. First expansion terms are derived for general local and stochastic volatilities models in [FJ11] but under the assumptions of null correlation and under rather strong hypotheses of the volatility coefficients (excluding the CIR process). The asymptotic expansion of the density function in the SABR model is investigated in [JT11]. Similar results are derived using PDE (parametrix, adjoint expansion) and Fourier arguments by Pascucci etal. [PP13] for general time-independent local-Heston volatility models, however the error analysis is not handled; exten- sions to general stochastic volatility models are performed in [LPP14] under stronger smoothness assumptions (excluding the CIR process) and uniform ellipticity conditions (stronger than in our setting, where we assume only that the initial volatility is not 0 and the correlation is arbitrary in [−1, 1]). Observe that our non-degeneracy pointwise condition (H

x0

) does not imply that the law of X

T

has a density or that related PDEs have solution in the classical sense, which illustrates the difference with the setting of [LPP14].

Secondly, regarding long maturity asymptotics, Forde and Pogudin in [FP13] study the cases of SABR and CEV-Heston models in different strike regimes, but mainly assuming null-correlation.

Thirdly, the homogenization/multiscales approach by Fouque etal. [FPSK11] is developed in [FL11] to perform an asymptotic expansion w.r.t. a fast mean reversion parameter of the CIR volatility, in a model of type (1.2)-(1.3) with time-independent parameters. So far, the aforemen- tioned works (except [LPP14]) assume time-independent parameters.

Fourthly, following the asymptotic expansion method of Watanabe, Takahashi and co-authors provide in a series of works small noise approximations of financial models (see [TY12] about general stochastic volatility models): however, their results can not apply to the current local- Heston volatility model since it is not smooth in Malliavin sense.

B Outline of the paper. The paper is organized as follows. In Section 2 we state a third order price approximation formula with error bounds (Theorem 2.4), which is the main result. Section 3 is devoted to the proof, which constitutes the technical core of the paper, interesting on its own. For the sake of clarity, we first give an outline of the proof, in a rather heuristic way, to highlight the main steps, the major difficulties and to sketch arguments to overcome them.

The explicit calculus of the expansion coefficients is postponed to Appendix B. In Section 4 we apply our expansion formula to the particular case of Call/Put payoffs to derive implied volatility expansions with local volatility frozen at spot and at mid-point (between strike and spot). Results are stated in Theorems 4.2 and 4.4. Section 5 gathers numerical experiments illustrating the approximation formula accuracy, taking as a benchmark the Monte Carlo method. In Appendix, we give intermediate and complementary results.

2. Main Result.

2.1. Notations and definitions.

B Extremes of deterministic functions. For measurable and bounded functions f : [0, T ] → R and g : [0, T ] × R → R , we define f

inf

:= ess inf

t∈[0,T]

f

t

, f

sup

:= ess sup

t∈[0,T]

f

t

, |g|

:= ess sup

t∈[0,T],x∈R

|g(t, x)|.

B Differentiation and integration. We denote by C

b

( R ), the space of real-valued infinitely dif-

ferentiable functions, bounded with bounded derivatives. For a sufficiently smooth function ψ :

[0, T ] × R 7→ R , we write: ψ

(i)t

(x) = ∂

xii

ψ(t, x). When considering the spatial point x

0

, we often

(6)

use the notations ψ

t

= ψ

t

(x

0

) and ψ

t(i)

= ψ

(i)t

(x

0

) whenever unambiguous.

Definition 2.1. The integral operator ω

T

and its n-times iteration are defined as follows:

for any integrable functions l, l

1

, · · · , l

n

and any t ∈ [0, T ], ω(l)

Tt

:=

Z

T t

l

u

du and ω(l

1

, · · · , l

n

)

Tt

:= ω(l

1

ω(l

2

, · · · , l

n

)

T.

)

Tt

.

Definition 2.2. For h : R 7→ R a payoff function with exponential growth, the i-th Greek for X

TP

and h is defined

1

by

G

ih

:= ∂

ixi

E

h(X

TP

+ x)

x=0

. (2.1)

Given appropriate smoothness assumptions on h, one has G

ih

= E

h

(i)

(X

TP

) . B Assumptions on σ(.) and (V

t

)

t≤T

.

(H

x0

): σ(.) is a bounded measurable function of (t, x) ∈ [0, T ] × R and three times continuously differentiable w.r.t. x with bounded derivatives. Set

M

1

(σ) = max

1≤i≤3

|∂

ixi

σ(.)|

and M

0

(σ) = max

0≤i≤3

|∂

xii

σ(.)|

. In addition, we assume the pointwise ellipticity condition R

T

0

σ

t2

v

t

dt > 0 where we recall σ

t

= σ(t, x

0

).

(P): α and ξ are measurable, bounded on [0, T ] and positive, with ξ

inf

> 0 and 2(

ξα2

)

inf

≥ 1.

Because there exists a unique process (V

t

)

t≤T

satisfying the SDE (1.3), (H

x0

) guarantees the existence and the uniqueness of a solution for (1.2), considering generalized stochastic integration w.r.t. semi-martingales (see [Pro04, Theorem 6 p. 249]). Remind (see [BGM10b, Lemma 4.2]) that (P ) implies that P (∀t ∈ [0, T ] : V

t

> 0) = 1. Additionally, the asset price process is a martingale, which is proved in Appendix A.

Proposition 2.3. Under (H

x0

) and (P ), (e

Xt

)

0≤t≤T

is a martingale.

This property will be used in Subsection 2.3 and Section 4.

B Assumptions on the payoff function. For realistic interesting applications in finance, we consider the space Lip

b

( R ) of Lipschitz bounded functions h, i.e. satisfying

C

h

:= sup

x∈R

|h(x)| < +∞, L

h

:= sup

x6=y,(x,y)∈R2

h(y) − h(x) y − x

< +∞. (2.2) This space includes the classical Put payoff function x → (K − e

x

)

+

with strike K. Relaxing the boundedness assumption may lead to ill-posedness problems since the model (1.2)-(1.3) may be such that E

(e

XT

)

p

= +∞ for some p > 1 (see [AP07]). Nevertheless, our approximation formula extends to the Call payoff owing to the Call/Put relation (valid in view of Proposition 2.3): E

(e

XT

− K)

+

= e

x0

− K + E

(K − e

XT

)

+

.

B Generic constants and upper bounds. We keep the same notation c for all non-negative constants depending on: universal constants, on a number p ≥ 1 arising in L

p

estimates, in a non decreasing way on ξ

sup

, M

0

(σ), M

1

(σ), T ,

RT|σ|2T

0 σ2tvtdt

, v

0

, 1/v

0

and α

sup

. We frequently use the short notation A ≤

c

B for positive A which means that A ≤ cB for a generic constant c. Similarly "A = O(B)"

means that |A| ≤ cB for a generic constant c.

B Miscellaneous. The L

p

-norm (p ≥ 1) of a random variable is denoted by ||.||

p

.

1well-defined, even ifhis not smooth, as soon as the varianceRT

0 σt2vtdtof the Gaussian random variableXTP is positive, i.e. our assumption(Hx0).

(7)

2.2. Third order approximation price formula. We state the main result of the paper:

Theorem 2.4 ( 3rd order approximation price formula ). Assume (H

x0

) and (P ). Then for any h ∈ Lip

b

( R ), we have:

E [h(X

T

)] = E h(X

TP

)

+

6

X

i=1

η

i,T

G

ih

+ Error

3,h

, (2.3) where:

η

1,T

:= C

1,Tl

2 − C

2,Tl

2 − C

3,Tl

4 − C

4,Tl

2 − C

1,Tls

, η

2,T

:= − 3C

1,Tl

2 + C

2,Tl

2 + 5C

3,Tl

4 + 7C

4,Tl

2 + (C

1,Tl

)

2

8 − C

1,Ts

2 + C

3,Ts

4 + C

1,Tls

+ C

2,Tls

2 + C

3,Tls

2 + C

4,Tls

+ C

5,Tls

2 + C

6,Tls

4 , η

3,T

:= C

1,Tl

− 2C

3,Tl

− 6C

4,Tl

− 3(C

1,Tl

)

2

4 + C

1,Ts

2 − C

2,Ts

2 − C

3,Ts

2

− 3C

2,Tls

2 − 3C

3,Tls

2 − 5C

4,Tls

2 − C

5,Tls

− 3C

6,Tls

4 − C

1,Tl

C

1,Ts

4 ,

η

4,T

:= C

3,Tl

+ 3C

4,Tl

+ 13(C

1,Tl

)

2

8 + C

2,Ts

2 + C

3,Ts

4 + C

2,Tls

+ C

3,Tls

+ 3C

4,Tls

2 + C

5,Tls

2 + C

6,Tls

2 + (C

1,Ts

)

2

8 + C

1,Tl

C

1,Ts

, η

5,T

:= − 3(C

1,Tl

)

2

2 − (C

1,Ts

)

2

4 − 5C

1,Tl

C

1,Ts

4 ,

η

6,T

:= (C

1,Tl

)

2

2 + (C

1,Ts

)

2

8 + C

1,Tl

C

1,Ts

2 ,

and:

C

1,Tl

:= ω(σ

2

v, σσ

(1)

v)

T0

, C

2,Tl

:= ω(σ

2

v, ((σ

(1)

)

2

+ σσ

(2)

)v)

T0

, C

3,Tl

:= ω(σ

2

v, σ

2

v, ((σ

(1)

)

2

+ σσ

(2)

)v)

T0

, C

4,Tl

:= ω(σ

2

v, σσ

(1)

v, σσ

(1)

v)

T0

, C

1,Ts

:= ω(ρξσv, σ

2

)

T0

, C

2,Ts

:= ω(ρξσv, ρξσ, σ

2

)

T0

, C

3,Ts

:= ω(ξ

2

v, σ

2

, σ

2

)

T0

,

C

1,Tls

:= ω(ρξσv, σσ

(1)

)

T0

, C

2,Tls

:= ω(ρξσv, σ

2

v, σσ

(1)

)

T0

, C

3,Tls

:= ω(σ

2

v, ρξσv, σσ

(1)

)

T0

, C

4,Tls

:= ω(ρξσv, σ

2

, σσ

(1)

v)

T0

, C

5,Tls

:= ω(ρξσv, σσ

(1)

v, σ

2

)

T0

, C

6,Tls

:= ω(σ

2

v, ρξσ

(1)

v, σ

2

)

T0

. Then the approximation error is estimated as follows:

Error

3,h

= O L

h

|σ|

sup3

+ M

1

(σ)(M

0

(σ) + ξ

sup

)

2

]T

2

. (2.4)

The notations C

.l

, C

.s

, C

.ls

refer to the dependency of the coefficients w.r.t. the local volatility only, to the stochastic one, or to both (see explanations in Subsection 2.3).

We make several additional remarks. Firstly, note that contrary to [BGM10b, Theorem 2.2],

we do not assume anymore that the correlation is bounded away from −1 and 1. This is a nice

improvement.

(8)

Secondly, the error bound (2.4) justifies the label of third order approximation formula because using the notation M = max(M

0

(σ), ξ

sup

), we readily have Error

3,h

= O (M √

T )

4

. Besides, making reference to the introduction, we retrieve that if |σ|

= 0 or max(M

1

(σ), ξ

sup

) = 0 or T = 0, the approximation formula (2.3) is exact (the model and the proxy coincide and the C coefficients vanish). Obviously if L

h

= 0 (i.e. constant h), the error and sensitivities are equal to zero.

Thirdly, if one prefers to restrict to a second order approximation, it simply writes:

E [h(X

T

)] = E h(X

TP

)

+ C

1,Tl

1

2 G

1h

− 3

2 G

2h

+ G

3h

+ C

1,Ts

− G

2h

2 + G

3h

2

+ O L

h

|σ|

sup2

+ M

1

(σ)(M

0

(σ) + ξ

sup

)]T

32

.

We let the reader verify that the additional corrective terms of the expansion (2.3) are bounded (up to generic constants) by L

h

|σ|

2sup

+ M

1

(σ)(M

0

(σ) + ξ

sup

)]T

32

using standard upper bounds for the derivatives of the Gaussian density and for the neglected coefficients C

.,T.

.

2.3. Corollaries.

B Particular cases of pure local volatility model and pure Heston model.

a) When ξ

sup

is equal to zero, the coefficients C

s

and C

ls

are null: then we exactly retrieve the expansion of the pure local volatility model proposed in [BGM10a]. The terms C

l

therefore read as purely local volatility contributions.

b) If M

1

(σ) = 0 (case of pure Heston model), all the coefficients C

l

and C

ls

are equal to zero:

we can retrieve the expansion derived in [BGM10b], by taking into account our parametrization of V and by transforming the sensitivities w.r.t. the total variance in [BGM10b, Theorem 2.2] into sensitivities w.r.t. the log-spot.

c) Finally we interpret the coefficients C

ls

as a contribution related to the mixture of both the local and stochastic parts of the volatility. All these terms notably depend on the correlation. In case of independent W and B, all the coefficients are equal to 0 except the C

l

terms and C

3,Ts

. B Applications to Call payoff function. One can directly apply this theorem for the Put payoff function h(x) = (K −e

x

)

+

. The reader should remark that the above expansion formula is exact for the particular payoff function h(x) = exp(x): indeed E [h(X

T

)] = E

h(X

TP

)

= G

iexp(.)

= e

x0

and the sum of the corrective terms is equal to zero, P

6

i=1

η

i,T

G

iexp(.)

= e

x0

P

6

i=1

η

i,T

= 0. Therefore the expansion remains valid for the Call payoff function h(x) = (e

x

− K)

+

although h 6∈ Lip

b

( R ), and furthermore, the Call/Put parity relationship (valid owing to Proposition 2.3) is preserved within these approximations.

3. Error analysis.

3.1. Outline of the proof. To relate the initial process (1.2)-(1.3) to the proxy process (1.4), we introduce a two-dimensional parameterized process given by:

dX

tη

=σ(t, ηX

tη

+ (1 − η)x

0

) q

V

tη

dW

t

− 1

2 σ

2

(t, ηX

tη

+ (1 − η)x

0

)V

tη

dt, X

0η

= x

0

, (3.1) dV

tη

t

dt + ηξ

t

q

V

tη

dB

t

, V

0η

= v

0

, (3.2)

where η is an interpolation parameter lying in the range [0, 1], so that on the one hand for η = 1,

X

t1

= X

t

and V

t1

= V

t

, and on the other hand for η = 0, X

t0

= X

tP

and V

t0

= v

t

. This param-

eterization is a tricky interpolation between models, which goal is to derive successive corrective

(9)

processes in order to obtain a tractable explicit approximation formula. In addition (P ) implies that ∀η ∈ [0, 1], P (∀t ∈ [0, T ] : V

tη

> 0) = 1 (see [BGM10b, Lemma 4.2]).

We define the stochastic volatility process:

Definition 3.1. Λ

ηt

= p

V

tη

, ∀t ∈ [0, T ], ∀η ∈ [0, 1].

In addition, we introduce (λ

t

)

t∈[0,T]

defined for any t ∈ [0, T ] by:

λ

t

= Λ

η=0t

= q

V

t0

= √ v

t

=

s v

0

+

Z

t 0

α

s

ds. (3.3)

We present here a sketch of proof in order to fix the main ideas and to highlight the main difficulties.

The strategy of approximation has been globally explained in introduction.

B 1st step. We construct corrective processes to approximate X

T

in L

p

. Consider the parameter- ized process defined in (3.1)-(3.2). We recall that the Gaussian proxy process (X

tP

)

t∈[0,T]

defined in (1.4) is obtained by setting η = 0. The following corrective processes (X

i,t

)

t∈[0,T]

-(V

i,t

)

t∈[0,T]

- (Λ

i,t

)

t∈[0,T]

for i ∈ {1, 2} are obtained by a formal i-times differentiation of (3.1)-(3.2) w.r.t. η and by taking η = 0 thereafter. For the first corrective processes, we obtain:

dX

1,t

=[(X

tP

− x

0

t(1)

λ

t

+ Λ

1,t

σ

t

](dW

t

− σ

t

λ

t

dt), X

1,0

= 0, (3.4) V

1,t

=

Z

t 0

ξ

s

λ

s

dB

s

, (3.5)

Λ

1,t

= V

1,t

t

. (3.6)

The second corrective processes are:

dX

2,t

=

λ

t

[(X

tP

− x

0

)

2

σ

t(2)

+ 2X

1,t

σ

t(1)

] + 2(X

tP

− x

0

1,t

σ

t(1)

(dW

t

− σ

t

λ

t

dt) (3.7) +

Λ

2,t

σ

t

dW

t

− [(X

tP

− x

0

)V

1,t

σ

t(1)

σ

t

+ (X

tP

− x

0

)

2

t(1)

)

2

v

t

+ V

2,t

2 σ

2t

]dt , X

2,0

= 0 V

2,t

=

Z

t 0

ξ

s

V

1,s

λ

s

dB

s

, (3.8)

Λ

2,t

= V

2,t

t

− V

1,t2

4(λ

t

)

3

. (3.9)

Under (H

x0

), these corrective processes (X

i,t

)

t∈[0,T]

-(V

i,t

)

t∈[0,T]

-(Λ

i,t

)

t∈[0,T]

for i ∈ {1, 2} are well defined.

B 2nd step. We compute the corrective terms. To start with, assume that h ∈ C

b

( R ) and perform a third order Taylor expansion for the function h at x = X

T

around x = X

TP

:

E [h(X

T

)] = E h(X

TP

)

+ E h

h

(1)

(X

TP

)(X

T

− X

TP

) i + 1

2 E h

h

(2)

(X

TP

)(X

T

− X

TP

)

2

i + E

(X

T

− X

TP

)

3

Z

1

0

h

(3)

(X

TP

+ η(X

T

− X

TP

)) (1 − η)

2

2 dη

(3.10)

= E h(X

TP

)

+ E

h h

(1)

(X

TP

)X

1,T

i + E

h

(1)

(X

TP

) X

2,T

2

+ 1

2 E

h h

(2)

(X

TP

)X

1,T2

i

+ Error

3,h

, (3.11) Error

3,h

= E

h

(1)

(X

TP

)(X

T

2

X

j=0

X

j,T

j! )

 + 1 2 E

h h

(2)

(X

TP

)(X

T

− X

TP

− X

1,T

)(X

T

− X

TP

+ X

1,T

) i

(10)

+ E

(X

T

− X

TP

)

3

Z

1

0

h

(3)

(X

TP

+ η(X

T

− X

TP

)) (1 − η)

2

2 dη

, (3.12)

with the convention X

TP

= X

T0

= X

0,T

. Then we transform the terms E

h

(1)

(X

TP

)X

1,T

, E

h

h

(1)

(X

TP

)

X2,T2

i and

12

E

h

(2)

(X

TP

)X

1,T2

into a weighted sum of sensitivities, yielding the sum in (2.3). To achieve this transformation, we apply a key lemma which proof is postponed to Appendix B:

Lemma 3.2. Let ϕ be a C

b

( R ) function and (f

t

)

t

be a measurable and bounded deterministic function. Let N ≥ 1 be fixed, and consider measurable and bounded deterministic functions t 7→ l

i,t

for i = 1, . . . , N . Then, using the convention dW

t0

= dt, dW

t1

= dW

t

and dW

t2

= dB

t

, for any (I

1

, . . . , I

N

) ∈ {0, 1, 2}

N

we have:

E

"

ϕ(

Z

T 0

f

t

dW

t

) Z

T

0

l

N,tN

Z

tN

0

l

N−1,tN−1

. . . Z

t2

0

l

1,t1

dW

tI1

1

. . . dW

tIN−1N−1

dW

tIN

N

#

= ω(b l

1

, . . . ,b l

N

)

T0

#{k:Ik6=0}

x#{k:Ik6=0}

E

"

ϕ(

Z

T 0

f

t

dW

t

+ x)

#

x=0

, (3.13)

where b l

k,t

:=

l

k,t

if I

k

= 0, f

t

l

k,t

if I

k

= 1, f

t

ρ

t

l

k,t

if I

k

= 2.

Details of the complete derivation of the corrective terms appearing in (2.3) is given in Ap- pendix B. Remind that these sensitivities are well defined even if h is not smooth.

B 3rd step: error analysis. Last but not least, one has to estimate the residual terms. In the smooth case for h, owing to (3.10), it is sufficient to estimate the L

p

norms of the residual processes X

T

i

X

j=0

X

j,T

j! for i ∈ {1, 2} and there is no other difficulties in order to conclude. Under the sole assumption that h ∈ Lip

b

( R ), the previous decomposition doesn’t apply (because of h

(2)

and h

(3)

).

To get rid off the derivatives of h, a natural idea is first to smooth h, second to employ Malliavin integration by parts formulas, and then to take the smoothing parameter to 0. But this strategy applied to the representation (3.10) fails because the random variable X

TP

+ η(X

T

− X

TP

) does not belong to the space D

for η 6= 0: indeed, the coefficient function of the square root model does not satisfy the standard assumptions. Malliavin differentiability is studied by hand in [AE08] up to the second order.

To overcome these difficulties, the trick is twofold:

1. we replace X

T

by the smooth random variable (in Malliavin sense) X

TP

+X

1,T

+

X2,T2

close to X

T

in L

p

,

2. we apply a Gaussian regularization to h, giving a new payoff h

δ

(defined later in (3.32)) which will support the Malliavin calculus computations (inspired by [GM14]).

Therefore E [h(X

T

)] is approximated by E [h

δ

(X

T

)] which can be decomposed as follows (with similar arguments as (3.12)):

E [h

δ

(X

T

)] = E

h

δ

(X

TP

+ X

1,T

+ X

2,T

2 )

+ E

(X

T

2

X

j=0

X

j,T

j! )

Z

1 0

h

(1)δ

((1 − η)

2

X

j=0

X

j,T

j! + ηX

T

)dη

= E

h

δ

(X

TP

) + E

h

(1)δ

(X

TP

)(X

1,T

+ X

2,T

2 )

+ 1 2 E

h

h

(2)δ

(X

TP

)X

1,T2

i

+ Error

3,hδ

,

(3.14)

(11)

Error

3,hδ

= E

(X

T

2

X

j=0

X

j,T

j! ) Z

1

0

h

(1)δ

((1 − η)

2

X

j=0

X

j,T

j! + ηX

T

)dη

+ 1 2 E

"

h

(2)δ

(X

TP

)(X

1,T

X

2,T

+ X

2,T2

4 )

#

+ E

(X

1,T

+ X

2,T

2 )

3

Z

1

0

(1 − η)

2

2 h

(3)δ

(X

TP

+ η(X

1,T

+ X

2,T

2 ))dη

. (3.15)

Observe that the equations (3.11) and (3.14) are the same (up to the modification of h into h

δ

) but the representations of error (3.12) and (3.15) differ. The latter will be that to use in our analysis with Malliavin calculus. As h ∈ Lip

b

( R ), the first term of (3.15) involving only h

(1)δ

can be handled directly. Integration by parts from Malliavin calculus should be applied to the two last terms of (3.15), which contain higher derivatives of h

δ

with the random variables X

TP

, X

1,T

and X

2,T

belonging to D

, but X

TP

+ η(X

1,T

+

X2,T2

) suffers from degeneracy (in the Malliavin sense) for η 6= 0. This is where Gaussian regularization plays an important role: it allows to integrate by parts w.r.t. the random variable X

TP

+ η(X

1,T

+

X2,T2

) + Gaussian r.v., which is now non degenerate. Details are explained in Subsection 3.3.

The complete analysis of the error estimate (2.4) is given in the following subsection, along several steps:

1. L

p

norms estimates of the residuals processes,

2. small Gaussian noise perturbation to smooth the function h,

3. careful use of Malliavin integration by parts formulas to achieve the proof.

3.2. Approximation of X, V , Λ and error estimates.

Approximation of V , Λ and related error estimates.

Definition 3.3. Assume (P). We introduce for i ∈ {0, 1, 2} the Λ-residual processes defined by (t ∈ [0, T ])

R

Λi,t

:= Λ

t

i

X

j=0

Λ

j,t

j!

where by convention Λ

0,t

= λ

t

and the corrective processes (Λ

1

, Λ

2

) are defined in (3.6)-(3.9). By replacing Λ by V , we define similarly the V -residual processes using the notation R

V

.

Proposition 3.4. Assume (P ). Then for any p ≥ 1, we have:

√ v

0

≤ λ

inf

≤ λ

sup

≤ p

v

0

+ T α

sup

, (3.16)

sup

t∈[0,T]

||Λ

i,t

||

p

c

sup

T )

i

, ∀i ∈ {1, 2}, (3.17) sup

t∈[0,T]

||R

Λi,t

||

p

c

sup

T )

i+1

, ∀i ∈ {0, 1, 2}. (3.18)

Proof. (3.16) is obvious in view of (3.3). The proofs of (3.17) and (3.18) can be found in [BGM10b, Propositions 4.6, 4.7 and 4.8] replacing in the quoted paper κ by 0 and κθ

t

by α

t

.

Corollary 3.5. Assume (P). Then for any p ≥ 1 we have

v

0

≤ v

inf

≤ v

sup

≤v

0

+ T α

sup

, (3.19)

(12)

sup

t∈[0,T]

||V

t

||

p

c

1 + v

0

, (3.20)

sup

t∈[0,T]

||V

i,t

||

p

c

sup

T )

i

, ∀i ∈ {1, 2}, (3.21) sup

t∈[0,T]

||R

Vi,t

||

p

c

sup

T )

i+1

, ∀i ∈ {0, 1, 2}. (3.22)

Proof. The proofs of (3.19) and (3.20) are easy. The inequality (3.21) is readily obtained thanks to (3.5), (3.8) and (3.17). The proof of (3.22) is available in [BGM10b, Corollary 4.9].

Approximation of X and related error estimates.

Definition 3.6. Assume (H

x0

). We introduce for i ∈ {0, 1, 2} the X -residual processes defined by (t ∈ [0, T ])

R

Xi,t

:= X

t

i

X

j=0

X

j,t

j! ,

where by convention X

0,t

= X

t0

= X

tP

and the corrective processes (X

1

, X

2

) are defined in (3.4)- (3.7). When writing a Taylor expansion of σ

t

(.) at x = X

t

around x = x

0

, we denote by R

n,σ

(X

t

) the n

th

Taylor residual:

R

n,σ

(X

t

) := σ

t

(X

t

) −

n

X

i=0

(X

t

− x

0

)

i

i! σ

(i)t

. (3.23)

Replacing σ by σ

2

, we use the similar notation R

n,σ2

(X

t

).

Let p ≥ 2, standard computations involving Burkholder-Davis-Gundy and Holder inequalities yield:

||X

t

− x

0

||

pp

c

t

p2−1

Z

t

0

||σ

s

(X

s

) p

V

s

||

pp

ds + t

p−1

Z

t

0

||σ

2s

(X

s

)V

s

||

pp

ds

c

t

p2−1

|σ|

p

Z

t

0

E h V

sp/2

i

ds + t

p−1

|σ|

2p

Z

t

0

E [V

sp

] ds ≤

c

(|σ|

T )

p

, (3.24) where we have applied the estimate (3.20) at the last inequality. Observe that the above estimate is valid also for p ∈ [1, 2) by using k.k

p

≤ k.k

q

for p ≤ q.

We now aim at handling X -residual processes, and the next results are intermediate steps.

Lemma 3.7. Assume (H

x0

) and (P). For any p ≥ 1:

sup

t∈[0,T]

||X

tP

− x

0

||

p

c

|σ|

T , (3.25)

sup

t∈[0,T]

||X

i,t

||

p

c

|σ|

supi

+ M

1

(σ)(M

0

(σ) + ξ

sup

)

i−1

]T

i+12

, ∀i ∈ {1, 2}. (3.26)

Proof. It is enough to prove the inequalities for p ≥ 2: thus, now consider such a p. The proof of (3.25) is similar to (3.24). For (3.26) i = 1: starting from (3.4), the same computations as before give:

||X

1,t

||

p

c

M

1

(σ) √

T (1 + M

0

(σ) √ T ) sup

t∈[0,T]

||X

tP

− x

0

||

p

(13)

+ |σ|

T (1 + M

0

(σ) √ T ) sup

t∈[0,T]

||V

1,t

||

p

. We conclude using (3.25) and (3.21). For (3.26) i = 2, one has from (3.7):

||X

2,t

||

p

c

M

1

(σ) √

T (1 + M

0

(σ) √

T )( sup

t∈[0,T]

||(X

tP

− x

0

)

2

||

p

+ sup

t∈[0,T]

||X

1,t

||

p

)+

+ |σ|

T (1 + M

0

(σ) √ T ) sup

t∈[0,T]

||V

2,t

||

p

+ |σ|

√ T sup

t∈[0,T]

||V

1,t2

||

p

+ M

1

(σ) √

T (1 + M

0

(σ) √ T ) sup

t∈[0,T]

||X

tP

− x

0

||

2p

sup

t∈[0,T]

||V

1,t

||

2p

. We conclude using (3.25), (3.26) i = 1 and (3.21).

The following Lemma provides the explicit equations solved by the X -residual processes, in a form appropriate for tight L

p

estimates.

Lemma 3.8. Assume (H

x0

) and (P). One has:

dR

X0,t

=[λ

t

R

0,σ

(X

t

) + σ

t

(X

t

)R

Λ0,t

]dW

t

− 1

2 [v

t

R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V0,t

]dt, R

X0,0

= 0, (3.27) dR

X1,t

=[λ

t

R

1,σ

(X

t

) + Λ

1,t

R

0,σ

(X

t

) + σ

t

(X

t

)R

Λ1,t

+ λ

t

σ

(1)t

R

X0,t

]dW

t

− 1

2 [v

t

R

1,σ2

(X

t

) + V

1,t

R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V1,t

+ 2v

t

σ

t

σ

t(1)

R

X0,t

]dt, R

X1,0

= 0, (3.28) dR

X2,t

=[λ

t

R

2,σ

(X

t

) + Λ

1,t

R

1,σ

(X

t

) + Λ

2,t

2 R

0,σ

(X

t

) + σ

t

(X

t

)R

2,tΛ

+ λ

t

σ

t(1)

R

X1,t

+ λ

t

σ

t(2)

2 R

X0,t

(X

t

+ X

tP

−2x

0

) + Λ

1,t

σ

(1)t

R

X0,t

]dW

t

− 1

2 [v

t

R

2,σ2

(X

t

) + V

1,t

R

1,σ2

(X

t

) + V

2,t

2 R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V2,t

+ 2v

t

σ

t

σ

t(1)

R

X1,t

+ v

t

((σ

(1)t

)

2

+ σ

(2)t

σ

t

)R

X0,t

(X

t

+ X

tP

−2x

0

) + 2V

1,t

σ

t

σ

(1)t

R

X0,t

]dt, R

X2,0

= 0. (3.29)

Proof. The verification of these identities is tedious but without mathematical difficulties. For convenience, we detail some computations. To obtain (3.27), start from (1.2)-(1.3) and (1.4) and write:

dR

0,tX

=[σ

t

(X

t

t

− σ

t

λ

t

]dW

t

− 1

2 [σ

2t

(X

t

)V

t

− σ

t2

v

t

]dt

=[λ

t

t

(X

t

) − σ

t

) + σ

t

(X

t

)(Λ

t

− λ

t

)]dW

t

− 1

2 [v

t

2t

(X

t

) − σ

t2

) + σ

t2

(X

t

)(V

t

− v

t

)]dt

=[λ

t

R

0,σ

(X

t

) + σ

t

(X

t

)R

Λ0,t

]dW

t

− 1

2 [v

t

R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V0,t

]dt.

Similarly for (3.28), using (3.27) and (3.4), we get:

dR

X1,t

=dR

X0,t

− dX

1,t

=[λ

t

R

0,σ

(X

t

) + σ

t

(X

t

)R

Λ0,t

]dW

t

− 1

2 [v

t

R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V0,t

]dt

− [(X

tP

− x

0

t(1)

λ

t

+ Λ

1,t

σ

t

](dW

t

− σ

t

λ

t

dt)

=[λ

t

R

0,σ

(X

t

) + Λ

1,t

R

0,σ

(X

t

) + σ

t

(X

t

)R

Λ1,t

]dW

t

− 1

2 [v

t

R

0,σ2

(X

t

) + V

1,t

R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V1,t

]dt

− (X

tP

− x

0

t(1)

λ

t

(dW

t

− σ

t

λ

t

dt)

(14)

=[λ

t

R

1,σ

(X

t

) + Λ

1,t

R

0,σ

(X

t

) + σ

t

(X

t

)R

Λ1,t

+ λ

t

σ

(1)t

R

X0,t

]dW

t

− 1

2 [v

t

R

1,σ2

(X

t

) + V

1,t

R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V1,t

+ 2v

t

σ

t

σ

t(1)

R

X0,t

]dt.

Now consider (3.29). Start from (3.28)-(3.7) and write:

dR

X2,t

=dR

X1,t

− 1 2 dX

2,t

=[λ

t

R

1,σ

(X

t

) + Λ

1,t

R

0,σ

(X

t

) + σ

t

(X

t

)R

Λ1,t

+ λ

t

σ

(1)t

R

X0,t

]dW

t

− 1

2 [v

t

R

1,σ2

(X

t

) + V

1,t

R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V1,t

+ 2v

t

σ

t

σ

t(1)

R

X0,t

]dt

− 1 2

λ

t

[(X

tP

− x

0

)

2

σ

(2)t

+ 2σ

(1)t

X

1,t

] + 2(X

tP

− x

0

1,t

σ

(1)t

(dW

t

− σ

t

λ

t

dt)

− 1 2

Λ

2,t

σ

t

dW

t

− [(X

tP

− x

0

)V

1,t

σ

t(1)

σ

t

+ (X

tP

− x

0

)

2

(1)t

)

2

v

t

+ V

2,t

2 σ

t2

]dt

=[λ

t

R

1,σ

(X

t

) + Λ

1,t

R

0,σ

(X

t

) + Λ

2,t

2 R

0,σ

(X

t

) + σ

t

(X

t

)R

2,tΛ

+ λ

t

σ

t(1)

R

X1,t

]dW

t

− 1

2 [v

t

R

1,σ2

(X

t

) + V

1,t

R

0,σ2

(X

t

) + V

2,t

2 R

0,σ2

(X

t

) + σ

2t

(X

t

)R

2,tV

+ 2v

t

σ

t

σ

(1)t

R

X1,t

]dt

− 1 2

λ

t

(X

tP

− x

0

)

2

σ

(2)t

+ 2(X

tP

− x

0

1,t

σ

t(1)

(dW

t

− σ

t

λ

t

dt) + 1

2 [(X

tP

− x

0

)V

1,t

σ

(1)t

σ

t

+ (X

tP

− x

0

)

2

t(1)

)

2

v

t

]dt

=[λ

t

R

2,σ

(X

t

) + Λ

1,t

R

0,σ

(X

t

) + Λ

2,t

2 R

0,σ

(X

t

) + σ

t

(X

t

)R

2,tΛ

+ λ

t

σ

t(1)

R

X1,t

+ λ

t

σ

t(2)

2 R

X0,t

(X

t

+ X

tP

−2x

0

)]dW

t

− 1

2 [v

t

R

2,σ2

(X

t

) + V

1,t

R

0,σ2

(X

t

) + V

2,t

2 R

0,σ2

(X

t

) + σ

2t

(X

t

)R

2,tV

+ v

t

((σ

t(1)

)

2

+ σ

t(2)

σ

t

)R

X0,t

(X

t

+ X

tP

−2x

0

) + 2v

t

σ

t

σ

t(1)

R

X1,t

]dt − (X

tP

− x

0

1,t

σ

t(1)

(dW

t

− σ

t

λ

t

dt) + 1

2 (X

tP

− x

0

)V

1,t

σ

t(1)

σ

t

dt

=[λ

t

R

2,σ

(X

t

) + Λ

1,t

R

1,σ

(X

t

) + Λ

2,t

2 R

0,σ

(X

t

) + σ

t

(X

t

)R

2,tΛ

+ λ

t

σ

t(1)

R

X1,t

+ λ

t

σ

t(2)

2 R

X0,t

(X

t

+ X

tP

−2x

0

) + Λ

1,t

σ

(1)t

R

X0,t

]dW

t

− 1

2 [v

t

R

2,σ2

(X

t

) + V

1,t

R

1,σ2

(X

t

) + V

2,t

2 R

0,σ2

(X

t

) + σ

t2

(X

t

)R

V2,t

+ 2v

t

σ

t

σ

t(1)

R

X1,t

+ v

t

((σ

(1)t

)

2

+ σ

(2)t

σ

t

)R

X0,t

(X

t

+ X

tP

−2x

0

) + 2V

1,t

σ

t

σ

(1)t

R

X0,t

]dt.

Another intermediate result is the estimates of R

n,σ

(X

t

) and R

n,σ2

(X

t

). In view of (H

x0

), we have |R

n,σ

(X

t

)| ≤

c

|X

t

− x

0

|

n+1

M

1

(σ) and |R

n,σ2

(X

t

)| ≤

c

|X

t

− x

0

|

n+1

M

0

(σ)M

1

(σ) for n = 0, 1, 2. Combined with (3.24), this readily gives ∀p ≥ 1 and ∀j ∈ {0, 1, 2}:

sup

t∈[0,T]

||R

j,σ

(X

t

)||

p

c

(|σ|

T )

j+1

M

1

(σ), sup

t∈[0,T]

||R

j,σ2

(X

t

)||

p

c

(|σ|

T )

j+1

M

0

(σ)M

1

(σ).

(3.30) We are now in a position to estimate the residuals processes, by taking advantage of the explicit representation of Lemma 3.8.

Proposition 3.9. Assume that (H

x0

) and (P) hold. Then for any p ≥ 1, we have:

sup

t∈[0,T]

||R

Xj,t

||

p

c

|σ|

ξ

supj+1

+ M

1

(σ)(M

0

(σ) + ξ

sup

)

j

T

j2+1

, ∀j ∈ {0, 1, 2}. (3.31)

Références

Documents relatifs

Photon density and photon flux are widely used to model the measurable quantity in diffuse optical tomogra- phy problems.. However, it is not these two quantities that are

The analysis is carried out for abstract Schrödinger and wave conservative systems with bounded observation (locally distributed).. Mathematics Subject Classification (2000)

Eikonal equation; Non-local; Viscosity solution; Error bounds; Continuum limits; Weighted

Proof of Theorem 1.6 We will now consider the case when xρ 3 is small, so that the classical method of stationary phase does not lead to the asymptotic behavior.. Indeed, the phase t

Like in the previous price approximation formulas, it is possible to perform additional Taylor expansions in order to obtain similar formulas using local volatility function frozen

We consider a portfolio with 5 options (Put or Call), each of these being written on a different underlying asset, and we compare the the semi-explicit 1-st order approximation ˜¯ Y

Proposition 1 (Black-Scholes Greeks). Using the notation from Defini- tion 3, the sensitivities w.r.t.. Smart expansion and fast calibra- tion for jump diffusion. Analytical

The coupled model is derived from the Navier–Stokes equations with the terms of order