Numerical approximation of two-scale SDEs
Camilo Andrés García Trillos
Camilo-Andres.GARCIA@unice.fr
Colloque de Jeunes Probabilistes et Statisticiens
CIRM Marseille, April 18, 2012
Stochastic Volatility

Stochastic volatility model:

  dS_t = b(S_t) dt + σ(S_t, v_t) dW_t,  where v_t is itself given as the solution of an SDE.

Empirical studies on high-frequency data (e.g. S&P [Fouqué et al. (98)]; IBOVESPA [Souza et al. (06)]) support modeling financial assets with stochastic volatility featuring:
• Mean-reverting volatility
• Fast reversion: a mean-reverting time on the order of days (≪ maturity of the instruments/derivatives).
Numerical approximation of two-scale SDEs, CJPS, April 18, 2012, 2 / 16
A general two-scale SDE

We consider the following two-scale SDE:

  X_t^ε = x_0 + ∫_0^t f(X_s^ε, Y_s^ε) ds + ∫_0^t g(X_s^ε, Y_s^ε) dW_s
  Y_t^ε = y_0 + ε^{-1} ∫_0^t b(X_s^ε, Y_s^ε) ds + ε^{-1/2} ∫_0^t σ(X_s^ε, Y_s^ε) dW̃_s

We will assume:
• ε ≪ 1
• W and W̃ are independent
• f(x, y) and g(x, y) are C^1 with linear growth in x, and C_b^∞ in y
• b(x, y) and σ(x, y) are C_b^∞ in both x and y
• Non-degeneracy: λ_M > σσ*(x, y) > λ_m > 0, and mean reversion:

  lim_{|y|→∞} b(x, y)·y = −∞
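For concreteness, the two-scale system can be simulated directly with Euler-Maruyama, although resolving the fast scale forces a step size of order ε, which is exactly the cost that motivates approximating the effective equation instead. A minimal NumPy sketch; the coefficients f, g, b, σ below are illustrative assumptions chosen for the example, not taken from the talk:

```python
import numpy as np

def simulate_two_scale(x0, y0, eps, T, n, rng):
    """Euler-Maruyama for the two-scale system; the fast variable Y is
    advanced on the same grid, so n must resolve the O(eps) time scale."""
    dt = T / n
    x, y = x0, y0
    # Illustrative coefficients (assumptions, not from the talk):
    f = lambda x, y: -x + y          # slow drift, linear growth in x
    g = lambda x, y: 1.0 + 0.1 * x   # slow diffusion
    b = lambda x, y: -y              # fast, mean-reverting drift
    sig = lambda x, y: 1.0           # fast diffusion, non-degenerate
    for _ in range(n):
        dW, dWt = rng.normal(0.0, np.sqrt(dt), size=2)  # independent W, W~
        x_new = x + f(x, y) * dt + g(x, y) * dW
        y_new = y + b(x, y) * dt / eps + sig(x, y) * dWt / np.sqrt(eps)
        x, y = x_new, y_new
    return x, y

rng = np.random.default_rng(0)
xT, yT = simulate_two_scale(1.0, 0.0, eps=1e-2, T=1.0, n=10_000, rng=rng)
```

Note that n = 10_000 steps are needed here for T = 1 and ε = 10⁻², since the drift step dt/ε must stay small; the effective-equation approach below removes this constraint.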
Homogenization result

Theorem [Pardoux & Veretennikov (01, 03)]
Under the hypotheses, the rescaled, frozen-parameter diffusion

  Y_t^x = y_0 + ∫_0^t b(x, Y_s^x) ds + ∫_0^t σ(x, Y_s^x) dW̃_s

is ergodic with invariant measure μ^x, and X^ε converges in law to X, where X solves

  X_t = x_0 + ∫_0^t F(X_s) ds + ∫_0^t G(X_s) dW_s   (effective equation),

with G = √A, where F and A are the averages of f and a = gg* with respect to μ^x:

  F(x) = ∫ f(x, y) μ^x(dy),   A(x) = ∫ g·g*(x, y) μ^x(dy)
• Problem: in general we do not know μ^x explicitly.
• Our goal: propose a numerical method to approximate the effective equation.
Effective equation approximation algorithm

• Suppose we have good (possibly random) estimates F̃^(n) ≈ F, G̃^(n) ≈ G.
• Discretize the approximated equation with an Euler scheme: for t_k = k/n,

  X̃_{t_{k+1}}^n = X̃_{t_k}^n + (1/n) F̃^(n)(X̃_{t_k}^n) + (1/√n) G̃^(n)(X̃_{t_k}^n) U_{k+1},  where U_k ∼ N(0, 1)

• A similar approach has been used in the deterministic [Fatkullin, Vanden-Eijnden (04)] and stochastic [E et al. (04)] setups.
• Our approach: choose estimators for the averages F̃^(n), G̃^(n) that allow us to derive a C.L.T.-like result for the strong error.
Invariant approximation algorithm: decreasing Euler step

Decreasing Euler step
Let {γ_k} be a decreasing sequence of positive reals tending to zero, let Γ_M := Σ_{k=0}^M γ_k, and let Ū_k ∼ N(0, 1).

• Decreasing-step Euler scheme:

  Ȳ_{k+1}^x = Ȳ_k^x + γ_{k+1} b(x, Ȳ_k^x) + √γ_{k+1} σ(x, Ȳ_k^x) Ū_{k+1}

• Average estimator:

  ν(f, x; M) := (1/Γ_M) Σ_{k=1}^M γ_k f(x, Ȳ_{k−1}^x) ≈ (1/Γ_M) ∫_0^{Γ_M} f(x, Y_s^x) ds

  and  lim_{M→∞} (1/Γ_M) ∫_0^{Γ_M} f(x, Y_s^x) ds = ∫ f(x, y) μ^x(dy) = F(x)
Invariant approximation algorithm

Properties [Lamberton, Pagès (02)]
Under some hypotheses on the step sizes γ_k:

1. Almost sure convergence: for fixed x, ν(f, x; M) → F(x) a.s.
2. C.L.T.: for fixed x,

  √Γ_M (ν(f, x; M) − F(x)) →_L N(0, Ξ(x))

Ξ depends on μ^x and on the coefficients of the ergodic diffusion, but not on the choice of γ_k.
Effective equation approximation

  X̄_{t_{k+1}}^(n) = X̄_{t_k}^(n) + (1/n) ν(f, X̄_{t_k}; M(n)) + (1/√n) √(ν(gg*, X̄_{t_k}; M(n))) U_{k+1}

i.e. an Euler scheme in which we use independent realizations of the decreasing-step Euler estimator at each discretization step.

1. We fix γ_k = k^{−θ} for 1/3 < θ < 1.
2. We relate the two parameters M and n. The most efficient way to do so is to fix M(n) such that Γ_{M(n)} ∝ n.

Result 1: Strong convergence

  lim_{n→∞} E[ sup_{0≤t≤T} |X_t − X̄_t^(n)|² ] = 0

Sketch of the proof (stability technique):
• A priori bounds
• Obtain a global L² error control from a step-wise L² error control (Burkholder maximal inequality + Gronwall lemma)
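Putting the two pieces together, the scheme draws fresh, independent ν-estimates of F and of A = gg* at every Euler step. A sketch on a toy model (fast OU with invariant law N(0, 1/2), f(x, y) = −x, g(x, y) = y; all of these coefficient choices are illustrative assumptions) for which F(x) = −x and A(x) = 1/2 exactly:

```python
import numpy as np

def nu_hat(phi, x, M, theta, rng):
    """Decreasing-step Euler average of phi(x, .) against the invariant law
    of the frozen fast diffusion. Toy fast coefficients (assumptions):
    b(x, y) = -y, sigma(x, y) = 1, so the invariant law is N(0, 1/2)."""
    y, num, Gamma = 0.0, 0.0, 0.0
    for k in range(1, M + 1):
        g_k = k ** (-theta)
        num += g_k * phi(x, y)   # weight at Ybar_{k-1}
        Gamma += g_k
        y += -y * g_k + np.sqrt(g_k) * rng.normal()
    return num / Gamma

def effective_euler(x0, T, n, M, theta, rng):
    """Euler scheme for the effective equation, drawing independent
    nu-estimates of F and A = gg* at each step. With f(x, y) = -x and
    g(x, y) = y we have F(x) = -x and A(x) = 1/2."""
    dt = T / n
    x = x0
    for _ in range(n):
        F_est = nu_hat(lambda x_, y: -x_, x, M, theta, rng)    # drift average
        A_est = nu_hat(lambda x_, y: y * y, x, M, theta, rng)  # gg* average
        x = x + F_est * dt + np.sqrt(max(A_est, 0.0) * dt) * rng.normal()
    return x

rng = np.random.default_rng(2)
xT = effective_euler(1.0, T=1.0, n=50, M=400, theta=0.35, rng=rng)
```

Note that the inner loops never touch a step of size ε: the fast scale enters only through the frozen diffusion run in its own (rescaled) time.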
Result 2: Limit error distribution

Limit distribution
If the effective diffusion is non-degenerate, then, fixing Γ_M = n,

  n^{1/2} (X − X̄^(n)) ⇒ ζ

with ζ defined as the solution of

  ζ_t = ∫_0^t ∂_x F(X_s) ζ_s ds + ∫_0^t ∂_x G(X_s) ζ_s dW_s
        + (1/√2) ∫_0^t ∂_x G(X_s) G(X_s) dB_s^1   (Euler discretization)
        + ∫_0^t √Ξ(X_s) dB_s^2                    (average approximation)

where B^1 and B^2 are two independent standard Brownian motions.
Result 2: Limit error distribution - outline of the proof

  n^{1/2} (X − X̄^(n)) =: ζ^n ⇒ ζ

Sketch of the proof
• Tightness
• Obtain an SDE for the error: ζ^n = ζ̃^n + R^n
• Prove R^n → 0 in L²
• Use classical results (Kurtz and Protter) to deduce tightness of ζ̃^n from convergence in law of the tuple of coefficients of the related SDE.
• Convergence of the tuple: use the C.L.T. of the decreasing-step Euler scheme, independence, and convergence of the quadratic variations.
• Identification is straightforward thanks to strong convergence.
Result 3: Romberg extrapolation

Let l ∈ ℕ, l ≥ 2, and let c_1, …, c_l ∈ ℝ satisfy the linear system

  Σ_{i=1}^l c_i = 1,
  Σ_{i=1}^l (c_i / Γ_{iM}) Σ_{j=1}^{iM} γ_j^r = 0   for r = 2, …, l.

Define the approximation function

  ν̂(f, x; M, l) = Σ_{i=1}^l c_i ν(f, x; iM)

The convergence and limit-error results using this approximation are unchanged (up to a constant), but in this case we may take

  1/(2l + 1) < θ < 1
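The weights c_1, …, c_l come from a small l × l linear solve. A sketch (γ_j = j^{−θ} as in the talk; the function name and the use of NumPy's dense solver are illustrative choices):

```python
import numpy as np

def romberg_weights(M, l, theta):
    """Solve the l x l linear system for the extrapolation weights c_1..c_l:
    sum_i c_i = 1, and sum_i (c_i / Gamma_{iM}) * sum_{j<=iM} gamma_j^r = 0
    for r = 2..l, with gamma_j = j^{-theta}."""
    gamma = np.arange(1, l * M + 1, dtype=float) ** (-theta)
    A = np.zeros((l, l))
    A[0, :] = 1.0                                  # sum_i c_i = 1
    for i in range(1, l + 1):
        Gamma_iM = gamma[: i * M].sum()
        for r in range(2, l + 1):
            A[r - 1, i - 1] = (gamma[: i * M] ** r).sum() / Gamma_iM
    rhs = np.zeros(l)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)

# Weights for l = 2 with the step exponent used in the numerical test below
c = romberg_weights(M=1000, l=2, theta=0.225)
```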
Efficiency analysis

Define τ := # of operations. Our algorithm with n steps requires

  τ(n) = K n^{(2−θ)/(1−θ)}

For a fixed strong error tolerance ∆ (recall ∆(n) := K n^{−1/2}):
• Simple SDE (θ > 1/3):  τ(∆) = K ∆^{−2 − 2/(1−θ)} ≥ K ∆^{−5}
• Extrapolated SDE (θ > 1/(2l+1)):  τ(∆) = K_l ∆^{−2 − 2/(1−θ)} ≥ K_l ∆^{−4 − 1/l}
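As a quick arithmetic check of the exponents above (assuming the cost-per-accuracy exponent 2 + 2/(1 − θ), which equals 2·(2 − θ)/(1 − θ) from τ(n) with n = K∆^{−2}):

```python
def cost_exponent(theta):
    """Exponent q in tau(Delta) = K * Delta^{-q}, namely q = 2 + 2/(1 - theta)."""
    return 2 + 2 / (1 - theta)

# At the boundary values of theta:
# theta -> 1/3 gives q -> 5 (simple scheme)
# theta -> 1/(2l+1) gives q -> 4 + 1/l, e.g. 4.5 for l = 2
q_simple = cost_exponent(1 / 3)
q_extrap = cost_exponent(1 / 5)
```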
Numerical test

Test problem:

  dX_t^ε = X_t^ε dt + X_t^ε Y_t^ε dW_t
  dY_t^ε = ε^{−1} ( √((1/2)(1 + (X_t^ε)²)) − Y_t^ε ) dt + ε^{−1/2} √( (2(X_t^ε)² + 1) / ((X_t^ε)² + 1) ) dW̃_t

We test the algorithm with
• γ_k = k^{−0.35} for the simple version (θ ≈ 1/3)
• γ_k = k^{−0.225} for the extrapolated version (θ ≈ 1/5)
[Figure: "QQplot - SDE - Decreasing step" and "QQplot - SDE - Extrapolated": QQ-plots of the normalized observed error n^{1/2}(X − X̃) against the limit error ζ for the two schemes.]

[Figure: "SDE - L² error vs. Time": log-log plot of |X − X̃^n|_2 against computation time (s); fitted slopes: Simple (r = −0.18), Extrap. (r = −0.22).]