Array-RQMC for Markov Chains with Random Stopping Times

Pierre L'Ecuyer, Maxime Dion, Adam L'Archevêque-Gaudet
Informatique et Recherche Opérationnelle, Université de Montréal
1. Markov chain setting, Monte Carlo, classical RQMC.
2. Array-RQMC: preserving the low discrepancy of the chain’s states.
3. Least-squares Monte Carlo for optimal stopping times.
4. Examples.
Monte Carlo for Markov Chains

Setting: A Markov chain with state space $\mathcal{X} \subseteq \mathbb{R}^\ell$ evolves as
$$X_0 = x_0, \qquad X_j = \varphi_j(X_{j-1}, U_j), \quad j \ge 1,$$
where the $U_j$ are i.i.d. uniform r.v.'s over $(0,1)^d$. We want to estimate $\mu = \mathbb{E}[Y]$ where
$$Y = \sum_{j=1}^{\tau} g_j(X_j)$$
and $\tau$ is a stopping time w.r.t. the filtration $\mathcal{F}$ generated by $\{(j, X_j),\ j \ge 0\}$.

Ordinary MC: For $i = 0, \dots, n-1$, generate $X_{i,j} = \varphi_j(X_{i,j-1}, U_{i,j})$ for $j = 1, \dots, \tau_i$, where the $U_{i,j}$'s are i.i.d. $U(0,1)^d$. Estimate $\mu$ by
$$\hat\mu_n = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{\tau_i} g_j(X_{i,j}) = \frac{1}{n} \sum_{i=1}^{n} Y_i.$$
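The crude MC estimator above can be sketched in a few lines. This is not from the slides: the chain below (a random walk on the unit interval) and all names are hypothetical, and for simplicity the stopping time is taken as a fixed horizon `tau_max`.

```python
import numpy as np

def crude_mc(phi, g, tau_max, x0, n, d, rng=None):
    """Crude Monte Carlo estimate of mu = E[sum_{j=1}^{tau} g_j(X_j)].

    phi(j, x, u): one-step transition X_j = phi_j(X_{j-1}, U_j).
    g(j, x):      cost contribution at step j.
    Here every chain runs for tau_max steps; a genuine stopping time
    would break out of the inner loop when it occurs.
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n):
        x, y = x0, 0.0
        for j in range(1, tau_max + 1):
            u = rng.random(d)          # U_j uniform over (0,1)^d
            x = phi(j, x, u)
            y += g(j, x)
        total += y
    return total / n

# Toy chain (hypothetical): X_j = (X_{j-1} + U_j) mod 1, g_j(x) = x.
# Each X_j is then uniform on [0,1), so E[Y] = tau_max / 2 = 2.5.
phi = lambda j, x, u: (x + u[0]) % 1.0
g = lambda j, x: x
est = crude_mc(phi, g, tau_max=5, x0=0.0, n=10000, d=1, rng=42)
```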
Classical RQMC for Markov Chains

Put $V_i = (U_{i,1}, U_{i,2}, \dots)$. Estimate $\mu$ by
$$\hat\mu_{\mathrm{rqmc},n} = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{\tau_i} g_j(X_{i,j})$$
where $P_n = \{V_0, \dots, V_{n-1}\} \subset (0,1)^s$ has the following properties:
(a) each point $V_i$ has the uniform distribution over $(0,1)^s$;
(b) $P_n$ has low discrepancy.

The dimension is $s = \inf\{s_0 : P[d\tau \le s_0] = 1\}$.
For a Markov chain, the dimension $s$ is often very large!
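For a fixed horizon $\tau$, classical RQMC amounts to giving chain $i$ one $s$-dimensional point with $s = \tau d$. A minimal sketch, assuming SciPy's `scipy.stats.qmc.Sobol` generator is available; the toy chain is the same hypothetical random walk as before:

```python
import numpy as np
from scipy.stats import qmc

def classical_rqmc(phi, g, tau, x0, n, d, seed=None):
    """Classical RQMC: chain i consumes one s-dimensional point
    V_i = (U_{i,1}, ..., U_{i,tau}), s = tau*d, of a scrambled Sobol' net."""
    s = tau * d                        # total dimension of the point set
    sob = qmc.Sobol(d=s, scramble=True, seed=seed)
    points = sob.random(n)             # n randomized low-discrepancy points
    est = 0.0
    for i in range(n):
        x, y = x0, 0.0
        for j in range(1, tau + 1):
            u = points[i, (j - 1) * d : j * d]   # coordinates for step j
            x = phi(j, x, u)
            y += g(j, x)
        est += y
    return est / n

# Same toy chain as before: E[Y] = tau/2 = 2.5.
phi = lambda j, x, u: (x + u[0]) % 1.0
g = lambda j, x: x
est = classical_rqmc(phi, g, tau=5, x0=0.0, n=2**12, d=1, seed=1)
```

Note how the net's dimension grows linearly with the horizon, which is why $s$ is often very large for Markov chains.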
Array-RQMC for Markov Chains
[Lécot, Tuffin, L'Ecuyer 2004, 2008]

Simulate $n$ chains in parallel. At each step, use an RQMC point set $P_n$ to advance all the chains by one step, while inducing global negative dependence across the chains.

Intuition: The empirical distribution of $S_{n,j} = \{X_{0,j}, \dots, X_{n-1,j}\}$ should be a more accurate approximation of the theoretical distribution of $X_j$, for each $j$, than with crude Monte Carlo. The discrepancy between these two distributions should be as small as possible.

Then we will have small variance for the (unbiased) estimators:
$$\mu_j = \mathbb{E}[g_j(X_j)] \approx \frac{1}{n}\sum_{i=0}^{n-1} g_j(X_{i,j}) \quad\text{and}\quad \mu = \mathbb{E}[Y] \approx \frac{1}{n}\sum_{i=0}^{n-1} Y_i.$$

How can we preserve the low discrepancy of $X_{0,j}, \dots, X_{n-1,j}$ as $j$ increases?
Can we quantify the variance improvement?
To simplify, suppose each $X_j$ is a uniform r.v. over $(0,1)^\ell$.

Select a discrepancy measure $D$ for the point set $S_{n,j} = \{X_{0,j}, \dots, X_{n-1,j}\}$ over $(0,1)^\ell$, and a corresponding measure of variation $V$, such that
$$\mathrm{Var}[\hat\mu_{\mathrm{rqmc},j,n}] = \mathbb{E}[(\hat\mu_{\mathrm{rqmc},j,n} - \mu_j)^2] \le \mathbb{E}[D^2(S_{n,j})]\, V^2(g_j).$$

If $D$ is defined via a reproducing kernel Hilbert space, then, for some random $\xi_j$ (that generally depends on $S_{n,j}$),
$$\mathbb{E}[D^2(S_{n,j})] = \mathrm{Var}\!\left[\frac{1}{n}\sum_{i=1}^{n} \xi_j(X_{i,j})\right] = \mathrm{Var}\!\left[\frac{1}{n}\sum_{i=1}^{n} (\xi_j \circ \varphi_j)(X_{i,j-1}, U_{i,j})\right] \le \mathbb{E}[D^{(2)2}(Q_n)] \cdot V^{(2)2}(\xi_j \circ \varphi_j)$$
for some other discrepancy $D^{(2)}$ over $(0,1)^{\ell+d}$, where $Q_n = \{(X_{0,j-1}, U_{0,j}), \dots, (X_{n-1,j-1}, U_{n-1,j})\}$.

Heuristic: Under appropriate conditions, we should have $V^{(2)}(\xi_j \circ \varphi_j) < \infty$ and $\mathbb{E}[D^{(2)2}(Q_n)] = O(n^{-\alpha+\epsilon})$ for some $\alpha \ge 1$.
In the points $(X_{i,j-1}, U_{i,j})$ of $Q_n$, the $U_{i,j}$ can be defined via some RQMC scheme, but the $X_{i,j-1}$ cannot be chosen; they are determined by the history of the chains.

The idea is to select a low-discrepancy point set
$$\tilde Q_n = \{(w_0, U_0), \dots, (w_{n-1}, U_{n-1})\},$$
where the $w_i \in [0,1)^\ell$ are fixed and the $U_i \in (0,1)^d$ are randomized, and then define a bijection between the states $X_{i,j-1}$ and the $w_i$ so that the $X_{i,j-1}$ are "close" to the $w_i$ (small discrepancy between the two sets).

The bijection is defined by a permutation $\pi_j$ of $S_{n,j}$.

State space in $\mathbb{R}^\ell$: essentially the same algorithm.
Array-RQMC algorithm

$X_{i,0} \leftarrow x_0$, for $i = 0, \dots, n-1$;
for $j = 1, 2, \dots, \max_i \tau_i$ do
    Randomize afresh $\{U_{0,j}, \dots, U_{n-1,j}\}$ in $\tilde Q_n$;
    $X_{i,j} = \varphi_j(X_{\pi_j(i),j-1}, U_{i,j})$, for $i = 0, \dots, n-1$;
    Compute the permutation $\pi_{j+1}$ (sort the states);
end for
Estimate $\mu$ by the average $\bar Y_n = \hat\mu_{\mathrm{rqmc},n}$.

Theorem: The average $\bar Y_n$ is an unbiased estimator of $\mu$.
Can estimate $\mathrm{Var}[\bar Y_n]$ by the empirical variance of $m$ indep. realizations.
Mapping chains to points

Multivariate sort:
Sort the states (chains) by first coordinate, into $n_1$ packets of size $n/n_1$.
Sort each packet by second coordinate, into $n_2$ packets of size $n/(n_1 n_2)$.
...
At the last level, sort each packet of size $n_\ell$ by the last coordinate.
Choice of $n_1, n_2, \dots, n_\ell$?

Generalization:
Define a sorting function $v : \mathcal{X} \to [0,1)^c$ and apply the multivariate sort (in $c$ dimensions) to the transformed points $v(X_{i,j})$.
Choice of $v$: Two states mapped to nearby values of $v$ should be approximately equivalent.
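The multivariate (split) sort described above can be sketched as a recursive routine returning the permutation $\pi_j$. This is an illustrative sketch, not the authors' code; the function name and interface are hypothetical.

```python
import numpy as np

def multivariate_sort(states, packets):
    """Permutation of the multivariate ('split') sort.

    states:  (n, ell) array of chain states.
    packets: (n_1, ..., n_ell) packet counts per level, with
             n_1 * n_2 * ... * n_ell == n.
    Sorts by coordinate 0 into n_1 packets, each packet by
    coordinate 1 into n_2 sub-packets, and so on.
    """
    n = states.shape[0]
    idx = np.arange(n)

    def recurse(sub, coord):
        if coord == len(packets) or len(sub) <= 1:
            return sub
        sub = sub[np.argsort(states[sub, coord], kind="stable")]
        parts = np.array_split(sub, packets[coord])
        return np.concatenate([recurse(p, coord + 1) for p in parts])

    return recurse(idx, 0)

# Example: 16 two-dimensional states, a (4,4) sort as in the figures below.
rng = np.random.default_rng(0)
pts = rng.random((16, 2))
perm = multivariate_sort(pts, (4, 4))
```

The first 4 entries of `perm` are the 4 states with the smallest first coordinate, internally ordered by their second coordinate.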
A (4,4) mapping
[Figure: states of the chains (left) and a Sobol' net in 2 dimensions with digital shift (right).]

A (4,4) mapping
[Figure: another realization of the states and of the digitally shifted Sobol' net.]

A (4,4) mapping
[Figure: states overlaid with the net points they are mapped to.]

A (16,1) mapping, sorting along first coordinate
[Figure: states paired with net points after sorting into 16 packets by the first coordinate.]

An (8,2) mapping
[Figure: states paired with net points under an (8,2) packet structure.]

A (4,4) mapping
[Figure: states paired with net points under a (4,4) packet structure.]

A (2,8) mapping
[Figure: states paired with net points under a (2,8) packet structure.]

A (1,16) mapping, sorting along second coordinate
[Figure: states paired with net points after sorting into 16 packets by the second coordinate.]
Dynamic programming for optimal stopping times

Suppose the stopping time $\tau$ is a decision determined by a stopping policy $\pi = (\nu_0, \nu_1, \dots, \nu_{T-1})$, where $\nu_j : \mathcal{X} \to \{\text{stop now}, \text{wait}\}$.
Suppose also that we must stop at or before step $T$.

Dynamic programming equations:
$$V_T(x) = g_T(x),$$
$$Q_j(x) = \mathbb{E}[V_{j+1}(X_{j+1}) \mid X_j = x], \quad \text{(continuation value)}$$
$$V_j(x) = \max[g_j(x), Q_j(x)], \quad \text{(optimal value)}$$
$$\nu_j^*(x) = \begin{cases} \text{stop now} & \text{if } g_j(x) \ge Q_j(x), \\ \text{wait} & \text{otherwise}, \end{cases} \quad \text{(optimal decision)}$$
for $j = T-1, \dots, 0$ and all $x \in \mathcal{X}$.
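On a finite state space the backward recursion above is a few lines of linear algebra. A minimal sketch with a hypothetical two-state chain; the names and rewards are illustrative only.

```python
import numpy as np

def optimal_stopping(P, g, T):
    """Backward induction for the DP equations on a finite state space.

    P: (K, K) one-step transition matrix (row-stochastic).
    g: (T+1, K) array, g[j, x] = reward for stopping at step j in state x.
    Returns V (optimal values) and stop (boolean policy nu_j*).
    """
    K = P.shape[0]
    V = np.empty((T + 1, K))
    stop = np.zeros((T + 1, K), dtype=bool)
    V[T] = g[T]                        # V_T(x) = g_T(x)
    stop[T] = True                     # must stop at or before T
    for j in range(T - 1, -1, -1):
        Q = P @ V[j + 1]               # continuation value Q_j(x)
        stop[j] = g[j] >= Q            # stop now iff g_j(x) >= Q_j(x)
        V[j] = np.maximum(g[j], Q)     # V_j(x) = max(g_j(x), Q_j(x))
    return V, stop

# Tiny hypothetical example: 2 states, reward 1 in state 1, horizon T = 3.
P = np.array([[0.5, 0.5], [0.2, 0.8]])
g = np.array([[0.0, 1.0]] * 4)
V, stop = optimal_stopping(P, g, T=3)
```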
Hard to solve when the state space is large and multidimensional.

Can approximate $Q_j$ with a small set of basis functions $\{\psi_k : \mathcal{X} \to \mathbb{R},\ 1 \le k \le m\}$:
$$\tilde Q_j(x) = \sum_{k=1}^{m} \beta_{j,k} \psi_k(x),$$
where $\beta_j = (\beta_{j,1}, \dots, \beta_{j,m})^{\mathsf{t}}$ can be determined by least-squares regression, using an approximation $W_{i,j}$ of $Q_j(x_{i,j})$ at a set of points $x_{i,j}$. We solve
$$\min_{\beta_j \in \mathbb{R}^m} \sum_{i=1}^{n} \left(\tilde Q_j(x_{i,j}) - W_{i,j+1}\right)^2.$$

A set of representative states $x_{i,j}$ at each step $j$ can be generated by Monte Carlo, RQMC, or array-RQMC.
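The least-squares step reduces to an ordinary linear regression against the basis values. A sketch using `numpy.linalg.lstsq`; the fitting function, basis, and synthetic data here are illustrative, not from the slides.

```python
import numpy as np

def fit_continuation(psi, states, W_next):
    """Least-squares beta_j for Q~_j(x) = sum_k beta_{j,k} psi_k(x).

    psi:    list of basis functions x -> float.
    states: (n,) array of representative 1-D states x_{i,j}.
    W_next: (n,) approximations W_{i,j+1} of the continuation value.
    """
    A = np.column_stack([np.vectorize(p)(states) for p in psi])  # design matrix
    beta, *_ = np.linalg.lstsq(A, W_next, rcond=None)
    return beta

# Example with the put-option basis (x - 101)^{k-1}, k = 1..5, on fake data:
psi = [lambda x, k=k: (x - 101.0) ** (k - 1) for k in range(1, 6)]
rng = np.random.default_rng(3)
x = 90 + 20 * rng.random(500)
W = np.maximum(0.0, 101.0 - x) + 0.1 * rng.standard_normal(500)
beta = fit_continuation(psi, x, W)
Q_tilde = lambda x: sum(b * p(x) for b, p in zip(beta, psi))
```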
Regression-based least-squares Monte Carlo

Tsitsiklis and Van Roy (2000) (TvR):
Simulate $n$ indep. trajectories of the chain $\{X_j,\ j = 0, \dots, T\}$, and let $X_{i,j}$ be the state for trajectory $i$ at step $j$;
$W_{i,T} \leftarrow g_T(X_{i,T})$, $i = 1, \dots, n$;
for $j = T-1, \dots, 0$ do
    Compute the vector $\beta_j$ that minimizes
    $$\sum_{i=1}^{n} \left(\sum_{k=1}^{m} \beta_{j,k}\psi_k(X_{i,j}) - W_{i,j+1}\right)^2;$$
    $W_{i,j} \leftarrow \max[g_j(X_{i,j}), \tilde Q_j(X_{i,j})]$, $i = 1, \dots, n$;
end for
return $\hat Q_0(x_0) = (W_{1,0} + \cdots + W_{n,0})/n$ as an estimate of $Q_0(x_0)$.

Longstaff and Schwartz (2001) (LSM): Define $W_{i,j}$ instead by
$$W_{i,j} = \begin{cases} g_j(X_{i,j}) & \text{if } g_j(X_{i,j}) \ge \tilde Q_j(X_{i,j}), \\ W_{i,j+1} & \text{otherwise}. \end{cases}$$
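The LSM variant can be sketched end to end on pre-simulated paths. This is a simplified illustration, not the experimental code behind the results: it regresses on all paths (a common simplification), stops the regression at $j = 1$ since all chains share the state $x_0$ at step 0, and uses the put-option setup of the next example as a hypothetical test case.

```python
import numpy as np

def lsm(paths, g, psi):
    """Longstaff-Schwartz (LSM) estimate on pre-simulated 1-D paths.

    paths: (n, T+1) array, paths[i, j] = X_{i,j}.
    g:     g(j, x) -> discounted payoff of stopping at step j (vectorized).
    psi:   list of basis functions for the regression.
    """
    n, T = paths.shape[0], paths.shape[1] - 1
    W = g(T, paths[:, T])                      # W_{i,T} = g_T(X_{i,T})
    for j in range(T - 1, 0, -1):
        X = paths[:, j]
        A = np.column_stack([p(X) for p in psi])
        beta, *_ = np.linalg.lstsq(A, W, rcond=None)
        Q = A @ beta                           # Q~_j(X_{i,j})
        ex = g(j, X)
        W = np.where(ex >= Q, ex, W)           # LSM update (TvR: max(ex, Q))
    return W.mean()                            # estimate of Q_0(x_0)

# Hypothetical usage: Bermudan put, GBM paths as in the next example.
rng = np.random.default_rng(5)
T, n, r, sig, dt = 16, 4000, 0.05, 0.08, 1 / 16
Z = rng.standard_normal((n, T))
logS = np.log(100) + np.cumsum((r - sig**2 / 2) * dt + sig * np.sqrt(dt) * Z, axis=1)
paths = np.hstack([np.full((n, 1), 100.0), np.exp(logS)])
g = lambda j, x: np.exp(-r * j * dt) * np.maximum(0.0, 101.0 - x)
psi = [lambda x, k=k: (x - 101.0) ** k for k in range(5)]
price = lsm(paths, g, psi)
```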
Example: a simple put option

The asset price obeys a GBM $\{S(t),\ t \ge 0\}$ with drift (interest rate) $\mu = 0.05$, volatility $\sigma = 0.08$, and initial value $S(0) = 100$.
For the American version, the exercise dates are $t_j = j/16$ for $j = 1, \dots, 16$.
Payoff at $t_j$: $g_j(S(t_j)) = e^{-0.05 t_j} \max(0, K - S(t_j))$, where $K = 101$.
European version: can exercise only at $t_{16} = 1$.

One-dimensional state $X_j = S(t_j)$, so sorting for array-RQMC is simple.
Basis functions for regression-based MC: polynomials $\psi_k(x) = (x - 101)^{k-1}$ for $k = 1, \dots, 5$.

For RQMC and array-RQMC, we use Sobol' nets with a linear scrambling and a random digital shift, for all the results reported here. Results are very similar for a randomly-shifted lattice rule + baker's transformation.
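The one-step transition $\varphi_j$ and payoff $g_j$ for this example can be written directly from the formulas above. A minimal sketch, assuming SciPy's `norm.ppf` for $\Phi^{-1}$; the function names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Parameters of the put example: rate, volatility, strike, step length.
r, sigma, K, dt = 0.05, 0.08, 101.0, 1 / 16

def phi(j, s, u):
    """X_j = phi_j(X_{j-1}, U_j): advance the GBM price by one period."""
    z = norm.ppf(u)                    # Phi^{-1}(U_j), U_j ~ U(0,1)
    return s * np.exp((r - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * z)

def payoff(j, s):
    """g_j(s) = e^{-r t_j} max(0, K - s), with t_j = j/16."""
    return np.exp(-r * j * dt) * np.maximum(0.0, K - s)

s1 = phi(1, 100.0, 0.5)                # median step: z = 0
```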
European version of put option.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC, RQMC with PCA, Brownian bridge (BB), and sequential sampling, and standard MC, with an $n^{-2}$ reference slope.]
Histogram of states at step 16.
[Figure: histogram of $S_{16}$ for array-RQMC with $n = 2^{14}$ (blue) and for MC (red); theoretical distribution shown as black dots.]
Histogram after transformation to uniforms (applying the cdf).
[Figure: histogram of states for array-RQMC with $n = 2^{14}$ (blue) and for MC (red); theoretical distribution is uniform (black dots).]
American put option: estimation for a fixed policy.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for TvR and LSM, each with array-RQMC, RQMC bridge, and standard MC, with an $n^{-1}$ reference slope.]
[Figure: American put option, $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC, RQMC PCA, RQMC bridge, RQMC sequential, and standard MC.]
American put option: out-of-sample value for policy obtained from LSM.
[Figure: expected out-of-sample value vs. $\log_2 n$ for array-RQMC, RQMC PCA, and standard MC.]
American put option: out-of-sample value for policy obtained from TvR.
[Figure: expected out-of-sample value vs. $\log_2 n$ for array-RQMC, RQMC PCA, and standard MC.]
Example: Asian Option

Given observation times $t_1, t_2, \dots, t_s$, suppose
$$S(t_j) = S(t_{j-1}) \exp\!\left[(r - \sigma^2/2)(t_j - t_{j-1}) + \sigma (t_j - t_{j-1})^{1/2} \Phi^{-1}(U_j)\right],$$
where $U_j \sim U[0,1)$ and $S(t_0) = s_0$ is fixed.

The state is $X_j = (S(t_j), \bar S_j)$, where $\bar S_j = \frac{1}{j} \sum_{i=1}^{j} S(t_i)$.

Transition:
$$(S(t_j), \bar S_j) = \varphi(S(t_{j-1}), \bar S_{j-1}, U_j) = \left(S(t_j),\ \frac{(j-1)\bar S_{j-1} + S(t_j)}{j}\right).$$

Payoff at step $j$ is $\max(0, \bar S_j - K)$.

We use the two-dimensional sort at each step: we first sort into $n_1$ packets based on $S(t_j)$, then sort the packets based on $\bar S_j$.
GBM with parameters: $S(0) = 100$, $K = 100$, $r = 0.05$, $\sigma = 0.15$, $t_j = j/52$ for $j = 0, \dots, s = 13$.

Basis functions to approximate the continuation value: polynomials of the form $g(S, \bar S) = (S - 100)^k (\bar S - 100)^m$, for $k, m = 0, \dots, 4$ and $km \le 4$. Also broken polynomials $\max(0, S - 100)^k$ for $k = 1, 2$, and $\max(0, S - 100)(\bar S - 100)$.
European version of Asian call option.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC (split sort), RQMC PCA, RQMC bridge, RQMC sequential, and standard MC, with an $n^{-2}$ reference slope.]
European version, sorting strategies for array-RQMC.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC with $n_1 = n^{2/3}$, $n_1 = n^{1/3}$, split sort, sort on $\bar S$, and sort on $S$, with $n^{-1}$ and $n^{-2}$ reference slopes.]
American-style Asian option with a fixed policy.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC (split sort), RQMC PCA, RQMC bridge, RQMC sequential, and standard MC.]
Fixed policy, choices of array-RQMC sorting.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC with $n_1 = n^{2/3}$, $n_1 = n^{1/3}$, split sort, sort on $\bar S$, and sort on $S$.]
Out-of-sample value of policy obtained from LSM.
[Figure: expected out-of-sample value vs. $\log_2 n$ for array-RQMC (split sort), RQMC PCA, and standard MC.]
Out-of-sample value of policy obtained from TvR.
[Figure: expected out-of-sample value vs. $\log_2 n$ for array-RQMC (split sort), RQMC PCA, and standard MC.]
Example: call on the maximum of 5 assets

Five independent asset prices obey a GBM with $s_0 = 100$, $r = 0.05$, $\sigma = 0.2$.
The assets pay a dividend at rate 0.10, which means that the effective risk-free rate can be taken as $r' = 0.05 - 0.10 = -0.05$.
Exercise dates are $t_j = j/3$ for $j = 1, \dots, 9$.
State at $t_j$ is $X_j = (S_{j,1}, \dots, S_{j,5})$.

Basis functions for regression: 19 polynomials in the $S_{j,(\ell)} - 100$, where $S_{j,(1)}, \dots, S_{j,(5)}$ are the asset prices sorted in increasing order.

For array-RQMC, we sort on the $m$ largest asset prices. At each step, we generate the next value first for the maximum, then for the second largest, and so on.
European version.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC (split sort on 3 max), RQMC PCA, RQMC bridge, RQMC sequential, and standard MC, with an $n^{-2}$ reference slope.]
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC with split sort on 5, 4, 3, and 2 max, and sort on 1 max, with $n^{-1}$ and $n^{-2}$ reference slopes.]
American version, fixed policy.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC (split sort on 3 max), RQMC PCA, RQMC bridge, RQMC sequential, and standard MC.]
Fixed policy.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ vs. $\log_2 n$ for array-RQMC with split sort on 5, 4, 3, and 2 max, and sort on 1 max.]
Out-of-sample value of policy obtained from LSM.
[Figure: expected out-of-sample value vs. $\log_2 n$ for array-RQMC (split sort on 3 max), RQMC PCA, RQMC bridge, RQMC sequential, and standard MC.]
Out-of-sample value of policy obtained from TvR.
[Figure: expected out-of-sample value vs. $\log_2 n$ for array-RQMC (split sort on 3 max), RQMC PCA, RQMC bridge, RQMC sequential, and standard MC.]