The Array-RQMC method: Review of convergence results


(1) The Array-RQMC Method: Review of Convergence Results
Pierre L'Ecuyer, Christian Lécot, Bruno Tuffin
DIRO, Université de Montréal, Canada; LAMA, Université de Savoie, France; Inria–Rennes, France.

(2)–(3) Monte Carlo for Markov Chains
Setting: a Markov chain with state space $\mathcal{X} \subseteq \mathbb{R}^\ell$ evolves as
$$X_0 = x_0, \qquad X_j = \varphi_j(X_{j-1}, U_j), \quad j \ge 1,$$
where the $U_j$ are i.i.d. uniform r.v.'s over $(0,1)^d$. We want to estimate
$$\mu = \mathbb{E}[Y], \qquad \text{where } Y = \sum_{j=1}^{\tau} g_j(X_j),$$
for some fixed time horizon $\tau$.

Ordinary MC: for $i = 0, \dots, n-1$, generate $X_{i,j} = \varphi_j(X_{i,j-1}, U_{i,j})$, $j = 1, \dots, \tau$, where the $U_{i,j}$ are i.i.d. $U(0,1)^d$. Estimate $\mu$ by
$$\hat\mu_n = \frac{1}{n} \sum_{i=0}^{n-1} \sum_{j=1}^{\tau} g_j(X_{i,j}) = \frac{1}{n} \sum_{i=0}^{n-1} Y_i.$$
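Below is a minimal sketch of this ordinary MC estimator in Python; the callables `phi(j, x, u)` and `g(j, x)` stand for the transition $\varphi_j$ and the cost $g_j$ and are assumptions of the sketch, not part of the slides.

```python
import numpy as np

def ordinary_mc(phi, g, x0, tau, n, d, rng=None):
    """Plain Monte Carlo estimate of mu = E[sum_{j=1}^tau g_j(X_j)] over n chains.

    phi(j, x, u): transition X_j = phi_j(X_{j-1}, U_j), with u a vector of d uniforms.
    g(j, x):      cost g_j(x) collected at step j.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n):                 # n independent chains
        x = x0
        for j in range(1, tau + 1):
            u = rng.random(d)          # U_{i,j} ~ U(0,1)^d
            x = phi(j, x, u)
            total += g(j, x)           # accumulate Y_i over the horizon
    return total / n                   # hat{mu}_n
```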

(4) Example: Asian Call Option
Given observation times $t_1, t_2, \dots, t_\tau$, suppose
$$S(t_j) = S(t_{j-1}) \exp\bigl[(r - \sigma^2/2)(t_j - t_{j-1}) + \sigma (t_j - t_{j-1})^{1/2} \Phi^{-1}(U_j)\bigr],$$
where $U_j \sim U[0,1)$ and $S(t_0) = s_0$ is fixed.
Running average: $\bar S_j = \frac{1}{j} \sum_{i=1}^{j} S(t_i)$.
State: $X_j = (S(t_j), \bar S_j)$. Transition:
$$X_j = (S(t_j), \bar S_j) = \varphi_j(S(t_{j-1}), \bar S_{j-1}, U_j) = \Bigl(S(t_j),\; \frac{(j-1)\bar S_{j-1} + S(t_j)}{j}\Bigr).$$
Payoff at step $j = \tau$: $Y = g_\tau(X_\tau) = \max(0,\, \bar S_\tau - K)$.
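A hedged Python sketch of this transition and payoff; the function names are mine, and the default parameter values are taken from the Asian-option example given later in the slides ($r = 0.05$, $\sigma = 0.15$, $t_j = j/52$, $K = 100$).

```python
import numpy as np
from scipy.stats import norm

def asian_step(j, state, u, r=0.05, sigma=0.15, dt=1.0 / 52.0):
    """One transition of the Asian-option chain; state = (S(t_j), running average)."""
    s_prev, avg_prev = state
    z = norm.ppf(u)                                          # Phi^{-1}(U_j)
    s = s_prev * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    avg = ((j - 1) * avg_prev + s) / j                       # running average of S(t_1..t_j)
    return (s, avg)

def asian_payoff(state, K=100.0):
    """Payoff g_tau(X_tau) = max(0, bar{S}_tau - K) at the horizon."""
    return max(0.0, state[1] - K)
```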

(5) Plenty of other applications:
- Finance
- Queueing systems
- Inventory, distribution, and logistics systems
- Reliability models
- MCMC in Bayesian statistics
- Etc.

(6) Classical RQMC for Markov Chains
Put $V_i = (U_{i,1}, \dots, U_{i,\tau}) \in (0,1)^s$, where $s = d\tau$. Estimate $\mu$ by
$$\hat\mu_{\mathrm{rqmc},n} = \frac{1}{n} \sum_{i=0}^{n-1} \sum_{j=1}^{\tau} g_j(X_{i,j}),$$
where $P_n = \{V_0, \dots, V_{n-1}\} \subset (0,1)^s$ satisfies:
(a) each point $V_i$ has the uniform distribution over $(0,1)^s$;
(b) $P_n$ covers $(0,1)^s$ very evenly (i.e., has low discrepancy).
The dimension $s$ is often very large!
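For concreteness, here is a sketch that applies classical RQMC to the Asian option above ($d = 1$, $s = \tau = 13$), using a scrambled Sobol' net from SciPy's `scipy.stats.qmc`. The slides use Sobol' nets with a linear scramble plus a random digital shift; SciPy's `scramble=True` applies a different (Owen-type) scrambling, so this only approximates that setup.

```python
import numpy as np
from scipy.stats import qmc, norm

def classical_rqmc_asian(m=14, tau=13, s0=100.0, K=100.0, r=0.05, sigma=0.15, seed=1):
    """Classical RQMC: one scrambled Sobol' point in (0,1)^tau drives a whole path."""
    n, dt = 2 ** m, 1.0 / 52.0
    V = qmc.Sobol(d=tau, scramble=True, seed=seed).random_base2(m)   # n x tau points
    V = np.clip(V, 1e-12, 1.0 - 1e-12)                               # guard for norm.ppf
    Z = norm.ppf(V)                                                  # normal increments
    logS = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
    Sbar = np.exp(logS).mean(axis=1)                                 # bar{S}_tau per path
    return np.maximum(0.0, Sbar - K).mean()                          # estimate of mu
```

Repeating the call with independent seeds gives i.i.d. replicates from which the variance of the estimator can be estimated, as is done for the plots later in the slides.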

(7)–(8) Array-RQMC for Markov Chains
L'Ecuyer, Lécot, Tuffin, et al. [2004, 2006, 2008, etc.]
Simulate an "array" of $n$ chains in "parallel." At each step, use an RQMC point set $P_n$ to advance all the chains by one step, while inducing global negative dependence across the chains.
Goal: a small discrepancy (or "distance") between the empirical distribution of $S_{n,j} = \{X_{0,j}, \dots, X_{n-1,j}\}$ and the theoretical distribution of $X_j$, for each $j$.
If we succeed, these (unbiased) estimators will have small variance:
$$\mu_j = \mathbb{E}[g_j(X_j)] \approx \frac{1}{n} \sum_{i=0}^{n-1} g_j(X_{i,j}) \qquad \text{and} \qquad \mu = \mathbb{E}[Y] \approx \frac{1}{n} \sum_{i=0}^{n-1} Y_i.$$
How can we preserve the low discrepancy of $S_{n,j}$ as $j$ increases? Can we quantify the variance improvement? What is the convergence rate in $n$?

(9) Some generalizations
- L'Ecuyer, Lécot, and Tuffin [2008]: $\tau$ can be a random stopping time with respect to the filtration generated by $\{(j, X_j),\, j \ge 0\}$.
- L'Ecuyer, Demers, and Tuffin [2006, 2007]: combination with splitting techniques (multilevel and without levels), with importance sampling, and with weight windows. Covers particle filters.
- L'Ecuyer and Sanvido [2010]: combination with coupling from the past, for exact sampling.
- Dion and L'Ecuyer [2010]: combination with approximate dynamic programming, for optimal stopping problems.
- Gerber and Chopin [2014]: sequential QMC (yesterday's talk).

(10) Convergence results and applications
- L'Ecuyer, Lécot, and Tuffin [2006, 2008]: special cases: convergence at the MC rate, one-dimensional setting, stratification, etc.
- Lécot and Tuffin [2004]: deterministic, one-dimensional, discrete state space.
- El Haddad, Lécot, L'Ecuyer [2008, 2010]: deterministic, multidimensional.
- Fakhererredine, El Haddad, Lécot [2012, 2013, 2014]: LHS, stratification, Sudoku sampling, ...
- Wächter and Keller [2008]: applications in computer graphics.

(11) Other QMC methods for Markov chains
- Owen, Tribble, Chen, Dick, Matsumoto, Nishimura, ... [2004–2010]: Markov chain quasi-Monte Carlo. Interested in the steady-state distribution; dependence is introduced between the steps $j$ so that a single chain visits the state space very uniformly.
- Propp [2012] and earlier: rotor-router sampling.

(12)–(13) To simplify, suppose each $X_j$ is a uniform r.v. over $(0,1)^\ell$.
Select a discrepancy measure $D$ for the point set $S_{n,j} = \{X_{0,j}, \dots, X_{n-1,j}\}$ over $(0,1)^\ell$, and a corresponding measure of variation $V$, such that
$$\mathrm{Var}[\hat\mu_{\mathrm{rqmc},j,n}] = \mathbb{E}[(\hat\mu_{\mathrm{rqmc},j,n} - \mu_j)^2] \le \mathbb{E}[D^2(S_{n,j})]\, V^2(g_j).$$
If $D$ is defined via a reproducing kernel Hilbert space, then, for some random $\xi_j$ (which generally depends on $S_{n,j}$),
$$\mathbb{E}[D^2(S_{n,j})] = \mathrm{Var}\Bigl[\frac{1}{n} \sum_{i=0}^{n-1} \xi_j(X_{i,j})\Bigr] = \mathrm{Var}\Bigl[\frac{1}{n} \sum_{i=0}^{n-1} (\xi_j \circ \varphi_j)(X_{i,j-1}, U_{i,j})\Bigr] \le \mathbb{E}[D_{(2)}^2(Q_n)] \cdot V_{(2)}^2(\xi_j \circ \varphi_j)$$
for some other discrepancy $D_{(2)}$ over $(0,1)^{\ell+d}$, where $Q_n = \{(X_{0,j-1}, U_{0,j}), \dots, (X_{n-1,j-1}, U_{n-1,j})\}$.
Goal: under appropriate conditions, obtain $V_{(2)}(\xi_j \circ \varphi_j) < \infty$ and $\mathbb{E}[D_{(2)}^2(Q_n)] = O(n^{-\alpha+\epsilon})$ for some $\alpha \ge 1$ (and any $\epsilon > 0$).

(14) Discrepancy bounds by induction?
Let $\ell = d = 1$, $\mathcal{X} = [0,1]$, and $X_j \sim U(0,1)$. $L_2$-star discrepancy:
$$D^2(x_0, \dots, x_{n-1}) = \frac{1}{12 n^2} + \frac{1}{n} \sum_{i=0}^{n-1} (w_i - x_i)^2,$$
where $w_i = (i + 1/2)/n$ and $0 \le x_0 \le x_1 \le \cdots \le x_{n-1}$. We have
$$\xi_j(x) = -\frac{1}{n} \sum_{i=0}^{n-1} \bigl[\mu(Y_i) + B_2((x - Y_i) \bmod 1) + B_1(x) B_1(Y_i)\bigr],$$
where $B_1(x) = x - 1/2$ and $B_2(x) = x^2 - x + 1/6$.
Problem: the two-dimensional function $\xi_j \circ \varphi_j$ has a mixed derivative that is not square integrable, so it seems to have infinite variation. Otherwise, we would have a proof that $\mathbb{E}[D^2(S_{n,j})] = O(n^{-2})$. Help!

(15)–(16) In the points $(X_{i,j-1}, U_{i,j})$ of $Q_n$, the $U_{i,j}$ can be defined via some RQMC scheme, but the $X_{i,j-1}$ cannot be chosen; they are determined by the history of the chains.
The idea is to select a low-discrepancy point set
$$\tilde Q_n = \{(w_0, U_0), \dots, (w_{n-1}, U_{n-1})\},$$
where the $w_i \in [0,1)^\ell$ are fixed and the $U_i \in (0,1)^d$ are randomized, and then define a bijection between the states $X_{i,j-1}$ and the $w_i$ so that the $X_{i,j-1}$ are "close" to the $w_i$ (small discrepancy between the two sets).
Example: if $\ell = 1$, one can take $w_i = (i + 0.5)/n$; the bijection is then defined by a permutation $\pi_j$ of $S_{n,j}$.
For a state space in $\mathbb{R}^\ell$: essentially the same algorithm.

(17)–(18) Array-RQMC algorithm

$X_{i,0} \leftarrow x_0$, for $i = 0, \dots, n-1$;
for $j = 1, 2, \dots, \tau$ do
    randomize afresh $\{U_{0,j}, \dots, U_{n-1,j}\}$ in $\tilde Q_n$;
    $X_{i,j} = \varphi_j(X_{\pi_j(i),\, j-1}, U_{i,j})$, for $i = 0, \dots, n-1$;
    compute the permutation $\pi_{j+1}$ (sort the states);
end for
Estimate $\mu$ by the average $\bar Y_n = \hat\mu_{\mathrm{rqmc},n}$.

Theorem: the average $\bar Y_n$ is an unbiased estimator of $\mu$.
One can estimate $\mathrm{Var}[\bar Y_n]$ by the empirical variance of $m$ independent realizations.
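A minimal executable sketch of this algorithm for a one-dimensional state with $d = 1$ (the setting $\ell = d = 1$ analyzed below), with scrambled Sobol' nets from `scipy.stats.qmc` standing in for the point sets $\tilde Q_n$ of the slides; the interface (`phi` and `g` vectorized over NumPy arrays) is an assumption of the sketch.

```python
import numpy as np
from scipy.stats import qmc

def array_rqmc(phi, g, x0, tau, n_pow2=14, seed=0):
    """Array-RQMC sketch for ell = d = 1; phi(j, x, u) and g(j, x) act on arrays."""
    n = 2 ** n_pow2
    rng = np.random.default_rng(seed)
    x = np.full(n, x0, dtype=float)                   # states of the n chains
    y = np.zeros(n)                                   # accumulated costs Y_i
    for j in range(1, tau + 1):
        # Fresh 2D randomized net: the first coordinate plays the role of w_i.
        pts = qmc.Sobol(d=2, scramble=True, seed=rng.integers(2**32)).random_base2(n_pow2)
        pts = pts[np.argsort(pts[:, 0])]              # order the points by w-coordinate
        u = np.clip(pts[:, 1], 1e-12, 1.0 - 1e-12)    # their U-coordinates
        order = np.argsort(x)                         # permutation pi_j: sort the states
        x[order] = phi(j, x[order], u)                # i-th smallest state gets i-th point
        y += g(j, x)                                  # accumulate g_j for each chain
    return y.mean()                                   # bar{Y}_n, unbiased for mu
```

Here the i-th smallest state at each step is driven by the point whose first coordinate is i-th smallest, i.e., the sort realizes the permutation $\pi_j$. For a multidimensional state such as the Asian option, the states would instead be ordered by one of the mappings discussed next.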

(19) Example: Asian Call Option
$S(0) = 100$, $K = 100$, $r = 0.05$, $\sigma = 0.15$, $t_j = j/52$, $j = 0, \dots, \tau = 13$.
RQMC points: Sobol' nets with a linear scrambling + random digital shift, for all the results reported here. Similar results for a randomly-shifted lattice + baker's transformation.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ versus $\log_2 n$ for crude MC, classical (sequential) RQMC, and array-RQMC with the split sort, with a reference line of slope $-2$.]

(20)–(22) Mapping chains to points
One possibility: a multivariate sort. Sort the states (chains) by the first coordinate, into $n_1$ packets of size $n/n_1$; sort each packet by the second coordinate, into $n_2$ packets of size $n/(n_1 n_2)$; ...; at the last level, sort each packet of size $n_\ell$ by the last coordinate. Choice of $n_1, n_2, \dots, n_\ell$? (A code sketch of this sort follows below.)
For large $\ell$: define a transformation $v : \mathcal{X} \to [0,1)^c$ and do a multivariate sort (in $c < \ell$ dimensions) of the points $v(X_{i,j})$. Choice of $v$: states mapped to nearby values of $v$ should be nearly equivalent.
For $c = 1$, $\mathcal{X}$ is mapped to $[0,1)$, which leads to a one-dimensional sort. The mapping $v$ can be based on a space-filling curve: Z-curve, Hilbert curve, etc. See Wächter and Keller [2008], Gerber and Chopin [2014].
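A sketch of this multivariate (packet) sort, as referenced above; the function name and interface are mine.

```python
import numpy as np

def multivariate_sort(states, split_counts):
    """Order the rows of `states` (an n x ell array) by the recursive packet sort.

    split_counts = (n_1, ..., n_{ell-1}): after sorting a block on coordinate k, it is
    split into split_counts[k] packets; the final packets are sorted on the last
    coordinate.  The split sort uses equal split counts at every level.
    """
    def sort_block(idx, coord):
        idx = idx[np.argsort(states[idx, coord], kind="stable")]  # sort on this coordinate
        if coord >= states.shape[1] - 1 or coord >= len(split_counts):
            return idx                                            # last level: keep sorted
        packets = np.array_split(idx, split_counts[coord])
        return np.concatenate([sort_block(p, coord + 1) for p in packets])

    return sort_block(np.arange(len(states)), 0)
```

For example, with $n = 16$ two-dimensional states, `multivariate_sort(states, (4,))` yields a (4,4) mapping as illustrated on the following slides; the chains, in that order, are then matched position by position with the RQMC points ordered by the same sort.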

(23)–(30) [Figures: the states of $n = 16$ chains and a two-dimensional Sobol' net with a digital shift, matched under different two-dimensional sorts: a (4,4) mapping, a (16,1) mapping (sorting along the first coordinate only), an (8,2) mapping, a (4,4) mapping, a (2,8) mapping, and a (1,16) mapping (sorting along the second coordinate only). The scatter plots are not reproduced.]

(31) Sorting strategies for array-RQMC
State-to-point mapping via a two-dimensional sort: sort into $n_1$ packets based on $S(t_j)$, then sort the packets based on $\bar S_j$. Split sort: $n_1 = n_2$.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ versus $\log_2 n$ for array-RQMC with the sort on $\bar S$ only, the sort on $S$ only, $n_1 = n^{2/3}$, $n_1 = n^{1/3}$, and the split sort, with reference lines of slopes $-1$ and $-2$.]

(32) Artificial one-dimensional example: a simple put option
GBM $\{S(t),\, t \ge 0\}$ with drift $\mu = 0.05$, volatility $\sigma = 0.08$, $S(0) = 100$. Generate $X_j = S(t_j)$ for $t_j = j/16$, $j = 1, \dots, \tau = 16$, sequentially. Payoff at $t_{16} = 1$: $Y = g_\tau(S(1)) = e^{-0.05} \max(0,\, 101 - S(1))$.
[Figure: $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{RQMC},n}]$ versus $\log_2 n$ for crude MC, sequential (classical) RQMC, and array-RQMC, with a reference line of slope $-2$.]

(33) Histogram of states at step 16
States for array-RQMC with $n = 2^{14}$ (red) and for MC (blue); the theoretical distribution is shown as black dots.
[Histogram over roughly $[90, 120]$; not reproduced.]

(34) Histogram after transformation to uniforms (applying the cdf)
States for array-RQMC with $n = 2^{14}$ (red) and for MC (blue); the theoretical distribution is uniform (black dots).
[Histogram over $[0, 1]$; not reproduced.]

(35) Example
Let $Y = \theta U + (1-\theta) V$, where $U, V$ are independent $U(0,1)$ and $\theta \in [0,1)$. This $Y$ has cdf $G_\theta$. The Markov chain is defined by
$$X_0 = U_0; \qquad X_j = \varphi_j(X_{j-1}, U_j) = G_\theta(\theta X_{j-1} + (1-\theta) U_j), \quad j \ge 1,$$
where $U_j \sim U(0,1)$. Then $X_j \sim U(0,1)$. Define $g_j(X_j) = X_j$.
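A sketch of this chain in Python. The closed-form $G_\theta$ below is the cdf of the sum of two scaled independent uniforms (a trapezoidal law); this piecewise expression is my own derivation and is not given on the slides.

```python
import numpy as np

def G(y, theta):
    """cdf of Y = theta*U + (1-theta)*V with U, V independent U(0,1) (trapezoidal law)."""
    a, b = min(theta, 1.0 - theta), max(theta, 1.0 - theta)
    y = np.asarray(y, dtype=float)
    if a == 0.0:                                   # theta = 0: Y is just U(0,1)
        return y
    return np.where(y <= a, y**2 / (2 * a * b),
           np.where(y <= b, a / (2 * b) + (y - a) / b,
                    1.0 - (1.0 - y)**2 / (2 * a * b)))

def theta_chain_step(j, x, u, theta=0.9):
    """Transition X_j = G_theta(theta*X_{j-1} + (1-theta)*U_j); keeps X_j ~ U(0,1)."""
    return G(theta * x + (1.0 - theta) * u, theta)
```

This transition can be plugged into the `array_rqmc` sketch above with `g = lambda j, x: x`; note that the slide starts the chain at $X_0 = U_0$ rather than at a fixed $x_0$, a small difference from that sketch.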

(36) $\log D_j$ as a function of $j$, for $n = 4093 \approx 2^{12}$
[Figure: the $L_2$-star discrepancy $D_j$ of the states (in units of $10^{-5}$) as a function of the step $j$, up to $j = 100$, for $\theta = 0.1$, $0.5$, and $0.9$.]

(37) $\log_2 \mathrm{Var}[\hat\mu_{\mathrm{rqmc},j,n}]$ as a function of $\log_2 n$
[Figure: curves for ($\theta = 0.5$, 20 steps), ($\theta = 0.9$, 20 steps), and ($\theta = 0.9$, 100 steps), with a reference line of slope $-2$.]

(38) Convergence results and proofs
L'Ecuyer, Lécot, Tuffin [2008] + some extensions.
Simple case: suppose $\ell = d = 1$, $\mathcal{X} = [0,1]$, and $X_j \sim U(0,1)$. Define
$$\Delta_j = \sup_{x \in \mathcal{X}} |\hat F_j(x) - F_j(x)| \quad \text{(discrepancy of the states, where $\hat F_j$ is their empirical cdf and $F_j$ is the cdf of $X_j$)},$$
$$V(g_j) = \int_0^1 |dg_j(x)| \quad \text{(total variation of $g_j$)}.$$
Theorem: $\bigl|\bar Y_{n,j} - \mathbb{E}[g_j(X_j)]\bigr| \le \Delta_j\, V(g_j)$.

(39)–(40) Convergence results and proofs
Assumption 1. $\varphi_j(x, u)$ is non-decreasing in $u$; that is, we use inversion to generate the next state from the cdf $F_j(z \mid \cdot\,) = P[X_j \le z \mid X_{j-1} = \cdot\,]$. Let
$$\Lambda_j = \sup_{0 \le z \le 1} V(F_j(z \mid \cdot\,)).$$
Assumption 2. Each square of a $\sqrt{n} \times \sqrt{n}$ grid contains exactly one RQMC point.
Proposition (worst-case error). Under Assumptions 1 and 2,
$$\Delta_j \le n^{-1/2} \sum_{k=1}^{j} (\Lambda_k + 1) \prod_{i=k+1}^{j} \Lambda_i.$$
Corollary. If $\Lambda_j \le \rho < 1$ for all $j$, then
$$\Delta_j \le \frac{1+\rho}{1-\rho}\, n^{-1/2}.$$
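The corollary follows from the proposition by a geometric-series bound; this short step is not spelled out on the slides:
$$\Delta_j \le n^{-1/2} \sum_{k=1}^{j} (\Lambda_k + 1) \prod_{i=k+1}^{j} \Lambda_i \le n^{-1/2}(1+\rho) \sum_{k=1}^{j} \rho^{\,j-k} \le n^{-1/2}(1+\rho) \sum_{m=0}^{\infty} \rho^{m} = \frac{1+\rho}{1-\rho}\, n^{-1/2}.$$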

(41)–(44) Let $\varphi_j(x, u)$ be non-decreasing in $x$ and $u$, and fix $z$.
[Figure: the unit square, with $X_{j-1} = x$ on the horizontal axis and $U = u$ on the vertical axis. The curve $u = F_j(z \mid x)$ (equivalently, $\varphi_j(x, u) = z$) separates the blue region $\{(x,u) : \varphi_j(x,u) \le z\}$ from the rest; its variation is $V(F_j(z \mid \cdot\,)) = \Lambda_j = \rho$. The states $X_{i,j-1}$ and the points $w_i = (i + 0.5)/n$ are marked along the horizontal axis.]
$F_j(z) = P[X_j \le z]$ = size of the blue area.
$\tilde F_j(z) = P[X_j \le z \mid X_{j-1} \sim \hat F_{j-1}] = \frac{1}{n} \sum_{i=0}^{n-1} F_j(z \mid X_{i,j-1})$ = area of the histogram (columns of width $1/n$ obtained by evaluating $F_j(z \mid \cdot\,)$ at the realized states).
$\hat F_j(z)$ = fraction of the points that fall in the histogram.
$\Delta_j = \sup_{0 \le z \le 1} |\hat F_j(z) - F_j(z)|$.

(45)–(46) [Figure: the same unit square, now partitioned into a $\sqrt{n} \times \sqrt{n}$ grid of small squares crossed by the histogram boundary.]
The $\sqrt{n} \times \sqrt{n}$ squares form $2\sqrt{n} - 1$ diagonal strings of squares, and the boundary crosses at most one square in each string. So at most $2\sqrt{n} - 1$ squares out of $n$ may contribute to $|\hat F_j(z) - \tilde F_j(z)|$ (to the error, or to the variance), and
$$\mathrm{Var}[\hat F_j(z) - \tilde F_j(z)] \le \frac{2\sqrt{n} - 1}{4 n^2} \le \frac{n^{-3/2}}{2}.$$

(47) Variance bound for stratified sampling
Assumption 3. Assumption 2 (one point per square) + the second coordinate of each point is uniformly distributed in its square, and these coordinates are independent or have negative covariance.
Proposition. Under Assumption 3,
$$\mathrm{Var}[\bar Y_{n,j}] \le \frac{1}{4} \sum_{k=1}^{j} (\Lambda_k + 1) \Bigl(\prod_{i=k+1}^{j} \Lambda_i^2\Bigr) V^2(g_j)\, n^{-3/2}.$$
Corollary. If all $\Lambda_j \le \rho < 1$, then
$$\mathrm{Var}[\bar Y_{n,j}] \le \frac{1+\rho}{4(1-\rho^2)}\, V^2(g_j)\, n^{-3/2}.$$
This also works with RQMC if we can show that, for any pair of small squares, the indicators that the RQMC points of those two squares fall in the histogram do not have a positive covariance.

(48) The case $\ell = 1$ and $d > 1$
The RQMC points are now in $d + 1$ dimensions, and the unit cube is partitioned into $n = k^{d+1}$ subcubes.
Assumption 4. Assumption 2 (one point per subcube) + the randomized parts of the points are pairwise independent within their subcubes.
Proposition. Under Assumption 4, if $\varphi_j$ is monotone non-decreasing,
$$\mathrm{Var}[\bar Y_{n,j}] = O(n^{-1 - 1/(d+1)}).$$
Consider the diagonal string of subcubes from $(0, \dots, 0)$ to $(1, \dots, 1)$ and all parallel diagonal strings. There are fewer than $(d+1) k^d$ of them, and the histogram boundary can cross at most one subcube in each. Then
$$\mathrm{Var}[\hat F_j(z) - \tilde F_j(z)] < \frac{(d+1) k^d}{4 n^2} = \frac{d+1}{4}\, n^{-1 - 1/(d+1)}.$$

(49) Conclusion
Empirically, the variance converges as $O(n^{-2})$ in some examples, even for a large number of steps.
We have convergence proofs for special cases, but not yet at the $O(n^{-2})$ rate.

