C. The Posterior Cramér-Rao Bound
To study the efficiency of an estimation method, it is of great interest to compute the variance bounds on the estimation errors and to compare them to the lowest bounds corresponding to the optimal estimator. For time-invariant statistical models, a commonly used lower bound is the Cramér-Rao bound (CRB), given by the inverse of the Fisher information matrix. In the time-varying context considered here, a lower bound analogous to the CRB for random parameters has been derived in [33]; this bound is usually referred to as the Van Trees version of the CRB, or posterior CRB (PCRB) [34].
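In generic notation (a sketch of the usual Van Trees statement, with symbols chosen here rather than taken from [33] or [34]): for a random vector x estimated from observations y by any estimator x̂(y),

```latex
\mathbb{E}\big[(\hat{x}(y)-x)(\hat{x}(y)-x)^{\mathsf{T}}\big] \;\succeq\; J^{-1},
\qquad
J \;=\; \mathbb{E}\left[-\frac{\partial^{2}\log p(y,x)}{\partial x\,\partial x^{\mathsf{T}}}\right],
```

where the expectation is taken over the joint density p(y, x), so that J augments the standard Fisher information with the prior information carried by the distribution of x.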

4 Impact of quasi-Monte Carlo on diversity of the allocation variables

We compare some properties of the allocation-variable trajectories x^(n)_{1:T}, n = 1, . . . , N, of DP normal mixture models under three samplers: the non-sequential Monte Carlo method (Gibbs sampler) of Neal (2000) (MC), a sequential Monte Carlo method of Griffin (2015) (SMC), and our proposed sequential quasi-Monte Carlo method (SQMC). We run the three samplers on three simulated datasets of size T = 200, with heavy-tailed, skewed, and multimodal distributions, respectively (see densities in Figure 1). The number of iterations for MC (after a burn-in period) and the number of particles in both sequential samplers are set to the same value, N = 1000.

G. Bonchev 25 A, 1113 Sofia, Bulgaria
Abstract
A new Walk on Equations (WE) Monte Carlo algorithm for Linear Algebra (LA) problems is proposed and studied. This algorithm relies on a non-discounted sum of an absorbed random walk. It can be applied to either real or complex matrices. Several techniques, such as simultaneous scoring and the sequential Monte Carlo method, are applied to improve the basic algorithm. Numerical tests are performed on examples with matrices of different sizes and on systems coming from various applications. Comparisons with standard deterministic and Monte Carlo algorithms are also made.
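As a concrete illustration of the random-walk idea (a generic sketch of Monte Carlo linear solvers, not the WE algorithm of the paper), the following estimates the solution of x = b + Hx by averaging scores collected along absorbed random walks; the estimator is unbiased whenever the Neumann series for (I − H)^(-1) converges:

```python
import random

def mc_solve(H, b, n_walks=20000, p_stop=0.3, seed=1):
    """Estimate x = (I - H)^{-1} b component by component with random walks.

    Each walk accumulates a partial Neumann sum b + Hb + H^2 b + ...;
    the spectral radius of H must be < 1 for the series to converge.
    """
    random.seed(seed)
    n = len(b)
    x = []
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            cur, w, score = i, 1.0, b[i]
            while random.random() > p_stop:          # survive with prob 1 - p_stop
                j = random.randrange(n)              # uniform next state
                w *= H[cur][j] * n / (1.0 - p_stop)  # importance-weight correction
                score += w * b[j]
                cur = j
            total += score
        x.append(total / n_walks)
    return x
```

The division by (1 − p_stop) and by the uniform transition probability 1/n makes each term of the Neumann series appear with the correct expectation, which is the standard unbiasedness argument for such walks.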

Comment on "Sequential Quasi-Monte Carlo Sampling"
Pierre L’Ecuyer
DIRO, Université de Montréal, Canada
Gerber and Chopin combine SMC with RQMC to accelerate convergence. They apply RQMC as in the array-RQMC method discussed below, for which convergence-rate theory remains thin despite impressive empirical performance. Their proof of an o(N^{-1/2}) convergence rate is a remarkable contribution.

…_{0:k}, we need only record its current position ξ^{N,i}_{0:k}(k), its weight ω^{N,i}_k and the associated functional value t^{N,i}_k. Thus, the method requires only minor adaptations once the particle filter has been implemented.
As illustrated in Figure 1, as n increases, the system of path trajectories collapses, and the estimators (3) are not reliable for sensible values of N (see Doucet et al. [6], Kitagawa and Sato [11], and Andrieu and Doucet [1] for a discussion).
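The storage scheme described above can be sketched as follows: a minimal bootstrap particle filter for a toy random-walk model, in which each particle carries only its current position, its weight and a running additive functional rather than its whole trajectory (the model and all parameters are illustrative assumptions, not those of the paper):

```python
import random, math

def bootstrap_pf(ys, n_part=500, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter for X_k = X_{k-1} + N(0, sigma_x^2),
    Y_k = X_k + N(0, sigma_y^2).  Each particle keeps only its current
    position, weight, and a running additive functional t_k = sum of
    past positions -- never the full path."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, sigma_x) for _ in range(n_part)]
    ts = list(xs)                                   # running functional values
    means = []
    for y in ys:
        # propagate through the state dynamics
        xs = [x + rng.gauss(0.0, sigma_x) for x in xs]
        ts = [t + x for t, x in zip(ts, xs)]
        # weight by the observation likelihood, then normalize
        ws = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) for x in xs]
        s = sum(ws)
        ws = [w / s for w in ws]
        means.append(sum(w * x for w, x in zip(ws, xs)))
        # multinomial resampling (weights reset to uniform)
        idx = rng.choices(range(n_part), weights=ws, k=n_part)
        xs = [xs[i] for i in idx]
        ts = [ts[i] for i in idx]
    return means
```

Resampling duplicates particles, which is exactly what makes stored *paths* coalesce; the functional values t_k inherit that degeneracy, which is the collapse the excerpt refers to.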

6 Conclusion
We develop an off-line and on-line SMC algorithm (called TNT) well suited to situations where a large number of similar distributions has to be estimated. The method encompasses the off-line AIS of Neal (1998), the on-line IBIS algorithm of Chopin (2002) and the RM method of Gilks and Berzuini (2001), which all arise as special cases in the SMC sampler theory (see Del Moral, Doucet, and Jasra (2006)). The TNT algorithm benefits from the conjugacy of the tempered and time domains to avoid the particle degeneracies observed in the on-line methods. More importantly, we introduce a new adaptive MCMC kernel, based on the evolutionary optimization literature, which consists of 10 different moves based on particle interactions. These MCMC updates are selected according to probabilities that are adjusted over the SMC iterations. Furthermore, the scale parameters of these updates are also automated thanks to the method of Atchadé and Rosenthal (2005). This makes the TNT algorithm fully generic: one needs only to plug in the likelihood function, the prior distributions and the number of particles to use it.
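The adaptive-kernel idea can be illustrated with a toy sampler: two competing random-walk moves whose selection probabilities follow their past acceptance and whose step sizes are tuned by a Robbins-Monro recursion toward a target acceptance rate. The two moves, the 0.234 target and the decay schedule are illustrative assumptions, not the TNT kernel itself:

```python
import random, math

def adapt_moves(log_target, x0, n_iter=5000, target_acc=0.234, seed=0):
    """Random-walk Metropolis with two candidate moves (small/large step).
    Move-selection probabilities and step sizes adapt on the fly -- a toy
    sketch of adaptive move selection plus scale adaptation."""
    rng = random.Random(seed)
    x = x0
    scales = [0.1, 2.0]              # step size of each move
    acc = [1.0, 1.0]                 # running acceptance credit per move
    for it in range(1, n_iter + 1):
        # pick a move proportionally to its past acceptance credit
        m = 0 if rng.random() < acc[0] / (acc[0] + acc[1]) else 1
        y = x + rng.gauss(0.0, scales[m])
        a = math.exp(min(0.0, log_target(y) - log_target(x)))
        if rng.random() < a:
            x = y
        acc[m] += a
        # Robbins-Monro update of the scale toward the target acceptance
        scales[m] *= math.exp((a - target_acc) / it ** 0.6)
    return x, scales
```

The decaying step size 1/it^0.6 satisfies the usual diminishing-adaptation condition, which is what keeps such schemes ergodic.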

The literature of AIS is vast, including methods based on sequential moment matching such as AMIS [7], which comprises a Rao-Blackwellization of the temporal estimators, and APIS, which incorporates multiple proposals [8]. Other recent methods have introduced Markov chain Monte Carlo (MCMC) mechanisms for the adaptation of the IS proposals [9], [10], [11]. The family of population Monte Carlo (PMC) methods also falls within AIS. Its key feature is arguably the use of resampling steps in the adaptation of the location parameters of the proposals [12], [13]. The seminal paper [14] introduced the PMC framework. Since then, other PMC algorithms have been proposed, increasing the resulting performance through the incorporation of stochastic expectation-maximization mechanisms [15], non-linear transformations of the importance weights [16], or better weighting and resampling schemes [17]. The method we propose in this paper falls within the PMC framework.
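The resampling-based adaptation of proposal locations that characterizes PMC can be sketched in a few lines (a toy illustration of the generic mechanism, not any particular algorithm from [12]-[17]; Gaussian proposals with a fixed, assumed scale):

```python
import random, math

def pmc(log_target, n_prop=100, n_iter=20, sigma=1.0, seed=0):
    """Minimal population Monte Carlo sketch: each proposal is a Gaussian
    whose location is adapted by resampling the weighted samples."""
    rng = random.Random(seed)
    mus = [rng.uniform(-10, 10) for _ in range(n_prop)]
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sigma) for mu in mus]
        # unnormalized log weights: target over each sample's own proposal
        lws = [log_target(x) + 0.5 * ((x - mu) / sigma) ** 2
               for x, mu in zip(xs, mus)]
        m = max(lws)
        ws = [math.exp(lw - m) for lw in lws]       # stabilized exponentiation
        s = sum(ws)
        ws = [w / s for w in ws]
        # the resampling step is what adapts the proposal locations
        mus = rng.choices(xs, weights=ws, k=n_prop)
    return mus
```

Because all proposals share the same scale, their normalizing constants cancel in the self-normalized weights, which keeps the sketch short.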

3. Pseudo-marginal PaRIS algorithms
3.1. Pseudo-marginalisation in Monte Carlo methods
Pseudo-marginalisation was originally proposed in [3] in the framework of MCMC methods, and in [1] the method was developed further and provided with a solid theoretical basis. In the following we recapitulate briefly the main idea behind this approach. Consider the problem of sampling from some target distribution π defined on some measurable space (X, 𝒳) and having a density with respect to some reference measure µ. This density is assumed to be proportional to some intractable nonnegative measurable function ℓ on X, i.e., π(dx) = λ(dx)/λ(1_X), where λ(dx) := ℓ(x) µ(dx) and λ(1_X) = ∫_X ℓ dµ is the (unknown) normalizing constant.
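When ℓ(x) can only be estimated, the pseudo-marginal trick replaces it in the Metropolis-Hastings ratio by a nonnegative unbiased estimator; remarkably, the chain still targets the exact π. A minimal sketch with a symmetric random-walk proposal (the estimator `ell_hat` and all parameters are illustrative assumptions):

```python
import random, math

def pseudo_marginal_mh(ell_hat, x0, n_iter=5000, step=1.0, seed=0):
    """Pseudo-marginal Metropolis-Hastings: the intractable ell(x) is
    replaced by an unbiased nonnegative estimator ell_hat(x, rng).
    The estimate for the current state is recycled, never refreshed --
    that is what preserves the exact target."""
    rng = random.Random(seed)
    x, lhat = x0, ell_hat(x0, rng)
    chain = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)
        lhat_y = ell_hat(y, rng)
        # accept with the ratio of *estimated* unnormalized densities
        if lhat_y > 0 and rng.random() < min(1.0, lhat_y / lhat):
            x, lhat = y, lhat_y
        chain.append(x)
    return chain
```

For instance, feeding it a noisy but unbiased estimate of a standard normal density (multiplying exp(−x²/2) by a Uniform(0.5, 1.5) factor) still yields a chain with the correct N(0, 1) marginal.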

Furthermore, some solutions concatenated the source dataset with new samples, which increased the dataset size over the iterations [30–33]. Others were limited to the use of samples extracted from the target domain [28], which resulted in losing pertinent information from the source samples. Ali et al. [37] presented an approach that learned a specific model by propagating a sparsely labeled training video based on object tracking. Inspired by this, Mao and Yin [19] opted for chains of tracked samples (tracklets) to automatically label target data. They linked detection samples returned by an appearance-based object detector into tracklets and propagated labels to uncertain tracklets based on a comparison between their features and those of labeled tracklets. The method used many parameters, which had to be determined or estimated empirically, and several sequential thresholding rules, causing an inefficient adaptation of a scene-specific detector.

5. Conclusion
In this paper, we address the difficult problem of data detection in pilot-aided multicarrier systems that suffer from the presence of phase noise and carrier frequency offset. The originality of this work lies in an autoregressive modeling of the OFDM signal, from which we have deduced an SMC method for time-domain processing of the nonlinear received signal. Numerical simulations show that even with significant PHN rates, the JSCPE-MPF achieves good performance in terms of both phase distortion estimation and BER; moreover, it offers a significant performance gain in comparison to existing methods. Thus the JSCPE-MPF algorithm with AR modeling can be efficiently used with the channel estimator proposed in [8] for the design of a complete multicarrier receiver in wireline and wireless communication systems.

i. In this case, particle rejuvenation may be introduced by using the forward weighted samples at time i − 1 and extending these trajectories at time i with a Kalman filter for all possible values of the regime. Then, ã_i is sampled in {1, . . . , J} using an appropriately adapted weight.
The paper is organized as follows. The algorithms introduced in [BDM10] and in [SBG12, LBGS13, LBS+16], as well as the proposed rejuvenation associated with each method, are presented
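One such rejuvenation step can be sketched for a scalar switching linear-Gaussian model: run one Kalman prediction/update per candidate regime, then sample the regime index proportionally to its predictive likelihood. This is a simplified illustration under assumed scalar dynamics (x' = φx + noise, y = cx' + noise), not the exact weights of the papers cited:

```python
import random, math

def rejuvenate_regime(mean, var, y, regimes, rng):
    """For each candidate regime (phi, q, c, r_var), run one scalar Kalman
    step from the current Gaussian belief N(mean, var), then sample a
    regime index with weight equal to its predictive likelihood of y."""
    weights, posts = [], []
    for (phi, q, c, r_var) in regimes:
        m_pred = phi * mean
        p_pred = phi * phi * var + q
        s = c * c * p_pred + r_var          # innovation variance
        innov = y - c * m_pred
        lik = math.exp(-0.5 * innov * innov / s) / math.sqrt(2 * math.pi * s)
        gain = p_pred * c / s
        posts.append((m_pred + gain * innov, (1 - gain * c) * p_pred))
        weights.append(lik)
    j = rng.choices(range(len(regimes)), weights=weights, k=1)[0]
    return j, posts[j]
```

Marginalizing the continuous state with the Kalman filter before sampling the discrete regime is the usual Rao-Blackwellization argument, which reduces the variance of the sampled index.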

The paper is organized as follows. Section 2 describes and analyses the fixed-levels algorithm. Section 3 provides and studies the adaptive-levels version, which proves to be optimal in terms of the asymptotic variance of the estimator. Section 4 deals with the tuning of the algorithm, especially the choice and the iteration of the transition kernel which is at the core of the method. Section 5 shows the relevance of our algorithm for watermarking and fingerprinting, which constitute a new application area of rare-event simulation techniques. Finally, all the proofs are gathered in the appendix.

phase transitions are still debated [50, 51]. We perform simulations at the mean-field critical temperature β = n, with box sizes set to 1. The results are shown in Fig. 5. The clock FMet method clearly displays an O(N) acceleration over the Metropolis algorithm for all of the Ising, XY and Heisenberg models. It also exhibits some superiority (A ∼ 50 for large system sizes) compared to the LB cluster algorithm, which already implements the clock technique and has an O(1) computational complexity. By a central-limit-theorem argument, as the temperature is lowered and/or the strength of the external fields is increased, the acceptance rate drops exponentially for clusters of large size in the LB algorithm, and thus this superiority would become more pronounced.
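For reference, the O(N)-per-sweep baseline being accelerated is plain single-spin-flip Metropolis; a minimal 2D Ising version (periodic boundaries, J = 1, no external field — a generic textbook sketch, not the clock FMet or LB code):

```python
import random, math

def metropolis_ising(L=16, beta=0.6, n_sweeps=100, seed=0):
    """Single-spin-flip Metropolis for the 2D Ising model with periodic
    boundaries, started from the all-up state.  Returns the mean absolute
    magnetization per spin, averaged over sweeps."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    mags = []
    for _ in range(n_sweeps):
        for _ in range(L * L):                      # one sweep = L*L attempts
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb                   # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i][j] = -s[i][j]
        mags.append(abs(sum(sum(row) for row in s)) / (L * L))
    return sum(mags) / len(mags)
```

Each sweep touches every spin once on average, hence the O(N) cost per sweep that clock-type methods attack.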

Keywords: Allocation of Indivisible Resources, Monte-Carlo Algorithms
1 Distributed Allocation of Indivisible Resources
This article addresses the problem of sharing indivisible resources through distributed mechanisms. More specifically, we consider a set R of resources to be assigned to a set N of agents. All resources must be distributed and none is shareable or divisible: an allocation is therefore simply a partition of the resources among the agents (A : N → 2^R). The agents have preferences over the bundles of resources they may hold (typically, depending on the context, an order relation over bundles, or a utility function valuing the different bundles). The general problem is to allocate the resources to the agents "optimally" (typically in a way that is Pareto-optimal, or that maximizes the sum of the agents' utilities). In contrast to centralized approaches, which assume an algorithm taking the agents' preferences as input and computing an optimal allocation, we seek here to design the mechanism so that it exhibits desirable properties despite the self-interested behavior of the agents. In particular, we seek to show the convergence of such systems when the agents can successively modify the current allocation by carrying out exchanges of resources that they find indi-
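A tiny sketch of such a convergence question, under strong simplifying assumptions (additive utilities, and deals restricted to handing a single resource to the agent who values it most — a toy dynamic, not the paper's mechanism):

```python
import random

def transfer_until_stable(utils, n_agents, seed=0):
    """utils[a][r] is agent a's additive value for resource r.  Starting
    from a random allocation, repeatedly transfer a resource to an agent
    who values it strictly more, until no such deal remains.  Each deal
    raises total welfare, so the process must terminate."""
    rng = random.Random(seed)
    n_res = len(utils[0])
    owner = [rng.randrange(n_agents) for _ in range(n_res)]
    improved = True
    while improved:
        improved = False
        for r in range(n_res):
            best = max(range(n_agents), key=lambda a: utils[a][r])
            if utils[best][r] > utils[owner[r]][r]:
                owner[r] = best          # welfare-improving transfer
                improved = True
    return owner
```

With additive utilities this greedy dynamic even reaches the utilitarian optimum; for general preferences, convergence and optimality are exactly the delicate questions the excerpt raises.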

Table 2 gives the times used to solve 100 problems that have 66% empty cells. The tested algorithms are Forward Checking (i.e. a depth-first search), Iterative Sampling and Nested Monte-Carlo Search at levels 1 and 2. Concerning Nested Monte-Carlo Search, if the first search does not find a solution, further searches of the same level are performed until a solution is found. Forward Checking (FC) is stopped when the search time for a problem exceeds 20,000 seconds; Forward Checking is unable to solve 21 problems out of 100. Iterative Sampling takes much less time than Forward Checking and solves all the problems. Nested Monte-Carlo Search is clearly much better than Forward Checking, and better than Iterative Sampling. Going from a level-1 Nested Monte-Carlo Search to a level-2 search is not beneficial, maybe because
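The level structure being compared can be sketched generically: a level-0 search is a random playout, and a level-n search tries every move with a level-(n−1) search and commits the move whose search scored best (a simplified sketch of the algorithm family; the full algorithm also memorizes and follows the best sequence found so far, a refinement omitted here):

```python
import random

def nested(state, level, legal, play, score, rng):
    """Simplified Nested Monte-Carlo Search.  `legal(state)` lists moves,
    `play(state, m)` returns the successor, `score(state)` evaluates a
    terminal state.  Returns the best terminal state found."""
    if level == 0:                                  # random playout
        while legal(state):
            state = play(state, rng.choice(legal(state)))
        return state
    best = None
    while legal(state):
        step_best, step_move = None, None
        for m in legal(state):                      # evaluate each move
            finish = nested(play(state, m), level - 1, legal, play, score, rng)
            if step_best is None or score(finish) > score(step_best):
                step_best, step_move = finish, m
        if best is None or score(step_best) > score(best):
            best = step_best                        # best complete solution so far
        state = play(state, step_move)              # commit the locally best move
    return best if best is not None else state
```

As a usage example, on a toy problem of picking 4 numbers from {0, 1, 2} to maximize their sum, a level-2 search explores each candidate move with a full level-1 search before committing.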

[7] Gobet E., Lemor J.-P., Warin X., A regression-based Monte Carlo method to solve backward stochastic differential equations, The Annals of Applied Probability, 2005, Vol. 15, No. 3.
[8] Kloeden P. E., Platen E., Numerical Solution of Stochastic Differential Equations, Springer.
[9] Ma J., Yong J., Forward-Backward Stochastic Differential Equations and their Applications, Lecture Notes in Math. 1702, Springer, 1999.

1 Introduction
Monte-Carlo methods have been applied with success to many games. In perfect-information games, they are quite successful for the game of Go, which has a huge search space [1]. The UCT algorithm [9], in combination with the incremental development of a global search tree, has enabled Go programs such as CrazyStone [2] and MoGo [11] to be the best on 9 × 9 boards and to become competitive on 19 × 19 boards.
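At the heart of UCT is the UCB1 rule: at each node, play the move maximizing its empirical mean reward plus an exploration bonus. A minimal bandit-only sketch (the two-armed test problem and the constant c are illustrative assumptions):

```python
import random, math

def ucb1(pull, n_arms, n_rounds, c=2.0, seed=0):
    """UCB1: after trying each arm once, always pick the arm maximizing
    mean reward + sqrt(c * ln(t) / n_i).  `pull(a, rng)` returns the
    reward of arm a.  Returns the pull counts."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            a = t - 1                               # initialization: try each arm once
        else:
            a = max(range(n_arms),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(c * math.log(t) / counts[i]))
        r = pull(a, rng)
        counts[a] += 1
        sums[a] += r
    return counts
```

UCT applies this rule recursively at every tree node, with the playout result as the reward, which is what steers the incremental tree growth toward promising lines.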

where r_s denotes the instantaneous interest rate, τ_{t,T} the set of stopping times with values in [t, T], Φ the payoff function, and (F_t) a given filtration. This formula, as stated, is almost unusable and quite impractical. However, several other formulations are possible, such as the Snell envelope, which expresses the price of an American option as the solution of a dynamic programming equation involving conditional expectations. Moreover, unlike the case of European options, no closed-form formulas are known, so one is often led to use approximations and numerical solution algorithms. Among these one can cite the famous binomial-tree algorithm and the Monte-Carlo algorithm, which will almost always be used to evaluate unconditional expectations.
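The binomial-tree algorithm mentioned above is exactly a discretized Snell envelope: at each node one takes the maximum of the immediate payoff and the discounted conditional expectation of continuing. A standard Cox-Ross-Rubinstein sketch for an American put (generic textbook parametrization, not the paper's notation):

```python
import math

def american_put_crr(s0, k, r, sigma, T, n=200):
    """Cox-Ross-Rubinstein binomial tree for an American put.  The backward
    recursion is the Snell-envelope dynamic programme
    V_i = max(payoff, e^{-r dt} E[V_{i+1} | node])."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)        # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the n+1 leaves (j = number of up moves)
    v = [max(k - s0 * u ** j * d ** (n - j), 0.0) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):
        v = [max(k - s0 * u ** j * d ** (i - j),             # early exercise
                 disc * (p * v[j + 1] + (1 - p) * v[j]))     # continuation
             for j in range(i + 1)]
    return v[0]
```

The early-exercise max is the only difference from the European case; dropping it recovers the European put, which is why the American price always dominates it.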

Abstract—In the cloud computing model, cloud providers invoice clients for resource consumption. Hence, tools helping clients budget the cost of running their applications are of pre-eminent importance. However, the opaque and multi-tenant nature of clouds makes job runtimes both variable and hard to predict. In this paper, we propose an improved simulation framework that takes this variability into account using the Monte-Carlo method.

2.1 Principle
2.1.1 One-dimensional integration
To use a Monte-Carlo method [9], one must first write the quantity to be computed as an expectation; once this is done, it remains to compute this quantity as the expectation E(X) of a random variable X. For this computation, one must be able to simulate a random variable with the distribution of X. One then has a sequence (x_i)_{1≤i≤n} of n realizations of the random variable X, and E(X) is approximated by the empirical mean (x_1 + · · · + x_n)/n.
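The principle fits in a few lines; as a worked example (an illustrative choice, not from the text), the integral of exp over [0, 1] is written as E[exp(U)] with U uniform on (0, 1) and approximated by the empirical mean:

```python
import random, math

def mc_expectation(simulate, n=100000, seed=0):
    """Crude Monte-Carlo: approximate E(X) by the empirical mean of n
    independent draws of X, where simulate(rng) returns one draw."""
    rng = random.Random(seed)
    return sum(simulate(rng) for _ in range(n)) / n

# Example: integral of exp over [0, 1], i.e. E[exp(U)] = e - 1
estimate = mc_expectation(lambda rng: math.exp(rng.random()))
```

By the central limit theorem the error decreases as O(n^{-1/2}), independently of the dimension, which is the main appeal of the method.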