
Lab 3: Monte Carlo integration


Academic year: 2022


Université Joseph Fourier and ENSIMAG Master SIAM

Stochastic modelling for neurosciences

Objectives: Implementing some Markov Chain Monte Carlo methods.

Let us consider a discretized auto-regressive model: for k = 0, . . . , n,

X_{k+1} = φ X_k + U_k with U_k ∼ N(0, τ²),

and where the observation is

Y_k = X_k² + V_k with V_k ∼ N(0, σ²).

Fix k. The objective is to estimate the conditional distribution p(X_k | X_{k−1}, X_{k+1}, Y_k). One can prove that this conditional distribution has a density p(x | X_{k−1}, X_{k+1}, Y_k) proportional to

exp( − 1/(2τ²) [ (x − φ X_{k−1})² + (X_{k+1} − φ x)² ] − 1/(2σ²) (Y_k − x²)² ).

The difficulty is the term (Y_k − x²)², which makes it impossible to simulate exactly from the conditional distribution.
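As a sanity check before implementing any sampler, the unnormalized log-density above can be coded directly. The sketch below is not part of the handout; function and argument names are my own, and the parameter defaults are the values used in question 1:

```python
import numpy as np

def log_target(x, x_prev, x_next, y, phi=-0.1, tau=1.0, sigma=0.2):
    """Unnormalized log-density of p(x | X_{k-1}, X_{k+1}, Y_k)."""
    # Quadratic terms coming from the AR(1) transitions around X_k.
    prior = ((x - phi * x_prev) ** 2 + (x_next - phi * x) ** 2) / (2 * tau ** 2)
    # Non-Gaussian term coming from the observation Y_k = X_k^2 + V_k.
    lik = (y - x ** 2) ** 2 / (2 * sigma ** 2)
    return -prior - lik
```

Working on the log scale avoids numerical underflow when the quadratic terms are large, and the normalizing constant is never needed since Metropolis-Hastings only uses density ratios.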

1. Simulate a trajectory with φ = −0.1, τ = 1, σ = 0.2, X_0 = 0, n = 100. Plot the trajectory and the observations.
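A minimal simulation sketch for this question (plotting omitted; the function name and the use of a fixed seed are my own choices, not prescribed by the handout):

```python
import numpy as np

def simulate(phi=-0.1, tau=1.0, sigma=0.2, x0=0.0, n=100, seed=0):
    """Simulate the AR(1) state X and the observations Y_k = X_k^2 + V_k."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = phi * x[k] + rng.normal(0.0, tau)   # U_k ~ N(0, tau^2)
    y = x ** 2 + rng.normal(0.0, sigma, size=n + 1)     # V_k ~ N(0, sigma^2)
    return x, y
```

The trajectory and observations can then be drawn with, e.g., `matplotlib.pyplot.plot(x)` and `matplotlib.pyplot.plot(y)`.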

2. Independent Metropolis-Hastings

(a) Ignore the term (Y_k − x²)² in the conditional distribution and consider a proposal distribution q(x, ·) = N(μ_k, ρ_k²) with

μ_k = φ (X_{k−1} + X_{k+1}) / (1 + φ²) and ρ_k² = τ² / (1 + φ²).

(b) Implement the independent Metropolis-Hastings algorithm.

(c) Compute the acceptance rate.

(d) Run the algorithm with 10,000 iterations. From the last 500 realizations of the Markov chain, draw the dynamics of the chain and the density of the chain.
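One possible implementation of steps (a)–(c), with the plotting of step (d) left out. This is a sketch, not the official solution; names and default parameter values (taken from question 1) are illustrative:

```python
import numpy as np

def independent_mh(x_prev, x_next, y, n_iter=10_000,
                   phi=-0.1, tau=1.0, sigma=0.2, seed=0):
    """Independent MH targeting p(x | X_{k-1}, X_{k+1}, Y_k).

    Proposal: q = N(mu_k, rho_k^2), the Gaussian obtained by ignoring
    the (Y_k - x^2)^2 term. Returns the chain and the acceptance rate.
    """
    rng = np.random.default_rng(seed)
    mu = phi * (x_prev + x_next) / (1 + phi ** 2)
    rho = tau / np.sqrt(1 + phi ** 2)

    def log_target(x):
        return (-((x - phi * x_prev) ** 2 + (x_next - phi * x) ** 2)
                / (2 * tau ** 2)
                - (y - x ** 2) ** 2 / (2 * sigma ** 2))

    def log_q(x):  # unnormalized proposal log-density
        return -((x - mu) ** 2) / (2 * rho ** 2)

    chain = np.empty(n_iter)
    x = mu
    accepted = 0
    for t in range(n_iter):
        prop = rng.normal(mu, rho)
        # log of alpha = [pi(prop) q(x)] / [pi(x) q(prop)]
        log_alpha = (log_target(prop) - log_target(x)
                     - (log_q(prop) - log_q(x)))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
            accepted += 1
        chain[t] = x
    return chain, accepted / n_iter
```

Since q is exactly the Gaussian part of the target, the acceptance ratio reduces to the ratio of the exp(−(Y_k − x²)²/(2σ²)) terms; the general form above is kept for clarity. For step (d), plot `chain[-500:]` against the iteration index for the dynamics and a histogram of `chain[-500:]` for the density.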

3. Random-walk Metropolis-Hastings

(a) Consider as proposal q a centered normal distribution with standard deviation ρ = 0.1.

(b) Implement the algorithm and compute the acceptance rate.

(c) Run the algorithm with 10,000 iterations. From the last 500 realizations of the Markov chain, draw the dynamics of the chain and the density of the chain.

(d) Change the value to ρ = 0.5 and comment.
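The random-walk variant differs only in the proposal: the increment is centered at the current state, so the symmetric proposal density cancels in the acceptance ratio. A sketch under the same illustrative conventions as above:

```python
import numpy as np

def rw_mh(x_prev, x_next, y, rho=0.1, n_iter=10_000,
          phi=-0.1, tau=1.0, sigma=0.2, seed=0):
    """Random-walk MH: propose x' = x + eps, eps ~ N(0, rho^2).

    Returns the chain and the acceptance rate.
    """
    rng = np.random.default_rng(seed)

    def log_target(x):
        return (-((x - phi * x_prev) ** 2 + (x_next - phi * x) ** 2)
                / (2 * tau ** 2)
                - (y - x ** 2) ** 2 / (2 * sigma ** 2))

    chain = np.empty(n_iter)
    x = 0.0
    accepted = 0
    for t in range(n_iter):
        prop = x + rng.normal(0.0, rho)
        # Symmetric proposal: alpha = pi(prop) / pi(x).
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
            accepted += 1
        chain[t] = x
    return chain, accepted / n_iter
```

Comparing the acceptance rates returned for ρ = 0.1 and ρ = 0.5 is one way to support the comment asked for in step (d): a larger step size typically lowers the acceptance rate while reducing the autocorrelation of accepted moves.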
