Lab 3: Monte Carlo integration

Academic year: 2022

Université Grenoble Alpes and ENSIMAG, Master SIAM

Stochastic modelling for neurosciences


Objectives: implementing some Markov chain Monte Carlo (MCMC) methods.

1 MCMC for a one dimensional distribution

Let us define the following model:

• X_0 = −1, φ = −0.1, τ = 1, σ = 0.2

• For k = 0, 1, X_{k+1} = φ X_k + U_k with U_k ∼ N(0, τ²)

• For k = 0, 1, 2, V_k = X_k + η_k with η_k ∼ N(0, σ²).
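As a starting point for question 1, the model above can be simulated directly. A minimal sketch in Python, assuming NumPy; the seed and variable names are chosen here for illustration:

```python
import numpy as np

# Simulate the three-step model: X_{k+1} = phi * X_k + U_k, V_k = X_k + eta_k.
rng = np.random.default_rng(0)

phi, tau, sigma = -0.1, 1.0, 0.2
X = np.empty(3)
X[0] = -1.0
for k in range(2):                       # generate X_1 and X_2
    X[k + 1] = phi * X[k] + rng.normal(0.0, tau)

V = X + rng.normal(0.0, sigma, size=3)   # noisy observations V_0, V_1, V_2
print(X, V)
```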

1. Simulate the variables.

2. Assume we observe X_0, X_2, V_1, V_2 but not X_1. The objective is to filter X_1, i.e. to estimate the conditional distribution p(X_1 | X_0, X_2, V_1).

This conditional distribution has a density p(x | X_0, X_2, V_1) proportional to

exp( −(1/(2τ²)) [ (x − φX_0)² + (X_2 − φx)² ] − (1/(2σ²)) (V_1 − x)² ).

We use an MCMC approach to estimate this distribution, through the implementation of a random-walk Metropolis-Hastings algorithm.

(a) Consider as proposal q a centered normal distribution with standard deviation ρ = 0.5.

(b) Implement the algorithm and compute the acceptance rate.

(c) Run the algorithm for 10,000 iterations. From the last 500 realizations of the Markov chain, plot the dynamics of the chain and its empirical density.

(d) Change the value to ρ = 0.2 or ρ = 1. Comment.

3. Increase the value of σ to 0.5 and rerun. Comment.
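The steps above can be sketched as a random-walk Metropolis-Hastings targeting the density of question 2, assuming NumPy. The observed values X_0, X_2, V_1 are placeholders standing in for the simulated ones, and the function names are illustrative, not part of the lab statement:

```python
import numpy as np

phi, tau, sigma = -0.1, 1.0, 0.2
X0, X2, V1 = -1.0, 0.3, 0.1   # placeholder observations (assumption)

def log_target(x):
    # log p(x | X0, X2, V1) up to an additive constant
    return (-((x - phi * X0) ** 2 + (X2 - phi * x) ** 2) / (2 * tau ** 2)
            - (V1 - x) ** 2 / (2 * sigma ** 2))

def rwmh(n_iter=10_000, rho=0.5, x_init=0.0, seed=0):
    """Random-walk Metropolis-Hastings with a centered normal proposal."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x, n_accept = x_init, 0
    for i in range(n_iter):
        prop = x + rng.normal(0.0, rho)      # centered normal proposal, sd rho
        # symmetric proposal: accept with probability min(1, target ratio)
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x, n_accept = prop, n_accept + 1
        chain[i] = x
    return chain, n_accept / n_iter

chain, acc_rate = rwmh()
print(f"acceptance rate: {acc_rate:.2f}")
# The last 500 values of `chain` can be used to plot the dynamics of the
# chain and a histogram approximating the filtering density of X_1.
```

Rerunning with rho = 0.2, rho = 1, or sigma = 0.5 only requires changing the corresponding arguments.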

2 MCMC for filtering a discrete-time process

Let us consider the discretized auto-regressive model: for k = 0, . . . , n,

X_{k+1} = φ X_k + U_k with U_k ∼ N(0, τ²),

and where the observation is

V_k = X_k + η_k with η_k ∼ N(0, σ²).


1. Simulate a trajectory (V_k, X_k)_{k=0,...,n} with φ = −0.1, τ = 1, σ = 0.2, X_0 = −1, n = 100. Plot the trajectory and the observations.

2. We assume that we observe only the V_k's; the X_k's are hidden. The objective is to filter the hidden process X, i.e. to estimate the conditional distribution p(X_{0:n} | V_{0:n}). We assume that the values of the parameters (φ, τ, σ) are known.

The filtering distribution p(X_{0:n} | V_{0:n}) is a multidimensional and complex distribution. Instead of estimating it directly, we split the problem into one-dimensional problems using the Gibbs algorithm.

• Initialize the algorithm with X_{0:n}^{(0)} = V_{0:n} + ε_{0:n}, where ε_{0:n} is a Gaussian random vector.

• At each iteration j ≥ 1 of the Gibbs algorithm, given the current value of the hidden process X_{0:n}^{(j−1)}:

– For k = 0, simulate a Markov chain with stationary distribution equal to the conditional distribution p(X_0 | X_1^{(j−1)}, V_0). Set X_0^{(j)} equal to the last value of the chain.

– For k = 1, . . . , n−1, simulate a Markov chain with stationary distribution equal to the conditional distribution p(X_k | X_{k−1}^{(j)}, X_{k+1}^{(j−1)}, V_k). Set X_k^{(j)} equal to the last value of the chain.

– For k = n, simulate a Markov chain with stationary distribution equal to the conditional distribution p(X_n | X_{n−1}^{(j)}, V_n). Set X_n^{(j)} equal to the last value of the chain.

3. Filter the whole trajectory.
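The procedure above can be sketched as a Metropolis-within-Gibbs sampler, assuming NumPy. The conditional densities follow from the AR(1) transition and the Gaussian observation model; all function names, the number of sweeps, and the inner chain length are illustrative choices, not prescribed by the lab:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, tau, sigma = -0.1, 1.0, 0.2
n = 100

# --- question 1: simulate a trajectory and its observations ---
X = np.empty(n + 1)
X[0] = -1.0
for k in range(n):
    X[k + 1] = phi * X[k] + rng.normal(0.0, tau)
V = X + rng.normal(0.0, sigma, size=n + 1)

def log_cond(x, k, Xcur):
    """log p(X_k = x | neighbouring states, V_k), up to a constant."""
    lp = -(V[k] - x) ** 2 / (2 * sigma ** 2)   # observation term
    if k > 0:                                  # transition from X_{k-1}
        lp -= (x - phi * Xcur[k - 1]) ** 2 / (2 * tau ** 2)
    if k < n:                                  # transition to X_{k+1}
        lp -= (Xcur[k + 1] - phi * x) ** 2 / (2 * tau ** 2)
    return lp

def gibbs(n_sweeps=200, n_inner=20, rho=0.5):
    Xs = V + rng.normal(0.0, 1.0, size=n + 1)  # initialization X^(0)
    for _ in range(n_sweeps):
        for k in range(n + 1):
            x = Xs[k]
            for _ in range(n_inner):           # short RWMH chain per site
                prop = x + rng.normal(0.0, rho)
                if np.log(rng.uniform()) < log_cond(prop, k, Xs) - log_cond(x, k, Xs):
                    x = prop
            Xs[k] = x                          # keep the last value of the chain
    return Xs

X_filtered = gibbs()
print(np.mean(np.abs(X_filtered - X)))  # rough distance to the true hidden path
```

Plotting `X_filtered` against the true trajectory `X` and the observations `V` gives a visual check of the filtering.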

