Université Joseph Fourier and ENSIMAG Master SIAM
Stochastic modelling for neurosciences
Lab 3: Monte Carlo integration
Objectives: Implementing some Markov Chain Monte Carlo methods.
Let us consider a discretized auto-regressive model: for $k = 0, \dots, n$,
$$X_{k+1} = \phi X_k + U_k \quad \text{with } U_k \sim \mathcal{N}(0, \tau^2),$$
and where the observation is
$$Y_k = X_k^2 + V_k \quad \text{with } V_k \sim \mathcal{N}(0, \sigma^2).$$
Fix $k$. The objective is to estimate the conditional distribution $p(X_k \mid X_{k-1}, X_{k+1}, Y_k)$. One can prove that this conditional distribution has a density $p(x \mid X_{k-1}, X_{k+1}, Y_k)$ proportional to
$$\exp\left( -\frac{1}{2\tau^2} \left[ (x - \phi X_{k-1})^2 + (X_{k+1} - \phi x)^2 + \frac{\tau^2}{\sigma^2} (Y_k - x^2)^2 \right] \right).$$
The difficulty comes from the term $(Y_k - x^2)^2$: because of it, one cannot simulate exactly from this conditional distribution.
1. Simulate a trajectory with $\phi = -0.1$, $\tau = 1$, $\sigma = 0.2$, $X_0 = 0$, $n = 100$. Plot the trajectory and the observations.
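A minimal numpy sketch of the simulation in question 1 could look as follows (the function name `simulate` and the fixed seed are choices made here, not part of the exercise; plotting with matplotlib is left as a one-liner on `x` and `y`):

```python
import numpy as np

def simulate(phi=-0.1, tau=1.0, sigma=0.2, x0=0.0, n=100, rng=None):
    """Simulate the AR(1) state X_0..X_n and the observations Y_k = X_k^2 + V_k."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        # X_{k+1} = phi * X_k + U_k, with U_k ~ N(0, tau^2)
        x[k + 1] = phi * x[k] + tau * rng.standard_normal()
    # Y_k = X_k^2 + V_k, with V_k ~ N(0, sigma^2)
    y = x**2 + sigma * rng.standard_normal(n + 1)
    return x, y

x, y = simulate()
# plt.plot(x); plt.plot(y, ".")  # trajectory and observations
```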
2. Independent Metropolis Hastings
(a) Ignore the term $(Y_k - x^2)^2$ in the conditional distribution and consider a proposal distribution $q(x, \cdot) = \mathcal{N}(\mu_k, \rho_k^2)$ with
$$\mu_k = \frac{\phi (X_{k-1} + X_{k+1})}{1 + \phi^2} \quad \text{and} \quad \rho_k^2 = \frac{\tau^2}{1 + \phi^2}.$$
(b) Implement the independent Metropolis-Hastings algorithm.
(c) Compute the acceptance rate.
(d) Run the algorithm with 10,000 iterations. From the last 500 realizations of the Markov chain, draw the dynamics of the chain and the density of the chain.
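One possible sketch of steps 2(b)-(d), using the log-density proportional to the target above and working on the log scale for numerical stability (the conditioning values `xkm1`, `xkp1`, `yk` below are illustrative placeholders, not values prescribed by the exercise):

```python
import numpy as np

def indep_mh(xkm1, xkp1, yk, phi=-0.1, tau=1.0, sigma=0.2,
             n_iter=10_000, rng=None):
    """Independent Metropolis-Hastings for p(x | X_{k-1}, X_{k+1}, Y_k)."""
    rng = np.random.default_rng(1) if rng is None else rng
    mu = phi * (xkm1 + xkp1) / (1 + phi**2)   # proposal mean
    rho2 = tau**2 / (1 + phi**2)              # proposal variance

    def log_target(x):
        # log-density of the conditional, up to an additive constant
        return -((x - phi * xkm1)**2 + (xkp1 - phi * x)**2
                 + (tau**2 / sigma**2) * (yk - x**2)**2) / (2 * tau**2)

    def log_q(x):
        # proposal log-density, up to an additive constant
        return -(x - mu)**2 / (2 * rho2)

    chain = np.empty(n_iter)
    chain[0] = mu
    accepts = 0
    for t in range(1, n_iter):
        x = chain[t - 1]
        xp = mu + np.sqrt(rho2) * rng.standard_normal()  # independent proposal
        # log acceptance ratio: log pi(x') - log pi(x) + log q(x) - log q(x')
        log_alpha = (log_target(xp) - log_target(x)) + (log_q(x) - log_q(xp))
        if np.log(rng.random()) < log_alpha:
            chain[t] = xp
            accepts += 1
        else:
            chain[t] = x
    return chain, accepts / (n_iter - 1)

chain, acc_rate = indep_mh(xkm1=0.3, xkp1=-0.2, yk=0.5)
```

The last 500 entries of `chain` can then be plotted against iteration number (dynamics) and as a histogram (density estimate).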
3. Random-walk Metropolis-Hastings
(a) Consider as proposal $q$ a centered normal distribution with standard deviation $\rho = 0.1$.
(b) Implement the algorithm, compute the acceptance rate.
(c) Run the algorithm with 10,000 iterations. From the last 500 realizations of the Markov chain, draw the dynamics of the chain and the density of the chain.
(d) Change the value to $\rho = 0.5$. Comment.
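A sketch of question 3, under the same illustrative conditioning values as before: since the random-walk proposal is symmetric, the $q$ terms cancel and the acceptance ratio only involves the target.

```python
import numpy as np

def rw_mh(xkm1, xkp1, yk, phi=-0.1, tau=1.0, sigma=0.2,
          rho=0.1, n_iter=10_000, rng=None):
    """Random-walk Metropolis-Hastings with N(0, rho^2) increments."""
    rng = np.random.default_rng(2) if rng is None else rng

    def log_target(x):
        # same conditional log-density as before, up to an additive constant
        return -((x - phi * xkm1)**2 + (xkp1 - phi * x)**2
                 + (tau**2 / sigma**2) * (yk - x**2)**2) / (2 * tau**2)

    chain = np.empty(n_iter)
    chain[0] = 0.0
    accepts = 0
    for t in range(1, n_iter):
        x = chain[t - 1]
        xp = x + rho * rng.standard_normal()  # symmetric random-walk proposal
        # symmetric proposal: log alpha = log pi(x') - log pi(x)
        if np.log(rng.random()) < log_target(xp) - log_target(x):
            chain[t] = xp
            accepts += 1
        else:
            chain[t] = x
    return chain, accepts / (n_iter - 1)

# Question 3(d): compare the two step sizes
c1, a1 = rw_mh(0.3, -0.2, 0.5, rho=0.1)
c2, a2 = rw_mh(0.3, -0.2, 0.5, rho=0.5)
```

Comparing `a1` and `a2` illustrates the usual trade-off: a small $\rho$ gives a high acceptance rate but slow exploration, while a larger $\rho$ lowers the acceptance rate but takes bigger steps.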