
3.2 Structure of the PIM algorithm

e^{-V(\Delta r)} = \exp\left[-\frac{\sigma_p^2}{2}\sum_{\lambda=0}^{\nu-1}\left(\Delta r_{\lambda+1}-\Delta r_{\lambda}\right)^2\right]   (3.35)

e^{-\delta\beta\bar{V}(r,\Delta r)} = e^{-\delta\beta V(r_0)}\; e^{-\frac{\delta\beta}{2}\left[V\left(r_{\nu}+\frac{\Delta r_{\nu}}{2}\right)+V\left(r_{\nu}-\frac{\Delta r_{\nu}}{2}\right)\right]}\; e^{-\delta\beta\sum_{\lambda=1}^{\nu-1}\left[V\left(r_{\lambda}+\frac{\Delta r_{\lambda}}{2}\right)+V\left(r_{\lambda}-\frac{\Delta r_{\lambda}}{2}\right)\right]}   (3.36)

which allow us to rewrite the marginal (ρ_m(r)), the conditional (ρ_c(Δr|r)) and the noisy probability density in a more synthetic way as:

\rho_m(r) = \int d\Delta r\, \rho(r,\Delta r)   (3.37)

         = \frac{1}{N}\, e^{-V_r(r)} \int d\Delta r\, e^{-\delta\beta\bar{V}(r,\Delta r)}   (3.38)

         = \frac{1}{N}\, e^{-V_r(r)}\, \rho^0_m(r)   (3.39)

\rho_c(\Delta r|r) = \frac{\rho(r,\Delta r)}{\rho_m(r)}   (3.40)

                  = \frac{e^{-V(\Delta r)}\, e^{-\delta\beta\bar{V}(r,\Delta r)}}{\int d\Delta r\, e^{-V(\Delta r)}\, e^{-\delta\beta\bar{V}(r,\Delta r)}}   (3.41)

P(r,p) = \frac{e^{-V_r(r)}\, \rho^0_m(r)\, e^{-E(p,r)}}{\int dp \int dr\, e^{-V_r(r)}\, \rho^0_m(r)\, e^{-E(p,r)}}   (3.42)

As stated before, our goal is to sample P numerically (note that p and r in this probability density are not independent and have to be treated together in the sampling).

We use a Monte Carlo scheme in which the probability to generate a new state of the system (by changing the momenta or the coordinates of a particle) and to accept or reject this new state is generalized to satisfy detailed balance even in the presence of significant noise in the estimates of ρ^0_m(r) and E(r,p). I will first present the important steps of the algorithm and the two different noisy Monte Carlo schemes (the penalty and Kennedy methods), and then show how we calculate the different estimators needed by the algorithm.

3.2.1 Structure of the algorithm

The first step of the algorithm is to choose whether we move the momenta or the coordinates.

Let us consider that we move the momentum. In this case, we choose p′ = p + δp, with δp a uniform random vector centered at zero (the magnitude of the displacement is chosen so as to optimize the acceptance). With this prescription, and taking into account that the r variables are not updated, detailed balance takes the form:

P(r,p)\, T_p(p \to p')\, P^p_{acc}(p \to p') = P(r,p')\, T_p(p' \to p)\, P^p_{acc}(p' \to p)   (3.43)

where P^p_{acc} is the acceptance probability of the displacement and T_p the (uniform) transition probability.

We can manipulate the previous equation by simplifying all the terms of P(r,p) which depend exclusively on r, and by using the fact that, for a uniform displacement, T_p(p → p′) = T_p(p′ → p) whenever the move falls within the allowed displacement range. Equation 3.43 thus becomes:

e^{-E(p,r)}\, P^p_{acc}(p \to p') = e^{-E(p',r)}\, P^p_{acc}(p' \to p)   (3.44)

This detailed balance is similar to the one considered by Ceperley et al. [10] when they introduced the penalty method, which is in fact a generalized Monte Carlo scheme for sampling a density given by the exponential of a function known only with a statistical error (in our case E(p,r)). The procedure is the following. Suppose we have a probability density P(s) of the form:

P(s) \propto e^{-L(s)}, with L(s) estimated   (3.45)

We then calculate the difference of the exponent between the current state, s, and the proposed state, s′, N times (since L(s) is estimated, the result is different each time we calculate this quantity) and take the average and the variance of this quantity. Thus, we compute:

\delta_i(s,s') = L_i(s') - L_i(s)   (3.46)

D(s,s') = \frac{1}{N}\sum_{i=1}^{N}\delta_i(s,s')   (3.47)

\chi^2(s,s') = \frac{1}{N(N-1)}\sum_{i=1}^{N}\left(D(s,s') - \delta_i(s,s')\right)^2   (3.48)

where i = 1, \dots, N.

According to Ceperley et al. [10], asymptotic sampling of P(s) is achieved by using an acceptance probability of the following form:

a(s \to s') = \min\left[1,\ \exp\left(-D(s,s') - u_{\chi^2}\right)\right]   (3.49)

with

u_{\chi^2} = \frac{\chi^2}{2} + \frac{\chi^4}{4(N+1)} + \frac{\chi^6}{3(N+1)(N+3)} + \cdots   (3.50)

This acceptance test differs from the standard Metropolis rule by the presence of u_{χ²} and is valid when χ⁴/2 ≤ 1: in the limit of an infinitely precise estimate of the difference, u_{χ²} → 0 and we recover the Metropolis algorithm; when it is nonzero, u_{χ²} corrects, on average, for the effect of the noise.
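To make the procedure concrete, here is a minimal sketch of the penalty test of equations 3.46–3.50 (Python; the callable estimate_delta, the number of estimates n_est, and the truncation of the series 3.50 to its first three terms are choices of this illustration, not prescriptions of the method):

```python
import numpy as np

def penalty_acceptance(estimate_delta, s, s_new, n_est, rng):
    """Noisy Metropolis test of eqs. 3.46-3.50 (penalty method [10]).

    estimate_delta(s, s_new) must return one independent noisy
    estimate of the exponent difference L(s_new) - L(s).
    """
    # N independent estimates delta_i of the difference (eq. 3.46)
    deltas = np.array([estimate_delta(s, s_new) for _ in range(n_est)])
    # sample mean D (eq. 3.47) and variance of the mean chi^2 (eq. 3.48)
    D = deltas.mean()
    chi2 = np.sum((D - deltas) ** 2) / (n_est * (n_est - 1))
    # penalty u_{chi^2}: series of eq. 3.50 truncated at third order
    u = chi2 / 2 + chi2**2 / (4 * (n_est + 1)) \
        + chi2**3 / (3 * (n_est + 1) * (n_est + 3))
    # generalized acceptance rule of eq. 3.49
    return rng.random() < min(1.0, np.exp(-D - u))
```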

In our situation, we have:

P(r,p) \propto e^{-E(r,p)}, with E(r,p) estimated   (3.51)

In order to implement the penalty method, we need a numerical estimator, ∆E_p(p, p′; r), of the difference E(r, p′) − E(r, p), to be used in the acceptance test of equation 3.49. This quantity will be discussed in section 3.2.2.
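As a usage example, the momentum move described above then reduces to a thin wrapper around the penalty_acceptance sketch given earlier (again only a sketch; the displacement amplitude dp_max and the estimator estimate_dEp, standing in for ∆E_p(p, p′; r), are hypothetical names):

```python
import numpy as np

def momentum_move(r, p, estimate_dEp, dp_max, n_est, rng):
    """One momentum move: uniform proposal plus penalty test.

    estimate_dEp(p, p_new, r) returns one noisy estimate of
    E(r, p_new) - E(r, p), i.e. the estimator Delta E_p of sec. 3.2.2.
    """
    # uniform random displacement centered at zero: p' = p + delta_p
    p_new = p + rng.uniform(-dp_max, dp_max, size=np.shape(p))
    accept = penalty_acceptance(lambda a, b: estimate_dEp(a, b, r),
                                p, p_new, n_est, rng)
    return p_new if accept else p
```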

We now turn to what is required to sample a coordinate change in our Monte Carlo. In this case, indicating with T_r(r → r′) and A_r(r → r′) the probabilities to generate and to accept a new configuration of the system, respectively, detailed balance becomes:

P(r,p)\, T_r(r \to r')\, A_r(r \to r') = P(r',p)\, T_r(r' \to r)\, A_r(r' \to r)   (3.52)

Using the explicit form of P and simplifying we have:

e^{-V_r(r)}\, e^{-E(r,p)}\, \rho^0_m(r)\, T_r(r \to r')\, A_r(r \to r') = e^{-V_r(r')}\, e^{-E(r',p)}\, \rho^0_m(r')\, T_r(r' \to r)\, A_r(r' \to r)   (3.53)

As for the p displacement, we have a probability density which is known only through numerical estimates. In this case, however, the numerical complexity of the calculation of P increases. Indeed, we have:

P(r,p) \propto \rho^0_m(r)\, e^{-E(r,p)}, with E(r,p) and \rho^0_m(r) estimated   (3.54)

Given the non-exponential form of the probability in detailed balance, we cannot simply apply the penalty method described above; we have to combine it with another method, introduced by Kennedy et al. [11], who adapted the Monte Carlo procedure to the following case:

P(s) \propto f(s)\, e^{-L(s)}, with f(s) estimated and L(s) known analytically   (3.55)

Kennedy et al. showed that, choosing the transition probability as

T(s \to s') \propto e^{-L(s')}   (3.56)

detailed balance simplifies to:

f(s)\, A(s \to s') = f(s')\, A(s' \to s)   (3.57)

This can be satisfied by defining the acceptance probability as:

A(s \to s') = \begin{cases} c\, U(s \to s') & \text{if } "s > s'" \\ c & \text{if } "s \le s'" \end{cases}   (3.58)

where U(s → s′) is a numerical estimate of the ratio f(s′)/f(s). Note that equation 3.58 depends on this ratio and on a specific ordering criterion "s > s′" (its particular form will be specified for our case), and it guarantees that detailed balance is satisfied even in the presence of noise. Indeed, since U is unbiased, averaging over the noise for "s > s′" gives f(s) c ⟨U(s → s′)⟩ = c f(s′) = f(s′) A(s′ → s), which is exactly equation 3.57. The constant c < 1 is there to ensure that A(s → s′) ∈ [0, 1].
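A minimal sketch of this noisy acceptance step could read as follows (Python; the callables estimate_ratio and order and the constant c are placeholders of this illustration):

```python
def kennedy_acceptance(estimate_ratio, order, s, s_new, c, rng):
    """Noisy acceptance rule of eq. 3.58 (Kennedy et al. [11]).

    estimate_ratio(s, s_new) returns an unbiased estimate U of the
    ratio f(s_new)/f(s); order(s, s_new) is True when "s > s_new".
    """
    if order(s, s_new):
        # "s > s'": accept with probability c * U(s -> s')
        prob = c * estimate_ratio(s, s_new)
    else:
        # "s <= s'": accept with the constant probability c
        prob = c
    # because U is only unbiased, prob may occasionally exceed 1;
    # c is tuned so that this happens rarely (see below)
    return rng.random() < prob
```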

In our case, the Kennedy method is implemented by defining:

T_r(r \to r') \propto e^{-V_r(r')}\, e^{-E(r',p)}   (3.59)

Detailed balance is then satisfied if the acceptance probability has the following form, where the ordering criteria are indicated as "r > r′" and "r ≤ r′":

A_r(r \to r') = \begin{cases} c\, U(r \to r') & \text{if } "r > r'" \\ c & \text{if } "r \le r'" \end{cases}   (3.60)

In equation 3.60, U(r → r′) is an unbiased estimator of the ratio ρ^0_m(r′)/ρ^0_m(r). Numerical tests have shown that, for our calculations, we can set c = 0.9 (c is chosen so that A_r(r → r′) > 1 occurs only rarely, around 1% of the time at most). The optimal choice of the ordering criteria "r > r′" and "r ≤ r′" depends on the problem; here we adopted "r > r′" as equivalent to e^{-δβ V̄(r, Δr=0)} > e^{-δβ V̄(r′, Δr=0)}.

When introducing the Kennedy procedure, I specified that e^{-L(s)} is assumed to be known analytically. This is postulated in the original method to allow sampling of the transition probability either analytically or via standard Monte Carlo. In our case, the condition is not met because e^{-E(r,p)} is only known numerically. We can however solve this problem by first using the penalty method to generate configurations according to T_r(r → r′) ∝ e^{-V_r(r′)} e^{-E(r′,p)}. These configurations are thus generated using a Monte Carlo with transition probability t(r → r′) and acceptance probability a(r → r′):

e^{-V_r(r)}\, e^{-E(r,p)}\, t(r \to r')\, a(r \to r') = e^{-V_r(r')}\, e^{-E(r',p)}\, t(r' \to r)\, a(r' \to r)   (3.61)

t(r \to r') \propto e^{-V_r(r')}   (3.62)

e^{-E(r,p)}\, a(r \to r') = e^{-E(r',p)}\, a(r' \to r)   (3.63)

a(r \to r') = \min\left[1,\ \exp\left(-\Delta E_r(r',r;p) - u_{\chi^2_r}\right)\right]   (3.64)

where ∆E_r(r′, r; p) is an unbiased numerical estimate of E(r′, p) − E(r, p), and u_{χ²_r} is defined in analogy with equations 3.49 and 3.50.

Here we have t(r → r′) ∝ e^{-V_r(r′)}, which means that we have to sample the transition probability along a spring chain in the r variables; to do so, we use Gaussian sampling and the staging variables [25]. I present the technical aspects of this in Appendix B.
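Combining the two tests, a coordinate move could then be organized as in the following sketch (Python, reusing the penalty_acceptance and kennedy_acceptance functions above; sample_spring_chain, standing in for the staging/Gaussian sampling of Appendix B, and the estimators estimate_dE and estimate_ratio_rho0m are hypothetical names, and a single inner generation step is shown for compactness):

```python
def coordinate_move(r, p, sample_spring_chain, estimate_dE,
                    estimate_ratio_rho0m, order, n_est, c, rng):
    """One coordinate move: penalty-driven generation (eqs. 3.61-3.64)
    followed by the Kennedy test of eq. 3.60."""
    # 1) propose r' from the free spring chain: t(r -> r') ~ exp(-Vr(r'))
    r_new = sample_spring_chain(r)
    # 2) inner penalty test (eqs. 3.63-3.64); estimate_dE(r, r_new, p)
    #    returns one noisy estimate of E(r', p) - E(r, p)
    if not penalty_acceptance(lambda a, b: estimate_dE(a, b, p),
                              r, r_new, n_est, rng):
        return r  # rejected already at the generation stage
    # 3) outer Kennedy test on the estimated ratio rho0_m(r')/rho0_m(r)
    if kennedy_acceptance(estimate_ratio_rho0m, order, r, r_new, c, rng):
        return r_new
    return r
```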

The scheme of our Monte Carlo moves is summarized in figure 3.1. In the next subsection we define the different numerical estimators ∆E_r(r′, r; p), U(r → r′) and ∆E_p(p, p′; r) required by the algorithm.
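For orientation only, the overall move scheme of figure 3.1 could be driven by a loop of the following shape (a sketch; the move-selection probability p_mom and the function names are arbitrary choices of this illustration):

```python
def run_sampling(r, p, n_steps, momentum_move_fn, coordinate_move_fn,
                 p_mom, rng):
    """Alternate momentum and coordinate moves at random (fig. 3.1)."""
    samples = []
    for _ in range(n_steps):
        if rng.random() < p_mom:
            p = momentum_move_fn(r, p)    # penalty test on E(r, p)
        else:
            r = coordinate_move_fn(r, p)  # penalty + Kennedy tests
        samples.append((r, p))
    return samples
```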
