4 Self diffusion

1 The process as seen from a tagged particle

In the previous chapter we examined the evolution of the simple exclusion process without distinguishing particles. We investigate now the asymptotic behavior of an individual particle.

Fix a finite-range probability measure p on Z^d such that p(0) = 0 and recall from Chapter 3 that we denote by {η_t : t ≥ 0} the exclusion process on X^d = {0, 1}^{Z^d} associated to p. Consider a configuration η with a particle at the origin. Tag this particle and denote by Z_t, t ≥ 0, its position at time t. Due to the presence of the other particles, {Z_t : t ≥ 0} by itself is not a Markov process, but the pair {(Z_t, η_t) : t ≥ 0} is a Markov process whose generator L̃ acts on local functions f : Z^d × {0, 1}^{Z^d} → R as

(L̃ f)(z, η) = Σ_{x,y ∈ Z^d, x,y ≠ z} p(y − x) η(x) [1 − η(y)] {f(z, σ^{x,y}η) − f(z, η)}      (1.1)
    + Σ_{x ∈ Z^d} p(x) [1 − η(z + x)] {f(z + x, σ^{z,z+x}η) − f(z, η)} .

In this formula, σ^{x,y}η, defined in (3.0.2), is the configuration obtained from η by exchanging the occupation variables η(x) and η(y).

We are interested in this chapter in the asymptotic behavior of Z_t. If there were no other particles, Z_t would be a continuous-time random walk, for which the law of large numbers and the invariance principle are well known.

Of course, the presence of the other particles affects the behavior of the tagged particle. If, for instance, all sites are occupied, the particle remains at the origin forever.

The evolution of a tagged particle immersed in the simple exclusion process has a special feature which simplifies the analysis considerably. Consider the process as seen from the position of the tagged particle. This means that we assume the tagged particle to be frozen at the origin. Each time it jumps by x, we translate the whole configuration by −x to keep the tagged particle immobile. A fundamental and quite particular property of this process is that it possesses a very simple family of stationary states: for each 0 ≤ α ≤ 1, the measure obtained by placing a particle independently at each site different from the origin with probability α turns out to be a stationary state. We denote these invariant states by ν_α in this chapter.

A law of large numbers for the tagged particle starting from ν_α is easy to infer. The mean displacement of the tagged particle is m = Σ_{x ∈ Z^d} x p(x). Since ν_α is stationary, the site to which the tagged particle chooses to jump is empty with probability 1 − α. We thus expect a mean displacement in the stationary state equal to (1 − α)m. This is in fact the content of the first main theorem of this chapter, which states that in the stationary state Z_t/t converges a.s. to (1 − α)m.
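This drift formula can be probed numerically. The sketch below is not from the original text and all names are ours: it runs a totally asymmetric nearest-neighbor exclusion process, p(1) = 1 (so m = 1), on a periodic ring of L sites with n = αL particles, starting from the uniform distribution, which is stationary on the ring, and checks that the empirical velocity of a tagged particle is close to (1 − α)m. On the ring the exact stationary speed is (L − n)/(L − 1) ≈ 1 − α. (The one-dimensional nearest-neighbor case is degenerate for the central limit theorem discussed below, but it already illustrates the law of large numbers.)

```python
import random

random.seed(1)
L, alpha = 1000, 0.5
n = int(alpha * L)
pos = random.sample(range(L), n)     # uniform over configurations: stationary on the ring
occ = bytearray(L)
for x in pos:
    occ[x] = 1

Z = 0                                # signed displacement of the tagged particle (index 0)
steps = 2_000 * n                    # random sequential update approximating rate-1 clocks
for _ in range(steps):
    i = random.randrange(n)          # pick a particle; it attempts a jump to the right
    x = pos[i]
    y = (x + 1) % L
    if not occ[y]:                   # exclusion rule: jump only onto an empty site
        occ[x], occ[y] = 0, 1
        pos[i] = y
        if i == 0:
            Z += 1

T = steps / n                        # each sweep of n attempts corresponds to one time unit
speed = Z / T
assert abs(speed - (1 - alpha)) < 0.07
```

The tolerance is loose because this is a finite-time, finite-ring Monte Carlo estimate of the almost-sure limit (1 − α)m.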

The second main result of this chapter establishes a central limit theorem: starting from ν_α, (Z_t − (1 − α)mt)/√t converges in distribution to a mean-zero Gaussian random vector with a covariance matrix D(α) depending on the density. So far, this result has been proved in the mean-zero case (m = 0) and, in the asymmetric case, in dimension d ≥ 3. It is conjectured to hold also in dimensions 1 and 2 in the asymmetric case, but there even the finiteness of the asymptotic variance has not yet been obtained.

In the mean-zero one-dimensional nearest-neighbor case the tagged particle evolves in a subdiffusive scale and the covariance matrix D(α) vanishes. The reason for this singular behavior is simple. Since particles cannot jump over other particles, for the tagged particle to reach a site N, all particles in the interval {1, . . . , N} need to be displaced, and this requires a longer time scale. Actually, it has been proved that Z_t/t^{1/4} converges in distribution to a non-degenerate Gaussian random variable in the symmetric case p(1) = p(−1) = 1/2.

Finally, let us mention that the method presented in Chapter 6 permits one to show that the covariance matrix D(α) depends smoothly on α.

We close this section by introducing the notation needed to state the main results of the chapter. Denote by {τ_x : x ∈ Z^d} the group of translations on X^d, so that

(τ_x η)(y) = η(x + y)

for x, y in Z^d and a configuration η in X^d. The action of the translation group is naturally extended to functions and measures.

Denote by {ξ_t : t ≥ 0} the state of the exclusion process as seen from the position of the tagged particle: ξ_t = τ_{Z_t} η_t. Notice that the origin is always occupied: ξ_t(0) = (τ_{Z_t} η_t)(0) = η_t(Z_t) = 1. In particular, we may consider ξ_t as a configuration of X^d with a particle at the origin or as a configuration of X^d_* = {0, 1}^{Z^d_*}, where Z^d_* = Z^d \ {0}. We adopt here the latter convention. It is then opportune to define the shift operators {θ_x : x ∈ Z^d_*} on X^d_* as follows:

(θ_x ξ)(y) = ξ(x) if y = −x ,   ξ(y + x) otherwise.

This means that θ_x ξ stands for the configuration where the tagged particle, sitting at the origin, is first transferred to site x and then the whole configuration is translated by −x.
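As a sanity check on this definition, the sketch below (helper names are ours; a configuration of X^2_* is stored as a dictionary on a finite box, empty sites implicit) implements θ_x on Z^2_* and verifies that θ_{−x} ∘ θ_x is the identity, so that each θ_x is a bijection of X^2_*.

```python
import random

def theta(xi, x):
    """(theta_x xi)(y) = xi(x) if y = -x, and xi(y + x) otherwise (y != 0)."""
    sites = {(a - x[0], b - x[1]) for (a, b) in xi} | set(xi) | {(-x[0], -x[1])}
    out = {}
    for y in sites:
        if y == (0, 0):
            continue                      # the origin carries the tagged particle
        if y == (-x[0], -x[1]):
            out[y] = xi.get(x, 0)         # value at x is recorded at -x
        else:
            out[y] = xi.get((y[0] + x[0], y[1] + x[1]), 0)
    return out

random.seed(2)
xi = {(i, j): random.randint(0, 1)
      for i in range(-3, 4) for j in range(-3, 4) if (i, j) != (0, 0)}
x = (1, 1)
back = theta(theta(xi, x), (-1, -1))      # apply theta_x, then theta_{-x}
assert all(back.get(s, 0) == xi.get(s, 0) for s in set(xi) | set(back))
```

Since θ_{−x} inverts θ_x and ν_α is a product of identical marginals off the origin, this bijection property is what makes the Bernoulli measures ν_α natural candidates for invariant states.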

It is not difficult to show that {ξ_t : t ≥ 0} is a Markov process on X^d_*. The space C of local functions forms a core for the generator L = L_0 + L_θ given by

(L_0 f)(ξ) = Σ_{x,y ∈ Z^d_*} p(y − x) ξ(x) [1 − ξ(y)] [f(σ^{x,y}ξ) − f(ξ)] ,      (1.2)
(L_θ f)(ξ) = Σ_{z ∈ Z^d} p(z) [1 − ξ(z)] [f(θ_z ξ) − f(ξ)] .

The first part of the generator takes into account the jumps of the environment, while the second one corresponds to the jumps of the tagged particle.

Denote by {S(t) : t ≥ 0} the semigroup of this Markov process.

For 0 ≤ α ≤ 1, let ν_α be the Bernoulli product measure on X^d_* with marginals given by

ν_α{ξ : ξ(x) = 1} = α

for x in Z^d_*. An elementary computation shows that ν_α is an invariant state for the Markov process {ξ_t : t ≥ 0}.

As in the previous chapter, the semigroup {S(t) : t ≥ 0} extends to a Markov semigroup on L^2(ν_α) whose generator L_{ν_α} is the closure of L in L^2(ν_α). Since the density α remains fixed, to keep notation simple we denote L_{ν_α} by L and by D(L) the domain of L in L^2(ν_α).

Hereafter, L* represents the generator defined by (1.2) associated to the transition probability p*(x) = p(−x). An elementary computation shows that L* is the adjoint of L in L^2(ν_α). In particular, in the symmetric case p(−x) = p(x), L is self-adjoint with respect to each ν_α.

Denote by ⟨·, ·⟩_{ν_α} the scalar product in L^2(ν_α). For any local function f,

⟨f, (−L)f⟩_{ν_α} =: D(f) = D_0(f) + D_θ(f) ,      (1.3)

where

D_0(f) = (1/2) Σ_{x,y ∈ Z^d_*} s(y − x) ∫ ξ(x) [1 − ξ(y)] (T_{x,y} f)(ξ)^2 ν_α(dξ) ,
D_θ(f) = (1/2) Σ_{z ∈ Z^d} s(z) ∫ [1 − ξ(z)] (T_z f)(ξ)^2 ν_α(dξ) .

In this formula and below, s(·) (resp. a(·)) stands for the symmetric (resp. anti-symmetric) part of the probability p(·): s(x) = (1/2){p(x) + p(−x)}, a(x) = (1/2){p(x) − p(−x)}. Moreover, for a function g in L^2(ν_α),

(T_{x,y} g)(ξ) = g(σ^{x,y}ξ) − g(ξ) ,   (T_z g)(ξ) = g(θ_z ξ) − g(ξ) .

Since (T_{x,y} f)(ξ) vanishes unless ξ(x) ≠ ξ(y) and since T_{x,y} f = T_{y,x} f, we may rewrite D_0(f) as

D_0(f) = (1/4) Σ_{x,y ∈ Z^d_*} s(y − x) ∫ (T_{x,y} f)(ξ)^2 ν_α(dξ) .

The explicit formulae for the Dirichlet forms written above hold also for functions f in the domain D(L) of the generator, and the series on the right-hand side converge absolutely. This can be proved by an approximation argument (cf. Lemma 4.4.3 in Liggett (1985)).

We claim that the measures {ν_α : 0 ≤ α ≤ 1} are ergodic in all but one degenerate case:

Theorem 4.1. Assume that d ≥ 2, or that d = 1 and p(·) is not nearest neighbor: Σ_{x ≠ ±1} p(x) > 0. Then, for any 0 ≤ α ≤ 1, ν_α is ergodic for L.

Proof. The proof follows closely the one of Theorem 3.3.2. Fix a function f ∈ L^2(ν_α) invariant for the (L^2 extension of the) semigroup generated by L. Then f is in the domain of L and L f = 0. Multiplying both sides of this equation by f and integrating, we obtain that

(1/4) Σ_{x,y ∈ Z^d_*} s(y − x) ∫ [f(σ^{x,y}ξ) − f(ξ)]^2 ν_α(dξ)
    + (1/2) Σ_{x ∈ Z^d} s(x) ∫ [1 − ξ(x)] [f(θ_x ξ) − f(ξ)]^2 ν_α(dξ) = 0 .

Under our assumptions (d ≥ 2, or d = 1 and p not nearest neighbor), the support of s(·) generates Z^d. Hence, for any x, y ∈ Z^d_*,

f(σ^{x,y}ξ) = f(ξ)   ν_α–a.e.

By De Finetti's theorem we conclude that f is constant ν_α–a.e. □

Remark 4.2. Without any further mention, we exclude from now on the degenerate one-dimensional case with only nearest-neighbor jumps.

Denote by P_{ν_α} the probability measure on the path space D(R_+, X^d_*) induced by the Markov process ξ_t starting from the ergodic stationary measure ν_α, and by E_{ν_α} the expectation with respect to P_{ν_α}. The main results of this chapter state a law of large numbers and a central limit theorem for the position of the tagged particle:

Theorem 4.3. Fix 0 ≤ α ≤ 1. Then, as t ↑ ∞, Z_t/t converges P_{ν_α}–almost surely to (1 − α)m, where m = Σ_{x ∈ Z^d} x p(x).

Theorem 4.4. Assume that m = 0 or that d ≥ 3. Then, under P_{ν_α},

(Z_t − (1 − α)mt)/√t

converges in distribution, as t ↑ ∞, to a mean-zero Gaussian random vector with covariance matrix denoted by D(α). As a quadratic form, D(α) is strictly positive and finite: there exists a strictly positive constant C(p, α), depending on p and on the density α, such that

C(p, α)(1 − α)|a|^2 ≤ a · D(α)a ≤ C_0(1 − α)|a|^2   for all a in R^d.

The matrix D(α) is called the self-diffusion matrix of the exclusion process. In the statement of this theorem and throughout this chapter, C_0 stands for a constant which depends only on the transition probability p and which may change from line to line.

Remark 4.5. Before proceeding, note that:

(i) As in the previous sections, this convergence holds in probability with respect to the initial stationary measure ν_α (cf. definition xxx).

(ii) Denote by Z^N_t, N ≥ 1, t ≥ 0, the rescaled position of the tagged particle speeded up by N:

Z^N_t = (Z_{tN} − tN(1 − α)m)/√N .

An invariance principle holds: under P_{ν_α}, the sequence of processes {Z^N_t : t ≥ 0} converges in D(R_+, R^d) to a d-dimensional Brownian motion with diffusion matrix D(α).

(iii) It is conjectured that this theorem should be valid also in dimensions 1 and 2 in the case m ≠ 0, but this is still an open problem. Actually, in these cases we are not even able to prove that the asymptotic variance is finite.

The proof of Theorem 4.4 follows the same ideas presented in Chapter 3 for additive functionals of simple exclusion processes. Some complications arise, however, due to the presence of the highly nonlocal operators associated to the translations.

2 Some elementary martingales

It is useful to represent the position of the tagged particle in terms of elementary orthogonal martingales associated to the jumps of the process.

For z such that p(z) > 0 and for 0 ≤ s < t, denote by N^z_{[s,t]} the total number of jumps of the tagged particle from the origin to z in the time interval [s, t]. In the same way, for x, y in Z^d_* such that p(y − x) > 0, denote by N^{x,y}_{[s,t]} the total number of jumps of a particle from x to y in the time interval [s, t]. Let N^z_t = N^z_{[0,t]}, N^{x,y}_t = N^{x,y}_{[0,t]}.

Lemma 4.6. For x, y, z in Z^d_* such that p(z) > 0, p(y − x) > 0, let

M^z_t = N^z_t − ∫_0^t p(z)[1 − ξ_s(z)] ds ,   M^{x,y}_t = N^{x,y}_t − ∫_0^t p(y − x) ξ_s(x)[1 − ξ_s(y)] ds .

Then {M^z_t : p(z) > 0}, {M^{x,y}_t : x, y ∈ Z^d_*, p(y − x) > 0} are orthogonal martingales with quadratic variations ⟨M^z⟩_t, ⟨M^{x,y}⟩_t given by

⟨M^z⟩_t = ∫_0^t p(z)[1 − ξ_s(z)] ds ,   ⟨M^{x,y}⟩_t = ∫_0^t p(y − x) ξ_s(x)[1 − ξ_s(y)] ds .

Proof. Fix x, y, z in Z^d_* such that p(z) > 0, p(y − x) > 0. It is easy to check that (ξ_t, N^z_t, N^{x,y}_t) is a Markov process on X^d_* × Z × Z with generator L^{x,y,z} given by

(L^{x,y,z} f)(ξ, k, j)
    = p(y − x) ξ(x) [1 − ξ(y)] {f(σ^{x,y}ξ, k, j + 1) − f(ξ, k, j)}
    + p(z) [1 − ξ(z)] {f(θ_z ξ, k + 1, j) − f(ξ, k, j)}
    + Σ_{x′,y′ ∈ Z^d_*} p(y′ − x′) ξ(x′) [1 − ξ(y′)] {f(σ^{x′,y′}ξ, k, j) − f(ξ, k, j)}
    + Σ_{z′ ≠ z} p(z′)[1 − ξ(z′)] {f(θ_{z′}ξ, k, j) − f(ξ, k, j)} .

In this formula the first sum is carried over all pairs (x′, y′) in Z^d_* × Z^d_* different from (x, y). Dynkin's formula applied to the functions F_1(ξ, k, j) = k, F_2(ξ, k, j) = j shows that M^z_t, M^{x,y}_t are martingales with quadratic variations as stated in the lemma.

To show that these martingales are orthogonal, we need to prove that M^z_t M^{x,y}_t is a martingale. Let W_z(ξ) = p(z)[1 − ξ(z)], W_{x,y}(ξ) = p(y − x) ξ(x)[1 − ξ(y)]. Dynkin's formula applied to F(ξ, k, j) = kj gives that

N^z_t N^{x,y}_t − ∫_0^t {N^z_s W_{x,y}(ξ_s) + N^{x,y}_s W_z(ξ_s)} ds

is a martingale. Rewriting the processes N^z_t, N^{x,y}_t as M^z_t + ∫_0^t W_z(ξ_s) ds, M^{x,y}_t + ∫_0^t W_{x,y}(ξ_s) ds and integrating by parts, we obtain that

M^z_t M^{x,y}_t + ∫_0^t {M^z_s W_{x,y}(ξ_s) + M^{x,y}_s W_z(ξ_s)} ds
    + ∫_0^t W_{x,y}(ξ_s) ds ∫_0^t W_z(ξ_s) ds − ∫_0^t {N^z_s W_{x,y}(ξ_s) + N^{x,y}_s W_z(ξ_s)} ds

is a martingale. Rewriting the martingales M^z_s, M^{x,y}_s inside the integral in terms of the jump processes N^z, N^{x,y}, we see that the expressions involving these jumps cancel. Thus

M^z_t M^{x,y}_t − ∫_0^t { ∫_0^s W_z(ξ_r) dr } W_{x,y}(ξ_s) ds − ∫_0^t { ∫_0^s W_{x,y}(ξ_r) dr } W_z(ξ_s) ds
    + ∫_0^t W_{x,y}(ξ_s) ds ∫_0^t W_z(ξ_s) ds

is a martingale. An integration by parts shows that these integral terms cancel, so that M^z_t M^{x,y}_t is a martingale as claimed.

The same proof applies to any pair of distinct martingales in the set {M^z_t : p(z) > 0} ∪ {M^{x,y}_t : x, y ∈ Z^d_*, p(y − x) > 0}. This concludes the proof of the lemma. □

These martingales associated to the jumps of the Markov process are called the elementary martingales of the process, because any martingale given by Dynkin's formula can be written in terms of them. Fix a local function f : X^d_* → R. By Dynkin's formula,

M^f_t = f(ξ_t) − f(ξ_0) − ∫_0^t (L f)(ξ_s) ds      (2.1)

is a martingale. Since f is a local function, rewriting f(ξ_t) − f(ξ_0) as the sum of the contributions of all jumps from or to a site contained in the support of f in the time interval [0, t], we obtain that

f(ξ_t) − f(ξ_0) = Σ_{x,y ∈ Z^d_*} ∫_0^t (T_{x,y} f)(ξ_{s−}) dN^{x,y}_s + Σ_{z ∈ Z^d} ∫_0^t (T_z f)(ξ_{s−}) dN^z_s

so that

M^f_t = Σ_{x,y ∈ Z^d_*} ∫_0^t (T_{x,y} f)(ξ_{s−}) dM^{x,y}_s + Σ_{z ∈ Z^d} ∫_0^t (T_z f)(ξ_{s−}) dM^z_s .      (2.2)

In (2.2) and below, we set M^z_t = 0 if p(z) = 0 and M^{x,y}_t = 0 if p(y − x) = 0.

By (2.0.2), we know that

E_{ν_α}[(M^f_t)^2] = 2t D(f) .      (2.3)


We now recover this identity taking advantage of the representation of M^f_t in terms of the elementary martingales. As we shall see at the end of this section, this computation allows us to describe the limits of L^2(P_{ν_α})–Cauchy sequences of martingales M^f_t.

Since the elementary martingales are orthogonal, the quadratic variation of the martingale M^f defined by (2.2) is given by

⟨M^f⟩_t = Σ_{x,y ∈ Z^d_*} p(y − x) ∫_0^t ξ_s(x) [1 − ξ_s(y)] (T_{x,y} f)(ξ_s)^2 ds
    + Σ_{z ∈ Z^d} p(z) ∫_0^t [1 − ξ_s(z)] (T_z f)(ξ_s)^2 ds ,

so that

t^{−1} E_{ν_α}[(M^f_t)^2] = Σ_{x,y ∈ Z^d_*} p(y − x) ∫ ξ(x) [1 − ξ(y)] (T_{x,y} f)(ξ)^2 ν_α(dξ)
    + Σ_{z ∈ Z^d} p(z) ∫ [1 − ξ(z)] (T_z f)(ξ)^2 ν_α(dξ) .

To obtain the expression of the Dirichlet form D(f) given in (1.3), we need to replace p by s and to remove the indicator ξ(x)[1 − ξ(y)] in the first sum. Since

(T_{x,y} f)(ξ) = (T_{y,x} f)(ξ) ,   (T_{x,y} f)(σ^{x,y}ξ) = −(T_{x,y} f)(ξ) ,   (T_z f)(θ_{−z}ξ) = −(T_{−z} f)(ξ) ,      (2.4)

the change of variables ξ′ = θ_z ξ, ξ″ = σ^{x,y}ξ permits us to replace p by s. Observing now that

(T_{x,y} f)(ξ)^2 {ξ(x)[1 − ξ(y)] + ξ(y)[1 − ξ(x)]} = (T_{x,y} f)(ξ)^2 ,      (2.5)

we obtain (2.3).
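Identity (2.5) is a statement about the four possible values of the pair (ξ(x), ξ(y)) and can be checked exhaustively: the exchange difference vanishes when ξ(x) = ξ(y), and the indicator in braces equals 1 when ξ(x) ≠ ξ(y). A minimal sketch, with a toy function f that depends only on these two occupation variables:

```python
import itertools
import random

random.seed(0)
# A toy "local function" f, seen only through the pair (xi(x), xi(y)).
f = {bits: random.random() for bits in itertools.product((0, 1), repeat=2)}

def T(bits):
    # (T_{x,y} f)(xi) = f(sigma^{x,y} xi) - f(xi): the exchange swaps the two bits
    a, b = bits
    return f[(b, a)] - f[(a, b)]

for a, b in itertools.product((0, 1), repeat=2):
    indicator = a * (1 - b) + b * (1 - a)      # xi(x)[1 - xi(y)] + xi(y)[1 - xi(x)]
    assert T((a, b)) ** 2 * indicator == T((a, b)) ** 2
```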

We claim that the representation (2.2) extends to functions in the domain D(L):

Lemma 4.7. Fix a function u in the domain D(L). Then the martingale M^u_t, defined by (2.1) with u in place of f, can be represented as in (2.2):

M^u_t = Σ_{x,y ∈ Z^d_*} ∫_0^t (T_{x,y} u)(ξ_{s−}) dM^{x,y}_s + Σ_{z ∈ Z^d} ∫_0^t (T_z u)(ξ_{s−}) dM^z_s .

Proof. Fix such a function u. Since u belongs to the domain of the generator and since the space of local functions forms a core for the generator in L^2(ν_α), there exists a sequence of local functions {f_n : n ≥ 1} such that f_n, L f_n converge in L^2(ν_α) to u, L u, respectively. In particular, the martingale M^{f_n}_t, defined by (2.1) with f_n in place of f, converges in L^2(ν_α), as n ↑ ∞, to the martingale M^u_t, defined by (2.1) with u in place of f. This holds of course for every t > 0.

In view of (1.3), (2.2) with u in place of f defines a martingale in L^2(ν_α), because the partial sums form a Cauchy sequence in L^2(ν_α). Denote this martingale by m^u_t.

To show that M^u_t = m^u_t, it is enough to show that M^{f_n}_t converges in L^2(ν_α) to m^u_t. Since f_n is a local function, the martingale M^{f_n}_t can be represented through the elementary martingales M^z, M^{x,y} by (2.2). Since the martingales M^z, M^{x,y} are orthogonal, by the computation performed right after (2.3),

t^{−1} E_{ν_α}[(M^{f_n}_t − m^u_t)^2] = 2 D(f_n − u) .

This expression vanishes as n ↑ ∞ by the choice of the sequence {f_n : n ≥ 1}. This concludes the proof of the lemma. □

3 The spaces H_1, H_{−1}

We examine in this section the L^2(P_{ν_α}) limits of the martingales M^f_t, where f is a local function. As in Section 2.2, denote by H_1 the Hilbert space generated by the space C of local functions endowed with the scalar product ⟨f, (−L)g⟩_{ν_α}. Let ‖·‖_1 be the norm of H_1. We have seen in (1.3) that

‖f‖_1^2 = D_0(f) + D_θ(f)

for functions f in C. This identity extends to the domain D(L) because C forms a core for L.

Denote by H_{−1} the dual space of H_1, defined as in (2.2.2) with ν_α in place of π. Recall that the H_{−1} norm is given by the variational formula

‖f‖_{−1}^2 = sup_{g ∈ C} { 2⟨f, g⟩_{ν_α} − ‖g‖_1^2 }

for f in C.
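A finite-dimensional analogue may clarify this variational formula. If the role of the symmetric part of −L is played by a positive definite matrix A, then sup_g {2⟨f, g⟩ − ⟨g, Ag⟩} is attained at g = A^{-1}f and equals ⟨f, A^{-1}f⟩, the analogue of ‖f‖_{−1}^2. A numerical sketch of this fact (the matrix A is our own stand-in, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # positive definite stand-in for the symmetric part of -L
f = rng.standard_normal(n)

# The supremum of 2<f,g> - <g,Ag> is attained at g* = A^{-1} f and equals <f, A^{-1} f>
g_star = np.linalg.solve(A, f)
sup_val = 2 * f @ g_star - g_star @ (A @ g_star)
assert np.isclose(sup_val, f @ np.linalg.solve(A, f))

for _ in range(200):               # every other g gives a smaller value
    g = rng.standard_normal(n)
    assert 2 * f @ g - g @ (A @ g) <= sup_val + 1e-9
```

The identity follows from completing the square: 2⟨f, g⟩ − ⟨g, Ag⟩ = ⟨f, A^{-1}f⟩ − ⟨g − A^{-1}f, A(g − A^{-1}f)⟩.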

As in (2.2.3), (2.2.4), it is easy to check from this variational formula that, for every function f in D(L) and every function g in L^2(ν_α) ∩ H_{−1},

⟨f, g⟩_{ν_α} ≤ ‖f‖_1 ‖g‖_{−1} .

The same variational formula permits us to show that a function f in D(L) belongs to H_{−1} if and only if there exists a finite constant C_1 such that

⟨f, g⟩_{ν_α} ≤ C_1 ‖g‖_1      (3.1)

for every g in C. Since every function in the domain D(L) can be approximated by local functions in H_1, one just needs to prove (3.1) for local functions. In this case, ‖f‖_{−1} ≤ C_1.

Denote by L 2α ) the space of sequences Ψ = { Ψ z : X ∗ d → R ; p(z) >

0 } × { Ψ x,y : X ∗ d → R ; x, y ∈ Z d

∗ , p(y − x) > 0 } of L 2α ) functions such that X

z∈

Zd

p(z)E ν

α

h

[1 − ξ(z)]Ψ z (ξ) 2 i

+ X

x,y∈

Zd

p(y − x)E ν

α

h

ξ(x) [1 − ξ(y)] Ψ x,y (ξ ) 2 i is finite. L 2α ) is endowed with the scalar product h· , ·i defined by

h Ψ , Φ i = X

z∈

Zd

p(z)E ν

α

h

[1 − ξ(z)] Ψ z (ξ) Φ z (ξ) i

+ X

x,y∈

Zd

p(y − x)E ν

α

h ξ(x) [1 − ξ (y)] Ψ x,y (ξ)Φ x,y (ξ) i .

A function u in the domain D(L) induces a sequence in L^2(ν_α), denoted by Ψ^u and given by Ψ^u_z = T_z u, Ψ^u_{x,y} = T_{x,y} u. Moreover, for Ψ in L^2(ν_α), M^Ψ defined by (2.2), with Ψ_z, Ψ_{x,y} in place of T_z f, T_{x,y} f, defines a square integrable martingale such that E_{ν_α}[(M^Ψ_t)^2] = t ⟨Ψ, Ψ⟩. In view of (2.4), (2.5), denote by L^2_0(ν_α) the closed subspace of L^2(ν_α) composed of all sequences Ψ such that, ν_α–a.s.,

Ψ_{x,y}(ξ) = Ψ_{y,x}(ξ) ,   Ψ_{x,y}(σ^{x,y}ξ) = −Ψ_{x,y}(ξ) ,
Ψ_{x,y}(ξ)^2 {ξ(x)[1 − ξ(y)] + ξ(y)[1 − ξ(x)]} = Ψ_{x,y}(ξ)^2 ,      (3.2)
Ψ_z(θ_{−z}ξ) = −Ψ_{−z}(ξ) .

Repeating the computation performed right after (2.3) and taking advantage of the relations (3.2), we obtain that

t^{−1} E_{ν_α}[(M^Ψ_t)^2] = ⟨Ψ, Ψ⟩ = (1/2) Σ_{x,y ∈ Z^d_*} s(y − x) ∫ Ψ_{x,y}(ξ)^2 ν_α(dξ)
    + Σ_{z ∈ Z^d} s(z) ∫ [1 − ξ(z)] Ψ_z(ξ)^2 ν_α(dξ) .      (3.3)

Lemma 4.8. Consider a sequence of local functions {f_n : n ≥ 1} which forms a Cauchy sequence in H_1. Then there exists Ψ in L^2_0(ν_α) such that M^{f_n}_t converges to M^Ψ_t in L^2(P_{ν_α}) for all t ≥ 0.

Proof. A sequence {f_n : n ≥ 1} of local functions forms a Cauchy sequence in H_1 if and only if {Ψ^{f_n} : n ≥ 1} forms a Cauchy sequence in L^2(ν_α). In particular, Ψ^{f_n} converges in L^2(ν_α) to some Ψ, which belongs to L^2_0(ν_α) because this space is closed. By (3.3), M^{f_n}_t converges to M^Ψ_t in L^2(P_{ν_α}). □


4 Law of large numbers

We prove in this section Theorem 4.3. Recall the jump processes {N^z_t : t ≥ 0} defined in Section 2. The position at time t of the tagged particle is obtained by summing the jumps multiplied by their sizes:

Z_t = Σ_{z ∈ Z^d} z N^z_t = Σ_{z ∈ Z^d} z M^z_t + ∫_0^t Ṽ(ξ_s) ds ,      (4.1)

where Ṽ is the local function given by

Ṽ(ξ) = Σ_{x ∈ Z^d} x p(x) [1 − ξ(x)] .      (4.2)

Under the stationary measure P_{ν_α}, the quadratic variation of the vector-valued martingale M_t = Σ_{x ∈ Z^d} x M^x_t is bounded by C_0 t. In particular, as t ↑ ∞, M_t/t converges to 0 a.s. (cf. Theorem VII.9.3 in Feller 1971). On the other hand, since P_{ν_α} is ergodic, as t ↑ ∞, t^{−1} ∫_0^t Ṽ(ξ_s) ds converges a.s. to E_{ν_α}[Ṽ] = (1 − α)m. This proves the theorem. □

5 Central limit theorem

We prove in this section Theorem 4.4, following the strategy presented in Chapter 2. Recall the definition of Z_t, the position of the tagged particle. By (4.1),

Z_t − (1 − α)tm = Σ_{z ∈ Z^d} z M^z_t + ∫_0^t V(ξ_s) ds ,

where V is the mean-zero local function

V(ξ) = Ṽ(ξ) − (1 − α)m = Σ_{x ∈ Z^d} x p(x) {α − ξ(x)} .      (5.1)

We have seen in Chapter 2 that the proof of a central limit theorem for additive functionals of Markov processes relies on bounds in H_{−1}. Fix a vector a ∈ R^d and let V_a = a · V. Denote by u_λ the solution of the resolvent equation

λ u_λ − L u_λ = V_a .      (5.2)

We prove in this section Theorem 4.4 assuming that, for every vector a ∈ R^d,

V_a ∈ H_{−1}   and   sup_{0 < λ ≤ 1} ‖L u_λ‖_{−1} < ∞ .      (5.3)

By Lemma 2.2.9, it follows from these conditions that

lim_{λ→0} λ ⟨u_λ, u_λ⟩_{ν_α} = 0   and   u_λ converges in H_1 as λ ↓ 0.      (5.4)

The strategy of the proof of Theorem 4.4 relies on the ideas presented in Chapter 2. The goal is to represent the additive functional ∫_0^t V_a(ξ_s) ds as the sum of a martingale m_t and a negligible term, and then to use the central limit theorem for the martingale M_t + m_t, where M_t = Σ_z (z · a) M^z_t. For λ > 0, let m^λ_t be the martingale

m^λ_t = u_λ(ξ_t) − u_λ(ξ_0) − ∫_0^t (L u_λ)(ξ_s) ds .

By Lemma 4.7, the martingale m^λ_t can be represented as

Σ_{x,y ∈ Z^d_*} ∫_0^t Ψ^λ_{x,y}(ξ_{s−}) dM^{x,y}_s + Σ_{z ∈ Z^d} ∫_0^t Ψ^λ_z(ξ_{s−}) dM^z_s ,

where Ψ^λ_{x,y} = T_{x,y} u_λ, Ψ^λ_z = T_z u_λ. The resolvent equation permits us to write the position of the tagged particle Z_t as

Z_t · a − (1 − α)t (m · a) = M_t + m^λ_t + R^λ_t ,      (5.5)

where M_t is the martingale Σ_z (z · a) M^z_t and where the remainder R^λ_t is given by

R^λ_t = u_λ(ξ_0) − u_λ(ξ_t) + λ ∫_0^t u_λ(ξ_s) ds .

We first claim that

Lemma 4.9. For every t > 0, the martingale m^λ_t converges in L^2(P_{ν_α}) to some martingale m_t as λ ↓ 0.

Proof. Since the sequence u_λ converges in H_1 as λ ↓ 0, by Lemma 4.8 the martingale m^λ_t converges in L^2(P_{ν_α}) to a martingale m_t = M^Ψ_t associated to a sequence Ψ in L^2_0(ν_α). □

It follows from this lemma that the remainder R^λ_t appearing in (5.5) converges in L^2(ν_α) as λ ↓ 0, so that

Z_t · a − (1 − α)t (m · a) = M_t + m_t + R_t ,      (5.6)

where m_t is a martingale in L^2(ν_α).

Lemma 4.10. t^{−1/2} R_t vanishes in L^2(ν_α) as t ↑ ∞.

The proof of this lemma is similar to the one of Lemma 2.2.8 and relies on (5.4).
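The resolvent scheme (5.2)–(5.4) can be illustrated on a finite-state reversible chain, where everything reduces to explicit linear algebra. In the sketch below (a toy model of our own, not the exclusion process), L is a symmetric generator on five states with rates bounded below, V a mean-zero function, and one observes that λ⟨u_λ, u_λ⟩ decreases to 0 while u_λ converges as λ ↓ 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
R = rng.random((n, n)) + 1.0          # jump rates in [1, 2] between every pair of states
R = (R + R.T) / 2                     # symmetric rates: reversible w.r.t. the uniform pi
np.fill_diagonal(R, 0.0)
L = R - np.diag(R.sum(axis=1))        # Markov generator: rows sum to zero
pi = np.full(n, 1.0 / n)
V = rng.standard_normal(n)
V = V - pi @ V                        # center V: the analogue of the mean-zero V_a

lams = (1.0, 0.1, 0.01, 0.001)
u = {lam: np.linalg.solve(lam * np.eye(n) - L, V) for lam in lams}   # (5.2)

# lambda <u_lam, u_lam>_pi -> 0 and u_lam converges as lambda -> 0, as in (5.4);
# since V is orthogonal to the constants, the limit is -L^+ V (pseudo-inverse)
norms = [lam * ((u[lam] ** 2) @ pi) for lam in lams]
assert norms[-1] < norms[0]
u_limit = -np.linalg.pinv(L) @ V
assert np.allclose(u[0.001], u_limit, atol=1e-2)
```

With a uniform lower bound on the rates the spectral gap of −L is of order n, which is what makes both limits visible already at λ = 10^{-3}.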

Since both martingales M_t and m_t are written in terms of the elementary martingales, the quadratic variation of the sum is easy to compute and equals

⟨M + m⟩_t = Σ_{x,y ∈ Z^d_*} p(y − x) ∫_0^t ξ_s(x) [1 − ξ_s(y)] Ψ_{x,y}(ξ_s)^2 ds
    + Σ_{z ∈ Z^d} p(z) ∫_0^t [1 − ξ_s(z)] {a · z + Ψ_z(ξ_s)}^2 ds .

By the ergodic theorem under P_{ν_α}, t^{−1} ⟨M + m⟩_t converges a.s. Therefore, by Lemma 2.2.1, t^{−1/2} {M_t + m_t}, and hence t^{−1/2} {Z_t · a − (1 − α)t (m · a)}, converges in distribution to a mean-zero Gaussian distribution with variance a · D(α)a satisfying

a · D(α)a = Σ_{x,y ∈ Z^d_*} p(y − x) ∫ ξ(x) [1 − ξ(y)] Ψ_{x,y}(ξ)^2 ν_α(dξ)
    + Σ_{z ∈ Z^d} p(z) ∫ [1 − ξ(z)] {a · z + Ψ_z(ξ)}^2 ν_α(dξ) .

Since Ψ belongs to L^2_0(ν_α), a computation analogous to the one presented just after (2.3) shows that

a · D(α)a = (1/2) Σ_{x,y ∈ Z^d_*} s(y − x) ∫ Ψ_{x,y}(ξ)^2 ν_α(dξ)      (5.7)
    + Σ_{z ∈ Z^d} s(z) ∫ [1 − ξ(z)] {a · z + Ψ_z(ξ)}^2 ν_α(dξ) .

Nothing in principle prevents the asymptotic variance D(α) from vanishing or from being +∞, and the proof of the central limit theorem would be incomplete without a strictly positive lower bound and a finite upper bound. Such bounds are derived in Section 8 below. This concludes the proof of Theorem 4.4 under the assumptions (5.3) on the solution of the resolvent equation. The purpose of the next sections is to show that (5.3) holds if m = 0 or if d ≥ 3.

We close this section by showing that the function V_a belongs to H_{−1} if m = 0, and that the solution of the resolvent equation satisfies the assumptions (5.3) provided p(−x) = p(x), proving the central limit theorem for the tagged particle in the symmetric case.

Lemma 4.11. Assume that m = 0. Then the local function V_a introduced in (5.1) belongs to H_{−1} and ‖V_a‖_{−1}^2 ≤ C_0 χ(α) |a|^2.

Proof. In view of (3.1) and the remarks thereafter, to show that V_a belongs to H_{−1} we need to prove that there exists a finite constant C_0 such that

⟨f, V_a⟩_{ν_α} ≤ C_0 √χ(α) ‖f‖_1 |a|

for every local function f.

Fix a local function f. In the mean-zero case m = 0, V_a can be rewritten as

Σ_{x ∈ Z^d} (a · x) p(x) {ξ(e) − ξ(x)} ,

where e is any fixed site of Z^d_*. Since the support of s(·) generates Z^d, for each x such that p(x) > 0 there exists a path e = y_0, . . . , y_n = x from e to x which avoids the origin and such that s(y_{i+1} − y_i) > 0. Since

⟨ξ(e) − ξ(x), f⟩_{ν_α} = − Σ_{i=0}^{n−1} ⟨ξ(y_{i+1}) − ξ(y_i), f⟩_{ν_α} ,

performing the change of variables ξ′ = σ^{y_i, y_{i+1}} ξ, we may rewrite this sum as

(1/2) Σ_{i=0}^{n−1} ⟨ξ(y_{i+1}) − ξ(y_i), T_{y_i, y_{i+1}} f⟩_{ν_α} .

Clearly, this expression is less than or equal to

(1/2) n A α(1 − α) + (1/4A) Σ_{i=0}^{n−1} E_{ν_α}[(T_{y_i, y_{i+1}} f)^2]

for every A > 0. Summing over all x such that p(x) > 0, we get that

⟨V_a, f⟩_{ν_α} ≤ C_0 A α(1 − α) |a|^2 + (C_0/A) ⟨(−L_0)f, f⟩_{ν_α}

for all A > 0 and some finite constant C_0. It remains to minimize over A to conclude. □

The previous lemma shows that the first condition in (5.3) holds in the mean zero case. On the other hand, since the process is reversible in the symmetric case, the second assumption in (5.3) follows from Subsection 2.6.1.

Note that in the previous lemma only the piece D_0 of the Dirichlet form, associated to the jumps of the environment, was used. This part plays an important role in the sequel and deserves its own notation.

Let H_{0,1} be the Hilbert space generated by the local functions C endowed with the scalar product ⟨f, (−L^s_0)g⟩_{ν_α}, where L^s_0 stands for the symmetric part of L_0. Denote also by H_{0,−1} the dual space of H_{0,1}, given by (2.2.2) with ν_α in place of π. Clearly,

‖f‖_{0,1} ≤ ‖f‖_1 ,   so that   ‖f‖_{−1} ≤ ‖f‖_{0,−1}      (5.8)

for all local functions f.

The proof of Lemma 4.11 shows in fact that V_a belongs to H_{0,−1} and that

‖V_a‖_{0,−1}^2 ≤ C_0 χ(α) |a|^2      (5.9)

for all a in R^d.


6 Strong sector property for the zero mean case

We examine in this section the mean-zero case Σ_x x p(x) = 0. In the previous section we reduced the proof of the central limit theorem to the verification of the conditions stated in (5.3). The first assumption was proved in Lemma 4.11. In view of Subsection 2.6.2, to derive the second assumption in (5.3) it is enough to prove that the generator L satisfies a sector condition. This is the content of the main result of this section, whose proof relies on the decomposition of a mean-zero probability p into cycle probability measures, as seen in Section 3.3. The argument is reminiscent of the one of Proposition 3.3.4, but the presence of the highly nonlocal shift operator L_θ introduces further complications.

Proposition 4.12. There exists a finite constant C_0, depending only on the probability measure p, such that

⟨f, (−L)g⟩_{ν_α}^2 ≤ C_0 ⟨f, (−L)f⟩_{ν_α} ⟨g, (−L)g⟩_{ν_α}

for all local functions f, g.

Proof. By Lemma 3.3.5 and Lemma 3.3.6, we may assume that p is a cycle probability:

p(x) = (1/n) Σ_{j=1}^{n} 1{x = y_j − y_{j−1}}

for some n ≥ 1, the length of the cycle, and some set C = {y_0, . . . , y_{n−1}, y_n = y_0}. We may of course assume that the cycle is irreducible, in the sense that y_i ≠ y_j for i ≠ j, 0 ≤ i, j ≤ n − 1.
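A cycle probability is automatically mean zero, since the jumps y_j − y_{j−1} telescope around the cycle. A quick check on the cycle {0, −e_2, e_1} in Z^2, the same example used at the end of this proof:

```python
# Cycle in Z^2: y_0 = (0,0), y_1 = (0,-1), y_2 = (1,0), y_3 = y_0
cycle = [(0, 0), (0, -1), (1, 0), (0, 0)]
n = len(cycle) - 1

p = {}
for j in range(1, n + 1):                       # p(x) = (1/n) sum_j 1{x = y_j - y_{j-1}}
    step = (cycle[j][0] - cycle[j - 1][0], cycle[j][1] - cycle[j - 1][1])
    p[step] = p.get(step, 0.0) + 1.0 / n

assert abs(sum(p.values()) - 1.0) < 1e-12       # p is a probability measure
m = tuple(sum(x[k] * w for x, w in p.items()) for k in (0, 1))
assert all(abs(c) < 1e-12 for c in m)           # its mean vanishes: sum_j (y_j - y_{j-1}) = 0
```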

Let C + x be the cycle {y_0 + x, . . . , y_{n−1} + x, y_n + x}. Since the cycle C is irreducible, there are exactly n cycles of the form C + x which intersect the origin: C − y_0, . . . , C − y_{n−1}.

For a cycle C + x which does not intersect the origin, let L^0_{C+x} be the generator defined by

(L^0_{C+x} f)(ξ) = (1/n) Σ_{k=0}^{n−1} ξ(y_k + x)[1 − ξ(y_{k+1} + x)] {f(σ^{y_k+x, y_{k+1}+x} ξ) − f(ξ)} .

In contrast, for the cycle C − y_j, 0 ≤ j ≤ n − 1, let L^0_{C−y_j} be the generator defined by

(L^0_{C−y_j} f)(ξ) = (1/n) Σ_{0 ≤ k ≤ n−1, k ≠ j−1, j} ξ(y_k − y_j)[1 − ξ(y_{k+1} − y_j)] {f(σ^{y_k−y_j, y_{k+1}−y_j} ξ) − f(ξ)} .

Notice that we suppressed the jump from y_{j−1} − y_j to the origin, because the origin is always occupied, and the jump from the origin to y_{j+1} − y_j, because this jump does not appear in the generator associated to the environment jumps. Note also that in the formulas for the generators L^0_{C+x}, L^0_{C−y_j} we may remove the factors ξ(y_k + x), ξ(y_k − y_j) without modifying the generators.

For the cycle probability introduced above, the generator associated to the jumps of the tagged particle is written as

(L_θ f)(ξ) = (1/n) Σ_{k=0}^{n−1} [1 − ξ(y_{k+1} − y_k)] {f(θ_{y_{k+1}−y_k} ξ) − f(ξ)} .

Let L_1 = Σ_{x ∉ C} L^0_{C−x} be the piece of the generator L whose cycles do not intersect the origin, and let L_2 = L_θ + Σ_{0≤j≤n−1} L^0_{C−y_j} be the remaining part of the generator. By Lemma 3.3.8,

⟨f, (−L_1)g⟩_{ν_α}^2 ≤ 16 n^4 ⟨f, (−L_1)f⟩_{ν_α} ⟨g, (−L_1)g⟩_{ν_α} .      (6.1)

We may of course replace L_1 by L on the right-hand side, since we will be adding only non-negative terms.

It remains to prove a similar bound for the generator L_2. The generator L_2 involves n(n − 2) measure-preserving transformations coming from the piece Σ_{0≤j≤n−1} L^0_{C−y_j} and n additional measure-preserving transformations coming from the piece L_θ. Denote by T_1, . . . , T_{n(n−1)} these transformations. We claim that there is a permutation s of {1, . . . , n(n−1)} such that T_{s(n(n−1))} ∘ · · · ∘ T_{s(1)} ξ = ξ, provided ξ has a hole at a specific site.

A rigorous proof of this property is lengthy and requires too much notation. We prefer to convince the reader with an example. Consider the cycle {0, −e_2, e_1} in Z^2. In this case the generators L_3 = Σ_{0≤j≤n−1} L^0_{C−y_j} and L_θ can be written as

\[
\begin{aligned}
(L_3 f)(\xi) \;=\;& [1-\xi(e_1)]\,\big\{ f(\sigma^{-e_2,e_1}\xi) - f(\xi) \big\} \;+\; [1-\xi(-e_1-e_2)]\,\big\{ f(\sigma^{-e_1,-e_1-e_2}\xi) - f(\xi) \big\} \\
&+\; [1-\xi(e_2)]\,\big\{ f(\sigma^{e_1+e_2,e_2}\xi) - f(\xi) \big\}\, , \\
(L_\theta f)(\xi) \;=\;& [1-\xi(-e_2)]\,\big\{ f(\theta_{-e_2}\xi) - f(\xi) \big\} \;+\; [1-\xi(-e_1)]\,\big\{ f(\theta_{-e_1}\xi) - f(\xi) \big\} \\
&+\; [1-\xi(e_1+e_2)]\,\big\{ f(\theta_{e_1+e_2}\xi) - f(\xi) \big\}\, .
\end{aligned}
\]

The six measure-preserving transformations are $\sigma^{-e_2,e_1}$, $\sigma^{-e_1,-e_1-e_2}$, $\sigma^{e_1+e_2,e_2}$, $\theta_{-e_2}$, $\theta_{-e_1}$, $\theta_{e_1+e_2}$. We have to estimate $\langle f, (-L_3-L_\theta)\, g\rangle_{\nu_\alpha}$. This expression consists of six terms. To fix ideas, consider the first one, which is multiplied by $1-\xi(e_1)$. A simple computation shows that

\[
\theta_{-e_1}\circ\sigma^{-e_1,-e_1-e_2}\circ\theta_{e_1+e_2}\circ\sigma^{e_1+e_2,e_2}\circ\theta_{-e_2}\circ\sigma^{-e_2,e_1}\,\xi \;=\; \xi\, ,
\]

provided $\xi(e_1)=0$. Similar identities hold for the other five cases. We may therefore write the generator $L_\theta + L_3$ as the generator $L$ in the statement of Lemma 3.3.7 and deduce that
\[
\langle f, (-L_3-L_\theta)\, g\rangle_{\nu_\alpha}^2 \;\le\; 16\, n^2\, \langle f, (-L_3-L_\theta) f\rangle_{\nu_\alpha}\, \langle g, (-L_3-L_\theta) g\rangle_{\nu_\alpha}\, ,
\]
where $n=6$. Here again we may replace the generator $L_3 + L_\theta$ by $L$ on the right-hand side. This bound, together with (6.1), proves the sector condition for the mean-zero exclusion process as seen from the tagged particle.

The decomposition of the dual generator obtained in the next section has the following structure: the first three operators are symmetric and do not alter the degree of a function; the fourth and fifth are anti-symmetric and also do not modify the degree of a function; finally, the last two operators change the degree by one.

7 Duality

The proof of the central limit theorem for the position of the tagged particle in the asymmetric exclusion process relies on the dual spaces and operators introduced in Section 3.4. The presence of the tagged particle and the corresponding shift operators create new difficulties.

To stress the similarity between the present context and that of the previous chapter, we use the same notation introduced in Section 3.4 to represent the different sets and operators.

For $n\ge 0$, denote by $E_n$ the collection of subsets of $\mathbb Z^d_*$ with $n$ points and let $E = \bigcup_{n\ge 0} E_n$. Note that these sets $E$, $E_n$ are different from the ones introduced in Section 3.4 because their elements do not contain the origin.

For each $A$ in $E$, let $\Psi_A$ be the local function
\[
\Psi_A \;=\; \prod_{x\in A} \frac{\xi(x)-\alpha}{\sqrt{\chi(\alpha)}}\, ,
\]
where $\chi(\alpha) = \alpha(1-\alpha)$. By convention, $\Psi_\varnothing = 1$. The class $\{\Psi_A : A\in E\}$ forms an orthonormal basis of $L^2(\nu_\alpha)$. For each $n\ge 1$, denote by $G_n$ the subspace of $L^2(\nu_\alpha)$ generated by $\{\Psi_A : A\in E_n\}$, so that $L^2(\nu_\alpha) = \bigoplus_{n\ge 0} G_n$. Functions in $G_n$ are said to have degree $n$.
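Orthonormality of the basis $\{\Psi_A\}$ can be seen factor by factor: for $x\in A\cap B$ the factor has second moment one, while for $x$ in the symmetric difference it has mean zero. A small numerical confirmation on three sites (our own illustration; the site labels and the value of $\alpha$ are arbitrary):

```python
from itertools import product, combinations
from math import sqrt, prod

alpha = 0.3
chi = alpha * (1 - alpha)
sites = [1, 2, 3]                     # three sites of Z^d_*, labels arbitrary

def psi(A, xi):
    """Psi_A(xi) = prod_{x in A} (xi(x) - alpha) / sqrt(chi)."""
    return prod((xi[x] - alpha) / sqrt(chi) for x in A)

def inner(A, B):
    """<Psi_A, Psi_B> under the Bernoulli(alpha) product measure."""
    total = 0.0
    for bits in product([0, 1], repeat=len(sites)):
        xi = dict(zip(sites, bits))
        w = prod(alpha if b else 1 - alpha for b in bits)
        total += w * psi(A, xi) * psi(B, xi)
    return total

subsets = [frozenset(c) for n in range(len(sites) + 1)
           for c in combinations(sites, n)]
ok = all(abs(inner(A, B) - (1.0 if A == B else 0.0)) < 1e-12
         for A in subsets for B in subsets)
print("orthonormal basis on 3 sites:", ok)
```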

Consider a local function $f$. Since $\{\Psi_A : A\in E\}$ is a basis of $L^2(\nu_\alpha)$, there exists a finitely supported function $\mathfrak f\colon E\to\mathbb R$ such that
\[
f \;=\; \sum_{A\in E} \mathfrak f(A)\, \Psi_A \;=\; \sum_{n\ge 0} \sum_{A\in E_n} \mathfrak f(A)\, \Psi_A\, .
\]

The function $\mathfrak f$ represents the Fourier coefficients of the local function $f$ and is denoted by $Tf$ when we want to stress its dependence on $f$. The function $\mathfrak f\colon E\to\mathbb R$ has finite support because $f$ is a local function. Moreover, the Fourier coefficients $\mathfrak f(A)$ depend not only on $f$ but also on the density $\alpha$: $\mathfrak f(A) = \mathfrak f(\alpha, A)$. Note finally that $\mathfrak f(\varnothing) = E_{\nu_\alpha}[f]$.

A function $\mathfrak f\colon E\to\mathbb R$ is said to have degree $n\ge 0$ if $\mathfrak f(B)=0$ for all $B\notin E_n$ and if $\mathfrak f(A)\neq 0$ for some $A\in E_n$. The null function is said to have degree $0$.


Denote by $\pi_n$ the orthogonal projection on $G_n$, the space of functions of degree $n$, or the restriction to $E_n$ of a finitely supported function $\mathfrak f\colon E\to\mathbb R$: $(\pi_n\mathfrak f)(A) = \mathfrak f(A)\,\mathbf 1\{A\in E_n\}$. With this definition, $T(\pi_n f) = \pi_n\, Tf$.

Denote by $\mathcal C$ the space of finitely supported functions $\mathfrak f\colon E\to\mathbb R$, by $\mu$ the counting measure on $E$ and by $\langle\cdot,\cdot\rangle_\mu$ the scalar product in $L^2(\mu)$. For any two cylinder functions $f = \sum_{A\in E}\mathfrak f(A)\Psi_A$, $g = \sum_{A\in E}\mathfrak g(A)\Psi_A$,
\[
\langle f, g\rangle_{\nu_\alpha} \;=\; \sum_{A,B\in E} \mathfrak f(A)\,\mathfrak g(B)\, \langle \Psi_A, \Psi_B\rangle_{\nu_\alpha} \;=\; \sum_{A\in E} \mathfrak f(A)\,\mathfrak g(A) \;=\; \langle \mathfrak f, \mathfrak g\rangle_\mu\, .
\]
In particular, the map $T\colon L^2(\nu_\alpha)\to L^2(\mu)$ is an isomorphism.

To examine how the generator acts on the Fourier coefficients, recall the definition of the set $A_{x,y}$ introduced in (3.4.1). To represent the shift operator, for $A$ in $E$, denote by $\theta_y A$ the set defined by
\[
\theta_y A \;=\;
\begin{cases}
A + y & \text{if } -y\notin A\, ,\\
(A+y)^{0,y} & \text{if } -y\in A\, ,
\end{cases}
\]
where $B+z$ is the set $\{x+z : x\in B\}$. Therefore, if $-y$ belongs to $A$, we first translate $A$ by $y$ (obtaining a new set which contains the origin) and then we remove the origin and add the site $y$. Of course, $\theta_y\colon E_n\to E_n$ is a one-to-one function, and a straightforward computation shows that $\Psi_A(\theta_y\xi) = \Psi_{\theta_y A}(\xi)$ for every configuration $\xi$.
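The identity $\Psi_A(\theta_y\xi) = \Psi_{\theta_y A}(\xi)$ can also be checked numerically. In the sketch below (our own check; the names are ad-hoc) the shift acts on configurations by the usual tagged-particle convention, $(\theta_y\xi)(x) = \xi(x+y)$ for $x\neq -y$ and $(\theta_y\xi)(-y) = \xi(y)$, and on finite sets as in the displayed definition:

```python
import random
from math import sqrt, prod

alpha, chi = 0.3, 0.3 * 0.7
O = (0, 0)

def theta_conf(y, S):
    """(theta_y xi)(x) = xi(x+y) for x != -y, (theta_y xi)(-y) = xi(y);
    S is the set of occupied sites of Z^2_*."""
    T = {(a - y[0], b - y[1]) for (a, b) in S if (a, b) != y}
    if y in S:
        T.add((-y[0], -y[1]))
    return T

def theta_set(y, A):
    """theta_y A = A + y if -y not in A, else ((A + y) minus {0}) plus {y}."""
    shifted = {(a + y[0], b + y[1]) for (a, b) in A}
    if (-y[0], -y[1]) in A:
        shifted.discard(O)
        shifted.add(y)
    return shifted

def psi(A, S):
    """Psi_A evaluated on the configuration with occupied sites S."""
    return prod(((1 if x in S else 0) - alpha) / sqrt(chi) for x in A)

random.seed(7)
box = [(i, j) for i in range(-3, 4) for j in range(-3, 4) if (i, j) != O]
ok = True
for _ in range(500):
    S = {s for s in box if random.random() < 0.5}
    A = set(random.sample(box, 3))
    y = random.choice(box)
    ok = ok and abs(psi(A, theta_conf(y, S)) - psi(theta_set(y, A), S)) < 1e-9
print("Psi_A(theta_y xi) == Psi_{theta_y A}(xi):", ok)
```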

Fix a local function $f = \sum_{A\in E} (Tf)(A)\,\Psi_A$. We claim that
\[
L f \;=\; \sum_{A\in E} (L_\alpha\, Tf)(A)\, \Psi_A\, ,
\]
where $L_\alpha = L_{0,\alpha} + L_{\theta,\alpha}$ and
\[
\begin{aligned}
L_{0,\alpha} &= S + (1-2\alpha)\,A_0 + \sqrt{\chi(\alpha)}\, J_{0,+} + \sqrt{\chi(\alpha)}\, J_{0,-}\, ,\\
L_{\theta,\alpha} &= \alpha\, L_{\theta,p,1} + (1-\alpha)\, L_{\theta,p,2} + \sqrt{\chi(\alpha)}\, J_{\theta,p,+} + \sqrt{\chi(\alpha)}\, J_{\theta,p,-}\, .
\end{aligned}
\]
The operators $S$, $A_0$, $J_{0,+}$, $J_{0,-}$ are given by

\[
\begin{aligned}
(S\mathfrak f)(A) &= \tfrac12 \sum_{x,y\in\mathbb Z^d_*} s(y-x)\, [\mathfrak f(A_{x,y}) - \mathfrak f(A)]\, ,\\
(A_0\mathfrak f)(A) &= \sum_{\substack{x\in A,\; y\notin A\\ y\neq 0}} a(y-x)\, [\mathfrak f(A_{x,y}) - \mathfrak f(A)]\, ,\\
(J_{0,+}\mathfrak f)(A) &= 2 \sum_{x,y\in A} a(y-x)\, \mathfrak f(A\setminus\{y\})\, ,\\
(J_{0,-}\mathfrak f)(A) &= 2 \sum_{\substack{x,y\notin A\\ x,y\neq 0}} a(y-x)\, \mathfrak f(A\cup\{y\})
\end{aligned}
\]
for any function $\mathfrak f\colon E\to\mathbb R$ of finite support. Note that the generator $S$ is different from the generator $S$ of Section 3.4 because the summation is carried over sites in $\mathbb Z^d_*$. The operators $L_{\theta,p,1}$, $L_{\theta,p,2}$, $J_{\theta,p,+}$, $J_{\theta,p,-}$ are defined as follows:

\[
\begin{aligned}
(L_{\theta,p,1}\mathfrak f)(A) &= \sum_{x\in A} p(x)\, \big[ \mathfrak f(\theta_{-x}A) - \mathfrak f(A) \big]\, ,\\
(L_{\theta,p,2}\mathfrak f)(A) &= \sum_{x\notin A} p(x)\, \big[ \mathfrak f(\theta_{-x}A) - \mathfrak f(A) \big]\, ,\\
(J_{\theta,p,+}\mathfrak f)(A) &= \sum_{x\in A} p(x)\, \big[ \mathfrak f(A\setminus\{x\}) - \mathfrak f([\theta_{-x}A]\setminus\{-x\}) \big]\, ,\\
(J_{\theta,p,-}\mathfrak f)(A) &= \sum_{x\notin A} p(x)\, \big[ \mathfrak f(A\cup\{x\}) - \mathfrak f([\theta_{-x}A]\cup\{-x\}) \big]
\end{aligned}
\]
for any function $\mathfrak f\colon E\to\mathbb R$ of finite support. In these equations the origin never appears in the summation because $p(0)=0$. We denote by $L_{\theta,q,1}$, $L_{\theta,q,2}$, $J_{\theta,q,+}$, $J_{\theta,q,-}$ the operators $L_{\theta,p,1}$, $L_{\theta,p,2}$, $J_{\theta,p,+}$, $J_{\theta,p,-}$ defined above with $p$ replaced by $q$, where $q = p^*$ ($p^*(x) = p(-x)$), $s$ or $a$.

Up to this point we have proved that
\[
T L \;=\; L_\alpha\, T\, .
\]

Before proceeding, we examine the operators just introduced. Our goal is to rewrite the operator $L_\alpha$ as a sum of operators which are either symmetric or anti-symmetric. Notice first that $S$ is a symmetric operator in $L^2(\mu)$ which preserves the degree of a function. In contrast with the previous chapter, we shall see that it does not carry all the symmetric part of the operator $L_\alpha$.

The operators $A_0$, $L_{\theta,p,1}$ and $L_{\theta,p,2}$ also preserve the degree of functions.

The special role played by the origin introduces some complications: $A_0$ is not anti-symmetric, as its notation suggests, but it carries a piece which is compensated by $L_{\theta,p,1}$ and $L_{\theta,p,2}$. More precisely, let $U\colon E\to\mathbb R$ be given by
\[
U(A) \;=\; \sum_{x\in A} a(x)\, .
\]
Note that $U$ vanishes on sets $A$ which do not contain points close to the origin. Denote by $A_0^*$, $L_{\theta,p,1}^*$ and $L_{\theta,p,2}^*$ the adjoints of $A_0$, $L_{\theta,p,1}$ and $L_{\theta,p,2}$, respectively, in $L^2(\mu)$. An elementary computation shows that
\[
\begin{aligned}
(A_0^*\,\mathfrak f)(A) &= -(A_0\mathfrak f)(A) - 2U(A)\,\mathfrak f(A)\, ,\\
(L_{\theta,p,1}^*\,\mathfrak f)(A) &= (L_{\theta,p^*,1}\mathfrak f)(A) - 2U(A)\,\mathfrak f(A)\, ,\\
(L_{\theta,p,2}^*\,\mathfrak f)(A) &= (L_{\theta,p^*,2}\mathfrak f)(A) + 2U(A)\,\mathfrak f(A)\, .
\end{aligned}
\]
The derivation of these identities relies on the facts that $\sum_{x\in A} a(x) = -\sum_{x\notin A} a(x)$ and $\sum_{x,y\in A} a(y-x) = 0$, because $a$ is anti-symmetric.


The computation of the adjoints suggests the following decomposition of the operator $\alpha\{L_{\theta,p,1} - A_0\} + (1-\alpha)\{L_{\theta,p,2} + A_0\}$ into symmetric and anti-symmetric parts:
\[
\alpha L_{\theta,s,1} + (1-\alpha)L_{\theta,s,2} \quad\text{and}\quad \alpha\{L_{\theta,a,1} - A_0\} + (1-\alpha)\{L_{\theta,a,2} + A_0\}\, .
\]
It follows from the computation of the adjoints that $L_{\theta,s,1}$ and $L_{\theta,s,2}$ are symmetric operators in $L^2(\mu)$ which preserve the degrees of functions, while the operators $L_{\theta,a,1} - A_0$, $L_{\theta,a,2} + A_0$ are anti-symmetric operators in $L^2(\mu)$ which preserve the degrees of functions.

We turn now to the operators which do not preserve the degree of a function: $J_{0,+}$, $J_{0,-}$, $J_{\theta,p,+}$, $J_{\theta,p,-}$. The adjoints of these operators can be written as
\[
\begin{aligned}
(J_{0,+}^*\,\mathfrak f)(A) &= -(J_{0,-}\mathfrak f)(A) - 2\sum_{y\notin A} a(y)\,\mathfrak f(A\cup\{y\})\, ,\\
(J_{0,-}^*\,\mathfrak f)(A) &= -(J_{0,+}\mathfrak f)(A) - 2\sum_{y\in A} a(y)\,\mathfrak f(A\setminus\{y\})\, ,\\
(J_{\theta,p,+}^*\,\mathfrak f)(A) &= (J_{\theta,p^*,-}\mathfrak f)(A) + 2\sum_{y\notin A} a(y)\,\mathfrak f(A\cup\{y\})\, ,\\
(J_{\theta,p,-}^*\,\mathfrak f)(A) &= (J_{\theta,p^*,+}\mathfrak f)(A) + 2\sum_{y\in A} a(y)\,\mathfrak f(A\setminus\{y\})\, .
\end{aligned}
\]
Note that the indices $\pm$ on the left-hand side are changed to $\mp$ on the right-hand side. It follows from these identities that
\[
J_{\theta,s,+}^* \;=\; J_{\theta,s,-}\, , \qquad \{J_{0,+} + J_{\theta,a,+}\}^* \;=\; -\{J_{0,-} + J_{\theta,a,-}\}\, .
\]
Therefore, $J_{\theta,s,+} + J_{\theta,s,-}$ is symmetric in $L^2(\mu)$, while $J_{0,+} + J_{0,-} + J_{\theta,a,+} + J_{\theta,a,-}$ is anti-symmetric. Both operators modify the degrees of functions by $1$.

Let $J_+ = J_{0,+} + J_{\theta,p,+}$, $J_- = J_{0,-} + J_{\theta,p,-}$. With this notation we have that
\[
L_\alpha \;=\; S + \alpha\{L_{\theta,p,1} - A_0\} + (1-\alpha)\{L_{\theta,p,2} + A_0\} + \sqrt{\chi(\alpha)}\,\{J_+ + J_-\}\, .
\]
Denote by $H_{0,1}$ the Hilbert space generated by the finitely supported functions $\mathcal C$ endowed with the scalar product $\langle\mathfrak f, (-S)\mathfrak g\rangle_\mu$. We use the same notation $\|\cdot\|_{0,1}$ to denote the norm of the Hilbert space $H_{0,1}$. An elementary computation shows that
\[
\|\mathfrak f\|_{0,1}^2 \;=\; \langle\mathfrak f, (-S)\mathfrak f\rangle_\mu \;=\; \frac14 \sum_{x,y\in\mathbb Z^d_*} s(y-x) \sum_{A\in E} \{\mathfrak f(A_{x,y}) - \mathfrak f(A)\}^2 \tag{7.1}
\]
for all finitely supported functions $\mathfrak f\colon E\to\mathbb R$.


Let $H_{0,-1}$ be the dual space of $H_{0,1}$, given by (2.2.2) with $\mu$, $\mathcal C$ in place of $\pi$, $C$. Here also $\|\cdot\|_{0,-1}$ stands for the norm of $H_{0,-1}$.

Fix a local function $f$ in $C$ and recall that $L_0^s$ represents the symmetric part of $L_0$. An elementary computation shows that $T L_0^s f = S\,Tf$, so that
\[
T L_0^s \;=\; S\, T\, .
\]
Hence, since $T$ is an isomorphism from $L^2(\nu_\alpha)$ to $L^2(\mu)$, for any local function $f$,
\[
\| f\|_{0,1}^2 \;=\; \langle f, (-L_0) f\rangle_{\nu_\alpha} \;=\; \langle f, (-L_0^s) f\rangle_{\nu_\alpha} \;=\; \langle\mathfrak f, (-S)\mathfrak f\rangle_\mu \;=\; \|\mathfrak f\|_{0,1}^2\, , \tag{7.2}
\]
where $\mathfrak f = Tf$. Therefore $T$ is also an isomorphism between the corresponding spaces $H_{0,1}$. By duality these identities extend to the dual spaces $H_{0,-1}$:
\[
\| f\|_{0,-1}^2 \;=\; \|\mathfrak f\|_{0,-1}^2\, . \tag{7.3}
\]
The isomorphism from $L^2(\nu_\alpha)$ to $L^2(\mu)$ and the relation $TL = L_\alpha T$ give also that
\[
\langle f, (-L) f\rangle_{\nu_\alpha} \;=\; \langle\mathfrak f, (-L_\alpha)\mathfrak f\rangle_\mu \tag{7.4}
\]
for all local functions $f$, an identity needed later.

8 Asymmetric exclusion in dimension d ≥ 3

We prove in this section Theorem 4.4 for asymmetric exclusion processes in dimension $d\ge 3$. We have seen in Section 5 that the proof is reduced to the verification of conditions (5.3).

This section is divided into two parts. We first prove that the local function $V_a$ belongs to $H_{0,-1}$ in dimensions $d\ge 3$. This part is based on the estimates of the Green function of transient Markov processes presented in Section 2.8. In the second part of the section we prove that the second condition in (5.3) holds, following the strategy presented in Subsection 2.??.

Lemma 4.13. Assume that $d\ge 3$. Then the local function $V_a$ introduced in (5.1) belongs to $H_{0,-1}$ and
\[
\|V_a\|_{0,-1}^2 \;\le\; C_0\, \alpha(1-\alpha)\, |a|^2\, .
\]

Proof. The local function $V_a$ can be written in the basis $\{\Psi_A : A\in E\}$ as $-\sqrt{\chi(\alpha)}\,\sum_{x\in\mathbb Z^d} (x\cdot a)\, p(x)\,\Psi_{\{x\}}$ and therefore belongs to the space $G_1$ of functions of degree $1$. Since functions of different degrees are orthogonal both in $L^2(\nu_\alpha)$ and in $H_{0,1}$, in the variational formula which defines the $\|\cdot\|_{0,-1}$ norm we may restrict the supremum to local functions of degree $1$:
\[
\|V_a\|_{0,-1}^2 \;=\; \sup_{f\in C} \big\{ 2\langle V_a, f\rangle_{\nu_\alpha} - \|f\|_{0,1}^2 \big\} \;=\; \sup_{f\in C\cap G_1} \big\{ 2\langle V_a, f\rangle_{\nu_\alpha} - \|f\|_{0,1}^2 \big\}\, .
\]


Denote the Fourier coefficients of the local function $f$ by $\mathfrak f\colon E_1\to\mathbb R$, so that $f = \sum_x \mathfrak f(\{x\})\,\Psi_{\{x\}}$. With this notation the previous variational formula can be written as
\[
\sup_{\mathfrak f} \Big\{ -2\sqrt{\chi(\alpha)} \sum_x (x\cdot a)\, p(x)\,\mathfrak f(\{x\}) - D_0(\mathfrak f) \Big\}\, .
\]
In this formula the supremum is carried over all finitely supported functions $\mathfrak f\colon E_1\to\mathbb R$, and $D_0(\mathfrak f)$ corresponds to the Dirichlet form of a symmetric random walk on $\mathbb Z^d_*$ in which a jump from $x$ to $y$ occurs at rate $(1/2)s(y-x)$:
\[
D_0(\mathfrak f) \;=\; \frac14 \sum_{x,y\in\mathbb Z^d_*} s(y-x)\, \{\mathfrak f(\{y\}) - \mathfrak f(\{x\})\}^2\, .
\]

By the Schwarz inequality, the linear term in the variational formula is less than or equal to
\[
A\,\chi(\alpha) \sum_x (x\cdot a)^2\, p(x) \;+\; \frac1A \sum_x p(x)\,\mathfrak f(\{x\})^2
\]

for all $A>0$. By Proposition 2.2.24, since the transition probability $p$ has finite range, the second term is less than or equal to
\[
\frac1A \sup_{y\in\mathbb Z^d_*} G_0(y,y)\, D_0(\mathfrak f)\, ,
\]
where $G_0(x,y)$ stands for the Green function of the random walk with Dirichlet form $D_0$ defined above. By Proposition 2.2.26, the Green function $G_0$ can be estimated by the Green function of the random walk on $\mathbb Z^d$ which jumps from $x$ to $y$ at rate $(1/2)s(y-x)$. The previous expression is thus less than or equal to $C_0 A^{-1} D_0(\mathfrak f)$ for some finite constant $C_0$ depending only on $p$. Choosing $A = C_0$ we conclude the proof of the lemma.

8.1 Graded sector estimates in the asymmetric case

We prove in this subsection the second condition in (5.3). Let $\mathfrak u_\lambda = T u_\lambda$. Since $TL = L_\alpha T$, applying the operator $T$ to both sides of the resolvent equation (5.2), we obtain that
\[
\lambda\,\mathfrak u_\lambda - L_\alpha\,\mathfrak u_\lambda \;=\; \mathfrak V_a\, ,
\]
where $\mathfrak V_a = T V_a$. Since $L_\alpha \mathfrak u_\lambda = T L u_\lambda$, by (7.3), $\|L u_\lambda\|_{0,-1} = \|L_\alpha \mathfrak u_\lambda\|_{0,-1}$. Thus, in view of (5.8), to prove the second condition in (5.3) it is enough to show that
\[
\sup_{0<\lambda\le 1} \|L_\alpha\,\mathfrak u_\lambda\|_{0,-1} \;<\; \infty\, . \tag{8.1}
\]

We place ourselves in the set-up of Subsection 2.??. Here $E_n$ stands for $A_n$, $\mu$ for $\pi$, and we have the following correspondence between the operators:
\[
\begin{aligned}
B_0 &\to \alpha\{L_{\theta,p,1} - A_0\} + (1-\alpha)\{L_{\theta,p,2} + A_0\}\, ,\\
S_0 &\to S\, , \qquad L_+ \to \sqrt{\chi(\alpha)}\, J_+\, , \qquad L_- \to \sqrt{\chi(\alpha)}\, J_-\, .
\end{aligned}
\]

By Lemma 2.2.12, (8.1) holds if we show that conditions (2.6.2), (2.6.4) and (2.??) are in force. In the present context these conditions read as follows.

There exists a finite constant $C_0$ such that
\[
0 \;\le\; \langle\mathfrak f, -S\mathfrak f\rangle_\mu \;\le\; C_0\, \langle\mathfrak f, (-L_\alpha)\mathfrak f\rangle_\mu \tag{8.2}
\]
for all functions $\mathfrak f\colon E\to\mathbb R$ of finite support.

There exist $\beta < 1$ and a finite constant $C_0$ such that
\[
\langle J_+\mathfrak f, \mathfrak g\rangle_\mu^2 \;\le\; C_0\, n^{2\beta}\, \langle -S\mathfrak f, \mathfrak f\rangle_\mu\, \langle -S\mathfrak g, \mathfrak g\rangle_\mu\, , \qquad
\langle\mathfrak f, J_-\mathfrak g\rangle_\mu^2 \;\le\; C_0\, n^{2\beta}\, \langle -S\mathfrak f, \mathfrak f\rangle_\mu\, \langle -S\mathfrak g, \mathfrak g\rangle_\mu \tag{8.3}
\]
for any finitely supported functions $\mathfrak f\colon E_n\to\mathbb R$, $\mathfrak g\colon E_{n+1}\to\mathbb R$ and all $n\ge 1$.

There exists a finite constant $C_0$ such that
\[
\|\pi_n A_0\,\mathfrak u_\lambda\|_{0,-1}^2 \;\le\; C_0\, n\, \|\mathfrak V_a\|_{0,-1}^2 \;+\; C_0\, n^3 \sum_{j=n-1}^{n+1} \|\pi_j\,\mathfrak u_\lambda\|_{0,1}^2 \tag{8.4}
\]
for all $n\ge 1$; and identical bounds with $L_{\theta,p,j}$, $j=1,2$, in place of $A_0$.

In the remaining part of this section we prove Conditions (8.2), (8.3), (8.4).

Condition (8.2)

This condition is straightforward since the operators $S$, $L_\alpha$ are generators and therefore non-positive: for every finitely supported function $\mathfrak f\colon E\to\mathbb R$, by (7.2) and (7.4),
\[
\begin{aligned}
0 \;&\le\; \langle -L_0 f, f\rangle_{\nu_\alpha} \;=\; \langle -S\mathfrak f, \mathfrak f\rangle_\mu\, ,\\
\langle -S\mathfrak f, \mathfrak f\rangle_\mu \;&=\; \langle -L_0 f, f\rangle_{\nu_\alpha} \;\le\; \langle -L f, f\rangle_{\nu_\alpha} \;=\; \langle\mathfrak f, (-L_\alpha)\mathfrak f\rangle_\mu\, ,
\end{aligned}
\]
where $f$ is the local function $f(\xi) = \sum_{A\in E}\mathfrak f(A)\Psi_A(\xi)$.

Condition (8.3): Off-diagonal operators

We start with a graded sector condition on the symmetric diagonal operators.

Lemma 4.14. There exists a constant $C_0$, depending only on $p(\cdot)$, such that
\[
\langle -L_{\theta,s,j}\,\mathfrak f, \mathfrak f\rangle_\mu \;\le\; C_0\, n\, \langle -S\mathfrak f, \mathfrak f\rangle_\mu
\]
for $j=1,2$ and all finitely supported functions $\mathfrak f\colon E_n\to\mathbb R$.


Proof. Set $j=1$ and fix a finitely supported function $\mathfrak f\colon E_n\to\mathbb R$. An elementary computation shows that
\[
\langle -L_{\theta,s,1}\,\mathfrak f, \mathfrak f\rangle_\mu \;=\; \frac12 \sum_{z\in\mathbb Z^d} s(z) \sum_{A\ni z} \{\mathfrak f(\theta_{-z}A) - \mathfrak f(A)\}^2\, .
\]
Since the sum over $z$ is finite, we need to estimate, for each $z$,
\[
\sum_{A\in E_n} \big\{ \mathfrak f(\theta_{-z}A) - \mathfrak f(A) \big\}^2
\]
in terms of the Dirichlet form $\langle -S\mathfrak f, \mathfrak f\rangle_\mu$, in which only exchanges of sites are allowed.
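The first identity in the proof can be confirmed numerically on a small example. The sketch below is our own check, with ad-hoc choices ($d = 2$, $n = 2$, nearest-neighbour $s$, and a random finitely supported $\mathfrak f$); it computes $\langle -L_{\theta,s,1}\mathfrak f, \mathfrak f\rangle_\mu$ directly and through the quadratic form:

```python
import random
from itertools import combinations

O = (0, 0)
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
s = lambda z: 1.0 if z in NBRS else 0.0        # a symmetric finite-range s

def theta_set(y, A):
    """theta_y A = A + y if -y not in A, else ((A + y) minus {0}) plus {y}."""
    shifted = {(a + y[0], b + y[1]) for (a, b) in A}
    if (-y[0], -y[1]) in A:
        shifted.discard(O)
        shifted.add(y)
    return frozenset(shifted)

neg = lambda y: (-y[0], -y[1])

random.seed(3)
box = [(i, j) for i in range(-2, 3) for j in range(-2, 3) if (i, j) != O]
supp = [frozenset(c) for c in combinations(box, 2)]    # sets of degree n = 2
f = {A: random.uniform(-1, 1) for A in supp}           # random coefficients
fv = lambda A: f.get(A, 0.0)

# Direct: <-L f, f> with (L f)(A) = sum_{x in A} s(x) [f(theta_{-x} A) - f(A)]
lhs = -sum(fv(A) * sum(s(x) * (fv(theta_set(neg(x), A)) - fv(A)) for x in A)
           for A in supp)

# Quadratic form: (1/2) sum_z s(z) sum_{A containing z} {f(theta_{-z} A) - f(A)}^2
rhs = 0.0
for z in NBRS:
    cands = set(supp) | {theta_set(z, B) for B in supp}
    rhs += 0.5 * s(z) * sum((fv(theta_set(neg(z), A)) - fv(A)) ** 2
                            for A in cands if z in A)

print("identity holds:", abs(lhs - rhs) < 1e-10)
```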

We may assume without loss of generality that $z = (1,0,\dots,0)$, the unit vector in the direction of the first coordinate axis. The problem is now reduced to the following: we are given a function $\mathfrak f\colon E_n\to\mathbb R$, and we think of $E_n$ as a graph with edge set $\mathcal E_n$, two sets being joined by an edge when one is obtained from the other by an allowed exchange. The Dirichlet form can be written as
\[
\langle -S\mathfrak f, \mathfrak f\rangle_\mu \;=\; \frac14 \sum_{e\in\mathcal E_n} |(\delta\mathfrak f)(e)|^2\, ,
\]
and we are claiming an estimate of the form
\[
\frac12 \sum_{A\in E_n} |\mathfrak f(\theta_{-z}A) - \mathfrak f(A)|^2 \;\le\; C_0\, n\, \langle -S\mathfrak f, \mathfrak f\rangle_\mu
\]
with a constant $C_0$ independent of $n$.

It is clear that for any set $A$ one can move from $A$ to $\theta_{-z}A$ along edges of the graph, using only the edges in $\mathcal E_n$. We will verify that we can assign to each $A$ a set of edges $E_A\subset \mathcal E_n$ such that: (i) for every $A$ one can use the edges of $E_A$ to go from $A$ to $\theta_{-z}A$; (ii) for any $A$ there are at most $n$ edges in $E_A$; and (iii) the subsets $\{E_A\}$ are mutually disjoint as $A$ varies over $E_n$. Then, by the Cauchy--Schwarz inequality,
\[
|\mathfrak f(\theta_{-z}A) - \mathfrak f(A)|^2 \;=\; \Big( \sum_{e\in E_A} (\delta\mathfrak f)(e) \Big)^2 \;\le\; n \sum_{e\in E_A} |(\delta\mathfrak f)(e)|^2\, ,
\]
and, summing over $A\in E_n$ and using that the sets $E_A$ are disjoint, we establish the lemma.

To construct the paths from $A$ to $\theta_{-z}A$ we totally order the points of $\mathbb Z^d$ by lexicographic ordering. We say that $z = (z_1,\dots,z_d)\in\mathbb Z^d$ is positive if either $z_1>0$, or $z_1 = 0, z_2 = 0, \dots, z_{j-1} = 0$ and $z_j > 0$ for some $2\le j\le d$. The total ordering declares $y > x$ if $y-x$ is positive. Let the set $A\in E_n$ consist of the $n$ points $x_1, x_2, \dots, x_n$ of $\mathbb Z^d$, which we may assume ordered so that $x_1 > x_2 > \dots > x_n$. Then $\theta_{-z}A = \{x_1', \dots, x_n'\}$, where $x_j' = x_j - z$ unless $x_j = z$, in which case $x_j' = x_j - 2z = -z$. We use the edges in $\mathcal E_n$ to shift successively each $x_i$ to $x_i'$, starting from $x_1$ and proceeding in order, and ending
