
HAL Id: hal-00817273

https://hal.archives-ouvertes.fr/hal-00817273

Submitted on 24 Apr 2013


Some results about ergodicity in shape for a crystal growth model

François Ezanno

To cite this version:

François Ezanno. Some results about ergodicity in shape for a crystal growth model. Electronic Journal of Probability, Institute of Mathematical Statistics (IMS), 2013, 18, pp. 1–20. 10.1214/EJP.vVOL-PID. hal-00817273


Electronic Journal of Probability

Electron. J. Probab. 0 (2012), no. 0, 1–20.

ISSN: 1083-6489 DOI: 10.1214/EJP.vVOL-PID

Some results about ergodicity in shape for a crystal growth model

François Ezanno

Abstract

We study a crystal growth Markov model proposed by Gates and Westcott ([1], [2]).

This is an aggregation process where particles are packed in a square lattice according to prescribed deposition rates. This model is parametrized by three values (β_i, i = 0, 1, 2) corresponding to deposition rates on three different types of loci.

The main problem is to determine when the shape of the crystal is recurrent and when it is ergodic. In [3] and [4], sufficient conditions are given both for ergodicity and for transience. We establish improved conditions and give a precise description of the asymptotic behavior in a special case.

Keywords: Markov chain; random deposition; positive recurrence.

AMS 2010 Subject Classification: 60J27; 60G55; 60K25; 60J10.

Submitted to EJP on July 19, 2012, final version accepted on February 8, 2013.

1 Definitions and first properties

Let n be an integer, n ≥ 2. We consider a set of n aligned sites, each site corresponding to a growing pile of particles. The state of a lamellar crystal (see [5]) is described by a vector x = (x(1), . . . , x(n)) ∈ N^n, where the value of x(i) may be thought of as the height of the pile above site i. If 1 ≤ j ≤ n, e_j will stand for the unitary vector

e_j(i) = δ_{i,j}.

For x ∈ N^n and 1 ≤ j ≤ n, let V_j(x) be the number of sites adjacent to j whose pile is strictly higher than the pile at site j. Namely,

V_j(x) = 1_{x(j−1)>x(j)} + 1_{x(j+1)>x(j)} ∈ {0, 1, 2}.

For V_j(x) to be well-defined for j = 1 and j = n, we adopt from now on the convention that x(0) = x(n + 1) = 0, unless otherwise specified. This is the so-called zero condition, which amounts to adding a leftmost and a rightmost site that stay at height 0 forever. Another natural convention is the periodic condition, which consists in deciding that x(0) = x(n) and x(n + 1) = x(1); we believe that all the results here can be transposed to the periodic condition (in the same way as Theorem 1.1 in [3]). We shall also use the infinite condition (resp. the zero-infinite condition), that is x(0) = x(n + 1) = ∞ (resp. x(0) = 0, x(n + 1) = ∞), and anything relative to this condition will be denoted with the superscript ∞ (resp. the superscript 0/∞).

∗Aix-Marseille Université, France. E-mail: fezanno@cmi.univ-mrs.fr
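In code, the neighbour count V_j under the zero condition reads as follows (a minimal sketch with 1-based site indices matching the paper's notation; the function name is ours):

```python
def V(x, j):
    """V_j(x): number of neighbours of site j (1-based) whose pile is
    strictly higher than the pile at j, under the zero condition
    x(0) = x(n+1) = 0."""
    n = len(x)
    left = x[j - 2] if j >= 2 else 0    # x(j-1); boundary piles stay at 0
    right = x[j] if j <= n - 1 else 0   # x(j+1)
    return int(left > x[j - 1]) + int(right > x[j - 1])
```

For instance, in x = (2, 1, 3) the middle site is dominated on both sides, so V_2 = 2, while the end sites are local maxima relative to the zero boundary, so V_1 = V_3 = 0.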

Definition 1. Let n ≥ 2 and β = (β_0, β_1, β_2) ∈ ]0, +∞[^3. We say that (X^n_t, t ≥ 0) is a crystal process with n sites and parameter β if it is a Markov process on N^n with transition rates given by

q(x, x + e_j) = β_{V_j(x)}, j = 1, . . . , n,
q(x, y) = 0, if y ∉ {x + e_1, . . . , x + e_n}.

For a configuration x, we define the shape h of x by h = (∆_1 x, . . . , ∆_{n−1} x), where

∆_j x = x(j) − x(j + 1), j = 1, . . . , n − 1.

Knowing h is equivalent to knowing x up to vertical translation. It is important to remark that V_j(x) only depends on x through h, and V_j(h) will denote the value of V_j(x) for any x whose shape is h. Let us define, for j = 1, . . . , n, the vector

f_j = e′_1, if j = 1,
f_j = e′_j − e′_{j−1}, if 1 < j < n,
f_j = −e′_{n−1}, if j = n,

e′_1, . . . , e′_{n−1} being the unitary vectors of Z^{n−1}. The object of main interest is the process of the shape of X^n, which we now define, rather than the process X^n itself.

Definition 2. The shape process with n sites and parameter β is defined by

H^n_t = (∆_1 X^n_t, . . . , ∆_{n−1} X^n_t),

where X^n is a crystal process with n sites and parameter β. H^n is a Markov process on Z^{n−1} with transition mechanism given by

q(h, h + f_j) = β_{V_j(h)}, j = 1, . . . , n,
q(h, h′) = 0, if h′ ∉ {h + f_1, . . . , h + f_n}.

These processes have a basic symmetry property: the process (X^n_t(n), . . . , X^n_t(1)) has the same distribution as X^n, and consequently the process (−∆_{n−1} X^n, . . . , −∆_1 X^n) has the same distribution as H^n. There is a convenient construction of X^n, and hence of H^n, which we now describe and will later refer to as the Poisson construction.

As we will see later, the interest of this construction is to yield useful couplings. Let b_0, b_1 and b_2 be the β_k's ranked in increasing order. We take a family of Poisson processes (N_{k,j}, 0 ≤ k ≤ 2, 1 ≤ j ≤ n) such that

- N_{k,j} has intensity b_k,
- the triples (N_{0,j}, N_{1,j}, N_{2,j})_{1≤j≤n} are mutually independent,
- for any j there exist three processes Ñ_{0,j}, Ñ_{1,j} and Ñ_{2,j}, mutually independent, with intensities b_0, b_1 − b_0 and b_2 − b_1 respectively, such that

N_{0,j} = Ñ_{0,j}, N_{1,j} = Ñ_{0,j} + Ñ_{1,j}, N_{2,j} = Ñ_{0,j} + Ñ_{1,j} + Ñ_{2,j}.


We build the process (X^n_t, t ≥ 0) starting from x_0 by letting X^n_0 = x_0 and, at any jump time t of some N_{k,j},

X^n_t = X^n_{t−} + e_j, if β_{V_j(X^n_{t−})} ≥ b_k; X^n_t = X^n_{t−}, otherwise. (1.1)
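The Poisson construction lends itself directly to simulation. The sketch below (a minimal illustration under the zero condition; the function name is ours, not the paper's) superposes, for each site, one rate-b_2 clock and thins each point into a layer k, so that the point belongs to N_{k,j}, N_{k+1,j}, . . . ; rule (1.1) then accepts the deposition iff β_{V_j} ≥ b_k for the smallest such layer.

```python
import random

def simulate_crystal(x0, beta, t_max, seed=0):
    """Simulate the crystal process X^n on [0, t_max] via the Poisson
    construction: each site carries a rate-b2 Poisson clock, and each point
    is thinned into layer 0, 1 or 2 with probabilities b0/b2, (b1-b0)/b2,
    (b2-b1)/b2, mimicking the layered processes N~_{0,j}, N~_{1,j}, N~_{2,j}."""
    rng = random.Random(seed)
    b = sorted(beta)                       # b0 <= b1 <= b2
    x, n, t = list(x0), len(x0), 0.0
    while True:
        t += rng.expovariate(n * b[2])     # next point among all n clocks
        if t > t_max:
            return x
        j = rng.randrange(n)               # site of the point (0-based)
        u = rng.random() * b[2]
        mark = 0 if u < b[0] else (1 if u < b[1] else 2)
        left = x[j - 1] if j > 0 else 0    # zero condition x(0) = x(n+1) = 0
        right = x[j + 1] if j < n - 1 else 0
        v = int(left > x[j]) + int(right > x[j])
        if beta[v] >= b[mark]:             # rule (1.1)
            x[j] += 1
```

Because the same seed reproduces the same family of marked Poisson points, this construction is exactly what makes the couplings of Lemmas 1 and 2 convenient.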

It is not hard to check that this process has the Markov property and the desired jump rates. Hence it is a crystal process starting from x_0 with n sites and parameter β.

For any positive function f on N or R_+, we write

f(x) = O_exp(x)

if there exist α, C > 0 such that f(x) ≤ C e^{−αx}. If f also depends on some other variable t, the notation

f(x, t) = O^t_exp(x)

means that the same inequality holds with constants C and α independent of t. We say that the process X^n is ergodic in shape, resp. transient in shape, whenever the process H^n is ergodic, resp. transient. The notation P_x, resp. P_h, will stand for the distribution of the trajectory (X^n_t, t ≥ 0) starting from x, resp. of the trajectory (H^n_t, t ≥ 0) starting from h. If there is any ambiguity on the parameter β, the notation P^β_x will be used instead. The null vector will be denoted by 0.

This simple model was first described by Gates and Westcott in [1], where attention was focused on the special case β_2 > β_0 and

β_1 = (β_0 + β_2)/2.

Under this assumption the process H^n with periodic conditions enjoys a remarkable dynamic reversibility property that implies ergodicity, and even allows one to derive an exact computation of the invariant distribution. Unfortunately, without this assumption on β there is no such simple way to determine whether ergodicity occurs or not. However we can make a naive remark: since β_0 is the growth speed of peaks and β_2 that of holes, basic intuition says that increasing β_0 should make the shape more irregular, making the process H^n more likely to be transient. Conversely, increasing β_2 should make the shape smoother, making the process H^n more likely to be recurrent.

Gates and Westcott later proved several results about the problem of recurrence in shape for other parameters, by means of Foster criteria with quite simple Lyapunov functions. Theorem 2 in [4] states that for periodic conditions and n ≥ 2, H^n is transient if β_2 < β_0. Ergodicity is shown to hold for

β_1, β_2 > (n − 1)² β_0, (1.2)

and a similar condition for ergodicity is also obtained for a process with a two-dimensional grid of sites. Of course, when n is large such conditions are very restrictive.

The family (∆_j X^n_t, t ≥ 0) is said to be exponentially tight if

P_0(|∆_j X^n_t| ≥ k) = O^t_exp(k).

We also say that the family (H^n_t, t ≥ 0) is exponentially tight if for j = 1, . . . , n − 1, (∆_j X^n_t, t ≥ 0) is exponentially tight. Obviously, exponential tightness of the process (H^n_t, t ≥ 0) implies that it is ergodic with an invariant distribution having exponential tails.

From Theorems 1.2 and 3.1 in [3] we get:

Theorem 1. If n ≥ 2 and β_0 < β_1 ≤ β_2 then (H^n_t, t ≥ 0) is exponentially tight, and hence ergodic. Moreover there exists d_n < β_1 such that

P_0(X^n_t(n) ≥ d_n t) = O_exp(t).


We point out that this result is actually stated for β_0 < β_1 < β_2, but the reader may verify that its proof works exactly the same if we take β_1 = β_2. We will pick up several ideas of the approach in [3] in order to give weaker conditions for ergodicity in shape. Our notation will be consistent with this reference as much as possible. Before stating our results we begin by defining two useful notions: growth rate and monotonicity.

Proposition 1. Suppose that H^n is ergodic and let π_n be its invariant distribution. There exists v_n > 0 such that for j = 1, . . . , n, almost surely,

lim_{t→∞} X^n_t(j)/t = v_n.

Moreover, for any j = 1, . . . , n, we have

v_n = Σ_{h∈Z^{n−1}} β_{V_j(h)} π_n(h).

Proof. Let j ∈ {1, . . . , n}. Since (X^n_t(j), t ≥ 0) is a counting process with intensity β_{V_j(X^n_t)}, we have that

M_t := X^n_t(j) − ∫_0^t β_{V_j(X^n_s)} ds is a martingale, (1.3)

and also

L_t := M_t² − ∫_0^t β_{V_j(X^n_s)} ds is a martingale. (1.4)

Since ergodicity yields the a.s. convergence

lim_{t→∞} (1/t) ∫_0^t β_{V_j(X^n_s)} ds = Σ_{h∈Z^{n−1}} β_{V_j(h)} π_n(h),

it now remains to show that lim t^{−1} M_t = 0 a.s. But (1.3) allows us to use Doob's inequality: for r ≤ t,

P( sup_{s∈[r,t]} |M_s|/s ≥ ε ) ≤ P( sup_{s∈[r,t]} |M_s| ≥ rε ) ≤ E[M_t²]/(r²ε²) ≤ Kt/(r²ε²),

where K = max(β_0, β_1, β_2). The last inequality is a direct consequence of (1.4). Thus we have P(sup_{n² ≤ s ≤ (n+1)²} s^{−1}|M_s| ≥ ε) ≤ K(n + 1)²/(ε²n⁴), and we can conclude by the Borel–Cantelli lemma.

For n = 2 , the simplicity of the dynamics allows us to compute the exact value of the growth rate:

Proposition 2. H^2 is ergodic if and only if β_1 > β_0. In this case, v_2 = 2β_0β_1/(β_0 + β_1).

Proof. H^2_t = X^2_t(1) − X^2_t(2) is a random walk on Z, whose jump rates are given by:

q(i, i + 1) = β_0, q(i, i − 1) = β_1, if i > 0,
q(0, 1) = q(0, −1) = β_0,
q(i, i + 1) = β_1, q(i, i − 1) = β_0, if i < 0.


Thus the first assertion is straightforward. If β_1 > β_0, it is easy to check that the probability measure

μ({i}) = ((β_1 − β_0)/(β_1 + β_0)) (β_0/β_1)^{|i|}

is a reversible measure for this random walk, so it is the invariant distribution π_2 of the process. We can then compute:

v_2 = Σ_{i∈Z} π_2({i})(β_0 1_{i≥0} + β_1 1_{i<0}) = β_0 π_2(Z_+) + β_1 π_2(Z_−) = 2β_0β_1/(β_0 + β_1).
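The closed form for v_2 can be checked numerically against the series defining it. A small sketch (helper names ours, purely illustrative): truncate the measure μ, which for β_1 > β_0 is a probability measure, and average the jump rate of site 1 (β_0 on Z_+, β_1 on Z_−):

```python
def v2_exact(b0, b1):
    """Growth rate 2*b0*b1/(b0+b1) from Proposition 2."""
    return 2 * b0 * b1 / (b0 + b1)

def v2_from_invariant(b0, b1, trunc=400):
    """Average jump rate of site 1 under the (truncated) invariant measure
    mu({i}) = (b1-b0)/(b1+b0) * (b0/b1)**|i| of the walk H^2; needs b1 > b0."""
    assert b1 > b0
    c, r = (b1 - b0) / (b1 + b0), b0 / b1
    mu = {i: c * r ** abs(i) for i in range(-trunc, trunc + 1)}
    assert abs(sum(mu.values()) - 1.0) < 1e-9   # mu is (nearly) a probability
    return sum(p * (b0 if i >= 0 else b1) for i, p in mu.items())
```

The agreement of the two values reproduces the telescoping in the computation above: μ(Z_+) = β_1/(β_0 + β_1) and μ(Z_−) = β_0/(β_0 + β_1).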

We define the canonical partial order ≤ in an obvious way: for two configurations x, y ∈ N^n, we write x ≤ y if

x(j) ≤ y(j), 1 ≤ j ≤ n.

The process X^n is said to be attractive if for any x ≤ y, there exists a coupling of two processes (X^n_t, t ≥ 0) and (Y^n_t, t ≥ 0), with distributions P_x and P_y, such that almost surely,

∀t ≥ 0, X^n_t ≤ Y^n_t.

Lemma 1. Let n ≥ 2. If β_0 ≤ β_1 ≤ β_2, then X^n is attractive.

Proof. Let x ≤ y. We consider X^n and Y^n obtained with the above Poisson construction, with X^n_0 = x, Y^n_0 = y, both using the same Poisson processes. Suppose that

X^n_{t−}(j) = Y^n_{t−}(j)

and N_{k,j} jumps at time t. We then have V_j(X^n_{t−}) ≤ V_j(Y^n_{t−}). Consequently, if X^n_·(j) jumps at time t, then so does Y^n_·(j), and hence the inequality X^n(j) ≤ Y^n(j) is preserved.
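The coupling in this proof can be tested empirically: driving two copies from ordered initial states with the same Poisson events (same site, same layer) should never break the order. A sketch under the assumption β_0 ≤ β_1 ≤ β_2 (all names and the particular initial states are ours):

```python
import random

def order_preserved(seed, beta=(1.0, 2.0, 3.0), t_max=5.0):
    """Drive X and Y (with X_0 <= Y_0) by the SAME marked Poisson events,
    as in the proof of Lemma 1, and report whether X <= Y held throughout."""
    rng = random.Random(seed)
    b = sorted(beta)
    X, Y = [0, 0, 0, 0], [1, 2, 0, 3]       # X_0 <= Y_0 coordinatewise
    n, t = len(X), 0.0

    def v(x, j):                            # neighbour count, zero condition
        left = x[j - 1] if j > 0 else 0
        right = x[j + 1] if j < n - 1 else 0
        return int(left > x[j]) + int(right > x[j])

    while True:
        t += rng.expovariate(n * b[2])
        if t > t_max:
            return True
        j = rng.randrange(n)
        u = rng.random() * b[2]
        mark = 0 if u < b[0] else (1 if u < b[1] else 2)
        if beta[v(X, j)] >= b[mark]:        # rule (1.1) for X
            X[j] += 1
        if beta[v(Y, j)] >= b[mark]:        # same event applied to Y
            Y[j] += 1
        if any(X[i] > Y[i] for i in range(n)):
            return False
```

With β nondecreasing the ordering is guaranteed by Lemma 1, so the function should always return True; a violation would indicate a bug in the coupling.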

We are now interested in comparing two processes with the same initial state, but different numbers of sites, or different parameters. In general it is not true that increasing one of the parameters increases the process itself. However we have a weaker result which is sufficient for our purpose.

Lemma 2. Let n ≥ 2 and β ∈ ]0, +∞[³. If β_0 ≤ β_1 ≤ β_2 and n ≤ m, then there exists a coupling of two processes X^n and X^m distributed as P^{β,n}_x and P^{β,m}_x, such that

∀i = 1, . . . , n, ∀t ≥ 0, X^n_t(i) ≤ X^m_t(i).

Let β′ ∈ ]0, +∞[³. If β and β′ are such that β_k ≤ β′_ℓ for k ≤ ℓ, then there exists a coupling of two processes X^n and X′^n distributed as P^{β,n}_x and P^{β′,n}_x, such that

∀t ≥ 0, X^n_t ≤ X′^n_t.

Proof. Here again we can use Poisson constructions in such a way that the obtained processes enjoy the desired properties. The details are left to the reader.


2 Results

As already noticed, X^n is transient in shape (for periodic conditions) when β_2 < β_0. This is not a surprise, since this inequality says that peaks grow faster than holes. Our first theorem describes more precisely the asymptotic behaviour of the process X^n, with zero condition, under this assumption. It says that almost surely the shape ultimately adopts a comb shape. The exact form of the comb depends on the position of β_1 relative to β_2 and β_0, so we establish three analogous results. To illustrate this, Figure 1 shows three realizations of t^{−1} X^n_t for t = 1000, n = 100, and for parameters corresponding to these three cases.

Before stating the result we need to introduce some further notation. We write t^{−1} X^n_t for the vector (t^{−1} X^n_t(1), . . . , t^{−1} X^n_t(n)). For two vectors a = (a_1, . . . , a_k) and b = (b_1, . . . , b_ℓ) we denote by (a, b) the vector (a_1, . . . , a_k, b_1, . . . , b_ℓ), and (−) denotes the empty vector. Let E_1 be the set of all the n-uples of the form

(a_1, β_2, a_2, . . . , a_{k−1}, β_2, a_k), (2.1)

where k ∈ N, and a_i = (β_0) or (v_2, v_2) for 1 ≤ i ≤ k. Similarly we define E_2 as the set of all the n-uples of the form

(β_{i_1}, . . . , β_{i_n}), (2.2)

where i_j ∈ {0, 1, 2}, i_j ≠ i_{j+1} for 1 ≤ j ≤ n − 1, and i_1, i_n ≠ 2. Let H^{n,∞} be the shape process with infinite condition. The proof of Proposition 2 also works with H^2 being replaced by H^{2,∞}, β_0 by β_1 and β_1 by β_2. Thus, whenever β_2 > β_1 the process H^{2,∞} is ergodic with growth rate v_{2,∞} = 2β_1β_2/(β_1 + β_2). Let E_3 be the set of all n-uples of the form

(e_L, β_0, a_1, β_0, a_2, . . . , a_{k−1}, β_0, a_k, β_0, e_R), (2.3)

where k ∈ N, a_i = (β_2) or (v_{2,∞}, v_{2,∞}) for 1 ≤ i ≤ k, and e_L, e_R = (β_1) or (−).

In Section 3 we prove:

Theorem 2. Let n ≥ 2 and x ∈ N^n.

(i) If β_2 < β_0 ≤ β_1 then t^{−1} X^n_t converges P_x-a.s. and P_x(lim_{t→∞} t^{−1} X^n_t ∈ E_1) = 1.

(ii) If β_2 < β_1 < β_0 then t^{−1} X^n_t converges P_x-a.s. and P_x(lim_{t→∞} t^{−1} X^n_t ∈ E_2) = 1.

(iii) If β_1 ≤ β_2 < β_0 then t^{−1} X^n_t converges P_x-a.s. and P_x(lim_{t→∞} t^{−1} X^n_t ∈ E_3) = 1.
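As an illustration, membership of a limiting speed vector in E_1 can be checked mechanically. The sketch below (our own helper, not from the paper) parses a tuple against the form (2.1), taking b0, b2 and v2 = 2β_0β_1/(β_0 + β_1) from Proposition 2 as inputs; it assumes these three values are pairwise distinct, so the parse is unambiguous.

```python
def in_E1(w, b0, b2, v2, tol=1e-12):
    """Does the tuple w have the comb form (2.1), i.e. blocks a_i equal to
    (b0,) or (v2, v2), separated by single entries equal to b2?
    Assumes b0, b2, v2 are pairwise distinct (up to tol)."""
    def eq(a, b):
        return abs(a - b) <= tol

    def blocks(i):
        # Can w[i:] be parsed as a_m [, b2, a_{m+1}, ...] ?
        for blk in ((b0,), (v2, v2)):
            j = i + len(blk)
            if j <= len(w) and all(eq(w[i + r], blk[r]) for r in range(len(blk))):
                if j == len(w):
                    return True
                if eq(w[j], b2) and blocks(j + 1):
                    return True
        return False

    return blocks(0)
```

For example, with (hypothetical) β_0 = 2, β_1 = 4, β_2 = 1, one has v_2 = 8/3, and (β_0, β_2, v_2, v_2) is in E_1 while (β_2, β_0) is not.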

Remark. It is plausible that the almost sure convergence of t^{−1} X^n_t holds even without the assumption β_2 < β_0. For instance, when β_0 < β_2 < β_1 our belief, confirmed by computer simulations, is that the n sites always ultimately divide into a certain number of blocks (possibly one, in the ergodic case) of various widths separated by holes of unit width, each of these blocks being ergodic in shape. If this is true, then each site admits an asymptotic speed, which is either v_k, k being the width of the block containing the site, or β_2 if the site is ultimately a hole. Unfortunately we have not been able to prove this.

The next results concern the process with parameters lying in the domain

D = {β = (β_0, β_1, β_2) : β_0 < β_2 < β_1}.


Figure 1: realizations of X^n_t for n = 100, t = 1000, and parameters: (a) β = (3, 1, 2), (b) β = (3, 2, 1), (c) β = (2, 3, 1).

We point out that the three degrees of freedom actually reduce to two. We can indeed assume β_0 = 1, because otherwise we can work with the process (X^n_{t/β_0}, t ≥ 0).

Our first result in that direction is an abstract condition for ergodicity. The value of β_0 being fixed from now on, our strategy is to give for each β_1 > β_0 a threshold value of β_2 above which ergodicity holds. The main idea is to compare X^n with an auxiliary process X̃^n which is defined as the crystal process with parameters

β̃_0 = β_0, β̃_1 = β_1 and β̃_2 = β_1. (2.4)

Anything relative to the process X̃^n will be denoted with the symbol ∼. For β_1 > β_0 and n ≥ 2 we define

d̃_n(β_1) = inf{d > 0 : P_0(X̃^n_t(n) ≥ dt) = O_exp(t)}.

Clearly d̃_n(β_1) ∈ [β_0, β_1]. Moreover it follows from Lemma 2 that

d̃_n(β_1) is an increasing function of both n and β_1. (2.5)

Theorem 3. If β_1 > β_0 and β_2 > d̃_n(β_1) then H^k is ergodic for 2 ≤ k ≤ n + 2.

Corollary 1. If β_1, β_2 > β_0 then H^3 is ergodic.

In Section 4 we shall prove Theorem 3, and Theorem 4 below, which is an application of Theorem 3.

Theorem 4. Let n ≥ 2 and β ∈ D. The process H^k is ergodic for 2 ≤ k ≤ n + 2 if β satisfies one of the following conditions:

(a) β_2 > nβ_0,

(b) β_2 > ((n − 1)β_1 + β_0)/n.


Moreover, H^k is ergodic for any k ≥ 2 if β satisfies:

(c) β_2 > 4√2 √(β_1β_0).
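For quick reference, the three sufficient conditions of Theorem 4 can be evaluated mechanically (a small helper of ours, not from the paper):

```python
import math

def theorem4_conditions(beta, n):
    """Evaluate conditions (a), (b), (c) of Theorem 4 for beta in the
    domain D = {beta0 < beta2 < beta1}; (a) and (b) give ergodicity of
    H^k for 2 <= k <= n + 2, and (c) for every k >= 2."""
    b0, b1, b2 = beta
    assert b0 < b2 < b1, "Theorem 4 concerns the domain D"
    return {
        "a": b2 > n * b0,
        "b": b2 > ((n - 1) * b1 + b0) / n,
        "c": b2 > 4 * math.sqrt(2) * math.sqrt(b1 * b0),
    }
```

Note that for fixed β, condition (a) eventually fails as n grows, whereas (c) does not involve n at all; this is the sense in which (c) is the strongest statement below.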

Finally, in Section 5 we establish that H^n is transient for some parameters in D. More precisely we show:

Theorem 5. Let β_0 > 0 and β_2 ∈ ]β_0, 2β_0[. Then there exists B > β_0 such that H^n is transient for any β_1 > B and n ≥ 5.

Before turning to the proofs, we briefly comment on the interest of the above assertions, with the following diagram in mind. In Theorem 4, condition (a) improves the only sufficient condition for ergodicity in D established so far, namely (1.2). Condition (b) provides, for fixed n, a right-side neighbourhood of the set {β : β_0 < β_1 ≤ β_2} in which ergodicity still holds. Condition (c) is certainly the most important one, since it yields a zone of ergodicity that does not depend on the number of sites, and Theorem 5 does the same for transience.

Figure 2: Diagram of parameters (axes β_1 and β_2, with β_0 marked; regions covered by Theorem 1, Theorem 2, and Theorems 4 and 5).

3 Proof of Theorem 2

The proof of Theorem 2 uses the following technical result.

Lemma 3. Let (Z_t, t ≥ 0) be a Markov process on some countable set E, and A ⊂ E. For any F ⊂ E we define T_F = inf{t ≥ 0 : Z_t ∈ F}. We assume that there exist p > 0, N ∈ N and some subsets B_1, . . . , B_N and C_1, . . . , C_N of E such that

(a) for any x ∉ (A ∪ B), where B = ∪_{i=1}^N B_i, we have

P_x(T_B < +∞) = 1,


(b) for any i ∈ {1, . . . , N} and x ∈ B_i\A,

P_x(T_{C_i∪A} < +∞) = 1, and P_x(Z_{T_{C_i∪A}} ∈ A) ≥ p.

Then for any x ∈ A^c, we have P_x(T_A < +∞) = 1.

Proof. We consider the partition (B′_1, . . . , B′_N) of the set B given by B′_1 = B_1 and

B′_k = B_k \ ∪_{i=1}^{k−1} B_i, 2 ≤ k ≤ N.

We start off by defining inductively an increasing sequence of stopping times. For the well-definedness of this sequence we add an element ∂ to the set E and use the conventions inf ∅ = ∞ and Z_∞ = ∂.

Let x ∉ A. We define

τ_1 := inf{t ≥ 0 : Z_t ∈ B},

and let i_1 be such that Z_{τ_1} ∈ B′_{i_1}, and

τ′_1 := inf{t ≥ τ_1 : Z_t ∈ C_{i_1} ∪ A}, if Z_{τ_1} ∉ A; τ′_1 := ∞, otherwise.

For n ≥ 2, we define

τ_n := inf{t ≥ τ′_{n−1} : Z_t ∈ B}, if Z_{τ′_{n−1}} ∉ A ∪ {∂}; τ_n := ∞, otherwise,

and let i_n be such that Z_{τ_n} ∈ B′_{i_n} if τ_n < ∞, and

τ′_n := inf{t ≥ τ_n : Z_t ∈ C_{i_n} ∪ A}, if Z_{τ_n} ∉ A ∪ {∂}; τ′_n := ∞, otherwise.

In this construction the sequence (Z_{τ_1}, Z_{τ′_1}, . . . , Z_{τ_n}, Z_{τ′_n}, . . . ) is such that Z_{τ_1} ∉ A, Z_{τ′_1} ∉ A, Z_{τ_2} ∉ A, . . . until one of its terms belongs to A, and all the following terms are equal to ∂. Proceeding by induction, the strong Markov property and assumptions (a) and (b) easily yield

∀n ≥ 1, P_x(Z_{τ_1} ∉ A, Z_{τ′_1} ∉ A, . . . , Z_{τ_n} ∉ A, Z_{τ′_n} ∉ A) ≤ (1 − p)^n.

Letting n go to infinity in this inequality, we get P_x(∀t ≥ 0, Z_t ∉ A) = 0.

Proof of Theorem 2. We consider the following subsets of Z^{n−1}. To simplify notation, inside braces we denote by x any configuration whose shape is h:

- B_i = {h ∈ Z^{n−1} : h(i) = 0}, 1 ≤ i ≤ n − 1,
- A_i = {h ∈ Z^{n−1} : h(i − 1) ≥ 0, h(i) ≤ 0}, 2 ≤ i ≤ n − 1,
- A = ∪_{i=2}^{n−1} A_i,
- B = ∪_{i=1}^{n−1} B_i,
- C_1 = {h ∈ Z^{n−1} : min(x(1), x(2)) = x(3)},
- C_i = {h ∈ Z^{n−1} : min(x(i), x(i + 1)) = max(x(i − 1), x(i + 2))}, 2 ≤ i ≤ n − 2,
- C_{n−1} = {h ∈ Z^{n−1} : min(x(n − 1), x(n)) = x(n − 2)}.


C_i is the set of configurations in which the lower site of the block {i, i + 1} is at the same level as the highest site among the sites neighbouring this block (there are two such sites unless i = 1 or i = n − 1). A moment's thought will convince the reader of the following fact: in any configuration h ∈ (A ∪ B)^c, there must be a unique site with maximal height. This will be used several times in this section.

Before turning to the proof, we introduce further notation. If 1 ≤ a ≤ b ≤ n and x_0 ∈ N^{b−a+1}, we denote by

(X^{a:b,x_0,t_0}_t, t ≥ 0) (3.1)

the crystal process with b − a + 1 sites starting from x_0 and defined in the same way as in (1.1), but using the Poisson processes N_{k,j}(t_0 + ·), a ≤ j ≤ b, 0 ≤ k ≤ 2. When t_0 = 0 this superscript will be dropped. Moreover, for any vector x ∈ R^n, we let x(a : b) := (x(a), x(a + 1), . . . , x(b)).

We begin with the proof of (i), proceeding by induction on n. The case n = 1 is straightforward, and for n = 2 the result is a consequence of Proposition 2. We now take n ≥ 3 and assume that (i) holds for any k < n. For Y ⊂ Z^{n−1} we define T_Y = inf{t ≥ 0 : H^n_t ∈ Y}, and we also use the following notations:

E_Y(t) = {∀s ≥ t, H^n_s ∈ Y}, and F_Y = ∪_{t≥0} E_Y(t).

Let i ∈ {2, . . . , n − 1}. On the event E_{A_i}(t), after time t the value of X^n_t(i) is increased by one unit at the jump times of N_{2,i}, and only at these times. Indeed, for s ≥ t, if both H^n_s(i − 1) > 0 and H^n_s(i) < 0 then V_i(H^n_s) = 2, and if one of them is 0 then any jump of site i is forbidden by the event E_{A_i}(t). Consequently we have E_{A_i}(t) ⊂ {∀s ≥ t, X^n_s(i) = X^n_t(i) + N_{2,i}(s) − N_{2,i}(t)}, so for any x ∈ N^n,

P_x-a.s., F_{A_i} ⊂ {lim t^{−1} X^n_t(i) = β_2}. (3.2)

We now use the fact that as long as H^n_t ∈ A_i, the vectors X^n_t(1 : i − 1) and X^n_t(i + 1 : n) evolve like two independent crystal processes with i − 1 (resp. n − i) sites. Namely, on the event E_{A_i}(t_0) ∩ {X^n_{t_0} = x_0}, we have

- X^n_{t_0+t}(1 : i − 1) = X^{1:i−1,x_0^ℓ,t_0}_t, where x_0^ℓ = x_0(1 : i − 1), and
- X^n_{t_0+t}(i + 1 : n) = X^{i+1:n,x_0^r,t_0}_t, where x_0^r = x_0(i + 1 : n).

Thus the inductive hypothesis ensures that P_x-a.s.,

F_{A_i} ⊂ {t^{−1} X^n_t(1 : i − 1) and t^{−1} X^n_t(i + 1 : n) both converge, and their limits have the form (2.1)}. (3.3)

Thanks to (3.2) and (3.3), to complete the proof it is sufficient to show that

P_h(∪_{i=2}^{n−1} F_{A_i}) = 1. (3.4)

We first prove the existence of a constant r > 0 such that for any h ∈ A_i,

P_h(E_{A_i}(0)) ≥ r. (3.5)

On the one hand, starting from h ∈ A_i two transitions suffice to make site i strictly lower than its two neighbours, so there exists r_1 > 0 such that for any h ∈ A_i,

P_h(∀t ≤ 1, H^n_t ∈ A_i ; H^n_1(i − 1) > 0 and H^n_1(i) < 0) ≥ r_1. (3.6)


On the other hand, if h′ is such that h′(i − 1) > 0 and h′(i) < 0, we have the P_{h′}-a.s. inclusion {∀t ≥ 0, N_{2,i}(t) ≤ min(N_{0,i−1}(t), N_{0,i+1}(t))} ⊂ E_{A_i}(0). Since β_2 < β_0, basic considerations about Poisson processes give the existence of r_2 > 0 such that for any h′ as above,

P_{h′}(E_{A_i}(0)) ≥ r_2. (3.7)

Hence (3.5) with r = r_1 r_2 follows from (3.6) and (3.7). Finally, (3.4) will follow from (3.5), the strong Markov property, and the fact that for any h ∉ A,

P_h(T_A < +∞) = 1, (3.8)

which now remains to be shown. To show (3.8) we shall check that assumptions (a) and (b) of Lemma 3 are fulfilled.

For (a) we take h ∉ (A ∪ B). Let i be the unique site with maximal height in configuration h. If i > 2 then h(i − 2), h(i − 1) < 0 (and if i = 2 it is still the case by convention), so that on the event {T_B > t} we have H^n_t(i − 1) = h(i − 1) + N_{1,i−1}(t) − N_{0,i}(t), P_h-a.s. Thus P_h(T_B = +∞) ≤ P_h(∀t ≥ 0, h(i − 1) + N_{1,i−1}(t) − N_{0,i}(t) < 0) = 0. By the symmetry of the process, the case i = 1 may be treated as the case i = n.

We now turn to (b), so we take an initial condition h ∈ B_i\A. It is easy to see that there is some i ∈ {2, . . . , n − 1} such that

h(1) < 0, . . . , h(i − 1) < 0, h(i) = 0, h(i + 1) > 0, . . . , h(n − 1) > 0. (3.9)

Note that on the event {T_{C_i∪A} > t} all strict inequalities in (3.9) have to be preserved up to time t, hence {T_{C_i∪A} > t} ⊂ {∀s ∈ [0, t], V_{i−1}(H^n_s) = 1}. Consequently we have, for any configuration x whose shape is h:

P_x-a.s., {T_{C_i∪A} > t} ⊂ {X^n_t(i − 1) = x(i − 1) + N_{1,i−1}(t)}.

From a similar argument we also get:

P_x-a.s., {T_{C_i∪A} > t} ⊂ {(X^n_t(i), X^n_t(i + 1)) = X^{i:i+1,(0,0)}_t},

and combining the two last inclusions gives

{T_{C_i∪A} > t} ⊂ {H^n_t(i − 1) = h(i − 1) + N_{1,i−1}(t) − X^{i:i+1,(0,0)}_t(1)}, P_x-a.s.

Letting t → ∞ then gives

P_h(T_{C_i∪A} = +∞) ≤ P(∀t ≥ 0, h(i − 1) + N_{1,i−1}(t) − X^{i:i+1,(0,0)}_t(1) < 0). (3.10)

If β_1 > β_0, the probability in (3.10) is equal to 0 since a.s.,

lim (1/t)(h(i − 1) + N_{1,i−1}(t) − X^{i:i+1,(0,0)}_t(1)) = β_1 − v_2 > 0,

where the last inequality is an easy consequence of the definition of v_2. If β_1 = β_0, this probability is still null, since h(i − 1) + X^{i:i+1,(0,0)}_t(1) − N_{1,i−1}(t) is then a symmetric random walk on Z. Now by symmetry the case i = 1 may be treated like the case i = n − 1.

Finally, the distribution of the process (X^2_t, t ≥ 0) being exchangeable, we have

P_h(H^n_{T_{C_i∪A}} ∈ C_i ∩ A) ≥ P_h(H^n_{T_{C_i∪A}} ∈ C_i ∩ A^c).

In particular, P_h(H^n_{T_{C_i∪A}} ∈ A) ≥ 1/2, and this concludes the proof of (i).

The proofs of (ii) and (iii) are based on the same ideas. Let us continue with (ii). It is straightforward for n = 1. For n = 2 it is easy: the process H^2_t ∈ Z is a nearest-neighbour random walk, namely |H^2_t| is increased by one unit at rate β_0 and decreased by one unit at rate β_1 (except of course at 0). We then have P_h(F_N ∪ F_{−N}) = 1, and this allows us to conclude.

As we did for (i), we shall use Lemma 3 to show that P_h(T_A < +∞) = 1 for h ∉ A, and conclude by induction. This time however, this is true only for n ≥ 4, so we first have to treat the case n = 3 separately.

For n = 3 we let K_1 = {h ∈ Z² : h(1) ≥ 0, h(2) ≤ 0} and K_2 = {h ∈ Z² : h(1) ≤ 0, h(2) ≥ 0}. As in the proof of (i), there exists r > 0 such that P_h(E_{K_i}(0)) ≥ r for any h ∈ K_i, and once we know that H^3_t stays forever in one of these two sets, we are done. Putting K = K_1 ∪ K_2, it is again sufficient, thanks to the Markov property, to show that for h ∈ K^c we have P_h(T_K < +∞) = 1. Take for example h(1), h(2) < 0. On the event {T_K > t} we have H^3_t(1) = h(1) + N_{1,1}(t) − N_{1,2}(t), P_h-a.s., so P_h(T_K = +∞) ≤ P(∀t ≥ 0, h(1) + N_{1,1}(t) − N_{1,2}(t) < 0) = 0 because of the recurrence of the symmetric random walk on Z.

We now fix n ≥ 4 and check (a) in Lemma 3. Let h ∉ (A ∪ B) and let i be the unique site with maximal height in configuration h. We may suppose that i ≥ 3 without loss of generality, thanks to the symmetry. On the event {T_B > t}, we have H^n_t(i − 2) = h(i − 2) + N_{1,i−2}(t) − N_{1,i−1}(t), P_h-a.s. Again we get P_h(T_B = +∞) = 0 by the recurrence of the symmetric random walk.

Now we check (b) in Lemma 3. Let h ∈ B_i\A. We suppose that 2 ≤ i ≤ n − 1, since the case i = 1 is the same as i = n − 1 thanks to the symmetry. For any configuration h and i < j, we denote by

∆_{i,j}(h) = h(i) + · · · + h(j − 1)

the height difference between sites i and j. Then, on the event {T_{C_i∪A} > t}, we have P_h-a.s.,

(H^n_t(i − 1), ∆_{i−1,i+1}H^n_t) = (h(i − 1), ∆_{i−1,i+1}h) + (N_{1,i−1}(t), N_{1,i−1}(t)) − X^{i:i+1,(0,0)}_t.

We define the events

G_1(t) = {∀s ≥ t, X^{i:i+1,(0,0)}_s(1) < X^{i:i+1,(0,0)}_s(2)},
G_2(t) = {∀s ≥ t, X^{i:i+1,(0,0)}_s(1) > X^{i:i+1,(0,0)}_s(2)}.

We have {T_{C_i∪A} = +∞} ∩ G_1(t) ⊂ {∀s ≥ t, H^n_s(i − 1) = H^n_t(i − 1) + (N_{1,i−1}(s) − N_{1,i−1}(t)) − (N_{1,i}(s) − N_{1,i}(t))}. Thus, using the recurrence of the symmetric random walk, we get

P_h({T_{C_i∪A} = +∞} ∩ G_1(t)) ≤ P_h(∀s ≥ t, H^n_t(i − 1) + (N_{1,i−1}(s) − N_{1,i−1}(t)) − (N_{1,i}(s) − N_{1,i}(t)) < 0) = 0. (3.11)

Similarly, we have {T_{C_i∪A} = +∞} ∩ G_2(t) ⊂ {∀s ≥ t, ∆_{i−1,i+1}H^n_s = ∆_{i−1,i+1}H^n_t + (N_{1,i−1}(s) − N_{1,i−1}(t)) − (N_{1,i+1}(s) − N_{1,i+1}(t))}, and we deduce that

P_h({T_{C_i∪A} = +∞} ∩ G_2(t)) = 0. (3.12)

From the above remark on the crystal process with 2 sites, we obtain

P((∪_{t≥0} G_1(t)) ∪ (∪_{t≥0} G_2(t))) = 1,

and consequently (3.11) and (3.12) imply that P_h({T_{C_i∪A} = +∞}) = 0. The fact that P_h(H^n_{T_{C_i∪A}} ∈ A) ≥ 1/2 follows from a symmetry argument as in the proof of (i).

Finally, the proof of (iii) is analogous to the proof of (i). We shall show that with probability 1, some sites become, and remain forever, higher than their neighbours. When this happens the configuration is broken into two disjoint parts, but this time infinite boundaries can be created and have to be taken into account. For this reason it is necessary to study the three types of boundary conditions (0, 1 or 2 infinite boundaries) to make the induction work. Thus our inductive hypothesis contains three statements. Let

(H_n): For any x ∈ N^n, t^{−1} X^n_t converges P_x-a.s. (resp. P^∞_x-a.s. and P^{0,∞}_x-a.s.) to some random variable G (resp. G^∞ and G^{0,∞}), which takes the form

(c_ℓ, β_0, a_1, β_0, a_2, . . . , a_{k−1}, β_0, a_k, β_0, c_r), (3.13)

where the a_i's are as in (2.3), and c_ℓ, c_r are given by:

- for G, c_ℓ, c_r = (β_1) or (−);
- for G^{0,∞}, c_ℓ = (β_1) or (−), and c_r = (β_2) or (v_{2,∞}, v_{2,∞});
- for G^∞, c_ℓ, c_r = (β_2) or (v_{2,∞}, v_{2,∞}).

It is tedious but easy to check that vectors of the form (3.13) concatenate together into a vector of E_3. Since (H_1) and (H_2) are straightforward, the problem is again reduced to showing that the separation into two blocks occurs almost surely for n ≥ 3. Putting D_i = {h : h(i − 1) ≤ 0, h(i) ≥ 0} for i = 1, . . . , n (the signs of h(0) and h(n) are imposed by the boundaries), we have to show that:

P_h(∪_{i=1}^n F_{D_i}) = 1, P^∞_h(∪_{i=2}^{n−1} F_{D_i}) = 1, and P^{0,∞}_h(∪_{i=1}^{n−1} F_{D_i}) = 1.

But β_0 is now the largest parameter, so we easily get the analogue of (3.5) with A_i replaced by D_i, and it remains to prove that

P_h(T_D < +∞) = 1, P^∞_h(T_{D^∞} < +∞) = 1, and P^{0,∞}_h(T_{D^{0,∞}} < +∞) = 1,

where D = ∪_{i=1}^n D_i, D^∞ = ∪_{i=2}^{n−1} D_i and D^{0,∞} = ∪_{i=1}^{n−1} D_i.

The first equality is straightforward, since any configuration belongs to the set D. To prove the second and third equalities we can follow exactly the proof of (i), except that A_i is replaced by D_i, β_2 and β_0 invert their roles, v_2 is replaced by v_{2,∞}, and the sets C_i are defined with opposite inequalities.

4 Proof of Theorems 3 and 4

We recall that in this section we always assume that $\beta_0 < \beta_2 < \beta_1$. We first need to introduce some further notation:

- $\Delta^{s,t}_j X^n := X^n_t(j) - X^n_s(j)$;
- $\tau^j_t := \sup\{s \le t : \Delta_j X^n_s = 0\}$. This is not a stopping time.
- $P_\lambda$ will stand for a random variable with Poisson($\lambda$) distribution.
- $a^+ := \max(a, 0)$, $a \in \mathbb{R}$.
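As a point of reference for the dynamics studied in this section, here is a minimal Gillespie-style simulation sketch of the deposition process. It assumes the rule recalled in Section 2, namely that a square is deposited on pile $i$ at rate $\beta_{V_i(x)}$, where $V_i(x)$ is the number of neighbouring piles strictly higher than pile $i$ (boundary piles have a single neighbour); the function name and interface are our own illustration, not part of the paper.

```python
import random

def simulate(n, beta, t_max, seed=0):
    """Gillespie simulation of the n-site deposition process.

    beta = (beta0, beta1, beta2); a square is deposited on pile i at rate
    beta[V_i(x)], where V_i(x) is the number of neighbouring piles strictly
    higher than pile i (assumed dynamics, see Section 2 of the paper).
    """
    rng = random.Random(seed)
    x = [0] * n
    t = 0.0
    while True:
        # current deposition rate of each pile
        rates = []
        for i in range(n):
            v = sum(1 for j in (i - 1, i + 1) if 0 <= j < n and x[j] > x[i])
            rates.append(beta[v])
        total = sum(rates)
        t += rng.expovariate(total)
        if t > t_max:
            return x
        # choose the pile proportionally to its deposition rate
        u, acc = rng.random() * total, 0.0
        for i, r in enumerate(rates):
            acc += r
            if u <= acc:
                x[i] += 1
                break
```

With $\beta_0 < \beta_2 < \beta_1$ the locally highest piles grow slowest, which is the mechanism behind the tightness of the differences $\Delta_j X^n$ studied below.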

Remark. Since $\Delta_j X^n_t$ has the same distribution as $-\Delta_{n-j} X^n_t$, showing the exponential tightness of $(H^n_t, t \ge 0)$ amounts to checking that for any $j$, $P_0(\Delta_j X^n_t \ge k) = O^t_{\exp}(k)$, that is, exponential tightness for $((\Delta_j X^n_t)^+, t \ge 0)$.

Lemma 4. We assume that for some $C_j, \alpha_j > 0$,
$$\forall t \ge 0,\ \forall \ell \in \mathbb{N} \cap [0,t],\qquad P_0\big(\Delta_j X^n_t > 0;\ \tau^j_t \in [\ell-1, \ell[\big) \le C_j e^{-\alpha_j (t-\ell)}. \tag{4.1}$$
Then $((\Delta_j X^n_t)^+, t \ge 0)$ is exponentially tight.


Proof. Let $t \ge 0$, and put $m = \min\{q \in \mathbb{N} : t - q \le k/(2\beta_1)\}$. We have $k/(4\beta_1) \le t - m \le k/(2\beta_1)$, as soon as $k \ge 4\beta_1$ and $t \ge k/(2\beta_1)$. But we may suppose these two restrictions fulfilled: the first one because the conclusion does not depend on the values of $P_0(\Delta_j X^n_t \ge k)$ for any finite number of $k$, and the second one because, if it is not fulfilled, then the conclusion easily follows from
$$P_0(\Delta_j X^n_t \ge k) \le P(P_{\beta_1 t} \ge k) \le P(P_{k/2} \ge k) = O_{\exp}(k).$$
We decompose
$$P_0(\Delta_j X^n_t \ge k) \le P_0(\Delta_j X^n_t \ge k,\ \tau^j_t \ge m) + P_0(\Delta_j X^n_t > 0,\ \tau^j_t \le m).$$
In this sum, the first term is less than $P(N_{j,1}(t) - N_{j,1}(m) \ge k) \le P(P_{\beta_1(t-m)} \ge k) \le P(P_{k/2} \ge k) = O_{\exp}(k)$, and the second term is bounded by
$$\sum_{\ell \le m} C_j e^{-\alpha_j(t-\ell)} \le \sum_{t-\ell \ge k/(4\beta_1)} C_j e^{-\alpha_j(t-\ell)} \le \sum_{u \ge 0} C_j e^{-\alpha_j(k/(4\beta_1)+u)} = C_j \big(1 - e^{-\alpha_j}\big)^{-1} e^{-\alpha_j k/(4\beta_1)}.$$
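The bound $P(P_{k/2} \ge k) = O_{\exp}(k)$ used twice in the proof above is a standard Poisson deviation estimate: Chernoff's method gives $P(P_{k/2} \ge k) \le e^{-k(\ln 2 - 1/2)}$. The following snippet (an illustration, not part of the argument; names are ours) computes the exact tail and checks it against this bound.

```python
import math

def poisson_tail(lam, k):
    """P(Poisson(lam) >= k): one minus the pmf summed over 0..k-1."""
    cdf, term = 0.0, math.exp(-lam)
    for i in range(k):
        cdf += term
        term *= lam / (i + 1)
    return 1.0 - cdf

# Chernoff bound: P(Poisson(k/2) >= k) <= exp(-k (ln 2 - 1/2))
for k in (10, 20, 40):
    assert poisson_tail(k / 2.0, k) <= math.exp(-k * (math.log(2.0) - 0.5))
```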

Lemma 5. Let $\beta_0 < \beta_2 < \beta_1$ and $1 \le j \le n-1$. We suppose that for $i = 1, \ldots, j-1$, $((\Delta_i X^n_t)^+, t \ge 0)$ is exponentially tight, and that there exists $d_j < \beta_2$ such that
$$P_0\big(\tilde{X}^j_t(j) \ge d_j t\big) = O_{\exp}(t), \tag{4.2}$$
where $\tilde{X}^j$ is defined by (2.4). Then $((\Delta_j X^n_t)^+, t \ge 0)$ is exponentially tight.

For $j = n-1$, it is not necessary to assume (4.2).

Proof. We first take $j \le n-2$, and choose a constant $L > 0$ such that
$$d_j + \frac{j}{L} < \beta_2. \tag{4.3}$$
The conclusion will follow from (4.1), which we now prove. The events
$$A_1 := \Big\{\max_{i=1,\ldots,j-1} \Delta_i X^n_\ell > \frac{t-\ell}{L}\Big\} \quad\text{and}\quad A_2 := \Big\{\Delta_j X^n_\ell > \frac{t-\ell}{L},\ \tau^j_t \in [\ell-1, \ell[\Big\}$$
satisfy
$$P_0(A_1),\ P_0(A_2) \le D_j e^{-\gamma_j(t-\ell)}, \quad\text{for some } D_j, \gamma_j > 0.$$
The bound for $P_0(A_1)$ holds by assumption, and the bound for $P_0(A_2)$ holds because $P_0(A_2) \le P\big(N_{j,1}(\ell) - N_{j,1}(\ell-1) > \frac{t-\ell}{L}\big) = P\big(P_{\beta_1} > \frac{t-\ell}{L}\big)$. We now remark that
$$\{\Delta_j X^n_t > 0;\ \tau^j_t \in [\ell-1, \ell[\} \cap A_1^c \cap A_2^c \subset \Big\{\Delta_j X^n_s > 0,\ \forall s \in [\ell, t];\ \max_{i=1,\ldots,j} \Delta_i X^n_\ell \le \frac{t-\ell}{L}\Big\},$$
so it now remains to show that for some $C_j, \alpha_j > 0$,
$$\forall t \ge 0,\ \forall \ell \le t,\qquad P_0\Big(\Delta_j X^n_s > 0,\ \forall s \in [\ell, t];\ \max_{i=1,\ldots,j} \Delta_i X^n_\ell \le \frac{t-\ell}{L}\Big) \le C_j e^{-\alpha_j(t-\ell)}.$$
Denoting by $A$ this last event, we note that $A \subset \{\Delta^{\ell,t}_j X^n > (d_j + \frac{j-1}{L})(t-\ell)\} \cup \{\Delta^{\ell,t}_{j+1} X^n \le (d_j + \frac{j}{L})(t-\ell)\}$, so
$$P_0(A) \le P_0\Big(A \cap \big\{\Delta^{\ell,t}_{j+1} X^n \le (d_j + \tfrac{j}{L})(t-\ell)\big\}\Big) + P_0\Big(A \cap \big\{\Delta^{\ell,t}_j X^n > (d_j + \tfrac{j-1}{L})(t-\ell)\big\}\Big). \tag{4.4}$$
Since $\{\Delta^{\ell,t}_{j+1} X^n \le (d_j + \frac{j}{L})(t-\ell)\} \cap \{\Delta_j X^n_s > 0,\ \forall s \in [\ell,t]\} \subset \{N_{j+1,2}(t) - N_{j+1,2}(\ell) \le (d_j + \frac{j}{L})(t-\ell)\}$, the first term in the sum (4.4) is less than
$$P\Big(P_{\beta_2(t-\ell)} \le (d_j + \tfrac{j}{L})(t-\ell)\Big),$$
which is $O_{\exp}(t-\ell)$ by (4.3). Putting $E_j = \{x \in \mathbb{N}^j : \max_{i=1,\ldots,j-1} \Delta_i x \le \frac{t-\ell}{L}\}$ and using the Markov property, the second term in (4.4) is less than
$$\sup_{x \in E_j} P_x\Big(\Delta^{0,t-\ell}_j X^j > (d_j + \tfrac{j-1}{L})(t-\ell)\Big) \le \sup_{x \in E_j} P_x\Big(\Delta^{0,t-\ell}_j \tilde{X}^j > (d_j + \tfrac{j-1}{L})(t-\ell)\Big) \le P_0\big(\tilde{X}^j_{t-\ell}(j) > d_j(t-\ell)\big) = O_{\exp}(t-\ell),$$
where the first inequality follows from Lemma 2, the second one from Lemma 1 and the fact that $\max_{i=1,\ldots,j} x(i) - x(j) \le (j-1)(t-\ell)/L$ for $x \in E_j$, and the equality is assumption (4.2). This concludes the proof for $j \le n-2$.

We now treat the case $j = n-1$. Applying Theorem 1 to $\tilde{X}^n$, we get (4.2) for some constant $d_{n-1} < \beta_1$. This time we take $L$ such that $d_{n-1} + \frac{n-1}{L} < \beta_1$. We still have (4.4). Note that on the event $\{\Delta_{n-1} X^n_t > 0\}$, we must have $V_n(X^n_t) = 1$. Hence in the sum (4.4) we proceed as for $j < n-1$ for the second term, and the first term is less than
$$P\Big(P_{\beta_1(t-\ell)} \le (d_{n-1} + \tfrac{n-1}{L})(t-\ell)\Big) = O_{\exp}(t-\ell).$$

Proof of Theorem 3. With (2.5) in mind, our hypothesis implies that $\beta_2 > \tilde{d}_k(\beta_1)$ for $k \le n$. Hence we only have to prove the desired result with $k = n+2$. Let us take $r \in\, ]\tilde{d}_n(\beta_1), \beta_2[$. By Lemma 2, we have
$$P_0\big(\tilde{X}^j_t(j) \ge rt\big) = O_{\exp}(t), \qquad j = 1, \ldots, n.$$
We show by induction on $j$ that for $j = 1, \ldots, n+1$, we have:
$$(H_j):\ \big((\Delta_j X^{n+2}_t)^+, t \ge 0\big) \text{ is exponentially tight},$$
which by the remark preceding Lemma 4 is a sufficient condition for $H^{n+2}$ to be ergodic.

For $j = 1$ we simply apply Lemma 5, whose assumptions are clearly satisfied since $\beta_2 > \beta_0$ and $\tilde{X}^1_t$ is a simple Poisson process with intensity $\beta_0$. For $j \le n$, the fact that $(H_i)$, $i = 1, \ldots, j-1$, imply $(H_j)$ is a direct consequence of Lemma 5. For $j = n+1$ it is still the case, using the last assertion of Lemma 5.

Proof of Theorem 4. In the light of Theorem 3, we shall be able to conclude if we show that for any $\varepsilon > 0$:
$$P_0\big(\tilde{X}^n_t(n) \ge (n\beta_0 + \varepsilon)t\big) = O_{\exp}(t) \tag{4.5}$$
$$P_0\Big(\tilde{X}^n_t(n) \ge \Big(\frac{(n-1)\beta_1 + \beta_0}{n} + \varepsilon\Big)t\Big) = O_{\exp}(t) \tag{4.6}$$
$$P_0\big(\tilde{X}^n_t(n) \ge (4\sqrt{2}\sqrt{\beta_1\beta_0} + \varepsilon)t\big) = O_{\exp}(t) \tag{4.7}$$

First, (4.5) simply follows from $\tilde{X}^n_t(n) \le \max_{i=1,\ldots,n} \tilde{X}^n_t(i)$ and the fact that the process $\max_{i=1,\ldots,n} \tilde{X}^n_t(i)$ is dominated by a Poisson process with intensity $n\beta_0$.

We now prove (4.6). For notational convenience we define $g_n := n^{-1}\big((n-1)\beta_1 + \beta_0\big)$. Let us take $\eta < 2(n-1)^{-1}\varepsilon$ and let $\delta = n\varepsilon - n(n-1)\eta/2 > 0$. We have
$$P_0\big(\tilde{X}^n_t(n) \ge (g_n + \varepsilon)t\big) \le P_0\Big(\tilde{X}^n_t(n) \ge (g_n + \varepsilon)t;\ \min_{i=1,\ldots,n-1} \Delta_i \tilde{X}^n_t \ge -\eta t\Big) + P_0\Big(\min_{i=1,\ldots,n-1} \Delta_i \tilde{X}^n_t \le -\eta t\Big).$$

For $x \in \mathbb{N}^n$ we define
$$\Sigma x = \sum_{i=1}^{n} x(i).$$

Then
$$P_0\Big(\tilde{X}^n_t(n) \ge (g_n + \varepsilon)t;\ \min_{1 \le i \le n-1} \Delta_i \tilde{X}^n_t \ge -\eta t\Big) \le P_0\Big(\Sigma \tilde{X}^n_t \ge \sum_{j=1}^{n} \big(g_n + \varepsilon - (n-j)\eta\big)t\Big) = P_0\big(\Sigma \tilde{X}^n_t \ge (n g_n + \delta)t\big) = O_{\exp}(t),$$
because in any configuration $x$, $V_j(x) = 0$ for at least one site, hence $\Sigma \tilde{X}^n_t$ is dominated by a Poisson process with intensity $n g_n$. The fact that also
$$P_0\Big(\min_{i=1,\ldots,n-1} \Delta_i \tilde{X}^n_t \le -\eta t\Big) = O_{\exp}(t)$$
is a direct consequence of Theorem 1.
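The equality $\sum_{j=1}^n \big(g_n + \varepsilon - (n-j)\eta\big) = n g_n + \delta$ used above is elementary, since $\sum_{j=1}^n (n-j) = n(n-1)/2$; a two-line numerical check (illustrative only, the function name is ours):

```python
def lhs_rhs(n, beta0, beta1, eps, eta):
    """Compare sum_{j=1..n} (g_n + eps - (n-j)*eta) with n*g_n + delta,
    where g_n = ((n-1)*beta1 + beta0)/n and delta = n*eps - n*(n-1)*eta/2."""
    g = ((n - 1) * beta1 + beta0) / n
    delta = n * eps - n * (n - 1) * eta / 2.0
    lhs = sum(g + eps - (n - j) * eta for j in range(1, n + 1))
    return lhs, n * g + delta
```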

We finally turn to the proof of (4.7), and let $t_0 := 1/(4\sqrt{2}\sqrt{\beta_1\beta_0})$. We shall show that for $k \ge 1$, $j \ge 0$ and $1 \le i \le n$,
$$p^i_{k,j} := P_0\big(\tilde{X}^n_{kt_0}(i) \ge k + j - 1\big) \le \frac{1}{2^j}. \tag{4.8}$$

This implies the desired result: if (4.8) holds, then for $\varepsilon > 0$,
$$P_0\big(\tilde{X}^n_t(n) \ge (1/t_0 + \varepsilon)t\big) \le P_0\big(\tilde{X}^n_{(\lfloor t/t_0 \rfloor + 1)t_0}(n) \ge \lfloor t/t_0 \rfloor + \lfloor \varepsilon t \rfloor\big) \le (1/2)^{\lfloor \varepsilon t \rfloor} = O_{\exp}(t).$$
Here $\lfloor t/t_0 \rfloor$ stands for the integer part of $t/t_0$. To prove (4.8) we proceed by induction, showing that $(H_\ell)$ holds for any $\ell \ge 1$, where
$$(H_\ell):\ \forall k \ge 1,\ j \ge 0 \text{ with } k + j = \ell,\ \forall i \in \{1, \ldots, n\},\quad p^i_{k,j} \le (1/2)^j.$$
In this proof we may and will suppose that
$$\beta_1 \ge 2\beta_0, \tag{4.9}$$
since otherwise we easily get $\tilde{d}_n(\beta_1) \le \beta_1 \le 2\beta_0 \le 4\sqrt{2}\sqrt{\beta_1\beta_0}$. For readability we define $\tau_{v,d} := \inf\{s \ge 0 : \tilde{X}^n_s(v) = d\}$.

A site $i$ is said to be a seed at level $\ell$ if the $\ell$-th square to be deposited at site $i$ is added at a moment when site $i$ is at least as high as its neighbours. This means that
$$V_i\big(\tilde{X}^n_{(\tau_{i,\ell})^-}\big) = 0.$$
For $i_1 \le i_2$ we say that $i_1$ extends to $i_2$ during the time interval $[s,t]$ if $N_{i_1+1,1}, N_{i_1+2,1}, \ldots, N_{i_2,1}$ jump successively between times $s$ and $t$. For $i_1 > i_2$ this definition is extended in the obvious way.

Inequality (4.8) for $j = 0$, and hence $(H_1)$, are straightforward. We now suppose that $(H_\ell)$ holds and take $i \in \{1, \ldots, n\}$ and $k, j \ge 1$ with $k + j = \ell + 1$. We make the following observation: for the locus at level $k+j-1$ of pile $i$ to be occupied at time $kt_0$, it is necessary that, in some time interval $](m-1)t_0, mt_0]$, $m \le k$, some site $u$ reaches level $k+j-1$ ($u$ being a seed at this level) and then extends to site $i$ until time $kt_0$. This leads to the following inequality:
$$p^i_{k,j} = P_0\big(\tilde{X}^n_{kt_0}(i) \ge k+j-1\big) \le \sum_{u=1}^{n} \sum_{m=1}^{k} P_0\Big(\tilde{X}^n_{(m-1)t_0}(u) < k+j-1;\ \tilde{X}^n_{mt_0}(u) \ge k+j-1;\ u \text{ is a seed at level } k+j-1;\ u \text{ extends to } i \text{ during } [\tau_{u,k+j-1}, kt_0]\Big).$$

For this last event to be realized, the three following conditions have to be satisfied:

- $\tau_{u,k+j-2} \le mt_0$;
- $N_{u,0}$ jumps at least once in the time interval $[\max(\tau_{u,k+j-2}, (m-1)t_0), mt_0]$;
- $u$ extends to $i$ after the first of these jumps.

Using the fact that the Poisson distribution satisfies $P(P_\lambda \ge 1) \le \lambda$, it follows that
$$p^i_{k,j} \le \sum_{u=1}^{n} \sum_{m=1}^{k} p^u_{m,k-m+j-1}\, \beta_0 t_0\, P\big(P_{(k-m+1)\beta_1 t_0} \ge |u-i|\big),$$
and by the inductive hypothesis,
$$\begin{aligned}
p^i_{k,j} &\le \sum_{m=1}^{k} (1/2)^{k-m+j-1} \beta_0 t_0 \sum_{u=1}^{n} P\big(P_{(k-m+1)\beta_1 t_0} \ge |u-i|\big) \\
&\le \sum_{m=1}^{k} (1/2)^{k-m+j-1} \beta_0 t_0 \sum_{v \in \mathbb{Z}} P\big(P_{(k-m+1)\beta_1 t_0} \ge |v|\big) \\
&\le (1/2)^{j-1} \beta_0 t_0 \sum_{m=1}^{k} (1/2)^{k-m} \big(1 + 2E[P_{(k-m+1)\beta_1 t_0}]\big) \\
&\le (1/2)^{j-1} \beta_0 t_0 \Big[\sum_{r=0}^{k-1} (1/2)^r + 2\beta_1 t_0 \sum_{r=0}^{k-1} (r+1)(1/2)^r\Big] \\
&\le (1/2)^{j-1} \beta_0 t_0 (2 + 8\beta_1 t_0) \\
&\le (1/2)^j.
\end{aligned}$$
The last inequality is a consequence of (4.9) and the choice of $t_0$.
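That final step, namely $\beta_0 t_0 (2 + 8\beta_1 t_0) \le 1/2$ whenever $\beta_1 \ge 2\beta_0$ and $t_0 = 1/(4\sqrt{2}\sqrt{\beta_1\beta_0})$, can be seen directly: $8\beta_0\beta_1 t_0^2 = 1/4$ and $2\beta_0 t_0 = \sqrt{\beta_0/\beta_1}/(2\sqrt{2}) \le 1/4$. A quick numerical confirmation (an illustrative sketch; the helper name is ours):

```python
import math

def final_bound(beta0, beta1):
    """Evaluate beta0*t0*(2 + 8*beta1*t0) with t0 = 1/(4*sqrt(2*beta1*beta0)).

    Under beta1 >= 2*beta0 this quantity is at most 1/2, with
    equality exactly when beta1 = 2*beta0.
    """
    t0 = 1.0 / (4.0 * math.sqrt(2.0 * beta1 * beta0))
    return beta0 * t0 * (2.0 + 8.0 * beta1 * t0)
```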

5 Proof of Theorem 5

The next lemma states that when $n = 3$, taking $\beta_1$ very large makes the growth rate $v_3$ close to its maximum value $3\beta_0$. We recall that by Corollary 1, $v_3$ exists as soon as we take $\beta_1, \beta_2 > \beta_0$.

Lemma 6. Let $\beta_1, \beta_2 > \beta_0$ and $\varepsilon > 0$. We suppose that
$$\beta_1 \ge \frac{27\beta_0^2 \beta_2}{\varepsilon(\beta_2 - \beta_0)}. \tag{5.1}$$
Then the growth rate satisfies
$$v_3 \ge 3\beta_0 - \varepsilon.$$

Proof. Here we denote by $A$ the set of configurations with a hole:
$$A = \{h \in \mathbb{Z}^2 : h(1) > 0,\ h(2) < 0\}.$$
For $x \in \mathbb{N}^3$ we also say that $x \in A$ if the shape of $x$ belongs to $A$. We define a double sequence of stopping times by letting:
$$T_0 = 0,$$
$$U_1 = \inf\{t \ge 0 : X^3_t \in A\}, \qquad T_1 = \inf\{t > U_1 : X^3_t \notin A\},$$
$$U_{k+1} = \inf\{t \ge T_k : X^3_t \in A\}, \qquad T_{k+1} = \inf\{t > U_{k+1} : X^3_t \notin A\}, \quad \text{for } k \ge 1.$$
We also define
$$Y_n := \Sigma X^3_{T_n}.$$

The desired result will follow if we show that $\lim_{n\to\infty} Y_n/T_n \ge 9\beta_0 - 3\varepsilon$. We first claim that the sequence
$$I_n := Y_n - (9\beta_0 - 3\varepsilon)T_n$$
is a submartingale. By the strong Markov property this is the case if for any $x \notin A$,
$$E_x[Y_1 - (9\beta_0 - 3\varepsilon)T_1] \ge 0. \tag{5.2}$$
To establish this inequality we make the following observations:

- Let $Z_1 = \Sigma X^3_{U_1}$ be the number of jumps before hitting $A$. Starting from any $y \notin A$ the probability that the first jump leads to $A$ is less than $\beta_0/\beta_1$. Hence by the Markov property, $Z_1$ is stochastically larger than the geometric distribution with parameter $\beta_0/\beta_1$, and consequently
$$E_x[Z_1] \ge \beta_1/\beta_0. \tag{5.3}$$

- Any $y \notin A$ with $\Sigma y \notin 3\mathbb{Z}$ has at least one site $j$ such that $V_j(y) = 1$. Thus, conditionally on $Z_1$, at least $(2Z_1/3 - 1)$ transitions until time $U_1$ occur with a rate larger than $\beta_1$, and the others occur with a rate at least $3\beta_0$. We deduce from this remark that $E_x[U_1 \mid Z_1] \le \frac{2Z_1}{3\beta_1} + \frac{Z_1/3 + 1}{3\beta_0}$, and hence
$$E_x[U_1] \le \Big(\frac{2}{3\beta_1} + \frac{1}{9\beta_0}\Big) E_x[Z_1] + \frac{1}{3\beta_0}. \tag{5.4}$$

- The configuration $X^3_{U_1}$ belongs to the set $A \cap \{x \in \mathbb{N}^3 : x(1) = x(2)+1 \text{ or } x(3) = x(2)+1\}$. But clearly, for any $y$ in that set, the exit time from $A$ starting from $y$ is stochastically smaller than the hitting time of $0$ for a birth and death process on $\mathbb{Z}_+$ starting from $1$, with birth rate $\beta_0$ and death rate $\beta_2$. Hence
$$E_x[T_1 - U_1] \le \frac{1}{\beta_2 - \beta_0}. \tag{5.5}$$

Now (5.4) and (5.5) yield
$$\begin{aligned}
E_x[Y_1 - (9\beta_0 - 3\varepsilon)T_1] &\ge E_x[Z_1] - (9\beta_0 - 3\varepsilon)\Big[\Big(\frac{2}{3\beta_1} + \frac{1}{9\beta_0}\Big) E_x[Z_1] + \frac{1}{3\beta_0} + \frac{1}{\beta_2 - \beta_0}\Big] \\
&= \Big(\frac{2\varepsilon - 6\beta_0}{\beta_1} + \frac{\varepsilon}{3\beta_0}\Big) E_x[Z_1] - (9\beta_0 - 3\varepsilon)\Big(\frac{1}{3\beta_0} + \frac{1}{\beta_2 - \beta_0}\Big).
\end{aligned}$$

Condition (5.1) implies that $(2\varepsilon - 6\beta_0)/\beta_1 + \varepsilon/(3\beta_0) \ge 0$. Using (5.3) we then have
$$\begin{aligned}
E_x[Y_1 - (9\beta_0 - 3\varepsilon)T_1] &\ge \Big(\frac{2\varepsilon - 6\beta_0}{\beta_1} + \frac{\varepsilon}{3\beta_0}\Big) \frac{\beta_1}{\beta_0} - (9\beta_0 - 3\varepsilon)\Big(\frac{1}{3\beta_0} + \frac{1}{\beta_2 - \beta_0}\Big) \\
&\ge \frac{\varepsilon\beta_1}{3\beta_0^2} - \frac{9\beta_2}{\beta_2 - \beta_0},
\end{aligned}$$
which is nonnegative under (5.1). Thus (5.2) holds and we conclude that the sequence $I_n$ is a submartingale.

We define another sequence $(S_k, k \ge 0)$ of integers by letting $S_0 = 0$, and for $k \ge 0$,
$$S_{k+1} = \inf\{n > S_k : H^3_{T_n} = 0\}.$$
We remark that $T_{S_1} = \inf\{t \ge 0 : H^3_t = 0 \text{ and } H^3_{t-} = (1, -1)\}$. But it is well known that for any ergodic Markov process on a countable set and any two states $s_1$ and $s_2$ with positive jump rate from $s_1$ to $s_2$, the time of first transition from $s_1$ to $s_2$ has finite expectation. From this remark we deduce that $E_0[T_{S_1}] < \infty$, and consequently we also have $E_0|I_{S_1}| < \infty$. The submartingale property gives us $E_0[I_{S_1}] \ge 0$. Since by the Markov property $I_{S_k}$ is the sum of $k$ independent copies of variables distributed as $I_{S_1}$, and the same holds for $T_{S_k}$, an application of the law of large numbers gives:
$$3v_3 = \lim_{k\to\infty} \frac{Y_{S_k}}{T_{S_k}} = \lim_{k\to\infty} \frac{I_{S_k}}{T_{S_k}} + 9\beta_0 - 3\varepsilon = \frac{E_0[I_{S_1}]}{E_0[T_{S_1}]} + 9\beta_0 - 3\varepsilon \ge 9\beta_0 - 3\varepsilon.$$
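The bound (5.5) in the proof above rests on the classical fact that for a birth and death process on $\mathbb{Z}_+$ with birth rate $\beta_0$ and death rate $\beta_2 > \beta_0$, the expected hitting time of $0$ starting from $1$ is $1/(\beta_2 - \beta_0)$. A Monte Carlo sanity check of this identity (an illustrative sketch; the function name and parameters are ours):

```python
import random

def mean_hit_zero(birth, death, n_runs=20000, seed=7):
    """Monte Carlo estimate of E_1[T_0] for a birth/death chain on Z_+.

    From any state x >= 1 the chain waits an Exp(birth + death) time,
    then moves up with probability birth/(birth + death), down otherwise.
    Requires death > birth so that 0 is reached almost surely.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        x, t = 1, 0.0
        while x > 0:
            rate = birth + death
            t += rng.expovariate(rate)
            x += 1 if rng.random() < birth / rate else -1
        total += t
    return total / n_runs
```

For example, with birth rate $1$ and death rate $3$ the estimate should be close to $1/(3-1) = 0.5$.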

Proof of Theorem 5. We first recall a basic fact. If $(N_t, t \ge 0)$ is a Poisson process with intensity $\lambda$, and $f$ is some nonnegative deterministic function with $\lim_{t\to\infty} f(t)/t = \mu > \lambda$, then
$$P(\forall t \ge 0,\ N_t \le f(t)) > 0. \tag{5.6}$$
From Proposition 2 and Lemma 6 with $\varepsilon = 3\beta_0 - \beta_2$, we deduce that a sufficient condition for $\beta_2 < \min(v_2, v_3)$ is that $\beta_1 > B$, where
$$B := \max\Big(\frac{\beta_0\beta_2}{2\beta_0 - \beta_2},\ \frac{27\beta_0^2\beta_2}{(3\beta_0 - \beta_2)(\beta_2 - \beta_0)}\Big).$$
For any $n \ge 5$ it is possible to decompose the $n$ sites into blocks of width 2 or 3 separated by holes of unit width. From now on we suppose for notational convenience that $n \in 3\mathbb{Z} + 2$, so we only use blocks of width 2 and hence only $\beta_2 < v_2$ is necessary. Of course this assumption could be dropped, and if it were, we would also need $\beta_2 < v_3$.

We start from the configuration $x := (1,1,0,1,1,0,\ldots,1,1,0,1,1)$ and show that the event $E = \{\forall t \ge 0,\ H^n_t \ne 0\}$ satisfies $P_x(E) > 0$. We use the notation (3.1) and remark that
$$\begin{aligned}
E &\supset \big\{\forall t \ge 0,\ X^n_t(3) < \min\big(X^n_t(2), X^n_t(4)\big)\big\} \cap \cdots \cap \big\{\forall t \ge 0,\ X^n_t(n-2) < \min\big(X^n_t(n-3), X^n_t(n-1)\big)\big\} \\
&= \big\{\forall t \ge 0,\ N_{3,2}(t) < \min\big(X^{1:2,(1,1)}_t(2), X^{4:5,(1,1)}_t(1)\big)\big\} \cap \cdots \cap \big\{\forall t \ge 0,\ N_{n-2,2}(t) < \min\big(X^{n-4:n-3,(1,1)}_t(2), X^{n-1:n,(1,1)}_t(1)\big)\big\}.
\end{aligned}$$
The process $m_t = \min\big(X^{1:2}_t(2), X^{4:5}_t(1), \ldots, X^{n-4:n-3}_t(2), X^{n-1:n}_t(1)\big)$ satisfies
$$\lim_{t\to\infty} \frac{m_t}{t} = v_2,$$
and is independent of the Poisson processes $N_{3,2}, N_{6,2}, \ldots, N_{n-2,2}$. Hence the result follows from (5.6) and the fact that $\beta_2 < v_2$.
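The proof above only needs $\beta_2 < v_2$ once $\beta_1$ is large. Under the same assumed dynamics as in the sketch of Section 4 (pile $i$ grows at rate $\beta_{V_i}$), the two-column growth rate can be estimated by simulation. The code below is our own illustration; the closed form mentioned in the comment, $v_2 = 2\beta_0\beta_1/(\beta_0+\beta_1)$, is a back-of-envelope computation from the stationary law of the gap between the two columns, not a statement quoted from the paper.

```python
import random

def estimate_v2(beta0, beta1, t_max, seed=0):
    """Estimate v_2 = lim_t min(X_t(1), X_t(2))/t for two columns.

    With two sites each pile has a single neighbour, so the rate of
    pile i is beta1 if the other pile is strictly higher, beta0 otherwise.
    """
    rng = random.Random(seed)
    x = [0, 0]
    t = 0.0
    while True:
        r0 = beta1 if x[1] > x[0] else beta0
        r1 = beta1 if x[0] > x[1] else beta0
        t += rng.expovariate(r0 + r1)
        if t > t_max:
            return min(x) / t_max
        # deposit on a pile chosen proportionally to its rate
        x[0 if rng.random() < r0 / (r0 + r1) else 1] += 1

# gap heuristics suggest v_2 = 2*beta0*beta1/(beta0 + beta1),
# i.e. roughly 12/7 for beta0 = 1, beta1 = 6
```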

References

[1] D. J. Gates and M. Westcott. Kinetics of polymer crystallization. I. Discrete and continuum models. Proc. Roy. Soc. London Ser. A, 416(1851):443–461, 1988.

[2] D. J. Gates and M. Westcott. Kinetics of polymer crystallization. II. Growth régimes. Proc. Roy. Soc. London Ser. A, 416(1851):463–476, 1988.

[3] E. D. Andjel, M. V. Menshikov, and V. V. Sisko. Positive recurrence of processes associated to crystal growth models. Ann. Appl. Probab., 16(3):1059–1085, 2006.

[4] D. J. Gates and M. Westcott. Markov models of steady crystal growth. Ann. Appl. Probab., 3(2):339–355, 1993.

[5] D. M. Sadler. On the growth of two dimensional crystals: 2. Assessment of kinetic theories of crystallization of polymers. Polymer, 28(9):1440–1455, 1987.

Acknowledgments. The author is grateful to the referee for his very careful reading

and useful suggestions. He also wishes to thank Enrique Andjel and Étienne Pardoux

for their continuous support during this work, and to express his sincere gratitude

to Alexandre Gaudillière for stimulating discussions and his important contribution to

Theorem 4.
