www.imstat.org/aihp 2015, Vol. 51, No. 2, 727–755

DOI: 10.1214/13-AIHP593

© Association des Publications de l’Institut Henri Poincaré, 2015

Random walks on discrete point processes

Noam Berger^{a,1} and Ron Rosenthal^{b,1}

^a The Hebrew University of Jerusalem and The Technical University of Munich. Einstein Institute of Mathematics, Edmond J. Safra Campus, Givat Ram, Jerusalem, 91904, Israel. E-mail: berger@math.huji.ac.il

^b ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland. E-mail: ron.rosenthal@math.ethz.ch

Received 30 October 2011; revised 4 November 2013; accepted 7 November 2013

Abstract. We consider a model for random walks on random environments (RWRE) with a random subset of Z^d as the vertices, and uniform transition probabilities on 2d points (the closest point in each of the coordinate directions). We prove that the velocity of such random walks is almost surely zero, give a partial characterization of transience and recurrence in the different dimensions, and prove a Central Limit Theorem (CLT) for such random walks, under a condition on the distance between coordinate nearest neighbors.

Résumé. Nous considérons un modèle de marches aléatoires en milieu aléatoire ayant pour sommets un sous-ensemble aléatoire de Z^d et une probabilité de transition uniforme sur 2d points (les plus proches voisins dans chacune des directions des coordonnées). Nous prouvons que la vitesse de ce type de marches est presque sûrement zéro, donnons une caractérisation partielle de transience et récurrence dans les différentes dimensions et prouvons un théorème central limite (CLT) pour de telles marches sous une condition concernant la distance entre plus proches voisins.

MSC: 60K37; 60K35

Keywords: Discrete point processes; Random walk in random environment

1. Introduction

1.1. Background

Random walks on random environments have been the object of intensive mathematical research for more than three decades.

The field deals with models from condensed matter physics, physical chemistry, and many other areas of research. The common subject of all of these models is the investigation of particle movement in inhomogeneous media. It turns out that the randomness of the medium (i.e., the environment) is responsible for some unexpected results, especially for the large scale behavior. In the general case, the random walk takes place on a countable graph (V, E), but the most investigated models deal with the graph of the d-dimensional integer lattice Z^d. For some of the results on those models see [7,14,21] and [26]. The definition of RWRE involves two steps: First the environment is randomly chosen according to some given distribution; then the random walk, which takes place on this fixed environment, is a Markov chain with transition probabilities that depend on the environment. We note that the environment is kept fixed and does not evolve during the random walk, and that the random walk, given the environment, is not necessarily reversible. The questions on RWRE come in two major flavors: quenched, in which the walk is distributed according to a given typical environment, and annealed, in which the distribution of the walk is taken according to an average over the environments.

^1 Research was partially supported by ERC StG grant 239990.


There are two main differences between the quenched and the annealed laws: First, the quenched law is Markovian, while the annealed law usually is not. Second, in most models there is an additional assumption of translation invariance of the environment, which implies that the annealed law is translation invariant, while the quenched law is not.

In contrast to most of the models for RWRE on Z^d, this work deals with non-nearest neighbor random walks.

In our case this is most visible in the estimation of E_ω[|X_n|]. Unlike nearest neighbor models, we do not have an a priori estimate on the distance covered in one step. Nonetheless, using an ergodic theorem by Nevo and Stein we manage to bound this quantity and therefore to show that the estimate E_ω[|X_n|] ≤ c(ω)√n still holds. The subject of non-nearest neighbor random walks has not been systematically studied. For results on long range percolation see [2].

For literature on the subject in the one dimensional case see [6,8] and [10]. For some results on bounded non-nearest neighbor walks see [15]. For some results that are valid in the general case see [25]. For recurrence and transience criteria, a CLT and more for random walks on random point processes, with transition probabilities between every two points decaying in their distance, see [9] and the references therein. Our model also has the property that the random walk is reversible. For some of the results on this topic see [4,5,18] and [23].

1.2. The model

We start by defining the random environment of the model, which will be a random subset of Z^d, the d-dimensional lattice of integers (we also refer to such a random environment as a random point process). Denote Ω = {0,1}^{Z^d} and let B be the Borel σ-algebra (with respect to the product topology) on Ω. For every x ∈ Z^d let θ_x : Ω → Ω be the shift along the vector x, i.e., for every y ∈ Z^d and every ω ∈ Ω we have θ_x(ω)(y) = ω(x+y). In addition let E = E(d) = {±e_i}_{i=1}^d, where e_i is the unit vector along the i-th principal axis.

Throughout this paper we assume that Q is a probability measure on Ω satisfying the following:

Assumption 1.1.

1. Q is stationary and ergodic with respect to each of the translations {θ_{e_i}}_{i=1}^d.
2. Q(P(ω) = ∅) < 1, where P(ω) = {x ∈ Z^d : ω(x) = 1}.

Let Ω_0 = {ω ∈ Ω : ω(0) = 1}. It follows from Assumption 1.1 that Q(Ω_0) > 0, and therefore we can define a new probability measure P on Ω_0 as the conditional probability of Q on Ω_0, i.e.:

P(B) = Q(B | Ω_0) = Q(B ∩ Ω_0)/Q(Ω_0),   ∀B ∈ B.   (1.1)

We denote by E_Q and E_P the expectation with respect to Q and P respectively.

Claim 1.2. For Q-almost every ω ∈ Ω, every v ∈ Z^d and every vector e ∈ E there are infinitely many k ∈ N such that v + ke ∈ P(ω).

Proof. Denote Ω_v = {ω ∈ Ω : v ∈ P(ω)} and notice that 1_{Ω_v} ∈ L^1(Ω, B, Q). Since θ_e is measure preserving and ergodic with respect to Q, by Birkhoff's Ergodic Theorem

lim_{n→∞} (1/n) ∑_{k=0}^{n−1} θ_e^k 1_{Ω_v} = E_Q[1_{Ω_v}] = Q(Ω_v) = Q(Ω_0) > 0,   Q-a.s.

Consequently, there exist Q-almost surely infinitely many integers k such that θ_e^k 1_{Ω_v} = 1, and therefore infinitely many k ∈ N such that v + ke ∈ P(ω). □

The following function measures the distance of “coordinate nearest neighbors” from the origin in an environment:

Fig. 1. An example of coordinate nearest neighbors.

Definition 1.3. For every e ∈ E we define f_e : Ω → N_+ by

f_e(ω) = min{k > 0 : θ_e^k(ω)(0) = ω(ke) = 1}.   (1.2)

Note that f_e and f_{−e} have the same distribution with respect to Q.

For every v ∈ Z^d define N_v(ω) to be the set of the 2d "coordinate nearest neighbors" in ω of v, one for each direction (see Fig. 1). More precisely, N_v(ω) = ∪_{e∈E} {v + f_e(θ_v(ω))e}. By Claim 1.2, f_e(θ_v(ω)) is Q-almost surely well defined, and therefore N_v(ω) is Q-almost surely a set of 2d points in Z^d.

We now turn to define the random walk on environments. Fix some ω ∈ Ω_0 such that |N_v(ω)| = 2d for every v ∈ P(ω). The random walk on the environment ω is defined on the probability space ((Z^d)^N, G, P_ω), where G is the σ-algebra generated by cylinder functions, as the Markov chain taking values in P(ω) with initial condition

P_ω(X_0 = 0) = 1,   (1.3)

and transition probability

P_ω(X_{n+1} = u | X_n = v) = 1/(2d) if u ∈ N_v(ω), and 0 if u ∉ N_v(ω).   (1.4)

The distribution of the random walk according to this measure is called the quenched law of the random walk, and the corresponding expectation is denoted by E_ω.
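For concreteness, here is a minimal simulation sketch of the quenched walk (ours, not part of the paper). It assumes the i.i.d. Bernoulli(p) environment of Example 1.5 below, realized lazily through a per-site seeded random generator; the values of p, seed and the number of steps are arbitrary illustrative choices.

import random

def occupied(x, y, p=0.3, seed=0):
    # Bernoulli(p) site environment on Z^2 (Example 1.5), generated lazily per site
    return random.Random(hash((x, y, seed))).random() < p

def f_dir(x, y, dx, dy, p=0.3, seed=0):
    # f_e: distance from (x, y) to the nearest occupied site in direction (dx, dy)
    k = 1
    while not occupied(x + k * dx, y + k * dy, p, seed):
        k += 1
    return k

def coordinate_neighbors(x, y, p=0.3, seed=0):
    # N_v(omega): the 2d = 4 coordinate nearest neighbors of v = (x, y)
    nbrs = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        k = f_dir(x, y, dx, dy, p, seed)
        nbrs.append((x + k * dx, y + k * dy))
    return nbrs

def quenched_walk(n_steps, p=0.3, seed=0, walk_seed=1):
    # The Markov chain of (1.3)-(1.4): start at 0 (we work under P, so the origin is
    # treated as a point of the process) and jump uniformly to a coordinate nearest neighbor.
    rng = random.Random(walk_seed)
    pos, path = (0, 0), [(0, 0)]
    for _ in range(n_steps):
        pos = rng.choice(coordinate_neighbors(*pos, p=p, seed=seed))
        path.append(pos)
    return path

print(quenched_walk(10))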

Finally, since for each G ∈ G the map ω ↦ P_ω(G) is B-measurable, we may define the probability measure ℙ = P ⊗ P_ω on (Ω_0 × (Z^d)^N, B × G) by

ℙ(B × G) = ∫_B P_ω(G) P(dω),   ∀B ∈ B, ∀G ∈ G.

The marginal of ℙ on (Z^d)^N, also denoted by ℙ, is called the annealed law of the random walk and its expectation is denoted by 𝔼.

In the proof of the high dimensional Central Limit Theorem we will assume, in addition to Assumption 1.1, the following:

Assumption 1.4.

3. There exists ε_0 > 0 such that E_P[f_e^{2+ε_0}] < ∞ for every coordinate direction e ∈ E.


1.3. Examples

Before turning to state and prove theorems regarding the model, we give a few examples of distributions of points in Z^2 which satisfy the above conditions.

Example 1.5 (Bernoulli percolation). The first obvious example of a point process satisfying the above conditions is Bernoulli vertex percolation. Fix some 0 < p < 1 and declare every point v ∈ Z^d to be in the environment independently with probability p.

Example 1.6 (Infinite component of supercritical percolation). Fix some d ≥ 2 and denote by p_c(Z^d) the critical value for Bernoulli edge percolation on Z^d. For every p_c(Z^d) < p ≤ 1 there exists with probability one a unique infinite component in Z^d, which we denote by C = C(ω). We can now define the environment by P(ω) = C(ω), i.e., the points in the environment are exactly the points of the unique infinite cluster of the percolation process.

Example 1.7. We denote by {r_n}_{n∈N} and {p_n}_{n∈N} two sequences of positive numbers, the first satisfying lim_{n→∞} r_n = ∞ and the second satisfying lim_{n→∞} p_n = 0 and p_n < 1 for every n ∈ N. We define the environment by the following procedure: For every v ∈ Z^d and n ∈ N delete the ball of radius r_n centered at v with probability p_n. If the sequence p_n converges fast enough to zero and the sequence r_n converges slowly enough to infinity, this procedure yields a random point process that satisfies the model assumptions.

Example 1.8 (Random interlacement). Fix some d ≥ 3. In [24] Sznitman introduced the model of random interlacement in Z^d. Informally this is the union of traces of simple random walks in Z^d. The random interlacement in Z^d is a distribution on points in Z^d which satisfies the above conditions (see [24], Theorem 2.1).

1.4. Main results

Our main goal is to study the behavior of random walks in this model. The results are summarized in the following theorems:

(1) Law of Large Numbers. For P-almost every ω ∈ Ω_0, the limiting velocity of the random walk exists and equals zero. More precisely:

Theorem 1.9. Let (Ω, B, Q) be a d-dimensional discrete point process satisfying Assumption 1.1. Then

ℙ( lim_{n→∞} X_n/n = 0 ) = 1.

(2) Recurrence–transience classification. We give a partial classification of recurrence and transience for random walks on discrete point processes. The precise statements are:

Proposition 1.10. Any one-dimensional random walk on a discrete point process satisfying Assumption 1.1 is ℙ-almost surely recurrent.

Theorem 1.11. Let (Ω, B, Q) be a two-dimensional discrete point process satisfying Assumption 1.1 and assume there exists a constant C > 0 such that

∑_{k=N}^∞ k·P(f_{e_i} = k) ≤ C/N,   i ∈ {1,2}, ∀N ∈ N,   (1.5)

which in particular holds whenever f_{e_i} has a second moment for i ∈ {1,2}. Then the random walk is ℙ-almost surely recurrent.

Theorem 1.12. Fix d ≥ 3 and let (Ω, B, Q) be a d-dimensional discrete point process satisfying Assumption 1.1. Then the random walk is ℙ-almost surely transient.


(3) Central Limit Theorems. We prove that one-dimensional random walks on discrete point processes satisfy a Central Limit Theorem. We also prove that in dimension d ≥ 2, under the additional Assumption 1.4, random walks on a discrete point process satisfy a Central Limit Theorem.

Theorem 1.13. Let (Ω, B, Q) be a one-dimensional discrete point process satisfying Assumption 1.1. Then E_P[f_1] < ∞ and for P-almost every ω ∈ Ω_0

lim_{n→∞} X_n/√n =_D N(0, E_P[f_1]^2),   (1.6)

where N(0, a^2) denotes the normal distribution with zero expectation and variance a^2, and the limit is in distribution.

Remark 1.14. Note that for one-dimensional random walks on discrete point processes the CLT holds even without the assumption that the variance of f_1 is finite. In particular the diffusion constant is given by the square of E_P[f_1].
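To see Theorem 1.13 and Remark 1.14 in action, here is a small simulation sketch (ours, not the authors'). It assumes the i.i.d. Bernoulli(p) environment of Example 1.5, for which f_1 is Geometric(p) and E_P[f_1] = 1/p, and it uses the coupling X_n = p_{Y_n} with a simple random walk Y that is made precise in Section 4; the values of p, n and the number of trials are arbitrary.

import random

def geometric(p, rng):
    # number of Bernoulli(p) trials until the first success: the law of f_1 under P
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def sample_X_over_sqrt_n(n, p, rng):
    # One-dimensional walk on a Bernoulli(p) point process via the coupling of Section 4:
    # Y is a simple random walk on Z and X_n = p_{Y_n}, where p_k is the k-th point of the
    # environment (p_0 = 0, i.i.d. Geometric(p) gaps on both sides of the origin).
    gaps_right, gaps_left = [], []
    def point(k):
        gaps = gaps_right if k > 0 else gaps_left
        while len(gaps) < abs(k):
            gaps.append(geometric(p, rng))
        return (1 if k > 0 else -1) * sum(gaps[:abs(k)])
    y = sum(rng.choice((-1, 1)) for _ in range(n))   # Y_n
    return point(y) / n ** 0.5

p, n, trials = 0.3, 4000, 1000
rng = random.Random(0)
# each trial draws a fresh environment (annealed sampling); the quenched and annealed limits coincide here
samples = [sample_X_over_sqrt_n(n, p, rng) for _ in range(trials)]
var = sum(x * x for x in samples) / trials
print("empirical Var(X_n/sqrt n):", round(var, 2), "  predicted E_P[f_1]^2 =", (1 / p) ** 2)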

Theorem 1.15. Fix d ≥ 2 and let (Ω, B, Q) be a d-dimensional discrete point process satisfying Assumptions 1.1 and 1.4. Then for P-almost every ω ∈ Ω_0

lim_{n→∞} X_n/√n =_D N(0, D),   (1.7)

where N(0, D) is a d-dimensional normal distribution with zero expectation and covariance matrix D that depends only on d and on the distribution P. As before the limit is in distribution.

Structure of the paper. Section 2 collects some facts about the Markov chain on environments and some ergodic results related to it. It is based on previously known material. In Section 3 we deal with the proof of the Law of Large Numbers and in Section 4 with the one-dimensional Central Limit Theorem. The recurrence–transience classification is discussed in Section 5. The novel parts of the proof of the high dimensional Central Limit Theorem (asymptotic behavior of the random walk, construction of the corrector and sublinear bounds on the corrector) appear in Sections 6–8. The actual proof of the high dimensional Central Limit Theorem is carried out in Section 9. Finally, Section 10 contains further discussion, some open questions and conjectures.

2. The induced shift and the environment seen from the random walk

The content of this section is standard textbook material. The form in which it appears here is taken from Section 3 of [3]. Even though it had all been known before, [3] is the best existing source for our purpose.

Fix some e ∈ E. Since by Claim 1.2 f_e is Q-almost surely finite, we can define the induced shift σ_e : Ω_0 → Ω_0 by σ_e(ω) = θ_{f_e(ω)e}(ω).

Theorem 2.1. For every e ∈ E, the induced shift σ_e : Ω_0 → Ω_0 is measure preserving and ergodic with respect to P.

The proof of Theorem 2.1 can be found in [3] (Theorem 3.2).
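As a concrete illustration of the induced shift and of the ergodic averages it produces (ours, not part of the paper), the sketch below builds a long one-dimensional Bernoulli(p) environment conditioned to contain the origin (Example 1.5), iterates σ_e, and compares the orbit average of f_e with E_P[f_e] = 1/p; the values of p, the window length and the number of iterations are arbitrary.

import random

rng = random.Random(42)
p = 0.25
N = 200_000
# a finite window of a Bernoulli(p) environment on Z, conditioned so that 0 is occupied
omega = {0} | {x for x in range(1, N) if rng.random() < p}

def f_e(x):
    # f_e evaluated at the configuration re-centered at x: gap to the next point to the right
    k = 1
    while x + k not in omega:
        k += 1
    return k

# iterate the induced shift sigma_e (re-center at the next point) and average f_e along the orbit
x, total, n_iter = 0, 0, 10_000
for _ in range(n_iter):
    g = f_e(x)
    total += g
    x += g
print("orbit average of f_e:", total / n_iter, "  E_P[f_e] = 1/p =", 1 / p)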

Our next goal is to prove that the Markov chain on environments (i.e., the Markov chain given by the environment viewed from the particle) is ergodic. Let Ξ = Ω_0^Z and define B̄ to be the product σ-algebra on Ξ. The space Ξ is a space of two-sided sequences (..., ω_{−1}, ω_0, ω_1, ...), the trajectories of the Markov chain on environments. Let μ be the measure on (Ξ, B̄) such that for any B ∈ B^{2n+1} (coordinates between −n and n),

μ((ω_{−n}, ..., ω_n) ∈ B) = ∫_B P(dω_{−n}) Λ(ω_{−n}, dω_{−n+1}) ··· Λ(ω_{n−1}, dω_n),

where Λ : Ω_0 × B → [0,1] is the Markov kernel defined by

Λ(ω, A) = (1/(2d)) ∑_{x∈Z^d} 1_{{x∈N_0(ω)}} 1_{{θ_x ω∈A}} = (1/(2d)) ∑_{e∈E} 1_{{σ_e(ω)∈A}}.   (2.1)

Note that the sum is finite since for Q-almost every ω ∈ Ω there are exactly 2d elements in N_0(ω). Because P is preserved by Λ (see Theorem 2.1), the finite dimensional measures are consistent, and therefore by Kolmogorov's theorem μ exists and is unique. One can see from the definition of μ that {θ_{X_k}(ω)}_{k≥0} has the same law under ℙ (the annealed law) as (ω_0, ω_1, ...) has under μ. Let T : Ξ → Ξ be the shift defined by (Tω)_n = ω_{n+1}. The definition of T implies that it is measure preserving. In fact the following also holds:

Proposition 2.2. T is ergodic with respect to μ.

As before, a proof can be found in Section 3 of [3] (Proposition 3.5).

Theorem 2.3. Let f ∈ L^1(Ω_0, B, P). Then for P-almost every ω ∈ Ω_0

lim_{n→∞} (1/n) ∑_{k=0}^{n−1} f(θ_{X_k}(ω)) = E_P[f],   P_ω-almost surely.

Similarly, if f : Ω × Ω → R is measurable with 𝔼[f(ω, θ_{X_1}ω)] < ∞, then

lim_{n→∞} (1/n) ∑_{k=0}^{n−1} f(θ_{X_k}ω, θ_{X_{k+1}}ω) = 𝔼[f(ω, θ_{X_1}ω)]

for P-almost every ω and P_ω-almost every trajectory of (X_k)_{k≥0}.

Proof. Recall that {θ_{X_k}(ω)}_{k≥0} has the same law under ℙ as (ω_0, ω_1, ...) has under μ. Hence, if g(..., ω_{−1}, ω_0, ω_1, ...) = f(ω_0) then

lim_{n→∞} (1/n) ∑_{k=0}^{n−1} f(θ_{X_k}ω) =_D lim_{n→∞} (1/n) ∑_{k=0}^{n−1} g ∘ T^k.

The latter limit exists by Birkhoff's Ergodic Theorem (we have already seen that T is ergodic) and equals E_μ[g] = E_P[f] almost surely. The second part follows from the first. □
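Theorem 2.3 is the workhorse for everything that follows, and it is easy to probe numerically. The sketch below (ours; it uses the same lazily generated Bernoulli(p) environment trick as the earlier simulation sketch, with arbitrary parameters) runs the two-dimensional walk and checks that the time average of f_{e_1}, evaluated at the environment seen from the particle, approaches E_P[f_{e_1}] = 1/p.

import random

P_OCC, ENV_SEED = 0.3, 0

def occupied(x, y):
    # Bernoulli(P_OCC) site environment on Z^2, generated lazily per site
    return random.Random(hash((x, y, ENV_SEED))).random() < P_OCC

def gap(x, y, dx, dy):
    # f_e at the environment re-centered at (x, y), for direction e = (dx, dy)
    k = 1
    while not occupied(x + k * dx, y + k * dy):
        k += 1
    return k

rng = random.Random(1)
pos, total, n = (0, 0), 0.0, 20_000   # start at the origin, treated as a point of the process (we work under P)
for _ in range(n):
    total += gap(*pos, 1, 0)                      # f_{e_1}(theta_{X_k} omega)
    dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    k = gap(*pos, dx, dy)
    pos = (pos[0] + k * dx, pos[1] + k * dy)      # jump to the coordinate nearest neighbor
print("time average of f_{e_1} along the walk:", total / n, "  E_P[f_{e_1}] = 1/p =", 1 / P_OCC)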

3. Law of Large Numbers

This section is devoted to the proof of Theorem 1.9, the Law of Large Numbers for random walks on discrete point processes.

Proof of Theorem 1.9. Using linearity, it is enough to prove that

ℙ( lim_{n→∞} (X_n·e)/n = 0 ) = 1,   ∀e ∈ E.

Fix some e ∈ E and define

S(k) = max{ n ≥ 0 : ∑_{m=0}^{n−1} f_e(σ_e^m(ω)) < k }.

Because f_e is positive, if E_P[f_e] = ∞, then

lim_{n→∞} (1/n) ∑_{k=0}^{n−1} f_e(σ_e^k(ω)) = ∞,   P-a.s.,

and therefore

lim_{k→∞} S(k)/k = 0,   P-a.s.

However, since S(k) = ∑_{j=0}^{k−1} 1_{Ω_0}(θ_e^j(ω)), by Birkhoff's Ergodic Theorem and Assumption 1.1

lim_{k→∞} S(k)/k = lim_{k→∞} (1/k) ∑_{j=0}^{k−1} 1_{Ω_0}(θ_e^j(ω)) = Q(Ω_0) > 0,   P-a.s.

Thus E_P[f_e] < ∞. Applying Birkhoff's Ergodic Theorem once more we get

lim_{n→∞} (1/n) ∑_{k=0}^{n−1} f_e(σ_e^k(ω)) = E_P[f_e] < ∞,   P-a.s.   (3.1)

The stationarity of P with respect to σ_e implies that P(f_e(ω) = k) = P(f_e(σ_e^{−1}(ω)) = k) = P(f_{−e}(ω) = k), and therefore

E_P[f_e] = E_P[f_{−e}].   (3.2)

Let g_e : Ω × Ω → Z be defined by:

g_e(ω, ω′) = f_e(ω) if ω′ = σ_e(ω);  −f_{−e}(ω) if ω′ = σ_{−e}(ω);  0 otherwise.

Observing that g_e is measurable and recalling (3.2), we get that

E_P[ E_ω[g_e(ω, θ_{X_1}ω)] ] = E_P[ (1/(2d)) f_e(ω) − (1/(2d)) f_{−e}(ω) ] = 0.

Thus for P-almost every ω ∈ Ω_0 and P_ω-almost every random walk {X_k}_{k≥0}, we have by Theorem 2.3

lim_{n→∞} (X_n·e)/n = lim_{n→∞} (1/n) ∑_{k=1}^{n} (X_k − X_{k−1})·e = lim_{n→∞} (1/n) ∑_{k=0}^{n−1} g_e(θ_{X_k}ω, θ_{X_{k+1}}ω) = E_P[ E_ω[g_e(ω, θ_{X_1}ω)] ] = 0. □

4. One dimensional Central Limit Theorem

This section is devoted to the proof of Theorem 1.13, the Central Limit Theorem for one-dimensional random walks on discrete point processes. The basic observation of the proof is that a random walk on a discrete point process in one dimension is in fact a simple random walk on Z with stretched edges. Combining this with the fact that E_P[f_1] < ∞ yields the result. We now turn this into a precise argument:

Proof of Theorem 1.13. Denote e = 1. Given an environment ω ∈ Ω_0 and a random walk {X_k}_{k≥0}, we define the simple one-dimensional random walk {Y_k}_{k≥0} associated with {X_k}_{k≥0} by:

Y_k = ∑_{j=1}^{k} (X_j − X_{j−1})/|X_j − X_{j−1}| for k ≥ 1, and Y_0 = 0.

Since {Y_k}_{k≥0} is a simple one-dimensional random walk on Z, it follows from the Central Limit Theorem that for P-almost every ω ∈ Ω_0

lim_{n→∞} (1/√n)·Y_n =_D N(0, 1).   (4.1)

Given an environment ω ∈ Ω_0 and n ∈ Z, let p_n = p_n(ω) be the n-th point in P(ω) (with respect to 0). More precisely denote

p_n = ∑_{k=0}^{n−1} f_e(σ_e^k ω) for n > 0,   p_0 = 0,   p_n = −∑_{k=1}^{−n} f_{−e}(σ_{−e}^{k−1}(ω)) for n < 0.   (4.2)

For every a ∈ R\{0} and P-almost every ω ∈ Ω_0 we have

lim_{n→∞} (1/√n) p_{⌊a√n⌋} = a · lim_{n→∞} (1/(a√n)) ∑_{k=0}^{⌊a√n⌋−1} f_e(σ_e^k ω) = a·E_P[f_e].

In fact the last argument also holds trivially for a = 0, i.e., for every a ∈ R

lim_{n→∞} (1/√n) p_{⌊a√n⌋} = a·E_P[f_e].   (4.3)

Using (4.1) and (4.3) we get that for P-almost every ω ∈ Ω_0 and every ε > 0

lim_{n→∞} P_ω( p_{Y_n}/√n ≤ a )
 ≤ lim_{n→∞} [ P_ω( p_{Y_n}/√n ≤ a, Y_n/√n > a/E_P[f_e] + ε ) + P_ω( Y_n/√n ≤ a/E_P[f_e] + ε ) ]
 ≤ lim_{n→∞} [ P_ω( (1/√n) p_{⌊(a/E_P[f_e]+ε)√n⌋} ≤ a ) + P_ω( Y_n/√n ≤ a/E_P[f_e] + ε ) ]
 = Φ( a/E_P[f_e] + ε ),

where Φ is the standard normal cumulative distribution function. A similar argument gives that

lim_{n→∞} P_ω( p_{Y_n}/√n ≤ a ) ≥ Φ( a/E_P[f_e] − ε )

for every ε > 0.

Observing that X_n = p_{Y_n} and recalling that ε > 0 was arbitrary, we get

lim_{n→∞} P_ω( X_n/√n ≤ a ) = Φ( a/E_P[f_e] ),   (4.4)

as required. □

5. Transience and recurrence

Before continuing to deal with the Central Limit Theorem in higher dimensions, we turn to a discussion of transience and recurrence of random walks on discrete point processes.


5.1. One-dimensional case

Here we wish to prove the recurrence of the one-dimensional random walk on discrete point processes (Proposition 1.10). This follows from the same coupling introduced in order to prove the CLT.

Proof of Proposition 1.10. Using the notation from the previous section, since Y_n is a one-dimensional simple random walk, it is recurrent ℙ-almost surely. Therefore we have #{n : Y_n = 0} = ∞ ℙ-almost surely, but since X_n = p_{Y_n} and p_0 = 0, this implies #{n : X_n = 0} = ∞, ℙ-almost surely. Thus the random walk is recurrent. □

5.2. Two-dimensional case

In this section we deal with the two-dimensional case. The proof is based on the correspondence between random walks and electrical networks. Recall that an electrical network is given by a triple G = (V, E, c), where (V, E) is an unoriented graph and c : E → (0, ∞) is a conductance field. We start by recalling the Nash–Williams criterion for recurrence of random walks:

Theorem 5.1 (Nash–Williams criterion). A set of edges Π is called a cutset for an infinite network G = (V, E, c) if there exists some vertex a ∈ V such that every infinite simple path from a to infinity must include an edge in Π. If {Π_n} is a sequence of pairwise disjoint finite cutsets in a locally finite infinite graph G, each of which separates a ∈ V from infinity, and

∑_n ( ∑_{e∈Π_n} c(e) )^{−1} = ∞,

then the random walk induced by the conductances c is recurrent.
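As a quick illustration of how the criterion is used (ours, not part of the paper), consider simple random walk on the usual nearest-neighbour lattice Z^2, i.e., all conductances equal to 1, and let Π_n be the set of edges joining the box [−n, n]^2 to its complement:

% each \Pi_n consists of 4(2n+1) edges of conductance 1, and the cutsets are pairwise disjoint, so
\sum_{n\ge 1}\Bigl(\sum_{e\in\Pi_n} c(e)\Bigr)^{-1}
  = \sum_{n\ge 1}\frac{1}{4(2n+1)} = \infty,
% and the Nash--Williams criterion recovers the recurrence of simple random walk on Z^2.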

For a proof of the Nash–Williams criterion and some background on the subject see [12] and [17]. The following definition will be used in the proof:

Definition 5.2. Let (Ω, B, P) be a probability space. We say that a random variable X : Ω → [0, ∞) has a Cauchy tail if there exists a positive constant C such that P(X ≥ n) ≤ C/n for every n ∈ N.

Note that if E[X] < ∞, then X has a Cauchy tail, since by Markov's inequality P(X ≥ n) ≤ E[X]/n.

In order to prove Theorem 1.11 we will need the following lemmas, taken from [2].

Lemma 5.3 ([2], Lemma 4.1). Let {f_i}_{i=1}^∞ be identically distributed (not necessarily independent) positive random variables on a probability space (Ω, B, P) that have a Cauchy tail. Then, for every ε > 0, there exist K > 0 and N ∈ N such that for every n > N

P( (1/n) ∑_{i=1}^{n} f_i > K log n ) < ε.

Lemma 5.4 ([2], Lemma 4.2). Let A_n be a sequence of events such that P(A_n) > 1 − ε for all sufficiently large n, and let {a_n}_{n=1}^∞ be a sequence such that ∑_{n=1}^∞ a_n = ∞. Then ∑_{n=1}^∞ a_n 1_{A_n} = ∞ with probability at least 1 − ε.

We also need the following definition:

Definition 5.5. Assume G = (V, E) is a graph such that V ⊂ Z^2 and E is a set of edges, each of which is parallel to some axis but may connect non-nearest neighbors in Z^2. For an edge e ∈ E we denote by e_+, e_− ∈ V the end points of e. In order for this to be well defined we assume that if (e_+ − e_−)·e_i ≠ 0 then (e_+ − e_−)·e_i > 0. Note that by the assumption on the edges in E, the value of e_+ − e_− is non-zero in exactly one coordinate.

Proof of Theorem 1.11. The idea of the proof is to construct for every ω ∈ Ω an electrical network which satisfies the Nash–Williams criterion and induces the same law on the random walk as the law of the random walk on ω, P-a.s. Since P is absolutely continuous with respect to Q, it is enough to construct a network which satisfies the criterion for Q-almost every ω ∈ Ω. For every ω ∈ Ω, we define the corresponding network with conductances G(ω) = (V(ω), E(ω), c(ω)) via the following three steps (see Fig. 2 for an illustration):

Fig. 2. Construction of the network in two dimensions.

Step 1. Define G_1(ω) = (V_1(ω), E_1(ω), c_1(ω)) to be the network induced from ω with all conductances equal to 1. More precisely we define

V_1(ω) = P(ω),
E_1(ω) = { {x, y} ∈ V_1 × V_1 : y ∈ {x ± f_{±e_1}(θ_x(ω))e_1, x ± f_{±e_2}(θ_x(ω))e_2} },
c_1(ω)(e) = 1, ∀e ∈ E_1.

Note that the continuous time random walk induced by the network G_1(ω) (cf. [12,17]) is indeed the random walk introduced in (1.4) when 0 ∈ P(ω).

Step 2. Define G_2(ω) to be the network generated from G_1(ω) by "cutting" every edge of length k into k edges of length 1, giving conductance k to each part. A small technical problem with "cutting" the edges is that vertical and horizontal edges may cross each other at a point that doesn't belong to P(ω). In order to avoid this we give the following formal definition, which is a bit cumbersome:

V_2(ω) = V_2^1(ω) ∪ V_2^2(ω) ⊂ Z^2 × {1, 2},   E_2(ω) = E_2^1(ω) ∪ E_2^2(ω),

where

V_2^i(ω) = { (x, i) : ∃e ∈ E_1(ω), ∃0 ≤ k ≤ |e_+ − e_−|_1 such that (e_+ − e_−)·e_i ≠ 0, x = e_− + k e_i }

and

E_2^i(ω) = { {(v, i), (w, i)} : ∃e ∈ E_1(ω), ∃0 ≤ k < |e_+ − e_−|_1 such that (e_+ − e_−)·e_i ≠ 0, v = e_− + k e_i, w = e_− + (k+1) e_i }.


We also define the conductance c(ω)(e) of an edge e ∈ E(ω) to be k, given that the length (i.e., |e_+ − e_−|_1) of the original edge it was part of was k.

Step 3. Define G(ω) to be the graph obtained from G_2(ω) by identifying two vertices whenever they are of the form (v, 1) and (v, 2) for some v ∈ P(ω). Note that by a standard analysis of conductances, see, e.g., [12], it is clear that the random walk on the new network is transient if and only if the original random walk is transient. Thus we turn to prove the recurrence of the random walk on the new graph. This is done using the Nash–Williams criterion. Let Π_n be the set of edges exiting the box [−n, n]^2 × {1, 2} in the graph G(ω). The sets Π_n define a sequence of pairwise disjoint cutsets in the network G(ω), i.e., sets of edges that any infinite simple path starting at the origin must cross. Next we wish to estimate the conductances in the network G. Fix some edge e ∈ E(ω) such that (e_+ − e_−)·e_i ≠ 0 and note that the distribution of c(e) is the same for all edges in direction e_i. For ω ∈ Ω we denote by len_{e_i}(ω) the length of the interval containing the origin in direction e_i, where in the case that the origin belongs to the point process we define len_{e_i}(ω) to be the length of the interval starting at the origin in direction e_i. More precisely we define len_{e_i}(ω) = f_{e_i}(ω) + g_{e_i}(ω), where g_{e_i}(ω) = −max{n ≤ 0 : ω(n e_i) = 1}. In addition, for n ∈ N we define l_n(ω) = l_n^{e_i}(ω) to be the total length of the first n intervals starting at the origin in direction e_i, i.e., l_n(ω) = g_{e_i}(ω) + ∑_{j=0}^{n−1} f_{e_i}(σ_{e_i}^j(ω)). Using the definition of len_{e_i} we have the following estimate:

Q(c(e) = k) = Q(the original edge in G_1(ω) that contained e is of length k) = Q(len_{e_i}(ω) = k).

By Birkhoff's Ergodic Theorem the last term Q-almost surely equals

lim_{n→∞} (1/n) ∑_{j=0}^{n−1} 1_{{len_{e_i}(θ_{j e_i}ω) = k}}.

Since l_n tends to infinity Q-almost surely and g_{e_i}(ω) is finite Q-almost surely, this implies

Q(c(e) = k) = lim_{n→∞} (1/l_n(ω)) ∑_{j=0}^{l_n(ω)} 1_{{len_{e_i}(θ_{j e_i}ω) = k}} = lim_{n→∞} (n/l_n(ω)) · (1/n) ∑_{j=−g_{e_i}(ω)}^{l_n(ω)−g_{e_i}(ω)} 1_{{len_{e_i}(θ_{j e_i}ω) = k}},

which after rearrangement can be written as

lim_{n→∞} (n/l_n) · (1/n) ∑_{j=0}^{n−1} k · 1_{{f_{e_i}(σ_{e_i}^j(ω)) = k}}.

Recalling that by Birkhoff's Ergodic Theorem (applied to the induced shift) we also have, P-almost surely,

lim_{n→∞} l_n/n = E_P[f_{e_i}],   lim_{n→∞} (1/n) ∑_{j=0}^{n−1} 1_{{f_{e_i}(σ_{e_i}^j(ω)) = k}} = P(f_{e_i} = k),

we get

Q(c(e) = k) = k·P(f_{e_i} = k)/E_P[f_{e_i}].   (5.1)
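Equation (5.1) says that the conductance of a unit edge is a size-biased interval length: a fixed unit edge is more likely to lie inside a long interval, in proportion to its length. A small numerical sanity check of this identity (ours; it assumes i.i.d. Geometric(p) gaps, i.e. the Bernoulli environment of Example 1.5, with arbitrary p and sample size):

import random
from collections import Counter

def geometric(p, rng):
    # Geometric(p) on {1, 2, ...}: the law of f_{e_i} under P for a Bernoulli(p) environment
    k = 1
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(7)
p, n_gaps = 0.4, 200_000
gaps = [geometric(p, rng) for _ in range(n_gaps)]
total_length = sum(gaps)

covered = Counter()
for g in gaps:
    covered[g] += g   # an interval of length g covers exactly g unit edges

# empirical law of the length of the interval covering a fixed unit edge,
# versus the size-biased law k * P(f = k) / E_P[f] of (5.1)
for k in range(1, 6):
    empirical = covered[k] / total_length
    predicted = k * (p * (1 - p) ** (k - 1)) / (1 / p)
    print(k, round(empirical, 4), round(predicted, 4))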

From (5.1) and the assumption of Theorem 1.11, i.e., (1.5), it follows that c(e) has a Cauchy tail. Note that Π_n contains 4n + 2 edges at each level, i.e., in each of the sets {E_2^i(ω)}_{i=1,2}, all of them with the same distribution and, by (1.5), with a Cauchy tail, though they may be dependent. By Lemma 5.3, for every ε > 0 there exist K > 0 and N ∈ N such that for every n > N we have

Q( ∑_{e∈Π_n} c(e) ≤ K(8n+4) log(8n+4) ) > 1 − ε.   (5.2)


Define A_n to be the event in Eq. (5.2), and a_n = (K(8n+4) log(8n+4))^{−1}. Notice that C_{Π_n} := ∑_{e∈Π_n} c(e) satisfies ∑_{n=1}^∞ C_{Π_n}^{−1} ≥ ∑_{n=N}^∞ 1_{A_n}·a_n. In addition, the definition of {a_n} implies that ∑_{n=N}^∞ a_n = ∞ (by comparison with ∑_n 1/(n log n)). Combining the last two facts together with (5.2) and Lemma 5.4 gives Q(∑_{n=1}^∞ C_{Π_n}^{−1} = ∞) ≥ 1 − ε. Since ε is arbitrary, we get that ∑_{n=1}^∞ C_{Π_n}^{−1} = ∞ Q-a.s., and therefore in particular P-a.s. Thus, by the Nash–Williams criterion, the random walk is ℙ-almost surely recurrent. □

5.3. Higher dimensions (d ≥ 3)

Here we prove the transience of random walks on discrete point processes in dimension 3 or higher. The idea of the proof is to bound the heat kernel so that the Green function of the random walk is finite. This is done by first proving an appropriate discrete isoperimetric inequality for finite subsets of Z^d, and then using well known connections between isoperimetric inequalities and heat kernel bounds (see [19]) to bound the heat kernel. In order to state the isoperimetric inequality we need the following definition:

Definition 5.6. Let x = (x_1, x_2, ..., x_d) be a point in Z^d. For 1 ≤ j ≤ d denote by Π_j : Z^d → Z^{d−1} the projection onto all but the j-th coordinate, namely

Π_j(x) = Π_j((x_1, x_2, ..., x_d)) = (x_1, x_2, ..., x_{j−1}, x_{j+1}, ..., x_d).

Lemma 5.7. There exists C = C(d) > 0 such that for every finite subset A of Z^d

max_{1≤j≤d} |Π_j(A)| ≥ C·|A|^{(d−1)/d},   (5.3)

where |·| denotes the cardinality of the set.
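Lemma 5.7 is a discrete isoperimetric statement: n points must have at least one large coordinate projection. For a box A = {1, ..., m}^d one has |A| = m^d and |Π_j(A)| = m^{d−1} = |A|^{(d−1)/d} for every j, so the exponent cannot be improved; in fact the Loomis–Whitney inequality |A|^{d−1} ≤ ∏_j |Π_j(A)| shows that one can even take C = 1. A brute-force check on random finite subsets of Z^3 (ours, illustrative only):

import random

def max_projection(points):
    # max_j |Pi_j(A)| for a finite set of points A in Z^d
    d = len(next(iter(points)))
    return max(len({pt[:j] + pt[j + 1:] for pt in points}) for j in range(d))

rng = random.Random(0)
d = 3
for _ in range(1000):
    A = {tuple(rng.randint(0, 5) for _ in range(d)) for _ in range(rng.randint(1, 60))}
    assert max_projection(A) >= len(A) ** ((d - 1) / d) - 1e-9
print("(5.3) with C = 1 held on all random test sets")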

Before turning to the proof we fix some notations.

Definition 5.8.

– Denote by Q^d the quadrant of points in Z^d all of whose entries are positive.
– For a point x ∈ Q^d define its energy by E(x) = ∑_{j=1}^d x_j.
– For a finite set A ⊂ Q^d denote E(A) = ∑_{x∈A} E(x).
– Given a finite set A ⊂ Q^d, 1 ≤ j ≤ d and a point y = (y_1, y_2, ..., y_{d−1}) ∈ Q^{d−1}, we define the y-fiber of A in direction j by

A_{j,y} := { x_j : (y_1, y_2, ..., y_{j−1}, x_j, y_j, ..., y_{d−1}) ∈ A }.

Proof of Lemma 5.7. Assume |A| = n. Using translations, we can assume without loss of generality that A ⊂ Q^d. Next, for 1 ≤ j ≤ d we define S_j : 2^{Q^d} → 2^{Q^d}, the "squeezing operator in direction j." The definition of S_j is a bit involved; the idea is to push the points inside each of the fibers of A in direction j as close to the hyperplane x_j = 0 as possible without any of them leaving the quadrant Q^d. The operation of S_j is illustrated in Fig. 3. More formally, S_j is defined by

S_j(A) = ∪_{y=(y_1,...,y_{d−1})∈Q^{d−1}} { (y_1, y_2, ..., y_{j−1}, m, y_j, ..., y_{d−1}) }_{m=1}^{|A_{j,y}|}.

The operator S_j satisfies the following properties:

1. The size of each fiber of S_j(A) in direction j is the same as the corresponding one for A.
2. The size of S_j(A) is the same as the size of A.
3. |Π_i(S_j(A))| ≤ |Π_i(A)| for every 1 ≤ i, j ≤ d.
4. E(S_j(A)) ≤ E(A), and equality holds if and only if S_j(A) = A.

Fig. 3. The operator S_j(A).

Indeed,

1. This follows directly from the definition of S_j. Given y = (y_1, ..., y_{d−1}) ∈ Q^{d−1},

|S_j(A)_{j,y}| = |{ (y_1, y_2, ..., y_{j−1}, m, y_j, ..., y_{d−1}) }_{m=1}^{|A_{j,y}|}| = |A_{j,y}|.

2. Since the fibers in direction j of a set form a partition of it, we get

|S_j(A)| = ∑_{y∈Q^{d−1}} |S_j(A)_{j,y}| = ∑_{y∈Q^{d−1}} |A_{j,y}| = |A|.

3. For i = j note that y = (y_1, ..., y_{d−1}) ∈ Π_j(A) if and only if there exists some m ∈ N such that (y_1, ..., y_{j−1}, m, y_j, ..., y_{d−1}) ∈ A. This, however, is equivalent to the fact that (y_1, ..., y_{j−1}, 1, y_j, ..., y_{d−1}) ∈ S_j(A), which again is true if and only if y = (y_1, ..., y_{d−1}) ∈ Π_j(S_j(A)). Thus Π_j(A) = Π_j(S_j(A)). Turning to the case i ≠ j, the proof follows from the fact that we can reduce the problem to two dimensions. Without loss of generality assume that i = 1 and j = 2; then

|Π_i(S_j(A))| = |Π_1(S_2(A))|
= ∑_{(y_2,...,y_d)∈Q^{d−1}} 1_{{∃m≥1 s.t. (m,y_2,...,y_d)∈S_2(A)}}
= ∑_{(y_3,...,y_d)∈Q^{d−2}} ∑_{y_2∈Q} 1_{{∃m≥1 s.t. (m,y_2,y_3,...,y_d)∈S_2(A)}}
= ∑_{(y_3,...,y_d)∈Q^{d−2}} max_{y_2∈Q} |S_2(A)_{2,(y_2,...,y_d)}|
= ∑_{(y_3,...,y_d)∈Q^{d−2}} max_{y_2∈Q} |A_{2,(y_2,...,y_d)}|
≤ ∑_{(y_3,...,y_d)∈Q^{d−2}} ∑_{y_2∈Q} 1_{{∃m≥1 s.t. (m,y_2,y_3,...,y_d)∈A}}
= |Π_1(A)| = |Π_i(A)|,

where the third equality follows from the definition of S_2 (see Fig. 3).
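For concreteness, here is a tiny sketch of the squeezing operator (ours, illustrative only, with an arbitrary example set): it replaces every fiber of A in direction j by the initial segment {1, ..., |A_{j,y}|}, which is exactly what properties 1–4 above describe.

from collections import defaultdict

def squeeze(A, j):
    # S_j(A): push every fiber of A in direction j down to {1, ..., |A_{j,y}|}
    # (coordinates are 0-based here, so j = 1 corresponds to the paper's direction j = 2)
    fibers = defaultdict(int)
    for pt in A:
        fibers[pt[:j] + pt[j + 1:]] += 1          # |A_{j,y}| for each y
    return {y[:j] + (m,) + y[j:] for y, size in fibers.items() for m in range(1, size + 1)}

A = {(1, 3), (1, 5), (2, 2), (4, 2), (4, 7)}
S = squeeze(A, 1)
print(sorted(S))                                   # [(1, 1), (1, 2), (2, 1), (4, 1), (4, 2)]
assert len(S) == len(A)                            # property 2
assert sum(map(sum, S)) <= sum(map(sum, A))        # property 4: squeezing decreases the energy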
