
www.imstat.org/aihp 2010, Vol. 46, No. 2, 525–536

DOI:10.1214/09-AIHP205

© Association des Publications de l’Institut Henri Poincaré, 2010

The infinite valley for a recurrent random walk in random environment

Nina Gantert (a,1), Yuval Peres (b) and Zhan Shi (c)

(a) CeNos Center for Nonlinear Science, and Institut für Mathematische Statistik, Fachbereich Mathematik und Informatik, Universität Münster, Einsteinstr. 62, D-48149 Münster, Germany. E-mail: gantert@math.uni-muenster.de
(b) Department of Statistics, University of California at Berkeley, Berkeley, CA 94720. E-mail: peres@stat.berkeley.edu
(c) Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI, 4 place Jussieu, F-75252 Paris Cedex 05, France. E-mail: zhan.shi@upmc.fr

Received 6 August 2007; revised 27 February 2009; accepted 7 February 2009

Abstract. We consider a one-dimensional recurrent random walk in random environment (RWRE). We show that the – suitably centered – empirical distributions of the RWRE converge weakly to a certain limit law which describes the stationary distribution of a random walk in an infinite valley. The construction of the infinite valley goes back to Golosov, see Comm. Math. Phys. 92 (1984) 491–506. As a consequence, we show weak convergence for both the maximal local time and the self-intersection local time of the RWRE and also determine the exact constant in the almost sure upper limit of the maximal local time.

Résumé. We prove that the empirical measures of a one-dimensional random walk in random environment converge weakly to the stationary law of a random walk in an infinite valley. The construction of this infinite valley goes back to Golosov, see Comm. Math. Phys. 92 (1984) 491–506. As applications, we obtain the weak convergence of the maximum of the local times and of the self-intersection local time of the random walk in random environment; moreover, we identify the constant giving the almost sure "limsup" of the maximum of the local times.

MSC: 60K37; 60J50; 60J55; 60F10

Keywords: Random walk in random environment; Empirical distribution; Local time; Self-intersection local time

1. Introduction and statement of the results

Let ω = (ω_x)_{x∈Z} be a collection of i.i.d. random variables taking values in (0, 1) and let P be the distribution of ω. For each ω ∈ Ω = (0,1)^Z, we define the random walk in random environment (abbreviated RWRE) as the time-homogeneous Markov chain (X_n) taking values in Z_+, with transition probabilities P_ω[X_{n+1} = 1 | X_n = 0] = 1, P_ω[X_{n+1} = x+1 | X_n = x] = ω_x = 1 − P_ω[X_{n+1} = x−1 | X_n = x] for x > 0, and X_0 = 0. We equip Ω with its Borel σ-field F and Z^N with its Borel σ-field G. The distribution of (ω, (X_n)) is the probability measure ℙ on Ω × Z^N defined by

\[ \mathbb P[F\times G] = \int_F P_\omega[G]\,P(\mathrm d\omega), \qquad F\in\mathcal F,\ G\in\mathcal G. \]

Let ρ_i = ρ_i(ω) := (1 − ω_i)/ω_i. We will always assume that

\[ \int \log\rho_0(\omega)\,P(\mathrm d\omega) = 0, \tag{1.1} \]

\[ P[\delta \le \omega_0 \le 1-\delta] = 1 \quad\text{for some }\delta\in(0,1), \tag{1.2} \]

\[ \operatorname{Var}(\log\rho_0) > 0. \tag{1.3} \]

1Research supported in part by the European program RDSES.


The first assumption, as shown in [11], implies that for P-almost all ω, the Markov chain (X_n) is recurrent, the second is a technical assumption which could probably be relaxed but is used in several places, and the third assumption excludes the deterministic case. Usually, one defines in a similar way the RWRE on the integer axis, but for simplicity, we stick to the RWRE on the positive integers; see Section 4 for the results for the usual RWRE model. A key property of recurrent RWRE is its strong localization: under the assumptions above, Sinai [10] showed that X_n/(log n)^2 converges in distribution. A lot more is known about this model; we refer to the survey by Zeitouni [12] for limit theorems, large deviations results, and for further references.
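To make the preceding definitions concrete, here is a minimal simulation sketch (not part of the original paper): it draws an i.i.d. environment satisfying (1.1)–(1.3) — a two-point distribution chosen purely for illustration — and runs the walk on Z_+ with the reflection rule at 0. On a run of length n the walker is typically found at distance of order (log n)^2 from the origin, in line with Sinai's localization result. All function names are ours.

```python
import numpy as np

def sample_environment(size, p=0.7, rng=None):
    """i.i.d. environment on {0, ..., size-1}: omega_x equals p or 1-p with
    probability 1/2 each, so E[log rho_0] = 0 and (1.1)-(1.3) hold
    (illustrative choice; omega[0] is never used by the walk)."""
    rng = rng or np.random.default_rng(0)
    return np.where(rng.random(size) < 0.5, p, 1.0 - p)

def run_rwre(omega, n_steps, rng=None):
    """Walk on Z_+: from 0 it always steps to 1; from x > 0 it steps to x+1
    with probability omega[x], to x-1 otherwise.  Returns the whole path.
    The environment array is assumed long enough for the run."""
    rng = rng or np.random.default_rng(1)
    path = np.zeros(n_steps + 1, dtype=int)
    x = 0
    for t in range(1, n_steps + 1):
        if x == 0:
            x = 1
        else:
            x += 1 if rng.random() < omega[x] else -1
        path[t] = x
    return path

if __name__ == "__main__":
    n = 100_000
    omega = sample_environment(size=10_000)
    path = run_rwre(omega, n)
    print("X_n =", path[-1], " (log n)^2 =", np.log(n) ** 2)
```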

Let ξ(n, x) := |{0 ≤ j ≤ n: X_j = x}| denote the local time of the RWRE in x at time n and ξ(n) := sup_{x∈Z} ξ(n, x) the maximal local time at time n. It was shown in [7] and [8] that

\[ \limsup_{n\to\infty}\frac{\xi(n)}{n} > 0, \qquad \mathbb P\text{-a.s.} \tag{1.4} \]

(Clearly this lim sup is at most 1/2.) In addition, a 0–1 law (see [5]) says that lim sup_{n→∞} ξ(n)/n is ℙ-almost surely a constant. The constant, however, was not known. We give its value in Theorem 1.1.

Our main result (Theorem 1.2 below) shows weak convergence for the process (ξ(n, x), x ∈ Z) – after a suitable normalization – in a function space. In particular, it will imply the following theorem.

Theorem 1.1. Let M := sup{s: s ∈ supp(ω_0)} ∈ (1/2, 1] and w := inf{s: s ∈ supp(ω_0)} ∈ [0, 1/2). Then

\[ \limsup_{n\to\infty}\frac{\xi(n)}{n} = \frac{(2M-1)(1-2w)}{2(M-w)\min\{M,\,1-w\}}, \qquad \mathbb P\text{-a.s.} \tag{1.5} \]

In particular, if M = 1 − w, we have

\[ \limsup_{n\to\infty}\frac{\xi(n)}{n} = \frac{2M-1}{2M}, \qquad \mathbb P\text{-a.s.} \tag{1.6} \]
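As a quick consistency check (added here), (1.6) is just (1.5) evaluated at w = 1 − M: in that case 1 − 2w = 2M − 1, M − w = 2M − 1 and min{M, 1 − w} = M, so

```latex
\frac{(2M-1)(1-2w)}{2(M-w)\min\{M,1-w\}}
  = \frac{(2M-1)(2M-1)}{2(2M-1)M}
  = \frac{2M-1}{2M}.
```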

Define the potential V = (V(x), x ∈ Z) by

\[ V(x) := \begin{cases} \displaystyle\sum_{i=1}^{x}\log\rho_i, & x > 0,\\[6pt] 0, & x = 0,\\[4pt] -\displaystyle\sum_{i=x+1}^{0}\log\rho_i, & x < 0, \end{cases} \]

and C(x, x+1) := exp(−V(x)). For each ω, the Markov chain is an electrical network in the sense of [4], where C(x, x+1) is the conductance of the bond (x, x+1). In particular, μ(x) := exp(−V(x−1)) + exp(−V(x)), x > 0, μ(0) = 1, is a (reversible) invariant measure for the Markov chain.
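The following short sketch (added for illustration; the helper names are ours) computes V and the unnormalized reversible measure μ for an environment given as an array, restricted to Z_+ as in the display above:

```python
import numpy as np

def potential(omega):
    """Potential V on {0, 1, ..., len(omega)-1}: V(0) = 0 and
    V(x) = sum_{i=1}^{x} log rho_i with rho_i = (1 - omega_i)/omega_i
    (omega[x] is the probability of stepping right at x >= 1; omega[0] unused)."""
    log_rho = np.log((1.0 - omega) / omega)
    return np.concatenate(([0.0], np.cumsum(log_rho[1:])))

def reversible_measure(V):
    """mu(0) = exp(-V(0)) = 1 and mu(x) = exp(-V(x-1)) + exp(-V(x)) for x >= 1;
    this satisfies detailed balance with conductances C(x, x+1) = exp(-V(x))."""
    w = np.exp(-V)
    mu = np.empty_like(V)
    mu[0] = w[0]
    mu[1:] = w[:-1] + w[1:]
    return mu
```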

Let V̄ = (V̄(x), x ∈ Z) be a collection of random variables distributed as V conditioned to stay non-negative for x > 0 and strictly positive for x < 0. Due to (1.3), such a distribution is well-defined; see, for example, Bertoin [1] or Golosov [6]. Moreover, it has been shown that

\[ \sum_{x\in\mathbb Z}\exp\bigl(-\overline V(x)\bigr) < \infty, \tag{1.7} \]

see [6], p. 494. For each realization of (V̄(x), x ∈ Z) consider the corresponding Markov chain on Z, which is an electrical network with conductances C̄(x, x+1) := exp(−V̄(x)). Intuitively, this Markov chain is a random walk in the "infinite valley" given by V̄. As usual, μ̄(x) := exp(−V̄(x−1)) + exp(−V̄(x)), x ∈ Z, is a reversible measure for this Markov chain. But, due to (1.7), we can normalize μ̄ to get a reversible probability measure ν, defined by

\[ \nu(x) := \frac{\exp(-\overline V(x-1)) + \exp(-\overline V(x))}{2\sum_{x\in\mathbb Z}\exp(-\overline V(x))}, \qquad x\in\mathbb Z. \tag{1.8} \]

Note that in contrast to the original RWRE, which is null-recurrent, the random walk in the "infinite valley" given by V̄ is positive recurrent.
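That ν is indeed reversible is a one-line check (added for the reader): with conductances C̄(x, x+1) = exp(−V̄(x)) and transition probabilities p(x, x+1) = C̄(x, x+1)/(C̄(x−1, x)+C̄(x, x+1)), p(x, x−1) = C̄(x−1, x)/(C̄(x−1, x)+C̄(x, x+1)),

```latex
\nu(x)\,p(x,x+1)
 =\frac{\overline C(x-1,x)+\overline C(x,x+1)}{2\sum_{y}e^{-\overline V(y)}}
  \cdot\frac{\overline C(x,x+1)}{\overline C(x-1,x)+\overline C(x,x+1)}
 =\frac{\overline C(x,x+1)}{2\sum_{y}e^{-\overline V(y)}}
 =\nu(x+1)\,p(x+1,x).
```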


Fig. 1. Definition of b_n.

Let ℓ¹ be the space of real-valued sequences ℓ = {ℓ(x), x ∈ Z} satisfying ‖ℓ‖ := ∑_{x∈Z} |ℓ(x)| < ∞. Let

\[ c_n := \min\Bigl\{x\ge 0:\ V(x) - \min_{0\le y\le x}V(y) \ge \log n + (\log n)^{1/2}\Bigr\} \]

and

\[ b_n := \min\Bigl\{x\ge 0:\ V(x) = \min_{0\le y\le c_n}V(y)\Bigr\}, \]
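In code, these two indices can be read off a potential array directly; the sketch below (ours, with illustrative names) mirrors the definition literally:

```python
import numpy as np

def valley_indices(V, n):
    """Return (b_n, c_n) for a potential given as an array V[0], V[1], ...
    (assumed long enough that the depth threshold is reached).
    c_n: first x with V(x) - min_{0<=y<=x} V(y) >= log n + (log n)^{1/2};
    b_n: first location of the minimum of V on [0, c_n]."""
    depth = np.log(n) + np.sqrt(np.log(n))
    running_min = np.minimum.accumulate(V)
    above = np.nonzero(V - running_min >= depth)[0]
    if len(above) == 0:
        raise ValueError("potential array too short for this n")
    c_n = int(above[0])
    b_n = int(np.argmin(V[: c_n + 1]))   # argmin returns the first minimiser
    return b_n, c_n
```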

see Figure 1. We will consider {ξ(n, x), x ∈ Z} as a random element of ℓ¹ (of course, ξ(n, y) = 0 for y < 0). Here is our main result.

Theorem 1.2. Consider {ξ(n, x), x ∈ Z} as a random element of ℓ¹ under the probability ℙ. Then,

\[ \Bigl(\frac{\xi(n, b_n+x)}{n},\ x\in\mathbb Z\Bigr) \xrightarrow{\ \mathrm{law}\ } \nu, \qquad n\to\infty, \tag{1.9} \]

where →law denotes convergence in distribution. In other words, the distributions of {ξ(n, b_n+x)/n, x ∈ Z} converge weakly to the distribution of ν (as probability measures on ℓ¹).

The following corollary is immediate.

Corollary 1.1. For each continuous functional f: ℓ¹ → R which is shift-invariant, we have

\[ f\Bigl(\frac{\xi(n,x)}{n},\ x\in\mathbb Z\Bigr) \xrightarrow{\ \mathrm{law}\ } f\bigl(\nu(x),\ x\in\mathbb Z\bigr). \tag{1.10} \]

Example 1 (Self-intersection local time). Let

\[ f\bigl(\ell(x),\ x\in\mathbb Z\bigr) = \sum_{x\in\mathbb Z}\ell(x)^2. \]

Corollary 1.1 yields that

\[ \frac{1}{n^2}\sum_{x\in\mathbb Z}\xi(n,x)^2 \xrightarrow{\ \mathrm{law}\ } \sum_{x\in\mathbb Z}\nu(x)^2. \tag{1.11} \]

This confirms Conjecture 7.4 in [9], at least for the RWRE on the positive integers; for the RWRE on the integer axis, see Section 4.

Example 2 (Maximal local time). Let

\[ f\bigl(\ell(x),\ x\in\mathbb Z\bigr) = \sup_{x\in\mathbb Z}\ell(x). \]

Corollary 1.1 yields that

\[ \frac{\xi(n)}{n} \xrightarrow{\ \mathrm{law}\ } \sup_{x\in\mathbb Z}\nu(x). \tag{1.12} \]
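Both functionals are easy to evaluate on a simulated path; the sketch below (illustrative helper names, complementing the simulation sketch in Section 1) computes ξ(n, ·), the self-intersection local time of (1.11) and the maximal local time of (1.12):

```python
import numpy as np

def local_times(path):
    """xi(n, x) for x = 0, ..., max(path), from a walk path (path[0], ..., path[n])
    on the non-negative integers; sites beyond the range have local time 0."""
    return np.bincount(path)

def self_intersection(path):
    """sum_x xi(n, x)^2; divided by n^2 it converges in law as in (1.11)."""
    xi = local_times(path).astype(np.int64)
    return int(np.sum(xi ** 2))

def max_local_time(path):
    """xi(n) = sup_x xi(n, x); divided by n it converges in law as in (1.12)."""
    return int(local_times(path).max())
```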

2. Proof of Theorem 1.2

The key lemma is

Lemma 2.1. For any K ∈ N,

\[ \Bigl(\frac{\xi(n, b_n+x)}{n},\ -K\le x\le K\Bigr) \xrightarrow{\ \mathrm{law}\ } \bigl(\nu(x),\ -K\le x\le K\bigr). \tag{2.1} \]

Given Lemma 2.1, the proof of Theorem 1.2 is straightforward. It suffices to show that for every function f: ℓ¹ → R, f bounded and uniformly continuous, we have

\[ E\Bigl[f\Bigl(\frac{\xi(n,b_n+x)}{n},\ x\in\mathbb Z\Bigr)\Bigr] \to E\bigl[f\bigl(\nu(x),\ x\in\mathbb Z\bigr)\bigr], \qquad n\to\infty. \tag{2.2} \]

For ℓ ∈ ℓ¹, define ℓ_K by ℓ_K(x) = ℓ(x)·1_{\{-K\le x\le K\}} (x ∈ Z). For f: ℓ¹ → R, define f_K by f_K(ℓ) = f(ℓ_K) (ℓ ∈ ℓ¹). Due to Lemma 2.1,

\[ E\Bigl[f_K\Bigl(\frac{\xi(n,b_n+x)}{n},\ x\in\mathbb Z\Bigr)\Bigr] \to E\bigl[f_K\bigl(\nu(x),\ x\in\mathbb Z\bigr)\bigr], \qquad n\to\infty. \tag{2.3} \]

Fix ε > 0. Since f is uniformly continuous, there is δ such that we have for ℓ ∈ ℓ¹

\[ \|\ell-\ell_K\| \le \delta \ \Longrightarrow\ \bigl|f_K(\ell)-f(\ell)\bigr| \le \varepsilon. \tag{2.4} \]

Hence

\[ \Bigl|E\Bigl[f_K\Bigl(\frac{\xi(n,b_n+x)}{n},\ x\in\mathbb Z\Bigr)\Bigr] - E\Bigl[f\Bigl(\frac{\xi(n,b_n+x)}{n},\ x\in\mathbb Z\Bigr)\Bigr]\Bigr| \le \varepsilon \tag{2.5} \]

provided that

\[ E\Bigl[\sum_{x:\,|x|>K}\frac{\xi(n,b_n+x)}{n}\Bigr] \le \delta. \tag{2.6} \]

Due to Lemma 2.1, E[∑_{x:|x|>K} ξ(n, b_n+x)/n] converges to E[∑_{x:|x|>K} ν(x)]. But, for K large enough, E[∑_{x:|x|>K} ν(x)] ≤ δ since E[ν(·)] is a probability measure on Z. Together with (2.6) and (2.3), this implies (2.2).


Proof of Lemma 2.1. We proceed in several steps:

(i) Define

\[ T(y) := \inf\{n\ge 1:\ X_n = y\}, \tag{2.7} \]

the first hitting time of y. For ε > 0, let A_n denote the event {T(b_n) ≤ εn}. Then, ℙ[A_n] → 1 for n → ∞.

Proof: see Golosov [6], Lemma 1.

(ii) Let B_n denote the event {the RWRE exits [0, c_n) before time n} = {T(c_n) < n}. Then, for P-a.a. ω, P_ω[B_n | X_0 = b_n] → 0 for n → ∞.

Proof: Due to [6], Lemma 7, we have for all m

\[ P_\omega\bigl[T(c_n) < m \mid X_0 = b_n\bigr] \le \mathrm{const}\cdot m\,\mathrm e^{-\log n-(\log n)^{1/2}}. \]

Taking m = n, the right-hand side equals const · e^{−(log n)^{1/2}} → 0, and we conclude that P_ω[B_n | X_0 = b_n] → 0.

(iii) Due to (i) and (ii), we can consider, instead of P_ω, a finite Markov chain P̃_ω = P̃_ω^{(n)} started from b_n, in the valley [0, c_n]. More precisely, the Markov chain P̃_ω is the original Markov chain P_ω with reflection at c_n, i.e. for 0 < x < c_n, P̃_ω[X_{n+1} = x+1 | X_n = x] = ω_x = 1 − P̃_ω[X_{n+1} = x−1 | X_n = x], P̃_ω[X_{n+1} = 1 | X_n = 0] = P̃_ω[X_{n+1} = c_n−1 | X_n = c_n] = 1 and X_0 = b_n. The invariant probability measure μ_ω = μ_ω^{(n)} of P̃_ω is given by

\[ \mu_\omega(x) := \begin{cases} \dfrac{1}{Z_\omega}\bigl(\mathrm e^{-V(x)}+\mathrm e^{-V(x-1)}\bigr), & 0 < x < c_n,\\[8pt] \dfrac{1}{Z_\omega}\,\mathrm e^{-V(0)}, & x = 0,\\[8pt] \dfrac{1}{Z_\omega}\,\mathrm e^{-V(c_n-1)}, & x = c_n, \end{cases} \]

where Z_ω = 2∑_{x=0}^{c_n−1} e^{−V(x)}.
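A small numerical sketch of this reflected chain and its invariant measure (ours, with illustrative names; it assumes the arrays omega and V cover at least {0, ..., c_n}). Multiplying the row vector μ_ω by the transition matrix returns μ_ω up to rounding, which is a convenient check of the formula above:

```python
import numpy as np

def reflected_chain_matrix(omega, c_n):
    """Transition matrix on {0, ..., c_n}, reflected at both ends:
    P(0,1) = P(c_n, c_n - 1) = 1 and P(x, x+1) = omega[x] = 1 - P(x, x-1) inside."""
    P = np.zeros((c_n + 1, c_n + 1))
    P[0, 1] = 1.0
    P[c_n, c_n - 1] = 1.0
    for x in range(1, c_n):
        P[x, x + 1] = omega[x]
        P[x, x - 1] = 1.0 - omega[x]
    return P

def stationary_measure(V, c_n):
    """mu_omega from step (iii): proportional to exp(-V(x)) + exp(-V(x-1)) inside,
    exp(-V(0)) at 0 and exp(-V(c_n - 1)) at c_n, normalized by Z_omega."""
    w = np.exp(-V)
    mu = np.empty(c_n + 1)
    mu[0] = w[0]
    mu[c_n] = w[c_n - 1]
    mu[1:c_n] = w[1:c_n] + w[0:c_n - 1]
    return mu / (2.0 * w[:c_n].sum())

# check: stationary_measure(V, c_n) @ reflected_chain_matrix(omega, c_n)
# should reproduce the measure up to floating-point error.
```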

(iv) Recall (2.7). We note for further reference that for 0 ≤ b < y < i,

\[ P_\omega\bigl[T(b) < T(i) \mid X_0 = y\bigr] = \Bigl(\sum_{j=y}^{i-1}\mathrm e^{V(j)}\Bigr)\Bigl(\sum_{j=b}^{i-1}\mathrm e^{V(j)}\Bigr)^{-1}. \tag{2.8} \]

This follows from direct computation, using C(x, x+1) = e^{−V(x)}; see also [12], formula (2.1.4). We now decompose the paths of the Markov chain P̃_ω into excursions from b_n to b_n. Let x ∈ [−b_n, c_n−b_n], x ≠ 0, and denote by Y_{b_n,x} the number of visits to b_n+x before returning to b_n. The distribution of Y_{b_n,x} is "almost geometric": we have

\[ P_\omega[Y_{b_n,x} = m] = \begin{cases} \alpha(1-\beta)^{m-1}\beta, & m = 1, 2, 3, \ldots,\\ 1-\alpha, & m = 0, \end{cases} \]

where α = α_{b_n,x} = P_ω[T(b_n+x) < T(b_n) | X_0 = b_n] and β = β_{b_n,x} = P_ω[T(b_n) < T(b_n+x) | X_0 = b_n+x]. In particular,

\[ E_\omega[Y_{b_n,x}] = \frac{\alpha}{\beta} = \frac{\mu_\omega(b_n+x)}{\mu_\omega(b_n)}, \tag{2.9} \]

where μ_ω is the reversible measure for the Markov chain, see above. Note that (2.9) also applies for x = 0, with Y_{b_n,0} = 1. Now, with Var_ω(Y_{b_n,x}) denoting the variance of Y_{b_n,x} w.r.t. P_ω,

\[ \operatorname{Var}_\omega(Y_{b_n,x}) = \frac{\alpha(2-\beta-\alpha)}{\beta^2} \le \frac{2}{\beta}\,\frac{\mu_\omega(b_n+x)}{\mu_\omega(b_n)} \le \frac{4}{\beta}. \tag{2.10} \]
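For completeness (this elementary computation is not written out in the paper), the moments in (2.9) and (2.10) follow directly from the geometric law: since Y_{b_n,x} equals 0 with probability 1 − α and is geometric with parameter β otherwise,

```latex
E_\omega[Y_{b_n,x}] = \alpha\sum_{m\ge 1} m(1-\beta)^{m-1}\beta = \frac{\alpha}{\beta},
\qquad
E_\omega[Y_{b_n,x}^2] = \alpha\,\frac{2-\beta}{\beta^2},
\qquad
\operatorname{Var}_\omega(Y_{b_n,x}) = \frac{\alpha(2-\beta)}{\beta^2}-\frac{\alpha^2}{\beta^2}
  = \frac{\alpha(2-\beta-\alpha)}{\beta^2}.
```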

For x > 1,

\[ \beta = (1-\omega_{b_n+x})\,P_\omega\bigl[T(b_n) < T(b_n+x) \mid X_0 = b_n+x-1\bigr]. \]

Taking into account (2.8) yields

\[ \beta = (1-\omega_{b_n+x})\Bigl(\sum_{j=b_n}^{b_n+x-1}\mathrm e^{V(j)-V(b_n+x-1)}\Bigr)^{-1}, \tag{2.11} \]

and (2.11) applies also to x = 1. In particular, recalling (1.2), there is a constant a = a(K, δ) > 0 such that for 1 ≤ x ≤ K, uniformly in n,

\[ \operatorname{Var}_\omega(Y_{b_n,x}) \le a(K,\delta). \tag{2.12} \]

In the same way, one obtains (2.12) for −K ≤ x ≤ 0. We note for further reference that due to (2.10) and (2.11), there is a constant a = a(δ) > 0 such that for x ∈ [−b_n, c_n−b_n],

\[ \operatorname{Var}_\omega(Y_{b_n,x}) \le a(\delta)\,c_n\,\mathrm e^{\log n+(\log n)^{1/2}}. \tag{2.13} \]

(v) Denote by k_n the number of excursions from b_n to b_n before time n. It follows from (2.9) that the average length γ_n of an excursion from b_n to b_n under P_ω is given by

\[ \gamma_n = \sum_{y=0}^{c_n}\frac{\mu_\omega(y)}{\mu_\omega(b_n)} \ge 2. \]

Fix ε ∈ (0, 1). We show that for P-a.a. ω,

\[ P_\omega\Bigl[\Bigl|\frac{k_n}{n}-\frac{1}{\gamma_n}\Bigr| \ge \varepsilon\Bigr] \to 0, \qquad n\to\infty. \tag{2.14} \]

Proof: Let Y^{(1)}_{b_n,x}, Y^{(2)}_{b_n,x}, Y^{(3)}_{b_n,x}, … be i.i.d. copies of Y_{b_n,x}, and denote by E_n^{(i)} = ∑_{x=−b_n}^{c_n−b_n} Y^{(i)}_{b_n,x} the length of the i-th excursion from b_n to b_n. Then, {k_n ≥ m} ⊆ {∑_{i=1}^{m} E_n^{(i)} ≤ n} and {k_n ≤ m} ⊆ {∑_{i=1}^{m} E_n^{(i)} ≥ n}. Hence,

\[ P_\omega\Bigl[\Bigl|\frac{k_n}{n}-\frac{1}{\gamma_n}\Bigr|\ge\varepsilon\Bigr] \le P_\omega\Bigl[\sum_{i=1}^{n(1/\gamma_n+\varepsilon)}E_n^{(i)} \le n\Bigr] + P_\omega\Bigl[\sum_{i=1}^{n(1/\gamma_n-\varepsilon)}E_n^{(i)} \ge n\Bigr]. \tag{2.15} \]

To handle the first term in (2.15), recall that E_n^{(1)}, E_n^{(2)}, … are i.i.d. with expectation γ_n under P_ω and apply Chebyshev's inequality:

\[ P_\omega\Bigl[\frac{1}{n(1/\gamma_n+\varepsilon)}\sum_{i=1}^{n(1/\gamma_n+\varepsilon)}\bigl(E_n^{(i)}-\gamma_n\bigr) \le -\frac{\varepsilon\gamma_n^2}{1+\gamma_n\varepsilon}\Bigr] \le \frac{\operatorname{Var}_\omega(E_n^{(1)})}{n(1/\gamma_n+\varepsilon)}\cdot\frac{(1+\gamma_n\varepsilon)^2}{\varepsilon^2\gamma_n^4}. \tag{2.16} \]

Now, use the inequality Var(∑_{j=1}^{N}X_j) ≤ N∑_{j=1}^{N}Var(X_j) and (2.13) to get from (2.16) that

\[ P_\omega\Bigl[\sum_{i=1}^{n(1/\gamma_n+\varepsilon)}E_n^{(i)} \le n\Bigr] \le \frac{c_n^4\,a(\delta)^2(\log n)^3}{n}\cdot\frac{1+\gamma_n\varepsilon}{\varepsilon^2\gamma_n^3} \le \frac{c_n^4\,a(\delta)^2(\log n)^3}{n}\cdot\frac{2}{\varepsilon^2}. \]

Due to Chung's law of the iterated logarithm, lim inf_{n→∞} (log log n / n)^{1/2}(max_{0≤x≤n}V(x) − min_{0≤x≤n}V(x)) is a.s. a strictly positive constant, and this implies c_n^4/n → 0 for P-a.a. ω. We conclude that the first term in (2.15) goes to 0 for a.a. ω. The second term in (2.15) is treated in the same way.

Further, due to Chebyshev's inequality, (2.9) and (2.12), we have for −K ≤ x ≤ K

\[ P_\omega\Bigl[\Bigl|\frac{1}{k_n}\sum_{i=1}^{k_n}Y^{(i)}_{b_n,x}-\frac{\mu_\omega(b_n+x)}{\mu_\omega(b_n)}\Bigr| \ge \varepsilon\Bigr] \to 0, \qquad n\to\infty. \tag{2.17} \]


(vi) Define, for −K ≤ x ≤ K, ρ_{n,ω} by

\[ \rho_{n,\omega}(x) = \frac{\xi(n+T(b_n),\,b_n+x)}{n} - \frac{\xi(T(b_n),\,b_n+x)}{n}, \qquad -K\le x\le K. \tag{2.18} \]

For x = 0, we have nρ_{n,ω}(0) = k_n.

We now estimate ρ_{n,ω}(x) for −K ≤ x ≤ K, x ≠ 0:

\[ \frac{k_n}{n}\cdot\frac{1}{k_n}\sum_{i=1}^{k_n}Y^{(i)}_{b_n,x} \;\le\; \rho_{n,\omega}(x) \;\le\; \frac{k_n+1}{n}\cdot\frac{1}{k_n+1}\sum_{i=1}^{k_n+1}Y^{(i)}_{b_n,x}. \tag{2.19} \]

We conclude from (2.14) and (2.17) that for −K ≤ x ≤ K,

\[ P_\omega\Bigl[\Bigl|\rho_{n,\omega}(x)-\frac{1}{\gamma_n}\,\frac{\mu_\omega(b_n+x)}{\mu_\omega(b_n)}\Bigr| \ge \varepsilon\Bigr] \to 0, \qquad n\to\infty. \tag{2.20} \]

(vii) We show that {γ_n^{-1} μ_ω(b_n+x)/μ_ω(b_n), −K ≤ x ≤ K} converges in distribution to {ν(x), −K ≤ x ≤ K}.

Proof: Due to [6], Lemma 4, the finite-dimensional distributions of {V(b_n+x) − V(b_n)}_{x∈Z} converge to those of {V̄(x)}_{x∈Z}. Therefore, setting γ_n^{(N)} := ∑_{x=b_n−N}^{b_n+N} μ_ω(x)/μ_ω(b_n), we have for each N that

\[ \Bigl\{\frac{1}{\gamma_n^{(N)}}\,\frac{\mu_\omega(b_n+x)}{\mu_\omega(b_n)},\ -K\le x\le K\Bigr\} \]

converges in distribution to

\[ \Bigl\{\frac{\mathrm e^{-\overline V(x)}+\mathrm e^{-\overline V(x-1)}}{\sum_{x=-N}^{N}\bigl(\mathrm e^{-\overline V(x)}+\mathrm e^{-\overline V(x-1)}\bigr)},\ -K\le x\le K\Bigr\}. \]

It remains to show that for N large enough, γ_n^{(N)} and γ_n are close for n → ∞ and that, for N large enough, ∑_{x=−N}^{N}(e^{−V̄(x)}+e^{−V̄(x−1)}) is close to ∑_{x=−∞}^{∞}(e^{−V̄(x)}+e^{−V̄(x−1)}). Note that γ_n^{(N)} ≤ γ_n and

\[ P\Bigl[\frac{\gamma_n^{(N)}}{\gamma_n}\ge 1-\varepsilon\Bigr] \to P\Bigl[\frac{\sum_{x=-N}^{N}\bigl(\mathrm e^{-\overline V(x)}+\mathrm e^{-\overline V(x-1)}\bigr)}{\sum_{x=-\infty}^{\infty}\bigl(\mathrm e^{-\overline V(x)}+\mathrm e^{-\overline V(x-1)}\bigr)}\ge 1-\varepsilon\Bigr] \tag{2.21} \]

for n → ∞. But, due to (1.7), the event

\[ \Bigl\{\sum_{x=-N}^{N}\bigl(\mathrm e^{-\overline V(x)}+\mathrm e^{-\overline V(x-1)}\bigr) \ge (1-\varepsilon)\sum_{x=-\infty}^{\infty}\bigl(\mathrm e^{-\overline V(x)}+\mathrm e^{-\overline V(x-1)}\bigr)\Bigr\} \]

increases to the whole space and we conclude that the last term in (2.21) goes to 1 for N → ∞.

Putting together (i)–(vii), we arrive at (2.1).

3. Proof of Theorem 1.1

We know now that the laws of ξ(n)/n converge to the law of ν* (see (1.12)), where

\[ \nu^* := \sup_{x\in\mathbb Z}\nu(x) = \sup_{x\in\mathbb Z}\frac{\exp(-\overline V(x-1))+\exp(-\overline V(x))}{2\sum_{x\in\mathbb Z}\exp(-\overline V(x))}. \tag{3.1} \]

We will show that

\[ \limsup_{n\to\infty}\frac{\xi(n)}{n} = c, \qquad \mathbb P\text{-a.s.}, \tag{3.2} \]

where

\[ c = \sup\{z:\ z\in\operatorname{supp}\nu^*\}. \tag{3.3} \]

In fact, we get

\[ \limsup_{n\to\infty}\frac{\xi(n)}{n} \ge c, \qquad \mathbb P\text{-a.s.}, \tag{3.4} \]

from the following general lemma, whose proof is easy (and therefore omitted).

Lemma 3.1. Let (Y_n), n = 1, 2, …, be a sequence of random variables such that (Y_n) converges in distribution to a random variable Y and lim sup_{n→∞} Y_n is constant almost surely. Then,

\[ \limsup_{n\to\infty} Y_n \ge \sup\{z:\ z\in\operatorname{supp}Y\}. \tag{3.5} \]

To show that

\[ \limsup_{n\to\infty}\frac{\xi(n)}{n} \le c, \qquad \mathbb P\text{-a.s.}, \tag{3.6} \]

we will use a coupling argument with an environment ω̃ whose potential Ṽ will be shown to achieve the supremum in (3.3). Define the environment ω̃ as follows:

\[ \tilde\omega_x := \begin{cases} w, & x > 0,\\ M, & x \le 0. \end{cases} \]

For the corresponding potential Ṽ, exp(−Ṽ(x)) is given as follows:

\[ \exp\bigl(-\tilde V(x)\bigr) = \begin{cases} \bigl(\tfrac{w}{1-w}\bigr)^{x}, & x > 0,\\[4pt] 1, & x = 0,\\[4pt] \bigl(\tfrac{M}{1-M}\bigr)^{x}, & x < 0. \end{cases} \]

For any fixed environment ω ∈ [w, M]^Z, Q_ω defines a Markov chain on the integers with transition probabilities Q_ω[X_{n+1} = x+1 | X_n = x] = ω_x = 1 − Q_ω[X_{n+1} = x−1 | X_n = x], x ∈ Z, and X_0 = 0. Then we have the following lemma.

Lemma 3.2. Let the environment ω̃ be defined as above. Assume M ≤ 1 − w. For all ω ∈ [w, M]^Z, all x ∈ Z and all n ∈ N, the distribution of ξ(n, x) with respect to Q_ω is stochastically dominated by the distribution of ξ(n, 0) with respect to Q_{ω̃}. In particular,

\[ \sup\Bigl\{z:\ z\in\operatorname{supp}\Bigl(\sup_{x\in\mathbb Z}\nu(x)\Bigr)\Bigr\} = \tilde\nu(0) := \frac{\exp(-\tilde V(-1))+\exp(-\tilde V(0))}{2\sum_{x\in\mathbb Z}\exp(-\tilde V(x))}. \tag{3.7} \]

Proof of Lemma 3.2. First step. We show that, with T(0) := min{n ≥ 1: X_n = 0}, the distribution of T(0) with respect to Q_{ω̃} is stochastically dominated by the distribution of T(0) with respect to Q_ω. We do this by coupling two Markov chains: the Markov chain (X_n) moves according to Q_ω, with X_0 = 0, and the Markov chain (X̃_n) moves according to Q_{ω̃}, with X̃_0 = 0, in such a way that (X̃_n) returns to 0 before (or at the same time as) (X_n). Let (X̃_n) move according to Q_{ω̃}. If X̃_n ≤ 0 and X̃_{n+1} < X̃_n and X_n ≤ 0, then also X_{n+1} < X_n: this is possible since Q_{ω̃}[X̃_{n+1} < X̃_n] = 1 − M ≤ Q_ω[X_{n+1} < X_n]. If X̃_n > 0 and X̃_{n+1} > X̃_n and X_n > 0, then also X_{n+1} > X_n: this is possible since Q_{ω̃}[X̃_{n+1} > X̃_n] = w ≤ Q_ω[X_{n+1} > X_n]. If X̃_n > 0 and X̃_{n+1} > X̃_n and X_n < 0, then X_{n+1} < X_n: this is possible since Q_{ω̃}[X̃_{n+1} > X̃_n] = w ≤ 1 − M ≤ Q_ω[X_{n+1} < X_n]. Now, (X̃_n) visits 0 before (or at the same time as) (X_n) does and this proves the statement.


Second step. We show that for each n, the distribution of ξ(n, 0) with respect to Q_ω is stochastically dominated by the distribution of ξ(n, 0) with respect to Q_{ω̃}. Let T_1, T_2, … be i.i.d. copies of T(0). Then, Q_ω[ξ(n, 0) ≥ k] = Q_ω[∑_{j=1}^{k} T_j ≤ n] ≤ Q_{ω̃}[∑_{j=1}^{k} T_j ≤ n] = Q_{ω̃}[ξ(n, 0) ≥ k], where the inequality follows from the first step.

Third step. For x ∈ Z, let θ_x be the shift on Ω, i.e. (θ_xω)(y) = ω(x+y), y ∈ Z. For x ∈ Z, the distribution of ξ(n, x) with respect to Q_ω is dominated by the distribution of ξ(n, 0) with respect to Q_{θ_xω} (more precisely, with T(x) := inf{n: X_n = x} denoting the first hitting time of x, the distribution of ξ(n, x) with respect to Q_ω is the distribution of ξ(n − T(x), 0)·1_{\{T(x)\le n\}} with respect to Q_{θ_xω}). Now, apply the second step with θ_xω instead of ω.

To show (3.7), define, for ω ∈ [w, M]^Z, the corresponding potential V as in Section 1. Note that if V is in the support of V̄, the Markov chain defined by Q_ω is positive recurrent with invariant probability measure

\[ \nu(x) = \frac{\exp(-V(x-1))+\exp(-V(x))}{2\sum_{x\in\mathbb Z}\exp(-V(x))}, \tag{3.8} \]

and we have ν(x) = lim_{n→∞} (1/n)ξ(n, x), hence (3.7) follows from the stochastic domination.

Intuitively, Ṽ is the steepest possible (infinite) valley, which maximises the occupation time of 0 (if M ≤ 1 − w).

Let ε > 0. Since (X_n) is a positive recurrent Markov chain with respect to Q_{ω̃}, we have

\[ \lim_{n\to\infty}\frac{\xi(n,0)}{n} = \tilde\nu(0), \qquad Q_{\tilde\omega}\text{-a.s.}, \quad\text{and}\quad Q_{\tilde\omega}\Bigl[\frac{\xi(n,0)}{n}\ge\tilde\nu(0)+\varepsilon\Bigr] \le \mathrm e^{-C(\varepsilon)n}, \tag{3.9} \]

where C(ε) is a strictly positive constant depending only on ε. We have

\[ Q_\omega\Bigl[\frac{\xi(n)}{n}\ge\tilde\nu(0)+\varepsilon\Bigr] \le \sum_{x=-n}^{n}Q_{\theta_x\omega}\Bigl[\frac{\xi(n,0)}{n}\ge\tilde\nu(0)+\varepsilon\Bigr] \le (2n+1)\,Q_{\tilde\omega}\Bigl[\frac{\xi(n,0)}{n}\ge\tilde\nu(0)+\varepsilon\Bigr] \le (2n+1)\mathrm e^{-C(\varepsilon)n}, \tag{3.10} \]

where we used Lemma 3.2 for the second inequality and (3.9) for the last inequality. By the Borel–Cantelli lemma, we conclude that

\[ \limsup_{n\to\infty}\frac{\xi(n)}{n} \le \tilde\nu(0), \qquad Q_\omega\text{-a.s., for all }\omega\in[w,M]^{\mathbb Z}. \tag{3.11} \]

We have to show that (3.11) holds true ℙ-a.s., i.e. for our RWRE on Z_+, with reflection at 0. In order to do so, we need the following modification of Lemma 3.2 for environments on Z_+. Let K ≥ 1 and let the environment ω^{(K)} be defined as follows:

\[ \omega^{(K)}_x := \begin{cases} 1, & x = 0,\\ M, & 0 < x \le K,\\ w, & x > K. \end{cases} \]

For the corresponding potential V^{(K)}, exp(−V^{(K)}(x)) is given as follows:

\[ \exp\bigl(-V^{(K)}(x)\bigr) = \begin{cases} 1, & x = 0,\\[2pt] \bigl(\tfrac{M}{1-M}\bigr)^{x}, & 0 < x \le K,\\[4pt] \bigl(\tfrac{M}{1-M}\bigr)^{K}\bigl(\tfrac{w}{1-w}\bigr)^{x-K}, & x > K. \end{cases} \]


The Markov chain on Z_+ defined by P_{ω^{(K)}} is positive recurrent with invariant probability measure ν^{(K)} given by

\[ \nu^{(K)}(x) := \begin{cases} \dfrac{\exp(-V^{(K)}(x-1))+\exp(-V^{(K)}(x))}{2\bigl(1+\sum_{x\ge 1}\exp(-V^{(K)}(x))\bigr)}, & x \ge 1,\\[12pt] \dfrac{1}{2\bigl(1+\sum_{x\ge 1}\exp(-V^{(K)}(x))\bigr)}, & x = 0. \end{cases} \tag{3.12} \]

Lemma 3.3. Assume M ≤ 1 − w. For all K ≥ 1, if ω ∈ {1} × [w, M]^{Z_+} satisfies ω_0 = 1 and b_n(ω) → ∞, then for all n ∈ N such that b_n(ω) ≥ 2K, and all x ∈ Z, x ≥ −b_n + K, the distribution of ξ(n, b_n+x) with respect to P_ω is stochastically dominated by the distribution of ξ(n, K) with respect to P_{ω^{(K)}}[·|X_0 = K]. Further, ν^{(K)}(K) → ν̃(0) for K → ∞.

The proof of Lemma 3.3 is similar to the proof of Lemma 3.2. The last statement follows from (3.12).

Let ε > 0. Choose K such that ν^{(K)}(K) ≤ ν̃(0) + ε/2. Since (X_n) is a positive recurrent Markov chain with respect to P_{ω^{(K)}}[·|X_0 = K], we have

\[ \lim_{n\to\infty}\frac{\xi(n,K)}{n} = \nu^{(K)}(K), \qquad P_{\omega^{(K)}}[\,\cdot\,|X_0=K]\text{-a.s.}, \quad\text{and}\quad P_{\omega^{(K)}}\Bigl[\frac{\xi(n,K)}{n} \ge \nu^{(K)}(K)+\varepsilon \,\Big|\, X_0=K\Bigr] \le \mathrm e^{-C(\varepsilon)n}, \tag{3.13} \]

where C(ε) is a strictly positive constant depending only on ε, and on K. For P-a.a. ω, choose n_0(ω) such that for n ≥ n_0(ω), b_n(ω) ≥ 2K and (1/n)∑_{x=0}^{K}ξ(n, x) ≤ ε/4, P_ω-a.s. (this is possible since b_n(ω) increases to ∞, P-a.s. for n → ∞, and (1/n)∑_{x=0}^{K}ξ(n, x) → 0, P_ω-a.s. for n → ∞, since P_ω is null-recurrent for P-a.a. ω). We then have, P-a.s., for n ≥ n_0(ω)

\[ P_\omega\Bigl[\frac{\xi(n)}{n}\ge\tilde\nu(0)+\varepsilon\Bigr] \le P_\omega\Bigl[\frac{\xi(n)}{n}\ge\nu^{(K)}(K)+\frac{\varepsilon}{2}\Bigr] \le \sum_{x=-b_n+K}^{n}P_\omega\Bigl[\frac{\xi(n,b_n+x)}{n}\ge\nu^{(K)}(K)+\frac{\varepsilon}{4}\Bigr] \le (2n+1)\,P_{\omega^{(K)}}\Bigl[\frac{\xi(n,K)}{n}\ge\nu^{(K)}(K)+\frac{\varepsilon}{4}\,\Big|\,X_0=K\Bigr] \le (2n+1)\mathrm e^{-C(\varepsilon/4)n}, \tag{3.14} \]

where we used Lemma 3.3 for the third inequality and (3.13) for the last inequality. By the Borel–Cantelli lemma, we conclude that

\[ \limsup_{n\to\infty}\frac{\xi(n)}{n} \le \tilde\nu(0), \qquad P_\omega\text{-a.s., for }P\text{-a.a. }\omega. \tag{3.15} \]

Together with (3.7), this finishes the proof in the case M ≤ 1 − w: the value of c is computed easily from (3.7). In the case M > 1 − w, Lemma 3.2 is replaced with the following.

Lemma 3.4. Let the environment ω̃ be defined as above. Assume M > 1 − w. For all ω ∈ [w, M]^Z, all x ∈ Z and all n ∈ N, the distribution of ξ(T(1)+n, x) − ξ(T(1), x) with respect to Q_ω is stochastically dominated by the distribution of ξ(T(1)+n, 1) with respect to Q_{ω̃}. In particular,

\[ \sup\Bigl\{z:\ z\in\operatorname{supp}\Bigl(\sup_{x\in\mathbb Z}\nu(x)\Bigr)\Bigr\} = \tilde\nu(1) = \frac{\exp(-\tilde V(0))+\exp(-\tilde V(1))}{2\sum_{x\in\mathbb Z}\exp(-\tilde V(x))}. \tag{3.16} \]

The rest of the proof is analogous to the case M ≤ 1 − w.
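For the reader's convenience, here is the elementary evaluation of c from (3.7) in the case M ≤ 1 − w (an added worked step; the case M > 1 − w is symmetric via (3.16)). It only uses geometric series for exp(−Ṽ):

```latex
\sum_{x\in\mathbb Z}e^{-\tilde V(x)}
  = 1+\sum_{x\ge 1}\Bigl(\tfrac{w}{1-w}\Bigr)^{x}+\sum_{x\le -1}\Bigl(\tfrac{M}{1-M}\Bigr)^{x}
  = 1+\frac{w}{1-2w}+\frac{1-M}{2M-1}
  = \frac{M-w}{(1-2w)(2M-1)},
\qquad\text{hence}\qquad
c=\tilde\nu(0)=\frac{e^{-\tilde V(-1)}+e^{-\tilde V(0)}}{2\sum_{x}e^{-\tilde V(x)}}
  =\frac{\frac{1-M}{M}+1}{2\,\frac{M-w}{(1-2w)(2M-1)}}
  =\frac{(2M-1)(1-2w)}{2M(M-w)},
```

which agrees with (1.5), since min{M, 1 − w} = M in this case.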


4. Further remarks

1. Consider RWRE on the integer axis, i.e. with transition probabilities Q_ω[X_{n+1} = x+1 | X_n = x] = ω_x = 1 − Q_ω[X_{n+1} = x−1 | X_n = x] for x ∈ Z. Then, Theorem 1.2 can be modified in the following way (we refer to [2] for details). First, we have to replace b_n in (1.9) with b̂_n defined as follows. We call a triple (a, b, c) with a < b < c a valley of V if

\[ V(b) = \min_{a\le x\le c}V(x), \qquad V(a) = \max_{a\le x\le b}V(x), \qquad V(c) = \max_{b\le x\le c}V(x). \]

The depth of the valley is defined as d(a, b, c) = (V(a) − V(b)) ∧ (V(c) − V(b)). Call a valley (a, b, c) minimal if for all valleys (a′, b′, c′) with a < a′, c′ < c, we have d(a′, b′, c′) < d(a, b, c). Consider the smallest minimal valley (â_n, b̂_n, ĉ_n) with â_n < 0 < ĉ_n and d(â_n, b̂_n, ĉ_n) ≥ log n + (log n)^{1/2}, i.e. (â_n, b̂_n, ĉ_n) is a minimal valley with d(â_n, b̂_n, ĉ_n) ≥ log n + (log n)^{1/2} and for every other valley (a_n, b_n, c_n) with these properties, we have a_n < â_n or ĉ_n < c_n. If there are several such valleys, take (â_n, b̂_n, ĉ_n) such that |b̂_n| is minimal (and b̂_n > 0 if there are two possibilities). Further, one has to replace the limit measure ν in (1.9) with ν̂ defined as follows. Let V̄_left = (V̄_left(x), x ∈ Z) be a collection of random variables distributed as V conditioned to stay strictly positive for x > 0 and non-negative for x < 0. (Recall that V̄ = (V̄(x), x ∈ Z) is a collection of random variables distributed as V conditioned to stay non-negative for x > 0 and strictly positive for x < 0.) Set

\[ \nu_{\mathrm{left}}(x) := \frac{\exp(-\overline V_{\mathrm{left}}(x-1))+\exp(-\overline V_{\mathrm{left}}(x))}{2\sum_{x\in\mathbb Z}\exp(-\overline V_{\mathrm{left}}(x))}, \qquad x\in\mathbb Z. \tag{4.1} \]

In other words, ν_left is defined in the same way as ν, replacing V̄ in (1.8) with V̄_left. Now, let Q be the distribution of ν and Q_left be the distribution of ν_left, and let ν̂ be a random probability measure on Z with distribution ½(Q + Q_left).

Then, (1.9)–(1.12) hold true for RWRE on the integer axis, if we replace b_n with b̂_n and ν with ν̂.

Since the right-hand side of (1.5) is a function of M and w, say ϕ(M, w), for which we have ϕ(M, w) = ϕ(1−w, 1−M), we see that replacing V̄ with V̄_left in (3.1) yields the same constant c as in (3.3) and therefore Theorem 1.1 carries over verbatim.

2. We showed that lim sup_{n→∞} ξ(n)/n is ℙ-a.s. a strictly positive constant and gave its value. We have lim inf_{n→∞} ξ(n)/n = 0, ℙ-a.s., which is a consequence of (1.12). It was shown in [3] that there is a strictly positive constant a such that lim inf_{n→∞} ξ(n) log log log n / n = a, ℙ-a.s. The value of a is not known.

Acknowledgments

We thank Gabor Pete, Achim Klenke and Pierre Andreoletti for pointing out mistakes in earlier versions of this paper and Amir Dembo for discussions. We also thank the referee for several valuable comments.

References

[1] J. Bertoin. Splitting at the infimum and excursions in half-lines for random walks and Lévy processes. Stochastic Process. Appl. 47 (1993) 17–35. MR1232850
[2] R. Billing. Häufig besuchte Punkte der Irrfahrt in zufälliger Umgebung. Diploma thesis, Johannes Gutenberg-Universität Mainz, 2009.
[3] A. Dembo, N. Gantert, Y. Peres and Z. Shi. Valleys and the maximal local time for random walk in random environment. Probab. Theory Related Fields 137 (2007) 443–473. MR2278464
[4] P. G. Doyle and J. L. Snell. Probability: Random Walks and Electrical Networks. Carus Math. Monographs 22. Math. Assoc. Amer., Washington, DC, 1984. MR0920811
[5] N. Gantert and Z. Shi. Many visits to a single site by a transient random walk in random environment. Stochastic Process. Appl. 99 (2002) 159–176. MR1901151
[6] A. O. Golosov. Localization of random walks in one-dimensional random environments. Comm. Math. Phys. 92 (1984) 491–506. MR0736407
[7] P. Révész. Random Walk in Random and Non-Random Environments, 2nd edition. World Scientific, Hackensack, NJ, 2005. MR2168855
[8] Z. Shi. A local time curiosity in random environment. Stochastic Process. Appl. 76 (1998) 231–250. MR1642673
[9] Z. Shi. Sinai's walk via stochastic calculus. In Milieux aléatoires 53–74. F. Comets and E. Pardoux (Eds). Panoramas et Synthèses 12. Soc. Math. France, Paris, 2001. MR2226845
[10] Y. G. Sinai. The limiting behavior of a one-dimensional random walk in a random medium. Theory Probab. Appl. 27 (1982) 256–268. MR0657919
[11] F. Solomon. Random walks in random environment. Ann. Probab. 3 (1975) 1–31. MR0362503
[12] O. Zeitouni. Random walks in random environment. In XXXI Summer School in Probability, St Flour (2001) 193–312. Lecture Notes in Math. 1837. Springer, Berlin, 2004. MR2071631
