The most visited sites of biased random walks on trees

N/A
N/A
Protected

Academic year: 2021

Partager "The most visited sites of biased random walks on trees"

Copied!
15
0
0

Texte intégral

(1)

HAL Id: hal-01214677

https://hal.sorbonne-universite.fr/hal-01214677

Submitted on 12 Oct 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Yueyun Hu, Zhan Shi

To cite this version:

Yueyun Hu, Zhan Shi. The most visited sites of biased random walks on trees. Electronic Journal of Probability, Institute of Mathematical Statistics (IMS), 2015, 20, pp. 1-14. DOI: 10.1214/EJP.v20-4051. hal-01214677.

Electron. J. Probab. 20 (2015), no. 62, 1–14.

ISSN: 1083-6489. DOI: 10.1214/EJP.v20-4051.

The most visited sites of biased random walks on trees*

Yueyun Hu, Zhan Shi

Abstract

We consider the slow movement of a randomly biased random walk $(X_n)$ on a supercritical Galton–Watson tree, and are interested in the sites on the tree that are most visited by the biased random walk. Our main result implies tightness of the distributions of the most visited sites under the annealed measure. This is in contrast with the one-dimensional case, and provides, to the best of our knowledge, the first non-trivial example of a null recurrent random walk whose most visited sites are not transient, a question originally raised by Erdős and Révész [11] for simple symmetric random walk on the line.

Keywords: biased random walk on the Galton–Watson tree; branching random walk; local time; most visited site.

AMS MSC 2010: 60J80; 60G50; 60K37.

Submitted to EJP on January 12, 2015, final version accepted on May 29, 2015.

Supersedes arXiv:1502.02831.

1 Introduction

We consider a (randomly) biased random walk $(X_n)$ on a supercritical Galton–Watson tree $\mathbb{T}$, rooted at $\varnothing$. The random biases are represented by $\omega := (\omega(x),\, x\in\mathbb{T}\backslash\{\varnothing\})$, a family of random vectors; for each vertex $x\in\mathbb{T}$, $\omega(x) := (\omega(x,y),\, y\in\mathbb{T})$ is such that $\omega(x,y)\ge 0$ for all $y\in\mathbb{T}$ and that $\sum_{y\in\mathbb{T}}\omega(x,y)=1$. For any vertex $x\in\mathbb{T}\backslash\{\varnothing\}$, let $\overleftarrow{x}$ be its parent. For the sake of presentation, we modify the values of $\omega(\varnothing,x)$ for $x$ with $\overleftarrow{x}=\varnothing$, and add a special vertex, denoted by $\overleftarrow{\varnothing}$, which is considered as the parent of $\varnothing$, such that $\omega(\varnothing,\overleftarrow{\varnothing}) + \sum_{x:\,\overleftarrow{x}=\varnothing}\omega(\varnothing,x) = 1$. The vertex $\overleftarrow{\varnothing}$ is, however, not regarded as a vertex of $\mathbb{T}$; so, for example, $\sum_{x\in\mathbb{T}} f(x)$ does not contain the term $f(\overleftarrow{\varnothing})$.

We assume that for each pair of vertices $x$ and $y$ in $\mathbb{T}\cup\{\overleftarrow{\varnothing}\}$, $\omega(x,y)>0$ if and only if $y\sim x$, where by $x\sim y$ we mean that $x$ is either a child, or the parent, of $y$. Moreover, we define $\omega(\overleftarrow{\varnothing},\varnothing) := 1$.

*Partly supported by ANR project MEMEMO2 (2010-BLAN-0125).

LAGA, Institut Galilée, Université Paris XIII, 99 avenue J-B Clément, F-93430 Villetaneuse, France. E-mail: yueyun@math.univ-paris13.fr

LPMA, Université Paris VI, 4 place Jussieu, F-75252 Paris Cedex 05, France. E-mail: zhan.shi@upmc.fr

Given $\omega$, the biased walk $(X_n,\, n\ge 0)$ is a Markov chain taking values in $\mathbb{T}\cup\{\overleftarrow{\varnothing}\}$, started at $X_0=\varnothing$, whose transition probabilities are $$P_\omega\{X_{n+1}=y \mid X_n=x\} = \omega(x,y).$$ The probability $P_\omega$ is often referred to as the quenched probability. We also consider the annealed probability $\mathbb{P}(\cdot) := \int P_\omega(\cdot)\,\mathbf{P}(\mathrm{d}\omega)$, where $\mathbf{P}$ denotes the probability with respect to the environment $(\omega,\mathbb{T})$.
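As an illustration (this is our own minimal sketch, not code from the paper), the quenched dynamics can be simulated directly; the small tree and the random weights below are arbitrary, hypothetical stand-ins for the environment $\omega$:

```python
import random

random.seed(0)

# A small fixed tree: vertices are tuples of child indices, () is the root.
# "PARENT" plays the role of the special parent of the root.
children = {(): [(0,), (1,)], (0,): [(0, 0)], (1,): [], (0, 0): []}

def neighbours(x):
    parent = "PARENT" if x == () else x[:-1]
    return [parent] + children[x]

# Random environment: i.i.d. positive weights on the neighbours of each
# vertex, normalised so that omega(x, .) is a probability vector.
omega = {}
for x in children:
    w = {y: random.uniform(0.5, 2.0) for y in neighbours(x)}
    s = sum(w.values())
    omega[x] = {y: v / s for y, v in w.items()}

def step(x):
    # One transition: P_omega{X_{n+1} = y | X_n = x} = omega(x, y);
    # the special parent reflects back to the root.
    if x == "PARENT":
        return ()
    ys = list(omega[x])
    return random.choices(ys, weights=[omega[x][y] for y in ys])[0]

X = ()
trajectory = [X]
for _ in range(10):
    X = step(X)
    trajectory.append(X)
print(len(trajectory))  # 11 positions: X_0, ..., X_10
```

The reflection at the special vertex mirrors the convention $\omega(\overleftarrow{\varnothing},\varnothing)=1$ above.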

There is an active literature on randomly biased walks on Galton–Watson trees; see, for example, the large list of references in [17]. In this paper, we restrict our attention to a regime of slow movement of the walk in the recurrent case.

Clearly, the movement of the biased random walk $(X_n)$ is determined by the law of the random environment $\omega$. We assume that $(\omega(x,y),\, y\sim x)$, $x\in\mathbb{T}$, are i.i.d. random vectors. It is convenient to view $(\omega,\mathbb{T})$ as a marked tree (in the sense of Neveu [23]).

The influence of the random environment is quantified by means of the random potential process $(V(x),\, x\in\mathbb{T})$, defined by $V(\varnothing) := 0$ and $$V(x) := -\sum_{y\in\, ]\!]\varnothing,\,x]\!]} \log\frac{\omega(\overleftarrow{y},\, y)}{\omega(\overleftarrow{y},\, \overleftarrow{\overleftarrow{y}}\,)}, \qquad x\in\mathbb{T}\backslash\{\varnothing\}, \quad (1.1)$$ where $\overleftarrow{y}$ is the parent of $y$ (and $\overleftarrow{\overleftarrow{y}}$ the parent of $\overleftarrow{y}$), and $]\!]\varnothing,x]\!] := [\![\varnothing,x]\!]\backslash\{\varnothing\}$, with $[\![\varnothing,x]\!]$ denoting the set of vertices (including $x$ and $\varnothing$) on the unique shortest path connecting $\varnothing$ to $x$. There exists an obvious bijection between the random environment $\omega$ and the random potential $V$.

For any $x\in\mathbb{T}$, let $|x|$ denote its generation. Throughout the paper, we assume $$E\Big[\sum_{x:\,|x|=1} e^{-V(x)}\Big] = 1, \qquad E\Big[\sum_{x:\,|x|=1} V(x)\, e^{-V(x)}\Big] = 0. \quad (1.2)$$

We also assume that the following integrability condition is fulfilled: there exists $\delta>0$ such that $$E\Big[\sum_{x:\,|x|=1} e^{-(1+\delta)V(x)}\Big] + E\Big[\sum_{x:\,|x|=1} e^{\delta V(x)}\Big] + E\Big[\Big(\sum_{x:\,|x|=1} 1\Big)^{1+\delta}\Big] < \infty. \quad (1.3)$$

The random potential $(V(x),\, x\in\mathbb{T})$ is a branching random walk as in Biggins [6]; as such, (1.2) corresponds to the “boundary case” (Biggins and Kyprianou [9]). It is known that, under some additional integrability assumptions that are weaker than (1.3), the branching random walk in the boundary case possesses some deep universality properties; see [25] for references.

Under (1.2) and (1.3), the biased walk $(X_n)$ is null recurrent (Lyons and Pemantle [21], Menshikov and Petritis [22], Faraud [12]), such that upon the system's survival, $$\frac{|X_n|}{(\log n)^2} \ \xrightarrow{\ \text{law}\ }\ X, \quad (1.4)$$ $$\frac{1}{(\log n)^3}\, \max_{0\le i\le n}|X_i| \ \to\ c_1 \quad \text{a.s.}, \quad (1.5)$$ where $X$ is non-degenerate, taking values in $(0,\infty)$, and $c_1$ denotes a positive constant: both $X$ and $c_1$ are explicitly known; see [18] and [13], respectively.

For any vertex $x\in\mathbb{T}$, let us define $$L_n(x) := \sum_{i=1}^{n} \mathbf{1}_{\{X_i=x\}}, \qquad n\ge 1,$$ which is the (site) local time of the biased walk at $x$. Consider, for any $n\ge 1$, the non-empty random set $$\mathscr{A}_n := \Big\{x\in\mathbb{T}:\ L_n(x) = \max_{y\in\mathbb{T}} L_n(y)\Big\}. \quad (1.6)$$

In words,Anis the set of the most visited sites (or: favourite sites) at timen. The study of favourite sites was initiated by Erd˝os and Révész [11] for the symmetric Bernoulli random walk on the line (see a list of ten open problems presented in Chapter 11 of the book of Révész [24]). In particular, for the symmetric Bernoulli random walk onZ, Erd˝os and Révész [11] conjectured: (a) tightness for the family of most visited sites, and (b) the cardinality of the set of most visited sites being eventually bounded by2. Conjecture (b) was partially proved by Tóth [27], and is believed to be true by many. On the other hand, Conjecture (a) was disproved by Bass and Griffin [5]: as a matter of fact,inf{|x|, x∈An} → ∞almost surely for the one-dimensional Bernoulli walk. Later, we proved in [16] that it was also the case for Sinai’s one-dimensional random walk in random environment. The present paper is devoted to studying both questions for biased walks on trees; our answer is as follows.
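Computationally, $L_n(x)$ and $\mathscr{A}_n$ can be read off any trajectory by counting visits; the following sketch (ours, with a short hypothetical trajectory rather than one simulated from the model) makes the definition (1.6) concrete:

```python
from collections import Counter

# Hypothetical trajectory (X_1, ..., X_n) of a walk on labelled sites.
trajectory = ["root", "a", "root", "a", "b", "a", "root", "a"]

def favourite_sites(traj):
    # L_n(x) = #{1 <= i <= n : X_i = x}; A_n is the argmax set of (1.6).
    local_time = Counter(traj)
    m = max(local_time.values())
    return {x for x, c in local_time.items() if c == m}, local_time

A_n, L_n = favourite_sites(trajectory)
print(A_n)  # {'a'}: the unique site attaining max_y L_n(y) = 4
```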

Corollary 2.2. Assume (1.2) and (1.3). There exists a finite non-empty set $\mathscr{U}_{\min}$, defined in (2.5) and depending only on the environment, such that $$\lim_{n\to\infty}\ \mathbb{P}(\mathscr{A}_n \subset \mathscr{U}_{\min} \mid \text{non-extinction}) = 1.$$ In particular, the family of most visited sites is tight under $\mathbb{P}$.

So, concerning the tightness question for most visited sites, biased walks on trees behave very differently from recurrent one-dimensional nearest-neighbour random walks (whether the environment is random or deterministic). To the best of our knowledge, this is the first non-trivial example of a null recurrent Markov chain whose most visited sites are tight.

In the next section, we give a precise statement of the main result of this paper, Theorem 2.1.

2 Statement of results

Let us define a symmetrized version of the potential: $$U(x) := V(x) - \log\frac{1}{\omega(x,\overleftarrow{x}\,)}, \qquad x\in\mathbb{T}. \quad (2.1)$$

Note that $$e^{-U(x)} = \frac{1}{\omega(x,\overleftarrow{x}\,)}\, e^{-V(x)} = e^{-V(x)} + \sum_{y\in\mathbb{T}:\ \overleftarrow{y}=x} e^{-V(y)}, \qquad x\in\mathbb{T}. \quad (2.2)$$

It is known (Biggins [7], Lyons [20]) that under assumption (1.2), $$\inf_{x:\,|x|=n} U(x) \to \infty, \qquad \mathbf{P}^*\text{-a.s.}, \quad (2.3)$$ where here and in the sequel, $$\mathbf{P}^*(\cdot) := \mathbf{P}(\cdot \mid \text{non-extinction}), \qquad \mathbb{P}^*(\cdot) := \mathbb{P}(\cdot \mid \text{non-extinction}).$$

Define the derivative martingale $$D_n := \sum_{x:\,|x|=n} V(x)\, e^{-V(x)}, \qquad n\ge 0. \quad (2.4)$$ It is known (Biggins and Kyprianou [8], Aïdékon [1], Chen [10]) that (1.3) implies that $D_n$ converges $\mathbf{P}$-a.s. to a limit, denoted by $D_\infty$, and that $$D_\infty > 0, \qquad \mathbf{P}^*\text{-a.s.}$$

Define the set of the minimizers of $U(\cdot)$: $$\mathscr{U}_{\min} := \Big\{x\in\mathbb{T}:\ U(x) = \min_{y\in\mathbb{T}} U(y)\Big\}. \quad (2.5)$$ Since $\inf_{x:\,|x|=n} U(x)\to\infty$ $\mathbf{P}^*$-a.s. (see (2.3)), the set $\mathscr{U}_{\min}$ is finite and non-empty.

The main result of the paper is as follows.

Theorem 2.1. Assume (1.2) and (1.3). For any $\varepsilon>0$,¹ $$\sup_{x\in\mathbb{T}}\ P_\omega\Big\{\Big|\frac{L_n(x)}{n/\log n} - \frac{\sigma^2}{4D_\infty}\, e^{-U(x)}\Big| > \varepsilon\Big\} \to 0, \qquad \text{in } \mathbf{P}^*\text{-probability},$$ where $U(\cdot)$ is the symmetrized potential in (2.1), $D_\infty$ the $\mathbf{P}^*$-almost surely positive limit of the derivative martingale $(D_n)$ in (2.4), and $$\sigma^2 := E\Big[\sum_{y:\,|y|=1} V(y)^2\, e^{-V(y)}\Big] \in (0,\infty). \quad (2.6)$$

Corollary 2.2. Assume (1.2) and (1.3). If $\mathscr{A}_n$ is the set of the most visited sites at time $n$ as in (1.6), then $$\mathbb{P}^*(\mathscr{A}_n \subset \mathscr{U}_{\min}) \to 1,$$ where $\mathscr{U}_{\min}$ is the set of the minimizers of $U(\cdot)$ in (2.5).

Our results are not as strong as they might look. For example, Theorem 2.1 does not claim that $P_\omega\{\sup_{x\in\mathbb{T}} |\frac{L_n(x)}{n/\log n} - \frac{\sigma^2}{4D_\infty}\, e^{-U(x)}| > \varepsilon\} \to 0$ in $\mathbf{P}^*$-probability. It essentially says, in view of Proposition 2.3 below, that for any fixed $x\in\mathbb{T}$, $P_\omega\{|\frac{L_n(x)}{n/\log n} - \frac{\sigma^2}{4D_\infty}\, e^{-U(x)}| > \varepsilon\} \to 0$ in $\mathbf{P}^*$-probability. Corollary 2.2 is much weaker than what Tóth [27] proved for the symmetric Bernoulli random walk on $\mathbb{Z}$: for example, it does not claim that $\mathbb{P}^*$-a.s., $\mathscr{A}_n\subset\mathscr{U}_{\min}$ for all sufficiently large $n$; we do not even know whether this is true.

For the local time at a fixed site of biased random walks on Galton–Watson trees in other recurrent regimes, see the recent paper [15].

An important ingredient in the proof of Theorem 2.1 is the following estimate on the local time of vertices that are away from the root:

Proposition 2.3. Assume (1.2) and (1.3). Then $$\lim_{\varepsilon\to 0}\ \limsup_{n\to\infty}\ \mathbb{P}^*\Big\{\max_{x\in\mathbb{T}:\ U(x)\ge\log(8/\varepsilon^2)} L_n(x) \ \ge\ \frac{\varepsilon\, n}{\log n}\Big\} = 0.$$

In the light of the fact that $\inf_{|x|=n} U(x)\to\infty$ $\mathbf{P}^*$-a.s. (see (2.3)), Proposition 2.3 allows us, in the proof of Theorem 2.1, to estimate the probability for fixed $x$.

Proposition 2.3 is proved in Section 3; Theorem 2.1 and Corollary 2.2 in Section 4.

Throughout the paper, for any pair of vertices $x$ and $y$, we write $x<y$ or $y>x$ if $y$ is a (strict) descendant of $x$, and $x\le y$ or $y\ge x$ if $y$ is either a (strict) descendant of $x$, or $x$ itself. For any $x\in\mathbb{T}$, we use $x_i$ (for $0\le i\le |x|$) to denote the ancestor of $x$ in the $i$-th generation; in particular, $x_0=\varnothing$ and $x_{|x|}=x$.

¹ By convergence in $\mathbf{P}^*$-probability, we mean convergence in probability under $\mathbf{P}^*$.

(6)

3 Proof of Proposition 2.3

Before presenting the proof of Proposition 2.3, we outline the overall strategy. We exploit the relation between the local time at $x$ and the hitting time $T_x$ and return time $T^+_\varnothing$ (defined in (3.4) and (3.5) below, respectively). Probabilities involving these random times are known to have standard one-dimensional formulas ((3.6) and (3.7)). Proposition 2.3 is then proved using large deviation estimates away from the mean when the potential $U(x)$ is large. Since $U(x)$ is indeed $\mathbf{P}^*$-a.s. large, uniformly over deep generations of the tree (as seen in (2.3)), we will be done.

We need some preliminaries. Let $$\Lambda(x) := \sum_{y:\ \overleftarrow{y}=x} e^{-[V(y)-V(x)]}, \qquad x\in\mathbb{T}. \quad (3.1)$$ In particular, $\Lambda(\varnothing) = \sum_{x:\,|x|=1} e^{-V(x)}$. By definition, $1+\Lambda(x) = \frac{1}{\omega(x,\overleftarrow{x}\,)}$.

ω(x,x).

LetSi−Si−1,i≥1, be i.i.d. random variables whose law is characterized by Eh

h(S1)i

=Eh X

x∈T:|x|=1

e−V(x)h(V(x))i

, (3.2)

for any Borel functionh: R→R+.

The following fact, quoted from [18], is a variant of the so-called “many-to-one formula” for the branching random walk.

Fact 3.1. Assume (1.2) and (1.3). Let $\Lambda(x)$ be as in (3.1). For any $n\ge 1$ and any Borel function $g:\ \mathbb{R}^{n+1}\to\mathbb{R}_+$, we have $$E\Big[\sum_{x\in\mathbb{T}:\,|x|=n} g\big(V(x_1),\cdots,V(x_n),\,\Lambda(x)\big)\Big] = E\Big[e^{S_n}\, G\big(S_1,\cdots,S_n\big)\Big],$$ where $S_i-S_{i-1}$, $i\ge 1$, are i.i.d. whose common distribution is given in (3.2), and $$G(a_1,\cdots,a_n) := E\Big[g\Big(a_1,\cdots,a_n,\ \sum_{x\in\mathbb{T}:\,|x|=1} e^{-V(x)}\Big)\Big].$$

Define a reflecting barrier at (notation: $]\!]\varnothing,x[\![\ :=\ ]\!]\varnothing,x]\!]\backslash\{x\}$) $$\mathscr{L}_n^{(\gamma)} := \Big\{x:\ \sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)-V(x)} > \frac{n}{(\log n)^{\gamma}},\ \ \sum_{z\in\,]\!]\varnothing,\,y]\!]} e^{V(z)-V(y)} \le \frac{n}{(\log n)^{\gamma}},\ \forall y\in\,]\!]\varnothing,x[\![\Big\}, \quad (3.3)$$ where $\gamma\in\mathbb{R}$ is a fixed parameter. We write $x < \mathscr{L}_n^{(\gamma)}$ if $\sum_{z\in\,]\!]\varnothing,\,y]\!]} e^{V(z)-V(y)} \le \frac{n}{(\log n)^{\gamma}}$ for all $y\in\,]\!]\varnothing,x]\!]$.

We recall two results from [18]. The first justifies the presence of the barrier $\mathscr{L}_n^{(\gamma)}$ for the biased walk $(X_n)$, and the second describes the local time at the root.

Fact 3.2 ([18]). Assume (1.2) and (1.3). If $\gamma<2$, then $$\lim_{n\to\infty}\ \mathbb{P}^*\Big[\bigcup_{i=1}^{n}\{X_i\in\mathscr{L}_n^{(\gamma)}\}\Big] = 0.$$

Fact 3.3 ([18]). Assume (1.2) and (1.3). For any $\varepsilon>0$, $$P_\omega\Big\{\Big|\frac{L_n(\varnothing)}{n/\log n} - \frac{\sigma^2}{4D_\infty}\, e^{-U(\varnothing)}\Big| > \varepsilon\Big\} \to 0, \qquad \text{in } \mathbf{P}^*\text{-probability}.$$

We now proceed to the proof of Proposition 2.3. Define $$T_x := \inf\{i\ge 0:\ X_i=x\}, \qquad x\in\mathbb{T}, \quad (3.4)$$ $$T^+_\varnothing := \inf\{i\ge 1:\ X_i=\varnothing\}. \quad (3.5)$$ In words, $T_x$ is the first hitting time of $x$ by the biased walk, whereas $T^+_\varnothing$ is the first return time to the root $\varnothing$.

Let $x\in\mathbb{T}\backslash\{\varnothing\}$. The probability $P_\omega(T_x < T^+_\varnothing)$ only involves a one-dimensional random walk in random environment (namely, the restriction to $[\![\varnothing,x[\![$ of the biased walk $(X_i)$), so a standard result for one-dimensional random walks in random environment (Golosov [14]) tells us that $$P_\omega(T_x < T^+_\varnothing) = \frac{\omega(\varnothing,x_1)\, e^{V(x_1)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}} = \frac{\omega(\varnothing,\overleftarrow{\varnothing}\,)}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}, \quad (3.6)$$ $$P_{x,\omega}\{T_\varnothing < T_x^+\} = \frac{e^{U(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}, \quad (3.7)$$ where $x_1$ is the ancestor of $x$ in the first generation.
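The one-dimensional identity behind (3.6) can be checked numerically. The sketch below is ours, not the paper's: it takes a nearest-neighbour walk on $\{0,1,\dots,N\}$ with an artificial parent below $0$, builds the one-dimensional analogue of the potential from the up/down probabilities, and compares the classical gambler's-ruin computation of $P(T_N < T_0^+)$ with the potential formula:

```python
import math
import random

random.seed(1)
N = 6
# Up-probabilities at sites 0..N-1; at site 0 the complement is the
# probability of stepping to the artificial parent of the root.
p = [random.uniform(0.3, 0.7) for _ in range(N)]
q = [1.0 - pi for pi in p]

# (a) Classical gambler's-ruin product formula: u(1) = P_1(T_N < T_0),
# then P(T_N < T_0^+) = p[0] * u(1).
prods = [1.0]
for i in range(1, N):
    prods.append(prods[-1] * q[i] / p[i])
direct = p[0] / sum(prods)

# (b) Potential formula in the spirit of (3.6): omega(root, parent)
# divided by sum_z exp(V(z)), with V(z) = -sum_{y<=z} log(p[y-1]/q[y-1]).
V = [0.0]
for z in range(1, N + 1):
    V.append(V[-1] - math.log(p[z - 1] / q[z - 1]))
formula = q[0] / sum(math.exp(V[z]) for z in range(1, N + 1))

print(abs(direct - formula) < 1e-12)  # True: the two computations agree
```

The agreement is exact algebraically: $e^{V(z)}$ telescopes into the same products of down/up ratios that appear in the gambler's-ruin formula.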

Proof of Proposition 2.3. By Fact 3.2, for all $\gamma_1<2$, we have $\mathbb{P}^*(\cup_{i=1}^m \{X_i\in\mathscr{L}_m^{(\gamma_1)}\})\to 0$, $m\to\infty$. So it suffices to check that for some $\gamma_1<2$, $$\lim_{b\to 0}\ \limsup_{m\to\infty}\ \mathbb{P}^*\Big\{\max_{x<\mathscr{L}_m^{(\gamma_1)}:\ U(x)\ge\log(8/b^2)} L_m(x) \ \ge\ \frac{b\, m}{\log m}\Big\} = 0.$$

Since $\mathbf{P}^*(U(\varnothing)\ge\log(8/b^2))\to 0$ for $b\to 0$, it suffices to prove that for some $\gamma_1<2$, $$\lim_{b\to 0}\ \limsup_{m\to\infty}\ \mathbb{P}^*\Big\{\max_{x\in\mathbb{T}\backslash\{\varnothing\}:\ x<\mathscr{L}_m^{(\gamma_1)},\ U(x)\ge\log(8/b^2)} L_m(x) \ \ge\ \frac{b\, m}{\log m}\Big\} = 0. \quad (3.8)$$

Let $T^{(0)} := 0$ and inductively $T^{(j)} := \inf\{i > T^{(j-1)}:\ X_i=\varnothing\}$, for $j\ge 1$. In words, $T^{(j)}$ is the $j$-th return time to $\varnothing$. We have, for $n\ge 2$, $c>0$, $\varepsilon\in(0,1)$, $1<\gamma<2$ and $m(n) := \lfloor c\, n\log n\rfloor$, $$\mathbb{P}^*\Big\{\max_{x\in\mathbb{T}\backslash\{\varnothing\}:\ x<\mathscr{L}_{m(n)}^{(\gamma)},\ U(x)\ge\log(8/\varepsilon)} L_{m(n)}(x)\ \ge\ \varepsilon n\Big\} \le \mathbb{P}^*\{T^{(n)}\le m(n)\} + \mathbb{P}^*\Big\{\max_{x\in\mathbb{T}\backslash\{\varnothing\}:\ x<\mathscr{L}_{m(n)}^{(\gamma)},\ U(x)\ge\log(8/\varepsilon)} L_{T^{(n)}}(x)\ \ge\ \varepsilon n\Big\}.$$

By Fact 3.3, $\frac{T^{(n)}}{n\log n} \to \frac{4D_\infty}{\sigma^2}\, e^{U(\varnothing)}$ in $\mathbb{P}^*$-probability, so the portmanteau theorem implies that $\limsup_{n\to\infty}\mathbb{P}^*\{T^{(n)}\le m(n)\} \le \mathbb{P}^*\{\frac{4D_\infty}{\sigma^2}\, e^{U(\varnothing)}\le c\}$. Assume, for the time being, that we are able to prove that for some $\gamma<2$, any $c>0$ and any $0<\varepsilon<1$, $$\mathbb{P}^*\Big\{\max_{x\in\mathbb{T}\backslash\{\varnothing\}:\ x<\mathscr{L}_{m(n)}^{(\gamma)},\ U(x)\ge\log(8/\varepsilon)} L_{T^{(n)}}(x)\ \ge\ \varepsilon n\Big\} \to 0, \qquad n\to\infty. \quad (3.9)$$

Then we will have $$\limsup_{n\to\infty}\ \mathbb{P}^*\Big\{\max_{x\in\mathbb{T}\backslash\{\varnothing\}:\ x<\mathscr{L}_{m(n)}^{(\gamma)},\ U(x)\ge\log(8/\varepsilon)} L_{m(n)}(x)\ \ge\ \varepsilon n\Big\} \le \mathbb{P}^*\Big\{\frac{4D_\infty}{\sigma^2}\, e^{U(\varnothing)}\le c\Big\}.$$ Since $n \le \frac{2}{c}\, \frac{m(n+1)}{\log m(n)}$ (for all sufficiently large $n$), this will yield $$\limsup_{n\to\infty}\ \mathbb{P}^*\Big\{\max_{x\in\mathbb{T}\backslash\{\varnothing\}:\ x<\mathscr{L}_{m(n)}^{(\gamma)},\ U(x)\ge\log(8/\varepsilon)} L_{m(n)}(x)\ \ge\ \frac{2\varepsilon}{c}\, \frac{m(n+1)}{\log m(n)}\Big\} \le \mathbb{P}^*\Big\{\frac{4D_\infty}{\sigma^2}\, e^{U(\varnothing)}\le c\Big\}.$$

Let $m\in[m(n),\, m(n+1)]\cap\mathbb{Z}$. Then $L_{m(n)}(x)\le L_m(x)$ (for all $x\in\mathbb{T}$); on the other hand, if $x<\mathscr{L}_m^{(\gamma_1)}$, then $x<\mathscr{L}_{m(n)}^{(\gamma)}$ for all $\gamma_1\in(\gamma,2)$ and all sufficiently large $n$. Consequently, we will have, for all $c>0$ and $\varepsilon\in(0,1)$, $$\limsup_{m\to\infty}\ \mathbb{P}^*\Big\{\max_{x\in\mathbb{T}\backslash\{\varnothing\}:\ x<\mathscr{L}_m^{(\gamma_1)},\ U(x)\ge\log(8/\varepsilon)} L_m(x)\ \ge\ \frac{2\varepsilon}{c}\, \frac{m}{\log m}\Big\} \le \mathbb{P}^*\Big\{\frac{4D_\infty}{\sigma^2}\, e^{U(\varnothing)}\le c\Big\}.$$

Taking $c := 2\varepsilon^{1/2}$ will then yield (3.8) (writing $b := \varepsilon^{1/2}$ there), and thus Proposition 2.3.

The rest of the section is devoted to the proof of (3.9). By (1.5), $\frac{1}{(\log n)^3}\max_{0\le i\le n}|X_i|$ converges $\mathbb{P}^*$-a.s. to a positive constant, and since $\frac{T^{(n)}}{n\log n}$ converges in $\mathbb{P}^*$-probability to a positive limit, we deduce that $\frac{1}{(\log n)^3}\max_{0\le i\le T^{(n)}}|X_i|$ converges in $\mathbb{P}^*$-probability to a positive limit. So the proof of (3.9) is reduced to showing the following estimate: for some $1<\gamma<2$, any $c>0$ and any $0<\varepsilon<1$, $$\mathbb{P}^*\Big\{\max_{x<\mathscr{L}_{m(n)}^{(\gamma)}:\ U(x)\ge\log(8/\varepsilon),\ 1\le|x|\le(\log n)^4} L_{T^{(n)}}(x)\ \ge\ \varepsilon n\Big\} \to 0, \qquad n\to\infty.$$

For $k\ge 1$, we have $$P_\omega\Big\{\max_{x<\mathscr{L}_{m(n)}^{(\gamma)}:\ U(x)\ge\log(8/\varepsilon),\ 1\le|x|\le(\log n)^4} L_{T^{(n)}}(x)\ \ge\ k\Big\} \le \sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}:\ U(x)\ge\log(8/\varepsilon),\ 1\le|x|\le(\log n)^4} P_\omega\{L_{T^{(n)}}(x)\ge k\}.$$

The law of $L_{T^{(n)}}(x)$ under $P_\omega$ is the law of $\sum_{i=1}^n \xi_i$, where $(\xi_i,\, i\ge 1)$ is an i.i.d. sequence with $P_\omega(\xi_1=0) = 1-a$ and $P_\omega(\xi_1\ge k) = a\, p^{k-1}$, $\forall k\ge 1$, where $$1-p := P_{x,\omega}\{T_\varnothing < T_x^+\} = \frac{e^{U(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}, \quad (3.10)$$ $$a := P_\omega\{T_x < T^+_\varnothing\} = \frac{\omega(\varnothing,\overleftarrow{\varnothing}\,)}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}. \quad (3.11)$$ [We have used (3.6) and (3.7).]

The tail estimate of $L_{T^{(n)}}(x)$ under $P_\omega$ is summarized in the following elementary lemma, whose proof is in the Appendix.

Lemma 3.4. Let $0<a<1$ and $0<p<1$. Let $(\xi_i,\, i\ge 1)$ be an i.i.d. sequence of random variables with $P(\xi_1=0) = 1-a$ and $P(\xi_1\ge k) = a\, p^{k-1}$, $\forall k\ge 1$. Let $0<\varepsilon<1$. If $1-p > \frac{8a}{\varepsilon}$, then $$P\Big\{\sum_{i=1}^n \xi_i \ \ge\ \lceil\varepsilon n\rceil\Big\} \ \le\ 6na\, e^{-\frac{(1-p)\varepsilon n}{8}}.$$
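Lemma 3.4 lends itself to a Monte Carlo sanity check; the following sketch (ours, not the paper's) draws the i.i.d. sequence $(\xi_i)$ with the prescribed mixture-of-geometric law, for one arbitrarily chosen admissible set of parameters, and compares the empirical tail with the stated bound:

```python
import math
import random

random.seed(2)
a, p, eps, n = 0.02, 0.4, 0.5, 200
assert 1 - p > 8 * a / eps   # hypothesis of Lemma 3.4

def xi():
    # xi = 0 with probability 1-a; conditionally on xi >= 1 it is
    # geometric with P(xi >= k) = p^{k-1}, so P(xi >= k) = a p^{k-1}.
    if random.random() >= a:
        return 0
    k = 1
    while random.random() < p:
        k += 1
    return k

trials = 5000
threshold = math.ceil(eps * n)
hits = sum(1 for _ in range(trials)
           if sum(xi() for _ in range(n)) >= threshold)
bound = 6 * n * a * math.exp(-(1 - p) * eps * n / 8)
print(hits / trials <= bound)  # True: empirical tail below the bound
```

With these parameters the bound is about $0.013$, while the event $\sum_i \xi_i \ge \lceil\varepsilon n\rceil$ is far rarer still.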


We continue with the proof of (3.9). If $U(x)\ge\log(8/\varepsilon)$, then $1-p > \frac{8a}{\varepsilon}$, so we are entitled to apply Lemma 3.4 to arrive at: $$P_\omega\Big\{\max_{x<\mathscr{L}_{m(n)}^{(\gamma)}:\ U(x)\ge\log(8/\varepsilon),\ 1\le|x|\le(\log n)^4} L_{T^{(n)}}(x)\ \ge\ \lceil\varepsilon n\rceil\Big\} \le 6n \sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}:\ 1\le|x|\le(\log n)^4} \frac{\omega(\varnothing,\overleftarrow{\varnothing}\,)}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\ \exp\Big(-\frac{\varepsilon n}{8}\, \frac{e^{U(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\Big).$$

We have $\omega(\varnothing,\overleftarrow{\varnothing}\,)\le 1$. It remains to check the following convergence in $\mathbf{P}^*$-probability (for $n\to\infty$): $$\sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}:\ 1\le|x|\le(\log n)^4} \frac{n}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\ \exp\Big(-\frac{\varepsilon}{8}\, \frac{n\, e^{U(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\Big) \ \to\ 0. \quad (3.12)$$

Recall the definition of $\mathscr{L}_{m(n)}^{(\gamma)}$: $x<\mathscr{L}_{m(n)}^{(\gamma)}$ implies $\frac{e^{V(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}} \ge \frac{(\log m(n))^{\gamma}}{m(n)}$, which is $\ge \frac{(\log n)^{\gamma-1}}{c\, n}$ for all sufficiently large $n$ (say $n\ge n_0$). Also, we recall that $e^{U(x)} = \frac{e^{V(x)}}{1+\Lambda(x)}$, with $\Lambda(x) := \sum_{y:\ \overleftarrow{y}=x} e^{-[V(y)-V(x)]}$ as in (3.1).

For the sum $\sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}}$ on the left-hand side of (3.12), we distinguish two possible situations depending on the value of $\Lambda(x)$. Let $0<\varrho<1$. Applying the elementary inequality $\lambda e^{-\lambda}\le c_2\, e^{-\lambda/2}$ (for $\lambda\ge 0$) to $\lambda := \frac{\varepsilon}{8}\, \frac{n\, e^{U(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}$, we see that for $n\ge n_0$, $$\sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}} \mathbf{1}_{\{1+\Lambda(x)\le(\frac{n\, e^{V(x)}}{\sum_{z} e^{V(z)}})^{\varrho}\}}\ \frac{n}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\ \exp\Big(-\frac{\varepsilon}{8}\, \frac{n\, e^{U(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\Big) \le c_2 \sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}} \mathbf{1}_{\{1+\Lambda(x)\le(\frac{n\, e^{V(x)}}{\sum_{z} e^{V(z)}})^{\varrho}\}}\ \frac{8}{\varepsilon}\, e^{-U(x)}\ \exp\Big(-\frac{\varepsilon}{16\,(1+\Lambda(x))}\, \frac{n\, e^{V(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\Big) \le \frac{8c_2}{\varepsilon} \sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}} e^{-U(x)}\ \exp\Big(-\frac{\varepsilon}{16}\,\Big(\frac{n\, e^{V(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\Big)^{1-\varrho}\Big).$$

Since $\frac{n\, e^{V(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}} \ge \frac{1}{c}\,(\log n)^{\gamma-1}$ (for $x<\mathscr{L}_{m(n)}^{(\gamma)}$ and $n\ge n_0$), this yields, for $n\ge n_0$, $$\sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}} \mathbf{1}_{\{1+\Lambda(x)\le(\frac{n\, e^{V(x)}}{\sum_{z} e^{V(z)}})^{\varrho}\}}\ \frac{n}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\ \exp\Big(-\frac{\varepsilon}{8}\, \frac{n\, e^{U(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\Big) \le \frac{8c_2}{\varepsilon}\ \exp\Big(-\frac{\varepsilon}{16\, c^{1-\varrho}}\,(\log n)^{(\gamma-1)(1-\varrho)}\Big) \sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}} e^{-U(x)},$$ which converges to $0$ in $\mathbf{P}^*$-probability (recalling that for any $\gamma\in\mathbb{R}$, $\frac{1}{\log n}\sum_{x\in\mathbb{T}:\ x<\mathscr{L}_n^{(\gamma)}} e^{-U(x)}$ converges in $\mathbf{P}^*$-probability to a finite limit; see [18]). So it remains to prove that there exists $\varrho\in(0,1)$ such that (removing the big exponential term, which is bounded by $1$) $$\sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}:\ 1\le|x|\le(\log n)^4} \mathbf{1}_{\{1+\Lambda(x)>(\frac{n\, e^{V(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}})^{\varrho}\}}\ \frac{n}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}} \ \to\ 0,$$ in $\mathbf{P}^*$-probability (for $n\to\infty$). Since $\lim_{r\to\infty}\inf_{|x|=r} V(x) = \infty$ $\mathbf{P}^*$-a.s. (see (2.3)), it suffices to prove the existence of $\varrho\in(0,1)$ and $\gamma\in(1,2)$ such that for all $\alpha>0$ and $n\to\infty$, $$\sum_{x<\mathscr{L}_{m(n)}^{(\gamma)}:\ 1\le|x|\le(\log n)^4} \mathbf{1}_{\{1+\Lambda(x)>(\frac{n\, e^{V(x)}}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}})^{\varrho}\}}\ \frac{n}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\ \mathbf{1}_{\{\underline{V}(x)\ge-\alpha\}} \ \to\ 0, \quad (3.13)$$ in $\mathbf{P}^*$-probability, where $\underline{V}(x) := \min_{z\in\,]\!]\varnothing,\,x]\!]} V(z)$.

To prove this, we first recall that $x<\mathscr{L}_{m(n)}^{(\gamma)}$ implies that for all $y\in\,]\!]\varnothing,x]\!]$, we have $\frac{\sum_{z\in\,]\!]\varnothing,\,y]\!]} e^{V(z)}}{e^{V(y)}} \le \frac{c\, n}{(\log n)^{\gamma-1}}$ (for $n\ge n_0$), which is bounded by $n$ for all sufficiently large $n$ (say $n\ge n_1$); a fortiori, $\overline{V}(y)-V(y)\le\log n$ (with $\overline{V}(y) := \max_{z\in\,]\!]\varnothing,\,y]\!]} V(z)$). By Fact 3.1, we obtain, for $n\ge n_0\vee n_1$, $$E\Big[\sum_{x<\mathscr{L}_{m(n)}^{(\gamma)},\ 1\le|x|\le(\log n)^4} \mathbf{1}_{\{1+\Lambda(x)>(\frac{n\, e^{V(x)}}{\sum_{z} e^{V(z)}})^{\varrho}\}}\ \frac{n}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\ \mathbf{1}_{\{\underline{V}(x)\ge-\alpha\}}\Big] \le \sum_{k=1}^{\lfloor(\log n)^4\rfloor} E\Big[\sum_{x:\,|x|=k} \mathbf{1}_{\{\overline{V}(y)-V(y)\le\log n,\ \forall y\in\,]\!]\varnothing,\,x]\!]\}}\ \mathbf{1}_{\{1+\Lambda(x)>(\frac{n\, e^{V(x)}}{\sum_{z} e^{V(z)}})^{\varrho}\}}\ \frac{n}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\ \mathbf{1}_{\{\underline{V}(x)\ge-\alpha\}}\Big] = \sum_{k=1}^{\lfloor(\log n)^4\rfloor} E\Big[e^{S_k}\, \mathbf{1}_{\{S^{\#}_k\le\log n\}}\ F\Big(\Big(\frac{n\, e^{S_k}}{\sum_{i=1}^k e^{S_i}}\Big)^{\varrho}\Big)\ \frac{n}{\sum_{i=1}^k e^{S_i}}\ \mathbf{1}_{\{\underline{S}_k\ge-\alpha\}}\Big],$$ where $F(\lambda) := \mathbf{P}(1+\sum_{x:\,|x|=1} e^{-V(x)} > \lambda)$ for $\lambda>0$, $\overline{S}_k := \max_{1\le i\le k} S_i$, $\underline{S}_k := \min_{1\le i\le k} S_i$, and $S^{\#}_k := \max_{1\le i\le k}(\overline{S}_i - S_i)$ for any $k\ge 1$.

An application of the Hölder inequality, using assumption (1.3), yields the existence of $\delta_1>0$ such that $$E\Big[\Big(\sum_{x:\,|x|=1} e^{-V(x)}\Big)^{1+\delta_1}\Big] < \infty. \quad (3.14)$$ As such, $c_3 := E[(1+\sum_{x:\,|x|=1} e^{-V(x)})^{1+\delta_1}] < \infty$, so $F(\lambda)\le c_3\,\lambda^{-1-\delta_1}$ for all $\lambda>0$. Consequently,

$$E\Big[\sum_{x<\mathscr{L}_{m(n)}^{(\gamma)},\ 1\le|x|\le(\log n)^4} \mathbf{1}_{\{1+\Lambda(x)>(\frac{n\, e^{V(x)}}{\sum_{z} e^{V(z)}})^{\varrho}\}}\ \frac{n}{\sum_{z\in\,]\!]\varnothing,\,x]\!]} e^{V(z)}}\ \mathbf{1}_{\{\underline{V}(x)\ge-\alpha\}}\Big] \le c_3 \sum_{k=1}^{\lfloor(\log n)^4\rfloor} E\Big[\Big(\frac{\sum_{i=1}^k e^{S_i}}{n\, e^{S_k}}\Big)^{\varrho(1+\delta_1)-1}\ \mathbf{1}_{\{S^{\#}_k\le\log n\}}\ \mathbf{1}_{\{\underline{S}_k\ge-\alpha\}}\Big]. \quad (3.15)$$

Lemma 3.5. Let $\delta$ be the constant in assumption (1.3). For all $\alpha>0$ and $\delta_2\in(0,\,\delta\wedge\frac{1}{16})$, $$\lim_{n\to\infty}\ \sum_{k=1}^{\lfloor(\log n)^4\rfloor} E\Big[\Big(\frac{\sum_{i=1}^k e^{S_i}}{n\, e^{S_k}}\Big)^{\delta_2}\ \mathbf{1}_{\{S^{\#}_k\le\log n\}}\ \mathbf{1}_{\{\underline{S}_k\ge-\alpha\}}\Big] = 0.$$

Since it is possible to choose $0<\varrho<1$ such that $\varrho(1+\delta_1)-1$ lies in $(0,\,\delta\wedge\frac{1}{16})$, we can apply Lemma 3.5 to see that (3.15) implies (3.13), and thus yields Proposition 2.3. It remains to prove Lemma 3.5.

Proof of Lemma 3.5. Since $\sum_{i=1}^k e^{S_i} \le k\, e^{\overline{S}_k}$, it suffices to check that $$\frac{(\log n)^2}{n^{\delta_2}} \sum_{k=1}^{\lfloor(\log n)^4\rfloor} E\Big[e^{\delta_2(\overline{S}_k-S_k)}\ \mathbf{1}_{\{S^{\#}_k\le\log n\}}\ \mathbf{1}_{\{\underline{S}_k\ge-\alpha\}}\Big] \to 0.$$


Recall the law of $S_1$ from (3.2). By assumption (1.3) and Hölder's inequality, we have $$E(e^{aS_1}) < \infty, \qquad \forall\, a\in(-\delta,\, 1+\delta),$$ where $\delta>0$ is the constant in (1.3). In particular, $E(e^{a|S_1|})<\infty$ for all $0\le a<\delta$. Since $0<\delta_2<\delta$, we have $E(e^{\delta_2(\overline{S}_k-S_k)}) \le e^{c_4 k}$ for some constant $c_4>0$ and all $k\ge 1$. So $\frac{(\log n)^2}{n^{\delta_2}}\sum_{k=1}^{\lfloor(\log n)^{1/2}\rfloor} E[e^{\delta_2(\overline{S}_k-S_k)}] \to 0$. It remains to prove that $$\frac{(\log n)^2}{n^{\delta_2}} \sum_{k=\lfloor(\log n)^{1/2}\rfloor}^{\lfloor(\log n)^4\rfloor} E\Big[e^{\delta_2(\overline{S}_k-S_k)}\ \mathbf{1}_{\{S^{\#}_k\le\log n\}}\ \mathbf{1}_{\{\underline{S}_k\ge-\alpha\}}\Big] \to 0.$$

We make a change of indices: $k = \lfloor(\log n)^{1/2}\rfloor+\ell$. Let $\widetilde{S}_\ell := S_{\ell+\lfloor(\log n)^{1/2}\rfloor} - S_{\lfloor(\log n)^{1/2}\rfloor}$, $\ell\ge 0$. Then $(\widetilde{S}_\ell,\, \ell\ge 0)$ is a random walk having the law of $(S_\ell,\, \ell\ge 0)$, and is independent of $(S_i,\, 1\le i\le\lfloor(\log n)^{1/2}\rfloor)$. For $\ell\ge 0$, $\overline{S}_{\ell+\lfloor(\log n)^{1/2}\rfloor} - S_{\ell+\lfloor(\log n)^{1/2}\rfloor} = \max(x,\ \max_{0\le j\le\ell}\widetilde{S}_j) - \widetilde{S}_\ell \ge \max_{0\le j\le\ell}\widetilde{S}_j - \widetilde{S}_\ell$, where $x := \overline{S}_{\lfloor(\log n)^{1/2}\rfloor} - S_{\lfloor(\log n)^{1/2}\rfloor}$. So for $k\ge\lfloor(\log n)^{1/2}\rfloor$ and $\ell := k-\lfloor(\log n)^{1/2}\rfloor$, on the event $\{\underline{S}_k\ge-\alpha\}$: either $\max_{0\le j\le\ell}\widetilde{S}_j \le x$, in which case $\overline{S}_k - S_k = x-\widetilde{S}_\ell = \overline{S}_{\lfloor(\log n)^{1/2}\rfloor} - S_k \le \overline{S}_{\lfloor(\log n)^{1/2}\rfloor}+\alpha$; or $\max_{0\le j\le\ell}\widetilde{S}_j > x$, in which case $\overline{S}_k - S_k = \max_{0\le j\le\ell}\widetilde{S}_j - \widetilde{S}_\ell$. It follows that $$E\Big[e^{\delta_2(\overline{S}_k-S_k)}\ \mathbf{1}_{\{S^{\#}_k\le\log n\}}\ \mathbf{1}_{\{\underline{S}_k\ge-\alpha\}}\Big] \le E\big(e^{\delta_2(\alpha+\overline{S}_{\lfloor(\log n)^{1/2}\rfloor})}\big) + \mathbf{P}\big(\underline{S}_{\lfloor(\log n)^{1/2}\rfloor}\ge-\alpha\big)\times E\Big[e^{\delta_2(\overline{S}_\ell-S_\ell)}\ \mathbf{1}_{\{S^{\#}_\ell\le\log n\}}\Big].$$

Since $E(e^{\delta_2 \overline{S}_{\lfloor(\log n)^{1/2}\rfloor}}) \le e^{c_4(\log n)^{1/2}}$, we have $\frac{(\log n)^2}{n^{\delta_2}}\sum_{k=\lfloor(\log n)^{1/2}\rfloor}^{\lfloor(\log n)^4\rfloor} E(e^{\delta_2(\alpha+\overline{S}_{\lfloor(\log n)^{1/2}\rfloor})}) \to 0$. On the other hand, $\mathbf{P}(\underline{S}_{\lfloor(\log n)^{1/2}\rfloor}\ge-\alpha) \le c_5\,(\log n)^{-1/4}$ for some constant $c_5>0$ and all $n\ge 2$ (see Kozlov [19]); it suffices to prove that $$\frac{(\log n)^{2-(1/4)}}{n^{\delta_2}} \sum_{\ell=0}^{\infty} E\Big[e^{\delta_2(\overline{S}_\ell-S_\ell)}\ \mathbf{1}_{\{S^{\#}_\ell\le\log n\}}\Big] \to 0.$$

This will be a straightforward consequence of the following estimate (applied to $\lambda := \log n$ and $b := \delta_2$; it is here that we use the condition $\delta_2<\frac{1}{16}$): for any $0<b<\delta$, $$\limsup_{\lambda\to\infty}\ E\Big[\sum_{\ell=0}^{\tau_\lambda-1} e^{-b[\lambda-(\overline{S}_\ell-S_\ell)]}\Big] < \infty, \quad (3.16)$$ where $\tau_\lambda := \inf\{i\ge 1:\ \overline{S}_i - S_i > \lambda\}$.

To prove (3.16), we define the (strictly) ascending ladder times $(H_i,\, i\ge 0)$: $H_0 := 0$ and, for any $i\ge 1$, $$H_i := \inf\Big\{\ell > H_{i-1}:\ S_\ell > \max_{0\le j\le H_{i-1}} S_j\Big\}.$$ Therefore, $$E\Big[\sum_{\ell=0}^{\tau_\lambda-1} e^{b(\overline{S}_\ell-S_\ell)}\Big] = \sum_{i=1}^{\infty} E\Big[\sum_{\ell=H_{i-1}}^{H_i-1} e^{b(S_{H_{i-1}}-S_\ell)}\ \mathbf{1}_{\{S^{\#}_\ell\le\lambda\}}\Big].$$

We apply the strong Markov property, first at time $H_{i-1}$, to see that $$E\Big[\sum_{\ell=H_{i-1}}^{H_i-1} e^{b(S_{H_{i-1}}-S_\ell)}\ \mathbf{1}_{\{S^{\#}_\ell\le\lambda\}}\Big] \le \mathbf{P}\big(S^{\#}_{H_{i-1}}\le\lambda\big)\ E\Big[\sum_{\ell=0}^{H_1-1} e^{-bS_\ell}\ \mathbf{1}_{\{S^{\#}_\ell\le\lambda\}}\Big],$$ and then successively at times $H_1$, $H_2$, $\cdots$, $H_{i-1}$ to see that $\mathbf{P}(S^{\#}_{H_{i-1}}\le\lambda) \le \mathbf{P}(S^{\#}_{H_1}\le\lambda)^{i-1}$. As such, $$E\Big[\sum_{\ell=0}^{\tau_\lambda-1} e^{b(\overline{S}_\ell-S_\ell)}\Big] \le \sum_{i=1}^{\infty} \mathbf{P}\big(S^{\#}_{H_1}\le\lambda\big)^{i-1}\ E\Big[\sum_{\ell=0}^{H_1-1} e^{-bS_\ell}\ \mathbf{1}_{\{S^{\#}_\ell\le\lambda\}}\Big].$$

We define $\sigma_{-\lambda} := \inf\{n\ge 0:\ S_n<-\lambda\}$. Then $1-\mathbf{P}(S^{\#}_{H_1}\le\lambda) = \mathbf{P}(\sigma_{-\lambda}<H_1) \ge \frac{c_6}{1+\lambda}$ for some constant $c_6>0$ and all sufficiently large $\lambda$, say $\lambda\ge\lambda_0$ (for the last elementary inequality, see, for example, Lemma A.1 in [17]). Thus we get that $$E\Big[\sum_{\ell=0}^{\tau_\lambda-1} e^{b(\overline{S}_\ell-S_\ell)}\Big] \le \frac{1+\lambda}{c_6}\ E\Big[\sum_{\ell=0}^{H_1-1} e^{-bS_\ell}\ \mathbf{1}_{\{S^{\#}_\ell\le\lambda\}}\Big].$$

Finally, for all small $b>0$, there exists some positive constant $c_7 = c_7(b)>0$ such that $$E\Big[\sum_{\ell=0}^{H_1-1} e^{-bS_\ell}\ \mathbf{1}_{\{S^{\#}_\ell\le\lambda\}}\Big] = E\Big[\sum_{\ell=0}^{H_1-1} e^{-bS_\ell}\ \mathbf{1}_{\{\sigma_{-\lambda}>\ell\}}\Big] \le \frac{c_7}{\lambda}\, e^{b\lambda},$$ by applying [2] (Lemma 6, formula (4.17)) to $(-S_i,\, i\ge 1)$. This yields (3.16), and completes the proof of Lemma 3.5 and Proposition 2.3. □

4 Proof of Theorem 2.1 and Corollary 2.2

Proof of Theorem 2.1. Recall that $\lim_{k\to\infty}\inf_{x:\,|x|=k} U(x) = \infty$, $\mathbf{P}^*$-a.s. (see (2.3)). In view of Proposition 2.3, we only need to prove that for any fixed $x\in\mathbb{T}$ and $\varepsilon>0$, when $n\to\infty$, $$P_\omega\Big\{\Big|\frac{L_n(x)}{n/\log n} - \frac{\sigma^2}{4D_\infty}\, e^{-U(x)}\Big| > \varepsilon\Big\} \to 0, \qquad \text{in } \mathbf{P}^*\text{-probability}.$$ According to Fact 3.3, this is equivalent to the convergence in $\mathbf{P}^*$-probability $P_\omega\{|\frac{L_n(x)}{L_n(\varnothing)} - e^{-[U(x)-U(\varnothing)]}| > \varepsilon\} \to 0$ (for $n\to\infty$), and thus to the following statement: for any $x\in\mathbb{T}$ and $m\to\infty$, $$P_\omega\Big\{\Big|\frac{L_{T^{(m)}}(x)}{m} - e^{-[U(x)-U(\varnothing)]}\Big| > \varepsilon\Big\} \to 0, \qquad \text{in } \mathbf{P}^*\text{-probability},$$ where $T^{(m)}$ is, as before, the $m$-th return time of the biased walk $(X_i)$ to the root $\varnothing$. This, however, holds trivially, as $L_{T^{(m)}}(x)-L_{T^{(m-1)}}(x)$, $m\ge 1$, are i.i.d. random variables under $P_\omega$ with $E_\omega[L_{T^{(1)}}(x)] = \frac{a}{1-p} = e^{-[U(x)-U(\varnothing)]}$ (see the notation at (3.10)–(3.11), as well as the discussion preceding the equations). Theorem 2.1 is proved. □

Proof of Corollary 2.2. Let $\varepsilon>0$ and $0<a<\frac{1}{2}$. Let

$$E_n(\varepsilon,a) := \Big\{\omega:\ \sup_{x\in\mathbb{T}}\ P_\omega\Big(\Big|\frac{L_n(x)}{n/\log n} - \frac{\sigma^2}{4D_\infty}\, e^{-U(x)}\Big| > \varepsilon\Big) < a\Big\}.$$

By Theorem 2.1, $\mathbf{P}^*(E_n(\varepsilon,a))\to 1$, $n\to\infty$.

Let $x_n\in\mathscr{A}_n$, and let $x_{\min}\in\mathscr{U}_{\min}$. For all $\omega\in E_n(\varepsilon,a)$, we have $$P_\omega\Big(\Big|\frac{L_n(y)}{n/\log n} - \frac{\sigma^2}{4D_\infty}\, e^{-U(y)}\Big| \le \varepsilon\Big) \ge 1-a,$$ for $y=x_n$ and for $y=x_{\min}$; hence, for all $\omega\in E_n(\varepsilon,a)$,

$$P_\omega\Big(\frac{L_n(x_n)}{n/\log n} \le \frac{\sigma^2}{4D_\infty}\, e^{-U(x_n)}+\varepsilon,\ \ \frac{L_n(x_{\min})}{n/\log n} \ge \frac{\sigma^2}{4D_\infty}\, e^{-U(x_{\min})}-\varepsilon\Big) \ge 1-2a.$$

By definition, $L_n(x_n) = \sup_{x\in\mathbb{T}} L_n(x) \ge L_n(x_{\min})$. Therefore, for all $\omega$, $$P_\omega\Big(\frac{\sigma^2}{4D_\infty}\, e^{-U(x_n)} \ \ge\ \frac{\sigma^2}{4D_\infty}\, e^{-U(x_{\min})} - 2\varepsilon\Big) \ \ge\ (1-2a)\, \mathbf{1}_{E_n(\varepsilon,a)}(\omega).$$

Taking expectation with respect to $\mathbf{P}^*$ on both sides gives that $$\mathbb{P}^*\Big(\frac{\sigma^2}{4D_\infty}\, e^{-U(x_n)} \ \ge\ \sup_{x\in\mathbb{T}}\frac{\sigma^2}{4D_\infty}\, e^{-U(x)} - 2\varepsilon\Big) \ \ge\ (1-2a)\, \mathbf{P}^*(E_n(\varepsilon,a)),$$

which converges to $1-2a$ when $n\to\infty$. Since $a>0$ can be as small as possible, this yields $\frac{\sigma^2}{4D_\infty}\, e^{-U(x_n)} \to \sup_{x\in\mathbb{T}}\frac{\sigma^2}{4D_\infty}\, e^{-U(x)}$ in probability under $\mathbb{P}^*$, i.e., $U(x_n)\to\inf_{x\in\mathbb{T}} U(x)$ in probability under $\mathbb{P}^*$.

Since $\mathscr{U}_{\min}$, the set of the minimizers of $U(\cdot)$, is $\mathbf{P}^*$-a.s. finite, we have $\inf_{x\in\mathscr{U}_{\min}} U(x) < \inf_{x\in\mathbb{T}\backslash\mathscr{U}_{\min}} U(x)$, $\mathbf{P}^*$-a.s., which yields $\mathbb{P}^*(x_n\in\mathscr{U}_{\min})\to 1$, $n\to\infty$. □

A Appendix: Proof of Lemma 3.4

Let $s\in[1,\frac{1}{p})$. Then $E(s^{\xi_1}) = 1-a+\frac{a(1-p)s}{1-ps}$. So $$P\Big\{\sum_{i=1}^n \xi_i\ge k\Big\} \le \frac{1}{s^k}\, E\Big[s^{\sum_{i=1}^n\xi_i}\ \mathbf{1}_{\{\sum_{i=1}^n\xi_i>0\}}\Big] = \frac{[E(s^{\xi_1})]^n - [P\{\xi_1=0\}]^n}{s^k}.$$

Observe that $$[E(s^{\xi_1})]^n - [P\{\xi_1=0\}]^n = \Big(1-a+\frac{a(1-p)s}{1-ps}\Big)^n - (1-a)^n \le n\, \frac{a(1-p)s}{1-ps}\, \Big(1-a+\frac{a(1-p)s}{1-ps}\Big)^{n-1},$$ where, in the last line, we used $x^n - y^n \le n\,(x-y)\, x^{n-1}$ (for $0\le y\le x$). Hence

$$P\Big\{\sum_{i=1}^n \xi_i\ge k\Big\} \le s^{-k}\, n\, \frac{a(1-p)s}{1-ps}\, \Big(1-a+\frac{a(1-p)s}{1-ps}\Big)^{n-1}. \quad (A.1)$$

First case: $\frac{1}{3}\le p<1$. We take $s := \frac{1+p}{2p}\in[1,\frac{1}{p})$, so that $\frac{(1-p)s}{1-ps} = \frac{1+p}{p}$; hence by (A.1), $$P\Big\{\sum_{i=1}^n \xi_i\ge k\Big\} \le \Big(\frac{1+p}{2p}\Big)^{-k}\, \frac{na(1+p)}{p}\, \Big(1+\frac{a}{p}\Big)^{n-1} = na\, \frac{1+p}{p}\, \Big(1+\frac{1-p}{2p}\Big)^{-k}\, \Big(1+\frac{a}{p}\Big)^{n-1} \le \frac{2na}{p}\, \Big(1+\frac{1-p}{2p}\Big)^{-k}\, \Big(1+\frac{a}{p}\Big)^{n}.$$

Since $(1+u)^{-1}\le e^{-u/2}$ (for $0\le u\le 1$) and $1+v\le e^{v}$ (for $v\ge 0$), applied to $u := \frac{1-p}{2p}\le 1$ and $v := \frac{a}{p}$, we obtain, in the case $\frac{1}{3}\le p<1$, $$P\Big\{\sum_{i=1}^n \xi_i\ge k\Big\} \le 6na\, \exp\Big(-\frac{(1-p)k}{4p} + \frac{na}{p}\Big).$$
