
www.imstat.org/aihp
2014, Vol. 50, No. 4, 1140–1160
DOI: 10.1214/13-AIHP547

© Association des Publications de l'Institut Henri Poincaré, 2014

Uniform mixing time for random walk on lamplighter graphs

Júlia Komjáthy^(a,1), Jason Miller^(b) and Yuval Peres^(c)

(a) Technische Universiteit Eindhoven, Postbus 513, 5600 MB Eindhoven, The Netherlands. E-mail: j.komjathy@tue.nl
(b) Department of Mathematics, Massachusetts Institute of Technology, Headquarters Office, Building 2, Room 236, 77 Massachusetts Avenue, Cambridge, MA 02139, USA. E-mail: jpmiller@mit.edu
(c) Microsoft Research, One Microsoft Way, WA 98052, USA. E-mail: peres@microsoft.com

Received 20 September 2011; revised 27 December 2012; accepted 3 February 2013

Abstract. Suppose that G is a finite, connected graph and X is a lazy random walk on G. The lamplighter chain X^⋄ associated with X is the random walk on the wreath product G^⋄ = Z_2 ≀ G, the graph whose vertices consist of pairs (f, x) where f is a labeling of the vertices of G by elements of Z_2 = {0,1} and x is a vertex in G. There is an edge between (f, x) and (g, y) in G^⋄ if and only if x is adjacent to y in G and f_z = g_z for all z ≠ x, y. In each step, X^⋄ moves from a configuration (f, x) by updating x to y using the transition rule of X and then sampling both f_x and f_y according to the uniform distribution on Z_2; f_z for z ≠ x, y remains unchanged. We give matching upper and lower bounds on the uniform mixing time of X^⋄ provided G satisfies mild hypotheses.

In particular, when G is the hypercube Z_2^d, we show that the uniform mixing time of X^⋄ is Θ(d 2^d). More generally, we show that when G is a torus Z_n^d for d ≥ 3, the uniform mixing time of X^⋄ is Θ(d n^{d+2}) uniformly in n and d. A critical ingredient for our proof is a concentration estimate for the local time of the random walk in a subset of vertices.

Résumé. Soit G un graphe connexe fini et X une marche aléatoire fainéante sur G. La chaîne de l'allumeur de réverbères X^⋄ associée à X est la marche aléatoire sur le groupe produit G^⋄ = Z_2 ≀ G, le graphe dont les sites sont des paires (f, x) où f est un label des sites de G par des éléments de Z_2 = {0,1} et x est un site de G. Il existe une arête entre (f, x) et (g, y) dans G^⋄ si et seulement si x est adjacent à y dans G et f_z = g_z pour tout z ≠ x, y. À chaque pas, X^⋄ se déplace d'une configuration (f, x) en mettant à jour x vers y par la règle de transition de X et ensuite en mettant à jour à la fois f_x et f_y selon la distribution uniforme sur Z_2 ; f_z pour z ≠ x, y restant inchangé. Nous prouvons des bornes supérieures et inférieures équivalentes sur le temps de mélange uniforme de X^⋄ sous des hypothèses faibles sur G. En particulier, quand G est l'hypercube Z_2^d, nous montrons que le temps de mélange uniforme de X^⋄ est Θ(d 2^d). Plus généralement, nous montrons que quand G est le tore Z_n^d avec d ≥ 3, le temps de mélange uniforme de X^⋄ est Θ(d n^{d+2}) uniformément en n et d. Un ingrédient crucial de notre preuve est une estimation de concentration pour le temps local d'une marche aléatoire dans un sous-ensemble de sites.

MSC: 60J10; 60D05; 37A25

Keywords: Random walk; Uncovered set; Lamplighter walk; Mixing time

1. Introduction

Suppose that G is a finite graph with vertices V(G) and edges E(G), respectively. Let X(G) = {f: V(G) → Z_2} be the set of markings of V(G) by elements of Z_2. The wreath product G^⋄ = Z_2 ≀ G is the graph whose vertices are pairs (f, x) where f ∈ X(G) and x ∈ V(G), see Fig. 1. There is an edge between (f, x) and (g, y) if and only if {x, y} ∈ E(G) and f_z = g_z for all z ∉ {x, y}. Suppose that P is a transition matrix for a Markov chain on G. The lamplighter walk X^⋄ (with respect to the transition matrix P) is the Markov chain on G^⋄ which moves from a configuration (f, x) by

(1) Supported by Grant KTIA-OTKA # CNK 77778, funded by the Hungarian National Development Agency (NFÜ) from a source provided by KTIA.


Fig. 1. A typical configuration of the lamplighter over a 5×5 planar grid. The colors indicate the state of the lamps and the dashed circle gives the position of the lamplighter.

1. picking y adjacent to x in G according to P, then

2. updating each of the values of f_x and f_y independently according to the uniform measure on Z_2.

The lamp states at all other vertices in G remain fixed. It is easy to see that if P is ergodic and reversible with stationary distribution π_P then the unique stationary distribution of X^⋄ is the product measure

$$\pi^\diamond(f, x) = \pi_P(x)\, 2^{-|G|},$$

and X^⋄ is itself reversible. In this article, we will be concerned with the special case that P is the transition matrix for the lazy random walk on G in order to avoid issues of periodicity. That is, P is given by

$$P(x,y) = \begin{cases} \dfrac{1}{2} & \text{if } x = y,\\[4pt] \dfrac{1}{2d(x)} & \text{if } \{x,y\} \in E(G), \end{cases} \qquad (1.1)$$

for x, y ∈ V(G), where d(x) is the degree of x.
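To make the dynamics concrete, here is a minimal Python sketch (ours, not from the paper) of one step of X^⋄ driven by the kernel (1.1); the adjacency-list representation and the function names are illustrative assumptions.

```python
import random

def lazy_step(x, adj):
    """One step of the lazy walk (1.1): stay put with probability 1/2,
    otherwise move to a uniformly chosen neighbour of x."""
    if random.random() < 0.5:
        return x
    return random.choice(adj[x])

def lamplighter_step(f, x, adj):
    """One step of the lamplighter chain X^⋄: move the lamplighter with the
    kernel P, then refresh the lamps at the old and new positions."""
    y = lazy_step(x, adj)
    f = dict(f)                      # copy the lamp configuration
    f[x] = random.randint(0, 1)      # refresh lamp at the old position
    f[y] = random.randint(0, 1)      # refresh lamp at the new position (y may equal x)
    return f, y
```

For example, with adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]} (a triangle) and f = {v: 0 for v in adj}, iterating lamplighter_step simulates the chain X^⋄ described above.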

1.1. Main results

Let P be the transition kernel for the lazy random walk on a finite, connected graph G with stationary distribution π. The ε-uniform mixing time of G is given by

$$t_u(G,\varepsilon) = \min\left\{ t \ge 0:\ \max_{x,y \in V(G)} \frac{|P^t(x,y) - \pi(y)|}{\pi(y)} \le \varepsilon \right\}. \qquad (1.2)$$
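For very small graphs, definition (1.2) can be evaluated directly by powering the transition matrix; the following sketch (illustrative only, not part of the paper) does exactly that, computing π as the left Perron eigenvector of P.

```python
import numpy as np

def uniform_mixing_time(P, eps=1 / (2 * np.e), t_max=100000):
    """Smallest t with max_{x,y} |P^t(x,y) - pi(y)| / pi(y) <= eps, as in (1.2).
    P must be the row-stochastic matrix of an ergodic chain."""
    n = P.shape[0]
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()                     # stationary distribution
    Pt = np.eye(n)
    for t in range(t_max + 1):
        if np.max(np.abs(Pt - pi) / pi) <= eps:
            return t
        Pt = Pt @ P
    raise RuntimeError("no mixing within t_max steps")
```

For the lazy walk on a cycle of length m, for instance, one can take P = 0.5*np.eye(m) + 0.25*np.roll(np.eye(m), 1, axis=1) + 0.25*np.roll(np.eye(m), -1, axis=1).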

Throughout, we let t_u(G) = t_u(G, (2e)^{-1}). The main result of this article is a general theorem which gives matching upper and lower bounds on t_u(G^⋄) provided G satisfies several mild hypotheses. One important special case of this result is the hypercube Z_2^d and, more generally, the tori Z_n^d for d ≥ 3. These examples are sufficiently important that we state them as our first theorem.

Theorem 1.1. There exist constants C_1, C_2 > 0 such that

$$C_1 \le \frac{t_u\big((Z_2^d)^\diamond\big)}{d\, 2^d} \le C_2 \quad \text{for all } d.$$

More generally,

$$C_1 \le \frac{t_u\big((Z_n^d)^\diamond\big)}{d\, n^{d+2}} \le C_2 \quad \text{for all } n \ge 2 \text{ and } d \ge 3.$$

Prior to this work, the best known bound [11] for t_u((Z_2^d)^⋄) was

$$C_1\, d\, 2^d \le t_u\big((Z_2^d)^\diamond\big) \le C_2 \log(d)\, d\, 2^d \quad \text{for } C_1, C_2 > 0.$$

In order to state our general result, we first need to review some basic terminology from the theory of Markov chains. The relaxation time of P is

$$t_{\mathrm{rel}}(G) = \frac{1}{1-\lambda_2}, \qquad (1.3)$$

where λ_2 is the second largest eigenvalue of P. The maximal hitting time of P is

$$t_{\mathrm{hit}}(G) = \max_{x,y \in V(G)} \mathbf{E}_x[\tau_y], \qquad (1.4)$$

where τ_y denotes the first time t that X(t) = y and E_x stands for the expectation under the law in which X(0) = x.

The Green's function G(x,y) for P is

$$G(x,y) = \mathbf{E}_x\left[\sum_{t=0}^{t_u(G)} \mathbf{1}_{\{X(t)=y\}}\right] = \sum_{t=0}^{t_u(G)} P^t(x,y), \qquad (1.5)$$

i.e. the expected amount of time X spends at y up to the uniform mixing time t_u given X(0) = x. For each 1 ≤ n ≤ |G|, we let

$$G(n) = \max_{\substack{S \subseteq V(G)\\ |S| = n}}\ \max_{z \in S} \sum_{y \in S} G(z,y). \qquad (1.6)$$

This is the maximal expected time X spends in a set S ⊆ V(G) of size n before the uniform mixing time. This quantity is related to the hitting time of subsets of V(G).
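Both quantities can be computed by brute force on small examples. The sketch below (ours, not from the paper) implements (1.5) and (1.6) directly; since the maximum in (1.6) runs over all subsets of size n, it is only feasible for graphs with a handful of vertices.

```python
import itertools
import numpy as np

def green_matrix(P, t_u):
    """Truncated Green's function (1.5): G[x, y] = sum_{t=0}^{t_u} P^t(x, y)."""
    G = np.zeros_like(P, dtype=float)
    Pt = np.eye(P.shape[0])
    for _ in range(t_u + 1):
        G += Pt
        Pt = Pt @ P
    return G

def max_green_over_sets(P, t_u, n):
    """Brute-force evaluation of (1.6): the maximum over sets S of size n of
    max_{z in S} sum_{y in S} G(z, y)."""
    G = green_matrix(P, t_u)
    vertices = range(P.shape[0])
    return max(G[np.ix_(S, S)].sum(axis=1).max()
               for S in (list(c) for c in itertools.combinations(vertices, n)))
```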

Finally, recall that G is said to be vertex transitive if for every x, y ∈ V(G) there exists an automorphism φ of G with φ(x) = y. Our main result requires the following hypothesis.

Assumption 1.2. G is a finite, connected, vertex transitive graph and X is a lazy random walk on G. There exist constants K_1, K_2, K_3 > 0 such that

(A) t_hit(G) ≤ K_1 |G|,
(B) 2^{K_2} (5/2)^{K_2} (max_{y: y∼x} G(x,y))^{K_2} ≤ exp(−t_u(G)/t_rel(G)),
(C) G(n⋆) ≤ K_3 (t_rel(G) + log|G|)/log n⋆,

where n⋆ = 4K_2 t_u(G)/min_{y: y∼x} G(x,y) for x, y ∈ V(G) adjacent.

Remark 1.3. Note that such constants always exist for a given fixed graph G. However, to get Theorem 1.5 to hold for a given family of graphs, the constants K_1, K_2, K_3 need to be uniform in the family of graphs under consideration. Thus, to get Theorem 1.1, the constants need to be uniform in n and d.

Remark 1.4. We need to write max_{y: y∼x} G(x,y) in (B) and min_{y: y∼x} G(x,y) in the definition of n⋆ since the graph G is only assumed to be vertex transitive and not edge-transitive, thus G(x,y) may vary along the neighbors of a vertex x.

These assumptions are rather technical: (B) and (C) express a form of local transience criterion. The gap left by Peres–Revelle in [11] is only there if t_rel(G) = o(t_mix(G)) and log|G| = o(t_mix(G)) both hold. Thus, the right hand side of criterion (B) tends to 0 as |G| → ∞ in the cases of our main interest, and the criterion says that the Green's function should behave in a similar manner. The criterion connects two error terms later in the proofs. Assumption (C) will be used to establish a binomial domination argument in the lemmas below for covering the last few left-out vertices, and it ensures that the extra time we allow for the binomial domination is still less than the total time in which we want to mix the chain.

The general theorem is:

Theorem 1.5. Let G be any graph satisfying Assumption 1.2. There exist constants C_1, C_2 depending only on K_1, K_2, K_3 such that

$$C_1 \le \frac{t_u(G^\diamond)}{|G|\,\big(t_{\mathrm{rel}}(G) + \log|G|\big)} \le C_2. \qquad (1.7)$$

The lower bound is proved in [11], Theorem 1.4. The proof of the upper bound is based on the observation from [11] that the uniform distance to stationarity can be related to E[2^{|U(t)|}], where U(t) is the set of vertices in G which have not been visited by X by time t. Indeed, suppose that f is any initial configuration of lamps, let F(t) be the state of the lamps at time t, and let g be an arbitrary lamp configuration. Let W be the set of vertices where f ≠ g. Let C(t) = V(G) \ U(t) be the set of vertices which have been visited by X by time t. With P_{(f,x)} the probability under which X^⋄(0) = (f, x), we have that

$$\mathbf{P}_{(f,x)}\big[F(t) = g \mid C(t)\big] = 2^{-|C(t)|}\, \mathbf{1}_{\{W \subseteq C(t)\}}.$$

Since the probability of the configuration g under the uniform measure is 2^{−|G|}, we therefore have

$$\frac{\mathbf{P}_{(f,x)}[F(t) = g]}{2^{-|G|}} = \mathbf{E}_{(f,x)}\big[2^{|U(t)|}\, \mathbf{1}_{\{W \subseteq C(t)\}}\big]. \qquad (1.8)$$

The right hand side is clearly bounded from above by E[2^{|U(t)|}] (the initial lamp configuration and position of the lamplighter no longer matter). On the other hand, we can bound (1.8) from below by

$$\mathbf{P}_{(f,x)}\big[W \subseteq C(t)\big] \ge \mathbf{P}\big[U(t) = \emptyset\big] \ge 1 - \big(\mathbf{E}\big[2^{|U(t)|}\big] - 1\big).$$

Consequently, to bound t_u(G^⋄, ε) it suffices to compute

$$\min\big\{ t \ge 0:\ \mathbf{E}\big[2^{|U(t)|}\big] \le 1 + \varepsilon \big\} \qquad (1.9)$$

since the amount of time it requires for X to subsequently uniformly mix after this time is negligible.
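The criterion (1.9) suggests a simple, if crude, numerical experiment (ours, not from the paper): estimate E[2^{|U(t)|}] by Monte Carlo and locate the first t at which the estimate drops below 1 + ε. The variance of 2^{|U(t)|} is enormous for small t, so this is only an illustration of the criterion.

```python
import random

def expected_two_pow_uncovered(adj, t, x0, n_samples=2000):
    """Crude Monte Carlo estimate of E[2^{|U(t)|}] for the lazy walk started at x0.
    U(t) is the set of vertices not visited by time t; see (1.9)."""
    total = 0.0
    for _ in range(n_samples):
        x = x0
        covered = {x}
        for _ in range(t):
            if random.random() >= 0.5:     # move with probability 1/2
                x = random.choice(adj[x])
            covered.add(x)
        total += 2.0 ** (len(adj) - len(covered))
    return total / n_samples
```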

In order to establish (1.9), we will need to perform a rather careful analysis of the process by which U(t) is decimated by X. The key idea is to break the process of coverage into two different regimes, depending on the size of U(t). The main ingredient to handle the case when U(t) is large is the following concentration estimate for the local time

$$L_S(t) = \sum_{s=0}^{t} \mathbf{1}_{\{X(s) \in S\}}$$

of X in S ⊆ V(G).

Proposition 1.6. Assume that the second largest eigenvalue of P satisfies λ_2 ≥ 1/2 and fix S ⊆ V(G). Then for C_0 = 1/50, we have that

$$\mathbf{P}_\pi\left[ L_S(t) \le \frac{t\,\pi(S)}{2} \right] \le \exp\left( - \frac{C_0\, t\, \pi(S)}{t_{\mathrm{rel}}(G)} \right). \qquad (1.10)$$

Proposition 1.6 is a corollary of [8], Theorem 1; we consider this sufficiently important that we state it here. By invoking Green's function estimates, we are then able to show that the local time is not concentrated on a small subset of S. Since the total time at our disposal is limited by C|G|(t_rel(G) + log|G|), these estimates give a small error probability only as long as the set which is not yet covered is large enough. Thus, we will handle the case when U(t) is small separately, via an estimate (Lemma 3.5) of the hitting time τ_S = min{t ≥ 0: X(t) ∈ S} of S.

1.2. Previous work

Suppose that μ, ν are probability measures on a finite measure space. Recall that the total variation distance between μ and ν is given by

$$\|\mu - \nu\|_{\mathrm{TV}} = \max_{A} \big|\mu(A) - \nu(A)\big| = \frac{1}{2} \sum_x \big|\mu(x) - \nu(x)\big|. \qquad (1.11)$$

The ε-total variation mixing time of P is

$$t_{\mathrm{mix}}(G,\varepsilon) = \min\Big\{ t \ge 0:\ \max_{x \in V(G)} \big\| P^t(x,\cdot) - \pi \big\|_{\mathrm{TV}} \le \varepsilon \Big\}. \qquad (1.12)$$

Let t_mix(G) = t_mix(G, (2e)^{-1}). It was proved by Peres and Revelle ([11], Theorem 1.4) that if G is a regular graph such that t_hit(G) ≤ K|G|, there exist constants C_1, C_2 depending only on K such that

$$C_1 |G|\big(t_{\mathrm{rel}}(G) + \log|G|\big) \le t_u\big(G^\diamond\big) \le C_2 |G|\big(t_{\mathrm{mix}}(G) + \log|G|\big).$$

These bounds fail to match in general. For example, for the hypercube Z_2^d, t_rel(Z_2^d) = Θ(d) ([9], Example 12.15), while t_mix(Z_2^d) = Θ(d log d) ([9], Theorem 18.3). Theorem 1.5 says that the lower bound from [11], Theorem 1.4, is sharp.

Before we proceed to the proof of Theorem 1.5, we will mention some other work on mixing times for lamplighter chains. The mixing time of G^⋄ was first studied by Häggström and Jonasson in [6] in the case of the complete graph K_n and the one-dimensional cycle Z_n. Their work implies a total variation cutoff with threshold (1/2)t_cov(K_n) in the former case and that there is no cutoff in the latter. Here, t_cov(G) for a graph G denotes the expected number of steps required by the lazy random walk to visit every site in G. The connection between t_mix(G^⋄) and t_cov(G) is explored further in [11], in addition to developing the relationship between the relaxation time of G^⋄ and t_hit(G), and between E[2^{|U(t)|}] and t_u(G^⋄). The results of [11] include a proof of total variation cutoff for Z_n^2 with threshold t_cov(Z_n^2). In [10], it is shown that t_mix((Z_n^d)^⋄) ∼ (1/2)t_cov(Z_n^d) when d ≥ 3 and more generally that t_mix(G_n^⋄) ∼ (1/2)t_cov(G_n) whenever (G_n) is a sequence of graphs satisfying some uniform local transience assumptions.

The mixing time of X^⋄ = (F, X) is typically dominated by the first coordinate F since the amount of time it takes for X to mix is negligible compared to that required by F. We can sample from F(t) by (a sketch of this two-step procedure is given below):

1. sampling the range C(t) of the lazy random walk run for time t, then
2. marking the vertices of C(t) by i.i.d. fair coin flips.
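A minimal sketch of this two-step sampling (ours; the graph representation and function name are illustrative assumptions) is:

```python
import random

def sample_lamp_marking(adj, t, x0, f0=None):
    """Sample F(t) by the two-step recipe above: (1) run the lazy walk for t steps
    and record its range C(t); (2) mark the vertices of C(t) with i.i.d. fair coins.
    Lamps outside C(t) keep their initial values f0 (all zeros by default)."""
    f0 = f0 or {v: 0 for v in adj}
    x = x0
    covered = {x}
    for _ in range(t):
        if random.random() >= 0.5:
            x = random.choice(adj[x])
        covered.add(x)
    return {v: (random.randint(0, 1) if v in covered else f0[v]) for v in adj}
```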

Determining the mixing time of X^⋄ is thus typically equivalent to computing the threshold t where the corresponding marking becomes indistinguishable from a uniform marking of V(G) by i.i.d. fair coin flips. This in turn can be viewed as a statistical test for the uniformity of the uncovered set U(t) of X: if U(t) exhibits any sort of non-trivial systematic geometric structure, then X^⋄(t) is not mixed. This connects this work to the literature on the geometric structure of the last visited points of the random walk [1–3,10].

1.3. Outline

The remainder of this article is structured as follows. In Section 2, we will give the proof of Theorem 1.1 by checking the hypotheses of Theorem 1.5. Next, in Section 3, we will collect a number of estimates regarding the amount of time X spends in, and requires to cover, sets of vertices in G of various sizes. Finally, in Section 4, we will complete the proof of Theorem 1.5.

2. Proof of Theorem 1.1

We are going to prove Theorem 1.1 by checking the hypotheses of Theorem 1.5. We begin by noting that by [9], Corollary 12.12, and [9], Section 12.3.1, we have that

$$t_{\mathrm{rel}}\big(Z_n^d\big) = \Theta\big(d n^2\big). \qquad (2.1)$$

By [4], Example 2, p. 2155, we know that t_u(Z_n) = O(n^2). Hence by [5], Theorem 2.10, we have that

$$t_u\big(Z_n^d\big) = O\big((d \log d)\, n^2\big). \qquad (2.2)$$

The key to checking parts (A)–(C) of Assumption 1.2 are the Green's function estimates which are stated in Proposition 2.2 (low dimension) and Proposition 2.6 (high dimension). In order to establish these we will need to prove several intermediate technical estimates. We begin by recording the following facts about the transition kernel P for the lazy random walk on a vertex transitive graph G. First, we have that

$$P^t(x,y) \le P^t(x,x) \quad \text{for all } x, y. \qquad (2.3)$$

To see this, we note that for t even, the Cauchy–Schwarz inequality and the semigroup property imply

$$P^t(x,y) = \sum_z P^{t/2}(x,z)\, P^{t/2}(z,y) \le \sqrt{P^t(x,x)\, P^t(y,y)} = P^t(x,x).$$

The inequality and final equality use the vertex transitivity of G, so that P(x,z) = P(z,x) and P^t(x,x) = P^t(y,y). To get the same result for t odd, one applies the same trick used in the proof of [9], Proposition 10.18(ii). Moreover, by [9], Proposition 10.18, we have that

$$P^t(x,x) \le P^s(x,x) \quad \text{for all } s \le t. \qquad (2.4)$$

The main ingredient in the proof of Proposition 2.2, our low dimension Green's function estimate, is the following bound for the return probability of the lazy random walk on Z^d.

Lemma 2.1. Let P(x, y; Z^d) denote the transition kernel for the lazy random walk on Z^d. For all t ≥ 1, we have that

$$P^t\big(x, x; Z^d\big) \le \sqrt{2}\left(\frac{4d}{\pi}\right)^{d/2} \frac{1}{t^{d/2}} + e^{-t/8}. \qquad (2.5)$$

Proof. To prove the lemma we first give an upper bound on the transition probabilities for a (non-lazy) simple random walk Y on Z^d. One can easily give an exact formula for the return probability of Y to the origin of Z^d in 2t steps by counting all of the possible paths from 0 back to 0 of length 2t (here and hereafter, P_NL(x, y; Z^d) denotes the transition kernel of Y):

$$P^{2t}_{\mathrm{NL}}\big(x,x;Z^d\big) = \sum_{n_1+\cdots+n_d = t} \frac{(2t)!}{(n_1!)^2 (n_2!)^2 \cdots (n_d!)^2} \cdot \frac{1}{(2d)^{2t}} = \frac{1}{(2d)^{2t}} \binom{2t}{t} \sum_{n_1+\cdots+n_d = t} \left( \frac{t!}{n_1! n_2! \cdots n_d!} \right)^2.$$

We can bound the sum above as follows, using the multinomial theorem in the second step:

$$P^{2t}_{\mathrm{NL}}\big(x,x;Z^d\big) \le \frac{1}{(2d)^{2t}} \binom{2t}{t} \left( \max_{n_1+\cdots+n_d = t} \frac{t!}{n_1! \cdots n_d!} \right) \sum_{n_1+\cdots+n_d = t} \frac{t!}{n_1! \cdots n_d!} \le \frac{1}{(2d)^{2t}} \binom{2t}{t} \frac{t!}{[(t/d)!]^d} \cdot d^t.$$

Applying Stirling's formula to each term above, we consequently arrive at

$$P^{2t}_{\mathrm{NL}}\big(x,x;Z^d\big) \le \frac{\sqrt{2}\, d^{d/2}}{(2\pi)^{d/2}\, t^{d/2}}. \qquad (2.6)$$

We are now going to deduce from (2.6) a bound on the return probability for the lazy random walk X on Z^d. We note that we can couple X and Y so that X is a random time change of Y: X(t) = Y(N_t), where N_t = ξ_1 + ⋯ + ξ_t and the (ξ_i) are i.i.d. with P[ξ_i = 0] = P[ξ_i = 1] = 1/2 and are independent of Y. Note that N_t is distributed as a binomial random variable with parameters t and 1/2. Thus,

$$P^t\big(x,x;Z^d\big) = \sum_{i=0}^{t/2} P^{2i}_{\mathrm{NL}}\big(x,x;Z^d\big)\, \mathbf{P}(N_t = 2i) \le \mathbf{P}(N_t < t/4) + \sqrt{2}\left(\frac{4d}{\pi}\right)^{d/2} \frac{1}{t^{d/2}},$$

where in the second term we used the monotonicity in t of the upper bound in (2.6). The first term can be bounded from above by using the Hoeffding inequality. This yields the term e^{−t/8} in (2.5).
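The bound (2.5) is easy to sanity-check numerically; the sketch below (ours, not from the paper) compares a Monte Carlo estimate of the lazy-walk return probability on Z^d with the right hand side of (2.5).

```python
import math
import random

def lazy_return_probability(d, t, n_samples=100000):
    """Monte Carlo estimate of P^t(0, 0; Z^d) for the lazy walk:
    stay put w.p. 1/2, otherwise move +-1 in a uniformly chosen coordinate."""
    hits = 0
    for _ in range(n_samples):
        pos = [0] * d
        for _ in range(t):
            if random.random() >= 0.5:
                i = random.randrange(d)
                pos[i] += random.choice((-1, 1))
        if all(c == 0 for c in pos):
            hits += 1
    return hits / n_samples

def bound_2_5(d, t):
    """Right hand side of (2.5)."""
    return math.sqrt(2) * (4 * d / math.pi) ** (d / 2) / t ** (d / 2) + math.exp(-t / 8)
```

For example, lazy_return_probability(3, 40) should come out well below bound_2_5(3, 40).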

Throughout the rest of this section, we let |x − y| denote the L^1 distance between x, y ∈ Z_n^d.

Proposition 2.2. Let G(x,y) denote the Green's function for the lazy random walk on Z_n^d. For each δ ∈ (0,1), there exist constants C_1, C_2, C_3 > 0 independent of n, d for d ≥ 3 such that

$$G(x,y) \le \frac{C_1}{d}\left(\frac{4d}{\pi}\right)^{d/2} |x-y|^{1-d/2} + C_2 (d\log d)\left(\frac{4d}{\pi}\right)^{d/2} n^{2-d(1-\delta/2)} + C_3\, d^2 (\log d)\, n^2\, e^{-n^{\delta}/2}$$

for all x, y ∈ Z_n^d distinct.

Proof. Fix δ ∈ (0,1). We first observe that the probability that there is a coordinate in which the random walk wraps around the torus within t < n^2 steps can be estimated, by using Hoeffding's inequality and a union bound, by

$$d \cdot \mathbf{P}\big[\big|Z(t)\big| > n\big] \le d\, e^{-n^2/(2t)},$$

where Z(t) is a one dimensional simple random walk on Z. Let k = |x − y|. Applying (2.3) and (2.4) in the second step, and estimating the probability of wrapping around in time n^{2−δ} in the third term, we see that

$$G(x,y) = \sum_{t=k}^{t_u} P^t(x,y) \le \sum_{t=k}^{n^{2-\delta}} P^t\big(x,x;Z^d\big) + t_u\, P^{n^{2-\delta}}\big(x,x;Z^d\big) + d\, t_u\, e^{-n^{\delta}/2}. \qquad (2.7)$$

We can estimate the first term on the right hand side above using Lemma 2.1, yielding the first term in the assertion of the proposition. Applying Lemma 2.1 again, we see that there exists a constant C_2 which does not depend on n, d such that the second term on the right side of (2.7) is bounded by

$$C_2 (d\log d)\left(\frac{4d}{\pi}\right)^{d/2} n^{2-d(1-\delta/2)}. \qquad (2.8)$$

Indeed, the factor (d log d)n^2 comes from (2.2) and the other factor comes from Lemma 2.1. Combining proves the proposition.

Proposition 2.2 is applicable when n is much larger than d. We now turn to state some preliminary estimates for Proposition 2.6. This proposition will give us an estimate for the Green's function that we will use when d is large.

Here comes the first ingredient:

Lemma 2.3. Suppose that X is a lazy random walk on Z_n^d for d ≥ 8 and that |X(0)| = k ≤ d/8. For each j ≥ 0, let τ_j be the first time t that |X(t)| = j. There exists a constant C_k > 0 depending only on k such that P[τ_0 < τ_{2k}] ≤ C_k d^{−k}. If, instead, |X(0)| = 1, then there exists a universal constant p > 0 such that P[τ_2 < τ_0] ≥ p.

Proof. It clearly suffices to prove the result when X is non-lazy. Assume that |X(t)| = j ∈ {k, ..., 2k}. The probability that |X| moves to j + 1 in its next step is at least 1 − 2k/d: indeed, the probability that the next coordinate to change is one of the coordinates of X(t) whose value is 0 is at least 1 − 2k/d. Similarly, the probability that |X| next moves to j − 1 is at most 2k/d. Consequently, the first result of the lemma follows from the Gambler's ruin problem (see, for example, [9], Section 17.3.1). The second assertion of the lemma follows from the same argument.

Lemma 2.4. Assume that k ∈ N and that d = 2k ∨ 3. Suppose that X is a lazy random walk on Z^d and that |X(0)| = 2k. Let τ_k be the first time t that |X(t)| = k. There exists p_k > 0 depending only on k such that P[τ_k = ∞] ≥ p_k > 0.

Proof. This is an easy corollary of the transience of the random walk on Z^d for d ≥ 3.

The next lemma is the corresponding version for the torus.

Lemma 2.5. Assume that k ∈ N and d ≥ 2k ∨ 3. Suppose that X is a lazy random walk on Z_n^d and that |X(0)| = 2k. Let τ_k be the first time t that |X(t)| = k. There exist p_k, c_k > 0 depending only on k such that P[τ_k > c_k d n^2] ≥ p_k > 0.

Proof. We first assume that d = 2k ∨ 3. It follows from Lemma 2.4 that there exists a constant p_{k,1} > 0 depending only on k such that P[τ_k > τ_{n/4}] ≥ p_{k,1}. The central limit theorem (see [7], Chapter 2) implies that there exist constants c_{k,1}, p_{k,2} > 0 such that the probability that a random walk on Z_n^d moves more than distance n/4 in time c_{k,1} n^2 is at most 1 − p_{k,2}. Combining implies the result for d = 2k ∨ 3.

Now we suppose that d ≥ 2k ∨ 3. Let (X_1(t), ..., X_d(t)) be the coordinates of X(t). By re-ordering if necessary, we may assume without loss of generality that X_{2k+1}(0) = ⋯ = X_d(0) = 0. Let Y(t) = (X_1(t), ..., X_{2k}(t)). Then Y is a random walk on Z_n^{2k}. Clearly, |Y(0)| = 2k because X(0) cannot have more than 2k non-zero coordinates. For each j, let τ_j^Y be the first time t that |Y(t)| = j. Then τ_k^Y ≤ τ_k. For each t, let N_t denote the number of steps that X takes in the time interval {1, ..., t} in which one of its first 2k coordinates is changed (in other words, N_t is the number of steps taken by Y). The previous paragraph implies that P[N_{τ_k^Y} ≥ c_{k,1} n^2] ≥ p_{k,3} > 0 for a constant p_{k,3} > 0 depending only on k. Since the probability that one of the first 2k coordinates is changed in any step is k/d (recall that X is lazy), the final result holds from a simple large deviations estimate.

Now we are ready to state and prove our estimate of G(x,y) when d is large.

Proposition 2.6. Suppose that d ≥ 8. Let G(x,y) denote the Green's function for the lazy random walk on Z_n^d. For each k ∈ N with k ≤ d/8, there exists a constant C_k > 0 which does not depend on n, d such that

$$G(x,y) \le \frac{C_k}{d^k} \quad \text{for all } x, y \in Z_n^d \text{ with } |x-y| \ge k.$$

Fig. 2. Assume that d ≥ 8 and that k ∈ N with d ≥ 8k. Let X be a lazy random walk on Z_n^d with X(0) = x and |x − y| = k. In Proposition 2.6, we show that G(x,y) ≤ C_k d^{−k} where C_k > 0 is a constant depending only on k. By translation, we may assume without loss of generality that |x| = k and y = 0. The idea of the proof is to first invoke Lemma 2.3 to show that X escapes to ∂B(0, 4k) with probability at least 1 − C_{k,1} d^{−k}. We then decompose the path of X into successive excursions {X(σ_{2k}^j), ..., X(τ_{4k}^j), ..., X(σ_{2k}^{j+1})} from ∂B(0, 2k) back to itself through ∂B(0, 4k). By Lemma 2.3, we know that each excursion hits 0 with probability bounded by C_{2k,1} d^{−2k}, and Lemma 2.5 implies that each excursion takes length at least c_k d n^2 with probability at least p_k > 0. Consequently, the result follows from a simple stochastic domination argument.

Proof. See Fig. 2 for an illustration of the proof. By translation, we may assume without loss of generality that y = 0; let k = |x|. Let τ_0 be the first time t that |X(t)| = 0. The strong Markov property implies that

$$G(x,y) \le \mathbf{P}\big[\tau_0 \le t_u\big]\, G(x,x).$$

Consequently, it suffices to show that for each k ∈ N, there exist constants C_k, C_0 > 0 such that

$$\mathbf{P}\big[\tau_0 \le t_u\big] \le \frac{C_k}{d^k} \qquad (2.9)$$

and

$$G(x,x) \le C_0. \qquad (2.10)$$

We will first prove (2.9); the proof of (2.10) will be similar.

Let N be a geometric random variable with success probability C_{2k} d^{−2k}, where C_{2k} is the constant from Lemma 2.3. Let (ξ_j) be a sequence of independent random variables with P[ξ_j = c_{2k} d n^2] = p_{2k} and P[ξ_j = 0] = 1 − p_{2k}, where c_{2k}, p_{2k} are the constants from Lemma 2.5, independent of N. We claim that τ_0 is stochastically dominated from below by

$$\sum_{j=1}^{N\zeta - 1} \xi_j,$$

where ζ is independent of N and (ξ_j) with P[ζ = 0] = C_k d^{−k} = 1 − P[ζ = 1]. Indeed, to see this we will decompose the trajectory of the walk into excursions between the boundaries of the L^1-balls B(0, 2k) and B(0, 4k) as follows: let σ_{2k}^0 = 0 and let τ_{4k}^0 be the first time t that |X(t)| = 4k. For each j ≥ 1, we inductively let σ_{2k}^j be the first time t after τ_{4k}^{j−1} that |X(t)| = 2k and let τ_{4k}^j be the first time t after σ_{2k}^j that |X(t)| = 4k. Let F_t be the filtration generated by X. Lemma 2.3 implies that the probability that X hits 0 in {σ_{2k}^j, ..., τ_{4k}^j} given F_{σ_{2k}^j} is at most C_{2k} d^{−2k} for each j ≥ 1, where C_{2k} > 0 only depends on 2k. This leads to the success probability in the definition of N above. The factor ζ is to take into account the probability that X reaches distance 2k before hitting 0. Moreover, Lemma 2.5 implies that P[σ_{2k}^j − τ_{4k}^{j−1} ≥ c_{2k} d n^2 | F_{τ_{4k}^{j−1}}] ≥ p_{2k}. This leads to the definition of the (ξ_j) above. This implies our claim.

To see (2.9) from our claim, an elementary calculation yields that

$$\mathbf{P}\big[N\zeta \le C_{2k}^{-1} d^{k}\big] \le \mathbf{P}\big[N \le C_{2k}^{-1} d^{k} \text{ or } \zeta = 0\big] \le 2 d^{-k} + C_k d^{-k}.$$

We also note that

$$\mathbf{P}\left[\sum_{j=1}^{m} \xi_j \le \frac{p_{2k}\, c_{2k}\, m\, n^2}{2}\right] \le e^{-cm}$$

for some constant c > 0. Combining these two observations along with a union bound implies (2.9) (with a larger constant C_k than the one in Lemma 2.3). To see (2.10), we apply a similar argument using the second assertion of Lemma 2.3.

Now that we have proved both Proposition 2.2 and Proposition 2.6, we are ready to check the criteria of Assumption 1.2.

2.1. Part (A)

By [9], Proposition 1.14, with τ_x^+ = min{t ≥ 1: X(t) = x}, we have that E_x[τ_x^+] = |Z_n^d|. Applying Proposition 2.6, we see that there exist constants d_0, r > 0 such that if d ≥ d_0, then

$$G(x,y) \le 1/2 \quad \text{for all } |x-y| \ge r. \qquad (2.11)$$

Proposition 2.2 implies that there exists n_0 such that if n ≥ n_0 and 3 ≤ d < d_0 then (2.11) likewise holds, possibly after increasing r (clearly, part (A) holds when d ≤ d_0 and n ≤ n_0; note also that we may assume without loss of generality that d_0, n_0 are large enough so that the diameter of the graph is at least 2r). Let τ_r be the first time t that |X(t) − X(0)| = r. We observe that there exists ρ_0 = ρ_0(r) > 0 such that

$$\mathbf{P}_x\big[\tau_r < \tau_x^+\big] \ge \rho_0 \qquad (2.12)$$

uniformly in n, d, since in each time step there are d directions in which X(t) can increase its distance from X(0). By combining (2.11) with (2.12), we see that P_x[τ_x^+ ≥ t_u(G)] ≥ ρ_1 > 0 uniformly in d ≥ d_0. Let F_t be the filtration generated by X. We consequently have that

$$\mathbf{E}_x\big[\tau_x^+\big] \ge \mathbf{E}_x\big[\tau_x^+ \mathbf{1}_{\{\tau_x^+ \ge t_u(G)\}}\big] = \mathbf{E}_x\big[\mathbf{E}_x\big[\tau_x^+ \mid \mathcal{F}_{t_u(G)}\big]\, \mathbf{1}_{\{\tau_x^+ \ge t_u(G)\}}\big] \ge \mathbf{E}_x\big[\mathbf{E}_{X(t_u(G))}[\tau_x]\, \mathbf{1}_{\{\tau_x^+ \ge t_u(G)\}}\big] \ge \rho_1\left(1 - \frac{1}{2e}\right)\mathbf{E}_\pi[\tau_x].$$

That is, with ρ_2 := ρ_1(1 − (2e)^{−1}) > 0, there exists a ρ_2 uniform in d ≥ d_0 such that |Z_n^d| = E_x[τ_x^+] ≥ ρ_2 E_π[τ_x]. Combining this fact with [9], Lemma 10.2, stating that t_hit(G) ≤ 2 max_w E_π[τ_w] holds for any irreducible Markov chain, we arrive at t_hit(Z_n^d) ≤ K_1 |Z_n^d| where K_1 = 2/ρ_2 is a uniform constant.

Remark 2.7. There is another proof of Part (A) which is based on eigenfunctions. In particular, we know that

$$t_{\mathrm{hit}}\big(Z_n^d\big) \le 2 \mathbf{E}_\pi[\tau_x] = 4 \sum_i \frac{1}{1 - \lambda_i},$$

where the λ_i are the eigenvalues of the simple random walk on Z_n^d distinct from 1; the extra factor of 2 in the final equality accounts for the laziness of the chain. The λ_i can be computed explicitly using [9], Lemma 12.11, and the form of the λ_i when d = 1, which is given in [9], Section 12.3. The assertion follows by performing the summation, which can be accomplished by approximating it by an appropriate integral.
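Under our reading of Remark 2.7 (so the explicit eigenvalue formula is our assumption, taken from the product structure of Z_n^d), the eigenvalues of the simple random walk are the averages (1/d)Σ_k cos(2πj_k/n) over (j_1, ..., j_d) ∈ {0, ..., n−1}^d, and the bound 4Σ_i 1/(1 − λ_i) can be summed directly for small n and d:

```python
import itertools
import math

def hitting_time_bound(n, d):
    """Evaluate 4 * sum_i 1 / (1 - lambda_i) over the non-trivial eigenvalues of the
    simple random walk on Z_n^d, i.e. the bound of Remark 2.7.  The eigenvalues are
    (1/d) * sum_k cos(2*pi*j_k/n) for (j_1, ..., j_d) in {0, ..., n-1}^d, (j) != 0."""
    total = 0.0
    for js in itertools.product(range(n), repeat=d):
        if any(js):                  # skip (0, ..., 0), whose eigenvalue is 1
            lam = sum(math.cos(2 * math.pi * j / n) for j in js) / d
            total += 1.0 / (1.0 - lam)
    return 4.0 * total
```

Comparing the output with n**d for a few small values of n and d ≥ 3 gives a quick numerical check of part (A).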

2.2. Part (B)

It follows from Proposition 2.6 that there exist constants C > 0 and d_0 ≥ 3 such that

$$G(x,y) \le \frac{C}{d} \quad \text{for } x, y \in Z_n^d \text{ with } |x-y| = 1, \qquad (2.13)$$

provided d ≥ d_0. Consequently, there exists K ∈ N which does not depend on d ≥ d_0 such that

$$2^K (5/2)^K G^K(x,y) = O\left( 2^K \left( \frac{5/2}{d} \right)^{K} \right). \qquad (2.14)$$

It follows by combining (2.1) and (2.2) that

$$\frac{t_u(Z_n^d)}{t_{\mathrm{rel}}(Z_n^d)} = O(\log d). \qquad (2.15)$$

Combining (2.14) with (2.15) shows that part (B) of Assumption 1.2 is satisfied provided we take K_2 = K large enough. Moreover, (2.13) clearly holds if 3 ≤ d < d_0 by Proposition 2.2.

2.3. Part (C)

We first note that it follows from (2.1), (2.2), Proposition 2.2, and Proposition 2.6 that there exists a constant C > 0 such that n⋆ for Z_n^d is at most C d^2 n^2 log d for all d ≥ 3. Thus, to check Assumption (C), we need to show that there exists K_3 > 0 such that

$$G(n^\star) \le K_3\, \frac{d n^2 + d \log n}{\log d + \log n}. \qquad (2.16)$$

We are going to prove the result by considering the regimes d ≤ √(log n) and d > √(log n) separately.

Case 1: d < √(log n). From (2.16) it is enough to show that G(n⋆) ≤ K d n^2 / log n. We can bound G(n⋆) in this case as follows. Let D = (d log d log n)^{1/((1/2)d − 1)}. By Proposition 2.2, we can bound from above the expected amount of time that X starting at 0 in Z_n^d spends in the L^1 ball of radius D by summing radially as follows:

$$\sum_{k=1}^{D} \sum_{y: |y| = k} G(0,y) \le \sum_{k=1}^{D} \frac{C_1}{d} \left( \frac{4d}{\pi} \right)^{d/2} k^{1 - d/2} \cdot 2d (2k)^{d-1} \le C_1 \left( \frac{16 d}{\pi} \right)^{d/2} \sum_{k=1}^{D} k^{d/2} \le \frac{C_2}{d} \left( \frac{16 d}{\pi} \right)^{d/2} D^{1 + d/2} \le C_3\, n\, (d \log d \log n)^{5}$$

for constants C_1, C_2, C_3 > 0, where we used that d^{d/2} ≤ n. We also note that 2d(2k)^{d−1} is the size of the L^∞ sphere of radius k. The exponent 5 comes from the inequality

$$\frac{(1/2)d + 1}{(1/2)d - 1} \le 5 \quad \text{for all } d \ge 3.$$

We can estimate G(n⋆) by dividing between the set of points which have distance at most D to 0 and those whose distance to 0 exceeds D, and then using the previous estimate on the first set and Proposition 2.2 on the latter:

$$G(n^\star) \le C_3\, n\, (d \log d \log n)^{5} + C_4 D^{1 - (1/2)d} n^\star \le C_3\, n\, (d \log d \log n)^{5} + C_4 \cdot \frac{C d^2 n^2 \log d}{d \log d \log n},$$

where C_4 > 0 is a constant and we recall that C > 0 is the constant from the definition of n⋆. This implies the desired result.

Case 2: d ≥ √(log n). In this case, we are going to employ Proposition 2.6 to bound G(n⋆). The number of points which have distance at most k to 0 is clearly at most 1 + (2d)^k. Consequently, by Proposition 2.6, we have that

$$G(n^\star) \le C_0 + \sum_{k=1}^{3} C_k d^{-k} (2d)^k + C_4 d^{-4} n^\star \le C_5 + C_6\, \frac{(\log d)\, n^2}{d^2}$$

for some constants C_5, C_6 > 0. Since d^2 ≥ log n, this is clearly dominated by the right hand side of (2.16) (with a large enough constant), which completes the proof in this case.

3. Coverage estimates

Now we turn to collect some preliminary lemmas for the proof of the general Theorem 1.5.

Throughout, we assume that G is a finite, connected, vertex transitive graph and X is a lazy random walk on G with transition matrix P and stationary measure π. For S ⊆ V(G), we let C_S(t) be the set of vertices in S visited by X by time t and let U_S(t) = S \ C_S(t) be the subset of S which X has not visited by time t. Recall the notation C(t) = C_{V(G)}(t) and U(t) = U_{V(G)}(t). As before, we will use P_x, E_x to denote the probability measure and expectation under which X(0) = x. Likewise, we let P_π, E_π correspond to the case that X is initialized at stationarity. The purpose of this section is to develop a number of estimates which will be useful for determining the amount of time required by X in order to cover subsets S of V(G). We consider two different regimes depending on the size of S. If S is large, we will estimate the amount of time it takes for X to visit t_u(G) distinct vertices in S. If S is small, we will estimate the amount of time it takes for X to visit 1/2 of the vertices in S.
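The following sketch (ours, purely illustrative) computes C_S(t), U_S(t) and L_S(t) along a simulated trajectory of the lazy walk; it may help to keep the notation of this section concrete.

```python
import random

def coverage_statistics(adj, S, t, x0):
    """Run the lazy walk for t steps from x0 and return C_S(t), U_S(t) and L_S(t).
    adj maps each vertex to the list of its neighbours."""
    S = set(S)
    x = x0
    covered = {x} & S
    local_time = int(x in S)
    for _ in range(t):
        if random.random() >= 0.5:
            x = random.choice(adj[x])
        if x in S:
            covered.add(x)
            local_time += 1
    return covered, S - covered, local_time
```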

3.1. Large sets

In this subsection, we will prove that the probability that X fails to visit t_u(G) distinct elements of a large set of vertices S ⊆ V(G) by time t⋆ = constant · t_u(G)/π(S) is exponentially small in t_u(G)/t_rel(G). The main result is:

Proposition 3.1. Assume X satisfies part (B) of Assumption 1.2 with constant K_2. Let S ⊆ V(G) consist of at least 2K_2 t_u(G)/min_{y: y∼x} G(x,y) elements, for x, y ∈ V(G) adjacent, and let

$$t^\star = \frac{2(K_2 + 2)\, t_u(G)}{\pi(S)}.$$

There exists a universal constant C > 0 such that for every x ∈ V(G), we have that

$$\mathbf{P}_x\big[\,\big|C_S(t^\star)\big| \le t_u(G)\big] \le \exp\left( - C\, \frac{t_u(G)}{t_{\mathrm{rel}}(G)} \right).$$

Recall that L_S(t) = Σ_{s=0}^{t} 1{X(s) ∈ S} is the amount of time that X spends in S up to time t. The proof of Proposition 3.1 consists of several steps. First we show that L_S(t) is large with high probability. This is done in Proposition 1.6, which we will shortly deduce from [8], Theorem 1, and which states that the probability that L_S(t) is less than 1/2 of its mean is exponentially small in t. Then, in order to show that X visits many vertices in S, we need to rule out the possibility of X concentrating most of its local time in a small subset of S. This is accomplished in Lemma 3.2. We now proceed to the proof of Proposition 1.6.

Proof of Proposition 1.6. We rewrite the event

$$\left\{ L_S(t) \le \frac{t \pi(S)}{2} \right\} = \left\{ \sum_{s=0}^{t} f(X_s) > t\left( 1 - \pi(S) + \frac{\pi(S)}{2} \right) \right\}, \qquad (3.1)$$
