Self-Intersection Times for Random Walk, and Random Walk in Random Scenery


HAL Id: hal-00009063

https://hal.archives-ouvertes.fr/hal-00009063

Preprint submitted on 27 Sep 2005

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Self-Intersection Times for Random Walk, and Random Walk in Random Scenery

Amine Asselah, Fabienne Castell

To cite this version:

Amine Asselah, Fabienne Castell. Self-Intersection Times for Random Walk, and Random Walk in Random Scenery. 2005. ⟨hal-00009063⟩


Self-Intersection Times for Random Walk, and Random Walk in Random Scenery

in dimensions d ≥ 5.

Amine Asselah & Fabienne Castell
C.M.I., Université de Provence,
39 Rue Joliot-Curie,
F-13453 Marseille cedex 13, France
asselah@cmi.univ-mrs.fr & castell@cmi.univ-mrs.fr

Abstract

Let {S_k, k ≥ 0} be a symmetric random walk on Z^d, and {η(x), x ∈ Z^d} an independent random field of centered i.i.d. variables with tail decay P(η(x) > t) ≈ exp(−t^α). We consider a Random Walk in Random Scenery, that is X_n = η(S_0) + ··· + η(S_n). We present asymptotics for the probability, over both randomness, that {X_n > n^β} for 1/2 < β < 1 and 1 < α. To obtain such asymptotics, we establish large deviations estimates for the self-intersection local times process Σ_x l_n²(x), where l_n(x) = 1I{S_0 = x} + ··· + 1I{S_n = x} is the local time of the walk.

Keywords and phrases: moderate deviations, self-intersection, local times, random walk, random scenery.

AMS 2000 subject classification numbers:

Running head: Random Walk in Random Scenery.

1 Introduction.

We study transport in divergence-free random velocity fields. For simplicity, we discretize both space and time and consider the simplest model of shear-flow velocity fields:

∀ (x, y) ∈ Z × Z^d, V(x, y) = η(y) e_x,

where e_x is the unit vector in the first coordinate of Z^{d+1}, and {η(y), y ∈ Z^d} are i.i.d. real random variables. Thus, space consists of the sites of the cubic lattice Z^{d+1}, and the direction of the shear flow is e_x. We wish to model a pollutant evolving by two mechanisms:

• a passive transport by the velocity field;

• collisions with the other fluid particles; this is modeled by random symmetric increments {(α_n, β_n) ∈ Z × Z^d, n ∈ N}, independent of the velocity field.


Thus, if R_n ∈ Z × Z^d is the pollutant's position at time n, then

R_{n+1} − R_n = V(R_n) + (α_{n+1}, β_{n+1}), and R_0 = (0, 0). (1)

Solving (1) by induction for R_n yields

R_n = ( Σ_{k=1}^n α_k + Σ_{k=0}^n η( Σ_{i=1}^k β_i ), Σ_{k=1}^n β_k ). (2)

The sum β 1 + · · · +β n is denoted by S n , and called the Random Walk (RW). The displacement along e x consists of two independent parts: a sum of i.i.d. random variables α 1 + · · · + α n , and a sum of dependent random variables η(S 0 ) + · · · + η(S n ), which we denote by X n and call the Random Walk in Random Scenery (RWRS). Writing it in terms of local times of the RW { S n , n ∈ N } , we get

X_n = Σ_{k=0}^n η(S_k) = Σ_{x∈Z^d} l_n(x) η(x), where l_n(x) = Σ_{k=0}^n 1I{S_k = x}. (3)

The process {X_n, n ∈ N} was studied at about the same time by Kesten & Spitzer [11], Borodin [4, 5], and Matheron & de Marsily [15]. The fact that in dimension 1, E[X_n²] ∼ n^{3/2}, made the model popular and led the way to examples of superdiffusive behaviour. However, the typical behaviour of X_n resembles that of a sum of n independent variables all the more as the dimension gets large.

Our goal is to estimate the probability that X_n be large. By probability, we mean averages with respect to both sources of randomness: P = P_0 ⊗ P_η, where P_0 is the law of the nearest-neighbour symmetric random walk {S_k, k ∈ N} on Z^d with S_0 = 0, and P_η is the law of the velocity field.

Now, when d ≥ 3, Kesten and Spitzer established in [11] that X_n/√n converges in law to a Gaussian variable. Thus, by large, we mean {X_n > n^β} with β > 1/2. We expect P(X_n > n^β) ≈ exp(−n^ζ I) with a constant rate I > 0, and we characterize in this work the exponent ζ. For this purpose, the only important feature of the η-variables is the exponent α in the tail decay:

lim_{t→∞} t^{−α} log P_η(η(x) > t) = −c, for a positive constant c. (4)

Let us now recall the classical estimates for P(Y_1 + ··· + Y_n > n^β), where β > 1/2 and the {Y_n, n ∈ N} are centered i.i.d. with tail decay P(Y_n > t) ≈ exp(−t^a), with a > 0.

There is a dichotomy between a "collective" and an "extreme" type of behaviour. In the former case, each variable contributes about the same amount, whereas in the latter case a single term exceeds the level n^β while the others remain small. Thus, it is well known that P(Y_1 + ··· + Y_n > n^β) ∼ exp(−n^ζ), with three regimes (since only the exponent ζ interests us, we have omitted all constants).

• When β < 1 and β(2 − a) < 1, a small collective contribution yields ζ = 2β − 1.

• When β ≥ 1 and a > 1, a large collective contribution yields ζ = (β − 1)a + 1.


[Figure: left panel, "Conjectured ζ-exponent" — the (α, β) plane is divided into regions I (ζ = 2β − 1), II (ζ = βα/(α+1)), IIIa (ζ = βd/(d+2)), IIIb (ζ = (d + 2α(β−1))/(d+2)) and IV (ζ = α(β−1)), delimited by curves including β = (α+1)/(α+2), β = (d+2)/(d+4), β = 1 + 1/α and β = d²/(d(α+1) − α(d−2)). Right panel, "Proved Results".]

Figure 1: ζ-exponent diagram

• When β > 1/(2 − a) and a < 1, an extreme contribution yields ζ = βa.

For the RWRS, one expects a rich interplay between the scenery and the random walk. To get some intuition about the relationship between ζ and α, β, we propose simple scenarios leading to the left diagram in Figure 1. Here also, we focus on the exponent, and constants are omitted.

• Region I. No constraint is put on the walk. When d ≥ 3, the range is of order n and the sites of the range are typically visited once. Thus, {X_n > n^β} ∼ {η_1 + ··· + η_n > n^β}. When β < 1, the latter sum performs a moderate deviation of order n^β. Since the η-variables satisfy Cramér's condition, we obtain P(X_n > n^β) ≥ exp(−n^{2β−1}).

• Regions II, IV. A few sites are visited often, so that X_n ∼ η(0) l_n(0). Now, using the tail behaviour of η(x), and the fact that in d ≥ 3, l_n(0) is almost an exponential variable, we obtain

P[X_n ≥ n^β] ≥ P[l_n(0) η(0) ≥ n^β] ∼ sup_{k≤n} P_0[l_n(0) = k] P_η[η(0) ≥ n^β/k] ∼ sup_{k≤n} exp( −k − (n^β/k)^α ).

Now, the minimum of k ↦ k + n^{βα}/k^α is reached for k = n^{βα/(α+1)}. Since we also impose k ≤ n, two different exponents prevail according to the value of β:

(II) β < (α + 1)/α, and ζ = βα/(α+1). The RW spends a time of order n^{βα/(α+1)} on favorite sites.


(IV) β ≥ (α + 1)/α, and ζ = α(β − 1). The RW spends a time of the order of n on one favorite site.
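The minimizer quoted in cases (II) and (IV) follows from one-variable calculus; as a sketch:

```latex
f(k) = k + \frac{n^{\beta\alpha}}{k^{\alpha}}, \qquad
f'(k) = 1 - \alpha\,\frac{n^{\beta\alpha}}{k^{\alpha+1}} = 0
\iff k = \alpha^{1/(\alpha+1)}\, n^{\beta\alpha/(\alpha+1)},
```

and at this k both terms of f are of order n^{βα/(α+1)}; the constraint k ≤ n is what separates case (II) from case (IV).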

• Regions III.a, III.b. The random walk is localized for a time T in a ball B_r of radius r, with r² ≪ T: this costs of the order of exp(−T/r²). During this period, each site of B_r is visited about T/r^d times, and we further assume that r^d ≪ T. Thus

P[X_n ≥ n^β] ≥ exp(−T/r²) P_η[ r^{−d/2} Σ_{x∈B_r} η(x) ≥ n^β r^{d/2}/T ]. (5)

Two different exponents prevail according to β:

(III.a) β ≤ 1. The condition 1 ≪ n^β r^{d/2}/T ≪ r^{d/2} means that the sum of η's performs a moderate (up to large) deviation, and this costs of the order of exp(−n^{2β} r^d/T²). When the two costs are equalized, we obtain that the walk is localized for a time T = n^β on a ball of radius r = n^{β/(d+2)}, and that ζ = dβ/(d+2).

(III.b) β > 1. Here T = n and we deal with a very large deviation for a sum of i.i.d. variables. This has a cost of order exp(−n^{α(β−1)} r^d). Choosing r so that n/r² = n^{α(β−1)} r^d, we obtain ζ = (d + 2α(β−1))/(d+2). The condition r ≫ 1 is equivalent to β < 1 + 1/α. The walk is localized all the time on a ball of radius r satisfying r^{d+2} = n^{1−α(β−1)}.
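The choice of r in Region III.b comes from equalizing the two exponential costs; sketched:

```latex
\frac{n}{r^{2}} = n^{\alpha(\beta-1)}\, r^{d}
\iff r^{d+2} = n^{\,1-\alpha(\beta-1)}
\;\Longrightarrow\;
\frac{n}{r^{2}} = n^{\,1-\frac{2(1-\alpha(\beta-1))}{d+2}}
= n^{\frac{d+2\alpha(\beta-1)}{d+2}},
```

recovering ζ = (d + 2α(β−1))/(d+2), with r ≫ 1 exactly when α(β−1) < 1.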

The following regions have already been studied.

• α = + ∞ (bounded scenery) and β = 1 in [1] (actually Brownian motion is considered there instead of RW);

• α = 2 (Gaussian scenery) and β ∈ [1, 1 + 1/α] in [6, 7];

• Region III.b (α > d/2, 1 ≤ β < 1 + 1/α) in [10];

• β = 1 and α < d/2 in [2].

In the first three cases, the precise decay rate is obtained (i.e. both the ζ-exponent and the rate I). In the case β = 1 and α < d/2, distinct lower and upper bounds with the same exponent are given in [2]. Outside the diagram of Figure 1, in the region 0 < α < 1 and β < (1+α)/2, in d ≥ 3, precise estimates are established in [9].

This paper is devoted to regions I and II. Henceforth, we consider d ≥ 5, unless explicitly mentioned.

Proposition 1.1 Upper Bounds for the RWRS.

1. Region I. We assume (i) β ≤ min( (α+1)/(α+2), (d/2+1)/(d/2+2) ) and (ii) β < (d²−2)/(d²+2d−4). There exists an explicit y_0 such that for y > y_0, there exists a constant c̄_1 > 0, and

P(X_n ≥ n^β y) ≤ exp(−c̄_1 n^{2β−1}). (6)


2. Region II. Let α < d/2, β > max{ (α+1)/(α+2), d²/(d(α+1) − α(d−2)) }. For y > 0, there exists a constant c̄_2 > 0 such that

P(X_n ≥ n^β y) ≤ exp(−c̄_2 n^{βα/(α+1)}). (7)

Remark 1.2 In our context, (d²−2)/(d²+2d−4) is smaller than (d/2+1)/(d/2+2), but we insisted on keeping the latter exponent in the definition of Region I, since the former is a technical artifact.

We indicate below lower bounds for P(X_n ≥ n^β y), which prove that we have caught the correct rates of logarithmic decay of P(X_n ≥ n^β y). These lower bounds are given under an additional symmetry assumption on the scenery, which is not crucial but simplifies the proofs. Hence, we say that a real random variable is bell-shaped if its law has a density with respect to Lebesgue measure which is even and decreasing on R_+.

Proposition 1.3 Lower Bounds for the RWRS.

Assume d ≥ 3, and that the random variables {η(x), x ∈ Z^d} are bell-shaped.

1. Region I. Let 1 ≥ β > 1/2. For all y > 0, there exists a constant c_1 > 0 such that

P(X_n ≥ n^β y) ≥ exp(−c_1 n^{2β−1}). (8)

2. Region II. Let β ≤ 1 + 1/α. For all y > 0, there exists a constant c_2 > 0 such that

P(X_n ≥ n^β y) ≥ exp(−c_2 n^{βα/(α+1)}). (9)

The right diagram in Figure 1 summarizes the logarithmic decay rate of P[X_n ≥ n^β y]. The hatched areas are explored territories (bold lines excluded), and the horizontal stripes correspond to Propositions 1.1 and 1.3.

In the process of establishing Proposition 1.1, one faces the problem of evaluating the chances that the random walk visits the same sites often. More precisely, a crucial quantity is the self-intersection local times process (SILT):

Σ²_n = Σ_{x∈Z^d} l²_n(x) = n + 1 + 2 Σ_{0≤k<k′≤n} 1I{S_k = S_{k′}}. (10)

It is expected that Σ²_n would show up in the study of the RWRS. Indeed, Σ²_n is the variance of X_n when averaged over P_η. If we assume for a moment that the η-variables are standard Gaussian, then conditionally on the random walk, X_n is a Gaussian variable with variance Σ²_n, so that

P_η( Σ_{x∈Z^d} η(x) l_n(x) > n^β ) ≤ exp( −n^{2β} / (2 Σ_{x∈Z^d} l_n²(x)) ). (11)

It is well known that (11) holds for any tail behaviour (4) with α ≥ 2. Now, if we average with respect to the random walk law, then for any γ > 0,

P(X_n > n^β) ≤ E_0[ exp( −n^{2β} / (2 Σ_{x∈Z^d} l²_n(x)) ) ] ≤ exp(−n^{2β−γ}/2) + P_0( Σ_{x∈Z^d} l²_n(x) > n^γ ). (12)

Hence, at least for large α, we have to evaluate the logarithmic decay of quantities such as P_0[ Σ_{x∈Z^d} l²_n(x) > n^γ ]. Note first that for d ≥ 3, as n → ∞,

E_0[ Σ_{x∈Z^d} l_n²(x) ] ≃ n(2G_d(0) − 1), (13)

where G_d is the Green kernel, G_d(x) := E_0[l_∞(x)].

Therefore, we have to take γ ≥ 1 to be in a large deviations scaling. For large deviations of the SILT in d = 1, we refer the reader to Mansmann [14] and Chen & Li [8], while in d = 2 this problem is treated in Bass & Chen [3].
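Identity (10), Σ_x l_n²(x) = n + 1 + 2 #{0 ≤ k < k′ ≤ n : S_k = S_{k′}}, is a deterministic fact about any path, since each site with local time l contributes l(l−1)/2 ordered pairs. A quick sketch check (the 1-d walk is for illustration only):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n = 500
S = np.concatenate([[0], np.cumsum(rng.choice([-1, 1], size=n))])  # S_0..S_n on Z

l_n = Counter(S.tolist())                    # local times l_n(x)
silt = sum(l * l for l in l_n.values())      # Sigma_n^2 = sum_x l_n(x)^2

# pair count: #{0 <= k < k' <= n : S_k = S_{k'}} = sum_x C(l_n(x), 2)
pairs = sum(l * (l - 1) // 2 for l in l_n.values())
assert silt == n + 1 + 2 * pairs             # identity (10), path by path
```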

We first present large deviations estimates for the SILT.

Proposition 1.4 Assume d ≥ 5. For y > 1 + 2 Σ_{x∈Z^d} G_d(x)², there are positive constants c, c̄ such that

exp(−c̄ √n) ≥ P_0[ Σ_{x∈Z^d} l²_n(x) ≥ ny ] ≥ exp(−c √n). (14)

Proposition 1.4 is a corollary of the next result, where we prove that the main contribution in the estimates comes from the region where the local time is of order √n.

Proposition 1.5 1. For ǫ > 0 and y > 1 + 2 Σ_{x∈Z^d} G_d(x)²,

lim sup_{n→∞} (1/√n) log P_0[ Σ_{x: l_n(x) ≤ n^{1/2−ǫ}} l²_n(x) ≥ ny ] = −∞. (15)

2. For y > 0 and ǫ > 0, there exists a constant c̃ > 0 such that

lim sup_{n→∞} (1/√n) log P_0[ Σ_{x: l_n(x) > n^{1/2−ǫ}} l²_n(x) ≥ ny ] ≤ −c̃. (16)

Let us give some heuristics on the proof of Proposition 1.5. First of all, we decompose Σ²_n using the level sets of the local time. Note that it is not useful to consider {x : l_n(x) ≫ √n}, since l_n(x) is bounded by an exponential variable. Now, for a subdivision {b_i}_{i∈N} of [0, 1/2], let D_{b_i} = {x ∈ Z^d : n^{b_i} ≤ l_n(x) < n^{b_{i+1}}}. Denoting by |Λ| the number of sites in Λ ⊂ Z^d, we then have

Σ²_n = Σ_i Σ_{x∈D_{b_i}} l²_n(x) ≤ Σ_i n^{2b_{i+1}} |D_{b_i}|.

Hence, choosing (y_{b_i})_{i∈N} such that Σ_i y_{b_i} ≤ y,

P_0[Σ²_n ≥ ny] ≤ Σ_i P_0[ Σ_{x∈D_{b_i}} l²_n(x) ≥ n y_{b_i} ] ≤ Σ_i P_0[ |D_{b_i}| ≥ n^{1−2b_{i+1}} y_{b_i} ].

A first estimate of the right-hand term is given by the following lemma, which is Lemma 1.2 of [2].


Lemma 1.6 Assume d ≥ 3. There is a constant κ_d > 0 such that for any Λ ⊂ Z^d and any t > 0,

P[ l_∞(Λ) > t ] ≤ exp( −κ_d t / |Λ|^{2/d} ), where l_∞(Λ) = Σ_{x∈Λ} l_∞(x).

Hence, if we drop the index i and set b = b_{i+1} ≈ b_i, then for L = n^{1−2b} y_b we have

P_0[ |D_b| ≥ L ] ≤ Σ_{Λ⊂]−n;n[^d, |Λ|=L} P_0[ D_b = Λ, l_n(Λ) ≥ n^b L ] ≤ (2n)^{dL} exp(−κ_d y_b n^ζ), with ζ = b + (1 − 2/d)(1 − 2b). (17)

Since ζ > 1/2 when b < 1/2 and d > 4, this estimate would suffice if the combinatorial factor (2n)^{dL} were negligible. This case corresponds to "large" b. For "small" b, we need to get rid of the combinatorial term. We propose a reduction to intersection local times of two independent random walks. Assume for a moment that in place of Σ_{x∈D_b} l_n²(x), we were to deal with Σ_{x∈D_b} l_n(x) l̃_n(x), where (l̃_n(x))_{x∈Z^d} is an independent copy of (l_n(x))_{x∈Z^d}. Then, using Lemma 1.6, we obtain

P_0 ⊗ P̃_0[ Σ_{x∈D_b} l_n(x) l̃_n(x) ≥ n y_b ] ≤ P_0 ⊗ P̃_0[ l̃_n(D_b) ≥ n^{1−b} y_b ] ≤ E_0[ exp( −κ_d n^{1−b} y_b / |D_b|^{2/d} ) ].

Now, the simplest upper bound on the volume of D_b is n^b |D_b| ≤ Σ_{x∈D_b} l_n(x) ≤ n, so that

P_0[ Σ_{x∈D_b} l_n(x) l̃_n(x) ≥ n y_b ] ≤ exp( −κ_d y_b n^{1−b−(2/d)(1−b)} ).

In this crude way, we have obtained a weaker bound than (17), but without the combinatorial term. This can be improved as we improve on the upper bound for the volume of D_b.

The paper is organized as follows. In Section 2, we prove the lower bound in Proposition 1.4, and present an upper bound rougher than (14). Indeed, the first step in establishing (14) is to reduce the SILT to intersection local times of two independent walks. However, this reduction only yields

P_0[ Σ_{x∈Z^d} l_n²(x) ≥ ny ] ≤ exp( −c̄ √n/√(log n) ). (18)

For pedagogical reasons, we first prove (18), whereas its refinements are proved in Section 4. We gather the technical Lemmas in Section 3. The results of Section 3 are applied to the problem of large deviations for the SILT in Section 4, to obtain Proposition 1.5. In Section 5, we treat the problem of large and moderate deviations estimates for the RWRS, and prove Proposition 1.1. Finally, the corresponding lower bounds (Proposition 1.3) are shown in Section 6.


2 Rough estimates for SILT

Lower Bound. For k ∈ N, let T_0(k) be the k-th return time to 0:

T_0(0) := 0, T_0(k) := inf{ n > T_0(k−1) : S_n = 0 }.

For y > 0,

P[ Σ_{x∈Z^d} l_n²(x) ≥ ny ] ≥ P[ l_n(0) ≥ ⌊√(ny)⌋ + 1 ] = P[ T_0(⌊√(ny)⌋) ≤ n ]
≥ P[ ∀k ∈ {1, ···, ⌊√(ny)⌋}, T_0(k) − T_0(k−1) ≤ n/⌊√(ny)⌋ ]
≥ P[ T_0(1) ≤ √(n/y) ]^{⌊√(ny)⌋} = ( P[T_0(1) < ∞] − P[ √(n/y) < T_0(1) < ∞ ] )^{⌊√(ny)⌋}.

This proves the lower bound, since lim_{n→∞} P[ √(n/y) < T_0(1) < ∞ ] = 0, and P(T_0(1) < ∞) < 1 for d ≥ 3.

Upper bound (18).

We write the proof of the upper bound for n = 2^N, since this simplifies notations. The trivial extension to general n is omitted. First, note that

Σ_{x∈Z^d} l²_n(x) = n + 1 + 2 Z^{(0)}, with Z^{(0)} = Σ_{0≤k<k′≤n} 1I{S_k = S_{k′}}.

The idea is to reduce Z^{(0)} to the intersection times of two independent random walks stemming from S_{n/2}, as in Le Gall [13]. Then, we use moment estimates for the intersection of independent random walks obtained in [12].

We divide Z^{(0)} into

Z^{(0)} = Z_1^{(1)} + Z_2^{(1)} + Σ_{0≤k<2^{N−1}<k′≤2^N} 1I{S_k = S_{k′}}, with

Z_1^{(1)} = Σ_{k,k′} 1I{0 ≤ k < k′ ≤ 2^{N−1}} 1I{S_k = S_{k′}}, and

Z_2^{(1)} = Σ_{k,k′} 1I{2^{N−1} ≤ k < k′ ≤ 2^N} 1I{S_k = S_{k′}}.

Thus, we can define two independent walks for times k ∈ {0, ..., 2^{N−1}}:

S_{k,1} = S_{2^{N−1}} − S_{2^{N−1}−k}, and S_{k,2} = S_{2^{N−1}} − S_{2^{N−1}+k}.


Finally, we obtain Z^{(0)} ≤ Z_1^{(1)} + Z_2^{(1)} + I_1^{(1)}, with, for i = 1, 2,

Z_i^{(1)} = Σ_{k,k′} 1I{0 ≤ k < k′ ≤ 2^{N−1}} 1I{S_{k,i} = S_{k′,i}}, and I_1^{(1)} = Σ_{x∈Z^d} l_{2^{N−1},1}(x) l_{2^{N−1},2}(x),

where for i = 1, 2, (l_{k,i}(x))_{k∈N} denotes the local times of the random walk (S_{k,i})_{k∈N}. Iterating this procedure, we get

Z^{(0)} ≤ Σ_{l=1}^{N−1} Σ_{k=1}^{2^{l−1}} I_k^{(l)}, (19)

where for each l ∈ {1, ···, N−1}, the random variables (I_k^{(l)}; 1 ≤ k ≤ 2^{l−1}) are i.i.d., and are distributed as Σ_{x∈Z^d} l_{2^{N−l}}(x) l̃_{2^{N−l}}(x), (l̃_n(x), x ∈ Z^d) being an independent copy of (l_n(x), x ∈ Z^d). Hence, to prove (18), it is enough to prove that for all y > Σ_{x∈Z^d} G_d(x)²,

P_0[ Z^{(0)} ≥ 2^N y ] ≤ exp( −c̄ √(2^N y/N) ). (20)

Now, for any positive reals {y_1, ..., y_{N−1}} summing up to ȳ ≤ y, we have

P_0( Z^{(0)} ≥ 2^N y ) ≤ Σ_{l=1}^{N−1} P_0[ Σ_{k=1}^{2^{l−1}} I_k^{(l)} ≥ 2^N y_l ].

The strategy is now the following:

• When l is small, we bound each term of {I_k^{(l)}, k = 1, ..., 2^{l−1}} by a sequence {I_k, k = 1, ..., 2^{l−1}} of i.i.d. random variables distributed as I_∞, where

I_∞ = Σ_{x∈Z^d} l_∞(x) l̃_∞(x),

(l̃_∞(x))_{x∈Z^d} being an independent copy of (l_∞(x))_{x∈Z^d}. The estimate P_0(I_∞ > t) ≤ exp(−κ_s √t) is proved in [12]. Thus, in each generation, we are dealing with a sum of independent variables with the right stretched exponential tails.

• When l is large, we use the trivial bound I_k^{(l)} ≤ 2^{2(N−l)}, and classical Cramér estimates.

First, we choose the {y_l}. Note that for d ≥ 5, m_1 := E_0[I_∞] = Σ_{x∈Z^d} G_d(x)² < ∞, so that we have to choose y_l such that

2^N y_l > 2^{l−1} m_1 ≥ 2^{l−1} E_0[I_k^{(l)}].

A convenient choice is the following: for l* = 9N/10,

• y_l = y/(2N) for l < l*;

• y_l = y/2^{N−l+1} for l ≥ l*.
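One can check directly that this choice is summable as required:

```latex
\sum_{l<l^{*}} \frac{y}{2N} \;+\; \sum_{l=l^{*}}^{N-1} \frac{y}{2^{\,N-l+1}}
\;\le\; \frac{l^{*}}{2N}\,y \;+\; y\sum_{j\ge 2} 2^{-j}
\;=\; \Big(\frac{l^{*}}{2N} + \frac12\Big) y \;<\; y,
```

since l* = 9N/10 < N.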


Thus, Σ_{l=1}^{N−1} y_l ≤ (l*/(2N) + 1/2) y < y, and we obtain P_0(Z^{(0)} > 2^N y) ≤ R_1 + R_2, with

R_1 = Σ_{l<l*} P_0[ Σ_{k=1}^{2^{l−1}} I_k^{(l)} ≥ 2^N y/(2N) ], and R_2 = Σ_{l≥l*} P_0[ Σ_{k=1}^{2^{l−1}} Ī_k^{(l)} ≥ 2^{l−1}(y − m_1) ],

where Ī_k^{(l)} = I_k^{(l)} − E[I_k^{(l)}].

Case l < l*. We take advantage of the small size of the l-th generation to compare I_k^{(l)} with I_∞ and use the bounds of [12]. Set z_N = 2^N/N and J_k = I_k^{(l)} 1I{I_k^{(l)} < z_N}, so that

P_0[ Σ_{k=1}^{2^{l−1}} I_k^{(l)} ≥ 2^N y_l ] ≤ 2^{l−1} P_0[ I_k^{(l)} ≥ z_N ] + P_0[ Σ_{k=1}^{2^{l−1}} J_k ≥ 2^N y_l ].

For any λ > 0,

P_0[ Σ_{k=1}^{2^{l−1}} J_k ≥ 2^N y_l ] ≤ e^{−λ 2^N y_l} E_0[ e^{λ J_k} ]^{2^{l−1}}.

Now, using [12],

E_0[ e^{λ J_k} ] = 1 + ∫_0^{z_N} λ e^{λu} P(J_k > u) du ≤ 1 + λ ∫_0^{z_N} e^{λu − κ_s √u} du.

We choose λ = κ_s/(2√z_N), so that κ_s √u ≥ 2λu for u ≤ z_N. For such a choice, there is c_0 > 0 such that E_0[e^{λ J_k}] ≤ 1 + c_0 λ, and

E_0[ e^{λ J_k} ]^{2^l} ≤ e^{λ c_0 2^l}, for λ = (κ_s/2) √(N/2^N).

Finally,

P_0[ Σ_{k=1}^{2^{l−1}} I_k^{(l)} ≥ 2^N y/(2N) ] ≤ 2^l exp( −κ_s √(2^N/N) ) + exp( −(κ_s y/(4N)) √(N 2^N) + c_0 (κ_s/2) 2^l √(N/2^N) ).

This provides the desired bound as long as 2^l ≪ 2^N/N, which is always the case for l < l*.

Case l ≥ l*. Note that for all l, I^{(l)} ≤ 2^{2(N−l)}. Hence, for large l, we use large deviations estimates for sums of i.i.d. "small" random variables. More precisely, using the Markov inequality, for all λ > 0,

P_0[ Σ_{k=1}^{2^{l−1}} Ī_k^{(l)} ≥ 2^{l−1}(y − m_1) ] ≤ exp( −λ 2^{l−1}(y − m_1) ) E_0[ exp(λ Ī^{(l)}) ]^{2^{l−1}}.

We choose λ ≤ 1/2^{2(N−l)} and use the fact that exp(x) ≤ 1 + x + 2x² for |x| ≤ 1, to obtain

E_0[ exp(λ Ī_k^{(l)}) ] ≤ 1 + 2λ² E[(Ī_k^{(l)})²] ≤ 1 + 2 m_2 λ²,

where m_2 = E_0[I_∞²] < ∞ by [12]. Thus,

R_2 ≤ Σ_{l≥l*} exp( −2^{l−1} λ ((y − m_1) − 2 m_2 λ) ). (21)

Thus, we need (y − m_1) > 2 m_2 λ and λ ≤ 1/2^{2(N−l)} for l ≥ l*. For l* = 9N/10, it is enough to choose λ = 2^{−N/5}.

3 Technical Lemmas.

This Section provides the key estimates for the probability that Σ_x l_n^p(x) be large. In a first reading of Lemmas 3.2 and 3.3, we suggest that the reader think of the case p = 2, γ = 1, ζ = 1/2, on which our results on the SILT rely. The cases p ≠ 2 are needed for the proofs of the moderate deviations for the RWRS, but involve no additional ideas.

We begin with a simple improvement of Lemma 1.6.

Lemma 3.1 Assume d ≥ 3. There exists a constant κ_d > 0 such that for any t > 0 and L ≥ 1,

P_0[ |{x : l_n(x) ≥ t}| ≥ L ] ≤ (2n)^{dL} exp( −κ_d t L^{1−2/d} ). (22)

Proof: The proof is a simple application of Lemma 1.6.

P( |{x : l_n(x) ≥ t}| ≥ L ) ≤ Σ_{Λ⊂]−n;n[^d; |Λ|=L} P( ∀x ∈ Λ, l_n(x) ≥ t )
≤ Σ_{Λ⊂]−n;n[^d; |Λ|=L} P( l_n(Λ) ≥ Lt ) ≤ (2n)^{dL} exp( −κ_d t L^{1−2/d} ). (23)

As a corollary of Lemma 3.1, we obtain the following estimates for the regions where the local times are large. We recall that for p > 1, we denote by p* := p/(p−1) the conjugate exponent.

Lemma 3.2 Assume d ≥ 3, and fix positive real numbers a, b, γ, ζ, p, y with ζ/(d/2) ≤ b < a, and define D = {x : n^b ≤ l_n(x) ≤ n^a}. We assume either of the following two conditions:

(i) p ≥ (d/2)*; ζ < γ/(p(2/d) + 1); a < (γ − ζ(d/2)*)/(p − (d/2)*);

(ii) 1 < p < (d/2)*; b > (γ − ζ(d/2)*)/(p − (d/2)*).

Then, for a constant c (depending on a, b, p, γ, ζ, y) and for n large enough,

P_0[ Σ_{x∈D} l_n^p(x) ≥ n^γ y ] ≤ exp( −c n^ζ ). (24)


Proof: The strategy is to slice the above sum according to the level sets of the local times. Thus, we decompose D into a finite number M of regions. For i = 0, ..., M, let

D_i = {x : n^{b_i} ≤ l_n(x) < n^{b_{i+1}}}, where b = b_0 < b_1 < ··· < b_M, b_M ≥ a. (25)

M and the sequence {b_i; 0 ≤ i ≤ M} will be chosen later. Then,

P_0[ Σ_{x∈D} l_n^p(x) ≥ n^γ y ] ≤ Σ_{i=0}^{M−1} P_0[ Σ_{x∈D_i} l_n^p(x) ≥ n^γ y/M ] (26)

≤ Σ_{i=0}^{M−1} P_0[ |D_i| ≥ n^{γ−p b_{i+1}} y/M ]. (27)

We now use Lemma 3.1 with t = n^{b_i} and L = n^{γ−p b_{i+1}} y/M to get

P_0[ Σ_{x∈D} l_n^p(x) ≥ n^γ y ] ≤ Σ_{i=0}^{M−1} n^{d n^{γ−p b_{i+1}} y/M} exp( −κ_d n^{b_i + (1−2/d)(γ − p b_{i+1})} (y/M)^{1−2/d} ). (28)

To conclude, it is now enough to check that we can find a finite sequence (b_i, 0 ≤ i ≤ M) such that b_0 = b, b_M > a, satisfying the constraints

γ − p b_{i+1} < b_i + (1 − 2/d)(γ − p b_{i+1}),  i.e.  b_{i+1} > γ/p − (d/(2p)) b_i   (C_2);
ζ ≤ b_i + (1 − 2/d)(γ − p b_{i+1}),  i.e.  b_{i+1} ≤ γ/p + (d/(p(d−2)))(b_i − ζ)   (C_1);
b_i < b_{i+1}   (C_0). (29)


Figure 2: Construction of (b i , 0 ≤ i ≤ M ) for D

Let D_0 be the line y = x, D_1 the line y = γ/p + (d/(p(d−2)))(x − ζ), and D_2 the line y = γ/p − (d/(2p))x.

Case p ≥ (d/2)*: In this case, the slope of D_1 is less than 1. Let a_0 (resp. a_2) be the abscissa of the intersection of D_1 with D_0 (resp. D_2):

a_0 = (γ − ζ(d/2)*)/(p − (d/2)*), a_2 = ζ/(d/2).

Then, the region of constraints is non-empty (see Figure 2) if and only if

a_2 < a_0 ⇔ ζ < γ/(1 + 2p/d).

In that case, it is always possible to construct a finite sequence (b_i)_{0≤i≤M} satisfying the constraints (C_0), (C_1), (C_2) with b_0 = b and b_M ≥ a, as soon as b ≥ a_2 and a < a_0. A possible choice is to take b_{i+1} = γ/p + (d/(p(d−2)))(b_i − ζ), M being defined by b_{M−1} < a ≤ b_M.

Case p < (d/2)*: In this case, the slope of D_1 is strictly greater than 1, and the region of constraints is never empty. It is always possible to construct a finite sequence (b_i)_{0≤i≤M} satisfying the constraints (C_0), (C_1), (C_2) as soon as b > a_0 and b ≥ a_2. A possible choice is again b_{i+1} = γ/p + (d/(p(d−2)))(b_i − ζ), M being defined by b_{M−1} < a ≤ b_M.
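The recursion b_{i+1} = γ/p + (d/(p(d−2)))(b_i − ζ) can be checked to satisfy (C_0)–(C_2) and to reach any a < a_0 in finitely many steps. A sketch for the SILT-type case p = 2, γ = 1, d = 6; the numerical values (ζ, the offsets) are illustrative, not from the paper:

```python
d, p, gamma = 6, 2.0, 1.0
zeta = 0.51                                              # slightly above 1/2
a2 = 2 * zeta / d                                        # abscissa of D1 ∩ D2
a0 = (gamma * (d - 2) - d * zeta) / ((d - 2) * p - d)    # abscissa of D1 ∩ D0
assert a2 < a0                         # non-empty region: zeta < gamma/(1 + 2p/d)

b = a2 + 0.01                          # start strictly above a2
a = a0 - 1e-3                          # target level just below a0
bs = [b]
while bs[-1] < a:
    bi = bs[-1]
    nxt = gamma / p + d / (p * (d - 2)) * (bi - zeta)    # saturate (C1)
    assert nxt > bi                                       # (C0)
    assert nxt > gamma / p - d / (2 * p) * bi             # (C2)
    bs.append(nxt)
M = len(bs) - 1
assert bs[-1] >= a and M < 1000        # finite M suffices
```

The iteration has fixed point a_0 and contraction ratio d/(p(d−2)) < 1 in this case, which is why it climbs monotonically from a_2 toward a_0.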

We deal now with the regions where the local times are small.

Lemma 3.3 Assume d ≥ 5, and fix positive real numbers b, γ, ζ, y. Let D_b = {x ∈ Z^d : l_n(x) ≤ n^b}. Assume that

γ ≥ 1; ζ < γ − b − (2/d)(1 − b); and y > 0 if γ > 1, y > 1 + 2 Σ_{x∈Z^d} G_d(x)² if γ = 1. (30)

Then, for a constant c (depending on b, γ, ζ, y) and for n large enough, we have

P_0[ Σ_{x∈D_b} l_n²(x) ≥ n^γ y ] ≤ exp( −c n^ζ ). (31)

Proof: We again perform the decomposition in terms of level sets, with D_i as in (25). However, we are now in a region where the estimate (22) is useless, since the combinatorial factor is dominant. To overcome this problem, we rewrite the SILT in terms of intersections of independent random walks, as in Proposition 1.4 and as explained in the introduction. We assume from now on that n is a power of 2, n = 2^N. As in Proposition 1.4,

Σ_{x∈D_b} l_n²(x) ≤ n + 1 + 2 Z^{(0)}, with Z^{(0)} = Σ_{x∈D_b} Σ_{0≤k<k′≤2^N} 1I{S_k = S_{k′} = x}. (32)

Now,

Z^{(0)} ≤ Σ_x 1I{l_{2^{N−1}}(x) ≤ 2^{Nb}} Σ_{0≤k<k′≤2^{N−1}} 1I{S_k = S_{k′} = x}
+ Σ_x 1I{l_{2^N}(x) − l_{2^{N−1}}(x) ≤ 2^{Nb}} Σ_{2^{N−1}≤k<k′≤2^N} 1I{S_k = S_{k′} = x}
+ Σ_x 1I{l_{2^{N−1}}(x) ≤ 2^{Nb}} Σ_{0≤k≤2^{N−1}≤k′≤2^N} 1I{S_k = S_{k′} = x}
=: Z_1^{(1)} + Z_2^{(1)} + J_1^{(1)}.


With the same notations as in the proof of Proposition 1.4, for i = 1, 2,

Z_i^{(1)} = Σ_y 1I{S_{2^{N−1}} = y} Σ_x 1I{l_{2^{N−1},i}(y − x) ≤ 2^{Nb}} Σ_{0≤k<k′≤2^{N−1}} 1I{S_{k,i} = S_{k′,i} = y − x}.

Changing x into y − x in the second summation, we obtain for i = 1, 2

Z_i^{(1)} = Σ_x 1I{l_{2^{N−1},i}(x) ≤ 2^{Nb}} Σ_{0≤k<k′≤2^{N−1}} 1I{S_{k,i} = S_{k′,i} = x}.

The self-intersection time of the two independent strands is denoted

J_1^{(1)} = Σ_x 1I{l_{2^{N−1},1}(x) ≤ 2^{Nb}} l_{2^{N−1},1}(x) l_{2^{N−1},2}(x).

Iterating this procedure, we get

Z^{(0)} ≤ Σ_{l=1}^{N−1} Σ_{k=1}^{2^{l−1}} J_k^{(l)}, (33)

where for each l ∈ {1, ···, N−1}, the random variables {J_k^{(l)}; 1 ≤ k ≤ 2^{l−1}} are i.i.d., and are distributed as a variable, say J^{(l)}, with

J^{(l)} = Σ_{x: l_{2^{N−l}}(x) ≤ 2^{Nb}} l_{2^{N−l}}(x) l̃_{2^{N−l}}(x),

where {l̃_n(x), x ∈ Z^d} is an independent copy of {l_n(x), x ∈ Z^d}. Now, note that

E_0[ Σ_{l=1}^N Σ_{k=1}^{2^{l−1}} J_k^{(l)} ] ≤ 2^N Σ_x G_d(x)².

Hence, if J̄_k^{(l)} = J_k^{(l)} − E_0[J_k^{(l)}],

P_0[ Σ_{x∈D_b} l²_n(x) ≥ n^γ y ] ≤ P_0[ Σ_{l=1}^N Σ_{k=1}^{2^{l−1}} J̄_k^{(l)} ≥ (n^γ y − n − 1)/2 − n Σ_{x∈Z^d} G²_d(x) ].

Thus, we need to prove that there exists a constant c such that P_0[ Σ_{l=1}^N Σ_{k=1}^{2^{l−1}} J̄_k^{(l)} ≥ 2^{Nγ} y ] ≤ exp(−c 2^{Nζ}). Now,

P_0[ Σ_{l=1}^N Σ_{k=1}^{2^{l−1}} J̄_k^{(l)} ≥ 2^{Nγ} y ] ≤ Σ_{l=1}^N P_0[ Σ_{k=1}^{2^{l−1}} J̄_k^{(l)} > 2^{Nγ} y/N ]. (34)

We wish to use Cramér estimates, so that we need the existence of some exponential moments for the J_k^{(l)}. For this purpose, we choose {b_i, i = 0, ..., M} a regular subdivision


of [0, b], of mesh δ > 0 to be chosen later, such that b_0 = 0, b_M = Mδ ≥ b > (M−1)δ. Let D_i = {x : 2^{N b_i} ≤ l_{2^{N−l}}(x) < 2^{N b_{i+1}}}. For each l and k, and any u > 0, we have, using Lemma 1.6 and the independence between l and l̃,

P_0( J^{(l)} > u ) ≤ Σ_{i=0}^{M−1} P_0[ Σ_{x∈D_i} l̃_{2^{N−l}}(x) > u/(2^{N b_{i+1}} M) ] ≤ Σ_{i=0}^{M−1} E_0[ exp( −κ_d u/(2^{N b_{i+1}} |D_i|^{2/d} M) ) ]. (35)

We now use a rough upper bound for |D_i|: 2^{N b_i} |D_i| ≤ 2^N, to obtain

P_0( J^{(l)} > u ) ≤ Σ_{i=0}^{M−1} exp( −κ_d u/(M 2^{N ζ_i}) ) ≤ M exp( −κ_d u/(M 2^{N max(ζ_i)}) ), (36)

with ζ_i = b_{i+1} + (2/d)(1 − b_i) = iδ(1 − 2/d) + 2/d + δ. Thus,

max(ζ_i) = Mδ(1 − 2/d) + 2/d + δ ≤ (b + δ)(1 − 2/d) + 2/d + δ,

and for any ǫ > 0, we can choose δ such that max(ζ_i) ≤ b + (2/d)(1 − b) + ǫ/2. Thus, we have a constant C_ǫ such that

P_0( J^{(l)} > u ) ≤ C_ǫ exp( −ξ_N u ), with ξ_N = κ_d/(M 2^{N(b + (1−b)(2/d) + ǫ/2)}). (37)

Note that this estimate is better than the estimate of [12],

P_0( J^{(l)} > u ) ≤ P_0( I_∞ > u ) ≤ exp( −κ_s √u ), (38)

only for u > (κ_s/ξ_N)². However, it permits us to consider the exponential moments E[exp(λ J_k)] for λ < ξ_N. We now go back to the standard Cramér method. For simplicity of notation, we drop the indices l and k when unambiguous. Returning now to (34), for any 0 ≤ λ < ξ_N,

P_0[ Σ_{k=1}^{2^{l−1}} J̄_k^{(l)} ≥ 2^{Nγ} y/N ] ≤ exp( −λ 2^{Nγ} y/N ) E_0[ exp(λ J̄) ]^{2^{l−1}}. (39)

Now, using the fact that e^x ≤ 1 + x + 2x² for x ≤ 1,

E_0[e^{λJ̄}] = E_0[e^{λJ̄} 1I{J < 1/λ}] + E_0[e^{λJ̄} 1I{J ≥ 1/λ}] (40)
≤ E_0[e^{λJ̄} 1I{J < 1/λ}] + E_0[e^{λJ} 1I{J ≥ 1/λ}] (41)
≤ E_0[(1 + λJ̄ + 2λ²J̄²) 1I{J < 1/λ}] + E_0[e^{λJ} 1I{J ≥ 1/λ}] (42)
≤ 1 + λ E_0[|J̄| 1I{J ≥ 1/λ}] + 2λ² E_0[J̄²] + E_0[e^{λJ} 1I{J ≥ 1/λ}], (43)

where we have used the fact that E_0[J̄] = 0. Now,

E_0[|J̄| 1I{J ≥ 1/λ}] ≤ E_0[J̄²]^{1/2} P_0[J ≥ 1/λ]^{1/2} ≤ λ E_0[J²] ≤ λ E_0[I_∞²].


Note that by the results of [12], E_0(I_∞²) < ∞. Hence, for some constant c, E_0[e^{λJ̄}] ≤ 1 + cλ² + E_0[e^{λJ} 1I{J ≥ 1/λ}]. We now show that for some constant C, E_0[e^{λJ} 1I{J ≥ 1/λ}] ≤ Cλ². We decompose this last expectation into

E_0[e^{λJ} 1I{J ≥ 1/λ}] = e¹ P_0(λJ ≥ 1) + I ≤ e¹ E_0[I_∞²] λ² + I, with I = ∫_{1/λ}^∞ λ e^{λu} P_0(J ≥ u) du,

and we choose λ = ξ_N/(2 log(1/ξ_N³)). To bound I, we use estimate (37), λ < ξ_N/2 and N large enough:

I ≤ ∫_{1/λ}^∞ λ e^{(λ − ξ_N)u} du ≤ (2λ/ξ_N) ∫_{1/λ}^∞ (ξ_N/2) e^{−(ξ_N/2)u} du
≤ (2λ/ξ_N) exp( −ξ_N/(2λ) ) = ξ_N³/log(1/ξ_N³) = 4 ξ_N log(1/ξ_N³) λ² ≤ λ². (44)

Thus, there is a constant C such that

E_0[exp(λJ̄)] ≤ 1 + Cλ² ≤ exp(Cλ²),

which, together with (39), yields

P_0[ Σ_{l=1}^{N−1} Σ_{k=1}^{2^{l−1}} J̄_k^{(l)} ≥ 2^{Nγ} y ] ≤ N max_{l<N} exp( −λ 2^{Nγ} y/N + C λ² 2^l ), with λ = ξ_N/(2 log(1/ξ_N³)), (45)

≤ N exp( −2^{Nγ} y ξ_N/(8N log(1/ξ_N³)) ), (46)

where we used that 2^{Nγ} y > 2CN ξ_N 2^l/log(1/ξ_N³) for any l ≤ N and N large enough, as soon as ǫ is chosen so that γ − b − (2/d)(1 − b) − ǫ/2 > 0. Now, we can use an extra ǫ/2 to swallow the denominator N log(1/ξ_N³) in the exponential, and the N factor in front of the exponential in (45). We obtain then, for N large enough,

P_0[ Σ_{l=1}^{N−1} Σ_{k=1}^{2^{l−1}} J̄_k^{(l)} ≥ 2^{Nγ} y ] ≤ exp( −C 2^{Nζ} ), with ζ = γ − b − (2/d)(1 − b) − ǫ. (47)

4 Refined upper bound estimates for SILT.

In this Section, we prove Proposition 1.5. In the first Subsection, we apply Lemmas 3.2 and 3.3 to deal with the case d ≥ 6; we then improve Lemma 3.3 to treat separately the case d = 5, which we state as a Lemma.


4.1 Proof of 1. of Proposition 1.5

The region $\{x : l_n(x) \le n^{1/2-\epsilon}\}$ is split into two regions $D := \{x : n^b < l_n(x) \le n^{1/2-\epsilon}\}$ and $D_b := \{x : l_n(x) \le n^b\}$ ($b$ to be chosen later), so that for any $y > 1 + 2\sum_x G_d^2(x)$,
\[
P\Bigl[\sum_{x:\, l_n(x)\le n^{1/2-\epsilon}} l_n^2(x) \ge ny\Bigr]
\le P\Bigl[\sum_{x\in D} l_n^2(x) \ge ny_1\Bigr] + P\Bigl[\sum_{x\in D_b} l_n^2(x) \ge ny_2\Bigr],
\]
where $y_1 > 0$, $y_2 > 1 + 2\sum_x G_d^2(x)$, $y_1 + y_2 \le y$.
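The displayed bound is the elementary union bound; to spell it out (our remark, added for the reader's convenience), if both partial sums stayed below their thresholds, the total would be below $ny$:

```latex
\Bigl\{\sum_{x\in D} l_n^2(x) < ny_1\Bigr\}\cap\Bigl\{\sum_{x\in D_b} l_n^2(x) < ny_2\Bigr\}
\;\subset\;
\Bigl\{\sum_{x:\,l_n(x)\le n^{1/2-\epsilon}} l_n^2(x) < n(y_1+y_2) \le ny\Bigr\}.
```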

For the first region, we apply Lemma 3.2 with $p = 2$, $\gamma = 1$. Since $d \ge 4$, we are in Case (i) of Lemma 3.2, so that $\zeta$ has to verify $\zeta < \frac{d}{d+4}$. Note that for $d \ge 5$, $\frac{d}{d+4} > 1/2$, so that we can choose $\zeta = 1/2 + \eta$ where $\eta > 0$ is such that $1/2 + \eta < \frac{d}{d+4}$. We now want to take $a = 1/2 - \epsilon$, so that $\eta$ also has to satisfy
\[
a < \frac{\gamma(d-2) - d\zeta}{(d-2)p - d}
\iff 1/2 - \epsilon < \frac{d - 2 - d(1/2+\eta)}{d-4} = \frac12 - \frac{d\eta}{d-4}
\iff \eta < \frac{d-4}{d}\,\epsilon.
\]
Thus for $b = \frac1d + \frac{2\eta}{d}$, Lemma 3.2 allows to conclude that
\[
\limsup_{n\to\infty} \frac{1}{\sqrt n}\log P_0\Bigl[\sum_{x\in D} l_n^2(x) \ge ny_1\Bigr] = -\infty. \tag{48}
\]
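For concreteness, here is the arithmetic at the boundary dimension $d=5$ (our check, not in the original):

```latex
d=5:\qquad \frac{d}{d+4}=\frac59>\frac12,
\qquad
\tfrac12+\eta<\tfrac59 \iff \eta<\tfrac1{18},
\qquad
\eta<\frac{d-4}{d}\,\epsilon=\frac{\epsilon}{5},
```

so any $\eta\in\,]0,\min(1/18,\epsilon/5)[$ is admissible for this step.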

For the region $D_b$, we use Lemma 3.3, with $\gamma = 1$, $\zeta = \frac12 + \eta$, $b = \frac1d + \frac{2\eta}{d}$, $y = y_2$. We just have to check that we can find $\eta > 0$ such that
\[
\zeta < \gamma - b - \frac2d(1-b)
\iff (d^2 + 2d - 4)\,\eta < \frac{d^2}{2} - 3d + 2.
\]
This is possible when $\frac{d^2}{2} - 3d + 2 > 0$, i.e. when $d \ge 6$. For the case $d = 5$, we need a special treatment.
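To make the dichotomy concrete, one may evaluate the right hand side at the two boundary dimensions (our computation, added for clarity):

```latex
\frac{d^2}{2}-3d+2=
\begin{cases}
\dfrac{25}{2}-15+2=-\dfrac12<0, & d=5,\\[6pt]
18-18+2=2>0, & d=6,
\end{cases}
```

so for $d=6$ any $\eta < 2/(d^2+2d-4) = 1/22$ works, while for $d=5$ no positive $\eta$ does.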

Lemma 4.1 Assume $d = 5$. There exists $\epsilon > 0$ such that for $b \le 1/d + \epsilon$, for $y > 1 + 2\sum_x G_d^2(x)$,
\[
\lim_{n\to\infty} \frac{1}{\sqrt n}\log P_0\Bigl[\sum_{x\in D_b} l_n^2(x) > ny\Bigr] = -\infty. \tag{49}
\]
Proof: We use the same decomposition as in the proof of Lemma 3.3, up to inequality (35), where we use the rough estimate for $|D_i|$ only for the young generations.

Case $l \ge (2/d^2)N$. We denote $D^{(l)}_{i,k} = \{x : 2^{Nb_i} \le l_{2^{N-l},k}(x) < 2^{Nb_{i+1}}\}$, where we have associated $l_{2^{N-l},k}$ with the $k$-th variable $J_k(l)$ appearing in (34). We actually add an index $k$ and $l$ to make this correspondence precise. Hence, the rough bound is $|D^{(l)}_{i,k}|\, 2^{Nb_i} \le 2^{N-l}$, so that after the appropriate changes, (37) reads: $\forall \delta > 0$, $\exists C$ such that $\forall l \ge (2/d^2)N$,
\[
P_0\bigl(J^{(l)} > u\bigr) \le \exp\bigl(-C 2^{-N\zeta} u\bigr),
\qquad\text{with}\qquad
\zeta = b\Bigl(1 - \frac2d\Bigr) + \frac2d\Bigl(1 - \frac{2}{d^2}\Bigr) + \delta. \tag{50}
\]
By proceeding as in the proof of Lemma 3.3, we obtain that for $l \ge (2/d^2)N$,
\[
P_0\Bigl[\sum_{k=1}^{2^{l-1}} \bar J_k(l) \ge 2^N y\Bigr] \le \exp\bigl(-C 2^{N(1-\zeta)}\bigr). \tag{51}
\]
It is easy to check that one can find $\delta > 0$ such that $1 - \zeta > 1/2$ as soon as $b < \frac{d-4}{2(d-2)} + \frac{4}{d^2(d-2)}$. Note that for $d = 5$, this last quantity is strictly bigger than $\frac1d$.
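Indeed, for $d=5$ (our arithmetic, spelled out for convenience):

```latex
\frac{d-4}{2(d-2)}+\frac{4}{d^2(d-2)}
=\frac{1}{6}+\frac{4}{75}
=\frac{25+8}{150}
=\frac{11}{50}
>\frac{30}{150}=\frac15=\frac1d .
```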

Case $l < (2/d^2)N$. The strategy for the old generations is to control the size of $D_i$ by a bootstrap-type argument. That is, if $D_i$ is large, then $\sum_{D_i} l_n^2(x)$ is large, and Lemma 3.3 can be applied to control this term. Thus, for any $\gamma_i$,
\[
\bigl\{|D^{(l)}_{i,k}| > 2^{N(1-\gamma_i)}\bigr\}
\subset \Bigl\{\sum_{D^{(l)}_{i,k}} l^2_{2^{N-l},k}(x) > 2^{N(1+2b_i-\gamma_i)}\Bigr\}, \tag{52}
\]
and we can invoke Lemma 3.3 to obtain a good $\gamma_i$. Before doing so, we go back to the right hand side of (34), and for a fixed $l$, we define
\[
A = \bigl\{\forall k = 1, \dots, 2^{l-1};\ \forall i = 1, \dots, M;\ |D^{(l)}_{i,k}| < 2^{N(1-\gamma_i)}\bigr\}
\]
and perform the following partitioning
\[
P_0\Bigl[\sum_{k=1}^{2^{l-1}} \bar J_k(l) > 2^{N\gamma} y_N\Bigr]
\le P_0[A^c] + P_0\Bigl[\sum_{k=1}^{2^{l-1}} \bar J_k(l) > 2^{N\gamma} y_N,\ A\Bigr]. \tag{53}
\]
Now, for $l < \frac{2}{d^2}N$, $N - l > N(1 - \frac{2}{d^2})$, and $D^{(l)}_{i,k} \subset \bigl\{x;\ l_{2^{N-l}}(x) \le 2^{(N-l) b_{i+1}/(1 - 2/d^2)}\bigr\}$. Hence,
\[
P_0[A^c] \le \sum_{k=1}^{2^{l-1}} \sum_i P_0\Bigl[|D^{(l)}_{i,k}| \ge 2^{N(1-\gamma_i)}\Bigr] \tag{54}
\]
\[
\le 2^{l-1} \sum_i P_0\Bigl[\sum_x 1\!\!1\Bigl\{l_{2^{N-l}}(x) \le 2^{(N-l)\frac{b_{i+1}}{1-2/d^2}}\Bigr\}\, l^2_{2^{N-l}}(x) \ge 2^{(N-l)(1+2b_i-\gamma_i)}\Bigr]. \tag{55}
\]
We can now apply Lemma 3.3 at time $2^{N-l}$, to bound $P_0[A^c]$ by $\exp(-2^{(N-l)\zeta})$. To obtain $P_0[A^c] \ll \exp(-C2^{N/2})$, we have to take $\zeta > (1/2)/(1 - 2/d^2)$. We thus have to choose $\gamma_i$ in order to satisfy the following conditions
\[
1 + 2b_i - \gamma_i > 1,
\qquad
\frac{1/2}{1 - 2/d^2} < 1 + 2b_i - \gamma_i - \Bigl(1 - \frac2d\Bigr)\frac{b_{i+1}}{1 - 2/d^2} - \frac2d. \tag{56}
\]


We choose $\gamma_i = b_i(1 + 2/d)$. In that case, the two conditions in (56) are satisfied for $d = 5$ and $b < 13/12$. For the second term on the right hand side of (53), we follow the same lines as (35)-(47), now taking advantage of the fact that the volume of $D^{(l)}_{i,k}$ is small. As in (35), for fixed $l$ and $k$, we now have
\[
P_0\Bigl[J^{(l)} > u;\ \forall i,\ |D^{(l)}_{i,k}| \le 2^{N(1-\gamma_i)}\Bigr]
\le M \exp\Bigl(-\frac{\kappa_d\, u}{M\, 2^{N \max_i (b_{i+1} + (1-\gamma_i)2/d)}}\Bigr).
\]
With the previous choice for the $\gamma_i$, this yields
\[
P_0\Bigl[J^{(l)} > u;\ \forall i,\ |D^{(l)}_{i,k}| \le 2^{N(1-\gamma_i)}\Bigr]
\le \exp\bigl(-C 2^{-N\xi} u\bigr)
\qquad\text{with}\qquad
\xi = b\Bigl(1 - \frac2d - \frac{4}{d^2}\Bigr) + \frac2d + \delta.
\]

Therefore, $\forall \lambda > 0$,
\[
P_0\Bigl[\sum_{k=1}^{2^{l-1}} \bar J_k(l) > 2^N \frac{y}{N},\ A\Bigr]
\le \exp(-\lambda 2^N y/N)\; E_0\Bigl[\exp(\lambda J^{(l)});\ \forall i,\ |D^{(l)}_{i,k}| \le 2^{N(1-\gamma_i)}\Bigr]^{2^{l-1}}.
\]
Following the same lines as (44)-(47), we end up with
\[
P_0\Bigl[\sum_{k=1}^{2^{l-1}} \bar J_k(l) > 2^N \frac{y}{N},\ A\Bigr]
\le \exp\bigl(-C 2^{N(1-\xi)}\bigr).
\]
Now $1 - \xi > 1/2$ if $b < \frac{d(d-4)}{2(d^2 - 2d - 4)}$, and for $d = 5$, $\frac1d < \frac{d(d-4)}{2(d^2 - 2d - 4)}$.
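To close, the two numerical thresholds invoked for $d = 5$ can be verified directly (our computation, under the simplification $b_{i+1}\approx b_i = b$):

```latex
% Condition (56) with \gamma_i = b_i(1+2/d), d=5: the right hand side is
% 1 + 2b - \tfrac{7b}{5} - \tfrac35\cdot\tfrac{25}{23}\,b - \tfrac25
%   = \tfrac35 + \tfrac{3b}{5} - \tfrac{15b}{23} = \tfrac35 - \tfrac{6b}{115}, so
\frac{1/2}{1-2/25}=\frac{25}{46}<\frac35-\frac{6b}{115}
\iff \frac{6b}{115}<\frac{13}{230}
\iff b<\frac{13}{12};
\qquad
\frac{d(d-4)}{2(d^2-2d-4)}=\frac{5}{22}>\frac15 .
```

Both thresholds are strictly larger than $1/d = 1/5$, so $b \le 1/d + \epsilon$ is admissible for $\epsilon$ small enough.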

4.2 Proof of 2. of Proposition 1.5

Since
\[
P_0\Bigl[\sum_{x:\, l_n(x) \ge \sqrt n} l_n^2(x) \ge ny\Bigr]
\le P_0\bigl[\exists x;\ l_n(x) \ge \sqrt n\bigr]
\le \sum_{x \in ]-n;n[^d} P_0\bigl[l_n(x) \ge \sqrt n\bigr]
\le \sum_{x \in ]-n;n[^d} P_0(H_x < \infty)\, P_x\bigl(l_n(x) \ge \sqrt n\bigr)
\le c\, n^d\, P_0\bigl(l_n(0) \ge \sqrt n\bigr)
\le c\, n^d \exp(-c\sqrt n),
\]
it is enough to prove that for $d \ge 5$, for any $y > 0$ and any $\epsilon \in\, ]0, 1/2 - 1/d[$, $\exists\, \tilde c > 0$ such that
\[
\limsup_{n\to\infty} \frac{1}{\sqrt n}\log P_0\Bigl[\sum_{x:\, n^{1/2-\epsilon} < l_n(x) \le \sqrt n} l_n^2(x) \ge ny\Bigr] \le -\tilde c. \tag{57}
\]
We write again $\{x : n^{1/2-\epsilon} < l_n(x) \le \sqrt n\} \subset \cup_{i=0}^{M-1} D_i$, $b_0 \le 1/2 - \epsilon$, $b_M = 1/2$, but this time $M$ will depend on $n$ (actually $M \simeq \log\log n$). Let $(y_i,\ i = 0, \dots, M-1)$ be positive numbers
