
HAL Id: hal-00755972

https://hal.archives-ouvertes.fr/hal-00755972v2

Preprint submitted on 15 Apr 2013

The almost sure limits of the minimal position and the additive martingale in a branching random walk

Yueyun Hu¹, Université Paris XIII

Summary. Consider a real-valued branching random walk in the boundary case. Using the techniques developed by Aïdékon and Shi [5], we give two integral tests which describe respectively the lower limits for the minimal position and the upper limits for the associated additive martingale.

1 Introduction

Let $\{V(u), u \in \mathbb{T}\}$ be a discrete-time branching random walk on the real line $\mathbb{R}$, where $\mathbb{T}$ is an Ulam-Harris tree which describes the genealogy of the particles and $V(u) \in \mathbb{R}$ is the position of the particle $u$. When a particle $u$ is at the $n$-th generation, we write $|u| = n$, for $n \ge 0$. The branching random walk $V$ can be described as follows: at the beginning, there is a single particle $\varnothing$ located at $0$; the particle $\varnothing$ is also the root of $\mathbb{T}$. At generation $1$, the root dies and gives birth to some point process $\mathcal{L}$ on $\mathbb{R}$. The point process $\mathcal{L}$ constitutes the first generation of the branching random walk $\{V(u), |u| = 1\}$. The next generations are defined by recurrence: for each $|u| = n$ (if such $u$ exists), the particle $u$ dies at the $(n+1)$-th generation and gives birth to an independent copy of $\mathcal{L}$ shifted by $V(u)$. The collection of all children of all such $u$, together with their positions, gives the $(n+1)$-th generation. The whole system may survive forever or die out after some generations.

Plainly $\mathcal{L} = \sum_{|u|=1} \delta_{\{V(u)\}}$. Assume $E[\mathcal{L}(\mathbb{R})] > 1$ and that
\[
E\Big[\int e^{-x}\,\mathcal{L}(dx)\Big] = 1, \qquad E\Big[\int x\,e^{-x}\,\mathcal{L}(dx)\Big] = 0. \tag{1.1}
\]
When the hypothesis (1.1) is fulfilled, the branching random walk is said to be in the boundary case in the literature (see e.g. Biggins and Kyprianou [9] and [10], Aïdékon and Shi [5]). Under some integrability conditions, a general branching random walk can be reduced to the boundary case by a linear transformation; see Jaffuel [19] for a detailed discussion. We shall assume (1.1) throughout this paper.
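For concreteness (this example and the code below are ours, not part of the paper), one point process satisfying (1.1) is binary branching with i.i.d. Gaussian displacements: every particle has exactly two children, each shifted independently by an $N(2\log 2,\,2\log 2)$ variable, so that $E[\int e^{-x}\mathcal{L}(dx)] = 2e^{-\mu+\sigma_0^2/2} = 1$ and $E[\int x e^{-x}\mathcal{L}(dx)] = 2(\mu-\sigma_0^2)e^{-\mu+\sigma_0^2/2} = 0$ with $\mu = \sigma_0^2 = 2\log 2$. The following Python sketch simulates this particular model and tracks the minimal position $M_n$ and the additive martingale $W_n = \sum_{|u|=n} e^{-V(u)}$ studied below; all names and numerical choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
MU = SIGMA0_SQ = 2 * np.log(2.0)   # N(2 log 2, 2 log 2) displacements: (1.1) holds

def next_generation(positions):
    """Every particle begets two children, each shifted by an independent
    N(MU, SIGMA0_SQ) displacement (one admissible point process L)."""
    parents = np.repeat(positions, 2)
    return parents + rng.normal(MU, np.sqrt(SIGMA0_SQ), size=parents.size)

def simulate(n_generations=18):
    """Yield (n, M_n, W_n) for one realisation of this branching random walk."""
    positions = np.array([0.0])          # the root sits at the origin
    for n in range(1, n_generations + 1):
        positions = next_generation(positions)
        yield n, positions.min(), np.exp(-positions).sum()

for n, m_n, w_n in simulate():
    # Heuristic check: M_n should be of order (3/2) log n and sqrt(n) W_n of order one.
    print(f"n={n:2d}  M_n={m_n:7.3f}  1.5*log(n)={1.5 * np.log(n):6.3f}  sqrt(n)*W_n={np.sqrt(n) * w_n:6.3f}")
```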

Denote by $M_n := \min_{|u|=n} V(u)$ the minimal position of the branching random walk at generation $n$ (with the convention $\inf \emptyset \equiv \infty$). Hammersley [17], Kingman [20] and Biggins [8] established the law of large numbers for $M_n$ (for any general branching random walk), whereas the second-order limits have recently attracted much attention; see [1, 18, 12, 2] and the references therein. In particular, Aïdékon [2] proved the convergence in law of $M_n - \frac32\log n$ under (1.1) and some mild conditions, which gives a discrete analogue of Bramson [11]'s theorem on branching Brownian motion.

¹ Département de Mathématiques, Université Paris 13, Sorbonne Paris Cité, LAGA (CNRS UMR 7539), F-93430 Villetaneuse. Research partially supported by ANR 2010 BLAN 0125. Email: yueyun@math.univ-paris13.fr

Concerning the almost sure limits of $M_n$, there is a phenomenon of fluctuation at the logarithmic scale ([18]): under (1.1) and the extra integrability assumption that there exists $\delta > 0$ such that $E[\mathcal{L}(\mathbb{R})^{1+\delta}] < \infty$ and $E\big[\int_{\mathbb{R}} \big(e^{\delta x} + e^{-(1+\delta)x}\big)\,\mathcal{L}(dx)\big] < \infty$, the following almost sure limits hold:
\[
\limsup_{n\to\infty} \frac{M_n}{\log n} = \frac{3}{2}, \quad P\text{-a.s.}, \qquad \liminf_{n\to\infty} \frac{M_n}{\log n} = \frac{1}{2}, \quad P\text{-a.s.},
\]
where here and in the sequel,
\[
P(\cdot) := P(\cdot \mid S),
\]
and $S$ denotes the event that the whole system survives. The upper bound $\frac32\log n$ is the usual fluctuation for $M_n$, because $M_n - \frac32\log n$ converges in law ([2]). It is a natural question to ask how $M_n$ can approach the unusual lower bound $\frac12\log n$.

Aïdékon and Shi [5] proved that, under (1.1) and the following integrability conditions
\[
\sigma^2 := E\Big[\int_{\mathbb{R}} x^2 e^{-x}\,\mathcal{L}(dx)\Big] < \infty, \tag{1.2}
\]
\[
E\Big[\eta(\log_+\eta)^2 + \tilde\eta \log_+\tilde\eta\Big] < \infty, \tag{1.3}
\]
where $\eta := \int_{\mathbb{R}} e^{-x}\,\mathcal{L}(dx)$, $\tilde\eta := \int_0^\infty x e^{-x}\,\mathcal{L}(dx)$ and $\log_+ x := \max(0, \log x)$, one has
\[
\liminf_{n\to\infty}\Big(M_n - \frac12\log n\Big) = -\infty, \quad P\text{-a.s.}
\]

Furthermore, they asked whether there is some deterministic sequence $a_n \to \infty$ such that
\[
-\infty < \liminf_{n\to\infty} \frac{1}{a_n}\Big(M_n - \frac12\log n\Big) < 0, \quad P\text{-a.s.?}
\]
The answer is yes: we can choose $a_n = \log\log n$. Moreover, we can give an integral test which describes the lower limits of $M_n$:

Theorem 1.1 Assume (1.1), (1.2) and (1.3). For any function $f \uparrow \infty$,
\[
P\Big(M_n - \frac12\log n < -f(n),\ \text{i.o.}\Big) = \begin{cases} 0 \\ 1 \end{cases}
\iff
\int \frac{dt}{t\,e^{f(t)}} \ \begin{cases} < \infty \\ = \infty, \end{cases} \tag{1.4}
\]
where i.o. means infinitely often as the relevant index $n \to \infty$.

As a consequence of the integral test (1.4), we have that for any $\varepsilon > 0$, $P$-a.s. for all large $n \ge n_0(\omega)$, $M_n - \frac12\log n \ge -(1+\varepsilon)\log\log n$, whereas there exist infinitely many $n$ such that $M_n - \frac12\log n \le -\log\log n$ (indeed, for $f(t) = a\log\log t$ the integral in (1.4) becomes $\int \frac{dt}{t(\log t)^a}$, which converges at infinity if and only if $a > 1$). Hence $P$-a.s.,
\[
\liminf_{n\to\infty} \frac{1}{\log\log n}\Big(M_n - \frac12\log n\Big) = -1.
\]

The behavior of the minimal position $M_n$ is closely related to the so-called additive martingale $(W_n)_{n\ge0}$:
\[
W_n := \sum_{|u|=n} e^{-V(u)}, \qquad n \ge 0,
\]
with the usual convention $\sum_{\emptyset} \equiv 0$. By Biggins [8] and Lyons [23], $W_n \to 0$ almost surely as $n \to \infty$. The problem of finding the rate of convergence (or a Seneta-Heyde norming) for $W_n$ arose in Biggins and Kyprianou [9] and was studied in [18]. Aïdékon and Shi [5] gave a definitive answer to this problem. Let
\[
D_n := \sum_{|u|=n} V(u)\, e^{-V(u)}, \qquad n \ge 1, \tag{1.5}
\]
be the derivative martingale (which is a martingale under the boundary condition (1.1)). It was shown in Biggins and Kyprianou [9] that $P$-a.s. $D_n$ converges to some nonnegative random variable $D_\infty$. Moreover, under (1.1), (1.2) and (1.3), $P$-a.s. $D_\infty > 0$, as shown in [9] and [2].

Theorem (Aïdékon and Shi [5]). Assume (1.1), (1.2) and (1.3). Then under $P$,
\[
\sqrt{n}\, W_n \ \xrightarrow{\ (p)\ }\ \sqrt{\frac{2}{\pi\sigma^2}}\, D_\infty, \qquad \text{as } n \to \infty.
\]
Moreover,
\[
\limsup_{n\to\infty} \sqrt{n}\, W_n = \infty, \quad P\text{-a.s.}
\]

Furthermore, Aïdékon and Shi conjectured that
\[
\liminf_{n\to\infty} \sqrt{n}\, W_n = \sqrt{\frac{2}{\pi\sigma^2}}\, D_\infty, \quad P\text{-a.s.} \tag{1.6}
\]
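As a rough numerical illustration of the Seneta-Heyde norming and of the conjecture (1.6) (again in the toy binary-Gaussian model introduced after (1.1), for which $\sigma^2 = \mathrm{Var}(S_1) = 2\log 2$), the sketch below tracks $\sqrt{n}\,W_n$ together with $\sqrt{2/(\pi\sigma^2)}\,D_n$, where $D_n$ is the derivative martingale (1.5); the code and its parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
MU = SIGMA_SQ = 2 * np.log(2.0)        # toy model: sigma^2 of (1.2) equals 2 log 2 here

positions = np.array([0.0])
for n in range(1, 21):
    positions = np.repeat(positions, 2) + rng.normal(MU, np.sqrt(SIGMA_SQ), size=2 * positions.size)
    w_n = np.exp(-positions).sum()                  # additive martingale W_n
    d_n = (positions * np.exp(-positions)).sum()    # derivative martingale D_n, cf. (1.5)
    lhs = np.sqrt(n) * w_n
    rhs = np.sqrt(2.0 / (np.pi * SIGMA_SQ)) * d_n
    print(f"n={n:2d}  sqrt(n)*W_n={lhs:6.3f}  sqrt(2/(pi*sigma^2))*D_n={rhs:6.3f}")
```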

The upper limits of $W_n$ can be described as follows:

Theorem 1.2 Assume (1.1), (1.2) and (1.3). For any function $f \uparrow \infty$, $P$-almost surely,
\[
\limsup_{n\to\infty} \frac{\sqrt{n}\, W_n}{f(n)} = \begin{cases} 0 \\ \infty \end{cases}
\iff
\int \frac{dt}{t f(t)} \ \begin{cases} < \infty \\ = \infty. \end{cases} \tag{1.7}
\]

Concerning the lower limits of $W_n$, we confirm (1.6) under a stronger integrability assumption: there exists some small constant $\varepsilon_0 > 0$ such that
\[
E\Big[\eta^{1+\varepsilon_0} + \int e^{-x}\,|x|^{2+\varepsilon_0}\,\mathcal{L}(dx)\Big] < \infty. \tag{1.8}
\]
It is easy to see that condition (1.8) is stronger than (1.2) and (1.3).

Proposition 1.3 Assume (1.1) and (1.8). We have
\[
\liminf_{n\to\infty} \sqrt{n}\, W_n = \sqrt{\frac{2}{\pi\sigma^2}}\, D_\infty, \quad P\text{-a.s.}
\]

Combining Theorems 1.1 and 1.2, we can roughly say that the main contribution to the upper limits of $W_n$ comes from the term $e^{-M_n}$. According to Madaule [25], Aïdékon, Berestycki, Brunet and Shi [3], and Arguin, Bovier and Kistler [6] (for branching Brownian motion), the branching random walk seen from the minimal position converges in law to some point process; in particular, $W_n e^{M_n}$ converges in law as $n \to \infty$, but we are not able to determine the almost sure fluctuations of $W_n e^{M_n}$.

The whole paper relies essentially on the techniques developed by Aïdékon and Shi [5]. To show Theorems 1.1 and 1.2, we first remark that the two theorems share the same integral test and that, since $W_n \ge e^{-M_n}$, it is enough to prove the convergence part of the integral test (1.7) and the divergence part of (1.4).

The convergence part of (1.7) will follow from an application of Doob's maximal inequality to a certain martingale. To prove the divergence part of (1.4), we use the arguments of Aïdékon and Shi [5] (the proof of their Lemma 6.3) to estimate a second moment and then apply the Borel-Cantelli lemma. We could also prove Theorem 1.2 directly, without using the divergence part of (1.4). Finally, the proof of Proposition 1.3 relies on a result (Lemma 4.1) which is also implicitly contained in Aïdékon and Shi [5] (it follows the proof of their Proposition 4.1).

The rest of this paper is organized as follows: in Section 2 we recall some known results on the branching random walk (many-to-one formula, change of measure) and on a real-valued random walk. In Section 3 we prove Theorems 1.1 and 1.2, whereas the proof of Proposition 1.3 is given in Section 4.

Throughout this paper, $f(n) \sim g(n)$ as $n \to \infty$ means that $\lim_{n\to\infty} \frac{f(n)}{g(n)} = 1$, and $(c_i,\ 1 \le i \le 36)$ denote some positive constants.

2 Preliminaries

2.1 Many-to-one formula for the branching random walk

In this subsection we recall some change-of-measure formulas for the branching random walk; for details we refer to [9, 13, 24, 5, 27] and the references therein.

First let us fix some notation which will be used throughout this paper: for $|u| = n$, we write $[\varnothing, u] \equiv \{u_0 := \varnothing, u_1, \ldots, u_{n-1}, u_n = u\}$ for the shortest path from the root $\varnothing$ to $u$, where $|u_i| = i$ for any $0 \le i \le n$. For $u, v \in \mathbb{T}$, we use the partial order $u < v$ if $u$ is an ancestor of $v$, and $u \le v$ if $u < v$ or $u = v$. We also denote by $\overleftarrow{v}$ the parent of $v$.

Under (1.1), there exists a centered real-valued random walk $\{S_n, n \ge 0\}$ such that for any $n \ge 1$ and any measurable function $f: \mathbb{R}^n \to \mathbb{R}_+$,
\[
E\Big[\sum_{|u|=n} e^{-V(u)}\, f\big(V(u_1), \ldots, V(u_n)\big)\Big] = E\big[f(S_1, \ldots, S_n)\big]. \tag{2.1}
\]
Moreover, under (1.2), $\sigma^2 = \mathrm{Var}(S_1) = E\big[\sum_{|u|=1} (V(u))^2\, e^{-V(u)}\big] \in (0, \infty)$.
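In the toy binary-Gaussian model used above, the law of $S_1$ can be computed explicitly: its density is $2e^{-x}$ times the $N(2\log 2, 2\log 2)$ density, i.e. the $N(0, 2\log 2)$ density, so $S$ is a centred Gaussian walk with step variance $2\log 2$. The hedged sketch below checks (2.1) by Monte Carlo for the particular test function $f(x_1,\ldots,x_n) = 1_{(\min_i x_i \ge 0)}$; the function names and sample sizes are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
MU = SIGMA_SQ = 2 * np.log(2.0)
N_GEN, TRIALS = 8, 2000

def lhs_once():
    """One realisation of sum_{|u|=n} e^{-V(u)} 1{V(u_1)>=0, ..., V(u_n)>=0}."""
    positions = np.array([0.0])
    path_ok = np.array([True])                      # ancestral path stayed >= 0 so far
    for _ in range(N_GEN):
        positions = np.repeat(positions, 2) + rng.normal(MU, np.sqrt(SIGMA_SQ), size=2 * positions.size)
        path_ok = np.repeat(path_ok, 2) & (positions >= 0.0)
    return np.exp(-positions[path_ok]).sum()

# Left-hand side of (2.1): average over independent branching random walks.
lhs = np.mean([lhs_once() for _ in range(TRIALS)])

# Right-hand side of (2.1): P(min_{1<=i<=n} S_i >= 0) for the N(0, 2 log 2) walk.
steps = rng.normal(0.0, np.sqrt(SIGMA_SQ), size=(100_000, N_GEN))
rhs = (np.cumsum(steps, axis=1).min(axis=1) >= 0.0).mean()

print(f"many-to-one check, n={N_GEN}: LHS ~ {lhs:.4f}   RHS ~ {rhs:.4f}")
```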

The renewal function $R(x)$ related to the random walk $S$ is defined as follows:
\[
R(x) := \sum_{k=0}^{\infty} P\Big(S_k \ge -x,\ S_k < \min_{0 \le j \le k-1} S_j\Big), \qquad x \ge 0, \tag{2.2}
\]
and $R(x) = 0$ if $x < 0$. Moreover,
\[
\lim_{x\to\infty} \frac{R(x)}{x} = c_R, \tag{2.3}
\]
with some positive constant $c_R$ (see Feller [16], p. 612).
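Since (2.2) counts the strict descending ladder epochs of $S$ whose ladder heights lie above $-x$ (the $k=0$ term always contributes $1$), $R(x)$ can be estimated by simulating long trajectories and recording strict new minima; the sketch below does this for the $N(0, 2\log 2)$ walk of the running example and also prints $R(x)/x$ as a crude proxy for $c_R$ in (2.3). Trajectory length, sample size and variable names are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
SIGMA_SQ = 2 * np.log(2.0)      # step variance of the example walk S
STEPS, TRIALS = 20_000, 400

def ladder_count(x):
    """Number of k >= 0 with S_k >= -x and S_k < min(S_0, ..., S_{k-1})."""
    s = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(SIGMA_SQ), size=STEPS))))
    past_min = np.minimum.accumulate(s)[:-1]        # past_min[k-1] = min(S_0, ..., S_{k-1})
    new_min = (s[1:] < past_min) & (s[1:] >= -x)    # strict new minima at height >= -x
    return 1 + np.count_nonzero(new_min)            # "+1" accounts for the k = 0 term

for x in (0.0, 2.0, 5.0, 10.0):
    r_est = np.mean([ladder_count(x) for _ in range(TRIALS)])
    ratio = f"   R(x)/x ~ {r_est / x:5.2f}" if x > 0 else ""
    print(f"x={x:5.1f}   R(x) ~ {r_est:6.2f}{ratio}")
```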

For $\alpha \ge 0$, we define, as in Aïdékon and Shi [5], two truncated processes: for any $n \ge 0$,
\[
W_n^{(\alpha)} := \sum_{|u|=n} e^{-V(u)}\, 1_{(\underline{V}(u) \ge -\alpha)}, \tag{2.4}
\]
\[
D_n^{(\alpha)} := \sum_{|u|=n} R_\alpha(V(u))\, e^{-V(u)}\, 1_{(\underline{V}(u) \ge -\alpha)}, \tag{2.5}
\]
where $\underline{V}(u) := \min_{v \le u} V(v)$, $R_\alpha(x) := R(\alpha + x)$ and $R$ is the renewal function defined in (2.2).

Denote by $(\mathcal{F}_n, n \ge 0)$ the natural filtration of the branching random walk. If the branching random walk starts from $V(\varnothing) = x$, then we denote its law by $P_x$ (with $P = P_0$). According to Biggins and Kyprianou [9], $(D_n^{(\alpha)}, n \ge 0)$ is a $(P_x, (\mathcal{F}_n))$-martingale, and on some enlarged probability space (more precisely, on the space of marked trees enlarged by an infinite ray $(\xi_n, n \ge 0)$, called the spine), we may construct a family of probabilities $(Q_x^{(\alpha)}, x \ge -\alpha)$ such that for any $x \ge -\alpha$, the following statements (i), (ii) and (iii) hold:

(i) For all $n \ge 1$,
\[
\frac{dQ_x^{(\alpha)}}{dP_x}\Big|_{\mathcal{F}_n} = \frac{D_n^{(\alpha)}}{D_0^{(\alpha)}}, \tag{2.6}
\]
\[
Q_x^{(\alpha)}\big(\xi_n = u \mid \mathcal{F}_n\big) = \frac{1}{D_n^{(\alpha)}}\, R_\alpha(V(u))\, e^{-V(u)}\, 1_{(\underline{V}(u) \ge -\alpha)}, \qquad \forall\, |u| = n. \tag{2.7}
\]

(ii) Under $Q_x^{(\alpha)}$, the process $\{V(\xi_n), n \ge 0\}$ along the spine $(\xi_n)_{n\ge0}$ is distributed as the random walk $(S_n, n \ge 0)$ starting from $x$ and conditioned to stay in $[-\alpha, \infty)$. Moreover, for any $n \ge 1$, $x \ge -\alpha$ and $f: \mathbb{R}^n \to \mathbb{R}_+$,
\[
E_{Q_x^{(\alpha)}}\Big[f\big(V(\xi_1), \ldots, V(\xi_n)\big)\Big] = \frac{1}{R_\alpha(x)}\, E_x\Big[f(S_1, \ldots, S_n)\, R_\alpha(S_n)\, 1_{(\underline{S}_n \ge -\alpha)}\Big]. \tag{2.8}
\]

(iii) Let $\mathcal{G}_n := \sigma\{u, V(u) : u \in \{\xi_k, 0 \le k < n\}\}$, $n \ge 0$. Under $Q_x^{(\alpha)}$ and conditionally on $\mathcal{G}_\infty$, for all $u \notin \{\xi_k, k \ge 0\}$ with $\overleftarrow{u} \in \{\xi_k, k \ge 0\}$, the induced branching random walks $(V(uv), |v| \ge 0)$ are independent and distributed as $P_{V(u)}$, where $\{uv, |v| \ge 0\}$ denotes the subtree of $\mathbb{T}$ rooted at $u$.

Let us mention that, as a consequence of (i), the following many-to-one formula holds: for any $n \ge 1$, $x \ge -\alpha$ and $f: \mathbb{R}^n \to \mathbb{R}_+$,
\[
E_x\Big[\sum_{|u|=n} e^{-V(u)}\, R_\alpha(V(u))\, f\big(V(u_1), \ldots, V(u_n)\big)\, 1_{(\underline{V}(u) \ge -\alpha)}\Big] = R_\alpha(x)\, e^{-x}\, E_{Q_x^{(\alpha)}}\Big[f\big(V(\xi_1), \ldots, V(\xi_n)\big)\Big]. \tag{2.9}
\]

2.2 Estimates on a centered real-valued random walk

We collect here some estimates on a real-valued random walk $\{S_k, k \ge 0\}$, centered and with finite variance $\sigma^2 > 0$. Let $\underline{S}_n := \min_{0 \le i \le n} S_i$ for all $n \ge 0$. Recall (2.2) for the renewal function $R(\cdot)$.

Fact 2.1 There exists some constant $c_1 > 0$ such that for any $x \ge 0$,
\[
P_x\big(\underline{S}_n \ge 0\big) \le c_1 (1+x)\, n^{-1/2}, \qquad \forall\, n \ge 1, \tag{2.10}
\]
\[
P_x\big(\underline{S}_{n-1} > S_n \ge 0\big) \le c_1 (1+x)\, R(x)\, n^{-3/2}, \qquad \forall\, n \ge 1, \tag{2.11}
\]
\[
P_x\big(\underline{S}_n \ge 0\big) \sim \theta\, R(x)\, n^{-1/2}, \qquad \text{as } n \to \infty, \tag{2.12}
\]
with $\theta = c_R^{-1}\sqrt{\frac{2}{\pi\sigma^2}}$. Moreover, there is $c_2 > 0$ such that for any $b \ge a \ge 0$, $x \ge 0$, $n \ge 1$,
\[
P_x\big(S_n \in [a, b],\ \underline{S}_n \ge 0\big) \le c_2 (1+x)(1+b-a)(1+b)\, n^{-3/2}. \tag{2.13}
\]
For any $0 < r < 1$, there exists some $c_3 = c_3(r) > 0$ such that for all $b \ge a \ge 0$, $x, y \ge 0$, $n \ge 1$,
\[
P_x\Big(S_n \in [y+a, y+b],\ \underline{S}_n \ge 0,\ \min_{rn \le j \le n} S_j \ge y\Big) \le c_3 (1+x)(1+b-a)(1+b)\, n^{-3/2}. \tag{2.14}
\]
See Feller ([16], Theorem 1a, p. 415) for (2.10), Aïdékon and Jaffuel ([4], equation (2.8)) for (2.11), Aïdékon and Shi [5] for (2.13) and (2.14), and Kozlov [22] together with Lemma 2.1 in [5] for (2.12), with the identification of the constant $\theta = c_R^{-1}\sqrt{\frac{2}{\pi\sigma^2}}$.

We end this section with an estimate on the stability in $x$ of the convergence (2.12).

Lemma 2.2 Let $S$ be a centered random walk with positive variance. There exists a constant $c_4 > 0$ such that for all $n \ge 1$ and $x \ge 0$,
\[
\Big|\frac{P_x(\underline{S}_n \ge 0)}{R(x)\, P(\underline{S}_n \ge 0)} - 1\Big| \le c_4\, \frac{1+x}{\sqrt{n}}.
\]

Proof of Lemma 2.2. Denote in this proof $\varrho(n) := P(\underline{S}_n \ge 0)$ for $n \ge 0$. Let $x \ge 0$. By considering the first $k \in [0, n]$ such that $S_k = \underline{S}_n$, we get that
\[
P_x\big(\underline{S}_n \ge 0\big) = P_x\big(\underline{S}_n \ge x\big) + \sum_{k=1}^{n} P_x\Big(\underline{S}_{k-1} > S_k \ge 0,\ \min_{k < j \le n} S_j \ge S_k\Big) = \varrho(n) + \sum_{k=1}^{n} P_x\big(\underline{S}_{k-1} > S_k \ge 0\big)\, \varrho(n-k),
\]
by the Markov property at $k$. Note that $R(x) = 1 + \sum_{k=1}^{\infty} P_x\big(\underline{S}_{k-1} > S_k \ge 0\big)$. It follows that
\[
P_x\big(\underline{S}_n \ge 0\big) \le R(x)\, \varrho(n) + \sum_{k=1}^{n} P_x\big(\underline{S}_{k-1} > S_k \ge 0\big)\, \big[\varrho(n-k) - \varrho(n)\big], \tag{2.15}
\]
and
\[
P_x\big(\underline{S}_n \ge 0\big) \ge R(x)\, \varrho(n) - \sum_{k=n+1}^{\infty} P_x\big(\underline{S}_{k-1} > S_k \ge 0\big)\, \varrho(n). \tag{2.16}
\]

Denote respectively by $I_{(2.15)}$ and $I_{(2.16)}$ the sum $\sum_{k=1}^{n}$ in (2.15) and the sum $\sum_{k=n+1}^{\infty}$ (including the factor $\varrho(n)$) in (2.16). Let $T := \inf\{j \ge 1 : S_j < 0\}$. By the local limit theorem (Eppel [15]; see also [28], equation (22)), if the distribution of $S_1$ is non-lattice, then
\[
P\big(T = k\big) \sim \frac{C}{k^{3/2}}, \qquad k \to \infty, \tag{2.17}
\]
with some positive constant $C$. Moreover, Eppel [15] mentioned that a modification of (2.17) holds in the lattice case. Hence there exists some constant $c_5 > 0$ such that for all $k \ge 1$,
\[
P\big(T = k\big) \le \frac{c_5}{k^{3/2}}. \tag{2.18}
\]

It follows that for any $k \le n$, $\varrho(n-k) - \varrho(n) = P\big(n-k < T \le n\big) \le c_5 \sum_{i=n-k+1}^{n} i^{-3/2}$. Then by (2.11),
\[
I_{(2.15)} \le c_6 (1+x)\, R(x) \sum_{k=1}^{n} k^{-3/2} \sum_{i=n-k+1}^{n} i^{-3/2}.
\]
Elementary computations show that $\sum_{k=1}^{n/2} k^{-3/2} \sum_{i=n-k+1}^{n} i^{-3/2} \le \sum_{k=1}^{n/2} k^{-3/2} \times k\, (\tfrac{n}{2})^{-3/2} = O(n^{-1})$ and $\sum_{k=n/2}^{n} k^{-3/2} \sum_{i=n-k+1}^{n} i^{-3/2} \le (\tfrac{n}{2})^{-3/2} \sum_{i=1}^{n} i^{-3/2} \times i = O(n^{-1})$. Hence $I_{(2.15)} \le c_7 (1+x)\, R(x)\, n^{-1} \le c_8 (1+x)\, R(x)\, \frac{1}{\sqrt{n}}\, \varrho(n)$ by (2.12).

Finally, again by (2.11), we get that $I_{(2.16)} \le c_9 (1+x)\, R(x)\, \frac{1}{\sqrt{n}}\, \varrho(n)$. Then the Lemma follows from (2.15) and (2.16). $\square$

3 Proofs of Theorems 1.1 and 1.2

In view of the inequality $W_n \ge e^{-M_n}$, the convergence part of the integral test (1.7) yields that of (1.4), whereas the divergence part of the integral test (1.4) implies that of (1.7). We only need to show the convergence part of (1.7) and the divergence part of (1.4).

3.1 Proof of the convergence part in Theorem 1.2:

Lemma 3.1 Assume (1.1). For any $\alpha \ge 0$, there exists some constant $c_{10} = c_{10}(\alpha) > 0$ such that for any $1 < n \le m$ and $\lambda > 0$, we have
\[
P\Big(\max_{n \le k \le m} \sqrt{k}\, W_k^{(\alpha)} > \lambda\Big) \le c_{10}\, \frac{\log n}{\sqrt{n}} + \frac{c_{10}}{\lambda}\, \sqrt{\frac{m}{n}}.
\]

Proof of Lemma 3.1. For $n \le k \le m+1$, define
\[
\widetilde{W}_k^{(\alpha,n)} := \sum_{|u|=k} e^{-V(u)}\, 1_{(\underline{V}(u_n) \ge -\alpha)},
\]
where, as before, $\underline{V}(u_n) := \min_{1 \le j \le n} V(u_j)$ and $u_n$ is the ancestor of $u$ at the $n$-th generation. Then $\widetilde{W}_n^{(\alpha,n)} = W_n^{(\alpha)}$.

For $k \in [n, m]$, $\widetilde{W}_{k+1}^{(\alpha,n)} = \sum_{|v|=k} 1_{(\underline{V}(v_n) \ge -\alpha)} \sum_{u:\, \overleftarrow{u} = v} e^{-V(u)}$. The branching property implies that $E\big[\widetilde{W}_{k+1}^{(\alpha,n)} \mid \mathcal{F}_k\big] = \widetilde{W}_k^{(\alpha,n)}$ for $k \in [n, m]$. By Doob's maximal inequality,
\[
P\Big(\max_{n \le k \le m} \sqrt{k}\, \widetilde{W}_k^{(\alpha,n)} \ge \lambda\Big) \le \frac{\sqrt{m}}{\lambda}\, E\big(\widetilde{W}_m^{(\alpha,n)}\big) = \frac{\sqrt{m}}{\lambda}\, E\big(W_n^{(\alpha)}\big).
\]
By the many-to-one formula (2.1) and the random walk estimate (2.10), $E\big(W_n^{(\alpha)}\big) = P\big(\underline{S}_n \ge -\alpha\big) \le \frac{c_{11}}{\sqrt{n}}$, with $c_{11} := c_1(1+\alpha)$. It follows that
\[
P\Big(\max_{n \le k \le m} \sqrt{k}\, \widetilde{W}_k^{(\alpha,n)} \ge \lambda\Big) \le \frac{c_{11}}{\lambda}\, \sqrt{\frac{m}{n}}.
\]

Comparing $\widetilde{W}_k^{(\alpha,n)}$ and $W_k^{(\alpha)}$, we get that
\[
P\Big(\max_{n \le k \le m} \sqrt{k}\, W_k^{(\alpha)} > \lambda\Big) \le P\Big(\min_{n \le k \le m} \min_{|u|=k} V(u) < -\alpha\Big) + \frac{c_{11}}{\lambda}\, \sqrt{\frac{m}{n}}.
\]
The proof of the Lemma will be finished if we can show that for all $n \ge 2$,
\[
P\Big(\min_{|u| \ge n} V(u) < -\alpha\Big) \le c_{10}\, \frac{\log n}{\sqrt{n}}. \tag{3.1}
\]

To this end, let us apply the following known result (see e.g. [27]):
\[
P\Big(\inf_{u \in \mathbb{T}} V(u) < -x\Big) \le e^{-x}, \qquad \forall\, x \ge 0.
\]
Then for all $n \ge 2$,
\begin{align*}
P\Big(\min_{k \ge n} \min_{|u|=k} V(u) < -\alpha\Big)
&\le P\Big(\inf_{u \in \mathbb{T}} V(u) < -\log n\Big) + P\Big(\min_{k \ge n} \min_{|u|=k} V(u) < -\alpha,\ \inf_{v \in \mathbb{T}} V(v) \ge -\log n\Big) \\
&\le \frac{1}{n} + \sum_{k=n}^{\infty} E\Big[\sum_{|u|=k} 1_{(V(u) < -\alpha,\ V(u_n) \ge -\alpha, \ldots, V(u_{k-1}) \ge -\alpha,\ \underline{V}(u) \ge -\log n)}\Big] \\
&= \frac{1}{n} + \sum_{k=n}^{\infty} E\Big[e^{S_k}\, 1_{(S_k < -\alpha,\ S_n \ge -\alpha, \ldots, S_{k-1} \ge -\alpha,\ \underline{S}_k \ge -\log n)}\Big] \\
&\le \frac{1}{n} + e^{-\alpha}\, P\big(\underline{S}_n \ge -\log n\big),
\end{align*}
where the equality above is due to the many-to-one formula (2.1). Using (2.10) to bound the above probability term, we get (3.1) and the Lemma. $\square$

Proof of the convergence part in Theorem 1.2: Let $f$ be nondecreasing such that $\int \frac{dt}{t f(t)} < \infty$. Let $n_j := 2^j$ for large $j \ge j_0$. Then $\sum_{j=j_0}^{\infty} \frac{1}{f(n_j)} < \infty$. By using Lemma 3.1,
\[
P\Big(\max_{n_j \le k \le n_{j+1}} \sqrt{k}\, W_k^{(\alpha)} > f(n_j)\Big) \le c_{10}\, \frac{\log n_j}{\sqrt{n_j}} + c_{10}\, \frac{\sqrt{2}}{f(n_j)},
\]
whose sum over $j$ converges. The Borel-Cantelli lemma implies that $P$-a.s. for all large $k$, $\sqrt{k}\, W_k^{(\alpha)} \le f(k)$. Replacing $f(k)$ by $\varepsilon f(k)$ with an arbitrary constant $\varepsilon > 0$, we get that
\[
\limsup_{k\to\infty} \frac{\sqrt{k}\, W_k^{(\alpha)}}{f(k)} = 0, \quad P\text{-a.s.},
\]
for any $\alpha \ge 0$. By letting $\alpha \to \infty$ along a countable set (for instance $\alpha$ integer) and by using the fact that $W_k^{(\alpha)} = W_k$ on the set $\{\inf_{u \in \mathbb{T}} V(u) \ge -\alpha\}$, we get the convergence part. $\square$

3.2 Proof of the divergence part in Theorem 1.1:

The following lemma is a slight modification of Aïdékon and Shi [5]'s Lemma 6.3:

Lemma 3.2 ([5]) There exist some constants $K > 0$ and $c_{12} = c_{12}(K) > 0$ such that for all $n \ge 2$, $0 \le \lambda \le \frac13\log n$,
\[
c_{12}\, e^{-\lambda} \le P\Big(\bigcup_{k=n+1}^{2n} \big\{E_k^{(n,\lambda)} \cap F_k^{(n,\lambda)} \ne \emptyset\big\}\Big) \le \frac{1}{c_{12}}\, e^{-\lambda}, \tag{3.2}
\]
where for $n < k \le 2n$,
\[
E_k^{(n,\lambda)} := \Big\{u : |u| = k,\ \tfrac12\log n - \lambda \le V(u) \le \tfrac12\log n - \lambda + K,\ V(u_i) \ge a_i^{(n,\lambda)},\ \forall\, 0 \le i \le k\Big\},
\]
\[
F_k^{(n,\lambda)} := \Big\{u : |u| = k,\ \sum_{v \in \Upsilon(u_{i+1})} \big(1 + (V(v) - a_i^{(n,\lambda)})_+\big)\, e^{-(V(v) - a_i^{(n,\lambda)})} \le K\, e^{-b_i^{(k,n)}},\ \forall\, 0 \le i < k\Big\},
\]
where for $u \in \mathbb{T} \setminus \{\varnothing\}$, $\Upsilon(u) := \{v : v \ne u,\ \overleftarrow{v} = \overleftarrow{u}\}$ denotes the set of brothers of $u$, $x_+ := \max(x, 0)$,
\[
a_i^{(n,\lambda)} := \Big(\tfrac12\log n - \lambda\Big)\, 1_{(\frac{n}{2} < i \le 2n)}, \qquad 0 \le i \le 2n,
\]
and for $n < k \le 2n$,
\[
b_i^{(k,n)} := i^{1/12}\, 1_{(0 \le i \le \frac{n}{2})} + (k-i)^{1/12}\, 1_{(\frac{n}{2} < i \le k)}, \qquad 0 \le i \le k.
\]

Proof of Lemma 3.2. The proof of the lower bound in (3.2) [by the second moment method] goes in the same way as that of Lemma 6.3 in Aïdékon and Shi [5] [we also keep their notation], replacing $\frac12\log n$ in their proof by $\frac12\log n - \lambda$. Moreover, a similar computation of the second moment will be given in the proof of Lemma 3.3. We therefore omit the details.

The upper bound in (3.2) is a simple consequence of the many-to-one formula: writing $s := \frac12\log n - \lambda$, we have that
\begin{align*}
P\Big(\bigcup_{k=n+1}^{2n} \big\{E_k^{(n,\lambda)} \ne \emptyset\big\}\Big)
&\le \sum_{k=n+1}^{2n} E\Big[\sum_{|u|=k} 1_{(s \le V(u) \le s+K,\ V(u_i) \ge a_i^{(n,\lambda)},\ \forall i \le k)}\Big] \\
&= \sum_{k=n+1}^{2n} E\Big[e^{S_k}\, 1_{(s \le S_k \le s+K,\ S_i \ge a_i^{(n,\lambda)},\ \forall i \le k)}\Big] \\
&\le \sum_{k=n+1}^{2n} e^{s+K}\, P\big(s \le S_k \le s+K,\ S_i \ge a_i^{(n,\lambda)},\ \forall i \le k\big).
\end{align*}
By (2.14), $P\big(s \le S_k \le s+K,\ S_i \ge a_i^{(n,\lambda)},\ \forall i \le k\big) \le c_{13}\, n^{-3/2}$ for all $n < k \le 2n$. Hence $P\big(\bigcup_{k=n+1}^{2n} \{E_k^{(n,\lambda)} \ne \emptyset\}\big) \le c_{13}\, e^{-\lambda + K}$, proving the upper bound in (3.2). $\square$

Using the notation of Lemma 3.2 with the constant $K$, we define, for $n \ge 2$ and $0 \le \lambda \le \frac13\log n$,
\[
A(n, \lambda) := \bigcup_{k=n+1}^{2n} \big\{E_k^{(n,\lambda)} \cap F_k^{(n,\lambda)} \ne \emptyset\big\}. \tag{3.3}
\]
The following estimate will be useful in the application of the Borel-Cantelli lemma:

Lemma 3.3 There exists some constant $c_{14} > 0$ such that for any $n \ge 2$, $0 \le \lambda \le \frac13\log n$ and $m \ge 4n$, $0 \le \mu \le \frac13\log m$,
\[
P\big(A(n, \lambda) \cap A(m, \mu)\big) \le c_{14}\, e^{-\lambda - \mu} + c_{14}\, e^{-\mu}\, \frac{\log n}{\sqrt{n}}.
\]

Proof of Lemma 3.3. As we mentioned before, the arguments that we use are very close to the computation of the second moment in the proof of Lemma 6.3 in [5]. The introduction of the events $F_k^{(n,\lambda)}$ in $A(n, \lambda)$, sometimes called a truncation argument, is necessary to control the second moment: the event $F_k^{(n,\lambda)}$ forces the path $(V(u_i), 0 \le i \le k)$ of a particle $u \in E_k^{(n,\lambda)}$ to stay far away from $(a_i^{(n,\lambda)}, 0 \le i \le k)$; otherwise the particle $u$ would contribute a too large expectation to the second moment. Such a truncation argument was already introduced in Aïdékon [2].

Let us enter into the details of the proof of Lemma 3.3. Write for brevity
\[
s := \frac12\log n - \lambda, \qquad t := \frac12\log m - \mu.
\]
Similarly to (2.6) and (2.7), we may construct a new probability $Q$ such that for all $n \ge 1$, $\frac{dQ}{dP}\big|_{\mathcal{F}_n} = W_n$ and $Q\big(\xi_n = u \mid \mathcal{F}_n\big) = \frac{e^{-V(u)}}{W_n}$ for all $|u| = n$. Moreover, under $Q$, $(V(\xi_n), n \ge 0)$ is distributed as the random walk $(S_n, n \ge 0)$ defined in Section 2, and a spine decomposition similar to (iii) in Section 2 holds under $Q$. We refer to [9, 13, 24, 5, 27] for details. It follows that
\begin{align*}
P\big(A(n, \lambda) \cap A(m, \mu)\big)
&\le E\Big[1_{A(n,\lambda)} \sum_{k=m+1}^{2m} \sum_{|u|=k} 1_{(u \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)})}\Big] \\
&= \sum_{k=m+1}^{2m} E_Q\Big[1_{A(n,\lambda)}\, e^{V(\xi_k)}\, 1_{(\xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)})}\Big] \\
&\le e^{t+K} \sum_{k=m+1}^{2m} Q\Big(A(n, \lambda),\ \xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)}\Big) \\
&\le e^{t+K} \sum_{k=m+1}^{2m} \sum_{l=n+1}^{2n} E_Q\Big[\sum_{|v|=l} 1_{(v \in E_l^{(n,\lambda)} \cap F_l^{(n,\lambda)},\ \xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)})}\Big] \\
&=: e^{t+K} \sum_{k=m+1}^{2m} \sum_{l=n+1}^{2n} I_{(3.4)}(k, l). \tag{3.4}
\end{align*}

For $n < l \le 2n \le \frac{m}{2} < k \le 2m$, we may decompose the sum over $|v| = l$ as follows:
\[
\sum_{|v|=l} 1_{(v \in E_l^{(n,\lambda)} \cap F_l^{(n,\lambda)})} = 1_{(\xi_l \in E_l^{(n,\lambda)} \cap F_l^{(n,\lambda)})} + \sum_{p=1}^{l} \sum_{u \in \Upsilon(\xi_p)} \sum_{v \in \mathbb{T}(u),\, |v|_u = l-p} 1_{(v \in E_l^{(n,\lambda)} \cap F_l^{(n,\lambda)})},
\]
where $\mathbb{T}(u)$ denotes the subtree of $\mathbb{T}$ rooted at $u$ and $|v|_u = |v| - |u|$ the relative generation of $v \in \mathbb{T}(u)$. Then
\begin{align*}
I_{(3.4)}(k, l)
&= Q\Big(\xi_l \in E_l^{(n,\lambda)} \cap F_l^{(n,\lambda)},\ \xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)}\Big) + \sum_{p=1}^{l} E_Q\Big[1_{(\xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)})} \sum_{u \in \Upsilon(\xi_p)} f_{k,l,p}(V(u))\Big] \\
&=: I_{(3.5)}(k, l) + \sum_{p=1}^{l} J_{(3.5)}(k, l, p), \tag{3.5}
\end{align*}
with
\[
f_{k,l,p}(x) := E_Q\Big[\sum_{v \in \mathbb{T}(u),\, |v|_u = l-p} 1_{(v \in E_l^{(n,\lambda)} \cap F_l^{(n,\lambda)})} \,\Big|\, V(u) = x\Big], \qquad x \in \mathbb{R}.
\]

In what follows, we shall first estimate $J_{(3.5)}(k, l, p)$ and then $I_{(3.5)}(k, l)$. By the branching property at $u$ and by removing the event $F_l^{(n,\lambda)}$ from the indicator function in $f_{k,l,p}(x)$, we get that
\begin{align*}
f_{k,l,p}(x)
&\le E_x\Big[\sum_{|v|=l-p} 1_{(s \le V(v) \le s+K,\ V(v_i) \ge a_{i+p}^{(n,\lambda)},\ \forall\, 0 \le i \le l-p)}\Big] \\
&= e^{-x}\, E_x\Big[e^{S_{l-p}}\, 1_{(s \le S_{l-p} \le s+K,\ S_i \ge a_{i+p}^{(n,\lambda)},\ \forall\, 0 \le i \le l-p)}\Big] \\
&\le e^{-x+s+K}\, P_x\Big(s \le S_{l-p} \le s+K,\ S_i \ge a_{i+p}^{(n,\lambda)},\ \forall\, 0 \le i \le l-p\Big), \tag{3.6}
\end{align*}
where, to get the above equality, we applied an obvious modification of (2.1) with $E_x$ in place of $E$.

Let us denote by $(3.6)_{k,l,p}$ the probability term in (3.6). To estimate $(3.6)_{k,l,p}$, we distinguish, as in [5], two cases: $p \le \frac{n}{2}$ and $\frac{n}{2} < p \le l$. Recall that $n < l \le 2n \le \frac{m}{2} < k \le 2m$. If $p \le \frac{n}{2}$,
\[
(3.6)_{k,l,p} \le 1_{(x \ge 0)}\, c_{15}\, \frac{1+x}{(l-p)^{3/2}},
\]
by using (2.14). Then for $1 \le p \le \frac{n}{2}$,
\[
f_{k,l,p}(x) \le c_{15}\, 1_{(x \ge 0)}\, e^{s+K-x}\, (1+x)\, (l-p)^{-3/2}.
\]
It follows that for all $n < l \le 2n$, $m < k \le 2m$,
\begin{align*}
\sum_{1 \le p \le n/2} J_{(3.5)}(k, l, p)
&\le \sum_{p=1}^{n/2} E_Q\Big[1_{(\xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)})} \sum_{u \in \Upsilon(\xi_p)} c_{15}\, 1_{(V(u) \ge 0)}\, e^{s+K-V(u)}\, \frac{1+V(u)}{(l-p)^{3/2}}\Big] \\
&\le c_{16}\, e^{s}\, n^{-3/2} \sum_{p=1}^{n/2} E_Q\Big[1_{(\xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)})} \sum_{u \in \Upsilon(\xi_p)} 1_{(V(u) \ge 0)}\, e^{-V(u)}\, (1+V(u))\Big] \\
&\le c_{16}\, K\, e^{s}\, n^{-3/2} \sum_{p=1}^{n/2} E_Q\Big[1_{(\xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)})}\, e^{-(p-1)^{1/12}}\Big],
\end{align*}

where the last inequality is due to the definition of $\xi_k \in F_k^{(m,\mu)}$ [noticing that $a_p^{(m,\mu)} = 0$ and $b_p^{(k,m)} = p^{1/12}$ for all $p \le \frac{n}{2} < \frac{m}{2}$]. Then we get that
\[
\sum_{1 \le p \le n/2} J_{(3.5)}(k, l, p) \le c_{17}\, e^{s}\, n^{-3/2}\, Q\big(\xi_k \in E_k^{(m,\mu)}\big) \le c_{18}\, e^{s}\, n^{-3/2}\, m^{-3/2}, \tag{3.7}
\]
since $Q\big(\xi_k \in E_k^{(m,\mu)}\big) = P\big(t \le S_k \le t+K,\ S_i \ge a_i^{(m,\mu)},\ \forall\, 0 \le i \le k\big) \le c_{19}\, m^{-3/2}$ for all $m < k \le 2m$, by using (2.14).

Now consider $\frac{n}{2} < p \le l$; then $a_{i+p}^{(n,\lambda)} = s$ for any $0 \le i \le l-p$, hence
\[
(3.6)_{k,l,p} = 1_{(x \ge s)}\, P_x\big(s \le S_{l-p} \le s+K,\ \underline{S}_{l-p} \ge s\big) \le 1_{(x \ge s)}\, c_2\, (1+K)^2\, \frac{1+x-s}{(1+l-p)^{3/2}},
\]
by (2.13). It follows that
\[
\sum_{\frac{n}{2} < p \le l} J_{(3.5)}(k, l, p) \le \sum_{\frac{n}{2} < p \le l} E_Q\Big[1_{(\xi_k \in E_k^{(m,\mu)} \cap F_k^{(m,\mu)})} \sum_{u \in \Upsilon(\xi_p)} c_2 (1+K)^2\, 1_{(V(u) \ge s)}\, e^{s+K-V(u)}\, \frac{1+V(u)-s}{(1+l-p)^{3/2}}\Big].
\]
By the definition of $\xi_k \in F_k^{(m,\mu)}$, for all $p \le l \le 2n \le \frac{m}{2}$ we have that
\[
\sum_{u \in \Upsilon(\xi_p)} 1_{(V(u) \ge s)}\, e^{-V(u)}\, (1+V(u)-s) \le \sum_{u \in \Upsilon(\xi_p)} 1_{(V(u) \ge 0)}\, e^{-V(u)}\, (1+V(u)) \le K\, e^{-(p-1)^{1/12}}.
\]
Then
\[
\sum_{\frac{n}{2} < p \le l} J_{(3.5)}(k, l, p) \le c_{20}\, e^{s}\, e^{-n^{1/13}}\, Q\big(\xi_k \in E_k^{(m,\mu)}\big) \le c_{21}\, e^{s}\, e^{-n^{1/13}}\, m^{-3/2}. \tag{3.8}
\]
Combining (3.7) and (3.8), we get that
\[
\sum_{1 \le p \le l} J_{(3.5)}(k, l, p) \le \big(c_{18}\, n^{-3/2} + c_{21}\, e^{-n^{1/13}}\big)\, e^{s}\, m^{-3/2}. \tag{3.9}
\]
It remains to estimate $I_{(3.5)}(k, l)$ for $n < l \le 2n$ and $m < k \le 2m$ [in particular $l < k$]. We have

\begin{align*}
I_{(3.5)}(k, l)
&\le Q\Big(\xi_l \in E_l^{(n,\lambda)},\ \xi_k \in E_k^{(m,\mu)}\Big) \\
&= P\Big(s \le S_l \le s+K,\ S_i \ge a_i^{(n,\lambda)}\ \forall\, i \le l,\ t \le S_k \le t+K,\ S_j \ge a_j^{(m,\mu)}\ \forall\, j \le k\Big). \tag{3.10}
\end{align*}
Let us denote by $(3.10)_{k,l}$ the probability term in (3.10). Using the Markov property at $l$, we get that
\begin{align*}
(3.10)_{k,l}
&= E\Big[1_{(s \le S_l \le s+K,\ S_i \ge a_i^{(n,\lambda)},\ \forall\, 0 \le i \le l)}\, P_{S_l}\Big(t \le S_{k-l} \le t+K,\ S_j \ge a_{j+l}^{(m,\mu)},\ \forall\, 0 \le j \le k-l\Big)\Big] \\
&\le \frac{c_{21}}{(k-l)^{3/2}}\, E\Big[1_{(s \le S_l \le s+K,\ S_i \ge a_i^{(n,\lambda)},\ \forall\, 0 \le i \le l)}\, (1+S_l)\Big] \qquad \text{(by (2.13))} \\
&\le c_{22}\, (1+s+K)\, (k-l)^{-3/2}\, l^{-3/2}.
\end{align*}

Based on the above estimate and (3.9), we deduce from (3.4) and (3.5) that
\begin{align*}
P\big(A(n, \lambda) \cap A(m, \mu)\big)
&\le c_{23}\, e^{t+K} \sum_{k=m+1}^{2m} \sum_{l=n+1}^{2n} \Big((1+s+K)\, (k-l)^{-3/2}\, l^{-3/2} + e^{s}\, e^{-n^{1/13}}\, m^{-3/2} + e^{s}\, n^{-3/2}\, m^{-3/2}\Big) \\
&\le c_{24}\, e^{-\lambda - \mu} + c_{24}\, e^{-\mu}\, \frac{\log n}{\sqrt{n}},
\end{align*}
proving the Lemma. $\square$

Proof of the divergence part in Theorem 1.1. Let $f$ be nondecreasing such that $\int \frac{dt}{t\, e^{f(t)}} = \infty$. Without any loss of generality we may assume that $\sqrt{\log t} \le e^{f(t)} \le (\log t)^2$ for all large $t \ge t_0$ (see e.g. [14] for a similar justification). Denote by
\[
B_x(k) := \Big\{M_n + x \le \tfrac12\log n - f(n+k),\ \text{i.o. as } n \to \infty\Big\}, \qquad x \in \mathbb{R},\ k \ge 0.
\]
Let us first prove that there exists some constant $c_{25} > 0$ such that for any $x \in \mathbb{R}$ and $k \ge 0$,
\[
P\big(B_x(k)\big) \ge c_{25}. \tag{3.11}
\]

To this end, we take $n_i := 2^i$ for $i \ge 1$, $\lambda_i := f(n_{i+1}+k) + x + K$, and consider the event $A_i := A(n_i, \lambda_i)$ defined in (3.3). There is some integer $i_0 \equiv i_0(x, k) \ge 1$ such that for all $i \ge i_0$, $0 \le \lambda_i \le \frac13\log n_i$. By Lemma 3.2,
\[
c_{12}\, e^{-\lambda_i} \le P(A_i) \le \frac{1}{c_{12}}\, e^{-\lambda_i}, \qquad \forall\, i \ge i_0.
\]
Note that $\int \frac{dt}{t\, e^{f(t+k)}} \ge \int \frac{dt}{(t+k)\, e^{f(t+k)}} = \infty$, and $\int_{n_{i+1}}^{n_{i+2}} \frac{dt}{t\, e^{f(t+k)}} \le (\log 2)\, e^{-f(n_{i+1}+k)}$ by the monotonicity of $f$. Hence $\sum_i e^{-\lambda_i} = \infty$. By Lemma 3.3, we have for any $i \ge i_0$ and $j \ge i+2$,
\[
P\big(A_i \cap A_j\big) \le c_{14}\, e^{-\lambda_i - \lambda_j} + c_{14}\, e^{-\lambda_j}\, \frac{\log n_i}{\sqrt{n_i}},
\]
which implies that
\[
\sum_{i,j=i_0}^{N} P\big(A_i \cap A_j\big) \le c_{14}\Big(\sum_{i=i_0}^{N} e^{-\lambda_i}\Big)^2 + 2 c_{14}\Big(\sum_{i=i_0}^{N} e^{-\lambda_i}\Big) \times \Big(\sum_{i=1}^{\infty} \frac{\log n_i}{\sqrt{n_i}}\Big).
\]
Using the lower bound $P(A_i) \ge c_{12}\, e^{-\lambda_i}$ and the fact that $\sum_{i=i_0}^{N} e^{-\lambda_i} \to \infty$ as $N \to \infty$, we obtain that
\[
\limsup_{N\to\infty} \frac{\sum_{1 \le i, j \le N} P(A_i \cap A_j)}{\big(\sum_{i=1}^{N} P(A_i)\big)^2} \le \frac{c_{14}}{c_{12}^2}.
\]
By Kochen and Stone [21]'s version of the Borel-Cantelli lemma, $P\big(A_i,\ \text{i.o. as } i \to \infty\big) \ge c_{12}^2 / c_{14} =: c_{25}$, which does not depend on $(x, k)$. Observe that $\{A_i,\ \text{i.o. as } i \to \infty\} \subset B_x(k)$: in fact, for those $i$ such that $A_i \equiv A(n_i, \lambda_i)$ holds, by the definition (3.3) there exists some $n \in (n_i, n_{i+1}]$ such that $M_n \le \frac12\log n_i - \lambda_i + K = \frac12\log n_i - f(n_{i+1}+k) - x \le \frac12\log n - f(n+k) - x$. Hence we get (3.11).

We have proved that for any $x \in \mathbb{R}$ and $k \ge 0$, $P(B_x(k)) \ge c_{25}$. For any $k \ge 0$, the events $B_x(k)$ are non-increasing in $x$. Let $B(k) := \bigcap_{i=1}^{\infty} B_i(k)$ [then $B(k)$ is nothing but $\{\liminf_{n\to\infty}(M_n - \frac12\log n + f(n+k)) = -\infty\}$]. By monotone convergence, $P(B(k)) \ge c_{25}$ for all $k \ge 0$. Moreover, for any $x \in \mathbb{R}$, $P_x(B(k)) = P(B(k)) \ge c_{25}$. On the other hand, if we denote by $Z_k := \sum_{|u|=k} 1$ the number of particles in the $k$-th generation, then by the branching property,
\[
P\big(B(0) \mid \mathcal{F}_k\big) \ge 1_{(Z_k > 0)}\Big(1 - \prod_{|u|=k} \big(1 - P_{V(u)}(B(k))\big)\Big) \ge 1_{(Z_k > 0)}\Big(1 - (1 - c_{25})^{Z_k}\Big).
\]
It is well known (cf. [7], p. 8) that $S = \{\lim_{k\to\infty} Z_k = \infty\}$. Then, by letting $k \to \infty$ in the above inequality, we get that
\[
1_{B(0)} = \lim_{k\to\infty} P\big(B(0) \mid \mathcal{F}_k\big) \ge 1_S, \quad P\text{-a.s.}
\]
Clearly $S^c \subset B(0)^c$ by the convention in the definition of $M_n$ on $S^c$. Hence $S = B(0)$, $P$-a.s. This proves the divergence part of Theorem 1.1. $\square$

4 Proof of Proposition 1.3

The main technical part was already done in Aïdékon and Shi [5]:

Lemma 4.1 ([5]) Assume (1.1) and (1.8). For any fixed $\alpha \ge 0$, there exist some $\delta = \delta(\varepsilon_0) > 0$ and $c_{26} = c_{26}(\alpha, \delta) > 0$ such that for all $n \ge 2$,
\[
\mathrm{Var}_{Q^{(\alpha)}}\Big(\frac{\sqrt{n}\, W_n^{(\alpha)}}{D_n^{(\alpha)}}\Big) \le c_{26}\Big(n^{-\delta} + \sup_{k_n^{1/3} \le x \le k_n} \Big|\frac{h_{x+\alpha}(n - k_n)}{h_\alpha(n)} - 1\Big|\Big), \tag{4.1}
\]
where $k_n := \lfloor n^{1/3} \rfloor$ and $h_x(j) := \frac{\sqrt{j}\, P_x(\underline{S}_j \ge 0)}{R(x)}$ for $j \ge 1$, $x \ge 0$.

Proof of Lemma 4.1. The Lemma was implicitly contained in the proof of Proposition 4.1 in [5]. In fact, in their proof of the convergence $\mathrm{Var}_{Q^{(\alpha)}}\big(\frac{\sqrt{n}\, W_n^{(\alpha)}}{D_n^{(\alpha)}}\big) \to 0$, we choose $k_n := \lfloor n^{1/3} \rfloor$ in their definition of $E_n := E_{n,1} \cap E_{n,2} \cap E_{n,3}$ (see equation (4.6) in [5], Section 4). We claim that for some constant $\delta_1(\varepsilon_0) > 0$, there is some $c_{27} = c_{27}(\delta_1, \alpha) > 0$ such that for all $n \ge 1$,
\[
Q^{(\alpha)}\big(E_n^c\big) \le c_{27}\, n^{-\delta_1}, \tag{4.2}
\]
\[
\sup_{k_n^{1/3} \le x \le k_n} Q^{(\alpha)}\big(E_n^c \mid V(\xi_{k_n}) = x\big) \le c_{27}\, n^{-\delta_1}. \tag{4.3}
\]
In fact, according to the definition of $E_{n,1}$ in [5],
\[
Q^{(\alpha)}\big(E_{n,1}^c\big) \le Q^{(\alpha)}\Big(\{V(\xi_{k_n}) > k_n\} \cup \{V(\xi_{k_n}) < k_n^{1/3}\}\Big) + \sup_{k_n^{1/3} \le x \le k_n} Q_x^{(\alpha)}\Big(\bigcup_{i=0}^{n-k_n} \{V(\xi_i) < k_n^{1/6}\}\Big).
\]
By (2.8),
\[
Q^{(\alpha)}\big(V(\xi_{k_n}) < k_n^{1/3}\big) = \frac{1}{R_\alpha(0)}\, E\Big[1_{(\underline{S}_{k_n} \ge -\alpha,\ S_{k_n} < k_n^{1/3})}\, R_\alpha(S_{k_n})\Big] \le \frac{R_\alpha(k_n^{1/3})}{R_\alpha(0)}\, P\big(\underline{S}_{k_n} \ge -\alpha\big) \le c_{28}\, k_n^{-1/6},
\]
