Ann. Inst. H. Poincaré Probab. Stat. 2014, Vol. 50, No. 2, 327–351
DOI: 10.1214/12-AIHP529
© Association des Publications de l'Institut Henri Poincaré, 2014
www.imstat.org/aihp

The spread of a catalytic branching random walk

Philippe Carmona^{a,1} and Yueyun Hu^{b,2}

^a Laboratoire Jean Leray, UMR 6629, Université de Nantes, BP 92208, F-44322 Nantes Cedex 03, France. E-mail: philippe.carmona@univ-nantes.fr; url: http://www.math.sciences.univ-nantes.fr/~carmona

^b Département de Mathématiques (Institut Galilée, L.A.G.A. UMR 7539), Université Paris 13, 99 Av. J-B Clément, 93430 Villetaneuse, France. E-mail: yueyun@math.univ-paris13.fr; url: http://www.math.univ-paris13.fr/~yueyun/

Received 31 January 2012; revised 27 September 2012; accepted 6 October 2012

Abstract. We consider a catalytic branching random walk on $\mathbb{Z}$ that branches at the origin only. In the supercritical regime we establish a law of large numbers for the maximal position $M_n$: for some explicit constant $\alpha$, $M_n/n \to \alpha$ almost surely on the set of infinitely many visits to the origin. Then we determine all possible limiting laws for $M_n - \alpha n$ as $n$ goes to infinity.

Résumé. We consider a catalytic branching random walk on $\mathbb{Z}$ that branches only at the origin. In the supercritical case, we establish a law of large numbers for the maximal position $M_n$: there exists an explicit constant $\alpha$ such that $M_n/n \to \alpha$ almost surely on the set of trajectories along which the origin is visited infinitely often. We then determine all possible limiting laws, as $n \to +\infty$, for the sequence $M_n - \alpha n$.

MSC: 60K37

Keywords: Branching processes; Catalytic branching random walk

1. Introduction

A catalytic branching random walk (CBRW) on $\mathbb{Z}$ branching at the origin only is the following particle system:

When a particle's location $x$ is not the origin, the particle evolves as an irreducible random walk $(S_n)_{n\in\mathbb{N}}$ on $\mathbb{Z}$ starting from $x$.

When a particle reaches the origin, say at time $t$, then at time $t+1$ it dies and gives birth to new particles positioned according to a point process $\mathcal{D}_0$. Each particle (at the origin at time $t$) produces new particles independently of every particle living in the system up to time $t$. These new particles evolve as independent copies of $(S_n)_{n\in\mathbb{N}}$ starting from their birth positions.

The system starts with an initial ancestor particle located at the origin. Denote by $P$ the law of the whole system ($P$ also governs the law of the underlying random walk $S$), and by $P_x$ if the initial particle is located at $x$ (then $P = P_0$).

Let $\{X_u, |u| = n\}$ denote the positions of the particles alive at time $n$ (here $|u| = n$ means that the generation of the particle $u$ in the Ulam–Harris tree is $n$). We assume that

$$\mathcal{D}_0 = \{X_u,\ |u| = 1\} \stackrel{d}{=} \big\{S_1^{(i)},\ 1 \le i \le N\big\},$$

where $N$ is an integer random variable describing the offspring of a branching particle, with finite mean $m = E[N]$, and $(S_n^{(i)}, n \ge 0)_{i\ge1}$ are independent copies of $(S_n, n \ge 0)$, independent of $N$.

^1 Supported by Grant ANR-2010-BLAN-0108.

^2 Supported by ANR 2010 BLAN 0125.


Let $\tau$ be the first return time to the origin,

$$\tau := \inf\{n \ge 1: S_n = 0\}, \qquad \text{with } \inf\emptyset = +\infty.$$

The escape probability is $q_{esc} := P(\tau = +\infty) \in [0,1)$ ($q_{esc} < 1$ because $S$ is irreducible). Assume that we are in the supercritical regime, that is,

$$m(1 - q_{esc}) > 1. \tag{1.1}$$

An explanation of assumption (1.1) is given in Section 7, Lemma 7.3.

Since the function defined on $(0,\infty)$ by $r \mapsto \rho(r) = m E[e^{-r\tau}]$ is of class $C^\infty$, strictly decreasing, with $\lim_{r\to0}\rho(r) = m P(\tau < +\infty) = m(1 - q_{esc}) > 1$ and $\lim_{r\to+\infty}\rho(r) = 0$, there exists a unique $r > 0$, the Malthusian parameter, such that

$$m E\big[e^{-r\tau}\big] = 1. \tag{1.2}$$
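As an illustration (ours, not the paper's), equation (1.2) can be solved numerically when $S$ is the simple symmetric walk, for which the return-time generating function $E[s^\tau] = 1 - \sqrt{1-s^2}$ is classical; in that case the closed form $r = \log m - \frac12\log(2m-1)$ serves as a cross-check.

```python
import math

def malthusian_r(m, tol=1e-13):
    """Solve m * E[exp(-r*tau)] = 1 by bisection, for the simple symmetric
    random walk on Z, whose first return time tau to 0 satisfies
    E[s^tau] = 1 - sqrt(1 - s^2) (here s = exp(-r))."""
    rho = lambda r: m * (1.0 - math.sqrt(1.0 - math.exp(-2.0 * r)))
    lo, hi = 1e-12, 50.0          # rho is strictly decreasing in r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Closed form for this walk: exp(-2r) = (2m-1)/m^2.
m = 2.0
print(malthusian_r(m), math.log(m) - 0.5 * math.log(2 * m - 1))
```

For $m = 2$ both values agree ($\approx 0.1438$); note that for this recurrent walk $q_{esc} = 0$, so (1.1) reduces to $m > 1$.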

Let $\psi$ be the logarithmic moment generating function of $S_1$:

$$\psi(t) := \log E\big[e^{t S_1}\big] \in (-\infty, +\infty], \quad t \in \mathbb{R}.$$

Let $\zeta := \sup\{t > 0: \psi(t) < \infty\}$. We assume furthermore that $\zeta > 0$ and that there exists some $t_0 \in (0, \zeta)$ such that

$$\psi(t_0) = r. \tag{1.3}$$

Observe that by convexity $\psi'(t_0) > 0$.

Let $M_n := \sup_{|u|=n} X_u$ be the maximal position at time $n$ of all living particles (with the convention $\sup\emptyset := -\infty$).

Since the system only branches at the origin 0, we define the set of infinitely many visits of the catalyst by

$$\mathcal{S} := \Big\{\omega:\ \limsup_{n\to\infty}\{u:\ |u| = n,\ X_u = 0\} \neq \emptyset\Big\}.$$

Remark that $P(d\omega)$-almost surely on $\mathcal{S}^c$, for all large $n \ge n_0(\omega)$, either the system dies out or the system behaves as a finite union of some random walks on $\mathbb{Z}$, starting respectively from $X_u(\omega)$ with $|u| = n_0$. In particular, the almost sure behavior of $M_n$ is trivial on $\mathcal{S}^c$. It is then natural to consider $M_n$ on the set $\mathcal{S}$. Our first result on $M_n$ is

Theorem 1.1 (Law of large numbers). Assume (1.1) and (1.3). On the set $\mathcal{S}$, we have the convergence

$$\lim_{n\to+\infty} \frac{M_n}{n} = \alpha := \frac{\psi(t_0)}{t_0} \quad a.s.$$

In Theorem 1.1, the underlying random walk $S$ can be periodic. In order to refine this convergence to a fluctuation result by centering $M_n$, we shall need to assume the aperiodicity of $S$. However, we cannot expect a convergence in distribution for $M_n - \alpha n$, since $M_n$ is integer-valued whereas $\alpha n$ in general is not.

For $x \in \mathbb{R}$, let $\lfloor x\rfloor$ be the integer part of $x$ and $\{x\} := x - \lfloor x\rfloor \in [0,1)$ be the fractional part of $x$.

Theorem 1.2. Assume (1.1) and (1.3). Assume furthermore that $E(N^2) < \infty$ and that $S$ is aperiodic. Then there exist a constant $c > 0$ and a random variable $\Lambda_\infty$ such that for any fixed $y \in \mathbb{R}$,

$$P(M_n - \alpha n > y) = E\Big[1 - e^{-c\, e^{-t_0 y}(e^{t_0\{\alpha n + y\}} + o(1))\,\Lambda_\infty}\Big], \tag{1.4}$$

where $o(1)$ denotes some deterministic term which goes to $0$ as $n \to \infty$. The random variable $\Lambda_\infty$ is nonnegative and satisfies

$$\{\Lambda_\infty > 0\} = \mathcal{S} \quad a.s. \tag{1.5}$$

Consequently, for any subsequence $n_j \to \infty$ such that $\{\alpha n_j\} \to s$ for some $s \in [0,1)$, we have that

$$\lim_{j\to\infty} P\big(M_{n_j} - \lfloor\alpha n_j\rfloor = y\big) = E\Big[e^{-c\, e^{-t_0(y-s)}\Lambda_\infty} - e^{-c\, e^{-t_0(y-1-s)}\Lambda_\infty}\Big] \quad (y \in \mathbb{Z}). \tag{1.6}$$

Let us make some remarks on Theorem 1.2:

Remark 1.

1. The random variable $\Lambda_\infty$ is the limit of the positive fundamental martingale of Section 4. The value of the constant $c$ is given in (6.14) at the beginning of Section 6.

2. The hypothesis $E(N^2) < \infty$ might be weakened to $E(N\log(N+1)) < \infty$, just as the classical $L\log L$-condition (see e.g. Biggins [8]) in the branching random walk.

3. We do need the aperiodicity of the underlying random walk $S$ in the proof of Theorem 1.2. However, for the particular case of the nearest-neighbor random walk (the period equals 2), we can still get a modified version of Theorem 1.2, see Remark 5 of Section 6.1.

Theorems 1.1 and 1.2 are new, even though a lot of attention has been given to CBRW in continuous time. In the papers [3–5,10,27–30] very precise asymptotics are established for the moments of $\eta_t(x)$, the number of particles located at $x$ at time $t$, in every regime (sub/super/critical). Elaborate limit theorems were obtained for the critical case by Vatutin, Topchii and Yarovaya in [27–30].

Concerning the maximal/minimal position of a branching random walk (BRW) on $\mathbb{R}$, some important progress has been made in recent years; in particular, a convergence in law result was proved in Aïdékon [1] when the BRW is not lattice-valued. It is expected that such convergence does not hold in general for lattice-valued BRW; for instance, see Bramson [11], where he used a centering with the integer part of some (random) sequence. In the recent studies of BRW, the spine decomposition technique plays a very important role. It turns out that a similar spine decomposition exists for CBRW (and more generally for branching Markov chains), and we especially acknowledge the paper [16] that introduced us to the techniques of multiple spines, see Section 3.

We end this introduction by comparing our results to their analogues for (noncatalytic) branching random walks (see e.g. [1,2,23,25]). We shall restrict ourselves to the simple random walk on $\mathbb{Z}$, that is $P(S_1 = \pm1) = \frac12$.

For supercritical BRW ($m > 1$), almost surely on the set of nonextinction, $\lim_{n\to+\infty} \frac{M_n^{(brw)}}{n} = b$, where $b$ is the unique solution of $\psi^*(b) = \log m$, with $\psi^*(b) := \sup_t(bt - \psi(t))$ the rate function for large deviations of the simple random walk and $\psi(t) = \log\cosh(t)$. For CBRW, we can do explicit computations: since for $x \neq 0$, $E_x[e^{-rT}] = e^{-t_0|x|}$, the Malthusian parameter satisfies $r + t_0 = \log(m)$. Combined with $\log\cosh(t_0) = r$, this implies $e^{t_0} = \sqrt{2m-1}$ and $\alpha = \frac{2\log(m)}{\log(2m-1)} - 1$. Numerically, for $m = 1.83$ we find $b = 0.9$ and $\alpha = 0.24$. The second order results emphasize the difference between BRW and CBRW: for BRW, $M_n^{(brw)} - bn$ is of order $O(\log n)$, whereas for CBRW, $M_n - \alpha n$ is of order $O(1)$, see Remark 5.
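These explicit computations are easy to reproduce; the following sketch (ours, not the paper's) recovers $t_0$, $r$ and $\alpha$ for the simple symmetric walk from the two relations $r + t_0 = \log m$ and $\log\cosh(t_0) = r$.

```python
import math

def cbrw_constants(m):
    """For the CBRW driven by the simple symmetric walk: t0, the Malthusian
    parameter r, and the speed alpha = r/t0, using exp(t0) = sqrt(2m-1)."""
    t0 = 0.5 * math.log(2.0 * m - 1.0)
    r = math.log(m) - t0                  # r + t0 = log m
    # consistency check: psi(t0) = log cosh(t0) must equal r
    assert abs(math.log(math.cosh(t0)) - r) < 1e-12
    return t0, r, r / t0                  # alpha = psi(t0)/t0 = r/t0

t0, r, alpha = cbrw_constants(1.83)
print(round(alpha, 2))                    # the value reported in the text
```

The same number also follows from the closed form $\alpha = 2\log(m)/\log(2m-1) - 1$.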

The organization of the rest of this paper is as follows: we first give in Section 2 the heuristics explaining the differences between CBRW and ordinary BRW (branching random walk). Then we proceed (in Section 3) to recall many-to-one/few lemmas, we exhibit a fundamental martingale (in Section 4) and prove Theorems 1.1 and 1.2 in Sections 5 and 6 respectively, with the help of sharp asymptotics derived from renewal theory. Finally, Section 7 is devoted to an extension to the case of multiple catalysts. There the supercritical assumption (1.1) appears in a very natural way.

Finally, let us denote by $C$, $C'$ or $C''$ some unimportant positive constants whose values can change from one paragraph to another.

2. Heuristics

Assume for the sake of simplicity that we have a simple random walk. The existence of the fundamental martingale $\Lambda_n = e^{-rn}\sum_{|u|=n}\varphi(X_u)$ (see Section 4), which satisfies $\{\Lambda_\infty > 0\} = \mathcal{S}$, shows that on the set of nonextinction $\mathcal{S}$ we have roughly $e^{rn}$ particles at time $n$.


If we apply the usual heuristic for branching random walks (see e.g. [25], Section II.1), then we say that we have approximately $e^{rn}$ independent random walks positioned at time $n$, and therefore the expected population above a level $an > 0$ is roughly

$$E\Big[\sum_{i=1}^{e^{rn}} 1\big(S_n^{(i)} \ge an\big)\Big] = e^{rn} P(S_n \ge an) = e^{-n(\psi^*(a)-r)(1+o(1))},$$

where $\psi^*(a) = \sup_{t\ge0}(ta - \psi(t))$ is the large deviation rate function (for the simple random walk, $e^{\psi(t)} = E[e^{tS_1}] = \cosh(t)$).

This expected population is of order 1 when $\psi^*(a) = r$, and therefore we would expect to have $M_n \sim n\gamma$ on $\mathcal{S}$, where $\psi^*(\gamma) = r$.

However, for CBRW, this is not the right speed, since the positions of the independent particles cannot be assumed to be distributed as random walks. Instead, the $e^{rn}$ independent particles may be assumed to be distributed as a fixed probability distribution, say $\nu$. If $\eta_n(x) = \sum_{|u|=n} 1(X_u = x)$ is the number of particles at location $x$ at time $n$, we may assume that for a constant $C > 0$, $e^{-rn} E[\eta_n(x)] \to C\nu(x)$, and thus $\nu$ inherits from $\eta_n$ the relation

$$\nu(x) = e^{-r}\sum_y \nu(y)\, p(y,x)\big(m 1(y=0) + 1(y\neq0)\big),$$

with $p(x,y)$ the random walk kernel. For the simple random walk, this implies that for $|x| \ge 2$ we have $\frac12(\nu(x+1) + \nu(x-1)) = e^{r}\nu(x)$, and thus $\nu(x) = C e^{-t_0|x|}$ for $|x| \ge 2$, with $\psi(t_0) = \log\cosh(t_0) = r$.
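The exponential profile can be checked directly (our verification, not the paper's): with $r = \log\cosh(t_0)$, the function $\nu(x) = e^{-t_0|x|}$ satisfies the relation above away from the origin.

```python
import math

t0 = 0.7                                  # any t0 > 0 serves for this check
r = math.log(math.cosh(t0))               # psi(t0) = r for the simple walk
nu = lambda x: math.exp(-t0 * abs(x))

# (1/2)(nu(x+1) + nu(x-1)) = e^r nu(x) for every |x| >= 2
for x in [-9, -5, -2] + list(range(2, 10)):
    lhs = 0.5 * (nu(x + 1) + nu(x - 1))
    rhs = math.exp(r) * nu(x)
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
print("profile nu(x) = exp(-t0|x|) verified for |x| >= 2")
```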

Therefore the expected population at distance at least $an$ from the origin is roughly

$$E\Big[\sum_{|x|\ge an} \eta_n(x)\Big] = e^{rn}\sum_{|x|\ge an} e^{-rn} E[\eta_n(x)] \sim e^{rn}\, C \sum_{|x|\ge an} e^{-t_0|x|} \le C' e^{rn} e^{-t_0 an}.$$

This expectation is of order 1 when $a = \frac{r}{t_0} = \frac{\psi(t_0)}{t_0} = \alpha$, and this yields the right asymptotics

$$\frac{M_n}{n} \to \alpha \quad \text{a.s. on } \mathcal{S}.$$

This heuristically gives the law of large numbers in Theorem 1.1.

3. Many to one/few formulas for multiple catalysts branching random walks (MCBRW)

For a detailed exposition of many-to-one/few formulas and the spine construction we suggest the papers of Biggins and Kyprianou [9], Hardy and Harris [20], Harris and Roberts [22] and the references therein. For an application to the computation of moment asymptotics in the continuous setting, we refer to Döring and Roberts [16]. We state the many-to-one/two formulas for a CBRW with multiple catalysts and will then specify the formulas in the case of a single catalyst.

3.1. Multiple catalysts branching random walks (MCBRW)

The set of catalysts is some subset $\mathcal{C}$ of $\mathbb{Z}$. When a particle reaches a catalyst $x \in \mathcal{C}$, it dies and gives birth to new particles according to the point process

$$\mathcal{D}_x \stackrel{d}{=} \big\{S_1^{(i)},\ 1 \le i \le N_x\big\},$$

where $(S_n^{(i)}, n \in \mathbb{N})_{i\ge1}$ are independent copies of an irreducible random walk $(S_n, n \in \mathbb{N})$ starting from $x$, independent of the random variable $N_x$, which is assumed to be integrable. Each particle in $\mathcal{C}$ produces new particles independently from the other particles living in the system. Outside of $\mathcal{C}$ a particle performs a random walk distributed as $S$. The CBRW (branching only at 0) corresponds to $\mathcal{C} = \{0\}$.

3.2. The many to one formula for MCBRW

Some of the most interesting results about first and second moments of particle occupation numbers that we obtain come from the existence of a "natural" martingale. An easy way to transfer martingales from the random walk to the branching process is to use a slightly extended many-to-one formula that enables conditioning. Let

$$m_1(x) := E[N_x] < \infty, \quad x \in \mathbb{Z}. \tag{3.1}$$

On the space of trees with a spine (a distinguished line of descent) one can define a probability $Q$, via a martingale change of probability, that satisfies

$$E\Big[Z \sum_{|u|=n} f(X_u)\Big] = Q\Big[Z f(X_{\xi_n}) \prod_{0\le k\le n-1} m_1(X_{\xi_k})\Big], \tag{3.2}$$

for all $n \ge 1$, $f: \mathbb{Z} \to \mathbb{R}_+$ a nonnegative function and $Z$ a positive $\mathcal{F}_n$-measurable random variable, where $(\mathcal{F}_n, n \ge 0)$ denotes the natural filtration generated by the MCBRW (it does not contain information about the spine). On the right-hand side of (3.2), $(\xi_k)$ is the spine, and it happens that the distribution of $(X_{\xi_n})_{n\in\mathbb{N}}$ under $Q$ is the distribution of the random walk $(S_n)_{n\in\mathbb{N}}$.

Specializing this formula to CBRW, for which $m_1(x) = m 1(x=0) + 1(x\neq0)$, yields

$$E\Big[\sum_{|u|=n} f(X_u)\Big] = E\big[f(S_n)\, m^{L_{n-1}}\big], \tag{3.3}$$

where $L_{n-1} = \sum_{k=0}^{n-1} 1(S_k = 0)$ is the local time at level 0.
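Formula (3.3) lends itself to a direct numerical check (ours, not the paper's): for the simple symmetric walk, the left-hand side can be computed through the recursion for expected particle counts (the same recursion as in Remark 2 below), and the right-hand side by enumerating all $2^n$ walk paths.

```python
import itertools

def expected_sum(f, n, m):
    """Left side of (3.3): E[sum_{|u|=n} f(X_u)], via the exact recursion
    E[eta_{n+1}(x)] = sum_y E[eta_n(y)] p(y,x) (m 1(y=0) + 1(y!=0))."""
    v = {0: 1.0}
    for _ in range(n):
        w = {}
        for y, mass in v.items():
            mass *= m if y == 0 else 1.0
            for x in (y - 1, y + 1):
                w[x] = w.get(x, 0.0) + 0.5 * mass
        v = w
    return sum(f(x) * mass for x, mass in v.items())

def many_to_one(f, n, m):
    """Right side of (3.3): E[f(S_n) m^{L_{n-1}}], by brute force over paths."""
    total = 0.0
    for steps in itertools.product((-1, 1), repeat=n):
        s, visits = 0, 0
        for step in steps:
            visits += (s == 0)        # L_{n-1} counts visits at times 0..n-1
            s += step
        total += f(s) * m ** visits
    return total / 2 ** n

f = lambda x: max(x, 0)               # an arbitrary nonnegative test function
for n in (1, 2, 5, 8):
    assert abs(expected_sum(f, n, 1.7) - many_to_one(f, n, 1.7)) < 1e-10
print("formula (3.3) verified for n = 1, 2, 5, 8")
```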

3.3. The many to two formula for MCBRW

Recall (3.1). Let us assume that

$$m_2(x) := E\big[N_x^2\big] < \infty, \quad x \in \mathbb{Z}. \tag{3.4}$$

Then for any $n \ge 1$ and $f: \mathbb{Z}\times\mathbb{Z} \to \mathbb{R}_+$, we have

$$E\Big[\sum_{|u|=|v|=n} f(X_u, X_v)\Big] = Q\Big[f\big(S_n^1, S_n^2\big) \prod_{0\le k < T_{de}\wedge n} m_2\big(S_k^1\big) \prod_{T_{de}\wedge n\le k < n} m_1\big(S_k^1\big)\, m_1\big(S_k^2\big)\Big], \tag{3.5}$$

where under $Q$, $S^1$ and $S^2$ are coupled random walks that start from 0 and stay coupled (in particular at the same location) until the decoupling time $T_{de}$, and after $T_{de}$ they behave as independent random walks.

More precisely, we have a three-component Markov process $(S_n^1, S_n^2, I_n, n \ge 0)$, where $I_n \in \{0,1\}$ is the indicator that equals one iff the random walks are decoupled: when the two random walks are coupled at time $n$ and at site $x$, they stay coupled at time $n+1$ with probability $\frac{m_1(x)}{m_2(x)}$. That means that the transition probabilities are the following:

• $P(S_{n+1}^1 = y, S_{n+1}^2 = y, I_{n+1} = 0 \mid S_n^1 = S_n^2 = x, I_n = 0) = \frac{m_1(x)}{m_2(x)}\, p(x,y)$,

• $P(S_{n+1}^1 = y, S_{n+1}^2 = z, I_{n+1} = 1 \mid S_n^1 = S_n^2 = x, I_n = 0) = \big(1 - \frac{m_1(x)}{m_2(x)}\big)\, p(x,y)\, p(x,z)$,

• $P(S_{n+1}^1 = y, S_{n+1}^2 = z, I_{n+1} = 1 \mid S_n^1 = x_1, S_n^2 = x_2, I_n = 1) = p(x_1,y)\, p(x_2,z)$.


The random walks are initially coupled and at the origin. The decoupling time $T_{de} = \inf\{n \ge 1: I_n = 1\}$ satisfies, for any $k \ge 0$,

$$Q\big(T_{de} \ge k+1 \,\big|\, \sigma\big(S_j^1, S_j^2, I_j,\ j \le k\big)\big) = \prod_{0\le l\le k-1} \frac{m_1(S_l^1)}{m_2(S_l^1)}\, 1(I_k = 0), \tag{3.6}$$

where we keep the usual convention $\prod_\emptyset \equiv 1$.

This formula is proved in [20,22] by defining a new probability $Q$ on the space of trees with two spines.

An alternative proof, which makes the coupling of $(S^1, S^2)$ more natural, is to condition on the generation of the common ancestor $w = u \wedge v$ of the two nodes, then use the branching property to get independence, and plug the many-to-one formula into each factor. We omit the details.

4. A fundamental martingale

Martingale arguments have been used for a long time in the study of branching processes. For example, for the Galton–Watson process with mean progeny $m$ and population $Z_n$ at time $n$, the sequence $W_n = Z_n/m^n$ is a positive martingale converging to a finite nonnegative random variable $W$. The Kesten–Stigum theorem implies that if $E[N\log(N+1)] < +\infty$, then a.s. $\{W > 0\}$ equals the survival set. A classical proof can be found in the reference book of Athreya and Ney [6], Section I.10. A more elaborate proof, involving size-biased branching processes, may be found in Lyons–Pemantle–Peres [24].

Similarly, the law of large numbers for the maximal position $M_n$ of branching random walk systems may be proved by analyzing a whole one-parameter family of martingales (see Shi [26] for a detailed exposition on the equivalent form of the Kesten–Stigum theorem for BRW). Recently, the maximal position of a branching Brownian motion with inhomogeneous spatial branching has also been studied with the help of a family of martingales indexed this time by a function space (see Berestycki, Brunet, Harris and Harris [7] or Harris and Harris [21]).

We want to stress the fact that for a catalytic branching random walk, since we branch at the origin only, we only have one natural martingale, which we call the fundamental martingale.

Let $T = \inf\{n \ge 0: S_n = 0\}$ be the first hitting time of 0, recall that $\tau = \inf\{n \ge 1: S_n = 0\}$, and let

$$\varphi(x) := E_x\big[e^{-rT}\big] \quad (x \in \mathbb{Z}), \tag{4.1}$$

where $r$ is given in (1.2). Finally let $p(x,y) = P_x(S_1 = y)$ and $Pf(x) = \sum_y p(x,y) f(y)$ be the kernel and semigroup of the random walk $S$.

Proposition 4.1. Under (1.1) and (1.3).

(1) The function $\varphi$ satisfies

$$P\varphi(x) = e^{r}\varphi(x)\,\frac{1}{m 1(x=0) + 1(x\neq0)}.$$

(2) The process

$$\Delta_n := e^{-rn}\varphi(S_n)\, m^{L_{n-1}}$$

is a martingale, where $L_{n-1} = \sum_{0\le k\le n-1} 1(S_k = 0)$ is the local time at level 0.

(3) The process

$$\Lambda_n := e^{-rn}\sum_{|u|=n}\varphi(X_u)$$

is a martingale, called the fundamental martingale.

(4) If $E[N^2] < +\infty$, then the process $\Lambda_n$ is bounded in $L^2$, and therefore is a uniformly integrable martingale.
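For the simple symmetric walk, everything in Proposition 4.1(1) is explicit and can be checked numerically (our sketch, not part of the paper): $\varphi(x) = e^{-t_0|x|}$ with $e^{t_0} = \sqrt{2m-1}$, and $r = \log m - t_0$.

```python
import math

m = 2.0
t0 = 0.5 * math.log(2.0 * m - 1.0)
r = math.log(m) - t0
phi = lambda x: math.exp(-t0 * abs(x))             # phi(x) = E_x[exp(-r T)]
P_phi = lambda x: 0.5 * (phi(x - 1) + phi(x + 1))  # one step of the kernel

# Proposition 4.1(1): P phi(x) = e^r phi(x) / (m 1(x=0) + 1(x!=0)).
for x in range(-6, 7):
    target = math.exp(r) * phi(x) / (m if x == 0 else 1.0)
    assert abs(P_phi(x) - target) < 1e-12, x
print("eigen-relation of Proposition 4.1(1) verified on [-6, 6]")
```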


Proof. (1) If $x \neq 0$, then $T \ge 1$; therefore, by conditioning on the first step,

$$\varphi(x) = \sum_y p(x,y)\, e^{-r}\, E_y\big[e^{-rT}\big] = e^{-r} P\varphi(x).$$

On the other hand, $\tau \ge 1$, so conditioning on the first step again,

$$\varphi(0) = 1 = m E\big[e^{-r\tau}\big] = m\sum_y p(0,y)\, e^{-r}\, E_y\big[e^{-rT}\big] = m\, e^{-r} P\varphi(0).$$

(2) Denote by $\mathcal{F}_n^S := \sigma\{S_1,\ldots,S_n\}$ for $n \ge 1$. We have

$$E\big[\Delta_{n+1} \mid \mathcal{F}_n^S\big] = e^{-r(n+1)} m^{L_n}\, E\big[\varphi(S_{n+1}) \mid \mathcal{F}_n^S\big] = e^{-r(n+1)} m^{L_n}\, P\varphi(S_n)$$

$$= e^{-r(n+1)} m^{L_n}\, e^{r}\varphi(S_n)\,\frac{1}{m 1(S_n=0) + 1(S_n\neq0)} = \Delta_n.$$

(3) Recall that $(\mathcal{F}_n)_{n\ge0}$ denotes the natural filtration of the CBRW. By the many-to-one formula, if $Z$ is $\mathcal{F}_{n-1}$-measurable and positive, then

$$E[\Lambda_n Z] = E\Big[e^{-rn} Z \sum_{|u|=n}\varphi(X_u)\Big] = E\big[Z\, e^{-rn}\varphi(S_n)\, m^{L_{n-1}}\big] = E[Z\Delta_n] = E[Z\Delta_{n-1}] = E[\Lambda_{n-1} Z],$$

where the fourth equality is the martingale property of $\Delta_n$.

(4) The proof is given in Section 7 in the case of multiple catalysts and heavily uses the many-to-two formula. □

Let us introduce $\eta_n(x)$, the number of particles located at $x$ at time $n$:

$$\eta_n(x) := \sum_{|u|=n} 1(X_u = x).$$

Corollary 4.2. Under (1.1) and (1.3).

(1) We have $\sup_{x,n} e^{-rn}\varphi(x)\,\eta_n(x) < +\infty$ a.s.

(2) If $N$ has finite variance, then there exists a constant $0 < C < \infty$ such that

$$E\big[\eta_n(x)\,\eta_m(y)\big] \le \frac{C}{\varphi(x)\varphi(y)}\, e^{r(n+m)} \quad (n, m \in \mathbb{N},\ x, y \in \mathbb{Z}).$$

Proof. (1) Let us write $\Lambda_n = e^{-rn}\sum_x \varphi(x)\eta_n(x)$. Since it is a positive martingale, it converges almost surely to a finite nonnegative integrable random variable $\Lambda_\infty$. Therefore $\Lambda^* := \sup_n \Lambda_n < +\infty$ a.s. and

$$\sup_{x,n} e^{-rn}\varphi(x)\,\eta_n(x) \le \Lambda^*.$$

(2) Assume for example that $n \le m$ and let $C = \sup_n E[\Lambda_n^2] < +\infty$. We have, since $\Lambda_n$ is a martingale,

$$e^{-r(n+m)}\varphi(x)\varphi(y)\, E\big[\eta_n(x)\eta_m(y)\big] \le E[\Lambda_n\Lambda_m] = E\big[\Lambda_n E[\Lambda_m \mid \mathcal{F}_n]\big] = E\big[\Lambda_n^2\big] \le C. \qquad\square$$


For the proof of the following result, instead of using large deviations for $L_n$, we use renewal theory, in the spirit of [12,17]. Let $d$ be the period of the return times to 0:

$$d := \gcd\{n \ge 1: P(\tau = n) > 0\}. \tag{4.2}$$

Proposition 4.3. Assume (1.1) and (1.3). For every $x \in \mathbb{Z}$ there exist a constant $c_x \in (0,\infty)$ and a unique $l_x \in \{0,1,\ldots,d-1\}$ such that

$$\lim_{n\to+\infty} e^{-r(dn+l_x)}\, E\big[\eta_{nd+l_x}(x)\big] = c_x.$$

Moreover, for any $l \neq l_x \pmod d$, $\eta_{nd+l}(x) = 0$ for all $n \ge 0$. In particular, for $x = 0$, $l_0 = 0$ and $c_0 = d/(m E[\tau e^{-r\tau}])$.

Proof. By the many-to-one formula (3.2),

$$v_n(x) := E\big[\eta_n(x)\big] = E\Big[\sum_{|u|=n} 1(X_u = x)\Big] = Q\Big[1(S_n = x)\prod_{0\le k\le n-1} m_1(S_k)\Big] = E\big[1(S_n = x)\, m^{L_{n-1}}\big].$$

We decompose this expectation with respect to the value of $\tau = \inf\{n \ge 1: S_n = 0\}$:

$$v_n(x) = m E\big[1(S_n = x)\,1(\tau \ge n)\big] + \sum_{1\le k\le n-1} E\big[1(S_n = x)\, m^{L_{n-1}}\,1(\tau = k)\big].$$

By the Markov property, if $u_k := P(\tau = k)$, then

$$v_n(x) = m P(\tau \ge n,\ S_n = x) + \sum_{1\le k\le n-1} m u_k\, v_{n-k}(x) = m P(\tau \ge n,\ S_n = x) + m\,\big(v_\cdot(x) * u\big)(n).$$

Recall that the Malthusian parameter $r$ is defined by

$$1 = m E\big[e^{-r\tau}\big] = m\sum_{k\ge1} e^{-rk} u_k.$$

Hence, if we let $\tilde v_n(x) = e^{-rn} v_n(x)$ and $\tilde u_k = m\, e^{-rk} u_k$, then

$$\tilde v_n(x) = m\, e^{-rn} P(\tau \ge n,\ S_n = x) + \big(\tilde v_\cdot(x) * \tilde u\big)(n).$$

By the periodicity, we have $u_n = 0$ if $n$ is not a multiple of $d$, and for $x \in \mathbb{Z}$ there is a unique $l_x \in \{0,1,\ldots,d-1\}$ such that $v_n(x) = 0$ if $n \neq l_x \pmod d$. Therefore the sequence $t_n = \tilde v_{nd+l_x}(x)$ satisfies the following renewal equation:

$$t_n = y_n + (t * s)(n),$$

with $s_n = \tilde u_{nd}$ and $y_n = m\, e^{-r(nd+l_x)} P(\tau \ge dn + l_x,\ S_{dn+l_x} = x)$. Since the sequence $s$ is aperiodic, the discrete renewal theorem (see Feller [18], Section XIII.10, Theorem 1) implies that

$$t_n \to \frac{\sum_n y_n}{\sum_n n s_n} =: c_x.$$

Remark that $\sum_n n s_n = \sum_n n\, e^{-rnd}\, m u_{nd} = \frac{m}{d} E\big[\tau e^{-r\tau}\big]$. We have

$$c_x = \frac{d}{m E[\tau e^{-r\tau}]}\sum_n m\, e^{-r(nd+l_x)}\, P(\tau \ge dn + l_x,\ S_{dn+l_x} = x) > 0.$$


This is exactly the desired result.

Finally, for $x = 0$ we have $l_0 = 0$ and

$$c_0 = \frac{d}{m E[\tau e^{-r\tau}]}\sum_{n\ge1} m\, e^{-rnd}\, P(\tau \ge dn,\ S_{dn} = 0) = \frac{d}{m E[\tau e^{-r\tau}]}\, m E\big[e^{-r\tau}\big] = \frac{d}{m E[\tau e^{-r\tau}]},$$

by the choice of $r$ (which gives $m E[e^{-r\tau}] = 1$). This completes the proof of Proposition 4.3. □
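For the simple symmetric walk ($d = 2$, $q_{esc} = 0$) the limit constant is fully explicit, and the convergence can be observed numerically (our check, not the paper's): there $E[\tau e^{-r\tau}] = e^{-2r}/\sqrt{1-e^{-2r}}$, so that $c_0 = d/(m E[\tau e^{-r\tau}]) = 2(m-1)/(2m-1)$, while $E[\eta_n(0)]$ is computed exactly via the recursion of Remark 2.

```python
import math

m = 2.0                                      # supercritical since q_esc = 0
r = math.log(m) - 0.5 * math.log(2 * m - 1)  # Malthusian parameter

# Expected counts E[eta_n(x)] via the exact recursion of Remark 2.
v = {0: 1.0}
ratios = []
for n in range(1, 401):
    w = {}
    for y, mass in v.items():
        mass *= m if y == 0 else 1.0
        for x in (y - 1, y + 1):
            w[x] = w.get(x, 0.0) + 0.5 * mass
    v = w
    if n % 2 == 0:                           # eta_n(0) = 0 for odd n (d = 2)
        ratios.append(math.exp(-r * n) * v.get(0, 0.0))

c0 = 2.0 * (m - 1.0) / (2.0 * m - 1.0)       # = d / (m E[tau e^{-r tau}])
print(ratios[-1], c0)
```

The normalized sequence $e^{-rn}E[\eta_n(0)]$ (even $n$) settles on $c_0 = 2/3$ for $m = 2$.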

Remark 2. The family $(c_x)_{x\in\mathbb{Z}}$ satisfies a system of linear equations, dual to the one (see Proposition 4.1) satisfied by the function $\varphi$. Recalling that $p(x,y) = P_x(S_1 = y)$ is the kernel of the random walk, we have the recurrence relation

$$E\big[\eta_{n+1}(x)\big] = \sum_y E\big[\eta_n(y)\big]\, p(y,x)\big(m 1(y=0) + 1(y\neq0)\big).$$

Assuming for simplicity $d = 1$, multiplying by $e^{-r(n+1)}$ and letting $n \to +\infty$, we obtain the following functional equation for the function $x \mapsto c_x$:

$$c_x = e^{-r}\sum_y c_y\, p(y,x)\big(m 1(y=0) + 1(y\neq0)\big), \quad x \in \mathbb{Z}.$$

We end this section with the following lemma, which yields part (1.5) of Theorem 1.2.

Lemma 4.4. Assume (1.1) and (1.3). Assume furthermore that $N$ has finite variance. Then we have

$$\{\Lambda_\infty > 0\} = \mathcal{S} \quad a.s.$$

Remark that in this lemma we do not need the aperiodicity of the underlying random walk $S$.

Proof of Lemma 4.4. We first prove that $\mathcal{S}^c \subset \{\Lambda_\infty = 0\}$ a.s. In fact, on $\mathcal{S}^c$, either the system dies out, in which case $\Lambda_n = 0$ for all large $n$, or for all large $n \ge n_0(\omega)$: $\eta_n(0) = 0$. Then, if $\eta_n = \sum_x \eta_n(x)$ is the total population, $\eta_n = \eta_{n_0}$ for all $n \ge n_0$, since the system only branches at 0. Since $\Lambda_n = e^{-rn}\sum_{|u|=n}\varphi(X_u) \le e^{-rn}\eta_n = e^{-rn}\eta_{n_0}$, we still get $\Lambda_\infty = 0$.

Let $s = P(\Lambda_\infty = 0)$ and $\hat s := P(\mathcal{S}^c)$. If we can prove $s = \hat s$, then the lemma follows. We shall condition on the number of children $N$ of the initial ancestor. For $k \ge j \ge 0$, let $\Upsilon_{k,j}$ be the event that amongst the $k$ particles of the first generation there are exactly $j$ particles which will return to 0. Then

$$s = P(\Lambda_\infty = 0) = \sum_{k=0}^\infty P(N=k)\sum_{j=0}^k P\big(\Upsilon_{k,j}\cap\{\Lambda_\infty = 0\}\mid N = k\big) = \sum_{k=0}^\infty P(N=k)\sum_{j=0}^k \binom{k}{j}\, q_{esc}^{k-j}\,(1-q_{esc})^j\, s^j$$

$$= \sum_{k=0}^\infty P(N=k)\big(q_{esc} + s(1-q_{esc})\big)^k = f\big(q_{esc} + s(1-q_{esc})\big),$$

with $f(x) = E[x^N]$ the generating function of the reproduction law. Exactly in the same way, we show that $\hat s$ satisfies the same equation as $s$.

It remains to check that the equation $x = f(q_{esc} + x(1-q_{esc}))$ has a unique solution in $[0,1)$ ($\hat s \le s$ and $s < 1$ thanks to Proposition 4.1). To this end, we consider the function $g(x) := f(q_{esc} + x(1-q_{esc})) - x$. The function $g$ is strictly convex on $[0,1]$, $g(0) = f(q_{esc}) > 0$, $g(1) = 0$ and $g'(1) = m(1-q_{esc}) - 1 > 0$. Thus $g$ has a unique zero on $[0,1)$, proving the lemma. □
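The fixed-point analysis can be illustrated concretely (our example, not the paper's): for the recurrent simple symmetric walk $q_{esc} = 0$, and with binary offspring $P(N=2) = p = 1 - P(N=0)$ one has $f(x) = (1-p) + px^2$, whose unique fixed point in $[0,1)$ is $(1-p)/p$.

```python
p = 0.7                       # mean offspring m = 2p = 1.4 > 1: supercritical
f = lambda x: (1.0 - p) + p * x * x        # generating function of N
g = lambda x: f(x) - x        # strictly convex, g(1) = 0, g'(1) = 2p - 1 > 0

# g(0) = 1 - p > 0 and g < 0 just below 1, so bisection brackets the
# unique zero of g in [0, 1).
lo, hi = 0.0, 1.0 - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
root = 0.5 * (lo + hi)
print(root, (1.0 - p) / p)    # both approximately 3/7
```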


5. The law of large numbers: Proof of Theorem 1.1

5.1. Proof of the upper bound

Let $\theta > 0$, $x > 0$. By the many-to-one formula,

$$P(M_n > xn) = P\Big(\sum_{|u|=n} 1(X_u > xn) \ge 1\Big) \le E\Big[\sum_{|u|=n} 1(X_u > xn)\Big] = E\big[1(S_n > nx)\, m^{L_{n-1}}\big] \le E\big[e^{\theta(S_n - xn)}\, m^{L_{n-1}}\big] = e^{-\theta n x}\, h_n,$$

with $h_n = E\big[e^{\theta S_n}\, m^{L_{n-1}}\big]$.

As in Proposition 4.3, we are going to use renewal theory to study the asymptotics of $h_n$. Let us condition on $\tau = \inf\{n \ge 1: S_n = 0\}$:

$$h_n = E\big[e^{\theta S_n} m^{L_{n-1}}\,1(\tau \ge n)\big] + \sum_{1\le k\le n-1} E\big[e^{\theta S_n} m^{L_{n-1}}\,1(\tau = k)\big] = m\, E\big[e^{\theta S_n}\,1(\tau \ge n)\big] + \sum_{1\le k\le n-1} m P(\tau = k)\, h_{n-k} = m z_n + m\,(h * u)(n),$$

with $z_n := E[e^{\theta S_n} 1(\tau \ge n)]$ and $u_n := P(\tau = n)$.

Assume now that $\theta > t_0$, so that $\psi(\theta) > \psi(t_0) = r$. We let

$$\tilde h_n := e^{-n\psi(\theta)} h_n, \qquad \tilde z_n := e^{-n\psi(\theta)} z_n, \qquad \tilde u_n := m\, e^{-n\psi(\theta)} u_n.$$

On the one hand, by the definition of the Malthusian parameter we have $1 = m E[e^{-r\tau}] = m\sum_n e^{-rn} u_n$, so that $\sum_k \tilde u_k < 1$.

On the other hand,

$$\tilde z_n = E\big[e^{\theta S_n - n\psi(\theta)}\,1(\tau \ge n)\big] = P_\theta(\tau \ge n),$$

with $P_\theta$ defined by the martingale change of probability

$$\frac{dP_\theta}{dP} = e^{\theta S_n - n\psi(\theta)} \quad (\text{on } \mathcal{F}_n).$$

Since under $P_\theta$, $(S_n)_{n\ge0}$ is a random walk with mean $E_\theta[S_1] = \psi'(\theta) \ge \psi'(t_0) > 0$, we have

$$\tilde z_n \to \tilde z := P_\theta(\tau = +\infty).$$

If we make the aperiodicity assumption $d = 1$, then by the (defective) discrete renewal theorem we have

$$\tilde h_n \to \frac{m\tilde z}{1 - \sum_k \tilde u_k}.$$

In the general case, we can prove, exactly as in the proof of Proposition 4.3, that for every $l \in \{0,\ldots,d-1\}$ there exists a finite constant $K_l$ such that

$$\lim_{n\to+\infty} \tilde h_{nd+l} \le K_l.$$


Therefore, in any case, the sequence $\tilde h_n$ is bounded, and if $x > \frac{\psi(\theta)}{\theta}$, then

$$P(M_n > xn) \le e^{-n(\theta x - \psi(\theta))}\, \tilde h_n$$

satisfies $\sum_n P(M_n > xn) < +\infty$. Hence, by the Borel–Cantelli lemma,

$$\limsup_{n\to+\infty} \frac{M_n}{n} \le x \quad a.s.$$

Hence, letting first $x \downarrow \frac{\psi(\theta)}{\theta}$ and then $\theta \downarrow t_0$, we obtain that

$$\limsup_{n\to+\infty} \frac{M_n}{n} \le \frac{\psi(t_0)}{t_0} = \alpha \quad a.s.$$

5.2. Proof of the lower bound, under the hypothesis $E(N^2) < \infty$

The strategy of proof is as follows: let $0 < s < 1$, $a > 0$, and consider the event $A_{n,a,s}$ (with $c$ a positive constant):

"the particles survive forever, there are at least $\frac12 c e^{rsn}$ particles alive at time $sn$, and one of these particles stays strictly positive until time $n$ and reaches a position larger than $(1-s)an$ at time $n$."

We shall prove that for a suitable constant $c$, we can choose $a, s$ such that on the set $\mathcal{S}$ of infinitely many visits to 0, for large $n$ we are in $A_{n,a,s}$. This implies that almost surely on $\mathcal{S}$, $\liminf \frac{M_n}{n} \ge a(1-s)$. Optimizing over the set of admissible couples $(a,s)$ will yield the desired lower bound: $\liminf \frac{M_n}{n} \ge \alpha$ a.s. on $\mathcal{S}$.

Recall from Proposition 4.3 and Corollary 4.2 that

$$\lim_{n\to+\infty} e^{-rdn} E\big[\eta_{dn}(0)\big] = c_0, \qquad \sup_n e^{-2rdn} E\big[\eta_{dn}(0)^2\big] < +\infty.$$

Therefore the Paley–Zygmund inequality entails that

$$P\big(\eta_{dn}(0) \ge c\, e^{rdn}\big) \ge c', \tag{5.1}$$

for some constants $c, c' > 0$. The following lemma aims at describing the a.s. behavior of $\eta_n(0)$:

Lemma 5.1. Under (1.1) and (1.3), almost surely on $\mathcal{S}$,

$$\eta_{dn}(0) \ge \frac{c}{2}\, e^{rdn}, \quad \text{for all large } n.$$

Proof. We shall write the proof for the aperiodic case $d = 1$. The generalization to a period $d \ge 2$ is straightforward by considering $dn$ instead of $n$ throughout the proof of this lemma.

Let $\eta_n = \sum_x \eta_n(x)$ be the total population at time $n$. Since $0 \le \varphi(x) \le 1$, we have $\Lambda_n = e^{-rn}\sum_x \varphi(x)\eta_n(x) \le e^{-rn}\eta_n$. Furthermore, a particle living at time $n$ has to have an ancestor at location 0 at some time $k \le n$, and if $N_i$ is the number of children of this ancestor, then

$$\eta_n \le \sum_{1\le i\le \Gamma_n} N_i, \qquad \text{with } \Gamma_n = \sum_{0\le k\le n} \eta_k(0),$$

where the $(N_i)_{i\ge1}$ are independent random variables distributed as $N$ and independent of $\Gamma_n$. Since $E[N] < +\infty$, by the Borel–Cantelli lemma there exists $i_0 = i_0(\omega)$ such that

$$N_i \le i^2 \quad \text{for } i \ge i_0.$$


Hence, almost surely for $n$ large enough,

$$\eta_n \le \sum_{1\le i\le i_0} N_i + \Gamma_n^3 \le \sum_{1\le i\le i_0} N_i + (n+1)^3\Big(\sup_{0\le k\le n}\eta_k(0)\Big)^3.$$

By Lemma 4.4, almost surely on the survival set $\mathcal{S}$ we have $\Lambda_\infty > 0$, and thus for $n$ large enough $\eta_n \ge \frac12\Lambda_\infty e^{rn}$; therefore for $n$ large enough, on $\mathcal{S}$,

$$\sup_{0\le k\le n}\eta_k(0) > e^{rn/4}.$$

Consider the stopping time (for the branching system endowed with its natural filtration)

$$T := \inf\big\{n:\ \eta_n(0) > e^{rn/4}\big\}.$$

We have established that on $\mathcal{S}$, $T < \infty$ a.s. It follows from the branching property and (5.1) that

$$P\big(\eta_{n+T}(0) \le c\, e^{rn},\ \mathcal{S}\big) \le \big(1 - c'\big)^{e^{rn/4}},$$

whose sum over $n$ converges. By the Borel–Cantelli lemma, on $\mathcal{S}$, a.s. for all large $n$,

$$\eta_n(0) \ge c\, e^{r(n-T)},$$

which, since $T < \infty$ a.s. on $\mathcal{S}$, proves the lemma. □

Proof of the lower bound of $M_n$. Let $0 < s < 1$. Define $k = k(n) := d\lfloor\frac{sn}{d}\rfloor$. By the preceding lemma, on the survival set $\mathcal{S}$, at time $k$ there are at least $\frac{c}{2}e^{rk}$ particles at 0, which move independently. Letting these particles move as the random walk $S$ staying positive up to time $n-k$, $M_n$ is bigger than the maximum of $\frac{c}{2}e^{rk}$ i.i.d. copies of $S_{n-k}$ on $\{S_1 > 0, \ldots, S_{n-k} > 0\}$. By a large deviations estimate (Theorem 5.2.1 of Dembo and Zeitouni [15], see the forthcoming Remark 3), for any fixed $a \in (0,\infty)$,

$$P\big(S_{n-k} > a(1-s)n,\ S_1 > 0, \ldots, S_{n-k} > 0\big) = e^{-(1-s)n\psi^*(a) + o(n)},$$

where we denote, as before,

$$\psi^*(a) = \sup_{\theta>0}\big(a\theta - \psi(\theta)\big).$$

It follows that

$$P\Big(M_n \le (1-s)an,\ \eta_k(0) \ge \frac{c}{2}e^{rk}\Big) \le \Big(1 - P\big(S_{n-k} > a(1-s)n,\ S_1 > 0, \ldots, S_{n-k} > 0\big)\Big)^{\frac{c}{2}e^{rk}} = \exp\big(-e^{rsn - \psi^*(a)(1-s)n + o(n)}\big).$$

Choose $(a, s) \in (0,+\infty)\times(0,1)$ such that

$$rs > \psi^*(a)(1-s),$$
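The final optimization can be illustrated numerically (our sketch, not the paper's): for the simple symmetric walk, $\psi^*(a) = a\,\mathrm{artanh}(a) + \frac12\log(1-a^2)$, and along the constraint boundary $rs = \psi^*(a)(1-s)$ the objective $a(1-s)$ becomes $ar/(r+\psi^*(a))$, whose maximum over $a \in (0,1)$ recovers the speed $\alpha$.

```python
import math

m = 1.83
t0 = 0.5 * math.log(2.0 * m - 1.0)
r = math.log(m) - t0
alpha = r / t0

# Large deviation rate function of the simple symmetric walk on (0, 1).
rate = lambda a: a * math.atanh(a) + 0.5 * math.log(1.0 - a * a)

# Maximize a(1-s) subject to rs > psi*(a)(1-s); at the constraint boundary
# s = psi*(a)/(r + psi*(a)), so the objective a(1-s) equals a r/(r + psi*(a)).
best = max((i / 2000.0) * r / (r + rate(i / 2000.0)) for i in range(1, 2000))
print(best, alpha)
```

The grid maximum matches $\alpha \approx 0.235$ for $m = 1.83$, in line with Theorem 1.1.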
