
HAL Id: hal-00472913

https://hal.archives-ouvertes.fr/hal-00472913v3

Preprint submitted on 1 Dec 2011



The number of absorbed individuals in branching Brownian motion with a barrier

Pascal Maillard∗

December 1, 2011

Summary. We study supercritical branching Brownian motion on the real line starting at the origin and with constant drift c. At the point x > 0, we add an absorbing barrier, i.e. individuals touching the barrier are instantly killed without producing offspring. It is known that there is a critical drift c0, such that this process becomes extinct almost surely if and only if c ≥ c0.

In this case, if $Z_x$ denotes the number of individuals absorbed at the barrier, we give an asymptotic for $P(Z_x = n)$ as $n$ goes to infinity. If $c = c_0$ and the reproduction is deterministic, this improves upon results of L. Addario-Berry and N. Broutin [1] and E. Aïdékon [2] on a conjecture by David Aldous about the total progeny of a branching random walk. The main technique used in the proofs is analysis of the generating function of $Z_x$ near its singular point 1, based on classical results on some complex differential equations.

Keywords. Branching Brownian motion, Galton–Watson process, Briot–Bouquet equation, FKPP equation, travelling wave, singularity analysis of generating functions.

MSC2010. Primary: 60J80. Secondary: 34M35.

1 Introduction

We define branching Brownian motion as follows. Starting with an initial individual sitting at the origin of the real line, this individual moves according to a 1-dimensional Brownian motion with drift c until an independent exponentially distributed time with rate 1. At that moment it dies and produces L (identical) offspring, L being a random variable taking values in the non-negative integers with P (L = 1) = 0. Starting from the position at which its parent has died, each child repeats this process, all independently of one another and of their parent. For a rigorous definition, see for example [10].

We assume that $m = E[L] - 1 \in (0, \infty)$, which means that the process is supercritical. At position $x > 0$, we add an absorbing barrier, i.e. individuals hitting the barrier are instantly killed without producing offspring. Kesten proved [19] that this process becomes extinct almost surely if and only if the drift $c \ge c_0 = \sqrt{2m}$ (he actually needed $E[L^2] < \infty$ for the "only if" part, but we are going to prove that the statement holds in general). A conjecture of David Aldous [3], originally stated for the branching random walk, says that the number

∗ Laboratoire de Probabilités et Modèles Aléatoires, UMR 7599, Université Paris VI, Case courrier 188, 4 Place Jussieu, 75252 PARIS Cedex 05, France, e-mail: pascal.maillard@upmc.fr


$N_x$ of individuals that have lived during the lifetime of the process satisfies $E[N_x] < \infty$ and $E[N_x \log^+ N_x] = \infty$ in the critical speed area ($c = c_0$), and $P(N_x > n) \sim K n^{-\gamma}$ in the subcritical speed area ($c > c_0$), with some $K > 0$, $\gamma > 1$. For the branching random walk, the conjecture of the critical case was proven by Addario-Berry and Broutin [1] for general reproduction laws satisfying a mild integrability assumption. Aïdékon [2] refined the results for constant $L$ by showing that there are positive constants $\rho$, $C_1$, $C_2$, such that for every $x > 0$, we have
\[
\frac{C_1 x e^{\rho x}}{n(\log n)^2} \le P(N_x > n) \le \frac{C_2 x e^{\rho x}}{n(\log n)^2} \quad \text{for large } n.
\]

Assuming $L$ constant has the advantage that $N_x$ is directly related to the number $Z_x$ of individuals absorbed at the barrier by $N_x - 1 = (Z_x - 1)L/(L-1)$ (indeed, writing $B$ for the number of branching events, every individual either branches or is absorbed, so $N_x = B + Z_x$ and $N_x = 1 + LB$; eliminating $B$ yields the relation), hence it is possible to study $N_x$ through $Z_x$. In this sense, Neveu [23] had already proven the critical case conjecture for branching Brownian motion, since he showed that the process $Z = (Z_x)_{x\ge0}$ is actually a continuous-time Galton–Watson process of finite expectation, but with $E[Z_x \log^+ Z_x] = \infty$ for every $x > 0$, if $c = c_0$.
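The value of $c_0$ can be anticipated by a first-moment heuristic (recorded here for orientation only; it plays no role in the proofs). The expected population size at time $t$ grows like $e^{mt}$, while a single Brownian path with drift $-c$ (the motion seen from the barrier) stays positive up to time $t$ with probability of order $e^{-c^2 t/2}$, so that
\[
E\bigl[\#\{\text{individuals alive at time } t \text{ that avoided the barrier}\}\bigr] \approx e^{mt}\, e^{-c^2 t/2} = e^{(m - c^2/2)t},
\]
which decays to 0 precisely when $c \ge \sqrt{2m} = c_0$ (at $c = c_0$ the exponent vanishes and polynomial corrections decide).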

Let $\mathbb{N} = \{1, 2, 3, \ldots\}$ and $\mathbb{N}_0 = \{0\} \cup \mathbb{N}$. Define the infinitesimal transition rates (see [4], p. 104, Equation (6) or [14], p. 95)
\[
q_n = \lim_{x \downarrow 0} \frac{1}{x} P(Z_x = n), \qquad n \in \mathbb{N}_0 \setminus \{1\}.
\]
We propose a refinement of Neveu's result:

Theorem 1.1. Assume $c = c_0$. Assume that $E[L(\log L)^{2+\varepsilon}] < \infty$ for some $\varepsilon > 0$. Then we have, as $n \to \infty$,
\[
\sum_{k=n}^{\infty} q_k \sim \frac{c_0}{n(\log n)^2} \quad \text{and} \quad P(Z_x > n) \sim \frac{c_0 x e^{c_0 x}}{n(\log n)^2} \quad \text{for each } x > 0.
\]

The heavy tail of $Z_x$ suggests that its generating function is amenable to singularity analysis in the sense of [12]. This is in fact the case in both the critical and subcritical cases if we impose a stronger condition upon the offspring distribution, and it leads to the next theorem. Define $f(s) = E[s^L]$, the generating function of the offspring distribution. Denote by $\delta$ the span of $L - 1$, i.e. the greatest positive integer such that $L - 1$ is concentrated on $\delta\mathbb{Z}$. Let $\lambda_c \le \overline{\lambda}_c$ be the two roots of the quadratic equation $\lambda^2 - 2c\lambda + c_0^2 = 0$ and denote by $d = \overline{\lambda}_c/\lambda_c$ the ratio of the two roots. Note that $c = c_0$ if and only if $\lambda_c = \overline{\lambda}_c$, if and only if $d = 1$.

Theorem 1.2. Assume that the law of $L$ admits exponential moments, i.e. that the radius of convergence of the power series $E[s^L]$ is greater than 1.

− In the critical speed area ($c = c_0$), as $n \to \infty$,
\[
q_{\delta n + 1} \sim \frac{c_0}{\delta n^2 (\log n)^2} \quad \text{and} \quad P(Z_x = \delta n + 1) \sim \frac{c_0 x e^{c_0 x}}{\delta n^2 (\log n)^2} \quad \text{for each } x > 0.
\]
− In the subcritical speed area ($c > c_0$) there exists a constant $K = K(c, f) > 0$, such that, as $n \to \infty$,
\[
q_{\delta n + 1} \sim \frac{K}{n^{d+1}} \quad \text{and} \quad P(Z_x = \delta n + 1) \sim \frac{e^{\overline{\lambda}_c x} - e^{\lambda_c x}}{\overline{\lambda}_c - \lambda_c}\, \frac{K}{n^{d+1}} \quad \text{for each } x > 0.
\]

Furthermore, $q_{\delta n + k} = P(Z_x = \delta n + k) = 0$ for all $n \in \mathbb{Z}$ and $k \in \{2, \ldots, \delta\}$.

Remark 1.3. The idea of using singularity analysis for the study of $Z_x$ comes from Robin Pemantle's (unfinished) manuscript [24] about branching random walks with Bernoulli reproduction.

Remark 1.4. Since the coefficients of the power series $E[s^L]$ are real and non-negative, Pringsheim's theorem (see e.g. [13], Theorem IV.6, p. 240) entails that the assumption in Theorem 1.2 is satisfied if and only if $f(s)$ is analytic at 1.

Remark 1.5. Let $\beta > 0$ and $\sigma > 0$. We consider a more general branching Brownian motion with branching rate $\beta$ and with drift and variance of the Brownian motion given by $c$ and $\sigma^2$, respectively. Call this process the $(\beta, c, \sigma)$-BBM (the reproduction is still governed by the law of $L$, which is fixed). In this terminology, the process described at the beginning of this section is the $(1, c, 1)$-BBM. The $(\beta, c, \sigma)$-BBM can be obtained from the $(1, c/(\sigma\sqrt{\beta}), 1)$-BBM by rescaling time by a factor $\beta$ and space by a factor $\sigma/\sqrt{\beta}$. Therefore, if we add an absorbing barrier at the point $x > 0$, the $(\beta, c, \sigma)$-BBM becomes extinct a.s. if and only if $c \ge c_0 = \sigma\sqrt{2\beta m}$. Moreover, if we denote by $Z_x^{(\beta,c,\sigma)}$ the number of particles absorbed at $x$, we obtain that
\[
\bigl(Z_x^{(\beta,c,\sigma)}\bigr)_{x\ge0} \quad \text{and} \quad \bigl(Z^{(1,\,c/(\sigma\sqrt{\beta}),\,1)}_{x\sqrt{\beta}/\sigma}\bigr)_{x\ge0} \quad \text{are equal in law.}
\]
In particular, if we denote the infinitesimal transition rates of $(Z_x^{(\beta,c,\sigma)})_{x\ge0}$ by $q_n^{(\beta,c,\sigma)}$, for $n \in \mathbb{N}_0 \setminus \{1\}$, then we have
\[
q_n^{(\beta,c,\sigma)} = \lim_{x\downarrow0} \frac{1}{x} P\bigl(Z_x^{(\beta,c,\sigma)} = n\bigr) = \frac{\sqrt{\beta}}{\sigma}\,\lim_{x\downarrow0} \frac{\sigma}{x\sqrt{\beta}}\, P\bigl(Z^{(1,\,c/(\sigma\sqrt{\beta}),\,1)}_{x\sqrt{\beta}/\sigma} = n\bigr) = \frac{\sqrt{\beta}}{\sigma}\, q_n^{(1,\,c/(\sigma\sqrt{\beta}),\,1)}.
\]
One therefore easily checks that the statements of Theorems 1.1 and 1.2 remain valid for arbitrary $\beta > 0$ and $\sigma > 0$, provided one replaces the constants $c_0$, $\lambda_c$, $\overline{\lambda}_c$, $K$ by $c_0/\sigma^2$, $\lambda_c/\sigma^2$, $\overline{\lambda}_c/\sigma^2$, $\frac{\sqrt{\beta}}{\sigma} K(c/(\sigma\sqrt{\beta}), f)$, respectively.

Remark 1.6. After submission of this article, Yang and Ren published an article [25] which makes it possible to weaken the hypothesis in Theorem 1.1: it is enough to assume that $E[L(\log L)^2] < \infty$. In our proof, one needs to replace the reference [20] by [25] and use Theorem B of [7] instead of our Lemma 4.1, in order to obtain (4.6).

The content of the paper is organised as follows: In Section 2 we derive some preliminary results by probabilistic means. In Section 3, we recall a known relation between $Z_x$ and the so-called Fisher–Kolmogorov–Petrovskii–Piskounov (FKPP) equation. Section 4 is devoted to the proof of Theorem 1.1, which draws on a Tauberian theorem and known asymptotics of travelling wave solutions to the FKPP equation. In Section 5 we review results about complex differential equations, singularity analysis of generating functions and continuous-time Galton–Watson processes. Those are needed for the proof of Theorem 1.2, which is done in Section 6.

2 First results by probabilistic methods

Proposition 2.1. Assume $c > c_0$ and $E[L^2] < \infty$. There exists a constant $C = C(x, c, L) > 0$, such that
\[
P(Z_x > n) \ge \frac{C}{n^d} \quad \text{for large } n.
\]

This result is needed to ensure that the constant $K$ in Theorem 1.2 is non-zero. It is independent of Sections 3 and 4 and in particular of Theorem 1.1. Its proof is entirely probabilistic and follows closely [2].

2.1 Notation and preliminary remarks

Our notation borrows from [20]. An individual is an element of the space of Ulam–Harris labels
\[
U = \bigcup_{n \in \mathbb{N}_0} \mathbb{N}^n,
\]
which is endowed with the ordering relations $\preceq$ and $\prec$ defined by
\[
u \preceq v \iff \exists w \in U : v = uw \qquad \text{and} \qquad u \prec v \iff u \preceq v \text{ and } u \ne v.
\]
The space of Galton–Watson trees is the space of subsets $t \subset U$, such that $\varnothing \in t$, $v \in t$ whenever $v \prec u$ and $u \in t$, and for every $u \in t$ there is a number $L_u \in \mathbb{N}_0$, such that for all $j \in \mathbb{N}$, $uj \in t$ if and only if $j \le L_u$. Thus, $L_u$ is the number of children of the individual $u$.

Branching Brownian motion is defined on the filtered probability space $(\mathcal{T}, \mathcal{F}, (\mathcal{F}_t), P)$. Here, $\mathcal{T}$ is the space of Galton–Watson trees with each individual $u \in t$ having a mark $(\zeta_u, X_u) \in \mathbb{R}_+ \times D(\mathbb{R}_+, \mathbb{R} \cup \{\Delta\})$, where $\Delta$ is a cemetery symbol and $D(\mathbb{R}_+, \mathbb{R} \cup \{\Delta\})$ denotes the Skorokhod space of càdlàg functions from $\mathbb{R}_+$ to $\mathbb{R} \cup \{\Delta\}$. Here, $\zeta_u$ denotes the life length and $X_u(t)$ the position of $u$ at time $t$, or of its ancestor that was alive at time $t$. More precisely, for $v \in t$, let $d_v = \sum_{w \preceq v} \zeta_w$ denote the time of death and $b_v = d_v - \zeta_v$ the time of birth of $v$. Then $X_u(t) = \Delta$ for $t \ge d_u$, and if $v \preceq u$ is such that $t \in [b_v, d_v)$, then $X_u(t) = X_v(t)$.

The sigma-field $\mathcal{F}_t$ contains all the information up to time $t$, and $\mathcal{F} = \sigma\bigl(\bigcup_{t\ge0} \mathcal{F}_t\bigr)$. Let $y, c \in \mathbb{R}$ and let $L$ be some random variable taking values in $\mathbb{N}_0 \setminus \{1\}$. $P = P^{y,c,L}$ is the unique probability measure such that, starting with a single individual at the point $y$,
− each individual moves according to a Brownian motion with drift $c$ until an independent time $\zeta_u$ following an exponential distribution with parameter 1;
− at the time $\zeta_u$ the individual dies and leaves $L_u$ offspring at the position where it has died, with $L_u$ being an independent copy of $L$;
− each child of $u$ repeats this process, all independently of one another and of the past of the process.
Note that often $c$ and $L$ are regarded as fixed and $y$ as variable. In this case, the notation $P^y$ is used. In the same way, expectation with respect to $P$ is denoted by $E$ or $E^y$.

A common technique in branching processes since [21] is to enhance the space $\mathcal{T}$ by selecting an infinite genealogical line of descent from the ancestor $\varnothing$, called the spine. More precisely, if $T \in \mathcal{T}$ and $t$ is its underlying Galton–Watson tree, then $\xi = (\xi_0, \xi_1, \xi_2, \ldots) \in U^{\mathbb{N}_0}$ is a spine of $T$ if $\xi_0 = \varnothing$ and for every $n \in \mathbb{N}_0$, $\xi_{n+1}$ is a child of $\xi_n$ in $t$. This gives the space
\[
\widetilde{\mathcal{T}} = \{(T, \xi) \in \mathcal{T} \times U^{\mathbb{N}_0} : \xi \text{ is a spine of } T\}
\]
of marked trees with spine and the sigma-fields $\widetilde{\mathcal{F}}$ and $\widetilde{\mathcal{F}}_t$. Note that if $(T, \xi) \in \widetilde{\mathcal{T}}$, then $T$ is an element of $\mathcal{T}$.


Assume from now on that $m = E[L] - 1 \in (0, \infty)$. Let $N_t$ be the set of individuals alive at time $t$. Note that every $\widetilde{\mathcal{F}}_t$-measurable function $f : \widetilde{\mathcal{T}} \to \mathbb{R}$ admits a representation
\[
f(T, \xi) = \sum_{u \in N_t} f_u(T)\,\mathbf{1}_{u \in \xi},
\]
where $f_u$ is an $\mathcal{F}_t$-measurable function for every $u \in U$. We can therefore define a measure $\widetilde{P}$ on $(\widetilde{\mathcal{T}}, \widetilde{\mathcal{F}}, (\widetilde{\mathcal{F}}_t))$ by
\[
\int_{\widetilde{\mathcal{T}}} f \, d\widetilde{P} = e^{-mt} \int_{\mathcal{T}} \sum_{u \in N_t} f_u(T)\, P(dT). \tag{2.1}
\]
It is known [20] that this definition is sound and that $\widetilde{P}$ is actually a probability measure with the following properties:

− Under $\widetilde{P}$, the individuals on the spine move according to Brownian motion with drift $c$ and die at an accelerated rate $m + 1$, independently of the motion.
− When an individual on the spine dies, it leaves a random number of offspring at the point where it has died, this number following the size-biased distribution of $L$. In other words, let $\widetilde{L}$ be a random variable with $E[f(\widetilde{L})] = E[f(L)L/(m+1)]$ for every positive measurable function $f$. Then the number of offspring is an independent copy of $\widetilde{L}$.
− Amongst those offspring, the next individual on the spine is chosen uniformly. This individual repeats the behaviour of its parent.
− The other offspring initiate branching Brownian motions according to the law $P$.
Seen as an equation rather than a definition, (2.1) also goes by the name of "many-to-one lemma".
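As a quick sanity check of (2.1), take $f_u \equiv 1$ for all $u \in N_t$; since $\widetilde{P}$ is a probability measure, this recovers the expected exponential growth of the population:
\[
1 = \int_{\widetilde{\mathcal{T}}} 1 \, d\widetilde{P} = e^{-mt} \int_{\mathcal{T}} |N_t| \, P(dT) = e^{-mt} E[|N_t|], \qquad \text{i.e.} \quad E[|N_t|] = e^{mt}.
\]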

2.2 Branching Brownian motion with two barriers

We recall the notation $P^y$ from the previous subsection for the law of branching Brownian motion started at $y \in \mathbb{R}$, and $E^y$ the expectation with respect to $P^y$. Recall the definition of $\widetilde{P}$ and define $\widetilde{P}^y$ and $\widetilde{E}^y$ analogously.

Let $a, b \in \mathbb{R}$ be such that $y \in (a, b)$. Let $\tau = \tau_{a,b}$ be the (random) set of those individuals whose paths enter $(-\infty, a] \cup [b, \infty)$ and all of whose ancestors' paths have stayed inside $(a, b)$. For $u \in \tau$ we denote by $\tau(u)$ the first exit time from $(a, b)$ by $u$'s path, i.e.
\[
\tau(u) = \inf\{t \ge 0 : X_u(t) \notin (a, b)\} = \min\{t \ge 0 : X_u(t) \in \{a, b\}\},
\]
and set $\tau(u) = \infty$ for $u \notin \tau$. The random set $\tau$ is an (optional) stopping line in the sense of [10].

For $u \in \tau$, define $X_u(\tau) = X_u(\tau(u))$. Denote by $Z_{a,b}$ the number of individuals leaving the interval $(a, b)$ at the point $a$, i.e.
\[
Z_{a,b} = \sum_{u \in \tau} \mathbf{1}_{X_u(\tau) = a}.
\]

Lemma 2.2. Assume $|c| > c_0$ and define $\rho = \sqrt{c^2 - c_0^2}$. Then
\[
E^y[Z_{a,b}] = e^{c(a-y)}\,\frac{\sinh((b-y)\rho)}{\sinh((b-a)\rho)}.
\]
If, furthermore, $V = E[L(L-1)] < \infty$, then
\[
E^y[Z_{a,b}^2] = \frac{2V e^{c(a-y)}}{\rho \sinh^3((b-a)\rho)}\Bigl[\sinh((b-y)\rho)\int_a^y e^{c(a-r)}\sinh^2((b-r)\rho)\sinh((r-a)\rho)\,dr + \sinh((y-a)\rho)\int_y^b e^{c(a-r)}\sinh^3((b-r)\rho)\,dr\Bigr] + E^y[Z_{a,b}].
\]

Proof. On the space $\widetilde{\mathcal{T}}$ of marked trees with spine, define the random variable $I$ by $I = i$ if $\xi_i \in \tau$ and $I = \infty$ otherwise. For an event $A$ and a random variable $Y$ write $E[Y, A]$ instead of $E[Y\mathbf{1}_A]$. Then
\[
E^y[Z_{a,b}] = E^y\Bigl[\sum_{u \in \tau} \mathbf{1}_{X_u(\tau) = a}\Bigr] = \widetilde{E}^y\bigl[e^{m\tau(\xi_I)},\, I < \infty,\, X_{\xi_I}(\tau) = a\bigr]
\]
by the many-to-one lemma extended to optional stopping lines (see [6], Lemma 14.1 for a discrete version). But since the spine follows Brownian motion with drift $c$, we have $I < \infty$, $\widetilde{P}$-a.s., and the above quantity is therefore equal to
\[
W^{y,c}[e^{mT}, B_T = a],
\]
where $W^{y,c}$ is the law of standard Brownian motion with drift $c$ started at $y$, $(B_t)_{t\ge0}$ the canonical process and $T = T_{a,b}$ the first exit time from $(a, b)$ of $B_t$. By Girsanov's theorem, and recalling that $m = c_0^2/2$, this is equal to
\[
W^y[e^{c(B_T - y) - \frac{1}{2}(c^2 - c_0^2)T}, B_T = a],
\]
where $W^y = W^{y,0}$. Evaluating this expression ([8], p. 212, Formula 1.3.0.5) gives the first equality.

For $u \in U$, let $\Theta_u$ be the operator that maps a tree in $\mathcal{T}$ to its sub-tree rooted at $u$. Denote further by $C_u$ the set of $u$'s children, i.e. $C_u = \{uk : 1 \le k \le L_u\}$. Then note that for each $u \in \tau$ we have
\[
Z_{a,b} = 1 + \sum_{v \prec u}\ \sum_{\substack{w \in C_v \\ w \npreceq u}} Z_{a,b} \circ \Theta_w,
\]
hence
\[
E^y[Z_{a,b}^2] = E^y\Bigl[\sum_{u \in \tau} \mathbf{1}_{X_u(\tau) = a}\, Z_{a,b}\Bigr] = E^y[Z_{a,b}] + \widetilde{E}^y\Bigl[e^{m\tau(\xi_I)} \sum_{v \prec \xi_I}\ \sum_{\substack{w \in C_v \\ w \npreceq \xi_I}} Z_{a,b} \circ \Theta_w,\ X_{\xi_I}(\tau) = a\Bigr]. \tag{2.2}
\]

Define the $\sigma$-algebras
\[
\mathcal{G} = \sigma(X_{\xi_I}(t);\ t \ge 0), \qquad \mathcal{H} = \mathcal{G} \vee \sigma(\zeta_v;\ v \prec \xi_I), \qquad \mathcal{I} = \mathcal{H} \vee \sigma(\xi_i, L_{\xi_i};\ i \in \mathbb{N}_0),
\]
such that $\mathcal{G}$ contains the information about the path of the spine up to the individual that quits $(a, b)$ first, $\mathcal{H}$ adds to $\mathcal{G}$ the information about the fission times on the spine, and $\mathcal{I}$ adds to $\mathcal{H}$ the information about the individuals of the spine and the number of their children. Now, conditioning on $\mathcal{I}$ and using the strong branching property, the second term in the last line of (2.2) is equal to
\[
\widetilde{E}^y\Bigl[e^{m\tau(\xi_I)} \sum_{v \prec \xi_I} (L_v - 1)\, E^{X_v(d_v-)}[Z_{a,b}],\ X_{\xi_I}(\tau) = a\Bigr]
\]
(recall that $d_v$ is the time of death of $v$). Conditioning on $\mathcal{H}$ and noting that $L_v$ follows the size-biased law of $L$ for an individual $v$ on the spine, this yields
\[
\widetilde{E}^y\Bigl[e^{m\tau(\xi_I)} \sum_{v \prec \xi_I} \frac{V}{m+1}\, E^{X_{\xi_I}(d_v)}[Z_{a,b}],\ X_{\xi_I}(\tau) = a\Bigr].
\]
Finally, since under $\widetilde{P}$ the fission times on the spine form a Poisson process of intensity $m + 1$, conditioning on $\mathcal{G}$ and applying Girsanov's theorem yields
\[
W^y\Bigl[e^{c(B_T - y) - \frac{1}{2}\rho^2 T} \int_0^T V E^{B_t}[Z_{a,b}]\,dt,\ B_T = a\Bigr] = V e^{c(a-y)} \int_a^b E^r[Z_{a,b}]\, W^y\bigl[e^{-\frac{1}{2}\rho^2 T} L_T^r,\ B_T = a\bigr]\,dr,
\]
where $L_T^r$ is the local time of $(B_t)$ at the time $T$ and the point $r$. The last expression can be evaluated explicitly ([8], p. 215, Formula 1.3.3.8) and gives the desired equality.

Corollary 2.3. Under the assumptions of Lemma 2.2, for each $b > 0$ there are positive constants $C_b^{(1)}$, $C_b^{(2)}$, such that, as $a \to -\infty$,
a) $E^0[Z_{a,b}] \sim C_b^{(1)} e^{(c+\rho)a}$,
b) if $c > c_0$, $E^0[Z_{a,b}^2] \sim C_b^{(2)} e^{(c+\rho)a}$, and
c) if $c < -c_0$, $E^0[Z_{a,b}^2] \sim C_b^{(2)} e^{2(c+\rho)a}$.

The following result is well known and is only included for completeness. We emphasise that the only moment assumption here is $m = E[L] - 1 \in (0, \infty)$. Recall that $Z_x$ denotes the number of particles absorbed at $x$ of a BBM started at the origin. For $|c| \ge c_0$, define $\lambda_c$ to be the smaller root of $\lambda^2 - 2c\lambda + c_0^2$, thus $\lambda_c = c - \sqrt{c^2 - c_0^2}$.

Lemma 2.4. Let $x > 0$.
− If $|c| \ge c_0$, then $E[Z_x] = e^{\lambda_c x}$.
− If $|c| < c_0$, then $E[Z_x] = +\infty$.

Proof. We proceed similarly to the first part of Lemma 2.2. Define the (optional) stopping line $\tau$ of the individuals whose paths enter $[x, \infty)$ and all of whose ancestors' paths have stayed inside $(-\infty, x)$. Define $I$ as in the proof of Lemma 2.2. By the stopping line version of the many-to-one lemma we have
\[
E[Z_x] = E\Bigl[\sum_{u \in \tau} 1\Bigr] = \widetilde{E}\bigl[e^{m\tau(\xi_I)},\, I < \infty\bigr].
\]
By Girsanov's theorem, this equals
\[
W\bigl[e^{cx - \frac{1}{2}(c^2 - c_0^2)T_x},\, T_x < \infty\bigr],
\]
where $W$ is the law of standard Brownian motion started at 0 and $T_x$ is the first hitting time of $x$. The result now follows from [8], p. 198, Formula 1.2.0.1.
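For the case $|c| \ge c_0$, the cited formula amounts to the standard Laplace transform $W[e^{-\theta T_x}] = e^{-x\sqrt{2\theta}}$, $\theta \ge 0$, of the first hitting time of $x > 0$ by standard Brownian motion; a one-line evaluation (with $\rho = \sqrt{c^2 - c_0^2}$) gives
\[
W\bigl[e^{cx - \frac{1}{2}(c^2 - c_0^2)T_x},\, T_x < \infty\bigr] = e^{cx}\, W\bigl[e^{-\frac{1}{2}\rho^2 T_x}\bigr] = e^{cx} e^{-\rho x} = e^{(c - \rho)x} = e^{\lambda_c x}.
\]
For $|c| < c_0$ the exponent $-\frac{1}{2}(c^2 - c_0^2)$ is positive and the expectation is infinite, since $T_x$ has a polynomial tail.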

2.3 Proof of Proposition 2.1

By hypothesis, $c > c_0$, $E[L^2] < \infty$ and the BBM starts at the origin. Let $x > 0$ and let $\tau = \tau_x$ be the stopping line of those individuals hitting the point $x$ for the first time. Then $Z_x = |\tau_x|$. Let $a < 0$ and $n \in \mathbb{N}$. By the strong branching property,
\[
P^0(Z_x > n) \ge P^0(Z_x > n \mid Z_{a,x} \ge 1)\, P^0(Z_{a,x} \ge 1) \ge P^a(Z_x > n)\, P^0(Z_{a,x} \ge 1).
\]
If $P^0_-$ denotes the law of branching Brownian motion started at the point 0 with drift $-c$, then
\[
P^a(Z_x > n) = P^0_-(Z_{a-x} > n) \ge P^0_-(Z_{a-x,1} > n).
\]
In order to bound this quantity, we choose $a = a_n$ in such a way that $n = \frac{1}{2} E^0_-[Z_{a_n-x,1}]$. By Corollary 2.3 a), c) (applied with drift $-c$) and the Paley–Zygmund inequality, there is then a constant $C_1 > 0$, such that
\[
P^0_-(Z_{a_n-x,1} > n) \ge \frac{1}{4}\,\frac{E^0_-[Z_{a_n-x,1}]^2}{E^0_-[Z_{a_n-x,1}^2]} \ge C_1 \quad \text{for large } n.
\]
Furthermore, by Corollary 2.3 a) (applied with drift $-c$), we have
\[
\tfrac{1}{2} C_1^{(1)} e^{-\lambda_c(a_n - x)} \sim n, \quad \text{as } n \to \infty,
\]
and therefore $a_n = -(1/\lambda_c)\log n + O(1)$. Again by the Paley–Zygmund inequality and Corollary 2.3 a), b) (applied with drift $c$), there exists $C_2 > 0$, such that for large $n$,
\[
P^0(Z_{a_n,x} \ge 1) = P^0(Z_{a_n,x} > 0) \ge \frac{E^0[Z_{a_n,x}]^2}{E^0[Z_{a_n,x}^2]} \ge \frac{(C_x^{(1)})^2}{2 C_x^{(2)}}\, e^{\overline{\lambda}_c a_n} \ge \frac{C_2}{n^d}.
\]
This proves the proposition with $C = C_1 C_2$.

3 The FKPP equation

As was already observed by Neveu [23], the translational invariance of Brownian motion and the strong branching property immediately imply that $Z = (Z_x)_{x\ge0}$ is a homogeneous continuous-time Galton–Watson process (for an overview of these processes, see [4], Chapter III or [14], Chapter V). There is therefore an infinitesimal generating function
\[
a(s) = \alpha\Bigl(\sum_{n=0}^{\infty} p_n s^n - s\Bigr), \qquad \alpha > 0,\ p_1 = 0, \tag{3.1}
\]
associated to it. It is a strictly convex function on $[0, 1]$, with $a(0) \ge 0$ and $a(1) \le 0$. Its probabilistic interpretation is
\[
\alpha = \lim_{x \to 0} \frac{1}{x} P(Z_x \ne 1) \quad \text{and} \quad p_n = \lim_{x \to 0} P(Z_x = n \mid Z_x \ne 1),
\]

hence $q_n = \alpha p_n$ for $n \in \mathbb{N}_0 \setminus \{1\}$. Note that with no further conditions on $c$ and $L$, the sum $\sum_{n\ge0} p_n$ need not necessarily be 1, i.e. the rate $\alpha p_\infty$, where $p_\infty = 1 - \sum_{n\ge0} p_n$, with which the process jumps to $+\infty$, may be positive.

We further define $F_x(s) = E[s^{Z_x}]$, which is linked to $a(s)$ by Kolmogorov's forward and backward equations ([4], p. 106 or [14], p. 102):
\[
\frac{\partial}{\partial x} F_x(s) = a(s)\,\frac{\partial}{\partial s} F_x(s) \qquad \text{(forward equation)} \tag{3.2}
\]
\[
\frac{\partial}{\partial x} F_x(s) = a[F_x(s)] \qquad \text{(backward equation)} \tag{3.3}
\]
The forward equation implies that if $a(1) = 0$ and $\phi(x) = E[Z_x] = \frac{\partial}{\partial s} F_x(1-)$, then $\phi'(x) = a'(1)\phi(x)$, whence $E[Z_x] = e^{a'(1)x}$. On the other hand, if $a(1) < 0$, then the process jumps to $\infty$ with positive rate, hence $E[Z_x] = \infty$ for all $x > 0$.
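To spell out the first implication (a sketch; when $E[Z_x^2] = \infty$ the interchange of limit and derivative requires a routine monotone approximation): differentiate (3.2) with respect to $s$ and let $s \uparrow 1$; the term carrying the second derivative is multiplied by $a(1) = 0$ and drops out,
\[
\frac{\partial}{\partial x}\frac{\partial}{\partial s}F_x(s) = a'(s)\,\frac{\partial}{\partial s}F_x(s) + a(s)\,\frac{\partial^2}{\partial s^2}F_x(s) \quad\xrightarrow{\ s \uparrow 1\ }\quad \phi'(x) = a'(1)\,\phi(x),
\]
whence $\phi(x) = \phi(0)\,e^{a'(1)x} = e^{a'(1)x}$.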

The next lemma is an extension of a result which is stated, but not proven, in [23], Equation (1.1). According to Neveu, it is due to A. Joffe. To the knowledge of the author, no proof of this result exists in the current literature, which is why we prove it here.

Lemma 3.1. Let $(Y_t)_{t\ge0}$ be a homogeneous Galton–Watson process started at 1, which may explode and may jump to $+\infty$ with positive rate. Let $u(s)$ be its infinitesimal generating function and $F_t(s) = E[s^{Y_t}]$. Let $q$ be the smallest zero of $u(s)$ in $[0, 1]$.

1. If $q < 1$, then there exist $t_- \in \mathbb{R} \cup \{-\infty\}$ and a strictly decreasing smooth function $\psi_- : (t_-, +\infty) \to (q, 1)$ with $\lim_{t \to t_-} \psi_-(t) = 1$ and $\lim_{t \to \infty} \psi_-(t) = q$, such that on $(q, 1)$ we have
\[
u = \psi_-' \circ \psi_-^{-1}, \qquad F_t(s) = \psi_-\bigl(\psi_-^{-1}(s) + t\bigr).
\]
2. If $q > 0$, then there exist $t_+ \in \mathbb{R} \cup \{-\infty\}$ and a strictly increasing smooth function $\psi_+ : (t_+, +\infty) \to (0, q)$ with $\lim_{t \to t_+} \psi_+(t) = 0$ and $\lim_{t \to \infty} \psi_+(t) = q$, such that on $(0, q)$ we have
\[
u = \psi_+' \circ \psi_+^{-1}, \qquad F_t(s) = \psi_+\bigl(\psi_+^{-1}(s) + t\bigr).
\]
The functions $\psi_-$ and $\psi_+$ are unique up to translation.

Moreover, the following statements are equivalent:
− For all $t > 0$, $Y_t < \infty$ a.s.
− $q = 1$ or $t_- = -\infty$.

Proof. We first note that $u(s) > 0$ on $(0, q)$ and $u(s) < 0$ on $(q, 1)$, since $u(s)$ is strictly convex, $u(0) \ge 0$ and $u(1) \le 0$. Since $F_0(s) = s$, Kolmogorov's forward equation (3.2) implies that $F_t(s)$ is strictly increasing in $t$ for $s \in (0, q)$ and strictly decreasing in $t$ for $s \in (q, 1)$. The backward equation (3.3) implies that $F_t(s)$ converges to $q$ as $t \to \infty$ for every $s \in [0, 1)$. Repeated application of (3.3) yields that $F_t(s)$ is a smooth function of $t$ for every $s \in [0, 1]$.

Now assume that $q < 1$. For $n \in \mathbb{N}$ set $s_n = 1 - 2^{-n}(1 - q)$, such that $q < s_1 < 1$, $s_n < s_{n+1}$ and $s_n \to 1$ as $n \to \infty$. Set $t_1 = 0$ and define $t_n$ recursively by
\[
t_{n+1} = t_n - t', \quad \text{where } t' > 0 \text{ is such that } F_{t'}(s_{n+1}) = s_n.
\]
Then $(t_n)_{n\in\mathbb{N}}$ is a decreasing sequence and thus has a limit $t_- \in \mathbb{R} \cup \{-\infty\}$. We now define for $t \in (t_-, +\infty)$,
\[
\psi_-(t) = F_{t - t_n}(s_n) \quad \text{for any } n \in \mathbb{N} \text{ with } t_n \le t.
\]
The function $\psi_-$ is well defined, since for every $n \in \mathbb{N}$ and $t \ge t_n$,
\[
F_{t - t_n}(s_n) = F_{t - t_n}\bigl(F_{t_n - t_{n+1}}(s_{n+1})\bigr) = F_{t - t_{n+1}}(s_{n+1}),
\]

by the branching property. The same argument shows that if $s \in (q, 1)$, $s_n > s$ and $t' > 0$ is such that $F_{t'}(s_n) = s$, then $F_t(s) = F_{t+t'}(s_n) = \psi_-(t + t' + t_n)$ for all $t \ge 0$. In particular, $\psi_-(t' + t_n) = s$, hence $F_t(s) = \psi_-(\psi_-^{-1}(s) + t)$. The backward equation (3.3) now gives
\[
u(s) = \frac{\partial}{\partial t} F_t(s)\Big|_{t=0} = \psi_-'\bigl(\psi_-^{-1}(s)\bigr).
\]

The second part concerning $\psi_+$ is proven completely analogously. Uniqueness up to translation of $\psi_-$ and $\psi_+$ is obvious from the requirement $\psi(\psi^{-1}(s) + t) = F_t(s)$, where $\psi$ is either $\psi_-$ or $\psi_+$.

For the last statement, note that $P(Y_t < \infty) = 1$ for all $t > 0$ if and only if $F_t(1-) = 1$ for all $t > 0$. But this is the case exactly if $q = 1$ or $t_- = -\infty$.

The following proposition shows that the functions $\psi_-$ and $\psi_+$ corresponding to $(Z_x)_{x\ge0}$ are so-called travelling wave solutions of a reaction–diffusion equation called the Fisher–Kolmogorov–Petrovskii–Piskounov (FKPP) equation. This should not be regarded as a new result, since Neveu ([23], Proposition 3) proved it already for the case $c \ge c_0$ and $L = 2$ a.s. (dyadic branching). However, his proof relied on a path decomposition result for Brownian motion, whereas we show that it follows from a simple renewal argument valid for branching diffusions in general.

Recall that $f(s) = E[s^L]$ denotes the generating function of $L$. Let $q'$ be the unique fixed point of $f$ in $[0, 1)$ (which exists, since $f'(1) = m + 1 > 1$), and let $q$ be the smallest zero of $a(s)$ in $[0, 1]$.

Proposition 3.2. Assume $c \in \mathbb{R}$. The functions $\psi_-$ and $\psi_+$ from Lemma 3.1 corresponding to $(Z_x)_{x\ge0}$ are solutions to the following differential equation on $(t_-, +\infty)$ and $(t_+, +\infty)$, respectively:
\[
\tfrac{1}{2}\psi'' - c\psi' = \psi - f \circ \psi. \tag{3.4}
\]
Moreover, we have the following three cases:
1. If $c \ge c_0$, then $q = q'$, $t_- = -\infty$, $a(1) = 0$, $a'(1) = \lambda_c$, $E[Z_x] = e^{\lambda_c x}$ for all $x > 0$.
2. If $|c| < c_0$, then $q = q'$, $t_- \in \mathbb{R}$, $a(1) < 0$, $a'(1) = 2c$, $P(Z_x = \infty) > 0$ for all $x > 0$.
3. If $c \le -c_0$, then $q = 1$, $a(1) = 0$, $a'(1) = \lambda_c$, $E[Z_x] = e^{\lambda_c x}$ for all $x > 0$.

Proof. Let $s \in (0, 1)$ and define the function $\psi_s(x) = F_x(s) = E[s^{Z_x}]$ for $x \ge 0$. By symmetry, $Z_x$ has the same law as the number of individuals $N$ absorbed at the origin in a branching Brownian motion started at $x$ and with drift $-c$. By a standard renewal argument (Lemma A.1), the function $\psi_s$ is therefore a solution of (3.4) on $(0, \infty)$ with $\psi_s(0+) = s$. This proves the first statement, in view of the representation of $F_x$ in terms of $\psi_-$ and $\psi_+$ given by Lemma 3.1.

Let $s \in (0, 1) \setminus \{q\}$ and let $\psi = \psi_-$ if $s > q$ and $\psi = \psi_+$ otherwise. By (3.4),
\[
a'(s) = \frac{\psi'' \circ \psi^{-1}(s)}{\psi' \circ \psi^{-1}(s)} = 2c + 2\,\frac{\psi \circ \psi^{-1}(s) - f \circ \psi \circ \psi^{-1}(s)}{\psi' \circ \psi^{-1}(s)} = 2c + 2\,\frac{s - f(s)}{a(s)},
\]

whence, by convexity,
\[
a'(s)\,a(s) = 2c\,a(s) + 2(s - f(s)), \qquad s \in [0, 1]. \tag{3.5}
\]
Assume $|c| \ge c_0$. By Lemma 2.4, $E[Z_x] = e^{\lambda_c x}$, hence $a(1) = 0$ and $a'(1) = \lambda_c$; in particular, $a'(1) > 0$ for $c \ge c_0$ and $a'(1) < 0$ for $c \le -c_0$. By convexity, $q < 1$ for $c \ge c_0$ and $q = 1$ for $c \le -c_0$. The last statement of Lemma 3.1 now implies that $t_- = -\infty$ if $c \ge c_0$.

Now assume $|c| < c_0$. By Lemma 2.4, $E[Z_x] = +\infty$ for all $x > 0$, hence either $a(1) < 0$, or $a(1) = 0$ and $a'(1) = +\infty$; in particular, $q < 1$ by convexity. However, if $a(1) = 0$, then by (3.5), $a'(1) = 2c - 2m/a'(1)$, whence the second case cannot occur. Thus, $a(1) < 0$ and $a'(1) = 2c$ by (3.5).

It remains to show that $q = q'$ if $q < 1$. Assume $q \ne q'$. Then $a(q') \ne 0$ by the (strict) convexity of $a$, and $a'(q') = 2c$ by (3.5). In particular, $a'(q') \ge a'(1)$, which is a contradiction to $a$ being strictly convex.

4 Proof of Theorem 1.1

We start with the following Abelian-type lemma:

Lemma 4.1. Let $X$ be a random variable concentrated on $\mathbb{N}_0$ and let $\varphi(s) = E[s^X]$ be its generating function. Assume that $E[X(\log^+ X)^\gamma] < \infty$ for some $\gamma > 0$. Then, as $s \to 0$,
\[
\varphi'(1) - \varphi'(1-s) = O\bigl((\log\tfrac{1}{s})^{-\gamma}\bigr) \quad \text{and} \quad \varphi'(1)s + \varphi(1-s) - 1 = O\bigl(s(\log\tfrac{1}{s})^{-\gamma}\bigr).
\]
Proof. Let $s_0 > 0$ be such that the function $s \mapsto s(\log\frac{1}{s})^\gamma$ is increasing on $[0, s_0]$. Let $s \in (0, s_0)$. Then, with $p_k = P(X = k)$,
\[
\bigl(\varphi'(1) - \varphi'(1-s)\bigr)\bigl(\log\tfrac{1}{s}\bigr)^\gamma = \sum_{k=1}^{\infty} k p_k \bigl(1 - (1-s)^{k-1}\bigr)\bigl(\log\tfrac{1}{s}\bigr)^\gamma.
\]
If $k \ge s^{-1}$, then $(1 - (1-s)^{k-1})(\log\frac{1}{s})^\gamma \le (\log k)^\gamma$. If $\lceil s_0^{-1} \rceil \le k < s^{-1}$, then $s(\log\frac{1}{s})^\gamma < \frac{1}{k}(\log k)^\gamma$ and thus $(1 - (1-s)^{k-1})(\log\frac{1}{s})^\gamma < ks(\log\frac{1}{s})^\gamma \le (\log k)^\gamma$. Hence,
\[
\sum_{k=\lceil s_0^{-1} \rceil}^{\infty} k p_k \bigl(1 - (1-s)^{k-1}\bigr)\bigl(\log\tfrac{1}{s}\bigr)^\gamma \le \sum_{k=\lceil s_0^{-1} \rceil}^{\infty} p_k k (\log k)^\gamma \le E[X(\log^+ X)^\gamma].
\]
Furthermore, we have for $s \in (0, 1)$,
\[
\sum_{k=1}^{\lceil s_0^{-1} \rceil} k p_k \bigl(1 - (1-s)^{k-1}\bigr)\bigl(\log\tfrac{1}{s}\bigr)^\gamma \le \bigl(s(\log\tfrac{1}{s})^\gamma\bigr) \sum_{k=1}^{\lceil s_0^{-1} \rceil} k^2 p_k \le C,
\]
for some $C > 0$. Collecting these results, we have, for every $s \in (0, 1)$,
\[
\bigl(\varphi'(1) - \varphi'(1-s)\bigr)\bigl(\log\tfrac{1}{s}\bigr)^\gamma \le C + E[X(\log^+ X)^\gamma] < \infty,
\]
by hypothesis. This yields the first equality. Setting $g(s) = \varphi'(1)s + \varphi(1-s) - 1$, we note that $g(0) = 0$ and $g'(s) = \varphi'(1) - \varphi'(1-s) = O\bigl((\log\frac{1}{s})^{-\gamma}\bigr)$, by the first equality. Since $(\log\frac{1}{s})^{-\gamma}$ is slowly varying,
\[
g(s) = \int_0^s g'(r)\,dr = O\bigl(s(\log\tfrac{1}{s})^{-\gamma}\bigr),
\]
by standard theorems on the integration of slowly varying functions (see e.g. [11], Section VIII.9, Theorem 1).

Proof of Theorem 1.1. We have $c = c_0$ by hypothesis. Let $\psi_-$ be the travelling wave from Proposition 3.2, which is defined on $\mathbb{R}$, since $t_- = -\infty$. Let $\phi(x) = 1 - \psi_-(-x)$, such that $\phi(-\infty) = 1 - q$, $\phi(+\infty) = 0$ and
\[
\tfrac{1}{2}\phi''(x) + c_0\phi'(x) = f(1 - \phi(x)) - (1 - \phi(x)), \tag{4.1}
\]
by (3.4). Furthermore, $a(1-s) = \phi'(\phi^{-1}(s))$ and $F_x(1-s) = 1 - \phi(\phi^{-1}(s) - x)$.

Under the hypothesis $E[L(\log L)^{2+\varepsilon}] < \infty$, it is known [20] that there exists $K \in (0, \infty)$, such that $\phi(x) \sim Kxe^{-c_0 x}$ as $x \to \infty$. Since $a(1) = 0$ and $a'(1) = c_0$ by Proposition 3.2, this entails that
\[
\phi'(x) = a(1 - \phi(x)) \sim -c_0 Kxe^{-c_0 x}, \quad \text{as } x \to \infty.
\]

Set $\varphi_1 = \phi'$ and $\varphi_2 = \phi$. By (4.1),
\[
\frac{d}{dx}\begin{pmatrix} \varphi_1(x) \\ \varphi_2(x) \end{pmatrix} = \begin{pmatrix} \phi''(x) \\ \phi'(x) \end{pmatrix} = \begin{pmatrix} -2c_0\phi'(x) + 2[f(1-\phi(x)) - (1-\phi(x))] \\ \phi'(x) \end{pmatrix}.
\]
Setting $g(s) = c_0^2 s + 2[f(1-s) - (1-s)] = 2[f'(1)s + f(1-s) - 1]$, this gives
\[
\frac{d}{dx}\begin{pmatrix} \varphi_1(x) \\ \varphi_2(x) \end{pmatrix} = M\begin{pmatrix} \varphi_1(x) \\ \varphi_2(x) \end{pmatrix} + \begin{pmatrix} g(\varphi_2(x)) \\ 0 \end{pmatrix}, \quad \text{with } M = \begin{pmatrix} -2c_0 & -c_0^2 \\ 1 & 0 \end{pmatrix}. \tag{4.2}
\]
The Jordan decomposition of $M$ is given by
\[
J = A^{-1} M A = \begin{pmatrix} -c_0 & 1 \\ 0 & -c_0 \end{pmatrix}, \qquad A = \begin{pmatrix} -c_0 & 1 - c_0 \\ 1 & 1 \end{pmatrix}. \tag{4.3}
\]
Setting $\begin{pmatrix} \varphi_1 \\ \varphi_2 \end{pmatrix} = A\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix}$, we get with $\xi = \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix}$:
\[
\xi'(x) = J\xi(x) + \begin{pmatrix} -g(\phi(x)) \\ g(\phi(x)) \end{pmatrix},
\]
which, in integrated form, becomes
\[
\xi(x) = e^{xJ}\xi(0) + e^{xJ}\int_0^x e^{-yJ}\begin{pmatrix} -g(\phi(y)) \\ g(\phi(y)) \end{pmatrix} dy. \tag{4.4}
\]
Note that
\[
e^{xJ} = \begin{pmatrix} e^{-c_0 x} & x e^{-c_0 x} \\ 0 & e^{-c_0 x} \end{pmatrix}. \tag{4.5}
\]
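The decomposition (4.3) can be checked by verifying $MA = AJ$, i.e. that the columns of $A$ form an eigenvector/generalized-eigenvector pair for the double eigenvalue $-c_0$ of $M$:
\[
MA = \begin{pmatrix} -2c_0 & -c_0^2 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} -c_0 & 1-c_0 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} c_0^2 & c_0^2 - 2c_0 \\ -c_0 & 1-c_0 \end{pmatrix} = \begin{pmatrix} -c_0 & 1-c_0 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} -c_0 & 1 \\ 0 & -c_0 \end{pmatrix} = AJ,
\]
and (4.5) is the usual exponential of a Jordan block, $e^{xJ} = e^{-c_0 x}(I + xN)$ with $N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ nilpotent.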

By the definition of $\xi$ and the above asymptotics of $\phi$, we have $g(\phi(x)) = O(e^{-c_0 x}/x^{1+\varepsilon})$, as $x \to \infty$, by Lemma 4.1 and the hypothesis on $L$. Equations (4.4) and (4.5) now imply that
\[
\xi_2(x) \sim e^{-c_0 x}\Bigl(\xi_2(0) + \int_0^{\infty} e^{c_0 y} g(\phi(y))\,dy\Bigr)
\]
and
\[
(\xi_1 + \xi_2)(x) \sim x e^{-c_0 x}\Bigl(\xi_2(0) + \int_0^{\infty} e^{c_0 y} g(\phi(y))\,dy\Bigr),
\]
and since $\phi = \xi_1 + \xi_2$ and $\xi_2 = \phi' + c_0\phi$, this gives
\[
(\phi' + c_0\phi)(x) \sim \phi(x)/x \sim K e^{-c_0 x}. \tag{4.6}
\]

With this information, one can now show by elementary calculus (see Section A.2) that
\[
a''(1-s) \sim \frac{c_0}{s(\log\frac{1}{s})^2}, \quad \text{and} \tag{4.7}
\]
\[
F_x''(1-s) \sim \frac{c_0 x e^{c_0 x}}{s(\log\frac{1}{s})^2}, \quad \text{as } s \to 0. \tag{4.8}
\]
By standard Tauberian theorems ([11], Section XIII.5, Theorem 5), (4.7) implies that
\[
U(n) = \sum_{k=1}^{n} k^2 q_k \sim c_0\,\frac{n}{(\log n)^2}, \quad \text{as } n \to \infty.
\]
By integration by parts, this entails that
\[
\sum_{k=n}^{\infty} q_k = \int_{n-}^{\infty} x^{-2}\, U(dx) \sim c_0\Bigl(2\int_n^{\infty} \frac{dx}{x^2(\log x)^2} - \frac{1}{n(\log n)^2}\Bigr).
\]
But the last integral is equivalent to $1/(n(\log n)^2)$ ([11], Section VIII.9, Theorem 1), which proves the first part of the theorem. The second part is proven analogously, using (4.8) instead.
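The equivalence invoked in the last step follows from a single integration by parts, in which the boundary term dominates:
\[
\int_n^{\infty} \frac{dx}{x^2(\log x)^2} = \frac{1}{n(\log n)^2} - 2\int_n^{\infty} \frac{dx}{x^2(\log x)^3} = \frac{1}{n(\log n)^2}\Bigl(1 + O\Bigl(\frac{1}{\log n}\Bigr)\Bigr),
\]
so that the right-hand side above is indeed $\sim c_0(2 - 1)/(n(\log n)^2) = c_0/(n(\log n)^2)$.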

5 Preliminaries for the proof of Theorem 1.2

In light of Proposition 2.1, one may suspect that under suitable conditions on $L$, one may extend the proof of Theorem 1.1 to the subcritical case $c > c_0$ and prove that, as $n \to \infty$, $P(Z_x > n) \sim C' n^{-d}$ for some constant $C'$. In order to apply Tauberian theorems, one would then have to establish asymptotics for the $(\lfloor d \rfloor + 1)$-th derivatives of $a(s)$ and $F_x(s)$ as $s \to 1$. In trying to do this, one quickly sees that the known asymptotics for the travelling wave ($1 - \psi_-(x) \sim \mathrm{const} \times e^{-\lambda_c x}$ as $x \to -\infty$, see [20]) are not precise enough for this method to work. However, instead of relying on Tauberian theorems, one can analyse the behaviour of the holomorphic function $a(s)$ near its singular point 1. This method has been widely used in combinatorics at least since the seminal paper by Flajolet and Odlyzko [12], and it is the basis for our proof of Theorem 1.2. Not only does it work in both the critical and subcritical cases, it even yields asymptotics for the density instead of the tail only.

In the rest of this section, we will define our notation for the complex analytic part of the proof and review some necessary general complex analytic results.

5.1 Notation

In the course of the paper, we will work in the spaces $\mathbb{C}$ and $\mathbb{C}^2$, endowed with the Euclidean topology. An open and connected set is called a region; an open set containing a point $z_0$ is also called a neighbourhood of $z_0$. The closure of a set $D$ is denoted by $\overline{D}$, its border by $\partial D$. The disk of radius $r$ around $z_0$ is denoted by $D(z_0, r) = \{z \in \mathbb{C} : |z - z_0| < r\}$, its closure and border by $\overline{D}(z_0, r)$ and $\partial D(z_0, r)$, respectively. We further use the abbreviation $\mathbb{D} = D(0, 1)$ for the unit disk. For $0 \le \varphi \le \pi$, $r > 0$ and $x \in \mathbb{R}$, we define
\[
\begin{aligned}
G(\varphi, r) &= \{z \in D(1, r) \setminus \{1\} : |\arg(1 - z)| < \pi - \varphi\}, & S_+(\varphi, x) &= [x, \infty) \times (-\varphi, \varphi),\\
\Delta(\varphi, r) &= \{z \in D(0, 1 + r) \setminus \{1\} : |\arg(1 - z)| < \pi - \varphi\}, & S_-(\varphi, x) &= (-\infty, x] \times (-\varphi, \varphi),\\
H(\varphi, r) &= \{z \in D(0, r) \setminus \{0\} : |\arg z| < \varphi\}.
\end{aligned}
\]
Note that $H(\varphi, r) = 1 - G(\pi - \varphi, r)$. Here and during the rest of the paper, $\arg(z)$ and $\log(z)$ denote the principal values of argument and logarithm, respectively.

Let $G$ be a region in $\mathbb{C}$, $z_0 \in \overline{G}$ and $f$ and $g$ analytic functions in $G$ with $g(z) \ne 0$ for all $z \in G$. We write
\[
\begin{aligned}
f(z) = o(g(z)) &\iff \forall \varepsilon > 0\ \exists \delta > 0\ \forall z \in G \cap D(z_0, \delta) : |f(z)| \le \varepsilon|g(z)|,\\
f(z) = O(g(z)) &\iff \exists C \ge 0\ \exists \delta > 0\ \forall z \in G \cap D(z_0, \delta) : |f(z)| \le C|g(z)|,\\
f(z) = \widetilde{O}(g(z)) &\iff \exists K \in \mathbb{C} : f(z) = Kg(z) + o(g(z)),\\
f(z) \sim g(z) &\iff f(z) = g(z) + o(g(z)),
\end{aligned}
\]
specifying that the relations hold as $z \to z_0$.

5.2 Complex differential equations

In this section, we review some basics about complex differential equations. We start with the fundamental existence and uniqueness theorem ([5], p. 1, [15], Theorem 2.2.1, p. 45 or [18], Section 12.1, p. 281).

Fact 5.1. Let $G$ be a region in $\mathbb{C}^2$ and $(w_0, z_0)$ a point in $G$. Let $f : G \to \mathbb{C}$ be analytic in $G$, i.e. $f$ is continuous and both partial derivatives exist and are continuous. Then there exist a neighbourhood $U$ of $z_0$ and a unique analytic function $w : U \to \mathbb{C}$, such that
1. $w(z_0) = w_0$,
2. $(w(z), z) \in G$ for all $z \in U$, and
3. $w'(z) = f(w(z), z)$ for all $z \in U$.
In other words, the differential equation $w' = f(w, z)$ with initial condition $w(z_0) = w_0$ has exactly one solution $w(z)$ which is analytic at $z_0$.

The following standard result is a special case of a theorem by Painlevé ([5], p. 11, [15], Theorem 3.2.1, p. 82 or [18], Section 12.3, p. 286f).

Fact 5.2. Let $H$ be a region in $\mathbb{C}$ and $w(z)$ analytic in $H$. Let $G$ be a region in $\mathbb{C}^2$, such that $(w(z), z) \in G$ for each $z \in H$, and suppose that there exists an analytic function $f : G \to \mathbb{C}$, such that $w'(z) = f(w(z), z)$ for each $z \in H$. Let $z_0 \in \partial H$. Suppose that $w(z)$ is continuous at $z_0$ and that $(w(z_0), z_0) \in G$. Then $z_0$ is a regular point of $w(z)$, i.e. $w(z)$ admits an analytic extension at $z_0$.

Let $[z_1, \ldots, z_k]_n$ denote a power series in the variables $z_1, \ldots, z_k$, converging in a neighbourhood of $(0, \ldots, 0)$ and containing only terms of order $n$ or higher. The complex differential equation
\[
z w' = \lambda w + pz + [w, z]_2, \qquad \lambda, p \in \mathbb{C}, \tag{5.1}
\]
was introduced in 1856 by Briot and Bouquet [9] as an example of a complex differential equation admitting analytic solutions at a singular point of the equation. More precisely, they obtained ([15], Theorem 11.1.1, p. 402):

Fact 5.3. If $\lambda$ is not a positive integer, then there exists a unique function $w(z)$ which is analytic in a neighbourhood of $z = 0$ and which satisfies (5.1). Furthermore, $w(0) = 0$.

The singular solutions to this equation were later investigated by Poincaré, Picard and others (for a full bibliography, see [17]). We are going to need the following result (see [17], Paragraph III.9.2° or [15], Theorem 11.1.3, p. 405, but note that the latter reference is without proof and the statement is slightly incomplete).

Fact 5.4. Assume $\lambda > 0$. There exists a function $\psi(z, u) = \sum_{j,k \ge 0} p_{jk} z^j u^k$, converging in a neighbourhood of $(0, 0)$, with $p_{00} = 0$ and $p_{01} = 1$, such that the general solution of (5.1) which vanishes at the origin is $w = \psi(z, u)$, with
− $u = Cz^\lambda$, if $\lambda \notin \mathbb{N}$,
− $u = z^\lambda(C + K\log z)$, if $\lambda \in \mathbb{N}$.
Here, $C \in \mathbb{C}$ is an arbitrary constant and $K \in \mathbb{C}$ is a fixed constant depending only on the right-hand side of (5.1).

Remark 5.5. The above statement is slightly imprecise, in that the term solution is not defined: what a priori knowledge of $w(z)$ (regarding its domain of analyticity, smoothness, behaviour at $z = 0$, ...) is required in order to guarantee that it admits the representation stated in Fact 5.4? Inspecting the proof (as in [17], for example) shows that it is actually enough to know that $w(z)$ satisfies (5.1) on an interval $(0, \varepsilon)$ of the real line and that $w(0+) = 0$. We briefly explain why:

In order to prove Fact 5.4, one shows that there exists a function $\psi$ of the form stated above, such that when changing variables by $w = \psi(z, u)$, the function $u(z)$ formally satisfies one of the equations
\[
z u' = \lambda u \qquad \text{or} \qquad z u' = \lambda u + Kz^\lambda,
\]
according to whether $\lambda \notin \mathbb{N}$ or $\lambda \in \mathbb{N}$.

Now suppose that $w(z)$ satisfies the above conditions. By the implicit function theorem ([16], Theorem 2.1.2), we can invert $\psi$ to obtain a function $\varphi(w, z) = w + qz + [w, z]_2$, $q \in \mathbb{C}$, such that $\psi(z, \varphi(w, z)) = w$ in a neighbourhood of $(0, 0)$. We may thus define $u(z) = \varphi(w(z), z)$ for all $z \in (0, \varepsilon_1)$, for some $\varepsilon_1 > 0$. Moreover, $u(z)$ now truly satisfies the above equations on $(0, \varepsilon_1)$ and $u(0+) = 0$. Standard theory of ordinary differential equations on the real line now yields that $u$ is necessarily of the form stated in Fact 5.4.

We further remark that since $u(z)$ is analytic in the slit plane $\mathbb{C} \setminus (-\infty, 0]$ and goes to 0 as $z \to 0$ in $\mathbb{C} \setminus (-\infty, 0]$, there exists an $r > 0$, such that $(z, u(z))$ is in the domain of convergence of $\psi(z, u)$ for every $z \in H(\pi, r)$. Hence, every solution $w(z)$ can be analytically extended to $H(\pi, r)$.

5.3 Singularity analysis

We now summarise results about the singularity analysis of generating functions. The basic references are [12] and [13], Chapter VI. The results are of two types: those that establish an asymptotic for the coefficients of functions that are explicitly known, and those that estimate the coefficients of functions which are dominated by another function. We start with the results of the first type:

Fact 5.6. Let $d \in (1, \infty) \setminus \mathbb{N}$, $k \in \mathbb{N}$, $\gamma \in \mathbb{Z} \setminus \{0\}$, $\delta \in \mathbb{Z}$ and define the functions $f_1, f_2$ by
\[
f_1(z) = (1-z)^d \quad \text{and} \quad f_2(z) = (1-z)^k\Bigl(\log\frac{1}{1-z}\Bigr)^{\gamma}\Bigl(\log\log\frac{1}{1-z}\Bigr)^{\delta},
\]
for $z \in \mathbb{C} \setminus [1, +\infty)$. Let $(p_n^{(i)})$ be the coefficients of the Taylor expansion of $f_i$ around the origin, $i = 1, 2$. Then $(p_n^{(i)})$ satisfy the following asymptotics as $n \to \infty$:
\[
p_n^{(1)} \sim \frac{K_1}{n^{d+1}} \quad \text{and} \quad p_n^{(2)} \sim \frac{K_2 (\log n)^{\gamma-1} (\log\log n)^{\delta}}{n^{k+1}},
\]
for some non-zero constants $K_1 = K_1(d)$, $K_2 = K_2(k, \gamma, \delta)$. We have $K_2(1, -1, 0) = 1$.

Proof. For $f_1$, this is Proposition 1 from [12]. For $f_2$, this is Remark 3 at the end of Chapter 3 in the same paper. Note that additional factors $\frac{1}{z}$ do not change the nature of the singularities, since $\frac{1}{z}$ is analytic at 1 (see the footnote on p. 385 in [13]). The last statement follows from Remark 3 as well.
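For orientation, the constant for $f_1$ is explicit in [12] (this value is not needed in the sequel): for non-integer $d$,
\[
p_n^{(1)} = [z^n](1-z)^d = (-1)^n\binom{d}{n} \sim \frac{n^{-d-1}}{\Gamma(-d)}, \qquad n \to \infty,
\]
so that $K_1(d) = 1/\Gamma(-d)$.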

The results of the second type are contained in the next theorem. It is identical to Corollary 4 in [12]. Note that a potential difficulty here is that it requires analytical extension outside the unit disk.

Fact 5.7. Let $0 < \varphi < \pi/2$, $r > 0$ and let $f(z)$ be analytic in $\Delta(\varphi, r)$. Assume that, as $z \to 1$ in $\Delta(\varphi, r)$,
\[
f(z) = o\Bigl((1-z)^{\alpha} L\Bigl(\frac{1}{1-z}\Bigr)\Bigr), \quad \text{where } L(u) = (\log u)^{\gamma}(\log\log u)^{\delta}, \quad \alpha, \gamma, \delta \in \mathbb{R}.
\]
Then the coefficients $(p_n)$ of the Taylor expansion of $f$ around 0 satisfy
\[
p_n = o\Bigl(\frac{L(n)}{n^{\alpha+1}}\Bigr), \quad \text{as } n \to \infty.
\]

5.4 An equation for continuous-time Galton–Watson processes

In this section, let $(Y_t)_{t\ge0}$ be a homogeneous continuous-time Galton–Watson process starting at 1. Let $a(s)$ be its infinitesimal generating function and $F_t(s) = E[s^{Y_t}]$. Assume $a(1) = 0$ and $a'(1) = \lambda \in (0, \infty)$, such that $a(s) = 0$ has a unique root $q$ in $[0, 1)$.

The following proposition establishes a relation between the infinitesimal generating function of a Galton–Watson process and its generating function at time $t$. For real $s$, the formulae stated in the proposition are well known, but we will need to use them for complex $s$, which is why we have to include some (complicated) hypotheses to be sure that the functions and integrals appearing in the formulae are well defined.

Proposition 5.8. Suppose that $a$ and $F_t$ have analytic extensions to some regions $D_a$ and $D_F$. Let $Z_a = \{s \in D_a : a(s) = 0\}$. Let there be simply connected regions $G \subset D_a \setminus Z_a$ and $D \subset G \cap D_F$ with $F_t(D) \subset G$ and $D \cap (0, 1) \ne \varnothing$. Then the following equations hold for all $s \in D$:
\[
\int_s^{F_t(s)} \frac{1}{a(r)}\,dr = t \tag{5.2}
\]
and
\[
1 - F_t(s) = e^{\lambda t}(1 - s)\exp\Bigl(-\int_s^{F_t(s)} f^*(r)\,dr\Bigr), \tag{5.3}
\]
where $f^*(s)$ is defined for all $s \in D_a \setminus Z_a$ as
\[
f^*(s) = \frac{\lambda}{a(s)} + \frac{1}{1-s}, \tag{5.4}
\]
and the integrals may be evaluated along any path from $s$ to $F_t(s)$ in $G$.

Proof. For $s \in (0, 1) \setminus \{q\}$, equation (5.2) follows readily from Kolmogorov's backward equation (3.3), when the integral is interpreted as the usual Riemann integral ([4], p. 106). Now note that by definition of $G$, both $\frac{1}{a(s)}$ and $f^*$ are analytic in the simply connected region $G$ and therefore possess antiderivatives $g$ and $h$ in $G$. Thus, the functions
\[
s \mapsto \int_s^{F_t(s)} \frac{1}{a(r)}\,dr = g(F_t(s)) - g(s) \quad \text{and} \quad s \mapsto \int_s^{F_t(s)} f^*(r)\,dr = h(F_t(s)) - h(s)
\]
are analytic in $D$. By the analytic continuation principle, (5.2) then holds for every $s \in D$, since $D \cap (0, 1) \ne \varnothing$ by hypothesis. This proves the first equation. For the second equation, note that $-\log(1 - s)$ is an antiderivative of $\frac{1}{1-s}$ in $G$, whence the right-hand side of (5.3) equals
\[
e^{\lambda t}(1 - s)\exp\Bigl(\log(1 - F_t(s)) - \log(1 - s) - \lambda\int_s^{F_t(s)} \frac{1}{a(r)}\,dr\Bigr) = 1 - F_t(s),
\]
for all $s \in D$, by (5.2). This gives (5.3).
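For real $s$, the computation behind (5.2) is simply separation of variables in the backward equation (3.3): writing $\partial_t F_t(s) = a(F_t(s))$ and substituting $r = F_\tau(s)$,
\[
\int_s^{F_t(s)} \frac{dr}{a(r)} = \int_0^t \frac{\partial_\tau F_\tau(s)}{a(F_\tau(s))}\,d\tau = \int_0^t d\tau = t,
\]
the substitution being licit because $\tau \mapsto F_\tau(s)$ is monotone and avoids the zeros of $a$ for $s \in (0, 1) \setminus \{q\}$.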

Corollary 5.9. If 1 is a regular point of $a(s)$, then it is a regular point of $F_t(s)$ for every $t \ge 0$.

Proof. Define $G = \{s \in \mathbb{D} : \operatorname{Re} s > q\}$. Then $G \cap Z_a = \varnothing$, since $q$ is the only zero of $a$ in $\mathbb{D}$ (every probability generating function $g$ with $g'(1) > 1$ has exactly one fixed point $q$ in $\mathbb{D}$; this can easily be seen by applying Schwarz's lemma to $\tau^{-1} \circ g \circ \tau$, where $\tau$ is the Möbius transformation of the unit disk that maps 0 to $q$). Let $s_1 \in (q, 1)$ be such that $F_t(s) \in G$ for every $s \in H = \{s \in \mathbb{D} : \operatorname{Re} s > s_1\}$. We can then apply Proposition 5.8 to conclude that (5.3) holds for every $s \in H$.

Since $a(s)$ is analytic in a neighbourhood $U$ of 1 by hypothesis, it is easy to show that $f^*$ is analytic in $U$ as well. Thus, $f^*$ has an antiderivative $F^*$ in $H \cup U$. We define the function $g(s) = (1 - s)\exp(F^*(s))$ on $H \cup U$. Since $g'(1) = -\exp(F^*(1)) \ne 0$, there exists an inverse $g^{-1}$ of $g$ in a neighbourhood $U_1$ of $g(1) = 0$. Let $U_2 \subset U$ be a neighbourhood of 1, such that $e^{\lambda t} g(s) \in U_1$ for every $s \in U_2$. Define the analytic function $\widetilde{F}_t(s) = g^{-1}(e^{\lambda t} g(s))$ for $s \in U_2$. Then by (5.3), we have $F_t(s) = \widetilde{F}_t(s)$ for every $s \in H \cap U_2$, hence $\widetilde{F}_t$ is an analytic extension of $F_t$ at 1.

Corollary 5.10. Suppose that $a(s)$ has an analytic extension to $G(\varphi_0, r_0)$ for some $0 < \varphi_0 < \pi$ and $r_0 > 0$. Suppose further that there exist $c \in \mathbb{R}$, $\gamma > 1$, such that $a(1-s) = -\lambda s + \lambda c s/\log s + O(s/|\log s|^{\gamma})$ as $s \to 0$. Then for every $\varphi_0 < \varphi < \pi$ there exists $r > 0$, such that for every $t \ge 0$ the function $F_t(s)$ has an analytic extension to $G(\varphi, r)$ which maps $G(\varphi, r)$ into $G(\varphi_0, r_0)$.

Proof. Recall that $\lambda > 0$. By hypothesis, we can then assume that $a(s) \ne 0$ in $G(\varphi_0, r_0)$, by choosing $r_0$ small enough. Then $\lambda/a$ has an antiderivative $A$ on $G(\varphi_0, r_0)$. Define $B(s) = A(1-s)$ for $s \in H(\pi - \varphi_0, r_0)$, such that
\[
B'(s) = \frac{1}{s\bigl(1 - c/\log s + O(|\log s|^{-\gamma})\bigr)} = \frac{1}{s} + \frac{c}{s\log s} + O\Bigl(\frac{1}{s|\log s|^{\min(\gamma, 2)}}\Bigr).
\]
We can therefore apply Lemma A.7 to $B$ and deduce that there exist $\varphi_1 \in (\varphi_0, \varphi)$ and $r_1, r \in (0, r_0)$, such that $A$ is injective on $G(\varphi_1, r_1)$ and such that $A(s) + \lambda t \in A(G(\varphi_1, r_1))$ for every $s \in G(\varphi, r)$. Hence, $\widetilde{F}_t(s) = A^{-1}(A(s) + \lambda t)$ is defined and analytic on $G(\varphi, r)$. By (5.2), $\widetilde{F}_t(s) = F_t(s)$ on $G(\varphi, r) \cap \mathbb{D}$, hence $\widetilde{F}_t$ is an analytic extension of $F_t$, mapping $G(\varphi, r)$ into $G(\varphi_1, r_1) \subset G(\varphi_0, r_0)$ by definition.

6 Proof of Theorem 1.2

We turn back to branching Brownian motion and to our Galton–Watson process $Z = (Z_x)_{x\ge0}$ of the number of individuals absorbed at the point $x$. Throughout this section, we place ourselves under the hypotheses of Theorem 1.2, i.e. we assume that $c \ge c_0 = \sqrt{2m}$ and that the radius of convergence of $f(s) = E[s^L]$ is greater than 1. The equation $\lambda^2 - 2c\lambda + c_0^2 = 0$ then has the solutions $\lambda_c = c - \sqrt{c^2 - c_0^2}$ and $\overline{\lambda}_c = c + \sqrt{c^2 - c_0^2}$, hence $\lambda_c = \overline{\lambda}_c = c_0$ if $c = c_0$ and $\lambda_c < c_0 < \overline{\lambda}_c$ otherwise. The ratio $d = \overline{\lambda}_c/\lambda_c$ is therefore strictly greater than or equal to one, according to whether $c > c_0$ or $c = c_0$, respectively. Recall further that $\delta \in \mathbb{N}$ denotes the span of $L - 1$.

Let $a(s) = \alpha(\sum_{k\ge0} p_k s^k - s)$ be the infinitesimal generating function of $Z$ and let $F_x(s) = E[s^{Z_x}]$. We recall equation (3.5) from Section 3: for $s \in [0, 1]$,
\[
a'(s)\,a(s) = 2c\,a(s) + 2(s - f(s)). \tag{6.1}
\]
By the analytic continuation principle, this equation is satisfied on the domain of analyticity of $a(s)$, in particular on $\mathbb{D}$.

We now give a quick overview of the proof. The starting point is equation (6.1). We are going to see that this equation is closely related to the Briot–Bouquet equation (5.1) with $\lambda = d$. The representation of the solution to this equation given by Fact 5.4 will therefore enable us to derive asymptotics for $a(s)$ near its singular point $s = 1$ (Theorem 6.4). Via the results in Section 5.4, we will be able to transfer these to the functions $F_x(s)$ (Corollary 6.6). Finally, the theorems of Flajolet and Odlyzko in Section 5.3 yield the asymptotics for $q_n$ and $P(Z_x = n)$.

More specifically, we will see that the main singular term in the expansion of $a(1-s)$ or $F_x(1-s)$ near $s = 0$ is $s^d$ if $d \notin \mathbb{N}$, and $s^d \log s$ if $d \in \mathbb{N}$. At first sight, this dichotomy might seem strange, but it becomes evident if one remembers that we expect the coefficients of $F_x(s)$ (i.e. the probabilities $P(Z_x = n)$; assume $\delta = 1$) to behave like $1/n^{d+1}$ if $d > 1$ (see Proposition 2.1). In light of Fact 5.6, a logarithmic factor must therefore appear if $d$ is a natural number, since otherwise $F_x(s)$ would be analytic at 1, in which case its coefficients would decrease at least exponentially.
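The last assertion is the standard Cauchy estimate: if $F_x$ were analytic at 1, it would be analytic on a disk $D(0, R)$ with some $R > 1$, and then, for any $1 < r < R$,
\[
P(Z_x = n) = \frac{1}{2\pi i}\oint_{|z|=r} \frac{F_x(z)}{z^{n+1}}\,dz = O(r^{-n}),
\]
which is incompatible with a polynomial decay of order $1/n^{d+1}$.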

We start by determining the singular points of $a(s)$ and $F_x(s)$ on the boundary of the unit disk, which is the content of the next three lemmas.

Lemma 6.1. Let $X$ be a random variable with law $(p_k)_{k\in\mathbb{N}_0}$ and let $x > 0$. Then the spans of $X - 1$ and of $Z_x - 1$ are both equal to $\delta$, the span of $L - 1$.

Proof. This follows from the fact that the BBM starts with one individual and the number of individuals increases by $l - 1$ when an individual gives birth to $l$ children.

Lemma 6.2. If $\delta = 1$, then $a(s)$ and $(F_x(s))_{x>0}$ are analytic at every $s_0 \in \partial\mathbb{D} \setminus \{1\}$. If $\delta \ge 2$, then there exist a function $h(s)$ and a family of functions $(h_x(s))_{x>0}$, all analytic on $\mathbb{D}$, such that
\[
a(s) = s\,h(s^{\delta}) \quad \text{and} \quad F_x(s) = s\,h_x(s^{\delta}),
\]
for every $s \in \mathbb{D}$. Furthermore, $h$ and $(h_x)_{x>0}$ are analytic at every $s_0 \in \partial\mathbb{D} \setminus \{1\}$.

Proof. Assume first that $\delta \ge 2$. Define
\[
h(s) = \alpha\Bigl(\sum_n p_{1+\delta n} s^n - 1\Bigr) \quad \text{and} \quad h_x(s) = \sum_n P(Z_x = 1 + \delta n)\, s^n.
\]
By Lemma 6.1, $p_{k+\delta n} = P(Z_x = k + \delta n) = 0$ for every $k \in \{2, \ldots, \delta\}$ and $n \in \mathbb{Z}$, whence $a(s) = s\,h(s^{\delta})$ and $F_x(s) = s\,h_x(s^{\delta})$ for every $s \in \mathbb{D}$.

We now claim that $a$ and $F_x$ are analytic at every $s_0 \in \partial\mathbb{D}$ with $s_0^{\delta} \ne 1$. Note that if $\delta \ge 2$, this implies that $h$ and $h_x$ are analytic at every $s_0 \in \partial\mathbb{D} \setminus \{1\}$, since the function $s \mapsto s^{\delta}$ has an analytic inverse in a neighbourhood of any $s \ne 0$.

First note that by [11], Lemma XV.2.3, p. 475, we have $|\sum_n p_n s_0^n| < 1$ for every $s_0 \in \partial\mathbb{D}$ such that $s_0^{\delta} \ne 1$, whence $a(s_0) \ne 0$. Now write the differential equation (6.1) in the form
\[
a' = \frac{2ca + 2(s - f(s))}{a} =: g(a, s).
\]
Since the radius of convergence of $f$ is greater than 1 by hypothesis, $g$ is analytic at $(a(s_0), s_0)$. Furthermore, $a$ is continuous at $s_0$, since $\sum_n p_n s^n$ converges absolutely for every $s \in \overline{\mathbb{D}}$. Fact 5.2 now shows that $a$ is analytic at $s_0$.

It remains to show that $F_x$ is analytic at $s_0$. Kolmogorov's forward and backward equations (3.2) and (3.3) imply that $a(s)\,F_x'(s) = a(F_x(s))$ on $[0, 1]$, and the analytic continuation principle implies that this holds on $\mathbb{D}$. Now, let $s_0 \in \partial\mathbb{D}$, such that $s_0^{\delta} \ne 1$. Then we have just shown that $a$ is analytic and non-zero at $s_0$. Furthermore, $|F_x(s_0)| < 1$, by the above stated lemma in [11] and Lemma 6.1. Thus, the function $f(w, s) = a(w)/a(s)$ is analytic at $(F_x(s_0), s_0)$, hence we can apply Fact 5.2 again to conclude that $F_x$ is analytic at $s_0$ as well.

The next lemma ensures that we can ignore certain degenerate cases appearing in the course of the analysis of (3.5). It is the analytic interpretation of the probabilistic results in Section 2.

Lemma 6.3. 1 is a singular point of $a(s)$. If $c = c_0$, then $a''(1) = +\infty$.

Proof. If $c = c_0$, the second assertion follows from Theorem 1.1 or from Neveu's result that $E[Z_x \log^+ Z_x] = \infty$ for $x > 0$ (see the remark before Theorem 1.1). This implies that 1 is a singular point of $a(s)$. If $c > c_0$, Proposition 2.1 implies that $E[s^{Z_x}] = \infty$ for every $s > 1$, whence 1 is a singular point of the generating function $F_x(s)$ by Pringsheim's theorem ([13], Theorem IV.6, p. 240). By Corollary 5.9, it follows that 1 is a singular point of $a(s)$ as well.

Theorem 6.4. Under the assumptions of Theorem 1.2, for every $\varphi \in (0, \pi)$ there exists $r > 0$, such that $a(s)$ possesses an analytic extension (denoted by $a(s)$ as well) to $G(\varphi, r)$. Moreover, as $1 - s \to 1$ in $G(\varphi, r)$, the following holds.
− If $d = 1$, then
\[
a(1-s) = -c_0 s + \frac{c_0 s}{\log\frac{1}{s}} - \frac{c_0 s \log\log\frac{1}{s}}{(\log\frac{1}{s})^2} + \widetilde{O}\Bigl(\frac{s}{(\log\frac{1}{s})^2}\Bigr). \tag{6.2}
\]
− If $d > 1$, then there are $K = K(c, f) \in \mathbb{C} \setminus \{0\}$ and a polynomial $P(s) = \sum_{n=2}^{\lfloor d \rfloor} c_n s^n$, such that
\[
\text{if } d \notin \mathbb{N}: \quad a(1-s) = -\lambda_c s + P(s) + Ks^d + o(s^d), \tag{6.3}
\]
\[
\text{if } d \in \mathbb{N}: \quad a(1-s) = -\lambda_c s + P(s) + Ks^d\log s + o(s^d). \tag{6.4}
\]

Proof of Theorem 6.4. We set $b(s) = a(1-s)$. By (6.1),
\[
-b'(s)\,b(s) = 2c\,b(s) + 2\bigl(1 - s - f(1-s)\bigr) \quad \text{on } D(1, 1). \tag{6.5}
\]
Since $f$ is analytic at 1 by hypothesis, there exist $0 < \varepsilon_1 < 1 - q$ and a function $g$, analytic on $D(0, \varepsilon_1)$ with $g(0) = g'(0) = 0$, such that $f(1-s) = 1 - (m+1)s + g(s)$ for $s \in D(0, \varepsilon_1)$.

As a first step, we analyse (6.5) for real non-negative $s$. Since $\varepsilon_1 < 1 - q$, we have $b(s) < 0$ on $(0, \varepsilon_1)$, whence we can divide both sides by $b(s)$ to obtain
\[
\frac{db}{ds} = \frac{-2cb - c_0^2 s + 2g(s)}{b} \quad \text{on } (0, \varepsilon_1). \tag{6.6}
\]
Introduce the parameter $t(s) = \int_s^{\varepsilon_1} \frac{dr}{-b(r)}$, $s \in (0, \varepsilon_1]$, such that $t(\varepsilon_1) = 0$, $t(0+) = +\infty$ and $t(s)$ is strictly decreasing on $(0, \varepsilon_1]$. There then exists an inverse $s(t)$ on $[0, \infty)$, which satisfies $s'(t) = b(s(t))$. Hence, we have
\[
\frac{db}{dt} = \frac{db}{ds}\frac{ds}{dt} = -2c\,b(t) - c_0^2 s(t) + 2g(s(t)) \quad \text{on } (0, \infty).
\]
In matrix form, this becomes
\[
\frac{d}{dt}\begin{pmatrix} b \\ s \end{pmatrix} = M\begin{pmatrix} b \\ s \end{pmatrix} + \begin{pmatrix} 2g(s) \\ 0 \end{pmatrix}, \qquad M = \begin{pmatrix} -2c & -c_0^2 \\ 1 & 0 \end{pmatrix}, \tag{6.7}
\]
for $t \in (0, \infty)$. Note that this extends (4.2) to the subcritical case. This time, the Jordan decomposition of $M$ is given by
\[
A^{-1} M A = \begin{pmatrix} -\overline{\lambda}_c & 0 \\ 0 & -\lambda_c \end{pmatrix}, \qquad A = \begin{pmatrix} -\overline{\lambda}_c & -\lambda_c \\ 1 & 1 \end{pmatrix}, \quad \text{if } c > c_0, \tag{6.8}
\]
and by (4.3), if $c = c_0$. Setting
\[
\begin{pmatrix} b \\ s \end{pmatrix} = A\begin{pmatrix} B \\ S \end{pmatrix} \tag{6.9}
\]

transforms (6.7) into
\[
\frac{dB}{dt} = -\overline{\lambda}_c B + [B, S]_2, \qquad \frac{dS}{dt} = -\lambda_c S + [B, S]_2, \qquad \text{if } c > c_0, \tag{6.10}
\]
\[
\frac{dB}{dt} = -c_0 B + S + [B, S]_2, \qquad \frac{dS}{dt} = -c_0 S + [B, S]_2, \qquad \text{if } c = c_0, \tag{6.11}
\]
for $t \in (0, \infty)$. Furthermore, by (6.9), we have
\[
s = B + S, \tag{6.12}
\]
\[
S = \begin{cases} (\overline{\lambda}_c - \lambda_c)^{-1}(b + \overline{\lambda}_c s), & \text{if } d > 1,\\ b + c_0 s, & \text{if } d = 1, \end{cases} \tag{6.13}
\]
\[
B = \begin{cases} (\lambda_c - \overline{\lambda}_c)^{-1}(b + \lambda_c s), & \text{if } d > 1,\\ -b + (1 - c_0)s, & \text{if } d = 1. \end{cases} \tag{6.14}
\]

From now on, let $\varepsilon_2, \varepsilon_3, \ldots$ be positive numbers that are as small as necessary. By the strict convexity of $b$ and the fact that $b'(0) = -\lambda_c$ by Lemma 2.4, equation (6.13) implies that $S$ is a strictly convex non-negative function of $s$ on $[0, \varepsilon_2)$. This implies that the inverse $s = s(S)$ exists and is non-negative and strictly concave on $[0, \varepsilon_3)$. It follows that $t(S) = t(s(S))$ exists on $[0, \varepsilon_4)$. Equations (6.10) and (6.11) then yield, for $S \in (0, \varepsilon_4)$,
\[
\frac{dB}{dS} = \frac{d\,B + [B, S]_2}{S + [B, S]_2}, \quad \text{if } c > c_0, \tag{6.15}
\]
\[
\frac{dB}{dS} = \frac{B - c_0^{-1} S + [B, S]_2}{S + [B, S]_2}, \quad \text{if } c = c_0. \tag{6.16}
\]

By (6.12) and the fact that $s(S)$ is strictly concave, $B$ is a strictly concave function of $S$ as well, hence strictly monotone on $(0, \varepsilon_5)$. We claim that $B(S)^2 = o(S)$ as $S \to 0$. For $d > 1$, one checks by (6.13) that $S(s) \sim s$ as $s \to 0$, whence $B(S) = o(S)$ as $S \to 0$, by (6.12). If $d = 1$, then $b'(0) = -c_0$ by Lemma 2.4 and $b''(0) = +\infty$ by Lemma 6.3. Equation (6.13) then implies that $S(s)/s^2 \to +\infty$ as $s \to 0$, whence $s(S) = o(\sqrt{S})$. The claim now follows by (6.12).

Proposition A.4 now tells us that there exists a function $h(z) = [z]_2$, such that the function $s^*(S) = S - h(B(S))$ has an inverse $S(s^*)$ on $(0, \varepsilon_6)$ and $b^*(s^*) = B(S(s^*))$ satisfies the Briot–Bouquet equation
\[
s^*\frac{db^*}{ds^*} = \begin{cases} d\,b^* + [b^*, s^*]_2, & \text{if } d > 1,\\ b^* - c_0^{-1} s^* + [b^*, s^*]_2, & \text{if } d = 1, \end{cases} \tag{6.17}
\]
on $(0, \varepsilon_6)$. By Fact 5.4 and Remark 5.5, there exists then a function $\psi(z, u) = u + rz + [z, u]_2$, $r \in \mathbb{C}$, such that $b^*(s^*) = \psi(s^*, u(s^*))$, where
\[
u(z) = Cz^d, \ \text{if } d \notin \mathbb{N}, \qquad \text{and} \qquad u(z) = Cz^d\log z, \ \text{if } d \in \mathbb{N},
\]
for some constant $C = C(c, f) \in \mathbb{C}$ (the form of $u$ in the case $d \in \mathbb{N}$ can be obtained from the one in Fact 5.4 by changing $\psi$, $C$ and $K$). Moreover, comparing the coefficients of $s^*$ on both sides of (6.17), we get: if $d > 1$, $r = dr$, whence $r = 0$; and if $d = 1$, $r + C = r - c_0^{-1}$, whence $C = -c_0^{-1}$.

Assume now $d > 1$. Then $b^* = u(s^*) + [s^*, u(s^*)]_2$. Recall that $B = b^*$ and $S = s^* + h(b^*)$. By (6.12),
\[
s = B + S = b^* + s^* + h(b^*) = s^* + u(s^*) + [s^*, u(s^*)]_2,
\]
such that $s'(s^*) = 1 + o(1)$ and $s(s^*) = s^* + [s^*]_2 + o((s^*)^{\gamma})$, as $s^* \to 0$, where $\gamma = (d + \lfloor d \rfloor)/2$ if $d \notin \mathbb{N}$, and $\gamma = d - 1/2$ if $d \in \mathbb{N}$. By Lemmas A.6 and A.8, for every $\varphi_0 \in (0, \pi)$ there exists $r_0 > 0$, such that the inverse $s^*(s)$ exists and is analytic on $H(\varphi_0, r_0)$ and satisfies
\[
s^*(s) = s + [s]_2 + o(s^{\gamma}), \quad \text{as } s \to 0.
\]
This entails that
\[
u(s^*(s)) = C(s^*)^d = C(s + o(s))^d = Cs^d + o(s^d), \quad \text{if } d \notin \mathbb{N},
\]
\[
u(s^*(s)) = C(s^*)^d\log s^* = C(s + o(s^{3/2}))^d\log(s + o(s)) = Cs^d\log s + o(s^d), \quad \text{if } d \in \mathbb{N} \setminus \{1\},
\]
\[
(s^*)^n = [s]_2 + o(s^{\gamma+1}) + o(s^{2\gamma}) = [s]_2 + o(s^d), \quad \text{for all } n \ge 2.
\]
It follows that
\[
b^*(s^*(s)) = u(s^*(s)) + [s]_2 + o(s^d), \quad \text{as } s \to 0.
\]
We finally get by (6.9),
\[
b = -\overline{\lambda}_c B - \lambda_c S = -\lambda_c s + (\lambda_c - \overline{\lambda}_c)\,b^* = -\lambda_c s + (\lambda_c - \overline{\lambda}_c)\,u(s^*(s)) + [s]_2 + o(s^d),
\]
which proves (6.3) and (6.4).

If $d = 1$, recall that $u(z) = -c_0^{-1} z\log z = c_0^{-1} z\log\frac{1}{z}$ and $b^* = u(s^*) + rs^* + [s^*, u(s^*)]_2$ for some $r \in \mathbb{C}$. By (6.12),
\[
s = B + S = b^* + s^* + h(b^*) = u(s^*) + (r+1)s^* + [s^*, u(s^*)]_2,
\]
such that $s'(s^*) = c_0^{-1}\log\frac{1}{s^*} + O(1)$ and $s(s^*) = c_0^{-1} s^*\log\frac{1}{s^*} + (r+1)s^* + o(s^*)$. Lemma A.6 now implies that for every $\varphi_0 \in (0, \pi)$ there exists $r_0 > 0$, such that the inverse $s^*(s)$ exists and is analytic on $H(\varphi_0, r_0)$. Now, by (6.9),
\[
b = -c_0 s + S = -c_0 s + s^* + h(b^*) = -c_0 s + s^*(s) + O(s^{3/2}).
\]
Lemma A.9 now yields (6.2).

Remark 6.5. The reason why we cannot explicitly determine the constant $K$ in Theorem 6.4 is that we are analysing (3.5) only locally around the point 1. Since the solution of (3.5) with boundary conditions $a(q) = a(1) = 0$ is unique (this follows from the uniqueness of the travelling wave solutions to the FKPP equation), a global analysis of this equation should be able to exhibit the value of $K$. But it is probably easier to refine the probabilistic arguments of Section 2, which already give a lower bound that can easily be made explicit.

The asymptotics established in Theorem 6.4 for the infinitesimal generating function can now be readily transferred to the generating functions $F_x(s)$.

Corollary 6.6. Under the assumptions of Theorem 1.2, for every $x > 0$ and $\varphi \in (0, \pi)$ there exists $r > 0$, such that $F_x(s) = E[s^{Z_x}]$ can be analytically extended to $G(\varphi, r)$. Furthermore, as $1 - s \to 1$ in $G(\varphi, r)$, the following holds.
− If $d = 1$, then
\[
F_x(1-s) = 1 - e^{c_0 x} s + c_0 x e^{c_0 x}\Bigl(\frac{s}{\log\frac{1}{s}} - \frac{s\log\log\frac{1}{s}}{(\log\frac{1}{s})^2}\Bigr) + \widetilde{O}\Bigl(\frac{s}{(\log\frac{1}{s})^2}\Bigr). \tag{6.18}
\]
− If $d > 1$, then there is a polynomial $P_x(s) = \sum_{n=2}^{\lfloor d \rfloor} c_n s^n$, such that
\[
\text{if } d \notin \mathbb{N}: \quad F_x(1-s) = 1 - e^{\lambda_c x} s + P_x(s) + K_x s^d + o(s^d), \tag{6.19}
\]
\[
\text{if } d \in \mathbb{N}: \quad F_x(1-s) = 1 - e^{\lambda_c x} s + P_x(s) + K_x s^d\log s + o(s^d), \tag{6.20}
\]
where $K_x = K(e^{\overline{\lambda}_c x} - e^{\lambda_c x})/(\overline{\lambda}_c - \lambda_c)$, with $K$ being the constant from Theorem 6.4.

Proof. Let $0 < \varphi_0 < \varphi$. By Theorem 6.4, there exists $r_0 > 0$, such that $a(s)$ can be analytically extended to $G(\varphi_0, r_0)$ and satisfies the hypothesis of Corollary 5.10. It follows that there exists $r > 0$, such that $F_x(s)$ can be analytically extended to $G(\varphi, r)$ and maps $G(\varphi, r)$ into $G(\varphi_0, r_0)$. Hence, the functions
\[
w(s) = 1 - F_x(1-s) \quad \text{and} \quad I(s) = \int_s^{w(s)} f^*(1-r)\,dr,
\]
where $f^*(s)$ is defined as in (5.4), are analytic in $H(\pi - \varphi, r)$. In what follows, we always assume that $s \in H(\pi - \varphi, r)$. Appearance of the symbols $\sim$, $O$, $\widetilde{O}$, $o$ means that we let $s$ go to 0 in $H(\pi - \varphi, r)$.

First of all, we note that by Proposition 5.8, we have
\[
w(s) = s e^{\lambda_c x}\exp(I(s)) = s e^{\lambda_c x}\Bigl(1 + I(s) + \sum_{k=2}^{\infty} \frac{I(s)^k}{k!}\Bigr). \tag{6.21}
\]

Now assume $d > 1$. By Theorem 6.4, $a(1-s) = -\lambda_c s + [s]_2 + u(s) + o(s^d)$, where $u(s) = Ks^d$ or $u(s) = Ks^d\log s$, according to whether $d \notin \mathbb{N}$ or $d \in \mathbb{N}$, respectively. It follows that
\[
f^*(1-s) = \frac{\lambda_c}{a(1-s)} + \frac{1}{s} = -\frac{1}{s}\Bigl(1 - [s]_1 - \frac{u(s)}{\lambda_c s} + o(s^{d-1})\Bigr)^{-1} + \frac{1}{s} = [s]_0 - \frac{u(s)}{\lambda_c s^2} + o(s^{d-2}).
\]
Now, $\int_s^{w(s)} o(r^{d-2})\,dr = o(s^{d-1})$, since $w(s) \sim s e^{\lambda_c x}$ by Lemma 2.4. Thus,
\[
I(s) = \int_s^{w(s)} f^*(1-r)\,dr = [w(s), s]_1 - \int_s^{w(s)} \frac{u(r)}{\lambda_c r^2}\,dr + o(s^{d-1}). \tag{6.22}
\]
Since $\int_s^{w(s)} r^{-2} u(r)\,dr = O(s^{d-1}\log s)$, equations (6.21) and (6.22) now give
\[
w(s) = s e^{\lambda_c x}\Bigl(1 + [w(s), s]_1 - \int_s^{w(s)} \frac{u(r)}{\lambda_c r^2}\,dr + o(s^{d-1})\Bigr). \tag{6.23}
\]
If $d \ge 2$, we deduce that $w(s) = s e^{\lambda_c x} + o(s^{3/2})$. Straightforward calculus now shows that
\[
\int_s^{w(s)} \frac{u(r)}{\lambda_c r^2}\,dr = \frac{K_x}{K e^{\lambda_c x}}\,\frac{u(s)}{s} + [w(s), s]_1 + o(s^{d-1}), \tag{6.24}
\]

and (6.23) and (6.24) now yield
\[
w(s) = s e^{\lambda_c x} + [w(s), s]_2 - \frac{K_x}{K}\,u(s) + o(s^d).
\]
Repeated application of this equation shows that $w(s) = s e^{\lambda_c x} + [s]_2 - \frac{K_x}{K}\,u(s) + o(s^d)$, which yields (6.19) and (6.20).

In the critical case $d = 1$, Theorem 6.4 tells us that
\[
f^*(1-s) = \frac{1}{s}\biggl(-\frac{1}{\log\frac{1}{s}} + \frac{\log\log\frac{1}{s}}{(\log\frac{1}{s})^2} + \widetilde{O}\Bigl(\frac{1}{(\log\frac{1}{s})^2}\Bigr)\biggr). \tag{6.25}
\]
Write $\lambda = \lambda_c = c_0$. For our first approximation of $w(s)$, we note that
\[
I(s) \sim -\int_s^{s e^{\lambda x}} \frac{dr}{r\log\frac{1}{r}} \sim -\frac{1}{\log\frac{1}{s}}\int_s^{s e^{\lambda x}} \frac{dr}{r} = -\frac{\lambda x}{\log\frac{1}{s}},
\]
hence, by (6.21),
\[
w(s) = s e^{\lambda x}\biggl(1 - \frac{\lambda x}{\log\frac{1}{s}} + o\Bigl(\frac{1}{\log\frac{1}{s}}\Bigr)\biggr). \tag{6.26}
\]
To obtain a finer approximation, we decompose $I(s)$ into
\[
I(s) = \int_s^{s e^{\lambda x}} f^*(1-r)\,dr + \int_{s e^{\lambda x}}^{w(s)} f^*(1-r)\,dr =: I_1(s) + I_2(s).
\]
We then have
\[
I_1(s) = -\frac{\lambda x}{\log\frac{1}{s}} + \frac{\lambda x\log\log\frac{1}{s}}{(\log\frac{1}{s})^2} + \widetilde{O}\Bigl(\frac{1}{(\log\frac{1}{s})^2}\Bigr),
\]
and, because of (6.26),
\[
I_2(s) \sim -\int_{s e^{\lambda x}}^{s e^{\lambda x}(1 + \frac{\lambda x}{\log s})} \frac{dr}{r\log\frac{1}{r}} \sim \frac{\lambda x}{(\log\frac{1}{s})^2}.
\]
Plugging this back into (6.21) finishes the proof.

Proof of Theorem 1.2. Let $x > 0$. We want to apply the methods from singularity analysis reviewed in Section 5.3 to the functions $a$ and $F_x$, if $\delta = 1$, or to the functions $h$ and $h_x$ from Lemma 6.2, if $\delta \ge 2$. Let $\varphi \in (0, \pi/2)$. By Theorem 6.4 and Corollary 6.6, there exists $r_0 > 0$, such that $a$ and $F_x$ can be analytically extended to $G(\varphi, r_0)$, which implies that for some $\varphi_1 \in (\varphi, \pi/2)$ and $r_1 \in (0, r_0)$, $h$ and $h_x$ can be extended to $G(\varphi_1, r_1)$ as well. Moreover, by Lemma 6.2, each of these functions is analytic in a neighbourhood of every point of $C = \{s \in \partial\mathbb{D} : |1 - s| \ge r_1/2\}$, which is a compact set. Hence, there exists a finite number of neighbourhoods which cover $C$. It is then easy to show that there exists $r > 0$, such that the functions are analytic in $\Delta(\varphi_1, r)$.

If $\delta = 1$, we can then immediately apply Facts 5.6 and 5.7, together with the asymptotics on $a$ and $F_x$ established in Theorem 6.4 and Corollary 6.6, to prove Theorem 1.2.

If $\delta \ge 2$, let $q(s)$ be the inverse of $s \mapsto s^\delta$ in a neighbourhood of $1$; then $h(s) = a(q(s))/q(s)$ near $1$, by Lemma 6.2. But since $q'(1) = 1/\delta$, we have $q(1-s) = 1 - s/\delta + [s]_2$, so that $h(1-s)$ admits an expansion of the same form as $a(1-s)$, for some constants $c_n$, $c_n'$, and so equations (6.2), (6.3) and (6.4) transfer to $h$ with the coefficient of the main singular term divided by $\delta^d$. We can therefore use Facts 5.6 and 5.7 for the function $h$ to obtain the asymptotic for $(p_{\delta n + 1})_{n \in \mathbb{N}}$ in Theorem 1.2. In the same way, equations (6.18), (6.19) and (6.20) yield asymptotics for $h_x$, such that we can use again Facts 5.6 and 5.7 to prove the second part of Theorem 1.2.
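The division by $\delta^d$ can be checked in one line (a sketch, written for the case $u(s) = K s^d$ of Theorem 6.4; for $d \in \mathbb{N}$ one argues identically with $K s^d \log s$): substituting $1 - q(1-s) = s/\delta + [s]_2$ into the expansion of $a$,
$$a(q(1-s)) = -\lambda_c\left(\frac{s}{\delta} + [s]_2\right) + [s]_2 + K\left(\frac{s}{\delta} + [s]_2\right)^{d} + o(s^d) = -\frac{\lambda_c}{\delta} s + [s]_2 + \frac{K}{\delta^d} s^d + o(s^d),$$
and dividing by $q(1-s) = 1 - s/\delta + [s]_2$ alters only the analytic part, not the coefficient $K/\delta^d$ of the main singular term.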

A Appendix

A.1 A renewal argument for branching diffusions

Let $W = (W_t)_{t \ge 0}$ be a diffusion on an interval with endpoints $a' \le 0 < a$, such that $\lim_{x \downarrow 0} P_x[T_0 < t] = 1$ for every $t > 0$, where $T_0 = \inf\{t \ge 0 : W_t = 0\}$ and $W_0 = x$, $P_x$-almost surely. For $x \in (0, a)$, and only in the scope of this section, we define $P_x$ to be the law of the branching diffusion starting with a single particle at position $x$, where the particles move according to the diffusion $W$ and branch with rate $\beta$ according to the reproduction law with generating function $f(s)$. Moreover, particles hitting the point $0$ are absorbed at that point. Denote by $Z$ the number of particles absorbed during the lifetime of the process and define $u_s(x) = E_x[s^Z]$ for $s \in [0, 1)$ and $x \in (0, a)$.

Lemma A.1. Let $s \in [0, 1)$ and $G$ be the generator of the diffusion $W$. Then $G u_s = \beta(u_s - f \circ u_s)$ on $(0, a)$, with $u_s(0+) = s$.

Proof. The proof proceeds by a renewal argument similar to the one in [22]. As for the BBM, for an individual $u$, we denote by $\zeta_u$ its time of death, $X_u(t)$ its position at time $t$ and $L_u$ the number of $u$'s children. Define the event $A = \{\exists t \in [0, \zeta_\emptyset) : X_\emptyset(t) = 0\}$. For $s \in [0, 1)$ we have, by the strong branching property,
$$u_s(x) = E_x[s^Z] = s P_x(A) + E_x\Big[\big(E_{X_\emptyset(\zeta_\emptyset -)}[s^Z]\big)^{L_\emptyset},\, A^c\Big] = s P_x(T_0 < \xi) + E_x\big[f(u_s(W_\xi)),\, \xi \le T_0\big],$$

where $W = (W_t)_{t \ge 0}$ is a diffusion with generator $G$ starting at $x$ under $P_x$, $T_0 = \inf\{t \ge 0 : W_t = 0\}$ and the random variable $\xi$ is exponentially distributed with rate $\beta$ and independent of $W$. Setting $v(x) = P_x(T_0 < \xi)$, we get by integration by parts
$$v(x) = \int_0^\infty \beta e^{-\beta t} P_x(T_0 < t)\,dt = \int_0^\infty e^{-\beta t} P_x(T_0 \in dt) = E_x\big[e^{-\beta T_0}\big],$$
and therefore $Gv = \beta v$ on $(0, a)$ ([8], Paragraph II.1.10, p. 18).
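Returning to the last equality in the renewal identity above (a small step made explicit here): on $A^c$, the first branching time is an independent exponential time $\xi$ of rate $\beta$, $X_\emptyset(\zeta_\emptyset -) = W_\xi$ and $\{\xi \le T_0\} = A^c$; averaging over $L_\emptyset$, which is independent of the motion, turns the inner power into the offspring generating function:
$$E_x\Big[\big(u_s(X_\emptyset(\zeta_\emptyset -))\big)^{L_\emptyset},\, A^c\Big] = E_x\big[f\big(u_s(W_\xi)\big),\, \xi \le T_0\big].$$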

Denote the $\beta$-resolvent of the diffusion by $R_\beta$. By the strong Markov property,
$$E_x\big[f(u_s(W_\xi)),\, \xi \le T_0\big] = E_x\Big[\int_0^\infty \beta e^{-\beta t} f(u_s(W_t))\,dt\Big] - E_x\Big[\int_{T_0}^\infty \beta e^{-\beta t} f(u_s(W_t))\,dt\Big] = \beta R_\beta(f \circ u_s)(x) - \beta E_x\big[e^{-\beta T_0}\big] R_\beta(f \circ u_s)(0),$$
hence $u_s = C_{s,\beta} v + \beta R_\beta(f \circ u_s)$, with $C_{s,\beta} = s - \beta R_\beta(f \circ u_s)(0)$. It follows that
$$G u_s = \beta C_{s,\beta} v + \beta^2 R_\beta(f \circ u_s) - \beta(f \circ u_s) = \beta(u_s - f \circ u_s) \quad\text{on } (0, a).$$
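The computation of $G u_s$ uses the resolvent equation $G R_\beta g = \beta R_\beta g - g$ (recalled here for convenience); applied to $g = f \circ u_s$ and combined with $G v = \beta v$, it gives
$$G\big(C_{s,\beta} v + \beta R_\beta(f \circ u_s)\big) = \beta C_{s,\beta} v + \beta\big(\beta R_\beta(f \circ u_s) - f \circ u_s\big) = \beta u_s - \beta(f \circ u_s).$$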

By the above hypothesis on $W$, $P_x(T_0 < \xi) = E_x[e^{-\beta T_0}] \to 1$ as $x \downarrow 0$; consequently, $R_\beta(f \circ u_s)(x) \to R_\beta(f \circ u_s)(0)$ as well, whence $u_s(0+) = C_{s,\beta} + \beta R_\beta(f \circ u_s)(0) = s$.

A.2 Addendum to the proof of Theorem 1.1

With the notation used in the proof of Theorem 1.1, recall that for some constant $K > 0$ we have
$$(\phi' + c_0 \phi)(x) \sim \phi(x)/x \sim K e^{-c_0 x}, \quad\text{as } x \to \infty.$$
In what follows, formulae containing the symbols $\sim$ and $o(\cdot)$ are meant to hold as $s \downarrow 0$. The above equation yields
$$a(1-s) = \phi'(\phi^{-1}(s)) = -c_0 s + (\phi' + c_0 \phi)(\phi^{-1}(s)) = -c_0 s + \frac{(c_0 + o(1))\, s}{\log\frac{1}{s}}. \tag{A.1}$$

Now, by (3.5), we have
$$a'(1-s)\,a(1-s) = 2 c_0 a(1-s) + c_0^2 s - g(s),$$
where we recall that $g(s)$ was defined as $g(s) = 2(f(1-s) - 1 + f'(1)s)$. From the above equation, one gets
$$a''(1-s) = -(a(1-s))^{-3}\Big(\big(c_0 a(1-s) + c_0^2 s - g(s)\big)^2 - g'(s)\,a(1-s)^2\Big),$$
and an application of Lemma 4.1 and (A.1) yields (4.7).
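The formula for $a''$ is obtained by differentiating the first-order relation (a routine verification, spelled out here): differentiation with respect to $s$ gives
$$-a''(1-s)\,a(1-s) - a'(1-s)^2 = -2 c_0 a'(1-s) + c_0^2 - g'(s),$$
so that
$$a''(1-s) = -\frac{\big(a'(1-s) - c_0\big)^2 - g'(s)}{a(1-s)}, \qquad a'(1-s) - c_0 = \frac{c_0 a(1-s) + c_0^2 s - g(s)}{a(1-s)},$$
the second identity coming from the first-order relation itself; combining the two yields the displayed expression.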

Kolmogorov's forward and backward equations (3.2) and (3.3) give
$$F_x'(s) = \frac{a(F_x(s))}{a(s)},$$
and taking the derivative on both sides of this equation gives
$$F_x''(s) = \frac{a(F_x(s))}{a(s)^2}\big(a'(F_x(s)) - a'(s)\big). \tag{A.2}$$
By (4.7) and $F_x'(1) = E[Z_x] = e^{c_0 x}$, we get
$$a'(F_x(1-s)) - a'(1-s) = -\int_s^{1 - F_x(1-s)} a''(1-r)\,dr \sim -\frac{c_0^2 x}{(\log s)^2}.$$
This equation, together with (A.2), now yields
$$F_x''(1-s) \sim \frac{-c_0 e^{c_0 x} s}{c_0^2 s^2}\left(-\frac{c_0^2 x}{(\log s)^2}\right),$$
which is (4.8).
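For completeness, (A.2) is just the quotient rule combined with the first equation (a one-line check): since $F_x'(s) = a(F_x(s))/a(s)$,
$$F_x''(s) = \frac{a'(F_x(s))\, F_x'(s)\, a(s) - a(F_x(s))\, a'(s)}{a(s)^2} = \frac{a(F_x(s))}{a(s)^2}\big(a'(F_x(s)) - a'(s)\big),$$
where the second equality substitutes $F_x'(s)\, a(s) = a(F_x(s))$.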

A.3 Reduction to Briot–Bouquet equations

In this section, we show how one can reduce differential equations such as those obtained in the proof of Theorem 1.2 to the canonical form (5.1). It is mostly based on pp. 64 and 65 of [5].

Lemma A.2. Let $\lambda \in (0, 1]$ and $p \in \mathbb{C}$. Then the equation
$$w' = \frac{\lambda w + [w, z]_2}{z + p w + [w, z]_2} \tag{A.3}$$
possesses an analytic solution $w(z) = [z]_2$ in a neighbourhood of the origin.

Proof. We choose the ansatz $w = z \cdot w_1$, so that $w' = w_1 + z w_1'$. This transforms (A.3) into
$$z w_1' + w_1 = \frac{\lambda z w_1 + z^2 [w_1, z]_0}{z + p z w_1 + z^2 [w_1, z]_0} = \frac{\lambda w_1 + z [w_1, z]_0}{1 + [w_1, z]_1}.$$

Writing the inverse of the denominator as a power series in $w_1$ and $z$, this equals
$$(\lambda w_1 + z [w_1, z]_0)(1 + [w_1, z]_1) = \lambda w_1 + r z + [w_1, z]_2,$$
for some $r \in \mathbb{C}$. This finally yields
$$z w_1' = (\lambda - 1) w_1 + r z + [w_1, z]_2.$$
Since $\lambda - 1$ is not a positive integer, this equation has an analytic solution $w_1(z) = [z]_1$ by Fact 5.3, whence $w(z) = z w_1(z) = [z]_2$ solves (A.3).

Remark A.3. The important point in Lemma A.2 is that the coefficient of $z$ in the numerator of (A.3) is $0$, which is why $w'(0) = 0$.

Proposition A.4. Let $\lambda \ge 1$ and $p \in \mathbb{R}$. Suppose $w(z)$ is a strictly monotone real-valued function on $(0, \varepsilon)$, $\varepsilon > 0$, with $w(z)^2 = o(z)$ as $z \to 0$ and satisfying
$$w' = \frac{\lambda w + p z + [w, z]_2}{z + [w, z]_2} \quad\text{on } (0, \varepsilon). \tag{A.4}$$
Then there exist $h(z) = [z]_2$ and $\varepsilon_1 > 0$, such that $\tilde{z} = z - h(w)$ has an inverse $z = z(\tilde{z})$ on $(0, \varepsilon_1)$ and such that
$$\tilde{z}\,\frac{dw}{d\tilde{z}} = \lambda w + p \tilde{z} + [w, \tilde{z}]_2 \quad\text{on } (0, \varepsilon_1). \tag{A.5}$$

Proof. By hypothesis, $w(z)$ is monotone on $(0, \varepsilon)$ and therefore possesses an inverse $z = z(w)$ on $(0, \delta)$, $\delta > 0$, which satisfies
$$\frac{dz}{dw} = \frac{\lambda^{-1} z + [w, z]_2}{w + p \lambda^{-1} z + [w, z]_2}. \tag{A.6}$$
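Equation (A.6) is (A.4) inverted (a step made explicit here): $dz/dw = 1/w'(z)$, and dividing numerator and denominator by $\lambda$ leaves the bracket classes unchanged,
$$\frac{dz}{dw} = \frac{z + [w, z]_2}{\lambda w + p z + [w, z]_2} = \frac{\lambda^{-1} z + [w, z]_2}{w + p \lambda^{-1} z + [w, z]_2}.$$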

By Lemma A.2, there exists then an analytic solution $z = g(w) = [w]_2$ to (A.6), since $\lambda^{-1} \in (0, 1]$ by hypothesis. Setting $\tilde{z} = z - g(w)$ transforms (A.6) into a differential equation which has $\tilde{z} = 0$ as a solution, hence it is of the form
$$\frac{d\tilde{z}}{dw} = \frac{\lambda^{-1} \tilde{z} + \tilde{z}\,[w, \tilde{z}]_1}{w + p \lambda^{-1} \tilde{z} + [w, \tilde{z}]_2}.$$
We have $d\tilde{z}/dz = 1 - g'(w(z))\, w'(z) = 1 + O(w(z) w'(z))$. By (A.4),
$$w(z)\, w'(z) = O\big(w(z)^2/z + w(z)\big) = o(1),$$
by hypothesis. Hence, there exists $\varepsilon_1 > 0$, such that $\tilde{z}(z)$ is strictly increasing on $(0, \varepsilon_1)$ and therefore has an inverse. Thus, $w(\tilde{z}) = w(z(\tilde{z}))$ satisfies
$$\frac{dw}{d\tilde{z}} = \frac{w + p \lambda^{-1} \tilde{z} + [w, \tilde{z}]_2}{\lambda^{-1} \tilde{z}\,(1 + [w, \tilde{z}]_1)} \quad\text{on } (0, \varepsilon_2),$$
and multiplying both sides by $\tilde{z}$ and expanding the inverse of $1 + [w, \tilde{z}]_1$ as in the proof of Lemma A.2 yields (A.5).
